I completely agree. ChatGPT put all kinds of nonsense into its memory: “Cruffle is trying to make bath bombs with baking soda and citric acid” or “Cruffle is deciding between a red colored bedsheet or a green colored bedsheet”. Like, great, both of those are “time bound” and had no relevance after I made the bath bomb or picked a white bedsheet…
All these LLM manufacturers lack ways to edit these memories either. It’s like they want you to treat their shit as “the truth” and you have to “convince” the model to update it rather than directly editing it yourself. I feel the same way about Claude’s implementation of artifacts too… they are read-only, and the only way to change them is via prompting (I forget whether ChatGPT lets you edit its canvas artifacts). In fact, the inability to “hand edit” LLM artifacts is pervasive… Claude Code doesn’t let you directly edit its plans, nor does it let you edit the diffs. Cursor does! You can edit all of the artifacts it generates just fine, putting me in the driver’s seat instead of leaving me a passive observer. Claude Code doesn’t even let you edit previous prompts, which is incredibly annoying because, like you said, editing your prompt is key to getting optimal output.
Anyway, enough rambling. I’ll conclude with a “yes, this!!” Because yeah, I find these memory features pretty worthless. They give you little control over when the system uses them and over what gets stored. And honestly, even if they did expose ways to manage and edit the memory… the amount of micromanagement required would make it not worth it.
From the linked post:
> If you use projects, Claude creates a separate memory for each project. This ensures that your product launch planning stays separate from client work, and confidential discussions remain separate from general operations.
If for some reason you want Claude's help making bath bombs, you can make a separate project in which memory is containerized. Alternatively, the bath bomb and bedsheet questions seem like good candidates for the Incognito Chat feature that the post also describes.
> All these LLM manufacturers lack ways to edit these memories either.
I'm not sure whether you read through the linked post, but it also says:
> Memory is fully optional, with granular user controls that help you manage what Claude remembers. (...) Claude uses a memory summary to capture all its memories in one place for you to view and edit. In your settings, you can see exactly what Claude remembers from your conversations, and update the summary at any time by chatting with Claude. Based on what you tell Claude to focus on or to ignore, Claude will adjust the memories it references.
So there you have it, I guess. You do have a way to edit memories. Personally, I don't see myself bothering, since it's pretty easy and straightforward to switch to a different LLM service (use ChatGPT for creative stuff, Gemini for general information queries, Claude for programming, etc.), but I could see use cases in certain professional contexts.
Appreciate the nuanced response
In fairness, you can always ask Claude Code to write its plan to an MD file, make edits to it, and then ask it to execute the updated plan. I suppose it's an extra step or two vs. directly editing in the terminal, but I prefer it overall. It's nice to have something to reference while the plan is being implemented.
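For anyone who hasn't tried it, the round trip looks roughly like this; the prompts and the PLAN.md filename are just placeholders I made up, not anything Claude Code mandates:

```
# In a Claude Code session:
> Write your implementation plan to PLAN.md and stop before touching any code.

# In another terminal (or your editor), hand-edit the plan:
$ vim PLAN.md

# Back in the Claude Code session:
> Read PLAN.md and implement it exactly as written.
```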
I do the same. It lets you see exactly what the LLM is using for context, and you can easily correct it manually. It's similar to the spec-driven development in Kiro, where you define the plan first, then move on to writing code that meets the plan.
You can delete memories in ChatGPT and ask your bot to add custom ones; memories can be instructions too. Gemini lets you create and edit memories.
Were the bath bombs any good? Did the LLM's advice(?) make a meaningful difference? I didn't know making them was so simple.
They are pretty simple in the abstract, but it takes lots of iterations… kiddo loves making them.