I'm pretty deep in this stuff and I find memory super useful.

For instance, I can ask "what windshield wipers should I buy" and Claude (and ChatGPT and others) will remember where I live, what winter's like, the make, model, and year of my car, and give me a part number.

Sure, there's more control in re-typing those details every single time. But there is also value in not having to.

I would say these are two distinct use cases. One is the assistant that remembers my preferences. The other is the clean, intelligent black box that knows nothing about previous sessions, where I can manage the context in fine detail. Both are useful, but for very different problems.

I'd imagine 99% of ChatGPT users see the app as the former. And then the rest know how to turn the memory off manually.

Either way, I think memory can be especially sneakily bad when trying to get creative outputs. If I've had multiple separate chats about a theme I'm exploring, I definitely don't want the model to have any sort of summary from those in context when I want a new angle on the whole thing. The opposite: I'd rather have 'random' topics only tangentially related, to add some sort of entropy to the output.

Good point. I almost wish for an anonymous mode with chat history.

Would that just be the ability to chat without making new memories while using existing memories?

In chatgpt at least if you start a temporary chat it does not have access to memories.

Well you're in luck! They have that feature and talk about it in the article

I've found this memory across chats quite useful on a practical level too, but it also has added to the feeling of developing an ongoing personal relationship with the LLM.

Not only does the model (ChatGPT) know about my job, tech interests, etc., and tie chats together using that info.

But also I have noticed the "tone" of the conversation seems to mimic my own style somewhat - in a slightly OTT way. For example, ChatGPT will now often call me "mate" or reply with terms like "Yes mate!".

This is not far off how my own close friends might talk to me; it definitely feels like it's adapted to my own conversational style.

I mostly find it useful as well, until it starts hallucinating memories, or using memories in the wrong context. It may have been my fault for not curating its memories, but I don't expect the average non-power user to be doing that.

Until you ask it why you have trouble seeing when driving at night and it fixates on you needing to buy replacement wiper blades.

Claude, at least in my use over the last couple of weeks, is loads better than other LLMs at taking feedback and not fixating on one method. They must have some anti-ADHD meds for it ;)

You can leave memory enabled and tell it not to use memory in the prompt if it's interfering.

Like valid, but also just ?temporarychat=true that mfer