This is literally insane.

I love that people hate this because that means I'm using AI in an interesting way. People will see what I mean eventually.

Edit: I see the confusion. OP is talking about needing precise output for agents. I'm talking about riffing on ideas that may go in strange places.

No, he's talking about memory getting passed into the prompts and maintaining control. When you turn on memory, you have no idea what's getting stuffed into the system prompt. This applies to chats and agents. He's talking about chat.
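To make the "stuffed into the system prompt" point concrete, here is a rough sketch of what a memory feature effectively does before your message ever reaches the model (the prompt layout and recalled entries below are made up for illustration, not any vendor's actual API):

```python
# Illustrative sketch only: the memory entries and prompt layout are
# hypothetical, not how any particular provider actually implements memory.

def build_messages(user_prompt, memory_entries=None):
    system_prompt = "You are a helpful assistant."
    if memory_entries:
        # With memory on, text you may not remember writing is prepended to
        # the system prompt, invisibly changing the context of every request.
        recalled = "\n".join(f"- {m}" for m in memory_entries)
        system_prompt += "\n\nThings you know about the user:\n" + recalled
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]

# Memory off: the model sees exactly the prompt you crafted.
print(build_messages("Review this migration for locking issues."))

# Memory on: opaque global state rides along with the same prompt.
print(build_messages(
    "Review this migration for locking issues.",
    memory_entries=["User prefers terse answers", "User is weighing a career change"],
))
```

That's the whole objection: with memory on, the effective prompt is your prompt plus whatever the memory system decided to recall, and you don't get to inspect or pin down that second part.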

Parent is not chatting though. Parent is crafting a precise prompt. I agree, in that case you don't want memory to introduce global state.

I see the distinction between two workflows: one where you need deterministic control and one where you want emergent, exploratory conversation.

Yes, you still craft an initial prompt with exploratory chats. I feel like I'm talking to a bot right now tbh.

The first sentence is mine. The second I adapted from Claude after it helped me understand why someone called my original reply insane. Turns out we're talking about different approaches to using LLMs.

[deleted]

> "the truth sometimes hurts"

But it's not the truth in the first place.

The training data contains all kinds of truths. Say I told Claude I was a Christian at some point and then later on I told it I was thinking of stealing office supplies and quitting to start my own business. If Claude said "thou shalt not steal," wouldn't that be true?

Not necessarily.

You know it's true that stealing is against the Ten Commandments, so when the LLM says something to that effect based on its internal processing of your input in relation to its training data, YOU can determine the truth of that.

> The training data contains all kinds of truths.

The training data also contains noise, fiction, satire, and lies. And the recombination of true data can lead to false outputs - attributing a real statement to the wrong person is false, even if the statement and the speaker are both real.

But you are not talking about simple factual information; you're talking about finding uncomfortable truths through conversation with an LLM.

The LLM is not telling you things that it understands to be truth. It is generating ink blots for you to interpret, following a set of hints and guidance about relationships between tokens, plus some probabilistic noise for good measure.

If you find truth in what the LLM says, that comes from YOU; it's not because the LLM in some way knows what is true and can give it to you straight.

Personifying the LLM as being capable of knowing truths seems like a risky pattern to me. If you ever (intentionally or not) find yourself "trusting" the LLM to the point where you believe something is true based purely on it telling you so, you are polluting your own mental training data with unverified technohaikus. The downstream effects of this don't seem very good to me.

Of course, we internalize lies all the time, but chatbots have such a person-like way of interacting that I think they can do an end run around some of our usual defenses in ways we haven't really figured out yet.

> Personifying the LLM as being capable of knowing truths seems like a risky pattern to me.

I can see why I got downvoted now. People must think I'm pulling a Blake Lemoine, the Google engineer who claimed LLMs are sentient.

> If you find truth in what the LLM says, that comes from YOU; it's not because the LLM in some way knows what is true

I thought that goes without saying. I assign the truthiness of LLM output according to my educational background and experience. What I'm saying is that sometimes it helps to take a good hard look in the mirror. I didn't think that would be controversial when talking about LLMs, with people rushing to remind me that the mirror is not sentient. It feels like an insecurity on the part of many.

> I didn't think that would be controversial when talking about LLMs, with people rushing to remind me that the mirror is not sentient. It feels like an insecurity on the part of many.

For what it's worth, I never thought you perceived the LLM as sentient. Though I see the overlap - one of the reasons I don't consider LLM output to be "truth" is that there is no sense in which the LLM _knows_ what is true or not. So it's just ... stuff, and often sycophantic stuff at that.

The mirror is a better metaphor. If there is any "uncomfortable truth" surfaced in the way I think you have described, it is only the meaning you make from the inanimate stream of words received from the LLM. And insofar as the output is interesting or useful to you, great.