It's not really an assumption, it's an observation. Run an agentic tool and you'll see it do this kind of thing all the time. It's pretty clear that they use the information to guide themselves (i.e. there's an entropy reduction there in the space of future policies, if you want to use the language of the OP).

> Unlike living things, that information doesn't allow them to change.

It absolutely does. Their behaviour changes constantly as they explore your codebase, run scripts, question you... this is plainly obvious to anyone using these things. I agree that somewhere down the line there is a fixed set of tensors, but that is not the algorithm. If you want to analyse this stuff in good faith you need to include the rest of the system too, including its memory, context and, more generally, any tool it can interact with.

> The "AI" doesn't collect the information.

I really don't know how to engage on this. It certainly isn't me collecting the information. I just tell it what I want it to do at a high level and it goes and does all this stuff on its own.

> There is no world-model, there is no understanding of information.

I'm also not going to engage on this. I couldn't care less what labels people assign to the behaviour of AI agents, and whether it counts as "understanding" or "intelligence" or whatever. I'm interested in their observable behaviour, and how to use them, not so much in the philosophy. In my experience, trying to discuss the latter just leads to flame wars (for now).

> It absolutely does.

Go run an agentic workflow using RAG on a local model. Do an md5 checksum of the model before and after usage. The result will be the same.
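That check is easy to reproduce. A minimal sketch in Python, where `model.bin` is a stand-in for whatever local weights file you actually use (e.g. a GGUF or safetensors file):

```python
import hashlib

def md5_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file through MD5 so multi-GB weight files fit in memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Dummy stand-in for a real local model file; substitute your own path.
path = "model.bin"
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)

before = md5_of_file(path)
# ... run your agentic RAG workflow here; inference only reads the file ...
after = md5_of_file(path)
assert before == after  # the weights on disk are never mutated
```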

> I agree that somewhere down the line there is a fixed set of tensors but that is not the algorithm.

And for our current tools, that is fine. The tensors are not the algorithm; the LLM is just one part of a larger machine that involves countless other things. And that is fine.

For an AGI, that would very much not be fine. An AGI has to be able to learn. Learning doesn't just involve gathering information; it also involves changing how information is used. New information it ingests has to be able to change what is currently a static thing, or it is not an AGI.

When a human reads a book twice, he's not encountering the information in the same way both times, because the first reading altered his internal state. That's how we come to have things such as favorite books or movies.

> I really don't know how to engage on this. It certainly isn't me collecting the information.

And it certainly isn't the "AI" doing it either. I should know, because I implemented my own agentic AI frameworks. Information is provided by external systems.

And again, this is fine for LLMs playing their role in an "agentic" workflow. But an AGI that is limited to that, again, wouldn't be an AGI. It would just be a somewhat better LLM, limited by the same constraints.

> I'm interested in their observable behaviour,

As am I. And that observable behavior includes hallucinations, a tendency to be repetitive, falling for leading questions, and regurgitating information that is statistically correct (because it appears in the training set) but flawed (because it is obviously wrong), such as dumping API secrets into frontend code, among many other problems.

All of which, in the end, boil down to the fact that a language model doesn't really "understand" the information it is dealing with. It just understands statistical relationships between tokens.

And if an AGI suffers from that same flaw, then it, again, isn't an AGI.

Okay, yeah, like I said - not personally interested in debating the meaning of "AGI" or "understand". More power to you for thinking about it.

> And that observable behavior includes hallucinations, a tendency to be repettive, falling for leading questions [...]

I agree with you, obviously, these are common behaviours. You can improve the outcomes a lot with tight feedback loops for development workflows (like fast-running tests and linting/formatting for the agent to code against). In a vacuum these things go totally nuts - part of the reason I think the environment deserves just as much thought in any analysis of an AI-based system!

> Go run an agentic workflow using RAG on a local model. Do an md5 checksum of the model before and after usage. The result will be the same.

As I said in my last comment, I agree with you. The md5 checksum of the tensors won't change. If your workflow accomplished anything at all, however, there will be many changes elsewhere in the system and its environment (like your codebase). And those changes will in turn affect the future execution of workflows. Nothing controversial here.

> In a vacuum these things go totally nuts

And that is, in a nutshell, my point. An AGI has to be autonomous. It cannot be something that "goes nuts" without handholding; a human (under normal operating conditions) is able to remain coherent even when left to their own devices.

> the environment deserves just as much thought in any analysis of an AI-based system.

Couldn't agree more, and since I know how much work these environments are to build, the people who build them well have at least as much of my respect as the ones who devise the models.

But again, and I'm sorry I am pulling the "definition and meaning" card again: we cannot devise a system that requires the tight corset of an execution environment keeping tabs on it all the time lest it go bananas, and still call it an AGI. Humans don't work that way, and no matter how we define "AGI", in the end I think we can agree that "something like how we do thinking" is pretty close to any valid definition, no?

If I need to lock something down six ways to Sunday to prevent it from going off the rails, I cannot really call it an AGI.