> It absolutely does.
Go run an agentic workflow using RAG on a local model. Do an md5 checksum of the model before and after usage. The result will be the same.
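If you want to try it yourself, here is a rough sketch of the check I mean; the model path is just a placeholder for whatever local weights you run the workflow against:

```python
import hashlib

def md5_of_file(path, chunk_size=1 << 20):
    """Compute the MD5 digest of a file, reading it in chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Placeholder path to whatever local model file you use.
MODEL_PATH = "models/local-model.Q4_K_M.gguf"

before = md5_of_file(MODEL_PATH)
# ... run your agentic RAG workflow against the model here ...
after = md5_of_file(MODEL_PATH)

assert before == after  # the weights on disk never change
```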
> I agree that somewhere down the line there is a fixed set of tensors but that is not the algorithm.
And for our current tools, that is fine. They are not the algorithm; the LLM is just one part of a large machine that involves countless other things. And that is fine.
For an AGI, that would very much not be fine. An AGI has to be able to learn. Learning doesn't just involve gathering information; it also involves changing how information is used. New things from the information it ingests have to be able to change what is currently a static thing, or it is not an AGI.
When a human reads a book twice, he's not encountering the information in the same way both times, because the first time he reads it, he alters his internal state. That's how we have things such as favorite books or movies.
> I really don't know how to engage on this. It certainly isn't me collecting the information.
And it certainly isn't the "AI" doing it either. I should know, because I implemented my own agentic AI frameworks. Information is provided by external systems.
And again, this is fine for LLMs playing their role in an "agentic" workflow. But an AGI that is limited to that, again, wouldn't be an AGI. It would just be a somewhat better LLM, still limited by the same constraints.
> I'm interested in their observable behaviour,
As am I. And that observable behavior includes hallucinations, a tendency to be repetitive, falling for leading questions, regurgitating statistically correct (because it appears in the training set) but flawed (because it is obviously wrong to do so) information such as dumping API secrets into frontend code, and many more problems.
All of which, in the end, boil down to the fact that a language model doesn't really "understand" the information it is dealing with. It just understands statistical relationships between tokens.
And if an AGI suffers from that same flaw, then it, again, isn't an AGI.
Okay, yeah, like I said - not personally interested in debating the meaning of "AGI" or "understand". More power to you for thinking about it.
> And that observable behavior includes hallucinations, a tendency to be repetitive, falling for leading questions [...]
I agree with you, obviously, these are common behaviours. You can improve the outcomes a lot with tight feedback loops for development workflows (like fast-running tests and linting/formatting for the agent to code against). In a vacuum these things go totally nuts - part of the reason I think the environment deserves just as much thought in any analysis of an AI-based system!
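To make that concrete, here is a rough sketch of the kind of loop I mean - pytest and ruff are just stand-ins for whatever fast tests and linter the project uses, and generate_patch is a hypothetical hook into the agent:

```python
import subprocess

def run_checks():
    """Run the fast feedback tools and return overall status plus their output."""
    ok, results = True, []
    for cmd in (["pytest", "-q"], ["ruff", "check", "."]):
        proc = subprocess.run(cmd, capture_output=True, text=True)
        ok = ok and proc.returncode == 0
        results.append(proc.stdout + proc.stderr)
    return ok, "\n".join(results)

def develop_with_feedback(task, generate_patch, max_iterations=5):
    """Let the agent iterate against tests and lint instead of working in a vacuum."""
    feedback = ""
    for _ in range(max_iterations):
        generate_patch(task, feedback)   # hypothetical: agent edits the codebase
        ok, feedback = run_checks()      # tests + lint are the ground truth
        if ok:
            return True
    return False
```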
> Go run an agentic workflow using RAG on a local model. Do an md5 checksum of the model before and after usage. The result will be the same.
As I said in my last comment, I agree with you. The md5 checksum of the tensors won't change. If your workflow accomplished anything at all, however, there will be many changes elsewhere in the system and its environment (like your codebase). And those changes will in turn affect the future execution of workflows. Nothing controversial here.
> In a vacuum these things go totally nuts
And that is, in a nutshell, my point. An AGI has to be autonomous. It cannot need handholding to keep from "going nuts", just as a human (under normal operating conditions) has to remain coherent even when left to their own devices.
> the environment deserves just as much thought in any analysis of an AI-based system.
Couldn't agree more, and since I know how much work these environments are to build, the people who build them well have at least as much of my respect as the ones who devise the models.
But again, and I'm sorry I am pulling the "definition and meaning" card again: we cannot devise a system that requires a tight corset of an execution environment keeping tabs on it all the time lest it go bananas, and still call it an AGI. Humans don't work that way, and no matter how we define "AGI", in the end I think we can agree that "something like how we do thinking" is pretty close to any valid definition, no?
If I need to lock something down ten ways to Sunday to prevent it from going off the rails, I cannot really call it an AGI.
Haha, this is the weird thing about definition debates, you often don't disagree about anything substantial =P thanks for the measured response.
> An AGI has to be autonomous. It cannot "go nuts" without handholding [...]
So I think this is where I get off your bus - regardless of what you call it, I think current agentic systems like claude code are already there. They can construct their own handholds as they go. For instance, I have a section in all my CLAUDE.md files that tells them to always develop within a feedback loop (like a test), and to set one up themselves if necessary. It works remarkably well!
> in the end I think we can agree that "something like how we do thinking" is pretty close to any valid definition, no?
I agree, we aren't even close to human-level ability here. I just think that people get hung up on looking at a bunch of tensors, but to me at least the real complexity emerges when these things are embedded in an environment.
All these arguments considering pure Turing machines miss this, I think. You don't study ecology by taking organisms out individually and cutting them up. There's value in that, of course, but the interactions are where the really interesting things happen.
Are we sure people don't work that way? Almost all of us operate on instinct almost all of the time. We have guardrails; people who operate outside of them are often committed to institutions. When we choose to do things, it is based on that static hardwiring. Our meta model later comes up with reasons why we did the things. Sometimes, but rarely, it is correct. The human brain is extremely heterogeneous, modular even. Some of our modules function remarkably like a memory store fed back into a context window.

Adding a meta model to an LLM that is updated autonomously by an additional model analyzing outcomes to update this predictive meta model would quite likely result in the agent's models mistaking the meta model for a self. Much like we do.