because it has business context and better reasoning, and can ask humans for clarification and take direction.

You don't need to benchmark this, although benchmarking is important. We have clear scaling laws on true statistical performance that is monotonically related to any notion of what performance means.

I do benchmarks for a living and can attest: benchmarks are bad, but it doesn't matter for the point I'm trying to make.

I feel like you're missing the initial context of this conversation (no pun intended):

> Like for example a trusted user makes feedback -> feedback gets curated into a ticket by an AI agent, then turned into a PR by an Agent, then reviewed by an Agent, before being deployed by an Agent.

Once you add "can ask humans for clarification and take direction", then yeah, things can be useful, but that's far away from the non-human-involvement loop described earlier in this thread, which is what people are pushing back against.

Of course involving people makes things better; that's the entire point here: by removing the human, you won't get as good results. Going back to benchmarks, obviously involving humans isn't possible here, so again we're back to being unable to score these processes at all.

I'm confused about the scenario here. There is a human in the loop: it's the feedback part... There is business context: it is either seeded or maintained by the human and expanded by the agent. The agent can make inferences about the world, especially when embodiment + better multimodal interaction is rolled out [embodiment taking longer].

Benchmarks ==> it's absolutely not a given that humans can't be involved in the loop of performance measurement. Why would that be the case?

> because it has business context

It doesn't because it doesn't learn. Every time you run it, it's a new dawn with no knowledge of your business or your business context

> better reasoning

It doesn't have better reasoning beyond very localized decisions.

> and can ask humans for clarification and take direction.

And yet it doesn't, no matter how many .md files you throw at it, at crucial places in the code.

> We have clear scaling laws on true statistical performance that is monotonically related to any notion of what performance means.

This is just a bunch of words strung together, isn't it?

> It doesn't because it doesn't learn. Every time you run it, it's a new dawn with no knowledge of your business or your business context

It does learn in context. And the lack of continuous learning is temporary; it's a quirk of the current stack, so expect this to change rather quickly. Also, it's still not that relevant: consider that agentic systems can be hierarchical and that they have no trouble grokking codebases or doing internal searches effectively, and this will only improve.

> It doesn't have better reasoning beyond very localized decisions.

Do you have any basis for this claim? It contradicts a large amount of direct evidence, measurement, and theory.

> This is just a bunch of words strung together, isn't it?

Maybe to yourself? Chinchilla scaling laws and RL scaling laws are measured very accurately, based on next-token test loss in Chinchilla's case. This scales very predictably. It is related to downstream performance; that relationship is noisy, but it is clearly monotonic.
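
For reference, the parametric form behind that claim looks roughly like this (a sketch from memory of the Chinchilla fit; the constants are approximate fitted values, so treat them as indicative only):

```latex
% Chinchilla-style parametric loss fit: N = parameter count, D = training tokens.
% E is the irreducible-loss term; A, B, alpha, beta are fitted constants.
L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
% Reported fits are roughly E ~ 1.69, A ~ 406.4, B ~ 410.7, alpha ~ 0.34, beta ~ 0.28.
```

The point is only that test loss falls smoothly and predictably as N and D grow; how that maps onto any particular downstream benchmark is the noisy-but-monotonic part.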

> It does learn in context

It quite literally doesn't.

It also doesn't help that every new context is a new dawn with no knowledge of things past.

> Also, it's still not that relevant: consider that agentic systems can be hierarchical and that they have no trouble

A bunch of Memento guys directing a bunch of other Memento guys don't make a robust system, or a system that learns, or a system that maintains and retains things like business context.

> and this will only improve.

We've heard this mantra for quite some time now.

> Do you have any basis for this claim?

Oh, just the fact that in every single coding session, even on a small 20 kloc codebase, I need to spend time cleaning up large amounts of duplicated code, undoing quite a few wrong assumptions, and correcting the agent when it goes off on wild tangents and wild goose chases.

> Maybe to yourself? Chinchilla scaling laws a

yap yap yap. The result is anything but your rosy description of these amazing reasoning learning systems that handle business context.

> It quite literally doesn't.

Awesome, you've backed this up with real literature. Let's just include this for now to easily refute your argument which I don't know where it comes from: https://transformer-circuits.pub/2022/in-context-learning-an...

> It also doesn't help that every new context is a new dawn with no knowledge of things past.

Absolutely true that it doesn't help, but: agents like Claude have access to older sessions, they can grok impressive amounts of data via tool use, and they can be composed into hierarchical systems that effectively have much larger context lengths, at the expense of cost and coordination overhead, which needs improvement. Again, this is a temporary and already partially solved limitation.

> A bunch of Memento guys directing a bunch of other Memento guys don't make a robust system, or a system that learns, or a system that maintains and retains things like business context.

I think you are not understanding: hierarchical agents have long-term memory maintained by higher-level agents in the hierarchy; that's the whole point. It's annoying to reset model context, but you still have a knowledge base of the business context persisted, and the agent can grok it...
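
If it helps, here is a toy sketch of the shape I'm describing; every name in it is hypothetical (nothing vendor-specific), it's just hierarchy plus persistent memory in a few lines:

```python
# Toy sketch of hierarchy + persistent memory. All names here are hypothetical;
# this is the shape of the idea, not any vendor's actual API.
import json
from pathlib import Path

MEMORY_FILE = Path("business_context.json")  # persists across sessions

class Coordinator:
    """Long-lived level: owns the knowledge base and delegates to short-lived workers."""

    def __init__(self) -> None:
        self.memory = json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else {}

    def delegate(self, task: str, worker: "Worker") -> str:
        # Hand the fresh-context worker only the slice of memory relevant to this task.
        relevant = {k: v for k, v in self.memory.items() if k in task}
        result = worker.run(task, context=relevant)
        # Fold whatever the worker learned back into the persistent store.
        self.memory[task] = result["notes"]
        MEMORY_FILE.write_text(json.dumps(self.memory, indent=2))
        return result["output"]

class Worker:
    """Short-lived level: starts with an empty context every time, by design."""

    def run(self, task: str, context: dict) -> dict:
        # Stand-in for an actual LLM call seeded with the handed-down context.
        return {"output": f"handled {task!r} with {len(context)} context entries",
                "notes": f"whatever was learned while doing {task!r}"}
```

The worker forgetting everything at the end of each run is fine precisely because the coordinator's store is the thing that persists.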

> We've heard this mantra for quite some time now.

Yes, you have, and it has held true and will continue to hold true. Have you read the literature on scaling laws? Do you follow benchmark progression? Do you know how RL works? If you did, I don't think you would hold this opinion.

> yap yap yap. The result is anything but your rosy description of these amazing reasoning learning systems that handle business context.

Well, it's fine to call an entire body of literature "yap", but don't pretend you have some intelligible argument. I don't see you backing up any argument you make here with any evidence, unlike the multitude of sources I have provided to you.

Do you argue things have not improved in the last year with reasoning systems? If so I would really love to hear the evidence for this.

> Let's just include this for now to easily refute your argument which I don't know where it comes from: https://transformer-circuits.pub/2022/in-context-learning-an...

I love it when people include links to papers that refute their words.

So, Anthropic (which is heavily reliant on hype and on making models appear to be more than they are) authors a paper which clearly states: "tokens later in context are easier to predict and there's less loss on those tokens. For no reason at all we decided to give this a new name, in-context learning".

> agents like Claude have access to older sessions, they can grok impressive amounts of data via tool use

That is, they rebuild the world from scratch for every new session, and can't build on what was learned or built in the last one.

Hence the continually repeating failure modes.

10 years ago I worked in a team implementing royalties for a streaming service. I can still give you a bunch of details, including references to multiple national laws, about that. Agents would exhaust their context window just re-"learning" it from scratch, every time. And they would miss a huge amount of important context and business implications.

> Have you read the literature on scaling laws?

You keep referencing this literature as if it were the Holy Bible. Meanwhile, the one you keep referring to, Chinchilla, clearly shows the very hard limits of those laws.

> Do you argue things have not improved in the last year with reasoning systems?

I don't.

Frankly, I find your aggressiveness quite tiring

> Frankly, I find your aggressiveness quite tiring

Having to answer for opinions with no basis in the literature is, I'm sure, very tiring for you. Having your aggression met is, I'm sure, uncomfortable.

> I love it when people include links to papers that refute their words.

> So, Anthropic (which is heavily reliant on hype and on making models appear to be more than they are) authors a paper which clearly states: "tokens later in context are easier to predict and there's less loss on those tokens. For no reason at all we decided to give this a new name, in-context learning".

Well, I don't really love it when people just totally misread a paper because they have an agenda to push and can't seem to accept that their opinions are contradicted by real evidence.

In-context learning is not "later tokens are easier"; it's task adaptation from examples in the prompt. I'm sure you realize this. Models can learn a mapping (e.g. word --> translation) from a few examples in the prompt and apply it to new inputs within the same forward pass. That is function learning at inference time, not just "predicting later tokens better".
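
A minimal sketch of what I mean (the word pairs are arbitrary, and the completion function is a placeholder, not any real API):

```python
# Few-shot in-context adaptation: the "training examples" live only in the prompt,
# and the mapping is applied to a new input within the same forward pass, with no
# weight update anywhere. `complete` is a placeholder for any completion API.
def complete(prompt: str) -> str:
    raise NotImplementedError("plug in your preferred LLM completion API here")

few_shot_prompt = (
    "English -> French\n"
    "sea -> mer\n"
    "dog -> chien\n"
    "house -> maison\n"
    "cheese ->"
)

# If complete(few_shot_prompt) comes back with "fromage", the word -> translation
# function was learned from the prompt at inference time; that is what
# "in-context learning" refers to, not "later tokens are easier to predict".
```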

I'm also sure you're happy to chalk up any contradicting evidence to a grand conspiracy of all AI companies just gaming benchmarks, and that this gaming somehow completely explains progress.

> That is, they rebuild the world from scratch for every new session, and can't build on what was learned or built in the last one.

That they rebuild the world from scratch (wrong, they have priors from pretraining, but I accept your point here) does not mean they can't build on what was learned or built in the last one. They have access to the full transcript, and they have access to the full codebase, the diff history, and whatever knowledge base is available. It's just disingenuous to say this, and it also assumes (1) that there is no mitigation for this, which I have presented twice before and you don't seem to understand, and (2) that this isn't a temporary limitation, when continual learning is one of the most important and best-funded problems right now.

> 10 years ago I worked in a team implementing royalties for a streaming service. I can still give you a bunch of details, including references to multiple national laws, about that. Agents would exhaust their context window just re-"learning" it from scratch, every time. And they would miss a huge amount of important context and business implications.

Also not an accurate understanding of how agents and their context work; you can use multiple sessions to digest and distill information that's useful in other sessions, and in fact Claude does this automatically with subagents. It's a problem we have _already sort of solved today_ and that will continue to improve.
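
Concretely, the distill-and-reload loop looks something like this (the file name and the summarizer are hypothetical stand-ins; the point is only that the next session starts from distilled notes rather than from zero):

```python
# Sketch of cross-session distillation: compress a finished session's transcript
# into durable notes, then seed the next session with those notes.
# `summarize` stands in for an LLM call; the notes file is just plain disk state.
from pathlib import Path

NOTES = Path("project_notes.md")  # hypothetical knowledge-base file

def summarize(transcript: str) -> str:
    # Stand-in for an LLM summarization pass constrained to decisions,
    # invariants, and business rules worth carrying forward.
    raise NotImplementedError

def end_session(transcript: str) -> None:
    # Append the distilled session to the persistent notes.
    with NOTES.open("a", encoding="utf-8") as f:
        f.write(summarize(transcript) + "\n")

def start_session(new_task: str) -> str:
    # The next context window opens with the distilled history, not a blank slate.
    prior = NOTES.read_text(encoding="utf-8") if NOTES.exists() else ""
    return f"Project notes so far:\n{prior}\nNew task:\n{new_task}"
```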

> You keep referencing this literature as if it were the Holy Bible. Meanwhile, the one you keep referring to, Chinchilla, clearly shows the very hard limits of those laws.

You keep dismissing this literature as if you had understood it and as if your opinion somehow held more weight... Can you elaborate on why you think Chinchilla shows the hard limits of the scaling laws? Perhaps you're referring to the term capturing the irreducible loss? Is that what you're saying?

> Do you argue things have not improved in the last year with reasoning systems?

> I don't.

Then are you arguing this progress will stop? I'm just not sure I understand; you seem to contradict yourself.

Almost every task that people are pointing agents at is either not worth doing, can be done better with scripts and software, or requires human oversight (which negates all the advantages).

I assume this is a troll because it's just so far removed from reality there's not much to say. "Almost every task" -- I'm sure you have great data to back this up. "Not worth doing" -- well, sure, if you want to put your head in the sand and ignore even what systems today can do, let alone the improvement trajectory. "Can be done better with scripts and software" -- not sure if you realize this, but agents write scripts and software. "Requires human oversight (which negates all the advantages)" -- it certainly does not; human oversight versus actual humans implementing the code is pretty dramatically more efficient and productive.