It's been 6 months away for 5 years now. In that time we've seen relatively mild incremental changes, not any qualitative ones. It's probably not 6 months away.
Yeah. I feel like, as with many projects, the last 20% takes 80% of the time, and imho we are not in the last 20% yet.
Sure, LLMs are getting better and better, and at least for me more and more useful and more and more correct. They're arguably better than humans at many tasks, yet terribly lagging behind in some others.
Coding-wise, one of the things they do “best”, there are still many issues. For me the biggest are still lack of initiative and lack of reliable memory. When I use an LLM to write code, the lack of initiative manifests as it sticking to a suboptimal yet overly complex approach quite often. The lack of memory shows in that I have to keep reminding it of edge cases (else it often breaks functionality), or telling it to stop reinventing the wheel instead of using functions/classes already implemented in the project.
All that can be mitigated by careful prompting, but whatever the claims about information recall accuracy, I still find that even with that information in the prompt it is quite unreliable.
And more generally, the simple fact that when you talk to one, the only way to “store” these memories is externally (i.e. not by updating the weights) is kinda like dealing with someone who can't retain memories and has to keep writing things down to even have a small chance of coping. I get that updating the weights is possible in theory, just not practical, still.
It's 6 months away the same way coding is apparently "solved" now.
I think we - in the last few months - are very close to, if not already at, the point where "coding" is solved. That doesn't mean that software design or software engineering is solved, but it does mean that a SOTA model like GPT 5.4 or Opus 4.6 has a good chance of being able to code up a working version of whatever you specify, within reason.
What's still missing is the general reasoning ability to plan what to build or how to attack novel problems - how to assess the consequences of deciding to build something a given way. I doubt that auto-regressively trained LLMs are the way to get there, but there is a huge swathe of apps so boilerplate in nature that this isn't the limitation.
I think that LeCun is on the right track to AGI with JEPA - hardly a unique insight, but it's significant to now have a well-funded lab pursuing this approach. Whether they are successful, or timely, will depend on whether this startup executes as a blue-skies research lab or in more of an urgent engineering mode. I think at this point most of the things needed for AGI are engineering challenges rather than what I'd consider research problems.
Sure, Claude and other SOTA LLMs do generate about 90% of my code, but I feel we are no closer to solving the last 10% than we were a year ago in the days of Claude 3.7. It can pretty reliably get 90% of the way there, and then I can either keep prompting it to finish the rest or just do it manually, which is quite often faster.
It's interesting that people don't seem to think the likely outcome might be... capital and labour. Not capital alone.
You see this in construction - the capital is used for certain things and is operated by labour.
We're certainly in the "capital/robot + labor" phase of AI at the moment, which Dario Amodei refers to as the "centaur" (half horse, half human) phase, and expects to be very short-lived.
Eventually (maybe taking a lot longer than many people expect and/or are hoping for) we'll achieve full human-equivalent AI, at which point you won't NEED a centaur approach - the mechanical horse will be capable of doing ALL non-physical work by itself. But that doesn't mean this is how it will actually play out. If we do end up heading for some dystopian "Soylent Green" type future where most humans are unemployed, surviving poorly on government handouts, then I expect there would eventually be riots and uprisings pushing back against it. It also just doesn't work economically - you can't create profits without customers, and customers need money to buy what you're selling.
Part of why we may (and hopefully will) continue to see humans, from CEO on down, still working when they could be replaced with AI, is that even "AGI", which we've yet to achieve, doesn't mean human-like - it really just focuses on intelligence. Creating an actual remote-worker replacement requires more than automating the intelligent decision-making part of a human (the "AGI" part) - it also requires the human/social/emotional part, which will take longer, and there may not even be any desire to push for that. I think people discount how much of being a successful member of a team is based on human soft skills - our ability to understand and interact with each other - not just raw intellectual capacity, and certainly at this point in time corporate success is still very much "who you know, not what you know".
You know Amodei is a salesman, right?
Reminds me of how cold fusion reactors have been only 5 years away for decades now.
Cold fusion reactors haven't produced usable intermediate results. LLMs have.
LLMs produce slop far too often to say they are in any way better than cold fusion in terms of usable results. "AI" kind of is the cold fusion of tech. We've always been 5 or 10 years away from "AGI" and likely always will be.
That's just nonsense. The fact that they produce slop does not negate that I and many others get plenty of value out of them in their current form, while we get zero value out of fusion so far - cold or otherwise.
But I swear this time is different! Just give me another 6 months!
And another 6 trillion dollars :^)