I think we are - in the last few months - very close to, if not already at, the point where "coding" is solved. That doesn't mean that software design or software engineering is solved, but it does mean that a SOTA model like GPT 5.4 or Opus 4.6 has a good chance of coding up a working version of whatever you specify, within reason.
What's still missing is the general reasoning ability to plan what to build or how to attack novel problems - how to assess the consequences of deciding to build something a given way. I doubt that auto-regressively trained LLMs are the way to get there, but there is a huge swathe of apps that are so boilerplate in nature that this isn't the limitation.
I think that LeCun is on the right track to AGI with JEPA - hardly a unique insight, but it's significant to now have a well-funded lab pursuing this approach. Whether they are successful, or timely, will depend on whether this startup executes as a blue-skies research lab or in more of an urgent engineering mode. I think at this point most of the things needed for AGI are engineering challenges rather than what I'd consider research problems.
Sure, Claude and other SOTA LLMs do generate about 90% of my code, but I feel like we are no closer to solving the last 10% than we were a year ago in the days of Claude 3.7. It can pretty reliably get 90% of the way there, and then I can either keep prompting it to get the rest done or just do it manually, which is quite often faster.
It's interesting that people don't seem to think the likely outcome might be... capital and labour. Not capital alone.
You see this in construction - the capital is used for certain things and is operated by labour.
We're certainly in the "capital/robot + labor" phase of AI at the moment, which Dario Amodei refers to as the "centaur" (half horse, half human) phase, and which he expects to be very short-lived.
Eventually (maybe taking a lot longer than many people expect and/or are hoping for) we'll achieve full human-equivalent AI, at which point you won't NEED a centaur approach - the mechanical horse will be capable of doing ALL non-physical work by itself. But that doesn't mean this is how things will actually play out. If we do end up heading for some dystopian "Soylent Green" type future where most humans are unemployed, surviving poorly on government handouts, then I expect there would eventually be riots and uprisings pushing back against it. It also just doesn't work economically - you can't create profits without customers, and customers need money to buy what you're selling.
Part of why we may (and hopefully will) continue to see humans, from the CEO on down, still working when they could be replaced by AI, is that even "AGI", which we've yet to achieve, doesn't mean human-like - it really just concerns intelligence. Creating an actual remote-worker replacement requires more than automating the intelligent decision-making part of a human (the "AGI" part) - it also requires the human/social/emotional part, which will take longer, and there may not even be any desire to push for that. I think people may discount how much of being a successful member of a team is based on human soft skills, our ability to understand and interact with each other, not just raw intellectual capacity - and certainly at this point corporate success is still very much "who you know, not what you know".
You know Amodei is a salesman, right?