> This isn't up for debate at this point.
If by "not up for debate" you mean that it is delusional, and literally evidence of psychosis, to suggest that computer software is doing something it was not programmed to do, then you would be correct. Probabilistic analysis can carry you very, very far toward something that looks like logical inference at the surface level, but it is nonetheless not logical inference. LLMs have been getting steadily better at factoring in larger and longer contexts while still generating plausibly correct answers, becoming more useful all the while, but they are still not capable of logical inference. This is why your genius-mathematician AGI consciousness stumbles on trivial logic puzzles it has not seen before, like the car wash meme.
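To make the distinction concrete, here is a toy sketch (entirely my own illustration; the corpus, facts, and function names are all hypothetical): a frequency-based predictor can emit an answer that pattern-matches its training data and sounds plausible, while having no notion of entailment or exceptions at all.

```python
from collections import Counter

# Hypothetical "training data": premise/continuation pairs the predictor has seen.
corpus = [
    ("all birds fly", "penguins fly"),
    ("all birds fly", "sparrows fly"),
    ("all birds fly", "eagles fly"),
]

def predict(premise):
    """Probabilistic answer: return the continuation seen most often after
    the premise. No reasoning happens here, only frequency lookup."""
    counts = Counter(ans for p, ans in corpus if p == premise)
    return counts.most_common(1)[0][0]

# Actual inference would have to consult the world, including exceptions
# that never co-occurred with the premise in the training data.
facts = {"penguins are birds": True, "penguins fly": False}

answer = predict("all birds fly")
print(answer)                  # a plausible-sounding continuation
print(facts["penguins fly"])   # the fact the lookup has no way to check
```

The lookup produces "penguins fly" because that string co-occurred with the premise, even though it is false; nothing in the frequency table can represent the exception. That gap between surface plausibility and entailment is the point being argued above.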
>delusional and literally evidence of psychosis to suggest that computer software is doing something it is not programmed to do
These are just insults and outright lies, and you know that. We're done here.
AI progress from here on out will be extra sweet.
You don't have the ability to predict progress, either.
Well, I'm not clairvoyant, but this is a very easy prediction to make. And we're not talking about decades in the future; this is simply a matter of letting the near future unfold.