Of course it continues. Look at the investment in hardware going on. Even with no algorithmic efficiency improvements, that spending alone will brute-force more capability out of the system, much like a massive, inefficient V8 engine with paltry horsepower-per-liter figures still produces plenty of power through sheer displacement.

I believe it continues, but I don't know if the rate is that favorable. Today's gigawatt-hungry models, which can cost $10-100 or more per task to run... still can't beat Pokémon without a harness. And Pokémon is far from the only such task.

I believe AGI is probably coming, but not on a predictable timeline or via blind scaling.

The harness can be iterated upon (1).

I don't think the sci-fi definition of AGI is happening soon, but something more boring is coming in the meanwhile that is perhaps nearly as destructive to life as we know it as knowledge workers. That is, a human is still in the loop, but increasingly fewer humans of lower and lower skill as the models output more and more complete solutions. And naturally, there are no geographic or governmental barriers protecting employment in this sector, nor physical realities that demand the jobs be done in a particular part of the world. This path is ripe for offshoring to the cheapest internet-connected labor available, long term. Other knowledge-work professions like law and medicine set up legal moats to protect their fields and compensation decades ago, whereas nothing similar protects the domestic software engineer.

By all accounts they are already on this trajectory. You often see comments on here from developers saying that the models of a few years ago needed careful oversight, whereas now they trust them to handle more of a project accurately with less oversight. Of course you will find anecdotes either way, but as the years go on I see more and more devs reporting useful output from these tools.

1. https://news.ycombinator.com/item?id=46988596


In my experience, AI enables smart people to do their best work while automating zero-quality work like SEO spam that no humans should have been doing in the first place. I have yet to see anything that I would remotely call tragic.

> legal moats to protect their field

I wonder how they hold up once there's a big enough benefit to using AI over human work. How are politicians supposed to justify these moats to the masses when an AI doctor costs 10x less and, according to a multitude of studies, is much better at diagnosis?

Or in law? I've read that China is pushing AI judges because people weren't happy with the impartiality of the human ones. In general, I think people overestimate how much these legal moats will be worth in the long run.