Regarding "AI systems will be better than any human at any cognitive task", while I believe that this may become possible in a distant future, with systems having a quite different structure, I do not see any evidence of such a thing becoming possible during the next decade or two.

Nothing that I have seen described here on HN or elsewhere, by the most enthusiast users of AI, who claim that their own productivity has been multiplied, does not demonstrate performance in cognitive tasks even remotely comparable with that of a competent human, much less better performance.

All that I see is that the AI systems outperform humans for various tasks only because they had access in their training data to much more information than most humans are allowed to access, because they do not have enough money to obtain such access, both because the various copyright paywalls and also because of the actual cost of storage and retrieval systems.

Using an AI agent may be faster than if you were given access to the training data and you would use conventional search tools on it, but the speed may be illusory, because when I search something and I have access to the original sources I can validate the search results faster and with much more certainty than when I try to ponder about the correctness of what AI has provided, e.g. whether a program produced by it really does what I have requested and it is bug free (in comparison with having access to its training programs and being able to choose myself what to copy and paste).

I hope that paid access to AI tools gives better results, but the AI replies that popular search engines, like Google and Bing, force upon their users have made Internet searches much worse not better, as their answers always contain something else than I want, and this is in the best case, when the answers are not plainly wrong.

> I hope that paid access to AI tools gives better results

You should get yourself a paid subscription. Honest advice. The difference between agentic workflow vs single-shot questions in free-tier services is night and day. Building context and letting the model have access to your code is the largest differentiator between "wtf, I don't need this" and "wtf".

> All that I see is that the AI systems outperform humans for various tasks only because they had access in their training data to much more information than most humans are allowed to access, because they do not have enough money to obtain such access

Humans cannot even theoretically read and consume the volume of data the models can do so it's not really about the money - it's more about the infinite amount of time humans would need to have and the extremely large cognitive load it would impose on them. How many people can even synthesize so much diverse topics at high and constant pace? None or very little.

Also, models are proven to generalize very well so having access to your codebase during the training phase is not necessary for them to provide you with the correct answers. Give it a try.

Models being able to generalize very well is one of the ways AI labs think they may reach the "AI systems will be better than any human at any cognitive task" goal. I am not convinced that this will be the only sauce needed but I am also not too skeptic about it too, given the speed at which AI capabilities unfolded, especially during the 2025.

I think we already reached the point where it's safe to say that "AI systems are better than many humans at most cognitive tasks". I can see it myself on the project I am currently working on. These are not the top-tier developers. And when I talk to the top-tier ones I have previously worked with, we share the similar sentiment. The only difference might be "AI systems are much faster than many humans at most congitive tasks".

One of the first skills I made for Claude was a research skill.

I give it a question (narrow or really broad), and the model does a bunch of web searches using subagents, to try and get a comprehensive answer using current results

The important part is, when the model answers, I have it cite its sources, using direct links. So, I can directly confirm the accuracy and quality of any info it finds

It's been super helpful. I can give it super broad questions like "Here's the architecture and environment details I'm planning for a new project. Can you see if there's any known issues with this setup". Then, it'll give me direct links + summaries to any relevant pages.

Saves a ton of time manually searching through the haystack, and so far, the latest models are pretty good about not missing important things (and catching plenty of things I missed)

That's not actually research though. The LLM API is only requesting a few of the top search results from a search engine and then adding those web pages to its context window.

That might work for simple tasks, but it's easily susceptible to prompt injection attacks and there's no way to validate the quality if it's statistically novel enough and outside of the core training data.