> The progress of knowledge—and the fact that we’re educated about it—lets us get to a certain level of abstraction. And, one suspects, the more capacity there is in a brain, the further it will be able to go.
This is the underlying assumption behind most of the article: brains are computational, so more computation means more thinking (ish).
I think that's probably somewhat true, but it misses the crucial thing our minds do, which is that they conceptually represent and relate. The article talks about this, but it glosses over that part a bit.
In my experience, the people who have the deepest intellectual insights aren't necessarily the ones with the most "processing power"; rather, they often have good intellectual judgement about where their own ideas stand, and a strong understanding of the limits of their judgements.
I think we could all, at least hypothetically, go a lot further with the brain power we have, and similarly, fail just as much, even with more brain power.
>but it misses the crucial thing that our minds do, which is that they conceptually represent and relate
You seem to be drawing a distinction between that and computation. But I would like to think that conceptualization is one of the things computation is doing. The devil's in the details, of course, because it hinges on the specific form and manner of informational representation; it's not simply a matter of there being computation there. Even so, I think it's within the capabilities of engines that do computation, and not something that's missing.
Yes, I think I'd agree. To make an analogy to computers though, some algorithms are much faster than others, and finding the right algorithm is a better route to effectiveness than throwing more CPU at a problem.
That said, there are obviously whole categories of problem that, even with the best choice of programme, we can only solve with a certain amount of CPU.
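To make that concrete, here's a toy sketch (my own made-up example, not from the article): computing Fibonacci numbers with naive recursion versus the same recursion with memoization. Picking the better algorithm buys far more than any realistic amount of extra CPU would.

```python
# Same result, two algorithms: the "right algorithm vs. more CPU" point.
import time
from functools import lru_cache

def fib_naive(n: int) -> int:
    # Exponential time: recomputes the same subproblems over and over.
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n: int) -> int:
    # Linear time: identical recursion, but each subproblem is solved once.
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

for fn in (fib_naive, fib_memo):
    start = time.perf_counter()
    result = fn(32)
    print(f"{fn.__name__}: {result} in {time.perf_counter() - start:.4f}s")
```

The naive version makes millions of redundant calls while the memoized one makes a few dozen, so it finishes orders of magnitude faster on the same hardware.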
Sorry if that example was a bit tenuous!
Not tenuous at all, a great example. The ability of computers to do fancy stuff with information, up to and including abstract conceptualization and association between concepts, hinges on the details of how it's done and how efficient it is. The discussion of those details, and their execution, is where all the meat and potatoes are to be found.
This is one of the reasons why intelligence and wisdom are separate stats in AD&D :)
Intelligence is how big your gun is; wisdom is how well you can aim it. Success in intellectual pursuits is often less about thinking hard about a problem and more about identifying the right problem to solve.
In my highest-ego moments I've regarded my strength as lying in the space you so articulately describe: that sort of balance point, connector, abstractor, quick learner, cross-domain renaissance dabbler.
It also seems to be something that LLMs are remarkably strong at, which of course threatens my value to society.
They're not quite as good at hunches, intuition, instinct, or the meta-version of this kind of problem solving just yet. But despite being, on the whole, a doubter about how far this current AI wave will get us and how much it's oversold, I'm not so confident that it won't get very good at the kind of reasoning I've held so dearly as my UVP.