I’ve made frameworks that turn a project entirely over to the AI, e.g., turning a paragraph summary of what I want into a book on that topic.
Obviously I get much less out of that — I’m not denying the tradeoff, just saying that some people are all the way to “write a short request, accept the result” for (certain) thinking tasks.
Sure, but even that falls on the spectrum. The request itself requires some thinking. So if we're not being pedantic, then people will criticize because natural language isn't
I think it’s a difference in kind. That is, if we return to the discussion above[0] about “outsourcing our thinking,” then it deeply depends on what we hope to accomplish. That’s what I was originally trying to convey: people are actually inhabiting the space you used as an extreme because they’re operating in a different mode.
That is, we seem to be conflating different cases: being an expert versus hiring an expert. A manager and an SDE get different utility from the LLM.
I think I expressed it poorly, but we need to consider that outsourcing thinking entirely can be the right answer, in the way that subcontracting or hiring itself can be. We seem to get caught in a “spectrum” or false-dichotomy discussion (“is outsourcing good or bad?”), when the actual utilization of LLMs and their content interacts in a complex way with the diversity of roles and needs that humans themselves have. And the impact on acquired expertise is only one aspect, for which “less work, less learning” is true but too simple.
[0] - https://news.ycombinator.com/item?id=47040091