LLMs display a form of abductive reasoning that is not the same as judgement. The only thing in the universe we know of that can exercise judgement is a human. Yet many tasks we presume to require human judgement do not, and on those tasks abductive reasoning will perform as well as a human. Used well, this acts as a filter: it reduces the set of tasks requiring human judgement to those that cannot be automated with similar or better precision and recall. The trick, then, is to use LLMs and other techniques to shrink the problem space down to the kernel of quandary that genuinely requires human judgement, and to isolate the salient information so that the human's cognitive load is as small as possible. Many mundane tasks can be fully automated this way, and many complex tasks can be facilitated, greatly magnifying the effectiveness of the human in the middle's time.
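The filter described above can be sketched as a confidence-gated triage loop: an automated classifier handles the cases it is confident about, and only the ambiguous remainder is escalated to a human, with a condensed summary attached. This is a minimal illustration, not a production design; the `classify` function here is a hypothetical rule-based stand-in for what would really be an LLM call returning a label and a confidence.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    label: str
    confidence: float  # 0.0 .. 1.0

def classify(ticket: str) -> Decision:
    # Hypothetical stand-in for an LLM classifier call.
    text = ticket.lower()
    if "refund" in text:
        return Decision("refund_request", 0.95)
    if "password" in text:
        return Decision("account_access", 0.90)
    return Decision("unknown", 0.40)

def triage(tickets: list[str], threshold: float = 0.85):
    """Split tickets into automatable cases and human-judgement cases."""
    automated, escalated = [], []
    for t in tickets:
        d = classify(t)
        if d.confidence >= threshold:
            # Precision/recall here is good enough to skip the human.
            automated.append((t, d.label))
        else:
            # Escalate only the ambiguous kernel, with the salient
            # model context attached so the human starts from a
            # summary rather than the raw input.
            escalated.append((t, f"model guess: {d.label} ({d.confidence:.2f})"))
    return automated, escalated
```

The threshold is the dial: raising it sends more cases to the human (higher precision on the automated side), lowering it automates more aggressively. Tuning it against measured precision and recall is what justifies trusting the filter at all.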