In the n-dimensional solution space of all potential approaches (known and unknown) to building a true human equivalent AGI, what are the odds that current LLMs are even directionally correct?