They have a chart that shows it. The education level of the input determines the education level of the output.
These things are supposed to have intelligence on tap. Picture it in a very simple way: say "intelligence" is like a fluid, a finite thing. Intelligence is very valuable; it's the substrate for real-world problem solving that makes these things ostensibly worth trillions of dollars. Intelligence comes from interaction with the world: someone's education and experience. You spend effort and energy feeding someone, clothing them, sending them to college, and what you get out is intelligence that can create value for society.
When you are having a conversation with the AI, is the intelligence flowing out of the AI? Or is it flowing out of the human operator?
The answer to this question is extremely important. If the AI can be intelligent "on its own," without a human operator, then it will be very valuable -- feed electricity into a datacenter and out comes business value. But if a model is only as intelligent as the person using it, the utility is harshly capped. At best it saves a bit of time, but it will never do anything novel, never create value independently, never scale beyond a 1:1 setup of a human picking outputs.
If you must encode intelligence into the prompt to get intelligence out of the model, well, this doesn't quite look like AGI, does it?
How much of this is actually due to the recipient of the information lacking the intelligence to use it? In communication-theory terms (Berlo's SMCR model: Source, Message, Channel, Receiver), an intelligent source and message won't be of much use if the receiver is unable to understand and apply them.
I see it with coworkers all the time. They'll ask ChatGPT to do an analysis and it'll spit out the results of a t-test. They don't know how to interpret them at all, so the output is ultimately meaningless to them; they're just using "stat sig" as a way to make a non-technical VP happy. In situations like this, I don't think a highly intelligent source, model or human, can make the recipient more intelligent than they actually are.
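For concreteness, here's roughly the kind of output in question: a two-sample t-test in Python (the groups and numbers are invented purely for illustration). Running the test is one line; reading the result is the part that requires the recipient's own statistical literacy.

```python
# Hypothetical A/B numbers, invented for illustration only.
from scipy import stats

control = [12.1, 11.8, 12.5, 12.0, 11.9, 12.3]
variant = [12.6, 12.9, 12.4, 13.1, 12.8, 12.7]

# Two-sample t-test: is the difference in group means plausibly just noise?
t_stat, p_value = stats.ttest_ind(control, variant)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# "Stat sig" usually just means p < 0.05. The p-value alone says nothing
# about effect size, sample size, or whether the test's assumptions
# (independence, roughly normal groups, similar variances) even hold --
# exactly the interpretation step that gets skipped.
```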
Of course, what I'm getting at is that you can't get something from nothing. There is no free lunch.
You spend energy distilling the intelligence of the entire internet into a set of weights, but you still had to expend the energy to have humans create the internet first. And on top of this, in order to pick out what you want from the corpus, you have to put some energy in: first, the energy of inference, but second and far more importantly, the energy of prompting. The model is valuable because the dataset is valuable; the model output is valuable because the prompt is valuable.
So wait then, where does this exponential increase in value come from again?
the same place an increase in power comes from when you use a lever.
> the same place an increase in power comes from when you use a lever.
I don't understand the analogy. A lever doesn't give you an increase in power (which would be a free lunch); it gives you an increase in force, in exchange for a decrease in movement. What equivalent to this tradeoff are you pointing to?
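To spell out the lever's actual bookkeeping with toy numbers: force out goes up, distance out goes down, and the work (and hence power) is conserved.

```python
# Toy lever arithmetic; all numbers are illustrative.
effort_force = 10.0     # N applied at the long arm
effort_distance = 1.0   # m the long arm moves

work_in = effort_force * effort_distance  # 10 J

mechanical_advantage = 5.0  # long arm is 5x the length of the short arm
load_force = effort_force * mechanical_advantage        # 50 N out...
load_distance = effort_distance / mechanical_advantage  # ...but only 0.2 m

work_out = load_force * load_distance  # still 10 J
assert abs(work_in - work_out) < 1e-9  # force is multiplied; work is not
```

So if prompting is the force multiplier here, what's the quantity being given up in exchange?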