Predicting the future is valuable. If a model can apply the same underlying world model to accurately predicting OHLC series that it uses to produce English, then you can interrogate and expand on that world model in complex and very useful ways. Being able to prompt it means you can describe a scenario, or uncover hidden influences that wouldn't be apparent from a bare accurate prediction. Things like that allow sophistication in the tools - instead of an accurate chart with all sorts of complex indicators, you can get English explication and variations on scenarios.
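A minimal sketch of that interrogation step, assuming you can serialize the OHLC history into text and hand it to whatever chat-style model API you have; `ask_model` is a placeholder, not a real library call.

```python
def build_prompt(ohlc_rows, scenario):
    # ohlc_rows: list of (date, open, high, low, close) tuples
    lines = ["Recent OHLC history for asset XYZ:"]
    lines += [f"{d}: O={o} H={h} L={l} C={c}" for d, o, h, l, c in ohlc_rows]
    lines += [
        f"Scenario: {scenario}",
        "Explain the likely price path over the next five sessions",
        "and any hidden influences a chart alone would not show.",
    ]
    return "\n".join(lines)

prompt = build_prompt(
    [("2024-06-03", 101.2, 103.0, 100.8, 102.5),
     ("2024-06-04", 102.5, 104.1, 102.0, 103.7)],
    "a major supplier has failed, but the market does not know yet",
)
# response = ask_model(prompt)  # placeholder for your LLM API of choice
```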
You can't tell a numbers-only model "ok, with this data, but now you know all the tomatoes in the world have gone rotten and the market doesn't know it yet, what's the best move?" You can use an LLM that way, however, and with RL you can branch and layer strategies that depend on dynamic conditions and private data, toward arbitrary outcomes. Deploy such a model at scale, run tens of thousands of simulations iterating through different scenarios, and you can start to apply confidence metrics and complex multiple-degree-of-separation strategies to exploit arbitrage opportunities.
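A hedged sketch of what "run tens of thousands of simulations and apply confidence metrics" might look like; `simulate_scenario` stands in for a call to the model under one scenario variant and just fakes a return here so the aggregation is runnable.

```python
import random
import statistics

def simulate_scenario(base_conditions, shock):
    # Placeholder for querying the model with one scenario variant;
    # fakes a return so the aggregation logic below actually runs.
    return random.gauss(mu=shock["expected_move"], sigma=0.02)

def scenario_confidence(base_conditions, shock, n_runs=10_000):
    """Run many simulations of one scenario and summarize the outcomes."""
    outcomes = [simulate_scenario(base_conditions, shock) for _ in range(n_runs)]
    return {
        "mean_return": statistics.mean(outcomes),
        "stdev": statistics.stdev(outcomes),
        "p_gain": sum(o > 0 for o in outcomes) / n_runs,
    }

print(scenario_confidence(
    base_conditions={"asset": "XYZ"},
    shock={"expected_move": 0.01},  # e.g. "rotten tomatoes, market unaware"
))
```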
Any one of the big labs could do something like this, including modeling people, demographic samples, distributions of psychological profiles, and cultural and current events, and they'd have a manipulation engine telling them exactly who, when, and where to invest, which candidates to support, and which messages to push and publish.
A fundamental measure of intelligence is how far into the future a system can predict, and across which domains. The broader the domains and the farther into the future, the more intelligence, and things like this push the boundaries.
We should probably get around to a digital bill of rights, but I suspect it's too late already anyway, and we're full steam ahead to Snow Crash territory.
Automated hypothesis testing, in the form of a search for alpha in the market, is certainly being used right now. An LLM can ask new questions about correlations between assets and run statistical tests on those correlations, in ways that previously were only possible by employing a PhD statistician.
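A rough sketch of the statistical half of that loop, with the LLM's role reduced to proposing which pairs (or derived series) to test; the data here is synthetic and the test is a plain Pearson correlation with a Bonferroni correction for multiple comparisons.

```python
from itertools import combinations

import numpy as np
from scipy import stats

# Synthetic daily returns for a handful of assets (rows = days, cols = assets).
rng = np.random.default_rng(0)
assets = ["AAA", "BBB", "CCC", "DDD"]
returns = rng.normal(0, 0.01, size=(250, len(assets)))

# In the real loop an LLM would propose candidate pairs or transformed series
# to test; here we simply test every pair.
candidates = list(combinations(range(len(assets)), 2))
alpha = 0.05 / len(candidates)  # Bonferroni correction for multiple tests

for i, j in candidates:
    r, p = stats.pearsonr(returns[:, i], returns[:, j])
    verdict = "significant" if p < alpha else "noise"
    print(f"{assets[i]}-{assets[j]}: r={r:+.3f}, p={p:.3f} ({verdict})")
```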
The emergent ability of LLMs to accurately predict tokens in previously unseen conditions might be more powerful than more rigorous machine learning extrapolations.
Especially when you throw noisy subjective context at it.
The "prediction" in this case is, I think, some approximation of "ingest today's news and social media buzz as it's happening and predict what the financial news tomorrow morning will be."
Hypothetically, an LLM has absorbed lots of world knowledge, and it can trace deep correlations between various factors.