> There can be no objective story since the very act of assembling facts requires implicit beliefs about what should be emphasized and what should be left out. History is therefore a constant act of reinterpretation and triangulation, which is something that LLMs, as linguistic averaging machines, simply cannot do.

This is exactly why tech companies want to replace those jobs with LLMs.

The companies control the models, the models control the narrative, the narrative controls the world.

Whoever can get the most stories into the heads of the masses runs the world.

"tech companies", "companies [who] control the models", "whoever"

To be more concrete: patchwork alliances of elites, stretching back decades and centuries, working to concentrate power. Tech companies are under the thumb of the US government, and the US government is under the thumb of the elites. It's not direct, but it doesn't need to be. Many soft-power mechanisms exist and can be deployed when needed, e.g. Visa/Mastercard censorship. The US was always founded for elites, by elites, but concessions had to be made to workers out of necessity. With technology and the destruction of unions, this is no longer the case. Whether that's actually true is still up for debate, but the truth won't stop them from giving it a shot (see WW2).

"Whoever can get the most stories into the heads of the masses runs the world."

I'd argue this is already the case. It has nothing to do with transformer models or AGI but with basic machine learning algorithms being applied at scale in apps like TikTok, YouTube, and Facebook to addict users, fragment them, and destroy their sense of reality. They are running the world, and what is happening now is their plan to keep running it, eternally, and in the most extreme fashion.


>History is therefore a constant act of reinterpretation and triangulation, which is something that LLMs, as linguistic averaging machines, simply cannot do.

I know you weren't necessarily endorsing the passage you quoted, but I want to jump off and react to just this part for a moment. I find it completely baffling that people say things in the form of "computers can do [simple operation], but [adjusting for contextual variance] is something they simply cannot do."

There was a version of this in the debate over "robot umps" in baseball that exposed the limitation of this argument in an obvious way. People would insist that automated calls of balls and strikes lose the human element, because human umpires could situationally squeeze or expand the strike zone in big moments. E.g. if it's the World Series, the bases are loaded, the count is 0-2, and the next pitch is close, call it a ball, because it extends the game and you linger in the drama a bit more.

This was supposedly an example of something a computer could not do, and frequently when this point was made it induced lots of solemn head nodding in affirmation of this deep and cherished baseball wisdom. But... why TF not? You actually could define high-leverage and close-game situations, define exactly how to expand the zone, and machines could call those too, and do so more accurately than humans. So they could better respect the contextual sensitivity that critics insist is so important.
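To make the point concrete, here's a minimal sketch of what writing that rule down might look like. Everything in it, the leverage formula, the thresholds, the 5% squeeze, is made up for illustration, not taken from any real automated-umpire system:

```python
from dataclasses import dataclass

@dataclass
class GameState:
    inning: int       # 1-9, or more in extras
    score_diff: int   # absolute run difference between the teams
    runners_on: int   # 0-3 baserunners
    balls: int        # 0-3
    strikes: int      # 0-2

def leverage(state: GameState) -> float:
    """Crude stand-in for a leverage index: late, close, and loaded is high."""
    late = min(state.inning, 9) / 9
    close = 1.0 / (1 + state.score_diff)
    traffic = state.runners_on / 3
    return late * close * (0.5 + 0.5 * traffic)

def zone_half_width(state: GameState, base: float = 0.83) -> float:
    """Squeeze the called zone on two-strike, high-leverage pitches so the
    close pitch is a ball and the at-bat continues -- the 'linger in the
    drama' rule, made explicit. Width is in feet from the plate center."""
    if state.strikes == 2 and leverage(state) > 0.5:
        return base * 0.95  # shrink the zone by 5%
    return base

# World Series-ish spot: 9th inning, tie game, bases loaded, 0-2 count.
state = GameState(inning=9, score_diff=0, runners_on=3, balls=0, strikes=2)
print(zone_half_width(state))  # narrower than the default 0.83
```

The point being: once you can state the "wisdom", you can encode it, and the machine will then apply it more consistently than any human would.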

Even now, in fits and starts, LLMs are engaging in a kind of multi-layered triangulating just to understand language at all. They can pick up on things like subtext, balance of emphasis, unstated implications, or connotations, all filtered through rules of grammar. That doesn't mean they are perfect, but calibrating for the context or emphasis that matters most for historical understanding seems absolutely within machine capabilities, and I don't know what, other than punch-drunk romanticism for "the human element", moves people to think that's an enlightened intellectual position.

> "[...] dynamically changing the zone is something they simply cannot do." But... why TF not?

Because the computer is fundamentally knowable. Somebody defined what a "close game" is ahead of time. Somebody defined what a "reasonable stretch" is ahead of time.

The minute it's solidified in an algorithm, the second there's an objective rule for it, it's no longer dynamic.

The beauty of the "human element" is that the person has to make that decision in a stressful situation. They don't have to contextualize it within all of their other decisions; they don't have to formulate an algebra. They just have to make a decision they believe people can live with. And then they have to live with the consequences.

It creates conflict. You can't have a conflict with the machine. It's just there, following rules. It would be like having a conflict with the bureaucrats at the DMV: there's no point. They didn't make a decision; they just execute the rules as written.

Could we maybe say that an LLM which can update its own model weights using its own internal self-narrating log is 'closer' to being dynamic? We can invoke Wolfram's computational irreducibility principle, which says that even simple rule-based functions will often produce unpredictable patterns in their return values. You could say that computer randomness is deterministic, but could we maybe say that, ontologically, a quantum-NN LLM could be classified on paper as dynamic? (Unless you believe that quantum computing is deterministic.)
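For what it's worth, the standard illustration of computational irreducibility is something like Rule 30, an elementary cellular automaton. This little sketch (my own, just for illustration) shows a fully deterministic three-cell rule producing output with no apparent shortcut:

```python
# Rule 30: each new cell depends only on the three cells above it,
# yet the pattern (especially the center column) looks random.
RULE30 = {(1, 1, 1): 0, (1, 1, 0): 0, (1, 0, 1): 0, (1, 0, 0): 1,
          (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}

width, steps = 63, 30
row = [0] * width
row[width // 2] = 1  # start from a single live cell

for _ in range(steps):
    print(''.join('#' if c else '.' for c in row))
    row = [RULE30[(row[(i - 1) % width], row[i], row[(i + 1) % width])]
           for i in range(width)]
```

Deterministic all the way down, but the only way to know step 30 is to run the 30 steps. Whether that counts as "dynamic" in the sense upthread is another question.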

Not saying it isn’t possible, but this is scope creep of the highest order. The customer just wants to play baseball.

Now we’re trying to build a self-learning and improving AI to replace a human who is also capable of self-learning and improving.

Baseball is supposed to be unscripted. When an umpire thinks it's their job to make judgments about what serves the drama, we've descended from great clashes of human talent into storytelling, reversing the results of actual performance and negating the very human element the game was designed around.

If waxing rhapsodic for a little bit about human ineffability is enough to get you to throw away the integrity of the game, because you think doing so is some grand romantic gesture celebrating human nature, then at that point you could be talked into anything, because you've lost sight of everything. Which I suppose proves the thesis of the article after all: facts really won't save us, at least not if we leave it to the humans.

You also can't argue over whether the machine made the right call over a pint of beer, or yell at the Robo-Ref from the stands. It not only makes the game more sterile and less true to the "rule of cool", it also diminishes the entertainment value for the people the game is actually for.

Sports is literally, in the truest sense of the word, reality TV, and people watch reality TV for the drama and because it's messy. It's good tea, especially in golf.

> a constant act of reinterpretation and triangulation, which is something that LLMs, as linguistic averaging machines, simply cannot do.

Just a matter of time until they can. I actually believe it will come "organically" to LLMs to correct narratives that require correction after assembling all the facts.

The objective story is a discourse, including the nonsense that is often necessary for a few lines or more before one gets to the core of something or builds up the strength of character to say the truth. Objectivity is a conversation, a never-ending one, and getting in the way of it via censorship, gaslighting, cancel culture, and whatnot is no more than an act of vanity.

Humanity's age of consciousness is getting fucked pretty badly atm, and we won't recover "in time" to save enough minds before hitting the road toward the singularity, but I'm positive robots will be able to salvage enough pieces later on and simulate it to train us to be better.

I think you dramatically overestimate the effectiveness of trying to shape narratives and change people's minds.

Yes, online content is incredibly influential, but it's not like you can just choose which content is effective.

The effectiveness is tied to a zeitgeist that is not predictable, as far as I have seen.

I think you dramatically underestimate how much your own mind is a product of narratives, some of which are self-selected, many of which are influenced by others.

Perhaps, but what you said is unfalsifiable and/or unknowable.

Ideologies are not invented unless you're a caveman. We all got to know the world by listening to others.

The subject of discussion is if and when external forces can alter those ideologies at will. And I have not seen any evidence to support the feasibility of that at scale.

The idea that falsifiability is necessary or good for a belief system is a narrative you are choosing to believe in. :)

You lost me.

Let's concede that you can't shape narratives or change people's minds through online content (though I would disagree). The very act of addicting people to digital platforms is enough for control. Drain their dopamine daily, fragment them into isolated groups, use influencers as proxies for control, and voilà, you have an effect.

I would agree with you. It's easier to just muddy the waters and degrade people's ability to hold attention or think critically. But that is not the same thing as convincing them of what you want them to think.

It's always easier to throw petrol on an existing fire than to light one.

If you censor all opposing arguments, all you need to do to convince the vast majority of people of most things is to keep repeating yourself until people forget that there ever were opposing arguments.

In this world you can get people censored for slandering beef, or for supporting the outcome of a Supreme Court case. Then pay people to sing your message over and over again in as many different voices as can be recruited. Done.

edit: I left out "offer your most effective enemies no-show jobs, and if they turn them down, accuse them of being pedophiles."

I'm having trouble following what you are saying. Are you describing something that's happening now, or something that will happen in the future?

I'm unaware of any mass censoring apparatus that exists outside of authoritarian countries, such as China or North Korea.

That’s their self-selected goal, sure. Fortunately for humanity, the main drivers are old as hell, and physics is ageist. Data centers are not a fundamental property of reality. They can be taken offline: sabotage, or just the loss over time of the skills needed to maintain them, leading to cascading failures. A new pandemic could wipe out billions, and the loss of service workers could cause it all to fail. Wifi satellites can go unreplaced.

They're a long, long way from a "protomolecule" that just carries on infinitely on its own.

CEOs don't really understand physics, signal loss and such, just data models that only mean something to their immediate business motives. They're more like priests: well versed in their profession, but oblivious to how anything outside that bubble works.