I don’t think you’re rational. Part of being unbiased is being able to see bias in yourself.

First of all: nobody knows how LLMs work. Whether the singularity comes or not cannot be reasoned out from what we know about LLMs, because we simply don’t understand LLMs. This is unequivocal. I am not saying I don’t understand LLMs; I’m saying humanity doesn’t understand LLMs, in much the same way we don’t understand the human brain.

So declaring the singularity imminent or not imminent based on that reasoning alone is irrational.

The only thing we have is the black-box input and output of these models. That output is steadily improving every month. It forms a trendline, and the trendline slopes toward the singularity. Whether the line actually gets there is open to question, but you have to be borderline delusional if you think the whole thing can be explained away because you understand LLMs and transformer architecture. You don’t understand LLMs, period. No one does.

> Nobody knows how LLMs work.

I'm sorry, come again?

Nobody knows how LLMs work.

Anybody who claims otherwise is making a false claim.

I think they meant "Nobody knows why LLMs work."

Same thing? The how is not explainable either. This is just pedantic. Nobody understands LLMs.

Because they encode statistical properties of the training corpus. You might not know why they work, but plenty of people do know why they work & understand the mechanics of approximating probability distributions w/ parametrized functions well enough to sell it as a panacea for stupidity & the path to an automated & luxurious communist utopia.
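For concreteness, here is a minimal sketch of what "approximating probability distributions w/ parametrized functions" means, assuming a toy corpus and a plain table of logits standing in for a real network; the names and numbers are made up for illustration, and this is not how any production LLM is built:

```python
# Minimal sketch: fit a parametrized function to the next-token distribution
# of a tiny corpus with gradient descent. The "model" here is just a table of
# logits; a real LLM swaps in a transformer, but the objective is the same.
import numpy as np

corpus = "the cat sat on the mat the cat ate the rat".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
V = len(vocab)

# Training pairs: (current token, next token)
pairs = [(idx[a], idx[b]) for a, b in zip(corpus, corpus[1:])]

# Parametrized model: logits[i, j] = unnormalized log P(next = j | current = i)
logits = np.zeros((V, V))
lr = 0.5

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

for step in range(2000):
    probs = softmax(logits)
    grad = np.zeros_like(logits)
    for i, j in pairs:                 # gradient of the cross-entropy loss
        grad[i] += probs[i]
        grad[i, j] -= 1.0
    logits -= lr * grad / len(pairs)   # one gradient-descent step

probs = softmax(logits)
print("P(next | 'the'):",
      {w: float(p) for w, p in zip(vocab, probs[idx["the"]].round(2))})
```

The learned row for "the" ends up matching the bigram frequencies of the toy corpus (cat 0.5, mat 0.25, rat 0.25); a real LLM replaces the logit table with a transformer and the toy corpus with trillions of tokens, but the training objective is the same kind of distribution matching.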

No, this is false. No one understands. Using big words doesn’t change the fact that you cannot explain, for any given input-output pair, how the LLM arrived at the answer.

Every single academic expert who knows what they are talking about will confirm that we do not understand LLMs. We understand atoms, and we know the human brain is made entirely of atoms. We may know how atoms interact and bond and how a neuron works, but none of this allows us to understand the brain. In the same way, we do not understand LLMs.

Characterizing ML as some statistical approximation or best-fit curve is just using an analogy to cover up something we don’t understand. Heck, the human brain can practically be characterized by the same analogies. We. Do. Not. Understand. LLMs. Stop pretending that you do.

I'm not pretending. Unlike you, I do not have any issues making sense of function approximation w/ gradient descent. I learned this stuff as an undergrad, so I understand exactly what's going on. You might be confused, but that's a personal problem you should work to rectify by learning the basics.

omfg, the hard part of ML is deriving back-propagation from first principles, and that's not even that hard. Basic calculus and an application of the chain rule, that's it. Anyone can understand ML; not everyone can understand something like quantum physics.
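To make that concrete, here is a minimal sketch with a made-up two-parameter network and made-up values: back-propagation written out as nothing more than the chain rule, checked against a finite-difference approximation.

```python
# Toy network y = w2 * tanh(w1 * x), loss L = 0.5 * (y - t)^2.
# Gradients are derived by hand via the chain rule and checked numerically.
import math

def forward(x, w1, w2):
    h = math.tanh(w1 * x)   # hidden activation
    y = w2 * h              # output
    return h, y

def loss(x, w1, w2, t=1.0):
    _, y = forward(x, w1, w2)
    return 0.5 * (y - t) ** 2

def backward(x, w1, w2, t=1.0):
    h, y = forward(x, w1, w2)
    dL_dy = y - t                       # dL/dy
    dL_dw2 = dL_dy * h                  # chain rule: dL/dw2 = dL/dy * dy/dw2
    dL_dh = dL_dy * w2                  #             dL/dh  = dL/dy * dy/dh
    dL_dw1 = dL_dh * (1 - h * h) * x    #             dL/dw1 = dL/dh * dh/dw1
    return dL_dw1, dL_dw2

x, w1, w2, eps = 0.7, 0.3, -0.5, 1e-6
g1, g2 = backward(x, w1, w2)
fd1 = (loss(x, w1 + eps, w2) - loss(x, w1 - eps, w2)) / (2 * eps)
fd2 = (loss(x, w1, w2 + eps) - loss(x, w1, w2 - eps)) / (2 * eps)
print(g1, fd1)   # analytic vs. numerical gradient for w1: they agree
print(g2, fd2)   # same for w2
```

The analytic and numerical gradients agree to several decimal places, which is all the derivation amounts to at this scale.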

Anyone can understand the "learning algorithm", but the sheer complexity of what the "learning algorithm" produces is so high that we cannot at all characterize how an LLM arrived at its answer to even the most basic query.

This isn't just me saying this. ANYONE who knows what they are talking about knows we don't understand LLMs. Geoffrey Hinton: https://www.youtube.com/shorts/zKM-msksXq0. Geoffrey, if you are unaware, is the person who started the whole machine learning craze over a decade ago. The godfather of ML.

Understand?

There's no confusion. Just people who don't know what they are talking about (you).

I don't see how telling me I don't understand anything is going to fix your confusion. If you're confused, then take it up w/ the people who keep telling you they don't know how anything works. I have no such problem, so I recommend you stop projecting your confusion onto strangers in online forums.

nobody can know how something that is non-deterministic works, by its very definition

LLMs are deterministic simply because computers are, at their core, deterministic machines. LLMs run on computers and are therefore deterministic. The random number generator is an illusion, and an LLM that uses one only reproduces that illusion of indeterminism. Find the seed and the right generator, and you can make an LLM consistently produce the same output from identical input.

Despite determinism, we still do not understand LLMs.
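A minimal sketch of the determinism point above, assuming a hypothetical stand-in for an LLM: a hand-written next-token function plus Python's random module in place of a real model and sampler. The only "randomness" is the seeded generator, so the same seed and the same input give the same output.

```python
# Toy stand-in for an LLM sampler: the "model" is a fixed, deterministic
# next-token distribution, and all apparent randomness comes from a seeded
# pseudo-random generator. Same seed + same input => same output.
import random

VOCAB = ["the", "cat", "sat", "on", "mat", "."]

def next_token_probs(context):
    # Deterministic stand-in for a real model's forward pass.
    last = VOCAB.index(context[-1]) if context and context[-1] in VOCAB else 0
    weights = [(last + i) % 5 + 1 for i in range(len(VOCAB))]
    total = sum(weights)
    return [w / total for w in weights]

def generate(prompt, seed, n_tokens=8):
    rng = random.Random(seed)            # the only source of "randomness"
    tokens = list(prompt)
    for _ in range(n_tokens):
        probs = next_token_probs(tokens)
        tokens.append(rng.choices(VOCAB, weights=probs, k=1)[0])
    return " ".join(tokens)

print(generate(["the"], seed=42))
print(generate(["the"], seed=42))   # identical: same seed, same input
print(generate(["the"], seed=7))    # a different seed gives a different sample
```

Both seed=42 calls print identical text; the apparent randomness lives entirely in the seeded generator.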

In what sense is this true? We understand the theory of what is happening, and we can painstakingly walk through the token generation process and understand each step. So in what sense do we not understand LLMs?
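As an illustration of "painstakingly walking through the token generation process", here is a toy greedy-decoding trace, assuming a hypothetical four-token vocabulary and a hand-written logit table in place of a real model; every step prints the full distribution it chose from.

```python
# Toy greedy decoding: a hand-written logit table stands in for a real model.
# Each step's distribution and chosen token are fully visible.
import math

VOCAB = ["I", "like", "cats", "."]
# Hypothetical logits: LOGITS[i][j] = score for token j following token i.
LOGITS = [
    [0.1, 2.0, 0.3, 0.0],   # after "I"
    [0.0, 0.1, 2.5, 0.2],   # after "like"
    [0.2, 0.3, 0.1, 2.0],   # after "cats"
    [1.5, 0.1, 0.2, 0.3],   # after "."
]

def softmax(zs):
    m = max(zs)
    es = [math.exp(z - m) for z in zs]
    s = sum(es)
    return [e / s for e in es]

tokens = ["I"]
for step in range(3):
    probs = softmax(LOGITS[VOCAB.index(tokens[-1])])
    print(f"step {step}: given {tokens!r} ->",
          {w: round(p, 2) for w, p in zip(VOCAB, probs)})
    tokens.append(VOCAB[probs.index(max(probs))])   # greedy pick

print(" ".join(tokens))   # "I like cats ."
```

At real scale the same trace exists, just with a vocabulary of tens of thousands of tokens and billions of weights behind each step's distribution.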