LLMs are basically glorified slot machines. Some people try very hard to come up with techniques or theories about when the slot machine is hot, but let me tell you, it’s only an illusion: it’s random and arbitrary, and maybe today is your lucky day, maybe not. Same with AI. Learning the “skill” is about as difficult as learning how to google or how to check Stack Overflow, i.e. trivial. All the rest is luck and how many coins you have in your pocket.

There's plenty of evidence that good prompts (prompt engineering, tuning) can result in better outputs.

Improving LLM output through better inputs is neither an illusion nor as easy as learning how to google (entire companies are being built around improving LLM outputs and measuring that improvement).

Sure, but tricks & techniques that work with one model often don't translate to others, or are actively harmful. Especially when you compare today's models with ones from 6 or more months ago.

Keep in mind that the first reasoning model (o1) was released less than 8 months ago and Claude Code was released less than 6 months ago.

Yes, though that just means the probability of success is a function of not only user input but also the model version.

Slot machines, on the other hand, are truly random, and success is luck-based with no priors (the legal ones in the US, anyway).

This is not a good analogy. The parameters of slot machines can be changed to make the casino lose money. Just because something is random, doesn't mean it is useless. If you get 7 good outputs out of 10 from an LLM, you can still use it for your benefit. The frequency of good outputs and how much babysitting it requires determine whether it is worth using or not. Humans make mistakes too, although way less often.
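To put a rough number on that: here's a back-of-envelope sketch (plain Python, using the 7-out-of-10 figure above; modeling attempts as independent coin flips is my own simplification) of how the hit rate translates into retries:

    import random

    # Toy model: each LLM attempt independently succeeds with probability p.
    # With p = 0.7 (7 good outputs out of 10), the number of attempts until
    # the first success is geometric, with mean 1/p.
    p = 0.7
    print(f"expected attempts per good output: {1 / p:.2f}")  # ~1.43

    # A quick simulation agrees:
    trials = 100_000
    total = 0
    for _ in range(trials):
        attempts = 1
        while random.random() >= p:  # retry until a "good" output
            attempts += 1
        total += attempts
    print(f"simulated average: {total / trials:.2f}")

So at that hit rate you pay roughly 1.4 attempts per usable output; whether that overhead (plus the babysitting) is worth it is exactly the question.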

I didn’t say it’s useless.

Learning how to Google is not trivial.

So true! About ten years ago Peter Norvig recommended the short Google online course on how to use Google Search: amazing how much one hour of structured learning permanently improved my search skills.

I have used neural networks since the 1980s, and modern LLM tech simply makes me happy, but there are strong limits to what I will use the current tech for.

Do you have an entry in your CV saying “proficiency in googling”? It’s difficult not because it is complex; it’s difficult because Google wants it to be opaque and as hard as possible to figure out.

If anything, getting good information out of Google has become harder for us expert users, because Google has tried to make it easier for everyone else.

The power-user tricks like "double quote phrase searches" and exclusion with -term are treated more as gentle guidelines now, because regular users aren't expected to figure them out.
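For concreteness, a minimal sketch (Python; the quoting and minus operators are real Google syntax, but the query terms and the URL-building are just my illustration):

    from urllib.parse import quote_plus

    # Double quotes request an exact-phrase match; a leading minus excludes
    # a term. Both are now treated as hints rather than hard filters.
    query = '"prompt engineering" -slots'
    url = "https://www.google.com/search?q=" + quote_plus(query)
    print(url)
    # https://www.google.com/search?q=%22prompt+engineering%22+-slots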

There's always "verbatim" mode, though amusingly that appears to be almost entirely undocumented! I tried using Google to find the official documentation for that feature just now and couldn't do better than their 2011 blog entry introducing it: https://search.googleblog.com/2011/11/search-using-your-term...

Maybe if I was more skilled at Google I'd be able to use it to find documentation on its own features?

We know what random* looks like: a coin toss, the roll of a die. Token generation is neither.

Neither are slot machines. But there is a random element and that is more than enough to keep people hooked.

Pseudo-random number generators remain one of the most amazing things in computing IMO. Knuth volume 2. One of my favourite books.
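To show why they're fun: here's roughly the classic construction Knuth analyzes there, a linear congruential generator (a toy Python sketch; the constants are the ones Knuth picked for MMIX, the generator wrapper is mine):

    # Linear congruential generator: x_{n+1} = (a*x_n + c) mod 2^64.
    # Entirely deterministic, yet the output looks statistically random.
    A = 6364136223846793005   # multiplier (Knuth's MMIX choice)
    C = 1442695040888963407   # increment (Knuth's MMIX choice)
    MASK = (1 << 64) - 1      # keep results in 64 bits, i.e. mod 2^64

    def lcg(seed):
        x = seed
        while True:
            x = (A * x + C) & MASK
            yield x

    gen = lcg(seed=42)
    for _ in range(5):
        print(next(gen))

Same seed, same sequence, every time. That a one-line recurrence can fool most statistical tests is a big part of what makes Volume 2 such a great read.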
