Humanity has been using steel for over a millennium, yet it's only in the past 100 years or so that we have had a good understanding of how carbon interacts with iron at an atomic level to create the strength characteristics that make it useful. By this argument, we should not have used steel until we had a complete first-principles understanding.

What if you substituted "asbestos" for "steel" in your argument?

Yeah but well you see, humans did not go extinct from just asbestos!

Asbestos, lead paint, cigarettes, heroin (prescribed generously for basically whatever the doc felt like), "Radithor" (a patent medicine containing radium-226 and 228, marketed as a "perpetual sunshine" energy tonic and cure for over 150 diseases), bloodletting, mercury treatments for syphilis, tobacco smoke enemas (yep, that was a real thing), milk-based blood transfusions.

Didn't understand those either and used the fuck out of them because "the experts" said we should.

This is why I believe we should only listen to amateur opinions on everything; experts simply lack historical credibility. For example, I've recently purchased a healing crystal (half off) for only $5,000! It cleared up the imbalanced energies my street guru told me about right away.

I would never have been made aware of the consequences of imbalanced energies in the first place if I had asked an expert instead. They probably wouldn't even suggest an immediate solution to the problem like my reliable street guru always does! Something to consider.

Which year did we use steel to replace human workers and automate decision-making?

Around 1928ish

The entire industrial revolution was steel replacing human workers. And that is still the backbone of the world today. We are still living the industrial revolution.

Just like the invention of fire happened ages ago, but is still a crucial part of life today.

No, it was actually engines.

The mechanism behind engines was fully understood; any experiments with engines were reproducible and measurable. You could get an engine and create schematics by reverse-engineering it.

LLMs, useful as they may be, are not that.

And what might an engine be made of? And a power plant? And a locomotive? And a ship?

Famously Andrew Carnegie spent years trying to get the steel to stop talking about goblins.

Steel is almost magic. Stainless steel is beyond magic.

I had a specialization in Chemistry in High School. For some analyses, the first step is to dissolve everything in boiling Nitric Acid. But the Chromium in stainless steel is like a spell of protection, so you must use boiling Hydrochloric Acid instead. I have no idea why. It's just like magic. It may have Nickel, Molybdenum, and other metals that give it more magical properties.

A few years ago there was a nice post about copying a normal steel alloy for knives to get an equivalent made of stainless steel. You need to reduce the Carbon content to make it less brittle. And they had to add Vanadium so the knives keep their sharpness. I have no idea why. It's just like magic.

If you have half an hour, it's worth reading, but beware that it has too many technical details that are close to magical https://knifesteelnerds.com/2021/03/25/cpm-magnacut/ (HN discussion https://news.ycombinator.com/item?id=29696120 | 375 points | Dec 2021 | 108 comments)

Famously Andrew Carnegie dodged the point

That the industrial revolution's use of steel to augment or replace labor was similar in every way to using LLMs to do the same? Seems on point to me.

Assuming your timeline and metallurgical claims to be true, you're conflating engineering and (materials) science.

Humans have been using steel for however long, when and where it was understood to be an appropriate solution to a problem. In some sense, engineering is the development and application of that understanding. You do not need to have a molecular explanation of the interaction between carbon and iron to do effective engineering[-1] with steel.[0] Science seeks to explain how and why things are the way they are, and this can inform engineering, but it is not a prerequisite.

I think that machine learning as a field has more of an understanding of how LLMs work than your parent post makes out. But I agree with the thrust of that comment because it's obvious that the reckless startups that are pushing LLMs as a solution to everything are not doing effective engineering.

[-1] "effective engineering" -- that's getting results, yes, but only with reasonable efficiency and always with safety being a fundamental consideration throughout

[0] No, I'm not saying that every instance of the use of steel has been effective/efficient/safe.

Poor comparison: a physical material versus computer technology

Why

Let me just quickly use absurdism to illustrate why argument by analogy is weak (and unfortunately overused on HN):

“”” Humanity has been using celibacy for over a millennium, however it's only in the past 100 years or so that we have a good understanding of how not having sex affects the psychology of a person, turning them into an ubermensch. Based on this argument, we should never have stopped having sex, until we had a complete first principles understanding. “””

Analogies can produce a lot of words, making a comment appear high effort, but they also shift the argument to whether or not the analogy is any good, and away from the points the original poster was trying to make. And, by Sturgeon's Law, most analogies are utter crap on top of being an already weak way to form an argument.

In my life I’ve come across a few people who are really good at making analogies and it’s wonderful and makes mine look like a child’s scribble next to a Monet.

In fact, I think analogies are some of the most powerful rhetorical devices and, unsurprisingly, one of the most difficult to master.

Look at some of the all time, almost supernaturally skilled, analogists: Jesus, Plato, Buddha, Aesop, Socrates. Their analogies will be eternal.

Now that said, we aren’t always seeing quite that level of skill often here on HN (or anywhere) but when you see a great analogy, it’s like…[scratch that, I’m resisting the urge to force an analogy here].

That's not his point at all. He advocates using LLMs.

The correct analogy is: if we just scale and improve steel enough, we'll get a flying car.

Well, we did build airplanes out of steel, but there are better (lighter) materials available. And the development of car engines did directly enable airplane engines. Not sure if this is the right analogy path, but I kind of suspect something similar with LLMs/transformers. They will be an important part.

An important stepping stone, perhaps. But I don’t think the final AGI thing will necessarily contain LLMs.

I don't know. I know I used to be pretty AI sceptic, until they became good enough to help with non trivial code problems on their own.

I strongly suspect that we will come to a point where it gets impossible to tell whether something is AGI and conscious or not.

History shows continuous evolution, there won't be a "final AGI thing". The definition of AGI is so vague anyways that any conversation around it is hardly useful. 5 years ago, what we have today would have been considered AGI.

Perhaps, Douglas-Adams-style, the LLMs will specify the AGI.

> Well, we did build airplanes out of steel, but there are better (lighter) materials available.

That's exactly my point. In this analogy LLMs are steel, but the flying things are made out of aluminum, lithium, and titanium, not steel. We need a better idea than LLMs, because LLMs are not suddenly going to turn into something they are not.

We literally did that though. Walk outside and look up.

This is a very low-effort argument.

Humans could understand properties of steel long before they knew how Carbon interacted with Iron. Steel always behaved in a predictable, reproducible way. Empirical experiments with steel usage yielded outputs that could be documented and passed along. You could measure steel for its quality, etc.

The same cannot be said of LLMs. This is not to say they are not useful; that was never the claim of the people who point to their nondeterministic behavior and our lack of understanding of their workings when arguing against incorporating them into established processes.

Of course the hype merchants don't really care about any of this. They want to make destructive amounts of money out of it, consequences be damned.

>Steel always behaved in a predictable, reproducible way.

I'm not sure this is true. Even as late as WWII you have a very high-profile example of a process change in steel ship production leading to completely unexpected behavior: https://metallurgyandmaterials.wordpress.com/2015/12/25/libe...

Sure, steel is more predictable than LLMs, but it's a matter of degree, not of kind.

Oh, for crying out loud! Let's stop inventing fake analogies to justify the inherent LLM shortcomings! Those of us who are critical are only using the standards the LLM companies set themselves ("superintelligence", "pocket PhDs", bla bla bla) to hold them accountable. When does the grift stop?

[deleted]

Where did he say not to use LLMs? Oh that's right: he didn't.

Pro-LLM people are the kings of the ad hoc fallacy. Why did you type this? You can consistently test steel and get a good idea of when and where it will break in a system without knowing its molecular structure.

LLMs are literally stochastic by nature and can't be relied on for anything critical, as it's impossible to determine why they fail, regardless of the deterministic tooling you build around them.
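To make the "stochastic by nature" point concrete, here is a minimal toy sketch of temperature-based next-token sampling, the mechanism behind that nondeterminism. The logits and function names are made up for illustration; this is not any real model's API.

```python
import math
import random

def sample_token(logits, temperature, rng):
    """Softmax over logits at the given temperature, then draw one token index."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    weights = [e / total for e in exps]
    return rng.choices(range(len(logits)), weights=weights, k=1)[0]

# Hypothetical scores for three candidate next tokens.
logits = [2.0, 1.5, 0.5]

# Two runs over the same "prompt" with independent randomness can diverge.
rng_a, rng_b = random.Random(1), random.Random(2)
run_a = [sample_token(logits, 0.8, rng_a) for _ in range(5)]
run_b = [sample_token(logits, 0.8, rng_b) for _ in range(5)]

# As temperature -> 0 the sampler collapses to greedy (deterministic) choice.
greedy = [sample_token(logits, 1e-6, random.Random(i)) for i in range(5)]
print(run_a, run_b, greedy)
```

Note that even at temperature 0 or with a fixed seed, production deployments can still vary run-to-run (batching, floating-point nondeterminism on GPUs), so the toy above actually understates the problem.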

> LLMs are literally stochastic by nature and can't be relied on for anything critical

Ahh, yes, unlike humans, who are completely deterministic, and thus can be trusted.

Humans can be governed by rules with consequences and replaced with individuals with an appropriate level of risk taking / rule following for the role.

Rules and consequences seem to apply to humans in a similar way as prompts and harnesses govern LLMs. The greater the level of power a human possesses, the less they are governed by these restraints; this doesn't apply to LLMs, so at least in that aspect they are an improvement. But yea, we can't really punish or inflict pain on them, which seems like a problem.

I think a simpler model is variety.

There are billions of people, you can interview/hire/fire until you get the right match.

There are 2? frontier LLM providers. 5? if you are more generous / ok with more trailing edge.

Everyone thought OpenAI was great, until Claude got better in Q1 and they switched to Anthropic, and then Codex got better and a good chunk moved back to OpenAI. Seems kind of binary currently.

Why does it matter if you can inflict pain on them? Is that normal and acceptable in your line of work?

Being able to fire someone, thus causing potentially significant hardship, is considered quite normal and acceptable in most lines of work.

Yea I didn’t mean actual physical violence but rules need painful consequences in some way to be meaningful?

Which has, famously, been a great consolation for people who suffered the consequences of human failure in the past

That seems like it applies just fine to LLMs as well: You can replace an LLM with a different model, different prompts, etc. for the appropriate level of risk taking. Rule following is even easier, given you can sandbox them.

There's at best a handful of frontier models vs. billions of people and millions of SWEs.

You clearly have never met a human

If you cannot get humans to do roughly what you want as a manager, good luck with LLMs.

Wow, such a nasty view to hold. What's next, Altman's bullshit argument about "all the food" that humans need in order to grow up and develop a brain? Humans are intelligent. Humans can generalise and invent new concepts, ideas, and art. LLMs are none of that.

What is the ad hoc fallacy? From googling I didn’t find any convincing definitions (definitions that demonstrate that it is a logical fallacy).

https://finmasters.com/ad-hoc-fallacy/

> Ad hoc fallacy is a fallacious rhetorical strategy in which a person presents a new explanation – that is unjustified or simply unreasonable – of why their original belief or hypothesis is correct after evidence that contradicts the previous explanation has emerged.

https://cerebralfaith.net/logical-fallacy-series-part-13-ad-...

> An argument is ad hoc if it's only given in an attempt to avoid the proponent's belief from being falsified. A person who is caught in a lie and then has to make up new lies in order to preserve the original lie is acting in an ad hoc manner.

It should be clear why the ad hoc fallacy is a fallacy.