I actually wrote up quite a few thoughts related to this a few days ago but my take is far more pessimistic: https://www.neilwithdata.com/outsourced-thinking
My fundamental argument: The way the average person is using AI today is as "Thinking as a Service" and this is going to have absolutely devastating long term consequences, training an entire generation not to think for themselves.
I think you hit the nail on the head. Without years of learning by doing, experience in the saddle as you put it, who would be equipped to judge or edit the output of AI? And as knowledge workers with hands-on experience age out of the workforce, who will replace us?
The critical difference between AI and a tool like a calculator, to me, is that a calculator's output is accurate, deterministic and provably true. We don't usually need to worry that a calculator might be giving us the wrong result, or an inferior result. It simply gives us an objective fact. Whereas the output of LLMs can be subjectively considered good or bad - even when it is accurate.
So imagine teaching an architecture student to draw plans for a house, with a calculator that spit out incorrect values 20% of the time, or silently developed an opinion about the height of countertops. You'd not just have a structurally unsound plan, you'd also have a student who'd failed to learn anything useful.
In the current situation, by vibing and YOLOing our way through most problems, we are losing the very ability we still need and can't replace with AI or other tools.
If you don't have building codes, you can totally yolo build a small house, no calculator needed. It may not be a great house, just like vibeware may not be great, but also, you have something.
I'm not saying this is ideal, but maybe there's another perspective to consider as well, which is lowering barriers to entry and increased ownership.
Many people can't/won't/don't do what it takes to build things, be it a house or an app, if they're starting from zero knowledge. But if you provide a simple guide they can follow, they might end up actually building something. They'll learn a little along the way, make it theirs, and end up with ownership of their thing. As an owner, change comes from you, and so you learn a bit more about your thing.
Obviously whatever gets built by a noob isn't likely to be of the same caliber as a professional who spent half their life in school and job training, but that might be ok. DIY is a great teacher and motivator to continue learning.
Contrast that with high barriers to entry, where nothing gets built and nothing gets learned, and the user is left dependent on the powers that be to get what he wants, probably overpriced, and with features he never wanted.
If you're a rocket surgeon and suddenly outsource all your thinking to a new and unpredictable machine, while you get fat and lazy watching tv, that's on you. But for a lot of people who were never going to put in years of preparation just to do a thing, vibing their idea may be a catalyst for positive change.
To continue the analogy, there's also renting, and the range of choices that come with it. If there's no code and you can't build your own house, you're left with bad houses built by someone else. And a house is more likely to be bad when the owner already knows he will not be living in it, since building it right can be expensive and time consuming.
When slop becomes easier, there are a lot more people ready to push it onto others than people who try to produce genuine work, especially when the two are hard to distinguish superficially.
> If calculators returned even 99.9% correct answers, it would be impossible to reliably build even small buildings with them.
I think past successes have led to a category error in the thinking of a lot of people.
For example, the internet, and many constituent parts of the internet, are built on a base of fallible hardware.
But mitigated hardware errors, whether from equipment failures, alpha particles, or other causes, are uncorrelated.
If you had three uncorrelated calculators that each worked 99.99% of the time, and you used them to check each other, you'd be fine.
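Back-of-the-envelope, as a rough sketch (assuming the failures really are independent at that 99.99% rate, which is the whole point):

    # Probability that at least two of three independent devices fail on the
    # same calculation - the case where a 2-of-3 majority vote can be wrong.
    p = 1e-4                                   # per-device error rate (99.99% correct)
    majority_wrong = 3 * p**2 * (1 - p) + p**3
    print(majority_wrong)                      # ~3e-8, about 3 bad results per 100 million

And that's before noting that two independent failures rarely even agree on the same wrong answer, so in practice you'd mostly just see a detectable disagreement.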
But three seemingly uncorrelated LLMs? No fucking way.
There's another category error compounding this issue: People think that because past revolutions in technology eventually led to higher living standards after periods of disruption, this one will too. I think this one is the exception for the reasons enumerated by the parent's blog post.
Agreed.
In point of fact, most technological revolutions have fairly immediately benefited a significant number of people in addition to those in the top 1% -- either by increasing demand for labor, or reducing the price of goods, or both.
The promise of LLMs is that they benefit people in the top 1% (investors and highly paid specialists) by reducing the demand for labor to produce the same stuff that was already being produced. There is an incidental initial increase in (or perhaps just reallocation of) labor to build out infrastructure, but that is possibly quite short-lived, and simultaneously drives a huge increase in the cost of electricity, buildings, and computer-related goods.
But the benefits of new technologies are never spread evenly.
When the technology of travel made remote destinations more accessible, it created tourist traps. Some well placed individuals and companies do well out of this, but typically, most people living near tourist traps suffer from the crowds and increased prices.
When power plants are built, neighbors suffer noise and pollution, but other people can turn their lights on.
We haven't yet begun to be able to calculate all the negative externalities of LLMs.
I would not be surprised if the best negative externality comparisons were to the work of Thomas Midgley, who gifted the world both leaded gasoline and CFC refrigerants.
The LLMs are not uncorrelated, though; they're all trained on the same dataset (the Internet) and subject to most of the same biases.
Agreed.
This is why I differentiated "uncorrelated" from "seemingly uncorrelated." Sorry if that wasn't clear.
It's funny, I'm working on trying to get LLMs to place electrical devices, and it silently developed opinions that my switches above countertops should be at 4 feet and not the 3'10 I'm asking for (the top cannot be above 4')
That's quite funny, and almost astonishing, because I'm not an architect, and that scenario just came out of my head randomly as I wrote it. It seemed like something an architect friend of mine who passed away recently, and was a big fan of Douglas Adams, would have joked about. Maybe I just channeled him from the afterlife, and maybe he's also laughing about it.
They tend to develop silent opinions based on rules of thumb, so it's not actually reasoning about the fact that my symbol is measured to the center, not the top.
I dread trying to get it to unlearn code from the last building code cycle when there are changes.
On the other hand the incorrect values may drive architects to think more critically about what their tools are producing.
On the whole, not trusting one's own tools is a regression, not an advancement. The cognitive load it imposes on even the most capable and careful person can lead to all sorts of downstream effects.
There's an Isaac Asimov story where people are "educated" by programming knowledge into their brains, Matrix style.
A certain group of people have something wrong with their brain where they can't be "educated" and are forced to learn by studying and such. The protagonist of the story is one of these people and feels ashamed at his disability and how everyone around him effortlessly knows things he has to struggle to learn.
He finds out (SPOILER) that he was actually selected for a "priesthood" of creative/problem solvers, because the education process gives knowledge without the ability to apply it creatively. It allows people to rapidly and easily be trained on some process but not the ability to reason it out.
Do you remember the title of that story, by chance?
Profession (1957)
https://en.wikipedia.org/wiki/Profession_(novella)
Profession, as sibling said, available here: https://www.inf.ufpr.br/renato/profession.html
The Wikipedia entry also has a link to the text, but the above is nicer IMHO, just the raw text. From a previous HN discussion some weeks ago!
That would have had devastating consequences in the pre-LLM era, yes. What is less obvious is whether it'll be an advantage or a disadvantage going forward. It is like observing that cars will make people fat and lazy and have devastating consequences for health outcomes - that is exactly what happened, but cars boost wealth, lifestyles and access to healthcare so much that the net impact was probably still positive even though people get less exercise.
It is unclear that a human thinking about things is going to be an advantage in 10, 20 years. Might be, might not be. In 50 years people will probably be outraged if a human makes an important decision without deferring to an LLM's opinion. I'm quite excited that we seem to be building scalable superintelligences that can patiently and empathetically explain why people are making stupid political choices and what policy prescriptions would actually get a good outcome based on reading all the available statistical and theoretical literature. Screw people primarily thinking for themselves on that topic, the public has no idea.
If you told me this was a verbatim cautionary sci-fi short story from 1953 I'd believe it.
Perhaps Asimov in 1958?
https://en.wikipedia.org/wiki/The_Feeling_of_Power
That said, I maintain there are huge qualitative differences between using a calculator versus "hey computer guess-solve this mess of inputs for me."
At long last, we have created the Torment Nexus from classic sci-fi novel "Don't Create The Torment Nexus"!
Eh 1953 was more about what’s going to happen to the people left behind, e.g. Childhood’s End. The vast majority of people will be better off having the market-winning AI tell them what to do.
Or how about that vast majority gets a decent education and higher standard of living so they can spend time learning and thinking on their own? You and a lot of folks seem to take for granted our unjust economy and its consequences, when we could easily change it.
How is that relevant? You can give whatever support you like to humans, but machine learning is doing the same thing in general cognition that it has done in every competitive game. It doesn't matter how much education the humans get - if they try to make complex decisions using their brain, then silicon will outperform them at planning to achieve desirable outcomes. Material prosperity is a desirable outcome, and machines will be able to plot a better path to it than some trained monkey. The only question is how long it'll take to resolve the engineering challenges.
That is absurd and is not supported by any facts
There are some facts which make it not outside the realm of possibility. Like computers being better at chess and go and giving directions to places or doing puzzles. (The picture-on-cardboard variety.)
You'd make a great dictator.
I think the comparison to giving change is a good one, especially given how frequently the LLM hype crowd uses the fictitious "calculator in your pocket" story. I've been in the exact situation you've described, long before LLMs came out and cashiers have had calculators in front of them for longer than we've had smartphones.
I'll add another analogy. I tell people that when I tip I "round off to the nearest dollar, move the decimal place (10%), and multiply by 2" (generating a tip in the ballpark of 18%), and am always told "that's too complicated". It's a 3-step process where the hardest thing is multiplying a number by 2 (and usually a 2-digit number...). It's always struck me as odd that the response is that this is too complicated rather than a nice tip (pun intended) for figuring out how much to tip quickly and with essentially zero thinking. If any of those three steps appears difficult to you, then your math skills are below those of elementary school.
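Spelled out as a toy sketch (just the three steps as described; taken literally it's 20% of the rounded bill, and presumably the mental shaving of cents along the way is what lands it nearer 18%):

    def quick_tip(bill):
        rounded = round(bill)        # "round off to the nearest dollar"
        ten_percent = rounded / 10   # "move the decimal place"
        return ten_percent * 2       # "multiply by 2"

    print(quick_tip(47.80))          # 9.6 -> call it $9 or $10 on a $47.80 bill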
I also see a problem with how we look at math and coding. I hear so often "abstraction is bad", yet that is all coding (and math) is. It is fundamentally abstraction. The ability to abstract is what makes humans human. All creatures abstract, it is a necessary component of intelligence, but humans certainly have a unique capacity for it.

Abstraction is no doubt hard, but when in life was anything worth doing easy? I think we unfortunately are willing to put significantly more effort into justifying our laziness than into not being lazy. My fear is that we will abdicate doing worthwhile things because they are hard. It's a thing people do every day. So many people love to outsource their thinking, be it to a calculator, Google, "the algorithm", their favorite political pundit, religion, or anything else. Anything to abdicate responsibility. Anything to abdicate effort.
So I think AI is going to be no different from calculators, as you suggest. It can be a great tool to help people do so much, but it will be far more commonly used to outsource thinking, even by many people considered intelligent. Skills atrophy. It's as simple as that.
I briefly taught a beginner CS course over a decade ago, and at the time it was already surprising and disappointing how many of my students would reach for a calculator to do single-digit arithmetic; something that was a requirement to be committed to memory when I was still in school. Not surprisingly, teaching them binary and hex was extremely frustrating.
> I tell people when I tip I "round off to the nearest dollar, move the decimal place (10%), and multiply by 2" (generating a tip that will be in the ballpark of 18%), and am always told "that's too complicated".
I would tell others to "shift right once, then divide by 2 and add" for 15%, and get the same response.
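Same shape of trick, spelled out as a toy sketch (my reading of it, with the "shift" being a decimal one):

    def quick_tip_15(bill):
        ten_percent = bill / 10                # "shift right once" in base 10
        return ten_percent + ten_percent / 2   # "divide by 2 and add"

    print(quick_tip_15(47.80))                 # ~7.17, i.e. exactly 15% (modulo float noise)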
However, I'm not so sure I see a problem with thinking that abstraction is bad. Yes, abstraction is bad, because it is a way to hide and obscure the actual details, and one could argue that such dependence on opaque things, just like a calculator or AI, is the actual problem.
> shift right once, then divide by 2
So, shift right twice? ;)
I think asking people to convert to binary might be a bit too much lol
No ifs, ands, or buts about it.
I'm sorry, I think you are teaching people the wrong thing if you are making the blanket statement "abstraction is bad". You are throwing the baby out with the bath water. You can "over abstract", and that certainly is not good, but it's not easy to define because it is extremely problem dependent. With these absurd blanket statements you just push code quality and performance down.
Over abstraction is bad because it can make code too difficult to read, or because it de-optimizes programs. "Too difficult to read or maintain" is ultimately a skill issue. We don't let the juniors decide that, but neither should we have abstraction that only wizards can maintain. Both are errors.
But abstraction can also greatly increase readability and help maintain code. It's the reason we use functions. It's the reason we use OOP. It helps optimize code, it can help reduce writing, it can and does do many beneficial things.
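A throwaway illustration of what I mean (hypothetical code, not from anything above): the same calculation inlined versus pulled behind a name.

    # Hypothetical example: the humble named function as abstraction.
    cart = [{"price": 12.50, "qty": 2}, {"price": 3.00, "qty": 1}]

    # Inlined, the intent has to be re-derived at every call site:
    subtotal = sum(item["price"] * item["qty"] for item in cart)
    total = subtotal + subtotal * 0.08

    # Behind a name, call sites read like the problem domain:
    def order_total(cart, tax_rate=0.08):
        """Sum the line items and apply sales tax."""
        subtotal = sum(item["price"] * item["qty"] for item in cart)
        return subtotal + subtotal * tax_rate

    assert abs(order_total(cart) - total) < 1e-9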
Lumping everything together is just harmful.
Saying abstraction is bad is no different than saying "python is bad", or any duck typing language (including C++'s auto), because you're using an abstract data type. The "higher level" the language, the more abstract it is.
Saying abstraction is bad is no different than saying templates are bad.
Saying abstraction is bad is no different than saying object oriented programming is bad.
Saying abstraction is bad is saying coding is bad.
I'm sorry, literally everything we do is abstraction. Conflating "over abstraction" with "abstraction" is just as grave an error as the misrepresentation of Knuth's "premature optimization is the root of all evil." Dude said "grab a fucking profiler" and everyone heard "don't waste time making things work better".
If you want to minimize abstraction then you can go write machine code. Anything short of that has abstracted away many actions and operations. I'll admire your skill but this is a path I will never follow nor recommend. Abstraction is necessary and our ability to abstract is foundational into making code even work.
*I will die on this hill*
That's not abstraction, that's obfuscation. Do not conflate these things. I'll let Dijkstra answer this: https://www.cs.utexas.edu/~EWD/transcriptions/EWD06xx/EWD667...
I believe that collectively we passed that point long before the onset of LLMs. I have a feeling that throughout human history vast numbers of people were happy to outsource their thinking and even pay to do so. We just used to call those arrangements religions.
Religions may outsource opinions on morality, but no one went to their spiritual leader to ask about the Pythagorean theorem or the population of Zimbabwe.
Well, now, that's not actually true:
[1] https://plato.stanford.edu/entries/pythagoreanism/ [2] https://en.wikipedia.org/wiki/Pythia
Obviously I was using the Pythagorean theorem as a random not literal example. But I’m also curious about what you mean. Mind linking to the specific relevant parts? Linking to humongous articles doesn’t help much.
I was linking it partially tongue in cheek, but oracles and the auspices in antiquity were specifically not about morality. They were about predicting the future. If you wanted to know if you should invade Carthage on a certain day, you'd check the chickens. Literally. And plenty of medical practices were steeped in religious fare, too. If you go back further, a lot of shamanistic practices divine the facts about the present reality. In the words of Terrence McKenna, "[Shamans] cure disease (and another way of putting that is: they have a remarkable facility for choosing patients who will recover), they predict weather (very important), they tell where game has gone, the movement of game, and they seem to have a paranormal ability to look into questions, as I mentioned, who’s sleeping with who, who stole the chicken, who—you know, social transgressions are an open book to them." All very much dealing with facts, not morality.
With regards to Pythagoreanism, Pythagoras himself thought of mathematics in religious ways. From the entry on Pythagoras (https://plato.stanford.edu/entries/pythagoras/) in the SEP:
> The cosmos of the acusmata, however, clearly shows a belief in a world structured according to mathematics, and some of the evidence for this belief may have been drawn from genuine mathematical truths such as those embodied in the “Pythagorean” theorem and the relation of whole number ratios to musical concords.
There are numerous sections throughout both of these entries that discuss Pythagoras, mathematics, and religion. Plato too is another fruitful avenue, if you wanted to explore that further.
That’s a bit cynical. Religion is more like a technology. It was continuously invented to solve problems and increase capacity. Newer religions superseded older ones and survived based on productive and coercive supremacy.
If religion is a technology, it's inarguably one that prevented the development of a lot of other technologies for long periods of time. Whether that was a good thing is open to interpretation.
On the other hand it produced a lot of related technology. Calendars, mathematics, writing, agricultural practices, government and economic systems. Most of this stuff emerged as an effort to document and proliferate spiritual ideas.
I see your point, but I'd say religion's main technological purpose is as a storage system for the encoding of other technologies (and social patterns) into rituals, the reasons for which don't need to be understood; to the point that it actively discourages examination of their reasons, as what we could call an error-checking protocol. So a religion tends to freeze those technologies in the time at the point of inception, and to treat any reexamining of them as heresy. Calendars are useful for iron age farming, but you can't get past a certain point as a civilization if you're unwilling to reconsider your position that the sun and stars revolve around the earth, for example.
I think it is hard to fully remove religious practice from a species. I think it exists along a spectrum: there are base ritualistic behaviors most animals engage in (e.g. a pet's ritual around eating or play), organized social-order sorts of rituals (e.g. birds expecting a particular mating dance performed well, with this sensibility shared among the local group of birds), and finally what we observe in our own development as a species, higher religion, which is merely iteratively developed from layering these simple behaviors onto simple behaviors until the whole is quite elaborate in fact.
In that sense I think getting caught up in the "religion bad for tech" zeitgeist misses the point of what religion actually is: collectively shared ritual. Belief in God, and specific shades of that, is just the step of the dance the bird does in this case. Taking a step back, plenty of atheists engage in collectively shared ritual too. Belief in the 9-5, the bludgeon that is the four years to specialize vs lifelong apprenticeships towards true mastery, economics constraining choice rather than pure skill. Do these rituals not also hold our species and technological development back? If we talk about religion, it is worth also considering the mountain of other blockers to progress we have built for ourselves in this collectively agreed upon daily society ritual we all partake in.
This is ahistorical, whiggish nonsense. The actual world is not a game of Civilization II.
Eh? I was talking about Galileo's trial for heresy.
Then you also understand nothing about Galileo.
> Can you audit/review/identify issues in a codebase if you've never written code?
Actual knowledge about systems works much better more often than not; LLMs are not sentient and still need to be driven to get decent results.
I'll say that I'm still kinda on the fence here, but I will point out that your argument is exactly the same as the argument against calculators back in the 70s/80s, computers and the internet in the 90s, etc.
You could argue that a lot of the people who grew up with calculators have lost any kind of mathematical intuition. I am always horrified how bad a lot of people are with simple math, interest rates and other things. This has definitely opened up a lot of opportunities for companies to exploit that ignorance.
This implies that people had better mathematical intuition, on average, pre calculator which seems difficult to believe.
The difference is a calculator always returns 2+2=4. And even if you ended up with 6 instead of 4, the fact that you know how to do addition leads you to believe you fat-fingered the last entry, not that 2+2 equals 6.
Can’t say the same for LLMs. Our teachers were right about the internet as well, of course. If you remember those early internet wild-west school days, no one was using the internet to actually look up a good source. No one even knew what that meant. Teachers had to say “cite from these works or references we discussed in class” or they’d get junk back.
Right so apply the exact same logic to LLMs as you did to the internet.
At first the internet was unreliable. Nobody could trust the information it gave you. So teachers insisted that students only use their trusted sources. But eventually the internet matured and now it would be seen as ridiculous for a teacher to tell a student not to do research on the internet.
Now replace "the internet" with "LLMs".
Most teachers would never let you grab any random internet source. We always had to use decent sources. Actual journal articles from our library’s JSTOR subscription would often be a hard requirement for a certain number of sources. Citing the text we used in class or other reference material we had access to counted as well. It was never a free-rein, anything-goes situation, unless that has changed.
I didn't mean to imply otherwise. Only to point out that in the early days of the internet, even into the 00s, teachers had a Hard No rule on any internet source.
I graduated high school in '04 and even then I was only allowed to use this system called "Galileo" which was basically a curated list of encyclopedic articles specifically meant for education and research.
To some extent, the argument against calculators is perfectly valid.
The cash register says you owe $16.23, you give the cashier $21.28, and all hell breaks loose.
My experience is more that you give €20.28 and the cashier asks you whether you have €1.
I think Europe is a few years behind the US in many respects, including the dumbing down of the population.
I wish we weren't just behind but were choosing another path; the fear is that we really are just behind.
On the one hand, yeah, immigration and trade issues push the buttons of the hard right.
On the other hand, our hard right has a trifecta of business, gun culture, and religion.
You're lacking the religion and gun culture, and trying to take away your health care would be the third rail, so in some respects, it would be difficult for you to follow us.
Also, without that trifecta, it seems that it would be somewhat more difficult to push the sort of anti-education agenda that gets pushed here, both at the university level and at lower levels (e.g. giving equal time to science and creationism).
You have to remember that the US was, in large part, founded by dogmatic malcontents who couldn't get along with their neighbors.
Um, no? The cashier punches your $21.28 into the register, and it tells her that she needs to give you $5.05 in change.
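For anyone who has never worked a till, a quick sketch of the arithmetic (and why the odd $21.28 is the whole point):

    # $21.28 for a $16.23 bill: the change comes back as one $5 bill and a nickel.
    bill_cents, paid_cents = 1623, 2128              # work in cents to dodge float issues
    change = paid_cents - bill_cents                 # 505 cents
    breakdown = []
    for d in [2000, 1000, 500, 100, 25, 10, 5, 1]:   # US denominations, in cents
        count, change = divmod(change, d)
        if count:
            breakdown.append((d, count))
    print(breakdown)                                 # [(500, 1), (5, 1)] -> one five, one nickel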
At some places this is true.
At other places, they take the $20, look confused, and fart around with the change for a while.
Too late. Outsourcing has already accomplished this.
No one is making cool shit for themselves. Everyone is held hostage ensuring Wall Street growth.
The "cross our fingers and hope for the best" position we find ourselves in politically is entirely due to labor capture.
The US benefited from a social network topology of small businesses, with no single business being a linchpin whose failure would implode everything.
Now the economy is a handful of too big to fails eroding links between human nodes by capturing our agency.
I argued as hard as I could against shipping electronics manufacturing overseas so the next generation would learn real engineering skills. But 20-something me had no idea how far up the political tree the decision had been made back then. I helped train a bunch of people's replacements before the telecom-focused network hardware manufacturer I worked for then shut down.
American tech workers are now primarily cloud configurators and that's being automated away.
This is a decades long play on the part of aging leadership to ensure Americans feel their only choice is capitulate.
What are we going to do, start our own manufacturing business? Muricans are fish in a barrel.
And some pretty well connected people are hinting at similar sense of what's wrong: https://www.barchart.com/story/news/36862423/weve-done-our-c...