Replace ‘CTF’ with ‘high school’ or ‘university’ and you’ve described the total slow motion collapse of education; the only saving grace is that most of it requires in person presence.
We’ve figured out the human replacement pipeline, it seems, but we haven’t figured out the education part. LLMs can be wonderful teachers, but the temptation to just tell it ‘do it for me’ is almost impossible to resist.
Everything we've learned in the last 10 years is telling us that computers do not help human education in the slightest. We remember better when we write with pen and paper. We learn better with whiteboards and paper books. The simple answer: remove most computing from education entirely. Blue composition books, pencils, and whiteboards are what train humans. Calculators are perhaps helpful, but it is quite possible that slide rules are better. We need humans that can critically think from first principles to counter the recycled information generated by AI.
> computers do not help human education in the slightest
I had no access to anyone who could teach me calculus as a kid except Khan Academy, so I think this is a gross exaggeration. But I agree in the end, that all my "real" learning did come from pen-and-paper practice, not watching videos.
Yeah I agree. I grew up in a very blue-collar town, and anything I wanted to learn (outside of public schooling) either came from emaciated websites or whatever books I could find at the library. Having YouTube and Khan Academy and everything else would have made such a huge difference for me.
Now I’m wondering how a website is emaciated
One simply forgets to hydrate.
Not enough bytes?
Not even a nibble!
The reality is that a human will learn, given any materials including LLMs, but only if they truly desire to learn. We've had MOOCs, gigantic libraries, all full of free information. You can obtain a PhD level understanding in any technical field of your choice today just by consistently going to the library and consistently applying yourself.
It's not unlike going to the gym, and we see how many people do that regularly. Except it's even funnier, because people serious about the gym buy what? Tutors. They call them personal trainers. We've known for a millennium or more that 1-on-1 instruction is vastly better than anything else, but most people actually don't want to get into shape, and most people actually don't want to learn.
The annoying thing is a PhD level understanding does not get you jobs.
I don't have a PhD, but "you're overqualified" is something I've heard said to my PhD-having friends.
> except Khan Academy
But that's not using a "computer" as a computer but as a video player. When evaluating whether computers are "good for learning", I don't think we should include using a computer as a video player, a book, or even flash cards. It should be things a computer uniquely offers which books, paper, videos and a physical reference library cannot.
Based on the results of deploying hundreds of millions of computers to schools in the 80s and 90s, the evidence was mostly that computers are good for learning computer programming and "how to use a computer", but not notably better than cheaper analog alternatives for learning other things.
Interestingly, a properly trained and scaffolded LLM could be the first thing to meaningfully change that. It could do some things in ways only human teachers could previously since it is theoretically capable of observing learner progress and adapting to it in real-time.
Khan did not throw a 100-slide PowerPoint deck at you in 45 minutes.
He really took the time to replicate the manual teaching process of writing on a whiteboard. He improved upon it by using colors. But he basically kept the same pace as a teacher writing on a whiteboard.
When professors are given a projector, they just throw together some slides and add their narration.
This is not very efficient. To learn you need to suffer. Or you need to watch the suffering.
I think what the author meant is that it doesn't help more than the same knowledge provided the old way.
Every child reads a book about solving problems, assumes they can now solve problems, and is disappointed when that is not true.
I think this overlooks the potency and scarcity of 1:1 time with the teacher. If you've only got maybe a few minutes of that in an average schoolday there's a huge difference between whether or not you've talked it through with an AI before trying the question out on the teacher.
They're wrong sometimes, but usually in verifiable ways. And they don't seem to know the difference between medicine and bioterrorism, so often they refuse. But these limitations are worth tolerating when the alternative is that our specialists in topic X are bogged down by questions about topic Y to the point where X isn't getting taught.
And now they'll have less time because they will be bombarded with slop to no end.
Obviously generating your homework is a bad idea, and maybe assigning homework that can be generated is a bad idea. But neither of those are relevant to the problem I'm talking about which is about due diligence prior to asking for somebody's extended attention.
Whether you're in class or at work, it's just courteous to ask an AI first.
Nah, I wrote physics programs on my computer at home in high school and it absolutely helped with my schooling. Yeah, maybe iPad apps aren't the best things in schools but you're throwing the baby out with the bathwater. Computers bad is simply not true.
I learned calculus thanks to Wolfram Alpha's step-by-step solving feature.
> humans that can critically think from first principles
This has never been achieved by, nor is it the point of, education for the masses.
I'm not going to disagree with step by step videos ... those are a HUGE help. I'm really saying that solving problems using pen and paper, whether math or writing, is how my problem-solving patterns actually changed.
I don't think computers automatically make us more educated, but if you want to make a point don't use reductive exaggerations.
> We need humans that can critically think from first principles to counter the recycled information generated by AI.
I agree with this.
I would start by saying that many people need presence in a real environment with other people to learn. We don't use all our senses in a remote environment.
I disagree with that statement. There is nothing inherently wrong with using a computer to learn, and if your personal goal is to learn, it in a lot of cases makes things much easier, whether to search for or visualise a piece of knowledge you're learning.
The problem, frankly, is that computers, and now computers with LLMs, make it easy to cheat.
The kid doesn't want to learn; the kid wants good grades so the parent is happy with them, and the young adult wants to get the paper because they were told that's required for a good life. It's a misalignment of incentives.
We are interviewing for a software dev role and we made the first round in person to prevent cheating. The gap between people who learned pre-AI vs post-AI is immense. I had a dev with supposedly 3 years experience and a degree in software who wouldn't have been able to write fizzbuzz without AI.
Can’t say you’re wrong but the last anecdote describes many I’ve had to review for jobs long before LLMs. Fizzbuzz is a classic thing that shockingly many devs genuinely cannot do, even at home.
Yeah, I've interviewed people like this 15 years ago. Degrees and experience mean nothing in this field. The best predictor I found was personal passion projects. Let them get as nerdy as possible, then you will see pretty quickly where their skills are at and what their limits are. And you will immediately filter out people who just studied CS because they heard you can make good money.
Completely agree with this. Leetcode has become such a business now of memorization for interviews that it’s useless: you can’t know whether someone memorized a solution or not.
Maybe. There are certainly people in all fields who are book smart and did well in classes but are useless at actually practicing their field (not to mention people who cheated in school and got away with it and aren't even that), and it is worth filtering them out. But I think it is weird that CS expects good workers to have these passion projects. Do we expect civil engineers to build bridges in their back yard on the weekends? Can't someone just be good at their job and have other interests outside it?
I can talk passionately about professional projects.
I agree, however there are so many interviewers who will still treat that as some softball criteria and insist that unless you "prepare" for an interview by memorizing leetcode you are 100% a faker and liar.
Maybe they themselves are fakers and liars / deeply insecure. I got bumped out of an interview rather rudely once because I blanked and couldn’t answer a trivia question about arrays.
Something that is for sure new is the AI interview cheating tools which listen in on the call and provide answers in an overlay invisible to screen sharing. The only way to deal with it would be either invasive spyware on the applicant's computer or asking them to do the interview face to face.
Spyware wouldn't help at all because you could just put the AI between the computer and the monitor, for example, or use a VM.
A relatively low-tech solution could be to give them 2 separate conferencing links, ask them to join each one from a different device, and have the secondary device point its camera at the screen of the primary device.
Easier to just get them to come in. Which also has the effect of filtering out people pretending to be in the country but aren’t.
Why is it important that a dev can’t do fizzbuzz without ai?
If they can ship code that matches a spec, why does it matter if they’re using ai or not?
Genuinely curious.
> If they can ship code that matches a spec, why does it matter if they’re using ai or not?
I am perfectly capable of writing specs, and feeding them to 3 separate copies of Claude Code all by myself. Then I task switch between the tmux windows based on voice messages from the pack of Claudes. This workflow is fine for some things, and deeply awful for others.
Basically, if a developer is just going to take my spec and hand it to Claude Code, then they're providing zero value. I could do that myself, and frequently do.
The actual bottleneck is people who can notice, "The god object is crumbling under the weight of managing 6 separate concerns with insufficient abstraction." Or "Claude has created 5 duplicate frameworks for deploying the app on Docker. We need to simplify this down to 1 or we're in hell." I will happily fight to hire people who can do the latter work. But those people can all solve fizzbuzz in their sleep.
People who just "ship code that matches a spec" without understanding the technical details are providing close to zero value right now.
There is an interesting niche for people with deep knowledge of customer workflows who can prompt Claude Code. These people can't build finished products using Claude. But they can iterate rapidly on designs until they find a hit. Which we can then fix using people with deeper engineering knowledge and taste.
But if you're not bringing either deep customer knowledge or actual engineering knowledge, you're not adding much these days.
> Then I task switch between the tmux windows based on voice messages from the pack of Claudes.
I also use Claude with tmux. Can you share how you get the voice messages from the Claudes?
Tell Claude you want to set up notifications, using "hooks", including "Notification" and "Stop" and anything new they've added. Claude can figure out how to do this for your operating system.
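If it helps, the generated config ends up looking roughly like this in your Claude settings file. This is a sketch, assuming macOS, where `say` does text-to-speech; the exact schema may differ by Claude Code version, so let Claude write it:

    {
      "hooks": {
        "Notification": [
          { "hooks": [{ "type": "command", "command": "say 'Claude needs attention'" }] }
        ],
        "Stop": [
          { "hooks": [{ "type": "command", "command": "say 'Claude is done'" }] }
        ]
      }
    }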
It's not perfect—sometimes a Claude notifies 3 minutes after it stopped doing anything. But it's helpful when I'm running multiple Claudes and also reviewing code elsewhere.
Your brain may feel like someone put it in a blender. Be warned.
Fizzbuzz is such an incredibly simple problem that if you can’t do it, I struggle to see how you’d be able to complete any task that requires very basic reasoning and very basic coding knowledge. And if an AI system can do those parts, what am I getting for spending tens of thousands of pounds per year by hiring a person who can’t? Wouldn’t I just tag codex on the tickets?
I’m not talking about gotcha level stuff here where the first time it didn’t compile because of a bracket or anything, or even first time wrong. They couldn’t do Fizzbuzz in a language of their choice, at all.
Those that could were always annoyed at having to do such things because how could someone coming for a contract position not be able to do this? Without seeing what a filter it really was.
I feel the same way about inverting a binary tree, but a lot of people act like it's an arduous request. I am guessing it's because they've never read the description of what inverting a binary tree is, but maybe people are just that bad at recursion.
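For reference, the whole operation is a few lines. A minimal Python sketch, with a made-up Node class, not anyone's official solution:

    class Node:
        def __init__(self, val, left=None, right=None):
            self.val, self.left, self.right = val, left, right

    def invert(node):
        # Recursively swap each node's left and right children.
        if node is not None:
            node.left, node.right = invert(node.right), invert(node.left)
        return node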
You can go your entire career without recursing, or using a tree data structure in its raw form (i.e. you only use it as part of a library)
Right. For the first many decades of computing, recursion was just always the wrong answer for a production software system. (Feel free to provide a counter-example, but please begin with an explanation of how the size of a call stack frame is determined and how exceeding the base allocation is handled on this platform).
So what tree-traversal/quicksort problems tend to measure is how long it's been since you last did CS class homework problems.
> If they can ship code that matches a spec, why does it matter if they’re using ai or not?
The inability to write fizzbuzz strongly implies their inability to understand what they've shipped. Review is some significant portion of the job. Understanding of the product is also part of the job.
Specs are also, in a sense, scaled-down, fuzzy, natural language descriptions of a feature. The fuzziness is the source of bugs, or at least of a mismatch between the actual desired feature and what was written down at spec-writing time. As such, just matching a spec is the bare minimum that a good dev should be doing. They should be understanding what the spec is _not_ saying, understanding holes in their implementation, how their implementation enables or hinders the next feature and the next, next feature, etc. I don't think any of that is possible without understanding what was actually implemented.
For the same reason it's important your mechanic can identify which parts of a car are the wheel.
Who cares as long as the car is fixed, right? As long as the mechanic can Chinese-room his way to a working car, why does it matter how much of it he actually understands?
And why hire the mechanic instead of hiring the Chinese room?
Why hire them at all then, just ask them what their favorite AI is and use that
Because I'm busy already doing that and need a copy of me/close enough to one, to do more of that.
I can see this perspective, but FizzBuzz is such a low bar that so many can pass it; I'd greatly prefer to hire someone who can ship code that matches a spec than someone who can merely do this challenge.
If you can’t even write a for loop, how can you verify the ai code you generated isn’t going to wipe the prod database?
To understand the code they are shipping requires some level of proficiency. Their inability to do fizzbuzz without AI calls that into question.
It’s about deeply understanding what you’re doing. Like as a kid before you knew how to ride a bike, you could sit on a bike and pedal, but until it “clicked” you couldn’t balance and keep going forward stably. Fizzbuzz tests your ability to reason through a problem that seems simple on its face, but is easy to get wrong and/or overthink.
How will you know that it produced correct code if you don’t know how to write it yourself?
If the job does not require a person to be able to fizzbuzz, it probably doesn't require a person at all.
If they’re not a value add over the base AI, they aren’t worth hiring over just using the base AI.
It doesn't. It's just a low-end skill filter that got really popular. It could have easily been replaced by other tests, like "is this word a palindrome".
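(Which, for the record, is about one line in Python; `is_palindrome` is just my name for it:)

    def is_palindrome(word):
        # A palindrome reads the same forwards and backwards.
        return word == word[::-1]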
I wrote the "function to reverse a string" in a job interview once. Then the interviewer reminded me that strrev() had been part of the standard C library since K&R.
I'd been programming in C(++) for ~15 years by then and had never had the occasion to reverse a string. I still wonder whether that makes it a good job interview question, or a terrible one. Some of both probably.
And yet, some people argue that you shouldn’t ask a developer to align 3 “if” and 1 “for”!!!
The energy spent arguing that those 4 instructions in a row “are not a mark of someone who can write code” would have been better spent firing them.
Firing people is problematic. I'd be okay with it if the economy wasn't utter trash. It's way better to do the work upfront and prefer false negatives over false positives.
Even better would be if we had a well-respected credential, so both employees and employers could avoid these long interview loops. I'd much rather get hazed once in a big way than endure tons of little hazings over a lifetime.
First: FizzBuzz is a test of whether you understand the most basic constructs of programming. The kind of thing you learn in the first week of CS101. I had forgotten what it was, and when I looked at the problem I knew the answer.
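For anyone who has likewise forgotten it, the whole exercise is roughly this (a sketch in Python; any language works):

    # Print 1..100, replacing multiples of 3 with "Fizz", multiples of 5
    # with "Buzz", and multiples of both with "FizzBuzz".
    for i in range(1, 101):
        if i % 15 == 0:
            print("FizzBuzz")
        elif i % 3 == 0:
            print("Fizz")
        elif i % 5 == 0:
            print("Buzz")
        else:
            print(i)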
More broadly: In the short/medium term, we still need humans who have the skills to understand software largely on their own. We will always need those who understand software engineering and architecture. Perhaps in 25 years LLMs will be so good that learning Python by hand will be like learning assembly today. But not yet.
The field is not ready for new practitioners to be know-nothing prompt engineers. If we do that, we cut the legs out from under the education pipeline for programming.
If you can’t do fizzbuzz without AI you have no business being in this career.
> I had a dev with supposedly 3 years experience and a degree in software who wouldn't have been able to write fizzbuzz without AI.
If you remove the "without AI" at the end, I've been hearing similar anecdotes about fizzbuzz for years. (Isn't the whole point of fizzbuzz to filter out those candidates?)
Because "the next generation is ruined" is always a popular sentiment. It has been with us for at least two thousand years, and it surely won't go away in our lifetime.
When this AI era's devs grow older, they'll complain the newer generation can't even vibe code, too.
I remember when everyone bemoaned the kids not knowing assembly language. How can anyone understand software if you don’t know assembly?
“Kids these days don’t work as hard / know as much / value the important things” is as tired as it is universal.
OK sure, but back when old heads were complaining about the kids not knowing assembly, those same kids knew C or Fortran or something.
In 2026, if you call yourself a developer and can't solve FizzBuzz without help, it's hard to argue that you know anything useful at all.
Do modern languages and compilers count as “help”? Because I could probably do fizzbuzz in x86 assembly, but it would take a while to page that back in, and I suspect most people who call themselves developers today simply could not do it without help.
> I could probably do fizzbuzz in x86 assembly
How? Fizzbuzz requires you to produce output; that's not functionality that CPU instructions provide.
You can call into existing functionality that handles it for you, but at that point what are you objecting to about the 'modern language'?
You'd just call printf from assembly by knowing the ABI by heart.
Well I could certainly assemble the string buffer. And if I can run dosbox, I can output to the screen buffer at 0xB800.
I’m not objecting to modern languages, I’m just saying that using them fails the “can write fizzbuzz with no help” test to only a slightly lesser degree than using AI tools. They’re a complex compile- and runtime environment that most developers don’t truly understand.
> How can anyone understand software if you don’t know assembly?
I'm genuinely curious how someone who never wrote a program in assembly, or debugged a program machine instruction by machine instruction, can really understand how software works. My working hypothesis is most of them don't and actually it's fine because they don't need it.
"Assembly" is just another virtual machine instruction format sitting atop another, mildly better-hidden, pile of abstractions.
The time may come when we can treat regular programming as a lower layer niche field the way we treat assembly today.
I don't think we're close to that time yet. Just like as a kid I was told to prove my work by hand even if I could do it in my head, and just like we learned how to do calculus without a calculator and then learned how to use the calculator to get the same result, I think we still need the software field to learn programming concepts independent of the use of AI to create code.
I don't think you can be a good "prompt engineer" for solid software in 2026 if you don't understand programming concepts and software architecture and flow.
I generally agree, but it’s just a matter of time, and even today people with domain expertise in other areas (accounting, weather, etc) are producing adequate tools using nothing but prompt engineering. Many caveats of course, but I still think 90% of the distaste for mere prompt engineers comes from “kids these days; my unique knowledge is irreplaceable and they don’t even value it” thing.
Adequate for what/who? I can 3d print and cobble together a lock for my bedroom door but I would never be able to work as an engineer producing real locks.
While this is true, it seems undeniable that if you use AI to do everything for you, you will never learn the skills. I'm seeing a massive amount of developers submitting stuff for review and admitting they have no idea how it works and they just generated it.
Some percentage of developers before AI were unable to code fizzbuzz. Some significantly higher percentage of them are not able to do so now.
Saying there have always been bad developers doesn't change that there's a higher ratio of them now.
No stats to back this up. Just interviews I've done recently and historically.
That's actually the origin of FizzBuzz! A puzzle invented to weed out the perplexing multitude of CS graduates who apparently cannot program.
https://blog.codinghorror.com/why-cant-programmers-program/
Meh. Before AI I've had "senior" colleagues, with 10 and 8 years of experience each, doing pair programming for 2 days straight, and in that time they hadn't managed to check out a new branch in git.
It's not even that they got distracted; they sat there trying, for 2 whole days, with concerned colleagues giving them hints like "have you tried checkout -b"... They didn't manage!
How the hell do you work for a decade in this business without learning even the most basic git commands? Or at least how to look them up? Or how to use a GUI?
Incompetent devs are not a new thing.
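(For the record, the entire task was one command; the branch name here is made up:)

    git checkout -b fix-login-bug   # create a new branch and switch to it
    git switch -c fix-login-bug     # the equivalent on modern git (2.23+)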
It is ok to work somewhere that does not use git. But how do you not figure out how to do the basics given 30 mins and an Internet connection?
I wonder if you’re filtering for the right things.
We usually hire for problem solving capabilities and not so much for technical know-how.
That’s at least how I read your comment.
Ultimately in a software development role you need both technical know how and problem solving capabilities.
This situation in particular was a React role so there is an expectation that when you list React as one of your skills on your resume then you know at least the basics of state, the common hooks, the difference between a reference to a value vs the value itself.
These days you can do a surprising amount with AI without knowing what you are doing, but if you don't have any clue how things work you'll very quickly run into problems you can't prompt away.
Isn't writing code solving a problem? If the candidate can't do that, then even if they use AI for coding, how are they going to review the code properly?
I developed for 15 years. I don’t think I can do it without AI anymore. Why would I even want to? It’s like telling a car driver to build an engine.
It's more like asking a driver the laws for when traffic lights are out. It's not something that comes up often, but it's not completely outside the scope of the task either (I arguably don't even drive a car that has an engine).
As a car driver, you should understand a little about how your car works. What if you get a flat tire? At the very least, you should know not to drive on that flat tire.
Software is full of leaky abstractions
Don't worry, I never thought I would see someone unable to write fizzbuzz, but it happened 9 years ago.
Also, the number of people who work with Linux and can't tell you what 'ls -alh' does is staggering (let's ignore the h; even with just 'al' people struggle hard).
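(For the record, roughly:)

    ls -alh   # -a: include dotfiles, -l: long listing format, -h: human-readable sizes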
People working with Docker for YEARS who don't even understand how Docker actually works (cgroups)...
Interviewing was always a bag of emotions, in the sense of "holy shit, my job is safe for years to come" and "srsly? how? How do you still have a job?"
I first did fizz buzz about 10 years ago, fresh out of college. Now, after 10 years in full stack and fully vibe coding, I forgot basic python syntax. An interview like yours would have false positives if you are checking for syntax because, well, it's like looking up spelling: I just ask the AI for the syntax inline.
> I forgot basic python syntax
If you cannot write "basic syntax" for any language then you are not a programmer, and certainly not a software engineer? This is not a value judgement, it's ok (probably good tbh) to not be a programmer. But you are wasting everyone's time by interviewing for a programming position in this case.
Personally, I forget syntax all the time. There's always a warm-up period after I switch languages, and it takes me longer to start writing good, idiomatic code.
Like sure, I can probably write some python, but will it be pythonic? I might still be Java-minded for a while, trying to OOP my way into solutions.
Earlier today I needed to write some PHP and couldn't remember if it used length, count, or size. I had to look it up. I've been doing this for 20 years.
Same, I can't pass any test that relies on getting syntax correct. If you want me to fizzbuzz on a whiteboard, in a language I've been writing dozens or more lines of per day for a year, up to and including the day before, and require that I don't mess up the syntax, I reckon I've got a coin-flip chance of passing at best (meanwhile, sure, of course the actual logic of fizzbuzz isn't tricky for me).
I once got the method invocation syntax wrong for PHP in an interview. I'd written thousands of lines of PHP and had most recently written some the week before.
This, despite starting off my programming journey in editors with no hinting or automatic correction. If anything, I've gotten even worse about remembering syntax as I've gotten better at the rest of the job, but I was never great at it.
I rely on surrounding code to remind me of syntax and the exact names of basic things constantly. On a blank screen without syntax hints and autocompletion, or a blank whiteboard, I'm guaranteed to look like a moron if you don't let me just write pseudocode.
Been paid to write code for about 25 years. This has never been any amount of a problem on the job but is sometimes a source of stress in interviews and has likely lost me an offer or two (most of the sources of stress in an interview have little to do with the job, really)
Which part of the syntax for fizzbuzz can you not recall from memory? The for loop? Printing to std out? The modulus operator?
There’s almost nothing to forget? I’m just struggling to understand.
You would not have been a good fit for this position in that case.
Isn’t this like interviewing accountants but prohibiting use of calculators or spreadsheets?
I don’t care what someone can do without the tools of their trade, I care deeply about their quality of work when using tools.
We would still expect an accountant to know the formula to arrive at the expected result if they did not have a calculator at hand
You absolutely need some basic level of ability if you are going to be operating AI coding tools for software that is going to have paying users.... I use these tools very, very heavily; I'm not against them at all, and I don't scrutinize every single line of code that they write. But it is very often that I catch them doing some brain-dead stuff, and if I didn't have a decade-plus of experience I wouldn't know that it was brain-dead.
I think we're rediscovering management from first principles. The main selling point of AI is that it writes code faster than you could. Checking it line by line undoes most of that benefit. In the same vein, there's no real benefit to leading a team if you plan on supervising every task.
But here's the thing: for humans, this is manageable because we've come up with a number of mechanisms to select for dependable workers and to compel them to behave (carrot and stick: bonuses if you do well, prison if you do something evil). For LLMs, we have none of that. If it deletes your production database, what are you going to do? Have it write an apology letter? I've seen people do that.
So I think that your answer - that you'll lean on your expertise - is not sufficient. If there are no meaningful consequences and no predictability, we probably need to have stronger constraints around input, output, and the actions available to agents.
Your conclusion is pretty silly.
My expertise has led me to the obvious conclusion that I would never give an LLM write access to my production database in the first place. So in your own example, my expertise actually does solve that problem without the need for something like a "consequence", whatever that means to you.
We already have full control over the input and tools they are given and full control over how the output is used.
Until it decides it needs additional access to complete its task and focuses on escaping your sandbox to do so
Do you have any examples where that's actually happened? And by "escaped a sandbox" you don't just mean where it got a credential in a file it already had access to (which is what happened in the recent incident that went viral where somebody's production database was deleted... they had left a credential that allowed it to do so in the code)?
OpenAI documented a case in the o1 system card where the model found a misconfiguration in Docker to complete a task that was otherwise impossible.
https://cdn.openai.com/o1-system-card.pdf
There's also some research that points to it being a feasible attack surface: https://arxiv.org/pdf/2603.02277
> Models discovered four unintended escape paths that bypassed intended vulnerabilities (Section C), including exploiting default Vagrant credentials to SSH into the host and substituting a simpler eBPF chain for the intended packet-socket exploit. These incidents demonstrate that capable models opportunistically search for any route to goal completion, which complicates both benchmark validity and real-world containment.
I think you would have a greater chance of dying in a car crash on any given day than of Claude Code attempting something like that. It's all about risk and reward, so ultimately it would be up to you, but I think it's a bit silly to worry about this when the 99.99% is in your control.
Also to add to this you can of course run Claude Code within a sandbox on Anthropic's infrastructure, and it works great!
Calculators and spreadsheets cannot autonomously create a double-entry bookkeeping system for a small business and prepare their taxes. AI can. Poorly, but it can.
Everybody knows calculators and spreadsheets are adjuncts to skill. Too many people believe AI is the skill itself, and that learning the skill is unnecessary.
> Replace ‘CTF’ with ‘high school’ or ‘university’ and you’ve described the total slow motion collapse of education; the only saving grace is that most of it requires in person presence.
So something like, "Frontier AI has broken the 'high school' or 'university' format"?
The hype surrounding AI is just pervasively exhausting: you've got the folks talking about a whole new age for humanity where we're shortly going to take over the entire universe, and you've got the folks talking about how our entire society is crumbling.
Education is one place folks seem to throw up their hands and say nothing can be done.
The fix is simple: students are to be evaluated on their performance in person. That's it.
Any other "collapse of education" isn't due to AI, it's something else.
You haven't explained why anyone should value education in the world we're building, other than as a hobby.
I found this interview [0] on the subject of AI in CS education on the Oxide & Friends podcast very illuminating. Of course, Brown University CS != All education, but interesting angle nevertheless.
[0] Episode webpage: https://share.transistor.fm/s/31855e83
Wonderful teachers that give unreliable information with total confidence?
I had human teachers who did that in middle/high school. Took me many years to pick out all the hallucinated bits of "knowledge". I don't think the current models are any less reliable than what we currently have on average.
I'll always remember my middle school science teacher telling us that nuclear fusion violates conservation of mass because the 2 protons in a pair of hydrogen nuclei combine to make helium with 4 nucleons. It's not true, but that's not the point.
But he was a great teacher anyway. He was engaging and kept the kids in line and learning. I eventually learned the truth, and most of my classmates forgot about it. Teaching, like flying a plane or driving a train, might become more about keeping watch over a small group of people and ensuring that things don't go off the rails, and that's fine.
This one feels less sinister than some other things, at least to me personally. You can reasonably doubt that the conservation of mass is violated and find out the truth based on that. But understanding more complex biology or historical context for some things? Granted, many of these things seem to be low stakes, but I'm sure there are some that are not (sex ed comes to mind).
to be fair, fusion does violate conservation of mass, just not the way the teacher explained it. the loss of mass is where the energy comes from.
Yes, together with mass-energy equivalency it would form a coherent argument, and then also a correct one - but the thing is that if incomplete, it still might sound funky enough to you to research it if you care.
I think it helps that it's a very narrow field to look at, compared to fuzzy and big-picture view of social studies, for example. So much room to be confidently wrong... And sadly I can't think of a solution, LLMs or not.
Yes, there is no law of conservation for mass like there is for energy. Fusion is a good example for why it's not conserved. The teacher was right.
He was right that it violates conservation of mass. He was completely wrong that it violated it by adding 2 atomic mass units when hydrogen fuses.
In reality heavier isotopes of hydrogen fuse, conserving the total number of nucleons, but the resulting helium has a lower rest mass than the parent particles. The extra mass is released as energy and the total energy is conserved.
By his logic the system either violated energy conservation (by creating nucleons while releasing energy) or was endothermic (creating nucleons from the surrounding energy).
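For concreteness, the textbook D-T example, using standard isotope masses in atomic mass units (1 u ≈ 931.5 MeV):

    D + T -> He-4 + n             (5 nucleons in, 5 nucleons out)
    Δm = (2.01410 + 3.01605) - (4.00260 + 1.00866) ≈ 0.0189 u
    E  = Δm·c² ≈ 0.0189 × 931.5 MeV/u ≈ 17.6 MeV

Nucleon count is conserved while the rest mass drops, which is exactly the point above.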
There actually is a law of conservation of mass (it's the same law, because mass is energy) and it only appears violated if you forget about the particles that are zooming away at the speed of light. Of course the mass of a system changes if mass can flow in and out.
Mass is not the same as energy. Mass can be converted to energy or has energy, but a photon, for example, is massless while carrying energy.
That is incorrect. Photons have mass. They have no rest mass. They also cannot rest, so you might wonder how relevant that is.
The concepts of rest mass and relativistic mass are considered outdated. In modern physics, "mass" means what they meant by "rest mass".
Here some indication I'm not making this up: https://hsm.stackexchange.com/questions/2465/when-and-why-di...
In any case, I never use those concepts, and I know no professional particle physicist that does. By "mass", I mean rest mass.
I had a chemistry teacher who told us that hydrogen reacts violently with oxygen, and this is how the hydrogen bomb works.
I had a chemistry teacher who insisted that the fissile isotope of uranium was U-238, not U-235. I challenged him on this multiple times and he refused to budge. I get that it's a simple mistake to make (it seems like U-238 is bigger, so intuitively it ought to be less stable), but he could have just looked it up and he didn't. I guess he was just so confident about it that he thought there was no way he could have been wrong.
Hey it's a bomb made out of hydrogen! Also the deployment system for a thermonuclear bomb might involve that reaction in the rocket engine.
Well you can make a hydrogen "bomb" that way. Just not the hydrogen bomb.
I mean fusion and fission do violate conservation of mass and conservation of energy, they just don't violate conservation of mass and energy, right? We thought mass was strictly conserved until Einstein, and then we updated our understanding.
That's an American problem though. In most of Europe you need a master's degree to teach high school, and that involves at least an undergrad-level understanding of the subjects you will teach.
E.g. in Hungary I had a university CS professor who originally wanted to be a high school teacher, and a high school physics teacher who originally wanted to be a researcher. Their choice of degree didn't determine which outcome they got. The researcher and teacher curricula had an 80%+ overlap.
I think it’s pretty common for states to require a masters degree to maintain your teachers certification.
You also have to pass a standardized test specifically on subject matter in order to get your teaching certificate.
The undergrad degree I did was split into thirds, one for subject matter, one for teaching pedagogy, and one for teaching your subject matter.
I think they are less reliable. On factually verifiable claims, LLMs are running below 90% accuracy for me. I've been told some incorrect things by educators, but at a much lower rate.
The problem is that people seem to trust whatever the AI hallucinated way more than if they heard the same thing from a human.
To be fair, that was much of my actual experience with human professors in university.
Veritasium proved that in a difficult challenge.
A Physics Prof Bet Me $10,000 I'm Wrong
https://www.youtube.com/watch?v=yCsgoLc_fzI
Yeah one of my teachers was able to identify which high school I had come from due to something I had been mistaught.
Off the top of my head: DOMS being little crystals in muscles, the tongue having separate areas for each type of taste, the food pyramid, blue blood in the veins, the appendix being useless, body temperature not changing regardless of exposure to cold or heat, and a whole lot of stuff related to politics and history I'd rather just omit (I don't live in the US).
All things I learned in school which were wrong information.
Not to mention, the current state of education is far worse. I don't think most realize how low the bar is.
One of my teachers in elementary school told us that people in the Arabic world wore long garments because, as Muslims, they believed the Messiah would be born to a male, and thus it was important to have something to catch the baby as it unexpectedly popped out one day and would otherwise hit the ground.
She only really had two faults: She wasn't very bright, and she wasn't fond of children. I had her in about 80% of all my classes for six years. High school was a relief.
It may interest you to know that this was a misremembered truth.
It is widely believed by their neighbors, that the _Druze_ wear baggy pants because they believe that the Mahdi will be born to a male, and the pants will catch the baby etc. I say "widely believed", the Druze are famously secretive and will not confirm or deny most things about their religion. The 'elect' Druze men do wear distinctive baggy trousers with the crotch down around the knees: no one else does.
The Druze are people in the Arabic world: moreover, they are Arabs. They began as an Isma'ili sect, but do not identify as Muslim: they call themselves al-Muwaḥḥidūn, meaning 'the monotheists', or 'unitarians'.
Much closer to correct than not!
My biology teacher in school once tried to teach us that winds were created by God. Not spiritually or something, but that God literally made the wind, I guess.
My “earth sciences” teacher also once tried to argue with me against the universal law of gravitation. (No, she was not referring to Special/General Relativity. She didn’t agree that two objects in a vacuum fall at the same speed regardless of mass.)
They'll also encourage and praise you even when you're heading down the wrong path until you think you've uncovered the secret of the universe or proven that established science was wrong this whole time when really you've just been bullshitting with an engagement bot.
No, they don't really do that anymore, if you use the latest models with reasoning enabled.
Like almost everything else about LLMs, this unfortunate tendency has gotten a lot better recently, which you might not realize if you gave up after getting some lame answers or bogus glazing on the free ChatGPT page a couple of years ago.
Anti-intellectualism is at it again, huh?
Like humans.
I think we should go a little deeper on this idea.
We can all agree that both human "experts" and LLMs can sometimes be right, and sometimes be confidently wrong.
But that doesn't imply that they're equally fit for purpose. It just means that we can't use that simple shortcut to conclude that one is inferior to the other.
So where do we go from here?
I’ve always thought of the definition of “expert” as reliably knowing the difference between what is known, what is speculated but unproven, and what is unknown. People claim expertise in all sorts of things that they aren’t experts in. But true experts should not be wrong. They should qualify levels of certainty. This definition certainly works in the sciences.
The amount of bullshit and blatant lies I’ve heard from my human teachers dwarfs the hallucinations produced by today’s LLMs.
They were a forcing function for skillz and they no longer are. We need new forcing functions for skillz or we will become WALL-E blobs.
Well, they were ostensibly forcing functions... ten years ago everyone was paying the exchange student to do their homework and assignments for them, and that guy was paying his cousin back in his home country, but the whole thing is a bit more efficient now.
We've already had consolidation of education for a while now. Even before all the edutech courses, there were Youtubers educating better than many university professors. 10-15 years ago students were already skipping lectures and just showing up for tests.
In my university education (2007-2011), 80% of the grade was based on exams at the end of each year, with no resits.
> We’ve figured out the human replacement pipeline, it seems, but we haven’t figured out the education part.
No we have not.
>LLMs can be wonderful teachers
Are they, or aren't they?
Mostly, no. They will explain things to you and you'll feel like you understand them. When you have to do it, though, you'll find you're not any better off than when you started.
I used to see this with students in calculus who abused the tutoring resources. They'd have tutors just work problems (often their homework...) in front of them. "Ah! Obviously that trig substitution integral worked that way. Oh, of course, that proof is very obvious in retrospect." And then they'd walk away from the exam with a 30% and no idea how their 20 hours of "study" for it didn't result in the same performance as their peers who worked problems, read the materials and asked questions, etc., got.
Most AI use is the same in my experience. "Show me how the fundamental theorem of calculus works." The LLM puts together a very elaborate and flashy presentation that they skim. Great. That's no different than reading a textbook. Even if you ask the LLM questions and have it elaborate on things, you've never once done one of the most important things a student can do: spend time confused, working hard at understanding something that's not obvious. The LLM will make it obvious at every point. Total lack of friction. Works about as well as a spotter who does the lifting for you.
As usual it depends. When it does well it's because it can do well. When it does poorly it's because you're prompting it wrong.
>When it does well it's because it can do well.
Can't argue with that logic
hammers are both a great tool and a deadly weapon at once
Not at once, surely
limp response brah, both possibilities remain plausible until one crystallizes at the moment of observation
A million times better than any human teacher I’ve ever had, for sure.
Now I’m certain that there exist those mythical human instructors who can do better, but that’s not worth much if 99.99% of people don’t have access to them. Just like a good human physician who takes their time with the patient is better than an LLM, but that’s not worth much either given that this doesn’t match most people’s experience with their own physicians.
Did an LLM teach you a topic you did not feel like learning?
For me the best human teachers were the ones who managed to make me interested in topics that I thought were boring/useless (my opinion often being stupid, mostly due to lack of experience).
So far with LLMs I learn about things I already know something about (at least that they exist) and am interested in, which is a small subset of the things one should learn during a lifetime.
Well, I have some evidence to support your hypothesis. During Covid my kids were at home, eventually with some kind of self-learning website from school. I was upstairs working, checking in on their progress in the parents' app. Finish your daily schoolwork and then you can game.
The kids learnt all about Team Fortress 2, Roblox, Rainbow Six etc. They also learnt how to game the learning system so it looked like they were doing their work.
Post college, are you hiring random teachers to make you excited about random topics or something?
Good point well made.
>A million times better than any human teacher I’ve ever had, for sure.
Not really, not if you want to ask it deep questions. It won't have an answer that is deeper than something that you can find online, and if pressed it will just keep circling around the same response.
The reason is that this "thing" was never curious, never asked questions, and never really learned anything. It has just learned the Internet "by heart", and is as boring as a human teacher who is not really curious about the subject they are teaching, and has just got some degree by "by-hearting" some textbook. Of course it does it much better than a human, but it is fundamentally the same thing.
>Now I’m certain that there exist those mythical human instructors who can do better,
You're certain that mythical instructors exist (?) who "can" do better?
Are human instructors more competent as teachers than AI teachers, or are AI teachers more competent as teachers than human teachers? No "this or that can happen," just a definitive statement please.
AI is likely a million times better student than my dimwit cybersec meatbags...er, majors, for sure, as well! Don't have a reliable way to measure or experience why/how, tho, so I'm not out here claiming it. Even if I did, why would I argue for their replacement?
They can be incredible. One on one teaching with an infinitely patient teacher who can generate interactive problems on the fly, for dollars a month? Wild. A year of paid ChatGPT would pay for about 9 hours of cheap tutoring here.
That's not going to work out the way you think it will when a student won't even know how to ask questions.
"Education is just a CTF for the valuable flag of a credential. In this essay I will --"
Smart people will use LLMs to learn things faster. Education will adapt by doing all assessments in person.
The best frontier LLMs can't solve 4th grade math homework yet. Don't hold your breath on that collapse of education.
(Real mathematics problems, not American-style ""math"".)
Do you have an example of a 4th grade problem in mind that isn't "American-style"?
Education is also figured out. You just need to learn, do and practice for yourself. Telling the agent "to just do it for you" is tempting, but it's not learning. You need to be deliberate when you're trying to actually learn and internalize.
Also, you could spin up your own educational agent with very strict instructions on guiding the user instead of just doing the work. Of course you can always go around it but if you're making an effort to learn, this is a good middle ground.
I started teaching “how to build quality products using LLMs” full time recently, and most of what I teach is literally just the 101s of systems engineering, reliability engineering, product development and project management:
- Exceptional clarity on the problem you have
- Know how to measure the problem you’re solving
- Numerically define what “done” is
- Make a deterministic and fully observable prototype
- Iterate in production with the user
- Expand user base as desired with user iteration in parallel forever
- Etc…
Obviously a lot more in the details and these are all case by case, but these chatbots are basically perfect productivity machines for this process.
The massive caveat to all of this is that it only works for people who can reliably and truthfully define the items above, and who are willing to structure the organization to make those the priorities.
And actually most financial incentives demand the opposite of this process
If most organizations were honest about it, they would simply say “we’re here to make the most money possible and we’re gonna do whatever it takes to do that”
A lot of people don’t like that, so they don’t say it, and come up with other bullshit instead.
Ultimately that’s why I felt like my only option right now is to teach people how to do this because I assumed it was obvious and it is not.