> If the job were mainly about producing syntactically valid code, then of course A.I. would be on a direct path to replacing large parts of the profession. But that was never the highest-value part of the work. The value was always in judgment.
> The valuable engineer is the one who sees the hidden constraint before it causes an outage. The one who notices that the team is solving the wrong problem. The one who reduces a vague debate into crisp tradeoffs. The one who identifies the missing abstraction. The one who can debug reality, not just read code. The one who can create clarity where everyone else sees noise.
How do you think engineers in the second half got there? By writing tons and tons of code to "build those reps" and gain that experience.
The author tries to answer this:
> That process is not optional. It is how engineers acquire and elevate their competency. If early-career engineers use A.I. to remove all struggle from the learning loop, they are hurting their development.
but in a world wherein writing code by hand (the "struggle") is "artisanal" and "outdated", this process being non-optional (which I agree with) is contradictory.
How juniors and fresh grads do that with AI that is designed to give you whatever answer you need in a given moment is unclear to me. I don't see how that's possible, but maybe I'm thinking too myopically.
Myopic is inevitable, to some extent. It's very hard to project this stuff.
Socrates wrote about what was being lost as philosophy was becoming written rather than oral...and he was right.
We can't even understand what was lost. Many methods of learning and thinking became entirely lost. You could say they were redundant, and they were. But... writing largely replaced oral traditions. It didn't just augment them.
He was that old school coder who had the skills to do philosophy and be an intellectual without writing. Writing was an augmentation for him. But for the new cohort... it was a new paradigm and old paradigm skills became absent.
It is very hard to imagine skilled coders becoming skilled without necessity pressing that skill acquisition. The diligent student will acquire some basic "manual coding" skill... but mostly the skill development will be wherever the hard work is.
> Socrates wrote about what was being lost…
Dr. Steven Skultety & Dr. Gad Saad discussed this in a recent video / podcast.
This link is time stamped to the topic https://youtu.be/7mcQf9E3YRo?t=1058
Socrates never wrote anything. At least, not as far as we know.
It's the opening page of the book Technopoly.
And here I thought I was being unique. I guess Socrates must be popular.
I'd say that by purging stuff from the brain we are losing thinking itself. Thinking is manipulating ideas and concepts in your head, assembling and linking them. The fewer things there are, the more primitive the result. You cannot juggle without objects to juggle; connecting the dots results in trivial patterns when you have just a couple of dots.
It just becomes more abstracted, but the thinking is still there. And who is to say we aren't going to keep reading books, delving into hobbies, or watching movies. All those concepts will then be mixed into our brains, and who knows what new things we will think of to extract and desire to build with AI.
I think we'll continue to read books and stuff. But many books/movies will probably have devolved into AI slop (not that this hasn't been a trend for the last few decades to a lot of film buffs).
But hobbies like woodworking or playing an instrument seem immune to slop... though people can be creative about what they manage to sloppify.
It's true of all automation: we do get more comfort. We build systems so that we humans have as little struggle as possible, not realising that struggle is the only reason for existence. By eliminating it, we are erasing ourselves from this world.
Automation is also for reducing drudgery - the work that prevents us from meaningful struggle by taking up resources that can be better applied elsewhere. Not all struggle (or pain) is created equal.
I wouldn’t count on reduced drudgery. The assembly line automated many movements needed for manufacturing. But which work involved more drudgery—craftsman-style car production or standing on an assembly line at Ford?
With any new technology, subsequent drudgery depends on the technology, its concomitant economics, and the imagination of the people using it.
The craftsman didn't move to the assembly line.
This kind of argument flies in the face of the fact that plenty of inherited rich people seem to lead very happy lives. Of course, they do find things to struggle with, but it's much more pleasant to struggle to score 72 at the golf course or to outbid a rival for a piece of contemporary art than to struggle for basic needs.
I don’t share your idea of a happy life.
I can live a happy life without struggling for basic needs and without playing golf all day long. If you strip off every obligation from life, then you exist, not live.
Facing challenges and overcoming obstacles, plus friends and family, is what makes me happy. When you’re rich, most people only care about your money, not the person you are. And I think that’s exactly what a happy life is about.
I guess to each their own. But in the little free time I have as the non-rich version of myself, I like to face low-stakes challenges I choose myself; in my case those are currently mostly learning Chinese and learning to play a musical instrument. Those still provide obstacles, difficulties, the feeling of progress, and moments of success/failure, but I can do them at my own pace and with no serious consequences if I fail.
I can imagine I could be perfectly happy with a life full of challenges of that kind, instead of being forced to work at given scheduled times which often imply I spend less time with my son than I would like, including days I don't feel like it, and including boring tasks (I love my job, but like almost every job, it also has its paperwork, pointless meetings, etc.), knowing I depend on that work to live.
In short, I think we all do need the challenge, the struggle, the successes and the failures, otherwise life would just be boring and pointless. But I don't think we (or at least I) need the obligation component and the high stakes.
What you mention about the rich attracting people focused on money rings true, but it would be moot if AI led us all to lead lives more similar to the rich, which was the point here. (Of course, there's also the issue of whether there is widespread or unequal access to AI, but that's another story...).
It's fairly easy to be submarine rich and fly completely below the radar. Just brush off questions about your work with vagueness. If you're not flashy, nobody will suspect you're rich.
"struggle is the only reason for existence"
That is a bold and frankly unsupportable claim.
Humans don’t tend towards idle quiescence.
We seem to be insatiably inquisitive.
Curiosity doth struggle many cats.
Being inquisitive doesn't equate to loving, or needing, struggle in my brain. Also, struggle differs for many people. Running a half marathon was a struggle for me, but I can't compare it to a family who is struggling to pay bills.
If we take Maslow's hierarchy of needs, me running a half marathon is self-actualization, something I'm privileged to be able to do. A family struggling to put food on the table is still on the lower tiers of the pyramid.
Yes, I tend to agree.
A lot of parallels between your statement and Socrates' comments on the transition to writing.
Interestingly, he placed a lot of importance on memory... where you emphasize manipulation of concepts.
I’ve grown to appreciate this aspect of standard examination as I’ve gotten older. Everyone wants to say “oh, you can just look it up now”, but how can you come up with higher level thinking, when you don’t have the fundamentals in your mind?
To use math as an example, you can always look up formulas. But after more than 1 "layer" of looking up, that quickly becomes impossible. Like, when I had to learn to calculate derivatives and primitives, I could look those things up. But when I got to linear algebra, I couldn't progress until I deeply internalized derivatives and primitives, because looking up formula A only for it to contain unknown formula B just becomes a mess.
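To make that layering concrete, here is a small worked example of my own (nothing from the original comment, just an illustration): even a routine integration by parts quietly leans on derivative facts you must already have internalized.

    \int x \cos x \, dx = x \sin x - \int \sin x \, dx = x \sin x + \cos x + C

Each step silently uses d/dx[sin x] = cos x and d/dx[x] = 1. If you had to look those up too, you'd be chasing formula B inside formula A, which is exactly the mess I mean.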
I "purge" - or better yet choose not to retain - the data.
BUT, BUT! I keep the index.
My favourite quote from Donald Rumsfeld (a very bad human being, but this is still good):
> Reports that say that something hasn't happened are always interesting to me, because as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns—the ones we don't know we don't know. And if one looks throughout the history of our country and other free countries, it is the latter category that tends to be the difficult ones.
What I optimise for is to have as many "known unknowns" as possible. I know a concept, process or a tool exists, but don't understand it or know how to do it. But because I know it exists, I won't start inventing it again from scratch when I need it.
Like if one needs to do some esoteric task, they might start figuring it out from scratch. But because the index in my brain contains a link ("known unknown") to a tool/process that makes that specific thing a LOT easier, I can start looking into it more.
Or I might need to do something common like plumbing or some electrical work at home. Do I know how to do that? No. But I Know A Guy I can call, again externalising the knowledge. Either they come over and help me do it or talk me through the process of adjusting the thermostat in my shower faucet (you need to use WAY more force than I was comfortable with without an expert on the phone btw... there are no hidden screws, you just rip the bits off :D)
> I'd say that by purging stuff from the brain we are losing thinking itself
The idea that there will be less to think about seems a bit short-sighted. Humans are very good at moving to higher levels of abstraction, often with more complexity to deal with, not less.
We will never fundamentally get rid of thinking; it's coupled to navigating the 3D reality we live in.
And we don't need words to think; cognitive problem solving and language processing are separate processes [1]
We will shift the problems we need to think about. Same as always; humanity isn't still solving how to build stone pyramids. Did we stop thinking? No, we just think about a different to-do list.
[1] https://www.scientificamerican.com/article/you-dont-need-wor...
We also never run out of fuel. There will always be some energy left here and there to tap into.
Fuck thinking!
If I am free as “rational I,” then the rational in me, or reason, is free; and this freedom of reason, or freedom of the thought, was the ideal of the Christian world from of old. They wanted to make thinking – and, as aforesaid, faith is also thinking, as thinking is faith – free; the thinkers, the believers as well as the rational, were to be free; for the rest freedom was impossible. But the freedom of thinkers is the “freedom of the children of God,” and at the same time the most merciless – hierarchy or dominion of the thought; for I succumb to the thought. If thoughts are free, I am their slave; I have no power over them, and am dominated by them. But I want to have the thought, want to be full of thoughts, but at the same time I want to be thoughtless, and, instead of freedom of thought, I preserve for myself thoughtlessness. If the point is to have myself understood and to make communications, then assuredly I can make use only of human means, which are at my command because I am at the same time man. And really I have thoughts only as man; as I, I am at the same time thoughtless. He who cannot get rid of a thought is so far only man, is a thrall of language, this human institution, this treasury of human thoughts. Language or “the word” tyrannizes hardest over us, because it brings up against us a whole army of fixed ideas. Just observe yourself in the act of reflection, right now, and you will find how you make progress only by becoming thoughtless and speechless every moment. You are not thoughtless and speechless merely in (say) sleep, but even in the deepest reflection; yes, precisely then most so. And only by this thoughtlessness, this unrecognized “freedom of thought” or freedom from the thought, are you your own. Only from it do you arrive at putting language to use as your property. If thinking is not my thinking, it is merely a spun-out thought; it is slave work, or the work of a “servant obeying at the word.” For not a thought, but I, am the beginning for my thinking, and therefore I am its goal too, even as its whole course is only a course of my self-enjoyment; for absolute or free thinking, on the other hand, thinking itself is the beginning, and it plagues itself with propounding this beginning as the extremest “abstraction” (such as being). This very abstraction, or this thought, is then spun out further.
- The Ego and Its Own, Max Stirner
Yeah, but here's where the comparison with philosophy falls short: if we lost some ways of thinking, it was gradual and most didn't notice.
Software code, on the other hand, is extremely formal: either it works as intended, it works crappily and keeps breaking in various edge cases, or it just doesn't work (the last two are just variants of the same dysfunction; technically it's a binary state). There is no scenario where broken code somehow ends up working and delivering, or maybe one in a trillion, sometimes.
Also, the change is so fast that the failure is immediately obvious to everybody; it's not a gradual change of thinking over a few decades/generations.
LLMs are getting impressive, but anybody claiming there is no massive long-term harm to reaching what we now call proper seniority is... I don't know, delusional, a junior who never walked that long and hard-won path, doing PR for LLMs at all costs, or some other similar type. Or they simply have some narrow use case that works great for them long term and definitely can't be transferred to the whole industry, like a one-man indie game dev.
I would argue it's virtually impossible going forward for a junior engineer to run that harder path.
Because the easier path seemingly delivers what's expected of them. Sigh, they may even be required to take the faster path.
I've seen many juniors unable to walk that necessary path before LLMs were a thing.
Socrates was history's first Luddite. He opened Pandora's box. I wish he and Plato would be radically rejected as the garbage trash they are (basically just a defense of hierarchy and dialectics).
Quoting my boy Max Stirner who also fking hated these guys
“This war is opened by Socrates, and not until the dying day of the old world does it end in peace.” - The Ego and Its Own, Max Stirner
> but in a world wherein writing code by hand (the "struggle") is "artisanal" and "outdated", this process being non-optional (which I agree with) is contradictory.
> How juniors and fresh grads do that with AI that is designed to give you whatever answer you need in a given moment is unclear to me. I don't see how that's possible, but maybe I'm thinking too myopically.
The contradiction is resolved by your employer pushing professional development into "your own time."
And they'll do that by being totally stupid and unaware: they'll push you to maximally use AI tools, but judge you for the skill deficits those tools create.
You aren't thinking myopically; it's a fundamental contradiction, the root of which is how human brains take in and understand new information. No amount of pontification or bollocks hedging, as this and all other "thinkpieces" on the issue do, will change that. It is beyond preference and perspective. There is only doing the very task that produces the skills pertaining to that task. Prompting alone, or even predominantly, is too far from that task. It can only write the code.
> How do you think engineers in the second half got there? By writing tons and tons of code to "build those reps" and gain that experience.
It's not by writing syntax that you get there. It's by creating software and gaining the experience of seeing it go wrong.
"Good judgement comes from experience. Experience comes from bad judgement."
AI just shortens the cycle without needing to type out syntax, so you get even more iterations, faster, and learn the lessons more quickly.
Some do not learn from that experience. They were never going to learn without AI either.
> It's not by writing syntax that you get there.
Writing syntax is still an important part of the experience. It is valuable because it requires you to spend time immersed in the nuts and bolts that hold software together. I'd compare it to cooking: if you have an assistant or a machine do everything and you never actually touch a knife or stir a pot, you'll lose your touch. But there is also something valuable about covering more ground and the additional experience that brings.
Totally! I mean, the same could be said of painstakingly hand coding assembly language - that today's developers haven't done so is what leads us to bloated electron apps, so there is something lost!
But the larger scale system design is stronger than ever. Today, distributed systems, version control, including branching, stacked PRs etc., VMs/containers, idempotency, multimaster ACID databases, all of these things were probably never achievable in the world when the best devs had to spend their time poring over assembly language every day. Losing that skill allowed them more time to build other ones!
You can lead a horse to water, but you can't make it drink.
I don't understand why software engineers insist on keeping the craftsmanship aspect of writing code. Compare it to other engineering disciplines, like civil engineering. Engineering was never about going into the field yourself to build things with your own hands. You can become a great civil engineer without yourself building bridges that fail. To me it doesn't matter whether the thing I design is built with a crane or with AI. I can design quality control processes too, to ensure the thing is built up to standard; I don't have to build the thing myself to be sure. There is nothing wrong with artisanal code crafting, I appreciate it too, but professionally that's not engineering. It seems AI is just forcing us to clear up the confusion the hard way.
There's a false equivalency between software engineering and civil engineering here, in my opinion. I would argue that the craftsmanship SWEs see in their work stems from a necessity to be novel in order to truly make something worth putting out into the market. "Oh, you're making an app that tracks heartrate/makes music/provides driving directions? Why wouldn't the user just use <insert 'X' market-leading app>?" There's no real merit to making clones, whereas in civil engineering (I would argue) this is the bread and butter. You can't copy and paste a bridge. There's a physicality to it that says "okay, make another bridge similar to this but now for that gap", so the challenge becomes making the necessary repetition more efficient, and it's "fine" if no one is going out of their way to be an "artisanal civil engineer".
Combine this argument with the fact that LLMs are reliant on what information they've ingested; they'll only give you responses based on what already exists. The creativity needed to make something (worth making) is missing there. You'd hope that the humans using the AI fill that role, but comments like this one and others lauding praises on AI and vibe-coding give me serious doubt. We could argue instead that SWE is a misnomer for this field, but that's a separate conversation.
In my opinion, SWE should prioritize true innovation, which AI isn't designed for. (IMO, AI is better suited for fast info lookup than for key production tasks.) Without creativity in SWE, the industry bloats into an unsustainable mass of cloned/useless apps and startups just hoping to be eaten (bought) by a bigger fish, with investors/customers repeatedly being promised "something better is right around the corner!"... and it just never comes, and the whole thing collapses on itself.
> I would argue that the craftsmanship SWEs see in their work stems from a necessity to be novel in order to truly make something worth putting out into the market. ... There's no real merit to making clones, whereas in civil engineering (I would argue) this is the bread and butter. You can't copy and paste a bridge. There's a physicality to it that says "okay, make another bridge similar to this but now for that gap", so the challenge becomes making the necessary repetition more efficient, and it's "fine" if no one is going out of their way to be an "artisanal civil engineer".
This is a key insight that invalidates a lot of the manufacturing thought that infects software development. Manufacturing (in large part) is about making copies, better and cheaper. But with software, you can create perfect copies for free. A "software factory" makes no sense, there's a fundamental paradigm mismatch.
Software is the blueprint. Can you really become a decent civil engineer if you never created a blueprint for anything?
Will large scale construction projects ever be started with AI made blueprints?
There's probably more to the whole engineering discipline, soft- and hardware, than you give it credit for here.
Because many aren't software engineers, they are brick layers.
To be comparable, they would have to go through the same kind of university degree and professional certification, instead of doing a JavaScript training and calling themselves software engineers instead of coders.
They are getting the blueprints from architects and senior devs, and putting those bricks into place, and carrying buckets.
Because software engineering learning is 99% BOTTOM-UP...
and that's because SE education FAILED BADLY... almost nothing of what's useful is taught in schools, and nothing of what's taught is useful
instead of FIXING education and theory, software engineering marched on forcefully without it
now we need to go back and properly fix education, because an intern should absolutely be required to have the "advanced" skills that we imagine in our deluded minds only "10+ years of industry experience" can confer, and that are absolutely required to be even a junior AI-augmented SE
SE/CS education should be rethought from scratch to distill, purify, and teach in 3 years max the concepts that used to be acquired through 10-30 years of experience. It 100% CAN be done, and we should wake tf up and DO IT instead of complaining about it. "Advanced enterprise systems" architecture requires nothing more than mid-high-school math and can be taught on simulated systems in semester 1 of year 1; it's just that some of the "teachers" would have to actually put in the 80-hour weeks of work to do it in due time.
Yes, let's build a bridge with AI
I think the analogy (and it is not to be taken literally) is that of "commoditized processes".
Nowadays we don't build bridges to suit the site, we choose sites to accommodate bridges that we basically build identically via a few designs.
Connecting back to software: AI can do the standard stuff OK as long as you test around the outside of it, so you might want to hone your judgment about how you build systems so they use the stuff AI can do well, vs. "building for the site". The gains are productivity. The losses are efficiency (the problem must go through some extra steps to meet the process where it works). Same as any engineering problem at scale.
You learn by struggling and slogging through; even as a senior, if your shit breaks it's on you to understand why. No LLM will shortcut that process for you (even asking LLMs why something is wrong requires you to actually understand it eventually, aka LEARNING). How that happens is up to the person.
I don't understand all this fear projected as if people won't have agency over learning just because LLMs make it easier to do certain things. I don't think it's contradictory at all. Half the people here will never have to wrangle the bullshit I dealt with 20 years ago, and I'm sure when I was dealing with it there was another 20 years of bullshit before me lol.
If you vibe code your app with no regard for the underlying code, you will pay the price for it at some point in the future; anybody worth their salt will slow down enough to figure it out the "artisanal" way.
I'd argue that the engineers of 20 years ago were better than the engineers of today because they were significantly more resource-constrained and, for example, would never use a 300 MB JavaScript library for a profile page.
John Carmack did praise restraint of resources when he recalled his early days working as a lone contractor and as an employee of Softdisk, when he and the team had to push out games on a very tight schedule.
I think this extends to other parts of life, too. I still remember fondly playing a game over and over again back in high school, when I did not have the Internet and had to borrow CDs from my friends — but when I went to university and had access to pretty much every game freely on the intranet, I rarely did that anymore. That's why I always think an abundance of X may not be the best option for me. That probably includes money, too.
I never buy these examples. Being a good engineer is more than purely resource optimization. I can think of many times over my career where resource optimization mattered but it’s not always a valuable undertaking.
As a percentage of good to mediocre, maybe. Engineers of 40 years ago were probably better than engineers of 20 years ago. There were fewer of them, and they had more constraints to deal with. Democratization of technology makes it easier for more people to use. That applies to programming as much as to just using a computer.
20 years ago we were complaining about Steam being bloated and unnecessary, we were 6 months off Vista being a bloated mess, and the Office Ribbon debacle was in full swing. PC games were often half-baked console ports with atrocious performance and filled with game-breaking bugs. Software was super rigid; there was no real cross-platform support. We were just heading into the Core 2 Duo era and it was a mess.
Engineers sucked then as much as they suck now
Understanding something and learning something are not the same things.
Nobody said they were; they are related. If you don't understand why something is behaving a certain way, you need to learn.
Almost none of my operational knowledge came from writing code, but a lot sure came from reading code in the debugging process.
This has happened in other industries before. Drafting, for example, when CAD arrived. Entry level wasn't "can draw, willing to learn" anymore, but demanded high domain understanding. So the pathway became compressed learning through study and field exposure.
Study of senior drafters' "red lines": what they changed in the initial drawing and why, RFI responses, etc. Reverse engineering good work. Failed design studies, etc.
SWE equivalents: PRs, code review, studying high quality codebases (guess what: LLMs are amazing at helping here), pair programming (learning why what the LLM did was wrong, how to improve it, etc), customer support, debugging prod incidents, studying post mortems etc
We don't hire juniors and throw them boilerplate and tiny bugs while expecting them to learn along the way ad hoc through some pair programming and the occasional deep end. We give them specific tasks and studies that develop their domain understanding and taste, actively support and mentor them, and expect them to drive some LLMs on the side to solve simple issues that still need human eyes on it.
> We don't hire juniors and throw them boilerplate and tiny bugs while expecting them to learn along the way ad hoc through some pair programming and the occasional deep end.
Is that generally the case though? I'm about two years into my first job in the industry and that's exactly my experience, and certainly frustrating...
> How do you think engineers in the second half got there? By writing tons and tons of code to "build those reps" and gain that experience.
Well this is true, but that doesn't mean that there isn't any other way to acquire this knowledge. Until now, this way of gaining deeper understanding was simply the most practical one, since you needed to write lots of code when starting out as a software engineer.
But it's just as well possible to gain knowledge about useful abstractions and clean code by using AI to do the work. You'll find out after a while which codebases get you stuck and which code abstractions leverage your AI because it needs fewer tokens to read and extend your codebase.
One thing worth mentioning is that even before AI only some small subset of engineers have experienced building systems from scratch or inventing new ways of doing things or root causing complex problems or even writing a lot of code. Most software engineering is maintenance or mundane or not productive.
Even in a world where there's a lot of AI generated code there can still be people that have enough exposure to doing hard things. Definitely at this point in time where AI can't really do all those hard things anyways - but even after it'll be able to.
You don't need to build systems from scratch to acquire problem-solving skills. Even routine maintenance problems require you to dig into documentation, look at GitHub issues, and do root-cause analysis. These skills are eroded by reliance on AI, and there is no fallback if you never acquired them in the first place.
> I don't see how that's possible, but maybe I'm thinking too myopically.
you are thinking too myopically.
We have people who can still do maths well after the introduction of the calculator. We have people who can still spell after the introduction of spell check.
The junior only needs to train without using AI to gain the skills needed - that's called education. If they choose to rely solely on AI, and gimp their own education, that's on them.
> We have people who can still do maths well after the introduction of the calculator.
I assume by "do maths" you mean doing simple calculations, like adding a bunch of small numbers, in one's head. That's because in many situations it's more convenient to do so, than using a calculator. So the skill is preserved / practiced, because a calculator is too cumbersome to use. The skills of most people settle at the equilibrium where it takes the same effort to take out the calculator and focus on typing, as it would to strain the brain doing it without a calculator.
> We have people who can still spell after the introduction of spell check.
When using spell check to fix your document, you automatically learn to spell. Your skills improve by using the tool. A better analogy to AI would be an email client with a "Fix all and send"-button, where you never look at the output of the spell checker.
I would also argue that most school systems forbid the usage of a calculator for the first couple of years (at least that's how it was in Germany a few decades ago). The same goes for writing by hand. You can spell check by looking the word up and then manually correcting it.
Both require manual "labor" which leads to learning.
And calculators took decades to become widespread. So we could learn of their side effects before they became mainstream.
Also worth noting: calculators merely solve intermediary steps. LLMs are increasingly designed to one-shot a full piece of work: longer context, deep thinking, agentic loops.
No. These tools are very good at creating illusion of learning, without any learning. When you watch them do stuff, you think, yeah I got this. Once they are gone, you realize all your supposed skill is gone too. Getting a skill requires deliberate practice. You can use AI for that, but just using AI is not that.
Why no? It sounds like you agree with the person you replied to
There's an old Latin proverb "Scribere bis legere", which translates to "writing is reading twice".
In practice, what this means is that you can read some subject many times, but you would still struggle to reproduce the content by yourself. That is why, when learning, it is not sufficient to just read the material several times.
Those are inappropriate examples because they are all deterministic. The whole reason behind the AI movement is the move from deterministic processes, and exact descriptions, to handwavy descriptions and stochastic processes.
Of course there are people who do maths after the introduction of the calculator. Just like there are more people who program after the introduction of the electronic computer.
Why is it always so consistently a comparison to a technology of a fundamentally different order? Perhaps what has been lost is the ability to recognise distinct and incommensurable categories.
Yes but currently I don't know of a single company in my area that doesn't make you use AI daily because of the supposedly increased productivity. That means that juniors also absolutely have to use AI, probably sabotaging their learning process in the long run.
> We have people who can still do maths well after the introduction of the calculator.
Arithmetic is a very, very small subset of math.
AI has not yet fully aligned with human thinking, but some people create euphoria that it is already surpassing human thinking. Only after alignment and surpassing could AI think with an outside-in view; for now it is still inside-out.