Relevant post by Kent Beck from 12th Dec 2025: The Bet On Juniors Just Got Better https://tidyfirst.substack.com/p/the-bet-on-juniors-just-got...

> The juniors working this way compress their ramp dramatically. Tasks that used to take days take hours. Not because the AI does the work, but because the AI collapses the search space. Instead of spending three hours figuring out which API to use, they spend twenty minutes evaluating options the AI surfaced. The time freed this way isn’t invested in another unprofitable feature, though, it’s invested in learning. [...]

> If you’re an engineering manager thinking about hiring: The junior bet has gotten better. Not because juniors have changed, but because the genie, used well, accelerates learning.

Isn't the struggling with docs and learning how and where to find the answers part of the learning process?

I would argue a machine that short-circuits the process of getting stuck in obtuse documentation is actually harmful long term...

Isn't the struggle of sifting through a labyrinth of physical books and learning how and where to find the right answers part of the learning process?

I would argue a machine that short-circuits the process of getting stuck in obtuse books is actually harmful long term...

It may well be. Books have tons of useful expository material that you may not find in docs. A library has related books sitting in close proximity to one another. I don't know how many times I've gone to a library looking for one thing but ended up finding something much more interesting. Or I've just gone to the library with no end goal in mind...

Speaking as a junior, I’m happy to do this on my own (and do!).

Conversations like this are always well-intentioned, and friction truly is super useful to learning. But the ‘…’ in these conversations always seems to be implying that we should inject friction.

There’s no need. I have peers who aren’t interested in learning at all. Adding friction to their process doesn’t force them to learn. Meanwhile adding friction to the process of my buddies who are avidly researching just sucks.

If your junior isn’t learning it likely has more to do with them just not being interested (which, hey, I get it) than some flaw in your process.

Start asking prospective hires what their favorite books are. It’s the easiest way to find folks who care.

I’ll also make the observation that the extra time spent is very valuable if your sole objective is learning, but often the Business™ needs something working ASAP.

It's not that friction is always good for learning either, though. If you've ever prepared course materials, you know that it's important to reduce friction in the irrelevant parts, so that students don't get distracted and demotivated, and time and energy are spent on what they need to learn.

So in principle Gen AI could accelerate learning with deliberate use, but it's hard for the instructor to guide that, especially for less motivated students.

You're reading a lot into my ellipsis that isn't there. :-)

Please read it as: "who knows what you'll find if you stop by the library and just browse!"

I admire your attitude and the clarity of your thought.

It’s not as if today’s juniors won’t have their own hairy situations to struggle through, and I bet those struggles will be where they learn too. The problem space will present struggles enough: where’s the virtue in imposing them artificially?

This should be possible online; it would be, if more journals were open access.

Disagree, actually. Having spent a lot of time publishing papers in those very journals, I can tell you that just browsing a journal is much less conducive to discovering a new area to dive into than going to a library and reading a book. IME, books tend to synthesize and collect important results and present them in an understandable (pedagogical?!) way that most journals do not, especially considering that many papers (nowadays) are written primarily to build people's tenure packets and secure grant funding. Older papers aren't quite so bad this way (say, pre-2000).

I've done professional ghostreading for published nonfiction authors. Many such titles are literally a synthesis of x-number of published papers and books. It is all an industry of sorts.

I think I don’t disagree. Only, it would at least be easier to trace the research concept you are interested in back to a nice '70s paper or a textbook.

> It may well be. Books have tons of useful expository material that you may not find in docs

Books often have the "scam trap" problem: highly regarded/praised books are frequently only useful if you are already familiar with the topic.

For example: I fell for the scam of buying "Advanced Programming in the UNIX Environment", and a lot of concepts are only shown but not explained. Wasted money, really. It's one of those books I regret not pirating before buying.

At the end of the day, watching some YouTube video and then referencing the OS-specific manpage is worth much more than reading that book.

I suspect the case to be the same for other "highly-praised" books as well.

You could make much the same observation about online search results.

When I first opened QBasic, <N> years ago, when I was a wee lad, the online QBasic help didn't replace my trusty QBasic book (it supplemented it, maybe), nor did it write the programs for me. It was just there, doing nothing, waiting for me to press F1.

AI, on the other hand...

I couldn't make head nor tail of the QBasic help back in the day. I wanted to. I remember reading the sections about integers and booleans and trying to make sense of them. I think I did manage to figure out how to use subroutines eventually, but it took quite a lot of time and frustration. I wish I'd had a book... or a deeper programming class. The one I had never went further than loops. No arrays, etc.

</resurgent-childhood-trauma>

You posted this in jest but it's literally true. You need to read the whole book to get the context. You SHOULD be reading the manuals and the docs. They weren't written because they're fun.

I'm not sure what you are trying to say here, or if you are trying to diminish my statement by claiming that online documentation causes the same magnitude of harm as using a book.

Two things:

1 - I agree with you. A good printed resource is incredibly valuable and should be perfectly valid in this day and age.

2 - many resources are not in print, e.g. API docs, so I'm not sure how books are supposed to help here.

It’s an interesting question, isn’t it? There are obvious benefits to being able to find information quickly and precisely. However, the search becomes much narrower, and what must inevitably result is a homogeneity of outcomes.

Eventually we will have to somehow convince AI of new and better ways of doing things. It’ll be propaganda campaigns waged by humans to convince God to deploy new instructions to her children.

> inevitably result is a homogeneity of outcomes

And this outcome will be obvious very quickly to most observers, won't it? So the magic will come either from pushing AI beyond another limit, or from people going back to specializing in what will eventually become boring and procedural until AI catches up.

Well, yes -- this is why I still sit down and read the damn books. The machine is useful to refresh my memory.

learning to learn

I recall similar arguments being made against search engines: People who had built up a library of internal knowledge about where and how to find things didn't like that it had become so easy to search for resources.

The arguments were similar, too: What will you do if Google goes down? What if Google gives the wrong answer? What if you become dependent on Google? Yet I'm willing to bet that everyone reading this uses search engines as a tool to find what they need quickly on a daily basis.

I argue that there is a strong, strong benefit to reading the docs: you often pick up additional context and details that would be missing in a summary.

Microsoft docs are a really good example of this: just looking through the ToC on the left usually exposes me to some capability or feature of the tooling that 1) I was not previously aware of and 2) I was not explicitly searching for.

The point is that the path to a singular answer can often include discovery of unrelated insight along the way. When you only get the answer to what you are asking, you lose that process of organic discovery of the broader surface area of the tooling or platform you are operating in.

I would liken AI search/summaries to visiting only the well-known, touristy spots. Sure, you can get shuttled to that restaurant or that spot that everyone visits and posts on socials, but in traveling that way, you will miss all of the other amazing food, shops, and sights along the way that you might encounter by walking instead. Reading the docs is more like exploring the random nooks and crannies and finding experiences you weren't expecting and ultimately knowing more about the place you visited than if you had only visited the major tourist destinations.

As a senior dev, I generally have a good idea of what to ask for because I have built many systems and learned many things along the way. A junior dev? They may not know what to ask for and therefore may never discover those "detours" that would yield additional insights to tuck into the manifolds of their brains for future reference. For the junior dev, it's like the only trip they will experience is one where they just go to the well-known tourist traps instead of exploring and discovering.

I have been online since 1993 on Usenet. That was definitely not a widespread belief. We thought DejaNews was a godsend.

It's possible those arguments are correct. I wouldn't give up Google and SO, but I suspect I was learning faster when my first stop was K&R or a man page. There's a lot of benefit in building your own library of knowledge instead of cribbing from someone else's.

Of course no-one's stopping a junior from doing it the old way, but no-one's teaching them they can, either.


No, trying stuff out is the valuable process. How I search for information changed (dramatically) in the last 20 years I've been programming. My intuition about how programs work is still relevant - you'll still see graybeards saying "there's a paper from 70s talking about that" for every "new" fad in programming, and they are usually right.

So if AI gets you iterating faster and testing your assumptions/hypotheses, I would say that's a net win. If you're just begging it to solve the problem for you with different wording - then yeah, you are reducing yourself to a shitty LLM proxy.

The naturally curious will remain naturally curious and be rewarded for it, everyone else will always take the shortest path offered to complete the task.

> The naturally curious will remain naturally curious and be rewarded for it

Maybe. The naturally curious will also typically be slower to arrive at a solution due to their curiosity and interest in making certain they have all the facts.

If everyone else is racing ahead, will the slowpokes be rewarded for their comprehension or punished for their poor metrics?

> If everyone else is racing ahead, will the slowpokes be rewarded for their comprehension or punished for their poor metrics?

It's always possible to go slower (with diminishing benefits).

Or, putting it in terms of benefits and risks/costs: I think it's fair to treat "fast with shallow understanding" and "slower but deeper understanding" as different ends of some continuum.

I think what's preferable somewhat depends on context & attitude of "what's the cost of making a mistake?". If making a mistake is expensive, surely it's better to take an approach which has more comprehensive understanding. If mistakes are cheap, surely faster iteration time is better.

The impact of LLM tools? They raise both ends: it's quicker to build a comprehensive understanding by making use of LLM tools, similar to how autocompletion or high-level programming languages can speed up development.


> learning how and where to find the answers part of the learning process?

Yes. And now you can ask the AI where the docs are.

The struggling is not the goal. And rest assured there are plenty of other things to struggle with.

The thing is, you need both. You need periods where you are reading through the docs, learning random things, and just expanding your knowledge, but the time to do that is not when you are trying to work out how to get a string into the right byte format and saved in the database as a blob (or whatever it is). Documentation has always had lots of different uses, and the one that gets you answers to direct questions has improved a bit, but it's not really reliable yet, so you are still going to have to check it.
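
(For a task like that, you mostly just want the answer. A rough sketch of what I mean, using sqlite3 and a made-up table name purely for illustration:)

    # Encode a string to bytes and store it as a BLOB, then read it back.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE notes (id INTEGER PRIMARY KEY, body BLOB)")
    payload = "héllo, wörld".encode("utf-8")                  # str -> bytes
    conn.execute("INSERT INTO notes (body) VALUES (?)", (payload,))
    (blob,) = conn.execute("SELECT body FROM notes").fetchone()
    print(blob.decode("utf-8"))                               # bytes -> str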

I think if this were true, then individualized mastery learning wouldn't prove to be so effective:

https://en.wikipedia.org/wiki/Mastery_learning

Except none of us have a master teaching and verifying our knowledge on how to use a library. And AI doesn’t do that either.

The problem isn't that AI makes obtuse documentation usable. It's that it makes good documentation unread.

There's a lot of good documentation where you learn more about the context of how or why something is done a certain way.

The best part is when the AI just makes up the docs.

It really depends on what's being learned. For example, take writing scripts based on the AWS SDK. The API documentation is gigantic (and poorly designed, as it takes ages to load the documentation for each entry), and one uses only a tiny fraction of the APIs. I don't find "learning to find the right APIs" valuable knowledge; rather, I find "learning to design a (small) program/script starting from a basic example" valuable, since I waste less time on menial tasks (i.e., textual search).
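
To illustrate what I mean by a basic example, here's a hypothetical boto3 snippet; the value is the overall shape of the script, not memorizing which API call it happens to use:

    # Minimal AWS SDK for Python (boto3) script: list S3 buckets.
    # Assumes AWS credentials are already configured in the environment.
    import boto3

    s3 = boto3.client("s3")
    for bucket in s3.list_buckets()["Buckets"]:
        print(bucket["Name"], bucket["CreationDate"])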

> It really depends on what's being learned.

There's also a difference between using it to find information and delegating executive function to it.

I'm afraid there will be a portion of workers who lean heavily on "Now what do I do next, Robot Soulmate?"

No :)

Any task has “core difficulty” and “incidental difficulty”. Struggling with docs is incidental difficulty; it’s a tax on energy and focus.

Your argument is an argument against the use of Google or StackOverflow.

Not really. There’s a pattern to reading docs, just like there’s a pattern to reading code. Once you’ve grasped it, your speed increases a lot. The slowness a junior has comes from a lack of understanding.

Complaining about docs is like complaining that research articles aren't written like elementary school textbooks.

If the docs are poorly written, then you're not learning anything except how to control frustration.

Struggling with poorly organized docs seems entirely like incidental complexity to me. Good learning resources can be both faster and better pedagogically. (How good today's LLM-based chat tools are is a totally separate question.)

Nobody said anything about poorly organized docs. Reading well structured and organized complex material is immensely difficult. Anyone who’s read Hegel can attest to that.

And yet I wouldn’t trust a single word coming out of the mouth of someone who couldn’t understand Hegel so they read an AI summary instead.

There is value in struggling through difficult things.

Why?

If you can just get to the answer immediately, what’s the value of the struggle?

Research isn’t time spent coding, so it’s not making the developer less familiar with the codebase she’s responsible for, which is the usual worry with AI.

Disagree. While documentation is often out of date, the threshold for maintaining it properly has been lowered, so your team should be doing everything it can to surface effective docs to devs and AIs looking for them. This, in turn, also lowers the barrier to writing good docs since your team's exposure to good docs increases.

If you read great books all the time, you will find yourself more skilled at identifying good versus bad writing.

Feel free to waste your time sifting through a dozen wrong answers. Meanwhile the rest of us can get the answers, absorb the right information quickly, then move on to solving more problems.

And you will have learned nothing in the process. Congratulations, you are now behind your peer who "wasted his time" but actually knows stuff which he can lean on in the future.

This is a wrong take. People learn plenty while using AI; it's how you use it that matters. The same issue existed years ago if you just copied Stack Overflow answers without understanding what you were doing.

It's no different now; just the level of effort required to get code to copy is lower.

Whenever I use AI, I sit and read and understand every line before pushing. It's not hard. I learn more.

Yes, it is. And yes, it absolutely is harmful.

1965: learning how to punch your own punch cards is part of the learning process

1995: struggling with docs and learning how and where to find the answers is part of the learning process

2005: struggling with stackoverflow and learning how to find answers to questions that others have asked before quickly is part of the learning process

2015: using search to find answers is part of the learning process

2025: using AI to get answers is part of the learning process

...

This is both anachronistic and wrong.

To the extent that learning to punch your own punch cards was useful, it was because you needed to understand the kinds of failures that would occur if the punch cards weren't punched properly. However, this was never really a big part of programming, and often it was off-loaded to people other than the programmers.

In 1995, most of the struggling with the docs was because the docs were of poor quality. Some people did publish decent documentation, either in books or digitally. The Microsoft KB articles were helpfully available on CD-ROM, for those without an internet connection, and were quite easy to reference.

Stack Overflow did not exist in 2005, and it was very much born from an environment in which search engines were in use. You could swap your 2005 and 2015 entries, and it would be more accurate.

No comment on your 2025 entry.

> To the extent that learning to punch your own punch cards was useful, it was because you needed to understand the kinds of failures that would occur if the punch cards weren't punched properly. However, this was never really a big part of programming, and often it was off-loaded to people other than the programmers.

I thought all computer scientists heard about Dijkstra making this claim at one time in their careers. I guess I was wrong? Here is the context:

> A famous computer scientist, Edsger Dijkstra, did complain about interactive terminals, essentially favoring the disciplined approach required by punch cards and batch processing.

> While many programmers embraced the interactivity and immediate feedback of terminals, Dijkstra argued that the "trial and error" approach fostered by interactive systems led to sloppy thinking and poor program design. He believed that the batch processing environment, which necessitated careful, error-free coding before submission, instilled the discipline necessary for writing robust, well-thought-out code.

> "On the Cruelty of Really Teaching Computing Science" (EWD 1036) (1988 lecture/essay)

Seriously, the laments I hear now have been the same throughout my entire career as a computer scientist. Let's just look toward 2035, when someone on HN will complain that some old way of doing things is better than the new way because it's harder, and wearing hair shirts is good for building character.

Dijkstra did not make that claim in EWD1036. The general philosophy you're alluding to is described in EWD249, which – as it happens – does mention punchcards:

> The naive approach to this situation is that we must be able to modify an existing program […] The task is then viewed as one of text manipulation; as an aside we may recall that the need to do so has been used as an argument in favour of punched cards as against paper tape as an input medium for program texts. The actual modification of a program text, however, is a clerical matter, which can be dealt with in many different ways; my point is […]

He then goes on to describe what today we'd call "forking" or "conditional compilation" (in those days, there was little difference). "Using AI to get answers", indeed. At least you had the decency to use blockquote syntax, but it's tremendously impolite to copy-paste AI slop at people. If you're going to ingest it, do so in private, not in front of a public discussion forum.

The position you've attributed to Dijkstra is defensible – but it's not the same thing at all as punching the cards yourself. The modern-day equivalent would be running the full test suite only in CI, after you've opened a pull request: you're motivated to program in a fashion that ensures you won't break the tests, as opposed to just iterating until the tests are green (and woe betide there's a gap in the coverage), because it will be clear to your colleagues if you've just made changes willy-nilly and broken some unrelated part of the program and that's a little bit embarrassing.

I would recommend reading EWD1035 and EWD1036: actually reading them, not just getting the AI to summarise them. While you'll certainly disagree with parts, the fundamental point that E.W.Dijkstra was making in those essays is correct. You may also find EWD514 relevant – but if I linked every one of Dijkstra's essays that I find useful, we'd be here all day.

I'll leave you with a passage from EWD480, which broadly refutes your mischaracterisation of Dijkstra's opinion (and serves as a criticism of your general approach):

> This disastrous blending deserves a special warning, and it does not suffice to point out that there exists a point of view of programming in which punched cards are as irrelevant as the question whether you do your mathematics with a pencil or with a ballpoint. It deserves a special warning because, besides being disastrous, it is so respectable! […] And when someone has the temerity of pointing out to you that most of the knowledge you broadcast is at best of moderate relevance and rather volatile, and probably even confusing, you can shrug out your shoulders and say "It is the best there is, isn't it?" As if there were an excuse for acting like teaching a discipline, that, upon closer scrutiny, is discovered not to be there.... Yet I am afraid, that this form of teaching computing science is very common. How else can we explain the often voiced opinion that the half-life of a computing scientist is about five years? What else is this than saying that he has been taught trash and tripe?

The full text of much of the EWD series can be found at https://www.cs.utexas.edu/~EWD/.

Has the quality of software been improving all this time?

The volume of software that we have produced with new tools has increased dramatically. The quality has remained at a level that the market can accept (and it doesn't want to bother paying the cost of more quality).

Absolutely. I missed the punch card days, but have been here for the rest, and software quality is way higher (overall) than it used to be.

Sure, people were writing terrible code 25 years ago.

XML-oriented programming and other stuff was "invented" back then.

Unironically, yes.

Now get back to work.

For an experienced engineer, working out the syntax, APIs, type issues, understanding errors, etc. is the easy part of the job. Larger-picture issues are the real task.

But for many Jr engineers it’s the hard part. They are not (yet) expected to be responsible for the larger issues.

What is a larger issue? Lacking domain knowledge? Or lacking the deeper understanding of years of shit in the codebase that seniors may understand better? Where I work, there is no issue that is "too large" for a junior to take on; it is the only way that "junior" becomes "non-junior" - by doing, not by delegating to so-called seniors (I am one of them).

"Larger issue" is overall technical direction and architecture, making decisions that don't paint you into a corner, establishing maintainability as a practice, designing work around an organization's structure and habit and so on.

But these are the things people learn through experience and exposure, and I still think AI can help by at least condensing the numerous books out there around technology leadership into some useful summaries.


Just curious, are you mostly FE? I could see this there (but there is still a lot of browser esoterica, etc.)

Doing backend and large distributed systems, it seems to me, goes much deeper. Types of consistency and their tradeoffs in practice, details like implementing and correctly using Lamport clocks, good API design, endless details about reworking, on and on.

And then for both, a learned sense of what approaches to system organization will work in the long run (how to avoid needing to stage a re-write every 5 years).

I still agree, more or less, that the best way for a junior to succeed is to jump in the deep end, though not without guidance. Mentorship is really important in distributed systems, where the inner machinations can be quite obtuse. But I find you can't just explain it all and expect it to stick; mentoring someone through a task is the best way.

>Just curious, are you mostly FE

Gatekeeping?

Why couldn't a backend team have all tasks be junior compatible, if uncoupled from deadlines and time constraints?

> Gatekeeping

Not at all. Just trying to understand a POV I think I see here, and in other discussions that I can't quite place / relate to.

The person I replied to seemed to be saying that there is no role for experience beyond knowing the language, tools, and the codebase; that there is no real difference between someone with 5 years of experience and 15 years. This may not be what they think, or meant to say; I'm extrapolating a bit (which is why I asked for clarification).

That attitude (which I have run into in other places) seems totally alien to me, my experience, and that of my friends and colleagues. So, I think there must be some aspect that I'm missing or not understanding.

You can't give a junior tasks that require experience and nuance that have been acquired over years of development. If you babysit them, then perhaps, but then what is the point? By its nature, "nuance" is something hard to describe concretely, but as someone who has mentored a fair few juniors, most of them don't have it. AI generally doesn't have it either. Juniors need tasks at the boundary of their capability, but not far beyond it, to be able to progress. Simply allowing them to make a mess of a difficult project is not a good way to get there.

There is such a thing as software engineering skill and it is not domain knowledge, nor knowledge of a specific codebase. It is good taste, an abstract ability to create/identify good solutions to a difficult problem.

> If you babysit them, then perhaps but then what is the point

In a long-term enterprise, the point is building up a long-term skillset in the community, and bolstering your team's hive mind on a smaller scale as well.

But work has evolved and the economy has become increasingly hostile to long term building, making it difficult to get buy in for efforts that don't immediately get work done or make money.

Much of the job of the Sr is to understand where the Jr is, and give them tasks that are challenging but achievable, and provide guidance.

You work(ed) in some shitty places if you believe this to be true.

Perhaps; I don't consider them shitty myself, but palates differ. Is engineering nirvana a place where tasks are such that any can be done by a junior engineer, and the concept of engineering skill developed through experience is non-existent?

> Is engineering nirvana a place where tasks are such that any can be done by a junior engineer, and the concept of engineering skill developed through experience is non-existent?

How does a junior acquire engineering skills except through experience, as you said?

Unnecessary complexity, completely arbitrary one-off designs, overemphasis on one part of the behavior while ignoring others. Using design patterns where they shouldn't be used, coding once and forgetting that operations exist, using languages and frameworks that are familiar but unfit for the purpose. The list goes on, and I see it happen all the time; AI only makes it worse because it tends to validate all of these with "You're absolutely correct!"

Good luck maintaining that.

This can only happen in shitty places with incompetent teams.

Every team has incompetence at some level. If every team was perfect, there would be no more work left to do, because they would always get the right product built correctly the first time. No bug fix releases, no feature refreshes, no version 2.

Beware, your ego may steer you astray.

I've been hacking for 31 years with the same ego, but you never know. And if I've learned anything in these years, it's to get the heck out of any place that treats people not by their skills but by how long ago their Mom gave birth to them.


This is honestly what I (staff engineer) find AI the most useful for. I've been around the block enough that I typically know in general what I want, but I often find myself wanting it in a new framework or paradigm or similar, and if I could just ASK a person a question, they'd understand it. But not knowing the exact right keywords, especially in frameworks with lots of jargon, can still make it annoying. I can often get what I want by just sitting down and reading approximately 6 screen-heights of text out of the official docs on the general topic in question to find the random sentence 70% of the way down that answered my question.

But d'you know what's really great at taking a bunch of tokens and then giving me a bunch of probabilistically adjacent tokens? Yeah, exactly! So often, even if the AI is giving me something totally bonkers semantically, just knowing all those tokens are adjacent enough gives me a big leg up in knowing how to phrase my next question, and of course sometimes the AI is also accidentally semantically correct too.

When I joined I could do all this.

And this is always my question: "... because the genie, used well, accelerates learning." Does it though?

How are we defining "learning" here? The example I like to use is that a student who "learns" what a square root is can calculate the square root of a number on a simple 4-function calculator (×, ÷, +, -), if only iteratively. Whereas the student who "learns" that the √ key gives them the square root is "stuck" when presented with a 4-function calculator. So did they 'learn' faster when the "genie" surfaced a key that gave them the answer? Or did they just become more dependent on the "genie" to do the work required of them?
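
For concreteness, here is a rough sketch of the kind of iteration the first student could carry out with only the four functions; it's the standard Babylonian/Newton averaging trick, written in Python purely for illustration:

    # Square root using only +, -, x, and /: repeatedly average the guess
    # with n/guess until it settles.
    def sqrt_four_function(n, steps=20):
        guess = n if n > 1 else 1.0          # any positive starting guess works
        for _ in range(steps):
            guess = (guess + n / guess) / 2  # average of guess and n/guess
        return guess

    print(sqrt_four_function(2))  # ~1.4142135623730951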

Some random musings this reminded me of.

I graduated HS in the mid-2000s and didn't start using a calculator for math classes until I was basically a junior in college. I would do every calculation by hand, on paper. I benefited from a great math teacher early on who taught me how to properly lay out my calculations and solutions on paper. I've had tests I've turned in where I spent more paper on a single question than others did on the entire test.

It really helped my understanding of numbers and how they interacted, and helped teachers/professors narrow down on my misunderstandings.

Not only that: I suspect you already have an inkling of the range of expected outcomes for the answer in your head just from looking through the problem, and any answer that fails that test will cause you to pause and re-check your work.

This aspect is entirely missing when you use an oracle.

You still need to be curious. I learn a ton by asking questions of the LLMs when I see new things. “Explain this to me - I get X but why did you do Y?”

It’s diamond age and a half - you just need to continue to be curious and perhaps slow your shipping speed sometimes to make sure you budget time for learning as well.

I think that's the "used well" in "because the genie, used well, accelerates learning".

We had 3 interns this past summer - with AI, I would say they were VERY capable of generating results quickly. Some of the code and assumptions were not great, but it did help us push out some releases quickly to alleviate customer issues. So there is a tradeoff with juniors: they may help get features out quickly, but the code may also need some refactoring later.

Interesting how similar this is to the tradeoff of using AI coding agents

What makes them more capable than a senior engineer with three LLM agents?


first response from me "let me mention how the real business world actually works" .. let's add a more nuanced slice to that however

Since desktop computers became popular, there have been thousands of small to mid-size companies that could benefit from software systems.. A thousand thousand "consultants" marched off to their nearest accountant, retailer, small manufacturer or attorney office, to show off the new desktop software and claim ability to make new, custom solutions.

We know now, this did not work out for a lot of small to mid-size businesses and/or consultants. Few could build a custom database application that is "good enough" .. not for lack of trying.. but pace of platforms, competitive features, stupid attention-getting features.. all of that, outpaced small consultants .. the result is giant consolidation of basic Office software, not thousands of small systems custom built for small companies.

What now, in 2025? "junior" devs do what? design and build? no. Cookie-cutter procedures at AWS lock-in services far, far outpace small and interesting designs of software.. Automation of AWS actions is going to be very much in demand.. is that a "junior dev" ? or what?

This is a niche insight and not claiming to be the whole story.. but.. ps- insert your own story with "phones" instead of desktop software for another angle

One thing I'd point out is that there are only so many ways to write a document or build a spreadsheet. There are a ton of business processes that are custom enough to that org that they have to decide to go custom, change their process, or deal with the inefficiency of not having a technical solution that accomplishes the goal easily.

Lotus Notes is an example of that custom software niche that took off and spawned a successful consulting ecosystem around it too.

> Lotus Notes is an example

TIL Notes is still a thing. I had thought it was dead and gone some time ago.

I'm a little confused by this analysis. Are you saying that all enterprise software has been replaced with MS word and AWS?

certainly no -- not "all software" of anything. Where is the word "enterprise" in the post you have replied to? "enterprise" means the very largest companies and institutions..

I did not write "all software" or "enterprise software" but you are surprised I said that... hmmm

I think the big win with AI is being able to work around jargon. Don't know what that word means? Ask the AI. What's the history on it? No problem. Don't understand a concept? Have it explained at a high-school reading level.

I'm not swayed by appeals to authority, but this is a supremely bad take.

"AI" tools are most useful in the hands of experienced developers, not juniors. It's seniors who have the knowledge and capability to review the generated output, and decide whether the code will cause more issues when it's merged, or if it's usable if they tweak and adapt it in certain ways.

A junior developer has no such skills. Their only approach will be to run the code, test whether it fulfills the requirements, and, if they're thorough, try to understand and test it to the best of their abilities. Chances are that because they're pressured to deliver as quickly as possible to impress their colleagues and managers, they'll just accept whatever working solution the tool produces the first time.

This makes "AI" in the hands of junior developers risky and counterproductive. Companies that allow this type of development will quickly grind to a halt under the weight of technical debt, and a minefield of bugs they won't know how to maneuver around.

The unfortunate reality is that with "AI" there is no pathway for junior developers to become senior. Most people will gravitate towards using these tools as a crutch for quickly generating software, and not as a learning tool to improve their own skills. This should concern everyone vested in the future of this industry.

> A junior developer has no such skills. Their only approach will be to run the code, test whether it fulfills the requirements, and, if they're thorough, try to understand and test it to the best of their abilities.

This is also a supremely bad take... well, really it's mainly the way you worded it that's bad. Juniors have skills, natural aptitudes, as much intelligence on average as other programmers, and often even some experience, but what they lack is work history. They sure as hell are capable of understanding code rather than just running it. Yes, of course experience is immensely useful, most especially for understanding how to achieve a maintainable and reliable codebase in the long term, which is obviously of special importance, but long experience is not a hard requirement. You can reason about trade-offs, learn from advice, learn quickly, etc.

You're right, that was harshly worded. I meant to contrast it with the capability of making a quality assessment of the generated output, and understanding how and what to change, if necessary. This is something that only experts in any field are capable of. I didn't mean to imply that people lacking experience are incapable of attaining these skills, let alone that they're less intelligent. It's just that the field is positioned against them in a way that they might never reach this level. Some will, but it will be much harder for most. This wouldn't be an issue if these new tools were infallible, but we're far from that stage.

> Instead of spending three hours figuring out which API to use, they spend twenty minutes evaluating options the AI surfaced

This really isn't the case from what I've seen. It's that they use Cursor or other code generation tools integrated into their development environment to generate code, and if it's functional and looks from a fuzzy distance like 'good' code (in the 'code in the small' sense), they send an oversized PR, and it's up to the reviewer to actually do the thinking.

That's bad and those juniors should be taught to do better or be "managed out of the company".

Their job is to deliver code that they have proved to work.

This inspired me to write a longer-form version of this: Your job is to deliver code you have proven to work https://simonwillison.net/2025/Dec/18/code-we-have-proven-to...

The link is a 404 for me

This. I have seen MRs with generated OpenCV LUT mapping in them because a junior didn't understand that what they needed was a simple interpolation function.

The crux is always that you don't know what you don't know. AI doesn't fix this.

Search is easily the best feature of AI/LLMs.

I kind of agree here. The mental model that works for me is "search results passed through a rock tumbler". Search results without attribution and mixed-and-matched across reputable and non-reputable sources, with a bias toward whatever source type is more common.

That's arguably all it ever was. Generating content using AI is just finding a point in latent space.

Which was trained on a pre-AI internet. What's going to happen in coming years when new tech comes out but perhaps isn't documented the same way anymore? It's not an unsolvable problem, but we could see unintended consequences, like, say, where you must pay the AI provider to ingest your data. Similar to buying poll space or AdSense or whatever they call it for search engines.

If you release a new piece of technology from 2025 onwards and don't invest a decent amount of effort into producing LLM-friendly documentation (with good examples) that a user can slurp into their coding agent you're doing your new technology a disservice.

I thought this was always true? What’s new about documentation being important?

If your technology has competition that's already in the training data, the only way to make it equally accessible to LLM users is to ensure there is concise, available documentation that can be fed directly into those LLMs.

That's why "copy page" buttons are increasingly showing up on manual pages, e.g. https://platform.claude.com/docs/en/get-started

If LLMs get more popular, fewer people will actually "browse the web", which could reduce the need for content to be published at all. At the least, fewer people will ask Stack Overflow questions for the LLM to learn from. So there could be an island of knowledge where LLMs excel at topics that had mass volume published before AI, and be much less useful for new tech developed after.


> the genie, used well, accelerates learning.

Ehh... 'used well' is doing some very heavy lifting there. And the incentive structure at 90% of companies does not optimize for 'using it well.'

The incentive is to ship quickly, meaning aim the AI-gun at the codebase for a few hours and grind out a "technically working" solution, with zero large-scale architecture thought and zero built-up knowledge of how the different parts of the application are intended to work together (because there was no "intention"). There will be tests, but they may not be sensible and may be very brittle.

Anyway, deploying a bunch of fresh grads armed not with good mentorship but with the ability to generate thousands of LOC a day is a recipe for accelerating the collapse I usually see in startup codebases about 6-8 years old. This is the point where the list of exceptions to every supposed pattern is longer than the list of things that follow the patterns, and where each bug, when properly pursued, leads to a long chain of past bad decisions, each of which would take days of effort to properly unwind (and that unwinding will also have a branching effect on other things). Also, coincidentally, this is the point where an AI agent is the most useless, because they really don't expect all the bizarre quirks in the codebase.

Am I saying AI is useless? No, it's great for prototyping and getting to PMF, and great in the hands of someone who can read its output with a critical eye, but I wouldn't combine it with inexperienced users who haven't had the opportunity to learn from all the many mistakes I've made over the years.

*Some juniors have gotten better.

I hate to be so negative, but one of the biggest problems junior engineers face is that they don't know how to make sense of or prioritize the glut of new-to-them information to make decisions. It's not helpful to have an AI reduce the search space because they still can't narrow down the last step effectively (or possibly independently).

There are junior engineers who seem to inherently have this skill. They might still be poor in finding all necessary information, but when they do, they can make the final, critical decision. Now, with AI, they've largely eliminated the search problem so they can focus more on the decision making.

The problem is it's extremely hard to identify who is what type. It's also something that senior level devs have generally figured out.

Not to disagree with Kent Beck's insights on juniors using AI, but the effect of AI on his own writing is palpably negative. His older content is much more enjoyable to read. And so is his recent non-post "activity" on Substack. For example, compare a "note" preceding this article (https://substack.com/@kentbeck/note/c-188541464), on the same topic, to the actual content.

>but because the genie, used well, accelerates learning.

This is "the kids will use the AI to learn and understand" level of cope

No, the kids will copy and paste the solution, then go back to their preferred dopamine dispenser.

I've learned a lot of shit while getting AI to give me the answers, because I wanted to understand why it did what it did. It saves me a lot of time trying to fix things that would have never worked, so I can just spend time analyzing success.

There might be value in learning from failure, but my guess is that there's more value in learning from success, and if the LLM doesn't need me to succeed my time is better spent pushing into territory where it fails so I can add real value.

>I've learned a lot of shit while getting AI to give me the answers

I would argue you're learning less than you might believe. Similarly to how people don't learn math by watching others solve problems, you're not going to learn to become a better engineer/problem solver by reading the output of ChatGPT.

If I know what I want to do and how I want to do it, and there's plumbing to make that a reality, am I not still solving problems? I'm just paying less attention to stuff that machines can successfully automate.

Regarding leveling up as an engineer, at this point in my career it's called management.

Do you honestly think that’s how people learn?

This is an example of a book on Common Lisp

https://gigamonkeys.com/book/practical-a-simple-database

What you usually do is follow the book's instructions and get some result, then go do some exploration on your own. There's no walking in the dark trying to figure out your own path.

Once you learn what works, and what does not, you'll have a solid foundation to tackle more complex subjects. That's the benefit of having a good book and/or a good teacher to guide you on the path to mastery. Using a slot machine is more tortuous than that.

I don't find it to be more torturous than that. In fact, if I were to go back and learn lisp again, I think I'd be a lot more motivated seeing how to build something interesting out of the gate rather than the toy programs I learned in my racket course.

Also, for a lot of things, that is how people learn because there aren't good textbooks available.

Define interesting.

I was helping a few people get started with an Android development bootcamp, and just being able to run the default example and get their bearings around the IDE was interesting to them. And I remember when I was first learning Python: just doing basic variable declaration and arithmetic was interesting. Same with learning C and being able to write tic-tac-toe.

I think a lot of harm is being done by giving beginners expectations that would befit people with years of experience. Like telling someone who doesn't even know Linux exists, or has never encountered the word POSIX, that they can learn Docker in 2 months.

Please do read the following article: https://www.norvig.com/21-days.html

Understanding "why it works" is one thing, understanding "why it should work this way and not another, and what the alternatives are" is entirely different. AI shows you just one of countless correct implementations. You might understand that single implementation, but not the entire theory behind it

Some might (most might?), but those aren't the ones we're interested in.

Just as some might pull the answers from the back of the textbook, the interesting ones are the kids who want to find out why certain solutions are the way they are.

Then again I could be wrong, I try hard to stay away from the shithose that is the modern social media tech landscape (TikTok, Insta, and friends) so I'm probably WAY out of touch (and I prefer it that way).

Right, and they won't get hired beyond their internship.

Don’t confuse this with this person’s ability to hide their instincts. He is redefining “senior” roles as junior, but words are meaningless in a world of numbers. The $$$ translation is that something that was worth $2 should now be worth $1.

Because that makes the most business sense.

I disagree. In my experience, AI does most of the work and the junior's already-poor skills atrophy. Then a senior engineer has to review AI slop and tell the junior to roll the AI dice again.

Agreed, this is like AI doing your homework. A select few will use it to learn but most will copy/pasta, let it create their PR and slack the rest of the day. But at least they are "trying" so they won't get fired. And it requires strong senior engineers to walk them through the changes they are trying to check in and see why they chose them.

I've seen it go both ways. As usual, a good manager should be able to navigate this.

Ok, but not all managers are good and not all situations are navigable.

I’m so sick of getting “but copilot said…” responses on PR comments.

The cynic in me sees it as using juniors as a vehicle for driving up AI metrics. The seniors will be less critical reviewing AI output with a human shield/messenger.

The amount of copium in the replies to this is just amazing. It’s amazing.

How would a person who describes himself as a "full time content producer" know what is actually going on in the industry?

https://substack.com/@kentbeck

What software projects is he actively working on?

The dude literally invented Extreme Programming and was the first signer of the Agile Manifesto. He's forgotten more about software development than most people on this site ever knew.

Seems to me that his core competency is in managing a software team, not developing software.

Someone's accomplishments don't make them incapable of having bad opinions and being wrong. Cults of personality are harmful to progress. Opinions should hold the same weight and be held to the same scrutiny regardless of who voiced them.

That wasn't the question being asked. The question being asked was literally "what are this guy's accomplishments," and Kent Beck is a tech industry OG with a laundry list of them.

Of course he can be wrong; he's human. That wasn't my point.

No, that wasn't the question.

When you're so out of touch as to not know who Kent Beck is, these questions hardly matter.

The thrust of the issue is this: when used suitably, AI tools can increase the rate of learning enough to change the economics of investing in junior developers - in a good way, contrary to how these tools have been discussed in the mainstream. That is an interesting take, and worthy of discussion.

Your appeal to authority is out of place here and clearly uninformed, thus the downvotes.

I know who Kent Beck is and I'm not impressed by Agile and Extreme Programming.

What I did not know and what the Wikipedia page revealed is that he worked for a YCombinator company. Thus the downvotes.

Why are you asking us what he's working on? Why not go find out yourself?

What does any of that have to do with having a valid opinion?

https://en.wikipedia.org/wiki/Kent_Beck

To be fair, even if I appreciate Beck, some people do get too famous and start to inhabit a space that is far removed from the average company. Many of these guys tend to give out advice that is applicable to a handful of top earning companies but not the rest.

Doesn't this back up the point? From his wiki it seems like he is mostly famous as a programming influencer, not as a programmer.

So? His bio is literally one fad after another. Now he joins the "AI" fad, what a surprise.

That is a very cynical take which completely ignores his contributions through the decades.

In many cases he helped build the bandwagons you're implying he simply jumped onto.

> In many cases he helped build the bandwagons you're implying he simply jumped onto.

The fact that I cannot tell if you mean this satirically or not (though I want to believe you do!) is alarming to me.