> The bottleneck is understanding the problem. No amount of faster typing fixes that.
Why not? Why can't faster typing help us understand the problem faster?
> When you speed up code output in this environment, you are speeding up the rate at which you build the wrong thing.
Why can't we figure out the right thing faster by building the wrong thing faster? Presumably we were gonna build the wrong thing either way in this example, weren't we?
I often build something to figure out what I want, and that's only become more true the cheaper it is to build a prototype version of a thing.
> You will build the wrong feature faster, ship it, watch it fail, and then do a retro where someone says "we need to talk to users more" and everyone nods solemnly and then absolutely nothing changes.
I guess because we're just cynical.
> Why can't we figure out the right thing faster by building the wrong thing faster?
Because usually the customer can only tolerate so many failed attempts per unit of time. Running your fitness function is often very expensive in terms of other people's time.
This is easily the biggest bottleneck in B2B/SaaS stuff for banking. You can iterate maybe once a week if you have a really, really good client.
> Why can't we figure out the right thing faster by building the wrong thing faster?
> Because usually the customer can only tolerate so many failed attempts per unit of time. Running your fitness function is often very expensive in terms of other people's time.
Heh, depends on what you do. Many times the stakeholders can't explain what they want but can clearly articulate what they don't want when they see it.
Generate a few alternatives, have them pick, is a tried and true method in design. It was way too expensive when coding was manual, so often you need multiple rounds of meetings and emails to align.
If you don't think coding was the bottleneck, you're not thinking creatively about what's only now possible.
It's not just what you can do faster (well, it is, up to a point), but also what you can now do that would have been positively insane and out of the question before.
That's done by arranging a demo (the very old way) or (better) by deploying to a staging server. The customer meets with you for a demo not very often, maybe once per month, or checks what's on the staging server maybe a couple of times per week. They have other things to do, so you cannot make them check your proposal multiple times per day. However I concede that if you are fast you can work for multiple customers at the same time and juggle their demos on the staging servers.
> Generate a few alternatives, have them pick, is a tried and true method in design. It was way too expensive when coding was manual, so often you need multiple rounds of meetings and emails to align.
Why do you need coding for those? You can doodle on a whiteboard for a lot of those discussions. I use Balsamiq[0] and I can produce a wireframe for a whole screen in minutes. Even faster than prompting.
> If you don't think coding was the bottleneck, you're not thinking creatively about what's only now possible.
If you think coding was a bottleneck, that means you spent too much time doing when you should have been thinking.
[0]: https://balsamiq.com/product/desktop/
The customer doesn't need to be shown every "wrong thing".
Then how will you know if it's the wrong thing? If you're not user testing then you're just guessing.
In my experience this just makes them lose confidence in you and the company. So when it eventually is right, they're resistant. Worst case you lose the contract.
But think of the strawmen.
That's fair. I'm usually my own customer.
I think a lot of the discourse around LLMs fails because of organizational differences.
I work in science, and I’ve recently worked with a couple projects where they generated >20,000 LOC before even understanding what the project was supposed to be doing. All the scientists hated it and it didn’t do anything that it was supposed to. But I still felt like I was being “anti-ai” when criticizing it.
I understand that it’s way better when you deeply understand the problem and field though.
I'm starting to see this. It's starting to seem like a lot of the people making the most specious, yet wild AI SDLC claims are:
* Hobbyists or people engaged in hobby and personal projects
* Startup bros; often pre-funding and pre-team
* Consultancies selling an AI SDLC that wasn't even possible 6 months ago as "the way; proven, facts!"
It's getting to the point I'd like people to disclose the size of the team and org they are applying these processes at LOL.
The rule of thumb I have in my head right now is that AI will benefit people with deep specialized knowledge a lot, but people with shallow knowledge or skills can’t build anything that your average SWE with a Claude code subscription can’t replicate in a few hours.
Most LinkedIn influencers, startup bros and consultancies kind of fall into the latter.
You have it completely backwards.
Most Enterprise IT projects fail. Including at banks. They are extremely saleable though. They don't see things that are failures as failures. The metrics are not real. Contract renewals do not focus on objective metrics.
This is why you make "$1" with all your banking relationships and actually valuable tacit knowledge, until Accenture notices and makes bajillions, and now Anthropic makes bajillions. Look, I agree that you know a lot. That's not what I'm saying. I'm saying the thing you are describing as a bottleneck is actually the foundation of the business of the IT industry.
Another POV is, yeah, listen, the code speed matters a fucking lot. Everyone says it does, and it does. Jesus Christ.
attempt != release to customer
when you're building a feature and have different ideas how to go about it, it's incredibly valuable to build them all, compare, and then build another, clean implementation based on all the insights
I used to do it before, but pretty rarely, only for the most important stuff. now I do it for basically everything. and while 2-4 agents are working on building these options, I have time to work on something else.
AI is really good when:
1. you want something that's literally been done tons of times before, and it can literally just find it inside its compressed dataset
2. you want something and as long as it roughly is what you wanted, it's fine
It turns out, this is not the majority of software people are paying engineers to write.
And it turns out that actually writing the code is only part of what you're paying for - much smaller than most people think.
You are not paying your surgeon only to cut things.
You are not paying your engineer only to write code.
> It turns out, this is not the majority of software people are paying engineers to write.
The above are definitely the majority of software people are paying developers to write. By an order of magnitude.
The novel problems for customers who specifically care about code quality is probably under 1% of software written.
If you don't recognise this, you simply don't understand the industry you work in.
As it turns out - "just make this button green" - is not the majority of what people at FAANG are doing...
As it turns out - 4 years before LLMs - at least one of the FAANGs already had auto-complete so good it could do most of what LLMs can practically do in a gigantic context.
But, sure...
>at least one of the FAANGs already had auto-complete so good it could do most of what LLMs can practically do
Could you clarify what you're referring to? I'm interested.
Less than 1% of software developers work at FAANG.
Contrary to popular opinion - the majority of engineers are not working at companies that have no revenue.
Anywhere the risk of something going wrong is high, a lot of what you're paying engineers for is to minimize that risk while getting shit done - not to "just do the thing" you might think you're paying for.
Wherever "this sort of works" is good enough, LLMs will excel. Wherever it doesn't, you'll still be paying a lot of money for humans.
Mostly because non-engineers cannot define what "working" is most of the time it's important.
You don't go to a surgeon and tell him to replace your heart in some way. You go to a surgeon to "fix" your heart. You wouldn't even know what that meant.
Almost 20 years ago, IBM was famous for the average engineer writing ONE line of code per day...
It's not like IBM was paying people to do nothing, contrary to what most people thought who worked there for 15 years.
It's almost as if lines of code don't have value and a working product and the ability to change it reliably does.
Everyone has their own set of novel problems. And they use libraries and frameworks for things that are outside it. The average SaaS provider will not write its own OS, database, network protocols,... But it will have its own features, and while those may be similar to others', they're evolving in different environments and encounter different issues that need different solutions.
Non-novel problem != non-novel solution
Most problems are mostly non-novel but with a few added constraints, the combination of which can require a novel solution.
Those are exactly the types of problems that LLMs excel at solving.
Actually the surgeon analogy is really good. Saying AI will replace programming is like saying an electric saw will replace surgeons because the hospital director can use it to cut into flesh.
It's so much faster too! How many meters of flesh have you cut this month, and how are you working toward increasing that number?
> Why not? Why can't faster typing help us understand the problem faster?
Why can't you understand the problem faster by talking faster?
Sometimes you can.
>Why not? Why can't faster typing help us understand the problem faster?
do you have a example (even a toy one) where typing faster would help you understand a problem faster?
Has everyone always nailed their implementation of every program on the first try? Of course not. Probably what happens most times is you first complete something that sorta works and then iterate from there by modifying code, executing, observing, and looping back to the beginning. You can wonder about ultimately how much of your time/energy is consumed by the "typing code" part, and there's surely a wide range of variation there by individual and situation, but it's undeniable that it is a part of the core iteration loop for building software.
I don't understand why GP's comment is so controversial. GP is not denying that you should maybe think a little before a key hits the keyboard as many commenters seem to suppose. Both can be true.
That kind of thinking pops up very prominently in the article.
Here's a literal toy one.
Build a toy car with square wheels and one with triangular wheels and one with round wheels and see which one rolls better.
The issue isn't "typing faster" it's "building faster".
No need to build three, you just have to quickly write a proof for which shapes can roll. You'll then spend x+y units of time, where y<<x, instead of 3*x units. We have stories that highlight the importance of thinking instead of blindly doing (sharpening the axe, $1 for pressing a button and $9999 for knowing which button to press).
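To make the "proof" concrete: here's a toy sketch of my own (the `bumpiness` metric and name are made up for illustration, not from the thread). A regular n-gon rolling on its edges makes its center bob between the apothem and the circumradius, so `1 - cos(pi/n)` scores how rough the ride is; a circle is the n → ∞ limit where the score goes to zero.

```python
import math

def bumpiness(n_sides: int) -> float:
    """How much a regular n-gon's center bobs as it rolls on a flat
    surface, as a fraction of its circumradius: 1 - cos(pi/n).
    0 means a perfectly smooth roll (the circle limit)."""
    return 1 - math.cos(math.pi / n_sides)

# triangle, square, and a 64-gon standing in for a round wheel
for n in (3, 4, 64):
    print(n, bumpiness(n))
```

A few lines of arithmetic settle the ordering (triangle bumpiest, square next, round smoothest) without gluing a single wheel to a single axle, which is the parent's point about thinking before building.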
> quickly write a proof for which shapes can roll.
Building the 3 is the proof.
Sometimes articulating the problem is all you need to see what the solution is. Trying many things quickly can prime you to see what the viable path is going to be. Iterating fast can get you to a higher level of understanding than methodical, deliberative construction.
Nevertheless, it's a tool that should be used when it's useful, just like slower consideration can be used. Frontier LLMs can help significantly in either case.
so, what i am gathering is that some people in this comment section read "typing faster" literally, while other people are reading it and translating it to "iterating faster".
"Code writing speed" is just a superficial dismissal of AI without consideration as to whether AI is being used well or poorly for the task at hand. Saying that AI is the same as making people type faster, or that AI only produces slop, etc, is a very self limiting mindset.
> do you have a example (even a toy one)
It's extremely common with video games. Lots of game design is done by seeing what something feels like and changing it or throwing it away, repeatedly.
I think UI can be this way a lot too.
I often understand problems by discussing them with AI (by typing prompts and reading the response). Typing or reading faster would make this faster.
Fast prototyping helps when the prototype forces contact with the problem, like users saying "nope" or the spec collapsing under a demo. If the loop is only you typing, debugging, and polishing, you're mostly making a bigger mess in the repo and convincing yourself that the mess taught you something.
Code is one way to ask a question, not proof that you asked a good one. Sometimes the best move is an annoying hour with the PM, the customer, or whoever wrote the ticket.
I completely agree with this. I actually spent some time recently working on the design for a project. This was a side thing I spent months thinking about in my spare time, eventually spec'ing an API and data model.
I only recently decided to take it on, given how capable Claude Code has become recently. It knocked out a working version of my backend pretty quickly, adhering to my spec, and then built a frontend.
The result? I realized pretty quickly that the (IMO) beautiful design just didn't actually work with how it made sense for the product to work. An hour with the prototype made it clear that I needed to redesign from the ground up around a different piece to make the user experience actually work intuitively.
If I had spent months of my spare time banging on that only to hit that wall, it would've been a much more demotivating experience. Instead, I was able to re-spec and spin up a much better version almost immediately.
> Why can't faster typing help us understand the problem faster?
Why can't standing on your head?
Everyone has their own process.
> Why not? Why can't faster typing help us understand the problem faster?
I think we can, in some cases.
For instance, I prototyped a feature recently and tested an integration it enabled. It took me a few hours. There's no way I would have tried this, let alone succeeded, without opencode. Because I was testing functionality, I didn't care about other aspects: performance, maintainability, simplicity.
I was able to write a better description of the problem and assure the person working on it that the integration would work. This was valuable.
I immediately threw away that prototype code, though. See above aspects I just didn't need to think about.
>There's no way I would have tried this, let alone succeeded, without opencode
Sure there is.
You could have used Claude or Codex directly :)
Because you're working on the implementation before you understand the problem?
Ding ding ding!
The article talks about process flows and finding the bottleneck. That might be coding, but probably is not.
> Why not? Why can't faster typing help us understand the problem faster?
Sometimes you need to think slow to understand something. Offloading your thinking to a black box of numbers and accepting what it emits is not thinking slow (i.e. ponder) and processing the problem at hand.
On the contrary, it's entering tunnel vision and brute forcing. i.e. shotgun coding.
"Why can't faster typing help us understand the problem faster?"
Because typing is not the same as understanding.
The typing referred to here is not "the typing part of coding" (fingers touching the keyboard), it's the whole coding (LLM is not a typing aid, it's a coding aid).
And coding faster CAN help us understand the problem faster. Coding faster means iterating, refactoring, trying different designs - and seeing what does and doesn't work, faster.
I'm reminded of the original Agile joke, "software you don't want in 30 days or less." Today it's "software you don't want in 5 days or less."
Pretty much. The article assumes people didn't build the wrong thing before AI. Except that happened all the time; it just happened slower, it took longer to realize it was the wrong thing, and then building the right thing took longer too.
It's funny, because you could actually take that story and use it to market AI.
> I once watched a team spend six weeks building a feature based on a Slack message from a sales rep who paraphrased what a prospect maybe said on a call. Six weeks.
Except now with AI it takes one engineer 6 hours, people realize it's the wrong thing and move on. If anything, I would say it helps prove the point that typing faster _does_ help.
Sometimes being involved in the construction process allows you to discover all the (many, overlapping) ways it's the "wrong thing" sooner.
In the long term, some of the most expensive wrong-things are the ones where the prototype gets a "looks good to me" from users, and it turns out what they were asking for was not what they needed or what could work, for reasons that aren't visually apparent.
In other words, it's important to have many people look at it from many perspectives, and optimizing for the end-user/tester perspective at the expense of the inner-working/developer perspective might backfire. Especially when the first group knows something is wrong, but the second group doesn't have a clue why it's happening or how to fix it. Worse still if every day feels like learning a new external codebase (re-)written by (LLM) strangers.
The post also smells heavily LLM-processed. I feel like I've been had by someone pumping out low effort blog posts.
> Why not? Why can't faster typing help us understand the problem faster?
Why do you need to type at all to understand the problem?
I write my best code when I'm driving my car. When I stop and park up, it's just a case of typing it all in at my leisure.
You can build a lot of wrong things that don't also help you narrow that solution space. The most effective way to "build the wrong thing" in an informative way is to first think hard and understand your solution space. You want to build the right wrong thing. The thing that helps you rule out lots of stuff. But if you are doing it randomly then you aren't doing this effectively and probably wasting a lot of time. You probably are already doing this but not thinking too much about this explicitly, but if you think explicitly you'll improve on this.
You always build the "wrong" thing. But it is about how wrong you are. Despite being about physics, I think Asimov's Relativity of Wrong[0] (short essay) is pretty relevant here and says everything I want to say but better. It is worth the read and I come back to it frequently.

Yes! But this is not quite the same thing. I do this too! I never know the full details of the thing I want before I start building. I'm not sure that's even possible, tbh. You're always going to discover more things as you get into the details and nuance. But that doesn't mean foresight is useless either.

Let's say I'm somewhere in the middle of America and I want to get to NYC. Analogous to your framing, you are saying "why can't moving faster help us get there faster?" Obviously it can! BUT speed is meaningless without direction. You don't want speed, you want velocity. If you start driving as fast as possible in a random direction, you're as likely to head in a direction that increases your distance as one that decreases it. And you are very unlikely to head in a good direction. Driving fast in the wrong direction does significantly more harm than driving slowly in the wrong direction.

So what's the optimal strategy? Find a general direction (e.g. use the sun or stars/moon) to figure out where "east"(ish) is, start driving relatively slowly, refine your direction as you get more information about the landscape, and speed up as you gain more information. If you can't find a general direction, you should slowly meander, carefully taking in how the landscape/environment is changing. If it is very unchanging, then yeah, speed up, but only until you find a region that becomes more informative, then repeat the process.
If we already had perfect information about how to get to NYC then just drive as fast as fucking possible. But if we don't have that information we need a completely different strategy. Thus, t̶y̶p̶i̶n̶g̶ driving speed isn't the bottleneck.
Speed doesn't matter, velocity does
[0] https://hermiene.net/essays-trans/relativity_of_wrong.html