I started programming over 40 years ago because it felt like computers were magic. They feel more magic today than ever before. We're literally living in the 1980s fantasy where you could talk to your computer and it had a personality. I can't believe it's actually happening, and I've never had more fun computing.
I can't empathize with the complaint that we've "lost something" at all. We're on the precipice of something incredible. That's not to say there aren't downsides (WOPR almost killed everyone after all), but we're definitely in a golden age of computing.
The golden age for me is any period where you have fully documented systems.
Hardware that ships with documentation about what instructions it supports. With example code. Like my 8-bit micros did.
And software that’s open and can be modified.
Instead what we have is:
- AI models that are little black boxes, beyond our ability to fully reason about.
- perpetual subscription services for the same software we used to “own”.
- hardware that is completely undocumented to all but a small few who are granted an NDA beforehand
- operating systems that are trying harder and harder to prevent us from running any software they haven’t approved because “security”
- and distributed systems becoming centralised around GitHub, CloudFlare, AWS, and so on and so forth.
The only thing special about right now is that we have added yet another abstraction on top of an already overly complex software stack, allowing us to use natural language as pseudocode. And that is a very special breakthrough, but it’s not enough by itself to overlook all the other problems with modern computing.
My take on the difference between now and then is “effort”. All those things mentioned above are now effortless, but the door to “effort” remains open as it always has been. Take the first point, for example. Those little black boxes of AI can be significantly demystified by, say, watching a series of videos (https://karpathy.ai/zero-to-hero.html) and spending at least 40 hours of hard cognitive effort learning about them yourself. We used to purchase software or write it ourselves before it became effortless to get it for free in exchange for ads, and then a subscription once we grew tired of ads or were caught by a bait and switch. You can also argue that it has never been easier to write your own software than it is today.
Hostile operating systems. Take the effort to switch to Linux.
Undocumented hardware? There is far more open-source hardware out there today, and back in the day it was fun to reverse-engineer hardware. Now we just expect it to be open because we can’t be bothered to put in the effort anymore.
Effort gives me agency. I really like learning new things and so agentic LLMs don’t make me feel hopeless.
I’ve worked in the AI space and I understand how LLMs work in principle. But we don’t know the magic contained within a model after it’s been trained. We understand how to design a model, and how models work at a theoretical level. But we cannot know how well one will perform at inference until we test it. So much of AI research is just trial and error, with different dials repeatedly tweaked until we get something desirable. So no, we don’t understand these models in the same way we might understand how a hashing algorithm works. Or a compression routine. Or an encryption cypher. Or any other hand-programmed algorithm.
I also run Linux. But that doesn’t change how the two major platforms behave and that, as software developers, we have to support those platforms.
Open source hardware is great but it’s not in the same league of price and performance as proprietary hardware.
Agentic AI doesn’t make me feel hopeless either. I’m just describing what I’d personally define as a “golden age of computing”.
> The golden age for me is any period where you have the fully documented systems. Hardware that ships with documentation about what instructions it supports. With example code. Like my 8-bit micros did. And software that’s open and can be modified.
I agree that it would be good. (It is one reason why I wanted to design a better computer, which would include full documentation about the hardware and the software (hopefully enough to make a compatible computer), as well as full source code (which can help if some parts of the documentation are unclear, but can also be used to make your own modifications if needed).) (In some cases, we have some of this already, but not entirely. Not all hardware and software has the problems you list, although they are too common now. Making a better computer will not prevent such problematic things on other computers, and will not entirely prevent such problems on the new computer design either, but it would help a bit, especially if it is actually designed well rather than badly.)
Have you tried using GenAI to write documentation? You can literally point it to a folder and say, analyze everything in this folder and write a document about it. And it will do it. It's more thorough than anything a human could do, especially in the time frame we're talking about.
If GenAI could only write documentation it would still be a game changer.
But it writes mostly useless documentation, which takes time to read and decipher.
And worse, if you are using it for public documentation, sometimes it hallucinates endpoints (I don't want to say too much here, but it recently happened to a widely used B2B SaaS).
Loop it. Use another agent (one from a different company helps) to review the code and documentation and call out any inconsistencies.
I run a bunch of jobs weekly to review docs for inconsistencies and write a plan to fix them. It still needs humans in the loop if the agents don’t converge after a few turns, but it’s largely automatic (I babysat it for a few months, validating each change).
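A minimal sketch of that kind of cross-vendor loop, assuming the official `anthropic` and `openai` Python SDKs; the model names, prompts, and "OK" convergence check here are illustrative placeholders, not the actual pipeline described above:

    # Cross-vendor doc-review loop: one agent drafts, a second agent from a
    # different company audits, and we iterate until the reviewer signs off.
    import anthropic
    import openai

    writer = anthropic.Anthropic()   # drafts/updates the documentation
    reviewer = openai.OpenAI()       # independently audits it

    def draft_docs(source: str, feedback: str) -> str:
        msg = writer.messages.create(
            model="claude-sonnet-4-5",   # placeholder model name
            max_tokens=4096,
            messages=[{"role": "user", "content":
                f"Write documentation for this code.\n\nCODE:\n{source}\n\n"
                f"Address this reviewer feedback, if any:\n{feedback}"}],
        )
        return msg.content[0].text

    def review_docs(source: str, docs: str) -> str:
        resp = reviewer.chat.completions.create(
            model="gpt-4o",              # placeholder model name
            messages=[{"role": "user", "content":
                f"List every inconsistency between this code and its docs. "
                f"Reply with exactly OK if there are none.\n\n"
                f"CODE:\n{source}\n\nDOCS:\n{docs}"}],
        )
        return resp.choices[0].message.content

    def document(source: str, max_turns: int = 3) -> str:
        docs, feedback = "", ""
        for _ in range(max_turns):
            docs = draft_docs(source, feedback)
            feedback = review_docs(source, docs)
            if feedback.strip() == "OK":
                return docs
        raise RuntimeError("Agents didn't converge; needs a human in the loop.")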
The problem with documentation I described wasn’t about the effort of writing it. It was that modern chipsets are trade secrets.
When you bought a computer in the 80s, you’d get a technical manual about the internal workings of the hardware. In some cases even going as far as detailing what the registers did on their graphics chipset or CPU.
GenAI wouldn’t help here for modern hardware because GenAI doesn’t have access to those specifications. And if it did, then it would already be documented, so we wouldn’t need GenAI to write it ;)
Actually this makes me think of an interesting point. We DO have too many layers of software, and rebuilding is always so cost prohibitive.
Maybe an interesting route is using LLMs to flatten/simplify, so we can dig out from some of the complexity.
I’ve heard this argument made before and it’s the only side of AI software development that excites me.
Using AI to write yet another run-of-the-mill web service in the same bloated frameworks and programming languages designed for the lowest common denominator of developers really doesn’t feel like it’s taking advantage of the leap in capabilities that AI brings.
But using AI to write native applications in low-level languages, built for performance and memory utilisation, does at least feel like we are getting some actual quality-of-life savings in exchange for all those fossil fuels burnt crunching LLM tokens.
> perpetual subscription services for the same software we used to “own”.
In another thread, people were looking for things to build. If there's a subscription service that you think shouldn't be a subscription (because they're not actually doing anything new for that subscription), disrupt the fuck out of it. Rent seekers about to lose their shirts. I pay for eg Spotify because there's new music that has to happen, but Dropbox?
If you're not adding new whatever (features/content) in order to justify a subscription, then you're only worth the electricity and hardware costs or else I'm gonna build and host my own.
People have been building alternatives to MS Office, Adobe Creative Suite, and so on and so forth for literally decades, and yet they’re still the de facto standard.
Turns out it’s a lot harder to disrupt than it sounds.
In some ways, I'd say we're in a software dark age. In 40 years, we'll still have C, bash, grep, and Mario ROMs, but practically none of the software written today will still be around. That's by design. SaaS is a rent seeking business model. But I think it also applies to most code written in JS, Python, C#, Go, Rust, etc. There are too many dependencies. There's no way you'll be able to take a repo from 2026 and spin it up in 2050 without major work.
One question is how will AI factor in to this. Will it completely remove the problem? Will local models be capable of finding or fixing every dependency in your 20yo project? Or will they exacerbate things by writing terrible code with black hole dependency trees? We're gonna find out.
> That's by design. SaaS is a rent seeking business model.
Not all software now is SaaS, but unfortunately it is too common now.
> But I think it also applies to most code written in JS, Python, C#, Go, Rust, etc. There are too many dependencies.
Some people (including myself) prefer to write programs without too many dependencies, in order to avoid that problem. Other things also help: some people write programs for older systems which can be emulated, or use simpler, portable C code, etc. There are things that can be done to avoid too many dependencies.
There is uxn, which is a simple enough instruction set that people can probably implement it without too much difficulty. Although some programs might need extensions, and some might use file names, etc., many programs will work, because it is designed in a simple enough way that it will work.
I’m not sure Go belongs on that list. Otherwise I hear what you’re saying.
A large percentage of the code I've written the last 10 years is Go. I think it does somewhat better than the others in some areas, such as relative simplicity and having a robust stdlib, but a lot of this is false security. The simplicity is surface level. The runtime and GC are very complex. And the stdlib being robust means that if you ever have to implement a compiler from scratch, you have to implement all of std.
All in all I think the end result will be the same. I don't think any of my Go code will survive long term.
I’ve got 8 year old Go code that still compiles fine on the latest Go compiler.
Go has its warts but backwards compatibility isn’t one of them. The language is almost as durable as Perl.
We have what I've dreamed of for years: the reverse dictionary.
Put in a word and see what it means? That's been easy for at least a century. Have a meaning in mind and get the word? The only way to get this before was to read a ton of books and be knowledgeable, or to talk to someone who was. Now it's always available.
This is a great description of how I use Claude.
> Now it's always available.
And often incorrect! (and occasionally refuses to answer)
Is it? I’ve seen AI hallucinations, but they seem to be increasingly rare these days.
Much of the AI antipathy reminds me of Wikipedia in the early-mid 2000s. I remember feeling amazed with it, but also remember a lot of ranting by skeptics about how anyone could put anything on there, and therefore it was unreliable, not to be used, and doomed to fail.
20 years later and everyone understands that Wikipedia may have its shortcomings, and yet it is still the most impressive, useful advancement in human knowledge transfer in a generation.
I think robust crowdsourcing is probably the biggest capital-A Advancement in humanity's capabilities that came out of the internet, and there's a huge disparity in results that comes from how that capability is structured and used. Wikipedia designed protocols, laws, and institutions that leverage crowdsourcing to be the most reliable de facto aggregator of human knowledge. Social media designed protocols, laws, and institutions to rot people's brains, surveil their every move, and enable mass-disinformation to take over the public imagination on a regular basis.
I think LLMs as a technology are pretty cool, much like crowdsourcing is. We finally have pretty good automatic natural language processing that scales to large corpora. That's big. Also, I think the state of the software industry that is mostly driving the development, deployment, and ownership of this technology is mostly doing uninspired and shitty things with it. I have some hope that better orgs and distributed communities will accomplish some cool and maybe even monumental things with them over time, but right now the field is bleak, not because the technology isn't impressive (although somehow, despite how impressive it is, it's still being oversold) but because Silicon Valley is full of rotten institutions with broken incentives, the same ones that brought us social media and subscriptions to software. My hope for the new world a technology will bring about will never rest with corporate aristocracy, but with the more thoughtful institutions and the distributed open source communities that actually build good shit for humanity, time and time again.
It is! But you can then verify it via a correct, conventional forward dictionary.
The scary applications are the ones where it's not so easy to check correctness...
Right. Except the dictionary analogy only goes so far and we reach the true problem.
It's not an analogy.
Sure, but it's easy to check if it's incorrect and try again.
Forgive me if "just dig your way out of the hole" doesn't sound appealing.
You're free to use whatever tools you like.
> You're free to use whatever tools you like.
this is important, i feel like a lot of people are falling into the "stop liking what i don't like" way of thinking. Further, there's a million different ways to apply an AI helper in software development. You can adjust your workflow in whatever way works best for you... or leave it as is.
Surely you, a programmer, can imagine a way to automate this process
No, I actually haven't made, nor desire to make, a way to automate "thinking about, researching, and solving a problem".
When you use it to look up a single word, yeah, but people here use it to look up a thousand words at once and then can't check them all.
The "reverse dictionary" is called a "thesaurus". Wikipedia quotes Peter Mark Roget (1852):
> ...to find the word, or words, by which [an] idea may be most fitly and aptly expressed
Digital reverse dictionaries / thesauri like https://www.onelook.com/thesaurus/ can take natural language input, and afaict are strictly better at this task than LLMs. (I didn't know these tools existed when I wrote the rest of this comment.)
I briefly investigated LLMs for this purpose, back when I didn't know how to use a thesaurus; but I find thesauruses a lot more useful. (Actually, I'm usually too lazy to crack out a proper thesaurus, so I spend 5 seconds poking around Wiktionary first: that's usually Good Enough™ to find me an answer, when I find an answer I can trust it, and I get the answer faster than waiting for an LLM to finish generating a response.)
There's definitely room to improve upon the traditional "big book of synonyms with double-indirect pointers" thesaurus, but LLMs are an extremely crude solution that I don't think actually is an improvement.
A thesaurus is not a reverse dictionary
Really?
"What's a word that means admitting a large number of uses?"
That seems hard to find in a thesaurus without either versatile or multifarious as a starting point (but those are the end points).
I plugged "admitting a large number of uses" into OneLook Thesaurus (https://www.onelook.com/thesaurus/?s=admitting%20a%20large%2...), and it returned:
> Best match is versatile which usually means: Capable of many different uses
with "multi-purpose", "adaptable", "flexible" and "multi-use" as the runner-up candidates.
---
Like you, I had no idea that tools like OneLook Thesaurus existed (despite how easy it would be to make one), so here's my attempt to look this up manually.
"Admitting a large number of uses" -> manually abbreviated to "very useful" -> https://en.wiktionary.org/wiki/useful -> dead end. Give up, use a thesaurus.
https://www.wordhippo.com/what-is/another-word-for/very_usef..., sense 2 "Usable in multiple ways", lists:
> useful multipurpose versatile flexible multifunction adaptable all-around all-purpose all-round multiuse multifaceted extremely useful one-size-fits-all universal protean general general-purpose […]
Taking advantage of the fact my passive vocabulary is greater than my active vocabulary: no, no, yes. (I've spuriously rejected "multipurpose" – a decent synonym of "versatile [tool]" – but that doesn't matter.) I'm pretty sure WordHippo is machine-generated from some corpus, and a lot of these words don't mean "very useful", but they're good at playing the SEO game, and I'm lazy. Once we have versatile, we can put that into an actual thesaurus: https://dictionary.cambridge.org/thesaurus/versatile. But none of those really have the same sense as "versatile" in the context I'm thinking of (except perhaps "adaptable"), so if I were writing something, I'd go with "versatile".
Total time taken: 15 seconds. And I'm confident that the answer is correct.
By the way, I'm not finding "multifarious" anywhere. It's not a word I'm familiar with, but that doesn't actually seem to be a proper synonym (according to Wiktionary, at least: https://en.wiktionary.org/wiki/Thesaurus:heterogeneous). There are certainly contexts where you could use this word in place of "versatile" (e.g. "versatile skill-set" → "multifarious skill-set"), but I criticise WordHippo for far less dubious synonym suggestions.
'multifarious uses' -> the implication would be having not just many but also a wide diversity of uses
M-W gives an example use of "Today’s Thermomix has become a beast of multifarious functionality. — Matthew Korfhage, Wired News, 21 Nov. 2025 "
wordhippo strikes me as having gone beyond the traditional paper thesaurus, but I can accept that things change and that we can make a much larger thesaurus than we did when we had to collect and print. thesaurus.com does not offer these results, though, as a reflection of a more traditional one, nor does the m-w thesaurus.
"The only way to get this before was to read a ton of books and be knowledgable or talk to someone who was"
Did you have trouble with this part?
This seems like a hostile question.
Glad to see this already expressed here because I wholly agree. Programming has not brought me this much joy in decades. What a wonderful time to be alive.
I wish I could have you sit by my side for a week or two and pair program what I'm working on, because most of the time I'm not getting great results.
Depends on the project. For web-based functionality it seems great, because of all the prior work that is out there. For more obscure things like Obsidian Note extensions or Home Assistant help, it's more hit and miss.
You in SF? My schedule is a bit busy since we launched but I could find an hour in the city.
One thing I realized is that a lot of our so-called "craft" is converged "know-how". Take the recent news that Anthropic used Claude Code to write a C compiler, for example. Writing a compiler is hard (and fun) for us humans because we need to spend years deeply understanding compiler theory and learning every minute detail of implementation. That kind of learning is not easily transferable. Most students try the compiler class and never learn enough; only a handful each year continue to grow into true compiler engineers. Yet to our AI models, it does not matter much. They have already learned the well-established patterns of compiler writing from excellent open-source implementations, and now they can churn out millions of lines of code easily. If not perfect, they will get better in the future.
So, in a sense, our "craft" no longer matters; what has really happened is that the repetitive know-how has become commoditized. We still need people to do creative work, but what is not clear is how many such people we will need. After all, at least in the short term, most people build their careers by perfecting procedural work, because transferring the know-how and the underlying whys is very expensive for humans. For the long term, though, I'm optimistic that engineers have just gotten an amazing tool and will use it to create more opportunities that demand more people.
writing a C compiler is a 1st year undergrad project
C was explicitly designed to make it simple to write a compiler
Which university offers a compiler course for freshmen? Can you provide a link to the course?
Good for you. But there are already so, so many posts and threads celebrating all of this. Everyone is different. Some of us enjoy the activity of programming by hand. This thread is for those of us, to mourn.
You're still allowed to program by hand. Even in assembly language if you like.
> I can't empathize with the complaint that we've "lost something" at all.
We could easily approach a state of affairs where most of what you see online is AI and almost every "person" you interact with is fake. It's hard to see how someone who supposedly remembers computing in the 80s, when the power of USENET and BBSs to facilitate long-distance, or even international, communication and foster personal relationships (often IRL) was enthralling, could not think we've lost something.
I grew up on 80's and 90's BBSes. The transition from BBSes to Usenet and the early Internet was a magical period, a time I still look back upon fondly and will never forget.
Some of my best friends IRL today were people I first met "online" in those days... but I haven't met anyone new in a longggg time. Yeah, I'm also much older, but the environment is also very different. The community aspect is long gone.
I'm from the early 90s era. I know exactly what you're saying. I entered the internet on muds, irc and usenet. There were just far fewer people online in those communities in those days, and in my country, it was mostly only us university students.
But, those days disappeared a long time ago. Probably at least 20-30 years ago.
IRC is still around, that old internet is still there.
You just have to get off the commercial crap and you’ll find it.
even in the 90s there was the phrase "the Internet, where the men are men, the women are men, and the teen girls are FBI agents". It was always the case you never really knew who/what you were dealing with on the Internet.
> We're on the precipice of something incredible.
Total dependence on a service?
The quality of local models has increased significantly since this time last year. As have the options for running larger local models.
The quality of local models is still abysmal compared to commercial SOTA models. You're not going to run something like Gemini or Claude locally. I have some "serious" hardware with 128G of VRAM and the results are still laughable. If I moved up to 512G, it still wouldn't be enough. You need serious hardware to get both quality and speed. If I can get "quality" at a couple tokens a second, it's not worth bothering.
They are getting better, but that doesn't mean they're good.
Good by what standard? Compared to SOTA today? No they're not. But they are better than the SOTA in 2020, and likely 2023.
We have a magical pseudo-thinking machine that we can run locally, completely under our control, and instead the goal posts have moved to "but it's not as fast as the proprietary cloud".
My comparison was today's local AI to today's SOTA commercial AI. Both have improved, no argument.
It's more cost effective for someone to pay $20 to $100 month for a Claude subscription compared to buying a 512 gig Mac Studio for $10K. We won't discuss the cost of the NVidia rig.
I mess around with local AI all the time. It's a fun hobby, but the quality is still night and day.
These takes are terrible.
1. It costs $100k in hardware to run Kimi 2.5 in a single session at a decent tok/s, and it's still not capable of anything serious.
2. I want whatever you're smoking if you think anyone is going to spend billions training models capable of outcompeting them, make those models affordable to run, and then open source them.
On a scale that would make big tobacco blush.
Yes this is the issue. We truly have something incredible now. Something that could benefit all of humanity. Unfortunately it comes at $200/month from Sam Altman & co.
If that was the final price, no strings attached and perfect, reliable privacy then I might consider it. Maybe not for the current iteration but for what will be on offer in a year or two.
But as it stands right now, the most useful LLMs are hosted by companies that are legally obligated to hand over your data if the US government decides it wants it. It's unacceptable.
That $200/month price isn’t sustainable either. Eventually they’re going to have to jack it up substantially.
Between the internet, or more generally computers, or even more generally electricity, are we not already?
prefrontal cortex as a service
yup, all these folks claiming AI is the bee's knees are delegating their thinking to a roulette wheel that may or may not give proper answers. the world will become more and more like the movie Idiocracy
I agree with you with the caveat that all the "ease of building" benefits, for me, could potentially be dwarfed by job losses and pay decreases. If SWE really becomes obsolete, or even if the number of roles decrease a lot and/or the pay decreases a lot (or even fails to increase with inflation), I am suddenly in the unenviable position of not being financially secure and being stuck in my 30s with an increasingly useless degree. A life disaster, in other words. In that scenario the unhappiness of worrying about money and retraining far outweighs the happiness I get from being able to build stuff really fast.
Fundamentally this is the only point I really have on the 'anti-AI' side, but it's a really important one.
We definitely have lost something. I got into computers because they're deterministic. Way less complicated than people.
Now the determinism is gone and computers are gaining the worst qualities of people.
My only sanctuary in life is slipping away from me. And I have to hear people tell me I'm wrong who aren't even sympathetic to how this affects me.
But no one is forcing you to use this software?
Nothing meaningful happened in almost 20 years. After the iPhone, what happened that truly changed our lives? The dumpster fire of social media? Background Netflix TV?
In fact, I remember when I could actually shop on Amazon or browse for restaurants on Yelp while trusting the reviews. None of that is possible today.
We have been going through a decade of enshittification.
I really am very thankful to @simonw for posting a TikTok from Chris Ashworth, a Baltimore theater software developer, who recently picked up LLMs for building a voxel display software controller. And who was just blown away. https://simonwillison.net/2026/Jan/30/a-programming-tool-for...
Simon doesn't touch on my favorite part of Chris's video though, which is Chris citing his friend Jesse Kriss. This stuck out at me so hard, and is so close to what you are talking about:
> The interesting thing about this is that it's not taking away something that was human and making it a robot. We've been forced to talk to computers in computer language. And this is turning that around.
I don't see (as you say) a personality. But I do see the ability to talk. The esoterica is still here underneath, but computer programmers having a lock on the thing that has eaten the world, being the only machine whisperers around, is over. That depth of knowledge is still there and not going away! But notably, the LLM will help you wade in, help those not of the esoteric personhood of programmers to dive in and explore.
> golden age of computing
I feel like we've reached the worst age of computing. Where our platforms are controlled by power hungry megacorporations and our software is over-engineered garbage.
The same company that develops our browsers and our web standards is also actively destroying the internet with AI scrapers. Hobbyists lost the internet to companies and all software got worse for it.
Our most popular desktop operating system doesn't even have an easy way to package and update software for it.
Yes, this is where it's at for me. LLMs are cool and I can see them as progress, but I really dislike that they're controlled by huge corporations and cost a significant amount of money to use.
Use local OSS models then? They aren’t as good and you need beefy hardware (either Apple silicon or nvidia GPUs). But they are totally workable, and you avoid your dislikes directly.
"Not as good and costs a lot in hardware" still sounds like I'm at a disadvantage.
$3000 is not that much for hardware (like a refurbished MBP Max with decent amount of RAM), and you'd be surprised how much more useful a thing that is slightly worse than the expensive thing is when you don't have anxiety about token usage.
> they're controlled by huge corporations and cost a significant amount of money to use.
is there anything you use that isn't? like the laptop on which you work, the software you use to browse the internet, read email... I've heard comments like yours before and I'm not sure I understand, given everything else - why does this matter for LLMs and not the phone you use, etc etc?
I’ve used FreeBSD since I was 15 years old - Linux before that.
My computer was never controlled by any corporation, until now.
Yeah I've always run Linux on my computers for the past 30 years. I'm pretty used to being in control.
what phone do you use?
Dystopian cyberpunk was always part of the fantasy. Yes, scale has enabled terrible things.
There are more alternatives than ever though. People are still making C64 games today, cheap chips are everywhere. Documentation is abundant... When you layer in AI, it takes away labor costs, meaning that you don't need to make economically viable things, you can make fun things.
I have at least a dozen projects going now that I would have never had time or energy for. Any itch, no matter how geeky and idiosyncratic, is getting scratched by AI.
It’s never been easier for you to make a competitor
So what is stopping you other than yourself?
I’m not the OP, but my answer is that there’s a big difference between building products and building businesses.
I’ve been programming since 1998 when I was in elementary school. I have the technical skills to write almost anything I want, from productivity applications to operating systems and compilers. The vast availability of free, open source software tools helps a lot, and despite this year’s RAM and SSD prices, hardware is far more capable today at comparatively lower prices than a decade ago and especially when I started programming in 1998. My desktop computer is more capable than Google’s original cluster from 1998.
However, building businesses that can compete against Big Tech is an entirely different matter. Competing against Big Tech means fighting moats, network effects, and intellectual property laws. I can build an awesome mobile app, but when it’s time for me to distribute it, I have to deal with app stores unless I build for a niche platform.
Yes, I agree that it’s never been easier to build competing products due to the tools we have today. However, Big Tech is even bigger today than it was in the past.
Yes. I have seen the better product lose out to network effects far too many times to believe that a real mass market competitor can happen nowadays.
Look at how even the Posix ecosystem - once a vibrant cluster of a dozen different commercial and open source operating systems built around a shared open standard - has more or less collapsed into an ironclad monopoly because LXC became a killer app in every sense of the term. It’s even starting to encroach on the last standing non-POSIX operating system, Windows, which now needs the ability to run Linux in a tightly integrated virtual machine to be viable for many commercial uses.
Oracle Solaris and IBM AIX are still going. Outside of enterprises that are die hard Sun/Oracle or IBM shops, I haven't seen a job requiring either in decades. I used to work with both and don't miss them in the least.
Billions of dollars?
You don't need billions of dollars to write an app. You need billions of dollars to create an independent platform that doesn't give the incumbent a veto over your app if you're trying to compete with them. And that's the problem.
I didn't imagine I would be sending all my source code directly to a corporation for access to an irritatingly chipper personality that is confidently incorrect the way these things are.
There have been wild technological developments but we've lost privacy and autonomy across basically all devices (excepting the people who deliberately choose to forego the most capable devices, and even then there are firmware blobs). We've got the facial recognition and tracking so many sci-fi dystopias have warned us to avoid.
I'm having an easier time accomplishing more difficult technological tasks. But I lament what we have come to. I don't think we are in the Star Trek future and I imagined doing more drugs in a Neuromancer future. It's like a Snow Crash / 1984 corporate government collab out here, it kinda sucks.
I retired a few years ago, so I have no idea what AI programming is.
But I mourned when CRTs came out; I had just started programming. But I quickly learned CRTs were far better.
I mourned when we moved to GUIs, I never liked the move and still do not like dealing with GUIs, but I got used to it.
Went through all kinds of programming methods, too many to remember, but those were easy to ignore and workaround. I view this new AI thing in a similar way. I expect it will blow over and a new bright shiny programming methodology will become a thing to stress over. In the long run, I doubt anything will really change.
I think you're underestimating what AI can do in the coding space. It is an extreme paradigm shift. It's not like "we wrote C, but now we switch to C++, so now we think in objects and templates". It's closer to the shift from assembly to a higher level language. Your goal is still the same. But suddenly you're working in a completely newer level of abstraction where a lot of the manual work that used to be your main concern is suddenly automated away.
If you've never tried Claude Code, give it a try. It's very easy to get into. And you'll soon see how powerful it is.
> But suddenly you're working in a completely newer level of abstraction where a lot of the manual work that used to be your main concern is suddenly automated away.
It's remarkable that people who think like this don't have the foresight to see that this technology is not a higher level of abstraction, but a replacement of human intellect. You may be working with it today, but whatever you're doing will eventually be done better by the same technology. This is just a transition period.
Assuming, of course, that the people producing these tools can actually deliver what they're selling, which is very much uncertain. It doesn't change their end goal, however. Nor the fact that working with this new "abstraction" is the most mind numbing activity a person can do.
I agree with this. At a higher level of abstraction, you’re still doing the fundamental problem solving. Low-level machine language or high-level Java, C++ or even Python, the fundamental algorithm design is still entirely done by the programmer. LLMs aren’t being used to just write the code: unless the user is directing how each line, or at least each function, is written, you can often just describe the problem and the model solves it most of the way, if not entirely. Only for really long and complex tasks do the better models require hand-holding, and they are improving on that end rapidly.
That’s not a higher level of abstraction; it’s having someone do the work for you, while doing less and less of the thinking as well. Someone might resist that urge and consistently guide the model closely, but that’s probably not what most SWEs who use these models are doing. The ease of using these models and our natural reluctance to take on mental stress will likely ensure that eventually everyone lets LLMs do most or all of the thinking for them. If things really go in that direction and spread, I foresee a collective dumbing down of the general population.
OT but I see your account was created in 2015, so I'm assuming very late in your career. Curious what brought you to HN at that time and not before?
I did not know it existed before 2015 :)
Same.
I was born in 84 and have been doing software since 97
it’s never been an easier, better, or more accessible time to make literally anything - by far.
Also if you prefer to code by hand literally nobody is stopping you AND even that is easier.
Cause if you wanted to code for console games in the 90s, you literally couldn’t without a $100k specialized dev machine.
It’s not even close.
This “I’m a victim because my software engineering hobby isn’t profitable anymore” take is honestly baffling.
I'm not going to code by hand if it's 4x slower than having Claude do it. Yes, I can do that, but it just feels bad.
The analogy I like is it's like driving vs. walking. We were healthier when we walked everywhere, but it's very hard to quit driving and go back even if it's going to be better for you.
I actually like the analogy but for the opposite reason. Cars have become the most efficient way to travel for most industrial purposes. And yet enormous numbers of people still walk, run, ride bikes, or even horses, often for reasons entirely separate from financial gain.
I walk all the time
During the summer I’ll walk 30-50 miles a week
However I’m not going to walk to work ever, and I’m damn sure not going to walk in the rain or snow if I can avoid it.
it's an exciting time, things are changing, and changing beyond "here's my new javascript framework". It's definitely an industry-shakeup kind of deal and no one knows what lies 6 months, 1 year, 5 years from now. It makes me anxious seeing as I have a wife + 2 kids to care for and my income is tied to this industry, but it's exciting too.
Well you need to learn to adapt quickly if you have that much infrastructure to maintain
I'm actually extremely good at programming. My point is I love computers and computing. You can use technology to achieve amazing things (even having fun). Now I can do much more of that than when I was limited to what I can personally code. In the end, it's what computers can do that's amazing, beautiful, terrifying... That thrill and to be on the bleeding edge is always what I was after.
The downside is that whatever you (Claude) can do so can anyone else too.
So you're welcome to make the 100000000th copy of the same thing that nobody cares about anymore.
It's so easy to build things that I don't need anyone to care about it; I just need the computer to do what I want it to do.
Thank you. I don't understand how people don't see that this is the universe's most perfect gift to corporations, and what a disaster it is for labor. There won't be a middle class. Future generations will be intellectual invalids. Baffling to see people celebrating.
it is a very, very strange thing to witness
even if you can be a prompt engineer (or whatever it's called this week) today
well, with the feedback you're providing: you're training it to do that too
you are LITERALLY training the newly hired outsourced personnel to do your job
but this time you won't be able to get a job anywhere else, because your fellow class traitors are doing exactly the same thing at every other company in the world
They are the useful idiots buying into the hype, thinking that by some magic they get to keep their jobs and their incomes.
This thing is going to erase careers and render skill sets and knowledge cultivated over decades worthless.
Anyone can prompt the same fucking shit now and call it a day.
If you were confident in your own skills, you wouldn’t need to invent a whole backstory just to discredit someone.