I've been using computers since I was about 12... a long time ago. The conclusion I've come to is this: the best programs and the best tools are the ones that are lovingly (and perhaps with a bit of hate too) crafted by their developers. The best software ecosystems are the ones that are developed slowly and with care.

AI coding seems all wrong compared to that world. In fact, the only reason there is a push for AI coding is that the software world of the ol' days has been co-opted by the evolutionary forces of consumerism and capitalism into a pathology, a nightmare that exists only to push us with dark patterns into using and buying things we don't really need. It takes the joy out of software development by placing two gods above all else that is mortal and good: efficiency and profit.

AI seems antithetical to the hacker ethic, not because it isn't an intriguing project, but because at its core seems to lie the drive to automate away much of the joy of life. And yes, people can still use AI to be creative in some ways, such as with AI prompts for art, but even so, the overall societal effect is the erosion of the bespoke, the custom, and the personal from a human endeavor whose roots were once tinkering with machines and making them work for us, and whose final result is now US working for THEM.

> AI seems antithetical to the hacker ethic

Don't make the mistake I did and think HN is populated predominantly by that type of hacker. This is, at the end of the day, a board for startups.

(Not to say none frequent this board, but they seem relatively rare these days)

The "I did this cool thing" posts get way more upvotes than the "let's be startuppy" posts. I don't think the "hacker" population is as rare as you're suggesting.

I don't think "shiny new thing" posts getting more upvotes indicate anything about the hacker population.

I think he's referring to posts like this[0] rather than the shiny new tool people usually get excited about.

[0]: https://news.ycombinator.com/item?id=30803589

Correct.

My dream is to get a farm and start exploring, on my own, a lot of things that already exist.

Doing pottery, art, music, etc.

I do call myself a hacker, and what I do for a living is closely aligned with one of my biggest hobbies ("computers"), but that doesn't mean shit to me, tbh.

I will leverage AI to write the things I wanna write, knowing that I don't have the time for those 'bigger' projects.

Btw, very few people actually write good software. For most it's a job. I'm really good at what I do because there are not that many of us out there in a normal company.

> AI seems antithetical to the hacker ethic

I disagree; chatbots are arguably best for hacks: dodgy, kludged-up software that works if you don't sneeze on it, but accomplishes whatever random thing I wanted to get done. They're a great tool for hackers.
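To make that concrete, here's a hypothetical sketch of the kind of kludge I mean: the sort of one-off script a chatbot will happily spit out in seconds. Everything about it is made up for illustration; there's no error handling and it will fall over if you sneeze on it, but it gets the random thing done.

    # Throwaway kludge: flag duplicate files under a directory by content hash.
    # Usage: python dupes.py <directory>
    # No error handling, no symlink hygiene, reads whole files into memory.
    import hashlib
    import sys
    from pathlib import Path

    seen = {}  # content hash -> first path seen with that content
    for path in Path(sys.argv[1]).rglob("*"):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            if digest in seen:
                print(f"duplicate: {path} == {seen[digest]}")
            else:
                seen[digest] = path

That's exactly the sweet spot: zero design, zero robustness, immediate payoff.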

A bunch of clueless managers are going to fall for the hype and try to lean on the jank machine to solve problems that aren't suited to a quick hack, and will live to regret it. Ok, who am I kidding, most of them are going to fail up. But a bunch of still-employed hackers will curse their name while cashing the fat paycheck they're earning for cleaning up the mess.

> The conclusion I've come to is this: the best programs and the best tools are the ones that are lovingly (and perhaps with a bit of hate too) crafted by their developers.

I think this is an example of correlation, not causation. Obviously it's true to some extent, in the sense that, all things being equal, more care is good. But I think all you're really saying here is that good products are built well, and products that are built well tend to be built by developers who care enough to make the right design decisions.

I don't think there's any reason AI couldn't make technical decisions as good as (or better than) those of a human developer who is both technically knowledgeable and who cares. I think I personally care a lot about the products I work on, but I'm far from infallible. I often look back at decisions I've made and realise I could have done better. I can imagine how an AI with knowledge of every product on GitHub, plus a large collection of technical architecture documentation and blog posts, could make better decisions than me.

I suppose there's also some creativity involved in making the "right" decisions. Sometimes products have unique challenges with no proven solutions that are considered more "correct" than any other. Developers in these cases need to come up with their own creative solutions and rank them on their unique metrics. Could an AI do this? Again, I think so. At least LLMs today seem able to come up with solutions to novel problems, even if they're not always great at it at this moment in time. Perhaps there are limits to the creativity of current LLMs, though, and for certain problems that require deep creativity humans will always outperform the best models. But even this is probably only true if LLM architecture doesn't advance, assuming creativity is a limitation in the first place, which I'm far from convinced of.

Are there other sites that focus more on hacker ethic projects?

On the one hand, I agree with you. On the other hand, you could make similar arguments for the typewriter, the printing press, or the wheel.

Nope, there is a fundamental difference that people who make this analogy ALWAYS fail to acknowledge: AI is a mechanism to replace people in work that is widely considered creative. Yes, AI may not be truly creative, but it does replace people doing jobs, and those people feel they are doing something creative.

The typewriter, the printing press, the wheel, never did anything of the sort.

And you're also ignoring speed and scale: AI develops much faster and changes the world much faster than those inventions did.

Your argument is akin to arguing that driving at 200 km/h is safe simply because 20 km/h is safe. The wheel was much safer because it changed the world at 20 km/h. AI is like 1000 km/h.

This feels like people getting spooked by autocomplete in editors.

We're pretty far from AI being able to properly, efficiently, and effectively develop a full system; I believe that before it can take my job, I'll probably be retired or something. And if my feeling is wrong, I'm still sure some form of developer will be needed, even if just to keep the AI running.

I absolutely disagree regarding handwriting, and maybe also on maintaining your own typewriter. The task of producing the written document, and I don't just mean the thoughts conveyed, but each stroke of each letter, was a creative act that many enjoyed. Increasingly, there is only a very small subset of enthusiasts who are "serious" about writing by hand. Most "normal" people don't see the value in it, because if you're getting the idea across, who cares? But I'd wager that if you talked to a monk who had spent their life slaving away in a too-dark room making reproductions of books OR writing new accounts, and showed them the printing press, they would lament that the human joy of putting those thoughts to paper was in and of itself approaching the divine, and an important aspect of what makes us human.

Of course, I don't think you need to go that far back; the main thing that differentiates the pre- and post-printing-press eras is that after the press, the emphasis falls increasingly on the value of the idea, and less on the act of putting it down.

The first iPhone came out in 2007. Seventeen years or less is what it took a modern, connected society to just solve mobile communication.

This includes the development of displays, chips, production, software (iOS, Android), apps, etc.

AI is building upon this speed, needs only software and specialized hardware, and the AI we are currently building is already optimizing itself (Copilot, etc.).

And the output is not something 'new' that changes a few things like navigation, postal service, or banking; it basically/potentially changes everything we do (including the physical world, via robots).

If this is any indication, it's very realistic to assume that the next 5-15 years will be very, very interesting.

I agree with you; it's true. I guess I should have been more precise in saying that AI takes away a much greater proportion of creative work. But of course, horse driving, handwriting, and other such things still involved a level of creativity, which is why, in turn, I am against most technology, especially when its use is unrestricted and unmoderated.

I'm highly sympathetic to your perspective, but it would be hypocritical of me to embrace it entirely. Hitting the spacebar just gives me so much joy (the syncopated negative space of it that you don't get writing by hand, the power of typing "top" and getting a bird's-eye view of your system) that I can't really begrudge the next generation of computing enthusiasts getting that same joy of "simply typing an idea" and getting back a coherent, informed response.

I personally lament the loss of the experience of using a computer that gives the same precision that you'd expect from a calculator, but if I'm being honest, that's been slowly degenerating even without the addition of AI.

Those are tools humans use to create output directly, to speed up a process. The equivalent argument for AI would be if the typewriter wrote you a novel based on what you asked it to write, and then everyone else's typewriter might create the same or a similar novel if it's averaging all of the same human data input. This leads to a cultural inbreeding of sorts, since the data that went into it was curated to begin with.

The real defining thing to remember is that humans don't need AI, but AI needs human data.

Humans also need human data. You might be better than I, but at least for myself, I know that I am just a weighted pattern matcher with some stochasticity mixed in.

I don't think the idea of painstakingly writing out a book, and then having a printing press propagate your book so that all can easily reproduce the idea in their own mind, is so very different.

I think this is why the real conversation here is about the lossiness of the data, where the "data" is conveying a fundamental idea. Put another way, human creativity is iterative, and the reason we accept "innovative" ideas is that we have a shared understanding of a body of work, a canon; the real innovation is taking the canon and mixing in one new idea.

I'm not even arguing that AI is net good or bad for humanity. Just that it really isn't so different than the printing press. And like the Bible was to the printing press, I think the dominant AI model will greatly shape human output for a very long time, as the new "canon" in an otherwise splintered society, for good and for bad.

Proprietary models, with funding and existing reach (like the Catholic Church when the Gutenberg press came along), will dominate the mental space. We already have our Martin Luthers nailing creeds to the door of that church, though.

Still, writing by hand does have special meaning, encoding additional information that is not conveyed by the printing press. But then as now, that additional meaning is mostly accessible only to those closest to you, who share more experiences with you.

I'll accept that there's an additional distinction, though, since layers of communication will be imported and applied without understanding of their context; ideas replaced, filled in, rather than stripped. But let's be honest: every interpretation of a text was already distinct and uniquely an individual's own, albeit likely similar to those that shared an in-group.

AI upsets the balance between producers and consumers, but not in the sense that it's easier for more people to be producers; rather, in this day and age, there is so little time left to be a consumer when everyone you know can be such a prolific producer.

Edit: typewriters and printing presses also need human data

> Just that it really isn't so different than the printing press.

The part that makes the goals of the AI crowd an entirely different beast from things like the printing press is that the printing press doesn't think for anyone. It just lets people reproduce their own thoughts more widely.

The printing press lets people reproduce other people's thoughts more widely. As to reproducing your own thoughts more widely, this is why I was describing a cultural "canon" as being the foundation upon which new ideas can be built. In the AI world, the "new" idea is effectively just the prompt (and iterative direction); everything else is a remix of the canon. But pre-AI, in order for anyone to understand your new idea, you had to mix it into the existing canon as well.

Edit: to be abundantly clear, I'm not exactly hoping AI can do very well. It seems like it's going to excel at automating the parts of software development that I legitimately enjoy. I think that's also true for other creator-class jobs that it threatens.

Humans/life don't need data. Life survives off of experience and evolutionary pressures. Data is a watered-down, digitized form of experience meant as a replication of that experience, the same way you can hear/analyze music on your computer. It's just usually close enough that most people can't tell the difference. All of that was fed by human "data", which makes AI ultimately a copy of evolutionary pressures that it never went through.

Typewriters/printing presses are for faster propagation or execution. AI in the cultural sense is about replication, hence the "artificial intelligence" tag. Typewriters aren't attempting to replicate or substitute; they are tools, like a hammer. They are designed to be operated by humans, since they are analog in nature, like your keyboard. AI doesn't need a keyboard; it operates off our end contributions directly. It cares about the final, digitized form of the novels we feed it, not how we made them or came up with them.

That is the key difference here. It is the same as when someone creates something based on their own direct experiences versus someone who is simply copying something. It is why AI art, for example, is increasingly looking bizarre, in my opinion: it's completely recycled/fake.

I remember when people used to say similar things about using ASM, and then about the craft of writing things in C instead of managed languages like Java.

At the end of the day most people will only care about how well the tool solves the problem and how cheaply. A cheap, slow, dirty solution today tends to win over an expensive, quick, elegant one next year.

Now there are still some people writing ASM, a lot of them as a hobby. Maybe in a few years writing code from scratch will be seen the same way: something very few people have to do, in restricted situations or as a pastime.

Writing code by typing on a keyboard will be just a hobby?

Sure, and who is supposed to understand the code written by AI when we retire? Since writing code by typing on a keyboard will apparently cease to exist, who will write prompts for an AI and put the code together?

Person: Hey AI, build me a website that does a, b and c.

AI: Here you go.

Person: Looks like magic to me, what do I do with all this text?

AI: Push it to Git and deploy it to a web server.

Person: What is a web server? What is a Git?

AI: ... let me google that for you.

Yeah, I'm just not seeing it play out as in the conversation above.

> Sure, and who is supposed to understand the code written by AI when we retire?

Why would anyone need to? Do the product/business people who commission something understand how it is done, or what Git and a web server are? It is based on trust, and if you can show that the AI system consistently achieves at least human-like quality and speed on almost any development task, then there is no need to have a technical person in the loop.

So there could never be a new provider or a new protocol because AI wouldn't be able to use them or create them.

You can just make websites on a pre-approved list.

> So there could never be a new provider or a new protocol because AI wouldn't be able to use them or create them

On what do you base this? Is there some upper bound on the potential of AI reasoning that limits its ability to create anything more complex? I think it is the contrary: it is humans who are bound by hard biological and evolutionary limits; the machine is not.

Show me two AIs talking to each other, agreeing on a protocol, and both successfully implementing it on their side in a way that actually works, then.
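To be fair about what would count, even a toy version of that test is well defined: each side independently implements an agreed message framing, and the two implementations have to interoperate. Here's a minimal sketch of the kind of protocol I mean; the framing (4-byte big-endian length prefix plus UTF-8 payload) and the port are invented for illustration.

    # Toy "agreed protocol": 4-byte big-endian length prefix, then UTF-8 text.
    # The bar in the challenge: two parties write send/recv like this
    # independently, and the bytes still line up on the wire.
    import socket
    import struct
    import threading

    HOST, PORT = "127.0.0.1", 9099  # invented for illustration

    def send_msg(sock, text):
        data = text.encode("utf-8")
        sock.sendall(struct.pack(">I", len(data)) + data)

    def recv_exact(sock, n):
        buf = b""
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("peer closed early")
            buf += chunk
        return buf

    def recv_msg(sock):
        (length,) = struct.unpack(">I", recv_exact(sock, 4))
        return recv_exact(sock, length).decode("utf-8")

    # Listen before the client connects, to avoid a startup race.
    srv = socket.create_server((HOST, PORT))

    def server_side():
        conn, _ = srv.accept()
        with conn:
            send_msg(conn, "ack: " + recv_msg(conn))

    t = threading.Thread(target=server_side)
    t.start()
    with socket.create_connection((HOST, PORT)) as client:
        send_msg(client, "hello")
        print(recv_msg(client))  # prints "ack: hello"
    t.join()
    srv.close()

Here one author wrote both halves, so of course they agree. The challenge is precisely that two models produce the two halves separately and they still interoperate.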

Where did I say that is the current state of its capabilities? My argument was about the future and the perspective on its skills.

If we are writing a scifi novel, sure.

If we mean that the current way of doing things will lead to that… I have strong doubts.

Don't worry, it'll spin up a git repo and an instance for you, as well.

How stable and secure all of this will be though is another question. A rhetorical one.

Have you seen Devin, the AI developer demo?

Business already doesn't know what security is; they will jump head-first into anything that lets them get rid of those weird developer dudes they have to cater to and pay a lot of money.

I personally would also assume that AI might invent a new programming language: something faster, more optimized for AI.

Scripted? Did you get access to it? I’ll believe it when I try it hands-on.

I have already coded a few times with ChatGPT (4). Devin doesn't have to be perfect, but it's clear (in my opinion) that this will get better and better, faster than we think.

GPT-5 will tell us this summer where we're at.

Presumably, the AI would have access to just do all the Git and web server stuff for you. The bigger problem I see would be if the AI just refuses to give you what you ask for.

Person: I want A

AI: Here's B

Person: No, I wanted A

AI: I'm sorry. Let me correct that... Here's B

... ad nauseam.

Or alternatively:

Person: <reasonable request>

AI: I'm sorry, I can't do that

> A cheap, slow, dirty solution today tends to win over an expensive, quick, elegant one next year.

I disagree with this platitude, one reason being the sheer scale of the hidden infrastructure we rely on. Looking at databases alone (Postgres, SQLite, Redis, etc.) shows us that reliable, performant solutions dominate over the others. There are many other examples in other fields: operating systems, protocol implementations, cryptography.

It might be that you disagree on the basis of what you see in day-to-day B2B and B2C business cases, where just solving the problem gets you paid, but then your statements should reflect that too.