There's been a strong theme here on HN recently of confusing programming (the act of writing code to meet specifications) with Engineering (the writing of specifications, the oversight of that process, and the review of testing).

AI is definitely not ready for an Engineering role. My recent experience with ChatGPT5 (Preview) via Visual Studio Code tells me that it might perform acceptably as a junior programmer. However, because I'm an old, retired, self-taught programmer who only ever managed one other programmer, I lack the experience to know what's acceptable for a junior programmer at FAANG and elsewhere.

> There's been a strong theme here on HN recently of confusing programming (the act of writing code to meet specifications) with Engineering (the writing of specifications, the oversight of that process, and the review of testing).

You're making a distinction that might be interesting in some esoteric sense, but that doesn't exist in the industry. When it comes to software, the architects are also the construction crew. There's nothing to confuse if the terms are effectively synonymous.

> You're making a distinction that might be interesting in some esoteric sense, but that doesn't exist in the industry.

Sure it exists. Not everywhere, not for all software, but it does exist.

I think this is generally true, and there are SO MANY blog posts and articles making the same point. Using something like Claude Code to build an entire SaaS from nothing can seem like magic, but there is still a point where the codebase gets too big and any LLM will lose context.

But there is a "sweet spot" where it's amazing: highly targeted tasks with a specific context. I wanted a simple media converter app that tied into ffmpeg, and I didn't want to install any of the spammy or bloated options I found... so I got Claude to build one. It took about 30 minutes and works great.
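To give a sense of how thin that kind of wrapper can be, here is a minimal sketch (assuming ffmpeg is already on the PATH; the names below are illustrative, not the actual code Claude produced):

  import subprocess
  import sys

  def convert(src: str, dst: str) -> None:
      """Convert src to dst, letting ffmpeg infer formats from the file extensions."""
      result = subprocess.run(["ffmpeg", "-y", "-i", src, dst],
                              capture_output=True, text=True)
      if result.returncode != 0:
          raise RuntimeError("ffmpeg failed:\n" + result.stderr)

  if __name__ == "__main__":
      if len(sys.argv) != 3:
          sys.exit("usage: convert.py input output")
      convert(sys.argv[1], sys.argv[2])

Everything beyond that (format presets, progress reporting, a small GUI) is presumably where the 30 minutes went.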

I also asked it to update a legacy project, and it fell into a testing loop because it failed to understand that the testing database was missing. Had I not had years of experience, I would've followed the output and suggestions Claude was giving and spent hours on it... but a simple command fixed it. As with all new tech, your mileage will vary.

"Programmer" as a separate role to "engineer" (where the programmer merely implements specs devised by someone else) isn't really a common role at all these days, except arguably for super-junior people.

Anecdotes are unreliable. For one, your described use case and the tools you're using suggest you're at a very basic level, unable to extract the full capabilities of the tooling and models that many of us use to tackle, if not complete, complex software.

Just be aware that AI is a tool, not a replacement; but a human adept at using AI as a tool will replace the human who isn't.

No offense, but your experience with AI is fairly primitive if that's where you're at.

Please elaborate.

Share what you built, how you prompted it, what you're making from it, and how many tokens you paid for it to rummage through.

I didn't want to get into the details, because I've already talked about BitGrid here endlessly, and was trying to stay on the topic of AI usefulness, but since you asked.

I'm trying to build a software stack that can eventually take something like a PyTorch model, and unwind everything, resulting in a directed acyclic graph of individual bit-level operations (OR, AND, XOR). That graph will then be compiled into a bitstream suitable for an FPGA-like substrate that eliminates the memory/compute divide, the BitGrid[1].
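As a toy illustration of that target representation (not the actual BitGrid toolchain), here is what lowering a 1-bit full adder into an AND/OR/XOR DAG looks like; a real pipeline would start from PyTorch ops and fixed-point arithmetic, but it bottoms out in the same kind of graph:

  from dataclasses import dataclass

  @dataclass(frozen=True)
  class Node:
      op: str           # "IN", "AND", "OR", or "XOR"
      args: tuple = ()  # predecessor nodes
      name: str = ""    # only used for "IN" nodes

  def full_adder(a, b, cin):
      """Lower a 1-bit add-with-carry into single-bit gates."""
      s1 = Node("XOR", (a, b))
      total = Node("XOR", (s1, cin))
      c1 = Node("AND", (a, b))
      c2 = Node("AND", (s1, cin))
      cout = Node("OR", (c1, c2))
      return total, cout

  def evaluate(node, env):
      """Walk the DAG and compute a bit value from the input assignment."""
      if node.op == "IN":
          return env[node.name]
      x, y = (evaluate(arg, env) for arg in node.args)
      return {"AND": x & y, "OR": x | y, "XOR": x ^ y}[node.op]

  a, b, cin = Node("IN", name="a"), Node("IN", name="b"), Node("IN", name="cin")
  total, cout = full_adder(a, b, cin)
  print(evaluate(total, {"a": 1, "b": 1, "cin": 0}),  # sum bit   -> 0
        evaluate(cout,  {"a": 1, "b": 1, "cin": 0}))  # carry out -> 1

The hard part comes after the graph exists: placing and routing it onto the grid, which is the FPGA-style problem described next.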

FPGA routing is a non-trivial problem; I'm hoping to get routing for BitGrid down to seconds. I'm currently trying to build the software stack to make it usable.

The goal is to answer questions about BitGrid:

  How efficiently can I pack a program into the hardware?

  Is the model I've chosen for a cell optimal?

  How many femtojoules per operation would a cell actually take?

If the answers are favorable, then in the deep (and improbable) future, it's possible that there could be a set of racks holding an array of thousands of these devices, resulting in a system that could stream ChatGPT at an aggregate rate of a gigatoken per second, for far less than the trillion dollars Meta plans to spend.

This isn't just some CRUD application with a web front end. There are a number of layers of abstraction at play, and the LLMs seem to handle it well if you limit the depth under consideration.

[1] BitGrid eliminates the traditional memory/compute divide that causes most of the energy consumption of CPUs, GPUs, and other accelerators. Even FPGA systems tend to focus on emulating those models, and on routing fabric tuned for minimum latency rather than maximum performance. Because all the active lines only reach nearest neighbors, power consumption for a given operation can be far lower than with the traditional approach.
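To make the nearest-neighbor point concrete, here is a deliberately simplified simulation. The cell model is hypothetical, chosen for illustration rather than matching the real BitGrid cell: each cell's output bit is a lookup on its four neighbors' previous outputs, so no signal ever travels further than one cell per tick:

  import random

  W, H = 8, 8
  # One 16-entry lookup table per cell (packed into a 16-bit int), plus one bit of state.
  luts  = [[random.randrange(1 << 16) for _ in range(W)] for _ in range(H)]
  state = [[random.randrange(2) for _ in range(W)] for _ in range(H)]

  def step(state):
      """One synchronous tick: every cell reads only its four nearest neighbors."""
      nxt = [[0] * W for _ in range(H)]
      for y in range(H):
          for x in range(W):
              n = state[(y - 1) % H][x]
              s = state[(y + 1) % H][x]
              e = state[y][(x + 1) % W]
              w = state[y][(x - 1) % W]
              idx = (n << 3) | (s << 2) | (e << 1) | w  # 4-bit neighbor pattern
              nxt[y][x] = (luts[y][x] >> idx) & 1       # table lookup, one bit out
      return nxt

  for _ in range(4):
      state = step(state)
  print(state[0])

Whether a cell should look like that at all is exactly the "is the model I've chosen for a cell optimal?" question above; the sketch only shows the wiring discipline.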

PS: I pay $10/month for GitHub Copilot, which apparently now includes ChatGPT5.

You can't proclaim any sort of knowledge about AI's current capabilities by opening up a codebase and typing a couple of prompts into the VS Code agent.

What are you building? How? How much is it making you?

I agree; using "ChatGPT5 in Visual Studio Code" screams unfamiliarity with what's current.

Just stop, yeah?