This is a vibe coding Trojan horse article.
> That’s the real story. Not the script, but how AI changes the calculus of what’s worth our time.
Looking at the GitHub source code, I can instantly tell. It's also full of gotchas.
Ugh. I appreciate the tool and I suppose I can appreciate AI for making the barrier to entry for writing such a tool lower. I just don't like AI, and I will continue with my current software development practices and standards for production-grade code - code coverage, linting, manual code reviews, things like that.
At the same time though I'm at a point in my career where I'm cynical and thinking it really doesn't matter because whatever I build today will be gone in 5-10 years anyway (front-end mainly).
Is it worth it for everything? If you need a bash script that takes some input and produces some output, does it matter if it's from an AI? It still has to get through code review, and the person who made it has to read through it before code review so they don't look like an ass.
Yeah, recently I needed a script to ingest individual JSON files into an SQLite DB. I could have spent half the day writing it, or asked an AI to write it and spent 10 minutes checking that the data in the DB is correct.
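For context, that kind of throwaway ingestion script really is small. A minimal sketch, assuming each file holds one flat JSON object with consistent keys (the table name and `name`/`value` fields here are made up for illustration):

```python
import json
import sqlite3
from pathlib import Path

def ingest(db_path: str, json_dir: str) -> int:
    """Load every .json file in json_dir into a single SQLite table.

    Assumes each file contains one flat object with 'name' and 'value' keys.
    Returns the number of files ingested.
    """
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS records ("
        "id INTEGER PRIMARY KEY, name TEXT, value REAL)"
    )
    count = 0
    for path in sorted(Path(json_dir).glob("*.json")):
        record = json.loads(path.read_text())
        conn.execute(
            "INSERT INTO records (name, value) VALUES (?, ?)",
            (record["name"], record["value"]),
        )
        count += 1
    conn.commit()
    conn.close()
    return count
```

The 10-minute check then amounts to a couple of `SELECT COUNT(*)` and spot-check queries against the resulting DB.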
There are plenty of non critical aspects that can be drastically accelerated, but also plenty of places where I know I don't want to use today's models to do the work.
I worked with a contractor who had AI write a script to update a repository (essentially doing a git pull). But for some strange reason it was using the GitHub API instead of git. The best part: if the token wasn't set up properly, it overwrote every file (including itself) with 404 responses.
Ingesting JSON files into SQLite should only take half a day if you're doing it in C or Fortran for some reason (maybe there is a good reason). In a high-level language it shouldn't take much more than 10 minutes in most cases, I would think?
regarding how long the ingestion should take to implement, I'm going to say: it depends!
It depends on how complex the models are, because now you need to parse your models before inserting them, which means your tables need to be in the right format. Then you need your loops: for each file you might have to insert anywhere between 5 and 20 nested entities. And then you either have to use an ORM or write each SQL query by hand.
All of which I could do obviously, and isn't rocket science, just time consuming.
Sure, if the JSON is very complicated it makes sense that it could take a lot longer (but then I wouldn't really trust the AI to do it either...)
The author literally says this is vibe-coded. You even quoted it. How the hell is this "Trojan horse"? Did the Greeks have a warning sign saying "soldiers inside" on their wooden horse?
No, but it lacked the product safety leaflet.
Because it’s not in the title, and I personally prefer up-front warnings when generative “AI” is used in any context, whether it’s image slop or code slop
I'm not a go developer and this kind of thing is far from my area of expertise. Do you mind giving some examples?
As far as I can tell skimming the code, and as I said, without knowledge of Go or the domain, the "shape" of the code isn't bad. If I got any vibes (:)) from it, it was the lack of error handling and the over-reliance on exact string matching. Generally speaking, it looks quite fragile.
FWIW I don't think the conclusion is wrong. With limited knowledge he managed to build a useful program for himself to solve a problem he had. Without AI tools that wouldn't have happened.
There's a lot about it that isn't great: it treats Go like a scripting language; it's got no structure (1000+ lines in a single file); nothing is documented; the models are flat, with no methods; it hard-codes lots of strings; even the flags are handled with string comparisons instead of the standard flag package; regexes are compiled and used inline; device support is limited to some pre-configured, hard-coded strings; and storage device speeds are assumed from the device name: nvme=fast, hdd=slow, etc.
On the whole, it might work for now, but it'll need recompiling for new devices, and is a mess to maintain if any of the structure of the data changes.
If a junior in my team asked me to review this, they'd be starting again; if anyone above junior PR'd it, they'd be fired.
> Generally speaking, it looks quite fragile
I have a USB-to-SATA adapter plugged in and it's labeled as [Problem].