> I realize the author qualified his or her statement with "know how to harness it," which feels like a cop-out I'm seeing an awful lot in recent explorations of AI's relationship with productivity.
"You're doing AI wrong" is the new "you're doing agile wrong" which was the new "you're doing XP wrong".
Unfortunately, many of us are old enough to remember how those "wrong" ways eventually became the new normal, wrong way and all.
At this point I don't even care to argue with people who disagree with me on this. History will decide the winner and if you think you're not on the losing side, best of luck.
I've been watching bad technology decisions win since before some of the people on this forum were born.
I mean, there's no real sharp edge here. If these things ultimately get to the point where they are demonstrably useful, sceptics will presumably adopt them. Given that the ecosystem changes drastically month-to-month, there's little obvious benefit to being an early adopter.
Actually, I think Pascal's wager goes the other way. We're in a giant game of musical chairs in the software industry, and the people who will keep getting seats at each round are the overall most skilled and capable of using AI, and this is going to bias more towards AI skill over time.
The consequence in this model of not being an early AI adopter is that unless you're a rock star performer already, you're going to fall behind the curve and get ejected from the game of software engineering early. The people who stay in the game until the end will be the ones that have vision now.
If I'm wrong, then all the people who learned AI now will have just wasted a few years on a low-value skill, which is honestly the story of the entire history of tech (hello, Flash devs?), i.e. not an existential threat to engineers.
> The consequence in this model of not being an early AI adopter is that unless you're a rock star performer already, you're going to fall behind the curve and get ejected from the game of software engineering early.
This is assuming that AI _currently improves productivity_. There's little empirical evidence for this (there is evidence that it causes people to believe that they themselves are more productive, but that's not very useful; _all sorts_ of snake oil cause people to believe that they themselves are more productive).
My baseline assumption right now would be that AI does not, in aggregate, improve productivity (at least in software engineering; it may in some other fields); if it ever _does_, then sure, I'll probably start using it?
AI 100% does currently improve productivity when used correctly. You can say that's a no true Scotsman, but you can look at my company's GitHub page to see that I'm delivering.
AI delivers results when you understand its characteristics and build a workflow around it designed to play to its strengths. AI doesn't deliver huge results when you try to shoehorn it into AI-unfriendly workflows. Even if you took the Stanford 95% study at face value (which you shouldn't, there are a lot of methodological issues), there are still 5% of projects that are returning value, and it's not random, it's process differences.
I'd say you're a bit biased.
I am biased, but by my personal experience, not the desire to sell anything to anyone. I don't even have products for sale right now, I'm just building free things to try and help people figure out how to ride this wave safely.
> I don't even have products for sale right now
The link in your bio is a long series of tools for various personas with a "schedule a consultation" link next to each one. I'm not sure what "consultation" is if not "a product". But maybe they're all free?
Please find a post where I've shilled anything that I charge money for. I explicitly don't mention that I do consulting here in the spirit of the community. I'm fine with being called out for a strongly pro-AI stance, but I would appreciate not having my patterns of frankness/honesty/community engagement called into question spuriously.
I do not think you are shilling anything at all - merely that you are biased, and that bias is somewhat driven by a source of income that you link to your Hacker News identity. If you were a Forth hacker with a Forth consultancy, the same thing would apply.
That said, I did not intend to call out your bias as a means of questioning your honesty and I apologize if my communication came across as doing so!
That you, in turn, do not immediately disclose your inevitable biases makes me dubious of your motives. We all have biases; it's important to be clear-eyed and forthright about them.
You mean "It can’t be that stupid, you must be prompting it wrong"
My favorite is when someone is demoing something that AI can do and they have to feed it some gigantic prompt. At that point, I often ask whether the AI has really made things faster/better or if we've just replaced the old way with an opaque black box.
The gigantic prompt has a side benefit though: it's documentation on design and rationale that otherwise would typically not get written.
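For illustration (a made-up prompt, not from any real session), the kind of "gigantic prompt" people balk at often reads like a design note:

    Refactor the invoice exporter to stream rows instead of buffering the
    whole file. Constraints: column order can't change, a legacy import
    consumes the CSV; memory ceiling is 512 MB, the job shares a container
    with the scheduler; keep the retry wrapper, it exists because the
    network mount flakes under load.

Almost none of that rationale would normally survive into a commit message.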
Until now, it was a code smell if you needed that kind of write-up often. There are exceptions, but they're a small minority.
Also, the design and rationale humans need are different from what LLMs need. Even what the humans writing code/documentation think is needed differs from what readers actually need; that's why we have so much bad documentation. There are a ton of Apache projects whose documentation is more of a burden than a help: long and absolutely useless.
> Until now, it was a code smell if you needed that kind of write-up often. There are exceptions, but they're a small minority.
Documentation for a system, particularly a rationale, is never a code smell.
> There are a ton of Apache projects whose documentation is more of a burden than a help: long and absolutely useless.
LLM prompts are short and to the point by comparison; that's part of my point.
> is never a code smell
    /* Mutates the sort order of the original list as a side effect when used in a pipeline <- you have an architectural problem; this happens, for example, in Splunk */
    map()

    /* You need to call these in this exact order, one after another <- your architecture is terrible */
    processFirst()
    processSecond()
    processThird()

    /* We did this unusual thing because we hate encapsulation <- obvious paraphrase, and a lie; you were just lazy, or you didn't have time */
    class A {
        public static String x;
    }
    // ...and in unrelated code:
    A.x = "something";

    /* We merged these two classes because they looked similar; the code is full of switches and ifs to tell them apart, each explained one by one <- do I need to explain this? */
    class TwoCompletelyUnrelatedThingsInOne
> that's part of my point
> The gigantic prompt
It clearly was not.
Null argument. I'd rather have developers who do system design before the system implementation.
Developers typically do system design beforehand even when they don't document it; my point is that the design process just isn't recorded properly. A development environment that records prompt history and lets you query it with an LLM is a goldmine for auditing and for avoiding the pitfalls of Chesterton's fence.
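As a minimal sketch of what that could look like (the file name, format, and class here are my own assumptions, not any existing tool's API): append every prompt to a log that lives next to the code, and query it later when someone asks why a fence is where it is.

    // Minimal prompt-history log: append-only file plus a naive keyword query.
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.StandardOpenOption;
    import java.time.Instant;
    import java.util.List;

    public class PromptLog {
        private static final Path LOG = Path.of("prompt-history.log");

        // Record a prompt together with the file it concerned.
        static void record(String file, String prompt) throws IOException {
            String entry = Instant.now() + "\t" + file + "\t"
                    + prompt.replace("\n", "\\n") + System.lineSeparator();
            Files.writeString(LOG, entry,
                    StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        }

        // Naive query: return every logged prompt mentioning a keyword.
        // An LLM summarizing these entries would be the interesting version.
        static List<String> query(String keyword) throws IOException {
            if (!Files.exists(LOG)) return List.of();
            return Files.readAllLines(LOG).stream()
                    .filter(line -> line.contains(keyword))
                    .toList();
        }

        public static void main(String[] args) throws IOException {
            record("Order.java", "Merge the two discount paths; keep the legacy flag for EU invoices");
            query("discount").forEach(System.out::println);
        }
    }

Even something this crude preserves the "why" that usually evaporates between the design discussion and the committed code.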
More like the new "you're holding it wrong"