>Such as?
It's crazy that experiences still vary so wildly that we get people using this question as a 'valid' gotcha.
AI works for the vast majority of nowhere-near-the-edge CS work -- you know, all the stuff most people have to do every day.
I don't touch any kind of SQL manually anymore. I don't touch iptables or UFW. I don't touch polkit, dbus, or any other human-hostile IPC anymore. I don't write cron jobs or systemd unit files. I query for documentation rather than slogging through a stupid web wiki or equivalent. A decent LLM does it all with fairly easy 5-10 word prompts.
Ever do real work with a mic and speech-to-text? It's 50x'd by LLM support. Gone are the days of saying "H T T P COLON FORWARD SLASH FORWARD SLASH W W W".
This isn't some untested frontier land anymore. People who embrace it find it really empowering except at the edges, and even those state-of-the-art edge people are using it to do the crap work.
This whole "Yeah, well let me see the proof!" ostrich-head-in-the-sand thing works about as long as it takes for everyone to make you eat their dust.
People ask for examples because they want to know what other people are doing. Everything you mention here is VERY reasonable. It's exactly the kind of stuff no one is going to be surprised that you are getting good results with the current AI. But none of that is particularly groundbreaking.
I'm not trying to marginalize your or anyone else's usage of AI. The reason people are saying "such as" is to gauge where the value lies. The US GDP is around 30T, and right now there's something like ~12T reasonably involved in the current AI economy. That's massive company valuations and data center and infrastructure build-out, much of it underpinning and heavily influencing traditional sectors of the economy, with a real risk of going down the wrong path.
So the question isn't what AI can do -- it can do a lot, and even very cheap models can handle most of what you have listed. The real question is what the cutting-edge, state-of-the-art models do so much better that the added value justifies such a massive economic presence.
That's all well and good, but what happens when the price to run these AIs goes up 10x or even 100x?
It's the same model as Uber, and I can't afford Uber most of the time anymore. It's become cost-prohibitive just to take a short ride, but it used to cost like $7.
It's all fun and games until someone has to pay the bill, and these companies are losing many billions of dollars with no end in sight for the losses.
I doubt the tech and costs for the tech will improve fast enough to stop the flood of money going out, and I doubt people are going to want to pay what it really costs. That $200/month plan might not look so good when it's $2000/month, or more.
Why not try it yourself? Inference providers like BaseTen and AWS Bedrock have perfectly capable open source models as well as some licensed closed source models they host.
You can use "API-style" pricing on these providers, which is more transparent about costs. It's very likely to end up at more than $200 a month, but the question is: are you going to see more than that in value?
For me, the answer is yes.
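A back-of-the-envelope sketch of that API-style cost math. The per-token rates below are hypothetical placeholders, not any provider's actual pricing -- check the provider's price sheet before drawing conclusions:

```python
# Monthly cost under API-style (pay-per-token) pricing.
# Rates are assumed for illustration only.

INPUT_PER_MTOK = 3.00    # $ per million input tokens (assumed)
OUTPUT_PER_MTOK = 15.00  # $ per million output tokens (assumed)

def monthly_cost(input_mtok: float, output_mtok: float) -> float:
    """Dollar cost for one month, given usage in millions of tokens."""
    return input_mtok * INPUT_PER_MTOK + output_mtok * OUTPUT_PER_MTOK

# e.g. a heavy coding month: 40M input tokens, 8M output tokens
print(monthly_cost(40, 8))  # 240.0
```

Even at these made-up rates, heavy agentic use blows past a flat $200/mo plan quickly, which is exactly why the value question matters.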
What makes you think I haven't tried it myself?
The "costs" are subsidized, it's a loss-leader.
It's an important concern for those footing the bill, but I expect the companies actually facing that impact to do a cost-benefit calculation and use a mix of models.

For the sorts of things GP described (iptables, recalling how to scan open ports on the network -- the kinds of questions you could usually answer yourself with 10-600 seconds in a manpage / help text / Google search / Stack Overflow thread), local/open-weight models are already good enough and fast enough on a lot of commodity hardware. Right now, companies might offload even those queries to the frontier $200/mo plan, because why not: tokens are plentiful and it's already being paid for. If that plan goes to $2000/mo with tighter token limits, you'd save it for the actually important or latency-sensitive work and use lower-cost local models for the simpler stuff. That lower-cost option might require a $2000 GPU to be really usable, but it pays for itself quickly by comparison.

To use your Uber analogy: people might have used it to get both downtown and to the airport, but now that it's way more expensive, they'll take a bus or walk or drive downtown instead -- while the airport trip, even though it costs more than it used to, is still attractive against competing alternatives like taxis or long-term parking.
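To make that cost-benefit calculation concrete, here's a toy break-even sketch. Every figure (the GPU price, the plan price, the share of queries a local model can absorb) is an assumption for illustration, not real pricing:

```python
# Toy break-even: buying a local GPU to offload simple queries
# vs. paying for them on a hypothetical frontier plan.
# All figures are illustrative assumptions.

GPU_COST = 2000.0             # one-time local hardware cost (assumed)
PLAN_COST_PER_MONTH = 2000.0  # hypothetical future frontier plan price
SIMPLE_QUERY_SHARE = 0.6      # fraction of usage a local model handles (assumed)

def months_to_break_even() -> float:
    """Months until the GPU pays for itself, if offloading the
    simple share of queries avoids that fraction of plan spend."""
    monthly_savings = PLAN_COST_PER_MONTH * SIMPLE_QUERY_SHARE
    return GPU_COST / monthly_savings

print(round(months_to_break_even(), 2))  # 1.67
```

Under these assumptions the hardware pays for itself in under two months, which is why a mix of local and frontier models becomes attractive the moment frontier pricing climbs.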
None of that is concrete, though; it's all alleged speed-ups with no discernible (though a lot of claimed) impact.
> This whole "Yeah, well let me see the proof!" ostrich-head-in-the-sand thing works about as long as it takes for everyone to make you eat their dust.
People will stop asking for the proof when the dust-eating commences.