> People outsourcing thinking and entire skillset to it - they usually have very little clue in the topic, are interested only in results, and are not interested in knowing more about the topic or honing their skills in the topic
And this may be fine in certain cases.
I'm learning German and my listening comprehension is marginal. I took a practice test, and one of the exercises was listening to 15-30 seconds of audio followed by questions. I did terribly, but it seemed like a good way to practice. I used Claude Code to create a small app that generates short audio dialogs (via ElevenLabs) and sets of questions. I ran the results by my German teacher and he was impressed.
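For the curious, the audio piece boils down to something like this (a minimal sketch, assuming the ElevenLabs v1 text-to-speech REST endpoint; the API key, voice ID, and dialog text are placeholders, and the step that writes the dialogs and questions is omitted):

```python
# Minimal sketch: turn a short German dialog into an MP3 clip via the
# ElevenLabs text-to-speech endpoint. API key, voice ID, and dialog text
# are placeholders; the real app also generates the dialogs and the
# comprehension questions, which is not shown here.
import requests

API_KEY = "YOUR_ELEVENLABS_API_KEY"  # placeholder
VOICE_ID = "YOUR_VOICE_ID"           # placeholder

dialog = (
    "Guten Tag! Ich hätte gern zwei Brötchen, bitte. "
    "Gern, das macht einen Euro zwanzig."
)

resp = requests.post(
    f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
    headers={"xi-api-key": API_KEY},
    json={"text": dialog, "model_id": "eleven_multilingual_v2"},
)
resp.raise_for_status()

# The response body is the raw audio (MP3 by default).
with open("dialog.mp3", "wb") as f:
    f.write(resp.content)
```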
I'm aware of the limitations: sometimes the audio isn't great (it tends to mess up phone numbers), it can only be a small part of my work learning German, etc.
The key part: I could have coded it myself, but I have other, more important projects. I don't care that I didn't learn about the code. What I care about is that I'm improving my German.
Seems like you are part of the first group then, not the second. The fact that you are interested in learning and are using it as a tool disqualifies you from being someone who has little clue and just wants to get something out (i.e. just spit out code).
As I reread the original post, I'm actually not sure which group I fall into. I think there's a bunch of overlap depending on perspective/how you read it:
> Group 1: intern/boring task executor
Yup, that makes sense I'm in group 1.
> Group 2: "outsourcing thinking and entire skillset to it - they usually have very little clue in the topic, are interested only in results"
Also me (in this case), as I'm outsourcing the software development part and just want the final app.
Soo... I've probably thought too much about the originally proposed groups. I'm not sure they're as clear-cut as the original suggests.
False dichotomy is one of the original sins here: the two groups as advertised aren't all that's out there. Most people are interested in results. How we get those results is part of the journey of getting them, and sometimes it's about the journey, not the destination. I care very much about the results of my biopsy or my flight; I don't know much about how we get there. I want to know whether I have cancer, and that my plane didn't crash. I hope that doesn't put me on the B Ark that gets sent into the sun.
I'd say you're still in group 1. Your main goal is not the app but learning German. Creating the app with AI is therefore only a means to an end, a tool, and spending time coding it yourself is not important in this context.
The AI usage was not about learning German but about creating an app. That would be group 2. He may use the tool he made to learn German, but using that tool isn't using AI.
>using that tool isn't using AI
It is, though. The app is using AI underneath to generate the audio snippets. That's literally its purpose.
Creating those snippets doesn't require knowing how to make a proper recording, how to edit it down, or how to direct a voice actor through the lines.
The groups could admittedly be better defined, but I think the original commenter missed a key word: it really boils down to whether or not you are offloading your critical thinking.
The word "thinking" can be a bit nebulous in these conversations, and critical thinking perhaps even more ambiguously defined, so before we discuss that, we need to define it. I go with the Merriam-Webster definition: the act or practice of thinking critically (as by applying reason and questioning assumptions) in order to solve problems, evaluate information, discern biases, etc.
LLMs seem to be able to mimic this, particularly to those who have no clue what it means when we call an LLM a "stochastic parrot" or some equally esoteric term. At first I was baffled that anyone really thought LLMs could somehow apply reason or discern their own biases, but I had to take a step back and look at how public perception was shaped to see what these people were seeing. LLMs, generative AI, ML, etc. are all extremely complex things. Couple that with the pervasive notion that thinking is hard, and you have a massive pool of consumers who are only too happy to offload some of that thinking onto something they may not fully understand but were promised would do what they wanted: make their daily lives a bit easier.
We always get snagged by things that promise us convenience or offer to help us do less work. It's pretty human to desire both of those things, but it's proving to be an Achilles' heel for many. How we characterize AI determines our expectations of it: do you think of it as a bag of tools you can use to complete tasks, or as a whole factory assembly line, where you push a few buttons and a pseudo-finished product comes out the other side?
This is me, but for writing code. I own a business, and I use Claude Code to build internal tools for myself.
Don't care about code quality; never seen the code. I care whether the tools do the things I want them to do, and they verifiably do.
I'd love to hear about what your tools do.
You're in luck: https://theautomatedoperator.substack.com/
That's the place!
The most fun one is this, which creates listing images for my products: https://theautomatedoperator.substack.com/p/opus-45-codes-ge...
More recently, I'm using Claude Code to handle my inventory management by having it act as an analyst while writing its own tools to access my Amazon Seller account and retrieve the necessary info: https://theautomatedoperator.substack.com/p/trading-my-vibe-...
How do you verify them? How do you verify they do not create security risks?
They only run locally on my machine, and they use properly scoped API credentials. Is there some theoretical risk that someone could get their hands on my Gemini API key? Probably, but it'd be very tough and not a particularly compelling prize, so I'm not altogether too concerned here.
On the verification front, a few examples:
1. I built an app that generates listing images and whitebox photos for my products. Results there are verifiable for obvious reasons.
2. I use Claude Code to do inventory management. It has a bunch of scripts to pull the relevant data from Amazon, then a set of instructions on how to project future sales and determine when I should reorder (a rough sketch of that arithmetic follows this list). It prints the data it pulls from Amazon to the terminal, so that's verifiable. As for following the instructions to come up with reorder dates: if it's way off, I'll know, because I'm very familiar with the brands I own. This is pretty standard manager/subordinate stuff: I put some trust in Claude to get it right, but I have enough context to know if the results are clearly bad. And if they're only off by a little, the result is I incur some small financial penalty (either I reorder too late and temporarily stock out, or I reorder too early and pay extra storage fees). But that's fine; I'm choosing to make that tradeoff, as one always does when one hands off work.
3. I gave Claude Code a QuickBooks API key and use it to do my books. This one horrifies people, but again, I have enough context to know if anything's clearly wrong, and if things are only slightly off, then I'll potentially pay a little too much in taxes. (Though to be fair, it's also possible it screws up the other way and I underpay, in which case the likeliest outcome is that I just saved money, because audits are so rare.)
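For concreteness, the reorder logic in #2 amounts to something like this (a rough sketch with hypothetical numbers; the real version projects future sales from the pulled Amazon data rather than using a flat daily average):

```python
# Rough sketch of reorder-point arithmetic. All numbers are hypothetical;
# in practice avg_daily_sales would come from the Amazon sales history,
# while lead_time_days and safety_days are business judgment calls.
def reorder_point(avg_daily_sales: float, lead_time_days: int, safety_days: int) -> float:
    # Stock needed to cover demand while the new order is in transit,
    # plus a buffer in case sales run hot or the shipment is late.
    return avg_daily_sales * (lead_time_days + safety_days)

avg_daily_sales = 12.5  # hypothetical units/day
on_hand = 480           # units currently in stock (hypothetical)
inbound = 0             # units already on order

rp = reorder_point(avg_daily_sales, lead_time_days=30, safety_days=10)

if on_hand + inbound <= rp:
    print(f"Reorder now: position {on_hand + inbound} <= reorder point {rp:.0f}")
else:
    days_until = (on_hand + inbound - rp) / avg_daily_sales
    print(f"Reorder in about {days_until:.0f} days")
```

Getting this slightly wrong is exactly the small, bounded penalty described above: stock out briefly, or pay a bit of extra storage.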
Not every tool can have a "security risk". I feel that this objection comes from people who see every application as a product, and assume products must be online web apps available to the world.
Let's say I have a 5 person company and I vibe-engineer an application to manage shifts and equipment. I "verify" it by seeing with my own eyes that everyone has the tools they need and every shift is covered.
Before, I either used an expensive SaaS piece of crap for this or did it in Excel. I didn't "verify" the Excel either, and I couldn't control when the SaaS provider updated their end, sometimes breaking features, sometimes adding or changing them.