As I reread the original post, I'm not actually sure which group I fall into. I think there's a lot of overlap depending on your perspective and how you read it:
> Group 1: intern/boring task executor
Yup, that makes sense; I'm in group 1.
> Group 2: "outsourcing thinking and entire skillset to it - they usually have very little clue in the topic, are interested only in results"
Also me (in this case), as I'm outsourcing the software development part and just want the final app.
So... I've probably thought too much about the originally proposed groups. I'm not sure they're as clear-cut as the original suggests.
False dichotomy is one of the original sins. The two groups as advertised aren't all that's out there. Most people are interested in results. How we get those results is part of the journey of getting them, and sometimes it's about the journey, not the destination. I care very much about the results of my biopsy or my flight, but I don't know much about how we get there; I want to know whether I have cancer, and that my plane didn't crash. I hope that doesn't put me on the B Ark that gets sent into the sun.
I'd say you're still in group 1. Your main goal is not the app but learning German. Creating the app with AI is therefore only a means to an end, a tool, and spending time coding it yourself is not important in this context.
The AI usage was not about learning German but about creating an app. That would be group 2. He may use the tool he made to learn German, but using that tool isn't using AI.
>using that tool isn't using AI
It is, though. The app is using AI underneath to generate audio snippets. That's literally its purpose.
Creating those snippets doesn't require knowing how to make a proper recording, how to edit it down, or how to direct the voice actor for the line.
They could admittedly be better defined, but I think the original commenter missed a key word. It really boils down to whether or not you are offloading your critical thinking.
The word "thinking" can be a bit nebulous in these conversations, and "critical thinking" is perhaps even more ambiguously defined, so before we discuss it, we need to define it. I go with the Merriam-Webster definition: the act or practice of thinking critically (as by applying reason and questioning assumptions) in order to solve problems, evaluate information, discern biases, etc.
LLMs seem to be able to mimic this, particularly to those who have no clue what it means when we call an LLM a "stochastic parrot" or some equally esoteric term. At first I was baffled that anyone really thought LLMs could somehow apply reason or discern their own biases, but I had to take a step back and look at how that public perception was shaped to see what these people were seeing. LLMs, generative AI, ML, etc. are all extremely complex things. Couple that with the pervasive notion that thinking is hard, and you have a massive pool of consumers who are only too happy to offload some of that thinking onto something they may not fully understand but were promised would do what they wanted: make their daily lives a bit easier.
We always get snagged by things that promise us convenience or offer to help us do less work. It's pretty human to desire both of those things, but it's proving to be an Achilles' heel for many. How we characterize AI determines our expectations of it. Do you think of it as a bag of tools you can use to complete tasks? Or is it a whole factory assembly line where you can push a few buttons and a pseudo-finished product comes out the other side?