I don't use AI at all, primarily because I believe it's harmful, and I am quite mindful of these things.
I've observed colleagues who have used it extensively. I've often been a late adopter of things that carry unspecified risk, and AI was already on par with Pandora's box in my estimation when the weights were first released. I'm usually well ahead of the curve on these things, and accurately so.
Objectively, I've found these colleagues' attitudes, mental alacrity, work product, and abstract reasoning skills have degraded significantly relative to their pre-AI work. They tried harder, got more actual work done, and could converse easily and quickly before. Now it's "let me get back to you," and you get emails that have quite clearly been put through an LLM, with no real reasoning happening.
What's worse is that it's happened in ways they largely do not notice, and when objective observations are pointed out, they don't take kindly to the feedback, even though the issue isn't with them but with their AI use and the perceptual blind spots it exploits. Many seem to be adopting the destructive behaviors common to addicts.
I think, given sufficient time, this trend will be recognized, but not before it causes significant adverse effects.
I see AI the same way we see calculators: they don’t make us worse at math, they just offload repetitive computation.
The core question is not “are we degrading,” but rather: are we thinking better with better tools? Personally, I use AI only to reduce boilerplate and explore alternatives — the decision-making and abstraction stays on me.
If someone starts thinking less because of tools, the problem isn't the tool — it's how it's used.
This is a misplaced, circular argument, but to each their own. I value my life, and by extension my mind, quite highly.
Those who use these tools seem to become dumber in ways they do not notice. By the same token, I become smarter in relative terms just by sticking to my guns and limiting my exposure.
If you use a tool whose primary consequence is that you are damaged and diminished each time you use it, and in most cases this happens in a way you cannot recognize, how do you ever stop? If you cannot know how to use it safely, and you cannot recognize the mechanism or the issue, what is left?
If it alters your ability to perceive, you certainly can't decide something when you don't recognize the need to decide.
If the factors required for that decision are outside your perception, and the connections needed to reach a correct decision no longer exist, there isn't anything you can do.
You fall back on the old argument that it's just a tool, that the choice rests with the person who is responsible rather than the tool, and yet the person doesn't, or more likely cannot, notice or recognize the damage happening.
It's a very rare person who is capable of introspection at such a subtle degree. There is also no informed consent about the danger, so as for all the children being force-fed this stuff as GenAI, when the data finally comes in, well, I don't want to think about a future like that, where there may be no future for them at all.
The decision-making process requires faculties you may no longer have, and while you may falsely continue to believe you do, you've been blinded; when that happens, you have definitionally entered a state of delusion. Quite a lot of delusional people don't realize they've gone off the deep end; it's a perceptual deficit.
Who knows, maybe it will go as far as delirium as the debasement progresses and you unravel as a sentient person.
We all have psychological blind spots, and there is one blind spot above all others that we have no defense against, called distorted reflected appraisal.
There are some things where the issue lies directly with the tool, not with how it's used.