We're in strange new times, but the equivalence of human and synthetic cognition will likely become mainstream and mundane in the coming years.

Sci-fi has long used the "cyborg" as a plot device, but walk down the street in NYC today and you'll pass thousands of people with pacemakers, artificial hips, insulin pumps, colostomy bags, and prosthetics; people who've had laser eye surgery or transplanted organs; people wearing smartwatches that track heart rate, steps, and sleep quality, or continuous glucose monitors.

We don't marvel at the cyborgs among us; we just accept this as modern medicine. Similarly, we've gotten used to internet search and GPS turn-by-turn navigation, and Gen Z and younger will probably accept the integration of genAI into their everyday lives as seamlessly and casually as we accepted our cyborgification.

You can say that an AI model can only "be trained, not educated" in the same way you can argue that a submarine doesn't swim. But does that really matter to any of the people using it?

You're preoccupied with semantics and romantic notions of blurred lines between people and software, rather than with the actual reality of what a model is and who tends to control it. The "people" training models are mostly massive business interests that exist to generate profit.