When it comes to the accuracy of predictions, or the lack thereof, it can be educational to look back in time. People in the late nineteenth century wrote down a lot of what in retrospect is hyperbole, nonsense, and rubbish. Some of it is pretty entertaining. The most outrageous predictions actually got some things right while completely missing the point at the same time. Jules Verne, for example, had a pretty lively imagination. We went to the moon. But not by cannon ball. And there wasn't a whole lot there to see and do. And flying around the world takes a lot less than 80 days. Even in a balloon it can be done a lot quicker.

I was born in the seventies. Much of what is science fact today was science fiction then. And much of it was naive and enlightening at the same time.

My point is that nothing has changed when it comes to people's ability to predict the future. The louder people claim to know what it all means, or rush to explain it to others, the more likely it is that they are completely and utterly missing the point. And probably in ways that will make them look pretty foolish in a few decades. Most people are just flailing around in the dark. And some of the crazier ones might actually be the ones to listen to. But you'd be well advised to filter out their interpretations and attempts to give meaning to it all.

HAL, Marvin the Paranoid Android, KITT, C-3PO, R2-D2, Skynet, Data, and all the other science fiction AIs from my youth are now pretty much science fact. Some of them actually look a bit slow and dim-witted in comparison. Are we going to build better versions of these? I'd be very disappointed in the human race if we didn't. And I'd also be disappointed if that ends up resembling the original fantasies of those things. I don't think many people are capable of imagining anything more coherent than versions of themselves dressed up in some glossy exterior. Which is of course exactly what C-3PO is: very relatable, a bit stupid, and clownish. But also, why would you want such a thing? And the angry Austrian bodybuilder version of it of course isn't any better.

I think the raw facts are that we've invented some interesting software that passes the Turing test pretty much with flying colors. For much of my life that was the gold standard for testing AIs. I don't think anyone has bothered to actually deal with the formalities of letting AIs take that test and documenting the results in a scientific way. The test obviously became obsolete before anyone got around to it. We now worry about the abuse of AIs to deceive entire populations, with AIs pretending to be humans and manipulating people. You might actually have a hard time convincing people who have been deceived in such a way that what they saw and heard was real. We imagined it would be hard to convince people that AIs are human. We failed to imagine that convincing them they are not would be much harder.
