Early stuff was designed in a network of trusty organizations (universities, labs...). Security wasn't much a concern but it was reasonable given the setting in which it was designed.
This AI stuff? No excuse, it should have been designed with security and privacy in mind given the setting in which it's born. The conditions changed. The threat model is not the same. And this is well known.
Security is hard, so there's some excuse, but it is reasonable to expect basic levels.
It’s really not. AI, like every other tech advance, was largely created by enthusiasts carried away with what could be done, not by top-down design that included all best practices.
It’s frustrating to security people, but the reality is that security doesn’t become a design consideration until the tech has proven utility, which means there are always insecure implementations of early tech.
Does it make any sense that payphones would give free calls for blowing a whistle into them? Obvious design flaw to treat the microphone the same as the generated control tones; it would have been trivial to design more secure control tones. But nobody saw the need until the tech was deployed at scale.
It should be different, sure. But that’s just saying human nature “should” be different.
The payphones giving free calls was far less avoidable, virtually cost nothing to anybody and more importantly, didn't hurt anybody / threaten users' security.
I don't buy into this "enthusiasts carried away" theory; Comet is developed by a company valued at 18 billion US dollars in July 2025 [1]. We are talking about a company that seriously considers buying Google Chrome for $34.5 billion.
They had the money required for 1 person to think 5 minutes and see this prompt injection from page content from arbitrary internet places coming. That's as basic as the simplest SQL injection. I actually can't even imagine how they missed this. Maybe they didn't, and decided to not give a fuck and go ahead anyway.
More generally I don't believe one second that all this tech is largely created by "enthusiasts carried away", without planning and design. You don't deal with multiple billion dollars this way. I will more gladly take "planned carelessness". Unless you are describing, by "enthusiasts carried away", the people out there that want to make quick money without giving any fuck to anything.
> Perplexity AI has attracted legal scrutiny over allegations of copyright infringement, unauthorized content use, and trademark issues from several major media organizations, including the BBC, Dow Jones, and The New York Times.
> In August 2025, Cloudflare published research finding that Perplexity was using undeclared "stealth" web crawlers to bypass Web application firewalls and robots.txt files intended to block Perplexity crawlers. Cloudflare's CEO Matthew Prince tweeted that Perplexity acts "more like North Korean hackers" than like a reputable AI company. Perplexity publicly denied the claims, calling it a "charlatan publicity stunt".
Yeah… I see I blocked PerplexityBot in my nginx config because it was hammering my server. This industry just doesn't give one shit. They respect nobody. Screw them already.
Tech is not blissful and innocent, and certainly not AI. Large scale tech like this is not done by some blissful / clueless dev in their garage, clueless and disconnected from reality. And this lone clueless dev in his garage phantasm actually needs to die. We need people thoughtful of consequences of what they do on other people and on the environment, there's really nothing desirable about someone who doesn't.
[1] https://en.wikipedia.org/wiki/Perplexity_AI
Just me, my soldering iron, my garage, and my $4B of Nvidia H100s
Ok but AI doesn't need a special whistle that 0.1% of people have, you just hane it text by whatever means is available. 100% of users have the opportunity for prompt injection on any site that accepts user input. It's still a fairly different story.