How does this compare to the way security was implemented by early websites, internet protocols, or telecom systems?

Early stuff was designed in a network of trusty organizations (universities, labs...). Security wasn't much a concern but it was reasonable given the setting in which it was designed.

This AI stuff? No excuse, it should have been designed with security and privacy in mind given the setting in which it's born. The conditions changed. The threat model is not the same. And this is well known.

Security is hard, so there's some excuse, but it is reasonable to expect basic levels.

It’s really not. AI, like every other tech advance, was largely created by enthusiasts carried away with what could be done, not by top-down design that included all best practices.

It’s frustrating to security people, but the reality is that security doesn’t become a design consideration until the tech has proven utility, which means there are always insecure implementations of early tech.

Does it make any sense that payphones would give free calls for blowing a whistle into them? Obvious design flaw to treat the microphone the same as the generated control tones; it would have been trivial to design more secure control tones. But nobody saw the need until the tech was deployed at scale.

It should be different, sure. But that’s just saying human nature “should” be different.

The payphones giving free calls was far less avoidable, virtually cost nothing to anybody and more importantly, didn't hurt anybody / threaten users' security.

I don't buy into this "enthusiasts carried away" theory; Comet is developed by a company valued at 18 billion US dollars in July 2025 [1]. We are talking about a company that seriously considers buying Google Chrome for $34.5 billion.

They had the money required for 1 person to think 5 minutes and see this prompt injection from page content from arbitrary internet places coming. That's as basic as the simplest SQL injection. I actually can't even imagine how they missed this. Maybe they didn't, and decided to not give a fuck and go ahead anyway.

More generally I don't believe one second that all this tech is largely created by "enthusiasts carried away", without planning and design. You don't deal with multiple billion dollars this way. I will more gladly take "planned carelessness". Unless you are describing, by "enthusiasts carried away", the people out there that want to make quick money without giving any fuck to anything.

> Perplexity AI has attracted legal scrutiny over allegations of copyright infringement, unauthorized content use, and trademark issues from several major media organizations, including the BBC, Dow Jones, and The New York Times.

> In August 2025, Cloudflare published research finding that Perplexity was using undeclared "stealth" web crawlers to bypass Web application firewalls and robots.txt files intended to block Perplexity crawlers. Cloudflare's CEO Matthew Prince tweeted that Perplexity acts "more like North Korean hackers" than like a reputable AI company. Perplexity publicly denied the claims, calling it a "charlatan publicity stunt".

Yeah… I see I blocked PerplexityBot in my nginx config because it was hammering my server. This industry just doesn't give one shit. They respect nobody. Screw them already.

Tech is not blissful and innocent, and certainly not AI. Large scale tech like this is not done by some blissful / clueless dev in their garage, clueless and disconnected from reality. And this lone clueless dev in his garage phantasm actually needs to die. We need people thoughtful of consequences of what they do on other people and on the environment, there's really nothing desirable about someone who doesn't.

[1] https://en.wikipedia.org/wiki/Perplexity_AI

Just me, my soldering iron, my garage, and my $4B of Nvidia H100s

Ok but AI doesn't need a special whistle that 0.1% of people have, you just hane it text by whatever means is available. 100% of users have the opportunity for prompt injection on any site that accepts user input. It's still a fairly different story.

1. It's novel, meaning we have time to stop it before it becomes normalized.

2. It's a whole new category of threat vectors across all known/unknown quadarants.

3. Knowing what we know now vs. then, it's egregious and not naive, contextualizing how these companies operate and treat their customers.

4. There's a whole population of sophisticated predators ready to pounce instantly, they already have the knowledge and tools unlike in the 1990s.

5. Since it's novel, we need education and attention for this specifically.

Should I go on? Can we finally put to bed the thought-limiting midwit take that AI's flaws and risks aren't worth discussion because past technology has had flaws and risks?

Must we learn the same lessons over and over again? Why? Is our industry particularly stupid? Or just lazy?

Rather: it's perpetually in a rush for business reasons, and concerned with convenience. Security generally impedes both.

Information security is, fundamentally, a misalignment of expected capabilities with new technologies.

There is literally no way a new technology can be "secure" until it has existed in the public zeitgeist for long enough that the general public has an intuitive feel for its capabilities and limitations.

Yes, when you release a new product, you can ensure that its functionality aligns with expectations from other products in the industry, or analogous products that people are already using. You can make design choices where a user has to slowly expose themselves to more functionality as they understand the technology deeper, but each step of the way is going to expose them to additional threats that they might not fully understand.

Security is that journey. You can just release a product using a brand new technology that's "secure" right out of the gate.

I'm sorry but that's a pathetic excuse for what's going on here. These aren't some unpredictable novel threats that nobody could've reasonably seen coming.

Everyone who has their head screwed on right could tell you that this is an awful idea, for precisely these reasons, and we've known it for years. Maybe not their users if they haven't been exposed to LLMs to that degree, but certainly anyone who worked on this product should've known better, and if they didn't, then my opinion of this entire industry just fell through the floor.

This is tantamount to using SQL escaping instead of prepared statements in 2025. Except there's no equivalent to prepared statements in LLMs, so we know that mixing sensitive data with untrusted data shouldn't be done until we have the technical means to do it safely.

Doing it anyway when we've known about these risks for years is just negligence, and trying to use it as an excuse in 2025 points at total incompetence and indifference towards user safety.

+1

And if you tried it wouldn’t be usable, and you’d probably get the threat model wrong anyway.

Financially motivated to not prioritize security.

It's hard to sell what your product specifically can't do, while your competitors are spending their time building out what they can do. Beloved products can make a whole lot of serious mistakes before the public will actually turn on them.

"Our bridges don't collapse" is a selling point for an engineering firm, on something that their products don't do.

We need to stop calling ourselves engineers when we act like garage tinkerers.

Or, we need to actually regulate software that can have devastating failure modes such as "emptying your bank account" so that companies selling software to the public (directly or indirectly) cannot externalize the costs of their software architecture decisions.

Simply prohibiting disclaimer of liability in commercial software licenses might be enough.

Call yourself whatever you choose, but the garage tinkerers will always move faster and discover new markets before the Very Serious Engineers have completed the third review of the comprehensive threat model with all stakeholders.

Yes, they will move fast and they will brake things, and some of those breakages will have catastrophic consequences, and then they can go "whoopsy daisy", face no consequences, and try the same thing again. Very normal, extremely sane way to structure society

The only reason this works out the way it does is because certain governments have been corrupted by business interests to the point that businesses don't have to face any accountability for the harm that they cause.

If companies were fined serious amounts of money and the people responsible went to prison if they committed gross negligence and harmed millions of people, the attitude would quickly change. But as things stand, the system optimizes for carelessness, indifference towards harm, and sociopathy.

Nobody cares about bridges collapsing if you built the first bridges and none have collapsed yet from the couple first folks trying them out, though.

It's only when someone tries to drive their loaded ox-driven cart through for the first time that you might find out what the max load of your bridge is.

The winner (financially, and DAU-wise) is not going to be the one that moves slowly because they are building a secure product. That is, you only need security when you are big enough to either have Big Business customers or big enough to be the target of lawsuits.

its a steady stream of naive start ups run by people who think starting a company is something you do at the beginning of your career with no experience vs the end of your career with decades of experience.

LLMs can't learn lessons, you see, short context window.

Very poorly, because no matter how bad it was then, at least now we know better.

The way it compares is that we've had 30 years to learn from our mistakes (and apparently some of us have failed to).