The world is so not ready for the impact of LLMs on security issues. If true, congrats to the Calif team. It's likely too technical for me to understand in detail, but I'm looking forward to reading the 55-page report.

> The world is so not ready for the impact of LLMs on security issues.

I agree, but it's the people I'm worried about.

I'm hearing anecdotes from all over about devs pushing LLM-generated code changes into production without retaining any knowledge of what it is they're pushing. The changes compound, their understanding of the codebase diminishes, and so the actions become riskier.

What's worse is a lot of this behavior is being driven by leaders, whether directly (e.g. unrealistic velocity goals, promoting people based on hand-wavy "use AI" initiatives, etc) or indirectly (e.g. layoffs overloading remaining devs, putting inexperienced devs in senior roles, etc).

The world's gone mad and large swaths of the industry seem hellbent on rediscovering the security basics the hard way.

The gamble is that you can cruise on the senior engineers' diminishing understanding for a few years until models become good enough that you don't need any humans in the loop and you can fire all those expensive seniors.

The tragedy is having a bunch of those senior engineers writing blog posts and whatnot about how productive they are, without realising that it means the business now needs fewer of them.

I suppose that if you don’t believe that models will be good enough to work completely without senior engineer help, positioning yourself as a master prompter is a good move to improve your chances of not getting fired.

Even so, it literally means that as a business owner I need fewer warm bodies to write prompts.

Will we now have leetcode of prompt writing?

Dune guild navigator AI whisperer? with fine taste.

>I'm hearing anecdotes from all over about devs pushing LLM-generated code changes into production without retaining any knowledge of what it is they're pushing. The changes compound, their understanding of the codebase diminishes, and so the actions become riskier.

No anecdotes needed, it's entirely happening.

But it's also devs being devs.

is this exciting?

juniors have been writing imperfect code forever, code that isn't memorized by the people reviewing it

isn't the important thing the mechanisms for maintaining the code?

The difference is twofold. First, junior devs who ask for code reviews on massive, 2000+ line diffs get coached, and eventually fired if they persist at it. And second, even the most prolific junior engineer would take years to write what Claude is capable of generating in an afternoon.

When Sundar Pichai announces that 75% of all new code at Google is AI-generated, their stock price goes up. If he were to announce that 75% of all new code at Google is now written by junior engineers, this would trigger a massive sell-off and a lot of employees would resign.

The second scenario is exactly what happens in offshoring projects.

Seniors are only part of the picture, as team leads or when it escalates after big screwups.

The dangers of technical debt and the importance of mitigating it have been known for a long time. Unfortunately a lot of entities now ignore all experience and best practices as soon as you say the "AI" buzzword.

> I'm hearing anecdotes from all over about devs pushing LLM-generated code changes into production without retaining any knowledge of what it is they're pushing. The changes compound, their understanding of the codebase diminishes, and so the actions become riskier.

I don’t think so.

An LLM can produce higher-quality documentation than most humans. If it's not already happening, when a new developer joins a team, an LLM will produce whatever documentation they need, including why certain decisions were made.

It could also summarize years of email threads and code reviews that, let's face it, a new person wouldn’t be able to ingest anyway; it's not like a new developer gets to take a week off to get caught up on everything that happened before they got there. English not their first language? Well, the LLM can present the information in virtually any language required.

As the models continue to improve, they'll spot patterns in the code that a human wouldn’t be able to see.

> An LLM can produce higher-quality documentation than most humans.

That "can" bears some heavy weight.

LLM-generated documentation has such a low information density that it's useless. Yes, it writes nice sentences… or at least it writes. But it contains so much noise that, currently, reading the code is better documentation than every piece of LLM-generated documentation I've seen.

The same goes for LLM-generated articles. I close them after the second sentence because about 90% of the content is useless filler.

Now compare that to this: https://slate.com/technology/2004/11/the-death-of-the-last-m...

I almost closed it when I read the first few sentences, because these kinds of articles are usually time-wasting nonsense. But this was different. This was old. Most sentences contained something new. Something worthwhile. (Of course, people also write unnecessarily long articles… looking at you, Atlantic.)

You can throw out almost everything by volume from LLM-generated documentation without losing any information.

Currently, if I smell (and it's very easy to smell) LLM-generated documentation or an LLM-generated article, I close it immediately, because it's good for only one thing: wasting my time for no good reason.

It's not just about documentation.

If stuff really goes wrong, you need people who deeply understand the codebase so that they know where to look and how to diagnose the issue. It might be the case in the future that LLMs become so powerful they'll diagnose any issue (I doubt it), but until then, we need people in the loop.

you're assuming that blue teams and engineers are sitting around twiddling their thumbs

Most companies in the world do not have “blue teams”. They barely have any kind of security employee.

They've got a guy (who they're considering laying off)

Don't worry, the LLMs that are replacing him are also replacing the hackers. Pretty soon (if not already), it will just be LLMs fighting LLMs.

Until both LLMs realize the only way to win is to team up against their oppressors.

The only winning move is not to play.

AGS time!

in my experience they have a person who does it sometimes when they have time, at best

And their management keep blatantly dropping "client projects" and "billable hours" into discussions with them.

no they don’t.

They don't consider laying him off?

I think they're saying they already did

Apple definitely does.

That is actually unfair. Most companies spend enormous amounts on security, with vast armies of security employees. Not that it is effective, but it is not for lack of resources or trying.

I mean, we are literally in a thread about how the $4 trillion company, literally the 3rd most valuable company in the world, with a core competency in software, has yet again released a core product riddled with security defects, for the 50th year in a row.

Commercial IT security is an industry that is incapable to a fault and has, so far, faced basically zero consequences for it.

For every Apple, there are 100 mom-and-pop companies who have nothing.

Even more so in the future, when a software company can be launched by a farm of AI agents with a founder at the helm who has no clue about computing or security.

What's debatable is how many of those companies actually need airtight security, because they are never realistically going to be targets of criminals and/or they have nothing valuable to steal or corrupt in the first place (other than the owner's pride).

They have a website that can be used to host malware and/or SEO link farms.

I still have nightmares about the contact form on my low-stakes personal website getting hijacked for use as a spam sender (because I used unsanitized input in mail headers).
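The failure mode behind that kind of hijack is classic email header injection: user input is pasted into the raw header block, so a newline in the input starts a brand-new header. A minimal sketch of the bug and the fix (all names here are hypothetical, not from any particular framework):

```python
# Sketch of the classic mail-header-injection bug behind hijacked contact
# forms. Function and payload names are illustrative assumptions.

def build_headers_unsafe(user_subject: str) -> str:
    # BAD: user input goes straight into the raw header block, so a
    # payload containing "\nBcc: ..." injects an extra recipient header.
    return f"From: form@example.com\nSubject: {user_subject}\n"

def sanitize_header_value(value: str) -> str:
    # Strip CR/LF so user input can never start a new header line.
    return value.replace("\r", "").replace("\n", " ")

payload = "Hello\nBcc: spam-target@example.com"

unsafe = build_headers_unsafe(payload)
safe = build_headers_unsafe(sanitize_header_value(payload))

# The injected Bcc becomes a real header line only in the unsafe version.
assert any(line.startswith("Bcc:") for line in unsafe.splitlines())
assert not any(line.startswith("Bcc:") for line in safe.splitlines())
```

Many modern mail libraries reject header values containing CR/LF outright, which is the safer default; hand-rolled form handlers that concatenate strings are where this still bites.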

Hey now, when Apple products get a serious kernel-level vulnerability that can be triggered just by browsing a website, it's a "jailbreak," not an "exploit."

Exploits are BAD!

> Most companies spend enormous amounts on security, with vast armies of security employees

This is true in America in many industries now, but most of the rest of the world (even the rest of the OECD) is still far behind.

Maybe they should've been as productive as the guys down in Santa Barbara.

[deleted]

While that may be true, it's better to back it up with data, and the data I know of and read yearly is mostly not great. Between the Splunk and SANS surveys of 2025, maybe ~2000 companies have a SOC. [1] [2]

Then you have the many companies in the UK, US, Canada, and EU subject to compliance and regulatory laws that require a SOC to exist in-house in some capacity. That is changing with MDR services, but someone still has to interface with the MDR.

[1]: https://www.elastic.co/pdf/sans-soc-survey-2025.pdf [2]: https://github.com/jacobdjwilson/awesome-annual-security-rep...

Does the report talk about how many are /actual/ SOCs, rather than some outsourced SIEM service? Or one guy who gets a daily report...

Not at all. I'm considering that the amount of vulnerable software in the wild is very, very large, with most organizations not managing their systems properly. Imagine all the small-to-medium-sized companies that do not have budgets for a dedicated, talented security team. And all the software that will never be patched. We are at the beginning of the exponential.

> I’m considering that the amount of vulnerable software in the wild is very, very large

I'd imagine this set is very similar to just "the set of software in the world". Even before the AI stuff, it was a pretty good bet that any given piece of software had some vulnerability; it was just a question of how easy it was to find.

Yes, that's my point. Look at how fast the Calif team tackled that macOS issue. Against the top company in the world. One week from bug to exploit. In 2-5 years things will be really wild for everybody out there. We released a technology that makes it possible to design extremely complex exploits at a scale we never had to face before. What does that mean if you're not the top company? Things will be really bad.

It makes you wonder whether everything will need to be rewritten from the ground up, potentially by AI itself, or with AI having a very heavy hand in validating all of it.

There's so much lower-hanging fruit. Every job I've had has had basically everything massively out of date. Just keeping packages and framework versions up to date is a full-time job, and none of these companies have someone assigned to doing it.

So much out of date software with known exploits left running for years. The only reason there hasn't been total disaster is no one has tried to hack it yet.

Right and with AI now we have the ability to try hacking everything all at once.

Yes, exactly, that's the main change. And not just in a script-kiddy way. What we see now is that LLM + experts can develop extremely complex exploit chains in no time. It's one thing to exploit a known vulnerability that you can patch by upgrading your Wordpress; it's something else when the attacker is able to completely take over your systems in ways you didn't even consider possible and adapt in 1 day to your attempts at patching.

For now, after the dust settles, all of the low-hanging fruit will have been patched and we will have hurried along the move to safer languages.

The root problem is the world runs on C code that is riddled with vulnerabilities.