They just added more details:

> Indicators of compromise (IOCs)

> Our investigation has revealed that the incident originated from a third-party AI tool whose Google Workspace OAuth app was the subject of a broader compromise, potentially affecting hundreds of its users across many organizations.

> We are publishing the following IOC to support the wider community in the investigation and vetting of potential malicious activity in their environments. We recommend that Google Workspace Administrators and Google Account owners check for usage of this app immediately.

> OAuth App: 110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com
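For Workspace admins who want to act on this IOC: a minimal sketch of building an Admin SDK Reports API query for token-grant events tied to that client ID. The `activity/users/all/applications/token` endpoint is Google's documented token-activity report, but the `filters` syntax shown here is an assumption to verify against the docs, and the request still needs a valid admin OAuth bearer token.

```python
from urllib.parse import urlencode

# The compromised client ID from the IOC above.
IOC_CLIENT_ID = "110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com"

def token_audit_url(client_id: str, max_results: int = 100) -> str:
    """Build a Reports API query for OAuth token events granted to one client.

    The resulting URL must be fetched with an Authorization: Bearer header
    from an admin account holding the reports.audit.readonly scope.
    """
    base = ("https://admin.googleapis.com/admin/reports/v1"
            "/activity/users/all/applications/token")
    # Assumed filter syntax: "parameter==value" on the token event's client_id.
    params = {"filters": f"client_id=={client_id}", "maxResults": max_results}
    return f"{base}?{urlencode(params)}"
```

Any user rows returned for that client ID would indicate the app had been authorized in your tenant and warrant token revocation for those accounts.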

https://vercel.com/kb/bulletin/vercel-april-2026-security-in...

https://x.com/rauchg/status/2045995362499076169

> A Vercel employee got compromised via the breach of an AI platform customer called http://Context.ai that he was using.

> Through a series of maneuvers that escalated from our colleague’s compromised Vercel Google Workspace account, the attacker got further access to Vercel environments.

> We do have a capability however to designate environment variables as “non-sensitive”. Unfortunately, the attacker got further access through their enumeration.

> We believe the attacking group to be highly sophisticated and, I strongly suspect, significantly accelerated by AI. They moved with surprising velocity and in-depth understanding of Vercel.

Still no email blast from Vercel alerting users, which is concerning.

> We believe the attacking group to be highly sophisticated and, I strongly suspect, significantly accelerated by AI. They moved with surprising velocity and in-depth understanding of Vercel.

Blame it on AI ... trust me... it would have never happened if it wasn't for AI.

> We believe the attacking group to be highly sophisticated and, I strongly suspect, significantly accelerated by AI.

Reads like the script of a hacker scene in CSI. "Quick, their mainframe is adapting faster than I can hack it. They must have a backdoor using AI gifs. Bleep bleep".

> Still no email blast from Vercel alerting users, which is concerning.

On the one hand, I get that it's a Sunday, and the CEO can't just write a mass email without approval from legal or other comms teams.

But on the other hand... It's Sunday. Unless you're tuned-in to social media over the weekend, your main provider could be undergoing a meltdown while you are completely unaware. Many higher-up folks check company email over the weekend, but if they're traveling or relaxing, social media might be the furthest thing from their mind. It really bites that this is the only way to get critical information.

> On the one hand, I get that it's a Sunday, and the CEO can't just write a mass email without approval from legal or other comms teams

This is not how things work. In a crisis like this there is a war room with all stakeholders present. Doesn’t matter if it’s Sunday or 3am or Christmas.

And for this company specifically, Guillermo is not one to defer to comms or legal.

If he's not one to defer to comms or legal, maybe this one is so bad that he's acting differently than he normally would.

[dead]

> the CEO can't just write a mass email without approval from legal or other comms teams.

They can be brought in to do their job on a Sunday for an event of this relevance. They can always take next Friday off or something.

Has anyone actually gotten an email from Vercel confirming their secrets were accessed? Right now we're all operating under the hope (?) that since we haven't (yet?) gotten an email, we're not completely hosed.

Hope-based security should not be a thing. Did you rotate your secrets? Did you audit your platform for weird access patterns? Don’t sit waiting for that vercel email.

Of course we rotated. But we don't even know when the secrets were stolen versus when we were told, so we're missing a ton of the info needed to _fully_ triage.

> Did you rotate your secrets?

Most secrets are under your control, so, sure, go ahead and rotate them, allowing the old version to continue being used in parallel with the new version for 30 minutes or so.

For other secrets, rotation involves getting a new secret from some upstream provider and having some services (users of that secret) fail while the secret they have in cache expires.

For example, if your secret is a Stripe key, generating a new key should invalidate the old one (not too sure, I don't use Stripe), at which point the services with the cached secret will fail until the expiry.
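The 30-minute overlap described above is usually achieved by accepting both secrets during a rotation window. A minimal sketch of that pattern for an HMAC-signed secret; the signing scheme, placeholder key values, and the 30-minute window are all illustrative assumptions, not any specific provider's API:

```python
import hashlib
import hmac
import time

# Placeholder values, not real keys.
OLD_SECRET = b"sk_old_placeholder"
NEW_SECRET = b"sk_new_placeholder"
# Assumed ~30-minute overlap while cached copies of the old secret expire.
ROTATION_DEADLINE = time.time() + 30 * 60

def sign(payload: bytes, secret: bytes) -> str:
    """HMAC-SHA256 signature of a payload under one secret."""
    return hmac.new(secret, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, signature: str) -> bool:
    """Always accept the new secret; accept the old one only inside the window."""
    if hmac.compare_digest(sign(payload, NEW_SECRET), signature):
        return True
    if time.time() < ROTATION_DEADLINE:
        return hmac.compare_digest(sign(payload, OLD_SECRET), signature)
    return False
```

After the deadline passes, only the new secret validates, so callers that never picked up the rotation fail closed rather than silently keeping a stolen credential alive.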

Nope... I feel you. "Hope-based security" is exactly what Vercel is forcing on its users right now by prioritizing social media over direct notification.

If the attacker is moving with "surprising velocity," every hour of delay on an email blast is another hour the attacker has to use those potentially stolen secrets against downstream infrastructure. Using Twitter/X as a primary disclosure channel for a "sophisticated" breach is amateur hour. If legal is the bottleneck for a mass email during an active compromise, then your incident response plan is fundamentally broken.

> the CEO can't just write a mass email without approval from legal or other comms teams

Wouldn't the CEO be... you know... the chief executive?

Sure, and the reason he's the chief executive is that he DOES check stuff like this before sending it out.

Top leaders excel because they assemble a team around them they trust. You can't do everything yourself, you need to delegate. And having people in those positions also means you shouldn't be acting alone or those people will not stick around

I disagree. In a crisis, a leader should take the lead and make decisions. If he/she is not able to do that on their own, they are in the wrong place.

Now I will agree that there are many executives like the ones you describe. But they are not top leaders.

So you’re telling me a CEO must also be a practicing lawyer? Because any other option is how you guarantee your company gets sued into oblivion.

First of all, I would expect a top leader to be prepared for scenarios like this (including templates of customer communication).

And yeah, I would expect a CEO to have enough legal knowledge to handle such a situation (customer communication) on his own.

But I also have to mention that I'm not in the US. Not every country has the litigation system of the US, where you can basically destroy a company because you as the customer are too dumb to not spill hot coffee over yourself.

> you as the customer are too dumb to not spill hot coffee over yourself

Presuming you're referring to the hot coffee lawsuit, maybe read the details of the story. McDonald's wasn't at all blameless, and the plaintiff had reasonable demands.

You expect the CEO of a company to have the legal depth of knowledge AND knowledge of all their customers, contracts and SLAs to be able to wing a communication and not somehow trip over all of that? They also should understand every possible legal jurisdiction that could be affected? You realise even the head of their legal department (a HIGHLY competent lawyer) likely wouldn't say they could do that without speaking to the key people in their team?

Should the CEO also bang out some dev estimates for the roadmap because, hey, they should be competent enough to do something like that. Why not submit the accounts for the year? How hard can it be, just reading a few lines off their Sage or Quickbooks accounts?

Let me be more clear on what I mean by “wing it,” because “having templates” doesn’t really cut it. Anyone can bang out a “we have a problem” template, so why does the CEO need to attach their name to it? Once you’re at the point of needing a CEO to communicate, you have a specific problem, with its own specific impacts that a single person can not be expected to have enough depth of knowledge in their brain to actually talk about without involving their domain experts, including legal, technical, whatever the situation needs.

> can not be expected to have enough depth of knowledge in their brain to actually talk about

What is the use of a CEO if not to have enough depth of knowledge about the different aspects of running a business?

Like what? Poor little CEO that doesn't understand anything about the world and how to run a company. Seems like helplessness is expected at every stage.

> What is the use of a CEO if not to have enough depth of knowledge about the different aspects of running a business?

Bit of a difference between “having depth of knowledge in their business” and “can speak off-the-cuff with the necessary accuracy to remain in compliance with every contract and legal jurisdiction their organisation is engaged in, without consulting the numerous domain experts they employ for just this purpose,” isn’t there.

Also, such a situation that requires the CEO’s direct attention has already gone FAR beyond your standard incidents where you can throw out a pre written statement. Do you want your organisation just cuffing it from the top down? Are you Elon Musk in disguise?

What use is a CEO if they can't take the lead in times like this?

If they are unprepared frankly they suck as CEO and should be thrown out. If only competency was a requirement for these jobs...

That’s not what I said though, is it?

I'm going down with the ship over on X.com, the Everything App. There's a passel of very important tech people running some playbook where posting to X.com is considered sufficient to be unimpeachable on communication, despite its rather beleaguered state and traffic.

Usually, companies have procedures for such events. But most do not.

Usually have procedures, but most don't? Say again

The disaster plan says there is a process, but it has never been used and is probably outdated. Chances are the social media strategy requires posting on the Facebook and updating key Circles on Google+

> an AI platform customer called http://Context.ai that he was using

Hmm? Who is the customer in this relationship? Is Vercel using a service provided by Context.ai which is hosted on Vercel?

The production network control plane must be completely isolated from the internet, with a separate computer for each network. The design I like best: admins get dedicated admin workstations that only ever connect to the admin network, separate corporate workstations for everything else, and internet access only from ephemeral VMs reached via RDP or a similar protocol.

The actual app name would be good to have. It's understandable that they don't want to throw them under the bus, but not revealing which app/service this was just delays people taking action.

I was trying to look it up (basically https://developers.google.com/identity/protocols/oauth2/java... -- the consent screen shows the app name) but it now says "Error 401: invalid_client; The OAuth client was not found." so it was probably deleted by the oauth client owner.

It indeed was deleted as this URL shows: https://accounts.google.com/o/oauth2/v2/auth?client_id=11067...
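The check described in the comments above can be reproduced by constructing Google's standard OAuth consent URL for a given client ID and loading it in a browser: a deleted or unknown client yields "Error 401: invalid_client". A minimal sketch of building that URL; the `redirect_uri` and `scope` values here are arbitrary placeholders, since the error appears before either is validated against the client's registration:

```python
from urllib.parse import urlencode

def consent_url(client_id: str) -> str:
    """Build a Google OAuth consent-screen URL for probing a client ID."""
    params = {
        "client_id": client_id,
        "redirect_uri": "http://localhost",  # placeholder
        "response_type": "code",
        "scope": "openid",                   # placeholder
    }
    return "https://accounts.google.com/o/oauth2/v2/auth?" + urlencode(params)
```

A live client shows its registered app name on the consent screen, which is how the app behind an IOC like this can be identified while it still exists.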

[deleted]

Makes it even more relevant to have the actual app or vendor name. Who's to say they didn't just remove it to save face and won't add it back later?

I don’t understand why they can’t just directly name the responsible app as it will come out eventually.

It’s context.ai

https://x.com/rauchg/status/2045995362499076169

[deleted]

Which itself was the subject of a broader compromise, as far as I can tell.

Maybe legal red tape?

Yes. The OAuth ID is indisputable, and it seems to be context.ai. But suppose it was a fake context.ai that the employee was tricked into using. Or… or…

Better to report 100% known things quickly. People can figure it out with near zero effort, and it reduces one tiny bit of potential liability in the ops shitstorm they’re going through.

They might be buying time to sell the relevant stock

It looks like the app has already been deleted

Idk exactly how to articulate my thoughts here, perhaps someone can chime in and help.

This feels like a natural consequence of the direction web development has been going for the last decade, where it's normalised to wire up many third party solutions together rather than building from more stable foundations. So many moving parts, so many potential points of failure, and as this incident has shown, you are only as secure as your weakest link. Putting your business in the hands of a third party AI tool (which is surely vibe-coded) carries risks.

Is this the direction we want to continue in? Is it really necessary? How much more complex do things need to be before we course-correct?

This isn't a web development concept. It's the unix philosophy of "write programs that do one thing and do it well" and interconnect them, being taken to the extremes that were never intended.

We need a different hosting model.

Just throwing it out there - the Unix way to write software is often revered. But ideas about how to write software that came from the 1970s at Bell Labs might not be the best ideas for writing software for the modern web.

Instead of "programs that do one thing and do it well", "write programs which are designed to be used together" and "write programs to handle text streams", I might go with a foundational philosophy like "write programs that do not trust the user or the admin", because in applications connected to the internet, both groups often make mistakes or are malicious. Also something like "write programs that are strict on which inputs they accept", because a lot of input is malicious.

The Unix model wasn't simply do one thing and do it well.

It was also a different model on ownership and vetting of those focused tools. It might have been a model of having the single source tree of an old UNIX or BSD, where everything was managed as a coherent whole from grep to cc all the way to X11. Or it might have been the Linux distribution model of having dedicated packagers do the vetting to piecemeal packages into more of a bazaar, even going so far as to rip scripting language bundles into their component pieces as for Python and Perl.

But in both of those models you were put farther away from the third-party authors bringing software into the open-source (and proprietary) supply chains.

This led to a host of issues with getting new software to users and with a fractal explosion of different versions of software dependencies to potentially have to work around, which is one reason we saw the explosion of NPM and Cargo and the like. Especially once Docker made it easy to go straight from stitching an app together with NPM on your local dev seat to getting it deployed to prod.

But the issue isn't with focused tooling as much as it is with hewing more closely to the upstream who could potentially be subverted in a supply chain attack.

After all, it's not as if people never tried to do this with Linux distros (or even the Linux kernel itself -- see for instance https://linux.slashdot.org/story/03/11/06/058249/linux-kerne... ). But the inherent delay and indirection in that model helped make it less of a serious risk.

But even if you only use 1 NPM package instead of 100, if it's a big enough package you can assume it's going to be a large target for attacks.

> Just throwing it out there - the Unix way to write software is often revered. But ideas about how to write software that came from the 1970s at Bell Labs might not be the best ideas for writing software for the modern web.

GP said it's about taking the Unix philosophy to extremes, you say something different.

Anything taken to extremes is bad; the key word there is "extremes". There is nothing wrong with the Unix philosophy, as "do one thing and do it well" never meant "thousands of dependencies over which you have no control, pulled in without review or thought".

I do not see what this has to do with Unix. The problem is not that programs interoperate or handle text streams, the problem is a) the supply chain issues in modern web-software (and thanks to Rust now system-level) development and b) that web applications do not run under user permissions but work for the user using token-based authentication schemes.

[deleted]

I guess we failed at the "do it well" step.

It's not a hosting model, it's a fundamental failure of software design and systems engineering/architecture.

Imagine if cars were developed like websites, with your brakes depending on a live connection to a 3rd party plugin on a website. Insanity, right? But not for web businesses people depend on for privacy, security, finances, transportation, healthcare, etc.

When the company's brakes go out today, we all just shrug, watch the car crash, then pick up the pieces and continue like it's normal. I have yet to hear a single CEO issue an ultimatum that the OWASP Top 10 (just an example) will be prevented by X date. Because they don't really care. They'll only lose a few customers and everyone else will shrug and keep using them. If we vote with our dollars, we've voted to let it continue.

> We need a different hosting model.

There really isn't an option here, IMO.

1. Somebody does it

2. You do it

Much happier doing it myself tbh.

There's a lot of wiggle room on how you define "it". At the ends of the spectrum it's obvious, but in the middle it gets a bit sticky.

In my mind the Unix philosophy leads to running your cloud on your own hardware or VPSes, not this.

Exactly this: write, not use some sh*t written by some dude from Akron, OH two years ago.

That's why I wrote my own compiler and coreutils. Can't trust some shit written by GNU developers 30 years ago.

And my own kernel. Can't trust some shit written by a Finnish dude 30 years ago.

And my own UEFI firmware. Definitely can't trust some shit written by my hardware vendor ever.

Yeah, definitely no difference between GNU coreutils and some vibe-coded AI tool released last month that wants full OAuth permissions.

I'm not joking, but weirdly enough, that's what most AI arguments boil down to. Show me what the difference is while I pull up the endless CVE list of whichever coreutils package you had in mind. It's a frustrating argument because you know that the authors of coreutils-like packages had intentionality in their work, while an LLM has no such thing. Yet in the end, security vulnerabilities are abundant in both.

The AI maximalists would argue that the only way is through more AI. Vibe code the app, then ask an LLM to security review it, then vibe code the security fixes, then ask the LLM to review the fixes and app again, rinse and repeat in an endless loop. Same with regressions, performance, features, etc. stick the LLM in endless loops for every vertical you care about.

Pointing to failed experiments like the browser or compiler ones somehow doesn't seem to deter AI maximalists. They would simply claim they needed better models/skills/harness/tools/etc. The goalposts are always one foot away.

"endless list of CVE" seems rather exaggerated for coreutils. There are only very few CVEs in the last decade and most seem rather harmless.

Now I'd genuinely like to know whether "yes" had a CVE assigned, not sure how to search for it though...

I wouldn't describe myself as an AI maximalist at all. I just don't believe the false dichotomy of you either produce "vulnerable vibe coded AI slop running on a managed service" or "pure handcrafted code running on a self hosted service."

You can write good and bad code with and without AI, on a managed service, self-hosted, or something in between.

And the comment I was replying to said something about not trusting something written in Akron, OH 2 years ago, which makes no sense and is barely an argument, and I was mostly pointing out how silly that comment sounds.

I used to believe that too, yet the dichotomy is what’s being pushed by what I called an “AI maximalist” and it’s what I was pushing against.

There is no “I wrote this code with some AI assistance” when you’re sending 2k line change PR after 8 minutes of me giving you permission on the repo. That’s the type of shit I’m dealing with and management is ecstatic at the pace and progress and the person just looks at you and say “anything in particular that’s wrong or needs changing? I’m just asking for a review and feedback”

It's such a bad-faith argument; they basically make false equivalencies between LLMs and other software. Same with the "AI is just a higher-level compiler" argument. The "just" is doing a ton of heavy lifting in those arguments.

Regarding the Unix philosophy argument, comparing it to AI tools just doesn't make any sense. If you look at what the philosophy is, it's obvious that it doesn't just boil down to "use many small tools" or "use many dependencies"; it's so different that it's not even wrong [0].

In their Unix paper of 1974, Ritchie and Thompson quote the following design considerations:

- Make it easy to write, test, and run programs.

- Interactive use instead of batch processing.

- Economy and elegance of design due to size constraints ("salvation through suffering").

- Self-supporting system: all Unix software is maintained under Unix.

In what way does that correspond to "use dependencies" or "use AI tools"? This was then formalised later to

- Write programs that do one thing and do it well.

- Write programs to work together.

- Write programs to handle text streams, because that is a universal interface.

This has absolutely nothing in common with pulling in thousands of dependences or using hundreds of third party services.

Then there is the argument that "AI is just a higher level compiler". That is akin to me saying that "AI is just a higher level musical instrument" except it's not, because it functions completely differently to musical instruments and people operate them in a completely different way. The argument seems to be that since both of them produce music, in the same way both a compiler and LLM generate "code", they are equivalent. The overarching argument is that only outputs matter, except when they don't because the LLM produces flawed outputs, so really it's just that the outputs are equivalent in the abstract, if you ignore the concrete real-world reality. Using that same argument, Spotify is a musical instrument because it outputs music, and hey look, my guitar also outputs music!

0: https://en.wikipedia.org/wiki/Not_even_wrong

So it’s not a binary thing, there’s context and nuance?

Embrace the suck.

cue Jeopardy theme song

Who is Apple?

TempleOS, is that you?

[flagged]

This was a Google OAuth app and it was phished. So... No.

"The incident originated with a compromise of Context.ai, a third-party AI tool used by a Vercel employee"

So - yes, actually.