How does that work? When you add an OAuth app, the resulting tokens are specific to that app and carry a certain set of permissions.
It's not a new attack vector in the sense of granting too many scopes (beyond the usual "get personal details").
I am curious how this external OAuth app managed to move through the systems laterally.
LLM comment
I'm not super savvy with OAuth, but shouldn't scopes prevent issues like this?
https://oauth.net/2/scope/
From what I understood at [1], Context.ai users "enable AI agents to perform actions across their external applications, facilitated via another 3rd-party service." I.e., it's designed to get someone's OAuth token and use it. Unless that is done really carefully, the risks go as far as the user's authorization does. The danger doesn't only come from leaks, but also from agents that can wipe your database or directory on a whim.
[1] https://context.ai/security-update
Oof. So much incompetence at so many levels. It's scary.
Users can mitigate this by refusing to OAuth into something that asks for too broad a scope. But most users just click "Accept" (a claim based on no data at all).
> at least one Vercel employee signed up for the AI Office Suite using their Vercel enterprise account and granted “Allow All” permissions. Vercel’s internal OAuth configurations appear to have allowed this action to grant these broad permissions in Vercel’s enterprise Google Workspace.
https://context.ai/security-update
So it's not so much a problem with OAuth itself, but with the way it was implemented here?
Someone from marketing getting full access is absolutely a Vercel failure.
Good point. We think of these OAuth logins as safe, yet they may be the exact opposite, since it's more like logging in with your master password. I think OAuth providers like Microsoft and Google need to start mandating 2FA for every company login; it's just too dangerous otherwise.
How would 2FA help here? You'd still create the compromised OAuth credential even with 2FA.
I remember implementing OAuth2 for my platform months ago, and I was using the username from the provider's platform as the username within my own platform... But this is a big problem: what if a different person creates an account with the same username on a different platform? They could authenticate to my platform via that other provider and hijack the first person's account!
Thankfully, I patched this issue before it became a viable exploit, because the two platforms I was supporting at the time happened to have different username conventions: Google used email addresses with an @ symbol and GitHub used plain usernames, which naturally prevented username collisions. I discovered the issue while upgrading my platform to support universal OAuth; it would have been a major flaw had I not caught it. This sounds similar to the Vercel issue.
Anyway, my fix was to append a unique hash of the username and platform combination to the end of the username on my platform.
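A minimal sketch of that kind of fix (hypothetical names; not the commenter's actual code): derive the local username by suffixing a short hash of the (provider, username) pair, so the same username arriving from two different providers can never collide.

```python
import hashlib

def local_username(provider: str, username: str) -> str:
    """Append a short hash of (provider, username) so identical
    usernames from different providers map to distinct local names."""
    digest = hashlib.sha256(f"{provider}:{username}".encode()).hexdigest()[:12]
    return f"{username}-{digest}"
```

The same username on two providers then yields two distinct local accounts, while repeated logins from the same provider map to the same one.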
You should use the subject identifiers, not the usernames, and store a mapping from (provider, subject) to internal users yourself.
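A sketch of that mapping under assumed names (not any particular library): key internal accounts by the (provider, sub) pair, so nothing user-visible like a username or email ever participates in the lookup.

```python
class UserDirectory:
    """Illustrative in-memory mapping of (provider, sub) -> internal user id."""

    def __init__(self):
        self._by_identity = {}  # (provider, sub) -> internal user id
        self._next_id = 1

    def resolve(self, provider: str, sub: str) -> int:
        """Return the internal user id for this identity, creating a new
        user on first login. Usernames and emails play no part in the
        lookup, so they can change or be re-registered safely."""
        key = (provider, sub)
        if key not in self._by_identity:
            self._by_identity[key] = self._next_id
            self._next_id += 1
        return self._by_identity[key]
```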
But this has been a problem in the past, where people would take over an email address, create a new Google account with it, and then use it to sign in with Google.
Similarly, when someone deletes their account with a provider, someone else can re-register the same username and your hash will end up the same. Subject identifiers, by contrast, are required to be unique by the spec.
Ah, yeah, but I wanted my platform to provide universal OAuth with any platform (that my app-developer user trusts) as the OAuth provider. If you rely entirely on subject identifiers, then in theory one platform (OAuth provider) could hijack any account belonging to users who authenticate via a different platform: e.g., one provider could fake subject identifiers to intentionally match those of target accounts from a different provider.
Now, I realize this would require a large-scale conspiracy by the company to execute, but I don't want to trust one platform with access to accounts coming from a different platform. I don't want any possible edge cases; I wanted to fully isolate them. If one platform were compromised, that would be bad news for a subset of users, but not all users.
If the maker of an application wants to trust some obscure platform as their OAuth provider, they're welcome to. In fact, I allow people running their own Keycloak instances to act as providers, so it's a realistic scenario.
This is why I used the hash approach: I have full control over the username on my platform.
[EDIT] I forgot to mention that I incorporate the issuer's sub, in addition to the username, when producing the hashed username I use internally. The key point I wanted to get across is: don't trust one provider with accounts created via a different provider.
Proprietary techniques like this are usually a good indication you’re missing something. In this case it sounds like you are missing appropriate validation of the issuer and/or token itself.
I want to support plain OAuth2, not OpenID Connect, so I don't rely on a JWT; I call the issuer's endpoint directly from my backend, using their official domain name over HTTPS. I use the sub field to avoid re-allocation of usernames/emails, but my point is that I don't trust it on its own; I couple it with the provider ID.
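A rough sketch of that flow, with hypothetical endpoint and field names (the real endpoint and response shape vary per provider): fetch the user's profile from the provider's HTTPS endpoint using the access token, then key the account on (provider_id, sub) rather than on sub alone, so one provider cannot forge an identity belonging to another.

```python
import hashlib
import json
import urllib.request

def fetch_userinfo(userinfo_url: str, access_token: str) -> dict:
    """Call the provider's userinfo-style endpoint over HTTPS.
    The URL comes from our per-provider config, never from the client."""
    req = urllib.request.Request(
        userinfo_url, headers={"Authorization": f"Bearer {access_token}"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def account_key(provider_id: str, sub: str) -> str:
    """Couple the provider's sub with our own provider ID, so a sub
    value is only ever meaningful within the provider that issued it."""
    return hashlib.sha256(f"{provider_id}|{sub}".encode()).hexdigest()
```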
To make it universal, I had to keep complexity minimal and focus on the most widely supported protocol, which is plain OAuth2.