This is a pretty scary exploit, considering how easily it could be abused.

Imagine just one link in a tweet, support ticket, or email: https://discord.com/_mintlify/static/evil/exploit.svg. If you click it, JavaScript runs on the discord.com origin.

Here's what could happen:

- Your Discord session cookies and token could be stolen, leading to a complete account takeover.

- An attacker could read/write your developer applications & webhooks, adding or modifying bots, resetting secrets, and pushing malicious updates to millions.

- They could access any Discord API endpoint as you, meaning they could join or delete servers, DM your friends, or even buy Nitro with your saved payment info.

- They could maybe even harvest OAuth tokens from sites that use "Login with Discord."

Given the potential damage, the $4,000 bounty feels like a slap in the face.

edit: just noticed that HN turned this into a clickable link - that makes it even scarier!

Doesn't stealing the cookies/token require a non-HttpOnly session cookie or a token in localStorage? Do you know that Discord puts their secrets in one of those insecure places, or was it just a guess?

I believe if you always keep session tokens in Secure, HttpOnly cookies, then you are more resilient to this attack.

I interviewed frontend devs last year and was shocked how few knew about this stuff.

In general, if a script can run, users' sessions and, more importantly, passwords are at risk.

It's true that an HttpOnly session cookie couldn't be directly taken, but it's trivial to present the user with a login screen and collect their password (and OTP), at which point you can easily get a session remotely. It can look entirely like the regular login page, right down to the URL path (because the script can modify that without causing a page load).

Yep, httpOnly cookies just give the hacker a bit of extra work in some situations. TBH I don't even think httpOnly is worth the hassle it creates for platform developers given how little security it adds.

Wow, did not realize a URL could be set like that without prompting a page reload...

To be clear, only the path and query string of the URL can change; the domain (or subdomain) stays intact.

Even scarier to me than the vulnerability is that Fidelity (which I personally think is a good bank and investment company) was using a third party that allowed injection that could potentially steal a whole lot of money, affect markets, ruin or terminate billions of lives, and affect the course of humanity. What the fuck.

Their knowledge of finance is certainly better than their knowledge of web tech.

Historically and today.

That’s why I’m a Schwab junkie… but finance is a hotspot for this kind of stuff.

[dead]

If it weren't already in the same domain you wouldn't be able to read a non-HttpOnly cookie anyway, so that's moot.

Well, that's how SPAs (single-page applications) work.

How do you modify the URL exactly?

`history.replaceState(null, "", "/login")`

For Coinbase docs, this is a disaster particularly

By the looks of it, their docs are under a subdomain, and no part of the domain can be changed when setting the URL this way. So it would still look a little out of place, at least.

I mean, you're not wrong, but this is going to trick a non-zero number of people and that's not okay. We should expect more out of companies like Coinbase and hold them to a high standard.

This is unacceptable and the amount offered in general is low. It feels like we can agree on this.

auth URLs are almost always a shitshow in every larger corp. Having the url be https://docs.bigcorp.com/sso/authlayerv1/us-east-24/aws/secu... would not stand out at all to anyone.

No, because Discord auth tokens don't expire soon enough. The only thing that kills them is changing your password. Idk why Discord doesn't invalidate them after some time; it's seriously amateur hour over there and has been for a while.

Probably because end users hate logging in; my friends always complain about the "remember me" button being useless for some services.

No, these are tokens where you get a new one per request. If you open dev tools and then the user settings panel, you will see that you get a new one every single time you open it. They never expire; at least, for years they were insanely long-lasting.

If you set the cookie header right (definitely not always the case), this is true, but the JavaScript can still send requests that will have that cookie included, effectively still letting the hacker use the session as the logged-in user.

with http-only they can't _steal_ the cookie, but they can still _use_ the cookie. It reduces the impact but doesn't fully solve it.
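A minimal sketch of that "use but not steal" distinction, assuming a hypothetical same-origin API path (not a confirmed endpoint): the injected script never reads the cookie, it just builds a request that the browser decorates with it.

```javascript
// Sketch: "using" an HttpOnly session cookie without being able to read it.
// The path "/api/users/@me" is illustrative, not a real documented endpoint.
function buildSessionRequest(path) {
  return {
    url: path,
    options: {
      method: "GET",
      // For a same-origin request, the browser attaches the session
      // cookie itself, even when it is HttpOnly; the injected script
      // never needs to touch document.cookie at all.
      credentials: "include",
    },
  };
}

// In the victim's browser the injected script would then run something like:
//   const { url, options } = buildSessionRequest("/api/users/@me");
//   fetch(url, options).then(r => r.json()).then(exfiltrate);
```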

Discord puts the authentication token in local storage

Is that a problem on its own? It's like, encrypted, right? Maybe a time-sensitive token?

Not a problem in itself. Also, there's not much point in encrypting tokens. The attacker could use the encrypted token to authenticate themselves without having to decrypt it. They could just make a request from the victim's own browser. They could do this with cookies too, even with HttpOnly cookies.

XSS is a big problem. If a hacker can inject a script into your front end and make it execute, it's game over. Once they get to that point, there's an infinite number of things they can do. They basically own the user's account.

Does anyone actually encrypt the contents of JWTs? I'd have thought that anyone who has concerns about the contents of the token being easily visible would be likely to avoid JWTs anyway and just use completely opaque tokens?

JWT supports some encryption algorithms as an alternative to signatures but my experience is that most people like to keep it simple.

JWT is intended for authentication. Most of the time you're basically just signing a token containing an account ID and nothing else... Sometimes a list of groups but that only scales to a small number of groups.

Encrypted tokens are opaque but they are also offline-verifiable. A simple opaque token has to be verified online (typically, against a database) whenever it's used.

Auth0, for example, supports JWE for its access tokens: https://auth0.com/docs/secure/tokens/access-tokens/json-web-...

Depends on the token; JWTs usually have payloads that are only base64 encoded. As well, if there's a refresh token in there it can be used to generate more tokens until invalidated (assuming invalidation is built in).
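To illustrate the "only base64 encoded" part: any script holding a JWT can read its claims with no key at all (the token below is constructed for the demo, not a real one).

```javascript
// Decode a JWT payload without verifying anything: it is just
// base64url-encoded JSON, unless JWE (actual encryption) is in use.
function decodeJwtPayload(jwt) {
  const payload = jwt.split(".")[1];
  return JSON.parse(Buffer.from(payload, "base64url").toString("utf8"));
}

// A demo token: real-shaped header, made-up claims, fake signature.
const demoToken = [
  Buffer.from(JSON.stringify({ alg: "HS256", typ: "JWT" })).toString("base64url"),
  Buffer.from(JSON.stringify({ sub: "1234", name: "demo" })).toString("base64url"),
  "fake-signature",
].join(".");

console.log(decodeJwtPayload(demoToken)); // { sub: '1234', name: 'demo' }
```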

You may be thinking of CSRF mitigations. XSS exploits are more dangerous and can do more than steal sessions.

As a FE dev, I wouldn't be able to articulate what you just did in the way you did, but it is something I know in practice, just from experience. I don't think any of the FE courses I took tackled anything like that.

Token stealing hasn't been a real danger for a decade now. If you don't mark your tokens as HttpOnly, you're doing something explicitly wrong, because 99% of backends nowadays do this for you.

Surely, if a script is in a position to sniff the cookie from local storage, they can also indirectly use the http-only cookie by making a request from the browser. So really not much of a difference as they will be taking over the account

Cookie storage and local storage are by no means the same! Cookies are not stored in local storage and can be HttpOnly, so they are not directly accessible by JavaScript. Nevertheless, as described above, with this XSS attack it is easy to bypass the token and just steal the user's credentials by presenting a fresh login mask while keeping the origin domain intact. That's why XSS attacks have been dangerous since their inception. Nothing new, actually.

The fact that it is just so trivial and obvious is what's scary. It didn't even require any real hacking chops, just patience: literally anyone with a cursory knowledge of site design could have stumbled on this if they were looking at it.

Terrifying.

>the $4,000 bounty feels like a slap in the face.

And serves as a reminder that crime does pay.

In the black market, it would have been worth a bit more.

I was once only given $1,000 for an exploit where I could put in npm usernames and get their email addresses. Big corps don't always pay what they should.

[deleted]

yeah, but nothing pays as much as doing free work for (checks notes) mintlify feels

No it would not have been.

This specific XSS vulnerability may not have been, but the linked RCE vulnerability found by their friend https://kibty.town/blog/mintlify/ certainly would've been worth more than the $5,000 they were awarded.

A vulnerability like that (or even a slightly worse XSS that allowed serving js instead of only svg) could've let them register service workers to all visiting users giving future XSS ability at any time, even after the original RCE and XSS were patched.

Maybe? I don't know enough about the vulnerability. Is it serverside? Then it isn't worth very much.

>i quickly realised that this was the server-side serverless (lol) environment of their main documentation app, while this calls to a external api to do everything, we have the token it calls it with in the env.

>alongside, we can poison the nextjs cache for everyone for any site, allowing mass xss, defacing, etc on any docs site.

So it's a serverside bug that basically creates a more-severe stored DOM corruption vulnerability? Yeah, that's not worth anything to any buyer of vulnerabilities that I know exists. Maybe you know ones that I don't know.

I can’t speak to the value of the vulnerability as I lack the universal Rolodex of Every Exploit Buyer that is apparently available (nor am I interested in debating this with somebody that admitted they didn’t know anything about the vulnerability, declared it worthless anyway, and then moved the goalposts after a core assumption about it was trivially shown to be wrong. I’m fairly certain at this point these kids could recreate the end of the movie Antitrust and there’d be a thread somewhere with tptacek posting “This isn’t that big of a deal because”).

I just saw that you asked if the article about the server-side exploit was about a server-side exploit. It is. It’s right there in the post.

Can I ask which exploit buyers you are aware of? None of us know all of them! It'll be easier to discuss this with a specific buyer in mind.

[deleted]
[deleted]

Could you elaborate on why not?

What 'arcwhite said (sorry, I got dragged into a call).

1. The exploits (not vulnerabilities; that's mostly not a thing) that command grey/black market value all have half-lives.

2. Those exploits all fit into existing business processes; if you're imagining a new business, one that isn't actively running right now as we speak (such as you'd have to do to fit any XSS in a specific service), you're not selling an exploit; you're planning a heist.

3. The high-dollar grey market services traffic exclusively in RCE (specifically: reliable RCE exploits, overwhelmingly in mainstream clientside platforms, with sharp dropoffs in valuation as you go from e.g. Chrome to the next most popular browser).

4. Most of the money made in high-ticket exploit sales apparently (according to people who actually do this work) comes on the backend, from tranched maintenance fees.

There's generally no grey market for XSS vulns. The people buying operationalized exploits generally want things that they can aim very specifically to achieve an outcome against a particular target, without that target knowing about it, and operationalized XSS vulns seldom have that nature.

Your other potential buyers are malware distributors and scammers, who usually want a vuln that has some staying power (e.g. years of exploitability). This one is pretty clearly time-limited once it becomes apparent.

It would have been. Ten times the amount at least.

For a reflected XSS? Tell me who is paying that much for such a relatively common bug...

To elaborate: to exploit this, you have to convince your target to open a specially crafted link, which would look very suspect. The most realistic approach would be to send a shortened link and hope they click on it, that they are logged into discord.com when they do (most people use the app), that there are no other security measures (HttpOnly cookies), etc.

No real way to use this to compromise a large number of users without more complex means.

It isn't about the commonality of the bug, but the level of access it gets you and the type or massive scale of the target. This bug on your blog? Who cares. This bug on Discord or AWS? Much more attractive and lucrative.

Yes, but this is not a particularly high access level bug.

Depending on the target, it's possible that the most damage you could do with this bug is a phishing attack where the user is presented with a fake sign-in form (on a sketchy URL)

I think $4k is a fair amount, I've done hackerone bounties too and we got less than that years ago for a twitter reflected xss

Why would that be the maximum damage? This XSS is particularly dangerous because you are running your script on the same domain where the user is logged in, so you can pretty much do anything you want under their session.

In addition, this is widespread. It's golden for any attacker.

Because modern cookie directives and browser configs neuter a lot of the worst XSS outcomes/easiest exploit paths. I would expect all the big sites to be setting them, though I guess you never know.

I would not be that confident: as you can see in their first example, they show Discord, and the XSS code is directly executed on discord.com under the logged-in account (some people actually use the web version of Discord to chat, or sign in on the website for whatever reason).

If you have a high-value target, it is a great opportunity to use such exploits, even for single shots (it would likely not be detected anyway since it's a drop in the ocean of requests).

Spreading it across the whole internet is not a good strategy, but for 4,000 USD, being able to target a few users is great value.

Besides XSS, phishing offers its own opportunities.

Example: Coinbase is affected too, though on the docs subdomain, and there is 2-step verification, so you cannot do transactions directly. But if you just replace the content with a "Sign in to Coinbase / Follow this documentation procedure / Download update" page, this can get very, very profitable.

Someone would pay 4,000 USD to receive 500,000 USD back in stolen bitcoins.

Still, purely by executing things under the user's session, there are interesting things to do.

> some people actually use web version of Discord to chat, or sign-in on the website for whatever reason

Besides this security blunder on Discord's part, I can see only upsides to using a browser version rather than an Electron desktop app. Especially given how prone Discord is to data mining its users, it seems foolish to let them out of the web sandbox and into your system.

Again, here you have not so much sold a vulnerability as you have planned a heist. I agree, preemptively: you can get a lot of money from a well-executed heist!

Do you want to execute actions as logged-in user on high-value website XXX ?

If yes -> very useful

Nobody is disputing that a wide variety of vulnerabilities are "useful", only that there's no market for most of them. I'd still urgently fix an XSS.

There is a market outside Zerodium: it's Telegram. Finding a buyer takes time and trust, but it definitely has higher value than 4k USD because of its real-world impact, no matter if it is technically lower on the CVSS scores.

Really? Tell me a story about someone selling an XSS vulnerability on Telegram.

("The CVSS chart"?)

Moments later

Why do people keep bringing up "Zerodium" as if it's a thing?

I understand your perspective about the technical value of an exploit, but I disagree with the concept that technical value = market value.

There are unorganized buyers who may be interested if they see potential to weaponize it.

In reality, if you want to maximize revenue, yes, you need to organize your own heist (if that's what you meant)

Do you know this or do you just think it should be true?

> understand your perspective about the technical value of an exploit

Going out on the world’s sturdiest limb and saying u/tptacek knows the technical and trading sides of exploits. (Read his bio.)

AIU this feature is SSS, not XSS, so XSS protections don't apply.

How would you make money from this? Most likely via phishing. Not exactly a zero-click RCE.

What happens in all these discussions is that we stealthily transition from "selling a vulnerability" to "planning a heist", and you can tell yourself any kind of story about planning a heist.

Also, the XSS exploit would have been dead in the water for any site using CSP headers. Coinbase certainly uses CSP. With this in place, an XSS vuln can't inject arbitrary JS.
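As a sketch, a policy along these lines (illustrative, not any particular site's actual header) is what kills injected script:

```
Content-Security-Policy: default-src 'self'; script-src 'self'; object-src 'none'; base-uri 'none'
```

Without 'unsafe-inline' in script-src, inline script blocks injected into a page won't execute. One caveat: when the payload is a whole attacker-controlled SVG document served from the same origin, the CSP that matters is the one sent on the SVG response itself, so the proxying route has to apply the header too.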

I don't like tptacek, but it's insane to not back up this comment with any amount of evidence or at least explanation. The guy knows his shit.

Hey I was wrong about Apple downthread.

> - Your Discord session cookies and token could be stolen, leading to a complete account takeover.

Discord uses HttpOnly cookies (except for the cookie consent banner).

tokens are stored in localStorage, which is accessible by JS

Well, it used to be much more accessible before; now you have to do some hack to retrieve it, and by hack, I mean some "window.webpackChunkdiscord_app.push" kind of hack, no longer your usual retrieval. Basically you have to get the token from webpack. The localStorage one does not seem to work anymore; that is what I used, but now it does not work (or rather, not always). The webpack one seems to be reliably good.

So your code goes like:

  // Try localStorage first (getLocalStorageItem is the commenter's
  // own helper, defined elsewhere)
  const token = getLocalStorageItem('token')
  if (token) return token

  // Fall back to pulling the token out of the webpack runtime
  // (getTokenFromWebpack is likewise a helper defined elsewhere)
  const webpackToken = await getTokenFromWebpack()
  if (webpackToken) return webpackToken
and localStorage does fail often now. I knew the reason for that (something about them removing it at some point when you load the website?), so you need the webpack way, which is consistently reliable.

I believe if you search for the snippet above, you can find the code for the webpack way.

Discord removes the token from localStorage when the web app is open and it's in app memory, and places it back when you close the tab using the "onbeforeunload" event.

Yeah, that is what I have observed, too.

You can retrieve it the webpack way though.

[deleted]