This kind of thing always makes me nervous, because you end up with a mix of methods where you can (supposedly) pass arbitrary user input and they'll safely handle it, and methods where you can't do that without introducing vulnerabilities - but it's not at all clear which is which from the names. Ideally you design that in from the start, so any dangerous functions are very clearly dangerous from the name. But you can't easily do that down the line.
I'm also rather sceptical of things that "sanitise" HTML, both because there's a long history of them having holes, and because it's not immediately clear what that means, and what exactly is considered "safe".
You are right that the concept of "safe" is nebulous, but the goal here is specifically to be XSS-safe [1]. Elements or properties that could allow scripts to execute are removed. This functionality lives in the user agent and prevents adding unsafe elements to the DOM itself, so it should be easier to get correct than a string-to-string sanitizer. The logic of "is the element currently being added to the DOM a <script>" is fundamentally easier to get right than "does this HTML string include a script tag".
[1] https://developer.mozilla.org/en-US/docs/Web/API/Element/set...
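The difference is easy to demonstrate. Here's a minimal Node sketch (the naive regex sanitizer and the payload are illustrative, not from any real library) of why string-to-string filtering is the harder problem:

```javascript
// A naive string-to-string "sanitizer": strip anything that
// looks like a <script>...</script> block.
function naiveSanitize(html) {
  return html.replace(/<script[\s\S]*?<\/script>/gi, "");
}

// Classic bypass: removing the matched spans splices the
// remaining fragments together into a brand-new <script> tag.
const payload = "<scr<script></script>ipt>alert(1)</scr<script></script>ipt>";
console.log(naiveSanitize(payload));
// → <script>alert(1)</script>
```

A DOM-level sanitizer never faces this trap: by the time it runs, the input has already been parsed into concrete nodes, so there's no second parse to smuggle a tag through.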
It's certainly an improvement over people trying to homebrew their own sanitisers. But that distinction of being XSS-safe is a potentially subtle one, and could end up being dangerous if people don't carefully consider whether XSS-safe is good enough when they're handling arbitrary user input like that.
It has also made me nervous for years that there's been no schema against which one can validate HTML. "You want to validate? Paste your URL into the online validation tool."
This help? https://github.com/validator/validator
But for HTML snippets you can pretty much just check that tags follow a couple of simple rules between <> and that they're closed or not closed correctly.
That app does look helpful!
Ideally you should be able to set a global property somewhere (as a web developer) that disallows outdated APIs like `innerHTML`, but with the Big Caveat that your website will not work on browsers older than X. But maybe there are web standards for that already: fallback content if a browser is considered outdated.
It's not an "outdated API". It's still good for what it was always meant for: parsing trusted, application-generated markup and atomically inserting it into the content tree as a replacement for a given element's existing children.
> set a global property somewhere (as a web developer) that disallows[…] `innerHTML`
(Not that you should actually do this: anyone who has to resort to it in their codebase has deeper problems.)

Doesn't using TrustedTypes basically do that? I'm not really web-y, someone please correct me if I'm off.
Yup, this is basically what TrustedTypes is for!
I like the idea of that. But I imagine linting rules are a much more immediate answer in a lot of projects.
The idea is you wouldn't mix innerHTML and setHTML, you would eliminate all usage of innerHTML and use the new setHTMLUnsafe if you needed the old functionality.
I looked up setHTMLUnsafe on MDN, and it looks like it's been in every notable browser since last year.
Good idea to ship that one first, when it's easier to implement and is going to be the unsafe fallback going forward.
> I looked up setHTMLUnsafe on MDN, and it looks like it's been in every notable browser since last year.
Oddly though, the Sanitizer API that it's built on doesn't appear to be in Safari. https://developer.mozilla.org/en-US/docs/Web/API/Sanitizer
If I need the old functionality why not stick to innerHTML?
because the "unsafe" suffix conveys information to the reader, whereas `innerHTML` does not?
Any potential reader should be familiar with innerHTML.
Right. Like how any potential reader is familiar with the risks of sql injection which is why nothing has ever been hacked that way.
Or how any potential driver is familiar with seat belts which is why everybody wears them and nobody’s been thrown from a car since they were invented.
yes, and bugs shouldn't exist because everyone should be familiar with everything.
But if some are marked unsafe and others are not it gives a false sense of security if something is not marked unsafe.
So we shouldn’t mark anything as unsafe then? And give no indication whatsoever?
The issue isn’t that the word “safe” doesn’t appear in safe variants, it’s that “unsafe” makes your intentions clear: “I know this is unsafe, but it’s fine because of X and Y”.
Maybe we should add the word "safe" and consider everything else unsafe.
As in life, things should default to being safe. Unsafe, unexpected behaviours should be the exception and thus require an exceptional name.
Legacy and backwards compatibility hampers this, but going forward…
Because then your linter won't be able to tell you when you're done migrating the calls that can be migrated.
Because sooner or later it'll be removed.
No because the web has to remain backwards compatible with older sites. This has always been the case.
And break millions of sites?
You can't rename an existing method. It would break compatibility with existing websites.
> you would eliminate all usage of innerHTML
The mythical refactor where all deprecated code is replaced with modern code. I'm not sure it has ever happened.
I don't have an alternative of course, adding new methods while keeping the old ones is the only way to edit an append-only standard like the web.
If you want to adopt this in your project, you can add a linter that explicitly bans innerHTML (and then go fix the issues it finds). Obviously Mozilla cannot magically fix the code of every website on the web but the tools exist for _your_ website.
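For instance, a sketch using ESLint's built-in no-restricted-properties rule (the message text and exact config shape are illustrative; adapt it to your own setup):

```javascript
// eslint.config.js — flag every use of `innerHTML` in the project.
module.exports = [
  {
    rules: {
      "no-restricted-properties": [
        "error",
        {
          property: "innerHTML",
          message: "Use setHTML(), or setHTMLUnsafe() with a justification.",
        },
      ],
    },
  },
];
```

Omitting the `object` key makes the rule match the property on any object, which is what you want here, since `innerHTML` can be reached through any element reference.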
I kinda like the way JS evolved into a modern language, where essentially ~everyone uses a linter that e.g. prevents the use of `var`. Sure, it's technically still in the language, but it's almost never used anymore.
(Assuming transpilers have stopped outputting it, which I'm not confident about.)
Actually... https://github.com/microsoft/TypeScript/issues/52924
Ah yeah, I remember that. General point still stands: in terms of the lived experience of developers, `var` is essentially deprecated.
I touch JS that uses var heavily on a daily basis and I would be incredibly surprised to find out that I am alone in that.
for some values of "everyone" and "never".
Depending on the transpiler and mode of operation, `var` is sometimes emitted.
For example, esbuild will emit var when targeting ESM, for performance and minification reasons. Because ESM has its own inherent scope barrier, this is fine, but it won't apply the same optimizations when targeting (e.g.) IIFE, because it's not fine in that context.
https://github.com/evanw/esbuild/issues/1301
It for sure happens for drop-in replacements.
Nobody's talking about old code here.
Having an alternative to innerHTML means you can ban it from new code through linting.
Finally, a good use case for AI.
Yeah, using a kilowatt GPU for string replacement is going to be the killer feature. I probably shouldn't even be joking; people are already using it like this.
When the condition for when you want to replace is hard to properly specify, AI shines for such find and replaces.
This one is literally matching "innerHTML = X" and setting "setHTML(X)" instead. Not some complex data format transformation.
But I can see what you mean, even if it would still be better for it to print the code that does what you want (uses a few Wh) than to do the actual transformation itself (prone to mistakes and injection attacks, and it uses however many tokens your input data is).
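For illustration, here's what such a blind textual rewrite might look like (a sketch; the regex is deliberately naive and the example identifiers are made up):

```javascript
// Blindly rewrite `x.innerHTML = expr;` as `x.setHTML(expr);`.
function migrate(source) {
  return source.replace(
    /(\w+)\.innerHTML\s*=\s*([^;]+);/g,
    "$1.setHTML($2);"
  );
}

console.log(migrate("el.innerHTML = userBio;"));
// → el.setHTML(userBio);
```

This is precisely the rewrite that can break a site: setHTML() strips markup that innerHTML would have inserted, so every rewritten call site still needs a human (or a test suite) to confirm the input never legitimately contained the now-stripped elements.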
That can break the site if you do the find and replace blindly. The goal here is to do the refactor without breaking the site.
> When the condition for when you want to replace is hard to properly specify, AI shines for such find and replaces.
And, in your opinion, this is one of those cases?
It is because the new API purposefully blocks things the old API did not.
This ship has sailed, unfortunately; just yesterday I saw coworkers redact a screenshot using ChatGPT.
Wouldn't AI be trained on data using innerHTML?
My experience is that they somehow print quite modern code, despite things like ES6 being too new to be standard knowledge even for me, and I'm not even middle-aged yet.
Maybe the last 10 years saw so much more modern code than the previous 40+ years of coding combined that modern code is statistically more likely to be output? Or maybe they assign higher weights to more recent commits/sources during training? Not sure, but it seems to be good at picking this up. And you can always feed the info into its context window until then.
This is not my experience. Claude has been happily generating code over the past week that is full of implicit any and using code that's been deprecated for at least 2 years.
>> Maybe the last 10 years saw so much more modern code than the last cumulative 40+ years of coding and so modern code is statistically more likely to be output?
The rate of change has made defining "modern" even more difficult and the timeframe brief, plus all that new code is based on old code, so it's more like a leaning tower than some sort of solid foundation.
ES6 is 11 years old. It's not that new.
> "ES6 being too new to be standard knowledge"
Huh? It's been a decade.
Which is why it can easily understand how innerHTML is being used so that it can replace it with the right thing.
Honest question: Is there a way to get an LLM to stop emitting deprecated code?
Theoretically, if you could train your own, and remove all references to the deprecated code in the training data, it wouldn't be able to emit deprecated code. Realistically that ability is out of reach at the hobbyist level, so it will have to remain theoretical for at least a few more iterations of Moore's law.
> it's not at all clear which is which from the names. Ideally you design that in from the start
It was, and there is: setting elementNode.textContent is safe for untrusted inputs, and setting elementNode.innerHTML is unsafe for untrusted inputs. The former will escape everything, and the latter won't escape anything.
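Roughly speaking, assigning to textContent buys you the equivalent of escaping every HTML-significant character. A plain-JavaScript approximation (a sketch; the browser does this natively and handles more edge cases):

```javascript
// Roughly what assigning to textContent buys you: every
// HTML-significant character becomes inert text.
function escapeHtml(untrusted) {
  return untrusted
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

console.log(escapeHtml('<img src=x onerror="alert(1)">'));
// → &lt;img src=x onerror=&quot;alert(1)&quot;&gt;
```

Note the `&` replacement must run first, or the entities produced by the later replacements would themselves get double-escaped.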
You are right that these "sanitizers" are fundamentally confused:
> "HTML sanitization" is never going to be solved because it's not solvable.¶ There's no getting around knowing whether or any arbitrary string is legitimate markup from a trusted source or some untrusted input that needs to be treated like text. This is a hard requirement.
<https://news.ycombinator.com/item?id=46222923>
The Web platform folks who are responsible for getting fundamental APIs standardized and implemented natively are in a position to know better, and they should know better. This API should not have made it past proposal stage and should not have been added to browsers.
> There's no getting around knowing whether any arbitrary string is legitimate markup from a trusted source or some untrusted input that needs to be treated like text. This is a hard requirement.
It is not a hard requirement that untrusted input is "treated like text". And this API lets you customize exactly what tags/attributes are allowed in the untrusted input. That's way better than telling everyone to write their own; it's not trivial.
> It is not a hard requirement that untrusted input is "treated like text".
It's also not a hard requirement that I defend the position that there's a hard requirement for untrusted input to be treated like text. That isn't my position, and it's not what I wrote.
Given that it is not a hard requirement that untrusted input be treated like text, it wouldn't make sense for anyone to claim that it is, and therefore it doesn't make sense for anyone, presented with what I did write, to strenuously argue that such a tortured, implausible, uncharitable, nonsensical interpretation of what I wrote is something I have to account for (versus the interpretation that does match what I wrote, is actually true, and makes sense).
You are, willfully or not, misconstruing what I have written.
> That's way better than telling everyone to write their own; it's not trivial.
Right, it's not trivial. It's so far the opposite of trivial that it's (as I said the first time—and again, just now) not solvable.
No one should be writing their own.
No one should be trying to write their own.
No one should be using this API at all.
And no one should have pushed for its implementation.
It's a bad API.
I don't see how I differed from what you said? You divided strings going into HTML into two categories, where one category uses textContent and the other category uses innerHTML. My point is to disagree with those categories, not whatever subtle thing you're taking issue with.
Oh, okay. Tell me, dipshit, are the following two claims equivalent or different?
"Everyone who files a tax return should know whether they need to pay at least $1000 in unpaid taxes to the IRS."
"Everyone who files a tax return needs to pay at least $1000 in unpaid taxes to the IRS."
> You divided strings going into HTML into two categories, where one category uses textContent and the other category uses innerHTML.
No, I didn't:
> setting elementNode.textContent is safe for untrusted inputs, and setting elementNode.innerHTML is unsafe for untrusted inputs
That's what I wrote: a statement containing two claims (both true—and not even in the part of my comment that you actually quoted and pretended to be replying to).
This is a totally different kind of statement. You're not dividing tax returns into two categories and then saying what to do with each category.
Those claims are different but not in a way that analogizes to the HTML conversation.
I'd say I'm interested in hearing how you reason that knowing whether you need to pay at least $1000 in unpaid taxes to the IRS doesn't put you in one bucket or another, but I'm not.
The IRS thing indirectly has categories but it doesn't say what to do with them, and what to do with them is what I disagreed with your original post on. I didn't say all input is untrusted or whatever analogizes to your tax thing.
Anyway, I see you edited your previous post after I wrote my reply.
If you weren't trying to divide things into two categories, you wrote it very confusingly. When you say how to handle trusted strings, then say how to handle untrusted strings, then say "There's no getting around knowing whether any arbitrary string is legitimate markup from a trusted source or some untrusted input that needs to be treated like text. This is a hard requirement.", it really sounds like that's supposed to cover all cases.
Me thinking you were using two categories is an honest mistake, not malicious misquoting.
And reading your original post that way is the interpretation that makes it stronger. If there are more categories then SetHTML is no longer "fundamentally confused". Your argument against it falls apart.
Guess how interested I am in pretending that a debate with you, about this or anything else, is going to be anything other than a bigger waste of time than it already has been.
fwiw, if you serve your page with:
Content-Security-Policy: require-trusted-types-for 'script'
…then it blocks you from passing regular strings to the methods that don't sanitize.
They do link the default configuration for "safe": https://wicg.github.io/sanitizer-api/#built-in-safe-default-...
But I agree, my default approach has usually been to only use innerText if it has untrusted content:
So if their demo is this:
Mine would be:

What if I wanted an <h2>?
Edit: I don't mean this flippantly. If I want to render, say, my blog entry on your site, will I need to select every markup element from a dropdown list of custom elements that only accept text a la Wordpress?
If it's anything complex I'm doing it server side, personally
That's why I only allow user input of alphanumeric ASCII characters. No need to worry about sanitization then, and you can just remove all the characters that don't match.
(It's a joke, but it is also 100% XSS, SQL injection, etc. safe and future proof)
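The joke in code form (a sketch; it strips rather than rejects, as described):

```javascript
// Remove everything except alphanumeric ASCII. Brutally lossy,
// but the surviving characters can't close a string, terminate
// a tag, or open a new one in any common injection context.
function paranoidFilter(input) {
  return input.replace(/[^a-zA-Z0-9]/g, "");
}

console.log(paranoidFilter('<script>alert("hi")</script>'));
// → scriptalerthiscript
```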
Some sanitization is better than none? If you're relying on the browser to handle it for you, you're already in a lot of trouble.
> I'm also rather sceptical of things that "sanitise" HTML, both because there's a long history of them having holes, and because it's not immediately clear what that means, and what exactly is considered "safe".
What is safe depends on where the sanitized HTML is going, on what you're doing with it.
It isn't possible to "sanitize HTML" after collecting it so that, when you use it in the future, it will be safe. "Safe" is defined by the use.
But it is possible to sanitize it before using it, when you know what the use will be.
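A concrete example of use-defined safety: "javascript:alert(1)" contains none of the characters an HTML-escaper touches, so a store-time scrub passes it through untouched, yet it executes if later used as a link target. A use-site check (a sketch; safeHref and the base URL are illustrative) looks like:

```javascript
// Validate at the point of use, for this specific use: a URL
// destined for an href. Anything that isn't http(s) is replaced.
function safeHref(url) {
  try {
    const parsed = new URL(url, "https://example.com/");
    return ["http:", "https:"].includes(parsed.protocol) ? parsed.href : "#";
  } catch {
    return "#";
  }
}

console.log(safeHref("javascript:alert(1)")); // → #
console.log(safeHref("/blog/post-1"));        // → https://example.com/blog/post-1
```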
realSetSafeHTML()
BTW, HTML allows inline SVG with an XML-flavored syntax that interprets <script/> and <title> differently. It's a goldmine for sanitizer escapes. There are completely bonkers syntax switching and error recovery rules that interact with parsing modes (there's even an edge case where a particular attribute value switches between HTML and XML-ish parsing rules).
Don't even try to allow inline <svg> from untrusted sources! (and then you still must sanitise any svg files you host)
If you just serve SVGs through an <img> tag it'll be much safer. I never understood the appeal of inline <svg> anyway.
Inline SVG is stylable with CSS styles in the same HTML page.
Also animatable within the same context (Animation API, etc.) as the parent page, so different SVGs can influence each other's animations.
Inline reduces round trips.
You can use img with a data url?
It may be using some of the same deserialization machinery, but "parsing" is a broad term that includes things that the sanitizer is doing and that the browser's ordinary content-processing → rendering path does not.
Even with this being a native API, there are still two parsers that need to be maintained. What a native API achieves is to shift the onus for maintaining synchronicity between the two onto the browser makers. That's not nothing, but it's also not the sort of free lunch that some people naively believe it is.
> it's not at all clear which is which from the names
There's setHTML and setHTMLUnsafe. That seems about as clear as you can get.
If that'd been the design from the start, then sure. But it's not at all obvious that setHTML is safe with arbitrary user input (for a given value of "safe") and innerHTML is dangerous.
But you can use innerHTML to set HTML and that's not safe.
At this point that API has been around for decades and is probably impossible to deprecate without breaking fairly large amounts of the web. The only option is to introduce a new and better API, and maybe eventually have the browser throw out console warnings if a page still uses the old innerHTML API. I doubt any browser vendor will be gung ho enough to actually remove it for a very long time.