Note that <html> and <body> auto-close and don't need to be terminated.

Also, wrapping the <head> tags in an actual <head></head> is optional.

You also don't need the quotes as long as the attribute value doesn't have spaces or the like; <html lang=en> is OK.
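Putting those together, a minimal sketch (title and text are just placeholders) parses as a complete document:

    <!DOCTYPE html>
    <html lang=en>
    <meta charset=utf-8>
    <title>Example</title>
    <p>Content goes straight into the implied body.

The parser wraps the metadata in an implied <head> and the content in an implied <body> for you.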

(kind of pointless as the average website fetches a bazillion bytes of javascript for every page load nowadays, but sometimes slimming things down as much as possible can be fun and satisfying)

This kind of thing will always just feel shoddy to me. It is not much work to properly close a tag. The number of bytes saved is negligible compared to basically any other aspect of a website. Avoiding unneeded div spam would already save more. Or, for example, making sure CSS is not bloated. And of course avoiding downloading 3MB of JS.

What this achieves is making the syntax more irregular and harder to parse. I wish all these tolerances wouldn't exist in HTML5 and browsers simply showed an error, instead of being lenient. It would greatly simplify browser code and HTML spec.

Implicit elements and end tags have been a part of HTML since the very beginning. They introduce zero ambiguity to the language, they’re very widely used, and any parser incapable of handling them violates the spec and would be incapable of handling piles of real‐world strict, standards‐compliant HTML.

> I wish all these tolerances wouldn't exist in HTML5 and browsers simply showed an error, instead of being lenient.

They (W3C) tried that with XHTML. It was soundly rejected by webpage authors and by browser vendors. Nobody wants the Yellow Screen of Death. https://en.wikipedia.org/wiki/File:Yellow_screen_of_death.pn...

> They introduce zero ambiguity to the language

Well, no ambiguity for machine parsing, yes, but for humans writing and reading it, the explicit closing tags are helpful. For example, if you have

    <p> foo
    <p> bar
and change it to

    <div> foo
    <div> bar
suddenly you've got a syntax error (or some quirks mode rendering with nested divs).

The "redundancy" of closing the tags acts basically like a checksum protecting against the "background radiation" of human editing. And if you're writing raw HTML without an editor that can autocomplete the closing tags then you're doing it wrong anyway. Yes that used to be common before and yes it's a useful backwards compatibility / newbie friendly feature for the language, but that doesn't mean you should use it if you know what you're doing.

It sounds like you're headed towards XHTML. The rise and fall of XHTML is well documented and you can binge the whole thing if you're so inclined.

But my summary is that the reason it doesn't work is that strict document specs are too strict for humans. And at a time when there was legitimate browser competition, the one that made a "best effort" to render invalid content was the winner.

The merits and drawbacks of XHTML have already been discussed elsewhere in the thread and I am well aware of them.

> And at a time when there was legitimate browser competition, the one that made a "best effort" to render invalid content was the winner.

Yes, my point is that there is no reason to still write "invalid" code just because it's supported for backwards compatibility reasons. It sounds like you ignored 90% of my comment, or perhaps you replied to the wrong guy?

I'm a stickler for HTML validity, but close tags on <p> and <li> are optional by spec. Close tags for <br>, <img>, and <hr> are prohibited. And XML-like self-closing trailing slashes explicitly have no meaning in HTML.

Close tags for <script> are required. But if people start treating HTML like XML, they write <script src="…" />, and that fails, because the script element requires an explicit close tag and that slash has no meaning in HTML.
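For example (the src value is made up), the conformant version versus the XML-ish one:

    <script src="app.js"></script>   <!-- explicit close tag, as required -->
    <script src="app.js" />          <!-- slash is ignored; the element stays open and swallows the markup after it -->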

I think validity matters, but you have to measure validity according to the actual spec, not what you wish it was, or should have been. There's no substitute for actually knowing the real rules.

Are you misunderstanding on purpose? I am aware they are optional. I am arguing that there is no reason to omit them from your HTML. Whitespace is (mostly) optional in C, does that mean it's a good idea to omit it from your programs? Of course a br tag needs no closing tag because there is no content inside it. How exactly is that an argument for omitting the closing p tag? The XML standard has no relevance to the current discussion because I'm not arguing for "starting to treat it like XML".

I'm beginning to think I'm misunderstanding, but it's not on purpose.

Including closing tags as a general rule might make readers think that they can rely on their presence. Also, in some cases they are prohibited. So you can't achieve a simple evenly applied rule anyway.

Well, just because something is allowed by the syntax does not mean it's a good idea, that's why pretty much every language has linters.

And I do think there's an evenly applied rule, namely: always explicitly close all non-void elements. There are only 14 void elements anyway, so it's not too much to expect readers to know them. In your own words "there's no substitute for actually knowing the real rules".

I mean, your approach requires memorizing for which 15 elements the closing tag can be omitted anyway; otherwise you'll mentally parse the document wrong (thinking a br tag needs to be closed is just as likely as thinking p tags can be nested).

The risk that somebody might be expecting a closing tag for an hr element seems minuscule and is a small price to pay for conveniences such as being able (as I explained above) to find and replace a p tag or a li tag with a div tag.

I don't believe there are any contexts where <li> is valid that <div> would also be valid.

I'm not opposed to closing <li> tags as a general practice. But I don't think it provides as much benefit as you're implying. Valid HTML has a number of special rules like this. Like different content parsing rules for <textarea> and <script>. Like "foreign content".

If you try to write lint-passing HTML in the hopes that you could change <li> to <div> easily, you still have to contend with the fact that such a change cannot be valid, except possibly as a direct descendant of <template>.

Again, you're focusing on a pointless detail. Sure, I made a mistake in offhandedly using li as an example. Why do you choose to ignore the actually valid p example though? Seems like you're more interested in demonstrating your knowledge of HTML parsing (great job, proud of ya) than anything else. Either way, you've given zero examples of benefits of not doing things the sensible way that most people would expect.

To (hopefully) be clear, I don't think there are many benefits either way.

IMO, all of those make logical sense. If you’re inserting a line break or literal line, it can be thought of as a 1-dimensional object, which cannot enclose anything. If you want another one, insert another one.

In contrast, paragraphs and lists do enclose content, so IMO they should have clear delineations - if nothing else, to make visually understanding the code more clear.

I’m also sure that someone will now reference another HTML element I didn’t think about that breaks my analogy.

I didn't have a problem with XHTML back in the day; it took a while to unlearn it; I would instinctively close those tags: <br/>, etc.

It was actually the XHTML 2.0 specification [1], which discarded backwards compatibility with HTML 4, that was the straw that broke the camel's back. No more forms as we knew them, for example; we were supposed to use XForms.

That's when WHATWG was formed and broke with the W3C and created HTML5.

Thank goodness.

[1]: https://en.wikipedia.org/wiki/XHTML#XHTML_2.0

XHTML 2.0 had a bunch of good ideas and a lot of them got "backported" into HTML 5 over the years.

XHTML 2.0 didn't even really discard backwards-compatibility that much: it had its compatibility story baked in with XML Namespaces. You could embed XHTML 1.0 in an XHTML 2.0 document just as you can still embed SVG or MathML in HTML 5. XForms was expected to take a few more years and people were expecting to still embed XHTML 1.0 forms for a while into XHTML 2.0's life.

At least from my outside observer perspective, the formation of WHATWG was more a proxy war between the view of the web as a document platform versus the view of the web as an app platform. XHTML 2.0 wanted a stronger document-oriented web.

(Also, XForms had some good ideas, too. Some of what people want in "forms helpers" when they are asking for something like HTMX to standardized in browsers were a part of XForms such as JS-less fetch/XHR with in-place refresh for form submits. Some of what HTML 5 slowly added in terms of INPUT tag validation are also sort of "backports" from XForms, albeit with no dependency on XSD.)
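For instance, the declarative validation attributes that eventually landed in HTML 5 (the pattern and title here are just an illustration):

    <input type="email" required>
    <input type="text" pattern="[0-9]{4}" title="Please enter four digits">

No script needed for basic validation, which is roughly the XForms spirit, minus the XSD types.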

XHTML in practice was too strict and tended to break a few other things (by design) for better or worse, so nobody used it...

That said, actually writing HTML that can be parsed via an XML parser is generally a good, neighborly thing to do, as it allows for easier scraping and parsing through browsers and non-browser applications alike. For that matter, I will also add additional data-* attributes to elements just to make testing (and scraping) easier.
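Something like this, for example (the attribute names are just whatever convention you pick):

    <button data-testid="checkout-submit" data-track="cta">Check out</button>

Tests and scrapers can then select on data-testid instead of depending on classes or DOM structure.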

You're not alone, this is called XHTML and it was tried but not enough people wanted to use it

Yeah, I remember when I was at school first learning HTML and this kind of stuff. When I stumbled upon XHTML, I right away adapted my approach to verify my pages as valid XHTML. Guess I was always on this side of things. Maybe machine empathy? Or also human empathy, because someone needs to write those parsers and the logic to process this stuff.

oh man, I wish XHTML had won the war. But so many people (and CMSes) were creating dodgy markup that simply rendered yellow screens of doom, that no-one wanted it :(

i'm glad it never caught on. the case sensitivity (especially for css), having to remember the xmlns namespace URI in the root element, CDATA sections for inline scripts, and insane ideas from companies about extending it further with more xml namespaced elements... it was madness.

I'll copy what I wrote a few days ago:

The fact XHTML didn't gain traction is a mistake we've been paying off for decades.

Browser engines could've been simpler; web development tools could've been more robust and powerful much earlier; we would be able to rely on XSLT and invent other ways of processing and consuming web content; we would have proper XHTML modules, instead of the half-baked Web Components we have today. Etc.

Instead, we got standards built on poorly specified conventions, and we still have to rely on 3rd-party frameworks to build anything beyond a toy web site.

Stricter web documents wouldn't have fixed all our problems, but they would have certainly made a big impact for the better.

And add:

Yes, there were some initial usability quirks, but those could've been ironed out over time. Trading the potential of a strict markup standard for what we have today was a colossal mistake.

There's no way it could have gained traction. Consider two browsers. One follows the spec explicitly, and one goes into "best-effort" mode on encountering invalid markup. End users aren't going to care about the philosophical reasoning for why Browser A doesn't show them their school dance recital schedule.

Consider JSON and CSV. Both have formal specs. But in the wild, most parsers are more lenient than the spec.

Which is also largely what happened: HTML 5 is in some ways that "best-effort" mode, standardized by a different standards body to route around XHTML's philosophies.

Yeah, this is it. We can debate what would be nicer theoretically until the cows come home, but there's a kind of real-world game theory that leads to browsers doing their best to parse all kinds of slop as well as they can, which then removes the incentive for developers and tooling to produce byte-perfect output.

It had too much unnecessary metadata, yes, but case insensitivity is always the wrong way to do things in programming (e.g. case-insensitive file system paths). The only reason you'd want it is for real-world stuff like person names and addresses etc. There's no reason you'd mix the case of your CSS classes anyway, and if you want that, why not also automatically match camelCase with snake_case with kebab-case?

> It would greatly simplify browser code and HTML spec.

I doubt it would make a dent - e.g. in the "skipping <head>" case, you'd be replacing the error recovery mechanism of "jump to the next insertion mode" with "display an error", but a) you'd still need the code path to handle it, b) now you're in the business of producing good error messages which is notoriously difficult.

Something that would actually make the parser a lot simpler is removing document.write, which has been obsolete ever since the introduction of the DOM and whose main remaining real world use-case seems to be ad delivery. (If it's not clear why this would help, consider that document.write can write scripts that call document.write, etc.)
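A contrived sketch of why, with a trivial payload:

    <script>
      document.write('<script>document.write("<p>written twice over")<\/script>');
    </script>

The parser has to pause, execute the script, splice the written markup back into its own input stream, and then possibly do it all again for the script it just wrote - that re-entrancy is a big part of what makes the parsing model so hairy.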

> I wish all these tolerances wouldn't exist in HTML5 and browsers simply showed an error, instead of being lenient.

Who would want to use a browser which would prevent many currently valid pages from being shown?

I mean, I am obviously talking about a hypothetical scenario, a somewhat better timeline/universe. In such a scenario, the shoddy practice of not properly closing tags and leaning on lenient browser parsing and sophisticated fallbacks would never have taken hold, and those many currently valid websites would mostly not have been created that way, because as someone tried to create them, the browsers would have told them no. Then those people would revise their code and end up with clean, easier-to-parse code/documents, and we wouldn't have all these edge and special cases in our standards.

Also obviously that's unfortunately not the case today in our real world. Doesn't mean I cannot wish things were different.

I agree for sure, but that's a problem with the spec, not the website. If there are multiple ways of doing something you might as well do the minimal one. The parser will always have to be able to handle all the edge cases no matter what anyway.

You might want to always consistently terminate all tags and such for aesthetic or human-centered (reduced cognitive load, easier scanning) reasons though, and I'd accept that.

<html>, <head> and <body> start and end tags are all optional. In practice, you shouldn’t omit the <html> start tag because of the lang attribute, but the others never need any attributes. (If you’re putting attributes or classes on the body element, consider whether the html element is more appropriate.) It’s a long time since I wrote <head>, </head>, <body>, </body> or </html>.

> Note that <html> and <body> auto-close and don't need to be terminated.

You monster.

Not only do html and body auto-close, their tags, including the start tags, can be omitted altogether:

    <title>Shortest valid doc</title>
    <p>Body text following here
(cf explainer slides at [1] for the exact tag inferences SGML/HTML does to arrive at the fully tagged doc)

[1]: https://sgmljs.sgml.net/docs/html5-dtd-slides-wrapper.html (linked from https://sgmljs.sgml.net/blog/blog1701.html)

I'm not sure I'd call keeping the <body> tag open satisfying but it is a fun fact.

Didn't know you can omit <head> .. </head>, but I prefer for clarity to keep them.

Do you also spell out the implicit <tbody> in all your tables for clarity?

I do.

`<thead>` and `<tfoot>`, too, if they're needed. I try to use all the free stuff that HTML gives you without needing to reach for JS. It's a surprising amount. Coupled with CSS, you can get pretty far without needing anything else. Even just having `<template>` with minimal JS enables a ton of 'interactivity'.
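e.g. a small sketch with everything spelled out (the columns are made up):

    <table>
      <thead>
        <tr><th>Item</th><th>Qty</th></tr>
      </thead>
      <tbody>
        <tr><td>Apples</td><td>3</td></tr>
      </tbody>
      <tfoot>
        <tr><td>Total</td><td>3</td></tr>
      </tfoot>
    </table>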

Yes. Explicit is almost always better than implicit, in my experience.

Sometimes... especially if a single record displays across more than a single row.

I almost always use thead.

If I don't close something I opened, I feel weird.