I'll never understand why the HATEOAS meme hasn't died.
Is anyone using it? Anywhere?
What kind of magical client can make use of an auto-discoverable API? And why does this client have no prior knowledge of the server they are talking to?
I used it on an enterprise-grade video surveillance system. It was great - basically solved the versioning and permissions problem at the API level. We leveraged other RFCs where applicable.
The biggest issue was that people wanted to subvert the model to "make things easier" in ways that actually made things harder. The second biggest issue is that JSON is not, out of the box, a hypertext format. This makes application/json not suitable for HATEOAS, and forcing some hypertext semantics onto it always felt like a kludge.
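To make that kludge concrete, here's a minimal sketch (in Python, with invented field names) of the HAL-style "_links" convention people bolt onto plain JSON. The camera example is hypothetical, not from the system above:

    # Hypermedia semantics forced onto JSON, HAL-style ("_links" is the
    # application/hal+json convention; the fields here are made up).
    payload = {
        "id": 42,
        "status": "recording",
        "_links": {
            "self": {"href": "/cameras/42"},
            # present only if the caller is permitted to stop the camera
            "stop": {"href": "/cameras/42/stop"},
        },
    }

    # The client keys off which links appear, instead of hard-coding
    # URLs and permission checks.
    can_stop = "stop" in payload["_links"]

This is roughly what "solved the permissions problem at the API level" looks like in practice: the representation itself tells you what you're allowed to do.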
> I'll never understand why the HATEOAS meme hasn't died.
> Is anyone using it? Anywhere?
As I recall ACME (the protocol used by Let’s Encrypt) is a HATEOAS protocol. If so (a cursory glance at RFC 8555 indicates that it may be), then it’s used by almost everyone who serves HTTPS.
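A rough sketch of the discovery step (Python, standard library only; the directory URL is Let's Encrypt's production endpoint, and a real client would go on to fetch nonces and sign requests):

    # Illustration of ACME's discovery step, not a full client.
    import json
    import urllib.request

    # The one URL the client is configured with.
    DIRECTORY_URL = "https://acme-v02.api.letsencrypt.org/directory"

    with urllib.request.urlopen(DIRECTORY_URL) as resp:
        directory = json.load(resp)

    # Everything else (newNonce, newAccount, newOrder, revokeCert) is
    # discovered at runtime from the directory object (RFC 8555 §7.1.1).
    print(directory["newNonce"])
    print(directory["newOrder"])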
Arguably HTTP, when used as it was intended, is itself a HATEOAS protocol.
> What kind of magical client can make use of an auto-discoverable API? And why does this client have no prior knowledge of the server they are talking to?
LLMs seem to do well at this.
And remember that ‘auto-discovery’ means different things. A link typed ‘next’ enables auto-discovery of the next resource (whatever that means); it assumes some pre-existing knowledge in the client of what ‘next’ actually means.
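A hedged sketch of that split, using the requests library against a hypothetical paginated endpoint: the relation name (‘next’) is the pre-existing knowledge; the URL it resolves to is discovered at runtime:

    # 'next'-link pagination; the endpoint is invented for illustration.
    import requests

    url = "https://api.example.com/items"  # the only URL the client knows
    while url:
        resp = requests.get(url)
        for item in resp.json():
            print(item)
        # requests parses RFC 8288 Link headers for us; the client knows
        # what 'next' means, but never where it points.
        url = resp.links.get("next", {}).get("url")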
> As I recall ACME (the protocol used by Let’s Encrypt) is a HATEOAS protocol.
In this case specifically, everybody's lives are worse because of it.
I'm not super familiar with ACME, but why is that? I usually dislike the HATEOAS approach, but I've never really seen it used seriously, so I'm curious!
Yes. You used it to enter this comment.
I am using it to enter this reply.
The magical client that can make use of an auto-discoverable API is called a "web browser", which you are using right this moment, as we speak.
This is true, but isn’t this quite far away from the normal understanding of API, which is an interface consumed by a program? Isn’t this the P in Application Programming Interface? If it’s a human at the helm, it’s called a User Interface.
I agree that's a common understanding of things, but I don't think that it's 100% accurate. I think that a web browser is a client program, consuming a RESTful application programming interface in the manner that RESTful APIs are designed to be consumed, and presenting the result to a human to choose actions.
I think if you restrict the notion of client to "automated programs that do not have a human driving them" then REST becomes much less useful:
https://htmx.org/essays/hypermedia-clients/
https://intercoolerjs.org/2016/05/08/hatoeas-is-for-humans.h...
AI may change this at some point.
At that level, it would be infinitely clearer to say, "There is no such thing as a RESTful API, since the purpose of REST is to connect a system to a human user. There is only such a thing as a RESTful UI based on an underlying protocol (HTML/HTTP). But the implementation of this protocol (the web browser) is secondary to the actual purpose of the system, which is always a UI."
If you allow the notion of client to include "web browser driven by humans", then what is it about Fielding's dissertation that is considered so important and original in the first place? Sure it's formal and creates some new and precise terminology, but the concept of browsing was already well established when he wrote it.
It formalized the network architecture of distributed hypermedia systems and described interesting characteristics and tradeoffs of that approach. Whether or not it did a GOOD job of that for the layman I will leave to you, only noting the confusion around the topic found, ironically, across the internet.
So, given a HATEOAS API and stock Firefox (or Chrome, or Safari, or whatever), will it generate client views with CRUD functionality?
Let alone UX affordances, branding, etc.
Yes. You used such an API to post your reply. And I am using it as well, via the affordances presented by the Mobile Safari hypermedia client program. Quite an amazing system!
No. I was served HTML, not a JSON response that the browser discovered how to display.
Yes. Exactly.
HTML is the HATEOAS response.
I also use Google Maps, YouTube, Spotify, and Figma in the same web browser. But surely most of the functionality of those would not be considered HATEOAS.
The web browser is just following direct commands. The auto-discovery and logic are implemented by my human brain.
Yes.
https://intercoolerjs.org/2016/05/08/hatoeas-is-for-humans.h...
Wait what? So everything is already HATEOAS?
I thought the “problem” was that no one was building proper restful / HATEOAS APIs.
It can’t go both ways.
The web, in traditional HTML-based responses, uses HATEOAS, almost by definition. JSON APIs rarely do, and when they do it's largely pointless.
https://htmx.org/essays/how-did-rest-come-to-mean-the-opposi...
https://intercoolerjs.org/2016/05/08/hatoeas-is-for-humans.h...
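To make the contrast concrete, here's a small sketch with two invented payloads for the same resource:

    # The same resource, two representations (both made up).
    html_response = """
    <p>Order #42: shipped</p>
    <a href="/orders/42/invoice">Invoice</a>
    <form action="/orders/42/cancel" method="post">
      <button>Cancel order</button>
    </form>
    """  # the controls travel with the representation: HATEOAS

    json_response = {"id": 42, "status": "shipped"}
    # Nothing here tells the client that POST /orders/42/cancel exists;
    # that knowledge lives in the client code, which is exactly the
    # coupling HATEOAS was meant to avoid.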
https://htmx.org/ might be the closest attempt?
https://data-star.dev takes things a bit further in terms of simplicity, performance, and hypermedia concepts. Worth a look.
I think OData isn't really used, and that's a proper standard with a lower bar to clear. HATEOAS doesn't even have a popular standard behind it, which is both a cause and a result of the lack of adoption.
You realize that anyone using a browser to view HTML is using HATEOAS, right? You could probably argue whether SPAs fit the bill, but for sure any server-rendered or static site is using HATEOAS.
The point isn't that clients must have absolutely no prior knowledge of the server; it's that clients shouldn't have to have complete knowledge of the server.
We've grown used to that approach because most of us have been building tightly coupled apps where the frontend knows exactly how the backend works, but that isn't the only way to build a website or web app.
Can you be more specific? What exactly is the partial knowledge? And how is that different from non-conforming APIs?
HATEOAS is anything that serves the talking point now, apparently.