> I sympathize with the pedantry here and found Fielding's paper to be interesting, but this is a lost battle.

Why do people feel compelled to even consider it to be a battle?

As I see it, the REST concept is useful, but the HATEOAS detail ends up having no practical value and creates more problems than it solves. This is in line with the Richardson maturity model[1], where the apex of REST includes all the HATEOAS bells and whistles.

Should REST without HATEOAS classify as REST? Why not? I mean, what is the strong argument for differentiating an architectural style that meets all but one requirement? And is there a point to this nitpicking if HATEOAS is practically irrelevant and the bulk of RESTful APIs do not implement it? What's the value in it? Is there any value in citing theses as if they were Monty Python skits?

[1] https://en.wikipedia.org/wiki/Richardson_Maturity_Model

For me the battle is with people who want to waste time bikeshedding over the definition of "REST" and whether an API is "RESTful", with no practical advantage, and then having to steer the conversation (and their motivation) towards more useful things without alienating them. It's tiresome.

It was buried towards the bottom of the article, but the reason, to me:

Clients can be almost automatic with a HATEOAS implementation, because it is a self-describing protocol.

Of course, OpenAPI (and now perhaps, to some extent, AI) also means that clients don't need to be written; they are just generated.
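For instance, with a published OpenAPI spec, a client is one command away using the openapi-generator tool (the spec and output paths here are placeholders):

    npx @openapitools/openapi-generator-cli generate \
      -i api.yaml -g typescript-fetch -o ./client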

However, it is perhaps important to remember the context here: SOAP is and was terrible, but for enterprises that needed a complex and robust RPC system, it was beginning to gain traction. HATEOAS is a much more general, yet simpler and more comprehensive, system in comparison.

Of course, you don't need any of this. So people built the APIs they did need, which were not RESTful but had an acronym their bosses thought sounded better than SOAP, and the rest is history.

> Clients can be almost automatic with a HATEOAS implementation, because it is a self-describing protocol.

That was the theory, but it was never true in practice.

The oft-made comparisons to the browser really missed the mark: the browser was driven by advanced AI wetware.

Given the advancements in LLMs, it's not even clear that RESTish interfaces would be easier for them to consume (vs., say, gRPC, etc.).

Then let developer Darwinism win, and fire those people. Let the natural selection of the hiring process weed out the pedantic assholes. The days are too short to argue over issues that are not really issues.

>the HATEOAS detail ends up having no practical value and creates more problems than the ones it solves.

Many server-rendered websites support REST by design: a web page with links and forms is the state transferred to the client. Even in SPAs, HATEOAS APIs are great for shifting business logic and security to the server, where it belongs. I have built plenty of them; it does require a certain mindset, but it does make many things easier. What problems are you talking about?
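A plain HTML form is the canonical example: the server tells the client exactly which state transition is currently available and how to invoke it (the URL is invented for illustration):

    <form action="/orders/42/cancel" method="post">
      <button>Cancel order</button>
    </form>

The client needs no baked-in knowledge of the cancellation rules; if the order can't be cancelled, the server simply doesn't render the form.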

complexity

"Backend-only" and "verbose" would be a more correct description.

Defining media types seems right to me, but what ends up happening is that you use Swagger instead to define APIs, and out the window goes HATEOAS; part of the reason for this is just that defining media types is not something people do (though they should).

Basically: define a schema for your JSON, use an obvious CRUD mapping to HTTP verbs for all actions, use URI local-parts embedded in the JSON, use standard HTTP status codes, and embed more error detail in the JSON.
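Roughly, something like this (the resource and its fields are invented for illustration):

    GET    /orders/42   -> 200 + order JSON
    POST   /orders      -> 201 + Location: /orders/43
    PUT    /orders/42   -> 200 or 204
    DELETE /orders/42   -> 204

    { "self": "/orders/42", "status": "shipped", "customer": "/customers/7" }

    HTTP/1.1 400 Bad Request
    { "error": "validation_failed", "detail": "unknown status" }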

> (...) and part of the reason for this is just that defining media types is not something people do (...)

People do not define media types because it's useless and serves no purpose. They define endpoints that return specific resource types, and clients send requests to those endpoints expecting those resource types. When a breaking change is introduced, backend developers simply provide a new version of the API, adding a new endpoint that serves the new resource.

In theory, media types would allow the same endpoint to support multiple resource types. Services would send specific resource types to clients that asked for them by passing the media type in the Accept header. That is all fine and dandy, except it forces endpoints to support an ever more complex content-negotiation scheme that no backend framework comes close to supporting, and it brings absolutely no improvement in the way clients are developed.
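For illustration, the negotiation would look something like this (the vendor media type is hypothetical):

    GET /orders/42
    Accept: application/vnd.example.order.v2+json

    HTTP/1.1 200 OK
    Content-Type: application/vnd.example.order.v2+json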

So why bother?

We should probably stop calling the thing that we call REST "REST" and be done with it; it's only tangentially related to what Fielding tried to define.

> We should probably stop calling the thing that we call REST (...)

That solves no problem at all. We have the Richardson maturity model, which provides a crisp definition, and it's ignored. We have the concept of RESTful, which is also ignored. We have RESTless, to contrast with RESTful. Etc. etc. etc.

None of this discourages nitpickers. They are pedantic in one direction and lax in the other.

Ultimately it's all about nitpicking.

I'm with you. HATEOAS is great when you have two (or more) independent enterprise teams with PMs fighting for budget.

When it's just you and your two-pizza team, contract-first design is totally fine. Just make sure you can version your endpoints or feature-flag new APIs so you don't break your older clients.
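E.g. the boring, well-understood version of that (paths hypothetical):

    GET /v1/orders/42   -> old shape, old clients keep working
    GET /v2/orders/42   -> new shape for new clients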

> but the HATEOAS detail ends up having no practical value and creates more problems than the ones it solves.

Only because we never had the tools and resources that, say, GraphQL has.

And now everyone keeps re-inventing half of HTTP anyway. See this diagram https://raw.githubusercontent.com/for-GET/http-decision-diag... (docs https://github.com/for-GET/http-decision-diagram/tree/master...) and this: https://github.com/for-GET/know-your-http-well

HATEOAS adds lots of practical value if you care about discoverability and longevity.

Discoverability by whom, exactly? If it's for human developers, then good docs are better. If it's for robots, then _maybe_ there's some value... But in reality, it's not for robots.

HATEOAS solves a problem that doesn't exist in practice. Can you imagine an API provider being like, "hey, we can go ahead and change our interface...should be fine as long as our users are using proper clients that automatically discover endpoints and programmatically adapt accordingly"? Or can you imagine an API consumer going, "well, this HTTP request delivers the data we need, but let's make sure not to hit it directly -- instead, let's recursively traverse a graph of requests each time to make sure this is still the way to do it!"

You have got it wrong. Let's say I build some API with different user roles. Some users can delete an object, others can only read it. The UI knows the semantics and logical names of the operations, so when the UI gets the object from the server it can simply check whether certain operations are available, instead of encoding the permission checks on the client side. That is the discoverability. It does not imply generated interfaces; the UI may know something about the data in advance.

This is actually what we do at [DAYJOB] and it's been working well for over 12 years. Like any other kind of interface indirection it adds the overhead of indirection for the benefit of being able to change the producer's side of the implementation without having to change all of the consumers at the same time.

That's actually an interesting take, thank you.

How does the UI check if certain operations are available?

It's literally in the server response:

    {
      ... resource model fields ...
      "_links": {
        "delete": { "href": "." }
      }
    }
In this example you receive the list of permitted operations embedded in the resource model. href="." means you can perform the operation on the resource's self link.
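A minimal sketch of the client-side check, in TypeScript (the type and helper names are mine, not part of any standard):

    interface Resource {
      _links?: Record<string, { href: string }>;
    }

    // Render the delete button only if the server advertised the operation.
    function canDelete(resource: Resource): boolean {
      return resource._links?.["delete"] !== undefined;
    }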

Oh, interesting. So rather than the UI computing which operations should currently be allowed by, say, knowing the user's current role and having rules about the relationship between roles and UI widgets baked in, the UI can decide what to show simply from explicit statements of capability from the server.

I can see some meat on these bones. The counterpoint is that the protocol is now chattier than it would be otherwise... but a full analysis of bandwidth to the client would have to factor in that the alternative is shipping over a whole rules framework and keeping those rules synchronized between the client and server implementations.

I'd suggest that bandwidth optimization should happen only when it becomes critical, by controlling the presence of hypermedia controls via a feature flag or header. That way the frontend becomes simpler, so FE dev speed and quality improve, but the backend becomes more complex. The main problem here is that most backend frameworks support RMM level 2, while hypermedia controls require a different architecture to keep the server code from getting verbose. Unfortunately REST wasn't well understood, so full support for it was never a focus of the open source community.

The OPTIONS method: https://datatracker.ietf.org/doc/html/rfc2616

More links here: https://news.ycombinator.com/item?id=44510745

Or probably just an Allow header on a response to another query (e.g. when fetching an object, the server could respond with Allow: GET, PUT, DELETE if the user has read-write access, and Allow: GET if it's read-only).
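Something like this on the wire (the path is hypothetical):

    GET /orders/42 HTTP/1.1

    HTTP/1.1 200 OK
    Allow: GET, PUT, DELETE
    Content-Type: application/json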

That's a neat idea, actually; I think I'll need to read up on the semantics of Allow again. There is no reason you couldn't just include it with arbitrary responses, no?

It's something else: the list of available actions may include other resources, so you cannot express it with pure HTTP; you need a data model for that (HAL is one possible solution, but there are others).

With HATEOAS you're supposed to return the list of available actions with the representation of your state.

Neo4j's old REST API was really good about that. See e.g. get node: https://neo4j.com/docs/rest-docs/current/#rest-api-get-node

> If it's for robots, then _maybe_ there's some value...

Nah, machine-readable docs beat HATEOAS in basically any application.

The person who created HATEOAS was really not designing an API protocol. It's a general-purpose content delivery platform, and not very useful for software development.

The problems do exist, and they're everywhere. People just invented all sorts of hacks and workarounds for these issues instead of thinking more carefully about them. See my posts in this thread for some examples:

https://news.ycombinator.com/item?id=44509745

LLMs also appear to have an easier time consuming it (not surprisingly).

For most APIs that doesn't deliver any value that can't be gained from API docs, so it's hard to justify. However, these days it could be very useful if you want an AI to be able to navigate your API. But MCP has the spotlight now.

And that's fine, but then you're doing RPC instead of REST and we should all be clear and honest about that.

I think you throw away a useful description of an API by lumping them all under RPC. If you tell me your API is RPC instead of REST then I'll assume that:

* If the API is available over HTTP then the only verb used is POST.

* The API is exposed on a single URL and the `method` is encoded in the body of the request.

It is true that if you say "RPC" I'm more likely to assume gRPC or something like that. If you say "REST", I'm 95% confident it's a standard, familiar OpenAPI-style JSON-over-HTTP API, but I'll reserve a 5% probability that it's actually HATEOAS and I'll have to deal with that. I'd say that if you are doing Roy Fielding-certified REST/HATEOAS, it's non-standard and you should call it out specifically by using the term "HATEOAS" to describe it.

What would it take for you to update your assumptions?

People in the real world referring to "REST" APIs (the kind that use HTTP verbs and have routes like /resource/id) as RPC APIs. As it stands, outside of this thread, nobody does that.

At some level, language is outside of your control as an individual, even if you think it's literally wrong: you sometimes have to choose between being "correct" and communicating clearly.