> To make a truly discoverable API you need to specify a protocol for endpoint discovery, operation descriptions, help messages, etc. Then you need clients that understand your specification, so it is not really a generic client.

Generic clients just need to understand hypermedia and they can discover your API, as long as your API returns hypermedia from its starting endpoint and all other endpoints are transitively linked from that start point.
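
As a rough sketch of what such a generic client can do with nothing but Python's standard library (the entry URL is hypothetical, and real hypermedia formats carry more than `<a>` tags):

    # Sketch of a generic hypermedia client: enumerate every endpoint
    # transitively linked from a single entry point.
    from html.parser import HTMLParser
    from urllib.parse import urljoin
    from urllib.request import urlopen

    class LinkExtractor(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                href = dict(attrs).get("href")
                if href:
                    self.links.append(href)

    def discover(entry_url):
        seen, queue = set(), [entry_url]
        while queue:
            url = queue.pop()
            if url in seen:
                continue
            seen.add(url)
            parser = LinkExtractor()
            parser.feed(urlopen(url).read().decode("utf-8", "replace"))
            queue.extend(urljoin(url, href) for href in parser.links)
        return seen

    print(discover("https://api.example.com/"))  # hypothetical entry point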

Let me ask you this: if I gave you an object X in your favourite OO language, could you use your language's reflection capabilities to discover all properties of every object transitively reachable from X, and every method that could be called on X and all objects transitively reachable from X? Could you not even invoke many of those methods, assuming the parameter types are mostly standardized objects or have constructors that accept standardized objects?

This is what discoverability via HATEOAS is. True REST can be seen as exporting an object model with reflection capabilities. Clients that are familiar with your API use hypermedia to access known/named properties and methods, and generic clients can use reflection to do the same.
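
To make the analogy concrete, here is a minimal sketch of that reflective walk in Python (print statements stand in for whatever a generic client would do with the discovered surface):

    # Sketch: enumerate the properties and callable methods of every
    # object transitively reachable from x, the reflective analogue
    # of following hypermedia links from an entry point.
    import inspect

    def discover(x, seen=None):
        seen = set() if seen is None else seen
        if id(x) in seen:
            return
        seen.add(id(x))
        for name, value in inspect.getmembers(x):
            if name.startswith("_"):
                continue
            if callable(value):
                try:
                    sig = str(inspect.signature(value))
                except (ValueError, TypeError):
                    sig = "(...)"  # some builtins expose no signature
                print(f"method:   {type(x).__name__}.{name}{sig}")
            else:
                print(f"property: {type(x).__name__}.{name}")
                discover(value, seen)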

> Let me ask you this: if I gave you an object X in your favourite OO language, could you use your language's reflection capabilities to discover all properties of every object transitively reachable from X, and every method that could be called on X and all objects transitively reachable from X? Could you not even invoke many of those methods, assuming the parameter types are mostly standardized objects or have constructors that accept standardized objects?

Sure, this can be done, but I can't see how to build a useful generic app that interacts with objects automatically by discovering their methods and calling them with discovered parameters. For things like a debugger, a REPL, or a database inspection/manipulation tool, this approach is useful, but for most apps exposed to end users, the UI needs to be aware of what the available methods do and needs to be intentionally designed to provide intuitive ways of calling them.

> For things like a debugger, a REPL, or a database inspection/manipulation tool, this approach is useful, but for most apps exposed to end users

Yes, exactly, but the point is that something like Swagger becomes completely trivial, and so you no longer need a separate, complex tool to do what the web automatically gives you.

The additional benefits are on the server end, in terms of maintenance and service flexibility. For instance, you can now replace and transition any endpoint URL (except the entry endpoint) at any time without disrupting clients, because clients no longer depend on specific URL formats (URLs are meaningful only to the server) but only on the hypermedia that provides the endpoints they should be using. This is Wheeler's aphorism: hypermedia adds one level of indirection to an API, which adds all sorts of flexibility.
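
Concretely, a client written this way hard-codes only the entry URL and the link relation names; every actual URL is looked up at run time. A sketch, where get_hypermedia() is a hypothetical stand-in for an HTTP GET that parses the response into named links:

    # Sketch: the client depends only on link relations found in the
    # hypermedia, never on URL structure. ENTRY is the only fixed URL;
    # get_hypermedia() is a hypothetical helper, and "orders" is a
    # hypothetical link relation name.
    ENTRY = "https://api.example.com/"  # hypothetical entry endpoint

    def follow(doc, rel):
        # Re-read the target URL from the hypermedia on each traversal;
        # the server may move this endpoint at any time without breaking us.
        return get_hypermedia(doc["links"][rel])

    orders = follow(get_hypermedia(ENTRY), "orders")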

For example, you could have a set of servers implementing an application function, each designated by a different URL, and serve the URL for each server in the hypermedia using any policy that makes sense, effectively creating an application-specific load balancer. We worked around scaling issues over the years by adding SNI to TLS and creating dedicated load balancers, but Fielding's REST gave us everything we needed long before! And it's more flexible than SNI, because these servers don't even have to be physically located behind a load balancer.
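
A sketch of that idea, assuming a hypothetical pool of application servers and a round-robin policy:

    # Sketch: the entry endpoint doubles as an application-specific
    # load balancer by serving a different backend URL in the
    # hypermedia on each request.
    import itertools
    from http.server import BaseHTTPRequestHandler, HTTPServer

    POOL = itertools.cycle([
        "https://app1.example.com/orders",  # hypothetical backends
        "https://app2.example.com/orders",
    ])

    class Entry(BaseHTTPRequestHandler):
        def do_GET(self):
            body = f'<a href="{next(POOL)}" rel="orders">orders</a>'
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(body.encode())

    HTTPServer(("", 8080), Entry).serve_forever()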

There are many ideas in the REST paper that are super useful, but the goal of making a generic client that works with any API is difficult, if not impossible, to achieve.

Was the client of the service that you worked on fully generic and application-independent? It is one thing to be able to change URLs only on the server, without requiring a client code change; such flexibility is indeed a practical benefit that the REST architecture gives us. It is another thing to change, say, a calendar application into a messaging application just by returning a different entry point URL to the same generic client code. This is a goal the REST architecture tried to address, but IMO it was not realized in practice.

> There are many ideas in the REST paper that are super useful, but the goal of making a generic client that works with any API is difficult, if not impossible, to achieve.

It's definitely possible to achieve: anywhere that data is missing you present an input prompt, which is exactly what a web browser does.
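
A sketch of what "present an input prompt" means for a generic client: the field names come from the hypermedia itself, so the client needs no knowledge of what they mean:

    # Sketch: fill any form by prompting the user for each declared
    # input, exactly as a browser renders a form it has never seen.
    from urllib.parse import urlencode
    from urllib.request import urlopen

    def submit_form(action_url, field_names):
        data = {name: input(f"{name}: ") for name in field_names}
        return urlopen(action_url, data=urlencode(data).encode())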

That said, the set of autonomous programs that can do something useful without knowing what they're doing is of course more limited. These are generic programs like search engines and AI training bots that crawl and index information.

> It is another thing to change, say, a calendar application into a messaging application just by returning a different entry point URL to the same generic client code.

Web browsers do exactly this!

> Web browsers do exactly this!

Browsers provide a generic execution environment, but the client code (JavaScript/HTML/CSS) is not generic. Calendar and messaging application entry points serve application-specific code implementing the calendar or messaging functions. I don't think this is what was proposed in the REST paper; otherwise we wouldn't have articles like 'Most RESTful APIs aren't really RESTful'.

> but the client code (JavaScript/HTML/CSS) is not generic

The HTML/hypermedia returned is never generic; that's why HATEOAS works at all and is so flexible.

The "client" JS code is provided by the server, so it's not really client-specific (the client being the web browser here--maybe should call it "agent"). Regardless, sending JS is an optimization, calendars and messaging are possible using hypermedia alone, and proves the point that the web browser is a generic hypermedia agent that changes behaviour based on hypermedia that's dictated solely by the URL.

You can start programming any app as a plain hypermedia version and then add JS to make the user experience better, which is the approach that htmx is reviving.

What I don't get from this and some other comments in this thread is that the argument seems to be: REST is practical, every web page is actually a REST app, it has one entry point, all the actions are discoverable by the user from this entry point, and application-specific JavaScript code is allowed by the REST architecture. But then why are there so many articles and posts (including by Fielding) complaining that people claim to be doing REST but are actually not doing it?

In all these discussions, I haven't seen an article that actually shows an example of a successful application that does REST properly, with all of its elements.

While I haven't looked too deeply, I think HN might be an example that follows REST. At least I don't see anything in the functionality that wouldn't be easily fulfilled by following REST with no change in outward behaviour. A light sprinkle of JS to avoid some page reloads, and that's it.

I agree that not many frameworks encourage "true" REST design, but I don't think it's too hard to get the hang of it. Try out htmx on a toy project and restrict yourself to using literally no JS and no session state, with every UI-focused endpoint of your favoured server-side framework returning HTML.

> Generic clients just need to understand hypermedia

Yikes. Nobody wants to implement a browser to create a UI for ordering meals from a restaurant. I'm pretty sure the reason we ended up settling on just tossing JSON blobs around and baking their semantics into the client is that we don't want the behavior of the application to get tripped up on whether someone failed to close a <b> tag.

(Besides: practically, for a web-served interface, the client may as well carry semantic understanding because the client came from the server).

> Yikes. Nobody wants to implement a browser to create a UI for ordering meals from a restaurant.

You don't need a full web browser. Fielding published his thesis in 2000, when browsers were almost trivial, and the needs of a programmatic client are even more trivial: for most purposes you can basically skip any HTML that isn't a link tag or form data.
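
As a sketch of how little is actually needed, the whole "browser" for a machine client can reduce to noticing links and forms and ignoring everything else:

    # Sketch: a hypermedia "browser" sufficient for machine clients
    # only records links and form fields; all other markup is noise.
    from html.parser import HTMLParser

    class Hypermedia(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links, self.forms = [], []

        def handle_starttag(self, tag, attrs):
            a = dict(attrs)
            if tag == "a" and "href" in a:
                self.links.append(a["href"])
            elif tag == "form":
                self.forms.append({"action": a.get("action"), "fields": []})
            elif tag == "input" and self.forms:
                self.forms[-1]["fields"].append(a.get("name"))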

> baking their semantics into the client is that we don't want the behavior of the application to get tripped up on whether someone failed to close a <b> tag.

This is such a non-issue. Why aren't you worried about badly formatted JSON? Because we have well-tested JSON parsers. In a world where people understood the value of hypermedia as an interchange format, we'd be in exactly the same position.

And to be clear, if JSON had links as a first-class type rather than just strings, then it would qualify as a hypermedia format too.

If I'm going to do HTML that isn't HTML, then I might as well not do HTML; there are a lot of sharp edges in that particular markup that I'd prefer to avoid.

> Why aren't you worried about badly formatted JSON?

Because the JSON spec is much smaller than the HTML spec, so it is much easier for a parser to prevalidate and reject invalid JSON.

Maybe I need to reread the paper and substitute "a good hypermedia language" for HTML conceptually, to see if it makes more sense to me.

Fielding's thesis barely mentions HTML (20 times), and usually in the context of discussing standards or why JS beat Java applets, but he discusses hypermedia quite a bit (over 70 times).

If you extended JSON so that URLs (or URIs) were first-class, something like:

    url ::= "<" scheme ":" ["//" authority] path ["?" query] ["#" fragment] ">"

it would form a viable hypermedia format, because then you can reliably distinguish references from other forms of data. I think the only reason something like this wasn't done is that Crockford wanted JSON to be easily parsable by existing JS interpreters.
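
One way to bolt that onto a stock JSON parser, as a sketch (the "@link" tag and Link type are made up for illustration, and the naive rewrite doesn't guard against "<...>" appearing inside string literals):

    # Sketch: rewrite <...> URL literals into tagged objects, parse
    # with the standard library, then surface them as a distinct type
    # so references are never confused with plain strings.
    import json, re
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Link:
        href: str  # a first-class reference, not just a string

    URL = re.compile(r'<([a-z][a-z0-9+.-]*:[^<>"]*)>')

    def loads(text):
        tagged = URL.sub(lambda m: json.dumps({"@link": m.group(1)}), text)
        return json.loads(
            tagged,
            object_hook=lambda d: Link(d["@link"]) if set(d) == {"@link"} else d,
        )

    doc = loads('{"name": "order 7", "self": <https://api.example.com/orders/7>}')
    assert isinstance(doc["self"], Link)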

You can work around this to some extent with JSON Schema, where the schema identifies which strings are URLs, but that's far more cumbersome than the distinction being made right in the format.
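
For example, a hypothetical schema fragment using the standard "uri" format annotation (shown here as a Python dict):

    # Sketch: JSON Schema keeps the URL-ness of a string out of band.
    # A generic client must cross-reference every string against the
    # schema, instead of seeing the distinction in the data itself.
    schema = {
        "type": "object",
        "properties": {
            "self": {"type": "string", "format": "uri"},  # a reference
            "name": {"type": "string"},                   # plain data
        },
    }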