When I was working on my first HTTP-based API 13 years ago, prompted by the many comments about true REST, I decided to first study what REST should really be. I read Fielding's paper cover to cover, read the RESTful Web Services Cookbook from O'Reilly, and then proceeded to work around Django idioms to provide a REST API. This was a bit of cargo-cult thinking on my end; I didn't truly understand how REST would benefit my service. It took me several more years and several more HTTP APIs to understand that, in the case of these services, there were no benefits.
The vision of an API that is self-discoverable and that works with a generic client is not practical in most cases. I think that perhaps the AWS dashboard, with its multitude of services, has some generic UI code that allows it to handle those services without service-specific logic, but I doubt even that.
Fielding's paper doesn't provide a complete recipe for building self-discoverable APIs. It describes an architecture, but the details of how clients should actually discover the endpoints and determine what those endpoints do are left out of the paper. To make a truly discoverable API you need to specify a protocol for endpoint discovery, operation descriptions, help messages, etc. Then you need clients that understand your specification, so it is not really a generic client. If your service is the only one that implements this client, you have made a lot of extra effort to end up with the same solution that non-REST services implement: a service provides an API and JS code to work with the API (or a command-line client that works with the API), but there is no client code reuse at all.
I also think that good UX is not compatible with REST goals. From a user's perspective, app-specific code can provide a better UX than generic code that can discover endpoints and provide a UI for any app. Of course, UI elements can be standardized and described in some language (remember XUL?), so the UI can adapt to app requirements. But the most flexible way to do such standardization is to provide a language like JavaScript that is responsible for building the UI.
> The vision of an API that is self-discoverable and that works with a generic client is not practical in most cases. [..] Fielding's paper doesn't provide a complete recipe for building self-discoverable APIs. It describes an architecture, but the details of how clients should actually discover the endpoints and determine what those endpoints do are left out of the paper. To make a truly discoverable API you need to specify a protocol for endpoint discovery, operation descriptions, help messages, etc. Then you need clients that understand your specification, so it is not really a generic client
You've said what I think about REST better than I could have put it myself.
A true implementation of a REST client is simply not possible. Any client needs to know what all those URLs are going to do. If you suddenly add a new action (like /cansofspam/123/frobnicate), a client won't know what to do with it. The client will need to be updated to add frobnication functionality, or else it just ignores it. At best, it could present a "Frobnicate" button.
This really is why nobody has implemented a REST server or client that actually conforms to Fielding's paper. It's just not realistic to have a client that can truly self-discover an API without being written to know what APIs to expect.
> A true implementation of a REST client is simply not possible
Sure it is, it's just not very interesting to a programmer. It's the browser. That's why there was no need to talk about client implementations. And why it's hypermedia-driven. It's implicit in the description that it's meant to be discoverable by humans.
Airbnb rediscovered REST when they implemented their Server-Driven UI Platform. Once you strip away all the minutiae about resources and URIs, the fundamental idea of HATEOAS is to ship the whole UI from the server and have the client be generic (the browser). Then you can't have the problem where the frontend gets desynced from the backend.
I think you're right. APIs have a lot of aspects to them, so describing them is hard. API users need to know typical latency bounds, which error codes may be retried, and whether an action is atomic or idempotent. HATEOAS gets you none of these things.
So fully implementing a perfect version of REST is usually not necessary for most types of problems users actually encounter.
What REST has given us is an industry-wide lingua franca. At the basic level, it's a shared understanding of how to map nouns/verbs to HTTP verbs and URLs. Users get to rely on the basic HTTP response codes. There's still a ton of design and subtlety to all this. Do you really get to do things that are technically allowed but might break at a typical load balancer (returning bodies with certain error codes)? Is your 500 retriable in all cases, and with what preferred backoff behavior?
> What REST has given us is an industry-wide lingua franca. At the basic level, it's a shared understanding of how to map nouns/verbs to HTTP verbs and URLs. Users get to rely on the basic HTTP response codes. There's still a ton of design and subtlety to all this. Do you really get to do things that are technically allowed but might break at a typical load balancer (returning bodies with certain error codes)? Is your 500 retriable in all cases, and with what preferred backoff behavior?
What was wrong with mapping all nouns and verbs to POST (maybe sometimes GET), where any HTTP response code other than 200 means your request failed somewhere between the client code and the application server code? HTTP 200 means the application server processed the request, and you can check the payload for an application-level indicator of success, failure, and/or partial success. If you work with enough systems, you end up going back to this, because the least common denominator works everywhere.
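Concretely, the least-common-denominator pattern looks something like this. A sketch in Python with requests, where the endpoint name and payload fields are made up for illustration:

    import requests

    # Everything is a POST to an action-named endpoint (hypothetical URL).
    # Transport problems surface as non-200 codes; the application-level
    # outcome lives in the payload.
    resp = requests.post(
        "https://api.example.com/rpc/delete_user",
        json={"user_id": 123},
        timeout=10,
    )
    resp.raise_for_status()  # non-2xx: failed somewhere before the app code

    result = resp.json()
    if result.get("status") == "ok":  # made-up payload contract
        print("deleted")
    else:
        print("application-level failure:", result.get("error"))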
Either way, anything that isn't ***** SOAP is a good start.
> API users need to know typical latency bounds, which error codes may be retried, and whether an action is atomic or idempotent. HATEOAS gets you none of these things.
Those things aren't always necessary. However, API users always need to know which endpoints are available in the current context. This can be done via documentation and client-side business logic implementing it (arguably, more work), or it can be done with HATEOAS (just check whether the server returned the endpoint).
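For example (a sketch only: the bank URL is made up, and the HAL-style "_links" key is one convention, not something HATEOAS mandates):

    import requests

    account = requests.get(
        "https://bank.example.com/accounts/42",
        headers={"Accept": "application/json"},
    ).json()

    # The server might return, say:
    #   {"balance": -25.0,
    #    "_links": {"self":    {"href": "/accounts/42"},
    #               "deposit": {"href": "/accounts/42/deposits"}}}
    # with no "withdraw" link while the account is overdrawn.

    can_withdraw = "withdraw" in account.get("_links", {})
    print("withdraw available:", can_withdraw)

The client never encodes the business rule for when withdrawal is allowed; it just checks whether the server returned the endpoint.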
A retriable HTTP 500 sounds like a design error, when you can use HTTP 503 to explicitly say "try again later, it's temporary".
I think this hits the nail on the head. Complaining that the current understanding of REST isn't exactly the same as the original formulation misses the point: REST now gives people a good idea of what to expect from an exposed interface and how to use it.
It's a complaint very analogous to how object-oriented programming isn't what it was supposed to be, and that only Smalltalk got it right. People now understand what is meant by OOP, even if it's not what the creator of the term envisioned.
Computer Science, and even the world in general, is littered with examples of this process in action. What's important is that there's a general consensus of the current meaning of a word.
Yes, the field is littered with imperfection.
One thing though - if you do take the time to learn the original "perfect" versions of these things, it helps you become a much better system designer. I'm constantly worried about API design because it has such large and hard-to-change consequences.
On the other hand, we as an industry have also succeeded quite a bit! So many of our abstractions work really well.
It's not just the original REST that usually has no benefits. The industry's reinterpreted, weak version of REST also usually has little to no benefit. Who really cares that deleting a resource must necessarily be done with the DELETE HTTP verb rather than simply a POST?
The DELETE verb exists, there's no reason not to use it.
There's a great reason: I'm using HTTP only as a transport layer, not a semantic layer.
There is one reason: DELETE absolutely must be idempotent. If your delete isn't, use POST.
The POST verb exists, there's no reason not to use it to ask a server to delete data.
In fact, there are plenty of reasons not to use DELETE and PUT. Middleboxes managed by incompetent security people block them; they require that developers have a minimum of expertise and not break the idempotency rule; lots of software stacks simply don't support them (yes, those stacks are bad, but that still doesn't change anything); and most of the internet just doesn't use the benefit they provide (because they don't trust the developers behind the server not to break the rules).
And you've just added more work for yourself in interpreting the HTTP verb. You already need to interpret the body of a POST request, so why not put the information that the operation is a delete inside the body?
You have to represent the action somehow, and letting proxies understand a wee bit of what's going on is useful. That's how you can have a proxy that lets your users browse the web but not log in to external sites, and so on.
The browser is "generic code" that provides the UX we use all day, every day.
REST includes allowing code to be part of the response from a server. There are the obvious security issues, but browsers (and the standards) have dealt with a lot of that.
https://ics.uci.edu/~fielding/pubs/dissertation/net_arch_sty...
Personally I never saw "self-discoverable" as a goal, let alone an achievable one, so I think you're overestimating the ambitions of simple client design.
Notably, the term "discoverable" doesn't even appear in TFA.
From the article: 'The phrase “not being driven by hypertext” in Roy Fielding’s criticism refers to the absence of Hypermedia as the Engine of Application State (HATEOAS) in many APIs that claim to be RESTful. HATEOAS is a fundamental principle of REST, requiring that the client dynamically discover actions and interactions through hypermedia links embedded in server responses, rather than relying on out-of-band knowledge (e.g., API documentation).'
Fielding's idea of REST does seem pretty pointless. "Did you know that human-facing websites are made out of hyperlinked pages? This is so crazy that it needs its own name for everyone to parrot!" But a web application isn't going to be doing much beyond basic CRUD when every individual change in state is supposed to be human-driven. And if it's not human-driven, then it's protocol-driven, and therefore not REST.
> To make a truly discoverable API you need to specify a protocol for endpoint discovery, operation descriptions, help messages, etc. Then you need clients that understand your specification, so it is not really a generic client.
Generic clients just need to understand hypermedia, and then they can discover your API, as long as your API returns hypermedia from its starting endpoint and all other endpoints are transitively linked from that starting point.
Let me ask you this: if I gave you an object X in your favourite OO language, could you use your language's reflection capabilities to discover all properties of every object transitively reachable from X, and every method that could be called on X and on all objects transitively reachable from X? Could you not even invoke many of those methods, assuming the parameter types are mostly standardized objects or have constructors that accept standardized objects?
This is what discoverability via HATEOAS is. True REST can be seen as exporting an object model with reflection capabilities. Clients that are familiar with your API use hypermedia to access known/named properties and methods, and generic clients can use reflection to do the same.
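Here's roughly what that reflection looks like as code. A sketch that assumes HAL-style JSON with a "_links" map and a made-up entry URL; any declared hypermedia format would do:

    from urllib.parse import urljoin
    import requests

    def discover(entry_url, max_pages=100):
        # Generic "reflection" over a hypermedia API: start at one entry
        # point and transitively follow every advertised link, with zero
        # service-specific knowledge.
        seen, queue = set(), [entry_url]
        while queue and len(seen) < max_pages:
            url = queue.pop()
            if url in seen:
                continue
            seen.add(url)
            doc = requests.get(url, headers={"Accept": "application/json"}).json()
            links = doc.get("_links", {}) if isinstance(doc, dict) else {}
            for link in links.values():
                queue.append(urljoin(url, link["href"]))
        return seen

    print(discover("https://api.example.com/"))  # hypothetical entry point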
> Let me ask you this: if I gave you an object X in your favourite OO language, could you use your language's reflection capabilities to discover all properties of every object transitively reachable from X, and every method that could be called on X and on all objects transitively reachable from X? Could you not even invoke many of those methods, assuming the parameter types are mostly standardized objects or have constructors that accept standardized objects?
Sure, this can be done, but I can't see how to build a useful generic app that interacts with objects automatically by discovering their methods and calling them with discovered parameters. For things like a debugger, a REPL, or some database inspection/manipulation tool, this approach is useful, but for most apps exposed to end users, the UI needs to be aware of what the available methods do and needs to be intentionally designed to provide intuitive ways of calling them.
> For things like a debugger, a REPL, or some database inspection/manipulation tool, this approach is useful, but for most apps exposed to end users
Yes, exactly, but the point is that something like Swagger becomes completely trivial, and so you no longer need a separate, complex tool to do what the web automatically gives you.
The additional benefits are on the server end, in terms of maintenance and service flexibility. For instance, you can now replace and transition any endpoint URL (except the entry endpoint) at any time without disrupting clients, since clients no longer depend on specific URL formats (URLs are meaningful only to the server) but only on the hypermedia that provides the endpoints they should be using. This is Wheeler's aphorism at work: hypermedia adds one level of indirection to an API, which buys all sorts of flexibility.
For example, you could have a set of servers implementing an application function, each designated by a different URL, and serve the URL for each server in the hypermedia using any policy that makes sense, effectively making an application-specific load balancer. We worked around scaling issues over the years by adding SNI to TLS and creating dedicated load balancers, but Fielding's REST gave us everything we needed long before! And it's more flexible than SNI, because these servers don't even have to be physically located behind a load balancer.
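A sketch of the idea, server-side, with made-up URLs: the entry document hands out whichever backend the policy picks, and clients follow it blindly:

    import itertools

    # Hypothetical pool of servers all implementing the "orders" function.
    ORDER_BACKENDS = itertools.cycle([
        "https://orders-1.example.com",
        "https://orders-2.example.com",
    ])

    def entry_document():
        # Round-robin here, but any policy works: the hypermedia itself
        # becomes an application-level load balancer, and clients never
        # hard-code the backend URL.
        return {
            "_links": {
                "self":   {"href": "https://api.example.com/"},
                "orders": {"href": next(ORDER_BACKENDS) + "/orders"},
            }
        }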
There are many ideas in the REST paper that are super useful, but the goal of making a generic client that works with any API is difficult, if not impossible, to achieve.
Was the client of the service that you worked on fully generic and application-independent? It is one thing to be able to change URLs only on the server without requiring a client code change; that flexibility is indeed a practical benefit the REST architecture gives us. It is another thing to change, say, a calendar application into a messaging application just by returning a different entry-point URL to the same generic client code. That goal is something the REST architecture tried to address, but IMO it was not realized in practice.
> There are many ideas in the REST paper that are super useful, but the goal of making a generic client that works with any API is difficult, if not impossible, to achieve.
It's definitely possible to achieve: anywhere that data is missing you present an input prompt, which is exactly what a web browser does.
That said, the set of autonomous programs that can do something useful without knowing what they're doing is of course more limited. These are generic programs like search engines and AI training bots that crawl and index information.
> It is another thing to change, say, a calendar application into a messaging application just by returning a different entry-point URL to the same generic client code.
Web browsers do exactly this!
> Web browsers do exactly this!
Browsers provide a generic execution environment, but the client code (JavaScript/HTML/CSS) is not generic. The calendar application and messaging application entry points serve application-specific code implementing calendar or messaging functions. I don't think this is what was proposed in the REST paper; otherwise we wouldn't have articles like 'Most RESTful APIs aren't really RESTful'.
> but the client code (JavaScript/HTML/CSS) is not generic
The HTML/hypermedia returned is never generic, that's why HATEOAS works at all and is so flexible.
The "client" JS code is provided by the server, so it's not really client-specific (the client being the web browser here--maybe should call it "agent"). Regardless, sending JS is an optimization, calendars and messaging are possible using hypermedia alone, and proves the point that the web browser is a generic hypermedia agent that changes behaviour based on hypermedia that's dictated solely by the URL.
You can start programming any app as a plain hypermedia version and then add JS to make the user experience better, which is the approach that htmx is reviving.
What I don't get from this and some other comments in this thread: the argument seems to be that REST is practical, every web page is actually a REST app, it has one entry point, all the actions are discoverable by the user from that entry point, and application-specific JavaScript code is allowed by the REST architecture. But then why are there so many articles and posts (including by Fielding) complaining that people claim to be doing REST but are actually not doing it?
In all these discussions, I haven't seen an article that actually shows an example of a successful application that does REST properly, all elements of it.
While I haven't looked too deeply, I think HN might be an example that follows REST. At least I don't see anything in the functionality that couldn't easily be fulfilled by following REST with no change in outward behaviour. A light sprinkle of JS to avoid some page reloads, and that's it.
I agree that not many frameworks encourage "true" REST design, but I don't think it's too hard to get the hang of it. Try out htmx on a toy project, restrict yourself to literally no JS of your own and no session state, and have every UI-focused endpoint of your favoured server-side framework return HTML.
> Generic clients just need to understand hypermedia
Yikes. Nobody wants to implement a browser to create a UI for ordering meals from a restaurant. I'm pretty sure the reason we ended up settling on just tossing JSON blobs around and baking the semantics of them into the client is that we don't want the behavior of the application to get tripped up on whether someone failed to close a <b> tag.
(Besides: practically, for a web-served interface, the client may as well carry semantic understanding because the client came from the server).
> Yikes. Nobody wants to implement a browser to create a UI for ordering meals from a restaurant.
You don't need a full web browser. Fielding published his thesis in 2000; browsers were almost trivial then, and the needs for programming are even more trivial: you can basically skip any HTML that isn't a link tag or form data for most purposes.
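To make that concrete, here's a sketch using only Python's standard library that keeps just the links and forms and throws the rest of the page away:

    from html.parser import HTMLParser

    class LinksAndForms(HTMLParser):
        # Collect only the hypermedia controls: <a href> and <form>.
        # That's essentially all a machine client needs from a page.
        def __init__(self):
            super().__init__()
            self.links, self.forms = [], []

        def handle_starttag(self, tag, attrs):
            attrs = dict(attrs)
            if tag == "a" and "href" in attrs:
                self.links.append(attrs["href"])
            elif tag == "form":
                self.forms.append((attrs.get("method", "get"),
                                   attrs.get("action", "")))

    p = LinksAndForms()
    p.feed('<a href="/users">Users</a>'
           '<form method="post" action="/users"></form>')
    print(p.links, p.forms)  # ['/users'] [('post', '/users')]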
> baking the semantics of them into the client is that we don't want the behavior of the application to get tripped up on whether someone failed to close a <b> tag.
This is such a non-issue. Why aren't you worried about badly formatted JSON? Because we have well-tested JSON parsers. In a world where people understood the value of hypermedia as an interchange format, we'd be in exactly the same position.
And to be clear, if JSON had links as a first class type rather than just strings, then that would qualify as a hypermedia format too.
If I'm going to do HTML that isn't HTML, then I might as well not do HTML; there are a lot of sharp edges in that particular markup that I'd prefer to avoid.
> Why aren't you worried about badly formatted JSON?
Because the JSON spec is much smaller than the HTML spec, it is much easier for a parser to prevalidate and reject invalid JSON.
Maybe I need to reread the paper and substitute "a good hypermedia language" for HTML conceptually, see if it makes more sense to me.
Fielding's thesis barely mentions HTML (20 times), and usually in the context of discussing standards or why JS beat Java applets, but it discusses hypermedia quite a bit (over 70 times).
If you extended JSON so that URLs (or URIs) were first-class, with some hypothetical syntax like:
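    {
      "name": "Bob",
      "manager": @"https://api.example.com/users/7",
      "friends": @"https://api.example.com/users/1/friends"
    }

(the @"..." marker here is invented for illustration; the point is just that a link becomes a distinct value type rather than a plain string)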
it would form a viable hypermedia format, because then you can reliably distinguish references from other kinds of data. I think the only reason something like this wasn't done is that Crockford wanted JSON to be easily parsable by existing JS interpreters. You can work around this with JSON Schema to some extent, where the schema identifies which strings are URLs, but that's just way more cumbersome than the distinction being made right in the format.
> Fielding's paper doesn't provide a complete recipe for building self-discoverable APIs.
But it does, though. An HTTP server returns an HTTP response to a request from a browser. The response is an HTML webpage that is rendered for the user, with all discoverable APIs visible as clickable links. Welcome to the World Wide Web.
You describe how web pages work; web pages are intended for human interaction, while APIs are intended for machine interaction. How can a generic Python or JavaScript client discover these APIs? Such clients will request the JSON representation of a resource, because JSON is intended for machine consumption and HTML for humans. Representations are supposed to be equivalent: if you request the JSON representation of a /users resource, you get a JSON list; if you request the HTML representation, you get an HTML list, but the content should be the same. Should you return UI controls for modifying the list as part of the HTML representation? If you do, your JSON and HTML representations differ, and your Python and JavaScript clients still cannot discover what list-modification operations are possible; only a human can, by looking at the HTML representation. This is not REST, if I understand the paper correctly.
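To illustrate the gap (a sketch; the /users URL is hypothetical):

    import requests

    # One resource, two representations negotiated via the Accept header:
    users_json = requests.get("https://api.example.com/users",
                              headers={"Accept": "application/json"})
    users_html = requests.get("https://api.example.com/users",
                              headers={"Accept": "text/html"})

    # Both should describe the same list, but only the HTML version can
    # also carry the forms and links that advertise which operations are
    # possible, and only a human (via a browser) can act on those.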
> You describe how web pages work; web pages are intended for human interaction
Exactly, yes! The first few sentences from Wikipedia...
"REST (Representational State Transfer) is a software architectural style that was created to describe the design and guide the development of the architecture for the World Wide Web. REST defines a set of constraints for how the architecture of a distributed, Internet-scale hypermedia system, such as the Web, should behave." -- [1]
If you are designing a system for the Web, use REST. If you are designing a system where a native app (that you create) talks to a set of services on a back end (that you also create), then why conform to REST principles?
[1] - https://en.wikipedia.org/wiki/REST
Most web apps today use APIs that return JSON and are called from JavaScript. Can you use REST for such services, or does REST require a switch to server-rendered HTML representations where each interaction returns a new HTML page? How can such an HTML representation even use the PUT and DELETE verbs, given that those are available only to JavaScript code? What if I design a system where API calls can be made both from the web and from a command-line client or a library? Should I use two different architectures to cover both use cases?