I sympathize with the pedantry here and found Fielding's paper to be interesting, but this is a lost battle. When I see "REST API" I can safely assume the following:
- The API returns JSON
- CRUD actions are mapped to POST/GET/PUT/DELETE
- The team constantly bikesheds over correct status codes and at least a few are used contrary to the HTTP spec
- There's a decent chance listing endpoints were changed to POST to support complex filters
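In code, that commonly-understood contract usually reduces to something like this toy sketch (the /widgets resource, the in-memory store, and the id scheme are all made up for illustration, not from any real framework):

```python
# De-facto "REST API" conventions as a toy in-memory handler:
# POST creates, GET reads, PUT replaces, DELETE removes.
store = {}

def handle(method, path, body=None):
    parts = path.strip("/").split("/")
    if method == "POST" and len(parts) == 1:    # create: POST /widgets
        new_id = str(len(store) + 1)
        store[new_id] = body
        return 201, {"id": new_id, **body}
    if method == "GET" and len(parts) == 2:     # read: GET /widgets/1
        item = store.get(parts[1])
        return (200, item) if item is not None else (404, None)
    if method == "PUT" and len(parts) == 2:     # full update: PUT /widgets/1
        store[parts[1]] = body
        return 200, body
    if method == "DELETE" and len(parts) == 2:  # delete: DELETE /widgets/1
        return (204, None) if store.pop(parts[1], None) is not None else (404, None)
    return 405, None
```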
Like Agile, CI or DevOps you can insist on the original definition or submit to the semantic diffusion and use the terms as they are commonly understood.
Fielding won the war precisely because he was intellectually incoherent and mostly wrong. It's the "worse is better" of the 21st century.
RPC systems were notoriously unergonomic and at best marginally successful. See Sun RPC, RMI, DCOM, CORBA, XML-RPC, SOAP, Protocol Buffers, etc.
People say it is not RPC, but all the time we write some JavaScript function that makes an HTTP call, and on the backend we have a plain function with some annotation that explains how to map the URL to it. So it is RPC, but instead of a highly complex system that is intellectually coherent but awkward and makes developers puke, we have a system that's more manual than it could be but has a lot of slack and leaves developers feeling like they're in control. 80% of what's wrong with it is that people won't just use ISO 8601 dates.

I remember getting my hands on a CORBA specification back as a wide-eyed teen, thinking there was this magical world of programming purity somewhere: all 1200 pages of it, IIRC (not sure what version).
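The pattern can be sketched with a toy route decorator standing in for a web framework's annotation (all names here are hypothetical):

```python
# Toy registry mapping (method, URL template) -> plain backend function,
# mimicking the "annotation" a web framework provides.
ROUTES = {}

def route(method, path):
    def register(fn):
        ROUTES[(method, path)] = fn
        return fn
    return register

@route("GET", "/items/<id>")
def get_item(id):
    # An ordinary function; only the decorator ties it to a URL.
    return {"id": id, "name": "example"}

def dispatch(method, path, **params):
    # Stand-in for the framework's URL matching and parameter binding.
    return ROUTES[(method, path)](**params)
```

Squint and it's a remote function call; the HTTP shape is just transport dressing.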
And then you don't really need most of it, and one thing you need is so utterly complicated, that it is stupid (no RoI) to even bother being compliant.
And truly, less is more.
What RPC mechanisms, in your opinion, are the most ergonomic and why?
(I have been offering REST’ish and gRPC in software I write for many years now. With the REST’ish api generated from the gRPC APIs. I’m leaning towards dropping REST and only offering gRPC. Mostly because the generated clients are so ugly)
Just use gRPC or ConnectRPC (which is basically gRPC but over regular HTTP). It's simple and rigid.
REST is just too "floppy", there are too many ways to do things. You can transfer data as a part of the path, as query parameters, as POST fields (in multiple encodings!), as multipart forms, as streaming data, etc.
People get stuff done despite all that.
I'd agree with your great-grandparent post... people get stuff done because of that.
There has been no lack of heavyweight, pre-declare everything, code-generating, highly structured, prescriptive standards that sloppyREST has casually dispatched (pun fully intended) in the real world. After some 30+ years of highly prescriptive RPC mechanisms, at some point it becomes time to stop waiting for those things to unseat "sloppy" mechanisms and it's time to simply take it as a brute fact and start examining why that's the case.
Fortunately, in 2025, if you have a use case for such a system, and there are many many such valid use cases, you have a number of solid options to choose from. Fortunately sloppyREST hasn't truly killed them. But the fact that it empirically dominates it in the wild even so is now a fact older than many people reading this, and bears examination in that light rather than casual dismissals. It's easy to list the negatives, but there must be some positives that make it so popular with so many.
I mean... I used to get stuff done with CORBA and DCOM.
It's the question of long-term consequences for supportability and product evolution. Will the next person supporting the API know all the hidden gotchas?
I'm not super familiar with SOAP and CORBA, but how is SOAP any more coherent than a "RESTful" API? It's basically just a bag of messages. I guess it involves a schema, but that's not more coherent imo, since you just end up with specifics for every endpoint anyways.
CORBA is less "incoherent", but I'm not sure that's actually helpful, since it's still a huge mess. You can most likely become a lot more proficient with RESTful APIs and be more productive with them, much faster than you could with CORBA. Even if CORBA is extremely well specified, and "RESTful" is based more on vibes than anything specific.
Though to be clear I'm talking about the current definition of REST APIs, not the original, which I think wasn't super useful.
SOAP, CORBA and such have a theory for everything (say, authentication). It's hard to learn that theory, you have to learn a lot of it to be able to accomplish anything at all, and you have to deal with build and tooling issues, but if you look closely there will be all sorts of WTFs. Developers of standards like that are always implementing things like distributed garbage collection and distributed transactions, which are invariably problematic.
Circa 2006 I was working on a site that needed to calculate sales tax and we were looking for an API that could help with that. One vendor uses SOAP which would have worked if we were running ASP.NET but we were running PHP. In two days I figured out enough to reverse engineer the authentication system (docs weren't quite enough to make something that worked) but then I had more problems to debug. A competitive vendor used a much simpler system and we had it working in 45 min -- auth is always a chokepoint because if you can't get it working 100% you get 0% of the functionality.
HTTP never had an official authentication story that made sense. According to the docs there are Basic, Digest, etc. Have you ever seen a site that uses them? The world quietly adopted cookie-based auth that was an ad-hoc version of JSON Web Tokens. Once we got an intellectually coherent spec, snake oil vendors could spam HN with posts about how bad JWT is because... it had a name and numerous specifics to complain about.
Look at various modern HTTP APIs and you see auth is all across the board. There was the time I did a "shootout" of roughly 10 visual recognition APIs, I got all of them working in 20-30 mins except for Google where I had to install a lot of software on my machine, trashed my Python, and struggled mightily because... they had a complex theory of authentication which was a barrier to doing anything at all.
Worse is better.
Amen. Particularly ISO8601.
Always thought that a standard like ISO 8601 which always stores the date and time in UTC but appends the local time zone would be beneficial.
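Plain ISO 8601 / RFC 3339 timestamps can already carry both pieces: an unambiguous instant plus the local offset it was recorded in (though note an offset is not a full named time zone, which is part of why this is thornier than it looks). A sketch:

```python
from datetime import datetime, timezone, timedelta

# Hypothetical local zone (UTC-5); the offset travels with the timestamp,
# so the UTC instant is always recoverable from the same string.
local = timezone(timedelta(hours=-5))
t = datetime(2025, 7, 9, 14, 30, tzinfo=local)

stamp = t.isoformat()                           # keeps the local offset
utc = t.astimezone(timezone.utc).isoformat()    # same instant, normalized
```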
I mean, HTTP is an RPC protocol. It has methods and arguments and return types.
What I object to about eg xml-rpc is that it layers a second RPC protocol over HTTP so now I have two of them...
> I sympathize with the pedantry here and found Fielding's paper to be interesting, but this is a lost battle.
Why do people feel compelled to even consider it to be a battle?
As I see it, the REST concept is useful, but the HATEOAS detail ends up having no practical value and creates more problems than the ones it solves. This is in line with the Richardson maturity model[1], where the apex of REST includes all the HATEOAS bells and whistles.
Should REST without HATEOAS classify as REST? Why not? I mean, what is the strong argument to differentiate an architectural style that meets all but one requirement? And is there a point to this nitpicking if HATEOAS is practically irrelevant and the bulk of RESTful APIs do not implement it? What's the value in it? Is there any value in citing theses as if they were Monty Python skits?
[1] https://en.wikipedia.org/wiki/Richardson_Maturity_Model
For me the battle is with people who want to waste time bikeshedding over the definition of "REST" and whether the APIs are "RESTful", with no practical advantages, and then having to steer the conversation--and their motivation--towards more useful things without alienating them. It's tiresome.
It was buried towards the bottom of the article, but the reason, to me:
Clients can be almost automatic with a HATEOAS implementation, because it is a self-describing protocol.
Of course, OpenAPI (and perhaps to some extent now AI) also means that clients don't need to be written; they are just generated.
However it is important perhaps to remember the context here: SOAP is and was terrible, but for enterprises that needed a complex and robust RPC system, it was beginning to gain traction. HATEOAS is a much more general yet simple and comprehensive system in comparison.
Of course, you don't need any of this. So people built the APIs they did need, which were not RESTful but had an acronym that their bosses thought sounded better than SOAP, and the rest is history.
> Clients can be almost automatic with a HATEOAS implementation, because it is a self-describing protocol.
That was the theory, but it was never true in practice.
The oft comparisons to the browser really missed the mark. The browser was driven by advanced AI wetware.
Given the advancements in LLMs, it's not even clear that RESTish interfaces would be easier for them to consume (say vs. gRPC, etc.)
Then let developer-Darwin win and fire those people. Let the natural selection of the hiring process win against pedantic assholes. The days are too short to argue over issues that are not really issues.
>the HATEOAS detail ends up having no practical value and creates more problems than the ones it solves.
Many server-rendered websites support REST by design: a web page with links and forms is the state transferred to client. Even in SPAs, HATEOAS APIs are great for shifting business logic and security to server, where it belongs. I have built plenty of them, it does require certain mindset, but it does make many things easier. What problems are you talking about?
complexity
"Backend-only" and "verbose" would be a more correct description.
Defining media types seems right to me, but what ends up happening is that you use swagger instead to define APIs and out the window goes HATEOAS, and part of the reason for this is just that defining media types is not something people do (though they should).
Basically: define a schema for your JSON, use an obvious CRUD mapping to HTTP verbs for all actions, use URI local-parts embedded in the JSON, use standard HTTP status codes, and embed more error detail in the JSON.
> (...) and part of the reason for this is just that defining media types is not something people do (...)
People do not define media types because it's useless and serves no purpose. They define endpoints that return specific resource types, and clients send requests to those endpoints expecting those resource types. When a breaking change is introduced, backend developers simply provide a new version of the API where a new endpoint is added to serve the new resource.
In theory, media types would allow the same endpoint to support multiple resource types. Services would send specific resource types to clients if they asked for them by passing the media type in the Accept header. That is all fine and dandy, except this forces endpoints to support an ever more complex content negotiation scheme that no backend framework comes close to supporting, and it brings absolutely no improvement in the way clients are developed.
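For illustration, the negotiation scheme being dismissed looks roughly like this (the vendor media types are made up):

```python
# One endpoint, multiple representations selected via the Accept header.
# Each handler renders the same underlying user record differently.
HANDLERS = {
    "application/vnd.example.user.v1+json": lambda u: {"name": u["name"]},
    "application/vnd.example.user.v2+json": lambda u: {"full_name": u["name"], "id": u["id"]},
}

def negotiate(accept, user):
    handler = HANDLERS.get(accept)
    if handler is None:
        return 406, None        # 406 Not Acceptable: no matching media type
    return 200, handler(user)
```

In practice almost everyone versions the endpoint path instead and skips all of this.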
So why bother?
We should probably stop calling the thing that we call REST, REST and be done with it - it's only tangentially related to what Fielding tried to define.
> We should probably stop calling the thing that we call REST (...)
That solves no problem at all. We have the Richardson maturity model, which provides a crisp definition, and it's ignored. We have the concept of RESTful, which is also ignored. We have RESTless, to contrast with RESTful. Etc etc etc.
None of this discourages nitpickers. They are pedantic in one direction, and so lax in another direction.
Ultimately it's all about nitpicking.
I’m with you. HATEOAS is great when you have two independent (or more) enterprise teams with PMs fighting for budget.
When it’s just yours and your two pizza team, contract-first-design is totally fine. Just make sure you can version your endpoints or feature-flag new API’s so it doesn’t break your older clients.
> but the HATEOAS detail ends up having no practical value and creates more problems than the ones it solves.
Only because we never had the tools and resources that, say, GraphQL has.
And now everyone keeps re-inventing half of HTTP anyway. See this diagram https://raw.githubusercontent.com/for-GET/http-decision-diag... (docs https://github.com/for-GET/http-decision-diagram/tree/master...) and this: https://github.com/for-GET/know-your-http-well
HATEOAS adds lots of practical value if you care about discoverability and longevity.
Discoverability by whom, exactly? Like if it's for developer humans, then good docs are better. If it's for robots, then _maybe_ there's some value... But in reality, it's not for robots.
HATEOAS solves a problem that doesn't exist in practice. Can you imagine an API provider being like, "hey, we can go ahead and change our interface...should be fine as long as our users are using proper clients that automatically discover endpoints and programmatically adapt accordingly"? Or can you imagine an API consumer going, "well, this HTTP request delivers the data we need, but let's make sure not to hit it directly -- instead, let's recursively traverse a graph of requests each time to make sure this is still the way to do it!"
You have got it wrong. Let's say I build some API with different user roles. Some users can delete an object, others can only read it. The UI knows about the semantics of the operations and their logical names, so when the UI gets the object from the server it can simply check if certain operations are available, instead of encoding the permission checking on the client side. This is the discoverability. It does not imply generated interfaces; the UI may know something about the data in advance.
This is actually what we do at [DAYJOB] and it's been working well for over 12 years. Like any other kind of interface indirection it adds the overhead of indirection for the benefit of being able to change the producer's side of the implementation without having to change all of the consumers at the same time.
That's actually an interesting take, thank you.
How does the UI check if certain operations are available?
It's literally in the server response:
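Something like this hypothetical HAL-style body (the field names are illustrative):

```python
# A resource whose permitted operations are embedded as links.
# The UI checks for a link's presence instead of re-implementing
# the server's permission rules.
order = {
    "id": "123",
    "status": "pending",
    "_links": {
        "self":   {"href": "/orders/123"},
        "cancel": {"href": "."},   # href="." : operate on the self link
    },
}

def can(resource, operation):
    return operation in resource.get("_links", {})
```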
In this example you receive a list of permitted operations embedded in the resource model. href="." means you can perform the operation on the resource's self link.

Oh, interesting. So rather than the UI computing what operations should currently be allowed by, say, knowing the user's role and having rules baked into it about the relationship between roles and UI widgets, the UI can compute which controls should be on or off simply from explicit statements of capability from the server.
I can see some meat on these bones. The counterpoint is that the protocol is now chattier than it would be otherwise... but a full analysis of bandwidth to the client would have to factor in that you otherwise have to ship a whole framework to implement those rules and keep them synchronized between the client and server implementations.
I'd suggest that bandwidth optimization should happen when it becomes critical, controlling the presence of hypermedia controls via a feature flag or header. This way the frontend becomes simpler, so FE dev speed and quality improve, but the backend becomes more complex. The main problem here is that most backend frameworks support RMM level 2, while hypermedia controls require a different architecture to make server code less verbose. Unfortunately REST wasn't understood well, so full support for it was never a focus of the open source community.
OPTIONS https://datatracker.ietf.org/doc/html/rfc2616
More links here: https://news.ycombinator.com/item?id=44510745
Or probably just an Allow header on a response to another query (e.g. when fetching an object, server could respond with an Allow: GET, PUT, DELETE if the user has read-write access and Allow: GET if it’s read-only).
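Sketching that (the Allow field is standard HTTP; the permission logic here is made up):

```python
def allow_header(read_write):
    # Advertise the methods the caller may use on this resource
    # via the standard Allow response header.
    return {"Allow": "GET, PUT, DELETE" if read_write else "GET"}
```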
It's something else. The list of available actions may include other resources, so you cannot express it with pure HTTP; you need a data model for that (HAL is one possible solution, but there are others).
> If it's for robots, then _maybe_ there's some value...
Nah, machine readable docs beat HATEOAS in basically any application.
The person who created HATEOAS was really not designing an API protocol. It's a general-use content delivery platform, and not very useful for software development.
The problems do exist, and they're everywhere. People just invented all sorts of hacks and workarounds for these issues instead of thinking more carefully about them. See my posts in this thread for some examples:
https://news.ycombinator.com/item?id=44509745
LLMs also appear to have an easier time consuming it (not surprisingly.)
For most APIs that doesn’t deliver any value which can’t be gained from API docs, so it’s hard to justify. However, these days it could be very useful if you want an AI to be able to navigate your API. But MCP has the spotlight now.
And that's fine, but then you're doing RPC instead of REST and we should all be clear and honest about that.
I think you throw away a useful description of an API by lumping them all under RPC. If you tell me your API is RPC instead of REST then I'll assume that:
* If the API is available over HTTP then the only verb used is POST.
* The API is exposed on a single URL and the `method` is encoded in the body of the request.
It is true, if you say "RPC" I'm more likely to assume gRPC or something like that. If you say "REST", I'm 95% confident that it is a standard / familiar OpenAPI style json-over-http style API but will reserve a 5% probability that it is actually HATEOAS and have to deal with that. I'd say, if you are doing Roy Fielding certified REST / HATEOAS it is non-standard and you should call it out specifically by using the term "HATEOAS" to describe it.
What would it take for you to update your assumptions?
People in the real world referring to "REST" APIs (the kind that use HTTP verbs and have routes like /resource/id) as RPC APIs. As it stands, in the world outside of this thread nobody does that.
At some level language is outside of your control as an individual even if you think it's literally wrong--you sometimes have to choose between being 'correct' and communicating clearly.
To me, the most important nuance really is that just like "hypermedia links" (encoded as different link types, either with Link HTTP header or within the returned results) are "generic" (think that "activate" link), so is REST as done today: if you messed up and the proper action should not be "activate" but "enable", you are in no better position than having to change from /api/v1/account/ID/activate to /api/v2/account/ID/enable.
You still have to "hard code" somewhere what action anything needs to do over an API (and there is more missing metadata, like icons, translations for action description...).
Mostly to say that any thought of this approach being more general is only marginal, and really an illusion!
While I ask people whether they actually mean REST according to the paper or not, I am one of the people who refuse to just move on. The reason being that the mainstream use of the term doesn’t actually mean anything, it is not useful, and therefore not pragmatic at all. I basically say “so you actually just mean some web API, ok” and move on with that. The important difference being that I need to figure out the peculiarities of each such web API.
>> The important difference being that I need to figure out the peculiarities of each such web API
So if they say it is Roy Fielding certified, you would not have to figure out any "peculiarities"? I'd argue that creating a typical OpenAPI style spec which sticks to standard conventions is more professional than creating a pedantically HATEOAS API. Users of your API will be confused and confusion leads to bugs.
So you enjoy being pedantic for the sake of being pedantic? I see no useful benefit either from a professional or social setting to act like this.
I don’t find this method of discovery very productive and often regardless of meeting some standard in the API the real peculiarities are in the logic of the endpoints and not the surface.
I can see a value in pedantry in a professional setting from a signaling point of view. It's a cheap way to tell people "Hey! I'm not like those other girls, I care about quality," without necessarily actually needing to do the hard work of building that quality in somewhere where the discerning public can actually see your work.
(This is not a claim that the original commenter doesn't do that work, of course, they probably do. Pedants are many things but usually not hypocrites. It's just a qualifier.)
You'd still probably rather work with that guy than with me, where my preferred approach is the opposite of pedantry. I slap it all together and rush it out the door as fast as possible.
>> "Hey! I'm not like those other girls, I care about quality,"
OMG. Pure gold!
What some people call pedantic, others may call precision. I normally just call the not-quite-REST API styles as simply "HTTP APIs" or even "RPC-style" APIs if they use POST to retrieve data or name their routes in terms of actions (like some AWS APIs).
REST is pretty much impossible to adhere to for any sufficiently complex API and we should just toss it in the garbage
100%. The needs of the client rule, and REST rarely meets the challenge. When I read the title, I was like "pfff", REST is crap to start with, why do I care?
REST means, generally, HTTP requests with json as a result.
It also means they made some effort to use appropriate http verbs instead of GET/POST for everything, and they made an effort to organize their urls into patterns like `/things/:id/child/:child_id`.
It was probably an organic response to the complexity of SOAP/WSDL at the time, so people harping on how it's not HATEOAS kinda miss the historical context; people didn't want another WSDL.
>> /things/:id/child/:child_id
It seems that nesting isn't super common in my experience. Maybe two levels if completely composite but they tend to be fairly flat.
> instead of GET/POST for everything
Sometimes that's a pragmatic choice too. I've worked with HTTP clients that only supported GET and POST. It's been a while but not that long ago.
> It also means they made some effort to use appropriate http verbs instead of GET/POST for everything, and they made an effort to organize their urls into patterns like `/things/:id`.
No not really. A lot of people don't understand REST to be anything other than JSON over HTTP. Sometimes, the HTTP verbs thing is done as part of CRUD but actually CRUD doesn't necessarily have to do with the HTTP verbs at all and there can just be different endpoints for each operation. It's a whole mess.
I also view it as inevitable.
I can count on one hand the number of times I've worked on a service that can accurately be modeled as just representational state transfer. The rest have at least some features that are inherently, inescapably some form of remote procedure call. Which the original REST model eschews.
This creates a lot of impedance mismatch, because the HTTP protocol's semantics just weren't designed to model that kind of thing. So yeah, it is hard to figure out how to shoehorn that into POST/GET/PUT/DELETE and HTTP status codes. And folks who say it's easy tend to get there by hyper-focusing on that one time they were lucky enough to be working on a project where it wasn't so hard, and dismissing as rare exceptions the 80% of cases where it did turn out to be a difficult quagmire that forced a bunch of unsatisfying compromises.
Alternatively you can pick a protocol that explicitly supports RPC. But that's not necessarily any better because all the well-known options with good language support are over-engineered monstrosities like GRPC, SOAP, and (shudder) CORBA. It might reduce your domain modeling headaches, but at the cost of increased engineering and operations hassle. I really can't blame anyone for deciding that an ad-hoc, ill-specified, janky application of not-actually-REST is the more pragmatic option. Because, frankly, it probably is.
xml-rpc (before it transmogrified into SOAP) was pretty simple and flexible. Still exists, and there is a JSON variant now too. It's effectively what a lot of web APIs are: a way to invoke a method or function remotely.
> The team constantly bikesheds over correct status codes and at least a few are used contrary to the HTTP spec
I had to chuckle here. So true!
> I can safely assume [...] CRUD actions are mapped to POST/GET/PUT/DELETE
Not totally sure about that - I think you need to check what they decided about PUT vs PATCH.
It's always better to use GET/POST exclusively. The verb mapping was theoretical, from someone who didn't have to implement it. I've long ago caved to the reality of the web's limited support for most of the other verbs.
Agreed... in most large (non trivial systems) REST ends up looking/devolving closer to RPC more and more and you end up just using get and post for most things and end up with a REST-ISH-RPC system in practice.
REST purists will not be happy, but that's reality.
Isn't that fairly straightforward? PUT for full updates and PATCH for partial ones. Does anybody do anything different?
PUT for partial updates, yes, constantly. What i worked with last week: https://docs.gitlab.com/api/projects/#edit-a-project
Lots of people make PUTs that work like PATCHes and it drives me crazy. Same with people who use POST to retrieve information.
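The intended split, as a shallow sketch (PATCH here follows JSON Merge Patch semantics from RFC 7396; the real thing recurses into nested objects):

```python
def apply_put(resource, body):
    # PUT: the body fully replaces the stored representation.
    return dict(body)

def apply_patch(resource, body):
    # PATCH (JSON Merge Patch, shallow): merge keys; null removes a member.
    merged = dict(resource)
    for key, value in body.items():
        if value is None:
            merged.pop(key, None)
        else:
            merged[key] = value
    return merged
```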
Well you can't reliably use GET with bodies. There is the proposed SEARCH but using custom methods also might not work everywhere.
No, QUERY. https://datatracker.ietf.org/doc/html/draft-ietf-httpbis-saf...
SEARCH is from RFC 5323 (WebDAV).
The SEARCH verb draft was superseded by the QUERY verb draft last I checked. QUERY is somewhat more adopted, though it's still very new.
These verbs dont even make sense most of the time.
You sweet summer child.
HTTP/JSON API works too, but you can assume it's what they mean by REST.
It makes me wish we had stuck with XML-based stuff; it had proper standards, strictly enforced by libraries that get confused by anything not following them. HTTP/JSON APIs are often hand-made and hand-read, with NIH syndrome running rampant because it's perceived to be so simple and straightforward. To the point of "we don't need a spec, you can just see the response yourself, right?". At least that was the state ~2012; nowadays they use an OpenAPI spec, but it's often incomplete, regardless of whether it's handmade (in which case people don't know everything they have to fill in) or generated (in which case the generators often have limitations and MAYBE support for some custom comments that can fill in the gaps).
> HTTP/JSON API works too, but you can assume it's what they mean by REST.
This is the kind of slippery slope where pedantic nitpickers thrive. They start to complain that if you accept any media type other than JSON then it's not "REST-adjacent" anymore, because JSON is in the name and some bloke wrote down somewhere that JSON was a trait of this architectural style.
In this sense, the term "RESTful" is useful to shut down these pedantic nitpickers. It's "REST-adjacent" still, but the right answer to nitpicking is "who cares".
> They start to complain that if you accept any media type other than JSON then it's not "REST-adjacent" anymore, because JSON is in the name and some bloke wrote down somewhere that JSON was a trait of this architectural style.
wat?
Nowhere is JSON in the name of REpresentational State Transfer. Moreover, sending other representations than JSON (and/or different presentations in JSON) is not only acceptable, but is really a part of REST
> Nowhere is JSON in the name of REpresentational State Transfer.
If you read the message you're replying to, you'll notice you are commenting on the idea of coining the concept of HTTP/JSON API as a better fitting name.
Read messages before replying? It's the internet! Ain't no one got time for that
:)
Don't stress it. It happens to the best of us.
This. Or maybe we should call it "Rest API" in lowercase, meaning not the state transfer, but the state of mind, where developer reached satisfaction with API design and is no longer bothered with hypermedia controls, schemas etc.
I recall having to maintain an integration to some obscure SOAP API that ate and spit out XML with strict schemas and while I can't remember much about it, I think the integration broke quite easily if the other end changed their API somehow.
Assuming the / was meant to describe it as both an HTTP API and a JSON API (rather than HTTP API / JSON API) it should be JSON/HTTP, as it is JSON over HTTP, like TCP/IP or GNU/Linux :)
> it had proper standards
Lol. Have you read them?
SOAP in particular can really not be described as "proper".
It had the advantage that the API docs were always generated, and thus correct, but the most common outcome was one software stack being unable to use a service built with another stack.
I really hate my conclusions here, but from a limited freedom point of view, if all of that is going to happen...
> The team constantly bikesheds over correct status codes and at least a few are used contrary to the HTTP spec
So we'd better start with a standard scaffolding for replies so we can encode errors there and forget about status codes. Then the only thing generating an error status is an unhandled exception mapped to 500. That's the one design that survives people disagreeing.
> There's a decent chance listing endpoints were changed to POST to support complex filters
So we'd better just standardize that lists support both GET and POST from the beginning. While you are at it, also accept queries in both the URL and the body parameters.
The world would be lovely if we could have standard error, listing responses, and a common query syntax.
I haven't done REST APIs in a while, but I came across this recently for standardizing the error response: https://www.rfc-editor.org/rfc/rfc9457.html
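Its standard shape is a small "problem details" object served as application/problem+json; this sketch is modeled on the RFC's running example:

```python
# RFC 9457 "problem details": one predictable error shape, so clients
# don't have to divine meaning from status codes alone.
problem = {
    "type": "https://example.com/probs/out-of-credit",   # URI identifying the error class
    "title": "You do not have enough credit.",           # short human-readable summary
    "status": 403,                                        # echoes the HTTP status code
    "detail": "Your current balance is 30, but that costs 50.",
    "instance": "/account/12345/msgs/abc",               # this specific occurrence
}
```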
> - CRUD actions are mapped to POST/GET/PUT/DELETE
Agree on your other three but I've seen far too many "REST APIs" with update, delete & even sometimes read operations behind a POST. "SOAP-style REST" I like to call it.
Do you care? From my point of view, post, put, delete, update, and patch all do the same thing. I would argue that if there is a difference, making the distinction in the URL instead of the request method makes it easier to search code and logs. And what's the correct verb anyway?
So that's an argument that there may be too many request methods, but you could also argue there aren't enough. But then standardization becomes an absolute mess.
So I say: GET or POST.
I agree. From what I have seen in corporate settings, using anything more than GET/POST takes deploying the API to a different level. Using UPDATE, PATCH, etc. typically involves firewall changes that may take weeks or months to get approved and deployed, followed by a never-ending audit/re-justification process.
> From my point of view, post, put, delete, update, and patch all do the same.
That's how we got POST-only GraphQL.
In HTTP (and hence REST) these verbs have well-defined behaviour, including the very important things like idempotence and caching: https://github.com/for-GET/know-your-http-well/blob/master/m...
The defined behaviors are not so well defined for more complex APIs.
You may have an API for example that updates one object and inserts another one, or even deletes an old resource and inserts a new one
The verbs are only very clear for very simple CRUD operations. There is a lot of nuance otherwise that you need documentation for and having to deal with these verbs both as the developer or user of an API is a nuisance with no real benefit
> The defined behaviors are not so well defined for more complex APIs.
They are. Your APIs can always be defined as a combination of "safe, idempotent, cacheable"
There's no point in idempotency for operations that change the state. DELETE is supposed to be idempotent, but it can only be if you limit yourself to deletion by unique, non-repeating id. Should you do something like delete by email or product, you have to use another operation, which then obviously will be POST anyway. And there's no way to "cache" a delete operation.
It's just absurd to mention idempotency when the state gets altered.
Yeah but GET doesn’t allow requests to have bodies (yeah, I know, technically you can but it’s not very useful), and this is a legitimate issue preventing its use in complex APIs.
I actually had to change an API recently TO this. The request payload was getting too big, so we needed to send it via POST as a body.
> even sometimes read operations behind a POST
Even worse than that, when an API like the Pinboard API (v1) uses GET for write operations!
I work with an API that uses GET for delete :)
Hell yeah. IMO we should collectively get over ourselves and just agree that what you describe is the true, proper, present-day meaning of "REST API".
> Like Agile, CI or DevOps you can insist on the original definition or submit to the semantic diffusion and use the terms as they are commonly understood
This is an insightful observation. It happens with pretty much everything
As it has been happening recently with the term vibecoding. It started with some definition, and now it’s morphed into more or less just meaning ai-assisted coding. Some people don’t like it[1]
1: https://simonwillison.net/2025/Mar/19/vibe-coding/
I use the term "HTTP API"; more general. Context, in light of your definition: In many cases labeled "REST", there will only be POST, or POST and GET, and HTTP 200 status with an error in JSON is used instead of HTTP status codes. Your definition makes sense as a weaker form of the original, but it is still too strict compared to how the term is used. "REST" = "HTTP with JSON bodies" is the most practical definition I have.
>HTTP 200 status with an error in JSON is used instead of HTTP status codes
This is a bad approach. It prevents your frontend proxies from handling certain errors better. Such as: caching, rate limiting, or throttling abuse.
> HTTP 200 status with an error in JSON is used instead of HTTP status codes
I've seen some APIs that not only always return a 200 code, but will include a response in the JSON that itself indicates whether the HTTP request was successfully received, not whether the operation was successfully completed.
Building usable error handling with that kind of response is a real pain: there's no single identifier that indicates success/failure status, so we had to build our own lookup table of granular responses specific to each operation.
Haha, our API still returns XML. At least, most of the endpoints do. Not the ones written by that guy who thinks predictability in an API is lower priority than modern code, those ones return JSON.
I present to you this monstrosity: https://stackoverflow.com/q/39110233
Presumably they had an existing API, and then REST became all the rage, so they remapped the endpoints and simply converted the XML to JSON. What do you do with the <tag>value</tag> construct? Map it to the name `$`!
Congratulations, we're REST now, the world is a better place for it. Off to the pub to celebrate, gents. Ugh.
I think people tend to forget these things are tools, not shackles
the last point got me.
How can you idiomatically do a read only request with complex filters? For me both PUT and POST are "writable" operations, while "GET" are assumed to be read only. However, if you need to encode the state of the UI (filters or whatnot), it's preferred to use JSON rather than query params (which have length limitations).
So ... how does one do it?
One uses POST and recognizes that REST doesn't have to be so prescriptive.
The part of REST to focus on here is that the response from earlier well-formed requests will include all the forms (and possibly scripts) that allow for the client to make additional well-formed requests. If the complex filters are able to be made with a resource representation or from the root index, regardless of HTTP methods used, I think it should still count as REST (granted, HATEOAS is only part of REST but I think it should be a deciding part here).
When you factor in the effects of caching by intermediate proxy servers, you may find yourself adapting any search-like method to POST regardless, or at least GET with params, but you don't always want to, or can't, put the entire formdata in params.
Plus, with the vagaries of CSRF protections, per-user rate limiting, access restrictions, etc., your GET is likely to turn into a POST for anything non-trivial. I wouldn't advise trying for pure REST-ful on the merits of its purity.
POST the filter, get a response back with the query to follow up with for the individual resources.
The server then responds with the location of the created query resource, and you can make GET requests against that resource. It adds some data-expiration problems to be solved, but it's reasonably RESTful.
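A minimal in-memory sketch of that flow; the `/searches` endpoint name and payload shape are illustrative assumptions, not from any particular API:

```python
import uuid

# Toy in-memory "server" illustrating the POST-a-search, GET-the-results pattern.
SEARCHES = {}  # search id -> stored filter document

def post_search(filter_doc: dict) -> str:
    """POST /searches: store the filter, return the Location of the new query resource."""
    search_id = str(uuid.uuid4())
    SEARCHES[search_id] = filter_doc
    return f"/searches/{search_id}"   # would be sent back as the Location header

def get_search(location: str) -> dict:
    """GET /searches/{id}: replay the stored query; idempotent and cacheable."""
    search_id = location.rsplit("/", 1)[-1]
    return SEARCHES[search_id]

location = post_search({"status": "active", "created_after": "2024-01-01"})
assert get_search(location) == {"status": "active", "created_after": "2024-01-01"}
```

The expiration problem mentioned above shows up as the eviction policy for `SEARCHES`: a real server would need a TTL or LRU story for stored queries.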
This has RESTful aesthetics but it is a bit unpractical if a read-only query changes state on the server, as in creating the uuid-referenced resource.
There's no requirement in HTTP (or REST) to either create a resource or return a Location header.
For the purposes of caching etc, it's useful to have one, as well as cache controls for the query results, and there can be links in the result relative to the Location (eg a link href of "next" is relative to the Location).
Isn't this twice as slow? If your server was far away it would double load times?
The response to POST can return everything you need. The Location header that you receive with it will contain permanent link for making the same search request again via GET.
Pros: no practical limit on query size. Cons: permalink is not user-friendly - you cannot figure out what filters are applied without making the request.
There was a proposal[1] a while back to define a new SEARCH verb that was basically just a GET with a body for this exact purpose.
[1]: https://www.ietf.org/archive/id/draft-ietf-httpbis-safe-meth...
Similarly, a more recent proposal for a new QUERY verb: https://httpwg.org/http-extensions/draft-ietf-httpbis-safe-m...
If you really want this idiomatically correct, put the data in JSON or another suitable format, zip it, and encode it in Base64 to pass via GET as a single parameter. To hit the browser limits you would need such a big query that you may hit UX constraints earlier in many cases (2048 bytes is 50+ UUIDs or 100+ polygon points, etc.).
Pros: the search query is a link that can be shared, the result can be cached. Cons: harder to debug, may not work in some cases due to URI length limits.
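A small sketch of the zip-and-Base64 trick described above (assuming zlib compression and the URL-safe Base64 alphabet; the function names are illustrative):

```python
import base64
import json
import zlib

def encode_filter(filter_doc: dict) -> str:
    """Compress a JSON filter document into a single URL-safe query parameter value."""
    raw = json.dumps(filter_doc, separators=(",", ":")).encode("utf-8")
    packed = zlib.compress(raw, 9)
    # The urlsafe alphabet avoids '+' and '/', which would otherwise need percent-encoding.
    return base64.urlsafe_b64encode(packed).decode("ascii")

def decode_filter(token: str) -> dict:
    """Server-side inverse: Base64-decode, decompress, parse JSON."""
    return json.loads(zlib.decompress(base64.urlsafe_b64decode(token)))

filters = {"ids": [str(i) for i in range(50)], "status": "active"}
token = encode_filter(filters)
assert decode_filter(token) == filters
# Usage would look like: GET /items?q=<token> -- shareable and cacheable,
# but still bounded by whatever URL length limit your intermediaries enforce.
```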
HTML FORMs are limited to www-form-encoded or multipart. The length of the query on a GET from a FORM is limited by intermediaries that shouldn't be limiting it. But that's reality.
Do a POST of a query document/media type that returns a "Location" that contains the query resource that the server created as well as the data (or some of it) with appropriate link elements to drive the client to receive the remainder of the query.
In this case, the POST is "writing" a query resource to the server and the server is dealing with that query resource and returning the resulting information.
Soon, hopefully, QUERY will save us all. In the meantime, simply using POST is fine.
I've also seen solutions where you POST the filter config, then reference the returned filter ID in the GET request, but that often seems like overkill even if it adds some benefits.
Sounds about right. I've been calling this REST-ish for years and generally everyone I say that to gets what I mean without much (any) explanation.
As long as it's not SOAP, it's great.
If I never have to use SOAP again in my life, I will die a happy man.
This is very true. Over my 15 years of engineering, I have never suffered _that_ much with integrating with an API (assuming it exists). So the lack of "HATEOAS" hasn't even been noticeable for me. As long as they get most of the status codes right (specifically 200, 401, 403, 429) I usually have no issues integrating and don't even notice that they don't have some "discoverable API". As long as I can get the data I need or can make the update I need, I am fine.
I think good rest api design is more a service for the engineer than the client.
> As long as they get most of the status codes right (specifically 200, 401, 403, 429)
A client had built an API that would return 200 on broken requests. We pointed it out and asked if maybe it could return 500, to make monitoring easier. Sure thing, next version: "HTTP 200 - 500". They just wrote 500 in the message body; the return remained 200.
Some developers just do not understand http.
I just consumed an API where errors were marked with a "success": false field.
The "success" is never true. If it's successful, it's not there. Also, a few endpoints return 500 instead, because of course they do. Oh, and one returns nothing on error and data on success, because, again, of course it does.
Anyway, if you want a clearer symptom that your development stack is shit and has way too much accidental complexity, there isn't any.
This is the real world. You just deal with it (at least I do) because fighting it is more work and at the end of the day the boss wants the project done.
I've seen this a few times in the past, but for a different reason. What would happen in these cases was that internally there'd be some cascade of calls to microservices that all get collected. In the most egregious examples it's just some proxy call wrapping the "real" response.
So it becomes entirely possible to get a 200 from the thing responding to you, but it may be wrapping an upstream error that gave it a 500.
Sometimes I wish HN supported emojis so I could reply with the throw-up one.
{ "statusCode": 200, "error" : "internal server error" }
Nice.
I've had frontend devs ask for this, because it was "easier" to handle everything in the same then callback. They wanted me to put ANY error stuff as a payload in the response.
> So the lack of "HATEOAS" hasn't even been noticeable for me.
I think HATEOAS tackles problems such as API versioning, service discovery, and state management in thin clients. API versioning is trivial to manage with sound API Management policies, and the remaining problems aren't really experienced by anyone. So you end up having to go way out of your way to benefit from HATEOAS, and you require more complexity both on clients and services.
In the end it's a solution searching for problems, and no one has those problems.
It isn't clear that HATEOAS would be better. For instance:
>>Clients shouldn’t assume or hardcode paths like /users/123/posts
Is it really a net improvement to return something like the following just so you can change the URL structure?
"_links": { "posts": { "href": "/users/123/posts" }, }
I mean, so what? We've created some indirection so that the URL can change (e.g. "/u/123/posts").
Yes, so the link doesn't have to be relative to the current host. If you move user posts to another server, the href changes, nothing else does.
If suddenly a bug is found that lets people iterate through users that aren't them, you can encrypt the url, but nothing else changes.
The bane of the life of backend developers is frontend developers that do dumb "URL construction" which assumes that the URL format never changes.
It's brittle and will break some time in the future.
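A tiny sketch of the difference: the client below follows the server-supplied link instead of constructing the URL itself (the `_links` shape follows the HAL-style example above; the helper name is illustrative):

```python
# A user resource as returned by the server, carrying its own hypermedia links.
user = {
    "id": 123,
    "_links": {"posts": {"href": "/users/123/posts"}},
}

def posts_url(resource: dict) -> str:
    """Follow the server-provided link; the path format stays the server's business."""
    return resource["_links"]["posts"]["href"]

# If the server later moves posts to "/u/123/posts", another host, or an
# opaque encrypted path, this client keeps working unchanged -- unlike a
# client that hardcodes f"/users/{user['id']}/posts".
assert posts_url(user) == "/users/123/posts"
```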
Yeah
I can assure you very few people care
And why would they? They're getting value out of this and it fits their head and model view
Sweating over this takes you nowhere
100% agreed, “language evolves”
This article also tries to make the distinction of not focusing on the verbs themselves. That the RESTful dissertation doesn’t focus on them.
The other side of this is that the IETF RESTful proposals from 1999 that talk about the protocol for implementation are just incomplete. The obscure verbs have no consensus on their implementation, and libraries across platforms may do PUT, PATCH, and DELETE incompatibly. This is enough reason to just stick with GET and POST and not try to be a strict REST adherent, since you'll hit a wall.
Importantly for the discussion, this also doesn't mean the push for REST api's was a failure. Sure, we didn't end up with what was precisely envisioned from that paper, but we still got a whole lot better than CORBA and SOAP.
The lowest common denominator in the REST world is a lot better than the lowest common denominator in SOAP world, but you have to convince the technically literate and ideological bunch first.
We still have gRPC though...
> I sympathize with the pedantry here and found Fielding's paper to be interesting, but this is a lost battle.
True. Losing hacking/hacker was sad but I can live with it - crypto becoming associated with scam coins instead of cryptography makes me want to fight.
> The team constantly bikesheds over correct status codes and at least a few are used contrary to the HTTP spec
I've done this enough times that now I don't really bother engaging. I don't believe anyone gets it 100% correct ever. As long as there is nothing egregiously incorrect, I'll accept whatever.
>- There's a decent chance listing endpoints were changed to POST to support complex filters
Please. Everyone knows they tried to make the complex filter work as a GET, then realized the filtering query is so long that it breaks whatever WAF or framework is being used because they block queries longer than 4k chars.
[flagged]
I disagree. It's a perfectly fine approach to many kinds of APIs, and people aren't "mediocre" just for using widely accepted words to describe this approach to designing HTTP APIs.
> and people aren't "mediocre" just for using widely accepted words
If you work off "widely accepted words" when there is disagreeing primary literature, you are probably mediocre.
So your view is that the person who coins a term forever has full rights to dictate the meaning of that term, regardless of what meaning turns out to be useful in practice and gets broadly accepted by the community? And you think that anyone who disagrees with such an ultra-prescriptivist view of linguistics is somehow a "mediocre programmer"? Do I have that right?
I have no dog in this fight, but 90% of technical people around me keep calling authentication authorization no matter how many times I explain the difference to those who even care to listen. It's misused in almost every application developed in this country.
Sometimes it really is bad and "everybody" can be very wrong, yes. None of us are native English speakers (most don't speak English at all), so these foreign sounding words all look the same, it's a forgivable "offence".
No. For all people who use "REST": if reading Fielding is the exception that gets you on HN, then not reading Fielding is what the average person does. Mediocre.
Using Fielding's term to refer to something else is an extra source of confusion, which kinda makes the term useless. Nobody knows what the speaker exactly refers to.
The point is lost on you though. There are REST APIs (almost none), and there are "REST APIs" - a battle cry of mediocre developers. Now go tell them their restful has nothing to do with rest. And I am now just repeating stuff said in article and in comments here.
Why should I (or you, for that matter) go and tell them their restful has nothing to do with rest? Why does it matter? They're making perfectly fine HTTP APIs, and they use the industry standard term to describe what kind of HTTP API it is.
It's convenient to have a word for "HTTP API where entities are represented by JSON objects with unique paths, errors are communicated via HTTP status codes and CRUD actions use the appropriate HTTP methods". The term we have for that kind of API is "rest". And that's fine.
1. Never said I'm going to tell them. It's on someone else. I'm just going to lower my expectation from such developers accordingly.
2. So just "HTTP API". And that would suffice. Adding "restful" is trying to be extra-smart, or to fit in when everyone around is extra-smart.
> 1. Never said I'm going to tell them. It's on someone else. I'm just going to lower my expectation from such developers accordingly.
This doesn't seem like a useful line of conversation, so I will ignore it.
> 2. So just "HTTP API".
No! There are many kinds of HTTP APIs. I've both made and used "HTTP APIs" where HTTP is used as a transport and API semantics are wholly defined by the message types. I've seen APIs where every request is an HTTP POST with a protobuf-encoded request message and every response is a 200 OK with a protobuf-encoded response message (which might then indicate an error). I've seen GraphQL APIs. I've seen RPC-style APIs where every "RPC call" is a POST request to an endpoint whose name looks like a function name. I've seen APIs where request and response data is encoded using multipart/form-data.
Hell, even gRPC APIs are "HTTP APIs": gRPC uses HTTP/2 as a transport.
Telling me that something is an "HTTP API" tells me pretty much nothing about how it works or how I'm expected to use it, other than that HTTP is in some way involved. On the other hand, if you tell me that something is a "REST API", I already have a very good idea about how to use it, and the documentation can assume a lot of pre-existing context because it can assume that I've used similar APIs before.
> On the other hand, if you tell me that something is a "REST API", I already have a very good idea about how to use it (...)
Precisely this. The value of words is that they help communicate concepts. REST API or even RESTful API conveys a precise idea. To help keep pedantry in check, Richardson's maturity model provides value.
Everyone manages to work with this. Not those who feel the need to attack people with blanket accusations of mediocrity, though. They hold onto meaningless details.
You're being needlessly pedantic, and it seems the only purpose to this pedantry is finding a pretext to accuse everyone of being mediocre.
I think the pushback is because you labelled people who create "REST APIs" as "mediocre" without any explanation. That may be a good starting point.
It’s the worst kind of pedantry. Simultaneously arrogant, uncharitable and privileged.
Most of us are not writing proper RESTful APIs because we're dealing with legacy software, weird requirements, and the egos of other developers. We're not able to build whatever we want.
And I agree with the feature article.
> It’s the worst kind of pedantry. Simultaneously arrogant, uncharitable and privileged.
I'd go as far as to claim it is by far the dumbest kind, because it has no value, serves no purpose, and solves no problem. It's just trivia used to attack people.
I met a DevOps guy who didn't know what "dotfiles" are.
However, I'd argue the people who use the term the same way as everyone else are the smart ones; if you want to refer to the "real" one, just add "strict" or "real" in front of it.
I don't think we should dismiss people over drifting definitions and lack of "foundational knowledge".
This is more like people arguing over "proper" English, the point of language is to communicate ideas. I work for a German company and my German is not great but if I can make myself understood, that's all that's needed. Likewise, the point of an API is to allow programs, systems, and people to interoperate. If it accomplishes that goal, it's fine and not worth fighting over.
If my API is supposed to rely on content-type, how many different representations do I need? JSON is a given these days, and maybe XML, but why not plain text, why not PDF? My job isn't an academic paper; good enough to get the job done is going to have to be good enough.
I agree, though it would be really, really nice if an HTTP method like GET did not modify things. :)
> This is more like people arguing over "proper" English, the point of language is to communicate ideas.
ur s0 rait, eye d0nt nnno wy ne1 b0dderz tu b3 "proppr"!!!!1!!
</sarcasm>
You are correct that communication is the point. Words do communicate a message. So too does disrespect for propriety: it communicates the message that the person who is ignorant or disrespectful of proper language is either uneducated or immature, and that in turn implies that such a person’s statements and opinions should be discounted if not ignored entirely.
Words and terms mean things. The term ‘REST’ was coined to mean something. I contend that the thing ‘REST’ originally denoted is a valuable thing to discuss, and a valuable thing to employ (I could be wrong, but how easy will it be for us to debate that if we can’t even agree on a term for the thing?).
It’s similar to the ironic use of the word ‘literally.’ The word has a useful meaning, there is already the word ‘figuratively’ which can be used to mean ‘not literally’ and a good replacement for the proper meaning of ‘literally’ doesn’t spring to mind: misusing it just decreases clarity and hinders communication.
> If my API is supposed to rely on content-type, how many different representations do I need? JSON is a given anymore, and maybe XML, but why not plain text, why not PDF?
Whether something is JSON or XML is independent of the representation — they are serialisations (or encodings) of a representation. E.g. {"type": "foo","id":1}, <foo id="1"/>, <foo><id>1</id></foo> and (foo (id 1)) all encode the same representation.
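A sketch of that distinction: the decoders below all recover the same representation from different serialisations (the helper names are illustrative):

```python
import json
import xml.etree.ElementTree as ET

def from_json(text: str) -> dict:
    """Decode the JSON serialisation into the canonical representation."""
    d = json.loads(text)
    return {"type": d["type"], "id": int(d["id"])}

def from_xml(text: str) -> dict:
    """Decode either XML serialisation into the same representation.

    The id may be an attribute (<foo id="1"/>) or a child element
    (<foo><id>1</id></foo>); both encode the same thing.
    """
    root = ET.fromstring(text)
    raw_id = root.get("id") or root.findtext("id")
    return {"type": root.tag, "id": int(raw_id)}

same = {"type": "foo", "id": 1}
assert from_json('{"type": "foo", "id": 1}') == same
assert from_xml('<foo id="1"/>') == same
assert from_xml('<foo><id>1</id></foo>') == same
```

Content-type negotiation then only chooses which serialisation goes over the wire; the representation underneath does not change.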
>misusing it just decreases clarity and hinders communication
There is no such thing as "misusing language". Language changes. It always does.
Maybe you grew up in an area of the world where it's really consistent everywhere, but in my experience I'm going to have a harder time understanding people even two to three villages away.
Because language always changes.
Words mean a particular thing at a point in time and space. At another one, they might mean something completely different. And that's fine.
You can like it or dislike it, that's up to you. However, I'd say every little bit of negative thoughts in that area only serve to make yourself miserable, since humanity and language at large just aren't consistent.
And that's ok. Be it REST, literally or even a normal word such as 'nice', which used to mean something like 'foolish'.
Again, language is inconsistent by default and meanings never stay the same for long - the more a terminus technicus gets adopted by the wider population, the more its meaning gets widened and/or changed.
One solution for this is to just say "REST in its original meaning" when referring to what is now the exception instead of the norm.
> I work for a German company and my German is not great but if I can make myself understood, that's all that's needed.
Really? What if somebody else wants to get some information to you? How do you know what to work on?
What an incredibly bad take.