> You can use GET requests and GET requests are cacheable.
GET requests are a crutch added to GraphQL precisely because of limitation of POST requests.
And the backend still has to normalise the GET request, and possibly peek inside it to make sure that it is the same as some previous request.
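That normalisation step might look something like this — a rough sketch where `cacheKey` and the whitespace-collapsing rule are illustrative, not any real server's behaviour (a production server would parse the query and re-print the AST):

```typescript
import { createHash } from "node:crypto";

// Two GET requests can carry the same logical query with different
// whitespace or variable ordering. To reuse a cached response the
// server must first reduce them to a canonical form.
function canonicalizeQuery(query: string): string {
  // Collapse runs of whitespace; a real server would normalise the
  // parsed AST instead of doing textual cleanup.
  return query.replace(/\s+/g, " ").trim();
}

function cacheKey(query: string, variables: Record<string, unknown>): string {
  const canonical = canonicalizeQuery(query);
  // Sort variable names so {"a":1,"b":2} and {"b":2,"a":1} match.
  const sortedVars = JSON.stringify(
    Object.fromEntries(
      Object.entries(variables).sort(([a], [b]) => a.localeCompare(b))
    )
  );
  return createHash("sha256").update(canonical).update(sortedVars).digest("hex");
}
```

So the server is already doing real work per request before it can even decide whether a cached response applies.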
> how to auth data (anyone has access to everything) -> Authentication or authorization? What do you mean with anyone has access to everything?
Your schema is a single endpoint with all the fields you need exposed. Oh, but a person X with access Y might not have access to fields A, B, C, and D.
Too bad, these fields can appear at any level of the hierarchy in the request, deal with it.
> how to... -> yes?
A GraphQL query is ad-hoc. It can have unbounded complexity and unbounded recursion. Oops, now you have to build complexity analysers and recursion-depth checks.
A GraphQL service usually collects data from several external services and/or a database (or even several databases). But remember, a GraphQL query is both ad-hoc and of potentially unbounded complexity. Oh, suddenly we have to think about how much data we retrieve and when, how we get it without fetching too much, and how we avoid hammering the external services and the database with thousands of extra requests.
That's just off the top of my head.
And so you end up with piles of additional solutions of varying quality and availability on top of GraphQL servers and clients: caching, persisted queries, etc.
> GET requests are a crutch added to GraphQL precisely because of limitation of POST requests.
How are GET requests a crutch? If anything, GraphQL is completely agnostic about which HTTP method you use to access it. You don't even have to run GraphQL over HTTP; it can work over MQTT, NATS, telnet...
> And the backend still has to normalise the GET request, and possibly peek inside it to make sure that it is the same as some previous request.
Which is what any caching proxy must do anyway?
> Your schema is a single endpoint with all the fields you need exposed. Oh, but a person X with access Y might not have access to fields A, B, C, and D.
In your GraphQL implementation you can just deny fulfilling requests that contain fields person X doesn't have access to. This problem is not limited to GraphQL, it's a generic authorization problem.
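A sketch of what that denial could look like, assuming the request's selections have already been parsed into a tree — the `Selection` shape and the field names below are made up for illustration, not any library's actual API:

```typescript
// Minimal selection-tree shape; a real server would walk the parsed
// GraphQL AST (e.g. a SelectionSetNode) instead.
type Selection = { field: string; children?: Selection[] };

// Deny the whole request if it touches any forbidden field, no matter
// how deeply nested. Returns the path of the first violation, or null.
function findForbidden(
  selections: Selection[],
  forbidden: Set<string>,
  path: string[] = []
): string | null {
  for (const sel of selections) {
    const here = [...path, sel.field];
    if (forbidden.has(sel.field)) return here.join(".");
    if (sel.children) {
      const hit = findForbidden(sel.children, forbidden, here);
      if (hit) return hit;
    }
  }
  return null;
}
```

Because the walk is recursive, it doesn't matter at which level of the hierarchy the restricted field appears.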
> A GraphQL query is ad-hoc. It can have unbounded complexity and unbounded recursion. Ooops, now you have to build complexity analysers and things to figure out recursion levels.
You don't have to build a complexity analyzer or figure out recursion levels, there are already tools that do that for you. But you can go another way and just create a list of approved queries.
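For illustration, a crude depth limiter might look like this. Real tools do this properly on the parsed AST; counting braces in the raw text is only to show the idea (it would be confused by braces inside string literals):

```typescript
// Rough depth check on the raw query text: track nesting of selection
// sets and reject over-deep queries before executing anything.
function maxDepth(query: string): number {
  let depth = 0;
  let max = 0;
  for (const ch of query) {
    if (ch === "{") max = Math.max(max, ++depth);
    else if (ch === "}") depth--;
  }
  return max;
}

function enforceDepthLimit(query: string, limit: number): void {
  const depth = maxDepth(query);
  if (depth > limit) {
    throw new Error(`query depth ${depth} exceeds limit ${limit}`);
  }
}
```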
> A GraphQL service usually collects data from several external services and/or a database (or even several databases)
Usually? That's just speculation. And that's entirely on the implementation of that service, it has nothing to do with GraphQL spec/technology itself.
They were not in the original spec, IIRC. URLs are limited in length (not by the spec, but most clients impose a limit), etc.
> Which is what any caching proxy must do anyway?
Nope. A caching proxy can benefit from HTTP Cache Headers [1]. But cache headers don't work well with GraphQL's GET requests, and don't work at all with the default, which is POST.
> This problem is not limited to GraphQL, it's a generic authorization problem.
GraphQL makes it significantly more complex though. Because your requests are ad-hoc.
> You don't have to build a complexity analyzer or figure out recursion levels, there are already tools that do that for you.
Indeed. By adding more and more complexity. And no, tools only solve part of the problem: simply adding a dataloader on the server doesn't entirely solve the N+1 problem.
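For context, the dataloader pattern batches the lookups issued during one resolver pass into a single backend call. A minimal sketch of the mechanism — `TinyLoader` is a made-up name, not the real dataloader API:

```typescript
// Collect keys requested in the same microtask tick and issue one
// batched fetch instead of N individual ones. `batchFn` must return
// values in the same order as the keys it receives.
class TinyLoader<K, V> {
  private queue: { key: K; resolve: (v: V) => void }[] = [];
  private scheduled = false;
  public batches = 0; // how many batched calls were actually made

  constructor(private batchFn: (keys: K[]) => Promise<V[]>) {}

  load(key: K): Promise<V> {
    return new Promise((resolve) => {
      this.queue.push({ key, resolve });
      if (!this.scheduled) {
        this.scheduled = true;
        queueMicrotask(() => this.flush());
      }
    });
  }

  private async flush(): Promise<void> {
    const pending = this.queue;
    this.queue = [];
    this.scheduled = false;
    this.batches++;
    const values = await this.batchFn(pending.map((p) => p.key));
    pending.forEach((p, i) => p.resolve(values[i]));
  }
}
```

This collapses N lookups into one per batch, but it's one more moving part, and it only addresses the fan-out within a single request, not the overall cost of an arbitrarily shaped query.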
> But you can go another way and just create a list of approved queries.
Turning it into REST with none of the benefits of REST.
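For reference, the approved-query approach is usually implemented as persisted queries: the client sends a hash, and only pre-registered queries execute. A minimal sketch, with illustrative names:

```typescript
import { createHash } from "node:crypto";

// Allow-list of approved queries, keyed by their SHA-256 hash.
const approved = new Map<string, string>();

// Called at build/deploy time for each query the client is allowed to run.
function registerQuery(query: string): string {
  const hash = createHash("sha256").update(query).digest("hex");
  approved.set(hash, query);
  return hash;
}

// Called per request: the client sends only the hash.
function lookupQuery(hash: string): string {
  const query = approved.get(hash);
  if (!query) throw new Error("unknown query hash: queries must be pre-approved");
  return query;
}
```

Which does illustrate the parent's point: the set of executable operations is now fixed ahead of time, much like a set of REST endpoints.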
> Usually? That's just speculation. And that's entirely on the implementation of that service
It's not speculation. That's the main use case for GraphQL. But even if you just slap it on top of a single database, you still have the problem of ad-hoc queries hammering your database.
> They were not in the original spec, IIRC. URLs are limited in length (not by the spec, but most clients impose a limit), etc.
They were not in the spec because the spec doesn't say anything about the medium over which GraphQL should be transported. In fact, the spec [1] mentions the word HTTP only 5 times: 4 times in example data and once when discussing implementation details of sending data over HTTP. GraphQL can't be faulted for the limits of the transport over which it is used.
> Nope. A caching proxy can benefit from HTTP Cache Headers [1]. But cache headers don't work well with GraphQL's GET requests, and don't work at all with the default, which is POST.
How do cache headers not work well with GraphQL GET requests? That is entirely up to the server that implements the API. If that server doesn't implement caching well, that's not GraphQL's fault.
> It's not speculation. That's the main use case for GraphQL. But even if you just slap it on top of a single database, you still have the problem of ad-hoc queries hammering your database.
GraphQL's main use case is any two parties that want to exchange data with each other. That merging data from multiple sources is its main use case is simply not true. Merging different data sources is one of GraphQL's capabilities, but it's not intrinsic to it.
> Turning it into REST with none of the benefits of REST.
And what exactly are those benefits? I'm here defending GraphQL yet none of the downsides of REST are being taken into account. GraphQL brings structure where there was none, that alone is a significant reason to choose GraphQL to structure your API.
> N+1 problem
There are tools like Postgraphile that solve this by converting your GraphQL query into one efficient database query.
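A toy version of the idea: compile a one-level nested selection into a single joined SELECT instead of one query per row. The table/column naming convention below is an assumption for the sketch, and Postgraphile's actual compiler is far more general:

```typescript
type Selection = { field: string; children?: Selection[] };

// Compile a root table plus its selections into one SQL statement.
// Assumed convention: a nested object maps to a child table named after
// the field, joined on <root>.id = <field>.<root>_id.
function compileToSQL(root: string, selections: Selection[]): string {
  const columns: string[] = [];
  const joins: string[] = [];
  for (const sel of selections) {
    if (sel.children) {
      joins.push(`LEFT JOIN ${sel.field} ON ${sel.field}.${root}_id = ${root}.id`);
      for (const child of sel.children) {
        columns.push(`${sel.field}.${child.field}`);
      }
    } else {
      columns.push(`${root}.${sel.field}`);
    }
  }
  return `SELECT ${columns.join(", ")} FROM ${root} ${joins.join(" ")}`.trim();
}
```

One round trip to the database regardless of how many child rows exist, instead of N+1.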
> ad-hoc queries hammering your database
And what prevents anyone from hammering a REST API? GraphQL doesn't release the developer from implementing sane constraints - something that has to happen with any API implementation and not specific to GraphQL.
> They were not in the spec because the spec doesn't say anything about the medium
If not the spec, then original documentation. GET is a late add-on.
> How do cache headers not work well with GraphQL GET requests?
In REST:
- a resource is uniquely identified by its URI
- when the server sends back cache headers, any client in between (proxies, the browser, HTTP clients in any programming language, etc.) can and will use those headers to cache the response
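A sketch of why that works, assuming a cache keyed only on method + URI (all names here are illustrative): the cache never has to look inside the request, whereas every GraphQL POST hits the same URI, so the key never varies.

```typescript
type CachedResponse = { body: string; expiresAt: number };
const cache = new Map<string, CachedResponse>();

function cacheKeyFor(method: string, uri: string): string | null {
  // Per RFC 9111, GET responses are cacheable by default; POST is not.
  return method === "GET" ? `${method} ${uri}` : null;
}

function getCached(method: string, uri: string, now: number): string | null {
  const key = cacheKeyFor(method, uri);
  if (!key) return null;
  const hit = cache.get(key);
  if (!hit || hit.expiresAt <= now) return null;
  return hit.body;
}

// maxAgeSeconds stands in for the server's Cache-Control: max-age value.
function store(method: string, uri: string, body: string, maxAgeSeconds: number, now: number): void {
  const key = cacheKeyFor(method, uri);
  if (key) cache.set(key, { body, expiresAt: now + maxAgeSeconds * 1000 });
}
```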