{"id":270060,"date":"2024-01-23T08:21:25","date_gmt":"2024-01-23T13:21:25","guid":{"rendered":"https:\/\/www.webscale.com\/blog\/how-to-solve-graphql-latency-challenges-by-deploying-closer-to-your-users\/"},"modified":"2024-01-23T12:18:58","modified_gmt":"2024-01-23T17:18:58","slug":"how-to-solve-graphql-latency-challenges-by-deploying-closer-to-your-users","status":"publish","type":"post","link":"https:\/\/www.webscale.com\/blog\/how-to-solve-graphql-latency-challenges-by-deploying-closer-to-your-users\/","title":{"rendered":"How to Solve GraphQL Latency Challenges by Deploying Closer to Your Users"},"content":{"rendered":"
GraphQL is a widely adopted alternative to REST APIs because of the many benefits it offers, including performance, efficiency, and predictability. While the advantages are significant, many developers become frustrated with latency challenges when implementing GraphQL. One solution to this is deploying the application across a distributed system. Let's dive into these challenges and discuss some solutions that can be implemented today.
### TLDR?

Get started for free, or jump straight into the tutorials for deploying distributed GraphQL on Webscale CloudFlow.
### Why GraphQL?

GraphQL was built by Facebook in 2012 to combat issues with its mobile application, which was having to fetch so much data via the existing API that it was bogging down the user experience. The team ended up completely rebuilding that model. Their challenge: making the Facebook app scalable within a mobile application that was limited in resources. They asked: why does the app need all this information from the REST API? Why do we have to make multiple API calls to get it? And, ultimately, why can't there be just one call that returns all the information needed, and only that information? Hence, the creation of GraphQL.
### How Does GraphQL Work?

GraphQL deliberately moved away from the twenty-year-old REST API model. The Facebook team's position was: instead, we want to fetch content with just one HTTP request, and we want it to return only the information we need. GraphQL's tagline is "Ask for what you need, get exactly that", and it aims to make it easier to evolve APIs over time and to enable more powerful developer tools. The query language is designed to replace schema stitching and solve challenges associated with REST APIs, including separation of concerns, brittle gateway code, and coordination overhead.

Since it was open sourced in 2015, GraphQL has become highly popular because of the flexibility and efficiency it offers. In 2018, The Linux Foundation set up the GraphQL Foundation to build a vendor-neutral community around it.
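To make the "ask for what you need" idea concrete, here is a minimal sketch of a single GraphQL request that names exactly the fields the client wants. The endpoint URL and the product/review schema are hypothetical examples, not part of any particular API.

```ts
// Minimal sketch: one GraphQL request that asks only for the fields the client needs.
// The endpoint URL and the product/review fields are hypothetical.
const query = `
  query ProductCard($id: ID!) {
    product(id: $id) {
      name
      price
      reviews(limit: 3) {   # nested data fetched in the same round trip
        rating
      }
    }
  }
`;

async function fetchProductCard(id: string) {
  // GraphQL requests are typically sent as a POST with the query in the body --
  // the detail that makes naive HTTP caching difficult (see the next section).
  const response = await fetch("https://api.example.com/graphql", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query, variables: { id } }),
  });
  const { data } = await response.json();
  return data.product; // only the name, price, and three review ratings -- nothing more
}
```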
### Caching Challenges with GraphQL

However, there are various challenges with GraphQL, particularly when trying to connect data across a distributed architecture. One of the main difficulties involves caching, which CDNs are unable to solve natively without altering their architecture.

With GraphQL, since you are using just one HTTP request, you need a structure to say "I need this information", so you have to send a request body. However, bodies are not typically sent from the client to the server with GET requests, but rather with POST requests, which historically have been used for things like authentication rather than cacheable reads. This means you can't analyze the bodies using a caching solution such as Varnish Cache, because these reverse proxies typically cannot inspect POST bodies.

This problem has led to comments like "GraphQL breaks caching" or "GraphQL is not cacheable". While it is more nuanced than this, GraphQL does present three main caching issues.

Some CDNs have created a workaround that converts POST requests to GET requests, populating the URL path with the POST body of the GraphQL request, which then gets normalized. However, this insufficient solution means you can only cache full responses. For the best performance, we want to be able to cache only certain aspects of the response and then stitch them together. Furthermore, terminating SSL and unwrapping the body to normalize it can also introduce security vulnerabilities and operational overhead. A sketch of the workaround is shown below.
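Here is a rough sketch of that style of workaround, assuming a hypothetical edge proxy written in TypeScript: the GraphQL body is normalized and folded into a URL so a downstream HTTP cache can key on it. The function names and URL layout are illustrative, not any specific CDN's API.

```ts
import { createHash } from "node:crypto";

// Hypothetical sketch of the POST-to-GET workaround some CDNs apply.
// Nothing here is a specific vendor's API; it only illustrates the idea.
interface GraphQLBody {
  query: string;
  variables?: Record<string, unknown>;
}

// Normalize the body so that semantically identical queries produce the same key
// (collapse whitespace, sort variables), then hash it into a cacheable GET URL.
function toCacheableGetUrl(origin: string, body: GraphQLBody): string {
  const normalizedQuery = body.query.replace(/\s+/g, " ").trim();
  const sortedVariables = Object.fromEntries(
    Object.entries(body.variables ?? {}).sort(([a], [b]) => a.localeCompare(b)),
  );
  const key = createHash("sha256")
    .update(normalizedQuery)
    .update(JSON.stringify(sortedVariables))
    .digest("hex");
  return `${origin}/graphql/cached/${key}`;
}

// The limitation the article points out: the cache key covers the *entire* query,
// so only full responses can be cached -- there is no way to cache and stitch
// individual fields or sub-queries at this layer.
```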
### What Solutions Are Available?

The next challenge is that you need both a client application and a server to handle GraphQL requests. There are some great out-of-the-box solutions available, including Apollo GraphQL, an open source framework providing both a GraphQL client and a GraphQL server. While a GraphQL server is easy to implement at the origin, it becomes significantly more complex when trying to leverage performance benefits in a distributed architecture.

GraphQL becomes more performant by storing and serving requests closer to the end user; it is also the only way to minimize the number of API requests. This way, you can deliver a cached result much more quickly than doing a full round trip to the origin. You also save on server load, as the query doesn't actually hit your API. If your application doesn't have a great deal of frequently changing or private data, edge caching may not be necessary, but for applications with high volumes of constantly updating public data, such as publishing or media, it's essential.
#### Distributed GraphQL on Webscale CloudFlow's Edge Compute Platform

Webscale's Edge Compute Platform, CloudFlow, offers a platform to build a distributed GraphQL solution that is fully configurable to address caching challenges, without having to maintain a distributed system yourself. We give developers full flexibility and control to distribute GraphQL servers, such as Apollo, to get the benefits of stitched GraphQL bodies, along with an optimized edge network and the opportunity to configure caching layers to meet performance and scalability requirements.

As an example of what an edge node might look like: say you decide to use a JavaScript-based GraphQL server such as Apollo. You can easily drop it into CloudFlow's Node.js edge module and add a caching layer behind it. There are significant benefits to utilizing Varnish Cache (or another caching solution) behind GraphQL servers to cache API requests, particularly when it comes to reducing load times and speeding up delivery. A minimal sketch of such an edge module follows.

With CloudFlow, you have the further benefit of managing your entire edge stack in a single solution.
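As a rough illustration, here is a minimal Apollo Server that could run inside a Node.js-based edge module, with an HTTP cache such as Varnish layered into the edge stack around it. The schema, port, and resolver are assumptions made for the sketch, not CloudFlow-specific configuration.

```ts
import { ApolloServer } from "@apollo/server";
import { startStandaloneServer } from "@apollo/server/standalone";

// A minimal GraphQL server that could be dropped into a Node.js edge module.
// The schema and data are placeholders; in a real deployment the resolvers
// would call back to the origin API, and a cache (e.g. Varnish) in the edge
// stack would serve repeated queries without hitting this server at all.
const typeDefs = `#graphql
  type Article {
    id: ID!
    title: String!
  }

  type Query {
    article(id: ID!): Article
  }
`;

const resolvers = {
  Query: {
    article: async (_parent: unknown, args: { id: string }) => {
      // Placeholder: fetch from the origin API here.
      return { id: args.id, title: `Article ${args.id}` };
    },
  },
};

const server = new ApolloServer({ typeDefs, resolvers });

// Port 4000 is just the Apollo default; the edge platform decides how traffic reaches it.
const { url } = await startStandaloneServer(server, { listen: { port: 4000 } });
console.log(`GraphQL edge server ready at ${url}`);
```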
#### Apollo Federation

Apollo Federation is another solution, offering an open source architecture for building a distributed graph; in other words, it provides a way to implement GraphQL in a microservices architecture. Their goal: "Ideally, we want to expose one graph for all of our organization's data without experiencing the pitfalls of a monolith. What if we could have the best of both worlds: a complete schema to connect all of our data with a distributed architecture so teams can own their portion of the graph?"
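For a flavor of what that looks like in practice, here is a heavily trimmed sketch using Apollo's subgraph and gateway packages: one team owns a products subgraph, and a gateway composes it with other subgraphs into a single graph. The service names, ports, URLs, and schema are hypothetical, and in reality the subgraph and gateway would run as separate services.

```ts
// Heavily trimmed sketch of Apollo Federation. In practice the subgraph and the
// gateway run as separate services; they are shown together here for brevity.
import { ApolloServer } from "@apollo/server";
import { startStandaloneServer } from "@apollo/server/standalone";
import { buildSubgraphSchema } from "@apollo/subgraph";
import { ApolloGateway, IntrospectAndCompose } from "@apollo/gateway";
import gql from "graphql-tag";

// --- products subgraph: one team's slice of the overall graph ---
const typeDefs = gql`
  extend schema
    @link(url: "https://specs.apollo.dev/federation/v2.0", import: ["@key"])

  type Product @key(fields: "id") {
    id: ID!
    name: String!
  }

  type Query {
    topProducts: [Product!]!
  }
`;

const resolvers = {
  Query: {
    topProducts: () => [{ id: "1", name: "Edge Widget" }],
  },
};

const productsServer = new ApolloServer({
  schema: buildSubgraphSchema({ typeDefs, resolvers }),
});
await startStandaloneServer(productsServer, { listen: { port: 4001 } });

// --- gateway: composes all subgraphs into the single graph that clients query ---
const gateway = new ApolloGateway({
  supergraphSdl: new IntrospectAndCompose({
    subgraphs: [
      { name: "products", url: "http://localhost:4001" },
      // ...other teams' subgraphs (reviews, accounts, etc.) would be listed here
    ],
  }),
});

const gatewayServer = new ApolloServer({ gateway });
const { url } = await startStandaloneServer(gatewayServer, { listen: { port: 4000 } });
console.log(`Federated graph ready at ${url}`);
```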