Reactive Stack

Architecture & Engineering

4 Lessons Learned Rewriting our Online Order API with GraphQL

By Benoit Tremblay - Published 2018/9/8

About a year ago, we had big performance issues with our online order API because both our business and our customer base were growing. The loading time of the app was over 30 seconds, and our UI had some weird reactions to slow requests. Luckily, because we were using Redux, we had really good optimistic updates that kept the experience smooth.

When the user performed an action, we guessed ahead of time what the API response would look like and showed the result before it was received. For example, when a user added an item to their order, the item appeared visually and the totals updated immediately. The app then waited for confirmation from the server and displayed the official response. Because we loaded a lot of data at app startup, we were able to replicate a lot of business logic on the frontend.
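The flow described above can be sketched as a Redux-style reducer. This is a minimal illustration with hypothetical action names and state shape; none of these identifiers come from the actual codebase:

```typescript
// Hypothetical sketch of an optimistic add-to-order flow.
interface OrderItem { id: number; price: number; }
interface OrderState { items: OrderItem[]; total: number; pendingIds: number[]; }

type Action =
  | { type: "ADD_ITEM_OPTIMISTIC"; item: OrderItem }
  | { type: "ADD_ITEM_CONFIRMED"; itemId: number; serverTotal: number }
  | { type: "ADD_ITEM_FAILED"; itemId: number };

function orderReducer(state: OrderState, action: Action): OrderState {
  switch (action.type) {
    case "ADD_ITEM_OPTIMISTIC":
      // Show the item and a locally computed total before the server replies.
      return {
        ...state,
        items: [...state.items, action.item],
        total: state.total + action.item.price,
        pendingIds: [...state.pendingIds, action.item.id],
      };
    case "ADD_ITEM_CONFIRMED":
      // Replace the guessed total with the server's official one.
      return {
        ...state,
        total: action.serverTotal,
        pendingIds: state.pendingIds.filter((id) => id !== action.itemId),
      };
    case "ADD_ITEM_FAILED":
      // Roll back the optimistic item if the server rejects it.
      return {
        ...state,
        items: state.items.filter((i) => i.id !== action.itemId),
        total:
          state.total -
          (state.items.find((i) => i.id === action.itemId)?.price ?? 0),
        pendingIds: state.pendingIds.filter((id) => id !== action.itemId),
      };
    default:
      return state;
  }
}
```

The key design point is that the optimistic action and the server confirmation are separate events, so the UI can track which items are still pending.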

What made all of this possible was that restaurants don't have that many items on a menu. We were able to load all the items, and even all their options and extras, upfront. Optimistic updates were our bread and butter, and even though the API was slow, we were getting away with it.

However, with our growth, we had two major problems. First of all, for every feature we wrote, we had to implement the business logic both on the frontend and on the backend. We could have written it in JavaScript and reused the same code, but we were far too invested in our .NET backend to switch at that point. We also had customers who wanted to use our API, and the amount of business logic they would have had to implement was just unthinkable.

On top of that, we had another big, hairy problem. Because we loaded so much data when initializing the app, the more customers and features we added, the slower the Time to Interactive became. Because that slowdown happened over a long period, we never noticed it until our sales team complained about it during demos. They certainly learned to distract prospects while that spinner was on the screen. One catastrophic event, however, was the catalyst for the rewrite: we hit a threshold and our servers kept crashing over and over because the CPU was over-utilized. In the short term, we added a good caching strategy to patch the problem.

That server failure was a huge wake-up call to put more thought into our architecture and infrastructure, and we ultimately made two technology changes that paid off handsomely, on top of a few infrastructure improvements. To fix our data fetching, business logic replication, and performance problems, we decided to rewrite our API using GraphQL.NET and Dapper.


For those who don't know what GraphQL is, here is the official description:

GraphQL is a query language for your API, and a server-side runtime for executing queries by using a type system you define for your data. GraphQL isn't tied to any specific database or storage engine and is instead backed by your existing code and data. - GraphQL Official Documentation

As an example, if you want to ask the API for an item's name and its category's name, you would write something like this:

query itemCategory($itemId: Int!) {
  item(id: $itemId) {
    name
    category {
      name
    }
  }
}
It is very easy to write, almost like JSON, and the result from the server is shaped exactly as you requested it. On top of that, there is an awesome library called Apollo that lets us easily query GraphQL on the frontend using React.js.

Switching from Entity Framework to Dapper

Even though we could have optimized our Entity Framework queries much better, we decided to keep it for our admin panel and for generating the code-first schema. For our customer-facing API, however, we went with Dapper because of the performance gain. Also, 95% of our queries were very simple to write. With an integration test for every query, we were pretty confident this was a solid choice… and it was.

The biggest change for us, however, was the switch to GraphQL. Not only did we have to learn a new way to build an API, we also had to turn an ugly private API into a public API that many partners are now using.

The rewrite ended up being very successful, and here are the 4 biggest lessons learned after a year with GraphQL.

1. Focus on user scenarios instead of your data models

This is certainly true for any type of API, but I feel it is especially important to stress for GraphQL. The goal is not to expose your database but rather to expose simplified information and actions that create business value. It is the API equivalent of Tell, Don't Ask. You want your API to help consumers easily accomplish what they need, and this is where GraphQL is strong. Our old API fell short of that goal, and we are still improving whenever we learn about more user scenarios.

The developer reflex is often to synthesize every use case into a complex model that will fit every possible scenario, and this is where complexity creeps in. Instead of exposing a user profile and nothing more, you start exposing flags and settings that could have been hidden behind the API. We certainly made that mistake on a few queries, such as when we exposed the entire restaurant schedule config instead of feeding simple dates and hours at which the user can order.
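As a rough illustration of the difference, here are two hypothetical response shapes in TypeScript. The field names are invented for this sketch, not our real schema:

```typescript
// What we mistakenly exposed: the raw schedule configuration,
// which forces every consumer to reimplement the opening-hours logic.
interface ScheduleConfigResponse {
  weeklySchedule: { day: number; open: string; close: string }[];
  holidays: string[];
  prepTimeMinutes: number;
}

// What the consumer actually needed: the answer itself.
interface OrderTimesResponse {
  availableOrderTimes: string[]; // e.g. ISO timestamps the user can pick
}

// A consumer of the second shape has nothing to compute:
const response: OrderTimesResponse = {
  availableOrderTimes: ["2018-09-08T18:30:00Z", "2018-09-08T19:00:00Z"],
};
const canOrderNow = response.availableOrderTimes.length > 0;
```

The first shape leaks settings; the second answers the user scenario directly, so new scheduling rules can be added server-side without breaking any consumer.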

It is very hard not to fall into that trap, and it is important to fix these mistakes so that your API requires less work to implement, and adding new settings becomes easier instead of harder.

2. Queries are not only for data but also business logic

When you look at your graph and only see data, you're missing an important part of GraphQL. You can do more than query entities and fields: you can also pass parameters anywhere in the graph. This opens the door to exposing complex business logic with significantly reduced complexity on the frontend.

Fields do not need to map to a column in your database. They can be computed, and you should have a bunch of them. Want to know whether a user can purchase an item? Query canPurchase(itemId: Int!) on your user entity, and it will check the inventory, the laws of your country, and so on. It then returns either YES or a list of reasons why you can't buy the item: it might be out of stock, or the item might require you to be a legal adult in your country.
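Here is a minimal sketch of what the logic behind such a computed field could look like. The types and rules are invented for illustration; in practice the checks would live in a resolver on the user entity, and an empty list stands in for YES:

```typescript
// Hypothetical reasons a purchase can be blocked.
type PurchaseBlocker = "OUT_OF_STOCK" | "AGE_RESTRICTED";

interface Item { id: number; stock: number; minimumAge: number; }
interface User { age: number; }

// Returns an empty array when the purchase is allowed,
// otherwise the reasons the purchase is blocked.
function canPurchase(user: User, item: Item): PurchaseBlocker[] {
  const reasons: PurchaseBlocker[] = [];
  if (item.stock <= 0) reasons.push("OUT_OF_STOCK");
  if (user.age < item.minimumAge) reasons.push("AGE_RESTRICTED");
  return reasons;
}
```

Because the check runs behind the API, the frontend never needs to know about inventory rules or local laws; it only renders the reasons it receives.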

3. Keep your resolvers atomic

One of the great strengths of GraphQL is the control it gives you: every part of the graph can be resolved by a different resolver. All a resolver needs to know is the source it comes from, and you are good to go.

Let's say you have a video game library and you also want to list three similar games for each game. You would build a query that looks like this:

query videoGames {
  videoGames {
    title
    relatedGames {
      title
    }
  }
}
You don't want your videoGames query to also be responsible for fetching relatedGames. You want a different resolver that fetches only that specific piece of information. In GraphQL, you do this simply by putting a resolver on the relatedGames field. When that resolver is called, it receives the videoGame entity as the source. Then you can use the video game id to query the related games.
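A minimal sketch of that resolver split in TypeScript, with invented entity names and an in-memory store standing in for the database:

```typescript
interface VideoGame { id: number; title: string; }

// In-memory stand-ins for the database (illustrative data).
const gamesById: Record<number, VideoGame> = {
  1: { id: 1, title: "Space Quest" },
  2: { id: 2, title: "Star Racer" },
};
const relatedIdsByGame: Record<number, number[]> = { 1: [2], 2: [1] };

const resolvers = {
  Query: {
    // Resolves only the top-level list; knows nothing about related games.
    videoGames: (): VideoGame[] => Object.values(gamesById),
  },
  VideoGame: {
    // Receives the parent videoGame as `source` and resolves only this field.
    relatedGames: (source: VideoGame): VideoGame[] =>
      (relatedIdsByGame[source.id] ?? []).map((id) => gamesById[id]),
  },
};
```

Each resolver touches exactly one entity, so either one can change its data source (or add caching) without the other knowing.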

By sticking to a single entity per resolver, your code stays very easy to follow, and each resolver only cares about one piece of information. You might even want a resolver for a single special field with complex business logic behind it.

4. Batch your database queries

It's really nice to be able to split your resolvers. However, you have to make sure it doesn't hurt your performance too much. If you're not careful, you will end up with a lot of N+1 query situations. This happens with sub-resolvers because each of them is resolved one by one, with its parent as the source. To fix the problem, you can delay the database query until you have gathered every entity you need. Then you can batch the query with a single transaction and hand the result to each of your resolvers.

The results can even be cached, so if you need the same entity in multiple parts of your graph (say, a list of articles with author names, where several articles share the same author), you can reuse the result already in memory.

Facebook developed a great library called DataLoader to help accomplish batching and caching. It is definitely worth checking out, if only to learn the principles.
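To illustrate the principle, here is a tiny DataLoader-style sketch. It is a simplified stand-in, not the real library's implementation: it collects every key requested in the current tick, fetches them in one batch, and caches results by key.

```typescript
// Minimal DataLoader-style batcher (illustrative, not the real library).
class TinyLoader<K, V> {
  private queue: Array<{ key: K; resolve: (v: V) => void }> = [];
  private cache = new Map<K, Promise<V>>();
  private scheduled = false;

  constructor(private batchFn: (keys: K[]) => Promise<V[]>) {}

  load(key: K): Promise<V> {
    // Cache hit: duplicate keys share one promise and one fetch.
    const cached = this.cache.get(key);
    if (cached) return cached;
    const promise = new Promise<V>((resolve) => {
      this.queue.push({ key, resolve });
      if (!this.scheduled) {
        this.scheduled = true;
        // Flush after the current tick, once every resolver has enqueued its key.
        Promise.resolve().then(() => this.flush());
      }
    });
    this.cache.set(key, promise);
    return promise;
  }

  private async flush(): Promise<void> {
    const batch = this.queue;
    this.queue = [];
    this.scheduled = false;
    // One database round trip for the whole batch.
    const values = await this.batchFn(batch.map((b) => b.key));
    batch.forEach((b, i) => b.resolve(values[i]));
  }
}

// Usage: three loads in the same tick become a single batched fetch.
const batchCalls: number[][] = [];
const authorLoader = new TinyLoader<number, string>(async (ids) => {
  batchCalls.push(ids); // record which ids were fetched together
  return ids.map((id) => `Author #${id}`);
});
```

The real DataLoader adds error handling, cache control, and more, but the core idea is the same: defer, collect, then fetch once.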

Last Words

In the past year, GraphQL has proved over and over that it is not just a shiny new object. It is here to stay, and it gives amazing power to the frontend: not only smaller, optimized requests, but also a more flexible backend. The switch has yielded high returns and was well worth the effort. One of the cool things about GraphQL is that you can make the switch gradually, at first exposing your REST API through GraphQL and slowly transitioning your code. For us, it was more of a big-bang rewrite, but we made the transition one app and one customer at a time.
