The case for caching
Though the concept of caching seems simple to most engineers, there is actually a lot of intriguing nuance to it. The options for caching, and the reasons to use it or not, are varied, but let’s try to simplify.
The first thing to ask is ‘what for?’ Caching is useful when you want to decrease latency and/or decrease load on components of your system. Use it in places where there is a separate, durable source of truth and it’s not terrible if the data in the cache expires or is lost some other way. Caches are not good tools for request buffering or as a source of truth - data will be lost from time to time.
The second thing to ask is ‘where should we put it?’ There are essentially four options: the end user’s client/browser, a CDN, a reverse proxy in front of your own web servers, or the web servers themselves.
- Client/browser: if the data you’re caching is specific to the user and not too large, it can be put in the client/browser. This is the most effective option for both latency and load reduction.
- CDN: if the data is not user-specific but doesn’t need to be refreshed or expired often, the best place for it is a CDN. One instance of the cache serves many users, so load on your backend system and latency are both greatly reduced.
- Reverse proxy: if you need to invalidate cache entries often but there’s a large amount of data in the cache, put it in a reverse proxy, which is similar to a CDN but lives in your DC or somewhere close to your web servers. This only saves the latency of requests within the DC and processing time, but it still greatly reduces load on the other components of the system - and cache invalidations or data refreshes don’t have a long RTT.
- Web servers: when the amount of data you need to cache is small and latency isn’t a huge concern, you can store cached data on the actual web servers that process the requests. These servers only have as much memory as one machine, so the data set needs to fit on one host, and requests still hit the web server, so you’re mainly saving the cost of processing the query and/or sending requests to other backend components or data stores.
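For the first two options, caching behavior is usually controlled with HTTP headers rather than a separate system. Here’s a minimal sketch using only Python’s standard library; the paths and max-age values are arbitrary choices for illustration:

```python
# Minimal sketch: controlling browser/CDN caching with HTTP headers.
# Paths and max-age values are arbitrary examples.
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        if self.path == "/profile":
            # User-specific data: cache only in the user's own browser.
            self.send_header("Cache-Control", "private, max-age=60")
        else:
            # Shared data: CDNs and proxies may cache it too.
            self.send_header("Cache-Control", "public, max-age=3600")
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"hello\n")

if __name__ == "__main__":
    HTTPServer(("", 8000), Handler).serve_forever()
```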
Another important decision is what type of cache to use - cache-aside or write-through. Cache-aside means the cache itself never interacts with the backend data store; the application is expected to read from and write to the cache and the data store separately. A write-through cache acts as a front end to the backend data store: reads and writes always go through the cache API, and the cache itself does the lookup, calls the backend data store and populates itself when necessary, and responds to the request.
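A rough sketch of both patterns may make the distinction clearer. Here the db object (with get/put methods) is a hypothetical stand-in for your data store, and the cache is assumed to expose a simple get/set API:

```python
# Cache-aside: the application talks to both the cache and the store.
def read_cache_aside(cache, db, key):
    value = cache.get(key)
    if value is None:            # cache miss
        value = db.get(key)      # fall back to the source of truth
        cache.set(key, value)    # populate for the next reader
    return value

# Write-through: callers only talk to the cache; it owns the store calls.
class WriteThroughCache:
    def __init__(self, db):
        self.db = db
        self.data = {}

    def get(self, key):
        if key not in self.data:
            self.data[key] = self.db.get(key)   # fetch and fill on miss
        return self.data[key]

    def set(self, key, value):
        self.db.put(key, value)   # write the store first...
        self.data[key] = value    # ...then keep the cache consistent
```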
Lastly, if you’re working with a CDN or reverse proxy, you need to decide how to partition the data. Once you know the key space you’ll need, consistent hashing is a good mechanism for reducing the amount of data that moves around when servers are added/removed.
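As an illustration, a toy consistent-hash ring with virtual nodes might look like this (the hash function, replica count, and server names are arbitrary choices, not recommendations):

```python
# Toy consistent-hash ring with virtual nodes.
import bisect
import hashlib

class HashRing:
    def __init__(self, servers, replicas=100):
        self.ring = []   # sorted list of (hash, server) points
        for server in servers:
            for i in range(replicas):
                h = self._hash(f"{server}-{i}")
                bisect.insort(self.ring, (h, server))

    def _hash(self, key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def get_server(self, key):
        # Walk clockwise to the first point at or after the key's hash.
        i = bisect.bisect(self.ring, (self._hash(key),))
        return self.ring[i % len(self.ring)][1]

ring = HashRing(["cache1", "cache2", "cache3"])
print(ring.get_server("user:42"))  # same key always maps to the same server
```

Because each server owns many small slices of the ring, adding or removing a server only remaps the keys in its own slices rather than reshuffling everything.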
Memcached and Redis are the most popular and simplest caching technologies out there. They each have their own trade-offs (e.g. memcached requires partitioning to happen in the application layer, because the instances don’t know about each other), but they, along with the decisions outlined above, are a great place to start.
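As a starting point, client-side partitioning for memcached might look like the following sketch, assuming the third-party pymemcache library (whose HashClient picks a server per key on the client side); the hostnames are placeholders:

```python
from pymemcache.client.hash import HashClient

# The client hashes each key to a server, since memcached
# instances are unaware of each other.
client = HashClient([("cache1.internal", 11211),
                     ("cache2.internal", 11211)])

client.set("user:42:profile", b"...", expire=300)  # expire after 5 minutes
profile = client.get("user:42:profile")            # routed to the same server
```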