A Deepdive on Caching Strategies using Django Rest Framework

Studio's strategies for utilizing caching as a way to speed up queries and save resources on the RDBMS server.
Given the volume of website traffic to many sites today, you may find yourself needing either a better or faster RDBMS server earlier than expected. As an alternative, consider caching as a way to speed up queries and save resources on the RDBMS server. Below, we’ll dig into a few strategies we utilize when we have a need for caching, especially within the context of using Django Rest Framework rather than vanilla Django.

Reasons for Caching

Faster response times

By using smart caching strategies, you can reduce your response times and enhance the overall experience for the end user.

Load reduction

Caching not only saves database resources, but can also save precious compute time on your backend servers, for example by storing responses in serialized form. This load reduction means you can handle more traffic with fewer resources, which can lead to cost reductions, especially on cloud platforms.


Let’s look at an example, assuming some familiarity with both Django and Django Rest Framework, using the versions current at publication: Django 4.2 (LTS) and DRF 3.14.0. Django’s caching framework is a nice abstraction: no matter which backend you use, the following examples will still work and remain relevant. Let’s use Redis as our cache backend.

Here is the setup used:

CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.redis.RedisCache",
        "LOCATION": "redis://",
    }
}

Nothing fancy, just the default client pointing to a local instance.

The Project

Now let’s imagine our site is a toy blog project. It’s an interesting case because the data is mostly read-only (aside from comments), which makes it an easy candidate for caching. Admittedly, the queries are simplistic enough that you likely wouldn’t need any caching in practice, but let’s proceed.

Assume the following simple implementation as our base:

class Post(models.Model):
    title = models.CharField(max_length=200)
    content = models.TextField()
    author = models.ForeignKey(User, on_delete=models.CASCADE, related_name="posts")
    date_added = models.DateTimeField(auto_now_add=True)
    likes = models.ManyToManyField(User, related_name="liked", blank=True)

    def __str__(self):
        return self.title

And the assorted serializers:

class ListPostSerializer(serializers.ModelSerializer):
    likes_count = serializers.SerializerMethodField()

    class Meta:
        model = models.Post
        fields = ["id", "title", "author", "date_added", "likes_count"]

    def get_likes_count(self, obj):
        return obj.likes.count()

class RetrievePostSerializer(ListPostSerializer):
    class Meta:
        model = models.Post
        fields = ListPostSerializer.Meta.fields + ["content"]

Django Caching Strategies

The per-view cache that comes with vanilla Django can be made to work with DRF's own APIViews and ViewSets using method_decorator from django.utils.decorators. This is documented here for reference. The biggest problem we have had with this strategy is that, as far as we know, there is no simple way to invalidate the cache. Sure, you can specify a cache-key prefix, but there is no low-level API to wildcard-delete cache entries. Because most of our data isn't cacheable on a time basis and invalidation is determined programmatically, we find ourselves in need of another caching solution.

The Low Level API

Most of the time for our project use cases, the high level caching strategies from Django are not an ideal solution. Fortunately, Django provides access to its so-called low level API with a basic API as well as a few other more specialized APIs that we’ll get back to:

from django.core.cache import cache

cache.set(key, value, timeout=DEFAULT_TIMEOUT, version=None)
cache.get(key, default=None, version=None)
cache.delete(key, version=None)

See the official documentation for a detailed API explanation.

The key is a string, as expected, and the value is any Python object that can be pickled (which covers most of them). Armed with this basic API, we can now devise new caching strategies.

Caching Single Objects

The likes count is simple to cache, as it doesn’t need to be updated in real time, so a simple timeout of 10 minutes for invalidation does the trick. It saves us a potentially expensive count query on both the list and retrieve endpoints. This can have a significant impact, especially on the listing endpoint if people really like your content.

def get_post_likes_count(post):
    cache_key = f"post-likes-count-{post.id}"
    data = cache.get(cache_key)

    if data is None:  # "not data" would treat a cached count of 0 as a miss
        data = post.likes.count()
        cache.set(key=cache_key, value=data, timeout=600)

    return data

And our serializer becomes:

# <snip>
def get_likes_count(self, obj):
    return get_post_likes_count(obj)

Retrieving Multiple Objects from Cache

All of this is great, but we can do better, starting with the list endpoint. Currently we end up with many cache calls, which in itself is not a big deal, but we know our cache backend (Redis) supports a get_many primitive. It turns out Django's low-level cache exposes it as well.

The problem is that right now, we only know the key (the post ID) once we are in the serializer, which is too late for a batched call. We must move the logic a bit earlier, into the view implementation.

Here is the viewset's list implementation:

def list(self, request, *args, **kwargs):
    page = self.paginate_queryset(self.get_queryset())
    likes = get_posts_likes_count(page)

    context = self.get_serializer_context()
    context.update({"likes": likes})
    serializer = self.get_serializer(page, many=True, context=context)

    return self.get_paginated_response(serializer.data)

By doing the pagination early, we ensure that our full queryset is evaluated only once, avoiding any unwanted queries. We then use that list to get a mapping of post IDs to likes count which is fed to our previous serializer through the context.

Let's get into the actual fetching and mapping implementation:

def get_posts_likes_count(posts):
    cache_keys = {f"post-likes-count-{p.id}": p.id for p in posts}
    data = cache.get_many(cache_keys.keys())

    # Map cache keys back to post IDs
    data = {cache_keys[k]: v for k, v in data.items()}

    if len(data) != len(posts):
        missing = [p for p in posts if p.id not in data]
        for post in missing:
            data[post.id] = get_post_likes_count(post)

    return data

The implementation is pretty simple: we look up all the keys using Django's get_many function, which saves many round trips. Then comes the logic to deal with cache misses. Because we might get partial cache misses, we must go over the full initial list and individually fetch the missing ones.

On to the last piece of the puzzle: the serializer modifications to read from our context provided mapping. This last part is pretty simple:

# <snip>
def get_likes_count(self, obj):
    if "likes" in self.context:
        return self.context["likes"][obj.id]
    return get_post_likes_count(obj)

Note that we still fall back to the get_post_likes_count() fetching function to keep this serializer usable in other contexts (the retrieve case, for example).

And there you have it: a list endpoint served with one SQL query and one cache lookup.

Caching Serialized Objects

A perhaps lesser-known benefit of caching in Django is saving CPU time on the backend server itself. DRF's serializers are known to be a bit slow, some more than others (the GeoPoint serializers, for example). So when possible, we have found great speed and CPU-usage benefits in caching the serialized form of the response. Implementation is trivial and left as an exercise for you.
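As a starting point for that exercise, here is one possible minimal sketch. A plain dict stands in for Django's cache so the snippet is self-contained, and get_serialized_post is a hypothetical helper name; in the project you would use cache.get()/cache.set() with a timeout instead:

```python
# A plain dict stands in for Django's cache here; in a real project you would
# swap _cache.get / _cache[...] for cache.get() / cache.set() with a timeout.
_cache = {}


def get_serialized_post(post, serializer_class):
    """Return a post's serialized form, paying the serialization cost only once."""
    cache_key = f"post-serialized-{post.id}"
    data = _cache.get(cache_key)
    if data is None:
        data = serializer_class(post).data  # the expensive part we want to skip
        _cache[cache_key] = data
    return data
```

The usual caveat applies: cached serialized data must be invalidated (or given a timeout) whenever the underlying post changes.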

Bonus: Double Caching Trick

One final bonus idea, dubbed ‘double caching,’ is to store the result of the cache lookup on the object instance, taking advantage of the fact that instances are stateful during serialization. This is only useful in the rare case where you need access to the same data twice to produce different serialized fields. For instance, perhaps you need to serialize the full list of comments on a blog post, but also serialize the latest one into its own top-level field.

Here is a possible implementation for this:

def get_post_comments(post):
    cache_key = f"post-comments-{post.id}"

    if not hasattr(post, "_post_comments_cache"):
        data = cache.get(cache_key)

        if data is None:  # an empty comment list is a valid cached value
            data = list(post.comments.all())
            # A timeout of 0 would expire immediately; cache for 10 minutes.
            cache.set(key=cache_key, value=data, timeout=600)

        post._post_comments_cache = data

    return post._post_comments_cache

As you can see, we first check whether our specially chosen attribute exists on the object; if it does, the caching function has already been called and we simply return the stored value. Otherwise, we are first, and we do the usual caching dance.

The serializer part (a bit contrived admittedly) looks like this:

def get_first_comment(self, obj):
    comments = get_post_comments(obj)
    return CommentSerializer(comments[0]).data if comments else None

def get_comments(self, obj):
    return CommentSerializer(get_post_comments(obj), many=True).data

This way, the cache (or the database, on a cache miss) is hit only once during the object's serialization.

We have seen a variety of caching strategies geared toward API use, using Django's low level cache. There is a famous saying by Phil Karlton that goes "There are only two hard things in Computer Science: cache invalidation and naming things." Stay tuned - we may next dig into that invalidation chapter!

Our team of backend developers and technical architects can help build a robust, scalable, and efficient backend for your project. Contact Studio today.
