We use Varnish as a reverse proxy/caching server in front of our app. Our general system is:

  • Most pages use cache-control: private and are not cached by Varnish
  • Pages that are mostly static, like the homepage and watch pages, are cached.
  • We vary by Accept-Language and Cookie
  • In the Varnish VCL, we try to normalize/reduce those headers to improve cache hits. We make it so:
    • Cookie only contains the sessionid
    • Accept-Language is normalized, so it only contains the language that we want to display the page in.
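The normalization logic lives in the VCL, but the idea can be sketched in Python. This is an illustration only: the supported-language set and function names below are assumptions, not Amara's actual code.

```python
# Sketch of the header normalization the VCL performs (illustrative only).
SUPPORTED_LANGUAGES = {'en', 'es', 'fr'}   # assumption for the example
DEFAULT_LANGUAGE = 'en'

def normalize_accept_language(header):
    """Reduce Accept-Language to the single language we will render in."""
    for part in header.split(','):
        # Each part looks like "es" or "es-MX;q=0.8"; drop the q-value
        lang = part.split(';')[0].strip().split('-')[0].lower()
        if lang in SUPPORTED_LANGUAGES:
            return lang
    return DEFAULT_LANGUAGE

def normalize_cookie(header):
    """Strip the Cookie header down to just the sessionid."""
    for chunk in header.split(';'):
        name, _, value = chunk.strip().partition('=')
        if name == 'sessionid':
            return 'sessionid=' + value
    return ''
```

Because Varnish includes the varied headers in its cache key, collapsing them to one language and one cookie value means far fewer distinct cache entries per page.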

Caching App

Amara uses a couple of tricks for caching things.

Cache Groups

Cache groups are used to manage a group of related cache values. They add some extra functionality to the regular django caching system:

  • Key prefixing: cache keys are prefixed with a string to avoid name collisions
  • Invalidation: all values in the cache group can be invalidated together. Optionally, all values can be invalidated on server deploy
  • Optimized fetching: we can remember cache usage patterns in order to use get_many() to fetch all needed keys at once (see Cache Patterns)
  • Protection against race conditions: (see Race condition prevention)

Typically cache groups are associated with objects. For example we create a cache group for each user and each video. The user cache group stores things like the user menu HTML and message HTML. The video cache group stores the language list and other sections of the video/language pages.


  • A CacheGroup is a group of cache values that can all be invalidated together
  • You can automatically create a CacheGroup for each model instance
  • CacheGroups can be used with a cache pattern. This makes it so we remember which cache keys are requested and fetch them all using get_many()

Let’s take the video page caching as an example. To implement caching, we create cache groups for Team, Video, and User instances. Here are a few examples of how we use those cache groups:

  • Language list: we store the rendered HTML in the video cache
  • User menu: we store the rendered HTML in the user cache (and we actually use that for all pages on the site)
  • Add subtitles form: we store the list of existing languages in the video cache (needed to set up the selectbox)
  • Follow video button: we store a list of user IDs that are following the video in the video cache. To check if the user is currently following it, we search that list for their user ID.
  • Add subtitles permissions: we store a list of member user IDs in the team cache. To check if the user can view the tasks/collaboration page, we search that list for the user’s ID.

When we create the cache groups, we use the video-page cache pattern. This makes it so we can render the page with 3 cache requests: one get_many() fetches the Video instance and all cache values related to the video, and similar calls handle the Team and User.

Cache invalidation is always tricky. We use a simple system where if a change could affect any cache value, we invalidate the entire group of values. For example if we add/remove a team member then we invalidate the cache for the team.

Cache Patterns

Cache patterns help optimize cache access. When a cache pattern is set for a CacheGroup, we do a couple of things:

  • Remember which keys were fetched from cache.
  • On subsequent runs, we will try to use get_many() to fetch all cache values at once.

This speeds things up by reducing the number of round trips to memcached.
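The record-then-prefetch idea can be sketched with a dict-backed stand-in for memcached. The class and attribute names here are illustrative, not Amara's actual API:

```python
# Sketch of the cache-pattern idea: record which keys a code path asks for,
# then prefetch them all with one get_many() on later runs (illustrative).
class PatternedCache:
    def __init__(self, backend):
        self.backend = backend          # anything with get()/get_many()
        self.seen_keys = set()          # keys requested on earlier runs
        self.prefetched = {}

    def start_run(self):
        # One get_many() round trip fetches every key we needed last time
        if self.seen_keys:
            self.prefetched = self.backend.get_many(list(self.seen_keys))
        else:
            self.prefetched = {}

    def get(self, key):
        self.seen_keys.add(key)
        if key in self.prefetched:
            return self.prefetched[key]
        return self.backend.get(key)    # fallback: individual round trip

class CountingBackend(dict):
    """Dict-backed stand-in for memcached that counts round trips."""
    round_trips = 0
    def get(self, key):
        self.round_trips += 1
        return dict.get(self, key)
    def get_many(self, keys):
        self.round_trips += 1
        return {k: self[k] for k in keys if k in self}

backend = CountingBackend(a=1, b=2, c=3)
cache = PatternedCache(backend)

cache.start_run()                       # first run: nothing recorded yet
for key in ('a', 'b', 'c'):
    cache.get(key)                      # 3 individual round trips

cache.start_run()                       # second run: one get_many() round trip
for key in ('a', 'b', 'c'):
    cache.get(key)                      # all served from the prefetch
```

The first render pays one round trip per key; every render after that pays one round trip total for the same key set.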

Behind the scenes

The main trick that CacheGroup uses is to store a “version” value in the cache, which is simply a random string. We also pack the version value together with all of our cache values. If a cache value’s version doesn’t match the version for the cache group, then it’s considered invalid. This allows us to invalidate the entire cache group by changing the version value to a different string.

Here’s some example data to show how it works.

key       value in cache    computed value
version   abc               N/A
X         abc:foo           foo
Y         abc:bar           bar
Z         def:bar           invalid
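The version trick can be shown with a stripped-down, dict-backed sketch. This is not Amara's actual code; the class name and packing format are illustrative:

```python
import random
import string

# Stripped-down sketch of the version trick (illustrative, not Amara's code):
# the group stores a random "version" string, and every value is packed as
# "<version>:<value>". A value whose packed version doesn't match the group's
# current version is treated as a cache miss.
class MiniCacheGroup:
    def __init__(self, prefix, backend):
        self.prefix = prefix
        self.backend = backend          # plain dict stand-in for memcached

    def _key(self, key):
        return '{}:{}'.format(self.prefix, key)

    def _version(self):
        version = self.backend.get(self._key('version'))
        if version is None:
            # No version set yet: pick a fresh random string now
            version = ''.join(random.choices(string.ascii_lowercase, k=8))
            self.backend[self._key('version')] = version
        return version

    def get(self, key):
        version = self._version()
        packed = self.backend.get(self._key(key))
        if packed is None:
            return None
        stored_version, _, value = packed.partition(':')
        # A stale version means the whole group was invalidated
        return value if stored_version == version else None

    def set(self, key, value):
        self.backend[self._key(key)] = '{}:{}'.format(self._version(), value)

    def invalidate(self):
        # Changing the version string invalidates every value at once
        self.backend[self._key('version')] = ''.join(
            random.choices(string.ascii_lowercase, k=8))

group = MiniCacheGroup('video-123', {})
group.set('title', 'foo')
group.get('title')      # 'foo'
group.invalidate()
group.get('title')      # None: the old version no longer matches
```

Note that invalidate() never touches the individual value keys; they simply become unreadable and age out of the cache on their own.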


We will also prefix all cache keys with “<prefix>:”, using the prefix passed into the CacheGroup constructor.


If invalidate_on_deploy is True, then we will append “:<commit-id>” to the version key. This way the version key changes for each deploy, which will invalidate all values.

Race condition prevention

The typical cache usage pattern is:

  1. Fetch from the cache

  2. If there is a cache miss then:

    1. calculate the value
    2. store it to cache.

This pattern has a race condition if another process updates the DB between steps 2a and 2b. Even if the other process invalidates the cache, step 2b will overwrite it, storing an outdated value.

This is not a problem with CacheGroup because of the way it handles the version key. When we get the value from cache, we also fetch the version value. If the version value isn’t set, we set it right then. Then when we store the value, we also store the version key that we saw when we did the get. If the version changes between the get() and set() calls, then the value stored with set() will not be valid. This works somewhat similarly to the memcached GETS and CAS operations.
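Here is a sketch of that protection (illustrative, not Amara's code): get() returns the version it saw, set() packs that same version into the stored value, and an invalidation in between leaves the stored value unreadable.

```python
# Sketch of the race-condition protection (illustrative, not Amara's code).
class VersionedGroup:
    def __init__(self, backend):
        self.backend = backend          # plain dict stand-in for memcached
        self.counter = 0                # stand-in for random version strings

    def _version(self):
        version = self.backend.get('version')
        if version is None:
            self.counter += 1
            version = 'v{}'.format(self.counter)
            self.backend['version'] = version
        return version

    def get(self, key):
        version = self._version()
        packed = self.backend.get(key)
        if packed is not None and packed[0] == version:
            return version, packed[1]
        return version, None            # miss: caller remembers the version

    def set(self, key, value, version):
        # Store the version we saw at get() time, not the current one
        self.backend[key] = (version, value)

    def invalidate(self):
        self.backend.pop('version', None)

group = VersionedGroup({})

version, value = group.get('count')     # miss; version 'v1' is set now
# ... another process updates the DB and invalidates the group here ...
group.invalidate()
group.set('count', 'stale-value', version)   # stored under the old 'v1'
_, value = group.get('count')           # new version 'v2' != 'v1': a miss
```

The stale write lands in the cache but is never returned, which is the same guarantee memcached's GETS/CAS pair provides, without needing CAS support from the backend.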

Cache Groups and DB Models

Cache groups can save and restore django models using get_model() and set_model(). There is a pretty conservative policy around this. Only the actual row data will be stored to cache – other attributes like cached related instances are not stored. Also, restored models can’t be saved to the DB. All of this is to try to prevent overly aggressive caching from causing weird/wrong behavior.

To add caching support to your model, add ModelCacheManager as an attribute to your class definition.

class caching.cachegroup.CacheGroup(prefix, cache_pattern=None, invalidate_on_deploy=True)

Manage a group of cached values

  • prefix (str) – prefix keys with this
  • cache_pattern (str) – cache pattern identifier
  • invalidate_on_deploy (bool) – Invalidate values when we redeploy

get(key)

Get a value from the cache

This method also checks that the version of the value stored matches the version in our version key.

If there is no value set for our version key, we set it now.


get_many(keys)

Get multiple keys at once

If there is no value set for our version key, we set it now.

set(key, value, timeout=None)

Set a value in the cache

set_many(values, timeout=None)

Set multiple values in the cache

get_or_calc(key, work_func, *args, **kwargs)

Shortcut for the typical cache usage pattern

get_or_calc() is used when a cache value stores the result of a function. The steps are:

  • Try self.get(key)
  • If there is a cache miss then
    • call work_func() to calculate the value
    • store it in the cache
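The steps above can be sketched as a plain function over a dict-backed group. The stand-in class and the miss-means-None convention are assumptions for the example:

```python
# Sketch of get_or_calc() over a dict-backed stand-in for a CacheGroup.
class DictGroup(dict):
    """Dict stand-in for a CacheGroup (illustrative only)."""
    def get(self, key):
        return dict.get(self, key)
    def set(self, key, value):
        self[key] = value

def get_or_calc(cache_group, key, work_func, *args, **kwargs):
    # Note: this sketch treats None as a miss, so it would recompute
    # values that are legitimately None.
    value = cache_group.get(key)
    if value is None:                   # cache miss
        value = work_func(*args, **kwargs)
        cache_group.set(key, value)
    return value

calls = []
def expensive(x):
    calls.append(x)                     # track how often we actually compute
    return x * 2

group = DictGroup()
get_or_calc(group, 'double:21', expensive, 21)   # miss: calls expensive()
get_or_calc(group, 'double:21', expensive, 21)   # hit: cached value reused
```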
get_model(ModelClass, key)

Get a model stored with set_model()


To be cautious, models fetched from the cache don’t allow saving. If the cache data is out of date, we don’t want to save it to disk.

set_model(key, instance, timeout=None)

Store a model instance in the cache

Storing a model is a tricky thing. This method works by storing a tuple containing the values of the DB row. We store it like that for 2 reasons:

  • It’s space efficient
  • It drops things like cached related objects. This is probably good since it makes it so we don’t also cache those objects, which can lead to unexpected behavior and bugs.

  • key – key to store the instance with
  • instance – Django model instance, or None to indicate the model does not exist in the DB. This will make get_model() raise an ObjectDoesNotExist exception.
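The tuple-of-row-values idea can be sketched with a plain class standing in for a Django model. Everything here (the FakeVideo class, the fields attribute, the free functions) is illustrative, not Amara's actual implementation:

```python
# Sketch of the set_model()/get_model() idea with a plain class standing in
# for a Django model (illustrative only).
class FakeVideo:
    fields = ('id', 'title')            # stand-in for Model._meta fields
    def __init__(self, id, title):
        self.id, self.title = id, title
    def save(self):
        pass                            # a real model writes to the DB here

def set_model(cache, key, instance):
    # Store only the row data, as a tuple: compact, and it drops any
    # cached related objects hanging off the instance
    cache[key] = tuple(getattr(instance, f) for f in instance.fields)

def get_model(cache, ModelClass, key):
    row = cache.get(key)
    if row is None:
        return None
    instance = ModelClass(*row)
    # To be cautious, block saving: the cached row may be out of date
    def no_save():
        raise RuntimeError('cached instances cannot be saved')
    instance.save = no_save
    return instance

cache = {}
set_model(cache, 'video:1', FakeVideo(1, 'Intro'))
video = get_model(cache, FakeVideo, 'video:1')   # video.title == 'Intro'
```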

invalidate()

Invalidate all values in this CacheGroup.

class caching.cachegroup.ModelCacheManager(default_cache_pattern=None)

Manage CacheGroups for a django model.

ModelCacheManager is meant to be added as an attribute to a class. It does 2 things: manages CacheGroups for the model class and implements the python descriptor protocol to create a CacheGroup for each instance. If you add cache = ModelCacheManager() to your class definition, then:

  • At the class level, MyModel.cache will be the ModelCacheManager instance
  • At the instance level, my_model.cache will be a CacheGroup specific to that instance
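The class-versus-instance behavior comes from the descriptor protocol, which can be sketched in a few lines. The class names and the prefix format below are illustrative assumptions:

```python
# Sketch of the descriptor behavior behind ModelCacheManager (illustrative).
class MiniCacheGroup:
    def __init__(self, prefix):
        self.prefix = prefix

class MiniModelCacheManager:
    def __set_name__(self, owner, name):
        # Remember which model class we were attached to
        self.model_name = owner.__name__.lower()

    def __get__(self, instance, owner):
        if instance is None:
            return self                 # MyModel.cache -> the manager itself
        # my_model.cache -> a CacheGroup scoped to this instance's pk
        return MiniCacheGroup('{}:{}'.format(self.model_name, instance.pk))

class Video:
    cache = MiniModelCacheManager()
    def __init__(self, pk):
        self.pk = pk

Video.cache                 # the MiniModelCacheManager instance
Video(pk=123).cache.prefix  # 'video:123'
```

Because __get__ builds a fresh CacheGroup per access, each instance's cache values are automatically namespaced by its primary key.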
get_cache_group(pk, cache_pattern=None)

Create a CacheGroup for an instance of this model

  • pk – primary key value for the instance
  • cache_pattern – cache pattern to use or None to use the default cache pattern for this ModelCacheManager

Invalidate a CacheGroup for an instance

This is a shortcut for get_cache_group(pk).invalidate() and can be used to invalidate without having to load the instance from the DB.

get_instance(pk, cache_pattern=None)

Get a cached instance from its cache group

This will create a CacheGroup, get the instance from it or load it from the DB, then reuse the CacheGroup for the instance’s cache. If a cache pattern is used this means we can load the instance and all of the needed cache values with one get_many() call.