
Caching and Serving Stale Content With Nginx


Caching

With Ruby and Rails we often want to cache static content so that we reduce the number of requests that have to go through the Rails stack. Nginx in front is a great tool, and we can use its abilities to add caching easily.

App

Let's imagine we have a simple Sinatra app. For the purpose of this post the app will have one endpoint, /ohhai, that shows the current time. This is a great way to test if our caching is working fine. The code of the app is really simple:

app.rb
require "sinatra/base"

class ExampleStaleApp < Sinatra::Base
  get "/ohhai" do
    sleep(5) # simulate a slow backend so the caching effect is easy to see
    Time.now.to_s
  end
end

To make it easy to start, I also created a config.ru rackup file describing how to start the app (in the repo there is a start.sh script ;)

config.ru
require 'rubygems'
require 'bundler'

Bundler.require

require './app'

run ExampleStaleApp

If you are using the code from my repo https://github.com/JakubOboza/003-nginx-cache-stale-example, all you need to do to start it is:

shell
λ git clone https://github.com/JakubOboza/003-nginx-cache-stale-example
λ cd 003-nginx-cache-stale-example
λ bundle install
λ ./start.sh

I configured the app with my own path/URI and with the upstream server on port 6677, so you will need to change these if you are using different settings.

Caching

Our app is running now, so let's add caching; for this we will need an Nginx frontend config. In most cases I create a single Nginx config for each server in the sites-available directory and symlink it in sites-enabled (like Apache does by default). I like this setup: it helps a lot when maintaining more than one site, which is common both in development environments and on shared application servers. It boils down to a symlink plus a reload, as shown below.
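On a typical Debian-style layout that looks something like this (the paths are the usual defaults and the config file name is made up for this example, so adjust for your setup):

shell
λ sudo ln -s /etc/nginx/sites-available/example_stale.conf /etc/nginx/sites-enabled/example_stale.conf
λ sudo nginx -t          # sanity-check the config before reloading
λ sudo nginx -s reload   # pick up the new site without dropping connections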

Nginx config file

I will show the complete Nginx config file for this example and explain each bit one by one.

nginx.example.caching.conf
upstream sinatra_rackup {
  server 0.0.0.0:6677;
}

proxy_cache_path  /tmp/cache levels=1:2 keys_zone=my-test-cache:8m max_size=5000m inactive=300m;

server {
    listen 80;
    server_name example_stale.local;
    root /Users/kuba/Workspace/Ruby/example_stale/public;

    access_log  /var/log/nginx/example.stale.access.log;

    location / {
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header Host $http_host;
      proxy_cache my-test-cache;
      proxy_cache_valid  200 302  1m;
      proxy_cache_valid  404      60m;
      proxy_cache_use_stale   error timeout invalid_header updating;
      proxy_redirect off;

      # serve static files straight from the public directory when they exist
      if (-f $request_filename/index.html) {
        rewrite (.*) $1/index.html break;
      }
      if (-f $request_filename.html) {
        rewrite (.*) $1.html break;
      }
      # everything else goes to the Sinatra app
      if (!-f $request_filename) {
        proxy_pass http://sinatra_rackup;
        break;
      }
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
      root html;
    }
}

It isn't even that long ;)

upstream

upstream sinatra_rackup {
  server 0.0.0.0:6677;
}

In this part we describe our upstream, in other words where our application server will be listening. It is easy to just use a port, but you can configure it to use a unix socket if you want to gain a bit of performance, since that avoids the TCP overhead for local traffic.
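If you want to try the socket variant, the upstream block could look something like this (a sketch: the socket path is made up, and your app server has to be started listening on that same socket):

nginx.example.caching.conf
upstream sinatra_rackup {
  # the rack server must be told to listen here, e.g. unicorn -l /tmp/example_stale.sock
  server unix:/tmp/example_stale.sock;
}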

global cache config

proxy_cache_path  /tmp/cache levels=1:2 keys_zone=my-test-cache:8m max_size=5000m inactive=300m;

This directive sets the place where the cache is stored, names the zone, and limits how big it can get. We will refer to this zone later on in the proxy cache config.
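Broken down parameter by parameter (the values are just this example's, not recommendations):

nginx.example.caching.conf
# /tmp/cache     - directory where cached responses are written to disk
# levels=1:2     - two-level subdirectory layout, so one directory doesn't hold every file
# keys_zone=my-test-cache:8m - zone name plus shared memory for keys (1m holds roughly 8000 keys)
# max_size=5000m - upper bound on the total size of the cache on disk
# inactive=300m  - entries not requested for 300 minutes get evicted, fresh or not
proxy_cache_path  /tmp/cache levels=1:2 keys_zone=my-test-cache:8m max_size=5000m inactive=300m;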

app cache config

server {
    listen 80;
    server_name example_stale.local;
    root /Users/kuba/Workspace/Ruby/example_stale/public;

    location / {
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header Host $http_host;
      proxy_cache my-test-cache;
      proxy_cache_valid  200 302  1m;
      proxy_cache_valid  404      60m;
      proxy_cache_use_stale   error timeout invalid_header updating;
      proxy_redirect off;

      proxy_pass http://sinatra_rackup;
    }

}

Here we configure the server name for our application, the port it listens on, and the root directory (this is set to a path on my Mac, so you should change it). The whole magic happens in the location block; it contains the few things important for us.

proxy_cache my-test-cache;

This sets which cache zone we will use: the same zone we defined in proxy_cache_path with keys_zone. Next we set how long responses should be cached. I used 1 minute for the 200 and 302 statuses; this lets us watch the cache work on our example app, since every minute we see a new time :). This is awesome! You can also set a different expiry time for other statuses; here we refresh the 404 cache every hour (it could be days :) ).
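proxy_cache_valid can be repeated once per status, and "any" acts as a catch-all for everything not listed (the times below are just illustrative):

nginx.example.caching.conf
proxy_cache_valid  200 302  1m;   # pages that change often
proxy_cache_valid  404      60m;  # missing pages are rechecked hourly
proxy_cache_valid  any      30s;  # short default for everything else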

Last but not the least is serving stale content if upstream is dead.

proxy_cache_use_stale error timeout invalid_header updating;

This enables serving stale content when the upstream is dead. It is nice if you want to keep providing some content even when your backend is down.
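The directive also accepts specific HTTP statuses, so you can serve a stale copy even when the backend is up but erroring (a sketch, pick the statuses that make sense for your app):

nginx.example.caching.conf
proxy_cache_use_stale  error timeout invalid_header updating http_500 http_502 http_503 http_504;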

Test

You can test it now. Or wait… with my config you first have to add an entry to /etc/hosts:

hosts
127.0.0.1 example_stale.local

Now you can go to example_stale.local/ohhai (or just curl example_stale.local/ohhai) and see how our cache works. Even better, you can now kill your app server and still see the cached page being served correctly.
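To see the numbers, you can time the requests with curl (the -w format string is standard curl; the hostname is the one we just added to /etc/hosts):

shell
λ curl -s -o /dev/null -w "%{time_total}s\n" http://example_stale.local/ohhai   # first hit: ~5s, the sleep
λ curl -s -o /dev/null -w "%{time_total}s\n" http://example_stale.local/ohhai   # within the minute: a few ms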

Results

First request (before the cache is warm): ~5 seconds, thanks to the sleep in the app. Next requests (served from cache): a few ms.

–> http://www.youtube.com/watch?v=lgoXUzIwXk0

Cheers

How can you use it? That depends on your app architecture, but for every bit of "static" content your app generates, it is a great thing to have. I like this feature of Nginx and I hope this post will help you ;).
