NGINX makes it possible to remove outdated cached files from the cache. Removing outdated content is necessary to avoid serving old and new versions of a web page at the same time. The cache is purged (cleaned) upon receiving a special "purge" request that contains either a custom HTTP header or the HTTP PURGE method.

Configuring Cache Purge

Let's set up a configuration that identifies requests using the HTTP PURGE method and deletes the matching cached URLs.

1. In the http {} context, add a new variable, for example, $purge_method, that depends on the $request_method variable:

http {  

    ...  

    map $request_method $purge_method {  

        PURGE 1;  

        default 0;  

    }  

}  
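As noted above, the purge condition can also key off a custom HTTP header instead of the PURGE method. A minimal sketch that could be used instead of the map above (the X-Purge header name is an assumption for illustration, not something NGINX defines):

```
# Hypothetical alternative: trigger purging with an "X-Purge: 1" request header
map $http_x_purge $purge_method {
    default 0;
    "1"     1;
}
```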

2. In the location {} block where caching is configured, add the proxy_cache_purge directive to specify a condition for cache-purge requests. In our example, it is the $purge_method variable configured in the previous step:

server {  

    listen      80;  

    server_name www.example.com;  

  

    location / {  

        proxy_pass  https://localhost:8002;  

        proxy_cache mycache;  

  

        proxy_cache_purge $purge_method;  

    }  

}  

Sending the Purge Command

Once the proxy_cache_purge directive is configured, we need to send a special cache-purge request to purge the cache. We can send purge requests with a range of tools, including the curl command as in this example:

$ curl -X PURGE -D - "https://www.example.com/*"  

HTTP/1.1 204 No Content  

Server: nginx/1.15.0  

Date: Sat, 19 May 2018 16:33:04 GMT  

Connection: keep-alive  

In the above example, the resources that have a common URL part, specified by the asterisk wildcard, are purged. However, such cache entries are not completely removed from the cache: they remain on disk until they are deleted either for inactivity, by the cache purger (enabled with the purger parameter of proxy_cache_path), or when a client attempts to access them.

Restricting Access to the Purge Command

We recommend that you restrict the IP addresses that are allowed to send cache-purge requests:

geo $purge_allowed {  

   default         0;  # deny from any other address  

   10.0.0.1        1;  # allow from 10.0.0.1  

   192.168.0.0/24  1;  # allow from the 192.168.0.0/24 network  

}  

  

map $request_method $purge_method {  

   PURGE   $purge_allowed;  

   default 0;  

}  

In the above example, NGINX checks whether the PURGE method is used in a request and, if so, analyzes the client IP address. If the IP address is whitelisted, $purge_method is set to the value of $purge_allowed: 1 permits purging, and 0 denies it.
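The combined effect of the geo and map blocks can be sketched as plain shell logic (the request method and client IP here are hypothetical values, chosen to exercise the allow path):

```shell
# Sketch of how NGINX derives $purge_method from the geo + map blocks above
request_method="PURGE"   # hypothetical request method
client_ip="10.0.0.1"     # hypothetical client address

# geo $purge_allowed: allow the two whitelisted sources, deny the rest
case "$client_ip" in
  10.0.0.1|192.168.0.*) purge_allowed=1 ;;
  *)                    purge_allowed=0 ;;
esac

# map $request_method $purge_method: only PURGE inherits $purge_allowed
if [ "$request_method" = "PURGE" ]; then
  purge_method=$purge_allowed
else
  purge_method=0
fi

echo "$purge_method"   # 1: purging is permitted for this request
```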

Completely Removing Files from the Cache

To completely remove cache files that match a wildcard key, activate a special cache purger process that continuously iterates through all cache entries and deletes the entries that match the wildcard key. Add the purger parameter to the proxy_cache_path directive in the http {} context:

proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=mycache:10m purger=on;  

Cache Purge Configuration Example

http {  

    ...  

    proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=mycache:10m purger=on;  

  


    server {  

        listen      80;  

        server_name www.example.com;  

  

        location / {  

            proxy_pass        https://localhost:8002;  

            proxy_cache       mycache;  

            proxy_cache_purge $purge_method;  

        }  

    }  

  

    geo $purge_allowed {  

       default         0;  

       10.0.0.1        1;  

       192.168.0.0/24  1;  

    }  

  

    map $request_method $purge_method {  

       PURGE   $purge_allowed;  

       default 0;  

    }  

}  

Byte-Range Caching

Sometimes the initial cache fill operation takes quite a long time, especially for large files. For example, when a video file starts downloading to fulfill the initial request for part of the file, subsequent requests have to wait until the entire file is downloaded and put into the cache.

In NGINX, it is possible to cache such range requests and gradually fill the cache with the Cache Slice module, which divides files into smaller "slices". Each range request chooses the particular slices that cover the requested range and, if this range is not yet cached, puts them into the cache. All subsequent requests for these slices take the data from the cache.

To enable byte-range caching:

First of all, make sure NGINX is compiled with the Cache Slice module (the --with-http_slice_module configure option).

Define the size of the slice with the slice directive:

location / {  

    slice  1m;  

}  

Choose a slice size that makes slice downloading fast. If the size is too small, memory usage might be excessive and a large number of file descriptors will be opened; if the size is too large, processing the request might cause latency.

Add the $slice_range variable to the cache key:

proxy_cache_key $uri$is_args$args$slice_range;  

Enable caching of responses with the 206 (Partial Content) status code:

proxy_cache_valid 200 206 1h;  

Enable passing of range requests to the proxied server by setting the $slice_range variable in the Range header field:

proxy_set_header  Range $slice_range;  

Here is the full configuration:

location / {  

    slice             1m;  

    proxy_cache       cache;  

    proxy_cache_key   $uri$is_args$args$slice_range;  

    proxy_set_header  Range $slice_range;  

    proxy_cache_valid 200 206 1h;  

    proxy_pass        http://localhost:8000;  

}  
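To get a feel for the slice-size tradeoff discussed above, a quick back-of-the-envelope calculation (the 700 MiB file size is an assumption for illustration):

```shell
# Number of cache entries "slice 1m;" creates for one large file
file_size=$((700 * 1024 * 1024))   # assumed 700 MiB video
slice_size=$((1024 * 1024))        # 1m, matching the slice directive above
slices=$(( (file_size + slice_size - 1) / slice_size ))
echo "$slices"   # 700 slices, i.e. up to 700 cache files on disk for this one URL
```

A smaller slice size multiplies this count (and the file descriptors needed), while a larger one delays the first byte of each slice.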

Note that if slice caching is turned on, the initial file must not be changed.

Combined Configuration Example

http {  

    ...  

proxy_cache_path /data/nginx/cache keys_zone=mycache:10m loader_threshold=300   

                 loader_files=200 max_size=200m;  

  

    server {  

        listen 8080;  

        proxy_cache mycache;  

  

        location / {  

            proxy_pass http://backend1;  

        }  

  

        location /some/path {  

            proxy_pass http://backend2;  

            proxy_cache_valid any 1m;  

            proxy_cache_min_uses 3;  

            proxy_cache_bypass $cookie_nocache $arg_nocache$arg_comment;  

        }  

    }  

}
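In the /some/path location above, proxy_cache_bypass skips the cache whenever at least one of its parameters is non-empty and not equal to "0". A shell sketch of that evaluation (the variable values are hypothetical):

```shell
# Evaluate proxy_cache_bypass $cookie_nocache $arg_nocache$arg_comment
cookie_nocache=""        # no "nocache" cookie sent
arg_nocache="1"          # the request carried ?nocache=1
arg_comment=""           # no "comment" query argument

bypass=0
for v in "$cookie_nocache" "${arg_nocache}${arg_comment}"; do
  # a parameter triggers a bypass if it is non-empty and not "0"
  if [ -n "$v" ] && [ "$v" != "0" ]; then
    bypass=1
  fi
done
echo "$bypass"   # 1: this request bypasses the cache
```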
