Among the many good reasons leading more and more Linux System Administrators to migrate from Apache Web Server to Nginx, performance plays a decisive role: not only is Nginx arguably faster, lighter and less resource-intensive than Apache, but it also supports a number of different caching methods that greatly outperform its competitor. To be completely honest, we ought to say that Apache has mod_cache, which isn't bad at all... Yet it's hardly a match - in terms of performance, speed and granular control - when compared to the top two Nginx caching solutions: FastCGI-Cache and Proxy-Cache.
The main difference between the two is the protocol they use to communicate with the backend: FastCGI-Cache caches the output of a PHP-FastCGI backend, while Proxy-Cache deals with upstreams that use HTTP as the backend protocol. In this post we'll deal with the former, leaving the latter to this other article, where we also explained a bunch of basic concepts regarding HTTP proxying and its benefits in terms of scaling, stability, security and performance.
Introducing FastCGI-Cache
FastCGI-Cache is currently considered the most efficient way to implement a dynamic cache mechanism in front of our web server with Nginx: that web server can be Nginx itself or another web server that Nginx will "proxy", catching all the incoming requests, routing them to the upstream PHP-FPM server, getting the resulting HTTP responses and returning them to the caller.
The FastCGI Nginx module has directives for caching the dynamic content served by the PHP backend, thus eliminating the need for additional page caching solutions such as reverse proxies - Nginx Proxy-Cache, Varnish, Squid and the like - or application-specific plugins - W3 Total Cache, WP Super Cache and so on. Content can also be conditionally excluded from caching based on the request method, URL, cookies or any other server variable, just like with any good caching mechanism.
Why Use FastCGI?
FastCGI proxying within Nginx is generally used to translate client requests for an application server that does not or should not handle client requests directly. FastCGI is a protocol based on the earlier CGI (Common Gateway Interface) protocol, which it improves upon mainly by not running each request as a separate process: it is used to efficiently interface with a server that processes requests for dynamic content.
One of the main use-cases of FastCGI proxying within Nginx is PHP processing. Unlike Apache, which can handle PHP processing directly with the mod_php module, Nginx must rely on a separate PHP processor to handle PHP requests. Most often this processing is handled by php-fpm, a dedicated PHP processor that has been extensively tested to work with Nginx. It's worth noting that Nginx with FastCGI can also be used with applications written in other languages, as long as there's an accessible component configured to respond to FastCGI requests.
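To get a better feeling of what "responding to FastCGI requests" means in practice, here's a minimal sketch of how to talk to a PHP-FPM backend directly from the shell, without any web server in between. It assumes that php-fpm is already up and listening on 127.0.0.1:9000 (we'll install it later in this guide) and that the cgi-fcgi test client - provided by the fcgi package available in EPEL - is installed; the path used for SCRIPT_FILENAME is just a placeholder.

# install the cgi-fcgi test client (fcgi package from the EPEL repository)
sudo yum install fcgi

# send a bare FastCGI request to php-fpm, bypassing the web server entirely;
# adjust SCRIPT_FILENAME so that it points to a PHP file that actually exists
SCRIPT_FILENAME=/var/www/yoursite.com/index.php \
REQUEST_METHOD=GET \
cgi-fcgi -bind -connect 127.0.0.1:9000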
Installing the prerequisites
Now that we're done with the introductory part, let's start to install the required software: Nginx itself and PHP 7.1 with the PHP-FPM module and all their respective prerequisites.
Nginx
To install Nginx, type these commands from a terminal:
sudo yum install epel-release
sudo yum install nginx
Answer "Y" to all the questions until the Terminal says that the installation is complete.
Once done, we could start the Nginx service and also have it start automatically on each startup with the following lines:
sudo systemctl start nginx
sudo systemctl enable nginx
However, before doing that, we need to perform some changes to the configuration file (see below).
PHP 7
When going with FastCGI, using PHP 7.1 is the recommended choice. In order to install it, together with PHP-FPM, type the following commands in the Terminal to add the required repositories to YUM:
sudo rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
sudo rpm -Uvh https://mirror.webtatic.com/yum/el7/webtatic-release.rpm
Then type the following to install all the required modules:
sudo yum install -y php71w-cli php71w-common php71w-gd php71w-mbstring php71w-mcrypt php71w-mysqlnd php71w-xml php71w-opcache php71w-fpm
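Again as an optional check, we can make sure that the PHP 7.1 binaries and the extra modules are actually in place:

php -v                                    # should report PHP 7.1.x
php -m | grep -i -E 'opcache|mbstring'    # the installed modules should show up here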
Again, right after the installation process is complete we could start the PHP-FPM service and also have it start automatically on each startup with the following lines:
sudo systemctl start php-fpm
sudo systemctl enable php-fpm
php-fpm will start listening on port 9000 of the local machine, meaning that we'll be able to upstream to it using the 127.0.0.1:9000 loopback address. However, before doing that, we need to perform some changes to the configuration file (see below).
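Once php-fpm is eventually started, a quick way to confirm that it's bound to the expected loopback address and port is the following (ss is part of the iproute package shipped with CentOS 7):

sudo systemctl status php-fpm    # the service should be active (running)
sudo ss -lntp | grep :9000       # should show php-fpm listening on 127.0.0.1:9000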
Nginx Configuration File
Here's a viable general-purpose Nginx configuration file with FastCGI-Cache support. The lines relevant to FastCGI are the ones within the FASTCGI GLOBAL CONFIGURATION block and the fastcgi_* directives inside the PHP location block: if you want to change them, be sure to do it wisely - read the Nginx official documentation for the FastCGI module before doing that.
# ---------------------------------------------------------------------
# NGINX - FastCGI CACHE configuration
# ---------------------------------------------------------------------
# Created by Ryadel on 2017.12.09
# www.ryadel.com
# ---------------------------------------------------------------------

user www;
worker_processes 2;
working_directory /var/www;

error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # rate-limiting zone referenced by the limit_req directive in the PHP
    # location block below; the 16m size and 5r/s rate are reasonable
    # defaults - adjust them to taste
    limit_req_zone $binary_remote_addr zone=limit_req_php:16m rate=5r/s;

    # ---------------------------------------------------------------------
    # FASTCGI GLOBAL CONFIGURATION - START
    # ---------------------------------------------------------------------
    # the lines below must be outside server blocks to enable the FastCGI cache;
    # the path can be anywhere, the zone name must be consistent with the
    # fastcgi_cache directive, and the total size must be small enough to
    # avoid RAM depletion
    fastcgi_cache_path /var/cache/nginx/fastcgi levels=1:2 keys_zone=WORDPRESS:256m inactive=30m;
    fastcgi_cache_key "$scheme$request_method$host$request_uri";
    fastcgi_cache_use_stale error timeout invalid_header http_500;
    fastcgi_ignore_headers Cache-Control Expires Set-Cookie;
    # ---------------------------------------------------------------------
    # FASTCGI GLOBAL CONFIGURATION - END
    # ---------------------------------------------------------------------

    # Custom header that will tell us if the response page has been served
    # from cache (HIT) or from the upstream host (BYPASS or MISS).
    add_header X-FastCGI-Cache $upstream_cache_status;

    server {
        listen 80 default_server;
        server_name www.yourwebsite.com;

        root <path_to_your_website_files>;
        index index.php;
        autoindex off;

        set $skip_cache 0;

        # ---------------------------------------------------------------------
        # CACHE SKIP RULES - START
        # ---------------------------------------------------------------------

        # Do not cache POST requests - they should always go to PHP
        if ($request_method = POST) {
            set $skip_cache 1;
        }

        # Do not cache URLs with a query string - they should always go to PHP
        if ($query_string != "") {
            set $skip_cache 1;
        }

        # WooCommerce-specific cache skip rules (store, cart, checkout, account pages)
        if ($request_uri ~* "/store.*|/cart.*|/my-account.*|/checkout.*|/addons.*") {
            set $skip_cache 1;
            set $skip_cache_reason WP_WooCommerce;
        }

        if ($cookie_woocommerce_items_in_cart) {
            set $skip_cache 1;
            set $skip_cache_reason WP_WooCommerce;
        }

        # Don't cache URIs containing the following segments (admin panel, sitemaps, feeds, etc.)
        if ($request_uri ~* "(/wp-admin/|/xmlrpc.php|/wp-(app|cron|login|register|mail).php|wp-.*.php|/feed/|index.php|wp-comments-popup.php|wp-links-opml.php|wp-locations.php|sitemap(_index)?.xml|[a-z0-9_-]+-sitemap([0-9]+)?.xml)") {
            set $skip_cache 1;
        }

        # Don't use the cache for logged-in users or recent commenters
        if ($http_cookie ~* "comment_author|wordpress_[a-f0-9]+|wp-postpass|wordpress_no_cache|wordpress_logged_in") {
            set $skip_cache 1;
        }

        # ---------------------------------------------------------------------
        # CACHE SKIP RULES - END
        # ---------------------------------------------------------------------

        location / {
            try_files $uri $uri/ /index.php?$args;
        }

        location ~ \.php$ {
            # comment out the following line if php-fpm is hosted on a remote machine
            try_files $uri =404;

            include /etc/nginx/fastcgi.conf;

            limit_req zone=limit_req_php burst=20 nodelay;

            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_split_path_info ^(.+?\.php)(/.*)$;

            fastcgi_cache WORDPRESS;
            fastcgi_cache_valid 200 60m;
            fastcgi_cache_bypass $skip_cache;
            fastcgi_no_cache $skip_cache;

            if (!-f $document_root$fastcgi_script_name) {
                return 404;
            }
        }
    }
}
Notice that we're using the user www here, as it's often the system account reserved for serving web pages: depending on your scenario you might have to replace it with apache, nginx, www-data and so on; just be sure that it has the permissions to access the required folders (see below).
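Before moving on, here's a hedged way to check that the whole setup behaves as expected once the services are up: it assumes the website answers on localhost and that the folders and permissions described in the Setting Permissions paragraph below are already in place.

# validate the configuration file, then start (or reload) the services
sudo nginx -t
sudo systemctl start php-fpm nginx

# the first request should report a MISS (or BYPASS), the second one a HIT
curl -s -o /dev/null -D - http://localhost/ | grep -i x-fastcgi-cache
curl -s -o /dev/null -D - http://localhost/ | grep -i x-fastcgi-cache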
FastCGI directives
Here's an explanation of the main FastCGI-related Nginx directives used in (or relevant to) the above configuration file:
- fastcgi_pass: The actual directive that passes requests in the current context to the backend. This defines the location where the FastCGI processor can be reached.
- fastcgi_param: The array directive that can be used to set parameters to values. Most often, this is used in conjunction with Nginx variables to set FastCGI parameters to values specific to the request.
- fastcgi_split_path_info: This directive defines a regular expression with two captured groups. The first captured group is used as the value for the $fastcgi_script_name variable. The second captured group is used as the value for the $fastcgi_path_info variable. Both of these are often used to correctly parse the request so that the processor knows which pieces of the request are the files to run and which portions are additional information to pass to the script.
- fastcgi_index: This defines the index file that should be appended to $fastcgi_script_name values that end with a slash (/). This is often useful if the SCRIPT_FILENAME parameter is set to $document_root$fastcgi_script_name and the location block is configured to accept requests with info after the file.
- fastcgi_intercept_errors: This directive defines whether errors received from the FastCGI server should be handled by Nginx or passed directly to the client.
Also, we've used a couple of other Nginx directives that are frequently found in FastCGI configurations:
- try_files: We're going to use this directive to make sure that the requested file exists before passing it to the FastCGI processor (the try_files $uri =404; line inside the PHP location block of the above code). It's worth noting that that line won't work properly (404 error) if the file is not stored on this server, which is entirely possible with php-fpm/php-fcgi: it's very important to comment that line out if we've set up php-fpm/php-fcgi on another machine.
- include: We can optionally use this directive to include common, shared configuration blocks in multiple locations to overcome the nasty issues caused by the odd Nginx inheritance model for array-type directives (see this ServerFault answer for further details); a quick way to see what the include actually brings in is shown right below.
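If you're curious about which FastCGI parameters the include directive is actually pulling in, you can simply inspect the included file from the shell - a quick check, assuming the stock /etc/nginx/fastcgi.conf and /etc/nginx/fastcgi_params files shipped with the Nginx package:

# list the fastcgi_param entries that the include passes to the backend
grep fastcgi_param /etc/nginx/fastcgi.conf

# compare with the alternative fastcgi_params file: the stock fastcgi.conf
# typically adds the SCRIPT_FILENAME parameter on top of it
diff /etc/nginx/fastcgi_params /etc/nginx/fastcgi.conf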
Opening the Firewall Port(s)
The default CentOS firewall rules do not allow inbound HTTP/HTTPS traffic, hence it's necessary to open up some TCP ports for a web server such as Nginx to accept connections from the outside. How it can be done depends on the firewall our CentOS machine is actually using: firewalld or iptables.
Firewalld
These are the shell commands to open up Firewalld (assuming that the public zone has been assigned to the WAN network interface):
sudo firewall-cmd --permanent --zone=public --add-service=http
sudo firewall-cmd --permanent --zone=public --add-service=https
sudo firewall-cmd --reload
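To double-check that the new rules are in place, we can list the services allowed on the public zone:

sudo firewall-cmd --zone=public --list-services   # http and https should now be listed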
Iptables
These are the rules to set for Iptables (assuming that we want to accept traffic coming from the eth0 network interface):
iptables -I INPUT 5 -i eth0 -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -I INPUT 5 -i eth0 -p tcp --dport 443 -m state --state NEW,ESTABLISHED -j ACCEPT
service iptables save
systemctl restart iptables
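Here as well, we can review the resulting INPUT chain to confirm that the two new rules have been added:

sudo iptables -L INPUT -n --line-numbers | grep -E 'dpt:(80|443)'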
Setting Permissions
Before firing the engines and starting Nginx, we need to ensure that the Linux user we're using for the web-server-related tasks (www, apache, nginx, www-data or any other), together with its relevant group, will be able to access the required www, Nginx cache and PHP-FPM cache folders. Here are the folders you need to check (see the example right after this list):
- Your website(s) path, such as /var/www/yoursite.com
- The Nginx cache folder: /var/cache/nginx
- The Nginx cache folder specific to FastCGI, which is the value of the fastcgi_cache_path directive: in our example, /var/cache/nginx/fastcgi
- The Nginx temporary folders: /var/lib/nginx and /var/lib/nginx/tmp (IMPORTANT: you need to set permissions on both of them - see this ServerFault thread).
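Here's a hedged example of how those permissions could be granted, assuming the www user and group from the configuration file above and a website deployed in /var/www/yoursite.com - adapt the account name and the paths to your actual setup:

# website files
sudo chown -R www:www /var/www/yoursite.com

# Nginx cache folder (the FastCGI cache path declared above lives inside it)
sudo chown -R www:www /var/cache/nginx

# Nginx temporary folders (the -R switch also covers /var/lib/nginx/tmp)
sudo chown -R www:www /var/lib/nginx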
That's it for now: if something goes wrong with your configuration, feel free to contact us and we'll be happy to help you figure out what went wrong.
Further references
- PHP FastCGI Example from nginx.com.
- How to configure Nginx FASTCGI Cache from scalescale.com.
- Understanding and Implementing FastCGI Proxying in Nginx from digitalocean.com.