How To Migrate a LAMP Stack

Sooner or later, your operations are going to need to scale out to bigger and better servers. You’ll have to migrate your application to new hardware. Here’s how to do it. This is more of a checklist than a full guide, as a full guide would be far too specific to your requirements. We’ll be assuming a typical LAMP stack, along with email hosting, as this is a very common combination among small web startups.

First and foremost, take inventory of what needs to be moved, and make a full backup of absolutely everything. Due to hardware differences, and hosting company requirements, it probably won’t be possible to simply restore your backup on the new server.

Go into your DNS provider's settings and lower your site's TTL to one hour (or the minimum your provider allows). This means it will take at most one hour for users to see the new DNS entries when you make the migration. Once you're done moving, be sure to change it back.
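
If you want to verify what TTL is currently being served, dig shows it as the second field of each answer (example.tld is a placeholder for your domain):

dig +noall +answer example.tld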

We will be doing this move in two stages: the first stage moves the services and configurations and allows for testing; the second stage shuts down the previous server for good and migrates all of the application data to its new host.

For the purposes of this tutorial, we will assume a Debian-based system, and that you'll be working as root the entire time. These commands are mostly distribution-agnostic; however, some file locations might be Debian-specific.

To make a full backup, first stop all of the services that write to the data you're about to back up (the web server, mail services, and the database itself).

service apache2 stop
service postfix stop
service dovecot stop
service mysql stop

Then do the backup. We’re not going to compress it at this time, as compressing takes a very long time, and during this time your services are offline. We’ll also take this time to make proper SQL backups. Make sure you have enough room in the root partition to hold all of this, or put them in a place where there is enough room (such as under /var or /home).
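
A quick way to see how much room each filesystem has:

df -h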

mysqldump -u root -p reallyimportantdatabase > /reallyimportantdatabase.sql
mysqldump -u root -p someotherdatabase > /someotherdatabase.sql
tar cvpf /backup.tar \
--exclude=/proc --exclude=/dev \
--exclude=/lost+found --exclude=/backup.tar \
--exclude=/mnt --exclude=/sys /

Once it is done, start your services back up again.

service apache2 start
service postfix start
service dovecot start
service mysql start

Check the size of the backup. If it is less than, say, 10GB, don't bother compressing it, as you'll spend more time compressing and decompressing than you will transferring it. If it is significantly over 10GB, I suggest using pbzip2 to compress it, a backwards-compatible parallel implementation of bzip2. Its speed scales nearly linearly with how many cores you have available: if a bzip2 operation would take ten minutes and you have two cores, it will take about five minutes with pbzip2. You can get it at http://compression.ca/pbzip2/, which also contains usage examples. It is in many distributions' repositories, so try those first.

pbzip2 /backup.tar
scp /backup.tar.bz2 admin@new.server.tld:/home/admin

We’re now done with the old server for the rest of Stage One. All of the following instructions will take place on the new server, until we’re ready for Stage Two. Make sure you take the time to install your preferred management utilities such as GNU Screen and vim before proceeding. Also be sure to set up networking properly.

Let’s unpack the backup that we transferred over.

cd /home/admin
mkdir oldmachine
cd oldmachine
mv ../backup.tar.bz2 .
pbzip2 -d backup.tar.bz2
tar xvf backup.tar

While that's going, now's a good time to install the services and support utilities that were on the old server. This list will vary wildly depending on your needs, but a minimal command for Debian might look a bit like this:

apt-get install fail2ban postfix postfix-mysql \
dovecot-imapd dovecot-pop3d dovecot-mysql \
apache2 libapache2-mod-php5 php5-apc \
php5-gd php5-mcrypt php5-mysql mysql-server

Once all of that is done, we’ll move the stock configurations out of the way in preparation for restoring the old configurations.

cd /etc
mv dovecot dovecot.dist
mv postfix postfix.dist
mv php5 php5.dist
mv mysql mysql.dist
mv apache2 apache2.dist

Now copy all of the old configurations and support files from the backup to their new home. This list depends on your exact setup, but it should give you a general idea of what to look for. Note that we are not copying the SQL database files during this step; they will be restored in their proper manner in just a bit. Don't forget to copy over any SSL keys you may have!

cd /home/admin/oldmachine
cp -a var/www/* /var/www/
cd etc
cp -a dovecot /etc
cp -a postfix /etc
cp -a php5 /etc
cp -a mysql /etc
cp -a apache2 /etc
cd ../var
cp -a mail /var

Now we hope for the best and restart our services. Everything should go smoothly, but you never know. If your distribution changed, there will likely be problems with minor things like the location of pid files. These things should be very easy to fix; just be sure to take a look at your logs.

service apache2 restart
service postfix restart
service dovecot restart
service mysql restart

Now it’s time to import your databases the proper way.
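
Note that per-database dumps like the ones we made do not contain CREATE DATABASE statements, so if the databases (or your application's MySQL user) don't exist yet on the new server, create them first. A minimal sketch; "appuser" and "changeme" are placeholders for your application's actual user and password:

mysql -u root -p -e "CREATE DATABASE reallyimportantdatabase"
mysql -u root -p -e "CREATE DATABASE someotherdatabase"
# "appuser"/"changeme" are placeholders; match your application's configuration
mysql -u root -p -e "GRANT ALL ON reallyimportantdatabase.* TO 'appuser'@'localhost' IDENTIFIED BY 'changeme'"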

mysql -u root -p reallyimportantdatabase < /home/admin/oldmachine/reallyimportantdatabase.sql
mysql -u root -p someotherdatabase < /home/admin/oldmachine/someotherdatabase.sql

Stage One is now complete. Test absolutely everything, then test it some more. Once you're done testing, test it all again. This is your only chance to get things right without incurring undue downtime on your users, so make use of it. The exact method of testing depends on your application, but it will at the least involve some fake entries in your local machine's hosts file.
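
For example, with a (hypothetical) new server address of 203.0.113.10, an entry like this in your workstation's /etc/hosts sends your own requests to the new machine while everyone else still sees the old one:

# 203.0.113.10 is a placeholder for the new server's real IP
203.0.113.10    your.application.tld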

Now that you are satisfied that your new server is working properly, head over to your old machine and put your application in some sort of maintenance mode. The idea is to present the users with a page saying “we’re moving servers, thank you for your patience”, and for the application to not write to the database at all.

Now we back up and move the application’s database one final time.

mysqldump -u root -p reallyimportantdatabase > /reallyimportantdatabase.sql
scp /reallyimportantdatabase.sql admin@new.server.tld:/home/admin

And then we import it on the new machine.

mysql -u root -p reallyimportantdatabase < /home/admin/reallyimportantdatabase.sql

The last step is to move your DNS to point at the new server. Don't forget to change your SPF records to reflect the new email server's IP address. That's it! You're done! To make sure it worked, keep a close eye on your Apache access logs.

tail -f /var/log/apache2/access.log
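
As an aside, an SPF TXT record referencing the new mail server's (hypothetical) address might look like this in your DNS zone:

; 203.0.113.10 is a placeholder for the new mail server's IP
example.tld. IN TXT "v=spf1 mx ip4:203.0.113.10 -all"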

MySQL to MariaDB Migration

Let's migrate from MySQL to MariaDB, a popular alternative to MySQL that was initially forked from it in January of 2009. We can make this transition quickly, but not without some downtime, as we can't have both databases working on the same files simultaneously. These instructions are good for all popular distributions of GNU/Linux; however, Debian 7 (Wheezy) is used in the reference code shown.

This process only works reliably if you are running the same major version of MySQL as MariaDB. Currently this means you must be running MySQL 5.5 and intend on moving to MariaDB 5.5.

First and foremost, shut down all processes that use MySQL. You can't move the database around while it's in use. Doing this is far beyond the scope of this article, but since it is your system, hopefully you have a good idea of what is running on your server and can shut processes down appropriately. It is advised to stop MySQL by hand, rather than depending on your package manager to do it for you.

# service apache2 stop
# service nginx stop
# service mysql stop

Next, make a simple mysqldump of your current databases. These commands dump every SQL database you have to a single file. Make sure you do this on a partition big enough to hold your data. Doing this in /tmp is a very bad idea.

# cd /backups
# mysqldump -u root -p --all-databases > mysqlbackup.sql
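
Keep that dump around until you're confident in the migration; should anything go wrong later, it can restore everything in one shot:

# mysql -u root -p < /backups/mysqlbackup.sql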

We are now done with MySQL. Use your package manager to remove it. Do not worry about associated libraries, as MariaDB is a drop-in replacement that remains compatible at the API layer. If your package manager tries to uninstall half the system, cancel the operation and proceed to the next step. apt-based distributions will probably want to remove all kinds of things; since little else is installed on our test system, that doesn't happen here, and the removal shown in our example is safe.

# apt-get remove mysql-server-core-5.5 mysql-server-5.5 mysql-server mysql-common mysql-client-5.5 libmysqlclient18
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following package was automatically installed and is no longer required:
  libaio1
Use 'apt-get autoremove' to remove it.
The following packages will be REMOVED:
  libdbd-mysql-perl libmysqlclient18 mysql-client-5.5 mysql-common
  mysql-server mysql-server-5.5 mysql-server-core-5.5 php5-mysql
0 upgraded, 0 newly installed, 8 to remove and 0 not upgraded.
After this operation, 94.8 MB disk space will be freed.
Do you want to continue [Y/n]? y

Next, add the MariaDB repositories, and install it. Instructions for your distribution can be found at:
https://downloads.mariadb.org/mariadb/repositories/

If, in the prior step, your package manager wanted to uninstall everything, that means it wants to do this upgrade in-place. Installing the new MariaDB packages will simply overwrite the old MySQL packages, as your package manager sees it as any other upgrade.

# apt-get install mariadb-server
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
  libdbd-mysql-perl libmariadbclient18 libmysqlclient18 mariadb-client-5.5
  mariadb-client-core-5.5 mariadb-common mariadb-server-5.5
  mariadb-server-core-5.5 mysql-common
Suggested packages:
  tinyca mariadb-test
The following NEW packages will be installed:
  libdbd-mysql-perl libmariadbclient18 libmysqlclient18 mariadb-client-5.5
  mariadb-client-core-5.5 mariadb-common mariadb-server mariadb-server-5.5
  mariadb-server-core-5.5 mysql-common
0 upgraded, 10 newly installed, 0 to remove and 0 not upgraded.
Need to get 8,794 B/31.1 MB of archives.
After this operation, 108 MB of additional disk space will be used.
Do you want to continue [Y/n]? y

Some package systems start what they install automatically, but some don't. If yours doesn't, start up MariaDB now, and verify its sanity.

# mysql -u root -p -Be 'show databases'
Enter password:
Database
information_schema
drupal
mysql
performance_schema
test

This next part won't be pretty, but it shouldn't be tricky. The configuration has changed considerably between MySQL and MariaDB; however, it is extremely easy to work through. Almost everything that changed is related to mechanisms that have been replaced, such as how replication works. You should be safe in simply copying over the performance tuning options you set in MySQL's my.cnf and reconfiguring the rest by hand. For small databases with less intricate setups, it is likely that you won't need to do anything more than copy the performance options that you've changed. For reference, here are the sorts of options you might carry over:

bind-address            = 127.0.0.1

max_connections         = 10
connect_timeout         = 30
wait_timeout            = 600
max_allowed_packet      = 16M
thread_cache_size       = 256
sort_buffer_size        = 16M
bulk_insert_buffer_size = 16M
tmp_table_size          = 64M
max_heap_table_size     = 64M

Start up MariaDB and verify its sanity once again. Once you've done that, stop MariaDB, then start it up once more as you normally would through your init system. Note that MariaDB's service shares the same name as MySQL's for compatibility.

# service mysql restart
Stopping MariaDB database server: mysqld.
Starting MariaDB database server: mysqld . . .
Checking for corrupt, not cleanly closed and upgrade needing tables..
# mysql -u root -p -Be 'show databases'
Enter password:
Database
information_schema
drupal
mysql
performance_schema
test

Lastly, start up the applications that use a SQL database.

# service apache2 start
# service tomcat start

At this point you’re done! There is no conversion of your databases when switching to MariaDB, so if for whatever reason you don’t like it, you can freely switch back to MySQL.

# service mysql stop
# apt-get remove mariadb-server-5.5 mariadb-common mariadb-client-5.5 libmariadbclient18
# apt-get install mysql-server

VirtualBox Headless Administration

VirtualBox is often used on the desktop through its GUI. Sometimes, though, you'll need to run it on a headless server somewhere on the internet. Our demonstration setup is on a Debian system, and the virtual machine we're creating will use a placeholder of $VM for its name. We'll assume the guest OS is Debian, and that you've already installed VirtualBox itself on the host.

VirtualBox has one of the best manuals, even if it is a hundred pages long. Chapter 8 is of particular interest to us. You can find the manual here:
http://www.virtualbox.org/manual/

Installation instructions can be found here:
https://www.virtualbox.org/wiki/Linux_Downloads

We'll need the VirtualBox Extension Pack to be able to use RDP to initially set up the guest OS. Download it from https://www.virtualbox.org/wiki/Downloads, and install it like so. Root is required; however, this only needs to be done once on the host.

sudo VBoxManage extpack install Oracle_VM_VirtualBox_Extension_Pack-4.2*.vbox-extpack

To make it easy to copy & paste these commands, set $VM to the name of the guest virtual machine. If you are using the bash shell, it goes like this:

VM=spaceball

We'll start with creating a virtual machine. If you are running on a 32-bit host, drop the "_64" part. The exact --ostype setting doesn't really matter, but it does set a few sane defaults, so it's best to use an appropriate one if at all possible.

VBoxManage createvm --name $VM --ostype "Debian_64" --register

If you aren’t using Debian, you can run this to get a list of OS types:

VBoxManage list ostypes

Next we need to give it a hard drive. Make sure to adjust the argument to --size as needed. In this example, it is set to 10GB. The first line creates the file that is the drive itself, the second line adds a SATA controller to the VM, and the last line plugs the hard drive into the controller. You can add additional drives by incrementing the argument to --port.

VBoxManage createhd --filename $VM.vdi --size 10240
VBoxManage storagectl $VM --name "SATA Controller" --add sata --controller IntelAHCI
VBoxManage storageattach $VM --storagectl "SATA Controller" --type hdd --medium $VM.vdi --port 0

We'll need to boot from an ISO image to be able to install our guest operating system, so let's add a DVD drive and attach an image to it. VirtualBox does not support SATA DVD drives, so we make an IDE controller to attach it to.

VBoxManage storagectl $VM --name "IDE Controller" --add ide
VBoxManage storageattach $VM --storagectl "IDE Controller" --port 0 --device 0 --type dvddrive --medium debian-7.1.0-amd64-netinst.iso

Now we need to tell it the boot order. After this, you can go ahead and start the VM if you really want to, but it is a bad idea, as it hasn’t been tuned to run well yet, and we haven’t set up networking.

VBoxManage modifyvm $VM --boot1 dvd --boot2 disk --boot3 none --boot4 none

While at this point it is now possible to start it up, it’s best to tune it a bit so it makes proper use of the host hardware. For a full explanation of these options, please see the VirtualBox manual.

VBoxManage modifyvm $VM --ioapic on --hwvirtex on  --nestedpaging on --hwvirtexexcl on --pagefusion on --largepages on --acpi on

Now to set up networking. For this example, we’ll assume that you want a VM that can function as a full fledged server on the Internet. This means we want bridged networking. The exact details of this will change based on your host OS, for example under FreeBSD you’ll be using tun/tap interfaces.

VBoxManage modifyvm $VM --nic1 bridged --bridgeadapter1 eth0

We should probably give it more memory than the default. For the OS type of “Debian_64” it defaults to 384MB, so it would be a good idea to change it. Here we set it to 4GB.

VBoxManage modifyvm $VM --memory 4096

If your host has multiple CPU cores, we can take advantage of that. We can also tell VirtualBox to not use more than some percent of CPU time. This is a good idea on shared hosts. It defaults to 100%, so without setting it, VirtualBox will gladly allow a guest VM to use all the cycles it can.

VBoxManage modifyvm $VM --cpus 4
VBoxManage modifyvm $VM --cpuexecutioncap 80

Lastly, we should change the RDP port. Later, once the VM is pushed into production, we will want to disable RDP entirely for security.

VBoxManage modifyvm $VM --vrdeport 12345
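
When that time comes, turning the RDP (VRDE) server off entirely is a single command:

VBoxManage modifyvm $VM --vrde off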

At this point we are now done setting up the new virtual machine. Time to start it up!

VBoxHeadless --startvm $VM

Connect to it with your favorite Remote Desktop viewer, and install however you want.
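
Once the installation is finished, you'll probably want to eject the install media so the VM boots from its new hard drive. A sketch, using the same controller name we created above:

VBoxManage storageattach $VM --storagectl "IDE Controller" --port 0 --device 0 --type dvddrive --medium emptydrive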

We're done setting up a new virtual machine. There are a few common tasks left to go over, such as safely shutting down, suspending, resetting, and cloning. Do not ctrl-C the VirtualBox process; run these from a different terminal. Here they are, in order.

VBoxManage controlvm $VM acpipowerbutton
VBoxManage controlvm $VM savestate
VBoxManage controlvm $VM reset
VBoxManage clonevm $VM --name "New VM Name" --register

If your VM locks up, you can “pull the plug” with this command (use this as a last resort):

VBoxManage controlvm $VM poweroff

Once your VM farm starts to grow a bit, you might need a refresher as to what is what. The first command lists all registered VMs, and the second command shows the configuration of the named VM.

VBoxManage list vms
VBoxManage showvminfo $VM

Limits and Restrictions With nginx

nginx offers several ways to limit and restrict incoming requests so that your hosting servers aren’t overloaded. It is generally better to limit incoming connections than to drop them entirely, so here’s one way to do it.

The stock nginx module "ngx_http_limit_req_module" limits incoming requests on a requests-per-time-frame basis. In this example we'll limit it to 1 request per second.

Slap this in your /etc/nginx/nginx.conf:

http {
	limit_req_zone  $binary_remote_addr  zone=somezone:10m   rate=1r/s;
}

…and the zone will be defined. The "somezone:10m" is an identifier for keeping state. Multiple servers can share the same zone, or have separate zones. "10m" refers to a 10 megabyte maximum; the state will be purged if it grows larger than that. Note that defining the zone alone doesn't limit anything; you must also reference it with the limit_req directive from an http, server, or location block, as shown below.
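
A minimal sketch of applying the zone inside a server block; the burst value here is illustrative:

server {
	location / {
		# Allow short bursts of up to 5 requests beyond the 1r/s rate.
		limit_req zone=somezone burst=5;
	}
}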

Apache mod_rpaf And You

To get Apache to behave when it is running as the back end of any sort of load balancing mechanism (be it nginx, squid, or anything else) you will need a module that knows how to handle X-Forwarded-For and other headers correctly. Grab it from https://github.com/gnif/mod_rpaf or from your trusty package manager.

apt-get install libapache2-mod-rpaf

The default configuration in Debian is acceptable if your load balancer is running locally (such as for caching or routing purposes); otherwise you will want to edit the "RPAFproxy_ips" setting to contain the hosts that are doing the load balancing. Also ensure that you set "RPAFheader" to "X-Forwarded-For" to get the desired effect. Once you are done, restart Apache and check its logs to see if it is recognizing the IPs of incoming connections correctly. Instead of all hits being from localhost, they should now show up as being from the actual client IP.
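
A minimal sketch of what /etc/apache2/mods-available/rpaf.conf might look like; the 203.0.113.5 address is a placeholder for your load balancer:

<IfModule mod_rpaf.c>
    # 203.0.113.5 is a placeholder; list your real balancer IPs here.
    RPAFenable On
    RPAFsethostname On
    RPAFproxy_ips 127.0.0.1 203.0.113.5
    RPAFheader X-Forwarded-For
</IfModule>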

The best part of mod_rpaf is that you do not need to configure your software stack to handle the X-Forwarded-For header. It is by far the fastest way to add this support to your software stack.

Fixing signature verification errors in apt-get

So you run your apt-get update after adding a PPA or other repository, and you come across a warning like this:

W: GPG error: http://toolbelt.heroku.com ./ Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY C927EBE00F1B0520
W: GPG error: http://ftp.osuosl.org wheezy Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY CBCB082A1BB943DB
W: GPG error: http://ppa.launchpad.net precise Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY D46F45428842CE5E

While these are just warnings and can be ignored, doing so is irresponsible, as it means that apt cannot verify the packages it downloads as being signed by the authors. Most repositories give instructions on how to download and import the PGP key, but not all do. So here's a foolproof method of obtaining the key and importing it into apt.

For this example we’ll use the last one, which is the key for the official Bitcoin client PPA for Ubuntu. It (the Precise Pangolin branch) also works fine under Debian Wheezy.

The first step is to get the key in to your keyring.

root@debian:~# gpg --keyserver hkp://subkeys.pgp.net --recv-keys D46F45428842CE5E
gpg: directory `/root/.gnupg' created
gpg: new configuration file `/root/.gnupg/gpg.conf' created
gpg: WARNING: options in `/root/.gnupg/gpg.conf' are not yet active during this run
gpg: keyring `/root/.gnupg/secring.gpg' created
gpg: keyring `/root/.gnupg/pubring.gpg' created
gpg: requesting key 8842CE5E from hkp server subkeys.pgp.net

gpg: /root/.gnupg/trustdb.gpg: trustdb created
gpg: key 8842CE5E: public key "Launchpad PPA for Bitcoin" imported
gpg: no ultimately trusted keys found
gpg: Total number processed: 1
gpg:               imported: 1  (RSA: 1)

The first several lines will only show if this is your first time doing anything PGP related, and are of no concern to us. The rest of the output shows that we received the key from the keyserver; however, it isn't marked as trusted by us. Marking it as trusted is optional and provides no actual benefit, unless you are able to personally verify that the key is correct (as in, you walked up to the key maintainer and exchanged keys in person, or used some other highly secure method of verification). The important part is that you are now protected from man-in-the-middle attacks, so long as the signing key isn't compromised.

Now we need to add the key to apt's key store. This is very easy.

root@debian:~# gpg --export --armor D46F45428842CE5E | apt-key add -
OK

Basically we export the key from GnuPG in a plain text format supported by apt-key, to stdout, then have apt-key read the key from stdin.

Do this entire process for each missing public key, and the warnings will be fixed and your system will be secured against a potential attack vector.
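
As an aside, apt-key can pass arguments straight through to gpg, fetching a key and adding it to apt's keyring in one step:

root@debian:~# apt-key adv --keyserver hkp://subkeys.pgp.net --recv-keys D46F45428842CE5E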

Basic SSH Tunnels

SSH tunnels are incredibly handy for any time you need to pipe a single connection from one place to another, in a secure fashion. One of the best reasons to choose a SSH tunnel over other options (such as a proper VPN) is that you can easily tunnel a port running on localhost without needing to reconfigure the daemon. You also do not need root access to set up a SSH tunnel, so long as a privileged port is not involved. For this example we'll tunnel the MySQL port so that we can access the database as though it was running locally, such as you might do in production.

mngrif@kosh:~$ ssh lennier.info -N -L 3306:127.0.0.1:3306

The first port listed is the local port to bind to. In this instance it matches the remote port, since we want the tunnel to look identical to a local MySQL, but there are plenty of reasons why you'd want it on a different port. The "-N" argument means no command is executed on the remote machine, such as spawning a shell; it isn't needed for a tunnel. Additionally you can add on the "-f" argument, which will background this ssh process; this might be handy for more permanent tunnels. If you use "-f", you'll need to kill the process once you're done, rather than just giving it a ^C.
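
For instance, if something is already listening on your workstation's port 3306, bind the tunnel to a different local port (3307 here is arbitrary) and point your client at it:

mngrif@kosh:~$ ssh lennier.info -N -L 3307:127.0.0.1:3306
mngrif@kosh:~$ mysql -u root -p -h 127.0.0.1 -P 3307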

Here's an example of this in action:

mngrif@kosh:~$ ssh lennier.info -N -f -L 3306:127.0.0.1:3306
mngrif@kosh:~$ mysql -u root -p -h 127.0.0.1
Enter password: 
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 51239
Server version: 5.5.36-MariaDB-1~wheezy-log mariadb.org binary distribution

Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> Bye
mngrif@kosh:~$ ps x|grep ssh
15630 ?        Ss     0:00 ssh lennier.info -N -f -L 3306:127.0.0.1:3306
mngrif@kosh:~$ kill 15630

Basic nginx Setup and Configuration

nginx is a powerful and modern HTTP server. It is perhaps most commonly used as a reverse proxy, also known as a load balancer or front end proxy. It follows the UNIX philosophy of doing one thing, and doing it well, and as such, it relies on several helper daemons to become a full-featured web server such as Apache. For example, to serve PHP, it relies on php-fpm to do the processing, while nginx itself handles the caching and speaks the HTTP protocol. In this article, we will talk about common configuration options and how they relate to nginx's performance. We'll also discuss some basic administrative tasks.

The configuration is organized into various sections, such as events, http, and server. "events" defines global settings such as event models and some connection handling. The "http" section defines variables concerning the HTTP protocol itself, such as keepalive timeouts. "server" is where you define individual sites to be cached, proxied, and served. There is also a "main" section, which has no definition block.

The exact location and structure of your nginx configuration depends on your distribution of Linux or BSD, but it is most likely under /etc/nginx or /usr/local/etc/nginx. Some distributions like to separate out the "server" blocks into individual files in their default setups. While not required, this certainly makes adding and removing sites a lot easier, and it makes it easier to automate.

We'll start with the "main" section. A typical section looks like this:

user www-data;
worker_processes 4;
pid /var/run/nginx.pid;

"user" defines the user that the server runs as, "worker_processes" defines how many nginx instances actually run, and "pid" refers to where it stores the process ID of the master process. Fairly standard stuff for any daemon. It is of course important to ensure the nginx process can read the files to be served, so pay close attention to file ownership. "worker_processes" should be initially set to the number of CPU cores available. The first section is "events". The settings here define how the daemon handles incoming requests at the system level.

events {
        #worker_connections 4096; #this is a lot
        worker_connections 1024; #this is not a lot
        multi_accept on;
        use epoll; #linux, kqueue for bsd
}

  • worker_connections – This is how many connections a single worker thread is allowed to process. Connections made past this number will result in the user seeing an error page. This number, times the number of worker_processes, is the maximum number of connections your nginx server will handle.
  • multi_accept – Keep accepting connections even though the server hasn't finished handling incoming connections. Enable it if your host OS supports it.
  • use epoll/kqueue/select/poll – Host OS dependent. Use what is supported and compiled in to nginx itself.

The next section is "http". This is where most of your tuning will take place. It also contains all "server" directives, which, in the example configuration shown, are included from a directory.

http {
        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
        keepalive_timeout 65;
        types_hash_max_size 2048;
        gzip on;

        include /etc/nginx/mime.types;
        default_type application/octet-stream;

        access_log /var/log/nginx/access.log;
        error_log /var/log/nginx/error.log;

        include /etc/nginx/conf.d/*.conf;
        include /etc/nginx/sites-enabled/*;
}

There are many more options than what we have defined in this example, but the ones shown are of the highest priority, and the defaults often just won't do what you want.

  • sendfile – Enabling this will increase the speed at which nginx can cache, and retrieve from cache. Enable it if your kernel supports it.
  • tcp_nopush – Setting this option causes nginx to attempt to send its HTTP response headers in one packet. Enable it if your kernel supports it.
  • tcp_nodelay – This disables a buffering mechanism that, when used with keep-alive connections, can slow things down. It is generally advised to enable this.
  • keepalive_timeout – The maximum time between keepalive requests from client browsers. Setting this to just over a minute is in line with current standards, and will keep network resource waste to a minimum.
  • types_hash_max_size – The maximum size of hash tables. This directly influences cache performance. Higher numbers use more memory, and offer potentially higher performance. A value of 2048 is usually good for 4GB of RAM.
  • gzip – Enables standard gzip compression. Turn this on unless your backend is doing the compression. A sketch of related tuning options follows this list.
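
Related gzip knobs live in the same "http" block. A small sketch with illustrative values, not recommendations:

        # Trade CPU for compression ratio (1 = fastest, 9 = smallest).
        gzip_comp_level 6;
        # Compress these MIME types in addition to text/html.
        gzip_types text/css application/javascript application/json;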

The rest of the parameters are either self-explanatory (such as access_log), or situational depending on your particular setup (such as the includes & default_type). It is important to note that the following "server" section is a part of the "http" section!

Here is an example "server" section that is set up to serve out HTML and other static files. PHP and other server-side scripting support is beyond the scope of this introductory article. Remember, this is nested within the "http" section! You can have as many of these sections as you'd like, and most settings from the "http" section can be overridden here on a per-server basis. This is analogous to Apache's Virtual Host mechanisms.

server {
        listen 80 default_server;
        listen 443 default_server ssl;
        server_name lennier.info;

        #ssl on;
        ssl_certificate /etc/nginx/startcom/ssl-unified.crt;
        ssl_certificate_key /etc/nginx/startcom/startcom.key;


        #access_log /var/log/nginx/website.access_log;
        #error_log /var/log/nginx/website.error_log;

        root /var/www/lennier.info;
        index index.html;

        location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
                expires max;
                log_not_found off;
        }
}

Out of all of this, the most important lines are the "listen", "root", and "index" directives. With those three you can have a fully functional site.

  • listen – This specifies what port, and optionally what host IP to listen on (in the form of 127.0.0.1:80). When given the "default_server" parameter, that means that any request not matching a server directive will be directed to this server directive. When given the "ssl" parameter, it will use SSL to transfer data. One or more of these is required!
  • server_name – This is the domain name of the site being hosted.
  • ssl – In the above example it is commented out. This forces all requests to be treated as SSL requests, even if coming in on port 80. This is for testing purposes only.
  • ssl_certificate – Your signed certificate. Startcom (http://www.startssl.com/) offers free certs that are valid for one year, perfect for testing and learning. Modern browsers recognize Startcom as a valid CA.
  • ssl_certificate_key – Your private key.
  • access_log & error_log – You can have separate log files per server; if they are undefined, they will go into the global nginx log files as defined in the "http" section.
  • root – The document root containing all of the site's files. Required!
  • index – What page to load when no page is specified in the request from the browser. Required!
  • location – This is what makes nginx so magical. Location directives are analogous to Apache's mod_rewrite, however they are far more powerful. They allow a multitude of operations to take place on a selected group of files. In this case, we are forcing the client browser to cache those file types for the longest possible time the browser is willing to do. We are also telling nginx to not log requests to files of those types that don't exist. Depending on your site's content, the contents of that regex can change wildly. You may wish to add "swf", "mp4", and others to the list.

With the above you are ready to host a static website. nginx does not require you to restart the server to pick up on configuration changes. The safe way to do this is:

nginx -s reload

This will test the configuration for sanity, then reload the configuration and start using it. It takes place immediately. For more information about nginx, refer to nginx's wiki.
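
As an aside, if you'd like to check a configuration without touching the running server at all, nginx can test it explicitly:

nginx -t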