Peercoin 0.5.x on Raspberry Pi 2

This is for the blog cred, you see.

git clone
cd ppcoin/src
make USE_UPNP= -f makefile.unix -j2 ppcoind

Your binary will be at ppcoin/src/ppcoind. I have tested -j2 to work without running out of memory, as the raspi2 has 1GB of RAM that the video card also dips into. Compilation takes about an hour, which is still not nearly as long as it takes to synchronize the block chain.

A simple way to start and background it is with (./ppcoind)& and ./ppcoind stop to stop it, just like every other *coin.

If you would like to be a “full node” and accept connections from other peers, and if you have the bandwidth to spare, edit your ~/.ppcoin/ppcoin.conf to contain this stuff after you’ve run the daemon and shut it down safely for the first time:
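A minimal full-node configuration might look like the sketch below. The option names follow the usual bitcoin-derived daemon conventions, and every value here is an assumption to adjust, especially the RPC credentials.

```
# accept inbound connections from other peers
listen=1
server=1
maxconnections=64
# credentials for talking to the daemon over RPC; pick your own
rpcuser=ppcrpc
rpcpassword=change-this-to-a-long-random-string
```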


For more info, check out this thread on the Peercoin forum.

You can watch the paint dry thusly: watch -n 5 ./ppcoind getinfo


Advanced VirtualBox Administration

VirtualBox is very flexible software, allowing you to clone an existing virtual machine, take snapshots of its disks’ data, or even migrate it to another host while still running (called teleporting). These instructions take place on a Debian 7 (Wheezy) host, but nothing is distribution specific as we’re only concerned with VirtualBox in this article.

We’ll take a look at how to take a snapshot of a virtual machine’s data first. Good practice dictates that you should always take a snapshot of your VM immediately after you have configured it, but before you’ve put your mission critical data on it. This way you can redeploy it easily in similar-but-different circumstances. The name is required, and the description is optional.

VBoxManage snapshot $VM_NAME take $NAME --description "$DESCRIPTION"

To delete or restore a snapshot, the commands follow the same pattern.

VBoxManage snapshot $VM_NAME delete $NAME
VBoxManage snapshot $VM_NAME restore $NAME

There’s also a shortcut to restore the last snapshot you took. Handy if you accidentally hose the system, or are done testing changes.

VBoxManage snapshot $VM_NAME restorecurrent

Now we’ll learn how to clone a virtual machine. It is a very straightforward process. If you don’t give the cloned VM a name, its name will default to the original VM’s name, with “Clone” appended to the end. The “--name” option sets the new name, and “--register” is there so the new cloned VM will be registered with VirtualBox.

VBoxManage clonevm $ORIGINAL_VM --name "New Cloned VM" --register

Additionally, you can clone from a previous snapshot.

VBoxManage clonevm $ORIGINAL_VM --name "Just Another Test" --snapshot "after config" --register

The last bit we’ll cover is teleporting. This is the act of moving a running guest VM to a different host machine, with no interruption. To do this, there are a few prerequisites, all of which are beyond the scope of this article.

  • The hosts must have shared storage, where the disk images are stored. SMB, NFS, and iSCSI are common choices for this.
  • Both hosts must have the VM configured and registered

Further configuration of the guest VM is required for this to work. On the host that the VM will be teleported to, run this, then start the VM.

VBoxManage modifyvm $VM_NAME --teleporter on --teleporterport $PORT
VBoxHeadless -s $VM_NAME

Instead of booting, it will sit patiently waiting for the teleport to occur. Now, on the source host, with the VM running, you can teleport it on the fly like so.

VBoxManage controlvm $VM_NAME teleport --host $DESTINATION_HOST --port $PORT

Apache as a Backend for nginx

Apache is powerful, nginx is fast. Here’s how to use both of them to provide blazing fast speeds while retaining the same feature set your site is built upon. While nginx is very powerful in its own right, it does not yet share the same amount of modules and module support that Apache does. A common example is mod_rewrite: Apache rewrite rules can’t simply be dropped into an nginx configuration.

In these examples we’ll assume you’re running nginx and Apache on the same server. See my load balancing tutorial as a supplement to this if you wish to have multiple Apache backends, and to use nginx as a load balancer (reverse proxy). These instructions should be distribution-agnostic, however configuration file locations will be given in accordance to how Debian/Ubuntu places them. Adjust according to taste.

The first step is to tell Apache to stop listening on port 80, and have it listen somewhere else. If this is a separate server from the nginx load balancer, set the localhost IP to the proper LAN IP. In your /etc/apache2/ports.conf, make the following change:
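For example, assuming you pick port 8080 for Apache (the port is an arbitrary choice; any free port works):

```apache
# was: Listen 80
Listen 8080
```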


Now, verify that the site that Apache is handling contains an appropriate ServerName. If you don’t do this, PHP will generate strange URIs with the wrong port, and other odd behavior will occur. In, for example, /etc/apache2/sites-enabled/, within your VirtualHost directive:
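A minimal sketch, assuming Apache now listens on port 8080 and the hypothetical domain

```apache
<VirtualHost *:8080>
	ServerName
	DocumentRoot /var/www/
</VirtualHost>
```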


Note that you didn’t specify a port. This will cause PHP to rely on the HTTP headers to determine the port, though some software will do its own detection (such as PHPMyAdmin). Software that does this should provide a way to define its URI.

Once you’re done, reload Apache’s configuration.

service apache2 reload

Now we need to set up nginx to use Apache as the backend provider. In /etc/nginx/sites-enabled/

upstream apache_backend { #the name is arbitrary
        server; #wherever you told Apache to listen

server {
        listen 80 default_server;

        root /var/www/;
        index index.php;

        location / {
                proxy_set_header X-Forwarded-Host $http_host;
                proxy_set_header X-Forwarded-Server $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_pass http://apache_backend;

        location ~* \.(jpg|jpeg|gif|png|mp4|mp3|css|zip|pdf|txt|js|flv|swf|html|htm)$ {
                expires max;

That last location block tells nginx to serve files matching that regex itself, rather than passing them on to the Apache back end. Adjust this as needed! The current line serves static files that shouldn’t require processing by Apache, but, if you are a heavy user of mod_rewrite then this might not be appropriate and will likely need editing.

Reload nginx’s configuration, and give it a test. You should be done! As always, the error logs will guide you should something go wrong.

Basic nginx Modules

nginx has several modules that provide all of its functionality. Unlike with Apache DSO modules, nginx modules are compile-time only options. This means that there is currently no way to load or unload modules on-demand. You can check which modules a given binary was built with by running nginx -V, which prints the configure arguments used to build it.

We’ll go over the “standard” modules, and list the ./configure directive to disable them. For the more common ones we’ll list a usage example, as the more complex modules would fall far beyond the scope of this article.

Module: access
Control access based on IP address
Disable: --without-http_access_module

location / {
	deny all;
}

Module: auth basic
“Basic” HTTP authentication
Disable: --without-http_auth_basic_module
Example: I already have a nice tutorial on this

Module: auto index
Automatically generate directory listings
Disable: --without-http_autoindex_module

location /downloads {
	autoindex on;
}

Module: browser
Interpret “User-Agent:” header
Disable: --without-http_browser_module

Module: charset
Recode document character sets on the fly
Disable: --without-http_charset_module

include conf/win-utf;
charset utf-8;
source_charset windows-1252;

Module: empty gif
Serves a 1×1 transparent GIF, saves processing time on various CSS tricks that use the 1 pixel image trick.
Disable: --without-http_empty_gif_module

location = /images/1x1.gif {
	empty_gif;
}

Module: fastcgi
Allows FastCGI interaction, required for PHP processing, among other things.
Disable: --without-http_fastcgi_module

location / {
	fastcgi_pass  localhost:9000;
	fastcgi_index index.php;
}

Module: geo
Sets variables using key/value pairs of IP addresses and countries.
Disable: --without-http_geo_module
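A small sketch of what this looks like; the networks and country codes here are made up.

```nginx
geo $country {
	default        ZZ;    US;       RU;
}
```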

Module: gzip
Enables gzip compression of outbound data.
Disable: --without-http_gzip_module

gzip on;

Module: headers
Allows you to set various HTTP response headers
Disable: Can’t, it is a part of the HTTP core.

expires max; #sets the Expires: header to the maximum possible time, to improve client side caching

Module: index
Allows you to set which file is the directory index.
Disable: Can’t, it is a part of the HTTP core.

index index.htm index.html index.php index.shtml;

Module: limit requests
Limits HTTP connections based on frequency of connection.
Disable: --without-http_limit_req_module

http {
	limit_req_zone $binary_remote_addr zone=one:10m rate=3r/s;

	server {
		location /download {
			limit_req zone=one burst=10;
		}
	}
}

Module: limit conn
Limits HTTP connections based on variables.
Disable: --without-http_limit_conn_module

http {
	limit_conn_zone $binary_remote_addr zone=addr:30m;

	server {
		location /download {
			limit_conn addr 3;
		}
	}
}

Module: log
Allows you to customize the log output.
Disable: Can’t, it is a part of the core.

Module: map
Sets configuration values based on arbitrary key/value pairs.
Disable: --without-http_map_module
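As a quick sketch (the variable names are arbitrary), map a request’s User-Agent to a flag you can use elsewhere in the configuration:

```nginx
map $http_user_agent $is_mobile {
	default        0;
	~*android      1;
	~*iphone       1;
}
```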

Module: memcached
Allows nginx to serve files out of a memcached cache.
Disable: --without-http_memcached_module
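A minimal sketch, assuming memcached runs locally on its default port; the “backend” upstream name is a placeholder for whatever fills the cache on misses.

```nginx
location / {
	set $memcached_key $uri;
	error_page 404 502 504 = @fallback;
}

location @fallback {
	proxy_pass http://backend;
}
```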

Module: proxy
Proxies connection to upstream servers.
Disable: --without-http_proxy_module
Example: See our article on nginx load balancing

Module: referer
Filters requests based on the HTTP “Referer” header. Uses regular expressions, but requires an “if” statement.
Disable: --without-http_referer_module

valid_referers none blocked server_names
	* example.*;

if ($invalid_referer) { #if anybody else, then...
	return 403; #block them!
}

Module: rewrite
Rewrites requests using regular expressions.
Disable: --without-http_rewrite_module

if ($http_user_agent ~ Mozilla) {
	rewrite ^(.*)$ /mozilla/$1 break; #rewrites the URL
}

if ($invalid_referer) {
	return 403; #returning an error code uses this module
}

Module: scgi
Support for the SCGI protocol.
Disable: --without-http_scgi_module

Module: split clients
Splits clients based on various conditions.
Disable: --without-http_split_clients_module
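A sketch of an A/B split on the client address; the percentages and suffixes are arbitrary, and $variant can then be used in other directives.

```nginx
split_clients "${remote_addr}" $variant {
	50%     .one;
	*       .two;
}
```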

Module: ssi
Allows for Server Side Includes.
Disable: --without-http_ssi_module

location / {
	ssi on;
}

Module: upstream
Required for load-balancing and forward proxying.
Disable: Can’t, it is a part of the proxy module
Example: See our article on nginx load balancing

Module: upstream ip hash
Allows load balancing based on a hash of the IP address (back end stickiness)
Disable: --without-http_upstream_ip_hash_module
Example: See our article on nginx load balancing

Module: user id
User identifying cookies. Used for advanced load balancing and caching setups.
Disable: --without-http_userid_module

Module: uwsgi
Support for the uWSGI protocol.
Disable: --without-http_uwsgi_module

HTTP Basic Auth with nginx

Sometimes you need to secure a small set of files on your webserver. There is a protocol made for this, called “HTTP Basic Authentication”. Here’s how to set it up in nginx using the stock ngx_http_auth_basic_module module. Please note that this form of security, quite frankly, isn’t very secure. If anyone gets ahold of the associated password file, it will be easy to crack. Keep in mind that this only handles authentication, it does not securely transfer the files over the internet; for that, you will also want to set up SSL.

For this example we’ll be securing everything under the fake website “”. All of its files will be located at /var/www/ We’ll assume you just want to securely share a few images or something, so for brevity the examples won’t include things like handling PHP, or any optimizations. These instructions are distribution-agnostic, so it won’t matter what distro you’re running so long as you have a sane UNIX-like environment, and a recent version of Python to help generate the passwords.

First, grab a copy of this well-known and maintained helper script that generates passwords. The download link is at the bottom of its page; choose to download in the “Original Format” for a hassle-free experience. This script takes a lot of the legwork out of this process.
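If you can’t get the script, a minimal stand-in is easy to write yourself. The sketch below is an assumption, not the actual script: it appends “{SHA}”-style entries (base64 of the raw SHA-1 digest), a password format that nginx’s auth_basic module understands.

```python
#!/usr/bin/env python
# Hypothetical htpasswd helper; usage: python htpasswd-file username password
import base64
import hashlib
import sys

def make_entry(user, password):
    # "{SHA}" entries are the base64 of the raw SHA-1 digest of the password,
    # understood by nginx's auth_basic module (and Apache's htpasswd tools).
    digest = hashlib.sha1(password.encode("utf-8")).digest()
    return "%s:{SHA}%s" % (user, base64.b64encode(digest).decode("ascii"))

if __name__ == "__main__" and len(sys.argv) >= 4:
    path, user, password = sys.argv[1], sys.argv[2], sys.argv[3]
    with open(path, "a") as f:
        f.write(make_entry(user, password) + "\n")
```

Keep in mind unsalted SHA-1 is weak; this mirrors the classic format, nothing more.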

Now, let’s set up the new subdomain under nginx. Under your /etc/nginx/sites-enabled directory (or its equivalent), create a file similar to this:

server {
        listen 80;

        root /var/www/;
        index index.html index.php;

        location / {
	        auth_basic              "Restricted";
	        auth_basic_user_file    htpasswd;
	        autoindex on;
        }
}

The important parts are the lines in the “location” section that start with “auth_basic”. The first line that simply says “auth_basic” takes a single parameter, which is the “realm” of the authentication. If a user authenticates under one realm on a domain, then accesses other parts of the site that require authentication, they will remain authenticated so long as the realm is the same. The realm is also displayed to the user when they log in, in various ways depending on the browser. You can also set the realm to “off” which will disable authentication for directories deeper in the site’s structure. A site can have multiple realms, and any location can require authentication.

The next line defines what file lists the users and their associated passwords. It is relative to the nginx configuration root, so for most sane installations it will be relative to /etc/nginx, so in this case the file is /etc/nginx/htpasswd.

Once you reload nginx’s configuration with “nginx -s reload”, you can go ahead and test your configuration, but you won’t be able to log in.

The next step is to generate the username and password file.

touch /etc/nginx/htpasswd
python /etc/nginx/htpasswd bob neato

This will append the user “bob” with a password of “neato” to the htpasswd list. Run it as many times as is needed to add all of your users and passwords. There’s no need to reload nginx when adding or removing users.
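Removing a user is just a matter of deleting their line, for example with sed; the file path assumes the setup above.

```shell
# drop the entry for user "bob" from the password list
sed -i '/^bob:/d' /etc/nginx/htpasswd
```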

Load Balancing with Apache

Apache makes for a great load balancer, with the help of mod_proxy and mod_proxy_balancer. If you use Apache as the back end, this makes deploying new back ends very easy, as everything in the cluster will use the same software load out.

In this tutorial, we’ll assume that you already have a site set up and working in Apache. We’ll also assume that you have separate back end servers. File locations and directory paths will be based on Debian/Ubuntu standards, so change them as needed for other distributions.

The first step to take is to enable the modules.

a2enmod proxy
a2enmod proxy_balancer
a2enmod proxy_http

Now, make a new virtual host, and configure it like this:

<VirtualHost *:80>
        ProxyRequests off #disable forward proxying

        <Proxy balancer://cluster>
                Order Deny,Allow
                Deny from none
                Allow from all

                #back end URLs are placeholders; list one per server
                BalancerMember http://backend1.lan
                BalancerMember http://backend2.lan

                #use round-robin balancing
                ProxySet lbmethod=byrequests
        </Proxy>

        #balancer-manager is a tool that lets you configure and tune Apache
        <Location /balancer-manager>
                SetHandler balancer-manager

                #lock this down tightly
                Order deny,allow
                Allow from all
        </Location>

        #what to actually balance
        #in this case, balance everything except the manager
        ProxyPass /balancer-manager !
        ProxyPass / balancer://cluster/
</VirtualHost>
At this point you're practically done. Move all the site files over to the back end servers (and perhaps set up unison to keep the files synchronized), and change the default catch-all virtualhost as needed to make your site run correctly.

The configuration above uses a round-robin approach to load balancing, which is good for anything that doesn't involve user sessions. If you have user authentication of some kind in your application, and sessions are not somehow shared between the back end servers (usually through a database), then you will have to use persistent sessions.

This problem often manifests itself as browsing the site, logging in, then sometimes being logged out and back in, as you browse the site. To fix it, we'll need another Apache module.

a2enmod headers

We’ll then need to change a few lines around the Proxy block of configuration.


<Proxy balancer://cluster>
	#back end URLs are placeholders
	BalancerMember http://backend1.lan route=1
	BalancerMember http://backend2.lan route=2
	ProxySet stickysession=ROUTEID
</Proxy>

Header add Set-Cookie "ROUTEID=.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED

Now which server you’re using will be stored as a cookie, and you will always connect to that same server.

Basic Filesharing with NFSv4

In this tutorial we’ll be setting up a basic Network Filesystem version 4 (NFSv4) share, and mount it on a client machine. NFSv4 is the current iteration of the NFS specification. NFS provides similar functionality as Samba (SMB) file sharing, although it provides tighter integration with Kerberos.

For this tutorial we’ll assume Debian-like environments for both the server and the client.

First, let’s set up the server.

apt-get install nfs-kernel-server

Ignore any warnings about missing entries in /etc/exports, because we haven’t written the configuration yet. Let’s go ahead and set up the shares, then we’ll write the configuration. The way that NFSv4 works is a bit different from previous versions: instead of separate directory paths that can be located anywhere on the filesystem, for security purposes all shares must reside in the same directory. To “work around” this, we use bind mounts. In our example, we want to share the files located under /home/data.

mkdir -p /nfs/data
mount -o bind /home/data /nfs/data

To make this bind mount happen every time on boot, add an entry similar to this to /etc/fstab:

/home/data    /nfs/data   none    bind  0  0

Now we need to configure NFS itself to export that directory. Edit /etc/exports and put down something similar to this:
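For example, assuming your clients live on, an NFSv4 /etc/exports with its pseudo-root would look like this:

```
/nfs,fsid=0,no_subtree_check)
/nfs/data,no_subtree_check)
```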


This will export the single directory, /nfs/data, to all clients on the network.

The server is now set up. Let’s restart the daemons, then move on to the client.

service nfs-kernel-server restart
service idmapd restart

Over on the client, first we need to install the supporting daemons (most distributions install them by default).

apt-get install nfs-common

We can now mount the entire NFS export tree to /mnt like so:

mount -t nfs4 -o proto=tcp,port=2049 the-nfs-server:/ /mnt

Or we can mount just the one share to /data:

mount -t nfs4 -o proto=tcp,port=2049 the-nfs-server:/data /data

To make it happen on boot, add something like this to /etc/fstab. For this example, we're assuming that the “data” export is what you want mounted.

nfs-server:/data   /data   nfs4    _netdev,auto  0  0

“_netdev” is a hook used by many init scripts to ensure networking is up. Exact usage depends on your distribution.

To mount it using autofs (highly recommended), add this line to /etc/auto.master:

/-        /etc/auto.nfs

Then edit /etc/auto.nfs to contain this direct mapping:

/data        -fstype=nfs4        the-nfs-server:/data

Make sure you remove the /etc/fstab line if you've added it.

nginx Load Balancing

nginx is well known for being a front-end load balancer. It often sits between the client accessing the web site, and the PHP back ends that process your dynamic pages. While doing this, it will also perform admirably as a static content server. Here’s how to get this going. For this setup we’ll assume there are three separate servers connected by LAN. One will run nginx and the other two will run whatever back end you need them to. nginx doesn’t care so long as it speaks HTTP; it’s just a proxy.

The most basic section of the site’s configuration is the “upstream” part, where you define the hosts that are being proxied to. The parameter after the upstream keyword is the name of the upstream group. It can be anything unique; here it is “proxiedhosts”.

upstream proxiedhosts {
	#the back end addresses here are placeholders
	server;
	server;
}

The rest is all within the “server” section of the configuration.

location / {
	if (-f $document_root/maintenance.html) {
		return 503;
	}

	# "" is a placeholder; use your canonical host name
	if ($host != {
		rewrite ^(.*)$$1 permanent;
	}

	proxy_set_header X-Forwarded-Host $http_host;
	proxy_set_header X-Forwarded-Server $host;
	proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
	proxy_set_header X-Real-IP $remote_addr;
	proxy_pass http://proxiedhosts;
}

There is some handy boilerplate in the example configuration shown here. The first “if” clause detects whether the site is down for maintenance, and if it is, answers requests with a 503 Service Unavailable, following best practices. The next “if” clause says that if anyone visits the site using a host name other than the canonical one, to redirect them with a 301 Moved Permanently, also in line with current best practices. This will update search engines. None of this is related to proxying back ends, but it is useful configuration to know and have in your toolbox.

The real magic begins with the lines that start with “proxy”. The “proxy_set_header” lines change the appropriate headers so that they match what is coming in from the client. If these lines don’t exist, the back end servers will “see” all requests as coming from the same server, the front end balancer. These are of vital importance otherwise the first five people to get their password wrong will lock everybody out of the system, as it’s all seen coming from the same IP, for example. The “proxy_pass” line merely tells nginx the protocol and what upstreams to use. If you only have one backend, you can leave out the “upstream” section and put the IP address of the host here.

That’s the basics of it all. Now we’ll talk a bit about tuning this. In the following configuration, requests will hit the 1st listed server twice as often as the 2nd server. It is a ratio, so change it to taste. This is only useful if your hardware is not identical, or the loads on the hardware aren’t identical; such as having one web server pulling double duty as the database server.

upstream proxiedhosts {
	#the back end addresses here are placeholders
	server weight=2;
	server weight=1;
}

In this configuration, all requests from the same IP will continue to go to the same host. Some back end applications are picky about this sort of thing, though most applications now store session data in a database so it is becoming less of an issue. If you have a problem where you log in, and are then suddenly logged out, the “ip_hash” will fix it.

upstream proxiedhosts {
	ip_hash;
	#the back end addresses here are placeholders
	server;
	server;
}

How To Migrate a LAMP Stack

Sooner or later, your operations are going to need to scale out to bigger and better servers. You’ll have to migrate your application to new hardware. Here’s how to do it. This is more of a checklist than a full guide, as a full guide would be far too specific to your requirements. We’ll be assuming a typical LAMP stack, along with email hosting, as this is a very common combination among small web startups.

First and foremost, take inventory of what needs to be moved, and make a full backup of absolutely everything. Due to hardware differences, and hosting company requirements, it probably won’t be possible to simply restore your backup on the new server.

Go in to your DNS provider’s settings and change your site’s TTL time to the minimum of one hour. This means it will take at most one hour for users to see the new DNS entries when you make the migration. Once you’re done moving, be sure to change it back.

We will be doing this move in two stages, the first stage moves the services and configurations and allows for testing; the second stage shuts down the previous server for good, and migrates all of the application data to its new host.

For the purposes of this tutorial, we will assume a Debian-based system, and that you’ll be working as root the entire time. These commands are mostly distribution-agnostic, however some file locations might be Debian specific.

To make a full backup, first stop all of the services that write to the database.

service apache2 stop
service postfix stop
service dovecot stop
service mysql stop

Then do the backup. We’re not going to compress it at this time, as compressing takes a very long time, and during this time your services are offline. We’ll also take this time to make proper SQL backups. Make sure you have enough room in the root partition to hold all of this, or put them in a place where there is enough room (such as under /var or /home).

mysqldump -u root -p reallyimportantdatabase > /reallyimportantdatabase.sql
mysqldump -u root -p someotherdatabase > /someotherdatabase.sql
tar cvpf /backup.tar \
--exclude=/proc --exclude=/dev \
--exclude=/lost+found --exclude=/backup.tar \
--exclude=/mnt --exclude=/sys \
/

Once it is done, start your services back up again.

service apache2 start
service postfix start
service dovecot start
service mysql start

Check the size of the backup. If it is less than, say, 10GB, don’t bother compressing it, as you’ll spend more time compressing and decompressing than you will transferring it. If it is significantly over 10GB, I suggest using pbzip2 to compress it, a file-compatible implementation of the bzip2 algorithm that does the compression in parallel. This means its speed increases roughly linearly with how many cores you have available: if a bzip2 operation would take ten minutes, and you have two cores, it will take about five minutes with pbzip2. You can get it from the pbzip2 homepage, which also contains usage examples; it is in many distributions’ repositories as well, so try those first.

pbzip2 /backup.tar
scp /backup.tar.bz2 admin@new.server.tld:/home/admin

We’re now done with the old server for the rest of Stage One. All of the following instructions will take place on the new server, until we’re ready for Stage Two. Make sure you take the time to install your preferred management utilities such as GNU Screen and vim before proceeding. Also be sure to set up networking properly.

Let’s unpack the backup that we transferred over.

cd /home/admin
mkdir oldmachine
cd oldmachine
mv ../backup.tar.bz2 .
pbzip2 -d backup.tar.bz2
tar xvf backup.tar
While that's going, now's a good time to install the services and support utilities that were on the old server. This list will vary wildly depending on your needs, but a minimal command for Debian might look a bit like this:

apt-get install fail2ban postfix postfix-mysql \
dovecot-imapd dovecot-pop3d dovecot-mysql \
apache2 libapache2-mod-php5 php5-apc \
php5-gd php5-mcrypt php5-mysql mysql-server

Once all of that is done, we’ll move the stock configurations out of the way in preparation for restoring the old configurations.

cd /etc
mv dovecot dovecot.dist
mv postfix postfix.dist
mv php5 php5.dist
mv mysql mysql.dist
mv apache2 apache2.dist

Now copy all of the old configurations and support files from the backup to their new home. This list depends on your exact setup, but this should give you a general idea of what to look for. Note that we are not copying the SQL database files during this, they will be restored in their proper manner in just a bit. Don’t forget to copy over any SSL keys you may have!

cd /home/admin/oldmachine
cp -a var/www/* /var/www/
cd etc
cp -a dovecot /etc
cp -a postfix /etc
cp -a php5 /etc
cp -a mysql /etc
cp -a apache2 /etc
cd ../var
cp -a mail /var

Now we hope for the best and restart our services. Everything should go smoothly, but you never know. If your distribution changed, there will likely be problems with minor things like the location of pid files. These should be very easy to fix; just be sure to take a look at your logs.

service apache2 restart
service postfix restart
service dovecot restart
service mysql restart

Now it’s time to import your databases the proper way.

mysql -u root -p reallyimportantdatabase < /home/admin/oldmachine/reallyimportantdatabase.sql
mysql -u root -p someotherdatabase < /home/admin/oldmachine/someotherdatabase.sql

Stage One is now complete. Test absolutely everything, then test it some more. Once you’re done testing, test it all again. This is your only chance to get things right without incurring undue downtime on your users, so make use of it. The exact method of testing depends on your application, but it will at the least involve some fake entries in your local machine’s hosts file.

Now that you are satisfied that your new server is working properly, head over to your old machine and put your application in some sort of maintenance mode. The idea is to present the users with a page saying “we’re moving servers, thank you for your patience”, and for the application to not write to the database at all.

Now we back up and move the application’s database one final time.

mysqldump -u root -p reallyimportantdatabase > /reallyimportantdatabase.sql
scp /reallyimportantdatabase.sql admin@new.server.tld:/home/admin
And then we import it on the new machine.

mysql -u root -p reallyimportantdatabase < /home/admin/reallyimportantdatabase.sql

The last step is to move your DNS to point at the new servers. Don't forget to change your SPF records to reflect the new email server's IP address. That's it! You're done! To make sure it worked, keep a close eye on your Apache access logs.

tail -f /var/log/apache2/access.log

MySQL to MariaDB Migration

Let’s migrate from MySQL to MariaDB, which is a popular alternative to MySQL. MariaDB was initially forked in January of 2009. We can make this transition quickly, but not without some downtime, as we can’t have both databases working on the same files simultaneously. These instructions are good for all popular distributions of GNU/Linux, however Debian 7 (Wheezy) will be used in the reference code shown.

This process only works reliably if you are running the same major version of MySQL as MariaDB. Currently this means you must be running MySQL 5.5 and intend on moving to MariaDB 5.5.

First and foremost, shut down all processes that use MySQL. You can’t move the database around while it’s in use. Doing this is far beyond the scope of this article, but since it is your system, hopefully you have a good idea about what all is running on your server, and can shut processes down appropriately. It is advised to stop MySQL by hand, rather than depending on your package manager to do it for you.

# service apache2 stop
# service nginx stop
# service mysql stop

Next, make a simple mysqldump of your current databases. These commands dump every SQL database you have to a single file. Make sure you do this on a partition big enough to hold your data. Doing this in /tmp is a very bad idea.

# cd /backups
# mysqldump -u root -p --all-databases > mysqlbackup.sql

We are now done with MySQL. Use your package manager to remove it. Do not worry about associated libraries, as MariaDB is a drop-in replacement that remains compatible at the API layer. If your package manager tries to uninstall half the system, cancel the operation and proceed to the next step. apt-based distributions will often want to remove all kinds of dependent packages; since little else is installed on our test system, that doesn’t happen here, so the removal is safe in our example.

# apt-get remove mysql-server-core-5.5 mysql-server-5.5 mysql-server mysql-common mysql-client-5.5 libmysqlclient18
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following package was automatically installed and is no longer required:
Use 'apt-get autoremove' to remove it.
The following packages will be REMOVED:
  libdbd-mysql-perl libmysqlclient18 mysql-client-5.5 mysql-common
  mysql-server mysql-server-5.5 mysql-server-core-5.5 php5-mysql
0 upgraded, 0 newly installed, 8 to remove and 0 not upgraded.
After this operation, 94.8 MB disk space will be freed.
Do you want to continue [Y/n]? y

Next, add the MariaDB repositories, and install it. Instructions for your distribution can be found on the MariaDB downloads page.

If, in the prior step, your package manager wanted to uninstall everything, that means it wants to do this upgrade in-place. Installing the new MariaDB packages will simply overwrite the old MySQL packages, as your package manager sees it as any other upgrade.

# apt-get install mariadb-server
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
  libdbd-mysql-perl libmariadbclient18 libmysqlclient18 mariadb-client-5.5
  mariadb-client-core-5.5 mariadb-common mariadb-server-5.5
  mariadb-server-core-5.5 mysql-common
Suggested packages:
  tinyca mariadb-test
The following NEW packages will be installed:
  libdbd-mysql-perl libmariadbclient18 libmysqlclient18 mariadb-client-5.5
  mariadb-client-core-5.5 mariadb-common mariadb-server mariadb-server-5.5
  mariadb-server-core-5.5 mysql-common
0 upgraded, 10 newly installed, 0 to remove and 0 not upgraded.
Need to get 8,794 B/31.1 MB of archives.
After this operation, 108 MB of additional disk space will be used.
Do you want to continue [Y/n]? y

Some package systems start what they install automatically, but some don’t. If not, start up MariaDB now, and verify its sanity.

# mysql -u root -p -Be 'show databases'
Enter password:

This next part won’t be pretty, but it shouldn’t be tricky. The configuration has changed considerably between MySQL and MariaDB, however it is extremely easy to work through. Almost everything that changed is related to mechanisms that have been replaced, such as how replication works. You should be safe in simply copying over the performance tuning options you set in MySQL’s my.cnf, and reconfiguring the rest by hand. For small databases with less intricate setups, it is likely that you won’t need to do anything more than copy the performance options that you’ve changed.

bind-address            =

max_connections         = 10
connect_timeout         = 30
wait_timeout            = 600
max_allowed_packet      = 16M
thread_cache_size       = 256
sort_buffer_size        = 16M
bulk_insert_buffer_size = 16M
tmp_table_size          = 64M
max_heap_table_size     = 64M

Start up MariaDB with your changed configuration and verify sanity once again. Once you’ve done that, stop it and start it once more, as you normally would through your init system. Note that MariaDB’s service keeps the “mysql” name for compatibility.

# service mysql restart
Stopping MariaDB database server: mysqld.
Starting MariaDB database server: mysqld . . .
Checking for corrupt, not cleanly closed and upgrade needing tables..
# mysql -u root -p -Be 'show databases'
Enter password:

Lastly, start up the applications that use a SQL database.

# service apache2 start
# service tomcat start

At this point you’re done! There is no conversion of your databases when switching to MariaDB, so if for whatever reason you don’t like it, you can freely switch back to MySQL.

# service mysql stop
# apt-get remove mariadb-server-5.5 mariadb-common mariadb-client-5.5 libmariadbclient18
# apt-get install mysql-server