Asus X540LA-SI30205P Review

This is a reposting of my review left at bestbuy.com. I’ve always had good luck with Asus, so I went with them when I needed a new one.

So the short and simple of it all is that this laptop is built to a price point, and they absolutely nailed it.

You get above-average (for the price) specs, but the screen, battery, keyboard, touchpad, and enclosure are where the sacrifices are made. Those are usually very important things, so reconsider if they matter more to you than performance. For me, the sacrifice is well worth it. Overall I would recommend this to anyone needing mobile compute power on a very tight budget. 4 stars instead of 5 because of the UEFI lockdown and the complete lack of documentation for the rebranded AMI BIOS. All testing was done with Linux; Windows was never booted. If I were a normal user it would be 5/5 easily. In this review I’ll mostly touch on things that aren’t easily found by searching the model number.

First and foremost, the first one I received had a faulty fan: it made a slow ticking sound, had low air flow, and overheated during the initial setup. I had zero problems getting a replacement device the next day; Best Buy’s customer service shined as usual with a hassle-free swap. At this price point QA will suffer, so be sure to fully evaluate the machine within the 15 day return policy, otherwise you’ll have to deal with the Asus warranty (which is likewise good, just much slower than a simple return).

The CPU is an Intel Core i3 5020U (5th gen). It has TWO physical cores, and has hyperthreading, which is where the advertising gets its “four” cores from. This is a bit misleading, but only if you actually care. It is a great CPU for the price (a common theme with this device). This is a well documented CPU so you can find reviews elsewhere. Important to me is that it has the Intel virtualization extensions, which is nearly unheard of at this price point.

The GPU is an Intel HD 5500. It will play most any modern game on the lowest settings, and has hardware video decode. This is important for battery life. My experience with the GPU has been far better than prior Intel integrated chipsets. Like all integrated GPUs, it shares memory with the system, so you will only have ~3.5GB available to applications. Again, it is very good for the price.

The battery gives me just over 3 hours of heavy pedal-to-the-metal use and reduced screen brightness. I’d say you would be lucky to get a full 4 hours of normal use out of it. This is to be expected at this price, and not a detractor.

The hard drive is a Toshiba 1TB drive. Nothing special here; Toshiba makes good drives, but I would never specifically buy one as I trust other manufacturers more. It has an SD card reader along the front left edge which sits flush and out of the way. The headphone jack is a combined headphone/mic jack; cell phone style headsets work directly, or you can use a very cheap adapter for separate plugs. Normal headphones work just fine. There is no line input. It has HDMI and VGA out, which is a bit strange to me. The Ethernet port is right next to the VGA port, and it gets blocked when the VGA port is in use. All of the I/O including power is on the left side of the device, with only the DVD drive on the right side. Nothing on the back (yay!). It has one USB2, one USB3, and one USB3 type C port. The power adapter is not grounded, and a bit on the short side.

The wired networking is gigabit, not the 10/100 advertised, and the wifi is 2.4GHz only with no 5GHz support. Realtek and Atheros/Qualcomm chipsets, respectively.

The touchpad is an Elantech “clickpad”. It has one actual button; where your finger is resting when you click determines whether it registers as a left or right click. I am unable to get a middle click event out of it. The biggest problem with this, in fact it is infuriating, is that when you click the cursor WILL move. It is possible to set “deadzones” so the cursor won’t move, but it is fiddly. This is absolutely the worst part about this laptop.

The screen is 1366×768, and the color depth is 6 bit (yikes, how low can you go). Model number is B156XTN04.5. The viewing angle is abysmal; you won’t be sharing a movie with a friend. It is glossy, and there’s very little light leakage around the edges. This screen is a big part of how Asus was able to put such a good CPU/GPU at this price. This is typical of budget laptops, and you won’t see good screens until you’re paying at least double this price. As of this writing the cheapest laptop with a good screen at Best Buy is around $800.

Lastly we have the physical qualities of the laptop as a whole. The case provides no access to the HD, memory, or miniPCI slot. THE BATTERY IS NOT USER REPLACEABLE. I haven’t taken it apart yet to see just how bad it is. The keyboard flexes all over; not enough to be annoying or cause typos, but enough to notice and adjust to. All of the keys are where you would expect them, and the function keys are function keys that don’t require you to push a modifier first. As mentioned before the touchpad is awful, so definitely beware of the learning curve unless you’re used to “clickpad” style touchpads. The plastic enclosure itself is very flimsy and thin (TOO thin). I doubt this would survive a drop from the couch onto carpet. I doubt it would survive a full year of commuting, but I’m about to find out. The plastic has a bit of a texture to it and does not show fingerprints easily. Stuff like this is how they are able to sell such a good CPU/GPU combo for such a cheap price.

Purchase price for me was $329.99, and I wouldn’t pay a single dime more. You get what you pay for. In this case the sacrifices are worth it for me. It will Youtube, Factorio, and homework just fine. It will compile and develop just fine. It’s a fantastic deal.

Some pro tips for you fellow nerds: Hit escape to get the boot menu & BIOS, turn off secure boot & “fast boot”, and enable “CSM” to allow booting from USB/DVD. The signing keys are part of GRUB’s shim, so you can turn secure boot back on if you want or need to. This is also the only way to get access to Windows’ F8 boot options; normal UEFI stuff. Out of the box Ubuntu’s kernel will have all the hardware working perfectly; just add the open source Intel HD drivers if you want to game or want hardware video decode. Totally painless. The touchpad masquerades as a proper Synaptics touchpad, which causes driver problems. The Arch Linux wiki has details on working around this. I haven’t gotten around to ricing the kernel and still having a working touchpad, though.

Peercoin 0.5.x on Raspberry Pi 2

This is for the blog cred, you see.

git clone https://github.com/ppcoin/ppcoin.git
cd ppcoin/src
make USE_UPNP= -f makefile.unix -j2 ppcoind

Your binary will be at ppcoin/src/ppcoind. I have tested -j2 to work without running out of memory, as the raspi2 has 1GB of RAM that the video chip also dips into. Compilation takes about an hour, which is still not nearly as long as it takes to synchronize the block chain.
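If the build dies on missing headers, the usual Bitcoin-derivative build dependencies are probably what’s missing. The package list below is my best guess for Raspbian/Debian, so adjust to taste:

sudo apt-get install build-essential libssl-dev libboost-all-dev libdb++-dev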

A simple way to start it in the background is (./ppcoind) &, and ./ppcoind stop will stop it, just like every other *coin.

If you would like to be a “full node” and accept connections from other peers, and if you have the bandwidth to spare, edit your ~/.ppcoin/ppcoin.conf to contain this stuff after you’ve run the daemon and shut it down safely for the first time:

maxconnections=150
listen=1

For more info, check out this thread on the Peercoin forum.

You can watch the paint dry thusly: watch -n 5 ./ppcoind getinfo


Advanced VirtualBox Administration

VirtualBox is very flexible software, allowing you to clone an existing virtual machine, take snapshots of its disks’ data, or even migrate it to another host while still running (called teleporting). These instructions take place on a Debian 7 (Wheezy) host, but nothing is distribution specific as we’re only concerned with VirtualBox in this article.

We’ll take a look at how to take a snapshot of a virtual machine’s data first. Good practice dictates that you should always take a snapshot of your VM immediately after you have configured it, but before you’ve put your mission critical data on it. This way you can redeploy it easily in similar-but-different circumstances. The VM name and snapshot name are required; the description is optional.

VBoxManage snapshot $VM_NAME take $NAME --description "$DESCRIPTION"

The commands to delete and restore a snapshot follow the same pattern.

VBoxManage snapshot $VM_NAME delete $NAME
VBoxManage snapshot $VM_NAME restore $NAME

There’s also a shortcut to restore to the last snapshot you took. Handy if you accidentally hose the system, or are done testing changes.

VBoxManage snapshot $VM_NAME restorecurrent
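If you lose track of which snapshots exist, the same subcommand will list them for a given VM:

VBoxManage snapshot $VM_NAME list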

Now we’ll learn how to clone a virtual machine. It is a very straightforward process. If you don’t give the cloned VM a name, its name will default to the original VM’s name with “Clone” appended to the end. The "--register" flag is there so the new cloned VM will be registered with VirtualBox.

VBoxManage clonevm $ORIGINAL_VM --name "New Cloned VM" --register

Additionally, you can clone from a previous snapshot.

VBoxManage clonevm $ORIGINAL_VM --name "Just Another Test" --snapshot "after config" --register
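To double-check that the clone actually registered, list the VMs VirtualBox knows about:

VBoxManage list vms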

The last bit we’ll cover is teleporting. This is the act of moving a running guest VM to a different host, with no interruption. To do this, there are a few prerequisites, all of which are beyond the scope of this article.

  • The hosts must have shared storage, where the disk images are stored. SMB, NFS, and iSCSI are common choices for this.
  • Both hosts must have the VM configured and registered.

Further configuration of the guest VM is required for this to work. On the host that the VM will be teleported to, run this, then start the VM.

VBoxManage modifyvm $VM_NAME --teleporter on --teleporterport $PORT
VBoxHeadless -s $VM_NAME

Instead of it running, it will sit patiently waiting for the teleport to occur. Now, on the source host, with the VM running, you can teleport it on the fly like so.

VBoxManage controlvm $VM_NAME teleport --host $DESTINATION_HOST --port $PORT

Apache as a Backend for nginx

Apache is powerful, nginx is fast. Here’s how to use both of them to provide blazing fast speeds while retaining the same feature set your site is built upon. While nginx is very powerful in its own right, it does not yet have the same breadth of modules and module support that Apache does. A common example is mod_rewrite: Apache rewrite rules can’t simply be dropped into nginx.

In these examples we’ll assume you’re running nginx and Apache on the same server. See my load balancing tutorial as a supplement to this if you wish to have multiple Apache backends, and to use nginx as a load balancer (reverse proxy). These instructions should be distribution-agnostic, however configuration file locations will be given in accordance to how Debian/Ubuntu places them. Adjust according to taste.

The first step is to tell Apache to stop listening on port 80, and have it listen somewhere else. If this is a separate server from the nginx load balancer, set the localhost IP to the proper LAN IP. In your /etc/apache2/ports.conf, make the following change:

NameVirtualHost 127.0.0.1:8080
Listen 127.0.0.1:8080

Now, verify that the site that Apache is handling contains an appropriate ServerName. If you don’t do this, PHP will generate strange URIs with the wrong port, and other odd behavior will occur. In, for example, /etc/apache2/sites-enabled/001-beginlinux.com, within your VirtualHost directive:

ServerName beginlinux.com

Note that you didn’t specify a port. This will cause PHP to rely on the HTTP headers to determine the port, though some software will do its own detection (such as PHPMyAdmin). Software that does this should provide a way to define its base URI.

Once you’re done, reload Apache’s configuration.

service apache2 reload
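If the reload complains, apache2ctl can check the configuration syntax and point at the offending line:

apache2ctl configtest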

Now we need to set up nginx to use Apache as the backend provider. In /etc/nginx/sites-enabled/beginlinux.com:

upstream beginlinux.com {
        server 127.0.0.1:8080;
}

server {
        listen 80 default_server;
        server_name beginlinux.com www.beginlinux.com;

        root /var/www/beginlinux.com;
        index index.php;

        location / {
                        proxy_set_header X-Forwarded-Host $http_host;
                        proxy_set_header X-Forwarded-Server $host;
                        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                        proxy_set_header X-Real-IP $remote_addr;
                        proxy_pass http://beginlinux.com;

        }

        location ~* ^.+\.(jpg|jpeg|gif|png|mp4|mp3|css|zip|pdf|txt|js|flv|swf|html|htm)$ {
                expires max;
        }
}

That last location block tells nginx to serve files matching that regex itself, rather than passing them on to the Apache back end. Adjust this as needed! The current line matches static files that shouldn’t require processing by Apache, but if you are a heavy user of mod_rewrite then this might not be appropriate and will likely need editing.

Reload nginx’s configuration, and give it a test. You should be done! As always, the error logs will guide you should something go wrong.
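A quick sanity check from the command line (hostnames here match the example configuration; substitute your own):

curl -sI http://beginlinux.com/                              #should come back via nginx on port 80
curl -sI -H 'Host: beginlinux.com' http://127.0.0.1:8080/    #hits the Apache back end directly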

Basic nginx Modules

nginx has several modules that provide all of its functionality. Unlike Apache’s DSO modules, nginx modules are compile-time options only, which means there is currently no way to load or unload modules on demand.

We’ll go over the “standard” modules, and list the ./configure flag that disables each one. For the more common ones we’ll list a usage example, as the more complex modules would fall far beyond the scope of this article.

Module: access
Control access based on IP address
Disable: --without-http_access_module
Example:

location / {
	allow 98.132.201.14;
	deny all;
}

Module: auth basic
“Basic” HTTP authentication
Disable: --without-http_auth_basic_module
Example: I already have a nice tutorial on this

Module: auto index
Automatically generate directory listings
Disable: --without-http_autoindex_module
Example:

location /downloads {
	autoindex on;
}

Module: browser
Interpret “User-Agent:” header
Disable: --without-http_browser_module
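Example (a sketch using the module’s ancient_browser/modern_browser directives; the browser strings and the rewrite target are illustrative):

modern_browser msie 9.0;
ancient_browser "MSIE 6.0";

if ($ancient_browser) {
	rewrite ^ /outdated.html;
}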

Module: charset
Recode document character sets on the fly
Disable: --without-http_charset_module
Example:

include conf/win-utf;
charset utf-8;
source_charset windows-1251;

Module: empty gif
Serves a 1×1 transparent GIF from memory, which saves a file lookup for CSS tricks that rely on a 1 pixel image.
Disable: --without-http_empty_gif_module
Example:

location = /images/1x1.gif {
	empty_gif;
}

Module: fastcgi
Allows FastCGI interaction, required for PHP processing, among other things.
Disable: --without-http_fastcgi_module
Example:

location / {
	include       fastcgi_params;
	fastcgi_pass  localhost:9000;
	fastcgi_index index.php;
}

Module: geo
Sets variables using key/value pairs of IP addresses and countries.
Disable: --without-http_geo_module
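Example (a minimal sketch; the variable name and values are arbitrary, and the block lives in the http context):

geo $in_office {
	default        0;
	192.168.1.0/24 1;
}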

Module: gzip
Enables gzip compression of outbound data.
Disable: --without-http_gzip_module
Example:

gzip on;

Module: headers
Allows you to set various HTTP response headers
Disable: Can’t, it is a part of the HTTP core.

expires max; #sets the Expires: header to the maximum possible time, to improve client side caching
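add_header X-Frame-Options SAMEORIGIN; #add_header sets an arbitrary response header; the header and value here are just an illustration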

Module: index
Allows you to set which file is the directory index.
Disable: Can’t, it is a part of the HTTP core.

index index.htm index.html index.php index.shtml;

Module: limit requests
Limits the rate of incoming requests, keyed on a variable such as the client address.
Disable: --without-http_limit_req_module

http {
	limit_req_zone $binary_remote_addr zone=one:10m rate=3r/s;

	server {
		location /download {
			limit_req zone=one burst=10;
		}
	}
}

Module: limit conn
Limits the number of concurrent connections, keyed on a variable such as the client address.
Disable: --without-http_limit_conn_module

http {
	limit_conn_zone $binary_remote_addr zone=addr:30m;

	server {
		location /download {
			limit_conn addr 3;
		}
	}
}

Module: log
Allows you to customize the log output.
Disable: Can’t, it is a part of the core.
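Example (the directives are standard; the format string is just a trimmed-down version of the default combined log):

log_format short '$remote_addr - $remote_user [$time_local] "$request" $status';
access_log /var/log/nginx/access.log short;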

Module: map
Sets configuration values based on arbitrary key/value pairs.
Disable: --without-http_map_module
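Example (a minimal sketch; the variable names and paths are arbitrary, and the block lives in the http context):

map $http_host $site_root {
	default              /var/www/default;
	www.beginlinux.com   /var/www/beginlinux.com;
}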

Module: memcached
Allows nginx to serve files out of a memcached cache.
Disable: --without-http_memcached_module
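Example (a sketch; assumes memcached on its default port and a @fallback location defined elsewhere for cache misses):

location / {
	set $memcached_key "$uri";
	memcached_pass 127.0.0.1:11211;
	error_page 404 502 504 = @fallback;
}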

Module: proxy
Proxies connection to upstream servers.
Disable: --without-http_proxy_module
Example: See our article on nginx load balancing [Editor: link to the nginx load balancing article]

Module: referer
Filters requests based on the HTTP “Referer” header. Uses regular expressions, but blocking requires an if statement.
Disable: --without-http_referer_module

valid_referers none blocked server_names
	*.somesiteyoulike.com example.* www.somesiteyoulike.org/some/path
	~\.google\.;

if ($invalid_referer) { #if anybody else, then...
	return 403; #block them!
}

Module: rewrite
Rewrites requests using regular expressions.
Disable: --without-http_rewrite_module

if ($http_user_agent ~ Mozilla) {
	rewrite ^(.*)$ /mozilla/$1 break; #rewrites the URL 
}

if ($invalid_referer) {
	return 403; #returning an error code uses this module
}

Module: scgi
Support for the SCGI protocol.
Disable: --without-http_scgi_module
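Example (mirrors the fastcgi example above; the port is illustrative):

location / {
	include   scgi_params;
	scgi_pass localhost:4000;
}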

Module: split clients
Splits clients into groups by percentage, based on a hashed value (useful for A/B testing).
Disable: --without-http_split_clients_module
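Example (a sketch; the hashed value, percentages, and variable are arbitrary, and the block lives in the http context):

split_clients "${remote_addr}" $variant {
	50%   .one;
	50%   .two;
}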

Module: ssi
Allows for Server Side Includes.
Disable: --without-http_ssi_module
Example:

location / {
	ssi on;
}

Module: upstream
Required for load-balancing and forward proxying.
Disable: Can’t, it is a part of the proxy module
Example: See our article on nginx load balancing [Editor: link to the nginx load balancing article]

Module: upstream ip hash
Allows load balancing based on a hash of the IP address (back end stickiness)
Disable: --without-http_upstream_ip_hash_module
Example: See our article on nginx load balancing [Editor: link to the nginx load balancing article]

Module: user id
User identifying cookies. Used for advanced load balancing and caching setups.
Disable: --without-http_userid_module
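Example (stock directives from the module; the cookie name and lifetime are illustrative):

userid         on;
userid_name    uid;
userid_expires 365d;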

Module: uwsgi
Support for the uWSGI protocol.
Disable: --without-http_uwsgi_module
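Example (mirrors the fastcgi/scgi examples; the socket path is illustrative):

location / {
	include    uwsgi_params;
	uwsgi_pass unix:/run/uwsgi/app.sock;
}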

HTTP Basic Auth with nginx

Sometimes you need to secure a small set of files on your webserver. There is a protocol made for this, called “HTTP Basic Authentication”. Here’s how to set it up in nginx using the stock ngx_http_auth_basic_module module. Please note that this form of security, quite frankly, isn’t very secure. If anyone gets ahold of the associated password file, it will be easy to crack. Keep in mind that this only handles authentication; it does not securely transfer the files over the internet. For that, you will also want to set up SSL.

For this example we’ll be securing everything under the fake website “secure.beginlinux.com”. All of its files will be located at /var/www/secure.beginlinux.com. We’ll assume you just want to securely share a few images or something, so for brevity the examples won’t include things like handling PHP, or any optimizations. These instructions are distribution-agnostic, so it won’t matter what distro you’re running so long as you have a sane UNIX-like environment, and a recent version of Python to help generate the passwords.

First, grab a copy of this well known and maintained helper script that generates passwords. The download link is at the bottom of its page; choose to download in the “Original Format” for a hassle-free experience. This script takes a lot of the leg work out of this process.

Now, let’s set up the new subdomain under nginx. Under your /etc/nginx/sites-enabled directory (or its equivalent), create a file similar to this:

server {
        listen 80;
        server_name secure.beginlinux.com;

        root /var/www/secure.beginlinux.com;
        index index.html index.php;

        location / {
	        auth_basic              "Restricted";
	        auth_basic_user_file    htpasswd;
	        autoindex on;
        }
}

The important parts are the lines in the “location” section that start with “auth_basic”. The first line that simply says “auth_basic” takes a single parameter, which is the “realm” of the authentication. If a user authenticates under one realm on a domain, then accesses other parts of the site that require authentication, they will remain authenticated so long as the realm is the same. The realm is also displayed to the user when they log in, in various ways depending on the browser. You can also set the realm to “off” which will disable authentication for directories deeper in the site’s structure. A site can have multiple realms, and any location can require authentication.

The next line defines what file lists the users and their associated passwords. It is relative to the nginx configuration root, so for most sane installations it will be relative to /etc/nginx, so in this case the file is /etc/nginx/htpasswd.

Once you reload nginx’s configuration with “nginx -s reload”, you can go ahead and test your configuration, but you won’t be able to log in yet, because the password file doesn’t exist.

The next step is to generate the username and password file.

touch /etc/nginx/htpasswd
python htpasswd.py /etc/nginx/htpasswd bob neato

This will append the user “bob” with a password of “neato” to the htpasswd list. Run it as many times as is needed to add all of your users and passwords. There’s no need to reload nginx when adding or removing users.
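If you’d rather skip the helper script, openssl can generate a compatible entry directly (a sketch using the same example user and password; -apr1 produces the Apache-style MD5 hash that nginx understands):

printf "bob:%s\n" "$(openssl passwd -apr1 neato)" >> /etc/nginx/htpasswd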

Load Balancing with Apache

Apache makes for a great load balancer, with the help of mod_proxy and mod_proxy_balancer. If you use Apache as the back end, this makes deploying new back ends very easy, as everything in the cluster will use the same software load out.

In this tutorial, we’ll assume that you already have a site set up and working in Apache. We’ll also assume that you have separate back end servers. File locations and directory paths will be based on Debian/Ubuntu standards, so change them as needed for other distributions.

The first step to take is to enable the modules.

a2enmod proxy
a2enmod proxy_balancer
a2enmod proxy_http    #needed to speak HTTP to the back ends

Now, make a new virtual host, and configure it like this:

<VirtualHost *:80>
        ProxyRequests off #disable forward proxying

        ServerName beginlinux.com

        <Proxy balancer://cluster>
                BalancerMember http://172.31.0.42:80
                BalancerMember http://172.31.0.44:80

                #security
                Order Deny,Allow
                Deny from none
                Allow from all

                #use round-robin balancing
                ProxySet lbmethod=byrequests
        </Proxy>

        #balancer-manager is a tool that lets you monitor and tune the balancer from a browser
        <Location /balancer-manager>
                SetHandler balancer-manager

                #lock this down tightly
                Order deny,allow
                Allow from all
        </Location>

        #what to actually balance
        #in this case, balance everything except the manager
        ProxyPass /balancer-manager !
        ProxyPass / balancer://cluster/
</VirtualHost>

At this point you're practically done. Move all the site files over to the back end servers (and perhaps set up unison to keep the files synchronized), and change the default catch-all virtualhost as needed to make your site run correctly.

The configuration above uses a round-robin approach to load balancing, which is good for anything that doesn't involve user sessions. If you have user authentication of some kind in your application, and sessions are not somehow shared between the back end servers (usually through a database), then you will have to use persistent sessions.

This problem often manifests itself as browsing the site, logging in, then sometimes being logged out and back in as you browse the site. To fix it, we'll need another Apache module.

a2enmod headers

We’ll then need to change a few lines around the Proxy block of configuration.

Header add Set-Cookie "ROUTEPATH=.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED

<Proxy balancer://cluster>
	BalancerMember http://172.31.0.42:80 route=1
	BalancerMember http://172.31.0.44:80 route=2
	ProxySet stickysession=ROUTEPATH
</Proxy>

Now which server you’re using will be stored as a cookie, and you will always connect to that same server.
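To confirm the stickiness is working, a quick check with curl should show the routing cookie being set (hostname per the example above):

curl -sI http://beginlinux.com/ | grep -i '^Set-Cookie: ROUTEPATH'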

Basic Filesharing with NFSv4

In this tutorial we’ll be setting up a basic Network Filesystem version 4 (NFSv4) share, and mount it on a client machine. NFSv4 is the current iteration of the NFS specification. NFS provides similar functionality to Samba (SMB) file sharing, although it provides tighter integration with Kerberos.

For this tutorial we’ll assume Debian-like environments for both the server and the client.

First, let’s set up the server.

apt-get install nfs-kernel-server

Ignore any warnings about missing entries in /etc/exports; we haven’t written the configuration yet. Let’s go ahead and set up the shares, then we’ll write the configuration. NFSv4 works a bit differently from previous versions: instead of exporting separate directory paths that can live anywhere on the filesystem, for security purposes all shares must reside under a single export root. To “work around” this, we use bind mounts. In our example, we want to share the files located under /home/data.

mkdir -p /nfs/data
mount -o bind /home/data /nfs/data

To make this bind mount happen every time on boot, add an entry similar to this to /etc/fstab:
/home/data    /nfs/data   none    bind  0  0

Now we need to configure NFS itself to export that directory. Edit /etc/exports and put down something similar to this:

/nfs         192.168.1.0/24(rw,fsid=0,no_subtree_check,sync)
/nfs/data 192.168.1.0/24(rw,nohide,insecure,no_subtree_check,sync)

This will export the single directory, /nfs/data, to all clients on the network 192.168.1.0/24.

The server is now set up. Let’s restart the daemons, then move on to the client.

service nfs-kernel-server restart
service idmapd restart
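If you want to double-check what is actually being exported, exportfs (part of the standard NFS server tooling) can re-read and print the export table:

exportfs -ra   #re-export everything in /etc/exports
exportfs -v    #list the active exports and their options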

Over on the client, first we need to install the supporting daemons (most distributions install them by default).

apt-get install nfs-common

We can now mount the entire NFS export tree to /mnt like so:

mount -t nfs4 -o proto=tcp,port=2049 the-nfs-server:/ /mnt

Or we can mount just the one share to /data:

mount -t nfs4 -o proto=tcp,port=2049 the-nfs-server:/data /data

To make it happen on boot, add something like this to /etc/fstab. For this example, we're assuming that the “data” export is what you want mounted.

the-nfs-server:/data   /data   nfs4    _netdev,auto  0  0

“_netdev” is a hook used by many init scripts to ensure networking is up. Exact usage depends on your distribution.

To mount it using autofs (highly recommended), add this line to /etc/auto.master (a direct map, so autofs manages the /data mount point itself):

/-        /etc/auto.nfs

Then edit /etc/auto.nfs to contain:

/data   -fstype=nfs4     the-nfs-server:/data

Make sure you remove the /etc/fstab line if you've added it.

nginx Load Balancing

nginx is well known for being a front-end load balancer. It often sits between the client accessing the web site and the PHP back ends that process your dynamic pages. While doing this, it will also perform admirably as a static content server. Here’s how to get this going. For this setup we’ll assume there are three separate servers connected by LAN. One will run nginx and the other two will run whatever back end you need them to; nginx doesn’t care so long as it speaks HTTP, it’s just a proxy.

The most basic section of the site’s configuration is the “upstream” part, where you define the hosts that are being proxied to. The parameter right after the upstream keyword is its name. It can be anything unique; here it is “proxiedhosts”.

upstream proxiedhosts {
	server 172.31.1.90:80;
	server 172.31.1.91:80;
}

The rest is all within the “server” section of the configuration.

location / {
	if (-f $document_root/maintenance.html) {
		return 503;
	}
	if ($host != www.yoursite.com) {
		rewrite ^(.*)$ http://www.yoursite.com$1 permanent;
	}

	proxy_set_header X-Forwarded-Host $http_host;
	proxy_set_header X-Forwarded-Server $host;
	proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
	proxy_set_header X-Real-IP $remote_addr;
	proxy_pass http://proxiedhosts;
}

There is some handy boilerplate in the example configuration shown here. The first “if” clause detects whether the site is down for maintenance, and if it is, returns a 503 Service Unavailable, following best practices (pair it with an error_page 503 directive if you want the maintenance page itself to be displayed). The next “if” clause says that if anyone visits the site not using “www.yoursite.com”, they are redirected with a 301 Moved Permanently code, also in line with current best practices. This will update search engines. None of this is related to proxying back ends, but it is useful configuration to know and have in your toolbox.

The real magic begins with the lines that start with “proxy”. The “proxy_set_header” lines change the appropriate headers so that they match what is coming in from the client. If these lines don’t exist, the back end servers will “see” all requests as coming from the same machine, the front end balancer. These are of vital importance; otherwise the first five people to get their password wrong would lock everybody out of the system, for example, since everything appears to come from the same IP. The “proxy_pass” line merely tells nginx the protocol and which upstream to use. If you only have one back end, you can leave out the “upstream” section and put the IP address of the host here.

That’s the basics of it all. Now we’ll talk a bit about tuning this. In the following configuration, requests will hit the 1st listed server twice as often as the 2nd server. It is a ratio, so change it to taste. This is only useful if your hardware is not identical, or the loads on the hardware aren’t identical; such as having one web server pulling double duty as the database server.

upstream proxiedhosts {
	server 172.31.1.90:80 weight=2;
	server 172.31.1.91:80 weight=1;
}

In this configuration, all requests from the same IP will continue to go to the same host. Some back end applications are picky about this sort of thing, though most applications now store session data in a database so it is becoming less of an issue. If you have a problem where you log in, and are then suddenly logged out, the “ip_hash” will fix it.

upstream proxiedhosts {
	ip_hash;
	server 172.31.1.90:80;
	server 172.31.1.91:80;
}
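The upstream module also understands a few other per-server parameters worth knowing about. Here is a sketch (the third host is hypothetical) that marks a spare server to be used only when the primaries are down, and tightens the failure detection on the others:

upstream proxiedhosts {
	server 172.31.1.90:80 max_fails=3 fail_timeout=30s;
	server 172.31.1.91:80 max_fails=3 fail_timeout=30s;
	server 172.31.1.92:80 backup;
}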