uWSGI Emperor behind Nginx on Debian

uWSGI is an extremely flexible and fast application server which is rapidly growing in popularity. From its early roots in Python it has quickly added support for additional languages such as Ruby, PHP and many others.

I’ll be looking at how we’ve deployed uWSGI behind Nginx using virtual hosting. I’ll also mention some of the clever features (with names like zerg and broodlord) that provide dynamic resource allocation depending on load.

Installation

There are a few ways to install uWSGI, but I find the best approach is to use pip, which will install the latest stable version. It’s worth noting that uWSGI development moves at quite a pace, so if you want even more stability you can use the LTS (long term support) release.

apt-get install build-essential python-dev
...
pip install uwsgi
# Or for the LTS
pip install http://projects.unbit.it/downloads/uwsgi-lts.tar.gz
...
uwsgi --version
1.9.20

That was the easy part! Remember to make sure the binary (which defaults to /usr/local/bin/uwsgi) is executable by the user you’ll be running your applications as.
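
A quick check along these lines does no harm (the path is the default mentioned above; adjust the mode and ownership to suit how you run your apps):

ls -l /usr/local/bin/uwsgi      # confirm the binary exists and check its permissions
chmod 755 /usr/local/bin/uwsgi  # world-executable is usually fine for the binary itself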

Configuring the Emperor

uWSGI supports a mind-boggling number of configuration formats. We will be using the ini format.

The first task is to create the folder structure where the ini, log, pid and socket files will live.

$ mkdir -p /etc/uwsgi/apps-{available,enabled} # this allows us to switch apps on and off
$ mkdir -p /var/{log,run}/uwsgi

Next, create the Emperor configuration, /etc/uwsgi/emperor.ini:

[uwsgi]
daemonize = /var/log/uwsgi/emperor.log 
touch-logreopen = /etc/uwsgi/logrotate.trigger
emperor = /etc/uwsgi/apps-enabled
pidfile = /var/run/uwsgi/emperor.pid
die-on-term = true  

All of the above options are detailed in full in the extensive uWSGI docs. You can run the Emperor with uwsgi --ini /etc/uwsgi/emperor.ini and check everything is working by looking at the log file and making sure the pid file has been created.
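
Something along these lines, using the paths from the config above, is enough to confirm it came up cleanly:

uwsgi --ini /etc/uwsgi/emperor.ini
tail /var/log/uwsgi/emperor.log   # startup messages end up here because of the daemonize option
cat /var/run/uwsgi/emperor.pid    # should contain the Emperor's pid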

Next we need an init script; mine is available here, but you can use something like supervisor if that’s more your thing.
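
If supervisor is your tool of choice, a minimal program definition might look something like this sketch. Note it assumes you drop the daemonize option from emperor.ini so supervisor can manage the process in the foreground, and the file path and program name are only examples:

; /etc/supervisor/conf.d/uwsgi-emperor.conf (example path)
[program:uwsgi-emperor]
command=/usr/local/bin/uwsgi --ini /etc/uwsgi/emperor.ini
autostart=true
autorestart=true
; die-on-term = true in emperor.ini means a plain SIGTERM shuts the Emperor down cleanly
stopsignal=TERM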

You can now place your applications’ ini files in /etc/uwsgi/apps-available and symbolically link them into your apps-enabled folder (an example command follows the tree below). You should end up with a folder structure looking like this.

# config
/etc/uwsgi
├── apps-available
│   ├── app1.ini
│   └── app2.ini
├── apps-enabled
│   ├── app1.ini -> ../apps-available/app1.ini
│   └── app2.ini -> ../apps-available/app2.ini
├── emperor.ini
└── logrotate.trigger
# logs
/var/log/uwsgi
├── app1.log
├── app2.log
└── emperor.log
# pid and sockets
/var/run/uwsgi
├── app1.sock
├── app2.sock
└── emperor.pid
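
Enabling or disabling an app is then just a matter of managing the symlink, for example:

$ ln -s /etc/uwsgi/apps-available/app1.ini /etc/uwsgi/apps-enabled/app1.ini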

Nginx Virtual Host

Configuring Nginx to connect to your uWSGI instance couldn’t be easier, thanks to its native support for the uwsgi protocol.

The simplest configuration requires just two lines: include uwsgi_params; and uwsgi_pass unix:/path/to/socket; (or uwsgi_pass hostname:port; for a TCP socket).

Here is a complete example vhost.

server {
	listen 80;
	server_name myapp.example.net;

	root /var/www/myapp;

	access_log /var/log/nginx/myapp.access;
	error_log /var/log/nginx/myapp.error;

	location / {
		include uwsgi_params;
		uwsgi_pass unix:/var/run/uwsgi/myapp.sock;
	}

	location /static/ {
		alias /var/www/myapp/assets/static/;
	}
}

You can also do other lovely things like caching, but this will get you up and running.
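
If you do want to experiment with caching, something along these lines is a starting point; the zone name, cache path and validity time are arbitrary examples rather than values from our deployment:

# in the http block
uwsgi_cache_path /var/cache/nginx/myapp levels=1:2 keys_zone=myapp_cache:10m;

# inside the location / block, alongside uwsgi_pass
uwsgi_cache myapp_cache;
uwsgi_cache_key $request_uri;
uwsgi_cache_valid 200 10m;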

Auto-Scaling Workers

Note: you must be using Linux and TCP sockets for this to work.

One of the downsides to this configuration is that each application has its own processes and does not share a pool of workers. uWSGI has a solution to this, called zergs, which allows your instances to request additional worker processes from their Emperor.

To configure your Emperor process to service these requests we need to add emperor-broodlord = num to our existing emperor.ini, where num is how many zerg workers are available.
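
With the emperor.ini from earlier, that would look something like this (5 is an arbitrary choice):

[uwsgi]
daemonize = /var/log/uwsgi/emperor.log
touch-logreopen = /etc/uwsgi/logrotate.trigger
emperor = /etc/uwsgi/apps-enabled
pidfile = /var/run/uwsgi/emperor.pid
die-on-term = true
# hand out up to 5 zerg workers to vassals that ask for help
emperor-broodlord = 5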

Next we configure the application to request these zergs. We need to modify our existing config and add a zerg section. Below is an example of a before and after.

# existing config
[uwsgi]
socket = localhost:3031
master = true
module = myapp.wsgi
processes = 1
disable-logging = true
# modified config
[uwsgi]
socket = localhost:3031
master = true
module = myapp.wsgi
processes = 1
disable-logging = true
# new zerg config
vassal-sos-backlog = 10
zerg-server = /var/run/uwsgi/myapp-broodlord.sock

[zerg]
# attach to the zerg server socket defined above
zerg = /var/run/uwsgi/myapp-broodlord.sock
master = true
module = myapp.wsgi
processes = 1
disable-logging = true
idle = 30
die-on-idle = true

This configuration will result in the application asking for reinforcements when it has a backlog of requests greater than 10. The zergs will automatically die when idle for 30 seconds, returning to the pool for other applications to use.

I suggest you play around with these settings and tune them to your specific workload. Set your idle time too low and you might find zergs are constantly starting and stopping; set it too high and you’ll not have resources available for other applications.

There are tons of other great features in uWSGI; I encourage you to explore the docs!

