uWSGI Emperor behind Nginx on Debian
uWSGI is an extremely flexible and fast application server which is rapidly growing in popularity. From its early roots in Python it has quickly added support for additional languages such as Ruby, PHP and many others.
I’ll be looking at how we’ve deployed uWSGI behind Nginx using virtual hosting. I will also mention some of the clever features (with names like zerg and broodlord) that provide dynamic resource allocation depending on load.
Installation
There are a few ways to install uWSGI, but I find the best approach is to use pip, which will install the latest stable version. It’s worth noting that development of uWSGI moves at quite a pace, so if you want even more stability you can use the LTS (long term support) release.
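Assuming pip is already available, installation is a one-liner (the version pin below is illustrative; check the uWSGI docs for the current LTS release):

```shell
# Install the latest stable release system-wide
sudo pip install uwsgi

# Or pin to a release on the LTS branch (version shown is illustrative)
sudo pip install uwsgi==2.0.21
```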
That was the easy part! Remember to make sure the binary (/usr/local/bin/uwsgi by default) can be executed by the user you’ll be running your applications as.
Configuring the Emperor
uWSGI supports a mind boggling number of configuration formats. We will be using the ini format.
The first task is to create the folder structure where the ini, log, pid and socket files will live.
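The exact layout is up to you; here is one possible structure (the paths and the www-data user are assumptions that the rest of this post will follow):

```shell
# Configuration: emperor.ini plus per-app vassal configs
sudo mkdir -p /etc/uwsgi/apps-available /etc/uwsgi/apps-enabled

# Logs, pid files and sockets
sudo mkdir -p /var/log/uwsgi /run/uwsgi

# Make the runtime folders writable by the user your apps run as
sudo chown www-data:www-data /var/log/uwsgi /run/uwsgi
```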
Create the Emperor configuration
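A minimal emperor.ini might look like the following (values are illustrative; adjust the paths and the uid/gid to your setup):

```ini
[uwsgi]
; watch this folder and spawn a vassal for every ini file found
emperor = /etc/uwsgi/apps-enabled
; run in the background and log to a file
daemonize = /var/log/uwsgi/emperor.log
pidfile = /run/uwsgi/emperor.pid
; drop privileges for spawned vassals
uid = www-data
gid = www-data
```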
All the above options are detailed in full in the extensive uWSGI docs. You can run the emperor with uwsgi --ini /etc/uwsgi/emperor.ini and check everything is working by looking at the log file and ensuring the pid file has been created.
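A quick sanity check, assuming the log and pid paths used above:

```shell
tail /var/log/uwsgi/emperor.log
test -s /run/uwsgi/emperor.pid && echo "emperor is up"
```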
Next we need an init script; mine is available here, but you can use something like supervisor if that’s more your thing.
You can now place your application ini files in /etc/uwsgi/apps-available and symbolically link them into your apps-enabled folder. You should end up with a folder structure looking like this.
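For example, to enable a hypothetical myapp vassal:

```shell
sudo ln -s /etc/uwsgi/apps-available/myapp.ini /etc/uwsgi/apps-enabled/myapp.ini
```

Which gives a structure along these lines:

```
/etc/uwsgi
├── emperor.ini
├── apps-available
│   └── myapp.ini
└── apps-enabled
    └── myapp.ini -> ../apps-available/myapp.ini
```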
Nginx Virtual Host
Configuring Nginx to connect to your uWSGI instance couldn’t be easier. This is due to the native uwsgi protocol support.
The simplest configuration requires just two lines: include uwsgi_params; followed by uwsgi_pass unix:/path/to/socket; (or uwsgi_pass hostname:port; for a TCP socket).
Here is an example complete vhost.
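Something along these lines (the server name, socket path and static folder are placeholders):

```nginx
server {
    listen 80;
    server_name example.com;

    location / {
        # hand everything to the uWSGI vassal over its unix socket
        include uwsgi_params;
        uwsgi_pass unix:/run/uwsgi/myapp.sock;
    }

    # serve static files directly from Nginx
    location /static {
        alias /srv/myapp/static;
    }
}
```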
You can also do other lovely things like caching but this will get you up and running.
Auto-Scaling Workers
Note: you must be using Linux and TCP sockets for this to work.
One of the downsides to this configuration is that each application has its own processes and does not share a pool of workers. uWSGI has a solution to this, called zergs, which allows your instances to request additional worker processes from their Emperor.
To configure your Emperor process to service these requests, we need to add emperor-broodlord = num to our existing emperor.ini, where num is how many zerg workers are available.
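With a pool of, say, 40 zerg workers, the emperor.ini from earlier gains one line:

```ini
[uwsgi]
emperor = /etc/uwsgi/apps-enabled
daemonize = /var/log/uwsgi/emperor.log
pidfile = /run/uwsgi/emperor.pid
uid = www-data
gid = www-data
; allow vassals to request up to 40 extra zerg workers
emperor-broodlord = 40
```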
Next we configure the application to request these zergs. We need to modify our existing config and add a zerg section. Below is an example of a before and after.
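A hypothetical before and after for a vassal config (the module name, socket addresses and thresholds are illustrative). Before:

```ini
[uwsgi]
master = true
socket = 127.0.0.1:3031
module = myapp.wsgi
processes = 2
```

After:

```ini
[uwsgi]
master = true
socket = 127.0.0.1:3031
module = myapp.wsgi
processes = 2
; ask the Emperor for reinforcements when the listen queue
; backlog exceeds 10 requests
vassal-sos-backlog = 10
; the pool socket zergs will attach to
zerg-server = /run/uwsgi/myapp-zerg.sock

[zerg]
; the Emperor spawns this section for each reinforcement
master = true
zerg = /run/uwsgi/myapp-zerg.sock
module = myapp.wsgi
processes = 1
; shut a zerg down after 30 seconds idle,
; returning it to the pool
idle = 30
die-on-idle = true
```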
This configuration will result in the application asking for reinforcements when it has a backlog of requests greater than 10. The zergs will automatically die when idle for 30 seconds, returning to the pool for other applications to use.
I suggest you play around with these settings and tune them to your specific workload. Set your idle time too low and you might find zergs are constantly starting and stopping; set it too high and you’ll not have resources available for other applications.
There are tons of other great features in uWSGI, I encourage you to explore the docs!