February 2013 Archives

If my customers read this, they won't remain customers for long. :-) But then, it's roughly the truth: my production servers tend to be a horrible mess and I sometimes wonder how they manage to work without too many issues. Or, at least, that used to be the truth.

Until very recently I used sometimes Apache and sometimes Lighttpd, relying on FastCGI to make them talk to the Catalyst web applications. I was at least wise enough not to have the FastCGI processes spawned by the web server, but to manage them myself instead: sometimes with daemontools, sometimes with Gentoo's init system.

At one point I decided I wanted something which:

  1. was more straightforward to manage
  2. standardized all my production deployments
  3. consumed less memory
  4. could maybe provide a bit of speed-up

After a bit of research I decided to try nginx as the web server and uWSGI to manage application start and stop.

Configuration was all in all fairly easy, but there were a couple of caveats, so I'll go through the entire process.

uWSGI

uWSGI is a great and lightning-fast piece of software which can be used to spawn processes for any application which supports the PSGI interface. Catalyst supports it out of the box.

You should find a myapp.psgi file in the top directory of your application. If it doesn't exist (mainly because you created your app before Catalyst began to support PSGI), you can easily create it yourself:

use strict;
use warnings;

use lib './lib';
use MyApp;

my $app = MyApp->apply_default_middlewares(MyApp->psgi_app);
$app;
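
Before involving uWSGI at all, you can check that the .psgi file actually loads by running it with plackup, which comes with Plack (a dependency of recent Catalyst versions); a quick sketch:

cd /usr/local/catalyst/MyApp
plackup myapp.psgi    # starts a single-process test server on port 5000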

uWSGI is pre-packaged for many distributions, or can be downloaded and compiled. Once you have it installed, you can launch your application as follows:

uwsgi --master --daemonize /var/log/uwsgi/myapp/log --plugins psgi --socket 127.0.0.1:8787 \
    --processes 2 --psgi /usr/local/catalyst/MyApp/myapp.psgi \
    --pidfile /var/run/uwsgi_myapp/myapp.pid

Please note that uWSGI has a ton of options, so you should take a look at the documentation. The example above launches a master process which then spawns 2 worker processes, which are the instances of your application (--psgi /usr/local/catalyst/MyApp/myapp.psgi). The server is bound to a TCP socket (localhost, port 8787). The remaining options tell uWSGI to run as a daemon, to keep a log file and to write the process PID to a file.

You can use threads instead of processes if you wish, or even both.
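
For instance, something like this should give you 2 processes with 4 threads each (assuming your uWSGI PSGI plugin was built with thread support; the numbers are just an example):

uwsgi --master --daemonize /var/log/uwsgi/myapp/log --plugins psgi --socket 127.0.0.1:8787 \
    --processes 2 --threads 4 --psgi /usr/local/catalyst/MyApp/myapp.psgi \
    --pidfile /var/run/uwsgi_myapp/myapp.pid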

The operating system's init system is an ideal place to launch the uWSGI master processes from: Gentoo Linux, for instance, has a nice uWSGI configuration system which is straightforward to use (even though I had to patch it a bit in order for it to work properly for my use case).
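
Whatever init system you end up using, the pidfile written above is what you hand back to uWSGI to control the instance; a minimal sketch:

# Gracefully reload the workers
uwsgi --reload /var/run/uwsgi_myapp/myapp.pid
# Stop the instance
uwsgi --stop /var/run/uwsgi_myapp/myapp.pid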

nginx

Fast and with a minimal memory footprint, with uWSGI support out of the box, nginx is a great web server. It is also surprisingly easy to configure, much more than its rivals! This is what you need for a virtual host which talks to your uWSGI server:

server {
    server_name www.domain.it;
    
    access_log /var/log/nginx/www.domain.it.access_log main;
    error_log /var/log/nginx/www.domain.it.error_log info;
    
    location / {
        include uwsgi_params;
        uwsgi_pass 127.0.0.1:8787;
        uwsgi_modifier1 5;
    }
    
    location /myapp-static {
        alias /usr/local/catalyst/MyApp/root/static;
    }
}

This configuration maps your web application to the root location (/). The uwsgi_params file contains the parameters which nginx passes on to uWSGI, typically the following:

uwsgi_param  QUERY_STRING       $query_string;
uwsgi_param  REQUEST_METHOD     $request_method;
uwsgi_param  CONTENT_TYPE       $content_type;
uwsgi_param  CONTENT_LENGTH     $content_length;

uwsgi_param  REQUEST_URI        $request_uri;
uwsgi_param  PATH_INFO          $document_uri;
uwsgi_param  DOCUMENT_ROOT      $document_root;
uwsgi_param  SERVER_PROTOCOL    $server_protocol;
uwsgi_param  HTTPS              $https if_not_empty;

uwsgi_param  REMOTE_ADDR        $remote_addr;
uwsgi_param  REMOTE_PORT        $remote_port;
uwsgi_param  SERVER_PORT        $server_port;
uwsgi_param  SERVER_NAME        $server_name;
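
At this point, check the configuration syntax and reload nginx (how you reload depends on your init system; this is the plain nginx way):

nginx -t
nginx -s reload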

And it works like a charm! That's all! ... Except, what happens if you don't want to map your application to /, but to, say, /app instead? It is entirely possible, but there is a caveat.

There is something in Catalyst which messes the URLs up when you don't map the application to the root location (this also happens with a reverse proxy configuration, while Mojolicious, for instance, works perfectly). It's probably just a matter of writing a Plack middleware for nginx: there is one here, but it's not yet on CPAN and I didn't try it. Instead, I modified the nginx configuration as follows:

rewrite ^/app$ /app/ permanent;
location /app/ {
    include uwsgi_params_stripped;
    # Strip /app from PATH_INFO, or Catalyst will break
    set $app_path_info "";
    if ( $document_uri ~ ^/app(.*)$ ) {
        set $app_path_info $1;
    }
    uwsgi_param  SCRIPT_NAME        "/app/";
    uwsgi_param  PATH_INFO          $app_path_info;
    uwsgi_pass 127.0.0.1:8787;
    uwsgi_modifier1 5;
}
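
A quick way to verify the mapping, assuming the www.domain.it virtual host from the example above, is to request both forms of the URL with curl:

# Should answer with a 301 redirect to /app/
curl -sI http://www.domain.it/app
# Should reach the Catalyst application
curl -sI http://www.domain.it/app/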

An extra SCRIPT_NAME parameter is passed, while PATH_INFO is modified. You also need to include a uwsgi_params_stripped file, which is the standard uwsgi_params minus the PATH_INFO line, so that PATH_INFO is not passed twice:

uwsgi_param  QUERY_STRING       $query_string;
uwsgi_param  REQUEST_METHOD     $request_method;
uwsgi_param  CONTENT_TYPE       $content_type;
uwsgi_param  CONTENT_LENGTH     $content_length;

uwsgi_param  REQUEST_URI        $request_uri;
uwsgi_param  DOCUMENT_ROOT      $document_root;
uwsgi_param  SERVER_PROTOCOL    $server_protocol;
uwsgi_param  HTTPS              $https if_not_empty;

uwsgi_param  REMOTE_ADDR        $remote_addr;
uwsgi_param  REMOTE_PORT        $remote_port;
uwsgi_param  SERVER_PORT        $server_port;
uwsgi_param  SERVER_NAME        $server_name;

Note: you can also use FastCGI or a reverse proxy to make the uWSGI server and nginx talk, but the native uwsgi protocol support is the most efficient way to do it.
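
For the record, the reverse proxy variant looks roughly like this: uWSGI speaks plain HTTP on its socket and nginx proxies to it (a sketch only, not what I use in production; the port is arbitrary):

# uWSGI side: use an HTTP socket instead of the uwsgi protocol
uwsgi --master --plugins psgi --http-socket 127.0.0.1:5000 \
    --psgi /usr/local/catalyst/MyApp/myapp.psgi

# nginx side
location / {
    proxy_pass http://127.0.0.1:5000;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}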

And what about the lengthy administrative tasks (old file deletion, mail queue processing, ...) your application might have to perform? The easiest way with Catalyst is to create an action (with restricted, maybe IP-based, access) which you execute either by hand or with a cron job. If one of these tasks requires, say, 15 minutes, you need to configure nginx not to time out while waiting for a response from the application - but you surely don't want to set the gateway timeout to 15 minutes for all your users.

The solution is easy. Just configure another mapping, only reachable from localhost, with the appropriate settings:

    uwsgi_read_timeout 900; # 15 minutes
    # Maybe disable buffering so if you are sending status messages
    # with $c->res->write() you see them as they are sent
    uwsgi_buffering off;
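
Put together, the extra mapping might look like this (the /admin-tasks path is just a made-up example, and the allow/deny lines restrict it to requests from the machine itself):

location /admin-tasks {
    # Only reachable from localhost
    allow 127.0.0.1;
    deny all;

    include uwsgi_params;
    uwsgi_pass 127.0.0.1:8787;
    uwsgi_modifier1 5;

    uwsgi_read_timeout 900; # 15 minutes
    # Maybe disable buffering so if you are sending status messages
    # with $c->res->write() you see them as they are sent
    uwsgi_buffering off;
}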

This short HOWTO explains how to set up the excellent nginx to work with an SSL certificate issued by a CA. The whole process is fairly easy, but not completely straightforward.

I'm assuming the host name for which the certificate will be set up is www.domain.ext and the operating system is Gentoo Linux (the process shouldn't be too different with another OS, though). Also, in my example I'm assuming that the certificate is a PositiveSSL from Comodo: using any other equivalent certificate should not make much difference.

First of all, make sure you have OpenSSL and that nginx is compiled with SSL support. In order to create your private key and the certificate request, I suggest you cd to your web server's configuration directory:

cd /etc/nginx

before generating the needed files with these two commands:

openssl genrsa -des3 -out www.domain.ext.key 2048
openssl req -new -key www.domain.ext.key -out www.domain.ext.csr

When, after issuing the second command, you are asked for the Common Name, be sure to enter the name of the host where you want to use your certificate, i.e.:

www.domain.ext

This will only work for https://www.domain.ext, and not for https://domain.ext or https://anyotherthing.domain.ext. Wildcard certificates exist, but they're more expensive: they don't seem all that useful to me, but they are for instance needed for SSL name-based virtual hosts (which have some caveats of their own).
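
If you want to double check what ended up in the request, Common Name included, openssl can print it back:

openssl req -in www.domain.ext.csr -noout -subject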

OK, now you have the certificate request file, www.domain.ext.csr: go to your CA and upload it. After the verifications (which in most cases just amount to confirming an e-mail address within the domain), you'll get a download link for the certificate, which will likely be a ZIP file. This file contains the certificate (a file named domain.ext.crt or something similar) and maybe the CA "intermediate" certificate (which in the case of PositiveSSL is named positive_bundle.crt).

At this point you have all the needed files, but a couple of actions still need to be performed. If you entered a password when creating the private key with OpenSSL, you'll now most likely want to remove it, otherwise nginx will always prompt you for it when starting (which is not so handy):

cp www.domain.ext.key www.domain.ext.key.orig
openssl rsa -in www.domain.ext.key.orig -out www.domain.ext.key

If the file you received from the CA also contained one or more intermediate certificates, you'll need to concatenate them, because nginx wants a single file:

cat www.domain.ext.crt positive_bundle.crt > www.domain.ext.pem

Be sure to put your server certificate at the beginning of the concatenated pem file, as in the example above: otherwise, nginx will pick the wrong one up.
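
If you want to be extra sure that the private key and the concatenated certificate belong together, compare their moduli; the two hashes must be identical:

openssl x509 -noout -modulus -in www.domain.ext.pem | openssl md5
openssl rsa  -noout -modulus -in www.domain.ext.key | openssl md5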

For the sake of security you'd better make all these files readable only by their owner:

# Also chown or nginx won't be able to read the files
chown nginx:nginx *.pem *.key *.csr *.crt *.orig
chmod 600 *.pem *.key *.csr *.crt *.orig

The final step is the configuration of the web server. Nginx is incredibly powerful but also extraordinarily easy to manage. Open nginx.conf and add something similar to the following (have a look at the nginx documentation for more options):

server {
        listen 15.15.15.15:443;
        server_name www.domain.ext;

        ssl on;
        ssl_certificate /etc/nginx/www.domain.ext.pem;
        ssl_certificate_key /etc/nginx/www.domain.ext.key;

        access_log /var/log/nginx/www.domain.ext.access_log main;
        error_log /var/log/nginx/www.domain.ext.error_log info;

        root  /usr/local/domains/www.domain.ext;
}
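
Once nginx has been reloaded you can also verify, with openssl's built-in client, that the certificate and the intermediate chain are actually being served (look at the "Certificate chain" section and at the "Verify return code" line in the output):

openssl s_client -connect www.domain.ext:443 -servername www.domain.ext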

You should be all set and ready to go now!
