If my customers read this, they won't remain customers for long. :-) But then, it's roughly the truth: my production servers tend to be a horrible mess, and I sometimes wonder how they manage to work without too many issues. Or, at least, that used to be the truth.

Until very recently I used sometimes Apache and sometimes Lighttpd, relying on FastCGI to make them talk to my Catalyst web applications. I was at least wise enough not to have the FastCGI processes spawned by the web server, but to manage them myself instead: I sometimes used daemontools and sometimes the Gentoo init system for that.

At one point I decided I wanted something which:

  1. was more straightforward to manage
  2. standardized all my production deployments
  3. consumed less memory
  4. could maybe provide a bit of speed-up

After a bit of research I decided to try nginx as the web server and uWSGI to manage application start and stop.

Configuration was all in all fairly easy, but there were a couple of caveats, so I'll go through the entire process.

uWSGI

uWSGI is a great and lightning fast piece of software which can be used to spawn processes for any application which supports the PSGI interface. Catalyst supports it out of the box.

You should find a myapp.psgi file in the top directory of your application. If it doesn't exist (mainly because you created your app before Catalyst began to support PSGI), you can easily create it yourself:

use strict;
use warnings;

use lib './lib';    # so MyApp can be found relative to this file
use MyApp;

# Wrap the Catalyst application in the default middlewares
# (e.g. reverse proxy support) and return the PSGI coderef
my $app = MyApp->apply_default_middlewares(MyApp->psgi_app);
$app;
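
To quickly check that the file works, you can serve it with the development server which comes with Plack (by default it listens on port 5000):

plackup myapp.psgi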

uWSGI is pre-packaged for many distributions, or can be downloaded and compiled. Once you have it installed, you can launch your application as follows:

uwsgi --master --daemonize /var/log/uwsgi/myapp/log --plugins psgi --socket 127.0.0.1:8787 \
    --processes 2 --psgi /usr/local/catalyst/MyApp/myapp.psgi \
    --pidfile /var/run/uwsgi_myapp/myapp.pid

Please note that uWSGI has a ton of options, so you should take a look at the documentation. The example above launches a master process which then spawns 2 worker processes, which are the instances of your application (--psgi /usr/local/catalyst/MyApp/myapp.psgi). The server is bound to a TCP socket (localhost, port 8787). The remaining options tell uWSGI to run as a daemon, to keep a log file and to write the process PID to a file.

You can use threads instead of processes if you wish, or even both.
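
For instance, a minimal sketch mixing both (--threads is a stock uWSGI option; running Perl applications with threads may require a perl built with threads support, so adjust to your setup):

uwsgi --master --daemonize /var/log/uwsgi/myapp/log --plugins psgi \
    --socket 127.0.0.1:8787 --processes 2 --threads 4 \
    --psgi /usr/local/catalyst/MyApp/myapp.psgi \
    --pidfile /var/run/uwsgi_myapp/myapp.pid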

The operating system's init system is an ideal candidate for launching the uWSGI master processes: Gentoo Linux, for instance, has a nice uWSGI configuration system which is straightforward to use (even though I had to patch it a bit in order for it to work properly for my use case).

nginx

Fast and with a minimal memory footprint, with uWSGI support out of the box, nginx is a great web server. It is also surprisingly easy to configure, much more than its rivals! Here's what you need for a virtual host which talks to your uWSGI server:

server {
    server_name www.domain.it;
    
    access_log /var/log/nginx/www.domain.it.access_log main;
    error_log /var/log/nginx/www.domain.it.error_log info;
    
    location / {
        include uwsgi_params;
        uwsgi_pass 127.0.0.1:8787;
        uwsgi_modifier1 5;
    }
    
    location /myapp-static {
        alias /usr/local/catalyst/MyApp/root/static;
    }
}

This configuration maps your web application to the root location (/). The uwsgi_params file contains the parameters which nginx passes on to uWSGI, which are typically the following:

uwsgi_param  QUERY_STRING       $query_string;
uwsgi_param  REQUEST_METHOD     $request_method;
uwsgi_param  CONTENT_TYPE       $content_type;
uwsgi_param  CONTENT_LENGTH     $content_length;

uwsgi_param  REQUEST_URI        $request_uri;
uwsgi_param  PATH_INFO          $document_uri;
uwsgi_param  DOCUMENT_ROOT      $document_root;
uwsgi_param  SERVER_PROTOCOL    $server_protocol;
uwsgi_param  HTTPS              $https if_not_empty;

uwsgi_param  REMOTE_ADDR        $remote_addr;
uwsgi_param  REMOTE_PORT        $remote_port;
uwsgi_param  SERVER_PORT        $server_port;
uwsgi_param  SERVER_NAME        $server_name;

...and it works like a charm! That's all! ... Except, what happens if you don't want to map your application to / but to, say, /app instead? It is entirely possible, but there is a caveat.

There is something in Catalyst which messes the URLs up when you don't map the application to the root location (this also happens with the reverse proxy configuration, while Mojolicious, for instance, works perfectly). It's probably just a matter of writing a Plack middleware: there is one here, but it's not yet on CPAN and I didn't try it. Instead, I modified the nginx configuration as follows:

rewrite ^/app$ /app/ permanent;
location /app/ {
    include uwsgi_params_stripped;
    # Strip /app from PATH_INFO, or Catalyst will break
    set $app_path_info "";
    if ( $document_uri ~ ^/app(.*)$ ) {
        set $app_path_info $1;
    }
    uwsgi_param  SCRIPT_NAME        "/app/";
    uwsgi_param  PATH_INFO          $app_path_info;
    uwsgi_pass 127.0.0.1:8787;
    uwsgi_modifier1 5;
}

An extra SCRIPT_NAME parameter is passed, while PATH_INFO is modified. You also need to include a uwsgi_params_stripped file, which omits the default PATH_INFO line, to avoid passing PATH_INFO with more than one value:

uwsgi_param  QUERY_STRING       $query_string;
uwsgi_param  REQUEST_METHOD     $request_method;
uwsgi_param  CONTENT_TYPE       $content_type;
uwsgi_param  CONTENT_LENGTH     $content_length;

uwsgi_param  REQUEST_URI        $request_uri;
uwsgi_param  DOCUMENT_ROOT      $document_root;
uwsgi_param  SERVER_PROTOCOL    $server_protocol;
uwsgi_param  HTTPS              $https if_not_empty;

uwsgi_param  REMOTE_ADDR        $remote_addr;
uwsgi_param  REMOTE_PORT        $remote_port;
uwsgi_param  SERVER_PORT        $server_port;
uwsgi_param  SERVER_NAME        $server_name;
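
Alternatively, the same prefix-stripping could be done app-side. Here is a minimal, untested sketch of such a middleware in myapp.psgi (the /app mount point is assumed; with this in place the plain uwsgi_params file should suffice):

use strict;
use warnings;

use lib './lib';
use Plack::Builder;
use MyApp;

my $app = MyApp->apply_default_middlewares(MyApp->psgi_app);

builder {
    enable sub {
        my $inner = shift;
        sub {
            my $env = shift;
            # Move the /app prefix from PATH_INFO into SCRIPT_NAME,
            # so Catalyst builds its URIs correctly
            if ($env->{PATH_INFO} =~ s{^/app(?=/|$)}{}) {
                $env->{SCRIPT_NAME} = '/app';
                $env->{PATH_INFO} ||= '/';
            }
            $inner->($env);
        };
    };
    $app;
};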

Note: you can also use FastCGI or a plain HTTP reverse proxy to make the uWSGI server and nginx talk, but the native uwsgi protocol is the most efficient way to do it.
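
If you prefer HTTP proxying anyway, a minimal sketch (this assumes uWSGI was started with --http-socket 127.0.0.1:5000 instead of --socket; the port is arbitrary):

location / {
    proxy_pass http://127.0.0.1:5000;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}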

And what about the lengthy administrative tasks (old file deletion, mail queue processing, ...) your application might have to perform? The easiest way with Catalyst is to create an action (with restricted, maybe IP-based, access) which you execute either by hand or with a cronjob. If one of these tasks requires, say, 15 minutes, you need to configure nginx not to time out while waiting for a response from the application - but you surely don't want to set the gateway timeout to 15 minutes for all your users.

The solution is easy. Just configure another mapping, bound to localhost, with the appropriate settings:

    uwsgi_read_timeout 900; # 15 minutes
    # Maybe disable buffering so if you are sending status messages
    # with $c->res->write() you see them as they are sent
    uwsgi_buffering off;
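
For instance, a sketch of a dedicated location block (the /tasks path and the allowed address are hypothetical; regular visitors keep the default timeout):

location /tasks {
    allow 127.0.0.1;    # hypothetical IP-based restriction
    deny all;
    include uwsgi_params;
    uwsgi_pass 127.0.0.1:8787;
    uwsgi_modifier1 5;
    uwsgi_read_timeout 900; # 15 minutes
    uwsgi_buffering off;
}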

This short HOWTO explains how to set up the excellent nginx to work with an SSL certificate issued by a CA. The whole process is fairly easy, but not completely straightforward.

I'm assuming the host name for which the certificate will be set up is www.domain.ext and the operating system is Gentoo Linux (the process shouldn't be too different on another OS, though). Also, in my example I'm assuming that the certificate is a PositiveSSL from Comodo: using any other equivalent certificate should not make much difference.

First of all, make sure you have OpenSSL and that nginx is compiled with SSL support. In order to create your private key and the certificate request, I suggest you cd to your web server configuration directory:

cd /etc/nginx

before generating the needed files with these two commands:

openssl genrsa -des3 -out www.domain.ext.key 2048
openssl req -new -key www.domain.ext.key -out www.domain.ext.csr

When, after issuing the second command, you are asked for the Common Name, be sure to enter the name of the host where you want to use your certificate, i.e.:

www.domain.ext

This will only work for https://www.domain.ext, not for https://domain.ext or https://anyotherthing.domain.ext. Wildcard certificates exist, but they're more expensive: they're not often that useful, but they are needed, for instance, for SSL name-based virtual hosts (these have some caveats, though).

OK, now you have the certificate request file, www.domain.ext.csr: go to your CA and upload it. After the verifications (which in most cases amount to the verification of an e-mail address inside the domain), you'll get a download link for the certificate, which will likely be a ZIP file. This file contains the certificate (a file named domain.ext.crt or something similar) and maybe the CA "intermediate" certificate (which in the case of PositiveSSL is named positive_bundle.crt).

At this point you have all the needed files, but a couple of actions still need to be performed. If you entered a password when creating the private key with OpenSSL, you'll now most likely want to remove it, otherwise nginx will prompt you for it at every start (which is not so handy):

cp www.domain.ext.key www.domain.ext.key.orig
openssl rsa -in www.domain.ext.key.orig -out www.domain.ext.key

If the file you received from the CA also contained one or more intermediate certificates, you'll need to concatenate them, because nginx wants a single file:

cat www.domain.ext.crt positive_bundle.crt > www.domain.ext.pem

Be sure to put your server certificate at the beginning of the concatenated .pem file, as in the example above: otherwise, nginx will pick the wrong one up.
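
You can double-check the order: openssl x509 only reads the first certificate in a PEM file, so this should print your host name rather than the CA's:

openssl x509 -in www.domain.ext.pem -noout -subject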

For the sake of security you'd better make all these files readable by a single user only:

# Also chown or nginx won't be able to read the files
chown nginx:nginx *.pem *.key *.csr *.crt *.orig
chmod 600 *.pem *.key *.csr *.crt *.orig

The final step is the configuration of the web server. Nginx is incredibly powerful but also extraordinarily easy to manage. Open nginx.conf and add something similar to the following (have a look at the nginx documentation for more options):

server {
        listen 15.15.15.15:443;
        server_name www.domain.ext;

        ssl on;
        ssl_certificate /etc/nginx/www.domain.ext.pem;
        ssl_certificate_key /etc/nginx/www.domain.ext.key;

        access_log /var/log/nginx/www.domain.ext.access_log main;
        error_log /var/log/nginx/www.domain.ext.error_log info;

        root  /usr/local/domains/www.domain.ext;
}

You should be all set and ready to go now!
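
To verify that everything is in order, including the intermediate chain, you can test the handshake from another machine (openssl prints the certificate chain and any verification errors):

openssl s_client -connect www.domain.ext:443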

In this article I'm going to explain some of the problems I face when upgrading libraries, language interpreters and other pieces of software which power the web applications I use in production. I'll then be showing how Perl, Catalyst, DBIx::Class and many of the other CPAN modules I use cleverly solve most of these issues.

The issues with software upgrades

When choosing the instruments for building a new web application (or software in general), a programmer usually bases his decisions on aspects such as knowledge of the language, availability of the needed libraries, speed of development, speed of the compiled code, and a few others. There is however an important aspect which often doesn't get properly evaluated, and which basically is the answer to the question: what is going to happen to my application in 5 years or so?

This question actually needs to be broken down into at least four parts:

  1. What is going to happen when a new version of the language (interpreter, compiler) I use is released?
  2. What is going to happen when a new version of the framework/libraries I use is released?
  3. What is going to happen when the server which hosts the application gets updated?
  4. Do I really need to update libraries/language/system/other software?

Minor releases of language interpreters or compilers (question 1) don't usually feature incompatible changes: if they do, that's probably a bug. Major releases, instead, can. For instance, PHP 5 had some incompatible changes compared to PHP 4 (even though they were just a few). You're not forced to upgrade, but you might actually want to: a configuration option (or an instruction at the top of the source code) which enables or disables the old behaviour would be desirable in situations such as this.

New versions of libraries/modules/frameworks (question 2) sometimes bring incompatible changes, mainly due to the deprecation of features: you can't support legacy things forever, it's a fact. It's however important to have a good deprecation -> removal cycle for features: this warns the users of a library well beforehand, so they have plenty of time to patch their software and can decide when to do it. Since libraries are developed by a lot of different people, this aspect is covered better or worse depending on the developer.

If you are hosted in a data center on a managed server (which sometimes gets upgraded even if you don't ask for it), or if you decide it's time to update your old system, then you need an answer to question 3. It is basically the sum of 1 and 2, plus some more possible incompatibilities with system tools, etc. You should choose a provider which notifies you months ahead of big upgrades to their systems.

So, should you upgrade (question 4)? My opinion is yes: you should do your best to have an up-to-date system of stable, distribution-quality software, because you're likely to get the latest security patches and the best performance. However, there's no reason to hurry an upgrade, except for serious security issues: take your time, as a rushed upgrade is much worse than leaving a working system as is.

The (smart) solutions with Catalyst and Perl

I have had some Perl applications which use Catalyst in production since 2007 or so: Perl was upgraded several times (from version 5.8.8 up to 5.16.1 as of today); libraries were upgraded countless times; the operating system was updated regularly. After all of this, the applications still work with almost no changes in 5 years!

First of all, the main libraries I use (Catalyst and DBIx::Class, plus some Catalyst plugins and tens of other CPAN modules) have an outstanding deprecation policy, which allows me to know way beforehand which API features get removed or changed; also, the code modifications I needed to make were always small enough not to be a real issue.

Perl 5 itself does a pretty good job when it comes to maintaining backwards compatibility. When a new major release comes out (e.g. 5.14 => 5.16), backwards compatibility is the default, as you have to specifically enable the new features with something like:

use v5.12;

# And it's scoped lexically, so you can
# upgrade PARTS of your software
sub mysub {
    use v5.14;
    ...
    {
        use v5.16;
        ...
    }
}

Thanks to these clever features, which actually solve most of the issue for you, upgrading the software underneath your application while keeping the application working (with the added benefits of the upgrade, too!) becomes a much smaller problem. To give an example, I recently upgraded a server with a Catalyst application from perl 5.14.2 to 5.16.1: this involved the reinstallation of some 476 CPAN modules after the upgrade; when it was finished, the application was restarted and continued to run exactly as before, without a single change made.

Also, if you feel you don't want to update your perl interpreter when the operating system gets updated, you're not forced to use the perl bundled with the system: take a look at perlbrew, and you'll have your own interpreter in your user directory (you don't even need root access to compile and install it), fully independent and fully managed by you.
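
A minimal sketch of the workflow (the version number is just an example):

perlbrew install perl-5.16.1    # compile your own perl, no root needed
perlbrew switch perl-5.16.1     # make it the default for your user
perl -v                         # now runs the perlbrew-managed interpreter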

Summing it all up, Perl and its ecosystem are proving to be very trustworthy, and this in turn makes the applications built on them very dependable as well, with all the derived benefits!

YAPC::Europe 2012

YAPC::Europe 2012 has been a fine conference and great fun as well. Here follow some casual comments, from the perspective both of an attendee and of a YAPC (Pisa 2010) organizer.

  • Climate: it was very hot in Frankfurt, much hotter (because of the humidity) than in Italy where I live. The venue had an air conditioning system, but it wasn't that effective with all those people... well, we survived and it went well anyway.
  • Talks: as usual, there were many high quality talks around. And, of course, it's always fun to chat a bit in the hallways with other folks.
  • Internet: WiFi at the venue worked nicely for the most part, but sometimes it was impossible to connect. This is actually normal; it wouldn't even be a conference without some Internet service disruption.
  • Coffee breaks/lunch: there was no included lunch, but the coffee breaks featured a lot of tasty stuff, so lunch was not really needed (even though there were a lot of eateries around the venue, I never had lunch as I was always too full).
  • Pre-conference (and Booking.Com drinks): this was a nice event, and the place had pretty decent burgers too (not to mention some good local brews). The main problem was getting the drinks, as the service was kind of disappointing; but, after all, we managed without too much hassle. We went to the same place two days later as well, where the nice Booking.Com girls organized a much appreciated round of free beer.
  • Dinner: the social dinner was buffet-style, in a big and nicely decorated restaurant near the Südbahnhof. The food was (of course) German-style and I actually liked most of it. Best of all, the place had its own brews (including an exceptional dunkel, a sort of porter): the quality of the beers was so good that it spawned a discussion among some of us about brewing at home, etc...
  • After dinner: I, gbjk, mst, nine and others managed to wander around the city downing a few drinks until 4 AM. The area north of the Lokalbahnhof is particularly interesting in that regard, as it features many nice bars and pubs. Waking up the following day was another matter.
  • Books: I was aware that this year Josette@O'Reilly wasn't coming to the YAPC, as in Germany they have an agent of sorts. This agent brought quite a few books; unfortunately, most of them were in German. It's indeed nice to see many Perl books translated, but since YAPC is an international conference an additional selection in English would have been appreciated.
  • Murder: that's a thing which actually happened to me and is not related to the conference. The first morning of the conference I woke up at my hostel, looked through the window and noticed that the road was closed and forensic police in white suits and masks were marking evidence on the ground: a murder (of a girl, apparently) had just been committed a few meters from the hostel door; the place is in the red light district, which is a bit sleazy but was supposed to be safe...

As always, the greatest conference in Europe!

See you all at the upcoming workshops (Italian Perl Workshop and London Perl Workshop), and maybe at next year's YAPC::Europe in Kiev!

Denmark 2011

And here it is, the diary of my road trip to Denmark... over 5000 km with my faithful Golf (and with Sabry, of course)!

As usual, it's a mix of events, personal considerations and notes about what we saw. And yes... please forgive the typos.
