Choosing the Right Strategy



Do it like I do it!?

There is no such thing as the right strategy in the web server business, although there are many wrong ones. Never believe a person who says: "Do it this way, this is the best!". As the old saying goes: "Trust but verify". There are too many technologies out there to choose from, and it would take an enormous investment of time and money to try to validate each one before deciding which is the best choice for your situation.

With this in mind, I will present some ways of using standalone mod_perl, and some combinations of mod_perl and other technologies. I'll describe how these things work together, offer my opinions on the pros and cons of each, the relative degree of difficulty in installing and maintaining them, and some hints on approaches that should be used and things to avoid.

To be clear, I will not address all technologies and tools, but limit this discussion to those complementing mod_perl.

Please let me stress it again: do not blindly copy someone's setup and hope for a good result. Choose what is best for your situation -- it might take some effort to find out what that is.

In this chapter we will discuss the different mod_perl deployment schemes and the considerations involved in choosing among them.



mod_perl Deployment Overview

There are several different ways to build, configure and deploy your mod_perl enabled server. Some of them are:

  1. Having one binary and one configuration file (one big binary for mod_perl).

  2. Having two binaries and two configuration files (one big binary for mod_perl and one small binary for static objects like images).

  3. Having one DSO-style binary and two configuration files, with mod_perl available as a loadable object.

  4. Any of the above plus a reverse proxy server in http accelerator mode.

If you are a newbie, I would recommend that you start with the first option and work on getting your feet wet with Apache and mod_perl. Later, you can decide whether to move to the second one which allows better tuning at the expense of more complicated administration, or to the third option -- the more state-of-the-art-yet-suspiciously-new DSO system, or to the fourth option which gives you even more power.

  1. The first option will kill your production site if you serve a lot of static data from large (4 to 15MB) webserver processes. On the other hand, while testing you will have no other server interaction to mask or add to your errors.

  2. This option allows you to tune the two servers individually, for maximum performance.

    However, you need to choose between running the two servers on multiple ports, multiple IPs, etc., and you have the burden of administering more than one server. You have to deal with proxying or fancy site design to keep the two servers in synchronization.

  3. With DSO, modules can be added and removed without recompiling the server, and their code is even shared among multiple servers.

    You can compile just once and yet have more than one binary, by using different configuration files to load different sets of modules. The different Apache servers loaded in this way can run simultaneously to give a setup such as described in the second option above.

    On the down side, you are playing at the bleeding edge.

    You are dealing with a new solution that has weak documentation and is still subject to change. It is still somewhat platform specific. Your mileage may vary.

    The DSO module (mod_so) adds size and complexity to your binaries.

    Refer to the section "Pros and Cons of Building mod_perl as DSO" for more information.

    Build details: Build mod_perl as DSO inside Apache source tree via APACI (a minimal build sketch follows this list).

  4. The fourth option (proxy in http accelerator mode), once correctly configured and tuned, improves the performance of any of the above three options by caching and buffering page results.
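To give you a feel for the third option, here is a minimal build sketch (the version numbers and paths are placeholders only; the build details section referenced in item 3 has the real procedure):

 
  % cd mod_perl-1.xx
  % perl Makefile.PL USE_APACI=1 USE_DSO=1 EVERYTHING=1 \
        APACHE_SRC=../apache_1.3.xx/src DO_HTTPD=1
  % make && make test && make install

The resulting libperl.so can then be loaded, or left out, by each configuration file independently:

 
  LoadModule perl_module libexec/libperl.so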



Alternative architectures for running one and two servers

The next part of this chapter discusses the pros and cons of each of the configurations presented. Real World Scenarios Implementation describes the implementation techniques of these schemes.

We will look at the following installations: a standalone mod_perl enabled Apache server; one plain Apache and one mod_perl enabled Apache server; one light non-Apache server and one mod_perl enabled Apache server; and any of these with a proxy server added in http accelerator mode.



Standalone mod_perl Enabled Apache Server

The first approach is to implement a straightforward mod_perl server. Just take your plain Apache server and add mod_perl, like you add any other Apache module. You continue to run it at the port it was using before. You probably want to try this before you proceed to more sophisticated and complex techniques.

The advantages:

The disadvantages:

If you are new to mod_perl, this is probably the best way to get yourself started.

And of course, if your site is serving only mod_perl scripts (close to zero static objects, like images), this might be the perfect choice for you!

For implementation notes, see the ``One Plain and One mod_perl enabled Apache Servers'' section in the implementations chapter.



One Plain Apache and One mod_perl-enabled Apache Servers

As I have mentioned before, when running scripts under mod_perl you will notice that the httpd processes consume a huge amount of virtual memory -- from 5MB to 15MB and even more. That is the price you pay for the enormous speed improvements under mod_perl. (Again -- shared memory keeps the real memory that is being used much smaller :)

Using these large processes to serve static objects like images and html documents is overkill. A better approach is to run two servers: a very light, plain Apache server to serve static objects and a heavier mod_perl-enabled Apache server to serve requests for dynamic (generated) objects (aka CGI).

From here on, I will refer to these two servers as httpd_docs (vanilla Apache) and httpd_perl (mod_perl enabled Apache).
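Just to sketch the idea, the two configuration files might contain something along these lines (ports, paths and directives are illustrative only; the implementations chapter gives the real configuration):

 
  # httpd_docs.conf -- the light server, serving only static objects:
  Port 80
  DocumentRoot /home/httpd/docs
 
  # httpd_perl.conf -- the heavy mod_perl enabled server, scripts only:
  Port 8080
  Alias /perl/ /home/httpd/perl/
  <Location /perl>
    SetHandler perl-script
    PerlHandler Apache::Registry
    Options ExecCGI
    PerlSendHeader On
  </Location>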

The advantages:

An important note: when a user browses static pages and the base URL in the Location window points to the static server, for example http://www.example.com/index.html, all relative URLs (e.g. <A HREF="/main/download.html">) are served by the light plain Apache server. This is not the case with dynamically generated pages: when the base URL in the Location window points to the dynamic server (e.g. http://www.example.com:8080/perl/index.pl), all relative URLs in the dynamically generated HTML will be served by the heavy mod_perl processes. You must use fully qualified URLs and not relative ones! http://www.example.com/icons/arrow.gif is a full URL, while /icons/arrow.gif is a relative one. Using <BASE HREF="http://www.example.com/"> in the generated HTML is another way to handle this problem. Alternatively, the httpd_perl server could rewrite the requests back to httpd_docs, but that is much slower and the requests still demand the attention of the heavy servers. None of this is an issue if you hide the internal port implementation, so that the client sees only one server running on port 80. (See Publishing Port Numbers other than 80)
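For example, a script running on the httpd_perl server might emit the <BASE> tag explicitly (a hypothetical fragment, just to show the idea):

 
  print qq{<HTML><HEAD>\n},
        qq{<BASE HREF="http://www.example.com/">\n},
        qq{</HEAD><BODY>\n},
        qq{<IMG SRC="/icons/arrow.gif">\n},   # now fetched from the light httpd_docs
        qq{</BODY></HTML>\n};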

The disadvantages:

Before you go on with this solution you really want to look at the Adding a Proxy Server in http Accelerator Mode section.

For implementation notes see the ``One Plain and One mod_perl enabled Apache Servers'' section in the implementations chapter.



One light non-Apache and One mod_perl enabled Apache Servers

If the only requirement from the light server is for it to serve static objects, then you can get away with non-Apache servers that have an even smaller memory footprint. thttpd has been reported to be about 5 times faster than Apache (especially under a heavy load), since it is very simple, uses almost no memory (260K) and does not spawn child processes.

Meta: no personal experience here, only rumours. Please let me know if I have missed some pros/cons here. Thanks!

The Advantages:

The Disadvantages:

Another interesting choice is the kHTTPd webserver for Linux. kHTTPd is different from other webservers in that it runs from within the Linux kernel as a module (device driver). kHTTPd handles only static (file based) web pages, and passes all requests for non-static information to a regular userspace webserver such as Apache. For more information see http://www.fenrus.demon.nl/.



Adding a Proxy Server in http Accelerator Mode

At the beginning there were two servers: one plain Apache server, which was very light, and configured to serve static objects, the other mod_perl enabled (very heavy) and configured to serve mod_perl scripts and handlers. As you remember we named them httpd_docs and httpd_perl respectively.

In the dual-server setup presented earlier the two servers coexist at the same IP address by listening to different ports: httpd_docs listens to port 80 (e.g. http://www.example.com/images/test.gif) and httpd_perl listens to port 8080 (e.g. http://www.example.com:8080/perl/test.pl). Note that we did not write http://www.example.com:80 for the first example, since port 80 is the default port for the http service. Later on, we will be changing the configuration of the httpd_docs server to make it listen to port 81.

This section will attempt to convince you that you really want to deploy a proxy server in the http accelerator mode. This is a special mode that in addition to providing the normal caching mechanism, accelerates your CGI and mod_perl scripts.

The advantages of using the proxy server in conjunction with mod_perl are:

The disadvantages are:

Have I succeeded in convincing you that you want a proxy server?

Of course if you are on a very fast local area network (LAN) (which means that all your users are connected from this LAN and not from the outside), then the big benefit of the proxy buffering the output and feeding a slow client is gone. You are probably better off sticking with a straight mod_perl server in this case.



Implementations of Proxy Servers

As of this writing, two proxy implementations are widely used with mod_perl: the squid proxy server, and mod_proxy, which is part of the Apache server. Let's compare them.



The Squid Server

The Advantages:

The Disadvantages:

The pros and cons presented above suggest that you might want to use squid for its dynamic content buffering features, but only if your server serves mostly dynamic requests. So in this situation, when performance is the goal, it is better to have a plain Apache server serving static objects, and squid proxying only the mod_perl enabled server.

For implementation details, see the sections Running One Webserver and Squid in httpd Accelerator Mode and Running Two Webservers and Squid in httpd Accelerator Mode in the implementations chapter.
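Just to give a taste of the setup, a minimal squid.conf sketch for the accelerator mode might look like this (Squid 2.x directive names with illustrative values; the sections above contain the complete configuration):

 
  http_port 80
  httpd_accel_host localhost
  httpd_accel_port 8080
  httpd_accel_with_proxy off
  httpd_accel_uses_host_header off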



Apache's mod_proxy

I do not think the difference in speed between Apache's mod_proxy and squid is relevant for most sites, since the real value of what they do is buffering for slow client connections. However, squid runs as a single process and probably consumes fewer system resources.

The trade-off is that mod_rewrite is easy to use if you want to spread parts of the site across different back end servers, while mod_proxy knows how to fix up redirects containing the back-end server's idea of the location. With squid you can run a redirector process to proxy to more than one back end, but there is a problem in fixing redirects in a way that keeps the client's view of both server names and port numbers in all cases.

The difficult case is where you have DNS aliases that map to the same IP address, the server runs on a non-standard port, you want the redirect to go to port 80, and you want to keep the specific name the browser has already sent, so that it does not change in the client's Location window.
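A squid redirector, by the way, is just a program that reads one request per line on its standard input and prints back the (possibly rewritten) URL. A hypothetical sketch that sends /perl/ requests to the mod_perl back-end and leaves everything else untouched:

 
  #!/usr/bin/perl -w
  $| = 1;   # squid expects an immediate answer for every line it sends
  while (<STDIN>) {
      # each input line is: URL ip-address/fqdn ident method
      my ($url) = split;
      $url =~ s[^http://www\.example\.com/perl/][http://www.example.com:8080/perl/];
      print "$url\n";
  }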

The Advantages:

For implementation see the ``Using mod_proxy'' section in the implementation chapter.
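As a minimal sketch (host name and port are illustrative; the ``Using mod_proxy'' section has the real configuration), the front-end server needs little more than:

 
  ProxyPass        /perl/ http://localhost:8080/perl/
  ProxyPassReverse /perl/ http://localhost:8080/perl/

ProxyPassReverse is the directive that rewrites the Location header of redirects issued by the back-end, so the client never sees the internal host name or port.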



When One Machine is not Enough for RDBMS DataBase and mod_perl

Imagine a scenario where you start your business as a small service providing web site. After a while your business becomes very popular, and at some point you realize that it has outgrown the capacity of your machine. Therefore you decide to upgrade your current machine with lots of memory, a cutting edge and very expensive CPU, and an ultra-fast hard disk. As a result the load goes back to normal, but not for long: the demand for your services keeps growing, and only a little while after you've upgraded your machine it once again cannot cope with the load. Should you buy an even stronger and more expensive machine, or start looking for another solution? Let's explore the possible solutions to this problem.

A typical web service consists of two main software components, the database server and the web server.

A typical user-server interaction consists of accepting the query parameters entered into an HTML form and submitted to the web server by a user, converting these parameters into a database query, sending it to the database server, accepting the results of the executed query, formatting them into a nice HTML page, and sending it to a user's Internet browser or another application that created the request (e.g. WAP).

This figure depicts the above description:

 
               1                      2
  [        ] ====> [               ] ====> [                 ]
  [ Client ]       [ Apache Server ]       [ Database Server ]
  [        ] <==== [               ] <==== [                 ]
               4                       3

This schema is known as a 3-tier architecture in the computing world.

A 3-tier architecture means splitting up several processes of your computing solution between different machines.

We are interested only in the second and the third tiers; we don't specify user machine requirements, since mod_perl is all about server side programming. The only thing the client should be able to do is to render the generated HTML from the response, which any simple browser will do. Of course I'm not talking about the case where you return some heavy Java applets, but that movie is screened in another theater.



Servers' Requirements

Let's first understand what kind of software the web and database servers are, what they need to run fast and what implications they have on the rest of the system software.

The three important machine components are the hard disk, the amount of RAM and the CPU type.

Typically the mod_perl server is mostly RAM hungry, while the SQL database server mostly needs a very fast hard disk. Of course if your mod_perl process reads a lot from the disk (quite an infrequent phenomenon) you will need a fast disk too. And if your database server has to do a lot of sorting of big tables and lots of big table joins, it will need a lot of RAM too.

If we were to specify average ``virtual'' requirements for each machine, this is what we'd get:

An "ideal" mod_perl machine:

 
  * HD:  low-end (no real IO, mostly logging)
  * RAM: the more the better
  * CPU: medium to high (according to needs)

An "ideal" database server machine:

 
  * HD:  high-end
  * RAM: large amounts   (big joins, sorting of many records)
         small amounts (otherwise)
  * CPU: medium to high (according to needs)



The Problem

With the database and the httpd on the same machine, you have conflicting interests.

During peak loads, Apache will spawn more processes and use RAM that the database server might have been using, or that the kernel was using on its behalf in the form of cache. You will starve your database of resources at the time when it needs those resources the most.

Disk I/O contention is the biggest issue. Adding another disk wouldn't cut I/O times, because the database is the only component doing significant I/O: the mod_perl processes have all their code loaded in memory. (I'm talking about code that does pure Perl and SQL processing.) So it is clear that the DB is I/O and CPU bound (and RAM bound only if there are big joins to make), while mod_perl is CPU bound and mostly RAM bound.

The problem exists, but it doesn't mean that you cannot run the application and the web servers on the same machine. There is a very high degree of parallelism in modern PC architecture. The I/O hardware is helpful here. The machine can do many things while a SCSI subsystem is processing a command, or the network hardware is writing a buffer over the wire.

If a process is not runnable (that is, it is blocked waiting for I/O or similar), it is not using significant CPU time. The only CPU time that will be required to maintain a blocked process is the time it takes for the operating system's scheduler to look at the process, decide that it is still not runnable, and move on to the next process in the list. This is hardly any time at all. If there are two processes and one of them is blocked on I/O and the other is CPU bound, the blocked process is getting 0% CPU time, the runnable process is getting 99.9% CPU time, and the kernel scheduler is using the remainder.



The Solution

The solution is to add another machine, which allows a setup where both the database and the web server run on their own dedicated machines.



Pros



Cons



Three Machines Model

Since we are talking about using a dedicated machine for each server, you might consider adding a third machine to do the proxy work; this will make your setup even more flexible, since it will enable you to proxy-pass all requests not just to one mod_perl box, but to many of them. It will enable you to do load balancing if and when you need it.

Generally the proxy machine can be very light when it serves just a little traffic and mainly proxy-passes to the mod_perl processes. Of course you can use this machine to serve the static content and then the hardware requirement will depend on the number of objects you will have to serve and the rate at which they are requested.



Running More than One mod_perl Server on the Same Machine

Let's assume that you have two different sets of scripts/code which have little or nothing in common: different modules, no code sharing. Typical numbers can be four megabytes of unshared and four megabytes of shared memory for each code set, plus three megabytes of shared basic mod_perl stuff -- which makes each process 19MB in size when the two code sets are loaded (3MB (server) + 4MB (shared 1st code set) + 4MB (unshared 1st code set) + 4MB (shared 2nd code set) + 4MB (unshared 2nd code set)). Under this scenario, eleven megabytes are shared and eight megabytes are not.

We assume that four megabytes is the size of each code set's unshared memory. This is a pretty typical size of unshared memory, especially when connecting to databases, as the database connections cannot be shared. Databases like Oracle can take even more RAM per connection on top of this.

Let's assume that we have 260 megabytes of RAM dedicated to the webserver.

According to the equation developed in the section: ``Choosing MaxClients'':

 
                    Total_RAM - Max_Process_Size
  MaxClients = ---------------------------------------
               Max_Process_Size - Shared_RAM_per_Child

 
  MaxClients = (260 - 19)/(19-11) = 30

We see that we can run about 30 processes, using the given memory and the two code sets in the same server.

Now consider this practical decision. Since we have recognized that the code sets are very distinct in nature and there is no significant memory sharing in place, the wise thing to do is to split the two code sets between two mod_perl servers (a single mod_perl server is actually a parent process plus a number of child processes). So instead of running everything on one server, we now move the second code set onto another mod_perl server. At this point we are still talking about a single machine.

Let's look at the figures again. After the split we will have 15 servers of eleven megabytes (4MB unshared + 7MB shared) and another 15 servers of eleven megabytes.

How much memory do we need now? From the above equation we derive:

 
  Total_RAM = MaxClients * (Max_Process_Size - Shared_RAM_per_Child)
              + Max_Process_Size

And using the numbers:

 
  Total_RAM = 2 * (15 * (11-7) + 11) = 142

A total of 142 megabytes of memory is required. But, hey, we have 260MB of memory, so we've got 118MB of memory freed up. If we recalculate MaxClients we will see that we can run almost 60 servers:

 
  MaxClients = (260 - 11*2)/(11-7) = 60

So we can run about 30 more servers using the same amount of memory -- 30 servers for each code set. We have doubled the server pool without changing the machine's hardware.
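If you want to play with the numbers yourself, a toy script reproduces both calculations (the process sizes are the ones assumed in the text):

 
  #!/usr/bin/perl -w
  my $ram = 260;    # MB dedicated to the webserver
 
  # both code sets in one server: 19MB per process, 11MB of it shared
  my $combined = int( ($ram - 19) / (19 - 11) );
 
  # one code set per server: 11MB per process, 7MB of it shared
  my $split = int( ($ram - 2 * 11) / (11 - 7) );
 
  print "combined: $combined   split: $split\n";   # combined: 30   split: 59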

Moreover this new setup allows us to fine tune the two code sets, since in reality the smaller code base might have a higher hit rate, so we can benefit even more.

Let's assume that, based on the usage statistics, we know that the first code set is called in 70% of requests and the second set in the other 30%. Now we assume that the first code set requires only 5MB of RAM (3MB shared plus 2MB unshared) over the basic mod_perl server size, and the second set needs 11MB (7MB shared and 4MB unshared).

Let's compare this new requirement with our original 50:50 setup.

So now the processes of the first mod_perl server, running the first code set, will each use 8MB (3MB (server shared) + 3MB (code shared) + 2MB (code unshared)), and those of the second will use 14MB (3+7+4). Given that we have a 70:30 hits ratio and that we have 260MB of available memory, we have to solve these two equations:

 
  X/Y = 7/3

 
  X*(8-6) + 8 + Y*(14-10) + 14 = 260

where X is the total number of processes the first code set can use and Y the second. The first equation reflects the 70:30 hits ratio, and the second uses the equation for the total memory requirements for the given number of servers and the shared and unshared memory sizes.

When we solve these equations, we find that X equals 63 and Y equals 27. So we have a total of 90 servers -- three times the number of servers running compared to the original setup using the same memory size.
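If you prefer to let the machine do the work, the same toy approach finds X and Y by walking upwards in steps that keep the 7:3 ratio exact and stopping when the memory runs out (again using the sizes assumed in the text):

 
  #!/usr/bin/perl -w
  my $ram = 260;
  my ($x, $y) = (0, 0);
  for (my $j = 3; ; $j += 3) {      # multiples of 3 keep X = 7*Y/3 a whole number
      my $i = 7 * $j / 3;
      last if $i * (8 - 6) + 8 + $j * (14 - 10) + 14 > $ram;
      ($x, $y) = ($i, $j);
  }
  print "X=$x Y=$y total=", $x + $y, "\n";   # X=63 Y=27 total=90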

The hit rate optimized solution, and the fact that the code sets can be different in their memory requirements, allowed us to run 30 more servers in total, and gave us 33 more servers (63 versus 30) for the most wanted code base, relative to the simple 50:50 split of the first example.

Of course if you identify more than two distinct sets of code based on your hit rate statistics, more complicated solutions may be required. You could make even more splits and run three or more mod_perl servers.

Remember that having too many running processes doesn't necessarily mean better performance because all of them will contend for CPU time slices. The more processes that are running the less CPU time each gets and the slower overall performance will be. Therefore after hitting a certain load you might want to start spreading servers over different machines.

In addition to the obvious memory saving, you gain the ability to troubleshoot problems more easily when you have different components running on different servers. It's quite possible that a small change in the server configuration to fix or improve something for one code set might completely break the second code set. For example, upgrading the first code set might require an update of some modules that both code bases rely on, but there is a chance that the second code set won't work with the new version of a module it was relying on.



SSL functionality and a mod_perl Server

If you need SSL functionality, you can get it by adding the mod_ssl or equivalent Apache-SSL module to the light front-end server (httpd_docs) or to the heavy back-end mod_perl server (httpd_perl). (The configuration and installation instructions are located here.)

The question is: is it a good idea to add mod_ssl to the back-end mod_perl enabled server? If your internal network is secure, or if both the front-end and back-end servers run on the same machine and you can ensure safe communication between the processes, there is no need for encrypted traffic between them.

If this is the situation you don't have to put mod_ssl into the already too heavy mod_perl server. You will have the external traffic encrypted by the front-end server, which will proxy-pass the unencrypted request and response data internally.
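A minimal sketch of such a front-end SSL virtual host (the certificate paths, host name and back-end port are illustrative; see the configuration and installation instructions mentioned above):

 
  <VirtualHost _default_:443>
    SSLEngine on
    SSLCertificateFile    /path/to/server.crt
    SSLCertificateKeyFile /path/to/server.key
    ProxyPass        /perl/ http://localhost:8080/perl/
    ProxyPassReverse /perl/ http://localhost:8080/perl/
  </VirtualHost>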

Another important point: if you put mod_ssl on the back-end, you have to tunnel your images back through it (i.e. have the back-end serve the images), defeating the whole purpose of having the lightweight front-end server.

You cannot serve a secure page which includes non-secured information. If you fetch an HTML page over SSL and it has an <IMG> tag that fetches the image from the non-secure server, the image is shown broken. This is true for any other non-secured objects as well. Of course, if the generated response doesn't include any embedded objects like images, this is not a problem.

Giving the front-end machine the SSL functionality also simplifies the configuration of mod_perl by eliminating VirtualHost duplication for SSL. mod_perl configuration files can be difficult enough without the mod_ssl overhead.

Also, the front-end machines are probably under-worked anyway, especially if you run a high-volume web service and deploy a cluster of machines to serve requests, so moving SSL there saves some CPU on the back-end; it's known that SSL connections are about 100 times more CPU intensive than non-SSL connections.

Of course caching session keys so you don't have to set up a new symmetric key for every single connection, improves the situation. If you use the shared memory session caching mechanism that mod_ssl supports, then the overhead is actually rather small except for the initial connection.

But then, on the other hand, why even bother to run a full scale mod_ssl in front? You might as well just choose a small tunnel/port forwarding application like Stunnel or one of the many others mentioned at http://www.openssl.org/related/apps.html .
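For example, with the stunnel 3.x style command line (the flags and paths here are from memory, so treat them as an assumption and check the stunnel documentation), something along these lines decrypts on port 443 and forwards plain HTTP to the front-end on port 80:

 
  % stunnel -d 443 -r localhost:80 -p /path/to/stunnel.pem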

Of course, if you do heavy SSL processing, ideally you should offload it to a dedicated cryptography server. But this advice can be misleading, given the current state of crypto hardware. If you use hardware you get extra speed now, but you're locked into a proprietary solution; in six months to a year, software will have caught up with whatever hardware you're using, and because software is easier to adapt you'll have more freedom to change what software you're using and more control of things. So the choice is in your hands.


Written by Stas Bekman.
Last Modified at 08/28/2000
