Chapter 37

Evaluating the Server Environment


In previous chapters, this book addressed the technology of running a Web site as it affects the site owner and the visitor. The remaining chapters address the business of running a Web server, which may host one large site or several smaller ones.

The Web server resides on a computer-usually a UNIX machine-so the business of running the server software inevitably overlaps the administration of the operating system and the server hardware. Thus, this chapter and those that follow it overlap into the world of the system administrator, often known in the UNIX world as the root user or the superuser.

Choosing Server Software

Not every site needs its own server. Many Web sites have relatively light traffic and can satisfactorily share a server with several other sites. If you determine that you do need a dedicated server, what software is available, where can you get it, and how do you set it up?

Do You Need Your Own Server?

Most server software offers a "virtual host" capability, in which each Web site has its own configuration files and directories. The server administrator can also assign a dedicated IP address to the virtual host and offer the site its own domain name. Before choosing to reside on someone else's site, do some calculations to see if you'll get the performance you want.

Capacity of the Machine

Get a user account on the other site owner's machine and log in. If it's a UNIX machine, check the load by running vmstat. Check regularly throughout the day and for several days. See how much time is left in the idle column (usually off to the right). Any nonzero values in the pi or po columns are bad news. So are nonzero numbers in the b column (toward the left margin) or numbers much larger than 1 in the r column (also near the left margin). Figure 37.1 shows a vmstat on a machine with a lot of idle capacity.

Figure 37.1 : vmstat on a machine with lots of idle capacity.
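These rules of thumb are easy to automate. The following sketch (a hypothetical helper, not part of any standard tool) applies the checks above to one data line of vmstat output; column names vary among UNIX versions, so treat it as a starting point.

```python
def vmstat_warnings(header, row, idle_threshold=20):
    """Apply the rules of thumb above to one data line of vmstat.

    header: the column-name line printed by vmstat
    row:    one line of numbers from the same run
    Column names vary among UNIX versions; adjust as needed.
    """
    cols = dict(zip(header.split(), (float(x) for x in row.split())))
    warnings = []
    if cols.get("pi", 0) > 0 or cols.get("po", 0) > 0:
        warnings.append("paging (nonzero pi/po): the machine is short of memory")
    if cols.get("b", 0) > 0:
        warnings.append("processes blocked on I/O (nonzero b column)")
    if cols.get("r", 0) > 1:
        warnings.append("run queue longer than 1 (r column): CPU contention")
    if cols.get("id", 100) < idle_threshold:
        warnings.append("little idle CPU time left (id column)")
    return warnings
```

Feed it the header line and samples captured at intervals throughout the day; an empty list means the machine looks lightly loaded by these measures.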

Ask if you can see their load reports (like sar output). If they don't keep them or don't know what they are, keep looking for another service provider. (You cannot improve what you do not measure.)

Now comes the tough part. Estimate how much traffic your site will bring in. Traffic is hard to estimate, so guess high. Really high. Talk to the Webmasters of sites you respect and hope to emulate, and ask them for their log counts.

Many Webmasters will not be willing to share their log data with you, and you don't need it. Ask them to run the following command in their server_root directory:

wc -l logs/access_log

Then find out what period of time that log covered. It should be a minimum of a week or two. Let's say you find that they are getting around 2,000 accesses per week, or roughly 9,000 per month. (Remember, these are raw hits, not visits, but hits are what we care about for estimating server load.) These numbers represent an average and include a lot of "dead time"-late at night or on weekends.

Some servers have a relatively even load throughout the day (by getting hits from overseas); others have pronounced peaks and valleys. Ask the Webmaster how his or her peak load compares to the average. (Chapter 42, "Processing Logs and Analyzing Site Use," describes software that allows you to get this information directly from the logs.)

Let's suppose that the peak load is about 10 times the average load. Next, get an idea of how long it takes to serve a page. Log into your account on the machine you are considering and connect to its Web server by hand:

time telnet localhost 80

When the Web server answers, enter

GET / HTTP/1.0

followed by two returns. See Chapter 8, "Six Common CGI Mistakes and How to Avoid Them," for a full description of this and other methods for bypassing the browser and running the server by hand.

Look at the resulting time report. Ignore the real time on the first line-it includes the several seconds it took to type the request in. Add the user and system times, and log the results.

Repeat this experiment throughout the week, several times a day, at different times of the day and night. Compile a log showing peaks and valleys in response time. You could even write a small program to do this for you and put the results in a file for later spreadsheet analysis.

When you are done, you can say with a fair degree of confidence that it takes, say, less than 300 milliseconds of CPU time to fill a Web request 90 percent of the time. (For best results, use the data in that form, called "the upper limit of the 90 percent confidence interval," rather than just taking an average.)
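A simple way to reduce the logged measurements to a figure like that one is to take the 90th percentile of the observed times. A sketch (this order-statistic estimate is an informal stand-in for a true confidence interval):

```python
import math

def percentile(samples, fraction=0.9):
    """Return the value at or below which the given fraction of
    the samples fall (a simple order-statistic estimate)."""
    ordered = sorted(samples)
    index = max(0, math.ceil(fraction * len(ordered)) - 1)
    return ordered[index]
```

Run it over the user-plus-system times you logged; the result is the "90 percent of the time" service-time figure used in the capacity estimates below.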

Based on comparisons with what you hope are similar sites, you estimate that your site will grow fairly quickly and be answering 2,000 hits a week, with a peak rate of two per minute. With conservative estimation, you need about 600 milliseconds every minute or so, or about one percent of the capacity.

Take this figure with a big grain of salt because all these numbers are imprecise. But if the server is not already overloaded, your extra load is not likely to slow it down.
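The arithmetic behind the "one percent" figure can be captured in a few lines. This sketch assumes the peak-to-average ratio and the per-hit CPU time are the figures you measured above:

```python
def cpu_fraction(hits_per_week, peak_ratio, cpu_seconds_per_hit):
    """Estimate the fraction of one CPU consumed at peak load."""
    minutes_per_week = 7 * 24 * 60
    avg_per_minute = hits_per_week / minutes_per_week
    peak_per_minute = avg_per_minute * peak_ratio
    # CPU-seconds demanded per minute, divided by the 60 seconds available
    return peak_per_minute * cpu_seconds_per_hit / 60.0
```

For 2,000 hits a week, a peak ratio of 10, and 300 milliseconds per hit, this gives about one percent of the machine, as in the text.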

Find out how many httpd daemons the service provider runs. Or look for yourself by running

ps -ef | grep httpd

and count them. On some versions of UNIX, the command is

ps -aux | grep httpd

Look around at other users of the site. If the site is running, say, a dozen copies of the server daemon and there are 10 other virtual hosts on the machine, and if they all have similar loads to yours (peaking at up to two requests per minute), the likelihood that all 12 servers will be engaged when a request comes in is fairly low.
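One way to put a number on "fairly low" is the Erlang-B blocking formula, which gives the probability that all servers are busy when requests arrive at random. This is an added illustration, not a calculation from the text, and the traffic figures below are the hypothetical ones above:

```python
def erlang_b(offered_load, servers):
    """Probability that all servers are busy (Erlang-B formula),
    computed by the standard recurrence.  offered_load is in
    erlangs: arrival rate times service time."""
    b = 1.0
    for k in range(1, servers + 1):
        b = offered_load * b / (k + offered_load * b)
    return b
```

Eleven sites each peaking at 2 requests per minute with 300 milliseconds of service time offer about 11 * (2/60) * 0.3, or roughly 0.11 erlangs; with 12 daemons the probability that a request finds them all busy is vanishingly small.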

If the capacity remaining on the machine is low, think twice before committing your site to this provider. The formula that relates capacity to response time is about as forgiving as a balloon mortgage (see Figure 37.2). A rough estimate of the response time (based on a simplified model of the server called the M/M/1 queuing model) is

Figure 37.2 : Response time as a function of usage.

T = Ts / (1 - U)

where T is the response time, Ts is the service time, and U is the usage. From the figures above, suppose Ts is 300 milliseconds and U is 50 percent. Then the response time is a very acceptable 600 milliseconds. When U climbs to 90 percent, response time soars to 3 seconds.
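A few lines of code make it easy to tabulate this curve for other utilizations (a quick sketch using the symbols just defined):

```python
def mm1_response_time(service_time, utilization):
    """M/M/1 response time T = Ts / (1 - U); valid only for U < 1."""
    if not 0 <= utilization < 1:
        raise ValueError("utilization must be in [0, 1)")
    return service_time / (1.0 - utilization)
```

With Ts of 0.3 seconds, a utilization of 0.5 gives 0.6 seconds and 0.9 gives 3 seconds, matching the figures in the text.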

Ask the service provider what kind of performance guarantee they are willing to make. Just because they have excess capacity today doesn't mean they'll have that capacity next month. Are they willing to commit, in writing, to upgrading their system to keep up with demand? One popular site on the server could push the hit count off the scale and leave everyone else's site panting and gasping for CPU time.

Capacity of the Link

Of course, one of the reasons the machine may have idle capacity is that people can't get in. Find out what kind of link the server has to the Net. "T-1" is a good answer. "Multiple T-1s" is better. If the answer is "fractional T-1" or "ISDN basic rate interface" or "frame relay," think twice. Visitors to your site may experience long delays in downloading pages if the link is the bottleneck.

Modern computers can usually fill requests much faster than the network can deliver the results. Dr. Louis Slothhouber of StarNine Technologies, Inc., presents a more realistic model in his paper, "A Model of Web Server Performance." That model says that

T = F/C + I/(1-AI) + F/(S-AF) + F(B+RY) / (BR-AF(B+RY))


where

A is the rate at which requests arrive from the network-the hit rate
F is the average size of the file requested
B is the buffer size of the server
I is the initialization time of the server
Y is the static server time
R is the dynamic server rate
S is the server's network bandwidth
C is the client's network bandwidth

For a typical server, the average file size F is around 5,000 bytes. (Remember to include both HTML pages and graphics files in the average.) The buffer size is usually the same size as the disk block size-4,096 is a typical figure.

Initialization time is the time needed for the server to do one-time processing like MIME mapping. In practice, it is easiest to set I and Y to zero, and adjust R to account for all server time.

If the server is connected to the Net by a T-1 link, S is 1.5 Mbps (about 187,500 bytes per second). Other common values are 128 Kbps for an ISDN line and 45 Mbps for a T-3.

C must take into account not only the client's modem rate but also the throughput of the connection between the server and the client. For a 14.4 Kbps modem connection, 11,200 bps (1,400 bytes per second) is a good figure.

Thus, for a server with "typical" values (including a hit rate of two hits per second, and with S converted from bits to bytes), the response time is approximated by

T = 5000/1400 + 0 + 5000/(S/8 - 2(5000)) + 5000(4096 + 0)/(4096R - 2(5000)(4096 + 0))

For typical values, the response time is dictated by the client's network capacity. To receive a 5,000-byte file over a 1,400-byte-per-second connection takes about 3.6 seconds.

The second largest factor is the server's network capacity. If the connection is a T-1 (1.5 Mbps), the third term contributes about 0.03 seconds.

Most single-server sites have a very high processing rate (R) compared to the network delay (dictated by S). For example, if R is just 50,000 bytes per second, then the final term is 0.125 seconds even when A is two hits per second (60 times our one-site estimate from the previous section).

Under the given conditions, the response time to the user will be approximately 3.655 seconds, and only about 4 percent of that time is under the control of the server (see Figure 37.3).

Figure 37.3 : Allocation of time when client uses a 14.4 Kbps connection and server uses T-1.
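The full four-term model can be evaluated directly. The sketch below implements the formula given earlier; parameter names follow the definitions above, with S and C taken in bytes per second (hence 187,500 for a T-1). The value of R here is an illustrative assumption, as in the text:

```python
def web_response_time(A, F, B, I, Y, R, S, C):
    """Slothhouber's four-term response-time model.

    A: hit rate (requests/sec)       F: average file size (bytes)
    B: server buffer size (bytes)    I: initialization time (sec)
    Y: static server time (sec)      R: dynamic server rate (bytes/sec)
    S: server bandwidth (bytes/sec)  C: client bandwidth (bytes/sec)
    """
    return (F / C                                    # client network
            + I / (1 - A * I)                        # initialization
            + F / (S - A * F)                        # server network
            + F * (B + R * Y) / (B * R - A * F * (B + R * Y)))  # server work
```

With F = 5,000, C = 1,400, B = 4,096, I = Y = 0, R = 50,000, S = 187,500, and A = 2, the client term alone is about 3.57 seconds, dominating the total.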

If the server connection (given as T-1 above) is replaced with an ISDN line at 128 Kbps, the middle term becomes more than 0.8 seconds. The server's contribution to the overall time more than doubles. This situation is illustrated in Figure 37.4.

Figure 37.4 : Allocation of time when client uses 14.4 Kbps connection and server uses ISDN.

Of course, more and more users have access to faster connections. To a user on an ISDN connection, things look much different. Instead of taking 3.5 seconds, the client network takes only about 0.7 seconds, giving a time budget that looks more like Figure 37.5.

Figure 37.5 : Allocation of time when client and server use ISDN.

Other studies have come to the same conclusion. Robert B. Denny warns, "Beware of vendors who make claims that their servers can support large numbers of simultaneous transactions. Ask instead for their measured data delivery rate and transactions per unit time."

Denny used a test environment in which the client and the server were connected by an Ethernet cable to factor out client network delays. Denny's analysis shows that even a low-cost Pentium-90 PC can saturate a T-1 line, and that a 486/33 notebook computer (an IBM ThinkPad) kept up with 18 requests per second.

Robert E. McGrath of the NCSA reports that five UNIX-based servers tested on the same machine (an HP 735 workstation) all showed adequate performance. The slowest of the evaluated servers (a deliberately crippled version of the NCSA server) handled more than 40 requests per second. The other servers handled between 60 and 90 requests per second.

As users acquire faster connections, the Web site must be hosted on a machine with faster connections to keep up. The actual throughput of the server computer is of secondary importance.

Capacity of the Staff

As long as the overall capacity of the machine is respected and the server is connected using a high-speed line, the performance of a server is dominated by the speed of the client's connection-a factor the Webmaster can't control.

There is another factor, however, often overlooked in designing Web sites: the capacity of the staff. Various server surveys show that most sites use a UNIX computer, with the Macintosh in second place and DOS and Windows machines in third. The strong showing of the Macintosh suggests that ease of use for the server staff is an important factor in many Webmasters' minds.

Make no mistake about it: UNIX is powerful but no one has yet accused it of being easy to use. Any site on a UNIX server will need at least a part-time administrator who must be knowledgeable about UNIX. Servers running on machines with a graphical user interface (GUI) may not have as many features, but they have all of the features that most sites need and do not need the high level of staff training commonly associated with a UNIX server.

Capacity of the Budget

Several UNIX servers (such as NCSA, Apache, and CERN) are available at no cost over the Net. Because of the large number of installations, these packages are well supported by the user community. Solutions on UNIX servers are likely to be dominated by the cost of the hardware and the technical staff members.

If a site wants to run on a UNIX server and the budget is tight, the Webmaster should investigate Linux, a publicly available UNIX system that runs on Intel machines. A high-end UNIX computer may cost $10,000 or more. That same money invested in PCs running Linux or in Macintoshes may yield higher overall throughput and, in the case of the Macs, lower staff cost.

What Servers Are Available?

Here is a summary of some of the leading Web servers in no particular order.


Apache is a UNIX-based server that started life as a collection of fixes, or "patches," to the NCSA server. It is available at no cost over the Net. A secure version (using SSL) is also available.


Probably the single most common server on the Web, the NCSA server is available free from the NCSA and runs on UNIX.


Netscape Communications has two commercial offerings: the Netscape Communications server and their secure product, the Netscape Commerce server. Version 2.0 of these products is available as an upgrade as well as a complete system for first-time users. In version 2.0, the secure product is known as the Enterprise server, and the nonsecure version is known as FastTrack. All of these products run on most versions of UNIX.


WebStar started life as MacHTTP. It is the major Macintosh server. WebStar is offered by StarNine, a subsidiary of Quarterdeck.

Which One Is Best?

Of course, no one answer is possible. Webmasters must trade off many factors to decide which server will work best for them. The fact is, they all work and, for the most part, the differences are small. The major differences are in ease of use between the Macintosh-based WebStar and the UNIX software.

In the UNIX camp, the Netscape Commerce server has wide acceptance as a secure server (due, no doubt, to the popularity of their browser). Of the free servers, Apache offers all the benefits of NCSA as well as a few improvements and fixes.

Here are a few factors to consider in making a choice. For an analysis of these features on nearly 50 servers, consult one of the server comparison lists maintained on the Web.

Which Operating System Does the Server Run On?

Servers are available for computers ranging from IBM mainframes to tiny Amigas. The most popular servers run on UNIX, Macintosh, and Windows machines, in that order.

Launching and Logging

Several server features have to do with how the server is started and what information it logs.

Protocol Support and Includes

Many of the discriminators between servers have to do with how they handle some of the more obscure elements of the protocol and how they support server-side includes.


As presented in Chapters 17, "How to Keep Portions of the Site Private," and 40, "Site Security," there are many things the Web server can do to help keep the site secure.

Other Features

A few discriminators do not fit into any of the existing categories of features. They are presented here.

Be aware that showing the directory tree, while useful, may be considered a security hole. Check the security stance for your site.

What Hardware Is Needed?

Earlier in this chapter, I cited studies showing that the speed of the hardware is not a dominant factor in site performance. Thus, there is no need to purchase an expensive UNIX workstation to "keep up" with the Net. Rather, invest the money in a high-speed connection such as a T-1 or even multiple T-1s.

Should You Run UNIX?

When UNIX was developed, it was positioned as the alternative to the big, complex operating systems running on the machines of its day. Now UNIX is a big, complex operating system. It offers a lot of features, some security holes, and many technically inclined people love it. (Of course, many similar people despise it.)

If you run UNIX, you need never lack a system administrator. In most parts of the world, technically inclined UNIX-philes are readily available, though they may be a bit expensive. The number of people who know the intimate details of the Macintosh, or even DOS and Windows, is somewhat smaller, although a system administrator on those machines is less likely to need to know operating system details.

Choosing Defaults

On the UNIX servers, the system administrator usually starts by building (compiling) the server. On other platforms, the server comes precompiled and ready for installation. Try to stay close to the defaults offered by the installation script: the defaults are well tested by the user community, they are what the documentation describes, and they make the configuration easier for others to support.

Virtual Hosts and Domain Names

Most servers can now be configured to offer a different document tree on different IP addresses. On Apache, for example, you set up a VirtualHost section in the httpd.conf file. For example (using the hypothetical host name www.company2.com),

<VirtualHost www.company2.com>
DocumentRoot /www/docs/company2
ErrorLog logs/company2.error_log
TransferLog logs/company2.access_log
</VirtualHost>

sets up a virtual host named www.company2.com with the indicated characteristics. Any httpd.conf or srm.conf directive can go into the VirtualHost section.

To connect the server to more than one IP address, set the BindAddress directive to match the desired IP addresses. Then set up a DNS record for each virtual host. For example, to set up www.company2.com (again, a hypothetical name) at IP address 192.168.36.2, add an address record to the DNS zone:

www.company2.com.   IN   A   192.168.36.2

Then run ifconfig to tell your machine to answer to that address on the Ethernet interface (the exact syntax varies among UNIX versions):

ifconfig le0 alias 192.168.36.2

For more details on this process, see your server documentation and the documentation for your operating system.

As of September 1995, the InterNIC charges a nominal fee for issuing a domain name. More important than that is the new policy describing the relationship between trademarks and domain names. Choosing a domain name is getting trickier because there are very few English words or even pronounceable syllables that are not somebody's trade name, somewhere.

Trademark law is organized around industries, so if one company is XYZ Tires, another can be XYZ Jewelry, and both can use XYZ as their trademark. But when they go to the Web, only one can have xyz.com. There are no clear solutions to this problem. For the short term, register one domain name per company and make sure it's one you can claim a legitimate right to.

Watch mailing lists like com-priv to see how this problem is addressed by the Net as a whole, and keep a good lawyer on retainer-one who understands how the Internet works. (See Chapter 38, "Evaluating Your Web Staffing Needs," on picking a legal advisor.)

How to Scale the Site

As we saw earlier in this chapter, the network interfaces are much more likely to cause slow response than the speed of the server software and hardware. To address network delay, consider mirroring the site. If the server really is the bottleneck, one solution is to scale the site onto a redundant array of inexpensive computers (RAIC).


After the site has been running a while, examine the pattern of access. If many hits are coming from geographically distant locations, consider setting up a mirror site.

To see whether you would benefit from a mirror site, use traceroute to compare the time it takes to contact a nearby host with a distant one. For example, suppose a site in the U.S. finds that many of its hits come from Australia. (We hope that the number of hits roughly follows the amount of business done overseas.)

The Webmaster runs traceroute on various machines around the U.S. and finds that the average U.S. machine has a round-trip time of just under 100 milliseconds. Then the Webmaster traceroutes several machines around Australia and finds that the response time is closer to 500 milliseconds. Based on this data, the Webmaster decides to set up an Australian mirror site.
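A small helper can reduce such measurements to a decision. The threshold here is an assumption; pick a ratio that matches your budget and tolerance for slow pages:

```python
def should_mirror(local_rtts_ms, remote_rtts_ms, factor=3.0):
    """Recommend a mirror when remote round-trip times average
    'factor' times (or more) the local ones."""
    local_avg = sum(local_rtts_ms) / len(local_rtts_ms)
    remote_avg = sum(remote_rtts_ms) / len(remote_rtts_ms)
    return remote_avg >= factor * local_avg
```

With U.S. averages near 100 milliseconds and Australian averages near 500, the helper recommends a mirror.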

The next step is to find a machine in Australia. The company may already have a branch office or a distributor in Australia offering a Web site. There are also companies that specialize in offering regional mirror sites. Find or rent space on one of these machines.

The final step is to make sure that the mirror stays up-to-date. Declare one site to be the master and run a mirror program every day to copy all the master site's files to the mirror. For several years, the definitive mirror program was htget by Oscar Nierstrasz.

Sadly, that script is no longer maintained, but a new program, w3mir, has been built using htget as its starting point. An alpha version of w3mir is available on the Net. Like most alpha-level software, it might not always behave the way you expect.

Another method that is more predictable is to use a high-performance compressor like gzip to pack up the whole document tree and put it in the FTP archive. Then at an agreed-upon time, the mirror site can do an FTP GET, retrieve the file, and uncompress it into place, overwriting the old files.
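The pack-and-unpack step looks something like the following, using Python's tarfile module as a stand-in for tar and gzip. This is a sketch of the idea; a production mirror would add error handling and swap the old and new trees atomically:

```python
import tarfile

def pack_tree(src_dir, archive_path):
    """Pack the document tree into a gzip-compressed tar archive."""
    with tarfile.open(archive_path, "w:gz") as archive:
        archive.add(src_dir, arcname=".")

def unpack_tree(archive_path, dest_dir):
    """Unpack the archive over the mirror's document tree."""
    with tarfile.open(archive_path, "r:gz") as archive:
        archive.extractall(dest_dir)
```

The master runs pack_tree at the agreed-upon time and drops the archive in the FTP area; the mirror fetches it and runs unpack_tree over its own document root.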

To mirror FTP archives that are associated with a Web site, check out one of the FTP mirroring tools available on the Net.


If a single site is getting more hits than it can handle and if the problem is not geographically defined, then the site may have to be scaled up. The first step in determining whether or not to scale is to find out whether the machine itself is saturated or whether the network link is full.

Use the techniques presented earlier in this chapter: examine vmstat to see how much idle time the machine has. Use iostat to see if the CPU is loafing because the disk drives are slow. With some versions of UNIX, the administrator can move frequently accessed files toward the center of the disk platter or stripe them across multiple disks.

Sometimes an I/O-bound configuration benefits from additional disk controllers, so that multiple requests are not waiting for each other. Find out if your server takes advantage of asynchronous I/O and see whether asynchronous I/O can be turned on in your version of the operating system.

While looking at vmstat, watch the pi and po columns. If they have any nonzeros in them, the machine would benefit from adding physical memory. The pi and po columns are indications of paging activity, and access to virtual memory (on disk) is about 1,000 times slower than access to real (physical) memory.

If the problem isn't the network connection, the local disk I/O, or virtual memory, and the CPU appears to indeed be saturated, then it is time to scale the site.

It is unusual for a Web site to outgrow its CPU. Double-check all the performance drivers indicated before deciding to throw more CPU at the problem.

One solution, of course, is to move to a bigger, faster computer. This solution may offer some temporary relief, but if the site is growing so fast that it outgrew one processor, it is likely to outgrow another. The best solution may be to put more machines to work on the site.

Some servers can be set up to participate in an RAIC. Check with the server vendor to find out if this technique is possible on your server. A typical configuration is shown in Figure 37.6.

Figure 37.6 : An RAIC site.

Be sure that the machines in the RAIC are the same size and configuration. Adding a slower machine to the RAIC can cause the overall performance to become worse because other machines in the array have to wait for the "pokey little puppy."

This chapter addresses the practical aspects of setting up a Web server. A bewildering array of servers is available and, for the most part, they all work. To get maximum performance from a site, concentrate the budget on fast connections, not on fast computers. Once a site has enough bandwidth and computing power, the choice of the machine and the server comes down to features and ease of use.

If, during the life of a site, the server begins to overload, look for bottlenecks in the network connection first and then in the local I/O and virtual memory subsystems. As a last resort, consider scaling the site onto multiple machines using an RAIC.