The LINUX.COM Article Archive
Originally Published: Tuesday, 14 December 1999
Author: Quentin Cregan
Published to: Advanced Security Articles
Surfing Between the Flags: Security on the Web
[Article mirror, from http://www.auscert.org.au]
This week's article has been posted to make our readership aware of security issues involved in general web surfing.
Surfing Between the Flags: Security on the Web

Catherine Allen, Security Programmer, Australian Computer Emergency Response Team (AUSCERT), c/- Prentice Centre, The University of Queensland, Brisbane, Queensland 4072, Australia.
Abstract

There is an increased need to address the growing number and severity of security issues central to new and developing World Wide Web (WWW) services, especially usage involving commercial communication and transactions. This paper examines internet security with respect to the WWW. A number of WWW security issues are presented in three areas: server, client and communication between server and client. Practical precautions and solutions are suggested regarding these issues. Guidelines for protecting host systems are discussed.
This paper aims to promote an awareness of security issues in WWW site maintainers and in general WWW users, without resorting to scare tactics.
Keywords: World Wide Web, Security, Secure Transactions, AUSCERT

1. Introduction

This paper discusses security with respect to the World Wide Web (WWW). It aims to promote an awareness of security issues in general WWW users without resorting to scare tactics. Practical solutions and precautions for security problems are discussed.
As WWW browsers use the client-server paradigm, the security problems relating to the WWW and to WWW browsers can be categorised into server problems, client problems and the need for secure communication between server and client.
The concepts and issues described in this paper apply to all operating systems, servers and clients, although implementation differences may cause different specific vulnerabilities. Examples used throughout this paper assume a UNIX host.
2. Server

Server problems include bugs in the server software, the environment, configuration, CGI scripts, data integrity and server-side-includes.
2.1. Bugs in Server Software

As the server code (often referred to as httpd) is large, it is likely to have some security vulnerabilities. An implementation of the server which claims to be secure is the "secure" HTTP daemon from all.net. This implementation is compact and has been specially designed with security in mind. It claims to be far more secure against corruption of the server, denial of services to clients and unauthorised dissemination of information from the server than other, standard daemons available [Coh95].
2.2. Environment

There is no control over the environment in which the HTTP server runs. The server can be subverted by means of environment variables. To combat this, run the server in a restricted environment. Then, if the server is subverted, the attacker cannot affect files or services outside the restricted area. Ensure that all binaries in this area are statically linked. Ensure that there are no shell scripts in this area as they require the shell to run and they may be subverted. If there is no shell in the restricted area then even if the server is subverted, the intruder will not be able to gain shell access.
On UNIX systems, it is possible to effect a restricted environment using the chroot command. This command resets the root of the file system tree to a directory so that only that directory and subdirectories within it may be accessed. Chroot to a directory which is used solely for the purpose of a single daemon. It is preferable that the directory is in a separate partition, so that there cannot be any hard links to other areas of the file system as these could be used to subvert the chroot mechanism. Please note that when the environment is restricted via chroot, all normal system files and devices are no longer accessible and provision must be made for this.
A detailed description of these modifications is beyond the scope of this paper.
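As a hedged sketch of the confinement idea (the function names and the /srv/www jail path are inventions for this illustration, not from the paper), the chroot call and the kind of path-containment check a restricted server would also apply might look like this in Python:

```python
import os
import os.path

def resolve_in_jail(jail_root, request_path):
    """Map a requested document path into the jail, refusing any
    path that would escape it (e.g. via ".." components).
    Returns the absolute path inside the jail, or None if unsafe."""
    # Normalise the request relative to the jail root.
    candidate = os.path.normpath(
        os.path.join(jail_root, request_path.lstrip("/")))
    # The normalised path must still lie inside the jail.
    if candidate == jail_root or candidate.startswith(jail_root + os.sep):
        return candidate
    return None

def enter_jail(jail_root):
    """Confine the current process to jail_root (requires root).
    After this call, "/" refers to jail_root, so only files placed
    inside the jail remain accessible."""
    os.chroot(jail_root)
    os.chdir("/")
```

Note that enter_jail must run with superuser privileges before the server drops them, which is consistent with the advice below to run the daemon itself as an unprivileged user.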
2.3. Configuration

Be aware of the implications of the privileges with which the server is executed. If the server runs with superuser (or root) privileges and the server is subverted, then the intruder may be able to execute commands with root privileges. The server should never be run with unnecessary privileges.
On UNIX systems, the server can run with the privileges of the user nobody. This user has no privileges and may not have command line (or shell) access. On non-UNIX systems, the server may run as any user with normal privileges and preferably a particular user which has no command line access.
2.4. CGI Scripts

The Common Gateway Interface (CGI) is a standard for interfacing external applications with information servers such as HTTP or WWW servers. A CGI script or program is a script or program written in any language that can be executed on the system. The server can be configured to execute only those CGI scripts which reside in the CGI binary directory, e.g., /cgi-bin. If the ownership and permissions are set correctly on this directory, then the average user will not be able to create CGI programs without authorisation [CGIa].
Security risks with CGI scripts include user input, bad programming practice and server-side-includes [CGIb]. As server-side-includes also present a risk outside CGI programs, they are described in a separate section.
User input should not be trusted [SI94]. Test for:
- unexpected input values, which may cause the script to perform actions which were not intended by the author
- special characters, which may allow unauthorised access
- unexpectedly large input, which may cause buffer overflow or inappropriate actions
- any other potential abuses.

Bad programming practice on UNIX systems includes the use of [CGIb]:
- eval, which can be subverted to allow an attacker to run an arbitrary command
- popen() and system(), which can be subverted easily to allow shell access.

There are many ways to fork processes within programming and scripting languages. Within perl, open() to a pipe (i.e., open(OUT, "|program $args");), backticks (i.e., `command`), exec, eval and the regular expression modifier /e all may fork a process. Within C and C++, the popen() call forks a process. The ability to fork a process may allow an intruder to subvert the script or program [Pau95].
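The input checks above can be sketched as follows. The paper's own examples use Perl and C, so this Python fragment (the names, field limit and character set are illustrative assumptions) only shows the shape of the approach: reject metacharacters and oversized fields, and invoke helper programs without ever passing the input through a shell.

```python
import re
import subprocess

# Characters with special meaning to the shell; any of these in user
# input is grounds for rejecting the request outright.
_SHELL_META = re.compile(r'[;&|`$<>\\!\n()*?{}\[\]~\'"]')

MAX_FIELD_LEN = 256  # reject unexpectedly large input

def safe_field(value):
    """Return True only if a CGI form field is within the expected
    size and free of shell metacharacters."""
    return len(value) <= MAX_FIELD_LEN and not _SHELL_META.search(value)

def run_helper(program, user_arg):
    """Run a helper program without involving a shell, so the user's
    argument cannot be reinterpreted as shell syntax."""
    if not safe_field(user_arg):
        raise ValueError("rejected suspicious input")
    # An argument list (not a command string) bypasses the shell,
    # unlike popen()/system() in C or backticks in perl.
    return subprocess.run([program, user_arg], capture_output=True, text=True)
```

Passing an argument list rather than a command string is the modern equivalent of the advice to avoid popen() and system().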
Attacks on the server via CGI scripts include mailing information to the attacker (e.g., the password file, a map of the file system, or system information from /etc), starting a logon server on a high port which allows the attacker to telnet to the machine, and many denial of service attacks [Pau95].
2.5. Data Integrity

If the server is subverted to allow intruder access to the files which contain the HTML code defining a WWW site then the information released to the WWW may be modified. Depending on the nature of the organisation which runs the WWW site, this may be a serious matter. For example, if a bank or government were to run a WWW site which explained their policies, then it would be imperative that the text is not modified in an unauthorised manner. To guard against this, the server host should be secured according to a good security policy [Mcm]. The AUSCERT UNIX Security Checklist [Aus95b] gives specific instructions to improve the security of a UNIX host. If the server host is compromised using a network service other than HTTP, the HTML files may still be compromised.
On a UNIX system, if the server runs as user nobody and if the files containing the HTML code are owned by root and are writable only by root then an intruder would not be able to modify the information released to the WWW unless the intruder had gained root access to the server host.
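A quick audit of these two conditions can be sketched in Python (the helper names are invented for this illustration):

```python
import os
import stat

def mode_ok(mode):
    """True if the permission bits allow writing only by the owner,
    not by group or other."""
    return not (mode & (stat.S_IWGRP | stat.S_IWOTH))

def www_file_ok(path):
    """Check that a published HTML file is owned by root and is
    writable only by root, so a server running as "nobody" cannot
    modify the information released to the WWW."""
    st = os.stat(path)
    return st.st_uid == 0 and mode_ok(st.st_mode)
```

Running such a check over the document tree catches accidental loosening of permissions before an intruder can exploit it.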
One way to ascertain whether the files containing the HTML code have been modified is to use the Tripwire package [Tri94]. Tripwire creates a database of the ownership, permissions, most recent modification date and content digest of a set of files. The attributes and files recorded can be configured to suit each host. This database can then be held offline so that it cannot be compromised. Tripwire can be run periodically to test the current attributes and contents of the files against their recorded attributes and contents. The system administrator will be notified of any differences. It is advisable to run Tripwire nightly on all hosts.
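A minimal Tripwire-like check can be sketched as follows; this is an illustration only, since the real package records more attributes and, as noted above, keeps its database offline where it cannot be tampered with:

```python
import hashlib
import os

def snapshot(paths):
    """Record a Tripwire-style baseline: for each file, remember its
    size, permission bits and an MD5 digest of its contents."""
    db = {}
    for p in paths:
        st = os.stat(p)
        with open(p, "rb") as f:
            digest = hashlib.md5(f.read()).hexdigest()
        db[p] = (st.st_size, st.st_mode, digest)
    return db

def changed_files(db):
    """Compare the current state of each file against the baseline;
    return the paths whose attributes or contents differ."""
    return [p for p in db if snapshot([p])[p] != db[p]]
```

Run periodically (for example nightly, as recommended above), any non-empty result from changed_files is grounds for notifying the system administrator.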
2.6. Server-Side-Includes

Server-Side-Include (SSI) describes a method of executing commands on the server. These commands are executed when the page that contains them is loaded. The server can be configured to parse for SSI commands in all documents, some documents (usually those ending in .shtml) or no documents. The default is to parse no documents. The SSI commands are placed in HTML comments and are prepended with a #. For example:

<!--#exec cmd="date" -->
The output of the command is inserted into the document at the relative position of the command within the HTML code.
Legal SSI commands include #exec and #include. The server can be configured to allow or disallow specific commands [Kru].
If the SSI is subverted, it could be possible to run arbitrary programs on the server with the privileges of the account under which httpd is run. Again, this emphasises the need to run httpd as nobody and to run it in a chrooted environment.
3. Client

The main client problems are automatic application launches, bugs in client software, embedded commands and the potential ability to bypass firewalls.
3.1. Automatic Application Launches

One of the more powerful aspects of WWW browsers is that they present information to the user in an immediately useful form. Applications are launched automatically so, for example, sound files can be played automatically, PostScript documents displayed automatically and so on. Some of the applications which are launched automatically are very powerful. For example, Ghostview includes file system commands. These applications run on arbitrary input which is provided by the server (which is potentially hostile from the client's viewpoint). As such, these applications can be subverted to allow unauthorised entry to the client machine.
The locations of the applications to be automatically launched can be configured in the client. Allow only "sanitised" applications to be launched automatically.
Most clients can be configured to disable the display of particular data types or to disable the passing of those data types to external viewers. It is advisable that at least the MIME and PL (perl) data types be disabled.
3.2. Bugs in Client Software

Browsers are large programs and as such can be expected to include security vulnerabilities. A serious vulnerability was found in NCSA HTTP Daemon V1.3 which was subsequently patched. Always run the latest version of the client software.
3.2.1. NCSA HTTP Daemon V1.3 Vulnerability

This vulnerability allowed commands to be embedded in a given URL. Most client software, including the NCSA HTTP Daemon, can be configured to show the URL pointed to by a link in a document while the mouse pointer is situated over the link. It is always best to check the URL pointed to by a link before clicking on the link. Things to look for include the semicolon (;) character.
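A crude version of this pre-click inspection can be sketched in Python; the character set below is an illustrative assumption, with the semicolon highlighted above included:

```python
# Characters that have no business in an ordinary document URL and
# could be used to embed commands, such as the semicolon.
SUSPICIOUS = set(";|`$<>\n\r")

def url_looks_suspicious(url):
    """Flag a URL containing shell metacharacters of the kind used
    to embed commands in a link target."""
    return any(ch in SUSPICIOUS for ch in url)
```

Such a check is no substitute for patching the software, but it illustrates what to look for when inspecting a link target by eye.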
The fix for this vulnerability is described in the CERT Advisory CA-95:04 [Cer95] which is currently available from the AUSCERT FTP site.
Register with AUSCERT to be informed of fixes for security vulnerabilities as these vulnerabilities are discovered [Aus95a].
3.3. Firewall Considerations

As the HTML specification allows protocols other than HTTP (e.g., FTP, TELNET, RLOGIN), it may be used to bypass the filters normally applied to those protocols by a firewall. This can be rectified by using an HTTP proxy which filters the relevant protocols as required [Dal94].
4. Secure Transactions

The main problem in communicating between server and client is that usually all data is transmitted over the Internet in clear text. Neither the client nor the server has any control over the route which the data takes through the Internet and neither necessarily controls which other machines may have access to that data by means of network sniffers. If sensitive data is passed between the server and the client (e.g., credit card information) then the ability to provide secure transactions is required. As the WWW becomes more widely used for commercial purposes, there will be a larger requirement for communication which guarantees data integrity, data confidentiality and user authentication.
General solutions for these requirements have been proposed and currently include the Basic Protection Scheme, the Public Key Protection Scheme, Digest Access Authentication and the Simple Digest Security Scheme.
Current implementations include NCSA's Secure HyperText Transfer Protocol (S-HTTP) and Secure Mosaic, which are currently on trial in the CommerceNet community, Netscape's Secure Socket Layer (SSL) and CERN's Shen.
4.1. General Solutions

4.1.1. Basic Protection Scheme

Step 1. Server sends an unauthorized status [AL93a]. When the server receives a request without an Authorization: field to access a protected document, it sends an Unauthorized 401 status code and a set of WWW-Authenticate: fields containing valid authentication schemes and their scheme-specific parameters.

Step 2. Client authenticates itself. The browser prompts for username and password. This is sent with the next request in the Authorization: field.

Step 3. Server checks authentication and authorization. If successful, then the server will send the document to the client.

4.1.2. Public Key Protection Scheme

This scheme is similar to the Basic Protection Scheme, but requires that the username and password be encrypted with the public key of the server and that documents be encrypted during communication over the Internet [AL93b].
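The client and server sides of the Basic scheme described above can be sketched as follows. This sketch uses the base64 form in which Basic credentials were later standardised (RFC 1945), which postdates the 1993 proposal cited here; the function names are invented for the illustration.

```python
import base64

def basic_authorization(username, password):
    """Client side (step 2): build the Authorization: field value.
    Note the credentials are merely encoded, not encrypted."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return f"Basic {token}"

def check_authorization(header, accounts):
    """Server side (step 3): decode the field and check the username
    and password against the server's accounts table."""
    scheme, _, token = header.partition(" ")
    if scheme != "Basic":
        return False
    user, _, password = base64.b64decode(token).decode().partition(":")
    return accounts.get(user) == password
```

Because the credentials are only encoded, the Basic scheme offers no confidentiality on its own; that weakness motivates the Public Key and Digest schemes discussed in this section.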
Step 1. Server sends an unauthorized status. In addition to the work done in the Basic Protection Scheme, the WWW server also sends its public key in the WWW-Authenticate: header field of the reply. If the client had already given the Authorization: field with the request, then the scheme continues at step 3.

Step 2. Client authenticates itself. The browser prompts for a username and password and then generates a random encryption key. It concatenates these with the browser's IP address and a timestamp, encrypts the string with the server's public key and encodes it into printable characters. The encoded encrypted string is sent with the next request in the Authorization: field.

Step 3. Server checks authentication and authorization. The authorization string is decrypted and checked. The document will not be sent if the IP address given does not match the requesting address. The document will not be sent if the timestamp differs from the current server time by more than some limit, in order to avoid replay attacks.

Step 4. Server sends an encrypted reply. This is a binary file (not printable), as output by the encryption procedure, in order to save time, space and bandwidth.

Step 5. Client decrypts the reply from the server.

4.1.3. Digest Access Authentication

This scheme is based on a simple challenge-response paradigm, like the Basic Access Authentication scheme, but is designed to be more secure. It challenges using a nonce value. A valid response contains the MD5 checksum of the password and the given nonce value. Thus, the password is never sent over the Internet in clear text [HFH95].

4.1.4. Simple Digest Security Scheme

This scheme does not require the use of patented or export restricted technology. As with the Basic Protection Scheme, an initial request is sent without authentication and a new connection is established to communicate the authentication parameters [Cern].

4.2. Implementations

4.2.1. Secure HyperText Transfer Protocol (S-HTTP)

Secure HTTP (S-HTTP) provides secure communication mechanisms between an HTTP client-server pair in order to enable spontaneous commercial transactions for a wide range of applications. It is designed to provide a flexible protocol that supports multiple orthogonal operation modes, key management mechanisms, trust models, cryptographic algorithms and encapsulation formats through option negotiation between parties for each transaction [RS94].
4.2.2. Secure Socket Layer (SSL)

The SSL Protocol is designed to authenticate the server and optionally the client. It can negotiate an encryption algorithm and session key and authenticate a server before any data is transmitted. All data is encrypted before it is transmitted. The SSL protocol provides channel security. The channel is private, authenticated and reliable [Hic95].
4.2.3. Shen

The Shen scheme provides for three separate security related mechanisms [Hal]:
- Weak Authentication with low maintenance overhead and without patent or export restrictions
- Strong Authentication via public key exchange
- Strong Encryption of message content

5. Response to Security Incidents

The Australian Computer Emergency Response Team, AUSCERT, provides a single trusted point of contact in Australia for the AARNet community to deal with computer security incidents and their prevention. As such, AUSCERT is uniquely placed to advise organisations about internet computer security, including WWW issues. AUSCERT has collated and developed a comprehensive range of security packages and documentation which assist systems administrators to improve the security of hosts, sites and networks.
If a security incident occurs, AUSCERT staff are available to provide advice and assistance with the co-ordination of the response to the security incident.
It is strongly recommended that all AARNet Members, Affiliates and Associates register with AUSCERT. Registration is free for all AARNet Members and Affiliates and is restricted to Australian sites and organisations [Aus95c].
AUSCERT will respond to a call from any person within an organisation regarding an incident and will attempt to assist that person with their problem. However, AUSCERT will only discuss security matters, particularly those of a sensitive nature, with an authorised contact established through the registration process. Normally, incidents relating to a site will be reported to the organisation through the authorised contact mailing list.
If sensitive incident or vulnerability information is to be sent to AUSCERT via electronic mail, it is advisable that the e-mail be encrypted. AUSCERT can support a shared DES key and PGP.
AUSCERT currently uses PGP to digitally sign all official outgoing electronic mail. The PGP system is being used because it is currently the most widely deployed signature system available. Newer systems may be adopted as the relevant legislation and technologies change.
6. Conclusion

Several security issues have been raised and relevant, practical solutions suggested. Some mention has been made of the services which AUSCERT provides to aid the Australian Internet community with prevention of and response to security incidents.

7. References

[AL93a] AL: Basic Protection Scheme Proposal, December 1993, http://www.w3.org/hypertext/WWW/AccessAuthorization/Basic.html
[AL93b] AL: Public Key Protection Scheme Proposal, December 1993, http://www.w3.org/hypertext/WWW/AccessAuthorization/Pubkey.html
[Aus95a] AUSCERT: Registration Form, ftp://ftp.auscert.org.au/pub/auscert/auscert-registration-p?.ps.Z
[Aus95b] AUSCERT: UNIX Security Checklist, ftp://ftp.auscert.org.au/pub/auscert/papers/unix_security_checklist_1.0.Z
[Aus95c] AUSCERT: Brochure, ftp://ftp.auscert.org.au/pub/auscert/auscert-brochure.ps.Z
[Cern] No author or date information: Simple Digest Security Scheme, http://www.w3.org/hypertext/WWW/Protocols/HTTP/digest_specification.html
[CGIa] No author or date information: Common Gateway Interface, http://hoohoo.ncsa.uiuc.edu/cgi/intro.html
[CGIb] No author or date information: Writing Secure CGI Scripts, http://hoohoo.ncsa.uiuc.edu/cgi/security.html
[Coh95] Cohen, Dr F.: Why is thttpd Secure?, http://all.net/ManAl/white/whitepaper.html
[Dal94] Dalva, D. I.: Security and the World Wide Web, June 1994, http://www.tis.com/Home/NetworkSecurity/WWW/Article.html
[Hal] Hallam-Baker, P. M.: Shen: A Security Scheme for the World Wide Web, undated, http://www.w3.org/hypertext/WWW/Shen/ref/security_spec.html
[HFH95] Hostetler, Franks, Hallam-Baker, Luotonen, Sink and Stewart: A Proposed Extension to HTTP: Digest Access Authentication, Internet Draft, March 1995, http://ds.internic.net/internet-drafts/draft-ietf-http-digest-aa-01.txt
[Hic95] Hickman, K. E. B.: The SSL Protocol, February 1995, http://home.mcom.com/info/SSL.html
[Kru] Kruse, M.: Server-Side-Includes, undated, http://web.sau.edu/~mkruse/www/info/ssi.html
[Mcm] McMillan, R.: Site Security Policy Development, undated, ftp://ftp.auscert.org.au/Site.Security.Policy.Development.wp.Z
[Pau95] firstname.lastname@example.org: Safe CGI Programming, March 1995, http://www.primus.com/staff/paulp/cgi-security/safe-cgi.txt
[RS94] Rescorla, E. and Schiffman, A.: The Secure HyperText Transfer Protocol, Internet Draft, December 1994, http://www.commerce.net/information/standards/drafts/shttp.txt
[SI94] Smith, D. and Indulska, J.: Enhancing Security of Unix Systems, Proceedings of the AUUG94 Conference, pp. 79-88.
[Tri94] Tripwire version 1.2, August 1994, ftp://ftp.auscert.org.au/pub/coast/tools/unix/Tripwire/*

8. The Author

Catherine Allen has been a Security Programmer with the Australian Computer Emergency Response Team (AUSCERT) since August 1994. She has three years experience in the information technology industry, specialising in computer systems security and system administration. She has contracted to a number of commercial organisations providing technical consultation in security and service provision. Her experiences range from assisting sites with computer security issues and analysis of computer security vulnerabilities to researching and writing security advisories designed to assist system administrators to prevent intrusion. Her experience within AUSCERT places her in a unique position to analyse and present trends in computer security incidents on the internet in Australia. Catherine was previously employed by the Prentice Centre at The University of Queensland as a system administrator for VMS and UNIX machines, including contract and facilities management work.
Catherine holds an Honours Degree in Computer Science from the James Cook University of North Queensland. She also holds a Bachelor of Science Degree with a double major in Computer Science and Mathematics from the same University.