The LINUX.COM Article Archive
Originally Published: Thursday, 11 January 2001 | Author: Derrick H. Lewis
Published to: enchance_articles_security/Advanced Security Articles
Cyber Attacks Prove Costly
As the computer industry grows, so does the number of cyber attacks. Many Web sites are open to all sorts of "web hacking." According to a joint survey by the Computer Security Institute and the FBI, 90% of 643 computer security practitioners from government agencies, private corporations, and universities detected cyber attacks last year. The 273 organizations that quantified their damage reported over $265,589,940 in financial losses.
How do we limit the possibilities of being a victim of a cyber attack?
Recently I moderated a group of security experts who manage major Open Source Web sites. They not only deal with the networking side of their sites, but they also confront the security problems that arise.
Four panelists participated. Pat Lynch is a senior network architect for OSDN, where he manages Slashdot, Newsforge and other sites. Lynch has been a "Unix Head" for over six years now. A friend introduced him to Linux, and, as he put it, he "later went on to installing it [Linux] on my machines, and doing some development on it [Linux]." He plays with FreeBSD code and works on "listar," a free, open-source mailing list manager.
Yazz Atlas is another senior network architect for OSDN. He manages Freshmeat, ThinkGeek, and other sites. Atlas started working with Open Source software back in college, when kernel version 0.99.13 was released. He went on to do system administration at the University of Iowa for the geomorphology department and their vh.org (an online virtual hospital).
Elizabeth Palomino is a network engineer for OSDN. She got involved with Open Source software when a friend gave her a Slackware CD; she installed it on her PC and has been integrating it into mixed environments ever since.
David Ford, the final panelist, is a network security specialist for Talon Technologies in Southern California. He is, as he likes to call himself, a hired hacker (the good kind). Ford also worked for Linux.com as a Mail/Systems Administrator. He became interested in Linux and Open Source several years ago when he and a friend ran a BBS and were blown away by being able to run thirty-two 28.8k modems at full speed on a 386 without dropping any packets at all.
As the panelists introduced themselves, I asked a simple question that I knew many of the audience members wanted answered: "What do you consider the most important points for people trying to secure their Web site?"
Lynch's immediate response was, "Paying attention to detail." Lynch said there are several things he does: he keeps a mental checklist for a lockdown, usually run through on an OS install. For example, shutting down unneeded services, having sshd run at startup, and keeping a good snapshot/checksum of the machines. He admitted that during the buildup of OSDN (the Open Source Development Network) this wasn't always followed, due to some pushing to get things done fast, but the OSDN admins are in the middle of a full audit right now. Attention to detail is definitely important, he said, but so is having a procedure.
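Lynch's "snapshot/checksum" step can be sketched in a few lines of shell. This is only an illustration of the idea, not OSDN's actual procedure; the function names and file layout are my own assumptions:

```shell
# Record a checksum baseline for a directory tree, then diff against it
# later to spot tampered binaries. (Illustrative sketch, not OSDN's
# real lockdown script.)

# Record checksums of every file under directory $1 into baseline file $2.
make_baseline() {
    dir=$1; baseline=$2
    find "$dir" -type f -exec md5sum {} \; > "$baseline"
}

# Re-check the baseline and print only the files that changed.
check_baseline() {
    baseline=$1
    md5sum -c "$baseline" 2>/dev/null | grep -v ': OK$'
}
```

Run `make_baseline` over /bin, /sbin and /usr/bin right after the OS install and, per Ford's advice later in the discussion, keep the baseline (along with statically linked copies of your tools) on read-only media such as a CD-ROM.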
Atlas' answer continued Lynch's response, but focused on access control. He said, "To secure a site you need to also know what the developer will be running and what access they require. If someone needs, for example, FTP, it should be locked down to just the system that is required to connect. Be aware that there is always a new exploit out there. Don't think you're safe just because your last audit of your system looked good. Be always on alert for changes."
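Atlas' point about locking FTP down "to just the system that is required to connect" was commonly done at the time with tcp_wrappers. The addresses and daemon names below are made up for illustration, not a real configuration:

```
# /etc/hosts.allow -- permit FTP only from the one host that needs it
in.ftpd: 10.0.0.5

# /etc/hosts.deny -- refuse everything not explicitly allowed above
ALL: ALL
```

A deny-all default plus narrow allow rules mirrors his broader advice: grant only the access a developer actually requires.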
Ford said, "Attention to detail in this manner means accomplishing your checklist and making sure you follow it all the way down to the period at the end of the sentence." He went on to say, "Use common sense. If you don't know what a service is, find out, and shut it off if it's not required. Internal services should be filtered from public access." I finally brought the question back to Palomino. She continued Atlas' idea of staying informed, saying you should "stay abreast of what exploits have been discovered. Monitoring the sites for suspicious activity, shutting down unneeded services, and making sure the versions of daemons you're running are current."
"The trick is to give the users the ability to do their work without compromising security," Palomino said. "Monitoring is good as well, as it can give a needed heads up when someone is trying to exploit your system."
Ford continued on the idea of knowing what services are running. "Never blindly trust any of them, however," he said. "'netstat -na' will show you a list of sockets open and LISTENING on the server. '/proc/net/tcp' and '/proc/net/udp' are where that information comes from in raw fashion. 'fuser' can be used to identify a particular socket/inode/process." He added, "It's a good idea to have your own builds of these programs with a modified title, etc., so you're aware that you are running your own [trusted] binary, and these binaries should be statically compiled and a list of md5sums stored. I.e., bring your own burn on a CD-ROM." Lynch followed up on Ford's response, going in depth on which binaries are essential to knowing what's going on on your system. He said, "On Linux, 'fuser', but on FreeBSD, 'fstat', or on all of them, having 'lsof' compiled can be useful. Tripwire is useful as well in a situation where you think you may be compromised. This is because a lot of times 'ps' and 'find' and such can be trojaned, so keeping a '.shar' with a known good bin somewhere, statically linked, is a good idea too."
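Ford's remark that '/proc/net/tcp' is the raw source can be made concrete. The kernel stores addresses there in hex, and state 0A means LISTEN; the little function below (my own sketch, not one of the panel's tools) decodes the listening ports:

```shell
# List the TCP ports a Linux box is listening on by reading
# /proc/net/tcp directly, the way netstat does. Column 2 is the
# local address as hexIP:hexPORT; column 4 is the socket state
# (0A = LISTEN).
listening_ports() {
    file=${1:-/proc/net/tcp}
    # Skip the header line, then keep rows whose state field is 0A.
    tail -n +2 "$file" | while read -r sl local rem state rest; do
        [ "$state" = "0A" ] || continue
        hexport=${local#*:}         # text after ':' is the hex port
        printf '%d\n' "0x$hexport"  # hex -> decimal
    done
}
```

Comparing its output against `netstat -na` run from a known-good, statically linked binary is one cheap cross-check for a trojaned netstat.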
Many people want to know where these administrators get their tools for keeping security up to par. I asked, "Do any of you administrators develop your own tools with regards to improving security?" With Ford having mentioned building statically linked binaries of such things as 'find', 'ls', and 'ps', Atlas told us that he keeps a shell archive he can load and run that does a nice job finding system changes and intruders. The script was built by viewing the contents of compromised machines and finding the tools used, by un-deleting items from the ext2 file system. He told me that it is still in beta form, but he recommends everyone look at it and make comments and changes as they see fit.
As I solicited questions from the audience, I knew it was only a matter of time before someone asked the famous question: "What was the chain of events following the Slashdot compromise?"
"A lot of it [the response to the compromise] was 'Damn it, we spent so much time securing this thing, and an applications hack happened,'" Lynch said, adding that he, Atlas and Palomino "all stayed up pretty late examining the hack, finding out how they did it, closing up gaps, and shutting down obviously compromised machines until they could be rebuilt." Lynch said they then turned to the Slashdot app team (Slashcode): Brian Aker, Chris "Pudge" Nandor, and Pat Galbraith. They all spent hours checking the integrity of the database, and "we even started from a known good backup as a starting point for that check." Lynch noted that "unfortunately, the best laid plans, even if you have a pretty good, secure network, can sometimes fail in the face of human error." The intruders, he told us, "hung out and explained what they did." That, he said, was basically the whole ordeal. He also mentioned that he spent a long time arguing about whether the issue should have been considered a vulnerability. This is a controversial question: is an applications hack a vulnerability? One could say that since there was a way to circumvent the system, there must have been a vulnerability.
As I began to move on to the next question, Lynch said, "The Slashdot DoS attack was more fun." This was something I, and many other people, hadn't heard about. Lynch, reluctantly, talked about it: a "mid-sized DDoS attack, mostly SYN floods, came through the new network," he explained.
It sounded like something they could handle. The problem, he told us, was that they "had a small ArrowPoint on loan and the poor thing [the load balancer] had a 'stroke' under the load." How can one protect oneself from this?
I directed this question to Ford. He said that one can't fully protect oneself. One step you can take is to "lower the initial timeout for the session. There are numerous ways of accomplishing that via '/proc' tuning, kernel mod, etc." The payoff: "by lowering the timeout, the server's resources are freed more rapidly and it can handle the onslaught more easily." Ford's final tip was to always "have on hand a list of your upstream provider's tech numbers and names." This can come in handy if you come under attack. Another question came from a member of the audience: "What are the absolute bare minimum services one needs on his/her boxen?" The immediate response from Lynch was, "Depends on what you are running..." He went on to say, "If it's a Web server, only the Web server [sending a gesture: :P], then a web server needs port 80, or whatever it's listening on." Ford added that if you are running a mail server, "then port 25, possibly 110, should be running."
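Ford's "'/proc' tuning" could look like the following on a 2.2/2.4-era Linux kernel. The specific knobs are real Linux sysctls, but the values are my own illustrative assumptions, not his exact settings:

```shell
# SYN-flood hardening via /proc (run as root; values are examples).
echo 1    > /proc/sys/net/ipv4/tcp_syncookies      # answer floods with SYN cookies
echo 2    > /proc/sys/net/ipv4/tcp_synack_retries  # give up on half-open connections sooner
echo 1024 > /proc/sys/net/ipv4/tcp_max_syn_backlog # room for pending handshakes
```

Lowering `tcp_synack_retries` is one concrete way to implement his point about freeing the server's resources more rapidly.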
Ford said the "best way to answer this question is to say start with nothing, then add what you need." Essentially, Lynch added, "start with an empty inetd.conf." Lynch also said that over at OSDN, they "usually have nothing in it anyway; some apps rely upon other things like CVS, pserver, etc." Ford said he has, at most, "'ident' in the 'inetd.conf'," and that a "server shouldn't have fast cycling services in 'inetd'. They should be stand-alone." Palomino jumped in to say that "SSH should be the ONLY way anyone should be getting shell access. There is no reason, these days, to run telnet." This is very true, as telnet has many vulnerabilities. Lynch said that if you're "going to use telnet, at least use SSL telnet or SRP telnet, which are pretty secure auth and stream crypto apps."
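Lynch's "empty inetd.conf" looks like this in practice: comment out every service, leaving at most Ford's 'ident' exception. The lines below use standard inetd syntax but are illustrative, not OSDN's actual file:

```
# /etc/inetd.conf -- everything off except ident (service name "auth")
#ftp    stream tcp nowait root   /usr/sbin/tcpd      in.ftpd
#telnet stream tcp nowait root   /usr/sbin/tcpd      in.telnetd
auth    stream tcp nowait nobody /usr/sbin/in.identd in.identd
```

After editing, send inetd a HUP signal (e.g. `killall -HUP inetd`) so it rereads the file.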
Ford mentioned that he "usually has SSH and tends to set it to listen on a non-standard port in the 5 digit range." At OSDN, Lynch said, they have the "ArrowPoint in conjunction with the firewall/packet filter, which only really allows ports in from the outside we [they] specify." Lynch went on to say that "there are in actuality 3-4 actual subnets at Exodus [where OSDN is hosted], 3 of which most people never see." That is going to change, he mentioned: in the future, OSDN will put each project (Slashdot, Freshmeat, etc.) on its own subnet. This way, "there is less danger of having any communication if one cluster does, God forbid, get compromised," he said.

Venturing toward the more technical side of the discussion, an audience member asked: "How do you find rogue servers on your network?" Atlas' response was, "Ah, tcpdump." He went on: "It's your friend, and so is lsof, when it comes to verifying which one is running the rogue program." Lynch mentioned that OSDN also has a system on the bridge which allows them to see traffic going through it; this system helped them find a Trinoo daemon on a developer's machine. Ford jumped in to say, "It's a good idea to put in a machine that has two network cards, where the public card has no services bound to its interfaces." This type of system, he continued, is "considered a listen-only system and used for sniffing." Ford's main tip is to "learn what your traffic patterns are and be able to identify what is valid traffic. Be alert for suspicious traffic that you're not familiar with." Some tools were mentioned. "Ethereal is one of the better GPL-fashioned programs (Ethereal is a GUI [graphical user interface] that decodes packets, very similar to tcpdump)," Ford said. "Sniffit," Atlas added, "is another tool for decoding packets."
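One low-tech complement to tcpdump for rogue-service hunting is simply diffing what a box listens on against what it is supposed to listen on. The function below is my own sketch, not one of the panel's tools (the space-separated allowlist format is invented for illustration); feed it `netstat -na` output:

```shell
# Print listening TCP ports that are NOT in the space-separated
# allowlist -- candidates for rogue services. Reads netstat -na
# style lines on stdin.
rogue_ports() {
    allowed=$1
    awk -v ok="$allowed" '
        BEGIN { split(ok, a, " "); for (i in a) allow[a[i]] = 1 }
        $1 ~ /^tcp/ && /LISTEN/ {
            n = split($4, f, ":")    # local address is column 4
            port = f[n]              # port is its last :-field
            if (!(port in allow) && !seen[port]++) print port
        }'
}

# Example: flag anything other than the Web server and SSH.
# netstat -na | rogue_ports "80 22"
```

Anything it prints is worth chasing down with lsof or fuser, as the panelists describe.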
But, Atlas said, "There are other tools, such as Tripwire, one can use to check the integrity of the system, but it's a matter of admin choice..."
After such great responses, I had to give them one final question, one which sort of sums up the whole discussion: "Is it difficult to manage a site where users from around the world (who are not paid or personally known) have access to the servers that these sites run on? Also, how do you plan on increasing the security of your sites in the future?" Atlas' immediate response was "LIDS. Trust no one." Ford added that, for public access servers, "you always run a heavy risk. You need to keep up with current security topics, take many measures to monitor your systems." Ford went on, "This doesn't mean invade the users' privacy, but it does mean put walls up where they shouldn't go." Lynch added a story: "OSDN, way back when there was some talk about taking shiftq [Linux.com's development server] onto their network, had to put a lot of thought into how it needed to communicate. The net result was that it would be LIDS'd and put on the DMZ." They never ended up taking it, but "having shells," he added, "on a commercial network gives me the willies. I run my own 'freenet' type servers, and I know how that can be, and I *know* most of the people using my boxes."
The panel could not have been better chosen. Thanks to Pat Lynch, Yazz Atlas, Elizabeth Palomino, Katherine McCoy and David Ford for their contributions to this article and the discussion panel.
Suggested sites (thank you to Yazz Atlas for the advice and comments that prompted this section):
Preparing your Linux box for the Internet - Armoring Linux (the author keeps this up to date):
Rkdet is a small daemon intended to catch someone installing a rootkit or running a packet sniffer. (I plan to use this and think it's a neat toolkit):
Nessus is a remote security scanner for Linux, BSD, Solaris, and other Unices. It is multi-threaded and plug-in-based, has a GTK interface, and performs over 500 remote security checks. It allows reports to be generated in HTML, XML, LaTeX, and ASCII text, and suggests solutions for security problems. (A good start for auditing your system, plus it makes nice graphics for those boring staff meetings when you are pushing to tighten security and limit users' access):
The Coroner's Toolkit (TCT) is a collection of tools oriented towards gathering or analyzing forensic data on a Unix system. (If I had known about this earlier, it would most likely have helped us find out what the cracker did when they got into our system months ago. Neat idea):
A good read on how to deal with being cracked:
Everyone knows about Tripwire (or they should), but this one, Samhain, also looks interesting, since it works over a network and can run as a daemon. It's not for everyone, but it just needs some new developers to help it along. NOTE: Getting Tripwire to work with many hosts and log messages over the network costs $$$ (Tripwire HQ Manager):
The open-source part of Tripwire doesn't come with network management:
Here is one of the most versatile intrusion detection systems developed. A little beyond the novice, but something worth mentioning:
'lsof' is a Unix-specific diagnostic tool. Its name stands for List Open Files, and it does just that: it lists information about files opened by processes currently running on the system. It is the single most powerful utility for inspecting running processes and determining which process is listening on which port:
Tcpdump allows you to dump the traffic on a network. It can be used to print out the headers of packets on a network interface that matches a given expression. You can use this tool to track down network problems, to detect "ping attacks" or to monitor the network activities.
Ethereal is a GTK+-based network protocol analyzer, or sniffer, that lets you capture and interactively browse the contents of network frames. The goal of the project is to create a commercial-quality analyzer for Unix and to give Ethereal features that are missing from closed-source sniffers.
ngrep is an awesomely powerful network tool, which strives to provide most of GNU grep's common features, applying them to the network layer. ngrep is a pcap-aware tool that will allow you to specify extended regular expressions to match against data payloads of packets. It currently recognizes TCP, UDP and ICMP across Ethernet, PPP, SLIP and null interfaces, and understands bpf filter logic in the same fashion as more common packet sniffing tools, such as tcpdump and snoop.
nicedump is a network sniffer which tries to display the entire packet contents. Nicedump can be configured to adapt or add new protocols (via its own description language) without any recompilation phase.
Not everyone knows about SSH and how to use it. This site gives the best breakdown that I have seen for new users:
A site to look for problems and to sign up for security-related mailing lists: