Issue #13 November 2005
The Sisyphus security dilemma
by Josh Bressers
Sisyphus is a character from Greek mythology who was condemned by the gods to spend his afterlife ceaselessly rolling a rock to the top of a hill. Just as he neared the top of the hill, the rock would roll back down to the bottom. He is often seen as the "futile laborer" of the underworld. While the process of finding and fixing security flaws is by no means futile, this story rings true regarding the process of applying security updates. Once a machine is fully patched, more security flaws will be discovered and the machine will need additional updates. The current security update model does not scale; the faster security issues are discovered, the faster security updates must be applied.
Luckily, there are a few differences between the modern problem of keeping computers updated and the task of repeatedly rolling a rock up a hill. There isn't a good way to automate rolling a rock up a hill (at least not if you're the person doing the actual rolling), but the application of updates can and should be automated. A clever person would realize after a short period of time that eternally rolling the same rock up the same hill is a pretty big waste of time. It would make far more sense to just stop rolling the rock and get on with other things. This is how applying security updates should work. There should be no need to immediately apply the updates, but rather plenty of time to review the fix and install it as time permits. When the update is ready to be deployed, the amount of effort needed to update one or one thousand machines should be the same.
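The automation described above can be as simple as a scheduled job. A minimal sketch using cron and yum (the update tool shipped with Fedora at the time); the script path, log path, and flags are illustrative and should be adapted to your own system:

```shell
#!/bin/sh
# Hypothetical nightly update script, installed for example as
# /etc/cron.daily/apply-updates. Downloads and applies all available
# package updates non-interactively. Because the same job can be pushed
# to every machine in a fleet, updating one host or one thousand takes
# roughly the same administrator effort.
/usr/bin/yum -y update >> /var/log/nightly-update.log 2>&1
```

Administrators who prefer to review fixes before installing them can run `yum check-update` on the same schedule instead, applying the updates manually as time permits.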
Of apples and oranges
It may be instructive to compare security updates with the process antivirus software uses to keep a computer safe. Computer viruses have been seen as a problem for much longer than security flaws have, and most users seem to understand the need for antivirus software. The process of keeping a computer safe from viruses is typically better understood than the current process of applying security updates. A great number of computers attached to the Internet lack security updates. These include machines running illegal or unsupported software, computers whose users do not understand that they need to keep software updated, and even forgotten computers that happen to have an Internet connection.
Before widespread computer connectivity, most users would never apply a security update to their own operating system. The only time end users received fixes for security issues or bugs was when they decided to upgrade. Their operating system was probably purchased at a store, not downloaded. There was still a need for antivirus software because viruses were spread between computers via floppy disks and shared files, typically with little or no user interaction. A virus could be annoying, never damaging anything but simply spreading between computers, or it could be destructive and damage data.
The model historically used to stop viruses has been very reactive. A new virus is discovered by a researcher. The antivirus vendor then determines an accurate method for identifying and removing the virus. Instructions for identifying this new virus must then be distributed to the computers running the antivirus software. The problem with this method is that there is always a time gap between the discovery of the virus and the time it can be safely detected and removed.
The process of applying security updates is not unlike the process of virus detection. A security issue must be discovered before it can be fixed. The fix must be added to the current packages, tested to ensure that it doesn't cause other problems, and then made available to the end user. The fix then has to be acquired and installed by the end user, and this process must be repeated for each update required. This means that as more security issues are discovered and more computers run vulnerable software, keeping computers updated becomes more and more difficult over time. The process of updating end user systems has become automated, which places the bottleneck on the distribution side. The biggest hurdle today is finding and fixing issues as fast as they are discovered.
Visualize the problem
It is estimated that for every one hundred lines of source code written, one bug is introduced into a program. This means that if a piece of software has one million lines of source, it will have 10,000 bugs. A great majority of these are probably so minor that they won't be noticed during normal program operation. Even if only a fraction of these flaws have security consequences, that leaves many undiscovered security bugs in a large software project. The long-term goal is to have protections in place that will lower the severity of issues we currently consider security bugs. The worst possible outcome of a security issue should be a denial of service, or the program crashing; currently, this is typically the least severe security impact assigned to an issue. There is also the threat of undiscovered security issues. Since any large program likely contains unknown security bugs, having the ability to mitigate potential damage is advantageous.
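The back-of-the-envelope estimate above can be written out directly. A small sketch; the one-bug-per-hundred-lines rate comes from the article, while the fraction of bugs with security consequences is an illustrative guess:

```python
# Rough estimate of latent bugs in a large code base, using the
# article's assumed rate of one bug per hundred lines of source.
LINES_OF_CODE = 1_000_000
BUGS_PER_LINE = 1 / 100

total_bugs = int(LINES_OF_CODE * BUGS_PER_LINE)

# Even if only a small fraction of those bugs have security
# consequences (1% here is an invented figure, not a measurement),
# many undiscovered security flaws remain.
SECURITY_FRACTION = 0.01
security_bugs = int(total_bugs * SECURITY_FRACTION)

print(total_bugs)     # 10000
print(security_bugs)  # 100
```

The exact numbers matter less than the shape of the argument: any plausible defect rate applied to millions of lines of code leaves a pool of flaws far too large to ever patch completely.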
The pace at which security issues are discovered has been steadily increasing over time. Figure 1 displays the number of security issues which have affected Red Hat products per month for the past five years. A best fit line has been added to the data to show that while the number of issues appears somewhat random, it is steadily increasing with time. This should come as little surprise to anyone responsible for applying security updates. This graph shows all security problems, regardless of severity. It is important to break the issues down by severity and look only at those which would be the most dangerous.
Figure 2 shows only security issues which we have assigned the severity of Important or Critical. Issues which are Important or Critical will usually need to be fixed quickly. More time may be taken with issues which have been assigned the severity of Moderate or Low. As Figure 2 clearly shows, the number of Important and Critical issues is manageable, which is why classifying security fixes makes sense. Having the ability to prioritize issues quickly and easily can make a big difference for a busy system administrator.
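The triage described above is straightforward to express in code. A sketch using invented advisory data; the severity scale (Critical, Important, Moderate, Low) is the one the article describes:

```python
# Sketch of severity-based triage: given a batch of advisories, pull
# out the ones that need immediate attention. The advisory IDs and
# severities below are invented for illustration.
advisories = [
    {"id": "RHSA-2005:001", "severity": "Critical"},
    {"id": "RHSA-2005:002", "severity": "Low"},
    {"id": "RHSA-2005:003", "severity": "Important"},
    {"id": "RHSA-2005:004", "severity": "Moderate"},
]

URGENT = {"Critical", "Important"}

def needs_immediate_update(advisory):
    """Urgent issues are patched now; the rest wait for a maintenance window."""
    return advisory["severity"] in URGENT

urgent = [a["id"] for a in advisories if needs_immediate_update(a)]
print(urgent)  # ['RHSA-2005:001', 'RHSA-2005:003']
```

A busy administrator who can filter this way only has to react immediately to a small fraction of the advisory stream, which is the point Figure 2 makes.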
Figure 3 overlays both graphs onto one set of axes. It seems logical that as more security issues are discovered, more Important and Critical issues are also found. It is worth noticing that the rate of increase for Critical and Important issues is lower than the rate of increase for all issues. Given the rate of increase for security issues, it should be expected that without some sort of intervention, keeping your system properly updated is not going to be easy in the future. It is important to understand that while the figures presented suggest that the number of issues is increasing over time, it is unknown whether this trend will continue. The rate at which new issues are discovered could increase, decrease, or stay the same. Using current data, it appears it will continue to increase, but there is no way to be sure of this.
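The best-fit lines in the figures are ordinary least-squares trend lines. A sketch with made-up monthly counts (the real figures use Red Hat's advisory history, which is not reproduced here):

```python
# Fit a least-squares line y = slope*x + intercept to monthly issue
# counts to see whether the trend is rising. The counts below are
# invented for illustration only.
def best_fit(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

months = list(range(12))
all_issues = [10, 14, 9, 16, 13, 18, 15, 21, 17, 22, 20, 25]  # invented
critical   = [1, 2, 1, 2, 2, 3, 2, 3, 2, 3, 3, 4]             # invented

slope_all, _ = best_fit(months, all_issues)
slope_crit, _ = best_fit(months, critical)

# Both slopes are positive (the counts are rising), and the severe
# issues rise more slowly than the total, as Figure 3 suggests.
print(slope_all > slope_crit > 0)  # True
```

Even with noisy month-to-month data, a positive slope on the fitted line is what justifies the article's claim that the totals are "steadily increasing with time."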
The real future in keeping a computer secure does not lie in patching the various issues; it lies in preventative technologies. Suggesting that nobody should ever apply a security patch would be dangerous, but reducing the current immediate need for updates is desirable. The amount of time between an issue being discovered and a virus or worm being written to take advantage of the problem is shrinking quickly. It is possible that a security issue could be discovered by an attacker and used to spread a worm before patches can be made available. It is much easier to use advances in security technologies to prevent the spread of a worm, even on an unpatched machine.
A number of preventative technologies available today have already had a positive impact on Linux security. Technologies such as SELinux, ExecShield, and hardening additions to gcc and glibc can prevent a number of issues from causing serious harm. It's worth noting that these ideas are still very new and will continue to evolve. None are able to solve all possible security issues, but they can help stop what we currently consider to be some of the most dangerous ones. Given enough experience and time, preventative technologies will be able to stop a majority of what are currently considered security issues.
Even with preventative technologies, it's possible that an attacker could leverage multiple unrelated security issues to compromise a computer. There will also always be certain attacks that are not going to be stopped by new technology. It's also likely that there will be bugs in the preventative technology's own code which will prevent it from working as expected. As more preventative technologies are developed, attackers will continue to develop new ways to bypass them. Any time a security system is designed by a person, it will have flaws which can be subverted by a different person. Every time we roll our rock up the hill, we need to pay attention to why it rolls back down, and someday we might make it to the top.
Further reading
- Red Hat Magazine Issue 6
- Russell Coker's article in Issue 1 of the magazine
- Limiting buffer overflows with ExecShield