Issue #9 July 2005

Limiting buffer overflows with ExecShield


Security is a buzzword in the IT industry. Is Linux more secure than Microsoft® Windows®? How much spyware do you have on your PC? Did Code Red or some other worm take down your network? And what about all those viruses?

Approaches to security: The office analogy

Before going deeper into ExecShield details, I'd like to give a brief overview of various ways 'security' is viewed, and how they relate to ExecShield as well as other projects and technologies. A simple way to talk about security is to compare your computer system to a physical building, and to talk about how buildings and computers control access in similar ways.

When an employee enters their office, they encounter security. They may show a badge to a guard, or have it scanned by a machine. With proper credentials, they are allowed in. Projects like Fedora Directory Server deliver this type of security.

Access control is only part of what makes a secure building. If a ground-floor window is left open, a guard at the front desk is not very effective at keeping burglars out. To protect against this, a building might automatically lock the windows or have reinforced glass to keep out intruders. For Linux, ExecShield (and similar projects like PaX and grSecurity) help ensure all the "windows" are closed and strong enough to deter a burglar.

An office building may have areas that are locked down and only accessible to employees with additional credentials. This limits the damage that unauthorized people can do, whether they came in via the front door or by climbing through a window. In the Linux world, Security-Enhanced Linux (SELinux) provides security in this way.

Alarm systems are often used to keep tabs on the security of a building, even if they don't actually prevent intrusion. Metal detectors and x-ray machines alert guards to contraband materials, keeping guns or knives from getting inside. The computer equivalent of an alarm system is an Intrusion Detection System (IDS). Firewalls and virus scanners keep contraband packets or programs out.

The moral responsibility of the operating system vendor

Operating system vendors have been providing security fixes for their products for a long time. While this is important, the Code Red phenomenon has shown that this is no longer enough. Microsoft provided a security fix that prevented Code Red months before the worm was unleashed, but a large number of customers did not apply it, and their systems were compromised.

With an Internet environment that is becoming increasingly hostile, the operating system vendor has a moral responsibility to encourage their customers to apply critical fixes and to attempt to prevent or limit the damage in the event that the fix is not installed. ExecShield is one of several projects that Red Hat has started in this area.

Threat model for the ExecShield project

When designing a security system, you first need to specify what you are and are not defending against. In the office building analogy, the security plan is designed to keep the common burglar out. It is not designed to try to keep military Special Forces out—you won't see anti-aircraft weaponry on most normal office buildings.

With ExecShield, the aim is to defend against automated attacks like worms and viruses without compromising the usability and compatibility of the distribution. The usability restriction is very important; there is no point in providing security if the majority of users turn it off because it impacts usability or performance. The first SELinux release (in a Fedora Core test release) was proof of this: people disabled it because it got in their way.


With the ExecShield project, we didn't go out of our way to specifically protect against a dedicated local attacker with time on his hands. We felt that protecting against this level of threat would be like protecting a temp agency against a Special Forces break-in. Also, it would be nearly impossible to do without compromising usability.

SELinux in strict mode can protect against the "Special Forces" to a limited extent; refer to the article Taking advantage of SELinux in Red Hat Enterprise Linux for more details on that technology.

Exploits—what do they do?

There are many types of exploits ExecShield was designed to combat. The most common and easy-to-exploit vulnerability is the infamous buffer overflow.

Typical stack layout
Figure 1. Typical stack layout

The stack frame image in Figure 1, "Typical stack layout" shows the memory layout of a typical function (or subroutine) that uses a fixed-size buffer. This buffer is stored on the stack, just before the memory location that holds the return address: the address in the calling code at which the program resumes when the subroutine ends. On Intel® and Intel-compatible processors, the stack grows in a downward direction (as seen in Figure 1, "Typical stack layout") over time. This is why the buffer is stored before the return address.

A buffer overflow exploit operates by virtue of a defect in the application. The exploit needs to trick a subroutine into putting more data into the buffer than it can hold. The surplus data is stored beyond the end of the fixed-size buffer, including over the memory location where the return address is stored. By overwriting the return address (which holds the memory location of the code to execute when the subroutine is complete), the exploit gains control over which code is executed when the subroutine finishes. The simplest and most common approach is to make the return address point back into the buffer, as shown in Figure 2, "Making the return address point to inside the buffer".

Making the return address point to inside the buffer
Figure 2. Making the return address point to inside the buffer

After all, the exploit has just filled that buffer. Typical exploits fill the buffer with program code to be executed—code that, for example, executes another program or otherwise damages or compromises the machine.

Preventing execution of data

In this buffer overflow example, external (assumed-hostile) data and program code are mixed up, and the processor is tricked into executing what is supposed to be "just data." This way of exploiting buffer overflows would be impossible if there were a strict separation between program code (executed but never written to) and application data (written to but never executed). It should be impossible for the processor to execute data.

Unfortunately, the x86 family of processors does not have a regular method for controlling the execution of memory that contains data. The processor vendors are aware of this issue, and with AMD's launch of the first AMD64 CPUs, the x86(-64) architecture gained the ability to keep the processor from executing code in parts of memory marked as data. Intel and other x86 processor vendors quickly followed suit. Today, almost any x86 or x86-64 processor you buy supports this feature, called NX by AMD (sometimes marketed as "anti-virus protection") and Execute Disable by Intel. NX works by giving each "page" of memory (a page is the unit in which the processor operates on memory with respect to permissions and swapping; 4096 bytes on x86) a special "don't execute this" permission.

NX provides fine-grained separation between code and data. ExecShield uses the NX capability when available, but also tries to find alternatives for systems without this hardware capability. The technology chosen for this is the segment limits technique, first productized in Solar Designer's Openwall distribution. Segment limits provide limited control over which parts of memory can and cannot be executed. For the majority of application programs this is sufficient, with protection results comparable to NX technology.

A year ago it might have been interesting to explore the technical details of segment limits, but now that all new processors have NX support, segment limit technology is rapidly becoming a relic.

Preventing abuse of existing code

In the aforementioned buffer overflow scenario, an exploit could point the return address to somewhere in the actual application code. Since the application code isn't data, NX and similar technologies that prevent execution of data can't help against this.

The ExecShield project counters this problem in two ways. The first is called Ascii Zone, and the second is Address Space Randomization.

Ascii Zone

Ascii Zone is useful for program flaws in the handling of string (text) buffers. A string buffer is special in the C programming language: it has a variable length, and the end of the string is indicated by a character with a numerical value of zero. Functions that operate on string buffers therefore stop the moment they encounter a zero. The idea behind the Ascii Zone technique is to place as much program code as possible at memory addresses that contain a zero byte. In practice, this means that all shared libraries are covered. An exploit that wants to abuse a string buffer operation must include a zero byte in its overflow data to point at that existing code. The string buffer operation stops at the zero, limiting how much an attacker can do.

Address space randomization

The idea behind Address Space Randomization is to put program code at a different address each time it starts. This way, an exploit can't know where the return address pointer should point to.

Address space layout
Figure 3. Address space layout

There are several parts of a normal application's memory layout that are easy to randomize. The stack and the heap are both data areas that programs use for dynamic memory allocations. The kernel can easily give randomized start locations of these areas because of their dynamic nature.

Another key component of the memory layout is the shared libraries. Thankfully, shared libraries consist of position-independent code: internally they only use memory locations relative to themselves. Shared libraries are designed to be put in different places in memory at different times. As a result of this design, it is quite easy for the kernel to give shared libraries a random start address in memory.

While the randomizations described so far are relatively easy, there are difficult parts, such as the binary of the main application itself. Unlike shared libraries, a Linux binary is positioned at one specific place in memory, chosen when the binary is created, and cannot easily be placed at arbitrary locations. Since the main application binary generally contains quite a lot of code, it is a very important component to randomize. To make this possible, toolchain engineers at Red Hat developed a series of enhancements in the form of patches (for gcc, glibc, and binutils), creating Position Independent Executables (PIEs). A PIE is a hybrid of a shared library and an executable: like a shared library, it can be placed at an arbitrary location in memory, yet it retains the properties of a main application binary that make it a standalone program.

Position independent code has a performance overhead on most architectures (x86-64 is the exception to this). For this reason, neither Red Hat® Enterprise Linux® nor Fedora™ Core have the entire distribution compiled as a PIE binary. Only selected, security sensitive programs are compiled as PIEs.

There is another barrier with respect to randomization: prelink. Prelink is an application that improves program startup times by doing the dynamic linking ahead of time and storing the results in the binaries and shared libraries. Due to the performance improvement it gives, prelink has become a key feature of Red Hat's distributions. Unfortunately, the results of prelink depend on the placement of libraries and binaries in memory. Randomization puts binaries and libraries in different places each time, so prelink can no longer perform the necessary pre-calculation.

This problem was solved by making prelink place libraries at randomized locations in memory. On a typical Fedora Core or Red Hat Enterprise Linux system, this randomization occurs every two weeks when prelink reruns from its cronjob. Between the bi-weekly randomizations, libraries will have a fixed location in memory as assigned by prelink. While this is not ideal, each system on the Internet has a unique randomization and maintains that randomization for only 14 days at a time. In addition, pre-linking is disabled by the dynamic linker for PIEs, which are the most exposed executables in the operating system.

These two measures, when combined, mean that for the assumed threat model, prelink can be enabled without significantly increasing the risk.

Heap overflows/Double frees

An entirely different type of exploitable application bug is the double free, part of the class of exploits that use heap corruption as their point of entry. To understand how these exploits work, some understanding of the internals of the glibc memory allocator is needed.

The glibc library provides memory allocation services to the applications that use glibc (for example, via the malloc() and free() functions). To provide these services, the glibc library must keep track of which memory has been allocated by the application. For this purpose, glibc uses a doubly linked list (see Figure 4, "Doubly linked list") with a bit of metadata for each allocation.

Doubly linked list
Figure 4. Doubly linked list

The critically vulnerable part of the bookkeeping process is the removal of a node from this doubly linked list, which happens when the application frees a piece of previously allocated memory. As can be seen in Figure 4, "Doubly linked list", the pointers of the left-most and right-most nodes need to be adjusted so that the middle node is no longer part of the list. If a security exploit manages to overwrite parts of one of these nodes, it can cause the re-connecting operation to overwrite a piece of memory of its choice.

Another approach is to create a fake node somewhere in memory and then "hook it up" to the list by overwriting another node via a programming flaw. This method gives the attacker control over both the location and the value of what gets written.

Once an attacker can write to arbitrary pieces of memory, he effectively has full control over the program and thus potentially the whole computer.

A significant number of checks have been added to the glibc code base as part of the ExecShield project. One such check relates to the linked list example above. Before doing any memory writes, glibc checks that the linked list is correctly linked up. The exploit techniques mentioned force links in this list to point to incorrect places; the sanity checks added to glibc detect this deception and abort the program before any harm is done.

A beneficial side effect of all these extra sanity checks is that a large number of non-security related programming bugs in applications have been found (and fixed) in the distribution.

GCC compile time buffer checks (FORTIFY_SOURCE)

In Fedora Core 3 and Red Hat Enterprise Linux 4, gcc and glibc gained a feature called "FORTIFY_SOURCE" that detects and prevents a subset of buffer overflows before they can do damage. While the feature is present in these two releases, it is not used for significant portions of the Fedora Core 3 and Red Hat Enterprise Linux 4 distributions themselves. This is different for Fedora Core 4; there, almost the entire distribution is compiled with the feature enabled.

The idea behind FORTIFY_SOURCE is relatively simple: there are cases where the compiler can know the size of a buffer (if it is a fixed-size buffer on the stack, as in the example, or if the buffer just came from a malloc() call). With a known buffer size, functions that operate on the buffer can make sure it does not overflow.


void foo(char *string)
{
	char buf[20];
	strcpy(buf, string);
}

In the example, gcc knows that the buf variable is 20 bytes in size. When this code is compiled with FORTIFY_SOURCE enabled, gcc uses a special version of strcpy() that receives the size of the destination buffer from the compiler. Since the size is known (here, 20 bytes), the strcpy() code will not copy more than 20 bytes. If there are more than 20 bytes to copy, the program is aborted instead. If gcc can prove at compile time that the buffer will always overflow, it also issues a warning.

There are many functions in the standard glibc library that operate on buffers. In Fedora Core 4, the majority of these functions use this extra information from gcc to do the sanity checks.

The FORTIFY_SOURCE feature can be enabled for your own application in Fedora Core 3 and 4 and Red Hat Enterprise Linux 4 by passing the following argument to gcc (optimization must be enabled as well, for example with -O2):

-D_FORTIFY_SOURCE=2

You can see how often a checking function is used in an application via the following command:

objdump -d <program or library> | grep call | grep _chk | wc -l

If this is non-zero, FORTIFY_SOURCE is active. If the value is zero, either FORTIFY_SOURCE is not enabled, or the program code is such that FORTIFY_SOURCE does not apply (for example, secure code that has no fixed-size buffers and always checks buffer sizes, leaving no potentially dangerous operations for FORTIFY_SOURCE to instrument).

About the author

Arjan van de Ven is a Quality Architect at Red Hat, primarily responsible for designing and implementing quality assurance systems and processes. In a past life he was involved with the Linux kernel as used in many of Red Hat's distributions.