
22 October 2009



Leave No Trace

Subtle and insubstantial, the expert leaves no trace; divinely mysterious, he is inaudible. Thus he is the master of his enemy's fate.


Rootkits are not, in and of themselves, malicious. However, rootkits can be used by malicious programs. Understanding rootkit technology is critical if you are to defend against modern attacks.

First-generation rootkits were just normal programs. Today, rootkits are typically packaged as device drivers. Over the next few years, advanced rootkits may modify or install into the microcode of a processor, or exist primarily in the microchips of a computer. For example, it is not inconceivable that the bitmap for an FPGA (field-programmable gate array) could be modified to include a back door. (This assumes there is enough room, in terms of gates, to add features: hardware manufacturers try to save money on every component, so an FPGA will be as small as possible for the application, and there may not be much room left in the gate array for anything new. Inserting a rootkit into a tight spot like this might require removing other features.) Of course, this type of rootkit would be crafted for a very specific target. Rootkits that use more generic operating-system services are more likely to be in widespread use.

The kind of rootkit technology that could hide within an FPGA is not suitable for use by a network worm. Hardware-specific attacks don't work well for worms. The network-worm strategy is facilitated by large-scale, homogenous computing. In other words, network worms work best when all the targeted software is the same. In the world of hardware-specific rootkits, there are many small differences that make multiple-target attacks difficult. It is much more likely that hardware-based attacks would be used against a specific target the attacker can analyze in order to craft a rootkit specifically for that target.

As long as software exploits exist, rootkits will use them; the two work together naturally. However, even if such exploits were not possible, rootkits would still exist. In the next decade or so, the buffer overflow, currently the "king of all software exploits," will be dead and buried. Advances such as type-safe languages and automated code-scanning tools (both discussed later in this text) will render the buffer overflow ineffective, striking a huge blow against those who rely on remote exploitation. This doesn't mean exploits will go away. The new world of exploitation will be based on logic errors in programs rather than on the architectural flaw behind the buffer overflow.

With or without remote exploitation, however, rootkits will persist. Rootkits can be placed into systems at many stages, from development to delivery. As long as there are people, people will want to spy on other people. This means rootkits will always have a place in our technology. Backdoor programs and technology subversions are timeless!

Understanding Attackers' Motives
A back door in a computer is a secret way to get access. Back doors have been popularized in many Hollywood movies as a secret password or method for getting access to a highly secure computer system. But back doors are not just for the silver screen; they are very real, and can be used for:
  • stealing data
  • monitoring users, and
  • launching attacks deep into computer networks.
An attacker might leave a back door on a computer for many reasons. Breaking into a computer system is hard work, so once an attacker succeeds,
  • she will want to keep the ground she has gained, and
  • she may want to use the compromised computer to launch additional attacks deeper into the network.
A major reason attackers penetrate computers is to gather intelligence.
To gather intelligence, the attacker will want to
  • monitor keystrokes
  • observe behavior over time
  • sniff packets from the network, and exfiltrate data from the target (to exfiltrate: to transport out of, or remove from, a location; here, to transport a copy of data from one location to another).
All of this requires establishing a back door of some kind: the attacker must leave software running on the target system that can perform the intelligence gathering.
Attackers also sometimes penetrate computers in order to destroy them. In that case, the attacker might leave behind a logic bomb, set to destroy the computer at a specific time; while the bomb waits, it needs to stay undetected. Even though the attacker does not require subsequent back-door access to the system, this is still a case where software is left behind, and it must remain undetected.

The Role of Stealth
To remain undetected, a back-door program must use stealth. Unfortunately, most publicly available "hacker" back-door programs aren't terribly stealthy, and many things can go wrong. This is mostly because their developers want to build everything, including the proverbial kitchen sink, into them. For example, take a look at the Back Orifice or NetBus programs. These back-door programs sport impressive lists of features, some as foolish as ejecting your CD-ROM tray.
This is fun for office humor, but not a function that would be used in a professional attack operation ("professional" here indicating a sanctioned operation of some kind, as performed, for example, by law enforcement, pen testers, red teams, or the equivalent).
If the attacker is not careful, she may reveal her presence on the network, and the whole operation may sour.
Because of this, professional attack operations usually require specific and automated back-door programs—programs that do only one thing and nothing else. This provides assurance of consistent results.
If computer operators suspect that their computer or network has been penetrated, they may perform forensic discovery looking for unusual activity or back-door programs (For a good text on computer forensics, see D. Farmer and W. Venema, Forensic Discovery (Boston: Addison-Wesley, 2004)).

The best way to counter forensics is with stealth: If no attack is suspected, then no forensics are likely to be applied to the system. Attackers may use stealth in different ways.
  • Some may simply try to step lightly by keeping network traffic to a minimum and avoiding storing files on the hard drive.
  • Others may store files but employ obfuscation techniques that make forensics more difficult.
If stealth is used properly, forensics will never be applied to a compromised system, because the intrusion will not have been detected. Even if an attack is suspected and forensics end up being used, a good stealth attack will store data in obfuscated ways to escape detection.

When Stealth Doesn't Matter
Sometimes an attacker doesn't need to be stealthy. For instance,
  • the attacker may want to penetrate a computer only long enough to steal something, such as an e-mail spool, and may not care if the attack is eventually detected.
  • the attacker may simply want to crash the target computer. For example, perhaps the target computer is controlling an anti-aircraft system. In this case, stealth is not a concern: just crashing the system is enough to achieve the objective. In most cases, a computer crash will be obvious (and disturbing) to the victim.
If this is the kind of attack you want to learn more about, this text will not help you. Now that you have a basic understanding of attackers' motives, we'll discuss rootkits in general, including some background on the subject as well as how rootkits work.

What Is a Rootkit?
The term rootkit has been around for more than 14 years. A rootkit is a "kit" consisting of small and useful programs that allow an attacker to maintain access to "root," the most powerful user on a computer. In other words, a rootkit is a set of programs and code that allows a permanent or consistent, undetectable presence on a computer. In our definition of "rootkit," the key word is "undetectable."
  • Most of the technology and tricks employed by a rootkit are designed to hide code and data on a system. For example, many rootkits can hide files and directories.
  • Other features in a rootkit are usually for remote access and eavesdropping; for instance, for sniffing packets from the network.
When combined, these features deliver a knockout punch to security. Rootkits are not inherently "bad," and they are not always used by the "bad guys." It is important to understand that a rootkit is just a technology; good or bad intent derives from the humans who use it. There are plenty of legitimate commercial programs that provide remote administration and even eavesdropping features. Some of these programs even use stealth. In many ways, these programs could be called rootkits.
  • Law enforcement may use the term "rootkit" to refer to a sanctioned back-door program: something installed on a target with legal permission from the state, perhaps via court order.
  • Large corporations also use rootkit technology to monitor and enforce their computer-use regulations.
By taking the attacker's perspective, we guide you through your enemies' skills and techniques; this will increase your skill in defending against the rootkit threat. If you are a legitimate developer of rootkit technology, this text will also help you build a base of skills that you can expand upon.

Why Do Rootkits Exist?
Rootkits are a relatively recent invention, but spies are as old as war. Rootkits exist for the same reasons that audio bugs exist. People want to see or control what other people are doing. With the huge and growing reliance on data processing, computers are natural targets.

Rootkits are useful only if you want to maintain access to a system. If all you want to do is steal something and leave, there is no reason to leave a rootkit behind. In fact, leaving a rootkit behind always opens you to the risk of detection. If you steal something and clean up the system, you may leave no trace of your operation. Rootkits provide two primary functions:
  1. remote command and control, and
  2. software eavesdropping.

Remote Command and Control
Remote command and control (or simply "remote control") can include control over files, causing reboots or "Blue Screens of Death," and accessing the command shell (that is, cmd.exe or /bin/sh). Figure 1-1 shows an example of a rootkit command menu. This command menu will give you an idea of the kinds of features a rootkit might include.

Software Eavesdropping
Software eavesdropping is all about watching what people do. This means
  • sniffing packets
  • intercepting keystrokes, and
  • reading e-mail.
An attacker can use these techniques to capture passwords and decrypted files, or even cryptographic keys.
While rootkits have applications in waging digital warfare, they are not the first application of the concept. Wars are fought on many fronts, not the least of which is economic. From the end of World War II through the Cold War, the USSR mounted a large intelligence-gathering operation against the U.S. to obtain technology (G. Weiss, "The Farewell Dossier," in Studies in Intelligence (Washington: Central Intelligence Agency, Center for the Study of Intelligence, 1996)). Having detected some of these operations, the U.S. planted bogus plans, software, and materials into the collection channel. In one reported incident, malicious modifications to software (so-called "extra ingredients") were credited for a Siberian gas-pipeline explosion (implying that the explosion was caused by some sort of software subversion). The explosion was photographed by satellites and was described as "the most monumental non-nuclear explosion and fire ever seen from space" (D. Hoffman, "Cold War hotted up when sabotaged Soviet pipeline went off with a bang," Sydney Morning Herald, 28 February 2004).

Legitimate Uses of Rootkits
As we alluded to already, rootkits can be used for legitimate purposes. For instance, they can be used by law-enforcement agencies to collect evidence in an advanced bugging operation. This would apply to any crime in which a computer is used, such as computer trespass, creating or distributing child pornography, software or music piracy, and DMCA violations (The Digital Millennium Copyright Act of 1998, PL 105-304, 17 USC § 101 et seq.).

Rootkits can also be used to fight wars. Nations and their militaries rely heavily on computing machinery. If these computers fail, the enemy's decision cycle and operations can be affected. The benefits of a computer attack (versus a conventional one) include lower cost, keeping soldiers out of danger, little collateral damage, and, in most cases, no permanent damage. For instance, if a nation bombs all the power plants in a country, those plants must be rebuilt at great expense. But if a software worm infects the power-control network and disables it, the target country still loses the power plants' output, yet the damage is neither permanent nor as expensive.

How Long Have Rootkits Been Around?
As we noted previously, rootkits are not a new concept. In fact, many of the methods used in modern rootkits are the same methods used in viruses in the 1980s, for example, modifying:
  • key system tables,
  • memory, and
  • program logic.
In the late 1980s, a virus might have used these techniques to hide from a virus scanner. The viruses of this era spread infected programs via floppy disks and BBSs (bulletin-board systems).

When Microsoft introduced Windows NT, the memory model was changed so that normal user programs could no longer modify key system tables. A lull in virus technology followed, because virus authors had not yet learned to work in the new Windows kernel. When the Internet began to catch on, it was dominated by UNIX operating systems. Most computers used variants of UNIX, and viruses were uncommon. However, this is also when network worms were born. With the famous Morris Worm, the computing world woke up to the possibility of software exploits. (Robert Morris released the first documented Internet worm; for an account, see K. Hafner and J. Markoff, Cyberpunk: Outlaws and Hackers on the Computer Frontier (New York: Simon & Schuster, 1991).)

  1. During the early 1990s, many hackers figured out how to find and exploit buffer overflows, the "nuclear bomb" of all exploits. However, the virus-writing community didn't catch on for almost a decade.
  2. During the early 1990s, a hacker would penetrate a system, set up camp, and then use the freshly compromised computer to launch new attacks. Once a hacker had penetrated a computer, she needed to maintain access. Thus, the first rootkits were born. These original rootkits were merely backdoor programs, and they used very little stealth. In some cases, they replaced key system binaries with modified versions that would hide files and processes.
    For example, consider a program called ls that lists files and directories. A first-generation rootkit might replace the ls program with a Trojan version that hides any file named hacker_stuff. Then, the hacker would simply store all of her suspect data in a file named hacker_stuff. The modified ls program would keep the data from being revealed.
  3. System administrators at that time responded by writing programs such as Tripwire that could detect whether files had been changed.
    Using our previous example, a security utility like Tripwire could examine the ls program and determine that it had been altered, and the Trojan would be unmasked.
  4. The natural response was for attackers to move into the kernel of the computer. The first kernel rootkits were written for UNIX machines. Once they infected the kernel, they could subvert any security utility on the computer at that time. In other words, Trojan files were no longer needed: All stealth could be applied by modifying the kernel. This technique was no different from the techniques used by viruses in the late 1980s to hide from anti-virus software.
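The Trojan-ls replacement described in step 2 and the Tripwire-style response in step 3 can be sketched in miniature. This is an illustrative Python simulation under our own assumptions, not real rootkit code: the filename hacker_stuff comes from the text, while the helper names and the fake binary contents are hypothetical.

```python
import hashlib

HIDDEN = {"hacker_stuff"}  # the attacker's file, taken from the text's example

def trojan_ls(entries):
    """A first-generation Trojan 'ls': return the listing minus hidden names."""
    return [name for name in entries if name not in HIDDEN]

def fingerprint(data: bytes) -> str:
    """A Tripwire-style integrity check: hash a binary's contents."""
    return hashlib.sha256(data).hexdigest()

# The Trojan hides the attacker's file from a directory listing...
print(trojan_ls(["notes.txt", "hacker_stuff", "report.doc"]))
# ['notes.txt', 'report.doc']

# ...but an integrity checker still notices that the ls binary itself changed,
# because the recorded hash of the original no longer matches. (These byte
# strings are stand-ins for real binaries.)
original_ls = b"\x7fELF original ls code"
trojan_binary = b"\x7fELF patched to hide hacker_stuff"
print(fingerprint(original_ls) != fingerprint(trojan_binary))  # True
```

This is exactly why the arms race moved into the kernel, as step 4 describes: once stealth is applied below the file system, there is no modified binary left for the integrity checker to fingerprint.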

How Do Rootkits Work?
Rootkits work using a simple concept called modification. In general, software is designed to make specific decisions based on very specific data. A rootkit locates and modifies the software so it makes incorrect decisions. There are many places where modifications can be made in software. Some of them are discussed in the following paragraphs.

Executable code (sometimes called a binary) consists of a series of statements encoded as data bytes. These bytes come in a very specific order, and each means something to the computer. Software logic can be modified if these bytes are modified. This technique is sometimes called patching, like placing a patch of a different color on a quilt. Software is not smart; it does only and exactly what it is told to do and nothing else. That is why modification works so well. In fact, under the hood, it's not all that complicated. Byte patching is one of the major techniques used by "crackers" to remove software protections. Other types of byte patches have been used to cheat on video games (for example, to give unlimited gold, health, or other advantages).
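Byte patching can be shown in a few lines. In this hedged Python sketch, the code fragment, opcodes, and offset are made up for illustration (not taken from any real program): a conditional jump byte (0x74, the x86 JE opcode) is overwritten with an unconditional short jump (0xEB), so an imagined "is it registered?" check always takes the licensed path.

```python
def patch_byte(code: bytearray, offset: int, new_byte: int) -> None:
    """Overwrite a single byte of 'code' in place -- the whole technique."""
    code[offset] = new_byte

# A made-up four-byte x86 fragment: cmp al, 1 ; je +5  ("is it registered?")
code = bytearray([0x3C, 0x01, 0x74, 0x05])

patch_byte(code, 2, 0xEB)  # je -> jmp: the branch is now always taken
print(code.hex())          # 3c01eb05
```

A cracker applies the same one-byte change to the program's file on disk (or to its image in memory); everything else in the binary is left untouched, which is what makes the quilt-patch metaphor apt.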

Easter Eggs
Software logic modifications may be "built in." A programmer may place a back door in a program she wrote. This back door is not in the documented design, so the software has a hidden feature. This is sometimes called an Easter Egg, and can be used like a signature: the programmer leaves something behind to show that she wrote the program. Earlier versions of the widely used program Microsoft Excel contained an Easter Egg that allowed a user who found it to play a 3D first-person shooter game, similar to Doom, embedded inside a spreadsheet cell (The Easter Eggs and Curios Database).

Spyware Modifications
Sometimes a program will modify another program to infect it with "spyware." Some types of spyware track which Web sites are visited by users of the infected computer. Like rootkits, spyware may be difficult to detect, and some types hook into Web browsers or program shells, making them difficult to remove. They then make the user's life hell by placing links for new mortgages and Viagra on the desktop, and generally reminding the user that the browser is totally insecure. (Many Web browsers fall prey to spyware; Microsoft's Internet Explorer is one of the biggest targets, so you might choose, for example, Mozilla's Firefox with the NoScript plugin.)

Source-Code Modification
Sometimes software is modified at the source—literally. A programmer can insert malicious lines of source code into a program she authors.
  1. This threat has caused some military applications to avoid open-source packages such as Linux. These open-source projects allow almost anyone ("anyone" being "someone you don't know") to add code to the sources.
  2. Granted, there is some amount of peer review on important code like BIND, Apache, and Sendmail.
  3. But, on the other hand, does anyone really go through the code line by line? (If they do, they don't seem to do it very well when trying to find security holes!) Imagine a back door that is implemented as a bug in the software. For example, a malicious programmer may expose a program to a buffer overflow on purpose. Since such a back door is disguised as a bug, it is difficult to detect; furthermore, it offers plausible deniability on the part of the programmer! Okay, we can hear you saying, "Bah! I fully trust all those unknown people out there who authored my software, because they are obviously only three degrees of separation from Linus Torvalds, and I'd trust Linus with my life!"
  • Fine, but do you trust the skills of the system administrators who run the source-control servers and the source-code distribution sites? There are several examples of attackers gaining access to source code. A major compromise of this type took place in 2003, when the root FTP servers for the GNU Project, source of the GNU operating system, were broken into (CERT Advisory CA-2003-21).
  • Modifications to source code can end up in hundreds of program distributions and are extremely difficult to locate. Even the sources of the very tools used by security professionals have been hacked in this way (For example, D. Song's site was compromised in May, 2002, and the Dsniff, Fragroute and Fragrouter tools hosted there were contaminated. See "Download Sites Hacked, Source Code Backdoored," SecurityFocus).

The Legality of Software Modification
Some forms of software modification are illegal.
For example, if you use a program to modify another program in a way that removes copyright mechanisms, you may be in violation of the law (depending on your jurisdiction). This applies to any "cracking" software that can commonly be found on the Internet.
For example, you can download an evaluation copy of a program that "times out" and stops functioning after 15 days, then download and apply a "crack," after which the software will run as if it had been registered. Such a direct modification of the code and logic of a program would be illegal.

What a Rootkit Is Not
We have described how a rootkit is a powerful hacker tool. But there are many kinds of hacker tools; a rootkit is only one part of a larger collection.

A Rootkit Is Not an Exploit
Rootkits may be used in conjunction with an exploit, but the rootkit itself is a fairly straightforward set of utility programs. These programs may use undocumented functions and methods, but they typically do not depend on software bugs (such as buffer overflows). A rootkit will typically be deployed after a successful software exploit.
Many hackers have a treasure chest of exploits available, but they may have only one or two rootkit programs.
Regardless of which exploit an attacker uses, once she is on the system, she deploys the appropriate rootkit. Although a rootkit is not an exploit, it may incorporate a software exploit. A rootkit usually requires access to the kernel and contains one or more programs that start when the system is booted. There are only a limited number of ways to get code into the kernel (for example, as a device driver). Many of these methods can be detected forensically.

One novel way to install a rootkit is to use a software exploit.
Many software exploits allow arbitrary code or third-party programs to be installed. Imagine that there is a buffer overflow in the kernel (there are documented bugs of this nature) that allows arbitrary code to be executed. Kernel-buffer overflows can exist in almost any device driver (for example, a printer driver). Upon system startup, a loader program can use the buffer overflow to load a rootkit. The loader program does not employ any documented methods for loading or registering a device driver or otherwise installing a rootkit. Instead, the loader exploits the buffer overflow to install the kernel-mode parts of a rootkit. The buffer-overflow exploit is a mechanism for loading code into the kernel. Although most people think of this as a bug, a rootkit developer may treat it as an undocumented feature for loading code into the kernel. Because it is not documented, this "path to the kernel" is not likely to be included as part of a forensic investigation. Even more importantly, it won't be protected by a host-based firewall program. Only someone skilled in advanced reverse engineering would be likely to discover it.

A Rootkit Is Not a Virus
A virus program is a self-propagating automaton.
In contrast, a rootkit does not make copies of itself, and it does not have a mind of its own. A rootkit is under the full control of a human attacker, while a virus is not.
In most cases, it would be dangerous and foolish for an attacker to use a virus when she requires stealth and subversion. Beyond the fact that creating and distributing virus programs may be illegal, most virus and worm programs are noisy and out of control.
A rootkit enables an attacker to stay in complete control. In the case of a sanctioned penetration (for example, by law enforcement), the attacker needs to ensure that only certain targets are penetrated, or else she may violate a law or exceed the scope of the operation. This kind of operation requires very strict controls, and using a virus would simply be out of the question.
It is possible to design a virus or worm program that spreads via software exploits that are not detected by intrusion-detection systems, for instance, zero-day exploits (a zero-day exploit is brand new, and no software patch exists yet to fix it). Such a worm could spread very slowly and be very difficult to detect. It may have been tested in a well-stocked lab environment with a model of the target environment. It may include an "area-of-effect" restriction to keep it from spreading outside of a controlled boundary. And, finally, it may have a "land-mine timer" that causes it to be disabled after a certain amount of time, ensuring that it doesn't cause problems after the mission is over.

The Virus Problem
Even though a rootkit is not a virus, the techniques used by a rootkit can easily be employed by a virus.
When a rootkit is combined with a virus, a very dangerous technology is born.
The world has seen what viruses can do. Some virus programs have spread through millions of computers in only a few hours. The most common operating system, Microsoft Windows, has historically been plagued with software bugs that allow viruses to infect computers over the Internet.
Most malicious hackers will not reveal software bugs to the vendor. In other words, if a malicious hacker were to find an exploitable bug in Microsoft Windows, she would not reveal this to Microsoft. An exploitable bug that affects the default installation of most Windows computers is like a "key to the kingdom"; telling the vendor about it would be giving away the key.
  1. Understanding rootkit technology is very important for defending against viruses.
  2. Virus programmers have been using rootkit technology for many years to "heat up" their viruses. This is a dangerous trend. Algorithms have been published for virus propagation (N. Weaver, "Warhol Worms: The Potential for Very Fast Internet Plagues") that can penetrate hundreds of thousands of machines in an hour.
  3. Techniques exist for destroying computer systems and hardware. And, remotely exploitable holes in Microsoft Windows are not going away. Viruses that use rootkit technology are going to be harder to detect and prevent.

Rootkits and Software Exploits
Software exploitation is an important subject relating to rootkits. (We do not cover here how software breaks and can be exploited; if you're interested in software exploitation, we recommend G. Hoglund and G. McGraw, Exploiting Software.) Although a rootkit is not an exploit, it may be employed as part of an exploit tool (for example, in a virus or spyware). The threat of rootkits is made strong by the fact that software exploits are in great supply.
For example, a reasonable conjecture is that at any given time, there are more than a hundred known working exploitable holes in the latest version of Microsoft Windows (We cannot offer proof for this conjecture, but it is a reasonable assumption derived from knowledge about the problem).
For the most part, these exploitable holes are known by Microsoft and are being slowly managed through a quality-assurance and bug-tracking system (Most software vendors use similar methods to track and repair bugs in their products).

Eventually, these bugs are fixed and silently patched. ("Silently patched" means the bug is fixed via a software update, but the software vendor never informs the public or any customers that the bug ever existed; for all intents and purposes, the bug is treated as secret. This is, in fact, standard practice for many large software vendors.)

Some exploitable software bugs are found by independent researchers and never reported to the software vendor. They are deadly because nobody knows about them except the attacker. This means there is little to no defense against them (no patch is available). Many exploits that have been publicly known for more than a year are still being widely exploited today.
Even if there is a patch available, most system administrators don't apply the patches in a timely fashion. This is especially dangerous because, even if no exploit program exists when a security flaw is discovered, an exploit program is typically published within a few days after the release of a public advisory or a software patch.
Although Microsoft takes software bugs seriously, integrating changes at any large operating-system vendor can take an inordinate amount of time. When a researcher reports a new bug to Microsoft, she is usually asked not to release public information about the exploit until a patch can be released. Bug fixing is expensive and takes a great deal of time; some bugs aren't fixed until several months after they are reported. One could argue that keeping bugs secret encourages Microsoft to take too long to release security fixes: as long as the public doesn't know about a bug, there is little incentive to quickly release a patch. To address this tendency, the security company eEye devised a clever method of publicizing the fact that a serious vulnerability has been found without releasing the details: it publishes when the bug was reported to the vendor, and by how many days the vendor's patch is "overdue," based on the judgment that a timely response would be release of a patch within 60 days. As we have seen in the real world, large software vendors take longer than 60 days. Historically, it seems the only time a patch is released within days is when a real Internet worm that uses the exploit is released.

Type-Safe Languages
Programming languages that are type-safe are more secure from certain exploits, such as buffer overflows. Without type safety, program data is just a big ocean of bits. The program can grab any arbitrary handful of bits and interpret it in limitless ways, regardless of the original purpose of the data. For example, if the string "GARY" were placed into memory, it could later be used not as text, but as a 32-bit integer, 0x47415259 (or, in decimal, 1,195,463,257, a rather large number indeed!). When data supplied by an external user can be misinterpreted, software exploits can be employed. Conversely, a program written in a type-safe language (such as Java, C#, Python, or Ruby) would never convert "GARY" to a number; the string would always be treated as text and nothing else.
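The "GARY" example can be checked directly. This short Python sketch reinterprets the same four text bytes as a 32-bit big-endian integer, reproducing the number quoted above; note that Python itself is type-safe, so the reinterpretation has to be requested explicitly through the struct module rather than happening by accident.

```python
import struct

text = b"GARY"  # four bytes: 0x47 0x41 0x52 0x59

# Explicitly reinterpret the bytes as one big-endian unsigned 32-bit integer.
(as_int,) = struct.unpack(">I", text)
print(hex(as_int), as_int)  # 0x47415259 1195463257
```

In C, the same reinterpretation can happen silently through a stray pointer cast; the exploit problem arises precisely when an attacker, not the programmer, chooses which interpretation the program applies.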

Why Exploits Are Still a Problem
The need for software security has been known for a long time, yet software exploits continue to be a problem. The root of the problem lies within the software itself. Bluntly stated, most software is not secure. Companies like Microsoft are making huge strides in designing better security for the future, but current operating-system code is written in C or C++, computer languages that by their very nature introduce severe security holes. These languages give rise to the problem of buffer-overflow exploits. The buffer-overflow bug is the most significant weakness in software today. It has been the enabler for thousands of software exploits. And, it's a bug, an accident that can be fixed. (Although buffer-overflow bugs are not confined to C and C++ code, these languages make it difficult to ensure safe coding practices: they are not type-safe, include built-in functions that can overflow buffers, and are difficult to debug.) Buffer-overflow exploits will eventually go away, but not in the near future. Although a disciplined programmer can write code that does not have buffer-overflow bugs, regardless of language (even a program written by hand in assembly can be secure), most programmers are not that diligent. The current trend is to enforce safe coding practices and follow this up with automated code-scanning tools to catch mistakes. Microsoft uses a set of internal tools for this purpose (for example, PREfix and PREfast, developed and deployed by Jon Pincus of Microsoft Research).

  1. Automated code-scanning tools can catch some bugs, but not all of them. Most computer programs are very complex, and it can be difficult to test them thoroughly in an automated fashion.
  2. Some programs may have too many states to evaluate them all. (A "state" is an internal configuration within the software; every time the software does something, the state changes. In fact, it is possible for a computer program to have more potential states than there are particles in the universe.)
Given this potential complexity, it can be very hard to make any determination about the security of a computer program. To understand this, consider the theoretical bounds on the number of permutations of a string of binary bits. Imagine a 160MB software application that uses 16MB (10% of its total size) of memory to store state. Those 16MB amount to 134,217,728 bits, so the program could, in theory, have up to 2^134,217,728 different operational states, which is far, far larger than the number of particles in the universe (variously estimated at around 10^80).

The adoption of type-safe languages (such as Java and C#) would nearly eliminate the risk of buffer overflows. Although a type-safe language is not guaranteed to be secure, it significantly reduces the risks of buffer overflows, sign-conversion bugs, and integer overflows.
Unfortunately, these languages cannot match the performance of C or C++, and most of Microsoft Windows—even the latest and greatest version—still runs old C and C++ code.
Developers of embedded systems have begun to adopt type-safe languages, but even this uptake is slow, and the millions of legacy systems out there will not be replaced any time soon. What this means is that old-fashioned software exploits will be around for a while.

Offensive Rootkit Technologies

A good rootkit should be able to bypass security measures such as firewalls and intrusion-detection systems (IDSes). There are two primary types of IDSes: host-based (usually deployed as host-based intrusion-prevention systems, or HIPSes) and network-based (NIDSes).

HIPS technology can be home-grown or bought off-the-shelf. Examples of commercial HIPS software include Okena and Entercept.
For the rootkit, the biggest threat is HIPS technology. A HIPS can sometimes detect a rootkit as it installs itself, and can intercept a rootkit as it communicates with the network. Many HIPSes utilize kernel technology and can monitor the operating system itself. In a nutshell, a HIPS is an anti-rootkit: anything a rootkit does on the system will most likely be detected and stopped.

When using a rootkit against a HIPS-protected system, there are two choices:
  • bypass the HIPS, or
  • pick an easier target.
The example code in this text can help you understand how to bypass a HIPS, and can also assist you in constructing your own rootkit-protection system.

Network-based IDS (NIDS) is also a concern for rootkit developers, but a well-designed rootkit can evade a production NIDS. Although, in theory, statistical analysis can detect covert communication channels, in reality this is rarely done. Network connections to a rootkit will likely use a covert channel hidden within innocent-looking packets, and any important data transfer will be encrypted. Most NIDS deployments deal with large data streams (upward of 300 MB/second), and the little trickle of data going to a rootkit will pass by unnoticed. The NIDS poses a larger detection threat when a publicly known exploit is used in conjunction with a rootkit. (When using a publicly known exploit, an attacker may craft the exploit code to mimic the behavior of an already-released worm, such as the Blaster worm. Most security administrators will mistake the attack for the actions of the known worm, and thus fail to recognize a unique attack.)

Bypassing the IDS/IPS
To bypass firewalls and IDS/IPS software, there are two approaches:
  • active: Active offenses operate at runtime and are designed to prevent detection. They are modifications to the system hardware and kernel designed to subvert and confuse intrusion-detection software. Active measures are usually required to disable HIPS software (such as Okena and Entercept). In general, active offense is used against software that runs in memory and attempts to detect rootkits. Active offenses can also be used to render system-administration tools useless for detecting an attack. A complex offense could render any security software tool ineffective; for example, an active offense could locate a virus scanner and disable it.
  • passive: Passive offenses are applied "behind the scenes," just in case someone gets suspicious, to make forensics as difficult as possible. Passive offenses are obfuscations in data storage and transfer. For example, encrypting data before storing it in the file system is a passive offense. A more advanced offense would be to store the decryption key in non-volatile hardware memory (such as flash RAM or EEPROM) instead of in the file system. Another form of passive offense is the use of covert channels for exfiltration of data out of the network.
Both approaches must be combined to create a robust rootkit.

Finally, a rootkit should not be detected by a virus scanner. Virus scanners not only operate at runtime, they can also be used to scan a file system "offline." For example, a hard drive on a lab bench can be forensically analyzed for viruses. To avoid detection in such cases, a rootkit must hide itself in the file system so that it cannot be detected by the scanner.

Bypassing Forensic Tools
Ideally, a rootkit should never be detected by forensic scanning, but this problem is hard to solve. Powerful tools exist to scan hard drives. Some tools, such as EnCase, "look for the bad" and are used when a system is suspected of an infection. Other tools, such as Tripwire, "look for the good" and are used to ensure that a system remains uninfected.

A practitioner using a tool like EnCase will scan the drive for byte patterns. Such a tool can look at the entire drive, not just regular files; slack space and deleted files will be scanned.
  • To avoid detection in this case, the rootkit should not have easily identifiable patterns.
  • The use of steganography can be powerful in this area.
  • Encryption can also be used, but tools used to measure the randomness of data may locate encrypted blocks of data.
  • If encryption is used, the part of the rootkit responsible for decryption must itself remain unencrypted (of course).
  • Polymorphic techniques can be used to mutate the decryptor code for further protection.
Remember that the tool is only as good as the forensic technicians who drive it. If you think of some way to hide that they have not thought of, you might escape detection.

Tools that perform cryptographic hashing against the file system, such as Tripwire, require a database of hashes to be made from a clean system. In theory, if a copy of a clean system (that is, a copy of the hard drive) is made before the rootkit infection takes place, an offline analysis can be performed that compares the new drive image to the old one. Any differences on the drive image will be noted. The rootkit will certainly be one difference, but there will be others as well: any running system will change over time. To avoid detection, a rootkit can hide in the regular noise of the file system.

Additionally, these tools look only at files, and they may look only at some files, perhaps just files considered important. They don't address data stored in non-conventional ways (for example, in bad sectors on a drive), and temporary data files are likely to be ignored. This leaves many potential places to hide that will not be checked.

If an attacker is really worried that the system administrator has hashed everything and the rootkit will be detected, she could avoid the file system altogether, perhaps installing the rootkit into memory and never using the drive. One drawback, of course, is that a rootkit stored in volatile memory will vanish if the system reboots. To take things to an extreme, perhaps a rootkit can install itself into firmware present in the BIOS or a flash RAM chip somewhere.

Subverting the Kernel

There was no trace then of the horror which I had myself felt at this curt declaration; but his face showed rather the quiet and interested composure of the chemist who sees the crystals falling into position from his oversaturated solution.

Computers of all shapes and sizes have software installed on them, and most computers have an operating system. The operating system is the core set of software programs that provide services to the other programs on the computer.

Many operating systems multitask, allowing multiple programs to be run simultaneously. Different computing devices can contain different operating systems. For instance, the most widely used operating system on PCs is Microsoft's Windows. A large number of servers on the Internet run Linux or Sun Solaris, while many others run Windows. Embedded devices typically run the VxWorks operating system, and many cellular phones use Symbian.

Regardless of the devices on which it is installed, every operating system (OS) has two common purposes:
  1. To provide a single, consistent interface that application software can use to access the device. These core services control access to the device's file system, network interface, keyboard, mouse, and video/LCD display.
  2. To provide debugging and diagnostic information about the system. For example, most operating systems can list the running or installed software. Most have logging mechanisms, so that applications can report when they have crashed, when someone fails to log in properly, and so forth.
Although it is possible to write applications that bypass the OS (using undocumented, direct-access methods), most developers don't do that. The OS provides the "official" mechanism for access, and frankly, it's much easier to just use the OS. This is why nearly all applications use the OS for these services, and why a rootkit that changes the OS will affect nearly all software.

In this section we jump right in and start writing our very first rootkit for Windows. We will introduce source code and explain how to set up your development environment. We also cover some basic information about the kernel, and how device drivers work.

Important Kernel Components
In order to understand how rootkits can be used to subvert an OS kernel, it helps to know which functions the kernel handles. Table 2-1 describes each major functional component of the kernel.
Table 2-1. Functional components of the kernel.
  • Process management Processes need CPU time. The kernel contains code to assign this CPU time. If the OS supports threads, the kernel will schedule time to each thread. Data structures in memory keep track of all the threads and processes. By modifying these data structures, an attacker can hide a process.
  • File access The file system is one of the most important features an OS provides. Device drivers may be loaded to handle different underlying file systems (such as NTFS). The kernel provides a consistent interface to these file systems. By modifying the code in this part of the kernel, an attacker can hide files and directories.
  • Security The kernel is ultimately responsible for enforcing restrictions between processes. Simple systems may not enforce any security at all. For example, many embedded devices allow any process to access the full range of memory. On UNIX and MS-Windows systems, the kernel enforces permissions and separate memory ranges for each process. Just a few changes to the code in this part of the kernel can remove all the security mechanisms.
  • Memory management Some hardware platforms, such as the Intel Pentium family, have complex memory-management schemes. A memory address can be mapped to multiple physical locations. For example, one process can read the memory at address 0x00401111 and get the value "HELLO," while another process can read that same memory at address 0x00401111 but get the value "GO AWAY." The same address points to two totally different physical memory locations, each containing different data. (We will discuss more about virtual-to-physical memory mapping). This is possible because the two processes are mapped differently. Exploiting the way this works in the kernel can be very useful for hiding data from debuggers or active forensics software.

Now that we have an idea of the functions of the kernel, we will discuss how a rootkit might be designed to modify the kernel.

Rootkit Design
An attacker typically designs a rootkit to affect a particular OS and software set. If the rootkit is designed with direct hardware access, then it will be limited to that specific hardware. Rootkits can be generic to different versions of an OS, but will still be limited to a given OS family. For example, some rootkits in the public domain affect all flavors of Windows NT, 2000, and XP. This is possible only when all the flavors of the OS have similar data structures and behaviors. It would be far less feasible to create a generic rootkit that can infect both Windows and Solaris, for example.

A rootkit may use more than one kernel module or driver program. For instance, an attacker may use one driver to handle all file-hiding operations, and another driver to hide registry keys. Distributing the code across many driver packages is sometimes a good thing because it helps keep the code manageable, as long as each driver has a specific purpose. It would be hard for an attacker to manage a monolithic "kitchen-sink" driver that provides every feature known to man.

One Rootkit, One System
One rootkit should be enough for any system. A rootkit is invasive and alters data on the system. Although attackers generally keep this invasive alteration to a minimum, installing multiple rootkits may cause alterations of alterations, leading to possible corruption. Rootkits assume, in most cases, that the system is clean. A rootkit may perform checks for anti-hacker software (such as desktop firewalls), but it usually doesn't check for another rootkit. If another rootkit were found to be already installed on the system, the attacker's best strategy might be to "fail out" (that is, stop executing due to an error).

A complex rootkit project might have many components. It helps to keep things organized in a large project. Although we won't develop any examples that are quite so complex here, the following directory structure might be used by a complex rootkit project:

/My Rootkit
  • /src/File Hider File-hiding code can be complex and should be contained in its own set of source-code files. There are multiple techniques for file hiding, some of which could require a great deal of code. For example, some file-hiding techniques require hooking a large number of function calls. Each hook requires a fair amount of source code.
  • /src/Network Ops Network operations require NDIS (Network Driver Interface Specification) and TDI (Transport Driver Interface, used by NT-series Windows to abstract level-7 APIs into a common protocol for the transport-protocol layer) code on Microsoft Windows. These drivers tend to be large, and they sometimes link to external libraries. Again, it makes sense to confine these features to their own source files.
  • /src/Registry Hider Registry-hiding operations may require different approaches than file-hiding features. There may be many hooks involved, and perhaps tables or lists of handles that need to be tracked. In practice, registry-key hiding has been problematic due to the way keys and values relate to one another. This has caused some rootkit developers to craft rather complex solutions to the problem. Again, this feature set should be confined to its own set of source files.
  • /src/Process Hider Process hiding should use Direct Kernel Object Manipulation (DKOM) techniques. These files may contain reverse-engineered data structures and other information.
  • /src/Boot Service Most rootkits will need to be restarted if the computer reboots. An attacker would include a tiny service here that is used to "kick start" the rootkit at boot time. Getting a rootkit to restart with the computer is a complex topic.
    On the one hand, a simple registry-key change can cause a file to launch on boot-up. On the other hand, such an approach is easily detected.
    Some rootkit developers have crafted complex boot capabilities that involve on-disk kernel patches and modifications to the system boot-loader program.
  • /inc Commonly included files containing typedefs, enums, and I/O Control (IOCTL) codes will go here. These files are typically shared by all other files, so deserve their own special location.
  • /bin All the compiled files will go here.
  • /lib The compiler will have its own set of libraries elsewhere, so the attacker could use this location for her own additional libraries or third-party libraries.

Introducing Code into the Kernel
The straightforward way to introduce code into the kernel is by using a loadable module (sometimes called a device driver or kernel driver). Most modern operating systems allow kernel extensions to be loaded so that manufacturers of third-party hardware, such as storage systems, video cards, motherboards, and network hardware, can add support for their products. Each operating system usually supplies documentation and support to introduce these drivers into the kernel. This is the easy route, and is the road we will take to introduce code into the kernel.

As its name suggests, a device driver is typically for devices. However, any code can be introduced via a driver. Once you have code running in the kernel, you have full access to all of the privileged memory of the kernel and system processes. With kernel-level access you can modify the code and data structures of any software on the computer.

A typical module would include an entry point and perhaps a cleanup routine. For example, a Linux-loadable module may look something like this:

int init_module(void)
{
    // initialization code goes here; return 0 on success
    return 0;
}

void cleanup_module(void)
{
    // cleanup code goes here
}

In some cases, such as with Windows device drivers, the entry point must register function callbacks. In such a case, the module would look like this:

NTSTATUS DriverEntry( IN PDRIVER_OBJECT theDriver, IN PUNICODE_STRING theRegistryPath )
{
    theDriver->DriverUnload = MyCleanupRoutine;
    return STATUS_SUCCESS;
}

VOID MyCleanupRoutine( IN PDRIVER_OBJECT theDriver )
{
    // cleanup code goes here
}

A cleanup routine is not always needed, which is why Windows device drivers make it optional. The cleanup routine is required only if you plan on unloading the driver. In many cases, a rootkit can be placed into a system and left there, without any need to unload it. However, it is helpful during development to have an unload routine, because you may want to load newer versions of the rootkit as it evolves. Most of the example rootkits provided include unload routines (a set of basic rootkits known as the "basic_class" can be found on the site).

Building the Windows Device Driver
Our first example will operate on the Windows 2000 and XP platforms and will be designed as a simple device driver. In reality, this isn't actually a rootkit yet; it's just a simple "hello world" device driver.

#include "ntddk.h"

NTSTATUS DriverEntry( IN PDRIVER_OBJECT theDriverObject, IN PUNICODE_STRING theRegistryPath )
{
    DbgPrint("Hello World!");
    return STATUS_SUCCESS;
}

You can load this code into the kernel, and the debug statement will be posted (see Logging the Debug Statements, later in this section, to learn how to capture debug messages). Our rootkit will be composed of several items, each of which we describe in the sections that follow.

The Device Driver Development Kit
To build our Windows device driver, we'll need the Driver Development Kit (DDK). DDKs are available from Microsoft for each version of Windows. (Chances are you will want the Windows 2003 DDK; you can build drivers for Windows 2000, XP, and 2003 using that version. Information on Windows DDKs is available here.)

The Build Environments
The DDK provides two different build environments:
  • the checked-build environment You use the checked-build environment when you're developing a device driver. The checked build compiles debugging checks into your driver, and the resulting driver will be much larger than the free-build version. Use the checked build for most of your development work, and switch to the free build only when you're testing your final product. While exploring the examples in this text, checked builds are fine.
  • the free-build environment You use the free-build environment for release code.

The Files
You will write your driver source code in C, giving the source file a .c extension. To start your first project, make a clean directory (for example, C:\myrootkit), and place a mydriver.c file there. Then copy into that file the "hello world" device-driver code shown earlier. You will also need a SOURCES file and a MAKEFILE file.

This file should be named SOURCES in all-capital letters, with no file extension. The SOURCES file should contain the following code:
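A minimal SOURCES file, consistent with the numbered notes that follow, might look like this (the TARGETNAME and source filename are illustrative):

```
TARGETNAME=MYDRIVER
TARGETPATH=OBJ
TARGETTYPE=DRIVER
SOURCES=mydriver.c
```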


  1. The TARGETNAME variable controls what your driver will be named. Remember that this name may be embedded in the binary itself, so using a TARGETNAME of MY_EVIL_ROOTKIT_IS_GONNA_GET_YOU is not a good idea. Even if you later rename the file, this string may still exist—and be discovered—within the binary itself. Better names for the driver are those that look like legitimate device drivers. Examples include MSDIRECTX, MSVID_H424, IDE_HD41, SOUNDMGR, and H323FON. Many device drivers are already loaded on a computer. Sometimes you can get great ideas by just looking at the existing list and coming up with some variations on their names.
  2. The TARGETPATH variable will usually be set to OBJ. This controls where the files go when they are compiled. Usually your driver files will be placed underneath the current directory in the objchk_xxx/i386 subdirectory.
  3. The TARGETTYPE variable controls the kind of file you are compiling. To create a driver, we use the type DRIVER.
  4. On the SOURCES line, a list of .c files is expected.
    If you want to use multiple lines, you need to place a backslash ("\") at the end of each line except the last. For example:
    SOURCES= myfile1.c \
             myfile2.c \
             myfile3.c
    Notice that there is no trailing backslash character on the last line.
  5. Optionally, you can add the INCLUDES variable and specify multiple directories where include files will be located. For example:
    INCLUDES= c:\my_includes \
              ..\..\inc
    Note: Create Executables with DDKs
    A little-known bit of trivia about Microsoft Driver Development Kits is that they can be used to compile regular program executables, not just driver files. To do this, set the TARGETTYPE to PROGRAM. There are other types as well, such as EXPORT_DRIVER, DRIVER_LIBRARY, and DYNLINK.

  6. If libraries need to be linked in, then you will have a TARGETLIBS variable. We use the NDIS library for some of our rootkit drivers, so the line might look like this:
    TARGETLIBS= $(BASEDIR)\lib\w2k\i386\ndis.lib
    or this:
    TARGETLIBS= $(DDK_LIB_PATH)\ndis.lib
    You may need to find the ndis.lib file on your own system and hard-code the path to it when you're building the NDIS driver. $(BASEDIR) is a variable that specifies the DDK install directory, and $(DDK_LIB_PATH) specifies the location where default libraries are installed. The rest of the path may differ depending on your system and the DDK version you're using.


Finally, create a file named MAKEFILE, using all capital letters, and with no extension. MAKEFILE should contain the following text on a line by itself:

!INCLUDE $(NTMAKEENV)\makefile.def

Running the Build Utility
Once you have the MAKEFILE, SOURCES, and .c files, all you need to do is start the checked-build environment in the DDK, which opens a command shell.
  1. The checked-build environment can be found as a link under the Windows DDK group from the Start Menu—Programs.
  2. Once you have the build environment command shell open, change the active directory to your driver directory and type the command "build." Ideally there won't be any errors, and you will now have your very first driver!
    One hint: make sure your driver directory is in a location whose full path does not contain any spaces. For example, put your driver into c:\myrootkit. You can find an example driver, complete with the MAKEFILE and SOURCES files already created for you, here.

The Unload Routine
When you created the driver, a theDriverObject argument was passed into the driver's main function. It points to a data structure that contains function pointers, one of which is the "unload routine." If we set the unload routine, the driver can be unloaded from memory; if we do not set this pointer, the driver can be loaded but never unloaded, and you will need to reboot to remove it from memory. As we continue to develop features for our driver, we will need to load and unload it many times, so we should set the unload routine to avoid rebooting every time we want to test a new version. Setting the unload routine is not difficult: we create an unload function first, then set the unload pointer:

#include "ntddk.h"

// This is our unload function
VOID OnUnload( IN PDRIVER_OBJECT DriverObject )
{
    DbgPrint("OnUnload called\n");
}

NTSTATUS DriverEntry( IN PDRIVER_OBJECT theDriverObject,
                      IN PUNICODE_STRING theRegistryPath )
{
    DbgPrint("I loaded!");

    // Initialize the pointer to the unload function
    // in the DriverObject
    theDriverObject->DriverUnload = OnUnload;

    return STATUS_SUCCESS;
}

Now we can safely load and unload the driver without rebooting.


Rootkits: Subverting the Windows Kernel
By Greg Hoglund, James Butler
Publisher: Addison Wesley Professional
Pub Date: July 22, 2005
ISBN: 0-321-29431-9
Pages: 352

G. Hoglund and G. McGraw, Exploiting Software
