
26 June 2009

The World of WORMS


Highlights


....of many computer enthusiasts, people fascinated with the world of a
machine—call them hackers if you wish—who, even though most of them
would never admit this, walked a thin line between ambition and humility,
imagination and reality, and the law and a common crime, people who would
often find themselves on different sides of the barricade because of blind luck
or sheer chance and not because of fundamental differences in how they perceived
their world. For many, this world was the network.

...The revolution is not coming,
but we are starting to comprehend that simplicity can give a serious
advantage, we are starting to learn, from some seemingly uninteresting incidents,
how complex and surprising the dynamics of a worm ecosystem are
and how they change because of a virtually irrelevant difference in a target
selection algorithm or worm size. Worm authors are beginning to notice that in a world that slowly but steadily acquires better defense systems and becomes more
coordinated in its response against new threats, their developments must be
smarter and better prepared. We have enough data and understanding to observe
the marvels of worm dynamics as they happen. For enthusiasts, the field is
becoming a fascinating subject again; for professionals, the defense against
worms is becoming more of a challenge.

With the increasing migration toward a network-centric computing model, threats to all computers grow in severity. The communications between various systems on a network or the Internet offer great potential for work and research. The emergence and acceptance of networking standards from various engineering groups have helped to create the communications infrastructure we have come to rely on for much of our daily work lives. These same infrastructure components and networking standards can be abused by attackers to create widespread
damage as well. Malicious software can capitalize on this to create large-scale problems very quickly.

Internet-based worms, such as Code Red, Sapphire, and Nimda, spread from their introduction point to the entire Internet in a matter of days or even hours. The speed with which defenses need to be established only grows as time goes on. Code Red reached its peak a day or two after its introduction, and by then many sites knew how to detect its signature and began filtering the hosts and traffic associated with the worm. Sapphire, however, hit its peak in under 5 minutes. There was little time to raise the barriers and withstand the attack. Sites typically were knocked off-line but were back on-line within a few hours, filtering the worm’s traffic. There is typically little time to implement a well-thought-out solution during a worm outbreak.
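The growth pattern described above follows the classic logistic curve of epidemic models. A minimal sketch (the rates below are illustrative assumptions, not measured values) shows how the same saturation dynamic plays out over a day for a Code Red-like worm but over minutes for a Sapphire-like one:

```python
# Simplified SI (susceptible-infected) epidemic model of worm spread.
# The infected fraction i(t) grows logistically: di/dt = beta * i * (1 - i),
# where beta is the effective scan-success rate. Parameters are illustrative.
import math

def infected_fraction(t, beta, i0=1e-5):
    """Fraction of vulnerable hosts infected at time t (logistic solution)."""
    return i0 * math.exp(beta * t) / (1 - i0 + i0 * math.exp(beta * t))

# A "slow" worm (beta ~ 0.5/hour, Code Red-like) versus a "fast" one
# (beta ~ 0.5/second, Sapphire-like): both saturate, but on wildly
# different time scales, which is why response windows keep shrinking.
slow = infected_fraction(24 * 3600, beta=0.5 / 3600)   # after one day
fast = infected_fraction(300, beta=0.5)                # after five minutes
```

With these assumed rates, the slow worm has infected most of its victims only after a day, while the fast worm is effectively saturated within its first five minutes.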

Simply using what works as an initial step suffices for many. In some cases this means coarse filters in their mail or Web servers. In others, this means a protocol and port level filter on their routers. Once this initial measure is in place, a more complete solution can be deployed, such as desktop virus protections, more selective content filtering,
and compromised host isolation.
Because worms act only to spread from system to system, they bring security concerns to everyone using the Internet. No system can hide from an aggressive worm. However, many of the characteristics of a worm can be used to defeat it, including its predictable behavior and telltale signatures. This is in contrast to individual attackers, who change their tactics every time, even if only subtly, and who have usually chosen a particular target
for some clear reason.

Just as vulnerabilities have a window of exposure between the release of information about the vulnerability and the widespread use of exploits against them, worms have an interval of time between the release of the vulnerability and the appearance of the worm. Some worms are fast to appear, such as the Slapper worm (with an interval of 11 days), while others are much slower, such as the sadmind/IIS worm (with a minimum interval of 210 days). Nearly any widespread application with a vulnerability can be capitalized on by a worm.




Assumed background
You are expected to have a good grasp of operating system concepts, including processes and privileges. Knowledge of both UNIX and Windows NT systems will go a long way toward understanding this material. An understanding of TCP/IP networking is assumed, as well as an understanding of Internet-scale architecture. Last, an understanding of security principles, including vulnerabilities and how they are exploited, is required. Only a working knowledge of these concepts is needed, not mastery.



Why worm-based intrusions?
Given the relative stealth of a good manual intrusion and the noise that most worms generate, this is a very good question to ask. Worms continue to be generated for four main reasons:

  • Ease. In this area, automation cannot be beaten. Although the overhead associated with writing worm software is somewhat significant, it continues to work while the developers are away. Because of how it propagates, its growth is exponential as well.
  • Penetration. Due to the speed and aggressiveness of most worms, infection in some of the more difficult to penetrate networks can be achieved. An example of this would be an affected laptop being brought inside a corporate network, exposing systems normally behind a firewall and protected from such threats. This usually happens through serendipity, but could, with some work, be programmed into the worm system.
  • Persistence. While it is easy to think that once the attack vectors of a worm are known and patches for the vulnerabilities are available, networks would immunize themselves against the worm, this has been proven otherwise. Independent sources have shown that aggressive worms such as Code Red and Nimda have been persistent for longer than 8 months since their introduction date, despite well-known patches being available since the rise of these worms.
  • Coverage. Because worms act in a continual and aggressive fashion, they seek out and attack the weakest hosts on a network. As they spread through nearly all networks, they find nearly all of the weakest hosts accessible and begin their lifecycle anew on these systems. This then gives worms a broad base of installation from which to act, enabling their persistence on the Internet because they will have a continued base from which to attack for many months or even years.

These are the main benefits of using a worm-based attack model, as opposed to concerted manual efforts. They will continue to be strong reasons to consider worm-based events a high threat.



The new threat model
Until recently, network security was something that the average home user did not have to understand. Hackers were not interested in cruising for hosts on the dial-up modems of most private, home-based users. The biggest concern to the home user was a virus that threatened to wipe out all of their files (which were never backed up, of course).

Now the situation has changed.

  • Broadband technologies have entered the common home, bringing the Internet at faster speeds with 24-hour connectivity.
  • Operating systems and their application suites became network-centric, taking advantage of the Internet as it grew in popularity in the late 1990s.
  • And hackers decided to go for the number of machines compromised rather than for high-profile systems, such as popular Web sites or corporate servers.

The threat of attack is no longer the worry of only government or commercial sites. Worms now heighten this threat to home-based users, bringing total indiscriminacy to the attack. Now everyone attached to the Internet has to worry about worms. The aggressiveness of the Code Red II worm is a clear sign that compromise is now everyone’s worry. Shortly after the release of Code Red, a study conducted by the networking research center CAIDA showed just how large scale a worm problem can be.

Their estimates showed that nearly 360,000 computers were compromised by the Code Red worm in one day alone, with approximately 2,000 systems added to the worm's pool every minute. Even 8 months after the Code Red worm was introduced, several thousand hosts remained active Code Red and Nimda hosts.



A new kind of analysis requirement
Prior information security analysis techniques are not effective in evaluating worms. The main issues faced in worm evaluation include the scale and propagation of the infections. These facets typically receive little attention in traditional information security plans and responses.

Worms are unlike regular Internet security threats in several ways.

  • First, they propagate automatically and quickly. By the time you have detected and started responding to the intrusion, the worm has moved on, scanning for new hosts and attacking those it finds. Depending on the speed of the worm, more than one cycle of infection may have completed by the time an intrusion is even noticed.
  • Second, the automatic propagation of worms means that once a single host on a network becomes infected, the whole network may become an unwilling participant in a large number of further attacks. These attacks may include denial-of-service (DoS) attacks or additional compromises by the worm program, or even secondary compromises caused by the back door that the worm introduces. This may make a network legally and financially liable, despite the lack of direct participation in the attack. While attackers typically use a compromised network as a stepping stone to other networks or as a DoS launchpad, worms inevitably cause the affected network to participate in the attack.
  • Third, the persistent nature of worms means that despite best efforts and nearly total protection, any weakness in a network can lead to total compromise. This is especially aggravated by "island hopping," whereby the worm favors attacks against networks local to the infected host: it enters a secured system through a weak link and then "hops" among the nodes of the internal network. This can lead to propagation of the worm behind firewalls and network address translation (NAT) devices, which has been observed in Nimda and Code Red II infections.
  • Lastly, the Internet as a whole suffers in terms of performance and reliability. The spread of worms leads to an exponential increase in traffic rates and firewall state table entries. This can choke legitimate traffic as the worm aggressively attacks the network. A single Sapphire worm host, for example, was able to consume several megabits per second of bandwidth from within a corporate network, disrupting service for everyone. These consequences of spreading worms are well beyond the planned-for scenarios of manual attackers.
They require careful consideration of network design and security implementations, along with an aggressive strategy for defense on all fronts.
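Island hopping amounts to a biased target selector. The sketch below uses the probability weights commonly reported for Code Red II (half of the scans stay in the local /8, most of the rest in the local /16), but both the weights and the helper itself should be treated as illustrative assumptions, not the worm's actual code:

```python
# Simulation of "island hopping" target selection: most probes stay near
# the infected host, so one compromised laptop inside a firewall exposes
# the whole internal network. Weights loosely follow published Code Red II
# analyses and are illustrative only.
import random

def next_target(local_ip, rng=random):
    """Pick a target IP, biased toward the local /8 and /16."""
    a, b, _, _ = (int(x) for x in local_ip.split("."))
    r = rng.random()
    if r < 0.5:                       # same /8 as the infected host
        b = rng.randrange(256)
    elif r < 0.875:                   # same /16 (keep both a and b)
        pass
    else:                             # fully random unicast-ish address
        a, b = rng.randrange(1, 224), rng.randrange(256)
    return f"{a}.{b}.{rng.randrange(256)}.{rng.randrange(256)}"

# Roughly 87% of probes from 10.1.2.3 should land inside 10.0.0.0/8.
sample = [next_target("10.1.2.3") for _ in range(10_000)]
local_share = sum(t.startswith("10.") for t in sample) / len(sample)
```

Running the simulation shows the bulk of the scanning pressure staying on the local network, which is exactly why a single infection behind a NAT device is so damaging.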



The persistent costs of worms
Often discussed but rarely investigated are the financial costs associated with the continual presence of worms on the Internet. Worms by their very nature continue to work long after their introduction. Similar to the scenario faced by populations battling diseases and plagues, worms can be almost impossible to eliminate until long after the targets are removed from the Internet. This continued activity consumes resources and causes an increase in operational costs. Some quick “back of the envelope” calculations from Tim Mullen illustrate the scale of the problem.

In their work on the persistence of Code Red and Nimda, Dug Song et al. counted approximately 5 million Nimda attempts each day. For each GET request sent by the worm that generated an answer, approximately 800 bytes were transferred across the network. By quick estimation, this corresponds to about 32 gigabits transferred across the Internet per day by Nimda alone. In their study, Song et al. found that Code Red worms sent more requests per day at their peak than Nimda worms did, due to more hosts remaining infected more than 6 months after the worms' introduction. This calculation ignores the labor costs associated with identifying and repairing affected systems, attacks that disrupt core equipment, and attempts at contacting the upstream owners of affected nodes. However, it does illustrate how much bandwidth, and thus money, is consumed every day by worms that persist for months after their initial introduction.
Clearly the automated and aggressive nature of worms removes bandwidth from the pool of available resources on the Internet.
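The back-of-the-envelope figure above is easy to reproduce:

```python
# Reproducing the Nimda bandwidth estimate quoted above:
# ~5 million answered GET requests per day at ~800 bytes each.
attempts_per_day = 5_000_000
bytes_per_request = 800

bytes_per_day = attempts_per_day * bytes_per_request   # 4 billion bytes/day
gigabits_per_day = bytes_per_day * 8 / 1e9             # 32 Gbit/day
```

Four gigabytes, or 32 gigabits, of wasted traffic per day, from a single worm, months after its patches were published.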



Intentions of worm creators
While the intentions of those who write and release worms are difficult to report without a representative sampling, much can be gathered based on the capabilities of the worms they create. These intentions are important to study because they help reveal the likely futures of worms and how much of a defense investment one should make against them.

By examining the history of worms, one can understand the basic intentions of early worm writers. There appear to be three overriding purposes to worms in their early incarnations.

  • Some worms, such as the Morris worm, seem to have an element of curiosity in them, suggesting that the authors developed and released their worms simply to “watch them go.”
  • Other worms, like the HI.COM worm, appear to have an element of mischievous fun to them because it spread a joke from “Father Christmas.”
Both of these are understandable human motivations, especially among early computer hackers.
  • The third intent of worm authors appears to be to spread a political message automatically, as displayed with the WANK worm. For its authors, worms provided an automated way to spread their interests far and wide.

The intentions of worm users in the past several years can also be gathered from the capabilities and designs found in the wild.

  • With the advent of distributed denial of service (DDoS) networks and widespread Web site defacement, worms seem to have taken the manual exploit into automated realms. The Slapper worm, for example, was used to build a large army of DDoS zombies. Code Red and the sadmind/IIS worm defaced Web sites in an automated fashion. Various e-mail viruses have sent private documents out into the public at large, affecting both private individuals and government organizations. Hackers seem to have found that worms can automate their work and create large-scale disruptions.

These intentions are also important to understand as worms become more widespread. An army of DDoS zombies can be used to wage large-scale information warfare, for example. Even if the worm is discovered and filters developed to prevent the spread of the worm on some networks, the number of hosts that the worm has affected is typically large enough to create a sizable bot army. This was seen with the Deloder worm, which created armies of tens of thousands of bots that could be used to launch DDoS attacks. This is considerably more sizable than what would have been achievable by any group of attackers acting traditionally. Even after it was
discovered, thousands of compromised hosts remained on the bot network for use. To that end, defenses should be evaluated more rigorously than if the worm were to simply spread a single message or was the product of a curious hacker.


Worms vs. Viruses


Worms and viruses have different properties and capabilities, a distinction that is important to make when developing detection and defense mechanisms. In some cases these differences are subtle, and in others they are quite dramatic. Although many of the features of each are similar, worms differ from computer viruses in several key areas:
  • Both worms and viruses spread from a computer to other computers. However, viruses typically spread by attaching themselves to files (either data files or executable applications). Their spread requires the transmission of the infected file from one system to another. Worms, in contrast, are capable of autonomous migration from system to system via the network without the assistance of external software.
  • A worm is an active and volatile automated delivery system that controls the medium (typically a network) used to reach a specific target system. Viruses, in contrast, are a static medium that does not control the distribution medium.
  • Worm nodes can sometimes communicate with other nodes or a central site. Viruses, in contrast, do not communicate with external systems.
When we speak of computer worms we are referring to both the instance of a worm on a single system, often called a node on the worm network, and the collection of infected computers that operate as a larger entity. When the distinction is important, the term node or worm network will be used.



A formal definition
From the 1991 appeal by R. T. Morris regarding the operation of the 1988 worm that bears his name, the court defined a computer worm as follows:
In the colorful argot of computers, a “worm” is a program that travels from
one computer to another but does not attach itself to the operating system of the computer it “infects.” It differs from a “virus,” which is also a migrating program, but one that attaches itself to the operating system of any computer it enters and can infect any other computer that uses files from the infected computer.
This definition, as we will see later, limits itself to agents that do not alter the operating system. Many worms deliberately hide their presence by installing software known as rootkits; some use kernel modules to accomplish this. Such an instance of a worm would not be covered by the above definition. For our purposes here, we will define a computer worm as an:
independently replicating and autonomous infection agent, capable of seeking out new host systems and infecting them via the network.
  • A worm node is the host on a network that operates the worm executables, and
  • a worm network is the connected mesh of these infected hosts.



The five components of a worm

Nazario et al. dissected worm systems into their five basic components. A worm may have any or all of these components, though a minimum set must include the attack component.
  • Reconnaissance. The worm network has to hunt out other network nodes to infect. This component of the worm is responsible for discovering hosts on the network that are capable of being compromised by the worm’s known methods.
  • Attack components. These are used to launch an attack against an identified target system. Attacks can include the traditional buffer or heap overflow, string formatting attacks, Unicode misinterpretations (in the case of IIS attacks), and misconfigurations.
  • Communication components. Nodes in the worm network can talk to each other. The communication components give the worms the interface to send messages between nodes or some other central location.
  • Command components. Once compromised, the nodes in the worm network can be issued operation commands using this component. The command element provides the interface to the worm node to issue and act on commands.
  • Intelligence components. To communicate effectively, the worm network needs to know the location of the nodes as well as characteristics about them. The intelligence portion of the worm network provides the information needed to be able to contact other worm nodes, which can be accomplished in a variety of ways.
The phenotype, or external behavior and characteristics, of a worm is typically discussed in terms of the two most visible components, the vulnerability scans and attacks the worm performs. While this is typically enough to identify the presence of a static, monolithic worm (where all components are present in a single binary), the reduction of worms to these components shows how easy it would be to build a modular worm with different instances having some of these components and not others, or upgradable components.

Not all of these components are required to have an operational worm. Again, only basic reconnaissance and attack components are needed to build an effective worm that can spread over a great distance. However, this minimal worm will be somewhat limited in that it lacks additional capabilities, such as DDoS capabilities or a system level interface to the compromised host. These five worm components and the examples next illustrate the core facets of network worms.
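The five-component decomposition can be summarized in a small structural sketch, useful for reasoning about which components a given worm carries. This is an analysis aid only; the names and fields are our own, not taken from any worm:

```python
# A minimal structural model of the five-component decomposition:
# a worm profile is a bundle of optional modules, and only
# reconnaissance + attack are mandatory for a working worm.
from dataclasses import dataclass
from typing import Optional

@dataclass
class WormProfile:
    reconnaissance: str                  # e.g., "random /16 TCP/80 sweep"
    attack: str                          # e.g., "IIS buffer overflow"
    communication: Optional[str] = None  # e.g., "UDP peer messages"
    command: Optional[str] = None        # e.g., "shell via web request"
    intelligence: Optional[str] = None   # e.g., "IRC channel registry"

    def is_minimal(self):
        """True if the worm carries only the two required components."""
        return not (self.communication or self.command or self.intelligence)

# A Sapphire-like worm: fast precisely because it is minimal.
sapphire_like = WormProfile(reconnaissance="random UDP/1434 probes",
                            attack="SQL resolution-service overflow")
```

Filling in, or omitting, the three optional fields is exactly the design space the modular-worm argument above describes.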



Finding new victims: reconnaissance
As it begins its work, the worm has to identify hosts it can use to spread. To do this, the worm has to look for an identifying attribute in the host. Just as an attacker would scan the network looking for vulnerable hosts, the worm will seek out vulnerabilities it can leverage during its spread. Reconnaissance steps can include active port scans and service sweeps of networks, each of which will tell it what hosts are listening on particular ports. These ports are tied to services, such as Web servers or administration services, and sometimes the combination can tell an attacker the type of system they are examining.

Not all of the worm's efforts are directed at the network, however. A scan of the local file system's contents can also identify new targets. Worms that affect messaging and mail clients, for example, use the contacts list to identify their next targets, or look for hosts trusted by the local system, as the Morris worm did. Additional information can be used to determine which attack vector to use against the remote system.

The worm network follows the same steps an attacker would, using automation to make the process more efficient. A worm will seek out possible targets and look for vulnerabilities to leverage. If a host's services match the vulnerabilities the worm can exploit, the worm identifies it as a system to attack.
The criteria for determining vulnerabilities are flexible and can depend on the type of worm attacking a network. Criteria can be as simple as a well-known service listening on its port, which is how the Code Red and Nimda worms operated. All Web servers were attacked, although the attack only worked against IIS servers. In this case, the worm didn't look closely at targets to determine whether they were actually vulnerable; it simply attacked them.

Alternatively, the reconnaissance performed can be based on intelligent decision making. This can include examining the trust relationships between computers, looking at the version strings of vulnerable services, and looking for more distinguishing attributes on the host. This will help a worm attack its host more efficiently.

The above methods for target identification all rely on active measures by the worm. In the past few years, passive host identification methods have become well known.
Methods for fingerprinting hosts include IP stack analysis or application observation.
By doing this, the worm can stealthily identify future targets to attack. Passive reconnaissance has the advantage of keeping the monitoring host nearly silent and thus difficult to detect. This is in contrast to worms such as Code Red and Ramen, which actively scan large chunks of the Internet looking for vulnerable hosts.
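The simplest active reconnaissance primitive is a TCP connect() probe of a single port, the same probe defenders use to audit their own hosts. A hedged sketch (the helper name is ours):

```python
# The basic active reconnaissance step: check whether one TCP port on
# one host accepts connections. Worms batch thousands of these probes;
# administrators use the identical check to audit their own networks.
import socket

def port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to (host, port) succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

For example, `port_open("127.0.0.1", 80)` tells you whether a Web server is listening locally; a service sweep is just this call in a loop over addresses and ports.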



Taking control: attack
The worm's attack component is its most visible and prevalent element. It is the means by which worms gain entry to remote systems and begin their infection cycle. These methods can include standard remote exploits, such as buffer overflows, cgi-bin errors, and the like, or they can include Trojan horse methods. An example of the latter is a worm sending an infected executable to an e-mail client as one of its attack vectors.

This component has to be further subdivided into two portions:
  • the platform on which the worm is executing and
  • the platform of the target
This attack element can be a compiled binary or an interpreted script that uses a network component on the attacking host, such as a client socket or a network-aware application, to transfer itself to its victim.

A main factor of the attack component is the nature of the target being attacked, specifically its platform and operating system. Attack components that are limited to one platform or method rely on finding hosts vulnerable to only this particular exploit. For a worm to support multiple vectors of compromise or various target platforms of a similar type, it must be large. This extra weight can slow down any one instance of a worm attack or, in a macroscopic view, more quickly clog the network.

Other attacks include session hijacking and the theft of credentials, such as passwords and cookies. Here the attack does not involve any escalation of privileges, but it does assist the worm in gaining access to additional systems. These attack elements are also the ones most often used in intrusion detection signature generation. Because the attack is executed between two hosts and over the network, it is visible to monitoring systems. This provides the most accessible wide-area monitoring of the network for the presence of an active worm. However, it requires a signature of the attack to trigger an alert.
Furthermore, passive intrusion detection systems cannot stop the worm, and the administrator is alerted to the presence of the worm only as it gains another host.
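Signature-based detection of the kind described above can be illustrated with a toy matcher. The two patterns are the widely published Code Red (`default.ida?NNN...`) and Nimda (`cmd.exe`/`root.exe`) request strings; a production IDS rule set is far richer and operates on raw packets, not clean request lines:

```python
# Toy worm-signature matcher over HTTP request lines. The patterns are
# the well-known Code Red and Nimda indicators; this is a sketch of the
# idea behind IDS signatures, not a usable rule set.
import re

SIGNATURES = {
    "code-red": re.compile(r"GET /default\.ida\?N+"),
    "nimda":    re.compile(r"GET /scripts/.*(cmd|root)\.exe"),
}

def classify(request_line):
    """Return the name of the matching worm, or None if none match."""
    for name, pattern in SIGNATURES.items():
        if pattern.search(request_line):
            return name
    return None
```

The predictability that makes worms dangerous also makes this kind of matching effective: unlike a human attacker, the worm sends the same telltale request every time.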


Passing messages: communication
Worms exist only on computer networks composed of individual hosts. For a worm to utilize its collective intelligence and strength, worm nodes need some mechanism to communicate with each other. This communication mechanism can be used to interface to the compromised system or to transfer information between nodes. For example, if worm nodes are participating in reconnaissance actions, their network vulnerability and mapping information must be passed through to other nodes using some mechanism. The communication module provides this mechanism.

These communication channels are sometimes hidden by the worm using techniques similar to ones adopted by hackers. These can include process and network socket hiding techniques (typically via kernel modules or monitoring software subversion) or the use of covert channels in existing network elements. Communication channels can be both
  • server sockets, which accept inbound connections, and
  • client sockets, which make outbound connections to another host.
Furthermore, these channels can run over a variety of transport protocols, such as ICMP or GRE packets, or over noncontinuous connections, such as e-mail.
Communication channels can be created from a variety of media. A TCP session, such as a Web connection, is one method, but others can include ICMP or UDP-based communication mechanisms, where messages are sent in a single packet. The Slapper worm used such a system to communicate between nodes, with UDP packets being sent between nodes. Electronic mail can also be a communication channel, although a slow one at times.

Several worms have used this technique, including the Ramen worm. Alternative communication channels can include nonsocket-based communication channels. Signals can be sent to the worm via a crafted packet that is not accepted by a listening socket on the host but instead observed on the wire by a “sniffer,” listening promiscuously to the traffic seen by the host. This signal delivery method can be efficient and stealthy, allowing for signals to hide in the noise of the normal network traffic.

Furthermore, covert communications between worm nodes may occur in places such as Web pages and Usenet messages. These are then viewed and acted on by an infected computer. Such a signal may include directions on where to attack next or to delete files on the infected system. By affecting the client application, such as a Web browser, the worm can piggyback its way through the Internet with the system’s user, while continuing communication with the rest of the worm network.



Taking orders: command interface
Having established a system of interconnected nodes, a worm can increase its value through a control mechanism. The command interface provides this capability to the worm nodes. This interface can be interactive, such as a user shell, or indirect, such as electronic mail or a sequence of network packets. Through the combination of the communication channel and the command interface, the worm network resembles a DDoS network. In this model, a hierarchy of nodes exists that can provide a distributed command execution pathway, effectively magnifying the actions of a host.

Traditionally, hackers will leave behind some mechanism to regain control of a system once they have compromised it. This is typically called a back door because it provides another route of access, behind the scenes, to the system. These mechanisms can include a modified login daemon configured to accept a special passphrase or variable, giving the attacker easy access again. Code Red, for example, placed the command shell in the root directory of the Web server, allowing for system-level access via Web requests.

The command interface in a worm network can include the ability to upload or download files, flood a target with network packets, or provide unrestricted shell-level access to a host. This interface in a worm network can also be used by other worm nodes in an automated fashion or manually by an attacker.
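At its core, such a command interface is a dispatch table mapping command names to actions. A purely structural sketch (names are ours, and the handlers here only record what they were asked to do):

```python
# Structural sketch of a command interface: commands arrive over the
# communication channel, get looked up in a table, and unknown commands
# are ignored. Handlers here are inert stand-ins that log requests.
class CommandInterface:
    def __init__(self):
        self.log = []
        self.handlers = {
            "upload": lambda arg: self.log.append(("upload", arg)),
            "flood":  lambda arg: self.log.append(("flood", arg)),
        }

    def dispatch(self, command, arg):
        handler = self.handlers.get(command)
        if handler is None:
            return False            # unknown commands are dropped
        handler(arg)
        return True

node = CommandInterface()
ok = node.dispatch("upload", "config.txt")   # known command: handled
bad = node.dispatch("reboot", "now")         # unknown command: ignored
```

The same dispatch structure serves both automated callers (other worm nodes) and a manual operator, which is the dual use noted above.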



Knowing the network: intelligence
As worms move along and gather hosts into the worm network, their strength grows. However, this strength can be harnessed only when the nodes in the system can be made to act in concert. Doing so requires knowledge about the other nodes, including their location and capabilities. The intelligence component of the worm network provides this facility. When the worm network gains a node, that node is added to a list of worm hosts. This information can be used later by the worm network or its controllers to utilize the worm system. Without it, finding and controlling the nodes in the system are difficult tasks.

The information repository held by the worm network can be either a tangible list, such as a list of hostnames or addresses, or a virtual list. One example of a virtual list is a private chat channel controlled by the worm's author. Hosts that are affected by the worm join the channel, which in turn is the database of worm hosts. This intelligence database can be developed using several mechanisms.

An actual list of nodes in the worm network, containing their network locations (IP addresses) and possibly other attributes, such as host type, network peers, and file listings, would reside in one or more files on worm hosts or with an attacker. This database can be created by worm nodes sending an e-mail upon infection with their node information, by sending specially crafted packets to a central location, or by other similar mechanisms. Alternatively, for a virtual database of worm nodes, subscription to some service, such as an IRC channel, creates the list. Worm nodes join the channel and register themselves as active worm hosts. All of these methods have been used by widespread worms in the past and continue to be effective techniques.

The intelligence database can be monolithic, where the whole database is located in one place, or made from a distributed collection of databases. The former type can easily be created by using a notification system made from electronic mail or a packet-based registration system. This type of database, used by worms such as the Morris worm and the Linux-infecting Ramen worm, is easily gathered but also easily compromised, as is discussed later.

The second type of database, a distributed listing, can be formed in a variety
of ways. A mesh network of worm hosts could be used by worms, with some nodes containing pieces of information about various subnetworks within the larger worm system. Worms would register with their closest database node. When seeking out a node to contact, the requesting host or person would query these local centers, with the appropriate one returning the information needed to establish an answer.
An alternative mechanism that can be used to generate such a distributed database is the parent-child relationship between worm nodes. As worm nodes move along and infect additional hosts, each parent node develops a list of its infected children. A worm node then has limited knowledge about the whole worm network, but enough information to contact one of its children.
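The parent-child scheme can be pictured as a simple tree in which each node records only the children it infected; reconstructing the whole network then requires walking the tree node by node. A hypothetical sketch (the class and addresses are illustrative only, not any real worm's code):

```python
# Sketch of a parent-child intelligence database: each node knows
# only the hosts it "infected" (its children), so no single node
# holds the complete list of the worm network.
class Node:
    def __init__(self, address):
        self.address = address
        self.children = []          # limited, local knowledge only

    def add_child(self, address):
        child = Node(address)
        self.children.append(child)
        return child

    def reachable(self):
        """Walk the tree: the full network is only recoverable by
        querying every node for its children, one hop at a time."""
        found = [self.address]
        for child in self.children:
            found.extend(child.reachable())
        return found

root = Node("10.0.0.1")
a = root.add_child("10.0.1.5")
a.add_child("10.0.2.9")
root.add_child("10.0.3.7")

print(root.reachable())                  # the full list, via recursive queries
print([c.address for c in a.children])   # a single node's local view
```

Taking out any one node here removes only its subtree from view, which is exactly the resilience (and the bookkeeping overhead) described above.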

At first glance, the resilience to compromise or attack is higher with the distributed intelligence database. Another attacker, an investigator, or unexpected outages only affect a small portion of the worm network. This resilience incurs a significant setup penalty, as well as overhead, in gathering information. At some level the connectivity of the nodes needs to be maintained, which provides a point of vulnerability for an attacker or an investigator. Furthermore, it is vulnerable to injection attacks by an investigator or an attacker who wishes to slow down or subvert the worm network.



Assembly of a complete worm node
Figure shows the pieces as they would be assembled in a full worm. For example, the reconnaissance component sends information to the attack module about where to launch an attack. It also sends this information to an intelligence database, possibly via the communications interface. The communications interface also connects to the command module, calling for an attack or for the use of the other capabilities against a target.
Note that the arrows can point through the communications and command interfaces to another worm node, such as for intelligence updates or calls for attacks against nodes.



Ramen worm analysis
Using the worm structure described above, we can map the components of the Ramen worm, which appeared in late 2000 to early 2001, and characterize this instance. Max Vision has written an excellent dissection of the Ramen worm (actually not found anymore, but see here), including the life cycle, which should also be studied. In mapping these components to a worm found in the wild, we can see how they come together to form a functional worm.

Ramen was a monolithic worm (as opposed to a modular worm), which is to say that each infected host has the same files placed on it with the same capabilities. There is some flexibility in its use of three different attack possibilities and in compiling the tools for both RedHat Linux versions 6.2 and 7.0, but the full set of files (obtained as the tar package "ramen.tgz") is carried with each instance of the worm.

The reconnaissance portion of the Ramen worm was a simple set of scanners for the vulnerabilities known to the system. Ramen combined TCP SYN scanning with banner analysis to determine the infection potential of the target host. It used a small random class B (/16) network generator to determine what networks to scan. The specific attacks known to Ramen were threefold:
  • FTPd string format exploits against wu-ftpd 2.6.0
  • RPC.statd Linux unformatted strings exploits, and
  • LPR string format attacks.
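Ramen's target selection can be illustrated with a minimal sketch of a random class B (/16) generator; this is only an illustration of the idea, not the worm's actual code:

```python
import random

def random_class_b():
    """Pick a random /16 network prefix to scan, in the style of
    Ramen's target selection (an illustrative sketch). Note that
    nothing here excludes reserved ranges such as multicast
    (224.0.0.0/4), so a careless generator of this kind can leak
    scans into special-purpose address space."""
    return "%d.%d.0.0/16" % (random.randint(1, 254), random.randint(0, 255))

print(random_class_b())
```

Each call yields a fresh /16 to sweep; the worm would then SYN-scan the hosts in that block and check service banners before attempting an exploit.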
The command interface of the Ramen worm was limited. No root shell was left listening, and no modified login daemon was left, either. The minimal command interface was reduced to the small server “asp”, which listened on port 27374/TCP and dumped the tarball “ramen.tgz” upon connection.

Communication channels were all TCP-based, including the use of the text-based Web browser "lynx," which issued a GET command to the Ramen asp server on port 27374/TCP, the mail command used to update the database, and the various attacks, which all utilized TCP-based services. Aside from DNS lookups, no UDP communication channels were used, and no other IP protocols, including ICMP, were directly used by the worm system. All communication between the child machine and the parent (the newly infected machine and the attacking machine, respectively), along with the mail sent to servers at hotmail.com and yahoo.com, used fully connected socket-based communications.

The system's intelligence database was updated using e-mail messages sent from the system, once infected, to two central e-mail addresses. The e-mail contains the phrase "Eat Your Ramen!" with the network address of the infected system as the subject. The mail spool of the two accounts was therefore the intelligence database of infected machines. The unused capabilities amount to the two exploits not used to gain entry into any given system, which provide some flexibility in targeting either RedHat 6.2 or 7.0 default installations. Ramen did not contain any additional attack capabilities, such as packet flooding techniques, nor did it contain any file manipulation methods.

In analyzing the complexity of the Ramen worm, we can see that its author cobbled together several well-known exploits and worm components, adding only a few small novel binaries. Examination of the shell scripting techniques used shows modest programming skills and a lack of efficiency in design. These findings have two ramifications.
  1. First, it shows how easy it is to put together an effective worm with minimal coding or networking skills. Simply put, this is certainly within the realm of a garden variety “script kiddy” and will be a persistent problem for the foreseeable future.
  2. Second, it leaves very little hard evidence from which to backtrack and identify the worm's author, aside from any possible ownership or usage records of the yahoo.com and hotmail.com e-mail accounts.


Worm Traffic Patterns


Because of its continual growth and typically repetitive nature, worm traffic can be readily characterized. This makes it relatively easy to build a signature for a detection engine, such as those used in a network intrusion detection system (NIDS).

Unless otherwise stated, the assumption is that the worms under study are spreading from host to host, are active on all hosts they enter, and continue to be active, because this is the pattern of most worms.



Predicted traffic patterns
Because they resemble living systems in some fashion, it is possible to model the growth and reproduction of network worms. Their growth patterns are governed by the rate of infection and the number of vulnerable hosts at any given point. Similarly,
their traffic patterns, in their scans and attacks, are determined by the number of active worms at any time and the amount of traffic per node.



Growth patterns

The worm network actively seeks new hosts to attack and add to the collection of nodes in the network. As it finds hosts and attacks them, the worm network grows exponentially. This growth pattern mimics patterns seen in naturally occurring communities, such as bacteria and weeds.

Worm infections can grow in an exponential pattern, rapidly at first and then slowing as a plateau value is reached. This is a typical kinetic model that can be described by a first-order equation:
N da = (Na)K(1 − a) dt (3.1)
This can be rewritten in the form of a differential equation:
da/dt = Ka(1 − a) (3.2)
This describes the random constant spread rate of the worm. Solving the differential equation yields
a = e^(K(t−T)) / [1 + e^(K(t−T))] (3.3)
where
  • a is the proportion of vulnerable machines that have been compromised,
  • N is the total number of vulnerable machines,
  • t is the time,
  • K is the initial compromise rate, and
  • T is the constant time at which the growth began.
The rate K must be scaled to account for machines that have already been infected, which yields the e^(K(t−T)) term in the solution.

This equation, known as the logistic growth model, is at the heart of the growth data seen for network worms. While more complicated models can be derived, most network worms will follow this trend. We can use this model to obtain a measure of the growth rate of the worm. Some worms, such as Nimda and Code Red, have a very high rate constant K, meaning that they are able to compromise many hosts per unit of time. Other worms, such as Bugbear and SQL Snake, are much slower, reflected in their smaller growth rate constants. Figure shows a simple graph of (3.3) using several values of K; the curve shown is the sigmoidal growth phase of a logistic growth curve. The initial phase of exponential growth and the long linear phase as the worm spreads can be observed. As the worm saturates its vulnerable population and the network, its growth slows and it approaches a plateau value.
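Equation (3.3) can be evaluated directly; the short sketch below computes a for several values of K and shows the faster rise of a larger rate constant (the time units and K values here are arbitrary illustrations):

```python
import math

def infected_fraction(t, K, T=0.0):
    """Logistic growth, eq. (3.3): a = e^(K(t-T)) / (1 + e^(K(t-T)))."""
    x = math.exp(K * (t - T))
    return x / (1.0 + x)

# Larger K (Code Red-, Nimda-like) saturates far sooner than a small K
# (Bugbear-, SQL Snake-like); at t = T every curve passes through 0.5.
for K in (0.5, 1.0, 2.0):
    curve = ["%.3f" % infected_fraction(t, K) for t in (-10, -5, 0, 5, 10)]
    print("K=%.1f:" % K, curve)
```

Plotting these values reproduces the sigmoid described above: exponential onset, a near-linear middle phase, and a plateau as the vulnerable population is exhausted.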
These equations are highly idealized, because the value of N is assumed to be fixed: all hosts connected at the outset of the worm attack are assumed to remain attached to the network, to remain vulnerable, and to go unpatched. Furthermore, the model assumes a similar amount of bandwidth between hosts, which also remains constant during the worm's life cycle. In the real world, not all hosts have the same amount of connectivity, and bandwidth is quickly consumed by the worm network as it grows to fill the space. Despite this, these equations provide a good representation of the observed data for a reasonably fast-moving worm.

At the peak of its rate of spread, Code Red v2 was able to compromise more than 2,000 hosts a minute. In just under 2 hours, the rate jumped more than fourfold to this maximal value, demonstrating the exponential growth of the worm. After this point, the rate of infection slowed but did not return to 0 until long after the initial introduction of the worm.




Traffic scan and attack patterns
Similar to the growth rate of the worm network, the traffic seen for the
reconnaissance and attack activities by the worm networks is also sigmoidal in nature. It is typically multiples of the number of active and infected hosts on the network, taking into account that each host will scan a large portion of the network space and repeat this scan. For hosts that repeat this scan indefinitely, this traffic grows at a rate that is much faster than the spread of the worm.
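The relationship above can be sketched numerically: if each active host scans at some fixed rate (the 200 scans/hour and population size below are arbitrary assumptions), hourly traffic is a multiple of the logistic infection curve, and cumulative scan volume quickly dwarfs the infection count itself:

```python
import math

def infected(t, K=1.0, T=0.0, N=100000):
    """Number of infected hosts under the logistic model of (3.3)."""
    x = math.exp(K * (t - T))
    return N * x / (1.0 + x)

SCANS_PER_HOST = 200.0    # assumed scans per hour per active worm host

total_scans = 0.0
for t in range(-10, 11):                          # hour-by-hour tally
    total_scans += infected(t) * SCANS_PER_HOST   # traffic tracks active hosts
print("hosts at t=10: %d" % infected(10))
print("cumulative scans: %d" % total_scans)
```

Because every infected host keeps scanning indefinitely, the cumulative traffic keeps climbing even after the infection curve itself has plateaued.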



Disruption in Internet backbone activities
Not unexpectedly, as worms spread, they increasingly saturate the networks on which they reside. Worms are typically indiscriminate in their use of networks and aggressively scan and attack hosts. This saturation can have consequences for network infrastructure and use. As described below,
  • Internet routing updates
  • network use, and
  • intranet servers
are all affected by worms during their life cycles.



Routing data

The Internet is a collection of networks, with the backbone consisting of autonomous systems. These autonomous systems route to each other, with the routing data typically exchanged via the Border Gateway Protocol (BGP; see RFC 1771 [2]). Cowie et al. have analyzed a subset of their Internet instability data to measure the impact of major worms on BGP routing stability. Their historical data allow them to observe differences in the instability of the Internet backbone routing infrastructure and discern signals above the noise.

The damage to the global BGP routing infrastructure brought about by Code Red and Nimda results from several factors. First, the volume of traffic is enough to disrupt the communication between routers, effectively choking some routers off of the Internet. When this occurs, the routes to the networks serviced by these routers are withdrawn. Route flap, the rapid announcement and withdrawal of routes, can occur when these routers recover from the load, reintroduce themselves to the outside world, and then are quickly overwhelmed again. Route flap can propagate through the Internet unless dampening measures are in effect, affecting global routing stability. Route flap became significantly more prominent due to the activity of Code Red and, even more so, Nimda, which acts far more aggressively and sends traffic at higher rates.

The second source of routing instability is also caused by the volume of traffic generated by Internet worms and directly affects routers as well. The traffic volume increases several fold over the normal traffic on a link, leading to high CPU and memory usage on the routers. This load is only aggravated when flow export (i.e., Cisco NetFlow) is used for accounting, performance measurements, and network security monitoring. Again, as the routers suffer from the load, they collapse, leaving the network and leading to the cycle of route flap.

The third source of routing instability is a result of attacks on routers themselves. Some modern routers contain HTTP-based console management ports, facilitating their administration. Because the worms are indiscriminate about the hosts they attack, attempting to attack every host to which they can connect to port 80/TCP, they will invariably attack routers listening on this port. The sustained connection from many worm sources is enough to raise the load on the routers to high levels, causing the routers to crash in many instances.

The consequences of this increased instability on the Internet were felt for several days, in proportion to the size of the instability introduced by the worm. While the Internet has been modeled and shown to be resilient to directed attacks at most of its core components, the magnitude of the load on the Internet, in addition to the directed attacks at core routers, led to instability. However, the Internet was still functional overall.



Multicast backbone
In early 2001, as the Ramen worm was spreading, multicast networks started to see storms and spikes in the number of multicast announcement messages for each source. Multicast networks use a point-to-multipoint message delivery mechanism, allowing for a single source of data to be received by many hosts across the Internet (see here). Popular uses of the multicast network include audio streams of academic presentations and data streams from sources with wide interests.

In an open letter to the Linux community, Bill Owens described the effect of worms on the multicast backbone network:
The worm has a sloppily written routine to randomly choose a /16 network
block to scan. That routine can choose network prefixes in the range
224.0.0.0 — 239.255.255.255, a set of addresses reserved for multicast traffic. Each scan packet then causes the generation of a Multicast Source Discovery Protocol (MSDP) Source Availability message. Unfortunately
the scanner being used is very efficient and can cover a /16 in about 15 minutes, generating 65000 SA messages. The SA messages are flooded throughout the multicast backbone and the resulting load on the routers has caused degradation of both multicast and unicast connectivity.
Through this leak into the multicast reserved space, the worm disabled a good portion of the multicast network backbone. The effects were dramatic: a handful of hosts could disable a significant portion of the multicast network by overwhelming connected routers with traffic. As Owens [6] noted in his memo, this affected not just multicast traffic but also unicast, or traditional, traffic as these routers collapsed under the load.
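The failure mode Owens describes is easy to check for: the multicast (class D) range 224.0.0.0–239.255.255.255 is selected entirely by the first octet, so a naive random /16 generator has a fixed chance of landing in it. A small illustrative sketch:

```python
def is_multicast_prefix(first_octet):
    """Class D (multicast) space is 224.0.0.0/4: first octet 224-239."""
    return 224 <= first_octet <= 239

# A generator drawing the first octet uniformly from 1-254 lands in
# multicast space for 16 of those 254 values -- roughly 6% of scans,
# each of which can trigger an MSDP Source Availability message.
hits = [o for o in range(1, 255) if is_multicast_prefix(o)]
print(len(hits), "of 254 possible first octets are multicast")
```

A single range check like this in the target generator would have kept Ramen's scans out of the multicast space entirely.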



Infrastructure servers
Whereas a large portion of the Internet is affected when very large worms hit, smaller worms can affect a local network in much the same way. Local networks, such as corporate or university networks, typically have resources for electronic-mail distribution, file sharing, and internal Web servers. All of these elements are affected by network worms. Worms that spread using electronic mail, such as one of the Nimda propagation vectors, can overwhelm mail servers with messages, because
each one sends an attack via a mail message. When medium or large address books are in use by even a modest number of infected machines, the mail storm can be overwhelming to servers. The rate and volume of mail delivery will choke out other, legitimate messages much as worm traffic will overtake a network on the Internet link. Furthermore, if the server performs scans of the messages as they pass through, this additional bottleneck can aggravate the stress on the mail server.

Similarly, local Web servers can feel the brunt of a worm attack. When locally biased scans are used by worms, as is found in Nimda and Code Red II, the local Web servers feel the burden quickly and can collapse under the load.




Observed traffic patterns
Having laid out a theoretical framework for the growth and spread of worm populations, we can now look at actual data on networks to see if the observations match the predictions. We will examine three sources of data: first, data from large network monitors that have measured the scans and attacks of worms on /16 networks; second, data from a black hole monitor; and third, data from a single host on a large network that logged IIS worm attempts for nearly a year.




From a large network

We begin our look at measured and observed traffic statistics for the onset and continuation of Internet worms by looking at a large network. This network, a representative class B network, kept detailed statistics on Code Red hosts as they attempted to access it. As shown in Figure right, a sigmoidal rise is seen in the per-hour sources of Code Red scans during the first 36 hours of the worm's onslaught, as predicted by the modeling above. After an initial steady-state phase, the number of scans seen per hour begins to diminish as infected machines are cleaned up and removed from the Internet.


It is even more interesting to see the data in Figure left. In this figure, the number of unique sources, based on IP address, is plotted as a function of time. The x axis of the graph runs from approximately October 2001 until May 2002, showing the activity of Code Red hosts against a /16 network. This time period represents 3 to 10 months following the introduction of the Code Red worm to the Internet. The striking features of the graph are as follows:
  • The cycles of scans and quiescence are clearly visible. There is some tailing of the data due to clock skew on various systems, but the general trend is still visible.
  • The maximum values reached are increasing with each month, by more than 2,000 unique hosts from November 2001 to May 2002.
What these data clearly show is the persistent life of the Code Red worm despite the continual release of information and patches for system fixes. Once infected, much of the Internet never rid itself of the worm. These scans and activities became the background noise of the Internet in the months following the Code Red and Nimda attacks.



From a black hole monitor
Black hole monitoring, or the use of an unallocated network to measure the random data that get put into it, has been very useful in the measurement of large cross sections of Internet trends. Black holes are typically very large networks, such as a /8, representing 1/256 of the IPv4 address space on the Internet (and even more of the actual, allocated space). As such, a very accurate picture of actual Internet traffic can be gained. Furthermore, since no actual hosts exist within the space, the monitor is unaffected by outbound data requests.
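The coverage claim is simple arithmetic: a /8 spans 2^24 addresses, or 1/256 of the 2^32 IPv4 address space, so a worm scanning uniformly at random hits the monitor with probability 1/256. A quick sketch (the 1,000,000-scan figure is an arbitrary assumption for illustration):

```python
# A /8 black hole network sees 1/256 of uniformly random IPv4 scans.
BLACKHOLE_PREFIX = 8
addresses_in_blackhole = 2 ** (32 - BLACKHOLE_PREFIX)   # 2^24 = 16,777,216
fraction = addresses_in_blackhole / 2 ** 32             # 1/256

# A worm emitting 1,000,000 random scans is expected to hit the
# monitor about 3,906 times -- a statistically useful sample.
expected_hits = 1_000_000 * fraction

print(addresses_in_blackhole, fraction, expected_hits)
```

This is why even a single /8 monitor yields a representative picture of a randomly scanning worm's activity across the whole Internet.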

Figure right shows the results of Nimda and Code Red measurements by a black hole system. Similar to what we saw earlier for the production class B
network, the cycles of scans and dormancy by Code Red are immediately visible. What is novel about this is that Nimda values are also represented in this graph, although no such trends for Nimda scans and attacks can be detected. The relative prevalence of continued Nimda and Code Red hosts can be measured. More than 6 months after each worm’s introduction, there are more Code Red hosts than Nimda hosts.






From an individual host

The individual host analysis shown in Figure left is for a globally advertised Web server running on a single-homed /32 (a globally unique host). The Web server runs Apache and resides on an educational network in the United States; the surrounding network is a /16. Using the Apache server software, worm requests were logged and analyzed over a 2-year period of Web server traffic. The Apache suite is unaffected by the methods used by the Code Red and Nimda worms to attack IIS servers; however, the attacks are captured and logged, which allows for monitoring. The network on which this host sits has been aggressively identifying and blocking Code Red and Nimda hosts at the edge or at the nearest subnet device. No filtering of worm-affected hosts was performed on this server. The data here give us a measure of the effectiveness of these measures on a production network that is taking active measures to stem the tide. The positioning of the host is important because of the "island hopping" that Code Red 2 and Nimda do.

In the analysis of the data, it is important to recall that Code Red 1, 2, and II each have one attack request, while Nimda has seven unique attack requests. Thus any one host infected by Nimda would have seven times as many attacks logged per attack instance as a Code Red host. Data were culled from Apache logs from approximately July 2001 until May 18, 2002. This represents approximately 10 months of Code Red 1 and 2 traffic, more than 9 months of Code Red II traffic, and approximately 8 months of Nimda attacks.

Figure shows the number of hosts detected for each type of attack per day. The immediate observation is that Code Red 1 and 2 took some time to "ramp up" the number of hosts used for attacks. The number of Code Red 1 and 2 hosts reaches a maximum a few days after the initial observation before dropping off dramatically. Code Red II, in contrast, shows an immediate onset with a pronounced persistence in the number of hosts seen. Nimda shows this as well, but noticeably more dramatically: the first day the worm was seen shows a marked upsurge in infected hosts, almost 60, before dropping off quickly due to filtering.

In further analyzing the data in the figure, we can measure the "noise" any one infection typically makes on the network. In the cases of Code Red 1, 2, and II, the number of hosts mirrors the number of attacks logged by the server. Nimda hosts, however, do not show this mirroring. While there is a noticeable spike in the number of Nimda hosts seen on September 18, 2001, this number quickly drops off; the number of Nimda requests seen, however, does not drop off as quickly. This suggests that the Nimda worm is noticeably more "noisy" than Code Red, beyond its sevenfold number of requests made during an attack compared to any of the Code Red variants. Last, we can observe the heavy tailing in the figure for both Nimda and Code Red 1 and 2.
Code Red II, the version that used a heavy local bias in its scanning, was quickly eradicated from the network. Despite also using island hopping, Nimda continued to thrive for more than 8 months in this setting, most likely due to its more aggressive nature when compared to Code Red. The prevalence of Code Red 1 and 2 over the course of 10 months is most likely due to their completely random jumping from network to network: a host on a distant network can still scan for possible victims despite local measures to clean up Code Red hosts.

Useful *nix Things

System


It is useful to know about users:
  • who they are, and maybe also
  • what they are doing
This is for security reasons, but also because, if you are planning any server maintenance or a reboot, you will need to know
  • who is logged in, and
  • what they are doing
to decide whether you can reboot the server at that moment or have to wait (...here for more details)



last is another command for the system admin toolbox; it displays the login history of all users or of any specific user (...here for more details)



There are some situations when you need to start a second X session on your computer (...here for more details)


Networking


Quick Howto


23 June 2009

Attacks & Defences of the Data Link Layer

Understanding How ARP Works


The Address Resolution Protocol (ARP) offers the ability to translate any IP address that is routable on your local subnet (i.e., you can send data to it) into the MAC address that the host is using to communicate on the subnet. In other words, it allows a host to ask, "What MAC address has IP w.x.y.z?"




Examining ARP Packet Structure

Using our knowledge of protocol analyzers, we can examine the structure of an ARP packet. Open Wireshark (formerly Ethereal) and begin a capture (Capture > Options > Capture Filter: arp > Start).

  1. If you're using Windows or *nix, open a command-line prompt and issue the command arp -d. (This deletes the entries in your ARP cache, forcing your system to ARP for the local gateway's MAC address when resolving the next hop needed to forward the ping to www.yahoo.com.)
  2. Now issue ping www.yahoo.com.
ARP is a two-step process:
  1. First, there is the ARP request, which is sent to a broadcast address and then
  2. the ARP reply, which is sent back to the initial requestor as a unicast.
Once you have collected an ARP packet, you'll see something similar to the ARP request shown in Figure ARP_Request_Packet. We examine packets no. 3 (ARP request) and no. 4 (ARP reply):
  1. The first 2 bytes of the ARP data within the Ethernet frame (offset 0 of the ARP payload) identify the Hardware Type (in our example the hardware is Ethernet, code 0x0001, confirmed in Wireshark's "Address Resolution Protocol" section).
  2. The next 2 bytes (offset 2) denote the Protocol Type of the address ARP is attempting to resolve (in this case the IP protocol, code 0x0800).
  3. The next 2 bytes (offsets 4 and 5) denote the lengths of a hardware address (6 bytes/octets = 48 bits for a MAC) and of a protocol address (4 bytes for an IP), respectively.
  4. Next is the 2-byte Operation field. For ARP this is 0x0001 for a lookup request and 0x0002 for a lookup reply. In this case, we are looking at an ARP request packet (see the Wireshark output).
  5. Following the operation field, we have 6 bytes (48 bits) denoting the Sender Hardware Address (the sender's MAC address; in our example, my gateway router).
  6. Following this, we have 4 bytes (32 bits) for the Sender Protocol Address (the sender's IP address).
  7. Next is the meat of our request, the Target Hardware Address: 6 bytes all set to 0, indicating that we want to know what MAC address (in our case, my PC's MAC address) belongs to the following 4 bytes. Those final 4 bytes (the Target Protocol Address) indicate the IP address that we want to resolve to a MAC. The ARP reply is shown below in Figure ARP_Reply_Packet
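The field layout walked through above can be reproduced byte for byte with Python's struct module; the MAC and IP values below are illustrative stand-ins, not the values from the capture in the figure:

```python
import struct

# Build an example ARP request. Field layout per the walk-through:
#   htype(2) ptype(2) hlen(1) plen(1) oper(2) sha(6) spa(4) tha(6) tpa(4)
def build_arp_request(sender_mac, sender_ip, target_ip):
    htype = 0x0001                      # hardware type: Ethernet
    ptype = 0x0800                      # protocol type: IPv4
    hlen, plen = 6, 4                   # MAC = 6 bytes, IP = 4 bytes
    oper = 0x0001                       # operation: 1 = request, 2 = reply
    tha = b"\x00" * 6                   # target MAC unknown -> zeroed
    return struct.pack("!HHBBH6s4s6s4s",
                       htype, ptype, hlen, plen, oper,
                       sender_mac, sender_ip, tha, target_ip)

pkt = build_arp_request(b"\x00\x22\x33\x64\x2e\x94",   # illustrative gateway MAC
                        bytes([192, 168, 1, 1]),        # illustrative sender IP
                        bytes([192, 168, 1, 10]))       # illustrative target IP
print(len(pkt))          # 28-byte ARP payload
print(pkt[:2].hex())     # 0001 -> hardware type Ethernet
print(pkt[6:8].hex())    # 0001 -> opcode: request
```

Changing `oper` to 0x0002 and filling in `tha` gives the reply form, with the sender and target fields swapped exactly as described in the next paragraph.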

Note in the reply that the target hardware and protocol addresses and the sender hardware and protocol addresses have traded positions in the analogous packet structure. Also note:

  • Opcode now is 0x0002 (in fact 2 is the code for a reply packet) and that
  • the formerly null value for the target hardware address (my PC) has been replaced with the requested MAC address ("Sender MAC address" in Wireshark's output).
NOTE---There are other useful commands for maintaining your ARP cache.

  • By using the command arp -s <ip-address> <mac-address>, you can permanently add an entry to the ARP cache.
  • Add the string pub to the end of the command and your system will act as an ARP server, answering ARP requests even for IPs that aren’t yours.
  • Finally, to view the full contents of your ARP cache, execute arp –a.
    harrykar@harrykar-desktop:~$ arp
    Address                  HWtype  HWaddress           Flags Mask            Iface
    192.168.1.1              ether   00:22:33:64:2e:94   C                     eth0
    harrykar@harrykar-desktop:~$ arp -a
    ? (192.168.1.1) at 00:22:33:64:2e:94 [ether] on eth0
    

When ARP replies are received, they are added to the local host’s ARP cache. On most systems, ARP cache entries:

  • will time-out within a relatively short period of time (2 minutes on a Windows host) if no data is received from that host.
  • Additionally, regardless of how much data is received, all entries will time-out after approximately 10 minutes on a Windows host.

Attacking the Data Link Layer


We can begin to examine the methods used by attackers to mount attacks using weaknesses in the protocols.


Passive versus Active Sniffing

The basis for a large number of network-based attacks is passive sniffing. Normally, network cards will process only packets that are sent to their MAC address or to a broadcast address; in a hubbed network, however, many more packets than just those addressed to the system reach the network card. Passive sniffing involves using a sniffer (e.g., Wireshark or tcpdump) to monitor these incoming packets.
Passive sniffing relies on a feature of network cards called promiscuous mode. When placed in promiscuous mode, a network card will pass all packets on to the operating system, rather than just those unicast or broadcast to the host. Passive sniffing worked well during the days that hubs were used. The problem is that there are few of these devices left; most modern networks use switches. That is where active sniffing comes in.

Active sniffing relies on injecting packets into the network to cause traffic that should not be sent to your system to be delivered to it. Active sniffing is required to bypass the segmentation that switches provide. Switches maintain a table of MAC addresses in a special type of memory known as Content Addressable Memory (CAM), keeping track of which hosts are connected to which switch port.


The terms active and passive sniffing have also been used to describe wireless network sniffing, with analogous meanings. Passive wireless sniffing involves sending no packets and monitoring the packets sent by others; active sniffing involves sending out multiple network probes to identify access points (APs).

In both cases (wired and wireless), passive sniffing offers considerable stealth advantages
over active sniffing.



ARP Poisoning

ARP poisoning is the primary means of performing active sniffing on switched Ethernet.

ARP poisoning involves convincing a host that the IP of another host on the network actually
belongs to you, as illustrated in Figure ARP_Poisoning

Another important factor is selecting which IP address's traffic you want to redirect to your system. By spoofing the default gateway's IP address, you make all hosts on your subnet route their transmissions through your system. This method, however, is not very stealthy: you have to poison the ARP cache of every host on your subnet. On the other end of the spectrum, you have the option to poison the ARP cache of a single host on your network. This can be useful if you are attempting a targeted attack and require as much stealth as possible.

When attempting to maintain stealth, be certain not to spoof the IP of another client machine on your subnet. Both Linux and Windows client machines will pop up messages notifying any logged-in user that another host is attempting to use their IP. To conduct the attack at the most rudimentary level, we can add a static entry to the ARP table for another host’s IP:


arp -s <ip-address> <mac-address> pub

A more advanced method is to use an application with the ability to poison the ARP cache. Cain and Abel (for Windows) will automatically detect the IP address of the gateway and begin poisoning all hosts on the subnet with a single click. Running Cain and Abel, you have the choice of either using the default configuration by clicking the radioactive symbol (third icon in the toolbar beneath the menu), or configuring it by clicking the network card icon (second icon in the toolbar beneath the menu). If you click the network card icon and go to the "ARP Poisoned Routing" tab, you will see the options shown in Figure Cain_Abel.

The options of interest when spoofing ARP entries to route traffic through our own machine are the Pre-Poisoning and Poisoning options; pre-poisoning and using ARP request packets increase your chances of successfully poisoning ARP caches.

Another effective ARP poisoner is WinArpAttacker, which is slightly better than Cain and Abel at sniffing LAN traffic. Upon running WinArpAttacker, select the Scan option to scan the local LAN, then select the Attack option and choose SniffLan.

You will see the packet counts increase as WinArpAttacker routes packets from the hosts through your machine, as seen in Figure WinArpAttacker.



ARP Flooding
ARP flooding is another ARP Cache Poisoning technique aimed at network switches. While not effective on all switches, some will drop into a hub-like mode when the CAM table is flooded. This occurs because the switch is too busy to enforce its port security features and broadcasts all network traffic to every computer in the network. This technique is particularly useful in MITM attacks, where the goal is to impersonate one of the hosts in a connection.
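The flood itself is just a stream of frames, each with a different forged source address, until the switch's CAM table overflows. A minimal sketch of the address generation (Python standard library; frame construction and actual sending are omitted):

```python
import os

def random_macs(count):
    """Generate `count` random unicast, locally administered MAC addresses,
    one per forged frame, to overflow a switch's CAM table."""
    macs = []
    for _ in range(count):
        b = bytearray(os.urandom(6))
        # Clear the multicast bit, set the locally-administered bit
        b[0] = (b[0] & 0b11111110) | 0b00000010
        macs.append(":".join(f"{x:02x}" for x in b))
    return macs

flood = random_macs(10000)   # real floods send hundreds of thousands of frames
print(flood[0], len(flood))
```

Tools such as macof and WinArpAttacker automate exactly this kind of generation at wire speed.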

In WinArpAttacker, conducting an ARP flood is as simple as clicking the checkboxes next to the host you wish to flood, clicking on the attack icon in the toolbar, and selecting the Flood option.




Routing Games
One method to ensure that all traffic on a network passes through your host is to change the routing table of the host you wish to monitor. This may be possible by sending a fake route advertisement message via the Routing Information Protocol (RIP), declaring yourself the default gateway. If successful, all traffic will be routed through your host. Make sure that you have enabled IP forwarding and that your default gateway is set to the authorized network gateway. All outbound traffic from the target host will then pass through your host and on to the real network gateway. You may not receive return traffic unless you also have the ability to modify the routing table on the default gateway to reroute all return traffic back to you.
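The fake advertisement described above is a small, well-defined packet. A sketch of its layout, assuming RIPv2 and an invented attacker address (sending it to UDP port 520 on multicast 224.0.0.9, and any RIP authentication in use, are omitted):

```python
import socket
import struct

def rip_default_route(attacker_ip):
    """Build a RIPv2 Response advertising 0.0.0.0/0 (the default route)
    with the attacker as next hop, at the best possible metric."""
    header = struct.pack("!BBH", 2, 2, 0)               # command=2 (Response), version=2
    entry = struct.pack("!HH4s4s4sI",
                        2, 0,                           # AFI=IP, route tag
                        socket.inet_aton("0.0.0.0"),    # network: the default route
                        socket.inet_aton("0.0.0.0"),    # mask /0
                        socket.inet_aton(attacker_ip),  # next hop: us
                        1)                              # metric 1 (lowest cost)
    return header + entry

pkt = rip_default_route("192.168.1.66")   # hypothetical attacker address
print(len(pkt))  # 24 bytes: 4-byte header + one 20-byte route entry
```

This only works where hosts or routers accept unauthenticated RIP updates, which is exactly why modern networks disable or authenticate RIP.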

All this talk of wired network insecurities may have you thinking that wireless offers more security. Let’s explore that issue by looking at wireless networking technologies.



Sniffing Wireless
Recently, unsecured wireless APs have become a hot issue with legislative bodies. In particular, California is considering requiring that all APs ship with a notice that communications are not secured until the router is configured with a password. Wardrivers, who drive around with network cards in promiscuous mode, will identify and occasionally explore unsecured networks within their hunting grounds. We will now examine a pair of tools for identifying and sniffing wireless networks.




Netstumbler
Netstumbler (on Windows) is one of the most basic tools for identifying wireless networks within range. Netstumbler moves through each wireless channel and identifies any networks that are advertising themselves, or any networks that a host is currently connected to. Upon loading, Netstumbler will select a suitable wireless device and begin scanning.

Once networks are identified, Netstumbler displays them in the right-hand pane. The dots next to the network names are color-coded according to signal strength, and contain a lock if the connection is encrypted. By expanding the Channels option in the left-hand pane, then a channel number, and selecting a Service Set Identifier (SSID), you can see usage statistics.

Wireless SSIDs function similarly to MAC addresses, and like MAC addresses they can be changed. Research has been done on identifying wireless cards by the slight manufacturing differences between devices, which introduce variability into the properties of the signals the cards transmit. While a successful implementation of this would make wireless spoofing far more difficult, we are still several years away from seeing any technology based on it on the market.



Kismet
If a de facto standard for wireless sniffing exists, that standard is Kismet. One of the earliest wireless sniffing packages, and certainly the most popular, Kismet offers a wide variety of features to aid wardrivers. Kismet is available for both Windows and Linux users.
The KisWin package requires setting up a Kismet drone on a Linksys WRT54G
wireless router, which is a significant time investment if you just plan to play with Kismet. A Linux live CD may be an easier way to test Kismet’s functionality.
Features
  • 802.11b, 802.11g, 802.11a, 802.11n sniffing
  • Standard PCAP file logging (Wireshark, Tcpdump, etc)
  • Client/Server modular architecture
  • Multi-card and channel hopping support
  • Runtime WEP decoding
  • Tun/Tap virtual network interface drivers for realtime export of packets
  • Hidden SSID decloaking
  • Distributed remote sniffing with Kismet drones
  • XML logging for integration with other tools
  • Linux, OSX, Windows, and BSD support (devices and drivers permitting)



Cracking WEP

One of the most infamous wireless attacks revolves around the initial protocol for secure communications across wireless media. Wired Equivalent Privacy (WEP) is a protocol based on the RC4 cipher. RC4 is a stream cipher, a form of encryption that has championed such pinnacles of security as the secret decoder ring.
Note, though, that stream ciphers are not inherently weak, and are commonly employed by the military for use in highly sensitive operations!
When vendors implemented the WEP protocol, they made a mistake. The RC4 cipher is very secure in and of itself; unfortunately, with cryptography, implementation is everything. The design of WEP permitted a piece of information called the initialization vector (IV) to be reused, and this had dire consequences for the security of the algorithm. To draw a loose analogy, imagine that WEP is the cryptoquip substitution cipher syndicated in many newspapers. Every time a wireless packet is transmitted, you get a letter or two of the puzzle. Easy enough, right? Except that the letters in the first packet are encrypted differently from those in the second: the first are from Monday’s cryptoquip, the second from Tuesday’s. But every 5,000 packets or so, you get a letter or two encrypted the same way as some of your previous letters, and with each such repetition you can build up a bit more of Monday’s puzzle until you have enough to solve it.
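The IV-reuse flaw is easy to demonstrate with a toy RC4 implementation (Python standard library; the key, IV, and plaintexts below are invented). Encrypting two packets under the same IV and key produces ciphertexts whose XOR equals the XOR of the plaintexts — the secret key drops out of the equation entirely:

```python
def rc4(key, data):
    """Textbook RC4: key-scheduling (KSA), then keystream generation (PRGA)
    XORed against the data. Encryption and decryption are the same operation."""
    S = list(range(256))
    j = 0
    for i in range(256):                      # KSA
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for byte in data:                         # PRGA
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

iv, secret = b"\x01\x02\x03", b"WEPKEY"       # hypothetical 24-bit IV + shared key
per_packet_key = iv + secret                  # WEP simply concatenates IV and key

p1 = b"GET /login HTTP/1.0"
p2 = b"password=hunter2\r\n\r\n"
c1 = rc4(per_packet_key, p1)
c2 = rc4(per_packet_key, p2)

# Same IV => same keystream: XOR of the two ciphertexts leaks the XOR
# of the two plaintexts, with no knowledge of the key required.
leak = bytes(a ^ b for a, b in zip(c1, c2))
expected = bytes(a ^ b for a, b in zip(p1, p2))
print(leak == expected)  # True
```

With only 2^24 possible IVs, a busy network is guaranteed to repeat them, which is what tools like Aircrack exploit statistically.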




Wireless Vulnerabilities
Wireless vulnerabilities are also a hot research topic at the moment, particularly with the expansion of wireless hotspots into urban areas. Wireless vulnerabilities can be categorized into roughly four groups:

  • passive attacks
  • jamming attacks
  • active attacks
  • MITM attacks
We have already examined passive attacks as part of network sniffing. We will now examine each of the other three attacks in turn.




Conducting Active Wireless Attacks
Active wireless attacks encompass spoofing and denial of service (DoS) attacks. Of the two, spoofing attacks are by far the more common. Many wireless APs maintain filtered lists of MAC addresses permitted to connect to them. Through the use of tools like Netstumbler, however, one can easily identify the MAC address used by a valid workstation and modify one’s own MAC to match it through the Advanced tab of the network card’s properties (on Windows), as seen in Figure change_MAC.

DoS attacks against wireless APs hold only nuisance value. By sending multiple control packets to a wireless network, you can degrade performance, but you also have to stay in range of the AP to conduct the DoS, greatly increasing the chances of being discovered.



Jamming Attacks
Similar to DoS attacks, jamming attacks rely on using radio frequencies to interfere
with wireless transmissions. Much like military signal jamming, a device can be used to “spam” the appropriate radio frequencies with a signal much stronger than any of the wireless clients. This will effectively perform a DoS attack on the wireless network.




MITM Attacks
MITM attacks are the most interesting way of attacking a wireless network, and they are especially prevalent with the expansion of wireless hotspots. By setting your wireless card up in a configuration identical to an existing hotspot’s (including a spoofed SSID), you make it impossible for a client to distinguish the legitimate AP from your spoofed AP without running additional authentication protocols on top of the wireless media.




Defending the Data Link Layer


The Data Link layer offers a number of options for identifying and detecting various types of attacks against the shared media.

Invariably, attackers have the advantage. Through the use of the following techniques, exploits at the Data Link layer can be significantly discouraged, possibly motivating attackers to move on and select an easier target.




Securing Your Network from Sniffers
You might be considering unplugging the network completely so that sniffers like Wireshark, or other more nefarious applications, cannot be used on your network. Hold on to those wire cutters; there are other, more function-friendly ways to help secure your network from the determined eavesdropper.




Using Encryption
Fortunately for the state of network security, encryption is the one silver bullet that will render a packet sniffer useless. The use of encryption, assuming its mechanism is valid, will thwart any attacker attempting to passively monitor your network. Many existing network protocols now have counterparts that rely on strong encryption, and all-encompassing mechanisms such as IPsec and OpenVPN provide this for all protocols. Unfortunately, IPsec is not widely used on the Internet outside of large enterprises.



Secure Shell
Secure Shell (SSH) is a cryptographically secure replacement for the standard UNIX Telnet, Remote Login (rlogin), Remote Shell (RSH), and Remote Copy Protocol (RCP) commands.
It consists of both a client and a server that use public key cryptography to provide session encryption.
It also provides the ability to forward arbitrary TCP ports over an encrypted connection, which comes in handy for forwarding X11 and other connections. SSH has received wide acceptance as the secure mechanism for interactive access to a remote system. SSH was conceived and developed by Finnish developer Tatu Ylönen. The original version of SSH turned into a commercial venture, and although the original version is still freely available, the license has become more restrictive. A public
specification has been created, resulting in the development of a number of SSH-compliant client and server implementations that do not carry these restrictions (most significantly, those that restrict commercial use).

  • A free version of SSH-compatible software, OpenSSH, is developed by the OpenBSD operating system project. The commercialized SSH can be purchased from SSH Communications Security, who have made the commercial version free to recognized universities.
  • Mac OS X already ships with OpenSSH.
  • For Windows (and now for Linux too), a free alternative to the commercial SSH software is PuTTY. Originally developed for clear-text protocols such as Telnet, PuTTY is very popular among system administrators.




Secure Sockets Layer

Secure Sockets Layer (SSL) provides authentication and encryption services, or can be used as a VPN.

From a sniffing perspective, SSL can be vulnerable to a man-in-the-middle attack. An attacker can set up a transparent proxy between you and the Web server. This transparent proxy can be configured to decrypt the SSL connection, sniff it, and then re-encrypt it. When this happens, the user will be prompted with a dialog box indicating that the SSL certificate was not issued by a trusted authority. The problem is, most users ignore the warnings and proceed anyway.




Pretty Good Privacy (PGP) and Secure/Multipurpose Internet Mail Extensions (S/MIME)
Pretty Good Privacy (PGP) and Secure/Multipurpose Internet Mail Extensions (S/MIME) are standards for encrypting e-mail. If used correctly, these will prevent e-mail sniffers like dsniff and Carnivore from being able to interpret intercepted e-mail. The sender and receiver must both use the software in order to encrypt and decrypt the communication.

In the United States, the FBI designed a Trojan horse called Magic Lantern that logs keystrokes, in the hope of capturing a user’s passphrase. Once the FBI obtains a passphrase, it can decrypt intercepted e-mail messages.

In the United Kingdom, users are required by law to give their encryption keys to law enforcement when requested.



Switching
Network switches make it more difficult for an attacker to monitor your network. Technologies like Dynamic ARP Inspection (DAI) can be used to inspect ARP packets in a network and ensure they are valid. DAI allows a network administrator to intercept, log, and discard ARP packets with invalid IP-to-MAC address bindings. This can significantly reduce an attacker’s ability to launch a successful Data Link layer attack.

Rate limiting of ARP packets is another technique that can be used to prevent ARP attacks. If a high number of ARP packets are transmitted quickly, or illegal ARP pairings are noted, the port is placed in the locked state and remains so until an administrator intervenes.
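The locking behavior described here can be modeled in a few lines. This is a toy simulation with invented port names and thresholds, not any vendor's actual implementation:

```python
from collections import defaultdict, deque

class ArpRateGuard:
    """Toy model of per-port ARP rate limiting: if a port sources more than
    `limit` ARP packets within `window` seconds, lock it until an admin resets it."""
    def __init__(self, limit=15, window=1.0):
        self.limit, self.window = limit, window
        self.seen = defaultdict(deque)        # port -> timestamps of recent ARPs
        self.locked = set()

    def arp_packet(self, port, now):
        if port in self.locked:
            return "dropped"
        q = self.seen[port]
        q.append(now)
        while q and now - q[0] > self.window: # slide the time window forward
            q.popleft()
        if len(q) > self.limit:
            self.locked.add(port)             # err-disable the port
            return "locked"
        return "forwarded"

    def admin_reset(self, port):
        self.locked.discard(port)

# A burst of 10 ARP packets in one second trips a limit of 5 per second
guard = ArpRateGuard(limit=5, window=1.0)
results = [guard.arp_packet("Fa0/1", t * 0.1) for t in range(10)]
print(results)
```

Real switches implement this in hardware and typically combine it with DAI's binding checks rather than counting packets alone.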



Employing Detection Techniques

Are there other ways to detect malicious Data Link layer activity?
Yes; one method is to look for NICs that are running in promiscuous mode.



Local Detection
Many operating systems (OSes) provide a mechanism to determine whether a network interface is running in promiscuous mode. This is usually represented as a status flag associated with each network interface and maintained in the kernel, and it can be obtained using the ifconfig command on UNIX-based systems. The following examples show an interface on the Linux operating system

  • when it isn’t in promiscuous mode:

harrykar@harrykar-desktop:~$ ifconfig -v eth0
eth0      Link encap:Ethernet  HWaddr 00:1e:2a:bd:1e:9a
inet addr:192.168.1.2  Bcast:192.168.1.255  Mask:255.255.255.0
inet6 addr: fe80::21e:2aff:febd:1e9a/64 Scope:Link
UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
RX packets:936138 errors:0 dropped:0 overruns:0 frame:0
TX packets:793409 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1188782130 (1.1 GB)  TX bytes:77931947 (77.9 MB)
Interrupt:10 Base address:0xa000

Note that the attributes of this interface mention nothing about promiscuous mode.

  • When the interface is placed into promiscuous mode, as shown next, the PROMISC keyword appears in the attributes section:

harrykar@harrykar-desktop:~$ ifconfig -v eth0
eth0      Link encap:Ethernet  HWaddr 00:1e:2a:bd:1e:9a
inet addr:192.168.1.2  Bcast:192.168.1.255  Mask:255.255.255.0
inet6 addr: fe80::21e:2aff:febd:1e9a/64 Scope:Link
UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
RX packets:936138 errors:0 dropped:0 overruns:0 frame:0
TX packets:793409 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1188782130 (1.1 GB)  TX bytes:77931947 (77.9 MB)
Interrupt:10 Base address:0xa000

It is important to note that if an attacker has compromised the security of the host on which you run this command, he or she can easily affect this output. An important part of an attacker’s toolkit is a replacement ifconfig command that does not report interfaces in promiscuous mode.
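The check is easy to script. A minimal sketch that parses the ifconfig output shown above (with the same caveat: on a compromised host, the output itself may lie, so run the check from trusted media or compare against out-of-band data):

```python
import re

def promiscuous_interfaces(ifconfig_output):
    """Return the names of interfaces whose flags line carries PROMISC."""
    promisc, current = [], None
    for line in ifconfig_output.splitlines():
        m = re.match(r"^(\S+)\s+Link encap:", line)   # interface header line
        if m:
            current = m.group(1)
        if current and "PROMISC" in line.split():     # flags line
            promisc.append(current)
    return promisc

# Sample outputs in the same format as above
flags_clean = ("eth0      Link encap:Ethernet  HWaddr 00:1e:2a:bd:1e:9a\n"
               "UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1\n")
flags_promisc = flags_clean.replace("RUNNING", "RUNNING PROMISC")

print(promiscuous_interfaces(flags_clean), promiscuous_interfaces(flags_promisc))
# Feed it live output with e.g.:
#   subprocess.run(["ifconfig", "-v"], capture_output=True, text=True).stdout
```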





Network Detection

There are a number of techniques, varying in their degree of accuracy, to detect whether a host is monitoring the network for all traffic. There is no guaranteed method to detect the presence of a network sniffer.




DNS Lookups

Most programs that are written to monitor the network perform reverse DNS lookups when they produce output consisting of the source and destination hosts involved in a network connection. In the process of performing this lookup, additional network traffic is generated: namely, the DNS query to look up the network address.

  • It is possible to monitor the network for hosts that are performing a large number of address lookups alone; however, this may be coincidental, and not lead to a sniffing host.
  • An easier way, which can approach 100 percent accuracy, is to generate a false network connection from an address that has no business being on the local network. You would then monitor the network for DNS queries that attempt to resolve the faked address, giving away the sniffing host.
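The decoy technique reduces to watching for one specific PTR query. A sketch with an invented decoy address and invented observed traffic:

```python
import ipaddress

DECOY_IP = "10.99.99.99"   # hypothetical address with no business on this LAN

def decoy_ptr_name(ip):
    """The PTR record name a resolver asks for when reverse-resolving `ip`."""
    return ipaddress.ip_address(ip).reverse_pointer   # e.g. 99.99.99.10.in-addr.arpa

def sniffing_suspects(dns_queries):
    """dns_queries: iterable of (source_host, queried_name) pairs seen on the wire.
    Any host reverse-resolving the decoy address must have sniffed its traffic."""
    target = decoy_ptr_name(DECOY_IP)
    return sorted({src for src, name in dns_queries if name == target})

# Invented observations: one host gives itself away
observed = [
    ("192.168.1.20", "www.example.com"),
    ("192.168.1.31", "99.99.99.10.in-addr.arpa"),
]
print(sniffing_suspects(observed))  # ['192.168.1.31']
```

In practice you would generate fake traffic from the decoy address while capturing DNS queries (UDP port 53) and feeding the (source, name) pairs into a check like this.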


Latency

A second technique that can be used to detect a host that is monitoring the network is to detect latency variations in the host’s response to network traffic (i.e., ping). Although this technique can be prone to a number of error conditions (e.g., the host’s latency being affected by normal operation), it can assist in determining whether a host is monitoring the network. The method that can be used is to probe the host initially, and then sample the response times.

Next, a large amount of network traffic is generated, specifically crafted to interest a host that is monitoring the network for authentication information. Finally, the latency of the host is sampled again to determine whether it has changed significantly.
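A crude comparison of the two latency samples is all the technique needs. The threshold and ping values here are invented for illustration:

```python
from statistics import mean

def latency_increased(baseline_ms, under_load_ms, threshold=1.5):
    """Flag a host whose mean response time grows by more than `threshold`x
    while the wire is flooded with tempting (e.g. fake-login) traffic.
    Crude by design: ordinary load swings can trip it too."""
    return mean(under_load_ms) > threshold * mean(baseline_ms)

# Hypothetical ping samples, in milliseconds
quiet = [0.4, 0.5, 0.4, 0.6]   # before the flood
storm = [2.1, 1.8, 2.5, 2.2]   # while flooding with "interesting" traffic
print(latency_increased(quiet, storm))  # True
```

A host that is not sniffing drops the flood in hardware and its latency barely moves; a sniffing host must process every packet in software, which is what the ratio is trying to expose.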



Driver Bugs
Sometimes an operating system driver bug can help determine whether a host is running in promiscuous mode. In one case, CORE SDI, an Argentine security research company, discovered a bug in a common Linux Ethernet driver: when the host was running in promiscuous mode, the operating system failed to perform Ethernet address checks to ensure that the packet was targeted toward one of its interfaces. Instead, this validation was performed at the IP level, and the packet was accepted as if it were destined for one of the host’s interfaces. Normally, packets that did not correspond to the host’s Ethernet address would be dropped at the hardware level; in promiscuous mode, however, this doesn’t happen. You could therefore determine whether a host was in promiscuous mode by sending an Internet Control Message Protocol (ICMP) ping packet with a valid IP address of the host and an invalid Ethernet address: if the host responded to the ping request, it was running in promiscuous mode.
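The probe is an ordinary ICMP echo wrapped in an Ethernet frame whose destination MAC is deliberately wrong. A sketch of its construction (Python standard library; the MACs and IPs are made up, and actually sending it requires a raw socket):

```python
import struct

def checksum(data):
    """RFC 1071 Internet checksum over `data`."""
    if len(data) % 2:
        data += b"\x00"
    s = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while s >> 16:
        s = (s & 0xFFFF) + (s >> 16)
    return ~s & 0xFFFF

def promisc_probe(src_mac, src_ip, dst_ip):
    """Ethernet frame addressed to a MAC nobody owns, carrying a valid
    ICMP echo for dst_ip. Only a NIC in promiscuous mode passes it up."""
    bogus_dst = bytes.fromhex("0200deadbeef")        # deliberately unowned MAC
    eth = bogus_dst + src_mac + b"\x08\x00"          # EtherType IPv4
    icmp = struct.pack("!BBHHH", 8, 0, 0, 0x1234, 1) # echo request, id, seq
    icmp = icmp[:2] + struct.pack("!H", checksum(icmp)) + icmp[4:]
    ip = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 20 + len(icmp), 0, 0,
                     64, 1, 0, src_ip, dst_ip)       # TTL 64, protocol 1 = ICMP
    ip = ip[:10] + struct.pack("!H", checksum(ip)) + ip[12:]
    return eth + ip + icmp

frame = promisc_probe(bytes.fromhex("aabbccddeeff"),
                      bytes([192, 168, 1, 66]), bytes([192, 168, 1, 10]))
print(len(frame))  # 42: 14 Ethernet + 20 IP + 8 ICMP
```

If an echo reply comes back, the target's driver accepted a frame its hardware filter should have discarded.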




Network Monitor
Network Monitor (NetMon), available on Windows NT-based systems, has the capability to detect who else is actively running NetMon on your network. It also maintains a history of who has NetMon installed on their system. It only detects other copies of Network Monitor, so if the attacker is using another sniffer, you must detect it using one of the previously discussed methods. Most network-based IDSes will also detect these instances of NetMon.




Using Honeytokens
Another method of detecting unauthorized use of promiscuous network cards is to effectively bait anyone that would be watching for confidential information. For example, a cleartext Telnet password could be used intermittently to log in to a (fake) Telnet service on sensitive hosts. Any off-schedule accesses to this server would not be legitimate, and would indicate that someone is monitoring traffic.

Taking the concept a step further, one could configure an IDS such as Snort to alert on any network traffic utilizing the honeytoken. Provided the honeytoken is sufficiently unique, false-positives will be minimal. One downside to honeytokens is that they do not provide any indication of where the promiscuous device is; they only tell you that there is one. Additionally, there is no guarantee that promiscuous mode was employed. An attacker may have simply compromised one of the machines involved in the transmission of the honeytoken.
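As a sketch, a Snort rule for such a honeytoken might look roughly like this (the token string and SID are invented placeholders; a real deployment would scope the rule to the relevant service and port):

```
alert tcp any any -> any any (msg:"Honeytoken credential observed on the wire"; content:"hunter2-honeytoken"; nocase; sid:1000001; rev:1;)
```

Because the token is never used legitimately outside the scheduled fake logins, any alert outside that schedule is a strong signal that the traffic path is being watched.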



Data Link Layer Security Project


This project shows how you can use this knowledge to perform security testing on your own network.



Using the Auditor Security Collection to Crack WEP
The Auditor Security Collection is a fully functional, bootable CD-based operating system that provides a suite of wireless network discovery and encryption cracking tools. To complete the security projects discussed in this chapter, you will need to download a copy of Auditor and burn it to a CD. The bootable toolkit is available here.
In order to attack your target network, you must first locate it. Auditor provides tools for Wireless Local Area Network (WLAN) discovery.
After locating the target network, you can use either Kismet or Wireshark to determine the type of encryption that is being used by your target network.
Once you have determined the type of encryption that is in place, several different tools provide the ability to crack different encryption mechanisms:

  • Void11 is used to de-authenticate clients from the target network
  • The Aircrack suite (i.e., Airodump, Aireplay, and Aircrack) allows you to capture traffic, reinject traffic, and crack WEP keys
  • CoWPAtty performs offline dictionary attacks against WiFi Protected Access-Pre-SharedKey (WPA-PSK) networks.


Cracking WEP with the Aircrack Suite
The Aircrack Suite of tools provides all of the functionality necessary to successfully crack WEP, and consists of three tools:
  • Airodump Used to capture packets
  • Aireplay Used to perform injection attacks
  • Aircrack Used to actually crack the WEP key
The Aircrack Suite can be started from the command line or by using the Auditor menu. To use the menu, right-click on the desktop, navigate to Auditor | Wireless | WEP cracker | Aircrack suite, and select the tool you want to use. The first thing you need to do is capture and reinject an ARP packet with Aireplay. The following commands configure the card correctly to capture an ARP packet:
NOTE: These commands are for a Prism2-based WLAN card. If you aren’t using a Prism2-based card, you will need to make sure that your card can be used with the wlan-ng drivers and determine the correct identifier for your card (eth0, eth1, and so forth).

switch-to-wlanng
cardctl eject
cardctl insert
monitor.wlan wlan0 CHANNEL_NUMBER
cd /ramdisk
aireplay -i wlan0 -b MAC_ADDRESS_OF_AP -m 68 -n 68 -d ff:ff:ff:ff:ff:ff

  • First, tell Auditor to use the wlan-ng driver. The switch-to-wlanng command is an Auditor-specific command to accomplish this.
  • Then the card must be “ejected” and “inserted” in order for the new driver to load. The cardctl command, coupled with the eject and insert switches, accomplishes this.
  • Next, the monitor.wlan command puts the wireless card (wlan0) into Radio Frequency Monitoring (rfmon), listening on the specific channel indicated by CHANNEL_NUMBER.
  • Finally, start Aireplay. Once Aireplay has collected what it thinks is an ARP packet, you are given information and asked to decide if this is an acceptable packet for injection. In order to use the packet, certain criteria must be met:
■ FromDS must be 0
■ ToDS must be 1
■ The BSSID must be the MAC address of the target AP
■ The source MAC must be the MAC address of the target computer
■ The destination MAC must be FF:FF:FF:FF:FF:FF

You are then prompted whether to use this packet. If it does not meet these criteria, type n; if it does, type y and the injection attack will begin. Aircrack, the program that performs the actual WEP cracking, takes input in pcap format. Airodump is an excellent choice, because it is included in the Aircrack Suite; however, any packet analyzer capable of writing in pcap format (e.g., Wireshark, Kismet, and so forth) will work. You must configure your card before using Airodump:

switch-to-wlanng
cardctl eject
cardctl insert
monitor.wlan wlan0 CHANNEL_NUMBER
cd /ramdisk
airodump wlan0 FILE_TO_WRITE_DUMP_TO

Airodump’s display shows the number of packets and Initialization Vectors (IVs) that have been collected.

Once some IVs have been collected, Aircrack can be run while Airodump is capturing. To use Aircrack, issue the following commands:

aircrack -f FUDGE_FACTOR -m TARGET_MAC -n WEP_STRENGTH -q 3 CAPTURE_FILE

Aircrack gathers the unique IVs from the capture file and attempts to crack the key. The FUDGE_FACTOR can be changed to adjust the trade-off between speed and thoroughness. The default FUDGE_FACTOR is 2, but it can be adjusted between 1 and 4. A higher FUDGE_FACTOR makes the program try more “guesses,” so the crack takes longer but is more likely to succeed; conversely, a lower FUDGE_FACTOR runs faster but may miss the key. The WEP strength should be set to 64, 128, 256, or 512 bits, depending on the WEP strength used by the target AP. A good rule of thumb is that it takes around 500,000 unique IVs to crack a WEP key; this number varies, and can range from as low as 100,000 to more than 500,000.




Cracking WPA with CoWPAtty
CoWPAtty, developed by Joshua Wright, is a tool that automates the offline dictionary attacks to which WPA-PSK networks are vulnerable. CoWPAtty is included on the Auditor CD and is easy to use. Just as with WEP cracking, traffic must first be captured; unlike WEP, however, you don’t need to capture a large amount of it. You only need to capture one complete four-way Extensible Authentication Protocol Over Local Area Network (EAPOL) handshake and have a dictionary file that includes the WPA-PSK passphrase. Once you have captured the four-way EAPOL handshake, right-click on the desktop and select Auditor | Wireless | WPA cracker | Cowpatty (WPA PSK bruteforcer). This opens a terminal window with the CoWPAtty options.

Using CoWPAtty is fairly straightforward: you must provide the path to your wordlist, the .dump file where you captured the EAPOL handshake, and the SSID of the target network.

cowpatty -f WORDLIST -r DUMPFILE -s SSID

