
29 October 2009

A comprehensive (& philosophical) view of security in the past decade (1997)--a not-only-American story


The security of the Internet is not a static thing; rather, it is a complex, ever-changing subject. New holes are discovered at a rate of almost one per day.
(Unfortunately, such holes often take much longer to fix.) A good system administrator knows that there is no substitute for experience.



Intro


A text such as this one harbors certain dangers, including:
  • The possibility that readers will use the information maliciously
  • The possibility of angering the often-secretive Internet-security community
  • The possibility of angering vendors that have yet to close security holes within their software
I mention these dangers because there is a real need for this information. To demonstrate why, I'd like to briefly examine the two most common reasons for security breaches:
  • Misconfiguration of the victim host
  • System flaws or deficiency of vendor response

Misconfiguration of the Victim Host
The primary reason for security breaches is misconfiguration of the victim host. Plainly stated, most operating systems ship in an insecure state. There are two manifestations of this phenomenon, which I classify as active and passive states of insecurity in shipped software.


The Active State
The active state of insecurity in shipped software primarily involves network utilities. Certain network utilities, when enabled, create serious security risks. Many software products ship with these options enabled. The resulting risks remain until the system administrator deactivates or properly configures the utility in question.
A good example would be network printing options (the capability of printing over an Ethernet or the Internet). These options might be enabled in a fresh install, leaving the system insecure. It is up to the system administrator (or user) to disable these utilities. However, to disable them, the administrator (or user) must first know of their existence. You might wonder how a user could be unaware of such utilities. The answer is simple: Think of your favorite word processor. Just how much do you know about it? If you routinely write macros in a word-processing environment, you are an advanced user, one member of a limited class. In contrast, the majority of people use only the basic functions of word processors: text, tables, spell check, and so forth. There is certainly nothing wrong with this approach. Nevertheless, most word processors have more advanced features, which are often missed by casual users.
Similarly, users might know little about the inner workings of their favorite operating system. For most, the cost of acquiring such knowledge far exceeds the value. Oh, they pick up tidbits over the years. Perhaps they read computer periodicals that feature occasional tips and tricks. Or perhaps they learn because they are required to, at a job or other official position where extensive training is offered. No matter how they acquire the knowledge, nearly everyone knows something cool about their operating system. (Example: the Microsoft programming-team Easter egg in Windows 95, a program hidden in the heart of the operating system. When you enter the correct keystrokes and undertake the correct actions, it displays the name of each programmer responsible for Windows 95.)

Unfortunately, keeping up with the times is difficult. The software industry is a dynamic environment, and users are generally two years behind development. This lag in the assimilation of new technology only contributes to the security problem. When an operating-system development team materially alters its product, a large class of users is suddenly left knowing less. Microsoft Windows 95 is a good example of this phenomenon. New support has been added for many different protocols, protocols with which the average Windows user might not be familiar. So, it is possible (and probable) that users are unaware of obscure network utilities at work within their operating systems.

This is especially so with UNIX-based operating systems, but for a slightly different reason. UNIX is a large and inherently complex system. Comparing it to other operating systems can be instructive. DOS contains perhaps 30 commonly used commands. In contrast, a stock distribution of UNIX (without considering windowed systems) supports several hundred commands. Further, each command has one or more command-line options, increasing the complexity of each utility or program.
In any case, in the active state of insecurity in shipped software, utilities are enabled and this fact is unknown to the user. These utilities, while enabled, can foster security holes of varying magnitude. When a machine configured in this manner is connected to the Net, it is a hack waiting to happen.
Active state problems are easily remedied. The solution is to turn off (or properly
configure) the offending utility or service. Typical examples of active state problems include
  • Network printing utilities
  • File-sharing utilities
  • Default passwords
  • Sample networking programs
Of the examples listed, default passwords have been, and remain, the most common problem. Most multiuser operating systems on the market have at least one default password (or an account that requires no password at all).
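To make the default-password problem concrete, here is a minimal sketch (in Python; the account names and file contents are entirely hypothetical) of the kind of audit an administrator might run against a passwd-style file to flag accounts with no password at all:

```python
# Flag accounts whose password field is empty in passwd-style records.
# Format per line: name:password:uid:gid:gecos:home:shell
SAMPLE_PASSWD = """\
root:x:0:0:root:/root:/bin/sh
guest::501:100:Guest Account:/home/guest:/bin/sh
lp::7:7:Line Printer:/var/spool/lpd:/bin/sh
alice:x:502:100:Alice:/home/alice:/bin/sh
"""

def accounts_without_passwords(passwd_text):
    """Return the names of accounts whose password field is empty."""
    weak = []
    for line in passwd_text.splitlines():
        fields = line.split(":")
        if len(fields) >= 2 and fields[1] == "":
            weak.append(fields[0])
    return weak

if __name__ == "__main__":
    print(accounts_without_passwords(SAMPLE_PASSWD))  # ['guest', 'lp']
```

Running a check like this against a fresh installation is one of the simplest active-state remedies available: it costs minutes and catches exactly the class of hole that ships enabled by default.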


The Passive State
The passive state involves operating systems with built-in security utilities. These utilities can be quite effective when enabled, but remain worthless until the system administrator activates them. In the passive state, these utilities are never activated, usually because the user is unaware that they exist. Again, the source of the problem is the same: The user or system administrator lacks adequate knowledge of the system.

To understand the passive state, consider:
  1. Logging utilities. Many networked operating systems provide good logging utilities, and these comprise the cornerstone of any investigation. Often, however, they are not active in a fresh installation. (Vendors might leave this choice to the system administrator for a variety of reasons. For example, certain logging utilities consume space on local drives by generating large text or database files; machines with limited storage are poor candidates for heavy logging.) Because vendors cannot guess the hardware configuration of the consumer's machine, logging choices are almost always left to the end user.
  2. Situations where user knowledge (or lack thereof) is not the problem. For instance, certain security utilities are simply impractical. Consider security programs that administer file-access privileges (such as those that restrict user access depending on security level, time of day, and so forth). Perhaps your small network cannot operate with fluidity and efficiency if advanced access restrictions are enabled. If so, you must take that chance, perhaps implementing other security procedures to compensate. In essence, these issues are the basis of security theory: You must balance the risks against practical security measures, based on the sensitivity of your network data.
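To illustrate why logging is called the cornerstone of an investigation, here is a minimal sketch (Python, with a made-up log format) of the kind of first-pass analysis an administrator might run over an authentication log once logging is actually turned on:

```python
# Even a crude scan of an auth log reveals who is hammering the machine.
# The log format below is invented for illustration.
SAMPLE_LOG = """\
Oct 29 01:02:11 host login: FAILED LOGIN for root from 10.0.0.5
Oct 29 01:02:14 host login: FAILED LOGIN for root from 10.0.0.5
Oct 29 01:03:02 host login: session opened for alice
Oct 29 01:05:40 host login: FAILED LOGIN for guest from 10.0.0.9
"""

def failed_logins_by_source(log_text):
    """Count FAILED LOGIN lines per originating address."""
    counts = {}
    for line in log_text.splitlines():
        if "FAILED LOGIN" in line:
            source = line.rsplit("from ", 1)[1]
            counts[source] = counts.get(source, 0) + 1
    return counts

print(failed_logins_by_source(SAMPLE_LOG))  # {'10.0.0.5': 2, '10.0.0.9': 1}
```

Without the log, the repeated failures from 10.0.0.5 are simply invisible; with it, a pattern of probing stands out in seconds.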
You will notice that both active and passive states of insecurity in software result from the consumer's lack of knowledge, not from any vendor's act or omission. This is an education issue, and education is a serious theme here. Put another way, crackers gain most effectively by attacking networks where such knowledge is lacking.


System Flaws or Deficiency of Vendor Response
System flaws or deficiency of vendor response are matters beyond the end-user's control. Although vendors might argue this point furiously, here's a fact: These factors are the second most common source of security problems. Anyone who subscribes to a bug mailing list knows this. Each day, bugs or programming weaknesses are found in network software. Each day, these are posted to the Internet in advisories or warnings. Unfortunately, not all users read such advisories. System flaws needn't be classified into many subcategories here. It's sufficient to say that a system flaw is any element of a program that causes the program to
  • Work improperly (under either normal or extreme conditions)
  • Allow crackers to exploit that weakness (or improper operation) to damage or gain control of a system
I am concerned with two types of system flaws. The first, which I call a pure flaw, is a security flaw nested within the security structure itself. It is a flaw inherent within a security-related program. By exploiting it, a cracker obtains one-step, unauthorized access to the system or its data.
The Netscape Secure Sockets Layer flaw: In January 1996, two students in the Computer Science department at the University of California, Berkeley highlighted a serious flaw in the Netscape Navigator encryption scheme. Their findings were published in Dr. Dobb's Journal, in an article titled "Randomness and the Netscape Browser" by Ian Goldberg and David Wagner. In it, Goldberg and Wagner explain that Netscape's implementation of a cryptographic protocol called Secure Sockets Layer (SSL) was inherently flawed: secure communications intercepted on the WWW could be cracked. This is an excellent example of a pure flaw. (It should be noted that the flaw in Netscape's SSL implementation was originally discovered by an individual in France; Goldberg and Wagner were, however, the first in the United States to provide a detailed analysis of it.)
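For intuition about how such a flaw is exploited: Goldberg and Wagner found that Netscape seeded its random-number generator from predictable values such as the time of day, which shrinks the effective key space enormously. The following Python sketch (not Netscape's actual code; the key derivation and timestamps are invented for illustration) shows why a key derived from a guessable seed falls to a trivial search:

```python
# Illustration only: if a 'session key' is derived from a predictable seed
# such as the current clock, an attacker who knows roughly when the key was
# generated can simply try every nearby seed value.
import random

def make_key(seed):
    """Derive a 32-bit 'session key' from a seed -- deliberately weak."""
    return random.Random(seed).getrandbits(32)

# Victim generates a key at some (approximately known) timestamp.
victim_time = 842_000_123            # hypothetical clock value, in seconds
secret_key = make_key(victim_time)

def crack(key, guess_center, window=300):
    """Search every seed within +/- window seconds of the guessed time."""
    for t in range(guess_center - window, guess_center + window + 1):
        if make_key(t) == key:
            return t
    return None

recovered = crack(secret_key, victim_time + 17)  # attacker's clock is off by 17s
print(recovered == victim_time)  # True: only ~600 seeds had to be tried
```

A strong cipher is worthless if its key comes from a seed space an attacker can enumerate in under a second; that, in essence, was the Netscape finding.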
Conversely, there are secondary flaws. A secondary flaw is any flaw arising in a program that, while totally unrelated to security, opens a security hole elsewhere on the system. In other words, the programmers were charged with making the program functional, not secure. No one (at the time the program was designed) imagined cause for concern, nor did they imagine that such a flaw could arise.
Secondary flaws are far more common than pure flaws, particularly on platforms that have not traditionally been security oriented. An example of a secondary security flaw is any flaw within a program that requires special access privileges in order to complete its tasks (in other words, a program that must run with root or superuser privileges). If that program can be attacked, the cracker can work through that program to gain special, privileged access to files.
Historically, printer utilities have been a problem in this area. (For example, in late 1996, SGI determined that root privileges could be obtained through the Netprint utility in its IRIX operating system.)
Whether pure or secondary, system flaws are especially dangerous to the Internet
community because they often emerge in programs that are used on a daily basis, such as FTP or Telnet. These mission-critical applications form the very heart of the Internet and cannot be suddenly taken away, even if a security flaw exists within them. To understand this concept, imagine if Microsoft Word were discovered to be totally insecure. Would people stop using it? Of course not. Millions of offices throughout the world rely on Word. However, there is a vast difference between a serious security flaw in Microsoft Word and a serious security flaw in NCSA HTTPD, which is a popular Web-server package. The serious flaw in HTTPD would place hundreds of thousands of servers (and therefore, millions of accounts) at risk. Because of the Internet's size and the services it now offers, flaws inherent within its security structure are of international concern.

So, whenever a flaw is discovered within sendmail, FTP, Gopher, HTTP, or other
indispensable elements of the Internet, programmers develop patches (small programs or source code) to temporarily solve the problem. These patches are distributed to the world at large, along with detailed advisories. This brings us to vendor response.


Vendor Response
Vendor response has traditionally been good, but this shouldn't give you a false sense of security. Vendors are in the business of selling software. To them, there is nothing fascinating about someone discovering a hole in the system. At best, a security hole represents a loss of revenue or prestige. Accordingly, vendors quickly issue assurances to allay users' fears, but actual corrective action can sometimes be long in coming. The reasons for this can be complex, and often the vendor is not to blame. Sometimes, immediate corrective action just isn't feasible, as in the following cases:
  • When the affected application is comprehensively tied to the operating-system source
  • When the application is very widely in use or is a standard
  • When the application is third-party software and that third party has poor support, has gone out of business, or is otherwise unavailable
In these instances, a patch (or other solution) can provide temporary relief. However, for this system to work effectively, all users must know that the patch is available. Notifying the public would seem to be the vendor's responsibility and, to be fair, vendors post such patches to security groups and mailing lists.

However, vendors might not always take the extra step of informing the general public. In many cases, it just isn't cost effective. Once again, this issue breaks down to knowledge. Users who have good knowledge of their network utilities, of holes, and of patches are well prepared. Users without such knowledge tend to be victims. That, more than any other reason, is why I wrote this text. In a nutshell, security education is the best policy.


Why Education in Security Is Important
Traditionally, security folks have attempted to obscure security information from the average user. As such, security specialists occupy positions of prestige in the computing world. They are regarded as high priests of arcane and recondite knowledge that is unavailable to normal folks. There was a time when this approach had merit. After all, users should be afforded such information only on a need-to-know basis. However, the average man has now achieved need-to-know status.
So, I pose the question again:
Who needs to be educated about Internet security?
The answer is:
We all do.
How important security education is to you depends on your station in life. If you are a merchant or business person, the answer is straightforward:
In order to conduct commerce on the Net, you must be assured of some reasonable level of data security. This concern is shared by consumers: if crackers are capable of capturing Net traffic containing sensitive financial data, why buy over the Internet?
And of course, between the consumer and the merchant stands yet another class of individual concerned with data security:
the software vendor who supplies the tools to facilitate that commerce. These parties (and their reasons for security) are obvious.
However, there are some not so obvious reasons:
  • Privacy is one such concern. The Internet represents the first real evidence that an Orwellian society can be established. Every user should be aware that unencrypted communication across the Internet is totally insecure. Likewise, each user should be aware that government agencies--not crackers--pose the greatest threat. Although the Internet is a wonderful resource for research or recreation, it is not your friend (at least, not if you have anything to hide).
  • There are other, more concrete reasons to promote security education.

The Corporate Sector
For the moment, set aside dramatic scenarios such as corporate espionage. These subjects are exciting for purposes of discussion, but their actual incidence is rare. Instead, I'd like to concentrate on a very real problem: cost.

The average corporate database is designed using proprietary software. Licensing fees for these big database packages can amount to tens of thousands of dollars. Fixed costs of these databases include programming, maintenance, and upgrade fees. In short, development and sustained use of a large, corporate database is costly and labor intensive. When a firm maintains such a database on site but without connecting it to the Internet, security is a limited concern. To be fair, an administrator must grasp the basics of network security to prevent aspiring hackers in this or that department from gaining unauthorized access to data. Nevertheless, the number of potential perpetrators is limited and access is usually restricted to a few, well-known protocols.

Now, take that same database and connect it to the Net. Suddenly, the picture is
drastically different. First, the number of potential perpetrators is unknown and unlimited. An attack could originate from anywhere, here or overseas. Furthermore, access is no longer limited to one or two protocols. The very simple operation of connecting that database to the Internet opens many avenues of entry. For example, database access architecture might require the use of one or more foreign languages to get the data from the database to the HTML page. I have seen scenarios that were incredibly complex. In one scenario, I observed a six-part
process. From the moment the user clicked a Submit button, a series of operations were undertaken:
  1. The variable search terms submitted by the user were extracted and parsed by a Perl script.
  2. The Perl script fed these variables to an intermediate program designed to interface with a proprietary database package.
  3. The proprietary database package returned the result, passing it back to a Perl script that formatted the data into HTML.
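As a rough sketch of why each stage invites trouble, consider this simplified Python rendition of such a pipeline (the original used Perl and a proprietary database; everything here, including the mock database and form format, is hypothetical). A single forgotten escaping step in the final stage would let user input pass straight into the generated page:

```python
# Simplified three-stage pipeline: parse user input, query a (mock)
# database layer, format results as HTML. Each hand-off is a place
# where unvalidated input can slip through.
import html

MOCK_DATABASE = {
    "widgets": ["red widget", "blue widget"],
    "gadgets": ["steam gadget"],
}

def parse_query(raw_form):
    """Stage 1: extract the search term from 'term=...' form data."""
    pairs = dict(p.split("=", 1) for p in raw_form.split("&"))
    return pairs.get("term", "")

def query_database(term):
    """Stage 2: hand the term to the database layer."""
    return MOCK_DATABASE.get(term, [])

def render_html(term, results):
    """Stage 3: format the results -- escaping the term is the crucial step."""
    safe_term = html.escape(term)  # omit this and user input lands raw in the page
    items = "".join(f"<li>{html.escape(r)}</li>" for r in results)
    return f"<p>Results for {safe_term}:</p><ul>{items}</ul>"

page = render_html(parse_query("term=widgets"), query_database("widgets"))
print(page)
```

Each function works perfectly in isolation; the danger lives in the hand-offs, where nothing forces the next stage to treat the data as hostile. That is precisely why a multi-stage pipeline multiplies the opportunities for a hole.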
Anyone legitimately employed in Internet security can see that this scenario was a disaster waiting to happen. Each stage of the operation harbored a potential security hole. For exactly this reason, the development of database security techniques has for some time been a hot subject in many circles.

Administrative personnel are sometimes quick to deny (or restrict) funding for security within their corporation. They see this cost as unnecessary, largely because they do not understand the dire nature of the alternative. The reality is this:
One or more talented crackers could--in minutes or hours--destroy several years of data entry.
Before business on the Internet can be reliably conducted, some acceptable level of
security must be reached. For companies, education is an economical way to achieve at least minimal security. What they spend now may save many times that amount later.


Government
Folklore and common sense both suggest that government agencies know something more, something special about computer security. Unfortunately, this simply isn't true (maybe with the notable exception of the National Security Agency). As you will learn, government agencies routinely fail in their quest for security.
Later we will examine various reports that demonstrate the poor security now maintained by government servers. The sensitivity of data accessed by hackers is amazing. These arms of government (and their attending institutions) hold some of the most personal data on people. More importantly, they hold sensitive data related to national security. At a minimum, this information needs to be protected.


Operating Systems
There is substantial rivalry on the Internet between users of different operating systems. Let me make one thing clear:
It does not matter which operating system you use. Unless it is a secure operating system (that is, one where the main purpose of its design is network security), there will always be security holes, apparent or otherwise.
True, studies have shown that to date, fewer holes have been found in Mac- and PC-based operating systems (as opposed to UNIX, for example), at least in the context of the Internet. However, such studies are probably premature and unreliable.


Open Systems
UNIX is an open system. As such, its source is available to the public for examination. In fact, many common UNIX programs come only in source form. Others include binary distributions, but still include the source. Because of this, much is known about the UNIX operating system and its security flaws. Hackers can inexpensively establish Linux boxes in their homes and hack until their faces turn blue.


Closed and Proprietary Systems
Conversely, the source of proprietary and closed operating systems is unavailable. The manufacturers of such software furiously protect their source, claiming it to be a trade secret. As these proprietary operating systems gravitate to the Net, their security flaws will become more readily apparent. To be frank, this process depends largely on the cracking community. As crackers put these operating systems (and their newly implemented TCP/IP) to the test, interesting results will undoubtedly emerge. But, to my point: we no longer live in a world governed exclusively by a single operating system. As the Internet grows in scope and size, all operating systems known to humankind will become integral parts of the network. Therefore, operating-system rivalry must be replaced by a more sensible approach. Network security now depends on having good, general security knowledge. (Or, from another angle, successful hacking and cracking depends on knowing all platforms, not just one.) So, I ask you to temporarily put aside your bias. In terms of the Internet at least, the security of each of us depends on us all, and that is no trivial statement.


The Loneliness of the Long-Distance Net Surfer
The Information Superhighway is a dangerous place. Oh, the main highway isn't so bad. Prodigy, America Online, Microsoft Network...these are fairly clean thoroughfares. They are beautifully paved, with colorful signs and helpful hints on where to go and what to do. But pick a wrong exit, and you travel down a different highway: one littered with burned-out vehicles, overturned dumpsters, and graffiti on the walls. You see smoke rising from fires set on each side of the road. If you listen, you can hear echoes of a distant subway mixed with strange, exotic music.
You pull to a stop and roll down the window. An insane man stumbles from an alley, his tattered clothes blowing in the wind. He careens toward your vehicle, his weathered shoes scraping against broken glass and concrete. He is mumbling as he approaches your window. He leans in and you can smell his acrid breath. He smiles--missing two front teeth--and says "Hey, buddy...got a light?" You reach for the lighter, he reaches for a knife. As he slits your throat, his accomplices emerge from the shadows. They descend on your car as you fade into unconsciousness. Another Net Surfer bites the dust. Others decry your fate. He should have stayed on the main road! Didn't the people at the pub tell him so? Unlucky fellow.

This snippet is an exaggeration; a parody of horror stories often posted to the Net. Most commonly, they are posted by commercial entities seeking to capitalize on your fears and limited understanding of the Internet. These stories are invariably followed by endorsements for this or that product. Protect your business! Shield yourself now! This is an example of a phenomenon I refer to as Internet voodoo. To practitioners of this secret art, the average user appears as a rather gullible chap. A sucker.

I hope the material here plays a small part in eradicating Internet voodoo. It provides enough education to shield the user (or new system administrator) from unscrupulous forces on the Net, forces that give the Internet-security field a bad name.

I am hoping that new network administrators will also employ security tools against their own networks. In essence, I have tried to provide a gateway through which any user can become security literate. I believe that widespread dissemination of security material will result in an increased number of hackers (and perhaps, crackers). Admittedly, serious security documents can be stuffy, academic, and, to be frank, boring. I hope to avoid that style.


Categories of People
  • System Administrator A system administrator is any person charged with managing a network or any portion of a network. Sometimes, people might not realize that they are system administrators. In small companies, for example, programming duties and system administration are sometimes assigned to a single person, who thus becomes a general, all-purpose technician. They keep the system running, add new accounts, and basically perform any task required on a day-to-day basis. This, for our purposes, is a system administrator. Many capable system administrators are not well versed in security, not because they are lazy or incompetent but because security was (until now) not an issue for them.
    For example, consider the sysad who lords over an internal LAN. One day, the powers that be decree that the LAN must establish a connection to the Net. Suddenly, that sysad is thrown into an entirely different (and hostile) environment. He/she might be exceptionally skilled at internal security but have little practical experience with the Internet. Today, numerous system administrators are faced with this dilemma. For many, additional funding to hire on-site security specialists is not available and thus, these people must go it alone. I show the attack from both sides of the fence. I show both how to attack and how to defend in a real-life, combat situation.
  • Hacker The term hacker refers to programmers and not to those who unlawfully breach the security of systems. A hacker is any person who investigates the integrity and security of an operating system. Most commonly, these individuals are programmers. They usually have advanced knowledge of both hardware and software and are capable of rigging (or hacking) systems in innovative ways. Often, hackers determine new ways to utilize or implement a network, ways that software manufacturers had not expressly intended. I will help programmers make informed decisions about how to develop code safely and cleanly. As an added benefit, analysis of existing network utilities (and their deficiencies) may assist programmers in developing newer and perhaps more effective applications for the Internet.
  • Cracker A cracker is any individual who uses advanced knowledge of the Internet (or networks) to compromise network security. Historically, this activity involved cracking encrypted password files, but today, crackers employ a wide range of techniques. Hackers also sometimes test the security of networks, often with the identical tools and techniques used by crackers. To differentiate between the two groups on a trivial level, simply remember this: crackers engage in such activities without authorization. As such, most cracking activity is unlawful and therefore punishable by a term of imprisonment. All crackers start somewhere, many on the famous Usenet group alt.2600. As more new users flood the Internet, quality information about cracking (and security) becomes more difficult to find.
  • Business Person For our purposes, business person refers to any individual who has established (or will establish) a commercial enterprise that uses the Internet as a medium. Hence, a business person--within the meaning employed here--is anyone who conducts commerce over the Internet by offering goods or services. (It does not matter whether these goods or services are offered free as a promotional service; I still classify this as business.) Businesses establish permanent connections each day. I warn them about unscrupulous security specialists, who may charge thousands of dollars to perform basic system-administration tasks. I will also offer a basic framework for internal security policies. You have probably read dozens of dramatic accounts about hackers and crackers, but these materials are largely sensationalized. (Commercial vendors often capitalize on your fear by spreading such stories.) The techniques that will be employed against your system are simple and methodical. Know them, and you will know at least the basics about how to protect your data.
  • Journalist A journalist is any party who is charged with reporting on the Internet. This can be someone who works for a wire news service or a college student writing for his or her university newspaper. The classification has nothing to do with how much money is paid for the reporting, nor where the reporting is published. If you are a journalist, you know that security personnel rarely talk to the media. That is, they rarely provide an inside look at Internet security (and when they do, this usually comes in the form of assurances that might or might not have value). Technology writing is difficult and takes considerable research. My intent is to narrow that field of research for journalists who want to cover the Internet. In coming years, this type of reporting (whether by print or broadcast media) will become more prevalent.
  • Casual User A casual user is any individual who uses the Internet purely as a source of entertainment. Such users rarely spend more than 10 hours a week on the Net. They surf subjects that are of personal interest. Here I provide an understanding of the Internet's innermost workings, preparing the reader for personal attacks of various kinds, not only from other, hostile users but from the prying eyes of government. The casual user learns that the Internet is not a toy, that one's identity can be traced, and that bad things can happen while using the Net.
  • Security Specialist A security specialist is anyone charged with securing one or more networks from attack. It is not necessary that they be paid for their services in order to qualify in this category. Some people do this as a hobby; if they do the work, they are specialists.
NOTE: Here the void refers to that portion of the Internet that exists beyond your router or modem: the dark, swirling mass of machines, services, and users beyond your computer or network. These are quantities unknown to you. (The term is commonly used in security circles to refer to such quantities.)

The Good, the Bad, and the Ugly
Those who unlawfully penetrate networks seldom do so for fun and often pursue destructive objectives. Considering how long it takes to establish a network, write software, configure hardware, and maintain databases, it is abhorrent to the hacking community that the cracking community should be destructive. Still, that is a choice and one choice--even a bad one--is better than no choice at all. Crackers serve a purpose within the scheme of security, too. They assist the good guys in discovering faults inherent within the network.


Hackers and Crackers


For many years, the media has erroneously applied the word hacker when it really means cracker, so the public now believes that a hacker is someone who breaks into computer systems. This is untrue and does a disservice to some of our most talented hackers.


What Is the Difference Between a Hacker and a Cracker?
There are some traditional tests to determine the difference between hackers and
crackers. First, I want to offer the general definitions of each term:
  • A hacker is a person intensely interested in the arcane and recondite workings of any computer operating system. Most often, hackers are programmers. As such, hackers obtain advanced knowledge of operating systems and programming languages. They may know of holes within systems and the reasons for such holes. Hackers constantly seek further knowledge, freely share what they have discovered, and never, ever intentionally damage data.
  • A cracker is a person who breaks into or otherwise violates the system integrity of remote machines, with malicious intent. Crackers, having gained unauthorized access, destroy vital data, deny legitimate users service, or basically cause problems for their targets. Crackers can easily be identified because their actions are malicious.
These definitions are good and may be used in the general sense. However, there are other tests. One is the legal test. It is said that by applying legal reasoning to the equation, you can differentiate between hackers (or any other party) and crackers. This test requires no extensive legal training. It is applied simply by inquiring as to mens rea.


Mens Rea
Mens rea is a Latin term that refers to the guilty mind. It is used to describe that mental condition in which criminal intent exists. Applying mens rea to the hacker-cracker equation seems simple enough. If the suspect unwittingly penetrated a computer system--and did so by methods that any law-abiding citizen would have employed at the time--there is no mens rea and therefore no crime. However, if the suspect was well aware that a security breach was underway--and he knowingly employed sophisticated methods of implementing that breach--mens rea exists and a crime has been committed. By this measure, at least from a legal point of view, the former is an unwitting computer user (possibly a hacker) and the latter a cracker.

In my opinion, however, this test is too rigid. At day's end, hackers and crackers are human beings, creatures too complex to sum up with a single rule. The better way to distinguish these individuals would be to understand their motivations and their ways of life.

The hacker
To understand the mind-set of the hacker, you must first know what hackers do. To explain that, I need to briefly discuss computer languages.


Computer Languages
A computer language is any set of libraries or instructions that, when properly arranged and compiled, can constitute a functional computer program. The building blocks of any given computer language never fundamentally change. Therefore, each programmer walks to his or her keyboard and begins with the same basic tools as his or her fellows. Examples of such tools include:
  • Language libraries--These are prefabricated functions that perform common actions usually needed in any computer program (routines that read a directory, for example). They are provided to the programmer so that he or she can concentrate on other, less generic aspects of a computer program.
  • Compilers--These are software programs that convert the programmer's written code to an executable format, suitable for running on this or that platform.
The programmer is given nothing more than languages (except a few manuals that describe how these tools are to be used). It is therefore up to the programmer what happens next. The programmer programs to either learn or create, whether for profit or not. This is a useful function, not a wasteful one. Throughout these processes of learning and creating, the programmer applies one magical element that is absent from both the language libraries and the compiler: imagination. That is the programmer's existence in a nutshell.
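The library-routine idea above can be made concrete. The text names no particular language, so this sketch uses Python's standard library purely as an illustration: the prefabricated os.listdir routine reads a directory, sparing the programmer that generic work.

```python
import os

def list_entries(path="."):
    """Read a directory using a prefabricated library routine
    (os.listdir), so the programmer can concentrate on less
    generic aspects of the program."""
    return sorted(os.listdir(path))

# List whatever happens to be in the current directory.
print(list_entries("."))
```

The routine itself is a black box; the programmer supplies only the imagination that decides what to do with its results.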

Modern hackers, however, reach deeper still. They probe the system, often at a microcosmic level, finding holes in software and snags in logic. They write programs to check the integrity of other programs. Thus, when a hacker creates a program that can automatically check the security structure of a remote machine, this represents a desire to better what now exists. It is creation and improvement through the process of analysis.
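A program that checks the security structure of a remote machine can, in its most minimal form, simply ask whether a service port answers. The sketch below is mine, not any particular hacker tool, and the host and port shown are placeholders:

```python
import socket

def port_is_open(host, port, timeout=1.0):
    """Return True if a TCP connection to (host, port) succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical probe: is something listening on the local SMTP port?
if port_is_open("127.0.0.1", 25):
    print("port 25 is answering")
```

Stringing such probes across a list of well-known ports yields a crude scanner, which is creation and improvement through analysis when used to audit one's own machines.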

In contrast, crackers rarely write their own programs. Instead, they beg, borrow, or steal tools from others. They use these tools not to improve Internet security, but to subvert it. They have technique, perhaps, but seldom possess programming skills or imagination. They learn all the holes and may be exceptionally talented at practicing their dark arts, but they remain limited. A true cracker creates nothing and destroys much. His chief pleasure comes from disrupting or otherwise adversely affecting the computer services of others.

This is the division of hacker and cracker. Both are powerful forces on the Internet, and both will remain permanently. And, as you have probably guessed by now, some individuals may qualify for both categories. The very existence of such individuals further clouds the division between these two odd groups of people. Now, I know that real hackers reading this are saying to themselves "There is no such thing as this creature you are talking about. One is either a hacker or a cracker and there's no more to it." If you had asked me seventeen years ago, I would have agreed. However, today, it just isn't true. A good case in point is Randal Schwartz, whom some of you know from his weighty contributions to the programming communities, particularly his discourses on the Practical Extraction and Report Language (Perl). With the exception of Perl's creator, Larry Wall, no one has done more to educate the general public on the Perl programming language. Schwartz has therefore had a most beneficial influence on the Internet in general. Additionally, Schwartz has held consulting positions at the University of Buffalo, Silicon Graphics (SGI), Motorola Corporation, and Air Net. He is an extremely gifted programmer.
NOTE: Schwartz has authored or co-authored quite a few books about Perl, including Learning Perl, usually called "The Llama Book," published by O'Reilly & Associates (ISBN 1-56592-042-2).
His contributions notwithstanding, Schwartz remains on the thin line between hacker and cracker. In fall 1993 (and for some time prior), Schwartz was employed as a consultant at Intel in Oregon. In his capacity as a system administrator, Schwartz was authorized to implement certain security procedures. As he would later explain on the witness stand, testifying on his own behalf:
Part of my work involved being sure that the computer systems were secure, to pay attention to information assets, because the entire company resides--the product of the company is what's sitting on those disks. That's what the people are producing. They are sitting at their work stations. So protecting that information was my job, to look at the situation, see what needed to be fixed, what needed to be changed, what needed to be installed, what needed to be altered in such a way that the information was protected.
The following events transpired:
  • On October 28, 1993, another system administrator at Intel noticed heavy processes being run from a machine under his control.
  • Upon examination of those processes, the system administrator concluded that the program being run was Crack, a common utility used to crack passwords on UNIX systems. This utility was apparently being applied to network passwords at Intel and at least one other firm.
  • Further examination revealed that the processes were being run by Schwartz or someone using his login and password.
  • The system administrator contacted a superior who confirmed that Schwartz was not authorized to crack the network passwords at Intel.
  • On November 1, 1993, that system administrator provided an affidavit that was sufficient to support a search warrant for Schwartz's home.
  • The search warrant was served and Schwartz was subsequently arrested, charged under an obscure Oregon computer crime statute.

The case is bizarre. You have a skilled and renowned programmer charged with maintaining internal security for a large firm. He undertakes procedures to test the security of that network and is ultimately arrested for his efforts. At least, the case initially appears that way. Unfortunately, that is not the end of the story. Schwartz did not have authorization to crack those password files. Moreover, there is some evidence that he violated other network security conventions at Intel.
For example,
  1. Schwartz once installed a shell script that allowed him to access the Intel network from other locations. This script reportedly opened a hole in Intel's firewall.
  2. Another system administrator discovered this program, froze Schwartz's account, and confronted him. Schwartz agreed that installing the script was not a good idea and further agreed to refrain from implementing that program again.
  3. Some time later, that same system administrator found that Schwartz had re-installed the program. (Schwartz apparently renamed the program, thus throwing the system administrator off the trail.)
What does all this mean? From my point of view, Randal Schwartz probably broke Intel policy a number of times. What complicates the situation is that testimony reveals that such policy was never explicitly laid out to Schwartz. At least, he was given no document that expressly prohibited his activity. Equally, however, it seems clear that Schwartz overstepped his authority.

Looking at the case objectively, some conclusions can immediately be made. One is that most administrators charged with maintaining network security use a tool like Crack. This is a common procedure by which to identify weak passwords or those that can be easily cracked by crackers from the void. At the time of the Schwartz case, however, such tools were relatively new to the security scene. Hence, the practice of cracking your own passwords was not so universally accepted as a beneficial procedure. However, Intel's response was, in my opinion, a bit reactionary. For example, why wasn't the matter handled internally?
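The password-auditing idea behind a tool like Crack can be sketched in a few lines. This is an illustrative sketch only, not the real Crack: the genuine article attacked salted UNIX crypt(3) hashes, whereas the hash function, account data, and word list below are placeholders of my own.

```python
import hashlib

def audit_passwords(shadow, wordlist):
    """Return accounts whose stored hash matches a dictionary word.

    Illustrative only: real UNIX systems of the era stored salted
    DES crypt(3) hashes, not plain SHA-256 digests.
    """
    cracked = {}
    for user, stored_hash in shadow.items():
        for word in wordlist:
            if hashlib.sha256(word.encode()).hexdigest() == stored_hash:
                cracked[user] = word
                break
    return cracked

# Hypothetical accounts: alice chose a dictionary word, bob did not.
shadow = {"alice": hashlib.sha256(b"secret").hexdigest(),
          "bob": hashlib.sha256(b"x7!pQ9zL").hexdigest()}
wordlist = ["password", "secret", "letmein"]
print(audit_passwords(shadow, wordlist))  # → {'alice': 'secret'}
```

An administrator running such a check against his or her own password file is doing exactly what the text describes; the Schwartz case turned on whether that activity was authorized.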

The Schwartz case angered many programmers and security experts across the country. As Jeffrey Kegler wrote in his analysis paper, "Intel v. Randal Schwartz: Why Care?" the Schwartz case was an ominous development:
Clearly, Randal was someone who should have known better. And in fact, Randal would be the first Internet expert already well known for legitimate activities to turn to crime. Previous computer criminals have been teenagers or wannabes. Even the relatively sophisticated Kevin Mitnick never made any name except as a criminal. Never before Randal would anyone on the "light side of the force" have answered the call of the "dark side."
I want you to think about the Schwartz case for a moment. Do you have or administrate a network? If so, have you ever cracked passwords from that network without explicit authorization to do so? If you have, you know exactly what this entails. In your opinion, do you believe this constitutes an offense? If you were writing the laws, would this type of offense be a felony?

In any event, as stated, Randal Schwartz is unfortunate enough to be the first legitimate computer security expert to be called a cracker. Thankfully, the experience proved beneficial, even if only in a very small way. Schwartz managed to revitalize his career, touring the country giving great talks as Just Another Convicted Perl Hacker. The notoriety has served him well as of late.
TIP: The transcripts of this trial are available on the Internet in zipped format. The entire distribution is 13 days of testimony and argument. It is available here.

Why Do Crackers Exist?
Crackers exist because they must; human nature is just so, frequently driven by a desire to destroy instead of to create. No more complex explanation need be given.

The only issue here is what type of cracker we are talking about.
  • Some crackers crack for profit. These may land on the battlefield, squarely between two competing companies. Perhaps Company A wants to disable the site of Company B.
  • There are crackers for hire. They will break into almost any type of system you like, for a price. Some of these crackers get involved with criminal schemes, such as retrieving lists of TRW profiles. These are then used to apply for credit cards under the names of those on the list. Other common pursuits are cell-phone cloning, piracy schemes, and garden-variety fraud.
  • Other crackers are kids who demonstrate an extraordinary ability to assimilate highly technical computer knowledge. They may just be getting their kicks at the expense of their targets.

Where Did This All Start?
A complete historical account of cracking is beyond our scope here. However, a little background couldn't hurt. It started with telephone technology. Originally, a handful of kids across the nation were cracking the telephone system. This practice was referred to as phreaking. Phreaking is now recognized as any act by which to circumvent the security of the telephone company. (Although, in reality, phreaking is more about learning how the telephone system works and then manipulating it.) Telephone phreaks employed different methods to accomplish this task. Early implementations involved the use of ratshack dialers, or red boxes. (Ratshack was a slang term for the popular electronics store Radio Shack.) These were hand-held electronic devices that transmitted digital sounds or tones. Phreakers altered these off-the-shelf tone dialers by replacing the internal crystals with Radio Shack part #43-146.
NOTE: Part #43-146 was a crystal, available at many neighborhood electronics stores throughout the country. One could use either a 6.5MHz or a 6.5536MHz crystal. This was used to replace the crystal that shipped with the dialer (3.579545MHz). The alteration process took approximately 5 minutes.
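The note's crystal swap can be checked arithmetically. A tone dialer derives its DTMF frequencies from the crystal, so replacing the crystal scales every tone by the ratio of the new frequency to the old. The DTMF pair for the * key (941 Hz and 1209 Hz) is standard; that the shifted pair lands near the 1700/2200 Hz coin-deposit tones is my inference, offered as a hedged illustration:

```python
# Stock dialer crystal vs. the Radio Shack replacement (from the note).
STOCK_HZ = 3.579545e6
REPLACEMENT_HZ = 6.5536e6

# Every generated tone scales by this ratio (about 1.83).
ratio = REPLACEMENT_HZ / STOCK_HZ

# Standard DTMF frequency pair for the '*' key.
low, high = 941.0, 1209.0

shifted = (low * ratio, high * ratio)
print(shifted)  # roughly (1723, 2213) Hz, near the 1700/2200 Hz coin tones
```

In other words, a five-minute parts swap turned a consumer tone dialer into a coin-tone generator.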
Having made these modifications, phreaks programmed in the sounds of quarters being inserted into a pay telephone. From there, the remaining steps were simple. Phreaks went to a pay telephone and dialed a number. The telephone would request payment for the call. In response, the phreak would use the red box to emulate money being inserted into the machine. This resulted in obtaining free telephone service at most pay telephones.

Schematics and very precise instructions for constructing such devices are at thousands of sites on the Internet. The practice became so common that in many states, the mere possession of a tone dialer altered in such a manner was grounds for search, seizure, and arrest. As time went on, the technology in this area became more and more advanced. New boxes like the red box were developed. The term boxing came to replace the term phreaking, at least in general conversation, and boxing became exceedingly popular. This resulted in even further advances, until an entire suite of boxes was developed. Table 3.1 lists a few of these boxes. There are at least 40 different boxes or devices within this class. Each was designed to perform a different function. Many of the techniques employed are no longer effective.

For example, blue boxing has been seriously curtailed because of new electronically switched telephone systems. At a certain stage of the proceedings, telephone phreaking and computer programming were combined; this marriage produced some powerful tools. One example is BlueBEEP, an all-purpose phreaking/hacking tool. BlueBEEP combines many different aspects of the phreaking trade, including the red box. Essentially, in an area where the local telephone lines are old style, BlueBEEP provides the user with awesome power over the telephone system. It looks a lot like any legitimate application, the type anyone might buy at his or her local software outlet. To its author's credit, it operates as well as or better than most commercial software. BlueBEEP runs in a DOS environment, or through a DOS shell window in either Windows 9x or Windows NT.

I should say this before continuing: In 1997, BlueBEEP was the most finely programmed phreaking tool ever coded. The author, then a resident of Germany, reported that the application was written primarily in PASCAL and assembly language. In any event, contained within the program are many, many options for control of trunk lines, generation of digital tones, scanning of telephone exchanges, and so on. It is probably the most comprehensive tool of its kind.
However, I am getting ahead of the story. BlueBEEP was actually created quite late in the game. We must venture back several years to see how telephone phreaking led to Internet cracking. The process was a natural one. Phone phreaks tried almost anything they could to find new systems. Phreaks often searched telephone lines for interesting tones or connections. Some of those connections turned out to be modems. No one can tell when it was--that instant when a telephone phreak first logged on to the Internet. However, the process probably occurred more by chance than skill. Years ago, Point-to-Point Protocol (PPP) was not available. Therefore, the way a phreak would have found the Internet is debatable. It probably happened after one of them, by direct-dial connection, logged in to a mainframe or workstation somewhere in the void. This machine was likely connected to the Internet via Ethernet, a second modem, or another port. Thus, the targeted machine acted as a bridge between the phreak and the Internet. After the phreak crossed that bridge, he or she was dropped into a world teeming with computers, most of which had poor or sometimes no security.

Imagine that for a moment: an unexplored frontier. What remains is history. Since then, crackers have broken their way into every type of system imaginable.
  1. During the 1980s, truly gifted programmers began cropping up as crackers. It was during this period that the distinction between hackers and crackers was first confused, and it has remained so ever since.
  2. By the late 1980s, these individuals were becoming newsworthy and the media dubbed those who breached system security as hackers.
  3. Then an event occurred that would forever focus America's computing community on these hackers. On November 2, 1988, someone released a worm into the network. This worm was a self-replicating program that sought out vulnerable machines and infected them. Having infected a vulnerable machine, the worm would go into the wild, searching for additional targets. This process continued until thousands of machines were infected. Within hours, the Internet was under heavy siege. In a now celebrated paper that provides a blow-by-blow analysis of the worm incident ("Tour of the Worm"), Donn Seeley, then at the Department of Computer Science at the University of Utah, wrote:
    November 3, 1988 is already coming to be known as Black Thursday. System administrators around the country came to work on that day and discovered that their networks of computers were laboring under a huge load. If they were able to log in and generate a system status listing, they saw what appeared to be dozens or hundreds of "shell" (command interpreter) processes. If they tried to kill the processes, they found that new processes appeared faster than they could kill them.
    The worm was apparently released from a machine at the Massachusetts Institute of Technology. Reportedly, the logging system on that machine was either working incorrectly or was not properly configured, and thus the perpetrator left no trail. (Seeley reports that the first infections included the Artificial Intelligence Laboratory at MIT, the University of California at Berkeley, and the RAND Corporation in California.) As one might expect, the computing community was initially in a state of shock. However, as Eugene Spafford, a renowned computer science professor from Purdue University, explained in his paper "The Internet Worm: An Analysis," that state of shock didn't last long. Programmers at both ends of the country were working feverishly to find a solution:
    By late Wednesday night, personnel at the University of California at Berkeley and at Massachusetts Institute of Technology had `captured' copies of the program and began to analyze it. People at other sites also began to study the program and were developing methods of eradicating it.
    An unlikely candidate would come under suspicion: a young man studying computer science at Cornell University. This particular young man was an unlikely candidate for two reasons. First, he was a good student without any background that would suggest such behavior. Second, and more importantly, the young man's father, an engineer with Bell Labs, had a profound influence on the Internet's design. Nevertheless, the young man, Robert Morris Jr., was indeed the perpetrator. Reportedly, Morris expected his program to spread at a very slow rate, its effects being perhaps even imperceptible. However, as Brendan Kehoe notes in his book "Zen and the Art of the Internet":
    Morris soon discovered that the program was replicating and reinfecting machines at a much faster rate than he had anticipated--there was a bug. Ultimately, many machines at locations around the country either crashed or became `catatonic.' When Morris realized what was happening, he contacted a friend at Harvard to discuss a solution. Eventually, they sent an anonymous message from Harvard over the network, instructing programmers how to kill the worm and prevent reinfection.
    Morris was tried and convicted under federal statutes, receiving three years probation and a substantial fine. An unsuccessful appeal followed.
    The introduction of the Morris Worm changed many attitudes about Internet security. A single program had virtually disabled hundreds (or perhaps thousands) of machines. That day marked the beginning of serious Internet security. Moreover, the event helped to forever seal the fate of hackers. Since that point, legitimate programmers have had to rigorously defend their hacker titles. The media has largely neglected to correct this misconception. Even today, the national press refers to crackers as hackers, thus perpetuating the misunderstanding. That will never change and hence, hackers will have to find another term by which to classify themselves.
Does it matter? Not really. Many people charge that true hackers are splitting hairs, that their rigid distinctions are too complex and inconvenient for the public. Perhaps there is some truth to that. For it has been many years since the terms were first used interchangeably (and erroneously). At this stage, it is a matter of principle only.


The Situation Today(1997): A Network at War
The situation today is radically different from the one 10 years ago. Over that period of time, these two groups of people have faced off and crystallized into opposing teams. The network is now at war and these are the soldiers. Crackers fight furiously for recognition and often realize it through spectacular feats of technical prowess. A month cannot go by without a newspaper article about some site that has been cracked. Equally, hackers work hard to develop new methods of security to ward off the cracker hordes.

Who will ultimately prevail? It is too early to tell. The struggle will likely continue for another decade or more. The crackers may be losing ground, though. Because big business has invaded the Net, the demand for proprietary security tools has increased dramatically. This influx of corporate money will lead to an increase in the quality of such security tools. Moreover, the proliferation of these tools will happen at a much faster rate and for a variety of platforms. Crackers will be faced with greater and greater challenges as time goes on. However, the balance of knowledge maintains a constant, with crackers only inches behind.
Some writers assert that throughout this process, a form of hacker evolution is occurring. By this they mean that crackers will ultimately be weeded out over the long haul (many will go to jail, many will grow older and wiser, and so forth). This is probably unrealistic. The exclusivity associated with being a cracker is a strong lure to up-and-coming teenagers. There is a mystique surrounding the activities of a cracker.
There is ample evidence, however, that most crackers eventually retire. They later crop up in various positions, including system administrator jobs. One formerly renowned cracker today runs an Internet salon. Another works on systems for an airline company in Florida. Still another is an elected official in a small town in Southern California. (Because all these individuals have left the life for a more conservative and sane existence, I elected not to mention their names here.)


The Hackers
Let's look at real-life examples of hackers and crackers. That seems to be the only reliable way to differentiate between them. From these brief descriptions, you can get a better understanding of the distinction.
  • Richard Stallman Stallman joined the Artificial Intelligence Laboratory at MIT in 1971. He received the 250K MacArthur Genius award for developing software. He ultimately founded the Free Software Foundation, creating hundreds of freely distributable utilities and programs for use on the UNIX platform. He worked on some archaic machines, including the DEC PDP-10 (to which he probably still has access somewhere). He is a brilliant programmer.
  • Dennis Ritchie, Ken Thompson, and Brian Kernighan Ritchie, Thompson, and Kernighan are programmers at Bell Labs, and all were instrumental in the development of the UNIX operating system and the C programming language. Take these three individuals out of the picture, and there would likely be no Internet (or if there were, it would be a lot less functional). They still hack today. (For example, Ritchie is busy working on Plan 9 from Bell Labs, a new operating system that will probably supplant UNIX as the industry-standard super-networking operating system.)
  • Paul Baran, Rand Corporation Baran is probably the greatest hacker of them all for one fundamental reason: He was hacking the Internet before the Internet even existed. He hacked the concept, and his efforts provided a rough navigational tool that served to inspire those who followed him.
  • Eugene Spafford Spafford is a professor of computer science, celebrated for his work at Purdue University and elsewhere. He was instrumental in creating the Computer Oracle Password and Security System (COPS), a semi-automated system of securing your network. Spafford has turned out some very prominent students over the years and his name is intensely respected in the field.
  • Wietse Venema Venema hails from the Eindhoven University of Technology in the Netherlands. He is an exceptionally gifted programmer who has a long history of writing industry-standard security tools. He co-authored SATAN with Farmer and wrote TCP Wrapper, one of the most commonly used security programs in the world. (This program provides close control and monitoring of information packets coming from the void.)
  • Linus Torvalds A most extraordinary individual, Torvalds enrolled in classes on UNIX and the C programming language in the early 1990s. One year later, he began writing a UNIX-like operating system. Within a year, he released this system to the Internet (it was called Linux). Today, Linux has a cult following and has the distinction of being the only operating system ever developed by software programmers all over the world, many of whom will never meet one another. Linux is GPLed (freely redistributable and available at no cost to anyone with Internet access).
  • Bill Gates and Paul Allen From their high school days, these men from Washington were hacking software. Both are skilled programmers. Starting in 1980, they built the largest and most successful software empire on Earth. Their commercial successes include MS-DOS, Microsoft Windows, Windows 95, and Windows NT.

The Crackers
  • Kevin Mitnick Mitnick, also known as Condor, is probably the world's best-known cracker. Mitnick began his career as a phone phreak. Since those early years, Mitnick has successfully cracked every manner of secure site you can imagine, including but not limited to military sites, financial corporations, software firms, and other technology companies. (When he was still a teen, Mitnick cracked the North American Aerospace Defense Command.) As of 1997, he was awaiting trial on federal charges stemming from attacks committed in 1994-1995.
  • Kevin Poulsen Having followed a path quite similar to Mitnick's, Poulsen is best known for his uncanny ability to seize control of the Pacific Bell telephone system. (Poulsen once used this talent to win a radio contest where the prize was a Porsche. He manipulated the telephone lines so that his call would be the winning one.) Poulsen has also broken nearly every type of site, but has a special penchant for sites containing defense data. This greatly complicated his last period of incarceration, which lasted five years (as of 1997, the longest ever served by a cracker in the United States). Poulsen was released in 1996 and has apparently reformed.
  • Justin Tanner Peterson Known as Agent Steal, Peterson is probably most celebrated for cracking a prominent consumer credit agency. Peterson appeared to be motivated by money instead of curiosity. This lack of personal philosophy led to his downfall and the downfall of others. For example, once caught, Peterson ratted out his friends, including Kevin Poulsen. Peterson then obtained a deal with the FBI to work undercover. This secured his release and he subsequently absconded, going on a crime spree that ended with a failed attempt to secure a six-figure fraudulent wire transfer.
There are many other hackers and crackers, and you will read about them in the following sections: their names, their works, and their Web pages (when available). If you have done something that influenced the security of the Internet, your name likely appears here. If I missed you, I extend my apologies.


Just Who Can Be Hacked, Anyway?


The Internet was born in 1969. Almost immediately after the network was established, researchers were confronted with a disturbing fact: The Internet was not secure and could easily be cracked. Today, writers try to minimize this fact, reminding you that the security technologies of the time were primitive. This has little bearing. Today, security technology is quite complex and the Internet is still easily cracked. I would like to return to those early days of the Internet. Not only will this give you a flavor of the time, it will demonstrate an important point: The Internet is no more secure today than it was twenty years ago.

My evidence begins with a document: a Request for Comments (RFC). Before you review the document, let me explain what the RFC system is about. This is important because I refer to many RFC documents here.


The Request For Comments (RFC) System
Requests for Comments (RFC) documents are special. They are written (and posted to the Net) by individuals engaged in the development or maintenance of the Internet. RFC documents serve the important purpose of requesting Internet-wide comments on new or developing technology. Most often, RFC documents contain proposed standards.
The RFC system is one of evolution.
  1. The author of an RFC posts the document to the Internet, proposing a standard that he or she would like to see adopted network-wide.
  2. The author then waits for feedback from other sources.
  3. The document (after more comments/changes have been made) goes to draft or directly to Internet standard status.
Comments and changes are made by working groups of the Internet Engineering Task Force (IETF). The IETF is "... a large, open, international community of network designers, operators, vendors, and researchers concerned with the evolution of the Internet architecture and the smooth operation of the Internet."

RFC documents are numbered sequentially (the higher the number, the more recent the document) and are distributed at various servers on the Internet.


InterNIC (now ICANN)
InterNIC provides comprehensive databases on networking information. These databases contain the larger portion of collected knowledge on the design and scope of the Internet. Some of those databases include:
  • The WHOIS Database--This database contains all the names and network numbers of hosts (or machines) permanently connected to the Internet in the United States (except *.mil addresses, which must be obtained at nic.ddn.mil).
  • The Directory of Directories--This is a massive listing of nearly all resources on the Internet, broken into categories.
  • The RFC Index--This is a collection of all RFC documents.
All these documents are centrally available from InterNIC.


A Holiday Message
As I mentioned earlier, I refer here to an early RFC. The document in question is RFC 602: The Stockings Were Hung by the Chimney with Care. RFC 602 was posted by Bob Metcalfe in December, 1973. The subject matter concerned weak passwords. In it, Metcalfe writes: The ARPA Computer Network is susceptible to security violations for at least the three following reasons:
  1. Individual sites, used to physical limitations on machine access, have not yet taken sufficient precautions toward securing their systems against unauthorized remote use. For example, many people still use passwords which are easy to guess: their fist [sic] names, their initials, their host name spelled backwards, a string of characters which are easy to type in sequence (such as ZXCVBNM).
  2. The TIP allows access to the ARPANET to a much wider audience than is thought or intended. TIP phone numbers are posted, like those scribbled hastily on the walls of phone booths and men's rooms. The TIP required no user identification before giving service. Thus, many people, including those who used to spend their time ripping off Ma Bell, get access to our stockings in a most anonymous way.
  3. There is lingering affection for the challenge of breaking someone's system. This affection lingers despite the fact that everyone knows that it's easy to break systems, even easier to crash them.
All of this would be quite humorous and cause for raucous eye winking and elbow nudging, if it weren't for the fact that in recent weeks at least two major serving hosts were crashed under suspicious circumstances by people who knew what they were risking; on yet a third system, the system wheel password was compromised--by two high school students in Los Angeles no less. We suspect that the number of dangerous security violations is larger than any of us know and is growing. You are advised not to sit "in hope that Saint Nicholas would soon be there."

That document was posted well over 20 years ago. Naturally, this password problem is no longer an issue. Or is it? Examine this excerpt from a Defense Data Network Security Bulletin, written in 1993:
Host Administrators must assure that:
  • passwords are kept secret by their users;
  • passwords are robust enough to thwart exhaustive attack by password-cracking mechanisms;
  • passwords are changed at least annually; and
  • password files are adequately protected.
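The weaknesses catalogued in RFC 602 and the DDN bulletin are mechanical enough to test for automatically. Here is a minimal sketch of such a check (the function, the rule set, and the six-character threshold are my own illustration, not drawn from any DoD standard):

```python
def is_weak(password, first_name="", hostname=""):
    """Flag passwords matching the patterns Metcalfe listed in RFC 602:
    first names, the host name spelled backwards, and strings of
    characters that are easy to type in sequence (such as ZXCVBNM)."""
    p = password.lower()
    keyboard_rows = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]
    if first_name and p == first_name.lower():
        return True
    if hostname and p == hostname.lower()[::-1]:
        return True
    # Any password that is a run along one keyboard row is easy to type.
    for row in keyboard_rows:
        if p in row or p in row[::-1]:
            return True
    return len(p) < 6   # too short to resist exhaustive attack

print(is_weak("zxcvbnm"))                   # → True (keyboard run)
print(is_weak("xinap", hostname="panix"))   # → True (hostname reversed)
print(is_weak("tr0ub4dor&3"))               # → False
```

A real password auditor would of course also run dictionary and brute-force trials; the point here is only that every weakness in the 1973 list is trivially machine-checkable.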
Take notice. In the more than 25 years of the Internet's existence, it has never been
secure. That's a fact. Later I will try to explain why. For now, however, I confine our inquiry to a narrow question:
Just who can be cracked?
The short answer is this:
As long as a person maintains a connection to the Internet (permanent or otherwise), he or she can be cracked.

What Is Meant by the Term Cracked?
For our purposes, cracked refers to that condition in which the victim network has
suffered an unauthorized intrusion. There are various degrees of this condition, each of which is discussed at length within this book. Here, I offer a few examples of this cracked condition:
  • The intruder gains access and nothing more (access being defined as simple entry; entry that is unauthorized on a network that requires--at a minimum--a login and password).
  • The intruder gains access and destroys, corrupts, or otherwise alters data.
  • The intruder gains access and seizes control of a compartmentalized portion of the system or the whole system, perhaps denying access even to privileged users.
  • The intruder does NOT gain access, but instead implements malicious procedures that cause that network to fail, reboot, hang, or otherwise manifest an inoperable condition, either permanently or temporarily.
To be fair, modern security techniques have made cracking more difficult. However, the gorge between the word difficult and the word impossible is wide indeed. Today, crackers have access to (and often study religiously) a wealth of security information, much of which is freely available on the Internet. The balance of knowledge between these individuals and bona-fide security specialists is not greatly disproportionate. In fact, that gap is closing each day.
My purpose here is to show you that cracking is a common activity: so common that assurances from anyone that the Internet is secure should be viewed with extreme suspicion. To drive that point home, I will begin with governmental entities.
After all, defense and intelligence agencies form the basis of our national security
infrastructure. They, more than any other group, must be secure.


Government
Throughout the Internet's history, government sites have been popular targets among crackers. This is due primarily to press coverage that follows such an event. Crackers enjoy any media attention they can get. Hence, their philosophy is generally this: If you're going to crack a site, crack one that matters.

Are crackers making headway in compromising our nation's most secure networks?
Absolutely.
To find evidence that government systems are susceptible to attack, one needn't look far. A recent report filed by the General Accounting Office (GAO)
concerning the security of the nation's defense networks concluded that:
Defense may have been attacked as many as 250,000 times last year... In addition, in testing its systems, DISA attacks and successfully penetrates Defense systems 65 percent of the time. According to Defense officials, attackers have obtained and corrupted sensitive information--they have stolen, modified, and destroyed both data and software. They have installed unwanted files and "back doors" which circumvent normal system protection and allow attackers unauthorized access in the future. They have shut down and crashed entire systems and networks, denying service to users who depend on automated systems to help meet critical missions. Numerous Defense functions have been adversely affected, including weapons and supercomputer research, logistics, finance, procurement, personnel management, military health, and payroll (From Computer Attacks at Department of Defense Pose Increasing Risks (Chapter Report, 05/22/96, GAO/AIMD-96-84); Chapter 0:3.2, Paragraph 1).
That same report revealed that although more than one quarter of a million attacks occur annually, only 1 in 500 attacks (roughly 500 per year) is actually detected and reported. (Note that these sites are defense oriented and therefore implement more stringent security policies than many commercial sites. Many government sites employ secure operating systems that also feature advanced, proprietary security utilities.) Government agencies, mindful of the public confidence, understandably try to minimize these issues. But some of the incidents are difficult to obscure.
For example, in 1994, crackers gained carte-blanche access to a weapons-research laboratory in Rome, New York. Over a two-day period, the crackers downloaded vital national security information, including wartime- communication protocols.
Such information is extremely sensitive and, if used improperly, could jeopardize the lives of American service personnel.
If crackers with relatively modest equipment can access such information, hostile foreign governments (with ample computing power) could access even more.


SATAN and Other Tools
Today, government sites are cracked with increasing frequency. The authors of the GAO report attribute this largely to the rise of user-friendly security programs (such as SATAN). SATAN is a powerful scanner program that automatically detects security weaknesses in remote hosts. It was released freely on the Net in April, 1995. Its authors, Dan Farmer and Wietse Venema, are legends in Internet security.

Because SATAN is conveniently operated through an HTML browser (such as Netscape Navigator or NCSA Mosaic), a cracker requires less practical knowledge of systems. Instead, he or she simply points, clicks, and waits for an alert that SATAN has found a vulnerable system (at least this is what the GAO report suggests).
Is it true?
No. Rather, the government is making excuses for its own shoddy security. Here is why:
First, SATAN runs only on UNIX platforms. Traditionally, such platforms required
expensive workstation hardware. Workstation hardware of this class is extremely
specialized and isn't sold at the neighborhood Circuit City store. However, those quick to defend the government make the point that free versions of UNIX now exist for the IBM-compatible platform. One such distribution is a popular operating system named Linux.
Linux is a true 32-bit, multi-user, multi-tasking, UNIX-like operating system. It is a powerful computing environment and, when installed on the average PC, grants the user an enormous amount of authority, particularly in the context of the Internet. For example, Linux distributions now come stocked with every manner of server ever created for TCP/IP transport over the Net. Moreover, Linux runs on a wide range of platforms, not just IBM compatibles. Some of those platforms include the Motorola 68k, the Digital Alpha, the Motorola PowerPC, and even the Sun Microsystems SPARC architecture.

Distributions of Linux are freely available for download from the Net, or can be obtained at any local bookstore. CD-ROM distributions are usually bundled with books that instruct users on using Linux. In this way, vendors can make money on an otherwise free operating system. The average Linux book containing a Linux installation CD-ROM sells for forty dollars.
Furthermore, most Linux distributions come with extensive development tools, including a multitude of language compilers and interpreters.
Yet, even given these facts, the average kid with little knowledge of UNIX cannot implement a tool such as SATAN on a Linux platform. Such tools rarely come prebuilt in binary form. The majority are distributed as source code, which may then be compiled with options specific to the current platform. Thus, if you are working in AIX (IBM's proprietary version of UNIX), the program must be compiled for AIX. If working in Ultrix (DEC), it must be compiled for Ultrix, and so on.
NOTE: A port was available for Linux not long after SATAN was released. However, the bugs were never completely eliminated, and installing and running SATAN remained an elusive and frustrating experience for many Linux users.
The process of developing an easily implemented port was slow in coming. Most PC users (without UNIX experience) are hopelessly lost even at the time of the
Linux installation. UNIX conventions are drastically different from those in DOS. Thus, before a new Linux user becomes even moderately proficient, a year of use will likely pass. This year will be spent learning how to use MIT's X Window System, how to configure TCP/IP settings, how to get properly connected to the Internet, and how to unpack software packages that come in basic source-code form.
Even after the year has passed, the user may still not be able to use SATAN. The SATAN distribution doesn't compile well on the Linux platform. For it to work, the user must have installed the very latest version of Perl. Only very recent Linux distributions (those released after 1998-1999) are likely to have such a version installed. Thus, the user must also know how to find, retrieve, unpack, and properly install Perl. In short, the distance between a non-UNIX-literate PC user and one who effectively uses SATAN is very long indeed. Furthermore, during that journey from the former to the latter, the user must have ample time (and a brutal resolve) to learn. This is not the type of journey made by someone who wants to point and click his or her way to super-cracker status. It is a journey undertaken by someone deeply fascinated by operating systems, security, and the Internet in general.
So the government's assertion that SATAN, an excellent tool designed expressly to improve Internet security, has contributed to point-and-click cracking is unfounded. True, SATAN will perform automated scans for a user. Nonetheless, that user must have strong knowledge of Internet security, UNIX, and several programming languages. There are also collateral issues regarding the machine and connection type. For example, even if the user is seasoned, he or she must still have adequate hardware power to use SATAN effectively.
SATAN is not the problem with government sites. Indeed, SATAN is not the only diagnostic tool that can automatically identify security holes in a system. There are dozens of such tools available.
For now, I will simply say this: These tools operate by attacking the available TCP/IP services and ports open and running on remote systems. Whether available to a limited class of users or worldwide, these tools share one common attribute:
They check for known holes. That is, they check for security vulnerabilities that are commonly recognized within the security community. The chief value of such tools is their capability to automate the process of checking one or more machines (hundreds of machines, if the user so wishes). These tools accomplish nothing more than a knowledgeable cracker might by hand. They simply automate the process.
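The core loop that all such scanners automate is simple to sketch. The following is a minimal, hedged example (the function name and the port list are my own; real tools such as SATAN probe the services behind each port for specific known holes, not merely open/closed state):

```python
import socket

def scan(host, ports, timeout=0.5):
    """Return the subset of ports on which a TCP connection succeeds.
    This is the loop every scanner automates: try each service in
    turn, and record which ones answer."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success, an errno value otherwise.
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Well-known service ports a 1990s scanner would probe first:
# FTP, telnet, SMTP, finger, HTTP, portmapper.
print(scan("127.0.0.1", [21, 23, 25, 79, 80, 111]))
```

Wrapping this loop over hundreds of hosts, and attaching a known-vulnerability test to each service that answers, is essentially the automation the GAO report describes.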

Education and Awareness About Security
The problem is not that such tools exist, but that education about security is poor. Moreover, the defense information networks are operating with archaic internal security policies. These policies prevent (rather than promote) security. To demonstrate why, I want to refer to the GAO report I mentioned previously. In it, the government concedes:
...The military services and Defense agencies have issued a number of information security policies, but they are dated, inconsistent and incomplete...
The report points to a series of Defense Directives as examples. It cites (as the most
significant DoD policy document) Defense Directive 5200.28. This document, Security Requirements for Automated Information Systems, is dated March 21, 1988. In order to demonstrate the real problem here, let's examine a portion of that Defense Directive. Paragraph 5 of Section D of that document is written as follows:
Computer security features of commercially produced products and Government-developed or -derived products shall be evaluated (as requested) for designation as trusted computer products for inclusion on the Evaluated Products List (EPL). Evaluated products shall be designated as meeting security criteria maintained by the National Computer Security Center (NCSC) at NSA defined by the security division, class, and feature (e.g., B, B1, access control) described in DoD 5200.28-STD (reference (K)).
It is within the provisions of that paragraph that the government's main problem lies. The Evaluated Products List (EPL) is a list of products that have been evaluated for security ratings, based on DoD guidelines. (The National Security Agency actually oversees the evaluation.) Products on the list can have various levels of security certification. For example, Windows NT version 3.51 has obtained a certification of C2. This is a very limited security certification. The first thing you will notice about this list is that most of the products are old. For example, examine the EPL listing for Trusted Information Systems' Trusted XENIX, a UNIX-based operating system. If you examine the listing closely, you will be astonished. TIS Trusted XENIX is indeed on the EPL. It is therefore endorsed and cleared as a safe system, one that meets the government's guidelines (as of September 1993). However, examine even more closely the platforms on which this product has been cleared. Here are a few:
  • AST 386/25 and Premium 386/33
  • HP Vectra 386
  • NCR PC386sx
  • Zenith Z-386/33
These architectures are ancient. They are so old that no one would actually use them, except perhaps as a garage hacking project on a nice Sunday afternoon (or perhaps if they were legacy systems that housed software or other data that was irreplaceable). In other words, by the time products reach the EPL, they are often pathetically obsolete. (The evaluation process is lengthy and expensive not only for the vendor, but for the American people, who are footing the bill for all this.) Therefore, you can conclude that much of the DoD's equipment, software, and security procedures are likewise obsolete.

Now, add the question of internal education. Are Defense personnel trained in (and
implementing) the latest security techniques? No. Again, quoting the GAO report:
Defense officials generally agreed that user awareness training was needed, but stated that installation commanders do not always understand computer security risk and thus, do not always devote sufficient resources to the problem.

High-Profile Cases
Lack of awareness is pervasive, extending far beyond the confines of a few isolated
Defense sites. It is a problem that affects many federal agencies throughout the country. Evidence of it routinely appears on the front pages of our nation's most popular newspapers. Indeed, some very high-profile government sites were cracked in 1996, including the Central Intelligence Agency (CIA), where on September 18, 1996, a cracker seized control and replaced the welcome banner with one that read The Central Stupidity Agency, and the Department of Justice (DoJ), where on Saturday, August 17, 1996, a photograph of Adolf Hitler was offered as the Attorney General of the United States.
NOTE: skeeve.net was one of many sites that preserved the hacked CIA page, primarily for historical purposes. It is reported that after skeeve.net put the hacked CIA page out for display, its server received hundreds of hits from government sites, including the CIA. Some of these hits involved finger queries and other snooping utilities.
As of this writing (1997), neither case has been solved; most likely, neither will ever be. Both are reportedly being investigated by the FBI.
Typically, government officials characterize such incidents as rare. Just how rare are they? Not very.
In the last year, many such incidents have transpired:
  • During a period spanning from July 1995 to March 1996, a student in Argentina compromised key sites in the United States, including those maintained by the Armed Forces and NASA.
  • In August, 1996, a soldier at Fort Bragg reportedly compromised an "impenetrable" military computer system and widely distributed passwords he obtained.
  • In December, 1996, hackers seized control of a United States Air Force site, replacing the site's defense statistics with pornography. The Pentagon's networked site, DefenseLINK, was shut down for more than 24 hours as a result.
  • The phenomenon was not limited to federal agencies. In October, 1996, the home page of the Florida State Supreme Court was cracked. Prior to its cracking, the page's intended use was to distribute information about the court, including text reproductions of recent court decisions. The crackers removed this information and replaced it with pornography. Ironically, the Court subsequently reported an unusually high rate of hits.
  • In 1996 alone, at least six high-profile government sites were cracked. Two of these (the CIA and FBI) were organizations responsible for maintaining departments for information warfare or computer crime.
    Both are charged with one or more facets of national security. What does all this mean? Is our national security going down the tubes? It depends on how you look at it. In the CIA and FBI cases, the cracking activity was insignificant. Neither server held valuable information, and the only real damage was to the reputation of their owners. However, the Rome, New York case was far more serious (as was the case at Fort Bragg). Such cases demonstrate the potential for disaster.
There is a more frightening aspect to this: The sites mentioned previously were WWW sites, which are highly visible to the public. Therefore, government agencies cannot hide when their home pages have been cracked. But what about when the crack involves some other portion of the targeted system (a portion generally unseen by the public)? It's likely that when such a crack occurs, the press is not involved. As such, there are probably many more government cracks that you will never hear about.

To be fair, the U.S. government is trying to keep up with the times. In January 1997, a reporter for Computerworld magazine broke a major story concerning Pentagon efforts to increase security. Apparently, the Department of Defense is going to establish its own tiger team (a group of individuals whose sole purpose will be to attack DoD computers). Such attacks will reveal key flaws in DoD security.
Other stories indicate that defense agencies have undertaken new and improved
technologies to protect computers holding data vital to national security. However, as reported by Philip Shenon, a prominent technology writer for the New York Times:
While the Pentagon is developing encryption devices that show promise in defeating computer hackers, the accounting office, which is the investigative arm of Congress, warned that none of the proposed technical solutions was foolproof, and that the military's current security program was `dated, inconsistent and incomplete.'
The Pentagon's activity to develop devices that "show promise in defeating computer hackers" appears reassuring. From this, one could reasonably infer that something is being done about the problem. However, the reality and seriousness of the situation is being heavily underplayed.
If Defense and other vital networks cannot defend against domestic attacks from crackers, there is little likelihood that they can defend from hostile foreign powers. I made this point earlier but now I want to expand on it.

Can the United States Protect the National Information Infrastructure?
The United States cannot be matched by any nation for military power. We have
sufficient destructive power at our disposal to eliminate the entire human race. So from a military standpoint, there is no comparison between the United States and even a handful of third-world nations. The same is not true, however, in respect to information warfare.
The introduction of advanced minicomputers has forever changed the balance of power in information warfare. The average Pentium processor now selling at retail computer chains throughout the country is more powerful than many mainframes were five years ago (it is certainly many times faster). Add the porting of high-performance UNIX-based operating systems to the IBM platform, and you have an entirely new environment.
A third-world nation could pose a significant threat to our national information infrastructure.
Using the tools described previously (and some high-speed connections), a third-world nation could effectively wage a successful information warfare campaign against the United States at costs well within their means. In fact, it is likely that within the next few years, we'll experience incidents of bona-fide cyberterrorism.
To prepare for the future, more must be done than simply allocating funds. The federal government must work closely with security organizations and corporate entities to establish new and improved standards. If the new standards do not provide for quicker and more efficient means of implementing security, we will be faced with very dire circumstances.


Who Holds the Cards?
This (not legitimate security tools such as SATAN) is the problem:
  1. Thirty years ago, the U.S. government held all the cards with respect to technology. The average U.S. citizen held next to nothing.
  2. Today, the average American has access to very advanced technology. In some instances, that technology is so advanced that it equals technology currently possessed by the government. Encryption technology is a good example. Many Americans use encryption programs to protect their data from others. Some of these encryption programs (such as the very famous utility PGP, created by Phil Zimmermann) produce military-grade encryption. This level of encryption is sufficiently strong that U.S. intelligence agencies cannot crack it (at least not within a reasonable amount of time, and often, time is of the essence).
    For example, suppose one individual sends a message to another person regarding the date on which they will jointly blow up the United Nations building. Clearly, time is of the essence. If U.S. intelligence officials cannot decipher this message before the date of the event, they might as well have not cracked the message at all.
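The time factor in that example is just key-space arithmetic. A back-of-the-envelope sketch makes it concrete (the trial rate of one billion keys per second is an assumed figure for illustration, not a measured one):

```python
SECONDS_PER_YEAR = 365 * 24 * 3600

def years_to_exhaust(key_bits, trials_per_second):
    """Worst-case years needed to try every key of the given length."""
    return (2 ** key_bits) / trials_per_second / SECONDS_PER_YEAR

# Assumed attacker capability: one billion trials per second.
rate = 1e9
print(f"56-bit key:  {years_to_exhaust(56, rate):.1f} years")   # → 2.3 years
print(f"128-bit key: {years_to_exhaust(128, rate):.1e} years")  # → 1.1e+22 years
```

At that assumed rate, a 56-bit key falls within a few years, while a 128-bit key space is out of reach on any timescale that matters. This is why the operative question for intelligence agencies is not whether a message can be cracked, but whether it can be cracked in time.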
This principle applies directly to Internet security.
  1. Security technology has trickled down to the masses at an astonishing rate.
  2. Crackers (and other talented programmers) have taken this technology and rapidly improved it.
  3. Meanwhile, the government moves along more slowly, tied down by restrictive and archaic policies. This has allowed the private sector to catch up with (and even surpass) the government in some fields of research. This is a matter of national concern.
  4. Many grass-roots radical cracker organizations are enthralled with these circumstances. They often heckle the government, taking pleasure in the advanced knowledge that they possess. These are irresponsible forces in the programming community, forces that carelessly perpetuate the weakening of the national information infrastructure. Such forces should work to assist and enlighten government agencies, but they often do not, and their reasons are sometimes understandable.
  5. The government has, for many years, treated crackers and even hackers as criminals of high order. As such, the government is unwilling to accept whatever valuable information these folks have to offer.
  6. Communication between these opposing forces is almost always negative. Bitter legal disputes have developed over the years.
  7. Indeed, some very legitimate security specialists have lost time, money, and dignity at the hands of the U.S. government. On more than one occasion, the government was entirely mistaken and ruined (or otherwise seriously disrupted) the lives of law-abiding citizens.
  8. Most of these disputes arise out of the government's poor understanding of the technology.
  9. New paths of communication should be opened between the government and those in possession of advanced knowledge. The Internet marginally assists in this process, usually through devices such as mailing lists and Usenet.
  10. However, there is currently no concerted effort to bring these opposing forces together on an official basis. This is unfortunate because it fosters a situation where good minds in America remain pitted against one another.
  11. Before we can effectively defend our national information infrastructure, we must come to terms with this problem. For the moment, we are at war with ourselves.

The Public Sector
I realize that a category such as the public sector might be easily misunderstood. To
prevent that, I want to identify the range of this category. Here, the public sector refers to any entity that is not a government, an institution, or an individual. Thus, I will be examining companies (public and private), Internet service providers, organizations, or any other entity of commercial or semi-commercial character.
Before forging ahead, one point should be made:
Commercial and other public entities do not share the experience enjoyed by government sites. In other words, they have not yet been cracked to pieces.
Only in the past five years (as of 1997) have commercial entities flocked to the Internet. Therefore, some allowances must be made. It is unreasonable to expect these folks to make their sites impenetrable. Many are smaller companies and for a moment, I want to address these folks directly: You, more than any other group, need to acquire sound security advice.

Small companies operate differently from large ones. For the little guy, cost is almost always a strong consideration. When such firms establish an Internet presence, they usually do so either by using in-house technical personnel or by recruiting an Internet guru. In either case, they are probably buying quality programming talent. However, what they are buying in terms of security may vary.
Large companies specializing in security charge a lot of money for their services. Also, most of these specialize in UNIX security. So, small companies seeking to establish an Internet presence may avoid established security firms.
  1. First, the cost is a significant deterrent.
  2. Moreover, many small companies do not use UNIX.
Instead, they may use Novell NetWare, LANtastic, Windows NT, Windows 95, and so forth. This leaves small businesses in a difficult position. They must either pay high costs or take their programmers' word that the network will be secure. Because such small businesses usually do not have personnel who are well educated in security, they are at the mercy of the individual charged with developing the site. That can be a very serious matter.
The problem is that many "consultants" spuriously claim to know all about security. They make these claims when, in fact, they may know little or nothing about the subject. Typically, they have purchased a Web-development package, they generate attractive Web pages, and they know how to set up a server. Perhaps they have a limited background in security, having scratched the surface. They take money from their clients, rationalizing that there is only a very slim chance that their clients' Web servers will get hacked. For most, this works out well. But even if their clients' servers never get hacked, those servers may remain indefinitely in a state of insecurity.

Commercial sites are also more likely to purchase one or two security products and call it a day. They may pay several thousand dollars for an ostensibly secure system and leave it at that, trusting everything to that single product. For these reasons, commercial sites are routinely cracked, and this trend will probably continue. Part of the problem is this: There is no real national standard on security in the private sector. Hence, one most often qualifies as a security specialist through hard
experience and not by virtue of any formal education. It is true that there are many
courses available and even talks given by individuals such as Farmer and Venema. These resources legitimately qualify an individual to do security work. However, there is no single piece of paper that a company can demand that will ensure the quality of the security they are getting.
Because these smaller businesses lack security knowledge, they become victims of
unscrupulous "security specialists." I hope that this trend will change, but I predict that for now, it will only become more prevalent. I say this for one reason: Despite the fact that many thousands of American businesses are now online, this represents a mere fraction of commercial America. There are millions of businesses that have yet to get connected. These millions are all new fish, and security charlatans are lined up waiting to catch them.


The Public Sector Getting Cracked
In the last year, a series of commercial sites have come under attack. These attacks have varied widely in technique. Earlier I defined some of those techniques and
the attending damage or interruption of service they cause. Here, I want to look at cases that more definitively illustrate these techniques. Let's start with the September 1996 attack on Panix.com.


Panix.com
Panix.com (Public Access Networks Corporation) is a large Internet service provider (ISP) that provides Internet access to several hundred thousand New York residents. On September 6, 1996, Panix came under heavy attack from the void. The Panix case was very significant because it demonstrates a technique known as the
Denial of Service (DoS) attack.
This type of attack does not involve an intruder gaining access. Instead, the cracker undertakes remote procedures that render a portion (or sometimes all) of a target inoperable.
The techniques employed in such an attack are simple. As you will learn, connections over the Internet are initiated via a procedure called the three-way handshake.
  1. In this process, the requesting machine sends a packet requesting connection.
  2. The target machine responds with an acknowledgment.
  3. The requesting machine then returns its own acknowledgment and a connection is established.
In a syn_flooder attack, the requesting (cracker's) machine sends a series of connection requests but fails to acknowledge the target's response. Because the target never receives that acknowledgment, it waits. If this process is repeated many times, it renders the target's ports useless because the target is still waiting for the response. These connection requests are dealt with sequentially; eventually, the target will abandon waiting for each such acknowledgment. Nevertheless, if it receives tens or even hundreds of thousands of these requests, the port will remain engaged until it has processed--and discarded--each request.
NOTE: The term syn_flooder is derived from the activity undertaken by such tools. The TCP/IP three-way handshake is initiated when one machine sends another a SYN packet. In a typical flooding attack, a series of these packets are forwarded to a target, purporting to be from an address that is nonexistent. The target machine therefore cannot resolve the host. In any event, by sending a flurry of these SYN packets, one is flooding the target with requests that cannot be fulfilled.
Syn_flooder attacks are common, but do no real damage. They simply deny other users access to the targeted ports temporarily. In the Panix case, though, temporarily was a period lasting more than a week. Syn_flooders are classified in this text as destructive devices. These are typically small programs consisting of two hundred lines of code or fewer. The majority are written in the C programming language, but I know of at least one written in BASIC.
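The handshake and the flooding behavior described above can be modeled in a few lines of Python. This is a toy sketch, not real networking code: the `ToyListener` class, the queue size, and the host names are all invented for illustration, and real TCP stacks add timers, retransmission, and (today) defenses such as SYN cookies.

```python
from collections import deque

class ToyListener:
    """Toy model of a listening host's queue of half-open connections."""

    def __init__(self, backlog=8):
        self.backlog = backlog          # how many half-open slots exist
        self.half_open = deque()        # SYNs awaiting the final ACK
        self.established = []

    def recv_syn(self, src):
        """Step 1: a remote machine requests a connection (SYN)."""
        if len(self.half_open) >= self.backlog:
            return "dropped"            # queue full: service is denied
        self.half_open.append(src)
        return "syn-ack sent"           # step 2: target acknowledges

    def recv_ack(self, src):
        """Step 3: the requester acknowledges; connection established."""
        if src in self.half_open:
            self.half_open.remove(src)
            self.established.append(src)
            return "established"
        return "ignored"

listener = ToyListener(backlog=8)

# A well-behaved client completes all three steps.
listener.recv_syn("good-host")
print(listener.recv_ack("good-host"))           # established

# A flooder sends spoofed SYNs and never answers the SYN-ACKs.
for i in range(8):
    listener.recv_syn(f"spoofed-{i}")

# The queue is now full of half-open connections, so a legitimate
# client cannot even get a SYN-ACK until the stale entries time out.
print(listener.recv_syn("another-good-host"))   # dropped
```

The real Panix attack differed in scale, of course, but the mechanism is the same: each unanswered SYN ties up a slot until the target gives up waiting on it.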


Crack dot Com
ISPs are popular targets for a variety of reasons. One reason is that crackers use such targets as operating environments or a home base from which to launch attacks on other targets. This technique assists in obscuring the identity of the attacker, an issue we will discuss. However, DoS attacks are nothing special. They are the modern equivalent of ringing someone's telephone repeatedly to keep the line perpetually engaged. There are far more serious types of cracks out there. Just ask Crack dot Com, the manufacturers of the now famous computer game Quake.
  1. In January, 1997, crackers raided the Crack dot Com site.
  2. Reportedly, they cracked the Web server and proceeded to chip away at the firewall from that location.
  3. After breaking through the firewall, the crackers gained carte-blanche access to the internal file server.
  4. From that location, they took the source code for both Quake and a new project called Golgotha.
  5. They posted this source code on the Net.
NOTE: For those of you who are not programmers, source code is the programming code of an application in its raw state. This is most often in human-readable form, written in English-like statements. After all testing of the software is complete (and there are no bugs within it), this source code is sent a final time through a compiler. Compilers interpret the source code and from it fashion a binary file that can be executed on one or more platforms. In short, source code can be thought of as the very building blocks of a program. In commercial circles, source code is jealously guarded and aggressively proclaimed as proprietary material. For someone to take that data from a server and post it indiscriminately to the Internet is probably a programmer's worst nightmare.
For Crack dot Com, the event could have far-reaching consequences. For example, it's possible that during the brief period that the code was posted on the Net, its competitors may have obtained copies of (at least some of) the programming routines. In fact, the crackers could have approached those competitors in an effort to profit from their activities. This, however, is highly unlikely.
The crackers' pattern of activity suggests that they were kids. For example, after completing the crack, they paraded their spoils on Internet Relay Chat. They also reportedly left behind a log (a recording of someone's activity while connected to a given machine).
The Crack dot Com case highlights the seriousness of the problem, however.


Kriegsman Furs
Another interesting case is that of Kriegsman Furs of Greensboro, North Carolina. This furrier's Web site was cracked by an animal-rights activist. The cracker left behind a very strong message, which I have reproduced in part:
Today's consumer is completely oblivious to what goes on in order for their product to arrive at the mall for them to buy. It is time that the consumer be aware of what goes on in many of today's big industries. Most importantly, the food industries. For instance, dairy cows are injected with a chemical called BGH that is very harmful to both humans and the cows. This chemical gives the cows bladder infections. This makes the cows bleed and guess what? It goes straight in to your
bowl of cereal. Little does the consumer know, nor care. The same kind of thing goes on behind the back of fur wearers. The chemicals that are used to process and produce the fur are extremely bad for our earth. Not only that, but millions of animals are slaughtered for fur and leather coats. I
did this in order to wake up the blind consumers of today. Know the facts.
Following this message were a series of links to animal-rights organizations and
resources.


Kevin Mitnick
Perhaps the most well-known case of the public sector being cracked, however, is the 1994/1995 escapades of famed computer cracker Kevin Mitnick. Mitnick had been gaining notoriety since his teens, when he reportedly cracked the North American Aerospace Defense Command (NORAD). The timeline of his life is truly amazing, spanning some 15 years of cracking telephone companies, defense sites, ISPs, and corporations. Briefly, some of Mitnick's previous targets include:
  • Pacific Bell, a California telephone company
  • The California Department of Motor Vehicles
  • A Pentagon system
  • The Santa Cruz Operation, a software vendor
  • Digital Equipment Corporation
  • TRW
On December 25, 1994, Mitnick reportedly cracked the computer network of Tsutomu Shimomura, a security specialist at the San Diego Supercomputer Center. What followed was a press fiasco that lasted for months. The case might not have been so significant were it not for three factors:
  • The target was a security specialist who had written special security tools not available to the general public.
  • The method employed in the break-in was extremely sophisticated and caused a stir in security circles.
  • The suspicion was, from the earliest phase of the case, that Mitnick (then a wanted man) was involved in the break-in.
  1. First, Shimomura, though never before particularly famous, was known in security circles. He, more than anyone, should have been secure. The types of tools he was reportedly developing would have been of extreme value to any cracker. Moreover, Shimomura has an excellent grasp of Internet security. When he got caught with his pants down (as it were), it was a shock to many individuals in security. Naturally, it was also a delight to the cracker community. For some time afterward, the cracking community was enthralled by the achievement, particularly because Shimomura had reportedly assisted various federal agencies on security issues. Here, one of the government's best security advisors had been cracked to pieces by a grass-roots outlaw (at least, that was the hype surrounding the case).
  2. Second, the technique used, now referred to as IP spoofing, was complex and not often implemented.
IP spoofing is significant because it relies on an exchange that occurs between two machines at the system level. Normally, when a user attempts to log in to a machine, he or she is issued a login prompt. When the user provides a login ID, a password prompt is given. The user issues his or her password and logs in (or, he or she gives a bad or incorrect password and does not log in). Thus, Internet security breaches have traditionally revolved around getting a valid password, usually by obtaining and cracking the main password file.

IP spoofing differs from this radically. Instead of attempting to interface with the remote machine via the standard procedure of the login/password variety, the IP-spoofing cracker employs a much more sophisticated method that relies in part on trust. Trust is defined and referred to in this text (unless otherwise expressly stated) as the "trust" that occurs between two machines that identify themselves to one another via IP addresses. In IP spoofing, a series of things must be performed before a successful break-in can be accomplished:
• One must determine the trust relationships between machines on the target network.
• One must determine which of those trust relationships can be exploited (that is, which of those machines is running an operating system susceptible to spoofing).
• One must exploit the hole.
(Be mindful that this brief description is bare bones).
In the attack, the target machine trusted the other. Whenever a login occurred between these two machines, it was authenticated through an exchange of numbers. This number exchange followed a challenge/response scenario: one machine would generate a number, to which the other had to answer (also with a number). The key to the attack was to forge the address of the trusted machine and provide the correct responses to the other machine's challenges.
And, reportedly, that is exactly what Mitnick did. In this manner, privileged access is gained without ever passing a single password or login ID over the network. All exchanges happen deep at the system level, a place where humans almost never interact with the operating system.
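The forged-trust exchange can be sketched as a toy simulation. Everything here is invented for illustration (the class, the addresses, and the step size of 64,000, which merely echoes the era's predictable initial-sequence-number increments); a real attack must also keep the impersonated machine from responding, typically by flooding it first.

```python
class TrustingHost:
    """Toy host that trusts a peer by its address and 'authenticates'
    logins with a predictable challenge number (illustrative only)."""

    STEP = 64000    # each new challenge is the last one plus a constant

    def __init__(self, trusted_addr):
        self.trusted_addr = trusted_addr
        self.counter = 1000

    def challenge(self):
        """Issue the number the peer must echo back."""
        self.counter += self.STEP
        return self.counter

    def login(self, claimed_addr, answer):
        """Grant access if the address is trusted and the answer matches."""
        return claimed_addr == self.trusted_addr and answer == self.counter

target = TrustingHost(trusted_addr="10.0.0.5")

# The attacker first probes the target to sample one challenge...
sampled = target.challenge()

# ...then forges the trusted address and predicts the next number,
# without ever seeing the challenge actually sent to the spoofed host.
predicted = sampled + TrustingHost.STEP
target.challenge()                          # the exchange being answered blind
print(target.login("10.0.0.5", predicted))  # True: no password crossed the wire
```

The flaw being exploited is not the exchange itself but its predictability: because each challenge follows mechanically from the last, sampling one is as good as seeing them all.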
Curiously, although this technique has been lauded as new and innovative, it is actually quite antiquated (or at least, the concept is quite antiquated). It stems from a security paper written by Robert T. Morris in 1985 titled A Weakness in the 4.2BSD UNIX TCP/IP Software. In this paper, Morris (then working for AT&T Bell Laboratories) concisely details the ingredients to make such an attack successful. Morris opens the paper with this statement:
The 4.2 Berkeley Software Distribution of the UNIX operating system (4.2BSD for short) features an extensive body of software based on the "TCP/IP" family of protocols. In particular, each 4.2BSD system "trusts" some set of other systems, allowing users logged into trusted systems to
execute commands via a TCP/IP network without supplying a password. These notes describe how the design of TCP/IP and the 4.2BSD implementation allow users on untrusted and possibly very distant hosts to masquerade as users on trusted hosts. Bell Labs has a growing TCP/IP
network connecting machines with varying security needs; perhaps steps should be taken to reduce their vulnerability to each other.
Morris then proceeds to describe such an attack in detail, some ten years before the first widely reported instance of such an attack occurred. One wonders whether Mitnick had seen this paper (or even had it sitting on his desk whilst the deed was being done). In any event, the break-in caused a stir. The following month, the New York Times published an article about the attack. An investigation resulted, and Shimomura was closely involved. Twenty days later, Shimomura and the FBI tracked Mitnick to an apartment in North Carolina, the apparent source of the attack. The case made national news for weeks as the authorities sorted out the evidence they found at Mitnick's abode. Again, America's most celebrated computer outlaw was behind bars. In my view, the case demonstrates an important point, the very same point we started with at the beginning of this section:
As long as they are connected to the Net, anyone can be cracked. Shimomura is a hacker and a good one. He is rumored to own 12 machines running a variety of operating systems. Moreover, Shimomura is a talented telephone phreak (someone skilled in manipulating the technology of the telephone system and cellular devices). In essence, he is a specialist in security. If even he fell victim to an attack of this nature, with all the tools at his disposal, the average business Web site is wide open to assault over the Internet.
In defense of Shimomura: Many individuals in security defend Shimomura. They
earnestly argue that Shimomura had his site configured to bait crackers. Later you will learn that Shimomura was at least marginally involved in implementing this kind of system in conjunction with some folks at Bell Labs. However, this argument in Shimomura's defense is questionable. For example, did he also intend to allow these purportedly inept crackers to seize custom tools he had been developing? If
not, the defensive argument fails. Sensitive files were indeed seized from Shimomura's network. Evidence of these files on the Internet is now sparse. No doubt, Shimomura has taken efforts to hunt them down. Nevertheless, I have personally seen files that Mitnick reportedly seized from many networks, including Netcom. Charles Platt, in his scathing review of Shimomura's book Takedown, offers a little slice of reality:
Kevin Mitnick...at least he shows some irreverence, taunting Shimomura and trying to puncture his pomposity. At one point, Mitnick bundles up all the data he copied from Shimomura's computer and saves it onto the system at Netcom where he knows that Shimomura will find it....Does Shimomura have any trouble maintaining his dignity in the face of these pranks? No trouble at all. He writes: "This was getting personal. ... none of us could believe how childish and inane it all sounded."
It is difficult to understand why Shimomura would allow crackers (coming randomly from the void) to steal his hard work and excellent source code. My opinion (which may be erroneous) is that Shimomura did indeed have his boxes configured to bait crackers; he simply did not count on anyone cutting a hole through that baited box to his internal network. In other words, I believe that Shimomura (who I readily admit is a brilliant individual) got a little too confident. There should have been no relationship of trust between the baited box and any other workstation.


Summary
These cases are all food for thought. In the past 20 or so years, there have been several thousand such cases (of which we are aware).
The military claims that it is attacked more than 250,000 times a year. Estimates suggest that better than half of those attacks succeed in penetrating a system.
It is likely that no site is entirely immune. (If such a site exists, it is likely AT&T Bell Laboratories; it probably knows more about network security than any other single organization on the Internet).


Is Security a Futile Endeavor?


Since Paul Baran first put pen to paper, Internet security has been a concern. Over the years, security by obscurity has become the prevailing attitude of the computing
community.
  • Speak not and all will be well.
  • Hide and perhaps they will not find you.
  • The technology is complex. You are safe.
These principles have not only been proven faulty, but they also go against the original concepts of how security could evolve through discussion and open education. Even at the very birth of the Internet, open discussion on standards and methodology was strongly suggested. It was felt that this open discussion could foster important advances in the technology. Baran was well aware of this and articulated the principle concisely when, in The Paradox of the Secrecy About Secrecy: The Assumption of A Clear Dichotomy Between Classified and Unclassified Subject Matter, he wrote:
Without the freedom to expose the system proposal to widespread scrutiny by clever minds of diverse interests, is to increase the risk that significant points of potential weakness have been overlooked. A frank and open discussion here is to our advantage.

Security Through Obscurity
Security through obscurity has been defined and described in many different ways. One rather whimsical description, authored by a student named Jeff Breidenbach in his lively and engaging paper, Network Security Throughout the Ages, appears here:
The Net had a brilliant strategy called "Security through Obscurity." Don't let anyone fool you into thinking that this was done on purpose. The software has grown into such a tangled mess that nobody really knows how to use it. Befuddled engineers fervently hoped potential meddlers would be just as intimidated by the technical details as they were themselves.
Mr. Breidenbach might well be correct about this. Nevertheless, the standardized
definition and description of security through obscurity can be obtained from any archive of the Jargon File, available at thousands of locations on the Internet. That definition is this:
alt. 'security by obscurity' n. A term applied by hackers to most OS vendors' favorite way of coping with security holes--namely, ignoring them, documenting neither any known holes nor the underlying security algorithms, trusting that nobody will find out about them and that people who do find out about them won't exploit them.
Regardless of which security philosophy you believe, three questions remain constant:
  • Why is the Internet insecure?
  • Does it need to be secure?
  • Can it be secure?

Why Is the Internet Insecure?
The Internet is insecure for a variety of reasons, each of which I will discuss here in
detail. Those factors include
  • Lack of education
  • The Internet's design
  • Proprietarism (yes, another ism)
  • The trickling down of technology
  • Human nature
Each of these factors contributes in some degree to the Internet's current lack of security.


Lack of Education
Do you believe that what you don't know can't hurt you? If you are charged with the responsibility of running an Internet server, you had better not believe it. Education is the single most important aspect of security, and the one that has been most sorely wanting.

I am not suggesting that a lack of education exists within higher institutions of learning or those organizations that perform security-related tasks. Rather, I am suggesting that security education rarely extends beyond those great bastions of computer-security science.

The Computer Emergency Response Team (CERT) is probably the Internet's best-known security organization. CERT generates security advisories and distributes them throughout the Internet community. These advisories address the latest known security vulnerabilities in a wide range of operating systems. CERT thus performs an extremely valuable service to the Internet.
The CERT Coordination Center, established by ARPA in 1988, provides a centralized point for the reporting of and proactive response to all major security incidents. Since 1988, CERT has grown dramatically, and CERT centers have been established at various points across the globe. You can contact CERT at its WWW page. There resides a database of vulnerabilities, various research papers (including extensive documentation on disaster survivability), and links to other important security resources.
CERT's 1995 annual report shows some very enlightening statistics. During 1995, CERT was informed of some 12,000 sites that had experienced some form of network-security violation. Of these, there were at least 732 known break-ins and an equal number of probes or other instances of suspicious activity. This is so, even though the GAO report examined earlier suggested that Defense computers alone are attacked as many as 250,000 times each year, and Dan Farmer's security survey reported that over 60 percent of all critical sites surveyed were vulnerable to some technique of network security breach. How can this be? Why aren't more incidents reported to CERT?

It might be because the better portion of the Internet's servers are now maintained by individuals who have a less-than-adequate security education. Many system administrators have never even heard of CERT. True, there are many security resources available on the Internet (many that point to CERT, in fact), but these may initially appear intimidating and overwhelming to those new to security. Moreover, many of the resources provide links to dated information. An example is RFC 1244, the Site Security Handbook (RFC 1244 is still a good study paper for a user new to security, and it is available at many places on the Internet). At the time RFC 1244 was written (1991), it comprised a collection of state-of-the-art information on security. As expressed in that document's editor's note:
This FYI RFC is a first attempt at providing Internet users guidance on how to deal with security issues in the Internet. As such, this document is necessarily incomplete. There are some clear shortfalls; for example, this document focuses mostly on resources available in the United States. In the spirit of the Internet's `Request for Comments' series of notes, we encourage feedback from users of this handbook. In particular, those who utilize this document to craft their own policies and procedures. This handbook is meant to be a starting place for further research and should be viewed as a useful resource, but not the final authority. Different organizations and jurisdictions will have different resources and rules. Talk to your local organizations, consult an informed lawyer, or consult with local and national law enforcement. These groups can help fill in the gaps that this document cannot hope to cover.
From 1991 until now (1997), the Site Security Handbook has been an excellent place to start. Nevertheless, as Internet technology grows in leaps and bounds, such texts become rapidly outdated. Therefore, the new system administrator must keep up with the security technology that follows each such evolution. To do so is a difficult task.


The Genesis of an Advisory
Advisories comprise the better part of time-based security information. When these come out, they are immediately very useful because they usually relate to an operating system or popular application now widely in use. As time goes on, however, such advisories become less important because people move on to new products. In this process, vendors are constantly updating their systems, eliminating holes along the way. Thus, an advisory is valuable for a set period of time (although, to be fair, this information may stay valuable for extended periods because some people insist on using older software and hardware, often for financial reasons).

An advisory begins with discovery:
  1. Someone, whether hacker, cracker, administrator, or user, discovers a hole. That hole is verified, and the resulting data is forwarded to security organizations, vendors, or other parties deemed suitable. This is the usual genesis of an advisory.
  2. Nevertheless, there is another way that holes are discovered. Often, academic researchers discover a hole. An example, which you will review later, is the series of holes found within the Java programming language (Java is a compiled language, created by efforts at Sun Microsystems, used to build interactive applications for the World Wide Web. It vaguely resembles C++). These holes were primarily revealed--at least at first--by researchers at Princeton University's computer science labs. When such a hole is discovered, it is documented in excruciating detail: researchers often author multipage documents detailing the hole, the reasons for it, and possible remedies. This information gets digested by other sources into an advisory, which is often no more than 100 lines. By the time the average, semi-security-literate user lays his or her hands on this information, it is limited and watered-down. Thus, redundancy of data on the Internet has its limitations. People continually rehash these security documents into different renditions, often highlighting different aspects of the same paper. Such digested revisions are available all over the Net. This helps distribute the information, true, but leaves serious researchers hungry. They must hunt, and that hunt can be a struggle. For example, there is no centralized place to acquire all such papers.
  3. End-user documentation is equally varied. Although there should be, there is no 12-volume set (with papers by Farmer, Venema, Bellovin, Spafford, Morris, Ranum, Klaus, Muffet, and so on) about Internet security that you can acquire at a local library or bookstore. More often, the average bookstore contains brief treatments of the subject (like this text, I suppose). Couple these factors with the mind-set of the average system administrator. A human being only has so much time. Therefore, these individuals absorb what they can on the fly, applying methods learned through whatever sources they encounter.

The Dissemination of Information
For so many reasons, education in security is wanting. In the future, specialists need to address this need in a more practical fashion. There must be some suitable means of networking this information. To be fair, some organizations have attempted to do so, but many are forced to charge high prices for their hard-earned databases.
The National Computer Security Association (NCSA) is one such organization. Its RECON division gathers some 70MB per day of hot and heavy security information. Its database is searchable and is available for a price, but that price is substantial.
Many organizations do offer superb training in security and firewall technology. The price for such training varies, depending on the nature of the course, the individuals giving it, and so on. One good source for training is Lucent Technologies, which offers many courses on security.
NOTE: Resources at the end of this post contain a massive listing of security training resources, as well as general information about where to acquire good security information.
Despite the availability of such training, today's average company is without a clue. In a captivating report (Why Safeguard Information?) from Abo Akademi University in Finland, researcher Thomas Finne estimated that only 15 percent of all Finnish companies had an individual employed expressly for the purpose of information security. The researcher wrote:
The result of our investigation showed that the situation had got even worse; this is very alarming. Pesonen investigated the security in Finnish companies by sending out questionnaires to 453 companies with over 70 employees. The investigation showed that those made responsible for
information security in the companies spent 14.5 percent of their working time on information security. In an investigation performed in the UK over 80 percent of the respondents claimed to have a department or individual responsible for information technology (IT) security.
The Brits made some extraordinary claims! "Of course we have an information security department. Doesn't everyone?" In reality, the percentage of companies that do is likely far less. One survey conducted by the Computer Security Institute found that better than 50 percent of all survey participants didn't even have written security policies and procedures.


The Problems with PC-Based Operating Systems
It should be noted that in America, the increase in servers being maintained by those new to the Internet poses an additional education problem. Many of these individuals have used PC-based systems for the whole of their careers. PC-based operating systems and hardware were never designed for secure operation (although that is all about to change). Traditionally, PC users have had less-than-close contact with their vendors, except on issues relating to hardware and software configuration problems. This is not their fault. The PC community is market based and market driven. Vendors never sold the concept of security; they sold the concepts of user friendliness, convenience, and standardization of applications. In these matters, vendors have excelled. The functionality of some PC-based applications is extraordinary. Nonetheless, programmers are often brilliant in their coding and design of end-user applications but have poor security knowledge. Or, they may have some security knowledge but are unable to implement it because they cannot anticipate certain variables. The unknown variable (call it foo) in this case represents the innumerable differences and subtleties involved with other applications that run on the same machine. These will undoubtedly be designed by different individuals and vendors, unknown to the programmer. It is not unusual for the combination of two third-party products to result in the partial compromise of a system's security. Similarly, applications intended to provide security can, when run on PC platforms, deteriorate or otherwise be rendered less secure.
The typical example is the use of the famous encryption utility Pretty Good Privacy (PGP) when used in the Microsoft Windows environment.

PGP

PGP (for which you can now substitute GPG) operates by applying complex algorithms. These operations result in very high-level encryption. In some cases, if the user so specifies, using PGP can provide military-grade encryption to a home user. The system utilizes the public key/private key pair scenario. In this scenario, each message is encrypted only after the user provides a passphrase, or secret code. The length of this passphrase may vary. Some people use the entire first line of a poem or literary text. Others use lines in a song or other phrases that they will not easily forget. In any event, this passphrase must be kept completely secret. If it is exposed, the encrypted data can be decrypted, altered, or otherwise accessed by unauthorized individuals. In its native state, compiled for MS-DOS, PGP operates in a command-line interface, from a DOS prompt. This in itself presents no security issue. The problem is that many people find this inconvenient and therefore use a front-end, or a Microsoft Windows-based application through which they access the PGP routines. When the user makes use of such a front-end, the passphrase gets written into the Windows swap file. If that swap file is permanent, the passphrase can be retrieved using fairly powerful machines.
I've tried this on several occasions with differently configured machines. With a 20MB swap file on an IBM-compatible DX66 sporting 8-16MB of RAM, this is a formidable task that will likely freeze the machine. This, too, depends on the utility you are using to do the search. Not surprisingly, the most effective utility for performing such a search is GREP.
NOTE: GREP is a utility that comes with many C language packages. It also comes stock on any UNIX distribution. GREP works in a way quite similar to the FIND.EXE command in DOS. Its purpose is to search specified files for a particular string of text. For example, to find the word SEARCH in all files with a *.C extension, you would issue the following command:
grep SEARCH *.C
There are free versions of GREP available on the Internet for a variety of operating systems, including but not limited to UNIX, DOS, OS/2, and 32-bit Microsoft Windows environments.
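The swap-file search can also be illustrated with a grep-like scan written in Python. The file name, the passphrase, and the fabricated "swap file" below are all hypothetical stand-ins; the point is only that a plaintext passphrase left in a large binary file is trivially found by a substring search.

```python
# Hypothetical stand-ins: a made-up passphrase embedded in a fake swap file.
passphrase = b"tyger tyger burning bright"

dump = b"\x00" * 4096 + passphrase + b"\xff" * 4096
with open("swap.dmp", "wb") as f:
    f.write(dump)

def scan(path, needle, chunk_size=65536):
    """Return the byte offset of needle in the file, or -1 if absent.

    Reads in chunks so an arbitrarily large swap file fits in memory,
    overlapping chunks so a match straddling a boundary is not missed.
    """
    offset = 0                      # bytes consumed from the file so far
    tail = b""                      # carry-over from the previous chunk
    with open(path, "rb") as f:
        while True:
            block = f.read(chunk_size)
            if not block:
                return -1
            buf = tail + block
            hit = buf.find(needle)
            if hit != -1:
                return offset - len(tail) + hit
            # Keep the last len(needle)-1 bytes for boundary matches.
            tail = buf[-(len(needle) - 1):] if len(needle) > 1 else b""
            offset += len(block)

print(scan("swap.dmp", passphrase))     # 4096: the passphrase sits in the clear
```

This is essentially what `grep` does when pointed at a swap file, minus the regular-expression machinery; the streaming read is why even a swap file far larger than RAM poses no obstacle on a reasonably fast machine.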

NOTE: The earlier referenced site is the MIT PGP distribution site for U.S. residents. PGP renders sufficiently powerful encryption that certain versions are not available for export. Exporting such versions is a crime.
In any event, the difficulty factor drops drastically when you use a machine with a processor faster than 100MHz and 32MB or more of RAM.
My point is this: It is by no fault of the programmer of PGP that the passphrase gets caught in the swap. PGP is not flawed, nor are those platforms that use swapped memory. Nevertheless, platforms that use swapped memory are not secure and probably never will be.
Thus, even when designing security products, programmers are often faced with
unforeseen problems over which they can exert no control. Techniques of secure programming (methods of programming that enhance security on a given platform) are becoming more popular. These assist the programmer in developing applications that at least won't weaken network security.


The Internet's Design
When engineers were put to the task of creating an open, fluid, and accessible Internet, their enthusiasm and craft were, alas, too potent. The Internet is the most remarkable creation ever erected by humankind in this respect. There are dozens of ways to get a job done on the Internet; there are dozens of protocols with which to do it. Are you having trouble retrieving a file via FTP? Can you retrieve it by electronic mail? What about over HTTP with a browser? Or maybe a Telnet-based BBS? How about Gopher? NFS? SMB? The list goes on.

Heterogeneous networking was once a dream. It is now a confusing, tangled mesh of internets around the globe. Each of the protocols mentioned forms one aspect of the modern Internet. Each also represents a little network of its own. Any machine running modern implementations of TCP/IP can utilize all of them and more.

  • Security experts have for years been running back and forth before a dam of information and protocols, plugging the holes with their fingers. Crackers, meanwhile, come armed with icepicks, testing the dam here, there, and everywhere. Part of the problem is in the Internet's basic design. Traditionally, most services on the Internet rely on the client/server model. The task before a cracker, therefore, is a limited one: Go to the heart of the service and crack that server. I do not see that situation changing in the near future. Today, client/server programming is the most sought-after skill. The client/server model works effectively, and there is no viable replacement at this point.
There are other problems associated with the Internet's design, specifically related to the UNIX platform.
  • One is access control and privileges. In UNIX, every process more or less has some level of privilege on the system. That is, these processes must have, at minimum, privilege to access the files they are to work on and the directories into which those files are deposited. In most cases, common processes and programs are already so configured by default at the time the software ships. Beyond this, however, a system administrator may determine specific privilege schemes, depending on the needs of the situation. The system administrator is offered a wide variety of options in this regard. In short, system administrators are capable of restricting access to one, five, or 100 people. In addition, those people (or groups of people) can also be limited to certain types of access, such as read, write, execute, and so forth.
  • In addition to this system being complex (therefore requiring experience on the part of the administrator), the system also provides for certain inherent security risks. One is that access privileges granted to a process or a user may allow increased access or access beyond what was originally intended to be obtained. For example, a utility that requires any form of root access (highest level of privilege) should be viewed with caution. If someone finds a flaw within that program and can effectively exploit it, that person will gain a high level of access.
    Note that strong access-control features have been integrated into the Windows NT operating system and therefore, the phenomenon is not exclusively related to UNIX. Novell NetWare also offers some very strong access-control features.
    All these factors seriously influence the state of security on the Internet. There are clearly hundreds of little things to know about it.
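The read/write/execute scheme described above is visible directly in a file's mode bits. The following is a minimal Python sketch of how those bits decode into the familiar `ls -l` style string; real systems also carry setuid, setgid, and sticky bits, which this deliberately ignores.

```python
import stat

def describe_mode(mode):
    """Render owner/group/other permission bits ls-style, e.g. 'rwxr-x---'."""
    out = ""
    for r, w, x in ((stat.S_IRUSR, stat.S_IWUSR, stat.S_IXUSR),   # owner
                    (stat.S_IRGRP, stat.S_IWGRP, stat.S_IXGRP),   # group
                    (stat.S_IROTH, stat.S_IWOTH, stat.S_IXOTH)):  # other
        out += "r" if mode & r else "-"
        out += "w" if mode & w else "-"
        out += "x" if mode & x else "-"
    return out

# 0o750: owner may read/write/execute, group may read/execute,
# everyone else gets nothing.
print(describe_mode(0o750))   # rwxr-x---
```

Nine bits per file, three classes of user: the scheme is simple in principle, yet (as noted above) misjudging any one of those bits on the wrong file can hand an attacker the access he needs.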
  • This extends into heterogeneous networking as well.
A good system administrator should ideally have knowledge of at least three
platforms. This brings us to another consideration: Because the Internet's design is so complex, the people who address its security charge substantial prices for their services. Thus, the complexity of the Internet also influences more concrete considerations.
  • There are other aspects of Internet design and composition that authors often cite as sources of insecurity. For example, the Net allows a certain amount of anonymity; this issue has good and bad aspects. The good aspects are that individuals who need to communicate anonymously can do so if need be.

Anonymity on the Net
There are plenty of legitimate reasons for anonymous communication.
  1. One is that people living in totalitarian states can smuggle out news about human rights violations. (At least, this reason is regularly tossed around by media people. It is en vogue to say such things, even though the percentage of people using the Internet for this noble activity is incredibly small.) Nevertheless, there is no need to provide excuses for why anonymity should exist on the Internet. We do not need to justify it. After all, there is no reason why Americans should be forbidden from doing something on a public network that they can lawfully do at any other place. If human beings want to communicate anonymously, that is their right.
  2. Most people use remailers to communicate anonymously. These are servers configured to accept and forward mail messages. During that process, the header and originating address are stripped from the message, thereby concealing its author and his or her location. In their place, the address of the anonymous remailer is inserted. To learn more about anonymous remailers, check out the FAQ here. This FAQ provides many useful links to other sites dealing with anonymous remailers. Anonymous remailers (hereafter anon remailers) have been the subject of controversy in the past. Many people, particularly members of the establishment, feel that anon remailers undermine the security of the Internet. Some portray the situation as being darker than it really is:
    By far the greatest threat to the commercial, economic and political viability of the Global Information Infrastructure will come from information terrorists... The introduction of Anonymous Re-mailers into the Internet has altered the capacity to balance attack and counter-attack, or crime and punishment (Paul A. Strassmann, U.S. Military Academy, West Point; Senior Advisor, SAIC and William Marlow, Senior Vice President, Science Applications International Corporation (SAIC). January 28-30, 1996. Symposium on the Global Information Infrastructure:Information, Policy & International Infrastructure).
I should explain that the preceding document was delivered by individuals associated with the intelligence community. Intelligence community officials would naturally be opposed to anonymity, for it represents one threat to effective, domestic intelligence-gathering procedures. That is a given. Nevertheless, one occasionally sees even journalists making similar statements, such as this one by Walter S. Mossberg:
In many parts of the digital domain, you don't have to use your real name. It's often impossible to figure out the identity of a person making political claims...When these forums operate under the cloak of anonymity, it's no different from printing a newspaper in which the bylines are admittedly
fake, and the letters to the editor are untraceable.
This is an interesting statement. For many years, the U.S. Supreme Court has been
unwilling to require that political statements be accompanied by the identity of the
author. This refusal is to ensure that free speech is not silenced. In early American
history, pamphlets were distributed in this manner. Naturally, if everyone had to sign their name to such documents, potential protesters would be driven into the shadows. This is inconsistent with the concepts on which the country was founded.
To date, there has been no convincing argument for why anon remailers should not exist. Nevertheless, the subject remains engaging. One amusing exchange occurred during a hearing in Pennsylvania on the constitutionality of the Communications Decency Act, an act brought by forces in Congress that were vehemently opposed to pornographic images being placed on the Internet. The hearing occurred on March 22, 1996, before the Honorable Dolores K. Sloviter, Chief Judge, United States Court of Appeals for the Third Circuit. The case was American Civil Liberties Union, et al (plaintiffs) v. Janet Reno, the Attorney General of the United States. The discussion went as follows:
Q: Could you explain for the Court what Anonymous Remailers are?
A: Yes, Anonymous Remailers and their -- and a related service called Pseudonymity Servers are computer services that privatize your identity in cyberspace. They allow individuals to, for example, post content for example to a Usenet News group or to send an E-mail without knowing
the individual's true identity.
The difference between an anonymous remailer and a pseudonymity server is very important because an anonymous remailer provides what we might consider to be true anonymity to the individual because there would be no way to know on separate instances who the person was who
was making the post or sending the e-mail.
But with a pseudonymity server, an individual can have what we consider to be a persistent presence in cyberspace, so you can have a pseudonym attached to your postings or your e-mails, but your true identity is not revealed. And these mechanisms allow people to communicate in cyberspace without revealing their true identities.

Q: I just have one question, Professor Hoffman, on this topic. You have not done any study or survey to sample the quantity or the amount of anonymous remailing on the Internet, correct?
A: That's correct. I think by definition it's a very difficult problem to study because these are people who wish to remain anonymous and the people who provide these services wish to remain anonymous.
Indeed, the court was clearly faced with a catch-22. In any case, whatever one's position might be on anonymous remailers, they appear to be a permanent feature of the Internet. Programmers have developed remailer applications to run on almost any operating system, allowing the little guy to start a remailer with his PC.
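The core mechanic of a remailer, as described earlier (strip the headers that identify the sender and the path the message took, then substitute the remailer's own address), can be sketched with Python's standard email library. The addresses and header list here are hypothetical illustrations, not the behavior of any particular remailer package.

```python
from email.message import EmailMessage

# Headers that identify the sender or the route a message traveled.
# A real remailer's list would be longer; this is illustrative only.
IDENTIFYING = ("From", "Sender", "Reply-To", "Received",
               "Message-ID", "X-Mailer", "Return-Path")

def anonymize(msg, remailer_addr):
    """Strip identifying headers and substitute the remailer's address."""
    for header in IDENTIFYING:
        del msg[header]          # deletes all occurrences; no-op if absent
    msg["From"] = remailer_addr
    return msg

# Hypothetical message:
m = EmailMessage()
m["From"] = "alice@example.org"
m["To"] = "bob@example.net"
m["Subject"] = "hello"
m["Received"] = "from alice-pc (10.0.0.5)"
m.set_content("message body")

anonymize(m, "nobody@remailer.example")
```

A pseudonymity server would differ only in that it maps each user to a persistent alias rather than to the single shared remailer address.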
If you have more interest in anon remailers, visit this site, which contains extensive information on these programs, as well as links to personal anon remailing packages and other software tools for use in implementing an anonymous remailer.
In the end, e-mail anonymity on the Internet has a negligible effect on real issues of
Internet security. The days when one could exploit a hole by sending a simple e-mail
message are long gone. Those making protracted arguments against anonymous e-mail are either nosy or outraged that someone can implement a procedure that they cannot. If e-mail anonymity is an issue at all, it is for those in national security. I readily admit that spies could benefit from anonymous remailers. In most other cases, however, the argument expends good energy that could be better spent elsewhere.


Proprietarism
Yes, another ism. Before I start ranting, I want to define this term as it applies here.
Proprietarism is a practice undertaken by commercial vendors in which they attempt to inject into the Internet various forms of proprietary design. By doing so, they hope to create profits in an environment that has previously been free from commercial reign. It is the computer-age equivalent of colonialism joined with capitalism, transplanted onto the Internet. It interferes with the Internet's security structure and defeats the Internet's capability to serve all individuals equally and effectively.


ActiveX
A good example of proprietarism in action is Microsoft Corporation's ActiveX technology.
Those users unfamiliar with ActiveX technology should visit here. Users who already have some experience with ActiveX should go directly to the Microsoft page that addresses the security features.
To understand the impact of ActiveX, a brief look at HTML would be instructive. HTML was an incredible breakthrough in Internet technology. Imagine the excitement of the researchers when they first tested it! It was (and still is) a protocol by which any user, on any machine, anywhere in the world could view a document and that document, to any other user similarly (or not similarly) situated, would look pretty much the same. What an extraordinary breakthrough. It would release us forever from proprietary designs. Whether you used a Mac, an Alpha, an Amiga, a SPARC, an IBM compatible, or a tire hub (TRS-80, maybe?), you were in. You could see all the wonderful information available on the Net, just like the next guy. Not any more.

ActiveX technology is a new method of presenting Web pages. It is designed to interface with Microsoft's Internet Explorer. If you don't have it, forget it. Most WWW pages designed with it will be nonfunctional for you either in whole or in part. That situation may change, because Microsoft is pushing for ActiveX extensions to be included within the HTML standardization process. Nevertheless, such extensions (including scripting languages or even compiled languages) do alter the state of Internet security in a wide and encompassing way.
  1. First, they introduce new and untried technologies.
  2. Second, those technologies are proprietary in nature. Because they are proprietary, they cannot be closely examined by the security community.
  3. Moreover, they are not cross-platform and therefore impose limitations on the Net, as opposed to heterogeneous solutions. To examine the problem firsthand, you may want to visit a page established by Kathleen A. Jackson, Team Leader, Division Security Office, Computing, Information, and Communications Division at the Los Alamos National Laboratory. Jackson points to key problems in ActiveX. On her WWW page, she writes:
    ...The second big problem with ActiveX is security. A program that downloads can do anything the programmer wants. It can reformat your hard drive or shut down your computer...
This issue is more extensively covered in a paper delivered by Simson Garfinkel at HotWired. When Microsoft was alerted to the problem, the solution was to recruit a company that creates digital signatures for ActiveX controls. This digital signature is supposed to be signed by the control's programmer or creator. The company responsible for this digital-signature scheme has every software publisher sign a software publisher's pledge, an agreement not to sign any software that contains malicious code. If a user surfs to a page that contains an unsigned control, Microsoft's Internet Explorer puts up a warning message box that asks whether you want to accept the unsigned control.
Find the paper delivered by Simson Garfinkel at HotWired here.
You cannot imagine how absurd this seems to security professionals.
  1. What is to prevent a software publisher from submitting malicious code, signed or unsigned, on any given Web site?
  2. If it is signed, does that guarantee that the control is safe? The Internet at large is therefore resigned to take the software author or publisher at his or her word. This is impractical and unrealistic.
  3. And, although Microsoft and the company responsible for the signing initiative will readily offer assurances, what evidence is there that such signatures cannot be forged?
  4. More importantly, how many small-time programmers will bother to sign their controls?
  5. And lastly, how many users will refuse to accept an unsigned control?
  6. Most users confronted with the warning box have no idea what it means. All it represents to them is an obstruction that is preventing them from getting to a cool Web page.
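Point 2 in the list above is worth making concrete: a signature or digest check proves, at best, that code was not altered after it was published; it proves nothing about what the code does. The sketch below uses a bare SHA-256 digest in place of a full certificate-based signature scheme (a simplification; Microsoft's actual scheme involves certificates and a signing authority), with hypothetical control contents.

```python
import hashlib

def digest_matches(control_bytes, published_hex):
    """True if the control's SHA-256 digest matches the published value."""
    return hashlib.sha256(control_bytes).hexdigest() == published_hex

# The publisher computes and publishes a digest of the control...
control = b"malicious or benign -- the digest cannot tell you which"
published = hashlib.sha256(control).hexdigest()

# The check confirms integrity (the bytes weren't tampered with)...
assert digest_matches(control, published)
# ...and detects tampering...
assert not digest_matches(b"tampered control", published)
# ...but a malicious control signed by its own author passes just as
# cleanly as a benign one.  Integrity is not safety.
```

This is exactly the gap the security community objected to: the pledge not to sign malicious code is enforced by trust, not by the mathematics.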
There are now all manner of proprietary programs inhabiting the Internet. Few have been truly tested for security. I understand that this will become more prevalent and, to Microsoft's credit, ActiveX technology creates some of the most stunning WWW pages available on the Net. These pages have increased functionality, including drop-down boxes, menus, and other features that make surfing the Web a pleasure. Nevertheless, serious security studies need to be made before these technologies foster an entirely new frontier for those peddling malicious code, viruses, and code designed to circumvent security.
To learn more about the HTML standardization process, visit the site of the World Wide Web Consortium. If you already know a bit about the subject but want specifics about what types of HTML tags and extensions are supported, you should read W3C's activity statement on this issue. One interesting area of development is W3C's work on support for the disabled.
Proprietarism is a dangerous force on the Internet, and it's gaining ground quickly. To compound this problem, some of the proprietary products are excellent. It is therefore perfectly natural for users to gravitate toward these applications. Users are most concerned with functionality, not security. Therefore, the onus is on vendors, and this is a problem. If vendors ignore security hazards, there is nothing anyone can do. One cannot, for example, forbid insecure products from being sold on the market. That would be an unreasonable restraint of interstate commerce and ground for an antitrust claim. Vendors certainly have every right to release whatever software they like, secure or not. At present, therefore, there is no solution to this problem. The extensions and scripting languages being fielded by Netscape and Microsoft probably warrant the closest examination.
These languages are the weapons of the war between these two giants. I doubt that either company objectively realizes that there's a need for both technologies. For example, Netscape cannot shake Microsoft's hold on the desktop market. Equally, Microsoft cannot supply the UNIX world with products. The Internet would probably benefit greatly if these two titans buried the hatchet in something besides each other.


The Trickling Down of Technology
As discussed earlier, there is the problem of high-level technology trickling down from military, scientific, and security sources. Today, the average cracker has tools at his or her disposal that most security organizations use in their work. Moreover, the machines on which crackers use these tools are extremely powerful, therefore allowing faster and more efficient cracking.

Government agencies often supply links to advanced security tools. At these sites, the tools are often free. They number in the hundreds and encompass nearly every aspect of security. In addition to these tools, government and university sites also provide very technical information regarding security. For crackers who know how to mine such information, these resources are invaluable.
The level of technical information at such sites is high. This is in contrast to many fringe sites that provide information of little practical value to the cracker. But not all fringe sites are so benign. Crackers have become organized, and they maintain a wide variety of servers on the Internet. These are typically established using free operating systems such as Linux or FreeBSD. Many such sites end up establishing a permanent wire to the Net. Others are more unreliable and may appear at different times via dynamic IP addresses. I should make it clear that not all fringe sites are cracking sites. Many are legitimate hacking stops that provide information freely to the Internet community as a service of sorts. In either case, both hackers and crackers have been known to create excellent Web sites with voluminous security information.

The majority of cracking and hacking sites are geared toward UNIX and IBM-compatible platforms. There is a noticeable absence of quality information for Macintosh users. In any event, in-depth security information is available on the Internet for any interested party to view. So, the information is trafficked. There is no solution to this problem, and there shouldn't be. It would be unfair to halt the education of many earnest, responsible individuals for the malicious acts of a few. So advanced security information and tools will remain available.


Human Nature
We have arrived at the final (and probably most influential) force at work in weakening Internet security: human nature. Humans are, by nature, a lazy breed. To most users, the subject of Internet security is boring and tedious. They assume that the security of the Internet will be taken care of by experts. To some degree, there is truth to this. If the average user's machine or network is compromised, who should care? They are the only ones who can suffer (as long as they are not connected to a network other than their own). The problem is, most will be connected to some other network. The Internet is one enterprise that truly relies on the strength of its weakest link. I have seen crackers work feverishly on a single machine when that machine was not their ultimate objective. Perhaps the machine had some trust relationship with another machine that was their ultimate objective. To crack a given region of cyberspace, crackers may often have to take alternate or unusual routes. If one workstation on the network is vulnerable, they are all potentially vulnerable as long as a relationship of trust exists.

Also, you must think in terms of the smaller businesses because these will be the great majority. These businesses may not be able to withstand disaster in the same way that larger firms can. If you run a small business,
  1. when was the last time you performed a complete backup of all information on all your drives?
  2. Do you have a disaster-recovery plan? Many companies do not. This is an important point. I often get calls from companies that are about to establish permanent connectivity. Most of them are unprepared for emergencies.
Moreover, there are still two final aspects of human nature that influence the evolution of security on the Internet.
  1. Fear is one. Most companies are fearful to communicate with outsiders regarding security. For example, the majority of companies will not tell anyone if their security has been breached. When a Web site is cracked, it is front-page news; this cannot be avoided. When a system is cracked in some other way (with a different point of entry), press coverage (or any exposure) can usually be avoided. So, a company may simply move on, denying any incident, and secure its network as best it can. This deprives the security community of much-needed statistics and data.
  2. The last human factor here is curiosity. Curiosity is a powerful facet of human nature that even the youngest child can understand. One of the most satisfying human experiences is discovery. Investigation and discovery are the things that life is really made of. We learn from the moment we are born until the moment we die, and along that road, every shred of information is useful. Crackers are not so hard to understand. It comes down to basics: Why is this door locked? Can I open it? As long as this aspect of human experience remains, the Internet may never be entirely secure. Oh, it will ultimately be secure enough for credit-card transactions and the like, but someone will always be there to crack it.

Does the Internet Really Need to Be Secure?
Yes. The Internet does need to be secure and not simply for reasons of national security. Today, it is a matter of personal security. As more financial institutions gravitate to the Internet, America's financial future will depend on security. Many users may not be aware of the number of financial institutions that offer online banking. One year ago, this was a relatively uncommon phenomenon. Nevertheless, by mid-1996, financial institutions across the country were offering such services to their customers. The threat from lax security is more than just a financial one. Banking records are extremely personal and contain revealing information. Until the Internet is secure, this information is available to anyone with the technical prowess to crack a bank's online service. It hasn't happened yet (I assume), but it will.

Also, the Internet needs to be secure so that it does not degenerate into one avenue of domestic spying. Some law-enforcement organizations are already using Usenet spiders to narrow down the identities of militia members, militants, and other political undesirables. The statements made by such people on Usenet are archived away, you can be sure. This type of logging activity is not unlawful. There is no constitutional protection against it, any more than there is a constitutional right for someone to demand privacy when they scribble on a bathroom wall. Private e-mail is a different matter, though. Law enforcement agents need a warrant to tap someone's Internet connection. To circumvent these procedures (which could become widespread), all users should at least be aware of the encryption products available, both free and commercial. For all these reasons, the Internet must become secure.


Can the Internet Be Secure?
Yes. The Internet can be secure. But in order for that to happen, some serious changes must be made, including the heightening of public awareness of the problem. Most users still regard the Internet as a toy, an entertainment device that is good for a couple of hours on a rainy Sunday afternoon. That needs to change in the coming years. The Internet is likely the single most important advance of the century. Within a few years, it will be a powerful force in the lives of most Americans. So that this force may be overwhelmingly positive, Americans need to be properly informed. Members of the media have certainly helped the situation, even though media coverage of the Internet isn't always accurate. I have seen the rise of technology columns in newspapers throughout the country. Good technology writers are out there, trying to bring the important information home to their readers. I suspect that in the future, more newspapers will develop their own sections for Internet news, similar to the sections allocated for sports, local news, and human interest. Equally, many users are security-aware, and that number is growing each day. As public education increases, vendors will meet the demand of their clientele.


A Brief Primer on TCP/IP


We examine the Transmission Control Protocol (TCP) and the Internet Protocol (IP). The final portion of this chapter explores key TCP/IP utilities with which every user should become familiar. These utilities are valuable for the maintenance and monitoring of any TCP/IP network.
Note that this chapter is not an exhaustive treatment of TCP/IP. It provides only the minimum knowledge needed to continue reading this text.

What Is TCP/IP?
TCP/IP refers to two network protocols (or methods of data transport) used on the
Internet. They are Transmission Control Protocol and Internet Protocol, respectively. These network protocols belong to a larger collection of protocols, or a protocol suite. These are collectively referred to as the TCP/IP suite/stack.

Protocols within the TCP/IP suite work together to provide data transport on the Internet. In other words, these protocols provide nearly all services available to today's Net surfer. Some of those services include:
  • Transmission of electronic mail
  • File transfers
  • Usenet news delivery
  • Access to the World Wide Web
There are two classes of protocol within the TCP/IP suite, and I will address both in the sections that follow. Those two classes are
  • The network-level protocol
  • The application-level protocol

Network-Level Protocols
Network-level protocols manage the discrete mechanics of data transfer. These protocols are typically invisible to the user and operate deep beneath the surface of the system. For example, the IP protocol provides packet delivery of the information sent between the user and remote machines. It does this based on a variety of information, most notably the IP address of the two machines. Based on this and other information, IP attempts to route the information to its intended destination. (IP itself offers only best-effort delivery; guaranteed, ordered delivery is the job of TCP, layered above it.) Throughout this process, IP interacts with other network-level protocols engaged in data transport. Short of using network utilities (perhaps a sniffer or other device that reads IP datagrams), the user will never see IP's work on the system.
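The datagram delivery just described is easiest to see in the IP header itself. Below is a minimal Python sketch that packs and then unpacks the fixed 20-byte IPv4 header with the standard struct module; the addresses and field values are hypothetical, and real headers may carry options beyond these 20 bytes.

```python
import struct
import socket

def parse_ipv4_header(raw):
    """Unpack the fixed 20-byte IPv4 header into a dict of fields."""
    (ver_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "version": ver_ihl >> 4,
        "header_len": (ver_ihl & 0x0F) * 4,   # in bytes
        "total_len": total_len,
        "ttl": ttl,
        "protocol": proto,                    # 6 = TCP, 17 = UDP
        "src": socket.inet_ntoa(src),
        "dst": socket.inet_ntoa(dst),
    }

# Build a hypothetical header: version 4, 20-byte header, TTL 64,
# protocol TCP, from 10.0.0.1 to 10.0.0.2.
hdr = struct.pack("!BBHHHBBH4s4s", (4 << 4) | 5, 0, 40, 1, 0, 64, 6, 0,
                  socket.inet_aton("10.0.0.1"), socket.inet_aton("10.0.0.2"))
fields = parse_ipv4_header(hdr)
```

Every field the sniffer mentioned above would display, from the source address to the time-to-live, lives in those 20 bytes.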


Application-Level Protocols
Conversely, application-level protocols are visible to the user in some measure. For
example, File Transfer Protocol (FTP) is visible to the user. The user requests a
connection to another machine to transfer a file, the connection is established, and the transfer begins. During the transfer, a portion of the exchange between the user's machine and the remote machine is visible (primarily error messages and status reports on the transfer itself, for example, how many bytes of the file have been transferred at any given moment).

For the moment, this explanation will suffice: TCP/IP refers to a collection of protocols that facilitate communication between machines over the Internet (or other networks running TCP/IP).
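The client/server pattern underlying all these protocols can be sketched in a few lines of socket code: a server binds and listens, a client connects and sends a request, the server answers. This toy exchange runs entirely on localhost and stands in for the FTP, HTTP, and mail exchanges described above.

```python
import socket
import threading

def serve_once(sock):
    """Accept one connection and echo the request back, uppercased."""
    conn, _ = sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data.upper())

# Server side: bind to an ephemeral localhost port and listen.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=serve_once, args=(server,)).start()

# Client side: connect, send a request, read the reply.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"hello")
reply = client.recv(1024)     # the server's answer: b"HELLO"
client.close()
server.close()
```

As noted earlier, this concentration of function in the server is also what makes the model attractive to a cracker: compromise the server and you compromise the service for every client.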


The History of TCP/IP
In 1969, the Defense Advanced Research Projects Agency (DARPA) commissioned
development of a network over which its research centers might communicate. Its chief concern was this network's capability to withstand a nuclear attack. In short, if the Soviet Union launched a nuclear attack, it was imperative that the network remain intact to facilitate communication. The design of this network had several other requisites, the most important of which was this: It had to operate independently of any centralized control. Thus, if 1 machine was destroyed (or 10, or 100), the network would remain impervious.
  1. The prototype for this system emerged quickly, based in part on research done in 1962 and 1963. That prototype was called ARPANET. ARPANET reportedly worked well, but was subject to periodic system crashes. Furthermore, long-term expansion of that network proved costly.
  2. A search was initiated for a more reliable set of protocols; that search ended in the mid-1970s with the development of TCP/IP. TCP/IP had significant advantages over other protocols. For example, TCP/IP was lightweight (it required meager network resources). Moreover, TCP/IP could be implemented at much lower cost than the other choices then available.
  3. Based on these amenities, TCP/IP became exceedingly popular. In 1983, TCP/IP was integrated into release 4.2 of Berkeley Software Distribution (BSD) UNIX. Its integration into commercial forms of UNIX soon followed, and TCP/IP was established as the Internet standard. It remains so to this day.
  4. As more users flock to the Internet, however, TCP/IP is being reexamined. More users translates to greater network load. To ease that network load and offer greater speeds of data transport, some researchers have suggested implementing TCP/IP via satellite transmission. Unfortunately, such research has thus far produced dismal results. TCP/IP is apparently unsuitable for this implementation.
Today, TCP/IP is used for many purposes, not just the Internet. For example, intranets are often built using TCP/IP. In such environments, TCP/IP can offer significant advantages over other networking protocols. One such advantage is that TCP/IP works on a wide variety of hardware and operating systems. Thus, one can quickly and easily create a heterogeneous network using TCP/IP. Such a network might include Macs, IBM compatibles, Sun SPARCstations, MIPS machines, and so on. Each of these can communicate with its peers using a common protocol suite. For this reason, TCP/IP has remained extremely popular since it was first introduced in the 1970s. Now it's time to discuss the implementation of TCP/IP on various platforms.


What Platforms Support TCP/IP?
Most platforms support TCP/IP. However, the quality of that support can vary. Today, most mainstream operating systems have native TCP/IP support (that is, TCP/IP support built into the standard operating system distribution). However, older operating systems on some platforms lack such native support. The table below describes TCP/IP support for various platforms. If a platform has native TCP/IP support, it is labeled as such. If not, the name of a third-party TCP/IP application is provided.

Platform: TCP/IP Support
  • UNIX: Native
  • DOS: Piper/IP by Ipswitch
  • Windows: TCPMAN by Trumpet Software
  • Windows 95: Native
  • Windows NT: Native
  • Macintosh: MacTCP or OpenTransport (System 7.5+)
  • OS/2: Native
  • AS/400 OS/400: Native
Platforms that do not natively support TCP/IP can still implement it through the use of proprietary or third-party TCP/IP programs. In these instances, third-party products can offer varied functionality. Some offer very good support and others offer marginal support.

For example, some third-party products provide the user with only basic TCP/IP. For most users, this is sufficient. (They simply want to connect to the Net, get their mail, and enjoy easy networking.) In contrast, certain third-party TCP/IP implementations are comprehensive. These may allow manipulation of compression, methods of transport, and other features common to the typical UNIX TCP/IP implementation. Widespread third-party support for TCP/IP has been around for only a few years. Several years ago, for example, TCP/IP support for DOS boxes was very slim.
TIP: There is actually a wonderful product called Minuet that can be used in conjunction with a packet driver on LANs. Minuet derived its name from the term Minnesota Internet Users Essential Tool. Minuet offers quick and efficient access to the Net through a DOS-based environment. This product is still available free of charge at many locations.
One interesting point about non-native, third-party TCP/IP implementations is this: Most of them do not provide servers within their distributions. Thus, although a user can connect to remote machines to transfer a file, the user's machine cannot accept such a request. For example, a Windows 3.11 user using TCPMAN cannot--without installing additional software--accept a file-transfer request from a remote machine.


How Does TCP/IP Work?
TCP/IP operates through the use of a protocol stack. This stack is the sum total of all protocols necessary to complete a single transfer of data between two machines. (It is also the path that data takes to get out of one machine and into another.) The stack is broken into layers; five are of concern here: the application, transport, network, data-link, and physical layers.

After data has passed through the process it travels to its destination on another machine or network. There, the process is executed in reverse (the data first meets the physical layer and subsequently travels its way up the stack). Throughout this process, a complex system of error checking is employed both on the originating and destination machine.

Each layer of the stack can send data to and receive data from its adjoining layer. Each layer is also associated with multiple protocols. At each tier of the stack, these protocols are hard at work, providing the user with various services.
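The encapsulation process described above can be sketched in a few lines of Python. This is purely an illustrative toy, not a real network stack; the layer names match the five layers discussed here, but the bracketed "headers" are invented for demonstration only.

```python
# A toy illustration of protocol-stack encapsulation: each layer wraps
# the data from the layer above with its own header, and the receiving
# machine reverses the process, stripping headers from the bottom up.

LAYERS = ["application", "transport", "network", "data-link", "physical"]

def send(data: str) -> str:
    """Wrap the payload in one (toy) header per layer, top to bottom."""
    for layer in LAYERS:
        data = f"[{layer}]{data}"
    return data

def receive(frame: str) -> str:
    """Strip the headers in reverse order, bottom to top."""
    for layer in reversed(LAYERS):
        prefix = f"[{layer}]"
        assert frame.startswith(prefix), f"malformed frame at {layer} layer"
        frame = frame[len(prefix):]
    return frame

wire = send("hello")
print(wire)           # headers nested, outermost (physical) first
print(receive(wire))  # original payload recovered intact
```

Note that the outermost header on the wire belongs to the lowest layer, which is why the receiving side meets the physical layer first.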


The Individual Protocols
You have examined how data is transmitted via TCP/IP using the protocol stack. Now I want to zoom in to identify the key protocols that operate within that stack. I will begin with network-level protocols.


Network-Level Protocols
Network protocols are those protocols that engage in (or facilitate) the transport process transparently. These are invisible to the user unless that user employs utilities to monitor system processes.
TIP: Sniffers are devices that can monitor such processes. A sniffer is a device--either hardware or software--that can read every packet sent across a network. Sniffers are commonly used to isolate network problems that, while invisible to the user, are degrading network performance. As such, sniffers can read all activity occurring between network-level protocols. Moreover, as you might guess, sniffers can pose a tremendous security threat.
Important network-level protocols include:
  • The Address Resolution Protocol (ARP)
  • The Internet Control Message Protocol (ICMP)
  • The Internet Protocol (IP)
  • The Transmission Control Protocol (TCP)
I will briefly examine each, offering only an overview.
For more comprehensive information about protocols (or the stack in general), I highly recommend Teach Yourself TCP/IP in 14 Days by Timothy Parker, Ph.D (Sams Publishing).

The Address Resolution Protocol
The Address Resolution Protocol (ARP) serves the critical purpose of mapping Internet addresses into physical addresses. This is vital in routing information across the Internet.
  1. Before a message (or other data) is sent, it is packaged into IP packets, or blocks of information suitably formatted for Internet transport. These contain the numeric Internet (IP) address of both the originating and destination machines.
  2. Before this package can leave the originating computer, however, the hardware address of the recipient (destination) must be discovered. (Hardware addresses differ from Internet addresses.) This is where ARP makes its debut. An ARP request message is broadcast on the subnet.
  3. This request is received by the machine that owns the address in question (or by a router answering on its behalf), which replies with the requested hardware address.
  4. This reply is caught by the originating machine and the transfer process can begin.
  5. ARP's design includes a cache.
    To understand the ARP cache concept, consider this: Most modern HTML browsers (such as Netscape Navigator or Microsoft's Internet Explorer) utilize a cache. This cache is a portion of the disk (or memory) in which elements from often-visited Web pages are stored (such as buttons, headers, and common graphics). This is logical because when you return to those pages, these tidbits don't have to be reloaded from the remote machine. They will load much more quickly if they are in your local cache.
    Similarly, ARP implementations include a cache. In this manner, hardware addresses of remote machines or networks are remembered, and this memory obviates the need to conduct subsequent ARP queries on them. This saves time and network resources. Can you guess what type of security risks might be involved in maintaining such an ARP cache? At this stage, it is not particularly important. However, address caching (not only in ARP but in all instances) does indeed pose a unique security risk. If such address-location entries are stored, it makes it easier for a cracker to forge a connection from a remote machine, claiming to hail from one of the cached addresses.
    Readers seeking in-depth information on ARP should see RFC 826. Another good reference for information on ARP is Margaret K. Johnson's piece about details of TCP/IP (excerpts from Microsoft LAN Manager TCP/IP Protocol).
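The caching behavior just described can be modeled in a few lines of Python. This is a simulation only: the IP and hardware addresses below are invented for illustration, and the dictionary lookup stands in for the broadcast that a real ARP implementation performs on the physical network.

```python
# A toy ARP cache: resolve IP addresses to hardware (MAC) addresses,
# "broadcasting" only on a cache miss and remembering the answer afterward.

# Invented example addresses -- not real hosts.
SUBNET = {
    "192.168.1.10": "00:1a:2b:3c:4d:5e",
    "192.168.1.11": "00:1a:2b:3c:4d:5f",
}

class ArpCache:
    def __init__(self):
        self.cache = {}
        self.broadcasts = 0  # count how often we had to ask the wire

    def resolve(self, ip: str) -> str:
        if ip in self.cache:      # cache hit: no network traffic at all
            return self.cache[ip]
        self.broadcasts += 1      # cache miss: simulate an ARP broadcast
        hw = SUBNET[ip]           # in reality, the owning host replies
        self.cache[ip] = hw       # remember it for next time
        return hw

arp = ArpCache()
arp.resolve("192.168.1.10")  # miss: one broadcast goes out
arp.resolve("192.168.1.10")  # hit: answered from the cache
print(arp.broadcasts)        # -> 1
```

The security risk mentioned above falls out of this design directly: whatever poisons the cache dictionary controls where subsequent traffic is sent, with no further query to the network.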

The Internet Control Message Protocol
The Internet Control Message Protocol handles error and control messages that are
passed between two (or more) computers or hosts during the transfer process. It allows those hosts to share that information. In this respect, ICMP is critical for diagnosis of network problems. Examples of diagnostic information gathered through ICMP include
  • When a host is down
  • When a gateway is congested or inoperable
  • Other failures on a network
TIP: Perhaps the most widely known ICMP implementation involves a network utility called ping. Ping is often used to determine whether a remote machine is alive. Ping's method of operation is simple: When the user pings a remote machine, packets are forwarded from the user's machine to the remote host. These packets are then echoed back to the user's machine. If no echoed packets are received at the user's end, the ping program usually generates an error message indicating that the remote host is down. I urge those readers seeking in-depth information about ICMP to examine RFC 792 .
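For the curious, an ICMP echo-request datagram (the packet ping sends) can be assembled by hand. The sketch below only builds the bytes, following the layout in RFC 792; actually transmitting them would require a raw socket and, on most systems, root privileges. The identifier, sequence number, and payload are arbitrary values chosen for illustration.

```python
# Building an ICMP echo-request ("ping") packet by hand, per RFC 792.
import struct

def icmp_checksum(data: bytes) -> int:
    """RFC 1071 Internet checksum: one's-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack(f"!{len(data)//2}H", data))
    total = (total & 0xFFFF) + (total >> 16)  # fold the carries back in
    total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def echo_request(ident: int, seq: int, payload: bytes = b"ping") -> bytes:
    # Type 8 (echo request), code 0; checksum field is 0 while computing.
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)
    csum = icmp_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload

pkt = echo_request(ident=1, seq=1)
# A receiver validates the packet by checksumming the whole thing;
# with a correct checksum in place, the result is always zero.
print(icmp_checksum(pkt))  # -> 0
```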

The Internet Protocol
IP belongs to the network layer. The Internet Protocol provides packet delivery for all protocols within the TCP/IP suite. Thus, IP is the heart of the incredible process by which data traverses the Internet.

An IP datagram is composed of several parts.
  1. The first part, the header, is composed of miscellaneous information, including the originating and destination IP addresses. Together, these elements form a complete header.
  2. The remaining portion of a datagram contains whatever data is being sent. The amazing thing about IP is this: If IP datagrams encounter networks that require smaller packages, the datagrams are broken apart to accommodate the recipient network. Thus, these datagrams can fragment during a journey and later be reassembled properly (even if they do not arrive in the same sequence in which they were sent) at their destination.
  3. Even further information is contained within an IP datagram, including identification of the protocol being used, a header checksum, and a time-to-live (TTL) specification. The TTL is a numeric value that is constantly decremented while the datagram travels the void; when that value finally reaches zero, the datagram dies. Many types of packets have time-to-live limitations, and some network utilities (such as traceroute) use the time-to-live field as a marker in diagnostic routines.
In closing, IP's function can be reduced to this: providing packet delivery over the
Internet. As you can see, that packet delivery is complex in its implementation.
I refer readers seeking in-depth information on the Internet Protocol to RFC 760 (since superseded by RFC 791).
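The header fields just described can be pulled out of a raw datagram with Python's struct module. The field layout follows RFC 791; the sample header bytes below are fabricated for illustration (the addresses 10.0.0.1 and 10.0.0.2 are not real hosts, and the checksum is left at zero for simplicity).

```python
# Dissecting the fixed 20-byte portion of an IPv4 header, per RFC 791.
import struct

def parse_ipv4_header(raw: bytes) -> dict:
    (ver_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "version": ver_ihl >> 4,             # high nibble: IP version
        "header_len": (ver_ihl & 0xF) * 4,   # low nibble: length in 32-bit words
        "total_len": total_len,
        "ttl": ttl,                          # time-to-live, decremented per hop
        "protocol": proto,                   # 1=ICMP, 6=TCP, 17=UDP
        "src": ".".join(str(b) for b in src),
        "dst": ".".join(str(b) for b in dst),
    }

# A hand-made sample: version 4, IHL 5, TTL 64, protocol 6 (TCP),
# from 10.0.0.1 to 10.0.0.2.
sample = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 1, 0,
                     64, 6, 0, bytes([10, 0, 0, 1]), bytes([10, 0, 0, 2]))
hdr = parse_ipv4_header(sample)
print(hdr["version"], hdr["ttl"], hdr["src"])  # -> 4 64 10.0.0.1
```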

The Transmission Control Protocol
The Transmission Control Protocol is the chief protocol employed on the Internet. It
facilitates such mission-critical tasks as file transfers and remote sessions. TCP
accomplishes these tasks through a method called reliable data transfer. In this respect, TCP differs from the other protocols within the suite, many of which offer only unreliable delivery: with unreliable delivery, you have no guarantee that the data will arrive in a perfect state. In contrast, TCP provides what is sometimes referred to as reliable stream delivery, which ensures that the data arrives in the same sequence and state in which it was sent.

The TCP system relies on a virtual circuit that is established between the requesting machine and its target. This circuit is opened via a three-part process, often referred to as the three-way handshake.

After the circuit is open, data can simultaneously travel in both directions. This results in what is sometimes called a full-duplex transmission path. Full-duplex transmission allows data to travel to both machines at the same time. In this way, while a file transfer (or other remote session) is underway, any errors that arise can be forwarded to the requesting machine.

TCP also provides extensive error-checking capabilities. For each block of data sent, a numeric value is generated. The two machines identify each transferred block using this numeric value. For each block successfully transferred, the receiving host sends a message to the sender that the transfer was clean. Conversely, if the transfer is unsuccessful, two things may occur:
  • The requesting machine receives error information
  • The requesting machine receives nothing
When an error is received, the data is retransmitted unless the error is fatal, in which case the transmission is usually halted.
A typical example of a fatal error is a dropped connection; in that case, the transfer is halted and no further packets are sent.
Similarly, if no confirmation is received within a specified time period, the information is also retransmitted. This process is repeated as many times as necessary to complete the transfer or remote session.
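The acknowledge-or-retransmit cycle described above can be modeled in miniature. This toy Python sketch is not TCP; it merely illustrates the retransmission logic, with an invented "channel" function standing in for the network (returning True when a clean acknowledgment comes back).

```python
# A toy model of TCP-style reliable delivery: each numbered block is
# retransmitted until the receiver acknowledges it, up to a retry limit.

def deliver(blocks, channel, max_tries=5):
    """Send each block in order, retransmitting on a missing ACK."""
    received = []
    for seq, block in enumerate(blocks):
        for attempt in range(max_tries):
            if channel(seq, attempt):   # True = ACK came back clean
                received.append(block)
                break
        else:                           # retries exhausted: fatal error
            raise ConnectionError(f"block {seq}: no ACK, transfer halted")
    return received

# A channel that loses the first attempt at every block.
flaky = lambda seq, attempt: attempt >= 1

print(deliver(["alpha", "beta"], flaky))  # -> ['alpha', 'beta']
```

Real TCP does this with sequence numbers, timers, and sliding windows rather than a fixed retry count, but the essential logic (retransmit until acknowledged, abort on fatal failure) is the same.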

You have examined how the data is transported when a connect request is made. It is now time to examine what happens when that request reaches its destination. Each time one machine requests a connection to another, it specifies a particular destination. In the general sense, this destination is expressed as the Internet (IP) address and the hardware address of the target machine. However, even more detailed than this, the requesting machine specifies the application it is trying to reach at the destination. This involves two elements:
  • A program called inetd
  • A system based on ports

inetd: The Mother of All Daemons
Before you explore the inetd program, I want to briefly define daemons. This will help you more easily understand the inetd program.
Daemons are programs that continuously listen for other processes (in this case, the process listened for is a connection request). Daemons loosely resemble terminate-and-stay-resident (TSR) programs on the Microsoft platform. These programs remain alive at all times, constantly listening for a particular event. When that event finally occurs, the TSR undertakes some action.
inetd is a very special daemon. It has been called many things, including the super-server or granddaddy of all processes. This is because inetd is the main daemon running on a UNIX machine. It is also an ingenious tool.
Common sense tells you that running a dozen or more daemon processes could eat up machine resources. So rather than do that, why not create one daemon that could listen for all the others? That is what inetd does.
It listens for connection requests from the void. When it receives such a request, it evaluates it. This evaluation seeks to determine one thing only: What service does the requesting machine want? For example, does it want FTP? If so, inetd starts the FTP server process. The FTP server can then process the request from the void. At that point, a file transfer can begin. This all happens within the space of a second or so.
TIP: inetd isn't just for UNIX anymore. For example, Hummingbird Communications has developed (as part of its Exceed 5 product line) a version of inetd for use on any platform that runs Microsoft Windows or OS/2. There are also non-commercial versions of inetd, written by students and other software enthusiasts. One such distribution is available from TFS software.
In general, inetd is started at boot time and remains resident (in a listening state) until the machine is turned off or until the root operator expressly terminates that process. The behavior of inetd is generally controlled from a file called inetd.conf, located in the /etc directory on most UNIX platforms. The inetd.conf file is used to specify what services will be called by inetd. Such services might include FTP, Telnet, SMTP, TFTP, Finger, Systat, Netstat, or any other processes that you specify.
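A typical inetd.conf entry looks something like the following. The paths and flags vary by platform; these lines are representative samples, not taken from any particular system.

```
# service  socket  proto  wait/nowait  user    server-program        arguments
ftp        stream  tcp    nowait       root    /usr/sbin/in.ftpd     in.ftpd -l
telnet     stream  tcp    nowait       root    /usr/sbin/in.telnetd  in.telnetd
finger     stream  tcp    nowait       nobody  /usr/sbin/in.fingerd  in.fingerd
```

Commenting out a line and restarting inetd disables that service entirely, which is why editing this file is one of the first hardening steps a system administrator takes.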


The Ports
Many TCP/IP programs can be initiated over the Internet. Most of these are client/server oriented. As each connection request is received, inetd starts a server program, which then communicates with the requesting client machine. To facilitate this process, each application (FTP or Telnet, for example) is assigned a unique address. This address is called a port. The application in question is bound to that
particular port and, when any connection request is made to that port, the corresponding application is launched (inetd is the program that launches it).
There are thousands of ports on the average Internet server. For purposes of convenience and efficiency, a standard framework has been developed for port assignment. (In other words, although a system administrator can bind services to the ports of his or her choice, services are generally bound to recognized ports. These are commonly referred to as well-known ports.) I will examine some of the applications that are application-level protocols or services (that is, those that are visible to the user, who can interact with them at the console).
For a comprehensive list of all port assignments, consult the IANA registry of assigned port numbers. It is extremely informative and exhaustive in its treatment of commonly assigned port numbers.
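A few of the well-known assignments relevant to this chapter can be captured in a small lookup table. The mapping below restates the standard assignments (the same numbers a UNIX host keeps in /etc/services); the helper function is, of course, just an illustration.

```python
# Well-known TCP port assignments for the services discussed in this chapter.

WELL_KNOWN_PORTS = {
    "ftp": 21,
    "telnet": 23,
    "smtp": 25,
    "gopher": 70,
    "http": 80,
    "nntp": 119,
}

def port_for(service: str) -> int:
    """Return the conventional TCP port for a service name."""
    return WELL_KNOWN_PORTS[service.lower()]

print(port_for("telnet"), port_for("HTTP"))  # -> 23 80
```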

Telnet
Telnet is best described in RFC 854, the Telnet protocol specification: The purpose of the Telnet protocol is to provide a fairly general, bi-directional, eight-bit byte-oriented communications facility. Its primary goal is to allow a standard method of interfacing terminal devices and terminal-oriented processes to each other. Telnet not only allows the user to log in to a remote host, it allows that user to execute
commands on that host. Thus, an individual in Los Angeles can Telnet to a machine in New York and begin running programs on the New York machine just as though the user were actually in New York.

For those of you who are unfamiliar with Telnet, it operates much like the interface of a bulletin board system (BBS). Telnet is an excellent application for providing a terminal-based front end to databases. For example, better than 80 percent of all university library catalogs can be accessed via Telnet.

Even though GUI applications have taken the world by storm, Telnet--which is essentially a text-based application--is still incredibly popular. There are many reasons for this:
  1. Telnet allows you to perform a variety of functions (retrieving mail, for example) at a minimal cost in network resources.
  2. Implementing secure Telnet is a pretty simple task. There are several programs to implement this, the most popular of which is Secure Shell (SSH).
To use Telnet, the user issues whatever command is necessary to start his or her Telnet client, followed by the name (or numeric IP address) of the target host. In UNIX, this is done as follows:

#telnet internic.net

This command launches a Telnet session, contacts internic.net, and requests a
connection. That connection will either be honored or denied, depending on the configuration at the target host. In UNIX, the Telnet command has long been a native one. That is, Telnet has been included with basic UNIX distributions for well over a decade. However, not all operating systems have a native Telnet client. Table 6.3 shows Telnet clients for various operating systems.


File Transfer Protocol
File Transfer Protocol is the standard method of transferring files from one system to another. Its purpose is set forth in RFC 0765 as follows: The objectives of FTP are:
  1. to promote sharing of files (computer programs and/or data),
  2. to encourage indirect or implicit (via programs) use of remote computers,
  3. to shield a user from variations in file storage systems among Hosts, and
  4. to transfer data reliably and efficiently.
FTP, though usable directly by a user at a terminal, is designed mainly for use by programs. For over two decades, researchers have investigated a wide variety of file-transfer methods. The development of FTP has undergone many changes in that time. Its first definition occurred in April 1971, and the full specification can be read in RFC 114 (but a more practical document might be RFC 959).


Mechanical Operation of FTP
File transfers using FTP can be accomplished using any suitable FTP client. Table 6.4 lists some common clients, by operating system.


How Does FTP Work?
FTP file transfers occur in a client/server environment. The requesting machine starts one of the clients named in Table 6.4. This generates a request that is forwarded to the targeted file server (usually a host on another network). Typically, the request arrives at port 21 on the target, where it is received by inetd. For a connection to be established, the targeted file server must be running an FTP server or FTP daemon.
FTPD
FTPD is the standard FTP server daemon. Its function is simple: to reply to connect requests received by inetd and to satisfy those requests for file transfers. This daemon comes standard on most distributions of UNIX (for other operating systems, see Table 6.5).

FTPD waits for a connection request. When such a request is received, FTPD requests the user login. The user must either provide his or her valid user login and password or log in anonymously.
Once logged in, the user may download files. In certain instances and if security on the server allows, the user may also upload files.
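At each of these steps, the server answers with a three-digit reply code followed by text, as defined in RFC 959. The first digit classifies the reply, so a client can branch on it without parsing the human-readable text. The sketch below illustrates this; the sample reply lines are representative, not captured from any real server.

```python
# Classifying FTP server replies by their three-digit code (RFC 959).

CLASSES = {
    "1": "preliminary",     # action started, expect another reply
    "2": "completion",      # action succeeded
    "3": "intermediate",    # more input needed (e.g. password after login)
    "4": "transient error", # try again later
    "5": "permanent error", # do not retry
}

def parse_reply(line: str):
    """Split an FTP reply into (code, class, text)."""
    code, _, text = line.partition(" ")
    return int(code), CLASSES[code[0]], text

print(parse_reply("220 FTP server ready."))
print(parse_reply("331 Password required."))
print(parse_reply("550 No such file."))
```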


Simple Mail Transfer Protocol
The objective of the Simple Mail Transfer Protocol is stated concisely in RFC 821:
The objective of Simple Mail Transfer Protocol (SMTP) is to transfer mail reliably and efficiently. SMTP is an extremely lightweight and efficient protocol. The user (utilizing any SMTP-compliant client) sends a request to an SMTP server. A two-way connection is subsequently established. The client forwards a MAIL instruction, indicating that it wants to send mail to a recipient somewhere on the Internet. If the SMTP server allows this operation, an affirmative acknowledgment is sent back to the client machine. At that point, the session begins. The client may then forward the recipient's identity, his or her IP address, and the message (in text) to be sent.
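On the wire, the exchange just described looks roughly like this (client lines marked C:, server lines S:; the hostnames and addresses are invented, while the commands and reply codes follow RFC 821):

```
S: 220 mail.example.com Simple Mail Transfer Service ready
C: HELO client.example.org
S: 250 mail.example.com
C: MAIL FROM:<alice@example.org>
S: 250 OK
C: RCPT TO:<bob@example.com>
S: 250 OK
C: DATA
S: 354 Start mail input; end with <CRLF>.<CRLF>
C: (message text)
C: .
S: 250 OK
C: QUIT
S: 221 mail.example.com closing connection
```

Because the whole dialogue is plain text, anyone can conduct it by hand over a Telnet connection to port 25, a fact that figures prominently in mail-related security problems.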
Despite the simple character of SMTP, mail service has been the source of countless
security holes. (This may be due in part to the number of options involved.
Misconfiguration is a common reason for holes.) I will discuss it later here.

SMTP servers are native in UNIX. Most other networked operating systems now have some form of SMTP, so I'll refrain from listing them here.


Gopher
The Gopher service is a distributed document-retrieval system. It was originally
implemented as the Campus Wide Information System at the University of Minnesota. It is defined in a March 1993 FYI from the University of Minnesota as follows: The Internet Gopher protocol is designed primarily to act as a distributed document-delivery system. While documents (and services) reside on many servers, Gopher client software presents users with a hierarchy of items and directories much like a file system. In fact, the Gopher interface is designed to resemble a file system since a file system is a good model for locating documents and services.
The complete documentation on the Gopher protocol can be obtained
in RFC 1436 .
The Gopher service is very powerful. It can serve text documents, sounds, and other media. It also operates largely in text mode and is therefore much faster than HTTP through a browser. Undoubtedly, the most popular Gopher client is for UNIX. (Gopher2_3 is especially popular, followed by Xgopher.) However, many operating systems have Gopher clients. See Table 6.6 for a few.

Typically, the user launches a Gopher client and contacts a given Gopher server. In turn, the Gopher server forwards a menu of choices. These may include search menus, pre-set destinations, or file directories.

Note that the Gopher model is completely client/server based. The user never logs on per se. Rather, the client sends a message to the Gopher server, requesting all documents (or objects) currently available. The Gopher server responds with this information and does nothing else until the user requests an object.


Hypertext Transfer Protocol
Hypertext Transfer Protocol is perhaps the most renowned protocol of all because it is this protocol that allows users to surf the Net. Stated briefly in RFC 1945, HTTP is:
...an application-level protocol with the lightness and speed necessary for distributed, collaborative, hypermedia information systems. It is a generic, stateless, object-oriented protocol which can be used for many tasks, such as name servers and distributed object management systems, through extension of its request methods (commands). A feature of HTTP is the typing of data representation, allowing systems to be built independently of the data being transferred.
NOTE: RFC 1945 has been superseded by RFC 2068, which is a more recent
specification of HTTP.
HTTP has forever changed the nature of the Internet, primarily by bringing the Internet to the masses. In some ways, its operation is much like Gopher. For example, it too works via a request/response scenario. And this is an important point:
Whereas applications such as Telnet require that a user remain logged on (and while they are logged on, they consume system resources), protocols such as Gopher and HTTP eliminate this phenomenon. Thus, the user is pushed back a few paces. The user (client) only consumes system resources for the instant that he or she is either requesting or receiving data.
Using a common browser like Netscape Navigator or Microsoft Internet Explorer, you can monitor this process as it occurs. For each data element (text, graphic, sound) on a WWW page, your browser will contact the server one time. Thus, it will first grab text, then a graphic, then a sound file, and so on. In the lower-left corner of your browser's screen is a status bar. Watch it for a few moments when it is loading a page. You will see this request/response activity occur, often at a very high speed. HTTP doesn't particularly care what type of data is requested. Various forms of multimedia can be either embedded within or served remotely via HTML-based WWW pages. In short, HTTP is an extremely lightweight and effective protocol. Clients for this protocol are enumerated in Table 6.7.
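Each of those request/response exchanges is itself a short plain-text dialogue. Per RFC 1945, an HTTP/1.0 transaction looks roughly like this (the path and header values are invented for illustration):

```
C: GET /index.html HTTP/1.0
C:
S: HTTP/1.0 200 OK
S: Content-Type: text/html
S: Content-Length: 1024
S:
S: (document body follows)
```

After the body is delivered, the connection closes; the browser opens a fresh connection for the next element on the page, which is exactly the flurry of activity visible in the status bar.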

Until recently, UNIX alone supported an HTTP server. (The standard was NCSA
HTTPD. Apache has now entered the race, giving HTTPD strong competition in the
market.) The application is extremely small and compact. Like most of its counterparts, it runs as a daemon. Its typically assigned port is 80. Today, there are HTTP servers for nearly every operating system. Table 6.8 lists those servers.


Network News Transfer Protocol
The Network News Transfer Protocol is one of the most widely used protocols. It
provides modern access to the news service commonly known as USENET news. Its
purpose is defined in RFC 977 (You may also wish to obtain RFC 850 for examination of earlier implementations of the standard):
NNTP specifies a protocol for the distribution, inquiry, retrieval, and posting of news articles using a reliable stream-based transmission of news among the ARPA-Internet community. NNTP is designed so that news articles are stored in a central database allowing a subscriber to select only those items he wishes to read. Indexing, cross-referencing, and expiration of aged messages are also provided.
NNTP shares characteristics with both Simple Mail Transfer Protocol and TCP.
Similarities to SMTP consist of NNTP's acceptance of plain-English commands from a prompt. It is similar to TCP in that stream-based transport and delivery is used. NNTP typically runs from Port 119 on any UNIX system.
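Those plain-English commands make an NNTP session easy to follow. A typical exchange looks roughly like this (the server name, group statistics, and article numbers are invented; the commands and reply codes follow RFC 977):

```
S: 200 news.example.com NNTP service ready, posting allowed
C: GROUP comp.security.misc
S: 211 1234 3000 4233 comp.security.misc
C: ARTICLE 4233
S: 220 4233 <id@example.com> article retrieved, head and body follow
S: (headers and article text)
S: .
C: QUIT
S: 205 closing connection
```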


TCP/IP Is the Internet
By now, it should be apparent that TCP/IP basically comprises the Internet itself. It is a complex collection of protocols, many of which remain invisible to the user. On most Internet servers, at a minimum the following protocols exist:
  • Transmission Control Protocol
  • Internet Protocol
  • Internet Control Message Protocol
  • Address Resolution Protocol
  • File Transfer Protocol
  • The Telnet protocol
  • The Gopher protocol
  • Network News Transfer Protocol
  • Simple Mail Transfer Protocol
  • Hypertext Transfer Protocol
Now, prepare yourself for a shock. These are only a handful of the protocols run on the
Internet. There are actually hundreds of them. Better than half of the primary protocols have had one or more security holes. In essence, the point I would like to make is this: The Internet was designed as a system with multiple avenues of communication. Each protocol is one such avenue. As such, there are hundreds of ways to move data across the Net. Until recently, utilizing these protocols called for accessing them one at a time. That is, to end a Gopher session and start a Telnet session, the user had to physically terminate the Gopher connection.

The HTTP browser changed all that and granted the average user much greater power and functionality. Indeed, FTP, Telnet, NNTP, and HTTP are all available at the click of a button.


Internet Warfare


The Internet is an amazing resource. As you sit before your monitor, long after your
neighbors are warm and cozy in their beds, I want you to think about this: Beyond that screen lies 4,000 years of accumulated knowledge. At any time, you can reach out into the void and bring that knowledge home. There is something almost metaphysical about this. It's as though you can fuse yourself to the hearts and minds of humanity, read its innermost inspirations, its triumphs, its failures, its collective contributions to us all. With the average search engine, you can even do this incisively, weeding out the noise of things you deem nonessential. For this reason, the Internet will ultimately revolutionize education. I'm not referring to
home study or classes that save time by virtue of teaching 1,000 students simultaneously. Although these are all useful techniques of instruction that will undoubtedly streamline many tasks for teachers and students alike, I am referring to something quite different.

Today, many people have forgotten what the term education really means. Think back to your days at school. In every life there is one memorable teacher: one person who took a subject (history, for example) and with his or her words, brought that subject to life in an electrifying display. Through whatever means necessary, that person transcended the identity of instructor and entered the realm of the educator. There is a difference: One provides the basic information needed to effectively pass the course; the other inspires. The Internet can serve as a surrogate educator, and users can now inspire themselves.

So this much is true: The Internet is an incredible resource for information. However, it is also an incredible resource for communication and basic human networking. Networking from a human standpoint is different from computer networking; human networking contains an added ingredient called action. Thus, individuals from all over the world are organizing (or, I should say, crystallizing) into groups with shared interests. Inherent within this process is the exchange of opinions, or more aptly put, ideology. Ideology of any sort is bound to bring controversy, and controversy brings disagreement. Whether that disagreement occurs between two nations or between two individuals is irrelevant. When it occurs on the Internet, it often degenerates into warfare.

Much like the term information warfare, the term Internet warfare is often misunderstood. To understand Internet warfare, you must know that there are different classifications of it. Let's start with those classifications. From there, we can discuss warfare at its most advanced levels. The classifications are:
  • Personal Internet warfare
  • Public Internet warfare
  • Corporate Internet warfare
  • Government Internet warfare
More generally, Internet warfare is activity in which one or more participants utilize tools over the Internet to attack another or the information of another. The objective of the attack may be to damage information, hardware, or software, or to deny service. Internet warfare also involves any defensive action taken to repel such an attack. Such warfare may be engaged in by anyone, including individuals, the general public, corporations, or governments. Between these groups, the level of technology varies (by technology, I am referring to all aspects of the tools required, including high-speed connections, software, hardware, and so forth). In general, the level of technology follows an upward path.
NOTE: The categories Public and Individual may seem confusing. Why are they not included together? The reason is this: A portion of the public fails to meet the requirements for either corporate forces or individuals. This portion is composed of middle-level businesses, ISPs, universities, and so on. These groups generally have more technologically advanced tools than individuals, and they conduct warfare in a different manner.
As you might guess, there are fundamental reasons for the difference between these
groups and the tools that they employ. These reasons revolve around economic and
organizational realities. The level of technology increases depending upon certain risks and demands regarding security.

Naturally, government and corporate entities are going to have more financial resources to acquire tools. These tools will be extremely advanced, created by vendors who specialize in high-performance, security-oriented applications. Such applications are generally more reliable than average tools, having been tested repeatedly under a variety of conditions. Except in extreme cases (those where the government is developing methods of destructive data warfare for use against foreign powers), nearly all of these tools will be defensive in character.

Public organizations tend to use less powerful tools. These tools are often shareware or freeware, which is freely available on the Internet. Much of this software is designed by graduate students in computer science. Other sources include companies that also sell commercial products, but are giving the Internet community a little taste of the quality of software available for sale. (Many companies claim to provide these tools out of the goodness of their hearts. Perhaps. In any event, provide them they do, and that is sufficient.) Again, nearly all of these tools are defensive in character.

Private individuals use whatever they come across. This may entail shareware or freeware, programs they use at work, or those that have been popularly reviewed at sites of public interest.


The Private Individual
The private individual doesn't usually encounter warfare (at least, not the average user). When one does, it generally breaks down to combat with another user. This type of warfare can be anticipated and, therefore, avoided. When a debate on the Net becomes heated, you may wish to disengage before warfare erupts. Although it has been said a thousand times, I will say it again: Arguments appear and work differently on the Internet than in person.
  • E-mail or Usenet news messages are delivered in their entirety, without being interrupted by points made from other individuals. That is, you have ample time to write your response. Because you have that time, you might deliver a more scathing reply than you would in person.
  • Moreover, people say the most outrageous things when hiding behind a computer, things they would never utter in public. Always consider these matters.
That settled, I want to examine a few tools of warfare between individuals.


The E-Mail Bomb
The e-mail bomb is a simple and effective harassment tool. A bomb attack consists of nothing more than sending the same message to a targeted recipient over and over again. It is a not-so-subtle form of harassment that floods an individual's mailbox with junk. Depending upon the target, a bomb attack could be totally unnoticeable or a major problem.
  • Some people pay for their mail service (for example, after exceeding a certain number of messages per month, they must pay for additional e-mail service). To these individuals, an e-mail bomb could be costly. Other individuals maintain their own mail server at their house or office.
  • Technically, if they lack storage, one could flood their mailbox and therefore prevent other messages from getting through. This would effectively result in a denial-of-service attack. (A denial-of-service attack is one that degrades or otherwise denies computer service to others.)
  • In general, however, a bomb attack (which is, by the way, an irresponsible and childish act) is simply annoying. Various utilities available on the Internet will implement such an attack.
One of the most popular utilities for use on the Microsoft Windows platform is Mail
Bomber. It is distributed in a file called bomb02.zip and is available at many cracker sites across the Internet. The utility is configured via a single screen of fields into which the user enters relevant information, including target, mail server, and so on. The utility works via Telnet. It contacts port 25 of the specified server and generates the mail bomb. Utilities like this are commonplace for nearly every platform. Some are for use anywhere on any system that supports SMTP servers. Others are more specialized, and may only work on systems like America Online. One such utility is Doomsday, which is designed for mass mailings over AOL but is most commonly used as an e-mail bomber. The entire application operates from a single screen interface.
NOTE: For several years, the key utility for AOL users was AOHELL, which included in its later releases a mail-bomb generator. AOHELL started as a utility used to unlawfully access America Online. This, coupled with other utilities such as credit-card number generators, allowed users to create free accounts using fictitious names. These accounts typically expired within two to three weeks.
On the UNIX platform, mail bombing is inanely simple; it can be accomplished with just a few lines. However, one wonders why someone skilled in UNIX would even entertain the idea. Nevertheless, some do; their work typically looks something like this:

#!/bin/perl
$mailprog = '/usr/lib/sendmail';
$recipient = 'victim@targeted_site.com';
$variable_initialized_to_0 = 0;
while ($variable_initialized_to_0 < 1000) {
    open (MAIL, "|$mailprog $recipient") || die "Can't open $mailprog!\n";
    print MAIL "You Suck!";
    close(MAIL);
    sleep 3;
    $variable_initialized_to_0++;
}

The above code is fairly self-explanatory. It initializes a variable to 0, then specifies that as long as that variable is less than 1000, mail should be sent to the targeted recipient. Each pass through the while loop increments the variable called $variable_initialized_to_0. In short, the mail message is sent 1,000 times (the counter runs from 0 through 999).

Mail bombing is fairly simple to defend against: Simply place the mailer's identity in a kill or bozo file. This alerts your mail package that you do not want to receive mail from that person. Users on platforms other than UNIX may need to consult their mail applications; most of them include this capability.
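For users on other platforms, the kill-file idea can be roughed out in a few lines. The following Python sketch is illustrative only; the addresses and kill-file contents are hypothetical, and a real mail package would apply this check automatically as messages arrive:

```python
from email import message_from_string
from email.utils import parseaddr

# Hypothetical kill file: senders whose mail we silently discard.
KILL_FILE = {"mailbomber@example.com", "pest@example.net"}

def should_discard(raw_message):
    """Return True if the message's From address appears in the kill file."""
    msg = message_from_string(raw_message)
    sender = parseaddr(msg.get("From", ""))[1].lower()
    return sender in KILL_FILE

raw = "From: MailBomber <MAILBOMBER@example.com>\nSubject: hi\n\nYou Suck!"
print(should_discard(raw))  # -> True
```

A server-side filter such as procmail applies the same test before delivery, so bombed messages never reach the inbox at all.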

UNIX users can find a variety of sources online. I also recommend a publication that covers construction of intelligent kill file mechanisms:
Teach Yourself the UNIX Shell in 14 Days by David Ennis and James Armstrong Jr. (Sams Publishing).
Chapter 12 of that book contains an excellent script for this purpose. If you are a new user, that chapter (and in fact, the whole book) will serve you well. (Moreover, users who are new to UNIX but have recently been charged with occasionally using a UNIX system will find the book very informative.)
Oh yes. For those of you who are seriously considering wholesale e-mail bombings as a recreational exercise, you had better do it from a cracked mail server, that is, a machine running sendmail that the cracker currently controls. If not, you may spend some time behind bars. One individual bombed Monmouth University in New Jersey so aggressively that the mail server temporarily died. This resulted in an FBI investigation, and the young man was arrested. He is reportedly facing several years in prison. I hope that you refrain from this activity. Because e-mail bombing is so incredibly simple, even crackers cast their eyes down in embarrassment and disappointment when a comrade implements such an attack.

List Linking
List linking is becoming increasingly common. The technique yields the same basic
results as an e-mail bomb, but it is accomplished differently. List linking involves
enrolling the target in dozens (sometimes hundreds) of e-mail lists.
E-mail lists (referred to simply as lists) are distributed e-mail message systems. They work as follows:
  1. On the server that provides the list service, an e-mail address is established. This e-mail address is really a pointer to an executable program.
  2. This program is a script or binary file that maintains a database (usually flat file) of e-mail addresses (the members of the list). Whenever a mail message is forwarded to this special e-mail address, the text of that message is forwarded to all members on the list (all e-mail addresses held in the database).
These are commonly used to distribute discussions on various topics of interest to members.
E-mail lists generate a lot of mail. For example, the average list generates 30 or so
messages per day. These messages are received by each member. Some lists digest the messages into a single-file format. This works as follows:
  1. As each message comes in, it is appended to a plain text file of all messages forwarded on that day.
  2. When the day ends (this time is determined by the programmer), the entire file--with all appended messages--is mailed to members.
  3. This way, members get a single file containing all messages for the day.
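The digest steps above can be sketched in a few lines of Python. The file-naming scheme and separator below are assumptions for illustration; real list servers each have their own formats:

```python
import os
import tempfile
from datetime import date

def append_to_digest(message_text, digest_dir):
    """Append one message to today's digest file; return the file path."""
    path = os.path.join(digest_dir, f"digest-{date.today().isoformat()}.txt")
    with open(path, "a") as digest:
        digest.write(message_text.rstrip() + "\n" + "-" * 40 + "\n")
    return path

# Demo: two messages accumulate in a single daily file,
# which would be mailed to every member at day's end.
spool = tempfile.mkdtemp()
digest_file = append_to_digest("First message of the day", spool)
append_to_digest("Second message of the day", spool)
print(open(digest_file).read().count("-" * 40))  # -> 2
```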
Enrolling a target in multiple mailing lists is accomplished in one of two ways.
  1. One is to do it manually. The harassing party goes to the WWW page of each list and fills in the registration forms, specifying the target as the recipient or new member. This works for most lists because programmers generally fail to provide an authentication routine. (One wonders why. It is relatively simple to get the user's real address and compare it to the one he or she provides. If the two do not match, the entire registration process could be aborted.) Manually entering such information is absurd, but many individuals do it.
  2. Another and more efficient way is to register via fakemail. You see, most lists allow for registration via e-mail. Typically, users send their first message to an e-mail address such as this one: list_registration@listmachine.com. Any user who wants to register must send a message to this address, including the word subscribe in either the subject line or body of the message. The server receives this message, reads the provided e-mail address in the From field, and enrolls the user. (This works on any platform because it involves nothing more than sending a mail message purporting to be from this or that address.)
To sign up a target to lists en masse, the harassing party first generates a flat file of all list-registration addresses. This is fed to a mail program. The mail message--in all cases--is purportedly sent from the target's address. Thus, the registration servers receive a message that appears to be from the target, requesting registration to the list.

This technique relies on the forging of an e-mail message (or generating fakemail).
Although this is explained elsewhere, I should relate something about it here. To forge mail, one sends raw commands to a sendmail server. This is typically found on port 25 of the target machine. Forging techniques work as follows:
  1. You Telnet to port 25 of a UNIX machine. There, you begin a mail session with the command HELO. After you execute that command, the session is open. You then specify the FROM address, providing the mail server with a bogus address (in this case, the target to be list-linked). You also add your recipient and the message to be sent. For all purposes, the list server believes that the message came from its purported author. It takes about 30 seconds to register a target with 10, 100, or 500 lists. What is the result?
Ask the editorial offices of Time magazine. On March 18, 1996, Time published an article titled "I'VE BEEN SPAMMED!" The story concerned a list-linking incident involving the President of the United States, two well-known hacking magazines, and a senior editor at Time. Apparently, a member of Time's staff was list-linked to approximately 1,800 lists. Reportedly, the mail amounted to some 16MB. It was reported that House Leader Newt Gingrich had also been linked to the lists. Gingrich, like nearly all members of Congress, had an auto-answer script on his
e-mail address. These scripts trap the e-mail address of each incoming message and send an automated response. (Congressional members usually send a somewhat generic response, such as "I will get back to you as soon as possible and appreciate your support.") Thus, Gingrich's auto-responder received and replied to each and every message. This only increased the number of messages he received, because each of his responses to a mailing list was itself appended to that list's outgoing messages. In effect, the Speaker of the House was e-mail bombing himself.
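The raw SMTP dialogue described above can be sketched as follows. This Python fragment only builds the command strings rather than opening a connection; the host names and addresses are hypothetical, and modern mail servers reject or flag such forgeries:

```python
# Hypothetical addresses throughout; nothing here touches the network.
def forged_session(fake_from, recipient, body):
    """Return the raw SMTP commands a forger would type over Telnet."""
    return [
        "HELO forged.example.com",   # open the session
        f"MAIL FROM:<{fake_from}>",  # the bogus origin address
        f"RCPT TO:<{recipient}>",    # the list-registration address
        "DATA",
        body,
        ".",                         # a lone dot ends the message body
        "QUIT",
    ]

for line in forged_session("victim@targeted_site.com",
                           "list_registration@listmachine.com",
                           "subscribe"):
    print(line)
```

Repeating the MAIL FROM/RCPT TO/DATA sequence once per registration address is what makes wholesale list linking take only seconds.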

For inexperienced users, there is no quick cure for list linking. Usually, they must send a message containing the string unsubscribe to each list. This is easily done in a UNIX environment, using the method I described previously to list-link a target wholesale. However, users on other platforms require a program (or programs) that can do the following:
  • Extract e-mail addresses from messages
  • Mass mail
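Both capabilities can be roughed out in a few lines of Python. The regular expression below is a crude heuristic of my own, not a standard; real list headers vary widely:

```python
import re

# Rough pattern for e-mail addresses; illustrative, not exhaustive.
ADDRESS_RE = re.compile(r"[\w.+-]+@[\w.-]+\.\w+")

def extract_addresses(text):
    """Pull every e-mail address out of a block of saved list messages."""
    return set(ADDRESS_RE.findall(text))

def unsubscribe_messages(addresses):
    """Pair each list address with the body an unsubscribe request needs."""
    return [(addr, "unsubscribe") for addr in sorted(addresses)]

saved = "To: cats-request@lists.example.com\nCc: dogs-request@lists.example.org"
for addr, body in unsubscribe_messages(extract_addresses(saved)):
    print(addr, body)
```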
There are other ways to make a target the victim of an e-mail bomb, even without using an e-mail bomb utility or list linking. One is particularly insidious. It is generally seen only in instances where there is extreme enmity between two people who publicly spar on the Net. It amounts to this:
  1. The attacker posts to the Internet, faking his target's e-mail address. The posting is placed into a public forum in which many individuals can see it (Usenet, for example). The posting is usually so offensive in text (or graphics) that other users, legitimately and genuinely offended, bomb the target.
    For example, Bob posts to the Net, purporting to be Bill. In "Bill's" post, an extremely racist message appears. Other users, seeing this racist message, bomb Bill.

Finally, there is the garden-variety case of harassment on the Internet. This doesn't
circumvent either security or software, but I could not omit mention of it. Bizarre cases of Internet harassment have arisen in the past. Here are a few:
  • A California doctoral candidate was expelled for sexually harassing another student via e-mail.
  • Another California man was held by federal authorities on $10,000 bail after being accused of being an "international stalker."
  • A young man in Michigan was tried in federal court for posting a rape-torture fantasy about a girl with whom he was acquainted. The case was ultimately dismissed on grounds of insufficient evidence and free speech issues.
These cases pop up with alarming frequency. Some have been racially motivated, others have been simple harassment. Every user should be aware that anyone and everyone is a potential target. If you use the Internet, even if you haven't published your real name, you are a viable target, at least for threatening e-mail messages.


Internet Relay Chat Utilities
Many Internet enthusiasts are unfamiliar with Internet Relay Chat (IRC). IRC is an
arcane system of communication that resembles bulletin board systems (BBSs). IRC is an environment in which many users can log on and chat. That is, messages typed on the local machine are transmitted to all parties within the chat space. These scroll down the screen as they appear, often very quickly. This must be distinguished from chat rooms that are provided for users on systems such as AOL. IRC is Internet-wide and is free to anyone with Internet access. It is also an environment that remains the last frontier of the lawless Internet.

The system works as follows: Using an IRC client, the user connects to an IRC server, usually a massive and powerful UNIX system in the void. Many universities provide IRC servers.
Comprehensive lists of the world's IRC servers are easy to find online.
Once attached to an IRC server, the individual specifies the channel to which he or she wishes to connect. The names of IRC channels can be anything, although the established IRC channels often parallel the names of Usenet groups. These names refer to the particular interest of the users that frequent the channel. Thus, popular channels are
  • sex
  • hack
There are thousands of established IRC channels. What's more, users can create their own. In fact, there are utilities available for establishing a totally anonymous IRC server (this is beyond the scope of this discussion). Such programs do not amount to warfare, but flash utilities do. Flash utilities are designed to do one of two things:
  • Knock a target off the IRC channel
  • Destroy the target's ability to continue using the channel
Flash utilities are typically small programs written in C, and are available on the Internet at many cracking sites. They work by forwarding a series of special-character escape sequences to the target. These character sequences flash, or incapacitate, the terminal of the target. In plain talk, this causes all manner of strange characters to appear on the screen, forcing the user to log off or start another session. Such utilities are sometimes used to take over an IRC channel. The perpetrator enters the channel and flashes all members who are deemed to be vulnerable. This temporarily occupies the targets while they reset their terminals.
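The defense is straightforward in principle: strip escape sequences from untrusted chat text before it reaches the terminal. The following Python sketch uses an ANSI-sequence pattern that is an approximation of my own, not a complete parser:

```python
import re

# Approximate pattern for terminal escape sequences: CSI sequences
# such as ESC[2J, plus any other lone ESC pair. Not exhaustive.
ANSI_RE = re.compile(r"\x1b\[[0-9;?]*[ -/]*[@-~]|\x1b.")

def sanitize(chat_line):
    """Strip escape sequences and control bytes from untrusted chat text."""
    cleaned = ANSI_RE.sub("", chat_line)
    # Drop any remaining non-printable control characters.
    return "".join(ch for ch in cleaned if ch >= " " or ch in "\t\n")

hostile = "hi\x07\x1b[2J there"   # a BEL plus a clear-screen sequence
print(sanitize(hostile))          # -> "hi there"
```

An IRC client that filters incoming text this way renders a flash attack harmless.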
By far, the most popular flash utility is called flash. It is available at hundreds of sites on the Internet. For those curious about how the code is written, enter one or all of these search strings into any popular search engine:
flash.c
flash.c.gz
flash.gz
megaflash
Another popular utility is called nuke. This utility is far more powerful than any flash program. Rather than fiddle with someone's screen, it simply knocks the user from the server altogether. Note that using nuke on a wholesale basis to deny computer service to others undoubtedly amounts to unlawful activity. However, for those determined to get it, it exists in the void. It can be found by searching for the filename nuke.c.

There are few other methods by which one can easily reach an individual. The majority of these require some actual expertise on the part of the attacker. In this class are the following methods of attack:
  • Virus infection and malicious code
  • Cracking
Although these are extensively covered later in this text, I want to briefly treat them here. They are legitimate concerns and each user should be aware of these actual dangers on the Net.


Virus Infections and Trojan Horses
Virus attacks over the Internet are rare but not unheard of. The primary place that such attacks occur is the Usenet news network. You will read about Usenet in the next section.

Here, I will simply say this: Postings to Usenet can be done relatively anonymously.
Much of the information posted in Usenet these days involves pornography, files on
cracking, or other potentially unlawful or underground material. This type of material strongly attracts many users and as such, those with malicious intent often choose to drop their virus in this network. Commonly, viruses or malicious code masquerade as legitimate files or utilities that have been zipped (compressed) and released for general distribution. It happens. Examine this excerpt from a June 6, 1995 advisory from the Computer Incident Advisory Capability Team at the U.S. Department of Energy:
A trojaned version of the popular, DOS file-compression utility PKZIP is
circulating on the networks and on dial-up BBS systems. The trojaned
files are PKZ300B.EXE and PKZ300B.ZIP. CIAC verified the following
warning from PKWARE:
"Some joker out there is distributing a file called PKZ300B.EXE and
PKZ300B.ZIP. This is NOT a version of PKZIP and will try to erase your
hard drive if you use it. The most recent version is 2.04G. Please tell all
your friends and favorite BBS stops about this hack.
"PKZ300B.EXE appears to be a self extracting archive, but actually
attempts to format your hard drive. PKZ300B.ZIP is an archive, but the
extracted executable also attempts to format your hard drive. While
PKWARE indicated the trojan is real, we have not talked to anyone who
has actually touched it. We have no reports of it being seen anywhere in
the DOE.
"According to PKWARE, the only released versions of PKZIP are 1.10, 1.93, 2.04c, 2.04e and 2.04g. All other versions currently circulating on BBSs are hacks or fakes. The current version of PKZIP and PKUNZIP is 2.04g."
That advisory was issued very quickly after the first evidence of the malicious code was discovered. At about the same time, a rather unsophisticated (but nevertheless
destructive) virus called Caibua was released on the Internet. Many users were infected. The virus, under certain conditions, would overwrite the default boot drive.
I highly recommend that all readers bookmark a comprehensive online virus database. Such a database is an excellent resource for learning about the various viruses that can affect your platform.
Here's an interesting bit of trivia: If you want to be virus-free, use UNIX as your
platform. According to the CIAC, there has only been one recorded instance of a UNIX virus, and it was created purely for research purposes. It was called the AT&T Attack Virus.
If you want to see an excellent discussion about UNIX and viruses,
check out "The Plausibility of UNIX Virus Attacks" by Peter V. Radatti.
Radatti makes a strong argument for the plausibility of a UNIX virus.
However, it should be noted that virus authors deem UNIX a poor target platform because of access-control restrictions. It is felt that such access-control restrictions prevent the easy and fluid spread of the virus, containing it in certain sectors of the system. Therefore, for the moment anyway, UNIX platforms have little to fear from virus authors around the world. Nonetheless, at least one virus for Linux has been confirmed. This virus is called Bliss. Reports on Bliss at the time of this writing are sketchy. There is some argument on the Internet as to whether Bliss qualifies more as a trojan, but the majority of reports suggest otherwise. Furthermore, it is reported that it compiles cleanly on other UNIX platforms.
The only known system tool that checks for Bliss infection was written by Alfred Huger.
NOTE: There is some truth to the assertion that many viruses are written overseas. The rationale for this is as follows: Many authorities feel that authors overseas may not be compensated as generously for their work and they therefore feel disenfranchised. Do you believe it? I think it's possible.
In any event, all materials downloaded from an untrusted source should be scanned for viruses. The best protection is a virus scanner; there are many for all personal computer platforms.
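The crudest form of such a check is a checksum comparison against a list of known-bad files, the way a trojaned PKZ300B.EXE could have been flagged. The hash list below is fabricated for the demo (and uses a modern hash function); real scanners rely on full signature databases:

```python
import hashlib

# Fabricated "signature database": one known-bad checksum for the demo.
KNOWN_BAD_SHA256 = {hashlib.sha256(b"fake trojan payload").hexdigest()}

def is_known_bad(file_bytes):
    """Return True if the file's checksum matches a known-bad entry."""
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_BAD_SHA256

print(is_known_bad(b"fake trojan payload"))  # -> True
print(is_known_bad(b"legitimate archive"))   # -> False
```

Note the limitation: a checksum only catches files already reported; a scanner with heuristics is still essential.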

Malicious code is slightly different from a virus, but I want to mention it briefly. Malicious code can be defined as any programming code that is not a virus but that can do some harm, however insignificant, to a user's software. Today, the most popular form of malicious code involves the use of black window apps, or small, portable applications in use on the WWW that can crash or otherwise incapacitate
your WWW browser. These are invariably written in scripting languages like JavaScript or VBScript. These tiny applications are embedded within the HTML code that creates any Web page. In general, they are fairly harmless and do little more than force you to reload your browser. However, there is some serious talk on the Net of such applications being capable of:
  • Circumventing security and stealing passwords
  • Formatting hard disk drives
  • Creating a denial-of-service situation
These claims are not fictional. The programming expertise required to wreak this havoc is uncommon in prankster circles. However, implementing such apps is difficult and risky because their origin can be easily traced in most instances. Moreover, evidence of their existence is easily obtained simply by viewing the source code of the host Web page. If such applications were employed, they would more likely be written in Java or some other compiled language.
In any event, such applications do exist. They pose more serious risks to those using
networked operating systems, particularly if the user is browsing the Web while logged into an account that has special privileges (such as root, supervisor, or administrator). These privileges give one great power to read, write, alter, list, delete, or otherwise tamper with special files. In these instances, if the code bypasses the browser and executes commands, the commands will be executed with the same privileges as the user. This could be critical and perhaps fatal to the system administrator. (Not physically fatal, of course. That would be some incredible code!)
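One precaution follows directly from this: never browse while holding special privileges. A minimal, POSIX-only sketch of such a check (the warning text is my own, and other systems would need a different test):

```python
import os

def warn_if_privileged():
    """Return True and warn when running with root privileges (euid 0)."""
    if hasattr(os, "geteuid") and os.geteuid() == 0:
        print("Warning: you are browsing as root; hostile code "
              "embedded in a page would run as root too.")
        return True
    return False

warn_if_privileged()
```

Running network clients from an unprivileged account confines any executed code to that account's limited rights.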


Cracking
Cracking an individual is such a broad subject that I really cannot cover it fully here. Individuals use all kinds of platforms, and a proper "cracking the individual" treatment would have to repeat much of the rest of this text. However, I will make a general statement here: Users who surf using any form of networked operating system are viable targets. So there is no misunderstanding, let me identify those operating systems:
  • Windows 95
  • Windows NT
  • Novell NetWare
  • Any form of UNIX
  • Some versions of AS/400
  • VAX/VMS
If you are connected to the Net with such an operating system (and possibly others), you are a potential target of an online crack. Much depends on what services you are running, but be assured: If you are running TCP/IP as a protocol, you are a target. Equally, those Windows 95 users who share out directories (briefly, shared-out directories are those that allow file sharing across a network) are also targets.