
23 March 2010

OpenBSD --- An Overview for Newbies

OpenBSD is a member of the BSD family of operating systems and is widely regarded as the most secure operating system available anywhere, under any licensing terms, thanks to its excellent documentation and its fanatical focus on security debugging and code correctness.

It's widely used by Internet service providers, embedded 
systems manufacturers, and anyone who needs security 
and stability.

 Most people think that OpenBSD is not the easiest UNIX-like operating system, or the easiest version of BSD, or even the easiest version of open-source BSD. It doesn't have handy "wizards" that walk you through each stage of the configuration process. It has very few menu-driven front ends. Once you're familiar with how the system works, though, such wizards only get in the way.

The OpenBSD developers and support groups are not really interested in helping rank UNIX beginners and usually refuse to answer basic UNIX questions. To really understand OpenBSD you need to be willing to learn, experiment, and spend some time accumulating understanding. The good news is, OpenBSD merely shows you what other operating systems conceal. Much of this knowledge can be directly applied to other versions of BSD, other UNIX-like operating systems, and even completely foreign operating systems such as Microsoft's Windows platforms.

 Because UNIX is not designed to be particularly easy to use, don’t feel bad if you have to look up a number of topics before you feel comfortable using the computer. Most computer users, after all, never have to face anything as daunting as UNIX!
 That’s how you know it’s UNIX. 
Looks odd, but works great.

Windows is warm and tasty,
blowfish goes down hard.


What Is BSD?
Indulge us while we tell a historical parable. Imagine that UNIX is a kind of automobile rather than a computer system. In the early days, every UNIX system was distributed with a complete set of source code and development tools. If UNIX had been a car, this distribution method would have  been the same as every car’s being supplied with a complete set of blueprints, wrenches, arc-welders, and other car-building tools. Now imagine that nearly all these cars were sold to engineering schools. You may expect that the students would get to work on their cars and that soon no two cars would be the same. That’s pretty much what happened to UNIX.
AT&T employees created UNIX in the early 1970s. At the time, the monster telephone company was forbidden to compete in the computer industry. The telecommunications company used UNIX internally, but could not transform it into a commercial product. As such, AT&T was willing to license the UNIX software and its source code to universities for a nominal fee. This worked well for all parties:
  1. AT&T got a few pennies and 
  2. a generation of computer scientists who cut their teeth on AT&T technology, 
  3. the universities avoided high operating system license fees, and 
  4. the students were able to dig around inside the source code and see how computers really worked.
Compared to some of the other operating systems of the time, the original UNIX wasn't very good. But all these students had the source code for it and could improve the parts that they didn't like.
  1. If an instructor found a certain bug particularly vexing, he could assign his students the job of fixing it. 
  2. If a university network engineer, professor, or student needed a feature, he could use the source code to quickly implement it. 
As the Internet grew in the early 1980s, these additions and features were exchanged between universities in the form of patches. The Computer Science Research Group (CSRG) at the University of California, Berkeley, acted as a central clearinghouse for these patches. The CSRG distributed these patches to anyone with a valid AT&T source code license. The resulting collection of patches became known as the Berkeley Software Distribution, or BSD. This continued for a long, long time. If you look at the copyright for any BSD-derived code, you will see the following text: 
Copyright 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 2009
The Regents of the University of California. All rights reserved.
Thirty years of continuous development by the brightest students of the best computer science programs in the world, moderated by the faculty of one of the top technical schools (Berkeley) in the USA. That's more than a lifetime in software development. As you might imagine, the result was pretty darn good — almost everyone who used UNIX was really using BSD. The CSRG was quite surprised, near the end of these years, when it found that it had replaced almost all of the original AT&T code!
Although about 75 percent of the important stuff is the same on all UNIX systems, knowing exactly which kind of UNIX you’re using helps.

BSD Goes Public

In the early 1990s, the CSRG's funding started to run out. The University of California had to decide what to do with all this wonderful source code it owned. The simplest thing would have been to drop the original tapes down a well and pretend that the CSRG had never happened. In keeping with the spirit of academic freedom, however, it released the entire BSD collection to the public under an extremely liberal license. The license can be summarized like this:
  1. Don't claim you wrote this.
  2. Don't sue us if it breaks.
  3. Don't use our name to promote your product.
Compare this with the software license found on almost any commercial operating system. The BSD license is much easier to understand and unobjectionable to almost anyone. Anyone in the world can take the BSD code and use it for any purpose they like, from desktop computers to self-guided lawnmowers. Not surprisingly, many computer manufacturers jumped right on BSD. Not only was the code free, but also every computer science graduate for the last 22 years was familiar with it.

As the CSRG was merrily improving AT&T's product, AT&T was doing its own UNIX development work to meet its internal needs.
  1. As AT&T developers implemented features, they also evaluated patches that came from the CSRG. 
  2. When they liked a chunk of BSD code, they incorporated it wholesale into AT&T UNIX, 
  3. then turned around and relicensed the result back to the universities, 
  4. who used it as the basis for their next round of work.
This somewhat incestuous relationship kept going for many years, until the grand AT&T breakup. Suddenly, the telecommunications giant was no longer forbidden to dabble in commercial computing. Thanks to years of development, and that generation of computer scientists who knew it, UNIX abruptly looked like a solidly marketable product.
Berkeley's release of the BSD code met with great displeasure from AT&T and instigated one of the most famous computer-related lawsuits of all time. After some legal wrangling, the case was settled out of court. The Berkeley lawyers proved that most of the code in dispute originated in BSD, not in original AT&T UNIX. Only a half-dozen files were original AT&T property, while the rest of the operating system belonged to the CSRG and its contributors. As if that wasn't bad enough, AT&T had even removed the original Berkeley copyright statement from the files it had appropriated from the CSRG! 
AT&T went away and sulked for a while, finally releasing System V UNIX. The CSRG removed disputed files and released BSD 4.4-Lite2, a complete collection of CSRG code utterly unencumbered by any AT&T copyrights.
    BSD 4.4-Lite2, also known just as "Lite 2," is the grandfather of all modern BSD software.
This code was not usable out of the box, and it required some tweaks and additions to function. Various groups of programmers, such as BSDi, the NetBSD Project, and the FreeBSD Project, took it on themselves to make this code usable and to maintain it. Each project was independently managed.

What Is OpenBSD?

OpenBSD's founder, Theo de Raadt, started as a NetBSD developer several years ago. He had several strong disagreements, on many fronts, with the NetBSD developers about how the operating system should be developed. Eventually, he went out on his own and founded the OpenBSD Project, attracting quite a few like-minded developers to work with him.
The OpenBSD team introduced several ideas into the open-source OS world that are now taken for granted, such as public access to the CVS repository and commit logs.
The OpenBSD team quickly established an identity of its own as a security-focused group and is now one of the best-known open-source BSDs. Today, major companies such as Adobe Systems rely on OpenBSD to provide a reliable, secure operating system.
    Nowadays, OpenBSD is a BSD-based UNIX-like operating system with a fanatical attention to security, correctness, usability, and freedom. It runs on many different sorts of hardware including the standard "Intel PC" (i386), the Macintosh (mac68k and macppc), Sun's Sparc (sparc and sparc64), Compaq's Alpha (alpha), and more. OpenBSD puts almost all its efforts into security features, security debugging, and code correctness. The OpenBSD folks have demonstrated that correct code has a much lower chance of failing, and hence greater security. While some other BSDs focus on different goals, OpenBSD strives to be the ultimate secure operating system.
    The OpenBSD team continually improves the operating system to enhance its security, stability, and freedom. This includes everything from
  • the actual code in the operating system, 
  • to the online manual (which has a nearly legendary quality in the free software community), 
  • to the debugging and development environment, 
  • to the continuous software license auditing.
In October of 1995, Theo de Raadt forked the NetBSD code and formed OpenBSD with the goal of making a free (in the context of rights, not price), highly functional operating system that concentrated on security while remaining as portable as possible. NetBSD was originally based on the last release of the academic replacement for AT&T Unix known as the Berkeley Software Distribution (4.4BSD-Lite), so OpenBSD's heritage reaches back considerably further than many other operating systems in development today.
    OpenBSD is designed to be secure by default. The simplest way to explain this concept is to say that everything that could potentially be a security risk is turned off or disabled until you turn on or enable it. That means that while you may have the Apache web server installed, it is not going to start until you either run its daemon from the command line or manually add httpd (the service name that corresponds to Apache) to the system startup script. OpenSSH services will also be unavailable unless specifically enabled. Because it is secure by default, you may have to do more initial configuration with OpenBSD than with most other Unix-like operating systems, and that is why this guide exists to show you how to get an OpenBSD machine up and running quickly.
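As a sketch of what "enable it yourself" looks like in practice on an OpenBSD 4.x system: the base-system Apache can be started once by hand, or enabled at every boot by overriding a variable in /etc/rc.conf.local (the flags shown are illustrative; check your release's rc.conf for the exact variables):

```shell
# Start the base-system Apache web server once, by hand
# (on OpenBSD it chroots itself to /var/www by default):
apachectl start

# Or enable it at every boot: /etc/rc.conf.local overrides the defaults
# in /etc/rc.conf. An empty string means "run with no extra flags";
# the shipped default is NO, i.e. disabled.
echo 'httpd_flags=""' >> /etc/rc.conf.local
```

The same pattern applies to other daemons: everything ships disabled, and one explicit line in rc.conf.local turns a service on.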

What Is OpenBSD Good For?
OpenBSD is frequently employed as a web, email, and FTP server, though it can just as easily run BIND to do DNS name resolution, OpenLDAP to form a directory server, and the PostgreSQL and MySQL databases, among a few others. Many people also use OpenBSD in a network appliance machine as a firewall, router, and wireless access point.
    You can also make a suitable desktop operating system out of OpenBSD if you wish. The X Window System server is provided on the installation media, and relatively recent editions of KDE, GNOME, Xfce, Fluxbox, Enlightenment (e16), IceWM, and other window managers are available through the Ports system (a collection of optional software that is common to all operating systems in the BSD family). A respectable selection of desktop software is also available, including Firefox, the GIMP, LyX, Evolution, G-Rip, XMMS, and many more. There is an port, but it is nonfunctional in OpenBSD 4.0.
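Precompiled packages built from the ports tree are normally installed with pkg_add; a minimal sketch (the mirror URL, release number, architecture, and package name are all illustrative and should be adjusted for your system):

```shell
# Tell pkg_add where to find packages for your release and architecture
export PKG_PATH=ftp://ftp.openbsd.org/pub/OpenBSD/4.0/packages/i386/

# Fetch and install a package, pulling in its dependencies automatically
pkg_add -v xmms

# List the packages currently installed on the system
pkg_info
```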
    One thing you won't get with OpenBSD is hardware 3-D acceleration for graphics cards, so while you can get a highly usable 24-bit color display with nearly any video card, you won't be able to play 3-D-accelerated games like Unreal Tournament or Tux Racer. The issue here is not X, but kernel drivers; Nvidia, AMD, and Intel refuse to make OpenBSD drivers or to supply sufficient hardware documentation for OpenBSD programmers to make their own.
    In all, more than 5,600 programs are available for OpenBSD, and through a procedure that you'll learn about in a later section, you can also run Linux, FreeBSD, SCO Unix, System V Release 4, HP-UX, and BSD/OS binary programs in OpenBSD with little or no performance loss.
Operating systems derived from BSD have a well-earned reputation for stability and security. BSD was developed at a time when computing resources (disk space, network bandwidth, and memory) were meager by today’s standards. So BSD systems were operated by efficient commands, instead of the bloated applications and dumbed-down graphical interfaces often seen today.
    Because of the nature of BSD systems, people running those systems required a high level of expertise. Even when simplified graphical user interfaces based on the X Window System began to appear, to effectively operate a BSD system you still needed to know about such things as kernels, device drivers, modules, and daemons. Because security came before ease-of-use, a BSD expert needed to know how to deal with the fact that many features they may have wanted were not installed, or were turned off, by default.
    If you are someone who has used Linux before, transitioning to a BSD system shouldn’t be too hard. However, BSD systems tend to behave a bit more like older UNIX systems than they do like Linux. Many interfaces are text-based, offering lots of power if you know what you are doing. Despite that fact, however, all the major desktop components that, for example, you get with the GNOME desktop environment are available with BSD systems. So you don’t have to live on the command line.

Supported Architectures and Hardware
OpenBSD works with a diverse array of hardware on 16 different computing platforms, but for the sake of brevity, this guide will cover only the most popular and common CPU architectures: i386 (otherwise known as x86 or IA32) and AMD64 (also known as x86-64 or EM64T).
    Ideally you should have at least 10 GB of hard drive space (though you can get away with much less if you're building a network appliance) and at least 128 MB of RAM (but the more, the better).
    The BSD operating systems have an unwarranted reputation for poor peripheral hardware support. In reality, OpenBSD natively supports more network and RAID devices than any other operating system. That means two things:
  1. first, the days of hunting for and installing third-party drivers are over; 
  2. second, if a device isn't recognized out of the box by OpenBSD, there is little you can do to make it work. 
What you won't get, though, is support for chiefly desktop hardware such as high-end sound cards (though many ordinary sound cards will work), some kinds of scanners, and other things that are generally unnecessary in a server, network appliance, workstation, or work-oriented desktop machine. In other words, OpenBSD is not quite as suited to home desktop use as FreeBSD or GNU/Linux.
    What about laptop systems? Most Pentium 3, Celeron, Pentium M (Centrino), Pentium 4 M, Pentium 4, Celeron M, early Turion, and some Core Duo (Centrino Duo) notebook computers have been reported to work wonderfully with OpenBSD, softmodems and exotic hardware (like fingerprint readers, webcams, and certain Wi-Fi cards) aside. Native ACPI and wireless networking support in OpenBSD is frequently superior to that of even some of the fanciest desktop GNU/Linux distributions.
    If you have a mass-produced workstation or server, chances are good that everything you need (video, sound, network, drive controller, PCI/AGP controller) will be fully supported. If you have a home-built system that is more than a year old, it's probably going to be okay, too. If you just built a top-of-the-line Intel Core 2 Duo system with the latest, fanciest 802.11g wireless card and a $400 PCI Express video card, and you expect to set up a RAID-5 array with the built-in SATA fake-RAID controller on your just-released Abit motherboard...well, you're probably going to have at least a moderate amount of trouble with any operating system, and OpenBSD 4.0 probably won't work very well for you. The more your target computer resembles the latter scenario, the less pleased you'll be with OpenBSD at this time.
If you aren't sure whether a critical piece of hardware will work with OpenBSD 4.0, your first stop should be the hardware compatibility list for your processor architecture, i386 (x86) or AMD64 (x86-64), on the OpenBSD website.
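Beyond the compatibility lists, the kernel's own boot log shows exactly which drivers attached to your hardware, so an existing installation (or a boot from the install media) can settle the question directly. A sketch, with an illustrative device name:

```shell
# dmesg replays the kernel's boot log; every recognized device appears
# with the name of the kernel driver that attached to it
dmesg | less

# Check for a specific device, e.g. the first em(4) gigabit network
# interface; no output means the kernel did not attach the driver
dmesg | grep '^em0'
```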

Other BSDs

So, what are these other versions of BSD, anyway? The main variants are NetBSD, FreeBSD, DragonFly BSD, Mac OS X, Solaris and BSD/OS.

NetBSD is the direct ancestor of OpenBSD and was written to run on as many different types of hardware as possible. So NetBSD has a reputation for being very portable, with versions of NetBSD running as an embedded system on a variety of hardware. NetBSD can run on anything from 32-bit and 64-bit PCs to personal digital assistants (PDAs) to VAX minicomputers.
    OpenBSD maintains much of this platform-independent design, but doesn't support all of the platforms NetBSD does. Moreover, unlike FreeBSD and NetBSD, which are covered under the BSD license, OpenBSD is covered primarily under the more permissive Internet Systems Consortium (ISC) license.

FreeBSD is the most popular of the BSD open-source operating system distributions. It can be operated as a server, workstation, or desktop system, but has also been used in network appliances and special-purpose embedded systems. It has a reputation for maximum performance.
    While the FreeBSD team considers security important, security is not its reason for eating, sleeping, and breathing as it is for the OpenBSD folks.

DragonFly BSD 
DragonFly BSD was originally based on FreeBSD. Its goal was to develop technologies different from FreeBSD in such areas as symmetric multiprocessing and concurrency. So the focus has been on expanding features in the kernel.

Other free (as in no cost, as well as freedom to do what you like with the code) operating systems based on BSD include Darwin (on which Mac OS X is based)
and desktop-oriented systems such as PC-BSD and DesktopBSD. FreeSBIE is a
live CD BSD system. Proprietary operating systems that have been derived from BSD include:

Mac OS X
The latest version of the Macintosh operating system is based on BSD. OpenBSD makes a comfortable and full-featured desktop for a computer professional, but may scare your grandparents. If you want a very friendly, candy-coated desktop that you can put down in front of grandma, but want power and flexibility under the hood, you might check out Mac OS X. The source code for the graphic interface of Mac OS X is not available, but you can get the source code for the BSD layer and the Mach kernel from Apple.

    There is also a Mac OS X Server product available. Although Mac OS X was originally based on Darwin, it is considered a closed-source operating system with open source components.

SunOS was developed by Sun Microsystems and was very popular as a professional workstation system. Sun stopped development of SunOS in favor of Solaris. However, because Solaris represented a merging of SunOS and UNIX System V, many BSD features made their way into Solaris.

BSD/OS is a commercial, closed-source operating system produced by Wind River that greatly resembles the open-source BSDs. Some hardware manufacturers will not release specifications for their hardware unless the recipient signs a non-disclosure agreement (NDA). These NDAs are anathema to any open-source development project. Wind River will sign these NDAs and include reliable drivers for this hardware in BSD/OS. If you need to run particular server-grade hardware, and it isn't supported under OpenBSD or any other open-source BSD, you might investigate BSD/OS.

There is a larger list of BSD distributions that you can find at the DistroWatch site. Besides offering descriptions of those BSD distributions, you can also find links to where you can purchase or download the software.

OpenBSD Users (Not really a social OS)

OpenBSD is more than just a collection of bits on CD-ROM. It's also a community of users, developers, and contributors. This community can be a bit of a culture shock for anyone who doesn't know what to expect.
    Many other open-source operating systems place large amounts of effort into growing their user bases and bringing new people into the UNIX fold. The OpenBSD community doesn't. Most open-source UNIX-like operating systems do a lot of pro-UNIX advocacy. Again, OpenBSD doesn't. Some of the communities that have grown up around these operating systems actively welcome new users and do their best to make newbies feel welcome. OpenBSD does not. They are not trying to be the most popular operating system, just the best at what they do. The OpenBSD developers know exactly who their target market is: themselves.
    The OpenBSD community generally expects users to be advanced computer users. They have written extensive documentation about OpenBSD, and expect people to be willing to read it. They're not interested in coddling new UNIX users and will say so if pressed. They don't object to new UNIX users using OpenBSD, but do object to people asking them for basic UNIX help just because they happen to be running OpenBSD. If you're a new UNIX user, they will not hold your hand. They will not develop features just to please users. OpenBSD exists to meet the needs of the developers, and while others are welcome to ride along, the needs of the passengers do not steer the project.

OpenBSD Developers

So, how can a group of volunteers scattered all over the world actually create, maintain, and develop an operating system? Almost all discussion takes place via email and online chat. This can be slower than a face-to-face meeting, but is the only means by which people everywhere in the world can openly and reasonably communicate. This also has the advantage of providing a written record of discussions. OpenBSD has three tiers of developers:
  1. the coordinator, 
  2. the committers, and 
  3. the contributors
Contributors are OpenBSD users who have the skills necessary to add features to the operating system, fix problems, or write documentation. Almost anyone can be a contributor. Problems range from a typographical error in the documentation to a device driver that crashes the system under particular circumstances. Every feature that is included in OpenBSD is there because some contributor took the time to sit down and write the code for it. Contributors who submit careful, correct fixes are welcome in the OpenBSD group.
    If a contributor submits enough fixes of high enough quality, he may be offered the role of committer.

Committers are people who have direct access to the central OpenBSD source code repository. Most committers are skilled programmers who work on OpenBSD in their own time, as a hobby. They can make whatever changes they deem necessary for their OpenBSD projects, but are answerable to each other and to the project coordinator. They communicate via a variety of mailing lists, which are available for reading by interested parties. As these mailing lists are meant for developers to discuss coding and implementation details, users asking basic questions are either ignored or asked to be quiet.
    A committer's work is frequently available on websites and mailing lists before being integrated into the main OpenBSD source code collection, allowing interested people to preview their work. While being a committer seems glamorous, these people also carry a lot of responsibility — if they break the operating system or change something so that it conflicts with the driving "vision" of the Project, they must fix it. All OpenBSD committers answer to the project coordinator.

Theo de Raadt started OpenBSD in 1995 and still coordinates the project. He is the final word on how the system works, what is included in the system and who gets direct access to the repository. He resolves all disputes that contributors and committers cannot resolve amongst themselves. Theo takes whatever actions are necessary to keep the OpenBSD Project running smoothly.
    Many people have very specific coordination roles within OpenBSD — quite a few architectures have a "point man" for issues that affect that hardware, the compiler has a maintainer, and so on. These are people who have earned that position of trust within the community. The only time that Theo acts as the final word is when someone has broken one of OpenBSD's few rules, such as bringing bad licenses into the source tree or behaving poorly with other committers.
    This style of organization, with a central benevolent dictator, avoids a lot of the problems other large open-source projects have with management boards, core teams, or other structures. When someone decides to work on OpenBSD, they can either accept Theo's decisions as final or risk conflicting with the main OpenBSD Project. Thanks to the cooperative nature of OpenBSD development, Theo doesn't have to use that Big Stick nearly as often as one might think.

OpenBSD's Strengths

So, what makes OpenBSD OpenBSD? Why bother with another open-source UNIX-like operating system when there are many out there, many closely related to OpenBSD? What makes this OS worth a computer, let alone entrusting with your corporate firewall?

OpenBSD is designed to run on a wide variety of popular processors and hardware platforms. These platforms include, but are not limited to: Intel (80386 and compatibles), Alpha, Macintosh (both PowerPC and 68000 models), almost everything from Sun, and a variety of more obscure platforms. Chances are, any computer you will come across can run OpenBSD. The OpenBSD team wants to support as many interesting hardware architectures as they have the hardware and skills to maintain, so more are being added regularly.

OpenBSD runs on hardware that's been obsolete for ten years. This isn't a deliberate design decision — the hardware was in popular use when OpenBSD was started, and the developers try to maintain speed and compatibility when they can. People who are running OpenBSD on an ancient VAX quickly catch changes that badly affect system performance on 486s, while people running a modern Pentium 4 would probably never notice. Some of these changes are required by the advancing nature of the Internet, changes in the tools used to build OpenBSD, and added functionality in the system, but those that are the result of programming errors or misunderstandings are caught quickly.
    OpenBSD leaves you every scrap of computing power possible to run your applications. In the end, people use applications and not operating systems. This means that a system with a one-gig disk and a 486 CPU can still make a solid web server once you install OpenBSD! A low-footprint operating system gives the most bang out of hardware.

OpenBSD has some of the industry's finest integrated documentation. Many free software projects are satisfied with releasing code. Some think that they're going above and beyond by including a help function in the program itself, available by typing some command-line flag. Others really go all out and provide a grammatically incorrect and technically vague manual page.
    OpenBSD's documentation is expected to be both complete and accurate. The manual pages for system and library calls are extensive, even when compared to the other BSDs, and include discussions on usage and security. In its audit of the OpenBSD source code tree, the OpenBSD team found any number of circumstances where people had used the library interface as the manual page said they should, but the manual page was incorrect! This created both potential and actual security problems. As such, a documentation error is considered a serious bug and treated as harshly as any other serious bug.

In keeping with the spirit of the original BSD license, OpenBSD is free for use in any way by anyone. You can use it in any tool you like, on any computer, for any purpose. Most of today's free software is licensed under terms that require distributors of software to return any changes back to the project owner (the GPL license, for example). OpenBSD doesn't come with even that minor requirement. You can take OpenBSD, modify it, and embed it in refrigerators that order replacement food over the Internet, without ever paying the developers a dime.
    OpenBSD is perhaps the freest of the free operating systems. Like every other free UNIX-like operating system, OpenBSD inherited a source code tree that originally contained a wide variety of programs shipped under conditional licenses. Some were free for non-commercial use; some were free if you changed the name once you made a change to the code; others had a variety of obscure licensing terms, such as indemnifying a third party against lawsuits. These have been either ripped out or replaced with freely licensed alternatives. Theo de Raadt said on a mailing list during a discussion of licensing terms:
     We know what a free license should say.
        It should say
      * Copyright foo
      * I give up my rights and permit others to:
      * I retain the right to be known as the author/owner
      When it says something else, ask this:
      * - is it 100% guaranteed fluff which cannot ever affect anyone?
      * - is it giving away even more rights (the author right)?
      If not, then it must be giving someone more rights, or by the same token -
      taking more rights away from someone else!
      Then it is _less_ free than our requirements state!
The OpenBSD Project does a lot of work to guarantee that its licensing is as stringently free as its code is correct.

OpenBSD developers strive to implement solutions correctly. This means that they follow UNIX standards such as POSIX and ANSI in their implementations. They make it a strict rule to write programs in a reliable and secure manner, following programming's best current practices. Every skilled programmer knows that programs written correctly are more reliable, predictable, and secure. Many free software producers, however, are satisfied if the code compiles and seems to work, and quite a few commercial software companies don't give their programmers time to write code correctly. Code in OpenBSD has been made correct by dint of much hard work, and anyone who tries to introduce incorrect code will be turned away — generally politely, and often with constructive criticism, but turned away nonetheless. And that brings us to OpenBSD's most well-known claim to fame.

OpenBSD strives to be the most secure operating system in the world. While it can reasonably make that claim now, it's a position that requires a constant struggle to maintain. People who break into systems are constantly trying new ways to penetrate computer systems, which means that today's feature may be tomorrow's security hole. As OpenBSD developers learn of new classes of programming errors and security holes, they scan the entire source tree for that class of problem and fix them before anyone even knows how they might be exploited. The history of computer security shows that users cannot be expected to patch or maintain their own systems; those systems must be secure out of the box. OpenBSD's goal is to eliminate those problems before they exist.
    If you work at a company implementing such technology, please base it on OpenBSD. I do not want my refrigerator to be hacked and find 4,000 gallons of sour cream on my doorstep the next day!

OpenBSD Security

Even though OpenBSD is tightly secured, computers running OpenBSD are still broken into. That might seem contradictory, but in truth it means that the person running the computer didn't understand computer security.
    OpenBSD has many integrated security features, but people frequently assume that these features handle security for everything that can be installed on the computer. A moment's thought will show that this really isn't possible. No operating system can protect itself from the computer operator's mistakes. An OS can protect itself from problems in installed software to a limited extent, but ultimately the responsibility for security is in the hands of the administrator.
Consider a web server program running on OpenBSD. OpenBSD will provide the server with a stable, reliable platform, and will do as the server program asks, within the permissions the systems administrator has assigned to it. If the systems administrator has set up the server in a careful and correct manner, something going wrong with the web server will not endanger the operating system. If the sysadmin has integrated the web server with OpenBSD or has chosen to let the web server run with unrestricted privileges, the web server can inflict almost unrestricted damage on the system. If an intruder breaks into such a web server, they can use that integration and those high permissions to lever their way into the operating system itself.
If such a break-in happens, is it OpenBSD's fault? Obviously not. The systems administrator is expected to follow basic security precautions when installing and configuring programs. No operating system can protect itself from an ignorant or careless sysadmin. Ultimately, security is the responsibility of the systems administrator. Here, we will discuss some of the basic security precautions you should be taking when installing and running programs. We will also discuss the advanced security features OpenBSD offers in order to protect itself and help in your systems administration duties.
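One of those basic precautions is making sure the files your server publishes are not writable by everyone. As a minimal sketch of such a check, the script below uses a scratch directory (/tmp/wwwdemo, a made-up stand-in for a real document root) rather than touching any live system:

```shell
# Create a stand-in document root with one correctly permissioned file.
mkdir -p /tmp/wwwdemo
echo "hello" > /tmp/wwwdemo/index.html
chmod 644 /tmp/wwwdemo/index.html   # owner read/write, everyone else read-only

# List any files that are world-writable; an empty result is what you want.
find /tmp/wwwdemo -type f -perm -o+w
```

Run periodically against a real document root, a check like this catches the kind of sloppy permissions that turn a small web server bug into a full compromise.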

OpenBSD's Uses

So, OpenBSD has all these nifty features, abilities, and strengths. Where does it fit into your "computing strategy"? That ultimately depends on what your strategy is and where you need it. OpenBSD can be used anywhere you need a solid, reliable, and secure system. I recommend OpenBSD for any of three different uses:
  • on the desktop, 
  • as a server, or 
  • as a network management device

If you need a powerful desktop with all the features you'd expect from a complete UNIX-like workstation, OpenBSD will do nicely. Desktop GUIs, office suites, web browsers, and the other programs an average user wants on a computer are all available. For programmers and web developers, OpenBSD supports a variety of:
  • development tools
  • application environments
  • network servers, and 
  • other features. 
If you're a network administrator, OpenBSD supports:
  • packet sniffers
  • traffic analyzers
  • and all the other programs you might have come to rely upon.

If you're
  • serving web pages
  • handling email
  • providing LDAP services, or 
  • offering any sort of network services to clients
OpenBSD can help you. It's a cheap and reliable platform. Once it's set up, it just works. Web servers, database servers, and more all work under OpenBSD. And, of course, it's secure, whose importance cannot be overstated on today's Internet.

Network Management
OpenBSD makes an excellent network management device. You can use it to support firewalling, routing, and traffic filtering for your network.
The integrated PF firewall provides state-of-the-art network connection management and control and strips out many dangerous types of traffic before they even reach your servers. Of course, OpenBSD can do all this as cheaply and reliably as it can do anything else.



  • OpenBSD101
  • Faq
  • Documentation
  • Mailing lists
  • IRC chat: #openbsd
  • OpenBSD News
  • OpenBSD support
  • Forums
  • Absolute OpenBSD: UNIX for the Practical Paranoid
    by Michael W. Lucas
    (No Starch Press  2003)
  • The OpenBSD 4.0 Crash Course
    by Jem Matzan (O'Reilly 2007)
    ISBN-10: 0-596-51015-2
  • BSD UNIX Toolbox: 1000+ Commands for FreeBSD, OpenBSD, and NetBSD  Power Users by Christopher Negus,  François Caen (Wiley 2008)
    ISBN: 978-0-470-37603-4

20 March 2010

Ubuntu --- Configuring Servers on Ubuntu


Setting Up a Web Server

Most of the significant advances in computing technology have what is known as a killer app (killer application) — one significantly unique, powerful, and compelling type of application that draws people to that technology in droves and makes it a part of the computing landscape for the foreseeable future.
  • For personal computers in general, that application was the spreadsheet. 
  • For the Apple Macintosh, that application was desktop publishing. 
  • For the Internet, that application was the World Wide Web. Sure, everyone loved e-mail, but the World Wide Web has turned the Internet into a seething pool of e-commerce, personal and technical information, social networking, and who knows what else in the future.
This section explains the flip side of surfing the Web, which is how to set up a Web server so that you can deliver Web pages and other content over the Web to anyone who has access to your server. Most businesses and academic environments today have both externally available and internal-only Web servers. Many people even set up Web servers on their home networks to facilitate Web-based scheduling, document sharing, a central repository for photos, and just about anything else that you can think of.

World Wide Web 101

If you are new to the Web, this section provides some quick history and a sampling of Web buzzwords so that I won’t surprise you by using new terms at random.

You Say URL, I Say URI...
Different Web-aware applications often use different terms for what you and I might simply think of as “Web addresses.” URL (Uniform Resource Locator) is the traditional acronym and term for a Web address, but the acronym and term URI (Uniform Resource Identifier) is actually more technically correct. Another acronym and term that you may come across is URN (Uniform Resource Name).
    The relationship between these acronyms is the following:
  • a URI is any way to identify a Web resource. 
  • A URL is a URI that explicitly provides the location of a resource and the protocol used to retrieve it. 
  • A URN is a URI that simply provides the name of a resource, and may or may not tell you how to retrieve it or where it is located.
The bottom line is that most people think of and use the terms URI, URL, and “Web address” interchangeably. If you want to pick just one, URI is the most technically correct term to use.
In 1989, what has become the World Wide Web first entered the world in the mind of Tim Berners-Lee at CERN (Conseil Européen pour la Recherche Nucléaire), the European Laboratory for Particle Physics near Geneva, Switzerland. The term World Wide Web wasn’t actually coined until 1990, when Tim Berners-Lee and Robert Cailliau submitted an official project proposal for developing the World Wide Web. They suggested a new way of sharing information between researchers at CERN who used different types of terminals and workstations. The unique aspect of their information-sharing model was that the servers would host information and deliver it to clients in a device-independent form, and it would be the responsibility of each client to display (officially known as render) that information. Web clients and servers would communicate using a language (protocol) known as HTTP, which stands for the HyperText Transfer Protocol.
Hypertext is just text with embedded links to other text in it. The most common examples of hypertext outside of the World Wide Web are various types of online help files, where you navigate from one help topic to another by clicking on keywords or other highlighted text. The most basic form of hypertext used on the Web is HTML, the HyperText Markup Language, which is a structured hypertext format that I’ll talk about a little later in this section.
On the World Wide Web, the servers are Web servers and the clients are typically browsers, such as Firefox, Opera, SeaMonkey, Netscape, Microsoft Internet Explorer, Apple’s Safari, and many others, running on your machine. To retrieve a Web page or other Web resource, you
  1. enter its address as a Uniform Resource Identifier (URI) in your browser by either typing it in or clicking on a link that contains a reference to that URI. 
  2. Your browser contacts the appropriate Web server, which uses that URI to locate the resource that you requested and 
  3. returns that resource as a stream of hypertext information that your browser displays appropriately, and you’re off and running!
 Today’s browsers can understand many protocols beyond HTTP, including FTP (File Transfer Protocol, used to send and receive files), file (used to access local files), POP (Post Office Protocol, used to retrieve electronic mail), and NNTP (Network News Transfer Protocol, used to send and receive Usenet News postings). Which protocol you use to retrieve a specific Web resource is encoded into the URI, and is referred to as a scheme in Web nerd terms. A URI specifies three basic things:
These three things are a scheme, a host, and an optional pathname, usually written together as scheme://host/pathname. The scheme is one of http, ftp, file, and many more, and specifies how to contact the server running on host, which the Web server then uses to determine how to act on your request. The pathname is an optional part of the URI that identifies a location used by the server to locate or generate information to return to you.
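As a quick sketch of those three parts, the shell itself can pull a URI apart using nothing but parameter expansion (the URI below is a hypothetical example, not a real site):

```shell
uri="http://www.example.com/docs/index.html"

scheme="${uri%%://*}"    # everything before "://"
rest="${uri#*://}"       # everything after "://"
host="${rest%%/*}"       # up to the first "/"
pathname="/${rest#*/}"   # the remainder, with its leading "/" restored

# prints: scheme=http host=www.example.com pathname=/docs/index.html
echo "scheme=$scheme host=$host pathname=$pathname"
```

Real browsers use full-blown URI parsers, of course, but the three-part split is the same.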
    Web pages consist of a static or dynamically generated text document that can contain text, links to other Web pages or sites, embedded graphics in a variety of formats, references to included documents such as style sheets, and much more. These text documents are created using a structured markup language called HTML, the HyperText Markup Language.
    A structured markup language is a markup language that enforces a certain hierarchy where different elements of the document can appear only in certain contexts. Using a structured markup language can be useful to guarantee that, for example, a heading can never appear in the middle of a paragraph. Like documents in other modern markup languages, HTML documents consist of logical elements that identify the type of each element — it is the browser’s responsibility to identify each element and determine how to display (render) it. Using a device-independent markup language simplifies developing tools that render Web pages in different ways, convert the information in Web pages to other structured formats (and vice versa), and so on.

Introduction to Web Servers and Apache

As mentioned in the previous section, the flip side of a Web browser is the Web server, the application that actually locates and delivers content from a specified URI to the browser. What does a Web server have to do? At the most basic level, it simply has to deliver HTML and other content in response to incoming requests. However, to be useful in a modern Web-oriented environment, a Web server has to do several things. The most important of these are the following:

  • Be flexible and configurable, making it easy to add new capabilities and Web sites and to support increasing demand without recompilation and/or reinstallation.
  • Support authentication to limit users who can access specific pages and Web sites.
  • Support applications that dynamically generate Web pages, such as Perl and PHP, to support a customizable and personal user experience.
  • Maintain logs that can track requests for various pages so that you can both identify problems and figure out the popularity of various pages and Web sites.
  • Support encrypted communications between the browser and server, to guarantee and validate the security of those communications.
The order of importance of these various requirements depends on whether you are a systems administrator or e-commerce merchant, but all modern Web servers must provide at least these capabilities.
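To give a flavor of the authentication requirement, Apache can restrict access to a directory with just a few directives. The directory path, realm name, and password-file location in this sketch are made up for illustration; the directives themselves are standard Apache 2.0/2.2 syntax:

```
<Directory "/var/www/private">
    AuthType Basic
    AuthName "Members Only"
    AuthUserFile /etc/apache2/htpasswd
    Require valid-user
</Directory>
```

With this in place, the server demands a username and password (checked against the htpasswd file) before serving anything under /var/www/private.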
    Many different Web servers are available today, depending on your hardware platform, the software requirements of third-party software that a Web site depends on, your fealty to a particular operating system vendor, and whether or not you are willing to run open source software, get additional power, and save money.
    As you might expect, the first Web server in the world went online at CERN, along with the first Web browser. These were written and ran on NeXT workstations, not exactly the world’s most popular platform (sadly enough). The first test of a Web server outside of Europe was made using a server running at the Stanford Linear Accelerator Center (SLAC) in the United States.
    The development focus for Web servers that ran on more popular machines was initially the NCSA (National Center for Supercomputing Applications) Web server, known as NCSA httpd (HTTP Daemon). Their development of a freely available Web server paralleled their development of the NCSA browser, known as Mosaic. When one of the primary developers of NCSA httpd (Rob McCool) left the NCSA, a group of NCSA httpd fans, maintainers, and developers formed to maintain and support a set of patches for NCSA httpd. This patched server eventually came to be known as the Apache Web server. Though the official Apache Web site used to claim that the name “Apache” was chosen because of their respect for the endurance and fighting skills of the Apache Indians, most people (myself included) think that this was a joke, and that the name was chosen because the Web server initially consisted of many patches — in other words, it was “a patchy Web server.”
    Two Apache servers are available, contained in the packages apache and apache2. The primary differences between these two versions of the Apache Web server are their code base, their vintage, and how you install and maintain them.
  • The apache package is the latest and greatest version of the Apache 1.x family of Web servers, which was excellent in its day, is still extremely popular, and is still in use in many Web sites across the Net. 
  • However, the apache2 package contains the latest and greatest version of the Apache 2.x Web server, which is essentially “Apache, the Next Generation.” Though things work differently in Apache 2.x, especially from a system administrator’s point of view, Apache 2.x is a far superior Web server and where future Apache extension development is going to take place.
I explain how to install and configure the Apache 2 Web server. Any references to Apache should be taken to refer to the Apache 2.x Web server. Today, Apache Web servers installed at sites across the Internet deliver more Web content than any other Web server. I forget the name of the second most popular Web server, but it only runs on a single operating system (which is not Linux) and therefore loses conceptually as well as numerically.

Installing Apache

Apache is installed in different ways depending on whether you are running a system installed from an Ubuntu Server CD, an Ubuntu Alternate CD, or an Ubuntu Desktop CD. The differences boil down to whether or not your system has a GUI as follows:
  • If you installed your system from an Ubuntu server CD and chose the Install to hard disk option, your system does not have a GUI unless you subsequently installed one. You will probably want to install the Apache 2 Web server using aptitude, because this will also install some recommended packages that you will find useful, such as the Apache documentation.
  • If you installed your system from an Ubuntu server CD and chose the Install a LAMP server option, your system does not have a GUI unless you subsequently installed one. However, the Apache 2 Web server was installed as part of your LAMP (Linux, Apache, MySQL, and PHP) server installation. You can skip this installation section and move on.
  •  If you installed your system from an Ubuntu Alternate CD, you have even more options:
If you selected the Install in text mode option, your system has a GUI and you will probably want to install Apache using Synaptic, as explained in the section entitled “Installing Apache Using Synaptic.”
If you selected the Install in OEM mode option, your system has a GUI and you will probably want to install Apache using Synaptic, as explained in the section entitled “Installing Apache Using Synaptic.”
If you selected the Install a server option, your system does not have a GUI unless you subsequently installed one. You will probably want to install the Apache 2 Web server using   aptitude, as explained in the section entitled “Installing Apache from the Command Line,” because this will also install some recommended packages that you will find useful, such as the Apache documentation.
  • If you installed your system from an Ubuntu Desktop CD, your system has a GUI and you will probably want to install Apache using Synaptic, as explained in the section entitled “Installing Apache Using Synaptic.”

Installing Apache from the Command Line

It is easiest to install the Apache Web server from the command line using either apt-get or aptitude.  Of these two, I suggest that you use aptitude to take advantage of its ability to install recommended packages as well as the basic packages required to run and monitor an Apache Web server on your Ubuntu  system.
As mentioned previously, two versions of the Apache Web server are available in different packages, which have different dependencies and recommended packages. This section focuses on installing the Apache 2 Web server. To install the older, Apache 1.3.x Web server, you must have the universe repositories enabled, and you would specify the apache package on the command line rather than the  apache2 package. I strongly suggest that you use the Apache 2 Web server unless you must use the Apache 1.3.x Web server because you need to use libraries or modules that are not yet available for Apache 2.
To install the Apache 2 Web server from the command line using aptitude, execute the following  command:
$ sudo aptitude -r install apache2
You will be prompted for your password, and then again to confirm that you want to install the apache2 packages, required packages for apache2, and recommended packages for use with the apache2 package. Press return or type Y and press return to accept these packages, and the Apache 2 Web server and friends will be installed, added to your system’s startup sequence, and started for you. You’re now ready to configure your Web server and add content. Skip to the section entitled “Configuring Apache” for more information.

Installing Apache Using Synaptic

To install the packages required to run and monitor an Apache Web server on your Ubuntu system, start the Synaptic Package Manager from the System ➪ Administration menu, and click Search to display the search dialog. Make sure that Names and Descriptions are the selected items to look in, enter apache as the string to search for, and click Search.
After the search completes, depending on how your repositories are configured, you will see that two Apache servers are available, contained in the packages apache and apache2. As explained earlier, the apache package contains the latest version of the Apache 1.x family of Web servers, which was great in its day and is still in use on a zillion Web sites across the Net, while the apache2 package contains the Apache 2.x Web server, “Apache, the Next Generation.” Though things work differently in Apache 2, especially from a system administrator’s point of view, Apache 2.x is a far superior Web server and where future Apache extension development is going to take place. Telling you to install anything else would be doing you a disservice.

Right-click on the apache2 package and select Mark for Installation to select that package for installation from the pop-up menu. You may also want to select the apache-doc package, which provides all of the official Apache project documentation for Apache 2.
    A dialog will display that lists other packages that must also be installed and asks for confirmation. When you see this dialog, click Mark to accept these related (and required) packages.
    Next, click Apply in the Synaptic toolbar to install the Apache 2 server and friends on your system. Once the installation completes, you’re already running an Apache 2 Web server, though it is somewhat limited in its initial capabilities. See the next few sections for information on how to configure it, install Web pages, and generally make your Apache 2 Web server more useful.

Apache 2 File Locations

This section provides a quick overview of the default locations of the configuration files, binaries, and content associated with the Apache 2 Web server on your Ubuntu system:
  • /etc/apache2: A directory containing the configuration files for the Apache 2 Web server. The primary configuration file in this directory is the file apache2.conf.
  • /etc/apache2/conf.d: A directory containing local configuration directives for Apache 2, such as those associated with third-party or locally installed packages.
  • /etc/apache2/envvars: A file containing environment variables that you want to set in the environment used by the apache2ctl script to manage an Apache 2 Web server.
  • /etc/apache2/mods-available: A directory containing available Apache 2 modules and their configuration files.
  • /etc/apache2/mods-enabled: A directory containing symbolic links to actively enabled Apache 2 modules and their configuration files, located in the /etc/apache2/mods-available directory. This is analogous to the use of symbolic links to start various processes from the scripts in /etc/init.d at different run levels.
  • /etc/apache2/sites-available: A directory containing files that define the Web sites supported by this server.
  • /etc/apache2/sites-enabled: A directory containing symbolic links to the actively enabled Web sites for this server, whose configuration files are located in the /etc/apache2/sites-available directory. This is analogous to the use of symbolic links to start various processes from the scripts in /etc/init.d at different run levels.
  • /etc/default/apache2: A configuration file that determines whether the Apache 2 Web server should automatically start at boot time.
  • /etc/init.d/apache2: A shell script that uses the apache2ctl utility to start and stop an Apache 2 Web server.
  • /etc/mime.types: The default MIME (Multipurpose Internet Mail Extensions) file types and the extensions that they are associated with.
  • /usr/lib/cgi-bin: The location in which any CGI (Common Gateway Interface) scripts for a default Apache 2 Web server will be installed.
  • /usr/sbin/apache2: The actual executable for the Apache 2 Web server.
  • /usr/sbin/apache2ctl: An administrative shell script that simplifies starting, stopping, restarting, and monitoring the status of a running Apache 2 Web server.
  • /usr/share/apache2-doc: A directory that contains the actual Apache 2 manual (in the manual subdirectory). This directory is present only if you’ve installed the apache2-doc package (as suggested earlier).
  • /usr/share/apache2/error: A directory containing the default error responses delivered by an Apache 2 Web server.
  • /usr/share/apache2/icons: A directory containing the default set of icons used by an Apache 2 Web server. This directory is mapped to the directory /icons in your Apache server’s primary configuration file.
  • /var/log/apache2/access.log: The default access log file for an Apache 2 Web server. This log file tracks any attempts to access this Web site, the hosts that they came from, and so on.
  • /var/log/apache2/error.log: The default error log file for an Apache 2 Web server. This log file tracks internal Web server problems, attempts to retrieve nonexistent files, and so on.
  • /var/run/apache2/: A text file used by Apache 2 to record its process ID when it starts. This file is used when terminating or restarting the Apache 2 server using the /etc/init.d/apache2 script.
  • /var/www/apache2-default: A directory containing the default home page for this Web server. Note that the default Apache 2 Web server does not display the content of this directory correctly — I’ll use that as an example of configuring a Web site in the next section.

Some of these directories, most specifically the /etc/apache2 configuration directory, contain other files that are included or referenced by other files in that same directory.

Configuring Apache

As mentioned in the previous section, the configuration files for the Apache 2 Web server are located in the directory /etc/apache2. Configuration files for Web sites that are available in an Apache 2 Web server are located in the directory /etc/apache2/sites-available. To actually support a site from your Web server, you must create a configuration file for that Web site in /etc/apache2/sites-available, and then create a symbolic link to that configuration file in the /etc/apache2/sites-enabled directory.
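The sketch below imitates that symlink mechanism in a scratch directory (/tmp/apache-demo stands in for /etc/apache2, and the site name is hypothetical). On a real Ubuntu system the a2ensite and a2dissite helper scripts manage these links for you:

```shell
# Create stand-ins for sites-available and sites-enabled.
mkdir -p /tmp/apache-demo/sites-available /tmp/apache-demo/sites-enabled

# A site's configuration file lives in sites-available...
echo "# configuration for mysite" > /tmp/apache-demo/sites-available/mysite

# ...and is activated by a symbolic link from sites-enabled.
ln -sf ../sites-available/mysite /tmp/apache-demo/sites-enabled/mysite

# The link resolves to the real configuration file.
cat /tmp/apache-demo/sites-enabled/mysite
```

Deactivating a site is then just a matter of removing the link; the configuration file itself stays put in sites-available, ready to be re-enabled later.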
    The only Web site that is provided out of the box with a standard Apache 2 installation is its default Web site, which you would expect to be able to access at http://hostname. Unfortunately, attempting to access this URI on a newly installed Ubuntu Web server often displays a bare directory listing titled “Index of /” instead.
If you are creating a new Web site and want it to be your Web server’s default page, you can simply put your content in the /var/www directory, where things would work fine immediately. I’m using the vagaries of Ubuntu’s default Web page to demonstrate some of the statements in a server configuration file.
Let’s use that as an opportunity to explore the configuration file for this Web site, explore its syntax, and change anything that we need to change to see a standard default Apache Web site. The following is a listing of the file /etc/apache2/sites-available/default, to which /etc/apache2/sites-enabled/000-default is a symbolic link that activates the site on this server. (I’ve added line numbers to make it easier to refer to different entries — they do not actually appear in the file!)
         1. NameVirtualHost *
         2. <VirtualHost *>
         3.      ServerAdmin webmaster@localhost
         4.      DocumentRoot /var/www
         5.      <Directory />
         6.            Options FollowSymLinks
         7.            AllowOverride None
         8.      </Directory>
         9.      <Directory /var/www/>
        10.          Options Indexes FollowSymLinks MultiViews
        11.          AllowOverride None
        12.          Order allow,deny
        13.          allow from all
        14.          # Uncomment this directive is you want to see apache2’s
        15.          # default start page (in /apache2-default) when you go to /
        16.          #RedirectMatch ^/$ /apache2-default/
        17.      </Directory>

        18. ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
        19. <Directory "/usr/lib/cgi-bin">
        20.          AllowOverride None
        21.          Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
        22.          Order allow,deny
        23.          Allow from all
        24. </Directory>

        25. ErrorLog /var/log/apache2/error.log
        26. # Possible values include: debug, info, notice, warn, error, crit,
        27. # alert, emerg.
        28. LogLevel warn
        29. CustomLog /var/log/apache2/access.log combined
        30. ServerSignature On
        31. Alias /doc/ "/usr/share/doc/"
        32. <Directory "/usr/share/doc/">
        33.          Options Indexes MultiViews FollowSymLinks
        34.          AllowOverride None
        35.          Order deny,allow
        36.          Deny from all
        37.          Allow from ::1/128
        38. </Directory>
        39. </VirtualHost>

The first thing that I want to change here is line 3, which sends any mail directed to the Webmaster for this site to webmaster@localhost, an address that probably doesn’t exist on your machine. You can either set up a local alias for Webmaster in your mail server configuration or simply change this directive to an explicit site-wide address that you’ve already assigned somewhere.
    The next thing to fix is line 16, which (once uncommented) maps the top-level URI (i.e., anything that begins with a slash, followed immediately by the end of the line) for the site to the DocumentRoot’s /apache2-default directory. To fix this, simply remove the hash mark at the beginning of the line.
Now, let’s restart the Web server to see if this has changed things:
 $ sudo /etc/init.d/apache2 restart
Visiting the same URI as before now shows the right page, which is more like what you expect to see from a vanilla Apache Web server.
    Poking around on this page, you can see that the author of the page created a hyperlink called documentation that points to /manual/. However, there is no such directory, nor is there an entry in the server’s configuration file defining a redirect to some other directory. So let’s make one. Create something like the entry for the /doc/ directory that’s shown in lines 31 through 38, but simplify it a bit:
         1. Alias /manual/ "/usr/share/doc/apache2-doc/manual/"
         2. <Directory "/usr/share/doc/apache2-doc/manual/">
         3.      Order deny,allow
         4.      Deny from all
         5.      Allow from 192.168.6 127.0.0.1
         6. </Directory>
The first line defines an alias called /manual/ that actually points to the directory /usr/share/doc/apache2-doc/manual/, which is where Apache’s online manual lives. The rest of the lines define who has access to that directory and under what circumstances. Line 2 defines the beginning of directives related to the directory /usr/share/doc/apache2-doc/manual/, and line 6 identifies the end of a block of directives for a specific directory. Lines 3, 4, and 5 specify how access control works. Line 3 says that any statements denying access to the directory are processed before any that allow access to the directory. Line 4 denies all access to that directory, while line 5 allows access to that directory from any host whose first three octets are 192.168.6 (the subnet on which this Web server is running), and from the loopback address for the host. After adding these changes to the file (they must come before the closing </VirtualHost> directive shown in line 39 of the previous example, because they are part of the definition for this host on this Web server) you can restart the Web server using the same command as before:
$ sudo /etc/init.d/apache2 restart
Visiting the same URI as before and clicking the documentation hyperlink now shows the desired page: the Apache manual.
    You may note that there was no equivalent to line 33 of the original server configuration file. There was no need to provide those directory browsing options: I knew that the directory contained HTML files, so the following options were unnecessary:
  • Indexes: Shows an index of the directory if no index.html file is present.
  • MultiViews: Enables content negotiation, where the browser tries to find the best match for a request. In my case, I only want to see the docs in my default language, locale, and character set, so no negotiation is necessary.
  • FollowSymLinks: I know that there are no symbolic links in this directory, so there’s no need to specify that they should be followed.
 The Apache documentation is really quite good and explains all of the site configuration directives that I don’t describe here.


As in any debugging or troubleshooting exercise, log files are your friends. Lines 25, 28, and 29 in the original server configuration file shown earlier identify the log files used by this server, and the level of logging that occurs.
  • Line 25 identifies the name of the error log file as /var/log/apache2/error.log
  • Line 28 sets the logging level to warn (warnings), which is slightly more useful than only logging errors, but is not as useful as debug when actually debugging a new site or server. 
  • Line 29 tells the server to create a single log file named /var/log/apache2/access.log that will log all access requests to the server in NCSA combined log format.
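The combined format named on line 29 is a standard Apache log format, defined elsewhere in the stock configuration by a LogFormat directive along these lines:

```
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
```

Each request then produces one line recording the client host (%h), the timestamp (%t), the request line (%r), the response status (%>s), the bytes sent (%b), and the Referer and User-Agent headers.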
 The following Apache log files are exceptionally useful for debugging purposes:
  • access.log: Shows all attempts to access the server, listing the IP address of the host that attempted access, a timestamp, the actual request that was made, and information about the browser that the request was received from.
  • error.log: Shows all errors of level warning or above (i.e., more serious) that the server encountered when trying to process an access request. This includes pages that can’t be found, directories to which access was denied, and so on.
Apache 2’s logging levels are very useful in controlling the amount and type of information that appears in the Apache logs. These levels are the following:
  • emerg: Reports only emergency conditions that make the Web server unstable.
  • alert: Logs situations requiring immediate action, which may identify problems in the host system.
  • crit: Logs critical errors that may indicate security, server, or system problems.
  • error: Reports noncritical errors such as missing pages, bad server configuration directives, and general error conditions.
  • warn: Logs messages that warn of noncritical problems or internal conditions that should be investigated.
  • notice: Reports normal but significant conditions that should still be looked into.
  • info: Logs informational messages that may help you identify potential problems or suggest possible reconfigurations.
  • debug: Logs pretty much every state change on the system, such as every file open and every server activity during initialization and operation.
You should never set the log level to anything less verbose than crit on a production Web server. I generally find that warn is the best choice for production servers, reserving notice, info, or debug for when a server is actually having performance or responsiveness problems. Also, don’t forget that log files are only useful if you actually look at them.
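Because each error-log entry is tagged with its level, a quick grep can pull out just the lines worth worrying about. The sample entries below are fabricated for illustration (made-up timestamps, client address, and paths):

```shell
# Create a few fake entries in the Apache 2 error-log format.
cat > /tmp/sample-error.log <<'EOF'
[Sat Mar 20 10:01:12 2010] [error] [client] File does not exist: /var/www/favicon.ico
[Sat Mar 20 10:02:30 2010] [warn] child process 4242 still did not exit, sending a SIGTERM
[Sat Mar 20 10:03:44 2010] [notice] Apache/2.2.x (Ubuntu) configured -- resuming normal operations
EOF

# Keep only lines at level warn or more serious.
grep -E '\[(warn|error|crit|alert|emerg)\]' /tmp/sample-error.log
```

On a live server you would point the same grep at /var/log/apache2/error.log, or combine it with tail -f to watch problems as they happen.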


  • Ubuntu Linux Bible by William von Hagen (Wiley; ISBN-13: 978-0-470-03899-4)
  • In addition to reference material, the Apache2 docs include several tutorials and how-to style articles that provide practical, hands-on information.
  • Apache Server 2 Bible, 2nd Edition by Mohammed J. Kabir (Wiley, 2002, ISBN: 0-7645-4821-2) 
  • Hardening Apache by Tony Mobily (Apress, 2004; ISBN: 1590593782).

    19 March 2010

    Ubuntu --- Network Configuration and Security

    • Real hackers are the electronic equivalent of the National Geographic Society or “Star Trek,” boldly going where no person has gone before.
    • People who break into systems to damage or exploit them are crackers (a term that carries a rather different meaning in the southern United States), and they give everyone else a bad name.
    • Ubiquitous networking begets the easy availability of tools that enable this sort of thing. The people that use them are often so-called “script kiddies” who use existing tools to demonstrate cleverness in the same way that giving a child an Uzi demonstrates marksmanship.

    Almost from the very beginning of home computing in the 1970s, personal computers have reached out to touch other types of computer systems. Long before ISPs, and before the Internet even existed, home computer fans used modems to access bulletin board systems (BBSs), remote mainframes or minicomputers, and ancient content providers like CompuServe and AOL, using various terminal emulation programs to communicate with each other, transfer files, and so on. Early store-and-forward mechanisms such as the Unix-to-Unix Copy Program (UUCP) and FidoNet provided great ways of disseminating files and other information across slow networks of computer systems that were networks only in the sense that they knew each other’s phone numbers.
        The conversion of the ARPANET to the Internet and its resultant commercialization gave birth to the notion of ISPs, commercial Internet Service Providers, who provided a mechanism for home computers to directly access the Internet, albeit through kludgey point-to-point solutions that still depended on a modem and thus provided Net surfing speeds that were only guaranteed (supposedly) to be greater than zero. Regardless, the advent of the ISP ended the concept of the PC as an asynchronous island, making it a real participant in the Internet, even if slowly.
        As ISPs surfaced and became a fundamental utility for many home computer users, networking and PC hardware costs continued to drop, approaching the commodity pricing normally associated with toasters and refrigerators. The reality of more and more home computer users, even in the same homes, introduced the notion of home computer networks, often stand-alone or with modems still connecting specific systems to the Internet by functioning as a 9600 bps or 56 Kbps gateway via an ISP. We all owe much to those pioneering users of home computers who were willing to access the Net and download porn through such tragically slow connections.
        Broadband Ethernet, even cheaper wired network hardware, and the explosive growth of wireless networking have made networking a true reality for many home computer users. Home computer systems may now have real IP addresses and functional connection speeds to the Internet, and are also commonly members of home computer networks that share those connections to the Net using mechanisms such as Network Address Translation (NAT).
        Better networking and network access come at a price. Ubiquitous networking gives thousands of “randoms” access to your computer system through a real IP address, a Web server, and other network processes. Most of them couldn’t care less, some are simply curious, and others are downright malicious. The last set gives everyone else a bad name by actively trying to break into computer systems to exploit them in some fashion. I have no problem with hackers who are simply curious about what’s out there; exploration has always been a fundamental part of the human condition.
        Unfortunately, there are plenty of unscrupulous crackers who would love to break into your machine and damage it or turn it into some sort of zombie system, either to supposedly demonstrate their cleverness or to somehow make a buck. Sigh. Ubiquitous networking begets the easy availability of tools that enable this sort of thing.
        The bottom line of ubiquitous networking is that security becomes everyone’s job. If you live in a small town that considers taking two newspapers from the box on the corner a serious crime, locking your door at night may seem silly. Unfortunately, when you use a personal computer with network access, you are part of the big city known as the Internet. The administrators of enterprise and academic systems that require continuous access to the Internet have known this for a long time. Sadly enough, nowadays your grandmother, parents, and you have to worry about it too. Security is more of a concern today than it has ever been before, and tomorrow will just be worse.
    This post provides a basic introduction to networking, explains the tools that Ubuntu Linux provides to graphically configure and test your network, and (most importantly) provides some general guidelines on how to secure your system to protect it as best as anyone can. There’s an old saying in the IT biz that the only truly secure system is one that isn’t connected to anything. Although this is true, it’s also impractical.
        There are easy rules to follow to minimize the chances that your system will be broken into. You’re already running Ubuntu Linux, which puts you miles ahead of the millions of vulnerable Windows 98, ME, 2000, XP, and Vista users out there.

    Networking 101

    Most modern computer systems can communicate with other systems and devices over a type of network called Ethernet, using the Transmission Control Protocol/Internet Protocol (TCP/IP) and User Datagram Protocol (UDP). Ethernet was invented by Xerox Corporation at Xerox PARC (the Palo Alto Research Center) in the early 1970s. Like most things they’ve invented, except for the photocopier, Xerox failed to make money from Ethernet, which was actually commercialized by many other companies (like 3Com, founded by the inventor of Ethernet networking, Bob Metcalfe, who knew a good thing when he invented it).
        Until a few decades ago, “the Internet” was a fairly techie term, used only by people whose employers or academic experience offered connectivity to the Internet or its predecessor, the ARPANET. The creation and popular explosion of the World Wide Web and the advent of e-mail as a replacement for phone calls changed all that — suddenly, there was a reason for people to want (or perhaps even need) access to the Internet.
        Early home Internet connectivity was primarily done through dial-up connections that emulated TCP/IP connections over dial-up lines using protocols such as Serial Line Internet Protocol (SLIP), Compressed SLIP (CSLIP), or Point-To-Point Protocol (PPP). Unless you were a serious computer geek, developer, or
     researcher, a home network was somewhat rare, but the advent of broadband access to the Internet through cable and telephone providers changed all that. As mentioned, home networks are becoming more common, but most people have never needed to set one up before now. If you use a single PC, Mac, or workstation as your sole home machine, a straight connection to a cable or DSL modem works just fine. However, the instant you want to enable multiple machines to communicate over a home network, you may encounter unfamiliar terms like hubs, switches, 10-BaseT, RJ45, crossover cables, uplink ports, packets, gateways, routers, Cat5, and a variety of others that pass for popular nouns among nerdier users. Here I explain how to set up a simple home network and try to make you comfortable with the network-related terms in use. For more detailed information, consult any of the hundreds of texts available on home networking.
    The basic element of a modern network connection is a standard Ethernet cable, which is just a length of multistrand cable with a connector at each end that enables you to connect the network card in your computer (or whatever device) to another network device. The most common connectors used today are plastic RJ-45 connectors: transparent jacks that look like fatter versions of a standard telephone cable connector. Ethernet cables that use these connectors are often known as 10-BaseT, 100-BaseT, or even 1000-BaseT, where the numeric portion of the name indicates the speed of the network; the connectors are the same. 1000-BaseT is more commonly known as gigabit Ethernet and is the up-and-coming standard, because things tend to get faster. 10/100 Ethernet (10 megabit or 100 megabit) is the standard nowadays.
     You may also encounter the term 10-Base2 when researching network cards. This is an older type of 10-megabit Ethernet cabling that uses coaxial cable with Bayonet Neill-Concelman (BNC) connectors, and it is not supported by most networking hardware today.
    The best way to visualize the Internet or any Ethernet network is as an extremely long piece of cable to which several computers and network devices are attached. In the simplest case, you must use a device called a hub, switch, or router to attach multiple machines to an Ethernet.
    • A hub is a device with multiple incoming connectors for attaching the Ethernet cables from different machines, with a single output connector that attaches it to another Ethernet device such as a cable modem, another hub, or a switch, router, or gateway. Network communications on any incoming port of the hub are broadcast to all other devices on the hub and are also forwarded through the outgoing connection. 
    • Switches are much like hubs on steroids: they keep track of how network connections between different machines are made and reserve dedicated internal circuitry for established connections. Switches are therefore typically faster than hubs because they do more, and were traditionally more expensive, though that price difference has largely disappeared nowadays.
    • Gateways and routers are similar to hubs and switches, but are designed to provide connectivity between different networks. If a machine that you are trying to connect to isn’t immediately found on your local network, the request is forwarded through your gateway, which then sends it on.
    Network communication is done using discrete units of information that are known as packets. Packets contain the Internet Protocol (IP) address of the host that they are trying to contact. IP addresses are in the form of NNN.NNN.NNN.NNN, and are the network equivalent of a post office box, uniquely identifying a specific machine. Packets for an unknown local host are sent through your gateway.
    Routers are expensive, sophisticated pieces of hardware that direct network communication between multiple networks, translate packets between different network communication protocols, and limit network traffic to relevant networks so that your request to retrieve a file from a machine in your son’s bedroom isn’t broadcast to every machine on the Internet.
     The most common way to connect machines on a home network is to use a hub or a home gateway that is connected to your cable or DSL modem. The difference is that a hub simply forwards packets through its outgoing connector (known as an uplink port because it links the network connections on that device with those on another device, and is therefore wired differently), while a home gateway can convert internal network addresses to addresses that are compatible with the outside world before sending the information on through its uplink connector. If you used a hub to connect your home network to your cable or DSL modem, each machine on your home network would require an IP address that is unique on the Internet. This can be expensive, because most ISPs charge for each unique host that can be connected to the Internet from your home at any given time. Home gateways provide a way around this because they enable your home network to use a special type of IP address, known as a nonroutable IP address, to assign unique internal network addresses.
        The gateway then internally translates these to appropriate external addresses if you’re trying to connect to a machine on the Internet. The most common nonroutable IP addresses are in the form of 192.168.X.Y,  where X and Y are specific to how you’ve set up your network.
    If you’re really interested, you can get more information about nonroutable IP addresses and address translation in the Internet RFCs (Requests for Comments) that define them: RFC 1597 and its successor, RFC 1918.
    IP addresses are assigned to computer systems in two basic ways, either statically or dynamically.
    • Static addresses are addresses, unique on your network, that are always assigned to a particular machine.
    • Dynamic addresses are addresses that are automatically assigned to a computer system or network device when you turn it on. 
    Most ISPs use dynamic addresses because only a limited number of IP addresses are available on the Internet. Using dynamic IP addresses enables your ISP to recycle and reassign IP addresses as people turn their machines off and on. Most dynamic IP addresses nowadays are assigned using a protocol called  Dynamic Host Configuration Protocol (DHCP), which fills out the network information for your system when it activates its network interface, including things like
    • the IP address of a gateway system and 
    • the IP addresses of Domain Name Service (DNS) servers that translate between hostnames and the IP addresses that they correspond to.
    To use static addresses on your home network, you simply assign each machine a unique, non-routable IP address from a given family of nonroutable IP addresses. For example, most of my home machines have static addresses in the form of 192.168.6.Y. Because I use a home gateway, I’ve configured it to do address  translation (more specifically known as NAT, or Network Address Translation) to correctly translate between these addresses and the external IP address of my home gateway box.
        If you want to use Dynamic IP addresses on a home network, one of the machines on your home network must be running a DHCP server. Most home gateways nowadays have built-in  DHCP servers that you simply configure to hand out IP addresses from a specific range of addresses ( through, in my case). Once you activate address translation on your home  gateway, your gateway will route packets appropriately.
    Remember that your home gateway is probably getting its IP address by contacting your ISP’s DHCP server, whereas hosts on your internal network will get their IP addresses from your DHCP server. 
    Don’t set up hosts on an internal network to contact your ISP’s DHCP server unless you:
    • have only a single machine on your home network or 
    • want every one of your machines to be visible on the Internet.
    If you are using:
    • a home gateway that doesn’t provide a DHCP server or
    • want to have more control over what your DHCP server does, or
    • are using Ubuntu Linux in an enterprise or commercial setting
    you may want to set up your own DHCP server on an Ubuntu system.
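On the Ubuntu releases of this era, that means installing the dhcp3-server package (sudo apt-get install dhcp3-server) and editing /etc/dhcp3/dhcpd.conf. A minimal subnet declaration might look like the following sketch; all of the addresses are examples built around the 192.168.6 network used elsewhere in this post:

```
subnet netmask {
    range;                  # addresses handed out to clients
    option routers;                 # your home gateway
    option domain-name-servers,;  # OpenDNS, as an example
}
```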
        A final aspect of networking is how your system identifies and locates specific computer systems on the Internet. This is typically done through the Domain Name Service (DNS).
        As you might expect, the Internet is knee-deep in Web sites that provide more general information about home networking. For truly detailed information about setting up and configuring a home network on a specific type of machine and operating system, see any of the hundreds of books on those topics at your local bookstore.

    Manually Configuring Your Network Hardware

    Configuring the network hardware on your computer system is part of the Ubuntu installation process, which requires network access in order to download the bulk of a vanilla installation of Ubuntu Linux.
        However, things change. You may install new network hardware, change existing hardware from relying on DHCP to using static IP addresses on your network, prioritize one interface over another in multiport machines such as laptops, or simply want to have a better understanding of how networking works or is configured on your system(s).
    Past Ubuntu releases provided a convenient tool for reconfiguring existing network interfaces and configuring new ones, in the gnome-network-admin package (started via the System ➪ Administration ➪ Networking menu item). Nowadays (gnome-network-admin is still available in the repositories, but it is no longer supported by Canonical) Ubuntu instead comes with the NetworkManager applet in the top panel. Right-click it and click Edit Connections (you don't have to supply your password), and a dialog displays. The contents of this dialog depend on the number and type of Ethernet interfaces that are available on your system. By default, the dialog always displays a Point-to-Point Protocol (PPP) item regardless of whether a modem is present in your system, because PPP network connections are also possible over standard serial ports.
    Systems on which multiple Ethernet connections are available are quite common today. If you are using multiple Ethernet connections simultaneously, it usually only makes sense to have them connected to different networks, because network routing becomes confusing otherwise. Systems with multiple Ethernet connections, where each connection is attached to a different network, are known as multihomed systems.
     For the rest of this section, I’ll use a sample system that provides both wired and wireless Ethernet interfaces, because that is a common configuration that many laptop users will recognize. Desktop computer systems typically provide one or two Ethernet interfaces; providing multiple wired Ethernet interfaces is fairly uncommon, and is normally seen only in systems that route between multiple networks or that need a separate network for application or system development and testing.
        Regardless of what the initial Network Connections dialog looks like on your system, you can select any of the network interfaces listed on its tabs (Wired, Wireless, and so on) and click Edit to examine or modify its current configuration. You’ll notice that a wired Ethernet interface configured to use DHCP to obtain an address dynamically leaves many of the network configuration options inactive. To give such an interface a static IP address instead:
    1. Go to the IPv4 Settings tab of the Editing dialog for that interface.
    2. Change Method from Automatic (DHCP) to Manual.
    3. Click the Add button and fill in your host's (non-routable) LAN IP address and its netmask.
    4. If you want, fill in the IPs of your DNS servers (primary and secondary), such as those published by OpenDNS. If you are not using DHCP to supply DNS information (that is, with the Automatic (DHCP) addresses only and Manual methods), you must fill in the DNS addresses yourself. This setting is common to all of the Ethernet interfaces on your system, so if you are configuring a second Ethernet interface, you may not need to provide this information.
    5. The Gateway field, when present, usually gets the non-routable (local) IP address of your home gateway, or of whichever machine on your LAN routes traffic (often the same box that runs your DHCP server).
    6. Once you’ve defined the properties for the network interface that you want to configure, click OK to close the dialog.
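The same static setup can be done without the GUI by editing /etc/network/interfaces, the traditional Debian/Ubuntu configuration file. The fragment below is a sketch; the interface name and all addresses are examples built around the 192.168.6 network used elsewhere in this post:

```
# /etc/network/interfaces fragment for a static address on eth0
auto eth0
iface eth0 inet static
```

After saving the file, sudo /etc/init.d/networking restart applies the change.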
    As mentioned in the “Networking 101” section above, most systems today use Domain Name Service (DNS) servers to find the IP addresses associated with different systems on a network. Though you and I simply want to go to a site by name, your computer needs to know the numeric network address of that system.
     Also, as mentioned previously, this is usually necessary only on systems that do not get their IP addresses via DHCP, because most DHCP servers also provide the IP addresses of DNS servers as part of the general network configuration information that they provide.
    Because DNS servers are the usual source of the information that maps hostnames to IP addresses, you can enter only IP addresses in this dialog. If you somehow specified a hostname, your system would need to use a DNS server to figure out the IP address associated with that name, which would cause a nasty chicken-and-egg loop.
    On most systems, your Network settings dialog contains only a single network interface. At this point, you’ll probably want to test your new network configuration to ensure that everything is working correctly. Ubuntu provides a nice graphical tool for testing your system’s networking capabilities; for information about using it, see the section later in this post entitled “Network Testing with GNOME’s Network Tools.”
        If you are using a system with multiple network interfaces, see the next section for information about making the most of them by using different interfaces in different locations.

    Manually Configuring Modem Connections

    As mentioned in the previous section, all Ubuntu Linux installations include an option for establishing network connections via PPP, a way of creating a network interface that runs over a serial or modem connection.
        Though broadband Internet access is becoming more and more common, dial-up connections using protocols such as PPP are still the way in which some people connect to the Internet. I suspect that this will change, both because people will get tired of waiting for complex Web pages to load, and because telephone and cable companies can make a lot more money from you once you get used to the always-on connection that broadband Internet access provides. Many people, including myself, have both; I use my dial-up account primarily as a fallback whenever the cable in my suburban neighborhood goes out, but it’s also generally useful for testing purposes. However, PPP accounts are also useful for portability. Until recently, many of my vacation planning sessions have included getting a free AOL CD and setting up an account so that I can read my mail and submit posts like this one with minimal toll charges from whatever retro paradise my wife and I have chosen to vacation in.
        At any rate, PPP connections to the Internet via a modem are still very useful in many cases. My first Linux systems required me to write a little script, connect to my ISP, sacrifice a chicken, and hope for the best. Both protocols and ISP support have improved since then. Ubuntu’s Network settings utility makes it just as  easy to configure a PPP connection as it is to set up a physical network interface.
        To configure a dial-up PPP connection to a network (provided that you have a hardware modem rather than a winmodem, which offloads its work onto the CPU; winmodems do not work in Linux unless supported by sl-modem-daemon, while hardware modems need no special drivers, and Ubuntu should recognize any external hardware modem), open a terminal window and:
    1. Type sudo pppconfig. This configuration program, included in Ubuntu, sets up an Internet connection; follow the instructions from there. During the configuration process you get the option to give your Internet connection a name, such as myISP. There are plenty of good guides on the Net, and even the man page for pppd is very helpful in itself.
    2. Connect to the Internet by typing sudo pon myISP in the terminal window. With a connection established, you can then download a GUI front end for the pon/poff scripts, such as gnome-ppp if you're using a GNOME desktop, or kppp if you're using KDE.
    3. Type poff to disconnect.
    Another solution is to install the network-admin package and do it through its GUI (I don't describe that here because ADSL is predominant nowadays). Finally, WvDial sacrifices some of the flexibility of programs like "chat" in order to make your dial-up configuration easier. When you install this package, your modem is detected automatically, and you need to specify just three parameters: the phone number, your username, and your password. WvDial knows enough to dial with most modems and log in to most servers without any other help.
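Those three parameters live in /etc/wvdial.conf, which sudo wvdialconf populates with the detected modem settings. A minimal file might look like the following sketch; every value is a placeholder you would replace with your ISP's details:

```
[Dialer Defaults]
Modem = /dev/ttyS0
Baud = 57600
Phone = 5551234
Username = myname
Password = mypassword
```

Running sudo wvdial then dials and negotiates the PPP session using these defaults.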

    Defining and Using Multiple Network Configurations

    As mentioned earlier, if you’re lucky enough to be using a machine with multiple network interfaces, you really don’t want to have multiple Ethernet adapters active on the same network at the same time; this can easily confuse your system when it tries to figure out which interface to use when sending information to that network. (This doesn't apply if the interfaces are configured to work on separate networks.)
        However, having simultaneous access to multiple networks from a single computer system is fairly rare. More commonly, you will either want your system to be on different networks when it is in different locations (home and office, for example), or to use different network interfaces when you are using your system in different locations. Wired Ethernet interfaces are much faster than wireless Ethernet interfaces, so if you are using a laptop with both types of Ethernet interfaces, you’ll want to switch to your wired interface whenever possible.
    As discussed in the last section, a tool called NetworkManager that will do this for you is available from the Synaptic repositories, though you may have limited success using it. A similar tool, called whereami, can also be used to do this for you.
        Automatic network reconfiguration is convenient, but can be tricky to set up and, frankly, can be a pain unless you’re a networking guru and know every networking buzzword around. Ubuntu’s networking dialog makes it easy for you to do this for yourself by defining multiple networking configurations, known as locations, between which you can switch whenever necessary. As described in this section, switching locations is a manual process, but a simple one that requires no configuration beyond setting up the network interfaces correctly and creating locations that enable the ones you want to use.
        Ubuntu’s Network settings tool simplifies defining combinations of network
    configuration settings on your available network interfaces and then saving them with a unique name, known as a location.
        The first step in creating a location is to configure all of your available network interfaces as they would be when your system is in a specific physical location, being used in a certain way. Next, click the Location drop-down menu at the top of the Network settings dialog to display the menu shown in the figure, and select the Create location menu item. A dialog displays, prompting you for a name for this specific combination of configured and unconfigured network interfaces. Enter a Location name that reflects how and where you anticipate using this network configuration combination, and click OK to save it.
        In the future, whenever you want to activate this particular combination of network configuration settings, all you have to do is to select the System ➪ Administration ➪ Networking menu item, enter your password, and then select this location from the Location drop-down menu.
    Creating new locations doesn’t change your existing default networking configuration; it merely adds named combinations to the Locations menu. Once you select a new location, there is no easy way to return to your system’s default settings. Therefore, if you’re going to use multiple locations, it’s a good idea to define a location named Default, which is just a clone of your system’s default configuration. You can then return to your system’s default settings at any time by selecting that location.

    Network Testing with GNOME’s Network Tools

    To maintain its tradition of easy graphical network tools, Ubuntu Linux also provides a convenient graphical tool that simplifies examining the current configuration of any of your network interfaces. Ubuntu provides GNOME’s Network Tools application to give you a graphical display of network configuration information, as well as easy graphical access to a variety of network tools. Select the System ➪ Administration ➪ Network Tools menu item to start the Network Tools application.
        By default, the Network Tools application shows information about your system’s loopback interface. To see  information about a specific interface, click the Network device drop-down menu and select the Ethernet  interface that you’re interested in.
        The easiest and fastest way to identify the current configuration of one of your Ethernet interfaces will probably always be to run the ifconfig interface-name command in an xterm or GNOME Terminal window. As you can see, the text display of Ethernet interface information provided by the ifconfig command still requires a certain amount of interpretation when compared to the friendlier display of information shown in Network Tools.  In addition to a more readable display of basic network configuration information, the Network Tools application supports the graphical display of information produced by several standard network utilities, which traditionally operate only in text mode. The tabs provided in the Network Tools application, along with the purpose of each tab, are the following from left to right:
    • Devices: Displays configuration and traffic summary information for each available network interface on the system. This corresponds to the information provided by the traditional Linux/Unix command-line ifconfig application.
    • Ping: Displays connectivity and availability information by sending packets to a specified host or IP address, and displays elapsed time and success/failure information. This corresponds to the information provided by the traditional Linux/Unix command-line ping application.
    • Netstat: Displays status information about all active and available TCP and UDP network ports on the system. This corresponds to the information provided by the traditional Linux/Unix command-line netstat application.
    • Traceroute: Displays the systems through which communication to a specified host passes and the time required for each intersystem step, known as a hop. This corresponds to the information provided by the traditional Linux/Unix command-line traceroute application.
    • Port Scan: Displays information about available ports and services on a specified remote machine. This roughly corresponds to the information provided by the traditional Linux/Unix command-line nmap application.
    • Lookup: Displays IP address information and available DNS aliases for a specified system. This roughly corresponds to the information provided by the traditional Linux/Unix command-line nslookup or host applications.
    • Finger: Displays any available personal information about a specific user or a specified host. This corresponds to the information provided by the traditional Linux/Unix command-line finger application. Few hosts provide this information any longer.
    • Whois: Displays information about the registrant and technical contact for a specified Internet domain. This corresponds to the information provided by the traditional Linux/Unix command-line whois or bwhois applications.
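    Each of these tabs wraps a command that you can also run by hand. As a small illustration of the "interpretation" of ifconfig output mentioned above, the following sketch extracts the IPv4 address from ifconfig-style output with standard text tools. The sample text and address are hypothetical; on a live system you would pipe the real output of ifconfig eth0 into the same sed command.

```shell
# A minimal sketch: pull the IPv4 address out of ifconfig-style output.
# The sample below is hypothetical example output, not from a real system.
sample='eth0      Link encap:Ethernet  HWaddr 00:11:22:33:44:55
          inet addr:192.168.6.64  Bcast:192.168.6.255  Mask:255.255.255.0'
# sed keeps only the text captured between "inet addr:" and the next space.
addr=$(printf '%s\n' "$sample" | sed -n 's/.*inet addr:\([0-9.]*\).*/\1/p')
echo "IPv4 address: $addr"
```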

    Tips for Securing Your System

    System security is an open-ended topic because it has so many different aspects. These include:
    • physical security
    • login authentication
    • file and filesystem protections
    Entire books have been written about security topics, and more are doubtless on the way. As mentioned, security in all its forms will become an ever-increasing concern because of the growing ubiquity of networking and the increasing availability of easy-to-use tools for probing, exploring, and breaking into remote machines.
        The following are some specific suggestions for increasing the security of your system on a network. As you’d expect, these include some aspects of other security topics but also have their own unique concerns:
    • If you are using an off-the-shelf home gateway, change the gateway's password and, if possible, the name of the administrative user before you put it into service. If I had a nickel for every system that has been broken into because someone didn't change a default password, I wouldn't even know how many nickels I had, because most of these break-ins go unnoticed.
    • Disable any unnecessary services on your system. You can use the Network Tools Port Scan tab to identify ports on your system that are listening for service requests. Disable any services that you are not using through a tool such as the System Boot-Up Manager, which was discussed in the section entitled "Optimizing the Ubuntu Boot Process."
    • Remove accounts for any users that are no longer using your system. This includes system accounts that were created for use by or with services that you are no longer running on your system.
    • Always keep your system up to date using the Ubuntu Update Manager. Patches to system and application software are released for a (good) reason.
    • Monitor important system log files regularly. The /var/log/messages and /var/log/syslog files can be an important source of information about who is trying to break into your system, and how.
    • Change your password regularly. Ubuntu’s dependence on the sudo command rather than the traditional root account for system administration tasks is a useful obfuscation, but your dedicated cracker in Beijing often doesn’t have anything better to do than try and try again.
    As mentioned previously, security is your responsibility. Some interesting applications are available to test and probe your own system, which can be both educational and useful. My long-term favorites are:
    • chkrootkit: Checks for "root kits," the term for precompiled sets of hacked applications that are often installed on systems that have been broken into. Root kits both make it easier for a cracker to get into your system again and collect additional login/password information from the cracked system. For the same purpose, you may also want to install rkhunter.
    • nmap: Probes network connectivity on your machine and identifies potential problem points.
    As you might expect, both of these applications are available in the Ubuntu repositories and can easily be installed on your system using apt-get, aptitude, or the Synaptic Package Manager.
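    As a sketch of what that looks like from the command line (assuming the standard Ubuntu package names chkrootkit, rkhunter, and nmap; these commands need root privileges and a working network connection):

```shell
# Install the probing tools from the Ubuntu repositories, then run them.
sudo apt-get install chkrootkit rkhunter nmap
sudo chkrootkit              # scan the local system for known root kits
sudo rkhunter --check        # a second opinion on root kits and suspect files
nmap localhost               # list the TCP ports open on this machine
```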

    Installing a Firewall

    Firewall is the term used to describe a system that sits between one or more computer systems and monitors and manages network traffic. Just as with the firewall in your automobile, which prevents a fire in the engine compartment from proceeding into the passenger compartment and incinerating its occupants, a network firewall is intended to prevent malicious, spurious, or unnecessary network traffic from moving through it. Many firewalls serve multiple functions, also performing services such as Network Address Translation (NAT), but their primary purpose is to protect against network attacks and other unwelcome intrusions.
        On modern Linux systems, firewalling is typically done using kernel modules that support a packet filtering framework known as netfilter, and an associated interface and user-space command known as iptables.
        Packet filtering refers to the ability to analyze network packets and perform various actions based on their source, destination, type, or other information that they contain.
    Because support for packet filtering is built into the Linux kernel, a Linux system that is directly connected to the Internet can serve as its own firewall, monitoring and managing network traffic before that traffic actually gets to any daemons or network-aware processes that it is running. 
    Of course, a dedicated device or Linux system can also serve as a firewall, and many vendors sell prepackaged solutions that do just that. The fact that many of these off-the-shelf systems run Linux and use the netfilter/iptables mechanism to implement their firewalling solutions is just proof of the power of the Linux kernel’s built-in support for packet filtering.
        Whether or not an Ubuntu system actually requires a firewall is a hot debate topic among Ubuntu fans.
    1. Standard Ubuntu Desktop installations do not expose any open ports to an outside network, so there are no  network ports that need to be protected. 
    2. This is not true, of course, for Ubuntu server systems that expose ports for services such as DNS, e-mail, SSH, a Web server, and so on, so a firewall (more precisely, a firewalled interface) is always a good idea for any server system.
    If you are using your Ubuntu system in an environment that is already protected by a firewall, you probably do not need to set up a firewall on your system. You should, however, make sure that the firewall that your system is located behind is actually doing the right thing by checking with the  manufacturer, your IT group in a business or academic environment, or your Internet Service Provider. Just  because a box has “Firewall” printed on it doesn’t mean that it is actually doing anything.
        As far as Ubuntu desktop systems go, you will probably find yourself opening up some ports on a desktop installation as you use your Ubuntu system over time, and a netfilter/iptables firewall introduces  very little overhead on a desktop system, so I suggest that you always install at least a simple firewall. This  way, if you subsequently increase the exposure of your system by opening ports, the firewall will already be in place. You may want to revisit your initial firewall implementation in the future, but you will at least have some protection even if you neglect firewalling in your excitement to make some new service available from your Ubuntu system. Installing a simple firewall by default is also a good idea if you are setting up systems for friends, relatives, or small businesses where you may not always have complete control over what they add to or activate on their systems.
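    You can check how exposed a particular system actually is before deciding. The traditional netstat command mentioned earlier lists the ports that are listening for incoming connections:

```shell
netstat -tln          # list all listening TCP ports, numerically
sudo netstat -tlnp    # add -p (as root) to see which process owns each port
```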

    Overview of Linux Firewalling and Packet Filtering

    The packet filtering mechanism used by the current Linux kernel (2.6.xx) is a combination of:
    1. a loadable kernel module framework and API called netfilter, and 
    2. an interface and associated and user-space administrative command called iptables. The iptables interface is one of several kernel modules based on the netfilter framework; others include a module that handles Network Address Translation (which enables multiple machines to share one public IP address), and the module that implements and supports connection tracking.
    Throughout the rest of this post, I will collectively refer to this as iptables, because that is the interface that is most commonly associated with modern Linux firewalls and  packet filtering.
        The iptables interface and the netfilter framework are actually the fourth generation of Linux packet filtering solutions. The original Linux packet filtering implementation, ipfw, was ported from BSD-based systems and introduced by Alan Cox in the Linux 1.1 kernel; it was designed to support the creation of simple IP firewalls and routers through packet inspection and filtering. The ipfwadm tool and associated ipfw changes, which simplified creating ipfw-based firewalls, were added to the Linux 2.0 kernel and make up the second generation. The third generation of Linux packet filtering, consisting of a major rewrite of the entire Linux networking layer and introducing the user-space ipchains tool, was introduced in the 2.1 kernel series. The current netfilter framework and iptables interface were introduced in the 2.4 kernel, and have been the standard mechanism for packet filtering, network address and port translation, and general packet manipulation (often referred to as packet mangling) in the 2.6 series of Linux kernels.
        Linux packet filtering works by inspecting incoming and outgoing packets and acting upon them based on filtering rules that have been loaded into the netfilter framework’s filter table by the iptables command.
        By default, the iptables command supports three default sets of rules, known as chains, for filtering network packets using the information stored in the iptables filter table. These default chains are:
    • INPUT
    • OUTPUT, and
    • FORWARD
    1. The rules in the INPUT chain are used to examine and process incoming packets intended for ports on the local machine
    2. The rules in the OUTPUT chain are for examining and processing outgoing packets that are being sent from the local machine
    3. The rules in the FORWARD chain are used to examine and process packets that are being routed through the local machine.
    Each of the default filtering rule chains can have its own set of filtering rules. You can also define other sets of rules and use them for your own purposes. Many modern Linux and other Unix-like systems come with predefined INPUT, OUTPUT, and FORWARD rule chains and automatically load them at boot time. As discussed later in this post, a variety of graphical and command-line software is available for all Linux distributions to make it easy to define your own packet filtering rules.
        Other netfilter-based modules use packet-matching tables other than the filter table. The NAT module uses the nat table, which contains three built-in rule chains:
    • PREROUTING
    • OUTPUT, and
    • POSTROUTING
    Specialized packet manipulation operations use the mangle table, which contains pre-built chains:
    • INPUT
    • OUTPUT
    • FORWARD
    • PREROUTING, and
    • POSTROUTING
    The connection tracking module uses the raw table, which contains preconfigured chains:
    • OUTPUT and
    • PREROUTING
    You must have superuser privileges to examine, create, or modify any netfilter-based rule chains. You can do this by putting iptables commands in a script that is executed as part of the system’s boot process or by using a command such as sudo as a normal user to run the iptables commands with root privileges.
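    To make the INPUT chain discussion concrete, here is a minimal, hand-written sketch of filter-table rules of the kind a firewall tool generates for you. It must run as root, the choice of ports is purely illustrative, and a mistake here can lock you out of a remote machine, so treat it as an example rather than a recommended configuration:

```shell
#!/bin/sh
# A minimal filter-table example using the default INPUT chain (run as root).
iptables -P INPUT DROP                # default policy: drop incoming packets
iptables -A INPUT -i lo -j ACCEPT    # always allow loopback traffic
# Allow replies to connections that this machine initiated:
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j ACCEPT   # allow incoming SSH (port 22)
iptables -L INPUT -n                 # list the rules just installed
```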

    Installing and Configuring a Firewall Using Lokkit

    As mentioned in the previous section, many different software packages are available to help you configure and activate a firewall on your Ubuntu system. These packages include Lokkit (the package described in this section), Firestarter, Fwbuilder, Guarddog, and many more. I think that Lokkit does a great job of setting up a basic firewall, asks the right questions, and is very easy to use, so that’s the package I’ve chosen to discuss in this section.

    Installing Lokkit
    Because whether or not you need a firewall is a hot topic among Ubuntu users, a firewall isn’t installed as part of any default Ubuntu installation. However, as with all software packages on Ubuntu, both the command-line software maintenance tools such as apt-get and aptitude and the Synaptic Package Manager make it easy to install a firewall creation and configuration tool. The one that I suggest installing is Lokkit, which is found in the lokkit package. I also suggest that you install the gnome-lokkit package, which provides an easy-to-use graphical interface that simplifies configuring and customizing a firewall. To install this package using apt-get or aptitude (without the graphical configuration tool), use the commands
    sudo apt-get install lokkit or sudo aptitude -r install lokkit
    There’s no point in installing the gnome-lokkit package if you don’t have a graphical user interface on your Ubuntu system (for example, on Ubuntu Server Edition). To install these packages graphically,
    1. start the Synaptic Package Manager from the System ➪ Administration menu and supply your password to start Synaptic. Once the Synaptic application starts, 
    2. click Search to display the search dialog. Make sure that “Description and Name” are the selected items to search through, enter Lokkit as the string to search for, and click Search
    3. After the search completes, scroll down in the search results until you see the lokkit package, right-click its name, and select Mark for Installation to select that package for installation from the pop-up menu.
    4. After you have selected the lokkit package, you should also select the gnome-lokkit package, which is a graphical GNOME utility for configuring and customizing your firewall. Right-click its name, and select Mark for Installation to select that package for installation from the pop-up menu. 
    5. Selecting this package will display a dialog that suggests other packages for installation that are required for this package. Click Mark to also accept these packages for installation. 
    6. After selecting these packages for installation, click Apply in the Synaptic toolbar to install lokkit and its graphical configuration utility. 
    7. When the installation completes, you can exit from Synaptic.

    Using Lokkit to Set Up a Basic Firewall
    Installing lokkit and the gnome-lokkit graphical configuration utility doesn’t add a menu item for these commands, because you generally run them only once to set up a basic firewall.
    • To start the graphical gnome-lokkit tool, execute the following command from any Ubuntu command line and supply your password in the dialog that displays:
    gksudo gnome-lokkit
    An initial gnome-lokkit dialog displays that provides some basic information about Lokkit. Click Next to proceed.
    • In the next dialog, the Low Security firewall is your best choice in most cases. As discussed earlier, a default Ubuntu Desktop installation doesn’t expose any ports to the outside world, so a firewall is simply extra protection in case you subsequently open system ports to the outside world (or install services that do). If you are configuring a firewall on an Ubuntu Server system, you may want to select the High Security option, but you should be prepared to modify the rules created by Lokkit (or specially configure the services that you have installed) to ensure that the services that you want your server to provide are not being blocked by the firewall. Click Next to proceed.
    • The next dialog asks if you want to trust hosts on your internal network, i.e., hosts that share the first three quads of your system’s IP address. For example, if your system’s IP address is on the 192.168.6 network, selecting Yes here would enable any hosts with IP addresses of the form 192.168.6.XXX to connect to any services that your system provides. You should select Yes if you have more than one host on your internal network and the system that you are configuring is not directly connected to the Internet. Click Next to proceed.
    • The next dialog asks if you want to enable the DHCP port. You should select Yes if you are running (or plan to run) a DHCP server on this system, or if this system gets its IP address from another system using DHCP. Click Next to proceed.
    • The next dialog enables you to select services that you are running on your system, and to which you want other systems to be able to connect. If you are not currently running (and do not plan to run) services such as a DNS, FTP, mail, or Web server, select No. If you are running these services or plan to, select Yes. Click Next to proceed.
    • If you selected Yes, subsequent dialogs ask, respectively, whether you want to enable incoming Web, mail, secure shell, and Telnet services. I suggest that you answer Yes to all of these except for Telnet, which is an older, insecure mechanism for connecting to systems over the network that has largely been replaced by SSH.
    • After answering these dialogs (or immediately, if you selected No in the previous dialog), the Activate your Firewall dialog displays. Click Finish to activate your firewall and exit the gnome-lokkit configuration utility. If you’ve changed your mind, click Cancel; you can always rerun this utility later if you decide that you want to install a firewall. If you click Finish, lokkit performs some basic tests of your firewall, activates it, and adds the /etc/init.d/lokkit startup script to the startup sequence for all system run levels, so the firewall starts each time you boot your system.
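    Once the firewall is active, you can verify what it is actually doing. Assuming lokkit loaded its rules into the standard filter table, listing them with iptables (as root) shows the result:

```shell
sudo iptables -L -n          # list the active packet filtering rules
ls -l /etc/init.d/lokkit     # confirm the startup script is in place
```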


    • Ubuntu Linux Bible by William von Hagen ISBN-13: 978-0-470-03899-4