19 April 2011

Computer Networks Performance

Network Performance

Up to this point (Computer Network Architecture, Computer Network Implementing Software), we have focused primarily on the functional aspects of networks. Like any computer system, however, computer networks are also expected to perform well, since the effectiveness of computations distributed over the network often depends directly on the efficiency with which the network delivers the computation’s data.
While the old programming adage “First get it right and then make it fast” is valid in many settings, in networking it is usually necessary to “design for performance.” 
It is therefore important to understand the various factors that impact network performance.

Bandwidth and Latency
Network performance is measured in two fundamental ways:
  1. bandwidth (also called throughput) and 
  2. latency (also called delay).
The bandwidth of a network is given by the number of bits that can be transmitted over the network in a certain period of time. 
For example, a network might have a bandwidth of 10 million bits/second (Mbps), meaning that it is able to deliver 10 million bits every second.
It is sometimes useful to think of bandwidth in terms of how long it takes to transmit each bit of data. On a 10-Mbps network, for example, it takes 1/(10 × 10^6) seconds = 0.1 microsecond (μs) to transmit each bit.
While you can talk about the bandwidth of the network as a whole, sometimes you want to be more precise, focusing, for example, on the bandwidth of a single physical link or of a logical process-to-process channel.

At the physical level, bandwidth is constantly improving, with no end in sight. 
Intuitively, if you think of a second of time as a distance you could measure with a ruler, and bandwidth as how many bits fit in that distance, then you can think of each bit as a pulse of some width. For example, each bit on a 1-Mbps link is 1 μs wide, while each bit on a 2-Mbps link is 0.5 μs wide, as illustrated in Figure 1.20.
The more sophisticated the transmitting and receiving technology, the narrower each bit can become, and thus, the higher the bandwidth.

For logical process-to-process channels, bandwidth is also influenced by other factors, including how many times the software that implements the channel has to handle, and possibly transform, each bit of data.

The second performance metric, latency, corresponds to how long it takes a message to travel from one end of a network to the other. (As with bandwidth, we could be focused on the latency of a single link or an end-to-end channel.) Latency is measured strictly in terms of time. For example, a transcontinental network might have a latency of 24 milliseconds (ms); that is, it takes a message 24 ms to travel from one end of North America to the other.
There are many situations in which it is more important to know how long it takes to send a message from one end of a network to the other and back, rather than the one-way latency. We call this the round-trip time (RTT) of the network.

We often think of latency as having three components:
  1. First, there is the speed-of-light propagation delay. This delay occurs because nothing, including a bit on a wire, can travel faster than the speed of light. If you know the distance between two points, you can calculate the speed-of-light latency, although you have to be careful because light travels across different mediums at different speeds: 
    1. It travels at 3.0×10^8 m/s in a vacuum, 
    2. 2.3×10^8 m/s in a cable, and 
    3. 2.0×10^8 m/s in a fiber. 
  2. Second, there is the amount of time it takes to transmit a unit of data. This is a function of the network bandwidth and the size of the packet in which the data is carried. 
  3. Third, there may be queuing delays inside the network, since packet switches generally need to store packets for some time before forwarding them on an outbound link. 
  4. So, we could define the total latency as:
    1. Latency = Propagation + Transmit + Queue
    2. Propagation = Distance/SpeedOfLight (where Distance is the length of the wire over which the data will travel, and SpeedOfLight is the effective speed of light over that wire)
    3. Transmit = Size/Bandwidth (where Size is the size of the packet, and Bandwidth is the bandwidth at which the packet is transmitted).
Note that if the message contains only one bit and we are talking about a single link (as opposed to a whole network), then the Transmit and Queue terms are not relevant, and latency corresponds to the propagation delay only.
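
To make these formulas concrete, here is a minimal C sketch of the three-component latency model. The 4,000-km distance, 1-KB packet, and 10-Mbps link are invented inputs, and the queuing delay is simply taken as a parameter:

#include <stdio.h>

/* Effective speed of light in fiber (m/s), as quoted above. */
#define SPEED_IN_FIBER 2.0e8

/* One-way latency in seconds: Propagation + Transmit + Queue. */
double latency(double distance_m, double size_bits,
               double bandwidth_bps, double queue_s)
{
    double propagation = distance_m / SPEED_IN_FIBER;
    double transmit = size_bits / bandwidth_bps;
    return propagation + transmit + queue_s;
}

int main(void)
{
    /* A 4,000-km fiber link at 10 Mbps carrying a 1-KB packet,
       ignoring queuing delay. */
    double l = latency(4000e3, 1024 * 8, 10e6, 0.0);
    printf("one-way latency = %.3f ms\n", l * 1000);
    return 0;
}
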
Bandwidth and latency combine to define the performance characteristics of a given link or channel. Their relative importance, however, depends on the application.

For some applications, latency dominates bandwidth. For example, a client that sends a 1-byte message to a server and receives a 1-byte message in return is latency bound. Assuming that no serious computation is involved in preparing the response, the application will perform much differently on a transcontinental channel with a 100-ms RTT than it will on an across-the-room channel with a 1-ms RTT. Whether the channel is 1 Mbps or 100 Mbps is relatively insignificant, however, since the former implies that the time to transmit a byte (Transmit) is 8 μs and the latter implies Transmit = 0.08 μs.


In contrast, consider a digital library program that is being asked to fetch a 25-megabyte (MB) image—the more bandwidth that is available, the faster it will be able to return the image to the user. Here, the bandwidth of the channel dominates performance. To see this, suppose that the channel has a bandwidth of 10 Mbps. It will take 20 seconds to transmit the image, making it relatively unimportant if the image is on the other side of a 1-ms channel or a 100-ms channel; the difference between a 20.001-second response time and a 20.1-second response time is negligible.
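
To make the contrast concrete, this small C program reruns the arithmetic of both examples; the numbers are the ones quoted above:

#include <stdio.h>

int main(void)
{
    /* Latency-bound case: transmitting a single byte is fast at
       either bandwidth, so the RTT dominates the exchange. */
    printf("1 byte at 1 Mbps:   %.2f microseconds\n", 8 / 1e6 * 1e6);
    printf("1 byte at 100 Mbps: %.2f microseconds\n", 8 / 100e6 * 1e6);

    /* Bandwidth-bound case: a 25-MB image over a 10-Mbps channel
       takes 20 seconds to transmit, so a 1-ms versus 100-ms channel
       latency barely changes the response time. */
    printf("25 MB at 10 Mbps: %.0f seconds\n", 25e6 * 8 / 10e6);
    return 0;
}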

Figure 1.21 gives you a sense of how latency or bandwidth can dominate performance in different circumstances. The graph shows how long it takes to move objects of various sizes (1 byte, 2 KB, 1 MB) across networks with RTTs ranging from 1 to 100 ms and link speeds of either 1.5 or 10 Mbps. We use logarithmic scales to show relative performance.
  1. For a 1-byte object (say, a keystroke), latency remains almost exactly equal to the RTT, so that you cannot distinguish between a 1.5-Mbps network and a 10-Mbps network. 
  2. For a 2-KB object (say, an email message), the link speed makes quite a difference on a 1-ms RTT network but a negligible difference on a 100-ms RTT network. 
  3. And for a 1-MB object (say, a digital image), the RTT makes no difference—it is the link speed that dominates performance across the full range of RTT.
Note that we use the terms latency and delay in a generic way, that is, to denote how long it takes to perform a particular function such as delivering a message or moving an object. When we are referring to the specific amount of time it takes a signal to propagate from one end of a link to another, we use the term propagation delay. Also, we make it clear in the context of the discussion whether we are referring to the one-way latency or the round-trip time.

As an aside, computers are becoming so fast that when we connect them to networks, it is sometimes useful to think, at least figuratively, in terms of instructions per mile. Consider what happens when a computer that is able to execute 1 billion instructions per second sends a message out on a channel with a 100-ms RTT. (To make the math easier, assume that the message covers a distance of 5000 miles.) If that computer sits idle the full 100 ms waiting for a reply message, then it has forfeited the ability to execute 100 million instructions, or 20,000 instructions per mile. It had better have been worth going over the network to justify this waste.

How Big Is a Mega?
There are several pitfalls you need to be aware of when working with the common units of networking—MB, Mbps, KB, and Kbps.
  1. The first is to distinguish carefully between bits and bytes. Here, we always use a lowercase b for bits and a capital B for bytes. 
  2. The second is to be sure you are using the appropriate definition of mega (M) and kilo (K). 
    1. Mega, for example, can mean either 2^20 or 10^6 . 
    2. Similarly, kilo can be either 2^10 or 10^3 . 
What is worse, in networking we typically use both definitions. Here’s why. Network bandwidth, which is often specified in terms of Mbps, is typically governed by the speed of the clock that paces the transmission of the bits. A clock that is running at 10 MHz is used to transmit bits at 10 Mbps. Because the mega in MHz means 10^6 hertz, Mbps is usually also defined as 10^6 bits per second. (Similarly, Kbps is 10^3 bits per second.)

On the other hand, when we talk about a message that we want to transmit, we often give its size in kilobytes. Because messages are stored in the computer’s memory, and memory is typically measured in powers of two, the K in KB is usually taken to mean 2^10. (Similarly, MB usually means 2^20.) When you put the two together, it is not uncommon to talk about sending a 32-KB message over a 10-Mbps channel, which should be interpreted to mean 32 × 2^10 × 8 bits are being transmitted at a rate of 10 × 10^6 bits per second. This is the interpretation we use here, unless explicitly stated otherwise.

The good news is that many times we are satisfied with a back-of-the-envelope calculation, in which case it is perfectly reasonable to pretend that a byte has 10 bits in it (making it easy to convert between bits and bytes) and that 10^6 is really equal to 2^20 (making it easy to convert between the two definitions of mega). Notice that the first approximation introduces a 20% error, while the latter introduces only a 5% error.
To help you in your quick-and-dirty calculations, 100 ms is a reasonable number to use for a cross-country round-trip time—at least when the country in question
is the United States—and 1 ms is a good approximation of an RTT across a local area network. In the case of the former, we increase the 48-ms round-trip time implied by the speed of light over a fiber to 100 ms because there are, as we have said, other sources of delay, such as the processing time in the switches inside the network. You can also be sure that the path taken by the fiber between two points will not be a straight line.
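
The following C fragment is a quick sanity check of these conventions: it computes the 32-KB-over-10-Mbps example exactly, and then again with the back-of-the-envelope shortcuts (10 bits per byte, powers of ten throughout):

#include <stdio.h>

int main(void)
{
    /* Exact: 32 x 2^10 x 8 bits at 10 x 10^6 bits per second. */
    double exact = (32.0 * (1 << 10) * 8) / 10e6;
    printf("exact:            %.2f ms\n", exact * 1000);

    /* Rough: 10 bits per byte and kilo = 10^3 throughout. */
    double rough = (32.0 * 1e3 * 10) / 10e6;
    printf("back-of-envelope: %.2f ms\n", rough * 1000);
    return 0;
}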


Delay × Bandwidth
It is also useful to talk about the product of these two metrics, often called the delay × bandwidth product. Intuitively, if we think of a channel between a pair of processes as a hollow pipe (see Figure 1.22), where
  1. the latency corresponds to the length of the pipe and 
  2. the bandwidth gives the diameter of the pipe, 
  3. then the delay × bandwidth product gives the volume of the pipe—the number of bits it holds.
Said another way, if latency (measured in time) corresponds to the length of the pipe, then given the width of each bit (also measured in time), you can calculate how many bits fit in the pipe.

For example, a transcontinental channel with a one-way latency of 50 ms and a bandwidth of 45 Mbps is able to hold:
50 × 10^−3 seconds × 45 × 10^6 bits/second = 2.25 × 10^6 bits
or approximately 280 KB of data. In other words, this example channel (pipe) holds as many bytes as the memory of a personal computer from the early 1980s could hold.
The delay × bandwidth product is important to know when constructing high-performance networks because it corresponds to how many bits the sender must transmit before the first bit arrives at the receiver. 
If the sender is expecting the receiver to somehow signal that bits are starting to arrive, and it takes another channel latency for this signal to propagate back to the sender (i.e., we are interested in the channel’s RTT rather than just its one-way latency), then the sender can send up to two delay × bandwidth’s worth of data before hearing from the receiver that all is well. The bits in the pipe are said to be “in flight,” which means that if the receiver tells the sender to stop transmitting, it might receive up to a delay × bandwidth’s worth of data before the sender manages to respond.

In our example above, that amount corresponds to 4.5 × 10^6 bits (approximately 549 KB) of data. On the other hand, if the sender does not fill the pipe—send a whole delay × bandwidth product’s worth of data before it stops to wait for a signal—the sender will not fully utilize the network.
Note that most of the time we are interested in the RTT scenario, which we simply refer to as the delay × bandwidth product, without explicitly saying that this product is multiplied by two. Again, whether the “delay” in “delay × bandwidth” means one-way latency or RTT is made clear by the context.
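
Since the product itself is a single multiplication, a short C sketch can reproduce the 50-ms × 45-Mbps example for both the one-way and RTT cases:

#include <stdio.h>

/* Bits in flight = delay (seconds) x bandwidth (bits/second). */
double delay_bandwidth(double delay_s, double bandwidth_bps)
{
    return delay_s * bandwidth_bps;
}

int main(void)
{
    double one_way = delay_bandwidth(50e-3, 45e6);
    printf("one-way pipe: %.2e bits (%.0f KB)\n",
           one_way, one_way / 8 / 1024);

    /* RTT case: two pipes' worth can be in flight before the
       sender hears anything back from the receiver. */
    printf("RTT pipe:     %.2e bits (%.0f KB)\n",
           2 * one_way, 2 * one_way / 8 / 1024);
    return 0;
}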

High-Speed Networks
The bandwidths available on today’s networks are increasing at a dramatic rate, and there is eternal optimism that network bandwidth will continue to improve. This causes network designers to start thinking about what happens in the limit, or stated another way, what is the impact on network design of having infinite bandwidth available.

Although high-speed networks bring a dramatic change in the bandwidth available to applications, in many respects their impact on how we think about networking comes in what does not change as bandwidth increases: the speed of light. To quote Scotty from Star Trek, “You cannae change the laws of physics.”
In other words, “high speed” does not mean that latency improves at the same rate as bandwidth; 
the transcontinental RTT of a 1-Gbps link is the same 100 ms as it is for a 1-Mbps link.

To appreciate the significance of ever-increasing bandwidth in the face of fixed latency, consider what is required to transmit a 1-MB file over a 1-Mbps network versus over a 1-Gbps network, both of which have an RTT of 100 ms. In the case of the 1-Mbps network, it takes 80 round-trip times to transmit the file; during each RTT, 1.25% of the file is sent. In contrast, the same 1-MB file doesn’t even come close to filling 1 RTT’s worth of the 1-Gbps link, which has a delay × bandwidth product of 12.5 MB. Figure 1.23 illustrates the difference between the two networks. In effect, the 1-MB file looks like a stream of data that needs to be transmitted across a 1-Mbps network, while it looks like a single packet on a 1-Gbps network. To help drive this point home, consider that a 1-MB file is to a 1-Gbps network what a 1-KB packet is to a 1-Mbps network.

Another way to think about the situation is that more data can be transmitted during each RTT on a high-speed network, so much so that a single RTT becomes a significant amount of time. Thus, while you wouldn’t think twice about the difference between a file transfer taking 101 RTTs rather than 100 RTTs (a relative difference of only 1%), suddenly the difference between 1 RTT and 2 RTTs is significant—a 100% increase.
In other words, latency, rather than throughput, starts to dominate our thinking about network design.
Perhaps the best way to understand the relationship between throughput and latency is to return to basics. The effective end-to-end throughput that can be achieved over a network is given by the simple relationship:
Throughput = TransferSize/TransferTime
where TransferTime includes not only the elements of one-way Latency identified earlier in this section, but also any additional time spent requesting or setting up the transfer.

Generally, we represent this relationship as:
TransferTime = RTT + 1/Bandwidth × TransferSize
We use RTT in this calculation to account for a request message being sent across the network and the data being sent back.

For example, consider a situation where a user wants to fetch a 1-MB file across a 1-Gbps network with a round-trip time of 100 ms. The TransferTime includes both the transmit time for 1 MB (1/1 Gbps × 1 MB = 8 ms) and the 100-ms RTT, for a total transfer time of 108 ms. This means that the effective throughput will be 1 MB/108 ms = 74.1 Mbps, not 1 Gbps.

Clearly, transferring a larger amount of data will help improve the effective throughput, where in the limit, an infinitely large transfer size will cause the effective throughput to approach the network bandwidth. On the other hand, having to endure more than 1 RTT—for example, to retransmit missing packets—will hurt the effective throughput for any transfer of finite size and will be most noticeable for small transfers.
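
This C sketch tabulates the relationship for the example above: a 1-Gbps link with a 100-ms RTT, with the effective throughput approaching the link bandwidth as the transfer size grows (the larger sizes are invented inputs):

#include <stdio.h>

/* TransferTime = RTT + TransferSize/Bandwidth;
   Throughput = TransferSize/TransferTime. */
double throughput(double size_bits, double rtt_s, double bw_bps)
{
    return size_bits / (rtt_s + size_bits / bw_bps);
}

int main(void)
{
    double sizes_mb[] = { 1, 10, 100, 1000 };
    for (int i = 0; i < 4; i++) {
        double bits = sizes_mb[i] * 8e6;
        printf("%5.0f MB: %6.1f Mbps\n", sizes_mb[i],
               throughput(bits, 100e-3, 1e9) / 1e6);
    }
    return 0;
}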

Application Performance Needs
The discussion in this section has taken a network-centric view of performance;  that is, we have talked in terms of what a given link or channel will support. The unstated assumption has been that application programs have simple needs—they want as much bandwidth as the network can provide. This is certainly true of the aforementioned digital library program that is retrieving a 25-MB image; the more bandwidth that is available, the faster the program will be able to return the image to the user.

However, some applications are able to state an upper limit on how much bandwidth they need. Video applications are a prime example. Suppose you want to stream a video image that is one-quarter the size of a standard TV image; that is, it has a resolution of 352 by 240 pixels. If each pixel is represented by 24 bits of information, as would be the case for 24-bit color, then the size of each frame would be (352 × 240 × 24)/8 = 247.5 KB.
If the application needs to support a frame rate of 30 frames per second, then it might request a throughput rate of 75 Mbps (a back-of-the-envelope figure; with 8-bit bytes the exact rate is closer to 61 Mbps, as shown below). The ability of the network to provide more bandwidth is of no interest to such an application because it has only so much data to transmit in a given period of time.
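
Here is the frame-size and rate arithmetic in C. The exact rate with 8-bit bytes comes out near 61 Mbps; applying the 10-bits-per-byte, powers-of-ten shorthand from earlier in this post gives about 74 Mbps, which is presumably where the rounded 75-Mbps figure comes from:

#include <stdio.h>

int main(void)
{
    int frame_bits = 352 * 240 * 24;            /* bits per frame */
    printf("frame size: %.1f KB\n", frame_bits / 8.0 / 1024);

    /* Exact rate at 30 frames per second. */
    printf("exact: %.1f Mbps\n", frame_bits * 30 / 1e6);

    /* Back-of-the-envelope: 247.5 KB/frame, 10 bits per byte,
       kilo = 10^3. */
    printf("rough: %.1f Mbps\n", 247.5 * 1e3 * 10 * 30 / 1e6);
    return 0;
}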

Unfortunately, the situation is not as simple as this example suggests. Because the difference between any two adjacent frames in a video stream is often small, it is possible to compress the video by transmitting only the differences between adjacent frames. This compressed video does not flow at a constant rate, but varies with time according to factors such as the amount of action and detail in the picture and the compression algorithm being used. Therefore, it is possible to say what the average bandwidth requirement will be, but the instantaneous rate may be more or less.

The key issue is the time interval over which the average is computed. Suppose that this example video application can be compressed down to the point that it needs only 2 Mbps, on average. If it transmits 1 megabit in a 1-second interval and 3 megabits in the following 1-second interval, then over the 2-second interval it is transmitting at an average rate of 2 Mbps; however, this will be of little consolation to a channel that was engineered to support no more than 2 megabits in any one second. Clearly, just knowing the average bandwidth needs of an application will not always suffice.

Generally, however, it is possible to put an upper bound on how big of a burst an application like this is likely to transmit. A burst might be described by some peak rate that is maintained for some period of time. Alternatively, it could be described as the number of bytes that can be sent at the peak rate before reverting to the average rate or some lower rate. If this peak rate is higher than the available channel capacity, then the excess data will have to be buffered somewhere, to be transmitted later. Knowing how big of a burst might be sent allows the network designer to allocate sufficient buffer capacity to hold the burst.
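
As a rough sketch of that buffer-sizing argument, assuming the burst is described by a peak rate held for a fixed period (the 3-Mbps peak and 1-second duration come from the example above):

#include <stdio.h>

/* Bits that must be buffered when an application bursts at
   peak_bps for burst_s seconds over a channel of capacity_bps. */
double buffer_bits(double peak_bps, double capacity_bps, double burst_s)
{
    double excess = peak_bps - capacity_bps;
    return excess > 0 ? excess * burst_s : 0;
}

int main(void)
{
    /* A 3-Mbps burst lasting 1 second over a 2-Mbps channel. */
    double b = buffer_bits(3e6, 2e6, 1.0);
    printf("buffer needed: %.0f bits (%.0f KB)\n", b, b / 8 / 1024);
    return 0;
}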

Analogous to the way an application’s bandwidth needs can be something other than “all it can get,” an application’s delay requirements may be more complex than simply “as little delay as possible.” In the case of delay, it sometimes doesn’t matter so much whether the one-way latency of the network is 100 ms or 500 ms as how much the latency varies from packet to packet.
The variation in latency is called jitter.
Consider the situation in which the source sends a packet once every 33 ms, as would be the case for a video application transmitting frames 30 times a second.
  • If the packets arrive at the destination spaced out exactly 33 ms apart, then we can deduce that the delay experienced by each packet in the network was exactly the same.
  • If the spacing between when packets arrive at the destination—sometimes called the interpacket gap—is variable, however, then the delay experienced by the sequence of packets must have also been variable, and the network is said to have introduced jitter into the packet stream, as shown in Figure 1.24.
Such variation is generally not introduced in a single physical link, but it can happen when packets experience different queuing delays in a multihop packet-switched network. This queuing delay corresponds to the Queue component of latency defined earlier in this section, which varies with time.

To understand the relevance of jitter, suppose that the packets being transmitted over the network contain video frames, and in order to display these frames on the screen the receiver needs to receive a new one every 33 ms. If a frame arrives early, then it can simply be saved by the receiver until it is time to display it. Unfortunately, if a frame arrives late, then the receiver will not have the frame it needs in time to update the screen, and the video quality will suffer; it will not be smooth. Note that it is not necessary to eliminate jitter, only to know how bad it is. The reason for this is that if the receiver knows the upper and lower bounds on the latency that a packet can experience, it can delay the time at which it starts playing back the video (i.e., displays the first frame) long enough to ensure that in the future it will always have a frame to display when it needs it. The receiver delays the frame, effectively smoothing out the jitter, by storing it in a buffer.
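
A minimal sketch of this playback-point calculation, assuming the receiver somehow knows upper and lower bounds on per-packet latency (the 40-ms and 90-ms bounds are invented inputs):

#include <stdio.h>

int main(void)
{
    /* Assumed latency bounds in ms; in practice these would have
       to be measured or guaranteed by the network. */
    double min_latency = 40.0, max_latency = 90.0;
    double frame_interval = 33.0;   /* 30 frames per second */

    /* Delaying the first frame by the worst-case jitter ensures
       every later frame has arrived by its display time. */
    double playback_delay = max_latency - min_latency;
    printf("playback delay: %.0f ms (jitter bound)\n", playback_delay);
    printf("a new frame is then due every %.0f ms\n", frame_interval);
    return 0;
}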

Computer Network Implementing Software



Implementing Software

Network architectures and protocol specifications are essential things, but a good blueprint is not enough to explain the phenomenal success of the Internet: the number of computers connected to the Internet has been doubling every year since 1981 and is now approaching the impressive figure of 1,966,514,816. It is believed that the number of bits transmitted over the Internet surpassed the corresponding figure for the voice phone system sometime in 2001.
What explains the success of the Internet?
  1. There are certainly many contributing factors (including a good architecture), but 
  2. one thing that has made the Internet such a runaway success is the fact that so much of its functionality is provided by software running in general-purpose computers. The significance of this is that new functionality can be added readily with “just a small matter of programming.” As a result, new applications and services—electronic commerce, videoconferencing, and packet telephony, to name a few—have been showing up at a phenomenal pace.
  3. A related factor is the massive increase in computing power available in commodity machines. Although computer networks have always been capable in principle of transporting any kind of information, such as digital voice samples, digitized images, and so on, this potential was not particularly interesting if the computers sending and receiving that data were too slow to do anything useful with the information. Virtually all of today’s computers are capable of playing back digitized voice at full speed and can display video at a speed and resolution that is useful for some (but by no means all) applications. Thus, today’s networks have begun to support multimedia, and their support for it will only improve as computing hardware becomes faster.
The point to take away from this is that knowing how to implement network software is an essential part of understanding computer networks. With this in mind, this section first introduces some of the issues involved in implementing an application program on top of a network, and then goes on to identify the issues involved in implementing the protocols running within the network.

In many respects, network applications and network protocols are very similar—the way an application engages the services of the network is pretty much the same as the way a high-level protocol invokes the services of a low-level protocol. As we will see later in the section, however, there are a couple of important differences.

Application Programming Interface (Sockets)
The place to start when implementing a network application is the interface exported by the network.
Since most network protocols are implemented in software (especially those high in the protocol stack), and nearly all computer systems implement their network protocols as part of the operating system, when we refer to the interface “exported by the network,” we are generally referring to the interface that the OS provides to its networking subsystem. This interface is often called the network
application programming interface (API).
Although each operating system is free to define its own network API (and most
have), over time certain of these APIs have become widely supported; that is, they
have been ported to operating systems other than their native system. This is what has happened with the socket interface originally provided by the Berkeley distribution of Unix, which is now supported in virtually all popular operating systems.
The advantage of industry-wide support for a single API is that applications can be easily ported from one OS to another. 
It is important to keep in mind, however, that application programs typically interact with many parts of the OS other than the network; for example, they read and write files, fork concurrent processes, and output to the graphical display. Just because two systems support the same network API does not mean that their file system, process, or graphic interfaces are the same. Still, understanding a widely adopted API like Unix sockets gives us a good place to start.

Before describing the socket interface, it is important to keep two concerns separate in your mind.
  1. Each protocol provides a certain set of services, and 
  2. the API provides a syntax by which those services can be invoked in this particular OS. 
    1. The implementation is then responsible for mapping the tangible set of operations and objects defined by the API onto the abstract set of services defined by the protocol. 
    2. If you have done a good job of defining the interface, then it will be possible to use the syntax of the interface to invoke the services of many different protocols.
    3. Such generality was certainly a goal of the socket interface, although it’s far from perfect.
The main abstraction of the socket interface, not surprisingly, is the socket. A good way to think of a socket is as the point where a local application process attaches to the network. The interface defines operations for
  • creating a socket, 
  • attaching the socket to the network, 
  • sending/receiving messages through the socket, and 
  • closing the socket. 
To simplify the discussion, we will limit ourselves to showing how sockets are used with TCP. The first step is to create a socket, which is done with the following operation:
int socket(int domain, int type, int protocol)

The reason that this operation takes three arguments is that the socket interface was designed to be general enough to support any underlying protocol suite. Specifically,
  1. the domain argument specifies the protocol family that is going to be used: 
    1. PF_INET denotes the Internet family, 
    2. PF_UNIX denotes the Unix pipe facility, and 
    3. PF_PACKET denotes direct access to the network interface (i.e., it bypasses the TCP/IP protocol stack). 
  2. The type argument indicates the semantics of the communication. 
    1. SOCK_STREAM is used to denote a byte stream. 
    2. SOCK_DGRAM is an alternative that denotes a message-oriented service, such as that provided by UDP. 
  3. The protocol argument identifies the specific protocol that is going to be used. 
    1. In our case, this argument is UNSPEC because the combination of PF_INET and SOCK_STREAM implies TCP. 
  4. Finally, the return value from socket is a handle for the newly created socket, that is, an identifier by which we can refer to the socket in the future. 
    1. It is given as an argument to subsequent operations on this socket.
The next step depends on whether you are a client or a server. On a server machine, the application process performs a passive open—the server says that it is prepared to accept connections, but it does not actually establish a connection. The server does this by invoking the following three operations:
int bind(int socket, struct sockaddr *address, int addr_len)
int listen(int socket, int backlog)
int accept(int socket, struct sockaddr *address, int *addr_len)

The bind operation, as its name suggests, binds the newly created socket to the specified address. This is the network address of the local participant—the server.
Note that, when used with the Internet protocols, address is a data structure that includes both the IP address of the server and a TCP port number. (As we know, ports are used to indirectly identify processes; they are a form of demux key, as defined in an earlier post.)
The port number is usually some well-known number specific to the service being offered; for example, Web servers commonly accept connections on port 80.

The listen operation then defines how many connections can be pending on the specified socket. Finally, the accept operation carries out the passive open.
It is a blocking operation that does not return until a remote participant has established a connection, and when it does complete, it returns a new socket that corresponds to this just-established connection, and the address argument contains the remote participant’s address. 
Note that when accept returns, the original socket that was given as an argument still exists and still corresponds to the passive open; it is used in future invocations of accept.
On the client machine, the application process performs an active open; that is, it says who it wants to communicate with by invoking the following single operation:
int connect(int socket, struct sockaddr *address, int addr len)

This operation does not return until TCP has successfully established a connection, at which time the application is free to begin sending data. In this case, address contains the remote participant’s address. In practice, the client usually specifies only the remote participant’s address and lets the system fill in the local information.

Whereas a server usually listens for messages on a well-known port, a client typically does not care which port it uses for itself; the OS simply selects an unused one.
Once a connection is established, the application processes invoke the following two operations to send and receive data:
int send(int socket, char *message, int msg_len, int flags)
int recv(int socket, char *buffer, int buf_len, int flags)

The first operation sends the given message over the specified socket, while the second operation receives a message from the specified socket into the given buffer. Both operations take a set of flags that control certain details of the operation.

Example Application
We now show the implementation of a simple client/server program that uses the socket interface to send messages over a TCP connection. The program also uses other Unix networking utilities, which we introduce as we go. Our application allows a user on one machine to type in and send text to a user on another machine. It is a simplified version of the Unix talk program, which is similar to the program at the core of a Web chat room.

Client
We start with the client side, which takes the name of the remote machine as an argument.
  1. It calls the Unix utility gethostbyname to translate this name into the remote host’s IP address. 
  2. The next step is to construct the address data structure (sin) expected by the socket interface. Notice that this data structure specifies that we’ll be using the socket to connect to the Internet (AF_INET). In our example, we use TCP port 5432 as the well-known server port; this happens to be a port that has not been assigned to any other Internet service. 
  3. The final step in setting up the connection is to call socket and connect. Once the connect operation returns, the connection is established and the client program enters its main loop, which reads text from standard input and sends it over the socket.

#include <stdio.h>
#include <stdlib.h>      /* exit */
#include <string.h>      /* strlen */
#include <strings.h>     /* bzero, bcopy */
#include <unistd.h>      /* close */
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netdb.h>       /* gethostbyname */

#define SERVER_PORT 5432
#define MAX_LINE 256
int main(int argc, char * argv[])
{
 FILE *fp;
 struct hostent *hp;
 struct sockaddr_in sin;
 char *host;
 char buf[MAX_LINE];
 int s;
 int len;
 if (argc==2) {
  host = argv[1];
 }
 else {
  fprintf(stderr, "usage: simplex-talk host\n");
  exit(1);
 }
 
/* translate host name into peer's IP address */
 hp = gethostbyname(host);
 if (!hp) {
  fprintf(stderr, "simplex-talk: unknown host: %s\n", host);
  exit(1);
 }
 
 /* build address data structure */
 bzero((char *)&sin, sizeof(sin));
 sin.sin_family = AF_INET;
 bcopy(hp->h_addr, (char *)&sin.sin_addr, hp->h_length);
 sin.sin_port = htons(SERVER_PORT);
 
 /* active open */
 if ((s = socket(PF_INET, SOCK_STREAM, 0)) < 0) {
  perror("simplex-talk: socket");
  exit(1);
 }
 if (connect(s, (struct sockaddr *)&sin, sizeof(sin)) < 0) {
  perror("simplex-talk: connect");
  close(s);
  exit(1);
 }
 
 /* main loop: get and send lines of text */
 while (fgets(buf, sizeof(buf), stdin)) {
  buf[MAX_LINE-1] = '\0';
  len = strlen(buf) + 1;
  send(s, buf, len, 0);
 }
}

Server

The server is equally simple.
  1. It first constructs the address data structure by filling in its own port number (SERVER_PORT). By not specifying an IP address, the application program is willing to accept connections on any of the local host’s IP addresses. 
  2. Next, the server performs the preliminary steps involved in a passive open
    1. creates the socket, 
    2. binds it to the local address, and 
    3. sets the maximum number of pending connections to be allowed. 
    4. Finally, the main loop waits for a remote host to try to connect, and 
    5. when one does, receives and prints out the characters that arrive on the connection.  
#include <stdio.h>
#include <stdlib.h>      /* exit */
#include <strings.h>     /* bzero */
#include <unistd.h>      /* close */
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netdb.h>

#define SERVER_PORT 5432
#define MAX_PENDING 5
#define MAX_LINE 256

int main()
{
 struct sockaddr_in sin;
 char buf[MAX_LINE];
 socklen_t addr_len;
 int len;
 int s, new_s;
 
 /* build address data structure */
 bzero((char *)&sin, sizeof(sin));
 sin.sin_family = AF_INET;
 sin.sin_addr.s_addr = INADDR_ANY;
 sin.sin_port = htons(SERVER_PORT);
 
 /* setup passive open */
 if ((s = socket(PF_INET, SOCK_STREAM, 0)) < 0) {
  perror("simplex-talk: socket");
  exit(1);
 }
 if ((bind(s, (struct sockaddr *)&sin, sizeof(sin))) < 0) {
  perror("simplex-talk: bind");
  exit(1);
 }
 listen(s, MAX_PENDING);
 
 /* wait for connection, then receive and print text */
 while(1) {
  addr_len = sizeof(sin);
  if ((new_s = accept(s, (struct sockaddr *)&sin, &addr_len)) < 0){
   perror("simplex-talk: accept");
   exit(1);
  }
  while ((len = recv(new_s, buf, sizeof(buf), 0)) > 0)
   fputs(buf, stdout);
  close(new_s);
 }
}
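
A note on running the example: the client and server are separate programs, so compile each with a C compiler, start the server on one machine, and then invoke the client with the server’s host name as its single argument. Each line typed at the client should then appear on the server’s screen. The application is simplex (text flows only from client to server), which is why the client only sends and the server only receives.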

Protocol Implementation Issues
As mentioned at the beginning of this section, the way application programs interact with the underlying network is similar to the way a high-level protocol interacts with a low-level protocol. For example, TCP needs an interface to send outgoing messages to IP, and IP needs to be able to deliver incoming messages to TCP. This is exactly the service interface introduced in an earlier post.

Since we already have a network API (e.g., sockets), we might be tempted to use this same interface between every pair of protocols in the protocol stack. Although certainly an option, in practice the socket interface is not used in this way.
The reason is that there are inefficiencies built into the socket interface that protocol implementers are not willing to tolerate. Application programmers tolerate them because they simplify their programming task and because the inefficiency only has to be tolerated once, but protocol implementers are often obsessed with performance and must worry about getting a message through several layers of protocols. 
The rest of this section discusses the two primary differences between the network API and the protocol-to-protocol interface found lower in the protocol graph.

Process Model
Most operating systems provide an abstraction called a process, or alternatively, a thread.
Each process runs largely independently of other processes, and the OS is responsible for making sure that resources, such as address space and CPU cycles, are allocated to all the current processes. The process abstraction makes it fairly straightforward to have a lot of things executing concurrently on one machine; 
for example, each user application might execute in its own process, and various things inside the OS might execute as other processes. When the OS stops one process from executing on the CPU and starts up another one, we call the change a context switch.

When designing the network subsystem, one of the first questions to answer is, “Where are the processes?” There are essentially two choices, as illustrated in Figure 1.16.
  1. In the first, which we call the process-per-protocol model, each protocol is implemented by a separate process. This means that as a message moves up or down the protocol stack, it is passed from one process/protocol to another—the process that implements protocol i processes the message, then passes it to protocol i − 1, and so on. 
    1. How one process/protocol passes a message to the next process/protocol depends on the support the host OS provides for interprocess communication
      1. Typically, there is a simple mechanism for enqueuing a message with a process. The important point, however, is that a context switch is required at each level of the protocol graph—typically a time-consuming operation.
  2. The alternative, which we call the process-per-message model, treats each protocol as a static piece of code and associates the processes with the messages.
    1. That is, when a message arrives from the network, the OS dispatches a process that it makes responsible for the message as it moves up the protocol graph. 
      1. At each level, the procedure that implements that protocol is invoked, which eventually results in the procedure for the next protocol being invoked, and so on. 
    2. For outbound messages, the application’s process invokes the necessary procedure calls until the message is delivered.
In both directions, the protocol graph is traversed in a sequence of procedure calls.
Although the process-per-protocol model is sometimes easier to think about—I implement my protocol in my process, and you implement your protocol in your process—the process-per-message model is generally more efficient for a simple reason: A procedure call is an order of magnitude more efficient than a context switch on most computers. The former model requires the expense of a context switch at each level, while the latter model costs only a procedure call per level.

Now think about the relationship between the service interface as defined above and the process model.
  1. For an outgoing message, the high-level protocol invokes a send operation on the low-level protocol. Because the high-level protocol has the message in hand when it calls send, this operation can be easily implemented as a procedure call; no context switch is required. 
  2. For incoming messages, however, the high-level protocol invokes the receive operation on the low-level protocol, and then must wait for a message to arrive at some unknown future time; this basically forces a context switch. 
    1. In other words, the process running in the high-level protocol receives a message from the process running in the low-level protocol. This isn’t a big deal if only the application process receives messages from the network subsystem—in fact, it’s the right interface for the network API since application programs already have a process-centric view of the world—but it does have a significant impact on performance if such a context switch occurs at each layer of the protocol stack. It is for this reason that most protocol implementations replace the receive operation with a deliver operation. That is, the low-level protocol does an upcall—a procedure call up the protocol stack—to deliver the message to the high-level protocol.
Figure 1.17 shows the resulting interface between two adjacent protocols, TCP and IP in this case. In general, messages move down the protocol graph through a sequence of send operations, and up the protocol graph through a sequence of deliver operations.
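
The upcall structure is easy to sketch in C. This toy example (not any particular OS’s interface) chains two protocol objects together and moves an incoming message up the graph using nothing but procedure calls:

#include <stdio.h>

/* A toy protocol object; real implementations carry much more
   state. The "above" pointer links adjacent protocols. */
struct protocol {
    const char *name;
    struct protocol *above;
};

/* Upcall: each level processes the message (here, it just prints)
   and then delivers it to the protocol above it via an ordinary
   procedure call, so no context switch is needed. */
void deliver(struct protocol *p, const char *msg)
{
    printf("%s: delivering \"%s\" upward\n", p->name, msg);
    if (p->above)
        deliver(p->above, msg);
}

int main(void)
{
    struct protocol tcp = { "TCP", NULL };
    struct protocol ip = { "IP", &tcp };
    deliver(&ip, "incoming packet");   /* IP upcalls into TCP */
    return 0;
}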

Message Buffers
A second inefficiency of the socket interface is that the application process provides the buffer that contains the outbound message when calling send, and similarly it provides the buffer into which an incoming message is copied when invoking the receive operation. This forces the topmost protocol to copy the message from the application’s buffer into a network buffer, and vice versa, as shown in Figure 1.18.
It turns out that copying data from one buffer to another is one of the most expensive things a protocol implementation can do. This is because while processors are becoming faster at an incredible pace, memory is not getting faster as quickly as processors are.
Instead of copying message data from one buffer to another at each layer in the protocol stack, most network subsystems define an abstract data type for messages that is shared by all protocols in the protocol graph. Not only does this abstraction permit messages to be passed up and down the protocol graph without copying, but it usually provides copy-free ways of manipulating messages in other ways, such as adding and stripping headers, fragmenting large messages into a set of small messages, and reassembling a collection of small messages into a single large message. The exact form of this message abstraction differs from OS to OS, but it generally involves a linked list of pointers to message buffers, similar to the one shown in Figure 1.19.
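
Here is a minimal sketch of such a message abstraction, assuming a singly linked list of buffer fragments; real implementations (BSD mbufs, for example) are considerably more elaborate:

#include <stdlib.h>

/* One fragment of a message: a pointer into a buffer plus its
   length. A message is a list of fragments, so headers can be
   added or stripped without copying the payload. */
struct frag {
    char *data;
    size_t len;
    struct frag *next;
};

struct message {
    struct frag *head;
};

/* Attach a header by prepending a new fragment; the payload
   fragments are shared, not copied. */
int push_header(struct message *m, char *hdr, size_t len)
{
    struct frag *f = malloc(sizeof(*f));
    if (!f)
        return -1;
    f->data = hdr;
    f->len = len;
    f->next = m->head;
    m->head = f;
    return 0;
}

/* Strip the outermost header by unlinking the first fragment. */
void pop_header(struct message *m)
{
    struct frag *f = m->head;
    if (f) {
        m->head = f->next;
        free(f);
    }
}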



    18 April 2011

    Computer Network Architecture



    To help deal with complexity, network designers have developed general blueprints—usually called a network architecture—that guide the design and implementation of networks. Here we introduce the central ideas that are common to all network architectures, as well as two of the most widely referenced architectures—the OSI architecture and the Internet architecture.

    Layering and Protocols
    When a system gets complex, the system designer introduces another level of abstraction.
    The idea of an abstraction is to define a unifying model that can capture some important aspect of the system, encapsulate this model in an object that provides an interface that can be manipulated by other components of the system, and hide the details of how the object is implemented from the users of the object.
    The challenge is:
    1. to identify abstractions that simultaneously provide a service that proves useful in a large number of situations and 
    2. that can be efficiently implemented in the underlying system.
    This is exactly what we were doing when we introduced the idea of a channel.
    We were providing an abstraction for applications that hides the complexity of the network from application writers.

    Abstractions naturally lead to layering, especially in network systems.
    The general idea is that you start with the services offered by the underlying hardware, and then add a sequence of layers, each providing a higher (more abstract) level of service.

    The services provided at the high layers are implemented in terms of the services provided by the low layers. 
    Drawing on the discussion of requirements given in a previous post, for example, we might imagine a network as having two layers of abstraction sandwiched between the application program and the underlying hardware, as illustrated in Figure 1.8.
    1. The layer immediately above the hardware in this case might provide host-to-host connectivity, abstracting away the fact that there may be an arbitrarily complex network topology between any two hosts. 
    2. The next layer up builds on the available host-to-host communication service and provides support for process-to-process channels, abstracting away the fact that the network occasionally loses messages, for example.
    Layering provides two nice features:
    1. First, it decomposes the problem of building a network into more manageable components. 
      1. Rather than implementing a monolithic piece of software that does everything you will ever want, you can implement several layers, each of which solves one part of the problem. 
    2. Second, it provides a more modular design
      1. If you decide that you want to add some new service, you may only need to modify the functionality at one layer, reusing the functions provided at all the other layers.
    Thinking of a system as a linear sequence of layers is an oversimplification, however.
    Many times there are multiple abstractions provided at any given level of the system, each providing a different service to the higher layers but building on the same low-level abstractions. 
    To see this, consider the two types of channels:
    1. One provides a request/reply service, and one 
    2. supports a message stream service.
    These two channels might be alternative offerings at some level of a multilevel networking system, as illustrated in Figure 1.9. Using this discussion of layering as a foundation, we are now ready to discuss the architecture of a network more precisely.
    For starters, the abstract objects that make up the layers of a network system are called protocols.
    That is, a protocol provides a communication service that higher-level objects (such as application processes, or perhaps higher-level protocols) use to exchange messages. 
    For example, we could imagine a network that supports a request/reply protocol and a message stream protocol, corresponding to the request/reply and message stream channels discussed above.

    Each protocol defines two different interfaces.
    1. First, it defines a service interface to the other objects on the same computer that want to use its communication services. 
      1. This service interface defines the operations that local objects can perform on the protocol. For example, a request/reply protocol would support operations by which an application can send and receive messages. 
    2. Second, a protocol defines a peer interface to its counterpart (peer) on another machine. 
      1. This second interface defines the form and meaning of messages exchanged between protocol peers to implement the communication service. This would determine the way in which a request/reply protocol on one machine communicates with its peer on another machine.
    In other words, a protocol defines a communication service that it exports locally, along with a set of rules governing the messages that the protocol exchanges with its peer(s) to implement this service. This situation is illustrated in Figure 1.10.

    Except at the hardware level where peers directly communicate with each other over a link, peer-to-peer communication is indirect—each protocol communicates with its peer by passing messages to some lower-level protocol, which in turn delivers the message to its peer.
    In addition, there are potentially multiple protocols at any given level, each providing a different communication service. 
    We therefore represent the suite of protocols (that make up a network system) with a protocol graph. The nodes of the graph correspond to protocols, and the edges represent a depends-on relation.

    For example, Figure 1.11 illustrates a protocol graph for the hypothetical layered system we have been discussing—protocols RRP (Request/Reply Protocol) and MSP (Message Stream Protocol) implement two different types of process-to-process channels, and both depend on HHP (Host-to-Host Protocol), which provides a host-to-host connectivity service.

    In this example, suppose that
    1. the file access program on host 1 wants to send a message to its peer on host 2 using the communication service offered by protocol RRP. 
    2. In this case, the file application asks RRP to send the message on its behalf.
    3. To communicate with its peer, RRP then invokes the services of HHP,
    4. which in turn transmits the message to its peer on the other machine. 
    5. Once the message has arrived at protocol HHP on host 2, HHP passes the message up to RRP, which in turn delivers the message to the file application.
    In this particular case, the application is said to employ the services of the protocol stack RRP/HHP.

    Note that the term protocol is used in two different ways. Sometimes it refers to the abstract interfaces—that is, the operations defined by the service interface and the form and meaning of messages exchanged between peers—and sometimes it refers to the module that actually implements these two interfaces.

    To distinguish between the interfaces and the module that implements these interfaces, we generally refer to the former as a protocol specification. Specifications are generally expressed using a combination of prose, pseudocode, state transition diagrams, pictures of packet formats, and other abstract notations.
    It should be the case that a given protocol can be implemented in different ways by different programmers, as long as each adheres to the specification.
    The challenge is ensuring that two different implementations of the same specification can successfully exchange messages. Two or more protocol modules that do accurately implement a protocol specification are said to interoperate with each other.
    We can imagine many different protocols and protocol graphs that satisfy the
    communication requirements of a collection of applications. Fortunately, there exist standardization bodies, such as the International Standards Organization (ISO) and the Internet Engineering Task Force (IETF), that establish policies for a particular protocol graph.
    We call the set of rules governing the form and content of a protocol graph a network architecture.
    Standardization bodies such as the ISO and the IETF have established well-defined procedures for introducing, validating, and finally approving protocols in their respective architectures. We briefly describe the architectures defined by the ISO and the IETF shortly, but first there are two additional things we need to explain about the mechanics of a protocol graph.

    Encapsulation
    Consider what happens in Figure 1.11 when
    1. one of the application programs sends a message to its peer by passing the message to protocol RRP. 
    2. From RRP’s perspective, the message it is given by the application is an uninterpreted string of bytes. RRP does not care that these bytes represent an array of integers, an email message, a digital image, or whatever; it is simply charged with sending them to its peer. However, RRP must communicate control information to its peer, instructing it how to handle the message when it is received. RRP does this by attaching a header to the message.
    3. Generally speaking, a header is a small data structure—from a few bytes to a few dozen bytes—that is used among peers to communicate with each other. As the name suggests, headers are usually attached to the front of a message. In some cases, however, this peer-to-peer control information is sent at the end of the message, in which case it is called a trailer. The exact format for the header attached by RRP is defined by its protocol specification. 
    4. The rest of the message—that is, the data being transmitted on behalf of the application—is called the message’s body or payload. We say that the application’s data is encapsulated in the new message created by protocol RRP.
    This process of encapsulation is then repeated at each level of the protocol graph;
    for example, HHP encapsulates RRP’s message by attaching a header of its own.

    If we now assume that HHP sends the message to its peer over some network, then when the message arrives at the destination host, it is processed in the opposite order:
    1. HHP first strips its header off the front of the message, interprets it (i.e., takes whatever action is appropriate given the contents of the header), and 
    2. passes the body of the message up to RRP, which removes the header that its peer attached, takes whatever action is indicated by that header, and 
    3. passes the body of the message up to the application program.
    The message passed up from RRP to the application on host 2 is exactly the same message as the application passed down to RRP on host 1;
    the application does not see any of the headers that have been attached to it to implement the  lower-level communication services. 
    This whole process is illustrated in Figure 1.12. Note that in this example, nodes in the network (e.g., switches and routers) may inspect the HHP header at the front of the message.
    Note that when we say a low-level protocol does not interpret the message it is given by some high-level protocol, we mean that it does not know how to extract any meaning from the data contained in the message. 
    It is sometimes the case, however, that the low-level protocol applies some simple transformation to the data it is given, such as to compress or encrypt it. In this case, the protocol is transforming the entire body of the message, including both the original application’s data and all the headers attached to that data by higher-level protocols.
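
    As a concrete sketch of encapsulation, the following C fragment attaches an RRP-style header in front of a message body. The header fields are invented for illustration; the real format would be fixed by the protocol specification:

    #include <stdio.h>
    #include <string.h>

    /* A made-up RRP-style header. */
    struct rrp_header {
        unsigned short type;     /* e.g., request or reply */
        unsigned short length;   /* length of the body */
    };

    /* Encapsulate: copy the header in front of the body. */
    size_t encapsulate(char *pkt, struct rrp_header *h,
                       const char *body, size_t body_len)
    {
        h->length = (unsigned short)body_len;
        memcpy(pkt, h, sizeof(*h));
        memcpy(pkt + sizeof(*h), body, body_len);
        return sizeof(*h) + body_len;
    }

    int main(void)
    {
        char pkt[64];
        struct rrp_header h = { 1, 0 };
        size_t n = encapsulate(pkt, &h, "hello", 5);
        printf("packet is %zu bytes (%zu header + 5 body)\n",
               n, sizeof(h));
        return 0;
    }
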
    Multiplexing and Demultiplexing
    Recall that a fundamental idea of packet switching is to multiplex multiple flows of data over a single physical link. This same idea applies up and down the protocol graph, not just to switching nodes. In Figure 1.11, for example, we can think of RRP as implementing a logical communication channel, with messages from two different applications multiplexed over this channel at the source host and then demultiplexed back to the appropriate application at the destination host.

    Practically speaking, all this means is that the header that RRP attaches to its messages contains an identifier that records the application to which the message belongs. We call this identifier RRP’s demultiplexing key, or demux key for short.
    At the source host, RRP includes the appropriate demux key in its header. When the message is delivered to RRP on the destination host, it strips its header, examines the demux key, and demultiplexes the message to the correct application.

    RRP is not unique in its support for multiplexing; nearly every protocol implements this mechanism. For example, HHP has its own demux key to determine which messages to pass up to RRP and which to pass up to MSP. However, there is no uniform agreement among protocols—even those within a single network architecture—on exactly what constitutes a demux key. Some protocols use an 8-bit field (meaning they can support only 256 high-level protocols), and others use 16- or 32-bit fields. Also, some protocols have a single demultiplexing field in their header, while others have a pair of demultiplexing fields. In the former case, the same demux key is used on both sides of the communication, while in the latter case, each side uses a different key to identify the high-level protocol (or application program) to which the message is to be delivered.
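
    The demultiplexing step itself amounts to a table lookup on the demux key. A minimal sketch in C, assuming an invented 8-bit key:

    /* Dispatch an incoming message body on an 8-bit demux key;
       with an 8-bit field, at most 256 higher-level protocols or
       applications can be distinguished. (Illustrative only.) */
    typedef void (*handler_t)(const char *body);

    static handler_t demux_table[256];

    void rrp_demux(unsigned char key, const char *body)
    {
        if (demux_table[key])
            demux_table[key](body);   /* deliver upward */
        /* else: no handler registered for this key; drop it */
    }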



    OSI Architecture
    The ISO was one of the first organizations to formally define a common way to connect computers. Their architecture, called the Open Systems Interconnection (OSI) architecture and illustrated in Figure 1.13, defines a partitioning of network functionality into seven layers, where one or more protocols implement the functionality assigned to a given layer. 
    In this sense, the schematic given in Figure 1.13 is not a protocol graph, per se, but rather a reference model for a protocol graph. 
    The ISO, usually in conjunction with a second standards organization known as the International Telecommunications Union (ITU), publishes a series of protocol specifications based on the OSI architecture. (A subcommittee of the ITU on telecommunications, ITU-T, replaces an earlier subcommittee of the ITU, which was known by its French name, Comité Consultatif International Téléphonique et Télégraphique, or CCITT.) This series is sometimes called the “X dot” series since the protocols are given names like X.25, X.400, X.500, and so on. There have been several networks based on these standards, including the public X.25 network and private networks like Tymnet.

1. Starting at the bottom and working up, the physical layer handles the transmission of raw bits over a communications link.
2. The data link layer then collects a stream of bits into a larger aggregate called a frame.
  1. Network adaptors, along with device drivers running in the node's OS, typically implement the data link level. This means that frames, not raw bits, are actually delivered to hosts.
3. The network layer handles routing among nodes within a packet-switched network.
  1. At this layer, the unit of data exchanged among nodes is typically called a packet rather than a frame, although they are fundamentally the same thing.
  2. The lower three layers are implemented on all network nodes, including switches within the network and hosts connected along the exterior of the network.
4. The transport layer then implements what we have up to this point been calling a process-to-process channel. Here, the unit of data exchanged is commonly called a message rather than a packet or a frame.
  1. The transport layer and higher layers typically run only on the end hosts and not on the intermediate switches or routers.
5. There is less agreement about the definition of the top three layers. Skipping ahead to the top (seventh) layer, we find the application layer.
  1. Application layer protocols include things like the File Transfer Protocol (FTP), which defines a protocol by which file transfer applications can interoperate.
6. Below that, the presentation layer (layer 6) is concerned with the format of data exchanged between peers, for example,
  1. whether an integer is 16, 32, or 64 bits long, whether the most significant byte is transmitted first or last, or how a video stream is formatted.
7. Finally, the session layer (layer 5) provides a name space that is used to tie together the potentially different transport streams that are part of a single application. For example, it might manage an audio stream and a video stream that are being combined in a teleconferencing application.
    Internet Architecture
    The Internet architecture, which is also sometimes called the TCP/IP architecture after its two main protocols, is depicted in Figure 1.14. An alternative representation is given in Figure 1.15.

    • The Internet architecture evolved out of experiences with an earlier packet-switched network called the ARPANET
    • Both the Internet and the ARPANET were funded by the Advanced Research Projects Agency (ARPA), one of the R&D funding agencies of the U.S. Department of Defense
    • The Internet and ARPANET were around before the OSI architecture, and the experience gained from building them was a major influence on the OSI reference model.
    • While the seven-layer OSI model can, with some imagination, be applied to the Internet, a four-layer model is often used instead. 
      • At the lowest level are a wide variety of network protocols, denoted NET1 , NET2 , and so on. 
        • In practice, these protocols are implemented by a combination of hardware (e.g., a network adaptor) and software (e.g., a network device driver). 
          • For example, you might find Ethernet or Fiber Distributed Data Interface (FDDI) protocols at this layer. (These protocols in turn may actually involve several sublayers, but the Internet architecture does not presume anything about them.) 
      • The second layer consists of a single protocol—the Internet Protocol (IP). This is the protocol that supports the interconnection of multiple networking technologies into a single, logical internetwork. 
      • The third layer contains two main protocols—the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP). 
        • TCP and UDP provide alternative logical channels to application programs: 
          • TCP provides a reliable byte-stream channel, and 
          • UDP provides an unreliable datagram delivery channel (datagram may be thought of as a synonym for message). 
• In the language of the Internet, TCP and UDP are sometimes called end-to-end protocols, although it is equally correct to refer to them as transport protocols. (A short sketch contrasting the two channels appears after this list.)
• Running above the transport layer are a range of application protocols, such as FTP, TFTP (Trivial File Transfer Protocol), Telnet (remote login), and SMTP (Simple Mail Transfer Protocol, or electronic mail), that enable the interoperation of popular applications.
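One way to see the TCP/UDP contrast in practice is through the standard sockets interface. The following minimal Python sketch uses placeholder hosts and ports, and the actual network calls are commented out so it stands alone:

    import socket

    # TCP: a reliable byte-stream channel. A connection must be set up first,
    # and the receiver sees an ordered stream of bytes, not discrete messages.
    tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # tcp.connect(("example.com", 80))         # placeholder host and port
    # tcp.sendall(b"GET / HTTP/1.0\r\n\r\n")   # bytes pushed into the stream

    # UDP: an unreliable datagram channel. No connection setup; each sendto()
    # is one self-contained datagram that may be lost or reordered in transit.
    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # udp.sendto(b"one datagram", ("192.0.2.1", 9999))  # placeholder address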
To understand the difference between an application layer protocol and an application, think of all the different World Wide Web browsers that are available (e.g., Mosaic, Netscape, Internet Explorer, Lynx, etc.). There are a similarly large number of different implementations of Web servers. The reason you can use any one of these application programs to access a particular site on the Web is that they all conform to the same application layer protocol: HTTP (Hypertext Transfer Protocol).

    Confusingly, the same word sometimes applies to both an application and the application layer protocol that it uses (e.g., FTP).

    The Internet architecture has three features that are worth highlighting.
    1. First, as best illustrated by Figure 1.15, the Internet architecture does not imply strict layering. The application is free to bypass the defined transport layers and to directly use IP or one of the underlying networks. In fact, programmers are free to define new channel abstractions or applications that run on top of any of the existing protocols.
    2. Second, if you look closely at the protocol graph in Figure 1.14, you will notice an hourglass shape—wide at the top, narrow in the middle, and wide at the bottom. This shape actually reflects the central philosophy of the architecture. That is, IP serves as the focal point for the architecture—it defines a common method for exchanging packets among a wide collection of networks. Above IP can be arbitrarily many transport protocols, each offering a different channel abstraction to application programs.
    Thus, the issue of delivering messages from host to host is completely separated from the issue of providing a useful process-to-process communication service. Below IP, the architecture allows for arbitrarily many different network technologies, ranging from Ethernet to FDDI to ATM to single point-to-point links.

A final attribute of the Internet architecture (or, more accurately, of the IETF culture) is that in order for someone to propose a new protocol to be included in the architecture, they must produce both a protocol specification and at least one (and preferably two) representative implementations of the specification. The existence of working implementations is required for standards to be adopted by the IETF. This cultural assumption of the design community helps to ensure that the architecture's protocols can be efficiently implemented. Perhaps the value the Internet culture places on working software is best exemplified by a quote on T-shirts commonly worn at IETF meetings:
    We reject kings, presidents, and voting. We believe in rough consensus and running code.
    (Dave Clark)

    Of these three attributes of the Internet architecture, the hourglass design philosophy is important enough to bear repeating. The hourglass’s narrow waist represents a minimal and carefully chosen set of global capabilities that allows both higher-level applications and lower-level communication technologies to coexist, share capabilities, and evolve rapidly. The narrow-waisted model is critical to the Internet’s ability to adapt rapidly to new user demands and changing technologies.

    13 April 2011

    Computer Networks Requirements

    ARPAnet's Requirements
    To fulfill the needs of the military, the new ARPAnet had to meet the following requirements:
    •  No one point more critical than any other Because the network needed to be able to withstand a nuclear war, there could be no one critical part of the network and no single point of failure. If there were any critical parts of the network, enemies could target that area and eliminate communications.
    •  Redundant routes to any destination Because any location on the network could be taken down by enemies in the event of a war, there had to be multiple routes from any source to any destination on the network. Without redundant routes, any one location could become a critical communications link and a potential point of failure.
    •  On-the-fly rerouting of data If any part of the network failed, the network had to be able to reroute data to its destination on-the-fly.
•  Ability to connect different types of computers over different types of networks This network could not be tied to just one operating system or hardware type. Because universities, government agencies, and corporations often rely on different types of Local Area Networks (LANs) and network operating systems, interoperability among these many networks was critical. Connecting to the network should not dictate that a lot of new hardware be purchased; rather, the existing hardware should suffice.
    •  Not controlled by a single corporation If one corporation had a monopoly on this network, the network would grow to boost the corporation instead of the usefulness and effectiveness of the network. This network needed to be a cooperative effort among many engineers who were working to improve the network for the sake of the supernetwork, not that of a corporation.
    By December of 1969 the ARPAnet had four hosts. The ARPAnet consisted of computers at
    • the University of California at Los Angeles, 
    • the University of California at Santa Barbara, 
    • the University of Utah, and 
    • Stanford Research Institute. 
    The ARPAnet set the foundation for what would grow up to be the Internet.

    Requirements: a network designer's point of view
    The first step is to identify the set of constraints and requirements that influence network design. Before getting started, however, it is important to understand that the expectations you have of a network depend on your perspective:
    • An application programmer would list the services that his/her application needs, for example, a guarantee that each message the application sends will be delivered without error within a certain amount of time.
    • A network designer would list the properties of a cost-effective design, for example, that network resources are efficiently utilized and fairly allocated to different users.
    • A network provider would list the characteristics of a system that is easy to administer and manage, for example, in which faults can be easily isolated and where it is easy to account for usage.
    Connectivity
    Starting with the obvious, a network must provide connectivity among a set of computers.

    Sometimes it is enough to build a limited network that connects only a few select machines. In fact, for reasons of privacy and security, many private (corporate) networks have the explicit goal of limiting the set of machines that are connected.

    In contrast, other networks (of which the Internet is the prime example) are designed to grow in a way that allows them the potential to connect all the computers in the world. A system that is designed to support growth to an arbitrarily large size is said to scale.

    Links, Nodes, and Clouds
    Network connectivity occurs at many different levels.
    • At the lowest level, a network can consist of two or more computers directly connected by some physical medium, such as a coaxial cable or an optical fiber. We call such a physical medium a link, and 
    • we often refer to the computers it connects as nodes. (Sometimes a node is a more specialized piece of hardware rather than a computer, but we overlook that distinction for the purposes of this discussion.) 
    1. As illustrated in Figure 1.2, physical links are sometimes limited to a pair of nodes (such a link is said to be point-to-point), 
2. while in other cases, more than two nodes may share a single physical link (such a link is said to be multiple-access).

    Whether a given link supports point-to-point or multiple access connectivity depends on how the node is attached to the link. 
    It is also the case that multiple-access links are often limited in size, in terms of both the geographical distance they can cover and the number of nodes they can connect. 
    The exception is a satellite link, which can cover a wide geographic area.

    If computer networks were limited to situations in which all nodes are directly connected to each other over a common physical medium, then either networks would be very limited in the number of computers they could connect, or the number of wires coming out of the back of each node would quickly become both unmanageable and very expensive.

    Fortunately, connectivity between two nodes does not necessarily imply a direct physical connection between them—indirect connectivity may be achieved among a set of cooperating nodes. Consider the following two examples of how a collection of computers can be indirectly connected.
    Figure 1.3 shows a set of nodes, each of which is attached to one or more point-to-point links.
Those nodes that are attached to at least two links run software that forwards data received on one link out on another. If organized in a systematic way, these forwarding nodes form a switched network.

    There are numerous types of switched networks, of which the two most common are circuit switched and packet switched. The former is most notably employed by the telephone system, while the latter is used for the overwhelming majority of computer networks and will be the focus here.

The important feature of packet-switched networks is that the nodes in such a network send discrete blocks of data to each other. Think of these blocks of data as corresponding to some piece of application data such as a file, a piece of email, or an image. We call each block of data either a packet or a message, and for now we use these terms interchangeably (strictly speaking, a packet and a message are not always the same thing, as we will see shortly).

Packet-switched networks typically use a strategy called store-and-forward. As the name suggests, each node in a store-and-forward network first receives a complete packet over some link, stores the packet in its internal memory, and then forwards the complete packet to the next node.
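A toy sketch of this store-and-forward behavior; the Link class and its methods here are invented purely for illustration:

    from collections import deque

    class Link:
        """Toy link that hands over whole packets (invented for illustration)."""
        def __init__(self):
            self.inbox = deque()
        def send(self, packet):
            self.inbox.append(packet)
        def receive_full_packet(self):
            return self.inbox.popleft()

    def store_and_forward(in_link, out_link):
        """Receive a complete packet, hold it in memory, then forward it."""
        packet = in_link.receive_full_packet()   # wait for the entire packet
        out_link.send(packet)                    # only then forward it onward

    a, b = Link(), Link()
    a.send(b"a complete packet")
    store_and_forward(a, b)
    assert b.receive_full_packet() == b"a complete packet"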

    In contrast, a circuit-switched network first establishes a dedicated circuit across a sequence of links and then allows the source node to send a stream of bits across this circuit to a destination node.
The major reason for using packet switching rather than circuit switching in a computer network is efficiency: the link is used only when there is data to send, rather than being dedicated to a single pair of hosts for the duration of their conversation.
    The cloud in Figure 1.3 distinguishes between the nodes on the inside that implement the network (they are commonly called switches, and their sole function is to store and forward packets) and the nodes on the outside of the cloud that use the network (they are commonly called hosts, and they support users and run application programs).

    Also note that the cloud in Figure 1.3 is one of the most important icons of computer networking. In general, we use a cloud to denote any type of network, whether it is a single point-to-point link, a multiple-access link, or a switched network. Thus, whenever you see a cloud used in a figure, you can think of it as a placeholder for any of the networking technologies covered here.

    A second way in which a set of computers can be indirectly connected is shown in Figure 1.4. In this situation, a set of independent networks (clouds) are interconnected to form an internetwork, or internet for short. We adopt the Internet’s convention of referring to a generic internetwork of networks as a lowercase i internet, and the currently operational TCP/IP Internet as the capital I Internet. 
    A node that is connected to two or more networks is commonly called a router or gateway, and it plays much the same role as a switch—it forwards messages from one network to another. 
    Note that an internet can itself be viewed as another kind of network, which means that an internet can be built from an interconnection of internets.
    Thus, we can recursively build arbitrarily large networks by interconnecting clouds to form larger clouds.

    Just because a set of hosts are directly or indirectly connected to each other does not mean that we have succeeded in providing host-to-host connectivity.
    The final requirement is that each node must be able to say which of the other nodes on the network it wants to communicate with. This is done by assigning an address to each node. An address is a byte string that identifies a node; that is, the network can use a node’s address to distinguish it from the other nodes connected to the network.
    When a source node wants the network to deliver a message to a certain destination node, it specifies the address of the destination node. If the sending and receiving nodes are not directly connected, then the switches and routers of the network use this address to decide how to forward the message toward the destination.
    1. The process of determining systematically how to forward messages toward the destination node based on its address is called routing
    2. This brief introduction to addressing and routing has presumed that the source node wants to send a message to a single destination node (unicast). 
    3. While this is the most common scenario, it is also possible that the source node might want to broadcast a message to all the nodes on the network. 
    4. Or a source node might want to send a message to some subset of the other nodes, but not all of them, a situation called multicast.
    Thus, in addition to node-specific addresses, another requirement of a network is that it support multicast and broadcast addresses.
    The main idea to take away from this discussion is that we can define a network recursively as consisting of two or more nodes connected by a physical link, or as two or more networks connected by a node.
    In other words, a network can be constructed from a nesting of networks, where at the bottom level, the network is implemented by some physical medium. One of the key challenges in providing network connectivity is to define an address for each node that is reachable on the network (including support for broadcast and multicast connectivity), and to be able to use this address to route messages toward the appropriate destination node(s).
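Before moving on, the following toy sketch illustrates address-based forwarding, including the broadcast case. The addresses, link names, and table entries are all invented for illustration; real networks use structured byte strings, such as 32-bit IP addresses:

    # Toy forwarding table: maps each known destination address to the
    # outgoing link on which a message should be forwarded.
    FORWARDING_TABLE = {
        "host-A": "link-1",
        "host-B": "link-2",
    }
    ALL_LINKS = ["link-1", "link-2", "link-3"]
    BROADCAST = "all-hosts"   # a special address understood by every node

    def forward(dest_address):
        """Return the link(s) on which to send a message for dest_address."""
        if dest_address == BROADCAST:
            return list(ALL_LINKS)               # broadcast: copy to every link
        return [FORWARDING_TABLE[dest_address]]  # unicast: one outgoing link

    print(forward("host-B"))    # ['link-2']
    print(forward(BROADCAST))   # ['link-1', 'link-2', 'link-3']

Multicast would add entries mapping a group address to some subset of the links.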
    Cost-Effective Resource Sharing
As stated above, we focus here on packet-switched networks. This section explains the key requirement of computer networks—efficiency—that leads us to packet switching as the strategy of choice.

    Given a collection of nodes indirectly connected by a nesting of networks, it is possible for any pair of hosts to send messages to each other across a sequence of links and nodes. Of course, we want to do more than support just one pair of communicating hosts—we want to provide all pairs of hosts with the ability to exchange messages. The question then is,
    1. How do all the hosts that want to communicate share the network, especially if they want to use it at the same time? And, as if that problem isn’t hard enough, 
    2. how do several hosts share the same link when they all want to use it at the same time?
    To understand how hosts share a network, we need to introduce a fundamental concept, multiplexing, which means that a system resource is shared among multiple users.
At an intuitive level, multiplexing can be explained by analogy to a timesharing computer system, where a single physical CPU is shared (multiplexed) among multiple jobs, each of which believes it has its own private processor. Similarly, data being sent by multiple users can be multiplexed over the physical links that make up a network.

To see how this might work, consider the simple network illustrated in Figure 1.5, where the three hosts on the left side of the network (L1–L3) are sending data to the three hosts on the right (R1–R3) by sharing a switched network that contains only one physical link. (For simplicity, assume that host L1 is communicating with host R1, and so on.) In this situation, three flows of data—corresponding to the three pairs of hosts—are multiplexed onto a single physical link by switch 1 and then demultiplexed back into separate flows by switch 2. Note that we are being intentionally vague about exactly what a "flow of data" corresponds to. For the purposes of this discussion, assume that each host on the left has a large supply of data that it wants to send to its counterpart on the right.

    There are several different methods for multiplexing multiple flows onto one physical link. One common method is synchronous time-division multiplexing (STDM). The idea of STDM is to divide time into equal-sized quanta and, in a round-robin fashion, give each flow a chance to send its data over the physical link.

In other words, during time quantum 1, data from the first flow is transmitted; during time quantum 2, data from the second flow is transmitted; and so on. This process continues until all the flows have had a turn, at which time the first flow gets to go again, and the process repeats. Another method is frequency-division multiplexing (FDM). The idea of FDM is to transmit each flow over the physical link at a different frequency, much the same way that the signals for different TV stations are transmitted at different frequencies on a physical cable TV link.

    Although simple to understand, both STDM and FDM are limited in two ways.
    1. First, if one of the flows (host pairs) does not have any data to send, its share of the physical link—that is, its time quantum or its frequency—remains idle, even if one of the other flows has data to transmit. For computer communication, the amount of time that a link is idle can be very large—for example, consider the amount of time you spend reading a Web page (leaving the link idle) compared to the time you spend fetching the page. 
    2. Second, both STDM and FDM are limited to situations in which the maximum number of flows is fixed and known ahead of time. It is not practical to resize the quantum or to add additional quanta in the case of STDM or to add new frequencies in the case of FDM.
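A toy illustration of the first limitation (the flow names and queued data are invented): under STDM, an idle flow's quantum is simply wasted, even while other flows have data waiting:

    # Toy STDM: time is divided into fixed quanta handed out round-robin.
    queues = {"flow-1": ["a", "b", "c"], "flow-2": [], "flow-3": ["d"]}

    for _round in range(2):
        for flow, queue in queues.items():
            if queue:
                print(f"{flow} sends {queue.pop(0)}")
            else:
                print(f"{flow}'s quantum is wasted (link idle)")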
    The form of multiplexing that we make most use of here is called statistical multiplexing. Although the name is not all that helpful for understanding the concept, statistical multiplexing is really quite simple, with two key ideas.
1. First, it is like STDM in that the physical link is shared over time—first data from one flow is transmitted over the physical link, then data from another flow is transmitted, and so on.
      1. Unlike STDM, however, data is transmitted from each flow on demand rather than during a predetermined time slot. 
        1. Thus, if only one flow has data to send, it gets to transmit that data without waiting for its quantum to come around and thus without having to watch the quanta assigned to the other flows go by unused.
      2. It is this avoidance of idle time that gives packet switching its efficiency.
    As defined so far, however, statistical multiplexing has no mechanism to ensure that all the flows eventually get their turn to transmit over the physical link. That is, once a flow begins sending data, we need some way to limit the transmission, so that the other flows can have a turn. To account for this need, statistical multiplexing defines an upper bound on the size of the block of data that each flow is permitted to transmit at a given time.
    This limited-size block of data is typically referred to as a packet, to distinguish it from the arbitrarily large message that an application program might want to transmit. 
Because a packet-switched network limits the maximum size of packets, a host may not be able to send a complete message in one packet. The source may need to fragment the message into several packets, with the receiver reassembling the packets back into the original message.
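A minimal sketch of such fragmentation and reassembly; the maximum packet size here is invented, and the sketch assumes packets arrive in order with none lost (a real protocol would also tag each fragment with an identifier and offset to cope with loss and reordering):

    MAX_PACKET = 512  # assumed maximum packet size, in bytes

    def fragment(message: bytes):
        """Split an arbitrarily large message into packet-sized pieces."""
        return [message[i:i + MAX_PACKET]
                for i in range(0, len(message), MAX_PACKET)]

    def reassemble(packets):
        """Receiver: concatenate the pieces back into the original message."""
        return b"".join(packets)

    message = b"x" * 1300
    packets = fragment(message)          # three packets: 512 + 512 + 276 bytes
    assert reassemble(packets) == message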


Each flow, then, sends a sequence of packets over the physical link, with a decision made on a packet-by-packet basis as to which flow's packet to send next.
    Notice that if only one flow has data to send, then it can send a sequence of packets back-to-back. However, should more than one of the flows have data to send, then their packets are interleaved on the link. Figure 1.6 depicts a switch multiplexing packets from multiple sources onto a single shared link.

    The decision as to which packet to send next on a shared link can be made in a number of different ways. For example, in a network consisting of switches interconnected by links such as the one in Figure 1.5, the decision would be made by the switch that transmits packets onto the shared link. (As we will see later, not all packet-switched networks actually involve switches, and they may use other mechanisms to determine whose packet goes onto the link next.)
    Each switch in a packet-switched network makes this decision independently, on a packet-by-packet basis.
    One of the issues that faces a network designer is how to make this decision in a fair manner. For example,
    1. a switch could be designed to service packets on a first-in-first-out (FIFO) basis. 
    2. Another approach would be to service the different flows in a round-robin manner, just as in STDM. 
      1. This might be done to ensure that certain flows receive a particular share of the link’s bandwidth, 
2. or that they never have their packets delayed in the switch for more than a certain length of time. A network that allows flows to request such treatment is said to support quality of service (QoS).
    Also, notice in Figure 1.6 that since the switch has to multiplex three incoming packet streams onto one outgoing link, it is possible that the switch will receive packets faster than the shared link can accommodate. In this case, the switch is forced to buffer these packets in its memory. Should a switch receive packets faster than it can send them for an extended period of time, then the switch will eventually run out of buffer space, and some packets will have to be dropped. When a switch is operating in this state, it is said to be congested.
    The bottom line is that statistical multiplexing defines a cost-effective way for multiple users (e.g., host-to-host flows of data) to share network resources (links and nodes) in a fine-grained manner. It defines the packet as the granularity with which the links of the network are allocated to different flows, with each switch able to schedule the use of the physical links it is connected to on a per-packet basis.
    1. Fairly allocating link capacity to different flows and 
    2. dealing with congestion when it occurs 
    are the key challenges of statistical multiplexing.
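Tying these ideas together, here is a small sketch of a switch's output port performing statistical multiplexing onto one shared link: arriving packets from any flow are queued FIFO and sent on demand, and when the bounded buffer overflows, packets are dropped (congestion). The buffer size, flow names, and arrival pattern are all invented for illustration:

    from collections import deque

    BUFFER_LIMIT = 4  # assumed buffer capacity, in packets

    class OutputPort:
        """FIFO statistical multiplexing onto one shared outgoing link."""
        def __init__(self):
            self.queue = deque()
            self.dropped = 0

        def receive(self, flow, seq):
            if len(self.queue) >= BUFFER_LIMIT:
                self.dropped += 1             # congested: buffer full, drop
            else:
                self.queue.append((flow, seq))

        def transmit_all(self):
            while self.queue:                 # send whichever packet is next,
                print("sent", self.queue.popleft())  # regardless of its flow

    port = OutputPort()
    for seq in range(6):                      # packets from three flows arrive
        port.receive(f"flow-{seq % 3}", seq)  # faster than the link drains them
    port.transmit_all()
    print("dropped:", port.dropped)           # 2 packets were dropped

A FIFO discipline like this is the simplest choice; a round-robin scheduler of the kind described above could be substituted to give each flow a guaranteed share of the link.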

    SANs, LANs, MANs, and WANs
    One way to characterize networks is according to their size. Two well-known examples are LANs (local area networks) and WANs (wide area networks); the former typically extend less than 1 km, while the latter can be worldwide.

Other networks are classified as MANs (metropolitan area networks), which usually span tens of kilometers. The reason such classifications are interesting is that the size of a network often has implications for the underlying technology that can be used, with a key factor being the amount of time it takes for data to propagate from one end of the network to the other.

An interesting historical note is that the term "wide area network" was not applied to the first WANs because there was no other sort of network to differentiate them from. When computers were incredibly rare and expensive, there was no point in thinking about how to connect all the computers in the local area—there was only one computer in that area.

    Only as computers began to proliferate did LANs become necessary, and the term “WAN” was then introduced to describe the larger networks that interconnected geographically distant computers.

Another kind of network that we need to be aware of is SANs (system area networks). SANs are usually confined to a single room and connect the various components of a large computing system. For example, HiPPI (High Performance Parallel Interface) and Fibre Channel are two common SAN technologies used to connect massively parallel processors to scalable storage servers and data vaults. (Because they often connect computers to storage servers, SANs are sometimes defined as storage area networks.) Although we do not describe such networks in detail here, they are worth knowing about because they are often at the leading edge in terms of performance, and because it is increasingly common to connect such networks into LANs and WANs.
    Support for Common Services
    It is overly simplistic to view a computer network as simply delivering packets among a collection of computers. It is more accurate to think of a network as providing the means for a set of application processes that are distributed over those computers to communicate.
    In other words, the next requirement of a computer network is that the application programs running on the hosts connected to the network must be able to communicate in a meaningful way.
    When two application programs need to communicate with each other, there are a lot of complicated things that need to happen beyond simply sending a message from one host to another.
    • One option would be for application designers to build all that complicated functionality into each application program. 
      • However, since many applications need common services, it is much more logical to implement those common services once and then to let the application designer build the application using those services. 
        • The challenge for a network designer is to identify the right set of common services. 
        • The goal is to hide the complexity of the network from the application without overly constraining the application designer.
    Intuitively, we view the network as providing logical channels over which application-level processes can communicate with each other; each channel provides the set of services required by that application. In other words, just as we use a cloud to abstractly represent connectivity among a set of computers, we now think of a channel as connecting one process to another. Figure 1.7 shows a pair of application-level processes communicating over a logical channel that is, in turn, implemented on top of a cloud that connects a set of hosts. We can think of the channel as being like a pipe connecting two applications, so that a sending application can put data in one end and expect that data to be delivered by the network to the application at the other end of the pipe.
    The challenge is to recognize what functionality the channels should provide to application programs. 
    For example,
    1. does the application require a guarantee that messages sent over the channel are delivered, or is it acceptable if some messages fail to arrive? 
    2. Is it necessary that messages arrive at the recipient process in the same order in which they are sent, or does the recipient not care about the order in which messages arrive? 
    3. Does the network need to ensure that no third parties are able to eavesdrop on the channel, or is privacy not a concern? 
    In general, a network provides a variety of different types of channels, with each application selecting the type that best meets its needs. The rest of this section illustrates the thinking involved in defining useful channels.

    Identifying Common Communication Patterns
    Designing abstract channels involves
    1. first understanding the communication needs of a representative collection of applications, 
    2. then extracting their common communication requirements, 
    3. and finally incorporating the functionality that meets these requirements in the network.
    One of the earliest applications supported on any network is a file access program like FTP (File Transfer Protocol) or NFS (Network File System). Although many details vary—for example, whether whole files are transferred across the network or only single blocks of the file are read/written at a given time—the communication component of remote file access is characterized by a pair of processes,
    1. one that requests that a file be read or written and 
    2. a second process that honors this request. 
    The process that requests access to the file is called the client, and the process that supports access to the file is called the server.
    • Reading a file involves the client sending a small request message to a server and the server responding with a large message that contains the data in the file. 
    • Writing works in the opposite way—the client sends a large message containing the data to be written to the server, and the server responds with a small message confirming that the write to disk has taken place.
    A digital library, as exemplified by the World Wide Web, is another application that behaves in a similar way: A client process makes a request, and a server process responds by returning the requested data.

Using file access, a digital library, and two video applications (videoconferencing and video-on-demand) as a representative sample, we might decide to provide the following two types of channels:
    1. request/reply channels and 
    2. message stream channels. 
    The request/reply channel would be used by the file transfer and digital library applications. It would guarantee that every message sent by one side is received by the other side and that only one copy of each message is delivered. The request/reply channel might also protect the privacy and integrity of the data that flows over it, so that unauthorized parties cannot read or modify the data being exchanged between the client and server processes.
    The message stream channel could be used by both the video-on-demand and videoconferencing applications, provided it is parameterized to support both one-way and two-way traffic and to support different delay properties.
    1. The message stream channel might not need to guarantee that all messages are delivered, since a video application can operate adequately even if some frames are not received.
    2. It would, however, need to ensure that those messages that are delivered arrive in the same order in which they were sent, to avoid displaying frames out of sequence. 
    3. Like the request/reply channel, the message stream channel might want to ensure the privacy and integrity of the video data. 
    4. Finally, the message stream channel might need to support multicast, so that multiple parties can participate in the teleconference or view the video.
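In code, the two channel types might be captured as interfaces along the following lines; the class and method names are invented for illustration and are not drawn from any real API:

    from abc import ABC, abstractmethod

    class RequestReplyChannel(ABC):
        """Delivers every message exactly once, e.g., for file transfer."""
        @abstractmethod
        def request(self, message: bytes) -> bytes:
            """Send a request and block until the matching reply arrives."""

    class MessageStreamChannel(ABC):
        """Preserves message order but may drop messages, e.g., for video."""
        @abstractmethod
        def send(self, message: bytes) -> None:
            """Queue one message for in-order (but not guaranteed) delivery."""
        @abstractmethod
        def receive(self) -> bytes:
            """Return the next message that arrived, in the order sent."""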
While it is common for a network designer to strive for the smallest number of abstract channel types that can serve the largest number of applications, there is a danger in trying to get away with too few channel abstractions. Simply stated, if you have a hammer, then everything looks like a nail. For example, if all you have are message stream and request/reply channels, then it is tempting to use them for the next application that comes along, even if neither type provides exactly the semantics needed by the application. Thus, network designers will probably be inventing new types of channels—and adding options to existing channels—for as long as application programmers are inventing new applications.

    Also note that independent of exactly what functionality a given channel provides, there is the question of where that functionality is implemented.
    In many cases, it is easiest to view the host-to-host connectivity of the underlying network as simply providing a bit pipe, with any high-level communication semantics provided at the end hosts. 
    The advantage of this approach is it keeps the switches in the middle of the network as simple as possible—they simply forward packets—but it requires the end hosts to take on much of the burden of supporting semantically rich process-to-process channels.

    The alternative is to push additional functionality onto the switches, thereby allowing the end hosts to be “dumb” devices (e.g., telephone handsets).
We will see this question of how various network services are partitioned between the packet switches and the end hosts (devices) as a recurring issue in network design.
    Reliability
As suggested by the examples just considered, reliable message delivery is one of the most important functions that a network can provide. It is difficult to determine how to provide this reliability, however, without first understanding how networks can fail.

    The first thing to recognize is that computer networks do not exist in a perfect world. Machines crash and later are rebooted, fibers are cut, electrical interference corrupts bits in the data being transmitted, switches run out of buffer space, and if these sorts of physical problems aren’t enough to worry about, the software that manages the hardware sometimes forwards packets into oblivion.
    Thus, a major requirement of a network is to mask (hide) certain kinds of failures, so as to make the network appear more reliable than it really is to the application programs using it.
    There are three general classes of failure that network designers have to worry about.
1. First, as a packet is transmitted over a physical link, bit errors may be introduced into the data; that is, a 1 is turned into a 0 or vice versa. Sometimes single bits are corrupted, but more often than not, a burst error occurs—several consecutive bits are corrupted. Bit errors typically occur because outside forces, such as lightning strikes, power surges, and microwave ovens, interfere with the transmission of data. The good news is that such bit errors are fairly rare, affecting on average only one out of every 10^6 to 10^7 bits on a typical copper-based cable and one out of every 10^12 to 10^14 bits on a typical optical fiber.
  1. As we will see, there are techniques that detect these bit errors with high probability (a small checksum sketch appears after this list). Once detected, it is sometimes possible to correct such errors—if we know which bit or bits are corrupted, we can simply flip them—
      2. while in other cases the damage is so bad that it is necessary to discard the entire packet. In such a case, the sender may be expected to retransmit the packet.
    2. The second class of failure is at the packet, rather than the bit, level; that is, a complete packet is lost by the network. 
      1. One reason this can happen is that the packet contains an uncorrectable bit error and therefore has to be discarded. 
        1. A more likely reason, however, is that one of the nodes that has to handle the packet—for example, a switch that is forwarding it from one link to another—is so overloaded that it has no place to store the packet, and therefore is forced to drop it. This is the problem of congestion. 
      2. Less commonly, the software running on one of the nodes that handles the packet makes a mistake. For example, it might incorrectly forward a packet out on the wrong link, so that the packet never finds its way to the ultimate destination. 
        1. As we will see, one of the main difficulties in dealing with lost packets is distinguishing between a packet that is indeed lost and one that is merely late in arriving at the destination.
3. The third class of failure is at the node and link level; that is, a physical link is cut, or the computer it is connected to crashes. This can be caused by
        1. software that crashes, 
        2. a power failure, or 
        3. a reckless backhoe operator. 
    While such failures can eventually be corrected, they can have a dramatic effect on the network for an extended period of time. However, they need not totally disable the network. In a packet-switched network, for example, it is sometimes possible to route around a failed node or link. One of the difficulties in dealing with this third class of failure is distinguishing
    1. between a failed computer and one that is merely slow, or 
    2. in the case of a link, between one that has been cut and one that is very flaky and therefore introducing a high number of bit errors.
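As a taste of the error-detection techniques mentioned under the first failure class, here is a sketch of a 16-bit ones-complement checksum in the style used by several Internet protocols (see RFC 1071). The sender transmits the checksum along with the data; the receiver recomputes it and compares, detecting corruption with high probability (though not with certainty):

    def internet_checksum(data: bytes) -> int:
        """16-bit ones-complement checksum in the style of RFC 1071."""
        if len(data) % 2:
            data += b"\x00"                 # pad to a 16-bit boundary
        total = 0
        for i in range(0, len(data), 2):
            total += (data[i] << 8) | data[i + 1]
            total = (total & 0xFFFF) + (total >> 16)   # end-around carry
        return ~total & 0xFFFF              # ones' complement of the sum

    packet = b"hello world"
    checksum = internet_checksum(packet)
    corrupted = b"hellp world"              # a single corrupted character
    assert internet_checksum(corrupted) != checksum   # corruption detected

In practice, the checksum is carried in a packet header and the receiver sums the data together with the transmitted checksum, but the recompute-and-compare version above captures the idea.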
    The key idea to take away from this discussion is that defining useful channels involves both understanding the applications’ requirements and recognizing the limitations of the underlying technology. 
    The challenge is to fill in the gap between what the application expects and what the underlying technology can provide. This is sometimes called the semantic gap.



    to be continued....