To fulfill the needs of the military, the new ARPAnet had to meet the following requirements:
- No one point more critical than any other: Because the network needed to be able to withstand a nuclear war, there could be no one critical part of the network and no single point of failure. If there were any critical parts of the network, enemies could target that area and eliminate communications.
- Redundant routes to any destination: Because any location on the network could be taken down by enemies in the event of a war, there had to be multiple routes from any source to any destination on the network. Without redundant routes, any one location could become a critical communications link and a potential point of failure.
- On-the-fly rerouting of data: If any part of the network failed, the network had to be able to reroute data to its destination on the fly.
- Ability to connect different types of computers over different types of networks: This network could not be tied to just one operating system or hardware type. Because universities, government agencies, and corporations often rely on different types of local area networks (LANs) and network operating systems, interoperability among these many networks was critical. Connecting to the network should not require purchasing a lot of new hardware; the existing hardware should suffice.
- Not controlled by a single corporation: If one corporation had a monopoly on this network, the network would grow to boost the corporation instead of the usefulness and effectiveness of the network. This network needed to be a cooperative effort among many engineers who were working to improve the network for the sake of the supernetwork, not that of a corporation.
The first four nodes of the ARPAnet were located at:
- the University of California at Los Angeles,
- the University of California at Santa Barbara,
- the University of Utah, and
- Stanford Research Institute.
Requirements: a network designer's point of view

The first step is to identify the set of constraints and requirements that influence network design. Before getting started, however, it is important to understand that the expectations you have of a network depend on your perspective:
- An application programmer would list the services that his/her application needs, for example, a guarantee that each message the application sends will be delivered without error within a certain amount of time.
- A network designer would list the properties of a cost-effective design, for example, that network resources are efficiently utilized and fairly allocated to different users.
- A network provider would list the characteristics of a system that is easy to administer and manage, for example, in which faults can be easily isolated and where it is easy to account for usage.
Starting with the obvious, a network must provide connectivity among a set of computers.
Sometimes it is enough to build a limited network that connects only a few select machines. In fact, for reasons of privacy and security, many private (corporate) networks have the explicit goal of limiting the set of machines that are connected.
In contrast, other networks (of which the Internet is the prime example) are designed to grow in a way that allows them the potential to connect all the computers in the world. A system that is designed to support growth to an arbitrarily large size is said to scale.
Links, Nodes, and Clouds
Network connectivity occurs at many different levels.
At the lowest level, a network can consist of two or more computers directly connected by some physical medium, such as a coaxial cable or an optical fiber. We call such a physical medium a link, and we often refer to the computers it connects as nodes. (Sometimes a node is a more specialized piece of hardware rather than a computer, but we overlook that distinction for the purposes of this discussion.) As illustrated in Figure 1.2, physical links are sometimes limited to a pair of nodes (such a link is said to be point-to-point), while in other cases more than two nodes may share a single physical link (such a link is said to be multiple-access).
Whether a given link supports point-to-point or multiple access connectivity depends on how the node is attached to the link.
It is also the case that multiple-access links are often limited in size, in terms of both the geographical distance they can cover and the number of nodes they can connect. The exception is a satellite link, which can cover a wide geographic area.
If computer networks were limited to situations in which all nodes are directly connected to each other over a common physical medium, then either networks would be very limited in the number of computers they could connect, or the number of wires coming out of the back of each node would quickly become both unmanageable and very expensive.
Figure 1.3 shows a set of nodes, each of which is attached to one or more point-to-point links.
Those nodes that are attached to at least two links run software that forwards data received on one link out on another. If organized in a systematic way, these forwarding nodes form a switched network.
There are numerous types of switched networks, of which the two most common are circuit switched and packet switched. The former is most notably employed by the telephone system, while the latter is used for the overwhelming majority of computer networks and will be the focus here.
The important feature of packet-switched networks is that the nodes in such a network send discrete blocks of data to each other. Think of these blocks of data as corresponding to some piece of application data such as a file, a piece of email, or an image. We call each block of data either a packet or a message, and for now we use these terms interchangeably; (generally speaking packet and message are not always the same).
Packet-switched networks typically use a strategy called store-and-forward. As
the name suggests, each node in a store-and-forward network first receives a complete packet over some link, stores the packet in its internal memory, and then forwards the complete packet to the next node.
In contrast, a circuit-switched network first establishes a dedicated circuit across a sequence of links and then allows the source node to send a stream of bits across this circuit to a destination node.
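The latency consequence of store-and-forward can be made concrete with a small calculation. This is a simplified sketch, assuming identical links and ignoring queueing delay; the bandwidth and delay figures in the example are invented for illustration:

```python
def store_and_forward_latency(packet_bits, link_bandwidth_bps, num_links, prop_delay_s):
    """Total time for one packet to cross a path of identical links when
    every switch must receive the full packet before forwarding it."""
    transmit_time = packet_bits / link_bandwidth_bps
    # The packet is transmitted in full once per link, and propagates over each link.
    return num_links * (transmit_time + prop_delay_s)

# Example: a 12,000-bit (1500-byte) packet over three 10-Mbps links,
# each with 1 ms of propagation delay: 3 x (1.2 ms + 1 ms) = 6.6 ms.
latency = store_and_forward_latency(12_000, 10_000_000, 3, 0.001)
```

The per-hop transmit time is paid at every switch, which is the price a packet-switched network pays for not reserving a circuit in advance.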
The major reason for using packet switching rather than circuit switching in a computer network is efficiency.

The cloud in Figure 1.3 distinguishes between the nodes on the inside that implement the network (they are commonly called switches, and their sole function is to store and forward packets) and the nodes on the outside of the cloud that use the network (they are commonly called hosts, and they support users and run application programs).
Also note that the cloud in Figure 1.3 is one of the most important icons of computer networking. In general, we use a cloud to denote any type of network, whether it is a single point-to-point link, a multiple-access link, or a switched network. Thus, whenever you see a cloud used in a figure, you can think of it as a placeholder for any of the networking technologies covered here.
A second way in which a set of computers can be indirectly connected is shown in Figure 1.4. In this situation, a set of independent networks (clouds) are interconnected to form an internetwork, or internet for short. We adopt the Internet’s convention of referring to a generic internetwork of networks as a lowercase i internet, and the currently operational TCP/IP Internet as the capital I Internet.
A node that is connected to two or more networks is commonly called a router or gateway, and it plays much the same role as a switch—it forwards messages from one network to another. Note that an internet can itself be viewed as another kind of network, which means that an internet can be built from an interconnection of internets.
Thus, we can recursively build arbitrarily large networks by interconnecting clouds to form larger clouds.
Just because a set of hosts are directly or indirectly connected to each other does not mean that we have succeeded in providing host-to-host connectivity.
The final requirement is that each node must be able to say which of the other nodes on the network it wants to communicate with. This is done by assigning an address to each node. An address is a byte string that identifies a node; that is, the network can use a node’s address to distinguish it from the other nodes connected to the network. When a source node wants the network to deliver a message to a certain destination node, it specifies the address of the destination node. If the sending and receiving nodes are not directly connected, then the switches and routers of the network use this address to decide how to forward the message toward the destination.
The process of determining systematically how to forward messages toward the destination node based on its address is called routing. This brief introduction to addressing and routing has presumed that the source node wants to send a message to a single destination node (unicast). While this is the most common scenario, it is also possible that the source node might want to broadcast a message to all the nodes on the network. Or a source node might want to send a message to some subset of the other nodes, but not all of them, a situation called multicast.
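As a toy illustration of how a switch might use addresses to forward messages, the sketch below maps destination addresses to outgoing links. The addresses, link names, and broadcast convention are all invented for this example, not taken from any real protocol:

```python
# Toy forwarding table: maps a destination address (an opaque byte string)
# to the outgoing link on this switch.
FORWARDING_TABLE = {
    b"\x0a\x00\x00\x01": "link-1",
    b"\x0a\x00\x00\x02": "link-2",
}

BROADCAST = b"\xff\xff\xff\xff"  # hypothetical all-nodes address

def forward(dest_addr):
    """Return the set of outgoing links on which to send a message."""
    if dest_addr == BROADCAST:
        return set(FORWARDING_TABLE.values())   # broadcast: every link
    return {FORWARDING_TABLE[dest_addr]}        # unicast: exactly one link

assert forward(b"\x0a\x00\x00\x01") == {"link-1"}
```

Multicast would add addresses that map to a chosen subset of links; real routers build such tables automatically via routing protocols rather than by hand.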
The main idea to take away from this discussion is that we can define a network recursively as consisting of two or more nodes connected by a physical link, or as two or more networks connected by a node.
In other words, a network can be constructed from a nesting of networks, where at the bottom level, the network is implemented by some physical medium. One of the key challenges in providing network connectivity is to define an address for each node that is reachable on the network (including support for broadcast and multicast connectivity), and to be able to use this address to route messages toward the appropriate destination node(s).

Cost-Effective Resource Sharing
As stated above, we focus on packet-switched networks. This section explains the key requirement of computer networks—efficiency—that leads us to packet switching as the strategy of choice.
Given a collection of nodes indirectly connected by a nesting of networks, it is possible for any pair of hosts to send messages to each other across a sequence of links and nodes. Of course, we want to do more than support just one pair of communicating hosts—we want to provide all pairs of hosts with the ability to exchange messages. The question then is,
- How do all the hosts that want to communicate share the network, especially if they want to use it at the same time? And, as if that problem isn’t hard enough,
- how do several hosts share the same link when they all want to use it at the same time?
Sharing a network is analogous to a timesharing computer system, where a single physical CPU is shared (multiplexed) among multiple jobs, each of which believes it has its own private processor. Similarly, data being sent by multiple users can be multiplexed over the physical links that make up a network. To see how this might work, consider the simple network illustrated in Figure 1.5, where the three hosts on the left side of the network (L1–L3) are sending data to the three hosts on the right (R1–R3) by sharing a switched network that contains only one physical link. (For simplicity, assume that host L1 is communicating with host R1, and so on.) In this situation, three flows of data—corresponding to the three pairs of hosts—are multiplexed onto a single physical link by switch 1 and then demultiplexed back into separate flows by switch 2. Note that we are being intentionally vague about exactly what a “flow of data” corresponds to. For the purposes of this discussion, assume that each host on the left has a large supply of data that it wants to send to its counterpart on the right.
There are several different methods for multiplexing multiple flows onto one physical link. One common method is synchronous time-division multiplexing (STDM). The idea of STDM is to divide time into equal-sized quanta and, in a round-robin fashion, give each flow a chance to send its data over the physical link.
In other words, during time quantum 1, data from the first flow is transmitted; during time quantum 2, data from the second flow is transmitted; and so on. This process continues until all the flows have had a turn, at which time the first flow gets to go again, and the process repeats. Another method is frequency-division multiplexing (FDM). The idea of FDM is to transmit each flow over the physical link at a different frequency, much the same way that the signals for different TV stations are transmitted at different frequencies on a physical cable TV link.
Although simple to understand, both STDM and FDM are limited in two ways.
- First, if one of the flows (host pairs) does not have any data to send, its share of the physical link—that is, its time quantum or its frequency—remains idle, even if one of the other flows has data to transmit. For computer communication, the amount of time that a link is idle can be very large—for example, consider the amount of time you spend reading a Web page (leaving the link idle) compared to the time you spend fetching the page.
- Second, both STDM and FDM are limited to situations in which the maximum number of flows is fixed and known ahead of time. It is not practical to resize the quantum or to add additional quanta in the case of STDM or to add new frequencies in the case of FDM.
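The idle-quantum problem can be seen in a few lines of simulation. This is a simplified sketch of STDM with invented flow data, not a model of any real link layer:

```python
def stdm_schedule(flows, num_quanta):
    """Round-robin STDM: quantum i always belongs to flow i % len(flows),
    whether or not that flow has data. Returns what each quantum carried."""
    schedule = []
    for q in range(num_quanta):
        flow = flows[q % len(flows)]
        if flow["queue"]:
            schedule.append(flow["queue"].pop(0))
        else:
            schedule.append(None)  # the quantum goes idle
    return schedule

# Flow A has data, flow B is silent: half the link capacity is wasted.
flows = [{"queue": ["A1", "A2", "A3"]}, {"queue": []}]
print(stdm_schedule(flows, 6))  # -> ['A1', None, 'A2', None, 'A3', None]
```

Every `None` is a quantum the busy flow could have used but was not allowed to, which is exactly the inefficiency the next strategy addresses.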
Statistical multiplexing addresses both of these shortcomings. First, it is like STDM in that the physical link is shared over time—first data from one flow is transmitted over the physical link, then data from another flow is transmitted, and so on. Unlike STDM, however, data is transmitted from each flow on demand rather than during a predetermined time slot. Thus, if only one flow has data to send, it gets to transmit that data without waiting for its quantum to come around and thus without having to watch the quanta assigned to the other flows go by unused. It is this avoidance of idle time that gives packet switching its efficiency.
Second, statistical multiplexing defines an upper bound on the size of the block of data that each flow is permitted to transmit at a given time. This limited-size block of data is typically referred to as a packet, to distinguish it from the arbitrarily large message that an application program might want to transmit. Because the network limits the maximum packet size, a host may not be able to send a complete message in one packet; the source may need to fragment the message into several packets, with the receiver reassembling the packets back into the original message.
In other words, each flow sends a sequence of packets over the physical link, with a decision made on a packet-by-packet basis as to which flow’s packet to send next. Notice that if only one flow has data to send, then it can send a sequence of packets back-to-back. However, should more than one of the flows have data to send, then their packets are interleaved on the link. Figure 1.6 depicts a switch multiplexing packets from multiple sources onto a single shared link.
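Fragmentation and reassembly can be sketched in a few lines; the 4-byte maximum packet size is an artificially small, invented value chosen to make the example readable:

```python
MAX_PACKET = 4  # hypothetical maximum packet payload, in bytes

def fragment(message):
    """Split an application message into packets of at most MAX_PACKET bytes."""
    return [message[i:i + MAX_PACKET] for i in range(0, len(message), MAX_PACKET)]

def reassemble(packets):
    """The receiver joins the packets back into the original message."""
    return b"".join(packets)

msg = b"hello, world"
packets = fragment(msg)          # [b'hell', b'o, w', b'orld']
assert reassemble(packets) == msg
```

A real protocol would also tag each packet with a sequence number so the receiver can reassemble correctly even when packets arrive out of order; this sketch assumes in-order delivery.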
The decision as to which packet to send next on a shared link can be made in a number of different ways. For example, in a network consisting of switches interconnected by links such as the one in Figure 1.5, the decision would be made by the switch that transmits packets onto the shared link. (As we will see later, not all packet-switched networks actually involve switches, and they may use other mechanisms to determine whose packet goes onto the link next.)
Each switch in a packet-switched network makes this decision independently, on a packet-by-packet basis. One of the issues that faces a network designer is how to make this decision in a fair manner. For example,
- a switch could be designed to service packets on a first-in-first-out (FIFO) basis.
- Another approach would be to service the different flows in a round-robin manner, just as in STDM. This might be done to ensure that certain flows receive a particular share of the link’s bandwidth, or that they never have their packets delayed in the switch for more than a certain length of time. A network that allows flows to request such treatment is said to support quality of service (QoS).
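A per-flow round-robin scheduler of the kind just described can be sketched as follows; the flow queues and packet names are invented for illustration:

```python
from collections import deque

def round_robin(queues):
    """Drain per-flow queues one packet at a time, in turn, skipping
    flows that are idle, so packets from busy flows interleave fairly."""
    out = []
    while any(queues):
        for q in queues:
            if q:
                out.append(q.popleft())
    return out

a = deque(["a1", "a2", "a3"])
b = deque(["b1"])
print(round_robin([a, b]))  # -> ['a1', 'b1', 'a2', 'a3']
```

Contrast this with FIFO service, which would transmit packets strictly in arrival order and let a heavy flow monopolize the link.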
The bottom line is that statistical multiplexing defines a cost-effective way for multiple users (e.g., host-to-host flows of data) to share network resources (links and nodes) in a fine-grained manner. It defines the packet as the granularity with which the links of the network are allocated to different flows, with each switch able to schedule the use of the physical links it is connected to on a per-packet basis.
Fairly allocating link capacity to different flows and dealing with congestion when it occurs are the key challenges of statistical multiplexing.
Support for Common Services
SANs, LANs, MANs, and WANs
One way to characterize networks is according to their size. Two well-known examples are LANs (local area networks) and WANs (wide area networks); the former typically extend less than 1 km, while the latter can be worldwide.
Other networks are classified as MANs (metropolitan area networks), which usually span tens of kilometers. The reason such classifications are interesting is that the size of a network often has implications for the underlying technology that can be used, with a key factor being the amount of time it takes for data to propagate from one end of the network to the other.
An interesting historical note is that the term “wide area network” was not applied to the first WANs because there was no other sort of network to differentiate them from. When computers were incredibly rare and expensive, there was no point in thinking about how to connect all the computers in the local area—there was only one computer in that area.
Only as computers began to proliferate did LANs become necessary, and the term “WAN” was then introduced to describe the larger networks that interconnected geographically distant computers.
Another kind of network that we need to be aware of is SANs (system area networks). SANs are usually confined to a single room and connect the various components of a large computing system. For example, HiPPI (High Performance Parallel Interface) and Fibre Channel are two common SAN technologies used to connect massively parallel processors to scalable storage servers and data vaults. (Because they often connect computers to storage servers, SANs are sometimes defined as storage area networks.) Although we do not describe such networks in detail here, they are worth knowing about because they are often at the leading edge in terms of performance, and because it is increasingly common to connect such networks into LANs and WANs.
It is overly simplistic to view a computer network as simply delivering packets among a collection of computers. It is more accurate to think of a network as providing the means for a set of application processes that are distributed over those computers to communicate.
In other words, the next requirement of a computer network is that the application programs running on the hosts connected to the network must be able to communicate in a meaningful way. When two application programs need to communicate with each other, there are a lot of complicated things that need to happen beyond simply sending a message from one host to another.
- One option would be for application designers to build all that complicated functionality into each application program.
- However, since many applications need common services, it is much more logical to implement those common services once and then to let the application designer build the application using those services.
- The challenge for a network designer is to identify the right set of common services.
- The goal is to hide the complexity of the network from the application without overly constraining the application designer.
The challenge is to recognize what functionality the channels should provide to application programs. For example,
- does the application require a guarantee that messages sent over the channel are delivered, or is it acceptable if some messages fail to arrive?
- Is it necessary that messages arrive at the recipient process in the same order in which they are sent, or does the recipient not care about the order in which messages arrive?
- Does the network need to ensure that no third parties are able to eavesdrop on the channel, or is privacy not a concern?
Identifying Common Communication Patterns
Designing abstract channels involves
- first understanding the communication needs of a representative collection of applications,
- then extracting their common communication requirements,
- and finally incorporating the functionality that meets these requirements in the network.
Consider, for example, a file access application. It involves a pair of processes:
- one that requests that a file be read or written and
- a second process that honors this request.
- Reading a file involves the client sending a small request message to a server and the server responding with a large message that contains the data in the file.
- Writing works in the opposite way—the client sends a large message containing the data to be written to the server, and the server responds with a small message confirming that the write to disk has taken place.
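The request/reply pattern for file access can be sketched with an in-memory stand-in for the server; the file names, message format, and operations below are invented for illustration, not any real file access protocol:

```python
# Hypothetical server-side file store.
FILES = {"readme.txt": b"A large block of file data..."}

def server(request):
    """Handle one request/reply exchange. A request is (operation, name, payload)."""
    op, name, payload = request
    if op == "read":
        return ("ok", FILES[name])     # large reply carrying the file's data
    if op == "write":
        FILES[name] = payload
        return ("ok", b"")             # small reply confirming the write
    return ("error", b"unknown operation")

# Read: small request, large reply.
status, data = server(("read", "readme.txt", b""))
assert status == "ok" and data.startswith(b"A large")

# Write: large request, small confirming reply.
status, _ = server(("write", "notes.txt", b"new contents"))
assert FILES["notes.txt"] == b"new contents"
```

A real request/reply channel would carry these messages across the network and, as discussed below, would also have to guarantee delivery and protect the data in transit.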
Using file access, a digital library, and the two video applications videoconferencing and video-on-demand as a representative sample, we might decide to provide the following two types of channels:
- request/reply channels and
- message stream channels.
The message stream channel could be used by both the video-on-demand and videoconferencing applications, provided it is parameterized to support both one-way and two-way traffic and to support different delay properties.
- The message stream channel might not need to guarantee that all messages are delivered, since a video application can operate adequately even if some frames are not received.
- It would, however, need to ensure that those messages that are delivered arrive in the same order in which they were sent, to avoid displaying frames out of sequence.
- Like the request/reply channel, the message stream channel might want to ensure the privacy and integrity of the video data.
- Finally, the message stream channel might need to support multicast, so that multiple parties can participate in the teleconference or view the video.
Also note that independent of exactly what functionality a given channel provides, there is the question of where that functionality is implemented.
In many cases, it is easiest to view the host-to-host connectivity of the underlying network as simply providing a bit pipe, with any high-level communication semantics provided at the end hosts. The advantage of this approach is that it keeps the switches in the middle of the network as simple as possible—they simply forward packets—but it requires the end hosts to take on much of the burden of supporting semantically rich process-to-process channels.
The alternative is to push additional functionality onto the switches, thereby allowing the end hosts to be “dumb” devices (e.g., telephone handsets).
We will see this question of how various network services are partitioned between the packet switches and the end hosts (devices) as a recurring issue in network design.

Reliability
As suggested by the examples just considered, reliable message delivery is one of the most important functions that a network can provide. It is difficult to determine how to provide this reliability, however, without first understanding how networks can fail.
The first thing to recognize is that computer networks do not exist in a perfect world. Machines crash and later are rebooted, fibers are cut, electrical interference corrupts bits in the data being transmitted, switches run out of buffer space, and if these sorts of physical problems aren’t enough to worry about, the software that manages the hardware sometimes forwards packets into oblivion.
Thus, a major requirement of a network is to mask (hide) certain kinds of failures, so as to make the network appear more reliable than it really is to the application programs using it. There are three general classes of failure that network designers have to worry about.
- First, as a packet is transmitted over a physical link, bit errors may be introduced into the data; that is, a 1 is turned into a 0 or vice versa. Sometimes single bits are corrupted, but more often than not, a burst error occurs—several consecutive bits are corrupted. Bit errors typically occur because outside forces, such as lightning strikes, power surges, and microwave ovens, interfere with the transmission of data. The good news is that such bit errors are fairly rare, affecting on average only one out of every 10^6 to 10^7 bits on a typical copper-based cable and one out of every 10^12 to 10^14 bits on a typical optical fiber.
- As we will see, there are techniques that detect these bit errors with high probability. Once detected, it is sometimes possible to correct for such errors—if we know which bit or bits are corrupted, we can simply flip them—
- while in other cases the damage is so bad that it is necessary to discard the entire packet. In such a case, the sender may be expected to retransmit the packet.
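One concrete error-detection technique is the 16-bit one's-complement checksum used in IP, UDP, and TCP headers (links themselves usually use stronger CRCs). A minimal sketch:

```python
def internet_checksum(data: bytes) -> int:
    """16-bit one's-complement sum of 16-bit words, as used by IP/UDP/TCP."""
    if len(data) % 2:
        data += b"\x00"                            # pad to whole 16-bit words
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return ~total & 0xFFFF

packet = b"hello"
chk = internet_checksum(packet)
# The receiver recomputes the checksum; a flipped bit makes the values disagree.
corrupted = bytes([packet[0] ^ 0x01]) + packet[1:]
assert internet_checksum(corrupted) != chk
```

The sender transmits the checksum alongside the data; the receiver recomputes it and discards the packet on a mismatch. Like any short checksum, this detects most errors with high probability but cannot catch all of them.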
- The second class of failure is at the packet, rather than the bit, level; that is, a complete packet is lost by the network.
- One reason this can happen is that the packet contains an uncorrectable bit error and therefore has to be discarded.
- A more likely reason, however, is that one of the nodes that has to handle the packet—for example, a switch that is forwarding it from one link to another—is so overloaded that it has no place to store the packet, and therefore is forced to drop it. This is the problem of congestion.
- Less commonly, the software running on one of the nodes that handles the packet makes a mistake. For example, it might incorrectly forward a packet out on the wrong link, so that the packet never finds its way to the ultimate destination.
- As we will see, one of the main difficulties in dealing with lost packets is distinguishing between a packet that is indeed lost and one that is merely late in arriving at the destination.
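The usual way to cope with this lost-versus-late ambiguity is a timeout: a packet whose acknowledgment has not arrived within some interval is presumed lost and retransmitted. A stop-and-wait sketch, with invented callback names standing in for a real channel:

```python
import time

def send_with_timeout(send, recv_ack, timeout_s=0.1, max_tries=3):
    """Stop-and-wait sketch: a packet whose ACK has not arrived by the
    timeout is treated as lost and retransmitted, even though it may
    merely be late. send() and recv_ack() are placeholder callbacks."""
    for attempt in range(max_tries):
        send()
        deadline = time.monotonic() + timeout_s
        while time.monotonic() < deadline:
            if recv_ack():
                return attempt + 1     # number of transmissions it took
        # Timed out: assume the packet (or its ACK) was lost, and retry.
    raise TimeoutError("no acknowledgment after retries")

# Simulated channel that loses the first transmission.
attempts = {"n": 0}
def send(): attempts["n"] += 1
def recv_ack(): return attempts["n"] >= 2   # ACK appears on the second try
print(send_with_timeout(send, recv_ack))    # -> 2
```

A packet that was merely late triggers a needless retransmission, which is why choosing the timeout value well is itself a hard problem.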
- The third class of failure is at the node and link level; that is, a physical link is cut, or the computer it is connected to crashes. This can be caused by
- software that crashes,
- a power failure, or
- a reckless backhoe operator.
As with lost packets, part of the difficulty lies in detecting the failure at all: it can be hard to distinguish between a failed computer and one that is merely slow, or, in the case of a link, between one that has been cut and one that is very flaky and therefore introducing a high number of bit errors.
The key idea to take away from this discussion is that defining useful channels involves both understanding the applications’ requirements and recognizing the limitations of the underlying technology. The challenge is to fill in the gap between what the application expects and what the underlying technology can provide. This is sometimes called the semantic gap.