Packet Switching

Aaron Skelsey (00044661)

Communications Networks (48740)
University of Technology, Sydney


Statement of Originality

The work contained in this assignment, other than that specifically attributed to another source, is that of the authors. It is recognised that, should this declaration be found to be false, disciplinary action could be taken and the assignments of all students involved will be given zero marks.


This document was written by a student of Communications Networks.




1. Packet Switching – An introduction

2. History

2.1 The Growing Need for Communication
2.2 ARPANET
2.3 Internet (WANs and LANs)

3. Hardware Concepts and Components

3.1 Packet Switching – The Layers
3.2 Switches and Routers
3.3 Routing Strategies
3.4 Routing Methods

4. Commonly Used Protocols

4.1 X.25
4.2 TCP/IP
4.3 Frame-Relay
4.4 ATM

5. Application of Packet Switching in the World Today

5.1 Advantages over Circuit Switching
5.2 Companies Producing Packet Switching Technology
5.3 Future Applications

6. Summary

6.1 Summary Points
6.2 Review Questions

7. Bibliography and References



1. Packet Switching – An introduction

All communication networks today are composed of many nodes, and each node needs to be able to communicate with the others. A node may be connected by one or more links of varying speed and reliability. Within a circuit-switched system, if two nodes want to communicate, they cooperate with other nodes on the network to set up a dedicated link through which to communicate. A packet-switched network does not require this ‘rigid’ link. Communication between nodes is established and maintained by an intermittent flow of small portions of data through one or many different paths. The path taken by each portion of data, or packet, may depend on the status of links, or on the algorithms used by the switching equipment. Switching is carried out by special nodes on the network which govern the flow of data; such devices include switches, routers and bridges [GRINSEC p xiii, Derfler & Freed p 163, Abeck Hegering & Neumair p 50-56].

Animation 1 – Packet Switching

There are many different strategies, algorithms and protocols used to maximise the throughput of packet-switched networks, and there are two main types of packet-switched network/protocol. The first type uses datagrams, meaning that each packet is treated individually and sent end-to-end over any route that may be active. The second method is called a virtual circuit. This type of traffic behaves much like traffic on a circuit-switched network, where communication between two devices is carried out through a dedicated route. Unlike circuit switching, however, a virtual circuit permits other traffic to use the same route (or part thereof) concurrently [GRINSEC p 531, Stallings p 304].

In a seminar held at the Franklin Institute in 2001, Paul Baran claimed that the

"… new argument seems to be that it is unimportant whether the communications stream is chopped into packets or not… " [Baran p 7]

which is an interesting example of how far packet switching has evolved from its early beginnings to what it has become today.



2. History

Packet switching has a very hazy beginning. There is much speculation as to who actually invented it first; however, there have been some definite milestones during its short history. An article on IPIER.COM credits packet switching to a "…government laboratory in England…" [Internet 101] but fails to mention any names or further details. In fact, many laboratory experiments with packet switching were being carried out all around the world, most quite independent of each other. The research that stands out for its impact on packet switching is that carried out by Paul Baran.


2.1 The Growing Need for Communication

In 1959 Paul Baran joined an organisation called RAND. RAND was set up by the U.S. Air Force to "…preserve the operations research capability created by the Air Force in World War II, and to work on issues of national security." [Baran p 1]. RAND gave its researchers freedom with projects and encouraged approaching problems in a unique manner.

At that time, communication between strategic force command centres ran through either short-wave wireless links or the national telephone company (via circuit-switched networks). This left a gaping hole in the security of the communications system. In the event of a high-altitude nuclear detonation, short-wave communications would be disrupted for hours, and a strategic attack on communications links would render long-distance telephone links inactive, providing what Baran describes as an "Achilles’ heel" [Baran p 1] in the communication system. As the Cold War developed, a robust communications system was in demand, so Baran decided to start his research into finding new solutions.

Figure 1 – Network topologies [Baran p 2].

The first thing Baran noticed was that the main problem with the system was its topology. At the time, the communication network was composed of centralised and decentralised topologies. In figure 1, diagram a) is an example of a centralised topology, where each node is connected to one central node. This network is extremely vulnerable. Diagram b) is an example of a decentralised topology, where the network appears to ‘branch’. This type of network is generally composed of many centralised networks, hence adding slightly more robustness, as there is now more than one central point in the network. Diagram c) is an example of a distributed topology. Each peer in the network is connected to neighbouring peers through one or more links. This creates a large number of redundancies in the system; however, these redundancies offer a considerably higher level of robustness than topologies a) or b) could provide.

Baran decided to conduct simulations of networks with varying numbers of redundancies. Level 1 was characterised by a minimum number of connections between nodes. Level 2 had vertical and horizontal connections, offering many more links between nodes. When the order of links was then increased to level 3, Baran made the observation that the system became extremely robust. From this observation, he concluded that it would be "…theoretically possible to build extremely reliable communication networks out of unreliable links, by the proper use of redundancy." [Baran p 2] Thus the main idea behind packet switching was born. If a communications link between nodes was attacked, then the node could theoretically still operate within the network due to its redundancy. ‘How’ was the next obstacle.

Many problems were experienced when developing this distributed network, the first being bit-rates. Synchronisation was not going to be achievable, since the network would have to route data in a manner that could not be established in advance. This would add overhead to the already copious amount of data that was expected to be communicated. Baran overcame this problem by proposing that each routing node in the system contain a buffer that would store the data and re-communicate it at its own pace: the beginnings of store-and-forward packet switching. Data would need to be broken into packets, or as Baran termed them at the time, ‘message blocks’. This was because the amount of buffering available in the computers of the day was low, due to the cost and capabilities of the technology.

Baran started looking at routing strategies, basing his research on adaptive systems, which used previous knowledge to route data. The system would have a limited amount of resources, so it was designed to discard old or less relevant information as well as collecting new information. Each packet would contain a ‘hand-over counter’ that would increment each time a packet was passed between nodes. This counter would then be examined by the end system and hence be used to estimate efficiencies and routes through the network. Baran described routing as a hot potato: "You want to get rid of the hot potato as quickly as you can. If your first choice recipient is busy, toss it to your second choice recipient, and so on. If you have no better choice you are allowed to throw the hot potato back to the previous thrower." [Baran, p 4]. Although it sounds primitive, this method of routing forms the basis of many routing algorithms today (often called deflection routing). It emphasises speed, as each node does not need to stop and ‘think’. It was envisaged as a good algorithm for routing delay-sensitive packets, such as voice.
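The hot-potato rule can be sketched in a few lines of Python. This is a modern illustration, not Baran’s actual implementation, and the link names are invented:

```python
# 'Hot potato' (deflection) routing: send the packet out on the first
# free link, in order of preference, never back the way it came unless
# there is no better choice.

def hot_potato(preferred_links, busy, previous):
    """Return the outgoing link for a packet that arrived via `previous`."""
    for link in preferred_links:
        if link not in busy and link != previous:
            return link              # first free choice wins
    return previous                  # no better choice: throw it back

assert hot_potato(["A", "B", "C"], busy={"A"}, previous="C") == "B"
assert hot_potato(["A", "B"], busy={"A", "B"}, previous="A") == "A"
```

Note that the node never pauses to compute a route; it simply picks the first acceptable exit, which is exactly the speed-over-optimality trade-off described above.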

Some other features Baran incorporated in his design of the packets were sequence numbers and error checking. Sequence numbers were used (and still are today) to inform the receiving node of the order in which the packets should be re-assembled. To check for errors in transmission, Baran used the CRC (cyclic redundancy check), which remains one of the most common error-checking methods today. It should be noted that Baran did not invent the CRC; it was already a commonly used error-checking algorithm at the time.

Due to the nature of the system, security was to be very important. Baran devised two levels of cryptography. The first level of security would operate node-to-node, where each node knows the ‘key’ to each of its neighbouring nodes’ ciphering algorithms. The second level of security would be based on an end-to-end cryptography scheme: each pair of end users on the network would somehow agree upon a key to use. One new feature Baran designed was that each packet would contain the key to the next packet. This was a great idea not only for security, but also for error checking. If a packet is corrupted during transfer, the key may be wrong and hence the next packet deciphered will be garbled. The system could very easily detect these errors and request re-transmission (or simply not send an acknowledgment). With a packet-switched network, each of these packets would be encouraged to travel across different routes, so an eavesdropper would have little chance of obtaining all of the keys. Baran admits that such a system has not been built (for the advantages in error detection) but maintains that it is a good idea [Baran p 4].

One problem with circuit-switched applications is that "silence is the usual message" [Baran p 5]. This is certainly true of phone conversations, making circuit switching an inefficient utilisation of communication infrastructure. If there is no information to be exchanged, a packet-switched system will not send or receive anything, freeing up the communication resources for other applications. Baran discusses this in his seminar, although another researcher, W. David Sincoskie, had the same ideas. In 1980, Sincoskie was employed by Bell Labs, and he made the following calculation: "…an Ethernet would carry up to 150 simultaneous voice calls, if the calls were coded at 64 kb/s and silent periods removed. This was enough to construct a 1000-line private branch exchange..." [Sincoskie, p 2]. This was an amazing idea, which was later demonstrated in the world’s first prototype broadband central office – ERP (Experimental Research Prototype).
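The arithmetic behind Sincoskie’s estimate is easy to check. The 10 Mb/s capacity for the Ethernet of the day is an assumption of this sketch, not a figure from Sincoskie’s paper:

```python
# Sincoskie's estimate: 150 voice calls at 64 kb/s fit on one Ethernet.
# Silence removal is what lets a 1000-line exchange share the wire
# among far fewer simultaneously active calls.

CALL_RATE = 64_000         # bits per second per voice call
ETHERNET = 10_000_000      # assumed classic Ethernet capacity, bits/s

total = 150 * CALL_RATE
assert total == 9_600_000          # 9.6 Mb/s of active voice traffic
assert total <= ETHERNET           # fits within the shared medium
```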

For more information on ERP and the future of packet switching, section 5.3 deals with the future applications of packet switching.



2.2 ARPANET

The ARPANET is worth mentioning at this point, as it is the grass roots of the internet. The Advanced Research Projects Agency (ARPA) Computer Network (NET) has been important for the study of packet communication, because it provides a strong understanding, and hence efficient implementation, of packet switching technologies for the internet [Metcalfe, p 23]. In 1969 the ARPANET consisted of four computers distributed across a large geographical area [Internet 101] and soon grew to over 30 sites [Metcalfe p 28]. The ARPANET was based on a store-and-forward packet-switched network consisting of end nodes (terminals) and interface message processors (IMPs), which carried out the routing. The topology of the ARPANET was a little ad hoc, but it resembled a distributed network. The ARPANET also allowed the study of network behaviour, which would be important to consider when designing packet switching networks. One thing that surprised Metcalfe was the amount of ‘incestuous’ network traffic [Metcalfe p xxiv-xxv], his term for intranet traffic. This traffic was being sent to the IMP and then routed back into the internal network. This phenomenon prompted new devices to be created to handle internal network traffic; the switching equivalent of the internal IMP (or router) is the switch.


2.3 Internet (WANs and LANs)

In recent years, the growth of the internet has been phenomenal. The technologies that sprouted from military research and development have bloomed into a worldwide network of communication, providing remote information and control. Many local area networks (LANs) found in the office, or even at home, have become part of this growing network of nodes. Packet switching may not be utilised within these LANs (most commonly Ethernet), so a bridge may inter-connect the switched networks and the LAN. For example, if a small amount of data needs to be communicated by a LAN to the internet, it is common to have a dedicated computer which acts as a bridge/router for internet (or inter-subnet) traffic. Recent advances in switching technology have resulted in hardware routers becoming layer 3 switches [Abeck Hegering & Neumair p 415].



3. Hardware Concepts and Components

As described earlier, packet switching relies on two devices: switches and routers. Bridges were mentioned as well; however, a bridge simply forms a common path along which information can travel between different protocols and topologies. Bridges may carry out some crude switching functions, but will not be discussed in detail. The capability of these and other devices on the market continues to increase; it is not uncommon to find ‘smart’ devices with hybrid technologies performing bridging, switching and routing functions (and more).


3.1 Packet Switching – The Layers

To understand how packet switching works, it is important that we first understand what a packet is and how it is formed. To do this, we will study animation 2. In 1977, the ISO began developing a standard for describing packets and how they are formed. This standard is called the open systems interconnection (OSI) reference model [Stallings p 44]. The model defines seven layers, each describing an aspect of network communication vital for getting data from one node to another.

Animation 2 - Encapsulation

Firstly, the end-user’s computer buffers a piece of data that it wishes to send. This piece of data can theoretically be any length, but for simplicity we will assume it is short. If the data is too big, it will be cut into smaller pieces, a process called segmentation. Data may also be segmented at intermediate nodes along its journey from the host to the end user if it is deemed too big. Each segment is appended with some information identifying what it is, where it is going and any other relevant details. This information is called a header. Once this is completed, the segments are passed down to the next layer, which performs similar tasks on the segments (which have now become the ‘data’). This process is repeated until the data reaches the final stage, where it is placed on the communication link and sent to the receiving node. The receiving node simply removes the headers from the data and puts all of the segments back together. The process of data being segmented and having headers attached is known as encapsulation.
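The segmentation and re-assembly process just described can be sketched in Python. The two-byte header format (a sequence number plus a length) is invented for illustration; real protocols define much richer headers:

```python
# Toy encapsulation: segment the data, prepend a header to each segment,
# and show that the receiver can strip the headers and rebuild the data
# even if the packets arrive out of order.

MTU = 8  # hypothetical maximum segment size in bytes

def segment(data, size=MTU):
    """Cut the data into segments no larger than `size`."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def encapsulate(segments):
    """Prepend a toy header (sequence number + length) to each segment."""
    return [bytes([seq, len(seg)]) + seg for seq, seg in enumerate(segments)]

def reassemble(packets):
    """Strip headers, order by sequence number and rejoin the data."""
    ordered = sorted(packets, key=lambda p: p[0])
    return b"".join(p[2:2 + p[1]] for p in ordered)

message = b"hello packet switching"
packets = encapsulate(segment(message))
assert reassemble(packets) == message         # normal delivery
assert reassemble(packets[::-1]) == message   # survives reordering
```

The sequence number in the header is what lets the receiver cope with packets taking different routes and arriving out of order, as described above.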

Table 1 explains each of the seven layers in the OSI model. The top layer (layer 7) is the layer at user level. As the layers go down they become increasingly primitive, until they reach layer 1, which is the most primitive from an information point of view (i.e. it is concerned with the actual physical communication links).

Number Name Description
7 Application Layer Gives application software access to the OSI environment. This layer contains management functions.
6 Presentation Layer Establishes ‘common ground’ for applications to communicate with the other layers. This layer formats data and provides syntaxes for applications.
5 Session Layer Provides a service that controls the communication between applications running on end nodes.
4 Transport Layer Provides a means of communicating between end nodes. Its functions may include sequencing, error detection and optimisation of communication.
3 Network Layer Deals with communication of data on a network. Here, network information is gathered, including addresses, routing information, etc.
2 Data Link Layer Deals with maintaining and optimising the actual connection to the network. It also performs error checking on the communicated data.
1 Physical Layer Deals with the physical connection between nodes in a network. It tends to deal with a ‘bit stream’ rather than any single pieces of information.

Table 1 – The seven layers of the OSI model [Robinson p 6-7 and Stallings p 51-54]

The application of this model will be discussed in section 4, with regard to commonly used protocols.

The use of these layers was introduced to break down the complexity of communications. Each layer could be seen as an ‘object’ in an object-oriented design approach. This makes designing a complex protocol a lot easier; plus if certain layers do not function correctly, it should be a simple task to isolate a layer and work on it, if this design approach is taken.


3.2 Switches and Routers

Switches and routers are used in packet switching networks to guide packets towards their destination. A switch is a device with a certain number of physical ports, through which either nodes (such as computers, printers or routers) or other switches can be connected. A switch consists of a few key elements. Firstly, the switch has a small amount of processing capability, so that it can work out what device is connected to what port. To help the switch perform this function quickly, a table of network addresses is kept within the switch, with a capacity of around 4000 or more addresses. Depending on the nature of the switch, these tables can be statically or dynamically updated. The key component of the switch is the switching fabric, which is the point where data is transferred from one port of the switch to another. Switches need only know the destination of packets, so generally they will be classed as ‘layer 2 devices’, which means that they communicate and understand only layers 1 and 2 of the OSI model. Since layer 2 devices only deal with the bottom two layers, they handle packets according to MAC addresses. Newer, hybrid switches (layer 3 switches) are becoming more complex and can communicate as a layer 3 device.
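The dynamic address table described above can be sketched as a ‘learning’ switch in Python. The port numbers and addresses are invented, and flooding a frame with an unknown destination is an assumed behaviour of this sketch rather than something stated in the text:

```python
# Minimal layer 2 learning switch: record which port each source MAC
# address was last seen on, then forward by destination MAC address.

class LearningSwitch:
    def __init__(self, num_ports):
        self.table = {}            # MAC address -> port (dynamic table)
        self.num_ports = num_ports

    def receive(self, frame_src, frame_dst, in_port):
        """Return the list of ports the frame is forwarded out of."""
        self.table[frame_src] = in_port       # learn the sender's port
        if frame_dst in self.table:
            return [self.table[frame_dst]]    # known: one port only
        # unknown destination: send out every other port
        return [p for p in range(self.num_ports) if p != in_port]

sw = LearningSwitch(num_ports=4)
assert sw.receive("aa:aa", "bb:bb", in_port=0) == [1, 2, 3]  # unknown: flood
assert sw.receive("bb:bb", "aa:aa", in_port=2) == [0]        # learned: port 0
```

Real switches hold thousands of such entries (the 4000-address capacity mentioned above) and age them out over time; this sketch omits ageing for brevity.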

The main problem faced by switches is contention, which occurs when two packets are sent to the same destination at the same instant. Switches can deal with contention in different ways. One method is to buffer one of the packets, send the other, and then send the buffered packet. Other methods may discard the contending packets, resulting in retransmission.

The OSI model refers to what we call ‘routers’ as ‘gateways’ [Routers]. Routers are a lot like switches, except that they deal with layer 3 of the OSI model. This is because they don’t simply forward packets to a known address; they have to make a decision: how is the packet going to get to the address? Routers have tables, but their tables contain network (LAN) addresses, not actual physical addresses. This enables routers to communicate over a much larger number of nodes. Routers will typically only forward packets that are destined for networks outside of the LAN that a node is attempting to communicate from. Routers can contain either static or dynamic tables. Static tables are created and maintained by an administrator; this can take a long time if the network is large, and the static data can become redundant if the network is changed. Dynamic tables gather their routing information from other routers using special router protocols. There are two different strategies for this communication. Distance vector involves routers sending their complete routing tables to neighbouring routers, which involves a lot of overhead. Link state involves routers sending only the updates (changes) in their tables to neighbouring routers; this requires less overhead than distance vector, but a higher level of computation within the router.
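A single distance-vector exchange can be sketched as follows: a router merges a neighbour’s advertised table into its own, keeping the cheaper route for each destination. The names and costs are invented, and real protocols add safeguards (route timeouts, split horizon) that this sketch omits:

```python
# One distance-vector update step: routes learned from a neighbour cost
# (neighbour's advertised cost) + (cost of the link to that neighbour).

def merge_routes(own, advertised, link_cost, via):
    """Return an updated table {destination: (cost, next_hop)}."""
    updated = dict(own)
    for dest, (cost, _) in advertised.items():
        candidate = cost + link_cost
        if dest not in updated or candidate < updated[dest][0]:
            updated[dest] = (candidate, via)   # cheaper route found
    return updated

router_a = {"net1": (0, "-")}                       # directly attached
router_b = {"net1": (5, "A"), "net2": (0, "-")}     # B's advertisement
router_a = merge_routes(router_a, router_b, link_cost=2, via="B")
assert router_a["net1"] == (0, "-")    # own route is still cheaper
assert router_a["net2"] == (2, "B")    # new destination learned via B
```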

When we talked about the distributed topology (in section 2.1), we were implying that routers (or IMPs) are set up in this fashion. Routers require that devices communicate using ‘routable protocols’ [Routers] so that the routers understand the network activity.

When a packet is sent to a router, it will read the destination address and determine if that address is local. If it is local, it will route the packet into the subsequent LAN. If not, it will look at the network portion of the address and choose which network it needs to be sent to. The router may send the packet directly to the router on the required network, or it may need to send the packet to a router on the next ‘hop’ of the journey [Routers]. There is often more than one path to a destination, so how does the router choose which it will use? The router’s administrator will give it an algorithm or strategy to follow whilst routing packets.
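The local-versus-remote decision described above can be sketched with Python’s standard ipaddress module. The prefixes and next-hop names are invented for illustration:

```python
# Forwarding decision: deliver locally if the destination is on the
# attached LAN, otherwise pick the route whose network portion matches
# (longest-prefix match), falling back to a default route.

import ipaddress

LOCAL_NET = ipaddress.ip_network("192.168.1.0/24")
ROUTES = {
    ipaddress.ip_network("10.0.0.0/8"): "router-2",
    ipaddress.ip_network("0.0.0.0/0"): "default-gateway",  # last resort
}

def next_hop(dst):
    addr = ipaddress.ip_address(dst)
    if addr in LOCAL_NET:
        return "deliver locally"
    matches = [net for net in ROUTES if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)  # longest prefix
    return ROUTES[best]

assert next_hop("192.168.1.20") == "deliver locally"
assert next_hop("10.1.2.3") == "router-2"      # next 'hop' of the journey
assert next_hop("8.8.8.8") == "default-gateway"
```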


3.3 Routing Strategies

There are many different strategies for routing packets. Each method has its advantages and disadvantages, typically characterised by cost, complexity and efficiency. Each link between routers is given a metric (or a measure of how ‘good’ a certain link is) generally referred to as a ‘cost’.

3.3.1 Fixed Routing

For fixed routing, each router has a list of destination nodes and a list of ‘next nodes’. If the router can’t send a packet directly to a destination, it uses the list of ‘next nodes’ to work out which node to send the packet to. If the router does not know where to send a packet, it will usually have a preset destination (for example the address of the router that links the LAN with the internet). Fixed routing is an easy solution to routing for small applications, but on a larger scale it would be time-consuming to set up and dynamic costs (due to network traffic) cannot be adequately addressed using this strategy.
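Fixed routing reduces to a static lookup with a preset default, sketched below with invented node names:

```python
# Fixed routing: a static table of destination -> 'next node', set up by
# an administrator, with a preset destination for anything unknown.

NEXT_NODE = {"B": "B", "C": "B", "D": "E"}
DEFAULT = "internet-gateway"   # preset destination for unknown packets

def route(destination):
    return NEXT_NODE.get(destination, DEFAULT)

assert route("B") == "B"                  # directly reachable
assert route("C") == "B"                  # reach C via next node B
assert route("Z") == "internet-gateway"   # unknown: use the preset default
```

Because the table never changes at run time, it cannot react to congestion or link failures, which is exactly the limitation noted above.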


3.3.2 Flooding

When flooding is used as a routing strategy, each router sends the packet to all of its immediate neighbours (except the router that sent it the packet). The number of packets on the network can grow rapidly as they are re-transmitted by routers. This can be overcome if routers can identify packets and ensure that packets they have already sent are not re-transmitted if they find their way back. Flooding offers a significant advantage: as all paths are used, the packet will reach its destination in the shortest time possible. However, flooding the network with packets uses it very inefficiently, creating a high load of network traffic.
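Flooding with duplicate suppression can be sketched on a small invented topology:

```python
# Flooding: forward the packet to every neighbour except the sender,
# and remember packet identifiers so copies that loop back are dropped.

NEIGHBOURS = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B", "D"], "D": ["C"]}

def flood(packet_id, node, came_from, seen):
    """`seen` maps node -> set of packet ids it has already forwarded."""
    if packet_id in seen.setdefault(node, set()):
        return                       # already handled: drop the duplicate
    seen[node].add(packet_id)
    for nbr in NEIGHBOURS[node]:
        if nbr != came_from:         # never send back to the sender
            flood(packet_id, nbr, node, seen)

seen = {}
flood("pkt-1", "A", None, seen)
assert all("pkt-1" in seen[n] for n in NEIGHBOURS)  # reached every node
```

Without the `seen` check, the A-B-C triangle in this topology would re-transmit the packet indefinitely.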


3.3.3 Random Routing

Random routing is simple, like flooding, but places far less load on the network. Routers employing a random strategy send incoming traffic through a randomly chosen outgoing link. The decision about which route the traffic takes is not always based on a truly random algorithm; in many cases a round-robin approach is taken. Since random routing does not use network information, it is a simple method of routing.
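The round-robin variant mentioned above needs no network information at all; in Python it is little more than itertools.cycle (link names invented):

```python
# Round-robin routing: outgoing links are used in turn, with no
# knowledge of the network state required.

import itertools

links = itertools.cycle(["port1", "port2", "port3"])
out = [next(links) for _ in range(5)]
assert out == ["port1", "port2", "port3", "port1", "port2"]
```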


3.3.4 Adaptive Routing

Most packet switching networks use routers that use some form of adaptive routing. Adaptive routing means that the routing decisions made by the router change, depending on the conditions of the network around it. Routing may change if links/routers fail, or if costs of links change. Adaptive routing is complex, as it must gather and interpret information regarding costs of the network system. This means that these routers require more processing power to make routing decisions. The major benefit of adaptive routing is that it controls congestion better than any of the other major routing strategies. Since the conception of the original adaptive routing algorithm in 1969, there have been two major improvements. The newest algorithm (third generation) is more responsive to changes in the network and is generally more stable than the older versions of the adaptive routing algorithm. One problem that the third generation algorithm addresses is oscillation. Oscillation occurs when routers adapt their routing strategies simultaneously in such a fashion that routes become congested quickly. To counter this congestion, routers will again change their strategies, creating an oscillation of congestion, greatly reducing network efficiency. Oscillation is the result of "every node [was] trying to obtain the best route" [Stallings p 327], so the third generation algorithm attempts to give the average route a good path (although not necessarily the best).
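The oscillation effect can be illustrated with a toy model (the costs and loads are invented): each router always picks the currently cheapest link, and the chosen link’s cost rises with the traffic it now carries:

```python
# Oscillation in adaptive routing: greedily choosing the best route
# makes that route congested, so the next decision swings to the other
# link, which then congests in turn.

costs = {"link1": 1.0, "link2": 1.0}

def choose_and_load(costs, added_load):
    best = min(costs, key=costs.get)   # every node takes the 'best' route
    costs[best] += added_load          # traffic raises that link's cost
    return best

choices = [choose_and_load(costs, added_load=2.0) for _ in range(4)]
assert choices == ["link1", "link2", "link1", "link2"]  # ping-pong
```

Damping this ping-pong, by settling for a good route rather than the best one, is the improvement attributed to the third-generation algorithm above.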


3.4 Routing Methods

The actual method of transmitting data through switching equipment can be put into one of the three following categories [How LAN Switches Work]:


3.4.1 Store and Forward

Store and forward switching is one of the most popular methods of packet communication. Incoming packets are saved in a buffer and a CRC is carried out on the data to make sure the packet is error-free. The device will then look at the destination address and send the packet to the specified destination. If the packet contains an error, it will be discarded, leaving the communications protocols to re-send the lost packet. Some switching equipment will incorporate store and forward as well as cut-through methods for handling packets. Store and forward equipment usually holds the packet in its buffers until the destination has acknowledged that the packet has successfully reached its destination.

Animation 3 - Store and Forward Routing
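The store-and-forward check can be sketched with a CRC-32 trailer. The frame layout here (payload followed by a 4-byte CRC) is invented for illustration; real link layers define their own framing:

```python
# Store and forward: buffer the whole frame, verify its CRC, and only
# forward it if it is error-free; corrupted frames are discarded.

import zlib

def make_frame(payload):
    """Append a 4-byte CRC-32 trailer to the payload."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def store_and_forward(frame):
    """Return the payload if the CRC checks out, else None (discard)."""
    payload, trailer = frame[:-4], frame[-4:]
    if zlib.crc32(payload).to_bytes(4, "big") != trailer:
        return None                  # corrupted: drop, await re-send
    return payload                   # error-free: forward it

good = make_frame(b"data")
bad = bytes([good[0] ^ 0x01]) + good[1:]   # flip one bit 'in transit'
assert store_and_forward(good) == b"data"
assert store_and_forward(bad) is None
```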

3.4.2 Cut-through

Cut-through switching equipment reads the destination address of the packet as soon as it arrives. This address is buffered and the rest of the packet is sent to that address, eliminating the need to buffer the packet in the switching equipment. This method does not run CRCs to check for errors. For this reason, most packet switching equipment does not solely use the cut-through method. It is quite common for devices to use cut-through until a certain percentage of errors infiltrate the system, then the more reliable (but slower) store and forward method is used.

Animation 4 - Cut-Through Routing
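The fall-back behaviour described above, where a device abandons cut-through once errors exceed some level, can be sketched as follows (the threshold value is invented):

```python
# Adaptive switching mode: use fast cut-through until the observed
# error rate crosses a threshold, then fall back to the slower but
# more reliable store-and-forward method.

ERROR_THRESHOLD = 0.05   # assumed trigger point: 5% erroneous frames

class AdaptiveSwitch:
    def __init__(self):
        self.frames = 0
        self.errors = 0

    def record(self, had_error):
        """Count a forwarded frame and whether it carried an error."""
        self.frames += 1
        self.errors += 1 if had_error else 0

    def mode(self):
        if self.frames and self.errors / self.frames > ERROR_THRESHOLD:
            return "store-and-forward"
        return "cut-through"

sw = AdaptiveSwitch()
assert sw.mode() == "cut-through"        # optimistic fast path by default
for _ in range(10):
    sw.record(had_error=True)
assert sw.mode() == "store-and-forward"  # too many errors: slow down
```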


3.4.3 Fragment-Free

Fragment-free is much the same as cut-through. Instead of buffering just the address, fragment-free also stores the first 64 bytes of each packet. This is because most errors, and all collisions, occur within the first 64 bytes of a packet. Fragment-free is not a commonly used method.

Animation 5 - Fragment-Free Routing



4. Commonly Used Protocols

Now that we’ve established a system of switching packets, we have to have a standard that governs how each packet is to be communicated. This is necessary if we have many different users running many different operating systems from within many different network topologies. Below are some of the most common protocols used today.


4.1 X.25


4.1.1 Description

X.25 is one of the most widely used protocols in packet switched networks today. It was introduced in 1976 and has continued to evolve to maintain its relevance in newer network systems. X.25 is a standard which describes the interface between a host and a packet switched network.

The X.25 protocol offers a virtual circuit service: a link that behaves like a circuit-switched link but is actually packet switched. The advantage of a virtual circuit is that it offers all of the benefits of packet switching (robustness, no bandwidth wasted in ‘silent’ periods) with only a few drawbacks, such as possible small delays. There are two types of virtual circuit: a virtual call and a permanent virtual circuit. A virtual call is much like the telephone system. A host may set up a connection to another host by ‘calling’ them; communication takes place, and the call is terminated when communication is finished. A permanent virtual circuit (as the name suggests) is a permanent link between hosts. These virtual circuit links always route data in exactly the same manner (i.e. each packet will traverse the same path).

Each X.25 link will carry user data, plus another packet called a control packet. This packet is sent when initiating a virtual circuit. It contains a virtual circuit number, the packet type and any additional control information.

As X.25 creates all of these virtual circuits, it has to be efficient at multiplexing them, as more than one link may be active through a router at any point in time. X.25 can carry up to 4095 virtual circuits at each node concurrently. Certain ranges of numbers are allocated for the different types of virtual circuits (i.e. permanent and call), with many other options available (see Stallings p 335).
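The figure of 4095 follows from the 12-bit virtual circuit number in the X.25 packet header (a 4-bit group number plus an 8-bit channel number, with number 0 reserved). A sketch of allocating circuit numbers from a configured range; the range used for virtual calls below is invented:

```python
# Virtual circuit numbering: 12 bits gives 2**12 - 1 = 4095 usable
# circuit numbers per node, handed out from configured ranges.

VC_BITS = 12
MAX_VC = 2**VC_BITS - 1
assert MAX_VC == 4095      # the concurrency limit quoted in the text

class VCAllocator:
    """Hand out virtual circuit numbers from an allowed range
    (separate ranges for permanent circuits vs. virtual calls)."""
    def __init__(self, low, high):
        self.free = set(range(low, high + 1))

    def open(self):
        return self.free.pop()      # take any free number

    def close(self, vc):
        self.free.add(vc)           # return it to the pool

calls = VCAllocator(16, MAX_VC)     # invented range for virtual calls
vc = calls.open()
assert 16 <= vc <= 4095
calls.close(vc)
```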

Flow control utilises a sliding window, typically 3 bits long (allowing up to 7 unacknowledged packets at a time). X.25 also supports piggybacking, which means that information being sent to a host can also carry flow control (or other) data, such as receive acknowledgements.
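A 3-bit sliding window can be sketched as follows: sequence numbers wrap modulo 8, and at most 7 packets may be outstanding at once:

```python
# 3-bit sliding window: sequence numbers 0..7, window size 7.

WINDOW_BITS = 3
MODULUS = 2**WINDOW_BITS        # sequence numbers wrap at 8
MAX_IN_FLIGHT = MODULUS - 1     # up to 7 unacknowledged packets

class SlidingWindow:
    def __init__(self):
        self.next_seq = 0
        self.in_flight = 0

    def can_send(self):
        return self.in_flight < MAX_IN_FLIGHT

    def send(self):
        assert self.can_send()
        seq = self.next_seq
        self.next_seq = (self.next_seq + 1) % MODULUS
        self.in_flight += 1
        return seq

    def acknowledge(self, count=1):
        """ACKs may arrive piggybacked on data flowing the other way."""
        self.in_flight -= count

w = SlidingWindow()
sent = [w.send() for _ in range(7)]
assert sent == [0, 1, 2, 3, 4, 5, 6]
assert not w.can_send()          # window full: must wait for an ACK
w.acknowledge()
assert w.can_send()              # one slot freed
```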


4.1.2 Layers

X.25 incorporates three main layers (defined as levels of protocols): the physical level, the link level and the packet level. These three levels correspond directly to the three lowest layers of the OSI model. The reason X.25 does not use higher levels is that it is generally used as a communication standard for transporting data over long distances, and is therefore not tailored for use with application-oriented suites like TCP/IP. This reduction in complexity means that less additional information (headers) is required for the transmission of data, which reduces overhead and increases the efficiency of communication.

The physical level of X.25 deals with the physical interface between a host and a communication link to a packet switched network. X.25 refers to the local or host terminal as data terminal equipment (DTE) and the actual node on the packet switched network that the DTE is connected to is known as data circuit-terminating equipment (DCE) [Stallings p 330].

The link level deals with communication across the physical link, sending data in frames, which provides a reliable data transfer method. The link level has its own standard that is known as Link Access Protocol Balanced (LAPB).

Finally, the packet level deals with the virtual circuit service that X.25 offers (described above in section 4.1.1). These virtual circuits are a key feature of X.25, providing the unique characteristics that makes X.25 such a successful standard.


4.2 TCP/IP

4.2.1 Description

Transmission Control Protocol / Internet Protocol (TCP/IP) is a large suite of communication protocols which breaks different aspects of communication down into small modules. Each module may have a peer module, which is used to mediate actions between them. This follows the top-down design approach used to create the OSI model, and in many ways TCP/IP is very similar to it. The OSI model has one key difference, however: it strictly handles each level of communication at a separate stage of the hierarchy, giving it a rigidly layered structure that the TCP/IP suite lacks.

Each packet of data must carry two addresses: the global internet address, which identifies the host, and the port, which identifies the application that is communicating.

Each TCP/IP packet is given three headers: a TCP header, an IP header and a network header.

The first header added is the TCP header. This header contains information such as the destination port, the sequence number and a checksum. The destination port identifies which process the data should be delivered to on the remote machine. The sequence number is added so that if packets are lost or delayed in transmission, they can be re-ordered at the remote host. Finally, a checksum is included so that the remote machine can recompute it over the data and determine whether the data is erroneous.
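The checksum idea can be illustrated with the 16-bit one's-complement sum used throughout the TCP/IP family. This is a sketch of the general technique only; the real TCP checksum also covers a pseudo-header containing the IP addresses:

```python
def internet_checksum(data: bytes) -> int:
    """Sum the data as 16-bit words in one's-complement arithmetic,
    then complement the result (the scheme described in RFC 1071)."""
    if len(data) % 2:                 # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return ~total & 0xFFFF

segment = b"example payload!"
checksum = internet_checksum(segment)
# The receiver repeats the sum over the data plus the checksum; a zero
# result means the data is (very probably) intact.
assert internet_checksum(segment + checksum.to_bytes(2, "big")) == 0
```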

The TCP layer then sends the data to the IP layer, which appends its own header, which includes the destination host address.

Finally, the data is passed to the network access layer, which appends a destination network address and a facilities request as a header. A facilities request may be included if the network access protocol wants to use special network facilities, such as higher priority.

A TCP/IP router removes the network header and examines the IP header. It determines where on the network the destination address lies (the packet may need to hop through another router) and builds a new network header accordingly. This is where the routing algorithms come into play (see the previous section).
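The three encapsulation steps, and the router's treatment of the network header, can be sketched as follows. The field names and sizes here are simplified illustrations, not the real TCP/IP wire formats:

```python
import struct

def tcp_encapsulate(payload: bytes, dest_port: int, seq: int) -> bytes:
    # Simplified "TCP" header: destination port, sequence number, checksum.
    checksum = sum(payload) & 0xFFFF        # toy checksum for illustration
    return struct.pack("!HIH", dest_port, seq, checksum) + payload

def ip_encapsulate(segment: bytes, dest_host: int) -> bytes:
    # Simplified "IP" header: a 32-bit destination host address.
    return struct.pack("!I", dest_host) + segment

def net_encapsulate(datagram: bytes, next_hop: int) -> bytes:
    # Simplified network-access header: the next node on the link.
    return struct.pack("!I", next_hop) + datagram

# Toy routing table: destination host -> next hop on this router's links.
ROUTING_TABLE = {0x0A000001: 2, 0x0A000002: 3}

def route(frame: bytes) -> bytes:
    """Strip the network header, read the IP destination, and re-frame
    the datagram for the next hop, as described in the text above."""
    datagram = frame[4:]                    # drop the old network header
    (dest_host,) = struct.unpack("!I", datagram[:4])
    return net_encapsulate(datagram, ROUTING_TABLE[dest_host])

frame = net_encapsulate(
    ip_encapsulate(tcp_encapsulate(b"hello", 80, 1), 0x0A000001), 1)
forwarded = route(frame)   # same datagram, new network header
```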

Some commonly used protocols built on top of TCP/IP include the Simple Mail Transfer Protocol (SMTP), the File Transfer Protocol (FTP) and TELNET.

4.2.2 Layers

TCP/IP has five layers, which are described below in table 2:

Layer                  Description
Application layer      Provides a direct interface for applications to communicate with
Transport layer        Provides a data transfer service that appears to be a point-to-point link
Internet layer         Handles routing the data through one or more networks
Network access layer   Handles the 'logical' interface between points in the network
Physical layer         Handles communication at the most primitive level (medium, encoding, etc.)

Table 2: Layers of the TCP/IP suite [Stallings p 56]

The actual applications of these layers are very flexible, supporting many popular communication protocols such as HTTP, SMTP, TELNET and FTP. For a more detailed treatment of TCP/IP, Stallings (p 60) lists many useful references.


4.3 Frame-Relay


4.3.1 Description

Frame relay was developed as a more efficient alternative to X.25 and has been widely adopted in packet switched networks. It seeks to surpass X.25's performance by reducing overhead and hence increasing efficiency. The main ways frame relay improves on X.25 are its signalling, multiplexing and flow/error control methods. Call control (used for establishing virtual circuits) is carried out on a separate logical connection. This connection is not physically separate, so it requires no extra infrastructure, and intermediate systems along the route do not need to handle the call-control overhead. All switching of 'logical' connections (such as call control) occurs at layer 2, which removes a whole layer of processing and hence increases the speed at which communication can take place. Finally, flow and error control are restricted to end-to-end (instead of at each hop), which dramatically reduces the time each packet spends being processed at a hop; these aspects of communication are handled by a higher layer in the frame relay standard.


4.3.2 Layers

The frame relay standard does not have distinct layers like the protocols mentioned earlier. It may instead be considered in terms of a control plane and a user plane.

The control plane handles control over the transmission of data, including call, error and flow control. It operates at the data link layer (layer 2) to reduce the amount of processing required to communicate the control information (less overhead), using a protocol known as the Link Access Procedure on the D-channel (LAPD).

The user plane is the link which carries information from end to end. The user plane protocol is called the Link Access Procedure for Frame mode bearer services (LAPF).

The exact architecture and protocols used by frame relay are summarised in Stallings on pages 377-8. The main idea behind frame relay is that it provides the robustness of a virtual circuit based standard, streamlined by the use of logically separated data and control links.


4.4 ATM


4.4.1 Description

Asynchronous Transfer Mode (ATM), also known as cell relay, takes many of the techniques described above and combines them in a refined standard offering greatly improved efficiency and reliability of communication. ATM particularly excels in high speed switched networks.

ATM is a packet switching standard which allows many communications to be multiplexed through one link. The packets which flow along these links are all a fixed size and are known as cells (hence the name cell relay). Like frame relay, ATM minimises overhead by cutting down on flow and error control. This enables ATM to operate at much higher speeds, as cells are small and take little time to send. The use of cells also increases efficiency, because fixed-size cells require less processing at the intermediate systems (buffers are always populated with the same length of data).

Instead of the term 'virtual circuit', ATM calls its links virtual channel connections (VCCs). A VCC in an ATM network behaves in much the same way as a virtual circuit in an X.25 system. A bundle of VCCs that follow the same route through the network can be switched together as a virtual path connection (VPC). The VPC concept was developed to minimise the costs involved with network traffic: when many connections wish to use the same path, it is much more cost effective to switch them as a bundle, especially as link speeds increase to support this technology.
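The fixed-size cell idea can be sketched as follows. A real ATM cell is 53 bytes, a 5-byte header carrying the VPI/VCI plus 48 bytes of payload; the header layout below is a simplification:

```python
CELL_PAYLOAD = 48   # bytes of user data per ATM cell

def segment_into_cells(data: bytes, vpi: int, vci: int) -> list:
    """Split a message into fixed-size 53-byte cells, padding the last.
    The 5-byte header here is a toy layout: vpi, vci (2 bytes), padding."""
    header = bytes([vpi]) + vci.to_bytes(2, "big") + b"\x00\x00"
    cells = []
    for i in range(0, len(data), CELL_PAYLOAD):
        chunk = data[i:i + CELL_PAYLOAD].ljust(CELL_PAYLOAD, b"\x00")
        cells.append(header + chunk)
    return cells

cells = segment_into_cells(b"x" * 100, vpi=1, vci=42)
assert all(len(cell) == 53 for cell in cells)  # every cell the same size
```

Because every cell is the same length, an intermediate system can allocate fixed buffers and process each cell in constant time, which is where the efficiency gain comes from.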


4.4.2 Layers

ATM is very similar to frame relay in its architecture; however ATM incorporates three planes: the control, user and management planes. The user and control planes perform the same functions as in the frame relay standard. The management plane deals with the protocol package as a whole (all of the components of ATM), coordinating the control and user planes so that communication is as efficient as possible and all resources are used effectively.



5. Application of Packet Switching in the World Today


5.1 Advantages over Circuit Switching

Packet switching offers many advantages over circuit switching. Packet switching is far more robust (since that is what it was originally designed for) and is better suited to transmitting binary data than circuit switched systems. One key benefit of packet switching is its ability to emulate a circuit switched network. X.25 and ATM use a method of communication called virtual circuits. These virtual circuits perform in much the same way as circuit switched circuits, but with one fundamental difference: virtual circuits allow other virtual circuits to occupy the same link. This means that communication can occur concurrently along a link between many nodes (rather than between only two nodes, which is the solution circuit switching provides). The reason packet switching is gaining so many advantages over circuit switching is advances in technology. It is now possible to encode voice and send it in packetised format with minimal problems, whereas in the past it was generally accepted that delay-sensitive data (for example voice) had to be handled by a circuit switched network.
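The key property claimed above, many virtual circuits sharing one physical link, can be sketched as a toy multiplexer. Each unit of data is tagged with its circuit number so the far end can separate the streams again (the round-robin scheduler here is an illustrative choice, not part of any standard):

```python
from collections import defaultdict

def multiplex(streams):
    """Interleave packets from several virtual circuits onto one link,
    tagging each packet with its circuit number (simple round robin)."""
    queues = {vc: list(pkts) for vc, pkts in streams.items()}
    link = []
    while any(queues.values()):
        for vc, queue in queues.items():
            if queue:
                link.append((vc, queue.pop(0)))
    return link

def demultiplex(link):
    """Recover each circuit's packet stream from the shared link."""
    streams = defaultdict(list)
    for vc, packet in link:
        streams[vc].append(packet)
    return dict(streams)

# Two virtual circuits share one link, and both streams survive intact.
link = multiplex({1: [b"a1", b"a2"], 2: [b"b1"]})
assert demultiplex(link) == {1: [b"a1", b"a2"], 2: [b"b1"]}
```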

It is thought that in the near future many local subscriber circuit switched networks, such as the telephone network, will be replaced by packet switched networks [Sincoskie p 4].


5.2 Companies Producing Packet Switching Technology

There are many companies producing switching equipment. Some of the more reputable companies include:

3Com offer a large range of switches, including core, workgroup and desktop switches. The technology is split up in this manner to offer flexible solutions at cost-effective prices. For example, desktop switches are designed to switch packets between several computers, whereas core switches are designed to switch high-speed backbones between or within intranets [3Com Product List].

Cisco has a comprehensive range of switches and routers and is one of the major switching equipment providers. Cisco's switching equipment offers flexible solutions for many network topologies and traffic loads; however, the highly tailored, high quality equipment does carry a high price tag [Cisco Routers].

D-Link has a large selection of switches. Like the previous companies, D-Link offers a range of switches, from desktop switches to backbone gigabit switches. The differences between these products lie in MAC address storage and throughput, which ultimately results in a difference in cost [D-Link Systems, Inc].


5.3 Future Applications


5.3.1 VOIP

As mentioned in 5.1, it is becoming increasingly accepted to transmit delay-sensitive data through a packet switched network (rather than a circuit switched one). There are protocols that can create a virtually real-time environment, which, for voice conversations, is sufficient.

Voice over IP (VOIP) is essentially a voice signal encoded into a digital format and sent through a packet switched network (or possibly any other network) using the Internet Protocol (IP). In recent years, standards such as ITU-T H.323 have been developed and supported by major companies. VOIP has a long way to evolve before it is as widely used as circuit switched networks, but it is well on its way.


5.3.2 IPv6

The Internet Protocol currently employed almost everywhere, IPv4, has come to the end of its useful life. This is mainly because it has run out of addresses to uniquely identify every non-private computer in the world. IPv6 has been designed to be more efficient than IPv4 and to solve the addressing problems we face at present.

IPv6 will use 128 bits to address nodes, which provides 2^128 possibilities (roughly 3.4*10^38). It will incorporate a special 'option mechanism' to store optional headers in the transport layer (maximising efficiency by reducing required space). Finally, IPv6 will support resource allocation, allowing packets to be labelled as part of a 'traffic flow', which will provide better communication of data such as video/voice streams [VOIP].
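The address-space arithmetic can be checked with Python's standard ipaddress module:

```python
import ipaddress

# IPv4: 32-bit addresses, about 4.3 billion in total.
print(2 ** 32)        # 4294967296

# IPv6: 128-bit addresses, roughly 3.4 * 10**38 in total.
print(2 ** 128)

# An IPv6 address is simply a 128-bit integer with a readable notation.
addr = ipaddress.IPv6Address("2001:db8::1")
print(addr.exploded)  # 2001:0db8:0000:0000:0000:0000:0000:0001
print(int(addr))
```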



6. Summary

In this report, packet switching has been discussed in relation to its history, concepts, components (both hardware and conceptual) and its application in networks such as the Internet. To conclude, I offer some summary points, followed by review questions.


6.1 Summary Points

1 Packet switching is a communication method, which is concerned with breaking up data into smaller units, called packets.

2 Packet switching was first established by Paul Baran in the early 1960's.

3 Packet switching was invented to create a system of communication that was more robust than the current circuit switched network.

4 Switching is carried out by devices called switches and routers (formerly known as IMPs).

5 The OSI model was introduced as a standard to describe the behaviour of packet switching protocols and how they are to be structured. It consists of 7 layers, where layer 7 is the highest layer (the application level) and layer 1 is the lowest layer (the hardware level).

6 Switches operate on layer 2 of the OSI model, which means that they use MAC addresses to route packets.

7 Routers operate on layer 3 of the OSI model, which makes them more versatile than switches, because they can then actually route packets between intranets.

8 Packets may travel through many routers to get to their destination. Each router the packets travel through is called a hop.

9 Routing strategies fall into two distinct classes. Fixed, flooding and random routing involve little-to-no 'thinking' from the router itself; adaptive routing is worked out by routers, based on metrics.

10 In fixed routing, routers have a static table, which is usually updated by an admin. The table tells the router what node to send data to, for a particular set of addresses.

11 In flooding, a router sends any incoming data out on every outgoing port. This means that a packet will arrive at its destination in the shortest possible time (as all paths are taken). However, it places a lot of extra data on the network.

12 In random routing, a router sends incoming data on a random (or often round-robin based) outgoing port. This requires no 'thinking' by the router and also requires no upkeep by an admin. However, it is not very efficient.

13 Adaptive routing is the most efficient method of routing. Each link is monitored and given a metric, so the router can determine which path to send data on to gain maximum efficiency. Adaptive routing involves overhead for router information, but makes up for this overhead in its performance.

14 Encapsulation is the process of adding a header to a fragment of data. Encapsulated data is handed down the layers before data is transmitted.

15 TCP/IP is a commonly used protocol suite in packet switched networks. IP addresses nodes using a 32 bit number.

16 X.25 is a popular standard in communication, because it offers high reliability and high speeds. It achieves this by using a method of communication called virtual circuits.

17 ATM is an emerging standard that is similar to X.25. However, ATM offers improved speeds over X.25, which is influencing X.25 based networks to change to ATM.

18 Store and forward is a method of routing which involves buffering each entire packet before re-transmitting it. This is not a fast method, but it is very reliable.

19 Fragment free is a method of routing which buffers the first 64 bytes of a packet before the packet is transmitted. This offers a basic error detection scheme and protects against collision (fragment) errors.

20 Cut through is a method of routing which buffers only the destination address of the packet and transmits the rest without buffering. This makes it fast, but prone to forwarding erroneous packets.


6.2 Review Questions

1) What is Packet Switching?

a) Communication method which breaks data up to transmit it
b) Communication method which incorporates redundant links
c) All of the above
d) None of the above

2) Why was packet switching invented?

a) To speed up the Internet
b) To replace telephone networks
c) To create a more robust communication system
d) To play games

3) What protocols are typically found in packet switched networks?

a) V.90
b) X.25
c) All of the above
d) None of the above

4) Packets were formerly known as:

a) Bags
b) Pockets
c) Data boxes
d) None of the above

5) Hot potato routing focuses on:

a) Finding the best path for each packet
b) Finding the quickest path for each packet
c) Quickly moving packets between nodes
d) Deleting erroneous packets

6) An example of switching equipment is:

a) User terminal
b) Router
c) Imp
d) B & C

7) Encapsulation involves what step before transmission?

a) Data is broken into segments and a header is added
b) A header is added and the data is broken into segments
c) Headers are removed and data is put together
d) Data is transmitted between nodes

8) There are __ layers in the OSI model:

a) 5
b) 6
c) 7
d) 8

9) What layer do switches work on?

a) 1
b) 2
c) 3
d) None of the above

10) What layer do routers work on?

a) 1
b) 2
c) 3
d) None of the above

11) What is the journey taken by a packet between routers called?

a) Hop
b) TTL (Time To Live)
c) Ping
d) Jump

12) Method of routing that involves buffering each packet, working out its next hop and re-transmitting it:

a) Store and forward
b) Cut through
c) Fragment free
d) None of the above

13) Method of routing that involves buffering the first 64 bytes of a packet before re-transmitting:

a) Store and forward
b) Cut through
c) Fragment free
d) None of the above

14) Method of routing that involves buffering only the address before re-transmitting:

a) Store and forward
b) Cut through
c) Fragment free
d) None of the above

15) What is one of the most common protocols in networks, whose addresses consist of a 32 bit combination?

a) IPV6
c) OSI

16) What concept of X.25 is similar to circuit switched systems?

a) Robustness
b) Virtual circuits
c) Ability to repair if an intermediate system disappears
d) All of the above

17) Which of the following is a type of routing strategy?

a) Fixed routing
b) Flooding
c) Adaptive routing
d) All of the above

18) What is typically contained in a packet's header?

a) The data to be transmitted
b) An error control such as X.25
c) A number to identify the sequence of packets
d) The password of the destination's user

19) There are __ layers in the TCP/IP model:

a) 4
b) 5
c) 6
d) 7

20) Fixed routing requires:

a) Server to continually update routing tables
b) Each router to send other routers their routing tables to calculate the best path
c) Packets to be no larger than 64 Bytes
d) None of the above


1. c
2. c
3. b
4. d
5. c
6. d
7. a
8. c
9. b
10. c
11. a
12. a
13. c
14. b
15. d
16. b
17. d
18. c
19. b
20. d



7. Bibliography and References



3Com Product List, 8/9/2002,

Abeck, S. Hegering, H. & Neumair, B. Integrated Management of Network Systems, Morgan Kaufmann Publishers (1999)

Baran, Paul, The Beginnings of Packet Switching: Some Underlying Concepts, published in IEEE Communications Magazine, July 2002

Cisco Routers, 8/9/2002,

Derfler, F. Freed, L, How Networks Work, Que Corporation (2000)

D-Link Systems, Inc, 8/9/2002,

GRINSEC, Electronic Switching, Elsevier Science Publishers B.V. (1983)

How LAN Switches Work, 30/8/2002,

Internet 101 PACKET SWITCHING, 14/8/2002,

Metcalfe, Robert M, Packet Communication, Peer-to-Peer Communications, Inc. (1996)

Robinson, Gene, Communication Networks With Layered Architectures, proceedings: IEEE802 N-West Standards Meeting for Broadband Wireless Access Systems (9/3/1999)

Routers, 29/8/2002,

Sincoskie, W. David, Broadband Packet Switching: A Personal Perspective, published in IEEE Communications Magazine, July 2002

Stallings, William, Data & Computer Communications – sixth edition, Prentice-Hall Inc (2000)

VOIP, 8/9/02,



Clark, K. Hamilton, K. CCIE Professional Development Cisco LAN Switching, Cisco Press (1999)

Ethernet Distributed Packet Switching for Local Computer Networks, 4/8/2002,

History of ARPANET, 14/8/2002,

Hobbs, Marvin, Modern Communications Switching Systems, Tab Books (1974)

IP Packet Header, 3/9/2002,

Kang, Moonsik, An Optimal Dynamic Multicast Routing Algorithm for High-Speed LAN/MAN Interconnections,

Neelakanta, P.S. A Textbook on ATM Telecommunications – Principles and Implementation, CRC Press (2000)

Packet Switching –, 14/8/2002,

Seifert, Rich, The Switch Book – The Complete Guide to LAN Switching Technology, John Wiley & Sons, Inc. (2000)

Smouts, Michel, Packet Switching Evolution from Narrowband to Broadband ISDN, Artech House (1991)