
COMPUTER NETWORKS UNIT-2

UNIT-II

MALLAREDDY COLLEGE OF ENGINEERING & TECHNOLOGY



Contents
Data link Layer
• Design issues
• Error detection& correction
• Elementary data link layer protocols
• Sliding window protocols

Multiple Access Protocols


• ALOHA
• CSMA, CSMA/CD, CSMA/CA
• Collision free protocols
• Ethernet-Physical Layer
• Ethernet Mac Sub Layer
• Data link layer Switching
• Use of bridges
• Learning bridges
• Spanning tree bridges
• Repeaters
• Hubs
• Bridges
• Switches
• Routers
• Gateways


Introduction:

 In the OSI model, the data link layer is the 6th layer from the top and the 2nd layer from the bottom.
 The communication channels that connect adjacent nodes are known as links, and in order to move a datagram from the source to the destination, it must be moved across each individual link in the path.
 The main responsibility of the Data Link Layer is to transfer the datagram across an
individual link.
 The Data link layer protocol defines the format of the packet exchanged across the nodes as
well as the actions such as Error detection, retransmission, flow control, and random access.
 Common Data Link Layer protocols are Ethernet, Token Ring, FDDI and PPP.
 An important characteristic of the Data Link Layer is that a datagram can be handled by different link layer protocols on different links in a path. For example, a datagram may be handled by Ethernet on the first link and by PPP on the second link.

The data link layer takes the packets it gets from the network layer and encapsulates them into
frames for transmission. Each frame contains a frame header, a payload field for holding the
packet, and a frame trailer

Parts of a Frame

A frame has the following parts −

• Frame Header − It contains the source and destination addresses of the frame.


• Payload field − It contains the message to be delivered.
• Trailer − It contains the error detection and error correction bits.
• Flag − It marks the beginning and end of the frame.

Following services are provided by the Data Link Layer:

Framing & Link access: Data Link Layer protocols encapsulate each network layer datagram within a link layer frame before transmission across the link. A frame consists of a data field, in which the network layer datagram is inserted, and a number of header fields. The protocol specifies the structure of the frame as well as a channel access protocol by which the frame is to be transmitted over the link.

Reliable delivery: The Data Link Layer can provide a reliable delivery service, i.e., transmit the network layer datagram without any error. A reliable delivery service is accomplished with retransmissions and acknowledgements. The data link layer mainly provides reliable delivery over links with high error rates (such as wireless links), where an error can be corrected locally, on the link at which it occurs, rather than forcing an end-to-end retransmission of the data.


Flow control: A receiving node can receive frames at a faster rate than it can process them. Without flow control, the receiver's buffer can overflow, and frames can get lost. To overcome this problem, the data link layer uses flow control to prevent the sending node on one side of the link from overwhelming the receiving node on the other side of the link.

Error detection: Errors can be introduced by signal attenuation and noise. Data Link Layer
protocol provides a mechanism to detect one or more errors. This is achieved by adding error
detection bits in the frame and then receiving node can perform an error check.

Error correction: Error correction is similar to error detection, except that the receiving node not only detects the errors but also determines where in the frame the errors have occurred.

Half-Duplex & Full-Duplex: In a Full-Duplex mode, both the nodes can transmit the data at the
same time. In a Half-Duplex mode, only one node can transmit the data at the same time.

DLL DESIGN ISSUES

1. Providing services to the network layer
2. Framing
3. Error Control
4. Flow Control

1. SERVICES PROVIDED TO THE NETWORK LAYER:

The data link layer can be designed to offer various services. The actual services offered can
vary from system to system.

Three reasonable possibilities that are commonly provided are

1) Unacknowledged Connectionless service
2) Acknowledged Connectionless service
3) Acknowledged Connection-Oriented service


1.1 UNACKNOWLEDGED CONNECTIONLESS SERVICE:

• Unacknowledged connectionless service consists of having the source machine send independent frames to the destination machine without having the destination machine acknowledge them.
• If a frame is lost due to noise on the line, no attempt is made to detect the loss or recover
from it in the data link layer.
• This class of service is appropriate when the error rate is very low so that recovery is left
to higher layers.
• Most LANs use unacknowledged connectionless service in the data link layer.
1.2 ACKNOWLEDGED CONNECTIONLESS SERVICE:

• When this service is offered, there are still no logical connections used, but each frame sent is individually acknowledged.
• In this way, the sender knows whether a frame has arrived correctly. If it has not arrived
within a specified time interval, it can be sent again. This service is useful over unreliable
channels, such as wireless systems.
• If individual frames are acknowledged and retransmitted, entire packets get through
much faster.
1.3 ACKNOWLEDGED CONNECTION-ORIENTED SERVICE:

Here, the source and destination machines establish a connection before any data are transferred.
Each frame sent over the connection is numbered, and the data link layer guarantees that each
frame sent is indeed received.

Furthermore, it guarantees that each frame is received exactly once and that all frames are
received in the right order.

2. FRAMING

The usual approach is for the data link layer to break the bit stream up into discrete frames and
compute the checksum for each frame (framing).


When a frame arrives at the destination, the checksum is recomputed. If the newly computed
checksum is different from the one contained in the frame, the data link layer knows that an error
has occurred and takes steps to deal with it

• Example: discarding the bad frame and possibly also sending back an error report.

We will look at four framing methods:

1. Character count.
2. Byte stuffing.
3. Bit stuffing.
4. Physical layer coding violations.

FRAMING – CHARACTER COUNT

The first framing method uses a field in the header to specify the number of characters in the
frame. When the data link layer at the destination sees the character count, it knows how many
characters follow and hence where the end of the frame is. This technique is shown in the figure: (a) four frames of sizes 5, 5, 8 and 8 characters, respectively, without errors; (b) the same frames with errors.


• The trouble with this algorithm is that the count can be garbled by a transmission error.
• For example, if the character count of 5 in the second frame of Fig. (b) becomes a 7, the
destination will get out of synchronization and will be unable to locate the start of the next
frame. Even if the checksum is incorrect so the destination knows that the frame is bad, it
still has no way of telling where the next frame starts.
• Sending a frame back to the source asking for a retransmission does not help either, since the
destination does not know how many characters to skip over to get to the start of the
retransmission. For this reason, the character count method is rarely used anymore.

FRAMING – BYTE / CHARACTER STUFFING

 In the past, the starting and ending bytes were different, but in recent years most
protocols have used the same byte, called a flag byte, as both the starting and ending
delimiter, as shown in below figure as FLAG.

• In this way, if the receiver ever loses synchronization, it can just search for the flag byte
to find the end of the current frame. Two consecutive flag bytes indicate the end of one
frame and start of the next one.
• Each frame starts and ends with a FLAG byte. Thus adjacent frames are separated by two
flag bytes.
• A serious problem occurs with this method when binary data is transmitted: it is possible that the FLAG pattern appears as part of the data.
• Solution: at the sender, an escape byte (ESC) is inserted just before each FLAG byte present in the data. The data link layer at the receiver end removes the ESC from the data before passing it to the network layer. This technique is called byte stuffing or character stuffing.
• Thus, a framing flag byte can be distinguished from one in the data by the absence or presence of an escape byte before it.


• Now if an ESC is present in the data then an extra ESC is inserted before it in the data.
This extra ESC is removed at the receiver.

 The major disadvantage of using this framing method is that it is closely tied to the use of
8-bit characters.
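To make the byte-stuffing rule concrete, here is a minimal Python sketch of stuffing and de-stuffing; the FLAG and ESC byte values are chosen only for illustration and are not taken from any particular protocol.

FLAG = 0x7E   # flag byte (value assumed for illustration)
ESC = 0x7D    # escape byte (value assumed for illustration)

def byte_stuff(payload):
    """Frame the payload and insert ESC before any FLAG or ESC byte in the data."""
    out = bytearray([FLAG])
    for b in payload:
        if b in (FLAG, ESC):
            out.append(ESC)          # stuff an escape byte before the special byte
        out.append(b)
    out.append(FLAG)
    return bytes(out)

def byte_unstuff(frame):
    """Receiver side: drop the framing flags and the stuffed ESC bytes."""
    data, out, skip = frame[1:-1], bytearray(), False
    for b in data:
        if not skip and b == ESC:
            skip = True              # the next byte is real data; drop this ESC
            continue
        out.append(b)
        skip = False
    return bytes(out)

# Round trip: a payload containing the FLAG byte is still recovered intact
assert byte_unstuff(byte_stuff(bytes([0x01, FLAG, 0x02]))) == bytes([0x01, FLAG, 0x02])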

FRAMING – BIT STUFFING

• Whenever the sender's data link layer encounters five consecutive 1s in the data, it
automatically stuffs a 0 bit into the outgoing bit stream.
• This bit stuffing is analogous to byte stuffing, in which an escape byte is stuffed into the outgoing character stream before a flag byte in the data.
• When the receiver sees five consecutive incoming 1 bits, followed by a 0 bit, it automatically de-stuffs (i.e., deletes) the 0 bit. Just as byte stuffing is completely transparent to the network layer in both computers, so is bit stuffing.
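A minimal Python sketch of the bit-stuffing rule above, operating on a string of '0'/'1' characters for readability (payload bits only; flag generation is not shown).

def bit_stuff(bits):
    """After every run of five consecutive 1s, insert a 0 bit."""
    out, ones = [], 0
    for b in bits:
        out.append(b)
        ones = ones + 1 if b == '1' else 0
        if ones == 5:
            out.append('0')          # stuffed bit
            ones = 0
    return ''.join(out)

def bit_unstuff(bits):
    """Delete the 0 bit that follows five consecutive 1s."""
    out, ones, skip = [], 0, False
    for b in bits:
        if skip:                     # this is the stuffed 0; drop it
            skip, ones = False, 0
            continue
        out.append(b)
        ones = ones + 1 if b == '1' else 0
        if ones == 5:
            skip, ones = True, 0
    return ''.join(out)

print(bit_stuff("0111111111"))                          # 01111101111
assert bit_unstuff(bit_stuff("0111111111")) == "0111111111"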

Error Detection:
When data is transmitted from one device to another device, the system does not guarantee
whether the data received by the device is identical to the data transmitted by another device. An
Error is a situation when the message received at the receiver end is not identical to the message
transmitted.


Types of Errors:

Single-Bit Error:

Only one bit of a given data unit is changed from 1 to 0 or from 0 to 1.

In the above figure, the transmitted message is corrupted by a single-bit error, i.e., a 0 bit is changed to 1.

Single-bit errors are less likely in serial data transmission; they mainly occur in parallel data transmission.

Burst Error:

A Burst Error occurs when two or more bits are changed from 0 to 1 or from 1 to 0. The length of the burst is measured from the first corrupted bit to the last corrupted bit.

The duration of noise causing a burst error is longer than that causing a single-bit error.

Burst Errors are most likely to occur in Serial Data Transmission.

The number of affected bits depends on the duration of the noise and data rate.

Error Detecting Techniques:


The most popular Error Detecting Techniques are:

 Single parity check
 Two-dimensional parity check
 Checksum
 Cyclic redundancy check

1. Single Parity Check:


 Single parity checking is the simplest and least expensive mechanism for detecting errors.
 In this technique, a redundant bit, known as a parity bit, is appended at the end of the data unit so that the total number of 1s becomes even. For an 8-bit data unit, the total number of transmitted bits would therefore be 9.
 If the number of 1s bits is odd, then parity bit 1 is appended and if the number of 1s bits is
even, then parity bit 0 is appended at the end of the data unit.
 At the receiving end, the parity bit is calculated from the received data bits and compared
with the received parity bit.
 This technique generates the total number of 1s even, so it is known as even-parity checking.
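A minimal Python sketch of even-parity generation and checking for an 8-bit data unit, as described above.

def add_even_parity(data_bits):
    """Append a parity bit so that the total number of 1s becomes even."""
    parity = '1' if data_bits.count('1') % 2 == 1 else '0'
    return data_bits + parity

def parity_ok(codeword):
    """Receiver side: the number of 1s over data plus parity must still be even."""
    return codeword.count('1') % 2 == 0

sent = add_even_parity("10110001")   # 8 data bits -> 9 transmitted bits
print(sent, parity_ok(sent))         # 101100010 True
corrupted = '0' + sent[1:]           # a single-bit error in the first position
print(parity_ok(corrupted))          # False: the error is detected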


Drawbacks of Single Parity Checking:

 It can only detect single-bit errors.
 If two bits are interchanged (more generally, if an even number of bits are in error), it cannot detect the error.

2. Two-Dimensional Parity Check


 Performance can be improved by using Two-Dimensional Parity Check which organizes
the data in the form of a table.
 Parity check bits are computed for each row, which is equivalent to the single-parity check.
 In Two-Dimensional Parity check, a block of bits is divided into rows, and the redundant
row of bits is added to the whole block.
 At the receiving end, the parity bits are compared with the parity bits computed from the
received data.

Drawbacks of 2D Parity Check


 If two bits in one data unit are corrupted and two bits in exactly the same positions in another data unit are also corrupted, then the 2D parity checker will not be able to detect the error.
 This technique cannot detect 4-bit errors or more in some cases.


3. Checksum:

A Checksum is an error detection technique based on the concept of redundancy.

It is divided into two parts:

Checksum Generator:

A Checksum is generated at the sending side. Checksum generator subdivides the data into equal
segments of n bits each, and all these segments are added together by using one's complement
arithmetic. The sum is complemented and appended to the original data, known as checksum
field. The extended data is transmitted across the network.

The sender follows these steps:

1. The block unit is divided into k sections, each of n bits.
2. All the k sections are added together using one's complement arithmetic to get the sum.
3. The sum is complemented and becomes the checksum field.
4. The original data and the checksum field are sent across the network.

Checksum Checker:

A Checksum is verified at the receiving side. The receiver subdivides the incoming data into
equal segments of n bits each, and all these segments are added together, and then this sum is
complemented. If the complement of the sum is zero, then the data is accepted otherwise data is
rejected.

The receiver follows these steps:

1. The block unit is divided into k sections, each of n bits.
2. All the k sections are added together using one's complement arithmetic to get the sum.
3. The sum is complemented.
4. If the result is zero, the data is accepted; otherwise, the data is discarded.
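A minimal Python sketch of the sender and receiver steps above, using one's-complement (end-around-carry) addition; the section size n = 8 bits and the sample data values are assumptions made only for the example.

def ones_complement_sum(sections, n=8):
    """Add n-bit sections and wrap any carry back into the sum (one's complement arithmetic)."""
    total, mask = 0, (1 << n) - 1
    for s in sections:
        total += s
        total = (total & mask) + (total >> n)   # end-around carry
    return total

def make_checksum(sections, n=8):
    return ones_complement_sum(sections, n) ^ ((1 << n) - 1)    # complement of the sum

def accept(sections_plus_checksum, n=8):
    """Receiver: add everything, complement the result; zero means the data is accepted."""
    return ones_complement_sum(sections_plus_checksum, n) ^ ((1 << n) - 1) == 0

data = [0x25, 0x62, 0x3F, 0x52]          # k = 4 sections of n = 8 bits each (sample values)
chk = make_checksum(data)
print(hex(chk), accept(data + [chk]))    # 0xe6 True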

4. Cyclic Redundancy Check (CRC):

CRC is a redundancy error technique used to determine the error.

Following are the steps used in CRC for error detection:

 In the CRC technique, a string of n 0s is appended to the data unit, where n is one less than the number of bits in a predetermined binary number known as the divisor (which is n+1 bits).
 Secondly, the newly extended data is divided by the divisor using a process known as binary (modulo-2) division. The remainder generated from this division is known as the CRC remainder.
 Thirdly, the CRC remainder replaces the appended 0s at the end of the original data. This
newly generated unit is sent to the receiver.
 The receiver receives the data followed by the CRC remainder. The receiver will treat this
whole unit as a single unit, and it is divided by the same divisor that was used to find the
CRC remainder.

If the result of this division is zero, the data has no error and is accepted.

If the result of this division is not zero, the data contains an error and is therefore discarded.

Let's understand this concept through an example:

Suppose the original data is 11100 and divisor is 1001.

CRC Generator:

 A CRC generator uses a modulo-2 division. Firstly, three zeroes are appended at the end
of the data as the length of the divisor is 4 and we know that the length of the string 0s to
be appended is always one less than the length of the divisor.
 Now, the string becomes 11100000, and the resultant string is divided by the divisor 1001.
 The remainder generated from the binary division is known as CRC remainder. The
generated value of the CRC remainder is 111.
 CRC remainder replaces the appended string of 0s at the end of the data unit, and the final
string would be 11100111 which is sent across the network.

CRC Checker:

 The functionality of the CRC checker is similar to the CRC generator.


 When the string 11100111 is received at the receiving end, then CRC checker performs the
modulo-2 division.
 A string is divided by the same divisor, i.e., 1001.
 In this case, CRC checker generates the remainder of zero. Therefore, the data is accepted.
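A minimal Python sketch of modulo-2 division that reproduces the example above: data 11100, divisor 1001, CRC remainder 111, and a transmitted codeword of 11100111 that the receiver accepts.

def mod2_div(bits, divisor):
    """Perform modulo-2 (XOR) division and return the final bit string."""
    bits = list(bits)
    for i in range(len(bits) - len(divisor) + 1):
        if bits[i] == '1':                       # divide only when the leading bit is 1
            for j, d in enumerate(divisor):
                bits[i + j] = '0' if bits[i + j] == d else '1'   # XOR of the two bits
    return ''.join(bits)

def crc_remainder(data, divisor):
    """Sender side: append len(divisor)-1 zeros, divide, keep the remainder."""
    extended = data + '0' * (len(divisor) - 1)
    return mod2_div(extended, divisor)[-(len(divisor) - 1):]

def crc_ok(codeword, divisor):
    """Receiver side: divide the whole codeword; a zero remainder means no error."""
    return set(mod2_div(codeword, divisor)[-(len(divisor) - 1):]) == {'0'}

rem = crc_remainder("11100", "1001")
print(rem)                                # 111
print(crc_ok("11100" + rem, "1001"))      # True: remainder is zero, data accepted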


Error Correction:
Error Correction codes are used to detect and correct the errors when data is transmitted from the
sender to the receiver.

Error Correction can be handled in two ways:

 Backward error correction: Once the error is discovered, the receiver requests the
sender to retransmit the entire data unit.
 Forward error correction: In this case, the receiver uses the error-correcting code
which automatically corrects the errors.

A single additional bit can detect the error, but cannot correct it.

For correcting the errors, one has to know the exact position of the error. For example, to correct a single-bit error in a 7-bit codeword, the error correction code must determine which one of the seven bits is in error. To achieve this, we have to add some additional redundant bits.

Suppose r is the number of redundant bits and d is the number of data bits. The number of redundant bits r can be calculated by using the formula:

2^r >= d + r + 1

The value of r is calculated by using the above formula. For example, if the value of d is 4, then the smallest value of r that satisfies the above relation is 3.

To determine the position of the bit which is in error, R.W. Hamming developed a technique known as the Hamming code, which can be applied to data units of any length and uses the relationship between data bits and redundant bits.

Hamming Code

Parity bits: The bit which is appended to the original data of binary bits so that the total number
of 1s is even or odd.

Even parity: To check for even parity, if the total number of 1s is even, then the value of the
parity bit is 0. If the total number of 1s occurrences is odd, then the value of the parity bit is 1.

Odd Parity: To check for odd parity, if the total number of 1s is even, then the value of parity
bit is 1. If the total number of 1s is odd, then the value of parity bit is 0.

Algorithm of hamming code:

 An information unit of d bits is combined with r redundant bits to form a (d+r)-bit unit.
 The location of each of the (d+r) digits is assigned a decimal value.
 The r bits are placed at the positions that are powers of 2, i.e., positions 1, 2, 4, ..., 2^(k-1).
 At the receiving end, the parity bits are recalculated. The decimal value of the recalculated parity bits determines the position of an error.

Relationship b/w Error position & binary number.

Example:

Let's understand the concept of Hamming code through an example:

Step 1: Selecting the number of redundant bits

Suppose the original data is 1010 which is to be sent.


Total number of data bits d = 4

Number of redundant bits r: 2^r >= d + r + 1
2^r >= 4 + r + 1
Therefore, the value of r is 3, which satisfies the above relation.
Total number of bits = d + r = 4 + 3 = 7

Step2: Determining the position of the redundant bits

The number of redundant bits is 3. The three bits are represented by r1, r2, r4. The positions of the redundant bits correspond to powers of 2; therefore, their positions are 1, 2^1 = 2, and 2^2 = 4.

1. The position of r1 = 1   2. The position of r2 = 2   3. The position of r4 = 4

Representation of Data on the addition of parity bits:

Step3: Determining the Parity bits

Determining the r1 bit

The r1 bit is calculated by performing a parity check on the bit positions whose binary
representation includes 1 in the first position.

We observe from the above figure that the bit positions that include 1 in the first position are 1,
3, 5, 7. Now, we perform the even-parity check at these bit positions. The total number of 1 at
these bit positions corresponding to r1 is even, therefore, the value of the r1 bit is 0.

Determining r2 bit

The r2 bit is calculated by performing a parity check on the bit positions whose binary
representation includes 1 in the second position.

We observe from the above figure that the bit positions that include 1 in the second position
are 2, 3, 6, 7. Now, we perform the even-parity check at these bit positions. The total number of
1 at these bit positions corresponding to r2 is odd; therefore, the value of the r2 bit is 1.

Determining r4 bit

The r4 bit is calculated by performing a parity check on the bit positions whose binary
representation includes 1 in the third position.

We observe from the above figure that the bit positions that include 1 in the third position are 4,
5, 6, 7. Now, we perform the even-parity check at these bit positions. The total number of 1 at
these bit positions corresponding to r4 is even, therefore, the value of the r4 bit is 0.

Data transferred is given below (bit positions 7 down to 1): 1 0 1 0 0 1 0

Error correction using hamming code:

Suppose the 4th bit is changed from 0 to 1 at the receiving end, then parity bits are recalculated.


R1 bit

The bit positions of the r1 bit are 1,3,5,7

We observe from the above figure that the binary representation of r1 is 1100. Now, we perform
the even-parity check, the total number of 1s appearing in the r1 bit is an even number.
Therefore, the value of r1 is 0.

R2 bit

The bit positions of r2 bit are 2,3,6,7.

We observe from the above figure that the binary representation of r2 is 1001. Now, we perform
the even-parity check, the total number of 1s appearing in the r2 bit is an even number.
Therefore, the value of r2 is 0.

R4 bit

The bit positions of r4 bit are 4,5,6,7.


We observe from the above figure that the binary representation of r4 is 1011. Now, we perform
the even-parity check, the total number of 1s appearing in the r4 bit is an odd number. Therefore,
the value of r4 is 1.

The binary representation of the recalculated parity bits, i.e., r4r2r1, is 100, and its corresponding decimal value is 4. Therefore, the error has occurred in the 4th bit position. The bit value must be changed from 1 to 0 to correct the error.
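A minimal Python sketch of the Hamming(7,4) procedure worked through above: the data 1010 is encoded with even-parity bits at positions 1, 2 and 4, position 4 is then flipped, and the recalculated checks give the error position 4. The function names and bit ordering (position 7 holds the most significant data bit) follow the example, not any library.

def hamming74_encode(data4):
    """data4: 4 data bits, most significant first; returns codeword bits for positions 1..7."""
    code = [0] * 8                          # index 0 unused; indices 1..7 are bit positions
    code[7], code[6], code[5], code[3] = (int(b) for b in data4)
    for p in (1, 2, 4):                     # even parity over positions whose binary form contains p
        code[p] = sum(code[i] for i in range(1, 8) if i & p and i != p) % 2
    return code[1:]

def hamming74_syndrome(code7):
    """Recompute the parity checks; the sum of failing checks is the error position (0 = none)."""
    code = [0] + list(code7)
    return sum(p for p in (1, 2, 4)
               if sum(code[i] for i in range(1, 8) if i & p) % 2 != 0)

sent = hamming74_encode("1010")             # positions 1..7 -> [0, 1, 0, 0, 1, 0, 1]
received = sent[:]
received[3] = 1 - received[3]               # flip bit position 4, as in the example
print(hamming74_syndrome(received))         # 4: the error is in the 4th bit position
print(hamming74_syndrome(sent))             # 0: no error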

ELEMENTARY DATA LINK LAYER PROTOCOLS:


Protocols in the data link layer are designed so that this layer can perform its basic functions: framing, error control and flow control. Framing is the process of dividing the bit stream from the physical layer into data frames whose size ranges from a few hundred to a few thousand bytes. Error control mechanisms deal with transmission errors and the retransmission of corrupted and lost frames. Flow control regulates the speed of delivery so that a fast sender does not drown a slow receiver.

Types of Data Link layer Protocols


Data link protocols can be broadly divided into two categories, depending on whether the transmission channel is noiseless or noisy.


1. Simplest Protocol:
The Simplest protocol is a hypothetical protocol designed for unidirectional data transmission over an ideal channel, i.e., a channel through which transmission can never go wrong. It has distinct procedures for the sender and the receiver. The sender simply sends all its data onto the channel as soon as it is available in its buffer. The receiver is assumed to process all incoming data instantly. It is hypothetical since it does not handle flow control or error control.

• This is an unrealistic protocol, because it handles neither flow control nor error control.

2. STOP & WAIT PROTOCOL:

• The problem here is how to prevent the sender from flooding the receiver.
• Stop – and – Wait protocol is for noiseless channel too. It provides unidirectional data
transmission without any error control facilities. However, it provides for flow control so
that a fast sender does not drown a slow receiver
• The receiver sends an acknowledgement frame back to the sender, telling the sender that the last received frame has been processed and passed to the host; permission to send the next frame is granted.
• The sender, after having sent a frame, must wait for the acknowledge frame from the
receiver before sending another frame.
• This protocol is known as stop and wait protocol.

Design of Stop and wait Protocol


Drawbacks:
 Only one frame can be in transmission at a time.
 This leads to inefficiency if the propagation delay is much longer than the transmission delay.


Flow control for stop and wait

3. Stop & Wait ARQ(Automatic Repeat Request):


Stop & Wait ARQ is a sliding window protocol for flow control and it overcomes the limitations
of Stop & Wait, we can say that it is the improved or modified version of Stop & Wait protocol.
Working of Stop & Wait ARQ is almost like Stop & Wait protocol, the only difference is that it
includes some additional components, which are:
a. Time out timer
b. Sequence numbers for data packets
c. Sequence numbers for feedbacks
• When the frame arrives at the receiver site, it is checked and if it is corrupted, it is silently
discarded.
• Lost frames are more difficult to handle than corrupted ones. In our previous protocols, there was no way to identify a frame.
• When the receiver receives a data frame, that frame could be the correct one, a duplicate, or a frame out of order. The solution is to number the frames.
• The lost frames need to be resent in this protocol. If the receiver does not respond when there
is an error, how can the sender know which frame to resend?
• To remedy this problem, the sender keeps a copy of the sent frame. At the same time, it starts
a timer. If the timer expires and there is no ACK for the sent frame, the frame is resent, the
copy is held, and the timer is restarted.
• Error correction in Stop-and-Wait ARQ is done by keeping a copy of the
sent frame and retransmitting of the frame when the timer expires.
Operation:

The sender transmits the frame, when frame arrives at the receiver it checks for damage and
acknowledges to the sender accordingly. While transmitting a frame there can be 4 situations.
1. Normal operation
2. The frame is lost
3. The acknowledgement is lost
4. The acknowledgement is delayed

How Stop and Wait ARQ Solves All Problems?

a) Normal operation:

In normal operation, the sender sends frame 0 and waits for acknowledgment ACK1. After receiving ACK1, the sender sends the next frame, frame 1, and waits for its acknowledgment ACK0. This operation is repeated, as shown in the figure.

b) Lost or damaged frame:


When the receiver receives a frame and finds it damaged, it discards the frame but retains its number. When the sender does not receive the corresponding acknowledgement, it retransmits the same frame.

c) Lost acknowledgement:

When an acknowledgement is lost, the sender does not know whether the frame is received by
receiver. After the timer expires, the sender re-transmits the same frame. On the other hand,
receiver has already received this frame earlier hence the second copy of the frame is discarded.
Fig. shows lost ACK.

D) Delayed acknowledgement:
Suppose the sender sends the data and it has also been received by the receiver. The receiver then
sends the acknowledgment but the acknowledgment is received after the timeout period on the
sender's side. As the acknowledgment is received late, so acknowledgment can be wrongly
considered as the acknowledgment of some other data packet.
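The sender-side behaviour described above (keep a copy of the frame, alternate sequence numbers 0 and 1, retransmit when no acknowledgment arrives) can be sketched in Python as below. The channel is simulated, and a lost frame or acknowledgment is treated directly as a timer expiry, so the timer itself is not modelled explicitly.

import random

def unreliable_deliver(frame, loss_prob=0.3):
    """Simulated link: returns the ACK number, or None if the frame or its ACK was lost."""
    if random.random() < loss_prob:
        return None
    return 1 - frame["seq"]                  # receiver acknowledges with the next expected number

def stop_and_wait_arq(payloads):
    seq = 0
    for data in payloads:
        frame = {"seq": seq, "data": data}   # keep a copy of the sent frame
        while True:
            ack = unreliable_deliver(frame)
            if ack == 1 - seq:               # expected ACK arrived before the timer expired
                print("delivered", frame, "ACK", ack)
                break
            print("timeout, retransmitting", frame)
        seq = 1 - seq                        # alternate sequence numbers 0 and 1

stop_and_wait_arq(["a", "b", "c"])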


Stop and Wait Protocol vs Stop and Wait ARQ

The following comparison states the differences between the two protocols:

• Channel assumption: Stop and Wait assumes that the communication channel is perfect and noise free; Stop and Wait ARQ assumes that the channel is imperfect and noisy.
• Corruption: In Stop and Wait, a data packet sent by the sender can never get corrupted; in Stop and Wait ARQ, a data packet may get corrupted.
• Negative acknowledgements: Stop and Wait has no concept of negative acknowledgements; in Stop and Wait ARQ, a negative acknowledgement is sent by the receiver if the data packet is found to be corrupt.
• Time-out timer: Stop and Wait has no concept of a time-out timer; in Stop and Wait ARQ, the sender starts a time-out timer after sending each data packet.
• Sequence numbers: Stop and Wait has no concept of sequence numbers; in Stop and Wait ARQ, data packets and acknowledgements are numbered using sequence numbers.

Limitation of Stop and Wait ARQ-


 The major limitation of Stop and Wait ARQ is its very low efficiency.
To increase the efficiency, protocols like Go Back N and Selective Repeat are used.
Sliding Window Protocols:
 Sliding window protocol allows the sender to send multiple frames before needing the
acknowledgements.
 It is more efficient.
Implementations-
 Various implementations of sliding window protocol are-
1. Go back N
2. Selective Repeat

Go back N ARQ:
In the stop-and-wait protocol, the sender can send only one frame at a time and cannot send the
next frame without receiving the acknowledgment of the previously sent frame, whereas, in the
case of sliding window protocol, the multiple frames can be sent at a time.
Go-back N ARQ (Automatic Repeat Request) protocol is a practical implementation of the
sliding window protocol. In Go-Back-N ARQ; N is the sender's window size. Suppose we say
that Go-Back-3, which means that the three frames can be sent at a time before expecting the
acknowledgment from the receiver.

It uses the principle of protocol pipelining in which the multiple frames can be sent before
receiving the acknowledgment of the first frame. If we have five frames and the concept is Go-

Back-3, which means that the three frames can be sent, i.e., frame no 1, frame no 2, frame no 3
can be sent before expecting the acknowledgment of frame no 1.

In Go-Back-N ARQ, the frames are numbered sequentially as Go-Back-N ARQ sends the
multiple frames at a time that requires the numbering approach to distinguish the frame from
another frame, and these numbers are known as the sequential numbers.

The number of frames that can be sent at a time totally depends on the size of the sender's
window. So, we can say that 'N' is the number of frames that can be sent at a time before
receiving the acknowledgment from the receiver.

o N is the sender's window size.


o If the size of the sender's window is 4 then the sequence number will be
0,1,2,3,0,1,2,3,0,1,2, and so on.

The number of bits in the sequence number is 2, which generates the binary sequence 00, 01, 10, 11.

Efficiency of any flow control protocol is given by:

Efficiency = N / (1 + 2a), where N is the sender's window size and a is the ratio of propagation delay to transmission delay (for Stop and Wait, N = 1).
 
Design of Go-Back-N ARQ protocol


Example: Working of Go-Back-N ARQ:

Suppose there are a sender and a receiver, and let's assume that there are 11 frames to be sent.
These frames are represented as 0,1,2,3,4,5,6,7,8,9,10, and these are the sequence numbers of the
frames. Mainly, the sequence number is decided by the sender's window size. But, for the better
understanding, we took the running sequence numbers, i.e., 0,1,2,3,4,5,6,7,8,9,10. Let's consider
the window size as 4, which means that the four frames can be sent at a time before expecting the
acknowledgment of the first frame.

Step 1: Firstly, the sender will send the first four frames to the receiver, i.e., 0,1,2,3, and now the
sender is expected to receive the acknowledgment of the 0th frame.

Let's assume that the receiver has sent the acknowledgment for the 0 frame, and the receiver has
successfully received it.

The sender will then send the next frame, i.e., 4, and the window slides containing four frames
(1,2,3,4).

The receiver will then send the acknowledgment for the frame no 1. After receiving the
acknowledgment, the sender will send the next frame, i.e., frame no 5, and the window will slide
having four frames (2,3,4,5).

Now, let's assume that the receiver does not acknowledge frame no 2: either the frame is lost, or its acknowledgment is lost. Instead of sending frame no 6, the sender goes back to 2, which is the first frame of the current window, and retransmits all the frames in the current window, i.e., 2, 3, 4, 5.


Important points related to Go-Back-N ARQ:

 In Go-Back-N, N determines the sender's window size, and the size of the receiver's window
is always 1.
 It does not consider the corrupted frames and simply discards them.
 It does not accept the frames which are out of order and discards them.
 If the sender does not receive the acknowledgment, it leads to the retransmission of all the
current window frames.
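A small Python sketch of the sender-side behaviour in the example above (window size N = 4, frames 0 to 10). The acknowledgments are simulated, and the acknowledgment for frame 2 is lost once, so the sender goes back and retransmits the whole window 2, 3, 4, 5.

def go_back_n(num_frames=11, N=4, lose_ack_for=2):
    base, next_seq = 0, 0                    # base = oldest unacknowledged frame
    ack_lost_once = False
    while base < num_frames:
        while next_seq < min(base + N, num_frames):
            print("send frame", next_seq)    # fill the sender's window
            next_seq += 1
        if base == lose_ack_for and not ack_lost_once:
            ack_lost_once = True
            print("ACK for frame", base, "lost -> timeout, go back to", base)
            next_seq = base                  # retransmit every frame in the current window
        else:
            print("ACK for frame", base, "received, slide the window")
            base += 1

go_back_n()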


The example of Go-Back-N ARQ is shown below in the figure.

Comparison Table: Stop and Wait ARQ vs Go Back N vs Selective Repeat

Efficiency:
Stop and Wait ARQ: 1 / (1+2a); Go Back N: N / (1+2a); Selective Repeat: N / (1+2a).
Remark: Go Back N and Selective Repeat give better efficiency than Stop and Wait ARQ.

Window Size:
Stop and Wait ARQ: sender window size = 1, receiver window size = 1; Go Back N: sender window size = N, receiver window size = 1; Selective Repeat: sender window size = N, receiver window size = N.
Remark: The buffer requirement in Selective Repeat is very large. If the system does not have lots of memory, then it is better to choose Go Back N.

Minimum number of sequence numbers required:
Stop and Wait ARQ: 2; Go Back N: N+1; Selective Repeat: 2 x N.
Remark: Selective Repeat requires a large number of bits in the sequence number field.

Retransmissions required if a packet is lost:
Stop and Wait ARQ: only the lost packet is retransmitted; Go Back N: the entire window is retransmitted; Selective Repeat: only the lost packet is retransmitted.
Remark: Selective Repeat is far better than Go Back N in terms of retransmissions required.

Bandwidth requirement:
Stop and Wait ARQ: low; Go Back N: high, because even if a single packet is lost the entire window has to be retransmitted, so a high error rate wastes a lot of bandwidth; Selective Repeat: moderate.
Remark: Selective Repeat is better than Go Back N in terms of bandwidth requirement.

CPU usage:
Stop and Wait ARQ: low; Go Back N: moderate; Selective Repeat: high, due to the searching and sorting required at the sender and receiver side.
Remark: Go Back N is better than Selective Repeat in terms of CPU usage.

Level of difficulty in implementation:
Stop and Wait ARQ: low; Go Back N: moderate; Selective Repeat: complex, as it requires extra logic plus sorting and searching.
Remark: Go Back N is better than Selective Repeat in terms of implementation difficulty.

Acknowledgements:
Stop and Wait ARQ: uses an independent acknowledgement for each packet; Go Back N: uses cumulative acknowledgements (but may use independent acknowledgements as well); Selective Repeat: uses an independent acknowledgement for each packet.
Remark: Sending cumulative acknowledgements reduces the traffic in the network, but if a cumulative ACK is lost, the acknowledgements for all the corresponding packets are lost.

Type of transmission:
Stop and Wait ARQ: half duplex; Go Back N: full duplex; Selective Repeat: full duplex.
Remark: Go Back N and Selective Repeat are better in terms of channel usage.

 Selective Repeat ARQ


Selective Repeat ARQ is also known as Selective Repeat Automatic Repeat Request. It is a data link layer protocol that uses the sliding window method. The Go-Back-N ARQ protocol works well if errors are few, but if the channel is very noisy, a lot of bandwidth is wasted on resending frames, so the Selective Repeat ARQ protocol is used instead. In this protocol, the size of the sender window is always equal to the size of the receiver window, and the size of the sliding window is always greater than 1.

If the receiver receives a corrupt frame, it does not directly discard it. It sends a negative acknowledgment to the sender, and the sender resends that frame as soon as it receives the negative acknowledgment; there is no waiting for a time-out to resend that frame. The design of the Selective Repeat ARQ protocol is shown below.


The example of the Selective Repeat ARQ protocol is shown below in the figure.

Difference between the Go-Back-N ARQ and Selective Repeat ARQ:

Go-Back-N ARQ: If a frame is corrupted or lost, all subsequent frames have to be sent again.
Selective Repeat ARQ: Only the corrupted or lost frame is sent again.

Go-Back-N ARQ: If the error rate is high, it wastes a lot of bandwidth.
Selective Repeat ARQ: The loss of bandwidth is low.

Go-Back-N ARQ: It is less complex.
Selective Repeat ARQ: It is more complex because it has to do sorting and searching as well, and it also requires more storage.

Go-Back-N ARQ: It does not require sorting.
Selective Repeat ARQ: Sorting is done to get the frames in the correct order.

Go-Back-N ARQ: It does not require searching.
Selective Repeat ARQ: The search operation is performed at the receiver.

Go-Back-N ARQ: It is used more.
Selective Repeat ARQ: It is used less because it is more complex.

Multiple Access Protocols



(Part2)

 Random Access Protocol


In this protocol, all stations have equal priority to send data over the channel. In random access protocols, no station depends on another station, nor does any station control another. Depending on the channel's state (idle or busy), each station transmits its data frame. However, if more than one station sends data over the channel at the same time, there may be a collision or data conflict. Due to the collision, the data frames may be lost or changed and hence not received correctly by the receiver.
Given below are the protocols that lie under the category of Random Access protocol:

1. ALOHA

2. CSMA(Carrier sense multiple access)

3. CSMA/CD(Carrier sense multiple access with collision detection)

4. CSMA/CA(Carrier sense multiple access with collision avoidance)


Controlled Access Protocol:

When using a controlled access protocol, the stations consult one another to find which station has the right to send data. A station cannot send until it has been authorized by the other stations.

The three main controlled access methods are as follows;

1. Reservation

2. Polling

3. Token Passing

Channelization Protocols:

Channelization is another method used for multiple access in which the available bandwidth of the link is shared among the different stations in time, in frequency, or through codes.

Three channelization protocols used are as follows;

 FDMA(Frequency-division Multiple Access)

 TDMA(Time-Division Multiple Access)

 CDMA(Code-Division Multiple Access)

ALOHA Random Access Protocol

ALOHA was designed for wireless LANs (Local Area Networks) but can also be used on any shared medium to transmit data. Using this method, any station can transmit data across the network whenever it has a data frame available for transmission.

Aloha Rules

1. Any station can transmit data to a channel at any time.


2. It does not require any carrier sensing.
3. Collision and data frames may be lost during the transmission of data through multiple
stations.
4. There is no collision detection in Aloha; instead, acknowledgment of the frames tells the sender whether a transmission succeeded.
5. It requires retransmission of data after some random amount of time.


Pure Aloha

In Pure Aloha, a station transmits whenever it has data to send, without checking whether the channel is idle; collisions may therefore occur and data frames can be lost. After transmitting a data frame, the station waits for the receiver's acknowledgment. If the acknowledgment does not arrive within the specified time, the station assumes the frame has been lost or destroyed, waits for a random amount of time called the back-off time (Tb), and retransmits the frame. This continues until the data is successfully received.

1. The total vulnerable time of pure Aloha is 2 * Tfr.
2. Maximum throughput occurs when G = 1/2 and is about 18.4%.
3. The probability of successful transmission of a data frame is S = G * e^(-2G).

As we can see in the figure above, there are four stations for accessing a shared channel and
transmitting data frames. Some frames collide because most stations send their frames at the
same time. Only two frames, frame 1.1 and frame 2.2, are successfully transmitted to the receiver
end. At the same time, the other frames are lost or destroyed. Whenever two frames overlap on the shared channel, even partially, a collision occurs and both suffer damage: even if only the first bit of a new frame overlaps with the last bit of a frame that has almost finished, both frames are totally destroyed and both stations must retransmit their data frames.

Slotted Aloha

Slotted Aloha was designed to improve the efficiency of pure Aloha, because pure Aloha has a very high probability of frame collision. In slotted Aloha, the shared channel is divided into fixed time intervals called slots. If a station wants to send a frame on the shared channel, the frame can only be sent at the beginning of a slot, and only one frame is allowed to be sent in each slot. If a station misses the beginning of a slot, it has to wait until the beginning of the next slot. However, a collision is still possible if two or more stations try to send a frame at the beginning of the same time slot.

1. Maximum throughput occurs in slotted Aloha when G = 1 and is about 36.8%.
2. The probability of successfully transmitting a data frame in slotted Aloha is S = G * e^(-G).
3. The total vulnerable time required in slotted Aloha is Tfr.
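A short Python check of the throughput formulas above. Note the different exponents: S = G * e^(-2G) for pure Aloha and S = G * e^(-G) for slotted Aloha.

import math

def pure_aloha_throughput(G):
    return G * math.exp(-2 * G)      # S = G * e^(-2G)

def slotted_aloha_throughput(G):
    return G * math.exp(-G)          # S = G * e^(-G)

print(round(pure_aloha_throughput(0.5), 3))     # 0.184 -> about 18.4% at G = 1/2
print(round(slotted_aloha_throughput(1.0), 3))  # 0.368 -> about 36.8% at G = 1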


CSMA (Carrier Sense Multiple Access)


Carrier Sense Multiple Access (CSMA) is a media access protocol in which a station senses the traffic on the channel (idle or busy) before transmitting data. If the channel is idle, the station can send data onto the channel; otherwise, it must wait until the channel becomes idle. This reduces the chance of a collision on the transmission medium.

CSMA Access Modes or Persistence Methods:

What should a station do if the channel is busy? What should a station do if the channel is idle?
Three methods have been devised to answer these questions:

• 1-persistent method
• non-persistent method
• p-persistent method

1-Persistent: In the 1-persistent mode of CSMA, each node first senses the shared channel; if the channel is idle, it immediately sends the data. Otherwise, it keeps sensing the channel continuously and transmits the frame unconditionally (i.e., with probability 1) as soon as the channel becomes idle.

Non-Persistent: In this access mode of CSMA, each node senses the channel before transmitting data, and if the channel is idle, it immediately sends the data. Otherwise, the station waits for a random time (it does not sense continuously) and, when it then finds the channel idle, transmits the frame.

P-Persistent: This is a combination of the 1-persistent and non-persistent modes. In the p-persistent mode, each node senses the channel, and if the channel is idle, it sends a frame with probability p. With probability q = 1 - p, it defers to the next time slot and repeats the process.

O-Persistent: In the O-persistent method, a transmission order (priority) is assigned to the stations before transmission on the shared channel. When the channel is found to be idle, each station waits for its assigned turn to transmit the data.
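A toy Python simulation of the p-persistent rule above. The channel state, slot timing and the value p = 0.3 are all assumptions made for the illustration; back-off after a collision is not modelled.

import random

def p_persistent_attempt(channel_idle, p=0.3, max_slots=50):
    """Return the slot number in which the station finally transmits, or None."""
    for slot in range(max_slots):
        if not channel_idle():
            continue                 # channel busy: keep sensing
        if random.random() < p:
            return slot              # idle: transmit with probability p in this slot
        # with probability q = 1 - p, defer to the next time slot and sense again
    return None

# Toy channel: busy for the first 5 slots, idle afterwards
state = iter([False] * 5 + [True] * 100)
print("transmitted in slot", p_persistent_attempt(lambda: next(state)))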


CSMA/ CD

CSMA/CD (carrier sense multiple access with collision detection) is a network protocol for transmitting data frames that works with the medium access control layer. A station first senses the shared channel before broadcasting a frame; if the channel is idle, it transmits the frame and monitors whether the transmission was successful. If the frame is successfully received, the station can send the next frame. If a collision is detected, the station sends a jam (stop) signal onto the shared channel to terminate the data transmission. After that, it waits for a random time before attempting to send the frame again.

How CSMA/CD works?

• Step 1: Check if the sender is ready for transmitting data packets.

• Step 2: Check if the transmission link is idle?

The sender has to keep checking whether the transmission link/medium is idle. For this, it continuously senses transmissions from other nodes. The sender may send dummy data on the link; if it does not receive any collision signal, this means the link is idle at the moment. If it senses that the carrier is free and there are no collisions, it sends the data; otherwise it refrains from sending data.

Step 3: Transmit the data & check for collisions.

The sender transmits its data on the link. CSMA/CD does not use an acknowledgement system.

• It checks for the successful and unsuccessful transmissions through collision signals.
During transmission, if collision signal is received by the node, transmission is stopped.

• The station then transmits a jam signal onto the link and waits for random time interval
before it resends the frame. After some random time, it again attempts to transfer the data
and repeats above process.

• Step 4: If no collision was detected in propagation, the sender completes its frame
transmission and resets the counters.

CSMA/ CA

CSMA/CA (carrier sense multiple access with collision avoidance) is a network protocol for the transmission of data frames that works with the medium access control layer. When a station sends a data frame onto the channel, it listens to check whether its transmission was clear. If the station receives only a single signal (its own), the data frame has been successfully transmitted to the receiver. But if it gets two signals (its own and one from another station), the frames have collided on the shared channel. The sender thus learns whether its frame collided from whether it receives the acknowledgment signal.

Following are the methods used in the CSMA/ CA to avoid the collision:

Interframe space: In this method, the station waits for the channel to become idle, and when it finds the channel idle, it does not send the data immediately. Instead, it waits for a period of time called the interframe space (IFS). The IFS time is also often used to define the priority of the station.

Contention window: In this method, the total time is divided into slots. When the station/sender is ready to transmit the data frame, it chooses a random number of slots as its wait time. If the channel becomes busy again, it does not restart the entire process; it only pauses the timer and resumes it when the channel becomes idle again.

Acknowledgment: The positive acknowledgment from the receiver, together with a time-out timer, is used to ensure delivery: the sender retransmits the data frame if the acknowledgment is not received before the timer expires.

Types of Collision-Free Protocols


Bit – map Protocol


In bit map protocol, the contention period is divided into N slots, where N is the total number of
stations sharing the channel. If a station has a frame to send, it sets the corresponding bit in the
slot. So, before transmission, each station knows whether the other stations want to transmit.
Collisions are avoided by mutual agreement among the contending stations on who gets the
channel.

Transmission of frames in Bit-Map Protocol


Binary Countdown
This protocol overcomes the 1-bit-per-station overhead of the bit-map protocol. Here, binary addresses of equal length are assigned to each station. For example, if there are 6 stations, they may be assigned the binary addresses 001, 010, 011, 100, 101 and 110. All stations wanting to communicate broadcast their addresses. The station with the higher address gets the higher priority for transmitting.
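The arbitration itself works bit by bit: all contending stations broadcast their addresses starting with the high-order bit, the channel ORs the bits together, and a station gives up as soon as it sees a 1 in a position where its own bit is 0 (this is the standard binary countdown rule; the notes above describe only its outcome). A minimal Python sketch:

def binary_countdown(addresses, width=3):
    """Return the winning (highest) address among the contending stations."""
    contenders = set(addresses)
    for bit in range(width - 1, -1, -1):                   # high-order bit first
        channel = max((a >> bit) & 1 for a in contenders)  # wired-OR of the broadcast bits
        if channel == 1:
            # stations that sent a 0 but saw a 1 on the channel give up
            contenders = {a for a in contenders if (a >> bit) & 1 == 1}
    return max(contenders)

# Stations 001, 010, 100 and 110 contend; the highest address, 110, wins
print(bin(binary_countdown([0b001, 0b010, 0b100, 0b110])))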


Stations giving up in Binary Countdown


Limited Contention Protocols
These protocols combine the advantages of collision-based protocols and collision-free protocols. Under light load, they behave like the ALOHA scheme. Under heavy load, they behave like bit-map protocols.

Adaptive Tree Walk Protocol


In adaptive tree walk protocol, the stations or nodes are arranged in the form of a binary tree as
follows -

Initially all nodes (A, B, ..., G, H) are permitted to compete for the channel. If a node is successful in acquiring the channel, it transmits its frame. In case of collision, the nodes are divided into two groups (A, B, C, D in one group and E, F, G, H in the other). Nodes belonging to only one of the groups are then permitted to compete. This process continues until a successful transmission occurs.

STANDARD ETHERNET
The original Ethernet was created in 1976 at Xerox’s Palo Alto Research Center (PARC). Since
then, it has gone through four generations. We briefly discuss the Standard (or traditional) Ethernet
in this section. Ethernet is the most widely used LAN technology used today. Ethernet operates in
the data link layer and the physical layer. It is a family of networking technologies that are
defined in the IEEE 802.2 and 802.3 standards. Ethernet supports data bandwidths of:

 10 Mb/s
 100 Mb/s
 1000 Mb/s (1 Gb/s)
 10,000 Mb/s (10 Gb/s)
 40,000 Mb/s (40 Gb/s)
 100,000 Mb/s (100 Gb/s)

Figure Ethernet evolution through four generations

MAC Sublayer
In Standard Ethernet, the MAC sub layer governs the operation of the access method. It also
frames data received from the upper layer and passes them to the physical layer.

Frame Format
The Ethernet frame contains seven fields: preamble, SFD, DA, SA, length or type of protocol
data unit (PDU), upper-layer data, and the CRC.
Ethernet does not provide any mechanism for acknowledging received frames, making it what
is known as an unreliable medium. Acknowledgments must be implemented at the higher
layers. The format of the MAC frame is shown in the figure below.


Figure 802.3 MAC frame


Preamble. Alerts the receiving system to the coming frame and enables it to synchronize its input
timing. The preamble is actually added at the physical layer and is not (formally) part of the
frame.
Start frame delimiter (SFD). The second field (1 byte: 10101011) signals the beginning of the frame. The SFD warns the station or stations that this is the last chance for synchronization. The last 2 bits are 11 and alert the receiver that the next field is the destination address.
Destination address (DA). The DA field is 6 bytes and contains the physical address of the
destination station or stations to receive the packet.
Source address (SA). The SA field is also 6 bytes and contains the physical address of the
sender of the packet.
Length or type. The original Ethernet used this field as a type field to define the upper-layer protocol using the frame; the IEEE standard used it as a length field to define the number of bytes in the data field. Both uses are common today.
Data. This field carries data encapsulated from the upper-layer
protocols. It is a minimum of 46 and a maximum of 1500 bytes.
CRC. The last field contains error detection information, in this case a CRC-32
Frame Length
• Ethernet has imposed restrictions on both the minimum and maximum lengths of a
frame, as shown in below Figure


Figure. Minimum and maximum lengths


Addressing:
• The Ethernet address is 6 bytes (48 bits), normally written in hexadecimal notation, with
a colon between the bytes.

Figure Example of an Ethernet address in hexadecimal notation

Unicast, Multicast, and Broadcast Addresses:A source address is always a unicast address-the
frame comes from only one station. The destination address, however, can be unicast, multicast,
or broadcast. Below Figure shows how to distinguish a unicast address from a multicast address.
If the least significant bit of the first byte in a destination address is 0, the address is unicast;
otherwise, it is multicast. The broadcast destination address is a special case of the multicast
address in which all bits are 1s.
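A small Python sketch applying this rule to a destination address written in the colon-separated hexadecimal notation shown earlier; the sample addresses are illustrative.

def address_type(mac):
    """Classify a destination MAC address given as six colon-separated hex bytes."""
    if mac.lower() == "ff:ff:ff:ff:ff:ff":
        return "broadcast"                       # all 48 bits are 1s
    first_byte = int(mac.split(":")[0], 16)
    return "unicast" if first_byte & 0x01 == 0 else "multicast"

print(address_type("4A:30:10:21:10:1A"))   # unicast  (LSB of the first byte, 0x4A, is 0)
print(address_type("47:20:1B:2E:08:EE"))   # multicast (LSB of the first byte, 0x47, is 1)
print(address_type("FF:FF:FF:FF:FF:FF"))   # broadcast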

Unicast and multicast addresses


Categories of Standard Ethernet:

The Standard Ethernet defines several physical layer implementations; four of the most common,
are shown in Figure

Categories of Standard Ethernet


Encoding and Decoding:
• All standard implementations use digital signaling (baseband) at 10 Mbps.
• At the sender, data are converted to a digital signal using the Manchester scheme;
• At the receiver, the received signal is interpreted as Manchester and decoded into data.
• Manchester encoding is self-synchronous, providing a transition at each bit interval.
Figure shows the encoding scheme for Standard Ethernet


Figure Encoding in a Standard Ethernet implementation


lOBase5: Thick Ethernet: Thick Ethernet The first implementation is called 10Base5, thick
Ethernet, or Thicknet. The nickname derives from the size of the cable, which is roughly the size
of a garden hose and too stiff to bend with your hands. 10Base5 was the first Ethernet
specification to use a bus topology with an external transceiver (transmitter/receiver) connected
via a tap to a thick coaxial cable.

The transceiver is responsible for transmitting, receiving, and detecting collisions. The
transceiver is connected to the station via a transceiver cable that provides separate paths for
sending and receiving. This means that collision can only happen in the coaxial cable. The
maximum length of the coaxial cable must not exceed 500 m, otherwise, there is excessive
degradation of the signal. If a length of more than 500 m is needed, up to five segments, each a
maximum of 500-meter, can be connected using repeaters.
10Base2: Thin Ethernet
The second implementation is called 10Base2, thin Ethernet, or Cheapernet. 10Base2 also uses a
bus topology, but the cable is much thinner and more flexible. The cable can be bent to pass very
close to the stations. In this case, the transceiver is normally part of the network interface card
(NIC), which is installed inside the station.


1OBase-T: Twisted-Pair Ethernet:


• It uses a physical star topology. The stations are connected to a hub via two pairs of
twisted cable, as shown in Figure
• The maximum length of the twisted cable here is defined as 100 m, to minimize the
effect of attenuation in the twisted cable

Figure 10Base-T implementation


Although there are several types of fiber-optic 10-Mbps Ethernet, the most common is called 10Base-F.
• 10Base-F uses a star topology to connect stations to a hub. The stations are connected to
the hub using two fiber-optic cables, as shown in Figure

Figure 10Base-F implementation
FAST ETHERNET:
Fast Ethernet was designed to compete with LAN protocols such as FDDI or Fiber Channel.
IEEE created Fast Ethernet under the name 802.3u. Fast Ethernet is backward-compatible with
Standard Ethernet, but it can transmit data 10 times faster at a rate of 100 Mbps.

Figure Fast Ethernet implementations


GIGABIT ETHERNET


Figure: Topologies of Gigabit Ethernet

Figure. Gigabit Ethernet implementations

Summary of Gigabit Ethernet implementations

Summary of Ten-Gigabit Ethernet implementations

Data link Layer Switching


Uses of bridges:
• A bridge is a network device that connects multiple LANs (local area networks) together
to form a larger LAN.
• The process of aggregating networks is called network bridging. A bridge connects the
different components so that they appear as parts of a single network.

• By joining multiple LANs, bridges help in multiplying the network capacity of a single
LAN.

• Since bridges operate at the data link layer, they handle data as frames. On receiving a frame, the bridge consults its table to decide whether to forward or discard it.
  o If the frame's destination MAC (media access control) address is on the same network segment as the source, the bridge discards the frame, since the destination has already received it.
  o If the destination MAC address is on another connected segment, the bridge forwards the frame toward it.

• Key features of a bridge:
  o A bridge operates in both the physical and the data-link layer.
  o A bridge uses a table for filtering/routing.
  o A bridge does not change the physical (MAC) addresses in a frame.

Learning Bridges:
A bridge is a device that joins networks to create a much larger network. A learning bridge, also called an adaptive bridge, "learns" which network addresses are on one side of the bridge and which are on the other, so it knows how to forward the frames it receives.
The learning algorithm can be written in pseudocode as follows:

    if the destination address is in the table:
        forward the frame onto the port recorded for that address
    else:
        forward the frame onto every port except the one it arrived on,
        so that the destination is sure to receive it
    add an entry to the table linking the frame's source address to the
    port the frame arrived on
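The same logic can be written as a small Python class. This is a simplified sketch for illustration only (the class and method names are invented here, and a real bridge would also age out entries after a timeout):

    class LearningBridge:
        def __init__(self, ports):
            self.ports = ports     # e.g. [1, 2, 3]
            self.table = {}        # maps MAC address -> port number

        def receive(self, src, dst, in_port):
            # Learning: the source address tells us which port leads to src.
            self.table[src] = in_port
            # Forwarding: the destination address is used for the table lookup.
            if dst in self.table:
                out_port = self.table[dst]
                # If the destination is on the arrival port, filter the frame.
                return [] if out_port == in_port else [out_port]
            # Unknown destination: flood on every port except the arrival port.
            return [p for p in self.ports if p != in_port]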


A better solution to the static table is a dynamic table that maps addresses to ports automatically.
To make a table dynamic, we need a bridge that gradually learns from the frame movements. To
do this, the bridge inspects both the destination and the source addresses. The destination address
is used for the forwarding decision (table lookup); the source address is used for adding entries to
the table and for updating purposes. Let us elaborate on this process by using Figure

1. When station A sends a frame to station D, the bridge does not have an entry for either D or A.
The frame goes out from all three ports; the frame floods the network. However, by looking at

the source address, the bridge learns that station A must be located on the LAN connected to port
1. This means that frames destined for A, in the future, must be sent out through port 1. The
bridge adds this entry to its table. The table has its first entry now.

2. When station E sends a frame to station A, the bridge has an entry for A, so it forwards the
frame only to port 1. There is no flooding. In addition, it uses the source address of the frame, E,
to add a second entry to the table.

3. When station B sends a frame to C, the bridge has no entry for C, so once again it floods the
network and adds one more entry to the table.

4. The process of learning continues as the bridge forwards frames.
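Using the hypothetical LearningBridge class sketched earlier, the same steps can be traced in code. The port numbers for stations E and B are assumptions made only for this example (the text above does not state them):

    bridge = LearningBridge(ports=[1, 2, 3])

    # 1. A (port 1) sends to D: D is unknown, so the frame is flooded.
    print(bridge.receive(src="A", dst="D", in_port=1))   # [2, 3]

    # 2. E (assumed to be on port 3) sends to A: A is known, no flooding.
    print(bridge.receive(src="E", dst="A", in_port=3))   # [1]

    # 3. B (assumed to be on port 1) sends to C: C is unknown, flood again.
    print(bridge.receive(src="B", dst="C", in_port=1))   # [2, 3]

    print(bridge.table)   # {'A': 1, 'E': 3, 'B': 1}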

Loop Problem: Transparent bridges work fine as long as there are no redundant bridges in the
system. Systems administrators, however, like to have redundant bridges (more than one bridge
between a pair of LANs) to make the system more reliable. If a bridge fails, another bridge takes
over until the failed one is repaired or replaced.

Solution of the Loop Problem: To solve the looping problem, the IEEE specification requires that bridges use the spanning tree algorithm to create a loopless topology.

Spanning Tree Bridges:

• Redundant links are used to provide a backup path when one link goes down, but redundant links can sometimes cause switching loops.

• The main purpose of Spanning Tree Protocol (STP) is to ensure that you do not create
loops when you have redundant paths in your network.

• The Spanning Tree Protocol (STP) is a network protocol that builds a loop-free logical topology for Ethernet networks; in other words, it was created to prevent loops.

• In graph theory, a spanning tree is a graph in which there is no loop. In a bridged LAN,
this means creating a topology in which each LAN can be reached from any other LAN
through one path only (no loop). We cannot change the physical topology of the system because of the physical connections between cables and bridges, but we can create a logical topology that overlays the physical one. Figure 15.8 shows a system with four LANs and
five bridges.


We have shown the physical system and its representation in graph theory. We have
shown both LANs and bridges as nodes. The connecting arcs show the connection of a
LAN to a bridge and vice versa.

• To find the spanning tree, we need to assign a cost (metric) to each arc. The interpretation of the cost is left up to the systems administrator.

• It may be the path with minimum hops (nodes), the path with minimum delay, or the path with maximum bandwidth.

• If two ports have the same shortest value, the systems administrator just chooses one. We have chosen the minimum hops.

The process to find the spanning tree involves three steps:

• Every bridge has a built-in ID (normally the serial number, which is unique). Each bridge broadcasts this ID so that all bridges know which one has the smallest ID. The bridge with the smallest ID is selected as the root bridge (root of the tree). We assume that bridge B1 has the smallest ID. It is, therefore, selected as the root bridge.

• The algorithm tries to find the shortest path (a path with the shortest cost) from the root bridge to every other bridge or LAN. The shortest path can be found by examining the total cost from the root bridge to the destination. Figure shows the shortest paths.


• The combination of the shortest paths creates the shortest tree, which is also shown in the figure.

• Based on the spanning tree, we mark the ports that are part of the spanning tree, the forwarding ports, which forward a frame that the bridge receives. We also mark the ports that are not part of the spanning tree, the blocking ports, which block the frames received by the bridge. Figure 15.10 shows the physical system of LANs with forwarding ports (solid lines) and blocking ports (broken lines).

Note that there is only one path from any LAN to any other LAN in the spanning tree system, so no loops are created. You can verify for yourself that there is only one path from LAN 1 to LAN 2, LAN 3, or LAN 4; only one path from LAN 2 to LAN 1, LAN 3, and LAN 4; and the same is true for LAN 3 and LAN 4.
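In the real protocol the bridges compute this tree in a distributed way by exchanging configuration messages; the sketch below is only a centralised illustration of the same three steps (elect the bridge with the smallest ID as root, find shortest paths, keep the tree edges as forwarding ports). All names, and the unit hop cost, are assumptions for this example:

    import heapq

    def spanning_tree(nodes, edges, bridge_ids):
        # nodes:      names of bridges and LANs, e.g. ["B1", "LAN1", ...]
        # edges:      list of (node_a, node_b, cost) physical connections
        # bridge_ids: {bridge_name: numeric_id}; the smallest ID wins the election
        root = min(bridge_ids, key=bridge_ids.get)       # step 1: root bridge

        adj = {n: [] for n in nodes}                     # physical topology
        for a, b, cost in edges:
            adj[a].append((b, cost))
            adj[b].append((a, cost))

        # Step 2: shortest paths from the root (Dijkstra); remember the edge
        # used to reach each node.  Step 3: those edges form the spanning tree;
        # ports on them are forwarding, all other bridge ports are blocking.
        dist, via = {root: 0}, {}
        heap = [(0, root)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue
            for v, cost in adj[u]:
                if d + cost < dist.get(v, float("inf")):
                    dist[v] = d + cost
                    via[v] = (u, v)
                    heapq.heappush(heap, (d + cost, v))
        return root, set(via.values())

    nodes = ["B1", "B2", "B3", "LAN1", "LAN2"]
    edges = [("B1", "LAN1", 1), ("B1", "LAN2", 1), ("B2", "LAN1", 1),
             ("B2", "LAN2", 1), ("B3", "LAN1", 1), ("B3", "LAN2", 1)]
    root, tree = spanning_tree(nodes, edges, {"B1": 1, "B2": 2, "B3": 3})
    print(root)   # B1 (smallest ID)
    print(tree)   # only the edges on shortest paths from B1 are kept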

Repeaters, Hubs, Bridges, Switches, Routers, and Gateways

In this section, we divide connecting devices into five different categories based on the layer in
which they operate in a network, as shown in Figure 15.1. The five categories contain devices
which can be defined as in Table 1:


1. Repeaters

• A repeater operates at the physical layer. Its job is to regenerate the signal over the same
network before the signal becomes too weak or corrupted.

• An important point to note about repeaters is that they do not amplify the signal. When the signal becomes weak, they copy it bit by bit and regenerate it at the original strength. A repeater is a two-port device.

• A repeater receives a signal and, before it becomes too weak or corrupted, regenerates the
original bit pattern. The repeater then sends the refreshed signal.

• A repeater does not actually connect two LANs; it connects two segments of the same
LAN. The segments connected are still part of one single LAN. A repeater is not a device
that can connect two LANs of different protocols

• The repeater acts as a two-port node, but operates only in the physical layer. When it
receives a frame from any of the ports, it regenerates and forwards it to the other port. A
repeater forwards every frame; it has no filtering capability.

• A repeater connects different segments of a LAN

• A repeater forwards every bit it receives

• A repeater is a regenerator, not an amplifier

• It can be used to create a single extended LAN

2. Hubs

• A hub is basically a multiport repeater. A hub connects multiple wires coming from
different branches, for example, the connector in star topology which connects different
stations.

• Hubs cannot filter data, so data packets are sent to all connected devices.

• Hub is a generic term, but commonly refers to a multiport repeater. It can be used to create multiple levels of hierarchy of stations.


Important Points about a Hub

• A hub works on the Physical Layer of the OSI model.
• A hub is a broadcast device.
• A hub is used to connect devices in the same network.
• A hub sends data in the form of binary bits.
• A hub works only in half duplex.
• Only one device can send data at a time.
• A hub does not store any MAC address or IP address.

3. Bridge:

A bridge is a repeater with the added functionality of filtering content by reading the source and destination MAC addresses.

• It is also used for interconnecting two LANs working on the same protocol. It has a single input port and a single output port, making it a two-port device.

• A bridge operates in both the physical and the data link layer. As a physical layer device, it regenerates the signal it receives. As a data link layer device, the bridge can check the physical (MAC) addresses (source and destination) contained in the frame.

4. Switch

Switch – A switch is a multiport bridge with a buffer and a design that can boost its efficiency (a large number of ports implies less traffic) and performance. A switch is a data link layer device. It can perform error checking before forwarding data, which makes it very efficient: it does not forward frames that contain errors and forwards good frames selectively to the correct port only. In other words, a switch divides the collision domain of hosts, but the broadcast domain remains the same.

• A switch is a device that connects other devices together. Multiple data cables are
plugged into a switch to enable communication between different networked devices. A
switch is a data link layer device.


A switch is essentially a fast bridge with additional sophistication that allows faster processing of frames. Some of its important functionalities are:

• Ports are provided with buffers.
• The switch maintains a directory: #address - port#.
• Each frame is forwarded to the proper port# after examining its #address.
• There are three possible forwarding approaches: cut-through, collision-free, and fully buffered, as briefly explained below.

Cut-through: The switch forwards a frame immediately after receiving the destination address. As a consequence, the switch forwards the frame without collision or error detection.

Collision-free: In this case, the switch forwards the frame after receiving 64 bytes, which allows detection of a collision. However, error detection is not possible because the switch is yet to receive the entire frame.

Fully buffered: In this case, the switch forwards the frame only after receiving the entire frame, so it can detect both collisions and errors; only error-free frames are forwarded.
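The practical difference between the three approaches is how much of the frame the switch buffers before it starts forwarding. The sketch below is a toy illustration of that decision only (the function name is invented; real switches implement this in hardware):

    def bytes_buffered_before_forwarding(mode, frame_length):
        if mode == "cut-through":
            # The 6-byte destination address is the first field of the frame,
            # so forwarding can start almost immediately; no error detection.
            return 6
        if mode == "collision-free":
            # 64 bytes is the minimum frame size, so collision fragments
            # (runts) are caught, but the CRC has not arrived yet.
            return 64
        if mode == "fully-buffered":
            # Store-and-forward: the whole frame is received, so both
            # collisions and CRC errors can be detected before forwarding.
            return frame_length
        raise ValueError("unknown forwarding mode")

    for mode in ("cut-through", "collision-free", "fully-buffered"):
        print(mode, bytes_buffered_before_forwarding(mode, frame_length=1500))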


5. Routers: A router is a device that, like a switch, routes data packets, but it does so based on their IP addresses. A router is mainly a network layer device. Routers normally connect LANs and WANs together and maintain a dynamically updated routing table, based on which they make decisions on routing the data packets.

6. Gateways:

• A gateway is a protocol converter.

• A gateway is a hardware device that acts as a "gate" between two networks. It may be a router, firewall, server, or other device that enables traffic to flow in and out of the network.

• It operates in all seven layers of the OSI model.


• A gateway can accept a packet formatted for one protocol (e.g., TCP/IP) and convert it to a packet formatted for another protocol (e.g., AppleTalk).

• The gateway must adjust the data rate, size, and data format. A gateway is generally software installed within a router.

Difference between Hub, Switch and Router:

Hub                                           | Switch                                          | Router
Works on the Physical Layer of the OSI model  | Works on the Data Link Layer of the OSI model   | Works on the Network Layer of the OSI model
A broadcast device                            | A multicast device                              | A routing device used to create routes for transmitting data packets
Used to connect devices in the same network   | Used to connect devices in the same network     | Used to connect two or more different networks
Sends data in the form of binary bits         | Sends data in the form of frames                | Sends data in the form of packets
Works only in half duplex                     | Works in full duplex                            | Works in full duplex
Only one device can send data at a time       | Multiple devices can send data at the same time | Multiple devices can send data at the same time
Does not store any MAC or IP address          | Stores MAC addresses                            | Stores IP addresses
