Computer Networks Module 2 Final
Module-2
The Data link layer: Design issues of DLL, Error detection and correction,
Elementary data link protocols, Sliding window protocols. The medium access
control sublayer: The channel allocation problem, multiple access protocols.
Textbook 1: Ch.3.1 to 3.4, Ch.4.1 and 4.2
Data Link layer Design Issues:
The data link layer has a number of specific functions.
1. DATA LINK CONTROL (DLC): It deals with the design and procedures for communication
between two adjacent nodes: node-to-node communication.
Data link control functions include:
(1) Framing.
(2) Error Control.
(3) Flow Control.
To accomplish these goals, the data link layer takes the packets it gets from the network
layer and encapsulates them into frames for transmission.
(1) FRAMING
The frame contains
1. Frame header
2. Payload field for holding packet
3. Frame trailer as illustrated in Fig.3-1.
The function of the data link layer is to provide services to the network layer. The
principal service is transferring data from the network layer on the source machine to the
network layer on the destination machine. On the source machine is an entity, call it a process, in
the network layer that hands some bits to the data link layer for transmission to the destination.
The job of the data link layer is to transmit the bits to the destination machine so they can be handed over to the
network layer there, as shown in Fig. 3-2(a).
The actual transmission follows the path of Fig. 3-2(b), but it is easier to think in terms of
two data link layer processes communicating using a data link protocol.
o Unacknowledged connectionless service consists of having the source machine send
independent frames to the destination machine without having the destination
acknowledge them.
o If a frame is lost due to noise on the line, no attempt is made to detect the loss or
recover from it in the data link layer.
o This class of service is appropriate when the error rate is very low, so that recovery
is left to higher layers.
o Most LANs use unacknowledged connectionless service in the data link layer.
o Example: voice traffic.
THREE PHASES:
When connection-oriented service is used, transfers go through three distinct
phases.
1. Connection established:
The connection is established by having both sides initialize variables and
counters needed to keep track of which frames have been received and which
ones have not.
2. Frames are transmitted:
One or more frames are actually transmitted.
3. Connection released:
Freeing up the variables, buffers, and other resources used to maintain the
connection.
Framing:
Breaking the bit stream up into frames is more difficult than it at first appears. One way
to achieve this framing is to insert time gaps between frames, much like the spaces
between words in ordinary text. However, networks rarely make any guarantees about
timing, so it is possible these gaps might be squeezed out or other gaps might be inserted
during transmission.
o To provide service to the network layer, the data link layer must use the service
provided to it by the physical layer.
o The physical layer accepts a raw bit stream and attempts to deliver it to the destination.
o This bit stream is not guaranteed to be error free.
o The number of bits received may be less than, equal to, or more than the number
of bits transmitted, and they may have different values.
o It is up to the data link layer to detect and, if necessary, correct errors.
The usual approach is for the data link layer to break the bit stream up into discrete frames and
compute a checksum for each frame.
When a frame arrives at the destination, the checksum is recomputed.
If the newly-computed checksum is different from the one contained in the frame, the
data link layer knows that an error has occurred and takes steps to deal with it.
Since it is too risky to count on timing to mark the start and end of each frame, other
methods have been devised.
1. Character Count:
The first framing method uses a field in the header to specify the number of
characters in the frame. When the data link layer at the destination sees the character
count, it knows how many characters follow and hence where the end of the frame is.
This technique is shown in Fig. 3-4(a) for four frames of sizes 5, 5, 8, and 8 characters,
respectively.
The trouble with this algorithm is that the count can be garbled by a transmission error.
Even if the checksum is incorrect so the destination knows that the frame is bad, it still
has no way of telling where the next frame starts.
Sending a frame back to the source asking for a retransmission does not help either, since
the destination does not know how many characters to skip over to get to the start of the
retransmission. For this reason, the character count method is rarely used anymore.
2. Flag bytes with byte stuffing:
The second framing method gets around the problem of resynchronization after an error by having
each frame start and end with special bytes.
The header normally carries the source and destination addresses and other control
information; the trailer carries error detection or error correction redundant bits. Both
are multiples of 8 bits.
To separate one frame from the next, an 8-bit (1-byte) flag is added at the beginning
and the end of a frame.
The flag, composed of protocol-dependent special characters, signals the start or end
of a frame.
Advantage:
1. Simple framing method.
2. Character-oriented framing was popular when only text was exchanged by the data
link layers.
3. The flag could be selected to be any character not used for text communication.
Disadvantage:
1. Even with a checksum, when the receiver knows that the frame is bad there is no way to tell
where the next frame starts.
2. Asking for retransmission does not help either, because the start of the retransmitted
frame is not known.
3. Hence it is no longer used.
Byte stuffing is the process of adding 1 extra byte whenever there is a flag or escape character in
the text. In the past, the starting and ending bytes were different, but in recent years most
protocols have used the same byte, called a flag byte, as both the starting and ending delimiter, as
shown in Fig. 3-5(a) as FLAG.
In this way, if the receiver ever loses synchronization, it can just search for the flag byte to find
the end of the current frame.
Two consecutive flag bytes indicate the end of one frame and start of the next one.
A serious problem occurs with this method when binary data, such as object
programs or floating-point numbers, are being transmitted.
It may easily happen that the flag byte's bit pattern occurs in the data. This
situation will usually interfere with the framing.
One way to solve this problem is to have the sender's data link layer insert a
special escape byte (ESC) just before each ''accidental'' flag byte in the data.
The data link layer on the receiving end removes the escape byte before the data
are given to the network layer. This technique is called byte stuffing or character
stuffing.
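The stuffing and unstuffing steps described above can be sketched in Python (an illustrative sketch; the concrete flag and escape values 0x7E and 0x7D are assumptions, not taken from the text):

```python
FLAG = 0x7E  # assumed flag byte value for illustration
ESC = 0x7D   # assumed escape byte value

def byte_stuff(payload: bytes) -> bytes:
    """Insert ESC before every accidental FLAG or ESC byte in the data."""
    out = bytearray()
    for b in payload:
        if b in (FLAG, ESC):
            out.append(ESC)
        out.append(b)
    return bytes(out)

def byte_unstuff(stuffed: bytes) -> bytes:
    """Remove each inserted ESC; the byte following an ESC is literal data."""
    out = bytearray()
    it = iter(stuffed)
    for b in it:
        if b == ESC:
            b = next(it)  # take the escaped byte as data
        out.append(b)
    return bytes(out)

data = bytes([0x01, FLAG, 0x02, ESC, 0x03])
assert byte_unstuff(byte_stuff(data)) == data  # round trip recovers the data
```

The receiver applies the unstuffing step before handing the payload to the network layer, so a flag byte inside the data never interferes with framing.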
A major disadvantage of using this framing method is that it is closely tied to the use of 8-bit
characters. Not all character codes use 8-bit characters.
For example, UNICODE uses 16-bit characters, so a new technique had to be developed
to allow arbitrary-sized characters.
3. Flag bits with bit stuffing:
Bit stuffing is the process of adding one extra 0 whenever five consecutive 1s follow a 0
in the data, so that the receiver never mistakes stuffed data for the flag 01111110.
Most protocols use a special 8-bit pattern flag 01111110 as the delimiter to define the
beginning and the end of the frame, as shown in Figure below.
This flag can create the same type of problem. That is, if the flag pattern appears in the
data, we need to somehow inform the receiver that this is not the end of the frame.
We do this by stuffing 1 single bit (instead of 1 byte) to prevent the pattern from looking
like a flag. The strategy is called bit stuffing.
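The bit stuffing rule can be sketched as follows (an illustrative sketch operating on bit strings; a real implementation works on the serial bit stream):

```python
def bit_stuff(bits: str) -> str:
    """Insert a 0 after every run of five consecutive 1s in the data."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == '1' else 0
        if run == 5:
            out.append('0')  # the stuffed bit
            run = 0
    return ''.join(out)

def bit_unstuff(bits: str) -> str:
    """Drop the 0 that the sender inserted after each run of five 1s."""
    out, run, skip = [], 0, False
    for b in bits:
        if skip:            # this is the stuffed 0: discard it
            skip = False
            continue
        out.append(b)
        run = run + 1 if b == '1' else 0
        if run == 5:
            skip = True
            run = 0
    return ''.join(out)

data = '0111111111'
assert bit_stuff(data) == '01111101111'   # a 0 is stuffed after five 1s
assert bit_unstuff(bit_stuff(data)) == data
```

Because the stuffed data can never contain six consecutive 1s, the flag 01111110 appears only at frame boundaries.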
(2) ERROR CONTROL
How do we make sure that all frames are eventually delivered to the network layer at
the destination and in the proper order?
Provide sender with some acknowledgement about what is happening with the
receiver.
Sender could wait for acknowledgement.
Disadvantages:
If a frame vanishes, the receiver will not send an acknowledgement; thus, the sender will wait
forever.
This is dealt with by timers and sequence numbers – an important part of the DLL.
Sender transmits a frame, starts a timer.
Timer set to expire after interval long enough for frame to reach destination, be
processed, and have acknowledgement sent to sender.
There is a danger of a frame being transmitted several times; however, this is dealt with by
assigning sequence numbers to outgoing frames, so that the receiver can distinguish
retransmissions from originals.
(3) FLOW CONTROL
What do we do when a sender transmits frames faster than the receiver can accept them?
Feedback-based flow control – the receiver sends back information to the sender, giving it
permission to send more data or at least telling the sender how the receiver is doing.
Rate-based flow control – the protocol has a built-in mechanism that limits the rate at which
the sender may transmit data, without using feedback from the receiver.
Because of attenuation, distortion, noise, and interference, errors during transmission are
inevitable, leading to corruption of transmitted bits.
The longer the frame and the higher the probability of a single-bit error, the lower is the
probability of receiving a frame without error.
ERROR:
When data is being transmitted from one machine to another, it is possible that the data
become corrupted on the way. Some of the bits may be altered, damaged, or lost during
transmission. Such a condition is known as an error.
TYPES OF ERRORS:
Single bit error: Only one bit gets corrupted. Common in Parallel transmission.
Burst error: More than one bit gets corrupted. It is very common in serial transmission of data
and occurs when the duration of noise is longer than the duration of one bit.
The noise affects data; it affects a set of bits.
The number of bits affected depends on the data rate and duration of noise.
Redundancy is the method in which some extra bits are added to the data so as to check whether the
data contain error or not.
m - data bits (i.e., message bits)
r - redundant bits (or check bits)
n - total number of bits, n = m + r
An n-bit unit containing data and check bits is often referred to as an n-bit codeword.
SIMPLE PARITY CHECKING:
A parity bit of 1 is added to the block if it contains an odd number of 1s (ON bits), and 0 is added
if it contains an even number of 1s.
At the receiving end the parity bit is computed from the received data bits and compared with
the received parity bit.
This scheme makes the total number of 1’s even, that is why it is called even parity checking.
Considering a 4-bit word, different combinations of the data words and the corresponding
codewords are given in Table 3.2.1.
Table 3.2.1
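The even parity rule can be illustrated in Python (a sketch; the sample 4-bit data words are assumed, since the table itself is not reproduced here):

```python
def even_parity_bit(word: str) -> str:
    """Parity bit that makes the total number of 1s in the codeword even."""
    return '1' if word.count('1') % 2 == 1 else '0'

def parity_check(codeword: str) -> bool:
    """Receiver side: accept the codeword if its count of 1s is even."""
    return codeword.count('1') % 2 == 0

for word in ('0000', '0101', '0111', '1111'):
    cw = word + even_parity_bit(word)
    assert parity_check(cw)            # transmitted codeword passes
    flipped = ('1' if cw[0] == '0' else '0') + cw[1:]
    assert not parity_check(flipped)   # any single-bit error is detected
```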
TWO-DIMENSIONAL PARITY CHECKING:
Performance can be improved by using two dimensional parity checks, which organizes the
block of bits in the form of table.
Parity check bits are calculated from each row, which is equivalent to a simple parity check.
Parity check bits are also calculated for all columns.
Both are sent along with the data.
At the receiving end these are compared with the parity bits calculated on the received data.
Performance:
If two bits in one data unit are damaged and two bits in exactly the same positions in another
data unit are also damaged, the 2-D parity checker will not detect the error.
For example, consider the two data units 11001100 and 10101100.
If the first and the second-from-last bits in each of them are changed, making the data units
01001110 and 00101110, the error cannot be detected by the 2-D parity check.
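The undetected error pattern of this example can be verified directly (a sketch using the two data units from the text; helper names are illustrative):

```python
def row_parities(rows):
    """Even parity of each row (data unit)."""
    return [r.count('1') % 2 for r in rows]

def column_parities(rows):
    """Even parity computed down each column of the block."""
    return [sum(int(r[i]) for r in rows) % 2 for i in range(len(rows[0]))]

original = ['11001100', '10101100']
damaged  = ['01001110', '00101110']  # same two bit positions flipped in BOTH rows

# Two flips per row leave every row parity unchanged, and two flips per
# affected column leave every column parity unchanged, so the 2-D check
# cannot detect this error pattern.
assert row_parities(original) == row_parities(damaged)
assert column_parities(original) == column_parities(damaged)
```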
CHECKSUM:
This is a block code method where a checksum is created based on the data values in the data
blocks to be transmitted using some algorithm and appended to the data. When the receiver
gets this data, a new checksum is calculated and compared with the existing checksum. A
non-match indicates an error.
Example 2:
Suppose that the sender wants to send 4 frames each of 8 bits, where the frames are 11001100,
10101010, 11110000 and 11000011.
The sender adds the bits using 1s complement arithmetic. While adding two numbers using 1s
complement arithmetic, if there is a carry over, it is added to the sum.
After adding all the 4 frames, the sender complements the sum to get the checksum, 11010011,
and sends it along with the data frames.
The receiver performs 1s complement arithmetic sum of all the frames including the checksum.
The result is complemented and found to be 0. Hence, the receiver assumes that no error has
occurred.
CYCLIC REDUNDANCY CHECK (CRC):
One of the most powerful and commonly used error-detecting codes.
Basic approach:
Given an m-bit block of bit sequence, the sender generates an n-bit sequence known as the frame
check sequence (FCS), so that the resulting frame, consisting of m+n bits, is exactly divisible by
some predetermined number.
The receiver divides the incoming frame by that number and, if there is no remainder,
assumes there was no error.
CRC Generator:
Unlike checksum scheme, which is based on addition, CRC is based on binary division.
In CRC, a sequence of redundant bits, called cyclic redundancy check bits, are appended to the
end of data unit so that the resulting data unit becomes exactly divisible by a second,
predetermined binary number.
At the destination, the incoming data unit is divided by the same number. If at this step there is
no remainder, the data unit is assumed to be correct and is therefore accepted.
A remainder indicates that the data unit has been damaged in transit and therefore must be
rejected.
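The modulo-2 division underlying CRC can be sketched as follows (the dataword 100100 and divisor 1101 are illustrative values, not taken from the text):

```python
def crc_remainder(data: str, divisor: str) -> str:
    """Modulo-2 (XOR) long division; returns the n-bit remainder."""
    n = len(divisor) - 1
    bits = [int(b) for b in data + '0' * n]  # append n zero bits
    div = [int(b) for b in divisor]
    for i in range(len(data)):
        if bits[i]:  # current leading bit is 1: XOR the divisor in here
            for j, d in enumerate(div):
                bits[i + j] ^= d
    return ''.join(str(b) for b in bits[-n:])

rem = crc_remainder('100100', '1101')
assert rem == '001'
codeword = '100100' + rem
# the receiver divides the whole codeword; a zero remainder means no error
assert crc_remainder(codeword, '1101') == '000'
```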
ERROR-CORRECTING CODES:
1. Hamming codes:
All of these codes add redundancy to the information that is sent. A frame consists of m data (i.e.,
message) bits and r redundant (i.e. check) bits. In a block code, the r check bits are computed
solely as a function of the m data bits with which they are associated, as though the m bits were
looked up in a large table to find their corresponding r check bits.
In a systematic code, the m data bits are sent directly, along with the check bits, rather than being
encoded themselves before they are sent.
In a linear code, the r check bits are computed as a linear function of the m data bits. Exclusive
OR (XOR) or modulo-2 addition is a popular choice. This means that encoding can be done with
simple linear operations on the data bits.
Let the total length of a block be n (i.e., n = m + r). We will describe this as an (n, m) code. An n-bit
unit containing data and check bits is referred to as an n-bit codeword.
The code rate, or simply rate, is the fraction of the codeword that carries information that is not
redundant, or m/n. The rates used in practice vary widely.
To understand how errors can be handled, it is necessary to first look closely at what an error
really is. Given any two codewords that may be transmitted or received, say 10001001 and
10110001, it is possible to determine how many corresponding bits differ. In this case, 3 bits
differ. To determine how many bits differ, just XOR the two codewords and count the number of
1 bits in the result.
For example:
10001001
10110001
------------
00111000
------------
The number of bit positions in which two codewords differ is called the Hamming distance
(Hamming, 1950). Its significance is that if two codewords are a Hamming distance d apart, it will
require d single-bit errors to convert one into the other.
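This XOR-and-count procedure can be written directly:

```python
def hamming_distance(a: str, b: str) -> int:
    """XOR the two codewords and count the 1 bits in the result."""
    return bin(int(a, 2) ^ int(b, 2)).count('1')

assert hamming_distance('10001001', '10110001') == 3  # as computed above
```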
In Hamming codes the bits of the codeword are numbered consecutively, starting with bit 1 at the
left end, bit 2 to its immediate right, and so on. The bits that are powers of 2 (1, 2, 4, 8, 16, etc.) are
check bits. The rest (3, 5, 6, 7, 9, etc.) are filled up with the m data bits. This pattern is shown for
an (11,7) Hamming code with 7 data bits and 4 check bits in Fig. 3-6
A single additional bit can detect an error, but it is not sufficient to correct it. For
correcting an error one has to know the exact position of the error, i.e., exactly which
bit is in error (to locate the invalid bit).
For example, to correct a single-bit error in an ASCII character, the error correction must
determine which one of the seven bits is in error. To do this, we have to add some additional
redundant bits.
To calculate the number of redundant bits (r) required to correct d data bits, let us find the
relationship between the two. We have (d + r) as the total number of bits to be transmitted;
then r must be able to indicate at least d + r + 1 different values. Of these, one value means no
error, and the remaining d + r values indicate the location of an error in each of the d + r
positions. So d + r + 1 states must be distinguishable by r bits, and r bits can distinguish 2^r
states; hence 2^r must be greater than or equal to d + r + 1.
The value of r must be determined by putting in the value of d in the relation. For example, if d is 7,
then the smallest value of r that satisfies the above relation is 4. So the total bits, which are to be
transmitted is 11 bits (d+r = 7+4 =11).
Now let us examine how we can manipulate these bits to discover which bit is in error. A technique
developed by R.W. Hamming provides a practical solution. The solution or coding scheme he
developed is commonly known as Hamming Code.
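The construction described above (check bits at the power-of-2 positions, each giving even parity over the positions whose binary expansion contains it) can be sketched as follows; the encoder and the sample data word are illustrative:

```python
def hamming_encode(data: str) -> str:
    """Build a Hamming codeword: data bits fill the non-power-of-2
    positions (numbered from 1 at the left); each check bit at position
    p = 2^i gives even parity over the positions containing p."""
    m = len(data)
    r = 0
    while 2 ** r < m + r + 1:   # smallest r with 2^r >= m + r + 1
        r += 1
    n = m + r
    code = [0] * (n + 1)        # index 0 unused; positions 1..n
    it = iter(data)
    for pos in range(1, n + 1):
        if pos & (pos - 1):     # not a power of 2 -> data bit position
            code[pos] = int(next(it))
    for i in range(r):
        p = 2 ** i
        parity = 0
        for pos in range(1, n + 1):
            if (pos & p) and pos != p:
                parity ^= code[pos]
        code[p] = parity
    return ''.join(map(str, code[1:]))

cw = hamming_encode('1011010')  # 7 data bits -> r = 4, an 11-bit codeword
assert len(cw) == 11
```

For m = 7 the loop finds r = 4, matching the (11,7) code of Fig. 3-6.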
3. Reed-Solomon codes:
ELEMENTARY DATA LINK PROTOCOLS:
To start with, we assume that the physical layer, data link layer, and network layer are
independent processes that communicate by passing messages back and forth. A common
implementation is shown in Fig. 3-10.
The physical layer process and some of the data link layer process run on dedicated
hardware called a NIC (Network Interface Card); alternatively, these processes can be
offloaded to dedicated hardware called a network accelerator.
The rest of the link layer process and the network layer process run on the main CPU as
part of the operating system, with the software for the link layer process often taking the
form of a device driver.
Another key assumption is that machine A wants to send a long stream of data to
machine B, using a reliable, connection-oriented service. Later, we will consider the case
where B also wants to send data to A simultaneously. A is assumed to have an infinite
supply of data ready to send and never has to wait for data to be produced. Instead, when
A’s data link layer asks for data, the network layer is always able to comply immediately.
As far as the data link layer is concerned, the packet passed across the interface to it from
the network layer is pure data, whose every bit is to be delivered to the destination’s
network layer. The fact that the destination’s network layer may interpret part of the
packet as a header is of no concern to the data link layer.
The procedure wait_for_event only returns when something has happened (e.g., a frame has
arrived). Upon return, the variable event tells what happened. The set of possible events differs
for the various protocols to be described and will be defined separately for each protocol.
When a frame arrives at the receiver, the checksum is recomputed. If the checksum in the
frame is incorrect (i.e., there was a transmission error), the data link layer is so informed
(event cksum_err). If the inbound frame arrived undamaged, the data link layer is also
informed (event frame_arrival) so that it can acquire the frame for inspection using
from_physical_layer.
A frame is composed of four fields: kind, seq, ack, and info, the first three of which contain
control information and the last of which may contain actual data to be transferred. These
control fields are collectively called the frame header.
The most important responsibilities of the data link layer are flow control and error control.
Collectively, these functions are known as data link control.
Flow control refers to a set of procedures used to restrict the amount of data that the sender can
send before waiting for acknowledgment.
Error control in the data link layer is based on automatic repeat request (ARQ), which is the
retransmission of data.
The data link layer can combine framing, flow control, and error control to achieve the
delivery of data from one node to another.
The protocols are normally implemented in software by using one of the common
programming languages.
To make our discussions language-free, we have written in pseudo code a version of each
protocol that concentrates mostly on the procedure instead of delving into the details of
language rules.
Taxonomy of protocols:
NOISELESS CHANNELS:
Simplest Protocol
Stop-and-Wait Protocol
We have an ideal channel in which no frames are lost, duplicated or corrupted. We introduce two
protocols for this type of channel.
It is assumed that both the sender and the receiver are always ready for data processing and both
of them have infinite buffers. The sender simply sends all its data onto the channel as soon as
they are available in its buffer.
The receiver is assumed to process all incoming data instantly. It does not handle flow control
or error control. Since this protocol is totally unrealistic, it is often called the Utopian Simplex
protocol.
Protocol-2:
A SIMPLEX STOP-AND-WAIT PROTOCOL FOR AN ERROR-FREE CHANNEL:
The Stop-and-Wait protocol is a data link layer protocol for the transmission of frames over
noiseless channels. It provides unidirectional data transmission with flow control
facilities but without error control facilities.
This protocol takes into account the fact that the receiver has a finite processing speed. If
data frames arrive at the receiver's end at a rate greater than its rate of processing,
frames may be dropped.
In order to avoid this, the receiver sends an acknowledgement for each frame upon its
arrival. The sender sends the next frame only when it has received a positive
acknowledgement from the receiver that it is available for further data processing.
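The send-then-wait behaviour can be sketched as a deliberately simplified simulation (the channel is assumed error-free, as in this protocol; names are illustrative):

```python
def receive(frame, delivered):
    delivered.append(frame)  # receiver processes the frame...
    return 'ACK'             # ...and then acknowledges it

def stop_and_wait_send(frames):
    delivered = []
    for frame in frames:
        ack = receive(frame, delivered)  # transmit, then wait for the ACK
        assert ack == 'ACK'              # only now may the next frame go
    return delivered

assert stop_and_wait_send(['f0', 'f1', 'f2']) == ['f0', 'f1', 'f2']
```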
Flow diagram:
NOISY CHANNELS:
Error correction in Stop-and-Wait ARQ is done by keeping a copy of the sent frame and
retransmitting the frame when the timer expires.
Protocol-4:
Stop-and-Wait ARQ (Automatic Repeat Request)
The Simplex Stop-and-Wait protocol for a noisy channel is a data link layer protocol for data
communication with error control and flow control mechanisms. It is popularly known as the
Stop-and-Wait Automatic Repeat Request (Stop-and-Wait ARQ) protocol. It adds error
control facilities to the Stop-and-Wait protocol.
In Stop-and-Wait ARQ, we use sequence numbers to number the frames. The sequence
numbers are based on modulo-2 arithmetic.
In Stop-and-Wait ARQ, the acknowledgment number always announces in modulo-2
arithmetic the sequence number of the next frame expected.
The above figure shows an example of Stop-and-Wait ARQ. Frame 0 is sent and acknowledged.
Frame 1 is lost and resent after the time-out. The resent frame 1 is acknowledged and the timer
stops. Frame 0 is sent and acknowledged, but the acknowledgment is lost. The sender has no idea
if the frame or the acknowledgment is lost, so after the time-out, it resends frame 0, which is
acknowledged.
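The retransmission behaviour in this example can be sketched as a small simulation (illustrative; frame loss and acknowledgment loss are modelled together as one lost transmission, since both lead to the same timeout):

```python
def send_with_arq(packets, lost_attempts):
    """Illustrative Stop-and-Wait ARQ: a transmission whose attempt index
    is in lost_attempts is lost (frame or ACK), causing a timeout and a
    retransmission; modulo-2 sequence numbers weed out duplicates."""
    delivered = []
    expected = 0       # receiver: sequence number of next expected frame
    seq = 0            # sender: sequence number of the current frame
    attempt = 0
    for packet in packets:
        while True:
            lost = attempt in lost_attempts
            attempt += 1
            if lost:
                continue               # timer expires -> retransmit
            if seq == expected:        # new frame: deliver it
                delivered.append(packet)
                expected = (expected + 1) % 2
            break                      # duplicates are discarded but ACKed
        seq = (seq + 1) % 2
    return delivered

# losing one transmission only delays it; nothing is duplicated or lost
# from the network layer's point of view
assert send_with_arq(['p0', 'p1', 'p2'], {1}) == ['p0', 'p1', 'p2']
```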
SLIDING WINDOW PROTOCOLS:
In this protocol, multiple frames can be sent by a sender at a time before receiving an
acknowledgment from the receiver. The term sliding window refers to the imaginary boxes that
hold frames. The sliding window method is also known as windowing.
In these protocols, the sender has a buffer called the "sending window" and the receiver has a
buffer called the "receiving window".
The size of the sending window determines the sequence numbers of the outbound frames. If the
sequence number of the frames is an n-bit field, then the range of sequence numbers that can be
assigned is 0 to 2^n − 1. Consequently, the size of the sending window is 2^n − 1. Thus, in order to
accommodate a sending window of size 2^n − 1, an n-bit sequence number is chosen.
The sequence numbers are numbered modulo 2^n. For example, if the sending window size is 4,
then the sequence numbers will be 0, 1, 2, 3, 0, 1, 2, 3, 0, 1, and so on. The number of bits in the
sequence number is 2, generating the binary sequences 00, 01, 10, 11.
The size of the receiving window is the maximum number of frames that the receiver can accept
at a time. It determines the maximum number of frames that the sender can send before receiving
acknowledgment.
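The modulo numbering for a 2-bit sequence number field can be checked directly:

```python
n = 2                       # bits in the sequence number field
seq_space = 2 ** n          # 4 distinct sequence numbers: 0, 1, 2, 3
seqs = [i % seq_space for i in range(10)]
assert seqs == [0, 1, 2, 3, 0, 1, 2, 3, 0, 1]
assert seq_space - 1 == 3   # a sending window of 2^n - 1 frames
```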
Example 3:
Assume that computer A is trying to send its frame 0 to computer B and that B is trying to send its frame
0 to A. Suppose that A sends a frame to B, but A's timeout interval is a little too short. Consequently, A
may time out repeatedly, sending a series of identical frames, all with seq = 0 and ack = 1.
When the first valid frame arrives at computer B, it will be accepted and frame expected will be
set to 1. All the subsequent frames will be rejected because B is now expecting frames with
sequence number 1, not 0. Furthermore, since all the duplicates have ack = 1 and B is still
waiting for an acknowledgement to f0, B will not fetch a new packet from its network layer.
After every rejected duplicate comes in, B sends A a frame containing seq = 0 and ack = 0.
Eventually, one of these arrives correctly at A, causing A to begin sending the next packet. No
combination of lost frames or premature timeouts can cause the protocol to deliver duplicate
packets to either network layer, to skip a packet, or to deadlock.
GO-BACK-N ARQ:
In a Go-Back-N (GBN) protocol, the sender is allowed to transmit multiple packets (when
available) without waiting for an acknowledgment, but is constrained to have no more than
some maximum allowable number, N, of unacknowledged packets in the pipeline.
To improve the efficiency of transmission (filling the pipe), multiple frames must be in transition
while waiting for acknowledgment. In other words, we need to let more than one frame be outstanding
to keep the channel busy while the sender is waiting for acknowledgment.
The first is called Go-Back-N Automatic Repeat Request. In this protocol we can send several
frames before receiving acknowledgments; we keep a copy of these frames until the
acknowledgments arrive.
The sender window at any time divides the possible sequence numbers into four regions.
First region, from the far left to the left wall of the window, defines the sequence numbers
belonging to frames that are already acknowledged. The sender does not worry about these
frames and keeps no copies of them.
Second region, colored in Figure (a), defines the range of sequence numbers belonging to the
frames that are sent and have an unknown status. The sender needs to wait to find out if these
frames have been received or were lost. We call these outstanding frames.
Third region, white in the figure, defines the range of sequence numbers for frames that can be
sent; however, the corresponding data packets have not yet been received from the network layer.
Finally, the fourth region defines sequence numbers that cannot be used until the window slides.
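The go-back behaviour can be sketched as a small simulation (illustrative names; the receiver accepts only in-order frames, and the sender slides its window on a cumulative acknowledgment or goes back to the oldest outstanding frame on a timeout):

```python
def go_back_n(num_frames, N, lose_first_copy_of):
    base, next_seq = 0, 0      # window base and next sequence number to send
    sent_log, received = [], []
    while base < num_frames:
        # fill the pipe: at most N frames outstanding at once
        while next_seq < num_frames and next_seq - base < N:
            sent_log.append(next_seq)
            if next_seq in lose_first_copy_of:
                lose_first_copy_of.discard(next_seq)   # lost in transit
            elif next_seq == len(received):
                received.append(next_seq)  # receiver accepts only in order
            next_seq += 1
        if len(received) > base:
            base = len(received)           # cumulative ACK slides the window
        else:
            next_seq = base                # timeout: go back and resend all
    return sent_log, received

sent, received = go_back_n(4, N=3, lose_first_copy_of={1})
assert received == [0, 1, 2, 3]
assert sent == [0, 1, 2, 3, 1, 2, 3]   # frames 1-3 are resent after the loss
```

Note how losing frame 1 forces retransmission of frames 2 and 3 as well, even though they arrived correctly the first time; this is exactly the inefficiency that Selective Repeat addresses.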
SELECTIVE REPEAT ARQ:
Selective Repeat (SR) protocols avoid unnecessary retransmissions by having the sender
retransmit only those packets that it suspects were received in error (i.e., were lost or corrupted)
at the receiver. This individual, as-needed retransmission requires that the receiver
individually acknowledge correctly received packets. A window size of N will again be used to
limit the number of outstanding, unacknowledged packets in the pipeline.
The SR receiver will acknowledge a correctly received packet whether or not it is in order.
Out-of-order packets are buffered until any missing packets (i.e., packets with lower
sequence numbers) are received, at which point a batch of packets can be delivered in order to
the upper layer. The figure itemizes the various actions taken by the SR receiver.
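The SR receiver's buffer-and-release behaviour can be sketched as follows (an illustrative sketch; window bounds checking is omitted for brevity):

```python
def sr_receive(arrivals):
    """Accept every correctly received frame, in order or not; deliver to
    the upper layer only when an in-order run is complete."""
    buffer = {}      # seq -> frame, for out-of-order arrivals
    expected = 0     # lowest sequence number not yet delivered
    delivered = []
    for seq, frame in arrivals:
        buffer[seq] = frame          # buffered and individually ACKed
        while expected in buffer:    # release any completed in-order batch
            delivered.append(buffer.pop(expected))
            expected += 1
    return delivered

# frame 0 is delayed: frames 1 and 2 wait in the buffer, then the whole
# batch is delivered in order once frame 0 arrives
assert sr_receive([(1, 'f1'), (2, 'f2'), (0, 'f0')]) == ['f0', 'f1', 'f2']
```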
Piggybacking – carrying control information in both directions improves the efficiency
of bidirectional protocols.
A technique called piggybacking is used to improve the efficiency of bidirectional
protocols. When a frame is carrying data from A to B, it can also carry control information about
arrived (or lost) frames from B, and vice versa.
HIGH-LEVEL DATA LINK CONTROL (HDLC):
High-level Data Link Control (HDLC) is a group of communication protocols of the data link
layer for transmitting data between network points or nodes. Since it is a data link protocol, data
is organized into frames. A frame is transmitted via the network to the destination, which verifies
its successful arrival. It is a bit-oriented protocol that is applicable for both point-to-point and
multipoint communications.
Transfer Modes:
HDLC supports two types of transfer modes, normal response mode and asynchronous balanced
mode.
Normal Response Mode (NRM) − Here, two types of stations exist: a primary station
that sends commands and secondary stations that respond to received commands. It is
used for both point-to-point and multipoint communications.
Asynchronous Balanced Mode (ABM) − Here, the configuration is balanced, i.e., each
station can both send commands and respond to commands. It is used only for point-to-
point communications.
HDLC Frame
HDLC is a bit-oriented protocol where each frame contains up to six fields. The structure varies
according to the type of frame.
The fields of a HDLC frame are −
Flag − It is an 8-bit sequence that marks the beginning and the end of the frame. The bit
pattern of the flag is 01111110.
Address − It contains the address of the receiver. If the frame is sent by the primary
station, it contains the address(es) of the secondary station(s). If it is sent by the
secondary station, it contains the address of the primary station. The address field may be
from 1 byte to several bytes.
Control − It is 1 or 2 bytes containing flow and error control information.
Payload − This carries the data from the network layer. Its length may vary from one
network to another.
FCS − It is a 2-byte or 4-byte frame check sequence for error detection. The standard
code used is CRC (cyclic redundancy code).
POINT-TO-POINT PROTOCOL (PPP):
The Internet needs a point-to-point protocol for a variety of purposes, including router-to-router
traffic and home user-to-ISP traffic. This protocol is PPP (Point-to-Point Protocol).
PPP provides a link control protocol for bringing lines up, testing them, negotiating options, and
bringing them down again gracefully when they are no longer needed. This protocol is
called LCP (Link Control Protocol). It supports synchronous and asynchronous circuits
and byte-oriented and bit-oriented encodings.
HDLC is a general protocol that can be used for both point-to-point and multipoint
configurations; one of the most common protocols for point-to-point access is the Point-to-Point
Protocol (PPP). PPP is a byte-oriented protocol.
Framing
Transition Phases
Multiplexing
Multilink
PPP Framing:
PPP is a byte-oriented protocol using byte stuffing with the escape byte 01111101.
Flag. A PPP frame starts and ends with a 1-byte flag with the bit pattern 01111110.
Address. The address field in this protocol is a constant value, set to 11111111 (the broadcast address).
Control. This field is set to the constant value 00000011 (imitating unnumbered frames in
HDLC). As we will discuss later, PPP does not provide any flow control. Error control is also
limited to error detection.
Protocol. The protocol field defines what is being carried in the data field: either user data or
other information. This field is by default 2 bytes long, but the two parties can agree to use only 1
byte.
Payload field. This field carries either the user data or other information that we will discuss
shortly. The data field is a sequence of bytes with the default of a maximum of 1500 bytes; but
this can be changed during negotiation.
FCS. The frame check sequence (FCS) is simply a 2-byte or 4-byte standard CRC.
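The PPP byte stuffing mentioned above can be sketched as follows (a sketch; note that PPP as specified in RFC 1662 not only prefixes the escape byte 01111101 but also XORs the escaped byte with 0x20, a detail beyond this summary):

```python
FLAG, ESC = 0x7E, 0x7D   # 01111110 and 01111101

def ppp_stuff(payload: bytes) -> bytes:
    """Escape every flag or escape byte as ESC followed by byte XOR 0x20."""
    out = bytearray()
    for b in payload:
        if b in (FLAG, ESC):
            out += bytes([ESC, b ^ 0x20])
        else:
            out.append(b)
    return bytes(out)

def ppp_unstuff(stuffed: bytes) -> bytes:
    """Undo the escaping on the receiving side."""
    out = bytearray()
    it = iter(stuffed)
    for b in it:
        if b == ESC:
            b = next(it) ^ 0x20
        out.append(b)
    return bytes(out)

data = bytes([0x41, FLAG, 0x42, ESC])
assert ppp_unstuff(ppp_stuff(data)) == data
```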
Mr. Sunil J, Dept. of CSE, CIT, Gubbi
*****