CN Full Notes (Module 1 and Module 2)
INTRODUCTION TO NETWORKS
A network is a set of devices (often referred to as nodes) connected by
communication links.
A node can be a computer, printer, or any other device capable of sending or
receiving data generated by other nodes on the network.
When we communicate, we are sharing information. This sharing can be local or
remote.
CHARACTERISTICS OF A NETWORK
The effectiveness of a network depends on three characteristics.
1. Delivery: The system must deliver data to the correct destination.
2. Accuracy: The system must deliver data accurately.
3. Timeliness: The system must deliver data in a timely manner.
Other characteristics considered are the factors that affect the performance of a network, the factors that affect the reliability of a network, and the factors that affect the security of a network.
COMPONENTS INVOLVED IN A NETWORK PROCESS
TRANSMISSION MODES
o The way in which data is transmitted from one device to another device is known
as transmission mode.
o The transmission mode is also known as the communication mode.
o Each communication channel has a direction associated with it, and transmission
media provide the direction. Therefore, the transmission mode is also known as a
directional mode.
o The transmission mode is defined in the physical layer.
Types of Transmission mode
o Simplex Mode
o Half-duplex Mode
o Full-duplex mode (Duplex Mode)
SIMPLEX MODE
o In simplex mode, communication is unidirectional, i.e., data flows in only one
direction.
o A device can either only send data or only receive it; it cannot do both.
o This transmission mode is not very popular, as most communication requires a
two-way exchange of data. Simplex mode is used where no corresponding reply is
required, such as one-way sales broadcasts.
o A radio station is a simplex channel: it transmits the signal to the listeners but
never allows them to transmit back.
o The keyboard and monitor are examples of simplex devices: a keyboard can only
accept data from the user, and a monitor can only display data on the screen.
o The main advantage of simplex mode is that the full capacity of the
communication channel can be utilized during transmission.
HALF-DUPLEX MODE
o In a half-duplex channel, the direction can be reversed, i.e., each station can both
transmit and receive data.
o Messages flow in both directions, but not at the same time.
o The entire bandwidth of the communication channel is utilized in one direction at
a time.
o In half-duplex mode, error detection is possible: if an error occurs, the receiver
requests the sender to retransmit the data.
o A walkie-talkie is an example of half-duplex mode: one party speaks while the
other listens. After a pause, the other speaks and the first party listens. Speaking
simultaneously produces distorted sound that cannot be understood.
FULL-DUPLEX MODE
o In full-duplex mode, communication is bidirectional, i.e., data flows in both
directions at once.
o Both stations can send and receive messages simultaneously.
o Full-duplex mode behaves like two simplex channels: one carries traffic in one
direction, and the other carries traffic in the opposite direction.
o Full-duplex is the fastest mode of communication between devices.
o The most common example of full-duplex mode is the telephone network: when
two people communicate over a telephone line, both can talk and listen at the
same time.
Advantage of Full-duplex mode:
o Both the stations can send and receive the data at the same time.
Comparison (Send/Receive):
o Simplex: A device can only send data but cannot receive it, or can only receive
data but cannot send it.
o Half-duplex: Both devices can send and receive data, but only one at a time.
o Full-duplex: Both devices can send and receive data simultaneously.
Line configuration refers to the way two or more communication devices attach to a
link. A link is a communications pathway that transfers data from one device to another.
There are two possible line configurations:
i. Point-to-Point: Provides a dedicated communication link between two devices.
It is simple to establish. The most common example of a point-to-point
connection is a computer connected by a telephone line. The two devices can be
connected by a pair of wires or by a microwave or satellite link.
ii. Multipoint: It is also called a multidrop configuration. In this connection, two or
more devices share a single link. There are two kinds of multipoint connections:
Spatial Sharing: If several devices can share the link simultaneously, it is
called a spatially shared line configuration.
Temporal (Time) Sharing: If users must take turns using the link, it is
called a temporally shared or time-shared line configuration.
NETWORK TOPOLOGY
Two or more devices connect to a link; two or more links form a topology. Topology is
defined as:
(1) The way in which a network is laid out physically.
(2) The geometric representation of the relationship of all the links and nodes to
one another.
The various types of topologies are : Bus, Ring, Tree, Star, Mesh and Hybrid.
BUS TOPOLOGY
Bus topology is a network type in which every computer and network device is
connected to a single cable.
This long single cable acts as a backbone to link all the devices in the network.
When it has exactly two endpoints, it is called a linear bus topology.
It transmits data in only one direction.
RING TOPOLOGY
Advantages of Ring Topology:
1. The transmitting network is not affected by high traffic or by adding more
nodes, as only the node holding the token can transmit data.
2. Cheap to install and expand.
Disadvantages of Ring Topology:
1. Troubleshooting is difficult in a ring topology.
2. Adding or deleting computers disturbs the network activity.
3. Failure of one computer disturbs the whole network.
TREE TOPOLOGY
It has a root node and all other nodes are connected to it forming a hierarchy.
It is also called hierarchical topology.
It should have at least three levels in the hierarchy.
Tree topology is ideal if workstations are located in groups.
Tree topologies are used in Wide Area Networks.
STAR TOPOLOGY
Advantages of Star Topology:
1. Fast performance with few nodes and low network traffic.
2. The hub can be upgraded easily.
3. Easy to troubleshoot.
4. Easy to set up and modify.
5. Only the failed node is affected; the rest of the nodes can work smoothly.
Disadvantages of Star Topology:
1. Cost of installation is high.
2. Expensive to use.
3. If the hub fails, the whole network stops.
4. Performance depends on the hub, that is, on its capacity.
MESH TOPOLOGY
HYBRID TOPOLOGY
NETWORK TYPES
A computer network is a group of computers linked to each other that enables the
computers to communicate with one another and share resources, data, and
applications.
Computer networks can be categorized by their size.
A computer network is mainly of three types:
1. Local Area Network (LAN)
2. Wide Area Network (WAN)
3. Metropolitan Area Network (MAN)
LOCAL AREA NETWORK (LAN)
o It is less costly as it is built with inexpensive hardware such as hubs, network
adapters, and Ethernet cables.
o Data is transferred at an extremely fast rate in a Local Area Network.
o LAN can be connected using a common cable or a Switch.
WIDE AREA NETWORK (WAN)
o A Wide Area Network is not limited to a single location; it spans a large
geographical area through telephone lines, fibre optic cables, or satellite links.
o The Internet is one of the biggest WANs in the world.
o A Wide Area Network is widely used in the fields of business, government, and
education.
o A WAN can be either a point-to-point WAN or a switched WAN.
INTERNETWORK
Types of Internetwork
Extranet: An extranet is used for information sharing. Access to an extranet is
restricted to only those users who have login credentials. An extranet is the lowest
level of internetworking. It can be categorized as a MAN, WAN, or other computer
network. An extranet cannot consist of a single LAN; it must have at least one
connection to an external network.
Intranet: An intranet belongs to an organization and is accessible only by the
organization's employees or members. The main aim of an intranet is to share
information and resources among the organization's employees. An intranet
provides the facility to work in groups and to hold teleconferences.
TRANSMISSION MEDIA
o Transmission media is the communication channel that carries information from
the sender to the receiver.
o Data is transmitted through electromagnetic signals.
o The main functionality of the transmission media is to carry the information in
the form of bits (either as electrical signals or light pulses).
o It is the physical path between transmitter and receiver in data communication.
o The characteristics and quality of data transmission are determined by the
characteristics of the medium and the signal.
o Transmission media is of two types: guided media (wired) and unguided
media (wireless).
o In guided (wired) media, medium characteristics are more important whereas, in
unguided (wireless) media, signal characteristics are more important.
o Different transmission media have different properties such as bandwidth, delay,
cost and ease of installation and maintenance.
o The transmission media is available in the lowest layer of the OSI reference
model, i.e., Physical layer.
o Attenuation: Attenuation means the loss of energy, i.e., the strength of the signal
decreases with increasing the distance which causes the loss of energy.
o Distortion: Distortion occurs when there is a change in the shape of the signal. It
arises in composite signals made up of different frequencies: each frequency
component has its own propagation speed, so the components arrive at different
times, which leads to delay distortion.
o Noise: When data travels over a transmission medium, some unwanted signal is
added to it, which creates noise.
TYPES / CLASSES OF TRANSMISSION MEDIA
GUIDED MEDIA
It is defined as the physical medium through which the signals are transmitted.
It is also known as Bounded media.
Types of Guided media: Twisted Pair Cable, Coaxial Cable , Fibre Optic Cable
Twisted pair is a physical media made up of a pair of cables twisted with each
other.
A twisted pair cable is cheap as compared to other transmission media.
Installation of the twisted pair cable is easy, and it is a lightweight cable.
The frequency range for twisted pair cable is from 0 to 3.5 kHz.
A twisted pair consists of two insulated copper wires arranged in a regular spiral
pattern.
o Category 3: It can support up to 16 Mbps.
o Category 4: It can support up to 20 Mbps.
o Category 5: It can support up to 100 Mbps.
Advantages :
o It is cheap.
o Installation of the unshielded twisted pair is easy.
o It can be used for high-speed LAN.
Disadvantage:
o This cable can only be used for shorter distances because of attenuation.
A shielded twisted pair is a cable that contains a metal shield (mesh) surrounding the
wires, which blocks interference and allows a higher transmission rate.
Advantages :
o The cost of the shielded twisted pair cable is moderate, neither very high nor very low.
o Installation of STP is easy.
o It has a higher capacity as compared to unshielded twisted pair cable.
o The shielding blocks external interference and provides a higher data transmission rate.
Disadvantages:
o It is more expensive as compared to UTP and coaxial cable.
o It has a higher attenuation rate.
COAXIAL CABLE
o The cable is called coaxial because it contains two conductors that share a
common axis: an inner conductor and an outer conductor.
o It has a higher frequency range as compared to twisted pair cable.
o The inner conductor of the coaxial cable is made up of copper, and the outer
conductor is made up of copper mesh.
o A non-conductive insulating layer separates the inner conductor from the outer
conductor.
o The inner core is responsible for data transfer, whereas the copper mesh
prevents EMI (electromagnetic interference).
o Common applications of coaxial cable are Cable TV networks and traditional
Ethernet LANs.
Disadvantages :
o It is more expensive as compared to twisted pair cable.
o A fault in the cable causes failure of the entire network.
FIBRE OPTIC CABLE
o Fibre optic cable is a cable that uses light signals for communication.
o Fibre optic cable holds optical fibres coated in plastic, which are used to send
data as pulses of light.
o The plastic coating protects the optical fibres from heat, cold, and electromagnetic
interference from other types of wiring.
o Fibre optics provide faster data transmission than copper wires.
Advantages:
o Greater Bandwidth
o Less signal attenuation
o Immunity to electromagnetic interference
o Resistance to corrosive materials
o Light weight
o Greater immunity to tapping
Disadvantages :
o Requires Expertise for Installation and maintenance
o Unidirectional light propagation.
o Higher Cost.
Multimode Propagation
Multimode is so named because multiple beams from a light source move through
the core in different paths.
How these beams move within the cable depends on the structure of the core.
Single-Mode Propagation
Single-mode uses step-index fiber and a highly focused source of light that limits
beams to a small range of angles, all close to the horizontal.
The single-mode fiber itself is manufactured with a much smaller diameter than
that of multimode fiber, and with substantially lower density (index of refraction).
The decrease in density results in a critical angle that is close enough to 90° to
make the propagation of beams almost horizontal.
In this case, propagation of different beams is almost identical, and delays are
negligible. All the beams arrive at the destination “together” and can be
recombined with little distortion to the signal.
UNGUIDED MEDIA
o An unguided transmission transmits the electromagnetic waves without using any
physical medium. Therefore it is also known as wireless transmission.
o In unguided media, air is the media through which the electromagnetic energy
can flow easily.
RADIO WAVES
o Radio waves are the electromagnetic waves that are transmitted in all the
directions of free space.
o Radio waves are omnidirectional, i.e., the signals are propagated in all the
directions.
o The frequency range of radio waves is from 3 kHz to 1 GHz.
o In the case of radio waves, the sending and receiving antenna are not aligned, i.e.,
the wave sent by the sending antenna can be received by any receiving antenna.
o An example of the radio wave is FM radio.
Applications of Radio waves:
o A Radio wave is useful for multicasting when there is one sender and many
receivers.
o An FM radio, television, cordless phones are examples of a radio wave.
MICROWAVES
Terrestrial Microwave
o Terrestrial Microwave transmission is a technology that transmits the focused
beam of a radio signal from one ground-based microwave transmission antenna to
another.
o Microwaves are electromagnetic waves with frequencies in the range
from 1 GHz to 300 GHz.
o Microwaves are unidirectional: the sending and receiving antennas must be
aligned, i.e., the waves sent by the sending antenna are narrowly focused.
o In this case, antennas are mounted on towers to send a beam to another
antenna that may be many kilometres away.
o It works on the line of sight transmission, i.e., the antennas mounted on the
towers are at the direct sight of each other.
Characteristics of Terrestrial Microwave:
o Frequency range: The frequency range of terrestrial microwave is from 4-6 GHz
to 21-23 GHz.
o Bandwidth: It supports the bandwidth from 1 to 10 Mbps.
o Short distance: It is inexpensive for short distance.
o Long distance: It is expensive as it requires a higher tower for a longer distance.
o Attenuation: Attenuation means loss of signal. It is affected by environmental
conditions and antenna size.
Satellite Microwave
o A satellite is a physical object that revolves around the earth at a known height.
o Satellite communication is more reliable nowadays as it offers more flexibility
than cable and fibre optic systems.
o We can communicate with any point on the globe by using satellite
communication.
o The satellite accepts the signal that is transmitted from the earth station, and it
amplifies the signal. The amplified signal is retransmitted to another earth station.
Advantages of Satellite Microwave:
o The coverage area of a satellite microwave is more than the terrestrial microwave.
o The transmission cost of the satellite is independent of the distance from the
centre of the coverage area.
o Satellite communication is used in mobile and wireless communication
applications.
o It is easy to install.
o It is used in a wide variety of applications such as weather forecasting, radio/TV
signal broadcasting, mobile communication, etc.
INFRARED WAVES
Characteristics of Infrared:
o It supports high bandwidth, and hence the data rate will be very high.
o Infrared waves cannot penetrate walls. Therefore, infrared communication in one
room is not disturbed by communication in nearby rooms.
o An infrared communication provides better security with minimum interference.
o Infrared communication is unreliable outside the building because the sun rays
will interfere with the infrared waves.
SWITCHING
o The technique of transferring the information from one computer network to
another network is known as switching.
o Switching in a computer network is achieved by using switches.
o A switch is a small hardware device used to join multiple computers together
within one local area network (LAN).
o Switches are devices capable of creating temporary connections between two or
more devices linked to the switch.
o Switches are used to forward the packets based on MAC addresses.
o A switch transfers data only to the device that has been addressed. It
verifies the destination address to route the packet appropriately.
o It operates in full-duplex mode.
o It does not broadcast every message; forwarding only to the addressed device
makes efficient use of the limited bandwidth.
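The forwarding behaviour described above can be sketched as a small Python program (a minimal illustration with invented MAC strings and port numbers; a real switch learns addresses from actual Ethernet frames in hardware):

```python
# Minimal sketch of a learning switch: it records which port each source
# MAC address arrived on, then forwards a frame only to the port of the
# destination MAC, flooding to all other ports when the destination is
# still unknown.
class LearningSwitch:
    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.mac_table = {}                    # MAC address -> port number

    def receive(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port      # learn the sender's port
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]   # forward to one port only
        # destination unknown: flood to every port except the incoming one
        return [p for p in range(self.num_ports) if p != in_port]

sw = LearningSwitch(num_ports=4)
print(sw.receive(0, "AA", "BB"))   # BB unknown: flooded to ports [1, 2, 3]
print(sw.receive(1, "BB", "AA"))   # AA was learned on port 0: [0]
```

Note how the second frame is no longer flooded: the table built from observed source addresses is what lets the switch send data only to the addressed device.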
Advantages of Switching:
o Switch increases the bandwidth of the network.
o It reduces the workload on individual PCs as it sends the information to only that
device which has been addressed.
o It increases the overall performance of the network by reducing the traffic on the
network.
o There will be fewer frame collisions, as the switch creates a separate collision
domain for each connection.
Disadvantages of Switching:
o A Switch is more expensive than network bridges.
o Network connectivity issues cannot be easily diagnosed through a switch.
o Proper designing and configuration of the switch are required to handle multicast
packets.
CIRCUIT SWITCHING
Communication through circuit switching involves three phases:
1. Circuit establishment - A dedicated circuit is set up from the source to the
destination through intermediate switching nodes before any data is transferred.
2. Data transfer - Once the circuit has been established, data and voice are
transferred from the source to the destination. The dedicated connection remains
as long as the end parties communicate.
3. Circuit disconnect - When the communication is over, the circuit is
disconnected and the reserved resources are released.
Advantages
It is suitable for long continuous transmission, since a continuous transmission
route is established that remains in place throughout the conversation.
The dedicated path ensures a steady data rate of communication.
No intermediate delays are found once the circuit is established. So, they are
suitable for real time communication of both voice and data transmission.
Disadvantages
Circuit switching establishes a dedicated connection between the end parties. This
dedicated connection cannot be used for transmitting any other data, even if the
data load is very low.
Bandwidth requirement is high even in cases of low data volume.
There is underutilization of system resources. Once resources are allocated to a
particular connection, they cannot be used for other connections.
Time required to establish connection may be high.
It is more expensive than other switching techniques as a dedicated path is
required for each connection.
PACKET SWITCHING
o Packet switching is a switching technique in which the message is divided into
smaller pieces that are sent individually rather than in one go.
o The message is split into smaller pieces known as packets, and each packet is
given a unique sequence number so that its order can be identified at the
receiving end.
o Every packet contains some information in its header, such as the source address,
destination address, and sequence number.
o Packets travel across the network, taking the shortest available path.
o All the packets are reassembled at the receiving end in the correct order.
o If any packet is missing or corrupted, a message is sent asking the sender to
resend it.
o Once all packets have arrived in the correct order, an acknowledgment message
is sent.
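The splitting, numbering, and reassembly steps above can be sketched as follows (a simplified illustration; real packet headers also carry source and destination addresses, and retransmission of lost packets is not shown):

```python
import random

# Split a message into numbered packets of at most packet_size characters.
def packetize(message, packet_size):
    return [(seq, message[i:i + packet_size])
            for seq, i in enumerate(range(0, len(message), packet_size))]

# Reassemble by sorting on the sequence number carried in each packet.
def reassemble(packets):
    return "".join(data for seq, data in sorted(packets))

packets = packetize("HELLO, PACKET SWITCHING", packet_size=5)
random.shuffle(packets)        # packets may arrive in any order
print(reassemble(packets))     # -> HELLO, PACKET SWITCHING
```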
Advantages of Packet Switching:
o Cost-effective: In packet switching technique, switching devices do not require
massive secondary storage to store the packets, so cost is minimized to some
extent. Therefore, we can say that the packet switching technique is a cost-
effective technique.
o Reliable: If any node is busy, then the packets can be rerouted. This ensures that
the Packet Switching technique provides reliable communication.
o Efficient: Packet switching is an efficient technique. It does not require any path
to be established prior to transmission, and many users can share the same
communication channel simultaneously, so the available bandwidth is used very
efficiently.
Datagram Switching
In datagram switching, each packet is routed independently. In this example, all four
packets (or datagrams) belong to the same message, but may travel different paths to
reach their destination.
Routing Table
In this type of network, each switch (or packet switch) has a routing table which is based
on the destination address. The routing tables are dynamic and are updated periodically.
The destination addresses and the corresponding forwarding output ports are recorded in
the tables.
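Such a table can be sketched as a simple destination-to-port mapping (the destination names and port numbers below are invented for illustration; real tables are filled in and updated dynamically by routing protocols):

```python
# Per-switch routing table: destination address -> output port.
routing_table = {
    "A": 1,    # packets addressed to A leave on port 1
    "B": 2,
    "C": 3,
}

def forward(destination):
    """Return the output port for a packet, based only on its destination."""
    port = routing_table.get(destination)
    if port is None:
        raise KeyError("no route to " + destination)
    return port

print(forward("B"))    # -> 2
```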
Virtual Circuit Switching
o Virtual Circuit Switching is also known as connection-oriented switching.
o In the case of Virtual circuit switching, a virtual connection is established before
the messages are sent.
o Call request and call accept packets are used to establish the connection between
sender and receiver.
o In this case, the path is fixed for the duration of a logical connection.
Example :
Source A sends a frame to Destination B through Switch 1, Switch 2 and Switch 3.
Types of Virtual Circuits
There are two broad classes of Virtual Circuits.
They are
1. PVC – Permanent Virtual Circuit
The network administrator configures the circuit state in advance;
the virtual circuit is permanent (PVC).
2. SVC – Switched Virtual Circuit
The circuit is established dynamically when a station requests a connection
and is released when the session ends.
COMPARISON – CIRCUIT SWITCHING AND PACKET SWITCHING
Circuit Switching:
- A dedicated path exists for data transfer.
- All the packets take the same path.
Packet Switching – Virtual Circuit Switching:
- A dedicated path exists for data transfer.
- All the packets take the same path.
Packet Switching – Datagram Switching:
- No dedicated path exists for data transfer.
- The packets may not all take the same path.
MESSAGE SWITCHING
PROTOCOL LAYERING
In networking, a protocol defines the rules that both the sender and receiver and
all intermediate devices need to follow to be able to communicate effectively.
A protocol provides a communication service that processes use to exchange
messages.
When communication is simple, we may need only one simple protocol.
When the communication is complex, we may need to divide the task between
different layers, in which case we need a protocol at each layer, or protocol
layering.
An advantage of protocol layering is that it allows us to separate the services from
the implementation.
A layer needs to be able to receive a set of services from the lower layer and to
give the services to the upper layer.
Any modification in one layer will not affect the other layers.
Basic Elements of Layered Architecture
Service: It is a set of actions that a layer provides to the higher layer.
Protocol: It defines a set of rules that a layer uses to exchange the information
with peer entity. These rules mainly concern about both the contents and order of
the messages used.
Interface: It is a way through which the message is transferred from one layer to
another layer.
Protocol Graph
The set of protocols that make up a network system is called a protocol graph.
The nodes of the graph correspond to protocols, and the edges represent a
dependence relation.
For example, the figure below illustrates a protocol graph in which the protocols
RRP (Request/Reply Protocol) and MSP (Message Stream Protocol) implement
two different types of process-to-process channels, and both depend on HHP
(Host-to-Host Protocol), which provides a host-to-host connectivity service.
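The dependence relation can be illustrated with a toy encapsulation sketch: each layer adds its header on the way down and strips it on the way up. The `RRP|` and `HHP|` header strings are invented placeholders, not real protocol formats:

```python
# On the sending host, the message descends the protocol graph: RRP adds
# its header first, then HHP adds its own before transmission.
def send(message):
    rrp_pdu = "RRP|" + message       # process-to-process header
    hhp_pdu = "HHP|" + rrp_pdu       # host-to-host header
    return hhp_pdu                   # handed to the physical medium

# On the receiving host, each layer strips the header its peer added.
def receive(frame):
    assert frame.startswith("HHP|")
    rrp_pdu = frame[len("HHP|"):]
    assert rrp_pdu.startswith("RRP|")
    return rrp_pdu[len("RRP|"):]

frame = send("hello")
print(frame)              # -> HHP|RRP|hello
print(receive(frame))     # -> hello
```

Each layer deals only with the header its peer added, which is exactly the separation of service from implementation described above.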
OSI MODEL
o OSI stands for Open System Interconnection.
o It is a reference model that describes how information from a software application
in one computer moves through a physical medium to the software application in
another computer.
o OSI consists of seven layers, and each layer performs a particular network
function.
o The OSI model was developed by the International Organization for
Standardization (ISO) in 1984, and it is now considered an architectural model
for inter-computer communication.
o OSI model divides the whole task into seven smaller and manageable tasks. Each
layer is assigned a particular task.
o Each layer is self-contained, so that task assigned to each layer can be performed
independently.
FUNCTIONS OF THE OSI LAYERS
1. PHYSICAL LAYER
The physical layer coordinates the functions required to transmit a bit stream over a
physical medium.
The physical layer is concerned with the following functions:
Physical characteristics of interfaces and media - The physical layer defines
the characteristics of the interface between the devices and the transmission
medium.
Representation of bits - To transmit the stream of bits, the bits must be encoded
into signals. The physical layer defines the type of encoding.
Signals: It determines the type of the signal used for transmitting the information.
Data Rate or Transmission rate - The number of bits sent each second –is also
defined by the physical layer.
Synchronization of bits - The sender and receiver must be synchronized at the
bit level. Their clocks must be synchronized.
Line Configuration - In a point-to-point configuration, two devices are
connected together through a dedicated link. In a multipoint configuration, a link
is shared between several devices.
Physical Topology - The physical topology defines how devices are connected to
make a network. Devices can be connected using a mesh, bus, star or ring
topology.
Transmission Mode - The physical layer also defines the direction of
transmission between two devices: simplex, half-duplex or full-duplex.
2. DATA LINK LAYER
The data link layer is responsible for transmitting frames from one node to the next node.
The other responsibilities of this layer are
Framing - Divides the stream of bits received into data units called frames.
Physical addressing – If frames are to be distributed to different systems on the
network, the data link layer adds a header to the frame to define the sender and
receiver.
Flow control - If the rate at which the data is absorbed by the receiver is less
than the rate produced by the sender, the data link layer imposes a flow control
mechanism.
Error control- Used for detecting and retransmitting damaged or lost frames and
to prevent duplication of frames. This is achieved through a trailer added at the
end of the frame.
Medium Access control -Used to determine which device has control over the
link at any given time.
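The error-control idea above (a check value carried in a trailer at the end of the frame) can be illustrated with the simplest possible scheme, a single even-parity bit; real data link layers normally use a CRC instead, but the pattern of compute, append, and verify is the same:

```python
# Append a single even-parity bit as the frame trailer.
def add_parity(bits):
    parity = sum(bits) % 2        # 1 if the count of 1-bits is odd
    return bits + [parity]

# Receiver check: with even parity, the total count of 1-bits must be even.
def check_parity(frame):
    return sum(frame) % 2 == 0

frame = add_parity([1, 0, 1, 1])
print(frame)                   # -> [1, 0, 1, 1, 1]
print(check_parity(frame))     # -> True (no error detected)
frame[0] ^= 1                  # a bit is flipped in transit
print(check_parity(frame))     # -> False: receiver asks for retransmission
```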
3. NETWORK LAYER
This layer is responsible for the delivery of packets from source to destination.
It determines the best path to move data from source to the destination based on the
network conditions, the priority of service, and other factors.
The other responsibilities of this layer are
Logical addressing - If a packet passes the network boundary, we need another
addressing system for source and destination called logical address. This
addressing is used to identify the device on the internet.
Routing – Routing is the major component of the network layer, and it
determines the best optimal path out of the multiple paths from source to the
destination.
4. TRANSPORT LAYER
5. SESSION LAYER
6. PRESENTATION LAYER
It is concerned with the syntax and semantics of information exchanged between two
systems.
The other responsibilities of this layer are
Translation – Different computers use different encoding systems; this layer is
responsible for interoperability between these different encoding methods. It
changes the message into a common format.
Encryption and decryption - The sender transforms the original information
into another form and sends the resulting message over the network; the receiver
reverses the process.
Compression and expansion-Compression reduces the number of bits contained
in the information particularly in text, audio and video.
7. APPLICATION LAYER
This layer enables the user to access the network. It handles issues such as network
transparency and resource allocation, and allows the user to log on to a remote host.
The other responsibilities of this layer are
FTAM (File Transfer, Access, Management) - Allows user to access files in a
remote host.
Mail services - Provides email forwarding and storage.
Directory services - Provides database sources to access information about
various sources and objects.
TCP / IP PROTOCOL SUITE
The TCP/IP architecture is also called the Internet architecture.
It was developed by the US Defense Advanced Research Projects Agency
(DARPA) for its packet-switched network (ARPANET).
TCP/IP is a protocol suite used in the Internet today.
It is a 4-layer model. The layers of TCP/IP are
1. Application layer
2. Transport Layer (TCP/UDP)
3. Internet Layer
4. Network Interface Layer
APPLICATION LAYER
The application layer incorporates the functions of the top three OSI layers. It
is the topmost layer in the TCP/IP model.
It is responsible for handling high-level protocols and issues of representation.
This layer allows the user to interact with the application.
When one application layer protocol wants to communicate with another, it
forwards its data to the transport layer.
Protocols such as FTP, HTTP, SMTP and POP3 run in the application layer and
provide services to the programs running on top of it.
TRANSPORT LAYER
The transport layer is responsible for the reliability, flow control, and correction
of data which is being sent over the network.
The two protocols used in the transport layer are User Datagram protocol and
Transmission control protocol.
o UDP – UDP provides connectionless service and end-to-end delivery of
transmissions. It is an unreliable protocol: it can detect that an error
occurred but does not specify or recover from the error.
o TCP – TCP provides full transport layer services to applications. TCP is
a reliable protocol: it detects errors and retransmits the damaged
segments.
INTERNET LAYER
The internet layer is the second layer of the TCP/IP model.
An internet layer is also known as the network layer.
The main responsibility of the internet layer is to send packets from any
network and have them arrive at the destination irrespective of the route they take.
The internet layer handles the transfer of information across multiple networks
through routers and gateways.
The IP protocol is used in this layer, and it is the most significant part of the
entire TCP/IP suite.
COMPARISON - OSI MODEL AND TCP/IP MODEL
7. OSI: All packets are reliably delivered by the transport layer. TCP/IP: TCP
reliably delivers packets; IP does not reliably deliver packets.
NETWORK PERFORMANCE
Network performance is measured using:
Bandwidth, Throughput, Latency, Jitter, and Round-Trip Time.
BANDWIDTH
The bandwidth of a network is given by the number of bits that can be transmitted
over the network in a certain period of time.
Bandwidth can be measured in two different values: bandwidth in hertz and
bandwidth in bits per second.
Bandwidth in Hertz
o Bandwidth in hertz refers to the range of frequencies contained in a composite
signal or the range of frequencies a channel can pass.
o For example, we can say the bandwidth of a subscriber telephone line is 4 kHz.
Relationship
o There is an explicit relationship between the bandwidth in hertz and bandwidth in
bits per second.
o Basically, an increase in bandwidth in hertz means an increase in bandwidth
in bits per second.
THROUGHPUT
Throughput is a measure of how fast we can actually send data through a
network.
Bandwidth in bits per second and throughput may seem to be the same, but they
are different.
A link may have a bandwidth of B bps, but we can only send T bps through this
link (T is always less than B).
In other words, the bandwidth is a potential measurement of a link; the
throughput is an actual measurement of how fast we can send data.
For example, we may have a link with a bandwidth of 1 Mbps, but the devices
connected to the end of the link may handle only 200 kbps. This means that we
cannot send more than 200 kbps through this link.
Problem :
A network with bandwidth of 10 Mbps can pass only an average of 12,000 frames
per minute with each frame carrying an average of 10,000 bits. What is the
throughput of this network?
Solution
We can calculate the throughput as
Throughput = (12,000 × 10,000) / 60 = 2 Mbps
The throughput is almost one-fifth of the bandwidth in this case.
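The same computation in code (values taken from the problem statement):

```python
# Throughput = total bits successfully transferred per unit time.
frames_per_minute = 12_000
bits_per_frame = 10_000

# Convert frames per minute into bits per second.
throughput_bps = frames_per_minute * bits_per_frame / 60
print(throughput_bps / 1e6, "Mbps")  # 2.0 Mbps
```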
LATENCY (DELAY)
The latency or delay defines how long it takes for an entire message to travel
from one end of a network to the other.
Latency is made up of four components: Propagation time, Transmission time,
Queuing time and Processing delay.
Propagation Time
o Propagation time measures the time required for a bit to travel from the source to
the destination.
o The propagation time is calculated by dividing the distance by the propagation
speed.
o The propagation speed of electromagnetic signals depends on the medium and on
the frequency of the signal.
Transmission Time
o In data communications we don’t send just 1 bit, we send a message.
o The first bit may take a time equal to the propagation time to reach its destination.
o The last bit also may take the same amount of time.
o However, there is a time between the first bit leaving the sender and the last bit
arriving at the receiver.
o The first bit leaves earlier and arrives earlier.
o The last bit leaves later and arrives later.
o The transmission time of a message depends on the size of the message and the
bandwidth of the channel.
Queuing Time
o Queuing time is the time needed for each intermediate or end device to hold the
message before it can be processed.
o The queuing time is not a fixed factor. It changes with the load imposed on the
network. When there is heavy traffic on the network, the queuing time increases.
o An intermediate device, such as a router, queues the arrived messages and
processes them one by one.
o If there are many messages, each message will have to wait.
Processing Delay
o Processing delay is the time that the nodes take to process the packet header.
o Processing delay is a key component in network delay.
o During processing of a packet, nodes may check for bit-level errors in the packet
that occurred during transmission as well as determining where the packet's next
destination is.
Bandwidth - Delay Product
o Bandwidth and delay are two performance metrics of a link.
o The bandwidth-delay product defines the number of bits that can fill the
link.
o This measurement is important if we need to send data in bursts and wait for the
acknowledgment of each burst before sending the next one.
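A quick illustration of the bandwidth-delay product. The link parameters below
(1 Mbps bandwidth, 50 ms one-way delay) are assumed values for illustration,
not figures from the notes:

```python
# Bandwidth-delay product: the number of bits that can be "in flight",
# i.e., fill the link, at any moment.  (Example values are assumptions.)
bandwidth_bps = 1_000_000   # 1 Mbps
delay_s = 0.050             # 50 ms one-way delay

bits_in_flight = bandwidth_bps * delay_s
print(int(bits_in_flight), "bits fill the link")  # 50000 bits
```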
JITTER
o Jitter is the variation in delay for packets belonging to the same flow. If
different packets encounter different delays, the quality of real-time audio
or video applications is affected.
ROUND-TRIP TIME (RTT)
o The round-trip time (RTT) is how long it takes to send a message from one
end of a network to the other and back, rather than the one-way latency.
SOLVED PROBLEMS – PERFORMANCE
Problem 1:
What is the propagation time if the distance between the two points is 12,000 km?
Assume the propagation speed to be 2.4 × 10^8 m/s.
Solution :
Propagation time = (12,000 × 1000) / (2.4 × 10^8) = 50 ms
Problem 2:
What are the propagation time and the transmission time for a 2.5-KB (kilobyte)
message (an email) if the bandwidth of the network is 1 Gbps? Assume that the
distance between the sender and the receiver is 12,000 km and that light travels at
2.4 × 10^8 m/s.
Solution:
Propagation time = (12,000 × 1000) / (2.4 × 10^8) = 50 ms
Transmission time = (2500 × 8) / 10^9 = 0.02 ms
Problem 3:
What are the propagation time and the transmission time for a 5-MB (megabyte)
message (an image) if the bandwidth of the network is 1 Mbps? Assume that the
distance between the sender and the receiver is 12,000 km and that light travels at
2.4 × 10^8 m/s.
Solution:
Propagation time = (12,000 × 1000) / (2.4 × 10^8) = 50 ms
Transmission time = (5,000,000 × 8) / 10^6 = 40 s
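The three solved problems can be verified with two small helper functions:

```python
def propagation_time(distance_m, speed_mps):
    """Time for one bit to travel from the source to the destination."""
    return distance_m / speed_mps

def transmission_time(message_bits, bandwidth_bps):
    """Time between the first and the last bit leaving the sender."""
    return message_bits / bandwidth_bps

# Problems 1-2: 12,000 km at 2.4 x 10^8 m/s; 2.5 KB message on a 1 Gbps link.
print(propagation_time(12_000e3, 2.4e8) * 1e3, "ms")   # 50.0 ms
print(transmission_time(2500 * 8, 1e9) * 1e3, "ms")    # 0.02 ms

# Problem 3: 5 MB message on a 1 Mbps link.
print(transmission_time(5_000_000 * 8, 1e6), "s")      # 40.0 s
```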
UNIT II : DATA-LINK LAYER & MEDIA ACCESS
Introduction – Link-Layer Addressing – DLC Services – Data-Link Layer
Protocols – HDLC – PPP – Media Access Control – Wired LANs: Ethernet –
Wireless LANs – Introduction – IEEE 802.11, Bluetooth – Connecting Devices
1. INTRODUCTION
In the OSI model, the data link layer is the 2nd layer from the bottom.
It is responsible for transmitting frames from one node to the next node.
The main responsibility of the Data Link Layer is to transfer the datagram
across an individual link.
An important characteristic of a Data Link Layer is that datagram can be
handled by different link layer protocols on different links in a path.
The other responsibilities of this layer are
o Framing - Divides the stream of bits received into data units called
frames.
o Physical addressing – If frames are to be distributed to different
systems on the same network, data link layer adds a header to the
frame to define the sender and receiver.
o Flow control - If the rate at which the data are absorbed by the
receiver is less than the rate produced by the sender, the data link
layer imposes a flow control mechanism.
o Error control- Used for detecting and retransmitting damaged or
lost frames and to prevent duplication of frames. This is achieved
through a trailer added at the end of the frame.
o Medium Access control - Used to determine which device has
control over the link at any given time.
2. LINK-LAYER ADDRESSING
A link-layer address is sometimes called a link address, sometimes a
physical address, and sometimes a MAC address.
Since a link is controlled at the data-link layer, the addresses need to belong
to the data-link layer.
When a datagram passes from the network layer to the data-link layer, the
datagram will be encapsulated in a frame and two data-link addresses are
added to the frame header.
These two addresses are changed every time the frame moves from one link
to another.
Multicast Address :
Link-layer protocols define multicast addresses. Multicasting means one-to-
many communication, to a group of nodes but not to all.
Broadcast Address :
Link-layer protocols define a broadcast address. Broadcasting means one-
to-all communication. A frame with a destination broadcast address is sent
to all entities in the link.
o All nodes except the destination discard the packet but update their ARP
table.
o Destination host (System B) constructs an ARP Response packet
o ARP Response is unicast and sent back to the source host (System A).
o Source stores target Logical & Physical address pair in its ARP table from
ARP Response.
o If the target node does not exist on the same network, the ARP request is
sent to the default router.
ARP Packet
3. DLC SERVICES
The data link control (DLC) deals with procedures for communication
between two adjacent nodes—node-to-node communication—no matter
whether the link is dedicated or broadcast.
Data link control service include
(1) Framing (2) Flow Control (3) Error Control
1. FRAMING
The data-link layer packs the bits of a message into frames, so that each
frame is distinguishable from another.
Although the whole message could be packed in one frame, that is not
normally done.
One reason is that a frame can be very large, making flow and error control
very inefficient.
When a message is carried in one very large frame, even a single-bit error
would require the retransmission of the whole frame.
When a message is divided into smaller frames, a single-bit error affects
only that small frame.
Framing in the data-link layer separates a message from one source to a
destination by adding a sender address and a destination address.
The destination address defines where the packet is to go; the sender
address helps the recipient acknowledge the receipt.
Frame Size
Frames can be of fixed or variable size.
Frames of fixed size are called cells. In fixed-size framing, there is no need
for defining the boundaries of the frames; the size itself can be used as a
delimiter.
In variable-size framing, we need a way to define the end of one frame and
the beginning of the next. Two approaches were used for this purpose: a
character-oriented approach and a bit-oriented approach.
Character-Oriented Framing
In character-oriented (or byte-oriented) framing, data to be carried are 8-bit
characters.
To separate one frame from the next, an 8-bit (1-byte) flag is added at the
beginning and the end of a frame.
The flag, composed of protocol-dependent special characters, signals the
start or end of a frame.
Any character used for the flag could also be part of the information.
If this happens, when it encounters this pattern in the middle of the data, the
receiver thinks it has reached the end of the frame.
To fix this problem, a byte-stuffing strategy was added to character-oriented
framing: an extra escape character (ESC) is stuffed before every flag or
escape character in the data, so the receiver knows that the next character is
data, not the end of the frame.
Bit-Oriented Framing
In bit-oriented framing, the data is a sequence of bits, and a special 8-bit
flag (01111110) marks the beginning and the end of the frame.
If the flag pattern appears in the data, the receiver must be informed that
this is not the end of the frame.
This is done by stuffing a single bit (instead of a byte) to prevent the pattern
from looking like a flag. The strategy is called bit stuffing.
Bit Stuffing
Bit stuffing is the process of adding one extra 0 whenever five
consecutive 1s follow a 0 in the data, so that the receiver does not
mistake the pattern 0111110 for a flag.
In bit stuffing, if a 0 and five consecutive 1 bits are encountered, an extra 0
is added.
This extra stuffed bit is eventually removed from the data by the receiver.
The extra bit is added after one 0 followed by five 1’s regardless of the
value of the next bit.
This guarantees that the flag field sequence does not inadvertently appear in
the frame.
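The stuffing rule above can be sketched in a few lines. This is a simplified
illustration: it stuffs a 0 after every run of five consecutive 1s, and the
receiver removes it.

```python
def bit_stuff(bits: str) -> str:
    """Insert an extra 0 after every run of five consecutive 1s."""
    out, ones = [], 0
    for b in bits:
        out.append(b)
        if b == '1':
            ones += 1
            if ones == 5:        # five consecutive 1s seen: stuff a 0
                out.append('0')
                ones = 0
        else:
            ones = 0
    return ''.join(out)

def bit_unstuff(bits: str) -> str:
    """Receiver side: drop the 0 that follows five consecutive 1s."""
    out, ones, skip = [], 0, False
    for b in bits:
        if skip:                 # this 0 was stuffed by the sender; discard it
            skip = False
            ones = 0
            continue
        out.append(b)
        if b == '1':
            ones += 1
            if ones == 5:
                skip = True
        else:
            ones = 0
    return ''.join(out)

data = "0111110111110"
stuffed = bit_stuff(data)
print(stuffed)                   # 011111001111100
assert bit_unstuff(stuffed) == data
```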
2. FLOW CONTROL
o Flow control refers to a set of procedures used to restrict the amount
of data that the sender can send before waiting for acknowledgment.
o The receiving device has limited speed and limited memory to store the
data.
o Therefore, the receiving device must be able to inform the sending device to
stop the transmission temporarily before the limits are reached.
o It requires a buffer, a block of memory for storing the information until they
are processed.
Advantage of Stop-and-wait
o The Stop-and-wait method is simple as each frame is checked and
acknowledged before the next frame is sent
Disadvantages of Stop-And-Wait
o In stop-and-wait, at any point in time, there is only one frame that is sent
and waiting to be acknowledged.
o This is not a good use of transmission medium.
o To improve efficiency, multiple frames should be in transition while
waiting for ACK.
PIGGYBACKING
SLIDING WINDOW
o The Sliding Window is a method of flow control in which a sender can
transmit several frames before getting an acknowledgement.
o In sliding window control, multiple frames can be sent one after another,
so that the capacity of the communication channel can be utilized
efficiently.
o A single ACK can acknowledge multiple frames.
o Sliding Window refers to imaginary boxes at both the sender and receiver
end.
o The window can hold the frames at either end, and it provides the upper
limit on the number of frames that can be transmitted before the
acknowledgement.
o Frames can be acknowledged even when the window is not completely
filled.
o The window has a specific size n, and the frames are numbered modulo n,
which means they are numbered from 0 to n-1.
o For example, if n = 8, the frames are numbered
0,1,2,3,4,5,6,7,0,1,2,3,4,5,6,7,0,1........
o The size of the window is n-1. Therefore, a maximum of n-1 frames can be
sent before an acknowledgement.
o When the receiver sends the ACK, it includes the number of the next frame
that it wants to receive.
o For example, to acknowledge the string of frames ending with frame
number 4, the receiver will send an ACK containing the number 5.
o When the sender sees the ACK with the number 5, it knows that the
frames from 0 through 4 have been received.
(Figure: sender sliding window and receiver sliding window)
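The numbering and acknowledgement rules above can be illustrated with a few
lines of code (modulo-8 numbering and window size 7, as in the example):

```python
# Sliding-window bookkeeping sketch (modulo-8 sequence numbers, window size 7).
MODULO = 8
window_size = MODULO - 1          # at most n-1 = 7 unacknowledged frames

# Sender has sent frames 0..4 and receives ACK 5:
# the ACK number is the NEXT frame the receiver expects.
ack = 5
acknowledged = list(range(0, ack))
print("frames acknowledged:", acknowledged)   # [0, 1, 2, 3, 4]

# The window then slides; the next usable sequence numbers wrap modulo 8.
next_frames = [(ack + i) % MODULO for i in range(window_size)]
print("window now covers:", next_frames)      # [5, 6, 7, 0, 1, 2, 3]
```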
BURST ERROR
The term Burst Error means that two or more bits in the data unit have changed
from 1 to 0 or from 0 to 1.
PARITY CHECK
One bit, called parity bit is added to every data unit so that the total number
of 1’s in the data unit becomes even (or) odd.
The source then transmits this data via a link, and bits are checked and
verified at the destination.
Data is considered accurate if the number of bits (even or odd) matches the
number transmitted from the source.
This technique is the most common and least complex method.
1. Even parity – Maintain even number of 1s
E.g., 1011 → 1011 1
2. Odd parity – Maintain odd number of 1s
E.g., 1011 → 1011 0
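A minimal sketch of the parity scheme, using the examples from the notes:

```python
def add_parity(data: str, even: bool = True) -> str:
    """Append one parity bit so the total count of 1s is even (or odd)."""
    ones = data.count('1')
    if even:
        bit = '0' if ones % 2 == 0 else '1'
    else:
        bit = '1' if ones % 2 == 0 else '0'
    return data + bit

print(add_parity("1011", even=True))    # 10111 (four 1s in total: even)
print(add_parity("1011", even=False))   # 10110 (three 1s in total: odd)

# Receiver check under even parity: a single-bit error flips the parity.
received = "10101"                      # "10111" with one bit flipped
print(received.count('1') % 2 == 0)     # False -> error detected
```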
Steps Involved :
Consider the original message (dataword) as M(x) consisting of ‘k’ bits and
the divisor as C(x) consists of ‘n+1’ bits.
The original message M(x) is appended with 'n' zero bits. Let us call
this zero-extended message T(x).
Divide T(x) by C(x) and find the remainder.
The division operation is performed using XOR operation.
The resultant remainder is appended to the original message M(x) as CRC
and sent by the sender(codeword).
Example 1:
Consider the Dataword / Message M(x) = 1001
Divisor C(x) = 1011 (n+1=4)
Appending ‘n’ zeros to the original Message M(x).
The resultant message is called T(x) = 1001 000 (here n = 3).
Divide T(x) by the divisor C(x) using XOR operation.
(Worked division shown at the sender side, and at the receiver side for both
cases: without error and with error.)
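The division steps above can be reproduced in code. The sketch below implements
modulo-2 (XOR) division and checks the example dataword and divisor from the
notes:

```python
def mod2_div(dividend: str, divisor: str) -> str:
    """Modulo-2 (XOR) division; returns a remainder of len(divisor)-1 bits."""
    n = len(divisor) - 1
    bits = list(dividend)
    for i in range(len(bits) - n):
        if bits[i] == '1':                    # divide only when leading bit is 1
            for j, d in enumerate(divisor):
                bits[i + j] = str(int(bits[i + j]) ^ int(d))
    return ''.join(bits[-n:])

dataword, divisor = "1001", "1011"
remainder = mod2_div(dataword + "000", divisor)   # append n = 3 zeros -> T(x)
codeword = dataword + remainder
print("CRC:", remainder)                          # 110
print("codeword:", codeword)                      # 1001110

# Receiver side: dividing the codeword by C(x) leaves remainder 000 if no error.
print(mod2_div(codeword, divisor))                # 000
# A corrupted codeword leaves a non-zero remainder, so the error is detected.
print(mod2_div("1001010", divisor) != "000")      # True
```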
Polynomials
A pattern of 0s and 1s can be represented as a polynomial with coefficients
of 0 and 1.
The power of each term shows the position of the bit; the coefficient shows
the value of the bit.
INTERNET CHECKSUM
ERROR CONTROL
o Lost Frame: The sender is equipped with a timer that starts when a frame is
transmitted. Sometimes the frame does not arrive at the receiving end, so it
can be acknowledged neither positively nor negatively. The sender waits for
an acknowledgement until the timer goes off. If the timer goes off, the
sender retransmits the last transmitted frame.
SLIDING WINDOW ARQ
1. GO-BACK-N ARQ
o In the Go-Back-N ARQ protocol, if one frame is lost or damaged, the sender
retransmits that frame and all the frames sent after it, i.e., all frames for
which it has not received a positive ACK.
o In the above figure, three frames (Data 0,1,2) have been transmitted before
an error discovered in the third frame.
o The receiver discovers the error in Data 2 frame, so it returns the NAK 2
frame.
o All the frames from the damaged frame onward (Data 2, 3, 4) are discarded,
as they were transmitted after the error.
o Therefore, the sender retransmits the frames (Data 2, 3, 4).
2. SELECTIVE-REJECT(REPEAT) ARQ
1. SIMPLE PROTOCOL
o The first protocol is a simple protocol with neither flow nor error control.
o We assume that the receiver can immediately handle any frame it receives.
o In other words, the receiver can never be overwhelmed with incoming
frames.
o The data-link layers of the sender and receiver provide transmission
services for their network layers.
o The data-link layer at the sender gets a packet from its network layer, makes
a frame out of it, and sends the frame.
o The data-link layer at the receiver receives a frame from the link, extracts
the packet from the frame, and delivers the packet to its network layer.
2. STOP-AND-WAIT PROTOCOL
REFER STOP AND WAIT FROM FLOW CONTROL
3. GO-BACK-N PROTOCOL
REFER GO-BACK-N ARQ FROM ERROR CONTROL
4. SELECTIVE-REPEAT PROTOCOL
REFER SELECTIVE-REPEAT ARQ FROM ERROR CONTROL
HDLC FRAMES
HDLC defines three types of frames:
1. Information frames (I-frames) - used to carry user data
2. Supervisory frames (S-frames) - used to carry control information
3. Unnumbered frames (U-frames) – reserved for system management
Each type of frame serves as an envelope for the transmission of a different type of
message.
Each frame in HDLC may contain up to six fields:
1. Beginning flag field
2. Address field
3. Control field
4. Information field (User Information/ Management Information)
5. Frame check sequence (FCS) field
6. Ending flag field
In multiple-frame transmissions, the ending flag of one frame can serve as the
beginning flag of the next frame.
o Flag field - This field contains synchronization pattern 01111110, which
identifies both the beginning and the end of a frame.
o Address field - This field contains the address of the secondary station. If a
primary station created the frame, it contains a ‘to’ address. If a secondary
station creates the frame, it contains a ‘from’ address. The address field can
be one byte or several bytes long, depending on the needs of the network.
o Control field. The control field is one or two bytes used for flow and error
control.
o Information field. The information field contains the user’s data from the
network layer or management information. Its length can vary from one
network to another.
o FCS field. The frame check sequence (FCS) is the HDLC error detection
field. It can contain either a 16- bit or 32-bit CRC.
o The first bit defines the type. If the first bit of the control field is 0, this
means the frame is an I-frame.
o The next 3 bits, called N(S), define the sequence number of the frame.
o The last 3 bits, called N(R), correspond to the acknowledgment number
when piggybacking is used.
o The single bit between N(S) and N(R) is called the P/F bit. The P/F bit is
meaningful only when set to 1.
o It means poll when the frame is sent by a primary station to a secondary,
and final when the frame is sent by a secondary station to a primary.
o If the first 2 bits of the control field are 10, this means the frame is an S-
frame.
o The last 3 bits, called N(R),correspond to the acknowledgment number
(ACK) or negative acknowledgment number (NAK), depending on the type
of S-frame.
o The 2 bits called code are used to define the type of S-frame itself.
o With 2 bits, we can have four types of S-frames –
Receive ready (RR), Receive not ready (RNR), Reject (REJ) and
Selective reject (SREJ).
o If the first 2 bits of the control field are 11, this means the frame is a U-
frame.
o U-frame codes are divided into two sections: a 2-bit prefix before the P/F
bit and a 3-bit suffix after the P/F bit.
o Together, these two segments (5 bits) can be used to create up to 32
different types of U-frames.
6. POINT-TO-POINT PROTOCOL (PPP)
o Point-to-Point Protocol (PPP) was devised by the IETF (Internet Engineering
Task Force) in the early 1990s to replace the Serial Line Internet Protocol
(SLIP).
o PPP is a data link layer communications protocol used to establish a direct
connection between two nodes.
o It connects two routers directly without any host or any other networking
device in between.
o It is used to connect the Home PC to the server of ISP via a modem.
o It is a byte - oriented protocol that is widely used in broadband
communications having heavy loads and high speeds.
o Since it is a data link layer protocol, data is transmitted in frames. PPP
is defined in RFC 1661.
PPP Frame
PPP is a byte - oriented protocol where each field of the frame is composed of one
or more bytes.
1. Flag − 1 byte that marks the beginning and the end of the frame. The bit
pattern of the flag is 01111110.
2. Address − 1 byte which is set to 11111111 in case of broadcast.
3. Control − 1 byte set to a constant value of 11000000.
4. Protocol − 1 or 2 bytes that define the type of data contained in the payload
field.
5. Payload − This carries the data from the network layer. The maximum
length of the payload field is 1500 bytes.
6. FCS − It is a 2 byte(16-bit) or 4 bytes(32-bit) frame check sequence for
error detection. The standard code used is CRC.
Byte Stuffing in PPP Frame
Byte stuffing is used in the PPP payload field whenever the flag sequence appears
in the message, so that the receiver does not consider it the end of the frame.
The escape byte, 01111101, is stuffed before every byte that is the same as the
flag byte or the escape byte. On receiving the message, the receiver removes the
escape bytes before passing the payload on to the network layer.
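A sketch of the byte-stuffing rule as described above. Note that real PPP
additionally XORs the escaped byte with 0x20; that detail is omitted here to
match the simplified rule in the notes.

```python
FLAG = 0x7E     # 01111110, the PPP flag byte
ESC  = 0x7D     # 01111101, the PPP escape byte

def byte_stuff(payload: bytes) -> bytes:
    """Stuff the escape byte before any flag-like or escape-like data byte."""
    out = bytearray()
    for b in payload:
        if b in (FLAG, ESC):
            out.append(ESC)
        out.append(b)
    return bytes(out)

def byte_unstuff(stuffed: bytes) -> bytes:
    """Receiver side: drop each escape byte, keep the byte that follows."""
    out, escaped = bytearray(), False
    for b in stuffed:
        if not escaped and b == ESC:
            escaped = True      # this escape was stuffed by the sender
            continue
        out.append(b)
        escaped = False
    return bytes(out)

data = bytes([0x41, 0x7E, 0x42, 0x7D])
stuffed = byte_stuff(data)
print(stuffed.hex())            # 417d7e427d7d
assert byte_unstuff(stuffed) == data
```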
Dead: In dead phase the link is not used. There is no active carrier and the
line is quiet.
Establish: The connection goes into this phase when one of the nodes
starts communication. In this phase, the two parties negotiate the
options. If negotiation is successful, the system goes into the
authentication phase or directly to the network phase.
Authenticate: This phase is optional. The two nodes may decide whether
they need this phase during the establishment phase. If they decide to
proceed with authentication, they send several authentication packets. If the
result is successful, the connection goes to the networking phase; otherwise,
it goes to the termination phase.
Network: In the network phase, negotiation for the network layer protocols
takes place. PPP specifies that the two nodes establish a network layer
agreement before data at the network layer can be exchanged. This is
because PPP supports several protocols at network layer. If a node is
running multiple protocols simultaneously at the network layer, the
receiving node needs to know which protocol will receive the data.
Open: In this phase, data transfer takes place. The connection remains in
this phase until one of the endpoints wants to end the connection.
Terminate: In this phase connection is terminated.
Components/Protocols of PPP
Three sets of components/protocols are defined to make PPP powerful:
Link Control Protocol (LCP)
Authentication Protocols (AP)
Network Control Protocols (NCP)
PAP
The Password Authentication Protocol (PAP) is a simple authentication procedure
with a two-step process:
a. The user who wants to access a system sends an authentication
identification (usually the user name) and a password.
b. The system checks the validity of the identification and password and
either accepts or denies connection.
CHAP
The Challenge Handshake Authentication Protocol (CHAP) is a three-way
handshaking authentication protocol that provides greater security than PAP. In
this method, the password is kept secret; it is never sent online.
a. The system sends the user a challenge packet containing a challenge
value.
b. The user applies a predefined function that takes the challenge value and
the user’s own password and creates a result. The user sends the result in
the response packet to the system.
c. The system does the same. It applies the same function to the password of
the user (known to the system) and the challenge value to create a result.
If the result created is the same as the result sent in the response packet,
access is granted; otherwise, it is denied.
CHAP is more secure than PAP, especially if the system continuously changes the
challenge value. Even if the intruder learns the challenge value and the result, the
password is still secret.
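The three-step CHAP exchange can be sketched as follows. The use of MD5 over
secret-plus-challenge is an illustrative assumption; real CHAP (RFC 1994) hashes
a packet identifier, the secret, and the challenge with MD5.

```python
import hashlib
import os

# Hedged sketch of CHAP's challenge-response exchange.
secret = "shared-password"        # known to both sides, never sent on the wire

# a. The system sends the user a random challenge value.
challenge = os.urandom(16)

# b. The user applies a predefined function (here: MD5 over secret + challenge,
#    an assumption for illustration) and sends back the result.
user_result = hashlib.md5(secret.encode() + challenge).hexdigest()

# c. The system applies the same function to its stored password and compares.
system_result = hashlib.md5(secret.encode() + challenge).hexdigest()

print(user_result == system_result)   # True -> access granted
```

Because only the hash crosses the link and each session uses a fresh challenge, an eavesdropper who captures one challenge/result pair cannot replay it later.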
Goals of MAC
1. Fairness in sharing
2. Efficient sharing of bandwidth
3. Need to avoid packet collisions at the receiver due to interference
MAC Management
Medium allocation (collision avoidance)
Contention resolution (collision handling)
MAC Types
Round-Robin : – Each station is given opportunity to transmit in turns.
Either a central controller polls a station to permit to go, or stations can
coordinate among themselves.
Reservation : - Station wishing to transmit makes reservations for time
slots in advance. (Centralized or distributed).
Contention (Random Access) : - No control on who tries; if a collision
occurs, retransmission takes place.
MECHANISMS USED
Wired Networks :
o CSMA / CD – Carrier Sense Multiple Access / Collision Detection
Wireless Networks :
o CSMA / CA – Carrier Sense Multiple Access / Collision Avoidance
Carrier Sense in CSMA/CD means that all the nodes sense the medium to
check whether it is idle or busy.
If the carrier sensed is idle, then the node transmits the entire
frame.
If the carrier sensed is busy, the transmission is postponed.
Collision Detect means that a node listens as it transmits and can therefore
detect when a frame it is transmitting has collided with a frame transmitted
by another node.
Non-Persistent Strategy
In the non-persistent method, a station that has a frame to send senses the
line.
If the line is idle, it sends immediately.
If the line is not idle, it waits a random amount of time and then senses the
line again.
Persistent Strategy
1- Persistent :
The 1-persistent method is simple and straightforward.
In this method, after the station finds the line idle, it sends its frame
immediately (with probability 1).
This method has the highest chance of collision because two or more
stations may find the line idle and send their frames immediately.
P-Persistent :
In this method, after the station finds the line idle it follows these steps:
With probability p, the station sends its frame.
With probability q = 1 − p, the station waits for the beginning of the next
time slot and checks the line again.
The p-persistent method is used if the channel has time slots with a slot
duration equal to or greater than the maximum propagation time.
The p-persistent approach combines the advantages of the other two
strategies. It reduces the chance of collision and improves efficiency.
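The per-slot decision of the p-persistent method can be sketched as below; the
function name and the default value of p are illustrative, not from the notes.

```python
import random

def p_persistent_decision(line_idle: bool, p: float = 0.1) -> str:
    """One time-slot decision of the p-persistent method (sketch)."""
    if not line_idle:
        return "keep sensing"          # line busy: continue sensing
    if random.random() < p:            # with probability p: transmit
        return "send frame"
    return "wait for next slot"        # with probability 1 - p: defer

print(p_persistent_decision(False))          # keep sensing
print(p_persistent_decision(True, p=1.0))    # send frame
```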
EXPONENTIAL BACK-OFF
Once an adaptor has detected a collision and stopped its transmission, it waits
a certain amount of time and tries again.
Each time it tries to transmit but fails, the adaptor doubles the amount of time
it waits before trying again.
This strategy of doubling the delay interval between each retransmission
attempt is a general technique known as exponential back-off.
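The doubling behaviour can be sketched as below. The notes describe doubling
the delay; classic Ethernet doubles the *range* instead, drawing a random
multiple of the slot time. The 51.2 microsecond slot and the cap at 2^10 are
standard 10-Mbps Ethernet values, assumed here for illustration.

```python
import random

SLOT_TIME_US = 51.2   # classic 10-Mbps Ethernet slot time (assumed value)

def backoff_delay(attempt: int) -> float:
    """Binary exponential back-off: after the n-th collision, wait k slot
    times, with k drawn uniformly from 0 .. 2^n - 1 (capped at 2^10)."""
    k = random.randint(0, 2 ** min(attempt, 10) - 1)
    return k * SLOT_TIME_US

# The maximum wait doubles (roughly) with each failed attempt:
for attempt in (1, 2, 3, 4):
    print(f"attempt {attempt}: wait up to "
          f"{(2 ** attempt - 1) * SLOT_TIME_US} microseconds")
```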
CARRIER SENSE MULTIPLE ACCESS / COLLISION AVOIDANCE
(CSMA/CA)
Carrier sense multiple access with collision avoidance (CSMA/CA) was
invented for wireless networks.
Wireless protocol would follow exactly the same algorithm as the
Ethernet—Wait until the link becomes idle before transmitting and back off
should a collision occur.
Collisions are avoided through the use of CSMA/CA’s three strategies: the
interframe space, the contention window, and acknowledgments
EVOLUTION OF ETHERNET
Standard Ethernet (10 Mbps)
The original Ethernet technology, with a data rate of 10 Mbps, is known as
Standard Ethernet.
Standard Ethernet types are
1. 10Base5: Thick Ethernet,
2. 10Base2: Thin Ethernet ,
3. 10Base-T: Twisted-Pair Ethernet
4. 10Base-F: Fiber Ethernet.
The 64-bit preamble allows the receiver to synchronize with the signal; it is
a sequence of alternating 0’s and 1’s.
Both the source and destination hosts are identified with a 48-bit address.
The packet type field serves as the demultiplexing key.
Each frame contains up to 1500 bytes of data(Body).
CRC is used for Error detection
Ethernet Addresses
Every Ethernet host has a unique Ethernet address (48 bits – 6 bytes).
Ethernet address is represented by sequence of six numbers separated by
colons.
Each number corresponds to 1 byte of the 6 byte address and is given by
pair of hexadecimal digits.
Eg: 8:0:2b:e4:b1:2 is the representation of
00001000 00000000 00101011 11100100 10110001 00000010
Each frame transmitted on an Ethernet is received by every adaptor
connected to the Ethernet.
In addition to unicast addresses an Ethernet address consisting of all 1s is
treated as broadcast address.
Similarly the address that has the first bit set to 1 but it is not the broadcast
address is called multicast address.
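The address conventions above can be checked with a short sketch (the helper
names are illustrative). The multicast test uses the low-order bit of the first
byte, which is the first bit transmitted on the wire since Ethernet sends each
byte least-significant bit first:

```python
def expand(addr: str) -> str:
    """Render a colon-separated Ethernet address as 48 bits."""
    return ' '.join(f"{int(part, 16):08b}" for part in addr.split(':'))

print(expand("8:0:2b:e4:b1:2"))
# 00001000 00000000 00101011 11100100 10110001 00000010

def is_broadcast(addr: str) -> bool:
    """Broadcast address: all 48 bits set to 1."""
    return all(int(p, 16) == 0xFF for p in addr.split(':'))

def is_multicast(addr: str) -> bool:
    """Multicast: the first bit sent on the wire (LSB of byte 0) is 1."""
    return bool(int(addr.split(':')[0], 16) & 1)

print(is_broadcast("ff:ff:ff:ff:ff:ff"))   # True
print(is_multicast("1:0:5e:0:0:1"))        # True
print(is_multicast("8:0:2b:e4:b1:2"))      # False
```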
ADVANTAGES OF ETHERNET
Ethernets are successful because
It is extremely easy to administer and maintain. There are no switches that
can fail, no routing or configuration tables that have to be kept up-to-date,
and it is easy to add a new host to the network.
It is inexpensive: Cable is cheap, and the only other cost is the network
adaptor on each host.
ADVANTAGES OF WIRELESS LANs
1. Flexibility: Within radio coverage, nodes can access each other as radio
waves can penetrate even partition walls.
2. Planning : No prior planning is required for connectivity as long as
devices follow standard convention
3. Design : Allows to design and develop mobile devices.
4. Robustness : Wireless network can survive disaster. If the devices survive,
communication can still be established.
Station Types
IEEE 802.11 defines three types of stations based on their mobility in a wireless
LAN:
1. No-transition - A station with no-transition mobility is either stationary
(not moving) or moving only inside a BSS.
2. BSS-transition - A station with BSS-transition mobility can move from
one BSS to another, but the movement is confined inside one ESS
3. ESS-transition - A station with ESS-transition mobility can move from one
ESS to another.
COLLISION AVOIDANCE IN WLAN / 802.11
Wireless protocol would follow exactly the same algorithm as the Ethernet—Wait
until the link becomes idle before transmitting and back off should a collision
occur.
Each of the four nodes is able to send and receive signals that reach just the
nodes to its immediate left and right.
For example, B can exchange frames with A and C but it cannot reach D,
while C can reach B and D but not A.
Suppose B is sending to A. Node C is aware of this communication because
it hears B’s transmission.
Suppose that, at the same time, C wants to transmit to node D.
It would be a mistake for C to conclude that it cannot transmit to
anyone just because it can hear B’s transmission.
This is not a problem, since C’s transmission to D will not interfere with A’s
ability to receive from B.
This is called the exposed node problem.
Although B and C are exposed to each other’s signals, there is no
interference if B transmits to A while C transmits to D.
Two nodes can communicate directly with each other if they are within
reach of each other,
When the nodes are at different range, for example when node A wish to
communicate with node E, A first sends a frame to its access point (AP-1),
which forwards the frame across the distribution system to AP-3, which
finally transmits the frame to E.
Active Scanning
When node C moves from the cell serviced by AP-1 to the cell serviced by AP-2.
As it moves, it sends Probe frames, which eventually result in Probe Response.
Since the node is actively searching for an access point it is called active
scanning.
Passive Scanning
AP’s periodically send a Beacon frame to the nodes that advertises the
capabilities of the access point which includes the transmission rates supported by
the AP. This is called passive scanning and a node can change to this AP based on
the Beacon frame simply by sending it an Association Request frame back to the
access point.
When both the DS bits are set to 1, it indicates that one node is
sending the message to another indirectly using the distribution
system.
Duration - contains the duration of time the medium is occupied by the nodes.
Addr 1 - identifies the ultimate (final) destination
Addr 2 - identifies the immediate sender (the one that forwarded the frame
from the distribution system to the ultimate destination)
Addr 3 - identifies the intermediate destination (the one that accepted the
frame from a wireless node and forwarded it across the distribution
system)
Addr 4 - identifies the original source
Sequence Control - to avoid duplication of frames sequence number is
assigned to each frame
Payload - Data from sender to receiver
CRC - used for Error detection of the frame.
PICONET
The basic Bluetooth network configuration is called a piconet.
A piconet is a collection of up to eight synchronized Bluetooth devices.
One device in the piconet can act as Primary (Master), all other devices
connected to the master act as Secondary (Slaves).
All the secondary stations synchronize their clocks and hopping sequence
with the primary.
SCATTERNET
Piconets can be combined to form what is called a scatternet: many piconets
with overlapping coverage can exist simultaneously.
A secondary station in one piconet can be the primary in another piconet.
This station can receive messages from the primary in the first piconet (as a
secondary) and, acting as a primary, deliver them to secondaries in the second
piconet.
A station can be a member of two piconets.
In the example given below, there are two piconets, in which one slave
participates in two different piconets.
Master of one piconet cannot act as the master of another piconet.
But the Master of one piconet can act as a Slave in another piconet
BLUETOOTH LAYERS
Radio Layer
The radio layer is roughly equivalent to the physical layer of the Internet
model.
Bluetooth uses the frequency-hopping spread spectrum (FHSS) method
in the physical layer to avoid interference from other devices or other
networks.
Bluetooth hops 1600 times per second, which means that each device
changes its modulation frequency 1600 times per second.
To transform bits to a signal, Bluetooth uses a sophisticated version of FSK,
called GFSK.
Baseband Layer
The baseband layer is roughly equivalent to the MAC sublayer in LANs.
The access method is TDMA.
The primary and secondary stations communicate with each other using
time slots. The length of a time slot is exactly 625 µs.
During that time, a primary sends a frame to a secondary, or a secondary
sends a frame to the primary.
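The two figures above fit together arithmetically: hopping 1600 times per second gives a dwell time of 1/1600 s = 625 µs, which is exactly the length of one baseband time slot. A one-line check in Python:

```python
# 1600 hops per second implies a dwell time of 625 microseconds per slot,
# matching the baseband slot length quoted above.
HOPS_PER_SECOND = 1600
slot_us = 1_000_000 / HOPS_PER_SECOND  # microseconds per slot
print(slot_us)  # 625.0
```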
L2CAP
The Logical Link Control and Adaptation Protocol, or L2CAP (L2 here
means LL) is equivalent to the LLC sublayer in LANs.
It is used for data exchange on an ACL link.
SCO channels do not use L2CAP.
The L2CAP functions are : multiplexing, segmentation and reassembly,
quality of service (QoS), and group management.
11. CONNECTING DEVICES
Connecting devices are used to connect hosts together to make a network or
to connect networks together to make an internet.
Connecting devices can operate in different layers of the Internet model.
Connecting devices are divided into categories on the basis of the layer in
which they operate in the network.
1. HUBS
i) Passive Hub
It is a connector, which connects wires coming from the different branches.
By using passive hub, each computer can receive the signal which is sent
from all other computers connected in the hub.
2. REPEATERS
A repeater receives the signal and it regenerates the signal in original bit
pattern before the signal gets too weak or corrupted.
It is used to extend the physical distance of LAN.
Repeater works on physical layer.
A repeater has no filtering capability.
A repeater is implemented in computer networks to expand the coverage
area of the network, repropagate a weak or broken signal and or service
remote nodes.
Rather than simply amplifying the received/input signal, a repeater
regenerates it at its original strength so that it can cover a longer distance.
Repeaters are also known as signal boosters or range extender.
A repeater cannot connect two LANs, but it connects two segments of the
same LAN.
3. BRIDGES
Types of Bridges :
Transparent Bridges
These are bridges in which the stations are completely unaware of the
bridge’s existence, i.e., whether a bridge is added to or deleted from the
network, reconfiguration of the stations is unnecessary.
CS 8591 - Computer Networks Unit- II
Translation Bridges
These bridges connect networks with different architectures, such as Ethernet
and Token Ring. These bridges appear as:
– Transparent bridges to an Ethernet host
– Source-routing bridges to a Token Ring host
4. SWITCHES
◾ A switch is a small hardware device which is used to join multiple
computers together with one local area network (LAN).
◾ A switch is a mechanism that allows us to interconnect links to form a large
network.
Input ports receive a stream of packets, analyze the header, determine the
output port, and pass the packet onto the switching fabric.
Ports contain buffers to hold packets before they are forwarded.
If buffer space is unavailable, packets are dropped.
If packets at several input ports queue for a single output port, only one
of them is forwarded at a time.
Types of Switch
5. ROUTERS
A router is a three-layer device.
It operates in the physical, data-link, and network layers.
As a physical-layer device, it regenerates the signal it receives.
As a link-layer device, the router checks the physical addresses (source and
destination) contained in the packet.
As a network-layer device, a router checks the network-layer addresses.
A router is a device like a switch that routes data packets based on their IP
addresses.
A router can connect networks. A router connects the LANs and WANs on
the internet.
A router is an internetworking device.
It connects independent networks to form an internetwork.
The key function of the router is to determine the shortest path to the
destination.
Router has a routing table, which is used to make decision on selecting the
route.
The routing table is updated dynamically based on which they make
decisions on routing the data packets.
6. GATEWAY
7. BROUTER
Brouter is a hybrid device. It combines the features of both bridge and
router.
Brouter is a combination of Bridge and Router.
Functions as a bridge for nonroutable protocols and a router for routable
protocols.
As a router, it is capable of routing packets across networks.
As a bridge, it is capable of filtering local area network traffic.
Provides the best attributes of both a bridge and a router
Operates at both the Data Link and Network layers and can replace separate
bridges and routers.
UNIT III - NETWORK LAYER
PACKETIZING
The first duty of the network layer is definitely packetizing.
This means encapsulating the payload (data received from upper layer) in a
network-layer packet at the source and decapsulating the payload from the
network-layer packet at the destination.
The network layer is responsible for delivery of packets from a sender to a
receiver without changing or using the contents.
ERROR CONTROL
The network layer in the Internet does not directly provide error control.
It adds a checksum field to the datagram to control any corruption in the
header, but not in the whole datagram.
This checksum prevents any changes or corruptions in the header of the
datagram.
The Internet uses an auxiliary protocol called ICMP, which provides some kind
of error control if the datagram is discarded or has some unknown information
in the header.
FLOW CONTROL
Flow control regulates the amount of data a source can send without
overwhelming the receiver.
The network layer in the Internet, however, does not directly provide any flow
control.
The datagrams are sent by the sender when they are ready, without any
attention to the readiness of the receiver.
Flow control is provided for most of the upper-layer protocols that use the
services of the network layer, so another level of flow control makes the
network layer more complicated and the whole system less efficient.
CONGESTION CONTROL
Another issue in a network-layer protocol is congestion control.
Congestion in the network layer is a situation in which too many datagrams are
present in an area of the Internet.
Congestion may occur if the number of datagrams sent by source computers is
beyond the capacity of the network or routers.
In this situation, some routers may drop some of the datagrams.
SECURITY
Another issue related to communication at the network layer is security.
To provide security for a connectionless network layer, we need to have
another virtual level that changes the connectionless service to a connection-
oriented service. This virtual layer is called IPSec (IP Security).
2. PACKET SWITCHING
( REFER THE TOPIC PACKET SWITCHING FROM UNIT – I )
3. NETWORK-LAYER PERFORMANCE
The performance of a network can be measured in terms of
Delay, Throughput and Packet loss.
Congestion control is an issue that can improve the performance.
DELAY
A packet from its source to its destination, encounters delays.
The delays in a network can be divided into four types:
Transmission delay, Propagation delay, Processing delay and Queuing delay.
Transmission Delay
A source host or a router cannot send a packet instantaneously.
A sender needs to put the bits in a packet on the line one by one.
If the first bit of the packet is put on the line at time t1 and the last bit is put on
the line at time t2, the transmission delay of the packet is (t2 - t1).
The transmission delay is longer for a longer packet and shorter if the sender
can transmit faster.
The Transmission delay is calculated using the formula
Delaytr = (Packet length) / (Transmission rate)
Example :
In a Fast Ethernet LAN with the transmission rate of 100 million bits per
second and a packet of 10,000 bits, it takes (10,000)/(100,000,000) or
100 microseconds for all bits of the packet to be put on the line.
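The formula above can be sketched as a small Python helper (the function name is illustrative), reproducing the Fast Ethernet example:

```python
def transmission_delay(packet_bits: float, rate_bps: float) -> float:
    """Delay_tr = packet length / transmission rate, in seconds."""
    return packet_bits / rate_bps

# Fast Ethernet example from the text: 10,000 bits at 100 Mbps.
d = transmission_delay(10_000, 100_000_000)
print(d * 1e6, "microseconds")  # 100.0 microseconds
```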
Propagation Delay
Propagation delay is the time it takes for a bit to travel from point A to point B
in the transmission media.
The propagation delay for a packet-switched network depends on the
propagation delay of each network (LAN or WAN).
The propagation delay depends on the propagation speed of the media, which is
3 × 10^8 meters/second in a vacuum and normally much less in a wired medium.
It also depends on the distance of the link.
The Propagation delay is calculated using the formula
Delaypg = (Distance) / (Propagation speed)
Example
If the distance of a cable link in a point-to-point WAN is 2000 meters and
the propagation speed of the bits in the cable is 2 × 10^8 meters/second, then
the propagation delay is 10 microseconds.
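The same kind of sketch works for propagation delay (again, the helper name is illustrative):

```python
def propagation_delay(distance_m: float, speed_mps: float) -> float:
    """Delay_pg = distance / propagation speed, in seconds."""
    return distance_m / speed_mps

# WAN example from the text: 2000 m of cable at 2 x 10^8 m/s.
d = propagation_delay(2000, 2e8)
print(d * 1e6, "microseconds")  # 10.0 microseconds
```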
Processing Delay
The processing delay is the time required for a router or a destination host to
receive a packet from its input port, remove the header, perform an error
detection procedure, and deliver the packet to the output port (in the case of a
router) or deliver the packet to the upper-layer protocol (in the case of the
destination host).
The processing delay may be different for each packet, but normally is
calculated as an average.
Queuing Delay
Queuing delay can normally happen in a router.
A router has an input queue connected to each of its input ports to store packets
waiting to be processed.
The router also has an output queue connected to each of its output ports to
store packets waiting to be transmitted.
The queuing delay for a packet in a router is measured as the time a packet
waits in the input queue and output queue of a router.
Delayqu = The time a packet waits in input and output queues in a router
Total Delay
Assuming equal delays for the sender, routers and receiver, the total delay
(source-to-destination delay) of a packet can be calculated if we know the
number of routers, n, in the whole path.
Total delay = (n + 1) (Delaytr + Delaypg + Delaypr) + (n) (Delayqu)
If we have n routers, we have (n +1) links.
Therefore, we have (n +1) transmission delays related to n routers and the
source, (n +1) propagation delays related to (n +1) links, (n +1) processing
delays related to n routers and the destination, and only n queuing delays
related to n routers.
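Putting the formula into code makes the counting argument concrete. A minimal sketch with made-up per-hop delays (the numbers below are illustrative, not from the text):

```python
def total_delay(n_routers: int, d_tr: float, d_pg: float,
                d_pr: float, d_qu: float) -> float:
    """Total source-to-destination delay, assuming equal per-hop delays:
    (n + 1)(Delay_tr + Delay_pg + Delay_pr) + n * Delay_qu."""
    n = n_routers
    return (n + 1) * (d_tr + d_pg + d_pr) + n * d_qu

# Illustrative numbers: 3 routers, all delays in milliseconds.
print(total_delay(3, 1.0, 0.5, 0.2, 2.0))  # 4 * 1.7 + 3 * 2.0 = 12.8 ms
```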
THROUGHPUT
Throughput at any point in a network is defined as the number of bits passing
through the point in a second, which is actually the transmission rate of data at
that point.
In a path from source to destination, a packet may pass through several links
(networks), each with a different transmission rate.
Throughput is calculated using the formula
Throughput = minimum{TR1 , TR2, . . . TRn}
Example:
Let us assume that we have three links, each with a different transmission
rate.
The data can flow at the rate of 200 kbps in Link1, 100 kbps in Link2 and
150kbps in Link3.
Throughput = minimum{200,100,150} = 100.
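The bottleneck rule above is a one-liner in Python (the helper name is illustrative):

```python
def path_throughput(rates_kbps):
    """Path throughput is limited by the slowest link (the bottleneck)."""
    return min(rates_kbps)

# Example from the text: three links at 200, 100 and 150 kbps.
print(path_throughput([200, 100, 150]))  # 100
```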
PACKET LOSS
Another issue that severely affects the performance of communication is the
number of packets lost during transmission.
When a router receives a packet while processing another packet, the received
packet needs to be stored in the input buffer waiting for its turn.
A router has an input buffer with a limited size.
A time may come when the buffer is full and the next packet needs to be
dropped.
The effect of packet loss on the Internet network layer is that the packet needs
to be resent, which in turn may create overflow and cause more packet loss.
CONGESTION CONTROL
Congestion at the network layer is related to two issues, throughput and delay.
Based on Delay
When the load is much less than the capacity of the network, the delay is at a
minimum.
This minimum delay is composed of propagation delay and processing delay,
both of which are negligible.
However, when the load reaches the network capacity, the delay increases
sharply because we now need to add the queuing delay to the total delay.
The delay becomes infinite when the load is greater than the capacity.
Based on Throughput
When the load is below the capacity of the network, the throughput increases
proportionally with the load.
We expect the throughput to remain constant after the load reaches the
capacity, but instead the throughput declines sharply.
The reason is the discarding of packets by the routers.
When the load exceeds the capacity, the queues become full and the routers
have to discard some packets.
Discarding packets does not reduce the number of packets in the network
because the sources retransmit the packets, using time-out mechanisms, when
the packets do not reach the destinations.
Retransmission Policy
Retransmission is sometimes unavoidable.
If the sender feels that a sent packet is lost or corrupted, the packet
needs to be retransmitted.
Retransmission in general may increase congestion in the network.
However, a good retransmission policy can prevent congestion.
The retransmission policy and the retransmission timers must be
designed to optimize efficiency and at the same time prevent congestion.
Window Policy
The type of window at the sender may also affect congestion.
The Selective Repeat window is better than the Go-Back-N window for
congestion control.
In the Go-Back-N window, when the timer for a packet times out,
several packets may be resent, although some may have arrived safe and
sound at the receiver.
This duplication may make the congestion worse.
The Selective Repeat window, on the other hand, tries to send the
specific packets that have been lost or corrupted.
Acknowledgment Policy
The acknowledgment policy imposed by the receiver may also affect
congestion.
If the receiver does not acknowledge every packet it receives, it may
slow down the sender and help prevent congestion.
Several approaches are used in this case.
A receiver may send an acknowledgment only if it has a packet to be
sent or a special timer expires.
A receiver may decide to acknowledge only N packets at a time.
Sending fewer acknowledgments means imposing less load on the
network.
Discarding Policy
A good discarding policy by the routers may prevent congestion and at
the same time may not harm the integrity of the transmission.
For example, in audio transmission, if the policy is to discard less
sensitive packets when congestion is likely to happen, the quality of
sound is still preserved and congestion is prevented or alleviated.
Admission Policy
An admission policy, which is a quality-of-service mechanism can also
prevent congestion in virtual-circuit networks.
Switches in a flow first check the resource requirement of a flow before
admitting it to the network.
A router can deny establishing a virtual-circuit connection if there is
congestion in the network or if there is a possibility of future congestion.
Backpressure
The technique of backpressure refers to a congestion control mechanism
in which a congested node stops receiving data from the immediate
upstream node or nodes.
This may cause the upstream node or nodes to become congested, and
they, in turn, reject data from their upstream node or nodes, and so on.
Backpressure is a node-to- node congestion control that starts with a
node and propagates, in the opposite direction of data flow, to the
source.
The backpressure technique can be applied only to virtual circuit
networks, in which each node knows the upstream node from which a
flow of data is coming.
Choke Packet
A choke packet is a packet sent by a node to the source to inform it of
congestion.
In backpressure, the warning is from one node to its upstream node,
although the warning may eventually reach the source station.
In the choke-packet method, the warning is from the router, which has
encountered congestion, directly to the source station.
The intermediate nodes through which the packet has traveled are not
warned.
The warning message goes directly to the source station; the
intermediate routers do not take any action.
Implicit Signaling
In implicit signaling, there is no communication between the congested
node or nodes and the source.
The source guesses that there is congestion somewhere in the network
from other symptoms.
For example, when a source sends several packets and there is no
acknowledgment for a while, one assumption is that the network is
congested.
The delay in receiving an acknowledgment is interpreted as congestion
in the network; the source should slow down.
Explicit Signaling
The node that experiences congestion can explicitly send a signal to the
source or destination.
The explicit-signaling method is different from the choke-packet
method.
In the choke-packet method, a separate packet is used for this purpose;
in the explicit-signaling method, the signal is included in the packets
that carry data.
Explicit signaling can occur in either the forward or the backward
direction.
4.IPV4 ADDRESSES
The identifier used in the IP layer of the TCP/IP protocol suite to identify the
connection of each device to the Internet is called the Internet address or IP
address.
Internet Protocol version 4 (IPv4) is the fourth version in the development of
the Internet Protocol (IP) and the first version of the protocol to be widely
deployed.
IPv4 is described in IETF RFC 791, published in September 1981.
The IP address is the address of the connection, not the host or the router. An
IPv4 address is a 32-bit address that uniquely and universally defines the
connection.
If the device is moved to another network, the IP address may be changed.
IPv4 addresses are unique in the sense that each address defines one, and only
one, connection to the Internet.
If a device has two connections to the Internet, via two networks, it has two
IPv4 addresses.
IPv4 addresses are universal in the sense that the addressing system must be
accepted by any host that wants to be connected to the Internet.
In binary notation, an IPv4 address is displayed as 32 bits. To make the address more
readable, one or more spaces are usually inserted between bytes (8 bits).
In hexadecimal notation, each hexadecimal digit is equivalent to four bits. This means
that a 32-bit address has 8 hexadecimal digits. This notation is often used in network
programming.
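The notations described above can be demonstrated with Python's standard ipaddress module (the sample address is purely illustrative):

```python
import ipaddress

# Sample address in dotted-decimal notation (illustrative).
addr = ipaddress.IPv4Address("192.168.1.10")

# Binary notation: 32 bits, with a space between bytes for readability.
binary = " ".join(f"{b:08b}" for b in addr.packed)

# Hexadecimal notation: each hex digit is 4 bits, so 8 digits in total.
hexadecimal = f"{int(addr):08X}"

print(binary)        # 11000000 10101000 00000001 00001010
print(hexadecimal)   # C0A8010A
```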
CLASSFUL ADDRESSING
An IPv4 address is 32-bit long(4 bytes).
An IPv4 address is divided into sub-classes:
Classful Network Architecture
Class A
In Class A, an IP address is assigned to those networks that contain a large
number of hosts.
The network ID is 8 bits long.
The host ID is 24 bits long.
In Class A, the first (highest-order) bit of the first octet is always set to 0,
and the remaining 7 bits determine the network ID.
The remaining 24 bits determine the host ID within the network.
The total number of networks in Class A = 2^7 = 128 network addresses
The total number of hosts in Class A = 2^24 - 2 = 16,777,214 host addresses
Class B
In Class B, an IP address is assigned to those networks that range from small-
sized to large-sized networks.
The Network ID is 16 bits long.
The Host ID is 16 bits long.
In Class B, the two higher-order bits of the first octet are always set to 10,
and the remaining 14 bits determine the network ID.
The other 16 bits determine the Host ID.
The total number of networks in Class B = 2^14 = 16,384 network addresses
The total number of hosts in Class B = 2^16 - 2 = 65,534 host addresses
Class C
In Class C, an IP address is assigned to only small-sized networks.
The Network ID is 24 bits long.
The host ID is 8 bits long.
In Class C, the three higher-order bits of the first octet are always set to 110,
and the remaining 21 bits determine the network ID.
The 8 bits of the host ID determine the host in a network.
The total number of networks = 2^21 = 2,097,152 network addresses
The total number of hosts = 2^8 - 2 = 254 host addresses
Class D
In Class D, an IP address is reserved for multicast addresses.
It does not possess subnetting.
The four higher-order bits of the first octet are always set to 1110, and the
remaining 28 bits define the multicast group address.
Class E
In Class E, an IP address is used for the future use or for the research and
development purposes.
It does not possess any subnetting.
The four higher-order bits of the first octet are always set to 1111, and the
remaining bits are reserved for future use.
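The class rules above reduce to simple ranges of the first octet, which a short Python sketch (the helper name is illustrative) can capture:

```python
def ipv4_class(address: str) -> str:
    """Classify an IPv4 address by the leading bits of its first octet."""
    first = int(address.split(".")[0])
    if first < 128:    # leading bit 0       (0 - 127)
        return "A"
    if first < 192:    # leading bits 10     (128 - 191)
        return "B"
    if first < 224:    # leading bits 110    (192 - 223)
        return "C"
    if first < 240:    # leading bits 1110   (224 - 239, multicast)
        return "D"
    return "E"         # leading bits 1111   (240 - 255, reserved)

print(ipv4_class("10.0.0.1"), ipv4_class("172.16.5.9"),
      ipv4_class("200.1.2.3"), ipv4_class("224.0.0.5"))
```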
Address Depletion in Classful Addressing
The reason that classful addressing has become obsolete is address depletion.
Since the addresses were not distributed properly, the Internet was faced with
the problem of the addresses being rapidly used up.
This results in no more addresses available for organizations and individuals
that needed to be connected to the Internet.
To understand the problem, let us think about class A.
This class can be assigned to only 128 organizations in the world, but each
organization needs to have a single network with 16,777,216 nodes .
Since there may be only a few organizations that are this large, most of the
addresses in this class were wasted (unused).
Class B addresses were designed for midsize organizations, but many of the
addresses in this class also remained unused.
Class C addresses have a completely different flaw in design. The number of
addresses that can be used in each network (256) was so small that most
companies were not comfortable using a block in this address class.
Class E addresses were almost never used, wasting the whole class.
Subnetting
In subnetting, a class A or class B block is divided into several subnets.
Each subnet has a larger prefix length than the original network.
For example, if a network in class A is divided into four subnets, each subnet
has a prefix length of n_sub = 8 + 2 = 10 bits (two extra bits distinguish the
four subnets).
At the same time, if all of the addresses in a network are not used, subnetting
allows the addresses to be divided among several organizations.
CLASSLESS ADDRESSING
In 1996, the Internet authorities announced a new architecture called classless
addressing.
In classless addressing, variable-length blocks are used that belong to no
classes.
We can have a block of 1 address, 2 addresses, 4 addresses, 128 addresses, and
so on.
In classless addressing, the whole address space is divided into variable length
blocks.
The prefix in an address defines the block (network); the suffix defines the
node (device).
Theoretically, we can have a block of 2^0, 2^1, 2^2, ..., 2^32 addresses.
The number of addresses in a block needs to be a power of 2. An organization
can be granted one block of addresses.
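Python's standard ipaddress module can illustrate such a block (the block chosen here is only an example):

```python
import ipaddress

# An illustrative /26 block: the suffix has 32 - 26 = 6 bits,
# so the block holds 2^6 = 64 addresses (a power of 2, as required).
block = ipaddress.ip_network("205.16.37.192/26")

print(block.num_addresses)       # 64
print(block.network_address)     # first address: the prefix defines the block
print(block.broadcast_address)   # last address in the block
```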
Address Aggregation
One of the advantages of the CIDR strategy is address aggregation
(sometimes called address summarization or route summarization).
When blocks of addresses are combined to create a larger block, routing can be
done based on the prefix of the larger block.
ICANN assigns a large block of addresses to an ISP.
Each ISP in turn divides its assigned block into smaller subblocks and grants
the subblocks to its customers.
Limited-broadcast Address
The only address in the block 255.255.255.255/32 is called the limited-
broadcast address.
It is used whenever a router or a host needs to send a datagram to all devices in
a network.
The routers in the network, however, block the packet having this address as
the destination;the packet cannot travel outside the network.
Loopback Address
The block 127.0.0.0/8 is called the loopback address.
A packet with one of the addresses in this block as the destination address
never leaves the host; it will remain in the host.
Private Addresses
Four blocks are assigned as private addresses: 10.0.0.0/8, 172.16.0.0/12,
192.168.0.0/16, and 169.254.0.0/16.
Multicast Addresses
The block 224.0.0.0/4 is reserved for multicast addresses.
A DHCP packet is actually sent using a protocol called the User Datagram
Protocol (UDP).
NETWORK ADDRESS TRANSLATION (NAT)
A technology that can provide the mapping between private and universal
(external) addresses, and at the same time support virtual private networks,
is called Network Address Translation (NAT).
The technology allows a site to use a set of private addresses for internal
communication and a set of global Internet addresses (at least one) for
communication with the rest of the world.
The site must have only one connection to the global Internet through a NAT-
capable router that runs NAT software.
Address Translation
All of the outgoing packets go through the NAT router, which replaces the
source address in the packet with the global NAT address.
All incoming packets also pass through the NAT router, which replaces the
destination address in the packet (the NAT router global address) with the
appropriate private address.
Translation Table
There may be tens or hundreds of private IP addresses, each belonging to one
specific host.
The problem arises when we want to translate the source address to an external
address. This is solved if the NAT router has a translation table.
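A minimal Python sketch of this idea (the class name, addresses and port numbers are all illustrative; a real NAT router also tracks protocols, timeouts, and more):

```python
# Illustrative NAT sketch: outgoing packets are rewritten to carry the
# router's global address, and a translation table maps replies back
# to the originating private host.
class NatRouter:
    def __init__(self, global_addr: str):
        self.global_addr = global_addr
        self.table = {}  # external port -> (private address, private port)

    def outgoing(self, private_addr: str, private_port: int, ext_port: int):
        """Record the mapping and return the rewritten source (addr, port)."""
        self.table[ext_port] = (private_addr, private_port)
        return (self.global_addr, ext_port)

    def incoming(self, ext_port: int):
        """Map a reply arriving at the global address back to the private host."""
        return self.table[ext_port]

nat = NatRouter("200.24.5.8")
print(nat.outgoing("172.18.3.1", 1400, 14090))  # ('200.24.5.8', 14090)
print(nat.incoming(14090))                      # ('172.18.3.1', 1400)
```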
FORWARDING OF IP PACKETS
Forwarding means to deliver the packet to the next hop (which can be the final
destination or the intermediate connecting device).
Although IP protocol was originally designed as a connectionless protocol,
today the tendency is to use IP as a connection-oriented protocol based on the
label attached to an IP datagram .
When IP is used as a connectionless protocol, forwarding is based on the
destination address of the IP datagram.
When the IP is used as a connection-oriented protocol, forwarding is based on
the label attached to an IP datagram.
To do this, it compares the network part of the destination address with the
network part of the address of each of its network interfaces. (Hosts normally
have only one interface, while routers normally have two or more, since they
are typically connected to two or more networks.)
If a match occurs, the destination lies on the same physical network as the
interface, and the packet can be delivered directly over that network.
If there is no match, the node is not connected to the same physical network
as the destination node, and it needs to send the packet to a router.
In general, each node will have a choice of several routers, and so it needs to
pick the best one, or at least one that has a reasonable chance of getting the
datagram closer to its destination.
The router that it chooses is known as the next hop router.
The router finds the correct next hop by consulting its forwarding table. The
forwarding table is conceptually just a list of (NetworkNum, NextHop) pairs.
There is also a default router that is used if none of the entries in the table
matches the destination’s network number.
All Packets destined for hosts not on the physical network to which the sending
host is attached will be sent out through the default router.
Forwarding Algorithm
The job of the forwarding module is to search the table, row by row.
In each row, the n leftmost bits of the destination address (prefix) are kept and
the rest of the bits (suffix) are set to 0s.
If the resulting address (the network address) matches the address in the first
column, the information in the next two columns is extracted; otherwise the
search continues. Normally, the last row has a default value in the first column,
which matches all destination addresses that did not match the previous rows.
Routing in classless addressing uses another principle, longest mask
matching.
This principle states that the forwarding table is sorted from the longest mask
to the shortest mask.
In other words, if there are three masks, /27, /26, and /24, the mask /27 must be
the first entry and /24 must be the last.
Example
Let us make a forwarding table for router R1 using the configuration as given
in the figure above
When a packet arrives whose leftmost 26 bits in the destination address match
the bits in the first row, the packet is sent out from interface m2.
When a packet arrives whose leftmost 25 bits in the address match the bits in
the second row, the packet is sent out from interface m0, and so on.
The table clearly shows that the first row has the longest prefix and the fourth
row has the shortest prefix.
The longer prefix means a smaller range of addresses; the shorter prefix means
a larger range of addresses.
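The longest-mask-ordered lookup described above can be sketched in Python with the standard ipaddress module (the table entries below are illustrative, not taken from the figure):

```python
import ipaddress

# Forwarding table sorted from longest mask to shortest, as the text
# requires, so the first match found is the longest-prefix match.
table = [
    (ipaddress.ip_network("180.70.65.192/27"), "m2"),
    (ipaddress.ip_network("180.70.65.128/26"), "m0"),
    (ipaddress.ip_network("201.4.22.0/24"),    "m3"),
    (ipaddress.ip_network("0.0.0.0/0"),        "default"),
]

def next_hop(dest: str) -> str:
    addr = ipaddress.ip_address(dest)
    for network, interface in table:   # longest prefix checked first
        if addr in network:
            return interface
    raise LookupError("no route")

print(next_hop("180.70.65.200"))  # m2 (the /27 matches before the /26)
print(next_hop("201.4.22.35"))    # m3
```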
IP - INTERNET PROTOCOL
The Internet Protocol is the key tool used today to build scalable,
heterogeneous internetworks.
IP runs on all the nodes (both hosts and routers) in a collection of networks
IP defines the infrastructure that allows these nodes and networks to function
as a single logical internetwork.
IP SERVICE MODEL
Service Model defines the host-to-host services that we want to provide
The main concern in defining a service model for an internetwork is that we can
provide a host-to-host service only if this service can somehow be provided over
each of the underlying physical networks.
The Internet Protocol is the key tool used today to build scalable, heterogeneous
internetworks.
The IP service model can be thought of as having two parts:
A GLOBAL ADDRESSING SCHEME - which provides a way to
identify all hosts in the internetwork
A DATAGRAM DELIVERY MODEL – A connectionless model of data
delivery.
FIELD DESCRIPTION
Version - Specifies the version of IP. Two versions exist: IPv4 and IPv6.
HLen - Specifies the length of the header.
TOS (Type of Service) - An indication of the quality-of-service parameters
desired, such as Precedence, Delay, Throughput and Reliability.
Length - Length of the entire datagram, including the header. The maximum
size of an IP datagram is 65,535 (2^16 - 1) bytes.
Ident (Identification) - Uniquely identifies the packet. Used for
fragmentation and reassembly.
Flags - Used to control whether routers are allowed to fragment a packet,
and to indicate whether more fragments follow (the M bit is 1 for every
fragment except the last, where it is 0).
Offset (Fragmentation offset) - Indicates where in the original datagram this
fragment belongs. The fragment offset is measured in units of 8 octets
(64 bits). The first fragment has offset zero.
TTL (Time to Live) - Indicates the maximum time the datagram is allowed to
remain in the network. If this field contains the value zero, the datagram
must be destroyed.
Protocol - Indicates the next-level protocol used in the data portion of the
datagram.
Checksum - Used to detect processing errors introduced into the packet.
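The Checksum field uses the standard Internet checksum: the header is summed as 16-bit words in ones'-complement arithmetic and the result is complemented. A small Python sketch, applied to an illustrative 20-byte header:

```python
def internet_checksum(header: bytes) -> int:
    """Ones'-complement sum of 16-bit words, then complemented."""
    if len(header) % 2:
        header += b"\x00"          # pad to a whole number of 16-bit words
    total = 0
    for i in range(0, len(header), 2):
        total += (header[i] << 8) | header[i + 1]
    while total >> 16:             # fold carries back into the low 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# Sample 20-byte header (illustrative) with the checksum field
# (bytes 10-11) zeroed before computing.
header = bytes.fromhex("450000730000400040110000c0a80001c0a800c7")
csum = internet_checksum(header)
print(hex(csum))  # 0xb861

# Verification property: re-checksumming a header that already carries
# its checksum yields 0.
full = header[:10] + csum.to_bytes(2, "big") + header[12:]
print(internet_checksum(full))  # 0
```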
Example:
The original packet starts at the client; the fragments are reassembled at the
server.
The value of the identification field is the same in all fragments, as is the value
of the flags field with the more bit set for all fragments except the last.
Also, the value of the offset field for each fragment is shown.
Although the fragments arrived out of order at the destination, they can be
correctly reassembled.
The value of the offset field is always relative to the original datagram.
Even if each fragment follows a different path and arrives out of order, the
final destination host can reassemble the original datagram from the
fragments received (if none of them is lost) using the following strategy:
1) The first fragment has an offset field value of zero.
2) Divide the length of the first fragment by 8. The second fragment has an
offset value equal to that result.
3) Divide the total length of the first and second fragment by 8. The third
fragment has an offset value equal to that result.
4) Continue the process. The last fragment has its M (more) bit set to 0.
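The offset computation in the steps above can be sketched as follows (the fragment sizes are illustrative):

```python
def fragment_offsets(fragment_lengths):
    """Offsets (in 8-byte units) for the fragments of one datagram:
    the first offset is 0, and each later offset is the total payload
    length of all earlier fragments divided by 8."""
    offsets, total = [], 0
    for length in fragment_lengths:
        offsets.append(total // 8)
        total += length
    return offsets

# Three fragments carrying 1400, 1400 and 1200 payload bytes.
print(fragment_offsets([1400, 1400, 1200]))  # [0, 175, 350]
```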
Reassembly:
Reassembly is done at the receiving host and not at each router.
To enable these fragments to be reassembled at the receiving host, they all
carry the same identifier in the Ident field.
This identifier is chosen by the sending host and is intended to be unique
among all the datagrams that might arrive at the destination from this source
over some reasonable time period.
Since all fragments of the original datagram contain this identifier, the
reassembling host will be able to recognize those fragments that go together.
If a single fragment is lost, the receiver will still attempt to reassemble the
datagram, but it will eventually give up and garbage-collect the resources that
were used for the failed reassembly.
Hosts are now strongly encouraged to perform “path MTU discovery,” a
process by which fragmentation is avoided by sending packets that are small
enough to traverse the link with the smallest MTU in the path from sender to
receiver.
IP SECURITY
There are three security issues that are particularly applicable to the IP protocol:
(1) Packet Sniffing (2) Packet Modification and (3) IP Spoofing.
Packet Sniffing
An intruder may intercept an IP packet and make a copy of it.
Packet sniffing is a passive attack, in which the attacker does not change the
contents of the packet.
This type of attack is very difficult to detect because the sender and the receiver
may never know that the packet has been copied.
Although packet sniffing cannot be stopped, encryption of the packet can make
the attacker’s effort useless.
The attacker may still sniff the packet, but the content is not detectable.
Packet Modification
The second type of attack is to modify the packet.
The attacker intercepts the packet, changes its contents, and sends the new
packet to the receiver.
The receiver believes that the packet is coming from the original sender.
This type of attack can be detected using a data integrity mechanism.
The receiver, before opening and using the contents of the message, can use
this mechanism to make sure that the packet has not been changed during the
transmission.
IP Spoofing
An attacker can masquerade as somebody else and create an IP packet that
carries the source address of another computer.
An attacker can send an IP packet to a bank pretending that it is coming from
one of the customers.
This type of attack can be prevented using an origin authentication
mechanism.
IPSec
The IP packets today can be protected from the previously mentioned attacks
using a protocol called IPSec (IP Security).
This protocol is used in conjunction with the IP protocol.
The IPSec protocol creates a connection-oriented service between two entities
in which they can exchange IP packets without worrying about the three attacks
discussed above: packet sniffing, packet modification, and IP spoofing.
IPSec provides the following four services:
1) Defining Algorithms and Keys : The two entities that want to create a
secure channel between themselves can agree on some available
algorithms and keys to be used for security purposes.
2) Packet Encryption : The packets exchanged between two parties can
be encrypted for privacy using one of the encryption algorithms and a
shared key agreed upon in the first step. This makes the packet sniffing
attack useless.
3) Data Integrity : Data integrity guarantees that the packet is not
modified during the transmission. If the received packet does not pass
the data integrity test, it is discarded.This prevents the second attack,
packet modification.
4) Origin Authentication : IPSec can authenticate the origin of the
packet to be sure that the packet is not created by an imposter. This can
prevent IP spoofing attacks.
Ping
The ping program is used to find if a host is alive and responding.
The source host sends ICMP echo-request messages; the destination, if alive,
responds with ICMP echo-reply messages.
The ping program sets the identifier field in the echo-request and echo-reply
message and starts the sequence number from 0; this number is incremented by
1 each time a new message is sent.
The ping program can calculate the round-trip time.
It inserts the sending time in the data section of the message.
When the packet arrives, it subtracts the arrival time from the departure time to
get the round-trip time (RTT).
$ ping google.com
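A minimal sketch of how ping derives the RTT, assuming the departure timestamp is packed into the data section of the message (the real ping builds ICMP echo-request packets, which is omitted here):

```python
import struct
import time

def make_echo_payload():
    # ping inserts the sending (departure) time into the data section
    return struct.pack("!d", time.time())

def rtt_from_reply(payload):
    # on arrival, RTT = arrival time - departure time carried in the data
    departure, = struct.unpack("!d", payload[:8])
    return time.time() - departure

payload = make_echo_payload()
time.sleep(0.01)                      # stand-in for the network round trip
print(f"RTT = {rtt_from_reply(payload) * 1000:.1f} ms")
```

Because the timestamp travels inside the packet, the source needs no per-packet state to compute the RTT when the reply returns.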
Traceroute or Tracert
The traceroute program in UNIX or tracert in Windows can be used to trace
the path of a packet from a source to the destination.
It can find the IP addresses of all the routers that are visited along the path.
The program is usually set to check for the maximum of 30 hops (routers) to be
visited.
The number of hops in the Internet is normally less than this.
$ traceroute google.com
9. UNICAST ROUTING
Routing is the process of selecting best paths in a network.
In unicast routing, a packet is routed, hop by hop, from its source to its
destination by the help of forwarding tables.
Routing a packet from its source to its destination means routing the packet
from a source router (the default router of the source host) to a destination
router (the router connected to the destination network).
The source host needs no forwarding table because it delivers its packet to the
default router in its local network.
The destination host needs no forwarding table either because it receives the
packet from its default router in its local network.
Only the intermediate routers in the networks need forwarding tables.
NETWORK AS A GRAPH
The Figure below shows a graph representing a network.
The nodes of the graph, labeled A through G, may be hosts, switches, routers,
or networks.
The edges of the graph correspond to the network links.
Each edge has an associated cost.
The basic problem of routing is to find the lowest-cost path between any two
nodes, where the cost of a path equals the sum of the costs of all the edges that
make up the path.
This static approach has several problems:
It does not deal with node or link failures.
It does not consider the addition of new nodes or links.
It implies that edge costs cannot change.
For these reasons, routing is achieved by running routing protocols among the
nodes.
These protocols provide a distributed, dynamic way to solve the problem of
finding the lowest-cost path in the presence of link and node failures and
changing edge costs.
Initial State
Each node sends its initial table (distance vector) to neighbors and receives
their estimate.
Node A sends its table to nodes B, C, E & F and receives tables from nodes B,
C, E & F.
Each node updates its routing table by comparing with each of its neighbors'
tables.
For each destination, Total Cost is computed as:
Total Cost = Cost (Node to Neighbor) + Cost (Neighbor to Destination)
If Total Cost < Cost then
Cost = Total Cost and NextHop = Neighbor
Node A learns from C's table to reach node D and from F's table to reach
node G.
Total Cost to reach node D via C = Cost (A to C) + Cost(C to D)
Cost = 1 + 1 = 2.
Since 2 < ∞, entry for destination D in A's table is changed to (D, 2, C)
Total Cost to reach node G via F = Cost(A to F) + Cost(F to G) = 1 + 1 = 2
Since 2 < ∞, entry for destination G in A's table is changed to (G, 2, F)
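The update rule in the example above can be sketched in Python; the tables and link costs are the hypothetical values from the example (A's neighbors B, C, E, F at cost 1, with C advertising D and F advertising G):

```python
INF = float("inf")

def dv_update(me, table, neighbor, neighbor_table, link_cost):
    """Merge a neighbor's distance vector into our (cost, next_hop) table."""
    for dest, cost_via in neighbor_table.items():
        if dest == me:
            continue                          # never route to ourselves via a neighbor
        total = link_cost + cost_via          # Cost(me->neighbor) + Cost(neighbor->dest)
        if total < table.get(dest, (INF, None))[0]:
            table[dest] = (total, neighbor)   # better route: update Cost and NextHop

# A's initial table: directly connected neighbors at cost 1
A = {"B": (1, "B"), "C": (1, "C"), "E": (1, "E"), "F": (1, "F")}
dv_update("A", A, "C", {"A": 1, "B": 1, "D": 1}, link_cost=1)
dv_update("A", A, "F", {"A": 1, "G": 1}, link_cost=1)
print(A["D"], A["G"])   # (2, 'C') (2, 'F')
```

This is one round of Bellman-Ford relaxation; repeating it as tables are exchanged is what drives the network to convergence.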
Each node builds a complete routing table after a few exchanges with its
neighbors.
System stabilizes when all nodes have complete routing information, i.e.,
convergence.
Routing tables are exchanged periodically or in case of triggered update.
The final distances stored at each node are given below:
Periodic Update
In this case, each node automatically sends an update message every so often,
even if nothing has changed.
The frequency of these periodic updates varies from protocol to protocol, but
it is typically on the order of several seconds to several minutes.
Triggered Update
In this case, whenever a node notices a link failure or receives an update from
one of its neighbors that causes it to change one of the routes in its routing
table, it sends an update immediately.
Whenever a node’s routing table changes, it sends an update to its neighbors,
which may lead to a change in their tables, causing them to send an update to
their neighbors.
Routers advertise the cost of reaching networks. Cost of reaching each link is 1
hop. For example, router C advertises to A that it can reach network 2, 3 at cost
0 (directly connected), networks 5, 6 at cost 1 and network 4 at cost 2.
Each router updates cost and next hop for each network number.
Infinity is defined as 16, i.e., any route cannot have more than 15 hops.
Therefore RIP can be implemented on small-sized networks only.
Advertisements are sent every 30 seconds or in case of triggered update.
Reliable Flooding
Each node sends its LSP out on each of its directly connected links.
When a node receives the LSP of another node, it checks whether it already has
an LSP for that node.
If not, it stores and forwards the LSP on all other links except the incoming
one.
Else if the received LSP has a bigger sequence number, then it is stored and
forwarded. Older LSP for that node is discarded.
Otherwise discard the received LSP, since it is not latest for that node.
Thus recent LSP of a node eventually reaches all nodes, i.e., reliable flooding.
Route Calculation
Each node knows the entire topology, once it has LSP from every other node.
Forward search algorithm is used to compute routing table from the received
LSPs.
Each node maintains two lists, namely Tentative and Confirmed with entries of
the form (Destination, Cost, NextHop).
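The forward search algorithm can be sketched with a priority queue standing in for the Tentative list; the graph below is a small hypothetical topology, not the one in the figure:

```python
import heapq

def forward_search(graph, source):
    """Dijkstra-style forward search: returns dest -> (cost, next_hop)."""
    confirmed = {source: (0, None)}
    # Tentative entries: (Cost, Destination, NextHop), seeded from direct links
    tentative = [(cost, nbr, nbr) for nbr, cost in graph[source].items()]
    heapq.heapify(tentative)
    while tentative:
        cost, node, next_hop = heapq.heappop(tentative)
        if node in confirmed:
            continue                          # a cheaper path was already confirmed
        confirmed[node] = (cost, next_hop)    # move entry to the Confirmed list
        for nbr, link in graph[node].items():
            if nbr not in confirmed:
                heapq.heappush(tentative, (cost + link, nbr, next_hop))
    return confirmed

graph = {
    "A": {"B": 5, "C": 3},
    "B": {"A": 5, "C": 2, "D": 5},
    "C": {"A": 3, "B": 2, "D": 11},
    "D": {"B": 5, "C": 11},
}
print(forward_search(graph, "A"))
```

Each pop moves the cheapest Tentative entry to Confirmed, exactly the step the LSP-based route calculation performs once the full topology is known.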
Example :
OPEN SHORTEST PATH FIRST PROTOCOL (OSPF)
OSPF is a non-proprietary widely used link-state routing protocol.
OSPF Features are:
Authentication―A malicious host can bring down a network by advertising
that it can reach every host with cost 0. Such disasters are averted by
authenticating routing updates.
Additional hierarchy―Domain is partitioned into areas, i.e., OSPF is
more scalable.
Load balancing―Multiple routes to the same place are assigned same
cost. Thus traffic is distributed evenly.
Spanning Trees
In path-vector routing, the path from a source to all destinations is determined
by the best spanning tree.
The best spanning tree is not the least-cost tree.
It is the tree determined by the source when it imposes its own policy.
If there is more than one route to a destination, the source can choose the route
that meets its policy best.
A source may apply several policies at the same time.
One of the common policies uses the minimum number of nodes to be visited.
Another common policy is to avoid some nodes as the middle node in a route.
The spanning trees are made, gradually and asynchronously, by each node.
When a node is booted, it creates a path vector based on the information it can
obtain about its immediate neighbor.
A node sends greeting messages to its immediate neighbors to collect these
pieces of information.
Each node, after the creation of the initial path vector, sends it to all its
immediate neighbors.
Each node, when it receives a path vector from a neighbor, updates its path
vector using the formula
Path(x, y) = best {Path(x, y), [x + Path(v, y)]} for every neighbor v,
where "best" selects the path that conforms to x's policy.
Example:
The Figure below shows a small internet with only five nodes.
Each source has created its own spanning tree that meets its policy.
The policy imposed by all sources is to use the minimum number of nodes to
reach a destination.
The spanning tree selected by A and E is such that the communication does not
pass through D as a middle node.
Similarly, the spanning tree selected by B is such that the communication does
not pass through C as a middle node.
Path Vectors made at booting time
The Figure below shows all of these path vectors for the example.
Not all of these tables are created simultaneously.
They are created when each node is booted.
The figure also shows how these path vectors are sent to immediate neighbors
after they have been created.
Each AS has a border router (gateway) through which packets enter and leave
that AS. In the figure above, R3 and R4 are border routers.
One of the routers in each autonomous system is designated as the BGP speaker.
BGP speakers exchange reachability information with other BGP speakers over
what is known as an external BGP session.
BGP advertises complete path as enumerated list of AS (path vector) to reach a
particular network.
Paths must be without any loop, i.e., AS list is unique.
For example, backbone network advertises that networks 128.96 and 192.4.153
can be reached along the path <AS1, AS2, AS4>.
If there are multiple routes to a destination, BGP speaker chooses one based on
policy.
Speakers need not advertise any route to a destination, even if one exists.
Advertised paths can be cancelled, if a link/node on the path goes down. This
negative advertisement is known as withdrawn route.
Routes are not repeatedly sent. If there is no change, keepalive messages are
sent.
iBGP - interior BGP
A Variant of BGP
Used by routers to update routing information learnt from other speakers to
routers inside the autonomous system.
Each router in the AS is able to determine the appropriate next hop for all
prefixes.
In multicasting, a multicast router may have to send out copies of the same
datagram through more than one interface.
Hosts that are members of a group receive copies of any packets sent to that
group’s multicast address
A host can be in multiple groups
A host can join and leave groups
A host signals its desire to join or leave a multicast group by
communicating with its local router using a special protocol.
In IPv4, the protocol is Internet Group Management Protocol (IGMP)
In IPv6, the protocol is Multicast Listener Discovery (MLD)
MULTICAST ADDRESSING
Multicast address is associated with a group, whose members are dynamic.
Each group has its own IP multicast address.
IP addresses reserved for multicasting are Class D in IPv4 (224.0.0.0
to 239.255.255.255) and addresses with the 1111 1111 (FF) prefix in IPv6.
Hosts that are members of a group receive copy of the packet sent when
destination contains group address.
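These ranges can be checked with Python's standard `ipaddress` module, which classifies both the IPv4 Class D range and the IPv6 FF00::/8 prefix:

```python
import ipaddress

# IPv4 Class D addresses and IPv6 addresses with the FF (1111 1111) prefix
# are multicast; ordinary unicast addresses are not.
print(ipaddress.ip_address("224.0.0.1").is_multicast)        # True
print(ipaddress.ip_address("239.255.255.255").is_multicast)  # True
print(ipaddress.ip_address("192.168.1.1").is_multicast)      # False
print(ipaddress.ip_address("ff02::1").is_multicast)          # True
```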
Using IP multicast
Sending host does not send multiple copies of the packet
A host sends a single copy of the packet addressed to the group’s multicast
address
The sending host does not need to know the individual unicast IP address of
each member
TYPES OF MULTICASTING
Source-Specific Multicast - In source-specific multicast (one-to-many model),
receiver specifies multicast group and sender from which it is interested to
receive packets. Example: Internet radio broadcasts.
MULTICAST APPLICATIONS
Access to Distributed Databases
Information Dissemination
Teleconferencing.
Distance Learning
MULTICAST ROUTING
To support multicast, a router must additionally have multicast forwarding
tables that indicate, based on multicast address, which links to use to
forward the multicast packet.
Unicast forwarding tables collectively specify a set of paths.
Multicast forwarding tables collectively specify a set of trees -Multicast
distribution trees.
Multicast routing is the process by which multicast distribution trees are
determined.
To support multicasting, routers additionally build multicast forwarding
tables.
Multicast forwarding table is a tree structure, known as multicast
distribution trees.
Internet multicast is implemented on physical networks that support
broadcasting by extending forwarding functions.
Pruning:
Sent from routers receiving multicast traffic for which they have no active
group members
“Prunes” the tree created by DVMRP
Grafting:
Used after a branch has been pruned back
Goes from router to router until a router active on the multicast group is
reached
Sent for the following cases
Shared Tree
When a router sends a Join message for group G to RP, it goes through a set
of routers.
Join message is wildcarded (*), i.e., it is applicable to all senders.
Routers create an entry (*, G) in their forwarding tables for the shared tree.
Interface on which the Join arrived is marked to forward packets for that
group.
Forwards Join towards rendezvous router RP.
Eventually, the message arrives at RP. Thus a shared tree with RP as root is
formed.
Example
Router R4 sends Join message for group G to rendezvous router RP.
Join message is received by router R2. It makes an entry (*, G) in its table and
forwards the message to RP.
When R5 sends a Join message for group G, R2 does not forward the Join. It
adds an outgoing interface to the forwarding table created for that group.
As routers send Join message for a group, branches are added to the tree, i.e.,
shared.
Multicast packets sent from hosts are forwarded to designated router RP.
Suppose router R1, receives a message to group G.
o R1 has no state for group G.
o Encapsulates the multicast packet in a Register message.
o Multicast packet is tunneled along the way to RP.
RP decapsulates the packet and sends multicast packet onto the shared tree,
towards R2.
R2 forwards the multicast packet to routers R4 and R5 that have members for
group G.
Source-Specific Tree
RP can force routers to know about group G, by sending Join message to the
sending host, so that tunneling can be avoided.
Intermediary routers create sender-specific entry (S, G) in their tables. Thus
a source-specific route from R1 to RP is formed.
If there is a high rate of packets sent from a sender to group G, the shared
tree is replaced by a source-specific tree with the sender as root.
Example
Analysis of PIM
Protocol independent because, tree is based on Join messages via shortest path.
Shared trees are more scalable than source-specific trees.
Source-specific trees enable more efficient routing than shared trees.
FEATURES OF IPV6
1. Better header format - IPv6 uses a new header format in which options are
separated from the base header and inserted, when needed, between the base
header and the data. This simplifies and speeds up the routing process because
most of the options do not need to be checked by routers.
2. New options - IPv6 has new options to allow for additional functionalities.
3. Allowance for extension - IPv6 is designed to allow the extension of the
protocol if required by new technologies or applications.
4. Support for resource allocation - In IPv6, the type-of-service field has been
removed, but two new fields, traffic class and flow label, have been added to
enable the source to request special handling of the packet. This mechanism
can be used to support traffic such as real-time audio and video.
Additional Features :
1. Need to accommodate scalable routing and addressing
2. Support for real-time services
3. Security support
4. Autoconfiguration -
The ability of hosts to automatically configure themselves with such
information as their own IP address and domain name.
5. Enhanced routing functionality, including support for mobile hosts
6. Transition from IPv4 to IPv6
IPv4 address is mapped to IPv6 address by prefixing the 32-bit IPv4 address
with 2 bytes of 1s and then zero-extending the result to 128 bits.
For example,
128.96.33.81 → ::FFFF:128.96.33.81
This mixed notation, which combines the colon hexadecimal and dotted decimal
forms, denotes an IPv4-mapped IPv6 address.
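The mapping can be reproduced with Python's standard `ipaddress` module (Python prints the mapped address in canonical colon-hexadecimal form):

```python
import ipaddress

# Build the IPv4-mapped IPv6 address ::FFFF:128.96.33.81
v4 = ipaddress.IPv4Address("128.96.33.81")
v6 = ipaddress.IPv6Address(f"::ffff:{v4}")

print(v6)              # ::ffff:8060:2151  (128.96 = 0x8060, 33.81 = 0x2151)
print(v6.ipv4_mapped)  # 128.96.33.81 — recover the embedded IPv4 address
```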
Extension Headers
Extension header provides greater functionality to IPv6.
The base header may be followed by up to six extension headers.
Each extension header contains a NextHeader field to identify the header
following it.
ADVANTAGES OF IPV6
Address space ― IPv6 uses 128-bit address whereas IPv4 uses 32-bit address.
Hence IPv6 has huge address space whereas IPv4 faces address shortage
problem.
Header format ― Unlike IPv4, optional headers are separated from the base
header in IPv6. Each router thus need not process unwanted additional
information.
Extensible ― Unassigned IPv6 addresses can accommodate needs of future
technologies.
1. INTRODUCTION
The transport layer is the fourth layer of the OSI model and is the core of the Internet
model.
It responds to service requests from the session layer and issues service requests to
the network Layer.
The transport layer provides transparent transfer of data between hosts.
It provides end-to-end control and information transfer with the quality of service
needed by the application program.
It is the first true end-to-end layer, implemented in all End Systems (ES).
Flow Control
Flow Control is the process of managing the rate of data transmission between two
nodes to prevent a fast sender from overwhelming a slow receiver.
It provides a mechanism for the receiver to control the transmission speed, so
that the receiving node is not overwhelmed with data from the transmitting node.
Error Control
Error control at the transport layer is responsible for
1. Detecting and discarding corrupted packets.
2. Keeping track of lost and discarded packets and resending them.
3. Recognizing duplicate packets and discarding them.
4. Buffering out-of-order packets until the missing packets arrive.
Error Control involves Error Detection and Error Correction
Congestion Control
Congestion in a network may occur if the load on the network (the number of
packets sent to the network) is greater than the capacity of the network (the number
of packets a network can handle).
Congestion control refers to the mechanisms and techniques that control the
congestion and keep the load below the capacity.
Congestion Control refers to techniques and mechanisms that can either prevent
congestion, before it happens, or remove congestion, after it has happened
Congestion control mechanisms are divided into two categories,
1. Open loop - prevent the congestion before it happens.
2. Closed loop - remove the congestion after it happens.
2. PORT NUMBERS
A transport-layer protocol usually has several responsibilities.
One is to create a process-to-process communication.
Processes are programs that run on hosts; a process may be either a server or
a client.
A process on the local host, called a client, needs services from a process usually
on the remote host, called a server.
Processes are assigned a unique 16-bit port number on that host.
Port numbers provide end-to-end addresses at the transport layer
They also provide multiplexing and demultiplexing at this layer.
The port numbers are integers between 0 and 65,535.
ICANN (Internet Corporation for Assigned Names and Numbers) has divided the port
numbers into three ranges:
Well-known ports
Registered ports
Ephemeral ports (Dynamic Ports)
WELL-KNOWN PORTS
These are permanent port numbers used by the servers.
They range between 0 to 1023.
This port number cannot be chosen randomly.
These port numbers are universal port numbers for servers.
Every client process knows the well-known port number of the corresponding server
process.
For example, while the daytime client process, a well-known client program, can
use an ephemeral (temporary) port number, 52,000, to identify itself, the daytime
server process must use the well-known (permanent) port number 13.
EPHEMERAL PORTS (DYNAMIC PORTS)
The client program defines itself with a port number, called the ephemeral port
number.
The word ephemeral means “short-lived” and is used because the life of a client is
normally short.
An ephemeral port number is recommended to be greater than 1023.
These port numbers range from 49,152 to 65,535.
They are neither controlled nor registered. They can be used as temporary or private
port numbers.
REGISTERED PORTS
The ports ranging from 1024 to 49,151 are not assigned or controlled by
ICANN; they can only be registered with ICANN to prevent duplication.
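The three ranges described above can be captured in a small helper function:

```python
def port_range(port):
    """Classify a port number into the three ICANN-defined ranges."""
    if not 0 <= port <= 65535:
        raise ValueError("port numbers are 16-bit: 0-65535")
    if port <= 1023:
        return "well-known"      # permanent server ports
    if port <= 49151:
        return "registered"
    return "ephemeral"           # temporary client ports

# 13 = daytime server; 52,000 is the ephemeral client port from the example above
print(port_range(13), port_range(8080), port_range(52000))
# well-known registered ephemeral
```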
Each protocol provides a different type of service and should be used appropriately.
UDP - UDP is an unreliable connectionless transport-layer protocol used for its simplicity
and efficiency in applications where error control can be provided by the application-layer
process.
TCP - TCP is a reliable connection-oriented protocol that can be used in any application
where reliability is important.
SCTP - SCTP is a new transport-layer protocol designed to combine some features of UDP
and TCP in an effort to create a better protocol for multimedia communication.
UDP PORTS
Processes (server/client) are identified by an abstract locator known as port.
Server accepts messages at a well-known port.
Some well-known UDP ports are 7–Echo, 53–DNS, 111–RPC, 161–SNMP, etc.
< port, host > pair is used as key for demultiplexing.
Ports are implemented as a message queue.
When a message arrives, UDP appends it to the end of the queue.
When the queue is full, the message is discarded.
When a message is read, it is removed from the queue.
When an application process wants to receive a message, one is removed from the
front of the queue.
If the queue is empty, the process blocks until a message becomes available.
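The &lt;port, host&gt; demultiplexing above is what the sockets API exposes. A minimal loopback sketch: the server binds a port, the client sends one independent datagram to that port, and the server dequeues it with recvfrom (which blocks while the queue is empty):

```python
import socket

# Server side: bind a UDP socket; the kernel maintains its message queue
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))                 # port 0: let the OS pick a free port
port = server.getsockname()[1]

# Client side: send a single datagram to the server's <port, host> pair
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"hello", ("127.0.0.1", port))

data, addr = server.recvfrom(1024)            # dequeue one message
print(data, addr[0])
server.close()
client.close()
```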
UDP DATAGRAM (PACKET) FORMAT
UDP packets are known as user datagrams.
These user datagrams have a fixed-size header of 8 bytes made of four fields,
each of 2 bytes (16 bits).
Length
This field denotes the total length of the UDP packet (header plus data).
The 16-bit field can express a length of 0 to 65,535 bytes, though the minimum
in practice is 8 bytes (a header with no data).
Checksum
UDP computes its checksum over the UDP header, the contents of the message
body, and something called the pseudoheader.
The pseudoheader consists of three fields from the IP header—protocol number,
source IP address, destination IP address plus the UDP length field.
Data
The data field carries the actual payload to be transmitted.
Its size is variable.
UDP SERVICES
Process-to-Process Communication
UDP provides process-to-process communication using socket addresses, a
combination of IP addresses and port numbers.
Connectionless Services
UDP provides a connectionless service.
There is no connection establishment and no connection termination .
Each user datagram sent by UDP is an independent datagram.
There is no relationship between the different user datagrams even if they are
coming from the same source process and going to the same destination program.
The user datagrams are not numbered.
Each user datagram can travel on a different path.
Flow Control
UDP is a very simple protocol.
There is no flow control, and hence no window mechanism.
The receiver may overflow with incoming messages.
The lack of flow control means that the process using UDP should provide for this
service, if needed.
Error Control
There is no error control mechanism in UDP except for the checksum.
This means that the sender does not know if a message has been lost or duplicated.
When the receiver detects an error through the checksum, the user datagram is
silently discarded.
The lack of error control means that the process using UDP should provide for this
service, if needed.
Checksum
UDP checksum calculation includes three sections: a pseudoheader, the UDP header,
and the data coming from the application layer.
The pseudoheader is the part of the header of the IP datagram in which the
user datagram is to be encapsulated, with some fields filled with 0s.
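The calculation can be sketched as the standard 16-bit one's-complement Internet checksum over pseudoheader + UDP header + data; the addresses and ports below are made-up values:

```python
import struct

def checksum16(data):
    """16-bit one's-complement sum, with end-around carry folding."""
    if len(data) % 2:
        data += b"\x00"                           # pad to a 16-bit boundary
    total = sum(struct.unpack(f"!{len(data) // 2}H", data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)  # fold carries back in
    return ~total & 0xFFFF

def udp_checksum(src_ip, dst_ip, src_port, dst_port, payload):
    length = 8 + len(payload)                     # UDP header + data
    # pseudoheader: source IP, destination IP, zero, protocol (17 = UDP), length
    pseudo = src_ip + dst_ip + struct.pack("!BBH", 0, 17, length)
    # UDP header with the checksum field set to 0 for the computation
    header = struct.pack("!HHHH", src_port, dst_port, length, 0)
    return checksum16(pseudo + header + payload)

c = udp_checksum(bytes([10, 0, 0, 1]), bytes([10, 0, 0, 2]), 52000, 53, b"hi")
print(hex(c))
```

A useful property: if the receiver recomputes the checksum over the same data with the computed value inserted in the checksum field, the result is zero, which is how corruption is detected.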
Congestion Control
Since UDP is a connectionless protocol, it does not provide congestion control.
UDP assumes that the packets sent are small and sporadic (sent occasionally or
at irregular intervals) and cannot create congestion in the network.
This assumption may or may not be true, when UDP is used for interactive real-time
transfer of audio and video.
Queuing
In UDP, queues are associated with ports.
At the client site, when a process starts, it requests a port number from the operating
system.
Some implementations create both an incoming and an outgoing queue associated
with each process.
Other implementations create only an incoming queue associated with each process.
APPLICATIONS OF UDP
UDP is used for management processes such as SNMP.
UDP is used for route updating protocols such as RIP.
UDP is a suitable transport protocol for multicasting. Multicasting capability is
embedded in the UDP software
UDP is suitable for a process with internal flow and error control mechanisms such
as Trivial File Transfer Protocol (TFTP).
UDP is suitable for a process that requires simple request-response communication
with little concern for flow and error control.
UDP is normally used for interactive real-time applications that cannot tolerate
uneven delay between sections of a received message.
TCP SERVICES
Process-to-Process Communication
TCP provides process-to-process communication using port numbers.
Stream Delivery Service
TCP is a stream-oriented protocol.
TCP allows the sending process to deliver data as a stream of bytes and allows the
receiving process to obtain data as a stream of bytes.
TCP creates an environment in which the two processes seem to be connected by an
imaginary “tube” that carries their bytes across the Internet.
The sending process produces (writes to) the stream and the receiving process
consumes (reads from) it.
CS8591 – Computer Networks Unit 4
Full-Duplex Communication
TCP offers full-duplex service, where data can flow in both directions at the same
time.
Each TCP endpoint then has its own sending and receiving buffer, and segments
move in both directions.
Connection-Oriented Service
TCP is a connection-oriented protocol.
A connection needs to be established for each pair of processes.
When a process at site A wants to send to and receive data from another
process at site B, the following three phases occur:
1. The two TCP’s establish a logical connection between them.
2. Data are exchanged in both directions.
3. The connection is terminated.
Reliable Service
TCP is a reliable transport protocol.
It uses an acknowledgment mechanism to check the safe and sound arrival of data.
TCP SEGMENT
A packet in TCP is called a segment.
Data unit exchanged between TCP peers are called segments.
A TCP segment encapsulates the data received from the application layer.
The TCP segment is encapsulated in an IP datagram, which in turn is encapsulated in
a frame at the data-link layer.
TCP is a byte-oriented protocol, which means that the sender writes bytes into a TCP
connection and the receiver reads bytes out of the TCP connection.
TCP does not, itself, transmit individual bytes over the Internet.
TCP on the source host buffers enough bytes from the sending process to fill a
reasonably sized packet and then sends this packet to its peer on the destination host.
TCP on the destination host then empties the contents of the packet into a receive
buffer, and the receiving process reads from this buffer at its leisure.
TCP connection supports byte streams flowing in both directions.
The packets exchanged between TCP peers are called segments, since each one
carries a segment of the byte stream.
Advertised Window―defines receiver’s window size and acts as flow control.
Checksum―It is computed over TCP header, Data, and pseudo header containing IP fields
(Length, SourceAddr & DestinationAddr).
UrgPtr ― used when the segment contains urgent data. It defines a value that must be
added to the sequence number.
Options - There can be up to 40 bytes of optional information in the TCP header.
Connection Establishment
While opening a TCP connection the two nodes(client and server) want to agree on a
set of parameters.
The parameters are the starting sequence numbers that is to be used for their
respective byte streams.
Connection establishment in TCP is a three-way handshaking.
1. Client sends a SYN segment to the server containing its initial sequence number (Flags
= SYN, SequenceNum = x)
2. Server responds with a segment that acknowledges client’s segment and specifies its
initial sequence number (Flags = SYN + ACK, ACK = x + 1, SequenceNum = y).
3. Finally, client responds with a segment that acknowledges server’s sequence number
(Flags = ACK, ACK = y + 1).
The reason that each side acknowledges a sequence number that is one larger
than the one sent is that the Acknowledgment field actually identifies the
“next sequence number expected.”
A timer is scheduled for each of the first two segments, and if the expected
response is not received, the segment is retransmitted.
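From an application's point of view, the three-way handshake is performed by the kernel: a connect() on the client meets an accept() on the server. A minimal loopback sketch:

```python
import socket
import threading

# The SYN / SYN+ACK / ACK exchange happens inside the kernel when
# connect() meets accept(); the application only sees an open connection.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

def serve():
    conn, _ = server.accept()        # handshake completes here
    conn.sendall(b"hello")
    conn.close()

t = threading.Thread(target=serve)
t.start()
client = socket.create_connection(("127.0.0.1", port))  # SYN, SYN+ACK, ACK
msg = client.recv(1024)
print(msg)
client.close()
t.join()
server.close()
```

The sequence numbers, acknowledgments, and retransmission timers from the discussion above are all managed by the TCP implementation, not by the application code.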
Data Transfer
After connection is established, bidirectional data transfer can take place.
The client and server can send data and acknowledgments in both directions.
The data traveling in the same direction as an acknowledgment are carried on the
same segment.
The acknowledgment is piggybacked with the data.
Connection Termination
Connection termination or teardown can be done in two ways :
Three-way Close and Half-Close
STATE TRANSITION DIAGRAM
To keep track of all the different events happening during connection establishment,
connection termination, and data transfer, TCP is specified as a finite state machine
(FSM).
The transition from one state to another is shown using directed lines.
The states involved in opening and closing a connection are shown above and below the
ESTABLISHED state, respectively.
States Involved in TCP :
CS8591 – Computer Networks
Send Buffer
The sending TCP maintains a send buffer divided into three regions:
(1) acknowledged data,
(2) sent but unacknowledged data, and
(3) data written but not yet transmitted.
Send buffer maintains three pointers
(1) LastByteAcked, (2) LastByteSent, and (3) LastByteWritten
such that:
LastByteAcked ≤ LastByteSent ≤ LastByteWritten
A byte can be sent only after being written, and only a sent byte can be
acknowledged.
Bytes to the left of LastByteAcked need not be kept, as they have already been
acknowledged.
Receive Buffer
Receiving TCP maintains receive buffer to hold data even if it arrives out-of-order.
Receive buffer maintains three pointers namely
(1) LastByteRead, (2) NextByteExpected, and (3) LastByteRcvd
such that:
LastByteRead ≤ NextByteExpected ≤ LastByteRcvd + 1
A byte cannot be read until that byte and all preceding bytes have been received.
If data arrives in order, then NextByteExpected = LastByteRcvd + 1.
Bytes to the left of LastByteRead are not buffered, since they have already been read
by the application.
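The two buffer invariants above can be checked with a small sketch (the helper names are mine, purely illustrative; this is not real TCP code):

```python
def send_buffer_ok(last_byte_acked, last_byte_sent, last_byte_written):
    # A byte can be sent only after being written; only sent bytes can be ACKed.
    return last_byte_acked <= last_byte_sent <= last_byte_written

def recv_buffer_ok(last_byte_read, next_byte_expected, last_byte_rcvd):
    # A byte cannot be read until it and all preceding bytes have arrived.
    return last_byte_read <= next_byte_expected <= last_byte_rcvd + 1

print(send_buffer_ok(100, 150, 200))  # True: some data sent but not yet ACKed
print(recv_buffer_ok(50, 81, 80))     # True: in-order, NextByteExpected = LastByteRcvd + 1
print(send_buffer_ok(150, 100, 200))  # False: cannot ACK bytes never sent
```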
TCP TRANSMISSION
TCP has three mechanisms that trigger the transmission of a segment:
o accumulating a maximum segment size (MSS) of data,
o an explicit push by the sending application, and
o the firing of a timeout.
Sending on a timeout raises the problem of the Silly Window Syndrome, which
Nagle’s Algorithm addresses.
Nagle’s Algorithm
If there is data to send but less than one MSS, we may want to wait some amount
of time before sending the available data.
If we wait too long, we delay the process.
If we don’t wait long enough, we may end up sending small segments, resulting in
the Silly Window Syndrome.
One solution is to introduce a timer and to transmit when the timer expires.
Nagle introduced an algorithm for solving this problem.
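Nagle's decision rule can be sketched as follows (a minimal sketch; the function and variable names are mine, not from a real stack — the key idea is that an in-flight ACK plays the role of the timer):

```python
MSS = 1460  # a typical maximum segment size in bytes (illustrative)

def nagle_decision(available_bytes, open_window, unacked_in_flight):
    """Decide what to do when the application hands TCP new data."""
    if available_bytes >= MSS and open_window >= MSS:
        return "send full segment"
    if unacked_in_flight:
        return "buffer until an ACK arrives"   # the returning ACK acts as the timer
    return "send what we have now"

print(nagle_decision(2000, 4000, False))  # send full segment
print(nagle_decision(200, 4000, True))    # buffer until an ACK arrives
print(nagle_decision(200, 4000, False))   # send what we have now
```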
When CongestionWindow is plotted as a function of time, a saw-tooth pattern
results.
Slow Start
Slow start is used to increase CongestionWindow exponentially from a cold start.
Source TCP initializes CongestionWindow to one packet.
TCP doubles the number of packets sent every RTT on successful transmission.
When ACK arrives for first packet TCP adds 1 packet to CongestionWindow and
sends two packets.
When two ACKs arrive, TCP increments CongestionWindow by 2 packets and sends
four packets and so on.
Instead of sending all permissible packets at once (bursty traffic), packets are sent
in a phased manner; hence the name slow start.
Initially TCP has no idea of the congestion level, so it increases
CongestionWindow rapidly until there is a timeout. On a timeout:
CongestionThreshold = CongestionWindow / 2
CongestionWindow = 1
Slow start is repeated until CongestionWindow reaches CongestionThreshold;
thereafter the window grows by 1 packet per RTT.
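The growth rule above can be traced with a short simulation (window measured in packets; the function name and example threshold are illustrative):

```python
def cwnd_trace(threshold, rtts):
    """Trace CongestionWindow at the start of each RTT."""
    cwnd, trace = 1, []
    for _ in range(rtts):
        trace.append(cwnd)
        if cwnd < threshold:
            cwnd = min(cwnd * 2, threshold)  # slow start: double each RTT
        else:
            cwnd += 1                        # additive increase: +1 per RTT
    return trace

print(cwnd_trace(threshold=8, rtts=7))  # [1, 2, 4, 8, 9, 10, 11]
```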
The congestion window trace will look like
Fast Retransmit
For example, suppose packets 1 and 2 are received, whereas packet 3 gets lost.
o The receiver sends a duplicate ACK for packet 2 when packet 4 arrives.
o After sending packet 6, the sender has received three duplicate ACKs and retransmits packet 3.
o When packet 3 is received, the receiver sends a cumulative ACK up to packet 6.
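The duplicate-ACK counting in this example can be sketched as a toy function (illustrative names; real TCP tracks bytes, not packet numbers):

```python
def fast_retransmit(acks):
    """Given the stream of ACK numbers seen by the sender, return the
    packet to retransmit once 3 duplicate ACKs pile up, else None."""
    dup, last = 0, None
    for ack in acks:
        if ack == last:
            dup += 1
            if dup == 3:
                return ack + 1   # the next expected packet is the missing one
        else:
            last, dup = ack, 0
    return None

# Packet 3 lost: packets 4, 5, 6 each provoke a duplicate ACK for packet 2.
print(fast_retransmit([1, 2, 2, 2, 2]))  # 3
```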
The congestion window trace will look like
DECbit
The idea is to evenly split the responsibility for congestion control between the
routers and the end nodes.
Each router monitors the load it is experiencing and explicitly notifies the end nodes
when congestion is about to occur.
This notification is implemented by setting a binary congestion bit in the packets that
flow through the router; hence the name DECbit.
The destination host then copies this congestion bit into the ACK it sends back to the
source.
The source checks how many ACKs have the DECbit set for the packets of the
previous window.
If less than 50% of the ACKs have the bit set, the source increases its congestion
window by 1 packet; otherwise, it decreases the congestion window to 0.875 times
its previous value.
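The source-side policy can be sketched as follows (the 0.875 decrease factor is the standard DECbit value; the function and parameter names are illustrative):

```python
def adjust_window(cwnd, acks_with_bit_set, total_acks):
    """DECbit source policy: additive increase, multiplicative decrease."""
    if acks_with_bit_set / total_acks < 0.5:
        return cwnd + 1          # additive increase by 1 packet
    return cwnd * 0.875          # multiplicative decrease

print(adjust_window(16, 4, 10))  # 17   (40% of ACKs had the bit set)
print(adjust_window(16, 6, 10))  # 14.0 (60% of ACKs had the bit set)
```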
The router uses a queue length of 1 as the trigger for setting the congestion bit:
it sets this bit in a packet if its average queue length is greater than or equal to
1 at the time the packet arrives.
The average queue length is measured over a time interval that spans the
last busy + idle cycle plus the current busy cycle.
It is calculated by dividing the area under the queue-length curve by that time interval.
RANDOM EARLY DETECTION (RED)
Each router is programmed to monitor its own queue length, and when it detects that
congestion is imminent, it notifies the source to adjust its congestion window.
RED differs from the DECbit scheme in two ways:
a. In DECbit, explicit notification about congestion is sent to the source, whereas
RED notifies the source implicitly by dropping a few packets early.
b. DECbit can degenerate into a tail-drop policy, whereas RED drops packets
based on a drop probability, in a random manner: each arriving packet is dropped
with some drop probability whenever the queue length exceeds some drop level.
This idea is called early random drop.
RED has two queue length thresholds that trigger certain activity: MinThreshold and
MaxThreshold
When a packet arrives at a gateway, RED compares AvgLen with these two thresholds
according to the following rules.
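Those rules follow the standard RED formulation and can be sketched as below (the threshold and MaxP values are illustrative, not from the notes):

```python
import random

MIN_THRESHOLD, MAX_THRESHOLD, MAX_P = 5, 15, 0.02  # illustrative parameters

def red_action(avg_len):
    if avg_len <= MIN_THRESHOLD:
        return "queue"          # no congestion: always accept the packet
    if avg_len >= MAX_THRESHOLD:
        return "drop"           # severe congestion: always drop
    # In between: drop with a probability that grows with AvgLen.
    temp_p = MAX_P * (avg_len - MIN_THRESHOLD) / (MAX_THRESHOLD - MIN_THRESHOLD)
    return "drop" if random.random() < temp_p else "queue"

print(red_action(3))    # queue
print(red_action(20))   # drop
```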
6. STREAM CONTROL TRANSMISSION PROTOCOL (SCTP)
Stream Control Transmission Protocol (SCTP) is a reliable, message-oriented
transport layer protocol.
SCTP combines features of TCP and UDP.
SCTP maintains the message boundaries and detects the lost data, duplicate data as
well as out-of-order data.
SCTP provides congestion control as well as flow control.
SCTP is especially designed for internet applications as well as multimedia
communication.
SCTP SERVICES
Process-to-Process Communication
SCTP provides process-to-process communication.
Multiple Streams
SCTP allows multistream service in each connection, which is called association in
SCTP terminology.
If one of the streams is blocked, the other streams can still deliver their data.
Multihoming
An SCTP association supports multihoming service.
The sending and receiving host can define multiple IP addresses in each end for an
association.
In this fault-tolerant approach, when one path fails, another interface can be used for
data delivery without interruption.
Full-Duplex Communication
SCTP offers full-duplex service, where data can flow in both directions at the same
time. Each SCTP endpoint has a sending and a receiving buffer, and packets are sent in both
directions.
Connection-Oriented Service
SCTP is a connection-oriented protocol.
In SCTP, a connection is called an association.
If a client wants to send and receive messages from a server, the steps are:
Step1: The two SCTPs establish the connection with each other.
Step2: Once the connection is established, the data gets exchanged in both the
directions.
Step3: Finally, the association is terminated.
Reliable Service
SCTP is a reliable transport protocol.
It uses an acknowledgment mechanism to confirm the safe arrival of data.
SCTP PACKET FORMAT
An SCTP packet has a mandatory general header and a set of blocks called chunks.
General Header
The general header (packet header) defines the end points of each association to
which the packet belongs
It guarantees that the packet belongs to a particular association
It also preserves the integrity of the contents of the packet including the header itself.
There are four fields in the general header.
Source port
This field identifies the sending port.
Destination port
This field identifies the receiving port that hosts use to route the packet to the
appropriate endpoint/application.
Verification tag
A 32-bit random value created during initialization to distinguish stale packets
from a previous connection.
Checksum
The next field is a checksum. The size of the checksum is 32 bits. SCTP uses
CRC-32 Checksum.
Chunks
Control information or user data are carried in chunks.
Chunks have a common layout.
The first three fields are common to all chunks; the information field depends on the
type of chunk.
The type field can define up to 256 types of chunks. Only a few have been defined so
far; the rest are reserved for future use.
The flag field defines special flags that a particular chunk may need.
The length field defines the total size of the chunk, in bytes, including the type, flag,
and length fields.
Types of Chunks
An SCTP association may send many packets, a packet may contain several chunks,
and chunks may belong to different streams.
SCTP defines two types of chunks - Control chunks and Data chunks.
A control chunk controls and maintains the association.
A data chunk carries user data.
SCTP ASSOCIATION
SCTP is a connection-oriented protocol.
A connection in SCTP is called an association to emphasize multihoming.
SCTP Associations consists of three phases:
Association Establishment
Data Transfer
Association Termination
Association Establishment
Association establishment in SCTP requires a four-way handshake.
In this procedure, a client process wants to establish an association with a server
process using SCTP as the transport-layer protocol.
The SCTP server needs to be prepared to receive any association (passive open).
Association establishment, however, is initiated by the client (active open).
The client sends the first packet, which contains an INIT chunk.
The server sends the second packet, which contains an INIT ACK chunk. The INIT
ACK also sends a cookie that defines the state of the server at this moment.
The client sends the third packet, which includes a COOKIE ECHO chunk. This is a
very simple chunk that echoes, without change, the cookie sent by the server. SCTP
allows the inclusion of data chunks in this packet.
The server sends the fourth packet, which includes the COOKIE ACK chunk that
acknowledges the receipt of the COOKIE ECHO chunk. SCTP allows the inclusion
of data chunks with this packet.
Data Transfer
The whole purpose of an association is to transfer data between two ends.
After the association is established, bidirectional data transfer can take place.
The client and the server can both send data.
SCTP supports piggybacking.
Types of SCTP data Transfer :
1. Multihoming Data Transfer
Data transfer, by default, uses the primary address of the destination.
If the primary is not available, one of the alternative addresses is used.
This is called Multihoming Data Transfer.
2. Multistream Delivery
SCTP can support multiple streams, which means that the sender process
can define different streams and a message can belong to one of these
streams.
Each stream is assigned a stream identifier (SI) which uniquely defines
that stream.
SCTP supports two types of data delivery in each stream: ordered (default)
and unordered.
Association Termination
In SCTP, either of the two parties involved in exchanging data (client or server) can
close the connection.
SCTP does not allow a “half closed” association. If one end closes the association,
the other end must stop sending new data.
If any data are left over in the queue of the recipient of the termination request, they
are sent and the association is closed.
Association termination uses three packets.
SCTP FLOW CONTROL
Receiver Site
The receiver has one buffer (queue) and three variables.
The queue holds the received data chunks that have not yet been read by the process.
The first variable holds the last TSN received, cumTSN.
The second variable holds the available buffer size, winSize.
The third variable holds the last accumulative acknowledgment, lastACK.
The following figure shows the queue and variables at the receiver site.
When the site receives a data chunk, it stores it at the end of the buffer (queue) and
subtracts the size of the chunk from winSize.
The TSN number of the chunk is stored in the cumTSN variable.
When the process reads a chunk, it removes it from the queue and adds the size of the
removed chunk to winSize (recycling).
When the receiver decides to send a SACK, it checks the value of lastAck; if it is less
than cumTSN, it sends a SACK with a cumulative TSN number equal to the
cumTSN.
It also includes the value of winSize as the advertised window size.
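The receiver-site bookkeeping above can be sketched as a small class (the variable names follow the notes; everything else is illustrative, not real SCTP code):

```python
class ReceiverSite:
    def __init__(self, buffer_size):
        self.queue = []                 # received chunks not yet read
        self.win_size = buffer_size
        self.cum_tsn = 0
        self.last_ack = 0

    def receive_chunk(self, tsn, size):
        self.queue.append((tsn, size))  # store at the end of the buffer
        self.win_size -= size           # shrink the advertised window
        self.cum_tsn = tsn              # remember the last TSN received

    def read_chunk(self):
        tsn, size = self.queue.pop(0)
        self.win_size += size           # recycle the space

    def make_sack(self):
        if self.last_ack < self.cum_tsn:
            self.last_ack = self.cum_tsn
            return {"cumTSN": self.cum_tsn, "advertised_window": self.win_size}
        return None                     # nothing new to acknowledge

r = ReceiverSite(buffer_size=1000)
r.receive_chunk(tsn=1, size=100)
r.receive_chunk(tsn=2, size=100)
print(r.make_sack())  # {'cumTSN': 2, 'advertised_window': 800}
```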
Sender Site
The sender has one buffer (queue) and three variables: curTSN, rwnd, and inTransit.
We assume each chunk is 100 bytes long. The buffer holds the chunks produced by
the process that either have been sent or are ready to be sent.
The first variable, curTSN, refers to the next chunk to be sent.
All chunks in the queue with a TSN less than this value have been sent, but not
acknowledged; they are outstanding.
The second variable, rwnd, holds the last value advertised by the receiver (in bytes).
The third variable, inTransit, holds the number of bytes in transit, bytes sent but not
yet acknowledged.
The following figure shows the queue and variables at the sender site.
A chunk pointed to by curTSN can be sent if the size of the data is less than or equal
to the quantity rwnd - inTransit.
After sending the chunk, the value of curTSN is incremented by 1 and now points to
the next chunk to be sent.
The value of inTransit is incremented by the size of the data in the transmitted chunk.
When a SACK is received, the chunks with a TSN less than or equal to the
cumulative TSN in the SACK are removed from the queue and discarded. The sender
does not have to worry about them anymore.
The value of inTransit is reduced by the total size of the discarded chunks.
The value of rwnd is updated with the value of the advertised window in the SACK.
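The sender-site test and the SACK processing above can be sketched as follows (variable names follow the notes; the helper functions are mine, purely illustrative):

```python
def can_send(chunk_size, rwnd, in_transit):
    """A chunk may go out only if it fits within rwnd - inTransit."""
    return chunk_size <= rwnd - in_transit

def on_sack(queue, cum_tsn, advertised_window, in_transit):
    """Discard acknowledged chunks and update the sender's variables."""
    acked = [(tsn, size) for tsn, size in queue if tsn <= cum_tsn]
    queue = [(tsn, size) for tsn, size in queue if tsn > cum_tsn]
    in_transit -= sum(size for _, size in acked)   # fewer bytes in transit
    return queue, advertised_window, in_transit    # rwnd <- advertised window

print(can_send(100, rwnd=500, in_transit=450))     # False: only 50 bytes of room
queue, rwnd, in_transit = on_sack(
    [(1, 100), (2, 100), (3, 100)], cum_tsn=2,
    advertised_window=1000, in_transit=300)
print(queue, rwnd, in_transit)                     # [(3, 100)] 1000 100
```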
SCTP ERROR CONTROL
Receiver Site
The receiver stores all chunks that have arrived in its queue including the out-of-
order ones. However, it leaves spaces for any missing chunks.
It discards duplicate messages, but keeps track of them for reports to the sender.
The following figure shows a typical design for the receiver site and the state of the
receiving queue at a particular point in time.
An array of variables keeps track of the beginning and the end of each block that is
out of order.
An array of variables holds the duplicate chunks received.
There is no need for storing duplicate chunks in the queue and they will be discarded.
Sender Site
The sender site needs two buffers (queues): a sending queue and a
retransmission queue.
Three variables are used - rwnd, inTransit, and curTSN - as described in the previous
section.
The following figure shows a typical design.
UNIT 5 : APPLICATION LAYER
WWW and HTTP – FTP – Email –Telnet –SSH – DNS – SNMP
1. INTRODUCTION
APPLICATION-LAYER PARADIGMS
Two paradigms have been developed for Application Layer
1. Traditional Paradigm : Client-Server
2. New Paradigm : Peer-to-Peer
Client-Server Paradigm
o The traditional paradigm is called the client-server paradigm.
o It was the most popular Paradigm.
o In this paradigm, the service provider is an application program, called the
server process; it runs continuously, waiting for another application
program, called the client process, to make a connection through the
Internet and ask for service.
o The server process must be running all the time; the client process is started
when the client needs to receive service.
o There are normally some server processes that can provide a specific type
of service, but there are many clients that request service from any of these
server processes.
Peer-to-Peer(P2P) Paradigm
o A new paradigm, called the peer-to-peer paradigm has emerged to respond to
the needs of some new applications.
o In this paradigm, there is no need for a server process to be running all the time
and waiting for the client processes to connect.
o The responsibility is shared between peers.
o A computer connected to the Internet can provide service at one time and
receive service at another time.
o A computer can even provide and receive services at the same time.
Mixed Paradigm
o An application may choose to use a mixture of the two paradigms by
combining the advantages of both.
o For example, a light-load client-server communication can be used to find the
address of the peer that can offer a service.
o When the address of the peer is found, the actual service can be received from
the peer by using the peer-to-peer paradigm.
2. WWW (WORLD WIDE WEB)
WWW is a distributed client/server service, in which a client (a browser such as
IE or Firefox) can access services at a server (a web server such as IIS or
Apache).
The service provided is distributed over many locations called sites.
WWW was constructed originally by a small group of people led by Tim
Berners-Lee at CERN in 1989, and in 1991 it was released to the world.
A new protocol for the Internet and a system of document access to use it was
proposed and named as WWW.
This system allows document search and retrieval from any part of the Internet.
The documents contained hypertext.
The units of information on the web can be referred to as pages, documents or
resources.
resources.
A document can contain text, images, sound and video, together called
Hypermedia.
Web is a vast collection of data, information, software and protocols , spread
across the world in web servers, which are accessed by client machines by
browsers through the Internet.
Clients use a browser application to send URLs via HTTP to servers, requesting
a Web page.
Web pages constructed using HTML /XML and consist of text, graphics,
sounds plus embedded files
Servers (or caches) respond with requested Web page.
Client’s browser renders Web page returned by server
Web Page is written using Hyper Text Markup Language (HTML)
Displays text, graphics and sound in browser
The entire system runs over standard networking protocols (TCP/IP, DNS)
WEB CLIENTS (BROWSERS)
A browser is software on the client side of the web that initiates
communication with the server.
Each browser usually consists of three parts: a controller, client protocols, and
interpreters.
The controller receives input from the keyboard or the mouse and uses the
client programs to access the document. After the document has been accessed,
the controller uses one of the interpreters to display the document on the
screen.
Examples are Internet Explorer, Mozilla FireFox, Netscape Navigator, Safari
etc.
WEB SERVERS
All communication between a web client and a web server uses the
standard protocol called HTTP.
Web server informs its operating system to accept incoming network
connections using a specific port on the machine.
The server also runs as a background process.
A client (browser) opens a connection to the server, sends a request, receives
information from server and closes the connection.
Web server monitors a communications port on its host machine, accepts the
http commands through it and performs specified operations.
HTTP commands include a URL specifying the host machine.
The URL received is translated into either a filename or a program name,
accordingly the requested file or the output of the program execution is sent
back to the browser.
PROXY SERVER
A Proxy server is a computer that keeps copies of responses to recent requests.
The web client sends a request to the proxy server.
The proxy server checks its cache.
If the response is not stored in the cache, the proxy server sends the request to
the corresponding server.
Incoming responses are sent to the proxy server and stored for future requests
from other clients.
The proxy server reduces the load on the original server, decreases traffic, and
improves latency.
However, to use the proxy server, the client must be configured to access the
proxy instead of the target server.
The proxy server acts as both server and client.
When it receives a request from a client for which it has a response, it acts as a
server and sends the response to the client.
When it receives a request from a client for which it does not have a response,
it first acts as a client and sends a request to the target server.
When the response has been received, it acts again as a server and sends the
response to the client.
URL (UNIFORM RESOURCE LOCATOR)
The URL defines four parts - Method, Host computer, Port, and Path.
o Method: The method is the protocol used to retrieve the document from a
server. For example, HTTP.
o Host: The host is the computer where the information is stored, and the
computer is given an alias name. Web pages are mainly stored in computers
that are given an alias name beginning with the characters
"www". This "www" prefix is conventional, not mandatory.
o Port: The URL can also contain the port number of the server, but it's an
optional field. If the port number is included, then it must come between the
host and path and it should be separated from the host by a colon.
o Path: Path is the pathname of the file where the information is stored. The path
itself contains slashes that separate the directories from the subdirectories and
files.
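The four parts above map directly onto Python's standard URL parser; a quick illustration (the URL itself is made up):

```python
from urllib.parse import urlparse

url = "http://www.example.com:8080/docs/notes/index.html"
parts = urlparse(url)
print(parts.scheme)    # http                    <- "method" (protocol)
print(parts.hostname)  # www.example.com         <- host
print(parts.port)      # 8080                    <- optional, after a colon
print(parts.path)      # /docs/notes/index.html  <- path
```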
URL Paths
The path of a document for the http protocol is the same as the pathname of a
document, file or directory on a client.
In Unix the path components are separated by forward slashes (/), and in
Windows by backward slashes (\).
But a URL need not include all the directories in the path.
A path that includes all the directories is a complete path; otherwise it is a
partial path.
WEB DOCUMENTS
The documents in the WWW can be grouped into three broad categories:
Static, Dynamic and Active.
Static Documents
Static documents are fixed-content documents that are created and stored in a
server.
The client can get a copy of the document only.
In other words, the contents of the file are determined when the file is created,
not when it is used.
Of course, the contents in the server can be changed, but the user cannot
change them.
When a client accesses the document, a copy of the document is sent.
The user can then use a browser to see the document.
Static documents are prepared using one of several languages:
1. HyperText Markup Language (HTML)
2. Extensible Markup Language (XML)
3. Extensible Style Language (XSL)
4. Extensible Hypertext Markup Language (XHTML).
Dynamic Documents
A dynamic document is created by a web server whenever a browser requests
the document.
When a request arrives, the web server runs an application program or a script
that creates the dynamic document.
The server returns the result of the program or script as a response to the
browser that requested the document.
Because a fresh document is created for each request, the contents of a dynamic
document may vary from one request to another.
A very simple example of a dynamic document is the retrieval of the time and
date from a server.
Time and date are kinds of information that are dynamic in that they change
from moment to moment.
Dynamic documents can be retrieved using one of several scripting languages:
1. Common Gateway Interface (CGI)
2. Java Server Pages (JSP)
3. Active Server Pages (ASP)
4. ColdFusion
Active Documents
For many applications, we need a program or a script to be run at the client site.
These are called active documents.
For example, suppose we want to run a program that creates animated graphics
on the screen or a program that interacts with the user.
The program definitely needs to be run at the client site where the animation or
interaction takes place.
When a browser requests an active document, the server sends a copy of the
document or a script.
The document is then run at the client (browser) site.
Active documents can be created using one of several languages:
1. Java Applet – A program written in Java on the server. It is compiled
and ready to be run. The document is in bytecode format.
2. Java Script - Download and run the script at the client site.
HTTP (HYPERTEXT TRANSFER PROTOCOL)
When hypertext is clicked, the browser opens a new connection, retrieves the file from
the server and displays the file.
Each HTTP message has the general form
START_LINE <CRLF>
MESSAGE_HEADER <CRLF>
<CRLF> MESSAGE_BODY <CRLF>
where <CRLF> stands for carriage-return-line-feed.
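A minimal GET request in this general form can be assembled as follows (the host is a placeholder; nothing is actually sent over the network):

```python
CRLF = "\r\n"  # carriage-return-line-feed, as described above
start_line = "GET /index.html HTTP/1.1"
headers = ["Host: www.example.com", "Connection: close"]
# A blank line (a bare CRLF) separates the headers from the message body.
request = start_line + CRLF + CRLF.join(headers) + CRLF + CRLF
print(repr(request))
```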
Features of HTTP
o Connectionless protocol:
HTTP is a connectionless protocol. The HTTP client initiates a request and
waits for a response from the server. When the server receives the
request, it processes the request and sends back the response to
the HTTP client, after which the client disconnects the connection. The
connection between client and server exists only during the current
request and response time.
o Media independent:
The HTTP protocol is media independent, as any data can be sent as long as
both the client and server know how to handle the data content. Both
the client and server are required to specify the content type in the
MIME-type header.
o Stateless:
HTTP is a stateless protocol, as the client and server know each
other only during the current request. Due to this nature of the protocol,
neither the client nor the server retains information between various
requests of the web pages.
Request Message: The request message is sent by the client that consists of a
request line, headers, and sometimes a body.
Response Message: The response message is sent by the server to the client
that consists of a status line, headers, and sometimes a body.
HTTP REQUEST MESSAGE
Request Line
There are three fields in this request line - Method, URL and Version.
The Method field defines the request types.
The URL field defines the address and name of the corresponding web page.
The Version field gives the version of the protocol; the most current version of
HTTP is 1.1.
Some of the Method types are
Request Header
Each request header line sends additional information from the client to the
server.
Each header line has a header name, a colon, a space, and a header value.
The value field defines the values associated with each header name.
Headers defined for request message include
Body
The body can be present in a request message. It is optional.
Usually, it contains the comment to be sent or the file to be published on the
website when the method is PUT or POST.
Conditional Request
A client can add a condition in its request.
In this case, the server will send the requested web page if the condition is met
or inform the client otherwise.
One of the most common conditions imposed by the client is the time and date
the web page is modified.
The client can send the header line If-Modified-Since with the request to tell the
server that it needs the page only if it is modified after a certain point in time.
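An If-Modified-Since header can be built with the standard library's RFC-style date formatter (the date here is the Unix epoch, chosen purely for illustration):

```python
import email.utils

# formatdate(0, usegmt=True) renders timestamp 0 in the HTTP date format.
if_modified = email.utils.formatdate(0, usegmt=True)
header_line = f"If-Modified-Since: {if_modified}"
print(header_line)  # If-Modified-Since: Thu, 01 Jan 1970 00:00:00 GMT
```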
Status Line
The Status line contains three fields - HTTP version , Status code, Status
phrase
The first field defines the version of HTTP protocol, currently 1.1.
The status code field defines the status of the request. It classifies the HTTP
result. It consists of three digits.
1xx–Informational, 2xx– Success, 3xx–Redirection,
4xx–Client error, 5xx–Server error
The Status phrase field gives brief description about status code in text form.
Some of the Status codes are
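The three-digit grouping above can be expressed as a small helper (illustrative, using integer division on the first digit):

```python
def status_class(code):
    """Map an HTTP status code to its class by its first digit."""
    return {1: "Informational", 2: "Success", 3: "Redirection",
            4: "Client error", 5: "Server error"}[code // 100]

print(status_class(404))  # Client error
print(status_class(200))  # Success
```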
Response Header
Each header provides additional information to the client.
Each header line has a header name, a colon, a space, and a header value.
Some of the response headers are:
Body
The body contains the document to be sent from the server to the client.
The body is present unless the response is an error message.
HTTP CONNECTIONS
HTTP Clients and Servers exchange multiple messages over the same TCP
connection.
If several objects are located on the same server, we have two choices: to
retrieve each object using a new TCP connection, or to make one TCP connection
and retrieve them all.
The first method is referred to as a non-persistent connection, the second as a
persistent connection.
HTTP 1.0 uses non-persistent connections and HTTP 1.1 uses persistent
connections .
NON-PERSISTENT CONNECTIONS
PERSISTENT CONNECTIONS
HTTP COOKIES
An HTTP cookie (also called web cookie, Internet cookie, browser cookie,
or simply cookie) is a small piece of data sent from a website and stored on the
user's computer by the user's web browser while the user is browsing.
HTTP is stateless; cookies are used to add state.
Cookies were designed to be a reliable mechanism for websites to
remember stateful information (such as items added in the shopping cart in an
online store) or to record the user's browsing activity (including clicking
particular buttons, logging in, or recording which pages were visited in the
past).
They can also be used to remember arbitrary pieces of information that the user
previously entered into form fields such as names, addresses, passwords, and
credit card numbers.
Components of Cookie
A cookie consists of the following components:
1. Name
2. Value
3. Zero or more attributes (name/value pairs). Attributes store information such as
the cookie's expiration, domain, and flags
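Python's standard http.cookies module parses exactly these components; a quick sketch with made-up values:

```python
from http.cookies import SimpleCookie

c = SimpleCookie()
c.load('sessionid=abc123; Path=/; Max-Age=3600; Secure')
morsel = c["sessionid"]
print(morsel.key)        # name:      sessionid
print(morsel.value)      # value:     abc123
print(morsel["path"])    # attribute: /
print(morsel["max-age"]) # attribute: 3600
```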
Creating and Storing Cookies
The creation and storing of cookies depend on the implementation; however, the
principle is the same.
1. When a server receives a request from a client, it stores information
about the client in a file or a string. The information may include the
domain name of the client, the contents of the cookie (information the
server has gathered about the client such as name, registration number,
and so on), a timestamp, and other information depending on the
implementation.
2. The server includes the cookie in the response that it sends to the client.
3. When the client receives the response, the browser stores the cookie in
the cookie directory, which is sorted by the server domain name.
Using Cookies
When a client sends a request to a server, the browser looks in the cookie
directory to see if it can find a cookie sent by that server.
If found, the cookie is included in the request.
When the server receives the request, it knows that this is an old client, not a
new one.
The contents of the cookie are never read by the browser or disclosed to the
user. It is a cookie made by the server and eaten by the server.
Types of Cookies
1. Authentication cookies
These are the most common method used by web servers to know whether the
user is logged in or not, and which account they are logged in with. Without
such a mechanism, the site would not know whether to send a page containing
sensitive information, or require the user to authenticate themselves by logging
in.
2. Tracking cookies
These are commonly used as ways to compile individuals’ browsing histories.
3. Session cookie
A session cookie exists only in temporary memory while the user navigates the
website. Web browsers normally delete session cookies when the user closes
the browser.
4. Persistent cookie
Instead of expiring when the web browser is closed as session cookies do,
a persistent cookie expires at a specific date or after a specific length of time.
This means that, for the cookie's entire lifespan , its information will be
transmitted to the server every time the user visits the website that it belongs to,
or every time the user views a resource belonging to that website from another
website.
HTTP CACHING
HTTP caching enables the client to retrieve documents faster and reduces load
on the server.
HTTP Caching is implemented at Proxy server, ISP router and Browser.
Server sets expiration date (Expires header) for each page, beyond which it is
not cached.
A cached document is returned to the client only if it is an up-to-date copy,
checked against the If-Modified-Since header.
If the cached document is out of date, then the request is forwarded to the server and
the response is cached along the way.
A web page will not be cached if the no-cache directive is specified.
HTTP SECURITY
HTTP does not provide security.
However HTTP can be run over the Secure Socket Layer (SSL).
In this case, HTTP is referred to as HTTPS.
HTTPS provides confidentiality, client and server authentication, and data
integrity.
FTP (FILE TRANSFER PROTOCOL)
FTP OBJECTIVES
It provides the sharing of files.
It is used to encourage the use of remote computers.
It transfers the data more reliably and efficiently.
FTP MECHANISM
The above figure shows the basic model of the FTP.
The FTP client has three components:
o user interface, control process, and data transfer process.
The server has two components:
o server control process and server data transfer process.
FTP CONNECTIONS
There are two types of connections in FTP -
Control Connection and Data Connection.
The two connections in FTP have different lifetimes.
The control connection remains connected during the entire interactive FTP
session.
The data connection is opened and then closed for each file transfer activity.
When a user starts an FTP session, the control connection opens.
While the control connection is open, the data connection can be opened and
closed multiple times if several files are transferred.
FTP uses two well-known TCP ports:
o Port 21 is used for the control connection
o Port 20 is used for the data connection.
Control Connection:
o The control connection uses very simple rules for communication.
o Through control connection, we can transfer a line of command or line
of response at a time.
o The control connection is made between the control processes.
o The control connection remains connected during the entire interactive
FTP session.
Data Connection:
o The Data Connection uses very complex rules as data types may vary.
o The data connection is made between data transfer processes.
o The data connection opens when a command comes for transferring the
files and closes when the file is transferred.
FTP COMMUNICATION
FTP Communication is achieved through commands and responses.
FTP Commands are sent from the client to the server
FTP responses are sent from the server to the client.
FTP Commands are in the form of ASCII uppercase, which may or may not be
followed by an argument.
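The command format can be sketched as a small helper. This is an illustrative Python function (not part of any FTP library): it builds one control-connection line from a keyword and an optional argument, terminated by CRLF as the protocol requires.

```python
def ftp_command(keyword, argument=None):
    """Format one FTP control-connection line: an uppercase ASCII
    keyword, an optional argument, and the CRLF line terminator."""
    line = keyword.upper()
    if argument is not None:
        line += " " + argument
    return line + "\r\n"

print(repr(ftp_command("user", "anonymous")))  # 'USER anonymous\r\n'
print(repr(ftp_command("quit")))               # 'QUIT\r\n'
```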
Some of the most common commands are
FTP DATA STRUCTURE
FTP can transfer a file across the data connection using one of the following
data structures: file structure, record structure, or page structure.
The file structure format is the default one and has no structure. It is a
continuous stream of bytes.
In the record structure, the file is divided into records. This can be used only
with text files.
In the page structure, the file is divided into pages, with each page having a
page number and a page header. The pages can be stored and accessed
randomly or sequentially.
FTP SECURITY
FTP requires a password, but the password is sent in plaintext (unencrypted).
This means it can be intercepted and used by an attacker.
The data transfer connection also transfers data in plaintext, which is insecure.
To be secure, one can add a Secure Sockets Layer (SSL) between the FTP
application layer and the TCP layer.
In this case FTP is called SSL-FTP.
When the sender and the receiver of an e-mail are on the same system, we need
only two User Agents and no Message Transfer Agent
When the sender and the receiver of an e-mail are on different systems, we need
two UAs, two pairs of MTAs (client and server), and a pair of MAAs (client and
server).
WORKING OF EMAIL
USER AGENT (UA)
The first component of an electronic mail system is the user agent (UA).
It provides service to the user to make the process of sending and receiving a
message easier.
A user agent is a software package that composes, reads, replies to, and
forwards messages. It also handles local mailboxes on the user computers.
Command driven
o Command-driven user agents belong to the early days of electronic mail.
o A command-driven user agent normally accepts a one-character command from
the keyboard to perform its task.
o Some examples of command-driven user agents are mail, pine, and elm.
GUI-based
o Modern user agents are GUI-based.
o They allow the user to interact with the software by using both the keyboard
and the mouse.
o They have graphical components such as icons, menu bars, and windows that
make the services easy to access.
o Some examples of GUI-based user agents are Eudora and Outlook.
ADDRESS FORMAT OF EMAIL
An e-mail address has the form userid@domain, where domain is the hostname of
the mail server.
Email was extended in 1993 to carry many different types of data: audio,
video, images, Word documents, and so on.
This extended version is known as MIME (Multipurpose Internet Mail Extensions).
SMTP clients and servers have two main components:
o User Agent (UA) – Prepares the message and encloses it in an envelope.
o Mail Transfer Agent (MTA) – Transfers the mail across the internet.
SMTP also allows the use of Relays allowing other MTAs to relay the mail.
SMTP MAIL FLOW
SMTP Commands
Commands are sent from the client to the server. It consists of a keyword
followed by zero or more arguments. SMTP defines 14 commands.
SMTP Responses
Responses are sent from the server to the client.
A response is a three digit code that may be followed by additional textual
information.
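A reply line can be split into its code and text with a tiny parser. This is an illustrative Python sketch, assuming one complete reply line is already in hand.

```python
def parse_smtp_response(line):
    """Split an SMTP reply into its three-digit numeric code and the
    optional textual explanation that may follow it."""
    code = int(line[:3])
    text = line[4:] if len(line) > 4 else ""
    return code, text

print(parse_smtp_response("250 OK"))               # (250, 'OK')
print(parse_smtp_response("354 Start mail input")) # (354, 'Start mail input')
```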
SMTP OPERATIONS
Basic SMTP operation occurs in three phases:
1. Connection Setup
2. Mail Transfer
3. Connection Termination
Connection Setup
An SMTP sender will attempt to set up a TCP connection with a target host
when it has one or more mail messages to deliver to that host.
The sequence is quite simple:
1. The sender opens a TCP connection with the receiver.
2. Once the connection is established, the receiver identifies itself with
"Service Ready".
3. The sender identifies itself with the HELO command.
4. The receiver accepts the sender's identification with "OK".
5. If the mail service on the destination is unavailable, the destination host
returns a "Service Not Available" reply in step 2, and the process is
terminated.
Mail Transfer
Once a connection has been established, the SMTP sender may send one or
more messages to the SMTP receiver.
There are three logical phases to the transfer of a message:
1. A MAIL command identifies the originator of the message.
2. One or more RCPT commands identify the recipients for this
message.
3. A DATA command transfers the message text.
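The three phases above can be sketched as the command sequence the client would emit. This is a minimal illustration (addresses are invented), not a working SMTP client.

```python
def mail_transfer_commands(sender, recipients):
    """The three logical phases of an SMTP mail transfer as a command
    sequence: MAIL for the originator, one RCPT per recipient, then
    DATA to begin the message text."""
    commands = ["MAIL FROM:<%s>" % sender]
    commands += ["RCPT TO:<%s>" % r for r in recipients]
    commands.append("DATA")
    return commands

for cmd in mail_transfer_commands("alice@a.example", ["bob@b.example"]):
    print(cmd)
```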
Connection Termination
The SMTP sender closes the connection in two steps.
First, the sender sends a QUIT command and waits for a reply.
The second step is to initiate a TCP close operation for the TCP connection.
The receiver initiates its TCP close after sending its reply to the QUIT
command.
LIMITATIONS OF SMTP
SMTP cannot transmit executable files or other binary objects.
SMTP cannot transmit text data that includes national language characters, as
these are represented by 8-bit codes with values of 128 decimal or higher, and
SMTP is limited to 7-bit ASCII.
SMTP servers may reject mail messages over a certain size.
SMTP gateways that translate between ASCII and the character code EBCDIC
do not use a consistent set of mappings, resulting in translation problems.
Some SMTP implementations do not adhere completely to the SMTP standards
defined.
Common problems include the following:
1. Deletion, addition, or reordering of carriage return and linefeed characters.
2. Truncating or wrapping lines longer than 76 characters.
3. Removal of trailing white space (tab and space characters).
4. Padding of lines in a message to the same length.
5. Conversion of tab characters into multiple-space characters.
MIME extends e-mail with the following features:
Ability to send multiple attachments with a single message;
Unlimited message length;
Use of character sets other than ASCII code;
Use of rich text (layouts, fonts, colors, etc.);
Binary attachments (executables, images, audio or video files, etc.), which
may be divided if needed.
MIME is a protocol that converts non-ASCII data to 7-bit NVT (Network
Virtual Terminal) ASCII and vice versa.
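Base64 is the usual MIME transfer encoding for this conversion, and Python's standard library can demonstrate it directly. The payload bytes below are arbitrary examples.

```python
import base64

payload = bytes([0xC3, 0xA9, 0x00, 0xFF])  # arbitrary binary, not 7-bit safe
encoded = base64.b64encode(payload)         # 7-bit ASCII, safe for SMTP
decoded = base64.b64decode(encoded)         # round-trips back to the original

print(encoded)
print(all(b < 128 for b in encoded))  # True: every byte fits in 7 bits
```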
MIME HEADERS
Using headers, MIME describes the type of message content and the encoding
used.
Headers defined in MIME are:
MIME-Version - the current version, i.e., 1.0
Content-Type - the message type (text/html, image/jpeg, application/pdf)
Content-Transfer-Encoding - the message encoding scheme (e.g., base64)
Content-Id - a unique identifier for the message.
Content-Description - describes the type of the message body.
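These headers can be seen in practice with Python's standard email library, which generates MIME messages. The subject and attachment bytes below are invented for illustration.

```python
from email.message import EmailMessage

msg = EmailMessage()
msg["Subject"] = "Report"
msg.set_content("See attachment.")
# Attaching binary data turns the message into multipart/mixed and
# base64-encodes the attachment body.
msg.add_attachment(b"\x00\x01\x02", maintype="application",
                   subtype="octet-stream", filename="data.bin")

print(msg["MIME-Version"])     # 1.0
print(msg.get_content_type())  # multipart/mixed
```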
MIME CONTENT TYPES
There are seven different major types of content and a total of 14 subtypes.
In general, a content type declares the general type of data, and the subtype
specifies a particular format for that type of data.
MIME also defines a multipart type that says how a message carrying more
than one data type is structured.
This is like a programming language that defines both base types (e.g., integers
and floats) and compound types (e.g., structures and arrays).
One possible multipart subtype is mixed, which says that the message contains
a set of independent data pieces in a specified order.
Each piece then has its own header line that describes the type of that piece.
The table below lists the MIME content types:
MESSAGE TRANSFER IN MIME
An MTA is a mail daemon (e.g., sendmail) active on hosts that have a mailbox,
used to send e-mail.
Mail passes through a sequence of gateways before it reaches the recipient mail
server.
Each gateway stores and forwards the mail using Simple mail transfer protocol
(SMTP).
SMTP defines communication between MTAs over TCP on port 25.
In an SMTP session, the sending MTA is the client and the receiving MTA is the
server. In each exchange:
Client posts a command (HELO, MAIL, RCPT, DATA, QUIT, VRFY, etc.)
Server responds with a code (250, 550, 354, 221, 251 etc) and an explanation.
Client is identified using HELO command and verified by the server
Client forwards message to server, if server is willing to accept.
Message is terminated by a line with only single period (.) in it.
Eventually client terminates the connection.
OPERATION OF IMAP
The mail transfer begins with the client authenticating the user and identifying
the mailbox they want to access.
Client Commands
LOGIN, AUTHENTICATE, SELECT, EXAMINE, CLOSE, and LOGOUT
Server Responses
OK, NO (no permission), BAD (incorrect command),
When user wishes to FETCH a message, server responds in MIME format.
Message attributes such as size are also exchanged.
Flags are used by client to report user actions.
SEEN, ANSWERED, DELETED, RECENT
IMAP4
The latest version is IMAP4. IMAP4 is more powerful and more complex.
IMAP4 provides the following extra functions:
A user can check the e-mail header prior to downloading.
A user can search the contents of the e-mail for a specific string of
characters prior to downloading.
A user can partially download e-mail. This is especially useful if bandwidth
is limited and the e-mail contains multimedia with high bandwidth
requirements.
A user can create, delete, or rename mailboxes on the mail server.
A user can create a hierarchy of mailboxes in a folder for e-mail storage.
ADVANTAGES OF IMAP
With IMAP, the primary storage is on the server, not on the local machine.
Mail being put away for storage can be filed in folders on the local disk, or
in folders on the IMAP server.
The protocol allows full use of remote folders, including a remote folder
hierarchy and multiple inboxes.
It keeps track of explicit status of messages, and allows for user-defined status.
Supports new mail notification explicitly.
Extensible for non-email data, like netnews, document storage, etc.
Selective fetching of individual MIME body parts.
Server-based search to minimize data transfer.
Servers may have extensions that can be negotiated.
POP3 client is installed on the recipient computer and POP server on the mail
server.
Client opens a connection to the server using TCP on port 110.
Client sends username and password to access mailbox and to retrieve
messages.
POP3 Commands
POP commands are generally abbreviated into codes of three or four letters
The following describes some of the POP commands:
1. USER - This command identifies the user and (with PASS) opens the session
2. STAT - It is used to display the number of messages currently in the mailbox
3. LIST - It is used to get a summary of the messages
4. RETR - It is used to retrieve a message
5. DELE - It is used to delete a message
6. RSET - It is used to reset the session to its initial state
7. QUIT - It is used to log off the session
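Every POP3 command is answered with a status indicator, which a client must check before proceeding. A minimal sketch of that check (the reply strings are examples):

```python
def pop3_ok(line):
    """POP3 status indicators: '+OK' marks success, '-ERR' failure."""
    return line.startswith("+OK")

print(pop3_ok("+OK 2 messages (320 octets)"))  # True
print(pop3_ok("-ERR no such message"))         # False
```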
7. POP commands are generally abbreviated into codes of three or four letters
(e.g., STAT). IMAP commands are not abbreviated; they are full (e.g., STATUS).
8. POP requires minimum use of server resources. IMAP clients are totally
dependent on the server.
9. With POP, mails once downloaded cannot be accessed from some other
location. IMAP allows mails to be accessed from multiple locations.
10. With POP, the e-mails are downloaded automatically. With IMAP, users can
view the headings and senders of e-mails and then decide whether to download.
11. POP requires less internet usage time. IMAP requires more internet usage
time.
Local Login
Remote Logging
TELNET OPTIONS
TELNET lets the client and server negotiate options before or during the use of
the service.
Options are extra features available to a user with a more sophisticated
terminal.
Users with simpler terminals can use default features.
TELNET COMMANDS
NVT Character Format
NVT uses two sets of characters, one for data and one for control.
For data, NVT normally uses what is called NVT ASCII. This is an 8-bit
character set in which the seven lowest order bits are the same as ASCII and
the highest order bit is 0.
To send control characters between computers, NVT uses an 8-bit character
set in which the highest order bit is set to 1.
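The high-bit rule described in these notes can be expressed as a one-line check. This is an illustrative sketch of that rule only (IAC, value 255, is TELNET's "interpret as command" byte).

```python
def nvt_kind(byte):
    """Per the NVT rule above: data characters keep the highest-order
    bit 0; control characters have the highest-order bit set to 1."""
    return "control" if byte & 0x80 else "data"

print(nvt_kind(ord("A")))  # data
print(nvt_kind(255))       # control (IAC, "interpret as command")
```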
Secure Shell (SSH) is a secure application program that can be used today for
several purposes, such as remote logging and file transfer; it was originally
designed to replace TELNET.
There are two versions of SSH: SSH-1 and SSH-2, which are totally
incompatible. The first version, SSH-1, is now deprecated because of security
flaws in it.
SSH COMPONENTS
SSH is an application-layer protocol with three components:
1. SSH Transport-Layer Protocol (SSH-TRANS)
2. SSH Authentication Protocol (SSH-AUTH)
3. SSH Connection Protocol (SSH-CONN)
Services provided by the SSH Transport-Layer Protocol (SSH-TRANS):
1. Privacy or confidentiality of the message exchanged
2. Data integrity, which means that it is guaranteed that the messages
exchanged between the client and server are not changed by an intruder
3. Server authentication, which means that the client is now sure that the server
is the one that it claims to be
4. Compression of the messages, which improves the efficiency of the system
and makes attack more difficult
SSH APPLICATIONS
SSH is a general-purpose protocol that provides a secure connection between a client
and server.
Port Forwarding
One of the interesting services provided by the SSH protocol is port
forwarding.
We can use the secured channels available in SSH to access an application
program that does not provide security services.
Applications such as TELNET and the Simple Mail Transfer Protocol (SMTP) can
use the services of the SSH port-forwarding mechanism.
The SSH port forwarding mechanism creates a tunnel through which the
messages belonging to other protocols can travel.
For this reason, this mechanism is sometimes referred to as SSH tunneling.
SSH PACKET FORMAT
The length field defines the length of the packet but does not include the
padding.
The Padding field is added to the packet to make the attack on the security
provision more difficult.
The type field designates the type of the packet used in different SSH
protocols.
The data field is the data transferred by the packet in different protocols.
The CRC field is used for error detection.
The FTP client can use the SSH client on the local site to make a secure
connection with the SSH server on the remote site.
Any request from the FTP client to the FTP server is carried through the tunnel
provided by the SSH client and server.
Any response from the FTP server to the FTP client is also carried through the
tunnel provided by the SSH client and server.
8. DNS (DOMAIN NAME SYSTEM)
WORKING OF DNS
The following six steps show how DNS maps a host name to an IP address:
1. The user passes the host name to the file transfer client.
2. The file transfer client passes the host name to the DNS client.
3. Each computer, after being booted, knows the address of one DNS server. The
DNS client sends a message to a DNS server with a query that gives the file
transfer server name using the known IP address of the DNS server.
4. The DNS server responds with the IP address of the desired file transfer server.
5. The DNS client passes the IP address to the file transfer client.
6. The file transfer client now uses the received IP address to access the file
transfer server.
NAME SPACE
To be unambiguous, the names assigned to machines must be carefully selected
from a name space with complete control over the binding between the names
and IP addresses.
The names must be unique because the addresses are unique.
A name space that maps each address to a unique name can be organized in
two ways: flat or hierarchical.
Each node in the tree has a label, which is a string with a maximum of 63
characters.
The root label is a null string (empty string). DNS requires that children of a
node (nodes that branch from the same node) have different labels, which
guarantees the uniqueness of the domain names.
Domain Name
Each node in the tree has a domain name.
A full domain name is a sequence of labels separated by dots (.).
The domain names are always read from the node up to the root.
The last label is the label of the root (null).
This means that a full domain name always ends in a null label, which
means the last character is a dot because the null string is nothing.
If a label is terminated by a null string, it is called a fully qualified domain
name (FQDN).
If a label is not terminated by a null string, it is called a partially qualified
domain name (PQDN).
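The FQDN/PQDN distinction reduces to checking for the trailing dot that marks the null root label. A minimal sketch (example names are invented):

```python
def name_kind(domain):
    """A name terminated by the null (root) label ends in a dot and is
    fully qualified; otherwise it is only partially qualified."""
    return "FQDN" if domain.endswith(".") else "PQDN"

print(name_kind("www.example.com."))  # FQDN
print(name_kind("www.example"))       # PQDN
```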
Domain
A domain is a subtree of the domain name space.
The name of the domain is the domain name of the node at the top of the sub-
tree.
A domain may itself be divided into domains.
ZONE
What a server is responsible for, or has authority over, is called a zone.
The server makes a database called a zone file and keeps all the information for
every node under that domain.
If a server accepts responsibility for a domain and does not divide the domains
into smaller domains, the domain and zone refer to the same thing.
But if a server divides its domain into sub domains and delegates parts of its
authority to other servers, domain and zone refer to different things.
The information about the nodes in the sub domains is stored in the servers at
the lower levels, with the original server keeping some sort of references to
these lower level servers.
But still, the original server does not free itself from responsibility totally.
It still has a zone, but the detailed information is kept by the lower level
servers.
ROOT SERVER
A root server is a server whose zone consists of the whole tree.
A root server usually does not store any information about domains but
delegates its authority to other servers, keeping references to those servers.
Currently there are more than 13 root servers, each covering the whole domain
name space.
The servers are distributed all around the world.
Generic Domains
The generic domains define registered hosts according to their generic
behavior.
Each node in the tree defines a domain, which is an index to the domain name
space database.
The first level in the generic domains section allows seven possible three-
character labels.
These labels describe the organization types, as listed in the following table.
Country Domains
The country domains section follows the same format as the generic domains
but uses two-character country abbreviations (e.g., in for India, us for the
United States) in place of the three-character organizational abbreviation at
the first level.
Second-level labels can be organizational, or they can be more specific
national designations.
The United States, for example, uses state abbreviations as a subdivision of
the country domain us (e.g., ca.us.).
Inverse Domains
The inverse domain is used to map an address to a name.
The client can send an IP address to a server to be mapped to a domain name;
this is called a PTR (pointer) query.
To answer queries of this kind, DNS uses the inverse domain.
DNS RESOLUTION
Mapping a name to an address or an address to a name is called name address
resolution.
DNS is designed as a client server application.
A host that needs to map an address to a name or a name to an address calls a
DNS client named a Resolver.
The Resolver accesses the closest DNS server with a mapping request.
If the server has the information, it satisfies the resolver; otherwise, it either
refers the resolver to other servers or asks other servers to provide the
information.
After the resolver receives the mapping, it interprets the response to see if it is a
real resolution or an error and finally delivers the result to the process that
requested it.
A resolution can be either recursive or iterative.
Recursive Resolution
The application program on the source host calls the DNS resolver (client) to
find the IP address of the destination host. The resolver, which does not know
this address, sends the query to the local DNS server of the source (Event 1)
The local server sends the query to a root DNS server (Event 2)
The Root server sends the query to the top-level-DNS server(Event 3)
The top-level DNS server knows only the IP address of the local DNS server at
the destination. So it forwards the query to the local server, which knows the IP
address of the destination host (Event 4)
The IP address of the destination host is now sent back to the top-level DNS
server(Event 5) then back to the root server (Event 6), then back to the source
DNS server, which may cache it for the future queries (Event 7), and finally
back to the source host (Event 8).
Iterative Resolution
In iterative resolution, each server that does not know the mapping sends the
IP address of the next server back to the one that requested it.
The iterative resolution takes place between two local servers.
The original resolver gets the final answer from the destination local server.
The messages shown by Events 2, 4, and 6 contain the same query.
However, the message shown by Event 3 contains the IP address of the top-
level domain server.
The message shown by Event 5 contains the IP address of the destination local
DNS server
The message shown by Event 7 contains the IP address of the destination.
When the Source local DNS server receives the IP address of the destination, it
sends it to the resolver (Event 8).
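The referral chain of iterative resolution can be simulated with a toy server table. Everything here (server names, zones, the address 192.0.2.7) is invented for illustration; real resolution uses DNS messages over the network.

```python
# Toy simulation of iterative resolution: each server either knows the
# answer or refers the resolver to the next server down the hierarchy.
SERVERS = {
    "root":          {"refer": {"com.": "tld-com"}},
    "tld-com":       {"refer": {"example.com.": "local-example"}},
    "local-example": {"answer": {"host.example.com.": "192.0.2.7"}},
}

def iterative_resolve(name, server="root"):
    while True:
        srv = SERVERS[server]
        if name in srv.get("answer", {}):
            return srv["answer"][name]  # final mapping reached
        # Referral: move to the server responsible for the closest
        # enclosing zone and repeat the same query there.
        for zone, nxt in srv.get("refer", {}).items():
            if name.endswith(zone):
                server = nxt
                break
        else:
            raise KeyError(name)

print(iterative_resolve("host.example.com."))  # 192.0.2.7
```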
DNS CACHING
Each time a server receives a query for a name that is not in its domain, it needs
to search its database for a server IP address.
DNS handles this with a mechanism called caching.
When a server asks for a mapping from another server and receives the
response, it stores this information in its cache memory before sending it to the
client.
If the same or another client asks for the same mapping, it can check its cache
memory and resolve the problem.
However, to inform the client that the response is coming from the cache
memory and not from an authoritative source, the server marks the response as
unauthoritative.
Caching speeds up resolution and reduces search time, but it can also be
problematic.
If a server caches a mapping for a long time, it may send an outdated mapping
to the client.
To counter this, two techniques are used.
First, the authoritative server always adds information to the mapping
called time to live (TTL). It defines the time in seconds that the
receiving server can cache the information. After that time, the mapping
is invalid and any query must be sent again to the authoritative server.
Second, DNS requires that each server keep a TTL counter for each
mapping it caches. The cache memory must be searched periodically
and those mappings with an expired TTL must be purged.
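The two TTL techniques above can be sketched as a tiny cache class. This is an illustrative sketch with an injected clock (`now`), not a real resolver cache; names and addresses are invented.

```python
class DnsCache:
    """Cache of name->address mappings. Each entry carries the TTL set
    by the authoritative server; expired entries are purged on lookup."""

    def __init__(self):
        self.entries = {}  # name -> (address, expiry_time)

    def store(self, name, address, ttl, now):
        self.entries[name] = (address, now + ttl)

    def lookup(self, name, now):
        entry = self.entries.get(name)
        if entry is None:
            return None
        address, expiry = entry
        if now >= expiry:          # TTL expired: purge, must re-query
            del self.entries[name]
            return None
        return address

cache = DnsCache()
cache.store("example.com.", "192.0.2.7", ttl=300, now=0)
print(cache.lookup("example.com.", now=100))  # 192.0.2.7 (still valid)
print(cache.lookup("example.com.", now=400))  # None (expired, purged)
```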
DNS MESSAGES
DNS has two types of messages: query and response.
Both types have the same format.
The query message consists of a header and a question section.
The response message consists of a header, a question section, an answer
section, an authoritative section, and an additional section.
Header
Both query and response messages have the same header format with
some fields set to zero for the query messages.
The header fields are as follows:
The identification field is used by the client to match the response with
the query.
The flag field defines whether the message is a query or a response. It also
includes error status.
The next four fields in the header define the number of each record type
in the message.
Question Section
The question section consists of one or more question records. It is
present in both query and response messages.
Answer Section
The answer section consists of one or more resource records. It is
present only in response messages.
Authoritative Section
The authoritative section gives information (domain name) about one or
more authoritative servers for the query.
Additional Information Section
The additional information section provides additional information that
may help the resolver.
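The fixed 12-byte header can be packed directly with Python's struct module. This sketch builds only the header of a query; the flags value 0x0100 (recursion desired) and the identification value are example choices.

```python
import struct

def dns_query_header(ident, qdcount=1):
    """Pack the fixed 12-byte DNS header: identification, flags, and
    the four record counts (question, answer, authoritative,
    additional). Flags = 0x0100 sets the recursion-desired bit."""
    return struct.pack("!HHHHHH", ident, 0x0100, qdcount, 0, 0, 0)

header = dns_query_header(0xABCD)
print(len(header))        # 12
print(header[:2].hex())   # abcd: the client matches this id in the response
```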
DNS CONNECTIONS
DNS can use either UDP or TCP.
In both cases the well-known port used by the server is port 53.
UDP is used when the size of the response message is less than 512 bytes
because most UDP packages have a 512-byte packet size limit.
If the size of the response message is more than 512 bytes, a TCP connection is
used.
DNS REGISTRARS
New domains are added to DNS through a registrar. A fee is charged.
A registrar first verifies that the requested domain name is unique and then
enters it into the DNS database.
Today, there are many registrars; their names and addresses can be found at
http://www.internic.net
To register, the organization needs to give the name of its server and the IP
address of the server.
For example, a new commercial organization named wonderful with a server
named ws and IP address 200.200.200.5, needs to give the following
information to one of the registrars:
Domain name: ws.wonderful.com
IP address: 200.200.200.5
DDNS (DYNAMIC DOMAIN NAME SYSTEM)
In DNS, when there is a change, such as adding a new host, removing a host, or
changing an IP address, the change must be made to the DNS master file.
The DNS master file must be updated dynamically.
The Dynamic Domain Name System (DDNS) is used for this purpose.
In DDNS, when a binding between a name and an address is determined, the
information is sent to a primary DNS server.
The primary server updates the zone.
The secondary servers are notified either actively or passively.
In active notification, the primary server sends a message to the secondary
servers about the change in the zone, whereas in passive notification, the
secondary servers periodically check for any changes.
In either case, after being notified about the change, the secondary server
requests information about the entire zone (called the zone transfer).
To provide security and prevent unauthorized changes in the DNS records,
DDNS can use an authentication mechanism.
DNS SECURITY
DNS is one of the most important systems in the Internet infrastructure; it
provides crucial services to Internet users.
Applications such as Web access or e-mail are heavily dependent on the proper
operation of DNS.
DNS can be attacked in several ways including:
Attack on Confidentiality - The attacker may read the response of a DNS
server to find the nature or names of sites the user mostly accesses. This
type of information can be used to find the user’s profile. To prevent this
attack, DNS messages need to be confidential.
Attack on authentication and integrity - The attacker may intercept the
response of a DNS server and change it or create a totally new bogus
response to direct the user to the site or domain the attacker wishes the user
to access. This type of attack can be prevented using message origin
authentication and message integrity.
Denial-of-service attack - The attacker may flood the DNS server to
overwhelm it or eventually crash it. This type of attack can be prevented
using provisions against denial-of-service attacks.
9. SNMP (SIMPLE NETWORK MANAGEMENT PROTOCOL)
The Simple Network Management Protocol (SNMP) is a framework for
managing devices in an internet using the TCP/IP protocol suite.
SNMP is an application layer protocol that monitors and manages routers,
distributed over a network.
It provides a set of operations for monitoring and managing the internet.
SNMP uses the services of UDP on two well-known ports: 161 (agent) and 162
(manager).
SNMP uses the concept of manager and agent.
SNMP MANAGER
A manager is a host that runs the SNMP client program
The manager has access to the values in the database kept by the agent.
A manager checks the agent by requesting the information that reflects the
behavior of the agent.
A manager also forces the agent to perform a certain function by resetting
values in the agent database.
For example, a router can store in appropriate variables the number of packets
received and forwarded.
The manager can fetch and compare the values of these two variables to see if
the router is congested or not.
SNMP AGENT
The agent is a router that runs the SNMP server program.
The agent is used to keep the information in a database while the manager is
used to access the values in the database.
For example, a router can store the appropriate variables such as a number of
packets received and forwarded while the manager can compare these variables
to determine whether the router is congested or not.
Agents can also contribute to the management process.
A server program on the agent checks the environment, if something goes
wrong, the agent sends a warning message to the manager.
Name
SMI requires that each managed object (such as a router, a variable in a
router, a value, etc.) have a unique name.
To name objects globally, SMI uses an object identifier, which is a
hierarchical identifier based on a tree structure.
The tree structure starts with an unnamed root. Each object can be defined
using a sequence of integers separated by dots.
The tree structure can also define an object using a sequence of textual names
separated by dots.
Type of data
The second attribute of an object is the type of data stored in it.
To define the data type, SMI uses Abstract Syntax Notation One (ASN.1)
definitions.
SMI has two broad categories of data types: simple and structured.
The simple data types are atomic data types. Some of them are taken directly
from ASN.1; some are added by SMI.
SMI defines two structured data types: sequence and sequence of.
Sequence - A sequence data type is a combination of simple data types,
not necessarily of the same type.
Sequence of - A sequence of data type is a combination of simple data
types all of the same type or a combination of sequence data types all of
the same type.
Encoding data
SMI uses another standard, Basic Encoding Rules (BER), to encode data to be
transmitted over the network.
BER specifies that each piece of data be encoded in triplet format (TLV): tag,
length, value
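The TLV triplet can be illustrated for the simplest case, a small INTEGER. This is a deliberately minimal sketch (single-byte non-negative values only); full BER length and multi-byte integer handling are more involved.

```python
def ber_encode_integer(value):
    """Minimal BER TLV encoding of a small non-negative INTEGER:
    tag 0x02, a one-byte length, then the value byte itself.
    Handles only 0..127 here; full BER is more involved."""
    if not 0 <= value <= 127:
        raise ValueError("sketch handles single-byte positive values only")
    body = bytes([value])
    return bytes([0x02, len(body)]) + body

print(ber_encode_integer(5).hex())  # 020105: tag=02, length=01, value=05
```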
Management Information Base (MIB)
The Management Information Base (MIB) is the second component used in network
management.
Each agent has its own MIB, which is a collection of objects to be managed.
MIB classifies objects under groups.
MIB Variables
MIB variables are of two types namely simple and table.
Simple variables are accessed using group-id followed by variable-id and 0
Tables are ordered as column-row rules, i.e., column by column from top to
bottom. Only leaf elements are accessible in a table type.
SNMP MESSAGES/PDU
SNMP is a request/reply protocol that supports various operations using PDUs.
SNMP defines eight types of protocol data units (or PDUs):
GetRequest, GetNextRequest, GetBulkRequest, SetRequest, Response, Trap,
InformRequest, and Report.
GetRequest
The GetRequest PDU is sent from the manager (client) to the agent (server) to
retrieve the value of a variable or a set of variables.
GetNextRequest
The GetNextRequest PDU is sent from the manager to the agent to retrieve the
value of a variable.
GetBulkRequest
The GetBulkRequest PDU is sent from the manager to the agent to retrieve a
large amount of data. It can be used instead of multiple GetRequest and
GetNextRequest PDUs.
SetRequest
The SetRequest PDU is sent from the manager to the agent to set (store) a
value in a variable.
Response
The Response PDU is sent from an agent to a manager in response to
GetRequest or GetNextRequest. It contains the value(s) of the variable(s)
requested by the manager.
Trap
The Trap PDU is sent from the agent to the manager to report an event. For
example, if the agent is rebooted, it informs the manager and reports the time of
rebooting.
InformRequest
The InformRequest PDU is sent from one manager to another remote manager
to get the value of some variables from agents under the control of the remote
manager. The remote manager responds with a Response PDU.
Report
The Report PDU is designed to report some types of errors between managers.