CN Course Handout
Course Credit Structure (L T P S C) : 3 - 2 - 4
Year & Semester : VI Semester
Contact Hours : 45
Instructor : Dr. M. Dharani
Instructor’s Email : [email protected]
Office Hours : All working days, with prior appointment
Academic Year : 2024-2025
Date of Issue : December 16th 2024
PROGRAM OUTCOMES
On successful completion of the Program, the graduates of B. Tech. (ECE) Program will be able to:
1. Engineering knowledge: Apply the knowledge of mathematics, science, engineering
fundamentals, and an engineering specialization to the solution of complex engineering problems.
2. Problem analysis: Identify, formulate, review research literature, and analyze complex
engineering problems reaching substantiated conclusions using first principles of mathematics,
natural sciences, and engineering sciences.
3. Design/development of solutions: Design solutions for complex engineering problems and
design system components or processes that meet the specified needs with appropriate
consideration for the public health and safety, and the cultural, societal, and environmental
considerations.
4. Conduct investigations of complex problems: Use research-based knowledge and research
methods including design of experiments, analysis and interpretation of data, and synthesis of
the information to provide valid conclusions.
5. Modern tool usage: Create, select, and apply appropriate techniques, resources, and modern
engineering and IT tools including prediction and modeling to complex engineering activities with
an understanding of the limitations.
6. The engineer and society: Apply reasoning informed by the contextual knowledge to assess
societal, health, safety, legal and cultural issues and the consequent responsibilities relevant to
the professional engineering practice.
7. Environment and sustainability: Understand the impact of the professional engineering
solutions in societal and environmental contexts, and demonstrate the knowledge of, and need
for sustainable development.
8. Ethics: Apply ethical principles and commit to professional ethics and responsibilities and norms
of the engineering practice.
9. Individual and team work: Function effectively as an individual, and as a member or leader in
diverse teams, and in multidisciplinary settings.
10. Communication: Communicate effectively on complex engineering activities with the
engineering community and with society at large, such as, being able to comprehend and write
effective reports and design documentation, make effective presentations, and give and receive
clear instructions.
11. Project management and finance: Demonstrate knowledge and understanding of the
engineering and management principles and apply these to one’s own work, as a member and
leader in a team, to manage projects and in multidisciplinary environments.
12. Life-long learning: Recognize the need for, and have the preparation and ability to engage in
independent and life-long learning in the broadest context of technological change.
PSO1: Apply knowledge of computer science engineering and use modern tools, techniques and
technologies for the efficient design and development of computer-based systems for complex
engineering problems.
PSO2: Design and deploy networked systems using standards and principles, evaluate security
measures for complex networks, apply procedures and tools to solve networking issues.
PSO3: Develop intelligent systems by applying adaptive algorithms and methodologies for solving
problems from inter-disciplinary domains.
PSO4: Apply suitable models, tools and techniques to perform data analytics for effective decision
making.
Pre-Requisite 1.
Anti-Requisite 1.
Co-Requisite 2.
COURSE OUTCOMES: After successful completion of the course, students will be able to:
2. Evaluate subnetting and routing algorithms for finding optimal paths in networks.
3. Solve problems related to flow control, error control, and congestion control in data
transmission.
4. Assess the impact of wired and wireless networks in the context of network protocols
like DNS, SMTP, HTTP, and FTP.
COURSE CONTENT:
Network hardware, Network software, Reference models - OSI, TCP/IP; Example networks –
Internet; Wireless LANs - 802.11.
MODULE 2: DATA LINK LAYER AND MEDIUM ACCESS CONTROL SUBLAYER (9 Periods)
Data Link Layer: Data link layer design issues, Error detection and correction - CRC,
Hamming codes; Elementary data link protocols, Sliding window protocols.
Medium Access Control Sublayer: ALOHA, Carrier sense multiple access protocols,
Collision free protocols, Ethernet, Data link layer switching - Repeaters, Hubs, Bridges,
Switches, Routers, Gateways.
Network layer design issues, Routing algorithms - Shortest path algorithm, Flooding, Distance
vector routing, Link state routing, Hierarchical routing, Broadcast routing, Multicast routing,
Anycast routing; Congestion control algorithms, Network layer in the Internet - The IP version
4 protocol, IP addresses, IP version 6, Internet control protocols, OSPF, BGP.
Domain Name System (DNS) - Name space, Domain resource records, Name servers;
Electronic mail - Architecture and services, User agent, Message formats, Message transfer,
Final delivery; The World Wide Web - Architectural overview, HTTP, FTP.
Total Periods: 45
EXPERIENTIAL LEARNING:
LIST OF EXERCISES:
3. Design and develop a program to compute checksum for the given frame 1101011011
using CRC-12, CRC-16, and CRC-CCITT. Display the actual bit string transmitted. Suppose
any bit is inverted during transmission. Show that this error is detected at the receiver’s
end.
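As a hedged starting point for this exercise, the sketch below computes a CRC checksum by modulo-2 long division. The CRC-16 generator polynomial (x^16 + x^15 + x^2 + 1) is assumed here; the same routines work for the other generators once their bit strings are supplied.

```python
def crc_remainder(frame: str, generator: str) -> str:
    """Compute the CRC checksum of a bit string via modulo-2 long division."""
    k = len(generator) - 1                      # number of checksum bits
    bits = [int(b) for b in frame] + [0] * k    # append k zero bits
    gen = [int(b) for b in generator]
    for i in range(len(frame)):
        if bits[i] == 1:                        # XOR the generator in wherever the leading bit is 1
            for j in range(len(gen)):
                bits[i + j] ^= gen[j]
    return "".join(str(b) for b in bits[-k:])

def transmit(frame: str, generator: str) -> str:
    """Actual bit string transmitted: data bits followed by the checksum."""
    return frame + crc_remainder(frame, generator)

def is_error_free(received: str, generator: str) -> bool:
    """Receiver check: the remainder of the entire received frame must be all zeros."""
    k = len(generator) - 1
    bits = [int(b) for b in received]
    gen = [int(b) for b in generator]
    for i in range(len(received) - k):
        if bits[i] == 1:
            for j in range(len(gen)):
                bits[i + j] ^= gen[j]
    return all(b == 0 for b in bits[-k:])

# CRC-16 generator polynomial x^16 + x^15 + x^2 + 1 as a bit string
CRC16 = "11000000000000101"
sent = transmit("1101011011", CRC16)
# Invert one bit during "transmission" and observe that the receiver detects it.
corrupted = sent[:3] + ("1" if sent[3] == "0" else "0") + sent[4:]
```

Because any CRC generator with two or more terms detects every single-bit error, `is_error_free(corrupted, CRC16)` returns False for the corrupted frame.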
4. Implement Dijkstra’s algorithm to compute the shortest path for the given graph.
[Figure: weighted graph for the shortest-path exercise]
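The exercise can be sketched as below. Since the graph figure in the handout does not survive in this copy, a small hypothetical weighted graph is used in its place.

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source using a min-heap.
    graph: dict mapping node -> list of (neighbor, weight) pairs."""
    dist = {node: float("inf") for node in graph}
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                      # stale heap entry, skip it
        for v, w in graph[u]:
            if d + w < dist[v]:           # relax the edge (u, v)
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

# Hypothetical undirected weighted graph (not the figure from the handout)
graph = {
    "A": [("B", 2), ("C", 5)],
    "B": [("A", 2), ("C", 1), ("D", 4)],
    "C": [("A", 5), ("B", 1), ("D", 3)],
    "D": [("B", 4), ("C", 3)],
}
```

Running `dijkstra(graph, "A")` gives the cost of the optimal path from A to every other node; the actual exercise graph can be substituted directly.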
5. Develop a program to obtain routing table for each node using Distance Vector Routing
Algorithm by considering the given subnet with weights indicating delay between Nodes.
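A minimal sketch of this exercise, assuming a hypothetical three-node subnet with symmetric link delays (the function and node names are illustrative):

```python
def distance_vector(nodes, links):
    """Nodes repeatedly update their vectors from their neighbors' vectors
    until no estimate improves, i.e. the routing tables have converged.
    links: dict {(u, v): delay} for each directed link."""
    INF = float("inf")
    # table[u][v] is u's current estimate of the delay to reach v
    table = {u: {v: 0 if u == v else INF for v in nodes} for u in nodes}
    changed = True
    while changed:
        changed = False
        for (u, n), cost in links.items():
            for v in nodes:
                # u can reach v via neighbor n at link delay + n's estimate to v
                if cost + table[n][v] < table[u][v]:
                    table[u][v] = cost + table[n][v]
                    changed = True
    return table

# Hypothetical subnet; weights indicate delay between nodes, both directions
nodes = ["A", "B", "C"]
links = {("A", "B"): 1, ("B", "A"): 1,
         ("B", "C"): 2, ("C", "B"): 2,
         ("A", "C"): 5, ("C", "A"): 5}
tables = distance_vector(nodes, links)
```

Note that A's table routes to C via B (delay 3) rather than over the direct link of delay 5, which is exactly the behaviour the exercise asks for.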
REFERENCE BOOKS:
1. Andrew S. Tanenbaum and David J. Wetherall, Computer Networks, Pearson, 5th
Edition, 2015.
2. A. Jesin, Packet Tracer Network Simulator,Packt Publishing, 2014.
Software/Tools used:
1. C/Python/Java
2. Network simulator tool - Packet Tracer
3. Virtual Labs (Computer Networks Lab –
http://vlabs.iitb.ac.in/vlabs-dev/labs_local/computer-networks/labs/explist.php)
VIDEO LECTURES:
1. https://onlinecourses.nptel.ac.in/noc21_cs18/preview
2. https://www.coursera.org/learn/tcpip
WEB RESOURCES:
1. https://www.cisco.com/c/en/us/solutions/small-business/resourcecenter/networking/networking-basics.html
2. https://memberfiles.freewebs.com/00/88/103568800/documents/Data.And.Computer.Communications.8e.WilliamStallings.pdf
PEDAGOGY:
The following pedagogy methods will be used to deliver the course.
1. Chalk and Board
2. Videos
3. PPT
4. Flipped Classroom
4. a) For a 12-bit data string of 101100010010, determine the number of 8 Marks L3 CO3
Hamming bits required, arbitrarily place the Hamming bits into the
data string, determine the logic condition of each Hamming bit,
assume an arbitrary single-bit transmission error, and prove that
the Hamming code will successfully detect the error.
b) Explain and demonstrate the Selective Repeat sliding window protocol 8 Marks L2 CO3
with an example.
(OR)
5. a) Discuss the conceptual model of CSMA with Collision Detection 8 Marks L2 CO3
with relevant diagrams
b) Explain the Bit-map protocol and the Binary Countdown protocol with the 8 Marks L2 CO3
help of neat diagrams. Explain the advantage of Binary Countdown
over Bit-map protocol.
10. a) Explore the concept of Uniform Resource Locators (URLs) and their 8 Marks L1 CO4
role in identifying web resources. How are URLs structured?
b) Discuss the hierarchical structure of the DNS name space. How do 8 Marks L2 CO4
top-level domains, second-level domains, and subdomains relate to
each other?
(OR)
11. a) Trace the path of an email message from its creation to final 8 Marks L2 CO4
delivery, highlighting the steps involved in message transfer.
b) Explain the Hypertext Transfer Protocol (HTTP) and its role in 8 Marks L1 CO4
facilitating communication between web clients and servers.
Target for Course Outcome Attainment:
Course Outcomes Attainment Target (%)
CO1 60%
CO2 60%
CO3 60%
CO4 60%
CO5 60%
CO6 60%
Note:
Any information further to this handout will be announced in the class.
Signature of the Course Instructor Signature of the Chairperson BOS
1. Introduction & Physical Layer
A network is a collection of devices, or nodes, connected by communication lines. A node is any
device that can send and/or receive data to or from other nodes on the network; printers and
computers are examples. Two computers are said to be interconnected if they can exchange data.
Copper wire is one possible connecting medium; other options include fiber optics, microwaves,
infrared, and communication satellites. Networks come in many sizes and shapes.
Resource Sharing: The objective of resource sharing is to facilitate the accessibility of equipment
and data to all network users, regardless of the geographical location of the resource or the user. A
group of individuals employed in an office environment collectively utilize a shared printing device.
A networked printer with high volume capacity is frequently more cost-effective, efficient, and
manageable in terms of maintenance compared to a large collection of individual printers.
Client-Server model: Information related to the organization is stored on servers, which are robust
computers under the care of a system administrator. Employees utilize client devices, which are
simple computers located on their workstations, to retrieve data from the servers. The server and
client computers are linked together via a network.
E-commerce (electronic commerce): Many businesses use electronic means to conduct business.
Retailers such as booksellers and airlines have found that many of their customers prefer the
convenience of shopping from home. As a result, many businesses provide online catalogues of
their products and services and even accept online orders.
Network Hardware
There is a lack of agreement over a universally accepted taxonomy that incorporates all computer
networks. However, two key elements emerge as significant factors: transmission technology and
scalability. There are two main forms of transmission technologies that are extensively used:
broadcast links and point-to-point links.
In a network made up of point-to-point links, packets may need to be routed through one or more
intermediary devices before reaching their intended destination. In point-to-point networks, it is
crucial to identify optimal routes due to the existence of multiple paths of varying lengths. The
transmission method characterized by a single sender and a single recipient is sometimes referred
to as unicasting. On the other hand, with a broadcast network, every machine on the network
shares the communication channel, meaning that any machine can send and receive packets.
Each packet has an address field that identifies the intended recipient. A machine reads a packet
and looks up the address field. When a packet is meant for the receiving machine, it is processed
by that machine; when it is meant for another machine, it is simply ignored.
Broadcast systems typically allow a packet to be addressed to all destinations by using a special
code in the address field. When a packet containing this code is sent, every machine on the
network receives and processes it. This form of functioning is referred to as broadcasting. Some
broadcast systems also offer multicasting, which is the transmitting of data to a subset of
workstations. Networks can also be categorized in terms of scale. Distance is a metric used for this
categorization.
Network Software
In the early stages of computer network development, hardware predominated over software
considerations. This approach is no longer effective. Presently, network software is heavily
structured. Most networks are set up as a stack of layers or levels, with each layer building on top
of the one below it. This makes them easier to organize. The number of layers, and the name,
contents, and function of each layer, differ from network to network.
The job of each layer is to provide certain services to the layers above it while keeping those layers
from knowing the particulars of how the services are delivered.
Layer n on one machine has a conversation with layer n on another machine. The rules and
conventions that these two machines follow are called the layer n protocol. A protocol is essentially
an agreement between communicative parties on how communication should proceed. Peers are
the entities that make up the matching layers on various machines. The protocols are used by the
peers to communicate with one another.
There is no direct transfer of data from layer n on one machine to layer n on another machine.
Instead, each layer passes data and control information to the layer below it, until the lowest
layer is reached. The physical medium, over which transmission actually takes place, lies below
layer 1. Actual communication is shown by solid lines, and virtual communication by dashed lines. An
interface exists between each pair of neighboring layers. The interface specifies which primitive
operations and services are made available to the upper layer by the lower layer. Each layer has its
own set of functions. Clean and unambiguous interfaces also make it easier to replace one
layer with an entirely different protocol or implementation. A network architecture is a collection
of layers and protocols.
Each process may include some information known as a header that is exclusively intended for its
peer. This data is not passed to the layer above. Control information such as addresses,
sequence numbers, sizes, and times are included in the header. An application process at layer 5
generates a message, M, which is sent to layer 4 for transmission. Layer 4 adds a header to the
message and forwards the result to Layer 3. In many networks, there is no limit on the size of
messages sent using the layer 4 protocol. However, the layer 3 protocol almost always has a limit.
So, layer 3 has to separate the received messages into smaller pieces called packets and add a layer
3 header to each packet. M is split into two parts, M1 and M2, which will be sent separately in this
case. Layer 3 selects which of the lines to use for sending data and sends the packets to Layer 2.
Layer 2 adds both a header and a trailer to each piece, and then hands the result to Layer 1 to be
sent physically. At the recipient machine, the message moves from one layer to the next, and as it
goes up, headers are stripped off. Below layer n, none of the headers are sent up to layer n.
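The header-adding and header-stripping walk described above can be sketched as follows. The header names H4, H3, H2 and the trailer T2 are illustrative placeholders, not real protocol headers.

```python
LAYER_HEADERS = ["H4", "H3", "H2"]   # hypothetical headers added by layers 4, 3 and 2

def send_down(message: str) -> str:
    """Moving down the stack: each layer prepends its header;
    layer 2 also appends a trailer before physical transmission."""
    pdu = message
    for h in LAYER_HEADERS:
        pdu = h + "|" + pdu          # H4 is added first, so H2 ends up outermost
    return pdu + "|T2"

def receive_up(frame: str) -> str:
    """Moving up the stack: each layer strips the header meant for its peer
    and passes the rest upward."""
    frame = frame.rsplit("|T2", 1)[0]        # layer 2 removes its trailer
    for h in reversed(LAYER_HEADERS):        # strip H2, then H3, then H4
        frame = frame[len(h) + 1:]           # drop "Hn|"
    return frame
```

Sending "M" produces the frame "H2|H3|H4|M|T2" on the wire, and the receiving stack recovers exactly "M" at the top, mirroring the description above.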
Reliability is a design issue of constructing a network that functions accurately in spite of having a
collection of unreliable components. Consider the packet traverse in the network. It is possible that
some of these bits may be received in an inverted state due to noise, hardware defects, software
errors, and so forth. How do we manage to locate and rectify these errors? One approach to
identifying errors in received data involves the use of error detection codes.
If information is received incorrectly, it can be retransmitted until it is received correctly. Stronger
codes make error correction possible by including redundant information.
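As one minimal illustration of an error detection code, consider a single even-parity bit. This is far weaker than the CRCs used in practice, but it shows the idea of detecting an inverted bit.

```python
def add_parity(bits: str) -> str:
    """Append one even-parity bit so the transmitted frame has an even number of 1s."""
    return bits + str(bits.count("1") % 2)

def looks_error_free(received: str) -> bool:
    """Any single inverted bit makes the count of 1s odd, which the receiver detects."""
    return received.count("1") % 2 == 0

frame = add_parity("1101011")
corrupted = ("0" if frame[0] == "1" else "1") + frame[1:]   # invert the first bit
```

A single parity bit catches any odd number of bit errors but misses even numbers of them, which is why stronger codes such as CRCs and Hamming codes are used in real links.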
There is often more than one route for transferring data from one place to another, and in a big
network some links or routers may fail. The network should make such routing decisions
automatically. This topic is called routing.
Due to the large number of computers comprising the network, every layer must incorporate a
means of identifying the senders and receivers associated with a specific message. This process is
called addressing. Some communication channels will not preserve the sequence
of messages transmitted through them, requiring the implementation of message numbering
solutions. Differences in the maximum message size that can be transmitted across networks are
another example. As a consequence, mechanisms are developed to disassemble, transmit, and
subsequently reassemble messages. The collective term for this subject is internetworking.
How can a quick sender avoid sending too much data at once to a slow receiver? It is common to
employ feedback from the recipient to the sender. We refer to this topic as flow control.
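A toy sketch of this feedback idea, in the spirit of stop-and-wait (the class and function names are illustrative, not a real protocol implementation):

```python
class SlowReceiver:
    """A receiver that can buffer only one frame at a time; its ACK is the
    feedback that paces the sender."""
    def __init__(self):
        self.delivered = []

    def accept(self, frame) -> str:
        self.delivered.append(frame)   # process the single buffered frame
        return "ACK"                   # tell the sender it may send the next one

def stop_and_wait_send(frames, receiver) -> list:
    """The sender transmits one frame, then blocks until the ACK arrives
    before transmitting the next, so the receiver is never overwhelmed."""
    for frame in frames:
        ack = receiver.accept(frame)
        if ack != "ACK":
            raise RuntimeError("no acknowledgement; a real sender would retransmit")
    return receiver.delivered

rx = SlowReceiver()
delivered = stop_and_wait_send(["f0", "f1", "f2"], rx)
```

The sliding window protocols of Module 2 generalize this by allowing several unacknowledged frames in flight at once.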
Oversubscription occurs when too many computers attempt to transmit more traffic than the
network can deliver. This condition is called congestion.
The final design consideration is the network's protection against various types of threats. One
such threat is eavesdropping on communications, against which confidentiality mechanisms provide
protection. Authentication mechanisms serve to prevent identity fraud, and integrity mechanisms
prevent undetected modification of messages.
Connection-oriented and connectionless services are the two forms of services that layers can
provide to the layers above them. The telephone system serves as the example for connection-
oriented service. To speak with someone, you pick up the phone, dial the number, speak with
them, and then hang up. Similarly, to use a connection-oriented network service, the service user
first establishes a connection, then uses it, and then releases it. In the majority of instances, the
order of transmission is maintained to ensure that the bits are received in the same sequence as
they were originally transmitted. The connectionless service is designed based on the conceptual
framework of the postal system. Every individual message, in the form of a letter, contains the
whole destination address. These messages are then directed through the intermediate nodes
within the system, irrespective of subsequent messages. Each type of service can be further
classified based on its level of reliability. Certain services can be considered reliable due to their
ability to maintain data without any loss. Typically, the establishment of a reliable service requires
the integration of a mechanism wherein the recipient acknowledges the receipt of every message,
thus providing assurance to the sender of its successful delivery. Acknowledgement introduces
additional overhead and delay, which are often worthwhile but occasionally undesirable.
For example, the sender of a file wants to be sure that all the pieces arrive in exactly the same
order in which they were sent.
Two slight variations of reliable connection-oriented service are message sequences and byte
streams.
The message boundaries are maintained in the former version. Two 1024-byte messages never
arrive as a single 2048-byte message when they are transmitted; instead, they arrive as two
separate 1024-byte messages. In the latter case, there are no message boundaries and the
connection is just a stream of bytes. It is impossible to determine if 2048 bytes were sent as two
1024-byte messages or as a single 2048-byte message when they reach the recipient. The
acknowledgement-induced transit delays are unacceptably long for some applications. For instance,
a few incorrect pixels during a video conference won't affect the transmission; however, it will
irritate the viewer if the image jerks as the flow stops and starts to correct errors. Not every
application requires connections; all that is needed is a way to send a single message that has a
high probability of arriving, but no guarantee. Datagram service is a common term for
connectionless services that are unreliable (i.e., not acknowledged). In certain cases, it is not
desirable to connect in order to transmit a single message, but reliability is crucial. For these
applications, the acknowledged datagram service can be offered. It functions similarly to obtaining
a return receipt for a registered letter sent. The sender is certain that the letter was delivered to
the appropriate recipient and wasn't misplaced when the receipt is returned. Particularly in real-
time applications like multimedia, the inevitable delays in delivering reliable service would not be
acceptable. These factors lead to the coexistence of reliable and unreliable communication.
As a first step towards international standardisation of the protocols used in the various layers, the
ISO proposed the OSI (Open Systems Interconnection) Reference Model. It deals with connecting
open systems, that is, systems that are open for communication with other systems.
Physical Layer: The physical layer transmits raw bits over a communication channel. Typical design
questions are: which electrical signals represent a 1 and a 0, how long a bit lasts, whether
transmission may proceed in both directions simultaneously, how the initial connection is
established and how it is torn down when both sides are finished, and how many pins the network
connector has and what each pin is used for. The design issues here largely concern the mechanical,
electrical, and timing interfaces, as well as the physical transmission medium, which lies below the
physical layer.
Data link Layer: Its primary responsibility is to provide error-free information transfer. In order to
complete this task, the transmitter must divide the input data into data frames and deliver the
frames one after the other sequentially. The receiver sends back an acknowledgement frame to
verify that each frame was received correctly, indicating that the service is reliable. How to prevent
a fast transmitter from drowning a slow receiver with data is another problem that occurs at the
data link layer. In the data link layer, broadcast networks also face the problem of controlling
access to the shared channel. This issue is addressed by the medium access control sublayer, a
unique sublayer of the data link layer.
Network Layer: Choosing the best route for packets to go from source to destination is an
important design decision. Bottlenecks arise when there are too many packets in the network at
once and they obstruct one another. The network layer is also responsible for handling
congestion. Numerous issues can occur when a packet needs to go across networks in order to
reach its destination. It's possible that the addressing implemented by the two networks differs
from one another. The packet might be too big for the second network to receive at all. The
protocols could vary, and so on. The network layer is responsible for resolving each of these issues
so that diverse networks can be joined. The network layer in broadcast networks is frequently
minimal or non-existent since the routing problem is simple.
Transport Layer: The transport layer is a true end-to-end layer, carrying data from source to
destination.
In other words, a programme on the source machine communicates with a programme on the
destination system via message headers and control messages. Each protocol in the lower layers is
between a machine and its near neighbours, rather than between the final source and destination
machines, which may be separated by multiple routers. The transport layer also decides what kind
of service to give to the session layer and, ultimately, to network users.
TCP is a reliable, error-free point-to-point transport connection that delivers messages in the
order in which they were transmitted. UDP is another kind of transport service that carries
individual messages with no guarantee of delivery order.
Session Layer: The session layer enables sessions to be established between users on multiple
machines. Sessions provide a variety of services, such as dialogue control (keeping track of who is
transmitting), token management (preventing two parties from attempting the same critical
operation at the same time), and synchronisation (checkpointing long transmissions to allow them
to pick up where they left off in the event of a crash and subsequent recovery).
Presentation Layer: The presentation layer is concerned with the syntax and semantics of the
information delivered, compared to the lower levels, which are largely concerned with moving bits
around. To allow machines with different internal data representations to communicate, the data
structures to be exchanged must be defined in an agreed-upon manner.
Application Layer: The application layer contains a number of protocols that users frequently
require.
HTTP (Hypertext Transfer Protocol), the foundation of the World Wide Web, is a popular
application protocol. When a browser requests a Web page, it uses HTTP to send the page's name
to the server hosting the page. The page is then returned by the server. For file transfer, electronic
mail, and network news, other application protocols are employed.
The ARPANET was a DoD-sponsored research network. It used leased telephone lines to connect
hundreds of universities. When satellite and radio networks were added later, the current
protocols had difficulty interacting with them, necessitating the creation of new reference
architecture. One of the main design aims was to be able to integrate numerous networks in a
seamless manner. This architecture came to be known as the TCP/IP Reference Model. Another key
goal was for the network to be able to withstand the loss of subnet hardware without breaking up
on-going communications. Furthermore, because applications with varying needs were envisaged,
ranging from file transfer to real-time speech transmission, a flexible architecture was required.
Link Layer: All of these needs lead to the selection of a packet-switching network based on a
connectionless layer that spans many networks. The link layer, the model's lowest layer, describes
what links must do to meet the needs of this connectionless internet layer. The Link Layer serves as
a bridge between hosts and transmission links.
Internet Layer: The internet layer is the linchpin that connects the entire architecture. Its purpose
is to allow hosts to inject packets into any network and have them flow independently to the
destination (which could be on a separate network). They may even arrive in a different order than
when they were sent, in which case upper layers must rearrange them if in-order delivery is
necessary.
The internet layer specifies an official packet format and protocol known as IP (Internet Protocol),
as well as a companion protocol known as ICMP (Internet Control Message Protocol), which helps
in its operation.
The internet layer's job is to get IP packets to where they need to go. Clearly, packet routing is a
significant issue here.
Transport Layer: In the TCP/IP model, the layer above the internet layer is now commonly referred
to as the transport layer. It is intended to allow peer entities on the source and destination hosts to
converse in the same way that the OSI transport layer does. Here, two end-to-end transport
protocols are defined. TCP (Transmission Control Protocol) is a reliable connection-oriented
protocol that allows a byte stream from one machine to be sent without error to any other
machine on the internet. It divides the incoming byte stream into discrete messages and forwards
them to the internet layer. The receiving TCP process at the destination reassembles the received
messages into the output stream. TCP also handles flow management to ensure that a fast sender
does not overwhelm a slow receiver with messages that it cannot handle. UDP, the second
protocol in this layer, is an unreliable, connectionless protocol for applications that do not
require TCP's sequencing or flow control, and for applications where prompt delivery is more
important than accuracy, such as transmitting speech or video.
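UDP's connectionless, message-oriented service can be demonstrated with the standard Python socket API over the loopback interface. The payload and the use of an OS-assigned port are arbitrary choices for this sketch.

```python
import socket

# Receiver: bind a UDP socket; port 0 lets the OS pick a free port.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
addr = receiver.getsockname()

# Sender: one self-contained datagram, no handshake, no connection state.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello, world", addr)

# Each recvfrom() returns exactly one whole datagram, preserving its boundary.
data, peer = receiver.recvfrom(1024)
sender.close()
receiver.close()
```

Contrast this with TCP, where connect()/accept() establish a connection first and recv() returns an arbitrary slice of the byte stream rather than a message.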
Application Layer: There are no session or presentation levels in the TCP/IP architecture. The
application layer stands above the transport layer. All of the higher-level protocols are included in
it. The first ones were electronic mail (SMTP), file transfer (FTP), and virtual terminal (TELNET). The
Domain Name System (DNS), which maps host names to their network addresses, HTTP, which
retrieves pages from the World Wide Web, and RTP, which transfers real-time media like audio and
video, are a few of the most important ones that we will examine.
The Internet is a huge collection of different networks that share many common protocols and
services, but it is not actually a network at all. Because no one designed it and no one is in charge
of it, it is an unusual system. Let's start from the beginning and examine how and why it has
evolved in order to gain a deeper understanding of it. The story commences in the late 1950s,
when the U.S. Department of Defence (DoD) sought a command-and-control network that could
withstand a nuclear war. During that period, military communications relied on the public
telephone network, which was considered vulnerable. The telephone switching offices, denoted by
the black dots, were linked to thousands of phones through their connections, which in turn were
connected to higher-level switching offices, or toll offices, forming a nationwide hierarchy.
One potential weakness of the system was that the destruction of several critical toll offices could
result in its fragmentation into numerous isolated islands. To address this challenge, the ARPANET
was designed. The subnet would be made up of 56-kbps transmission lines connecting
minicomputers known as Interface Message Processors, or IMPs. For maximum reliability, each IMP
would be linked to at least two other IMPs, so that messages could be automatically rerouted along
different paths even if some lines or IMPs were destroyed. Each network node was to consist of an
IMP and a host in the same room, linked by a short wire. A host could send messages of up to 8063
bits to its IMP, which would then divide them into packets of no more than 1008 bits and forward
them independently towards the destination. Since each packet was received in its entirety before
being forwarded, the subnet was the first electronic store-and-forward packet-switching network.
The software was divided into host and subnet parts. The subnet software comprised the IMP-IMP
protocol, the source-IMP to destination-IMP protocol, and the IMP end of the host-IMP connection,
all intended to increase reliability. In addition, application software and the host-host protocol were
required outside the subnet.
The ARPANET protocols that were in use at the time were not designed to run across several
networks. The development of the TCP/IP model and protocols was the result of additional research on
protocols motivated by this observation. TCP/IP was created specifically to manage internetwork
communication, which became important as more and more networks were connected to the
ARPANET.
An Internet Service Provider (ISP) is what connects a computer to the Internet. A user pays an ISP
to get access to the Internet. People often use their home phone line to connect to an ISP. In this
case, your phone company is your ISP. There is a DSL modem attached to the computer. This
modem changes digital packets into analogue signals that can go over the phone line without any
problems. A DSLAM (Digital Subscriber Line Access Multiplexer) makes the change between signals
and packets at the other end. POP stands for "Point of Presence." This is the place where customer
packets join the ISP network to be served. The system is now fully digitised and packet switched.
ISP networks can cover an area, a country, or the whole world. The architecture of an ISP is made
up of long-distance transmission lines that connect routers at POPs in different places. This
equipment is called the backbone of the ISP. As long as a packet is going to a host that the ISP
directly serves, it will be sent over the backbone to that host. If not, it has to be handed to
another ISP. An IXP (Internet eXchange Point) is a place where ISPs can connect their networks and
exchange traffic; the connected ISPs are said to peer with one another. Globally, there are
numerous IXPs located in cities. An IXP is essentially a room full of routers, at least one for each
ISP. All of the routers in the room are connected by a LAN, allowing packets to be passed from one
ISP backbone to another.
The Amsterdam Internet Exchange is one of the biggest, connecting hundreds of ISPs and
facilitating the exchange of hundreds of gigabits of traffic every second. A small number of
corporations, such as AT&T and Sprint, run huge international backbone networks with thousands
of routers linked by high-bandwidth fibre optic lines at the top of the food chain. These Internet
service providers do not pay for transport. Tier 1 ISPs are commonly referred to as the Internet's
backbone because everyone else must connect to them in order to access the whole Internet.
Large content providers, such as Google and Yahoo!, house their computers in data centres that
are well connected to the rest of the Internet.
These data centres are intended for computers and can be filled with rack after rack of machines,
referred to as a server farm.
If a machine: (1) ran the TCP/IP protocol stack; (2) had an IP address; and (3) could send IP packets
to all other machines on the Internet, it was on the Internet. However, ISPs frequently reuse IP
addresses based on which computers are currently in use, and they frequently share a single IP
address among many computers.
For the actual transmission, many physical mediums can be employed. Each has its own niche in
terms of bandwidth, delay, cost, and simplicity of installation and maintenance. Broadly speaking,
media can be divided into two categories: unguided media (such as satellites, terrestrial wireless,
and airborne lasers) and guided media (such as fibre optics and copper wire).
Magnetic Media: One of the most common ways to move data from one computer to another is to
write it onto magnetic tape or removable media (such as DVDs), physically transport the tape or
discs to the destination machine, and read them back in. This approach often offers enormous
bandwidth at a very low cost per bit sent. Its weakness is delay: even though magnetic tape has
great capacity, the time it takes to deliver the data is measured in minutes or hours, not milliseconds.
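The trade-off above can be made concrete with a back-of-the-envelope calculation. The numbers below (tape count, tape capacity, transit time) are purely illustrative, not from the text:

```python
# Effective bandwidth of physically shipping magnetic media (illustrative numbers):
capacity_bits = 1000 * 800e9 * 8   # 1000 tapes x 800 GB each, expressed in bits
transit_seconds = 24 * 3600        # one day in transit to the destination
bandwidth_bps = capacity_bits / transit_seconds

# Huge effective bandwidth, but the delay is a full day, not milliseconds.
print(f"{bandwidth_bps / 1e9:.0f} Gbps")
```

The effective rate rivals a fast network link, which is exactly why the delay, not the bandwidth, is the limiting factor here.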
Twisted Pairs: A connection is needed for many applications. Twisted pair is one of the oldest and
most popular ways to send data. A twisted pair consists of two insulated copper wires, typically
about 1 mm thick. The wires are twisted together in a helical form, just like a DNA
molecule. Twisting is done because two parallel wires make an excellent antenna. When wires are
twisted, the waves from different twists cancel each other out, causing the wire to radiate less
effectively. The telephone system is the most prevalent application of the twisted pair. Twisted
pairs can be utilized to convey both analogue and digital data. The bandwidth is determined by the
wire's thickness and the distance travelled. There are various types of twisted-pair cabling.
Category 5 cabling is the common type seen in many office buildings.
A category 5 twisted pair is made up of two insulated wires that have been gently twisted
together. To protect the wires and keep them together, four such pairs are commonly combined in
a plastic sheath.
Cat 5 cables replaced previous Category 3 cables with a similar cable that has more twists per
metre but utilises the same connector. More twists result in reduced crosstalk and higher signal
quality over longer distances, making the cables more suitable for high-speed computer
connection.
Coaxial cable: It can cover farther at faster rates than unshielded twisted pairs because it has
stronger shielding and a wider bandwidth. In general, two types of coaxial cable are utilised. If the
purpose of the cable is for digital transmission, 50-ohm cable is frequently utilised. Cable television
and analogue transmission are two common uses for 75 ohm cable. A coaxial cable is made up of a
strong copper wire core that is surrounded by an insulating layer. A cylindrical conductor,
commonly in the form of a tightly woven braided mesh, surrounds the insulator. The outside
conductor is protected by a plastic sheath.
The bandwidth is determined by the cable's quality and length. Modern cables have bandwidths of
many GHz. Cable television and metropolitan area networks continue to depend heavily on coax.
Fiber Optics: Fibre optics are used in network backbones, high-speed LANs, and high-speed
Internet access like FttH (Fibre to the Home).The light source, transmission medium, and detector
are the three main components of an optical transmission system. A light pulse represents a 1 bit,
whereas the absence of light represents a 0 bit. The transmission medium is a glass fibre that is
extremely thin. When light strikes the detector, it generates an electrical pulse. By connecting a
light source to one end of an optical fibre and a detector to the other, we can make a one-way data
transfer system that takes an electrical signal, changes it into light pulses, sends them, and then
changes the output back to an electrical signal at the receiving end.
The path of a light ray changes when it goes from one material to another, like silica to air. Here we
see a light ray incident on the boundary at an angle α1 emerging at an angle β1. The amount of
refraction depends on the properties of the two media.
When the angle of incidence is above a certain critical angle, the light is bent back into the silica
and doesn't get out into the air. So, a light ray that hits the fibre at or above the critical angle gets
stuck inside it and can travel for many kilometres with almost no loss. The glass core through which
light propagates is located in the centre. To keep all of the light in the core, the core is surrounded
by a glass cladding with a lower index of refraction than the core. The cladding is then protected by
a thin plastic jacket. Fibres are normally bundled together and protected by an outer sheath.
Fibre offers many benefits. To begin with, it can handle much higher bandwidths than copper.
Because there isn't much loss, repeaters are only needed every 50 km on long lines, compared to
every 5 km for copper. This saves a lot of money. Fibre is also better because it doesn't get
damaged by power spikes, electromagnetic interference, or power outages. It is important for
harsh factory environments that it is not affected by chemicals in the air that erode away at
metal. Fibre is much lighter than copper and costs considerably less to install. Fibres do not leak
light and are difficult to tap, which makes them very secure against eavesdroppers. On the
downside, fibre interfaces cost more than electrical interfaces.
Wireless Transmission
Twisted pair, coax, and fibre optics are useless to mobile users. They require data without being
bound to terrestrial communication infrastructure. Wireless communication is the solution for
these consumers. In some cases, wireless has advantages over fixed equipment. If running fibre to
a building is difficult because of the geography (mountains, jungles, etc.), wireless may be
preferable.
Electromagnetic Spectrum: Electrons generate electromagnetic waves that can travel over space.
These waves were predicted in 1865 by British physicist James Clerk Maxwell and first observed in
1887 by German scientist Heinrich Hertz. The frequency, f, of a wave is measured in hertz (Hz) and
is defined as the number of oscillations per second. The wavelength is defined as the distance
between two successive maxima (or minima). When an appropriate-sized antenna is connected to
an electrical circuit, electromagnetic waves can be broadcast efficiently and received by a receiver
located some distance away. This idea drives all wireless communication.
In a vacuum, all electromagnetic waves, regardless of frequency, travel at the same speed. This is
commonly referred to as the speed of light. The fundamental relation between f, λ, and c (in a
vacuum) is λf = c
100-MHz waves, for example, are around 3 metres long, 1000-MHz waves are roughly 0.3 metres
long, and 0.1-meter waves have a frequency of 3000 MHz. By changing the waves' amplitude,
frequency, or phase, information can be sent through radio waves, microwaves, infrared light, and
visible light. Ultraviolet light, X-rays, and gamma rays would be even better due to their higher
frequencies; however, they are hazardous to living organisms, hard to generate and modulate, and
do not propagate well through buildings.
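The relation λf = c can be checked directly against the numeric examples above. A minimal sketch (function name is ours):

```python
C = 3e8  # speed of light in a vacuum, m/s

def wavelength(freq_hz):
    """lambda = c / f: wavelength in metres for a given frequency in Hz."""
    return C / freq_hz

print(wavelength(100e6))    # 100-MHz waves are 3.0 m long
print(wavelength(1000e6))   # 1000-MHz waves are roughly 0.3 m long
print(C / 0.1 / 1e6)        # 0.1-m waves have a frequency of 3000.0 MHz
```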
Radio Transmission: Radio frequency (RF) waves are often used for communication both indoors
and outdoors because they are simple to produce, can travel great distances, and can easily
penetrate walls.
Additionally, because radio waves are omnidirectional—that is, they can travel in any direction
from their source—physical alignment between the transmitter and receiver is not necessary. The
characteristics of radio waves vary with frequency. Radio waves can easily go through obstructions
at low frequencies. Radio waves at high frequencies typically bounce off obstacles and go in
straight lines. As the distance from the source increases, the RF signal's energy abruptly decreases.
We refer to this attenuation as "path loss." Rain also absorbs radio signals at high frequencies.
Radio waves in the VLF, LF, and MF bands follow the earth. These bands allow radio waves to
readily pass through buildings. Ground waves are absorbed by the earth in the HF and VHF
frequencies. However, waves that reach the ionosphere, a layer of charged particles that circles
Earth at a height of 100 to 500 km, are refracted and returned to the earth.
Microwave Transmission: Because these waves travel in roughly straight lines, they can be
narrowly focused. A parabolic antenna concentrates all of the energy into a narrow beam, so both
the transmitting and receiving antennas must be precisely aligned. Multiple
transmitters lined up in a row can communicate with multiple receivers in a row without
interfering, as long as specific minimum spacing restrictions are followed. Since microwaves move
in a straight line, the earth will obstruct the path if the towers are too far apart. Repeaters are
therefore occasionally required. The maximum distance between towers increases with height.
Microwaves, unlike lower-frequency radio waves, do not travel well through
buildings. Furthermore, even if the beam is well concentrated at the transmitter, there is still some
divergence in space. Some waves may be refracted off low-lying air layers and thus arrive slightly
later than direct waves. When the delayed waves come out of phase with the direct wave, the
signal is cancelled. This is known as multipath fading, and it is frequently a major issue. Microwave
transmission is used for applications like long-distance telephony, cell phones, and TV distribution.
It has advantages over fibre; the main one is that no cables need to be laid: it is enough to buy a
small plot of land every 50 km and put a radio tower on it. Microwave is also relatively inexpensive. Burying 50
km of fibre cable through a busy city or up over a mountain might cost more than putting up two
simple towers with antennas on each one.
Infrared Transmission: Infrared waves are often used for short-range contact. Infrared is used for
contact between TV, VCR, and stereo remote controls. They are cheap, easy to make, and good at
pointing in the right direction, but they can't go through solid things, which is a big problem. On the
other hand, it's a good thing that infrared waves don't easily pass through concrete walls. It means
that an infrared system in one room of a building won't affect a similar system in rooms or
buildings next door. For example, you can't use your remote to control your neighbour’s TV. An
infrared system does not need a licence from the government to work. Radio systems, on the other
hand, need a licence to work outside of the ISM bands.
Light Transmission: Free-space optics, also known as unguided optical signalling, has been used for
a very long time. Lasers on the roofs of two buildings can be used to join their LANs, which is a
more modern use. Laser-based optical signalling can only go in one direction, so each end needs its
own laser and photo detector. This plan has a very wide bandwidth for a very low price. It is also
pretty safe because it is hard to tap a narrow laser beam. It is also relatively simple to set up and
does not necessitate an FCC licence.
The laser's great advantage, an extremely narrow beam, is also its weakness here. Aiming a laser
beam 1 mm wide at a target the size of a pin head 500 metres away requires real precision. Wind
and temperature variations can distort the beam, and laser beams cannot penetrate rain or thick
fog, although they normally work well on sunny days.
Switching: Circuit Switching
The phone system is made up of two main parts: the outside plant (the local loops and trunks,
which are physically outside the switching offices) and the inside plant (the switches, which are
inside the switching offices). There are two types of switching used by
networks today: packet switching and circuit switching. When you make a phone call, the
telephone system's switching equipment searches for a physical path from your phone to the
receiver's phone. This is referred to as circuit switching.
A carrier switching office is represented by each of the six rectangles. Each office has three
incoming and three outgoing telephone lines. A physical connection is formed between the line on
which the call came in and one of the output lines when a call travels through a switching office.
Once a call is set up, there is a fixed path between both ends that will stay open until the call is
over. There can be up to 10 seconds between when the phone stops dialling and when it starts to
ring. This can be longer for foreign or long-distance calls. During this time, the phone system is
looking for a path to connect. The one and only delay after setup is the electromagnetic signal
propagation time, roughly 5 ms per 1000 kilometres. The established path prevents congestion—
once the call is made, you never get busy signals. Full bandwidth is reserved from sender to
receiver. Data that follows the same path cannot arrive out of order.
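The "5 ms per 1000 kilometres" figure follows from the signal propagation speed in the medium. A small sketch (the ~2×10⁸ m/s speed, roughly two-thirds of c in copper or fibre, is an assumption consistent with the figure in the text):

```python
PROP_SPEED = 2e8  # approximate signal speed in copper/fibre, m/s (assumption)

def propagation_delay_ms(distance_km):
    """Post-setup delay for a circuit-switched call over a given distance."""
    return distance_km * 1000 / PROP_SPEED * 1000  # metres / (m/s) -> s -> ms

print(propagation_delay_ms(1000))  # 5.0 ms per 1000 km, as stated above
```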
With packet switching, packets are sent as soon as they are available. There is no need to set up a
dedicated path in advance. Routers use store-and-forward transmission to send each
packet on its own path to the destination. With packet switching there is no fixed path, thus
distinct packets can travel various routes, depending on network conditions at the time they are
transmitted, and they may arrive out of order.
Packet-switching networks place a tight upper limit on the size of packets. The first packet of a long
message can be forwarded before the second one has fully arrived. However, the store-and-
forward delay of accumulating a packet in the router’s memory before it is sent on to the next
router exceeds that of circuit switching. Because no bandwidth is reserved with packet switching,
packets may have to wait to be forwarded. If multiple packets are sent at the same time, this
causes queuing delay and congestion. If a circuit is dedicated for a certain user but there is no
traffic, the bandwidth is wasted. It can't be used for anything else. Because packet switching does
not waste bandwidth, it is more efficient from a system viewpoint. Circuit switching is less tolerant
of errors than packet switching. All circuits that are utilising a failed switch are terminated,
preventing the transmission of any further traffic on any of them. Packet switching facilitates the
bypassing of dead switches for packets.
2. Datalink Layer
Data link layer design issues
1. By utilising the physical layer's services, the data link layer transmits and receives information
across communication channels.
2. Providing the network layer with a precisely defined service interface.
3. Controlling the data flow so that fast senders do not overwhelm slow receivers.
4. Frame management, which forms the heart of what the data link layer does.
For unacknowledged connectionless service, the source machine sends independent frames to the
destination machine, and the destination machine does not acknowledge them. There is no
logical connection established or released beforehand. If a frame is lost due to noise on the line, no
attempt is made in the data link layer to identify or recover from the loss. When the error rate is
very low, this type of service is appropriate.
The next level of reliability is acknowledged connectionless service. When this service is provided,
no logical connections are used, yet each frame delivered is individually acknowledged. This service
is useful when communicating via unstable channels, such as wireless networks. Connection-
oriented service is the most sophisticated service that the data link layer can offer the network
layer. Before any data is sent, this service establishes a connection between the source and
destination machines. The data link layer ensures that every frame submitted is received, and each
frame is numbered before being sent over the connection. It also ensures that all frames are
received in the correct order and that each frame is received precisely once. The data link layer
must use the service offered by the physical layer to give service to the network layer. The physical
layer accepts a raw bit stream and attempts to transmit it to its destination. The data link layer
does not guarantee that the bit stream it receives is error-free. Some bit values may differ, and
the number of bits received may be less than, equal to, or greater than the number of bits
transmitted. It is the data link layer's responsibility to detect and, if required, repair problems.
Framing
Typically, the datalink layer splits the bit stream into discrete frames, computes a small token
called a checksum for each frame, and transmits the frame with the checksum included. Upon
reaching the destination that was intended, the checksum is recomputed. The data link layer
detects an error has happened and takes action if the freshly computed checksum differs from the
one in the frame.
A good design should make it simple for a receiver to determine when a new frame begins and
ends.
1. Byte count. 2. Flag bytes with byte stuffing. 3. Flag bits with Bit stuffing.
Byte Count: The first framing technique specifies the number of bytes in the frame using a field in
the header. The data link layer at the destination may determine the end of the frame by looking at
the byte count, which lets it know the end of the frame.
The issue with this approach is that a transmission fault may distort the count. For instance, the
destination will lose synchronisation if a single bit flip causes the byte count of 5 in the second
frame of the figure to change to a 7. After that, it will be unable to determine where the next
frame begins.
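The desynchronisation problem can be sketched in a few lines (the byte values are illustrative; following the usual textbook figure, the count includes the count byte itself):

```python
def parse_by_count(stream: bytes):
    """Split a stream into frames whose first byte gives the frame length
    (here the count includes the count byte itself)."""
    frames, i = [], 0
    while i < len(stream):
        count = stream[i]
        frames.append(stream[i:i + count])
        i += count
    return frames

# Three frames of lengths 5, 5, and 8:
good = bytes([5, 1, 2, 3, 4,  5, 5, 6, 7, 8,  8, 9, 8, 7, 6, 5, 4, 3])
print(parse_by_count(good))   # three correctly delimited frames

# A single bit flip turns the second count 5 into a 7; every frame
# boundary after the error is now wrong:
bad = bytes([5, 1, 2, 3, 4,  7, 5, 6, 7, 8,  8, 9, 8, 7, 6, 5, 4, 3])
print(parse_by_count(bad))
```

Note that the receiver has no way to notice the bad count by itself, which is exactly why this framing method is rarely used on its own.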
Flag bytes with byte stuffing: The second framing technique fixes the resynchronization issue by
adding unique bytes at the beginning and end of each frame. Frequently, the beginning and ending
delimiters are employed to be the same byte, known as a flag byte. The end of one frame and the
beginning of the next are indicated by two consecutive flag bytes. Therefore, the receiver only
needs to look for two flag bytes to determine the end of the current frame and the beginning of
the next if it ever loses synchronisation. Still, there's a problem that needs to be resolved. The flag
byte could occasionally appear in the data; in this case, the framing would be affected. A potential
solution to this issue is to have an escape byte (ESC) added by the sender's datalink layer
immediately before each "accidental" flag byte in the data. Therefore, the presence or lack of an
escape byte preceding a byte can be used to distinguish a framing flag byte from one in the data.
Before transferring the data to the network layer, the data link layer on the receiving end
eliminates the escape bytes. We refer to this method as "byte stuffing." The next question is, what
happens if an escape byte appears in the middle of the data? The answer is that it, too, has an
escape byte. The initial escape byte gets removed at the receiver, leaving the data byte that follows
it.
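The byte-stuffing procedure described above can be sketched as follows (the specific flag and escape values are illustrative; any two reserved bytes work):

```python
FLAG, ESC = b'\x7e', b'\x7d'  # illustrative flag and escape byte values

def stuff(payload: bytes) -> bytes:
    """Frame a payload: FLAG at both ends, ESC before accidental FLAG/ESC bytes."""
    out = bytearray(FLAG)
    for b in payload:
        if bytes([b]) in (FLAG, ESC):
            out += ESC          # escape the accidental flag or escape byte
        out.append(b)
    out += FLAG
    return bytes(out)

def unstuff(frame: bytes) -> bytes:
    """Remove framing: strip the flags, drop each ESC, keep the byte after it."""
    body, out, i = frame[1:-1], bytearray(), 0
    while i < len(body):
        if bytes([body[i]]) == ESC:
            i += 1              # drop the escape, keep the next data byte
        out.append(body[i])
        i += 1
    return bytes(out)

data = b'A' + FLAG + b'B' + ESC + b'C'   # data containing both special bytes
assert unstuff(stuff(data)) == data       # round-trips exactly
```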
Flag bits with bit stuffing: A drawback of byte stuffing is that it is tied to 8-bit bytes. Framing can
instead be done at the bit level, allowing frames to contain an arbitrary number of bits. Every
frame begins and ends with the special bit sequence 01111110, which is the flag byte
pattern.
The sender's data link layer automatically inserts a 0 bit into the outgoing bit stream whenever it
finds five consecutive 1s in the data. The receiver automatically destuffs, or deletes, the 0 bit when
it detects five consecutive incoming 1 bits followed by a 0 bit. When the user data contains the flag
pattern 01111110, it is transmitted as 011111010 but is saved as 01111110 in the memory of the
receiver. The flag pattern can unambiguously recognise the boundary between two frames when
bit stuffing is used.
As a result, if the receiver becomes confused, all it has to do is check the input for flag sequences,
which can only occur at frame boundaries and never within the data.
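The stuffing and destuffing rules above can be sketched directly (bits are represented as '0'/'1' characters for readability):

```python
def bit_stuff(bits: str) -> str:
    """Insert a 0 after every run of five consecutive 1s in the outgoing stream."""
    out, ones = [], 0
    for b in bits:
        out.append(b)
        ones = ones + 1 if b == '1' else 0
        if ones == 5:
            out.append('0')  # stuffed bit
            ones = 0
    return ''.join(out)

def bit_destuff(bits: str) -> str:
    """Delete the 0 bit that follows every run of five consecutive 1s."""
    out, ones, i = [], 0, 0
    while i < len(bits):
        b = bits[i]
        out.append(b)
        ones = ones + 1 if b == '1' else 0
        if ones == 5:
            i += 1  # skip the stuffed 0 that must follow five 1s
            ones = 0
        i += 1
    return ''.join(out)

print(bit_stuff('01111110'))  # -> 011111010, matching the example in the text
```

Because the receiver removes every 0 that follows five 1s, six consecutive 1s can only ever appear in a flag, never in the data.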
Error Detection & Correction
Transmission errors are to be expected on ageing local loops and wireless links. Network designers
have devised two fundamental approaches to handling errors; both add redundant information to
the transmitted data. One approach is to include enough redundant information for the receiver to
deduce what the transmitted data must have been. The alternative is to include only enough
redundancy for the receiver to deduce that an error has occurred (but not which error) and request
a retransmission. The first strategy uses error-correcting codes; the second uses error-detecting
codes. We are going to look at two different error-detecting codes.
Let us examine the first error-detecting code, which appends a single parity bit to the data. The
parity bit is chosen so that the number of 1 bits in the code word is even (or odd). For example,
when 1011010 is transmitted with even parity, a bit is appended to the end, resulting in
10110100. Odd parity transforms 1011010 into 10110101. Any error of a single bit results in a code
word that has the incorrect parity. This indicates that single-bit errors can be detected.
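The parity scheme can be sketched in a few lines (function names are ours):

```python
def add_parity(bits: str, even: bool = True) -> str:
    """Append one parity bit so the total count of 1s is even (or odd)."""
    ones = bits.count('1')
    parity = '0' if (ones % 2 == 0) == even else '1'
    return bits + parity

def parity_ok(codeword: str, even: bool = True) -> bool:
    """Check whether a received codeword has the agreed parity."""
    return (codeword.count('1') % 2 == 0) == even

print(add_parity('1011010', even=True))   # 10110100, as in the text
print(add_parity('1011010', even=False))  # 10110101
print(parity_ok('10110110', even=True))   # False: one flipped bit is detected
```

Flipping any single bit changes the count of 1s by one, so the parity check always catches a single-bit error, though it misses any even number of flipped bits.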
Cyclic Redundancy Check: The datalink layer employs a more robust type of error-detection code
known as the CRC (Cyclic Redundancy Check), which is also referred to as a polynomial code.
Polynomial codes are based on treating bit strings as representations of polynomials with
coefficients of 0 and 1 only.
When using the polynomial code approach, the sender and receiver must first agree on a
generator polynomial, G(x). To compute the CRC for a frame of m bits, corresponding to the
polynomial M(x), the frame must be longer than the generator polynomial.
The goal is to add a CRC to the end of the frame so that the polynomial represented by the
checksummed frame is divisible by G(x). When the receiver receives the checksummed frame, it
attempts to divide it by G(x). There was a transmission error if there is a remainder.
Example: Calculate the CRC for the frame 1101011111 using the generator G(x) = x⁴ + x + 1.
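The modulo-2 long division used to compute the CRC can be sketched as follows. G(x) = x⁴ + x + 1 corresponds to the bit string 10011; the generator has degree 4, so 4 zero bits are appended before dividing:

```python
def crc_remainder(frame: str, generator: str) -> str:
    """Modulo-2 (XOR) long division; returns the CRC bits to append."""
    degree = len(generator) - 1
    bits = [int(b) for b in frame] + [0] * degree  # append degree zero bits
    gen = [int(b) for b in generator]
    for i in range(len(frame)):
        if bits[i] == 1:                 # divisor "goes into" this position
            for j in range(len(gen)):
                bits[i + j] ^= gen[j]    # subtract (XOR) the generator
    return ''.join(str(b) for b in bits[-degree:])

crc = crc_remainder('1101011111', '10011')  # G(x) = x^4 + x + 1
print(crc)                    # 0010
print('1101011111' + crc)     # checksummed frame: 11010111110010
```

The receiver performs the same division on the full checksummed frame; a nonzero remainder signals a transmission error.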
Example: For a 12-bit data string of 101100010010, determine the number of Hamming bits
required, arbitrarily place the Hamming bits into the data string, determine the logic condition of
each Hamming bit, assume an arbitrary single-bit transmission error, and prove that the Hamming
code will successfully detect the error.
Ans: Five Hamming bits are sufficient, and a total of 17 bits make up the data stream. Arbitrarily
placing five Hamming bits into bit positions 4, 8, 9, 13, and 17 yields
To determine the logic condition of the Hamming bits, express all bit positions that contain a logic 1
as a five-bit binary number and XOR them together.
Assume that during transmission, an error occurs in bit position 14. The received data stream is
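The position-XOR scheme described above has an invariant that can be checked generically: when the Hamming bits are chosen so that the XOR of all 1-bit positions is zero, flipping any single bit at position p changes that XOR to exactly p, which is how the error is located. A small sketch (the 17-bit stream below is illustrative, not the worked example's exact codeword):

```python
from functools import reduce

def syndrome(stream: str) -> int:
    """XOR together the (1-based) positions of every 1 bit in the stream."""
    return reduce(lambda acc, pos: acc ^ pos,
                  (i + 1 for i, b in enumerate(stream) if b == '1'), 0)

def flip(stream: str, pos: int) -> str:
    """Flip the bit at a 1-based position, simulating a transmission error."""
    i = pos - 1
    return stream[:i] + ('0' if stream[i] == '1' else '1') + stream[i + 1:]

s = '10110001001011010'  # an illustrative 17-bit stream
# Flipping bit 14 changes the syndrome by exactly 14, pinpointing the error:
print(syndrome(s) ^ syndrome(flip(s, 14)))  # 14
```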
Elementary Data Link Protocols
Simplest Protocol: The transmitter sends a series of frames without regard for the receiver. Data
may only be transmitted in one direction. Both the transmitter and the receiver are assumed to be
always ready to send and receive frames. Processing time is ignored, and infinite buffer space is
available. The key assumption is that frames are never lost or corrupted in the communication
channel. We will name this thoroughly unrealistic scheme "Utopia." The Utopia protocol is
impractical because it provides neither flow control nor error correction.
Stop-and-wait Protocol:
After sending one frame, the sender waits for the recipient's acknowledgment. The sender
transmits the subsequent frame after receiving the ACK. The protocol is known as "stop-and-wait"
because after sending a frame, the sender waits for the recipient to indicate that it is alright to
proceed before sending next frame. This scheme includes the flow control.
NOISY CHANNELS:
Although its predecessor, known as the Stop-and-Wait Protocol, demonstrates how to implement
flow control, noiseless channels are non-existent. Either we add error control to our protocols, or
we choose to ignore the error that occurred. These are protocols which considered the existence of
transmission errors.
We must add redundancy bits to the data frame in order to detect corrupted frames. The frame is
examined upon arrival at the receiver site, and if it is corrupted, it is silently discarded. The
receiver's silence signals the sender that the frame was in error. Handling lost frames is
more challenging than handling corrupted ones. The frame that was received can be an out-of-
order frame, a duplicate, or the proper one. Assigning a sequence number to the frames is the
answer. How does the sender determine which frame to transmit if the recipient is silent about an
error? The sender retains a copy of the transmitted frame in order to fix this issue. It also sets off a
timer at the same moment. The frame is resent, the copy is retained, and the countdown is
repeated if the timer ends without an acknowledgment for the transmitted frame. Because the
protocol employs the stop-and-wait technique, an ACK is only required for one particular frame.
Sequence numbers are used in Stop-and-Wait ARQ to number the frames. The sequence numbers
are derived from arithmetic modulo-2. The acknowledgment number in Stop-and-Wait ARQ always
transmits the sequence number of the expected next frame in modulo-2 arithmetic.
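The Stop-and-Wait ARQ receiver behaviour described above can be sketched as follows. This is a hand-driven simulation rather than a real channel (loss is injected simply by calling the handler twice); names are ours:

```python
def make_receiver():
    """Stop-and-Wait ARQ receiver with modulo-2 sequence numbers.
    The returned ACK carries the sequence number of the next expected frame."""
    state = {'expected': 0, 'delivered': []}

    def on_frame(seq, data):
        if seq == state['expected']:          # the frame we were waiting for
            state['delivered'].append(data)
            state['expected'] = (state['expected'] + 1) % 2
        # a duplicate (seq != expected) is re-acknowledged but not delivered
        return state['expected']              # ACK = next expected frame number

    return on_frame, state

on_frame, state = make_receiver()
print(on_frame(0, 'A'))    # 1 -- frame 0 accepted; suppose this ACK is lost
print(on_frame(0, 'A'))    # 1 -- sender timed out and resent; duplicate dropped
print(on_frame(1, 'B'))    # 0 -- frame 1 accepted
print(state['delivered'])  # ['A', 'B'] -- each frame delivered exactly once
```

The 1-bit sequence number is all that is needed because at most one frame is outstanding at any time.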
2. Go-Back-N Automatic Repeat Request
Out-of-order frames are not stored in Go-Back-N ARQ because the receiver just throws them away.
But this technique doesn't work well with a noisy link. In a noisy link, there is a greater chance that
a frame will be damaged, which requires sending more than one frame again. This sending again
takes up bandwidth and makes the communication go more slowly. There is another way to handle
noisy links that doesn't send N frames again when only one has become damaged; only the broken
frame is sent again. This method is known as Selective Repeat ARQ. It works better when the link is
noisy, but it requires additional efforts at the receiver end.
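The difference between the two receiver policies can be sketched side by side (frames are (seq, data) pairs; the arrival order below, with frame 1 delayed and arriving after frame 2, is illustrative):

```python
def go_back_n_receiver(frames):
    """Go-Back-N: out-of-order frames are thrown away; the sender must
    resend everything from the gap onward."""
    expected, delivered = 0, []
    for seq, data in frames:
        if seq == expected:
            delivered.append(data)
            expected += 1
        # else: discarded, even if the frame itself arrived undamaged
    return delivered

def selective_repeat_receiver(frames):
    """Selective Repeat: out-of-order frames are buffered; only the
    missing frame must be resent."""
    expected, buffered, delivered = 0, {}, []
    for seq, data in frames:
        buffered[seq] = data
        while expected in buffered:        # deliver in order when possible
            delivered.append(buffered.pop(expected))
            expected += 1
    return delivered

arrivals = [(0, 'A'), (2, 'C'), (1, 'B')]   # frame 1 damaged, then resent
print(go_back_n_receiver(arrivals))         # ['A', 'B'] -- 'C' was thrown away
print(selective_repeat_receiver(arrivals))  # ['A', 'B', 'C']
```

This illustrates the trade-off in the text: Go-Back-N keeps the receiver simple but wastes bandwidth on a noisy link, while Selective Repeat saves retransmissions at the cost of buffering at the receiver.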
Medium Access Control Sublayer
Two types of network connections exist: those that use broadcast channels and those that use
point-to-point connections. Random access channels and multi-access channels are other names
for broadcast channels. The most important problem in any broadcast network is how to decide
who gets the use of the channel in the event of competition. The protocols that decide who gets to
talk on a multi-access channel are part of the data link layer's MAC sublayer. In LANs, the MAC
sublayer is very crucial. How contending users are to be allocated a single broadcast channel is the
central theme of this chapter. The channel connects each user to all the others, and any user
making full use of the channel interferes with other users who also wish to use it. Conventionally,
in order to share a single channel among numerous
contending consumers, its capacity is distributed through the use of frequency division
multiplexing. Considerable spectrum will be wasted if the spectrum is divided into N regions and
fewer than N users are presently interested in communicating. Some users will be denied
permission to communicate if N or more users attempt to do so, due to insufficient bandwidth.
This is the static channel allocation problem, which dynamic channel allocation methods address.
With static FDM, when there are N users the bandwidth is partitioned into N equal-sized portions,
one allocated to each user. Since every user has a private frequency band,
interference among users has been eliminated. A straightforward and effective allocation
mechanism is this division when the number of users is small and consistent, and each user has a
consistent flow of traffic. When the number of senders is substantial and fluctuating, FDM poses
certain challenges.
Prior to discussing the numerous channel allocation methods covered in this chapter, it is essential
to grasp the following five fundamental assumptions.
Independent Traffic: The model comprises N autonomous stations, wherein each station is
occupied by a user responsible for generating frames intended for transmission.
After the generation of a frame, the station enters a blocked state and remains inactive until the
frame is transmitted successfully.
Single Channel: All communications are conducted through a single channel. It is capable of
transmitting and receiving signals from all stations. While stations may be assigned priorities by
protocols, their capabilities have been considered to be equivalent.
Observable collisions: Collisions occur when two frames are transmitted concurrently, leading to a
temporal overlap that results in a distorted signal. Each station is capable of identifying a collision.
Following a collision, a frame must be retransmitted.
Carrier sense assumption: carrier sensing enables stations to determine whether the channel is in use
prior to attempting to make use of it. A station that perceives the channel as active will not attempt
to use it. Without carrier sense, stations are unable to detect the channel prior to attempting to
use it. They proceed with the transmission. They don't know whether the transmission was
successful until a later time.
Norman Abramson and his associates at the University of Hawaii were attempting to establish a
connection between the main computer system in Honolulu and users located on remote islands.
Short-range radios were employed, whereby each user terminal used an identical upstream
frequency to transmit frames to the central computer. The fundamental concept can be
implemented in any system wherein uncoordinated users contend for access to a single shared
channel. Two variants of ALOHA will be examined here: pure and slotted. They differ in whether
time is continuous, as in the pure version, or divided into discrete slots into which all frames must
fit.
Pure ALOHA:
Slotted ALOHA:
Roberts (1972) presented a technique for doubling the capacity of an ALOHA system shortly after
ALOHA was introduced. He suggested partitioning up time into discrete periods known as slots,
each of which corresponds to one frame. A station is not permitted to transmit whenever the user
has a frame ready; instead, it must wait until the start of the next slot. This halves the vulnerable
period.
As you can see, slotted ALOHA has a throughput of S = 1/e, or around 0.368, which is double that of
pure ALOHA. It peaks at G = 1. The chance of an empty slot in a system running at G = 1 is 0.368.
The maximum channel utilisation possible with slotted ALOHA is 1/e. This poor result is not
unexpected, since there will always be numerous collisions when stations broadcast at random and
are unaware of each other's activities. Stations on LANs may often sense what other stations are
doing and modify their behaviour appropriately. Compared to 1/e, these networks are capable of
much higher utilisation. We will talk about a few procedures for enhancing performance in this
part. Carrier sense protocols are those in which stations listen for a carrier, or a signal, and respond
appropriately.
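The throughput figures above can be checked numerically. The sketch below simply evaluates the standard formulas S = Ge^(-2G) for pure ALOHA and S = Ge^(-G) for slotted ALOHA, where G is the offered load in frames per frame time.

```python
import math

def throughput_pure(G):
    # Pure ALOHA: S = G * e^(-2G); the vulnerable period is two frame times
    return G * math.exp(-2 * G)

def throughput_slotted(G):
    # Slotted ALOHA: S = G * e^(-G); the vulnerable period is halved
    return G * math.exp(-G)

# Peaks: pure ALOHA at G = 0.5, slotted ALOHA at G = 1
print(round(throughput_pure(0.5), 3))     # 1/(2e), about 0.184
print(round(throughput_slotted(1.0), 3))  # 1/e, about 0.368
```

Evaluating both at their peaks confirms that slotted ALOHA's maximum throughput of 1/e is exactly double pure ALOHA's 1/(2e).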
Persistent and Nonpersistent CSMA: Here, we shall examine the first carrier sense protocol, known as 1-persistent CSMA; it is the simplest CSMA scheme. A station listens to the channel to
determine whether anybody else is broadcasting at that particular time before sending data. The
station transmits its data if the channel is empty. Otherwise, the station just waits for the channel
to become idle if it is busy. The station then sends out a frame. In the event of a collision, the
station restarts after waiting for an arbitrary period of time. Because the station broadcasts with a
probability of 1 when it detects that the channel is idle, the protocol is known as 1-persistent. This
protocol exhibits superior performance in comparison to pure ALOHA due to the fact that both
stations possess the decency to refrain from interrupting the frame of the third station.
The station does not continuously sense an already-occupied channel in order to seize it
immediately after detecting the conclusion of the previous transmission if the channel is in use.
However, after a random interval of time has passed, the algorithm is executed again. This
algorithm therefore optimises channel utilisation, albeit at the expense of lengthier latency
compared to 1-persistent CSMA. A further enhancement is for stations to detect a collision promptly and abruptly cease transmission.
CSMA with Collision Detection: This protocol, referred to as CSMA/CD, is the foundation of the classic Ethernet LAN. While a station is transmitting, its hardware must listen to the channel; it detects a collision if the signal it reads back differs from the signal it is sending. CSMA/CD can be described with the following conceptual model.
A station has completed transmitting its frame at time t0. Now, any other station with a frame
to transmit is free to try. A collision will occur if two or more stations choose to broadcast at the
same time. A station will stop transmitting if it detects a collision, wait an arbitrary amount of time,
and then try again. Thus, we will have alternating congestion and transmission times in our
CSMA/CD model, with idle periods happening when all stations are inactive.
Assume that at precisely moment t0, two stations simultaneously start broadcasting. How much
time until they realise they've collided? The time it takes for a signal to go from one station to
another is the minimal amount of time needed to detect a collision. Stated differently, a station
cannot, in the worst situation, be certain that it has taken the channel until it has communicated
for two times the propagation time without hearing a collision.
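The two-propagation-time rule above also fixes a minimum frame length: a station must still be transmitting when news of a collision returns, so the frame must last at least 2τ. A minimal sketch (the 25.6 µs one-way delay and 10 Mbps rate are illustrative numbers, not taken from the text):

```python
def min_frame_bits(prop_delay_s, bitrate_bps):
    # A station must transmit for at least 2*tau to be certain it has
    # seized the channel, so the frame must span 2*tau*bitrate bits.
    return 2 * prop_delay_s * bitrate_bps

# Illustrative: a 25.6 us one-way delay on a 10 Mbps classic Ethernet
# yields the familiar 512-bit (64-byte) minimum frame length.
print(min_frame_bits(25.6e-6, 10e6))  # 512.0
```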
Collision-Free Protocols
This section will look at a few protocols that resolve the channel contention without ever causing a
collision, not even during the contention period. Assuming N stations, each uniquely programmed
with an address between 0 and N − 1.
A Bit-Map Protocol:
In the fundamental bit-map approach, every contention period has precisely N slots. Station 0
broadcasts a 1 bit during slot 0 if it has a frame to send. During this time period, no other station is
permitted to broadcast. A 1 bit may generally be inserted into slot j by station j to indicate that it
has a frame to transmit. Each station knows exactly which other stations want to broadcast once all
N slots have elapsed.
There will never be collisions since everyone agrees on who goes first. Another N-bit contention
period starts when the final ready station transmits its frame. A station is out of luck if it gets ready
just after its bit slot has gone by; it needs to stay quiet until all stations have had a chance and the
bit map has rotated once again.
Such protocols, which announce the intention to transmit before the transmission actually
happens, are known as reservation protocols as they reserve channel ownership ahead of time and
avoid collisions. When there is little load, the bit map will just keep repeating itself as there aren't
enough data frames. Examine the scenario from the perspective of a low-numbered station, such as 0 or 1.
The "current" slot will normally be in the centre of the bit map when it's ready to transmit. Before
the station can start broadcasting, it will often need to wait N /2 slots for the current scan to end
and other full N slots for the next scan to complete. For high-numbered stations, the future seems
more promising. These will often begin transmitting after only half a scan (N /2 bit slots).
Since high-numbered stations must wait an average of 0.5N slots and low-numbered stations an average of 1.5N slots, the mean wait over all stations is N slots. Calculating the channel efficiency under low load is thus simple: the overhead per frame is N bits and the data quantity is d bits, for an efficiency of d/(d + N).
Under high load, when all stations have something to broadcast all the time, the N-bit contention period is prorated over N frames. This results in an overhead of just 1 bit per frame, or an efficiency of d/(d + 1). The overhead of the standard bit-map protocol is one bit per station, which makes it unsuitable for networks with thousands of stations. We can use binary station addresses to improve upon that.
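As a quick numerical check of the two efficiency formulas, with hypothetical frame and station counts:

```python
def efficiency_low_load(d, N):
    # Low load: each d-bit frame pays a full N-bit contention period
    return d / (d + N)

def efficiency_high_load(d):
    # High load: the N-bit period is shared by N frames -> 1 bit/frame
    return d / (d + 1)

# Hypothetical numbers: 1000-bit frames, 50 stations
print(round(efficiency_low_load(1000, 50), 3))  # 0.952
print(round(efficiency_high_load(1000), 3))     # 0.999
```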
Binary Countdown:
Now, a station wishing to utilise the channel broadcasts, beginning with the high order bit, its
address as a binary bit string. It is expected that every address has the same length. Each address
position's bits from the various stations are Boolean ORed together by the channel. A station gives up as soon as it sees that a high-order bit position that is 0 in its address has been overwritten with a 1.
For instance, in the first bit period, stations 0010, 0100, 1001, and 1010 send 0, 0, 1, and 1 in an
attempt to get the channel. To create a 1, they are ORed together. After seeing the 1, stations 0010
and 0100 give up for the current round, realizing that a station with a higher number is competing
for the channel. Stations 1001 and 1010 continue: the next bit is 0 for both, so both carry on, but in the following bit period 1010 sends a 1 while 1001 sends a 0, so station 1001 gives up. Station 1010 is victorious because it has the highest address. It
may now send a frame after winning the auction, and then another bidding cycle begins.
Its feature is that stations with higher numbers are given preference over ones with lower
numbers.
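The arbitration just described can be sketched as a small simulation. The 4-bit addresses follow the example in the text; modelling the wired-OR with max() is a simplification for illustration.

```python
def binary_countdown(addresses, width=4):
    # Stations broadcast their addresses bit by bit, high-order first.
    # A station drops out when it sent a 0 but the channel carries a 1,
    # so the highest address always wins the bidding.
    contenders = set(addresses)
    for bit in range(width - 1, -1, -1):
        sent = {a: (a >> bit) & 1 for a in contenders}
        channel = max(sent.values())  # wired-OR of all transmitted bits
        if channel == 1:
            contenders = {a for a in contenders if sent[a] == 1}
    return contenders.pop()

# The example from the text: 0010, 0100, 1001, 1010 -> 1010 wins
print(bin(binary_countdown([0b0010, 0b0100, 0b1001, 0b1010])))  # 0b1010
```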
Ethernet
The Xerox company invented the first Ethernet. It has undergone four generations since then: Standard Ethernet (10 Mbps), Fast Ethernet (100 Mbps), Gigabit Ethernet (1 Gbps), and Ten-Gigabit Ethernet (10 Gbps). This section briefly discusses Standard (or conventional) Ethernet.
The MAC sublayer in Standard Ethernet controls how the access method works. Additionally, it
transfers data to the physical layer after framing data received from the higher layer.
Preamble: Seven bytes, or 56 bits, of alternating 0s and 1s make up the first field of an 802.3
frame. This allows the receiving system to synchronise its input time and be informed that a new
frame is about to arrive.
Start frame delimiter (SFD): The start of the frame is signalled by the second field (1 byte: 10101011).
The SFD tells the station or stations that this is their last chance to get in sync. The last two bits are
11 and let the recipient know that the next field is the destination's address.
Destination address (DA): The DA field is made up of 6 bytes and has the actual address of the
station that will receive the message.
Source address (SA): The physical address of the packet's sender is included in the six-byte SA field.
Length or type: This field served as the type field in the original Ethernet, which defined the upper-
layer protocol via the MAC frame. It served as the length field in the IEEE standard to specify how
many bytes were in the data field.
Data and padding: This field contains data that has been encapsulated from upper-layer protocols.
The least amount is 46 bytes and the most is 1500 bytes. To compensate when the upper-layer
payload is shorter than 46 bytes, padding is appended.
CSMA/CD is used as the MAC method in Ethernet.
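As a rough sketch of the framing rules just described (the addresses are made up, and the CRC field is omitted since it is not covered above):

```python
# A minimal sketch of 802.3 framing with padding; field sizes follow
# the description above, addresses are hypothetical.
PREAMBLE = b'\xaa' * 7  # 10101010 repeated seven times (56 bits)
SFD = b'\xab'           # 10101011: last chance to synchronise

def build_frame(dst, src, payload):
    if len(payload) > 1500:
        raise ValueError("payload exceeds 1500 bytes")
    # Pad the data field up to the 46-byte minimum
    padded = payload + b'\x00' * max(0, 46 - len(payload))
    length = len(payload).to_bytes(2, 'big')  # IEEE length field
    return PREAMBLE + SFD + dst + src + length + padded

frame = build_frame(b'\x01' * 6, b'\x02' * 6, b'hello')
print(len(frame))  # 7 + 1 + 6 + 6 + 2 + 46 = 68 bytes (CRC omitted)
```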
The physical layer implementations specified by the Standard Ethernet are various; four of the
most prevalent are illustrated in Figure.
10Base5: Thick Ethernet: In traditional Ethernet, every computer was connected through a single long cable that snaked around the building. The first variant was known as "thick Ethernet". The cable resembled a yellow garden hose, with markings every 2.5 metres denoting where computers could be attached.
Twisted-Pair Ethernet:
10Base-T, also known as twisted-pair Ethernet, denotes the third scheme. A physical star topology
is implemented. As shown in Figure, the stations are linked to a node via two pairs of twisted
cable. A 100-meter maximum length is specified for the twisted cable in order to minimise the effect of attenuation in the cable.
Many organisations have multiple LANs and wish to link them. This can be done with devices known as bridges; modern Ethernet switches are, in essence, bridges. Bridges operate at the data link layer and are used to combine multiple physical LANs into a single logical LAN.
When station A transmits a frame to station B, bridge B1 will receive the frame on port 1. The frame can be promptly discarded, since B is already reachable via the port on which the frame arrived.
In the depicted architecture shown in Figure (b), let us consider the scenario when node A
transmits a frame to node D. Bridge B1 will receive the frame via port 1 and thereafter transmit it
via port 4. The frame will be received by Bridge B2 on port 4 and then sent on port 1.
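The forwarding decisions in these examples can be sketched as a backward-learning port table; the station names and port numbers below are illustrative, not taken from the figure.

```python
table = {}  # station address -> port on which it was last seen

def handle_frame(src, dst, in_port, ports):
    table[src] = in_port  # backward learning: remember where src lives
    out = table.get(dst)
    if out == in_port:
        return []          # destination on the arrival port: discard
    if out is not None:
        return [out]       # known destination: forward on one port
    return [p for p in ports if p != in_port]  # unknown: flood

print(handle_frame('A', 'D', 1, [1, 2, 3, 4]))  # D unknown: [2, 3, 4]
print(handle_frame('D', 'A', 4, [1, 2, 3, 4]))  # A learned: [1]
```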
Repeaters sit in the lowermost layer, the physical layer. They are analogue devices that work on the signals present on the cables to which they are connected: the signal appearing on one cable is cleaned up, amplified, and put out on another cable. Repeaters do not understand frames, packets, or headers; they understand only the symbols that encode bits as voltages. A hub is
equipped with many input lines that are electrically interconnected. Frames that are received on
any of the lines are subsequently sent on all the other lines. In the event that two frames are
received simultaneously, a collision will occur, similar to the scenario seen in a coaxial cable
network.
A bridge serves as a means of interconnecting many Local Area Networks (LANs). Similar to a
central hub, a contemporary bridge has several ports, often accommodating a range of 4 to 48
input lines of a certain kind. In contrast to a hub, each port in a network switch is designed to
function as an independent collision domain. The phrase "switch" has gained significant popularity
in contemporary times. In contemporary installations, point-to-point connections are often used,
whereby individual computers are directly connected to a switch, resulting in the switch typically
possessing several ports. Next, we go to the topic of routers. Upon arrival of a packet at a router,
the frame header and trailer are removed, and the packet contained inside the payload field of the
frame is thereafter sent to the routing software. The software uses the packet header information
in order to determine the appropriate output line. At a higher hierarchical level, transport
gateways are located.
These devices establish a link between two computers that use dissimilar connection-oriented
transport protocols. For instance, consider a scenario where a computer using the connection-
oriented TCP/IP protocol is required to establish communication with another computer utilising a
distinct connection-oriented transport protocol known as SCTP. The transport gateway has the
capability to duplicate packets from one connection to another, while also adjusting their format as
required.Application gateways has the capability to comprehend the structure and substance of
data, enabling them to effectively convert messages from one format to another.
3. Network Layer
The network layer is primarily responsible for facilitating the transmission of packets from the
source node to the destination node. In order to do this, it is important for the network layer to
possess knowledge on the network's topology, encompassing all routers and links. Subsequently,
the network layer may make informed decisions in selecting suitable pathways inside the network.
It is important to use caution in the selection of routes to prevent the excessive burdening of some
communication lines and routers, while leaving others underutilised. When the source and
destination are located on separate networks, additional challenges arise. The responsibility for
addressing and managing these issues lies at the network layer. The primary duty is delivering
services to the transport layer. It is essential that the provision of services be detached from the
underlying router technology. It is essential to ensure that the transport layer remains isolated
from any knowledge pertaining to the number, categorization, and arrangement of routers inside
the network infrastructure.
The equipment of the ISP (routers linked by transmission lines), depicted within the shaded oval,
and the equipment of the customers, illustrated outside the oval, constitute the primary
constituents of the network. Host H1, connected directly to one of the ISP's routers, A, may be a personal computer with a DSL modem plugged in. H2 is connected to a LAN, which may be an office Ethernet, via a customer-owned and operated router, F. F is positioned outside the oval because it does not belong to the ISP.
When a host has a packet to transmit, it does so via a point-to-point link to the ISP or its own LAN
to the nearest router. The packet remains in that location until it has completely arrived and the
link has completed its processing, which includes checksum verification. It is then transmitted to
the subsequent router along the path until it arrives at the destination host, where it is unpacked.
This is store-and-forward packet switching in action.
Implementation of Connectionless Service:
In the event that CL service is provided, packets are independently injected into the network and
subsequently routed. No setup in advance is required. The network is referred to as a datagram
network, and the packets are often referred to as datagrams.
When connection-oriented service is implemented, prior to transmitting data packets, a path must
be established from the source router to the destination router. Comparable to the physical
circuits established by the telephone system, this connection is referred to as a VC (virtual circuit),
and the network is known as a virtual-circuit network. Internal tables within every router specify
where to forward packets. Every entry in the table comprises a dyad, which includes a destination
and the corresponding outgoing line. Only lines with direct connections are permitted. Since A
possesses solely two outgoing lines, which are directed to B and C, any incoming packet must be
routed through one of these routers, notwithstanding its ultimate destination being another
router. The figure displays the initial routing table for A, denoted by the label "initially."
At location A, packets 1, 2, and 3 are temporarily held after being received over the incoming connection and undergoing checksum verification. Then each packet is routed in accordance with A's routing table and sent across the outgoing connection leading to C.
However, packet 4 experiences a distinct occurrence. Upon reaching point A, the data is sent to
router B, despite its intended destination being F. Due to an unidentified reasoning, A opted to
transmit packet 4 via an alternative route in contrast to the preceding three packets. It is possible
that the system has acquired information on a congestion event along the ACE route and
subsequently modified its routing table, as seen in the section labelled "later." The algorithm
responsible for managing the tables and determining the routing choices is sometimes referred to
as the routing algorithm. For connection-oriented connections, a virtual circuit network is required.
As part of the connection configuration, a route from the source machine to the destination
machine is selected and stored in routing tables when the connection is established. This path is
utilised by all traffic traversing the connection. Upon the discharge of the connection, the virtual
circuit is likewise terminated. In connection-oriented service, a unique identifier is appended to
each transmission, specifying the virtual circuit to which it belongs. Host H1 and host H2 have established connection 1. According to the first line of A's table, a transmission arriving from H1 with connection identifier 1 is to be forwarded to router C with the same connection identifier 1. In a similar fashion, the first entry at C keeps connection identifier 1 on the transmission and forwards it to E.
If H3 desires to establish a connection with H2, it will select connection identifier 1 and instruct the
network to establish the virtual circuit. Consequently, the second row in the tables is generated.
While A can readily differentiate between connection 1 packets originating from H1 and those
from H3, C lacks this capability. Consequently, A assigns a distinct connection identifier to the
outgoing traffic for the second connection.
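The two table rows at router A can be modelled as a small lookup keyed by (incoming line, connection identifier); the entries mirror the H1/H3 example above.

```python
# Virtual-circuit forwarding at router A, keyed by (incoming line, VC id)
table_A = {
    ('H1', 1): ('C', 1),  # H1's connection keeps identifier 1
    ('H3', 1): ('C', 2),  # H3's connection is relabelled to 2 so that
}                         # router C can tell the two circuits apart

def forward(table, in_line, vc):
    # Look up the outgoing line and the (possibly remapped) VC id
    out_line, out_vc = table[(in_line, vc)]
    return out_line, out_vc

print(forward(table_A, 'H1', 1))  # ('C', 1)
print(forward(table_A, 'H3', 1))  # ('C', 2)
```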
Routing Algorithms
The routing algorithm is a crucial component of the network layer software, since it is tasked with
determining the appropriate output line for transmitting incoming packets. Once a prominent
broadcasting network is established, it is often anticipated to operate consistently over an
extended period of time, devoid of any significant disruptions or malfunctions.
During that time frame, several types of hardware and software problems are expected to occur.
The occurrence of failures in hosts, routers, and lines will be frequent, leading to many changes in
the network architecture. The routing algorithm must possess the capability to effectively handle
variations in both the network architecture and traffic conditions. Routing algorithms may be
categorised into two primary classes: nonadaptive and adaptive. Nonadaptive algorithms make
routing choices without considering any measurements or estimations of the existing topology and
traffic. The selection of the route is precalculated. This particular process is sometimes referred to
as static routing.
In contrast, adaptive algorithms modify their routing choices in response to changes in the
topology and traffic conditions. The information used by these dynamic routing techniques is
obtained from neighbouring routers. The metrics used for optimisation include distance, the
number of hops, and travel time.
The proposed concept involves constructing a graphical representation of the network, whereby
individual nodes within the graph symbolise routers, and the edges within the graph symbolise
communication lines. In order to determine the optimal path between two designated routers, the
method simply identifies the shortest route connecting them inside the graph.
The computation of edge labels may be determined using a function that takes into account many
aspects such as distance, bandwidth, average traffic, communication cost, observed latency, and
other relevant considerations.
The distance from the source node along the best known route is denoted by a label in brackets for
each node. At the beginning, no pathways are known, so every node is labelled with infinity. As the algorithm progresses and pathways are discovered, the labels may change, reflecting better paths. A label may be either tentative or permanent. Initially, all
labels are subject to revision and should be considered provisional. Once the shortest feasible
route from the source to a particular node is determined, it is permanently established and
remains unchanged afterward.
Determine the shortest distance between points A and D. We commence by designating node A as
permanent, denoted by a circle that is entirely filled in. Examine and relabel each of the adjacent
nodes to A (the working node) with its distance to A.
1. Make the local router the root of the tree. This node has a cost of 0 and is the first permanent node.
2. Examine each neighbour of the node that was last made permanent and record it as a tentative node with its cumulative cost; if a node can be reached in more than one way, keep the route with the lowest total cost.
3. From the list of tentative nodes, pick the one with the lowest cost and make it permanent, then repeat from step 2 until every node is permanent.
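Since the figure's edge weights are not reproduced here, the labelling procedure (Dijkstra's algorithm) is sketched below on a hypothetical four-node graph.

```python
import heapq

def dijkstra(graph, src):
    # graph: node -> {neighbour: cost}; returns shortest cost to each node
    dist = {src: 0}
    heap = [(0, src)]
    permanent = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in permanent:
            continue
        permanent.add(u)  # this node's label becomes permanent
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd  # a better tentative label was found
                heapq.heappush(heap, (nd, v))
    return dist

# Hypothetical topology; edge weights are illustrative only
g = {'A': {'B': 2, 'C': 5}, 'B': {'A': 2, 'C': 1, 'D': 4},
     'C': {'A': 5, 'B': 1, 'D': 2}, 'D': {'B': 4, 'C': 2}}
print(dijkstra(g, 'A'))  # {'A': 0, 'B': 2, 'C': 3, 'D': 5}
```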
One often used method is known as flooding, when each incoming packet is sent via every
outgoing line, with the exception of the line through which it was received. Flooding, a well-known
phenomenon in network communication, results in the generation of a significant volume of
duplicate packets. In response to this issue, many solutions have been implemented to effectively
manage and regulate the flooding process. One potential approach for attaining this objective is
the implementation of a mechanism whereby the router assigns a unique sequence number to
every packet it receives from the connected hosts. This algorithm is extremely robust: even if many routers fail, packets can still reach their destinations as long as at least one path survives.
Two widely used dynamic methods in computer networking are distance vector routing and link
state routing. Every router maintains a database, often referred to as a vector, that contains
information on the optimal distance to each destination and the corresponding link that should be
used to reach that destination. The tables undergo updates via the exchange of information with
neighbouring entities. Over time, each router acquires knowledge about the optimal connection to
establish connectivity with every destination. The routing method known as the distance vector
algorithm is often referred to as the Bellman-Ford routing algorithm.
Let us examine the process by which J determines its updated path to router G. It is aware that it
can reach destination A within a time frame of 8 milliseconds. Additionally, destination A claims
that it can reach destination G within a time frame of 18 milliseconds. Consequently, destination J
is aware that it can rely on a delay of 26 milliseconds to reach destination G if it passes packets
intended for G to destination A. In a similar manner, the delay to G via I, H, and K is computed as 41 milliseconds (31 + 10), 18 milliseconds (6 + 12), and 37 milliseconds (31 + 6), respectively.
The optimal value among the given options is 18, thereby causing the system to record an entry in
its routing database indicating that the delay to destination G is 18 milliseconds and the
recommended route is via node H. The same computation is executed for each of the other
destinations, resulting in the updated routing table shown in the last column of the diagram.
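J's computation can be replayed directly from the numbers in the text: each candidate cost is J's delay to a neighbour plus that neighbour's claimed delay to G.

```python
# J's measured delays to its four neighbours (ms)
delay_to_neighbour = {'A': 8, 'I': 10, 'H': 12, 'K': 6}
# Each neighbour's advertised delay to destination G (ms)
neighbour_claims_G = {'A': 18, 'I': 31, 'H': 6, 'K': 31}

# Bellman-Ford update: cost via n = link delay to n + n's claim
via = {n: delay_to_neighbour[n] + neighbour_claims_G[n]
       for n in delay_to_neighbour}
best = min(via, key=via.get)
print(via)               # {'A': 26, 'I': 41, 'H': 18, 'K': 37}
print(best, via[best])   # H 18
```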
The issue arises when the nodes inside the network use the Distance Vector Routing (DVR)
Protocol.
Router A will infer that it has the capability to establish a connection with Router B with a cost of 1
unit, whereas Router B will infer that it has the capability to establish a connection with Router C
with a cost of 2 units.
The scenario shown in the previous image involves the disconnection of the connection between
points B and C. In this scenario, B will get the knowledge that it is no longer feasible to reach C with
a cost of 2, and then modify its table to reflect this change.
Nevertheless, it is possible that A transmits some data to B indicating the feasibility of establishing
a connection between A and C, although at a cost of 3. Subsequently, given that B can establish a
connection with A at a cost of 1, B will mistakenly revise its table to reflect that it can establish a
connection with C via A at a cumulative cost of 1 + 3 = 4 units.
Subsequently, A will hear B's new cost and update its own cost to 1 + 4 = 5, and this process continues iteratively. The two routers are thus trapped in a feedback loop in which the advertised costs climb without bound. This phenomenon is known as the count-to-infinity problem.
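The exchange can be replayed in a few lines, using the costs of the example above (A-B link cost 1, A's stale route to C costing 3):

```python
# Toy replay of count-to-infinity on the chain A --1-- B --x-- C,
# after the B-C link has failed.
cost_AB = 1
a_to_c = 3             # A's stale route to C (via B)
b_to_c = float('inf')  # B has noticed the B-C link is down

for _ in range(4):
    b_to_c = cost_AB + a_to_c  # B believes A's advertisement
    a_to_c = cost_AB + b_to_c  # A then believes B's new cost
    print(b_to_c, a_to_c)
# the advertised costs climb 4 5 / 6 7 / 8 9 / ... without bound
```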
Hierarchical routing
As the size of networks increases, the routing tables of routers also increase in proportion. The use
of router memory is not only related to the continuous growth of tables, but also necessitates
more CPU time for scanning and increased bandwidth for transmitting status reports. At a certain
point, the network may expand to a magnitude where it becomes impractical for each router to
possess an entry for every other router. Consequently, routing will need to be executed in a
hierarchical manner, akin to the telephone network. In the context of network routing, the
implementation of hierarchical routing involves the partitioning of routers into distinct zones.
Every router has a thorough understanding of the routing of packets to destinations inside its own
region, while remaining unaware of the internal architecture of other regions. In the case of large
networks, a two-tiered structure may become inadequate, hence necessitating a division of
regions into clusters, clusters into zones, zones into groups, and so on. Router 1A has a complete
routing table containing a total of 17 entries. In the hierarchical routing approach, local routers get individual entries, while each remote region is condensed into a single entry.
Consequently, traffic destined for region 2 is directed over the 1B-2A line, while the other traffic is
routed via the 1C-3B line. The implementation of hierarchical routing has resulted in a decrease in
the number of entries in the table, namely from 17 to 7.
This saving comes at a price: a slight increase in the length of some routes. As an example, the optimal path from location 1A to location 5C passes through region 2.
However, in the context of hierarchical routing, all traffic destined for region 5 is directed via
region 3, since this routing strategy proves more advantageous for the majority of destinations
inside region 5. For a much larger network, a pertinent inquiry arises: "What is the optimal number of levels for the hierarchy?" As an example, let us
suppose a network with 720 routers. In the absence of a hierarchical structure, it is necessary for
each router to possess a total of 720 routing table entries.
In the scenario where the network is divided into 24 regions, with each region consisting of 30
routers, it is necessary for each router to maintain 30 local entries and 23 distant entries, resulting
in a cumulative total of 53 entries. In the case of choosing a three-level hierarchy, in which there
are 8 clusters, each including 9 regions consisting of 10 routers, it is necessary for each router to
possess 10 entries for local routers, 8 entries for routing to other regions within its own cluster,
and 7 entries for distant clusters. Consequently, the total number of entries required per router
amounts to 25.
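The table-size arithmetic for the 720-router example can be checked as follows:

```python
def flat_entries(n_routers):
    # No hierarchy: one entry per router
    return n_routers

def two_level(regions, per_region):
    # Local routers, plus one entry per other region
    return per_region + (regions - 1)

def three_level(clusters, regions, per_region):
    # Local routers + other regions in own cluster + other clusters
    return per_region + (regions - 1) + (clusters - 1)

# The 720-router example from the text
print(flat_entries(720))      # 720
print(two_level(24, 30))      # 30 + 23 = 53
print(three_level(8, 9, 10))  # 10 + 8 + 7 = 25
```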
The main issue observed in DVR was the prolonged convergence time in the network architecture,
due to the count-to-infinity problem. As a result, it was subsequently replaced by a completely
novel method known as link state routing. The concept behind link state routing is quite
straightforward and can be delineated into five components. Every router is required to perform
the following tasks:
1. Discover its neighbours and learn their network addresses.
2. Set a distance or cost metric to each of its neighbours.
3. Construct a packet containing all it has just learned.
4. Send this packet to, and receive packets from, all other routers.
5. Compute the shortest path to every other router.
In effect, the complete topology is distributed to every router.
Subsequently, the use of Dijkstra's algorithm may be employed at each router in order to
determine the most optimal route to every other router inside the network. After booting, a router
starts the process of identifying and acquiring information about its neighbouring routers. This
objective is achieved by transmitting a distinct HELLO packet over each individual point-to-point
connection. It is anticipated that the router situated at the other end will transmit a response with
its distinct identifier. The link state routing technique requires the presence of a distance metric for
each connection in order to determine the shortest pathways. The measurement of link delays
may be regarded as a metric in some cases. The most straightforward method for determining this
delay involves transmitting a specialised ECHO packet across the communication connection,
which prompts the recipient to promptly return it.
By measuring the time it takes for a signal to travel from the sending router to the receiving router
and back, and then dividing this value by two, the sending router may get a reasonably accurate
approximation of the delay. Every router constructs a packet that includes all of the data. The
packet starts with the sender's identification, which is thereafter accompanied by a sequence
number and age, as well as a list of neighbouring entities. The cost assigned to each neighbour is
also provided.
The distribution of link status packets to all routers may be achieved via the use of flooding. To
regulate the flood, each packet is equipped with a sequence number that is increased for every
new packet sent. In the event that a packet is fresh, it is forwarded over all lines except the one it
was received on. Conversely, if a packet is identified as a duplicate, it is disregarded and not
further processed.
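The sequence-number check that regulates the flood can be sketched as follows (the line names are illustrative):

```python
seen = {}  # source router -> highest sequence number seen so far

def flood(source, seq, in_line, lines):
    # Drop duplicates; forward fresh packets on every line but the
    # one they arrived on.
    if seq <= seen.get(source, -1):
        return []          # duplicate: discard
    seen[source] = seq
    return [l for l in lines if l != in_line]

print(flood('B', 1, 'north', ['north', 'south', 'east']))  # fresh
print(flood('B', 1, 'south', ['north', 'south', 'east']))  # duplicate: []
```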
Congestion Control
The presence of an excessive number of packets inside the network results in delays and losses of
packets, hence leading to degradation in performance. The present circumstance might be
referred to as congestion. The duty for managing congestion is shared between the network and transport layers. When hosts transmit packets into the network at a rate that is well below its
maximum capacity, the quantity of packets successfully delivered is directly proportional to the
quantity of packets transmitted. As the load being imposed on the system approaches its carrying
capacity, intermittent traffic bursts result in the saturation of buffers inside routers, leading to
occasional packet loss.
In the event that several streams of packets abruptly begin arriving on three or four input lines, all
necessitating the same output line, a queue will accumulate. In the event that the available
memory is inadequate to hold all the packets, packets will be lost. Adding more memory to routers
helps only up to a point; excessive buffering can make congestion worse rather than better,
because by the time packets reach the front of a long queue they have already timed out and
duplicates have already been sent. All these duplicates add to the load and contribute to
congestion collapse. Congestion
may also occur in low-bandwidth lines or routers that exhibit slower packet processing rates
compared to the line rate.
The existence of congestion indicates that the magnitude of the load exceeds the capacity of the
available resources. Two potential approaches might be considered: increasing the available
resources or reducing the current load. One fundamental approach to minimising congestion is the
construction of a network that is appropriately designed to accommodate the volume of traffic it
handles. In instances of significant congestion, resources have the potential to be included in a
dynamic manner. This may be achieved via many means, such as activating additional routers that
are available as backups or procuring more bandwidth from the open market. The process being
referred to is often known as provisioning. Routes may be customised to accommodate
fluctuations in traffic patterns that occur during the day, when network users residing in various
time zones awaken and retire. The term used to describe this concept is traffic-aware routing.
Occasionally, it may be unfeasible to increase the capacity. The only method to alleviate
congestion is by reducing the overall burden.
Open loop congestion management rules are used as a proactive measure to mitigate congestion
before it occurs. Congestion control is managed by either the source or the destination. The
policies implemented by open loop congestion management mechanisms are as follows:
Retransmission policy: this governs how lost or corrupted packets are resent. When the sender
perceives that a transmitted packet has been lost or corrupted, it must retransmit the packet.
Retransmission can aggravate congestion, so retransmission timers must be designed to avoid
congestion while still optimising efficiency.
Window policy: the type of window used at the sender also affects congestion. With a Go-Back-N
window, many packets are resent even though some of them were received successfully, and this
duplication intensifies the network's congestion. A Selective Repeat window should therefore be
used, because it retransmits only the specific packets that were lost.
Discarding policy: routers may reduce congestion by selectively discarding corrupted or less
critical packets while maintaining a general level of quality in the transmitted information.
Acknowledgment policy: the acknowledgments sent by the receiver also influence congestion. To
improve efficiency, the receiver should acknowledge N packets at a time rather than each packet
individually, and should transmit an acknowledgment only when it has a packet to send
(piggybacking) or when a timer expires.
Admission policy: new connections may be refused when their establishment would congest the
network. This mechanism is known as admission control.
1. The concept of backpressure involves the implementation of a mechanism whereby a node that
is experiencing congestion ceases to accept incoming packets from its upstream node. This
phenomenon has the potential to result in congestion of the upstream node or nodes, leading to
the refusal of data transmission from nodes situated higher in the network hierarchy. Backpressure
is a congestion control strategy that operates at the node-to-node level and is characterised by the
propagation of signals in the direction opposite to that of data flow.
In the figure shown, it can be seen that the third node has congestion, leading to a stop in the
reception of packets. Consequently, the second node may also experience congestion as a
consequence of the slowdown in the output data flow. Likewise, the first node may experience
congestion and then notify the source to reduce its transmission rate.
2. A packet that a node sends to the source to alert it about congestion is known as a choke
packet. Every router keeps an eye on the amount of resources it has and how each output line is
being used.
Whenever utilisation rises above a threshold set by the administrator, the router sends a choke
packet to the source, asking it to cut down on traffic. The intermediate nodes through which the
packets passed are not alerted about the congestion.
3. In implicit signalling there is no communication between the congested nodes and the source.
The source infers that the network is congested from indirect evidence; for instance, if the sender
transmits several packets and acknowledgments are delayed, it may assume there is congestion.
4. A router may tag each packet it transmits with a bit in the header to indicate congestion instead
of creating more packets. When the network transmits the packet, the destination might observe
congestion and notify the sender when it replies. As previously, the sender may restrict
transmissions. This concept is used on the Internet and is known as ECN (Explicit Congestion
Notification). Two bits in the IP packet header indicate whether a packet has encountered
congestion.
5. If none of the aforementioned techniques relieve the congestion, routers have the ability to use
load shedding. Load shedding is the technical term for routers discarding excessive amounts of
packets they are unable to process. The key decision for a router drowning in packets is which
packets to discard; the best choice depends on the applications using the network. An old packet
is worth more than a fresh one during a file
transfer. This is due to the fact that, for example, maintaining packets 7 through 10 and discarding
packet 6 would merely make the receiver work harder to buffer data that it is not yet able to
utilise. On the other hand, a fresh packet is worth more than an old one when it comes to real-
time media. This is due to the fact that delayed packets lose their usefulness if they are not played
by the required time.
6. Traffic in data networks is bursty; it typically arrives at nonuniform rates rather than a steady
one.
A method for controlling the average speed and burstiness of a data flow entering the network is
called traffic shaping. We shall now examine the token bucket and leaky bucket algorithms.
Consider a bucket with a little opening at the bottom. When there is any water in the bucket, the
outflow is at a constant rate, R, regardless of the rate at which water enters the bucket; when the
bucket is empty, the outflow is zero. Furthermore, any further water that enters the bucket after it
reaches capacity B flows over the sides and is wasted.
This bucket can be used to shape packets entering the network. Conceptually, each host's
interface to the network contains a leaky bucket. For a packet to be sent onto the network, it must
be possible to pour a corresponding amount of water into the bucket. If a packet arrives when the
bucket is full, it must either be discarded or held back until enough water leaks out to make room
for it.
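The leaky bucket analogy can be sketched directly in code. This is a byte-counting variant with hypothetical rate and capacity values: water level rises by the packet size on arrival and drains at the constant rate R; a packet that would overflow capacity B is dropped.

```python
# Minimal leaky bucket shaper sketch (hypothetical parameters).

class LeakyBucket:
    def __init__(self, rate, capacity):
        self.rate = rate          # R: drain rate, bytes per second
        self.capacity = capacity  # B: bucket size, bytes
        self.level = 0.0          # current "water" in the bucket
        self.last = 0.0

    def accept(self, size, now):
        """Return True if the packet fits in the bucket, False if it is dropped."""
        # First drain the water that leaked out since the last arrival.
        self.level = max(0.0, self.level - (now - self.last) * self.rate)
        self.last = now
        if self.level + size <= self.capacity:
            self.level += size
            return True
        return False

bucket = LeakyBucket(rate=1000, capacity=2000)   # 1000 B/s drain, 2000 B bucket
assert bucket.accept(1500, now=0.0) is True      # fits
assert bucket.accept(1500, now=0.0) is False     # would overflow: dropped
assert bucket.accept(1500, now=2.0) is True      # bucket drained after 2 s
```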
Token Bucket Algorithm: The token bucket algorithm differs from the leaky bucket in that it
permits the output rate to vary with the size of the burst. In the token bucket algorithm, tokens are
stored inside the bucket. In order to facilitate the transmission of a packet, the host is required to
acquire and subsequently consume a single token. Tokens are produced by a timekeeping device
at a frequency of one token per unit time, denoted as Δt seconds. Idle hosts have the capability to
acquire and store tokens, accumulating them until the bucket reaches its maximum capacity. This
accumulation allows hosts to subsequently transmit greater bursts of data.
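The token bucket idea above can be sketched the same way. In this illustrative variant (parameters are hypothetical), tokens accumulate at a fixed rate up to the bucket capacity and each transmitted unit consumes a token, so an idle host saves tokens and can later send a burst.

```python
# Token bucket sketch (hypothetical parameters).

class TokenBucket:
    def __init__(self, rate, capacity):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum tokens the bucket can hold
        self.tokens = capacity    # an idle host starts with a full bucket
        self.last = 0.0

    def send(self, size, now):
        """Return True if enough tokens exist to send `size` units now."""
        # Credit the tokens generated since the last attempt, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if size <= self.tokens:
            self.tokens -= size
            return True
        return False

tb = TokenBucket(rate=100, capacity=500)
assert tb.send(500, now=0.0) is True    # a full-capacity burst is allowed at once
assert tb.send(100, now=0.0) is False   # bucket empty, must wait
assert tb.send(100, now=1.0) is True    # 100 tokens regenerated after 1 s
```

Note the contrast with the leaky bucket: here the output is not smoothed to a constant rate; a saved-up burst may leave at full speed.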
Internet Protocol
The main component that holds the Internet together is the network layer protocol known as IP
(Internet Protocol). The transport layer takes data streams and breaks them up into smaller units
so that they may be sent as IP packets. IP routers forward each packet through the Internet,
passing it from one router to the next, until the destination is reached. There, the network layer
reassembles the pieces into the original datagram and hands it to the transport layer, which
delivers it to the receiving process. A suitable starting point for our study of the network layer in
the Internet is the format of the IP datagrams themselves. An IPv4 datagram consists of two main
parts: a header section and a payload section.
The 4-bit Version field records which version of the protocol the datagram belongs to. Version 4
currently dominates the Internet.
IPv6, also known as Internet Protocol version 6, is the subsequent version of the IP protocol.
The format of an IPv4 address is represented as x.x.x.x, where each x is referred to as an octet and
is required to be a decimal number ranging from 0 to 255. In computer networking, octets are
often separated by periods. An IPv4 address is required to consist of four octets separated by three
periods.
Example: 01.102.103.104
The standard format for an IPv6 address consists of eight segments, denoted as y:y:y:y:y:y:y:y.
Each segment, referred to as "y," represents a hexadecimal number ranging from 0 to FFFF.
The IHL is a 4-bit field that indicates the length of the header in 32-bit words.
The minimum value of 5 is applicable in cases when none of the options are provided.
The maximum possible value of the 4-bit field is 15, hence imposing a constraint on the header
size, which is restricted to 60 bytes.
The maximum total length of the datagram, header included, is 65,535 bytes.
An increase in the length of the header results in a corresponding reduction in the size of the
payload.
The 8-bit Differentiated Services (DS) field, originally the Type of Service field, allows different
combinations of reliability and speed to be specified. For digitised voice, fast delivery beats
accurate delivery; for file transfer, error-free transmission takes precedence over fast
transmission. The original Type of Service field allocated 3 bits for priority and a further 3 bits to
signal whether the host cared most about latency, throughput, or reliability. The remaining two
bits now carry the Explicit Congestion Notification (ECN).
The 16-bit Total length field covers the whole datagram, both header and data; the upper limit is
65,535 bytes.
The Identification field enables the receiving host to determine which packet a newly arrived
fragment belongs to. All fragments of a packet carry the same Identification value.
Next comes an unused bit, which is surprising. An April Fools' RFC (RFC 3514) once jokingly
proposed using this bit to flag malicious traffic, on the theory that packets carrying the 'evil' bit
could then be identified as malicious and rejected, greatly simplifying security.
Next there are two 1-bit fields related to fragmentation. DF stands for "Don't Fragment"; it
instructs routers not to fragment the packet.
Initially, the purpose was to provide assistance to hosts who lacked the ability to reassemble the
fragments.
The fragment offset indicates the specific position inside the current packet to which this fragment
corresponds.
The Time to Live (TTL) field, consisting of 8 bits, serves as a counter that is used to restrict the
lifespan of packets.
The intention was originally to count time in seconds, allowing a maximum lifetime of 255
seconds. In practice, the value is simply decremented on each hop. When it reaches zero, the
packet is discarded and a warning packet is sent back to the originating host. This prevents
packets from wandering the network indefinitely.
When the network layer has assembled a complete packet, it needs to know what to do with it.
The Protocol field tells which transport process the packet should be handed to. TCP is one
possibility, but so are UDP and others. The Header checksum verifies the header, safeguarding
critical information such as the addresses. The Source address and Destination address indicate
the IP addresses of the source and destination, respectively.
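The fixed 20-byte header layout described above can be unpacked with the standard struct module. The header bytes below are hand-built for illustration; the field names in the returned dictionary are our own labels, not part of the protocol.

```python
# Sketch: unpack the fixed 20-byte IPv4 header to recover the fields described
# above (version, IHL, total length, TTL, protocol, checksum, addresses).
import struct
import socket

def parse_ipv4_header(data):
    (ver_ihl, dscp_ecn, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", data[:20])
    return {
        "version": ver_ihl >> 4,
        "ihl": ver_ihl & 0x0F,     # header length in 32-bit words
        "total_length": total_len,
        "ttl": ttl,
        "protocol": proto,         # 6 = TCP, 17 = UDP
        "src": socket.inet_ntoa(src),
        "dst": socket.inet_ntoa(dst),
    }

# A hand-built header: version 4, IHL 5, TTL 64, protocol TCP, 10.0.0.1 -> 10.0.0.2
hdr = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 1, 0, 64, 6, 0,
                  socket.inet_aton("10.0.0.1"), socket.inet_aton("10.0.0.2"))
fields = parse_ipv4_header(hdr)
assert fields["version"] == 4 and fields["ihl"] == 5
assert fields["ttl"] == 64 and fields["protocol"] == 6
assert fields["src"] == "10.0.0.1" and fields["dst"] == "10.0.0.2"
```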
IP Addresses
IPv4 is identified by its 32-bit addresses. The Source address and Destination address sections of IP
packets may be filled with the IP address of any host or router connected to the Internet.
It's crucial to remember that an IP address does not really correspond to a host. In actuality, it
refers to a network interface. A host component resides in the bottom bits of every 32-bit address,
while a variable-length network portion occupies the top bits. For every host on a single network,
the network portion has the same value.
This means that the addresses of a network form a contiguous block of IP address space; this
block is called a prefix. IP addresses are written in dotted decimal notation, in which each of the
four bytes is written in decimal, from 0 to 255.
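The network portion / host portion split can be explored with Python's standard ipaddress module. The /24 prefix here is a hypothetical example, chosen to match the CS-network address used later in this section.

```python
# Sketch: a /24 prefix has 24 network bits and 8 host bits, so it covers
# 2^8 = 256 addresses that all share the same network portion.
import ipaddress

net = ipaddress.ip_network("192.32.65.0/24")   # hypothetical example prefix
host = ipaddress.ip_address("192.32.65.5")

assert host in net                             # same network portion
assert net.num_addresses == 256                # 8 host bits -> 256 addresses
assert str(net.netmask) == "255.255.255.0"     # dotted decimal form of /24
```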
Internet control protocols
The Internet Control Message Protocol (ICMP) is a network protocol that is used to provide error
messages and functional information about network conditions. In addition to the Internet
Protocol (IP), which serves as the primary means for data movement, the Internet has a variety of
complementary control protocols that operate at the network layer. The protocols included in this
category are ICMP, ARP, and DHCP.
The routers carefully monitor the functioning of the Internet. In the case of an unforeseen
occurrence during the processing of packets at a router, the ICMP (Internet Control Message
Protocol) is responsible for notifying the sender.
Even though every computer connected to the Internet has one or more IP addresses, transmitting
packets requires more than just these addresses. Internet addresses are not understood by data
link layer NICs (Network Interface Cards), such as Ethernet cards. When it comes to Ethernet, each
and every NIC that has ever been produced has a unique 48-bit Ethernet address. To make sure
that no two Ethernet network interface controllers have the same address, Ethernet NIC
manufacturers ask IEEE for a block of Ethernet addresses. 48-bit Ethernet addresses are used by
the NICs to transmit and receive frames. They have no knowledge whatsoever about 32-bit IP
addresses.
Now suppose the upper layer software on host 1 builds a packet and hands it to the IP software
for transmission, placing 192.32.65.5 in the destination address field. By examining the address,
the IP software can determine that the destination is on its own network (the CS network).
However, it still needs a way to find the destination's Ethernet address in order to transmit the
frame.
Having a configuration file that maps IP addresses to Ethernet addresses someplace in the system
is one way to solve this problem. While there is no doubt that this technique is feasible,
maintaining all these files up to date is a laborious and error-prone task for organisations with
thousands of devices.
A better way would be for host 1 to query who owns IP address 192.32.65.5 via a broadcast packet
sent over the Ethernet. Every computer connected to the CS Ethernet will receive the broadcast
and verify its IP address. Only Host 2 will reply, providing its Ethernet address (E2). In this manner,
host 1 learns that IP address 192.32.65.5 belongs to the host with Ethernet address E2. The
protocol used to ask this question and receive the reply is called ARP (Address Resolution
Protocol).
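The query-and-cache behaviour of ARP can be sketched as follows. The table contents and names here are hypothetical; real ARP additionally ages out cache entries and works with actual broadcast frames.

```python
# Sketch of the ARP idea: a broadcast query maps an IP address to an Ethernet
# address, and replies are cached so the broadcast need not be repeated.

class ArpCache:
    def __init__(self, answer_broadcast):
        self.cache = {}
        self.answer_broadcast = answer_broadcast  # simulates the LAN broadcast

    def resolve(self, ip):
        if ip not in self.cache:                  # cache miss: ask "who has ip?"
            self.cache[ip] = self.answer_broadcast(ip)
        return self.cache[ip]

lan = {"192.32.65.5": "E2"}                       # host 2 answers with its NIC address
arp = ArpCache(lambda ip: lan[ip])
assert arp.resolve("192.32.65.5") == "E2"         # first lookup broadcasts
assert arp.resolve("192.32.65.5") == "E2"         # second lookup hits the cache
```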
Hosts are equipped with fundamental details, such as their own IP addresses. By what means do
hosts get this information? Manual configuration of individual computers is a feasible approach;
nonetheless, this method is characterised by its time-consuming nature and susceptibility to
errors. An alternative exists in the form of the Dynamic Host Configuration Protocol (DHCP). Each
network must have a DHCP server, which is responsible for configuration. To request an IP
address on its network, a newly started computer broadcasts a DHCP DISCOVER packet, which
must reach the DHCP server. On receiving the request, the server assigns a free IP address and
returns it to the host in a DHCP OFFER packet. Because the host does not yet have an IP address,
the server identifies it by the Ethernet address carried inside the DHCP DISCOVER packet. One
concern with automatically allocating IP addresses from a pool is how long an assignment should
last. If a host leaves the network without returning its address to the DHCP server, that address is
permanently lost, and over time many addresses can leak away in this manner. To prevent this, IP
addresses may be allocated for a fixed period, a technique known as leasing. Just before the lease
expires, the host must ask the DHCP server for a renewal. If the request fails or is refused, the host
may no longer use the previously assigned address.
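The leasing rule can be stated compactly in code. This is a deliberately simplified sketch with hypothetical times; real DHCP defines specific renewal and rebinding timers.

```python
# Sketch of the leasing idea: an address is granted for a fixed lease time,
# and the host may keep using it only while the lease is valid or renewed.

def may_use_address(granted_at, lease_seconds, renewed, now):
    """A host keeps its address while the lease is unexpired or was renewed."""
    return renewed or (now - granted_at) < lease_seconds

assert may_use_address(granted_at=0, lease_seconds=3600, renewed=False, now=1800)
assert not may_use_address(granted_at=0, lease_seconds=3600, renewed=False, now=4000)
assert may_use_address(granted_at=0, lease_seconds=3600, renewed=True, now=4000)
```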
Application Layer
The layers below the application layer provide transport services, but they do not do real work for
users. The application layer is where users actually interact with the network.
Name management in the postal system is accomplished by requiring letters to include the
addressee's name, street address, nation, state, or province, and city. DNS functions in the same
manner. The Internet Corporation for Assigned Names and Numbers, or ICANN, is in charge of
overseeing the top naming hierarchy for the Internet. Over 250 top-level domains make up the
Internet, and each domain includes a large number of hosts. Every domain is divided into subdomains,
which are then divided yet further, and so on. There are two types of top-level domains: countries and
generic.
ICANN-appointed registrars are in charge of overseeing the top-level domains. To get a name, all you
have to do is visit the relevant registrar (for com in this example) and see whether the name you want
is accessible.
The requester obtains the name by paying a nominal yearly fee to the registrar. Each country has
a single entry in the country domains. For instance, the .in domain is open to businesses,
individuals, and organisations throughout India. As an example of a subdomain, Cisco's
engineering department can be reached at eng.cisco.com.
In principle, domains may be added to the tree under either a national or a generic domain.
Creating a new domain requires permission from the domain that will contain it. For instance, if a
new VLSI group at the University of Washington wants the domain name vlsi.cs.washington.edu, it
needs authorisation from whoever manages cs.washington.edu.
Each domain may be linked to a collection of resource entries. The resource records linked to a
domain name are returned to a resolver when it submits the name to DNS. Thus, mapping domain
names onto resource records is DNS's principal function. A resource record is a five-tuple with the
following structure:
Domain_name  Time_to_live  Class  Type  Value
For instance: cs.mit.edu 86400 IN CNAME csail.mit.edu. The Domain name indicates the domain to
which this record applies and is therefore the primary search key used to answer queries. The
Time to live field indicates how stable the record is. The Class is the third field in each resource
record; for Internet information it is always IN. The Type field tells what kind of record this is; DNS
records come in a variety of types.
The A (Address) record is the most significant sort of record. It contains a host's interface's 32-bit IPv4
address.
The AAAA, or "quad A", record holds the corresponding 128-bit IPv6 address. Another common
type is the MX record, which names a host that is prepared to accept email for the given domain.
To send email to an address at, say, microsoft.com, the sending host must find a mail server at
microsoft.com that is willing to accept email; the MX record supplies this information.
CNAME records allow aliases to be created. Example: cs.mit.edu 86400 IN CNAME csail.mit.edu
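The five-tuple structure of resource records can be modelled directly. The sample records below are illustrative only (the A-record address is made up), not live DNS data.

```python
# Sketch of the five-tuple resource record from the text:
# (domain name, time to live, class, type, value).

records = [
    # (name,          ttl,   class, type,    value)
    ("cs.mit.edu",    86400, "IN",  "CNAME", "csail.mit.edu"),
    ("csail.mit.edu", 86400, "IN",  "A",     "128.30.2.121"),   # made-up address
]

def lookup(name, rtype):
    """Return the values of all records matching (name, type)."""
    return [v for (n, t, c, ty, v) in records if n == name and ty == rtype]

assert lookup("cs.mit.edu", "CNAME") == ["csail.mit.edu"]
assert lookup("csail.mit.edu", "A") == ["128.30.2.121"]
```

Note how the CNAME alias requires a second lookup: the first query yields another name, and the resolver must then look up that name's A record.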
Name Servers:
A single name server could theoretically store the whole DNS database and be able to answer any
requests about it. In actuality, this server would be rendered ineffective due to overload. Moreover,
the whole Internet would be totally inoperable if it ever went down. The DNS name space is separated
into non-overlapping zones to prevent the issues that arise from having only one source of
information.
One or more name servers are also connected to each zone. The zone's database is stored on these
hosts.
Name resolution is the process of looking up a name and finding an address. Resolvers send
queries about domain names to a local name server. If the domain being sought falls under the
jurisdiction of that name server (e.g., top.cs.vu.nl falls under cs.vu.nl), it returns the authoritative
resource records.
What happens when the domain is remote? Suppose flits.cs.vu.nl wants to find the IP address of
robot.cs.washington.edu at the University of Washington, and the local name server has no
information about that domain cached. The local name server begins a remote query by asking
one of the root name servers, at the very top of the naming hierarchy. These root servers know
the name servers for all the top-level domains.
Each name server must be able to contact a root server, so it is configured with information about
one or more root name servers. There are 13 root DNS servers, named a.root-servers.net through
m.root-servers.net. The root name server is unlikely to know the address of a machine at the
University of Washington (UW), and probably does not know UW's name server either, but it must
know the name server for the edu domain, in which cs.washington.edu lies. Its reply (step 3) gives
the name and IP address of that server. The local name server then continues its search, sending
the whole query to the edu name server, a.edu-servers.net. That name server returns the name
server for the University of Washington (steps 4 and 5). The local name server next sends the
query to the UW name server (step 6). If the desired domain name had belonged to, say, the
English department, the answer would have been found in the UW zone, which contains the
English department. But the Computer Science department runs its own name server, so the
query instead returns the name and IP address of the UW Computer Science name server (step 7).
In the final stage, the local name server queries the UW Computer Science name server (step 8).
That server has authoritative control over the domain cs.washington.edu, so it can supply the
answer. It returns the final answer (step 9), and the local name server passes it back to
flits.cs.vu.nl (step 10).
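The iterative walk described above (root, then edu, then UW, then CS) can be sketched as a loop over referrals. All the zone data here is made up, including the final address; a real resolver also handles caching, timeouts, and CNAMEs.

```python
# Sketch of iterative DNS resolution: each server either gives an
# authoritative answer (A record) or a referral (NS record) to a server
# closer to the answer. All names and the address below are hypothetical.

ZONES = {
    "root":              [("NS", "edu.", "a.edu-servers.net")],
    "a.edu-servers.net": [("NS", "washington.edu.", "uw-ns")],
    "uw-ns":             [("NS", "cs.washington.edu.", "cs-ns")],
    "cs-ns":             [("A", "robot.cs.washington.edu.", "128.95.1.4")],
}

def resolve(name):
    server = "root"
    while True:
        for rtype, key, value in ZONES[server]:
            if rtype == "A" and key == name:
                return value            # authoritative answer: done
            if rtype == "NS" and name.endswith(key):
                server = value          # referral: ask the next server down
                break
        else:
            return None                 # no server could answer

assert resolve("robot.cs.washington.edu.") == "128.95.1.4"
```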
EMAIL
The email system is made up of two types of subsystems: user agents and message transfer agents.
User agents let people read and send email, and message transfer agents move data from the sender
to the receiver. The user agent gives people a way to connect with the email system through a
graphical user interface. Google Gmail, Microsoft Outlook, and Apple Mail are just a few of the well-
known user interfaces. It lets you write messages and answers to messages, see what messages are
coming in, and organise messages by filing, finding, and getting rid of them. Users' mailboxes are
where they store the emails they get. Mail systems take care of them. It is possible to send, receive,
and respond to messages, as well as change the settings for folders, using a user agent. Most user
agents can handle mailboxes that have more than one place for saved mail. It's also possible for the
user agent to file messages before the user even reads them. A lot of businesses and ISPs have
software that sorts mail into two groups: important and spam. Mail creation is one of the most basic
things that user agents allow. Making messages and replies to messages and sending them is part of
it. In most cases, an editor is built into the user agent, which can also help with addressing. For
instance, when you reply to a message, the email system can extract the sender's address from
the incoming message and automatically insert it into the right place in the reply. User agents also
display the contents of users' mailboxes.
A message consists of an envelope and its contents. The envelope encapsulates the message and
carries everything needed for delivery, such as the destination address, priority, and security level.
The message inside the envelope has two parts: the header and the body. The header carries
control information for the user agents; the body is entirely for the human recipient.
The displayed information in the lines is organised in a certain sequence, namely the From, Subject,
and Received fields. This arrangement allows for the identification of the sender, the subject matter of
the message, and the time of its reception. The symbols next to the message topics may serve as
indicators, such as denoting unread correspondence (represented by an envelope icon), the presence
of attached files (represented by a paperclip icon), and the classification of messages as significant,
among other possibilities. Once a message has been perused, the user has the agency to choose the
subsequent course of action. This phenomenon is referred to as message disposition.
Messages are composed of a basic envelope, which includes header information, followed by the
message content. Every header field is composed of a field name followed by a colon. Typically, the
user agent constructs a message and transfers it to the message transfer agent, which then utilises
certain header information to assemble the physical envelope. The principal header fields related to
message transport are as follows.
The To: field contains the address of the principal recipient. The Cc: field contains the addresses
of any additional, secondary recipients. The Bcc: (Blind carbon copy)
field enables someone to discreetly transmit copies of a message to other recipients, without the
knowledge of the original and secondary receivers. The last two fields, namely From: and Sender:,
indicate the identity of the author and the person responsible for transmitting the message,
respectively. These two entities do not necessarily have to possess identical characteristics. In some
instances, a company executive may compose a message, but the responsibility of transmitting this
message may be delegated to her assistant. In this scenario, the individual holding an executive
position would be designated in the "From:" area, while the assistant would be indicated in the
"Sender:" field. The inclusion of the From: field is mandatory, whereas the omission of the Sender:
field is permissible provided it corresponds to the same information as the From: field.
The message transfer agents (MTAs) operate in the background on mail servers, fulfilling the task of
autonomously facilitating the movement of emails inside the system, from the sender to the receiver,
using the Simple Mail Transfer Protocol (SMTP). Message transfer agents (MTAs) are responsible for
the implementation of mailing lists, which include the distribution of an exact replica of a message to
all individuals included in a designated list of email addresses. Additional sophisticated functions
include the use of carbon copies, blind carbon copies, and alternate receivers in cases when the
principal recipient is presently unavailable.
Mail is transmitted between message transfer agents as messages in a universally agreed format.
This format is extended to carry multimedia material and text in foreign languages by a system
called MIME.
The Multipurpose Internet Mail Extensions (MIME) is a standard that allows for the exchange of
different types of data over the Internet. During the first stages of the ARPANET, the email system
only accommodated the English language. With the emergence of the internet and the increasing
need to transmit various forms of material via electronic mail, the existing technique became
insufficient. The issues included the transmission and reception of messages in languages other than
the user's native language, such as Chinese and Japanese. Additionally, challenges were encountered
when attempting to transmit messages that did not consist of textual content, such as audio
recordings or photos.
The solution was the creation of MIME (Multipurpose Internet Mail Extensions), which is now
widely used for email messages sent across the Internet. The MIME specification defines five
message headers. The first of these, MIME-Version:, tells the user agent that it is dealing with a
MIME message and which version of MIME it uses. A message without a MIME-Version: header is
presumed to be an English plaintext message.
Content-Description: is an ASCII string describing what the message contains. This header is needed so that the recipient can judge whether the message is worth decoding and reading. Content-Id: identifies the content. Content-Transfer-Encoding: tells how the message body is wrapped for transmission over the network. Content-Type: specifies the format or type of the message body.
The image MIME type is used to transport still pictures. Many formats are in widespread use today for storing and transmitting images, both compressed and uncompressed. Several image formats, such as GIF, JPEG, and TIFF, are built into nearly all web browsers.
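The mapping from file name to image MIME type can be sketched with Python's standard mimetypes module:

```python
import mimetypes

# Guess the MIME type of a few image files from their extensions.
types = {name: mimetypes.guess_type(name)[0]
         for name in ["photo.jpeg", "diagram.gif", "scan.tiff"]}
for name, mime in types.items():
    print(f"{name} -> {mime}")
# photo.jpeg -> image/jpeg
# diagram.gif -> image/gif
# scan.tiff -> image/tiff
```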
Message Transfer:
Having looked at user agents and mail messages, we can now examine how message transfer agents relay messages from sender to receiver. Mail is transferred with the Simple Mail Transfer Protocol (SMTP). The simplest way to move messages is to establish a transport connection from the source machine to the destination machine and then transfer the message directly.
On the Internet, email is delivered by having the sending computer establish a Transmission Control Protocol (TCP) connection to port 25 of the destination computer. Listening on this port is a mail server that speaks the Simple Mail Transfer Protocol (SMTP). The server accepts incoming connections and processes messages for delivery.
Once the TCP connection to port 25 has been established, the machine that initiated the connection acts as the client and the machine that accepted it acts as the server. When the server is willing to accept email, the client announces the recipient(s) to whom the message is addressed. If a mailbox exists at the destination, the server gives the client permission to transmit the message. The client then sends the message, and the server acknowledges its receipt. To reach the receiving mail transfer agent, the sending mail transfer agent establishes a TCP connection to port 25 at the IP address of the mail server and relays the message using SMTP. The receiving mail transfer agent then places the incoming mail in the mailbox of the user, say Bob, who can read it at a later time. IMAP and POP are two distinct protocols for retrieving email. IMAP (Internet Message Access Protocol) is the suggested approach for accessing mail from several devices, such as smartphones, laptops, and tablets.
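The SMTP exchange just described (greeting, sender, recipients, message, acknowledgement) can be sketched in Python. The toy server below speaks just enough SMTP for one message; the host name, addresses, and the use of a free local port instead of port 25 are all assumptions for the demonstration:

```python
import smtplib
import socket
import threading

# A toy server speaking a minimal subset of SMTP, for demonstration only.
def toy_smtp_server(srv):
    conn, _ = srv.accept()
    f = conn.makefile("rb")
    conn.sendall(b"220 toy.example ready\r\n")          # server greets the client
    while True:
        cmd = f.readline().strip().upper()
        if cmd.startswith(b"EHLO") or cmd.startswith(b"HELO"):
            conn.sendall(b"250 toy.example\r\n")
        elif cmd.startswith(b"MAIL") or cmd.startswith(b"RCPT"):
            conn.sendall(b"250 OK\r\n")                 # sender/recipient accepted
        elif cmd.startswith(b"DATA"):
            conn.sendall(b"354 End data with <CRLF>.<CRLF>\r\n")
            while f.readline().strip() != b".":         # read body up to the dot
                pass
            conn.sendall(b"250 OK: queued\r\n")         # server confirms receipt
        elif cmd.startswith(b"QUIT"):
            conn.sendall(b"221 Bye\r\n")
            break
    conn.close()

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
threading.Thread(target=toy_smtp_server, args=(srv,), daemon=True).start()

client = smtplib.SMTP("127.0.0.1", srv.getsockname()[1])
refused = client.sendmail("[email protected]", ["[email protected]"],
                          "Subject: Hi\r\n\r\nHello, Bob!\r\n")
client.quit()
print(refused)   # {} -- no recipients were refused
```

The client side here is Python's real smtplib; only the server is simplified, so the command/reply sequence on the wire is the genuine SMTP dialogue.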
The World Wide Web is a global collection of information in the form of web pages. It is an architectural framework for retrieving and navigating interconnected information spread across a vast number of Internet-connected machines. Each page may contain hyperlinks to other pages anywhere in the world. A user follows a hyperlink by clicking on it, which brings up the page the link points to. Web pages are fetched and displayed by a program called a web browser; Firefox, Internet Explorer, and Chrome are examples of popular browsers. The browser retrieves the requested page and presents it on the screen in a properly formatted way. Some parts of a page are hyperlinks to further pages. A hyperlink is a piece of text, an icon, or an image that is associated with another page; to follow it, the user positions the mouse pointer over the linked portion of the page and clicks.
Clicking on a link simply tells the browser to fetch another page. To fetch a page, the browser sends a request to one or more servers, which send back the page's content. As shown in the figure, the browser contacts cs.washington.edu, youtube.com, and google-analytics.com to obtain the two pages; it then combines the information from these different sites and displays the result.
In this scenario the main page is provided by the cs.washington.edu server, an embedded video by the youtube.com server, and nothing visible to the user by the google-analytics.com server, which only tracks site visits. HTTP (HyperText Transfer Protocol) is a simple text-based request-response protocol used to fetch pages. The content may be a static document that looks the same each time, or a dynamic page that may look different each time it is displayed. For instance, every visitor to an electronics store may see a different front page: a customer who has previously bought mystery books is likely to find new thrillers prominently advertised on the store's home page. The Web needed mechanisms for naming and locating pages. Before a chosen page can be displayed, three questions must be answered: 1. What is the page called? 2. Where is the page located? 3. How can the page be accessed? Each page is assigned a Uniform Resource Locator (URL), which serves as the page's worldwide name.
Three components make up a URL: the path that uniquely identifies the particular page, the DNS
name of the computer hosting the page, and the protocol. For instance, the webpage seen in Figure
may be accessed via http://www.cs.washington.edu/index.html. The host's DNS name
(www.cs.washington.edu), the path name (index.html), and the protocol (http) make up this URL.
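These three components can be pulled apart programmatically; a minimal sketch using Python's standard library and the example URL from the text:

```python
from urllib.parse import urlparse

# Split the example URL into its three components.
parts = urlparse("http://www.cs.washington.edu/index.html")
print(parts.scheme)   # http -> the protocol
print(parts.netloc)   # www.cs.washington.edu -> the host's DNS name
print(parts.path)     # /index.html -> the path naming the particular page
```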
Let us walk through the actions that take place when our sample link is clicked. The browser performs a series of steps to fetch the page the link points to. First, it asks DNS for the IP address of the server www.cs.washington.edu. The URL architecture makes it simple for browsers to use several protocols to reach different kinds of resources; indeed, URLs have been defined for many different protocols.
The browser must understand a page's format in order to display it. Web pages are therefore written in a standardised language called HTML so that all browsers can interpret them. Most browsers offer buttons and features to make Web browsing easier, typically including buttons to go back to the previous page and forward to the next one. A page may contain content in any of hundreds of file formats, such as an MPEG movie, a PDF document, a JPEG picture, or MP3 audio. HTML is the standard markup language used to create web pages; it consists of many elements that instruct the browser how to display the content.
4. Transport Layer
The transport layer facilitates the transmission of data between processes on different machines. It provides the required level of reliability regardless of the underlying physical networks in use.
The primary objective of the transport layer is to provide a data transmission service that is efficient,
reliable, and economically viable to its users, often processes located at the application layer. In
order to do this, the transport layer utilises the services offered by the network layer. The
component responsible for executing the necessary tasks inside the transport layer, including both
software and hardware elements, is often referred to as the transport entity. The transport entity
has the potential to reside inside the operating system kernel. The interconnection between the
network, transport, and application layers is shown in the following diagram.
Similar to the existence of two distinct categories of network service, namely connection-oriented
and connectionless, the transport service also encompasses two distinct kinds. The connection-
oriented transport service has some similarities to the connection-oriented network service. In all
scenarios, the process of establishing connections involves three distinct phases: establishment, data
transmission, and release. Addressing and flow control exhibit similarities in both levels.
Additionally, it is worth noting that the connectionless transport service has a striking resemblance
to the connectionless network service.
In order to facilitate user access to the transport service, the transport layer is required to provide a
set of operations to application programmes, which is often referred to as a transport service
interface. Every transport service has its own unique interface. To get insight into the potential
utilisation of these fundamental components, let us contemplate an illustrative scenario including an
application comprising a central server and several distributed clients. Initially, the server initiates
the execution of a LISTEN primitive, often achieved by invoking a library method that triggers a
system call. This system call causes the server to enter a blocked state until a client connection is
established. When a client wants to talk to the server, it executes a CONNECT primitive. The transport entity carries out this primitive by blocking the caller and sending a packet to the server. The client's CONNECT call causes a CONNECTION REQUEST segment to be sent to the server. When it arrives, the transport entity there checks whether the server is blocked on a LISTEN, that is, ready to handle requests. If so, it unblocks the server and sends a CONNECTION ACCEPTED segment back to the client. When this segment arrives, the client is unblocked and the connection is established. Data can now be exchanged using the SEND and RECEIVE primitives. In the simplest form, either party can execute a blocking RECEIVE and wait for the other party to execute a SEND. When the segment arrives, the receiver is unblocked. When a connection is no longer needed, it must be released to free up table space within the two transport entities. Disconnection comes in two variants: asymmetric and symmetric.
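The primitives described above map naturally onto the Berkeley sockets API, their most widespread realisation. A minimal Python sketch (the addresses, port choice, and message contents are arbitrary):

```python
import socket
import threading

# LISTEN: the server announces willingness to accept a connection.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))      # any free port stands in for a well-known address
srv.listen(1)
port = srv.getsockname()[1]

def server():
    conn, _ = srv.accept()          # blocks until a CONNECTION REQUEST arrives
    data = conn.recv(1024)          # RECEIVE: blocks until the peer does a SEND
    conn.sendall(b"got: " + data)   # SEND the reply
    conn.close()                    # release the connection

threading.Thread(target=server, daemon=True).start()

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", port))    # CONNECT: unblocks the listening server
cli.sendall(b"hello")               # SEND
reply = cli.recv(1024)              # RECEIVE
cli.close()
print(reply)                        # b'got: hello'
```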
Addressing:
When an application process (like a user process) wants to connect to a remote application process,
it has to say which one it wants to connect to. Setting up transport addresses that processes can
listen to for connection requests is the usual way to do it. These places are known as ports on the
Internet. The term Transport Service Access Point (TSAP) refers to a specific endpoint address in the transport layer. The analogous endpoints in the network layer (that is, network layer addresses) are, not surprisingly, called NSAPs; an IP address is an example of an NSAP. The following figure shows how NSAPs and TSAPs are related.
This is one possible scenario for a transport connection:
1. A mail server process attaches itself to TSAP 1522 on host 2 and waits for an incoming call. How a process attaches itself to a TSAP is outside the networking model and depends entirely on the local operating system; a call such as our LISTEN might be used.
2. An application process on host 1 wants to send an email message, so it attaches itself to TSAP 1208 and issues a CONNECT request, naming TSAP 1208 on host 1 as the source and TSAP 1522 on host 2 as the destination. This action results in a transport connection between the application process and the server.
3. The application process sends the mail message over the connection.
4. The mail server responds to say that it will deliver the message.
The packet that UDP sends is called a user datagram. The source port address is the address of the process that sent the message, and the destination port address identifies the process that is to receive it. The total length field tells how many bytes the whole user datagram occupies. UDP can detect an error, and ICMP can then inform the source that a user datagram has been damaged and discarded. It is worth listing the things UDP does not do: it does not do flow control, error correction, or retransmission upon receipt of a bad segment. What it does provide is an interface to the IP protocol, with the added feature of demultiplexing multiple processes using the ports. Client-server situations are one area where UDP is especially useful: a client often sends a short request to the server and expects a short reply back.
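The client-server pattern just described can be sketched with UDP sockets; the request and reply strings are made up, and both ends run in one process for simplicity:

```python
import socket

# Both ends of a UDP exchange in one process; no connection is ever set up.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))            # the server's port (any free one here)
port = server.getsockname()[1]

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"short request", ("127.0.0.1", port))   # one self-contained datagram

request, client_addr = server.recvfrom(1024)           # server reads the request...
server.sendto(b"short reply", client_addr)             # ...and answers the sender

reply, _ = client.recvfrom(1024)
print(reply)   # b'short reply'
```

Notice that neither side calls connect, listen, or accept: each datagram carries its own addressing, which is exactly why UDP offers no flow control or retransmission.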
Sending a message to a remote host and getting a reply back is in many ways like calling a function in a programming language. When a process on machine 1 calls a procedure on machine 2, the calling process on machine 1 is suspended and execution of the called procedure takes place on machine 2. Information can travel from the caller to the callee in the parameters and can come back in the procedure result. No message passing is visible to the programmer. This technique is known as RPC (Remote Procedure Call) and is now the basis for a large number of networking applications.
The goal of remote procedure calls (RPCs) is to make a remote procedure call look as much as possible like a local one. In the simplest form, to call a remote procedure the client program must be bound to a small library procedure, called the client stub, that runs in the client's address space and represents the server procedure. Similarly, the server is bound to a procedure called the server stub. The figure illustrates the actual steps in making an RPC.
• Step 1 is the client calling the client stub. This is a local procedure call, with the parameters pushed onto the stack in the standard way.
• Step 2 is the client stub packing the parameters into a message and making a system call to send the message. Packing the parameters is called marshalling.
• Step 3 is the kernel sending the message from the client machine to the server machine.
• Step 4 is the kernel passing the incoming packet to the server stub.
• Finally, step 5 is the server stub calling the server procedure with the unmarshalled parameters. The reply traces the same path in the other direction.
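The five steps above can be sketched in miniature. The JSON encoding and the in-process "network" function below are illustrative assumptions; real RPC systems use their own wire formats, and the message actually crosses two kernels and a network:

```python
import json

PROCEDURES = {}          # the procedures the server makes available

def add(a, b):           # the remote procedure itself
    return a + b
PROCEDURES["add"] = add

# Step 2: the client stub marshals the procedure name and arguments.
def client_stub(proc, *args):
    request = json.dumps({"proc": proc, "args": list(args)}).encode()
    reply = network(request)                 # steps 3-4: the bytes cross the "wire"
    return json.loads(reply.decode())["result"]

# Step 5: the server stub unmarshals the arguments and calls the procedure.
def server_stub(request):
    call = json.loads(request.decode())
    result = PROCEDURES[call["proc"]](*call["args"])
    return json.dumps({"result": result}).encode()

def network(message):    # stands in for the two kernels and the network
    return server_stub(message)

# Step 1: the caller sees what looks like an ordinary local call.
result = client_stub("add", 2, 3)
print(result)   # 5
```

The caller never touches a message: only marshalled bytes pass between the stubs, which is the essence of RPC transparency.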
UDP is extensively used in two areas: client-server RPC and real-time multimedia applications such as Internet radio, Internet telephony, music-on-demand, and videoconferencing. Over time it became evident that a general real-time transport protocol serving many applications would be beneficial, and so RTP (Real-time Transport Protocol) was created.
• As a consequence of this design, it is a little hard to say which layer RTP is in.
• The fundamental purpose of RTP is to multiplex several real-time data streams onto a single stream of UDP packets.
• The UDP stream can be sent to a single destination (unicasting) or to multiple destinations (multicasting).
• Because RTP just uses normal UDP, routers do not treat its packets specially unless some standard IP quality-of-service features are enabled.
• Each packet sent in an RTP stream is given a number one higher than its predecessor; this numbering lets the destination determine whether any packets are missing.
• If a packet is missing, the best course of action for the destination is to approximate the missing value by interpolation.
• Retransmission is not a practical option, since the retransmitted packet would probably arrive too late to be of any use. As a consequence, RTP has no acknowledgements, no flow control, no error control, and no mechanism to request retransmissions.
The RTP header consists of three 32-bit words, along with possibly some extensions.
• The P bit indicates that the packet has been padded to a multiple of 4 bytes.
• The X bit indicates that an extension header is present. There is no standardised format or semantics for the extension header; the only thing defined is that the first word of the extension gives its length.
• The CC field tells how many contributing sources are present, from 0 to 15.
• The M bit is an application-specific marker bit. It can be used to mark the start of a video frame, the start of a word in an audio channel, or anything else the application finds meaningful.
• The Payload type field tells which encoding technique has been used (e.g., uncompressed 8-bit audio, MP3).
• The Sequence number is just a counter that is incremented on each RTP packet sent. It is used to detect lost packets.
• The Timestamp is produced by the stream's source to note when the first sample in the packet was made. This value can help reduce jitter.
• The Synchronisation source identifier tells which stream the packet belongs to. It is the technique used to multiplex and demultiplex multiple data streams onto a single stream of UDP packets.
• RTP has a sister protocol called RTCP (Real-time Transport Control Protocol). It is defined along with RTP in RFC 3550 and handles feedback, synchronisation, and the user interface.
• The first function, feedback, can report delay, variation in delay (jitter), bandwidth, congestion, and other network properties to the sources.
• The encoding process can use this information to increase the data rate (and give better quality) when the network is functioning well, and to cut the data rate back when there is trouble in the network.
• By receiving continuous feedback, the encoding algorithms can be continuously adapted to provide the best quality possible under the current circumstances.
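The fixed part of the RTP header described above is 12 bytes (three 32-bit words) and can be packed directly. The payload type, sequence number, timestamp, and SSRC values below are arbitrary examples:

```python
import struct

# Pack the fixed 12-byte RTP header (version 2, as defined in RFC 3550).
def rtp_header(pt, seq, timestamp, ssrc, marker=0, cc=0, p=0, x=0):
    byte0 = (2 << 6) | (p << 5) | (x << 4) | cc   # version, P, X, CC
    byte1 = (marker << 7) | pt                    # M bit and payload type
    return struct.pack("!BBHII", byte0, byte1, seq, timestamp, ssrc)

hdr = rtp_header(pt=14, seq=7, timestamp=160, ssrc=0x1234)
print(len(hdr))    # 12 -- three 32-bit words
print(hdr.hex())   # 800e0007000000a000001234
```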
TCP Segment Header:
The source port address identifies the application programme on the source machine, and the destination port address identifies the application programme on the destination machine. A data stream from the application programme may be divided into several TCP segments; the sequence number field gives the position of the segment's data in the original data stream. The acknowledgement number is valid only when the ACK bit in the control field is set (as discussed below); it then specifies the next byte sequence number that is expected. The four-bit HLEN field gives the length of the TCP header in 32-bit (four-byte) words. Four bits can represent values up to 15, and multiplying by 4 gives the header length in bytes, so the maximum header size is 60 bytes (4 x 15). Since the header must be at least 20 bytes long, up to 40 bytes are available for the options section.
• Reserved. A six-bit field reserved for future use.
• CWR and ECE are used to signal congestion when ECN (Explicit Congestion Notification) is employed. ECE is set to signal an ECN-Echo to a TCP sender, telling it to slow down, when the TCP receiver gets a congestion indication from the network.
• CWR is set to signal Congestion Window Reduced from the sender to the receiver, so that the receiver knows the sender has slowed down and can stop sending the ECN-Echo.
• URG is set to 1 if the Urgent pointer is in use. The Urgent pointer indicates a byte offset from the current sequence number at which urgent data are to be found. This facility takes the place of interrupt messages.
• The ACK bit is set to 1 to indicate that the Acknowledgement number is valid.
• If ACK is 0, the segment does not contain an acknowledgement, and the Acknowledgement number field is ignored.
• The PSH bit indicates PUSHed data. The receiver is hereby requested to deliver the data to the application upon arrival rather than buffering it until a full buffer has been received (as it might otherwise do for efficiency).
• The RST bit is used to reset a connection that has become confused due to a host crash or some other problem. It is also used to reject an invalid segment or an attempt to open a connection.
• A connection request has SYN = 1 and ACK = 0, indicating that the piggyback acknowledgement field is not in use.
• The connection reply does carry an acknowledgement, so it has SYN = 1 and ACK = 1.
• In essence, the SYN bit denotes both CONNECTION REQUEST and CONNECTION ACCEPTED, with the ACK bit used to distinguish between the two.
• The FIN bit is used to release a connection; it indicates that the sender has no more data to transmit.
• Sequence numbers ensure that SYN and FIN segments are processed in the correct order.
• Window size. The window is a 16-bit field that defines the size of the sliding window.
• Checksum. The checksum is a 16-bit field used for error detection.
• Urgent pointer. This is the last required field in the header. Its value is valid only when the URG bit in the control field is set. In that case the sender is alerting the receiver to urgent material in the segment's data section, and this pointer marks where the urgent data end and normal data begin.
• Options and padding. The remainder of the TCP header defines optional fields, used to convey extra information to the receiver or for alignment.
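The fixed 20-byte header layout described above can be parsed directly. The port numbers and sequence values in the sample segment are arbitrary examples:

```python
import struct

# Unpack the fixed 20-byte TCP header: ports, sequence and acknowledgement
# numbers, HLEN plus the six classic flag bits, window, checksum, urgent pointer.
def parse_tcp_header(data):
    src, dst, seq, ack, off_flags, window, checksum, urgent = \
        struct.unpack("!HHIIHHHH", data[:20])
    return {
        "src_port": src, "dst_port": dst, "seq": seq, "ack": ack,
        "header_bytes": ((off_flags >> 12) & 0xF) * 4,   # HLEN words x 4
        "URG": bool(off_flags & 0x20), "ACK": bool(off_flags & 0x10),
        "PSH": bool(off_flags & 0x08), "RST": bool(off_flags & 0x04),
        "SYN": bool(off_flags & 0x02), "FIN": bool(off_flags & 0x01),
        "window": window,
    }

# A SYN segment from port 3000 to port 25: HLEN = 5 words (20 bytes), SYN set.
syn = struct.pack("!HHIIHHHH", 3000, 25, 100, 0, (5 << 12) | 0x02, 65535, 0, 0)
fields = parse_tcp_header(syn)
print(fields)
```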
• To establish a connection, one side, say the server, passively waits for an incoming connection by executing the LISTEN and ACCEPT primitives. It can either specify a particular source or accept connections from anybody.
• The other side, say the client, executes a CONNECT primitive, specifying the IP address and port to which it wants to connect, the maximum TCP segment size it is willing to accept, and optionally some user data (a password, for example). The CONNECT primitive sends a TCP segment with the SYN bit on and the ACK bit off and waits for a response.
• When this segment arrives at the destination, the TCP entity there checks whether a process has done a LISTEN on the port given in the Destination port field.
• If not, it rejects the connection by sending a reply with the RST bit set.
• To release a connection, either side can send a TCP segment with the FIN bit set, indicating that it has no more data to transmit. Data may continue to flow indefinitely in the other direction, however.
• The connection is released once both directions have been shut down.
• If the sender of a FIN does not receive a response within two maximum packet lifetimes, it releases the connection. The other side will eventually notice that nobody seems to be listening to it any more and will time out as well.