
Email Protocols: IMAP, POP3, SMTP and HTTP

Basically, a protocol is a standard method used at each end of a communication channel so that
information is transmitted properly. To deal with your email you must use a mail client to access a mail
server. The mail client and mail server can exchange information with each other using a variety of
protocols.

 IMAP Protocol:

IMAP (Internet Message Access Protocol) is a standard protocol for accessing e-mail on your mail
server. IMAP is a client/server protocol in which e-mail is received and held for you by your mail
server. Because this requires only a small data transfer, it works well even over a slow connection such as a
modem. A specific e-mail message is downloaded from the server only when you request to read it. You
can also create and manipulate folders or mailboxes on the server, delete messages, and so on.

 see also IMAP.org
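For illustration, here is a minimal sketch of IMAP access using Python's standard imaplib module; the server name and credentials are placeholders. Note how message bodies are fetched individually on demand, which is what keeps the data transfer small.

import imaplib

# Placeholder host and credentials; substitute your provider's values.
with imaplib.IMAP4_SSL("imap.example.com") as conn:
    conn.login("user@example.com", "app-password")
    conn.select("INBOX")                          # open a mailbox on the server
    status, data = conn.search(None, "UNSEEN")    # only message IDs so far
    for num in data[0].split():
        # The full message is downloaded only when explicitly fetched,
        # which is why IMAP works well even over slow links.
        status, msg_data = conn.fetch(num, "(RFC822)")
        print(msg_data[0][1][:200])               # first 200 bytes of the raw message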

 POP3 Protocol:

POP3 (Post Office Protocol 3) provides a simple, standardized way for users to access
mailboxes and download messages to their computers.

When using the POP protocol, all your e-mail messages are downloaded from the mail server to your
local computer. You can choose to leave copies of your e-mails on the server as well. The advantage is
that once your messages are downloaded, you can close the Internet connection and read your e-mail at
your leisure without incurring further communication costs. On the other hand, you might have transferred
a lot of messages (including spam or viruses) that you are not at all interested in at this point.

 see also POP3 Description (RFC)
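A minimal sketch of POP3 access with Python's standard poplib module, with placeholder server and credentials; in contrast to IMAP, every message is downloaded in full to the client.

import poplib

# Placeholder host and credentials; substitute your provider's values.
conn = poplib.POP3_SSL("pop.example.com")
conn.user("user@example.com")
conn.pass_("app-password")

num_messages = len(conn.list()[1])   # LIST returns one line per message
for i in range(1, num_messages + 1):
    # retr() downloads the whole message; afterwards you can read it
    # offline, at your leisure, with no further connection needed.
    response, lines, octets = conn.retr(i)
    print(b"\n".join(lines)[:200])
conn.quit()                          # the session (and any deletions) is finalized here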

 SMTP Protocol:

SMTP (Simple Mail Transfer Protocol) is used by the Mail Transfer Agent (MTA) to deliver
your e-mail to the recipient's mail server. SMTP can only be used to send e-mails, not to
receive them. Depending on your network/ISP settings, you may only be able to use the SMTP protocol
under certain conditions (see incoming and outgoing mail servers).

 see also SMTP RFC
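A minimal sketch of sending a message over SMTP with Python's standard smtplib, using placeholder host, port and credentials; note that SMTP only sends, while fetching mail needs POP3 or IMAP.

import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "user@example.com"
msg["To"] = "friend@example.org"
msg["Subject"] = "Hello over SMTP"
msg.set_content("SMTP carries the message to the recipient's mail server.")

# Placeholder host and credentials; port 587 with STARTTLS is a common setup.
with smtplib.SMTP("smtp.example.com", 587) as server:
    server.starttls()                 # many providers require TLS on port 587
    server.login("user@example.com", "app-password")
    server.send_message(msg)          # hands the message to the MTA for delivery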

 HTTP Protocol:

HTTP is not a protocol dedicated to email communication, but it can be used for
accessing your mailbox. Also called web-based email, this approach can be used to compose or retrieve
e-mails from your account. Hotmail is a good example of HTTP used as an email protocol.
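Underneath, webmail is ordinary HTTP(S) traffic. A minimal sketch with Python's standard urllib module, where the URL is a placeholder and a real webmail service would additionally require login cookies or tokens:

from urllib.request import urlopen

# Placeholder webmail URL; a real service would require authentication.
with urlopen("https://mail.example.com/inbox") as response:
    print(response.status)         # 200 if the mailbox page was served
    print(response.read()[:200])   # the inbox arrives as an ordinary HTML page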
Network congestion

In data networking and queueing theory, network congestion occurs when a link or node is
carrying so much data that its quality of service deteriorates. Typical effects include queueing
delay, packet loss or the blocking of new connections. A consequence of these latter two is that
incremental increases in offered load lead either only to small increases in network throughput,
or to an actual reduction in network throughput.

Network protocols which use aggressive retransmissions to compensate for packet loss tend to
keep systems in a state of network congestion even after the initial load has been reduced to a
level which would not normally have induced network congestion. Thus, networks using these
protocols can exhibit two stable states under the same level of load. The stable state with low
throughput is known as congestive collapse.

Modern networks use congestion control and network congestion avoidance techniques to try to
avoid congestion collapse. These include: exponential backoff in protocols such
as 802.11's CSMA/CA and the original Ethernet, window reduction in TCP, and fair queueing in
devices such as routers. Another method to avoid the negative effects of network congestion is
implementing priority schemes, so that some packets are transmitted with higher priority than
others. Priority schemes do not solve network congestion by themselves, but they help to
alleviate the effects of congestion for some services. An example of this is 802.1p. A third
method to avoid network congestion is the explicit allocation of network resources to specific
flows. One example of this is the use of Contention-Free Transmission Opportunities
(CFTXOPs) in the ITU-T G.hn standard, which provides high-speed (up to 1 Gbit/s) local area
networking over existing home wires (power lines, phone lines and coaxial cables).
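As an illustration of the first of these techniques, here is a minimal Python sketch of binary exponential backoff as used in CSMA/CA and classic Ethernet: after each failed attempt the sender waits a random number of slots drawn from a window that doubles each time. The function name, attempt limit and slot time are illustrative assumptions, not from any real MAC layer.

import random
import time

def send_with_backoff(send, max_attempts=8, slot=0.001):
    # send is a callable returning True on success, False on collision.
    for attempt in range(max_attempts):
        if send():
            return True
        window = 2 ** attempt                          # window doubles per failure
        time.sleep(random.randint(0, window - 1) * slot)
    return False                                       # give up after max_attempts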

RFC 2914 addresses the subject of congestion control in detail.

Contents

1 Network capacity
2 Congestive collapse
   2.1 History
   2.2 Cause
3 Congestion control
   3.1 Theory of congestion control
   3.2 Classification of congestion control algorithms
4 Avoidance
   4.1 Practical network congestion avoidance
      4.1.1 TCP/IP congestion avoidance
      4.1.2 Active Queue Management (AQM)
         4.1.2.1 Purpose
         4.1.2.2 Random early detection
         4.1.2.3 Flow-based RED/WRED
         4.1.2.4 IP ECN
         4.1.2.5 Cisco AQM: Dynamic buffer limiting (DBL)
         4.1.2.6 TCP Window Shaping
   4.2 Side effects of congestive collapse avoidance
      4.2.1 Radio links
      4.2.2 Short-lived connections

Network capacity
The fundamental problem is that all network resources are limited, including router processing
time and link throughput.

However:

 Today's (2006) wireless LAN effective bandwidth throughput (15-100 Mbit/s) is easily
filled by a single personal computer.
 Even on fast computer networks (e.g. 1 Gbit/s), the backbone can easily be congested by
a few servers and client PCs.
 Because P2P scales very well, file transmissions by P2P have no problem filling an
uplink or some other network bottleneck, particularly when nearby peers are preferred
over distant peers.
 Denial-of-service attacks by botnets are capable of filling even the largest Internet
backbone network links (40 Gbit/s as of 2007), generating large-scale network congestion.
Congestive collapse
Congestive collapse (or congestion collapse) is a condition that a packet-switched
computer network can reach when little or no useful communication is happening due
to congestion. Congestion collapse generally occurs at choke points in the network, where the
total incoming bandwidth to a node exceeds the outgoing bandwidth. Connection points
between a local area network and a wide area network are the most likely choke points. A DSL
modem is the most common small-network example, with between 10 and 1000 Mbit/s of
incoming bandwidth and at most 8 Mbit/s of outgoing bandwidth.

When a network is in such a condition, it has settled (under overload) into a stable state where
traffic demand is high but little useful throughput is available, there are high levels
of packet delay and loss (caused by routers discarding packets because their output queues are
too full), and general quality of service is extremely poor.

History

Congestion collapse was identified as a possible problem as far back as 1984 (RFC 896, dated
6 January). It was first observed on the early Internet in October 1986, when the NSFnet phase-I
backbone dropped three orders of magnitude from its capacity of 32 kbit/s to 40 bit/s, and this
continued to occur until end nodes started implementing Van Jacobson's congestion
control between 1987 and 1988.

Cause

When more packets were sent than could be handled by intermediate routers, the intermediate
routers discarded many packets, expecting the end points of the network to retransmit the
information. However, early TCP implementations had very bad retransmission behavior. When
this packet loss occurred, the end points sent extra packets that repeated the information lost,
doubling the data rate sent: exactly the opposite of what should be done during congestion. This
pushed the entire network into a 'congestion collapse' where most packets were lost and the
resultant throughput was negligible.

Congestion control
Congestion control concerns controlling traffic entry into a telecommunications network so as
to avoid congestive collapse, by attempting to avoid oversubscription of any of the processing
or link capabilities of the intermediate nodes and networks and by taking resource-reducing steps,
such as reducing the rate of sending packets. It should not be confused with flow control, which
prevents the sender from overwhelming the receiver.
Theory of congestion control
The modern theory of congestion control was pioneered by Frank Kelly, who
applied microeconomic theory and convex optimization theory to describe how individuals
controlling their own rates can interact to achieve an "optimal" network-wide rate allocation.

Examples of "optimal" rate allocation are max-min fair allocation and Kelly's suggestion
of proportionally fair allocation, although many others are possible.

The mathematical expression for optimal rate allocation is as follows. Let $x_i$ be the rate of
flow $i$, $C_l$ be the capacity of link $l$, and $r_{li}$ be 1 if flow $i$ uses link $l$ and 0 otherwise.
Let $x$, $c$ and $R$ be the corresponding vectors and matrix. Let $U(x)$ be an increasing,
strictly concave function, called the utility, which measures how much benefit a user obtains by
transmitting at rate $x$. The optimal rate allocation then satisfies

$$\max_x \sum_i U(x_i) \quad \text{such that} \quad Rx \le c.$$

The Lagrange dual of this problem decouples, so that each flow sets its own rate,
based only on a "price" signalled by the network. Each link capacity imposes a
constraint, which gives rise to a Lagrange multiplier $p_l$. The sum of these Lagrange
multipliers,

$$y_i = \sum_l p_l r_{li},$$

is the price to which the flow responds.

Congestion control then becomes a distributed optimisation algorithm for solving the
above problem. Many current congestion control algorithms can be modelled in this
framework, with $p_l$ being either the loss probability or the queueing delay at link $l$.
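As a concrete illustration, here is a minimal Python sketch of this dual decomposition, assuming logarithmic utilities U(x) = log x (Kelly's proportionally fair case); the routing matrix, capacities and step size are made-up examples. Each flow's selfish best response to the price y_i is x_i = 1/y_i, and each link raises or lowers its price according to how demand compares with its capacity.

import numpy as np

R = np.array([[1, 1, 0],        # routing matrix: R[l, i] = 1 if flow i uses link l
              [0, 1, 1]])
c = np.array([1.0, 2.0])        # link capacities
p = np.ones(2)                  # link prices (Lagrange multipliers)
step = 0.05

for _ in range(2000):
    y = R.T @ p                           # price seen by flow i: y_i = sum_l p_l r_li
    x = 1.0 / np.maximum(y, 1e-6)         # best response for U = log (guard against y = 0)
    p = np.maximum(0.0, p + step * (R @ x - c))   # raise price on congested links

print(np.round(x, 3))           # converges to the proportionally fair allocation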

A major weakness of this model is that it assumes all flows observe the same price,
while sliding window flow control causes "burstiness", which in turn causes different flows to
observe different loss or delay at a given link.

Classification of congestion control algorithms

Main article: Taxonomy of congestion control

There are many ways to classify congestion control algorithms:


 By the type and amount of feedback received from the network: Loss; delay;
single-bit or multi-bit explicit signals
 By incremental deployability on the current Internet: Only sender needs
modification; sender and receiver need modification; only router needs
modification; sender, receiver and routers need modification.
 By the aspect of performance it aims to improve: high bandwidth-delay product
networks; lossy links; fairness; advantage to short flows; variable-rate links
 By the fairness criterion it uses: max-min, proportional, "minimum potential delay"
Avoidance

The prevention of network congestion and collapse requires two major components:

1. A mechanism in routers to reorder or drop packets under overload, and
2. End-to-end flow control mechanisms designed into the end points, which
respond to congestion and behave appropriately.

The correct end-point behaviour is usually still to repeat dropped information, but to
progressively slow the rate at which information is repeated. Provided all end points do
this, the congestion lifts, the network is used well, and the end points all
get a fair share of the available bandwidth. Other strategies such as slow-start ensure
that new connections don't overwhelm the router before congestion detection can
kick in.
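A minimal Python sketch of this end-point behaviour, assuming a maximum segment size (MSS) of 1 unit: slow start grows the congestion window exponentially (one MSS per ACK), congestion avoidance grows it linearly (roughly one MSS per round trip), and a congestion signal halves it. The function names are illustrative, not from any real TCP stack.

def on_ack(cwnd, ssthresh, mss=1):
    if cwnd < ssthresh:
        return cwnd + mss             # slow start: +1 MSS per ACK (exponential per RTT)
    return cwnd + mss * mss / cwnd    # congestion avoidance: roughly +1 MSS per RTT

def on_loss(cwnd, mss=1):
    ssthresh = max(2 * mss, cwnd / 2)   # remember half the window as the new threshold
    return mss, ssthresh                # restart from one segment (slow start again)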

The most common router mechanisms used to prevent congestive collapse are fair
queueing and other scheduling algorithms, and random early detection (RED),
where packets are randomly dropped proactively, triggering the end points to slow
transmission before congestion collapse actually occurs. Fair queueing is most useful
in routers at choke points with a small number of connections passing through them.
Larger routers must rely on RED.

Some end-to-end protocols are better behaved under congested conditions than
others. TCP is perhaps the best behaved. The first TCP implementations to handle
congestion well were developed in 1984[citation needed], but it was not until Van Jacobson's
inclusion of an open source solution in Berkeley UNIX ("BSD") in 1988 that good TCP
implementations became widespread.

UDP does not, in itself, have any congestion control mechanism. Protocols built atop
UDP must handle congestion in their own way. Protocols atop UDP which transmit at
a fixed rate, independent of congestion, can be troublesome. Real-time streaming
protocols, including many Voice over IP protocols, have this property. Thus, special
measures, such as quality-of-service routing, must be taken to keep packets from
being dropped from streams.

In general, congestion in pure datagram networks must be kept out at the periphery of
the network, where the mechanisms described above can handle it. Congestion in
the Internet backbone is very difficult to deal with. Fortunately, cheap fiber-optic lines
have reduced costs in the Internet backbone. The backbone can thus be provisioned
with enough bandwidth to keep congestion at the periphery.[citation needed]

Practical network congestion avoidance

Implementations of connection-oriented protocols, such as the widely
used TCP protocol, generally watch for packet errors, losses, or delays (see Quality of
Service) in order to adjust the transmit speed. There are many different network
congestion avoidance processes, since there are a number of different trade-offs
available.[1]

TCP/IP congestion avoidance

Main article: TCP congestion avoidance algorithm

The TCP congestion avoidance algorithm is the primary basis for congestion control in
the Internet.[2][3][4][5][6]

Problems occur when many concurrent TCP flows experience port queue
buffer tail-drops. Then TCP's automatic congestion avoidance is not enough. All flows
that experience port queue buffer tail-drop back off and retransmit at the same
moment; this is called TCP global synchronization.

Active Queue Management (AQM)

Main article: Active Queue Management

Purpose

"Recommendations on Queue Management and Congestion Avoidance in the
Internet" (RFC 2309[7]) states that:

 Fewer packets will be dropped with Active Queue Management (AQM).
 The link utilization will increase because less TCP global synchronization will
occur.
 By keeping the average queue size small, queue management will reduce the
delays and jitter seen by flows.
 The connection bandwidth will be more equally shared among connection-oriented
flows, even without flow-based RED or WRED.
Random early detection
Main article: Random early detection

Main article: Weighted random early detection

One solution is to use random early detection (RED) on the network equipment's port
queue buffer.[8][9] On network equipment ports with more than one queue
buffer, weighted random early detection (WRED) can be used if available.

RED indirectly signals to sender and receiver by dropping some packets, e.g. when the
average queue buffer length exceeds a lower threshold (e.g. 50% filled), and drops linearly
(or, better according to the paper, cubically) more packets[10] as the fill level rises toward
an upper threshold (e.g. 100%). The average queue buffer lengths are computed over one
second at a time.
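The drop decision can be sketched in a few lines of Python; the 50% and 100% thresholds come from the text above, while the maximum drop probability and the linear (rather than cubic) ramp are illustrative simplifications.

import random

def red_should_drop(avg_queue, capacity, lower=0.5, upper=1.0, max_p=0.1):
    fill = avg_queue / capacity
    if fill < lower:
        return False                     # queue healthy: keep the packet
    if fill >= upper:
        return True                      # queue full: always drop
    # Drop probability climbs from 0 to max_p between the two thresholds.
    p = max_p * (fill - lower) / (upper - lower)
    return random.random() < p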

Flow-based RED/WRED

Some network equipment is equipped with ports that can follow and measure each
flow (flow-based RED/WRED) and is thereby able to act against a flow using too much
bandwidth, according to some QoS policy. A policy could divide the bandwidth among all
flows by some criteria.

IP ECN
Main article: Explicit Congestion Notification

Another approach is to use IP ECN.[11] ECN is used only when the two hosts signal
that they want to use it. With this method, an ECN bit is used to signal that there is
explicit congestion. This is better than the indirect packet-drop congestion
notification performed by the RED/WRED algorithms, but it requires explicit support by
both hosts to be effective.[12] Some outdated or buggy network equipment drops
packets with the ECN bit set, rather than ignoring the bit. More information on the
status of ECN, including the version of Cisco IOS required, is provided by Sally
Floyd[8], one of the authors of ECN.

When a router receives a packet marked as ECN-capable and anticipates congestion
(using RED), it sets an ECN flag notifying the sender of congestion. The
sender then ought to decrease its transmission bandwidth, e.g. by decreasing the TCP
window size (sending rate) or by other means.
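A minimal Python sketch of that marking step, assuming the standard two-bit ECN field in the low bits of the IP TOS/traffic-class byte (codepoints per RFC 3168); the function is illustrative, not router firmware.

# ECN codepoints in the low two bits of the TOS byte (RFC 3168).
NOT_ECT, ECT_1, ECT_0, CE = 0b00, 0b01, 0b10, 0b11

def mark_if_congested(tos, congested):
    ecn = tos & 0b11
    if congested and ecn in (ECT_0, ECT_1):
        return (tos & ~0b11) | CE    # remark as "Congestion Experienced" instead of dropping
    return tos                       # non-ECN-capable packets would be dropped by RED instead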

Cisco AQM: Dynamic buffer limiting (DBL)

Cisco has taken this a step further in their Catalyst 4000 series with engines IV and V.
Engines IV and V can classify all flows as "aggressive" (bad) or
"adaptive" (good). This ensures that no flows fill the port queues for a long time. DBL can
utilize IP ECN instead of packet-drop signalling.[13][14]

TCP Window Shaping

Congestion avoidance can also be achieved efficiently by reducing the amount of
traffic flowing into your network. When an application requests a large file, graphic or
web page, it usually advertises a "window" of between 32K and 64K. This results in
the server sending a full window of data (assuming the file is larger than the window).
When many applications simultaneously request downloads, this data
creates a congestion point at your upstream provider by flooding the queue much
faster than it can be emptied. By using a device to reduce the window advertisement,
the remote servers will send less data, thus reducing the congestion and allowing
traffic to flow more freely. This technique can reportedly reduce congestion in a network
by a factor of 40.

Side effects of congestive collapse avoidance

Radio links

The protocols that avoid congestive collapse are often based on the idea that data
loss on the Internet is caused by congestion. This is true in nearly all cases; errors
during transmission are rare on today's fiber-based Internet. However, this
causes WiFi, 3G or other networks with a radio layer to have poor throughput in some
cases, since wireless networks are susceptible to data loss due to interference. The
TCP connections running over a radio-based physical layer see the data loss, tend
to believe that congestion is occurring when it is not, and erroneously reduce the data
rate sent.

Short-lived connections

The slow-start protocol performs badly for short-lived connections. Older web
browsers would create many consecutive short-lived connections to the web server,
and would open and close the connection for each file requested. This kept most
connections in the slow-start mode, which resulted in poor response time.
To avoid this problem, modern browsers either open multiple connections
simultaneously or reuse one connection for all files requested from a particular web
server. However, the initial performance can be poor, and many connections never
get out of the slow-start regime, significantly increasing latency.

Bandwidth (computing)

In computer networking and computer science, bandwidth[1], network bandwidth[2], data
bandwidth[3] or digital bandwidth[4][5] is a bit-rate measure of available or consumed data
communication resources, expressed in bits/second or multiples of it (kilobit/s, megabit/s, etc.).

Note that in textbooks on data transmission, digital communications, wireless
communications, electronics, etc., bandwidth refers to analog signal bandwidth measured
in hertz, the original meaning of the term. Some computer networking authors prefer less
ambiguous terms such as bit rate, channel capacity and throughput rather than bandwidth in
bit/s, to avoid this confusion.

A router forwards digital information that is contained within a data packet. Each
data packet contains address information that a router can use to determine whether the source and
destination are on the same network, or whether the data packet must be transferred from one network
type to another. The transfer to another type of network is achieved by encapsulating the data
with network-specific protocol header information. When multiple routers are used in a large
collection of interconnected networks, the routers exchange information about target system
addresses, so that each router can build up a table showing the preferred paths between any
two systems on the interconnected networks.
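A minimal sketch of that table lookup in Python, using longest-prefix matching on the destination address; the routes and interface names are placeholders.

import ipaddress

# Placeholder forwarding table: prefix -> outgoing interface.
routes = {
    ipaddress.ip_network("10.0.0.0/8"): "eth1",
    ipaddress.ip_network("10.1.0.0/16"): "eth2",
    ipaddress.ip_network("0.0.0.0/0"): "uplink",   # default route
}

def next_hop(dst):
    addr = ipaddress.ip_address(dst)
    matches = [net for net in routes if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)   # longest (most specific) prefix wins
    return routes[best]

print(next_hop("10.1.2.3"))   # -> eth2, more specific than 10.0.0.0/8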
