A Mathematical Theory of Communication
INTRODUCTION
THE recent development of various methods of modulation such as PCM and PPM which exchange
bandwidth for signal-to-noise ratio has intensified the interest in a general theory of communication. A
basis for such a theory is contained in the important papers of Nyquist [1] and Hartley [2] on this subject. In the
present paper we will extend the theory to include a number of new factors, in particular the effect of noise
in the channel, and the savings possible due to the statistical structure of the original message and due to the
nature of the final destination of the information.
The fundamental problem of communication is that of reproducing at one point either exactly or ap-
proximately a message selected at another point. Frequently the messages have meaning; that is they refer
to or are correlated according to some system with certain physical or conceptual entities. These semantic
aspects of communication are irrelevant to the engineering problem. The significant aspect is that the actual
message is one selected from a set of possible messages. The system must be designed to operate for each
possible selection, not just the one which will actually be chosen since this is unknown at the time of design.
If the number of messages in the set is finite then this number or any monotonic function of this number
can be regarded as a measure of the information produced when one message is chosen from the set, all
choices being equally likely. As was pointed out by Hartley the most natural choice is the logarithmic
function. Although this definition must be generalized considerably when we consider the influence of the
statistics of the message and when we have a continuous range of messages, we will in all cases use an
essentially logarithmic measure.
The logarithmic measure is more convenient for various reasons:
1. It is practically more useful. Parameters of engineering importance such as time, bandwidth, number
of relays, etc., tend to vary linearly with the logarithm of the number of possibilities. For example,
adding one relay to a group doubles the number of possible states of the relays. It adds 1 to the base 2
logarithm of this number. Doubling the time roughly squares the number of possible messages, or
doubles the logarithm, etc. (These scalings are illustrated in the short sketch following this list.)
2. It is nearer to our intuitive feeling as to the proper measure. This is closely related to (1) since we in-
tuitively measure entities by linear comparison with common standards. One feels, for example, that
two punched cards should have twice the capacity of one for information storage, and two identical
channels twice the capacity of one for transmitting information.
3. It is mathematically more suitable. Many of the limiting operations are simple in terms of the loga-
rithm but would require clumsy restatement in terms of the number of possibilities.
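The scalings in points 1 and 2 are easy to check numerically. The following is a minimal Python sketch (ours, for illustration only; the 26-letter alphabet is an assumed example, not part of the paper):

```python
import math

# Point 1: a group of n two-state relays has 2**n possible states, so the
# base-2 logarithm of the state count is exactly n: adding one relay adds 1.
for n in (1, 2, 3, 10):
    states = 2 ** n
    print(n, states, math.log2(states))  # log2(2**n) == n

# Doubling the duration (length) of a message roughly squares the number of
# possible messages, which doubles the logarithm. With an assumed 26-letter
# alphabet:
for length in (5, 10):
    count = 26 ** length                 # number of possible messages
    print(length, math.log2(count))      # doubles when length doubles
```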
The choice of a logarithmic base corresponds to the choice of a unit for measuring information. If the
base 2 is used the resulting units may be called binary digits, or more briefly bits, a word suggested by
J. W. Tukey. A device with two stable positions, such as a relay or a flip-flop circuit, can store one bit of
information. N such devices can store N bits, since the total number of possible states is $2^N$ and $\log_2 2^N = N$.
[Fig. 1. Schematic diagram of a general communication system: information source, transmitter, signal, noise source, received signal, receiver, destination.]
If the base 10 is used the units may be called decimal digits. Since
\[
\log_2 M = \frac{\log_{10} M}{\log_{10} 2} = 3.32 \log_{10} M,
\]
a decimal digit is about 3 1/3 bits. A digit wheel on a desk computing machine has ten stable positions and
therefore has a storage capacity of one decimal digit. In analytical work where integration and differentiation
are involved the base e is sometimes useful. The resulting units of information will be called natural units.
Change from the base $a$ to base $b$ merely requires multiplication by $\log_b a$.
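To make the conversion rule concrete, here is a small Python sketch (ours, not the paper's; the function name is arbitrary):

```python
import math

def convert_units(amount, base_a, base_b):
    """Convert an amount of information measured in base-a units into
    base-b units by multiplying by log_b(a)."""
    return amount * math.log(base_a) / math.log(base_b)

# One decimal digit expressed in bits: log2(10) ~ 3.32, i.e. about 3 1/3 bits.
print(convert_units(1.0, 10, 2))      # ~3.3219
# One bit expressed in natural units: ln(2).
print(convert_units(1.0, 2, math.e))  # ~0.6931
```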
By a communication system we will mean a system of the type indicated schematically in Fig. 1. It
consists of essentially five parts:
1. An information source which produces a message or sequence of messages to be communicated to
the receiving terminal.
2. A transmitter which operates on the message in some way to produce a signal suitable for trans-
mission over the channel. In telephony this operation consists merely of changing sound pressure
into a proportional electrical current. In telegraphy we have an encoding operation which produces
a sequence of dots, dashes and spaces on the channel corresponding to the message. In a multiplex
PCM system the different speech functions must be sampled, compressed, quantized and encoded,
and finally interleaved properly to construct the signal. Vocoder systems, television and frequency
modulation are other examples of complex operations applied to the message to obtain the signal.
3. The channel is merely the medium used to transmit the signal from transmitter to receiver. It may be
a pair of wires, a coaxial cable, a band of radio frequencies, a beam of light, etc.
4. The receiver ordinarily performs the inverse operation of that done by the transmitter, reconstructing
the message from the signal.
5. The destination is the person (or thing) for whom the message is intended.
We wish to consider certain general problems involving communication systems. To do this it is first
necessary to represent the various elements involved as mathematical entities, suitably idealized from their
physical counterparts. We may roughly classify communication systems into three main categories: discrete,
continuous and mixed. By a discrete system we will mean one in which both the message and the signal
are a sequence of discrete symbols. A typical case is telegraphy where the message is a sequence of letters
and the signal a sequence of dots, dashes and spaces. A continuous system is one in which the message and
signal are both treated as continuous functions, e.g., radio or television. A mixed system is one in which
both discrete and continuous variables appear, e.g., PCM transmission of speech.
We rst consider the discrete case. This case has applications not only in communication theory, but
also in the theory of computing machines, the design of telephone exchanges and other fields. In addition
the discrete case forms a foundation for the continuous and mixed cases which will be treated in the second
half of the paper.
The capacity $C$ of a discrete channel is given by
\[
C = \lim_{T \to \infty} \frac{\log N(T)}{T},
\]
where $N(T)$ is the number of allowed signals of duration $T$.
It is easily seen that in the teletype case this reduces to the previous result. It can be shown that the limit
in question will exist as a finite number in most cases of interest. Suppose all sequences of the symbols
$S_1, \dots, S_n$ are allowed and these symbols have durations $t_1, \dots, t_n$. What is the channel capacity? If $N(t)$
represents the number of sequences of duration $t$ we have
\[
N(t) = N(t - t_1) + N(t - t_2) + \cdots + N(t - t_n).
\]
The total number is equal to the sum of the numbers of sequences ending in $S_1, S_2, \dots, S_n$ and these are
$N(t - t_1), N(t - t_2), \dots, N(t - t_n)$, respectively. According to a well-known result in finite differences, $N(t)$
is then asymptotic for large $t$ to $X_0^t$ where $X_0$ is the largest real solution of the characteristic equation
\[
X^{-t_1} + X^{-t_2} + \cdots + X^{-t_n} = 1,
\]
and therefore $C = \log X_0$.
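As a concrete check on this result, the following Python sketch (ours, not the paper's; the symbol durations are hypothetical) counts $N(t)$ by the recurrence above, solves the characteristic equation for $X_0$ by bisection, and compares the empirical growth rate $\log_2 N(T)/T$ with $C = \log_2 X_0$:

```python
import math

durations = [1, 2]  # hypothetical symbol durations t_1, ..., t_n (integer units)

def count_sequences(t_max, durations):
    """N(t) for t = 0..t_max via N(t) = N(t - t_1) + ... + N(t - t_n),
    with N(0) = 1 (the empty sequence) and N(t) = 0 for t < 0."""
    N = [0] * (t_max + 1)
    N[0] = 1
    for t in range(1, t_max + 1):
        N[t] = sum(N[t - d] for d in durations if t >= d)
    return N

def capacity(durations):
    """Find the largest real root X0 of X**-t_1 + ... + X**-t_n = 1 by
    bisection (the left-hand side is decreasing for X > 0) and return
    C = log2(X0), the capacity in bits per unit time."""
    def f(x):
        return sum(x ** -d for d in durations) - 1.0
    lo, hi = 1.0, 2.0
    while f(hi) > 0:          # widen the bracket until f changes sign
        hi *= 2.0
    for _ in range(200):      # bisect to high precision
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return math.log2(0.5 * (lo + hi))

T = 60
N = count_sequences(T, durations)
print(math.log2(N[T]) / T)  # empirical rate; tends to C as T grows
print(capacity(durations))  # durations [1, 2]: log2 of the golden ratio, ~0.694
```

For durations of 1 and 2 units the recurrence is the Fibonacci recurrence, so the capacity is the base-2 logarithm of the golden ratio, and the empirical rate approaches it slowly from below as $T$ grows.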