
Foundations of Control Engineering

Marc Bodson
Copyright © 2020 by Marc Bodson. All rights reserved.
ISBN: 9781705847466. Independently published.
Electronic file produced on January 20, 2020.

No parts of this book may be offered for sale, included in other works, or made
available electronically without written permission from the copyright holder.

Disclaimer: this work is published with the understanding that the author is
supplying information but is not attempting to provide professional services.
Reasonable efforts have been made to ensure the correctness of the information.
However, no representation, express or implied, is made with regard to the
accuracy or completeness of the information, and the author cannot accept legal
responsibility or liability for any damages arising out of its use.

Front cover: space shuttle robotic arm (see p. 38, credit: NASA).
Back cover: Watt’s governor (see p. 2, photo by the author), X-29 (see p. 3,
credit: NASA), closed-loop frequency response (see p. 158, graphic by the
author).
Preface

The book presents the core theory of control engineering, together with its foundations in signals and systems. These foundations include continuous-time systems using the Laplace transform, discrete-time systems using the z-transform, and sampled-data systems connecting the two domains. The classical theory of control covers the analysis of the dynamic response of linear time-invariant systems, root-locus techniques for feedback design, and the frequency-domain analysis of closed-loop systems. Control engineering is strongly related to signal processing and communications, and the book includes a discussion of phase-locked loops as an example of feedback control. To the extent possible, the origin of the theoretical results is explained, and the technical details needed to reach a more complete understanding of the concepts are included. On the other hand, the book does not present design studies or specialized topics, for which the reader is referred to the bibliography. Material complementing the book is available through the author's web page, including solutions to selected problems and virtual lab experiments.

About the author

Marc Bodson received a Ph.D. degree in Electrical Engineering and Computer Science from the University of California, Berkeley, in 1986. He obtained two M.S. degrees, one in Electrical Engineering and Computer Science and the other in Aeronautics and Astronautics, from the Massachusetts Institute of Technology, Cambridge, MA, in 1982. In 1980, he received the degree of Ingénieur Civil Mécanicien et Electricien from the Université Libre de Bruxelles, Belgium. He is a Professor of Electrical & Computer Engineering at the University of Utah in Salt Lake City, where he was Chair of the department between 2003 and 2009. He was the Editor-in-Chief of IEEE Trans. on Control Systems Technology from 2000 to 2003. He was elected Fellow of the IEEE in 2006, and Associate Fellow of the American Institute of Aeronautics and Astronautics in 2013. He also received the Engineering Educator of the Year award from the Utah Engineers Council in 2007. His activities are described in further detail at www.ece.utah.edu/~bodson.

Contents

1 Introduction to feedback systems 1


1.1 Standard feedback system . . . . . . . . . . . . . . . . . . . . . 1
1.2 Example of Watt’s governor . . . . . . . . . . . . . . . . . . . . 2
1.3 Example of flight control system . . . . . . . . . . . . . . . . . . 2
1.4 Example of active noise control . . . . . . . . . . . . . . . . . . 3

2 Continuous-time signals 5
2.1 The Laplace transform . . . . . . . . . . . . . . . . . . . . . . . 5
2.1.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.1.2 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.1.3 Relationship between pole locations and
signal shapes . . . . . . . . . . . . . . . . . . . . . . . . 6
2.1.4 Properties of the Laplace transform . . . . . . . . . . . . 8
2.2 Inverse of Laplace transforms using partial fraction expansions . 9
2.2.1 General form of a partial fraction expansion . . . . . . . 9
2.2.2 Determination of the coefficients . . . . . . . . . . . . . . 9
2.2.3 Grouping complex terms . . . . . . . . . . . . . . . . . . 10
2.2.4 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.2.5 Non-strictly-proper transforms . . . . . . . . . . . . . . . 17
2.3 Properties of signals . . . . . . . . . . . . . . . . . . . . . . . . 17
2.3.1 Existence of terms in the partial fraction expansion . . . 17
2.3.2 Boundedness and convergence of signals . . . . . . . . . 19
2.3.3 Non-strictly-proper transforms . . . . . . . . . . . . . . . 20
2.4 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21

3 Continuous-time systems 23
3.1 Transfer functions and interconnected systems . . . . . . . . . . 23
3.1.1 Transfer function of a system . . . . . . . . . . . . . . . 23
3.1.2 Cascade systems . . . . . . . . . . . . . . . . . . . . . . 25
3.1.3 Parallel systems . . . . . . . . . . . . . . . . . . . . . . . 26
3.1.4 Feedback system . . . . . . . . . . . . . . . . . . . . . . 26
3.1.5 Block reduction method . . . . . . . . . . . . . . . . . . 27
3.1.6 General interconnected systems . . . . . . . . . . . . . . 30
3.2 Stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3.2.1 Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . 31

3.2.2 Properties . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3.2.3 Non-proper transfer functions . . . . . . . . . . . . . . . 33
3.3 Responses to step inputs . . . . . . . . . . . . . . . . . . . . . . 33
3.3.1 General characteristics of step responses . . . . . . . . . 33
3.3.2 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . 34
3.3.3 Effect of poles and zeros on step responses . . . . . . . . 38
3.4 Responses to sinusoidal inputs . . . . . . . . . . . . . . . . . . . 39
3.4.1 Definition and example . . . . . . . . . . . . . . . . . . . 39
3.4.2 General characteristics of steady-state sinusoidal responses 40
3.4.3 Example: first-order system . . . . . . . . . . . . . . . . 43
3.5 Effect of initial conditions . . . . . . . . . . . . . . . . . . . . . 44
3.6 State-space representations . . . . . . . . . . . . . . . . . . . . . 47
3.6.1 Example of a state-space model . . . . . . . . . . . . . . 47
3.6.2 General form of a state-space model . . . . . . . . . . . . 48
3.6.3 State-space analysis . . . . . . . . . . . . . . . . . . . . . 49
3.6.4 State-space realizations . . . . . . . . . . . . . . . . . . . 51
3.7 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53

4 Stability and performance of control systems 63


4.1 Control system characteristics . . . . . . . . . . . . . . . . . . . 63
4.2 Proportional control . . . . . . . . . . . . . . . . . . . . . . . . 65
4.3 Steady-state error and integral control . . . . . . . . . . . . . . 67
4.3.1 Tracking of constant reference inputs . . . . . . . . . . . 67
4.3.2 Rejection of constant disturbances . . . . . . . . . . . . . 69
4.3.3 Example of integral control . . . . . . . . . . . . . . . . 71
4.3.4 Proportional-integral-derivative control . . . . . . . . . . 73
4.4 Effect of initial conditions . . . . . . . . . . . . . . . . . . . . . 74
4.5 Routh-Hurwitz criterion . . . . . . . . . . . . . . . . . . . . . . 75
4.5.1 Background . . . . . . . . . . . . . . . . . . . . . . . . . 75
4.5.2 Procedure for the Routh-Hurwitz criterion . . . . . . . . 75
4.5.3 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . 76
4.5.4 Explanation of the Routh array . . . . . . . . . . . . . . 78
4.6 Root-locus method . . . . . . . . . . . . . . . . . . . . . . . . . 79
4.6.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . 79
4.6.2 Main root-locus rules . . . . . . . . . . . . . . . . . . . . 84
4.6.3 Additional root-locus rules . . . . . . . . . . . . . . . . . 86
4.6.4 Complementary root-locus . . . . . . . . . . . . . . . . . 93
4.6.5 Important conclusions from the root-locus rules . . . . . 94
4.7 Feedback design for phase-locked loops . . . . . . . . . . . . . . 98
4.7.1 Modulation of signals in communication systems . . . . . 98
4.7.2 Voltage-controlled oscillators . . . . . . . . . . . . . . . . 100
4.7.3 Phase-locked loops . . . . . . . . . . . . . . . . . . . . . 100
4.7.4 Compensator design . . . . . . . . . . . . . . . . . . . . 103
4.7.5 Phase detectors . . . . . . . . . . . . . . . . . . . . . . . 105
4.8 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107

5 Frequency-domain analysis of control systems 115
5.1 Bode plots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
5.1.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . 115
5.1.2 Approximations of the frequency response . . . . . . . . 116
5.1.3 Bode plots - Systems with no poles or zeros at the origin 118
5.1.4 Bode plots - Systems with poles or zeros at the origin . . 121
5.1.5 Complex poles and zeros with low damping factor . . . . 122
5.1.6 Some special transfer functions . . . . . . . . . . . . . . 126
5.2 Nyquist criterion of stability . . . . . . . . . . . . . . . . . . . . 131
5.2.1 Nyquist diagram . . . . . . . . . . . . . . . . . . . . . . 131
5.2.2 Nyquist criterion . . . . . . . . . . . . . . . . . . . . . . 133
5.2.3 Counting the number of encirclements . . . . . . . . . . 137
5.2.4 Implications of the Nyquist criterion . . . . . . . . . . . 137
5.2.5 Explanation of the Nyquist criterion . . . . . . . . . . . 139
5.2.6 Open-loop poles on the jω-axis . . . . . . . . . . . . . . 141
5.3 Gain and phase margins . . . . . . . . . . . . . . . . . . . . . . 145
5.3.1 Gain margin . . . . . . . . . . . . . . . . . . . . . . . . . 145
5.3.2 Gain margin in the Nyquist diagram . . . . . . . . . . . 146
5.3.3 Gain margin in the Bode plots . . . . . . . . . . . . . . . 146
5.3.4 Phase margin . . . . . . . . . . . . . . . . . . . . . . . . 148
5.3.5 Phase margin in the Bode plots . . . . . . . . . . . . . . 149
5.3.6 Delay margin . . . . . . . . . . . . . . . . . . . . . . . . 149
5.3.7 Relationship between phase margin and damping . . . . 151
5.3.8 Frequency-domain design . . . . . . . . . . . . . . . . . . 152
5.3.9 Example of frequency-domain design with a lead controller 154
5.3.10 Design in the Nyquist diagram . . . . . . . . . . . . . . . 157
5.4 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159

6 Discrete-time signals and systems 169


6.1 The z-transform . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
6.1.1 Discrete-time signals . . . . . . . . . . . . . . . . . . . . 169
6.1.2 The z-transform . . . . . . . . . . . . . . . . . . . . . . . 170
6.1.3 The z-plane . . . . . . . . . . . . . . . . . . . . . . . . . 176
6.2 Properties of the z-transform . . . . . . . . . . . . . . . . . . . . 181
6.3 Inversion of z-transforms . . . . . . . . . . . . . . . . . . . . . . 186
6.3.1 Inversion using partial fraction expansions . . . . . . . . 186
6.3.2 Inversion using long division . . . . . . . . . . . . . . . . 187
6.3.3 Conclusions drawn from the procedure of partial fraction
expansion . . . . . . . . . . . . . . . . . . . . . . . . . . 188
6.3.4 Properties of signals . . . . . . . . . . . . . . . . . . . . 189
6.4 Discrete-time systems . . . . . . . . . . . . . . . . . . . . . . . . 190
6.4.1 Definition and examples . . . . . . . . . . . . . . . . . . 190
6.4.2 FIR and IIR systems . . . . . . . . . . . . . . . . . . . . 192
6.4.3 BIBO stability . . . . . . . . . . . . . . . . . . . . . . . 193
6.4.4 Responses to step inputs . . . . . . . . . . . . . . . . . . 194

6.4.5 Responses to sinusoidal inputs . . . . . . . . . . . . . . . 194
6.4.6 Systems described by difference equations and effect of
initial conditions . . . . . . . . . . . . . . . . . . . . . . 197
6.4.7 Internal stability definitions and properties . . . . . . . . 198
6.4.8 Realization of discrete-time transfer functions . . . . . . 199
6.4.9 State-space models . . . . . . . . . . . . . . . . . . . . . 202
6.4.10 Extensions of other continuous-time results . . . . . . . . 203
6.4.11 Example of root-locus in discrete-time . . . . . . . . . . 205
6.5 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206

7 Sampled-data systems 213


7.1 Conversion continuous-time signal to discrete-time signal . . . . 213
7.1.1 Definition of sampling . . . . . . . . . . . . . . . . . . . 213
7.1.2 Transform of a sampled signal with rational transform . 214
7.1.3 Transformation z = e^{sT} . . . . . . . . . . . . . . . . . . 215
7.1.4 Transform of a sampled signal - General case . . . . . . . 217
7.1.5 Transform of a sampled signal in the frequency domain . 218
7.1.6 Aliasing . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
7.1.7 Avoiding aliasing . . . . . . . . . . . . . . . . . . . . . . 222
7.2 Conversion discrete-time signal to continuous-time signal . . . . 223
7.2.1 Definition of reconstruction . . . . . . . . . . . . . . . . 223
7.2.2 Transform of a reconstructed signal . . . . . . . . . . . . 224
7.2.3 Transform of a reconstructed signal in the frequency domain . . 224
7.3 Conversion continuous-time system to discrete-time system . . . 228
7.3.1 Equivalent discrete-time system . . . . . . . . . . . . . . 228
7.3.2 Discrete-time controller for continuous-time plant . . . . 231
7.4 Conversion discrete-time system to continuous-time system . . . 232
7.4.1 Equivalent continuous-time system . . . . . . . . . . . . 232
7.4.2 Equivalent system in the frequency domain . . . . . . . . 234
7.4.3 Delay of a low-pass filter . . . . . . . . . . . . . . . . . . 234
7.5 Discrete-time design to approximate a continuous-time system . 236
7.5.1 Sampled-data control design . . . . . . . . . . . . . . . . 239
7.6 Appendix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
7.6.1 Proof for the conversion from continuous-time to discrete-
time signal . . . . . . . . . . . . . . . . . . . . . . . . . . 240
7.6.2 Proof for the conversion from discrete-time to continuous-
time signal . . . . . . . . . . . . . . . . . . . . . . . . . . 241
7.7 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242

Bibliography 245

Chapter 1
Introduction to feedback systems

1.1 Standard feedback system


Fig. 1.1 shows a standard feedback system. The elements of the system are P, the plant; C, the compensator; and ⊕, a summing function or comparator. The signals are r, the reference input; e = r − y, the tracking error; u, the control input; and y, the plant output. The main objective of such a feedback
system is to achieve tracking of the reference input by the plant output, i.e.,
to make the tracking error as close to zero as possible. Examples of feedback
systems encountered in everyday life include the thermostat that regulates the
temperature of a room, and the cruise control that regulates the speed of a car.

Figure 1.1: Standard feedback system (compensator C in series with plant P, with signals r, e, u, and y)

In addition to tracking the reference input, it is important to have an acceptable transient response when the reference input changes. Fig. 1.2 shows three
examples of transient responses. Response (a) is a satisfactory response, while
responses (b) and (c) are undesirable. Response (b) is fast at first, but converges
to the steady-state very slowly. Response (c) overshoots the steady-state and
oscillates multiple times around the desired value. Challenges in achieving good
steady-state and transient responses in control systems include:

• errors in the measurements (noise and offsets),


• disturbances affecting the system (such as the slope of the road in a cruise
control system),

• uncertainty about the plant characteristics (including changes over time),

• limitations of the actuation system (delays and finite range of the control
input).

Figure 1.2: Examples of transient responses (a), (b), and (c), plotting y(t) versus t

1.2 Example of Watt’s governor


Watt’s governor, shown in Fig. 1.3, is an early example of a control system originating from the industrial revolution. The control input is the flow of steam
applied to an engine, and the plant output is the rotational speed. Through the
centrifugal force acting on two spheres, the speed of rotation creates a negative
feedback that opens or closes a valve feeding steam to the engine. In this man-
ner, speed was regulated to some desired set point. An interesting paper [20] by
J. C. Maxwell (author of Maxwell’s equations) describes early attempts to ana-
lyze governors as linear dynamical systems, and to design mechanical feedback
systems to achieve specific control objectives. Much progress was achieved over
time in the understanding of these concepts.

1.3 Example of flight control system


A more modern example is the flight control system for an advanced aircraft,
shown schematically in Fig. 1.4. The reference input is provided by the pilot, in
the form of commands in the pitch, roll, and yaw axes (given by stick and pedal
movements). The flight control system uses the pilot commands in combination
with measurements of the aircraft states (for example, its angular velocities) to determine the appropriate commands to be applied to the control surfaces.

Figure 1.3: Watt’s governor (flyball mechanism, steam valve, steam engine, and load)

Figure 1.4: Flight control system (pilot input → computer → actuators → aircraft → sensors; the computer forms the control system, and the actuators, aircraft, and sensors the plant)

In some modern aircraft, such as the X-29, the dynamic behavior is so unsta-
ble that a pilot would be unable to maintain steady flight without the feedback
actions implemented by the flight control computer. In the worst flight condition
of the X-29, for example, an angular deviation from horizontal flight doubled
every 0.12 seconds (the stabilization task is equivalent to the one required to
balance a 17.4 in stick on a finger) [7]. Computations were performed at a rate of
40 times per second to provide adequate stabilization and control of the aircraft.

1.4 Example of active noise control


Fig. 1.5 shows a different example of a feedback system, specifically an active noise control system. Here, the purpose of the control system is to make a region of
space free from noise. The result is achieved by generating a sound wave using

a speaker and canceling exactly the wave produced by the noise source. The
air pressure produced by the speaker and by the noise source at the microphone
is shown as a function of time in Fig. 1.6. Ideally, the two waves will cancel
each other exactly. The control problem is quite different from the flight control
application. There is no issue of stabilization of the plant, or of tracking of
commands. The problem is purely one of disturbance rejection. Challenges in
this application are the speed at which computations must be performed (a rate
of 8 kHz is typical), and the time delay present in the plant (due to the time it
takes for the sound to travel from the speaker to the microphone). The general
structure of the control system, however, is similar to the structure of the flight
control system of Fig. 1.4, where the actuators are replaced by the speaker, the
sensors by the microphone, the aircraft by the acoustics from the speaker to the
microphone, and the reference input is set to zero.

Figure 1.5: Active noise control system (noise source, speaker, and noise control system)

Figure 1.6: Principle of noise cancellation (air pressure p(t) due to the noise source and due to the speaker)


Chapter 2

Continuous-time signals

2.1 The Laplace transform


2.1.1 Definition
A continuous-time signal is a function of a variable t called time, which is defined over the set of real numbers. The Laplace transform of a signal x(t) is given by

X(s) = ∫₀^∞ x(t) e^{−st} dt.   (2.1)

The Laplace transform X(s) is a function of the complex variable s.

2.1.2 Examples
1) x(t) = δ(t) ⇔ X(s) = 1,
where δ(t) is an impulse function or Dirac function.
2) x(t) = u(t) ⇔ X(s) = 1/s,
where u(t) is a step function: u(t) = 0 for t < 0 and u(t) = 1 for t ≥ 0.
3) x(t) = 1 ⇔ X(s) = 1/s.
The transform is the same as for u(t), because X(s) does not depend on x(t) for t < 0.
4) x(t) = e^{at} ⇔ X(s) = 1/(s − a).
5) x(t) = cos(bt) ⇔ X(s) = s/(s² + b²).
6) x(t) = sin(bt) ⇔ X(s) = b/(s² + b²).
7) x(t) = t ⇔ X(s) = 1/s².
8) x(t) = t^n e^{at} ⇔ X(s) = n!/(s − a)^{n+1}.
9) x(t) = e^{at} cos(bt) ⇔ X(s) = (s − a)/((s − a)² + b²).
10) x(t) = e^{at} sin(bt) ⇔ X(s) = b/((s − a)² + b²).
11) x(t) = te^{at} cos(bt) ⇔ X(s) = ((s − a)² − b²)/((s − a)² + b²)².
12) x(t) = te^{at} sin(bt) ⇔ X(s) = 2b(s − a)/((s − a)² + b²)².
Many transforms can be obtained from the formula for t^n e^{at}. For example,

x(t) = te^{at} cos(bt) = te^{at} (e^{jbt} + e^{−jbt})/2 = (t/2) e^{(a+jb)t} + (t/2) e^{(a−jb)t},   (2.2)

so that

X(s) = (1/2)/(s − a − jb)² + (1/2)/(s − a + jb)²
     = ((s − a + jb)²/2 + (s − a − jb)²/2) / ((s − a)² + b²)²
     = ((s − a)² − b²) / ((s − a)² + b²)².   (2.3)

Although the intermediate steps use complex variables, the final result is a real function of the variable s, which must be the case when the signal is real.
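These pairs are easy to check symbolically. The sympy sketch below (illustrative, not part of the text) verifies entries 4, 5, and 7 directly, and recovers entry 11 from entry 9 using the fact that multiplication by t corresponds to −d/ds (listed among the transform properties below):

```python
import sympy as sp

t, s = sp.symbols("t s", positive=True)
a, b = sp.symbols("a b", real=True)

# Entries 4, 5, and 7 of the list, computed directly.
F_exp = sp.laplace_transform(sp.exp(a * t), t, s, noconds=True)
F_cos = sp.laplace_transform(sp.cos(b * t), t, s, noconds=True)
F_t = sp.laplace_transform(t, t, s, noconds=True)

assert sp.simplify(F_exp - 1 / (s - a)) == 0
assert sp.simplify(F_cos - s / (s**2 + b**2)) == 0
assert sp.simplify(F_t - 1 / s**2) == 0

# Entry 11 from entry 9: multiplication by t <-> -d/ds.
X9 = (s - a) / ((s - a)**2 + b**2)   # transform of e^{at} cos(bt)
X11 = sp.simplify(-sp.diff(X9, s))   # transform of t e^{at} cos(bt)
target = ((s - a)**2 - b**2) / ((s - a)**2 + b**2)**2
assert sp.simplify(X11 - target) == 0
```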

2.1.3 Relationship between pole locations and signal shapes
All the examples of the previous section correspond to transforms of the form
X(s) = N (s)/D(s), where N (s) and D(s) are polynomials in s. Such functions
are called rational functions of s. The coefficients of the polynomials are real,
so that the roots of the numerator and denominator polynomials are real, or
occur in complex pairs. The roots of D(s) are called the poles of X(s), while
the roots of N (s) are called the zeros of X(s). The examples show that there
is a direct relationship between the location of the poles and the shapes of the
signals. The signals are of the form t^n e^{at} cos(bt + φ), with the coefficient of the exponential a and the frequency of the sinusoid b corresponding to the real part and imaginary part of the pole (respectively), and n (the power of t) being equal to the multiplicity of the pole minus 1. Fig. 2.1 illustrates the relationship between pole locations and signal shapes in the case of single poles and Fig. 2.2 does the same for double poles.
In general, the location of the poles along the real axis determines the rate of growth or decay of the signal.

Figure 2.1: Signal shape vs. pole location (single pole). In the s-plane, poles farther to the left give faster decay, poles farther to the right give faster growth, and poles farther from the real axis give higher frequency.

Figure 2.2: Signal shape vs. pole location (double pole)

For a pole in the left half-plane (with Re(p) = a < 0), the signal contains a decaying exponential with

e^{at} = 0.37   for t = −1/a   (63% convergence)
       = 0.14   for t = −2/a   (86% convergence)
       = 0.05   for t = −3/a   (95% convergence)
       = 0.02   for t = −4/a   (98% convergence).   (2.4)

The value of time τ_c = −1/a is usually called the time constant. The value τ_{2%} = 4τ_c corresponding to a 2% residual is often taken to be the convergence

time, or settling time. For a growing exponential (or pole in the right half-plane),

e^{at} = 2.0   for t = 0.7/a,   (2.5)

and this value of the time is called the time to double the amplitude. For a = 10 rad/s, τ_double = 70 ms.

The imaginary part of the pole b gives an indication of the frequency of oscillation. Specifically, cos(bt) has a period

T_osc = 2π/b   (period of oscillation).   (2.6)

For b = 100 rad/s, T_osc = 62.8 ms.
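The numerical values above follow from simple arithmetic; the helper functions below (an illustrative sketch, with names of our choosing) reproduce them:

```python
import math

def time_constant(a):
    """Time constant tau_c = -1/a of e^{a t}, for a decaying pole (a < 0)."""
    return -1.0 / a

def settling_time_2pct(a):
    """Settling time tau_2% = 4 tau_c (2% residual)."""
    return 4.0 * time_constant(a)

def time_to_double(a):
    """Doubling time of e^{a t} for a growing pole (a > 0): ln(2)/a, about 0.7/a."""
    return math.log(2.0) / a

def oscillation_period(b):
    """Period T_osc = 2 pi / b of cos(b t)."""
    return 2.0 * math.pi / b

print(time_to_double(10.0))       # ~0.069 s, i.e. roughly 70 ms
print(oscillation_period(100.0))  # ~0.0628 s, i.e. 62.8 ms
```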

2.1.4 Properties of the Laplace transform


1) Linearity: y(t) = a1 x1(t) + a2 x2(t) ⇔ Y(s) = a1 X1(s) + a2 X2(s).
2) Differentiation: y(t) = dx(t)/dt ⇔ Y(s) = sX(s) − x(0) (use x(0⁻) for discontinuous signals).
3) Integration: x(t) = x(0) + ∫₀ᵗ y(σ) dσ ⇔ X(s) = x(0)/s + Y(s)/s.
4) Final value: if lim_{t→∞} x(t) exists, lim_{t→∞} x(t) = lim_{s→0} sX(s).
5) Multiplication by t: y(t) = t x(t) ⇔ Y(s) = −dX(s)/ds.
6) Multiplication by e^{at}: y(t) = e^{at} x(t) ⇔ Y(s) = X(s − a).
7) Delay (right shift): y(t) = x(t − T) u(t − T) for T > 0 ⇒ Y(s) = X(s) e^{−sT}.

Comments on the properties

• Linearity is a key property that is used to find the time signals correspond-
ing to rational Laplace transforms using partial fraction expansions.

• In short, differentiation ⇔ multiplication by s, and integration ⇔ division by s. These properties are very useful for the analysis of ordinary differential equations using the Laplace transform.

• The correct application of the final value theorem is tricky, because one must know that the limit exists. For example, X(s) = 1/(s + 1) and X(s) = 1/(s − 1) yield the same result, but the limit exists only in the first case. For certain transforms, the existence of the limit can be determined from X(s), as will be seen later.

• For the delay property, one must have T > 0, i.e., that the signal is shifted to the right, or delayed. The property does not apply for a shift to the left.
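The pitfall in the final value theorem can be made concrete (the sympy check below is ours, not the text's): both transforms give the same value of lim_{s→0} sX(s), but only the first corresponds to a signal with a limit.

```python
import sympy as sp

s, t = sp.symbols("s t", positive=True)

X1 = 1 / (s + 1)   # x1(t) = e^{-t}, which converges to 0
X2 = 1 / (s - 1)   # x2(t) = e^{t}, which diverges

# Blind application of the theorem gives the same answer in both cases...
assert sp.limit(s * X1, s, 0) == 0
assert sp.limit(s * X2, s, 0) == 0

# ...but the inverse transforms show that only x1(t) has a limit.
x1 = sp.inverse_laplace_transform(X1, s, t)
x2 = sp.inverse_laplace_transform(X2, s, t)
assert sp.simplify(x1 - sp.exp(-t)) == 0   # decays to 0
assert sp.simplify(x2 - sp.exp(t)) == 0    # grows without bound
```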

2.2 Inverse of Laplace transforms using partial fraction expansions
2.2.1 General form of a partial fraction expansion
In this section, we restrict our attention to Laplace transforms that are rational functions of s. The procedure of partial fraction expansions allows one to break up the transforms into simpler terms, which may be transformed back into the time domain using standard formulas.

Fact - Partial fraction expansion


1. Let X(s) = N(s)/D(s), where N(s) and D(s) are polynomials in s with real coefficients, no common roots, and deg N(s) < deg D(s). Let p1, p2, ..., pn be the roots of D(s) = 0, with multiplicities r1, r2, ..., rn, respectively, so that

D(s) = (s − p1)^{r1} (s − p2)^{r2} ··· (s − pn)^{rn}.

Then, X(s) can be expanded as

X(s) = Σ_{i=1}^{n} Σ_{k=1}^{r_i} c_{i,k} / (s − p_i)^k.   (2.7)

Note that the number of terms is equal to the degree of D(s). The coefficients c_{i,k} may be complex, but the coefficients associated with complex conjugate poles are themselves complex conjugate: p_l = p_m* ⇒ c_{l,k} = c_{m,k}*.

2. The corresponding signal x(t) is given by

x(t) = Σ_{i=1}^{n} Σ_{k=1}^{r_i} c_{i,k} t^{k−1}/(k − 1)! e^{p_i t},   (2.8)

where 0! = 1. Although the coefficients and the functions may be complex, complex conjugate terms can be grouped to yield real functions, as will be shown later.
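As an aside (not in the text), computer algebra systems implement this expansion directly; in Python's sympy, apart computes (2.7) for a rational X(s):

```python
import sympy as sp

s = sp.symbols("s")
X = 1 / ((s + 1)**2 * (s + 2))

# Partial fraction expansion: sum of -1/(s + 1), 1/(s + 1)**2, and 1/(s + 2).
expansion = sp.apart(X, s)
print(expansion)

# One term per power of each pole (deg D(s) = 3 terms), summing back to X(s).
assert expansion.is_Add and len(expansion.args) == 3
assert sp.simplify(expansion - X) == 0
```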

2.2.2 Determination of the coefficients


Two methods exist for the determination of the coefficients: a direct method called the residue method, or cover-up method, and an indirect method called the method of clearing fractions. Examples are discussed later to illustrate the application of the methods.

Residue method (or cover-up method)

The coefficients are given by the following formulas:

c_{i,r_i} = [(s − p_i)^{r_i} X(s)]_{s=p_i},

c_{i,r_i−1} = [d/ds ((s − p_i)^{r_i} X(s))]_{s=p_i},

c_{i,r_i−m} = (1/m!) [d^m/ds^m ((s − p_i)^{r_i} X(s))]_{s=p_i},   (2.9)

where the notation [·]_{s=p_i} means that the expression inside the bracket is evaluated at s = p_i.
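Numerically, the same computation is provided by scipy.signal.residue, which applies the residue formulas to the polynomial coefficients of N(s) and D(s) (an illustration, not part of the text; here for X(s) = 1/((s + 1)²(s + 2)), whose denominator expands to s³ + 4s² + 5s + 2):

```python
import numpy as np
from scipy.signal import residue

# X(s) = 1/((s + 1)^2 (s + 2)) = 1/(s^3 + 4 s^2 + 5 s + 2)
r, p, k = residue([1.0], [1.0, 4.0, 5.0, 2.0])

# r holds the residues c_{i,k}, p the poles (a repeated pole appears once
# per power of 1/(s - p)), and k the polynomial part (empty here because
# the transform is strictly proper).  Pole ordering is not guaranteed.
print(sorted(np.round(r.real, 4)))   # approximately [-1.0, 1.0, 1.0]
print(sorted(np.round(p.real, 4)))   # approximately [-2.0, -1.0, -1.0]
```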

Clearing fractions

Writing the expansion (2.7) over the common denominator D(s) and equating the numerators of X(s) = N(s)/D(s), one finds that

N(s) = c_{1,1}(s − p1)^{r1−1}(s − p2)^{r2} ··· (s − pn)^{rn}
     + c_{1,2}(s − p1)^{r1−2}(s − p2)^{r2} ··· (s − pn)^{rn}
     + ... + c_{1,r1}(s − p2)^{r2} ··· (s − pn)^{rn}
     + c_{2,1}(s − p1)^{r1}(s − p2)^{r2−1} ··· (s − pn)^{rn}
     + c_{2,2}(s − p1)^{r1}(s − p2)^{r2−2} ··· (s − pn)^{rn}
     + ... + c_{2,r2}(s − p1)^{r1}(s − p3)^{r3} ··· (s − pn)^{rn}
     + ...   (2.10)

Matching the coefficients of the polynomials on both sides yields a system of linear equations in the unknowns c_{i,k}.

2.2.3 Grouping complex terms


Grouping terms in the time domain

The sum of two terms of the form


X(s) = c/(s − p)^k + c*/(s − p*)^k   (2.11)

gives the time function

x(t) = c t^{k−1}/(k − 1)! e^{pt} + c* t^{k−1}/(k − 1)! e^{p*t}.   (2.12)

Denoting a = Re(p), b = Im(p), the function becomes

x(t) = t^{k−1}/(k − 1)! e^{at} (c e^{jbt} + c* e^{−jbt})
     = t^{k−1}/(k − 1)! e^{at} ((c + c*) cos(bt) + j(c − c*) sin(bt)),   (2.13)

which shows that x(t) is the real function of time

x(t) = 2 Re(c) t^{k−1}/(k − 1)! e^{at} cos(bt) − 2 Im(c) t^{k−1}/(k − 1)! e^{at} sin(bt).   (2.14)

The time function can also be expressed as

x(t) = 2|c| t^{k−1}/(k − 1)! e^{at} cos(bt + ∡c),   (2.15)

since

2|c| cos(bt + ∡c) = 2|c| (cos(∡c) cos(bt) − sin(∡c) sin(bt))
                  = 2 Re(c) cos(bt) − 2 Im(c) sin(bt).   (2.16)

Important observation: a pole at s = a + jb and its complex conjugate


together yield a time function that is the product of three factors: an exponential function whose coefficient is the real part of the pole, a sinusoidal function whose angular frequency is the imaginary part of the pole, and time raised to a power equal to the multiplicity of the pole minus one. The residue of the pole determines the scalar by which the function is multiplied, as well as the phase of the sinusoidal function.

Grouping terms in the s-domain

The grouping of complex terms can also be performed in the s-domain. This
fact is useful in the procedure of clearing fractions, in order to obtain a system
of linear equations with real coefficients. For example, two terms due to non-repeated imaginary poles can be combined as follows:

X(s) = c/(s − jb) + c*/(s + jb) = ((c + c*)s + jb(c − c*)) / ((s − jb)(s + jb))
     = (2 Re(c) s − 2 Im(c) b) / (s² + b²).   (2.17)

The result is identical to the one obtained in the time domain, because s/(s² + b²) is the transform of cos(bt) and b/(s² + b²) is the transform of sin(bt). In general, any two terms

X(s) = c/(s − a − jb)^k + c*/(s − a + jb)^k
     = (2 Re(c) Re[(s − a + jb)^k] − 2 Im(c) Im[(s − a + jb)^k]) / ((s − a)² + b²)^k,   (2.18)

where s is treated as a real variable when taking the real and imaginary parts of (s − a + jb)^k. Again, the result is identical to the one obtained in the time domain because

x1(t) = t^{k−1}/(k − 1)! e^{at} cos(bt) ⇔ X1(s) = Re[(s − a + jb)^k] / ((s − a)² + b²)^k,
x2(t) = t^{k−1}/(k − 1)! e^{at} sin(bt) ⇔ X2(s) = Im[(s − a + jb)^k] / ((s − a)² + b²)^k.   (2.19)

2.2.4 Examples
(1-a) Example with real poles by residue method

X(s) = 1/((s + 1)²(s + 2)) = c11/(s + 1) + c12/(s + 1)² + c21/(s + 2).   (2.20)

For c21, the formula gives

c21 = [(s + 2)X(s)]_{s=−2} = [1/(s + 1)²]_{s=−2} = 1/(−2 + 1)² = 1.   (2.21)

Note that the formula can be explained easily in this case by noting that if X(s) is multiplied by s + 2, one gets

(s + 2)X(s) = c11(s + 2)/(s + 1) + c12(s + 2)/(s + 1)² + c21.   (2.22)

Only c21 remains in this expression when s = −2.

The parameter c12 (the coefficient of the highest power for the pole at s = −1) is determined in a similar way:

c12 = [(s + 1)²X(s)]_{s=−1} = [1/(s + 2)]_{s=−1} = 1/(−1 + 2) = 1.   (2.23)

For c11, the formula is more complex. First, go back to (s + 1)²X(s) = 1/(s + 2), and take

c11 = [d/ds ((s + 1)²X(s))]_{s=−1} = [d/ds (1/(s + 2))]_{s=−1} = [−1/(s + 2)²]_{s=−1} = −1.   (2.24)

Finally, the time-domain signal is given by

x(t) = c11 e^{−t} + c12 t e^{−t} + c21 e^{−2t} = −e^{−t} + t e^{−t} + e^{−2t}.   (2.25)
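The result (2.25) can be cross-checked by computing the inverse transform symbolically (a sympy check, not part of the text):

```python
import sympy as sp

s, t = sp.symbols("s t", positive=True)
X = 1 / ((s + 1)**2 * (s + 2))

# Inverse Laplace transform of example (1-a), compared against (2.25).
x = sp.inverse_laplace_transform(X, s, t)
expected = -sp.exp(-t) + t * sp.exp(-t) + sp.exp(-2 * t)
assert sp.simplify(x - expected) == 0
```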

(1-b) Example with real poles by clearing fractions

1
X(s) =
(s + 1)2(s + 2)
c11(s + 1)(s + 2) + c12 (s + 2) + c21(s + 1)2
= . (2.26)
(s + 1)2 (s + 1)

Equating numerators, one finds

c11s2 + c113s + 2c11 + c12 s + 2c12 + c21 s2 + 2c21 s + c21 = 1,


(2.27)

which leads to the system of equations

s2 term: 0 = c11 + c21


1
s term: 0 = 3c11 + c12 + 2c21

s term: 1 = 2c11 + 2c12 + c21 (2.28)

or, in matrix form,


    
1 0 1 c11 0
 3 1 2   c12  =  0  . (2.29)
2 2 1 c21 1

The system is a set of linear equations with real coefficients. It turns out that,
when the partial fraction expansion is set up correctly, the system always has
as many equations as unknowns, and always has a unique solution. In this case,
the solution is c11 = −1, c12 = 1, c21 = 1, as found earlier.
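This small linear system can be solved directly with numpy (a sketch; the matrix is transcribed from (2.29)):

```python
import numpy as np

# Coefficient matrix and right-hand side from matching powers of s in (2.27)
A = np.array([[1.0, 0.0, 1.0],   # s^2 terms
              [3.0, 1.0, 2.0],   # s^1 terms
              [2.0, 2.0, 1.0]])  # s^0 terms
b = np.array([0.0, 0.0, 1.0])

c11, c12, c21 = np.linalg.solve(A, b)
# Expect c11 = -1, c12 = 1, c21 = 1, matching the residue method
assert np.allclose([c11, c12, c21], [-1.0, 1.0, 1.0])
```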

(2-a) Example with complex poles by residue method

X(s) = 2(s + 1)/(s(s^2 + 2s + 2))    with poles at: s = 0, s = −1 ± j
     = c11/s + c21/(s + 1 − j) + c31/(s + 1 + j)    with: c31 = c21*.   (2.30)

The coefficients are given by

c11 = [sX(s)]_(s=0) = [2(s + 1)/(s^2 + 2s + 2)]_(s=0) = 1,
c21 = [(s + 1 − j)X(s)]_(s=−1+j) = [2(s + 1)/(s(s + 1 + j))]_(s=−1+j)
    = 2j/((−1 + j)(2j)) = 1/(−1 + j) = (−1 − j)/2,
c31 = (−1 + j)/2  (= c21*).   (2.31)
2
The time-domain function is

x(t) = 1 + 2 Re(c21) e^(Re(p21)t) cos(Im(p21)t) − 2 Im(c21) e^(Re(p21)t) sin(Im(p21)t)
     = 1 − e^(−t) cos t + e^(−t) sin t.   (2.32)

Note that if the pole p31 is chosen instead of p21 in (2.32), the result remains the
same.
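Both the residue and the resulting time function can be verified numerically (a sketch; the sample times are arbitrary choices):

```python
import numpy as np

j = 1j
# Residue at the pole p21 = -1 + j: evaluate (s + 1 - j) X(s) at s = p21
p21 = -1 + j
c21 = 2 * (p21 + 1) / (p21 * (p21 + 1 + j))
assert abs(c21 - (-1 - j) / 2) < 1e-12

# Time function from the expansion vs. the closed-form answer (2.32);
# the conjugate pair contributes 2 Re(c21 e^{p21 t})
t = np.linspace(0.01, 10, 500)
x_exp = 1 + 2 * (c21 * np.exp(p21 * t)).real
x_closed = 1 - np.exp(-t) * np.cos(t) + np.exp(-t) * np.sin(t)
assert np.allclose(x_exp, x_closed)
```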

(2-b) Example with complex poles by clearing fractions

X(s) = 2(s + 1)/(s(s^2 + 2s + 2))    with poles at: s = 0, s = −1 ± j
     = c11/s + c21/(s + 1 − j) + c31/(s + 1 + j)    with: c31 = c21*
     = (c11(s + 1 − j)(s + 1 + j) + c21 s(s + 1 + j) + c31 s(s + 1 − j)) / (s(s^2 + 2s + 2)).   (2.33)

Equating the numerators, one finds

c11 s^2 + 2 c11 s + 2 c11 + c21 s^2 + c21(1 + j)s + c31 s^2 + c31(1 − j)s = 2s + 2   (2.34)

and the system of equations is

[ 1     1        1    ] [ c11 ]   [ 0 ]
[ 2  (1 + j)  (1 − j) ] [ c21 ] = [ 2 ] .   (2.35)
[ 2     0        0    ] [ c31 ]   [ 2 ]
The result is a system of linear equations with complex coefficients, whose solu-
tion is identical to the one found in (2-a).
Complex arithmetic can be avoided by defining as unknowns the real parts
and imaginary parts of the variables when the residue is complex. Specifically,
let

c21 = f21 + jg21. (2.36)



Then

X(s) = c11/s + c21/(s + 1 − j) + c21*/(s + 1 + j)
     = c11/s + ((c21 + c21*)(s + 1) + j(c21 − c21*)) / (s^2 + 2s + 2)
     = (c11(s^2 + 2s + 2) + (2 f21(s + 1) − 2 g21)s) / (s(s^2 + 2s + 2)),   (2.37)
which gives

c11 s^2 + 2 c11 s + 2 c11 + 2 f21 s^2 + 2 f21 s − 2 g21 s = 2s + 2   (2.38)

and the system of equations

[ 1  2   0 ] [ c11 ]   [ 0 ]
[ 2  2  −2 ] [ f21 ] = [ 2 ] .   (2.39)
[ 2  0   0 ] [ g21 ]   [ 2 ]
Solving the system produces f21 = −1/2, g21 = −1/2, c11 = 1, which (again)
yields the same result as (2-a).

(3-a) Example with repeated complex poles by residue method

X(s) = 1/(s^2 + 2s + 2)^2    with double poles at s = −1 ± j
     = c11/(s + 1 − j) + c11*/(s + 1 + j) + c12/(s + 1 − j)^2 + c12*/(s + 1 + j)^2.   (2.40)
Using the formula gives

c12 = [(s + 1 − j)^2 X(s)]_(s=−1+j) = [1/(s + 1 + j)^2]_(s=−1+j) = 1/(2j)^2 = −1/4,

c11 = [d/ds (1/(s + 1 + j)^2)]_(s=−1+j) = [−2/(s + 1 + j)^3]_(s=−1+j) = −2/(2j)^3 = −j/4,   (2.41)
so that the time function is

x(t) = 2 Re(c11) e^(−t) cos(t) − 2 Im(c11) e^(−t) sin(t)
       + 2 Re(c12) t e^(−t) cos(t) − 2 Im(c12) t e^(−t) sin(t)
     = (1/2) e^(−t) sin(t) − (1/2) t e^(−t) cos(t).   (2.42)

(3-b) Example with repeated complex poles by clearing fractions


The example starts as before, but the complex conjugate terms are grouped
together as follows

1
X(s) = 2 2 with double poles at p1 = −1 + j and p2 = −1 − j
s + 2s + 2
c11 c∗11 c12 c∗12
= + + 2 +
s + 1 − j s + 1 + j (s + 1 − j) (s + 1 + j)2

c11 c11 c12 c∗12
= + + 2 + 2 .
s + 1 − j s + 1 + j s + 2s − 2j(s + 1) s + 2s + 2j(s + 1)
(2.43)

Defining

c11 = f11 + j g11,    c12 = f12 + j g12,   (2.44)

the transform becomes

X(s) = (2 f11(s + 1) − 2 g11)/(s^2 + 2s + 2) + (2 f12(s^2 + 2s) − 4 g12(s + 1))/(s^2 + 2s + 2)^2.   (2.45)

Equating the numerators for X(s) gives

(2 f11(s + 1) − 2 g11)(s^2 + 2s + 2) + 2 f12(s^2 + 2s) − 4 g12(s + 1) = 1,   (2.46)

and

s^3 (2 f11) + s^2 (2 f11 + 4 f11 − 2 g11 + 2 f12)
+ s (4 f11 + 4 f11 − 4 g11 + 4 f12 − 4 g12)
+ (4 f11 − 4 g11 − 4 g12) = 1,   (2.47)

which corresponds to the system of equations

[ 2   0  0   0 ] [ f11 ]   [ 0 ]
[ 6  −2  2   0 ] [ g11 ]   [ 0 ]
[ 8  −4  4  −4 ] [ f12 ] = [ 0 ] .   (2.48)
[ 4  −4  0  −4 ] [ g12 ]   [ 1 ]

The solution is f11 = 0, f12 = −1/4, g11 = −1/4, g12 = 0, which gives the same
result as (3-a).
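As with the earlier examples, the real 4×4 system can be solved numerically (a sketch; the matrix is transcribed from (2.48)):

```python
import numpy as np

A = np.array([[2.0,  0.0, 0.0,  0.0],
              [6.0, -2.0, 2.0,  0.0],
              [8.0, -4.0, 4.0, -4.0],
              [4.0, -4.0, 0.0, -4.0]])
b = np.array([0.0, 0.0, 0.0, 1.0])

f11, g11, f12, g12 = np.linalg.solve(A, b)
# Expect f11 = 0, g11 = -1/4, f12 = -1/4, g12 = 0,
# i.e., c11 = -j/4 and c12 = -1/4, matching the residue method
assert np.allclose([f11, g11, f12, g12], [0.0, -0.25, -0.25, 0.0])
```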

2.2.5 Non-strictly-proper transforms


The partial fraction expansion technique assumed that X(s) = N(s)/D(s),
where N(s) and D(s) are polynomials in s and deg N(s) < deg D(s). By defin-
ition:

• a function that is a ratio of polynomials is called a rational function of s.

• a rational function of s such that deg N(s) < deg D(s) is called a strictly
proper function of s.

• a rational function of s such that deg N(s) ≤ deg D(s) is called a proper
function of s.

In some cases, one may encounter a transform X(s) = N(s)/D(s), with
deg N(s) ≥ deg D(s). Such a transform can be inverted using partial fraction ex-
pansions, with a preliminary step. First, using polynomial division, one finds
Q(s), R(s) such that N(s) = D(s)Q(s) + R(s) and deg R(s) < deg D(s) (Q(s)
and R(s) are the quotient and the remainder of the division of N(s) by D(s),
respectively). As a result

X(s) = Q(s) + R(s)/D(s).   (2.49)

The second term is a strictly proper rational function of s, which can be inverted
using the partial fraction expansion procedure described earlier. The first term
is a polynomial Q(s) = q0 + q1 s + q2 s^2 + ..., whose associated time function is

q(t) = q0 δ(t) + q1 (d/dt)δ(t) + q2 (d^2/dt^2)δ(t) + ...   (2.50)
In other words, q(t) is a linear combination of the delta function and its deriv-
atives. While the case of non-strictly-proper rational functions of s may be
addressed easily in this manner, it is rarely meaningful in practice.
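The preliminary division step can be carried out with numpy.polydiv (a sketch; the transform X(s) = (s^3 + 2s^2 + s + 1)/(s^2 + 1) is an example chosen here for illustration):

```python
import numpy as np

# X(s) = N(s)/D(s) with deg N >= deg D: divide first, so N = D*Q + R
N = np.array([1.0, 2.0, 1.0, 1.0])  # s^3 + 2s^2 + s + 1
D = np.array([1.0, 0.0, 1.0])       # s^2 + 1

Q, R = np.polydiv(N, D)             # quotient s + 2, remainder -1

# Verify N(s) = D(s)Q(s) + R(s)
assert np.allclose(np.polyadd(np.polymul(D, Q), R), N)
```

Here Q(s) = s + 2 would contribute 2δ(t) + dδ(t)/dt to q(t), while R(s)/D(s) = −1/(s^2 + 1) is inverted by partial fractions as usual.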

2.3 Properties of signals


2.3.1 Existence of terms in the partial fraction
expansion
Important properties of the time function associated with a rational transform
X(s) may be derived from the knowledge of the procedure of partial fraction

expansion. The expansion does not need to be performed: only the locations of
the poles are needed. Consider the example

X(s) = N(s)/((s + 1)^2 (s + 2)),   (2.51)

where N(s) is an arbitrary polynomial. A partial fraction expansion will produce

X(s) = c11/(s + 1) + c12/(s + 1)^2 + c21/(s + 2).   (2.52)

Without computing the coefficients, one can predict that x(t) will be a linear
combination of the functions e^(−t), t e^(−t), and e^(−2t). The result is independent of
N(s). Further, if N(s) does not have a root that is identical to a denominator
root (pole/zero cancellation), the functions t e^(−t) and e^(−2t) must be present in the
expansion. Indeed, the coefficients c12 and c21 cannot be zero because

c12 = [(s + 1)^2 X(s)]_(s=−1) = 0  ⇔  [N(s)]_(s=−1) = 0,
c21 = [(s + 2)X(s)]_(s=−2) = 0  ⇔  [N(s)]_(s=−2) = 0,   (2.53)

and there would have to be a pole/zero cancellation in X(s), i.e., a common


root to N (s) and D(s).
In general, if there is no common root between N (s) and D(s), the residue
formula implies that the coefficient associated to the highest power of a given
pole must always be nonzero. Lower order terms may not be present. A sim-
ilar conclusion may be drawn regarding the corresponding time function. The
property is made precise in the following fact.

Fact - Nature of the time function associated with a rational X(s)

Consider a signal x(t) with a transform X(s) = N(s)/D(s), where N(s) and
D(s) are polynomials in s with real coefficients having no common roots and
such that deg N(s) < deg D(s). Let the roots of D(s) = 0 be of the form
p = a ± jb, with multiplicity r. Then, the signal x(t) is the linear combination
of a set of time functions corresponding to the poles. For a given pole, the
functions are e^(at) cos(bt + φ), t e^(at) cos(bt + φ), ..., and t^(r−1) e^(at) cos(bt + φ), where φ is
some phase angle. If the pole is real, the functions are e^(at), t e^(at), ..., t^(r−1) e^(at). The
terms with the highest power of t (that is, t^(r−1) e^(at) or t^(r−1) e^(at) cos(bt + φ)) must
be present (i.e., must have nonzero coefficients).
Example: consider the signal with transform

X(s) = (s + 1)/(s^4 (s^2 + 4s + 13)^2 (s − 10)),   (2.54)

with poles at s = 0 (repeated 4 times), s = −2 ± 3j (repeated twice), and s = 10.
Without performing a partial fraction expansion, we can say that x(t) is a linear
combination of the functions: 1, t, t^2, t^3, e^(−2t) cos(3t + φ), t e^(−2t) cos(3t + φ), and
e^(10t). The coefficients of t^3, t e^(−2t) cos(3t + φ), and e^(10t) must be nonzero.

2.3.2 Boundedness and convergence of signals


The results derived from the partial fraction expansion procedure can be further
developed to obtain statements about the boundedness and convergence of a
signal with a rational transform. In particular, the results allow one to determine
whether the final value theorem can be used, based on an inspection of the poles.
We begin by recalling the definition of a bounded signal.
Definition - Bounded signal: a signal x(t) is said to be bounded if there exists
M > 0 such that

|x(t)| ≤ M for all t ≥ 0.   (2.55)

Conversely, a signal is said to be unbounded if no such bound can be found
(meaning that, for all M, there exists some t such that |x(t)| > M).
The following definitions will also be useful

s ∈ open left half-plane (OLHP) ⇔ Re(s) < 0
s ∈ open right half-plane (ORHP) ⇔ Re(s) > 0
s ∈ closed left half-plane (CLHP) ⇔ Re(s) ≤ 0
s ∈ closed right half-plane (CRHP) ⇔ Re(s) ≥ 0
s ∈ jω-axis ⇔ Re(s) = 0

Fact - Conditions for boundedness and convergence of signals with


rational transforms
Assume that the signal x(t) has a rational transform X(s) = N(s)/D(s), where
N (s) and D(s) are polynomials in s with real coefficients having no common
roots and such that deg N (s) < deg D(s). Then:
(a) The signal x(t) is bounded if and only if the roots of D(s) are either in the
open left half-plane or are non-repeated roots on the jω−axis.
(b) The signal x(t) has a limit for t → ∞ if and only if the roots of D(s) are
in the open left half-plane, with the possible exception of a single root at the
origin (s = 0). If the limit exists, lim_(t→∞) x(t) = lim_(s→0) sX(s).
(c) The signal x(t) converges to zero as t → ∞ if and only if all the roots of
D(s) are in the open left half-plane.

Proof: the proof is based on the properties of the individual terms of the partial
fraction expansion. A term t^(r−1) e^(at) cos(bt + φ) has the properties that:
(a) it is bounded if and only if a < 0, or a = 0 and r = 1 (the pole is in the
OLHP or is non-repeated on the jω-axis).
(b) it has a limit if and only if a < 0, or a = 0, b = 0, and r = 1 (the pole is in
the OLHP or is non-repeated at s = 0).
(c) it converges to zero if and only if a < 0 (the pole is in the OLHP).
The other elements of the proof are that the functions with the highest powers of
t must be present in the expansion and that a function in the expansion cannot
be cancelled by a combination of other functions (the functions are linearly
independent).
Examples

1. X(s) = (s − 1)/(s + 1)^2: bounded, converges to 0.

2. X(s) = (s + 1)/(s^2 + 4): bounded, does not converge.

3. X(s) = (s + 1)/s^2: unbounded.

4. X(s) = (s + 5)/(s(s + 1)): bounded, converges to 5.

5. X(s) = (s^2 − 1)/(s^2 + 16)^2: unbounded.

6. X(s) = 1/(s^2 − 1): unbounded.
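These classifications follow directly from the pole locations, so they can be automated (a sketch; the numerical tolerances for grouping repeated poles are choices made here):

```python
import numpy as np

def classify(den):
    """Classify x(t) from the denominator D(s), per the fact above.
    Returns 'unbounded', 'bounded', or 'converges to 0'."""
    poles = np.roots(den)
    for p in poles:
        if p.real > 1e-9:                      # pole in the ORHP
            return "unbounded"
        if abs(p.real) <= 1e-9:                # pole on the jw-axis:
            if np.sum(np.abs(poles - p) < 1e-6) > 1:   # repeated?
                return "unbounded"
    if all(p.real < -1e-9 for p in poles):     # all poles in the OLHP
        return "converges to 0"
    return "bounded"

assert classify([1, 2, 1]) == "converges to 0"   # (s+1)^2
assert classify([1, 0, 4]) == "bounded"          # s^2 + 4
assert classify([1, 0, 0]) == "unbounded"        # s^2 (double pole at 0)
assert classify(np.polymul([1, 0, 16], [1, 0, 16])) == "unbounded"  # (s^2+16)^2
assert classify([1, 0, -1]) == "unbounded"       # s^2 - 1 (pole at +1)
```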

2.3.3 Non-strictly-proper transforms

The fact assumed that X(s) = N(s)/D(s), where N(s) and D(s) are polynomi-
als in s and deg N(s) < deg D(s). For deg N(s) ≥ deg D(s), it was found earlier
that

X(s) = Q(s) + Xsp(s),   (2.56)

where Xsp(s) is a strictly proper function of s and Q(s) is a polynomial in s.
Further, Q(s) corresponds to a time-domain function that is a linear combination
of impulses and derivatives of impulses. Therefore, we have that

X(s) = N(s)/D(s) with deg N(s) ≥ deg D(s) ⇒ x(t) is unbounded.   (2.57)

2.4 Problems
Problem 2.1: Using both methods for partial fraction expansion (residue and
clearing fractions), find the signals whose Laplace transforms are given by

(a) X(s) = 2s/((s + 2)(s − 2))    (b) X(s) = (2s + 1)/(s^2 (s + 1)^2),
(c) X(s) = (s^3 + 2s^2 + 2s + 5)/(s^2 (s^2 + 2s + 5))    (d) X(s) = (4s − 8)/(s^2 + 4)^2.   (2.58)

Problem 2.2: (a) Consider a signal with Laplace transform

X(s) = (s^2 + 4)/(s^3 (s^2 + 2s + 5)^2).   (2.59)

Give the form of x(t) that would result from a partial fraction expansion. Ex-
press the signal as a linear combination of time functions, but do not solve for
the coefficients themselves. Indicate which of the coefficients may or may not
turn out to be zero.
(b) repeat part (a) for

X(s) = (s − 1)/((s + 2)^4 (s − 3)^3 (s + 4)).   (2.60)

Problem 2.3: For the signals whose Laplace transforms are given below, indi-
cate whether the signals are bounded and, if so, whether lim_(t→∞) x(t) exists. If the
limit exists, give its value. Do not invert the Laplace transforms to obtain the
results.

(a) X(s) = 10/(s + 1)^10    (b) X(s) = (s − 1)/(s(s + 2)),
(c) X(s) = 1/(s^2 (s + 2))    (d) X(s) = 5/(s(s + 1)^2),
(e) X(s) = 3/(s(s^2 + 4))    (f) X(s) = 3/(s(s^2 + 4)^2),
(g) X(s) = 2(s − 1)/((s^2 + 2s + 1)(s + 3))    (h) X(s) = 2(s − 1)/((s^2 + 2s + 2)(s + 3)),
(i) X(s) = 2(s − 1)/((s^2 + 2s + 2)^2 (s + 3)).   (2.61)

Problem 2.4: (a) Is a signal x(t) considered bounded if x(t) < 2^16?
(b) How fast does x(t) = e^(−100t) cos(10,000t) converge to zero?
(c) Is the signal with X(s) = (s − 1)^2/(s(s + 1)(s^2 + 4)^2) bounded? Does it converge to a
steady-state value? If so, to what value?



Problem 2.5: (a) List the time functions that would originate from a partial
fraction expansion of

X(s) = (s + 1)/((s^2 + 2s + 5)^2 (s^2 + 4)^3).   (2.62)

Express the result in terms of real functions.
(b) Give the real time function x(t) whose transform is

X(s) = (1 + 2j)/(s + 2 + 3j)^3 + (1 − 2j)/(s + 2 − 3j)^3.   (2.63)

Problem 2.6: (a) Give the response y(t) of a system with transfer function

P(s) = 1/(s^2 (s + 1))   (2.64)

and with constant input x(t) = 1.
(b) An input x(t) = sin(2t) is applied to a system with transfer function

P(s) = (s − 1)/(s(s^2 + 4s + 8)^3 (s^2 + 4)).   (2.65)

List all the functions that appear in the partial fraction expansion of Y(s),
without calculating the coefficients of the expansion. Express the result in terms
of real functions, and indicate which functions must have nonzero coefficients.
Problem 2.7: (a) List the real time functions that would originate from a
partial fraction expansion of the response of the system

P(s) = (s + 1)/(s(s + 100 + 33j)^2 (s + 100 − 33j)^2)   (2.66)

to a step input. Indicate which functions may and may not have zero coefficients
in the partial fraction expansion.
(b) Repeat part (a) for an input x(t) = cos(33t).
Chapter 3
Continuous-time systems

3.1 Transfer functions and interconnected systems
3.1.1 Transfer function of a system
A continuous-time system is an operator that transforms a continuous-time signal
into another continuous-time signal. For example, consider the simple circuit
shown in Fig. 3.1. The circuit is described by
v(t) = L di(t)/dt + R i(t),   (3.1)
which may be transformed to the s-domain by using the properties of the Laplace
transform
V(s) = sLI(s) − Li(0) + RI(s)  ⇒  I(s) = (1/(sL + R)) V(s) + Li(0)/(sL + R).   (3.2)

Consider V (s) to be the input to the system, and I(s) to be the output. The
first term in the expression for I(s) is due to the input, while the second term

Figure 3.1: RL circuit for example (source v(t) in series with inductance L and resistance R, current i(t))

Figure 3.2: Linear time-invariant system (input x, output y, block H(s))

is due to the initial current in the inductor. This initial current is considered
to be an initial condition of the system, or initial state. For the time being, we
will let i(0) = 0. Then
I(s)/V(s) = 1/(sL + R) = (1/L)/(s + R/L).   (3.3)
By definition

H(s) = I(s)/V(s),   (3.4)

where H(s) is called the transfer function of the system. Note that

H(s) = (1/L)/(s + R/L) = Laplace transform((1/L) e^(−(R/L)t))
     = Laplace transform(h(t)),   (3.5)

where h(t) is the impulse response of the system. Indeed for v(t) = δ(t), V (s) =
1, I(s) = H(s), and i(t) = h(t).
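The impulse response of the RL circuit can be checked with scipy.signal (a sketch; the values R = 2, L = 0.5 are arbitrary choices):

```python
import numpy as np
from scipy.signal import impulse

R, L = 2.0, 0.5
sys = ([1.0 / L], [1.0, R / L])      # H(s) = (1/L)/(s + R/L)

# Simulated impulse response vs. h(t) = (1/L) e^{-(R/L)t}
t, h = impulse(sys, T=np.linspace(0, 2, 200))
assert np.allclose(h, (1.0 / L) * np.exp(-(R / L) * t), atol=1e-6)
```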
In general, a linear time-invariant system with input x(t) and output y(t)
may be represented by the block diagram of Fig. 3.2. This representation means
that

Y (s) = H(s)X(s). (3.6)

A system is completely described by H(s), since the response to any input signal
can be computed from the knowledge of the signal and of the transfer function
of the system. When
H(s) = N(s)/D(s),   (3.7)
where N (s) and D(s) are polynomials, the roots of N (s) are called the zeros of
the system, and the roots of D(s) are called the poles of the system.
In the time domain

y(t) = h(t) ∗ x(t), (3.8)


Figure 3.3: Cascade system (x → H1(s) → y1 → H2(s) → y)

where "∗" denotes the convolution operation [26], or

h(t) ∗ x(t) = ∫_(−∞)^(∞) h(τ)x(t − τ)dτ.   (3.9)

Knowledge of h(t) is equivalent to knowledge of H(s). The advantage of H(s) is


that the rather complicated time-domain convolution operation is replaced by a
simple multiplication in the s-domain. Note that, with the unilateral definition
of the Laplace transform (integration from t = 0 to ∞), the equivalence between
convolution in the time domain and multiplication in the s-domain is based on
the assumption that h(t) and x(t) are zero for t < 0.
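The equivalence can be illustrated numerically (a sketch; the choice h(t) = e^(−t) with a unit-step input is arbitrary, and gives Y(s) = 1/(s(s+1)), i.e., y(t) = 1 − e^(−t)):

```python
import numpy as np

dt = 1e-3
t = np.arange(0, 5, dt)

h = np.exp(-t)          # impulse response, H(s) = 1/(s+1)
x = np.ones_like(t)     # unit step input, X(s) = 1/s

# Discrete approximation of the convolution integral (h and x zero for t < 0)
y = np.convolve(h, x)[: len(t)] * dt

# Closed form from Y(s) = 1/(s(s+1)) = 1/s - 1/(s+1)
assert np.allclose(y, 1 - np.exp(-t), atol=5e-3)
```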

3.1.2 Cascade systems


The replacement of convolution by a simple multiplication in the s-domain allows
one to rapidly compute the transfer functions of interconnected systems. A
cascade system is shown in Fig. 3.3. The overall transfer function is obtained as
follows

Y1(s) = H1 (s)X(s)
Y (s) = H2 (s)Y1 (s), (3.10)

so that

Y (s) = H2 (s)H1(s)X(s)
= H(s)X(s) for H(s) = H2(s)H1 (s). (3.11)

In other words, the transfer function of a cascade system is the product of the
two transfer functions. If we let

H1(s) = N1(s)/D1(s),    H2(s) = N2(s)/D2(s),   (3.12)

the result is

H(s) = N1(s)N2(s)/(D1(s)D2(s)).   (3.13)
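In polynomial form, the cascade combination is just a pair of polynomial multiplications (a sketch; H1 = 1/(s+1) and H2 = (s+2)/(s+3) are arbitrary examples):

```python
import numpy as np

# H1(s) = 1/(s+1), H2(s) = (s+2)/(s+3)
N1, D1 = [1.0], [1.0, 1.0]
N2, D2 = [1.0, 2.0], [1.0, 3.0]

# Cascade: H(s) = N1 N2 / (D1 D2)
N = np.polymul(N1, N2)
D = np.polymul(D1, D2)

# Poles of the cascade are the union of the individual poles; likewise the zeros
assert np.allclose(sorted(np.roots(D)), [-3.0, -1.0])
assert np.allclose(np.roots(N), [-2.0])
```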

Figure 3.4: Parallel system (x drives H1(s) and H2(s); y = y1 + y2)

Therefore, unless there are pole/zero cancellations, the poles of H(s) are the
union of the poles of H1 (s) and H2 (s), and the zeros of H(s) are the union of
the zeros of H1 (s) and H2(s).

3.1.3 Parallel systems


A parallel system is shown in Fig. 3.4. The output of the system is given by

Y (s) = Y1 (s) + Y2 (s) = H1 (s)X(s) + H2 (s)X(s)


= (H1(s) + H2 (s)) X(s), (3.14)

so that the overall transfer function is the sum of the two transfer functions

H(s) = H1(s) + H2 (s). (3.15)

With

H1(s) = N1(s)/D1(s),    H2(s) = N2(s)/D2(s),   (3.16)

one has

H(s) = (N1(s)D2(s) + D1(s)N2(s))/(D1(s)D2(s)).   (3.17)
The poles of H(s) are (again) the union of the poles of H1(s) and H2(s), but
the zeros are the roots of N1(s)D2 (s) + D1(s)N2 (s) = 0 (except for possible
cancellations). A zero that is common to both H1 (s) and H2 (s) is also a zero of
H(s).

3.1.4 Feedback system


A feedback system is shown in Fig. 3.5. The output of the system is given by

Y (s) = H1 (s)E(s)
= H1 (s)X(s) − H1 (s)H2(s)Y (s), (3.18)
Figure 3.5: Feedback system (e = x − H2(s)y drives H1(s), whose output is y)

which implies that

(1 + H1 (s)H2 (s)) Y (s) = H1 (s)X(s). (3.19)

The result is the overall transfer function

H(s) = H1(s)/(1 + H1(s)H2(s)).   (3.20)
This is an important formula for the design of feedback systems. The overall
transfer function is the ratio of the so-called forward path transfer function H1 (s)
and the number 1 plus the so-called loop transfer function H1 (s)H2(s). H2 (s) is
called the feedback path transfer function.
In terms of the original poles and zeros

H(s) = N1(s)D2(s)/(N1(s)N2(s) + D1(s)D2(s)).   (3.21)
The poles of the overall system are the roots of N1 (s)N2(s) + D1(s)D2 (s) = 0
and are called the closed-loop poles. In general, these poles are different from
the open-loop poles (the poles of H1 (s) and H2 (s)). Indeed, if p is an open-loop
pole, D1(p) = 0 or D2(p) = 0. To be a closed-loop pole, it would need to satisfy
N1 (p) = 0 or N2(p) = 0. Therefore, unless there is a pole/zero cancellation in
the product H1(s)H2 (s), the closed-loop poles are distinct from the open-loop
poles. Finally, note that the zeros of H(s) are the union of the zeros of the
forward path and of the poles of the feedback path.
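The closed-loop polynomial arithmetic can be sketched with numpy (an illustration; H1 = 1/(s(s+2)) with unity feedback H2 = 1 is an assumed example):

```python
import numpy as np

# H1(s) = N1/D1 = 1/(s(s+2)), H2(s) = N2/D2 = 1 (unity feedback)
N1, D1 = [1.0], [1.0, 2.0, 0.0]
N2, D2 = [1.0], [1.0]

# Closed loop: H(s) = N1 D2 / (N1 N2 + D1 D2)
num = np.polymul(N1, D2)
den = np.polyadd(np.polymul(N1, N2), np.polymul(D1, D2))

# den = s^2 + 2s + 1: closed-loop poles at -1, -1,
# distinct from the open-loop poles 0 and -2
assert np.allclose(den, [1.0, 2.0, 1.0])
assert np.allclose(np.roots(den), [-1.0, -1.0])
```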

3.1.5 Block reduction method


The formulas for cascade, parallel, and feedback systems can be combined to find
the transfer function of many interconnected systems. For example, consider
the system of Fig. 3.6. The parallel blocks H1(s) and H2 (s) can be replaced
by H1 (s) + H2 (s). In cascade with H4(s), the combined transfer function of
the forward path is (H1(s) + H2(s))H4(s). Then, using the feedback system
formula, the overall transfer function is found to be

H(s) = (H1(s) + H2(s))H4(s) / (1 + (H1(s) + H2(s))H3(s)H4(s)).   (3.22)

Figure 3.6: Interconnected system (H1(s) and H2(s) in parallel, in cascade with H4(s), with feedback through H3(s))

If summation junctions are moved through equivalent transformations, even


more problems can be solved. For example, consider the system of Fig. 3.7. The
structure of the system is similar to Fig. 3.6, but none of the formulas can be
applied directly. However, the summation junction between H2 (s) and H4 (s)
can be moved to the output of H4 (s) by replacing H1(s) by H1(s)H4(s), as
shown in Fig. 3.8. Then, the formula for cascade systems makes it possible to
replace the two blocks H2 (s) and H4(s) by a single block H2 (s)H4(s).
To proceed further, the output of H1 (s)H4 (s) is moved past the feedback
from y, requiring that one add a feedforward term from x as shown in Fig. 3.9.
Then, the transfer function of the system can be computed to be

H(s) = H1(s)H4(s) + (1 − H1(s)H3(s)H4(s)) · H2(s)H4(s)/(1 + H2(s)H3(s)H4(s))
     = N(s)/(1 + H2(s)H3(s)H4(s)),   (3.23)

where

N(s) = H1(s)H4(s) + H1(s)H2(s)H3(s)H4(s)^2 + H2(s)H4(s)
       − H1(s)H2(s)H3(s)H4(s)^2
     = (H1(s) + H2(s))H4(s).   (3.24)
Figure 3.7: Interconnected system

Figure 3.8: Block diagram after the first step

Figure 3.9: Block diagram after the second step



As seen in this example, the transfer function of many systems can be found
by the block reduction method using the formulas for cascade, parallel, and feed-
back systems, and carefully transforming the diagrams into equivalent systems.
In difficult cases, skillful manipulations of the diagram may be required. The
so-called Mason’s rule is a general procedure that can also be used and is often
found in textbooks. However, it also requires some skill to apply. The next
section presents a procedure that can be applied systematically and with less
risk of error in complicated cases.

3.1.6 General interconnected systems


Interconnected systems are assumed to be composed of linear time-invariant
systems and summing junctions, with a single input and a single output for
which the transfer function must be obtained.

Procedure

1. Define n variables Xi (s) after every summing junction.

2. Write equations for Y(s) and for the Xi(s)'s, by reading the diagram. This
step produces n + 1 equations relating Xi(s), Y(s) and X(s).

3. Eliminate the Xi(s)'s until a single equation is left that relates Y(s) and
X(s). This step is equivalent to solving n + 1 linear equations in the
n + 1 unknowns Y(s) and Xi(s). The solution for Y(s) gives the transfer
function.

Example: consider the block diagram of Fig. 3.7. The equations are

X1 (s) = X(s) − H3 (s)Y (s)


X2 (s) = H1 (s)X(s) + H2(s)X1 (s)
Y (s) = H4 (s)X2 (s). (3.25)

Eliminating the extra variables

Y (s) = H4(s)H1(s)X(s) + H4(s)H2 (s)X1 (s)


= H4(s)H1(s)X(s) + H4(s)H2 (s)X(s)
−H4(s)H2(s)H3 (s)Y (s), (3.26)

and

(1 + H4(s)H2(s)H3(s))Y(s) = H4(s)(H1(s) + H2(s))X(s).   (3.27)

Therefore, the transfer function is

H(s) = H4(s)(H1(s) + H2(s))/(1 + H2(s)H3(s)H4(s)).   (3.28)
The result is the same as was obtained through block reduction.
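The elimination in step 3 can also be done symbolically, e.g. with sympy (a sketch reproducing the example above, with the H's treated as symbols):

```python
import sympy as sp

H1, H2, H3, H4 = sp.symbols('H1 H2 H3 H4')
X, X1, X2, Y = sp.symbols('X X1 X2 Y')

# Equations read off the diagram of Fig. 3.7
eqs = [sp.Eq(X1, X - H3 * Y),
       sp.Eq(X2, H1 * X + H2 * X1),
       sp.Eq(Y, H4 * X2)]

# Solve the linear system for X1, X2, Y and form the transfer function Y/X
sol = sp.solve(eqs, [X1, X2, Y], dict=True)[0]
H = sp.simplify(sol[Y] / X)

assert sp.simplify(H - H4 * (H1 + H2) / (1 + H2 * H3 * H4)) == 0
```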

3.2 Stability
3.2.1 Definitions
In the previous chapter, we discussed the boundedness and convergence proper-
ties of signals. Now, we discuss the stability properties of systems. The two sets
of properties are very much related, and rely heavily on concepts derived from
partial fraction expansions. However, they concern distinct objects, namely sig-
nals and systems. We begin with a standard definition of stability and with a
test for stability.
Definition - BIBO stability: a linear time-invariant system is called bounded-
input bounded-output stable (BIBO stable) if the output is bounded for any
bounded input. A system is called BIBO unstable if it is not BIBO stable.

Comments
(a) A system is unstable if there exists a bounded input signal such that the
output is unbounded. The output of the system does not need to be unbounded
for all bounded input signals: only one signal is sufficient. Further, the output
of an unstable system may even be bounded for some unbounded input signals.
(b) The fact below states that a system is BIBO stable if all its poles are in the
open left half-plane. Then, two types of unstable systems may be encountered:
systems that have some repeated poles on the jω−axis and/or some poles in
the open right half-plane, and systems that only have non-repeated poles on the
jω−axis. In the first case, virtually every bounded input yields an unbounded
output. In the second case, only well-chosen inputs produce an unbounded
output.

3.2.2 Properties
Fact - BIBO stability of systems with rational transfer functions
A linear time-invariant system with H(s) = N(s)/D(s) and deg N(s) ≤ deg D(s)
is BIBO stable if and only if all the poles of the transfer function are in the open
left half-plane (Re(s) < 0).

Example 1: H(s) = 1/(s + 1)^2 is stable. The output is bounded for all bounded
inputs.

Example 2: H(s) = 1/(s − 1) is unstable. For x(t) = u(t) (step input), the output
is y(t) = −1 + e^t and is unbounded. The output for X(s) = (s − 1)/(s + 1)^2 is
bounded. However, all bounded inputs whose transforms do not have a zero at
s = 1 will yield unbounded outputs.

Example 3: H(s) = 1/(s^2 + 1) is unstable. However, the only bounded inputs
that yield an unbounded output are those which have poles at s = ±j. If,
for example, x(t) = u(t), then Y(s) = 1/(s(s^2 + 1)) = 1/s − s/(s^2 + 1), and
y(t) = 1 − cos(t), which is bounded. On the other hand, if x(t) = 2 cos(t), then
Y(s) = 2s/(s^2 + 1)^2, and y(t) = t sin(t), which is unbounded.
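The resonance in Example 3 can be reproduced by simulation (a sketch; the time grid and tolerance are choices made here):

```python
import numpy as np
from scipy.signal import lsim

# H(s) = 1/(s^2 + 1), driven at its resonant frequency by x(t) = 2 cos(t)
sys = ([1.0], [1.0, 0.0, 1.0])
t = np.linspace(0, 20, 20001)

_, y, _ = lsim(sys, U=2 * np.cos(t), T=t)

# The response matches y(t) = t sin(t) and grows without bound
assert np.allclose(y, t * np.sin(t), atol=1e-3)
```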

Proof of the fact

The proof given below assumes that the input signals under consideration have
rational transforms. However, the result is true in general.

(a) Pole condition ⇒ BIBO stability. Recall that a signal is bounded if and only
if the poles of its transform are in the OLHP or are non-repeated poles on the
jω−axis. Since the poles of Y (s) are the union of those of X (s) and H(s), and
since all the poles of H(s) are in the OLHP, Y (s) will satisfy the condition for
boundedness if X(s) does.

(b) BIBO stability ⇒ pole condition. The result is proved by contradiction:


if the pole condition is not satisfied, there is a bounded input for which the
output is unbounded. Many input signals may exist, but only one signal needs
to be found. Two cases are considered. If H(s) has some repeated poles on the
jω−axis and/or some poles in the ORHP, let x(t) = cos(ω0 t), where s = jω0 is
any value that is not a zero of H(s). The resulting output is unbounded, since
the partial fraction expansion of the output must include unbounded signals
corresponding to the poles of H(s). If H(s) has non-repeated poles on the
jω−axis, only well-chosen signals produce unbounded outputs. In particular, a
signal x(t) = cos(ω0 t) with a frequency that matches the location of one of the
poles of H(s) on the jω−axis yields an unbounded output, because the partial
fraction expansion of the output must contain imaginary poles with multiplicity
greater than 1.

3.2.3 Non-proper transfer functions


For a non-proper transfer function (deg N(s) > deg D(s)), the response to a step
input includes impulses, so that

H(s) = N(s)/D(s) with deg N(s) > deg D(s) ⇒ H(s) is BIBO unstable.   (3.29)

3.3 Responses to step inputs


3.3.1 General characteristics of step responses
Consider a system with transfer function H(s) = N(s)/D(s). The step response
of the system is the response to a step input

x(t) = xm u(t),   (3.30)

where u(t) = 0 for t < 0 and u(t) = 1 for t ≥ 0. In the s-domain, the step
response is given by

Y(s) = H(s) xm/s = xm N(s)/(sD(s)).   (3.31)
If the magnitude of the step is 1 (xm = 1), the step response is (1/s)H(s)
and is called the unit step response. The unit step response is the integral of the
impulse response. For this reason, the step response is often used to characterize
dynamic systems. It contains complete information about the poles and zeros
of the system and can be used to estimate the transfer function of the system.
The step response is also important because, in many control applications, the
control input changes abruptly and remains constant for long periods of time
(set-point regulation).
General characteristics of the step response can be derived from knowledge of
partial fraction expansions. We assume that the system is BIBO stable. Then,
H(s) cannot have a pole at the origin and the step response is a bounded signal.
A partial fraction expansion of Y (s) will lead to an expression of the form
Y(s) = H(0)xm/s + N1(s)/D(s),   (3.32)
where the first term is obtained from the residue formula and the second term
groups all the contributions of the poles of H(s) (with N1 (s) some polynomial
in s). H(0) is the value of the transfer function evaluated at s = 0, and is finite
by virtue of the stability assumption. Define
Yss(s) = H(0)xm/s,    Ytr(s) = N1(s)/D(s),   (3.33)

where Yss (s) is called the steady-state response of the system and Ytr (s) is
called the transient response of the system. Because of the stability assumption
on H(s), the transient response is an exponentially decaying function in the time
domain. The steady-state response is a constant signal

yss (t) = H(0)xm. (3.34)

H(0) is called the DC gain or steady-state gain of the system. Assuming a


constant input signal and neglecting the transient response terms, the DC gain
is the ratio of the magnitude of the output signal to the magnitude of the input
signal.

3.3.2 Examples
Example 1: consider a first-order system

H(s) = k/(s + a).   (3.35)

The DC gain is k/a and, through a partial fraction expansion, the steady-state
and transient responses can be found to be

yss(t) = (k/a)xm,    ytr(t) = −(k/a)xm e^(−at).   (3.36)

The constant τ = 1/a is usually referred to as the time constant of the system.
[dy/dt]_(t=0) = k xm, so that the intersection of the tangent to the response at t = 0
and the steady-state value H(0)xm = k xm/a occurs for t = 1/a = τ, the time
constant of the system. These properties and the shape of the step response are
shown in Fig. 3.10. For t = τ, y(t) = H(0)xm(1 − e^(−1)), which implies that
the output reaches 63% of its steady-state value after a time equal to the time
constant. Often, a value of time equal to 4τ is taken to be the convergence time,
or settling time. After that time, the output has reached 98% of its steady-state
value.
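These time-constant properties can be verified with scipy.signal.step (a sketch; k = a = 1 and xm = 1 are assumed values):

```python
import numpy as np
from scipy.signal import step

k, a = 1.0, 1.0
tau = 1.0 / a
sys = ([k], [1.0, a])                     # H(s) = k/(s + a), DC gain k/a

t = np.linspace(0, 8 * tau, 801)
_, y = step(sys, T=t)

y_ss = k / a                              # steady-state value for xm = 1
i_tau = np.argmin(np.abs(t - tau))        # sample closest to t = tau
assert abs(y[i_tau] - 0.632 * y_ss) < 0.005   # ~63% at one time constant

i_4tau = np.argmin(np.abs(t - 4 * tau))
assert y[i_4tau] > 0.98 * y_ss                # ~98% at t = 4*tau
```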
Example 2: If the transfer function has a double real pole

H(s) = k/(s + a)^2,   (3.37)

a partial fraction expansion gives

yss(t) = (k/a^2)xm,    ytr(t) = −(k/a)xm t e^(−at) − (k/a^2)xm e^(−at).   (3.38)
Figure 3.10: Step response of a first-order system (the tangent at t = 0 meets the
steady-state value H(0)xm at t = τ = 1/a; y(τ) = 0.63 H(0)xm)

The step responses of the first-order system with pole at −a and of the second-
order system with double pole at s = −a are shown on Fig. 3.11. The responses
are qualitatively similar, but one can see that the delay of the response is roughly
doubled. Also, the derivative at t = 0 is zero for the double pole. The slope is
always zero if the number of poles exceeds the number of zeros by 2 or more.

Figure 3.11: Step responses of a first-order system and of a second-order system
with double pole at the same location
Example 3: If the system has two distinct real poles

H(s) = k/((s + a1)(s + a2)),   (3.39)

a partial fraction expansion gives

yss(t) = (k/(a1 a2))xm,
ytr(t) = (k/(a1(a1 − a2)))xm e^(−a1 t) + (k/(a2(a2 − a1)))xm e^(−a2 t).   (3.40)

The pole with the larger magnitude yields a term that converges faster to zero,
so that the response is usually dominated by the contribution of the pole with
smaller magnitude. Fig. 3.12 shows the response of a first-order system with pole
at −a1 and of a second-order system with poles at −a1 and −a2 = −5a1. Note that the responses are very close. The additional pole creates a small amount of delay, and the slope around t = 0 is zero for the second-order system. However, the first-order system is a good approximation of the second-order system, even though the additional pole is only 5 times farther from the origin.

Figure 3.12: Step responses of a first-order system with pole p and of a second-order system with poles p and 5p
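The dominant-pole approximation can be illustrated numerically; the sketch below uses hypothetical poles at −1 and −5 and normalizes both systems to a DC gain of 1:

```python
# Comparison supporting the dominant-pole approximation: the two-pole step
# response stays close to the single-pole one once the fast mode has decayed.
import math

def y_first_order(t):
    # H(s) = 1/(s+1), unit step response
    return 1.0 - math.exp(-t)

def y_second_order(t):
    # H(s) = 5/((s+1)(s+5)), unit step response obtained from Eq. (3.40)
    return 1.0 - 1.25 * math.exp(-t) + 0.25 * math.exp(-5.0 * t)

# The second-order response starts with zero slope, then tracks the
# first-order response with a small extra delay.
for t in [1.0, 2.0, 4.0]:
    print(t, abs(y_first_order(t) - y_second_order(t)))  # shrinking differences
```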
Example 4: If the system has a pair of complex poles at s = −a ± jb,
H(s) = k/((s + a − jb)(s + a + jb)) = k/(s² + 2as + a² + b²),    (3.41)

a partial fraction expansion gives

yss(t) = H(0) xm,    ytr(t) = −H(0) xm e^{−at} cos(bt) − (a/b) H(0) xm e^{−at} sin(bt),    (3.42)

where

H(0) = k/(a² + b²).    (3.43)
The step response typically exhibits oscillations associated with the sinusoidal
components of the transient response. However, the magnitude of these oscilla-
tions depends on the rate of decay associated with the real part of the complex
pole, as compared to the imaginary part of the pole. Fig. 3.13 shows the re-
sponses for several cases. If the imaginary part is larger than the real part of
the pole in magnitude, oscillations are visible and produce a large overshoot in
the response.
For a small ratio a/b, the response is approximately

y(t) ≃ H(0) xm (1 − e^{−at} cos(bt)),    (3.44)


Figure 3.13: Step responses of systems with complex poles

so that the peak of the response occurs for t ≃ π/b. The percent overshoot is

PO(%) ≃ 100 e^{−aπ/b}.    (3.45)

The formula gives 4% for a/b = 1, 20% for a/b = 1/2, and 35% for a/b = 1/3, which is consistent with the figure. It is typical to define the damping factor ζ and the natural frequency ωn through

ζ = a/√(a² + b²),    ωn = √(a² + b²).    (3.46)

The natural frequency is the magnitude of the pole, and the damping factor is the cosine of the angle between the pole and the negative real axis in the s-domain (see Fig. 3.14).

Figure 3.14: Definition of variables for complex poles

Systems with low damping are those for which

a/b < 1  ⇔  ζ < 0.707.    (3.47)
Figure 3.15: Step responses vs. pole locations

When the damping factor is small, ζ ≃ a/b and ωn ≃ b, so that the oscillation
frequency is close to the natural frequency. An example of a system with a low
damping is a slender robotic arm, such as the one used in the space shuttle.
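As a quick numerical check of the overshoot approximation (3.45) and the definitions in (3.46), with the ratios quoted above:

```python
# Checking PO(%) ~ 100*exp(-a*pi/b) against the values quoted in the text.
import math

def percent_overshoot(a_over_b):
    # approximation of Eq. (3.45)
    return 100.0 * math.exp(-a_over_b * math.pi)

def damping_factor(a, b):
    # zeta = a/sqrt(a^2 + b^2), Eq. (3.46)
    return a / math.sqrt(a * a + b * b)

# Values quoted in the text: about 4%, 20%, and 35%
print(percent_overshoot(1.0))        # ~4.3
print(percent_overshoot(0.5))        # ~20.8
print(percent_overshoot(1.0 / 3.0))  # ~35.1
# a/b = 1 sits exactly at the low-damping boundary zeta = 0.707
print(damping_factor(1.0, 1.0))      # ~0.7071
```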

3.3.3 Effect of poles and zeros on step responses


The characteristics of step responses for systems with single real and complex
poles are shown in Fig. 3.15. Unstable systems are also considered, where the
responses are unbounded. Although the responses of systems with more poles
are not included, the components of the responses depend on the pole locations
in a similar manner. Further, the responses are often characterized by the real
pole or complex pole pair that is farthest to the right in the s-plane, and
is therefore called the dominant pole.
Sometimes, the zeros of the transfer function can have a significant effect on
the step responses. This phenomenon occurs in particular when the magnitude
of a zero is much smaller than the magnitude of the poles. Overshoot may
occur even if the poles are real. Fig. 3.16 shows the response to be expected
of a second-order system with two poles and one zero as the location of the
zero is varied. Without the zero, no overshoot is observed. When the zero
is at the origin, the DC gain of the system is zero. Therefore, the response
of the system converges to zero in the steady-state and consists only of the
transient response. If the zero is not at the origin, but is closer to the origin than
the poles, the response converges to a nonzero value, but exhibits a significant
overshoot (without oscillations).

Figure 3.16: Effect of a zero on step responses

Fig. 3.16 also shows that, when the zero is in the right half-plane, the response may exhibit undershoot. Such a characteristic is undesirable in control systems. Generally, zeros in the right half-plane are called non-minimum-phase zeros and are unfavorable for reasons including, but not limited to, the transient characteristic discussed here. Zeros in the left half-plane are called minimum-phase zeros.
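The undershoot can be illustrated numerically with a hypothetical non-minimum-phase system, H(s) = 2(1 − s)/((s + 1)(s + 2)), which has DC gain 1 and a zero at s = 1 (this example is not from the text; the closed form below comes from a partial fraction expansion of H(s)/s):

```python
# Step response of a two-pole system with one right-half-plane zero.
import math

def y_rhp_zero(t):
    # unit step response of H(s) = 2(1-s)/((s+1)(s+2))
    return 1.0 - 4.0 * math.exp(-t) + 3.0 * math.exp(-2.0 * t)

# The initial slope is negative (y'(0) = -2), so the response first moves in
# the "wrong" direction: the undershoot typical of non-minimum-phase zeros.
print(y_rhp_zero(0.2))   # negative: undershoot
print(y_rhp_zero(10.0))  # ~1, the DC gain
```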

3.4 Responses to sinusoidal inputs


3.4.1 Definition and example
The sinusoidal response of a system is the response to an input signal

x(t) = xm cos(ω0 t + α). (3.48)

It is assumed that the system is BIBO stable, in order to guarantee the bound-
edness of the response. As an example, consider the RL circuit of Fig. 3.1. Let
R = 1 Ω and L = 1 H, and the input voltage v(t) = sin(t), so that

I(s) = [1/(s + 1)] V(s) = [1/(s + 1)] · [1/(s² + 1)].    (3.49)

A partial fraction expansion gives the time-domain function


i(t) = (1/2) e^{−t} − (1/2) cos(t) + (1/2) sin(t).    (3.50)
Similarly to step responses, the first term is an exponentially decaying term
called the transient response. The last two terms are sinusoidal functions at the
frequency of the input, and constitute the steady-state response. In other words

i(t) = itr (t) + iss (t), (3.51)

with
itr(t) = (1/2) e^{−t},    iss(t) = −(1/2) cos(t) + (1/2) sin(t).    (3.52)
Alternatively, iss (t) is also
iss(t) = (1/√2) cos(t + 225°) = (1/√2) sin(t − 45°),    (3.53)
which shows that the steady-state response is a sinusoid with the same frequency
as the input signal, but different magnitude and phase. Note that it was assumed
in the derivation of I(s) that i(0) = 0 (the initial current in the inductor was
zero). An interesting consequence of that assumption is that the signal i(t)
obtained here is the current that would appear in the RL circuit if the voltage
source was connected at t = 0 (as if a switch was closed at t = 0). Although the
input signal was taken to be a sinusoid, the value of the signal for t < 0 is not
considered in the Laplace transform analysis and the initial condition i(0) solely
determines the state of the circuit at t = 0.
The transient, steady-state, and overall current waveforms are shown in
Fig. 3.17 for the system H(s) = 1/(s + 1) and an input x(t) = sin(3t). As
expected, the current is zero at t = 0. An overcurrent of about 50% is observed
within about a second. Such large transient currents are observed in power dis-
tribution systems, and need to be accounted for in the design of the components
and their protection.
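The closed form (3.50) can be checked numerically: it should satisfy the circuit equation di/dt + i = sin t with i(0) = 0 (R = L = 1). A minimal sketch:

```python
# Verify that i(t) = (1/2)e^{-t} - (1/2)cos t + (1/2)sin t solves
# di/dt + i = sin t with i(0) = 0.
import math

def i_exact(t):
    return 0.5 * math.exp(-t) - 0.5 * math.cos(t) + 0.5 * math.sin(t)

def residual(t, h=1e-6):
    # central-difference estimate of di/dt + i - sin(t); should be ~0
    didt = (i_exact(t + h) - i_exact(t - h)) / (2.0 * h)
    return didt + i_exact(t) - math.sin(t)

print(i_exact(0.0))                                          # 0: zero initial current
print(max(abs(residual(t)) for t in [0.5, 1.0, 2.0, 5.0]))   # ~0
```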

3.4.2 General characteristics of steady-state sinusoidal responses
We first consider the case where x(t) = xm cos(ω0t), so that the Laplace trans-
form of the output is given by
Y(s) = H(s) xm s/(s² + ω0²).    (3.54)
Figure 3.17: Transient, steady-state and overall current responses

A partial fraction expansion leads to terms of the form

Y(s) = (1/2) xm H(jω0)/(s − jω0) + (1/2) xm H(−jω0)/(s + jω0) + N2(s)/D(s),    (3.55)

where the first two terms are obtained by the residue method and the last term
groups all the components due to the poles of H(s). H(jω0 ) is the value of the
transfer function evaluated at s = jω0.
As for the step response, assume that the system is BIBO stable and define

Yss(s) = (1/2) xm H(jω0)/(s − jω0) + (1/2) xm H(−jω0)/(s + jω0),    Ytr(s) = N2(s)/D(s),    (3.56)

where Yss (s) and Ytr (s) are the steady-state and transient responses, respec-
tively. In the time domain, the transient response is a signal that decays to zero
exponentially because of the stability assumption on the system.
One has that H(−jω0) = H*(jω0) for a system with a real impulse response, so that

Yss(s) = [xm/(s² + ω0²)] [ s (H(jω0) + H*(jω0))/2 + jω0 (H(jω0) − H*(jω0))/2 ]
       = [xm/(s² + ω0²)] ( s Re(H(jω0)) − ω0 Im(H(jω0)) ).    (3.57)

In the time domain,

yss(t) = Re(H(jω0)) xm cos(ω0 t) − Im(H(jω0)) xm sin(ω0 t),    (3.58)

or

yss(t) = M xm cos(ω0 t + φ),    (3.59)

where

M = |H(jω0)|,    φ = ∡(H(jω0)).    (3.60)

Equation (3.59) highlights the similarity between the input and output sig-
nals in the steady-state. |H(jω0)| is viewed as the gain of the system for sinu-
soidal inputs, because it is the ratio of the magnitude of the output signal to
the magnitude of the input signal. The output signal is shifted in time with
respect to the input signal, with the effect of a delay if φ < 0 and of an advance
if φ > 0. Equation (3.60) shows that the gain and phase relating the input and
output signals are given by the magnitude and angle of the complex number
H(jω0 ). H(jω) is called the frequency response of the system. It is the value of
the transfer function H(s) evaluated at s = jω. It is also the Fourier transform
of the impulse response h(t), assuming that h(t) = 0 for t < 0.
For a general sinusoidal signal

x(t) = xm cos(ω0t + α), (3.61)

a similar approach can be used to find y(t). The steady-state output turns out
to be given by an expression similar to (3.59)

yss (t) = M xm cos(ω0t + α + φ), (3.62)

In particular, if α = −π/2, x(t) = xm sin(ω0 t) and yss(t) = M xm sin(ω0 t + φ). The transient response is also similar, but the partial fraction expansion produces different coefficients for the exponentially decaying functions for different values of α.

3.4.3 Example: first-order system

Let

H(s) = k/(s + a).    (3.63)
Case 1: The response to x(t) = xm cos(ω0 t) is given by

Y(s) = [k/(s + a)] xm s/(s² + ω0²)
     = [−ak/(ω0² + a²)] xm 1/(s + a) + [ak/(ω0² + a²)] xm s/(s² + ω0²) + [ω0 k/(ω0² + a²)] xm ω0/(s² + ω0²).    (3.64)
The first term of the response is the transient response, and corresponds to the
time function
ytr(t) = [−ak/(ω0² + a²)] xm e^{−at}.    (3.65)
The second and third terms constitute the steady-state response, which is given
by
yss(t) = [ak/(ω0² + a²)] xm cos(ω0 t) + [ω0 k/(ω0² + a²)] xm sin(ω0 t)
       = M xm cos(ω0 t + φ),    (3.66)

with the magnitude and phase

M = k/√(ω0² + a²),    φ = ∡(a − jω0).    (3.67)
These quantities can be obtained directly from H(jω0 ) = k/(jω0 + a) using
(3.60). The magnitude decreases monotonically from the value of the DC gain
k/a to zero, as ω varies from 0 to ∞. The phase decreases from 0 degrees to
−90 degrees. The phase is also equal to
φ = −tan⁻¹(ω0/a),    (3.68)
but one must be careful to choose the correct quadrant for the inverse tangent
function.
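The gain and phase can also be obtained by evaluating H(jω0) directly as a complex number; the two-argument arctangent (atan2) selects the correct quadrant automatically. A sketch with illustrative values k = 2, a = 1, ω0 = 3:

```python
# Gain and phase of H(s) = k/(s+a) at s = j*omega0, computed two ways.
import cmath
import math

k, a, w0 = 2.0, 1.0, 3.0
H = k / (1j * w0 + a)

M_direct, phi_direct = abs(H), cmath.phase(H)
M_formula = k / math.sqrt(w0**2 + a**2)   # Eq. (3.67)
phi_formula = -math.atan2(w0, a)          # quadrant-correct form of Eq. (3.68)

print(M_direct, M_formula)      # equal
print(phi_direct, phi_formula)  # equal, between -pi/2 and 0
```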
Case 2: For the response to x(t) = xm sin(ω0 t), the partial fraction expansion
gives
Y(s) = [k/(s + a)] xm ω0/(s² + ω0²)
     = [kω0/(ω0² + a²)] xm 1/(s + a) − [ω0 k/(ω0² + a²)] xm s/(s² + ω0²) + [ak/(ω0² + a²)] xm ω0/(s² + ω0²),    (3.69)
44 Chapter 3. Continuous-time systems

so that the transient response is


ytr(t) = [kω0/(ω0² + a²)] xm e^{−at},    (3.70)
and the steady-state response is
yss(t) = −[ω0 k/(ω0² + a²)] xm cos(ω0 t) + [ak/(ω0² + a²)] xm sin(ω0 t)
       = M xm sin(ω0 t + φ).    (3.71)

As noted earlier, the steady-state response is simply shifted if the input signal is
shifted, but the transient response varies and is not simply shifted. For different
phases of the input signal, there can be large variations in the transient response.
Case 3: The response to x(t) = xm cos(ω0 t + α) can be computed using

xm cos(ω0 t + α) = xm cos(ω0 t) cos(α) − xm sin(ω0 t) sin(α).    (3.72)
Based on the linearity of the system, the response is a linear combination of the
responses computed earlier, with
ytr(t) = [−ak/(ω0² + a²)] xm e^{−at} cos(α) − [kω0/(ω0² + a²)] xm e^{−at} sin(α)
       = [k/(ω0² + a²)] xm e^{−at} (−a cos(α) − ω0 sin(α)).    (3.73)
Similarly

yss (t) = M xm cos(ω0t + φ + α), (3.74)

as expected. It is possible to find values of α for which the transient response is maximized, as well as values for which the transient response is zero. In an electrical circuit where an AC source is suddenly switched on, the transient currents may vary significantly depending on the time of switching.
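This dependence on α can be made concrete: from Eq. (3.73), the coefficient of the decaying exponential vanishes when a cos(α) + ω0 sin(α) = 0, i.e. α = atan2(−a, ω0). A sketch with illustrative values a = 1, ω0 = 3:

```python
# Phase of the input that produces no transient for H(s) = k/(s+a).
import math

a, w0 = 1.0, 3.0
alpha_zero = math.atan2(-a, w0)

def transient_coeff(alpha, k=1.0, xm=1.0):
    # coefficient of e^{-a t} in Eq. (3.73)
    return k * xm / (w0**2 + a**2) * (-a * math.cos(alpha) - w0 * math.sin(alpha))

print(transient_coeff(alpha_zero))                 # ~0: no transient at all
print(transient_coeff(alpha_zero + math.pi / 2))   # extreme value, 90 deg away
```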

3.5 Effect of initial conditions


In the analysis of the simple RL circuit, it was assumed that the current i(t)
was initially zero. A nonzero value could be accounted for, but would require an
additional term in the response. The purpose of this section is to investigate the
effect of such nonzero initial conditions in differential equations. We consider a
general input-output differential equation
dⁿy/dtⁿ + a_{n−1} d^{n−1}y/dt^{n−1} + · · · + a1 dy/dt + a0 y = bn dⁿx/dtⁿ + b_{n−1} d^{n−1}x/dt^{n−1} + · · · + b1 dx/dt + b0 x.    (3.75)

Without loss of generality, we let the first coefficient be equal to 1 (if needed,
both sides can be divided by the first coefficient to get the result). For the
Laplace transform analysis, recall that
y1 = dy/dt  ⇒  Y1(s) = sY(s) − y(0).    (3.76)
Therefore
y2 = d²y/dt² = dy1/dt  ⇒  Y2(s) = sY1(s) − y1(0) = s²Y(s) − s y(0) − ẏ(0),    (3.77)

where we used the notation

dy/dt = ẏ  and  y1(0) = ẏ(0) = [dy/dt]_{t=0}.    (3.78)
The procedure can be extended to higher-order derivatives.
Consider the case of a second-order input/output differential equation
d²y/dt² + a1 dy/dt + a0 y = b2 d²x/dt² + b1 dx/dt + b0 x.    (3.79)
Application of the Laplace transform to both sides yields
s²Y(s) − s y(0) − ẏ(0) + a1 sY(s) − a1 y(0) + a0 Y(s)
= b2 s²X(s) − b2 s x(0) − b2 ẋ(0) + b1 sX(s) − b1 x(0) + b0 X(s).    (3.80)
Note the distinction between X(s), which is the Laplace transform of x(t), and
x(0), which is the initial value of x(t). The transform Y (s) may be deduced to
be

Y(s) = [s y(0) + ẏ(0) + a1 y(0) − b2 s x(0) − b2 ẋ(0) − b1 x(0)] / (s² + a1 s + a0)
     + [(b2 s² + b1 s + b0)/(s² + a1 s + a0)] X(s).    (3.81)
The first term is the response to the initial conditions, and is also called the zero-
input response Yzi (s). The second term is the response to the input, and is also
called the zero-state response Yzs (s). It is the product of the transfer function
H(s) with the transform of the input signal. The following observations may be
made, which also apply to systems of higher order:
46 Chapter 3. Continuous-time systems

• the response is the sum of the response due to the input and the response
due to the initial conditions. The two components are independent. The
response to the input is the response that is obtained for the same input
but zero initial conditions, and the response to the initial conditions is the
response that is obtained for the same initial conditions but zero input.

• the initial conditions are composed of the values of the input and output
variables as well as their derivatives at t = 0. All the relevant history of
the system for t < 0 is contained in those values, which may be viewed as
the state of the system at t = 0.

• the transfer function and the response to initial conditions are rational
functions of s with the same denominators. As a consequence, the response
to the initial conditions is similar to the transient response defined earlier
for step and sinusoidal inputs. Both can be lumped together as a “total”
transient response. Boundedness of this part of the response is associated
with the concept of internal stability.

• the response to the initial conditions converges to zero if the poles of the
system are in the open left half-plane. This property is referred to as
asymptotic stability. The condition for asymptotic stability is the same as
for BIBO stability.

• the response to the initial conditions is bounded if the poles of the system
are in the open left half-plane, or are non-repeated poles on the jω−axis.
This property is sometimes referred to as marginal stability. However, a
marginally stable system is unstable from the BIBO point of view.

• singular problems are encountered if a pole/zero cancellation occurs in the rational functions. For example, the response to some initial conditions may be bounded even if the transfer function has poles in the open right half-plane. Conversely, unstable poles may be cancelled in the transfer function, resulting in a BIBO stable system. In that case, however, the response to initial conditions may still be unbounded. Generally, cancellations of undesirable poles are viewed as conditions to be avoided, because of initial conditions and because cancellations can never be made exact in practice.
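The superposition of the zero-input and zero-state responses can be illustrated with a hypothetical example, d²y/dt² + 3 dy/dt + 2y = u with u(t) = 1, y(0) = 1, ẏ(0) = 0; the closed forms below come from separate partial fraction expansions:

```python
# Zero-input / zero-state decomposition for y'' + 3y' + 2y = u.
import math

def y_zero_input(t):
    # response to the initial conditions alone (u = 0)
    return 2.0 * math.exp(-t) - math.exp(-2.0 * t)

def y_zero_state(t):
    # response to the unit-step input alone (zero initial conditions)
    return 0.5 - math.exp(-t) + 0.5 * math.exp(-2.0 * t)

def y_total(t):
    # the complete response is the sum of the two independent components
    return y_zero_input(t) + y_zero_state(t)

print(y_total(0.0))   # 1.0, the initial condition
print(y_total(10.0))  # ~0.5, the steady state set by the input (DC gain 1/2)
```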

In summary, the definitions are:

Asymptotically stable: yzi(t) → 0 as t → ∞
Marginally stable: yzi(t) bounded
Internally unstable: yzi(t) unbounded

while the tests on the poles are:

Asymptotically stable: all poles with Re(s) < 0
Marginally stable: all poles with Re(s) < 0, plus possibly non-repeated poles on the jω-axis
Internally unstable: at least one pole with Re(s) > 0, or a repeated pole on the jω-axis

Asymptotic stability is equivalent to bounded-input bounded-output stability.

3.6 State-space representations


3.6.1 Example of a state-space model
State-space models are similar to input/output differential equations. However,
they arise more naturally from the modelling of physical systems. For example,
consider the RLC circuit shown in Fig. 3.18. Using standard circuit analysis
tools, the following equations may be written
u = L dx1/dt + x2 + y
x1 = C dx2/dt
y = R x1.    (3.82)

Note that the voltage u(t) is not a step function here, but the input to the
system, in accordance with standard state-space notation.
The two differential equations for the circuit may be written as
dx1/dt = −(R/L) x1 − (1/L) x2 + (1/L) u
dx2/dt = (1/C) x1,    (3.83)

or, in matrix form,

[dx1/dt; dx2/dt] = [−R/L  −1/L; 1/C  0] [x1; x2] + [1/L; 0] u,    (3.84)
Figure 3.18: RLC circuit

and the output of the system is


y = [R  0] [x1; x2].    (3.85)

Equations (3.84) and (3.85) constitute a state-space model for the RLC circuit.

3.6.2 General form of a state-space model


In general, a state-space representation has the form
dx
= Ax + Bu
dt
y = Cx + Du, (3.86)

where:
x is a column vector of dimension n, called the state vector;
u is a scalar signal, the input of the system;
y is a scalar signal, the output of the system;
A is an n × n matrix, B is a column vector of dimension n;
C is a row vector of dimension n, and D is a scalar.
The dimensions of all the elements and of the products are shown in Fig. 3.19.
Generally, dx/dt is denoted ẋ.
To obtain a state-space model for circuits, a systematic procedure consists
in:

1. defining a state vector with the voltages on the capacitors and the currents
in the inductors as components.

2. writing equations using Kirchhoff’s voltage law, Kirchhoff’s current law, and element descriptions.

3. converting the equations to state-space form.


Figure 3.19: Matrix products in the state-space model

Many other physical systems can also be represented with state-space models,
using standard modelling techniques.

3.6.3 State-space analysis

Some simple yet general results may be obtained by applying the Laplace trans-
form to the state-space model. The equations for the state-space model are

ẋ = Ax + Bu
y = Cx + Du, (3.87)

so that, in the s-domain

sX(s) − x(0) = AX(s) + BU(s)
Y(s) = CX(s) + DU(s).    (3.88)

The first (vector) equation gives

sX(s) − AX(s) = (sI − A)X(s) = x(0) + BU(s), (3.89)


where I is the identity matrix with dimension n × n. The transform of the output is therefore

Y(s) = C(sI − A)^{-1} x(0) + [C(sI − A)^{-1} B + D] U(s),    (3.90)

where the first term is the response to the initial conditions (zero-input response Yzi(s)) and the second is the response to the input (zero-state response Yzs(s)).

The dimensions of the terms in the expression are, in the order in which they
appear

1 × 1 = (1 × n) × (n × n) × (n × 1)
+ ((1 × n) × (n × n) × (n × 1) + (1 × 1)) × (1 × 1). (3.91)

As in the case of input/output differential equations, the output Y(s) obtained from the state-space model has two terms: a term due to initial conditions, and a term due to the input. The transfer function is found to be

H(s) = C(sI − A)^{-1} B + D.    (3.92)

Although this transfer function may be complicated to compute, the poles are
determined by a simple equation related to the denominator of the matrix (sI − A)^{-1}. Specifically, the denominator is det(sI − A) and, therefore, the poles are
given by the roots of det(sI − A) = 0. These roots are called the eigenvalues of
the matrix A, and may be computed using a mathematical software package.
Example: for the RLC circuit,
A = [−R/L  −1/L; 1/C  0],   B = [1/L; 0],   C = [R  0],   D = 0.    (3.93)
Using the formula for the inverse of a 2 × 2 matrix

M^{-1} = [M11  M12; M21  M22]^{-1} = (1/(M11 M22 − M21 M12)) [M22  −M12; −M21  M11],    (3.94)
one finds that the transfer function is

H(s) = [R  0] [s + R/L  1/L; −1/C  s]^{-1} [1/L; 0]
     = (1/(s² + (R/L)s + 1/LC)) [R  0] [s  −1/L; 1/C  s + R/L] [1/L; 0]
     = (R/L)s / (s² + (R/L)s + 1/LC).    (3.95)

Note that the elements of the circuit are connected in series, so that the impedance of the circuit is

Z(s) = V(s)/I(s) = sL + 1/(sC) + R = (s²LC + sRC + 1)/(sC),    (3.96)

so that the transfer function can be computed independently to be

H(s) = RI(s)/V(s) = sRC/(s²LC + sRC + 1),    (3.97)

which is the same result. Note that there is a zero in the transfer function at s = 0, because of the blocking of DC signals by the capacitor.
The response to initial conditions is

C(sI − A)^{-1} x(0) = R (s x1(0) − (1/L) x2(0)) / (s² + (R/L)s + 1/LC),    (3.98)

where x1(0) is the current in the inductor and x2(0) is the voltage on the capacitor, both at t = 0. The denominator in the expressions is det(sI − A) = s² + (R/L)s + 1/LC, and the roots of the polynomial are the poles of the system.
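The computation above can be cross-checked numerically. The sketch below assumes illustrative values R = L = C = 1 and evaluates H(s) = C(sI − A)^{-1}B + D at an arbitrary test point, using the 2 × 2 inverse formula (3.94):

```python
# Cross-check of the RLC transfer function derived from the state-space model.
R = L = Cap = 1.0
A = [[-R / L, -1.0 / L], [1.0 / Cap, 0.0]]
B = [1.0 / L, 0.0]
C = [R, 0.0]
D = 0.0

def H_state_space(s):
    # (sI - A) and its inverse via the 2x2 formula (3.94)
    m11, m12 = s - A[0][0], -A[0][1]
    m21, m22 = -A[1][0], s - A[1][1]
    det = m11 * m22 - m12 * m21          # det(sI - A): its roots are the poles
    x0 = (m22 * B[0] - m12 * B[1]) / det
    x1 = (-m21 * B[0] + m11 * B[1]) / det
    return C[0] * x0 + C[1] * x1 + D

def H_closed_form(s):
    # (R/L)s / (s^2 + (R/L)s + 1/(LC)), Eq. (3.95)
    return (R / L) * s / (s * s + (R / L) * s + 1.0 / (L * Cap))

s = 2.0 + 1.0j
print(H_state_space(s), H_closed_form(s))  # identical values
```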

3.6.4 State-space realizations


An interesting fact is that a state-space model can always be created that “realizes” a transfer function H(s) = N(s)/D(s), provided that deg N(s) ≤ deg D(s). The procedure works as follows. First, use polynomial division to divide N(s) by D(s), so that N(s) = q0 D(s) + R(s), with deg R(s) < deg D(s), and the quotient q0 is a scalar. Then, denoting

R(s)/D(s) = (b_{n−1} s^{n−1} + · · · + b1 s + b0) / (sⁿ + a_{n−1} s^{n−1} + · · · + a1 s + a0),    (3.99)

the matrices of a state-space realization are

A = [0  1  0  · · ·  0; 0  0  1  · · ·  0; ...; 0  · · ·  · · ·  0  1; −a0  −a1  · · ·  −a_{n−1}],   B = [0; ...; 0; 1]
C = [b0  · · ·  b_{n−1}],   D = q0.    (3.100)

This state-space realization can be implemented using integrators, multipliers, and summing junctions, using the diagram of Fig. 3.20.
Figure 3.20: Realization of a linear time-invariant system using integrators

To prove that the state-space system indeed has the required transfer function, note that the equations of the system are

ẋ1 = x2,  ẋ2 = x3,  · · · ,  ẋ_{n−1} = xn
ẋn = −a0 x1 − · · · − a_{n−1} xn + u
y = b0 x1 + · · · + b_{n−1} xn + q0 u,    (3.101)

so that, applying the Laplace transform,

X2(s) = sX1(s),  . . . ,  Xn(s) = s^{n−1} X1(s),    (3.102)

and
X1(s) = U(s) / (sⁿ + a_{n−1} s^{n−1} + · · · + a1 s + a0).    (3.103)

Therefore
Y(s) = [(b_{n−1} s^{n−1} + · · · + b1 s + b0) / (sⁿ + a_{n−1} s^{n−1} + · · · + a1 s + a0)] U(s) + q0 U(s)
     = [R(s)/D(s) + q0] U(s)
     = [N(s)/D(s)] U(s),    (3.104)

which is the desired result.
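As a sanity check of the canonical form (3.100), the sketch below realizes the hypothetical transfer function H(s) = 1/(s² + 3s + 2), integrates the state equations with a simple forward-Euler scheme for a unit-step input, and compares with the closed-form step response 1/2 − e^{−t} + (1/2)e^{−2t}:

```python
# Controllable canonical realization of H(s) = 1/(s^2 + 3s + 2), checked by
# numerical integration against the exact step response.
import math

a0, a1 = 2.0, 3.0        # D(s) = s^2 + a1*s + a0
b0, b1 = 1.0, 0.0        # N(s) = b1*s + b0, q0 = 0 (strictly proper)
A = [[0.0, 1.0], [-a0, -a1]]
B = [0.0, 1.0]
C = [b0, b1]

def simulate(t_end, dt=1e-4):
    # forward-Euler integration of xdot = A x + B u, u = 1, x(0) = 0
    x = [0.0, 0.0]
    for _ in range(int(t_end / dt)):
        u = 1.0
        dx0 = A[0][0] * x[0] + A[0][1] * x[1] + B[0] * u
        dx1 = A[1][0] * x[0] + A[1][1] * x[1] + B[1] * u
        x = [x[0] + dt * dx0, x[1] + dt * dx1]
    return C[0] * x[0] + C[1] * x[1]

def y_exact(t):
    return 0.5 - math.exp(-t) + 0.5 * math.exp(-2.0 * t)

print(simulate(2.0), y_exact(2.0))  # agree to roughly the step size
```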



For example, the transfer function of the RLC circuit considered earlier

H(s) = (R/L)s / (s² + (R/L)s + 1/LC),    (3.105)

may be realized using

A = [0  1; −1/LC  −R/L],   B = [0; 1],   C = [0  R/L],   D = 0.    (3.106)

Note that this state-space representation is different from the one that gave rise
to the transfer function. Indeed, a state-space model is not unique. For a given
state-space model, another model can be obtained by applying what is called a
similarity transformation. A new state z is defined through

z = P x, (3.107)

where P is an invertible matrix. Then

ż = P ẋ = P Ax + P Bu = P A P^{-1} z + P Bu
y = Cx + Du = C P^{-1} z + Du,    (3.108)

so that z is the state of a model with matrices Ā = P A P^{-1}, B̄ = P B, C̄ = C P^{-1}, D̄ = D, and identical transfer function. It can be shown that if two state-space realizations implement the same transfer function without pole/zero cancellations in H(s), they are related by a similarity transformation.
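The invariance of the transfer function under similarity transformations can be checked directly; the sketch below uses a hypothetical 2 × 2 realization and P = [[1, 1], [0, 1]]:

```python
# Similarity transformations leave H(s) = C (sI - A)^{-1} B unchanged.
def mat2_mul(M, N):
    # product of two 2x2 matrices
    return [[M[0][0]*N[0][0] + M[0][1]*N[1][0], M[0][0]*N[0][1] + M[0][1]*N[1][1]],
            [M[1][0]*N[0][0] + M[1][1]*N[1][0], M[1][0]*N[0][1] + M[1][1]*N[1][1]]]

def H_eval(A, B, C, s):
    # C (sI - A)^{-1} B for the 2x2 case, via the explicit inverse (3.94)
    m = [[s - A[0][0], -A[0][1]], [-A[1][0], s - A[1][1]]]
    det = m[0][0]*m[1][1] - m[0][1]*m[1][0]
    x = [(m[1][1]*B[0] - m[0][1]*B[1]) / det, (-m[1][0]*B[0] + m[0][0]*B[1]) / det]
    return C[0]*x[0] + C[1]*x[1]

A = [[0.0, 1.0], [-2.0, -3.0]]
B = [0.0, 1.0]
C = [1.0, 0.0]
P = [[1.0, 1.0], [0.0, 1.0]]
Pinv = [[1.0, -1.0], [0.0, 1.0]]   # inverse of P

Abar = mat2_mul(mat2_mul(P, A), Pinv)                                  # P A P^{-1}
Bbar = [P[0][0]*B[0] + P[0][1]*B[1], P[1][0]*B[0] + P[1][1]*B[1]]      # P B
Cbar = [C[0]*Pinv[0][0] + C[1]*Pinv[1][0], C[0]*Pinv[0][1] + C[1]*Pinv[1][1]]  # C P^{-1}

s = 1.0 + 2.0j
print(H_eval(A, B, C, s), H_eval(Abar, Bbar, Cbar, s))  # identical
```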
Realizability: to be realizable as a state-space system, a transfer function must be proper. A non-proper transfer function is realizable as an input-output differential equation, but is problematic in practice because the magnitude of the frequency response is unbounded for increasing frequencies. Filtering may be added to approximate the system as a proper transfer function. For example, H(s) = s is implemented as s · a/(s + a), where a > 0 is large. The approximate transfer function can be implemented as a state-space system.

3.7 Problems
Problem 3.1: A model of a brush DC motor without load is
L di/dt = v − Ri − Kω
J dω/dt = Ki,    (3.109)

where R (Ω) is the rotor resistance, L (H) is the rotor inductance, K (N m/A
or V s) is the motor torque constant (also the back-emf constant), J (kg m2 ) is
the inertia of the motor. The input of the system is the voltage v (V) applied
to the motor and the output is the rotor velocity ω (rad/s). The current i (A)
is considered to be an internal variable (or “state”).
(a) Find the transfer function from v to ω. Give the values of the poles and zeros
of the transfer function.
(b) Find the approximate transfer function that is obtained when L = 0, and
give the values of its poles and zeros.
(c) Compare the numerical values obtained for parts (a) and (b) when R = 0.7 Ω, L = 2.5 × 10⁻³ H, K = 0.07 N m/A, and J = 5.7 × 10⁻⁵ kg m².
Problem 3.2: (a) Find the transfer function from X(s) to Y (s) for the system
shown in Fig. 3.21.

Figure 3.21: System for problem 3.2, part (a)

(b) Repeat part (a) for the system shown in Fig. 3.22.

Figure 3.22: System for problem 3.2, part (b)

Problem 3.3: Determine which transfer functions are stable.


(a) H(s) = (s − 1)/(s + 2)²
(b) H(s) = 1/(s² + 4)
(c) H(s) = s/(s + 3)²
(d) H(s) = s/(s² − 4)
(e) H(s) = 1/(s(s + 1))
(f) H(s) = 1/(s²(s + 1))
For the unstable systems, give an example of a bounded input that yields an
unbounded output.
Problem 3.4: For the systems given below, calculate the DC gain. Then,
calculate the step responses using partial fraction expansions and compare the
steady-state values to the values predicted by the DC gain.
(a) H(s) = 2/(s² + 2s + 1)
(b) H(s) = (−s − 2)/(s² + 2s + 2)

Figure 3.23: Circuit for problem 3.5

Problem 3.5: Consider the circuit of Fig. 3.23. All the initial conditions in the circuit are zero.
(a) Using complex impedances, calculate the transfer function from v1 to v2. Show that H(s) = 1/(s² + 2s + 2) when R = L = C = 1.
(b) Using partial fraction expansions, calculate the response v2 (t) to v1 (t) =
5 cos(t). Indicate which part of the response is the transient response and which
part is the steady-state response.
(c) Calculate the steady-state response using the frequency response, and com-
pare the results to those of part (b).
(d) Is there a frequency ω such that the steady-state response to cos(ωt) is zero
for all t ? Is there a frequency ω such that the steady-state response to cos(ωt)
is proportional to sin(ωt)?
Problem 3.6: (a) Write a state-space description for the circuit of problem 3.5.
(b) Calculate the transfer function using H(s) = C(sI −A)−1B +D and compare
the result to the result obtained in problem 3.5.

(c) Calculate the additional response due to an initial current in the inductor
and an initial voltage on the capacitor using C(sI − A)−1 x(0). Give an estimate
of the time required for the transient voltage to decay to negligible values when
R = L = C = 1.
Problem 3.7: (a) Calculate the Laplace transform Y (s) of the solution of the
differential equation

d²y(t)/dt² + 4 dy(t)/dt + 29 y(t) = cos(t),    (3.110)

with initial conditions y(0), ẏ(0).


(b) Without performing a partial fraction expansion, sketch the response to the
initial conditions (i.e., the zero-input response).
Problem 3.8: Write a state-space description for the circuit shown in Fig. 3.24

Figure 3.24: Circuit for problem 3.8

The input is v1 and the output is v2. Give the equation that must be solved to
find the poles of the system and solve it for R = L = C = 1.
Problem 3.9: (a) Find the response y(t) of the system with transfer function H(s) = 1/(s(s + 1)) and input x(t) = 1.
(b) Find the response y(t) of the system with transfer function H(s) = 1/(s(s + 1)) and input x(t) = sin(t).
(c) Is a system BIBO stable if its response to a step input is bounded?
(d) Is a system BIBO unstable if its response to a step input is unbounded?
(e) Is the response of H(s) = 1/((s + 1)²(s + 4)) to X(s) = (s + 3)/(s(s + 4)) bounded? Does it converge to a steady-state value? If so, to what value?
(f) Is the response of H(s) = 1/((s + 1)²(s + 4)) to X(s) = 1/(s + 1)² bounded? Does it converge to a steady-state value? If so, to what value?
Problem 3.10: Find the transfer function H(s) = Y (s)/X(s) for the system
of Fig. 3.25.
Figure 3.25: System for problem 3.10

Problem 3.11: (a) Give the transform Y(s) for the system of Fig. 3.26. Assume that the input has transform U(s), and that the integrators have initial conditions x1(0) and x2(0).

Figure 3.26: System for problem 3.11

(b) Give the steady-state value of the output when the input u(t) = 5 for all t
(if the limit does not exist, indicate why).
Problem 3.12: (a) Find the transfer function W (s)/R(s) for the system of
Fig. 3.27.
(b) Given that the steady-state response of P (s) is yss (t) = sin(10t − 60◦ ) when
x(t) = sin(10t), what is the steady-state response wss (t) of the system of part (a)
when r(t) = sin(10t)? Give the condition that P (s) must satisfy for the result
to be true.
Problem 3.13: (a) Write a state-space model for the circuit of Fig. 3.28 and,
using the A matrix, give the polynomial whose roots are the poles of the system.
(b) Using the method of your choice (standard circuit calculations using complex
impedances are recommended), obtain the transfer function from u to y for the
circuit of part (a). Give the DC gain of the system and the location of the zero(s).

Figure 3.27: System for problem 3.12

Figure 3.28: Circuit for problem 3.13
Problem 3.14: Calculate the transfer function P (s) = Y (s)/X(s) for the
system shown in Fig. 3.29.
Problem 3.15: (a) Calculate the transient response ytr (t) associated with the
output y(t) of a system with transfer function
P(s) = s/(s + 1)²    (3.111)

and input x(t) = 2 cos(t) (the steady-state response is not needed).


(b) Without performing a partial fraction expansion, give the value of the steady-
state output yss (t) of the system of part (a) for a constant input x(t) = 2. If
the steady-state output does not exist, explain why.
(c) Without performing a partial fraction expansion, give the value of the steady-
state response of the system of part (a) to an input x(t) = 5 cos(2t). If the
steady-state response does not exist, explain why.
(d) A system with transfer function P1(s) has steady-state response 2 cos(t − 30°) when an input x(t) = cos(t) is applied to the system. Another system with transfer function P2(s) has steady-state response 3 sin(t + 60°) when an input x(t) = sin(t) is applied. What is the steady-state response of the cascade system P(s) = P1(s)P2(s) if an input x(t) = cos(t) is applied to the system?

[Figure 3.29: System for problem 3.14 — interconnection of blocks H1(s), H2(s), H3(s) between x and y]

Problem 3.16: (a) For the circuit of Fig. 3.30, calculate the voltage v2(t) that is observed for t ≥ 0 if v1(t) = 15 V, R = 10 Ω, C = 100 µF, and the initial voltage on the capacitor is vC(0) = 5 V. Sketch the voltage v2(t), being careful to label the axes precisely.

[Figure 3.30: Circuit for problem 3.16 (a) — source v1, resistor R, capacitor C with voltage vC, output v2]

(b) What are the poles of a system whose state-space representation is such that:

• the matrix A = 0,

• the matrix A = kI (the identity matrix multiplied by a constant k).

Problem 3.17: (a) Indicate whether the following system is BIBO stable.
H(s) = (s − 1)/[(s + 2 + j)^2 (s + 2 − j)^2]. (3.112)
(b) Indicate whether the following system is BIBO stable.
H(s) = (s + 1)/[(s + j)(s − j)]. (3.113)

(c) Indicate whether the signal with the following transform is bounded, and
whether it converges.
X(s) = (s^2 + 9)/[(s^2 − 1)(s^2 + 4)]. (3.114)

(d) A signal x(t) = cos(4t) is applied to a system with transfer function H(s) =
1/(s2 + 4). Is the output bounded?
(e) A signal that converges to zero is applied to a BIBO stable system. The
transform of the signal and the transfer function of the system are proper, ratio-
nal functions of s. Indicate whether the output of the system: (1) is bounded,
(2) converges, and (3) converges to zero.
Problem 3.18: (a) Find the transfer function H(s) = Y (s)/X(s) for the system
of Fig. 3.31.

[Figure 3.31: System for problem 3.18 — interconnection of two integrator blocks 1/s between x and y]

(b) What can you say about the steady-state value of the output yss (t) for a
constant input x(t) = 5 ?
Problem 3.19: (a) Using the frequency response, determine what the steady-
state output yss (t) is for an input x(t) = sin(t) and a system with transfer
function
H(s) = −(s − 1)/(s + 1)^2. (3.115)
(b) For the system and input signal of part (a), use partial fraction expansions to
determine the transient response ytr (t) (only the transient response is needed).
Problem 3.20: Write a state-space model for the circuit of Fig. 3.32.
Problem 3.21: (a) Consider the system with input x and output y described
by the differential equation
d^2y/dt^2 = ay + bx. (3.116)

[Figure 3.32: Circuit for problem 3.20 — source u, resistor R, capacitor C, inductors L, with states x1, x2, x3 and output y]

Give the Laplace transform of the response to initial conditions, or zero-input response Yzi(s).
(b) Give the time-domain function yzi (t) for a = −1 and for a = 1.
Problem 3.22: (a) By means of the Laplace transform, find the solutions x1(t)
and x2(t) of the following system

dx1(t)/dt = x2
dx2(t)/dt = −2x1 − 3x2 + u(t), (3.117)
where x1(0) = 1, x2(0) = 0, and u(t) is a step input of magnitude 1.
(b) Write a state-space realization for the system in Fig. 3.33 and give the values
of the poles of the system.

[Figure 3.33: System for problem 3.22 (b) — block diagram with integrators, a gain of 2, and states x1, x2, x3 between u and y]


Chapter 4
Stability and performance of
control systems

4.1 Control system characteristics


A standard control system is shown in Fig. 4.1. The following elements may be
recognized:

• P (s) is the plant, or system to be controlled

• C(s) is the controller, compensator, or control system

• y is the plant output

• u is the control input

• r is the reference input

• e is the tracking error

The objective of control system design is to find a compensator such that the plant output matches the reference input as closely as possible. Then, the tracking error will be zero, or close to zero. The reference input may be specified by a human operator or computed automatically by a computer.
Practically, most plants are physical systems and are best described by
continuous-time models. On the other hand, control systems are typically
computer-based, and operate in a discrete-time domain associated with a cer-
tain sampling frequency. Sensors must be chosen to provide the desired mea-
surements of the plant output(s), and actuator(s) must provide the required
power to drive the system. Analog to digital (A/D) and digital to analog (D/A)
converters transform the signals between the continuous-time and discrete-time
domains. Overall, the physical structure of a control system is shown in Fig. 4.2.

63

[Figure 4.1: Basic control system — compensator C and plant P in a unity-feedback loop with signals r, e, u, y]

[Figure 4.2: Practical implementation of control system — operator, computer (compensator), D/A converter, actuator, system, sensor, A/D converter]

The objectives of the control system are to:

• maintain the stability of the system (either by preserving the inherent stability of a system or by stabilizing the system if it is unstable).

• ensure the tracking of reference inputs (including convergence to steady-state reference values as well as fast and smooth transient response to varying inputs).

• reject disturbances affecting the system.

• be sufficiently insensitive to plant uncertainties and time variations.

• tolerate the presence of noise on the measurements.

One distinguishes between feedforward and feedback control systems (both shown
in Fig. 4.3). A feedforward control system typically implements an approximation
of the inverse of the plant. Generally, the advantages of a feedforward control
system are that stability is easier to maintain, and that no output sensor is
needed (hence, the system is insensitive to measurement noise). However, feed-
back systems are typically preferred because the effect of disturbances may be
reduced, and plant variations or uncertainties may be compensated for. Feed-
back systems are also the only option available for unstable plants. In practice,
control systems are often a blend of feedforward and feedback control.

[Figure 4.3: Feedforward and feedback control — feedforward (prefiltering): C followed by P in series; feedback: C and P in a loop]

For stability and for an adequate transient response, the desired locations of
closed-loop poles are shown in Fig. 4.4. The poles should be sufficiently far in
the left half-plane that the response of the system is fast, and sufficiently close
to the real axis that the responses do not exhibit large transient oscillations.

[Figure 4.4: Region for desirable pole locations in the s-plane — bounded by damping and settling-time constraints]

4.2 Proportional control


A simple controller is the proportional control system

C(s) = kP , (4.1)

where kP > 0 is a fixed gain. The feedback system is shown on Fig. 4.5 for a
plant
P(s) = 1/(s + 1). (4.2)
A feedforward gain k0 was added that will be discussed shortly.
The closed-loop transfer function is given by
PCL(s) = Y(s)/R(s) = k0 kP / (s + 1 + kP). (4.3)

[Figure 4.5: Proportional control of a first-order plant — feedforward gain k0, proportional gain kP, plant 1/(s + 1) in a feedback loop from r to y]

The transfer function has a single pole at

s = −1 − kP . (4.4)

The system is stable for all kP > 0, and the DC gain of the transfer function is

PCL(0) = k0 kP / (1 + kP). (4.5)
The feedforward gain was inserted in Fig. 4.5 so that the DC gain could be made
equal to 1 by setting

k0 = (1 + kP)/kP. (4.6)
Then, the output will track a constant reference input in the steady-state.
In closed-loop, the original pole at s = −1 moves to an arbitrary value
determined by kP . The closed-loop system response can be made to respond
faster. For example, for kP = 1, k0 = 2,

Y(s)/R(s) = 2/(s + 2), (4.7)

which means that the closed-loop system responds twice as fast as the original
system. The input signal is

U(s) = [2(s + 1)/(s + 2)] R(s). (4.8)
The signal is the same as if a feedforward controller was used to cancel the
pole of the plant and replace it by a faster pole. However, there are fundamen-
tal differences between moving a pole by pole/zero cancellation and by using
feedback.
For a unit step input R(s) = 1/s,

U(s) = 1/s + 1/(s + 2) ⇔ u(t) = 1 + e^(−2t). (4.9)

[Figure 4.6: Plant output (left) and control input (right) for the open-loop system (dashed) and with proportional feedback (solid)]

The response of the system and the input signal are shown on Fig. 4.6 for the
open-loop and closed-loop systems. One finds that the response is accelerated,
although the result is obtained by applying a much larger input signal at the
beginning of the response.
In practice, a pole cannot be moved arbitrarily far in the left half-plane due
to input constraints, as well as other factors. However, feedback is helpful to
increase the speed of response within limits, to improve damping, or to stabilize a
system. Fig. 4.5 required the addition of a feedforward gain to achieve tracking.
An alternative and often preferable solution consists in using a controller with
a pole at s = 0.

4.3 Steady-state error and integral control


4.3.1 Tracking of constant reference inputs

[Figure 4.7: Feedback control system — C(s) and P(s) in a unity-feedback loop with signals r, e, u, y]



A standard control system is shown in Fig. 4.7, where we assumed that the
plant and control systems are linear time-invariant and described by transfer
functions P (s) and C(s), respectively. A significant signal in this system is the
tracking error

e(t) = r(t) − y(t), (4.10)

which is expected to remain close to zero. The steady-state error is defined to be

ess = lim(t→∞) e(t), (4.11)

assuming that the limit exists. The reference input r(t) is taken to be constant,
i.e.,
r(t) = rm, R(s) = rm/s. (4.12)
It is typical for the reference input of a control system to be constant for rel-
atively long periods of time and for the steady-state error to be a significant
consideration. The infinite-time limit is really an approximation of time periods
that are long compared to the transient response of the system. For example,
the reference speed of a cruise control system may remain constant for minutes,
with the speed itself converging within a few seconds.
The Laplace transform may be used to analyze the feedback system of
Fig. 4.7. The tracking error is given by

E(s) = R(s) − Y (s) = R(s) − P (s)C(s)E(s), (4.13)

so that
E(s) = [1/(1 + P(s)C(s))] R(s). (4.14)
According to the analysis of step responses, the steady-state error for a constant
reference input is
ess = [1/(1 + P(0)C(0))] rm, (4.15)

where the factor multiplying rm is the DC gain of the transfer function from r to e.
Recall that the closed-loop system must be stable for the steady-state error to
be well-defined.
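Equation (4.15) needs only the DC gains of the plant and compensator. As a small illustrative check (the numbers here are hypothetical, not from the text): with P(s) = 1/(s + 1), so P(0) = 1, and proportional control C(s) = 9, the closed loop is stable (pole at s = −10) and a unit step leaves a 10% steady-state error.

```python
def steady_state_error(P0, C0, rm=1.0):
    # eq. (4.15): steady-state tracking error for a constant reference rm;
    # valid only if the closed-loop system is stable
    return rm / (1.0 + P0 * C0)

print(steady_state_error(P0=1.0, C0=9.0))   # 0.1
```

Raising the proportional gain shrinks the error but never eliminates it; a pole at s = 0 is needed for that, as shown next.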
Alternatively, one could calculate Y (s) using
Y(s) = [P(s)C(s)/(1 + P(s)C(s))] R(s) (4.16)

and the steady-state plant output would be


lim(t→∞) y(t) = [P(0)C(0)/(1 + P(0)C(0))] rm, (4.17)

where the factor multiplying rm is the DC gain of the transfer function from r to y.

For the output signal to converge to the reference input, the DC gain of the
transfer function from r to y should be equal to 1. It is easy to check that, since
ess = rm − yss , the result is the same as the one obtained with (4.15).
Next, we define perfect tracking as the condition in which ess = 0, or yss = rm. Let

P(s) = np(s)/dp(s), C(s) = nc(s)/dc(s), (4.18)

where np (s), dp (s), nc (s), and dc (s) are polynomials. The condition for perfect
tracking is that
1/(1 + P(0)C(0)) = dp(0)dc(0) / [np(0)nc(0) + dp(0)dc(0)] = 0. (4.19)

Thus, the condition is satisfied if and only if

dp (0) = 0 or dc (0) = 0. (4.20)

In other words, perfect tracking of constant reference inputs is achieved if:

• the closed-loop system is stable.

• either P(s) has a pole at s = 0 or C(s) has a pole at s = 0.

There is also a technical requirement that neither P (s) nor C(s) have a zero
at s = 0. Obviously, if the plant has a zero at s = 0, i.e., has zero DC gain,
there is no possibility of tracking constant reference inputs.

4.3.2 Rejection of constant disturbances


The problem of rejecting constant disturbances turns out to be similar, although
not identical, to the problem of perfect tracking. Constant disturbances may be
caused by wind in the case of an autopilot for aircraft, or by the slope of the
road in the case of a cruise control system. A similar problem formulation
may be posed as for tracking, assuming that the disturbance is d(t) = dm, or
D(s) = dm/s. We assume that the disturbance is added at the input of the plant,

[Figure 4.8: Feedback system with disturbance — C(s) and P(s) in a unity-feedback loop, with disturbance d added at the plant input]

as shown in Fig. 4.8. The reference input r(t) is now assumed to be zero, but may
be added later using superposition (since the systems are linear time-invariant).
For r(t) = 0, the Laplace transform gives

E(s) = −P (s) [D(s) + C(s)E(s)] , (4.21)

so that
E(s) = [−P(s)/(1 + P(s)C(s))] D(s). (4.22)

For a constant disturbance d(t) = dm , the steady-state error is

ess = [−P(0)/(1 + P(0)C(0))] dm, (4.23)

under the assumption that the closed-loop system is stable.


In terms of the poles and zeros of the plant and compensator,

P(s)/(1 + P(s)C(s)) = np(s)dc(s) / [np(s)nc(s) + dp(s)dc(s)]. (4.24)

The steady-state error is zero if and only if

np (0) = 0 or dc (0) = 0. (4.25)

Note that zero steady-state error is achieved if the plant has a zero at the ori-
gin, a fortunate but uninteresting case, since the plant then rejects all constant
signals, not just disturbances. Therefore, perfect rejection of constant dis-
turbances requires that:

• the closed-loop system is stable.

• C(s) has a pole at s = 0.



Disturbance rejection could be obtained for a specific disturbance of known magnitude by subtracting an estimate of the signal d from the control input.
However, this would require perfect knowledge of the disturbance, while the
result obtained here is achieved despite uncertainties about the disturbance and
the plant parameters (as long as stability is preserved).
If perfect tracking and perfect disturbance rejection are desired, then the
system must be closed-loop stable and the compensator must have a pole at
s = 0. Because a pole at the origin is associated with an integrator, this strategy
is usually referred to as integral control, and is extremely common in feedback
systems.
The concept can be extended to the tracking/rejection of signals with multi-
ple poles at the origin. For example, perfect tracking of ramp inputs r(t) = rmt
for a constant rm requires that C(s) must have two poles at s = 0. In general,
one says that a control system is of type n if it has n poles at s = 0. Finally, the
concept can also be extended to systems with poles on the imaginary axis other
than at the origin. For example, perfect tracking of a sinusoidal signal sin (ω0 t)
can be achieved by placing compensator poles at s = ±jω0.

4.3.3 Example of integral control


The simplest integral controller is
C(s) = kI/s. (4.26)
In the time domain
u(t) = kI ∫0^t e(τ) dτ. (4.27)

Assume that the plant is a constant gain, or that the plant is stable and that one approximates the plant by its DC gain, i.e.,

P (s) ≃ P (0). (4.28)

Next, let the integral gain be

kI = g P⁻¹(0), (4.29)

where g > 0 is an adjustable gain. This choice of compensator is shown in Fig. 4.9.
With the approximation (4.28), the response of the system is
Y(s) = P(0)D(s) + P(0) [g P⁻¹(0)/s] (R(s) − Y(s)), (4.30)

[Figure 4.9: Integral control based on the steady-state response of the plant — integrator g P⁻¹(0)/s driving P(s), with disturbance d at the plant input]

so that

Y(s) = [g/(s + g)] R(s) + [sP(0)/(s + g)] D(s). (4.31)

The transfer function from the reference input to the output is a stable first-order system with a pole at s = −g and a unity DC gain. The transfer function from the disturbance is also a stable first-order system with a pole at s = −g but with a zero DC gain. Therefore, constant reference inputs are perfectly tracked and constant disturbances are perfectly rejected. The speed of response of the system can be directly controlled by choice of the gain g.
Without the steady-state approximation,

Y(s) = [g P⁻¹(0)P(s) / (s + g P⁻¹(0)P(s))] R(s) + [sP(s) / (s + g P⁻¹(0)P(s))] D(s). (4.32)

The transfer function from the reference input to the output still has unity
DC gain and the transfer function from the disturbance still has zero DC gain.
Therefore, the properties remain true as long as the closed-loop system is stable.
Further, still assuming stability, the results hold despite possible errors in the
estimate of P (0).
In general, if the gain g is small, the poles of the closed-loop system will be
approximately those of the plant, plus a pole close to s = −g. This integral
control method is effective in providing tracking control for a stable system,
although not necessarily one that provides the fastest possible responses.
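The tracking and disturbance-rejection properties of Fig. 4.9 can be illustrated with a forward-Euler simulation. The sketch below is a toy with assumed values (the plant is reduced to its DC gain P(0) = 2, and g = 5); the output converges to the reference even with a constant input disturbance.

```python
P0, g = 2.0, 5.0        # assumed plant DC gain and adjustable loop gain
r, d = 1.0, 0.3         # constant reference and constant input disturbance
dt, u, y = 1e-3, 0.0, 0.0

for _ in range(10000):  # 10 s of simulation, long compared to the 1/g time constant
    u += dt * (g / P0) * (r - y)   # integral controller kI = g * P^-1(0)
    y = P0 * (u + d)               # static-gain plant, disturbance at its input

print(round(y, 6))   # 1.0: tracks r and rejects d
```

The integrator settles at u = r/P(0) − d, automatically absorbing the disturbance without any knowledge of its value.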
In contrast, consider the feedforward approach shown in Fig. 4.10. This
system will also provide unity DC gain from the reference input to the output.
There is no issue of stability with this controller if the plant is stable. However,
there is also no rejection of the disturbance, and any error in P (0) results in an
error in the tracking of the reference.

[Figure 4.10: Feedforward control based on the steady-state gain of the plant — gain P⁻¹(0) in series with P(s), with disturbance d at the plant input]

4.3.4 Proportional-integral-derivative control

An extension of the integral controller is the proportional-integral-derivative control law

u(t) = kP e(t) + kI ∫0^t e(τ) dτ + kD de(t)/dt, (4.33)

where kP, kI, kD are positive, adjustable gains called the proportional, integral, and derivative gains, respectively. The transfer function of the control system is

C(s) = kP + kI/s + kD s = (kI + kP s + kD s^2)/s. (4.34)

This type of controller is most common in industry, and is referred to as a PID controller. The addition of an integrator in the feedback loop is beneficial for tracking, but detrimental to stability. The proportional and derivative gains give degrees of freedom that, in general, make it possible to maintain stability with faster response times. In the controller, the derivative component is typically filtered, for example by replacing (4.34) by

C(s) = kP + kI/s + kD as/(s + a), (4.35)

where a > 0 is sufficiently large to approximate the derivative.


It is not uncommon to add a feedforward term in the PID controller, as shown in Fig. 4.11. Such a controller is called a two degree-of-freedom controller and mixes feedforward and feedback control actions. Such a structure can be
problem also occurs when input limits are encountered and the behavior of the
system ceases to be linear. To alleviate this problem, anti-windup protection is
incorporated into PID control algorithms.
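A discrete-time sketch of the filtered PID (4.35) with a simple conditional-integration anti-windup is given below. All numerical values (gains, sampling period, filter pole a, saturation limits) are arbitrary illustrative choices, not recommendations.

```python
class PID:
    """Sketch of eq. (4.35) with conditional-integration anti-windup."""
    def __init__(self, kp, ki, kd, a, dt, umin=-1.0, umax=1.0):
        self.kp, self.ki, self.kd, self.a, self.dt = kp, ki, kd, a, dt
        self.umin, self.umax = umin, umax
        self.integ = 0.0     # integrator state
        self.deriv = 0.0     # filtered-derivative state (kd applied at the output)
        self.e_prev = 0.0

    def update(self, e):
        # filtered derivative a*s/(s + a), discretized by forward Euler
        self.deriv += self.dt * self.a * ((e - self.e_prev) / self.dt - self.deriv)
        self.e_prev = e
        u = self.kp * e + self.ki * self.integ + self.kd * self.deriv
        u_sat = min(max(u, self.umin), self.umax)
        if u == u_sat:                  # anti-windup: freeze the integrator
            self.integ += self.dt * e   # whenever the output saturates
        return u_sat

pid = PID(kp=1.0, ki=2.0, kd=0.1, a=50.0, dt=1e-3)
```

With a sustained error driving the output into saturation, the integrator state stays bounded instead of winding up, so the controller recovers quickly once the error changes sign.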

[Figure 4.11: Two degree-of-freedom controller mixing feedforward and feedback control — prefilter CF(s), compensator C(s), plant P(s)]

4.4 Effect of initial conditions


The response to nonzero initial conditions was found in (3.81) to be of the form
Y(s) = [np(s)/dp(s)] U(s) + n0(s)/dp(s), (4.36)
where np (s) and dp(s) are the numerator and denominator polynomials of the
transfer function P (s), and n0 (s) is a polynomial depending on the initial con-
ditions. For the system of Fig. 4.8
U(s) = [nc(s)/dc(s)] (R(s) − Y(s)) + D(s), (4.37)
dc (s)
where nc(s) and dc (s) are the numerator and denominator polynomials of the
transfer function C(s). Combining the two equations

dp(s)dc(s)Y(s) = np(s)nc(s)R(s) − np(s)nc(s)Y(s) + dc(s)np(s)D(s) + dc(s)n0(s), (4.38)

so that
Y(s) = [np(s)nc(s)/dCL(s)] R(s) + [np(s)dc(s)/dCL(s)] D(s) + dc(s)n0(s)/dCL(s), (4.39)
where

dCL(s) = np (s)nc (s) + dp (s)dc (s). (4.40)

The overall response is the sum of the response to the reference input, the
response to the disturbance, and the response to the initial conditions. For all
three components, the denominator polynomial is the closed-loop polynomial
dCL (s). In all regards, the poles of the system are moved by the feedback. An
unstable system can be stabilized, including its response to initial conditions.
In contrast, pole/zero cancellation in the feedforward control scheme of Fig. 4.3
would eliminate poles from the response to the reference input, but would not
modify the response to the disturbance or the response to the initial conditions.
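The closed-loop polynomial (4.40) is plain polynomial arithmetic, which is easy to automate. In the sketch below, coefficients are listed from highest to lowest power; the proportional-control example of Section 4.2 (P(s) = 1/(s + 1) with C(s) = 1) serves as a check.

```python
def poly_mul(a, b):
    # product of two polynomials given as coefficient lists (highest power first)
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def poly_add(a, b):
    # pad the shorter list on the left, then add term by term
    n = max(len(a), len(b))
    a = [0.0] * (n - len(a)) + list(a)
    b = [0.0] * (n - len(b)) + list(b)
    return [x + y for x, y in zip(a, b)]

def closed_loop_poly(np_, dp, nc, dc):
    # d_CL(s) = np(s) nc(s) + dp(s) dc(s), eq. (4.40)
    return poly_add(poly_mul(np_, nc), poly_mul(dp, dc))

# P(s) = 1/(s+1), C(s) = 1: d_CL(s) = s + 2
print(closed_loop_poly([1.0], [1.0, 1.0], [1.0], [1.0]))   # [1.0, 2.0]
```

The roots of the returned polynomial are the closed-loop poles governing all three responses in (4.39).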

4.5 Routh-Hurwitz criterion


4.5.1 Background
The Routh-Hurwitz stability test [29] addresses the following question: given a
polynomial

D(s) = s^n + an−1 s^(n−1) + · · · + a0 = 0, (4.41)

find necessary and sufficient conditions such that all the roots of D(s) are in the
open left half-plane (OLHP), i.e., with Re(s) < 0. Some partial answers to this
problem are simple, specifically:

• all roots of D(s) = s^2 + a1 s + a0 are in the OLHP ⇐⇒ a1 > 0, a0 > 0

• all roots of D(s) = s^n + an−1 s^(n−1) + · · · + a0 are in the OLHP ⇒ an−1 > 0, · · · , a0 > 0

Therefore, if any coefficient is zero or negative, there must be a root on the jω-
axis or in the open right half-plane. For polynomials of degree 2, the condition
is necessary and sufficient, but it is only necessary for higher degrees. In other
words, it is possible for a third order polynomial with all positive coefficients to have some roots with Re(s) ≥ 0. The coefficients must satisfy additional
conditions, specified by the Routh-Hurwitz criterion, in order for all the poles
to be in the OLHP.

4.5.2 Procedure for the Routh-Hurwitz criterion


The procedure to be followed is:

1. using the coefficients of the polynomial, form an array of numbers (as described below).

2. check the coefficients of the first column. The polynomial has all roots
in the open left half-plane ⇐⇒ all the coefficients of the first column are
nonzero and of the same sign.

Routh array: the construction of the array proceeds as follows. Given

D(s) = an s^n + an−1 s^(n−1) + · · · + a0, (4.42)



1. Create the first two rows using the coefficients of the polynomial and the
following pattern

s^n:     an    an−2  an−4  an−6  · · ·
s^(n−1): an−1  an−3  an−5  · · ·

When a0 is reached, fill the rest of the array with zeros or leave blank. Label the first row s^n and the second row s^(n−1).

2. Compute the third row, labelled s^(n−2), as shown in Fig. 4.12.

[Figure 4.12: Construction of the Routh array — below the rows (an, an−2, an−4, an−6) and (an−1, an−3, an−5, an−7), the next row is computed as
(an−1 an−2 − an an−3)/an−1,  (an−1 an−4 − an an−5)/an−1,  (an−1 an−6 − an an−7)/an−1]

3. Repeat the procedure for additional rows until s0 is reached.

Comment: if an ≠ 1, the polynomial can be divided by an, with no change to the roots. Once an = 1, the condition to be satisfied is that all the coefficients of the first column of the Routh array must be strictly positive.
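The construction is straightforward to automate. The sketch below builds the array assuming no zero ever appears in the first column (the special cases mentioned in Section 4.5.4 are not handled) and then applies the sign test.

```python
def routh_array(coeffs):
    """Routh array of a polynomial, coefficients from highest to lowest power.
    Assumes no zero appears in the first column (special cases not handled)."""
    n = len(coeffs) - 1
    width = n // 2 + 1
    row1 = [float(coeffs[i]) for i in range(0, n + 1, 2)]   # an, an-2, ...
    row2 = [float(coeffs[i]) for i in range(1, n + 1, 2)]   # an-1, an-3, ...
    row1 += [0.0] * (width - len(row1))
    row2 += [0.0] * (width - len(row2))
    rows = [row1, row2]
    for _ in range(n - 1):
        a, b = rows[-2], rows[-1]
        new = [(b[0] * a[j + 1] - a[0] * b[j + 1]) / b[0] for j in range(width - 1)]
        rows.append(new + [0.0])
    return rows

def all_roots_in_olhp(coeffs):
    # stability test: all first-column entries nonzero and of the same sign
    first_col = [row[0] for row in routh_array(coeffs)]
    return all(c > 0 for c in first_col) or all(c < 0 for c in first_col)
```

Running it on the two polynomials of the examples that follow reproduces the first columns computed by hand (37/3 and 676/37 in the first case, the sign reversal at −5 in the second).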

4.5.3 Examples
Example 1: consider the polynomial

D(s) = (s^2 + 2s + 5)(s^2 + 4s + 4) = s^4 + 6s^3 + 17s^2 + 28s + 20. (4.43)

The polynomial has all positive coefficients and has all roots in the open left
half-plane (its roots are at −1 ± 2j and −2 (double root)). The Routh array is

given by

s^4:  1                                      17   20   0
s^3:  6                                      28   0    0
s^2:  (6 · 17 − 28)/6 = 37/3                 20   0    0
s^1:  ((37/3) · 28 − 6 · 20)/(37/3) = 676/37  0   0    0
s^0:  20                                     0    0    0

Because the coefficients of the first column are all positive, the test confirms
that all the roots are in the open left half-plane.
Example 2: consider the polynomial

D(s) = (s^2 − 2s + 5)(s^2 + 4s + 4) = s^4 + 2s^3 + s^2 + 12s + 20. (4.44)

Again, the coefficients are all positive, but we would be mistaken to infer that
all the roots are in the open left half-plane. Indeed, the roots may be computed
to be 1 ± 2j, and −2 (double root). This time, the array is computed to be

s^4:  1                               1                            20   0
s^3:  2                               12                           0    0
s^2:  (2 · 1 − 1 · 12)/2 = −5         (2 · 20 − 1 · 0)/2 = 20      0    0
s^1:  (−5 · 12 − 2 · 20)/(−5) = 20    (−5 · 0 − 2 · 0)/(−5) = 0    0    0
s^0:  20

The coefficients of the first column are not all of the same sign, which confirms
the fact that the roots are not all in the open left half-plane.
Example 3: a most interesting feature of the Routh-Hurwitz criterion is that
it may be applied to polynomials with variables as coefficients, as opposed to
a polynomial with fixed coefficients. In the context of feedback systems, this
feature translates into an ability to find conditions on controller parameters that
ensure closed-loop stability. Fig. 4.13 shows an example of such an application.
The compensator is a gain k whose value is a free parameter. The closed-loop
transfer function is given by

PCL(s) = [k/(s + 1)^3] / [1 + k/(s + 1)^3] = k / [s^3 + 3s^2 + 3s + (1 + k)]. (4.45)

[Figure 4.13: Example for the Routh-Hurwitz criterion — adjustable gain k in feedback with the plant 1/(s + 1)^3]

The Routh array for this system is

s^3:  1                             3        0
s^2:  3                             (1 + k)  0
s^1:  (9 − (1 + k))/3 = (8 − k)/3   0        0
s^0:  1 + k

and shows that stability is obtained if and only if 1 + k > 0 and 8 − k > 0, i.e.,
if and only if

−1 < k < 8 (4.46)

This interval is the range of gain k for which the system of Fig. 4.13 is stable.
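For this system, the first column of the array is 1, 3, (8 − k)/3, 1 + k, so condition (4.46) can be scanned numerically. A minimal check (not a general-purpose tool):

```python
def stable_gain(k):
    # first column of the Routh array for s^3 + 3s^2 + 3s + (1 + k)
    first_col = [1.0, 3.0, (8.0 - k) / 3.0, 1.0 + k]
    return all(c > 0 for c in first_col)

print([k for k in (-2, -1, 0, 4, 8, 9) if stable_gain(k)])   # [0, 4]
```

Only the sample gains strictly inside (−1, 8) pass the test, in agreement with (4.46).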

4.5.4 Explanation of the Routh array


The first two rows of the array contain the coefficients of the polynomials

p1(s) = an s^n + an−2 s^(n−2) + . . .
p2(s) = an−1 s^(n−1) + an−3 s^(n−3) + . . . (4.47)

where the elements that are zero by construction are omitted from the array. A
polynomial p3(s) is defined that is the remainder of the polynomial division of
p1(s) by p2(s). Therefore

p1 (s) = q1(s)p2(s) + p3 (s), (4.48)

where q1(s) = an s/an−1 is the quotient. The third row of the array contains
the coefficients of the remainder

p3(s) = (an−2 − an−3 an/an−1) s^(n−2) + (an−4 − an−5 an/an−1) s^(n−4) + . . . (4.49)

Repeating the procedure, polynomials pk (s) are constructed that are of the form

pk(s) = ck s^(n−k+1) + . . . (4.50)

where c1 = an and c2 = an−1. The polynomials alternate as even and odd
polynomials of decreasing order. The Routh array contains the coefficients of
these polynomials, omitting the coefficients that are always equal to zero due
to the even/odd property. The labels on the left of the array give the highest
power of s of the polynomials.
Together with the polynomials pk (s), the procedure also generates a sequence
of polynomials pk (s) + pk+1 (s), starting from the original polynomial p(s) =
p1(s)+p2 (s). A key property is that pk (s)+pk+1 (s) has the same number of right
half-plane roots and the same number of left half-plane roots as the polynomial
(ck s + ck+1)(pk+1 (s) + pk+2(s)), assuming nonzero leading coefficients up to that
point. Remarkably, the imaginary roots of the two polynomials are identical. If
there are no zero leading coefficients, the key property implies that the number
of right half-plane roots is equal to the number of sign reversals in the first
column.
Over the years, approaches were found to simplify the original proof of [29].
A tutorial presentation is available in [4]. Procedures have been developed to
extend the solution to cases where a coefficient of the first column is zero. How-
ever, the system is known to have poles outside Re(s) < 0 as soon as a sign
change or zero coefficient is reached.

4.6 Root-locus method


4.6.1 Motivation
A typical application of the root-locus method is to the standard feedback system
of Fig. 4.14, where the compensator is assumed to be of the form C(s) = kC0 (s),
i.e., a fixed compensator together with an adjustable gain parameter k. If we
let G(s) = C0(s)P (s), the feedback system takes the form of Fig. 4.15, which is
considered in this section.
The root-locus method [10] answers the question of how the poles of the
closed-loop system vary as k = 0 → ∞. Assuming that the open-loop transfer
function is written as
G(s) = N(s)/D(s), (4.51)

[Figure 4.14: Standard feedback system — C(s) and P(s) in a feedback loop with reference input r]

[Figure 4.15: Feedback system for the root-locus method — adjustable gain k in series with G(s) in a unity-feedback loop from r to y]

the closed-loop transfer function is given by

Y(s)/R(s) = kG(s)/(1 + kG(s)) = kN(s)/(D(s) + kN(s)). (4.52)

Therefore, the root-locus is the locus of the roots of the polynomial D(s) + kN (s)
for k = 0 → ∞. We will consider proper systems (deg D(s) ≥ deg N(s)), so that
the number of closed-loop poles is equal to the number of open-loop poles.
We begin with some examples to gain insight into what a root-locus may
look like.
Example 1: consider the system with

G(s) = 1/(s(s + 2)) ⇒ D(s) + kN(s) = s^2 + 2s + k. (4.53)

The roots are given by


s^2 + 2s + k = 0 ⇒ s = [−2 ± √(4 − 4k)]/2 = −1 ± √(1 − k). (4.54)

For a few values of k, we have the closed-loop poles

k s1 s2
0 −2 0
√ √
0.5 −1 + 0.5 = −1.7 −1 − 0.5 = −0.3
1 −1 −1
2 −1 + j −1 − j
5 −1 + 2j −1 − 2j
101 −1 ± 10j −1 ± 10j

From these results, the locus of the two poles as k varies from 0 to ∞ can be
deduced to be as shown in Fig. 4.16. In general, the root-locus is described by
smooth curves, or branches, whose number is equal to the number of open-loop
poles. Note that, for this example, the response of the system is stable for all k,
but becomes oscillatory for k > 2.
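The closed-loop poles of example 1 can be evaluated for any k directly from (4.54); the short check below, using the standard-library cmath module, reproduces the table entries.

```python
import cmath

def closed_loop_poles(k):
    # roots of s^2 + 2s + k = 0: s = -1 ± sqrt(1 - k), eq. (4.54)
    d = cmath.sqrt(1.0 - k)
    return (-1.0 + d, -1.0 - d)

# poles are -1 ± j for k = 2, matching the table
p1, p2 = closed_loop_poles(2.0)
```

For k ≤ 1 the square root is real and the poles sit on the real axis; for k > 1 it is imaginary and the poles move vertically away from s = −1, tracing the locus of Fig. 4.16.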

[Figure 4.16: Root-locus for example 1 — two branches starting at s = 0 and s = −2, meeting at s = −1 and departing vertically]

Example 2: add a zero to the system, so that

G(s) = (s + 1)/(s(s + 2)) ⇒ D(s) + kN(s) = s^2 + (2 + k)s + k. (4.55)

In this case, the closed-loop poles are given by



s1,2 = [−(2 + k) ± √((2 + k)^2 − 4k)]/2 = [−(2 + k) ± √(4 + k^2)]/2. (4.56)

The two poles are real for all k. A representative set of values for the poles is given below.

k     s1       s2
0     0        −2
1     −0.4     −2.6
100   −0.99    −101
→ ∞   → −1     ≈ −k → −∞

and the root-locus may be deduced to be the one shown in Fig. 4.17.

-2 -1

Figure 4.17: Root-locus for example 2

Example 3: the next example is similar to the previous one, but with the
location of the nonzero pole and the zero reversed. Specifically

G(s) = (s + 2)/(s(s + 1)) ⇒ D(s) + kN(s) = s^2 + (1 + k)s + 2k, (4.57)

and the poles are given by

s1,2 = [−(1 + k) ± √((1 + k)^2 − 8k)]/2. (4.58)

Whether the poles are real or complex is determined by the sign of the function

f(k) = (1 + k)^2 − 8k = k^2 − 6k + 1, (4.59)

which has roots at 0.2 and 5.8. Therefore, the function f (k) has the shape shown
in Fig. 4.18.
In terms of the original polynomial, we may conclude that

k = 0 → 0.2: 2 real roots
k = 0.2 → 5.8: 2 complex roots
k = 5.8 → ∞: 2 real roots

[Figure 4.18: Function f(k) — a parabola that is positive for k < 0.2, negative for 0.2 < k < 5.8, and positive for k > 5.8]

A few representative values of the closed-loop poles may be computed to be

k              s1       s2
0              0        −1
0 < k < 0.2    real     real
0.2            −0.6     −0.6
0.2 < k < 5.8  complex  complex
5.8            −3.4     −3.4
5.8 < k < ∞    real     real
100            −2.02    −98.98

and the root-locus may be deduced to be as shown on Fig. 4.19.

[Figure 4.19: Root-locus for example 3 — branches on the real axis with breakaway at −0.6 and re-entry at −3.4, open-loop poles at 0 and −1, and a zero at −2]

Although the closed-loop poles cannot be computed exactly in general, as was the case for the three examples presented here for motivation, it turns out that the root-locus satisfies enough properties that its general shape can be predicted rather well using simple rules.

4.6.2 Main root-locus rules


Definitions
The open-loop transfer function is assumed to be expressed as
P(s)C(s) = k G(s) = k (s − z1)(s − z2) · · · (s − zm) / [(s − p1)(s − p2) · · · (s − pn)]. (4.60)
There are m open-loop zeros z1 , · · · , zm, and n open-loop poles p1, · · · , pn.
Assume that m ≤ n, and that the numerator and denominator polynomials
have real coefficients. The root-locus is the locus of the closed-loop poles as
k = 0 → ∞. These are the poles of kG(s)/(1 + kG(s)) and, therefore, the roots
of

(s − p1)(s − p2) · · · (s − pn) + k (s − z1)(s − z2) · · · (s − zm) = 0. (4.61)

Main root-locus rules


Branches of the root-locus: there are n branches in the root-locus (i.e., n closed-
loop poles for all k). The n branches start at the open-loop poles (start means
k = 0) and m of the branches finish at the open-loop zeros (finish means k → ∞).
The root-locus is symmetric with respect to the real axis.
Portions of the real axis: to determine if a portion of the real axis belongs to
the locus, count the total number of open-loop poles and open-loop zeros that
lie to the right of a point on the real axis. The point belongs to the locus if and
only if the number is odd. As a consequence, the portion of the real axis that is
farther to the right than any pole or zero does not belong to the locus.
Asymptotes: the n − m poles that do not go to open-loop zeros go to infinity. For k large, the branches become close to straight lines, called asymptotes, which:
(a) all intersect at the same point, called the centroid, which is located on the real axis at

    σ = [(p1 + · · · + pn) − (z1 + · · · + zm)] / (n − m); (4.62)

(b) form angles with the real axis equal to 180°/(n − m) + i · 360°/(n − m), with i = 0, 1, · · · , n − m − 1.
Angles of departure and arrival on the real axis: for an open-loop pole on the real axis with multiplicity r, the angles of departure of the branches are either i · 360°/r or 180°/r + i · 360°/r (with i = 0, 1, · · · , r − 1). Which case applies can be determined using the rule regarding the portions of the real axis. Branches reach multiple zeros on the real axis with the same set of angles. When roots merge on the real axis, one set of angles defines the angles formed by the incoming branches, and the other set defines the angles formed by the outgoing branches.
Examples
Fig. 4.20 shows examples of application of the basic rules. The angles of the
asymptotes are: 180◦ if n − m = 1, (90◦, −90◦) if n − m = 2, (180◦ , 60◦ , −60◦ ) if
n−m = 3, (45◦ , −45◦, 135◦, −135◦) if n−m = 4,... In cases 1 and 3 of Fig. 4.20,
the angles are ±90◦ . In cases 2 and 4, the angles are ±60◦ and 180◦. In general,
application of the rules is fairly straightforward, although inferring the shape
of the root-locus becomes easier with experience, and sometimes requires some
amount of guessing.

1. G(s) = 1/(s(s + 2)); centroid: −1

2. G(s) = 1/(s^2(s + 2)); centroid: −2/3

3. G(s) = (s + 1)/(s^2(s + 2)); centroid: −1/2

4. G(s) = (s + 1)/(s^2(s + 2)^2); centroid: (−4 + 1)/3 = −1

Figure 4.20: Root-locus examples
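The centroids listed in Fig. 4.20 follow from (4.62), and the asymptote angles from the rule above. A small Python sketch (added for illustration, not from the text) reproduces case 4:

```python
def centroid(poles, zeros):
    # sigma = (sum of poles - sum of zeros) / (n - m), equation (4.62)
    return (sum(poles) - sum(zeros)) / (len(poles) - len(zeros))

def asymptote_angles(n, m):
    # 180/(n-m) + i*360/(n-m) degrees, i = 0, ..., n-m-1
    q = n - m
    return [180.0 / q + i * 360.0 / q for i in range(q)]

# Case 4 of Fig. 4.20: G(s) = (s+1)/(s^2 (s+2)^2)
poles, zeros = [0, 0, -2, -2], [-1]
print(centroid(poles, zeros))                    # (-4 + 1)/3 = -1.0
print(asymptote_angles(len(poles), len(zeros)))  # [60.0, 180.0, 300.0], i.e. ±60°, 180°
```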

For the angles of departure on the real axis, the sets of possible angles are shown on Fig. 4.21. In other words, the possible patterns are the same as for
the asymptotes, plus the patterns rotated by 180◦ divided by the number of

poles. In cases 2, 3 and 4 of Fig. 4.20, poles leave at ±90◦ (the other pattern
is excluded because the portions of the real axis on both sides of the poles do
not belong to the root-locus). The same set of angles applies when poles merge
together on the real axis. Then, poles reach the so-called breakaway point with
one set of angles, and they leave with the other. In case 1 of Fig. 4.20, poles
merge with incoming branches at (0◦, 180◦) and with outgoing branches at (90◦,
−90◦). Note that, although not shown on the examples, the same set of angles
also applies when branches reach multiple zeros on the real axis.

2 poles: ±90° or 0°, 180°
3 poles: ±60°, 180° or 0°, ±120°
4 poles: ±45°, ±135° or 0°, 180°, ±90°

Figure 4.21: Angles of departure from the real axis

4.6.3 Additional root-locus rules


Often, the most important characteristics of the root-locus can be deduced using
only the basic rules. In this manner, important information about the poles of
a closed-loop system can be obtained remarkably quickly. Sometimes, however,
additional rules prove useful to refine the root-locus. The rules are presented
first, followed by examples illustrating their application.
Additional root-locus rules
Breakaway points from the real axis: points where the root-locus breaks away from the real axis are the real roots of the equation d[G(s)]/ds = 0. The reverse is not necessarily true: there may be complex solutions to the equation, and some solutions may correspond to k < 0. One should evaluate k = −D(s)/N(s) at the roots. If k is real and k > 0 for some root, the root gives the location of a breakaway point from the real axis.
Crossing of the jω-axis: the values of k for which the root-locus crosses the
jω-axis can be determined by applying the Routh-Hurwitz criterion to (4.61).
Given k, the roots of (4.61) determine the locations where the branches cross
the jω-axis.
Angles of departure and arrival: the angle θ between the tangent to the root-
locus close to an open-loop pole and the direction of the real axis can be deter-
mined as follows. Assume that the angle of departure is calculated for the pole
p1 (the procedure can be repeated for other poles in a similar manner). Assume
that the pole is not repeated. Let αi be the angle between the direction of the
real axis and the vector drawn from the pole pi to the pole p1 . Let βj be the
similar angles for the zeros. The angle θ is given by
    θ = 180° − (α2 + · · · + αn) + (β1 + · · · + βm). (4.63)

For a repeated pole with multiplicity r, (4.63) is replaced by


    θ = (1/r) [180° − (αr+1 + · · · + αn) + (β1 + · · · + βm) + l · 360°],  l = 0, · · · , r − 1. (4.64)

The procedure can also be applied to determine the angles of arrival to the
zeros. In this case, the procedure is identical, except that θ is replaced by −θ.
Therefore, for the angle of arrival to a zero of multiplicity r
    θ = −(1/r) [180° − (α1 + · · · + αn) + (βr+1 + · · · + βm) + l · 360°],  l = 0, · · · , r − 1. (4.65)

Example - breakaway points from the real axis: consider the system

    G(s) = (s + 2) / [s(s + 1)], (4.66)

whose root-locus was obtained before and is shown in Fig. 4.22. The breakaway
points had already been determined to be at −0.6 and −3.4. We may verify
these values using the fact that

    dG(s)/ds = [s(s + 1) − (2s + 1)(s + 2)] / [s^2(s + 1)^2], (4.67)

Figure 4.22: Example of breakaway points

so that

    dG(s)/ds = 0 ⇐⇒ s^2 + 4s + 2 = 0. (4.68)

The roots of this polynomial are s1,2 = −2 ± √2 ≈ −0.6 and −3.4, which confirms the earlier result. In general, however, one should verify that the root is really a breakaway point by computing k. For s = −2 + √2 ≈ −0.6,

    k = −D(s)/N(s) = −(−2 + √2)(−1 + √2)/√2 = 0.17. (4.69)
Since k is real and k > 0, the breakaway point belongs to the root-locus. The
same property can be checked for the other root.
Example - crossing of the jω-axis: consider the system

    G(s) = 1/(s + 1)^3. (4.70)

In this case, the closed-loop poles may be computed exactly, and are given by

    (s + 1)^3 = −k ⇒ s = −1 + k^(1/3) (−1)^(1/3), (4.71)

or

    s1 = −1 + k^(1/3) e^(jπ/3),
    s2 = −1 + k^(1/3) e^(jπ) = −1 − k^(1/3),
    s3 = −1 + k^(1/3) e^(−jπ/3). (4.72)

The root-locus is shown in Fig. 4.23, and may be shown to satisfy the root-locus rules. We may easily determine that crossing of the imaginary axis occurs when

    k^(1/3) cos(60°) = 1, or k = 8. (4.73)

Figure 4.23: Example of crossing of jω-axis

In general, the roots cannot be computed analytically, and one may apply the Routh-Hurwitz criterion to find the range. This computation was done before and yielded (4.46). The locations of the crossings of the imaginary axis are obtained by letting k = 8 in the closed-loop polynomial. With s^3 + 3s^2 + 3s + 9 = 0, the roots are s = −3 and s = ±j√3, and the complex roots of this equation correspond to the crossings.
Example - angles of departure: consider the system
    G(s) = (s + 3) / [(s^2 + 2s + 2)(s + 2)]. (4.74)
One denotes
θ = the angle of departure from the given pole p1
αi = the angle from the ith pole to the given pole p1
βj = the angle from the jth zero to the given pole p1
Let the pole p1 = −1 + j. The angles are shown on Fig. 4.24.
The rule says that

θ = 180◦ − α2 − α3 + β1 . (4.75)

From the figure and with tan−1(0.5) = 26.6◦, the angle of departure from the
pole is

θ = 180◦ − 90◦ − 45◦ + tan−1(0.5) = 71.6◦. (4.76)

The result is consistent with the figure as it was drawn.
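Formula (4.63) is mechanical enough to implement directly; the sketch below (Python, illustrative) reproduces the 71.6°:

```python
import cmath
import math

def departure_angle(p1, other_poles, zeros):
    # equation (4.63): theta = 180 - sum(alpha_i) + sum(beta_j), in degrees
    a = sum(math.degrees(cmath.phase(p1 - p)) for p in other_poles)
    b = sum(math.degrees(cmath.phase(p1 - z)) for z in zeros)
    return 180.0 - a + b

# G(s) = (s+3)/((s^2+2s+2)(s+2)): poles at -1±j and -2, zero at -3
theta = departure_angle(-1 + 1j, [-1 - 1j, -2], [-3])
print(round(theta, 1))   # 71.6
```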


As a second example, consider the system
    G(s) = (s + 1) / [(s + 2)(s^2 + 1)]. (4.77)

Figure 4.24: Definition of angles for the computation of the angle of departure

A possible root-locus is shown on the left of Fig. 4.25. However, one may be concerned that the branches leaving the complex poles could cross into the right half-plane as shown on the right of Fig. 4.25, making the system unstable for small gain.

Figure 4.25: Two possibilities for the root-locus of the second example for angles of departure

The relevant angles for the computation of the angle of departure from the
pole at s = j are shown in Fig. 4.26. Since tan−1 (0.5) = 26.6◦, the formula gives

θ = 180◦ − 90◦ − 26.6◦ + 45◦ = 108.4◦ . (4.78)

Therefore, the branch leaving s = j indeed leaves the pole as shown on Fig. 4.26
and on the left of Fig. 4.25.

Figure 4.26: Second example for angles of departure

Example - angles of arrival: consider the system


    G(s) = (s^2 + 4) / [s(s + 1)^2]. (4.79)
Using the main rules, the root-locus for this system may be drawn as shown in
Fig. 4.27. The figure also gives the angles for the computation of the angle of
arrival at s = 2j.

Figure 4.27: Example for angle of arrival

The formula gives

    −θ = 180° − 2 × 63° (2 poles at −1) − 90° (pole at 0) + 90° (zero at −2j) ⇒ θ = −54°. (4.80)

Oddly, this result forces us to redraw the root-locus of Fig. 4.27 to become the
one shown in Fig. 4.28. In fact, the system becomes unstable for large gain,
which would not have been predicted from the tentative Fig. 4.27.
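The arrival-angle computation can be checked in code as well (Python sketch, illustrative). Note that the exact value is θ ≈ −53.1°; the −54° quoted above comes from rounding the 63.4° angles to 63°.

```python
import cmath
import math

def arrival_angle(z1, poles, other_zeros):
    # equation (4.65) with r = 1:  -theta = 180 - sum(alpha_i) + sum(beta_j)
    a = sum(math.degrees(cmath.phase(z1 - p)) for p in poles)
    b = sum(math.degrees(cmath.phase(z1 - z)) for z in other_zeros)
    return -(180.0 - a + b)

# G(s) = (s^2+4)/(s(s+1)^2): poles at 0, -1, -1; zeros at ±2j; arrival at s = 2j
print(round(arrival_angle(2j, [0, -1, -1], [-2j]), 1))   # -53.1
```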

Figure 4.28: Example for angle of arrival with actual shape of the root-locus

Example - angles of departure/arrival for multiple poles/zeros: for multiple poles or zeros, one uses the same formula and divides the result by the multiplicity r. Adding multiples of 360°/r gives the other angles. Consider

    G(s) = (s + 1) / (s^2 + 1)^2, (4.81)
whose root-locus is shown in Fig. 4.29. The angles of departure for the two poles at s = j are given by

    2θ = 180° − 2 × 90° (poles at −j) + 45° (zero at −1) + {0°, 360°} ⇒ θ = 22.5°, 202.5°. (4.82)

These values are consistent with the root-locus as drawn on the figure.

Figure 4.29: Example of angle of departure for multiple poles

Note that this example is a good opportunity to compute the location of the
breakaway points on the real axis. Specifically,
    dG(s)/ds = [(s^2 + 1)^2 − 2(s^2 + 1) · 2s(s + 1)] / (s^2 + 1)^4 = 0 (4.83)

if and only if

    s^2 + 1 − 4s(s + 1) = −3s^2 − 4s + 1 = 0, (4.84)

whose roots are at s1 = −1.55 and s2 = 0.22. The values of the gain k corresponding
to the two roots are given by

k1 = −1/G(s1) = 21, k2 = −1/G(s2) = −0.9. (4.85)

The second root turns out to be a breakaway point for the complementary root-
locus, or root-locus for k < 0, which is discussed next.
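The numbers in (4.84)-(4.85) can be verified as follows (Python sketch, illustrative):

```python
import math

def gain_at(s):
    # k = -1/G(s) = -(s^2+1)^2/(s+1) for G(s) = (s+1)/(s^2+1)^2
    return -((s * s + 1) ** 2) / (s + 1)

# roots of -3s^2 - 4s + 1 = 0: s = (-2 ± sqrt(7))/3
s1 = (-2 - math.sqrt(7)) / 3   # ~ -1.55: breakaway for k > 0
s2 = (-2 + math.sqrt(7)) / 3   # ~  0.22: breakaway for k < 0
print(round(gain_at(s1), 1), round(gain_at(s2), 1))   # 21.0 -0.9
```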

4.6.4 Complementary root-locus


The root-locus for k = 0 → −∞ is called the complementary root-locus. Since
Fig. 4.15 assumed negative feedback with k > 0, the case k < 0 corresponds
to positive feedback. The rules for k < 0 are similar to those for the regular
root-locus. The most significant differences are that the portions of the real axis
are those that do not belong to the regular root-locus and that the angles of the
asymptotes are the alternate patterns shown in Fig. 4.21.
Complementary root-locus rules
The rules are the same as for k > 0, except that:
Branches of the root-locus: when m = n, one or more branches of the root-locus
may reach ∞ for k = −1.
Portions of the real axis: replace “odd” by “even”. The portion of the real axis
to the right of the rightmost pole or zero now belongs to the locus.

Asymptotes: replace the angles by i · 360°/(n − m).
Angles of departure: replace 180◦ by 0◦.
Examples
Generally, the angles of the asymptotes are now 0◦ if n −m = 1, (0◦, 180◦ ) if
n−m = 2, (0◦, 120◦ , −120◦ ) if n−m = 3 and (0◦ , 180◦, 90◦ , −90◦ ) if n−m = 4.
Example 1: Fig. 4.30 shows the example of
    G(s) = (s + 1) / [s^2(s + 2)^2], (4.86)
which corresponds to n−m = 3. The complementary root-locus can be viewed as
the second piece of the overall root-locus, which covers the range k = −∞ → ∞.
Example 2: Another example corresponds to the system considered earlier for
the angles of departure from multiple poles, that is
(s + 1)
G(s) = . (4.87)
(s2 + 1)2

Figure 4.30: Comparison of root-locus (k = 0 → ∞) and complementary root-locus (k = 0 → −∞)

Application of the rules yields opposite portions of the real axis, an identical centroid, asymptotes rotated by 180°/(n − m) (that is, with angles 0° and ±120°), and a complementary breakaway point at s = 0.22. The angles of departure are also rotated by 180°/r, that is, by 90° for the two imaginary poles (yielding 112.5° and 292.5°). The resulting complementary root-locus is shown in Fig. 4.31.

Figure 4.31: Example of complementary root-locus

4.6.5 Important conclusions from the root-locus rules


The main root-locus rules allow one to reach the following useful observations:

1. zeros in the right half-plane always lead to instability for high gain. Such
zeros are usually undesirable in either the plant or the controller.

2. systems whose number of poles exceeds the number of zeros by 3 or more always become unstable for sufficiently high gain. The higher the pole/zero excess, the higher the danger of instability at high gain. Typical control systems have as many zeros as poles.

3. if an open-loop zero is close to an open-loop pole, the pole will move towards that zero. This property can be used in control design to attract poles to desirable locations. An example is shown in Fig. 4.32, assuming that additional poles are somewhat distant from the two main poles. The damping of oscillatory poles can be increased by attracting them to well-placed zeros.

Figure 4.32: Attraction of poles by zeros

4. moving a zero far in the left half-plane may be counterproductive, because the centroid will be pushed towards the right half-plane. Fig. 4.33 gives an example where it is preferable to place the zero closer to the origin.

Figure 4.33: Effect of a zero on asymptotes

5. a pole at s = 0 in C(s) is desirable for zero steady-state error, but tends to make the closed-loop system less stable. For example, Fig. 4.34 shows

options for the control of the system with transfer function

    P(s) = k / [s(s + a)]. (4.88)

    (a) P:   C(s) = kP
    (b) PD:  C(s) = kD (s + kP/kD)
    (c) PI:  C(s) = kP (s + kI/kP) / s
    (d) PID: C(s) = kD [s^2 + (kP/kD) s + kI/kD] / s

Figure 4.34: Choices of compensators for P(s) = k/(s(s + a))

Four controllers are considered: (a) proportional, (b) proportional-derivative,

(c) proportional-integral, (d) proportional-integral-derivative. Generally, addition of the derivative term improves the damping of the system, while inclusion of integral action makes it more difficult to achieve a satisfactory transient response. From (a) to (c), for example, the asymptote moves towards the right, and the poles stay closer to the imaginary axis for small gain.

6. unmodelled dynamics (additional poles in the actual plant transfer function) may produce instabilities for large gain. For example, a system with transfer function

    G(s) = 1 / [s(s + 1)] (4.89)

is stable for all gain, as shown by the root-locus on the left of Fig. 4.35.
However, if an additional real pole is present in the actual system, the root-
locus becomes the one shown on the right of Fig. 4.35. No matter how

Figure 4.35: Destabilizing effect of an additional pole on a root-locus with two real poles

far the pole is in the left half-plane, the closed-loop system will become
unstable for large enough gain.

7. Because the positive real axis belongs to the root-locus for k < 0, a system
always becomes unstable for positive feedback of large gain. For small gain,
however, positive feedback can be stabilizing. For example, consider the
root-locus for

    G(s) = (s + 2) / [(s + 1)(s^2 + 1)]. (4.90)
The regular root-locus (k > 0) is shown on the left of Fig. 4.36, while the
complementary root-locus is shown on the right of the figure. For k > 0,
the two poles at s = ±j immediately move to the right half-plane, towards
the ±90◦ asymptotes with centroid at s = 1/2. The angle of departure of
the pole at s = j is equal to θ = 180◦ −90◦ −45◦ + tan−1(0.5) = 71.6◦ . The
system is unstable for all gain k > 0. In contrast, for k < 0, the angle of
departure of the pole at s = j becomes 251.6◦ and the pole moves towards
the left half-plane. The pole at s = −1 moves towards the right half-plane
and, for sufficiently large gain, the system becomes unstable. However, for
some range of gain, the system is stabilized with k < 0.
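Observations 4 and 5 can be quantified with the centroid formula (4.62). In the sketch below (Python; the plant pole a and the PI zero location kI/kP are hypothetical values chosen only for illustration), moving from P to PI control of P(s) = k/(s(s + a)) shifts the asymptote centroid towards the right, consistent with the degraded transient behavior noted above.

```python
a = 2.0      # hypothetical plant pole location
z_pi = 0.5   # hypothetical PI zero magnitude, kI/kP

# (a) P control: open-loop poles {0, -a}, no zeros, n - m = 2
sigma_P = (0 - a) / 2

# (c) PI control: open-loop poles {0, 0, -a}, zero at -kI/kP, still n - m = 2
sigma_PI = ((0 + 0 - a) - (-z_pi)) / 2

print(sigma_P, sigma_PI)   # -1.0 -0.75: the centroid moves towards the right
```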

Root-locus rules are useful to quickly sketch a root-locus and understand how
pole and zero locations affect its shape. For complicated cases or for precision
plots, a modern software package should be used. For example, the shape of the
complementary root-locus of Fig. 4.36 was obtained using the Matlab commands:

num=[1 2];
den=conv([1 1],[1 0 1]);
rlocus(-num,den)

Figure 4.36: Root-locus with positive and negative feedback

The regular root-locus can be obtained by replacing the last line of the code by rlocus(num,den). The conv function is convenient to multiply the two denominator polynomials.
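For readers without Matlab, the same kind of computation can be scripted from scratch. The fragment below (pure Python; the Durand-Kerner root finder is our own construction, not anything used in the text) checks the stability claims of observation 7 for G(s) = (s + 2)/((s + 1)(s^2 + 1)), whose closed-loop polynomial is s^3 + s^2 + (1 + k)s + (1 + 2k):

```python
def poly_roots(coeffs, iters=200):
    # Durand-Kerner iteration: all (complex) roots of the monic polynomial
    # s^n + coeffs[0] s^(n-1) + ... + coeffs[-1]
    n = len(coeffs)

    def p(s):
        v = complex(1.0)
        for c in coeffs:
            v = v * s + c
        return v

    roots = [(0.4 + 0.9j) ** (i + 1) for i in range(n)]
    for _ in range(iters):
        new_roots = []
        for i, r in enumerate(roots):
            denom = complex(1.0)
            for j, q in enumerate(roots):
                if i != j:
                    denom *= r - q
            new_roots.append(r - p(r) / denom)
        roots = new_roots
    return roots

# Closed-loop polynomial of 1 + kG(s) = 0:  s^3 + s^2 + (1+k)s + (1+2k)
def closed_loop_stable(k):
    return all(r.real < 0 for r in poly_roots([1.0, 1.0 + k, 1.0 + 2.0 * k]))

print(closed_loop_stable(-0.25))   # True: a small positive-feedback gain stabilizes
print(closed_loop_stable(0.25))    # False: unstable for k > 0
```

Applying the Routh-Hurwitz criterion to the same polynomial shows that stability holds exactly for −1/2 < k < 0, in agreement with these spot checks.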

4.7 Feedback design for phase-locked loops


4.7.1 Modulation of signals in communication systems
The transmission of signals in telecommunication systems is typically performed
through a transmitter and a receiver, as shown in Fig. 4.37. The purpose of the
modulator is to shift the frequency spectrum of the signal x(t) to a higher fre-
quency range, so that transmission over electromagnetic waves can be performed
efficiently. A demodulator is needed to perform the reverse operation. Ideally,
the demodulated signal xd (t) is equal, or proportional to x(t).

Figure 4.37: Communication system

Two basic methods of modulation are amplitude modulation (AM) and fre-

quency modulation (FM). In amplitude modulation, one has that

y(t) = (A + km x(t)) sin (2πfc t) . (4.91)

Amplitude modulation of a sinusoidal signal x(t) is shown in Fig. 4.38. The frequency fc is called the carrier frequency. In commercial AM radio, fc ranges
from 530 to 1600 kHz. The spectrum of x(t) is limited to 5 kHz. Low-pass
filtering is used to limit the spectrum of x(t) to this range.

Figure 4.38: Amplitude modulation

In frequency modulation, one has

    y(t) = A sin(θ(t)),
    θ(t) = 2πfc t + 2πkm ∫_0^t x(σ) dσ. (4.92)

The frequency fc is called the center frequency. It is the frequency of the signal
y(t) when x(t) = 0. The instantaneous frequency (in Hz) of the signal y(t) is
defined to be
    f(t) = (1/2π) dθ(t)/dt = fc + km x(t). (4.93)
In an implementation with analog electronics, km has the units of Hz/V, as-
suming that x(t) has the units of volts. The parameter km specifies how much
the frequency of the signal y increases per unit increase of the magnitude of the
signal x. Frequency modulation by a sinusoidal signal x(t) is shown in Fig. 4.39.
In commercial FM radio, fc ranges from 88 to 108 MHz. The spectrum of
x(t) is limited to fmax = 53 kHz, which includes two (stereo) channels with
15 kHz bandwidth each. If the modulating signal is proportional to x(t), instead
of the integral of x(t), the modulation is referred to as angle modulation or phase
modulation. In some cases, the modulating signal is digital (on/off), which leads

Figure 4.39: Frequency modulation

to frequency shift keying (FSK, if x(t) is digital) or phase shift keying (PSK, if
θ(t) is digital). For 180◦ phase reversals, PSK becomes phase reversal keying or
binary phase shift keying (BPSK). For four values of the phase (θ = 0◦ , 90◦,
180◦, 270◦ ), one has quaternary or quadriphase PSK (QPSK).

4.7.2 Voltage-controlled oscillators


A frequency modulator is also called a voltage-controlled oscillator (VCO). Such a
device performs the transformation from a signal x(t) to a signal y(t) whose
frequency is determined by the magnitude of x(t). A VCO is also called a voltage-
to-frequency converter (V → f converter). Mathematically, the representation
of a VCO is that of (4.92) or Fig. 4.40: it is an integrator followed by a sinusoidal
function. Note that the signal 2πfc t can be moved to the input of the integrator,
as a constant signal fc /km added to x(t).

Figure 4.40: Voltage-controlled oscillator and mathematical equivalent
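The defining relation (4.93) — instantaneous frequency fc + km x(t) — can be checked with a numerical derivative of the phase. In the sketch below (Python; the values of fc, km and x are hypothetical), a constant input of 0.5 V shifts the output frequency from 100 Hz to 110 Hz:

```python
import math

fc, km, x = 100.0, 20.0, 0.5   # hypothetical: 100 Hz center, 20 Hz/V, 0.5 V input

def theta(t):
    # theta(t) = 2*pi*fc*t + 2*pi*km * integral_0^t x dt (x constant here)
    return 2 * math.pi * fc * t + 2 * math.pi * km * x * t

dt = 1e-6
t = 0.01
f_inst = (theta(t + dt) - theta(t)) / (2 * math.pi * dt)   # (1/2pi) dtheta/dt
print(round(f_inst, 6))   # 110.0 = fc + km*x
```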

4.7.3 Phase-locked loops


The device that performs the inverse operation of a VCO is called an FM de-
modulator or f → V converter. A phase-locked loop (PLL) is such a device,
whose general structure is shown in Fig. 4.41. Interestingly, a PLL includes a

VCO as one of its components. The signal xvco (t) is, under ideal conditions,
proportional to the signal that generated y(t), that is x(t). The signal yvco(t)
then has the same instantaneous frequency as y(t).
The phase detector is a device that generates a signal φ(t) whose value is
proportional to the difference of phase between the signals y(t) and yvco (t).
Much of the complexity of phase-locked loops is related to this device. A filter
is typically needed after the phase detector, because of harmonic components
associated with practical detectors, and to improve the stability properties of
the feedback system.
The concept of a phase-locked loop is that, if there is a small phase error so
that yvco(t) lags behind y(t), a signal φ(t) appears at the output of the detector.
This signal produces an increase in the voltage applied to the VCO, so that the
frequency of yvco (t) increases and the phase of the signal catches up with the
phase of the incoming signal y(t). In steady-state, the phases of y(t) and yvco(t)
are equal, or separated by a constant, and the signals are said to be locked in
phase.

Figure 4.41: Phase-locked loop

Assuming that an ideal phase detector is available, the equations for the
system are:

    VCO (modulator): y(t) = A sin(θ(t)),
                     θ(t) = 2πfc t + 2πkm ∫_0^t x(σ) dσ. (4.94)

    VCO (PLL): yvco(t) = Avco sin(θvco(t)),
               θvco(t) = 2πfc,vco t + 2πkvco ∫_0^t xvco(σ) dσ. (4.95)

    Phase detector: φ(t) = kpd (θ(t) − θvco(t)). (4.96)



Figure 4.42: Diagram of a phase-locked loop with ideal phase detector

Filter: Xvco (s) = C(s) Φ(s). (4.97)

The gain of the phase detector is kpd and has the units of Volts/rad in an analog
phase-locked loop. C(s) is the transfer function of the filter. The instantaneous
frequency of the VCO of the PLL is
    fvco(t) = (1/2π) dθvco(t)/dt = fc,vco + kvco xvco(t). (4.98)
The dynamics of the system are highly nonlinear, because of the sinusoidal
functions. However, under the assumption of an ideal phase detector, the equa-
tions describing the variables θ(t), θvco (t), x(t), and xvco (t) are linear, as shown
in Fig. 4.42. The system can therefore be analyzed using linear time-invariant
methods (in particular, transfer functions can be used).
Fig. 4.42 can be transformed into a simpler, equivalent diagram, using the
following definitions. Denote the difference between the center frequencies of
the VCO of the modulator and of the VCO of the demodulator

δfc = fc − fc,vco (4.99)

and let
    xs(t) = (km/kvco) x(t). (4.100)
Note that xs (t) is proportional to x(t). We will find that the signal xvco (t) converges
to this scaled signal under ideal conditions.
We have that
    θ(t) − θvco(t) = 2π δfc t + 2πkvco ∫_0^t (xs(σ) − xvco(σ)) dσ
                   = 2πkvco ∫_0^t (xs(σ) − xvco(σ) + δfc/kvco) dσ. (4.101)

Therefore, the diagram of Fig. 4.43 represents the phase-locked loop if one defines
the two constants

    kpll = 2πkvco kpd,    d0 = δfc/kvco, (4.102)

as well as the disturbance signal d(t) = d0. kpll is the gain of the phase-locked
loop, not including the filter C(s).

Figure 4.43: Equivalent diagram of a phase-locked loop with ideal phase detector

The diagram of the phase-locked loop is similar to a conventional feedback system with P(s) = kpll/s (the plant is an integrator). The compensator appears
after the plant, but the change does not affect the analysis. The objective is to
have xvco (t) track the scaled signal xs (t), despite the constant disturbance signal
originating from the difference in center frequencies between the modulator and
the demodulator. The filter C(s) is to be designed as a control system to achieve
this result.

4.7.4 Compensator design


From Fig. 4.43, the Laplace transforms Xvco (s), Φ(s), Xs (s), D(s) are related
by

    Xvco(s) = Hx(s) (Xs(s) + D(s)),    Φ(s) = Hφ(s) (Xs(s) + D(s)), (4.103)

where

    Hx(s) = kpll C(s) / (s + kpll C(s)),    Hφ(s) = kpll / (s + kpll C(s)). (4.104)
Two typical choices of compensator C(s) are:
• C(s) = kf / (s + af)             (first-order filter)
• C(s) = kf (s + bf) / [s(s + af)]  (second-order filter)

Figure 4.44: Root-locus for first-order filter (left) and second-order filter (right)
For the first-order filter,
    Hx(s) = kpll kf / (s^2 + af s + kpll kf),
    Hφ(s) = kpll (s + af) / (s^2 + af s + kpll kf)    (first-order). (4.105)

The second-order filter includes an integrator, and yields the transfer functions

    Hx(s) = kpll kf (s + bf) / (s^3 + af s^2 + kpll kf s + kpll kf bf),
    Hφ(s) = kpll s(s + af) / (s^3 + af s^2 + kpll kf s + kpll kf bf)    (second-order). (4.106)
Fig. 4.44 on the left shows the root-locus for the first-order filter, for kpll kf =
0 → ∞ and af > 0. On the right, the root-locus is shown for the second-
order filter, assuming af > bf > 0. With these restrictions on the parameters,
the closed-loop system is always stable. Closed-loop poles may be placed at
appropriate locations by proper choice of the parameters.
For both filters, the DC gain of the transfer function from xs to xvco is equal
to 1. Therefore, xvco(t) (the frequency estimate) will match xs (t) (the true
frequency) in steady-state, and xvco (t) will track xs (t), as long as xs (t) does not
vary too fast compared to the time constants of the closed-loop system. The
magnitude of the poles should be high enough to ensure tracking of the signal
xs (t), but not so high to yield excessive sensitivity to noise.
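The unit DC gain claimed here follows from evaluating (4.105)-(4.106) at s = 0, which is easy to confirm (Python sketch; all parameter values are hypothetical):

```python
# Hypothetical parameter values for illustration
kpll, kf, af, bf = 50.0, 10.0, 40.0, 4.0

def Hx_first(s):
    # C(s) = kf/(s + af): Hx(s) = kpll*kf / (s^2 + af*s + kpll*kf)
    return kpll * kf / (s * s + af * s + kpll * kf)

def Hphi_first(s):
    return kpll * (s + af) / (s * s + af * s + kpll * kf)

def Hx_second(s):
    # C(s) = kf(s + bf)/(s(s + af))
    return kpll * kf * (s + bf) / (s**3 + af * s**2 + kpll * kf * s + kpll * kf * bf)

print(Hx_first(0.0), Hx_second(0.0))   # 1.0 1.0: unit DC gain for both filters
print(Hphi_first(0.0))                 # af/kf = 4.0: constant phase offset remains
```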
Consider now the effect of a center frequency error δfc . In the steady-state,
the effect of δfc on xvco and φ is determined by Hx(0) and Hφ(0). In the case of
a first-order filter, a center frequency error δfc results in a constant bias δxvco,ss
and a constant phase error δφss given by
    δxvco,ss = δfc/kvco,    δφss = af δfc/(kf kvco)    (first-order). (4.107)

The steady-state frequency error f(t) − fvco(t) = (1/2π)(dφ/dt) is zero since φ is constant. Therefore, the signals y and yvco have the same instantaneous
frequencies. There is a phase difference δφss : although the two phases are locked,
they are not equal. The phase difference produces a bias at the input of the VCO
(δxvco,ss ), which ensures that the frequencies of the incoming and VCO signals
are matched. Because the phase detector is only linear for some range of φ, the
center frequency error should be small enough to operate in that region. For
example, the steady-state phase δφss will be smaller than π (or 180 degrees) if
the center frequency error satisfies
    δfc < π kvco kf / af. (4.108)
The center frequency of the PLL must be much closer to the center frequency
of the modulator than the bound specifies for operation in the linear region to
be possible.
With the second-order filter, the steady-state signals are
    δxvco,ss = δfc/kvco,    δφss = 0    (second-order). (4.109)
Now, both the frequency error and the phase error are zero despite a center
frequency error. The bias in the VCO input δxvco remains the value required to
match the incoming frequency, but it is now provided by the integrator of the
compensator. The principle of operation is similar to the rejection of constant
disturbances using integral compensation in conventional feedback systems.

4.7.5 Phase detectors


The implementation of the phase-locked loop requires a phase detector. An
ideal phase detector was assumed for the linear analysis of the previous section.
However, practical devices are less than ideal. A phase detector based on a
multiplier is shown in Fig. 4.45.
The principle of operation is as follows. Given

y(t) = sin (θ(t)) , yvco (t) = sin (θvco (t)) , (4.110)

the phase-advanced signal yq is given by

yq (t) = cos (θvco (t)) , (4.111)

and

    2 y(t) yq(t) = sin(θ(t) − θvco(t)) + sin(θ(t) + θvco(t)). (4.112)

Figure 4.45: Phase detector

The second component of (4.112) is a signal whose frequency is approximately twice the center frequency of the incoming signal. The purpose of the low-pass
filter is to eliminate that signal, so that

φ(t) = LP F [2 y(t) yq (t)] ≃ sin (θ(t) − θvco (t)) . (4.113)

Then, the desired result is obtained

φ(t) ≃ θ(t) − θvco (t) (4.114)

if θ(t) − θvco (t) is small.
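The derivation rests on the product-to-sum identity 2 sin A cos B = sin(A − B) + sin(A + B) and on sin x ≈ x for small x; both are quick to confirm numerically (Python, illustrative):

```python
import math

# Product identity behind the detector: 2 sin(A) cos(B) = sin(A-B) + sin(A+B)
for th, thv in ((0.3, 0.1), (2.0, 1.7), (5.0, 4.9)):
    lhs = 2 * math.sin(th) * math.cos(thv)
    rhs = math.sin(th - thv) + math.sin(th + thv)
    assert abs(lhs - rhs) < 1e-12
    # after the low-pass filter, only sin(th - thv) remains,
    # and sin(th - thv) ~ th - thv for a small phase error:
    print(round(math.sin(th - thv), 4), round(th - thv, 4))
```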


Regarding the 90◦ phase advance, a first option is to compute cos(·) in the
VCO of the PLL instead of sin(·). A second option is to eliminate the phase
advance block, recognizing that the steady-state phase θvco (t) will simply be
offset by 90◦ , and that there will be no effect on the signal xvco(t). The steady-
state signals y(t) and yvco(t) will be in quadrature, rather than in phase.
The characteristics of the phase comparator are only ideal for small phase
error θ(t) − θvco(t). The overall behavior is nonlinear, as shown by (4.113) and
on Fig. 4.46. With more sophisticated phase detectors, the characteristic is
linear for a larger range of φ. However, the periodicity of 2π is unavoidable,
since phases separated by 2π cannot be distinguished. Therefore, any practical
phase comparator will be nonlinear, and at most linear up to an angle π. A
precise analysis of the feedback system is much more complex than the linear
approximation indicates.
From Fig. 4.46, one may note that φ = π is also an equilibrium state of the
PLL with multiplier-based phase detector. However, it corresponds to a sign
reversal of the feedback gain and to an unstable condition. Although phases
separated by 2π cannot be distinguished, it is possible for the phase of two

Figure 4.46: Nonlinear characteristic of the phase comparator

signals with nearly identical frequency to slowly increase from 0 and become
2π, 4π, etc. When there is an initial error in frequency fc − fc,vco in a PLL,
it may take several cycles for φ to converge to a stable equilibrium position.
This condition is referred to as cycle slipping. In some cases, phase lock may
never occur. The lock-in range is the range of frequencies for which phase lock
occurs without cycle slipping. The capture range, or pull-in range, is the range of
frequencies for which phase lock occurs, possibly with cycle slipping. One also
defines the hold range as the range of frequencies for which the loop remains
locked, if it is initially locked. This range may be determined experimentally
by slowly varying the frequency of the incoming signal, starting from a locked
condition.
The nonlinear behavior highlights two additional considerations in the choice
of the closed-loop bandwidth of the linearized system: the bandwidth should be
small enough to filter the high frequency component originating from phase
detection, but the response should be fast enough to ensure locking of the PLL.
In general, while the design of the loop filter is performed using linear time-
invariant analysis methods, characteristics of the true nonlinear system must be
considered as well.

4.8 Problems
Problem 4.1: Consider the control system of Fig. 4.47.
(a) Let P (s) = k/(s + a) and C(s) = kP . Find Y (s), assuming that both R(s)
and D(s) are nonzero. Deduce the transfer functions from r to y (for d = 0)
and from d to y (for r = 0). Give conditions on kP , k and a such that the
transfer functions are BIBO stable. Assuming that such conditions are satisfied,
give the values of the DC gains of the transfer functions. Indicate whether
perfect tracking of constant reference inputs and perfect rejection of constant
disturbances is achieved by the control system.

Figure 4.47: Standard feedback system with disturbance


(b) Consider the special case of a DC motor, with input u (the voltage in V)
and output y (the speed in rad/s). Let k = 1000 (rad/s)²/V, a = 100 rad/s,
kP = 3 V/(rad/s) (or V s), r = 200 rad/s and d = 0. Give the steady-state
value of y and the steady-state value of the error e. Plot the steady-state error,
expressed as a percentage of the reference input, as a function of kP . Observe
that the error goes to zero as kP → ∞. Also show that it is possible to achieve
perfect tracking by multiplying the reference input by a constant number before
it is applied to the summing junction. Explain the limitation of this approach
to tracking.
(c) In the same conditions as part (b), except that r = 0 and d = 3 V (the
disturbance is expressed as an equivalent input disturbance in volts), give the
steady-state value of y and of the error e. Plot the error for a unit disturbance
(d = 1) as a function of kP . Show that it is possible to add to the reference
input a signal that is proportional to the disturbance, so that the output is equal
to the reference input. Explain the limitation of this approach to disturbance
rejection.
(d) Repeat part (a) with P (s) = k/(s + a) and C(s) = kP /s.
(e) Repeat part (a) with P (s) = k/(s(s + a)) and C(s) = kP .
Problem 4.2: Determine whether all the roots of the following polynomials are
in the open left half-plane
(a) D(s) = s⁴ + 4s³ + 3s² + 4s + 1
(b) D(s) = s⁵ + 5s⁴ + 8s³ + 4s² − s − 1
(c) D(s) = s⁴ + 2s³ + 2s² + 2s + 1
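As a sanity check for hand computations with the Routh-Hurwitz criterion, the test can be mechanized. The sketch below (illustrative Python, not part of the text) evaluates the equivalent Hurwitz-determinant conditions: with a positive leading coefficient, all roots are in the open left half-plane if and only if all leading principal minors of the Hurwitz matrix are positive.

```python
# Hurwitz-determinant form of the Routh-Hurwitz test (pure-Python sketch).
# c holds the coefficients of c[0]*s^n + c[1]*s^(n-1) + ... + c[n], c[0] > 0.
def hurwitz_stable(c):
    n = len(c) - 1
    coef = lambda k: c[k] if 0 <= k <= n else 0
    # Hurwitz matrix: entry (i, j) is the coefficient with index 2(j+1)-(i+1)
    H = [[coef(2 * (j + 1) - (i + 1)) for j in range(n)] for i in range(n)]

    def det(m):  # cofactor expansion, adequate for these small orders
        if len(m) == 1:
            return m[0][0]
        return sum((-1) ** j * m[0][j] * det([r[:j] + r[j + 1:] for r in m[1:]])
                   for j in range(len(m)))

    # All leading principal minors must be strictly positive
    return all(det([row[:k] for row in H[:k]]) > 0 for k in range(1, n + 1))

# Example usage for polynomial (a): hurwitz_stable([1, 4, 3, 4, 1])
```

A zero minor (as happens for roots exactly on the jω-axis) correctly fails the strict positivity test.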
Problem 4.3: Give necessary and sufficient conditions on a and b for the
following polynomial to have all roots in the open left half-plane: D(s) = s⁴ +
s³ + s² + as + b.
Problem 4.4: Give conditions on kP , kI , and kD such that a closed-loop system
with P(s) = k/(s(s + a)) and C(s) = kP + kI/s + kD s is BIBO stable. Assume
that k > 0, but consider both cases kI = 0 and kI ≠ 0.
Problem 4.5: (a) Sketch the root-locus of the transfer function

G(s) = s(s + 1) / ((s + 2)²(s + 3))    (4.115)

applying only the main rules.


(b) Repeat part (a) for

G(s) = (s + 3) / (s(s + 9)³).    (4.116)

(c) Repeat part (a) for

G(s) = (s + a) / ((s + b)(s² − 2s + 2)).    (4.117)

Also give condition(s) that a > 0 and b > 0 must satisfy for the closed-loop
system to be stable for sufficiently high gain k > 0 (note that you do not need
to apply the Routh-Hurwitz criterion, nor provide the range of k for which the
system is closed-loop stable).
Problem 4.6: Sketch the root-locus for the transfer function

G(s) = (s + 3) / ((s − 1)(s² + 2s + 2)).    (4.118)

Give the range of k > 0 for which the system is closed-loop stable and calculate
the angle of departure of the locus from the pole at s = −1 + j.
Problem 4.7: Sketch the root-locus for the transfer function

G(s) = (s + 1)² / s³.    (4.119)
Give the locations of the breakaway points on the real axis, the values of the
angles of departure, and the range of k > 0 for which the system is closed-loop
stable.
Problem 4.8: Sketch the root-locus for the transfer function

G(s) = 1 / ((s − 1)(s + 3)²) = 1 / (s³ + 5s² + 3s − 9).    (4.120)

Give the range of gain k > 0 for which the closed-loop system is stable, the
locations of the breakaway points on the real axis, and the locations of the
jω-axis crossings.

Problem 4.9: Sketch the root-locus for the transfer function

G(s) = (s² + 2s + 17) / (s + 1)³ = (s + 1 + 4j)(s + 1 − 4j) / (s + 1)³.    (4.121)

Include the angles of departure and arrival. Also give the range of gain k > 0
for which the closed-loop system is stable and use the result to improve your
sketch of the locus, if possible.
Problem 4.10: Consider a standard control system as shown in Fig. 4.47. Let

P(s) = 1 / (s²(s + 1)),    C(s) = kP + kI/s + kD s.    (4.122)

(a) Assuming that kP ≠ 0 and that the closed-loop system is stable, what
condition(s) must kI and kD satisfy so that the steady-state error for constant
reference inputs is zero?
(b) Assuming that kP ≠ 0 and that the closed-loop system is stable, what
condition(s) must kI and kD satisfy so that the steady-state error for constant
input disturbances is zero?
(c) Assuming that kI ≠ 0, what condition(s) must kP, kI, and kD satisfy so that
the closed-loop system is stable?
Problem 4.11: (a) Sketch the root-locus for k > 0 and the problem of Fig. 4.48.
There is one zero at s = 0 and two poles at s = 1.

Figure 4.48: Pole/zero locations for problem 4.11

(b) Give the range of gain k > 0 for which the system is closed-loop stable, and
give the locations of the jω-axis crossings.
(c) Give the locations of the breakaway points on the real axis.
Problem 4.12: Sketch the root-locus for the problem of Fig. 4.49, using only
the main rules. There is a zero at s = 0, two poles at s = −1, and two poles at
s = −1 ± j.
Problem 4.13: Sketch the root-locus for the problem of Fig. 4.50. Do not
calculate the range of gains for stability, the jω-axis crossings, or the breakaway
points from the real axis. However, give the angles of departure from the complex
poles. There is a zero at s = 0 and a zero at s = −2. There are poles at s = ±j
and s = ±2j. Note that tan⁻¹(0.5) ≈ 27◦.

Figure 4.49: Pole/zero locations for problem 4.12

Figure 4.50: Pole/zero locations for problem 4.13
Problem 4.14: Determine whether the roots of the polynomial D(s) are all in
the open left half-plane, where

D(s) = s⁵ + 3s⁴ + 4s³ + 4s² + 3s + 1.    (4.123)

Problem 4.15: (a) Consider the system of Fig. 4.47. Let

P(s) = 1 / (s(s + 1)),    C(s) = (s + a) / (s + 1).    (4.124)

Determine the range of values of the parameter a of C(s) such that the closed-
loop system is stable.
(b) For the system of part (a), give the steady-state error ess that is observed
when r(t) = 2 and d(t) = 0. The result may be a function of the parameter a.
(c) For the system of part (a), give the steady-state error ess that is observed
when r(t) = 0 and d(t) = 2. The result may be a function of the parameter a.

Problem 4.16: (a) Sketch the root-locus for

G(s) = (s + 2) / (s²(s² + 2s + 2)).    (4.125)
Calculate the angles of departure for the complex poles, but do not calculate
the breakaway points from the real axis, the range of gain for stability, or the
crossing points on the jω-axis.
(b) Sketch the root-locus for

G(s) = 1 / ((s − 1)(s + 5)²).    (4.126)
Calculate the breakaway points from the real axis and the range of gain for
stability (the crossing points on the jω-axis are not needed).
Problem 4.17: Consider the feedback system of Problem 4.4. Assuming that
the stability conditions are satisfied, determine the steady-state error ess =
limt→∞ e(t) for r(t) = 2. Consider both cases kI = 0 and kI ≠ 0.
Problem 4.18: (a) Sketch the root-locus for

G(s) = k / (s⁴ + 6s³ + 13s² + 12s + 4)    (4.127)

using only the main rules (the poles are shown on Fig. 4.51). Give the range of
gain k > 0 for which the system is closed-loop stable.

Figure 4.51: Pole/zero locations for problem 4.18 (a)

(b) Sketch the root-locus for the problem of Fig. 4.52, using only the main rules.
Give the angles of departure from the complex poles.
Problem 4.19: (a) Find Y (s) as a function of R(s) and D(s) for the system of
Fig. 4.53.
(b) For the system of part (a), let C1(s) = 1/(s + a), C2 (s) = k(s + 1)/(s + 2),
P (s) = 1/s, and find conditions on k and a such that the closed-loop system is
stable.

Figure 4.52: Pole/zero locations for problem 4.18 (b)

Figure 4.53: System for problem 4.19

(c) For the system of parts (a)-(b), let e(t) = r(t) − y(t). Note that the signal
is not equal to the signal denoted e1 on the diagram. Assuming constant but
arbitrary signals r = r0 and d = d0, obtain E(s) and conditions on k and a such
that limt→∞ e(t) = 0.
Problem 4.20: (a) Sketch the root-locus for

G(s) = 1 / (s((s + 10)² + 4)).    (4.128)

Compute the breakaway points from the real axis and the value of k for which
crossing of the jω-axis occurs. Do not compute the angles of departure.
(b) Sketch the root-locus for

G(s) = 1 / (s((s + 4)² + 16)).    (4.129)

Compute the breakaway points from the real axis and the angles of departure
from the complex poles. Do not compute the value of k for jω-axis crossing.

Chapter 5

Frequency-domain analysis of control systems

5.1 Bode plots


5.1.1 Motivation
While control systems may be designed on the basis of pole/zero knowledge,
some applications are better handled using the frequency response of the sys-
tem to be controlled. Indeed, the frequency response may be measured with
good accuracy by injecting sinusoids of various frequencies at the input of the
system. Given a system with transfer function P (s), Bode plots are plots of the
magnitude and of the angle of P (jω) as a function of ω, i.e., of the gain and
phase shift of the steady-state sinusoidal response in the case of a stable system
[5]. For example, Fig. 5.1 shows the magnitude Bode plot of a low-pass filter, as
may be encountered in filtering applications.

Figure 5.1: Magnitude Bode plot of a low-pass filter (log scales are used)

In the context of control systems, Bode plots are used even when the system
is unstable, generalizing the concept of frequency response. The Bode plots are
computed by replacing s by jω in the transfer function P (s). The frequency
response does not exist for unstable systems, due to the lack of steady-state


sinusoidal response, but the Bode plots can nevertheless be determined exper-
imentally by placing the system in a stabilizing feedback loop, as shown in
Fig. 5.2. In this manner, the effect of initial conditions will decay to zero and
the responses will remain bounded. A reference signal r = r0 sin(ω0t) is applied
and, assuming a stable closed-loop system, the signals u and y will converge to
the steady-state responses

uss = u0 sin(ω0t + α0 ),
yss = y0 sin(ω0 t + β0 ). (5.1)

Then

|P(jω0)| = y0/u0,    ∡P(jω0) = β0 − α0.    (5.2)
In other words, the Bode plots can be measured and interpreted similarly for
stable and unstable systems. One just needs to remember that the steady-state
sinusoidal response only exists for unstable systems if the systems are placed in
a stabilizing feedback loop.
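As an illustration of (5.1)-(5.2), the amplitudes and phases of recorded sinusoids can be estimated by correlating the samples with sin and cos at the test frequency. In the Python sketch below (not part of the original text), synthetic signals stand in for measured u(t) and y(t):

```python
import math

def sine_fit(samples, times, w0):
    """Fit A*sin(w0*t + phi) by correlation with sin/cos at w0.
    Assumes (near-)integer numbers of periods; returns (A, phi)."""
    n = len(samples)
    s = sum(x * math.sin(w0 * t) for x, t in zip(samples, times)) * 2 / n
    c = sum(x * math.cos(w0 * t) for x, t in zip(samples, times)) * 2 / n
    return math.hypot(s, c), math.atan2(c, s)

# Synthetic steady-state records (about 5 periods at w0 = 2 rad/s)
w0, dt = 2.0, 0.001
times = [k * dt for k in range(5 * int(2 * math.pi / w0 / dt))]
u = [1.0 * math.sin(w0 * t) for t in times]        # u0 = 1, alpha0 = 0
y = [0.5 * math.sin(w0 * t - 0.7) for t in times]  # y0 = 0.5, beta0 = -0.7

(u0, alpha0), (y0, beta0) = sine_fit(u, times, w0), sine_fit(y, times, w0)
gain = y0 / u0          # estimate of |P(jw0)|, per (5.2)
phase = beta0 - alpha0  # estimate of the phase shift, in radians
```

This correlation approach averages out noise at other frequencies, which is one reason sinusoidal testing yields accurate frequency-response measurements.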

Figure 5.2: Closed-loop system

As for the root-locus, Bode plots can be computed easily and rapidly using
modern software. The procedures described in this chapter are not useful to
draw manually detailed plots, but rather to gain a valuable understanding of how
transfer function parameters are related to frequency response characteristics.

5.1.2 Approximations of the frequency response


Fig. 5.3 shows the Bode plots for the transfer function G(s) = 1/(s + 1), with
the magnitude plot on the top and the phase plot on the bottom. Both plots
use a log scale on the x-axis, such that equal space is assigned to a range from
a given frequency to a frequency ten times greater. Such a frequency range is
referred to as a decade. The y-axis of the magnitude plot also uses a log scale,
but is labelled in dB (from the unit for sound level, the decibel), with

[G(jω)]dB = 20 log |G(jω)| . (5.3)



On the y-axis, a decade spans a range of 20 dB. The phase plot shows ∡G(jω)
in a regular scale labelled in degrees.

Figure 5.3: Bode plots and approximations

The plots on Fig. 5.3 show the magnitude and phase responses as solid lines
and approximations as dashed lines. The magnitude approximation is very close.
In the phase plot, two approximations are shown, with a coarse one labelled #1
and a finer one labelled #2. For the transfer function G(s) = 1/(s + 1), the
approximations are based on the fact that G(jω) ≃ 1 for ω ≪ 1 and G(jω) ≃
1/(jω) = −j/ω for ω ≫ 1, resulting in the magnitude and phase approximations
given in the table below.

ω ≪ 1:  [G(jω)]dB ≃ 0,  ∡G(jω) ≃ 0◦
ω ≫ 1:  [G(jω)]dB ≃ −20 log(ω),  ∡G(jω) ≃ −90◦

In the case of the magnitude plot, the approximation is composed of two
lines. The first line is flat at 0 dB/decade and the second line slopes in the
negative direction at -20 dB/decade. The two lines intersect at a point with
frequency ω = 1 rad/s and magnitude equal to 0 dB. The transition frequency is

equal to the magnitude of the pole at s = −1. For the phase plot, approximation
#1 is a discontinuous curve composed of a flat line at 0 deg and another flat line
at −90◦ with a sharp transition at ω = 1 rad/s. This approximation is quite
coarse, and a finer approximation is shown with the dashed line labelled #2.
Both approximations can be used to produce Bode plots, with the second one
more accurate but also more time-consuming to apply.
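The size of the corner error is easy to check numerically. For G(s) = 1/(s + 1) at the transition frequency, the exact values are about −3 dB and −45◦, versus 0 dB (and, coarsely, 0◦ or −90◦) for the straight-line approximations (illustrative Python, not part of the text):

```python
import cmath, math

# Exact frequency response of G(s) = 1/(s+1) at the transition frequency
# w = 1 rad/s, quantifying the gap to the straight-line approximations.
G = lambda s: 1 / (s + 1)
g = G(1j * 1.0)
mag_db = 20 * math.log10(abs(g))           # -3.01 dB (asymptote: 0 dB)
phase_deg = math.degrees(cmath.phase(g))   # -45 degrees
```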
A similar approximation is obtained for G(s) = 1/(s + p) with p > 0, ex-
cept that the transition occurs at ω = |p| and the low frequency magnitude
is −20 log(|p|). If p < 0, the phase plot is similar but moving from −180◦ to
−90◦. For a zero, G(s) = s + z, the plot for the magnitude is similar but adding
20 dB/decade at ω = |z|. The phase plots are the same as for poles, but with
signs reversed. Overall, the changes occurring at the transition frequency are
summarized below.

            OLHP pole    ORHP pole    OLHP zero    ORHP zero
Magnitude   −20 dB/dec   −20 dB/dec   +20 dB/dec   +20 dB/dec
Phase       −90◦         +90◦         +90◦         −90◦

Note that the coarse approximation is recommended, because it can be obtained
much faster, while precise Bode plots are easily computed numerically nowadays.

5.1.3 Bode plots - Systems with no poles or zeros at the origin
Procedure
Step 1: prepare to draw two plots. For both plots, the x-axis is the log of the
frequency ω. Although a log scale is used, the x-axis is typically labelled in ω
directly (rad/s or Hz), rather than log(ω). A scale for the x-axis might be 0.1,
1, 10, 100, · · · , (rad/s). A factor of 10 on the x-axis is called a decade, and
corresponds to an addition of 1 in log(ω). A good choice of x-axis normally
spans from
ωmin = (1/10) min(|zi|, |pj|)  to  ωmax = 10 max(|zi|, |pj|),    (5.4)

where zi are the zeros of the system and pj are the poles (whose values have the
units of rad/s). The first plot shows the magnitude of the frequency response as
a function of log (ω). A log scale is used again, with 20 log |P (jω)| shown on the
y-axis. The units are dB’s. A multiplication of the magnitude by 10 translates
into an addition of 20 dB. The second plot is a phase plot, giving the angle (in

degrees) of P (jω) as a function of log(ω). A log scale is not used for the y-axis
of this plot.
Step 2: start the plots on the left at a sufficiently low frequency, such as ωmin
in (5.4). For the magnitude, draw a horizontal line at 20 log(|P (0)|). For the
phase, draw a horizontal line at 0◦ if P (0) > 0 and 180◦ if P (0) < 0.
Step 3: continue the plots from left to right. Every time a pole or zero is
encountered, that is, every time ω = |pi | or ω = |zi | :
(a) for the magnitude, change the slope by an additional −20 dB/decade every
time a pole is encountered, and 20 dB/decade every time a zero is encountered.
(b) for the phase, add −90◦ whenever a left half-plane pole or right half-plane
zero is encountered, and 90◦ whenever a right half-plane pole or left half-plane
zero is encountered.
Step 4: draw smooth curves that fit the approximations.
Comments
In simple terms, the procedure amounts to:

1. start the Bode plots from the left using the low-frequency approximation
PLF (s) ≃ P (0) for s = jω small.

2. move from left to right, applying −20 dB/dec to the magnitude plot when
a pole is reached and +20 dB/dec when a zero is reached.

3. for the phase plot, add −90◦ when an OLHP pole or ORHP zero is
reached, and +90◦ when an ORHP pole or OLHP zero is reached.
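The magnitude part of this recipe is easy to mechanize. The sketch below (illustrative Python, with a hypothetical helper name) accumulates the slope change of each pole or zero past its transition frequency:

```python
import math

def asymptote_db(w, low_freq_db, transitions):
    """Asymptotic Bode magnitude (dB) at frequency w (rad/s): start from
    the low-frequency level and add each slope change (dB/decade) past
    its corner. 'transitions' is a list of (corner_freq, slope_change)."""
    db = low_freq_db
    for wc, slope in transitions:
        if w > wc:
            db += slope * math.log10(w / wc)
    return db

# P(s) = (s+1)/(s+10): low-frequency level 20*log10(0.1) = -20 dB, a zero
# at 1 rad/s (+20 dB/dec) and a pole at 10 rad/s (-20 dB/dec).
level_hf = asymptote_db(100.0, -20.0, [(1.0, +20.0), (10.0, -20.0)])
# -20 + 20*log10(100) - 20*log10(10) = 0 dB, the true high-frequency gain
```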

Example 1: P(s) = (s + 1)/(s + 10). |P(0)| = 0.1 (or −20 dB) and ∡P(0) = 0◦.
Start the Bode plot around ω = 0.1 (1/10 × 1) with this approximation. Next,
move from left to right and apply the changes when the zero at s = −1 and the
pole at s = −10 are encountered. The result is shown in Fig. 5.4. The dashed
curves are smooth
estimates of the frequency response, based on the linear approximations. From
these estimates, one may predict, for example, that the response of the system
to a cos(10t) input is a signal M cos(10t + φ), with M slightly smaller than 1
and a phase advance φ around 45◦ .
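This prediction can be compared with the exact frequency response (a quick numerical check, not in the original text):

```python
import cmath, math

# Exact response of P(s) = (s+1)/(s+10) at w = 10 rad/s, to compare with
# the values read off the approximate Bode plot.
P = lambda s: (s + 1) / (s + 10)
g = P(10j)
M = abs(g)                           # ~0.711, somewhat below 1
phi = math.degrees(cmath.phase(g))   # ~39.3 degrees of phase advance
```

The exact gain and phase are reasonably close to the estimates read off the asymptotes, as expected for a frequency between the zero and the pole.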
Example 2: P(s) = (s + 10)/(s + 1) (similar to example 1, but the pole and the
zero are reversed). The plot is shown on Fig. 5.5. Note that the output lags
behind the input when the pole is smaller than the zero in magnitude.
Example 3: P(s) = (−s + 10)/(s + 1) (similar to example 2, but with a right
half-plane zero). The plot is shown on Fig. 5.6. Note that the phase lag is larger
than in the previous example, due to the zero being in the right half-plane.

Figure 5.4: Bode plot for example 1

Figure 5.5: Bode plot for example 2

Such a zero is called a non-minimum-phase zero. One also says that a system is
minimum-phase if all its zeros are in the left half-plane, and non-minimum-phase
otherwise.

Figure 5.6: Bode plot for example 3

Example 4: P(s) = (s − 10)/(s + 1). Here, |P(0)| = 10 (or 20 dB) and
∡P(0) = 180◦. The plot is shown on Fig. 5.7.

Figure 5.7: Bode plot for example 4

Example 5: P(s) = 10(s − 1)/((s − 3)(s − 10)). |P(0)| = 1/3 ≃ −10 dB and
∡P(0) = 180◦. Note that the frequency 3 rad/s is approximately mid-way
between 1 and 10 rad/s in a log scale (log(3) = 0.48). The plot is shown on
Fig. 5.8.

Figure 5.8: Bode plot for example 5

5.1.4 Bode plots - Systems with poles or zeros at the origin
Preliminary
The procedure remains the same, except that the low-frequency approximation
in step 2 is more complicated. If the system has n zeros at s = 0, the low-
frequency approximation is of the form

PLF(s) = ksⁿ  with  k = [s⁻ⁿP(s)]s=0.    (5.5)

In other words, k is the DC gain of the transfer function with the zeros at the
origin removed. If the system has n poles at s = 0, the same approximation
applies with n < 0.
Procedure
In step 2, draw the low-frequency approximation as follows. Let k = [s⁻ⁿP(s)]s=0
where n is the number of zeros of P (s) at the origin (if there are poles at the

origin instead, let −n be the number of poles). For the magnitude, draw
a line with a slope equal to n × 20 dB/decade. Position the line so that
|P(jω0)|dB = 20 log(|k|ω0ⁿ), where ω0 is some low frequency where the graph
is started (for example, ω0 = ωmin using (5.4) computed from the poles and
zeros other than those at the origin). For the phase, draw a horizontal line at
0◦ + n 90◦ if k > 0 and 180◦ + n 90◦ if k < 0.
Example 6: P(s) = (s + 10)/s. Since n = −1, k = 10, the low-frequency
approximation is PLF(s) = 10/s. Thus, we begin the plot using

|PLF(jω)|dB = |10/(jω)|dB = 20 − 20 log(ω).    (5.6)

At ω = 1, |PLF(jω)|dB = 20 dB, which allows us to “pin down” the low-frequency
approximation on the plot. The low-frequency phase is ∡P(jω) = ∡(1/j) = −90◦.
The plot is shown on Fig. 5.9.

Figure 5.9: Bode plot for example 6

Example 7: P(s) = (s − 10)/s³. Since n = −3, k = −10, PLF(s) = −10/s³. We
start at ω = 1 with the approximation

|PLF(jω)|dB = |10/ω³|dB = 20 − 60 log(ω)    (5.7)

and |PLF |dB = 20 dB at ω = 1. The low-frequency phase is given by

∡P (jω) = 180◦ − 270◦ = −90◦ . (5.8)

The plot is shown on Fig. 5.10.

Figure 5.10: Bode plot for example 7

5.1.5 Complex poles and zeros with low damping factor

Preliminary

Complex poles and zeros occur in complex pairs, with both poles or both zeros
having the same magnitude. When a frequency ω is reached on the frequency
axis where ω = |p|, the magnitude of the two poles or zeros, the effect is the
same as if a pair of real poles or zeros had been reached (adding ±40 dB/decade
to the magnitude and ±180◦ to the phase). For complex poles close to the
jω-axis, however, the frequency response deviates significantly from the linear
approximation.
Consider a system with a pair of stable complex poles p = −a + jb, p∗ =
−a−jb. One defines the natural frequency ωn and the damping factor ζ through
the formulas

ωn = |p| = √(a² + b²),
ζ = −Re(p)/|p| = a/√(a² + b²).    (5.9)

These relationships are illustrated on Fig. 5.11. Note that ζ = cos(α), where α
is the angle between the direction of the pole and the real axis.

Figure 5.11: Definition of variables for complex poles

Consider the contribution of a complex pole pair to the transfer function

pp* / (s² − (p + p*)s + pp*) = ωn² / (s² + 2ζωn s + ωn²)    (5.10)

with the numerator set so that the DC gain is equal to 1. The Bode plot
approximation is such that the gain is 1 (0 dB) up to ωn, then decreases at the
rate of −40 dB per decade. However, at ωn , the exact gain is

|P(jωn)| = |ωn² / (2ζjωn²)| = 1/(2ζ).    (5.11)
Therefore, the actual magnitude is 10 instead of 1 for ζ = 0.05, or 20 dB.
Fig. 5.12 shows the shape of the responses for different values of the damping
factor ζ. For small damping factors, the magnitude response peaks significantly
above the linear approximation. The transition of the phase response around ωn
is also sharper. For right half-plane poles, the phase is reversed and ζ is replaced
by |ζ| for the plots.

Figure 5.12: Bode plots of systems with two complex poles in the left half-plane

The true peak of the magnitude response is not exactly at ωn, but at a
frequency

ωp = √(1 − 2ζ²) ωn = √(b² − a²),    (5.12)

which is slightly smaller than ωn. There is no peaking of the magnitude unless
ζ < 0.707, i.e., unless the imaginary part of the pole is greater than the real
part. The actual magnitude of the peak is
|P(jωp)| = 1 / (2ζ√(1 − ζ²)).    (5.13)

However, for small damping factor, the frequency of peaking is close to ωn, and
the magnitude is close to 1/2ζ, as given by (5.11).
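Formulas (5.11)-(5.13) can be verified numerically, for example with ωn = 10 and ζ = 0.2 (illustrative Python, comparing the closed-form peak with a brute-force frequency sweep):

```python
import math

# Two-pole transfer function (5.10) for wn = 10, zeta = 0.2
wn, zeta = 10.0, 0.2
P = lambda w: wn**2 / complex(wn**2 - w**2, 2 * zeta * wn * w)  # P(jw)

wp = wn * math.sqrt(1 - 2 * zeta**2)          # (5.12), ~9.59 rad/s
Mp = 1 / (2 * zeta * math.sqrt(1 - zeta**2))  # (5.13), ~2.55

# Brute-force search for the peak over a fine frequency grid
grid = [k / 1000 for k in range(1, 20000)]    # 0.001 ... 19.999 rad/s
w_star = max(grid, key=lambda w: abs(P(w)))   # numerical peak location
```

The numerically located peak agrees with (5.12)-(5.13), and the gain at ωn itself equals 1/(2ζ), as in (5.11).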
For complex zeros close to the jω-axis in the open left half-plane, a similar
correction needs to be applied and is shown in Fig. 5.13. For right half-plane
zeros, the phase is reversed. If the zeros are on the jω-axis, the response is zero
at the corresponding frequency, or at minus infinity in the log scale.

Figure 5.13: Bode plots for complex zeros in the left half-plane
Procedure
When the frequency reaches a value equal to the magnitude of a pair of complex
poles or zeros, apply the rules as if there were two real poles or two real zeros
with the same magnitude. If the pair of complex poles or zeros is close to the
jω-axis, peaking in the response may be accounted for as follows. Given a pair
of complex poles p = −a ± jb, let ωn = √(a² + b²) (called the natural frequency)
and ζ = a/ωn (called the damping factor). If |ζ| < 0.5, an increase of gain equal
to 20 log(|1/2ζ|) should be added to the magnitude plot at ωn . For a pair of
complex zeros, a similar reduction in the gain should be applied. The effect of a
small damping factor on the phase plot is a rapid variation of phase around ωn.
Example 8: Consider the transfer function

P(s) = s² / ((s + 1)(s² + 0.1s + 100)),    (5.14)

which has poles at p1 = −1, p2,3 = −0.05 ± √((0.05)² − 100) = −0.05 ± j9.99987.
The complex poles are such that ωn = 10, and ζ = 0.005. Therefore, the peak
of the gain at the natural frequency is 1/(2ζ) = 100 ≡ 40 dB. To draw the Bode
plot, note that the low-frequency approximation is
PLF(s) = s²/100,    |PLF(jω)|dB = −40 + 40 log |ω|,    ∡PLF(jω) = 180◦,    (5.15)

and, for ω = 0.1,

|PLF (0.1j)|dB = −80 dB. (5.16)

The resulting Bode plot is shown in Fig. 5.14. The peak of the magnitude plot
and the sharp phase transition at the complex poles were accounted for in the
drawing of the smooth approximation.

Figure 5.14: Bode plots of example with low damping complex poles

For ω = 10 rad/s, the estimate of the Bode plot is |P| = 20 dB and ∡P = 0◦,
i.e., P(j10) = 10. The true value is

P(j10) = −100 / ((1 + j10)(j)) = 9.9010 + 0.9901j,    (5.17)
which is close to the estimate.
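The computation of (5.17) is easily reproduced numerically (illustrative Python):

```python
# Numerical check of (5.17) for P(s) = s^2/((s+1)(s^2 + 0.1s + 100)):
# at s = j10, the quadratic factor reduces to j, giving -100/((1+10j)*j).
P = lambda s: s**2 / ((s + 1) * (s**2 + 0.1 * s + 100))
g = P(10j)   # close to the Bode-plot estimate P(j10) = 10
```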

5.1.6 Some special transfer functions


We discuss a few additional transfer functions that are commonly encountered,
and their associated Bode plots.
Time delay: the transfer function of a time delay Td (in seconds) is

P (s) = e−sTd . (5.18)

Note that this is not a rational function of s, so that the usual rules of Bode plots
cannot be applied. However, the plots themselves can be drawn. The frequency
response of a time delay is P (jω) = e−jωTd and

|P (jω)| = 1,
∡P (jω) = −ωTd . (5.19)

The Bode plots are shown in Fig. 5.15. A plot of the phase with a linear scale
for the x-axis was inserted, to illustrate the property of the delay called linear
phase. This linear phase property is associated with the fact that a time delay
does not alter the shape of the signal (the signal is not distorted, as it generally
would be with a rational transfer function).
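A quick numerical illustration of (5.19) (not part of the original text):

```python
import cmath

# Pure delay P(jw) = exp(-j*w*Td): magnitude 1 at every frequency and
# phase -w*Td, linear in w, per (5.19). Td = 0.1 s for illustration.
Td = 0.1
P = lambda w: cmath.exp(-1j * w * Td)

test_freqs = (1.0, 2.0, 5.0)
mags = [abs(P(w)) for w in test_freqs]            # all equal to 1
phases = [cmath.phase(P(w)) for w in test_freqs]  # -0.1, -0.2, -0.5 rad
```

(The computed phase matches −ωTd as long as |ωTd| stays below π; beyond that, the principal value wraps around.)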

Figure 5.15: Bode plots of pure delay system

Notch filter: a notch filter eliminates the component of a signal at a given


frequency ω0. The transfer function of a second-order notch filter is given by

P(s) = (s² + ω0²) / (s² + 2ζω0 s + ω0²).    (5.20)

For example, one may let ζ = 1. The Bode plots of the notch filter are shown in
Fig. 5.16, together with the individual contributions of the complex poles and
zeros. Note that the zeros are exactly on the jω-axis, so that the gain for a
sinusoidal signal of frequency ω0 is exactly zero (P (jω0 ) = 0). Outside a narrow
band around ω0, P (jω) ≃ 1.
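For example, with ω0 = 10 rad/s and ζ = 1, the gain of (5.20) is exactly zero at the notch frequency and close to 1 a decade away (illustrative check):

```python
# Notch filter (5.20) with w0 = 10 rad/s, zeta = 1
w0, zeta = 10.0, 1.0
P = lambda s: (s**2 + w0**2) / (s**2 + 2 * zeta * w0 * s + w0**2)

gain_at_notch = abs(P(1j * w0))       # 0: the w0 component is removed
gain_decade_up = abs(P(1j * 100.0))   # ~0.98, close to unity
```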

Figure 5.16: Bode plot of a notch filter, showing also the individual contributions
of the denominator and numerator of the transfer function

Wash-out filter: a wash-out filter eliminates the DC component of a signal.
An example of a wash-out filter is

P(s) = s / (s + a).    (5.21)

It is a special case of a high-pass filter. The Bode plot of the filter is shown in
Fig. 5.17. Wash-out filters are used to:

• eliminate biases and offsets in signals.

• keep the response of a system centered around a neutral position (an ex-
ample is a flight simulator which emulates the motion of an aircraft, yet
must remain at the same location).

• approximate the derivative of a signal within a finite frequency range (as
in a PID controller).
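These properties follow from the frequency response of (5.21); a small numerical illustration with a = 1 (not part of the original text):

```python
# Wash-out filter (5.21) with a = 1: zero gain at DC, unit gain at high
# frequency, and a 45-degree lead with gain 1/sqrt(2) at the corner w = a.
a = 1.0
P = lambda s: s / (s + a)

dc_gain = abs(P(0))          # 0: biases and offsets are removed
hf_gain = abs(P(1j * 1e6))   # ~1: high frequencies pass through
corner = P(1j * a)           # j/(1+j) = 0.5 + 0.5j
```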

Figure 5.17: Magnitude Bode plot of a wash-out filter

Low-pass filter: the wash-out filter is a high-pass filter, because it filters out the
low frequencies and lets high frequencies pass through. Conversely, a low-pass
filter removes high-frequency components while transmitting the low-frequency
components. For example, consider the signal of Fig. 5.18, meant to represent
a noisy sinusoidal signal and given by

u(t) = sin(2πt) + 0.3 sin(2π · 20t).    (5.22)

A low-pass filter can be used to isolate the signal at 1 Hz, while reducing the
noisy component at 20 Hz.
An example of low-pass filter is

F(s) = aⁿ / (s + a)ⁿ,    (5.23)

where a determines the bandwidth of the filter, and n is the order of the filter.
Fig. 5.19 shows the Bode plots of F (s) for a = 30, and for n = 1 and n = 2.
The magnitude plot shows the attenuation of high-frequency signals, which is
enhanced for the higher value of n. Lower values of a also reduce high frequency
components, but affect the main component as well if the value is too small. In
the example, the main component is at ω = 6.28 rad/s, while the noise is at
ω = 125.7 rad/s.

Figure 5.18: Noisy signal to demonstrate low-pass filtering.
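The attenuation at these two frequencies can be computed directly from (5.23) (illustrative Python check):

```python
import math

# Gain of F(s) = a^n/(s+a)^n with a = 30 at the signal frequency (1 Hz)
# and the noise frequency (20 Hz) of signal (5.22)
a = 30.0
F = lambda s, n: (a / (s + a)) ** n

gains = {n: (abs(F(2j * math.pi * 1.0, n)),   # signal component
             abs(F(2j * math.pi * 20.0, n)))  # noise component
         for n in (1, 2)}
# n = 1: signal ~0.98, noise ~0.23;  n = 2: signal ~0.96, noise ~0.054
```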
Fig. 5.20 shows the noisy signal filtered by F (s) with n = 1 and n = 2,
together with the original signal sin(2πt). For n = 1, the noise is considerably
reduced, while it is virtually eliminated for n = 2. Notable features are that the
main component is delayed compared to the original signal, and slightly reduced
in magnitude as well. This effect is stronger for n = 2, and can be predicted
from the magnitude and phase plots of Fig. 5.19. In general, the higher the
reduction of the high-frequency noise, the greater the delay of the low frequency
signal.
A useful observation is that the effect of the filter at low frequencies can be
approximated by a pure time delay. Indeed, the transfer function of the filter
(5.23) can be represented by the first terms of its Taylor series expansion around
s = 0, with
F(s) ≃ 1 − (n/a)s + ...    (5.24)
Similarly, a time delay can be approximated by

F (s) = e−sTd ≃ 1 − Td s. (5.25)

Combining the expressions, one obtains the equivalent time delay of the low-pass
filter
Td = n/a.    (5.26)

Figure 5.19: Bode plots of aⁿ/(s + a)ⁿ for a = 30, and for n = 1 and n = 2

Lower bandwidth or higher order filters imply higher time delays. In the example
of Fig. 5.20, the approximate time delay is Td = 33.3 ms for n = 1, and Td =
66.7 ms for n = 2.
Low-pass filters are often used to reduce the noise re-injected in the feedback
loop from measurements of the output. However, the delay is detrimental to
the closed-loop stability of the system, so that the choice of filter represents a
trade-off between removal of the noise and closed-loop dynamics.
The transfer function (5.23) is only one example of a low-pass filter. Many
low-pass filters exist, such as Butterworth filters, Bessel filters, Chebyshev filters,
and elliptic filters. In general, the formula for the equivalent delay of a filter is
F (0) − F (s)
Td = lim . (5.27)
s→0 sF (0)
In the case of a stable transfer function
F(s) = (b_{n-1} s^{n-1} + · · · + b_1 s + b_0) / (s^n + a_{n-1} s^{n-1} + · · · + a_1 s + a_0), (5.28)
the formula gives
Td = a1/a0 − b1/b0. (5.29)
The result requires that a0 ≠ 0 (needed for stability) and b0 ≠ 0 (needed for a
non-vanishing response at low frequencies).
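Formula (5.29) can be sanity-checked against the limit (5.27); the second-order filter below is hypothetical, chosen only to illustrate the computation:

```python
# Hypothetical stable filter F(s) = (b1 s + b0)/(s^2 + a1 s + a0);
# the coefficient values are illustrative, not from the text.
b1, b0 = 2.0, 50.0
a1, a0 = 12.0, 50.0
F = lambda s: (b1 * s + b0) / (s**2 + a1 * s + a0)

Td_formula = a1 / a0 - b1 / b0                 # (5.29)

s = 1j * 1e-5                                  # evaluate the limit (5.27)
Td_limit = ((F(0) - F(s)) / (s * F(0))).real   # numerically, at small |s|
print(Td_formula, Td_limit)
```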


Figure 5.20: Original (dashed) and filtered (solid) signals

5.2 Nyquist criterion of stability

5.2.1 Nyquist diagram

The Nyquist criterion [23] is an important test to determine the stability of a


closed-loop system, based on properties of the frequency response of the open-
loop system. It may be applied using experimentally measured frequency re-
sponse data, without obtaining a pole/zero model. The criterion is based on the
Nyquist diagram, which is explained first.


Figure 5.21: Polar representation of the frequency response

The Nyquist diagram is a plot of Im P (jω) vs. Re P (jω) for ω = −∞ → ∞.


As shown in Fig. 5.21, it may also be viewed as a polar plot of the frequency

response. Consider for example P (s) = 1/(s + 1). We have that


Re P(jω) = 1/(1 + ω²), Im P(jω) = −ω/(1 + ω²). (5.30)
Because
(Re P(jω) − 1/2)² + (Im P(jω))² = ((1 − ω²)²/4 + ω²)/(1 + ω²)² = 1/4, (5.31)
the Nyquist curve is a circle with radius 1/2 and centered at (1/2, 0). The
diagram for positive frequencies is shown in Fig. 5.22. The arrow shows the
direction for ω = 0 → ∞. If we consider instead P(s) = 1/(s + 1)³, the phase
is multiplied by 3, and the resulting Nyquist diagram for positive frequencies is
shown in Fig. 5.23.
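The circle property (5.31) can be verified pointwise; a small Python sketch:

```python
P = lambda s: 1 / (s + 1)

# Every point of the Nyquist curve lies on the circle of radius 1/2
# centered at (1/2, 0), as derived in (5.31).
for w in (0.0, 0.3, 1.0, 5.0, 100.0):
    assert abs(abs(P(1j * w) - 0.5) - 0.5) < 1e-12
print("all sampled points lie on the circle |P - 1/2| = 1/2")
```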


Figure 5.22: Bode plots (left) and Nyquist diagram (right) for 1/(s + 1)


Figure 5.23: Nyquist diagram of a third-order system

The plot for ω > 0 is complemented by the portion for ω < 0. However, the
fact that P (−jω) = P ∗(jω) (a consequence of the assumption that the impulse
response is real) implies that the diagram for ω < 0 is the reflection of the
diagram for ω > 0 with respect to the real axis. Fig. 5.24 shows the complete
diagrams for the two examples.
Other properties worth noting are that:


Figure 5.24: Complete Nyquist diagram for 1/(s + 1) (left) and 1/(s + 1)³ (right)

• If P (s) has no pole at s = 0, P (0) is real and finite (P (0) is the DC gain
of P (s)).

• If the number of poles is equal to the number of zeros, P (∞) = P (−∞).

• If the number of poles is greater than the number of zeros, P (∞) = P (−∞)
= 0.

• Assuming that P(s) is of the form

P(s) = k (s^m + · · ·)/(s^n + · · ·), (5.32)

with n ≥ m (proper transfer function), the high-frequency behavior may
be approximated by

PHF(jω) ≃ (k/ω^(n−m)) e^(−j(π/2)(n−m)), (5.33)

so that the Nyquist diagram approaches the origin for high frequencies
with an angle

∡PHF(jω) = −(π/2)(n − m) if k > 0
= π − (π/2)(n − m) if k < 0. (5.34)
2
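The high-frequency angle approximation can be checked numerically for the third-order example P(s) = 1/(s + 1)³ of this section (n − m = 3, k = 1 > 0); the sketch compares the phase of P(jω) at a high frequency with −(π/2)(n − m), modulo 360°:

```python
import cmath
import math

# For P(s) = 1/(s+1)^3, (5.34) predicts that the Nyquist curve approaches
# the origin with angle -(pi/2)*3 = -270 deg, i.e. +90 deg modulo 360 deg.
P = lambda s: 1 / (s + 1)**3
w = 1e5
actual = cmath.phase(P(1j * w))                    # principal value in (-pi, pi]
predicted = -(math.pi / 2) * 3                     # -3*pi/2
diff = cmath.phase(cmath.exp(1j * (actual - predicted)))  # wrap the difference
print(math.degrees(actual), math.degrees(diff))
```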

5.2.2 Nyquist criterion


The Nyquist criterion considers both the open-loop transfer function G(s) =
N (s)/D(s) and the closed-loop transfer function
G(s)/(1 + G(s)) = N(s)/(N(s) + D(s)). (5.35)
The open-loop poles are the roots of D(s) = 0, and the closed-loop poles are
the roots of N (s) + D(s) = 0. For simplicity, we will assume that there are no

open-loop poles or closed-loop poles on the jω-axis. We will also assume that
the number of poles is greater than or equal to the number of zeros.
Nyquist criterion

1. Plot G(jω) for ω = −∞ → ∞.

2. Let N be the number of clockwise encirclements of (−1, 0), that is, the
number of times that the closed curve drawn by G(jω) as ω = −∞ → ∞
encircles the (−1, 0) point in the clockwise direction.

3. Then: Z = N + P, where:

Z = number of closed-loop poles in the right half-plane


P = number of open-loop poles in the right half-plane

Example 1: let G(s) = 1/(s + 1), D(s) + N (s) = s + 2. The Nyquist plot is
shown in Fig. 5.25. There are no encirclements, so that N = 0. Since the system
is open-loop stable, P = 0. Therefore, Z = 0, and the test confirms that the
system is closed-loop stable. If we let G(s) = k/(s + 1), D(s) + N(s) = s + 1 + k,
the diagram is simply expanded by a factor k. The number of encirclements
does not change, and since the open-loop system remains stable, the test verifies
that the closed-loop system is stable for all k.


Figure 5.25: Nyquist diagram for 1/(s + 1)


Figure 5.26: Nyquist diagram for 1/(s + 1)³



Example 2: let G(s) = 1/(s + 1)³, D(s) + N(s) = (s + 1)³ + 1. Now, the
Nyquist diagram is shown in Fig. 5.26. Since N = 0, P = 0, we have Z = 0,
and the system is closed-loop stable. However, if we let G(s) = k/(s + 1)³,
D(s) + N(s) = (s + 1)³ + k, there is a value of k such that the number of
encirclements will change. The effect of an increasing parameter k on the Nyquist
diagram and on the root-locus is shown in Fig. 5.27. When the value of k is
sufficiently large that the value of a shown on the figure is greater than 1,
the number of clockwise encirclements becomes equal to 2, and the number of
unstable closed-loop poles also becomes Z = 2 (as indicated independently by
the root-locus). We had found earlier using the Routh-Hurwitz criterion (see
(4.46)) that the value of k which separated the stable and unstable conditions
was k = 8. Considering the Nyquist criterion, we note that
G(jω) = k/(1 + jω)³ = (k/(1 + ω²)³) ((1 − 3ω²) + j(−3ω + ω³)). (5.36)
Therefore,

Im G(jω) = 0 for −3ω + ω³ = 0, or ω² = 3. (5.37)

The crossing of the real axis by the Nyquist curve therefore occurs for ω² = 3
and

a = −Re G(jω) = −k(1 − 9)/4³ = k/8. (5.38)
Again, one finds that the system becomes unstable when k > 8 (the case for
k < 0 can also be considered using the root-locus and the Nyquist criterion, but
is left to the reader).
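These computations are easy to verify numerically; the sketch below checks that the real-axis crossing occurs at ω² = 3 with a = k/8, so that the curve passes through (−1, 0) exactly when k = 8:

```python
import math

w1 = math.sqrt(3.0)                    # real-axis crossing frequency, (5.37)
G = lambda s, k: k / (s + 1)**3

for k in (4.0, 8.0, 16.0):
    g = G(1j * w1, k)
    assert abs(g.imag) < 1e-9          # the crossing is on the real axis
    assert abs(-g.real - k / 8) < 1e-9 # crossing point -a with a = k/8, (5.38)

g8 = G(1j * w1, 8.0)
print(g8)  # k = 8 puts the curve on the (-1, 0) point
```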


Figure 5.27: Nyquist diagram of k/(s + 1)³ for k increasing (left), and corre-
sponding root-locus (right)

Example 3: G(s) = k/(s − 1) (k > 0). This is an unstable open-loop system.


Using the root-locus technique, as in Fig. 5.28, we can predict that the system

is stable for sufficiently large k > 0. Indeed, since D(s) + N (s) = s − 1 + k, the
system is known to be stable for k > 1. Drawing the Bode plots, we can sketch
the Nyquist diagram as in Fig. 5.29. Because there is one unstable open-loop
pole, P = 1. Then, for k < 1, N = 0, Z = 1 (1 unstable closed-loop pole). For
k > 1, N = −1 (1 counterclockwise encirclement), Z = 0 (no unstable closed-
loop pole). In summary, the Nyquist criterion correctly predicts that there is
one unstable pole for k < 1, and that the closed-loop system is stable for k > 1.
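The counting in the criterion can also be automated; the sketch below (an illustrative numerical winding-number computation, not from the text) accumulates the unwrapped phase of 1 + G(jω) along the imaginary axis and reproduces the conclusions for G(s) = k/(s − 1):

```python
import cmath
import math

def cw_encirclements(G, n=200001):
    """Count clockwise encirclements of the (-1, 0) point by G(jw),
    w = -inf -> inf, by accumulating the unwrapped phase of 1 + G(jw)."""
    total = 0.0
    prev = None
    for i in range(n):
        theta = -math.pi / 2 + math.pi * (i + 0.5) / n  # w = tan(theta) sweeps R
        w = math.tan(theta)
        ph = cmath.phase(1 + G(1j * w))
        if prev is not None:
            d = ph - prev
            if d > math.pi:        # unwrap the +-pi jumps of cmath.phase
                d -= 2 * math.pi
            elif d < -math.pi:
                d += 2 * math.pi
            total += d
        prev = ph
    # CCW winding of 1 + G around the origin = CCW winding of G around -1
    return -round(total / (2 * math.pi))

P = 1  # one open-loop pole in the right half-plane for G(s) = k/(s - 1)
N_small = cw_encirclements(lambda s: 0.5 / (s - 1))  # k = 0.5 < 1
N_large = cw_encirclements(lambda s: 2.0 / (s - 1))  # k = 2 > 1
print(N_small + P, N_large + P)  # Z = N + P for each case
```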

Figure 5.28: Root-locus of k/(s − 1)


Figure 5.29: Bode plots (left) and Nyquist diagram (right) for k/(s − 1)

Example 4: consider the system

G(s) = k(s + 2)/((s − 1)(s² + 2s + 2)) (5.39)

The system has one unstable open-loop pole (P = 1) and its Nyquist curve is
shown in Fig. 5.30. The following results apply:

k < 1:      N = 0     Z = N + P = 1 ⇒ unstable
1 < k < 2:  N = −1    Z = N + P = 0 ⇒ stable
k > 2:      N = 1     Z = N + P = 2 ⇒ unstable


Figure 5.30: Nyquist diagram of a conditionally stable system

Such a system is called conditionally stable: the gain k is required to belong to


a finite range for stability. The root-locus is shown in Fig. 5.31. While one pole
becomes stable for k > 1, the other two poles become unstable when k > 2.


Figure 5.31: Example of root-locus for a conditionally stable system

5.2.3 Counting the number of encirclements


In some cases, counting the number of encirclements can be confusing. However,
a simple procedure eliminates any ambiguity. As shown in Fig. 5.32 on the left,
one may draw a straight line from (−1, 0), and count the number of times that
the Nyquist curve intersects the line (letting the sign be positive if the crossing
is in the clockwise direction, and negative otherwise). Then, the sum is the
number of clockwise encirclements. The number is zero on Fig. 5.32. As shown
on the right of the figure, the number obtained is independent of the line that
is used for the counting.

5.2.4 Implications of the Nyquist criterion


1. For the closed-loop system to be stable (Z = 0), one must have:


Figure 5.32: Counting the number of clockwise encirclements

(a) no encirclements if the open-loop system is stable.

(b) otherwise, as many counterclockwise encirclements as there are unsta-


ble open-loop poles (N = −P ).

2. Note that, if the open-loop system is stable, sufficient (but not necessary)
conditions for the stability of the closed-loop system are:

• either |G(jω)| < 1 for all ω,


• or |∡G(jω)| < 180◦ for all ω > 0,
• or, for some ω1, |∡G(jω)| < 180° for ω ≤ ω1 and |G(jω)| < 1 for
ω ≥ ω1.

3. One may wonder what happens when the Nyquist curve goes through the
(−1, 0) point. Then, it is impossible to count the number of encirclements.
However, G(jω0) = −1 for some ω0 , so that 1 + G(jω0) = 0. Therefore,
the system has at least one closed-loop pole at jω0. This case was excluded
in the assumptions, but one can nevertheless conclude that the closed-loop
system is unstable, since some closed-loop poles are on the jω-axis.

4. The case where there are open-loop poles on the jω-axis was also excluded,
but may be handled using a modified procedure to be described later. This
case is important in feedback systems, because many control systems have
poles at the origin.

5. The main value of the Nyquist criterion is to quantify how far a system is
from being unstable. This topic will also be discussed later.

5.2.5 Explanation of the Nyquist criterion


Let C be the set of complex numbers located on a circle of radius 1 and associated
with an orientation in the counterclockwise (CCW) direction, as shown on the
left of Fig. 5.33. C is a closed curve and is also called a contour. Consider the
transfer function

H(s) = s + z1, (5.40)

where z1 is a real number. Note that s = −z1 is the zero of H(s). Let H(C) be
the set of complex numbers corresponding to H(s) for s ∈ C with an orientation
corresponding to C. The right side of Fig. 5.33 shows H(C), which is simply the
original curve C shifted by z1. A trivial fact is that H(C) encircles the origin in
the CCW direction if and only if the zero at s = −z1 belongs to the interior of
C. In the case shown on the figure, there are no encirclements.


Figure 5.33: Contour and transformed contour

Fig. 5.34 shows an interpretation of the result where the complex number
H(s) is a vector with angle α with respect to the real axis. For z1 = 2, as shown
on the figure, and for ∡s increasing from 0◦ to 360◦ , α grows from 0 to 30◦, then
goes down to −30◦ , then rises back to zero. H(C) does not encircle the origin
because the angle returns to 0◦ rather than reach 360◦ . For z1 = 0, α grows
continuously from 0◦ to 360◦, and the origin is encircled.
With this interpretation, it becomes clear that the result on the number of
encirclements remains true for arbitrary z1 and for any closed curve C that does
not intersect itself. Further, for a general transfer function
H(s) = bm (s − z1) · · · (s − zm) / ((s − p1) · · · (s − pn)), (5.41)


Figure 5.34: Angle of H(s) as a function of the angle of s

one has that


∡H(s) = ∡bm + Σ_{i=1}^{m} ∡(s − zi) − Σ_{i=1}^{n} ∡(s − pi). (5.42)

It follows that

The number of CCW encirclements of the origin by H(C) is equal to


the number of zeros of H(s) inside C
− the number of poles of H(s) inside C (5.43)

The result is a version of the so-called argument principle or Cauchy’s principle


of the argument from complex analysis.
To obtain the Nyquist criterion, one lets

H(s) = 1 + G(s) = 1 + N(s)/D(s) = (D(s) + N(s))/D(s), (5.44)

where G(s) is the open-loop transfer function. Therefore, the poles of H(s) are
the open-loop poles and the zeros of H(s) are the closed-loop poles. Next, one
defines the contour C as shown on Fig. 5.35. The contour is composed of the
imaginary axis completed with a half circle of infinite radius. The curve is called
the Nyquist contour. Note that the area of the complex plane delimited by the
Nyquist contour is the right half-plane.
The orientation of the contour was changed to be clockwise (CW) so that
the frequency varies in the positive direction along the imaginary axis. Instead
of mapping H(C) = 1 + G(C), one plots G(C). Encirclement of the origin
is replaced by encirclement of (-1, 0). If the loop transfer function is strictly
proper, the half circle of infinite radius is mapped to Re = 0, Im = 0, and the
Nyquist plot reduces to a plot of G(jω) for ω ranging from −∞ to ∞. With


Figure 5.35: Principle of the Nyquist criterion

these changes, (5.43) becomes

The number of CW encirclements of (-1,0) by G(jω) is equal to


the number of unstable closed-loop poles
− the number of unstable open-loop poles, (5.45)

which is the Nyquist criterion.


Whether the contour encompasses the right half-plane or the left half-plane
is determined by the orientation of the contour. One could also have deduced
that

The number of CCW encirclements of (-1,0) by G(jω) is equal to


the number of stable closed-loop poles
− the number of stable open-loop poles. (5.46)

(5.45) and (5.46) are equivalent because the number of unstable poles plus the
number of stable poles is the same for the open-loop and for the closed-loop
systems.

5.2.6 Open-loop poles on the jω-axis


To handle poles on the imaginary axis, one modifies the Nyquist contour slightly,
in order to avoid the poles on the imaginary axis. This procedure is shown in
Fig. 5.36. A half circle of radius ε is inserted in the path to avoid the poles.
The rule Z = N + P still applies, but P refers to the number of open-loop poles
located in the modified contour, and Z refers to the number of closed-loop poles
in the same area. Since the closed-loop poles differ from the open-loop poles

(when there are no pole/zero cancellations in the open-loop transfer function),


Z will still be equal to the number of unstable closed-loop poles if ε is sufficiently
small. The only difficulty in the application of the modified Nyquist criterion
comes from the need to transform the two small paths around the imaginary
poles. The procedure is best explained through examples.

Figure 5.36: Modified Nyquist contour to handle poles on the jω-axis

Example 1: G(s) = 1/ (s(s + 1)). The Bode plots and the Nyquist curve for
this system are shown in Fig. 5.37. For the usual Nyquist contour, the trans-
formed path grows to infinity as ω reaches the origin. In the modified contour,
G(jε) = 1/(jε(1 + jε)) ≃ 1/(jε) = −j/ε, where ε is an arbitrarily small number.
On the negative side, G(−jε) ≃ −1/(jε) = j/ε. To count the encirclements, we
need to connect the end of the branches at ω = ε and ω = −ε.


Figure 5.37: Nyquist curve for a system with a pole at the origin

Fig. 5.38 shows the detail of the modified contour around the origin. We

assume that a half circle is used to connect the two paths, so that

s = ε e^jθ, with θ = −π/2 → π/2. (5.47)

In the transformed path


G(s) = 1/(s(s + 1)) = 1/(ε e^jθ (1 + ε e^jθ)) ≃ 1/(ε e^jθ) = (1/ε) e^−jθ. (5.48)

Since θ = −π/2 → π/2, the transformed path connects clockwise the branches at
90◦ and −90◦. As a result, the transformed, modified contour may be connected
as shown in Fig. 5.39. Since there are no unstable open-loop poles in the modified
contour, P = 0. On the other hand, there are no encirclements of the (−1, 0)
point by the transformed contour, so that N = 0. As a result, the closed-loop
system is stable for all k > 0 (as predicted by the root-locus).


Figure 5.38: Detail of the modified contour in the vicinity of the origin


Figure 5.39: Nyquist diagram for G(s) = 1/(s(s + 1)) and a modified Nyquist
contour

Example 2: G(s) = −1/ (s(s + 1)). The Nyquist diagram is shown in Fig. 5.40.
Since N = 1, P = 0, we have Z = 1, and there is one unstable closed-loop pole.


Figure 5.40: Nyquist diagram for G(s) = −1/(s(s + 1)) and a modified Nyquist
contour

Example 3: G(s) = 1/(s³(s + 1)). The Nyquist diagram is shown in Fig. 5.41.
For s = ε e^jθ,

G(s) ≃ (1/ε³) e^−3jθ, (5.49)
so that the transformed path sweeps from 270◦ to −270◦ . P = 0, N = 2, so that
Z = 2 and there are two unstable closed-loop poles. This result may be verified
from the root-locus shown in Fig. 5.42.



Figure 5.41: Bode plots (left) and Nyquist diagram (right) for G(s) = 1/(s³(s + 1))

General procedure
For a system with n poles at s = 0, the procedure can be applied in a similar
manner. Plot G(jω) for ω = ε → ∞, and draw G(−jω) by symmetry. Then,
connect G(−jε) to G(jε) with a circular curve that rotates around the origin by
an angle n × 180◦ in the clockwise direction. Let P be the number of unstable

Figure 5.42: Root-locus for G(s) = 1/(s³(s + 1))

open-loop poles, not including the pole at s = 0, and count the number of
encirclements N . The criterion is then applied as for the original contour. The
same procedure may also be applied for pole(s) at s = jω0 , connecting the
branches for G(jω0 − jε) to G(jω0 + jε).

5.3 Gain and phase margins


5.3.1 Gain margin

Aside from the concept of stability for a closed-loop system, an almost equally
important consideration is how far the system is from instability. This is the
concept behind gain and phase margins. Both apply to a system which is known
to be closed-loop stable, but does not have to be open-loop stable.
By definition, the gain margin is the maximum value of the gain k > 0 by
which the open-loop transfer function may be multiplied before the closed-loop
system reaches instability. For example, consider the open-loop system

1
G(s) = , (5.50)
(s + 1)3

which was found to yield a stable closed-loop system. It was also found that the
closed-loop system became unstable if the gain was multiplied by 8. Therefore,
the gain margin of the system with open-loop transfer function G(s) is equal to
8. Sometimes, the gain margin is expressed in dB. Then, GMdB = 20 log(8) =
18 dB.
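The gain margin of this example can be recomputed numerically; a sketch that locates the phase crossover of G(s) = 1/(s + 1)³ by bisection on Im G(jω) = 0:

```python
import math

G = lambda s: 1 / (s + 1)**3

# Bisection on Im G(jw) = 0 for w > 0 locates the -180 deg phase crossover.
lo, hi = 1.0, 3.0          # Im G(j1) < 0, Im G(j3) > 0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if G(1j * mid).imag < 0:
        lo = mid
    else:
        hi = mid
w1 = 0.5 * (lo + hi)       # phase-crossover frequency, sqrt(3)

GM = 1 / abs(G(1j * w1))   # gain margin, 8
GM_dB = 20 * math.log10(GM)
print(w1, GM, GM_dB)
```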

5.3.2 Gain margin in the Nyquist diagram


In the Nyquist diagram, the gain margin is the value of the gain k > 0 by
which the open-loop transfer function may be multiplied before the Nyquist
curve passes through the (−1, 0) point. Indeed, if the gain is increased so that
P (jω1)C(jω1) = − 1, for some frequency ω1, then s = ±jω1 is a closed-loop
pole and the closed-loop system is unstable. For larger values of the gain, the
closed-loop system will not necessarily be unstable, but will almost always be so
because the number of encirclements will be different. Fig. 5.43 shows how the
gain margin is obtained from the Nyquist diagram. Assuming that the Nyquist
curve crosses the real axis to the right of the (−1, 0) point at −a, the gain margin
is 1/a.


Figure 5.43: Computing the gain margin from the Nyquist diagram

For many systems, the gain may be reduced by any amount without resulting
in instability. When one says that the gain margin is 2, it means that the gain
may be multiplied by any number between 0 and 2. In some cases, however,
the gain margin involves a lower number as well. The system is then called
conditionally stable (see Figs. 5.30 and 5.31 involving a gain k restricted to being
between 1 and 2). The nominal gain can neither be increased, nor decreased
arbitrarily without resulting in instability. In the example shown in Fig. 5.44,
the gain margin is given by
GM = (1/b, 1/a). (5.51)

5.3.3 Gain margin in the Bode plots


The computation of the gain margin in the Nyquist diagram may be translated
into the Bode plots. Let ω1 be the frequency such that

∡P (jω1 )C(jω1 ) = 180◦ ± n 360◦, n = 0, 1, 2, · · · (5.52)




Figure 5.44: Gain margin for a conditionally stable system


Figure 5.45: Determination of the gain margin from the Bode plots

Assuming |P(jω1)C(jω1)| < 1, the gain margin is given by

GM = 1/|P(jω1)C(jω1)|, (5.53)

or

GMdB = −20 log |P(jω1)C(jω1)| = − |P(jω1)C(jω1)|dB. (5.54)

The interpretation of the definition in Bode plots is shown in Fig. 5.45. If several
frequencies are associated with an angle of 180◦ , the gain margin is the smallest
value of all obtained.
If |P (jω1 )C(jω1)| > 1 for one or more of the frequencies, the gain margin has
a lower bound. Fig. 5.46 shows a hypothetical example where the gain margin
is given by

GMdB = (−6, 10) ⇔ GM = (1/2, 3). (5.55)

Fig. 5.47 shows the associated Nyquist diagram.




Figure 5.46: Bode plot with multiple intersections of the 180◦ line


Figure 5.47: Nyquist diagram of a system with multiple intersections of the
−180° line

5.3.4 Phase margin


The phase margin is the maximum angle that may be added to the phase of the
frequency response before the closed-loop system becomes unstable. It is also
the angle that must be added to the open-loop frequency response so that the
Nyquist curve passes through the (−1, 0) point. Because the frequency response
of various actuators exhibits phase lags for sufficiently high frequencies, the
phase margin quantifies the ability of the system to maintain stability despite
such effects.
The phase margin may be determined from the Nyquist diagram as shown
in Fig. 5.48. The circle of radius 1 is intersected with the Nyquist curve, and
the angle between the negative real axis and the intersection point is the phase

margin. If several points are found, the angle of smallest magnitude defines the
phase margin. Because P (−jω) = P ∗ (jω), the phase margin is the same in
magnitude for both positive and negative directions.


Figure 5.48: Determination of the phase margin in the Nyquist diagram

5.3.5 Phase margin in the Bode plots


To compute the phase margin in the Bode plots, one finds the frequency (or
frequencies) ω2 such that

|P (jω2 )C(jω2)| = 1, (5.56)

or

|P (jω2)C(jω2)|dB = 0. (5.57)

Then, the phase margin is given by

P M = 180◦ − |∡P (jω2 )C(jω2 )| . (5.58)

where the angle function ∡ is defined between −180◦ and 180◦ . The concept is
shown in Fig. 5.49. The frequency ω2 is called the crossover frequency. If several
frequencies are found, the smallest phase margin computed with the formula is
the phase margin of the system.

5.3.6 Delay margin


There is a significant difference between the phase margin and the gain margin.
In theory, the gain margin could be determined experimentally. However, a
system with a frequency response P (jω) = ejϕ for some phase ϕ positive or
negative cannot be realized (the associated impulse response is not zero for


Figure 5.49: Determination of the phase margin from the Bode plots


Figure 5.50: Feedback system for the definition of delay margin

t < 0). Therefore, the concept of phase margin can only be justified in terms of
the Nyquist diagram.
A more practical concept is the delay margin, which is the maximum amount
of time delay T that may be added to the open-loop transfer function before the
closed-loop system becomes unstable. This concept is illustrated in Fig. 5.50.
The delay margin can be estimated in practice by inserting a delay element in
the loop (one would increase the delay until oscillations appear in the response
of the system and before instability fully develops).
A delay T corresponds to a transfer function e−sT , with

|e^−jωT| = 1, ∡e^−jωT = −ωT (in rad). (5.59)

Therefore, considering the Nyquist criterion

Phase margin (rad) = Crossover frequency (rad/s) × Delay margin (s). (5.60)

or

Delay margin (s) = Phase margin (rad) / Crossover frequency (rad/s)
= (Phase margin (deg)/360°) / Crossover frequency (Hz). (5.61)

So, a phase margin of 45◦ with a crossover frequency of 1 kHz corresponds to a


delay margin of 1/8 ms. If the crossover frequency was 10 Hz, the delay margin
would be 100 times larger.
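The conversion in (5.61) is easily scripted; a minimal sketch (the helper name is illustrative):

```python
def delay_margin(pm_deg, fc_hz):
    # (5.61): delay margin in seconds from the phase margin in degrees
    # and the crossover frequency in Hz
    return (pm_deg / 360.0) / fc_hz

Td_fast = delay_margin(45.0, 1000.0)   # 1 kHz crossover -> 0.125 ms
Td_slow = delay_margin(45.0, 10.0)     # 10 Hz crossover -> 12.5 ms
print(Td_fast * 1e3, Td_slow * 1e3)
```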

5.3.7 Relationship between phase margin and damping


There is a strong relationship between the phase margin and the damping of
the poles of the closed-loop system as defined in (5.9). The relationship can
be established analytically and exactly for the second-order system shown in
Fig. 5.51. For ζ < 1, the closed-loop poles are located at

s = −a ± jb with a = ζωn and b = 1 − ζ 2 ωn . (5.62)

(open loop: ωn²/(s² + 2ζωn s); closed loop: PCL(s) = ωn²/(s² + 2ζωn s + ωn²))

Figure 5.51: Second-order system to relate damping and phase margin

The following results may be derived analytically for this system.

1. Phase margin (5.58)


 
PM = tan⁻¹( 2ζ / ((4ζ⁴ + 1)^(1/2) − 2ζ²)^(1/2) ) (rad)
≃ 100ζ (deg). (5.63)

2. Percent overshoot in the step response (3.45)


PO = e^(−ζπ/√(1 − ζ²)) × 100 (%)
≃ e^(−ζπ) × 100 (%). (5.64)

3. Peaking of the frequency response (5.13)


PF = max_ω |PCL(jω)|/|PCL(0)| = 1/(2ζ√(1 − ζ²)) ≃ 1/(2ζ). (5.65)

With these relationships, the following table can be derived.

ζ     PM      PO      PF
0.2   22.6°   52.7%   2.55
0.3   33.3°   37.2%   1.75
0.4   43.1°   25.4%   1.36
0.5   51.8°   16.3%   1.15
0.6   59.2°   9.5%    1.04
0.7   65.2°   4.6%    1.0002

The results show a tight connection between the phase margin of the second-
order system, the overshoot of the step response, and the peaking of the fre-
quency response. In view of the results, a phase margin of 60◦ is often taken as
an objective in control systems. Although the formulas for the phase margin
were obtained for a specific second-order system, the results are taken to provide
guidance for higher-order systems as well.
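The table can be regenerated directly from (5.63), (5.64), and (5.65); a Python sketch:

```python
import math

def pm_deg(z):
    # phase margin of the standard second-order loop, (5.63)
    return math.degrees(
        math.atan2(2 * z, math.sqrt(math.sqrt(4 * z**4 + 1) - 2 * z**2)))

def po_pct(z):
    # percent overshoot of the closed-loop step response, (5.64)
    return 100 * math.exp(-z * math.pi / math.sqrt(1 - z**2))

def pf(z):
    # peaking of the closed-loop frequency response, (5.65),
    # valid for z < 1/sqrt(2)
    return 1 / (2 * z * math.sqrt(1 - z**2))

for z in (0.2, 0.3, 0.4, 0.5, 0.6, 0.7):
    print(z, round(pm_deg(z), 1), round(po_pct(z), 1), round(pf(z), 2))
```

Note that the printed rows reproduce the table, including the approximate rule PM ≃ 100ζ degrees.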

5.3.8 Frequency-domain design


Although the root-locus technique is helpful for the design of feedback systems,
frequency-domain design is also effective, with definite advantages in some
situations.
Consider the feedback system shown in Fig. 5.52, where G(s) is the com-
bined transfer function of the plant and compensator. The closed-loop transfer
function and the closed-loop frequency response are given by
GCL(s) = G(s)/(1 + G(s)), GCL(jω) = G(jω)/(1 + G(jω)). (5.66)
The objective of tracking translates into an objective that GCL(jω) ≃ 1, which
may be achieved by setting |G(jω)| ≫ 1. Although it would be desirable to


Figure 5.52: Feedback system

have this property hold true for all ω, it is only practical to do so for a finite
range of frequencies. Indeed, the gain of physical systems usually falls rapidly
at high frequencies. A controller may only partially compensate for this effect.
The phase of physical systems also tends to increase rapidly at high frequencies,
and often does so in ways that cannot be precisely modelled. In order to ensure
closed-loop stability, it may be necessary to bring the loop gain well below 1
before significant phase variations are observed.
Overall, the design problem in the frequency domain consists first of all in a
careful selection of the crossover frequency of the closed-loop system. Below this
frequency, the loop gain will be made as large as possible (sometimes through
the use of integral control). Around the crossover frequency, the phase will be
carefully controlled in order to ensure stability as well as robustness to uncer-
tainties and parameter variations (considering the Nyquist criterion). A Bode
plot of a typical open-loop transfer function is shown in Fig. 5.53.


Figure 5.53: Design objectives on the loop frequency response



5.3.9 Example of frequency-domain design with a lead controller
Lead controller: consider the so-called lead controller

C(s) = kc (s + b)/(s + a), (5.67)

where a > b > 0. In the s−plane, the controller has a pole and a zero on the
negative real axis, with the zero being closer to the origin than the pole.


Figure 5.54: Bode plots of lead controller

The Bode plots of the lead controller are shown in Fig. 5.54. The controller
is called a lead controller because the phase angle is positive. For the same
reason, the controller is called a lag controller if b > a (the system becomes
a proportional-integral controller for a = 0). The DC gain m0 and the high-
frequency gain m∞ of the controller are

m0 = kc (b/a), m∞ = kc. (5.68)

The frequency ωp is defined on the figure as the frequency where the angle of
the frequency response is maximum. φp is the phase of the lead controller at the
frequency ωp. Since
|C(jω)| = kc √((b² + ω²)/(a² + ω²)),
∡C(jω) = tan⁻¹(ω/b) − tan⁻¹(ω/a),
d∡C(jω)/dω = (1/(1 + ω²/b²))(1/b) − (1/(1 + ω²/a²))(1/a)
= ((ω² − ab)(b − a))/((b² + ω²)(a² + ω²)), (5.69)

one can deduce that

ωp = √(ab),
mp = kc √(b/a),
φp = tan⁻¹(√(a/b)) − tan⁻¹(√(b/a)). (5.70)

The result shows that the frequency of the peak is located mid-way between the
pole and the zero on a log scale (log(ωp) = (log(a) + log(b))/2).
A different formula can be obtained for φp , using the fact that

C(jω) = kc (b + jω)(a − jω)/(a² + ω²) = kc ((ab + ω²) + jω(a − b))/(a² + ω²),
(5.71)
(5.71)

so that
sin²(∡C(jωp)) = sin²(φp) = ωp²(a − b)²/((ab + ωp²)² + ωp²(a − b)²) = (a − b)²/(a + b)²,
(5.72)

and
sin(φp) = (a/b − 1)/(a/b + 1) ⇔ a/b = (1 + sin(φp))/(1 − sin(φp)). (5.73)

This result shows that the amount of phase depends on the ratio of the pole
magnitude over the zero magnitude (a/b). Specifically

a/b    φp
5.83   45°
9      53.1°
13.9   60°
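The relationships (5.70) and (5.73) can be confirmed numerically; the sketch below uses the illustrative values a = 9, b = 1, kc = 1 (so a/b = 9):

```python
import cmath
import math

a, b, kc = 9.0, 1.0, 1.0                # pole/zero ratio a/b = 9, illustrative
C = lambda s: kc * (s + b) / (s + a)    # lead controller (5.67)

wp = math.sqrt(a * b)                   # peak-phase frequency, (5.70)
phi_p = math.degrees(cmath.phase(C(1j * wp)))

# (5.73): sin(phi_p) = (a/b - 1)/(a/b + 1)
phi_formula = math.degrees(math.asin((a / b - 1) / (a / b + 1)))
print(wp, phi_p, phi_formula)
```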

Lead controller for a double integrator: consider now the control problem
for a double integrator

P(s) = k/s². (5.74)
The Bode plots of the loop transfer function are shown in Fig. 5.55. From root-
locus theory, the closed-loop system is known to be stable for all kkc > 0. A
possible choice is to set the gain of the controller such that the loop gain is
equal to 1 at ωp. In this manner, the frequency ωp is the same as the crossover

frequency of the system, and the phase φp is the phase margin. For the plant
under consideration, setting the loop gain to be 1 at ωp implies that
mp (k/ωp²) = kc √(b/a) (k/ωp²) = 1, (5.75)

or

kc = √(a/b) (ωp²/k). (5.76)


Figure 5.55: Bode plots of lead controller with double integrator plant

Assume that a certain phase margin and a certain cross-over frequency are
specified. The ratio a/b is determined by the phase margin and ωp is set equal
to the crossover frequency. Then, the controller parameters are determined from
a = ωp √(a/b), b = ωp √(b/a), kc = √(a/b) (ωp²/k). (5.77)

For example, if a phase margin of 53.1◦ is chosen, a/b = 9, and the parameters
of the controller are equal to

    a = 3ωp,  b = ωp/3,  kc = 3ωp²/k.    (5.78)
Therefore, the three parameters of the controller can be set as functions of the crossover frequency ωp, which is a free parameter that can be adjusted experimentally to be as large as possible to maximize the bandwidth of the system. In theory, there is no limit to ωp, but in practice, additional dynamics will limit the possible range. If these dynamics were included in the model, a careful design would maximize the closed-loop bandwidth within gain and phase margin specifications. Such designs involve tedious trial-and-error adjustments of the controller parameters, and are preferably performed nowadays using numerical optimization tools [9].
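The design procedure of (5.77) can be sketched numerically. The snippet below is an illustration only — the function name and the plant values (k = 2, crossover at 10 rad/s) are assumptions, not from the text. It picks a/b from the desired phase margin and verifies that the loop gain is 1, with the specified phase margin, at the crossover:

```python
import cmath
import math

def design_lead(pm_deg, wc, k):
    """Lead controller C(s) = kc*(s+b)/(s+a) for P(s) = k/s^2, following (5.77):
    a/b is fixed by the desired phase margin, and the crossover is placed at wc."""
    sin_pm = math.sin(math.radians(pm_deg))
    ratio = (1 + sin_pm) / (1 - sin_pm)      # a/b, from (5.73)
    a = wc * math.sqrt(ratio)
    b = wc / math.sqrt(ratio)
    kc = math.sqrt(ratio) * wc ** 2 / k
    return a, b, kc

k, wc = 2.0, 10.0                            # arbitrary plant gain and crossover
a, b, kc = design_lead(53.13, wc, k)

# check: unit loop gain and the specified phase margin at the crossover
s = 1j * wc
L = kc * (s + b) / (s + a) * k / s ** 2
print(abs(L), 180 + math.degrees(cmath.phase(L)))
```

With a 53.13° phase margin, the returned parameters match (5.78): a = 3ωp, b = ωp/3, kc = 3ωp²/k.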

5.3.10 Design in the Nyquist diagram


The gain and phase margins aim at quantifying the ability of a feedback system
to tolerate uncertainties or variations in the nominal plant transfer function.
This property is generally referred to as the robustness of the feedback system.
The gain and phase margins are convenient measures of robustness, but some
pathological cases may be conceived, such as the one shown in the Nyquist
diagram of Fig. 5.56. For this system, GM = ∞ and PM > 45°. However, small but combined changes in gain and phase could lead to instability. Generally, the number |1 + G(jω)| represents the distance from the point G(jω) to the (−1, 0) point, and its minimum over all frequencies is a good measure of robustness, although its interpretation is not as intuitive as the gain and phase margins.


Figure 5.56: Nyquist diagram of a non-robust design with good gain and phase
margins

An example of a design in the Nyquist diagram is shown in Fig. 5.57. For


a range of frequencies where tracking is required, the loop gain is made large.
Around the crossover frequency, the loop frequency response is carefully con-
trolled so that a sufficient phase margin is guaranteed and is insensitive to vari-
ations of the gain of the system. When the loop gain becomes sufficiently small
at high frequencies, the phase behavior becomes irrelevant.
Peaking of the closed-loop frequency response should also be avoided, since
large peaks translate into poor transient responses, and sensitivity to noise and
disturbances at the associated frequencies. Fig. 5.58 shows |GCL (jω)| as a func-
tion of Re(G(jω)) and Im(G(jω)). As expected, the gain is infinite at the (−1, 0)
point, and is large in its vicinity.

Figure 5.57: Robustness objective in the Nyquist diagram

Figure 5.58: Plot of |GCL (jω)| as a function of Re(G(jω)) and Im(G(jω))



The plot is often represented through level lines, as shown in Fig. 5.59. It turns out that |GCL(jω)| = M if G(jω) belongs to a circle of radius M/|M² − 1| centered at (−M²/(M² − 1), 0). In order to avoid peaking in the frequency domain, the Nyquist curve of the open-loop frequency response should avoid as much as possible the portion of the complex plane where Re(G(jω)) < −1/2, especially when Im(G(jω)) is small.
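The constant-magnitude circle property can be checked numerically. This is an illustrative sketch (the function names are not from the book):

```python
import cmath
import math

def closed_loop_mag(G):
    """|G_CL| for unity feedback, G_CL = G / (1 + G)."""
    return abs(G / (1 + G))

def m_circle_point(M, theta):
    """A point on the constant-magnitude locus |G/(1+G)| = M (M != 1):
    a circle of radius M/|M^2 - 1| centered at -M^2/(M^2 - 1) in the G-plane."""
    center = -M ** 2 / (M ** 2 - 1)
    radius = M / abs(M ** 2 - 1)
    return center + radius * cmath.exp(1j * theta)

# every point of the M = 2 circle maps to closed-loop magnitude 2
vals = [closed_loop_mag(m_circle_point(2.0, 2 * math.pi * i / 12)) for i in range(12)]
print(vals)
```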

Figure 5.59: Lines of constant closed-loop magnitude

5.4 Problems
Problem 5.1: Sketch the Bode plots for the following transfer functions. Make
sure to label the graphs, and to give the slopes of the lines in the magnitude
plot.
(a) P(s) = (s − 10)/((s + 1)(s + 100))
(b) P(s) = 100(s − 10)(s + 10)/((s + 0.1)(s + 100)²)
(c) P(s) = (s + 10)/(s² + 0.1s + 1)
(d) P(s) = (s − 10)/(s(s + 1))
(e) P(s) = 100/((s − 10)²(s + 1))
(f) P(s) = (s² + 2s + 100)/s²
Problem 5.2: (a) The magnitude Bode plot of a system is shown in Fig. 5.60.
What are the possible transfer functions of stable systems having this Bode plot?


Figure 5.60: Bode plot for problem 5.2 (a)

(b) The Bode plots of a system are shown in Fig. 5.61. Give an estimate of the
transfer function of the system.


Figure 5.61: Bode plots for problem 5.2 (b)

Problem 5.3: (a) The system whose Bode plots are shown in Fig. 5.62 is stable
in closed-loop. Find its gain margin and phase margin.
(b) Describe the behavior of the closed-loop system of part (a) if the open-
loop gain is increased to a value close to the maximum value given by the gain
margin. In particular, what can you say about the locations of the poles of the
closed-loop system?
Figure 5.62: Bode plots for problem 5.3

(c) Consider an open-loop stable system such that the magnitude of its frequency response is less than 1 for all ω. Can it be determined whether the closed-loop system is stable with only that information?
Problem 5.4: (a) The Nyquist diagram of a stable system is shown in Fig. 5.63,
with the overall diagram shown on the left and the detail around the (-1,0) point
shown on the right. The solid line corresponds to ω > 0, with the arrow giving
the direction of increasing ω. The dashed line is the symmetric curve obtained
for ω < 0. Assuming that the transfer function of the system is multiplied by
a gain k > 0, what is the set of values of k for which the system is stable in
closed-loop?
(b) Repeat part (a) for the system whose Nyquist curve is shown in Fig. 5.64,
given that the system has one unstable pole.
Problem 5.5: (a) The Nyquist diagram for P (s) = 5(s + 2)/(s + 1)3 is shown
in Fig. 5.65, with the overall diagram shown on the left and the detail around
the (-1,0) point shown on the right. Indicate what the gain margin and the
phase margin are (for the phase margin, a rough estimate is fine). Compare the
gain margin results with those predicted by a root-locus plot and the use of the
Routh-Hurwitz criterion.
(b) Repeat part (a) for P (s) = 2(s + 5)/(s + 1)3 and the diagrams shown in


Figure 5.63: Nyquist diagram for problem 5.4 (a)


Figure 5.64: Nyquist curve for problem 5.4 (b)

Fig. 5.66.
Problem 5.6: Sketch the Bode plots for the following transfer function

    P(s) = 10(s − 1)/(s + 10)².    (5.79)
Make sure to label the graphs, and to give the slopes of the lines in the magnitude
plot.
Problem 5.7: Sketch the Bode plots for the following transfer function

    P(s) = 10(s + 1)/(s²(s² − 2s + 100)).    (5.80)


Figure 5.65: Nyquist curve for problem 5.5 (a)

Figure 5.66: Nyquist curve for problem 5.5 (b)

Make sure to label the graphs, and to give the slopes of the lines in the magnitude
plot.
Problem 5.8: (a) Sketch the Bode plots for

    P(s) = (s − 1)/(s(s + 1)).    (5.81)

Be sure to label the axes precisely.


(b) Sketch the Bode plots for a transfer function whose poles are placed in the
s−plane as shown on Fig. 5.67. Assume that the DC gain of the system is 10
(with a positive sign). There are five poles located on a circle of radius 10. One

pole is real, two poles are on a 45° line, and two poles have real parts equal to −0.5. You may use the fact that, for α small, sin(α) ≃ tan(α) ≃ α, and cos(α) ≃ 1. Be sure to label the axes precisely.

Figure 5.67: Pole locations for problem 5.8 (b)

Problem 5.9: (a) Give the gain margin and the phase margin of the system
whose Bode plots are shown in Fig. 5.68 (the plots are for the open-loop transfer
function and the closed-loop transfer function is assumed to be stable).
(b) Indicate whether the system whose Nyquist curve is shown in Fig. 5.69 is
closed-loop stable, given that it is open-loop stable.
(c) What are the values of the gain g > 0 by which the open-loop transfer
function of part (b) may be multiplied, with the closed-loop system remaining
stable?
(d) Sketch an example of a Nyquist curve for a system that has three unstable
open-loop poles, and which is closed-loop stable.
Problem 5.10: The magnitude Bode plot of a system is shown in Fig. 5.70.
Give all the possible transfer functions of systems having this Bode plot. The
poles and zeros are all real, and the values of the gain, poles, and zeros, are all
multiples of 10.
Problem 5.11: All parts of this problem refer to the system whose Nyquist
curve is shown in Fig. 5.71 (only the portion for ω > 0 is plotted).
(a) Knowing that the closed-loop system is stable, can one say for sure that the
open-loop system is stable?
(b) Given that the closed-loop system is stable, estimate the gain margin and
the phase margin of the closed-loop system.
(c) How many unstable poles does the closed-loop system have if the open-loop


Figure 5.68: Bode plots for problem 5.9 (a)

[Nyquist curve with marked points at −3, −2, −1, −2/3, −1/3, and 1]

Figure 5.69: Nyquist curve for problem 5.9 (b)




Figure 5.70: Bode plot for problem 5.10

[Nyquist curve with marked values −1/2, −1/4, 1/3, 1, and 2 on the real axis; the curve passes through −j at ω = 1, with points labeled ω = 0 and ω = 5]

Figure 5.71: Nyquist curve for problem 5.11

gain is multiplied by 5?
(d) Give the steady-state response yss (t) of the open-loop system to an input
x(t) = 2. Repeat for x(t) = 3 cos(t) and for x(t) = 4 cos(5t).
(e) Repeat part (d) for the closed-loop system.
Problem 5.12: (a) Sketch the Bode plots of
    G(s) = 1/((s + 1)(s − 1)).    (5.82)
Be sure to label the axes precisely.
(b) Sketch the magnitude Bode plot of

    G(s) = (s² + 1)(s − 100)/(s(s + 10)²).    (5.83)
Only the magnitude is needed. Be sure to label the axes precisely.

Problem 5.13: (a) Consider the Nyquist diagram of a transfer function G(s)
shown in Fig. 5.72. Only the portion for ω > 0 is plotted. Assume that G(s)
has no poles in the open right half-plane, and that a gain k is cascaded with
G(s). Find the ranges of positive k for which the closed-loop system is stable.

[Nyquist curve with marked points at −2, −1.5, −1, −0.5, and 0.75 on the real axis]

Figure 5.72: Nyquist curve for problem 5.13 (a)

(b) Bode plots of the open-loop transfer function of a feedback system are shown
in Fig. 5.73, with the detail from 1 to 10 rad/s shown on the right. For this
system:

• How much can the open-loop gain be changed (increased and/or decreased) before the closed-loop system becomes unstable?

• What is a rough estimate of the phase margin of the feedback system?

The numerical results do not have to be precise.


(c) For the system of part (b), give the steady-state response of the open-loop
system and of the closed-loop system to an input x(t) = 2.
Problem 5.14: (a) Consider the lead controller for the double integrator. For
the design that makes the crossover frequency equal to ωp , obtain the polynomial
that specifies the closed-loop poles (as a function of a/b and ωc). Show that one
closed-loop pole is at s = −ωc no matter what a/b is.
(b) Compute the other closed-loop poles as functions of ωc, when a/b = 5.83, 9,
and 13.9.


Figure 5.73: Bode plots for problem 5.13 (b)


Chapter 6

Discrete-time signals and systems

6.1 The z-transform


6.1.1 Discrete-time signals

A discrete-time signal is a function of time x(k), where time is defined over a


set of integer values, or k = 0, 1, 2, · · · . A discrete-time signal is similar to
a sequence, as defined in mathematical analysis. The discrete time values are
sometimes called steps, or samples. Fig. 6.1 shows an example of a discrete-time
signal.


Figure 6.1: Discrete-time signal

In mathematical software packages, discrete-time signals are represented in


various manners. Fig. 6.2 shows two standard options. The plot on the left is
usually preferred, but note that the signal is only defined at the discrete-time
instants, although the values between time instants are interpolated. For the
plot on the right, options also include dots, + signs, and other symbols.



Figure 6.2: Two representations of a discrete-time signal

6.1.2 The z-transform


The z-transform of a signal x(k) is defined as the function X(z) such that

    X(z) = Σ_(k=0)^∞ x(k)z⁻ᵏ
         = x(0) + x(1)z⁻¹ + x(2)z⁻² + · · ·    (6.1)

The z-transform is the result of an infinite series, involving all the values of the
signal x(k) and the complex variable z. The variable z is similar to the variable
s of the Laplace transform. The definition of the z-transform is unilateral, i.e.,
independent of x(k) for k < 0. The bilateral z-transform requires summation
over both positive and negative values of time, but is not used here. We discuss
a few important examples.

1. Discrete-time impulse
A discrete-time impulse is defined by

    x(k) = δ(k) ⇔ x(0) = 1 and x(k) = 0 for k ≠ 0.    (6.2)

The signal is shown in Fig. 6.3. Its transform is easily found to be

x(k) = δ(k) ⇔ X(z) = 1. (6.3)

The result is the same as in continuous-time, but there is a considerable


difference in that the time function is an ordinary function, rather than a
generalized function (or distribution, or delta function) in continuous-time.

2. Finite-length signal
A finite-length signal is a signal that vanishes after a finite period of time,
i.e.,

x(k) = 0 for k > N. (6.4)




Figure 6.3: Discrete-time impulse

The transform of such a signal is given by

    X(z) = x(0) + x(1)z⁻¹ + · · · + x(N)z⁻ᴺ
         = (x(0)zᴺ + x(1)zᴺ⁻¹ + · · · + x(N))/zᴺ.    (6.5)
The z-transform of a finite-length signal is a rational function of z whose
poles are all located at z = 0. Conversely, any rational function of z with
all poles at z = 0 is the transform of a finite-length signal, and the signal
is easily obtained from X(z). For example

    X(z) = (z³ − z² + 2)/z⁵ ⇔ x(k) = 0, 0, 1, −1, 0, 2, 0, 0, ...    (6.6)

Note that the definition of the z-transform implies that a rational transform X(z) = N(z)/D(z) must always be such that deg(N(z)) ≤ deg(D(z)) (i.e., the transform must be a proper function of z).

3. Step signal
A step signal is given by

    x(k) = 1 for k ≥ 0
         = 0 for k < 0.    (6.7)

The transform is given by

    X(z) = 1 + z⁻¹ + z⁻² + · · ·    (6.8)

The infinite series has an analytic expression


    X(z) = 1/(1 − z⁻¹) = z/(z − 1).    (6.9)

(6.9) may be obtained through the following auxiliary result. Since

    (1 + a + a² + · · · + aⁿ)(1 − a) = (1 + a + a² + · · · + aⁿ) − (a + a² + · · · + aⁿ⁺¹)
                                    = 1 − aⁿ⁺¹.    (6.10)

It follows that
    1 + a + a² + · · · + aⁿ = (1 − aⁿ⁺¹)/(1 − a)    (6.11)

and

    lim_(n→∞) Σ_(i=0)^n aⁱ = 1/(1 − a) if |a| < 1.    (6.12)

Applying the auxiliary result to the step signal, one finds that
    X(z) = 1/(1 − z⁻¹) = z/(z − 1) if |z⁻¹| < 1, or |z| > 1.    (6.13)
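Numerically, the partial sums of the series indeed approach z/(z − 1) at any point outside the unit circle. A small check (the test point z is an arbitrary choice):

```python
def partial_sum(z, n):
    """First n+1 terms of X(z) = 1 + z**-1 + z**-2 + ... for the step signal."""
    return sum(z ** (-k) for k in range(n + 1))

z = 1.5 + 0.5j                 # an arbitrary point with |z| > 1
exact = z / (z - 1)
for n in (5, 20, 50):
    print(n, abs(partial_sum(z, n) - exact))   # error shrinks geometrically
```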

As for the Laplace transform, the z-transform is defined only in the region
of the z-plane where convergence of the infinite series is guaranteed. This
region is called the region of convergence (ROC). For the step signal, the
region of convergence is the portion of the z plane located outside the circle
of radius 1.
The z-transform of the step signal (6.9) is a rational function of z. The
pole is at z = 1 and there is a zero at z = 0. The transform is different
from the transform of the continuous-time step function, which is (1/s).
We will find that z = 1 is the equivalent of s = 0 in the s−plane.

4. Geometric progression
The geometric progression signal is the equivalent of the exponential signal
in continuous-time, and is given by

x(k) = ak . (6.14)

For the time being, we assume that a is real. Depending on the magnitude
of a, the signal has the following properties

    |a| < 1   a decaying exponential
    |a| > 1   a growing exponential
    a = 1     a step signal

The z-transform is given by

    X(z) = 1 + az⁻¹ + a²z⁻² + · · ·
         = 1/(1 − az⁻¹) = z/(z − a) if |az⁻¹| < 1, or |z| > |a|.    (6.15)
Fig. 6.4 shows the decaying signal for |a| < 1, and the associated pole in
the z-plane.


Figure 6.4: Exponentially decaying signal (geometric progression)

For the geometric progression, the region of convergence (ROC) is the portion of the z-plane located outside the circle of radius |a|. This
portion of the z-plane located outside the circle of radius z = a. This
feature is shown in Fig. 6.5. For rational transforms, the ROC is an open
set whose boundary is the smallest circle that includes all the poles. The
transforms of signals with different ROC’s are defined in a common region
sufficiently far from the origin, and one does not need to worry about the
ROC in order to apply the transform to combinations of signals (this result
only applies to the unilateral transform).


Figure 6.5: Region of convergence of the z-transform



5. Signals with two complex poles


The property that

    x(k) = aᵏ ⇔ X(z) = z/(z − a)    (6.16)
also applies to complex signals. As for the Laplace transform, we consider
the signal
    x(k) = c pᵏ + c* p*ᵏ ⇔ X(z) = cz/(z − p) + c*z/(z − p*).    (6.17)

As in Fig. 6.6, one defines the magnitude and angle of the complex pole
with

    p = |p|e^(j∡p),  −π < ∡p ≤ π.    (6.18)

Then, the time signal is

    x(k) = 2 Re(c pᵏ) = 2|c pᵏ| cos(∡(c pᵏ)) = 2|c| |p|ᵏ cos(∡p k + ∡c).    (6.19)


Figure 6.6: Magnitude/angle representation of a pole

The discrete-time result


    x(k) = 2|c| |p|ᵏ cos(∡p k + ∡c) ⇔ X(z) = cz/(z − p) + c*z/(z − p*)    (6.20)

can be compared to the continuous-time result

    x(t) = 2|c| e^(Re(p)t) cos(Im(p) t + ∡c) ⇔ X(s) = c/(s − p) + c*/(s − p*).    (6.21)

Both signals consist in the product of an exponential signal and a sinusoidal


signal. In discrete-time, however, the magnitude and angle of the pole play
the role of the real part and imaginary part of the pole in continuous-time.
Specifically,

    |p| determines the rate of growth/decay of the signal:
        if |p| > 1, the signal grows;
        if |p| < 1, the signal decays;
        if |p| = 1, the signal is a pure sinusoid.
    ∡p determines the frequency of the signal (see next section).
6. Sinusoidal signals
A sinusoidal signal is obtained as the special case of (6.20) with |p| = 1.
For example, x(k) = cos (Ω0 k) corresponds to |c| = 1/2, |p| = 1, ∡p = Ω0,
and ∡c = 0. Therefore,
    p = e^(jΩ₀) = cos(Ω₀) + j sin(Ω₀),  c = 1/2,    (6.22)

and the transform is

    x(k) = cos(Ω₀k) ⇔ X(z) = (1/2) z/(z − e^(jΩ₀)) + (1/2) z/(z − e^(−jΩ₀))
                           = (z² − cos(Ω₀)z)/(z² − 2 cos(Ω₀)z + 1).    (6.23)
Similarly, the transform of a sin function can be found to be
    x(k) = sin(Ω₀k) ⇔ X(z) = (1/2j) z/(z − e^(jΩ₀)) − (1/2j) z/(z − e^(−jΩ₀))
                           = sin(Ω₀) z/(z² − 2 cos(Ω₀)z + 1).    (6.24)
The two poles of the sinusoidal signals are on the unit circle, i.e., the circle
such that |p| = 1. This is shown in Fig. 6.7.
The angle between the real axis and the poles is the discrete-time frequency
Ω0 , expressed in radians, or radians per sample. The period of the signal
is T = 2π/Ω0 . When T is an integer, the signal is periodic, and T is
the number of samples associated with the period. Fig. 6.8 shows a few
sinusoidal signals with integer periods. When T is a rational number,
with T = n/m and n, m integers, the signal is sinusoidal with period n.
Indeed, cos(2π(m/n)k) repeats itself when n is added to k. When T is
not an integer or a rational number, the signal is not periodic over an
integer number of samples, but still exhibits the same sinusoidal shape.
As opposed to continuous-time, there is a maximum frequency in discrete-
time equal to π.
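The periodicity and maximum-frequency facts above are easy to verify numerically (an illustrative sketch, not part of the original text):

```python
import math

def cosine_signal(Omega0, N):
    """Samples of cos(Omega0 * k) for k = 0, ..., N-1."""
    return [math.cos(Omega0 * k) for k in range(N)]

# Omega0 = pi/2 gives an integer period of T = 4 samples
x = cosine_signal(math.pi / 2, 12)
print([round(v, 3) for v in x[:4]])

# Omega0 and 2*pi - Omega0 produce identical samples, which is why the
# discrete-time frequency range effectively stops at pi
y1 = cosine_signal(0.4 * math.pi, 12)
y2 = cosine_signal(2 * math.pi - 0.4 * math.pi, 12)
print(max(abs(u - v) for u, v in zip(y1, y2)))
```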

Figure 6.7: Poles of a sinusoidal signal

[Four signals: Ω₀ = 0 (T = ∞), Ω₀ = π/4 (T = 8 samples), Ω₀ = π/2 (T = 4 samples), Ω₀ = π (T = 2 samples)]

Figure 6.8: Discrete-time sinusoidal signals of different frequencies

6.1.3 The z-plane


Pole locations and signal shapes

As in continuous-time, the examples of simple z-transforms provide useful in-


formation about interpretations of the z-plane. The interpretations are dif-
ferent than in continuous-time, however. For signals with a single real pole
(X(z) = z/(z − p)), or two complex poles (X(z) = cz/(z − p) + c∗ z/(z − p∗ )),
the associated signals are shown in Fig. 6.9.
There are some strong similarities between the z−plane and the s−plane,

Figure 6.9: Signal characteristics as a function of pole location

with

    |z| = 1 (unit circle) ⇔ Re(s) = 0 (jω-axis)
    |z| > 1 (outside the unit circle) ⇔ Re(s) > 0 (open right half-plane)
    |z| < 1 (inside the unit circle) ⇔ Re(s) < 0 (open left half-plane)

As in continuous-time, one may define useful quantities:

    For |p| < 1, the time constant: τc = −1/ln|p|
    For |p| < 1, the 2% decay time: τ₂% = −4/ln|p|
    For |p| > 1, the time to double: τ_double = ln 2/ln|p|
For |p| ≈ 1, a useful approximation is

    τc ≃ 1/(1 − |p|),    (6.25)

so that

    |p| = 0.9 ⇒ τc = 9.5 ≈ 10 samples
    |p| = 0.99 ⇒ τc = 99.5 ≈ 100 samples

The time constant should be multiplied by 4 to obtain the time needed for a decay of the signal to 2% of its original value. Specifically, |p| = 0.99 ⇒ 4τc ≈ 400, which means that

    0.99⁴⁰⁰ = 0.018 ≈ 2%.    (6.26)

In other words, an exponentially decaying signal whose transform has a single


pole at p = 0.99 will decay to 2% of its original value within 400 discrete-time
instants, or samples.
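A quick check of the exact time constant against the 1/(1 − |p|) approximation, and of (6.26) — an illustrative sketch:

```python
import math

def time_constant(p_mag):
    """tau_c = -1/ln|p| for a stable real pole, |p| < 1."""
    return -1.0 / math.log(p_mag)

for p in (0.9, 0.99):
    # exact time constant vs the 1/(1-|p|) approximation of (6.25)
    print(p, round(time_constant(p), 2), round(1.0 / (1.0 - p), 2))

# four time constants give roughly a 2% decay, as in (6.26)
print(0.99 ** 400)   # about 0.018
```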
For complex poles, the period of oscillations is given by

    T_osc = 2π/∡p.    (6.27)

The result is not necessarily an integer number of samples.

Lines of constant damping

In continuous-time, one defines the damping factor ζ of a pole p as the cos of


the angle of the pole with respect to the negative real axis, or

    ζ = −Re(p)/|p| = −Re(p)/√(Re(p)² + Im(p)²)  (continuous-time).    (6.28)

The dashed line of Fig. 6.10 is a line of constant damping in continuous-time. It


is known that an angle greater than 45°, or a damping factor smaller than 0.707, yields overshoot in the step response and peaking in the frequency response.


Figure 6.10: Definition of the damping factor ζ in continuous-time



Since the time constant and the period of oscillation associated with a com-
plex pole are

    τc = −1/Re(p),  T_osc = 2π/Im(p)  (continuous-time),    (6.29)

an interpretation of the damping factor is


    ζ = (1/τc)/√((1/τc)² + (2π/T_osc)²) = 1/√(1 + (2πτc/T_osc)²).    (6.30)

A desirable damping factor of ζ > 0.707 means that τc < (Tosc )/2π, implying
that the convergence time is small compared to the period of oscillation.
To obtain an equivalent result in discrete-time, we use the applicable defini-
tions of time constant and of period of oscillations
    τc = −1/ln|p|,  T_osc = 2π/∡p  (discrete-time),    (6.31)

to obtain

    ζ = −ln|p|/√((ln|p|)² + (∡p)²)  (discrete-time).    (6.32)
This equation can also be written as
    ln|p| = −(ζ/√(1 − ζ²)) ∡p,    (6.33)

or

    |p| = 1/e^(α∡p)  where  α = ζ/√(1 − ζ²).    (6.34)
A curve of constant damping in discrete-time is the set of complex numbers
p such that (6.34) is satisfied with a fixed ζ. (6.34) shows that the magnitude
of the pole must decrease as the angle increases. A line of constant damping is
a curve called a logarithmic spiral, and is shown in Fig. 6.11.
The line corresponding to ζ = 0.707 is the curve

    |p| = e^(−∡p)    (6.35)

with ∡p in radians. The curve defines the boundary separating well-damped


from poorly-damped responses, in a similar way as the 45◦ line defines the same
characteristics in the s-plane. In continuous-time, the line is such that the time
constant is equal to the period of oscillation divided by 2π and the same remains
true here.

Figure 6.11: Line of constant damping for ζ = 0.707 (logarithmic spiral)

As an example of a signal with low damping, consider

p = 0.5 ± j0.5 ⇒ |p| = 0.707, ∡p = π/4. (6.36)

The damping factor can be computed from (6.32) to be ζ = 0.404, which is low.
Indeed, the time signal pk + (p∗ )k is shown in Fig. 6.12 and exhibits a visible
oscillation.
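Formula (6.32) applied to this example — a small check in Python (the helper name is illustrative):

```python
import cmath
import math

def discrete_damping(p):
    """Damping factor of a discrete-time pole p, from (6.32)."""
    log_mag = math.log(abs(p))
    return -log_mag / math.hypot(log_mag, abs(cmath.phase(p)))

zeta = discrete_damping(0.5 + 0.5j)
print(round(zeta, 3))   # 0.404, well below the 0.707 boundary
```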


Figure 6.12: Discrete-time signal with low damping



Connections to the discrete-time Fourier transform

The discrete-time Fourier transform (DTFT) is defined to be



    X̄(Ω) = Σ_(k=−∞)^∞ x(k)e^(−jΩk).    (6.37)

We use the notation X̄ to distinguish the DTFT from the z-transform X(z). The
DTFT is defined on a finite interval (from −π to π), or as a periodic function on
an infinite interval. An example of a DTFT is shown in Fig. 6.13 (the magnitude
is shown only, but the DTFT is generally a complex function).


Figure 6.13: Discrete-time Fourier transform

One may observe that

X̄(Ω) = [X(z)]z=ejΩ if x(k) = 0 for k < 0. (6.38)

In other words, we must assume that x(k) is zero for negative time so that the
bilateral and unilateral transforms can be related. The DTFT is equal to the
z-transform evaluated on the unit circle (z = ejΩ ). The property requires that
the transforms exist in an ordinary sense on the unit circle, i.e., that the region
of convergence of the z-transform includes the unit circle. The property is not
satisfied by a sinusoid, for which the region of convergence is |z| > 1.

6.2 Properties of the z-transform


Properties can be established for the z-transform that are similar to those of the
Laplace transform. The proofs are usually straightforward.

1. Linearity

x(k) = ax1 (k) + bx2(k) ⇔ X(z) = aX1 (z) + bX2 (z).


(6.39)

2. Right shift
The right shift or one-step delay is defined by

y(k) = x(k − 1)u(k − 1), (6.40)

where u(k) is the unit step. The signal and the one-step delayed signal are
shown in Fig. 6.14. Since


Figure 6.14: Right shift of a signal

    X(z) = x(0) + x(1)z⁻¹ + · · · ,    (6.41)

the output satisfies

Y (z) = x(0)z −1 + x(1)z −2 + · · · = z −1 X(z). (6.42)

Therefore, we can associate

z −1 ⇔ one step delay, (6.43)

meaning that multiplication of the transform by z −1 is equivalent to a


one-step delay of the signal. (6.40) assumed that x(k) was multiplied by
a unit step. Otherwise,

Y (z) = x(−1) + z −1 X(z). (6.44)

Examples of delayed signals are shown in Fig. 6.15 and include a delayed
impulse and a delayed exponential signal. An observation is that a signal
with a pole at z = a and no zero at z = 0 is an exponential signal, but
with a zero value at the initial time instant.

3. Left shift
In a similar manner, one can obtain the formula for a left shift

y(k) = x(k + 1) ⇒ Y (z) = z (X(z) − x(0)) . (6.45)



[Left: Y(z) = z⁻¹, a delayed impulse. Right: Y(z) = 1/(z − a), a delayed exponential.]

Figure 6.15: Delayed impulse and delayed exponential

To prove this fact, note that

    Y(z) = y(0) + y(1)z⁻¹ + y(2)z⁻² + ...
         = x(1) + x(2)z⁻¹ + x(3)z⁻² + ...
         = z(x(0) + x(1)z⁻¹ + x(2)z⁻² + x(3)z⁻³ + ...) − zx(0).    (6.46)

4. Initial value
The initial value of the signal x(k) can be obtained as

    x(0) = lim_(z→∞) X(z).    (6.47)

The result follows from the fact that



    lim_(z→∞) X(z) = lim_(z⁻¹→0) Σ_(k=0)^∞ x(k)z⁻ᵏ = x(0).    (6.48)

Note that by virtue of the definition of the z-transform, any transform X(z) that is rational must be proper (degree of the numerator ≤ degree of the denominator). Therefore, the limit always exists. x(0) is the ratio of the coefficients associated with the highest power of z in the numerator and in the denominator. For example

    X(z) = (2z² + 1)/(3z² + z) ⇒ x(0) = 2/3
    X(z) = (2z + 1)/(3z² + z) ⇒ x(0) = 0.    (6.49)
Other values of x(k) may also be obtained in a similar manner, e.g.

    x(1) = lim_(z→∞) z(X(z) − x(0)).    (6.50)
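The initial-value computation of (6.49) can be mimicked numerically by evaluating X(z) at a large |z| — an illustrative sketch using Horner evaluation (the function name is not from the book):

```python
def initial_value(num, den, z=1e9):
    """x(0) = lim_{z->inf} N(z)/D(z), approximated at a large |z|.
    Coefficients are listed from the highest power of z down."""
    def poly(coeffs, z):
        v = 0.0
        for c in coeffs:          # Horner's rule
            v = v * z + c
        return v
    return poly(num, z) / poly(den, z)

# the two examples of (6.49)
print(initial_value([2.0, 0.0, 1.0], [3.0, 1.0, 0.0]))   # (2z^2+1)/(3z^2+z): 2/3
print(initial_value([2.0, 1.0], [3.0, 1.0, 0.0]))        # (2z+1)/(3z^2+z): 0
```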

5. Final value
If lim_(k→∞) x(k) exists, then

    lim_(k→∞) x(k) = lim_(z→1) (z − 1)X(z).    (6.51)

The result is similar to the equivalent result for the Laplace transform,
but s = 0 is replaced by z = 1.

6. Multiplication by time

    y(k) = k x(k) ⇔ Y(z) = −z (d/dz)X(z).    (6.52)
For example, consider the transform
    x(k) = aᵏ ⇔ X(z) = z/(z − a).    (6.53)
One may deduce the transforms of the signals

    y₁(k) = k aᵏ ⇔ Y₁(z) = −z ((z − a) − z)/(z − a)² = az/(z − a)²,
    y₂(k) = k² aᵏ ⇔ Y₂(z) = −z (a(z − a)² − 2az(z − a))/(z − a)⁴ = az(z + a)/(z − a)³,
    y₃(k) = k³ aᵏ ⇔ Y₃(z) = · · · = az(z² + 4az + a²)/(z − a)⁴.    (6.54)

As in continuous-time, powers of time kn are associated with poles of


multiplicity n + 1. However, numerator polynomials appear in the trans-
forms that did not appear in the transforms in continuous-time. The
time-domain signals associated to transforms with different polynomials
in the numerators are given by more complicated functions of time. For
example

    Y(z) = z/(z − a)ⁿ ⇔ y(k) = (k(k − 1) · · · (k − n + 2)/(n − 1)!) aᵏ⁻ⁿ⁺¹,    (6.55)

where the product k(k − 1) · · · (k − n + 2) contains n − 1 terms.

In particular
    Y(z) = z/(z − a)³ ⇔ y(k) = (1/2) k(k − 1) aᵏ⁻²,
    Y(z) = z/(z − a)⁴ ⇔ y(k) = (1/6) k(k − 1)(k − 2) aᵏ⁻³.    (6.56)

7. Convolution:
As in continuous-time, if

x(k) = 0 and h(k) = 0 for k < 0, (6.57)

then

y(k) = h(k) ∗ x(k) ⇔ Y (z) = H(z) X(z). (6.58)

In discrete-time, the convolution operation is given by



    h(k) ∗ x(k) = Σ_(i=−∞)^∞ h(i)x(k − i).    (6.59)

Under the assumption of zero signals for negative time,


    y(k) = Σ_(i=0)^k h(i)x(k − i).    (6.60)

Consider, for example, the convolution of a signal x(k) with a step signal
u(k), yielding
    y(k) = x(k) ∗ u(k) = Σ_(i=0)^k x(i) ⇔ Y(z) = (z/(z − 1)) X(z).    (6.61)
In other words,
    Discrete-time integrator ⇔ z/(z − 1),    (6.62)
compared to 1/s in continuous-time. The formula for the integration of a
signal could also have been derived quickly by transforming the recursion
formula for y(k), so that

    y(k) = y(k − 1) + x(k) ⇒ Y(z) = z⁻¹Y(z) + X(z)
                           ⇒ Y(z) = (z/(z − 1)) X(z)    (6.63)

(assuming that y(−1) = 0). Note that, for a slightly different formulation

    y(k) = y(k − 1) + x(k − 1) ⇒ Y(z) = z⁻¹Y(z) + z⁻¹X(z)
                               ⇒ Y(z) = (1/(z − 1)) X(z)    (6.64)

(assuming that y(−1) = x(−1) = 0). In other words, multiplication by


1/(z − 1), instead of z/(z − 1) also corresponds to an integrator, but with
x(k) delayed by one step.
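The two integrator forms can be checked with a short recursion (illustrative; the helper name is mine):

```python
def accumulate(x):
    """Discrete-time integrator y(k) = y(k-1) + x(k), transfer function z/(z-1)."""
    y, total = [], 0.0
    for xk in x:
        total += xk          # y(k) = y(k-1) + x(k)
        y.append(total)
    return y

impulse = [1.0, 0.0, 0.0, 0.0, 0.0]
print(accumulate(impulse))               # impulse response of z/(z-1): a unit step
# the delayed form y(k) = y(k-1) + x(k-1) corresponds to 1/(z-1):
print(accumulate([0.0] + impulse[:-1]))  # step delayed by one sample
```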

6.3 Inversion of z-transforms


6.3.1 Inversion using partial fraction expansions
The method is similar to the one used for Laplace transforms. However, it turns
out that it is more convenient to invert X(z)/z, rather than X(z). Assume
first that X(z) has no repeated poles and no pole at z = 0. A partial fraction
expansion gives
    X(z)/z = c₀/z + Σ_(i=1)^n cᵢ/(z − pᵢ),    (6.65)

with

    c₀ = X(0),  cᵢ = [(z − pᵢ) X(z)/z]_(z=pᵢ).    (6.66)

Then, one can write X(z) as


    X(z) = c₀ + Σ_(i=1)^n cᵢ z/(z − pᵢ),    (6.67)

and obtain directly the time function


    x(k) = c₀ δ(k) + Σ_(i=1)^n cᵢ (pᵢ)ᵏ.    (6.68)

Note that the preliminary division of X(z) by z ensures that the function in the
time domain can be obtained directly, without shifting. For complex poles, the
two complex conjugate time-domain signals can be merged to produce a real
signal
cz c∗ z
+ ⇔ cpk + c∗ p∗k = 2|c| |p|k cos(∡p k + ∡c).
z − p z − p∗
(6.69)

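The identity in (6.69) can be verified numerically; the coefficient c and pole p below are arbitrary choices for illustration:

```python
import math, cmath

# Check (6.69): c p^k + c*(p*)^k = 2|c| |p|^k cos(angle(p) k + angle(c)),
# for an arbitrarily chosen complex coefficient c and pole p.
c = 0.3 - 0.4j
p = 0.8 * cmath.exp(0.7j)
for k in range(8):
    lhs = c * p**k + c.conjugate() * p.conjugate()**k
    rhs = 2 * abs(c) * abs(p)**k * math.cos(cmath.phase(p) * k + cmath.phase(c))
    assert abs(lhs - rhs) < 1e-12
print("identity (6.69) verified numerically")
```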
Example: let

    X(z) = 1/((z − 1)(z + 1)).        (6.70)

Performing the partial fraction expansion,

    X(z)/z = 1/(z(z − 1)(z + 1)) = −1/z + (1/2)/(z − 1) + (1/2)/(z + 1),        (6.71)

so that

    X(z) = −1 + (1/2) z/(z − 1) + (1/2) z/(z + 1),        (6.72)

and the time-domain signal is obtained:

    x(k) = −δ(k) + 1/2 + (1/2)(−1)^k.        (6.73)

The values of x(k) are

    x(0) = 0, x(1) = 0, x(2) = 1, x(3) = 0, x(4) = 1, ...        (6.74)
If X(z) has poles at z = 0, or has other repeated poles, one must use the more general formula of partial fraction expansion. Poles at z = 0 lead to terms of the form

    c/z^n  ⇔  c δ(k − n),        (6.75)

i.e., to delayed impulses. Repeated poles other than at z = 0 are generally inverted using the formula

    cz/(z − p)^n + c*z/(z − p*)^n
      ⇔  2|c| [k(k − 1)···(k − n + 2)/(n − 1)!] |p|^(k−n+1) cos(∡p (k − n + 1) + ∡c).        (6.76)

While this is a more complicated expression than in continuous-time, it is no


more difficult to apply and yields qualitatively similar results.

6.3.2 Inversion using long division

The z-transform permits an inversion procedure that does not have a useful equivalent in continuous-time. Essentially, it consists in reconstructing the power series X(z) = x(0) + x(1)z^{−1} + x(2)z^{−2} + ··· by polynomial division (long division). For example, one can divide z by z − a to reconstruct the signal associated with the transform

    X(z) = z/(z − a),        (6.77)

as shown in the following computation:

    z ÷ (z − a):   z − (z − a) = a
                   a − (a − a^2 z^{−1}) = a^2 z^{−1}
                   a^2 z^{−1} − (a^2 z^{−1} − a^3 z^{−2}) = a^3 z^{−2}, ...

    ⇒  X(z) = 1 + a z^{−1} + a^2 z^{−2} + ···,  i.e., x(0) = 1, x(1) = a, x(2) = a^2, ...

First example using long division

Another example is
1
X(z) = , (6.78)
z2 − 1
which was inverted by partial fraction expansion in (6.74). The same result may
be obtained through the following computation.

z -2 + z -4 + z -6 + . . .
z2 - 1 1
1 - z -2 ⇒ X(z) = z -2+ z -4 + z -6 + . . .
z -2
z -2 - z -4
z -4 . . . x(2) x(4) x(6)

Second example using long division
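Long division can be automated: the power-series coefficients of a rational transform are the impulse response of the corresponding filter. A sketch using scipy.signal.lfilter (assuming SciPy is available) reproduces the expansion of (6.78):

```python
import numpy as np
from scipy.signal import lfilter

# X(z) = 1/(z^2 - 1), coefficients in descending powers of z.
num = [1.0]
den = [1.0, 0.0, -1.0]

# Pad the numerator to the length of the denominator; the two lists are
# then read as polynomials in z^{-1}, and the impulse response of the
# filter num/den is exactly the long-division (power-series) expansion.
b = [0.0] * (len(den) - len(num)) + num
delta = np.zeros(9)
delta[0] = 1.0                 # discrete-time impulse
x = lfilter(b, den, delta)
print(x)                       # x(0), ..., x(8): 0, 0, 1, 0, 1, ..., matching (6.74)
```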

6.3.3 Conclusions drawn from the procedure of partial fraction expansion

Conclusions can be derived from the known results of partial fraction expansions, as in continuous-time. Assuming a rational transform X(z) = N(z)/D(z), with poles at z = pi, one may immediately state that x(k) is a linear combination of the following functions:

(a) δ(k) (an impulse).
(b) δ(k − l), l = 1 ··· n (delayed impulses), if X(z) has n poles at z = 0.
(c) pi^k, if pi is real.
(d) k(k − 1)···(k − l + 2) pi^k, for l = 2 ··· n, if pi is real and is repeated n times.
(e) |pi|^k cos(∡pi k + φi), for some φi, if pi is complex.
(f) k(k − 1)···(k − l + 2) |pi|^k cos(∡pi k + φi,l), for l = 2 ··· n and for some φi,l, if pi is complex and is repeated n times.

Not all terms must be present, but one knows that the highest order terms (those corresponding to l = n) cannot have zero coefficients.

Example: consider a signal with double poles at z = ±j, as shown in Fig. 6.16.

Figure 6.16: Example with double poles at z = ±j

The signal x(k) must be a linear combination of

    δ(k),  cos(π/2 k + φ11),  k cos(π/2 k + φ12),        (6.79)

and the coefficient multiplying the last term must be nonzero. The coefficient multiplying the first term is nonzero unless X(z) is strictly proper.

6.3.4 Properties of signals

Properties of signals with rational transforms can be obtained as in continuous-time, with the equivalence

    |z| = 1  ⇔  Re(s) = 0
    |z| > 1  ⇔  Re(s) > 0
    |z| < 1  ⇔  Re(s) < 0        (6.80)

As a consequence, the following properties of signals with rational transforms can be established:

• x(k) converges to zero ⇔ all poles are inside the unit circle.
• x(k) converges ⇔ all poles are inside the unit circle, except at most a single pole at z = 1.
• x(k) is bounded ⇔ all poles are inside the unit circle or are non-repeated poles on the unit circle.

6.4 Discrete-time systems

6.4.1 Definition and examples

Discrete-time systems transform a discrete-time signal into another discrete-time signal. Linear time-invariant systems are characterized by their impulse response h(k), whose z-transform H(z) is the transfer function of the system. The most common discrete-time systems are described by difference equations, which have rational transfer functions.

Example 1: consider a bank deposit with daily interest, such that

    y(k) = balance at the end of day k
    x(k) = deposit on day k
    α = daily interest rate        (6.81)

Then, y(k) satisfies the difference equation

    y(k) = y(k − 1) + α y(k − 1) + x(k),        (6.82)

so that the z-transform is given by

    Y(z) = (1 + α) z^{−1} Y(z) + X(z).        (6.83)

Then

    Y(z) = [z/(z − (1 + α))] X(z),        (6.84)

so that the transfer function of the system is

    H(z) = z/(z − (1 + α)).        (6.85)

H(z) is the z-transform of the impulse response

    h(k) = (1 + α)^k.        (6.86)

This impulse response is shown in Fig. 6.17 and is unbounded. The money grows exponentially with a single initial deposit: the system is unstable!

Note that the daily rate can be transformed into a yearly rate (and vice-versa) with

    3% yearly rate  ⇒  (1 + α)^365 = 1.03, or α = 8.1 × 10^{−5}.        (6.87)

Figure 6.17: Impulse response of a first-order unstable system

The time to double can also be determined (assuming a 3% yearly rate):

    (1 + α)^k = 2  ⇒  k = ln(2)/ln(1 + α) = 8.5592 × 10^3 days = 23.45 years.        (6.88)
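The two computations in (6.87) and (6.88) can be reproduced in a few lines (plain Python):

```python
import math

# Daily rate equivalent to a 3% yearly rate: (1 + alpha)^365 = 1.03.
alpha = 1.03 ** (1 / 365) - 1
print(alpha)                      # ~8.1e-05

# Doubling time: (1 + alpha)^k = 2.
k = math.log(2) / math.log(1 + alpha)
print(k, k / 365)                 # ~8559 days, ~23.45 years
```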

Example 2: consider the case of a microphone that receives the signal produced by a speaker. The system is shown in Fig. 6.18. There is a direct path from the speaker to the microphone, but a reflection is also received from a surface in the room. More echoes may be present in the received signal.

Figure 6.18: Transmission of sound with an echo

Fig. 6.19 shows the impulse response estimated experimentally in a laboratory system. Samples were taken at a rate of 10 kHz. The first part of the response exhibits a pair of large positive and negative pulses that may be attributed to the compression of air. A delay of 20 samples is visible and is associated with the distance from the speaker to the microphone. The 2 ms delay, at a speed of sound of 1125 ft/s, corresponds to a distance of 2.25 ft. Afterwards, the response decays to zero with some oscillations. In active noise control, it is common to treat the impulse response as equal to zero after some time. For example, the 300 samples shown on the figure may be taken to be the length of the impulse response of the system.
Figure 6.19: Impulse response corresponding to sound transmission in a room

6.4.2 FIR and IIR systems

FIR System (Finite Impulse Response)

A finite impulse response (FIR) system is such that the impulse response is a finite-time signal, i.e., for some n,

    h(k) = 0 for k > n.        (6.89)

The transfer function of an FIR system is of the form

    H(z) = h(0) + h(1) z^{−1} + ··· + h(n) z^{−n}
         = (h(0) z^n + ··· + h(n))/z^n.        (6.90)

In other words,

    A system is FIR  ⇔  H(z) is rational with all poles at z = 0        (6.91)

Example 2 (echo system) was an example of an FIR system. For an FIR system,

    y(k) = h(k) * x(k) = Σ_{i=−∞}^{∞} h(i) x(k − i)
         = h(0) x(k) + h(1) x(k − 1) + ··· + h(n) x(k − n).        (6.92)

In other words, the output signal is a linear combination of the delayed values of the input signal, and is particularly easy to compute.
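Computing the output of an FIR system is a finite convolution, e.g. with numpy.convolve; the impulse response below is an arbitrary example:

```python
import numpy as np

# FIR system: y(k) = h(0)x(k) + h(1)x(k-1) + h(2)x(k-2).
h = np.array([1.0, 0.5, 0.25])            # example impulse response
x = np.array([1.0, 0.0, 0.0, 0.0, 0.0])   # impulse input

# Convolution of the input with the impulse response; truncating keeps
# the samples y(0), ..., y(len(x)-1).
y = np.convolve(x, h)[:len(x)]
print(y)    # [1.   0.5  0.25 0.   0.  ] -- the impulse response, as expected
```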

IIR System (Infinite Impulse Response)

Systems that are not FIR are infinite impulse response (IIR) systems, and are such that

    for all n, there exists k > n such that h(k) ≠ 0.        (6.93)

Example 1 (bank deposit) was an example of an IIR system.

6.4.3 BIBO stability

BIBO stability is defined as for continuous-time systems, and stability conditions on rational transfer functions H(z) can also be obtained in a similar manner, leading to the following condition:

    H(z) is BIBO stable  ⇔  all poles of H(z) are inside the unit circle

Note that an FIR system is always stable.

Examples

(a) H(z) = 1/(z(z − 0.5)): stable.
(b) H(z) = 1/(z − 2): unstable.
(c) H(z) = 1/((z − 0.8 + 0.8j)(z − 0.8 − 0.8j)): unstable, since |p|^2 = 0.8^2 + 0.8^2 = 1.28 ⇒ |p| > 1.
(d) H(z) = 1/z^3: stable.
(e) H(z) = 1/(z − 1): unstable.

For cases (b) and (c), the output will be unbounded for most input signals. For case (e), the output will be unbounded for signals containing a DC component or bias.
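In practice, the BIBO test amounts to checking the magnitudes of the roots of the denominator, as in this sketch for example (c):

```python
import numpy as np

# Example (c): D(z) = (z - 0.8 + 0.8j)(z - 0.8 - 0.8j) = z^2 - 1.6z + 1.28.
den = [1.0, -1.6, 1.28]
poles = np.roots(den)
print(np.abs(poles))                    # both ~1.131, i.e. sqrt(1.28) > 1
print(bool(np.all(np.abs(poles) < 1)))  # False -> not BIBO stable
```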

6.4.4 Responses to step inputs

A step signal of arbitrary magnitude xm has the transform

    x(k) = xm  ⇔  X(z) = xm z/(z − 1).        (6.100)

Therefore, the step response of a system with transfer function H(z) is given by

    Y(z) = H(z) [z/(z − 1)] xm.        (6.101)

As in continuous-time, one can predict the results that would be obtained if one performed a partial fraction expansion of Y(z)/z (note the division by z to facilitate the expansion). If H(z) = N(z)/D(z) is BIBO stable, the step response is composed of the steady-state response and the transient response, with

    Y(z) = [H(z)]_{z=1} xm z/(z − 1)  +  N1(z)/D(z),        (6.102)
           (steady-state response)       (transient response)

where N1(z) is a polynomial depending on N(z) and D(z). In the steady state,

    lim_{k→∞} y(k) = lim_{z→1} (z − 1) Y(z) = H(1) xm.        (6.103)

The DC gain of a discrete-time system is H(1).
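The DC gain can be checked against a simulated step response; the first-order system below is an arbitrary example, written in powers of z^{-1} for scipy.signal.lfilter:

```python
import numpy as np
from scipy.signal import lfilter

# H(z) = z/(z - 0.5) = 1/(1 - 0.5 z^{-1}); DC gain H(1) = 2.
b, a = [1.0], [1.0, -0.5]
dc_gain = sum(b) / sum(a)             # evaluate H(z) at z = 1
y = lfilter(b, a, np.ones(50))        # step input with xm = 1
print(dc_gain, y[-1])                 # both approach 2.0
```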

6.4.5 Responses to sinusoidal inputs

We found earlier that the transform of a cosine signal is given by

    x(k) = cos(Ω0 k)  ⇔  X(z) = (1/2) z/(z − e^{jΩ0}) + (1/2) z/(z − e^{−jΩ0})
                               = (z^2 − cos(Ω0) z)/(z^2 − 2 cos(Ω0) z + 1).        (6.104)

The transform has two poles on the unit circle, placed at an angle Ω0 from the real axis. Ω0 is the frequency of the signal, in radians, with an associated period of 2π/Ω0 in samples.

Using partial fraction expansions, the response of a BIBO stable system with rational transfer function H(z) to an input x(k) = xm cos(Ω0 k) is

    Y(z) = [N(z)/D(z)] [(z^2 − cos(Ω0) z)/(z^2 − 2 cos(Ω0) z + 1)] xm
         = [N1(z)/(z^2 − 2 cos(Ω0) z + 1)] xm  +  N2(z)/D(z),        (6.105)

where the first term is the steady-state response Yss(z) and the second term is the transient response Ytr(z). As in continuous-time, it turns out that N1(z) is such that

    yss(k) = M xm cos(Ω0 k + φ),        (6.106)

where

    M e^{jφ} = [H(z)]_{z=p} and p = e^{jΩ0}.        (6.107)

In other words, M and φ are the magnitude and angle of the transfer function evaluated on the unit circle:

    M = |H(e^{jΩ0})|,  φ = ∡H(e^{jΩ0}).        (6.108)

As in continuous-time, H(e^{jΩ0}) is called the frequency response of the system. It is also the DTFT of the impulse response of the system.

Example

Consider the FIR system with impulse response

    h(k) = δ(k) − δ(k − 4),        (6.109)

so that y(k) = x(k) − x(k − 4). The computation of the output and an implementation of the system are shown in Fig. 6.20, where D represents a one-step delay.

Figure 6.20: System with H(z) = 1 − z^{−4}, signal computation (top) and system implementation (bottom)

The system has a finite impulse response, and is therefore BIBO stable. Its transfer function is given by

    H(z) = 1 − z^{−4} = (z^4 − 1)/z^4.        (6.110)

There are 4 poles at z = 0, and 4 zeros at z = 1 (e^{j0}), z = −1 (e^{jπ}), z = j (e^{jπ/2}), and z = −j (e^{−jπ/2}). The four zeros on the unit circle imply that the frequency response is zero for the associated frequencies and, therefore, that the steady-state response is zero for the following signals:

    x(k) = a                    Ω0 = 0
    x(k) = a(−1)^k              Ω0 = π
    x(k) = a cos(π/2 k + φ)     Ω0 = π/2

i.e., for signals with frequencies 0, π/2, and π. This result may be easily interpreted by considering the computation of the output signal in Fig. 6.20.

Figure 6.21: Sinusoidal signal with frequency Ω0 = π/4

On the other hand, consider the signal

    x(k) = cos(π/4 k).        (6.111)

The signal is shown in Fig. 6.21. The frequency response at Ω0 = π/4 is

    H(e^{jπ/4}) = ((e^{jπ/4})^4 − 1)/(e^{jπ/4})^4 = (e^{jπ} − 1)/e^{jπ} = (−2)/(−1) = 2.        (6.112)

Therefore, the steady-state output is given by

    yss(k) = 2 cos(π/4 k).        (6.113)

As opposed to being eliminated, this signal is amplified by a factor of 2. Again, this result may be interpreted in view of the computation of the signal in Fig. 6.20.

As an additional example, consider

    x(k) = cos(π/8 k),        (6.114)

so that

    H(e^{jπ/8}) = (j − 1)/j = 1 + j = √2 e^{j45°} = √2 e^{jπ/4},        (6.115)

and

    yss(k) = √2 cos(π/8 k + π/4).        (6.116)
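The predicted amplification by a factor of 2 at Ω0 = π/4 can be confirmed by direct simulation of y(k) = x(k) − x(k − 4):

```python
import numpy as np

# Simulate y(k) = x(k) - x(k-4) for x(k) = cos(pi k / 4), zero ICs.
k = np.arange(50)
x = np.cos(np.pi * k / 4)
y = x - np.concatenate((np.zeros(4), x[:-4]))

# After the 4-sample transient, y(k) = 2 cos(pi k / 4), as predicted.
print(bool(np.allclose(y[4:], 2 * np.cos(np.pi * k[4:] / 4))))   # True
```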

6.4.6 Systems described by difference equations and effect of initial conditions

Difference equations are similar to input/output differential equations. As an example, consider the second-order equation

    y(k) = −a1 y(k − 1) − a0 y(k − 2) + b2 x(k) + b1 x(k − 1) + b0 x(k − 2).        (6.117)

At time k = 0, the recursion is started with initial conditions y(−1), y(−2), x(−1), x(−2), and the output signal at k = 0 is given by

    y(0) = −a1 y(−1) − a0 y(−2) + b2 x(0) + b1 x(−1) + b0 x(−2).        (6.118)

Typically, the initial conditions are set to zero, but the effect of nonzero values may be analyzed in much the same way as in continuous-time. For nonzero values prior to k = 0,

    y(k − 1)  ⇔  z^{−1} Y(z) + y(−1),        (6.119)

so that

    y(k − 2)  ⇔  z^{−1} (z^{−1} Y(z) + y(−1)) + y(−2) = z^{−2} Y(z) + z^{−1} y(−1) + y(−2).        (6.120)

In this manner, the shifting formula can be extended to arbitrary time shifts, and used to account for arbitrary initial conditions.

Applying the formulas to the difference equation, one obtains

    Y(z) = −a1 z^{−1} Y(z) − a1 y(−1) − a0 z^{−2} Y(z) − a0 z^{−1} y(−1) − a0 y(−2)
           + b2 X(z) + b1 z^{−1} X(z) + b1 x(−1)
           + b0 z^{−2} X(z) + b0 z^{−1} x(−1) + b0 x(−2),        (6.121)

and

    Y(z) = N(z)/(z^2 + a1 z + a0)  +  [(b2 z^2 + b1 z + b0)/(z^2 + a1 z + a0)] X(z),        (6.122)

where

    N(z) = −a1 z^2 y(−1) − a0 z y(−1) − a0 z^2 y(−2) + b1 z^2 x(−1) + b0 z x(−1) + b0 z^2 x(−2).        (6.123)

The first term Yzi(z) is the response to the initial conditions (the zero-input response) and the second term Yzs(z) is the response to the input (the zero-state response). The two denominators are identical, and determine the poles of the system. The transfer function of the system is the rational function

    H(z) = (b2 z^2 + b1 z + b0)/(z^2 + a1 z + a0),        (6.124)

and can be directly transcribed from the original difference equation.

6.4.7 Internal stability definitions and properties

Internal stability definitions and properties follow as in continuous-time for a system described by a difference equation. The definitions are:

    Asymptotically stable    yzi(k) → 0 as k → ∞
    Marginally stable        yzi(k) bounded
    Internally unstable      yzi(k) unbounded

while the tests on the poles become:

    Asymptotically stable    All poles inside the unit circle
    Marginally stable        All poles inside the unit circle,
                             plus possible non-repeated poles on the unit circle
    Internally unstable      At least one pole outside the unit circle,
                             or a repeated pole on the unit circle

As in continuous-time, asymptotic stability is equivalent to bounded-input bounded-output stability. Note that an FIR system has all poles at z = 0, so that the response to initial conditions becomes exactly zero in finite time. An FIR system is always BIBO stable and asymptotically stable.

6.4.8 Realization of discrete-time transfer functions

FIR system

An FIR transfer function is of the form

    H(z) = h(0) + h(1) z^{−1} + h(2) z^{−2} + ··· + h(n) z^{−n},        (6.125)

and the output may be computed using the formula

    y(k) = h(0) x(k) + h(1) x(k − 1) + ··· + h(n) x(k − n).        (6.126)

The implementation is shown in Fig. 6.22.

Figure 6.22: Implementation of an FIR transfer function

The operator z^{−1} is sometimes replaced by D, for delay. It is a one-step delay, and is realized with one memory location in a computer code. Specifically, code for an implementation of the FIR filter is of the form

    Begin
      y = h(0)*x + h(1)*m1 + ··· + h(n)*mn
      mn = m(n−1)
      ...
      m2 = m1
      m1 = x
    Repeat

where m1 , ..., mn are n storage locations. The fact that the zero-input response
vanishes in n steps means that the effect of any value placed in the registers m1,
..., mn, disappears n time instants after the system is started.
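The pseudocode above translates directly into Python, with a list playing the role of the storage locations m1, ..., mn (the impulse response used is an arbitrary example):

```python
# One iteration of the FIR pseudocode: compute y, then shift the registers.
def fir_step(h, m, x):
    y = h[0] * x + sum(h[i] * m[i - 1] for i in range(1, len(h)))
    m[1:] = m[:-1]       # mn = m(n-1), ..., m2 = m1
    m[0] = x             # m1 = x
    return y

h = [1.0, 0.5, 0.25]     # h(0), h(1), h(2)
m = [0.0, 0.0]           # n = 2 storage locations, zero initial conditions
out = [fir_step(h, m, xk) for xk in [1, 0, 0, 0]]
print(out)               # [1.0, 0.5, 0.25, 0.0] -- the impulse response
```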
IIR system

An IIR system has the general transfer function

    H(z) = (bn z^n + ··· + b0)/(z^n + a_{n−1} z^{n−1} + ··· + a0),        (6.127)

which may be translated into the difference equation

    y(k) = −a_{n−1} y(k − 1) − ··· − a0 y(k − n) + bn x(k) + ··· + b0 x(k − n).        (6.128)

A direct implementation of the difference equation is shown in Fig. 6.23 and requires 2n storage elements.

Figure 6.23: Direct implementation of an IIR transfer function

An alternate implementation is shown in Fig. 6.24. It only requires n storage elements, which is the minimum possible. The realization is equivalent to the canonical form discussed in continuous-time, and assumes that bn = 0. To show that the realization implements the desired transfer function, note that

    X2(z) = z X1(z), X3(z) = z X2(z), ..., Xn(z) = z X_{n−1}(z),        (6.129)

and

    X(z) = z Xn(z) + a0 X1(z) + ··· + a_{n−1} Xn(z).        (6.130)

Therefore

    X3(z) = z^2 X1(z), ..., Xn(z) = z^{n−1} X1(z),        (6.131)

and

    X(z) = (z^n + a_{n−1} z^{n−1} + ··· + a0) X1(z).        (6.132)

Also

    Y(z) = b_{n−1} Xn(z) + ··· + b0 X1(z)
         = (b_{n−1} z^{n−1} + ··· + b0) X1(z)
         = [(b_{n−1} z^{n−1} + ··· + b0)/(z^n + a_{n−1} z^{n−1} + ··· + a0)] X(z).        (6.133)

Figure 6.24: Minimal implementation of an IIR transfer function

As in continuous-time, if bn ≠ 0, the transfer function can be split into a gain and a strictly proper transfer function

    H(z) = bn + Hsp(z),        (6.134)

where Hsp(z) is of the form

    Hsp(z) = (b′_{n−1} z^{n−1} + ··· + b′0)/(z^n + a_{n−1} z^{n−1} + ··· + a0).        (6.135)

Then, the output y is given by

    y(k) = bn x(k) + y′(k),        (6.136)

where y′(k) is the output of a strictly proper system as shown in Fig. 6.24.
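A strictly proper IIR transfer function can be simulated with scipy.signal.lfilter, which internally uses a minimal direct-form realization of the kind shown in Fig. 6.24; the first-order example below is an arbitrary illustration:

```python
import numpy as np
from scipy.signal import lfilter

# H(z) = 1/(z - 0.5), i.e. y(k) = 0.5 y(k-1) + x(k-1).
# In powers of z^{-1}: H = z^{-1} / (1 - 0.5 z^{-1}).
b = [0.0, 1.0]
a = [1.0, -0.5]

x = np.zeros(6)
x[0] = 1.0                 # impulse input
y = lfilter(b, a, x)
print(y)                   # [0, 1, 0.5, 0.25, ...]: h(k) = 0.5^(k-1) for k >= 1
```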

Realizability: to be realizable, a transfer function must be proper. With a unilateral z-transform (where the summation is from k = 0 to ∞), a system with a non-proper transfer function is not possible. With a bilateral z-transform (where the summation is from k = −∞ to ∞), a non-proper transfer function is possible, but is still not realizable. In particular, the impulse response of H(z) = z is an impulse occurring at k = −1. A system whose output responds before the input is applied is called non-causal, and does not respect the cause-and-effect properties of physical systems (referred to as causality).

6.4.9 State-space models

A discrete-time state-space model has the form

    x(k + 1) = A x(k) + B u(k)
    y(k) = C x(k) + D u(k).        (6.137)

The minimal implementation of the IIR system is an example of a state-space system with matrices

    A = [  0     1     0    ···    0      ]        B = [ 0 ]
        [  0     0     1    ···    0      ]            [ ⋮ ]
        [              ···                ]            [ 0 ]
        [ −a0   ···           −a_{n−1}    ]            [ 1 ]

    C = [ b0  ···  b_{n−1} ],   D = 0.        (6.138)

As a result, a rational and proper transfer function can always be realized in discrete-time using a state-space description. Conversely, the transfer function may be computed by obtaining the output of the system using

    z X(z) − x(0) = A X(z) + B U(z)
    Y(z) = C X(z) + D U(z),        (6.139)

which gives

    Y(z) = (C (zI − A)^{−1} B + D) U(z)  +  C (zI − A)^{−1} x(0),        (6.140)

where the first factor in parentheses is H(z), the transfer function of the system, and the second term is the response to the initial conditions (IC's). The poles are the eigenvalues of the matrix A, i.e., the roots of det(zI − A).

In discrete-time, the general solution of the difference equation is relatively simple. Iterating on the recursion equation gives

    x(1) = A x(0) + B u(0)
    x(2) = A x(1) + B u(1) = A^2 x(0) + A B u(0) + B u(1)
    ...
    x(k) = A^k x(0) + Σ_{i=0}^{k−1} A^{k−1−i} B u(i).        (6.141)

Therefore, the general solution is

    y(k) = C A^k x(0)  +  Σ_{i=0}^{k−1} C A^{k−1−i} B u(i) + D u(k),        (6.142)

where the first term is the response to the initial conditions (the zero-input response) and the remaining terms are the response to the input (the zero-state response).
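The closed-form solution (6.141) can be verified against direct iteration of the recursion; the matrices and input below are arbitrary choices:

```python
import numpy as np

A = np.array([[0.0, 1.0], [-0.5, 1.0]])
B = np.array([[0.0], [1.0]])
x0 = np.array([[1.0], [0.0]])
u = [1.0, -1.0, 0.5, 0.0, 2.0]

# Direct iteration of x(k+1) = A x(k) + B u(k)
x = x0.copy()
for k in range(5):
    x = A @ x + B * u[k]

# Closed-form: x(5) = A^5 x(0) + sum_{i=0}^{4} A^{4-i} B u(i)
xf = np.linalg.matrix_power(A, 5) @ x0
for i in range(5):
    xf = xf + np.linalg.matrix_power(A, 4 - i) @ B * u[i]

print(bool(np.allclose(x, xf)))   # True
```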

6.4.10 Extensions of other continuous-time results

The similarity between the z-transform and the Laplace transform implies that many results discussed earlier can be adapted to discrete-time. When differences arise, it is usually because of the distinctions between the stability regions for the continuous-time and discrete-time cases.

(a) Interconnected systems

The transfer functions of interconnected systems can be obtained exactly as in continuous-time. For example, the closed-loop system of Fig. 6.25 has transfer function G(z)/(1 + G(z)).

Figure 6.25: Discrete-time feedback system

(b) Desirable pole locations

The desirable region for the location of the closed-loop poles is the grey portion of the z-plane shown in Fig. 6.26. It is the intersection of the objectives of damping and settling time.

Figure 6.26: Desirable z-domain pole locations (intersection of the regions of stability, adequate settling time, and adequate damping)

(c) Routh-Hurwitz criterion

The Routh-Hurwitz criterion is not directly useful in discrete-time, because the objective is no longer to place the poles in the left half-plane. The Jury test may be used to find the number of poles of a polynomial located outside the unit circle, and is the equivalent of the Routh-Hurwitz criterion.

(d) Root-locus

The poles of the feedback system of Fig. 6.25 can be obtained from the poles and zeros of the open-loop system using the same procedure as in continuous-time. Differences arise only in the interpretation of the results. In discrete-time, any system with a strictly proper transfer function becomes unstable in closed-loop for large gains. Even a stable first-order system with transfer function b/(z − a) becomes unstable for large gain, which is not the case in continuous-time.

(e) Bode plots

Frequency response plots have the same significance in discrete-time as they have in continuous-time. The plots of the magnitude and phase of H(e^{jΩ}) characterize the amplification and delay of the response to an input signal cos(Ωk). It is only necessary to plot for a range Ω: 0 → π on the x-axis. There is no simple method to draw these plots by hand, as there is for Bode plots, and they are normally generated numerically.
(f) Nyquist criterion

The Nyquist criterion can be applied to test the stability of a closed-loop system, but a different curve must be used in discrete-time. In continuous-time, the open-loop transfer function is evaluated along the imaginary axis (completed at infinity), which is the portion of the s-plane that is associated with the sinusoidal response, as well as the boundary for stability (see Fig. 5.35). In discrete-time, the computation must be performed for

    z = e^{jΩ}, Ω = −π → π,        (6.143)

as shown in Fig. 6.27. Evaluating H(z) along the Nyquist curve is equivalent to plotting the frequency response H(e^{jΩ}) in the complex plane. Counting the number of encirclements of the (−1, 0) point gives the number of unstable closed-loop poles, as in continuous-time.

Figure 6.27: Nyquist curve in discrete-time (the unit circle, traversed starting from z = 1)

6.4.11 Example of root-locus in discrete-time

Consider a plant

    P(z) = 1/(z − 1),        (6.144)

which is an integrator with a unit step delay. The controller also has a pole at z = 1 and is given by

    C(z) = g (z − za)/(z − 1).        (6.145)

The closed-loop poles are the roots of

    dCL(z) = z^2 + (g − 2) z + 1 − g za,        (6.146)

and the root-locus is shown on Fig. 6.28 for za = 0.8.

The root-locus is the same as it would be in continuous-time, but the conditions for stability are different. There is a value of the gain gmax for which one of the poles is equal to −1. The system is only stable up to g = gmax. For g0 = 1/za, one of the poles is equal to zero. It is counterproductive to increase the gain beyond g0.

Figure 6.28: Root-locus for the discrete-time example (the unit circle is shown as a dashed curve)

A possible design consists in choosing a desired value for the closed-loop poles equal to some zd, with |zd| < 1. Then,

    dCL(z) = (z − zd)^2 = z^2 − 2 zd z + zd^2        (6.147)

is obtained for

    g = 2(1 − zd),  za = (1 + zd)/2.        (6.148)

Such a design technique is referred to as pole placement. When both poles are placed at the same location, the choice corresponds to the breakaway point of the root-locus. The method can work well if reasonable values of zd are chosen, which are generally values slightly smaller than 1.
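The pole-placement formulas (6.148) are easy to check numerically, here for the arbitrary choice zd = 0.6:

```python
import numpy as np

# Place both closed-loop poles at z = zd using (6.148).
zd = 0.6
g = 2 * (1 - zd)            # g = 0.8
za = (1 + zd) / 2           # za = 0.8

# Closed-loop polynomial (6.146): z^2 + (g - 2) z + 1 - g*za.
dcl = [1.0, g - 2, 1 - g * za]
print(np.roots(dcl))        # both roots at 0.6 (up to numerical precision)
```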

6.5 Problems

Problem 6.1: (a) Find x(0) if the z-transform of x(k) is X(z) = (az − 1)/(z − 1).
(b) Find x(0) if the z-transform of x(k) is X(z) = z/(z^2 − az + a^2).

Problem 6.2: (a) Consider the Newton-Raphson method to find the zeros of a function f(x), which is described by the difference equation

    x(k) = x(k − 1) − f(x(k − 1))/f′(x(k − 1)),        (6.149)

where f′(x(k − 1)) is the derivative of f(x) evaluated at x(k − 1). Let f(x) = ax^2. Find the z-transform of x(k) as a function of x(−1), and give conditions under which x(k) converges to zero as k → ∞.

(b) Repeat part (a) for the gradient algorithm

    x(k) = x(k − 1) − f′(x(k − 1)).        (6.150)

Problem 6.3: (a) Use partial fraction expansions to find the x(k) whose z-transform is X(z) = 1/((z − 1)(z − 2)).
(b) Use partial fraction expansions to find the x(k) whose z-transform is X(z) = z/(z^2 − 2z + 2).

Problem 6.4: (a) Sketch the time function x(k) that you would associate with the following poles: p1 = 0.9j, p2 = −0.9j. Only a sketch is required, but be as precise as possible.
(b) Repeat part (a) for: p1 = 1, p2 = −1.
(c) Repeat part (a) for: p1 = 0.3, p2 = 0.9.
(d) Repeat part (a) for: p1 = e^{jπ/6}, p2 = e^{−jπ/6}.
The pole locations for the 4 parts are shown in Fig. 6.29.

Figure 6.29: Pole locations for problem 6.4

Problem 6.5: (a) Find Y(z) as a function of X(z) if y(k) = (−1)^k x(k). Assuming that X̄(Ω), the DTFT of x(k), is as shown on Fig. 6.30, sketch Ȳ(Ω), the DTFT of y(k).

Figure 6.30: DTFT for problem 6.5

(b) Repeat part (a) for

    y(k) = x(k/2)  if k is even
    y(k) = 0       if k is odd

Problem 6.6: (a) Using partial fraction expansions, find the signal x(k) whose z-transform is

    X(z) = 4/((z − 1)(z^2 + 1)).        (6.151)

Use the result to compute x(k) for k = 0, ..., 8.
(b) Using long division, obtain x(k) for the signal of part (a) and k = 0, ..., 8. Compare the results to those obtained in part (a).

Problem 6.7: For the signals whose z-transforms are given below, indicate whether the time functions x(k) are bounded, converge to some value, or vanish in finite time.
(a) X(z) = (z + 1)/((z + 0.5)(z − 0.7 + 0.7j)(z − 0.7 − 0.7j)).
(b) X(z) = (1 − 2z^{−1})(1 + 3z^{−1}).
(c) X(z) = (z − 1)/((z + 1)(z + 0.5)^2).
(d) X(z) = (z + 1)/((z − 1)(z + 0.5)^2).
(e) X(z) = (z + 1)/(z(z − 1)).
(f) X(z) = z^{10}/(z + 5).
(g) X(z) = (z + 1)^2/((z^2 + 1)(z − 0.5)).
(h) X(z) = (z − 2)^2/(z^3(z − 1)).

Problem 6.8: Indicate whether the discrete-time systems with the following transfer functions are BIBO stable.
(a) H(z) = z/(z − 0.5).
(b) H(z) = z^3/(z^2 + 0.81)^2.
(c) H(z) = z/((z + 1)(z + 2)).
(d) H(z) = (z − 10)/z^{10}.
(e) H(z) = (z + 0.5)/((z + 1)(z + 0.25)).
(f) H(z) corresponding to the difference equation: y(k + 1) − (1/2)y(k) = x(k + 1) − 2x(k).

Figure 6.31: System for problem 6.9 (a) (two-delay feedback loop with gains a and −a^2)
Problem 6.9: (a) Find the transfer function H(z) = Y (z)/X(z) and a condi-
tion on a such that the system of Fig. 6.31 is BIBO stable.
(b) Find the transfer function H(z) = Y (z)/X(z) and indicate whether the
system of Fig. 6.32 is BIBO stable.
Figure 6.32: System for problem 6.9 (b)

Problem 6.10: (a) Find the transfer function H(z) = Y(z)/X(z) corresponding to the difference equation

    y(k) = y(k − 1) + y(k − 2) + x(k).        (6.152)

Is the system stable?
(b) Let the input be x(k) = δ(k), the discrete-time impulse. Show that the response of the system of part (a) with zero initial conditions is such that lim_{k→∞} y(k)/y(k − 1) exists, and give its value.

Problem 6.11: (a) Find the steady-state response of the system with transfer function H(z) = (4z − 2)/(z(z + 0.5)) and an input x(k) = 3. Do not perform a partial fraction expansion: use the DC gain.

(b) Repeat part (a) for x(k) = cos(πk/2). Do not perform a partial fraction
expansion: use the frequency response.
(c) Write a program (in Matlab, for example) to check the results of parts (a)
and (b). Plot the input x(k) and the output y(k) for both cases over 40 time
steps. To implement H(z), find a difference equation that corresponds to the
given transfer function, and let all initial conditions be zero.

Problem 6.12: Consider the system of Fig. 6.33, with C(z) = g/(z − a) and
P (z) = 1/z. Find conditions on g and a such that the steady-state error ess =
limk→∞ e(k) is zero for all constant inputs r(k) = rm. Assume that g > 0.

Figure 6.33: System for problem 6.12

Problem 6.13: (a) Find the initial value and the final value of the signal whose z-transform is

    X(z) = z^2/((z + 0.5 + 0.9j)(z + 0.5 − 0.9j)).        (6.153)

(b) Find the initial value and the final value of the step response of the system

    H(z) = (2z^2 − 0.3z + 0.25)/(z^2 − 0.6z + 0.25).        (6.154)
Problem 6.14: (a) Sketch the time function x(k) that you would associate with
the following poles: p1 = 1, p2 = j, p3 = −j. Only a sketch is required, but be
as precise as possible.
(b) Repeat part (a) for: p1 = ej2π/3 , p2 = e−j2π/3 .
(c) Is a discrete-time system with the following poles BIBO stable?
p1 = 0.9 + 0.1j, p2 = 0.9 − 0.1j, p3 = −0.9 + 0.1j, p4 = −0.9 − 0.1j,
p5 = 0.1 + 0.9j, p6 = 0.1 − 0.9j, p7 = −0.1 + 0.9j, p8 = −0.1 − 0.9j.
(d) Repeat part (c) for: p1 = −1, p2 = −0.5, p3 = −0.5 + 0.5j, p4 = −0.5 − 0.5j.
Pole locations for the 4 parts are shown on Fig. 6.34.
Problem 6.15: (a) Consider the discrete-time system with transfer function

    H(z) = z^4/(z^4 − 1).        (6.155)
Figure 6.34: Pole locations for problem 6.14

What are the poles and zeros of the system? Is the system BIBO stable?
(b) Find the impulse response of the system of part (a) using long division.

Problem 6.16: (a) Using long division, find y(0), y(1), and y(2) for

    Y(z) = (2z^3 + 13z^2 + z)/(z^3 + 7z^2 + 2z + 1).        (6.156)

(b) Find the transfer function H(z) = Y(z)/X(z) for the system of Fig. 6.35. Give the values of the closed-loop poles and the range of gain g for which the system is closed-loop stable. Show the root-locus of the system, with g being the parameter that varies from 0 to ∞.

Figure 6.35: System for problem 6.16 (b) (unity feedback around 1/(z(z − 1)))

Problem 6.17: (a) Let

    y(n) = y(n − 1) + (3/4) y(n − 2) + x(n − 1).        (6.157)

Find the response Y(z) to arbitrary initial conditions and an arbitrary input X(z). Determine whether the system is BIBO stable.
(b) Using partial fraction expansions, find the signal y(k) for y(−1) = 0, y(−2) = 1, and an input x(k) that is zero everywhere except x(0) = 1.

Problem 6.18: (a) Find the step response of the system

    H(z) = 1/(z(z^2 − z + 1/2)).        (6.158)

Indicate what the transient response and the steady-state response are. Compare
the steady-state value to the value predicted by the DC gain.
(b) Find the steady-state response of the system of part (a) to x(k) = cos(πk/2) as
well as for x(k) = cos(πk).

Problem 6.19: (a) Using partial fraction expansions, find the signal x(k) whose
z-transform is
1
X(z) = . (6.159)
(z − 1)(z 2 − 2z + 2)

(b) Find the transfer function H(z) = Y (z)/X(z) and a condition on a such
that the system of Fig. 6.36 is BIBO stable.

Figure 6.36: System for problem 6.19 (b): two delay blocks (D) in cascade, with feedback gains −2a and −a^2

Problem 6.20: Find the steady-state response of the system with transfer function H(z) = (z^4 − 1)/z^8 and an input x(k) = cos(πk/4). Do not perform a partial fraction expansion: use the frequency response. Plot the response yss(k) (label the axes precisely).
Chapter 7

Sampled-data systems

7.1 Conversion of a continuous-time signal to a discrete-time signal
7.1.1 Definition of sampling
Consider the transformation of a continuous-time signal x(t) to a discrete-time
signal xd (k), such that

xd (k) = x(kT ), (7.1)

where T is called the sampling period (in seconds), fs = 1/T and ωs = 2π/T
are the sampling frequency (in Hz and in rad/s, respectively). The continuous/discrete conversion is referred to as sampling, or discretization, and is shown
in Fig. 7.1 for an arbitrary signal x(t). The operation is performed in analog-to-
digital (A/D) converters. However, such converters also perform a quantization
operation, which approximates real numbers by a finite set of numbers, coded
by bits. The effects of quantization are ignored in the following discussion.

Figure 7.1: Conversion of a continuous-time signal to a discrete-time signal
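To make the operation concrete, here is a minimal Python sketch of (7.1); the signal and sampling period are arbitrary choices, not taken from the text:

```python
import math

def sample(x, T, n):
    """Return the first n samples xd(k) = x(kT) of a continuous-time signal x."""
    return [x(k * T) for k in range(n)]

# Example: x(t) = e^{-t} cos(2*pi*t), sampled with T = 0.1 s (fs = 10 Hz)
x = lambda t: math.exp(-t) * math.cos(2 * math.pi * t)
xd = sample(x, 0.1, 5)
```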

Let X(s) denote the Laplace transform of x(t) and Xd (z) denote the z-
transform of xd (k). A natural question to ask is: how is Xd (z) related to X(s)?
The answer to the question turns out to be relatively simple when X(s) is a
rational function of s, but quite complicated in general.

7.1.2 Transform of a sampled signal with rational


transform
Consider a general rational transform X(s) with non-repeated poles

X(s) = N(s)/D(s) = Σ_{i=1}^{n} c_i/(s − p_i), (7.2)

where p_i are the poles of X(s) and c_i are the coefficients of the partial fraction expansion. The corresponding signal is given by

x(t) = Σ_{i=1}^{n} c_i e^{p_i t}. (7.3)

If we sample the signal every T seconds, the resulting discrete-time signal is given by

xd(k) = Σ_{i=1}^{n} c_i e^{p_i kT}, (7.4)

and the z-transform of the discrete-time signal is

Xd(z) = Σ_{i=1}^{n} c_i z/(z − e^{p_i T}). (7.5)

One finds that every pole p_i of the continuous-time signal is associated with a pole p_{d,i} of the discrete-time signal equal to

p_{d,i} = e^{p_i T}. (7.6)

Note that, because zeros are not transformed in the same manner, there can be
pole/zero cancellations in the z-transform even if there are none in the Laplace
transform. As a result, there may be fewer discrete-time poles than continuous-
time poles.
Using the general formula of partial fraction expansion, the result can be
extended to the general case with repeated poles. Therefore, a signal with
rational transform in the s-domain always becomes a signal with a rational
transform in the z-domain. The z-transform can be obtained relatively easily,
and the poles of Xd (z) are those of X(s) transformed through z = esT .

Example 1: consider the signal

x(t) = e^{−at} u(t) ⇔ X(s) = 1/(s + a). (7.7)

The discrete-time signal obtained by sampling is

xd(k) = e^{−akT} = (e^{−aT})^k, (7.8)

and has the transform

Xd(z) = z/(z − e^{−aT}). (7.9)

In other words, a signal with a first-order rational transform in the s-domain becomes a signal with a first-order rational transform in the z-domain. Note that the pole at −a is transformed into z = e^{−aT}.
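A quick numerical check of this pole mapping (the values of a and T below are arbitrary choices): the samples of e^{−at} form the geometric sequence (e^{−aT})^k, whose transform has its pole at e^{−aT}.

```python
import math

a, T = 2.0, 0.1
pole_s = -a                     # continuous-time pole of X(s) = 1/(s + a)
pole_z = math.exp(pole_s * T)   # corresponding discrete-time pole e^{pT}

# The sampled exponential equals the geometric sequence with ratio e^{-aT}
samples = [math.exp(-a * k * T) for k in range(10)]
geometric = [pole_z ** k for k in range(10)]
```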
Example 2: this example shows that there may be a pole/zero cancellation in the formula for Xd(z). Consider

x(t) = cos(ω0 t) ⇔ X(s) = s/(s^2 + ω0^2). (7.10)

The discrete-time signal is

xd(k) = cos(ω0 T k) ⇔ Xd(z) = (z^2 − cos(ω0 T) z)/(z^2 − 2 cos(ω0 T) z + 1). (7.11)

The transform has poles at e^{jω0 T} and e^{−jω0 T}. It also has zeros at z = 0 and z = cos(ω0 T).
If ω0T = 2π, Xd (z) reduces to Xd (z) = z/(z − 1) and one pole is cancelled
by a zero. Note that the transform is the same as the transform of a step signal.
The two signals have the same transforms because, as shown in Fig. 7.2, the
samples are identical. In general, the number of poles of Xd (z) may be smaller
than the number of poles of X(s) (it cannot be larger, however), and it is not
always possible to invert the sampling operation.
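The cancellation has a simple time-domain counterpart, sketched below in Python (the specific values of ω0 and T are arbitrary): when ω0 T = 2π, every sample of the cosine equals 1, exactly like the samples of a step.

```python
import math

w0 = 2 * math.pi       # signal frequency (rad/s)
T = 2 * math.pi / w0   # sampling period equal to the signal period, so w0*T = 2*pi

cos_samples = [math.cos(w0 * k * T) for k in range(8)]
step_samples = [1.0] * 8
# The two sequences are identical: the sampled cosine looks like a sampled step
```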

7.1.3 Transformation z = e^{sT}

The transformation z = e^{sT} is at the core of the sampling operation, so that we analyze it in more detail. Since e^{sT} = e^{Re(s)T} e^{j Im(s)T}, we have that

|z| = e^{Re(s)T},
∡z = Im(s) T. (7.12)
Figure 7.2: A cosine function and a step function sampled at a period equal to the period of the cosine function are identical signals

Note that the transformation z = e^{sT} is not invertible, unless a restriction is placed on the variable s. Choose the portion of the s-plane with −π/T < Im(s) ≤ π/T. With this restriction, the transformation is a bijection (one-to-one) and the inverse of the transformation is

s = (1/T) ln(z) = (1/T) ln(|z|) + j (1/T) ∡z. (7.13)
The mapping is such that

Re(s) < 0 if and only if |z| < 1,
Re(s) = 0 if and only if |z| = 1,
Re(s) > 0 if and only if |z| > 1. (7.14)

The jω-axis of the s-plane maps to the unit circle of the z-plane, the open left
half-plane maps to the inside of the unit circle, and the open right half-plane
maps to the outside of the unit circle. This is shown in Fig. 7.3.
The equivalences of the following table are also worth noting.

s-plane             z-plane
s = 0               z = 1
s = ±jπ/T           z = −1
s = +jπ/(2T)        z = +j
s = −jπ/(2T)        z = −j
Re(s) = −∞          z = 0
s2 = s1^*           z2 = z1^*

Figure 7.3: Mapping z = e^{sT}

Note that a single z location is associated to a given s location, but the reverse is not true if all values of s are considered. Values of s outside the range −π/T < Im(s) ≤ π/T map to the same values of z as some values of s within the range, with

s2 = s1 + j 2π/T ⇒ z2 = z1. (7.15)

In other words, all complex numbers separated by a multiple of j2π/T map to the same value of z. Note that 2π/T is equal to ωs, the sampling frequency.
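These properties of z = e^{sT} are easy to check numerically; in the Python sketch below, the value of T and the sample points in the s-plane are arbitrary choices.

```python
import cmath
import math

T = 0.5
to_z = lambda s: cmath.exp(s * T)   # the mapping z = e^{sT}

# Left half-plane maps inside the unit circle, right half-plane outside
inside = abs(to_z(-1 + 2j))
outside = abs(to_z(1 + 2j))

# Two values of s separated by j*2*pi/T map to the same z
s1 = -1 + 1j
s2 = s1 + 2j * math.pi / T
gap = abs(to_z(s1) - to_z(s2))
```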

7.1.4 Transform of a sampled signal - General case


We now discuss the general relationship between the continuous-time and discrete-
time transforms of a sampled signal. The transform X(s) is not assumed to be
rational.
Fact - Transform of a sampled signal
For x(t) and xd (k) related by

xd (k) = x(kT ), (7.16)

the corresponding Laplace transform X(s) and z-transform Xd(z) are related by

Xd(z) = [ (1/T) Σ_{n=−∞}^{∞} X(s − n j 2π/T) ]_{s=(1/T) ln(z)}, (7.17)

or

[Xd(z)]_{z=e^{sT}} = (1/T) Σ_{n=−∞}^{∞} X(s − n j 2π/T). (7.18)

The proof of this fact is somewhat complicated and is left to the appendix at
the end of the chapter.
The result involves an infinite sum of Laplace transforms, each shifted from
the original one by a multiple of j2π/T , and the change of variable z = esT .
An important observation is that the shift occurs along the direction of the
imaginary axis by an amount of 2π/T , which is exactly the shift that produces
the same value of z in the transformation z = esT . This property implies that
the transformation (7.18) gives the same result for Xd (z) if values of s separated
by j2π/T are chosen. Hence, the non-invertibility of the transformation z = esT
is not an issue.
We found earlier that, for the continuous-time signal

x(t) = e^{−at} u(t) ⇔ X(s) = 1/(s + a), (7.19)

the z-transform of the sampled signal was

Xd(z) = z/(z − e^{−aT}). (7.20)

In contrast, the general result gives

Xd(z) = [ (1/T) Σ_{n=−∞}^{∞} 1/(s + a − n j 2π/T) ]_{s=(1/T) ln(z)}. (7.21)

Therefore, the series (7.21) must be equal to the analytic form (7.20). However, an analytic form of the infinite series cannot be found, in general.

7.1.5 Transform of a sampled signal in the frequency domain

Consider the Fourier transform (FT) of the continuous-time signal x(t), which we will denote X̄(ω), and the discrete-time Fourier transform (DTFT) of the discrete-time signal xd(k), which we will denote X̄d(Ω), with

X̄(ω) = ∫_{−∞}^{∞} x(t) e^{−jωt} dt (Fourier transform),

X̄d(Ω) = Σ_{k=−∞}^{∞} xd(k) e^{−jΩk} (discrete-time Fourier transform). (7.22)

Because the frequency-domain transforms are defined as bilateral transforms,


while our definitions of Laplace and z-transforms are unilateral, we will assume
that the signals are zero for negative time.
Assuming that the Fourier transforms exist, we have that

X̄(ω) = [X(s)]_{s=jω},
X̄d(Ω) = [Xd(z)]_{z=e^{jΩ}}. (7.23)

The relationships between the s-plane and the ω variable, and between the z-
plane and the Ω variable, are shown in Fig. 7.4.

Im(s) Im(z)


ω

Re(s) Re(z)
1

s-plane z-plane

X(ω) X (Ω)
d

ω −2π −π π 2π Ω

Fourier transform DT Fourier transform

Figure 7.4: From the Laplace and z-transforms to the Fourier and DT Fourier
transforms

In the frequency domain, the transformation z = e^{sT} becomes

e^{jΩ} = z = e^{sT} = e^{jωT}, (7.24)

or simply

Ω = ωT. (7.25)

With (7.25), the formula (7.17) becomes

X̄d(Ω) = [ (1/T) Σ_{n=−∞}^{∞} X̄(ω − n 2π/T) ]_{ω=Ω/T}. (7.26)

This result allows one to calculate the DTFT of the discrete-time signal xd(k), knowing the FT of the continuous-time signal x(t). The transformation in the frequency domain is composed of three steps:

• a rescaling of the frequency axis, so that Ω = ωT. In particular, the sampling frequency ωs = 2π/T is mapped to Ω = 2π. For example, a 100 Hz signal sampled at 1 kHz is mapped to π/5.

• an infinite sum of the transform shifted by multiples of 2π/T in the ω space, or 2π in the Ω space.

• a scaling of the amplitude of the transform by a factor 1/T.

Note that, while the result was derived assuming that the signals were zero for
negative time, (7.26) is valid for arbitrary signals, provided that their Fourier
transforms exist.

7.1.6 Aliasing
Fig. 7.5 shows the transform of a continuous-time signal x(t), with X̄(ω) = 0
for |ω| > ωB . For simplicity of presentation, X̄(ω) is taken to be a real function
of ω, but it is a complex function, in general. Assume that the signal is sampled
with a sampling period T, corresponding to a sampling frequency ωs . The top
figures show the result that is obtained when ωB < ωs /2 = π/T , so that the
discrete-time frequency ΩB = ωB T is less than π. The figures on the bottom
show the result when this condition is not satisfied.
In the first case, only the original copy of the transform contributes to the
discrete-time transform in the −π to π range. In the second case, there is
interference between the original transform and its copies. This situation is
called aliasing. When there is aliasing, a frequency component may be observed
in the discrete-time transform which is the image of a higher frequency in the
original signal.
Fig. 7.6 demonstrates this phenomenon in the time domain. A signal with
frequency ω0 = π/4 (period of 8 seconds) is sampled at ωs = π/3 (period of
6 seconds). In the discrete-time domain, a frequency of Ω = 2π − ω0 T = 2π −
3π/2 = π/2 is obtained, which is the same as would have been obtained with a

Figure 7.5: Transformation of a band-limited transform without aliasing (top) and with aliasing (bottom)

frequency Ω = ω0 T = π/2 or ω0 = π/12 (period of 24 seconds). The two signals


that yield the same samples are shown on the figure. A similar phenomenon was
also observed in Fig. 7.2.

Numerical example: let the sampling frequency be 1000 Hz. 500 Hz is


the maximum frequency that can be sampled without aliasing. A frequency
of 500 Hz maps to π in the discrete-time frequency domain. A frequency of
100 Hz maps to π/5, and 250 Hz maps to π/2. A frequency of 600 Hz maps to
4π/5, which is the same frequency as 400 Hz. Similary, 800 Hz maps to the same
frequency as 200 Hz, and 1000 Hz is the same as a DC signal. Higher frequencies
make confusion worse. For example, 100 Hz is indistinguishable from 900 Hz,
1100 Hz, 1900 Hz, 2100 Hz, ...(in general, a frequency f cannot be distinguished
from nfs ± f , where fs is the sampling frequency and n = 1, 2, ...).
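The numerical example can be verified directly, as in the Python sketch below: at fs = 1000 Hz, the samples of a 600 Hz cosine are indistinguishable from those of a 400 Hz cosine.

```python
import math

fs = 1000.0
T = 1 / fs
# Samples of 600 Hz and 400 Hz cosines taken at fs = 1000 Hz
s600 = [math.cos(2 * math.pi * 600 * k * T) for k in range(50)]
s400 = [math.cos(2 * math.pi * 400 * k * T) for k in range(50)]
# The two sequences are identical: 600 Hz aliases to 400 Hz
```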

Figure 7.6: Two signals yielding the same samples

7.1.7 Avoiding aliasing


When there is aliasing, it is impossible to recover unambiguously a signal from its
samples. In order to avoid such a situation, the sampling frequency must be at least
twice the highest frequency present in the signal to be sampled. The minimum
sampling frequency for a given signal, i.e., ωs = 2ωB , is called the Nyquist rate.
Since real signals are rarely, if ever, bandlimited, one must generally choose a
frequency range of interest and a sampling frequency equal to at least twice
the upper bound. Aliasing is then prevented by use of an anti-aliasing filter.
Such a filter is applied to the signal before discretization. An ideal anti-aliasing
filter is a lowpass filter whose gain is 1 for |ω| < ωs /2 and 0 otherwise. The
phase should be zero for |ω| < ωs /2. Such a filter is not realizable and an
approximation must be implemented. In practice, the bandwidth of the filter
is typically specified slightly lower than half the sampling frequency in order to
account for the transition between the passband and the stopband of
any practical anti-aliasing filter F̄ (ω) (see Fig. 7.7).
If the conditions are satisfied so that aliasing is avoided, the relationship
between the DTFT of the discrete-time signal and the FT of the continuous-
time signal is
X̄d(Ω) = (1/T) [X̄(ω)]_{ω=Ω/T} for −π < Ω ≤ π. (7.27)
The transformation simply amounts to a rescaling of the frequency variable and
of the transform. Shifted copies of the continuous-time transform do not affect
the discrete-time transform. Note that (7.27) is a special case of
[Xd(z)]_{z=e^{sT}} = (1/T) X(s), or Xd(z) = (1/T) [X(s)]_{s=(1/T) ln(z)}. (7.28)

Figure 7.7: Anti-aliasing filter

7.2 Conversion of a discrete-time signal to a continuous-time signal

7.2.1 Definition of reconstruction
We now consider the reconstruction of a continuous-time signal x(t) from a
discrete-time signal xd (k) by

x(t) = xd (k) for t ∈ [kT, (k + 1)T ). (7.29)

The transformation is shown in Fig. 7.8. The operation is the one commonly
performed in digital-to-analog (D/A) converters, and is usually referred to as a
zero-order hold (ZOH). More sophisticated converters exist that interpolate the
values of x(t) between the time instants. A linear interpolator would be called
a first-order hold. However, the zero-order hold is by far the most commonly
used.

Figure 7.8: Conversion of a discrete-time signal to a continuous-time signal



7.2.2 Transform of a reconstructed signal


Again, a natural problem is to relate X(s), the Laplace transform of x(t), to
Xd (z), the z-transform of xd (k). Here, the result is simpler.
Fact - Transform of a reconstructed signal
For x(t) and xd (k) related by (7.29), the Laplace transform X(s) and the z-
transform Xd (z) are related by
X(s) = (1 − e^{−sT})/s · [Xd(z)]_{z=e^{sT}}
     = T · (1 − e^{−sT})/(sT) · [Xd(z)]_{z=e^{sT}}. (7.30)
The proof is not difficult in this case, but is left to the appendix at the end of
the chapter.
Example 1: consider a step xd (k) = u(k). Then, x(t) = u(t) is also a step
(though in continuous-time instead of discrete-time). Applying the formula with
Xd (z) = z/(z − 1) yields X(s) = 1/s, as expected.
Example 2: consider xd (k) = δ(k), the discrete-time impulse. Now x(t) is not
a continuous-time impulse, but a pulse of duration T, i.e., a function equal to 1 for 0 ≤ t < T and zero otherwise (see Fig. 7.9). As expected, Xd(z) = 1
yields X(s) = (1 − e−sT )/s (recall that the Laplace transform associated with a
pure delay T is e−sT ). Note that X(s) is not a rational function of s, although
Xd (z) is a rational function of z. Indeed, the discrete to continuous conversion
does not preserve the rational nature of transforms, in general.

Figure 7.9: Conversion of a discrete-time impulse to continuous-time

7.2.3 Transform of a reconstructed signal in the frequency domain

Again, we define X̄(ω) as the Fourier transform of x(t), and X̄d(Ω) as the discrete-time Fourier transform of xd(k). The transformation z = e^{sT} becomes Ω = ωT, so that the formula (7.30) becomes

X̄(ω) = T · (1 − e^{−jωT})/(jωT) · [X̄d(Ω)]_{Ω=ωT}. (7.31)

The transformation is composed of three steps:

• a rescaling of the frequency axis such that Ω = ωT or ω = Ω/T.

• a filtering of the resulting signal by (1 − e^{−jωT})/(jωT).

• a scaling of the magnitude of the transform by T.

Zero-order hold transfer function

The transfer function

H(s) = (1 − e^{−sT})/(sT) (7.32)

is referred to as the transfer function of the zero-order hold. In the frequency domain, it can be expressed as

H̄(ω) = (1 − e^{−jωT})/(jωT) = e^{−jωT/2} (e^{jωT/2} − e^{−jωT/2})/(jωT)
     = e^{−jωT/2} · 2 sin(ωT/2)/(ωT)
     = e^{−jωT/2} sinc(ωT/2), (7.33)

where sinc(x) ≜ sin(x)/x. The magnitude of the frequency response is |sinc(ωT/2)|. The phase is −ωT/2, plus π when sinc(ωT/2) is negative. The magnitude and phase are shown in Fig. 7.10. Note that, for frequencies below 2π/T, the phase lag is equal to the phase lag produced by a time delay of T/2.
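The magnitude and phase in (7.33) can be checked numerically; in this Python sketch, the sampling rate and test frequency are arbitrary choices, not from the text.

```python
import cmath
import math

def zoh(w, T):
    """Frequency response (1 - e^{-jwT})/(jwT) of the zero-order hold."""
    return 1.0 if w == 0 else (1 - cmath.exp(-1j * w * T)) / (1j * w * T)

T = 1e-3                # fs = 1000 Hz
w = 2 * math.pi * 125   # 125 Hz, so that wT/2 = pi/8
H = zoh(w, T)
mag = abs(H)            # equals sinc(wT/2) = sin(pi/8)/(pi/8)
phase = cmath.phase(H)  # equals -wT/2 = -pi/8 (a delay of T/2 at this frequency)
```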
Example: the effects of the discrete to continuous conversion may be studied
further by considering a discrete-time sinusoid xd (k) = cos(Ω0 k). The transform
is a pair of impulses at ±Ω0 , with magnitude 1/2, plus the copies shifted by all
the multiples of 2π. The resulting transform of the continuous-time signal is
shown on Fig. 7.11. Interestingly, one has that
T ∫_{−∞}^{∞} (1/2) [δ(Ω − Ω0)]_{Ω=ωT} dω = ∫_{−∞}^{∞} (1/2) δ(Ω − Ω0) dΩ, (7.34)

so that the scaling of the axes cancels the factor T in the formula and the
magnitude of the impulses remains 1/2 in continuous-time (except for the slight
reduction in magnitude due to the zero-order hold).

Figure 7.10: Frequency response of a zero-order hold

Figure 7.11: Transform of a sinusoid reconstructed through a zero-order hold



The transform of the discrete-time signal is not that of a continuous-time


sinusoid. Indeed, the signal is shown in Fig. 7.12, and is discontinuous. From
knowledge of Fourier series, one should expect a number of high-frequency com-
ponents associated with the sharp transitions at the sampling times. The two
large components of the transform correspond to the fundamental of the signal,
which has a magnitude slightly less than the original signal, and a phase shift
corresponding to a time delay of T /2. This phase can be anticipated from the
shape of the reconstructed signal. Interestingly, the reconstructed signal is not
periodic, unless the sampling frequency is a multiple of the signal frequency, as
in Fig. 7.12. The visible result on an oscilloscope is that the stepwise component
of the waveform shifts continuously with respect to the fundamental.

Figure 7.12: Sinusoid reconstructed through zero-order hold

Numerical example: let Ω0 = π/4, which yields the signal shown in Fig. 7.12.
Assume that the sampling frequency is fs = 1000 Hz (or T = 1 ms). The
continuous-time signal is of the form

x1 (t) = M1 cos(ω1 t + φ1 ) + M2 cos(ω2 t + φ2 ) + ... (7.35)

The frequency of the first component is ω1 = Ω0/T = π/(4T), or f1 = ω1/(2π) = 1/(8T) = fs/8 or 125 Hz. The magnitude and phase of the first component are

M1 = sin(ω1 T/2)/(ω1 T/2) = sin(π/8)/(π/8) = 0.975,
φ1 = −ω1 T/2 = −π/8 = −22.5°. (7.36)

For the second component, ω2 = 2π/T − Ω0/T = 7π/(8T). The frequency of this component is f2 = ω2/(2π) = 7fs/8 or 875 Hz. The magnitude and phase of the second component are

M2 = sin(ω2 T/2)/(ω2 T/2) = sin(7π/8)/(7π/8) = 0.139,
φ2 = −ω2 T/2 = −7π/8 = −157.5°. (7.37)

Similarly, the third component has frequency 1125 Hz, magnitude 0.108, and
phase −22.5◦ . Overall, one finds that the reconstructed signal has a fundamen-
tal component at the desired frequency, but with a slightly lower magnitude
(2.5% smaller) and a significant phase delay (22.5◦ ). Additional components are
present at higher frequencies with magnitudes of about 10% of the fundamen-
tal. Generally, these are not harmonic frequencies, but rather multiples of the
sampling frequency plus or minus the fundamental signal frequency. Parasitic
effects are reduced if the ratio of the fundamental frequency to the sampling
frequency decreases.
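The numbers in this example follow directly from the zero-order hold response, and can be reproduced with a short Python check (same Ω0 = π/4 and fs = 1000 Hz as in the text):

```python
import math

def sinc(x):
    return math.sin(x) / x if x else 1.0

T = 1e-3               # fs = 1000 Hz
Omega0 = math.pi / 4
w1 = Omega0 / T                  # fundamental: 125 Hz
w2 = 2 * math.pi / T - w1        # first image component: 875 Hz

M1, phi1 = abs(sinc(w1 * T / 2)), math.degrees(-w1 * T / 2)
M2, phi2 = abs(sinc(w2 * T / 2)), math.degrees(-w2 * T / 2)
# M1 ~ 0.975 at -22.5 degrees; M2 ~ 0.139 at -157.5 degrees
```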

7.3 Conversion of a continuous-time system to a discrete-time system

7.3.1 Equivalent discrete-time system
The need to find a discrete-time equivalent to a continuous-time system arises
in control applications, such as shown in Fig. 7.13. A plant is controlled by
a computer-based system, such that the control input x(t) is generated by a
D/A converter, the plant output y(t) is sampled by an A/D converter, and
the discrete-time control input is calculated by a computer (microprocessor, or
other). Two approaches are possible: the first consists in designing a continuous-
time control law for the plant, and then finding a discrete-time equivalent for
implementation. This approach is discussed later. The second approach consists
in finding a discrete-time equivalent to the plant, that is, a description for the
transformation from xd (k) to yd (k). This approach is discussed now.
An interesting result is that the system from xd (k) to yd (k) is a linear time-
invariant system. Its transfer function is such that the step response of the
discrete-time system matches the samples of the step response of the continuous-
time system (see Fig. 7.14). Surprisingly, the result holds true without the
assumption of ideal anti-aliasing or post-sampling filters (although these filters
are nevertheless useful in practice).
Example: consider a first-order system

P(s) = 1/(s + 1). (7.38)

Figure 7.13: Digital control application

Figure 7.14: Step response matching

If a discrete-time step input xd(k) is applied to the D/A, the result is a continuous-time step input x(t) applied to the plant. The step response is

Y(s) = 1/((s + 1)s) = 1/s − 1/(s + 1) ⇔ y(t) = 1 − e^{−t}. (7.39)

The sampled output of the step response is

yd(k) = 1 − e^{−kT} ⇔ Yd(z) = z/(z − 1) − z/(z − e^{−T}) = z(1 − e^{−T})/((z − 1)(z − e^{−T})). (7.40)

On the other hand, the step response of the equivalent discrete-time system is

Yd(z) = Pd(z) · z/(z − 1). (7.41)

We conclude that

Pd(z) = (1 − e^{−T})/(z − e^{−T}). (7.42)
Pd (z) is usually called the step response equivalent or zero-order hold equivalent
of P (s).
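The matching can be verified by simulating the difference equation implied by Pd(z) (a Python sketch; the value of T is arbitrary): Pd(z) = (1 − e^{−T})/(z − e^{−T}) corresponds to y(k+1) = e^{−T} y(k) + (1 − e^{−T}) x(k), whose step response reproduces the samples 1 − e^{−kT} exactly.

```python
import math

T = 0.2
p = math.exp(-T)   # discrete-time pole of Pd(z)

y = 0.0
for k in range(20):
    # discrete step response equals the sampled continuous step response
    assert abs(y - (1 - math.exp(-k * T))) < 1e-12
    y = p * y + (1 - p) * 1.0   # step input x(k) = 1
```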
Although only the step response of the discrete-time system was shown to
match the step response of the sampled-data system, it is not hard to show that
the responses of both systems are identical for any input signal xd (k). Indeed,
any xd (k) may be viewed as the superposition of shifted step signals, and linear
time-invariance implies the result.
The procedure to obtain Pd (z) can also be extended to arbitrary linear sys-
tems with rational transforms, and is similar to the procedure associated with
the discretization of a signal with rational transform. The notation becomes
complicated for repeated poles, so we assume that P (s) = N (s)/D(s) is rational
and strictly proper, has non-repeated poles, and has no pole at s = 0. The step
response of the continuous-time system is given by

Y(s) = P(s) (1/s) = P(0)/s + Σ_{i=1}^{n} c_i/(s − p_i) ⇔ y(t) = P(0) + Σ_{i=1}^{n} c_i e^{p_i t}, (7.43)

where p_i are the poles of the transfer function P(s) and c_i are the coefficients of the partial fraction expansion, with

c_i = [P(s) (s − p_i)/s]_{s=p_i}. (7.44)

If we sample this signal every T seconds, the resulting discrete-time signal is given by

yd(k) = P(0) + Σ_{i=1}^{n} c_i e^{p_i kT} ⇔ Yd(z) = P(0) z/(z − 1) + Σ_{i=1}^{n} c_i z/(z − e^{p_i T}). (7.45)

Using (7.41), the transfer function of the discrete-time system is given by

Pd(z) = P(0) + (z − 1) Σ_{i=1}^{n} c_i/(z − e^{p_i T}). (7.46)

The poles p_i of the transfer function P(s) are mapped to poles e^{p_i T} of Pd(z). Therefore,

P(s) is rational of order n ⇒ Pd(z) is rational of order at most n,
P(s) is asymptotically stable ⇒ Pd(z) is asymptotically stable. (7.47)

While the poles are mapped through z = esT , the zeros are not necessarily
mapped in the same manner. Therefore, pole/zero cancellations may cause
Pd (z) to have fewer poles than P (s), and the order of the transfer function may
be reduced. The frequency responses of the continuous-time and discrete-time
systems are also not obviously related. However, Pd (1) = P (0), so that the DC
gains of the two systems are identical. For T sufficiently small, one can also
show that

[Pd(z)]_{z=e^{sT}} ≃ P(s) for |sT| ≪ 1, (7.48)

so that the step response equivalent and the transformation z = esT give the
same result for low frequencies.

7.3.2 Discrete-time controller for continuous-time plant


Given a discrete-time equivalent Pd (z) to the continuous-time plant P (s), Fig. 7.15
shows how a discrete-time control algorithm Cd (z) can be analyzed in the z-
domain. Root-locus methods may be used, for example, to place the poles of
the discrete-time system. Desirable locations must be considered in the z-plane,
instead of the s-plane (see Fig. 6.22).

Figure 7.15: Direct design of digital controllers in the z-domain

For example, consider the continuous-time system

P(s) = 1/(s + 1), (7.49)
with C(s) = g. The continuous-time controller gives a stable closed-loop system
for all g (the root-locus shows a pole moving along the real axis in the negative
direction).
The equivalent discrete-time system is

Pd(z) = (1 − e^{−T})/(z − e^{−T}). (7.50)

A discrete-time controller Cd(z) = g gives a closed-loop pole at

z = e^{−T} − g(1 − e^{−T}). (7.51)

As in continuous-time, the pole moves along the real axis in the negative direc-
tion. The pole becomes unstable when it reaches z = −1 for

gmax = (1 + e^{−T})/(1 − e^{−T}). (7.52)
In fact, there is no benefit in pushing the pole further than z = 0, so that a
practical limit for the gain is

g0 = e^{−T}/(1 − e^{−T}). (7.53)

The gain g0 results in the closed-loop transfer function

PCL(z) = e^{−T}/z, (7.54)

which is a one-step delay with a gain e^{−T}. Typically, such a response (called a deadbeat response) requires large input signals and is sensitive to noise and unmodelled dynamics, so that the feedback gain will be set much below gmax.
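These gain values can be checked numerically (a Python sketch; the value of T is an arbitrary choice):

```python
import math

T = 0.5
# closed-loop pole z = e^{-T} - g(1 - e^{-T}) from (7.51)
pole = lambda g: math.exp(-T) - g * (1 - math.exp(-T))

g_max = (1 + math.exp(-T)) / (1 - math.exp(-T))
g_0 = math.exp(-T) / (1 - math.exp(-T))

at_limit = pole(g_max)   # -1: the stability boundary
deadbeat = pole(g_0)     # 0: deadbeat response (one-step delay)
```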

7.4 Conversion of a discrete-time system to a continuous-time system

7.4.1 Equivalent continuous-time system
We now turn to the problem of converting a discrete-time system to a continuous-
time system. This problem arises, in particular, in the digital filtering applica-
tion shown in Fig. 7.16. The objective is to filter the signal x(t) to remove
some undesirable frequency components. Instead of implementing a continuous-
time filter, a digital processing system is used so that the signal x(t) is sampled
through an A/D converter, processed by a computing device, and transformed
back to a continuous-time signal y(t) through a D/A converter. Another situa-
tion is where a compensator is designed in continuous-time, and a discrete-time
implementation is sought. The computing element operates in discrete-time,
and implements a linear time-invariant system with transfer function Fd (z). An
important question is: what is the relationship between the transfer function
Fd (z) and the transformation from x(t) to y(t)?

Figure 7.16: Digital filtering application

As opposed to the (reverse) problem discussed in the previous section, the


system shown in Fig. 7.16 is generally not time-invariant, even if the discrete-
time system is. In particular the response of the system depends on how the
sampling instants are synchronized with the input signal (consider for example
the effect of a delay of sampling times in Fig. 7.2). Using previous results, we
have, in general,

Xd(z) = [ (1/T) Σ_{k=−∞}^{∞} X(s − k j 2π/T) ]_{s=(1/T) ln(z)}, (7.55)

so that

Yd(z) = Fd(z) [ (1/T) Σ_{k=−∞}^{∞} X(s − k j 2π/T) ]_{s=(1/T) ln(z)}, (7.56)

and

Y(s) = T · (1 − e^{−sT})/(sT) · [Yd(z)]_{z=e^{sT}}
     = (1 − e^{−sT})/(sT) · [Fd(z)]_{z=e^{sT}} Σ_{k=−∞}^{∞} X(s − k j 2π/T). (7.57)

This transformation relates the Laplace transforms of the input and output of
the system. Unfortunately, the transformation cannot be put into the form
Y (s) = F (s)X(s) due to the change of variables z = esT and due to the infinite
sum.
If the aliasing effects can be neglected, only the term k = 0 is retained, and we have the approximation

Y(s) = (1 − e^{−sT})/(sT) · [Fd(z)]_{z=e^{sT}} X(s). (7.58)

Thus, the transformation can be approximately represented by a transfer function F(s), with

F(s) = (1 − e^{−sT})/(sT) · [Fd(z)]_{z=e^{sT}} (aliasing effects neglected). (7.59)

Further, if the zero-order hold effects are also neglected,

F(s) = [Fd(z)]_{z=e^{sT}} (aliasing and ZOH effects neglected). (7.60)

Even with these assumptions, a rational transfer function Fd(z) does not yield a rational transfer function F(s).

7.4.2 Equivalent system in the frequency domain

In the frequency domain, the corresponding relationships are

Ȳ(ω) = (1 − e^{−jωT})/(jωT) · [F̄d(Ω)]_{Ω=ωT} Σ_{k=−∞}^{∞} X̄(ω − k 2π/T). (7.61)

If antialiasing conditions are satisfied, i.e.,

X̄(ω) = 0 for |ω| > π/T, (7.62)

and if zero-order hold effects are also neglected, (7.61) reduces to

Ȳ(ω) = [F̄d(Ω)]_{Ω=ωT} X̄(ω) for |ω| < π/T,
     = 0 otherwise. (7.63)

In other words, the overall system with ideal anti-aliasing and post-sampling filters is equivalent to a continuous-time filter

F̄(ω) = [F̄d(Ω)]_{Ω=ωT} for |ω| < π/T. (7.64)

(7.64) is the equivalent, for Fourier transforms, of (7.60). Under these assumptions, the equivalence between the discrete-time filter and the continuous-time filter is represented on Fig. 7.17 for a discrete-time low-pass filter of bandwidth ΩB.

7.4.3 Delay of a low-pass filter

In continuous-time, the delay of a low-pass filter was computed using formulas (5.27) and (5.29). Equivalent formulas can be derived for a discrete-time filter as well. In discrete-time, the low frequency behavior of a transfer function is described by the value of F(z) in the vicinity of z = 1. The formula equivalent to (5.27) in discrete-time is

nd = lim_{Δz→0} [F(1) − F(1 + Δz)] / [Δz F(1)], (7.65)

Figure 7.17: Discrete-time filter and equivalent continuous-time filter

where nd is the low-frequency delay measured in samples. nd is not necessarily an integer. For example, consider

F(z) = b / (z^2 (z − a)). (7.66)

The formula gives

nd = lim_{Δz→0} [b(1 + Δz)^2 (1 + Δz − a) − b(1 − a)] / [Δz b(1 + Δz)^2 (1 + Δz − a)] = (3 − 2a)/(1 − a), (7.67)

or

nd = 2 + 1/(1 − a). (7.68)
It can be checked that the first term is the delay computed for 1/z 2 alone, while
the second term is the delay resulting from the low-pass filter 1/(z −a). 1/(1−a)
can be much larger than 2 if a is close to 1 (for example, a = 0.99 gives 100).
In general, for a rational transfer function

F(z) = (b_{n−1} z^{n−1} + · · · + b_1 z + b_0) / (z^n + a_{n−1} z^{n−1} + · · · + a_1 z + a_0), (7.69)

the time delay is

nd = (n + (n − 1)a_{n−1} + · · · + 2a_2 + a_1) / (1 + a_{n−1} + · · · + a_1 + a_0) − ((n − 1)b_{n−1} + · · · + 2b_2 + b_1) / (b_{n−1} + · · · + b_1 + b_0). (7.70)

This formula is the equivalent of (5.29) obtained in continuous-time.
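As a quick numerical check (a sketch, not from the text; the function name is ours), formula (7.70) can be evaluated for the example (7.66) with a = 0.99, where (7.68) predicts nd = 102 samples:

```python
# Numerical check of the delay formula (7.70) for
# F(z) = (b_{n-1} z^{n-1} + ... + b0)/(z^n + a_{n-1} z^{n-1} + ... + a0).
# Coefficient lists are ordered from index 0 upward: a = [a0, ..., a_{n-1}].
def delay_in_samples(b, a):
    n = len(a)
    num_a = n + sum(k * a[k] for k in range(1, n))
    den_a = 1 + sum(a)
    num_b = sum(k * b[k] for k in range(1, n))
    den_b = sum(b)
    return num_a / den_a - num_b / den_b

# Example (7.66) with a = 0.99: F(z) = b/(z^2 (z - a)) = b/(z^3 - a z^2),
# so the a-list is [0, 0, -0.99] and the b-list is [1, 0, 0].
# (7.68) predicts nd = 2 + 1/(1 - 0.99) = 102 samples.
nd = delay_in_samples([1.0, 0.0, 0.0], [0.0, 0.0, -0.99])
```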


If the discrete-time filter is implemented as in Fig. 7.16, the time delay (in seconds) associated with the filter may increase by T/2 due to the zero-order hold, and by another T due to the fact that the output computed by the discrete-time filter is normally applied only at the next time instant. If the filter is inserted in a feedback system, the delay margin must be sufficient to accommodate the filter's delay.

7.5 Discrete-time design to approximate a continuous-time system
This problem is, in some ways, the combination of the two previous problems. Starting from a continuous-time system Fc(s) (which, in a control application, would be a compensator C(s)), find Fd(z) such that the continuous-time system F(s) corresponding to Fd(z) in Fig. 7.16 approximates Fc(s), i.e., F(s) ≃ Fc(s). A simple answer would be to choose Fd(z) such that [Fd(z)]_{z=e^{sT}} = Fc(s), since the transformation is known to give F(s) = Fc(s) (if aliasing and zero-order hold effects are neglected). However, the transformation z = e^{sT} does not preserve the rational nature of a transfer function, so that implementation of the discrete-time transfer function as a difference equation would not be possible, even if the continuous-time transfer function was rational. Therefore, further approximations are commonly used.
Euler approximation
The Euler approximation consists in approximating

dx/dt ≃ (x(t + T) − x(t))/T   for T small,   (7.71)

which is equivalent to the following transformation

s = (z − 1)/T   or   z = 1 + sT.   (7.72)

For example, the PID controller (4.35) becomes

Cd(z) = kP + kI T/(z − 1) + kD a(z − 1)/(z − 1 + aT).   (7.73)
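As an illustration, the discrete-time PID (7.73) can be realized directly as difference equations. The sketch below is ours (the gain values are hypothetical), obtained by converting each term of (7.73) to a recursion:

```python
# Realization of the Euler-discretized PID (7.73) as difference equations:
#   integrator state: i(k) = i(k-1) + T e(k-1)                (from kI T/(z-1))
#   derivative state: d(k) = (1 - aT) d(k-1) + a (e(k) - e(k-1))
#                                                     (from kD a(z-1)/(z-1+aT))
#   output:           u(k) = kP e(k) + kI i(k) + kD d(k)
def make_pid(kP, kI, kD, a, T):
    state = {"i": 0.0, "d": 0.0, "e_prev": 0.0}
    def step(e):
        state["i"] += T * state["e_prev"]
        state["d"] = (1 - a * T) * state["d"] + a * (e - state["e_prev"])
        state["e_prev"] = e
        return kP * e + kI * state["i"] + kD * state["d"]
    return step

pid = make_pid(kP=2.0, kI=1.0, kD=0.5, a=10.0, T=0.01)
u0 = pid(1.0)   # first sample of the response to a unit error: kP + kD*a = 7
```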

Note that this transformation is an approximation of z = e^{sT} for T small. It preserves the rational nature and order of the transfer function, with

Fd(z) = [Fc(s)]_{s=(z−1)/T}.   (7.74)

However, stability may not be preserved. Continuous-time poles must be located inside the circle of radius 1/T centered at −1/T for the discrete-time system to be stable. Nevertheless, for T sufficiently small, stability will be obtained. This simple method is often adequate, but requires caution.
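The stability condition can be checked directly: the Euler map z = 1 + sT sends a continuous-time pole p to 1 + pT, and the example below (pole and sampling times are our hypothetical choices) shows a stable pole whose discretization becomes unstable for T too large:

```python
# The Euler map z = 1 + sT sends a continuous-time pole p to 1 + pT.
# Stability of the discrete-time system requires |1 + pT| < 1, i.e. p must
# lie inside the circle of radius 1/T centered at -1/T.
def euler_pole(p, T):
    return 1 + p * T

p = -10.0                              # stable continuous-time pole
stable = abs(euler_pole(p, 0.05))      # |1 - 0.5| = 0.5 < 1: stable
unstable = abs(euler_pole(p, 0.25))    # |1 - 2.5| = 1.5 > 1: unstable
```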
Bilinear transformation (Tustin’s method)
A better rational approximation of the transformation z = e^{sT} is the bilinear transformation

z = e^{sT} = e^{sT/2}/e^{−sT/2} ≃ (1 + sT/2)/(1 − sT/2).   (7.75)

The transformation is invertible, since

z(1 − sT/2) = 1 + sT/2   ⇒   s = (2/T) · (z − 1)/(z + 1).   (7.76)

Given Fc(s) with desirable frequency-domain properties, one lets

Fd(z) = [Fc(s)]_{s=(2/T)(z−1)/(z+1)}.   (7.77)

It is not difficult to verify that

Fc(s) is rational of order n ⇔ Fd(z) is rational of order n.   (7.78)

Further, it turns out that the stability regions of the s and z planes are mapped exactly to each other, that is,

Re(s) < 0 ⇔ |z| < 1.   (7.79)

Therefore,

Fc(s) stable ⇔ Fd(z) stable.   (7.80)

From the properties of the conversion from discrete-time system to continuous-time system, we also know that

Fd(z) stable ⇔ F(s) stable.   (7.81)
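The stability-preservation property of the bilinear map (7.75) can be verified numerically: the open left half-plane maps into the open unit disk, and the imaginary axis onto the unit circle. The test points below are arbitrary choices of ours:

```python
# Bilinear map z = (1 + sT/2)/(1 - sT/2): it sends the open left half-plane
# into the open unit disk and the imaginary axis onto the unit circle, so
# stability is preserved in both directions.
def bilinear(s, T=0.1):
    return (1 + s * T / 2) / (1 - s * T / 2)

inside = abs(bilinear(-1 + 2j))     # stable pole -> inside the unit circle
outside = abs(bilinear(0.5 + 3j))   # unstable pole -> outside the unit circle
on_circle = abs(bilinear(5j))       # jw-axis point -> on the unit circle
```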

Impulse response matching

The approach consists in matching the impulse response of the discrete-time system fd(k) to the impulse response of the continuous-time system fc(t). Recalling the results regarding the conversion of a continuous-time signal to a discrete-time signal, we have that

Fc(s) = N(s)/D(s) = Σ_{i=1}^{n} ci/(s − pi)   ⇔   Fd(z) = Σ_{i=1}^{n} ci z/(z − e^{pi T}),   (7.82)

where pi are the poles of Fc(s) and ci are the coefficients of the partial fraction expansion (for simplicity, it is assumed that Fc(s) = N(s)/D(s) is rational with non-repeated poles).

Using the impulse response matching method, the poles pi of the transfer function Fc(s) are mapped to poles e^{pi T} of Fd(z), and

Fc(s) is rational of order n ⇒ Fd(z) is rational of order n
Fc(s) is asymptotically stable ⇒ Fd(z) is asymptotically stable.   (7.83)

The zeros are mapped in a complicated way. In this case, however, the frequency responses of Fd(z) and Fc(s) can be related, provided the frequency response F̄c(ω) = 0 for |ω| > π/T, i.e., that fc(t) is a bandlimited signal sampled at twice its highest frequency. In that case, the results regarding the sampling of a continuous-time signal (7.27) indicate that

F̄d(Ω) = (1/T) F̄c(ω)|_{ω=Ω/T}   (7.84)

for |ω| < π/T, which is the desired result, except for the factor of 1/T. For this reason, the impulse response procedure requires a slight modification of the formula (7.82), so that

Fc(s) = N(s)/D(s) = Σ_{i=1}^{n} ci/(s − pi)
   ⇔   Fd(z) = T Σ_{i=1}^{n} ci z/(z − e^{pi T})   (impulse response equivalent).   (7.85)
This approach is such that

fd(k) = T fc(kT),   (7.86)

where fd(k) is the impulse response of the discrete-time system, and fc(t) is the impulse response of the desired continuous-time system. One way to justify the factor T is to remark that a discrete-time impulse has an equivalent area T, while a continuous-time impulse has an area equal to 1.
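Property (7.86) can be verified numerically for Fc(s) = 1/(s + 1), whose impulse-response equivalent per (7.85) is Fd(z) = Tz/(z − e^{−T}). The sketch below (our choice of T) runs the corresponding difference equation:

```python
import math

# Impulse-response-equivalent discretization (7.85) of Fc(s) = 1/(s+1):
# Fd(z) = T z/(z - e^{-T}), realized by y(k) = e^{-T} y(k-1) + T x(k).
# Its impulse response should be fd(k) = T e^{-kT} = T fc(kT), as in (7.86).
T = 0.1
a = math.exp(-T)

y, fd = 0.0, []
for k in range(5):
    x = 1.0 if k == 0 else 0.0   # discrete-time impulse
    y = a * y + T * x
    fd.append(y)
```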
Step response matching

The step response matching method is the same as the step response equivalent method that was used for the conversion from continuous-time to discrete-time system. The properties of the resulting transfer function are similar to those of the impulse response matching method, but the transfer functions are not exactly the same. For example, the step response method was shown to yield

Fc(s) = 1/(s + 1)   ⇒   Fd(z) = (1 − e^{−T})/(z − e^{−T}),   (7.87)

while the impulse response method gives

Fc(s) = 1/(s + 1)   ⇒   Fd(z) = T z/(z − e^{−T}).   (7.88)

The poles of the transfer functions are identical, and the low-frequency behavior (z close to 1) of both transfer functions is similar, but the transfer functions are different. Worth noting is the fact that the impulse response resulting from the step response matching method is delayed by one sample compared to the impulse response obtained with the impulse response method.
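The step-response matching property can be spot-checked for (7.87): driven by a unit step, Fd(z) = (1 − e^{−T})/(z − e^{−T}) should return yd(k) = 1 − e^{−kT}, the sampled step response of 1/(s + 1). The sampling time below is an arbitrary choice:

```python
import math

# Step-response-equivalent discretization (7.87) of Fc(s) = 1/(s+1):
# Fd(z) = (1 - e^{-T})/(z - e^{-T}), i.e. y(k) = a y(k-1) + (1-a) x(k-1)
# with a = e^{-T}. Its step response should equal y(kT) = 1 - e^{-kT}.
T = 0.5
a = math.exp(-T)

y = 0.0
samples = []
for k in range(1, 6):
    y = a * y + (1 - a)          # x(k-1) = 1 for the unit step
    samples.append(y)            # samples[k-1] = 1 - e^{-kT}
```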

7.5.1 Sampled-data control design

The presentation of this section leads to a method for designing sampled-data control systems that is conceptually opposite to the one presented in section 7.3. The method of section 7.3 converted the continuous-time plant into an equivalent discrete-time plant, so that a discrete-time controller could be designed. The method of this section suggests designing the controller in continuous-time, followed by an approximate implementation of the controller in discrete-time. The advantage of the first method is that implementation can be achieved with a direct realization of the controller in computer code. Some special algorithms, such as those based on FIR filters, are also specific to discrete-time. On the other hand, continuous-time design deals better with issues in the frequency domain, including robustness, and with physical systems having nonlinear dynamics.

In the early days of digital control, it appeared that discrete-time design would make continuous-time design obsolete. However, the high sampling rates and small quantization levels of modern systems have made it possible to implement continuous-time control systems with minimal sampling effects, even with crude Euler approximations. Thus, continuous-time design remains a common methodology despite the ultimate discrete-time implementation. Nevertheless, it remains important for the control engineer to have a good grasp of discrete-time and sampled-data issues, as they may significantly affect performance in some cases.

7.6 Appendix

7.6.1 Proof for the conversion from continuous-time to discrete-time signal
We first establish two facts. For an arbitrary function f(t),

f(t) = ∫_{−∞}^{∞} f(t) δ(t − λ) dλ = ∫_{−∞}^{∞} f(λ) δ(t − λ) dλ,   (7.89)

where δ(t) is the delta function. Next, we note that the function

p(t) = Σ_{k=−∞}^{∞} δ(t − kT)   (7.90)

satisfies the following equality

p(t) = (1/T) Σ_{k=−∞}^{∞} e^{jk(2π/T)t}.   (7.91)

Indeed, p(t) is a periodic function, with period T. Its Fourier series is

p(t) = Σ_{k=−∞}^{∞} ck e^{jk(2π/T)t},   (7.92)

with

ck = (1/T) ∫_{−T/2}^{T/2} p(t) e^{−jk(2π/T)t} dt = (1/T) ∫_{−T/2}^{T/2} δ(t) e^{−jk(2π/T)t} dt = 1/T.   (7.93)

Therefore, (7.91) is satisfied.


With these preliminaries, we now prove the result. The z-transform of the discrete-time signal is

Xd(z) = Σ_{k=0}^{∞} xd(k) z^{−k} = Σ_{k=−∞}^{∞} x(kT) u(kT) z^{−k},   (7.94)

where u(t) is a continuous-time step signal. Since

∫_{−∞}^{∞} δ(t − kT) dt = 1,   (7.95)

(7.94) can be written as

Xd(z) = Σ_{k=−∞}^{∞} x(kT) u(kT) z^{−k} ∫_{−∞}^{∞} δ(t − kT) dt   (7.96)

and, since δ(t − kT) = 0 unless t = kT,

Xd(z) = Σ_{k=−∞}^{∞} ∫_{−∞}^{∞} x(t) u(t) z^{−t/T} δ(t − kT) dt.   (7.97)

Permuting the order of integration and summation, and removing the step function in the expression by an adjustment of the integration range,

Xd(z) = ∫_{−∞}^{∞} x(t) u(t) z^{−t/T} Σ_{k=−∞}^{∞} δ(t − kT) dt
      = ∫_{0}^{∞} x(t) z^{−t/T} Σ_{k=−∞}^{∞} δ(t − kT) dt.   (7.98)

Next, using the preliminary fact (7.91),

Xd(z) = ∫_{0}^{∞} x(t) z^{−t/T} (1/T) Σ_{k=−∞}^{∞} e^{jk(2π/T)t} dt.   (7.99)

Permuting again the order of integration and summation,

Xd(z) = (1/T) Σ_{k=−∞}^{∞} ∫_{0}^{∞} x(t) z^{−t/T} e^{jk(2π/T)t} dt,   (7.100)

one finds that

[Xd(z)]_{z=e^{sT}} = (1/T) Σ_{k=−∞}^{∞} ∫_{0}^{∞} x(t) e^{−(s−jk(2π/T))t} dt
                   = (1/T) Σ_{k=−∞}^{∞} X(s − jk(2π/T)),   (7.101)

which establishes the result.
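The identity (7.101) can be spot-checked numerically. The sketch below uses an example signal of our choosing, x(t) = t e^{−t} u(t) (continuous at t = 0, so the truncated symmetric sum converges cleanly), with X(s) = 1/(s+1)², xd(k) = kT e^{−kT}, and hence Xd(z) = T a z/(z − a)² with a = e^{−T}:

```python
import cmath, math

# Spot-check of (7.101): [Xd(z)] at z = e^{sT} equals (1/T) sum_k X(s - jk 2pi/T).
# Example signal: x(t) = t e^{-t} u(t), so X(s) = 1/(s+1)^2 and
# xd(k) = kT e^{-kT}, whose z-transform is Xd(z) = T a z/(z - a)^2, a = e^{-T}.
T, s = 0.5, 1.0
X = lambda sv: 1 / (sv + 1) ** 2

a, z = math.exp(-T), cmath.exp(s * T)
lhs = T * a * z / (z - a) ** 2

K = 100000   # truncation of the infinite sum; terms decay like 1/k^2
rhs = sum(X(s - 1j * 2 * math.pi * k / T) for k in range(-K, K + 1)) / T
```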

7.6.2 Proof for the conversion from discrete-time to continuous-time signal
The continuous-time signal can be written as

x(t) = Σ_{k=0}^{∞} xd(k) (u(t − kT) − u(t − (k+1)T)),   (7.102)

where u(t) is the continuous-time step function. Applying the Laplace transform to both sides,

X(s) = ∫_{0}^{∞} Σ_{k=0}^{∞} xd(k) (u(t − kT) − u(t − (k+1)T)) e^{−st} dt.   (7.103)

Permuting integration and summation,

X(s) = Σ_{k=0}^{∞} xd(k) ∫_{0}^{∞} (u(t − kT) − u(t − (k+1)T)) e^{−st} dt.   (7.104)

Using the right shift formula of the Laplace transform and the expression for the transform of a step function,

X(s) = Σ_{k=0}^{∞} xd(k) (e^{−skT} − e^{−s(k+1)T})/s.   (7.105)

Therefore, we have that

X(s) = Σ_{k=0}^{∞} xd(k) (e^{sT})^{−k} (1 − e^{−sT})/s
     = ((1 − e^{−sT})/s) [Xd(z)]_{z=e^{sT}},   (7.106)

which is the desired result.
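As a consistency check of (7.106), the truncated sum (7.105) can be compared with the closed form for xd(k) = a^k, for which Xd(z) = z/(z − a). The parameter values below are arbitrary choices:

```python
import math

# Consistency check of (7.106): for xd(k) = a^k, Xd(z) = z/(z - a), and the
# zero-order-hold signal has X(s) = (1 - e^{-sT})/s * [Xd(z)] at z = e^{sT}.
# 'direct' evaluates the sum (7.105) term by term (it converges geometrically).
a, T, s = 0.8, 0.1, 2.0

direct = sum(a**k * (math.exp(-s * k * T) - math.exp(-s * (k + 1) * T)) / s
             for k in range(2000))

z = math.exp(s * T)
closed = (1 - math.exp(-s * T)) / s * z / (z - a)
```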

7.7 Problems

Problem 7.1: (a) Consider the continuous-time system

H(s) = 1/(s(s + 1)).   (7.107)

Find the discrete-time system Hd(z) whose step response yd(k) is such that yd(k) = y(kT), where y(t) is the step response of H(s) and T is some arbitrary sampling time.

(b) Repeat part (a) for

H(s) = 1/(s² + 1).   (7.108)

Explain what happens when T = 2π.

Problem 7.2: Consider the signal

x(t) = 1 + cos(20πt) + sin(60πt).   (7.109)

Give the lowest sampling frequency fs (in Hz) such that no aliasing occurs when the signal is discretized.

Problem 7.3: (a) The signal

xd(k) = cos(πk/2)   (7.110)

is sent to a D/A at a frequency of 1 kHz. Sketch the output waveform x(t), making sure to label the time axis precisely.

(b) Using the discrete-time to continuous-time conversion results, sketch the magnitude of the Fourier transform of x(t) in part (a). Give the frequencies (in Hz) and the magnitudes of the first three sinusoidal components, as well as the phase of the first component. Compare the results for the first component to those obtained by computing the coefficients of a Fourier series.

Problem 7.4: Let x(t) be obtained from xd (k) = k through a zero-order hold.
Find X(s) from Xd (z). Compare the result to the one obtained by computing the
Laplace transform directly from x(t) (note that x(t) is the sum of step functions
delayed by multiples of T ).

Problem 7.5: A signal x(t) with transform

X(s) = 1/(s(s + 1)²)   (7.111)

is sampled at time instants t = kT to obtain xd(k). Find the transform Xd(z) of the resulting signal and obtain the poles.

Problem 7.6: Consider the continuous-time system

H(s) = 2/((s + 1)(s + 2)).   (7.112)

Find the discrete-time system Hd(z) whose step response yd(k) is such that yd(k) = y(kT), where y(t) is the step response of H(s) and T is some arbitrary sampling time. Compare the DC gains of H(s) and Hd(z).
