Foundations of Control Engineering
Marc Bodson
Copyright © 2020 by Marc Bodson. All rights reserved.
ISBN: 9781705847466. Independently published.
Electronic file produced on January 20, 2020.
No parts of this book may be offered for sale, included in other works, or made
available electronically without written permission from the copyright holder.
Disclaimer: this work is published with the understanding that the author is
supplying information but is not attempting to provide professional services.
Reasonable efforts have been made to ensure the correctness of the information.
However, no representation, express or implied, is made with regard to the
accuracy or completeness of the information, and the author cannot accept legal
responsibility or liability for any damages arising out of its use.
Front cover: space shuttle robotic arm (see p. 38, credit: NASA).
Back cover: Watt’s governor (see p. 2, photo by the author), X-29 (see p. 3,
credit: NASA), closed-loop frequency response (see p. 158, graphic by the
author).
Preface
The book presents the core theory of control engineering, together with its foun-
dations in signals and systems. These foundations include continuous-time sys-
tems using the Laplace transform, discrete-time systems using the z-transform,
and sampled-data systems connecting the two domains. The classical theory
of control covers the analysis of the dynamic response of linear time-invariant
systems, root-locus techniques for feedback design, and the frequency-domain
analysis of closed-loop systems. Control engineering is strongly related to signal
processing and communications, and the book includes a discussion of phase-
locked loops as an example of feedback control. To the extent possible, the
origin of the theoretical results is explained, and the technical details needed
to reach a more complete understanding of the concepts are included. On the
other hand, the book does not present design studies or specialized topics, for
which the reader is referred to the bibliography. Material complementing the
book is available through the author’s web page, including solutions to selected
problems and virtual lab experiments.
About the author
Contents
2 Continuous-time signals 5
2.1 The Laplace transform . . . . . . . . . . . . . . . . . . . . . . . 5
2.1.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.1.2 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.1.3 Relationship between pole locations and signal shapes . . . 6
2.1.4 Properties of the Laplace transform . . . . . . . . . . . . 8
2.2 Inverse of Laplace transforms using partial fraction expansions . 9
2.2.1 General form of a partial fraction expansion . . . . . . . 9
2.2.2 Determination of the coefficients . . . . . . . . . . . . . . 9
2.2.3 Grouping complex terms . . . . . . . . . . . . . . . . . . 10
2.2.4 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.2.5 Non-strictly-proper transforms . . . . . . . . . . . . . . . 17
2.3 Properties of signals . . . . . . . . . . . . . . . . . . . . . . . . 17
2.3.1 Existence of terms in the partial fraction expansion . . . 17
2.3.2 Boundedness and convergence of signals . . . . . . . . . 19
2.3.3 Non-strictly-proper transforms . . . . . . . . . . . . . . . 20
2.4 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
3 Continuous-time systems 23
3.1 Transfer functions and interconnected systems . . . . . . . . . . 23
3.1.1 Transfer function of a system . . . . . . . . . . . . . . . 23
3.1.2 Cascade systems . . . . . . . . . . . . . . . . . . . . . . 25
3.1.3 Parallel systems . . . . . . . . . . . . . . . . . . . . . . . 26
3.1.4 Feedback system . . . . . . . . . . . . . . . . . . . . . . 26
3.1.5 Block reduction method . . . . . . . . . . . . . . . . . . 27
3.1.6 General interconnected systems . . . . . . . . . . . . . . 30
3.2 Stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3.2.1 Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3.2.2 Properties . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3.2.3 Non-proper transfer functions . . . . . . . . . . . . . . . 33
3.3 Responses to step inputs . . . . . . . . . . . . . . . . . . . . . . 33
3.3.1 General characteristics of step responses . . . . . . . . . 33
3.3.2 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . 34
3.3.3 Effect of poles and zeros on step responses . . . . . . . . 38
3.4 Responses to sinusoidal inputs . . . . . . . . . . . . . . . . . . . 39
3.4.1 Definition and example . . . . . . . . . . . . . . . . . . . 39
3.4.2 General characteristics of steady-state sinusoidal responses 40
3.4.3 Example: first-order system . . . . . . . . . . . . . . . . 43
3.5 Effect of initial conditions . . . . . . . . . . . . . . . . . . . . . 44
3.6 State-space representations . . . . . . . . . . . . . . . . . . . . . 47
3.6.1 Example of a state-space model . . . . . . . . . . . . . . 47
3.6.2 General form of a state-space model . . . . . . . . . . . . 48
3.6.3 State-space analysis . . . . . . . . . . . . . . . . . . . . . 49
3.6.4 State-space realizations . . . . . . . . . . . . . . . . . . . 51
3.7 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
5 Frequency-domain analysis of control systems 115
5.1 Bode plots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
5.1.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . 115
5.1.2 Approximations of the frequency response . . . . . . . . 116
5.1.3 Bode plots - Systems with no poles or zeros at the origin 118
5.1.4 Bode plots - Systems with poles or zeros at the origin . . 121
5.1.5 Complex poles and zeros with low damping factor . . . . 122
5.1.6 Some special transfer functions . . . . . . . . . . . . . . 126
5.2 Nyquist criterion of stability . . . . . . . . . . . . . . . . . . . . 131
5.2.1 Nyquist diagram . . . . . . . . . . . . . . . . . . . . . . 131
5.2.2 Nyquist criterion . . . . . . . . . . . . . . . . . . . . . . 133
5.2.3 Counting the number of encirclements . . . . . . . . . . 137
5.2.4 Implications of the Nyquist criterion . . . . . . . . . . . 137
5.2.5 Explanation of the Nyquist criterion . . . . . . . . . . . 139
5.2.6 Open-loop poles on the jω-axis . . . . . . . . . . . . . . 141
5.3 Gain and phase margins . . . . . . . . . . . . . . . . . . . . . . 145
5.3.1 Gain margin . . . . . . . . . . . . . . . . . . . . . . . . . 145
5.3.2 Gain margin in the Nyquist diagram . . . . . . . . . . . 146
5.3.3 Gain margin in the Bode plots . . . . . . . . . . . . . . . 146
5.3.4 Phase margin . . . . . . . . . . . . . . . . . . . . . . . . 148
5.3.5 Phase margin in the Bode plots . . . . . . . . . . . . . . 149
5.3.6 Delay margin . . . . . . . . . . . . . . . . . . . . . . . . 149
5.3.7 Relationship between phase margin and damping . . . . 151
5.3.8 Frequency-domain design . . . . . . . . . . . . . . . . . . 152
5.3.9 Example of frequency-domain design with a lead controller 154
5.3.10 Design in the Nyquist diagram . . . . . . . . . . . . . . . 157
5.4 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
6.4.5 Responses to sinusoidal inputs . . . . . . . . . . . . . . . 194
6.4.6 Systems described by difference equations and effect of initial conditions . . . 197
6.4.7 Internal stability definitions and properties . . . . . . . . 198
6.4.8 Realization of discrete-time transfer functions . . . . . . 199
6.4.9 State-space models . . . . . . . . . . . . . . . . . . . . . 202
6.4.10 Extensions of other continuous-time results . . . . . . . . 203
6.4.11 Example of root-locus in discrete-time . . . . . . . . . . 205
6.5 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
Bibliography 245
Chapter 1
Introduction to feedback systems
(Figure: elementary feedback loop: reference r, error e, controller C, control input u, plant P, output y.)
• disturbances affecting the system (such as the slope of the road in a cruise
control system),
• limitations of the actuation system (delays and finite range of the control
input).
(Figure: Watt's flyball governor: the flyball mechanism adjusts a steam valve between the steam supply and the steam engine driving the load.)
with measurements of the aircraft states (for example, its angular velocities) to
determine the appropriate commands to be applied to the control surfaces.
(Figure 1.4: flight control system: the pilot input enters the computer, which drives the actuators of the aircraft; sensors feed measurements back to the computer. The computer, actuators, and sensors form the control system; the aircraft is the plant.)
In some modern aircraft, such as the X-29, the dynamic behavior is so unsta-
ble that a pilot would be unable to maintain steady flight without the feedback
actions implemented by the flight control computer. In the worst flight condition
of the X-29, for example, an angular deviation from horizontal flight doubled
every 0.12 seconds (the stabilization task is equivalent to the one required to
balance a 17.4 in stick on a finger) [7]. Computations were performed at a rate of
40 times per second to provide adequate stabilization and control of the aircraft.
a speaker and canceling exactly the wave produced by the noise source. The
air pressure produced by the speaker and by the noise source at the microphone
is shown as a function of time in Fig. 1.6. Ideally, the two waves will cancel
each other exactly. The control problem is quite different from the flight control
application. There is no issue of stabilization of the plant, or of tracking of
commands. The problem is purely one of disturbance rejection. Challenges in
this application are the speed at which computations must be performed (a rate
of 8 kHz is typical), and the time delay present in the plant (due to the time it
takes for the sound to travel from the speaker to the microphone). The general
structure of the control system, however, is similar to the structure of the flight
control system of Fig. 1.4, where the actuators are replaced by the speaker, the
sensors by the microphone, the aircraft by the acoustics from the speaker to the
microphone, and the reference input is set to zero.
(Figure: active noise control: a noise source and a speaker driven by the noise control system, with a microphone measuring the combined sound.)
Chapter 2
Continuous-time signals
2.1.2 Examples
1) x(t) = δ(t) ⇔ X(s) = 1,
where δ(t) is an impulse function or Dirac function.
2) x(t) = u(t) ⇔ X(s) = 1/s,
where u(t) is a step function: u(t) = 0 for t < 0 and u(t) = 1 for t ≥ 0.
3) x(t) = 1 ⇔ X(s) = 1/s.
The transform is the same as for u(t), because X(s) does not depend on x(t)
for t < 0.
4) x(t) = e^(at) ⇔ X(s) = 1/(s − a).
5) x(t) = cos(bt) ⇔ X(s) = s/(s² + b²).
6) x(t) = sin(bt) ⇔ X(s) = b/(s² + b²).
7) x(t) = t ⇔ X(s) = 1/s².
8) x(t) = t^n e^(at) ⇔ X(s) = n!/(s − a)^(n+1).
x(t) = t e^(at) cos(bt) = t e^(at) (e^(jbt) + e^(−jbt))/2 = (t/2) e^((a+jb)t) + (t/2) e^((a−jb)t),  (2.2)
so that
X(s) = (1/2)/(s − a − jb)² + (1/2)/(s − a + jb)²
     = ((s − a + jb)²/2 + (s − a − jb)²/2)/((s − a)² + b²)²
     = ((s − a)² − b²)/((s − a)² + b²)².  (2.3)
Although the intermediate steps use complex variables, the final result is a real
function of the variable s, which must be the case when the signal is real.
(Figure: pole locations in the s-plane: moving a pole left gives faster decay, moving it right gives faster growth, and a larger imaginary part gives a higher frequency.)
time, or settling time. For a growing exponential (or pole in the right half-plane),
e^(at) = 2.0 for t = 0.7/a,  (2.5)
and the value of the time is called the time to double the amplitude. For a = 10 rad/s, τdouble = 70 ms.
The imaginary part of the pole, b, gives an indication of the frequency of
oscillation. Specifically, cos(bt) has a period
Tosc = 2π/b (period of oscillation).  (2.6)
For b = 100 rad/s, Tosc = 62.8 ms.
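These rules of thumb are easy to script. A small sketch (the helper name and the dictionary layout are mine):

```python
import numpy as np

def pole_characteristics(p):
    """Time-domain characteristics implied by a pole p = a + jb (hypothetical helper)."""
    a, b = p.real, p.imag
    out = {}
    if a < 0:
        out["t_half"] = np.log(2) / (-a)    # e^(at) = 1/2: time to halve the amplitude
    elif a > 0:
        out["t_double"] = np.log(2) / a     # e^(at) = 2: time to double (about 0.7/a)
    if b != 0:
        out["T_osc"] = 2 * np.pi / abs(b)   # period of oscillation of cos(bt)
    return out

# Pole with a = 10 rad/s and b = 100 rad/s, as in the text
print(pole_characteristics(10 + 100j))  # t_double about 0.069 s, T_osc about 0.063 s
```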
• Linearity is a key property that is used to find the time signals correspond-
ing to rational Laplace transforms using partial fraction expansions.
• The correct application of the final value theorem is tricky, because one
must know that the limit exists. For example, 1/(s + 1) and 1/(s − 1) yield
the same result, but the limit exists only in the first case. For certain
transforms, the existence of the limit can be determined from X(s), as
will be seen later.
• For the delay property, one must have that T > 0, i.e., that the signal is
shifted to the right, or delayed. The property does not apply for a shift to
the left.
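The caution about the final value theorem can be illustrated symbolically; this sketch uses sympy (an assumption; the book does not prescribe a tool):

```python
import sympy as sp

s = sp.symbols('s')

X1 = 1 / (s + 1)   # x(t) = e^(-t): the limit exists (it is 0)
X2 = 1 / (s - 1)   # x(t) = e^t: no limit, the theorem does not apply

# Both transforms give the same value of lim_{s->0} s X(s) ...
print(sp.limit(s * X1, s, 0))   # 0
print(sp.limit(s * X2, s, 0))   # 0, but meaningless for X2

# ... and only the pole locations reveal the difference
print(sp.solve(sp.denom(X1), s))   # [-1]: open left half-plane
print(sp.solve(sp.denom(X2), s))   # [1]: right half-plane, so x(t) grows
```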
where the notation [·]s=pi means that the expression inside the bracket is evalu-
ated at s = pi .
Clearing fractions
x(t) = c (t^(k−1)/(k − 1)!) e^(pt) + c* (t^(k−1)/(k − 1)!) e^(p*t).  (2.12)
x(t) = (t^(k−1)/(k − 1)!) e^(at) (c e^(jbt) + c* e^(−jbt))
     = (t^(k−1)/(k − 1)!) e^(at) ((c + c*) cos(bt) + j(c − c*) sin(bt)),  (2.13)
which shows that x(t) is the real function of time
x(t) = 2 Re(c) (t^(k−1)/(k − 1)!) e^(at) cos(bt) − 2 Im(c) (t^(k−1)/(k − 1)!) e^(at) sin(bt).  (2.14)
In polar form, with c = |c| e^(j∡c), this is
x(t) = 2 |c| (t^(k−1)/(k − 1)!) e^(at) cos(bt + ∡c),  (2.15)
since Re(c) cos(bt) − Im(c) sin(bt) = |c| cos(bt + ∡c).
The grouping of complex terms can also be performed in the s-domain. This
fact is useful in the procedure of clearing fractions, in order to obtain a system
of linear equations with real coefficients. For example, two terms due to non-
repeated imaginary poles can be combined as follows:
X(s) = c/(s − jb) + c*/(s + jb) = ((c + c*)s + jb(c − c*))/((s − jb)(s + jb))
     = (2 Re(c) s − 2 Im(c) b)/(s² + b²).  (2.17)
The result is identical to the one obtained in the time domain, because s/(s² + b²)
is the transform of cos(bt) and b/(s² + b²) is the transform of sin(bt). In general,
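The equivalence of the complex and real groupings can be checked numerically. A minimal sketch (the residue value c and frequency b are arbitrary examples):

```python
import numpy as np

c, b = 0.3 - 0.4j, 2.0              # arbitrary example residue and frequency
t = np.linspace(0.0, 5.0, 200)

complex_form = c * np.exp(1j * b * t) + np.conj(c) * np.exp(-1j * b * t)
real_form = 2 * c.real * np.cos(b * t) - 2 * c.imag * np.sin(b * t)   # as in (2.14)
polar_form = 2 * abs(c) * np.cos(b * t + np.angle(c))                 # as in (2.15)

print(np.allclose(complex_form.imag, 0.0))        # True: imaginary parts cancel
print(np.allclose(complex_form.real, real_form))  # True
print(np.allclose(real_form, polar_form))         # True
```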
2.2.4 Examples
(1-a) Example with real poles by residue method
Note that the formula can be explained easily in this case by noting that if X(s)
is multiplied by s + 2, one gets
(s + 2)X(s) = c11(s + 2)/(s + 1) + c12(s + 2)/(s + 1)² + c21.  (2.22)
Only c21 remains in this expression when s = −2.
The parameter c12 (the coefficient of the highest power for the pole at s = −1)
is determined in a similar way:
c12 = [(s + 1)²X(s)]_{s=−1} = [1/(s + 2)]_{s=−1} = 1/(−1 + 2) = 1.  (2.23)
For c11, the formula is more complex. First, go back to (s + 1)²X(s) = 1/(s + 2),
and take
c11 = [d/ds ((s + 1)²X(s))]_{s=−1} = [d/ds (1/(s + 2))]_{s=−1} = [−1/(s + 2)²]_{s=−1} = −1.  (2.24)
x(t) = c11 e^(−t) + c12 t e^(−t) + c21 e^(−2t) = −e^(−t) + t e^(−t) + e^(−2t).  (2.25)
X(s) = 1/((s + 1)²(s + 2))
     = (c11(s + 1)(s + 2) + c12(s + 2) + c21(s + 1)²)/((s + 1)²(s + 2)).  (2.26)
The system is a set of linear equations with real coefficients. It turns out that,
when the partial fraction expansion is set up correctly, the system always has
as many equations as unknowns, and always has a unique solution. In this case,
the solution is c11 = −1, c12 = 1, c21 = 1, as found earlier.
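The residue method is automated by scipy.signal.residue, which can be used to check expansions such as this one. A sketch (the reconstruction helper is mine; scipy assigns increasing powers to consecutive equal poles):

```python
import numpy as np
from scipy.signal import residue

# X(s) = 1/((s + 1)^2 (s + 2)); denominator expanded: s^3 + 4 s^2 + 5 s + 2
r, p, k = residue([1.0], [1.0, 4.0, 5.0, 2.0])

def eval_expansion(s0, r, p):
    """Evaluate the partial fraction expansion at s0; consecutive equal poles
    correspond to increasing powers in scipy's convention."""
    total, power = 0.0 + 0.0j, 1
    for i in range(len(r)):
        power = power + 1 if i > 0 and np.isclose(p[i], p[i - 1]) else 1
        total += r[i] / (s0 - p[i]) ** power
    return total

s0 = 1.0
print(np.isclose(eval_expansion(s0, r, p), 1.0 / ((s0 + 1)**2 * (s0 + 2))))  # True
```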
X(s) = 2(s + 1)/(s(s² + 2s + 2)), with poles at s = 0 and s = −1 ± j
     = c11/s + c21/(s + 1 − j) + c31/(s + 1 + j), with c31 = c21*.  (2.30)
Note that if the pole p31 is chosen instead of p21 in (2.32), the result remains the
same.
X(s) = 2(s + 1)/(s(s² + 2s + 2)), with poles at s = 0 and s = −1 ± j
     = c11/s + c21/(s + 1 − j) + c31/(s + 1 + j), with c31 = c21*
     = (c11(s + 1 − j)(s + 1 + j) + c21 s(s + 1 + j) + c31 s(s + 1 − j))/(s(s² + 2s + 2)).  (2.33)
Then
X(s) = c11/s + c21/(s + 1 − j) + c21*/(s + 1 + j)
     = c11/s + ((c21 + c21*)(s + 1) + j(c21 − c21*))/(s² + 2s + 2)
     = (c11(s² + 2s + 2) + (2f21(s + 1) − 2g21)s)/(s(s² + 2s + 2)),  (2.37)
where f21 = Re(c21) and g21 = Im(c21),
which gives
X(s) = 1/(s² + 2s + 2)², with double poles at s = −1 ± j
     = c11/(s + 1 − j) + c11*/(s + 1 + j) + c12/(s + 1 − j)² + c12*/(s + 1 + j)².  (2.40)
Using the formula gives
c12 = [(s + 1 − j)²X(s)]_{s=−1+j} = [1/(s + 1 + j)²]_{s=−1+j} = 1/(2j)² = −1/4,
c11 = [d/ds (1/(s + 1 + j)²)]_{s=−1+j} = [−2/(s + 1 + j)³]_{s=−1+j} = −2/(2j)³ = −j/4,  (2.41)
so that the time function is
x(t) = (1/2) e^(−t) sin(t) − (1/2) t e^(−t) cos(t).
X(s) = 1/(s² + 2s + 2)², with double poles at p1 = −1 + j and p2 = −1 − j
     = c11/(s + 1 − j) + c11*/(s + 1 + j) + c12/(s + 1 − j)² + c12*/(s + 1 + j)²
     = c11/(s + 1 − j) + c11*/(s + 1 + j) + c12/(s² + 2s − 2j(s + 1)) + c12*/(s² + 2s + 2j(s + 1)).  (2.43)
Defining f11 = Re(c11), g11 = Im(c11), f12 = Re(c12), and g12 = Im(c12), and
clearing fractions, leads to a system of linear equations in these real unknowns.
The solution is f11 = 0, f12 = −1/4, g11 = −1/4, g12 = 0, which gives the same
result as (3-a).
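The result of this example can be validated by transforming the recovered time function back numerically. A sketch, assuming the time function assembled from c11 = −j/4 and c12 = −1/4 via the real grouping formulas:

```python
import numpy as np
from scipy.integrate import quad

# Candidate inverse of X(s) = 1/(s^2 + 2s + 2)^2, assembled from the expansion
def x(t):
    return 0.5 * np.exp(-t) * np.sin(t) - 0.5 * t * np.exp(-t) * np.cos(t)

def laplace_num(x, s, T=50.0):
    """Numerical Laplace transform: integral of e^(-st) x(t) over [0, T]."""
    val, _ = quad(lambda t: np.exp(-s * t) * x(t), 0.0, T)
    return val

s0 = 1.0
print(laplace_num(x, s0))                    # should equal 1/(1 + 2 + 2)^2 = 0.04
print(np.isclose(laplace_num(x, s0), 0.04))  # True
```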
• a rational function of s such that deg N(s) < deg D(s) is called a strictly
proper function of s.
• a rational function of s such that deg N(s) ≤ deg D(s) is called a proper
function of s.
In some cases, one may encounter a transform X(s) = N(s)/D(s), with
deg N(s) ≥ deg D(s). Such a transform can be inverted using partial fraction
expansions, with a preliminary step. First, using polynomial division, one finds
Q(s), R(s) such that N (s) = D(s)Q(s) + R(s) and deg R(s) < deg D(s) (Q(s)
and R(s) are the quotient and the remainder of the division of N (s) by D(s),
respectively). As a result
X(s) = Q(s) + R(s)/D(s).  (2.49)
The second term is a strictly proper rational function of s, which can be inverted
using the partial fraction expansion procedure described earlier. The first term
is a polynomial Q(s) = q0 + q1 s + q2 s² + ..., whose associated time function is
q(t) = q0 δ(t) + q1 (d/dt)δ(t) + q2 (d²/dt²)δ(t) + ...  (2.50)
In other words, q(t) is a linear combination of the delta function and its deriv-
atives. While the case of non-strictly-proper rational functions of s may be
addressed easily in this manner, it is rarely meaningful in practice.
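The preliminary division step can be carried out with numpy.polydiv. A sketch with a hypothetical non-strictly-proper transform:

```python
import numpy as np

# Hypothetical example: N(s) = s^3 + 2s^2 + s + 1, D(s) = s^2 + 1
N = [1.0, 2.0, 1.0, 1.0]
D = [1.0, 0.0, 1.0]

Q, R = np.polydiv(N, D)    # N = D*Q + R, with deg R < deg D
print(Q)                   # quotient s + 2: contributes delta'(t) + 2 delta(t)
print(R)                   # remainder: handled by an ordinary partial fraction expansion

# Sanity check of the division identity at an arbitrary point
s0 = 3.0
print(np.isclose(np.polyval(N, s0),
                 np.polyval(D, s0) * np.polyval(Q, s0) + np.polyval(R, s0)))  # True
```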
expansion. The expansion does not need to be performed: only the locations of
the poles are needed. Consider the example
X(s) = N(s)/((s + 1)²(s + 2)).  (2.51)
Without computing the coefficients, one can predict that x(t) will be a linear
combination of the functions e^(−t), t e^(−t), and e^(−2t). The result is independent of
N(s). Further, if N(s) does not have a root that is identical to a denominator
root (pole/zero cancellation), the functions t e^(−t) and e^(−2t) must be present in the
expansion. Indeed, the coefficients c12 and c21 cannot be zero because
Proof: the proof is based on the properties of the individual terms of the partial
fraction expansion. A term t^(r−1) e^(at) cos(bt + φ) has the properties that:
(a) it is bounded if and only if a < 0, or a = 0 and r = 1 (the pole is in the
OLHP or is non-repeated on the jω-axis).
(b) it has a limit if and only if a < 0, or a = 0, b = 0, and r = 1 (the pole is in
the OLHP or is non-repeated at s = 0).
(c) it converges to zero if and only if a < 0 (the pole is in the OLHP).
The other elements of the proof are that the functions with the highest powers of
t must be present in the expansion and that a function in the expansion cannot
be cancelled by a combination of other functions (the functions are linearly
independent).
Examples
3. X(s) = (s + 1)/s² : unbounded.
5. X(s) = (s² − 1)/(s² + 16)² : unbounded.
6. X(s) = 1/(s² − 1) : unbounded.
X(s) = N(s)/D(s) with deg N(s) ≥ deg D(s) ⇒ x(t) is unbounded.  (2.57)
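The boundedness tests of this section reduce to inspecting pole locations, which is easy to automate. A sketch (the function and its labels are mine, and it assumes no pole/zero cancellations):

```python
import numpy as np

def classify(num, den, tol=1e-8):
    """Predict boundedness of x(t) from the poles of X(s) = N(s)/D(s).

    Coefficients are listed highest power first. Implements properties (a)
    and (c) and the non-strictly-proper rule (2.57).
    """
    if len(num) >= len(den):                   # deg N >= deg D: impulses appear
        return "unbounded"
    poles = np.roots(den)
    for q in poles:
        repeated = sum(np.isclose(q, q2, atol=1e-6) for q2 in poles) > 1
        if q.real > tol or (abs(q.real) <= tol and repeated):
            return "unbounded"
    if all(q.real < -tol for q in poles):
        return "converges to zero"
    return "bounded"

print(classify([1.0], [1.0, 1.0]))             # pole at -1: converges to zero
print(classify([1.0], [1.0, 0.0, -1.0]))       # poles at +/-1 (example 6): unbounded
print(classify([3.0], [1.0, 0.0, 4.0, 0.0]))   # poles at 0, +/-2j, non-repeated: bounded
```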
2.4 Problems
Problem 2.1: Using both methods for partial fraction expansion (residue and
clearing fractions), find the signals whose Laplace transforms are given by
(a) X(s) = 2s/((s + 2)(s − 2)),   (b) X(s) = (2s + 1)/(s²(s + 1)²),
(c) X(s) = (s³ + 2s² + 2s + 5)/(s²(s² + 2s + 5)),   (d) X(s) = (4s − 8)/((s² + 4)²).  (2.58)
Problem 2.2: (a) Consider
X(s) = (s² + 4)/(s³(s² + 2s + 5)²).  (2.59)
Give the form of x(t) that would result from a partial fraction expansion. Ex-
press the signal as a linear combination of time functions, but do not solve for
the coefficients themselves. Indicate which of the coefficients may or may not
turn out to be zero.
(b) Repeat part (a) for
X(s) = (s − 1)/((s + 2)⁴(s − 3)³(s + 4)).  (2.60)
Problem 2.3: For the signals whose Laplace transforms are given below, indicate
whether the signals are bounded and, if so, whether lim_(t→∞) x(t) exists. If the
limit exists, give its value. Do not invert the Laplace transforms to obtain the
results.
(a) X(s) = 10/(s + 1)¹⁰,   (b) X(s) = (s − 1)/(s(s + 2)),
(c) X(s) = 1/(s²(s + 2)),   (d) X(s) = 5/(s(s + 1)²),
(e) X(s) = 3/(s(s² + 4)),   (f) X(s) = 3/(s(s² + 4)²),
(g) X(s) = 2(s − 1)/((s² + 2s + 1)(s + 3)),   (h) X(s) = 2(s − 1)/((s² + 2s + 2)(s + 3)),
(i) X(s) = 2(s − 1)/((s² + 2s + 2)²(s + 3)).  (2.61)
Problem 2.4: (a) Is a signal x(t) considered bounded if x(t) < 2¹⁶?
(b) How fast does x(t) = e^(−100t) cos(10,000t) converge to zero?
(c) Is the signal X(s) = (s − 1)²/(s(s + 1)(s² + 4)) bounded? Does it converge to a limit?
Problem 2.5: (a) List the time functions that would originate from a partial
fraction expansion of
X(s) = (s + 1)/((s² + 2s + 5)²(s² + 4)³).  (2.62)
Problem 2.6: (a) Give the response y(t) of a system with transfer function
P(s) = 1/(s²(s + 1))  (2.64)
List all the functions that appear in the partial fraction expansion of Y (s),
without calculating the coefficients of the expansion. Express the result in terms
of real functions, and indicate which functions must have nonzero coefficients.
Problem 2.7: (a) List the real time functions that would originate from a
partial fraction expansion of the response of the system
P(s) = (s + 1)/(s(s + 100 + 33j)²(s + 100 − 33j)²)  (2.66)
to a step input. Indicate which functions may and may not have zero coefficients
in the partial fraction expansion.
(b) Repeat part (a) for an input x(t) = cos(33t).
Chapter 3
Continuous-time systems
Consider V (s) to be the input to the system, and I(s) to be the output. The
first term in the expression for I(s) is due to the input, while the second term
(Figure 3.1: RL circuit: a voltage source v(t) drives the series connection of a resistor R and an inductor L, with current i(t).)
(Figure 3.2: block diagram representation of a system with transfer function H(s), input x, and output y.)
is due to the initial current in the inductor. This initial current is considered
to be an initial condition of the system, or initial state. For the time being, we
will let i(0) = 0. Then
I(s)/V(s) = 1/(sL + R) = (1/L)/(s + R/L).  (3.3)
By definition,
H(s) = I(s)/V(s),  (3.4)
where H(s) is called the transfer function of the system. Note that
H(s) = (1/L)/(s + R/L) = Laplace transform((1/L) e^(−(R/L)t)) = Laplace transform(h(t)),  (3.5)
where h(t) is the impulse response of the system. Indeed for v(t) = δ(t), V (s) =
1, I(s) = H(s), and i(t) = h(t).
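The claim that h(t) = (1/L)e^(−(R/L)t) can be checked by simulation. A sketch using scipy.signal, with example component values of my choosing:

```python
import numpy as np
from scipy.signal import TransferFunction, impulse

R, L = 1.0, 0.5                                  # example values (ohm, henry)
H = TransferFunction([1.0 / L], [1.0, R / L])    # H(s) = (1/L)/(s + R/L)

t, h = impulse(H, T=np.linspace(0.0, 5.0, 500))
expected = (1.0 / L) * np.exp(-(R / L) * t)      # h(t) = (1/L) e^(-(R/L) t)
print(np.allclose(h, expected, atol=1e-6))       # True
```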
In general, a linear time-invariant system with input x(t) and output y(t)
may be represented by the block diagram of Fig. 3.2. This representation means
that
A system is completely described by H(s), since the response to any input signal
can be computed from the knowledge of the signal and of the transfer function
of the system. When
H(s) = N(s)/D(s),  (3.7)
where N (s) and D(s) are polynomials, the roots of N (s) are called the zeros of
the system, and the roots of D(s) are called the poles of the system.
In the time domain, the output is the convolution of the impulse response with the input, y(t) = ∫₀^t h(τ)x(t − τ) dτ.
(Figure: cascade connection: the input x enters H1(s), producing y1, which enters H2(s) to produce the output y.)
Y1(s) = H1 (s)X(s)
Y (s) = H2 (s)Y1 (s), (3.10)
so that
Y (s) = H2 (s)H1(s)X(s)
= H(s)X(s) for H(s) = H2(s)H1 (s). (3.11)
In other words, the transfer function of a cascade system is the product of the
two transfer functions. If we let
H1(s) = N1(s)/D1(s),   H2(s) = N2(s)/D2(s),  (3.12)
the result is
H(s) = N1(s)N2(s)/(D1(s)D2(s)).  (3.13)
(Figure: parallel connection: the input x drives both H1(s) and H2(s), and their outputs y1 and y2 are summed to form y.)
Therefore, unless there are pole/zero cancellations, the poles of H(s) are the
union of the poles of H1 (s) and H2 (s), and the zeros of H(s) are the union of
the zeros of H1 (s) and H2(s).
so that the overall transfer function is the sum of the two transfer functions
With
H1(s) = N1(s)/D1(s),   H2(s) = N2(s)/D2(s),  (3.16)
one has
H(s) = (N1(s)D2(s) + D1(s)N2(s))/(D1(s)D2(s)).  (3.17)
The poles of H(s) are (again) the union of the poles of H1(s) and H2(s), but
the zeros are the roots of N1(s)D2 (s) + D1(s)N2 (s) = 0 (except for possible
cancellations). A zero that is common to both H1 (s) and H2 (s) is also a zero of
H(s).
Y (s) = H1 (s)E(s)
= H1 (s)X(s) − H1 (s)H2(s)Y (s), (3.18)
(Figure: feedback connection: the error e = x − H2(s)y drives H1(s), whose output is y.)
the forward path is (H1(s) + H2(s)) H4(s). Then, using the feedback system
formula, the overall transfer function is found to be
(Figure: block diagram with H1(s) in parallel with H2(s), followed by H4(s), with feedback through H3(s).)
H(s) = H1(s)H4(s) + (H2(s)H4(s)/(1 + H2(s)H3(s)H4(s)))(1 − H1(s)H3(s)H4(s))
     = N(s)/(1 + H2(s)H3(s)H4(s)),  (3.23)
where
(Figures: successive equivalent block diagrams of the reduction, including the diagram of Fig. 3.7 with internal signals x1 and x2, obtained by moving H1(s) and combining the parallel and feedback paths.)
As seen in this example, the transfer function of many systems can be found
by the block reduction method using the formulas for cascade, parallel, and feed-
back systems, and carefully transforming the diagrams into equivalent systems.
In difficult cases, skillful manipulations of the diagram may be required. The
so-called Mason’s rule is a general procedure that can also be used and is often
found in textbooks. However, it also requires some skill to apply. The next
section presents a procedure that can be applied systematically and with less
risk of error in complicated cases.
Procedure
1. Label the internal signals of the diagram X1(s), ..., Xn(s).
2. Write equations for Y(s) and for the Xi(s)'s, by reading the diagram. This
step produces n + 1 equations relating the Xi(s), Y(s), and X(s).
3. Eliminate the Xi(s)'s until a single equation is left that relates Y(s) and
X(s). This step is equivalent to solving n + 1 linear equations in the
n + 1 unknowns Y(s) and Xi(s). The solution for Y(s) gives the transfer
function.
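The elimination in step 3 is mechanical and can be delegated to a computer algebra system. A sketch using sympy on a simple feedback loop (the symbols stand for the transforms; the tool choice is mine):

```python
import sympy as sp

# Diagram read off a basic feedback loop:
# E = X - H2*Y (summing junction), Y = H1*E (forward path)
H1, H2, X, Y, E = sp.symbols('H1 H2 X Y E')

eqs = [sp.Eq(E, X - H2 * Y),
       sp.Eq(Y, H1 * E)]

sol = sp.solve(eqs, [Y, E], dict=True)[0]   # step 3: eliminate the internal signal E
H = sp.simplify(sol[Y] / X)
print(H)                                    # H1/(H1*H2 + 1), the feedback formula
```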
Example: consider the block diagram of Fig. 3.7. The equations are
and
3.2 Stability
3.2.1 Definitions
In the previous chapter, we discussed the boundedness and convergence proper-
ties of signals. Now, we discuss the stability properties of systems. The two sets
of properties are very much related, and rely heavily on concepts derived from
partial fraction expansions. However, they concern distinct objects, namely sig-
nals and systems. We begin with a standard definition of stability and with a
test for stability.
Definition - BIBO stability: a linear time-invariant system is called bounded-
input bounded-output stable (BIBO stable) if the output is bounded for any
bounded input. A system is called BIBO unstable if it is not BIBO stable.
Comments
(a) A system is unstable if there exists a bounded input signal such that the
output is unbounded. The output of the system does not need to be unbounded
for all bounded input signals: only one signal is sufficient. Further, the output
of an unstable system may even be bounded for some unbounded input signals.
(b) The fact below states that a system is BIBO stable if all its poles are in the
open left half-plane. Then, two types of unstable systems may be encountered:
systems that have some repeated poles on the jω−axis and/or some poles in
the open right half-plane, and systems that only have non-repeated poles on the
jω−axis. In the first case, virtually every bounded input yields an unbounded
output. In the second case, only well-chosen inputs produce an unbounded
output.
3.2.2 Properties
Fact - BIBO stability of systems with rational transfer functions
A linear time-invariant system with H(s) = N(s)/D(s) and deg N(s) ≤ deg D(s)
is BIBO stable if and only if all the poles of the transfer function are in the open
left half-plane (Re(s) < 0).
Example 2: H(s) = 1/(s − 1) is unstable. For x(t) = u(t) (step input), the output
is y(t) = −1 + e^t and is unbounded. The output for X(s) = (s − 1)/(s + 1)² is
bounded. However, all bounded inputs whose transforms do not have a zero at
s = 1 will yield unbounded outputs.
The proof given below assumes that the input signals under consideration have
rational transforms. However, the result is true in general.
(a) Pole condition ⇒ BIBO stability. Recall that a signal is bounded if and only
if the poles of its transform are in the OLHP or are non-repeated poles on the
jω−axis. Since the poles of Y (s) are the union of those of X (s) and H(s), and
since all the poles of H(s) are in the OLHP, Y (s) will satisfy the condition for
boundedness if X(s) does.
where Yss (s) is called the steady-state response of the system and Ytr (s) is
called the transient response of the system. Because of the stability assumption
on H(s), the transient response is an exponentially decaying function in the time
domain. The steady-state response is a constant signal
3.3.2 Examples
Example 1: consider a first-order system
H(s) = k/(s + a).  (3.35)
The DC gain is k/a and, through a partial fraction expansion, the steady-state
and transient responses can be found to be
yss(t) = (k/a)xm,   ytr(t) = −(k/a)xm e^(−at).  (3.36)
The constant τ = 1/a is usually referred to as the time constant of the system.
[dy/dt]t=0 = kxm, so that the intersection of the tangent to the response at t = 0
and the steady-state value H(0)xm = kxm/a occurs for t = 1/a = τ , the time
constant of the system. These properties and the shape of the step response are
shown in Fig. 3.10. For t = τ , y(t) = H(0)xm (1 − e−1 ), which implies that
the output reaches 63% of its steady-state value after a time equal to the time
constant. Often, a value of time equal to 4τ is taken to be the convergence time,
or settling time. After that time, the output has reached 98% of its steady-state
value.
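The 63% and 98% landmarks can be confirmed by simulating a step response. A sketch with example values of k and a (my choices):

```python
import numpy as np
from scipy.signal import TransferFunction, step

k, a = 2.0, 4.0                         # example first-order system H(s) = k/(s + a)
tau = 1.0 / a
H = TransferFunction([k], [1.0, a])

t, y = step(H, T=np.linspace(0.0, 8 * tau, 2000))
y_ss = k / a                            # steady-state value for a unit step

print(np.interp(tau, t, y) / y_ss)      # about 0.63 at t = tau
print(np.interp(4 * tau, t, y) / y_ss)  # about 0.98 at t = 4 tau
```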
Example 2: If the transfer function has a double real pole,
H(s) = k/(s + a)²,  (3.37)
a partial fraction expansion gives
yss(t) = (k/a²)xm,   ytr(t) = −(k/a)xm t e^(−at) − (k/a²)xm e^(−at).  (3.38)
The step responses of the first-order system with pole at −a and of the second-
order system with double pole at s = −a are shown on Fig. 3.11. The responses
(Figure 3.10: step response of a first-order system: the tangent at t = 0 intersects the steady-state value H(0)xm at t = τ = 1/a, where the response has reached 0.63 H(0)xm.)
(Figure 3.11: step responses of a first-order system with pole at −a and of a second-order system with a double pole at −a.)
are qualitatively similar, but one can see that the delay of the response is roughly
doubled. Also, the derivative at t = 0 is zero for the double pole. The slope is
always zero if the number of poles exceeds the number of zeros by 2 or more.
Example 3: If the system has two distinct real poles,
H(s) = k/((s + a1)(s + a2)),  (3.39)
a partial fraction expansion gives
yss(t) = (k/(a1 a2))xm,   ytr(t) = (k/(a1(a1 − a2)))xm e^(−a1 t) + (k/(a2(a2 − a1)))xm e^(−a2 t).  (3.40)
The pole with the larger magnitude yields a term that converges faster to zero,
so that the response is usually dominated by the contribution of the pole with
smaller magnitude. Fig. 3.12 shows the response of a first-order system with pole
(Figure 3.12: step responses of a first-order system with pole p and of a second-order system with poles p and 5p.)
at −a1 and of a second-order system with poles at −a1 and −a2 = −5a1. Note
that the responses are very close. The additional pole creates a small amount of
delay, and the slope around t = 0 is zero for the second-order system. However,
the first-order system is a good approximation of the second-order system, even
though the magnitude of the additional pole is only 5 times larger.
Example 4: If the system has a pair of complex poles at s = −a ± jb,
H(s) = k/((s + a − jb)(s + a + jb)) = k/(s² + 2as + a² + b²),  (3.41)
a partial fraction expansion gives
yss(t) = H(0)xm,   ytr(t) = −H(0)xm e^(−at) cos(bt) − (a/b)H(0)xm e^(−at) sin(bt),  (3.42)
where
H(0) = k/(a² + b²).  (3.43)
The step response typically exhibits oscillations associated with the sinusoidal
components of the transient response. However, the magnitude of these oscilla-
tions depends on the rate of decay associated with the real part of the complex
pole, as compared to the imaginary part of the pole. Fig. 3.13 shows the re-
sponses for several cases. If the imaginary part is larger than the real part of
the pole in magnitude, oscillations are visible and produce a large overshoot in
the response.
For a small ratio a/b, the response is approximately y(t) ≃ H(0)xm (1 − e^(−at) cos(bt)),
(Figure 3.13: step responses for a/b = 1, 1/2, and 1/3, with the peak near t = π/b; the overshoot grows as a/b decreases.)
so that the peak of the response occurs for t ≃ π/b. The percent overshoot is
approximately 100 e^(−πa/b) %.
The formula gives 4% for a/b = 1, 20% for a/b = 1/2, and 35% for a/b = 1/3,
which is consistent with the figure. It is typical to define the damping factor ζ
and the natural frequency ωn through
ζ = a/√(a² + b²),   ωn = √(a² + b²).  (3.46)
The natural frequency is the magnitude of the pole, and the damping factor is the
cosine of the angle between the pole and the negative real axis in the s-domain (see
Fig. 3.14).
(Figure 3.14: complex pole p = −a + jb in the s-plane: ωn is the distance from the pole to the origin, α is the angle between the pole and the negative real axis, and −a = −ζωn.)
When the damping factor is small, ζ ≃ a/b and ωn ≃ b, so that the oscillation
frequency is close to the natural frequency. An example of a system with a low
damping is a slender robotic arm, such as the one used in the space shuttle.
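The connection between the ratio a/b (equivalently the damping factor) and the overshoot can be explored numerically. A sketch (the overshoot helper is mine; ωn is normalized to 1):

```python
import numpy as np
from scipy.signal import TransferFunction, step

def overshoot_pct(zeta, wn=1.0):
    """Percent overshoot of the unit step response of wn^2/(s^2 + 2 zeta wn s + wn^2)."""
    H = TransferFunction([wn**2], [1.0, 2 * zeta * wn, wn**2])
    t, y = step(H, T=np.linspace(0.0, 30.0 / wn, 5000))
    return 100.0 * (np.max(y) - 1.0)

# a/b = 1, 1/2, 1/3 correspond to zeta = (a/b)/sqrt(1 + (a/b)^2)
for ab in (1.0, 0.5, 1.0 / 3.0):
    zeta = ab / np.sqrt(1.0 + ab**2)
    print(round(overshoot_pct(zeta), 1))   # close to the 4%, 20%, and 35% quoted in the text
```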
the right half-plane, the response may exhibit undershoot. Such a characteristic
is undesirable in control systems. Generally, zeros in the right half-plane are
called non-minimum-phase zeros and are unfavorable for reasons including, but
not limited to, the transient characteristic discussed here. Zeros in the left half-
plane are called minimum-phase zeros.
It is assumed that the system is BIBO stable, in order to guarantee the bound-
edness of the response. As an example, consider the RL circuit of Fig. 3.1. Let
R = 1 Ω and L = 1 H, and the input voltage v(t) = sin(t), so that
    I(s) = [1/(s + 1)] V(s) = [1/(s + 1)] · [1/(s^2 + 1)].    (3.49)
40 Chapter 3. Continuous-time systems
A partial fraction expansion gives i(t) = itr(t) + iss(t), with

    itr(t) = (1/2) e^{−t},    iss(t) = −(1/2) cos(t) + (1/2) sin(t).    (3.52)

Alternatively, iss(t) is also

    iss(t) = (1/√2) cos(t + 225°) = (1/√2) sin(t − 45°),    (3.53)
which shows that the steady-state response is a sinusoid with the same frequency
as the input signal, but different magnitude and phase. Note that it was assumed
in the derivation of I(s) that i(0) = 0 (the initial current in the inductor was
zero). An interesting consequence of that assumption is that the signal i(t)
obtained here is the current that would appear in the RL circuit if the voltage
source was connected at t = 0 (as if a switch was closed at t = 0). Although the
input signal was taken to be a sinusoid, the value of the signal for t < 0 is not
considered in the Laplace transform analysis and the initial condition i(0) solely
determines the state of the circuit at t = 0.
The transient, steady-state, and overall current waveforms are shown in
Fig. 3.17 for the system H(s) = 1/(s + 1) and an input x(t) = sin(3t). As
expected, the current is zero at t = 0. An overcurrent of about 50% is observed
within about a second. Such large transient currents are observed in power dis-
tribution systems, and need to be accounted for in the design of the components
and their protection.
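The decomposition of the current into transient and steady-state components can be verified numerically. A sketch (not from the text) for the RL circuit with R = 1 Ω, L = 1 H, and v(t) = sin(t):

```python
import math

# Transient and steady-state currents from (3.52); their sum is the current
# i(t) that flows when the source is connected at t = 0 with i(0) = 0.
def i_tr(t):
    return 0.5 * math.exp(-t)

def i_ss(t):
    return -0.5 * math.cos(t) + 0.5 * math.sin(t)

def i_total(t):
    return i_tr(t) + i_ss(t)

# Check (3.53): the steady-state term equals (1/sqrt(2)) sin(t - 45 degrees).
for t in (0.0, 0.7, 2.5, 6.0):
    assert abs(i_ss(t) - math.sin(t - math.pi / 4) / math.sqrt(2)) < 1e-12

print(i_total(0.0))  # 0.0 -- consistent with the zero initial condition
```

After a few time constants the transient term is negligible and only the 1/√2-amplitude sinusoid remains.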
[Fig. 3.17: transient response, steady-state response, and overall response of the current versus time (s).]
where the first two terms are obtained by the residue method and the last term
groups all the components due to the poles of H(s). H(jω0 ) is the value of the
transfer function evaluated at s = jω0.
As for the step response, assume that the system is BIBO stable and define

    Y(s) = Yss(s) + Ytr(s),

where Yss(s) and Ytr(s) are the steady-state and transient responses, respectively. In the time domain, the transient response is a signal that decays to zero exponentially because of the stability assumption on the system.
One has that H(−jω0 ) = H ∗ (jω0 ) for a system with a real impulse response.
so that

    Yss(s) = [xm/(s^2 + ω0^2)] [ ((H(jω0) + H*(jω0))/2) s + ((H(jω0) − H*(jω0))/2) jω0 ]
           = [xm/(s^2 + ω0^2)] (s Re(H(jω0)) − ω0 Im(H(jω0))).    (3.57)
Taking the inverse Laplace transform of (3.57), the steady-state output is

    yss(t) = M xm cos(ω0 t + φ),    (3.59)

where

    M = |H(jω0)|,    φ = ∠H(jω0),    so that    M e^{jφ} = H(jω0).    (3.60)
Equation (3.59) highlights the similarity between the input and output sig-
nals in the steady-state. |H(jω0)| is viewed as the gain of the system for sinu-
soidal inputs, because it is the ratio of the magnitude of the output signal to
the magnitude of the input signal. The output signal is shifted in time with
respect to the input signal, with the effect of a delay if φ < 0 and of an advance
if φ > 0. Equation (3.60) shows that the gain and phase relating the input and
output signals are given by the magnitude and angle of the complex number
H(jω0 ). H(jω) is called the frequency response of the system. It is the value of
the transfer function H(s) evaluated at s = jω. It is also the Fourier transform
of the impulse response h(t), assuming that h(t) = 0 for t < 0.
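The gain and phase for sinusoidal inputs can be obtained directly by evaluating H(jω) with complex arithmetic. A sketch (not from the text) for H(s) = 1/(s + 1) at ω0 = 1 rad/s, which matches the RL circuit example:

```python
import cmath
import math

# Frequency response of H(s) = 1/(s + 1), evaluated at s = j*w0.
def H(s):
    return 1 / (s + 1)

w0 = 1.0
Hjw = H(1j * w0)
M = abs(Hjw)                          # gain M = |H(jw0)|
phi = math.degrees(cmath.phase(Hjw))  # phase angle of H(jw0), in degrees

print(round(M, 4), round(phi, 1))  # 0.7071 -45.0
```

The values M = 1/√2 and φ = −45° agree with the steady-state current (1/√2) sin(t − 45°) found earlier.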
For a general sinusoidal signal

    x(t) = xm cos(ω0 t + α),

a similar approach can be used to find y(t). The steady-state output turns out to be given by an expression similar to (3.59):

    yss(t) = M xm cos(ω0 t + α + φ).

In particular, if α = −π/2, x(t) = xm sin(ω0 t) and yss(t) = M xm sin(ω0 t + φ). The transient response is also similar, but the partial fraction expansion produces different coefficients for the exponentially decaying functions for different values of α.
3.4. Responses to sinusoidal inputs 43
As noted earlier, the steady-state response is simply shifted if the input signal is
shifted, but the transient response varies and is not simply shifted. For different
phases of the input signal, there can be large variations in the transient response.
Case 3: The response to x(t) = xm cos(ω0 t + α) can be computed using
Without loss of generality, we let the first coefficient be equal to 1 (if needed,
both sides can be divided by the first coefficient to get the result). For the
Laplace transform analysis, recall that
    y1 = dy/dt  ⇒  Y1(s) = sY(s) − y(0).    (3.76)

Therefore

    y2 = d^2y/dt^2 = dy1/dt  ⇒  Y2(s) = sY1(s) − y1(0) = s^2 Y(s) − s y(0) − ẏ(0),    (3.77)

where we used the notation

    dy/dt = ẏ    and    y1(0) = ẏ(0) = dy/dt |_{t=0}.    (3.78)
The procedure can be extended to higher-order derivatives.
Consider the case of a second-order input/output differential equation
    d^2y/dt^2 + a1 dy/dt + a0 y = b2 d^2x/dt^2 + b1 dx/dt + b0 x.    (3.79)
Application of the Laplace transform to both sides yields
    s^2 Y(s) − s y(0) − ẏ(0) + a1 sY(s) − a1 y(0) + a0 Y(s)
        = b2 s^2 X(s) − b2 s x(0) − b2 ẋ(0) + b1 sX(s) − b1 x(0) + b0 X(s).    (3.80)
Note the distinction between X(s), which is the Laplace transform of x(t), and
x(0), which is the initial value of x(t). The transform Y (s) may be deduced to
be
    Y(s) = [s y(0) + ẏ(0) + a1 y(0) − b2 s x(0) − b2 ẋ(0) − b1 x(0)] / (s^2 + a1 s + a0)
           + [(b2 s^2 + b1 s + b0) / (s^2 + a1 s + a0)] X(s),    (3.81)

where the factor (b2 s^2 + b1 s + b0)/(s^2 + a1 s + a0) multiplying X(s) is the transfer function H(s).
The first term is the response to the initial conditions, and is also called the zero-
input response Yzi (s). The second term is the response to the input, and is also
called the zero-state response Yzs (s). It is the product of the transfer function
H(s) with the transform of the input signal. The following observations may be
made, which also apply to systems of higher order:
• the response is the sum of the response due to the input and the response
due to the initial conditions. The two components are independent. The
response to the input is the response that is obtained for the same input
but zero initial conditions, and the response to the initial conditions is the
response that is obtained for the same initial conditions but zero input.
• the initial conditions are composed of the values of the input and output
variables as well as their derivatives at t = 0. All the relevant history of
the system for t < 0 is contained in those values, which may be viewed as
the state of the system at t = 0.
• the transfer function and the response to initial conditions are rational
functions of s with the same denominators. As a consequence, the response
to the initial conditions is similar to the transient response defined earlier
for step and sinusoidal inputs. Both can be lumped together as a “total”
transient response. Boundedness of this part of the response is associated
with the concept of internal stability.
• the response to the initial conditions converges to zero if the poles of the
system are in the open left half-plane. This property is referred to as
asymptotic stability. The condition for asymptotic stability is the same as
for BIBO stability.
• the response to the initial conditions is bounded if the poles of the system
are in the open left half-plane, or are non-repeated poles on the jω−axis.
This property is sometimes referred to as marginal stability. However, a
marginally stable system is unstable from the BIBO point of view.
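The decomposition into zero-input and zero-state responses can be checked numerically. The sketch below (not from the text) simulates y'' + a1 y' + a0 y = x with forward Euler, taking b0 = 1 and b1 = b2 = 0 in (3.79) for simplicity, and verifies that the two components add up to the full response:

```python
# Forward-Euler simulation of y'' + a1 y' + a0 y = x(t).
def simulate(a1, a0, x, y0, dy0, dt=1e-4, T=5.0):
    y, dy = y0, dy0
    for k in range(int(T / dt)):
        ddy = x(k * dt) - a1 * dy - a0 * y     # solve for y'' from the ODE
        y, dy = y + dt * dy, dy + dt * ddy     # simultaneous Euler update
    return y

a1, a0 = 3.0, 2.0
step = lambda t: 1.0
zero = lambda t: 0.0

y_full = simulate(a1, a0, step, 1.0, -1.0)  # input AND initial conditions
y_zs = simulate(a1, a0, step, 0.0, 0.0)     # zero-state response
y_zi = simulate(a1, a0, zero, 1.0, -1.0)    # zero-input response
assert abs(y_full - (y_zs + y_zi)) < 1e-9   # superposition of the two parts
```

Because the update equations are linear in the state and the input, the superposition holds to within floating-point rounding.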
Note that the voltage u(t) is not a step function here, but the input to the
system, in accordance with standard state-space notation.
The two differential equations for the circuit may be written as

    dx1/dt = −(R/L) x1 − (1/L) x2 + (1/L) u
    dx2/dt = (1/C) x1,    (3.83)

or, in matrix form,

    [dx1/dt]   [−R/L  −1/L][x1]   [1/L]
    [dx2/dt] = [ 1/C    0 ][x2] + [ 0 ] u.    (3.84)
[Figure: series RLC circuit with input voltage u(t), inductor current x1, capacitor voltage x2, and output voltage y(t) across the resistor R.]

The output equation is

    y = R x1.    (3.85)

Equations (3.84) and (3.85) constitute a state-space model for the RLC circuit.
In general, a state-space model takes the form

    ẋ = Ax + Bu
    y = Cx + Du,

where:
• x is a column vector of dimension n, called the state vector
• u is a scalar signal, and the input of the system
• y is a scalar signal, and the output of the system
• A is an n × n matrix, B is a column vector of dimension n
• C is a row vector of dimension n, and D is a scalar

The dimensions of all the elements and of the products are shown in Fig. 3.19. Generally, dx/dt is denoted ẋ.
To obtain a state-space model for circuits, a systematic procedure consists
in:
1. defining a state vector with the voltages on the capacitors and the currents
in the inductors as components.
[Fig. 3.19: dimensions of the terms in ẋ = Ax + Bu.]
Many other physical systems can also be represented with state-space models,
using standard modelling techniques.
Some simple yet general results may be obtained by applying the Laplace trans-
form to the state-space model. The equations for the state-space model are
    ẋ = Ax + Bu
    y = Cx + Du.    (3.87)

Applying the transform of the derivative, sX(s) − x(0) = AX(s) + BU(s), so that X(s) = (sI − A)^{-1} (x(0) + B U(s)) and

    Y(s) = C(sI − A)^{-1} x(0) + (C(sI − A)^{-1} B + D) U(s).

The dimensions of the terms in the expression are, in the order in which they appear,

    1 × 1 = (1 × n) × (n × n) × (n × 1)
            + ((1 × n) × (n × n) × (n × 1) + (1 × 1)) × (1 × 1).    (3.91)
The transfer function is H(s) = C(sI − A)^{-1}B + D. Although this transfer function may be complicated to compute, the poles are determined by a simple equation related to the denominator of the matrix (sI − A)^{-1}. Specifically, the denominator is det(sI − A) and, therefore, the poles are given by the roots of det(sI − A) = 0. These roots are called the eigenvalues of the matrix A, and may be computed using a mathematical software package.
Example: for the RLC circuit,

    A = [−R/L  −1/L],    B = [1/L],    C = [R  0],    D = 0.    (3.93)
        [ 1/C    0 ]         [ 0 ]
Using the formula for the inverse of a 2 × 2 matrix,

    M^{-1} = [M11  M12]^{-1} = (1/(M11 M22 − M21 M12)) [ M22  −M12],    (3.94)
             [M21  M22]                                [−M21   M11]
one finds that the transfer function is

    H(s) = [R  0] [s + R/L  1/L]^{-1} [1/L]
                  [ −1/C     s ]      [ 0 ]

         = (1/(s^2 + (R/L)s + 1/LC)) [R  0] [ s      −1/L ] [1/L]
                                            [1/C  s + R/L ] [ 0 ]

         = (R/L)s / (s^2 + (R/L)s + 1/LC).    (3.95)
3.6. State-space representations 51
Note that the elements of the circuit are connected in series, so that the impedance of the circuit is

    Z(s) = V(s)/I(s) = sL + 1/(sC) + R = (s^2 LC + sCR + 1)/(sC),    (3.96)

and

    H(s) = R I(s)/V(s) = sRC/(s^2 LC + sCR + 1),    (3.97)

which is the same result. Note that there is a zero in the transfer function at s = 0, because of the blocking of DC signals by the capacitor.
The response to initial conditions is

    C(sI − A)^{-1} x(0) = R (s x1(0) − (1/L) x2(0)) / (s^2 + (R/L)s + 1/LC),    (3.98)
where x1(0) is the current in the inductor and x2 (0) is the voltage on the ca-
pacitor, both at t = 0. The denominator in the expressions is det(sI − A) =
s2 +(R/L)s+ 1/LC, and the roots of the polynomial are the poles of the system.
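The statement that the poles are the eigenvalues of A can be checked numerically. A sketch (not from the text) with the assumed values R = L = C = 1, using the fact that det(sI − A) = s^2 − trace(A) s + det(A) for a 2 × 2 matrix:

```python
import cmath

# A matrix of the RLC circuit, from (3.93), with R = L = C = 1.
R, L, C = 1.0, 1.0, 1.0
A = [[-R / L, -1.0 / L],
     [1.0 / C, 0.0]]

trace = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
c1, c0 = -trace, det  # det(sI - A) = s^2 + c1*s + c0

# The eigenvalues of A (= poles of the system) from the quadratic formula.
disc = cmath.sqrt(c1 * c1 - 4 * c0)
poles = ((-c1 + disc) / 2, (-c1 - disc) / 2)
print(c1, c0)  # 1.0 1.0 -> s^2 + (R/L)s + 1/(LC), as expected
```

The computed poles are −0.5 ± j0.866, complex with negative real part, so the transient decays with oscillations.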
[Figure: block diagram of the realization: a chain of n integrators with states xn, ..., x1, feedback gains an−1, ..., a0, output gains bn−1, ..., b0, and direct feedthrough q0.]
To prove that the state-space system indeed has the required transfer func-
tion, note that the equations of the system are
and

    X1(s) = U(s) / (s^n + an−1 s^{n−1} + · · · + a1 s + a0).    (3.103)

Therefore

    Y(s) = [(bn−1 s^{n−1} + · · · + b1 s + b0) / (s^n + an−1 s^{n−1} + · · · + a1 s + a0)] U(s) + q0 U(s)
         = [R(s)/D(s) + q0] U(s)
         = [N(s)/D(s)] U(s),    (3.104)
For example, the transfer function of the RLC circuit considered earlier,

    H(s) = (R/L)s / (s^2 + (R/L)s + 1/LC),    (3.105)

is obtained with

    A = [   0      1 ],    B = [0],    C = [0  R/L],    D = 0.    (3.106)
        [−1/LC  −R/L]          [1]
Note that this state-space representation is different from the one that gave rise
to the transfer function. Indeed, a state-space model is not unique. For a given
state-space model, another model can be obtained by applying what is called a
similarity transformation. A new state z is defined through

    z = P x,    (3.107)

where P is an invertible matrix, so that

    ż = P ẋ = P Ax + P Bu = P AP^{-1} z + P Bu
    y = Cx + Du = CP^{-1} z + Du.    (3.108)
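As a check (a sketch with assumed values, not from the text), the two A matrices of (3.93) and (3.106) have the same characteristic polynomial, as expected for two realizations of the same transfer function:

```python
# Characteristic polynomial coefficients (s^2, s^1, s^0) of a 2x2 matrix:
# det(sI - A) = s^2 - trace(A) s + det(A).
def char_poly_2x2(A):
    trace = A[0][0] + A[1][1]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return (1.0, -trace, det)

R, L, C = 2.0, 0.5, 0.25  # illustrative component values
A_circuit = [[-R / L, -1.0 / L], [1.0 / C, 0.0]]        # model (3.93)
A_canonical = [[0.0, 1.0], [-1.0 / (L * C), -R / L]]    # model (3.106)

# Both give det(sI - A) = s^2 + (R/L)s + 1/(LC).
assert char_poly_2x2(A_circuit) == char_poly_2x2(A_canonical)
```

The eigenvalues, and hence the poles, are therefore the same for both representations.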
3.7 Problems
Problem 3.1: A model of a brush DC motor without load is
    L di/dt = v − Ri − Kω
    J dω/dt = Ki,    (3.109)
where R (Ω) is the rotor resistance, L (H) is the rotor inductance, K (N m/A
or V s) is the motor torque constant (also the back-emf constant), J (kg m2 ) is
the inertia of the motor. The input of the system is the voltage v (V) applied
to the motor and the output is the rotor velocity ω (rad/s). The current i (A)
is considered to be an internal variable (or “state”).
(a) Find the transfer function from v to ω. Give the values of the poles and zeros
of the transfer function.
(b) Find the approximate transfer function that is obtained when L = 0, and
give the values of its poles and zeros.
(c) Compare the numerical values obtained for parts (a) and (b) when R = 0.7 Ω, L = 2.5 × 10^−3 H, K = 0.07 N m/A, and J = 5.7 × 10^−5 kg m^2.
Problem 3.2: (a) Find the transfer function from X(s) to Y (s) for the system
shown in Fig. 3.21.
[Fig. 3.21: block diagram with input x, output y, and blocks H1(s), H2(s), and H3(s).]
(b) Repeat part (a) for the system shown in Fig. 3.22.
[Fig. 3.22: block diagram with input x, output y, and blocks H1(s), H2(s), H3(s), and H4(s).]
(b) H(s) = 1/(s^2 + 4)
(c) H(s) = s/(s + 3)^2
(d) H(s) = s/(s^2 − 4)
(e) H(s) = 1/(s(s + 1))
(f) H(s) = 1/(s^2(s + 1))
For the unstable systems, give an example of a bounded input that yields an
unbounded output.
Problem 3.4: For the systems given below, calculate the DC gain. Then,
calculate the step responses using partial fraction expansions and compare the
steady-state values to the values predicted by the DC gain.
(a) H(s) = 2/(s^2 + 2s + 1)
(b) H(s) = (−s − 2)/(s^2 + 2s + 2)
[Figure: circuit with source v1, resistor R, inductor L, capacitor C, and output voltage v2.]
(c) Calculate the additional response due to an initial current in the inductor
and an initial voltage on the capacitor using C(sI − A)−1 x(0). Give an estimate
of the time required for the transient voltage to decay to negligible values when
R = L = C = 1.
Problem 3.7: (a) Calculate the Laplace transform Y (s) of the solution of the
differential equation
    d^2y(t)/dt^2 + 4 dy(t)/dt + 29 y(t) = cos(t),    (3.110)
[Figure: circuit with source v1, two inductors L, capacitor C, resistor R, and output voltage v2.]

The input is v1 and the output is v2. Give the equation that must be solved to find the poles of the system and solve it for R = L = C = 1.
The input is v1 and the output is v2. Give the equation that must be solved to
find the poles of the system and solve it for R = L = C = 1.
Problem 3.9: (a) Find the response y(t) of the system with transfer function H(s) = 1/(s(s + 1)) and input x(t) = 1.
(b) Find the response y(t) of the system with transfer function H(s) = 1/(s(s + 1)) and input x(t) = sin(t).
(c) Is a system BIBO stable if its response to a step input is bounded?
(d) Is a system BIBO unstable if its response to a step input is unbounded?
(e) Is the response of H(s) = 1/((s + 1)^2(s + 4)) to X(s) = (s + 3)/(s(s + 4)) bounded?
Does it converge to a steady-state value? If so, to what value?
(f) Is the response of H(s) = 1/((s + 1)^2(s + 4)) to X(s) = 1/(s + 1)^2 bounded?
Does it converge to a steady-state value? If so, to what value?
Problem 3.10: Find the transfer function H(s) = Y (s)/X(s) for the system
of Fig. 3.25.
3.7. Problems 57
[Fig. 3.25: block diagram with input x, output y, and blocks H1(s), H2(s), H3(s), and H4(s).]
Problem 3.11: (a) Give the transform Y (s) for the system of Fig. 3.26. As-
sume that the input has transform U(s), and that the integrators have initial
conditions x1 (0) and x2(0).
[Fig. 3.26: block diagram with input u, two integrators with states x1 and x2, and output y.]
(b) Give the steady-state value of the output when the input u(t) = 5 for all t
(if the limit does not exist, indicate why).
Problem 3.12: (a) Find the transfer function W (s)/R(s) for the system of
Fig. 3.27.
(b) Given that the steady-state response of P (s) is yss (t) = sin(10t − 60◦ ) when
x(t) = sin(10t), what is the steady-state response wss (t) of the system of part (a)
when r(t) = sin(10t)? Give the condition that P (s) must satisfy for the result
to be true.
Problem 3.13: (a) Write a state-space model for the circuit of Fig. 3.28 and,
using the A matrix, give the polynomial whose roots are the poles of the system.
(b) Using the method of your choice (standard circuit calculations using complex
impedances are recommended), obtain the transfer function from u to y for the
circuit of part (a). Give the DC gain of the system and the location of the
[Fig. 3.28: circuit with source u, resistor R, inductor L, two capacitors C, and output voltage y.]
zero(s).
Problem 3.14: Calculate the transfer function P (s) = Y (s)/X(s) for the
system shown in Fig. 3.29.
Problem 3.15: (a) Calculate the transient response ytr (t) associated with the
output y(t) of a system with transfer function
    P(s) = s/(s + 1)^2    (3.111)
[Fig. 3.29: block diagram with input x, output y, and blocks H1(s), H2(s), and H3(s).]
Problem 3.16: (a) For the circuit of Fig. 3.30, calculate the voltage v2(t) that is observed for t ≥ 0 if v1(t) = 15 V, R = 10 Ω, C = 100 µF, and the initial voltage on the capacitor is vc(0) = 5 V. Sketch the voltage v2(t), being careful to label the axes precisely.
[Fig. 3.30: circuit with source v1, resistor R, capacitor C (voltage vC), and output voltage v2.]
(b) What are the poles of a system whose state-space representation is such that:
• the matrix A = 0,
Problem 3.17: (a) Indicate whether the following system is BIBO stable.
    H(s) = (s − 1) / ((s + 2 + j)^2 (s + 2 − j)^2).    (3.112)
(b) Indicate whether the following system is BIBO stable.
    H(s) = (s + 1) / ((s + j)(s − j)).    (3.113)
60 Chapter 3. Continuous-time systems
(c) Indicate whether the signal with the following transform is bounded, and
whether it converges.
    X(s) = (s^2 + 9) / ((s^2 − 1)(s^2 + 4)).    (3.114)
(d) A signal x(t) = cos(4t) is applied to a system with transfer function H(s) =
1/(s2 + 4). Is the output bounded?
(e) A signal that converges to zero is applied to a BIBO stable system. The
transform of the signal and the transfer function of the system are proper, ratio-
nal functions of s. Indicate whether the output of the system: (1) is bounded,
(2) converges, and (3) converges to zero.
Problem 3.18: (a) Find the transfer function H(s) = Y (s)/X(s) for the system
of Fig. 3.31.
[Fig. 3.31: block diagram with input x, output y, and two 1/s blocks.]
(b) What can you say about the steady-state value of the output yss (t) for a
constant input x(t) = 5 ?
Problem 3.19: (a) Using the frequency response, determine what the steady-
state output yss (t) is for an input x(t) = sin(t) and a system with transfer
function
    H(s) = −(s − 1)/(s + 1)^2.    (3.115)
(b) For the system and input signal of part (a), use partial fraction expansions to
determine the transient response ytr (t) (only the transient response is needed).
Problem 3.20: Write a state-space model for the circuit of Fig. 3.32.
Problem 3.21: (a) Consider the system with input x and output y described
by the differential equation
    d^2y/dt^2 = ay + bx.    (3.116)
[Fig. 3.32: circuit with source u, resistor R, capacitor C, two inductors L, states x1, x2, x3, and output y.]
    dx1(t)/dt = x2
    dx2(t)/dt = −2x1 − 3x2 + u(t),    (3.117)
where x1(0) = 1, x2(0) = 0, and u(t) is a step input of magnitude 1.
(b) Write a state-space realization for the system in Fig. 3.33 and give the values
of the poles of the system.
[Fig. 3.33: block diagram with input u, three integrators with states x1, x2, x3, gains of 2, and output y.]
64 Chapter 4. Stability and performance of control systems
[Figure: feedback loop with reference r, error e, control u, output y, compensator C, and plant P.]

[Figure: computer control system with operator, computer (compensator), D/A, actuator, system, sensor, and A/D.]
One distinguishes between feedforward and feedback control systems (both shown
in Fig. 4.3). A feedforward control system typically implements an approximation
of the inverse of the plant. Generally, the advantages of a feedforward control
system are that stability is easier to maintain, and that no output sensor is
needed (hence, the system is insensitive to measurement noise). However, feed-
back systems are typically preferred because the effect of disturbances may be
reduced, and plant variations or uncertainties may be compensated for. Feed-
back systems are also the only option available for unstable plants. In practice,
control systems are often a blend of feedforward and feedback control.
4.2. Proportional control 65
[Fig. 4.3: feedforward (prefiltering) configuration with C followed by P, and feedback configuration with C and P in a loop.]
For stability and for an adequate transient response, the desired locations of
closed-loop poles are shown in Fig. 4.4. The poles should be sufficiently far in
the left half-plane that the response of the system is fast, and sufficiently close
to the real axis that the responses do not exhibit large transient oscillations.
[Fig. 4.4: desired region for closed-loop poles in the s-plane, bounded by damping and settling-time constraints.]
C(s) = kP , (4.1)
where kP > 0 is a fixed gain. The feedback system is shown on Fig. 4.5 for a
plant
    P(s) = 1/(s + 1).    (4.2)
A feedforward gain k0 was added that will be discussed shortly.
The closed-loop transfer function is given by

    PCL(s) = Y(s)/R(s) = k0 kP / (s + 1 + kP).    (4.3)
[Fig. 4.5: feedback loop with feedforward gain k0, proportional gain kP, and plant 1/(s + 1).]

The closed-loop pole is located at

    s = −1 − kP.    (4.4)
The system is stable for all kP > 0, and the DC gain of the transfer function is
    PCL(0) = k0 kP / (1 + kP).    (4.5)
The feedforward gain was inserted in Fig. 4.5 so that the DC gain could be made
equal to 1 by setting
    k0 = (1 + kP) / kP.    (4.6)
Then, the output will track a constant reference input in the steady-state.
In closed-loop, the original pole at s = −1 moves to an arbitrary value
determined by kP . The closed-loop system response can be made to respond
faster. For example, for kP = 1, k0 = 2,

    Y(s)/R(s) = 2/(s + 2),    (4.7)

which means that the closed-loop system responds twice as fast as the original system. The input signal is

    U(s) = [2(s + 1)/(s + 2)] R(s).    (4.8)
The signal is the same as if a feedforward controller was used to cancel the
pole of the plant and replace it by a faster pole. However, there are fundamen-
tal differences between moving a pole by pole/zero cancellation and by using
feedback.
For a unit step input R(s) = 1/s,

    U(s) = 1/s + 1/(s + 2)  ⇔  u(t) = 1 + e^{−2t}.    (4.9)
4.3. Steady-state error and integral control 67
Figure 4.6: Plant output (left) and control input (right) for the open-loop system
(dashed) and with proportional feedback (solid)
The response of the system and the input signal are shown on Fig. 4.6 for the
open-loop and closed-loop systems. One finds that the response is accelerated,
although the result is obtained by applying a much larger input signal at the
beginning of the response.
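The trade-off between speed and input magnitude can be seen from the closed-form responses. A sketch (not from the text) comparing the open-loop step response of P(s) = 1/(s + 1) with the closed-loop response 2/(s + 2) of (4.7):

```python
import math

def y_open(t):
    return 1.0 - math.exp(-t)        # step response of P(s) = 1/(s + 1)

def y_closed(t):
    return 1.0 - math.exp(-2.0 * t)  # step response of 2/(s + 2)

def u_closed(t):
    return 1.0 + math.exp(-2.0 * t)  # control input from (4.9)

print(u_closed(0.0))  # 2.0 -- twice the steady-state value of the input
```

The closed-loop output approaches its final value twice as fast, but the input starts at 2 instead of 1, illustrating the larger initial control effort.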
In practice, a pole cannot be moved arbitrarily far in the left half-plane due
to input constraints, as well as other factors. However, feedback is helpful to
increase the speed of response within limits, to improve damping, or to stabilize a
system. Fig. 4.5 required the addition of a feedforward gain to achieve tracking.
An alternative and often preferable solution consists in using a controller with
a pole at s = 0.
[Fig. 4.7: standard feedback loop with reference r, error e, control u, output y, controller C(s), and plant P(s).]
A standard control system is shown in Fig. 4.7, where we assumed that the
plant and control systems are linear time-invariant and described by transfer
functions P (s) and C(s), respectively. A significant signal in this system is the
tracking error

    e(t) = r(t) − y(t),

and its steady-state value

    ess = lim_{t→∞} e(t),

assuming that the limit exists. The reference input r(t) is taken to be constant, i.e.,

    r(t) = rm,    R(s) = rm/s.    (4.12)
It is typical for the reference input of a control system to be constant for rel-
atively long periods of time and for the steady-state error to be a significant
consideration. The infinite-time limit is really an approximation of time periods
that are long compared to the transient response of the system. For example,
the reference speed of a cruise control system may remain constant for minutes,
with the speed itself converging within a few seconds.
The Laplace transform may be used to analyze the feedback system of Fig. 4.7. The tracking error is given by

    E(s) = R(s) − Y(s) = R(s) − P(s)C(s)E(s),

so that

    E(s) = [1/(1 + P(s)C(s))] R(s).    (4.14)
According to the analysis of step responses, the steady-state error for a constant
reference input is
    ess = [1/(1 + P(0)C(0))] rm,    (4.15)

where 1/(1 + P(0)C(0)) is the DC gain of the transfer function from r to e.
Recall that the closed-loop system must be stable for the steady-state error to
be well-defined.
Alternatively, one could calculate Y (s) using
    Y(s) = [P(s)C(s)/(1 + P(s)C(s))] R(s).    (4.16)
For the output signal to converge to the reference input, the DC gain of the
transfer function from r to y should be equal to 1. It is easy to check that, since
ess = rm − yss , the result is the same as the one obtained with (4.15).
Next, we define perfect tracking as the condition in which ess = 0, or yss = rm. Let

    P(s) = np(s)/dp(s),    C(s) = nc(s)/dc(s),    (4.18)

where np(s), dp(s), nc(s), and dc(s) are polynomials. The condition for perfect tracking is that

    1/(1 + P(0)C(0)) = dp(0)dc(0) / (np(0)nc(0) + dp(0)dc(0)) = 0,    (4.19)

i.e., that dp(0)dc(0) = 0, which means that P(s) or C(s) must have a pole at s = 0.
There is also a technical requirement that neither P (s) nor C(s) have a zero
at s = 0. Obviously, if the plant has a zero at s = 0, i.e., has zero DC gain,
there is no possibility of tracking constant reference inputs.
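The steady-state error formula (4.15) is easy to evaluate numerically. A sketch (not from the text), using the illustrative plant P(s) = 1/(s + 1), so P(0) = 1, under proportional control C(s) = kP with a unit constant reference (rm = 1):

```python
# Steady-state error ess = rm/(1 + P(0)C(0)) under proportional control.
def ess(P0, kP, rm=1.0):
    return rm / (1.0 + P0 * kP)

for kP in (1.0, 10.0, 100.0):
    print(kP, ess(1.0, kP))
# The error shrinks as kP grows, but only a pole at s = 0 in the loop
# (dp(0)dc(0) = 0) makes it exactly zero.
```

This illustrates why high gain reduces, but does not eliminate, the steady-state error of a proportional controller.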
[Fig. 4.8: feedback loop with reference r, error e, output y, controller C(s), plant P(s), and disturbance d entering at the plant input.]
as shown in Fig. 4.8. The reference input r(t) is now assumed to be zero, but may
be added later using superposition (since the systems are linear time-invariant).
For r(t) = 0, the Laplace transform gives

    E(s) = −Y(s) = −P(s) (D(s) + C(s)E(s)),

so that

    E(s) = [−P(s)/(1 + P(s)C(s))] D(s).    (4.22)

For a constant disturbance d(t) = dm, the steady-state error is

    ess = [−P(0)/(1 + P(0)C(0))] dm.    (4.23)
Note that zero steady-state error is achieved if the plant has a zero at the ori-
gin, a fortunate but uninteresting case, since the plant then rejects all constant
signals, not just disturbances. Therefore, perfect rejection of constant dis-
turbances requires that:
Assume that the plant is a constant gain, or that the plant is stable and that one approximates the plant by its DC gain, i.e., P(s) ≃ P(0). With an integral controller C(s) = kI/s (Fig. 4.9) whose gain is set to

    kI = g P^{-1}(0),    (4.29)
Figure 4.9: Integral control based on the steady-state response of the plant
so that
    Y(s) = [g/(s + g)] R(s) + [s P(0)/(s + g)] D(s).    (4.31)
The transfer function from the reference input to the output is a stable first-order system with a pole at s = −g and a unity DC gain. The transfer function from the disturbance is also a stable first-order system with a pole at s = −g but
with a zero DC gain. Therefore, constant reference inputs are perfectly tracked
and constant disturbances are perfectly rejected. The speed of response of the
system can be directly controlled by choice of the gain g.
Without the steady-state approximation, the closed-loop transforms become

    Y(s) = [g P(s)/(s P(0) + g P(s))] R(s) + [s P(0) P(s)/(s P(0) + g P(s))] D(s).

The transfer function from the reference input to the output still has unity
DC gain and the transfer function from the disturbance still has zero DC gain.
Therefore, the properties remain true as long as the closed-loop system is stable.
Further, still assuming stability, the results hold despite possible errors in the
estimate of P (0).
In general, if the gain g is small, the poles of the closed-loop system will be
approximately those of the plant, plus a pole close to s = −g. This integral
control method is effective in providing tracking control for a stable system,
although not necessarily one that provides the fastest possible responses.
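The robustness of integral control to errors in the DC-gain estimate can be demonstrated by simulation. The sketch below (not from the text, with assumed values) applies C(s) = kI/s to the stable plant P(s) = 1/(s + 1), so P(0) = 1, but computes the gain from a deliberately wrong estimate of P(0); the output still tracks a constant reference exactly:

```python
# Forward-Euler simulation of integral control of P(s) = 1/(s + 1).
def track(g, P0_estimate, r=1.0, dt=1e-3, T=40.0):
    y = 0.0   # plant state: dy/dt = -y + u
    u = 0.0   # integrator state: du/dt = kI (r - y)
    kI = g / P0_estimate
    for _ in range(int(T / dt)):
        y, u = y + dt * (-y + u), u + dt * kI * (r - y)
    return y

y_final = track(g=0.5, P0_estimate=1.5)  # estimate is 50% too large
assert abs(y_final - 1.0) < 1e-3         # tracking despite the gain error
```

The integrator keeps adjusting u until the error is zero, so the steady-state value does not depend on the accuracy of the P(0) estimate, as long as the closed loop remains stable.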
In contrast, consider the feedforward approach shown in Fig. 4.10. This
system will also provide unity DC gain from the reference input to the output.
There is no issue of stability with this controller if the plant is stable. However,
there is also no rejection of the disturbance, and any error in P (0) results in an
error in the tracking of the reference.
Figure 4.10: Feedforward control based on the steady-state gain of the plant
A proportional-integral-derivative (PID) controller combines three terms:

    u(t) = kP e(t) + kI ∫_0^t e(τ) dτ + kD de(t)/dt,    (4.33)

so that

    C(s) = kP + kI/s + kD s = (kI + kP s + kD s^2)/s.    (4.34)

In practice, the derivative term is usually filtered, giving

    C(s) = kP + kI/s + kD [as/(s + a)],    (4.35)
[Figure: two-degree-of-freedom structure with prefilter CF(s), controller C(s), and plant P(s).]
so that

    Y(s) = [np(s)nc(s)/dCL(s)] R(s) + [np(s)dc(s)/dCL(s)] D(s) + dc(s)n0(s)/dCL(s),    (4.39)

where dCL(s) = dp(s)dc(s) + np(s)nc(s) is the closed-loop polynomial and n0(s) is a polynomial determined by the initial conditions.
The overall response is the sum of the response to the reference input, the
response to the disturbance, and the response to the initial conditions. For all
three components, the denominator polynomial is the closed-loop polynomial
dCL (s). In all regards, the poles of the system are moved by the feedback. An
unstable system can be stabilized, including its response to initial conditions.
In contrast, pole/zero cancellation in the feedforward control scheme of Fig. 4.3
would eliminate poles from the response to the reference input, but would not
modify the response to the disturbance or the response to the initial conditions.
4.5. Routh-Hurwitz criterion 75
find necessary and sufficient conditions such that all the roots of D(s) are in the
open left half-plane (OLHP), i.e., with Re(s) < 0. Some partial answers to this
problem are simple, specifically:
• all roots of d(s) = s^n + an−1 s^{n−1} + · · · + a0 are in the OLHP ⇒ an−1 > 0, · · · , a0 > 0.
Therefore, if any coefficient is zero or negative, there must be a root on the jω-
axis or in the open right half-plane. For polynomials of degree 2, the condition
is necessary and sufficient, but it is only necessary for higher degrees. In other
words, it is possible for a third-order polynomial with all positive coefficients to have some roots with Re(s) ≥ 0. The coefficients must satisfy additional
conditions, specified by the Routh-Hurwitz criterion, in order for all the poles
to be in the OLHP.
1. Create the first two rows of the array using the coefficients of the polynomial and the following pattern:

    s^n      an      an−2    an−4    · · ·
    s^{n−1}  an−1    an−3    an−5    · · ·

When a0 is reached, fill the rest of the array with zeros or leave blank. Label the first row s^n and the second row s^{n−1}, and complete the array row by row, each new row being computed from the two rows above it.

2. Check the coefficients of the first column. The polynomial has all roots in the open left half-plane ⇐⇒ all the coefficients of the first column are nonzero and of the same sign.
4.5.3 Examples
Example 1: consider the polynomial

    D(s) = (s^2 + 2s + 5)(s^2 + 4s + 4) = s^4 + 6s^3 + 17s^2 + 28s + 20.    (4.43)

The polynomial has all positive coefficients and has all roots in the open left half-plane (its roots are at −1 ± 2j and −2 (double root)). The Routh array is
given by

    s^4    1                                          17     20
    s^3    6                                          28     0
    s^2    (6 × 17 − 28)/6 = 37/3                     20     0
    s^1    ((37/3) × 28 − 6 × 20)/(37/3) = 676/37     0      0
    s^0    20                                         0      0

Because the coefficients of the first column are all positive, the test confirms that all the roots are in the open left half-plane.
Example 2: consider the polynomial

    D(s) = (s^2 − 2s + 5)(s^2 + 4s + 4) = s^4 + 2s^3 + s^2 + 12s + 20.    (4.44)

Again, the coefficients are all positive, but we would be mistaken to infer that all the roots are in the open left half-plane. Indeed, the roots may be computed to be 1 ± 2j and −2 (double root). This time, the array is computed to be

    s^4    1                                1                            20
    s^3    2                                12                           0
    s^2    (2 × 1 − 1 × 12)/2 = −5          (2 × 20 − 1 × 0)/2 = 20      0
    s^1    ((−5) × 12 − 2 × 20)/(−5) = 20   ((−5) × 0 − 2 × 0)/(−5) = 0  0
    s^0    20

The coefficients of the first column are not all of the same sign, which confirms the fact that the roots are not all in the open left half-plane.
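The construction of the array lends itself to a short program. The sketch below (not from the text) computes the first column with exact rational arithmetic and reproduces Examples 1 and 2; it does not handle the singular case of a zero appearing in the first column:

```python
from fractions import Fraction

def routh_first_column(coeffs):
    # coeffs: polynomial coefficients, highest power first
    coeffs = [Fraction(c) for c in coeffs]
    row1 = coeffs[0::2]                      # s^n row
    row2 = coeffs[1::2]                      # s^(n-1) row
    row2 += [Fraction(0)] * (len(row1) - len(row2))
    width = len(row1)
    first_col = [row1[0], row2[0]]
    while len(first_col) < len(coeffs):
        # each entry of a new row is computed from the two rows above it
        new = [(row2[0] * row1[j + 1] - row1[0] * row2[j + 1]) / row2[0]
               for j in range(width - 1)]
        new.append(Fraction(0))
        row1, row2 = row2, new
        first_col.append(row2[0])
    return first_col

col1 = routh_first_column([1, 6, 17, 28, 20])  # Example 1: 1, 6, 37/3, 676/37, 20
col2 = routh_first_column([1, 2, 1, 12, 20])   # Example 2: 1, 2, -5, 20, 20
assert all(c > 0 for c in col1)                # all roots in the OLHP
assert any(c < 0 for c in col2)                # sign change: roots in the RHP
```

The exact fractions 37/3 and 676/37 of Example 1 appear in `col1`, and the −5 of Example 2 in `col2`.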
Example 3: a most interesting feature of the Routh-Hurwitz criterion is that
it may be applied to polynomials with variables as coefficients, as opposed to
a polynomial with fixed coefficients. In the context of feedback systems, this
feature translates into an ability to find conditions on controller parameters that
ensure closed-loop stability. Fig. 4.13 shows an example of such an application.
The compensator is a gain k whose value is a free parameter. The closed-loop
transfer function is given by

    PCL(s) = [k/(s + 1)^3] / [1 + k/(s + 1)^3] = k / (s^3 + 3s^2 + 3s + (1 + k)).    (4.45)
[Fig. 4.13: feedback loop with gain k and plant 1/(s + 1)^3.]

The Routh array is

    s^3    1                          3
    s^2    3                          1 + k
    s^1    (9 − (1 + k))/3 = (8 − k)/3    0
    s^0    1 + k

and shows that stability is obtained if and only if 1 + k > 0 and 8 − k > 0, i.e., if and only if

    −1 < k < 8.

This interval is the range of gain k for which the system of Fig. 4.13 is stable.
where the elements that are zero by construction are omitted from the array. A
polynomial p3(s) is defined that is the remainder of the polynomial division of
p1(s) by p2(s). Therefore

    p1(s) = q1(s) p2(s) + p3(s),

where q1(s) = an s/an−1 is the quotient. The third row of the array contains the coefficients of the remainder.
Repeating the procedure, polynomials pk (s) are constructed that are of the form
[Figures: feedback loop with controller C(s) and plant P(s); equivalent loop with gain k and transfer function G(s).]
Therefore, the root-locus is the locus of the roots of the polynomial D(s) + kN(s) for k = 0 → ∞. We will consider proper systems (deg D(s) ≥ deg N(s)), so that the number of closed-loop poles is equal to the number of open-loop poles.
We begin with some examples to gain insight into what a root-locus may
look like.
Example 1: consider the system with
    G(s) = 1/(s(s + 2))  ⇒  D(s) + kN(s) = s^2 + 2s + k.    (4.53)

The closed-loop poles are

    s^2 + 2s + k = 0  ⇒  s = (−2 ± √(4 − 4k))/2 = −1 ± √(1 − k).    (4.54)
4.6. Root-locus method 81
k s1 s2
0 −2 0
√ √
0.5 −1 + 0.5 = −1.7 −1 − 0.5 = −0.3
1 −1 −1
2 −1 + j −1 − j
5 −1 + 2j −1 − 2j
101 −1 ± 10j −1 ± 10j
From these results, the locus of the two poles as k varies from 0 to ∞ can be
deduced to be as shown in Fig. 4.16. In general, the root-locus is described by
smooth curves, or branches, whose number is equal to the number of open-loop
poles. Note that, for this example, the response of the system is stable for all k,
but becomes oscillatory for k > 2.
Example 2: consider the system with

G(s) = (s + 1)/(s(s + 2)) ⇒ D(s) + kN(s) = s^2 + (2 + k)s + k. (4.55)
The two poles are real for all k. A representative set of values for the poles is
given below.
k      s1       s2
0      0        −2
1      −0.4     −2.6
100    −0.99    −101
∞      −1       −∞
and the root-locus may be deduced to be the one shown in Fig. 4.17.
Example 3: the next example is similar to the previous one, but with the
location of the nonzero pole and the zero reversed. Specifically
G(s) = (s + 2)/(s(s + 1)) ⇒ D(s) + kN(s) = s^2 + (1 + k)s + 2k, (4.57)

s1,2 = [−(1 + k) ± √((1 + k)^2 − 8k)] / 2. (4.58)
Whether the poles are real or complex is determined by the sign of the function

f(k) = (1 + k)^2 − 8k = k^2 − 6k + 1,

which has roots at k = 3 ± 2√2, approximately 0.2 and 5.8. Therefore, the
function f(k) has the shape shown in Fig. 4.18.
In terms of the original polynomial, we may conclude that the poles are real
for k between 0 and 0.2, complex between 0.2 and 5.8, and real again beyond
5.8:

k      s1        s2
0      0         −1
       real      real
0.2    −0.6      −0.6
       complex   complex
5.8    −3.4      −3.4
       real      real
100    −2.02     −98.98
multiple zeros on the real axis with the same set of angles. When roots merge
on the real axis, one set of angles defines the angles formed by the incoming
branches, and the other set defines the angles formed by the outgoing branches.
Examples
Fig. 4.20 shows examples of application of the basic rules. The angles of the
asymptotes are: 180◦ if n − m = 1, (90◦, −90◦) if n − m = 2, (180◦ , 60◦ , −60◦ ) if
n−m = 3, (45◦ , −45◦, 135◦, −135◦) if n−m = 4,... In cases 1 and 3 of Fig. 4.20,
the angles are ±90◦ . In cases 2 and 4, the angles are ±60◦ and 180◦. In general,
application of the rules is fairly straightforward, although inferring the shape
of the root-locus becomes easier with experience, and sometimes requires some
amount of guessing.
1. G(s) = 1/(s(s + 2)). Centroid: −1.
2. G(s) = 1/(s^2(s + 2)). Centroid: −2/3.
3. G(s) = (s + 1)/(s^2(s + 2)). Centroid: −1/2.
4. G(s) = (s + 1)/(s^2(s + 2)^2). Centroid: (−4 + 1)/3 = −1.
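The centroids above follow from the standard rule, centroid = (sum of poles − sum of zeros)/(n − m), with asymptote angles (2l + 1)·180°/(n − m). A small Python sketch (an illustration, not part of the original text) applied to case 4:

```python
# Centroid and asymptote angles from the basic root-locus rules.
def centroid(poles, zeros):
    return (sum(poles) - sum(zeros)) / (len(poles) - len(zeros))

def asymptote_angles(poles, zeros):
    q = len(poles) - len(zeros)
    return sorted(((2 * l + 1) * 180.0 / q) % 360.0 for l in range(q))

# Case 4: G(s) = (s + 1)/(s^2 (s + 2)^2)
assert centroid([0, 0, -2, -2], [-1]) == -1.0
assert asymptote_angles([0, 0, -2, -2], [-1]) == [60.0, 180.0, 300.0]
```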
For the angles of departure on the real axis, the set of possible angles are
shown on Fig. 4.21. In other words, the possible patterns are the same as for
the asymptotes, plus the patterns rotated by 180◦ divided by the number of
poles. In cases 2, 3 and 4 of Fig. 4.20, poles leave at ±90◦ (the other pattern
is excluded because the portions of the real axis on both sides of the poles do
not belong to the root-locus). The same set of angles applies when poles merge
together on the real axis. Then, poles reach the so-called breakaway point with
one set of angles, and they leave with the other. In case 1 of Fig. 4.20, poles
merge with incoming branches at (0◦, 180◦) and with outgoing branches at (90◦,
−90◦). Note that, although not shown on the examples, the same set of angles
also applies when branches reach multiple zeros on the real axis.
Figure 4.21: Possible angle patterns. 2 poles: ±90° or 0°, 180°. 3 poles: ±60°,
180° or 0°, ±120°. 4 poles: ±45°, ±135° or 0°, 180°, ±90°.
some solutions may correspond to k < 0. One should evaluate k = −D(s)/N (s)
at the roots. If k is real and k > 0 for some root, the root gives the location of
a breakaway point from the real axis.
Crossing of the jω-axis: the values of k for which the root-locus crosses the
jω-axis can be determined by applying the Routh-Hurwitz criterion to (4.61).
Given k, the roots of (4.61) determine the locations where the branches cross
the jω-axis.
Angles of departure and arrival: the angle θ between the tangent to the root-
locus close to an open-loop pole and the direction of the real axis can be deter-
mined as follows. Assume that the angle of departure is calculated for the pole
p1 (the procedure can be repeated for other poles in a similar manner). Assume
that the pole is not repeated. Let αi be the angle between the direction of the
real axis and the vector drawn from the pole pi to the pole p1 . Let βj be the
similar angles for the zeros. The angle θ is given by
θ = 180° − Σ_{i=2}^{n} αi + Σ_{j=1}^{m} βj. (4.63)
The procedure can also be applied to determine the angles of arrival to the
zeros. In this case, the procedure is identical, except that θ is replaced by −θ.
Therefore, for the angle of arrival to a zero of multiplicity r,

−θ = (1/r) (180° − Σ_{i=1}^{n} αi + Σ_{j=r+1}^{m} βj + l·360°), l = 0, ..., r − 1. (4.65)
Example - breakaway points from the real axis: consider the system
G(s) = (s + 2)/(s(s + 1)), (4.66)
whose root-locus was obtained before and is shown in Fig. 4.22. The breakaway
points had already been determined to be at −0.6 and −3.4. We may verify
these values using the fact that

dG(s)/ds = [s(s + 1) − (s + 2)(2s + 1)] / (s(s + 1))^2 = −(s^2 + 4s + 2) / (s(s + 1))^2, (4.67)

so that

dG(s)/ds = 0 ⇐⇒ s^2 + 4s + 2 = 0. (4.68)
The roots of this polynomial are s1,2 = −2 ± √2 ≈ −0.6 and −3.4, which
confirms the earlier result. In general, however, one should verify that the root
is really a breakaway point by computing k. For s = −2 + √2 ≈ −0.6,

k = −D(s)/N(s) = −(−2 + √2)(−1 + √2)/√2 = 0.17. (4.69)
Since k is real and k > 0, the breakaway point belongs to the root-locus. The
same property can be checked for the other root.
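This verification is easy to automate; the sketch below (not part of the original text) evaluates k = −D(s)/N(s) at both candidate roots:

```python
import math

# Breakaway candidates of G(s) = (s + 2)/(s(s + 1)) are the roots of
# s^2 + 4s + 2 = 0; each is kept only if k = -D(s)/N(s) is real and positive.
def k_at(s):
    return -(s * (s + 1)) / (s + 2)

for s in (-2 + math.sqrt(2), -2 - math.sqrt(2)):
    assert k_at(s) > 0   # both roots are genuine breakaway points

assert abs(k_at(-2 + math.sqrt(2)) - 0.17) < 0.005
```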
Example - crossing of the jω-axis: consider the system

G(s) = 1/(s + 1)^3. (4.70)
In this case, the closed-loop poles may be computed exactly, and are given by

(s + 1)^3 = −k ⇒ s = −1 + k^{1/3} (−1)^{1/3}, (4.71)

or

s1 = −1 + k^{1/3} e^{jπ/3},
s2 = −1 − k^{1/3},
s3 = −1 + k^{1/3} e^{−jπ/3}. (4.72)

The root-locus is shown in Fig. 4.23, and may be shown to satisfy the root-locus
rules. We may easily determine that crossing of the imaginary axis occurs when

k^{1/3} cos(60°) = 1, or k = 8. (4.73)
In general, the roots cannot be computed analytically, and one may apply the
Routh-Hurwitz criterion to find the range. This computation was done before
and yielded (4.46). The locations of the crossings of the imaginary axis are
obtained by letting k = 8 in the closed-loop polynomial. With s^3 + 3s^2 + 3s + 9 =
0, the roots are s = −3 and s = ±j√3, and the complex roots of this equation
correspond to the crossings.
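A numerical check (not part of the original text) confirms that s = ±j√3 lies on the locus at the critical gain k = 8:

```python
import math

# Closed-loop polynomial of 1/(s + 1)^3 at gain k:
# s^3 + 3s^2 + 3s + (1 + k); at k = 8 it factors as (s + 3)(s^2 + 3).
def p(s, k=8.0):
    return s**3 + 3 * s**2 + 3 * s + (1 + k)

assert abs(p(1j * math.sqrt(3))) < 1e-9   # crossing at s = j*sqrt(3)
assert abs(p(-3.0)) < 1e-9                # real root at s = -3
```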
Example - angles of departure: consider the system
G(s) = (s + 3)/((s^2 + 2s + 2)(s + 2)). (4.74)
One denotes
θ = the angle of departure from the given pole p1
αi = the angle from the ith pole to the given pole p1
βj = the angle from the j th zero to the given pole p1
Let the pole p1 = −1 + j. The angles are shown on Fig. 4.24.
The rule says that
θ = 180◦ − α2 − α3 + β1 . (4.75)
From the figure and with tan^{−1}(0.5) = 26.6°, the angle of departure from the
pole is θ = 180° − 90° − 45° + 26.6° = 71.6°.
Figure 4.24: Definition of angles for the computation of the angle of departure
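The computation of (4.75) can be checked numerically with the angles obtained from the pole and zero locations; a short Python sketch (not part of the original text):

```python
import cmath, math

# Angle of departure from p1 = -1 + j for G(s) = (s + 3)/((s^2 + 2s + 2)(s + 2)),
# using theta = 180 - sum(alpha_i) + sum(beta_j) from (4.63).
p1 = -1 + 1j
other_poles = [-1 - 1j, -2]
zeros = [-3]

alphas = [math.degrees(cmath.phase(p1 - p)) for p in other_poles]
betas = [math.degrees(cmath.phase(p1 - z)) for z in zeros]
theta = 180.0 - sum(alphas) + sum(betas)
print(round(theta, 1))  # -> 71.6
```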
A possible root-locus is shown on the left of Fig. 4.25. However, one may be
concerned that the branches leaving the complex poles could cross into the
right half-plane as shown on the right of Fig. 4.25, making the system unstable
for small gain.
Figure 4.25: Two possibilities for the root-locus of the second example for angles
of departure
The relevant angles for the computation of the angle of departure from the
pole at s = j are shown in Fig. 4.26. Since tan−1 (0.5) = 26.6◦, the formula gives
Therefore, the branch leaving s = j indeed leaves the pole as shown on Fig. 4.26
and on the left of Fig. 4.25.
Figure 4.27: Tentative root-locus for the angle-of-arrival example, with zeros at
±2j and two poles at −1 (the angle from the poles to the zero at 2j is 63°)
Oddly, this result forces us to redraw the root-locus of Fig. 4.27 to become the
one shown in Fig. 4.28. In fact, the system becomes unstable for large gain,
which would not have been predicted from the tentative Fig. 4.27.
Figure 4.28: Example for angle of arrival with actual shape of the root-locus
(angle of arrival θ = −54°)
These values are consistent with the root-locus as drawn on the figure.
Note that this example is a good opportunity to compute the location of the
breakaway points on the real axis. Specifically,
Specifically,

dG(s)/ds = [(s^2 + 1)^2 − 2(s^2 + 1) 2s(s + 1)] / (s^2 + 1)^4 = 0 (4.83)

if and only if

3s^2 + 4s − 1 = 0,

whose roots are at s1 = −1.55 and s2 = 0.22. The values of gains k corresponding
to the two roots are given by k = −(s^2 + 1)^2/(s + 1), approximately 21 at s1
and −0.9 at s2. The second root turns out to be a breakaway point for the
complementary root-locus, or root-locus for k < 0, which is discussed next.
Application of the rules yields opposite portions of the real axis, an identical
centroid, asymptotes rotated by 180°/(n − m) (that is, with angles 0° and ±120°),
and a complementary breakaway point at s = 0.22. The angles of departure are
also rotated by 180◦ /r, that is 90◦ for the two imaginary poles (yielding 112.5◦
and 292.5◦ ). The resulting complementary root-locus is shown in Fig. 4.31.
1. zeros in the right half-plane always lead to instability for high gain. Such
zeros are usually undesirable in either the plant or the controller.
(a) P: C(s) = kP
(b) PD: C(s) = kD (s + kP/kD)
(c) PI: C(s) = kP (s + kI/kP)/s
(d) PID: C(s) = kD (s^2 + (kP/kD) s + kI/kD)/s
is stable for all gain, as shown by the root-locus on the left of Fig. 4.35.
However, if an additional real pole is present in the actual system, the root-
locus becomes the one shown on the right of Fig. 4.35. No matter how
far the pole is in the left half-plane, the closed-loop system will become
unstable for large enough gain.
7. Because the positive real axis belongs to the root-locus for k < 0, a system
always becomes unstable for positive feedback of large gain. For small gain,
however, positive feedback can be stabilizing. For example, consider the
root-locus for
s+2
G(s) = . (4.90)
(s + 1)(s2 + 1)
The regular root-locus (k > 0) is shown on the left of Fig. 4.36, while the
complementary root-locus is shown on the right of the figure. For k > 0,
the two poles at s = ±j immediately move to the right half-plane, towards
the ±90◦ asymptotes with centroid at s = 1/2. The angle of departure of
the pole at s = j is equal to θ = 180◦ −90◦ −45◦ + tan−1(0.5) = 71.6◦ . The
system is unstable for all gain k > 0. In contrast, for k < 0, the angle of
departure of the pole at s = j becomes 251.6◦ and the pole moves towards
the left half-plane. The pole at s = 1 moves towards the right half-plane
and, for sufficiently large gain, the system becomes unstable. However, for
some range of gain, the system is stabilized with k < 0.
Root-locus rules are useful to quickly sketch a root-locus and understand how
pole and zero locations affect its shape. For complicated cases or for precision
plots, a modern software package should be used. For example, the shape of the
complementary root-locus of Fig. 4.36 was obtained using the Matlab commands:
num=[1 2];
den=conv([1 1],[1 0 1]);
rlocus(-num,den)
The regular root-locus can be obtained by replacing the last line of the code by
rlocus(num,den). The conv function is convenient to multiply the two denomina-
tor polynomials.
Two basic methods of modulation are amplitude modulation (AM) and frequency modulation (FM).
4.7. Feedback design for phase-locked loops 99
The frequency fc is called the center frequency. It is the frequency of the signal
y(t) when x(t) = 0. The instantaneous frequency (in Hz) of the signal y(t) is
defined to be
f(t) = (1/2π) dθ(t)/dt = fc + km x(t). (4.93)
In an implementation with analog electronics, km has the units of Hz/V, as-
suming that x(t) has the units of volts. The parameter km specifies how much
the frequency of the signal y increases per unit increase of the magnitude of the
signal x. Frequency modulation by a sinusoidal signal x(t) is shown in Fig. 4.39.
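The relation (4.93) between phase and instantaneous frequency can be illustrated numerically; in the sketch below (not part of the original text; fc, km, and x(t) are arbitrary illustrative choices), the phase is obtained by numerical integration and then differentiated back:

```python
import math

# theta(t) = 2*pi*(fc*t + km * integral_0^t x(tau) dtau), so the
# instantaneous frequency (1/(2*pi)) dtheta/dt equals fc + km*x(t).
fc, km = 100.0, 5.0
x = lambda t: math.sin(2 * math.pi * 2.0 * t)   # modulating signal

def theta(t, n=100000):
    # trapezoidal rule for the integral of x from 0 to t
    dt = t / n
    integral = sum(0.5 * (x(i * dt) + x((i + 1) * dt)) * dt for i in range(n))
    return 2 * math.pi * (fc * t + km * integral)

t0, h = 0.3, 1e-5
f_inst = (theta(t0 + h) - theta(t0 - h)) / (2 * math.pi * 2 * h)
assert abs(f_inst - (fc + km * x(t0))) < 1e-2
```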
In commercial FM radio, fc ranges from 88 to 108 MHz. The spectrum of
x(t) is limited to fmax = 53 kHz, which includes two (stereo) channels with
15 kHz bandwidth each. If the modulating signal is proportional to x(t), instead
of the integral of x(t), the modulation is referred to as angle modulation or phase
modulation. In some cases, the modulating signal is digital (on/off), which leads
to frequency shift keying (FSK, if x(t) is digital) or phase shift keying (PSK, if
θ(t) is digital). For 180◦ phase reversals, PSK becomes phase reversal keying or
binary phase shift keying (BPSK). For four values of the phase (θ = 0◦ , 90◦,
180◦, 270◦ ), one has quaternary or quadriphase PSK (QPSK).
VCO as one of its components. The signal xvco (t) is, under ideal conditions,
proportional to the signal that generated y(t), that is x(t). The signal yvco(t)
then has the same instantaneous frequency as y(t).
The phase detector is a device that generates a signal φ(t) whose value is
proportional to the difference of phase between the signals y(t) and yvco (t).
Much of the complexity of phase-locked loops is related to this device. A filter
is typically needed after the phase detector, because of harmonic components
associated with practical detectors, and to improve the stability properties of
the feedback system.
The concept of a phase-locked loop is that, if there is a small phase error so
that yvco(t) lags behind y(t), a signal φ(t) appears at the output of the detector.
This signal produces an increase in the voltage applied to the VCO, so that the
frequency of yvco (t) increases and the phase of the signal catches up with the
phase of the incoming signal y(t). In steady-state, the phases of y(t) and yvco(t)
are equal, or separated by a constant, and the signals are said to be locked in
phase.
Assuming that an ideal phase detector is available, the equations for the
system are:

φ(t) = kpd (θ(t) − θvco(t)), Xvco(s) = C(s) Φ(s), θvco(t) = 2π fc,vco t + 2π kvco ∫_0^t xvco(τ) dτ.
The gain of the phase detector is kpd and has the units of Volts/rad in an analog
phase-locked loop. C(s) is the transfer function of the filter. The instantaneous
frequency of the VCO of the PLL is
1 dθvco(t)
fvco (t) = = fc,vco + kvcoxvco(t). (4.98)
2π dt
The dynamics of the system are highly nonlinear, because of the sinusoidal
functions. However, under the assumption of an ideal phase detector, the equa-
tions describing the variables θ(t), θvco (t), x(t), and xvco (t) are linear, as shown
in Fig. 4.42. The system can therefore be analyzed using linear time-invariant
methods (in particular, transfer functions can be used).
Fig. 4.42 can be transformed into a simpler, equivalent diagram, using the
following definitions. Denote the difference between the center frequencies of
the VCO of the modulator and of the VCO of the demodulator

δfc = fc − fc,vco, (4.99)

and let

xs(t) = (km/kvco) x(t). (4.100)
Note that xs (t) is proportional to x(t). We will find that the signal xvco (t) converges
to this scaled signal under ideal conditions.
We have that

θ(t) − θvco(t) = 2π kvco ∫_0^t ((km/kvco) x(τ) + δfc/kvco − xvco(τ)) dτ. (4.101)

Therefore, the diagram of Fig. 4.43 represents the phase-locked loop if one defines
the two constants

kpll = 2π kpd kvco, d(t) = δfc/kvco. (4.102)
Figure 4.43: Equivalent diagram of a phase locked-loop with ideal phase detector
Xvco(s) = Hx(s) (Xs(s) + D(s)), Φ(s) = Hφ(s) (Xs(s) + D(s)), (4.103)

where

Hx(s) = kpll C(s) / (s + kpll C(s)), Hφ(s) = kpll / (s + kpll C(s)). (4.104)
Two typical choices of compensator C(s) are:
• C(s) = kf/(s + af), a first-order filter
Figure 4.44: Root-locus for first-order filter (left) and second-order filter (right)
• C(s) = kf (s + bf)/(s(s + af)), a second-order filter
For the first-order filter,

Hx(s) = kpll kf / (s^2 + af s + kpll kf),
Hφ(s) = kpll (s + af) / (s^2 + af s + kpll kf) (first-order). (4.105)
The second-order filter includes an integrator, and yields the transfer functions

Hx(s) = kpll kf (s + bf) / (s^3 + af s^2 + kpll kf s + kpll kf bf),
Hφ(s) = kpll s(s + af) / (s^3 + af s^2 + kpll kf s + kpll kf bf) (second-order). (4.106)
Fig. 4.44 on the left shows the root-locus for the first-order filter, for kpll kf =
0 → ∞ and af > 0. On the right, the root-locus is shown for the second-
order filter, assuming af > bf > 0. With these restrictions on the parameters,
the closed-loop system is always stable. Closed-loop poles may be placed at
appropriate locations by proper choice of the parameters.
For both filters, the DC gain of the transfer function from xs to xvco is equal
to 1. Therefore, xvco(t) (the frequency estimate) will match xs (t) (the true
frequency) in steady-state, and xvco (t) will track xs (t), as long as xs (t) does not
vary too fast compared to the time constants of the closed-loop system. The
magnitude of the poles should be high enough to ensure tracking of the signal
xs (t), but not so high to yield excessive sensitivity to noise.
Consider now the effect of a center frequency error δfc . In the steady-state,
the effect of δfc on xvco and φ is determined by Hx(0) and Hφ(0). In the case of
a first-order filter, a center frequency error δfc results in a constant bias δxvco,ss
and a constant phase error δφss given by

δxvco,ss = δfc/kvco, δφss = af δfc/(kf kvco) (first-order). (4.107)
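The unity DC gain and the steady-state expressions in (4.107) can be verified by evaluating the transfer functions at s = 0; a small Python sketch (not part of the original text; the parameter values are arbitrary illustrations):

```python
# First-order loop filter C(s) = kf/(s + af): check Hx(0) = 1 and the
# steady-state effect of a center frequency error dfc.
kpll, kf, af, kvco, dfc = 4.0, 2.0, 3.0, 10.0, 0.5

def Hx(s):
    return kpll * kf / (s**2 + af * s + kpll * kf)

def Hphi(s):
    return kpll * (s + af) / (s**2 + af * s + kpll * kf)

d = dfc / kvco                           # equivalent input disturbance
assert Hx(0) == 1.0                      # xvco tracks xs in steady state
assert abs(Hx(0) * d - dfc / kvco) < 1e-12                # bias on xvco
assert abs(Hphi(0) * d - af * dfc / (kf * kvco)) < 1e-12  # phase error
```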
In the case of a second-order filter, the integrator in C(s) drives the steady-state
phase error to zero, so that

δxvco,ss = δfc/kvco, δφss = 0 (second-order). (4.108)
[Figures: phase detector implemented as a multiplier followed by a low-pass
filter, with a 90° phase advance producing the quadrature signal yq(t); the
detector characteristic is periodic in θ − θvco with period 2π]
signals with nearly identical frequency to slowly increase from 0 and become
2π, 4π, etc. When there is an initial error in frequency fc − fc,vco in a PLL,
it may take several cycles for φ to converge to a stable equilibrium position.
This condition is referred to as cycle slipping. In some cases, phase lock may
never occur. The lock-in range is the range of frequencies for which phase lock
occurs without cycle slipping. The capture range, or pull-in range, is the range of
frequencies for which phase lock occurs, possibly with cycle slipping. One also
defines the hold range as the range of frequencies for which the loop remains
locked, if it is initially locked. This range may be determined experimentally
by slowly varying the frequency of the incoming signal, starting from a locked
condition.
The nonlinear behavior highlights two additional considerations in the choice
of the closed-loop bandwidth of the linearized system: the bandwidth should be
small enough to filter the high frequency component originating from phase
detection, but the response should be fast enough to ensure locking of the PLL.
In general, while the design of the loop filter is performed using linear time-
invariant analysis methods, characteristics of the true nonlinear system must be
considered as well.
4.8 Problems
Problem 4.1: Consider the control system of Fig. 4.47.
(a) Let P (s) = k/(s + a) and C(s) = kP . Find Y (s), assuming that both R(s)
and D(s) are nonzero. Deduce the transfer functions from r to y (for d = 0)
and from d to y (for r = 0). Give conditions on kP , k and a such that the
transfer functions are BIBO stable. Assuming that such conditions are satisfied,
give the values of the DC gains of the transfer functions. Indicate whether
perfect tracking of constant reference inputs and perfect rejection of constant
Figure 4.47: Standard control system, with reference r, error e, controller C(s),
control input u, disturbance d, plant P(s), and output y
G(s) = s(s + 1) / ((s + 2)^2 (s + 3)) (4.115)

G(s) = (s + 3) / (s(s + 9)^3). (4.116)
G(s) = (s + a) / ((s + b)(s^2 − 2s + 2)). (4.117)
Also give condition(s) that a > 0 and b > 0 must satisfy for the closed-loop
system to be stable for sufficiently high gain k > 0 (note that you do not need
to apply the Routh-Hurwitz criterion, nor provide the range of k for which the
system is closed-loop stable).
Problem 4.6: Sketch the root-locus for the transfer function
G(s) = (s + 3) / ((s − 1)(s^2 + 2s + 2)). (4.118)
Give the range of k > 0 for which the system is closed-loop stable and calculate
the angle of departure of the locus from the pole at s = −1 + j.
Problem 4.7: Sketch the root-locus for the transfer function
G(s) = (s + 1)^2 / s^3. (4.119)
Give the locations of the breakaway points on the real axis, the values of the
angles of departure, and the range of k > 0 for which the system is closed-loop
stable.
Problem 4.8: Sketch the root-locus for the transfer function
G(s) = 1 / ((s − 1)(s + 3)^2) = 1 / (s^3 + 5s^2 + 3s − 9). (4.120)
Give the range of gain k > 0 for which the closed-loop system is stable, the
locations of the breakaway points on the real axis, and the locations of the
jω-axis crossings.
Include the angles of departure and arrival. Also give the range of gain k > 0
for which the closed-loop system is stable and use the result to improve your
sketch of the locus, if possible.
Problem 4.10: Consider a standard control system as shown in Fig. 4.47. Let
P(s) = 1/(s^2 (s + 1)), C(s) = kP + kI/s + kD s. (4.122)
(a) Assuming that kP = 0 and that the closed-loop system is stable, what
condition(s) must kI and kD satisfy so that the steady-state error for constant
reference inputs is zero?
(b) Assuming that kP = 0 and that the closed-loop system is stable, what
condition(s) must kI and kD satisfy so that the steady-state error for constant
input disturbances is zero?
(c) Assuming that kI = 0, what condition(s) must kP and kD satisfy so that
the closed-loop system is stable?
Problem 4.11: (a) Sketch the root-locus for k > 0 and the problem of Fig. 4.48.
There is one zero at s = 0 and two poles at s = 1.
(b) Give the range of gain k > 0 for which the system is closed-loop stable, and
give the locations of the jω-axis crossings.
(c) Give the locations of the breakaway points on the real axis.
Problem 4.12: Sketch the root-locus for the problem of Fig. 4.49, using only
the main rules. There is a zero at s = 0, two poles at s = −1, and two poles at
s = −1 ± j.
Problem 4.13: Sketch the root-locus for the problem of Fig. 4.50. Do not
calculate the range of gains for stability, the jω-axis crossings, or the breakaway
points from the real axis. However, give the angles of departure from the complex
P(s) = 1/(s(s + 1)), C(s) = (s + a)/(s + 1). (4.124)
Determine the range of values of the parameter a of C(s) such that the closed-
loop system is stable.
(b) For the system of part (a), give the steady-state error ess that is observed
when r(t) = 2 and d(t) = 0. The result may be a function of the parameter a.
(c) For the system of part (a), give the steady-state error ess that is observed
when r(t) = 0 and d(t) = 2. The result may be a function of the parameter a.
using only the main rules (the poles are shown below). Give the range of gain
k > 0 for which the system is closed-loop stable. Poles are shown on Fig. 4.51.
(b) Sketch the root-locus for the problem of Fig. 4.52, using only the main rules.
Give the angles of departure from the complex poles.
Problem 4.19: (a) Find Y (s) as a function of R(s) and D(s) for the system of
Fig. 4.53.
(b) For the system of part (a), let C1(s) = 1/(s + a), C2 (s) = k(s + 1)/(s + 2),
P (s) = 1/s, and find conditions on k and a such that the closed-loop system is
stable.
Figure 4.53: Control system with controller C1(s), plant P(s), disturbance d,
feedback compensator C2(s), and signal e1 at the input of C1(s)
(c) For the system of parts (a)-(b), let e(t) = r(t) − y(t). Note that the signal
is not equal to the signal denoted e1 on the diagram. Assuming constant but
arbitrary signals r = r0 and d = d0, obtain E(s) and conditions on k and a such
that limt→∞ e(t) = 0.
Problem 4.20: (a) Sketch the root-locus for
G(s) = 1 / (s((s + 10)^2 + 4)). (4.128)
Compute the breakaway points from the real axis and the value of k for which
crossing of the jω-axis occurs. Do not compute the angles of departure.
(b) Sketch the root-locus for
G(s) = 1 / (s((s + 4)^2 + 16)). (4.129)
Compute the breakaway points from the real axis and the angles of departure
from the complex poles. Do not compute the value of k for jω-axis crossing.
Chapter 5
Frequency-domain analysis of
control systems
Figure 5.1: Magnitude Bode plot of a low-pass filter (log scales are used)
In the context of control systems, Bode plots are used even when the system
is unstable, generalizing the concept of frequency response. The Bode plots are
computed by replacing s by jω in the transfer function P (s). The frequency
response does not exist for unstable systems, due to the lack of steady-state
116 Chapter 5. Frequency-domain analysis of control systems
sinusoidal response, but the Bode plots can nevertheless be determined exper-
imentally by placing the system in a stabilizing feedback loop, as shown in
Fig. 5.2. In this manner, the effect of initial conditions will decay to zero and
the responses will remain bounded. A reference signal r = r0 sin(ω0t) is applied
and, assuming a stable closed-loop system, the signals u and y will converge to
the steady-state responses
uss = u0 sin(ω0t + α0 ),
yss = y0 sin(ω0 t + β0 ). (5.1)
Then
y0
|P (jω0)| = , ∡P (jω0 ) = β0 − α0. (5.2)
u0
In other words, the Bode plots can be measured and interpreted similarly for
stable and unstable systems. One just needs to remember that the steady-state
sinusoidal response only exists for unstable systems if the systems are placed in
a stabilizing feedback loop.
As for the root-locus, Bode plots can be computed easily and rapidly using
modern software. The procedures described in this chapter are not useful to
draw manually detailed plots, but rather to gain a valuable understanding of how
transfer function parameters are related to frequency response characteristics.
On the y-axis, a decade spans a range of 20 dB. The phase plot shows ∡G(jω)
in a regular scale labelled in degrees.
Figure 5.3: Bode plots of G(s) = 1/(s + 1): magnitude (dB) and phase (deg)
versus frequency (rad/s)
The plots on Fig. 5.3 show the magnitude and phase responses as solid lines
and approximations as dashed lines. The magnitude approximation is very close.
In the phase plot, two approximations are shown, with a coarse one labelled #1
and a finer one labelled #2. For the transfer function G(s) = 1/(s + 1), the
approximations are based on the fact that G(jω) ≃ 1 for ω ≪ 1 and G(jω) ≃
1/(jω) = −j/ω for ω ≫ 1, resulting in the magnitude and phase approximations
given in the table below.
ω≪1 ω≫1
[G(jω)]dB ≃ 0 [G(jω)]dB ≃ −20 log(ω)
∡G(jω) ≃ 0 ∡G(jω) ≃ −90◦
The transition between the two regions occurs at ω = 1 rad/s, equal to the magnitude of the pole at s = −1. For the phase plot, approximation
#1 is a discontinuous curve composed of a flat line at 0 deg and another flat line
at −90◦ with a sharp transition at ω = 1 rad/s. This approximation is quite
coarse, and a finer approximation is shown with the dashed line labelled #2.
Both approximations can be used to produce Bode plots, with the second one
more accurate but also more time-consuming to apply.
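The quality of the straight-line approximation for G(s) = 1/(s + 1) is easy to quantify; a short Python check (not part of the original text) shows that the largest deviation, about 3 dB, occurs at the corner frequency:

```python
import math

# Exact magnitude of G(jw) = 1/(jw + 1) in dB versus the straight-line
# approximation (0 dB for w <= 1, -20*log10(w) dB for w > 1).
def exact_db(w):
    return 20 * math.log10(abs(1 / complex(1, w)))

def approx_db(w):
    return 0.0 if w <= 1 else -20 * math.log10(w)

print(round(exact_db(1.0), 2))   # -> -3.01 (worst case, at the corner)
assert abs(exact_db(0.01) - approx_db(0.01)) < 0.01    # accurate for w << 1
assert abs(exact_db(100.0) - approx_db(100.0)) < 0.01  # accurate for w >> 1
```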
A similar approximation is obtained for G(s) = 1/(s + p) with p > 0, ex-
cept that the transition occurs at ω = |p| and the low frequency magnitude
is −20 log(|p|). If p < 0, the phase plot is similar but moving from −180◦ to
−90◦. For a zero, G(s) = s + z, the plot for the magnitude is similar but adding
20 dB/decade at ω = |z|. The phase plots are the same as for poles, but with
signs reversed. Overall, the changes occurring at the transition frequency are
summarized below.
where zi are the zeros of the system and pj are the poles (whose values have the
units of rad/s). The first plot shows the magnitude of the frequency response as
a function of log (ω). A log scale is used again, with 20 log |P (jω)| shown on the
y-axis. The units are dB’s. A multiplication of the magnitude by 10 translates
into an addition of 20 dB. The second plot is a phase plot, giving the angle (in
5.1. Bode plots 119
degrees) of P (jω) as a function of log(ω). A log scale is not used for the y-axis
of this plot.
Step 2: start the plots on the left at a sufficiently low frequency, such as ωmin
in (5.4). For the magnitude, draw a horizontal line at 20 log(|P (0)|). For the
phase, draw a horizontal line at 0◦ if P (0) > 0 and 180◦ if P (0) < 0.
Step 3: continue the plots from left to right. Every time a pole or zero is
encountered, that is, every time ω = |pi | or ω = |zi | :
(a) for the magnitude, change the slope by an additional −20 dB/decade every
time a pole is encountered, and 20 dB/decade every time a zero is encountered.
(b) for the phase, add −90◦ whenever a left half-plane pole or right half-plane
zero is encountered, and 90◦ whenever a right half-plane pole or left half-plane
zero is encountered.
Step 4: draw smooth curves that fit the approximations.
Comments
In simple terms, the procedure amounts to:
1. start the Bode plots from the left using the low-frequency approximation
PLF (s) ≃ P (0) for s = jω small.
2. move from left to right, applying −20 dB/dec to the magnitude plot when
a pole is reached and +20 dB/dec when a zero is reached.
3. for the phase plot, adding −90◦ when an OLHP pole or ORHP zero is
reached, and +90◦ when an ORHP pole or OLHP zero is reached.
Example 1: P(s) = (s + 1)/(s + 10). |P(0)| = 0.1 (or −20 dB) and ∡P(0) = 0°.
Start the Bode plot around ω = 0.1 (1/10 × 1) with this approximation. Next,
move from left to right and apply the changes when the zero at s = −1 and the
pole at s = −10 are encountered. The result is shown in Fig. 5.4. The dashed
curves are smooth
estimates of the frequency response, based on the linear approximations. From
these estimates, one may predict, for example, that the response of the system
to a cos(10t) input is a signal M cos(10t + φ), with M slightly smaller than 1
and a phase advance φ around 45◦ .
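The prediction can be confirmed by evaluating P(j10) exactly (a quick Python check, not part of the original text):

```python
import cmath, math

# Response of P(s) = (s + 1)/(s + 10) to a cos(10t) input:
# magnitude M and phase advance phi of P(j10).
P = lambda s: (s + 1) / (s + 10)
resp = P(10j)
M = abs(resp)                          # about 0.71, slightly below 1
phi = math.degrees(cmath.phase(resp))  # about 39 degrees of phase advance
print(round(M, 2), round(phi, 1))  # -> 0.71 39.3
```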
Example 2: P(s) = (s + 10)/(s + 1) (similar to example 1, but the pole and the zero
are reversed). The plot is shown on Fig. 5.5. Note that the output lags behind
the input when the pole is smaller than the zero in magnitude.
Example 3: P(s) = (−s + 10)/(s + 1) (similar to example 2, but with a right half-
plane zero). The plot is shown on Fig. 5.6. Note that the phase lag is larger
than in the previous example, due to the zero being in the right half-plane.
Such a zero is called a non-minimum-phase zero. One also says that a system is
minimum-phase if all its zeros are in the left half-plane, and non-minimum-phase
otherwise.
10 rad/s in a log scale (log(3) = 0.48). The plot is shown on Fig. 5.8.
In other words, k is the DC gain of the transfer function with the zeros at the
origin removed. If the system has n poles at s = 0, the same approximation
applies with n < 0.
Procedure
In step 2, draw the low-frequency approximation as follows. Let k = [s−nP (s)]s=0
where n is the number of zeros of P (s) at the origin (if there are poles at the
origin instead, let −n be the number of poles). For the magnitude, draw
a line with a slope equal to n × 20 dB/decade. Position the line so that
|P (jω0)|dB = 20 log(|k|ω0n ), where ω0 is some low frequency where the graph
is started (for example, ω0 = ωmin using (5.4) computed from the poles and
zeros other than those at the origin). For the phase, draw a horizontal line at
0◦ + n 90◦ if k > 0 and 180◦ + n 90◦ if k < 0.
Example 6: P(s) = (s + 10)/s. Since n = −1, k = 10, the low-frequency approx-
imation is PLF(s) = 10/s. Thus, we begin the plot using

|PLF(jω)|dB = |10/(jω)|dB = 20 − 20 log(ω). (5.6)
Example 7: P(s) = (s − 10)/s^3. Since n = −3, k = −10, PLF(s) = −10/s^3. We start
at ω = 1 with the approximation

|PLF(jω)|dB = |10/ω^3|dB = 20 − 60 log(ω) (5.7)
axis where ω = |p|, the magnitude of the two poles or zeros, the effect is the
same as if a pair of real poles or zeros had been reached (adding ±40 dB/decade
to the magnitude and ±180◦ to the phase). For complex poles close to the
jω-axis, however, the frequency response deviates significantly from the linear
approximation.
Consider a system with a pair of stable complex poles p = −a + jb, p∗ =
−a−jb. One defines the natural frequency ωn and the damping factor ζ through
the formulas
ωn = |p| = √(a² + b²), ζ = −Re(p)/|p| = a/√(a² + b²). (5.9)
These relationships are illustrated on Fig. 5.11. Note that ζ = cos(α), where α
is the angle between the direction of the pole and the real axis.
[Fig. 5.11: the pole p = −a + jb in the complex plane, at distance ωn from the origin and at angle α from the real axis, with real part −a = −ζωn]
with the numerator set so that the DC gain is equal to 1. The Bode plot
approximation is such that the gain is 1 (0 dB) up to ωn, then decreases at the
rate of −40 dB per decade. However, at ωn , the exact gain is
|P(jωn)| = |ωn²/(2ζjωn²)| = 1/(2ζ). (5.11)
Therefore, the actual magnitude is 10 instead of 1 for ζ = 0.05, or 20 dB.
Fig. 5.12 shows the shape of the responses for different values of the damping
factor ζ. For small damping factors, the magnitude response peaks significantly
above the linear approximation. The transition of the phase response around ωn
is also sharper. For right half-plane poles, the phase is reversed and ζ is replaced
by |ζ| for the plots.
Figure 5.12: Bode plots of systems with two complex poles in the left half-plane
The true peak of the magnitude response is not exactly at ωn, but at a
frequency
ωp = √(1 − 2ζ²) ωn = √(b² − a²), (5.12)
which is slightly smaller than ωn. There is no peaking of the magnitude unless
ζ < 0.707, i.e., unless the imaginary part of the pole is greater than the real
part. The actual magnitude of the peak is
|P(jωp)| = 1/(2ζ√(1 − ζ²)). (5.13)
However, for small damping factors, the frequency of peaking is close to ωn, and the magnitude is close to 1/(2ζ), as given by (5.11).
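A quick numerical check of (5.12) and (5.13), with assumed values ωn = 10 and ζ = 0.2 (not from the text):

```python
import numpy as np

# Peak of |P(jw)| for P(s) = wn^2/(s^2 + 2*zeta*wn*s + wn^2),
# compared with the predictions (5.12) and (5.13).
wn, zeta = 10.0, 0.2

def mag(w):
    return abs(wn**2 / ((1j * w)**2 + 2 * zeta * wn * (1j * w) + wn**2))

w = np.linspace(0.01, 2 * wn, 200001)              # fine frequency grid
i = np.argmax(mag(w))
wp_pred = np.sqrt(1 - 2 * zeta**2) * wn            # peak frequency (5.12)
peak_pred = 1 / (2 * zeta * np.sqrt(1 - zeta**2))  # peak magnitude (5.13)
print(w[i], wp_pred)                               # ~9.59 rad/s
print(mag(w[i]), peak_pred)                        # ~2.55
```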
For complex zeros close to the jω-axis in the open left half-plane, a similar
correction needs to be applied and is shown in Fig. 5.13. For right half-plane
5.1. Bode plots 125
Figure 5.13: Bode plots for complex zeros in the left half-plane
zeros, the phase is reversed. If the zeros are on the jω-axis, the response is zero
at the corresponding frequency, or at minus infinity in the log scale.
Procedure
When the frequency reaches a value equal to the magnitude of a pair of complex
poles or zeros, apply the rules as if there were two real poles or two real zeros
with the same magnitude. If the pair of complex poles or zeros is close to the
jω-axis, peaking in the response may be accounted for as follows. Given a pair
of complex poles p = −a ± jb, let ωn = √(a² + b²) (called the natural frequency)
and ζ = a/ωn (called the damping factor). If |ζ| < 0.5, an increase of gain equal
to 20 log(|1/2ζ|) should be added to the magnitude plot at ωn . For a pair of
complex zeros, a similar reduction in the gain should be applied. The effect of a
small damping factor on the phase plot is a rapid variation of phase around ωn.
Example 8: Consider the transfer function

P(s) = s²/((s + 1)(s² + 0.1s + 100)), (5.14)

which has poles at p1 = −1, p2,3 = −0.05 ± j√(100 − (0.05)²) = −0.05 ± j9.999875. The complex poles are such that ωn = 10 and ζ = 0.005. Therefore, the peak of the gain at the natural frequency is 1/(2ζ) = 100 ≡ 40 dB. To draw the Bode plot, note that the low-frequency approximation is

PLF(s) = s²/100, |PLF(jω)|dB = −40 + 40 log(ω), ∡PLF(jω) = 180°. (5.15)
The resulting Bode plot is shown in Fig. 5.14. The peak of the magnitude plot
and the sharp phase transition at the complex poles were accounted for in the
drawing of the smooth approximation.
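A numerical check of this example (a sketch using numpy): at the natural frequency, the exact gain sits about 40 dB above the straight-line approximation, which at ω = 10 is at −20 dB because of the real pole at −1.

```python
import numpy as np

# Transfer function of Example 8: P(s) = s^2 / ((s + 1)(s^2 + 0.1 s + 100))
def P(s):
    return s**2 / ((s + 1) * (s**2 + 0.1 * s + 100))

# Complex poles: natural frequency and damping factor
a, b = 0.05, 9.999875
wn = np.hypot(a, b)          # ~10 rad/s
zeta = a / wn                # ~0.005

# Exact gain at the natural frequency, in dB
gain_dB = 20 * np.log10(abs(P(1j * wn)))

# Straight-line value at wn: 0 dB from the zeros and complex poles,
# minus 20 dB contributed by the real pole at s = -1 (one decade below wn)
asymptote_dB = -20.0

print(gain_dB)                   # ~20 dB
print(gain_dB - asymptote_dB)    # ~40 dB peak, i.e. 20*log10(1/(2*zeta))
```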
Figure 5.14: Bode plots of example with low damping complex poles
Note that this is not a rational function of s, so that the usual rules of Bode plots
cannot be applied. However, the plots themselves can be drawn. The frequency
response of a time delay is P (jω) = e−jωTd and
|P (jω)| = 1,
∡P (jω) = −ωTd . (5.19)
The Bode plots are shown in Fig. 5.15. A plot of the phase with a linear scale
for the x-axis was inserted, to illustrate the property of the delay called linear
phase. This linear phase property is associated with the fact that a time delay
does not alter the shape of the signal (the signal is not distorted, as it generally
would be with a rational transfer function).
[Fig. 5.15: Bode plots of a time delay: magnitude 0 dB for all ω; phase −ωTd, shown on both a linear and a log frequency scale, reaching −360° at ω = 2π/Td]
P(s) = (s² + ω0²)/(s² + 2ζω0 s + ω0²). (5.20)
For example, one may let ζ = 1. The Bode plots of the notch filter are shown in
Fig. 5.16, together with the individual contributions of the complex poles and
zeros. Note that the zeros are exactly on the jω-axis, so that the gain for a
sinusoidal signal of frequency ω0 is exactly zero (P (jω0 ) = 0). Outside a narrow
band around ω0, P (jω) ≃ 1.
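The perfect rejection at ω0 can be verified numerically; the sketch below assumes ω0 = 10 rad/s and uses ζ = 1 as in the text:

```python
import numpy as np

# Notch filter (5.20) with zeta = 1 and an assumed w0 = 10 rad/s
w0 = 10.0
def P(s):
    return (s**2 + w0**2) / (s**2 + 2 * w0 * s + w0**2)

print(abs(P(1j * w0)))        # exactly 0: a sinusoid at w0 is rejected
print(abs(P(1j * 0.1 * w0)))  # ~1 well below the notch
print(abs(P(1j * 10 * w0)))   # ~1 well above the notch
```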
Figure 5.16: Bode plot of a notch filter, showing also the individual contributions
of the denominator and numerator of the transfer function
It is a special case of a high-pass filter. The Bode plot of the filter is shown in
Fig. 5.17. Wash-out filters are used to:
• keep the response of a system centered around a neutral position (an ex-
ample is a flight simulator which emulates the motion of an aircraft, yet
must remain at the same location).
Low-pass filter: the wash-out filter is a high-pass filter, because it filters out the
low frequencies and lets high frequencies pass through. Conversely, a low-pass
filter removes high-frequency components while transmitting the low-frequency
components. For example, consider the signal of Fig. 5.18, meant to represent
a noisy sinusoidal signal and given by
A low-pass filter can be used to isolate the signal at 1 Hz, while reducing the
noisy component at 20 Hz.
An example of a low-pass filter is

F(s) = a^n/(s + a)^n, (5.23)
where a determines the bandwidth of the filter, and n is the order of the filter.
Fig. 5.19 shows the Bode plots of F (s) for a = 30, and for n = 1 and n = 2.
The magnitude plot shows the attenuation of high-frequency signals, which is
enhanced for the higher value of n. Lower values of a also reduce high frequency
[Fig. 5.18: the noisy signal, plotted from 0 to 2 s]
components, but affect the main component as well if the value is too small. In
the example, the main component is at ω = 6.28 rad/s, while the noise is at
ω = 125.7 rad/s.
Fig. 5.20 shows the noisy signal filtered by F (s) with n = 1 and n = 2,
together with the original signal sin(2πt). For n = 1, the noise is considerably
reduced, while it is virtually eliminated for n = 2. Notable features are that the
main component is delayed compared to the original signal, and slightly reduced
in magnitude as well. This effect is stronger for n = 2, and can be predicted
from the magnitude and phase plots of Fig. 5.19. In general, the higher the
reduction of the high-frequency noise, the greater the delay of the low frequency
signal.
A useful observation is that the effect of the filter at low frequencies can be
approximated by a pure time delay. Indeed, the transfer function of the filter
(5.23) can be represented by the first terms of its Taylor series expansion around
s = 0, with
F(s) ≃ 1 − (n/a) s + · · · (5.24)
Similarly, a time delay can be approximated by

e^(−sTd) ≃ 1 − Td s + · · · (5.25)

Combining the expressions, one obtains the equivalent time delay of the low-pass filter
Td = n/a. (5.26)
Figure 5.19: Bode plots of a^n/(s + a)^n for a = 30, and for n = 1 and n = 2
Lower bandwidth or higher order filters imply higher time delays. In the example
of Fig. 5.20, the approximate time delay Td = 33.3 ms for n = 1, and Td =
66.7 ms for n = 2.
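The equivalent delay can be checked numerically from the phase of the frequency response (a sketch using numpy, with a = 30 as in the example):

```python
import numpy as np

# The low-frequency delay (5.26) of F(s) = a^n/(s + a)^n, estimated from
# the phase of the frequency response at a very low frequency.
a = 30.0
w = 1e-4                          # rad/s, well below the filter bandwidth
delays = {}
for n in (1, 2):
    F = (a / (1j * w + a))**n
    delays[n] = -np.angle(F) / w  # Td ~ -phase/w for small w
print(delays)                     # ~{1: 0.0333, 2: 0.0667}, i.e. n/a
```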
Low-pass filters are often used to reduce the noise re-injected in the feedback
loop from measurements of the output. However, the delay is detrimental to
the closed-loop stability of the system, so that the choice of filter represents a
trade-off between removal of the noise and closed-loop dynamics.
The transfer function (5.23) is only one example of a low-pass filter. Many
low-pass filters exist, such as Butterworth filters, Bessel filters, Chebyshev filters,
and elliptic filters. In general, the formula for the equivalent delay of a filter is

Td = lim_{s→0} [F(0) − F(s)] / [s F(0)]. (5.27)

In the case of a stable transfer function

F(s) = (bn−1 s^(n−1) + · · · + b1 s + b0) / (s^n + an−1 s^(n−1) + · · · + a1 s + a0), (5.28)

the formula gives

Td = a1/a0 − b1/b0. (5.29)

The result requires that a0 ≠ 0 (needed for stability) and b0 ≠ 0 (needed for non-vanishing response at low frequencies).
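Formula (5.29) can be checked against the limit (5.27); the coefficients below are assumed values, not from the text:

```python
# Check of (5.29), Td = a1/a0 - b1/b0, against the limit (5.27),
# for an assumed stable filter F(s) = (b1*s + b0)/(s^2 + a1*s + a0).
b1, b0 = 1.0, 5.0
a1, a0 = 4.0, 3.0        # poles at -1 and -3 (stable)

def F(s):
    return (b1 * s + b0) / (s**2 + a1 * s + a0)

Td_formula = a1 / a0 - b1 / b0          # 4/3 - 1/5
s = 1e-6
Td_limit = (F(0) - F(s)) / (s * F(0))   # numerical version of (5.27)
print(Td_formula, Td_limit)             # both ~1.133
```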
[Fig. 5.20: the noisy signal filtered with n = 1 and n = 2, plotted from 0 to 2 s together with the original sinusoid]

5.2 Nyquist criterion of stability
Figure 5.22: Bode plots (left) and Nyquist diagram (right) for 1/(s + 1)
The plot for ω > 0 is complemented by the portion for ω < 0. However, the
fact that P (−jω) = P ∗(jω) (a consequence of the assumption that the impulse
response is real) implies that the diagram for ω < 0 is the reflection of the
diagram for ω > 0 with respect to the real axis. Fig. 5.24 shows the complete
diagrams for the two examples.
Other properties worth noting are that:
Figure 5.24: Complete Nyquist diagram for 1/(s + 1) (left) and 1/(s + 1)3 (right)
• If P (s) has no pole at s = 0, P (0) is real and finite (P (0) is the DC gain
of P (s)).
• If the number of poles is greater than the number of zeros, P (∞) = P (−∞)
= 0.
We assume that the system has no open-loop poles or closed-loop poles on the jω-axis. We will also assume that the number of poles is greater than or equal to the number of zeros.
Nyquist criterion

1. Draw the Nyquist diagram, that is, the closed curve traced by G(jω) for ω = −∞ → ∞.

2. Let N be the number of clockwise encirclements of (−1, 0), that is, the number of times that the closed curve drawn by G(jω) as ω = −∞ → ∞ encircles the (−1, 0) point in the clockwise direction.

3. Then: Z = N + P, where Z is the number of unstable closed-loop poles and P is the number of unstable open-loop poles.
Example 1: let G(s) = 1/(s + 1), D(s) + N (s) = s + 2. The Nyquist plot is
shown in Fig. 5.25. There are no encirclements, so that N = 0. Since the system
is open-loop stable, P = 0. Therefore, Z = 0, and the test confirms that the
system is closed-loop stable. If we let G(s) = k/(s + 1), D(s) + N(s) = s + 1 + k,
the diagram is simply expanded by a factor k. The number of encirclements
does not change, and since the open-loop system remains stable, the test verifies
that the closed-loop system is stable for all k.
[Fig. 5.25: Nyquist diagram for 1/(s + 1)]
Example 2: let G(s) = 1/(s + 1)3, D(s) + N (s) = (s + 1)3 + 1. Now, the
Nyquist diagram is shown in Fig. 5.26. Since N = 0, P = 0, we have Z = 0,
and the system is closed-loop stable. However, if we let G(s) = k/(s + 1)3,
D(s) + N (s) = (s + 1)3 + k, there is a value of k such that the number of
encirclements will change. The effect of an increasing parameter k on the Nyquist
diagram and on the root-locus is shown in Fig. 5.27. When the value of k is
sufficiently large that the value of a shown on the figure is greater than 1,
the number of clockwise encirclements becomes equal to 2, and the number of
unstable closed-loop poles also becomes Z = 2 (as indicated independently by
the root-locus). We had found earlier using the Routh-Hurwitz criterion (see
(4.46)) that the value of k which separated the stable and unstable conditions
was k = 8. Considering the Nyquist criterion, we note that
G(jω) = k/(1 + jω)³ = [k/(1 + ω²)³] [(1 − 3ω²) + j(−3ω + ω³)]. (5.36)

Therefore,

Im G(jω) = 0 ⇔ −3ω + ω³ = 0. (5.37)

The crossing of the real axis by the Nyquist curve therefore occurs for ω² = 3, and

a = −Re G(jω) = −k(1 − 9)/4³ = k/8. (5.38)
Again, one finds that the system becomes unstable when k > 8 (the case for k < 0 can also be considered using the root-locus and the Nyquist criterion, but is left to the reader).
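The real-axis crossing can be confirmed numerically (a sketch using numpy):

```python
import numpy as np

# Real-axis crossing of the Nyquist curve of G(s) = k/(s + 1)^3:
# the imaginary part of G(jw) vanishes at w^2 = 3, where Re G = -k/8.
k = 1.0
w = np.sqrt(3.0)
G = k / (1 + 1j * w)**3
print(G.real, G.imag)        # -k/8 and ~0
```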
Figure 5.27: Nyquist diagram of k/(s + 1)3 for k increasing (left), and corre-
sponding root-locus (right)
Example 3: let G(s) = k/(s − 1). One may wonder whether the closed-loop system is stable for sufficiently large k > 0. Indeed, since D(s) + N(s) = s − 1 + k, the
system is known to be stable for k > 1. Drawing the Bode plots, we can sketch
the Nyquist diagram as in Fig. 5.29. Because there is one unstable open-loop
pole, P = 1. Then, for k < 1, N = 0, Z = 1 (1 unstable closed-loop pole). For
k > 1, N = −1 (1 counterclockwise encirclement), Z = 0 (no unstable closed-
loop pole). In summary, the Nyquist criterion correctly predicts that there is
one unstable pole for k < 1, and that the closed-loop system is stable for k > 1.
Figure 5.29: Bode plots (left) and Nyquist diagram (right) for k/(s − 1)
G(s) = k(s + 2)/[(s − 1)(s² + 2s + 2)]. (5.39)
The system has one unstable open-loop pole (P = 1) and its Nyquist curve is shown in Fig. 5.30. The following results apply:

k < 1: N = 0, Z = N + P = 1 ⇒ unstable
1 < k < 2: N = −1, Z = N + P = 0 ⇒ stable
k > 2: N = 1, Z = N + P = 2 ⇒ unstable
[Figs. 5.30 and 5.31: Nyquist diagrams for the system (5.39), overall and detail around the (−1, 0) point]
2. Note that, if the open-loop system is stable, sufficient (but not necessary)
conditions for the stability of the closed-loop system are:
3. One may wonder what happens when the Nyquist curve goes through the (−1, 0) point. Then, it is impossible to count the number of encirclements. However, G(jω0) = −1 for some ω0, so that 1 + G(jω0) = 0. Therefore,
the system has at least one closed-loop pole at jω0. This case was excluded
in the assumptions, but one can nevertheless conclude that the closed-loop
system is unstable, since some closed-loop poles are on the jω-axis.
4. The case where there are open-loop poles on the jω-axis was also excluded,
but may be handled using a modified procedure to be described later. This
case is important in feedback systems, because many control systems have
poles at the origin.
5. The main value of the Nyquist criterion is to quantify how far a system is
from being unstable. This topic will also be discussed later.
Consider the function H(s) = s + z1, where z1 is a real number. Note that s = −z1 is the zero of H(s). Let H(C) be
the set of complex numbers corresponding to H(s) for s ∈ C with an orientation
corresponding to C. The right side of Fig. 5.33 shows H(C), which is simply the
original curve C shifted by z1. A trivial fact is that H(C) encircles the origin in
the CCW direction if and only if the zero at s = −z1 belongs to the interior of
C. In the case shown on the figure, there are no encirclements.
[Fig. 5.33: a closed curve C and its image H(C) under H(s) = s + z1, with the zero at −z1]
Fig. 5.34 shows an interpretation of the result where the complex number
H(s) is a vector with angle α with respect to the real axis. For z1 = 2, as shown
on the figure, and for ∡s increasing from 0◦ to 360◦ , α grows from 0 to 30◦, then
goes down to −30◦ , then rises back to zero. H(C) does not encircle the origin
because the angle returns to 0◦ rather than reach 360◦ . For z1 = 0, α grows
continuously from 0◦ to 360◦, and the origin is encircled.
With this interpretation, it becomes clear that the result on the number of
encirclements remains true for arbitrary z1 and for any closed curve C that does
not intersect itself. Further, for a general transfer function
H(s) = bm (s − z1) · · · (s − zm) / [(s − p1) · · · (s − pn)], (5.41)

[Fig. 5.34: the complex number s + z1 drawn as a vector from −z1 to s, making an angle α with the real axis]
It follows that the number of counterclockwise encirclements of the origin by H(C) equals the number of zeros of H(s) in the interior of C minus the number of poles of H(s) in the interior of C. This result is applied to H(s) = 1 + G(s), where G(s) is the open-loop transfer function. Therefore, the poles of H(s) are
the open-loop poles and the zeros of H(s) are the closed-loop poles. Next, one
defines the contour C as shown on Fig. 5.35. The contour is composed of the
imaginary axis completed with a half circle of infinite radius. The curve is called
the Nyquist contour. Note that the area of the complex plane delimited by the
Nyquist contour is the right half-plane.
The orientation of the contour was changed to be clockwise (CW) so that
the frequency varies in the positive direction along the imaginary axis. Instead
of mapping H(C) = 1 + G(C), one plots G(C). Encirclement of the origin
is replaced by encirclement of (-1, 0). If the loop transfer function is strictly
proper, the half circle of infinite radius is mapped to Re = 0, Im = 0, and the
Nyquist plot reduces to a plot of G(jω) for ω ranging from −∞ to ∞. With
[Fig. 5.35: the Nyquist contour in the s-plane (the imaginary axis from ω = −∞ to ω = +∞, closed by a half circle of infinite radius), and its image G(jω)]
(5.45) and (5.46) are equivalent because the number of unstable poles plus the
number of stable poles is the same for the open-loop and for the closed-loop
systems.
Example 1: G(s) = 1/ (s(s + 1)). The Bode plots and the Nyquist curve for
this system are shown in Fig. 5.37. For the usual Nyquist contour, the transformed path grows to infinity as ω reaches the origin. In the modified contour, G(jε) = 1/[jε(1 + jε)] ≃ 1/(jε) = −j/ε, where ε is an arbitrarily small number. On the negative side, G(−jε) ≃ −1/(jε) = j/ε. To count the encirclements, we need to connect the end of the branches at ω = ε and ω = −ε.
Figure 5.37: Nyquist curve for a system with a pole at the origin
Fig. 5.38 shows the detail of the modified contour around the origin. We assume that a half circle s = ε e^(jθ) is used to connect the two paths, so that G(s) ≃ (1/ε) e^(−jθ). Since θ = −π/2 → π/2, the transformed path connects clockwise the branches at
90◦ and −90◦. As a result, the transformed, modified contour may be connected
as shown in Fig. 5.39. Since there are no unstable open-loop poles in the modified
contour, P = 0. On the other hand, there are no encirclements of the (−1, 0)
point by the transformed contour, so that N = 0. As a result, the closed-loop
system is stable for all k > 0 (as predicted by the root-locus).
Figure 5.38: Detail of the modified contour in the vicinity of the origin
Figure 5.39: Nyquist diagram for G(s) = 1/(s(s + 1)) and a modified Nyquist
contour
Example 2: G(s) = −1/ (s(s + 1)). The Nyquist diagram is shown in Fig. 5.40.
Since N = 1, P = 0, we have Z = 1, and there is one unstable closed-loop pole.
Figure 5.40: Nyquist diagram for G(s) = −1/(s(s + 1)) and a modified Nyquist
contour
Example 3: G(s) = 1/ (s3 (s + 1)). The Nyquist diagram is shown in Fig. 5.41.
For s = ε e^(jθ),

G(s) ≃ (1/ε³) e^(−3jθ), (5.49)
so that the transformed path sweeps from 270◦ to −270◦ . P = 0, N = 2, so that
Z = 2 and there are two unstable closed-loop poles. This result may be verified
from the root-locus shown in Fig. 5.42.
Figure 5.41: Bode plots (left) and Nyquist diagram (right) for G(s) = 1/(s3(s +
1))
General procedure
For a system with n poles at s = 0, the procedure can be applied in a similar
manner. Plot G(jω) for ω = ε → ∞, and draw G(−jω) by symmetry. Then,
connect G(−jε) to G(jε) with a circular curve that rotates around the origin by
an angle n × 180° in the clockwise direction. Let P be the number of unstable open-loop poles, not including the pole at s = 0, and count the number of encirclements N. The criterion is then applied as for the original contour. The same procedure may also be applied for pole(s) at s = jω0, connecting the branches for G(jω0 − jε) to G(jω0 + jε).

5.3 Gain and phase margins
Aside from the concept of stability for a closed-loop system, an almost equally
important consideration is how far the system is from instability. This is the
concept behind gain and phase margins. Both apply to a system which is known
to be closed-loop stable, but does not have to be open-loop stable.
By definition, the gain margin is the maximum value of the gain k > 0 by
which the open-loop transfer function may be multiplied before the closed-loop
system reaches instability. For example, consider the open-loop system
1
G(s) = , (5.50)
(s + 1)3
which was found to yield a stable closed-loop system. It was also found that the
closed-loop system became unstable if the gain was multiplied by 8. Therefore,
the gain margin of the system with open-loop transfer function G(s) is equal to
8. Sometimes, the gain margin is expressed in dB. Then, GMdB = 20 log(8) =
18 dB.
Figure 5.43: Computing the gain margin from the Nyquist diagram
For many systems, the gain may be reduced by any amount without resulting
in instability. When one says that the gain margin is 2, it means that the gain
may be multiplied by any number between 0 and 2. In some cases, however,
the gain margin involves a lower number as well. The system is then called
conditionally stable (see Figs. 5.30 and 5.31 involving a gain k restricted to being
between 1 and 2). The nominal gain can neither be increased, nor decreased
arbitrarily without resulting in instability. In the example shown in Fig. 5.44,
the gain margin is given by
GM = (1/b, 1/a). (5.51)
[Fig. 5.44: Nyquist diagram with real-axis crossings at −b and −a]
Figure 5.45: Determination of the gain margin from the Bode plots
GM = 1/|P(jω1)C(jω1)|, (5.53)

or GMdB = −|P(jω1)C(jω1)|dB. (5.54)
The interpretation of the definition in Bode plots is shown in Fig. 5.45. If several
frequencies are associated with an angle of 180◦ , the gain margin is the smallest
value of all obtained.
If |P (jω1 )C(jω1)| > 1 for one or more of the frequencies, the gain margin has
a lower bound. Fig. 5.46 shows a hypothetical example where the gain margin
is given by
GMdB = (−6, 10) ⇔ GM = (1/2, 3). (5.55)
Figure 5.46: Bode plot with multiple intersections of the 180◦ line
margin. If several points are found, the angle of smallest magnitude defines the
phase margin. Because P (−jω) = P ∗ (jω), the phase margin is the same in
magnitude for both positive and negative directions.
[Phase margin on the Nyquist diagram: the angle between the negative real axis and the point where the Nyquist curve crosses the circle of radius 1]
or

|P(jω2)C(jω2)|dB = 0, (5.57)

where the angle function ∡ is defined between −180° and 180°. The concept is shown in Fig. 5.49. The frequency ω2 is called the crossover frequency. If several frequencies are found, the smallest phase margin computed with the formula is the phase margin of the system.
Figure 5.49: Determination of the phase margin from the Bode plots
t < 0). Therefore, the concept of phase margin can only be justified in terms of
the Nyquist diagram.
A more practical concept is the delay margin, which is the maximum amount
of time delay T that may be added to the open-loop transfer function before the
closed-loop system becomes unstable. This concept is illustrated in Fig. 5.50.
The delay margin can be estimated in practice by inserting a delay element in
the loop (one would increase the delay until oscillations appear in the response
of the system and before instability fully develops).
A delay T corresponds to a transfer function e^(−sT), with

|e^(−jωT)| = 1, ∡e^(−jωT) = −ωT (in rad). (5.59)
or

Delay margin (s) = Phase margin (rad) / Crossover frequency (rad/s)
= [Phase margin (deg)/360°] / Crossover frequency (Hz). (5.61)
For the open-loop transfer function ωn²/(s² + 2ζωn s), the closed-loop transfer function is

PCL = ωn² / (s² + 2ζωn s + ωn²).
ζ     PM      PO      PF
0.2   22.6°   52.7%   2.55
0.3   33.3°   37.2%   1.75
0.4   43.1°   25.4%   1.36
0.5   51.8°   16.3%   1.15
0.6   59.2°   9.5%    1.04
0.7   65.2°   4.6%    1.0002
The results show a tight connection between the phase margin of the second-
order system, the overshoot of the step response, and the peaking of the fre-
quency response. In view of the results, a phase margin of 60◦ is often taken as
an objective in control systems. Although the formulas for the phase margin
were obtained for a specific second-order system, the results are taken to provide
guidance for higher-order systems as well.
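The PM and PO columns of the table can be reproduced numerically. The sketch below assumes the open-loop transfer function ωn²/(s(s + 2ζωn)) and the standard second-order overshoot formula exp(−πζ/√(1 − ζ²)):

```python
import numpy as np

# Phase margin and percent overshoot versus zeta for the assumed loop
# G(s) = wn^2/(s(s + 2*zeta*wn)); the phase margin depends only on zeta.
def pm_deg(zeta):
    x = np.sqrt(np.sqrt(1 + 4 * zeta**4) - 2 * zeta**2)  # crossover w2/wn
    return np.degrees(np.arctan2(2 * zeta, x))

def po_pct(zeta):
    return 100 * np.exp(-np.pi * zeta / np.sqrt(1 - zeta**2))

for z in (0.2, 0.3, 0.4, 0.5, 0.6, 0.7):
    print(z, round(pm_deg(z), 1), round(po_pct(z), 1))
# reproduces the PM and PO columns (22.6/52.7, ..., 65.2/4.6)
```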
have this property hold true for all ω, it is only practical to do so for a finite
range of frequencies. Indeed, the gain of physical systems usually falls rapidly
at high frequencies. A controller may only partially compensate for this effect.
The phase of physical systems also tends to increase rapidly at high frequencies,
and often does so in ways that cannot be precisely modelled. In order to ensure
closed-loop stability, it may be necessary to bring the loop gain well below 1
before significant phase variations are observed.
Overall, the design problem in the frequency domain consists first of all in a
careful selection of the crossover frequency of the closed-loop system. Below this
frequency, the loop gain will be made as large as possible (sometimes through
the use of integral control). Around the crossover frequency, the phase will be
carefully controlled in order to ensure stability as well as robustness to uncer-
tainties and parameter variations (considering the Nyquist criterion). A Bode
plot of a typical open-loop transfer function is shown in Fig. 5.53.
[Fig. 5.53: typical open-loop Bode magnitude plot, showing a performance bound at low frequencies, a robustness bound at high frequencies, and the crossover frequency where the gain crosses 0 dB (gain = 1)]
C(s) = kc (s + b)/(s + a), (5.67)
where a > b > 0. In the s−plane, the controller has a pole and a zero on the
negative real axis, with the zero being closer to the origin than the pole.
[Fig. 5.54: Bode plots of the lead controller, showing the gains m0, mp, and m∞, and the peak phase φp at the frequency ωp between b and a]
The Bode plots of the lead controller are shown in Fig. 5.54. The controller
is called a lead controller because the phase angle is positive. For the same
reason, the controller is called a lag controller if b > a (the system becomes
a proportional-integral controller for a = 0). The DC gain m0 and the high-frequency gain m∞ of the controller are

m0 = kc (b/a), m∞ = kc. (5.68)
The frequency ωp is defined on the figure as the frequency where the angle of
the frequency response is maximum. φp is the phase of the lead controller at the
frequency ωp. Since
|C(jω)| = kc √[(b² + ω²)/(a² + ω²)],

∡C(jω) = tan⁻¹(ω/b) − tan⁻¹(ω/a),

d∡C(jω)/dω = [1/(1 + ω²/b²)](1/b) − [1/(1 + ω²/a²)](1/a) = (ω² − ab)(b − a)/[(b² + ω²)(a² + ω²)], (5.69)

which is zero for ωp = √(ab).
The result shows that the frequency of the peak is located mid-way between the
pole and the zero on a log scale (log(ωp) = (log(a) + log(b))/2).
A different formula can be obtained for φp, using the fact that ωp² = ab, so that

sin²(∡C(jωp)) = sin²(φp) = ωp²(a − b)²/[(ab + ωp²)² + ωp²(a − b)²] = (a − b)²/(a + b)², (5.72)

and

sin(φp) = (a/b − 1)/(a/b + 1) ⇔ a/b = (1 + sin(φp))/(1 − sin(φp)). (5.73)
This result shows that the amount of phase depends on the ratio of the pole
magnitude over the zero magnitude (a/b). Specifically
a/b φp
5.83 45◦
9 53.1◦
13.9 60◦
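Relation (5.73) and the table can be checked numerically (a sketch using numpy; the values a = 9, b = 1 are assumed for the direct check):

```python
import numpy as np

# Relation (5.73) between the ratio a/b of a lead controller and the peak
# phase phi_p, plus a direct check of the phase at wp = sqrt(a*b).
def ratio(phi_deg):
    s = np.sin(np.radians(phi_deg))
    return (1 + s) / (1 - s)

print(round(ratio(45), 2), round(ratio(53.1), 2), round(ratio(60), 1))

# Direct check for a/b = 9 (with b = 1): phase of (jw + b)/(jw + a) at wp
a, b = 9.0, 1.0
wp = np.sqrt(a * b)
phi_p = np.degrees(np.angle((1j * wp + b) / (1j * wp + a)))
print(phi_p)        # ~53.13 degrees
```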
Lead controller for a double integrator: consider now the control problem
for a double integrator
P(s) = k/s². (5.74)
The Bode plots of the loop transfer function are shown in Fig. 5.55. From root-
locus theory, the closed-loop system is known to be stable for all kkc > 0. A
possible choice is to set the gain of the controller such that the loop gain is
equal to 1 at ωp. In this manner, the frequency ωp is the same as the crossover
frequency of the system, and the phase φp is the phase margin. For the plant under consideration, setting the loop gain to be 1 at ωp implies that

mp (k/ωp²) = kc √(b/a) (k/ωp²) = 1, (5.75)

or

kc = √(a/b) (ωp²/k). (5.76)
Figure 5.55: Bode plots of lead controller with double integrator plant
Assume that a certain phase margin and a certain cross-over frequency are
specified. The ratio a/b is determined by the phase margin and ωp is set equal
to the crossover frequency. Then, the controller parameters are determined from
a = ωp √(a/b), b = ωp √(b/a), kc = √(a/b) (ωp²/k). (5.77)
For example, if a phase margin of 53.1◦ is chosen, a/b = 9, and the parameters
of the controller are equal to
a = 3ωp, b = ωp/3, kc = 3ωp²/k. (5.78)
Therefore, the three parameters of the controller can be set as functions of the
crossover frequency ωp. ωp is a free parameter that can be adjusted experimen-
tally to be as large as possible to maximize the bandwidth of the system. In
theory, there is no limit to ωp , but in practice, additional dynamics will limit
the possible range. If these dynamics were included in the model, a careful
design would maximize the closed-loop bandwidth within gain and phase margin specifications. Such designs involve tedious trial-and-error adjustments of the controller parameters, and are preferably performed nowadays using some numerical optimization tool [9].
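The design equations (5.78) can be checked numerically; the values of k and ωp below are assumed for illustration:

```python
import numpy as np

# Check of the design (5.78) for P(s) = k/s^2: with a = 3*wp, b = wp/3,
# kc = 3*wp^2/k, the loop gain is 1 at wp and the phase margin is ~53.1 deg.
k, wp = 2.0, 5.0        # assumed plant gain and chosen crossover (rad/s)
a, b, kc = 3 * wp, wp / 3, 3 * wp**2 / k

def L(w):               # loop transfer function C(jw)P(jw)
    return kc * (1j * w + b) / (1j * w + a) * k / (1j * w)**2

gain = abs(L(wp))
pm = 180 + np.degrees(np.angle(L(wp)))
print(gain, pm)         # ~1 and ~53.13 degrees
```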
Figure 5.56: Nyquist diagram of a non-robust design with good gain and phase
margins
[Nyquist plane showing the robustness bound near the (−1, 0) point]

[Surface plot of |GCL(jω)| as a function of Re(G(jω)) and Im(G(jω))]
The plot is often represented through level lines, as shown in Fig. 5.59. It
turns out that |GCL (jω)| = M if G(jω) belongs to a circle of radius M/(M 2 − 1)
with center −M 2 /(M 2 − 1). In order to avoid peaking in the frequency domain,
the Nyquist curve of the open-loop frequency response should avoid as much as
possible the portion of the complex plane where Re(G(jω)) < −1/2, especially
when Im(G(jω)) is small.
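The circle property can be verified numerically (a sketch assuming M = 2, for which M² − 1 > 0):

```python
import numpy as np

# M-circle check: |G/(1 + G)| = M at every point of the circle of
# center -M^2/(M^2 - 1) and radius M/(M^2 - 1), here with M = 2.
M = 2.0
c = -M**2 / (M**2 - 1)
r = M / (M**2 - 1)
vals = []
for theta in np.linspace(0.0, 2 * np.pi, 9):
    G = c + r * np.exp(1j * theta)
    vals.append(abs(G / (1 + G)))
print(vals)        # all ~2
```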
[Fig. 5.59: level lines of |GCL(jω)| = M in the plane of G(jω), for values of M from 0 to ∞]
5.4 Problems
Problem 5.1: Sketch the Bode plots for the following transfer functions. Make
sure to label the graphs, and to give the slopes of the lines in the magnitude
plot.
(a) P(s) = (s − 10)/((s + 1)(s + 100))

(b) P(s) = 100(s − 10)(s + 10)/((s + 0.1)(s + 100)²)

(c) P(s) = (s + 10)/(s² + 0.1s + 1)

(d) P(s) = (s − 10)/(s(s + 1))

(e) P(s) = 100/((s − 10)²(s + 1))

(f) P(s) = (s² + 2s + 100)/s²
Problem 5.2: (a) The magnitude Bode plot of a system is shown in Fig. 5.60.
What are the possible transfer functions of stable systems having this Bode plot?
[Fig. 5.60: magnitude Bode plot for Problem 5.2(a)]
(b) The Bode plots of a system are shown in Fig. 5.61. Give an estimate of the
transfer function of the system.
[Fig. 5.61: Bode plots (magnitude and phase) for Problem 5.2(b)]
Problem 5.3: (a) The system whose Bode plots are shown in Fig. 5.62 is stable
in closed-loop. Find its gain margin and phase margin.
(b) Describe the behavior of the closed-loop system of part (a) if the open-
loop gain is increased to a value close to the maximum value given by the gain
margin. In particular, what can you say about the locations of the poles of the
closed-loop system?
(c) Consider an open-loop stable system such that the magnitude of its frequency
response is less than 1 for all ω. Can it be determined whether the closed-loop system is stable with only that information?

[Fig. 5.62: Bode plots (magnitude and phase) for Problem 5.3]
Problem 5.4: (a) The Nyquist diagram of a stable system is shown in Fig. 5.63,
with the overall diagram shown on the left and the detail around the (-1,0) point
shown on the right. The solid line corresponds to ω > 0, with the arrow giving
the direction of increasing ω. The dashed line is the symmetric curve obtained
for ω < 0. Assuming that the transfer function of the system is multiplied by
a gain k > 0, what is the set of values of k for which the system is stable in
closed-loop ?
(b) Repeat part (a) for the system whose Nyquist curve is shown in Fig. 5.64,
given that the system has one unstable pole.
Problem 5.5: (a) The Nyquist diagram for P (s) = 5(s + 2)/(s + 1)3 is shown
in Fig. 5.65, with the overall diagram shown on the left and the detail around
the (-1,0) point shown on the right. Indicate what the gain margin and the
phase margin are (for the phase margin, a rough estimate is fine). Compare the
gain margin results with those predicted by a root-locus plot and the use of the
Routh-Hurwitz criterion.
(b) Repeat part (a) for P (s) = 2(s + 5)/(s + 1)3 and the diagrams shown in
Fig. 5.66.

[Figs. 5.63 and 5.64: Nyquist diagrams for Problem 5.4, overall and detail around the (−1, 0) point]
Problem 5.6: Sketch the Bode plots for the following transfer function
P(s) = 10(s − 1)/(s + 10)². (5.79)
Make sure to label the graphs, and to give the slopes of the lines in the magnitude
plot.
Problem 5.7: Sketch the Bode plots for the following transfer function
P(s) = 10(s + 1)/[s²(s² − 2s + 100)]. (5.80)
Make sure to label the graphs, and to give the slopes of the lines in the magnitude
plot.
Problem 5.8: (a) Sketch the Bode plots for
P(s) = (s − 1)/[s(s + 1)]. (5.81)
pole is real, two poles are on a 45° line, and two poles have real parts equal
to −0.5. You may use the fact that, for α small, sin(α) ≃ tan(α) ≃ α, and
cos(α) ≃ 1. Be sure to label the axes precisely.
[Figure: pole locations in the s-plane, with the points ±10j, −10, −0.5 and the 45° line indicated]
Problem 5.9: (a) Give the gain margin and the phase margin of the system
whose Bode plots are shown in Fig. 5.68 (the plots are for the open-loop transfer
function and the closed-loop transfer function is assumed to be stable).
(b) Indicate whether the system whose Nyquist curve is shown in Fig. 5.69 is
closed-loop stable, given that it is open-loop stable.
(c) What are the values of the gain g > 0 by which the open-loop transfer
function of part (b) may be multiplied, with the closed-loop system remaining
stable?
(d) Sketch an example of a Nyquist curve for a system that has three unstable
open-loop poles, and which is closed-loop stable.
Problem 5.10: The magnitude Bode plot of a system is shown in Fig. 5.70.
Give all the possible transfer functions of systems having this Bode plot. The
poles and zeros are all real, and the values of the gain, poles, and zeros, are all
multiples of 10.
Problem 5.11: All parts of this problem refer to the system whose Nyquist
curve is shown in Fig. 5.71 (only the portion for ω > 0 is plotted).
(a) Knowing that the closed-loop system is stable, can one say for sure that the
open-loop system is stable?
(b) Given that the closed-loop system is stable, estimate the gain margin and
the phase margin of the closed-loop system.
(c) How many unstable poles does the closed-loop system have if the open-loop
[Fig. 5.68: Bode plots, gain (dB) and phase (deg) versus frequency (rad/sec)]
[Fig. 5.69: Nyquist curve with marked real-axis values −3, −2, −1, −2/3, −1/3, and 1]
[Fig. 5.70: magnitude Bode plot, from 10 dB down to −20 dB over 10⁰ to 10⁴ rad/sec]
[Fig. 5.71: Nyquist curve with the points ω = 0, ω = 1 (at −j), and ω = 5 marked, together with the −1 point]
gain is multiplied by 5?
(d) Give the steady-state response yss (t) of the open-loop system to an input
x(t) = 2. Repeat for x(t) = 3 cos(t) and for x(t) = 4 cos(5t).
(e) Repeat part (d) for the closed-loop system.
Problem 5.12: (a) Sketch the Bode plots of
G(s) = 1/[(s + 1)(s − 1)]. (5.82)
Be sure to label the axes precisely.
(b) Sketch the magnitude Bode plot of
Problem 5.13: (a) Consider the Nyquist diagram of a transfer function G(s)
shown in Fig. 5.72. Only the portion for ω > 0 is plotted. Assume that G(s)
has no poles in the open right half-plane, and that a gain k is cascaded with
G(s). Find the ranges of positive k for which the closed-loop system is stable.
[Fig. 5.72: Nyquist diagram with real-axis values −2, −1.5, −1, −0.5, and 0.75 marked]
(b) Bode plots of the open-loop transfer function of a feedback system are shown
in Fig. 5.73, with the detail from 1 to 10 rad/s shown on the right. For this
system:
• How much can the open-loop gain be changed (increased and/or decreased)
before the closed-loop system becomes unstable?
Chapter 6

Discrete-time signals and systems
The z-transform is the result of an infinite series involving all the values of the
signal x(k) and the complex variable z,
X(z) = x(0) + x(1)z^{-1} + x(2)z^{-2} + ···.
The variable z is similar to the variable s of the Laplace transform. The definition
of the z-transform is unilateral, i.e., independent of x(k) for k < 0. The bilateral
z-transform requires summation over both positive and negative values of time,
but is not used here. We discuss a few important examples.
1. Discrete-time impulse
A discrete-time impulse is defined by
δ(k) = 1 for k = 0
     = 0 otherwise,
and its z-transform is X(z) = 1.
2. Finite-length signal
A finite-length signal is a signal that vanishes after a finite period of time. For example,
X(z) = (z³ − z² + 2)/z⁵ ⇔ x(k) = 0, 0, 1, −1, 0, 2, 0, 0, ... (6.6)
Note that the definition of the z-transform implies that a rational transform
X(z) = N(z)/D(z) must always be such that deg(N(z)) ≤ deg(D(z)) (i.e.,
the transform must be a proper function of z).
3. Step signal
A step signal is given by
x(k) = 1 for k ≥ 0
     = 0 for k < 0, (6.7)
with the z-transform
X(z) = 1 + z^{-1} + z^{-2} + ··· (6.8)
Consider the auxiliary result
(1 + a + a² + ··· + a^n)(1 − a) = 1 + a + a² + ··· + a^n − a − a² − ··· − a^{n+1}
                                = 1 − a^{n+1}. (6.10)
It follows that
1 + a + a² + ··· + a^n = (1 − a^{n+1})/(1 − a) (6.11)
and
lim_{n→∞} Σ_{i=0}^{n} a^i = 1/(1 − a) if |a| < 1. (6.12)
Applying the auxiliary result to the step signal, one finds that
X(z) = 1/(1 − z^{-1}) = z/(z − 1) if |z^{-1}| < 1, or |z| > 1. (6.13)
As for the Laplace transform, the z-transform is defined only in the region
of the z-plane where convergence of the infinite series is guaranteed. This
region is called the region of convergence (ROC). For the step signal, the
region of convergence is the portion of the z plane located outside the circle
of radius 1.
The z-transform of the step signal (6.9) is a rational function of z. The
pole is at z = 1 and there is a zero at z = 0. The transform is different
from the transform of the continuous-time step function, which is (1/s).
We will find that z = 1 is the equivalent of s = 0 in the s−plane.
4. Geometric progression
The geometric progression signal is the equivalent of the exponential signal
in continuous-time, and is given by
x(k) = a^k. (6.14)
For the time being, we assume that a is real. Depending on the magnitude
of a, the signal decays to zero (|a| < 1) or grows without bound (|a| > 1),
and its z-transform is
X(z) = 1 + az^{-1} + a²z^{-2} + ··· = 1/(1 − az^{-1}) = z/(z − a) if |az^{-1}| < 1, or |z| > |a|. (6.15)
Fig. 6.4 shows the decaying signal for |a| < 1, and the associated pole in
the z-plane.
[Fig. 6.4: the decaying signal 1, a, a², ... for |a| < 1, and the pole at z = a in the z-plane; X(z) exists outside the circle of radius a (the region of convergence) and does not exist inside]
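The convergence claim in (6.15) can be checked numerically by comparing a truncated version of the series with the closed form z/(z − a), at a point z outside the circle |z| = |a|; a small sketch:

```python
# Truncated z-transform series of x(k) = a^k, evaluated at a point z with
# |z| > |a|, compared with the closed form z/(z - a) from (6.15).
a, z = 0.5, 2.0
series = sum(a**k * z**(-k) for k in range(200))
closed_form = z / (z - a)
print(abs(series - closed_form))  # essentially zero
```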
As in Fig. 6.6, one defines the magnitude |p| and angle ∡p of the complex pole
with p = |p|e^{j∡p}.
[Figure: discrete-time sinusoids for Ω₀ = 0 (T = ∞), Ω₀ = π/4 (T = 8 samples), Ω₀ = π/2 (T = 4 samples), and Ω₀ = π (T = 2 samples)]
[Figure: pole locations in the z-plane and the associated signals — faster decay closer to the origin (impulse at z = 0), step at z = 1 (zero frequency), constant magnitude on the unit circle, faster growth outside the unit circle, and increasing frequency with increasing pole angle]
The time constant τ_c of the decay is defined with |p|^k = e^{−k/τ_c}, so that
τ_c = −1/ln|p|. The time constant should be multiplied by 4 to obtain the time
needed for a decay of the signal to 2% of its original value. Specifically,
|p| = 0.99 ⇒ 4τ_c ≃ 400, which means that the signal takes approximately 400
samples to decay to 2% of its original value.
ζ = −Re(p)/|p| = −Re(p)/√(Re(p)² + Im(p)²) (continuous-time). (6.28)
[Figure: in the s-plane, ζ = cos(α), where α is the angle of the pole measured from the negative real axis]
Since the time constant and the period of oscillation associated with a complex
pole are τ_c = −1/Re(p) and T_osc = 2π/Im(p) in continuous-time, a desirable
damping factor of ζ > 0.707 means that τ_c < T_osc/(2π), implying
that the convergence time is small compared to the period of oscillation.
To obtain an equivalent result in discrete-time, we use the applicable definitions
of time constant and of period of oscillation
τ_c = −1/ln|p|, T_osc = 2π/∡p (discrete-time), (6.31)
to obtain
ζ = −ln|p| / √((ln|p|)² + (∡p)²) (discrete-time). (6.32)
This equation can also be written as
ln|p| = −[ζ/√(1 − ζ²)] ∡p, (6.33)
or
|p| = 1/e^{α∡p} where α = ζ/√(1 − ζ²). (6.34)
A curve of constant damping in discrete-time is the set of complex numbers
p such that (6.34) is satisfied with a fixed ζ. (6.34) shows that the magnitude
of the pole must decrease as the angle increases. A line of constant damping is
a curve called a logarithmic spiral, and is shown in Fig. 6.11.
The line corresponding to ζ = 0.707 is the curve |p| = e^{−∡p}, since α = 1 in
that case.
The damping factor can be computed from (6.32) to be ζ = 0.404, which is low.
Indeed, the time signal p^k + (p*)^k is shown in Fig. 6.12 and exhibits a visible
oscillation.
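Equation (6.32) is easy to evaluate numerically; a minimal sketch of the computation (the pole used in the illustrative print is arbitrary, not the one from the example in the text):

```python
import cmath, math

def damping_factor(p):
    # Discrete-time damping factor from (6.32):
    # zeta = -ln|p| / sqrt((ln|p|)^2 + (angle of p)^2)
    log_mag = math.log(abs(p))
    ang = abs(cmath.phase(p))
    return -log_mag / math.sqrt(log_mag**2 + ang**2)

# A real pole 0 < p < 1 gives zeta = 1 (no oscillation); a pole on the unit
# circle with nonzero angle gives zeta = 0 (no decay).
print(damping_factor(0.8), damping_factor(cmath.rect(0.9, math.pi / 4)))
```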
We use the notation X̄ to distinguish the DTFT from the z-transform X(z). The
DTFT is defined on a finite interval (from −π to π), or as a periodic function on
an infinite interval. An example of a DTFT is shown in Fig. 6.13 (the magnitude
is shown only, but the DTFT is generally a complex function).
In other words, we must assume that x(k) is zero for negative time so that the
bilateral and unilateral transforms can be related. The DTFT is equal to the
z-transform evaluated on the unit circle (z = ejΩ ). The property requires that
the transforms exist in an ordinary sense on the unit circle, i.e., that the region
of convergence of the z-transform includes the unit circle. The property is not
satisfied by a sinusoid, for which the region of convergence is |z| > 1.
1. Linearity
The z-transform is linear: the transform of a linear combination of signals
is the same linear combination of their transforms.
2. Right shift
The right shift or one-step delay is defined by
y(k) = x(k − 1)u(k − 1),
where u(k) is the unit step. The signal and the one-step delayed signal are
shown in Fig. 6.14. Since the signal is simply delayed by one step,
Y(z) = z^{-1}X(z).
Examples of delayed signals are shown in Fig. 6.15 and include a delayed
impulse and a delayed exponential signal. An observation is that a signal
with a pole at z = a and no zero at z = 0 is an exponential signal, but
with a zero value at the initial time instant.
3. Left shift
In a similar manner, one can obtain the formula for a left shift:
y(k) = x(k + 1) ⇔ Y(z) = zX(z) − zx(0).
4. Initial value
The initial value of the signal x(k) can be obtained as
x(0) = lim_{z→∞} X(z).
For example,
X(z) = (2z² + 1)/(3z² + z) ⇒ x(0) = 2/3,
X(z) = (2z + 1)/(3z² + z) ⇒ x(0) = 0. (6.49)
Other values of x(k) may also be obtained in a similar manner, e.g.
5. Final value
If lim_{k→∞} x(k) exists, then
lim_{k→∞} x(k) = lim_{z→1} (z − 1)X(z).
The result is similar to the equivalent result for the Laplace transform,
but s = 0 is replaced by z = 1.
6. Multiplication by time
y(k) = kx(k) ⇔ Y(z) = −z dX(z)/dz. (6.52)
For example, consider the transform
x(k) = a^k ⇔ X(z) = z/(z − a). (6.53)
One may deduce the transforms of the signals
y₁(k) = k a^k ⇔ Y₁(z) = −z [(z − a) − z]/(z − a)² = az/(z − a)²,
y₂(k) = k² a^k ⇔ Y₂(z) = −z [a(z − a)² − az·2(z − a)]/(z − a)⁴ = az(z + a)/(z − a)³,
y₃(k) = k³ a^k ⇔ Y₃(z) = ··· = az(z² + 4az + a²)/(z − a)⁴. (6.54)
More generally,
Y(z) = z/(z − a)^n ⇔ y(k) = [k(k − 1)···(k − n + 2)/(n − 1)!] a^{k−n+1}, (6.55)
where the product in the numerator contains n − 1 terms.
In particular
Y(z) = z/(z − a)³ ⇔ y(k) = (1/2) k(k − 1) a^{k−2},
Y(z) = z/(z − a)⁴ ⇔ y(k) = (1/6) k(k − 1)(k − 2) a^{k−3}. (6.56)
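The transform Y₁(z) in (6.54) can be sanity-checked numerically: the defining series Σ k a^k z^{-k} should match az/(z − a)² at any point with |z| large enough; a sketch:

```python
# Check y1(k) = k a^k  <=>  Y1(z) = az/(z - a)^2 by truncating the series.
a, z = 0.5, 2.0
series = sum(k * a**k * z**(-k) for k in range(300))
closed_form = a * z / (z - a)**2
print(series, closed_form)  # the two values agree
```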
7. Convolution
As in continuous-time, if y(k) = h(k) ∗ x(k), then Y(z) = H(z)X(z).
Consider, for example, the convolution of a signal x(k) with a step signal
u(k), yielding
y(k) = x(k) ∗ u(k) = Σ_{i=0}^{k} x(i) ⇔ Y(z) = [z/(z − 1)] X(z). (6.61)
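The correspondence (6.61) says that a discrete-time integrator is a running sum, computed by the recursion y(k) = y(k − 1) + x(k); a minimal sketch:

```python
# Discrete-time integrator: y(k) = y(k-1) + x(k), with y(-1) = 0,
# i.e., y(k) is the running sum of x(0), ..., x(k) as in (6.61).
def integrate(x):
    y, acc = [], 0.0
    for sample in x:
        acc += sample        # y(k) = y(k-1) + x(k)
        y.append(acc)
    return y

print(integrate([1, 2, 3, 4]))  # [1.0, 3.0, 6.0, 10.0]
```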
In other words,
Discrete-time integrator ⇔ z/(z − 1), (6.62)
compared to 1/s in continuous-time. The formula for the integration of a signal
could also have been derived quickly by transforming the recursion formula
y(k) = y(k − 1) + x(k), so that Y(z) = z^{-1}Y(z) + X(z), or
Y(z) = [z/(z − 1)]X(z) (assuming that y(−1) = 0). Note that, for a slightly
different formulation with
c₀ = X(0), cᵢ = [(z − pᵢ) X(z)/z]_{z=pᵢ}. (6.66)
Note that the preliminary division of X(z) by z ensures that the function in the
time domain can be obtained directly, without shifting. For complex poles, the
two complex conjugate time-domain signals can be merged to produce a real
signal
cz/(z − p) + c*z/(z − p*) ⇔ c p^k + c*(p*)^k = 2|c| |p|^k cos(∡p k + ∡c). (6.69)
Example: let
X(z) = 1/[(z − 1)(z + 1)], (6.70)
so that
X(z) = −1 + (1/2) z/(z − 1) + (1/2) z/(z + 1), (6.72)
x(k) = −δ(k) + 1/2 + (1/2)(−1)^k. (6.73)
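The inverse transform (6.73) can be checked against the series expansion of X(z) = 1/(z² − 1) = z^{-2} + z^{-4} + ···, which gives x(k) = 1 for even k ≥ 2 and zero otherwise; a sketch:

```python
# x(k) = -delta(k) + 1/2 + (1/2)(-1)^k from (6.73) should equal
# 0, 0, 1, 0, 1, 0, ... (the series coefficients of 1/(z^2 - 1)).
def x(k):
    return (-1.0 if k == 0 else 0.0) + 0.5 + 0.5 * (-1.0)**k

print([x(k) for k in range(8)])  # [0.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0]
```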
If X(z) has poles at z = 0, or has other repeated poles, one must use the
more general formula of partial fraction expansion. Poles at z = 0 will lead to
terms of the form
c/z^n ⇔ cδ(k − n), (6.75)
while repeated complex poles lead to terms of the form
cz/(z − p)^n + c*z/(z − p*)^n
⇔ 2|c| [k(k − 1)···(k − n + 2)/(n − 1)!] |p|^{k−n+1} cos(∡p (k − n + 1) + ∡c). (6.76)
X(z) = z/(z − a), (6.77)
for which long division of z by z − a gives
X(z) = 1 + az^{-1} + a²z^{-2} + ···,
so that x(0) = 1, x(1) = a, x(2) = a², etc. (first example using long division).
Another example is
1
X(z) = , (6.78)
z2 − 1
which was inverted by partial fraction expansion in (6.74). The same result may
be obtained through the following computation.
Dividing 1 by z² − 1 gives
X(z) = z^{-2} + z^{-4} + z^{-6} + ···,
so that x(2) = x(4) = x(6) = ··· = 1, and all other values are zero.
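Long division of N(z) by D(z) to obtain the series in powers of z^{-1} is easy to mechanize; a sketch (power-series division after multiplying numerator and denominator by z^{-n}):

```python
# Series coefficients x(0), x(1), ... of X(z) = N(z)/D(z), with coefficients
# listed in descending powers of z and deg(N) <= deg(D).
def long_division(num, den, terms):
    n = len(den) - 1
    b = [0.0] * (n - len(num) + 1) + list(map(float, num))  # N(z) z^{-n}
    b += [0.0] * max(0, terms - len(b))
    c = []
    for k in range(terms):
        val = b[k] - sum(den[j] * c[k - j] for j in range(1, min(k, n) + 1))
        c.append(val / den[0])
    return c

print(long_division([1], [1, 0, -1], 8))    # 1/(z^2 - 1): coefficients 0,0,1,0,1,0,1,0
print(long_division([1, 0], [1, -0.5], 5))  # z/(z - 0.5): coefficients 1, 0.5, 0.25, ...
```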
|z| = 1 ⇔ Re(s) = 0
|z| > 1 ⇔ Re(s) > 0
|z| < 1 ⇔ Re(s) < 0 (6.80)
• x(k) converges to zero ⇔ all poles are inside the unit circle.
• x(k) converges ⇔ all poles are inside the unit circle, except at most a
single pole at z = 1.
• x(k) is bounded ⇔ all poles are inside the unit circle or are non-repeated
poles on the unit circle.
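These three conditions can be encoded directly from a list of poles; a sketch (poles given as complex numbers, with repeated entries denoting multiplicity; the tolerance is an implementation choice):

```python
# Classify a signal from its z-transform poles (repeated entries = multiplicity).
def converges_to_zero(poles, tol=1e-9):
    return all(abs(p) < 1 - tol for p in poles)

def is_bounded(poles, tol=1e-9):
    on_circle = [p for p in poles if abs(abs(p) - 1) <= tol]
    # Bounded: all poles inside the unit circle, except non-repeated poles on it.
    no_repeats = all(sum(abs(p - q) <= tol for q in on_circle) == 1 for p in on_circle)
    inside_ok = all(abs(p) < 1 + tol for p in poles)
    return inside_ok and no_repeats

print(converges_to_zero([0.5, -0.3 + 0.4j]))  # True
print(is_bounded([1.0, 0.5]))                 # True: single pole at z = 1
print(is_bounded([1.0, 1.0]))                 # False: repeated pole at z = 1
```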
Then
Y(z) = [z/(z − (1 + α))] X(z), (6.84)
This impulse response is shown in Fig. 6.17 and is unbounded. The money grows
exponentially with a single initial deposit: the system is unstable!
Note that the daily rate can be transformed into a yearly rate (and vice-
versa) with
Example 2: consider the case of a microphone that receives the signal produced
by a speaker. The system is shown in Fig. 6.18. There is a direct path from the
speaker to the microphone, but a reflection is also received from a surface in the
room. More echoes may be present in the received signal.
A finite impulse response (FIR) system is such that the impulse response is a
finite-time signal, i.e., for some n
h(k) = 0 for k > n. (6.89)
The transfer function of an FIR system is of the form
H(z) = h(0) + h(1)z^{-1} + ··· + h(n)z^{-n}
     = [h(0)z^n + ··· + h(n)]/z^n. (6.90)
In other words
A system is FIR ⇔ H(z) is rational with all poles at z = 0
(6.91)
Example 2 (echo system) was an example of an FIR system.
For an FIR system
y(k) = h(k) ∗ x(k) = Σ_{i=−∞}^{∞} h(i)x(k − i)
     = h(0)x(k) + h(1)x(k − 1) + ··· + h(n)x(k − n). (6.92)
In other words, the output signal is the linear combination of the delayed values
of the input signal, and is particularly easy to compute.
Systems that are not FIR are infinite impulse response (IIR) systems, and are
such that h(k) ≠ 0 for values of k beyond any finite n. As in continuous-time,
H(z) is BIBO stable ⇔ all poles of H(z) are inside the unit circle
(d)
H(z) = 1/z³: stable. (6.98)
(e)
H(z) = 1/(z − 1): unstable. (6.99)
For cases (c) and (d), the output will be unbounded for most input signals. For
case (e), the output will be unbounded for signals containing a DC component
or bias.
The transform has two poles on the unit circle, placed at an angle Ω0 from the
real axis. Ω0 is the frequency of the signal, in radians, with an associated period
of 2π/Ω0 in samples.
Using partial fraction expansions, the response of a BIBO stable system with
rational transfer function H(z) = N(z)/D(z) to an input x(k) = x_m cos(Ω₀k) is
Y(z) = x_m [N(z)/D(z)] [z² − cos(Ω₀)z]/[z² − 2cos(Ω₀)z + 1]
     = x_m N₁(z)/[z² − 2cos(Ω₀)z + 1] + N₂(z)/D(z), (6.105)
where the first term is the steady-state response Y_ss(z) and the second term is
the transient response Y_tr(z).
where the steady-state response is
y_ss(k) = x_m M cos(Ω₀k + φ).
In other words, M and φ are the magnitude and angle of the transfer function
evaluated on the unit circle:
M = |H(e^{jΩ₀})|, φ = ∡H(e^{jΩ₀}).
Example: consider H(z) = 1 − z^{-4}, so that y(k) = x(k) − x(k − 4). The
computation of the output and an implementation of the system are shown in
Fig. 6.20, where D represents a one-step delay.
Figure 6.20: System with H(z) = 1 − z^{-4}, signal computation (top) and system
implementation (bottom)
The system has a finite impulse response, and is therefore BIBO stable. Its
transfer function is given by
H(z) = 1 − z^{-4} = (z⁴ − 1)/z⁴. (6.110)
x(k) = a (Ω₀ = 0),
x(k) = a(−1)^k (Ω₀ = π),
x(k) = a cos(πk/2 + φ) (Ω₀ = π/2),
i.e., for signals with frequencies 0, π/2, and π. This result may be easily inter-
preted by considering the computation of the output signal in Fig. 6.20.
so that
H(e^{jπ/8}) = (j − 1)/j = 1 + j = √2 e^{j45°} = √2 e^{jπ/4}, (6.115)
and
y_ss(k) = √2 cos(πk/8 + π/4). (6.116)
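The steady-state expression (6.116) can be checked by simulating y(k) = x(k) − x(k − 4) directly: since the system is FIR with four delays, the transient lasts only four samples, after which the output equals the steady-state response exactly. A sketch:

```python
import math

# Simulate y(k) = x(k) - x(k - 4) for x(k) = cos(pi k / 8) and compare with
# the steady-state response sqrt(2) cos(pi k / 8 + pi / 4) from (6.116).
x = [math.cos(math.pi * k / 8) for k in range(40)]
y = [x[k] - (x[k - 4] if k >= 4 else 0.0) for k in range(40)]
yss = [math.sqrt(2) * math.cos(math.pi * k / 8 + math.pi / 4) for k in range(40)]
err = max(abs(y[k] - yss[k]) for k in range(4, 40))
print(err)  # essentially zero once the transient has passed
```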
Typically, the initial conditions are set to zero, but the effect of nonzero values
may be analyzed in much the same way as in continuous-time. For nonzero
values prior to k = 0
so that
In this manner, the shifting formula can be extended to arbitrary time shifts,
and used to account for arbitrary initial conditions.
Applying the formulas to the difference equation, one obtains
and
Y(z) = N(z)/(z² + a₁z + a₀) + [(b₂z² + b₁z + b₀)/(z² + a₁z + a₀)] X(z), (6.122)
where
The first term Y_zi(z) is the response to the initial conditions (the zero-input
response) and the second term Y_zs(z) is the response to the input (the zero-state
response). The two denominators are identical, and determine the poles of the
system. The transfer function of the system is the rational function
H(z) = (b₂z² + b₁z + b₀)/(z² + a₁z + a₀), (6.124)
and can be directly transcribed from the original difference equation.
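The decomposition (6.122) into zero-input and zero-state responses can be verified by simulation: run the recursion once with the initial conditions and zero input, once with zero initial conditions and the input, and check that the sum equals the complete response. A sketch with arbitrary illustrative coefficients:

```python
# y(k) + a1 y(k-1) + a0 y(k-2) = b2 x(k) + b1 x(k-1) + b0 x(k-2),
# simulated with y(-2), y(-1) given and x(k) = 0 for k < 0.
def simulate(x, y_init, a1=-0.6, a0=0.25, b2=2.0, b1=-0.3, b0=0.25):
    y = list(y_init)                 # stores y(-2), y(-1)
    xp = [0.0, 0.0] + list(x)
    for k in range(len(x)):
        y.append(-a1 * y[-1] - a0 * y[-2]
                 + b2 * xp[k + 2] + b1 * xp[k + 1] + b0 * xp[k])
    return y[2:]

x = [1.0, 0.5, -0.25, 0.0, 0.75, -1.0]
full = simulate(x, [0.3, -0.2])             # complete response
zi = simulate([0.0] * len(x), [0.3, -0.2])  # zero-input response
zs = simulate(x, [0.0, 0.0])                # zero-state response
print(max(abs(f - (u + v)) for f, u, v in zip(full, zi, zs)))  # ~0 by linearity
```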
[Figure: FIR implementation with gains h(0), h(1), ..., h(n)]
Begin
  y = h(0)·x + h(1)·m₁ + ··· + h(n)·mₙ
  mₙ = m_{n−1}
  ...
  m₂ = m₁
  m₁ = x
Repeat
where m1 , ..., mn are n storage locations. The fact that the zero-input response
vanishes in n steps means that the effect of any value placed in the registers m1,
..., mn, disappears n time instants after the system is started.
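The register-shifting loop above translates directly into code; a sketch processing one sample at a time, checked against the convolution sum (6.92):

```python
# Streaming FIR filter: y = h(0) x + h(1) m1 + ... + h(n) mn, then shift.
def fir_stream(h, x):
    m = [0.0] * (len(h) - 1)       # storage locations m1, ..., mn
    out = []
    for sample in x:
        y = h[0] * sample + sum(hi * mi for hi, mi in zip(h[1:], m))
        m = [sample] + m[:-1]      # mn = m_{n-1}, ..., m2 = m1, m1 = x
        out.append(y)
    return out

# Compare with the convolution sum (6.92) computed directly.
h, x = [1.0, -0.5, 0.25], [1.0, 2.0, 0.0, -1.0, 3.0]
direct = [sum(h[i] * x[k - i] for i in range(len(h)) if 0 <= k - i < len(x))
          for k in range(len(x))]
print(fir_stream(h, x) == direct)  # True
```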
IIR system
An IIR system has the general transfer function
H(z) = (b_n z^n + ··· + b₀)/(z^n + a_{n−1}z^{n−1} + ··· + a₀), (6.127)
which may be translated into the difference equation
y(k) = −a_{n−1}y(k − 1) − ··· − a₀y(k − n) + b_n x(k) + ··· + b₀x(k − n).
[Figure: direct-form implementation with delays z^{-1}, feedback gains −a_{n−1}, ..., −a₀, and feedforward gains b_n, ..., b₀]
where y ′ (k) is the output of a strictly proper system as shown in Fig. 6.24.
which gives
H(z) is the transfer function of the system and IC’s means initial conditions.
The poles are the eigenvalues of the matrix A, i.e., the roots of det(zI − A).
[Figures: a discrete-time system G(z) with input x(k) and output y(k); pole-location requirements in the z-plane: stability, adequate settling time, adequate damping]
sinusoidal response, as well as the boundary for stability (see Fig. 5.35). In
discrete-time, the computation must be performed for
z = e^{jΩ}, Ω = −π → π, (6.143)
as shown in Fig. 6.27. Evaluating H(z) along the Nyquist curve is equivalent
to plotting the frequency response H(ejΩ ) in the complex plane. Counting the
number of encirclements of the (−1, 0) point gives the number of unstable closed-
loop poles, as in continuous-time.
Figure 6.28: Root-locus for the discrete-time example (the unit circle is shown
as a dashed curve)
is obtained for
g = 2(1 − z_d), z_a = (1 + z_d)/2. (6.148)
Such a design technique is referred to as pole placement. When both poles are
placed at the same location, the choice corresponds to the breakaway point of
the root-locus. The method can work well if reasonable values of zd are chosen,
which are generally values slightly smaller than 1.
6.5 Problems
Problem 6.1: (a) Find x(0) if the z-transform of x(k) is X(z) = az^{-1}/(z − 1).
(b) Find x(0) if the z-transform of x(k) is X(z) = z/(z² − az + a²).
Problem 6.2: (a) Consider the Newton-Raphson method to find the zeros of a
function f(x), which is described by the difference equation
x(k) = x(k − 1) − f(x(k − 1))/f′(x(k − 1)), (6.149)
where f ′ (x(k−1)) is the derivative of f (x) evaluated at x(k −1). Let f (x) = ax2.
Find the z-transform of x(k) as a function of x(−1), and give conditions under
which x(k) converges to zero as k → ∞.
Problem 6.3: (a) Use partial fraction expansions to find the x(k) whose
z-transform is X(z) = 1/[(z − 1)(z − 2)].
(b) Use partial fraction expansions to find the x(k) whose z-transform is
X(z) = z/(z² − 2z + 2).
Problem 6.4: (a) Sketch the time function x(k) that you would associate with
the following poles: p1 = 0.9j, p2 = −0.9j. Only a sketch is required, but be as
precise as possible.
(b) Repeat part (a) for: p1 = 1, p2 = −1.
(c) Repeat part (a) for: p1 = 0.3, p2 = 0.9.
(d) Repeat part (a) for: p1 = ejπ/6 , p2 = e−jπ/6.
The pole locations for the 4 parts are shown in Fig. 6.29.
Problem 6.5: (a) Find Y (z) as a function of X(z) if y(k) = (−1)k x(k). As-
suming that X̄(Ω), the DTFT of x(k), is as shown on Fig. 6.30, sketch Ȳ (Ω),
the DTFT of y(k).
[Fig. 6.30: X̄(Ω), equal to 1 for |Ω| < Ω_B and zero elsewhere in (−π, π), repeated periodically]
Problem 6.6: (a) Using partial fraction expansions, find the signal x(k) whose
z-transform is
X(z) = 4/[(z − 1)(z² + 1)]. (6.151)
Problem 6.7: For the signals whose z-transforms are given below, indicate
whether the time functions x(k) are bounded, converge to some value, or vanish
in finite time.
(a) X(z) = (z + 1)/[(z + 0.5)(z − 0.7 + 0.7j)(z − 0.7 − 0.7j)].
(b) X(z) = (1 − 2z^{-1})(1 + 3z^{-1}).
(c) X(z) = (z − 1)/[(z + 1)(z + 0.5)²].
(d) X(z) = (z + 1)/[(z − 1)(z + 0.5)²].
(e) X(z) = (z + 1)/[z(z − 1)].
(f) X(z) = z/(z + 5)^{10}.
(g) X(z) = (z + 1)²/[(z² + 1)(z − 0.5)].
(h) X(z) = (z − 2)²/[z³(z − 1)].
Problem 6.8: Indicate whether the discrete-time systems with the following
transfer functions are BIBO stable.
(a) H(z) = z/(z − 0.5).
(b) H(z) = z³/(z² + 0.81)².
(c) H(z) = z/[(z + 1)(z + 2)].
(d) H(z) = (z − 10)/z^{10}.
(e) H(z) = z(z + 0.5)/[(z + 1)(z + 0.25)].
(f) H(z) corresponding to the difference equation: y(k + 1) − (1/2)y(k) = x(k + 1) −
2x(k).
[Fig. 6.31: block diagram with input x, output y, two delays z^{-1}, and gains a and −a²]
Problem 6.9: (a) Find the transfer function H(z) = Y (z)/X(z) and a condi-
tion on a such that the system of Fig. 6.31 is BIBO stable.
(b) Find the transfer function H(z) = Y (z)/X(z) and indicate whether the
system of Fig. 6.32 is BIBO stable.
[Fig. 6.32: block diagram with input x, output y, delays D, and constant gains]
Problem 6.10: (a) Find the transfer function H(z) = Y (z)/X(z) correspond-
ing to the difference equation
y(k) = y(k − 1) + y(k − 2) + x(k). (6.152)
(b) Repeat part (a) for x(k) = cos(πk/2). Do not perform a partial fraction
expansion: use the frequency response.
(c) Write a program (in Matlab, for example) to check the results of parts (a)
and (b). Plot the input x(k) and the output y(k) for both cases over 40 time
steps. To implement H(z), find a difference equation that corresponds to the
given transfer function, and let all initial conditions be zero.
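As a starting point for part (c), the difference equation (6.152) can be simulated directly (in Python rather than Matlab); a minimal sketch with zero initial conditions and an impulse input as an illustration — the inputs of the other parts can be substituted:

```python
# Simulate y(k) = y(k-1) + y(k-2) + x(k) with zero initial conditions.
def simulate(x):
    y = [0.0, 0.0]   # y(-2), y(-1)
    for sample in x:
        y.append(y[-1] + y[-2] + sample)
    return y[2:]

impulse = [1.0] + [0.0] * 9
print(simulate(impulse))  # Fibonacci-like growth: 1, 1, 2, 3, 5, 8, ...
```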
Problem 6.12: Consider the system of Fig. 6.33, with C(z) = g/(z − a) and
P (z) = 1/z. Find conditions on g and a such that the steady-state error ess =
limk→∞ e(k) is zero for all constant inputs r(k) = rm. Assume that g > 0.
[Fig. 6.33: unity-feedback loop with reference r, error e, control u, output y, controller C(z), and plant P(z)]
Problem 6.13: (a) Find the initial value and the final value of the signal whose
z-transform is
X(z) = z²/[(z + 0.5 + 0.9j)(z + 0.5 − 0.9j)]. (6.153)
(b) Find the initial value and the final value of the step response of the system
H(z) = (2z² − 0.3z + 0.25)/(z² − 0.6z + 0.25). (6.154)
Problem 6.14: (a) Sketch the time function x(k) that you would associate with
the following poles: p1 = 1, p2 = j, p3 = −j. Only a sketch is required, but be
as precise as possible.
(b) Repeat part (a) for: p1 = ej2π/3 , p2 = e−j2π/3 .
(c) Is a discrete-time system with the following poles BIBO stable?
p1 = 0.9 + 0.1j, p2 = 0.9 − 0.1j, p3 = −0.9 + 0.1j, p4 = −0.9 − 0.1j,
p5 = 0.1 + 0.9j, p6 = 0.1 − 0.9j, p7 = −0.1 + 0.9j, p8 = −0.1 − 0.9j.
(d) Repeat part (c) for: p1 = −1, p2 = −0.5, p3 = −0.5 + 0.5j, p4 = −0.5 − 0.5j.
Pole locations for the 4 parts are shown on Fig. 6.34.
Problem 6.15: (a) Consider the discrete-time system with transfer function
H(z) = z⁴/(z⁴ − 1). (6.155)
What are the poles and zeros of the system? Is the system BIBO stable?
(b) Find the impulse response of the system of part (a) using long division.
Problem 6.16: (a) Using long division, find y(0), y(1), and y(2) for
Y(z) = (2z³ + 13z² + z)/(z³ + 7z² + 2z + 1). (6.156)
(b) Find the transfer function H(z) = Y (z)/X(z) for the system of Fig. 6.35.
Give the values of the closed-loop poles and the range of gain g for which the
system is closed-loop stable. Show the root-locus of the system, with g being
the parameter that varies from 0 to ∞.
[Fig. 6.35: feedback loop with input x, output y, gain g, and plant 1/(z(z − 1))]
Indicate what the transient response and the steady-state response are. Compare
the steady-state value to the value predicted by the DC gain.
(b) Find the steady-state response of the system of part (a) to x(k) = cos(πk/2) as
well as for x(k) = cos(πk).
Problem 6.19: (a) Using partial fraction expansions, find the signal x(k) whose
z-transform is
X(z) = 1/[(z − 1)(z² − 2z + 2)]. (6.159)
(b) Find the transfer function H(z) = Y (z)/X(z) and a condition on a such
that the system of Fig. 6.36 is BIBO stable.
[Fig. 6.36: block diagram with input x, output y, two delays D, and gains −2a and −a²]
Problem 6.20: Find the steady-state response of the system with transfer
function H(z) = (z⁴ − 1)/z⁸ and an input x(k) = cos(πk/4). Do not perform a
partial fraction expansion: use the frequency response. Plot the response y_ss(k)
(label the axes precisely).
Chapter 7
Sampled-data systems
A discrete-time signal x_d(k) = x(kT) may be obtained by sampling a continuous-time
signal x(t), where T is called the sampling period (in seconds); f_s = 1/T and
ω_s = 2π/T are the sampling frequency (in Hz and in rad/s, respectively). The
continuous/discrete conversion is referred to as sampling, or discretization, and is shown
in Fig. 7.1 for an arbitrary signal x(t). The operation is performed in analog-to-
digital (A/D) converters. However, such converters also perform a quantization
operation, which approximates real numbers by a finite set of numbers, coded
by bits. The effects of quantization are ignored in the following discussion.
Let X(s) denote the Laplace transform of x(t) and Xd (z) denote the z-
transform of xd (k). A natural question to ask is: how is Xd (z) related to X(s)?
The answer to the question turns out to be relatively simple when X(s) is a
rational function of s, but quite complicated in general.
where pi are the poles of X(s) and ci are the coefficients of the partial fraction
expansion. The corresponding signal is given by
n
x(t) = ci epi t . (7.3)
i=1
One finds that every pole pᵢ of the continuous-time signal is associated with a
pole p_{d,i} of the discrete-time signal equal to p_{d,i} = e^{pᵢT}.
Note that, because zeros are not transformed in the same manner, there can be
pole/zero cancellations in the z-transform even if there are none in the Laplace
transform. As a result, there may be fewer discrete-time poles than continuous-
time poles.
Using the general formula of partial fraction expansion, the result can be
extended to the general case with repeated poles. Therefore, a signal with
rational transform in the s-domain always becomes a signal with a rational
transform in the z-domain. The z-transform can be obtained relatively easily,
and the poles of Xd (z) are those of X(s) transformed through z = esT .
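The pole mapping z = e^{sT} is a one-line computation; a sketch checking the magnitude and angle relations of the mapping for an arbitrary pole and sampling period:

```python
import cmath, math

# A continuous-time pole p maps to the discrete-time pole e^{pT}:
# the magnitude is e^{Re(p)T} and the angle is Im(p)T.
p, T = complex(-1.0, 2.0), 0.1
pd = cmath.exp(p * T)
print(abs(pd), cmath.phase(pd))
```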
x_d(k) = cos(ω₀Tk) ⇔ X_d(z) = [z² − cos(ω₀T)z]/[z² − 2cos(ω₀T)z + 1]. (7.11)
The transform has poles at ejω0 T and e−jω0 T . It also has zeros at z = 0 and
z = cos(ω0 T ).
If ω0T = 2π, Xd (z) reduces to Xd (z) = z/(z − 1) and one pole is cancelled
by a zero. Note that the transform is the same as the transform of a step signal.
The two signals have the same transforms because, as shown in Fig. 7.2, the
samples are identical. In general, the number of poles of Xd (z) may be smaller
than the number of poles of X(s) (it cannot be larger, however), and it is not
always possible to invert the sampling operation.
|z| = e^{Re(s)T}, ∡z = Im(s)T. (7.12)
Figure 7.2: A cosine function and a step function sampled at a period equal to
the period of the cosine function are identical signals
The jω-axis of the s-plane maps to the unit circle of the z-plane, the open left
half-plane maps to the inside of the unit circle, and the open right half-plane
maps to the outside of the unit circle. This is shown in Fig. 7.3.
The equivalences of the following table are also worth noting.
s-plane → z-plane:
s = 0 → z = 1
s = ±jπ/T → z = −1
s = +jπ/(2T) → z = +j
s = −jπ/(2T) → z = −j
Re(s) = −∞ → z = 0
s₂ = s₁* → z₂ = z₁*
Note that a single z location is associated with a given s location, but the reverse
is not true if all values of s are considered. Values of s outside the range
−π/T < Im(s) ≤ π/T map to the same values of z as some values of s within
the range,
with
s₂ = s₁ + j(2π/T) ⇒ z₂ = z₁. (7.15)
In other words, all complex numbers separated by a multiple of j2π/T map to
the same value of z. Note that 2π/T is equal to ωs , the sampling frequency.
the corresponding Laplace transform X(s) and z-transform X_d(z) are related by
X_d(z) = [(1/T) Σ_{n=−∞}^{∞} X(s − nj(2π/T))]_{s=(1/T)ln(z)}, (7.17)
or
[X_d(z)]_{z=e^{sT}} = (1/T) Σ_{n=−∞}^{∞} X(s − nj(2π/T)). (7.18)
The proof of this fact is somewhat complicated and is left to the appendix at
the end of the chapter.
The result involves an infinite sum of Laplace transforms, each shifted from
the original one by a multiple of j2π/T , and the change of variable z = esT .
An important observation is that the shift occurs along the direction of the
imaginary axis by an amount of 2π/T , which is exactly the shift that produces
the same value of z in the transformation z = esT . This property implies that
the transformation (7.18) gives the same result for Xd (z) if values of s separated
by j2π/T are chosen. Hence, the non-invertibility of the transformation z = esT
is not an issue.
We found earlier that, for the continuous-time signal
x(t) = e^{−at}u(t) ⇔ X(s) = 1/(s + a), (7.19)
the z-transform of the sampled signal was
X_d(z) = z/(z − e^{−aT}). (7.20)
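This correspondence is easy to confirm numerically: the samples x_d(k) = e^{−aTk} form a geometric progression, and their truncated z-transform series should match z/(z − e^{−aT}) at any point with |z| > e^{−aT}; a sketch:

```python
import math

# z-transform of the sampled exponential x(t) = e^{-at}: samples are (e^{-aT})^k.
a, T, z = 2.0, 0.1, 1.5
samples = [math.exp(-a * T * k) for k in range(300)]
series = sum(xk * z**(-k) for k, xk in enumerate(samples))
closed_form = z / (z - math.exp(-a * T))
print(abs(series - closed_form))  # essentially zero
```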
In contrast, the general result gives
X_d(z) = [(1/T) Σ_{n=−∞}^{∞} 1/(s + a − nj(2π/T))]_{s=(1/T)ln(z)}. (7.21)
Therefore, the series (7.21) must be equal to the analytic form (7.20). However,
an analytic form of the infinite series cannot be found, in general.
X̄(ω) = [X(s)]s=jω ,
X̄d (Ω) = [Xd (z)]z=ejΩ . (7.23)
The relationships between the s-plane and the ω variable, and between the z-
plane and the Ω variable, are shown in Fig. 7.4.
Figure 7.4: From the Laplace and z-transforms to the Fourier and DT Fourier
transforms
or simply
Ω = ωT. (7.25)
220 Chapter 7. Sampled-data systems
This result allows one to calculate the DTFT of the discrete-time signal xd (k),
knowing the FT of the continuous-time signal x(t). The transformation in the
frequency domain is composed of two steps:
Note that, while the result was derived assuming that the signals were zero for
negative time, (7.26) is valid for arbitrary signals, provided that their Fourier
transforms exist.
7.1.6 Aliasing
Fig. 7.5 shows the transform of a continuous-time signal x(t), with X̄(ω) = 0
for |ω| > ωB . For simplicity of presentation, X̄(ω) is taken to be a real function
of ω, but it is a complex function, in general. Assume that the signal is sampled
with a sampling period T, corresponding to a sampling frequency ωs . The top
figures show the result that is obtained when ωB < ωs /2 = π/T , so that the
discrete-time frequency ΩB = ωB T is less than π. The figures on the bottom
show the result when this condition is not satisfied.
In the first case, only the original copy of the transform contributes to the
discrete-time transform in the −π to π range. In the second case, there is
interference between the original transform and its copies. This situation is
called aliasing. When there is aliasing, a frequency component may be observed
in the discrete-time transform which is the image of a higher frequency in the
original signal.
Fig. 7.6 demonstrates this phenomenon in the time domain. A signal with
frequency ω0 = π/4 (period of 8 seconds) is sampled at ωs = π/3 (period of
6 seconds). In the discrete-time domain, a frequency of Ω = 2π − ω0 T = 2π −
3π/2 = π/2 is obtained, which is the same as would have been obtained with a
continuous-time signal of frequency ω = Ω/T = π/12 (period of 24 seconds).
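This aliasing arithmetic can be checked numerically (an illustrative sketch, not part of the book): the samples of the original sinusoid coincide exactly with those of the lower-frequency alias.

```python
import numpy as np

w0 = np.pi / 4          # original frequency (period 8 s)
T = 6.0                 # sampling period (sampling frequency ws = pi/3)
k = np.arange(20)

samples = np.cos(w0 * k * T)       # samples of cos(w0*t), i.e. cos(3*pi*k/2)
alias = np.cos((np.pi / 2) * k)    # DT sinusoid at Omega = 2*pi - w0*T = pi/2

# The two sequences are indistinguishable: this is aliasing
assert np.allclose(samples, alias)
```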
[Fig. 7.5: the transforms X̄(ω) and X̄d(Ω), without aliasing (ωB < ωs /2, top)
and with aliasing (ωB > ωs /2, bottom).]
[Fig. 7.6: samples of the original signal (period of 8 seconds), which coincide
with a sinusoid of period 24 seconds.]
[Figure: frequency response F̄(ω) equal to 1 for |ω| < π/T .]
The transformation is shown in Fig. 7.8. The operation is the one commonly
performed in digital-to-analog (D/A) converters, and is usually referred to as a
zero-order hold (ZOH). More sophisticated converters exist that interpolate the
values of x(t) between the time instants. A linear interpolator would be called
a first-order hold. However, the zero-order hold is by far the most commonly
used.
[Fig. 7.8: the discrete-time signal xd (k) and the continuous-time signal x(t)
produced by the zero-order hold.]
X̄(ω) = T · [(1 − e^{−jωT})/(jωT)] · X̄d(Ω)|_{Ω=ωT} .   (7.31)
where sinc(x) = sin(x)/x. The magnitude of the frequency response is |sinc(ωT /2)|.
The phase is −ωT /2, plus π when sinc(ωT /2) is negative. The magnitude and
phase are shown in Fig. 7.10. Note that, for frequencies below 2π/T , the phase
lag is equal to the phase lag produced by a time delay of T /2.
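As a numerical sanity check (an illustrative sketch, not from the book), one can verify the identity (1 − e^{−jωT})/(jωT) = e^{−jωT/2} sinc(ωT/2) that underlies the magnitude and phase expressions above:

```python
import numpy as np

T = 0.1                                # sampling period (arbitrary choice)
w = np.linspace(0.1, 100.0, 500)       # frequencies, avoiding w = 0

H = (1 - np.exp(-1j * w * T)) / (1j * w * T)   # zero-order-hold factor in (7.31)
sinc = np.sin(w * T / 2) / (w * T / 2)         # sinc(wT/2), with sinc(x) = sin(x)/x

# Magnitude is |sinc(wT/2)|, and the phase is -wT/2 (plus pi where sinc < 0)
assert np.allclose(np.abs(H), np.abs(sinc))
assert np.allclose(H, np.exp(-1j * w * T / 2) * sinc)
```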
Example: the effects of the discrete to continuous conversion may be studied
further by considering a discrete-time sinusoid xd (k) = cos(Ω0 k). The transform
is a pair of impulses at ±Ω0 , with magnitude 1/2, plus the copies shifted by all
the multiples of 2π. The resulting transform of the continuous-time signal is
shown on Fig. 7.11. Interestingly, one has that
∫_{−∞}^{∞} T · (1/2) δ(Ω − Ω0)|_{Ω=ωT} dω = ∫_{−∞}^{∞} (1/2) δ(Ω − Ω0) dΩ,   (7.34)
so that the scaling of the axes cancels the factor T in the formula and the
magnitude of the impulses remains 1/2 in continuous-time (except for the slight
reduction in magnitude due to the zero-order hold).
[Fig. 7.10: magnitude and phase (between −180° and 180°) of the zero-order hold
frequency response H̄(ω).]
Numerical example: let Ω0 = π/4, which yields the signal shown in Fig. 7.12.
Assume that the sampling frequency is fs = 1000 Hz (or T = 1 ms). The
continuous-time signal is of the form
Similarly, the third component has frequency 1125 Hz, magnitude 0.108, and
phase −22.5◦ . Overall, one finds that the reconstructed signal has a fundamen-
tal component at the desired frequency, but with a slightly lower magnitude
(2.5% smaller) and a significant phase delay (22.5◦ ). Additional components are
present at higher frequencies with magnitudes of about 10% of the fundamen-
tal. Generally, these are not harmonic frequencies, but rather multiples of the
sampling frequency plus or minus the fundamental signal frequency. Parasitic
effects are reduced if the ratio of the fundamental frequency to the sampling
frequency decreases.
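These numbers follow directly from the |sinc(ωT /2)| magnitude of the hold; a small numerical check (illustrative, not part of the book):

```python
import math

fs = 1000.0                               # sampling frequency in Hz (T = 1 ms)
f0 = (math.pi / 4) * fs / (2 * math.pi)   # fundamental: Omega0*fs/(2*pi) = 125 Hz

def hold_gain(f):
    """Magnitude |sinc(wT/2)| of the zero-order hold at frequency f (Hz)."""
    x = math.pi * f / fs                  # wT/2 with w = 2*pi*f and T = 1/fs
    return abs(math.sin(x) / x)

assert abs(f0 - 125.0) < 1e-9
assert abs(hold_gain(125.0) - 0.975) < 1e-3    # fundamental, 2.5% smaller
assert abs(hold_gain(1125.0) - 0.108) < 1e-3   # third component, at fs + f0
assert abs(math.degrees(math.pi * 125 / fs) - 22.5) < 1e-9   # phase lag wT/2
```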
[Figure: the discrete-time input xd (k) is converted to x(t) by the D/A, and the
plant output y(t) is sampled to give yd (k).]
If a discrete-time step input xd (k) is applied to the D/A, the result is a continuous-
time step input x(t) applied to the plant. The step response is
Y(s) = 1/((s + 1)s) = 1/s − 1/(s + 1) ⇔ y(t) = 1 − e^{−t} .   (7.39)
The sampled output of the step response is
yd(k) = 1 − e^{−kT} ⇔ Yd(z) = z/(z − 1) − z/(z − e^{−T}) = z(1 − e^{−T})/((z − 1)(z − e^{−T})).   (7.40)
On the other hand, the step response of the equivalent discrete-time system is
Yd(z) = Pd(z) · z/(z − 1).   (7.41)
We conclude that
Pd(z) = (1 − e^{−T})/(z − e^{−T}).   (7.42)
Pd (z) is usually called the step response equivalent or zero-order hold equivalent
of P (s).
Although only the step response of the discrete-time system was shown to
match the step response of the sampled-data system, it is not hard to show that
the responses of both systems are identical for any input signal xd (k). Indeed,
any xd (k) may be viewed as the superposition of shifted step signals, and linear
time-invariance implies the result.
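As an illustrative check (a sketch, not part of the book), the difference equation corresponding to Pd (z) in (7.42) can be simulated with a step input and compared to the sampled continuous-time step response 1 − e^{−kT}:

```python
import math

T = 0.5              # sampling period (arbitrary choice)
a = math.exp(-T)     # pole of Pd(z), the image of s = -1 under z = e^{sT}

# Pd(z) = (1 - e^-T)/(z - e^-T)  <=>  y[k+1] = a*y[k] + (1 - a)*x[k]
N = 20
y = [0.0] * (N + 1)
for k in range(N):
    y[k + 1] = a * y[k] + (1 - a) * 1.0   # step input x[k] = 1

# The output matches the sampled step response y(kT) = 1 - e^{-kT} exactly
for k in range(N + 1):
    assert abs(y[k] - (1 - math.exp(-k * T))) < 1e-12
```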
The procedure to obtain Pd (z) can also be extended to arbitrary linear sys-
tems with rational transforms, and is similar to the procedure associated with
the discretization of a signal with rational transform. The notation becomes
complicated for repeated poles, so we assume that P (s) = N (s)/D(s) is rational
and strictly proper, has non-repeated poles, and has no pole at s = 0. The step
response of the continuous-time system is given by
Y(s) = P(s) · (1/s) = P(0)/s + Σ_{i=1}^{n} ci /(s − pi) ⇔ y(t) = P(0) + Σ_{i=1}^{n} ci e^{pi t} ,   (7.43)
where pi are the poles of the transfer function P (s) and ci are the coefficients of
the partial fraction expansion, with
ci = [P(s)(s − pi)/s]_{s=pi} .   (7.44)
The poles pi of the transfer function P (s) are mapped to poles e^{pi T} of Pd (z).
Therefore,

Pd(z) = (z − 1)/z · Yd(z) = P(0) + Σ_{i=1}^{n} ci (z − 1)/(z − e^{pi T}).
While the poles are mapped through z = esT , the zeros are not necessarily
mapped in the same manner. Therefore, pole/zero cancellations may cause
Pd (z) to have fewer poles than P (s), and the order of the transfer function may
be reduced. The frequency responses of the continuous-time and discrete-time
systems are also not obviously related. However, Pd (1) = P (0), so that the DC
gains of the two systems are identical. For T sufficiently small, one can also
show that Pd(e^{jωT}) ≃ P(jω) for |ω| ≪ π/T , so that the step response
equivalent and the transformation z = e^{sT} give the same result for low
frequencies.
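A small numerical illustration (a sketch under the assumption P (s) = 1/(s + 1), as in the earlier example): for small T , Pd evaluated at z = e^{jωT} using (7.42) is close to P (jω) at low frequencies.

```python
import cmath
import math

T = 0.01    # a small sampling period
w = 1.0     # a frequency well below pi/T

P = 1 / (1 + 1j * w)                     # P(jw) for P(s) = 1/(s+1)
Pd = (1 - math.exp(-T)) / (cmath.exp(1j * w * T) - math.exp(-T))   # Pd(e^{jwT})

assert abs(Pd - P) < 0.01                # close, since wT << pi
```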
Pd(z) = (1 − e^{−T})/(z − e^{−T}).   (7.50)
As in continuous-time, the pole moves along the real axis in the negative direc-
tion. The pole becomes unstable when it reaches z = −1 for
gmax = (1 + e^{−T})/(1 − e^{−T}).   (7.52)
In fact, there is no benefit in pushing the pole further than z = 0, so that a
practical limit for the gain is
g0 = e^{−T}/(1 − e^{−T}).   (7.53)
The gain g0 results in the transfer function

PCL(z) = e^{−T}/z,   (7.54)

which is a one-step delay with a gain e^{−T}. Typically, such a response (called a
deadbeat response) requires large input signals and is sensitive to noise and un-
modelled dynamics, so that the feedback gain will be set much below gmax .
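An illustrative simulation (a sketch, not from the book) of the feedback loop u(k) = g0 (r(k) − y(k)) around Pd (z) from (7.50) confirms the one-step-delay behavior:

```python
import math

T = 0.5
a = math.exp(-T)
g0 = a / (1 - a)       # gain (7.53), placing the closed-loop pole at z = 0

# Plant Pd(z) = (1 - e^-T)/(z - e^-T):  y[k+1] = a*y[k] + (1 - a)*u[k]
N = 10
y = [0.0] * (N + 1)
for k in range(N):
    u = g0 * (1.0 - y[k])               # unit step reference r[k] = 1
    y[k + 1] = a * y[k] + (1 - a) * u

# Closed loop is PCL(z) = e^-T/z: the step response settles in one step at e^-T
for k in range(1, N + 1):
    assert abs(y[k] - a) < 1e-12
```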
so that

Yd(z) = Fd(z) · (1/T) Σ_{k=−∞}^{∞} X(s − jk·2π/T)|_{s=(1/T) ln(z)} ,   (7.56)
and

Y(s) = T · (1 − e^{−sT})/(sT) · [Yd(z)]_{z=e^{sT}}
     = (1 − e^{−sT})/(sT) · [Fd(z)]_{z=e^{sT}} · Σ_{k=−∞}^{∞} X(s − jk·2π/T).   (7.57)
This transformation relates the Laplace transforms of the input and output of
the system. Unfortunately, the transformation cannot be put into the form
Y (s) = F (s)X(s) due to the change of variables z = esT and due to the infinite
sum.
If the aliasing effects can be neglected, only the term k = 0 is retained, and
we have the approximation
Y(s) = (1 − e^{−sT})/(sT) · [Fd(z)]_{z=e^{sT}} · X(s).   (7.58)
Thus, the transformation can be approximately represented by a transfer func-
tion F (s), with
F(s) = (1 − e^{−sT})/(sT) · [Fd(z)]_{z=e^{sT}}   (aliasing effects neglected).   (7.59)
Even with these assumptions, a rational transfer function Fd (z) does not yield a
rational transfer function F (s).
In other words, the overall system with ideal anti-aliasing and post-sampling
filters is equivalent to a continuous-time filter
F̄(ω) = F̄d(Ω)|_{Ω=ωT}   for |ω| < π/T.   (7.64)
(7.64) is the equivalent, for Fourier transforms, of (7.60). Under these assump-
tions, the equivalence between the discrete-time filter and the continuous-time
filter is represented on Fig. 7.17 for a discrete-time low-pass filter of bandwidth
ΩB .
[Fig. 7.17: the DT filter F̄d(Ω), with bandwidth ΩB , and the equivalent CT
filter F̄(ω).]
hold and another T due to the fact that the output computed by the discrete-time
filter is normally applied only at the next time instant. If the filter is inserted
in a feedback system, the delay margin must be sufficient to accommodate the
filter’s delay.
dx/dt ≃ (x(t + T) − x(t))/T   for T small,   (7.71)
which is equivalent to the following transformation
s = (z − 1)/T   or   z = 1 + sT.   (7.72)
For example, the PID controller (4.35) becomes
Cd(z) = kP + kI · T/(z − 1) + kD · a(z − 1)/(z − 1 + aT).   (7.73)
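A minimal sketch (illustrative; the gains and the filter parameter a are arbitrary choices, not from the book) of (7.73) implemented as difference equations:

```python
# Discrete PID from (7.73): u = kP*e + i + d, where
#   i[k+1] = i[k] + kI*T*e[k]                       (integral term kI*T/(z-1))
#   d[k+1] = (1 - a*T)*d[k] + kD*a*(e[k+1] - e[k])  (filtered derivative term)
kP, kI, kD, a, T = 2.0, 1.0, 0.5, 10.0, 0.01

def pid_step(i, d, e_prev, e):
    """One update of the Euler-discretized PID; returns (u, new_i, new_d)."""
    d = (1 - a * T) * d + kD * a * (e - e_prev)
    u = kP * e + i + d
    i = i + kI * T * e
    return u, i, d

# With a constant error e = 1, the integral grows by kI*T per step and the
# derivative term decays geometrically with ratio (1 - a*T).
i, d, e_prev = 0.0, 0.0, 0.0
for k in range(5):
    u, i, d = pid_step(i, d, e_prev, 1.0)
    e_prev = 1.0

assert abs(i - kI * T * 5) < 1e-12
assert abs(d - kD * a * (1 - a * T) ** 4) < 1e-12
```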
z = e^{sT} = e^{sT/2}/e^{−sT/2} ≃ (1 + sT/2)/(1 − sT/2).   (7.75)
The transformation is invertible, since
z(1 − sT/2) = 1 + sT/2 ⇒ s = (2/T) · (z − 1)/(z + 1).   (7.76)
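A quick numerical check (illustrative) that the map (7.75) and its inverse (7.76) are consistent, and that the left half-plane maps inside the unit circle:

```python
T = 0.1
for s in (-1 + 2j, -3.0, 0.2 + 1j, 1 - 0.5j):
    z = (1 + s * T / 2) / (1 - s * T / 2)     # bilinear map (7.75)
    s_back = (2 / T) * (z - 1) / (z + 1)      # inverse map (7.76)
    assert abs(s_back - s) < 1e-12
    # Re(s) < 0  <=>  |z| < 1: stability is preserved by the transformation
    assert (s.real < 0) == (abs(z) < 1)
```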
Given Fc (s) with desirable frequency-domain properties, one lets Fd(z) = [Fc(s)]_{s=(2/T)(z−1)/(z+1)} .
Therefore,
where pi are the poles of Fc (s) and ci are the coefficients of the partial fraction
expansion (for simplicity, it is assumed that Fc (s) = N (s)/D(s) is rational with
non-repeated poles).
Using the impulse response matching method, the poles pi of the transfer
function Fc (s) are mapped to poles e^{pi T} of Fd (z), and

Fd(z) = Σ_{i=1}^{n} ci z/(z − e^{pi T}).   (7.82)
The zeros are mapped in a complicated way. In this case, however, the frequency
responses of Fd (z) and Fc (s) can be related, provided that the frequency response
F̄c (ω) = 0 for |ω| > π/T, i.e., that fc (t) is a bandlimited signal sampled at
at least twice its highest frequency. In that case, the results regarding the sampling of
a continuous-time signal (7.27) indicate that
F̄d(Ω) = (1/T) · F̄c(ω)|_{ω=Ω/T}   (7.84)
for |ω| < π/T, which is the desired result, except for the factor of 1/T . For
this reason, the impulse response procedure requires a slight modification of the
formula (7.82), so that
Fc(s) = N(s)/D(s) = Σ_{i=1}^{n} ci /(s − pi)
⇔ Fd(z) = T Σ_{i=1}^{n} ci z/(z − e^{pi T})   (impulse response equivalent).   (7.85)
This approach is such that

fd(k) = T fc(kT),   (7.86)
where fd (k) is the impulse response of the discrete-time system, and fc (t) is the
impulse response of the desired continuous-time system. One way to justify the
factor T is to remark that a discrete-time impulse has an equivalent area T ,
while a continuous-time impulse has an area equal to 1.
Step response matching
The step response matching method is the same as the step response equivalent
method that was used for the conversion from continuous-time to discrete-time
system. The properties of the resulting transfer function are similar to those
of the impulse response matching method, but the transfer functions are not
exactly the same. For example, the step response method was shown to yield
Fc(s) = 1/(s + 1) ⇒ Fd(z) = (1 − e^{−T})/(z − e^{−T}),   (7.87)
while the impulse response method yields
Fc(s) = 1/(s + 1) ⇒ Fd(z) = Tz/(z − e^{−T}).   (7.88)
The poles of the transfer functions are identical, and the low frequency behavior
(z close to 1) of both transfer functions is similar, but the transfer functions
are different. Worth noting is the fact that the impulse response resulting from
the step response matching method is delayed by one sample compared to the
impulse response obtained with the impulse response method.
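An illustrative simulation (a sketch, not from the book) of the two difference equations with a discrete-time impulse input shows the one-sample delay:

```python
import math

T = 0.2
a = math.exp(-T)
N = 10
x = [1.0] + [0.0] * N          # discrete-time impulse input

# Step-response equivalent (7.87): Fd(z) = (1 - e^-T)/(z - e^-T)
#   y1[k] = a*y1[k-1] + (1 - a)*x[k-1]
y1 = [0.0] * (N + 1)
for k in range(1, N + 1):
    y1[k] = a * y1[k - 1] + (1 - a) * x[k - 1]

# Impulse-response equivalent (7.88): Fd(z) = T*z/(z - e^-T)
#   y2[k] = a*y2[k-1] + T*x[k]
y2 = [0.0] * (N + 1)
y2[0] = T * x[0]
for k in range(1, N + 1):
    y2[k] = a * y2[k - 1] + T * x[k]

# y1 is y2 delayed by one sample, up to the scale factor (1 - e^-T)/T ~ 1
for k in range(1, N + 1):
    assert abs(y1[k] - ((1 - a) / T) * y2[k - 1]) < 1e-12
```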
7.6 Appendix
7.6.1 Proof for the conversion from continuous-time to
discrete-time signal
We first establish two facts. For an arbitrary function f (t)

∫_{−∞}^{∞} f(t) δ(t − t0) dt = f(t0),   (7.89)

where δ(t) is the delta function. Next, we note that the function
p(t) = Σ_{k=−∞}^{∞} δ(t − kT)   (7.90)
is periodic with period T and may be expanded in a Fourier series p(t) = Σ_{k=−∞}^{∞} ck e^{jk(2π/T)t} , with
ck = (1/T) ∫_{−T/2}^{T/2} p(t) e^{−jk(2π/T)t} dt = (1/T) ∫_{−T/2}^{T/2} δ(t) e^{−jk(2π/T)t} dt = 1/T.   (7.93)
Permuting the order of integration and summation, and removing the step func-
tion in the expression by an adjustment of the integration range
Xd(z) = ∫_{−∞}^{∞} x(t) u(t) z^{−t/T} Σ_{k=−∞}^{∞} δ(t − kT) dt
      = ∫_{0}^{∞} x(t) z^{−t/T} Σ_{k=−∞}^{∞} δ(t − kT) dt.   (7.98)
where u(t) is the continuous-time step function. Applying the Laplace transform
to both sides
X(s) = ∫_{0}^{∞} Σ_{k=0}^{∞} xd(k) [u(t − kT) − u(t − (k + 1)T)] e^{−st} dt.   (7.103)
Permuting integration and summation,

X(s) = Σ_{k=0}^{∞} xd(k) (e^{−skT} − e^{−s(k+1)T})/s = (1 − e^{−sT})/s · [Xd(z)]_{z=e^{sT}} .
7.7 Problems
Problem 7.1: (a) Consider the continuous-time system
H(s) = 1/(s(s + 1)).   (7.107)
Find the discrete-time system Hd (z) whose step response yd (k) is such that
yd (k) = y(kT ) , where y(t) is the step response of H(s) and T is some arbitrary
sampling time.
(b) Repeat part (a) for
H(s) = 1/(s² + 1).   (7.108)
Explain what happens when T = 2π.
Give the lowest sampling frequency fs (in Hz) such that no aliasing occurs when
the signal is discretized.
Problem 7.4: Let x(t) be obtained from xd (k) = k through a zero-order hold.
Find X(s) from Xd (z). Compare the result to the one obtained by computing the
Laplace transform directly from x(t) (note that x(t) is the sum of step functions
delayed by multiples of T ).
Find the discrete-time system Hd (z) whose step response yd (k) is such that
yd (k) = y(kT ), where y(t) is the step response of H(s) and T is some arbitrary
sampling time. Compare the DC gains of H(s) and Hd (z).
Index
Watt’s governor, 2
Windup, 73
X-29, 3
z-transform
Definition, 170
Examples, 170
Inverse using long division, 187
Inverse using partial fraction expansions, 186
Properties, 181
Zero-input response, 45, 50, 197, 203
Zero-order hold, 223, 225
Zero-order hold equivalent, 230
Zero-state response, 45, 50, 197, 203
Zeros, 6, 24