TU Braunschweig Automatic Control Notes
(Regelungstechnik 2)
Lecture Notes
Jürgen Pannek
During the summer term 2022 I am giving the lecture for the module Control Engineering 2 (Regelungstechnik 2) at the Technical University of Braunschweig. To structure the lecture and support my students in their learning process, I prepared these lecture notes. As this is the first edition, the notes are still incomplete and will be updated in due course of the lecture itself. Moreover, I will integrate remarks and corrections throughout the summer term.
The aim of the module is to provide participating students with knowledge of the terminology of system theory and control engineering. Moreover, students shall be enabled to understand complex control structures, apply control schemes and analyze control systems. After successfully completing the module, students shall additionally be able to apply the discussed methods in real-life applications and to assess the results.
To this end, the module tackles the relevant subject areas for complex and networked linear as well as nonlinear systems. In particular, we discuss the topics

- Frequency domain
- Modeling of complex control loops
- Bang-bang and double-setpoint control
- Multi-input multi-output systems
- Time domain
within the lecture and support understanding and application within the tutorial and laboratory
classes. The module itself is accredited with 5 credits with an add-on of 2 credits if the require-
ments of the laboratory classes are met.
During the preparation of the lecture, I utilized the books of Jan Lunze [12–14]. For further
reading the books of Heinz Unbehauen [19–21] and Otto Föllinger [7] provide deep insights. For
the nonlinear part, I particularly recommend the works from Khalil [9] and Isidori [8], which also
formed the basis of this lecture.
Contents

Contents
List of Tables
I Frequency Domain
II Time Domain
Appendices
Bibliography
List of Tables

3.6 Closed loop with bang-bang control with low pass and amplifier
3.7 Closed loop with double-setpoint control with low pass, amplifier and integrator
3.8 Closed loop with double-setpoint control with low pass, amplifier, latency and integrator
3.9 Mimic PD control
3.10 Mimic PID control
3.11 Mimic PI control
3.12 Nonlinear static system
3.13 Separation of maps
3.14 MIMO system with two inputs and two outputs
3.15 Canonical structures of MIMO systems with two inputs and two outputs
3.16 Decoupling structure of MIMO system with P canonical structure
3.17 Elimination of coupling
3.18 Adaptable decoupling structure of MIMO system with P canonical structure
While on the one hand we want to understand the fundamental limitations that math-
ematics imposes on what is achievable, irrespective of the precise technology being
used, it is also true that technology may well influence the type of question to be asked
and the choice of mathematical model.
In Control Engineering 1, basic control structures formed the heart of the lecture. Common
ways of describing systems in both frequency and time domain were defined to understand the
foundation of control. Moreover, control methods were applied and analyzed.
Within the lecture series Control Engineering 2, we study control structures that are more complex in their description as well as in their application and analysis. To establish a common basis, this chapter provides terminology and properties of control systems for both the frequency and the time domain.
For more historic insights, we additionally refer to the books of Cellier [5], Director and Rohrer [6],
Ludyk [10], Luenberger [11], Padulo and Arbib [16] and Shearer and Kulakowski [17].
1.1. System
The term system is used in various scientific and non-scientific areas. Its meaning, however, is often not defined clearly. Simply put, a system is the connection of different interacting components to realize given tasks. The interaction of a system with its environment is given by so called input and output variables, cf. Figure 1.1.
More formally, we define the following:
[Figure 1.1: System block with inputs u_1, …, u_{n_u} and outputs y_1, …, y_{n_y}]
The inputs u_1, …, u_{n_u} ∈ U are variables which act on the system from the environment and do not depend on the system itself or its properties. We distinguish between inputs which are used to specifically manipulate (or control) the system, and inputs which are not manipulated on purpose. We call the first ones control or manipulated inputs, and we refer to the second ones as disturbance inputs. The outputs y_1, …, y_{n_y} ∈ Y are variables which are generated by the system and influence the environment. Here, we distinguish output variables depending on whether we measure them or not. We call the measured ones measurement outputs.
Remark 1.2
Note that in most cases not all measurable outputs are actually measured. Similarly, in many
cases not all manipulable inputs are controlled.
In the following, we consider the two electrical systems illustrated in Figure 1.2, which represent an ideal resistor and an ideal capacitor.
[Figure 1.2: Ideal resistor R and ideal capacitor C, each driven by a current I(t) with voltage U(t)]
The systems in Figure 1.2 possess the input variable I(t), the output variable U(t) and time t. For the resistor R, the output is uniquely defined by the input at every time instant t, i.e. we
have
y(t) = U(t) = R · I(t) = R · u(t). (1.1)
If the outputs depend on the input at the same time instant, we call systems such as this one static.
In contrast to this, the computation of the voltage U (t) at the capacitor C at time instant t depends
on the entire history I (τ ) for τ ≤ t, i.e. we have
y(t) = U(t) = (1/C) ∫_{−∞}^{t} I(τ) dτ = (1/C) ∫_{−∞}^{t} u(τ) dτ.
If we additionally know the voltage U (t0 ) at a time instant t0 ≤ t, then only the history t0 ≤
τ ≤ t of the current is required, i.e.
y(t) = U(t) = (1/C) ∫_{−∞}^{t} I(τ) dτ = (1/C) ∫_{−∞}^{t_0} I(τ) dτ + (1/C) ∫_{t_0}^{t} I(τ) dτ = U(t_0) + (1/C) ∫_{t_0}^{t} u(τ) dτ. (1.2)
As we can see from (1.2), the initial value U(t_0) contains all the information on the history τ ≤ t_0. For this reason, one typically refers to U(t_0) as the internal state of the system capacitor at time instant t_0. If the output of the system depends not only on the input at the same time instant but also on the history of the latter, we call such systems dynamic.
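The distinction can be made concrete by discretizing (1.2) numerically. The following sketch approximates the capacitor output from the initial state and the input history; the component value and the input signal are made up for illustration only.

```python
# Numerical sketch of the dynamic system (1.2): the present output
# depends on the initial state U(t0) and the input history on [t0, t].

def capacitor_voltage(u, t0, t, U_t0, C, n=10_000):
    """Approximate U(t) = U(t0) + (1/C) * integral_{t0}^{t} u(tau) dtau
    with the midpoint rule on n subintervals."""
    h = (t - t0) / n
    integral = sum(u(t0 + (i + 0.5) * h) for i in range(n)) * h
    return U_t0 + integral / C

# A constant current of 1 A into C = 1 F from t0 = 0 with U(0) = 0
# charges the capacitor linearly: U(2) = 2 V.
U2 = capacitor_voltage(lambda tau: 1.0, 0.0, 2.0, 0.0, 1.0)
```

For a static system such as (1.1), no such integration over the past is needed; the output at time t follows from the input at time t alone.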
Remark 1.3
Note that by this definition the set of dynamic systems contains the set of static systems.
If for a system according to Figure 1.1 the outputs y1 (t), . . . , yny (t) depend on the history of the
inputs u1 (τ ), . . . , unu (τ ) for τ ≤ t only, then the system is called causal. As all technically
feasible systems are causal, we will restrict ourselves to this case.
Now, our discussion so far allows us to give the general definition of the state of a dynamical system:
Task 1.5
Which variable represents a state in case of induction?
u = [u_1 u_2 … u_{n_u}]⊤ (1.4a)
y = [y_1 y_2 … y_{n_y}]⊤ (1.4b)
x = [x_1 x_2 … x_{n_x}]⊤. (1.4c)
Using additionally the short form ẋ for (d/dt)x, we obtain the compact vector notation
The variables u, y and x are called input, output and state of the dynamical system.
If the state x represents an element of an n_x-dimensional vector space X, then X is called the state space. The state of a system at time instant t can then be depicted as a point in the n_x-dimensional state space. The curve of these points for varying time t in the state space is called a trajectory and is denoted by x(·).
Remark 1.6
Systems with infinite dimensional states are called distributed parameter systems and are described, e.g., via partial differential equations. Examples of such systems are beams, boards, membranes, electromagnetic fields, heat conduction etc.
T := {t_k | t_k := t_0 + k · T} ⊂ R,
where t0 is some fixed initial time stamp. Apart from equidistant sampling, other types such as
event based or sequence based are possible. The equidistant case, however, is important in digital
control, which we consider later in the lecture.
Remark 1.7
Note that the class of discrete time systems is larger and contains the class of continuous time
systems, i.e. for each continuous time system there exists a discrete time equivalent, but for some
discrete time systems no continuous time equivalent exists.
To mathematically describe discrete time systems so called difference equations and algebraic
equations are used. Similar to (1.3) we write
x_1(k+1) = f_1(x_1(k), …, x_{n_x}(k), u_1(k), …, u_{n_u}(k), k)
⋮        Difference equations (1.6a)
x_{n_x}(k+1) = f_{n_x}(x_1(k), …, x_{n_x}(k), u_1(k), …, u_{n_u}(k), k)

x_1(0) = x_{1,0}
⋮        Initial conditions (1.6b)
x_{n_x}(0) = x_{n_x,0}

y_1(k) = h_1(x_1(k), …, x_{n_x}(k), u_1(k), …, u_{n_u}(k), k)
⋮        Output equations (1.6c)
y_{n_y}(k) = h_{n_y}(x_1(k), …, x_{n_x}(k), u_1(k), …, u_{n_u}(k), k)
Again, we combine the input, output and state variables to (column) vectors
u = [u_1 u_2 … u_{n_u}]⊤ (1.7a)
y = [y_1 y_2 … y_{n_y}]⊤ (1.7b)
x = [x_1 x_2 … x_{n_x}]⊤ (1.7c)

and obtain the compact notation

x(k+1) = f(x(k), u(k), k), x(0) = x_0 (1.8a)
y(k) = h(x(k), u(k), k). (1.8b)
An example of a discrete time system is the interest rate development of a bank deposit. Consider
x(k ) to be the bank deposit in month k and p to be the interest rate in percentage. If we place
u(k ) on the deposit at month k, then the model of the bank deposit development reads
x(k+1) = (1 + p/100) · x(k) + u(k). (1.9)
Based on this model the bank deposit can be computed for all following months.
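The recursion (1.9) can be iterated directly. A minimal sketch, with made-up interest rate and payment schedule:

```python
# Iterating the deposit model (1.9): x(k+1) = (1 + p/100) * x(k) + u(k).
# Initial deposit, interest rate and payments are made-up numbers.

def simulate_deposit(x0, p, payments):
    """Return the deposit after each month for monthly payments u(k)."""
    x, history = x0, []
    for u_k in payments:
        x = (1 + p / 100) * x + u_k
        history.append(x)
    return history

# 1000 initial deposit, 1 % monthly interest, 100 paid in every month;
# after month 1 the balance is 1000 * 1.01 + 100 = 1110.
balances = simulate_deposit(1000.0, 1.0, [100.0] * 3)
```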
Solution to Task 1.8: Suppose the player owns 0 ≤ k ≤ a + b chips, and therefore the
casino owns a + b − k chips. Let x(k ) denote the probability that the player with k chips
wins. Hence, depending on whether the player wins or not the player will have k + 1 or
k − 1 chips after this iteration. Therefore, the difference equation is given by
Note that in both discrete and continuous time, the dynamic reveals a flow of the system at hand, whereas a trajectory is bound to a specific initial value and input sequence. Figure 1.3 illustrates the idea of flow and trajectory. Here, the flow is colored to mark its intensity, whereas the arrows point in its direction. The trajectory is evaluated for a specific initial value and "follows" the flow accordingly.
[Figure 1.3: Flow (colored by intensity, arrows indicating direction) and a trajectory in the (x_1, x_2) state space]
[Figure 1.4: Standard block diagram elements: summation y = u_1 − u_2 + u_3; integrator y(t) = y(t_0) + ∫_{t_0}^{t} u(τ) dτ; gain y(t) = k · u(t); multiplication y = u_1 · u_2; general static map y = f(u_1, u_2); branching y_1(t) = u(t), y_2(t) = u(t)]
These symbols allow us to visually break down the structure of a system into its elements. For illustration, we consider a separately excited DC machine, which moves a load via a cable drum, cf. Figure 1.5. Within the example, we denote the armature and excitation currents by I_A and I_F, and the armature and excitation voltages by U_A and U_F respectively. Moreover, we denote the winding resistances by R_A and R_F and the magnetic flux by Ψ_F(I_F), where L_A and k represent the armature inductance and gain, and ω the rotation speed of the motor.
Figure 1.5.: Schematic of separately excited DC machine moving a (hoisting) drum roll
The mathematical model can be derived from the mesh equations of the armature circuit and the
excitation circuit
where we have
U_{L_A} = L_A · (d/dt) I_A (1.11a)
U_{L_F} = (d/dt) Ψ_F(I_F) = (∂/∂I_F) Ψ_F(I_F) · (d/dt) I_F (1.11b)
U_ind = k · Ψ_F(I_F) · ω. (1.11c)
L_A · (d/dt) I_A = U_A − R_A · I_A − k · Ψ_F(I_F) · ω (1.12a)
(d/dI_F) Ψ_F(I_F) · (d/dt) I_F = U_F − R_F · I_F. (1.12b)
Introducing the force F_s of the rope, then by balance of angular momentum at the motor we have
(d/dt) φ = ω (1.13a)
(Θ_G + Θ_T) · (d/dt) ω = M_el − F_s · r = kΨ_F(I_F) · I_A − F_s · r (1.13b)
where r and φ are the radius and angle of the drum roll and M_el = kΨ_F(I_F) · I_A is the electric torque of the motor. Similarly, via balance of momentum at the rope we have
(d/dt) x = v (1.14a)
m · (d/dt) v = F_s − m · g (1.14b)
where x, v and m are the position, velocity and mass of the load, and g is the acceleration of
gravity. Using
r · (d/dt) φ = (d/dt) x = v (with (d/dt) φ = ω)
in (1.14b) we obtain

F_s = m · r · (d/dt) ω + m · g.
Now, we can combine systems (1.12) and (1.13) to obtain the combined system of differential
equations
(d/dt) I_A = (1/L_A) · (U_A − R_A · I_A − k · Ψ_F(I_F) · ω) (1.15a)
(d/dt) I_F = (1/((∂/∂I_F) Ψ_F(I_F))) · (U_F − R_F · I_F) (1.15b)
(d/dt) φ = ω (1.15c)
(d/dt) ω = (1/(Θ_G + Θ_T + m · r²)) · (kΨ_F(I_F) · I_A − m · g · r). (1.15d)
Utilizing Definitions 1.1 and 1.4 we identify the inputs u = [U_A U_F]⊤, the output y = r · φ and the states x = [I_A I_F φ ω]⊤.
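As a rough illustration, system (1.15) can be integrated numerically, e.g. with the explicit Euler method. All parameter values below are made up, and the flux characteristic is assumed linear, Ψ_F(I_F) = L_F · I_F, so that ∂Ψ_F/∂I_F = L_F; this assumption is made purely for this sketch.

```python
# Explicit-Euler sketch of the DC machine model (1.15). Parameter
# values are made up; the linear flux Psi_F(I_F) = L_F * I_F is an
# assumption for illustration only.

L_A, R_A, L_F, R_F, k = 0.05, 1.0, 0.5, 2.0, 1.2
m, g, r, Theta = 5.0, 9.81, 0.1, 0.2      # Theta = Theta_G + Theta_T

def f(x, u):
    I_A, I_F, phi, omega = x
    U_A, U_F = u
    psi = L_F * I_F                        # assumed flux characteristic
    dI_A = (U_A - R_A * I_A - k * psi * omega) / L_A        # (1.15a)
    dI_F = (U_F - R_F * I_F) / L_F                          # (1.15b)
    dphi = omega                                            # (1.15c)
    domega = (k * psi * I_A - m * g * r) / (Theta + m * r * r)  # (1.15d)
    return [dI_A, dI_F, dphi, domega]

def euler(x0, u, h, steps):
    x = list(x0)
    for _ in range(steps):
        x = [xi + h * fi for xi, fi in zip(x, f(x, u))]
    return x

# Constant voltages; at t = 0 the flux is zero, so the load initially
# pulls omega negative before the electric torque builds up.
state = euler([0.0, 0.0, 0.0, 0.0], (10.0, 4.0), 1e-4, 5000)
```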
To represent system (1.15) in a block diagram, we first integrate this system of first order differ-
ential equations and get
I_A(t) = I_A(0) + (1/L_A) ∫_0^t (U_A(τ) − R_A · I_A(τ) − k · Ψ_F(I_F(τ)) · ω(τ)) dτ (1.16a)
I_F(t) = I_F(0) + ∫_0^t (1/((∂/∂I_F) Ψ_F(I_F(τ)))) · (U_F(τ) − R_F · I_F(τ)) dτ (1.16b)
φ(t) = φ(0) + ∫_0^t ω(τ) dτ (1.16c)
ω(t) = ω(0) + (1/(Θ_G + Θ_T + m · r²)) ∫_0^t (kΨ_F(I_F(τ)) · I_A(τ) − m · g · r) dτ. (1.16d)
In the first step, we consider the simplest equation (1.15c). Utilizing the standard blocks from
Figure 1.4 we obtain Figure 1.6.
[Figure 1.6: Integrator from ω to φ with initial value φ(0)]
Next, we consider equation (1.15d) and separate it into operating blocks in Figure 1.7.
[Figure 1.7: Block diagram of (1.15d): I_F enters via Ψ_F(·) and gain k, is multiplied with I_A, m · g · r is subtracted, and the result is scaled by 1/Θ̃ (with Θ̃ = Θ_G + Θ_T + m · r²) and integrated with initial value ω(0) to yield ω]
Considering the excitation circuit (1.16b), we obtain the diagram of Figure 1.8. Last, we get Figure 1.9 for the armature circuit (1.16a).
Note that now all ingoing values to the block diagrams are either input u or states x. Hence, we
can connect these lines and obtain the overall block diagram.
Task 1.9
Draw the overall block diagram for the separately excited DC machine with drum roll from
Figure 1.5.
[Figure 1.8: Block diagram of the excitation circuit (1.16b): U_F minus the feedback R_F · I_F is scaled by 1/((∂/∂I_F) Ψ_F(I_F(τ))) and integrated with initial value I_F(0) to yield I_F]
[Figure 1.9: Block diagram of the armature circuit (1.16a): U_A minus the feedback R_A · I_A and the induced voltage k · Ψ_F(I_F) · ω is scaled by 1/L_A and integrated with initial value I_A(0) to yield I_A]
Solution to Task 1.9: The blocks from Figures 1.6–1.9 are connected via their states, cf. Figure 1.10.
Figure 1.10.: Block diagram for separately excited DC machine with drum roll
The first and possibly most important property used in system theory is linearity. Informally, we can say that "a system is linear if all its components and connections are linear". Note that, unfortunately, we cannot say that "a system is nonlinear if one of its components or connections is nonlinear". A counterexample is given by the circuits in Figure 1.11, which are equivalent, yet one system is linear and one contains nonlinear elements.
(a) Circuit with two diodes and one resistor (b) Circuit with two diodes and one resistor
Within Definition 1.10 we call equation (1.18a) superposition principle, equation (1.18b) zero-
input-linearity and equation (1.18c) zero-state-linearity. Given Definition 1.10, we get the fol-
lowing:
Here, the matrices A(t) ∈ Rnx ×nx , B(t) ∈ Rnx ×nu , C (t) ∈ Rny ×nx and D (t) ∈ Rny ×nu depend
on time t ∈ R only.
Task 1.12
Is system
ẋ(t) = 2 + u(t)
y(t) = x(t)
linear?
Apart from linearity, the dependence on time is a key element of systems (1.17). In particular, if a system is independent of time, the starting point may be shifted freely without changing the behavior of the system and its output. Note that the time dependence of a model may differ from the time dependence of the system. For example, a model of a rocket without its environment does not depend on the weather or on orbital mechanics including the moon etc., yet the system itself clearly depends on these aspects, which vary over time.
Remark 1.14
Note that by considering a function evaluation of any function f at time instant t − T with T > 0,
the function is „shifted to the right by T“.
Task 1.15
Consider the example from Task 1.12. Is the system time invariant?
Regarding time invariance, the following necessary and sufficient conditions hold:
We like to note that linearity and time invariance is not limited to systems of the form (1.17). To
see this, consider the following:
Task 1.17
Consider a conveyor belt and let u(t) denote the amount of material put on the lower end
of the belt and let x(t) denote the amount of material issued at the upper end of the belt
at time t. For transportation from lower to upper end, the time t T (dead time) is required,
i.e. we have x(t) = u(t − t T ). Is the system linear and time invariant? Can the system be
formulated in the form (1.17)?
Solution to Task 1.17: The system is linear and time invariant, yet no description of
form (1.17) exists.
ẋ(t) = A · x(t), x(t_0) = x_0, (1.22)
then we know by Lipschitz continuity of the linear right hand side that a unique solution exists, and we can derive the corresponding solution by applying Picard's method of successive approximation. The latter reveals
x(t) = lim_{j→∞} x_j(t) = lim_{j→∞} (Id + A · t + A² · t²/2 + … + A^j · t^j/j!) · x_0 = ∑_{j=0}^{∞} (t^j/j!) · A^j · x_0.

Introducing the so called transition matrix

Φ(t) := exp(A · t) = ∑_{j=0}^{∞} A^j · t^j/j!, (1.23)

the solution can be written compactly as

x(t) = Φ(t) · x_0. (1.24)
Task 1.20
Compute the transition matrix of the system

[ẋ_1(t); ẋ_2(t)] = [[0, 1], [0, 0]] · [x_1(t); x_2(t)].
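The series (1.23) can also be evaluated directly. For the nilpotent matrix of Task 1.20 we have A² = 0, so the series terminates after two terms and Φ(t) = [[1, t], [0, 1]] exactly; a minimal 2×2 sketch:

```python
# Transition matrix via the truncated series (1.23), restricted to the
# 2x2 case for simplicity.

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm_series(A, t, terms=20):
    """Phi(t) = sum_{j < terms} A^j t^j / j!, accumulated iteratively:
    term_j = term_{j-1} * A * (t/j)."""
    Phi = [[1.0, 0.0], [0.0, 1.0]]         # Id (j = 0 summand)
    term = [[1.0, 0.0], [0.0, 1.0]]
    for j in range(1, terms):
        term = mat_mul(term, A)
        term = [[e * t / j for e in row] for row in term]
        Phi = [[Phi[i][l] + term[i][l] for l in range(2)] for i in range(2)]
    return Phi

A = [[0.0, 1.0], [0.0, 0.0]]               # Task 1.20, A^2 = 0
Phi = expm_series(A, 3.0)                  # -> [[1.0, 3.0], [0.0, 1.0]]
```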
For the linear time invariant system (1.21), we can utilize the transition matrix and the method of
variation of constants to see the following:
x(t) = Φ(t) · x_0 + ∫_0^t Φ(t − τ) · B · u(τ) dτ (1.25a)
y(t) = C · x(t) + D · u(t). (1.25b)
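Formula (1.25a) can be cross-checked numerically on a scalar example ẋ = a·x + b·u with constant input, where Φ(t) = exp(a·t) and the convolution integral has the closed form (b·u/a)(e^{at} − 1); all numbers below are made up.

```python
# Numeric sketch of the variation-of-constants formula (1.25a) for a
# scalar system x' = a*x + b*u with constant input u.
import math

a, b, x0, u, t = -0.5, 2.0, 1.0, 0.3, 4.0

def phi(t):
    """Scalar transition 'matrix' exp(a*t)."""
    return math.exp(a * t)

# Midpoint-rule approximation of the convolution integral in (1.25a).
n = 100_000
h = t / n
convolution = sum(phi(t - (i + 0.5) * h) * b * u for i in range(n)) * h

x_numeric = phi(t) * x0 + convolution
x_exact = phi(t) * x0 + (b * u / a) * (phi(t) - 1.0)   # closed form
```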
Task 1.22
Consider the PI controller given by Figure 1.12 with corresponding equations
U̇_C(t) = (1/(R_1 · C)) · u(t)
y(t) = −U_C(t) − (R_2/R_1) · u(t).
Use Theorem 1.21 to compute the output y(t) for any feasible input u(t).
[Figure 1.12: Operational amplifier circuit of the PI controller with input resistor R_1, feedback resistor R_2 and capacitor C]
y(t) = −(1/(R_1 · C)) · ∫_0^t u(τ) dτ − (R_2/R_1) · u(t),
which gives us the proportional parameter K P = − R2 /R1 and the integral parameter K I =
−1/( R1 C ) of the controller.
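With the gains K_P and K_I identified above, the controller response to a unit step input is y(t) = K_P + K_I · t. A small sketch with made-up component values:

```python
# Step response of the PI controller from Task 1.22: for u(t) = 1,
# y(t) = K_P + K_I * t with K_P = -R2/R1 and K_I = -1/(R1*C).
# Component values are made up.

R1, R2, C = 10_000.0, 20_000.0, 1e-4       # ohms, ohms, farads

K_P = -R2 / R1                             # proportional gain: -2
K_I = -1.0 / (R1 * C)                      # integral gain: -1

def y_step(t):
    """Controller output for a unit step input u(t) = 1, t >= 0."""
    return K_P + K_I * t
```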
The last task is an example of the core of this lecture: the systematic manipulation of systems to fulfill tasks or force a desired behavior. As we will see later in the lecture, the systematic manipulation of nonlinear systems is in general more complicated than that of linear systems. Yet, for
sufficiently small neighborhoods of points in the operating range of a system, results for linear
systems apply also to nonlinear systems. This is particularly useful if these points are equilib-
ria (constant operating points) or reference trajectories. To this end, we consider autonomous
nonlinear systems of the form
and define:
f (x⋆ ) = 0 ∀t ≥ 0. (1.27)
Task 1.24
Compute the equilibria for the systems
Solution to Task 1.24: For system (1.28a) we have three equilibria x1⋆ = 1, x2⋆ = 2 and
x3⋆ = 3.
For system (1.28b) we have infinitely many equilibria x⋆ ∈ {x ∈ R² | x_2 = 0 ∧ x_1 ∈ R}.
Remark 1.25
For autonomous linear time invariant systems (1.22) we have
there exist infinitely many equilibria if and only if A is singular, i.e. det( A) = 0.
then the input u ∈ Rnu needs to be constant and fixed to u = u⋆ in order to compute the
equilibria.
f (x⋆ , u⋆ ) = 0 (1.30)
are called operating points of the system. If (1.30) holds true for any u⋆ , then the operating point
is called strong or robust operating point.
Remark 1.27
For linear time invariant systems (1.21a) we have
infinitely many operating points iff det( A) = 0 and rank( A) = rank([ A, B · u⋆ ]),
The property we are most interested in within control theory is stability. Utilizing Definition 1.26, we can introduce stability and asymptotic stability in two flavors each, related to robustness and controllability respectively. These concepts depend on the interpretation of u as an external control or a disturbance.
strongly or robustly stable operating point if, for each ε > 0, there exists a real number
δ = δ(ε) > 0 such that for all u we have
strongly or robustly asymptotically stable operating point if it is stable and there exists a
positive real constant r such that for all u
holds for all x_0 satisfying ∥x_0∥ ≤ r. If additionally r can be chosen arbitrarily large, then x⋆ is called globally strongly or robustly asymptotically stable.
weakly stable or controllable operating point if, for each ε > 0, there exists a real number
δ = δ(ε) > 0 such that for each x0 there exists a control u guaranteeing
Task 1.29
Draw solutions of systems for each of the cases in Definition 1.28.
Note that strongly asymptotically stable control systems are boring from a control point of view, since the chosen control does not affect the stability property of the system. Still, the control can be used to improve the performance of the system. Moreover, strong asymptotic stability is interesting in the presence of significant measurement or discretization errors. Its most interesting application is the analysis of robustness of a system, i.e. whether or not there exists an external input (in that case a disturbance) which can destabilize the system.
The concept of weak stability, on the other hand, naturally leads to the question of how to compute a control law such that x⋆ is weakly stable, and, in particular, how to characterize the quality of a control law.
In the following chapter, our focus will be to design a control law such that the stability property can be forced to hold for a given system.
Part I.
Frequency Domain
CHAPTER 2
In modeling of control systems, we used a white box idea in Chapter 1 and introduced the state
of a system. In practice, however, deriving such a white box model is neither always necessary
nor productive. In many (especially in simple) cases, a black box approach allows us to derive a
control with required properties much more easily. This approach utilizes the so called frequency
domain. In that case, the map between input and output is not defined via a state dependent
dynamic, but via a direct map from input to output, the so called transfer function. As we learned
in Control Engineering 1, there exists a linear and invertible transformation between the time
domain which we used in Chapter 1 and the frequency domain, the so called Laplace transform
(or z transform in the discrete time case).
Within this chapter, we first recall the connection of frequency and time domain for simple sys-
tems before moving to more complex control loops.
For further details, we additionally refer to DIN 19226 [1, 2].
fˆ(s) = L(f(t)) = ∫_0^∞ exp(−st) · f(t) dt, s = α + iω (2.1)

is called the Laplace transform of the time function f(t), and the set C_γ = {s ∈ C | Re(s) > γ} is called the area of existence of fˆ(s).
Remark 2.2
Note that the integral (2.1) converges absolutely if Re(s) > γ.
Task 2.3
Compute the Laplace transform and its area of existence for f (t) = exp( at).
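A numeric sanity check of Task 2.3: for real s > a, the integral (2.1) applied to f(t) = exp(a·t) approaches 1/(s − a), matching the area of existence Re(s) > a. The values of a and s below are made up.

```python
# Numeric check: L(exp(a*t))(s) = 1/(s - a) for s > a, via a truncated
# midpoint-rule approximation of the integral (2.1).
import math

a, s = 1.0, 3.0                            # s > a, so (2.1) converges

T, n = 40.0, 200_000                       # truncation horizon, steps
h = T / n
# integrand exp(-s*t) * exp(a*t) = exp(-(s - a)*t)
integral = sum(math.exp(-(s - a) * (i + 0.5) * h) for i in range(n)) * h
exact = 1.0 / (s - a)                      # -> 0.5
```

For s ≤ a the integrand does not decay and the integral diverges, which is exactly the statement about the area of existence.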
Two special cases of Laplace transformed functions are the so called Heaviside and Dirac delta functions. The Heaviside function represents a unit jump, which is not differentiable but is integrable, and is also not defined at the jump point.
Task 2.4
Compute the Laplace transform of the Heaviside function
η(t) = 0 for t < 0, undefined for t = 0, and 1 for t > 0.
[Figure: Graph of the Heaviside function η(t)]
The Dirac delta function may be viewed as the (distributional) derivative of the Heaviside function, or may be interpreted as a functional which evaluates an n-times continuously differentiable function, or its derivatives, at zero.
Task 2.5
Compute the Laplace transform of the Dirac delta function
satisfying

∫_{−∞}^{∞} δ(t) · g(t) dt = g(0)
∫_{−∞}^{∞} (dⁿ/dtⁿ) δ(t) · g(t) dt = (−1)ⁿ · (dⁿ/dtⁿ) g(0),

using the representation

δ(t) = lim_{τ→0} (η(t) − η(t − τ))/τ.
[Figure: Rectangular pulse approximation (η(t) − η(t − τ))/τ of the Dirac delta function]
L(δ(t)) = lim_{τ→0} L((η(t) − η(t − τ))/τ)
= lim_{τ→0} (1/τ) · (∫_0^∞ η(t) · exp(−st) dt − ∫_0^∞ η(t − τ) · exp(−st) dt)
= lim_{τ→0} (1/τ) · ∫_0^τ exp(−st) dt = lim_{τ→0} (1 − exp(−sτ))/(sτ) = lim_{τ→0} (s · exp(−sτ))/s = 1,

where the last step uses l'Hospital's rule.
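The limit L(δ(t)) = 1 can also be observed numerically: the Laplace transform (1 − exp(−sτ))/(sτ) of the rectangular pulse tends to 1 as τ → 0 for any fixed s; the value of s below is made up.

```python
# Numeric illustration: (1 - exp(-s*tau)) / (s*tau) -> 1 as tau -> 0.
import math

s = 2.0

def pulse_transform(tau):
    """Laplace transform of the pulse (eta(t) - eta(t - tau)) / tau."""
    return (1.0 - math.exp(-s * tau)) / (s * tau)

# Evaluate for tau = 0.1, 0.01, ..., 1e-5: the values increase toward 1.
values = [pulse_transform(10.0 ** -k) for k in range(1, 6)]
```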
f(t) = L^{−1}(fˆ(s)) = (1/(2πi)) · ∫_{r−i∞}^{r+i∞} fˆ(s) · exp(st) ds, t ≥ 0, r ∈ R (2.2)
The reason why the Laplace transform or Laplace transformed functions are used quite often to solve dynamic problems is due to its properties: While in the time domain the solution of a dynamics requires the computation of integrals, derivatives, time delays/advances, convolutions etc., in the frequency domain these problems can be solved using algebraic equations only. To recall the main properties and laws of computation, we refer to Table A.1.
Note that typically the computation of a Laplace transform and of its inverse is not done via equa-
tions (2.1) or (2.2) but via equivalence tables. Table B.1 summarizes a few of these equivalencies.
The main mathematical tool used to apply the equivalences from Table B.1 is the partial fraction decomposition. Since the entire fraction is typically not contained in the table, this method allows us to split the fraction into components which are available in the table and can therefore be transformed.
fˆ(s) = p̂(s)/q̂(s) (2.3)

where p̂(s) and q̂(s) are coprime real polynomials satisfying grad(p̂(s)) ≤ grad(q̂(s)) = n. Furthermore, suppose that q̂(s) can be transformed into the form

q̂(s) = ∏_{j=1}^{h} (s − λ_j)^{k_j} · ∏_{j=1}^{m} ((s − α_j)² + β_j²)^{l_j} (2.4)

with grad(q̂(s)) = n = ∑_{j=1}^{h} k_j + 2 · ∑_{j=1}^{m} l_j. Then, the function fˆ(s) can be uniquely reformulated to the partial fraction decomposition

fˆ(s) = c_0 + ∑_{j=1}^{h} ∑_{i=1}^{k_j} c_{ji}/(s − λ_j)^i + ∑_{j=1}^{m} ∑_{i=1}^{l_j} (d_{ji} + e_{ji} · s)/((s − α_j)² + β_j²)^i (2.5)

with c_0 = lim_{s→∞} p̂(s)/q̂(s) and real coefficients c_{ji}, d_{ji} and e_{ji}.
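For simple real poles, the coefficients in (2.5) can be computed with the classical cover-up rule c_j = p̂(λ_j)/∏_{i≠j}(λ_j − λ_i); the example fraction below is made up.

```python
# Partial fractions for the simple-pole case of (2.5) via the cover-up
# rule: f(s) = p(s) / prod_j (s - l_j) = sum_j c_j / (s - l_j).

def residues(p, poles):
    """Coefficients c_j for distinct simple poles l_j."""
    coeffs = []
    for j, lj in enumerate(poles):
        denom = 1.0
        for i, li in enumerate(poles):
            if i != j:
                denom *= lj - li
        coeffs.append(p(lj) / denom)
    return coeffs

# f(s) = (s + 3) / ((s + 1)(s + 2)) = 2/(s + 1) - 1/(s + 2)
c = residues(lambda s: s + 3.0, [-1.0, -2.0])   # -> [2.0, -1.0]

# Spot check at s = 1: both representations must agree.
lhs = (1.0 + 3.0) / ((1.0 + 1.0) * (1.0 + 2.0))
rhs = c[0] / (1.0 + 1.0) + c[1] / (1.0 + 2.0)
```

Both summands are then found in the equivalence table (Table B.1) as exponentials in the time domain.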
While the Laplace transform is not necessary to derive a description of a black box model, it is
very useful to see the interconnection between the white box and black box description.
then we obtain
which gives us
ŷ(s) = C · (s · Id − A)^{−1} · x_0 + (C · (s · Id − A)^{−1} · B + D) · û(s) (2.7)
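The zero state part of (2.7) can be evaluated pointwise for complex s. A sketch for a two-state double integrator (the matrices A, B, C, D below are chosen for illustration), where analytically G(s) = 1/s²:

```python
# Evaluate G(s) = C (s*Id - A)^{-1} B + D for n_x = 2, SISO, complex s.

def transfer_2x2(A, B, C, D, s):
    # s*Id - A and its explicit 2x2 inverse
    m = [[s - A[0][0], -A[0][1]],
         [-A[1][0], s - A[1][1]]]
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    inv = [[m[1][1] / det, -m[0][1] / det],
           [-m[1][0] / det, m[0][0] / det]]
    # row vector C (1x2) times inv, then times column B (2x1), plus D
    row = [C[0][0] * inv[0][0] + C[0][1] * inv[1][0],
           C[0][0] * inv[0][1] + C[0][1] * inv[1][1]]
    return row[0] * B[0][0] + row[1] * B[1][0] + D[0][0]

A = [[0.0, 1.0], [0.0, 0.0]]               # double integrator
B = [[0.0], [1.0]]
C = [[1.0, 0.0]]
D = [[0.0]]

s = 2.0 + 1.0j
G = transfer_2x2(A, B, C, D, s)            # matches 1/s**2
```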
Technically, we are not restricted to the linear case of system (1.21). For this reason, we can
directly define the following:
In the linear case, we know from Theorem 1.21 that the general solution to system (1.21) reads
x(t) = Φ(t) · x_0 + ∫_0^t Φ(t − τ) · B · u(τ) dτ
y(t) = C · x(t) + D · u(t)
with transition matrix Φ(t). Applying the Laplace transform to the state dynamics and the con-
and thereby
Remark 2.10
The two parts of (2.9) can be interpreted physically.
The first part Φ̂(s) · x0 represents the response of the system if no input is applied. For this
reason, it is called zero input response.
The second part Φ̂(s) · B · û(s) represents the response to an input if the initial system state is zero. Accordingly, it is termed zero state response.
For the zero state response of the system, i.e. x0 = 0, we can conclude
Remark 2.12
Note that due to non-commutativity of matrix multiplication, the sequence of transfer matrices is
important and may not be switched as in the one dimensional case of transfer functions.
The inverse problem, i.e. deriving a state description from a transfer matrix/function, is called the realization problem. For a system, the related property is called properness.
If there exists a (possibly nonlinear) system (1.20) such that G (s) is the transfer function of the
system, then the transfer matrix G (s) is called proper.
For the latter, if a solution exists then it is not unique, and one typically addresses a minimal realization only. The main result is the following, which we state for the one dimensional case:
G(s) = z(s)/n(s) (2.12)

with polynomials z(s) and n(s). The transfer function is proper if and only if grad(z(s)) ≤ grad(n(s)).
Remark 2.15
Properness of a system can be extended to the MIMO case. In order to compute the latter, one
typically applies a parameter transformation first (similar to the computation of the Jordan ma-
trix) to separate the connections between inputs and outputs. For further details, we refer to the
book of Isidori [8, Chapter 5] for the general nonlinear case or the article of Müller [15] for the
linear case.
In the literature, there are two canonical minimal realizations, which can be obtained via partial fraction decomposition, cf. Theorem 2.7. The canonical minimal realizations are called controllable normal form and observable normal form and are given in the appendix, cf. Definitions B.1 and B.2 respectively. As the MIMO case introduces (many) further zeros and ones, we restrict ourselves to the SISO case here. A full description can be found in Müller [15].
G(s) = z(s)/n(s) (2.14)

with coprime polynomials z(s) and n(s). Then we have grad(z(s)) ≤ grad(n(s)) ≤ n_x, and the zeros of n(s) are called poles of the transfer function G(s) and are equal to the eigenvalues of A.
Utilizing Definition 1.28 on stability in the frequency domain, we can show that the following
holds:
Then the system is strong/robust/BIBO stable if and only if the impulse response g satisfies

∫_0^∞ ∥g(t)∥ dt < ∞. (2.16)
If we know, that the transfer matrix corresponds to a linear time invariant system, then Theo-
rem 2.18 simplifies to
The latter can be verified by checking the poles of the transfer function.
Theorem 2.20 (Strong/robust/BIBO stability for linear time invariant systems via transfer function).
A linear time invariant system (1.21) is strong/robust/BIBO stable if and only if all poles of its transfer function exhibit negative real part.
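The pole test of Theorem 2.20 is easy to sketch for a made-up second order transfer function G(s) = 1/(s² + p₁·s + p₀): the poles are the roots of the denominator, and BIBO stability requires negative real parts.

```python
# Pole-based BIBO stability check for G(s) = 1 / (s^2 + p1*s + p0).
import cmath

def poles_2nd_order(p1, p0):
    """Roots of the denominator polynomial s^2 + p1*s + p0."""
    disc = cmath.sqrt(p1 * p1 - 4.0 * p0)
    return ((-p1 + disc) / 2.0, (-p1 - disc) / 2.0)

def bibo_stable(p1, p0):
    """All poles must lie strictly in the open left half plane."""
    return all(p.real < 0 for p in poles_2nd_order(p1, p0))

stable = bibo_stable(3.0, 2.0)     # poles -1, -2 -> True
unstable = bibo_stable(1.0, -2.0)  # poles 1, -2 -> False
```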
Before coming to structures with multiple inputs and multiple outputs, we consider two interme-
diate cases where we use multiple outputs to improve the controller for a single input.
[Figure 2.3: Block diagram of a two-loop cascade control with controllers G_{R_j}, G_{R_{j−1}} and plants G_{S_{j−1}}, G_{S_j}]
The reason for utilizing such a control structure is that, in order for a controller to react, disturbances must first affect the state and the output. As a consequence, the output will suffer and diverge from the target point, where the speed of recovery is determined by the response speed of the control loop. Unfortunately, plants which are difficult to control often exhibit low gains and long integral times for stability, and hence have a slow response. Such plants are prone to errors from disturbances. Examples of such applications can, e.g., be found in motor control, cf. Example 2.21.
3. position control u_φ.

[Figure: Block diagram of the cascaded motor control with controllers G_{R_3}, G_{R_2}, G_{R_1}, plants G_{S_1}, G_{S_2}, G_{S_3}, controls u_M, u_ω, u_φ, outputs y_M, y_ω, y_φ and disturbance d_M]
Remark 2.22
Closed loop systems can in general be considered to show the behavior of a second order system with a natural angular frequency and a damping factor. At frequencies above the natural frequency, the closed loop gain decreases rapidly (at 12 dB per octave). Consequently, disturbances faster than double the natural frequency will remain more or less uncorrected. In the case of underdamped systems, i.e. a damping factor less than unity, disturbances with frequencies around the natural frequency may even be magnified.
If the disturbance can be measured and its effect is known (even approximately), the idea of a cascade controller can be applied. A correcting signal can then be added either on the inner or the outer loop to compensate the issues described above.
While typically a two loop structure as shown in Figure 2.3 is used, a cascade controller technically distinguishes between outer and inner loops only. Hence, concatenated loops are also possible. More formally, extending on Figure 2.3, the transfer function of the cascade control is given by the following.
Suppose the structure of Figure 2.3 holds iteratively for these controllers and systems. Then we
call
transfer function of a cascade controlled system where the transfer functions Gcascadej (s) are
defined via
In that sense, an outer loop control can therefore be interpreted as feed forward for an inner loop.
It supplies a reference trajectory which shall be followed by the (typically faster) inner controller.
On the downside, the usage of cascade control increases the complexity of the control structure.
Additionally, it requires additional measurement devices and controllers.
Remark 2.24
In general, if the inner loop is at least three times faster than the outer loop, then the improved
performance justifies the investment of a cascade controller.
The nice property of the cascade control is that the problems to be tackled by the concatenated
loops can be separated and therefore treated subsequently.
Remark 2.25
Note that the problems are not treatable independently but require subsequent steps.
As noted before, the problems are nested and tackled in a subsequent manner. From Algorithm 2.26 we observe that the closed loop behavior of the inner loop is the open loop behavior of the subsequent outer loop. Hence, the design of the inner loop control heavily influences the difficulty of designing the subsequent outer loop. That is, if a fast behavior is desired, then the inner control needs to be aggressive.
In most cases, only the performance of the outermost loop is of interest to the user, whereas all inner loops are only a means to an end. Therefore, criteria such as „no constant deviation“ or „minimal overshoot“ are not of interest for the inner loops. For this reason, typically P or PDT1 controllers, which are also faster, are used on the inner loops, whereas the much slower PI controllers are applied on the outermost loop.
Task 2.27
Consider the cascade control from Figure 2.5. What is the advantage of using a cascade
controller utilizing y1 (s) and y2 (s) as compared to a SISO controller based on y2 (s) only?
[Block diagram: outer controller GR2 and inner controller GR1 acting on system GS1 followed by an integrator 1/s; the inner loop feeds back y1, the outer loop feeds back y2.]
Figure 2.5.: Block diagram of a cascade control for integral outer loop
Solution to Task 2.27: Designing the inner loop as a P controller is equivalent to using a D controller on the outer loop. This configuration avoids the disadvantages of the D controller such as noise sensitivity and the rank problem in the transfer function.
Task 2.28
Consider a robot arm with variables as shown in Figure 2.6. For robots, three typical control
types exist:
torque control, i.e. to apply a defined torque / moment within a working area,
position control, i.e. to guarantee sufficiently accurate movement of the arm indepen-
dent from torques / moments, and
hybrid control, i.e. an application dependent switching between torque and position
control.
Consider the cascade control from Figure 2.7 to be applied for position control of one of the
angles φ j, j = 1, 2, 3. Suppose the transfer function block coefficients of drive j to read
\[ K_{P\varphi} = 0.2, \; K_{DT\varphi} = 0.009\,\mathrm{s}, \quad K_{I\omega} = 0.9\,\mathrm{s}^{-1}, \quad K_{Pu} = 2.8, \; K_{DTu} = 0.073\,\mathrm{s}, \quad K_{P\varphi} = 3.5, \; K_{DT\varphi} = 0.069\,\mathrm{s}. \]
For inner loop control coefficients K_P1 = 25.5 and K_T1 = 0.073 s, compute the optimal coefficients K_P2, K_I2 of the external PI controller.
[Figure 2.6: Robot arm with joint angles φ1, φ2, φ3 and link lengths l1, l2, l3.]
[Block diagram: cascade of PI controller (K_P2, K_I2), P controller (K_P1, K_T1), drive blocks (K_Pu, K_DTu), (K_Iω) and (K_Pφ, K_DTφ), with inner feedback of yω via a PDT1 element (K_Pφ, K_DTφ) and outer feedback of yφ.]
Figure 2.7.: Block diagram of a cascade robot control for one joint drive
Solution to Task 2.28: The inner open loop transfer function of the drive reads
\[ G_{01}(s) = K_{P1} \cdot (1 + s \cdot K_{T1}) \cdot \frac{K_{Pu}}{1 + s \cdot K_{DTu}} \cdot \frac{K_{I\omega}}{s}. \]
Since K_T1 = K_DTu = 0.073 s, the lead and lag terms cancel, leaving
\[ G_{01}(s) = \frac{K_{P1} \cdot K_{Pu} \cdot K_{I\omega}}{s}. \]
Hence, the inner closed loop reads
\[ G_{w1}(s) = \frac{1}{1 + \frac{1}{G_{01}(s)}} = \frac{1}{1 + \frac{s}{K_{P1} \cdot K_{Pu} \cdot K_{I\omega}}} = \frac{1}{1 + s \cdot K_{w1}} \]
with
\[ K_{w1} := \frac{1}{K_{P1} \cdot K_{Pu} \cdot K_{I\omega}} = \frac{1}{25.5 \cdot 2.8 \cdot 0.9\,\mathrm{s}^{-1}} \approx 0.016\,\mathrm{s}. \]
Using the latter, the outer open loop transfer function reads
\[ G_{02}(s) = \frac{K_{P2} \cdot (1 + s K_{I2})}{s \cdot K_{I2}} \cdot G_{w1}(s) \cdot \frac{K_{P\varphi}}{1 + s \cdot K_{DT\varphi}} = \frac{K_{P2} \cdot (1 + s K_{I2})}{s \cdot K_{I2}} \cdot \frac{1}{1 + s \cdot K_{w1}} \cdot \frac{K_{P\varphi}}{1 + s \cdot K_{DT\varphi}}. \]
Choosing
\[ K_{I2} := K_{w1} = 0.016\,\mathrm{s} \]
cancels the inner loop lag and yields
\[ G_{02}(s) = \frac{K_{P2} \cdot K_{P\varphi}}{s \cdot K_{I2} \cdot (1 + s \cdot K_{DT\varphi})}. \]
The outer closed loop then reads
\[ G_{w2}(s) = \frac{1}{1 + \frac{1}{G_{02}(s)}} = \frac{K_{P2} \cdot K_{P\varphi}}{K_{P2} \cdot K_{P\varphi} + s \cdot K_{I2} \cdot (1 + s \cdot K_{DT\varphi})}. \]
Evaluating on the imaginary axis gives
\[ G_{w2}(i\omega) = \frac{K_{P2} \cdot K_{P\varphi}}{K_{P2} \cdot K_{P\varphi} + i\omega \cdot K_{I2} \cdot (1 + i\omega \cdot K_{DT\varphi})} = \frac{K_{P2} \cdot K_{P\varphi}}{K_{P2} \cdot K_{P\varphi} - \omega^2 \cdot K_{I2} \cdot K_{DT\varphi} + i\omega \cdot K_{I2}}. \]
Hence, we have
\begin{align*}
|G_{w2}(i\omega)|^2 &= \frac{K_{P2}^2 \cdot K_{P\varphi}^2}{\left(K_{P2} \cdot K_{P\varphi} - \omega^2 \cdot K_{I2} \cdot K_{DT\varphi}\right)^2 + (\omega \cdot K_{I2})^2} \\
&= \frac{K_{P2}^2 \cdot K_{P\varphi}^2}{K_{P2}^2 \cdot K_{P\varphi}^2 - \omega^2 \cdot \left(2 \cdot K_{P2} \cdot K_{P\varphi} \cdot K_{I2} \cdot K_{DT\varphi} - K_{I2}^2\right) + \omega^4 \cdot K_{I2}^2 \cdot K_{DT\varphi}^2}.
\end{align*}
For an amplitude-optimal design, the term in ω² is set to zero, i.e.
\[ 2 \cdot K_{P2} \cdot K_{P\varphi} \cdot K_{I2} \cdot K_{DT\varphi} - K_{I2}^2 \approx 0 \iff K_{P2} \approx \frac{K_{I2}}{2 \cdot K_{P\varphi} \cdot K_{DT\varphi}} = \frac{0.016\,\mathrm{s}}{2 \cdot 0.2 \cdot 0.009\,\mathrm{s}} \approx 4.44. \]
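The two numerical values of the design can be double-checked with a few lines of Python (a plain re-computation of the formulas above, no control library involved):

```python
# Numeric sanity check of the cascade design values from Task 2.28.
K_P1, K_Pu, K_Iw = 25.5, 2.8, 0.9          # inner loop coefficients (K_Iw in 1/s)

K_w1 = 1.0 / (K_P1 * K_Pu * K_Iw)          # inner closed loop time constant
print(round(K_w1, 3))                      # → 0.016 (seconds)

K_Pphi, K_DTphi = 0.2, 0.009               # outer loop system coefficients
K_I2 = 0.016                               # := K_w1, rounded as in the notes
K_P2 = K_I2 / (2.0 * K_Pphi * K_DTphi)     # amplitude-optimal choice
print(round(K_P2, 2))                      # → 4.44
```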
To summarize, a cascade control shows the advantages and disadvantages given in Table 2.1 when compared to a SISO control.
[Block diagram: feed forward control, reference w passed through GF to the input u of system GS producing output y.]
If such an identity can be reached, that is GF (s) = 1/GS (s), the control would truly be optimal.
However, the main reasons why such a behavior is not realizable are:
1. Degree of numerator larger than degree of denominator. This is necessary (but not realizable) since the system GS(s) exhibits a numerator degree smaller than its denominator degree; only in exceptional cases are both identical.
\[ G_S(s) = \frac{K}{(1 + T_1 s) \cdot (1 + T_2 s)} \longrightarrow G_F(s) = \frac{(1 + T_1 s) \cdot (1 + T_2 s)}{K} \cdot \frac{1}{(1 + Ts)^2} \implies G_w(s) = \frac{1}{(1 + Ts)^2}. \]
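As a quick sanity check, one can confirm numerically that the product G_F(s)·G_S(s) collapses to 1/(1+Ts)²; the parameter values K, T1, T2, T below are illustrative choices, not taken from the lecture:

```python
# Pointwise check that G_F(s)*G_S(s) = 1/(1+T*s)^2 for the first example;
# the values of K, T1, T2, T are illustrative, not from the lecture.
K, T1, T2, T = 2.0, 0.5, 0.3, 0.05

def G_S(s):  # plant
    return K / ((1 + T1*s) * (1 + T2*s))

def G_F(s):  # inverse of G_S combined with the realizability filter (1+Ts)^-2
    return (1 + T1*s) * (1 + T2*s) / (K * (1 + T*s)**2)

for s in (0.5j, 2j, 1 + 3j, -0.2 + 10j):
    assert abs(G_F(s) * G_S(s) - 1 / (1 + T*s)**2) < 1e-9
print("G_w(s) = 1/(1+Ts)^2 confirmed pointwise")
```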
2. Negative lag time. If the system exhibits a lag time, then its inverse must have a negative
lag time, i.e. a predictor. Hence, the controller is not causal and can only be applied if the
output is known in advance.
\[ G_S(s) = \frac{K}{1 + T_1 s} \cdot \exp(-T_2 s) \longrightarrow G_F(s) = \frac{1 + T_1 s}{K} \cdot \frac{1}{1 + Ts} \implies G_w(s) = \frac{1}{1 + Ts} \cdot \exp(-T_2 s). \]
3. Unstable behavior. If the system GS(s) shows unstable zeros, i.e. it is not minimum phase, then the feed forward GF(s) must have unstable poles. The latter may lead to unbounded controls, and if the poles and zeros do not cancel out exactly, then the loop will be unstable.
\[ G_S(s) = \frac{K \cdot (1 - T_1 s)}{(1 + T_2 s) \cdot (1 + T_3 s)} \longrightarrow G_F(s) = \frac{(1 + T_2 s) \cdot (1 + T_3 s)}{K} \cdot \frac{1}{(1 + Ts)^2} \implies G_w(s) = \frac{1 - T_1 s}{(1 + Ts)^2}. \]
→ Remedy: Set the amplitude response |Gw(iω)| = 1. To this end, any unstable zero of the system is compensated by its stable mirror counterpart in the denominator of the control, for example
\[ G_S(s) = \frac{K \cdot (1 - T_1 s)}{(1 + T_2 s) \cdot (1 + T_3 s)} \longrightarrow G_F(s) = \frac{(1 + T_2 s) \cdot (1 + T_3 s)}{K \cdot (1 + T_1 s)} \cdot \frac{1}{1 + Ts} \implies G_w(s) = \frac{1 - T_1 s}{(1 + T_1 s) \cdot (1 + Ts)}. \]
→ Remedy: Set the phase response ∠Gw(iω) = 0. To this end, any unstable zero of the system is compensated by its stable mirror counterpart in the numerator of the control, for example
\[ G_S(s) = \frac{K \cdot (1 - T_1 s)}{(1 + T_2 s) \cdot (1 + T_3 s)} \longrightarrow G_F(s) = \frac{(1 + T_2 s) \cdot (1 + T_3 s) \cdot (1 + T_1 s)}{K \cdot (1 + Ts)^3} \implies G_w(s) = \frac{(1 - T_1 s) \cdot (1 + T_1 s)}{(1 + Ts)^3} = \frac{1 - T_1^2 s^2}{(1 + Ts)^3}. \]
Note that in all cases, the unstable zeros remain uncompensated. Hence, considering realizability
of a control, the choice of T is always a compromise between sensitivity wrt. noise and speed of
the control.
As a consequence, we know that unstable systems cannot be stabilized via feed forward, but
require a feedback structure to enforce stable behavior.
Remark 2.29
In fact, unstable zeros (and weakly damped ones, that is, zeros close to the stability boundary) should never be canceled out, in order to avoid instability of the control (or strong oscillations in case of weakly damped zeros).
Yet, a disturbance can only be suppressed by design of a feedback control, if the reference input
and the disturbance are within the same frequency range. To tackle the case where these vari-
ables exhibit different frequency ranges, we introduce the precontrol and the equivalent prefilter
concept.
To integrate a feed forward and a feedback into one loop, two structures are possible, cf. Fig-
ure 2.9 and 2.10 respectively.
[Figure 2.9: Precontrol structure, the feed forward GF acts on w and its output is added to the output of the feedback controller GR before the system GS; a disturbance d acts on the output y.]
[Figure 2.10: Prefilter structure, the prefilter GP shapes the reference w before the closed loop of GR and GS.]
Remark 2.30
Since the feed forward does not interact with the feedback, the stability properties of the closed
loop remain unchanged.
The structures of the precontrol (Figure 2.9) and of the prefilter (Figure 2.10) allow to simultaneously treat reference tracking via the feedback and disturbance suppression via the feed forward. Hence, the choice of T no longer depends on finding a compromise between sensitivity wrt. noise and speed of the control.
More formally, we define the following:
If we compare both approaches, then precontrol and prefilter show identical behavior if
\[ G_P(s) \equiv 1 + \frac{G_F(s)}{G_R(s)} \tag{2.23} \]
holds, i.e. if the overall transfer functions coincide.
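Condition (2.23) can be verified pointwise: with G_P = 1 + G_F/G_R, the reference transfer function of the precontrol loop equals that of the prefilter loop. All transfer functions below are illustrative stand-ins evaluated at a few complex frequencies:

```python
# Spot check of (2.23): precontrol and prefilter reference transfer functions
# coincide when G_P = 1 + G_F/G_R. All transfer functions are illustrative.
def G_S(s): return 1.0 / ((1 + 0.5*s) * (1 + 0.1*s))
def G_R(s): return 4.0 + 1.0/s                     # PI controller
def G_F(s): return (1 + 0.5*s) / (1 + 0.05*s)      # rough plant inverse

for s in (0.5j, 1 + 1j, 3j):
    precontrol = G_S(s) * (G_R(s) + G_F(s)) / (1 + G_R(s) * G_S(s))
    G_P = 1 + G_F(s) / G_R(s)
    prefilter = G_P * G_R(s) * G_S(s) / (1 + G_R(s) * G_S(s))
    assert abs(precontrol - prefilter) < 1e-12
print("precontrol and prefilter behavior coincide")
```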
(1) Design the feed forward GF (s) to be (approximately) the inverse of the system GS (s).
(2) Design the feedback GR (s) such that the disturbance d(t) is suppressed as good as possible.
(1) Design the feedback GR (s) such that the disturbance d(t) is suppressed as good as possible.
(2) Design the feed forward GF (s) to be (approximately) the inverse of the closed loop system.
Remark 2.36
Note that the precontrol — in contrast to the prefilter — does not depend on the closed loop.
Consequently, there is no need to adapt it in case the closed loop is optimized or changed after
the design process is finished.
Remark 2.37
The prefilter can be used to compensate for unwanted behavior of the closed loop caused by
large values K D of the D part of a closed loop control. Large K D may be wanted to stabilize or
improve performance of the closed loop, yet its impact on zeros of the closed loop may lead to
large overshoots. The prefilter can be used to cancel out these zeros, hence the D part can be
designed without having to worry about possible negative impacts.
Task 2.38
Consider the (critically stable) system given by the transfer function
\[ G_S(s) = \frac{5}{s \cdot (s + 5)^2}. \]
Since the system exhibits an explicit I part, the feedback is designed as a PD controller following
\[ G_R(s) = K_P \cdot (s + 1). \]
Solution to Task 2.38: For the closed loop we obtain the transfer function
\[ G_w(s) = \frac{\frac{5 K_P (s+1)}{s (s+5)^2}}{1 + \frac{5 K_P (s+1)}{s (s+5)^2}} = \frac{5 K_P (s+1)}{s^3 + 10 s^2 + (5 K_P + 25) s + 5 K_P}. \]
Hence, we observe an unwanted zero in the numerator given by (s + 1), which needs to be included in the prefilter. To ensure an unchanged amplitude response, we require GP(0) = 1. Consequently, we design
\[ G_P(s) = \frac{1}{s + 1}. \]
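The polynomial expansion of the closed loop denominator can be spot-checked numerically (K_P chosen arbitrarily for this check):

```python
# Pointwise check of the closed loop computed in Task 2.38 (K_P arbitrary).
K_P = 3.0

def G_S(s): return 5.0 / (s * (s + 5)**2)
def G_R(s): return K_P * (s + 1)

def G_w(s):                       # G_R*G_S / (1 + G_R*G_S)
    L = G_R(s) * G_S(s)
    return L / (1 + L)

def G_w_expanded(s):              # denominator multiplied out as in the notes
    return 5*K_P*(s + 1) / (s**3 + 10*s**2 + (5*K_P + 25)*s + 5*K_P)

for s in (1 + 2j, -0.5 + 1j, 3 - 4j):
    assert abs(G_w(s) - G_w_expanded(s)) < 1e-9
print("closed-loop transfer function confirmed")
```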
A different approach can be used if the disturbance can be measured. In this case, the system no longer exhibits only one but two outputs and may additionally show a disturbance transfer function. Since the system is disturbed, a feedback is required to stabilize it. The feedback is based on the output of the system. As the disturbance itself cannot be influenced, no closed loop can be used to compensate for the disturbance. Yet, a feed forward can be applied based on the measured disturbance to update the feedback. A sketch of the system is given in Figure 2.11.
[Figure 2.11: Disturbance control structure, the measured disturbance d is fed through GZ and subtracted from the controller output, while it acts on the system output through GD.]
For this structure, we obtain
\[ \hat{y}(s) = \frac{G_R(s) \cdot G_S(s)}{1 + G_R(s) \cdot G_S(s)} \cdot \hat{w}(s) + \frac{G_D(s) - G_S(s) \cdot G_Z(s)}{1 + G_R(s) \cdot G_S(s)} \cdot \hat{d}(s) \tag{2.24} \]
Choosing
\[ G_Z(s) = \frac{G_D(s)}{G_S(s)}, \]
2. the stability of the system is not influenced by the additional component if the disturbance influence is canceled out.
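That the choice G_Z = G_D/G_S cancels the disturbance term in (2.24) can be checked directly; the transfer functions below are illustrative examples:

```python
# With G_Z = G_D/G_S, the disturbance term G_D - G_S*G_Z in (2.24) vanishes
# identically; checked pointwise for illustrative transfer functions.
def G_S(s): return 2.0 / ((1 + 0.5*s) * (1 + 0.1*s))
def G_D(s): return 1.0 / (1 + 0.2*s)
def G_Z(s): return G_D(s) / G_S(s)

for s in (0.3j, 1 + 1j, 5j):
    assert abs(G_D(s) - G_S(s) * G_Z(s)) < 1e-12
print("disturbance path canceled")
```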
Remark 2.40
In contrast to precontrol, a disturbance control is (typically) more easily realizable. The reason is that (at least typically) the impact of the disturbance is lagged. The more lagged GD(s) is compared to GS(s), the stronger its low pass behavior and the easier the realization of GZ(s).
Task 2.41
Consider the system transfer function
\[ G_S(s) = \frac{K}{(1 + T_1 s) \cdot (1 + T_2 s)} \cdot \exp(-2s). \]
Compute the ideal and realizable disturbance controls for the cases GD (s) = 1 and
GD (s) = exp(−3s)/ (1 + T3 s).
Solution to Task 2.41: For GD(s) = 1, the ideal disturbance control reads
\[ G_Z(s) = \frac{(1 + T_1 s) \cdot (1 + T_2 s)}{K} \cdot \exp(2s). \]
Ignoring the negative lag time and equalizing the degrees of numerator and denominator reveals the possible realization
\[ G_Z(s) = \frac{(1 + T_1 s) \cdot (1 + T_2 s)}{K \cdot (1 + Ts)^2}. \]
For GD(s) = exp(−3s)/(1 + T3 s), the ideal disturbance control reads
\[ G_Z(s) = \frac{(1 + T_1 s) \cdot (1 + T_2 s)}{K \cdot (1 + T_3 s)} \cdot \exp(-s). \]
Since the lag time is positive, we only need to equalize the degrees of numerator and denominator, which reveals the possible realization
\[ G_Z(s) = \frac{(1 + T_1 s) \cdot (1 + T_2 s)}{K \cdot (1 + T_3 s) \cdot (1 + Ts)} \cdot \exp(-s). \]
Recall that for T → 0 the ideal inverse, and hence perfect disturbance suppression, is obtained. Yet, for smaller T the derivative character of the control increases, and the input magnitude grows proportionally to 1/T. The latter may quickly result in violated input constraints.
To summarize, if compared to prefilter/precontrol a disturbance control shows the following ad-
vantages/disadvantages given in Table 2.2.
Remark 2.42
In the literature, two special cases of the cascade and the precontrol are known. If the cascade
exhibits only two loops, it is also referred to as auxiliary feedback. If a second input u is available,
the concept of precontrol can be applied to the controllable subsystem of the second input which
is referred to as auxiliary feed forward.
CHAPTER 3
In the upcoming chapter, we consider control structures, which extend the standard setting of P, PI,
PID, PIDT and PIDT-Tt considered so far in different directions. First, for practical applications,
the electric/electronic realization of these controllers is (despite their simplicity) too complex
and expensive. To address this issue, we consider bang-bang and double-setpoint control in the
following Section 3.1. The simplification of these controllers lies in their operating range, which
consists of 2 (or 3) operating points only, e.g. on/off switches or gear shifts.
To address more complex control architectures with several inputs and outputs, we already saw
in the previous Chapter 2 how prefilter, precontrol, cascade control and disturbance control can
be applied. The more general setting of multi-input multi-output systems will be addressed in
Section 3.3.
Practitioners use such devices as they are very cheap, more robust, require less maintenance, are
smaller, simpler, consist of less parts and exhibit a higher degree of efficiency. On the downside,
however, bang-bang and double-setpoint controllers induce oscillations, which may lead to res-
onances, noise and degrade comfort. Moreover, due to switching, these elements show higher
wearing and a limited lifespan. Additionally, the control shows a slower response of the system due to the integration of the pulsed input. The most serious disadvantage, however, is the complexity of modeling and evaluation. Here, we will particularly focus on the last issue.
The block diagrams of bang-bang and double-setpoint control are given in Figures 3.1 and 3.2
respectively.
Remark 3.1
Note that the right part of each figure represents the control with hysteresis, i.e. the case when
shifting up/down is not done at the same value. The idea of the latter is to avoid repeated and fast
switching (at the cost of potentially larger oscillations).
[Figure 3.1: (a) Bang-bang control; (b) bang-bang control with hysteresis, switching thresholds −ε, ε.]
[Figure 3.2: Double-setpoint control without and with hysteresis, switching thresholds ε1, ε2 and −ε2, −ε1.]
Remark 3.3
For simplicity reasons, the threshold value is typically set to zero.
Remark 3.4
We like to stress that — in contrast to all control types considered so far — a bang-bang control is
by definition nonlinear. Hence, the principles of superposition, amplification and commutativity
do not hold in general.
The output of a bang-bang control as given in Figure 3.1(left) takes the form of a so called pulse
modulation function. To generate such a signal, several possibilities exist, which include
PWM: The amplitude of the output is fixed. The length of the impulse depends on the amplitude of the input signal.
PFM: The amplitude of the output is fixed. The frequency of the pulses depends on the amplitude of the input signal.
PAM: The amplitude of the output depends on the amplitude of the input. The frequency of the pulses is fixed.
In practice, all three options are applied, yet due to its similarity to electronics, PWM is the most
common one. Here, we focus on PWM only.
The idea of pulse modulation is to generate a switching function to mimic a continuous control like a PID. To this end, the signal to be mimicked is required as an input and is compared to a so called generator function, cf. Figure 3.3 for the general setting.
[Figure 3.3: Block diagram of pulse modulation, the reference w is compared to a generator function within GR to produce the switching output y.]
The easiest way to generate a PWM signal is to apply a triangle function and compare it to a reference signal, cf. Figure 3.4. Note that the reference w(t) in Figures 3.3, 3.4 is the input for our controller. Whenever the triangle function is less than the reference, the lower bound is applied. If it is higher, then the upper bound is used.
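A minimal numerical sketch of triangle-carrier PWM follows; the carrier frequency and output bounds are assumed values, and the sign convention here (output high while the carrier lies below the reference, so the duty cycle reproduces the reference level) is one of the two possible choices:

```python
# Minimal PWM sketch: a triangle carrier is compared to the reference w.
# Convention here: output high while the carrier lies below the reference.
u_min, u_max = 0.0, 1.0
f_carrier = 50.0                          # assumed carrier frequency in Hz

def triangle(t):                          # triangle wave in [0, 1]
    x = (t * f_carrier) % 1.0
    return 2*x if x < 0.5 else 2*(1 - x)

def pwm(w, t):
    return u_max if triangle(t) < w else u_min

w = 0.7                                   # constant reference level
ts = [i / 10000 for i in range(10000)]    # 1 s of samples
duty = sum(pwm(w, t) for t in ts) / len(ts)
print(round(duty, 3))                     # → 0.695, close to the reference 0.7
```

The mean of the switching signal thus approximates the continuous reference, which is exactly the mimicking property exploited by pulse modulation.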
Remark 3.5
Note that other functions like a sawtooth or delta function with respect to limits can be applied.
If a bang-bang control is applied, one has to face the problem that the reference w is typically not
asymptotically stabilized. The only exception is if either (w, umin ) or (w, umax ) is an operating
point of the system. In any other case, applying either umin or umax to the system results in
a deviation. Once the error passes the threshold, the deviation in turn leads to a switch of the
control. The switch of the control induces a change of direction of the development of the error,
which at some point results in another switch of the control.
As a result, the control chatters. To reduce such a behavior, hysteresis is introduced:
[Figure 3.4: References w1(t), w2(t), w3(t) compared to the triangle generator function and the resulting PWM outputs y1, y2, y3 over time t.]
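The switching logic with hysteresis can be sketched as a small stateful element; the thresholds and output levels below are illustrative:

```python
# Sketch of a bang-bang element with hysteresis: the switching decision
# depends on the remembered state, which suppresses chattering near zero.
class HysteresisRelay:
    def __init__(self, eps, u_min=0.0, u_max=1.0):
        self.eps, self.u_min, self.u_max = eps, u_min, u_max
        self.u = u_min                      # remembered switch state

    def __call__(self, e):                  # e = control error w - y
        if e > self.eps:
            self.u = self.u_max
        elif e < -self.eps:
            self.u = self.u_min
        # inside [-eps, eps] the previous output is kept
        return self.u

relay = HysteresisRelay(eps=0.1)
outputs = [relay(e) for e in (0.2, 0.05, -0.05, -0.2, -0.05, 0.05)]
print(outputs)   # [1.0, 1.0, 1.0, 0.0, 0.0, 0.0]
```

Note how the small errors ±0.05 never flip the output: within the hysteresis band the element is not a function of the error alone, it also depends on its history.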
Remark 3.7
Note that bang-bang with hysteresis is not a function. It exhibits a region where the output may
either be ymin or ymax .
Note that even with hysteresis, the closed loop for any system (cf. Figure 3.5) will still oscillate
around the reference.
Here, the following holds:
[Figure 3.5: Closed loop with bang-bang control, thresholds −ε, ε, outputs umin, umax, and system GS.]
In order to still be fairly close to the reference, a fast reaction to even little changes is required.
To this end, the deviation from the reference needs to be amplified to cross the threshold of the
bang-bang controller even for small deviations. Still, we don’t want the control to react too often.
To this end, high frequency changes are filtered out.
Remark 3.9
High frequency changes are often subject to measurement errors or unmodeled elements of the
system. In practice, a low pass can be applied to filter these occurrences and smooth the system
output.
Hence, we end up with a combination of a bang-bang control with a low pass and a control amplifier as illustrated in Figure 3.6.
[Block diagram: reference w, amplifier ≫ 1, bang-bang element with thresholds −ε, ε and outputs umin, umax, system GS, and low pass G1 in the feedback path.]
Figure 3.6.: Closed loop with bang-bang control with low pass and amplifier
Considering the impact of the gain and low pass elements, we want to assess the impact on Theo-
rem 3.8. To this end, we denote the transfer function of the gain by Ggain and of the combination
by Gadapt . Since Ggain is chosen large it will dominate the control and system transfer function,
which allows us to neglect them and obtain
\[ G_{adapt} = \frac{G_{gain}}{1 + G_1 \cdot G_{gain}} = \frac{1}{\underbrace{\frac{1}{G_{gain}}}_{\approx 0} + G_1} \approx \frac{1}{G_1}. \]
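The quality of the approximation G_adapt ≈ 1/G_1 improves with the gain, which a quick numerical experiment at a single (illustrative) frequency value confirms:

```python
# The larger the amplifier gain, the better G_adapt = G_gain/(1 + G1*G_gain)
# approximates 1/G1; evaluated at one illustrative frequency value of G1.
G1 = 0.5 + 0.2j                            # value of the low pass G1(i*omega)

errors = []
for G_gain in (10.0, 100.0, 1000.0):
    G_adapt = G_gain / (1 + G1 * G_gain)
    errors.append(abs(G_adapt - 1 / G1))
print(errors)                              # shrinks roughly like 1/G_gain
```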
Theorem 3.10 (Oscillation for bang-bang control with low pass and amplifier).
Consider a closed loop system as given in Figure 3.6 and suppose a controllable operating point
x ⋆ to exist for system GS and the corresponding control u(·) to satisfy u(·) ∈ [umin , umax ]. Then
the closed loop from Figure 3.6 is stable and the amplitude of the oscillation of the closed loop is
directly proportional to ∆/Ggain whereas the frequency of the oscillation is directly proportional
to (umax − umin ) / (θ · G1 ).
Remark 3.11
A direct conclusion from Theorem 3.10 is that systems with only slow frequencies are ideally
suited for control via bang-bang controllers. For such systems, the output is almost linear and
performance analysis can be done via the mean of the output.
Still, the chattering behavior can only be reduced by these extensions, yet not avoided. A complete
and general avoidance is also not possible, yet in particular cases a resolution can be found. These
cases refer to systems which exhibit an I like behavior close to the reference. The reason why a
resolution is possible is that for I like behavior close to the reference the control input satisfies
u ≡ u⋆ . Hence, we can introduce a third state into the bang-bang controller, which reveals exactly
that value within a so called dead zone of the system, i.e. the neighborhood of the reference. We
define the following:
Remark 3.13
Note that the I like behavior of a system can be forced to exist by adding an integrator between
the double-setpoint controller and the system.
Extending our system setup from Figure 3.6 by a double-setpoint controller and forcing applica-
bility by incorporating an integrator in Figure 3.7, we can actually show the following remarkable
result:
[Block diagram: amplifier ≫ 1, double-setpoint element with thresholds −ε2, −ε1, ε1, ε2 and outputs umin, u⋆, umax, integrator KI, system GS, and low pass G1.]
Figure 3.7.: Closed loop with double-setpoint control with low pass, amplifier and integrator
One can even go one step further and minimize the number of switches that are necessary to reach
the reference value. To this end, a latency can be introduced to decelerate the speed of the control,
cf. Figure 3.8.
For such a structure, we can show an extension of Theorem 3.14 revealing:
[Block diagram: amplifier ≫ 1, double-setpoint element with thresholds −ε2, −ε1, ε1, ε2 and outputs umin, u⋆, umax, latency element (KP, KT), integrator KI, system GS, and low pass G1.]
Figure 3.8.: Closed loop with double-setpoint control with low pass, amplifier, latency and integrator
Theorem 3.15 (Minimal asymptotic stability for double-setpoint control with hysteresis).
Consider a closed loop system as given in Figure 3.8 and suppose a controllable operating point
(x⋆, u⋆) to exist for system GS and the corresponding control u(·) to satisfy u(·) ∈ [umin, umax].
Then the closed loop from Figure 3.8 is asymptotically stable and only a finite number of switches
of the control occur.
Including the latency, however, comes at the price of reduced convergence speed of the closed
loop. Additionally, the dead zone depends on the latency leading to the problem of balancing the
low pass and the latency parameters.
Apart from influencing the switching behavior, one can also adapt the structure to mimic the
behavior of a continuous controller. The following three structures are common:
[Figures 3.9–3.11: Common switching-controller structures: a bang-bang element with feedback (KP, KDT); a bang-bang element with feedbacks (KP, KDT) and (KDT); and a double-setpoint element with integrator KI and feedback (KP, KDT).]
Task 3.17
Evaluate the transfer functions from Figures 3.9, 3.10 and 3.11 to show the approximate
equivalence with respect to the continuous controllers.
Summarizing, bang-bang and double-setpoint controllers show the advantages and disadvantages
given in Tables 3.2 and 3.3.
At this point, we like to come back to our Remark 3.4 stating that bang-bang and therefore also
double-setpoint are nonlinear components. The ideas used in the results shown before follow
one idea only: Additional components are included in the closed loop to simplify, transform and
compensate nonlinearities and map the system to a linear one. To this end, adding the low pass and amplifier in Figure 3.6 is equivalent to reducing the neighborhood where a linearization is applied while at the same time making the linear reaction dominate the nonlinear parts. Adding the integrator and latency in Figure 3.8 basically increases the order of the system by integration, i.e. the control is applied to a derivative of the system, and compensates for the tardiness of the system wrt. the control.
In general, even more complex connections can be drawn as we will see in the following section.
[Block diagram: system mapping input u to output y.]
The map of the system is not necessarily given by a function as we already indicated in Defini-
tion 1.1 of a system. Instead, also (possibly multidimensional) data sets can be used. In that case,
the map is defined as follows:
characteristic map.
For the realization of a map, the function g(·, ·) is typically implemented as an interpolation. A
typical example of a characteristic map can be found in motor control for combustion engines.
For such devices, the engine torque depends on both the engine speed and setting of the throttle
valve (for Otto) or quantity of injected fuel (for Diesel).
Since the required storage rises exponentially with the dimension of the data sets, other represen-
tations of characteristic maps via
polynomials,
splines,
fuzzy logics,
associative storage
are used. In particular, polynomials and splines additionally exhibit the advantage of being differentiable instead of only continuous.
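As an illustration of realizing g(·,·) by interpolation, the following sketch performs bilinear interpolation over a small torque map; all grid values are made up for demonstration, not actual engine data:

```python
# Bilinear interpolation over a small characteristic map; the torque numbers
# below are made up for illustration, not actual engine data.
speeds = [1000.0, 2000.0, 3000.0]       # engine speed grid (rpm)
throttles = [0.0, 0.5, 1.0]             # throttle setting grid
torque = [[40.0,  90.0, 120.0],         # torque[i][j] at (speeds[i], throttles[j])
          [50.0, 110.0, 150.0],
          [45.0, 100.0, 140.0]]

def interp_map(n, th):
    # locate the grid cell containing (n, th)
    i = max(k for k in range(len(speeds) - 1) if speeds[k] <= n)
    j = max(k for k in range(len(throttles) - 1) if throttles[k] <= th)
    a = (n - speeds[i]) / (speeds[i + 1] - speeds[i])
    b = (th - throttles[j]) / (throttles[j + 1] - throttles[j])
    return ((1 - a) * (1 - b) * torque[i][j] + a * (1 - b) * torque[i + 1][j]
            + (1 - a) * b * torque[i][j + 1] + a * b * torque[i + 1][j + 1])

print(interp_map(1500.0, 0.25))         # → 72.5, between the four cell corners
```

The interpolant is continuous but not differentiable across cell boundaries, which is exactly the shortcoming that polynomial or spline representations avoid.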
Remark 3.19
These maps (also called models) are typically identified offline using optimization methods such as regression or MINLP, or online via filter techniques such as the Kalman filter. Modelling and identification are not within the scope of this lecture but are instead treated in „Systemics“.
Such systems can often be separated into subsystems, which are either
linear dynamic, or
nonlinear static.
If such a separation into one of each subsystems is possible, then the structures given in Fig-
ure 3.13 are possible.
[Figure 3.13: Series connections of a static nonlinearity and a linear dynamic subsystem, once with the nonlinearity at the input (Hammerstein structure) and once at the output (Wiener structure).]
Remark 3.20
The bang-bang and double-setpoint controller are special cases of the Hammerstein structure.
Both structures may be identical in the static and dynamic behavior in a neighborhood of an
operating point, yet they exhibit very different large-signal behavior. The reason for the latter is
that for nonlinear systems the principle of commutativity does not apply.
As outlined before, also superposition and amplification cannot be applied, which results in
As a consequence of the above, derivation of results in frequency domain is limited. Instead, the
time domain is considered which we will focus on in Part II of the lecture.
\[ \frac{\partial^2 f}{\partial u_j \partial u_k}(u) \geq \theta \quad\text{or}\quad \frac{\partial^2 f}{\partial y_j \partial y_k}(u) \geq \theta \quad\text{or}\quad \frac{\partial^2 f}{\partial u_j \partial y_k}(u) \geq \theta \tag{3.5} \]
The threshold parameter θ indicates the degree of coupling of the inputs and outputs. In case the coupling is larger than the threshold, we call the coupling strong, otherwise weak. In case of weak coupling, the system can be decoupled into multiple single-input single-output systems
(SISO), for which the coupling can be neglected. In order to be neglectable, the control needs
to be designed to suppress disturbances emanating from other systems using one of the methods
discussed so far. For MIMO systems, we therefore need to focus on strongly coupled systems
only.
Examples for strongly coupled outputs are
roll and yaw angle control for flying curves with an aircraft, or
For such systems, we utilize the concept of a transfer matrix introduced in Definition 2.9. As an
extension is easily possible, we consider the setting of two inputs and two outputs only. Therefore,
the setting is given as shown in Figure 3.14.
3.3. M ULTI - INPUT MULTI - OUTPUT SYSTEMS 65
[Block diagram: system with inputs u1, u2 and outputs y1, y2.]
Figure 3.14.: MIMO system with two inputs and two outputs
Zooming into this setting and recalling that the inputs or outputs are strongly coupled, there are
two different structures resembling feed forward and feed back connectivity.
[Block diagram: P canonical structure with parallel paths P11, P12, P21, P22 from inputs to outputs, and V canonical structure with direct paths V11, V22 and internal feedback paths V12, V21.]
Figure 3.15.: Canonical structures of MIMO systems with two inputs and two outputs
Both structures are often found in practice showing the properties given in Table 3.4.
As the P canonical structure is more easily treatable via control methods, one typically transforms
V canonical systems to P canonical structure. To this end, we have
\[ \begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = \begin{bmatrix} V_{11} & 0 \\ 0 & V_{22} \end{bmatrix} \left( \begin{pmatrix} u_1 \\ u_2 \end{pmatrix} + \begin{bmatrix} 0 & V_{12} \\ V_{21} & 0 \end{bmatrix} \begin{pmatrix} y_1 \\ y_2 \end{pmatrix} \right) = \begin{bmatrix} V_{11} & 0 \\ 0 & V_{22} \end{bmatrix} \begin{pmatrix} u_1 \\ u_2 \end{pmatrix} + \begin{bmatrix} 0 & V_{11} V_{12} \\ V_{22} V_{21} & 0 \end{bmatrix} \begin{pmatrix} y_1 \\ y_2 \end{pmatrix}, \]
which can be rearranged to
\[ \begin{bmatrix} 1 & -V_{11} V_{12} \\ -V_{22} V_{21} & 1 \end{bmatrix} \begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = \begin{bmatrix} V_{11} & 0 \\ 0 & V_{22} \end{bmatrix} \begin{pmatrix} u_1 \\ u_2 \end{pmatrix} \iff \begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = \begin{bmatrix} 1 & -V_{11} V_{12} \\ -V_{22} V_{21} & 1 \end{bmatrix}^{-1} \begin{bmatrix} V_{11} & 0 \\ 0 & V_{22} \end{bmatrix} \begin{pmatrix} u_1 \\ u_2 \end{pmatrix}. \]
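Evaluated at a fixed s, all transfer functions become scalars, so the conversion can be checked numerically with a hand-coded 2×2 inverse; the values below are illustrative stand-ins for V_ij(s):

```python
# Numeric check of the V-to-P conversion at one fixed s: the scalars below
# stand in for the transfer function values V_ij(s) (illustrative numbers).
V11, V22, V12, V21 = 2.0, 3.0, 0.1, 0.2

# invert M = [[1, -V11*V12], [-V22*V21, 1]] by hand (2x2 inverse)
b, c = -V11 * V12, -V22 * V21
det = 1.0 - b * c
inv = [[1.0 / det, -b / det],
       [-c / det, 1.0 / det]]

# P canonical transfer matrix: y = P * u
P = [[inv[0][0] * V11, inv[0][1] * V22],
     [inv[1][0] * V11, inv[1][1] * V22]]

# consistency with the V canonical loop equations y1 = V11*(u1 + V12*y2) etc.
u1, u2 = 1.0, -0.5
y1 = P[0][0] * u1 + P[0][1] * u2
y2 = P[1][0] * u1 + P[1][1] * u2
assert abs(y1 - V11 * (u1 + V12 * y2)) < 1e-9
assert abs(y2 - V22 * (u2 + V21 * y1)) < 1e-9
print("P canonical form reproduces the V canonical loop")
```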
To treat such MIMO systems, one can distinguish between three concepts.
Decentralized control: For each input/output pair we design exactly one control. For each
pair, the input of the other pair is considered to be a disturbance.
Decoupling control: For each input/output pair we design one main control and for all
couplings one decoupling control. The task of the latter is to reduce or eliminate the input
of other pairs such that the pairs can be treated separately.
Multivariable control: The control exhibits as many inputs and outputs as the system does.
Note that decentralized control can only be applied for weakly coupled systems. The reason is the false assumption that the coupling is a disturbance, i.e. an independent input. Since the coupling is driven by the variables of the control loop, it is, however, not independent. As a consequence, stability issues (e.g. by shifting poles) may arise.
In the following, we focus on decoupling control and consider multivariable control in the time
domain setting.
[Figure 3.16: Decoupling control of a MIMO system in P canonical structure, main controllers R11, R22, decoupling controllers R12, R21, system blocks S11, S12, S21, S22, references w1, w2, inputs u1, u2 and outputs y1, y2.]
The idea of decoupling control is a special case of disturbance rejection, i.e. we eliminate or at
least reduce the impact of the systems on one another, which allows us to apply standard methods
for the decoupled circuits.
Within Figure 3.16, there are four controllers which need to be designed. While designing, the
intention is that
We now focus on eliminating the impact of the second system on the first, cf. Figure 3.17.
[Figure 3.17: Decoupling structure with the two signal paths from the second loop to the first output highlighted, one through the coupling block S12 and one through the decoupling controller R12 and S11.]
In order to cancel one another, the blue and red paths in Figure 3.17 need to be identical. Hence, we directly obtain
\[ R_{12} = R_{22} \cdot \frac{S_{12}}{S_{11}} \tag{3.7} \]
\[ R_{21} = R_{11} \cdot \frac{S_{21}}{S_{22}} \tag{3.8} \]
Remark 3.27
Regarding realization, the same considerations as for disturbance control apply. If a perfect decoupling is not possible, then its impact should at least be reduced. In general, a decoupling is more easily achieved if the coupling is slow. Since we have
\[ \text{decoupling control} = \text{main control} \cdot \frac{\text{coupling system}}{\text{main system}} \]
and the main control must satisfy that the degree of its numerator equals the degree of its denominator, the decoupling control is realizable if and only if the number of poles of the coupling system is larger than the number of poles of the main system. Additionally, the known limitations for realizability apply, cf. Section 2.4.
As an alternative to a pathwise comparison as in Theorem 3.26, we can use the transfer matrix
for decoupling. From Figure 3.16 we obtain
\[ \begin{pmatrix} u_1 \\ u_2 \end{pmatrix} = \begin{bmatrix} R_{11} & -R_{12} \\ -R_{21} & R_{22} \end{bmatrix} \begin{pmatrix} w_1 - y_1 \\ w_2 - y_2 \end{pmatrix}, \qquad \begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = \begin{bmatrix} S_{11} & S_{12} \\ S_{21} & S_{22} \end{bmatrix} \begin{pmatrix} u_1 \\ u_2 \end{pmatrix}, \]
which gives us
\[ \begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = \begin{bmatrix} R_{11} S_{11} - R_{21} S_{12} & -R_{12} S_{11} + R_{22} S_{12} \\ R_{11} S_{21} - R_{21} S_{22} & -R_{12} S_{21} + R_{22} S_{22} \end{bmatrix} \begin{pmatrix} w_1 - y_1 \\ w_2 - y_2 \end{pmatrix}. \tag{3.9} \]
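That conditions (3.7) and (3.8) make the off-diagonal entries of the matrix in (3.9) vanish can be checked with scalar stand-in values for one fixed s:

```python
# With (3.7) and (3.8), the off-diagonal entries of the closed loop matrix
# in (3.9) vanish; checked with scalar stand-in values for one fixed s.
S11, S12, S21, S22 = 1.5, 0.4, 0.3, 2.0
R11, R22 = 5.0, 7.0

R12 = R22 * S12 / S11                       # condition (3.7)
R21 = R11 * S21 / S22                       # condition (3.8)

assert abs(R11 * S21 - R21 * S22) < 1e-12   # y2 independent of w1 - y1
assert abs(-R12 * S11 + R22 * S12) < 1e-12  # y1 independent of w2 - y2
print("cross couplings eliminated")
```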
Remark 3.29
Note that both approaches reveal identical conditions.
It is of particular importance that even in the case of ideal decoupling, each system depends
on both the main and the decoupling control. As a consequence, if we want to adapt the main
controller in a later stage of the development, the decoupling controller needs to be adapted as
well. One way to circumvent this problem is to consider a slight modification of the decoupling
circuitry, cf. Figure 3.18.
[Block diagram: adaptable decoupling structure, the decoupling controllers R12, R21 are driven by the inputs of the main systems rather than by the main controller outputs.]
Figure 3.18.: Adaptable decoupling structure of MIMO system with P canonical structure
For this structure, the decoupling conditions simplify to
\[ R_{12} = \frac{S_{12}}{S_{11}}, \qquad R_{21} = \frac{S_{21}}{S_{22}}, \]
i.e. the decoupling control is only subject to the input of the main system and therefore indepen-
dent from the main controller.
Remark 3.30
Alternatively, the decoupling control can be designed as a V canonical structure. The advantage
of that approach is that for the design of the main control no aspect of the respective other system
needs to be considered.
Yet, as we can see from the involved design process of main and decoupling controller, this
approach does not allow for scalability to high dimensional MIMO systems. To consider the
latter, we will shift our solution approach to the time domain.
Part II.
Time Domain
CHAPTER 4
In Chapter 1, we introduced the concept of a generic system and thereafter discussed the property
of stability and how to design feed forward and feedback control laws to enforce this property of a
system. In frequency domain, we observed that for more complex structures of the system and/or
of the control, the design of the latter becomes more and more involved. Additionally, we saw
that asymptotic stability was in most cases out of scope for the design methods. Now, we shift
our view to the time domain and in particular on nonlinear systems. Again, our interest lies in
showing (asymptotic) stability of a system, and respective ideas of designing controls to guarantee
this property. Starting point in Section 4.1 will be the notion of asymptotic stability, where we
will directly jump to nonlinear systems. As we will see, if we move from linear to nonlinear
systems, it is not entirely clear whether or not a continuous stabilizing control exists. To this end,
we discuss Brockett’s condition [4] and Artstein’s counterexample [3]. In Section 4.2, we then
discuss alternative concepts for equivalent definitions of stability and controllability. Here, we
follow the approach of Khalil [9] and Isidori [8]. These concepts allow us to foster properties of
the system dynamics to derive stabilizing controls. In Section 4.3, we will utilize one of these
properties to derive the so called Sontag’s formula for computing an asymptotically stabilizing
feedback based on a Control-Lyapunov function [18]. Moreover, we will introduce the concept
of backstepping to construct such a Control-Lyapunov function [8].
76
we require that there exists a control u such that bounded deviations from an operating point
result in bounded behavior, that is
and that any bounded deviation will be controlled to the operating point, i.e.
In the linear case of „Control Engineering 1“, we saw that a linear control u can always be constructed in both feed forward and feedback form, i.e. for each u : T → U we can construct u : X → U and vice versa. For the nonlinear case, this does not hold true. In particular, only the following quite intuitive connection between existence of feedback and feed forward control remains:
Lemma 4.1
Consider a system (4.1) and let (x⋆ , u⋆ ) be an operating point. If a feedback u : X → U exists
such that the closed loop is asymptotically stable and additionally both the feedback and the
closed loop are Lipschitz, then there exists a feed forward u : T → U such that the system is
asymptotically controllable.
Remark 4.2
Lipschitz continuity is a crucial element within Lemma 4.1 as it allows us to invert the system and
derive the controllability property.
So the first question to be answered is how to construct a Lipschitz continuous feedback in time
domain. Here, the following core result holds true:
contains a neighborhood of x⋆ .
1. The operating point x⋆ is (locally) exponentially stable if and only if the real parts of all
Eigenvalues λi ∈ C of D f (x⋆ ) are negative.
2. The operating point x⋆ is exponentially unstable if and only if there exists one Eigenvalue
λi ∈ C of D f (x⋆ ) with positive real part.
3. The operating point x⋆ is exponentially antistable if and only if the real parts of all Eigen-
values λi ∈ C of D f (x⋆ ) are positive.
Theorem 4.4 gives us a remarkable insight: in a neighborhood of an operating point the first
moment of the dynamics dominates all higher moments. Based on Theorem 4.4 and the defini-
tion of equilibria / working points together with Taylor’s approximation theorem, we obtain the
following:
with
A = \frac{\partial}{\partial x} f(x^\star, u^\star), \qquad B = \frac{\partial}{\partial u} f(x^\star, u^\star),

C = \frac{\partial}{\partial x} h(x^\star, u^\star), \qquad D = \frac{\partial}{\partial u} h(x^\star, u^\star)
is also the solution of the nonlinear system (4.1). The system (4.5) is called linearization around
the operating point (x⋆ , u⋆ ).
Remark 4.6
As a consequence of Theorem 4.5 we can transfer results from linear systems to nonlinear systems
at least in a neighborhood of an operating point.
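To illustrate Theorems 4.4 and 4.5 numerically, the following sketch approximates the Jacobians of a nonlinear system by central differences and checks the eigenvalue condition. It assumes the pendulum dynamics ẋ1 = x2, ẋ2 = −x2 + sin(x1) + u used later in Task 4.32; the function names and tolerances are illustrative choices, not part of the lecture.

```python
import numpy as np

def f(x, u):
    """Controlled pendulum dynamics (assumed form of Task 4.32)."""
    return np.array([x[1], -x[1] + np.sin(x[0]) + u])

def jacobians(f, x_star, u_star, eps=1e-6):
    """Finite-difference approximation of A = df/dx and B = df/du."""
    n = len(x_star)
    A = np.zeros((n, n))
    for i in range(n):
        dx = np.zeros(n); dx[i] = eps
        A[:, i] = (f(x_star + dx, u_star) - f(x_star - dx, u_star)) / (2 * eps)
    B = (f(x_star, u_star + eps) - f(x_star, u_star - eps)) / (2 * eps)
    return A, B.reshape(n, 1)

A, B = jacobians(f, np.array([0.0, 0.0]), 0.0)
# Theorem 4.4: inspect the real parts of the eigenvalues of A = Df(x*)
eigs = np.linalg.eigvals(A)
print(np.round(A, 6))        # approximately [[0, 1], [1, -1]]
print(np.all(eigs.real < 0)) # False: one eigenvalue has positive real part
```

Since one eigenvalue of the linearization has positive real part, the operating point of this example is exponentially unstable in the sense of Theorem 4.4, case 2.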
and compute its linearization. Design a feedback such that the Eigenvalues of the closed loop
are −1.
Now, we can compute the entries of F such that all Eigenvalues are equal to −1 revealing
\begin{pmatrix} F_1 & F_2 & F_3 & F_4 \end{pmatrix} = \begin{pmatrix} -\frac{g^2 - 4k}{g} & \frac{g + k^2}{g} - 6 & -g - \frac{g k^2 - 4g - 4 + k}{g} & \frac{k}{g^2} + \frac{4}{g} \end{pmatrix}.
Unfortunately, the converse of Lemma 4.1 does not hold true. To illustrate the latter, consider the
following:
where the angle of heading is given by x1 and the position by ( x2 · cos( x1 ) + x3 · sin( x1 ), x2 ·
sin( x1 ) − x3 · cos( x1 )). Show that Theorem 4.3 does not apply.
Solution to Task 4.8: From the dynamic, we directly obtain that no point (0, r, ε) with ε ̸= 0
and r ∈ R is in the image of f .
The displayed example is also called Brockett’s nonholonomic integrator, and systems showing this property are called nonholonomic. As a consequence of not being feedback stabilizable, we can also not apply the linearization approach to the nonholonomic integrator.
Task 4.9
Consider the dynamics from Task 4.8. Show that the linearization is not controllable.
Solution to Task 4.9: Computing the linearization using Theorem 4.5 we obtain
A = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & u_1 & 0 \end{pmatrix} \quad \text{and} \quad B = \begin{pmatrix} 1 & 0 \\ 0 & 1 \\ x_2 & 0 \end{pmatrix}.

Evaluated at the operating point (x⋆, u⋆) = (0, 0), we have A = 0 and the last row of B vanishes, hence

\dot{x}_3(t) = 0.
Hence, the solution cannot converge to x⋆ = 0 and similarly no stabilizing feedback exists
for the linearization.
Technically speaking, Task 4.8 refers to parallel parking using a steered vehicle. Since a parallel
movement is not possible, such a feedback cannot exist. Yet we know that there exists a solution to transport a vehicle into a parallel parking spot using a non-parallel movement. Such a
trajectory, however, cannot be described using a Lipschitz continuous feedback.
4.1. Necessary Conditions for Controllability
Solution to Task 4.8: Consider the operating point (0, 0) or shift the operating point respectively. If we apply

u_1(t) = \begin{cases} 0, & t \in [0, 1] \\ -\operatorname{sgn}(x_3) \cdot \sqrt{|x_3|}, & t \in [1, 2] \\ 0, & t \in [2, 3] \\ -x_1 - \operatorname{sgn}(x_3) \cdot \sqrt{|x_3|}, & t \in [3, 4] \\ 0, & t > 4 \end{cases}

and

u_2(t) = \begin{cases} \operatorname{sgn}(x_3) \cdot \sqrt{|x_3|}, & t \in [0, 1] \\ 0, & t \in [1, 2] \\ -\operatorname{sgn}(x_3) \cdot \sqrt{|x_3|}, & t \in [2, 3] \\ 0, & t > 3 \end{cases}

then the system will reach (0, 0) at t = 4 and remain there. Hence, the system is asymptotically controllable.
Corollary 4.10
Consider a system (4.1) and let (x⋆ , u⋆ ) be an operating point. If for the set
there exists no open ball Br (x⋆ ) with radius r > 0 around x⋆ such that Br (x⋆ ) ⊂ f (X ), then no
Lipschitz continuous feedback exists.
In the nonlinear setting, we therefore know that Brockett’s condition is at least necessary. Unfortunately, the following example will show that it is not sufficient. The idea of the example is to design a system such that Brockett’s condition holds, yet no Lipschitz continuous feedback exists.
f(x, u) = \begin{pmatrix} -x_1^2 + x_2^2 \\ -2 x_1 x_2 \end{pmatrix} \cdot u
Remark 4.12
Since the proof of nonexistence of a Lipschitz continuous feedback is rather involved, we refer
to [3]. The idea of Artstein is to utilize that the dynamic forms a circle and dependency on
the state results in a contradiction to the uniqueness of an operating point. In particular, the
convergence is getting so slow, that the operating point is never reached, not even for t → ∞.
Utilizing the examples from Brockett and Artstein, we can only state the following:
Corollary 4.13
Consider a system (4.1) and let (x⋆ , u⋆ ) be an operating point. If the system is asymptotically
controllable, existence of a Lipschitz continuous feedback is not guaranteed.
4.2. Equivalent Concepts of Controllability
From the discussion so far we obtain that the ε-δ definition of controllability does not provide us
with enough insight to design a control for a given nonlinear system. In the following section,
we will introduce equivalent notions of asymptotic stability and controllability, which allow for
further interpretation of the system behavior.
Remark 4.14
We note already at this point that identical considerations hold true for observability and detectability. The latter are, however, related to gathering information about the state of the system, not to controlling the system, and are therefore outside the scope of this lecture.
The first alternative concept for stability/controllability utilizes so called comparison functions,
cf. Figure 4.2 for an illustration.
A function β : R≥0 × R≥0 → R≥0 is of class KL if for every fixed t ≥ 0 the function
β(·, t) is of class K and for each fixed s > 0 the function β(s, ·) is of class L.
These functions allow us to geometrically enclose solutions emanating from a given initial value by inducing a worst-case bound. This directly leads to the following result, cf. [9]:
Figure 4.2.: (a) Sketch of a class K function, (b) sketch of a class L function
Remark 4.17
The result from Theorem 4.16 can be generalized to arbitrary operating points by subtracting x⋆
within the norm operator on both sides of equation (4.7) or (4.8).
Task 4.18
Draw solutions of systems to visualize Theorem 4.16.
(a) Sketch of a robustly asymptotically stable system (b) Sketch of a weakly asymptotically stable system
The idea of the comparison function concept is to establish a bound on the system behavior. Once we can verify or compute what such a bound looks like, we know that even in the worst case the system behavior will be better, i.e. the trajectories will be closer to the operating point than the comparison bound. Secondly, we know that if any control function satisfying this bound is used, then the solution will converge and the system will be asymptotically stabilized by the control.
Remark 4.19
Note that Theorem 4.16 makes no assumption as to whether the control is feed forward or feed-
back, nor whether the control is continuous or discontinuous.
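A comparison-function bound can be checked numerically. The sketch below simulates the scalar system ẋ = −x − x³ (an illustrative example, not taken from the lecture) and verifies the candidate KL bound β(s, t) = s·e^{−t} along sampled trajectories:

```python
import numpy as np

def simulate(x0, dt=1e-3, T=5.0):
    """Explicit Euler simulation of the scalar system x' = -x - x^3."""
    xs, x = [x0], x0
    for _ in range(int(T / dt)):
        x = x + dt * (-x - x**3)
        xs.append(x)
    return np.array(xs)

beta = lambda s, t: s * np.exp(-t)   # candidate bound: class K in s, class L in t
t = np.linspace(0.0, 5.0, 5001)
for x0 in [0.5, 1.0, 2.0]:
    xs = simulate(x0)
    # worst-case envelope: |x(t)| <= beta(|x0|, t) for all sampled times
    assert np.all(np.abs(xs) <= beta(abs(x0), t) + 1e-9)
print("bound verified")
```

The cubic term only accelerates the decay, so the linear envelope remains valid for every tested initial value.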
The second concept utilizes Lyapunov functions, which can be interpreted as an energy function of the system state. The main difference lies in considering a minimizing control in the neighborhood of the considered state.
functions α1 , α2 , α3 ∈ K∞ such that there exists a control function u satisfying the inequalities
The idea of the Lyapunov function is comparable to a salad bowl: If we put a ball into the
bowl, it will run downhill and remain at the lowest point. Using this metaphor, the lowest point
regarding the Lyapunov function marks the desired equilibrium. The Control-Lyapunov function itself can be seen as the energy of a system. Hence, the first inequalities (4.9) are bounds on the
behavior of the system. In contrast to comparison functions, however, both lower and upper
bounds are required. The reason for this necessity emanates from the second inequality (4.10).
This inequality basically says that energy is drawn from the system continuously. Yet, even if
energy is continuously drawn from the system, it may come to a rest far away from the operating
point. To avoid such cases, the bounds on V are required.
In our last step, we apply this energy concept to obtain stability by energy draining arguments:
\sup_{u \in U} \frac{\partial}{\partial x} V(x) \cdot f(x, u) \leq -\alpha_3(x)
Task 4.22
Draw solutions of systems to visualize Theorem 4.21.
Remark 4.23
Again, the strong or robust concept means that no matter which control we consider, energy is
drawn from the system. The weak concept requires additional work to design a control such that
the stability property is induced.
In contrast to Lipschitz continuous feedback stabilizability, cf. Lemma 4.1 and Corollaries 4.10,
4.13, using the notion of Control-Lyapunov functions an inversion of the statement is possible.
Remark 4.26
Considering Artstein’s circles, cf. Task 4.11, one can show that

V(x) = \sqrt{4 x_1^2 + 3 x_2^2} - |x_1|
is a Control-Lyapunov function, for which the conditions (4.9), (4.10) hold true for either u ≡ −1
or u ≡ 1.
Based on the latter remark, we already see that the concept of Control-Lyapunov function allows
us to conclude asymptotic stability directly once such a function is known. Hence, the concept
of a Control-Lyapunov function allows for a broader class of feedbacks to be computed as com-
pared to linearization (Theorem 4.5) or characteristic maps (Section 3.2). The map in Figure 4.4
characterizes the connection between the last results:
From Figure 4.4, we directly obtain the weak links in the nonlinear setting: From Brockett and
Artstein, it is clear that the arrow from controllability to existence of a Lipschitz feedback is
not present. Additionally, from a differentiable Control-Lyapunov function, the existence of a
Lipschitz feedback cannot be guaranteed.
Figure 4.4.: Connections between asymptotic controllability and asymptotic stability via a Lipschitz feedback (Lemma 4.1, Corollary 4.13)
In the following section, we specialize our setting and aim to close the gaps illustrated in Fig-
ure 4.4 to derive a concept to compute a nonlinear feedback for a nonlinear system.
\dot{x}(t) = f(x(t), u(t)) = f_0(x(t)) + \sum_{j=1}^{n_u} f_j(x(t)) \cdot u_j(t) \qquad (4.11)
with locally Lipschitz continuous functions f j : Rnx → Rnx for all j = 1, . . . , nu , then we call
the system control-affine.
Note that the controls u j : T → R are scalar and the sum ranges across the dimension of the
control u ∈ Rnu .
Moreover, we need to restrict ourselves to feedbacks, which are bounded for every bounded input.
The easiest way to obtain such a restriction is to impose the following assumption:
Assumption 4.28
Consider a feedback u : X → U . Then we assume u to be Lipschitz continuous and satisfy
u(0) = 0.
Remark 4.29
Note that the condition u(0) = 0 can easily be satisfied by transforming the dynamic to f̃(x, u) = f(x, u + u(0)) and setting ũ(x) := u(x) − u(0). Hence, we will always have ũ(0) = 0.
Utilizing the latter assumption, we can show boundedness of the feedback for bounded input:
Lemma 4.30
Consider a feedback u : X → U satisfying Assumption 4.28. Then we have
Now, we can use the latter to invert Theorem 4.24 and define an asymptotically stabilizing feed-
back. The result itself dates back to Artstein [3], yet here we use the explicit formula for the
feedback derived by Sontag [18].
with

\Phi(a, b) = \begin{cases} \dfrac{a + \sqrt{a^2 + b^2}}{b}, & b \neq 0 \\ 0, & b = 0 \end{cases}
is Lipschitz continuous in the range u(x) ≤ γ(x) and the closed loop is globally asymptotically
stable.
Task 4.32
Consider the controllable inverted pendulum
where the control represents the force of a motor acting on the angular velocity. Show that
V(x) = \frac{1}{2} \left( (x_1 + x_2)^2 + x_1^2 \right)
\frac{\partial}{\partial x} V(x) \cdot f(x, u) = (2 x_1 + x_2) \cdot x_2 + (x_1 + x_2) \cdot (-x_2 + \sin(x_1) + u).
As we require the descent only to hold in the infimum, we can choose a control wisely to
cancel out inner parts of the latter expression. Here, we use u = − x1 − x2 − sin( x1 ) and
see
\begin{align*}
\frac{\partial}{\partial x} V(x) \cdot f(x, u) &= (2 x_1 + x_2) \cdot x_2 + (x_1 + x_2) \cdot (-x_2 + \sin(x_1) + u) \\
&= (2 x_1 + x_2) \cdot x_2 + (x_1 + x_2) \cdot (-x_1 - 2 x_2) \\
&= 2 x_1 x_2 + x_2^2 - x_1^2 - 2 x_2^2 - 3 x_1 x_2 \\
&= -x_1^2 - x_2^2 - x_1 x_2 \\
&\leq -x_1^2 - x_2^2 + \frac{1}{2} x_1^2 + \frac{1}{2} x_2^2 = -\frac{1}{2} \left( x_1^2 + x_2^2 \right) = -\frac{1}{2} \|x\|^2 < 0,
\end{align*}

where we used -x_1 x_2 \leq x_1^2/2 + x_2^2/2.
Hence, we obtain
\frac{\partial}{\partial x} V(x) \cdot f_0(x) = (2 x_1 + x_2) \cdot x_2 + (x_1 + x_2) \cdot (-x_2 + \sin(x_1)) = x_1 x_2 + (x_1 + x_2) \cdot \sin(x_1)

and

\frac{\partial}{\partial x} V(x) \cdot f_1(x) = x_1 + x_2.

Using Sontag’s formula, we therefore obtain

u(x) = -(x_1 + x_2) \cdot \frac{x_1 x_2 + (x_1 + x_2) \sin(x_1) + \sqrt{\left( x_1 x_2 + (x_1 + x_2) \sin(x_1) \right)^2 + (x_1 + x_2)^4}}{(x_1 + x_2)^2}.
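The derived feedback can be implemented and simulated directly. The sketch below assumes the pendulum dynamics ẋ1 = x2, ẋ2 = −x2 + sin(x1) + u from Task 4.32 and uses a simple explicit Euler integration; step size, horizon and the initial value are illustrative choices.

```python
import numpy as np

def sontag_feedback(x):
    """Sontag's formula for the pendulum with V(x) = ((x1 + x2)^2 + x1^2)/2."""
    x1, x2 = x
    a = x1 * x2 + (x1 + x2) * np.sin(x1)   # dV/dx . f0(x)
    b = x1 + x2                             # dV/dx . f1(x)
    if b == 0.0:
        return 0.0
    return -b * (a + np.sqrt(a**2 + b**4)) / b**2

# closed-loop simulation with explicit Euler over 20 time units
x, dt = np.array([1.0, 0.5]), 1e-3
for _ in range(20000):
    u = sontag_feedback(x)
    x = x + dt * np.array([x[1], -x[1] + np.sin(x[0]) + u])
print(np.linalg.norm(x))  # the closed loop drives the state toward the origin
```

Along the closed loop we have dV/dt = −√(a² + b⁴) < 0 away from the origin, which is what the simulation reflects.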
As we have seen, while evaluating Sontag’s formula is straightforward, the derivation of a Control-Lyapunov function is a rather involved task. In this context, backstepping is a systematic approach to construct and compute such a function. The idea of backstepping is to iteratively
construct the Control-Lyapunov function. To this end, the dynamic is split into parts forming a
cascade. The split is made in such a way that the outer parts of the cascade are simple and exhibit
a known Control-Lyapunov function for the outer part. The cascade is then formed using the
following core result:
with state x̂ = (x, û) and control u there exists a continuously differentiable feedback such that
the closed loop is asymptotically stable and corresponding Control-Lyapunov function
V(\hat{x}) = V(x, \hat{u}) := V_f(x) + \frac{1}{2} \left( \hat{u} - u_f(x) \right)^2. \qquad (4.16)
Note that different from the idea of a cascade control, the structure in the backstepping approach is already fixed by the model. As a consequence, the derivation of the controllers per loop is not done from inside to outside, but instead exploits the separability into simple subsystems from outside to inside. Hence, for an application, we first need to identify or fix an outer subsystem f such that the overall system exhibits the structure given in equations (4.14), (4.15).
Remark 4.34
We highlight that the aim of the added Lyapunov part (û − u_f(x))²/2 in (4.16) is to enforce
a reference for the outer control û such that it tracks the desired function u f (x). Hence, the
input u(x) shall be a reference tracking feedback. Such a feedback can be derived using Sontag’s
formula applied to the Control-Lyapunov function obtained via backstepping.
To better understand the procedure of backstepping, we consider the example from Task 4.32
again.
Task 4.35
Again, consider the controllable inverted pendulum
where the control represents the force of a motor acting on the angular velocity. Derive a
Control-Lyapunov function via backstepping.
Solution to Task 4.35: Using the notation from Theorem 4.33, we set x = x1 and û = x2.
Then we obtain
f (x, û) = û
Now, we consider the first equation only. Since the system is linear, we can directly use the
Control-Lyapunov function Vf (x) = x2 /2. Hence, we obtain the feedback u f (x) := −x for
the outer system. Note that due to the overall structure of the system, we have û ̸= u f (x).
Applying the backstepping procedure, we use the latter to compute
\begin{align*}
V(\hat{x}) = V(x, \hat{u}) &= V_f(x) + \frac{1}{2} \left( \hat{u} - u_f(x) \right)^2 \\
&= \frac{1}{2} x^2 + \frac{1}{2} |\hat{u} + x|^2 \\
&= \frac{1}{2} \left( x^2 + (\hat{u} + x)^2 \right) \\
&= \frac{1}{2} \left( x_1^2 + (x_2 + x_1)^2 \right).
\end{align*}
Remark 4.36
Note that in the linear one dimensional case, we have that V (x) = x2 /2 is always a Control-
Lyapunov function.
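The composition (4.16) can also be verified symbolically. The following sketch reproduces the Control-Lyapunov function of Task 4.35 with SymPy (an illustrative tool choice; the lecture itself works by hand):

```python
import sympy as sp

x1, x2 = sp.symbols("x1 x2", real=True)

# Outer subsystem x' = u_hat with CLF V_f(x) = x^2/2 and feedback u_f(x) = -x
Vf = x1**2 / 2
uf = -x1

# Backstepping composition (4.16): V(x, u_hat) = V_f(x) + (u_hat - u_f(x))^2 / 2
V = Vf + (x2 - uf)**2 / 2
print(sp.expand(V))  # equals (x1^2 + (x1 + x2)^2)/2 from Task 4.35
```

The expanded expression coincides term by term with the function derived above.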
Using backstepping and Sontag’s formula for the special case of control affine systems, we can
complement our schematic from Figure 4.4. In Figure 4.6, the special case is highlighted in red.
Figure 4.6.: Connections between asymptotic controllability, stability via a Lipschitz feedback, and computation of a Lipschitz feedback (Lemma 4.1, Corollary 4.13, Theorems 4.21, 4.24, 4.25, 4.31, 4.33)
At this point we like to note that stability is not necessarily restricted to equilibria, but can also
apply to periodic orbits or areas. An illustration is given in Figure 4.7 indicating stability of the
system yet not of a point.
To summarize, we considered one approach to treat nonlinear systems. We again like to stress
that currently no generic concept to treat nonlinear systems is known. Modern approaches to
tackle this area utilize the concept of key performance indicators, exploit the dynamics and in-
clude constraints on the system. All of the latter will be part of the subsequent lecture Control
Engineering 3.
CHAPTER 5

In the previous Chapter 4 we observed that for an asymptotically controllable system Brockett’s
condition is a necessary yet not sufficient criterion for existence of a Lipschitz continuous feed-
back (see Corollary 4.10). As the system is asymptotically controllable, a possibly discontinuous
feedback must exist.
Throughout this chapter, we again consider nonlinear control systems of the form
Remark 5.1
There are two more general cases: For one, the sampling times may be defined by a function of time, or secondly, the sampling times can be defined by a function of the states. The first one is common in prediction and prescription of systems where actions in the far future are significantly less important. Hence, one typically trades off exactness of the prediction against computational complexity. The latter case is referred to as event driven control.
We still like to stress that in applications, the choice of T is not fixed right from the beginning,
but depends on the obtainable solution and stability properties. Hence, we continue to formulate
the following definitions of zero order hold control and solution as a parametrization of operators
with respect to T.
As a consequence of the latter definition, the control u_T is not continuous but instead exhibits jumps at the sampling times t_k. Still, the function is integrable, which is a requirement for existence of a solution of (5.1) for such an input. This insight directly leads to the following:
In order to compute such a solution, we can simply concatenate solutions of subsequent sampling
intervals [tk , tk+1 ). Here, we can use the endpoint of the solution on one sampling interval to be
the initial point on the following one. Hence, the computation of x T is well defined.
Remark 5.4
Since the system is Lipschitz continuous on each interval [tk , tk+1 ), the solution is also unique.
Hence, identifying endpoint and initial point of subsequent sampling intervals is sufficient to
show that the zero order hold solution is unique. Yet, as a consequence of this concatenation, the
solution is not differentiable at the sampling points tk .
5.1. Zero Order Hold Control
Remark 5.5
Note that despite u_T being piecewise constant, the zero order hold solution does not exhibit jumps
and shows nonlinear behavior.
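The zero order hold mechanism can be sketched as follows, assuming a simple linear test system ẋ = −x + u with feedback u(x) = −x (an illustrative choice): the control is refreshed only at the sampling instants t_k = k·T and held constant in between.

```python
import numpy as np

def zoh_solution(f, u_feedback, x0, T, t_end, dt=1e-3):
    """Zero order hold: u is evaluated only at sampling times t_k = k*T and
    held constant on [t_k, t_{k+1}); the state is integrated with Euler."""
    x, u = np.array(x0, float), 0.0
    t, ts, xs = 0.0, [0.0], [np.array(x0, float)]
    while t < t_end:
        if abs(t / T - round(t / T)) < dt / 2:   # sampling instant reached
            u = u_feedback(x)
        x = x + dt * f(x, u)
        t += dt
        ts.append(t); xs.append(x.copy())
    return np.array(ts), np.array(xs)

# Linear system x' = -x + u with feedback u(x) = -x, sampled with T = 0.5
ts, xs = zoh_solution(lambda x, u: -x + u, lambda x: -float(x[0]),
                      np.array([1.0]), T=0.5, t_end=5.0)
print(abs(xs[-1, 0]) < 0.1)  # the sampled closed loop still decays
```

Between two sampling instants the input is frozen, yet the resulting state trajectory is continuous, exactly as stated in Remark 5.5.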
Similar to the nonlinear case, we next introduce the concept of stability. Note that we did not
fix the sampling period T, hence stability needs to be parametrized using this characterizing
parameter of the control. Here, we use the same simplification to shift the operating point to the
origin as in Chapter 4, cf. Remark 4.17.
holds for all t > 0, for all T ∈ (0, T⋆] and all initial values satisfying ∥x0∥ ≤ R.
Remark 5.7
The term „semiglobal“ refers to the constant R, which limits the range of the initial states for which stability can be concluded. The term „practical“ refers to the constant ε, which is a measure of how close the solution can be driven towards the operating point before oscillations as in the case of the bang bang controller occur.
As a direct conclusion of Definition 5.6, we can apply Lemma 4.1 and obtain:
Corollary 5.8
Consider a nonlinear control system (5.1) with f (0, 0) = 0 and suppose a family of feedbacks
u T , T ∈ (0, T ⋆ ] to exist, which semiglobally practically asymptotically stabilize the operating
point (x⋆ , u⋆ ) = (0, 0). Then there exists a feed forward u : T → U such that the system is
practically asymptotically controllable.
Definition 5.6 also shows the dilemma of digital control using fixed sampling periods: Both close to the desired operating point and for initial values far away from it, the discontinuous evaluation of the feedback u_T leads to a degradation of performance. Close to the operating point, a slow evaluation leads to overshoots despite the dynamics typically being rather slow. Far away from the operating point, the dynamics is too fast to be captured in between two sampling points, which leads to unstable behavior.
Still, it may even be possible to obtain asymptotic stability (not only practical asymptotic stability)
using fixed sampling periods T as shown in the following task:
Task 5.9
Consider the system
\dot{x}_1(t) = \left( -x_1(t)^2 + x_2(t)^2 \right) \cdot u(t)
Design a zero order hold control such that the system is practically asymptotically stable.
For this choice, the system is globally asymptotically stable for all T > 0 and even independent of T. The reason for the latter is that the solutions never cross the switching line x1 = 0, i.e. the control to be applied is always constant, which leads to independence of the feedback from T.
As described before, the behavior observed in Task 5.9 is the exception. In practice, the limitations of semiglobality and practicality are typically the best we can expect in zero order hold control of nonlinear systems.
In order to show that a stabilizing zero order hold control exists, we follow the path from Chapter 4
and adapt the concept of Control-Lyapunov functions from Definition 4.20 accordingly.
W : X → R+ \ {0} such that there exists a control function u satisfying the inequalities
The latter definition extends the concept of a Control-Lyapunov function in various ways. For one, as the zero order hold solution is not differentiable, we can no longer assume V_T to be differentiable. Hence, the formulation of decrease in energy in inequality (5.6) is given along a solution instead of its vector field. Secondly, the parametrization regarding T needs to be considered. This leads to a parametrization of the decrease in inequality (5.6) using the positive definite function W(·). Moreover, the ideas of semiglobality and practicality are integrated.
Remark 5.11
Comparing Definition 5.10 to Definition 5.6, we can identify the similarity of semiglobality between the constants R and R̂ as well as ε and ε̂. The difference between these two pairs lies in their interpretation: For KL functions, we utilize the state space, whereas for Control-Lyapunov functions the energy space is used. Hence, both values are a transformation of one another using the Control-Lyapunov function V_T.
Now, supposing that a practical Control-Lyapunov function exists, we can directly derive the existence of a zero order hold control.
Note that in (5.7), the right hand side depends on u implicitly as x_T(t_{k+1}) is defined using the initial value x_T(t_k) and the zero order hold control u. Hence, the definition (5.7) is well posed.
Remark 5.13
The transfer from infimum in (5.6) to minimum in (5.7) is only possible as the control is constant
in between two sampling instances tk and tk+1 and therefore the solution x T (·) is continuous with
respect to u.
Unfortunately, the pure existence of a feedback does not help us in computing it. Additionally,
we still require the existence of a practical Control-Lyapunov function to conclude existence of
such a feedback. Here, we first address existence of a Control-Lyapunov function, for which the
following is known from the literature:
The most important aspect of Theorem 5.14 is the requirement regarding the control system. The result only requires the system to be asymptotically controllable, a property which we discussed in the previous Chapter 4, i.e. without digitalization. Hence, techniques such as backstepping or others depending on the structure of the control system may be applied.
Figure: Connections between practical asymptotic controllability and stability via a family of practically stabilizing feedbacks, extending Figure 4.4 (Corollary 5.8, Lemma 4.1, Corollary 4.13, Theorems 5.12, 5.14, 4.21, 4.24, 4.25)
In practice, however, the two tasks of deriving the feedback u_T and the Control-Lyapunov function V_T are often carried out in the reverse order. To this end, first a feedback u_T is derived, and then the inequality (5.6) is shown to hold for this feedback.
The reason for using such a procedure is that Theorem 5.12 only requires a Control-Lyapunov function for fixed R̂, ε̂ to exist for some T0 > 0 in order to conclude existence also for all smaller sampling periods. Hence, if we find a constructive way to derive a feedback, then a practical Control-Lyapunov function can be derived and stability properties of this feedback can be concluded for all T ∈ (0, T0].
Here, we follow this idea and show how a feedback can be derived which exhibits the required properties.
where the disturbance d(t) is a measurable function and f is Lipschitz continuous in the disturbance.
Unfortunately, even small disturbances may lead to instability.
Task 5.15
Consider the system
\dot{x}(t) = \begin{cases} -\exp(-x(t) + 1), & x(t) \geq 1 \\ -x(t), & x(t) \in [-1, 1] \\ \exp(x(t) + 1), & x(t) \leq -1 \end{cases}
Show that the system is asymptotically stable using the Lyapunov function V (x) = x2 /2.
Show that the disturbed system
is unstable.
Solution to Task 5.15: Using the Lyapunov function V(x) we obtain α1 = α2 = V(x) and the decrease via α3(x) = −x · f(x). Hence, the system is asymptotically stable.
Considering d(t) ≡ ε > 0, there always exists a δ > 0 such that f(x) + ε > ε/2 for all x ≥ δ. Hence, each solution with initial value x ≥ δ increases with at least constant rate, i.e. diverges to ∞.
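The destabilizing effect of a constant disturbance can be reproduced numerically. The sketch below simulates the system of Task 5.15 with and without the disturbance d(t) ≡ ε; the values ε = 0.5, the horizon and the step size are illustrative choices.

```python
import numpy as np

def f(x):
    """Vector field of Task 5.15 (exponentially saturating decay)."""
    if x >= 1.0:
        return -np.exp(-x + 1.0)
    if x <= -1.0:
        return np.exp(x + 1.0)
    return -x

eps, dt = 0.5, 1e-2
x_undisturbed, x_disturbed = 3.0, 3.0
for _ in range(int(200 / dt)):
    x_undisturbed += dt * f(x_undisturbed)
    x_disturbed += dt * (f(x_disturbed) + eps)   # constant disturbance d(t) = eps

print(abs(x_undisturbed) < 0.1)   # undisturbed solution decays toward 0
print(x_disturbed > 10.0)         # disturbed solution diverges
```

Since the undisturbed vector field flattens out for large x, even a constant offset eventually dominates and drives the state to infinity.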
Remark 5.16
Note that this possible instability is not present in the linear case. For systems of the form ẋ(t) = A · x(t) + D · d(t) we have

x(t) = \exp(A \cdot (t - t_0)) \cdot x(t_0) + \int_{t_0}^{t} \exp(A \cdot (t - s)) \cdot D \cdot d(s) \, ds.

If the undisturbed system is asymptotically stable, i.e. \|\exp(A \cdot t)\| \leq c \cdot \exp(-\sigma \cdot t) holds for constants c, σ > 0, then

\|x(t)\| \leq c \cdot \exp(-\sigma \cdot t) \cdot \|x(t_0)\| + \frac{c \cdot \|D\|}{\sigma} \cdot \|d\|_{\infty}.

As a consequence, each solution converges towards a ball with radius \frac{c \cdot \|D\|}{\sigma} \cdot \|d\|_{\infty}, i.e. the asymptotic deviation depends on the infinity norm of the disturbance.
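The linear bound from Remark 5.16 can be observed in simulation. The sketch below uses an illustrative stable system with a bounded sinusoidal disturbance; after the transient, the solution remains in a ball whose radius scales with ‖d‖∞.

```python
import numpy as np

# Stable linear system x' = A x + D d with a bounded disturbance
A = np.array([[-1.0, 0.0], [0.0, -2.0]])
D = np.array([[1.0], [1.0]])
d = lambda t: np.array([0.3 * np.sin(t)])   # ||d||_inf = 0.3

x, t, dt = np.array([5.0, -3.0]), 0.0, 1e-3
for _ in range(int(50 / dt)):
    x = x + dt * (A @ x + (D @ d(t)).ravel())   # explicit Euler step
    t += dt

# After the transient, the state stays in a small ball around the origin
print(np.linalg.norm(x) < 1.0)
```

Reducing ‖d‖∞ shrinks the residual ball proportionally, which matches the bound c·‖D‖/σ · ‖d‖∞.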
In the nonlinear setting, such a convergence cannot be expected, yet under certain conditions it can
be assured. These conditions are known as ISS (input to state stability) and typically formulated
for (uncontrolled) systems:
Having defined the ISS property, we can directly derive the following:
5.2. ISS – Input to State Stability
Corollary 5.18
Consider a nonlinear system (5.1) and its connected disturbed system (5.8) with u ≡ 0. If the
disturbed system is ISS, then the undisturbed system is asymptotically stable.
In the other direction, however, we can use the result from Task 5.15 to derive the following:
Corollary 5.19
Consider a nonlinear system (5.1) and its connected disturbed system (5.8) with u ≡ 0. If the
system is asymptotically stable, no conclusion regarding stability of the disturbed system can be
drawn.
However, by tightening the requirements on the system, the reverse direction can be concluded:
Theorem 5.20.
Consider a nonlinear system (5.1) and its connected disturbed system (5.8) with u ≡ 0. Suppose
the undisturbed system to be asymptotically stable, the dynamic to be Lipschitz continuous with
respect to state and disturbance. Then there exists a neighborhood N (x⋆ ) such that the disturbed
system is ISS.
For our control setting, we can apply this definition if we consider the control to be given by a
feedback u : X → U .
holds for all t > 0 and all x ∈ N (x⋆ ), then the system is called η practically ISS.
The family of systems x_{T_j} is called practically ISS if there exists a sequence η_j → 0 and a function β ∈ KL such that

\|x_{T_j}(t)\| \leq \beta(\|x_0\|, t) + \eta_j \qquad (5.11)
We now want to use the ISS property to determine under which conditions the errors introduced by digitalization / zero order hold can be regarded as disturbances and asymptotic controllability of the undisturbed system can be carried over to its zero order hold solution.
For zero order hold systems, we directly obtain consistency for both the state and its derivative:
Lemma 5.23
Consider a nonlinear control system (5.8) with feedback u T : X → U and its respective zero
order hold.
Based on consistency, we can embed systems into one another. Our interest in the context
of digitalization is to obtain parameters of the practical stability property based on the undis-
turbed/undigitalized version of the feedback. The idea of embedding is to express one system
by another one and to express respective properties by one another. Here, we are particularly
interested in the disturbance. In general, embedding is defined as follows:
x2 ( t ) ∈ D ∀t ∈ [0, T ] (5.15)
and additionally if there exists a disturbance d1 such that x1(t) = x2(t) for all t ∈ [0, T] and
∥ d1 ∥ ∞ ≤ δ + ρ ∥ d2 ∥ ∞ . (5.16)
In the context of digitalization, we have d2 ≡ 0 and can therefore always choose ρ = 0. Now,
we can use embedding and obtain the following core result:
The feedback controlled system (5.8) is asymptotically stable for all x ∈ N(x⋆).
The family of systems x_{T_j} is practically asymptotically stable for sufficiently large j, and the comparison function β ∈ KL is independent of j.
From Theorem 5.25 we see that asymptotic stability of the continuously controlled system is
transferred to the digitally controlled system in the semiglobal practical sense. Here, the sampling
time T shows various effects on the quality of the digital closed loop:
The constants ε, ε̂ and η j are in general larger if the sampling time T is increased.
The neighborhood N (x⋆ ) within which practically asymptotic stability can be shown is in
general smaller if the sampling time T is increased.
Figure: Complete schematic of the connections between practical asymptotic controllability, asymptotic controllability, the stability concepts and feedback computation (Corollary 5.8, Theorem 5.25, Lemma 4.1, Corollary 4.13, Theorems 5.12, 5.14, 4.21, 4.24, 4.25, 4.31, 4.33)
LAPLACE TRANSFORM
The following Table A.1 summarizes some of the main properties and laws of computation for Laplace transformed functions.
The Laplace transform and its inverse are typically applied using equivalence tables. Table B.1 summarizes a few of these equivalences.
APPENDIX B
f(t)                 F(s)
δ(t)                 1
η(t)                 1/s
t                    1/s²
exp(at)              1/(s − a)
tⁿ exp(at)           n!/(s − a)ⁿ⁺¹
sin(bt)              b/(s² + b²)
cos(bt)              s/(s² + b²)
exp(at) sin(bt)      b/((s − a)² + b²)
exp(at) cos(bt)      (s − a)/((s − a)² + b²)
...                  ...
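A few rows of the table can be checked symbolically. The sketch below uses SymPy's laplace_transform with the concrete parameter values a = 2, b = 3 (the left-hand functions follow the table; the numeric choices are illustrative).

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

# Selected rows of Table B.1 with a = 2, b = 3
pairs = [
    (sp.exp(2*t),              1 / (s - 2)),
    (t**2 * sp.exp(2*t),       2 / (s - 2)**3),        # n!/(s - a)^(n+1), n = 2
    (sp.sin(3*t),              3 / (s**2 + 9)),
    (sp.exp(2*t)*sp.cos(3*t),  (s - 2) / ((s - 2)**2 + 9)),
]
for f, F in pairs:
    # noconds=True drops the region-of-convergence conditions
    assert sp.simplify(sp.laplace_transform(f, t, s, noconds=True) - F) == 0
print("selected table rows verified")
```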
G(s) = \frac{z(s)}{n(s)} \tag{B.1}

with coprime polynomials z(s) and n(s). Then the minimal realization

s \cdot \hat{x}(s) =
\begin{pmatrix}
0 & 1 & 0 & \cdots & 0 \\
0 & 0 & 1 & \cdots & 0 \\
\vdots & \vdots & \ddots & \ddots & \vdots \\
0 & 0 & \cdots & 0 & 1 \\
-a_0 & -a_1 & \cdots & -a_{n_x-2} & -a_{n_x-1}
\end{pmatrix}
\hat{x}(s) +
\begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ 1 \end{pmatrix}
\hat{u}(s) \tag{B.2a}

\hat{y}(s) =
\begin{pmatrix} b_0 & b_1 & \cdots & b_{n_x-2} & b_{n_x-1} \end{pmatrix}
\hat{x}(s) + b_{n_x} \hat{u}(s) \tag{B.2b}

with state vector \hat{x} = (\hat{x}_1, \ldots, \hat{x}_{n_x})^\top.
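As a numerical sanity check of (B.2), the sketch below builds the controllable canonical form for the illustrative example G(s) = (s + 5)/(s² + 3s + 2) (not taken from the lecture) and verifies C (sI − A)⁻¹ B + D against G(s) at a sample point.

```python
import numpy as np

# n(s) = s^2 + 3 s + 2 (monic), z(s) = s + 5, direct term b2 = 0
a = [2.0, 3.0]          # a0, a1
b = [5.0, 1.0]          # b0, b1

# Controllable canonical form (B.2)
A = np.array([[0.0, 1.0],
              [-a[0], -a[1]]])
B = np.array([[0.0], [1.0]])
C = np.array([[b[0], b[1]]])
D = 0.0

def G_state_space(s):
    # G(s) = C (s I - A)^{-1} B + D
    return (C @ np.linalg.inv(s * np.eye(2) - A) @ B)[0, 0] + D

def G_transfer(s):
    return (s + 5) / (s**2 + 3*s + 2)

s0 = 1.0 + 2.0j
assert np.isclose(G_state_space(s0), G_transfer(s0))
```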
G(s) = \frac{z(s)}{n(s)} \tag{B.3}

with coprime polynomials z(s) and n(s). Then the minimal realization

s \cdot \hat{x}(s) =
\begin{pmatrix}
0 & \cdots & \cdots & 0 & -a_0 \\
1 & 0 & \cdots & 0 & -a_1 \\
\vdots & \ddots & \ddots & \vdots & \vdots \\
0 & \cdots & 1 & 0 & -a_{n_x-2} \\
0 & \cdots & 0 & 1 & -a_{n_x-1}
\end{pmatrix}
\hat{x}(s) +
\begin{pmatrix} b_0 \\ b_1 \\ \vdots \\ b_{n_x-2} \\ b_{n_x-1} \end{pmatrix}
\hat{u}(s) \tag{B.4a}

\hat{y}(s) =
\begin{pmatrix} 0 & 0 & \cdots & 0 & 1 \end{pmatrix}
\hat{x}(s) + b_{n_x} \hat{u}(s) \tag{B.4b}
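The same illustrative example G(s) = (s + 5)/(s² + 3s + 2) can be used to check (B.4); note that the observable canonical form is the dual (transpose) of the controllable one.

```python
import numpy as np

a = [2.0, 3.0]          # a0, a1 of n(s) = s^2 + 3 s + 2
b = [5.0, 1.0]          # b0, b1 of z(s) = s + 5

# Observable canonical form (B.4): Ao = Ac^T, Bo = Cc^T, Co = Bc^T
Ao = np.array([[0.0, -a[0]],
               [1.0, -a[1]]])
Bo = np.array([[b[0]], [b[1]]])
Co = np.array([[0.0, 1.0]])

def G(s, A, B, C):
    # Transfer function C (s I - A)^{-1} B of a strictly proper realization
    return (C @ np.linalg.inv(s * np.eye(A.shape[0]) - A) @ B)[0, 0]

s0 = 0.5 + 1.0j
assert np.isclose(G(s0, Ao, Bo, Co), (s0 + 5) / (s0**2 + 3*s0 + 2))
```

Both realizations reproduce the same transfer function, as duality guarantees.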
BLOCK DIAGRAM
[Table of block diagram symbols and their parameters:]
P controller (K_P)
I controller (K_I)
D controller (K_D)
Latency (K_P, K_T)
PI controller (K_P, K_I)
PD/PDT1 controller (K_P, K_D)
Decay (K_DT)
Saturation
Limit (y_min, y_max)
Bang-bang (y_min, y_max)
Bang-bang with hysteresis (y_min, y_max; thresholds −ε, ε)
Double-setpoint (y_min, y_0, y_max)
Double-setpoint with hysteresis (y_min, y_0, y_max; thresholds −ε_2, −ε_1, ε_1, ε_2)
Nonlinear
Triangle
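The bang-bang element with hysteresis from the list above can be sketched as a small stateful function; the thresholds ±ε and output levels y_min, y_max follow the symbols above, while all concrete values are illustrative.

```python
class BangBangHysteresis:
    """Two-level switch: output y_max once e > eps, y_min once e < -eps;
    inside the band [-eps, eps] the previous output is held (hysteresis)."""

    def __init__(self, eps, y_min=-1.0, y_max=1.0):
        self.eps, self.y_min, self.y_max = eps, y_min, y_max
        self.y = y_min          # assumed initial switch position

    def __call__(self, e):
        if e > self.eps:
            self.y = self.y_max
        elif e < -self.eps:
            self.y = self.y_min
        return self.y           # inside the band: hold previous value

ctrl = BangBangHysteresis(eps=0.5)
outputs = [ctrl(e) for e in (1.0, 0.2, -0.2, -1.0, 0.2)]
# switches only outside the dead band: [1.0, 1.0, 1.0, -1.0, -1.0]
```

The held state is what distinguishes this element from the plain bang-bang switch, which would chatter between y_min and y_max for small errors.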
BIBLIOGRAPHY
[1] DIN 19226 Leittechnik, Teil 1: Regelungstechnik und Steuerungstechnik: Allgemeine Grundbegriffe. 1994
[2] DIN 19226 Leittechnik, Teil 2: Regelungstechnik und Steuerungstechnik: Begriffe zum Verhalten dynamischer Systeme. 1994
[3] ARTSTEIN, Z.: Stabilization with relaxed controls. In: Nonlinear Analysis: Theory, Methods & Applications 7 (1983), no. 11, pp. 1163–1173
[4] BROCKETT, R.W.: Asymptotic stability and feedback stabilization. In: BROCKETT, R.W.; MILLMAN, R.S.; SUSSMANN, H.J. (eds.): Differential Geometric Control Theory. Birkhäuser, 1983, pp. 181–191
[5] CELLIER, F.E.: Continuous System Modeling. Springer, New York, 1991
[6] DIRECTOR, S.W.; ROHRER, R.A.: Introduction to System Theory. McGraw-Hill, New York, 1972
[8] ISIDORI, A.: Nonlinear Control Systems. 3rd edition. Springer, 1995
[9] KHALIL, H.K.: Nonlinear Systems. Prentice Hall PTR, 2002. 750 pp. ISBN 0130673897
[11] LUENBERGER, D.G.: Introduction to Dynamic Systems. John Wiley & Sons, New York, 1979
[15] MÜLLER, M.: Normal form for linear systems with respect to its vector relative degree. In: Linear Algebra and its Applications 430 (2009), no. 4, pp. 1292–1312. DOI 10.1016/j.laa.2008.10.014
[16] PADULO, L.; ARBIB, M.A.: System Theory. W.B. Saunders Company, Philadelphia, 1974
[17] SHEARER, J.L.; KOLAKOWSKI, B.T.: Dynamic Modeling and Control of Engineering Systems. Macmillan Publishing, New York, 1995
[18] SONTAG, E.D.: Mathematical Control Theory: Deterministic Finite Dimensional Systems. Springer, 1998. 531 pp. ISBN 0387984895