
Control Engineering 2

(Regelungstechnik 2)

Lecture Notes

Jürgen Pannek

July 25, 2022


Jürgen Pannek
Institute for Intermodal Transport and Logistic Systems
Hermann-Blenck-Str. 42
38519 Braunschweig
FOREWORD

During the summer term 2022, I give the lecture for the module Control Engineering 2 (Regelungstechnik 2) at the Technical University of Braunschweig. To structure the lecture and support my students in their learning process, I prepared these lecture notes. As this is the first edition, the notes are still incomplete and will be updated in due course of the lecture itself. Moreover, I will integrate remarks and corrections throughout the summer term.
The aim of the module is to provide participating students with knowledge of the terms of system theory and control engineering. Moreover, students shall be enabled to understand complex control structures, apply control schemes and analyze control systems. After successfully completing the module, students shall additionally be able to apply the discussed methods within real-life applications and be able to assess the results.
To this end, the module will tackle the subject areas

System theory and Modeling,

Methods and Algorithms, and

Stability and Control Design

for complex and networked linear as well as nonlinear systems. In particular, we discuss the
topics

Frequency domain
  Modeling of complex control loops
  Bang-bang and double-setpoint control
  Multi-input multi-output systems

Time domain
  Nonlinear control systems
  Backstepping and Sontag’s formula
  Digital control systems

within the lecture and support understanding and application within the tutorial and laboratory
classes. The module itself is accredited with 5 credits, with an add-on of 2 credits if the requirements of the laboratory classes are met.
During the preparation of the lecture, I utilized the books of Jan Lunze [12–14]. For further reading, the books of Heinz Unbehauen [19–21] and Otto Föllinger [7] provide deep insights. For the nonlinear part, I particularly recommend the works of Khalil [9] and Isidori [8], which also formed the basis of this lecture.
Contents

Contents iv

List of tables v

List of figures viii

List of definitions and theorems xi

1 System and model of a system 1


1.1 System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Continuous time systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.3 Discrete time systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.4 Block diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.5 Properties of systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12

I Frequency Domain 23

2 Modeling of complex control loops 25


2.1 Laplace transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.2 Transfer matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
2.3 Cascade control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
2.4 Disturbance control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41

3 Complex control structures 51


3.1 Bang-bang and double-setpoint control . . . . . . . . . . . . . . . . . . . . . . . 51
3.2 Characteristic map . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
3.3 Multi-input multi-output systems . . . . . . . . . . . . . . . . . . . . . . . . . . 64

II Time Domain 73

4 Nonlinear Control Systems 75


4.1 Necessary conditions for controllability . . . . . . . . . . . . . . . . . . . . . . 76
4.2 Equivalent concepts of controllability . . . . . . . . . . . . . . . . . . . . . . . 83
4.3 Backstepping and Sontag’s formula . . . . . . . . . . . . . . . . . . . . . . . . 88

5 Digital control systems 97


5.1 Zero order hold control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
5.2 ISS – Input to state stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
5.3 Stability under digitalization . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106

Appendices 109

A Laplace transform 111

B Transfer function and properness 113

C Block diagram 117

Bibliography 120
List of Tables

2.1 Advantages and disadvantages of cascade control . . . . . . . . . . . . . . . . . 40


2.2 Advantages and disadvantages of disturbance control as compared to prefilter/pre-
control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49

3.1 Technical possibilities of continuous and switching actuators . . . . . . . . . . . 52


3.2 Advantages and disadvantages of bang-bang . . . . . . . . . . . . . . . . . . . . 61
3.3 Advantages and disadvantages of double-setpoint control . . . . . . . . . . . . . 61
3.4 Properties of P and V canonical structure . . . . . . . . . . . . . . . . . . . . . . 66
3.5 Advantages and disadvantages of MIMO control . . . . . . . . . . . . . . . . . 70

4.1 Advantages and disadvantages of Control-Lyapunov functions . . . . . . . . . . 88


4.2 Advantages and disadvantages of backstepping & Sontag’s formula . . . . . . . 95

5.1 Advantages and disadvantages of digital control . . . . . . . . . . . . . . . . . . 108

A.1 Properties of Laplace transformed functions . . . . . . . . . . . . . . . . . . . . 111

B.1 Equivalence table for Laplace transformations . . . . . . . . . . . . . . . . . . . 113

C.1 List of block symbols . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117


List of Figures

1.1 Term of a system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2


1.2 Example of static and dynamic systems . . . . . . . . . . . . . . . . . . . . . . 2
1.3 Sketch of a dynamic flow and a trajectory . . . . . . . . . . . . . . . . . . . . . 7
1.4 Standard block diagram elements and their meaning . . . . . . . . . . . . . . . . 8
1.5 Schematic of separately excited DC machine moving a (hoisting) drum roll . . . 9
1.6 Block diagram of equation (1.16c) . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.7 Block diagram of equation (1.16d) with Θ̃ = ΘG + Θ T + m · r2 . . . . . . . . . 11
1.8 Block diagram of equation (1.16b) . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.9 Block diagram of equation (1.16a) . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.10 Block diagram for separately excited DC machine with drum roll . . . . . . . . . 13
1.11 Counterexample for linearity of systems . . . . . . . . . . . . . . . . . . . . . . 13
1.12 Electronic circuit of a PI controller . . . . . . . . . . . . . . . . . . . . . . . . . 18

2.1 Sketch of the Heaviside function . . . . . . . . . . . . . . . . . . . . . . . . . . 27


2.2 Sketch of the Dirac delta function . . . . . . . . . . . . . . . . . . . . . . . . . 28
2.3 Block diagram of a cascade control . . . . . . . . . . . . . . . . . . . . . . . . . 34
2.4 Block diagram of a cascade motor control . . . . . . . . . . . . . . . . . . . . . 35
2.5 Block diagram of a cascade control for integral outer loop . . . . . . . . . . . . . 37
2.6 Sketch of a 3DOF robot arm . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
2.7 Block diagram of a cascade robot control for one joint drive . . . . . . . . . . . 38
2.8 Simple feed forward . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
2.9 Structure of a precontrol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
2.10 Structure of a prefilter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
2.11 Structure of a disturbance control . . . . . . . . . . . . . . . . . . . . . . . . . . 47

3.1 Block diagram of a bang bang control . . . . . . . . . . . . . . . . . . . . . . . 52


3.2 Block diagram of a double-setpoint control . . . . . . . . . . . . . . . . . . . . 53
3.3 Mimicry of a continuous input function . . . . . . . . . . . . . . . . . . . . . . 54
3.4 Sketch of a pulse width modulation using triangle functions . . . . . . . . . . . . 55
3.5 Closed loop with bang-bang control . . . . . . . . . . . . . . . . . . . . . . . . 56

3.6 Closed loop with bang-bang control with low pass and amplifier . . . . . . . . . 56
3.7 Closed loop with double-setpoint control with low pass, amplifier and integrator . 58
3.8 Closed loop with double-setpoint control with low pass, amplifier, latency and
integrator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
3.9 Mimic PD control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
3.10 Mimic PID control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
3.11 Mimic PI control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
3.12 Nonlinear static system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
3.13 Separation of maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
3.14 MIMO system with two inputs and two outputs . . . . . . . . . . . . . . . . . . 65
3.15 Canonical structures of MIMO systems with two inputs and two outputs . . . . . 65
3.16 Decoupling structure of MIMO system with P canonical structure . . . . . . . . 67
3.17 Elimination of coupling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
3.18 Adaptable decoupling structure of MIMO system with P canonical structure . . . 70

4.1 Sketch of a nonholonomic car . . . . . . . . . . . . . . . . . . . . . . . . . . . 80


4.2 Sketch of classes of comparison functions . . . . . . . . . . . . . . . . . . . . . 84
4.3 Sketch of asymptotically stable systems . . . . . . . . . . . . . . . . . . . . . . 85
4.4 Schematic connection of stability results . . . . . . . . . . . . . . . . . . . . . . 88
4.5 System structure for backstepping . . . . . . . . . . . . . . . . . . . . . . . . . 93
4.6 Schematic connection of stability results to backstepping/Sontag’s formula . . . . 94
4.7 Sketch of a stable system with periodic orbit . . . . . . . . . . . . . . . . . . . . 95

5.1 Schematic connection of stability results to digitalization . . . . . . . . . . . . . 102


5.2 Schematic connection of stability results to derive digital controls . . . . . . . . 108
List of Definitions and Theorems

Definition 1.1 System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2


Definition 1.4 State . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Definition 1.10 Linearity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
Theorem 1.11 Linear system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
Definition 1.13 Time invariance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
Theorem 1.16 Linear time invariant system . . . . . . . . . . . . . . . . . . . . . . . 15
Definition 1.18 Transition matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
Theorem 1.19 Solution of autonomous linear time invariant systems . . . . . . . . . . 17
Theorem 1.21 Solution for linear time invariant systems . . . . . . . . . . . . . . . . . 17
Definition 1.23 Equilibrium . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Definition 1.26 Operating point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
Definition 1.28 Stability and Controllability . . . . . . . . . . . . . . . . . . . . . . . 20
Definition 2.1 Laplace transform function . . . . . . . . . . . . . . . . . . . . . . . . 25
Definition 2.6 Inverse Laplace transform function . . . . . . . . . . . . . . . . . . . . 28
Theorem 2.7 Partial fraction decomposition . . . . . . . . . . . . . . . . . . . . . . . 29
Definition 2.9 Transfer matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Theorem 2.11 Transfer matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Definition 2.13 Properness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Theorem 2.14 Properness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
Theorem 2.16 Poles and zeros of transfer function . . . . . . . . . . . . . . . . . . . . 32
Theorem 2.17 Poles and zeros of transfer matrix . . . . . . . . . . . . . . . . . . . . . 33
Theorem 2.18 Strong/robust/BIBO stability . . . . . . . . . . . . . . . . . . . . . . . 33
Theorem 2.19 Strong/robust/BIBO stability for linear time invariant systems . . . . . . 33
Theorem 2.20 Strong/robust/BIBO stability for linear time invariant systems via trans-
fer function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Definition 2.23 Transfer function cascade control . . . . . . . . . . . . . . . . . . . . 35
Definition 2.31 Transfer function precontrol . . . . . . . . . . . . . . . . . . . . . . . 44
Definition 2.32 Transfer function prefilter . . . . . . . . . . . . . . . . . . . . . . . . 44
Theorem 2.33 Equivalency precontrol and prefilter . . . . . . . . . . . . . . . . . . . 45
Definition 2.39 Transfer function disturbance control . . . . . . . . . . . . . . . . . . 47

Definition 3.2 Bang-bang control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52


Definition 3.6 Bang-bang control with hysteresis . . . . . . . . . . . . . . . . . . . . . 54
Theorem 3.8 Oscillation for bang-bang control w/o hysteresis . . . . . . . . . . . . . . 56
Theorem 3.10 Oscillation for bang-bang control with low pass and amplifier . . . . . . 57
Definition 3.12 Double-setpoint control with hysteresis . . . . . . . . . . . . . . . . . 57
Theorem 3.14 Asymptotic stability for double-setpoint control with hysteresis . . . . . 58
Theorem 3.15 Minimal asymptotic stability for double-setpoint control with hysteresis 59
Theorem 3.16 Mimic continuous PD, PID, PI control . . . . . . . . . . . . . . . . . . 59
Definition 3.18 Characteristic map . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
Definition 3.21 MIMO system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
Definition 3.22 P canonical structure . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Definition 3.23 V canonical structure . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Theorem 3.24 Equivalence P and V canonical structure . . . . . . . . . . . . . . . . . 66
Definition 3.25 Decoupling control . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
Theorem 3.26 Decoupling condition . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
Theorem 3.28 Decoupling transfer matrix . . . . . . . . . . . . . . . . . . . . . . . . 69
Theorem 4.3 Brockett’s condition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
Theorem 4.4 Exponential Stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
Theorem 4.5 Linearization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
Definition 4.15 Comparison Functions . . . . . . . . . . . . . . . . . . . . . . . . . . 83
Theorem 4.16 Stability Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
Definition 4.20 Control-Lyapunov function . . . . . . . . . . . . . . . . . . . . . . . 85
Theorem 4.21 Asymptotic Stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
Theorem 4.24 Existence of Control-Lyapunov function . . . . . . . . . . . . . . . . . 87
Theorem 4.25 Existence of Control-Lyapunov function . . . . . . . . . . . . . . . . . 87
Definition 4.27 Control-affine system . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
Theorem 4.31 Sontag’s formula . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
Theorem 4.33 Backstepping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
Definition 5.2 Zero order hold control . . . . . . . . . . . . . . . . . . . . . . . . . . 98
Definition 5.3 Zero order hold solution . . . . . . . . . . . . . . . . . . . . . . . . . . 98
Definition 5.6 Practical stability/controllability . . . . . . . . . . . . . . . . . . . . . . 99
Definition 5.10 Practical Control-Lyapunov functions . . . . . . . . . . . . . . . . . . 100
Theorem 5.12 Existence of feedback . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
Theorem 5.14 Existence of practical Control-Lyapunov function . . . . . . . . . . . . 102
Definition 5.17 ISS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
Definition 5.21 η practical ISS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
Definition 5.22 Consistency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106

Definition 5.24 Embedding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107


Theorem 5.25 Equivalency of stability under digitalization . . . . . . . . . . . . . . . 107
Definition B.1 Controllable normal form . . . . . . . . . . . . . . . . . . . . . . . . . 113
Definition B.2 Observable normal form . . . . . . . . . . . . . . . . . . . . . . . . . 114
CHAPTER 1

SYSTEM AND MODEL OF A SYSTEM

While on the one hand we want to understand the fundamental limitations that math-
ematics imposes on what is achievable, irrespective of the precise technology being
used, it is also true that technology may well influence the type of question to be asked
and the choice of mathematical model.

E.D. Sontag [18]

In Control Engineering 1, basic control structures formed the heart of the lecture. Common
ways of describing systems in both frequency and time domain were defined to understand the
foundation of control. Moreover, control methods were applied and analyzed.
Within the lecture series Control Engineering 2, we study control structures which are more complex in both their description as well as in their application and analysis. To have a common basis, this chapter provides terminology and properties of control systems for both the frequency and the time domain.
For more historic insights, we additionally refer to the books of Cellier [5], Director and Rohrer [6],
Ludyk [10], Luenberger [11], Padulo and Arbib [16] and Shearer and Kulakowski [17].

1.1. System
The term system is used in various different scientific and non-scientific areas. Its meaning, however, is often not defined clearly. Simply formulated, a system is the connection of different interacting components to realize given tasks. The interdependence of systems with their environment is given by so-called input and output variables, cf. Figure 1.1.
More formally, we define the following:
Figure 1.1.: Term of a system

Definition 1.1 (System).


Consider two sets U and Y . Then a map f : U → Y is called a system.

The inputs u_1, ..., u_{n_u} ∈ U are variables which act on the system from the environment and do not depend on the system itself or its properties. We distinguish between inputs which are used to specifically manipulate (or control) the system, and inputs which are not manipulated on purpose. We call the first ones control or manipulation inputs, and we refer to the second ones as disturbance inputs. The outputs y_1, ..., y_{n_y} ∈ Y are variables which are generated by the system and influence the environment. Here, we distinguish output variables depending on whether we measure them or not. We call the measured ones measurement outputs.

Remark 1.2
Note that in most cases not all measurable outputs are actually measured. Similarly, in many
cases not all manipulable inputs are controlled.

In the following, we consider two electrical systems illustrated in Figure 1.2, which represent an ideal resistor and an ideal capacitor.

Figure 1.2.: Example of static and dynamic systems

The systems in Figure 1.2 possess the input variable I(t), the output variable U(t) and time t. For the resistor R the output is uniquely defined by the input for every time instant t, i.e. we have

y ( t ) = U ( t ) = R · I ( t ) = R · u ( t ). (1.1)

If the outputs depend on the input at the same time instant, we call systems such as this one static.
In contrast to this, the computation of the voltage U (t) at the capacitor C at time instant t depends
on the entire history I (τ ) for τ ≤ t, i.e. we have

y(t) = U(t) = \frac{1}{C} \int_{-\infty}^{t} I(\tau)\, d\tau = \frac{1}{C} \int_{-\infty}^{t} u(\tau)\, d\tau.

If we additionally know the voltage U (t0 ) at a time instant t0 ≤ t, then only the history t0 ≤
τ ≤ t of the current is required, i.e.

y(t) = U(t) = \frac{1}{C} \int_{-\infty}^{t} I(\tau)\, d\tau = \underbrace{\frac{1}{C} \int_{-\infty}^{t_0} I(\tau)\, d\tau}_{U(t_0)} + \frac{1}{C} \int_{t_0}^{t} I(\tau)\, d\tau = U(t_0) + \frac{1}{C} \int_{t_0}^{t} u(\tau)\, d\tau. \quad (1.2)

As we can see from (1.2), the initial value U(t_0) contains all the information on the history τ ≤ t_0. For this reason, one typically refers to U(t_0) as the internal state of the system capacitor at time instant t_0. If the output of the system depends not only on the input at the same time instant but also on the history of the latter, we call these systems dynamic.

Remark 1.3
Note that by this definition the set of dynamic systems contains the set of static systems.

If for a system according to Figure 1.1 the outputs y1 (t), . . . , yny (t) depend on the history of the
inputs u1 (τ ), . . . , unu (τ ) for τ ≤ t only, then the system is called causal. As all technically
feasible systems are causal, we will restrict ourselves to this case.

Now, our discussion so far allows us to give the general definition of the state of a dynamical system:

Definition 1.4 (State).


Consider a system f : U → Y. If the output y(t) uniquely depends on the history of inputs u(τ) for t_0 ≤ τ ≤ t and some initial value x(t_0), then the variable x(t) is called the state of the system.

Task 1.5
Which variable represents a state in the case of an inductor?

Solution to Task 1.5: The current through the inductor.

1.2. Continuous time systems


If a dynamical system can be described by a finite number n_x of states, then it is called a system with finite state of order n_x or a concentrated-parametric system. Such systems with finite state are described by mathematical models featuring differential and algebraic equations. Within this lecture, we restrict ourselves to this class. These systems can be described explicitly via

\dot{x}_1(t) = f_1(x_1(t), \ldots, x_{n_x}(t), u_1(t), \ldots, u_{n_u}(t), t)
  \vdots                                                        Differential equations (1.3a)
\dot{x}_{n_x}(t) = f_{n_x}(x_1(t), \ldots, x_{n_x}(t), u_1(t), \ldots, u_{n_u}(t), t)

x_1(t_0) = x_{1,0}
  \vdots                                                        Initial conditions (1.3b)
x_{n_x}(t_0) = x_{n_x,0}

y_1(t) = h_1(x_1(t), \ldots, x_{n_x}(t), u_1(t), \ldots, u_{n_u}(t), t)
  \vdots                                                        Output equations (1.3c)
y_{n_y}(t) = h_{n_y}(x_1(t), \ldots, x_{n_x}(t), u_1(t), \ldots, u_{n_u}(t), t)

We combine the input, output and state variables to (column) vectors

u = [u_1 \; u_2 \; \ldots \; u_{n_u}]^\top                      (1.4a)
y = [y_1 \; y_2 \; \ldots \; y_{n_y}]^\top                      (1.4b)
x = [x_1 \; x_2 \; \ldots \; x_{n_x}]^\top.                     (1.4c)

Using additionally the short form \dot{x} for \frac{d}{dt} x, we obtain the compact vector notation

\dot{x}(t) = f(x(t), u(t), t),  x(t_0) = x_0                    (1.5a)
y(t) = h(x(t), u(t), t).                                        (1.5b)

The variables u, y and x are called input, output and state of the dynamical system.
If the state x represents an element of an n x –dimensional vector space X , then X is called state
space. The state of a system at time instant t can then be depicted as a point in the n x –dimensional
state space. The curve of points for variable time t in the state space is called trajectory and is
denoted by x(·).

Remark 1.6
Systems with infinite dimensional states are called distributed parametric systems and are described, e.g., via partial differential equations. Examples of such systems are beams, boards, membranes, electromagnetic fields, heat conduction, etc.

1.3. Discrete time systems


The setting of continuous time systems as we discussed them so far can be widened to discrete time systems. In contrast to continuous time systems where we have t ∈ R, for discrete time systems time refers to an index k ∈ Z. Here, the states are denoted by x(k) and represent a sequence of points in the state space X, which is now not a curve. Discrete time systems may arise by sampling continuous time systems, e.g. via an A/D and D/A converter. For equidistant sampling with sampling time T, we obtain the set of sampling instants

\mathbb{T} := \{t_k \mid t_k := t_0 + k \cdot T\} \subset \mathbb{R},

where t_0 is some fixed initial time stamp. Apart from equidistant sampling, other types such as event based or sequence based sampling are possible. The equidistant case, however, is important in digital control, which we consider later in the lecture.

Remark 1.7
Note that the class of discrete time systems is larger and contains the class of continuous time
systems, i.e. for each continuous time system there exists a discrete time equivalent, but for some
discrete time systems no continuous time equivalent exists.

To mathematically describe discrete time systems so called difference equations and algebraic
equations are used. Similar to (1.3) we write

x_1(k+1) = f_1(x_1(k), \ldots, x_{n_x}(k), u_1(k), \ldots, u_{n_u}(k), k)
  \vdots                                                        Difference equations (1.6a)
x_{n_x}(k+1) = f_{n_x}(x_1(k), \ldots, x_{n_x}(k), u_1(k), \ldots, u_{n_u}(k), k)

x_1(0) = x_{1,0}
  \vdots                                                        Initial conditions (1.6b)
x_{n_x}(0) = x_{n_x,0}

y_1(k) = h_1(x_1(k), \ldots, x_{n_x}(k), u_1(k), \ldots, u_{n_u}(k), k)
  \vdots                                                        Output equations (1.6c)
y_{n_y}(k) = h_{n_y}(x_1(k), \ldots, x_{n_x}(k), u_1(k), \ldots, u_{n_u}(k), k)

Again, we combine the input, output and state variables to (column) vectors

u = [u_1 \; u_2 \; \ldots \; u_{n_u}]^\top                      (1.7a)
y = [y_1 \; y_2 \; \ldots \; y_{n_y}]^\top                      (1.7b)
x = [x_1 \; x_2 \; \ldots \; x_{n_x}]^\top                      (1.7c)

and obtain the compact vector notation

x(k+1) = f(x(k), u(k), k),  x(0) = x_0                          (1.8a)
y(k) = h(x(k), u(k), k).                                        (1.8b)

An example of a discrete time system is the interest rate development of a bank deposit. Consider
x(k ) to be the bank deposit in month k and p to be the interest rate in percentage. If we place
u(k ) on the deposit at month k, then the model of the bank deposit development reads
x(k+1) = \left(1 + \frac{p}{100}\right) \cdot x(k) + u(k). \quad (1.9)

Based on this model the bank deposit can be computed for all following months.
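For illustration, the recursion (1.9) can also be evaluated directly. The following is a minimal Python sketch; the initial deposit, interest rate and monthly deposits are made-up example values.

# A minimal sketch of iterating the deposit model (1.9); the interest rate
# p and the monthly deposit u(k) below are made-up example values.
def simulate_deposit(x0, p, u, months):
    """Iterate x(k+1) = (1 + p/100) * x(k) + u(k) for the given horizon."""
    x, trajectory = x0, [x0]
    for k in range(months):
        x = (1 + p / 100) * x + u(k)
        trajectory.append(x)
    return trajectory

# deposit 100 per month at 0.5% monthly interest, starting from 1000
print(simulate_deposit(1000.0, 0.5, lambda k: 100.0, 12))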

Task 1.8 (Gambler’s ruin)


Suppose a player is going into a casino to play Roulette. The probability to win is p where
0 < p < 1, whereas the probability that the casino wins is q = 1 − p. Upon start, the player
has a chips and the casino has b chips. What is the probability that the player wins all chips?
Use the toy data p = 18/37, q = 19/37, a = 100 and b = 10000.

Solution to Task 1.8: Suppose the player owns 0 ≤ k ≤ a + b chips, and therefore the casino owns a + b − k chips. Let x(k) denote the probability that the player with k chips wins. Hence, depending on whether the player wins or not, the player will have k + 1 or k − 1 chips after this round. Therefore, the difference equation is given by

x(k) = p \cdot x(k+1) + q \cdot x(k-1).

Additionally, we obtain the boundary conditions x(0) = 0 and x(a + b) = 1. Solving the difference equation (e.g. via Maple's rsolve), we obtain

x(k) = \frac{1 - \left(\frac{q}{p}\right)^k}{1 - \left(\frac{q}{p}\right)^{a+b}}.

Using the toy data we obtain x(a) = 1.538 · 10^{-235}.
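The closed-form solution can be cross-checked numerically; the following Python sketch evaluates it for the toy data (note that (q/p)^{a+b} ≈ 10^{237} still fits into a double precision float):

# Numerical check of the closed-form solution of Task 1.8 with the toy data.
p, q = 18 / 37, 19 / 37
a, b = 100, 10000
r = q / p
x_a = (1 - r**a) / (1 - r**(a + b))
print(x_a)  # approximately 1.5e-235, matching the value above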

Note that in both discrete and continuous time, the dynamic reveals a flow of the system at hand, whereas a trajectory is bound to a specific initial value and input sequence. Figure 1.3 illustrates the idea of flow and trajectory: the flow is colored to mark its intensity, the arrows point into its direction, and the trajectory is evaluated for a specific initial value and "follows" the flow accordingly.

Figure 1.3.: Sketch of a dynamic flow and a trajectory



1.4. Block diagram


Although the dynamic behavior of a system is described well by mathematical models, it is convenient to represent models via so-called block diagrams. Block diagrams are visual representations using (more or less) standardized symbols. Originally, this graphical representation was developed to simulate differential equations on analog computers. Today, many simulation programs such as Matlab/Simulink as well as automation systems such as Step7 still offer the possibility to enter models via blocks.
Some standard symbols and their meaning are given in Figure 1.4.

Figure 1.4.: Standard block diagram elements and their meaning:
(a) Sum: y = u_1 - u_2 + u_3
(b) Integrator: y(t) = y(t_0) + \int_{t_0}^{t} u(\tau)\, d\tau
(c) Gain: y(t) = k \cdot u(t)
(d) Multiplicator: y = u_1 \cdot u_2
(e) Function: y = f(u_1, u_2)
(f) Split: y_1(t) = u(t), \; y_2(t) = u(t)

These symbols allow us to visually break down the structure of a system to its elements. For illustration, we consider a separately excited DC machine, which is moving a load via a cable drum, cf. Figure 1.5. Within the example, we denote the armature and excitation currents by I_A and I_F, and the armature and excitation voltages by U_A and U_F respectively. Moreover, we denote the winding resistances by R_A and R_F and the magnetic flux by Ψ_F(I_F), where L_A and k represent the armature inductance and gain, and ω the rotation speed of the motor.

Figure 1.5.: Schematic of separately excited DC machine moving a (hoisting) drum roll

The mathematical model can be derived from the mesh equations of the armature circuit and the
excitation circuit

Armature circuit:   -U_A + R_A \cdot I_A + U_{L_A} + U_{ind} = 0        (1.10a)
Excitation circuit: -U_F + R_F \cdot I_F + U_{L_F} = 0                  (1.10b)

where we have

U_{L_A} = L_A \cdot \frac{d}{dt} I_A                                    (1.11a)
U_{L_F} = \frac{d}{dt} \Psi_F(I_F) = \frac{\partial}{\partial I_F} \Psi_F(I_F) \cdot \frac{d}{dt} I_F   (1.11b)
U_{ind} = k \cdot \Psi_F(I_F) \cdot \omega.                             (1.11c)

Putting (1.11) into (1.10) we obtain

L_A \cdot \frac{d}{dt} I_A = U_A - R_A \cdot I_A - k \cdot \Psi_F(I_F) \cdot \omega   (1.12a)
\frac{d}{dI_F} \Psi_F(I_F) \cdot \frac{d}{dt} I_F = U_F - R_F \cdot I_F.              (1.12b)

Introducing the force Fs of the rope, then by conservation of momentum at the motor we have

\frac{d}{dt} \varphi = \omega                                           (1.13a)
(\Theta_G + \Theta_T) \cdot \frac{d}{dt} \omega = M_{el} - F_s \cdot r = k \Psi_F(I_F) \cdot I_A - F_s \cdot r   (1.13b)

where r and φ are the radius and angle of the drum roll and M_{el} = k Ψ_F(I_F) · I_A is the electric torque of the motor. Similarly, via conservation of momentum at the rope we have

\frac{d}{dt} x = v                                                      (1.14a)
m \cdot \frac{d}{dt} v = F_s - m \cdot g                                (1.14b)

where x, v and m are the position, velocity and mass of the load, and g is the acceleration of gravity. Using

r \cdot \frac{d}{dt} \varphi = \frac{d}{dt} x = v

in (1.14b) we obtain

F_s = m \cdot r \cdot \frac{d}{dt} \omega + m \cdot g.
dt

Now, we can combine systems (1.12) and (1.13) to obtain the combined system of differential
equations

\frac{d}{dt} I_A = \frac{1}{L_A} \left( U_A - R_A \cdot I_A - k \cdot \Psi_F(I_F) \cdot \omega \right)   (1.15a)
\frac{d}{dt} I_F = \frac{1}{\frac{\partial}{\partial I_F} \Psi_F(I_F)} \left( U_F - R_F \cdot I_F \right)   (1.15b)
\frac{d}{dt} \varphi = \omega                                           (1.15c)
\frac{d}{dt} \omega = \frac{1}{\Theta_G + \Theta_T + m \cdot r^2} \left( k \Psi_F(I_F) \cdot I_A - m \cdot g \cdot r \right).   (1.15d)

Utilizing Definitions 1.1 and 1.4, we identify the inputs u = [U_A U_F]^\top, the output y = r · φ and the states x = [I_A I_F φ ω]^\top.
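Before turning to the block diagram, the model (1.15) can also be simulated numerically. The following Python sketch uses scipy's solve_ivp; all parameter values and the saturating flux map Ψ_F are made-up assumptions for illustration only.

# Sketch of simulating the DC machine model (1.15) with scipy; all parameter
# values and the saturating flux map Psi_F below are made-up assumptions.
import numpy as np
from scipy.integrate import solve_ivp

L_A, R_A, R_F, k = 1e-2, 0.5, 10.0, 1.0               # assumed machine constants
Theta_G, Theta_T, m, r, g = 0.1, 0.2, 5.0, 0.1, 9.81  # assumed mechanics

def Psi_F(I_F):                                       # assumed flux map
    return 0.3 * np.tanh(I_F)

def dPsi_F(I_F):                                      # its derivative w.r.t. I_F
    return 0.3 * (1.0 - np.tanh(I_F) ** 2)

def rhs(t, x, U_A, U_F):
    """Right hand side of (1.15) with state x = [I_A, I_F, phi, omega]."""
    I_A, I_F, phi, omega = x
    dI_A = (U_A - R_A * I_A - k * Psi_F(I_F) * omega) / L_A
    dI_F = (U_F - R_F * I_F) / dPsi_F(I_F)
    domega = (k * Psi_F(I_F) * I_A - m * g * r) / (Theta_G + Theta_T + m * r**2)
    return [dI_A, dI_F, omega, domega]

sol = solve_ivp(rhs, (0.0, 2.0), [0.0, 1.0, 0.0, 0.0], args=(10.0, 5.0))
print(sol.y[2, -1] * r)  # output y = r * phi at t = 2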

To represent system (1.15) in a block diagram, we first integrate this system of first order differential equations and get

I_A(t) = I_A(0) + \int_0^t \frac{1}{L_A} \left( U_A(\tau) - R_A \cdot I_A(\tau) - k \cdot \Psi_F(I_F(\tau)) \cdot \omega(\tau) \right) d\tau   (1.16a)
I_F(t) = I_F(0) + \int_0^t \frac{1}{\frac{\partial}{\partial I_F} \Psi_F(I_F(\tau))} \left( U_F(\tau) - R_F \cdot I_F(\tau) \right) d\tau   (1.16b)
\varphi(t) = \varphi(0) + \int_0^t \omega(\tau)\, d\tau                 (1.16c)
\omega(t) = \omega(0) + \int_0^t \frac{1}{\Theta_G + \Theta_T + m \cdot r^2} \left( k \Psi_F(I_F(\tau)) \cdot I_A(\tau) - m \cdot g \cdot r \right) d\tau.   (1.16d)

In the first step, we consider the simplest equation (1.15c). Utilizing the standard blocks from
Figure 1.4 we obtain Figure 1.6.

Figure 1.6.: Block diagram of equation (1.16c)

Next, we consider equation (1.15d) and separate it into operating blocks in Figure 1.7.

Figure 1.7.: Block diagram of equation (1.16d) with Θ̃ = Θ_G + Θ_T + m · r²

Considering the current in the excitation circuit, we obtain the diagram of Figure 1.8. Last, we get Figure 1.9 for the armature circuit (1.16a).
Note that now all ingoing values to the block diagrams are either input u or states x. Hence, we
can connect these lines and obtain the overall block diagram.

Task 1.9
Draw the overall block diagram for the separately excited DC machine with drum roll from
Figure 1.5.
Figure 1.8.: Block diagram of equation (1.16b)

Figure 1.9.: Block diagram of equation (1.16a)

Solution to Task 1.9: The blocks from Figures 1.6–1.9 are connected via their states, cf. Figure 1.10.

1.5. Properties of systems


In order to analyze, characterize and classify systems, we can use certain properties. These properties not only allow us to get deeper insights into the systems themselves, but also offer possibilities to solve connected problems, design controllers and derive solution and control methods.

Figure 1.10.: Block diagram for separately excited DC machine with drum roll

The first and possibly most important property used in system theory is linearity. Informally, we can say that "a system is linear if all its components and connections are linear". Note that, unfortunately, we cannot conclude the converse, i.e. that a system is nonlinear if some of its components and connections are nonlinear. A counterexample is given by the circuits in Figure 1.11, which are equivalent, yet one system is linear and one contains nonlinear elements.

Figure 1.11.: Counterexample for linearity of systems

As outlined before, we focus on systems of the form (1.3), or for short

\dot{x}(t) = f(x(t), u(t), t),  x(t_0) = x_0                            (1.17a)
y(t) = h(x(t), u(t), t)                                                 (1.17b)

with state x ∈ Rnx , input u ∈ Rnu and output y ∈ Rny .


Suppose that φ(t; x0 , t0 , u) denotes the solution of the system (1.17) with initial value x(t0 ) = x0
and input u(τ ), t0 ≤ τ ≤ t. Then the linearity definition reads as follows:

Definition 1.10 (Linearity).


Consider a system of the form (1.17). If for all (feasible) inputs u and all initial times t_0 ≥ 0 we have that the output y(x_0, u, t) = h(φ(t; x_0, t_0, u), u(t), t) satisfies

y(\alpha_1 x_{0,1} + \alpha_2 x_{0,2}, 0, t) = \alpha_1 y(x_{0,1}, 0, t) + \alpha_2 y(x_{0,2}, 0, t)   (1.18a)
y(0, \beta_1 u_1 + \beta_2 u_2, t) = \beta_1 y(0, u_1, t) + \beta_2 y(0, u_2, t)                       (1.18b)
y(x_0, u, t) = y(x_0, 0, t) + y(0, u, t)                                                               (1.18c)

with α_1, α_2, β_1, β_2 ∈ R for all t ≥ t_0, then we call system (1.17) linear.

Within Definition 1.10 we call equation (1.18a) zero-input-linearity, equation (1.18b) zero-state-linearity, and equation (1.18c) the superposition principle. Given Definition 1.10, we get the following:

Theorem 1.11 (Linear system).


System (1.17) is linear if and only if it can be transformed into

\dot{x}(t) = A(t) \cdot x(t) + B(t) \cdot u(t),  x(t_0) = x_0           (1.19a)
y(t) = C(t) \cdot x(t) + D(t) \cdot u(t).                               (1.19b)

Here, the matrices A(t) ∈ R^{n_x × n_x}, B(t) ∈ R^{n_x × n_u}, C(t) ∈ R^{n_y × n_x} and D(t) ∈ R^{n_y × n_u} depend on time t ∈ R only.

Task 1.12
Is system

ẋ(t) = 2 + u(t)
y(t) = x(t)

linear?

Solution to Task 1.12: No, it isn't: for u ≡ 0 we obtain x(t) = x_0 + 2(t − t_0), so scaling the initial value x_0 does not scale the output accordingly, violating the zero-input-linearity (1.18a).

Apart from linearity, the dependence on time is a key element for systems (1.17). In particular, if a system is independent of time, the starting point may be shifted freely without changing the behavior of the system and its output. Note that the time dependence of a model may differ from the time dependence of the system. For example, a model of a rocket without its environment does not depend on weather or the orbital mechanics including the moon etc., yet the system itself clearly depends on these aspects, which are varying over time.

Definition 1.13 (Time invariance).


Consider a system of the form (1.17). Suppose y(t) is the output at time t ≥ t_0 for initial value x(t_0) = x_0 and input u(τ), t_0 ≤ τ ≤ t. If for all (feasible) inputs u and all initial times t_0 ≥ 0 we have

y(t - T) = h(\varphi(t; x_0, t_0 + T, u(\cdot - T)), u(t - T)),

then (1.17) is time invariant.

Remark 1.14
Note that by considering a function evaluation of any function f at time instant t − T with T > 0,
the function is „shifted to the right by T“.

Typically, the time invariant version of system (1.17) is written as

\dot{x}(t) = f(x(t), u(t)),  x(t_0) = x_0                               (1.20a)
y(t) = h(x(t), u(t)).                                                   (1.20b)

Task 1.15
Consider the example from Task 1.12. Is the system time invariant?

Solution to Task 1.15: Yes, it is.

Regarding time invariance, the following necessary and sufficient conditions hold:

Theorem 1.16 (Linear time invariant system).


System (1.17) is linear time invariant if and only if it can be transformed into

\dot{x}(t) = A \cdot x(t) + B \cdot u(t),  x(0) = x_0                   (1.21a)
y(t) = C \cdot x(t) + D \cdot u(t).                                     (1.21b)

We would like to note that linearity and time invariance are not limited to systems of the form (1.17). To see this, consider the following:

Task 1.17
Consider a conveyor belt and let u(t) denote the amount of material put on the lower end
of the belt and let x(t) denote the amount of material issued at the upper end of the belt
at time t. For transportation from lower to upper end, the time t T (dead time) is required,
i.e. we have x(t) = u(t − t T ). Is the system linear and time invariant? Can the system be
formulated in the form (1.17)?

Solution to Task 1.17: The system is linear and time invariant, yet no description of
form (1.17) exists.

If we focus further on the so-called autonomous linear time invariant system

\dot{x}(t) = A \cdot x(t),  x(t_0) = x_0,                               (1.22)

then we know by Lipschitz continuity of x ↦ A · x that a unique solution exists, and we can derive the corresponding solution by applying Picard's method of successive approximation. The latter reveals

x(t) = \lim_{j \to \infty} x_j(t) = \lim_{j \to \infty} \left( \mathrm{Id} + A \cdot t + A^2 \frac{t^2}{2} + \ldots + A^j \frac{t^j}{j!} \right) x_0 = \sum_{j=0}^{\infty} \frac{t^j}{j!} A^j x_0,

which allows us to define the following solution operator:

Definition 1.18 (Transition matrix).


Consider system (1.22). Then we call

\Phi(t) := \exp(A \cdot t) = \sum_{j=0}^{\infty} A^j \frac{t^j}{j!}     (1.23)

transition matrix of the system.

In particular, the following holds:



Theorem 1.19 (Solution of autonomous linear time invariant systems).


Consider system (1.22). Then we obtain the solution

x(t) = \Phi(t) \cdot x_0.                                               (1.24)

Task 1.20
Compute the transition matrix of the system

\begin{pmatrix} \dot{x}_1(t) \\ \dot{x}_2(t) \end{pmatrix} = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} \begin{pmatrix} x_1(t) \\ x_2(t) \end{pmatrix}.

Solution to Task 1.20:

\Phi(t) = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} + \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} \cdot t + \underbrace{\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}^2}_{=0} \cdot \frac{t^2}{2} = \begin{bmatrix} 1 & t \\ 0 & 1 \end{bmatrix}.
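The result can be cross-checked numerically, since scipy provides the matrix exponential (1.23) directly:

# Numerical cross-check of Task 1.20: scipy's matrix exponential of A*t
# should reproduce the transition matrix [[1, t], [0, 1]].
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [0.0, 0.0]])
t = 3.7
print(expm(A * t))  # [[1.  3.7], [0.  1. ]]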

For the linear time invariant system (1.21), we can utilize the transition matrix and the method of
variation of constants to see the following:

Theorem 1.21 (Solution for linear time invariant systems).


Consider system (1.21) and let Φ denote the transition matrix of system (1.21) with u ≡ 0. Then the solution of system (1.21) is given by

x(t) = \Phi(t) \cdot x_0 + \int_0^t \Phi(t - \tau) \cdot B \cdot u(\tau)\, d\tau   (1.25a)
y(t) = C \cdot x(t) + D \cdot u(t).                                                (1.25b)

Task 1.22
Consider the PI controller given by Figure 1.12 with corresponding equations

\dot{U}_C(t) = \frac{1}{R_1 C} u(t)
y(t) = -U_C(t) - \frac{R_2}{R_1} \cdot u(t).

Use Theorem 1.21 to compute the output y(t) for any feasible input u(t).

Figure 1.12.: Electronic circuit of a PI controller

Solution to Task 1.22: We directly obtain

y(t) = -\frac{1}{R_1 C} \int_0^t u(\tau)\, d\tau - \frac{R_2}{R_1} u(t),

which gives us the proportional parameter K_P = -R_2/R_1 and the integral parameter K_I = -1/(R_1 C) of the controller.
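The formula can be verified numerically for a concrete input, e.g. a unit step; the component values in the following Python sketch are made up.

# Quick numerical check of Task 1.22 for a unit step input u(t) = 1: the
# formula predicts y(t) = -t/(R1*C) - R2/R1. The component values are made up.
import numpy as np

R1, R2, C = 1e4, 2e4, 1e-4  # assumed component values, so R1*C = 1
t = np.linspace(0.0, 1.0, 1001)
u = np.ones_like(t)

# trapezoidal integration of u to obtain U_C(t), then the output equation
UC = np.concatenate(([0.0], np.cumsum(0.5 * (u[1:] + u[:-1]) * np.diff(t)))) / (R1 * C)
y = -UC - (R2 / R1) * u

print(y[-1], -1.0 / (R1 * C) - R2 / R1)  # both approximately -3.0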

The last task is an example of the core of this lecture, the systematic manipulation of systems to fulfill tasks or force behavior. As we will see later in the lecture, the systematic manipulation of nonlinear systems is in general more complicated as compared to linear systems. Yet, for sufficiently small neighborhoods of points in the operating range of a system, results for linear systems also apply to nonlinear systems. This is particularly useful if these points are equilibria (constant operating points) or reference trajectories. To this end, we consider autonomous nonlinear systems of the form

\dot{x}(t) = f(x(t))                                                    (1.26)

and define:

Definition 1.23 (Equilibrium).


Given a system of form (1.26), we call a point x^\star \in \mathbb{R}^{n_x} an equilibrium if

f(x^\star) = 0 \quad \forall t \ge 0.                                   (1.27)

Task 1.24
Compute the equilibria for the systems

\dot{x}(t) = (x(t) - 1) \cdot (x(t) - 2) \cdot (x(t) - 3),              (1.28a)
\begin{pmatrix} \dot{x}_1(t) \\ \dot{x}_2(t) \end{pmatrix} = \begin{pmatrix} x_2(t) \cdot \exp(-x_1(t)) \\ \sin(x_2(t)) \end{pmatrix},   (1.28b)
\dot{x}(t) = x(t)^2 + 1.                                                (1.28c)

Solution to Task 1.24: For system (1.28a) we have three equilibria x_1^\star = 1, x_2^\star = 2 and x_3^\star = 3.
For system (1.28b) we have infinitely many equilibria x^\star \in \{x \in \mathbb{R}^2 \mid x_2 = 0 \wedge x_1 \in \mathbb{R}\}.
For system (1.28c) there exists no equilibrium.
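Equilibria of this kind can also be computed symbolically; a minimal sketch using sympy for (1.28a) and (1.28c), while for (1.28b) the x_1-parametrized family is found by hand as above:

# Computing the equilibria of (1.28a) and (1.28c) symbolically with sympy.
import sympy as sp

x = sp.symbols('x', real=True)
print(sp.solve((x - 1) * (x - 2) * (x - 3), x))  # [1, 2, 3]
print(sp.solve(x**2 + 1, x))                     # []: no real equilibrium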

Remark 1.25
For autonomous linear time invariant systems (1.22) we have

  x^\star = 0 as the unique equilibrium if and only if A is regular, i.e. det(A) ≠ 0, or

  infinitely many equilibria if and only if A is singular, i.e. det(A) = 0.

If the nonlinear system is not autonomous, i.e.

ẋ(t) = f (x(t), u(t)), (1.29)

then the input u ∈ Rnu needs to be constant and fixed to u = u⋆ in order to compute the
equilibria.

Definition 1.26 (Operating point).


Consider system (1.29). Then the pairs (x⋆ , u⋆ ) satisfying

f (x⋆ , u⋆ ) = 0 (1.30)

are called operating points of the system. If (1.30) holds true for any u⋆ , then the operating point
is called strong or robust operating point.

Remark 1.27
For linear time invariant systems (1.21a) we have

x⋆ = − A−1 · B · u⋆ iff det( A) ̸= 0,

infinitely many operating points iff det( A) = 0 and rank( A) = rank([ A, B · u⋆ ]),

no operating points iff det( A) = 0 and rank( A) ̸= rank([ A, B · u⋆ ])

The property we are most interested in within control theory is stability. Utilizing Definition 1.26, we can introduce strong and weak concepts of stability and asymptotic stability, covering robustness and controllability. These concepts depend on the interpretation of u as an external control or a disturbance.

Definition 1.28 (Stability and Controllability).


For a system (1.29) we call x^\star a

  strongly or robustly stable operating point if, for each ε > 0, there exists a real number δ = δ(ε) > 0 such that for all u we have

    \|x_0\| \le \delta \implies \|x(t)\| \le \varepsilon \quad \forall t \ge 0   (1.31)

  strongly or robustly asymptotically stable operating point if it is stable and there exists a positive real constant r such that for all u

    \lim_{t \to \infty} x(t) = 0                                        (1.32)

  holds for all x_0 satisfying \|x_0\| \le r. If additionally r can be chosen arbitrarily large, then x^\star is called globally strongly or robustly asymptotically stable.

  weakly stable or controllable operating point if, for each ε > 0, there exists a real number δ = δ(ε) > 0 such that for each x_0 there exists a control u guaranteeing

    \|x_0\| \le \delta \implies \|x(t)\| \le \varepsilon \quad \forall t \ge 0.   (1.33)

  weakly asymptotically stable or asymptotically controllable operating point if there exists a control u depending on x_0 such that (1.33) holds and there exists a positive constant r such that

    \lim_{t \to \infty} x(t) = 0 \quad \forall \|x_0\| \le r.           (1.34)

  If additionally r can be chosen arbitrarily large, then x^\star is called globally asymptotically stable.

Task 1.29
Draw solutions of systems for each of the cases in Definition 1.28.

Remark 1.30 (BIBO stability)


In some books, the concept of strong/robust stability is also termed BIBO (bounded input bounded
output) stability.

Note that strongly asymptotically stable control systems are boring from a control point of view since the chosen control does not affect the stability property of the system. Still, the control can be used to improve the performance of the system. Moreover, strong asymptotic stability is interesting in the presence of significant measurement or discretization errors. Its most interesting application is in the analysis of robustness of a system, i.e. whether or not there exists an external input (in that case a disturbance) which can destabilize the system.
The concept of weak stability, on the other hand, naturally leads to the question of how to compute a control law such that x^\star is weakly stable, and, in particular, how to characterize the quality of a control law.
In the following chapter, our focus will be to design a control law such that the stability property can be forced to hold for a given system.
Part I.

Frequency Domain
CHAPTER 2

MODELING OF COMPLEX CONTROL LOOPS

In modeling of control systems, we used a white box idea in Chapter 1 and introduced the state
of a system. In practice, however, deriving such a white box model is neither always necessary
nor productive. In many (especially in simple) cases, a black box approach allows us to derive a
control with required properties much more easily. This approach utilizes the so called frequency
domain. In that case, the map between input and output is not defined via a state dependent
dynamic, but via a direct map from input to output, the so called transfer function. As we learned
in Control Engineering 1, there exists a linear and invertible transformation between the time
domain which we used in Chapter 1 and the frequency domain, the so called Laplace transform
(or z transform in the discrete time case).
Within this chapter, we first recall the connection of frequency and time domain for simple systems before moving to more complex control loops.
For further details we additionally refer to the DIN 19226 [1, 2].

2.1. Laplace transform


The Laplace transform is a one-to-one map of functions of time t to functions of a complex variable s. For causal functions f(t), that is functions satisfying f(t) = 0 for t < 0, the (one sided) map itself is defined as follows:

Definition 2.1 (Laplace transform function).


Consider a function f(t) which is

  causal, i.e. f(t) = 0 for t < 0,

  piecewise continuous on every finite time interval t ≥ 0, and

  bounded by |f(t)| ≤ M exp(γt) for suitable constants γ, M > 0.

Then we call the integral

\hat{f}(s) = \mathcal{L}(f(t)) = \int_0^\infty \exp(-st) \cdot f(t)\, dt, \quad s = \alpha + i\omega   (2.1)

Laplace transform of the time function f(t), and the set C_\gamma = \{s \in \mathbb{C} \mid \mathrm{Re}(s) > \gamma\} is called area of existence of \hat{f}(s).

Remark 2.2
Note that the integral (2.1) converges absolutely if Re(s) > γ.

Task 2.3
Compute the Laplace transform and its area of existence for f (t) = exp( at).

Solution to Task 2.3: We obtain

\hat{f}(s) = \int_0^\infty \exp(-st) \cdot \exp(at)\, dt = \left[ \frac{\exp(-(s-a)t)}{-(s-a)} \right]_0^\infty = \frac{1}{s-a}

for Re(s) > a, i.e. area of existence C_a = \{s \in \mathbb{C} \mid \mathrm{Re}(s) > a\}.
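The computation can be cross-checked symbolically; sympy ships a Laplace transform that also returns the convergence half-plane:

# Cross-checking Task 2.3 with sympy's Laplace transform.
import sympy as sp

t, s = sp.symbols('t s', positive=True)
a = sp.symbols('a', real=True)
F, plane, cond = sp.laplace_transform(sp.exp(a * t), t, s)
print(F, plane)  # 1/(s - a), valid for Re(s) > a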

Two special cases of Laplace transformed functions are the so-called Heaviside and Dirac delta functions. The Heaviside function represents a unit jump, which is not differentiable but integrable via the Laplace transform, and is not defined at the jump point.

Task 2.4
Compute the Laplace transform of the Heaviside function

\eta(t) = \begin{cases} 0, & t < 0 \\ \text{undefined}, & t = 0 \\ 1, & t > 0 \end{cases}
1, t>0


Figure 2.1.: Sketch of the Heaviside function

Solution to Task 2.4:

\mathcal{L}(\eta(t)) = \int_0^\infty \exp(-st)\, dt = \left[ \frac{\exp(-st)}{-s} \right]_0^\infty = \frac{1}{s} \quad \text{for } \mathrm{Re}(s) > 0

The Dirac delta function is the left sided jump height of the Heaviside function, or may be interpreted as a functional that aligns an n-times continuously differentiable function to an initial value.

Task 2.5
Compute the Laplace transform of the Dirac delta function, defined via

\int_{-\infty}^{\infty} \delta(t) \cdot g(t)\, dt = g(0)
\int_{-\infty}^{\infty} \left( \frac{d^n}{dt^n} \delta(t) \right) g(t)\, dt = (-1)^n \cdot \frac{d^n}{dt^n} g(0)

or (via the Heaviside function)

\delta(t) = \lim_{\tau \to 0} \frac{\eta(t) - \eta(t - \tau)}{\tau}.

Figure 2.2.: Sketch of the Dirac delta function

Solution to Task 2.5:

\mathcal{L}(\delta(t)) = \lim_{\tau \to 0} \mathcal{L}\left( \frac{\eta(t) - \eta(t-\tau)}{\tau} \right)
 = \lim_{\tau \to 0} \frac{1}{\tau} \left( \int_0^\infty \eta(t) \exp(-st)\, dt - \int_0^\infty \eta(t-\tau) \exp(-st)\, dt \right)
 = \lim_{\tau \to 0} \frac{1}{\tau} \int_0^\tau \exp(-st)\, dt = \lim_{\tau \to 0} \frac{1 - \exp(-s\tau)}{\tau s} \overset{\text{l'Hospital}}{=} \lim_{\tau \to 0} \frac{s \cdot \exp(-s\tau)}{s} = 1

Since the Laplace transform is one to one, it can be inverted:

Definition 2.6 (Inverse Laplace transform function).


Consider a function \hat{f}(s) : C_\gamma \to \mathbb{C}, which is well defined on C_\gamma. Then we call the integral

f(t) = \mathcal{L}^{-1}(\hat{f}(s)) = \frac{1}{2\pi i} \int_{r-i\infty}^{r+i\infty} \hat{f}(s) \cdot \exp(st)\, ds, \quad t \ge 0, \; r \in \mathbb{R}   (2.2)

the inverse Laplace transform.

The reason why the Laplace transform or Laplace transformed functions are used quite often to solve dynamic problems is due to its properties: While in the time domain the solution of a dynamic requires the computation of integrals, derivatives, time delays/advances, convolutions etc., in the frequency domain these problems can be solved using algebraic equations only. To recall the main properties and laws of computation, we refer to Table A.1.
Note that typically the computation of a Laplace transform and of its inverse is not done via equations (2.1) or (2.2) but via equivalence tables. Table B.1 summarizes a few of these equivalences. The main mathematical tool used to apply the equivalences from Table B.1 is the partial fraction decomposition. Since the entire fraction is typically not contained in the table, this method allows us to split a fraction into components which are available in the table and therefore can be transformed.

Theorem 2.7 (Partial fraction decomposition).


Consider a function

\hat{f}(s) = \frac{\hat{p}(s)}{\hat{q}(s)}                              (2.3)

where \hat{p}(s) and \hat{q}(s) are coprime real polynomials satisfying \deg(\hat{p}(s)) \le \deg(\hat{q}(s)) = n. Furthermore, suppose that \hat{q}(s) can be transformed into the form

\hat{q}(s) = \prod_{j=1}^{h} \left( s - \lambda_j \right)^{k_j} \prod_{j=1}^{m} \left( \left( s - \alpha_j \right)^2 + \beta_j^2 \right)^{l_j}   (2.4)

with \deg(\hat{q}(s)) = n = \sum_{j=1}^{h} k_j + 2 \sum_{j=1}^{m} l_j. Then, function \hat{f}(s) can be uniquely reformulated to the partial fraction decomposition

\hat{f}(s) = c_0 + \sum_{j=1}^{h} \sum_{i=1}^{k_j} \frac{c_{ji}}{\left( s - \lambda_j \right)^i} + \sum_{j=1}^{m} \sum_{i=1}^{l_j} \frac{d_{ji} + e_{ji} s}{\left( \left( s - \alpha_j \right)^2 + \beta_j^2 \right)^i}   (2.5)

with c_0 = \lim_{s \to \infty} \frac{\hat{p}(s)}{\hat{q}(s)} and real coefficients c_{ji}, d_{ji} and e_{ji}.
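In practice, the decomposition (2.5) can be computed symbolically; a minimal sketch using sympy's apart on a made-up example fraction:

# Partial fraction decomposition in the sense of Theorem 2.7 via sympy.
import sympy as sp

s = sp.symbols('s')
F = (5 * s + 3) / ((s + 1) * (s**2 + 2 * s + 5))
print(sp.apart(F, s))  # splits F into the summands of (2.5)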

While the Laplace transform is not necessary to derive a description of a black box model, it is
very useful to see the interconnection between the white box and black box description.

2.2. Transfer matrix


Within the lecture, we aim to extend the concepts of control, stability and controllability to more complex systems. To this end, we abstract from the transfer function treated in Control Engineering 1 for the one dimensional case and introduce the transfer matrix.
A transfer matrix in general is a map of a system in the sense of Definition 1.1. If we reconsider
the linear time invariant system from equation (1.21)

\dot{x}(t) = A \cdot x(t) + B \cdot u(t),  x(0) = x_0
y(t) = C \cdot x(t) + D \cdot u(t),

then applying the Laplace transform yields

s \cdot \hat{x}(s) - x_0 = A \cdot \hat{x}(s) + B \cdot \hat{u}(s)
\hat{y}(s) = C \cdot \hat{x}(s) + D \cdot \hat{u}(s),

which gives us

\hat{y}(s) = C \cdot (s \cdot \mathrm{Id} - A)^{-1} \cdot x_0 + \left( C \cdot (s \cdot \mathrm{Id} - A)^{-1} \cdot B + D \right) \cdot \hat{u}(s).   (2.7)

Remark 2.8 (SISO)


Note that in the one dimensional case, that is one input u and one output y, we call G the transfer function. This case is also termed SISO – single input single output. Unless stated otherwise, we directly state all results for the multi input multi output (MIMO) case.

Technically, we are not restricted to the linear case of system (1.21). For this reason, we can
directly define the following:

Definition 2.9 (Transfer matrix).


Consider a time invariant system (1.20) and a function G(s) \in \mathbb{C}^{n_y \times n_u}. We call G(s) the transfer matrix of system (1.20) if it satisfies

\hat{y}(s) = G(s) \cdot \hat{u}(s).                                     (2.8)

In the linear case, we know from Theorem 1.21 that the general solution to system (1.21) reads

x(t) = \Phi(t) \cdot x_0 + \int_0^t \Phi(t - \tau) \cdot B \cdot u(\tau)\, d\tau
y(t) = C \cdot x(t) + D \cdot u(t)

with transition matrix Φ(t). Applying the Laplace transform to the state dynamics and the convolution rule from Table A.1 we have

x̂(s) = Φ̂(s) · x0 + Φ̂(s) · B · û(s) (2.9)

and thereby

Φ̂(s) = (s · Id − A)−1 . (2.10)

Remark 2.10
The two parts of (2.9) can be interpreted physically.

  The first part \hat{\Phi}(s) \cdot x_0 represents the response of the system if no input is applied. For this reason, it is called zero input response.

  The second part \hat{\Phi}(s) \cdot B \cdot \hat{u}(s) represents the response to an input if the system state is zero. Similarly, it is termed zero state response.

For the zero state response of the system, i.e. x0 = 0, we can conclude

Theorem 2.11 (Transfer matrix).


Consider a linear time invariant system (1.21) with x0 = 0. Then the transfer matrix is given by

G (s) = C · (s · Id − A)−1 · B + D = C · Φ̂(s) · B + D. (2.11)

Remark 2.12
Note that due to non-commutativity of matrix multiplication, the sequence of transfer matrices is
important and may not be switched as in the one dimensional case of transfer functions.
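Formula (2.11) can be evaluated symbolically for concrete matrices; the system in the following sketch is a made-up second order example:

# Computing the transfer matrix (2.11) symbolically for a small example system.
import sympy as sp

s = sp.symbols('s')
A = sp.Matrix([[0, 1], [-2, -3]])
B = sp.Matrix([[0], [1]])
C = sp.Matrix([[1, 0]])
D = sp.Matrix([[0]])

G = C * (s * sp.eye(2) - A).inv() * B + D
print(sp.simplify(G))  # Matrix([[1/(s**2 + 3*s + 2)]])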

The inverse direction, i.e. to derive a state description from a transfer matrix/function, is called the realization problem. For a system, the related property is called properness.

Definition 2.13 (Properness).


Consider a transfer matrix G (s) defining the system

ŷ(s) = G (s) · û(s).

If there exists a (possibly nonlinear) system (1.20) such that G (s) is the transfer function of the
system, then the transfer matrix G (s) is called proper.

For the latter, if a solution exists then the solution is not unique, and one typically addresses a minimal realization only. The main result is the following, which we state for the one dimensional case:

Theorem 2.14 (Properness).


Consider a transfer function

G(s) = \frac{z(s)}{n(s)}                                                (2.12)

with polynomials z(s) and n(s). The transfer function is proper if and only if

\deg(z(s)) \le \deg(n(s)) \quad \text{or equivalently} \quad \lim_{s \to \infty} |G(s)| < \infty.   (2.13)

Remark 2.15
Properness of a system can be extended to the MIMO case. In order to compute the latter, one typically applies a parameter transformation first (similar to the computation of the Jordan matrix) to separate the connections between inputs and outputs. For further details, we refer to the book of Isidori [8, Chapter 5] for the general nonlinear case or the article of Müller [15] for the linear case.

In the literature, there are two canonical minimal realizations, which can be obtained via partial fraction decomposition, cf. Theorem 2.7. The canonical minimal realizations are called controllable normal form and observable normal form and are given in the appendix, cf. Definitions B.1 and B.2 respectively. As the MIMO case introduces (many) further zeros and ones, we restrict ourselves to the SISO case here. A full description can be found in Müller [15].

Theorem 2.16 (Poles and zeros of transfer function).


Consider a transfer function

G(s) = \frac{z(s)}{n(s)}                                                (2.14)

with coprime polynomials z(s) and n(s). Then we have \deg(z(s)) \le \deg(n(s)) \le n_x, and the zeros of n(s) are called poles of the transfer function G(s) and are equal to eigenvalues of A.

Theorem 2.17 (Poles and zeros of transfer matrix).


For a transfer matrix G (s), the poles and zeros are given by the interconnection of the transfer
functions by applying the computing rules for Laplace transformed functions.

Utilizing Definition 1.28 on stability in the frequency domain, we can show that the following
holds:

Theorem 2.18 (Strong/robust/BIBO stability).


Consider a transfer matrix G(s) defining the system

\hat{y}(s) = G(s) \cdot \hat{u}(s).

Then the system is strongly/robustly/BIBO stable if and only if for the impulse response

g(t) = \mathcal{L}^{-1}(G(s) \cdot \mathrm{Id})                         (2.15)

the following inequality holds:

\int_0^\infty \|g(t)\|\, dt < \infty.                                   (2.16)

If we know that the transfer matrix corresponds to a linear time invariant system, then Theorem 2.18 simplifies to:

Theorem 2.19 (Strong/robust/BIBO stability for linear time invariant systems).

A linear time invariant system (1.21) is strongly/robustly/BIBO stable if and only if inequality (2.16) holds for the impulse response

g(t) = \mathcal{L}^{-1}(G(s) \cdot \mathrm{Id}) = \mathcal{L}^{-1}\left( C \cdot (s \cdot \mathrm{Id} - A)^{-1} \cdot B \right).   (2.17)

Theorem 2.20 (Strong/robust/BIBO stability for linear time invariant systems via transfer func-
tion).
A linear time invariant system (1.21) is strong/robust/BIBO stable if and only if all poles
34

s j = α j + iω j of the transfer function G (s) satisfy

Re(s j ) = α j < 0. (2.18)
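Condition (2.18) is straightforward to check numerically; the denominator polynomial in the following sketch is a made-up example:

# Checking condition (2.18) numerically: the system is BIBO stable iff all
# roots of the denominator polynomial have negative real part.
import numpy as np

den = [1.0, 3.0, 2.0]                       # denominator s**2 + 3*s + 2
poles = np.roots(den)
print(poles, bool(np.all(poles.real < 0)))  # [-2. -1.] True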

Before coming to structures with multiple inputs and multiple outputs, we consider two intermediate cases where we use multiple outputs to improve the controller for a single input.

2.3. Cascade control


The concept of a cascade control can be applied to systems (1.20) if more than one output but only one input is available. The idea is to distinguish between fast and slow processes using fast and slow outputs/sensors. In an inner control loop, disturbances on the system are compensated in a fast manner, while in the outer loop low frequency changes are tackled. The block diagram of the cascade control is sketched in Figure 2.3.

[Diagram: reference w enters the outer controller G_R_j, whose output feeds the inner controller G_R_{j−1}, the inner plant G_S_{j−1} with output y_{j−1} and the outer plant G_S_j with output y_j; both loops are closed by negative feedback]

Figure 2.3.: Block diagram of a cascade control

The reason for utilizing such a control structure is that, in order for a controller to react, a distur-
bance must first affect the state and the output. As a consequence, the output will deviate from
the target point, and the speed of recovery is determined by the response speed of the control
loop. Unfortunately, plants which are difficult to control often require low gains and long integral
times for stability and hence have a slow response. Such plants are particularly prone to errors
induced by disturbances. Examples of such applications can, e.g., be found in motor control, cf.
Example 2.21.

Example 2.21 (Cascaded motor control)


Consider the control structure in Figure 2.4 of an electric engine where y M denotes the drive
torque, yω denotes the rotating speed of the engine, and y φ denotes the position angle of the
engine. The controller has the three levels

1. current / torque control u M ,

2. speed control uω , and

3. position control u φ .

Here, the disturbance is modeled as a load moment d M .

[Diagram: three nested loops with controllers G_R3, G_R2, G_R1 producing u_φ, u_ω, u_M, plants G_S1, G_S2, G_S3 with outputs y_M, y_ω, y_φ, and load disturbance d_M entering after G_S1]

Figure 2.4.: Block diagram of a cascade motor control

Remark 2.22
Closed loop systems can in general be considered to show the behavior of a second order system
with a natural rotating frequency and a damping factor. At frequencies faster than the natural
frequency, one can see that the closed loop gain decreases rapidly (at 12 dB per octave). Conse-
quently, disturbances faster than double the natural frequency will remain more or less uncor-
rected. In case of underdamped systems, i.e. damping factor less than unity, disturbances with
frequency around the natural frequency may even be magnified.

If the disturbance can be measured, and its effect known, (even approximately), the idea of a
cascade controller can be imposed. As such, a correcting signal can be added either on the inner
or outer loop to compensate the issues described above.
While typically a two loop structure as shown in Figure 2.3 is used, a cascade controller technically
distinguishes between outer and inner loops only. Hence, also concatenated loops are possible.
More formally, extending on Figure 2.3, the transfer function of the cascade control is given by
the following.

Definition 2.23 (Transfer function cascade control).


Consider a system (1.20) with one input and at least j_max ≥ 2 outputs and tracking reference w(t).
Let G_R_j(s) and G_S_j(s) denote the control and system transfer functions for j = 1, . . . , j_max.

Suppose the structure of Figure 2.3 holds iteratively for these controllers and systems. Then we
call

ŷ(s) = G_cascade_{j_max}(s) · ŵ(s) (2.19)

transfer function of a cascade controlled system where the transfer functions G_cascade_j(s) are
defined via

G_cascade_j(s) := (G_R_j(s) · G_cascade_{j−1}(s) · G_S_j(s)) / (1 + G_R_j(s) · G_cascade_{j−1}(s) · G_S_j(s))   ∀ j = 1, . . . , j_max (2.20)

with G_cascade_0(s) = Id(s).
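The recursion (2.20) is easy to evaluate numerically. The following Python sketch (our own illustration; transfer functions are represented as pairs of polynomial coefficient arrays in descending powers of s) closes the loops from the inside out:

    import numpy as np

    def series(g1, g2):
        """(n1/d1) * (n2/d2) as coefficient arrays."""
        return np.polymul(g1[0], g2[0]), np.polymul(g1[1], g2[1])

    def unity_feedback(g):
        """g/(1+g) for a negative unity-feedback loop: n/(d+n)."""
        return g[0], np.polyadd(g[1], g[0])

    def cascade(controllers, plants):
        """Recursion (2.20): start with G_cascade_0 = Id and let each
        closed inner loop become part of the next outer open loop."""
        g = (np.array([1.0]), np.array([1.0]))
        for gr, gs in zip(controllers, plants):
            g = unity_feedback(series(series(gr, g), gs))
        return g

    # Hypothetical two-loop example: G_R1 = 2, G_S1 = 1/(s+1), G_R2 = 5, G_S2 = 1/s
    num, den = cascade([([2.0], [1.0]), ([5.0], [1.0])],
                       [([1.0], [1.0, 1.0]), ([1.0], [1.0, 0.0])])
    print(num, den)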

In that sense, an outer loop control can therefore be interpreted as feed forward for an inner loop.
It supplies a reference trajectory which shall be followed by the (typically faster) inner controller.
On the downside, the usage of cascade control increases the complexity of the control structure.
Additionally, it requires additional measurement devices and controllers.

Remark 2.24
In general, if the inner loop is at least three times faster than the outer loop, then the improved
performance justifies the investment of a cascade controller.

The nice property of the cascade control is that the problems to be tackled by the concatenated
loops can be separated and therefore treated subsequently.

Remark 2.25
Note that the problems are not treatable independently but require subsequent steps.

The steps to be taken can be combined into the following method:

Algorithm 2.26 (Design of a cascade control)


Consider the setting of Definition 2.23. Suppose performance criteria and input/output bounds to
be given for each loop.
For each j = 1, . . . , jmax do
(1) Disregard all loops k > j and consider the open loop GR j (s) · Gcascadej−1 (s) · GSj (s). De-
sign the controller GR j (s) such that the closed loop satisfies the respective performance
criteria and input/output bounds.

(2) Compute and simplify the closed loop Gcascadej (s).



As noted before, the problems are nested and tackled in a subsequent manner. From Algo-
rithm 2.26 we observe, that the closed loop behavior of the inner loop is the open loop behavior
of the subsequent outer loop. Hence, the design of the inner loop control heavily influences the
difficulty of designing the subsequent outer loop. I.e., if a fast behavior is desired, then the inner
control needs to be aggressive.
In most cases, only the performance of the outermost loop is of interest to the user, whereas all
inner loops are only a means to an end. Therefore, criteria such as „no constant deviation“ or
„minimal overshoot“ are not of interest for the inner loops. For this reason, typically P or PDT1
controllers are used on the inner loops, which are also faster, whereas the much slower PI
controllers are applied on the outermost loop.

Task 2.27
Consider the cascade control from Figure 2.5. What is the advantage of using a cascade
controller utilizing y1 (s) and y2 (s) as compared to a SISO controller based on y2 (s) only?

[Diagram: outer controller G_R2 and inner controller G_R1 with plant G_S1 producing y_1, followed by an integrator 1/s producing y_2; inner feedback of y_1, outer feedback of y_2]

Figure 2.5.: Block diagram of a cascade control for integral outer loop

Solution to Task 2.27: Designing the inner loop with a P controller is equivalent to using a D
controller on the outer loop. This configuration allows to get rid of the disadvantages of the
D controller such as noise sensitivity and the rank problem in the transfer function.

Task 2.28
Consider a robot arm with variables as shown in Figure 2.6. For robots, three typical control
types exist:

torque control, i.e. to apply a defined torque / moment within a working area,

position control, i.e. to guarantee sufficiently accurate movement of the arm independent from torques / moments, and

hybrid control, i.e. an application dependent switching between torque and position control.

Consider the cascade control from Figure 2.7 to be applied for position control of one of the
angles φ j , j = 1, 2, 3. Suppose the transfer function block coefficients of drive j to read

K_Pφ = 0.2, K_DTφ = 0.009 s
K_Iω = 0.9 s⁻¹
K_Pu = 2.8, K_DTu = 0.073 s
K_Pφ = 3.5, K_DTφ = 0.069 s

For inner loop control coefficients K P1 = 25.5 and KT1 = 0.073s compute the optimal
coefficients K P2 , K I 2 of the external PI controller.

[Sketch: robot arm with three links of lengths l_1, l_2, l_3 and joint angles φ_1, φ_2, φ_3]
Figure 2.6.: Sketch of a 3DOF robot arm

[Diagram: outer PI controller (K_P2, K_I2) → inner controller (K_P1, K_T1) → drive (K_Pu, K_DTu) → integrator K_Iω producing y_ω → position block (K_Pφ, K_DTφ) producing y_φ; inner feedback of y_ω through a (K_Pφ, K_DTφ) block, outer feedback of y_φ]
Figure 2.7.: Block diagram of a cascade robot control for one joint drive

Solution to Task 2.28: The inner open loop transfer function of the drive reads

G_01(s) = K_P1 · (1 + s · K_T1) · K_Pu/(1 + s · K_DTu) · K_Iω/s.

Since K_T1 ≡ K_DTu = 0.073 s we obtain

G_01(s) = K_P1 · K_Pu · K_Iω / s

and obtain the inner closed loop transfer function

G_w1(s) = 1/(1 + 1/G_01(s)) = 1/(1 + s/(K_P1 · K_Pu · K_Iω)) = 1/(1 + s · K_w1)

with

K_w1 := 1/(K_P1 · K_Pu · K_Iω) = 1/(25.5 · 2.8 · 0.9 s⁻¹) = 0.016 s.

Using the latter, the outer open loop transfer function reads

G_02(s) = K_P2 · (1 + s · K_I2)/(s · K_I2) · G_w1(s) · K_Pφ/(1 + s · K_DTφ)
        = K_P2 · (1 + s · K_I2)/(s · K_I2) · 1/(1 + s · K_w1) · K_Pφ/(1 + s · K_DTφ).

Hence, we can already fix the integrator constant and set

K_I2 := K_w1 = 0.016 s

and obtain the outer open loop transfer function

G_02(s) = K_P2 · K_Pφ / (s · K_I2 · (1 + s · K_DTφ)).

Therefore, the outer closed loop transfer function reads

G_w2(s) = 1/(1 + 1/G_02(s)) = K_P2 · K_Pφ / (K_P2 · K_Pφ + s · K_I2 · (1 + s · K_DTφ)).

For optimality, we require |G_w2(iω)|² ≈ 1 and see

G_w2(iω) = K_P2 · K_Pφ / (K_P2 · K_Pφ + iω · K_I2 · (1 + iω · K_DTφ))
         = K_P2 · K_Pφ / (K_P2 · K_Pφ − ω² · K_I2 · K_DTφ + iω · K_I2).

Hence, we have

|G_w2(iω)|² = K_P2² · K_Pφ² / ((K_P2 · K_Pφ − ω² · K_I2 · K_DTφ)² + (ω · K_I2)²)
            = K_P2² · K_Pφ² / (K_P2² · K_Pφ² − ω² · (2 · K_P2 · K_Pφ · K_I2 · K_DTφ − K_I2²) + ω⁴ · K_I2² · K_DTφ²).

Since we cannot influence the term ω⁴ · K_I2² · K_DTφ², we obtain

2 · K_P2 · K_Pφ · K_I2 · K_DTφ − K_I2² ≈ 0
⟺ K_P2 ≈ K_I2 / (2 · K_Pφ · K_DTφ) = 0.016 s / (2 · 0.2 · 0.009 s) = 4.44.
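The computed gains can be checked numerically; the short sketch below (ours) evaluates |G_w2(iω)| and confirms that it stays close to one up to fairly high frequencies:

    import numpy as np

    K_P2, K_I2, K_Pphi, K_DTphi = 4.44, 0.016, 0.2, 0.009   # gains from above

    w = np.logspace(0, 3, 7)                                # test frequencies [rad/s]
    Gw2 = K_P2 * K_Pphi / (K_P2 * K_Pphi
                           + 1j * w * K_I2 * (1 + 1j * w * K_DTphi))
    print(np.abs(Gw2))   # close to 1 at low frequencies, rolling off near 80 rad/s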

To summarize, a cascade control shows the following advantages/disadvantages given in Table 2.1
if compared to SISO.

Table 2.1.: Advantages and disadvantages of cascade control

Advantage | Disadvantage
✓ Identical design of control per loop as for SISO | ✗ More controllers required
✓ Possible integration of multiple outputs | ✗ More sensors required
✓ Improved disturbance rejection | ✗ Slower response due to higher order
✓ Improved controllability of local nonlinearities |
✓ Simplified integration of input/output bounds per loop | ✗ Possible wind-up for integral control caused by local bounds

2.4. Disturbance control


An alternative way of utilizing inner knowledge of the system to improve its performance is the
so called disturbance control. Disturbance control is closely connected to feed forward control,
which we analyze first. As we will see, unstable zeros cannot be treated via feed forward but
require a feedback control structure. At the same time, by design of a feedback control, a dis-
turbance d(t) can only be suppressed if the reference input w(t) and the disturbance d(t) are
within the same frequency range. Hence, if they exhibit different frequency ranges, then an ad-
ditional component must be added to the loop. In that case, the advantages of both feed forward
and feedback can be utilized to design disturbance suppression and reference tracking. Last, if
the disturbance can additionally be measured, we extend the concept to disturbance control.
A feed forward control G_F(s) aims to annihilate the dynamics of the system, i.e. to push the
combined dynamics of the feed forward and the system to identity, G_F(s) · G_S(s) = Id(s). A feed
forward is sketched in Figure 2.8.

[Diagram: w → G_F → u → G_S → y]

Figure 2.8.: Simple feed forward

If such an identity can be reached, that is G_F(s) = 1/G_S(s), the control would truly be optimal.
However, the main reasons why such a behavior is not realizable are:

1. Degree of numerator is larger than degree of denominator. Inverting the system requires this
(but it is not realizable) since the system G_S(s) exhibits a numerator degree smaller than its
denominator degree; only in exceptional cases both are identical.

→ Remedy: Add time constants to the denominator to equalize the degrees of numerator and
denominator, for example:

G_S(s) = K / ((1 + T_1 s) · (1 + T_2 s))   −→   G_F(s) = (1 + T_1 s) · (1 + T_2 s) / (K · (1 + Ts)²)
=⇒ G_w(s) = 1 / (1 + Ts)².

2. Negative lag time. If the system exhibits a lag time, then its inverse must have a negative
lag time, i.e. a predictor. Hence, the controller is not causal and can only be applied if the
output is known in advance.

→ Remedy: Ignore the negative lag time, for example

G_S(s) = K / (1 + T_1 s) · exp(−T_2 s)   −→   G_F(s) = (1 + T_1 s) / (K · (1 + Ts))
=⇒ G_w(s) = 1 / (1 + Ts) · exp(−T_2 s).

3. Unstable behavior. If the system GS (s) shows unstable zeros, i.e. it is not minimal phase,
then the feed forward GF (s) must have unstable poles. The latter may lead to unbounded
controls and if the poles and zeros do not cancel out exactly, then the loop will be unstable.

→ Remedy: Ignore the unstable zeros, for example

G_S(s) = K · (1 − T_1 s) / ((1 + T_2 s) · (1 + T_3 s))   −→   G_F(s) = (1 + T_2 s) · (1 + T_3 s) / (K · (1 + Ts)²)
=⇒ G_w(s) = (1 − T_1 s) / (1 + Ts)².

→ Remedy: Set the amplitude response |G_w(iω)| = 1. To this end, any unstable zero
of the system is compensated by its stable mirror image in the denominator of the
control, for example

G_S(s) = K · (1 − T_1 s) / ((1 + T_2 s) · (1 + T_3 s))   −→   G_F(s) = (1 + T_2 s) · (1 + T_3 s) / (K · (1 + T_1 s) · (1 + Ts))
=⇒ G_w(s) = (1 − T_1 s) / ((1 + T_1 s) · (1 + Ts)).

This transfer function is also called all pass.

→ Remedy: Set the phase response ∠G_w(iω) = 0. To this end, any unstable zero of the
system is compensated by its stable mirror image in the numerator of the control, for
example

G_S(s) = K · (1 − T_1 s) / ((1 + T_2 s) · (1 + T_3 s))   −→   G_F(s) = (1 + T_2 s) · (1 + T_3 s) · (1 + T_1 s) / (K · (1 + Ts)³)
=⇒ G_w(s) = (1 − T_1 s) · (1 + T_1 s) / (1 + Ts)³ = (1 − T_1² s²) / (1 + Ts)³.

Note that in all cases, the unstable zeros remain uncompensated. Hence, considering realizability
of a control, the choice of T is always a compromise between sensitivity wrt. noise and speed of
the control.
As a consequence, we know that unstable systems cannot be stabilized via feed forward, but
require a feedback structure to enforce stable behavior.

Remark 2.29
In fact, unstable zeros (and weakly damped ones, that is zeros close to the stability boundary)
should never be canceled out, to avoid instability of the control (or strong oscillations in case of
weakly damped zeros).

Yet, a disturbance can only be suppressed by design of a feedback control, if the reference input
and the disturbance are within the same frequency range. To tackle the case where these vari-
ables exhibit different frequency ranges, we introduce the precontrol and the equivalent prefilter
concept.
To integrate a feed forward and a feedback into one loop, two structures are possible, cf. Fig-
ure 2.9 and 2.10 respectively.

[Diagram: feed forward G_F acting on w, its output added to the output of controller G_R; feedback loop with plant G_S, output y, and disturbance d entering at the plant]
Figure 2.9.: Structure of a precontrol


[Diagram: w → prefilter G_P → feedback loop with controller G_R and plant G_S → y]

Figure 2.10.: Structure of a prefilter

Remark 2.30
Since the feed forward does not interact with the feedback, the stability properties of the closed
loop remain unchanged.

The structures of the precontrol in Figure 2.9 and of the prefilter in Figure 2.10 allow to simultaneously treat
reference tracking via the feedback and disturbance suppression via the feed forward. Hence, the
choice of T does not depend on the necessity to find a compromise between sensitivity wrt. noise
and speed of the control anymore.
More formally, we define the following:

Definition 2.31 (Transfer function precontrol).


Consider a system (1.20), which is SISO. Let GR (s), GF (s) and GS (s) denote the feedback, feed
forward and system transfer functions. Moreover, let w(t) denote the tracking reference and d(t)
denote the known disturbance. Last, suppose the structure of Figure 2.9 for these controllers and
systems to hold. Then we call

ŷ(s) = (G_R(s) · G_S(s) + G_F(s) · G_S(s)) / (1 + G_R(s) · G_S(s)) · ŵ(s) (2.21)

transfer function of a precontrolled system.

Definition 2.32 (Transfer function prefilter).


Consider a system (1.20), which is SISO. Let GR (s), GF (s) and GS (s) denote the feedback, feed
forward and system transfer functions. Moreover, let w(t) denote the tracking reference and d(t)
denote the known disturbance. Last, suppose the structure of Figure 2.10 for these controllers and

systems to hold. Then we call

ŷ(s) = (G_P(s) · G_R(s) · G_S(s)) / (1 + G_R(s) · G_S(s)) · ŵ(s) (2.22)

transfer function of a prefiltered system.

If we compare both approaches, then we have identical behavior if the transfer functions are
identical, i.e.

(G_R(s) · G_S(s) + G_F(s) · G_S(s)) / (1 + G_R(s) · G_S(s)) = (G_P(s) · G_R(s) · G_S(s)) / (1 + G_R(s) · G_S(s))
⟺ G_R(s) · G_S(s) + G_F(s) · G_S(s) = G_P(s) · G_R(s) · G_S(s)
⟺ G_R(s) + G_F(s) = G_P(s) · G_R(s).

The latter reveals the following equivalency result:

Theorem 2.33 (Equivalency precontrol and prefilter).


Consider the precontrol and prefilter as given in Definition 2.31 and 2.32 respectively. If

G_P(s) ≡ 1 + G_F(s)/G_R(s) (2.23)

holds, then the transfer functions of precontrol and prefilter are identical.

The design of a precontrol is done with the following two steps:

Algorithm 2.34 (Design precontrol)


Consider a control system as illustrated in Figure 2.9.

(1) Design the feed forward GF (s) to be (approximately) the inverse of the system GS (s).

(2) Design the feedback G_R(s) such that the disturbance d(t) is suppressed as well as possible.

The idea to design a prefilter is exactly the other way around:

Algorithm 2.35 (Design prefilter)


Consider a control system as illustrated in Figure 2.10.

(1) Design the feedback G_R(s) such that the disturbance d(t) is suppressed as well as possible.

(2) Design the prefilter G_P(s) to be (approximately) the inverse of the closed loop system.

Remark 2.36
Note that the precontrol — in contrast to the prefilter — does not depend on the closed loop.
Consequently, there is no need to adapt it in case the closed loop is optimized or changed after
the design process is finished.

Remark 2.37
The prefilter can be used to compensate for unwanted behavior of the closed loop caused by
large values K D of the D part of a closed loop control. Large K D may be wanted to stabilize or
improve performance of the closed loop, yet its impact on zeros of the closed loop may lead to
large overshoots. The prefilter can be used to cancel out these zeros, hence the D part can be
designed without having to worry about possible negative impacts.

Task 2.38
Consider the (critically stable) system given by the transfer function

G_S(s) = 5 / (s · (s + 5)²).

Since the system exhibits an explicit I part, the feedback is designed as a PD controller
following

GR ( s ) = K P · ( s + 1) .

Design a prefilter, which leave the amplitude response unchanged.

Solution to Task 2.38: For the closed loop we obtain the transfer function

G_w(s) = [5 · K_P · (s + 1) / (s · (s + 5)²)] / [1 + 5 · K_P · (s + 1) / (s · (s + 5)²)]
       = 5 · K_P · (s + 1) / (s³ + 10s² + (5K_P + 25)s + 5K_P).

Hence, we observe an unwanted zero in the numerator given by (s + 1), which needs to be
canceled by the prefilter. To ensure an unchanged amplitude response, we require G_P(0) = 1.
Consequently, we design

G_P(s) = 1 / (s + 1).
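The cancellation can be verified numerically, e.g. with the following sketch (ours; the value of K_P is chosen arbitrarily for illustration):

    import numpy as np

    K_P = 10.0                                  # hypothetical PD gain
    num_w = [5 * K_P, 5 * K_P]                  # 5 K_P (s + 1)
    den_w = [1, 10, 5 * K_P + 25, 5 * K_P]      # s^3 + 10 s^2 + (5 K_P + 25) s + 5 K_P

    # Prefilter G_P(s) = 1/(s + 1): multiply the denominator by (s + 1)
    den_total = np.polymul(den_w, [1, 1])

    # The factor (s + 1) cancels against the zero, and the DC gain stays 1:
    print(np.polyval(num_w, 0) / np.polyval(den_total, 0))   # 1.0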

A different approach can be used if the disturbance can be measured. In this case, the system
no longer exhibits only one but two outputs and may additionally show a disturbance transfer
function. Since the system is disturbed, a feedback is required to stabilize it. The feedback is
based on the output of the system. As the disturbance itself cannot be influenced, no closed loop
can be used to compensate for the disturbance. Yet, a feed forward can be applied based on the
measured disturbance to update the feedback. A sketch of the system is given in Figure 2.11.

[Diagram: disturbance d enters the loop through G_D; the measured disturbance is fed through G_Z and subtracted from the output of controller G_R; plant G_S produces y, which is fed back]

Figure 2.11.: Structure of a disturbance control

More formally, we define the following:

Definition 2.39 (Transfer function disturbance control).


Consider a system (1.20) with one input and two outputs. Moreover, let w(t) denote the tracking
reference and d(t) denote the disturbance, which can be measured by one of the outputs. Let
GD (s) and GS (s) denote the system and disturbance transfer functions and let GR (s) and GZ (s)
denote the feedback and feed forward control. Last, suppose the structure of Figure 2.11 for these
controllers and systems to hold. Then we call

ŷ(s) = G_R(s) · G_S(s) / (1 + G_R(s) · G_S(s)) · ŵ(s) + (G_D(s) − G_S(s) · G_Z(s)) / (1 + G_R(s) · G_S(s)) · d̂(s) (2.24)

transfer function of a disturbance control system.

We directly observe that



1. the ideal compensation of the disturbance G_D(s) is given by

G_Z(s) = G_D(s) / G_S(s),

which cancels out the disturbance dependent component of (2.24), and

2. the stability of the system is not influenced by the additional component if the disturbance
influence is canceled out.

Remark 2.40
In contrast to precontrol, a disturbance control is (typically) more easily realizable. The reason
is that (at least typically) the impact of the disturbance is lagged. The more lagged GD (s) is
compared to GS (s), the higher will be its low pass behavior and the easier will be the realization
of GZ (s).

Task 2.41
Consider the system transfer function

G_S(s) = K / ((1 + T_1 s) · (1 + T_2 s)) · exp(−2s).

Compute the ideal and realizable disturbance controls for the cases G_D(s) = 1 and
G_D(s) = exp(−3s)/(1 + T_3 s).

Solution to Task 2.41: Considering G_D(s) = 1 we obtain

G_Z(s) = (1 + T_1 s) · (1 + T_2 s) / K · exp(2s).

Ignoring the negative lag time and equalizing the degrees of numerator and denominator
reveals the possible realization

G_Z(s) = (1 + T_1 s) · (1 + T_2 s) / (K · (1 + Ts)²).

Considering G_D(s) = exp(−3s)/(1 + T_3 s) we get

G_Z(s) = (1 + T_1 s) · (1 + T_2 s) / (K · (1 + T_3 s)) · exp(−s).

Since the lag time is positive, and hence realizable as a delay, we only need to equalize the
degrees of numerator and denominator, which reveals the possible realization

G_Z(s) = (1 + T_1 s) · (1 + T_2 s) / (K · (1 + T_3 s) · (1 + Ts)) · exp(−s).
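To see how well the realizable disturbance control suppresses d, one can evaluate the residual path G_D − G_S · G_Z over frequency. The sketch below (ours, with made-up time constants) shows that the residual vanishes at low frequencies and grows where the extra lag (1 + Ts) is not inverted:

    import numpy as np

    K, T1, T2, T3, T = 1.0, 0.5, 0.2, 1.0, 0.05   # hypothetical constants [s]
    s = 1j * np.logspace(-2, 2, 5)                # a few frequency points

    GS = K * np.exp(-2 * s) / ((1 + T1 * s) * (1 + T2 * s))
    GD = np.exp(-3 * s) / (1 + T3 * s)
    GZ = (1 + T1 * s) * (1 + T2 * s) * np.exp(-s) / (K * (1 + T3 * s) * (1 + T * s))

    print(np.abs(GD - GS * GZ))   # small at low frequencies, larger at high ones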

Recall that for T → 0 the ideal inverse, and respectively the perfect disturbance suppression, is
obtained. Yet, for smaller T the derivative character of the control increases and hence the
input magnitude grows inversely proportional to T. The latter may quickly result in violating
input constraints.
To summarize, if compared to prefilter/precontrol a disturbance control shows the
advantages/disadvantages given in Table 2.2.

Table 2.2.: Advantages and disadvantages of disturbance control as compared to prefilter/precontrol

Advantage | Disadvantage
✓ Identical design of feed forward and feedback as for SISO | ✗ More controllers required
✓ Possible integration of multiple outputs | ✗ More sensors required
✓ Improved disturbance suppression | ✗ D character for suppression
✓ Improved realizability | ✗ No compensation of D character in closed loop
✓ Consideration of disturbance dynamic | ✗ Possible violation of input bounds

Remark 2.42
In the literature, two special cases of the cascade and the precontrol are known. If the cascade
exhibits only two loops, it is also referred to as auxiliary feedback. If a second input u is available,
the concept of precontrol can be applied to the controllable subsystem of the second input which
is referred to as auxiliary feed forward.
CHAPTER 3

COMPLEX CONTROL STRUCTURES

In the upcoming chapter, we consider control structures, which extend the standard setting of P, PI,
PID, PIDT and PIDT-Tt considered so far in different directions. First, for practical applications,
the electric/electronic realization of these controllers is (despite their simplicity) too complex
and expensive. To address this issue, we consider bang-bang and double-setpoint control in the
following Section 3.1. The simplification of these controllers lies in their operating range, which
consists of 2 (or 3) operating points only, e.g. on/off switches or gear shifts.
To address more complex control architectures with several inputs and outputs, we already saw
in the previous Chapter 2 how prefilter, precontrol, cascade control and disturbance control can
be applied. The more general setting of multi-input multi-output systems will be addressed in
Section 3.3.

3.1. Bang-bang and double-setpoint control


The first class of complex control structures we consider are switches. These elements represent
discontinuous controls and are therefore by definition nonlinear and cannot be linearized. For a
bang-bang control, a realization could be a switch with „on“ and „off“ settings. For an engine,
a double-setpoint control may offer the settings „drive“, „reverse“ and „neutral“. The typical ap-
plications for bang-bang control are in the range of simple temperature and pressure regulation
whereas double-setpoint control is applied to motors.
Switching devices exist in a variety of realizations, cf. Table 3.1.

Table 3.1.: Technical possibilities of continuous and switching actuators

Element | Continuous actuator | Switching actuator
Valve | Proportional valve, servo valve, nozzle, slit | Shift valve
Resistor | Potentiometer | Relay, switch, contactor, transistor, thyristor
Clutch | Friction clutch, converter | Clutch coupling

Practitioners use such devices as they are very cheap, more robust, require less maintenance, are
smaller, simpler, consist of less parts and exhibit a higher degree of efficiency. On the downside,
however, bang-bang and double-setpoint controllers induce oscillations, which may lead to res-
onances, noise and degrade comfort. Moreover, due to switching, these elements show higher
wearing and a limited lifespan. Additionally, the control shows a slower response on the sys-
tem due to integrating the pulse waves of the input. The most dreadful disadvantage is yet the
complexity of modeling and evaluation. Here, we will particularly focus on the last issue.
The block diagrams of bang-bang and double-setpoint control are given in Figures 3.1 and 3.2
respectively.

Remark 3.1
Note that the right part of each figure represents the control with hysteresis, i.e. the case when
shifting up/down is not done at the same value. The idea of the latter is to avoid repeated and fast
switching (at the cost of potentially larger oscillations).

[Diagrams: (a) two-point switch mapping u to the output levels y_min, y_max; (b) the same switch with a hysteresis band between −ε and ε]

(a) Bang bang control (b) Bang bang control with hysteresis

Figure 3.1.: Block diagram of a bang bang control

Formally, a bang-bang controller is a system with the following properties:

Definition 3.2 (Bang-bang control).


Consider a system f : U → Y with Y := [ymin , ymax ] ⊂ R such that Y ̸= ∅. Furthermore,
[Diagrams: (a) three-point switch mapping u to the output levels y_min, y_0, y_max; (b) the same switch with hysteresis bands between ±ε_1 and ±ε_2]

(a) Double-setpoint control (b) Double-setpoint control with hysteresis

Figure 3.2.: Block diagram of a double-setpoint control

consider a threshold θ ∈ U to be given. Then we call f to be bang-bang iff



f(u) = { y_min, if u ≤ θ;   y_max, if u > θ } (3.1)

is the setting command of the system.

Remark 3.3
For simplicity reasons, the threshold value is typically set to zero.

Remark 3.4
We like to stress that — in contrast to all control types considered so far — a bang-bang control is
by definition nonlinear. Hence, the principles of superposition, amplification and commutativity
do not hold in general.

The output of a bang-bang control as given in Figure 3.1(left) takes the form of a so called pulse
modulation function. To generate such a signal, several possibilities exist, which include

pulse width modulation (PWM),

pulse frequency modulation (PFM), and

pulse amplitude modulation (PAM).

The ideas of these modulations can be captured as follows:

PWM: The amplitude of output is fixed. The length of the impulse depends on the ampli-
tude of the input signal.

PFM: The amplitude of output is fixed. The frequency of the pulse depends on the ampli-
tude of the input signal.

PAM: The amplitude of the output depends on the amplitude of the input. The frequency of
the pulse is fixed.

In practice, all three options are applied, yet due to its similarity to electronics, PWM is the most
common one. Here, we focus on PWM only.
The idea of pulse modulation is to generate a switching function to mimic a continuous control
like a PID. To this end, the signal to be mimicked is required as an input and is compared to a so
called generator function, cf. Figure 3.3 for the general setting.

[Diagram: w → G_R → switching element → y]

Figure 3.3.: Mimicry of a continuous input function

The easiest way to generate a PWM signal is to apply a triangle function and compare it to
a reference signal, cf. Figure 3.4. Note that the reference w(t) in Figures 3.3, 3.4 is the input
for our controller. Whenever the triangle function is less than the reference, the lower bound is
applied. If it is higher, then the upper bound is used.
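As a small illustration (ours, using the common sign convention that the output is high while the carrier lies below the reference; the sign convention is a free design choice), PWM can be generated in a few lines:

    import numpy as np

    def pwm(reference, t, period, lo=0.0, hi=1.0):
        """Sketch of pulse width modulation: compare a triangle carrier
        against the reference and switch between the two output levels."""
        phase = (t / period) % 1.0
        carrier = lo + (hi - lo) * (1.0 - np.abs(2.0 * phase - 1.0))
        return np.where(carrier < reference, hi, lo)

    t = np.linspace(0.0, 1.0, 10000)
    y = pwm(0.3 * np.ones_like(t), t, period=0.1)
    print(y.mean())   # close to 0.3: the mean of the pulses mimics the reference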

Remark 3.5
Note that other functions like a sawtooth or a delta function with respect to limits can be applied.

If a bang-bang control is applied, one has to face the problem that the reference w is typically not
asymptotically stabilized. The only exception is if either (w, umin ) or (w, umax ) is an operating
point of the system. In any other case, applying either umin or umax to the system results in
a deviation. Once the error passes the threshold, the deviation in turn leads to a switch of the
control. The switch of the control induces a change of direction of the development of the error,
which at some point results in another switch of the control.
As a result, the control chatters. To reduce such a behavior, hysteresis is introduced:

Definition 3.6 (Bang-bang control with hysteresis).


Consider a system f : U → Y with Y := [ymin , ymax ] ⊂ R such that Y ̸= ∅. Furthermore,
[Plot: triangle carrier compared against three reference signals w_1(t), w_2(t), w_3(t) and the resulting pulse width modulated outputs y_1, y_2, y_3]

Figure 3.4.: Sketch of a pulse width modulation using triangle functions

consider a threshold θ ∈ U and a hysteresis width ∆ > 0 to be given. Then we call f to be


bang-bang with hysteresis iff

f(u) = { y_min, if u ≤ θ − ∆;   y_max, if u ≥ θ + ∆ } (3.2)

is the setting command of the system.
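Since (3.2) is multivalued inside the hysteresis band, an implementation needs memory. A minimal stateful sketch (ours) reads:

    class HysteresisRelay:
        """Sketch of (3.2): a two-point switch keeping its previous output
        for inputs inside the band (theta - delta, theta + delta)."""
        def __init__(self, theta, delta, y_min, y_max):
            self.theta, self.delta = theta, delta
            self.y_min, self.y_max = y_min, y_max
            self.y = y_min                       # assumed initial state

        def __call__(self, u):
            if u <= self.theta - self.delta:
                self.y = self.y_min
            elif u >= self.theta + self.delta:
                self.y = self.y_max
            # otherwise keep the previous output (hysteresis memory)
            return self.y

    relay = HysteresisRelay(theta=0.0, delta=0.1, y_min=0.0, y_max=1.0)
    print([relay(u) for u in (-0.2, 0.05, 0.2, 0.05, -0.2)])  # [0.0, 0.0, 1.0, 1.0, 0.0]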

Remark 3.7
Note that bang-bang with hysteresis is not a function. It exhibits a region where the output may
either be ymin or ymax .

Note that even with hysteresis, the closed loop for any system (cf. Figure 3.5) will still oscillate
around the reference.
Here, the following holds:
[Diagram: closed loop in which w minus y enters a two-point switch with hysteresis (−ε, ε) and output levels u_min, u_max, driving the plant G_S with output y]

Figure 3.5.: Closed loop with bang-bang control

Theorem 3.8 (Oscillation for bang-bang control w/o hysteresis).


Consider a closed loop system as given in Figure 3.5 and suppose a controllable operating point
x ⋆ to exist for system GS and the corresponding control u(·) to satisfy u(·) ∈ [umin , umax ]. Then
the closed loop from Figure 3.5 is stable and the amplitude of the oscillation of the closed loop is
directly proportional to the hysteresis width ∆ whereas the frequency of the oscillation is directly
proportional to the control bounds (umax − umin ) /θ.

In order to still be fairly close to the reference, a fast reaction to even little changes is required.
To this end, the deviation from the reference needs to be amplified to cross the threshold of the
bang-bang controller even for small deviations. Still, we don’t want the control to react too often.
To this end, high frequency changes are filtered out.

Remark 3.9
High frequency changes are often subject to measurement errors or unmodeled elements of the
system. In practice, a low pass can be applied to filter these occurrences and smooth the system
output.

Hence, we end up with a combination of a bang-bang control with a low pass and a control
amplifier as illustrated in Figure 3.6

[Diagram: as Figure 3.5, with an amplifier (≫ 1) before the switch and a low pass G_1 in the feedback path]

Figure 3.6.: Closed loop with bang-bang control with low pass and amplifier

Considering the impact of the gain and low pass elements, we want to assess the impact on Theo-
rem 3.8. To this end, we denote the transfer function of the gain by G_gain and of the combination
by G_adapt. Since G_gain is chosen large, it will dominate the control and system transfer function,
which allows us to neglect them and obtain

G_adapt = G_gain / (1 + G_1 · G_gain) = 1 / (1/G_gain + G_1) ≈ 1/G_1   since 1/G_gain ≈ 0.

Hence, we directly get the following:

Theorem 3.10 (Oscillation for bang-bang control with low pass and amplifier).
Consider a closed loop system as given in Figure 3.6 and suppose a controllable operating point
x ⋆ to exist for system GS and the corresponding control u(·) to satisfy u(·) ∈ [umin , umax ]. Then
the closed loop from Figure 3.6 is stable and the amplitude of the oscillation of the closed loop is
directly proportional to ∆/Ggain whereas the frequency of the oscillation is directly proportional
to (umax − umin ) / (θ · G1 ).

Remark 3.11
A direct conclusion from Theorem 3.10 is that systems with only slow frequencies are ideally
suited for control via bang-bang controllers. For such systems, the output is almost linear and
performance analysis can be done via the mean of the output.

Still, the chattering behavior can only be reduced by these extensions, yet not avoided. A complete
and general avoidance is also not possible, yet in particular cases a resolution can be found. These
cases refer to systems which exhibit an I like behavior close to the reference. The reason why a
resolution is possible is that for I like behavior close to the reference the control input satisfies
u ≡ u⋆ . Hence, we can introduce a third state into the bang-bang controller, which reveals exactly
that value within a so called dead zone of the system, i.e. the neighborhood of the reference. We
define the following:

Definition 3.12 (Double-setpoint control with hysteresis).


Consider a system f : U → Y with Y := [ymin , ymax ] ⊂ R such that Y ̸= ∅. Furthermore,
consider a control value y⋆ ∈ Y , a threshold θ ∈ U and two hysteresis widths ∆2 > ∆1 > 0 to
be given. Then we call f to be double-setpoint with hysteresis iff

f(u) = { y⋆, if u ≤ θ − ∆_1;   y⋆, if u ≥ θ + ∆_1;   y_min, if u ≤ θ − ∆_2;   y_max, if u ≥ θ + ∆_2 } (3.3)

where, analogously to Remark 3.7, the overlapping regions are resolved by keeping the previous output, and where
is the setting command of the system.

Remark 3.13
Note that the I like behavior of a system can be forced to exist by adding an integrator between
the double-setpoint controller and the system.

Extending our system setup from Figure 3.6 by a double-setpoint controller and forcing applica-
bility by incorporating an integrator in Figure 3.7, we can actually show the following remarkable
result:

[Diagram: amplifier (≫ 1) → three-point switch with hysteresis (±ε_1, ±ε_2) and output levels u_min, u⋆, u_max → integrator K_I → plant G_S → y, with a low pass G_1 in the feedback path]

Figure 3.7.: Closed loop with double-setpoint control with low pass, amplifier and integrator

Theorem 3.14 (Asymptotic stability for double-setpoint control with hysteresis).


Consider a closed loop system as given in Figure 3.7 and suppose a controllable operating point
( x ⋆ , u⋆ ) to exist for system GS and the corresponding control u(·) to satisfy u(·) ∈ [umin , umax ].
Then the closed loop from Figure 3.7 is asymptotically stable.

One can even go one step further and minimize the number of switches that are necessary to reach
the reference value. To this end, a latency can be introduced to decelerate the speed of the control,
cf. Figure 3.8.
For such a structure, we can show an extension of Theorem 3.14 revealing:
[Diagram: as Figure 3.7, with an additional latency block (K_P, K_T) between the switch and the integrator]

Figure 3.8.: Closed loop with double-setpoint control with low pass, amplifier, latency and integrator

Theorem 3.15 (Minimal asymptotic stability for double-setpoint control with hysteresis).
Consider a closed loop system as given in Figure 3.8 and suppose a controllable operating point
( x ⋆ , u⋆ ) to exist for system GS and the corresponding control u(·) to satisfy u(·) ∈ [umin , umax ].
Then the closed loop from Figure 3.8 is asymptotically stable and only a finite number of switches
of the control occur.

Including the latency, however, comes at the price of reduced convergence speed of the closed
loop. Additionally, the dead zone depends on the latency leading to the problem of balancing the
low pass and the latency parameters.
Apart from influencing the switching behavior, one can also adapt the structure to mimic the
behavior of a continuous controller. The following three structures are common:

Theorem 3.16 (Mimic continuous PD, PID, PI control).


Consider the system given in Figures 3.9, 3.10 and 3.11. Let the reference w(·) be generated by
a PD, PID and PI controller according to Figure 3.3. Then the transfer function is approximately
equal to a PD, PID and PI controller respectively.

[Diagram: two-point switch with output levels u_min, u_max and hysteresis (−ε, ε); feedback through a (K_P, K_DT) block]

Figure 3.9.: Mimic PD control


[Diagram: as Figure 3.9, with an additional K_DT block in a second feedback path]

Figure 3.10.: Mimic PID control

[Diagram: three-point switch with output levels u_min, u⋆, u_max and hysteresis (±ε_1, ±ε_2) followed by an integrator K_I; feedback through a (K_P, K_DT) block]

Figure 3.11.: Mimic PI control

Task 3.17
Evaluate the transfer functions from Figures 3.9, 3.10 and 3.11 to show the approximate
equivalence with respect to the continuous controllers.

Solution to Task 3.17: Left to the user.

Summarizing, bang-bang and double-setpoint controllers show the advantages and disadvantages
given in Tables 3.2 and 3.3.

Table 3.2.: Advantages and disadvantages of bang-bang

Advantage | Disadvantage
✓ Proven stability | ✗ Remaining oscillations
✓ Full dynamics using control bounds | ✗ High wear of components
✓ Reduction of oscillations | ✗ Reduction increases switching frequency
✓ Low pass compensation possible | ✗ Reduction may require low pass feedback
✓ Analysis via mean | ✗ Mean requires low pass feedback

Table 3.3.: Advantages and disadvantages of double-setpoint control

Advantage | Disadvantage
✓ Proven asymptotic stability | ✗ More cost intensive
✓ Avoidance of oscillations | ✗ Limited to I type systems
✓ Full dynamics using control bounds | ✗ May require I component
✓ Minimal number of switches | ✗ Reduced control dynamic
✓ Reduced wear of components | ✗ Balance of low pass and latency
✓ Analysis via mean | ✗ Mean requires low pass feedback

At this point, we like to come back to our Remark 3.4 stating that bang-bang and therefore also
double-setpoint controllers are nonlinear components. The results shown before all follow
one idea: additional components are included in the closed loop to simplify, transform and
compensate nonlinearities and map the system to a linear one. To this end, adding low pass
and amplifier in Figure 3.6 is equivalent to reducing the neighborhood where a linearization is
applied and at the same time making the linear reaction dominate the nonlinear parts. Adding
the integrator and latency in Figure 3.8 basically increases the order of the system by integration,
i.e. the control is applied to a derivative of the system, and compensates for tardiness of the
system wrt. the control.
In general, even more complex connections can be drawn as we will see in the following section.

3.2. Characteristic map


From Section 3.1 we have seen a nonlinear yet very special control component. In practice, more
complex systems connecting input and output are possible as illustrated in Figure 3.12.

[Diagram: u → static nonlinearity → y]

Figure 3.12.: Nonlinear static system

The map of the system is not necessarily given by a function as we already indicated in Defini-
tion 1.1 of a system. Instead, also (possibly multidimensional) data sets can be used. In that case,
the map is defined as follows:

Definition 3.18 (Characteristic map).


Consider two sets U and Y and a data set U × Y of stored pairs, strictly contained in the full input/output space. Then we call f : U → Y satisfying

f(u) = { y, ∀ (u, y) ∈ U × Y;   g(u, U × Y), ∀ (u, y) ∉ U × Y } (3.4)

characteristic map.

For the realization of a map, the function g(·, ·) is typically implemented as an interpolation. A
typical example of a characteristic map can be found in motor control for combustion engines.
For such devices, the engine torque depends on both the engine speed and setting of the throttle
valve (for Otto) or quantity of injected fuel (for Diesel).
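As an illustration of such an interpolated map, consider the following sketch (ours, with entirely made-up torque data):

    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    speed = np.array([1000.0, 2000.0, 3000.0, 4000.0])   # engine speed [rpm]
    throttle = np.array([0.0, 0.5, 1.0])                 # throttle setting
    torque = np.array([[20.0,  80.0, 120.0],             # stored data set U x Y
                       [30.0, 110.0, 170.0],             # (hypothetical values, Nm)
                       [25.0, 120.0, 190.0],
                       [15.0, 100.0, 160.0]])

    # g(u, U x Y): linear interpolation between the stored pairs
    engine_map = RegularGridInterpolator((speed, throttle), torque)
    print(engine_map([[2500.0, 0.7]]))   # torque at an unstored operating point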
Since the required storage rises exponentially with the dimension of the data sets, other represen-
tations of characteristic maps via

polynomials,

splines,

fuzzy logics,

neural networks, and

associative storage
are used. In particular, polynomials and splines additionally exhibit the advantage of being differ-
entiable instead of only continuous.

Remark 3.19
These maps (also called models) are typically identified offline using optimization methods such
as regression or MINLP, or online via filter techniques such as Kalman. Modelling and identifi-
cation is not within the scope of this lecture but instead treated in „Systemics“.

Such systems can often be separated into subsystems, which are either

linear dynamic, or

nonlinear static.

If such a separation into one subsystem of each type is possible, then the structures given in Fig-
ure 3.13 are possible.

[Diagrams: (a) static nonlinearity followed by linear dynamics G_S; (b) linear dynamics G_R followed by a static nonlinearity]

(a) Hammerstein structure (b) Wiener structure

Figure 3.13.: Separation of maps

Remark 3.20
The bang-bang and double-setpoint controller are special cases of the Hammerstein structure.

Both structures may be identical in the static and dynamic behavior in a neighborhood of an
operating point, yet they exhibit very different large-signal behavior. The reason for the latter is
that for nonlinear systems the principle of commutativity does not apply.
As outlined before, also superposition and amplification cannot be applied, which results in

Blocks cannot be switched arbitrarily

Analysis is more difficult and limited to special cases

Complexity of results is increased

Laplace transform is no longer applicable

As a consequence of the above, derivation of results in frequency domain is limited. Instead, the
time domain is considered which we will focus on in Part II of the lecture.

3.3. Multi-input multi-output systems


The systems we considered so far exhibit two properties. For one, they are restricted to a single
input and a single output, or they can be treated independently. If both properties do not hold, then we
call a system multi-input multi-output or short MIMO.
In particular, we define the following:

Definition 3.21 (MIMO system).


Consider a system f : U → Y with input and output sets U and Y . If

∂²f/(∂u_j ∂u_k)(u) ≥ θ   or   ∂²f/(∂y_j ∂y_k)(u) ≥ θ   or   ∂²f/(∂u_j ∂y_k)(u) ≥ θ (3.5)

for indexes j, k with θ ∈ R+ holds, then we call the system to be MIMO.

The threshold parameter θ indicates the degree of coupling of the inputs and outputs. In case the
coupling is larger than the threshold, we call the coupling to be strong, otherwise weak. In case
of weak coupling, the system can be decoupled into multiple single-input single-output systems
(SISO), for which the coupling can be neglected. In order to be neglectable, the control needs
to be designed to suppress disturbances emanating from other systems using one of the methods
discussed so far. For MIMO systems, we therefore need to focus on strongly coupled systems
only.
Examples for strongly coupled outputs are

pressure and temperature for steamers,

temperature and humidity for air conditioners,

temperature, height of flame and mixture of stack gas for burners, or

position, velocity and force for robot arms.

Coupled inputs may be

position control of multiple drives for robots,

roll and yaw angle control for flying curves with an aircraft, or

temperature, pH measurement and biomass distribution for bio reactors.

For such systems, we utilize the concept of a transfer matrix introduced in Definition 2.9. As an
extension is easily possible, we consider the setting of two inputs and two outputs only. Therefore,
the setting is given as shown in Figure 3.14.
[Diagram: system block with inputs u_1, u_2 and outputs y_1, y_2]

Figure 3.14.: MIMO system with two inputs and two outputs

Zooming into this setting and recalling that the inputs or outputs are strongly coupled, there are
two different structures resembling feed forward and feed back connectivity.

Definition 3.22 (P canonical structure).


Consider a system with two inputs and two outputs. If the coupling of inputs to outputs exhibits
a feed forward structure as shown in Figure 3.15a, then we call it P canonically structured.

Definition 3.23 (V canonical structure).


Consider a system with two inputs and two outputs. If the coupling of inputs to outputs exhibits
a feedback structure as shown in Figure 3.15b, then we call it V canonically structured.

[Diagrams: (a) P canonical structure with main paths P_11, P_22 and feed forward cross couplings P_12, P_21; (b) V canonical structure with main paths V_11, V_22 and feedback cross couplings V_12, V_21]

(a) P canonical structure (b) V canonical structure

Figure 3.15.: Canonical structures of MIMO systems with two inputs and two outputs

Both structures are often found in practice showing the properties given in Table 3.4.

Table 3.4.: Properties of P and V canonical structure

P canonical structure | V canonical structure
✓ Direct correspondence to transfer matrix | ✗ Required transformation of transfer matrix
✗ Typically no connection to modeling | ✓ Direct derivation via modeling
✓ Easy to treat | ✗ Difficult to treat
✗ Typically no equivalent of P_jk in real system | ✓ Equivalent of V_jk in real system
✗ Physical interpretation questionable | ✓ Physical interpretation given

As the P canonical structure is more easily treatable via control methods, one typically transforms
V canonical systems to P canonical structure. To this end, we have

(y_1; y_2) = [ V_11, 0 ; 0, V_22 ] · ( (u_1; u_2) + [ 0, V_12 ; V_21, 0 ] · (y_1; y_2) )
           = [ V_11, 0 ; 0, V_22 ] · (u_1; u_2) + [ 0, V_11 · V_12 ; V_22 · V_21, 0 ] · (y_1; y_2)

revealing the linear equation system

[ 1, −V_11 · V_12 ; −V_22 · V_21, 1 ] · (y_1; y_2) = [ V_11, 0 ; 0, V_22 ] · (u_1; u_2)
⟺ (y_1; y_2) = [ 1, −V_11 · V_12 ; −V_22 · V_21, 1 ]^{−1} · [ V_11, 0 ; 0, V_22 ] · (u_1; u_2).

Theorem 3.24 (Equivalence P and V canonical structure).


Consider two systems with two inputs and two outputs to be given. Suppose one system is in P
canonical structure and one in V canonical structure. If

[ P_11, P_12 ; P_21, P_22 ] = [ 1, −V_11 · V_12 ; −V_22 · V_21, 1 ]^{−1} · [ V_11, 0 ; 0, V_22 ] (3.6)

holds, then the transfer matrices of both systems are equivalent.
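Evaluated pointwise in s, the transformation (3.6) is a small matrix computation; the following sketch (ours) performs it at a single frequency with hypothetical block values:

    import numpy as np

    def v_to_p(V11, V12, V21, V22):
        """Equation (3.6): map the V canonical blocks, given as complex
        values V_jk(s) at a fixed s, to the equivalent P canonical blocks."""
        M = np.array([[1.0, -V11 * V12],
                      [-V22 * V21, 1.0]], dtype=complex)
        D = np.array([[V11, 0.0],
                      [0.0, V22]], dtype=complex)
        return np.linalg.inv(M) @ D

    # Hypothetical block values at s = i
    print(v_to_p(V11=1 / (1 + 1j), V12=0.2, V21=0.1, V22=2 / (2 + 1j)))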



To treat such MIMO systems, one can distinguish between three concepts.

Decentralized control: For each input/output pair we design exactly one control. For each
pair, the input of the other pair is considered to be a disturbance.

Decoupling control: For each input/output pair we design one main control and for all
couplings one decoupling control. The task of the latter is to reduce or eliminate the input
of other pairs such that the pairs can be treated separately.

Multivariable control: The control exhibits as many inputs and outputs as the system does.
Note that decentralized control can only be applied for weakly coupled systems. The reason for
that is due to the wrong assumption of the coupling to be a disturbance, i.e. an independent
input. Since the coupling is driven by the variables of the control loop, they are, however, not
independent. As a consequence, stability issues (e.g. by shifting poles) may arise.
In the following, we focus on decoupling control and consider multivariable control in the time
domain setting.

Definition 3.25 (Decoupling control).


Consider a system with two inputs and two outputs in P canonical structure. If the control exhibits
the structure given in Figure 3.16, then we call it decoupling control.


[Diagram: references w_1, w_2 enter main controllers R_11, R_22; decoupling controllers R_12, R_21 act across the channels; the plant consists of main blocks S_11, S_22 and coupling blocks S_12, S_21 producing y_1, y_2, which are fed back]

Figure 3.16.: Decoupling structure of MIMO system with P canonical structure

The idea of decoupling control is a special case of disturbance rejection, i.e. we eliminate or at
least reduce the impact of the systems on one another, which allows us to apply standard methods
for the decoupled circuits.

Within Figure 3.16, there are four controllers which need to be designed. While designing, the
intention is that

R11 shall control y1 using u1 (main system S11 ),

R12 shall eliminate the impact of u2 on y1 (coupling system S12 ),

R21 shall eliminate the impact of u1 on y2 (coupling system S21 ), and

R22 shall control y2 using u2 (main system S22 ).

We now focus on eliminating the impact of the second system on the first, cf. Figure 3.17.

[Diagram: as Figure 3.16, with the two signal paths that must cancel each other highlighted in blue and red]

Figure 3.17.: Elimination of coupling

In order to eliminate one another, the blue and red paths in Figure 3.17 need to be identical.
Hence, we directly obtain

Theorem 3.26 (Decoupling condition).


Consider a MIMO system with two inputs and two outputs in P canonical structure subject to a
decoupling control. If the conditions

R_12 = R_22 · S_12/S_11 (3.7)
R_21 = R_11 · S_21/S_22 (3.8)

hold, then the system is decoupled.
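Numerically, conditions (3.7) and (3.8) can be verified pointwise over frequency; in the sketch below (ours, with hypothetical first order blocks and P type main controllers) the off-diagonal entries of the matrix in (3.9) indeed vanish:

    import numpy as np

    s = 1j * np.logspace(-2, 2, 5)          # a few frequency points

    # Hypothetical plant blocks and P type main controllers
    S11, S12 = 2 / (1 + 2 * s), 0.5 / (1 + 5 * s)
    S21, S22 = 0.3 / (1 + 4 * s), 1 / (1 + s)
    R11, R22 = 4.0, 3.0

    # Decoupling controllers according to (3.7) and (3.8)
    R12 = R22 * S12 / S11
    R21 = R11 * S21 / S22

    # Off-diagonal entries of the matrix in (3.9), numerically zero
    print(np.abs(R11 * S21 - R21 * S22))
    print(np.abs(-R12 * S11 + R22 * S12))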



Remark 3.27
Regarding realization, the same considerations as for disturbance control apply. If a perfect
decoupling is not possible, then its impact should be reduced. In general, a decoupling is more
easily achieved if the coupling is slow. Since we have

decoupling control = main control · (coupling system / main system)

and the main control must satisfy that the degree of the numerator equals the degree of the
denominator, the decoupling control is realizable if and only if the number of poles of the coupling
system is larger than the number of poles of the main system. Additionally, the known limitations
for realizability apply, cf. Section 2.4.

As an alternative to a pathwise comparison as in Theorem 3.26, we can use the transfer matrix
for decoupling. From Figure 3.16 we obtain

(u_1; u_2) = [ R_11, −R_12 ; −R_21, R_22 ] · (w_1 − y_1; w_2 − y_2)
(y_1; y_2) = [ S_11, S_12 ; S_21, S_22 ] · (u_1; u_2)

which gives us

(y_1; y_2) = [ R_11·S_11 − R_21·S_12, −R_12·S_11 + R_22·S_12 ; R_11·S_21 − R_21·S_22, −R_12·S_21 + R_22·S_22 ] · (w_1 − y_1; w_2 − y_2). (3.9)

Hence, the system is decoupled if the following holds:

Theorem 3.28 (Decoupling transfer matrix).


Consider a MIMO system with two inputs and two outputs in P canonical structure subject to a
decoupling control. If the matrix in Equation 3.9 is diagonal, then the system is decoupled.

Remark 3.29
Note that both approaches reveal identical conditions.

It is of particular importance that even in the case of ideal decoupling, each system depends
on both the main and the decoupling control. As a consequence, if we want to adapt the main
controller in a later stage of the development, the decoupling controller needs to be adapted as

well. One way to circumvent this problem is to consider a slight modification of the decoupling
circuitry, cf. Figure 3.18.


[Diagram: as Figure 3.16, but the decoupling controllers R_12, R_21 are driven by the outputs of the main controllers instead of the control errors]

Figure 3.18.: Adaptable decoupling structure of MIMO system with P canonical structure

In this case, compensation is achieved if

R_12 = S_12/S_11
R_21 = S_21/S_22,

i.e. the decoupling control is only subject to the input of the main system and therefore indepen-
dent from the main controller.

Remark 3.30
Alternatively, the decoupling control can be designed as a V canonical structure. The advantage
of that approach is that for the design of the main control no aspect of the respective other system
needs to be considered.

Table 3.5.: Advantages and disadvantages of MIMO control

Advantage | Disadvantage
✓ Allows for MIMO structure | ✗ Computationally involved
✓ Standardized P and V structures | ✗ Any structure limited in either usage or derivation
✓ Decoupling possible | ✗ Requires additional controllers
✓ Allows for independent design | ✗ Requires specific decoupling structure
✓ Allows for SISO methods | ✗ Not suitable for multivariate control

Yet, as we can see from the involved design process of main and decoupling controller, this
approach does not allow for scalability to high dimensional MIMO systems. To consider the
latter, we will shift our solution approach to the time domain.
Part II.

Time Domain
CHAPTER 4

NONLINEAR CONTROL SYSTEMS

In Chapter 1, we introduced the concept of a generic system and thereafter discussed the property
of stability and how to design feed forward and feedback control laws to enforce this property of a
system. In frequency domain, we observed that for more complex structures of the system and/or
of the control, the design of the latter becomes more and more involved. Additionally, we saw
that asymptotic stability was in most cases out of scope for the design methods. Now, we shift
our view to the time domain and in particular to nonlinear systems. Again, our interest lies in
showing (asymptotic) stability of a system, and respective ideas of designing controls to guarantee
this property. Starting point in Section 4.1 will be the notion of asymptotic stability, where we
will directly jump to nonlinear systems. As we will see, if we move from linear to nonlinear
systems, it is not entirely clear whether or not a continuous stabilizing control exists. To this end,
we discuss Brockett’s condition [4] and Artstein’s counterexample [3]. In Section 4.2, we then
discuss alternative concepts for equivalent definitions of stability and controllability. Here, we
follow the approach of Khalil [9] and Isidori [8]. These concepts allow us to foster properties of
the system dynamics to derive stabilizing controls. In Section 4.3, we will utilize one of these
properties to derive the so called Sontag’s formula for computing an asymptotically stabilizing
feedback based on a Control-Lyapunov function [18]. Moreover, we will introduce the concept
of backstepping to construct such a Control-Lyapunov function [8].

4.1. Necessary conditions for controllability


Let us first recall the term (asymptotic) controllability from Definition 1.28. In particular, for a
system

ẋ(t) = f (x(t), u(t)), (4.1)

we require that there exists a control u such that bounded deviations from an operating point
result in bounded behavior, that is

∥x0 ∥ ≤ δ =⇒ ∥x(t)∥ ≤ ε ∀t ≥ 0, (4.2)

and that any bounded deviation will be controlled to the operating point, i.e.

lim_{t→∞} x(t) = 0   ∀ ∥x_0∥ ≤ r. (4.3)

In the linear case, we saw in „Control Engineering 1“ that a linear control u in both feed forward
and feedback form can always be constructed, i.e. for each u : T → U we can construct
u : X → U and vice versa. For the nonlinear case, the latter does not hold true. In particular,
we only have the following quite intuitive connection between the existence of feedback and feed
forward controls:

Lemma 4.1
Consider a system (4.1) and let (x⋆ , u⋆ ) be an operating point. If a feedback u : X → U exists
such that the closed loop is asymptotically stable and additionally both the feedback and the
closed loop are Lipschitz, then there exists a feed forward u : T → U such that the system is
asymptotically controllable.

Remark 4.2
Lipschitz continuity is a crucial element within Lemma 4.1 as it allows us to invert the system and
derive the controllability property.

So the first question to be answered is how to construct a Lipschitz continuous feedback in time
domain. Here, the following core result holds true:

Theorem 4.3 (Brockett’s condition).


Consider a system (4.1) and let (x⋆ , u⋆ ) be an operating point. Moreover, suppose a Lipschitz

continuous feedback u : X → U to be given. Then the set

f (X ) := {y ∈ X | y = f (x, u(x))} (4.4)

contains a neighborhood of x⋆ .

The latter result allows us to utilize Taylor’s theorem and obtain:

Theorem 4.4 (Exponential Stability).


Consider an operating point (x⋆, u⋆) of a system (4.1) with feedback u : X → U resulting
in an autonomous vector field f : R^{n_x} → R^{n_x}. Suppose f is continuously differentiable in a
neighborhood of x⋆ and that Df(x⋆) ∈ R^{n_x × n_x} represents the Jacobian of f at x⋆. Then the
following holds:

1. The operating point x⋆ is (locally) exponentially stable if and only if the real parts of all
Eigenvalues λi ∈ C of D f (x⋆ ) are negative.

2. The operating point x⋆ is exponentially unstable if and only if there exists one Eigenvalue
λi ∈ C of D f (x⋆ ) with positive real part.

3. The operating point x⋆ is exponentially antistable if and only if the real parts of all Eigen-
values λi ∈ C of D f (x⋆ ) are positive.

Theorem 4.4 gives us a remarkable insight: in a neighborhood of an operating point the first
moment of the dynamics dominates all higher moments. Based on Theorem 4.4 and the defini-
tion of equilibria / working points together with Taylor’s approximation theorem, we obtain the
following:

Theorem 4.5 (Linearization).


Consider system (4.1) and let (x⋆ , u⋆ ) be an operating point. If the deviation (△x, △u) :=
(x0 − x⋆ , u − u⋆ ) is sufficiently small, then the solution of system

△ẋ(t) = A · △x(t) + B · △u(t),   △x(0) = x_0 − x⋆ (4.5a)

△y(t) = C · △x(t) + D · △u(t). (4.5b)

with

A = ∂f/∂x(x⋆, u⋆),   B = ∂f/∂u(x⋆, u⋆)
C = ∂h/∂x(x⋆, u⋆),   D = ∂h/∂u(x⋆, u⋆)

approximates the solution of the nonlinear system (4.1) up to first order. The system (4.5) is called linearization
around the operating point (x⋆, u⋆).

Remark 4.6
As a consequence of Theorem 4.5 we can transfer results from linear systems to nonlinear systems
at least in a neighborhood of an operating point.

Task 4.7 (Inverted pendulum)


Consider the inverted pendulum on a cart given by the system

ẋ_1(t) = x_2(t) (4.6a)
ẋ_2(t) = −k · x_2(t) + g · sin(x_1(t)) + u(t) · cos(x_1(t)) (4.6b)
ẋ_3(t) = x_4(t) (4.6c)
ẋ_4(t) = u(t) (4.6d)

and compute its linearization. Design a feedback such that the Eigenvalues of the closed loop
are −1.

Solution to Task 4.7: The linearization reads

A = [ 0, 1, 0, 0 ; g, −k, 0, 0 ; 0, 0, 0, 1 ; 0, 0, 0, 0 ]   and   B = [ 0 ; 1 ; 0 ; 1 ].

Let F be the feedback matrix we are looking for. Then we obtain


   
A + B · F = [ 0   1   0   0 ]   [ 0 ]
            [ g  −k   0   0 ] + [ 1 ] · [ F1  F2  F3  F4 ]
            [ 0   0   0   1 ]   [ 0 ]
            [ 0   0   0   0 ]   [ 1 ]

          = [ 0        1        0    0  ]
            [ g + F1  −k + F2   F3   F4 ]
            [ 0        0        0    1  ]
            [ F1       F2       F3   F4 ]

Now, we can compute the entries of F such that all Eigenvalues are equal to −1. Comparing the characteristic polynomial of A + B · F with (λ + 1)⁴ = λ⁴ + 4λ³ + 6λ² + 4λ + 1 reveals

F1 = −g − 6 − 4k/g − k²/g² − 1/g,   F2 = k − 4 − 4/g − k/g²,   F3 = 1/g,   F4 = 4/g + k/g² .
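The placement can be verified numerically; a minimal sketch, assuming the sample values g = 9.81 and k = 0.5 (which are not fixed in the task):

import numpy as np

g, k = 9.81, 0.5   # sample parameter values (assumption, not from the task)

A = np.array([[0, 1, 0, 0],
              [g, -k, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 0]], dtype=float)
B = np.array([[0], [1], [0], [1]], dtype=float)
F = np.array([[-g - 6 - 4*k/g - k**2/g**2 - 1/g,
               k - 4 - 4/g - k/g**2,
               1/g,
               4/g + k/g**2]])

print(np.linalg.eigvals(A + B @ F))   # all four Eigenvalues equal -1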

Unfortunately, the inverse of Lemma 4.1 does not hold true. To illustrate the latter, consider the
following:

Task 4.8 (Nonholonomic integrator)


Consider a steerable vehicle from Figure 4.1 given by the dynamics

ẋ1 (t) = u1 (t)


ẋ2 (t) = u2 (t)
ẋ3 (t) = x2 (t) · u1 (t)

where the angle of heading is given by x1 and the position by ( x2 · cos( x1 ) + x3 · sin( x1 ), x2 ·
sin( x1 ) − x3 · cos( x1 )). Show that Theorem 4.3 does not apply.

Solution to Task 4.8: From the dynamic, we directly obtain that no point (0, r, ε) with ε ̸= 0
and r ∈ R is in the image of f .

The displayed example is also called Brockett’s nonholonomic integrator, and systems showing this property are called nonholonomic. As a consequence of not being feedback stabilizable, we can also not apply linearization to the nonholonomic integrator.

Figure 4.1.: Sketch of a nonholonomic car (with controls u1 and u2 )

Task 4.9
Consider the dynamics from Task 4.8. Show that the linearization is not controllable.

Solution to Task 4.9: Computing the linearization using Theorem 4.5 we obtain

A = [ 0   0   0 ]            B = [ 1   0 ]
    [ 0   0   0 ]    and         [ 0   1 ]
    [ 0   u1  0 ]                [ x2  0 ]

For the operating point (x⋆ , u⋆ ) = (0, 0) we obtain

A = [ 0  0  0 ]            B = [ 1  0 ]
    [ 0  0  0 ]    and         [ 0  1 ]
    [ 0  0  0 ]                [ 0  0 ]

Consequently, the third line results in

ẋ3 (t) = 0.

Hence, for x3 (0) ̸= 0 the solution cannot converge to x⋆ = 0, and similarly no stabilizing feedback exists for the linearization.
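The same conclusion follows from the rank of the Kalman controllability matrix; a minimal numerical sketch:

import numpy as np

# linearization of the nonholonomic integrator at (x*, u*) = (0, 0)
A = np.zeros((3, 3))
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])

K = np.hstack([B, A @ B, A @ A @ B])     # Kalman matrix [B, AB, A^2 B]
print(np.linalg.matrix_rank(K))          # rank 2 < 3, i.e. not controllable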

Technically speaking, Task 4.8 refers to parallel parking using a steered vehicle. Since a purely parallel movement is not possible, such a feedback cannot exist. Yet we know that there exists a solution to transport a vehicle into a parallel parking spot using a non-parallel movement. Such a trajectory, however, cannot be described using a Lipschitz continuous feedback.

Solution to Task 4.8: Consider the operating point (0, 0) or shift the operating point respectively. If we apply

u1 (t) = { 0,                            t ∈ [0, 1]
         { −sgn( x3 ) · √| x3 |,         t ∈ [1, 2]
         { 0,                            t ∈ [2, 3]
         { − x1 − sgn( x3 ) · √| x3 |,   t ∈ [3, 4]
         { 0,                            t > 4

and

u2 (t) = { sgn( x3 ) · √| x3 |,    t ∈ [0, 1]
         { 0,                      t ∈ [1, 2]
         { −sgn( x3 ) · √| x3 |,   t ∈ [2, 3]
         { 0,                      t > 3

then the system will reach (0, 0) at t = 4 and remain there. Hence, the system is asymptotically controllable.

Based on this counterexample, we have the following:

Corollary 4.10
Consider a system (4.1) and let (x⋆ , u⋆ ) be an operating point. If for the set

f (X ) := {y ∈ X | y = f (x, u) for some x ∈ X , u ∈ U }

there exists no open ball Br (x⋆ ) with radius r > 0 around x⋆ such that Br (x⋆ ) ⊂ f (X ), then no Lipschitz continuous stabilizing feedback exists.

In the nonlinear setting, we therefore know that Brockett’s condition is at least necessary. Unfortunately, the following example will show that it is not sufficient. The idea of the example is to design a system such that Brockett’s condition holds, yet no Lipschitz continuous feedback exists.

Task 4.11 (Artstein’s circles)


Consider the system

ẋ1 (t) = (− x1 (t)² + x2 (t)²) · u(t)
ẋ2 (t) = (−2x1 (t) · x2 (t)) · u(t).

Show that Brockett’s condition applies.

Solution to Task 4.11: For v = (v1 , v2 ) we consider three cases:

For v2 ̸= 0, we choose x1 = 1, x2 = −v1 /v2 + √(v1²/v2² + 1) and u = −v2 /(2x2 ). Then we obtain

f (x, u) = ( (−1 + x2²) · u , −2x2 · u )⊤ = ( (−1 + x2²) · (−v2 /(2x2 )) , v2 )⊤ = (v1 , v2 )⊤

and therefore f (x, u) ∈ f (X , U ).

For v1 ̸= 0 and v2 = 0, we set x1 = 0, x2 = √|v1 | and u = sgn(v1 ). Again, we obtain f (x, u) ∈ f (X , U ).

For v1 = v2 = 0 we set u = 0 and directly obtain f (x, u) ∈ f (X , U ).

Hence, Brockett’s condition holds.
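The construction of the first case can be verified numerically; a minimal sketch, where the helper name preimage is chosen freely for illustration:

import numpy as np

def preimage(v1, v2):
    # case v2 != 0 of the construction above: find (x, u) with f(x, u) = (v1, v2)
    x1 = 1.0
    x2 = -v1 / v2 + np.sqrt((v1 / v2)**2 + 1.0)
    u = -v2 / (2.0 * x2)
    return x1, x2, u

x1, x2, u = preimage(0.3, -1.2)
print((-x1**2 + x2**2) * u, -2.0 * x1 * x2 * u)   # reproduces (0.3, -1.2)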

Remark 4.12
Since the proof of nonexistence of a Lipschitz continuous feedback is rather involved, we refer to [3]. The idea of Artstein is to utilize that the trajectories move along circles, and that the dependency of a continuous feedback on the state results in a contradiction to the uniqueness of the operating point. In particular, the convergence becomes so slow that the operating point is never reached, not even for t → ∞.

Utilizing the examples from Brockett and Artstein, we can only state the following:

Corollary 4.13
Consider a system (4.1) and let (x⋆ , u⋆ ) be an operating point. If the system is asymptotically
controllable, existence of a Lipschitz continuous feedback is not guaranteed.

From the discussion so far we obtain that the ε-δ definition of controllability does not provide us
with enough insight to design a control for a given nonlinear system. In the following section,
we will introduce equivalent notions of asymptotic stability and controllability, which allow for
further interpretation of the system behavior.

4.2. Equivalent concepts of controllability


Apart from Definition 1.28, alternative definitions can be found in the literature, and these definitions are all equivalent. The intention of these definitions is to condense information on the system dynamics into abstract concepts for stability and controllability.

Remark 4.14
We already like to note that identical considerations hold true for observability and detectability.
The latter are, however, related to gathering information about the state of the system, not about
control of the system, and are therefore outside the scope of this lecture.

The first alternative concept for stability/controllability utilizes so called comparison functions,
cf. Figure 4.2 for an illustration.

Definition 4.15 (Comparison Functions).


The following classes of functions are called comparison functions:

A function γ : R≥0 → R≥0 is of class K if it is continuous, zero at zero and strictly


increasing.

A function is of class K∞ if it is of class K and also unbounded.

A function is of class L if it is strictly positive and it is strictly decreasing to zero as its


argument tends to infinity.

A function β : R≥0 × R≥0 → R≥0 is of class KL if for every fixed t ≥ 0 the function
β(·, t) is of class K and for each fixed s > 0 the function β(s, ·) is of class L.
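For instance, γ(r) = arctan(r) is of class K but not of class K∞ , γ(r) = r² is of class K∞ , δ(t) = exp(−t) is of class L, and β(s, t) = s · exp(−t) is of class KL.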

The functions allow us to geometrically enclose solutions emanating from a given initial value by inducing a bound for the worst case. This directly leads to the following result, cf. [9]:

(a) Sketch of a class K function   (b) Sketch of a class L function

Figure 4.2.: Sketch of classes of comparison functions

Theorem 4.16 (Stability Concepts).


Consider a system (4.1).

(i) An operating point x⋆ = 0 is strongly asymptotically stable or robustly asymptotically stable if there exists an open neighborhood N of x⋆ and a function β ∈ KL such that

∥x(t)∥ ≤ β(∥x0 ∥, t) (4.7)

holds for all x0 ∈ N , all controls u and all t ≥ 0.

(ii) An operating point x⋆ = 0 is weakly asymptotically stable or asymptotically controllable if there exists an open neighborhood N of x⋆ and a function β ∈ KL such that for every x0 ∈ N there exists a control law u such that

∥x(t)∥ ≤ β(∥x0 ∥, t) (4.8)

holds for all t ≥ 0.

Remark 4.17
The result from Theorem 4.16 can be generalized to arbitrary operating points by subtracting x⋆
within the norm operator on both sides of equation (4.7) or (4.8).

Task 4.18
Draw solutions of systems to visualize Theorem 4.16.

(a) Sketch of a robustly asymptotically stable system   (b) Sketch of a weakly asymptotically stable system

Figure 4.3.: Sketch of asymptotically stable systems

The idea of the comparison function concept is to establish a bound on the system behavior. Once we can verify/compute what such a bound looks like, we know that even in the worst case the system behavior will be better, i.e. the trajectories will be closer to the operating point than the comparison bound. Secondly, we also know that if a control function satisfies this bound, then the solution will converge and the system will be asymptotically stabilized by this control.

Remark 4.19
Note that Theorem 4.16 makes no assumption as to whether the control is feed forward or feed-
back, nor whether the control is continuous or discontinuous.

The second concept utilizes Lyapunov functions, which can be interpreted as an energy function of the system state. The main difference to the comparison function concept lies in considering a minimizing control in the neighborhood of the considered state.

Definition 4.20 (Control-Lyapunov function).


Consider a system (4.1) with operating point (x⋆ , u⋆ ) = (0, 0) and a neighborhood N (x⋆ ). Then a continuous function V : Rnx → R0+ is called a Control-Lyapunov function if there exist functions α1 , α2 , α3 ∈ K∞ such that there exists a control function u satisfying the inequalities

α1 (∥x∥) ≤ V (x) ≤ α2 (∥x∥) (4.9)

inf u∈U (d/dt) V (x) = inf u∈U (∂V/∂x)(x) · f (x, u) ≤ −α3 (∥x∥) (4.10)

for all x ∈ N \ {x⋆ }.

The idea of the Lyapunov function is comparable to a salad bowl: If we put a ball into the bowl, it will run downhill and remain at the lowest point. Using this metaphor, the lowest point of the Lyapunov function marks the desired equilibrium. The Control-Lyapunov function itself can be seen as the energy of a system. Hence, the first inequalities (4.9) are bounds on the behavior of the system. In contrast to comparison functions, however, both lower and upper bounds are required. The reason for this necessity emanates from the second inequality (4.10). This inequality basically says that energy is drawn from the system continuously. Yet, even if energy is continuously drawn from the system, it may come to rest far away from the operating point. To avoid such cases, the bounds on V are required.
In our last step, we apply this energy concept to obtain stability by energy draining arguments:

Theorem 4.21 (Asymptotic Stability).


Consider a system (4.1) where f (0, 0) = 0, a neighborhood N of x⋆ and a continuous function
V : N → R0+ .

(i) An equilibrium x⋆ = 0 is strongly asymptotically stable or robustly asymptotically stable if


(4.9) and


sup V (x) · f (x, u) ≤ −α3 (x)
u∈U ∂x

hold for α3 ∈ K∞ and all x ∈ N .

(ii) An equilibrium x⋆ = 0 is weakly asymptotically stable or asymptotically controllable if for


all x ∈ N there exists a control u such that (4.9), (4.10) hold.

Task 4.22
Draw solutions of systems to visualize Theorem 4.21.

Remark 4.23
Again, the strong or robust concept means that no matter which control we consider, energy is
drawn from the system. The weak concept requires additional work to design a control such that
the stability property is induced.

In contrast to Lipschitz continuous feedback stabilizability, cf. Lemma 4.1 and Corollaries 4.10,
4.13, using the notion of Control-Lyapunov functions an inversion of the statement is possible.

Theorem 4.24 (Existence of Control-Lyapunov function).


Consider a system (4.1) where f (0, 0) = 0. Suppose that x⋆ = 0 is weakly asymptotically stable
or asymptotically controllable. Then there exists a continuous function V : N → R0+ satisfying
the properties of a Control-Lyapunov function given in Definition 4.20.

Theorem 4.25 (Differentiable Control-Lyapunov function).

Consider a system (4.1) where f (0, 0) = 0. If x⋆ = 0 is asymptotically stabilizable/controllable on a neighborhood N (x⋆ ) by a Lipschitz continuous feedback u : N (x⋆ ) → U , then there exists a Control-Lyapunov function which is arbitrarily often continuously partially differentiable on N (x⋆ ).

Remark 4.26
Considering Artstein’s circles, cf. Task 4.11, one can show that

V (x) = √(4x1² + 3x2²) − | x1 |

is a Control-Lyapunov function, for which the conditions (4.9), (4.10) hold true for either u ≡ −1 or u ≡ 1.

Based on the latter remark, we already see that the concept of Control-Lyapunov function allows
us to conclude asymptotic stability directly once such a function is known. Hence, the concept
of a Control-Lyapunov function allows for a broader class of feedbacks to be computed as com-
pared to linearization (Theorem 4.5) or characteristic maps (Section 3.2). The map in Figure 4.4
characterizes the connection between the last results:
From Figure 4.4, we directly obtain the weak links in the nonlinear setting: From Brockett and Artstein, it is clear that the arrow from controllability to existence of a Lipschitz feedback is not present. Additionally, from a differentiable Control-Lyapunov function, the existence of a Lipschitz feedback cannot be guaranteed.

Figure 4.4.: Schematic connection of stability results (asymptotic controllability ⟷ stability via Lipschitz feedback, Lemma 4.1 and Corollary 4.13; existence of a continuous Control-Lyapunov function, Theorems 4.24 and 4.21; existence of a differentiable Control-Lyapunov function, Theorem 4.25)

Combining the results of this section, we obtain the following:

Table 4.1.: Advantages and disadvantages of Control-Lyapunov functions


Advantage Disadvantage
✓ Allows energy based approach ✗ Derivation complicated
✓ Requires no knowledge of system ✗ Limited to neighborhood
✓ Allows broader class of systems ✗ Existence of feedback not guaranteed
✓ Stability induces existence

In the following section, we specialize our setting and aim to close the gaps illustrated in Fig-
ure 4.4 to derive a concept to compute a nonlinear feedback for a nonlinear system.

4.3. Backstepping and Sontag’s formula


The area of constructive nonlinear control is still an open field of research and focuses on the derivation of explicit formulas for the computation of feedback controllers. A characteristic of this field is that approaches are not valid for the entire class of nonlinear systems (4.1), but only for subclasses satisfying certain structural assumptions.
The aim of this section is to invert the result from Theorem 4.24, that is for a given Control-
Lyapunov function we want to derive a feedback such that the closed loop is asymptotically
stable.
Within this section, we focus on so called control-affine systems, which take the following general
form:

Definition 4.27 (Control-affine system).


Consider a system (4.1). If the dynamic is given by

ẋ(t) = f (x(t), u(t)) = f 0 (x(t)) + ∑_{j=1}^{nu} f j (x(t)) · u j (t) (4.11)

with locally Lipschitz continuous functions f j : Rnx → Rnx for all j = 0, 1, . . . , nu , then we call the system control-affine.

Note that the controls u j : T → R are scalar and the sum ranges across the dimension of the
control u ∈ Rnu .
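For instance, the inverted pendulum on a cart from Task 4.7 (equations (4.6)) is control-affine with f 0 (x) = ( x2 , −k x2 + g sin( x1 ), x4 , 0)⊤ and f 1 (x) = (0, cos( x1 ), 0, 1)⊤ .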
Moreover, we need to restrict ourselves to feedbacks, which are bounded for every bounded input.
The easiest way to obtain such a restriction is to impose the following assumption:

Assumption 4.28
Consider a feedback u : X → U . Then we assume u to be Lipschitz continuous and satisfy
u(0) = 0.

Remark 4.29
Note that the condition u(0) = 0 can easily be satisfied by transforming the dynamic to f̃ (x, u) = f (x, u + u(0)) and using ũ(x) = u(x) − u(0). Hence, we will always have ũ(0) = 0.

Utilizing the latter assumption, we can show boundedness of the feedback for bounded input:

Lemma 4.30
Consider a feedback u : X → U satisfying Assumption 4.28. Then we have

∥u(x)∥ ≤ max_{∥x̂∥≤∥x∥} ∥u(x̂)∥ + ∥x∥ =: γ(∥x∥) (4.12)

where γ ∈ K is a bounding function of the feedback.

Now, we can use the latter to invert Theorem 4.24 and define an asymptotically stabilizing feed-
back. The result itself dates back to Artstein [3], yet here we use the explicit formula for the
feedback derived by Sontag [18].

Theorem 4.31 (Sontag’s formula).


Consider a control-affine system (4.11) and suppose V : X → R0+ is a Control-Lyapunov function for this system satisfying inequality (4.10) for a control u satisfying Assumption 4.28. Let γ be defined as in Lemma 4.30. Then, using the abbreviation Fj (x) = (∂V/∂x)(x) · f j (x) for j = 0, 1, . . . , nu , the feedback u = (u1 , . . . , unu )⊤ defined via

u j (x) := { 0,                                             if x = 0 or Fj (x) = 0
          { −Fj (x) · Φ( F0 (x), ∑_{i=1}^{nu} Fi (x)² ),    if Fj (x) ̸= 0          (4.13)

with

Φ( a, b) = { ( a + √( a² + b² ))/b,   b ̸= 0
           { 0,                       b = 0

is Lipschitz continuous in the range ∥u(x)∥ ≤ γ(∥x∥) and the closed loop is globally asymptotically stable.

As a consequence, we can now compute an asymptotically stabilizing feedback explicitly using


equation (4.13) if a Control-Lyapunov function is known. The latter part, however, remains a
difficult task. To illustrate the idea, we consider the following example:

Task 4.32
Consider the controllable inverted pendulum

ẋ1 (t) = x2 (t)


ẋ2 (t) = − x2 (t) + sin ( x1 (t)) + u(t)

where the control represents the force of a motor acting on the angular velocity. Show that

V (x) = (( x1 + x2 )² + x1² ) / 2

is a Control-Lyapunov function and compute a feedback via Sontag’s formula.



Solution to Task 4.32: To show that V (x) is a Control-Lyapunov function, we expand

V (x) = ( x1² + 2x1 x2 + x2² + x1² ) / 2

and use 2x1 x2 ≥ −(3/2) x1² − (2/3) x2² to obtain

V (x) ≥ ( (1/2) x1² + (1/3) x2² ) / 2 ≥ ∥x∥²/6,

i.e. α1 (∥x∥) = ∥x∥²/6. Similarly, using ( x1 + x2 )² ≤ 2x1² + 2x2² , we have

V (x) ≤ ( 3x1² + 4x2² ) / 2 ≤ 2∥x∥²

and accordingly α2 (∥x∥) = 2∥x∥². For the derivative, we obtain
(∂V/∂x)(x) · f (x, u) = (2x1 + x2 ) x2 + ( x1 + x2 ) · (− x2 + sin( x1 ) + u).

As we require the descent only to hold in the infimum, we can choose a control wisely to cancel out inner parts of the latter expression. Here, we use u = − x1 − x2 − sin( x1 ) and see

(∂V/∂x)(x) · f (x, u) = (2x1 + x2 ) x2 + ( x1 + x2 ) · (− x1 − 2x2 )
                      = 2x1 x2 + x2² − x1² − 3x1 x2 − 2x2²
                      = − x1² − x2² − x1 x2
                      ≤ − x1² − x2² + x1²/2 + x2²/2
                      = −( x1² + x2² )/2 = −∥x∥²/2 < 0

using x1 x2 ≥ − x1²/2 − x2²/2, revealing α3 (∥x∥) = ∥x∥²/2. Hence, V (x) is a Control-Lyapunov function.


Considering the dynamic, we have

f 0 (x) = ( x2 , − x2 + sin( x1 ))⊤   and   f 1 (x) = (0, 1)⊤ .

Hence, we obtain

F0 (x) = (∂V/∂x)(x) · f 0 (x) = (2x1 + x2 ) x2 + ( x1 + x2 ) · (− x2 + sin( x1 )) = x1 x2 + ( x1 + x2 ) · sin( x1 )

and

F1 (x) = (∂V/∂x)(x) · f 1 (x) = x1 + x2 .

Using Sontag’s formula, we therefore obtain

u(x) = −( x1 + x2 ) · ( x1 x2 + ( x1 + x2 ) sin( x1 ) + √( ( x1 x2 + ( x1 + x2 ) sin( x1 ))² + ( x1 + x2 )⁴ ) ) / ( x1 + x2 )² .
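The resulting closed loop can be checked by simulation; a minimal sketch, assuming a freely chosen initial value and horizon:

import numpy as np
from scipy.integrate import solve_ivp

def sontag_feedback(x):
    # F0 = dV/dx . f0 and F1 = dV/dx . f1 as computed above
    F0 = x[0]*x[1] + (x[0] + x[1]) * np.sin(x[0])
    F1 = x[0] + x[1]
    if F1 == 0.0:
        return 0.0
    return -F1 * (F0 + np.sqrt(F0**2 + F1**4)) / F1**2

def closed_loop(t, x):
    u = sontag_feedback(x)
    return [x[1], -x[1] + np.sin(x[0]) + u]

sol = solve_ivp(closed_loop, (0.0, 10.0), [1.0, 0.0], rtol=1e-8)
print(sol.y[:, -1])   # close to the operating point (0, 0)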

As we have seen, while evaluating Sontag’s formula is straightforward, the derivation of a Control-Lyapunov function is a rather involved task. In this context, backstepping is a systematic approach to construct and compute such a function. The idea of backstepping is to iteratively construct the Control-Lyapunov function. To this end, the dynamic is split into parts forming a cascade. The split is made in such a way that the outer parts of the cascade are simple and exhibit a known Control-Lyapunov function. The cascade is then formed using the following core result:

Theorem 4.33 (Backstepping).


Consider a nonlinear control system (4.1). Suppose that f is continuously differentiable and that
there exists a twice continuously differentiable feedback u f (x) with u f (0) = 0 such that x⋆ = 0
is (locally) asymptotically stable. Let Vf (x) be the corresponding continuously differentiable
Control-Lyapunov function. Furthermore, suppose there exists a continuously differentiable func-
tion h : X × U → U with h(0, û) = 0 for û ∈ U . Then for the coupled system

ẋ(t) = f (x(t), û(t)) (4.14)


û˙ (t) = h(x(t), û(t)) + u(t) (4.15)

with state x̂ = (x, û) and control u there exists a continuously differentiable feedback such that the closed loop is asymptotically stable with corresponding Control-Lyapunov function

V (x̂) = V (x, û) := Vf (x) + (û − u f (x))² / 2. (4.16)

In Figure 4.5, the block structure of the backstepping idea is depicted.



Figure 4.5.: System structure for backstepping (the control u drives the subsystem h, whose output û in turn drives the subsystem f with state x)

Note that different from the idea of a cascade control, the structure in the backstepping approach is already fixed by the model. As a consequence, the derivation of the controllers per loop is not done from inside to outside; instead, one tracks down separable simple subsystems from outside to inside. Hence, for an application, we first need to identify/fix an outer subsystem f such that the overall system exhibits the structure given in equations (4.14), (4.15).

Remark 4.34
We highlight that the aim of the added part (û − u f (x))²/2 in (4.16) is to enforce a reference for û, which acts as the control of the subsystem f, such that it tracks the desired function u f (x). Hence, the input u(x) shall be a reference tracking feedback. Such a feedback can be derived using Sontag’s formula applied to the Control-Lyapunov function obtained via backstepping.

To better understand the procedure of backstepping, we consider the example from Task 4.32
again.

Task 4.35
Again, consider the controllable inverted pendulum

ẋ1 (t) = x2 (t)


ẋ2 (t) = − x2 (t) + sin ( x1 (t)) + u(t)

where the control represents the force of a motor acting on the angular velocity. Derive a
Control-Lyapunov function via backstepping.

Solution to Task 4.35: Using the notation from Theorem 4.33, we set x = x1 and û = x2 . Then we obtain

f (x, û) = û
h(x, û) = −û + sin(x) .

Now, we consider the first equation only. Since the system is linear, we can directly use the
Control-Lyapunov function Vf (x) = x2 /2. Hence, we obtain the feedback u f (x) := −x for
the outer system. Note that due to the overall structure of the system, we have û ̸= u f (x).
Applying the backstepping procedure, we use the latter to compute

V (x̂) = V (x, û) = Vf (x) + (û − u f (x))² / 2
       = x²/2 + |û + x|² / 2
       = ( x² + (û + x)² ) / 2
       = ( x1² + ( x2 + x1 )² ) / 2 .

This is exactly the Control-Lyapunov function given in Task 4.32.

Remark 4.36
Note that in the linear one dimensional case, we have that V (x) = x2 /2 is always a Control-
Lyapunov function.

Using backstepping and Sontag’s formula for the special case of control affine systems, we can
complement our schematic from Figure 4.4. In Figure 4.6, the special case is highlighted in red.

Figure 4.6.: Schematic connection of stability results to backstepping/Sontag’s formula; compared to Figure 4.4, the computation of a differentiable Control-Lyapunov function (Theorem 4.33) and the computation of a Lipschitz feedback (Theorem 4.31) are added for the control-affine case

At this point we like to note that stability is not necessarily restricted to equilibria, but can also
apply to periodic orbits or areas. An illustration is given in Figure 4.7 indicating stability of the
system yet not of a point.
To summarize, we considered one approach to treat nonlinear systems. We again like to stress
that currently no generic concept to treat nonlinear systems is known. Modern approaches to

Figure 4.7.: Sketch of a stable system with periodic orbit (in the ( x1 , x2 ) plane)

tackle this area utilize the concept of key performance indicators, exploit the dynamics and in-
clude constraints on the system. All of the latter will be part of the subsequent lecture Control
Engineering 3.

Table 4.2.: Advantages and disadvantages of backstepping & Sontag’s formula


Advantage Disadvantage
✓ Direct derivation of feedback ✗ Requires separability
✓ Allows multivariate control ✗ Limited to control affine systems
✓ Recursive construction possible ✗ Results analytically involved
✓ Generic applicability
CHAPTER 5

DIGITAL CONTROL SYSTEMS

In the previous Chapter 4 we observed that for an asymptotically controllable system Brockett’s
condition is a necessary yet not sufficient criterion for existence of a Lipschitz continuous feed-
back (see Corollary 4.10). As the system is asymptotically controllable, a possibly discontinuous
feedback must exist.
Throughout this chapter, we again consider nonlinear control systems of the form

ẋ(t) = f (x(t), u(t)). (5.1)

5.1. Zero order hold control


The simplest case of a discontinuous feedback is given by the so called zero order hold. The idea is to sample the control, i.e. to fix a time grid {tk } ⊂ R and define the control to be constant in between two sampling instances tk and tk+1 . Here, we further simplify the setting by introducing a sampling period T and define the sampling instances to be equidistant, that is tk = k · T.

Remark 5.1
There are two more general cases: For one, the sampling times may be defined by a function of time, or secondly, the sampling times may be defined by a function of the state. The first one is common in prediction and prescription of systems where actions in the far future are significantly less important. Hence, one typically trades off exactness of the prediction against computational complexity. The latter case is referred to as event driven control.

We still like to stress that in applications, the choice of T is not fixed right from the beginning, but depends on the obtainable solution and stability properties. Hence, we continue to formulate the following definitions of zero order hold control and solution as a parametrization of operators with respect to T.

Definition 5.2 (Zero order hold control).


Consider a nonlinear control system (5.1) and a feedback u : X → U such that ∥u(x)∥ ≤ γ(x)
holds for all x ∈ X and some continuous function γ : X → R. Moreover suppose a sampling
period T > 0 to be given, which defines the sampling times tk = k · T. Then we call the
piecewise constant function

u T (t) ≡ u(x(tk )), t ∈ [ t k , t k +1 ) (5.2)

zero order hold control.

As a consequence of the latter definition, the control u T is not continuous but instead exhibits jumps at the sampling times tk . Still, the function is integrable, which is a requirement for existence of a solution of (5.1) for such an input. This insight directly leads to the following:

Definition 5.3 (Zero order hold solution).


Given a nonlinear control system (5.1) and a zero order hold control u T : T → U . Then we call
the function x T : T → X satisfying

ẋ T (t) = f (x T (t), u T (t)) (5.3)

zero order hold solution.

In order to compute such a solution, we can simply concatenate solutions of subsequent sampling
intervals [tk , tk+1 ). Here, we can use the endpoint of the solution on one sampling interval to be
the initial point on the following one. Hence, the computation of x T is well defined.
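This concatenation can be implemented in a few lines; a minimal sketch, reusing purely for illustration the pendulum and feedback u(x) = −x1 − x2 − sin(x1 ) from Task 4.32, with freely chosen sampling period and initial value:

import numpy as np
from scipy.integrate import solve_ivp

def u(x):
    return -x[0] - x[1] - np.sin(x[0])    # continuous feedback from Task 4.32

T = 0.1                                    # sampling period
x = np.array([1.0, 0.0])                   # initial value
for k in range(100):                       # sampling instances t_k = k * T
    uk = u(x)                              # control held constant on [t_k, t_{k+1})
    sol = solve_ivp(lambda t, z: [z[1], -z[1] + np.sin(z[0]) + uk],
                    (0.0, T), x, rtol=1e-8)
    x = sol.y[:, -1]                       # endpoint becomes the next initial point
print(x)                                   # near the operating point (0, 0)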

Remark 5.4
Since the system is Lipschitz continuous on each interval [tk , tk+1 ), the solution is also unique. Hence, identifying endpoint and initial point of subsequent sampling intervals is sufficient to show that the zero order hold solution is unique. Yet, as a consequence of this concatenation, the solution is not differentiable at the sampling points tk .

Remark 5.5
Note that despite u T being piecewise constant, the zero order hold solution does not exhibit jumps and shows nonlinear behavior.

Similar to the continuous time case, we next introduce a concept of stability. Note that we did not fix the sampling period T, hence stability needs to be parametrized using this characterizing parameter of the control. Here, we use the same simplification of shifting the operating point to the origin as in Chapter 4, cf. Remark 4.17.

Definition 5.6 (Practical stability/controllability).


Consider a nonlinear control system (5.1) with f (0, 0) = 0. Then we call a family of feedbacks u T , T ∈ (0, T ⋆ ], to semiglobally practically asymptotically stabilize the operating point (x⋆ , u⋆ ) = (0, 0) if there exist a function β ∈ KL and constants R > ε > 0 such that

∥x T (t)∥ ≤ max{ β(∥x0 ∥, t), ε} (5.4)

holds for all t > 0, all T ∈ (0, T ⋆ ] and all initial values satisfying ∥x0 ∥ ≤ R.

Remark 5.7
The term „semiglobal“ refers to the constant R, which limits the range of the initial states for which stability can be concluded. The term „practical“ refers to the constant ε, which is a measure of how close the solution can be driven towards the operating point before oscillations occur, as in the case of the bang-bang controller.

As a direct conclusion of Definition 5.6, we can apply Lemma 4.1 and obtain:

Corollary 5.8
Consider a nonlinear control system (5.1) with f (0, 0) = 0 and suppose a family of feedbacks
u T , T ∈ (0, T ⋆ ] to exist, which semiglobally practically asymptotically stabilize the operating
point (x⋆ , u⋆ ) = (0, 0). Then there exists a feed forward u : T → U such that the system is
practically asymptotically controllable.

Definition 5.6 also shows the dilemma of digital control using fixed sampling periods: Both close to the desired operating point and for initial values far away from it, the discontinuous evaluation of the feedback u T leads to a degradation of performance. Close to the operating point, a slow evaluation leads to overshoots even though the dynamics is typically rather slow there. Far away from the operating point, the dynamics is too fast to be captured in between two sampling points, which leads to unstable behavior.
Still, it may even be possible to obtain asymptotic stability (not only practical asymptotic stability)
using fixed sampling periods T as shown in the following task:

Task 5.9
Consider the system

ẋ1 (t) = (− x1 (t)² + x2 (t)²) · u(t)
ẋ2 (t) = (−2 · x1 (t) · x2 (t)) · u(t).

Design a zero order hold control such that the system is practically asymptotically stable.

Solution to Task 5.9: We set

u T (t) = { 1,    x1 (tk ) ≥ 0
         { −1,   x1 (tk ) < 0 .

For this choice, the system is globally asymptotically stable for all T > 0 and even independently of T. The reason for the latter is that the solutions never cross the switching line x1 = 0, i.e. the control to be applied is always constant, which leads to independence of the feedback from T.

As described before, the behavior observed in Task 5.9 is the exception. In practice, the limitations of semiglobality and practicality are typically the best we can expect in zero order hold control of nonlinear systems.
In order to show that a stabilizing zero order hold control exists, we follow the path from Chapter 4
and adapt the concept of Control-Lyapunov functions from Definition 4.20 accordingly.

Definition 5.10 (Practical Control-Lyapunov functions).


Consider a nonlinear control system (5.1) with operating point (x⋆ , u⋆ ) = (0, 0) such that f (x⋆ , u⋆ ) = 0 and a neighborhood N (x⋆ ). Then the family of continuous functions VT : Rnx → R0+ for T ∈ (0, T ⋆ ] is called a semiglobal practical family of Control-Lyapunov functions if there exist constants R̂ > ε̂ > 0 as well as functions α1 , α2 ∈ K∞ and a continuous function W : X → R+ \ {0} such that there exists a control function u satisfying the inequalities

α1 (∥x∥) ≤ VT (x) ≤ α2 (∥x∥) (5.5)

inf u∈U VT (x T (tk+1 )) ≤ max {VT (x T (tk )) − T · W (x T (tk )), ε̂} (5.6)

for all x ∈ N \ {x⋆ } with VT (x) ≤ R̂ and all T ∈ (0, T ⋆ ].

The latter definition extends the concept of a Control-Lyapunov function in various ways. For one, as the zero order hold solution is not differentiable, we can no longer assume VT to be differentiable. Hence, the decrease in energy in inequality (5.6) is formulated along a solution instead of along the vector field. Secondly, the parametrization regarding T needs to be considered. This leads to a parametrization of the decrease in inequality (5.6) using the positive definite function W (·). Moreover, the ideas of semiglobality and practicality are integrated.

Remark 5.11
Comparing Definition 5.10 to Definition 5.6, we can identify the similarity of semiglobality between the constants R and R̂ as well as of practicality between ε and ε̂. The difference between these two pairs lies in their interpretation: For the KL formulation, we utilize the state space, whereas for Control-Lyapunov functions the energy space is used. Hence, both values are transformations of one another using the Control-Lyapunov function VT .

Now, supposing that a practical Control-Lyapunov function exists, we can directly derive the existence of a zero order hold control.

Theorem 5.12 (Existence of feedback).


Consider a nonlinear control system (5.1) with operating point (x⋆ , u⋆ ) = (0, 0) such that f (x⋆ , u⋆ ) = 0. Let VT be a semiglobal practical family of Control-Lyapunov functions for T ∈ (0, T ⋆ ]. Then the minimizer

u T (t) := argmin u∈U VT (x T (tk+1 )) (5.7)

is a family of semiglobally practically asymptotically stabilizing feedbacks.

Note that in (5.7), the right hand side depends on u implicitly as x T (tk+1 ) is defined using the
initial value x T (tk ) and the zero order hold control u. Hence, the definition (5.7) is proper.

Remark 5.13
The transfer from infimum in (5.6) to minimum in (5.7) is only possible as the control is constant
in between two sampling instances tk and tk+1 and therefore the solution x T (·) is continuous with
respect to u.

Unfortunately, the pure existence of a feedback does not help us in computing it. Additionally,
we still require the existence of a practical Control-Lyapunov function to conclude existence of
such a feedback. Here, we first address existence of a Control-Lyapunov function, for which the
following is known from the literature:

Theorem 5.14 (Existence of practical Control-Lyapunov function).

Consider a nonlinear control system (5.1) with operating point (x⋆ , u⋆ ) = (0, 0) such that f (x⋆ , u⋆ ) = 0. If the system is asymptotically controllable, then there exists a family of semiglobal practical Control-Lyapunov functions.

The most important aspect of Theorem 5.14 is the requirement regarding the control system. The result only requires the system to be asymptotically controllable, a property which we discussed in the previous Chapter 4, i.e. without digitalization. Hence, techniques such as backstepping or others depending on the structure of the control system may be applied.

Figure 5.1.: Schematic connection of stability results to digitalization; compared to Figure 4.4, practical asymptotic controllability (Corollary 5.8), stability via a family of practically stabilizing feedbacks (Theorem 5.12) and the existence of a family of practical Control-Lyapunov functions (Theorem 5.14) are added

In practice, however, the two tasks of deriving the feedback u T and the Control-Lyapunov function VT are often done in the inverse sequence. To this end, first a feedback u T is derived, and then the inequality (5.6) is shown to hold for this feedback:

VT (x T (tk+1 )) ≤ max {VT (x T (tk )) − T · W (x T (tk )), ε̂} .



The reason for using such a procedure is that Theorem 5.12 only requires a Control-Lyapunov function for fixed R̂, ε̂ to exist for some T0 > 0 in order to conclude existence also for all smaller sampling periods. Hence, if we find a constructive way to derive a feedback, then a practical Control-Lyapunov function can be derived and stability properties of this feedback can be concluded for all T ∈ (0, T0 ].
Here, we follow this idea and show how a feedback can be derived which exhibits the required properties.

5.2. ISS – Input to state stability


The idea to obtain a feedback in a digital setting is to first derive a feedback in the continuous
setting and „embed“ it in a digital setting. The simplest idea is to use the continuous feedback and
apply the zero order hold idea to it. As the embedding comes from digitalization, the resulting
feedback will exhibit an offset, which is similar to the difference between the continuous and the
zero order hold feedback. This difference acts like a disturbance on the system. To formalize this,
we consider the disturbed nonlinear control system

ẋ(t) = f (x(t), u(t)) + d(t) (5.8)

where the disturbance d(t) is a measurable function and f is Lipschitz continuous in the disturbance.
Unfortunately, even small disturbances may lead to instability.

Task 5.15
Consider the system

ẋ(t) = f (x(t)) = { − exp(− x(t) + 1),   x(t) ≥ 1
                  { − x(t),              x(t) ∈ [−1, 1]
                  { exp( x(t) + 1),      x(t) ≤ −1 .

Show that the system is asymptotically stable using the Lyapunov function V (x) = x2 /2.
Show that the disturbed system

ẋ(t) = f (x(t)) + d(t)

is unstable.
104

Solution to Task 5.15: Using the Lyapunov function V (x) we obtain α1 = α2 = V (x) and the decrease via α3 (x) = − x · f (x) > 0 for x ̸= 0. Hence, the system is asymptotically stable.
Considering d(t) ≡ ε > 0, there always exists a δ > 0 such that f (x) + ε > ε/2 for all x ≥ δ. Hence, each solution with initial value x ≥ δ increases with at least constant rate, i.e. diverges to ∞.
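This destabilization can be observed numerically; a minimal sketch, where the disturbance level, horizon and initial value are chosen freely:

import numpy as np
from scipy.integrate import solve_ivp

def f(x):
    # saturating vector field from Task 5.15
    if x >= 1.0:
        return -np.exp(-x + 1.0)
    if x <= -1.0:
        return np.exp(x + 1.0)
    return -x

for d in (0.0, 0.1):   # undisturbed vs. constant disturbance
    sol = solve_ivp(lambda t, z: [f(z[0]) + d], (0.0, 50.0), [5.0], rtol=1e-8)
    print(d, sol.y[0, -1])   # d = 0 decays towards 0, d = 0.1 keeps growing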

Remark 5.16
Note that this possible instability is not present in the linear case. For systems of the form

ẋ(t) = A · x(t) + D · d(t)

the solution is given by

x(t) = exp( A · (t − t0 )) · x(t0 ) + ∫_{t0}^{t} exp( A · (t − s)) · D · d(s) ds.

Hence, using ∥ exp( A · t)∥ ≤ c · exp(−σ · t), each solution satisfies

∥x(t)∥ ≤ c · exp(−σ · (t − t0 )) · ∥x(t0 )∥ + (c · ∥ D ∥/σ ) · ∥d∥∞ .

As a consequence, each solution converges towards a ball with radius (c · ∥ D ∥/σ ) · ∥d∥∞ around the origin, i.e. the asymptotic bound depends on the infinity norm of the disturbance.

In the nonlinear setting, such a convergence cannot be expected, yet under certain conditions it can be assured. These conditions are known as ISS (input to state stability) and are typically formulated for (uncontrolled) systems:

Definition 5.17 (ISS).


Consider the disturbed nonlinear system (5.8) with u ≡ 0 and a neighborhood N (x⋆ ) of x⋆ = 0. Then we call the system ISS if there exist functions β ∈ KL and γ ∈ K∞ such that

∥x(t)∥ ≤ β(∥x0 ∥, t) + γ(∥d∥∞ ) (5.9)

holds for all x0 ∈ N (x⋆ ) and all t ≥ 0.

Having defined the ISS property, we can directly derive the following:

Corollary 5.18
Consider a nonlinear system (5.1) and its connected disturbed system (5.8) with u ≡ 0. If the disturbed system is ISS, then the undisturbed system is asymptotically stable.
In the other direction, however, we can use the result from Task 5.15 to derive the following:

Corollary 5.19
Consider a nonlinear system (5.1) and its connected disturbed system (5.8) with u ≡ 0. If the
system is asymptotically stable, no conclusion regarding stability of the disturbed system can be
drawn.

However, by tightening the requirements on the system, the reverse direction can be concluded:

Theorem 5.20.
Consider a nonlinear system (5.1) and its connected disturbed system (5.8) with u ≡ 0. Suppose the undisturbed system to be asymptotically stable and the dynamic to be Lipschitz continuous with respect to state and disturbance. Then there exists a neighborhood N (x⋆ ) such that the disturbed system is ISS.

For our control setting, we can apply this definition if we consider the control to be given by a
feedback u : X → U .

Definition 5.21 (η practical ISS).


Consider a disturbed nonlinear control system (5.8) with feedback u T : X → U and consider a
neighborhood N (x⋆ ) of x⋆ = 0.

If for η > 0 there exists a function β ∈ KL such that

∥x T (t)∥ ≤ β(∥x0 ∥, t) + η (5.10)

holds for all t > 0 and all x0 ∈ N (x⋆ ), then the system is called η practically ISS.

The family of systems x_{T_j} is called practically ISS if there exist a sequence η_j → 0 and a function β ∈ KL such that

∥x_{T_j} (t)∥ ≤ β(∥x0 ∥, t) + η_j (5.11)

holds for all t > 0 and all x0 ∈ N (x⋆ ).



We now want to use the ISS property to conclude under which conditions the errors introduced by digitalization / zero order hold can be regarded as disturbances, and asymptotic controllability of the undisturbed system can be carried over to its zero order hold solution.

5.3. Stability under digitalization


Within this section, we aim to derive conditions to show that the zero order hold solution from
Definition 5.3 is practically asymptotically stable. To this end, we first introduce the concept of
consistency and show that the zero order hold yields a consistent solution. Thereafter, we show that consistent solutions allow the preservation of asymptotic stability under digitalization, which is called embedding.
We start with the definition of consistency.

Definition 5.22 (Consistency).


Consider two systems x1 (t) and x2 (t) and a time ∆ > 0. We call both systems consistent on a set D if for some ε > 0

∥x1 (t) − x2 (t)∥ ≤ t · ε (5.12)

holds for all t ∈ [0, ∆] and all initial values x ∈ D.

For zero order hold systems, we directly obtain consistency for both the state and its derivative:

Lemma 5.23
Consider a nonlinear control system (5.8) with feedback u T : X → U and its respective zero
order hold.

For each bounded set D and sampling periods T_j → 0 for j → ∞ we have

∥x(t) − x_{T_j} (t)∥ ≤ t · ε_j (5.13)

for all t ∈ [0, T_j ] with ε_j = O( T_j ), that is ε_j ≤ C · T_j for some C > 0.

For each bounded set D and sampling periods T_j → 0 for j → ∞ we have

∥ẋ(t) − ẋ_{T_j} (t)∥ ≤ t · ε_j (5.14)

for all t ∈ [0, T_j ] with ε_j = O( T_j ).



Based on consistency, we can embed systems into one another. Our interest in the context
of digitalization is to obtain parameters of the practical stability property based on the undis-
turbed/undigitalized version of the feedback. The idea of embedding is to express one system
by another one and to express respective properties by one another. Here, we are particularly
interested in the disturbance. In general, embedding is defined as follows:

Definition 5.24 (Embedding).


Consider two disturbed systems x1 (t) and x2 (t). We call the systems embedded on a set D ⊂ X if for each disturbance d2 and each initial value x ∈ D we have

x2 (t) ∈ D ∀t ∈ [0, T ] (5.15)

and additionally if there exists a disturbance d1 such that x1 (t) = x2 (t) for all t ∈ [0, T ] and

∥d1 ∥∞ ≤ δ + ρ ∥d2 ∥∞ . (5.16)

In the context of digitalization, we have d2 ≡ 0 and can therefore always choose ρ = 0. Now,
we can use embedding and obtain the following core result:

Theorem 5.25 (Equivalency of stability under digitalization).


Consider a nonlinear control system (5.8) with feedback u T : X → U and its respective zero
order hold. Suppose a bounded neighborhood N (x⋆ ) of the operating point x⋆ = 0 to be given.
Then the following statements are equivalent:

The feedback controlled system (5.8) is asymptotically stable for all x ∈ N (x⋆ ).

The family of systems x_{T_j} is practically asymptotically stable for sufficiently large j, and the comparison function β ∈ KL is independent of j.

From Theorem 5.25 we see that asymptotic stability of the continuously controlled system is
transferred to the digitally controlled system in the semiglobal practical sense. Here, the sampling
time T shows various effects on the quality of the digital closed loop:

The constants ε, ε̂ and η_j are in general larger if the sampling time T is increased.

The neighborhood N (x⋆ ) within which practical asymptotic stability can be shown is in general smaller if the sampling time T is increased.

The performance of solutions in general degrades if the sampling time T is increased.



Figure 5.2.: Schematic connection of stability results to derive digital controls; compared to Figures 4.6 and 5.1, Theorem 5.25 and Corollary 5.8 link the stability of the family of zero order hold feedbacks to practical asymptotic controllability

Again, we summarized the results in a schematic sketch given in Figure 5.2.


Here, we see that Theorem 5.25 links the continuous/analog and zero order hold/digital worlds.
Hence, we now have the argument to simply apply zero order hold or higher order digitalization
methods to feedbacks, which are originally designed for continuous/analog application, and retain
stability. Note that this also applies to all control laws derived via classical methods such as PID
and all other methods we discussed so far such as cascade or disturbance control.
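The effect of the sampling time can be made visible by running the zero order hold loop from Section 5.1 for different values of T; a minimal sketch comparing two freely chosen sampling periods (again reusing the pendulum and feedback from Task 4.32 purely for illustration):

import numpy as np
from scipy.integrate import solve_ivp

def u(x):
    return -x[0] - x[1] - np.sin(x[0])    # continuous feedback from Task 4.32

for T in (0.05, 0.5):                      # fast vs. slow sampling
    x, t_end = np.array([1.0, 0.0]), 20.0
    for k in range(int(t_end / T)):
        uk = u(x)                          # zero order hold on [t_k, t_{k+1})
        sol = solve_ivp(lambda t, z: [z[1], -z[1] + np.sin(z[0]) + uk],
                        (0.0, T), x, rtol=1e-8)
        x = sol.y[:, -1]
    print(T, np.linalg.norm(x))            # the residual typically grows with T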

Table 5.1.: Advantages and disadvantages of digital control


Advantage                                       Disadvantage
✓ Simple zero order hold derivation             ✗ Requires bounded initial state
✓ Allows usage of continuous control methods    ✗ Performance may degrade at operating point
✓ Higher orders possible                        ✗ Performance may degrade far from operating point
✓ Applies to all digitalizations                ✗ Range of initial values may be limited
Appendices
APPENDIX A

LAPLACE TRANSFORM

The following Table A.1 lists some of the main properties and laws of computation for Laplace transformed functions.

Table A.1.: Properties of Laplace transformed functions

Property                       Time domain                                   Frequency domain
Linearity                      c1 f 1 (t) + c2 f 2 (t)                       c1 fˆ1 (s) + c2 fˆ2 (s)
Time scaling                   f ( at)                                       (1/a) fˆ(s/a)
Frequency derivative           t f (t)                                       − fˆ′ (s)
Frequency general derivative   tⁿ f (t)                                      (−1)ⁿ fˆ⁽ⁿ⁾ (s)
Time derivative                f ′ (t)                                       s fˆ(s) − f (0+ )
Time general derivative        f ⁽ⁿ⁾ (t)                                     sⁿ fˆ(s) − ∑_{k=1}^{n} s^{n−k} f ⁽ᵏ⁻¹⁾ (0+ )
Frequency integration          (1/t) f (t)                                   ∫_{s}^{∞} fˆ(σ ) dσ
Time integration               ∫_{0}^{t} f (τ ) dτ = (η ∗ f )(t)             (1/s) fˆ(s)
Frequency shifting             exp( at) · f (t)                              fˆ(s − a)
Time shifting                  f (t − a) · η (t − a)                         exp(− as) · fˆ(s)
Multiplication                 f (t) · g(t)                                  (1/(2πi)) lim_{T→∞} ∫_{r−iT}^{r+iT} fˆ(σ ) · ĝ(s − σ ) dσ
Convolution                    ( f ∗ g)(t) = ∫_{0}^{t} f (τ ) · g(t − τ ) dτ  fˆ(s) · ĝ(s)
…                              …                                             …

The Laplace transform and its inverse are typically applied using equivalence tables. Table B.1 summarizes a few of these equivalencies.
APPENDIX B

TRANSFER FUNCTION AND PROPERNESS

Table B.1.: Equivalence table for Laplace transformations

Time domain           Frequency domain
δ(t)                  1
η (t)                 1/s
t                     1/s²
exp( at)              1/(s − a)
tⁿ exp( at)           n!/(s − a)^{n+1}
sin(bt)               b/(s² + b²)
cos(bt)               s/(s² + b²)
exp( at) sin(bt)      b/((s − a)² + b²)
exp( at) cos(bt)      (s − a)/((s − a)² + b²)
…                     …

Definition B.1 (Controllable normal form).


Consider a transfer function

G (s) = z(s)/n(s) (B.1)

with coprime polynomials z(s) and n(s). Then the minimal realization with x̂ = ( x̂1 , . . . , x̂nx )⊤ given by

s · x̂(s) = [ 0      1      0      ···      0        ]            [ 0 ]
           [ 0      0      1      ···      0        ]            [ 0 ]
           [ ⋮      ⋮      ⋱      ⋱        ⋮        ] · x̂(s) +   [ ⋮ ] · û(s)   (B.2a)
           [ 0      0      ···    0        1        ]            [ 0 ]
           [ −a0    −a1    ···    −anx−2   −anx−1   ]            [ 1 ]

ŷ(s) = [ b0  b1  ···  bnx−2  bnx−1 ] · x̂(s) + bnx · û(s)   (B.2b)

is called controllable normal form or first standard form.

Definition B.2 (Observable normal form).


Consider a transfer function

G (s) = z(s)/n(s) (B.3)

with coprime polynomials z(s) and n(s). Then the minimal realization with x̂ = ( x̂1 , . . . , x̂nx )⊤ given by

s · x̂(s) = [ 0   ···   ···   0   −a0      ]            [ b0      ]
           [ 1   0     ···   0   −a1      ]            [ b1      ]
           [ 0   1     ⋱     ⋮   ⋮        ] · x̂(s) +   [ ⋮       ] · û(s)   (B.4a)
           [ ⋮   ⋱     ⋱     0   −anx−2   ]            [ bnx−2   ]
           [ 0   0     ···   1   −anx−1   ]            [ bnx−1   ]

ŷ(s) = [ 0  0  ···  0  1 ] · x̂(s) + bnx · û(s)   (B.4b)

is called observable normal form or second standard form.


APPENDIX C

BLOCK DIAGRAM

Table C.1.: List of block symbols (the graphical symbols cannot be reproduced here; each entry lists the characteristic parameters and the meaning of the block)

Parameters                        Meaning
KP                                P controller
KI                                I controller
KD                                D controller
KP , KT                           Latency
KP , KI                           PI controller
KP , KD                           PD/PDT1 controller
ymin , ymax                       Saturation
KDT                               Decay
—                                 Limit
ymin , ymax                       Bang-bang
ymin , ymax , −ε, ε               Bang-bang with hysteresis
ymin , y0 , ymax                  Double-setpoint
ymin , y0 , ymax , ε1 , ε2        Double-setpoint with hysteresis
—                                 Nonlinear
—                                 Triangle
BIBLIOGRAPHY

[1] DIN 19226, Teil 1: Leittechnik; Regelungstechnik und Steuerungstechnik: Allgemeine Grundbegriffe. 1994

[2] DIN 19226, Teil 2: Leittechnik; Regelungstechnik und Steuerungstechnik: Begriffe zum Verhalten dynamischer Systeme. 1994

[3] A RTSTEIN, Z.: Stabilization with relaxed controls. In: Nonlinear Analysis: Theory, Meth-
ods & Applications 7 (1983), Nr. 11, S. 1163–1173

[4] B ROCKETT, R.W.: Asymptotic stability and feedback stabilization. In: B ROCKETT, R.W.
(Hrsg.) ; M ILLMAN, R.S. (Hrsg.) ; S USSMANN, H.J. (Hrsg.): Differential Geometric Con-
trol Theory. Birkhäuser, 1983, S. 181–191

[5] C ELLIER, F.E.: Continuous System Modeling. Springer, New York, 1991

[6] D IRECTOR, S.W. ; ROHRER, R.A.: Introduction to System Theory. McGraw-Hill, New
York, 1972

[7] F ÖLLINGER, O.: Regelungstechnik. 13. überarbeitete Auflage. VDE-Verlag, 2022

[8] I SIDORI, A.: Nonlinear Control Systems. 3rd edition. Springer, 1995

[9] K HALIL, H.K.: Nonlinear Systems. Prentice Hall PTR, 2002. – 750 S. – ISBN 0130673897

[10] L UDYK, G.: Theoretische Regelungstechnik 1. Springer, Berlin, 1995

[11] L UENBERGER, D.G.: Introduction to Dynamic Systems. John Wiley & Sons New York,
1979

[12] L UNZE , J.: Regelungstechnik 1: Systemtheoretische Grundlagen, Analyse und Entwurf einschleifiger Regelungen. 11. überarbeitete und ergänzte Auflage. Springer, 2016

[13] L UNZE , J.: Regelungstechnik 2: Mehrgrößensysteme, Digitale Regelung. 9. überarbeitete Auflage. Springer, 2016

[14] L UNZE, J.: Automatisierungstechnik. 5. Auflage. DeGruyter, 2020

[15] M ÜLLER , M.: Normal form for linear systems with respect to its vector relative degree. In: Linear Algebra and its Applications 430 (2009), Nr. 4, S. 1292–1312. DOI: 10.1016/j.laa.2008.10.014

[16] PADULO, L. ; A RBIB, M.A.: System Theory. W.B. Saunders Company, Philadelphia, 1974

[17] S HEARER, J.L. ; KOLAKOWSKI, B.T.: Dynamic Modeling and Control of Engineering
Systems. Macmillan Publishing, New York, 1995

[18] S ONTAG, E.D.: Mathematical Control Theory: Deterministic Finite Dimensional Systems.
Springer, 1998. – 531 S. – ISBN 0387984895

[19] U NBEHAUEN, H.: Regelungstechnik I. Vieweg / Teubner, 2007

[20] U NBEHAUEN, H.: Regelungstechnik II. Vieweg / Teubner, 2007

[21] U NBEHAUEN, H.: Regelungstechnik III. Vieweg / Teubner, 2011
