Nonlinear Oscillation
Up until now, we've been considering the differential equation for the (damped) harmonic oscillator,
$$ \ddot{y} + 2\beta\dot{y} + \omega^2 y \equiv L_\beta y = f(t). \tag{1} $$
Due to the linearity of the differential operator on the left side of our equation,
we were able to make use of a large number of theorems in finding the solution
to this equation. In fact, in the last few lectures, we’ve pretty much been able
to solve this equation for any realistic case we could imagine.
However, we know that this equation came from an approximation - we’ve
been assuming that the potential energy function of the spring can be written
as
$$ U(y) = \frac{1}{2}ky^2. \tag{2} $$
While in many cases this is an incredibly good approximation, we may wonder how the addition of higher order terms might affect the behaviour of our
system. For example, we could imagine taking one more term in a hypothetical
Taylor series expansion which defines U (y), so that
$$ U(y) = \frac{1}{2}ky^2 + \frac{1}{6}\gamma y^3. \tag{3} $$
A plot of this potential energy function is shown in Figure 1.
This is not, however, a particularly nice potential energy function to work
with, because it is unstable. Because the cubic term we have added is an odd
function, then for large negative values of y, assuming that γ > 0, the potential energy approaches negative infinity instead of positive infinity. For the case that γ < 0, we find similar behaviour as we move in the opposite direction. The implication of this is that any particle subject to this potential energy function with a total energy larger than
$$ E_c = \frac{2}{3}\frac{k^3}{\gamma^2} \tag{5} $$
will escape towards negative infinity, and never come back. This is also shown
in Figure 1. While this may indeed describe some physical systems, it does not
do a good job of modelling the type of system we are interested in, which is
reasonably small oscillations around an equilibrium point.
For this reason, we should add at least one more term to the Taylor series
expansion of the potential,
$$ U(y) = \frac{1}{2}ky^2 + \frac{1}{6}\gamma y^3 + \frac{1}{24}\lambda y^4. \tag{6} $$
For λ > 0, this now describes a stable potential energy function. A plot of this
improved potential energy expansion is shown in Figure 2. Notice that despite
[Figure 1: A plot of the unstable potential energy function (blue curve), for k = γ = 1. The orange line indicates the amount of energy a particle would need in order to be able to "hop" out of the potential minimum and travel towards negative infinity.]
the somewhat strange shape near the origin, a particle with an arbitrarily large
energy will be trapped inside of the potential minimum. Thus, we can consider
motion at an arbitrarily large energy, without worrying about issues of stability.
With this potential energy function, my differential equation now becomes
$$ \ddot{y} + 2\beta\dot{y} + \omega^2 y + \phi y^2 + \epsilon y^3 = f(t), \tag{7} $$
where
$$ \phi = \gamma/2m \;;\; \epsilon = \lambda/6m. \tag{8} $$
For simplicity, we will assume that there is no damping, no driving force, and that the cubic term in the potential is zero (so that the potential energy is symmetric around zero). In this case, we find the Duffing equation,
$$ \ddot{y} + \omega^2 y + \epsilon y^3 = 0. \tag{9} $$
[Figure 2: A plot of the improved, stable potential energy expansion.]
The Duffing equation is nonlinear, and this has an important consequence: if we have two solutions to this equation, y1(t) and y2(t), then a linear combination of these two solutions is NOT necessarily a solution, because, of course,
$$ (y_1 + y_2)^3 \neq y_1^3 + y_2^3. \tag{10} $$
Almost every solution technique we have used so far has, at least in some way,
involved the principle of superposition, a property which we have now lost. So
what now?
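Before moving on, it's worth seeing this failure concretely. Below is a minimal numerical sketch (Python with scipy; the parameter values ω = ε = 1 and the initial conditions are arbitrary choices) which integrates the Duffing equation for two different initial conditions, and then for their sum:

```python
# Sketch: numerically check that superposition fails for the Duffing
# equation y'' + w^2 y + eps*y^3 = 0 (parameter values chosen arbitrarily).
import numpy as np
from scipy.integrate import solve_ivp

w, eps = 1.0, 1.0

def duffing(t, state):
    y, v = state
    return [v, -w**2 * y - eps * y**3]

t_eval = np.linspace(0.0, 10.0, 500)

sol1 = solve_ivp(duffing, (0, 10), [1.0, 0.0], t_eval=t_eval, rtol=1e-10)
sol2 = solve_ivp(duffing, (0, 10), [0.0, 1.0], t_eval=t_eval, rtol=1e-10)

# Evolve the summed initial condition directly.
sol12 = solve_ivp(duffing, (0, 10), [1.0, 1.0], t_eval=t_eval, rtol=1e-10)

# For eps != 0 the mismatch is O(1): the sum of solutions is not a solution.
mismatch = np.max(np.abs(sol1.y[0] + sol2.y[0] - sol12.y[0]))
print(f"max |y1 + y2 - y12| = {mismatch:.3f}")
```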
While we may no longer be able to use the principle of superposition, we
do have one old tool which we can always fall back on: perturbation theory.
Our goal here is to understand how, under a suitable approximation, we can
think of the motion of the anharmonic oscillator as being a “perturbation” of
the harmonic oscillator’s motion. For nonlinear problems, there will often be
many different ways to perform perturbation theory, each with their advantages
and disadvantages. We’ll explore two techniques here, although this list is far
from being exhaustive.
To begin, we need to ask when the nonlinear term in our differential equation is actually a small perturbation (small compared to what the linear term is?). While we may not know how to solve for the motion of the particle exactly,
we do know how to find the region in which it travels, and thus we can check
whether this condition holds. If the particle has total energy E, then its turning
points must be the locations at which
$$ U(y_M) = E \;\Rightarrow\; \frac{1}{2}ky_M^2 + \frac{1}{24}\lambda y_M^4 = E, \tag{12} $$
or,
$$ y_M = \pm\sqrt{\frac{6k}{\lambda}\left(-1 + \sqrt{1 + 2\lambda E/3k^2}\right)}. \tag{13} $$
If the quartic part of the potential is indeed only a small correction in between
these two points, then the nonlinear term in our differential equation should
only represent a small perturbation to the linear oscillator, described by the
differential equation,
$$ \ddot{y} + \omega^2 y = 0, \tag{14} $$
whose solution we know to be
$$ y_0(t) = A\cos(\omega t) + B\sin(\omega t). \tag{15} $$
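As an aside, the turning points of equation (13) are easy to evaluate numerically, and give us a concrete check on whether the quartic term really is a small correction over the region the particle explores. A minimal sketch, with arbitrary parameter values:

```python
# Sketch: turning points y_M from eq. (13), and the size of the quartic
# term relative to the quadratic term there (k, lam, E chosen arbitrarily).
import numpy as np

def turning_point(k, lam, E):
    # y_M = sqrt( (6k/lam) * (-1 + sqrt(1 + 2*lam*E/(3*k**2))) ), eq. (13)
    return np.sqrt(6 * k / lam * (-1.0 + np.sqrt(1.0 + 2 * lam * E / (3 * k**2))))

k, lam, E = 1.0, 0.1, 0.5
yM = turning_point(k, lam, E)
ratio = (lam * yM**4 / 24) / (0.5 * k * yM**2)   # quartic / quadratic at y_M
print(f"y_M = {yM:.4f}, quartic/quadratic at y_M = {ratio:.4f}")
```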
Motivated by this thinking, we might imagine that in some sense, the solution
to the anharmonic oscillator is given by a small “correction” to the harmonic
solution, a correction which depends on the small quantity ε. Such a correction might look something like
$$ y(t) = y_0(t) + \epsilon y_1(t) + \dots \tag{16} $$
The function y1 (t) is the correction term, and we can think of it as the next term
in an expansion in powers of ε. In this case, however, notice that the coefficients
in the expansion are functions of time. That is not unlike the previous cases
we considered, where, for example, the expansion was in powers of the drag
parameter, but the coefficients in this expansion could depend on the other
parameters of the problem (mass, initial velocity, etc.) in a more complicated
way. If we plug this proposed solution into our full differential equation, we find
$$ \ddot{y}_0 + \omega^2 y_0 + \epsilon\left(\ddot{y}_1 + \omega^2 y_1\right) + \epsilon\left(y_0 + \epsilon y_1\right)^3 = 0. \tag{17} $$
If we expand this equation out to first order in ε, we find
$$ \left(\ddot{y}_0 + \omega^2 y_0\right) + \epsilon\left(\ddot{y}_1 + \omega^2 y_1 + y_0^3\right) = 0. \tag{18} $$
Now, in order for both sides of this equation to be equal for all values of ε,
it must be the case that each parenthetical term vanishes. In this case, the
zero-order parenthetical term yields
$$ \ddot{y}_0 + \omega^2 y_0 = 0, \tag{19} $$
which is nothing other than the equation for the linear oscillator, which we know
how to solve. If we also match the first-order parenthetical term, then we have
$$ \ddot{y}_1 + \omega^2 y_1 = -y_0^3. \tag{20} $$
Since we already know what y0 is, this represents a forced, linear differential
equation for y1 , which is something we do know how to solve. In particular,
this equation describes the function y1 as the coordinate of a simple harmonic
oscillator with frequency ω.
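If we want to double-check this bookkeeping, a computer algebra system can do the order-matching for us. Here is a minimal sketch using sympy (assuming it is available), which plugs the expansion (16) into the Duffing equation and reads off the coefficients of ε⁰ and ε¹, reproducing equations (19) and (20):

```python
# Sketch: verify the order-by-order matching of eqs. (19) and (20) with sympy.
import sympy as sp

t = sp.symbols("t")
w, e = sp.symbols("omega epsilon", positive=True)
y0 = sp.Function("y0")(t)
y1 = sp.Function("y1")(t)

y = y0 + e * y1                                    # the expansion, eq. (16)
duffing = sp.diff(y, t, 2) + w**2 * y + e * y**3   # the Duffing equation

expanded = sp.expand(duffing)
print(expanded.coeff(e, 0))   # -> y0'' + omega**2*y0           (eq. 19)
print(expanded.coeff(e, 1))   # -> y1'' + omega**2*y1 + y0**3   (eq. 20)
```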
For concreteness, let’s assume we’ve chosen our initial conditions such that
$$ y(t=0) = 1 \;;\; v(t=0) = 0, \tag{21} $$
where I’ve avoided using y0 and v0 , so as to not confuse them with the coefficients
in the expansion. Applying these initial conditions to the zero order solution,
we have
$$ y_0(t) = \cos(\omega t). \tag{22} $$
Our perturbative equation then tells us
$$ \ddot{y}_1 + \omega^2 y_1 = -\cos^3(\omega t), \tag{23} $$
or, using the trigonometric identity $\cos^3\theta = \frac{3}{4}\cos\theta + \frac{1}{4}\cos 3\theta$,
$$ \ddot{y}_1 + \omega^2 y_1 = -\frac{3}{4}\cos(\omega t) - \frac{1}{4}\cos(3\omega t). \tag{24} $$
Solving this forced, linear equation, we find
$$ y_1(t) = A\cos(\omega t) + B\sin(\omega t) - \frac{3}{8\omega}t\sin(\omega t) + \frac{1}{32\omega^2}\cos(3\omega t), \tag{25} $$
where the constants A and B arise from the homogeneous part of the solution. Thus, the full motion, through order ε, is given by
$$ y(t) = (1 + \epsilon A)\cos(\omega t) + \epsilon\left(B - \frac{3}{8\omega}t\right)\sin(\omega t) + \frac{\epsilon}{32\omega^2}\cos(3\omega t). \tag{26} $$
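We can test how well this expression actually performs. The sketch below fixes A and B from our initial conditions (which gives A = −1/32ω², B = 0) and compares equation (26) against a direct numerical integration; the values ω = 1, ε = 0.1 are arbitrary choices:

```python
# Sketch: the naive first-order solution, eq. (26), versus a direct
# numerical integration of the Duffing equation (w = 1, eps = 0.1).
import numpy as np
from scipy.integrate import solve_ivp

w, eps = 1.0, 0.1
A, B = -1.0 / (32 * w**2), 0.0   # fixed by y(0) = 1, v(0) = 0

def y_perturbative(t):
    return ((1 + eps * A) * np.cos(w * t)
            + eps * (B - 3 * t / (8 * w)) * np.sin(w * t)
            + eps / (32 * w**2) * np.cos(3 * w * t))

sol = solve_ivp(lambda t, s: [s[1], -w**2 * s[0] - eps * s[0]**3],
                (0, 100), [1.0, 0.0], dense_output=True, rtol=1e-10)

for t in (5.0, 50.0, 100.0):
    err = abs(y_perturbative(t) - sol.sol(t)[0])
    print(f"t = {t:6.1f}: |perturbative - numeric| = {err:.3f}")
# The error grows steadily with t: this is the secular term at work.
```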
Notice that whereas the linear oscillator would require an external driving force in order to excite higher harmonics, the nonlinear system
is capable of doing so under the action of its own internal dynamics. This type
of behaviour would also appear, for example, in a course on the Standard Model,
in which a similar type of differential equation, this time describing something
known as a Quantum Field, would be used to describe how the interaction of
particles can create and destroy new particles.
More generally, an odd power of the zero-order solution, cosⁿ(ωt), will contain a resonant term α cos(ωt), where
$$ \alpha = \frac{1}{2^{n-1}}\binom{n}{\frac{n-1}{2}} \tag{30} $$
involves a binomial coefficient. Even powers also cause problems, although this
only becomes clear at higher orders in perturbation theory. Because the differential equation for y1 has the same natural frequency ω as the linear oscillator, any resonant forcing term of this kind drives it exactly on resonance, producing a secular term which grows linearly in time. This realization gives us the idea that maybe we can make a slightly better choice for our "unperturbed" solution, and instead choose a zero-order solution which oscillates at a slightly shifted frequency,
as you did on the homework. However, it’s not clear exactly how many terms
we should take.
In order to side-step this thorny issue altogether, we will make use of a new
tool, sometimes known as a dual series expansion. The idea is that we have
two objects, the frequency Ω and the trajectory y (t), which both need to be
expanded in terms of . By expanding them simultaneously in just the right
way, we can eliminate the secular term from our solution. In order to do so, we
will in fact not modify the solution y0 directly, but instead define a new time
variable
$$ \tau = \Omega t, \tag{35} $$
so that in terms of this new variable, our differential equation becomes
$$ \Omega^2\ddot{y}(\tau) + \omega^2 y(\tau) + \epsilon y^3(\tau) = 0. \tag{36} $$
The derivatives in the first term are now derivatives with respect to τ , and so
the chain rule pulls out a factor of Ω2 , since
$$ \frac{d}{dt} = \frac{d\tau}{dt}\frac{d}{d\tau} = \Omega\frac{d}{d\tau}. \tag{37} $$
Make sure to understand that this is exactly the same equation as before, simply
written in terms of a new coordinate. The difference will come when we conclude
later that Ω is something other than ω.
We now seek a solution of the form
$$ y(\tau) = y_0(\tau) + \epsilon y_1(\tau) + \dots \;;\quad \Omega = \omega + \epsilon\Omega_1 + \dots \tag{38} $$
If we plug in this proposed solution, along with the expansion for Ω, we arrive at the equation
$$ (\omega + \epsilon\Omega_1)^2\ddot{y}_0 + \omega^2 y_0 + \epsilon\left[(\omega + \epsilon\Omega_1)^2\ddot{y}_1 + \omega^2 y_1\right] + \epsilon\left(y_0 + \epsilon y_1\right)^3 = 0, \tag{39} $$
where the dots now denote derivatives with respect to τ. Matching the zero-order terms in ε, and dividing through by ω², we find
$$ \ddot{y}_0 + y_0 = 0. \tag{42} $$
Our original initial conditions,
$$ y(t=0) = 1 \;;\; \dot{y}(t=0) = 0, \tag{46} $$
become, in terms of the new time variable,
$$ y(\tau=0) = 1 \;;\; \Omega\dot{y}(\tau=0) = 0 \;\Rightarrow\; \dot{y}(\tau=0) = 0, \tag{47} $$
so that the zero-order solution satisfying them is
$$ y_0(\tau) = \cos(\tau). \tag{48} $$
Matching the first-order terms, we find
$$ \ddot{y}_1 + y_1 = \frac{2\Omega_1}{\omega}\cos(\tau) - \frac{1}{\omega^2}\cos^3(\tau), \tag{49} $$
or, using the same trigonometric identity for the cosine cubed term,
$$ \ddot{y}_1 + y_1 = \frac{2}{\omega}\left(\Omega_1 - \frac{3}{8\omega}\right)\cos(\tau) - \frac{1}{4\omega^2}\cos(3\tau). \tag{50} $$
Equation (50) is a forced oscillator equation for y1, and the resonant cos(τ) forcing term on the right side is exactly what would generate a secular term. But we are now free to eliminate it entirely, by choosing Ω1 = 3/(8ω). Solving the remaining equation for y1, subject to our initial conditions, then gives
$$ y(\tau) = \left(1 - \frac{\epsilon}{32\omega^2}\right)\cos(\tau) + \frac{\epsilon}{32\omega^2}\cos(3\tau). \tag{56} $$
Using our expansion for the frequency, this becomes, in terms of Ω and t,
$$ y(t) = \left(1 - \frac{\epsilon}{32\omega^2}\right)\cos\left((\Omega_0 + \epsilon\Omega_1)t\right) + \frac{\epsilon}{32\omega^2}\cos\left(3(\Omega_0 + \epsilon\Omega_1)t\right), \tag{57} $$
where
$$ \Omega_0 = \omega \;;\; \Omega_1 = \frac{3}{8\omega}. \tag{58} $$
We now have a solution to our equation which is stable for all times - it has
the correct oscillatory behaviour, and does not diverge to infinity. Also, notice
that not only are there still higher harmonics which have been excited by the
nonlinear interaction, the value of the base frequency has also been modified by
the nonlinear interaction. The interplay of these two effects leads to a modified
solution in the presence of the quartic perturbation. Figure 3 shows a plot of
this solution, for three different values of ε. Notice that for small enough values of ε, the overall qualitative shape of the plot is the same, although the period of oscillation is slightly shorter, and the shape is not quite a pure sinusoid. Figure 4 shows a zoomed-in version of these three solutions, as they approach their initial starting values, demonstrating that the effect of increasing ε is to
bring the oscillator back to its initial position sooner. Figure 5 compares the
solution to a pure cosine term with the same amplitude and frequency - notice
that the shape of the curve is slightly different as a result of the higher harmonic
that has been introduced.
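The same numerical comparison shows the improvement over the naive expansion. This sketch (again with the arbitrary choices ω = 1, ε = 0.1) evaluates the dual series solution of equations (57) and (58) against a direct integration:

```python
# Sketch: the dual series solution, eqs. (57)-(58), versus a direct
# numerical integration of the Duffing equation (w = 1, eps = 0.1).
import numpy as np
from scipy.integrate import solve_ivp

w, eps = 1.0, 0.1
Omega = w + eps * 3 / (8 * w)        # Omega_0 + eps*Omega_1, eq. (58)

def y_dual_series(t):
    c = eps / (32 * w**2)
    return (1 - c) * np.cos(Omega * t) + c * np.cos(3 * Omega * t)

sol = solve_ivp(lambda t, s: [s[1], -w**2 * s[0] - eps * s[0]**3],
                (0, 100), [1.0, 0.0], dense_output=True, rtol=1e-10)

for t in (5.0, 50.0, 100.0):
    err = abs(y_dual_series(t) - sol.sol(t)[0])
    print(f"t = {t:6.1f}: |dual series - numeric| = {err:.4f}")
# No secular growth: the residual error comes only from the O(eps^2)
# corrections to the frequency and amplitude.
```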
[Figure 3: A plot of the dual series solution y(t), for three different values of ε.]
[Figure 4: A zoomed-in view of the three solutions as they approach their initial starting values.]
Our second technique is what we might call a short time approximation. If we assume that the trajectory can be written as a power series in time,
$$ y(t) = \sum_{n=0}^{\infty} y_n t^n, \tag{59} $$
then in principle, we can plug this expansion into our differential equation, and attempt to solve for the coefficients yn. In particular, we have
$$ \dot{y}(t) = \sum_{n=0}^{\infty} n y_n t^{n-1} = \sum_{n=1}^{\infty} n y_n t^{n-1}. \tag{60} $$
[Figure 5: A comparison of the solution with a pure cosine of the same amplitude and frequency.]
Notice that the second sum can equally be taken to start at n = 1, since the n = 0 term is simply zero anyway. If we rewrite the summation index slightly,
$$ m = n - 1, \tag{61} $$
the first derivative becomes
$$ \dot{y}(t) = \sum_{m=0}^{\infty}(m+1)\,y_{m+1}\,t^m. \tag{62} $$
Applying the same manipulation twice, the second derivative is
$$ \ddot{y}(t) = \sum_{m=0}^{\infty}(m+2)(m+1)\,y_{m+2}\,t^m. \tag{63} $$
Plugging these expansions into the Duffing equation, we find
$$ \sum_{m=0}^{\infty}(m+2)(m+1)\,y_{m+2}\,t^m + \omega^2\sum_{n=0}^{\infty} y_n t^n + \epsilon\left(\sum_{n=0}^{\infty} y_n t^n\right)^3 = 0. \tag{64} $$
Now, in order for this to hold for all times t, it must again be true that both sides of the equation are equal, power by power in t. So our goal again is to expand the left side in powers of t, and match coefficients. Performing this expansion to second order, we find
$$ \left(2y_2 + \omega^2 y_0 + \epsilon y_0^3\right) + \left(6y_3 + \omega^2 y_1 + 3\epsilon y_0^2 y_1\right)t + \left(12y_4 + \omega^2 y_2 + 3\epsilon y_0^2 y_2 + 3\epsilon y_0 y_1^2\right)t^2 + \dots = 0. \tag{65} $$
Matching the zero order terms, we find the equation,
$$ 2y_2 + \omega^2 y_0 + \epsilon y_0^3 = 0, \tag{66} $$
or
$$ y_2 = -\frac{\omega^2}{2}y_0 - \frac{\epsilon}{2}y_0^3. \tag{67} $$
This equation fixes y2 in terms of y0 . However, we in fact already know y0 , since
it is none other than the initial condition
$$ y_0 = y(t=0). \tag{68} $$
So assuming we know the initial position, we have now fixed the second order
coefficient in the expansion.
Continuing on to match the first order term, we find
$$ 6y_3 + \omega^2 y_1 + 3\epsilon y_0^2 y_1 = 0, \tag{69} $$
or
$$ y_3 = -\frac{1}{6}\omega^2 y_1 - \frac{\epsilon}{2}y_0^2 y_1. \tag{70} $$
This fixes y3 in terms of y0 and y1 . We also have knowledge of y1 , however,
since our initial condition for the velocity reads
$$ v_0 = \dot{y}(0) = y_1. \tag{71} $$
Thus, we have
$$ y_3 = -\frac{1}{6}\omega^2 v_0 - \frac{\epsilon}{2}y_0^2 v_0. \tag{72} $$
Lastly, matching the second order term, we find
$$ 12y_4 + \omega^2 y_2 + 3\epsilon y_0^2 y_2 + 3\epsilon y_0 y_1^2 = 0, \tag{73} $$
or
$$ y_4 = -\frac{1}{12}\omega^2 y_2 - \frac{\epsilon}{4}\left(y_0^2 y_2 + y_0 y_1^2\right). \tag{74} $$
Since we know all of the values on the right side, we can write this as
$$ y_4 = -\frac{1}{12}\omega^2\left(-\frac{\omega^2}{2}y_0 - \frac{\epsilon}{2}y_0^3\right) - \frac{\epsilon}{4}y_0^2\left(-\frac{\omega^2}{2}y_0 - \frac{\epsilon}{2}y_0^3\right) - \frac{\epsilon}{4}y_0 v_0^2, \tag{75} $$
or, simplifying this a bit,
$$ y_4 = \frac{\omega^4}{24}y_0 + \frac{\omega^2\epsilon}{6}y_0^3 + \frac{\epsilon^2}{8}y_0^5 - \frac{\epsilon}{4}y_0 v_0^2. \tag{76} $$
All together, these results provide us with an expansion of y (t) out to fourth
order in time. As a specific example, let’s consider the initial condition
$$ v_0 = 0, \tag{77} $$
along with some arbitrary initial position y0 - that is, we let the spring go from
rest. In this case, we find
$$ y_1 = 0 \;;\; y_2 = -\frac{y_0}{2}\left(\omega^2 + \epsilon y_0^2\right) \;;\; y_3 = 0 \;;\; y_4 = \frac{y_0}{24}\left(\omega^4 + 4\omega^2\epsilon y_0^2 + 3\epsilon^2 y_0^4\right), \tag{78} $$
which gives
$$ y(t) \approx y_0 - \frac{y_0}{2}\left(\omega^2 + \epsilon y_0^2\right)t^2 + \frac{y_0}{24}\left(\omega^4 + 4\omega^2\epsilon y_0^2 + 3\epsilon^2 y_0^4\right)t^4. \tag{79} $$
For small enough times, this solution should be a reasonably good approximation
to the full motion of the spring.
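As a concrete check, the sketch below evaluates the short time expansion of equation (79) for the parameter values used in the figures (k = m = ε = y₀ = 1, so that ω = 1) and compares it with a direct numerical integration:

```python
# Sketch: the short time expansion, eq. (79), for release from rest,
# versus a direct numerical integration (w = eps = y0 = 1, as in the figures).
import numpy as np
from scipy.integrate import solve_ivp

w, eps, y0 = 1.0, 1.0, 1.0

def y_short_time(t):
    y2 = -(y0 / 2) * (w**2 + eps * y0**2)                           # eq. (78)
    y4 = (y0 / 24) * (w**4 + 4 * w**2 * eps * y0**2 + 3 * eps**2 * y0**4)
    return y0 + y2 * t**2 + y4 * t**4

sol = solve_ivp(lambda t, s: [s[1], -w**2 * s[0] - eps * s[0]**3],
                (0, 2), [y0, 0.0], dense_output=True, rtol=1e-10)

for t in (0.2, 0.5, 1.0, 2.0):
    print(f"t = {t:4.1f}: series = {y_short_time(t):8.4f}, "
          f"numeric = {sol.sol(t)[0]:8.4f}")
# Good agreement at small t, with the series diverging well before one period.
```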
However, the obvious disadvantage to this approach is that it does not tell
us anything about very long times - we know that whenever we keep a Taylor
series expansion to a finite number of terms, eventually, after long enough time,
the series expansion should become a worse and worse approximation to the
true functional form. In fact, for very large times, our solution behaves as
$$ y(t \to \infty) \to \pm\infty, \tag{80} $$
with the sign depending on the sign of y0 . This behaviour is quite general, since
no matter what order we take our expansion to,
$$ y(t) = \sum_{n=0}^{N} y_n t^n, \tag{81} $$
we will always have
$$ y(t) \approx y_N t^N \tag{82} $$
for large enough times, which again diverges to infinity. Again, we know this
is not consistent with the physics of the problem - the particle should oscillate
back and forth forever, with a finite oscillation amplitude.
However, once again, we can use our prior knowledge about the periodic
motion of the particle to help us improve our perturbative approach. We know
that the period of oscillation of the particle must be
$$ T = \sqrt{2m}\int_{y_-}^{y_+}\frac{dx}{\sqrt{E - U(x)}}. \tag{83} $$
While we may not be able to do this integral in closed form, for any set of
initial conditions, it is certainly easy to calculate a numerical value for the
integral using a calculator. Once we have computed T , we can then use it to
impose a constraint on the particle’s motion,
$$ y(T) = y(0). \tag{84} $$
All further motion of the particle after time T simply repeats exactly the same
basic oscillation, such that
$$ y(t + nT) = y(t). \tag{85} $$
For this reason, our series expansion of y (t) only needs to be accurate up until
the time T in order to understand the full motion of the particle. Furthermore,
we can also get a good sense for how accurate our perturbative result is, by
checking to see how well it satisfies the constraint
$$ y(T) \approx y(0). \tag{86} $$
If our perturbative result satisfies this constraint very well, we know that we have
taken enough terms in the expansion to get a reasonably good approximation
to the particle’s motion. If the constraint is satisfied very poorly, we know we
need to take more terms in our expansion.
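For our quartic potential the period integral is straightforward to evaluate numerically. One convenient trick, in the case of release from rest at y₀ (so that y± = ±y₀), is the substitution x = y₀ sin θ, which removes the integrable square-root singularity at the turning points. A minimal sketch, with arbitrary parameter values:

```python
# Sketch: numerically evaluate the period integral, eq. (83), for release
# from rest at y0 (m, k, lam, y0 chosen arbitrarily).
import numpy as np
from scipy.integrate import quad

m, k, lam, y0 = 1.0, 1.0, 1.0, 1.0

def integrand(theta):
    # E - U(x) = (y0^2 - x^2) * (k/2 + lam*(y0^2 + x^2)/24); the factor
    # (y0^2 - x^2) cancels against dx = y0*cos(theta)*dtheta, so the
    # integrand is smooth at the turning points.
    x = y0 * np.sin(theta)
    return 1.0 / np.sqrt(k / 2 + lam * (y0**2 + x**2) / 24)

integral, _ = quad(integrand, 0.0, np.pi / 2)
T = 2.0 * np.sqrt(2.0 * m) * integral
print(f"T = {T:.6f}")   # reduces to 2*pi*sqrt(m/k) when lam -> 0

# We can then test an approximate solution y_approx by checking how well
# it satisfies the constraint y_approx(T) ~ y_approx(0), eq. (86).
```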
Figure 6 shows an example solution using this approach, compared with
the result of a more sophisticated numerical algorithm. Notice that for short
enough times, the two curves agree well, but as time goes on, the two curves
start to deviate noticeably. Figures 7 and 8 show the two curves over longer
periods of time, where it becomes obvious that our fourth-order approximation
becomes relatively poor before even a single period of oscillation has occurred.
Unfortunately, we usually need to take quite a few terms in order for this type
of perturbation theory technique to be accurate, and in many situations it is
more practical to use a numerical algorithm to perform the time evolution.
However, the one advantage of this approach is that to an arbitrarily high level
of precision, we can find a closed-form expression for our particle’s motion,
in which the dependence on the initial conditions is explicit. Performing the
algebraic manipulations required in working out the Taylor series coefficients is
something that a program like Mathematica can help us with.
For systems which exhibit periodic behaviour, the short time approximation
method can be used to help us understand the motion of our system, even
when the nonlinear terms in our system are not small. However, for systems
which are not periodic and do not have small nonlinearities, we are generally
in a much trickier situation if we want to know something about the long-time
behaviour of the system. While there are some techniques available for dealing
with situations like this, they are unfortunately beyond the scope of our class.
Chaos
In the absence of damping and external driving forces, we have seen that our
system exhibits regular periodic motion, and we have derived two different tech-
niques for understanding the nature of this periodic motion. In general, though,
Figure 6: A plot of the motion of our particle, using the short time approxima-
tion, for k = m = ε = y0 = 1 (orange curve). The blue curve shows the result
of a more sophisticated numerical calculation. Notice that for short times the
two curves agree well, although at longer time, the deviation starts to become
noticeable.
we may want to know something about the solutions to the full nonlinear dif-
ferential equation,
$$ \ddot{y} + 2\beta\dot{y} + \omega^2 y + \phi y^2 + \epsilon y^3 = f(t). \tag{87} $$
What type of behaviour does this equation exhibit, for an arbitrary forcing function?
The answer to this question is that there is, in fact, an enormous range of
different types of qualitative behaviour which can arise from this differential
equation. Linear systems are special in the sense that the superposition rule
allows us to more or less understand all of the possible solutions to a linear
differential equation, for any arbitrary set of initial conditions. This special
property is generically not true, however, for nonlinear systems, and any at-
tempt to understand the entire range of behaviour for an arbitrary set of initial
conditions is doomed to failure. For this reason, it is impossible to give a short
summary here of all of the interesting types of behaviour this equation admits.
However, one of the most striking features of this equation, which is common
to almost all nonlinear systems, is that it is capable of exhibiting chaos. Chaotic
motion occurs when the trajectory of a system is highly sensitive to its initial
conditions. To demonstrate what I mean by this, consider two solutions to my
differential equation, x1 and x2, with slightly different initial conditions. The difference between these two solutions is
$$ \Delta x(t) \equiv x_2(t) - x_1(t). \tag{88} $$
Figure 7: A plot of the motion of our particle, using the short time approxima-
tion, for k = m = ε = y0 = 1 (orange curve). The blue curve shows the result
of a more sophisticated numerical calculation. Notice that for short times the
two curves agree well, although at longer time, the deviation starts to become
noticeable.
The question I may now ask is, if the difference between these two solutions is
small at time zero,
$$ \Delta x(0) \ll 1 \;,\; \Delta\dot{x}(0) \ll 1, \tag{89} $$
then how does ∆x (t) behave at large times?
For linear systems, the difference between two solutions is typically either a
constant, or decays to zero. For example, in the linear harmonic oscillator, the
difference between two solutions with zero initial velocity, and slightly different
starting positions, is given by
$$ \Delta x(t) = \Delta x(0)\cos(\omega t). \tag{90} $$
The closer the initial conditions, the smaller this quantity. For this reason, we
say that the differential equation describing the linear oscillator demonstrates
stability - two solutions which are initially close to each other will stay
close to each other, for all time.
However, in stark contrast to this is chaotic behaviour. In a chaotic system,
the difference between two solutions, even those which are initially separated by an arbitrarily small amount, will grow exponentially in time,
$$ \Delta x(t) \sim \Delta x(0)\,e^{\lambda t}. \tag{91} $$
Figure 8: A plot of the motion of our particle, using the short time approxima-
tion, for k = m = ε = y0 = 1 (orange curve). The blue curve shows the result
of a more sophisticated numerical calculation. Notice that for short times the
two curves agree well, although at longer time, the deviation starts to become
noticeable.
The quantity λ is known as the Lyapunov exponent, and quantifies the ex-
tent to which a system is chaotic. Almost all nonlinear systems are capable of
demonstrating chaotic behaviour, although this behaviour may not occur across
the entire phase space - for example, in the forced nonlinear oscillator, only
certain forcing functions will result in chaotic behaviour. While a driving force
is necessary to elicit chaotic behaviour in the nonlinear oscillator, other nonlin-
ear systems (some of which involve conservative forces and thus conservation
of energy) can demonstrate chaotic behaviour even without an external driving
force.
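To make the Lyapunov exponent concrete, here is a minimal sketch which estimates λ for the driven, damped oscillator by repeatedly integrating two nearby trajectories and renormalizing their separation (a standard procedure often attributed to Benettin). The parameter values are purely illustrative guesses; whether the motion is actually chaotic depends on them:

```python
# Sketch: estimate the Lyapunov exponent of the driven, damped Duffing
# oscillator from the growth of the separation between nearby trajectories.
import numpy as np
from scipy.integrate import solve_ivp

beta, w, eps = 0.1, 1.0, 1.0      # damping, frequency, nonlinearity
F, wd = 5.0, 1.0                  # driving amplitude and frequency (guesses)

def rhs(t, s):
    y, v = s
    return [v, -2 * beta * v - w**2 * y - eps * y**3 + F * np.cos(wd * t)]

d0 = 1e-8                         # reference separation
x1 = np.array([1.0, 0.0])
x2 = x1 + np.array([d0, 0.0])

dt, n_steps, log_sum = 1.0, 200, 0.0
for i in range(n_steps):
    span = (i * dt, (i + 1) * dt)
    x1 = solve_ivp(rhs, span, x1, rtol=1e-10, atol=1e-12).y[:, -1]
    x2 = solve_ivp(rhs, span, x2, rtol=1e-10, atol=1e-12).y[:, -1]
    d = np.linalg.norm(x2 - x1)
    log_sum += np.log(d / d0)
    x2 = x1 + (x2 - x1) * (d0 / d)    # renormalize the separation

print(f"estimated Lyapunov exponent: {log_sum / (n_steps * dt):.3f}")
# A positive estimate signals exponential sensitivity to initial conditions.
```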
Whether or not a system is capable of demonstrating chaos is incredibly
important, for a variety of reasons. One of the most important reasons is that
a lack of stability in a system means that modelling its long time behaviour
can be very difficult. The reason for this is that any realistic system which
we study in a laboratory will have some initial conditions that we can measure
only to within some experimental precision. For this reason, we will not know
precisely what the initial conditions of our system are, and so if the system
demonstrates chaos, it will be essentially impossible to say anything about its
long-time behaviour, since our predicted motion can diverge exponentially from
the true motion. This is why systems which are, in principle, deterministic, can
still be, for all practical purposes, impossible to simulate. It’s also, more or less,
the reason why I can’t tell you what the weather will be like one month from
now.
Secondly, chaos is important because chaotic systems are capable of a be-
haviour known as thermalization. I know that if I take a complicated system
of particles and leave them in a box, eventually, after enough time, the gas inside
of the box will obey the laws of thermodynamics, regardless of the initial con-
ditions for each and every one of the particles in the box. The assumption that
this will occur is known as the ergodic hypothesis, and it is the fundamental
assumption behind all of statistical mechanics. For this reason, it is important
to understand exactly when a many-body system is capable of demonstrating
chaos. While this question was settled for classical systems many years ago, it
is in fact still an active research question as to the precise conditions necessary
for a closed, quantum system to come to thermal equilibrium, and in fact, this
question is at the heart of my own research.
While nonlinear systems are very interesting, they also tend to be very chal-
lenging to solve, and for this reason, we will not be able to dedicate any more
time to their study in this class. However, for those of you interested in learning
more about nonlinear systems, the course Physics 106 here at UCSB is dedicated
entirely to this subject.