
Nonlinear Oscillation

Up until now, we’ve been considering the differential equation for the (damped)
harmonic oscillator,

ÿ + 2βẏ + ω²y = L_β y = f(t). (1)
Due to the linearity of the differential operator on the left side of our equation,
we were able to make use of a large number of theorems in finding the solution
to this equation. In fact, in the last few lectures, we’ve pretty much been able
to solve this equation for any realistic case we could imagine.
However, we know that this equation came from an approximation - we’ve
been assuming that the potential energy function of the spring can be written
as
U(y) = (1/2) ky². (2)
While in many cases this is an incredibly good approximation, we may wonder
how the addition of higher order terms might affect the behaviour of our
system. For example, we could imagine taking one more term in a hypothetical
Taylor series expansion which defines U (y), so that
U(y) = (1/2) ky² + (1/6) γy³. (3)
A plot of this potential energy function is shown in Figure 1.
This is not, however, a particularly nice potential energy function to work
with, because it is unstable. Because the cubic term we have added is an odd
function, for large negative values of y, assuming that γ > 0, we find

U (y → −∞) = −∞, (4)

instead of positive infinity. For the case that γ < 0, we find similar behaviour as
we move in the opposite direction. The implication of this is that any particle
subject to this potential energy function with a total energy larger than the
height of the local maximum of the potential at y = −2k/γ,

E_c = U(−2k/γ) = (2/3) k³/γ², (5)
will escape towards negative infinity, and never come back. This is also shown
in Figure 1. While this may indeed describe some physical systems, it does not
do a good job of modelling the type of system we are interested in, which is
reasonably small oscillations around an equilibrium point.
For this reason, we should add at least one more term to the Taylor series
expansion of the potential,
1 2 1 3 1
U (y) = ky + γy + λy 4 . (6)
2 6 24
For λ > 0, this now describes a stable potential energy function. A plot of this
improved potential energy expansion is shown in Figure 2. Notice that despite


Figure 1: A plot of the unstable potential energy function (blue curve), for
k = γ = 1. The orange line indicates the amount of energy a particle would
need in order to be able to “hop” out of the potential minimum and travel
towards negative infinity.

the somewhat strange shape near the origin, a particle with an arbitrarily large
energy will be trapped inside of the potential minimum. Thus, we can consider
motion at an arbitrarily large energy, without worrying about issues of stability.
With this potential energy function, my differential equation now becomes

ÿ + 2βẏ + ω²y + φy² + εy³ = f(t), (7)

where
φ = γ/2m ; ε = λ/6m. (8)
For simplicity, we will assume that there is no damping, no driving force, and
that the cubic term in the potential is zero (so that the potential energy is
symmetric around zero). In this case, we find the Duffing equation,

ÿ + ω²y + εy³ = 0. (9)

It is a nonlinear differential equation that describes a simple harmonic oscillator
with an additional correction to its potential energy function. This type of
oscillator is often known as an anharmonic oscillator. How do we solve for
the motion of such a system?
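
One thing we can always do, even without any analytic insight, is integrate the equation numerically; the "more sophisticated numerical calculation" referenced in the figures below is of this general kind. Here is a minimal sketch, assuming illustrative parameter values ω = 1 and ε = 0.1:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Duffing equation y'' + omega^2 y + eps y^3 = 0, written as a first-order system.
omega, eps = 1.0, 0.1   # illustrative values

def duffing(t, state):
    y, v = state
    return [v, -omega**2 * y - eps * y**3]

# Release the oscillator from rest at y = 1 (the initial conditions used below).
sol = solve_ivp(duffing, (0.0, 50.0), [1.0, 0.0],
                dense_output=True, rtol=1e-10, atol=1e-10)
print(sol.sol(10.0)[0])   # position at t = 10
```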
The most important thing to notice before embarking upon our search for
a solution is that in the presence of a nonlinear term, the principle of super-
position no longer applies. That is, if I have two solutions to my differential


Figure 2: A plot of the stable potential energy function, for k = γ = λ = 1.

equation, y1 (t) and y2 (t), then a linear combination of these two solutions is
NOT necessarily a solution, because, of course,
(y1 + y2)³ ≠ y1³ + y2³. (10)
Almost every solution technique we have used so far has, at least in some way,
involved the principle of superposition, a property which we have now lost. So
what now?
While we may no longer be able to use the principle of superposition, we
do have one old tool which we can always fall back on: perturbation theory.
Our goal here is to understand how, under a suitable approximation, we can
think of the motion of the anharmonic oscillator as being a “perturbation” of
the harmonic oscillator’s motion. For nonlinear problems, there will often be
many different ways to perform perturbation theory, each with their advantages
and disadvantages. We’ll explore two techniques here, although this list is far
from being exhaustive.

Small-Parameter Perturbation Theory


Let’s imagine that the quantity ε describing the anharmonic term in our potential
is sufficiently “small.” Actually, more carefully, we should say that

εy³ ≪ ω²y (11)
for the entire region in which the particle moves (can you see why the cubic
term will always become important for large enough y, no matter how small
ε is?). While we may not know how to solve for the motion of the particle exactly,
we do know how to find the region in which it travels, and thus we can check
whether this condition holds. If the particle has total energy E, then its turning
points must be the locations at which
U(y_M) = E ⇒ (1/2) k y_M² + (1/24) λ y_M⁴ = E, (12)
or,

y_M = ±√( (6k/λ) [ −1 + √(1 + 2λE/3k²) ] ). (13)
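
As a quick sanity check, we can evaluate this turning point numerically and test condition (11) there, since the ratio εy²/ω² is largest at y = y_M. A minimal sketch, with all parameter values illustrative:

```python
import numpy as np

# Turning points of U(y) = k y^2/2 + lam y^4/24 at total energy E, from Eq. (13).
k, lam, m, E = 1.0, 1.0, 1.0, 0.5   # illustrative values
yM = np.sqrt((6 * k / lam) * (-1 + np.sqrt(1 + 2 * lam * E / (3 * k**2))))

# Condition (11) requires eps*y^3 << omega^2*y, i.e. eps*y^2/omega^2 << 1,
# and this ratio is largest at the turning point y = yM.
omega2 = k / m          # omega^2 = k/m
eps = lam / (6 * m)     # from Eq. (8)
print(yM, eps * yM**2 / omega2)
```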
If the quartic part of the potential is indeed only a small correction in between
these two points, then the nonlinear term in our differential equation should
only represent a small perturbation to the linear oscillator, described by the
differential equation,
ÿ + ω²y = 0, (14)
whose solution we know to be
y0 (t) = A cos (ωt) + B sin (ωt) . (15)
Motivated by this thinking, we might imagine that in some sense, the solution
to the anharmonic oscillator is given by a small “correction” to the harmonic
solution, a correction which depends on the small quantity ε. Such a correction
might look something like

y(t) = y0(t) + εy1(t) + ... (16)
The function y1(t) is the correction term, and we can think of it as the next term
in an expansion in powers of ε. In this case, however, notice that the coefficients
in the expansion are functions of time. That is not unlike the previous cases
we considered, where, for example, the expansion was in powers of the drag
parameter, but the coefficients in this expansion could depend on the other
parameters of the problem (mass, initial velocity, etc.) in a more complicated
way. If we plug this proposed solution into our full differential equation, we find
(ÿ0 + ω²y0) + ε(ÿ1 + ω²y1) + ε(y0 + εy1)³ = 0. (17)
If we expand this equation out to first order in ε, we find

(ÿ0 + ω²y0) + ε(ÿ1 + ω²y1 + y0³) = 0. (18)
Now, in order for both sides of this equation to be equal for all values of ε,
it must be the case that each parenthetical term vanishes. In this case, the
zero-order parenthetical term yields
ÿ0 + ω²y0 = 0, (19)
which is nothing other than the equation for the linear oscillator, which we know
how to solve. If we also match the first-order parenthetical term, then we have
ÿ1 + ω²y1 = −y0³. (20)

Since we already know what y0 is, this represents a forced, linear differential
equation for y1 , which is something we do know how to solve. In particular,
this equation describes the function y1 as the coordinate of a simple harmonic
oscillator with frequency ω.
For concreteness, let’s assume we’ve chosen our initial conditions such that

y (t = 0) = 1 ; v (t = 0) = 0, (21)

where I’ve avoided using y0 and v0 , so as to not confuse them with the coefficients
in the expansion. Applying these initial conditions to the zero order solution,
we have
y0 (t) = cos (ωt) . (22)
Our perturbative equation then tells us

ÿ1 + ω²y1 = −cos³(ωt). (23)

Using a trigonometric identity, we can rewrite the cubed term as


ÿ1 + ω²y1 = −(3/4) cos(ωt) − (1/4) cos(3ωt). (24)
This is just a linear, undamped oscillator subject to two sinusoidal driving forces.
As for its solution, we can simply quote our results from the past lecture, in
order to find

y1(t) = A cos(ωt) + (B − (3/8ω) t) sin(ωt) + (1/32ω²) cos(3ωt), (25)

where the constants A and B arise from the homogeneous part of the solution.
Thus, the full motion, through order ε, is given by

y(t) = (1 + εA) cos(ωt) + ε(B − (3/8ω) t) sin(ωt) + (ε/32ω²) cos(3ωt). (26)

If we apply the same initial conditions as before, a short calculation reveals


A = −1/(32ω²) ; B = 0, (27)
and so we find
y(t) = (1 − ε/32ω²) cos(ωt) + (ε/32ω²) cos(3ωt) − (3ε/8ω) t sin(ωt). (28)
Notice that the coefficient on cos(ωt) is slightly less than one, by the same
quantity which multiplies the cos(3ωt) term. Thus, some of the “weight” of
the solution has been transferred into an oscillatory term with a frequency
that is an integer multiple of the original frequency. We say that the nonlinear
“interaction” has “excited a higher harmonic” of the oscillator, which is a general
feature of nonlinear differential equations. While the linear system required an

external driving force in order to excite higher harmonics, the nonlinear system
is capable of doing so under the action of its own internal dynamics. This type
of behaviour would also appear, for example, in a course on the Standard Model,
in which a similar type of differential equation, this time describing something
known as a Quantum Field, would be used to describe how the interaction of
particles can create and destroy new particles.
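
To see the excited harmonic and the growing secular term explicitly, we can compare the first-order result (28) against a direct numerical integration. A minimal sketch, again with illustrative values ω = 1 and ε = 0.1:

```python
import numpy as np
from scipy.integrate import solve_ivp

omega, eps = 1.0, 0.1   # illustrative values

def y_pert(t):
    """First-order perturbative solution, Eq. (28); note the secular t*sin term."""
    c = eps / (32 * omega**2)
    return ((1 - c) * np.cos(omega * t) + c * np.cos(3 * omega * t)
            - (3 * eps / (8 * omega)) * t * np.sin(omega * t))

# Reference: numerical solution released from rest at y = 1.
sol = solve_ivp(lambda t, s: [s[1], -omega**2 * s[0] - eps * s[0]**3],
                (0.0, 100.0), [1.0, 0.0], dense_output=True, rtol=1e-10, atol=1e-10)

for t in (5.0, 50.0, 100.0):
    print(t, y_pert(t), sol.sol(t)[0])   # agreement degrades as the secular term grows
```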

The Poincaré-Lindstedt Method


There is, however, an obvious problem with the answer we have found from using
perturbation theory. Notice that the sine term has a factor of t - it continues to
grow over time, increasing without bound. This is totally inconsistent with the
behaviour we expect on physical grounds - the particle should simply oscillate
back and forth between the two turning points. The appearance of this term is
actually quite general, and not special to this case. The term appears because
in the expansion of y0ⁿ, for odd values of n, there will always be a sinusoidal
term with the undamped frequency of the linear oscillator,

cosⁿ(ωt) = α cos(ωt) + ... , (29)

where
α = (2/2ⁿ) (n choose (n−1)/2) (30)
involves a binomial coefficient. Even powers also cause problems, although this
only becomes clear at higher orders in perturbation theory. Because the differ-
ential equation for y1 has the same natural frequency ω as the linear oscillator,

ÿ1 + ω²y1 = −y0³, (31)

then there is an undamped resonance, resulting in this diverging oscillation
amplitude. This type of term, one which arises in perturbation theory and
grows without bound over time, is often known as a secular term.
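
The coefficient in equation (30) is easy to verify with a computer algebra system, by projecting cosⁿ(t) onto cos(t) over one period. A small sympy sketch:

```python
import sympy as sp

t = sp.symbols('t')
for n in (3, 5, 7):
    # Fourier coefficient of cos(t) in cos(t)**n over one period.
    alpha = sp.integrate(sp.cos(t)**n * sp.cos(t), (t, -sp.pi, sp.pi)) / sp.pi
    claimed = sp.Rational(2, 2**n) * sp.binomial(n, (n - 1) // 2)
    print(n, alpha, sp.simplify(alpha - claimed) == 0)
```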
The resolution to this problem comes from realizing a second, somewhat
more subtle problem with our solution - it oscillates at the wrong frequency. The
frequencies which appear in our solution are ω, and also a harmonic multiple
3ω. Thus, our solution is still periodic with frequency ω. However, this is
inconsistent with the fact that in addition to changing the functional form of
the solution, the quartic perturbation to the potential will in general also change
the oscillation frequency of the spring. In some sense, the reason that our naive
version of perturbation theory has failed is because we have not taken into
account the fact that the new frequency of oscillation in our system will no
longer be the same as the frequency of the linear oscillator. So how do we take
this fact into account?
As a first step, notice that if y0 were to oscillate at a frequency other than
ω, we would no longer have our secular term problem - the forcing function
would no longer be in resonance with the differential equation for y1 . This

realization gives us the idea that maybe we can make a slightly better choice
for our “unperturbed” solution, and instead choose

y0 (t) = cos (Ωt) , (32)

for some Ω ≠ ω. This revised choice reflects an attempt to incorporate the
additional change in the frequency of the oscillator, while still “perturbing”
away from the solution we already know. So how should we choose Ω? One
guess might be to simply replace it with the actual frequency of oscillation in
the full potential, which we can find from the period,
T = √(2m) ∫_{y−}^{y+} dx / √(E − U(x)), (33)
where y− and y+ are the two turning points. However, while this will give the
correct oscillation frequency, perhaps it may seem as though our unperturbed
solution y0 should not necessarily incorporate the exact frequency of the oscil-
lator - it is, after all, only supposed to be approximately correct, not exactly
correct. Of course, we can always perform a perturbative calculation of Ω,
writing the new frequency in terms of an expansion in ε,

Ω = Ω0 + εΩ1 + ... (34)

as you did on the homework. However, it’s not clear exactly how many terms
we should take.
In order to side-step this thorny issue altogether, we will make use of a new
tool, sometimes known as a dual series expansion. The idea is that we have
two objects, the frequency Ω and the trajectory y (t), which both need to be
expanded in terms of ε. By expanding them simultaneously in just the right
way, we can eliminate the secular term from our solution. In order to do so, we
will in fact not modify the solution y0 directly, but instead define a new time
variable
τ = Ωt, (35)
so that in terms of this new variable, our differential equation becomes

Ω² ÿ(τ) + ω² y(τ) + ε y³(τ) = 0. (36)

The derivatives in the first term are now derivatives with respect to τ , and so
the chain rule pulls out a factor of Ω2 , since
d/dt = (dτ/dt) d/dτ = Ω d/dτ. (37)
Make sure to understand that this is exactly the same equation as before, simply
written in terms of a new coordinate. The difference will come when we conclude
later that Ω is something other than ω.
We now seek a solution of the form

y(τ) = y0(τ) + εy1(τ) + ... (38)

If we plug in this proposed solution, along with the expansion for Ω, we arrive
at the equation
(ω + εΩ1)² ÿ0 + ω² y0 + ε (ω + εΩ1)² ÿ1 + ε ω² y1 + ε (y0 + εy1)³ = 0, (39)

where we used the fact that


Ω0 = ω, (40)
which must be true in order to recover the correct value for the frequency when
ε = 0. If we expand in ε and only keep terms up to first order, we find

ω² (ÿ0 + y0) + ε (2ωΩ1 ÿ0 + ω² ÿ1 + ω² y1 + y0³) = 0. (41)

The vanishing of the first term tells us that

ÿ0 + y0 = 0, (42)

which has the general solution

y0 (τ ) = A cos (τ ) + B sin (τ ) = A cos (Ωt) + B sin (Ωt) , (43)

or, to zero order in the frequency,

y0 (τ ) = A cos (Ω0 t) + B sin (Ω0 t) = A cos (ωt) + B sin (ωt) . (44)

This is indeed the correct solution when ε = 0.


So far, it would not appear that our new technique has accomplished any-
thing. However, things start to look different when we consider the first order
term,
ÿ1 + y1 = −2(Ω1/ω) ÿ0 − (1/ω²) y0³. (45)
This is again a linear differential equation describing y1 , with a forcing term that
depends on y0 , although it has a slightly different appearance. Again, choosing
our initial conditions such that

y (t = 0) = 1 ; ẏ (t = 0) = 0, (46)

we find, according to the chain rule

y (τ = 0) = 1 ; Ωẏ (τ = 0) = 0 ⇒ ẏ (τ = 0) = 0, (47)

so that our zero order solution is

y0 (τ ) = cos (τ ) . (48)

Therefore, our first order equation reads


ÿ1 + y1 = 2(Ω1/ω) cos(τ) − (1/ω²) cos³(τ), (49)

or, using the same trigonometric identity for the cosine cubed term,

ÿ1 + y1 = (2/ω)(Ω1 − 3/8ω) cos(τ) − (1/4ω²) cos(3τ). (50)

Again, we find an equation for y1 which contains an undamped resonance
- the natural frequency of y1 (in this case simply 1) is matched on the right
by a sinusoidal forcing term with the same frequency. This resonance would
cause y1 (τ ) to contain an overall factor of τ , which also goes to infinity for very
large times. Thus, it would seem that we still have the same problem as before.
However, notice that in this case, the resonant forcing term is multiplied by a
factor which involves the expansion coefficient Ω1 . If we were to set
Ω1 − 3/(8ω) = 0, (51)
then this problematic term would be gone, and we would have a well-behaved
solution. Thus, we see how our secular term can actually be turned into a
useful tool, rather than a problem. If I require that my solution be free of any
problematic divergent terms (which I know must be the case), then this forces me
to make a specific choice for Ω1, which helps me determine the series expansion
for Ω. This technique is known as the Poincaré-Lindstedt Method, and it
is a very useful tool for studying periodic motion in a nonlinear potential. It
can be continued to higher orders in , and at each step in the expansion, the
elimination of a resonant forcing function will fix another term in the expansion
of Ω.
Having made this choice for Ω1 , we find
ÿ1 + y1 = −(1/4ω²) cos(3τ). (52)
Quoting our result from before, the solution to this equation is
y1(τ) = (1/32ω²) cos(3τ) + A cos(τ) + B sin(τ), (53)
where the last two terms come from the addition of the homogeneous solution.
Therefore, our full solution, valid to first order in ε, is given by

y(τ) = (1 + εA) cos(τ) + (ε/32ω²) cos(3τ) + εB sin(τ). (54)
If we fix the same boundary conditions as before, we find yet again
A = −1/(32ω²) ; B = 0. (55)
Thus, our solution, in terms of τ , is given by

y(τ) = (1 − ε/32ω²) cos(τ) + (ε/32ω²) cos(3τ). (56)

Using our expansion for the frequency, this becomes, in terms of Ω and t,

y(t) = (1 − ε/32ω²) cos((Ω0 + εΩ1) t) + (ε/32ω²) cos(3(Ω0 + εΩ1) t), (57)
where
Ω0 = ω ; Ω1 = 3/(8ω). (58)
We now have a solution to our equation which is stable for all times - it has
the correct oscillatory behaviour, and does not diverge to infinity. Also, notice
that not only are there still higher harmonics which have been excited by the
nonlinear interaction, the value of the base frequency has also been modified by
the nonlinear interaction. The interplay of these two effects leads to a modified
solution in the presence of the quartic perturbation. Figure 3 shows a plot of
this solution, for three different values of ε. Notice that for small enough values
of ε, the overall qualitative shape of the plot is the same, although the period of
oscillation is slightly shorter, and the shape is not quite a pure sinusoid.
Figure 4 shows a zoomed-in version of these three solutions, as they approach
their initial starting values, demonstrating that the effect of increasing ε is to
bring the oscillator back to its initial position sooner. Figure 5 compares the
solution to a pure cosine term with the same amplitude and frequency - notice
that the shape of the curve is slightly different as a result of the higher harmonic
that has been introduced.
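
A minimal sketch of how this result might be checked numerically, assuming the same illustrative values ω = 1 and ε = 0.1 as before; unlike the solution (28), the approximation now stays in phase with the true motion:

```python
import numpy as np
from scipy.integrate import solve_ivp

omega, eps = 1.0, 0.1                    # illustrative values
Omega = omega + 3 * eps / (8 * omega)    # Omega = Omega_0 + eps*Omega_1, Eq. (58)

def y_pl(t):
    """Poincare-Lindstedt solution, Eq. (57): bounded, with a shifted frequency."""
    c = eps / (32 * omega**2)
    return (1 - c) * np.cos(Omega * t) + c * np.cos(3 * Omega * t)

sol = solve_ivp(lambda t, s: [s[1], -omega**2 * s[0] - eps * s[0]**3],
                (0.0, 100.0), [1.0, 0.0], dense_output=True, rtol=1e-10, atol=1e-10)

for t in (5.0, 50.0, 100.0):
    print(t, y_pl(t), sol.sol(t)[0])   # typically remains accurate far longer than Eq. (28)
```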


Figure 3: A plot of the perturbative solution to the nonlinear oscillator, for
ω = 1 and ε = 0 (blue curve), ε = 0.05 (orange curve), and ε = 0.1 (green
curve). Notice that all three solutions have qualitatively the same shape, and
that the period decreases with increasing ε.


Figure 4: A plot of the perturbative solution to the nonlinear oscillator, for ω = 1
and ε = 0 (blue curve), ε = 0.05 (orange curve), and ε = 0.1 (green curve). The
red line (at a value of 1) shows the amplitude of all three oscillators, clearly
demonstrating that the oscillators with larger ε return to their initial starting
position sooner.

Short Time Perturbation Theory


For small enough values of ε, the above perturbative approach gives a nicely
behaved approximate solution to our differential equation. However, there may
be situations in which the quartic term is not small, and in fact may be a
more important contribution than the quadratic term. In such a situation,
our perturbative method above will not yield a solution which is reasonably
accurate, and we must develop a new technique.
However, this does not mean that we are completely out of luck - there is
of course another parameter in our problem which we can use to perform an
expansion, the time coordinate itself. If we make the assumption that y (t)
admits a Taylor series expansion, so that

y(t) = Σ_{n=0}^∞ yn tⁿ = y0 + y1 t + y2 t² + ... (59)

then in principle, we can plug this expansion into our differential equation, and
attempt to solve for the coefficients yn . In particular, we have

ẏ(t) = Σ_{n=0}^∞ n yn tⁿ⁻¹ = Σ_{n=1}^∞ n yn tⁿ⁻¹. (60)


Figure 5: A plot of the perturbative solution to the nonlinear oscillator, for
ω = 1 and ε = 0.5 (orange curve). The blue curve shows a pure cosine term
with the same amplitude and frequency as the nonlinear oscillator. Notice that
the presence of the higher harmonic changes the shape of the solution slightly
away from a pure sinusoidal function.

Notice that the second sum can equally be taken to start at n = 1, since the
n = 0 term is simply zero anyway. If we rewrite the summation index slightly,

m = n − 1, (61)

then this summation becomes



ẏ(t) = Σ_{m=0}^∞ (m + 1) y_{m+1} tᵐ. (62)

Similarly, we can write



ÿ(t) = Σ_{m=0}^∞ (m + 1)(m + 2) y_{m+2} tᵐ. (63)

This then results in the differential equation



Σ_{n=0}^∞ (n + 1)(n + 2) y_{n+2} tⁿ + ω² Σ_{n=0}^∞ yn tⁿ + ε (Σ_{n=0}^∞ yn tⁿ)³ = 0. (64)

Now, in order for this to hold for all times t, it must again be true that
both sides of the equation are equal, power by power in t. So our goal again is

to expand the left side in powers of t, and match coefficients. Performing this
expansion to second order, we find

(2y2 + 6y3 t + 12y4 t²) + ω² (y0 + y1 t + y2 t²) + ε (y0³ + 3y0² y1 t + (3y0² y2 + 3y0 y1²) t²) = 0. (65)
Matching the zero order terms, we find the equation,

2y2 + ω²y0 + εy0³ = 0, (66)

or
y2 = −(ω²/2) y0 − (ε/2) y0³. (67)
This equation fixes y2 in terms of y0 . However, we in fact already know y0 , since
it is none other than the initial condition

y0 = y (t = 0) . (68)

So assuming we know the initial position, we have now fixed the second order
coefficient in the expansion.
Continuing on to match the first order term, we find

6y3 + ω²y1 + 3εy0² y1 = 0, (69)

or
y3 = −(1/6) ω² y1 − (ε/2) y0² y1. (70)
This fixes y3 in terms of y0 and y1 . We also have knowledge of y1 , however,
since our initial condition for the velocity reads

v0 = ẏ (0) = y1 . (71)

Thus, we have
y3 = −(1/6) ω² v0 − (ε/2) y0² v0. (72)
Lastly, matching the second order term, we find

12y4 + ω²y2 + 3ε (y0² y2 + y0 y1²) = 0, (73)

or
y4 = −(1/12) ω² y2 − (ε/4) (y0² y2 + y0 y1²). (74)
Since we know all of the values on the right side, we can write this as

y4 = −(1/12) ω² (−(ω²/2) y0 − (ε/2) y0³) − (ε/4) (y0² (−(ω²/2) y0 − (ε/2) y0³) + y0 v0²), (75)
or, simplifying this a bit,

y4 = (ω⁴/24) y0 + (εω²/6) y0³ + (ε²/8) y0⁵ − (ε/4) y0 v0². (76)

Altogether, these results provide us with an expansion of y(t) out to fourth
order in time. As a specific example, let’s consider the initial condition

v0 = 0, (77)

along with some arbitrary initial position y0 - that is, we let the spring go from
rest. In this case, we find
y1 = 0 ; y2 = −(y0/2)(ω² + εy0²) ; y3 = 0 ; y4 = (y0/24)(ω⁴ + 4εω²y0² + 3ε²y0⁴), (78)
which gives
y(t) ≈ y0 − (y0/2)(ω² + εy0²) t² + (y0/24)(ω⁴ + 4εω²y0² + 3ε²y0⁴) t⁴. (79)
For small enough times, this solution should be a reasonably good approximation
to the full motion of the spring.
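
A minimal sketch of this fourth-order expansion as a reusable function, with the coefficients (67), (72), and (76) written out for general initial data (the parameter defaults are illustrative):

```python
import numpy as np

def short_time_y(t, y0, v0, omega=1.0, eps=1.0):
    """Fourth-order Taylor expansion of y(t), using Eqs. (67), (72), and (76)."""
    y2 = -(omega**2 / 2) * y0 - (eps / 2) * y0**3
    y3 = -(omega**2 / 6) * v0 - (eps / 2) * y0**2 * v0
    y4 = ((omega**4 / 24) * y0 + (eps * omega**2 / 6) * y0**3
          + (eps**2 / 8) * y0**5 - (eps / 4) * y0 * v0**2)
    return y0 + v0 * t + y2 * t**2 + y3 * t**3 + y4 * t**4

# Released from rest at y0 = 1, matching the figures' parameters k = m = eps = y0 = 1.
print(short_time_y(np.array([0.1, 0.5, 1.0]), y0=1.0, v0=0.0))
```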
However, the obvious disadvantage to this approach is that it does not tell
us anything about very long times - we know that whenever we keep a Taylor
series expansion to a finite number of terms, eventually, after long enough time,
the series expansion should become a worse and worse approximation to the
true functional form. In fact, for very large times, our solution behaves as

y (t → ∞) → ±∞, (80)

with the sign depending on the sign of y0 . This behaviour is quite general, since
no matter what order we take our expansion to,
y(t) = Σ_{n=0}^N yn tⁿ, (81)

so long as N is finite, we will eventually have

y(t) ≈ y_N t^N (82)

for large enough times, which again diverges to infinity. Again, we know this
is not consistent with the physics of the problem - the particle should oscillate
back and forth forever, with a finite oscillation amplitude.
However, once again, we can use our prior knowledge about the periodic
motion of the particle to help us improve our perturbative approach. We know
that the period of oscillation of the particle must be
T = √(2m) ∫_{y−}^{y+} dx / √(E − U(x)). (83)

While we may not be able to do this integral in closed form, for any set of
initial conditions, it is certainly easy to calculate a numerical value for the

integral using a calculator. Once we have computed T , we can then use it to
impose a constraint on the particle’s motion,
y (T ) = y (0) . (84)
All further motion of the particle after time T simply repeats exactly the same
basic oscillation, such that
y (t + nT ) = y (t) . (85)
For this reason, our series expansion of y (t) only needs to be accurate up until
the time T in order to understand the full motion of the particle. Furthermore,
we can also get a good sense for how accurate our perturbative result is, by
checking to see how well it satisfies the constraint
y (T ) ≈ y (0) . (86)
If our perturbative result satisfies this constraint very well, we know that we have
taken enough terms in the expansion to get a reasonably good approximation
to the particle’s motion. If the constraint is satisfied very poorly, we know we
need to take more terms in our expansion.
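
A minimal sketch of this consistency check: compute T from the integral (83) by numerical quadrature (the substitution y = y_M sin θ tames the inverse-square-root singularities at the turning points), then test how well the fourth-order series (79) satisfies the constraint. All parameter values are illustrative:

```python
import numpy as np
from scipy.integrate import quad

k = m = eps = 1.0
lam = 6 * m * eps        # since eps = lam/(6m), Eq. (8)
yM = 1.0                 # released from rest at y0 = 1, so the turning points are +/- yM

U = lambda y: 0.5 * k * y**2 + lam * y**4 / 24
E = U(yM)

# Period from Eq. (83), with the substitution y = yM*sin(theta).
f = lambda th: yM * np.cos(th) / np.sqrt(E - U(yM * np.sin(th)))
T = np.sqrt(2 * m) * quad(f, -np.pi / 2, np.pi / 2)[0]

# Constraint check, Eq. (86), using the fourth-order series of Eq. (79).
y0, omega2 = yM, k / m
yT = (y0 - (y0 / 2) * (omega2 + eps * y0**2) * T**2
      + (y0 / 24) * (omega2**2 + 4 * eps * omega2 * y0**2 + 3 * eps**2 * y0**4) * T**4)
print(T, yT, y0)   # a poor match signals that more terms are needed, as in Figure 8
```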
Figure 6 shows an example solution using this approach, compared with
the result of a more sophisticated numerical algorithm. Notice that for short
enough times, the two curves agree well, but as time goes on, the two curves
start to deviate noticeably. Figures 7 and 8 show the two curves over longer
periods of time, where it becomes obvious that our fourth-order approximation
becomes relatively poor before even a single period of oscillation has occurred.
Unfortunately, we usually need to take quite a few terms in order for this type
of perturbation theory technique to be accurate, and in many situations it is
more practical to use a numerical algorithm to perform the time evolution.
However, the one advantage of this approach is that to an arbitrarily high level
of precision, we can find a closed-form expression for our particle’s motion,
in which the dependence on the initial conditions is explicit. Performing the
algebraic manipulations required in working out the Taylor series coefficients is
something that a program like Mathematica can help us with.
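
In the same spirit as the Mathematica remark, a computer algebra system such as sympy can generate the Taylor coefficients to any desired order, by repeatedly differentiating the equation of motion ÿ = −ω²y − εy³ and evaluating at t = 0. A minimal sketch:

```python
import sympy as sp

t = sp.symbols('t')
y0, v0, omega, eps = sp.symbols('y_0 v_0 omega epsilon')
y = sp.Function('y')(t)

N = 6   # expansion order; the lecture works out the terms through t**4

derivs = [y0, v0]                    # y(0) and y'(0)
dn = -omega**2 * y - eps * y**3      # symbolic expression for y''(t)
for n in range(2, N + 1):
    val = dn
    for j in range(n - 1, 0, -1):    # substitute known derivatives, highest first
        val = val.subs(sp.Derivative(y, (t, j)), derivs[j])
    derivs.append(sp.expand(val.subs(y, derivs[0])))
    dn = sp.diff(dn, t)              # expression for the next derivative of y

series = sum(d * t**n / sp.factorial(n) for n, d in enumerate(derivs))
print(sp.expand(series))             # reproduces Eqs. (67), (70), (74) and more
```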
For systems which exhibit periodic behaviour, the short time approximation
method can be used to help us understand the motion of our system, even
when the nonlinear terms in our system are not small. However, for systems
which are not periodic and do not have small nonlinearities, we are generally
in a much trickier situation if we want to know something about the long-time
behaviour of the system. While there are some techniques available for dealing
with situations like this, they are unfortunately beyond the scope of our class.

Chaos
In the absence of damping and external driving forces, we have seen that our
system exhibits regular periodic motion, and we have derived two different tech-
niques for understanding the nature of this periodic motion. In general, though,


Figure 6: A plot of the motion of our particle, using the short time approximation, for k = m = ε = y0 = 1 (orange curve). The blue curve shows the result of a more sophisticated numerical calculation. Notice that for short times the two curves agree well, although at longer times the deviation starts to become noticeable.

we may want to know something about the solutions to the full nonlinear dif-
ferential equation,

ÿ + 2βẏ + ω²y + φy² + εy³ = f(t). (87)

What type of behaviour does this equation exhibit, for an arbitrary forcing
function?
The answer to this question is that there is, in fact, an enormous range of
different types of qualitative behaviour which can arise from this differential
equation. Linear systems are special in the sense that the superposition rule
allows us to more or less understand all of the possible solutions to a linear
differential equation, for any arbitrary set of initial conditions. This special
property is generically not true, however, for nonlinear systems, and any at-
tempt to understand the entire range of behaviour for an arbitrary set of initial
conditions is doomed to failure. For this reason, it is impossible to give a short
summary here of all of the interesting types of behaviour this equation admits.
However, one of the most striking features of this equation, which is common
to almost all nonlinear systems, is that it is capable of exhibiting chaos. Chaotic
motion occurs when the trajectory of a system is highly sensitive to its initial
conditions. To demonstrate what I mean by this, consider two solutions to my
differential equation, x1 and x2 , with slightly different initial conditions. The


Figure 7: A plot of the motion of our particle, using the short time approximation, for k = m = ε = y0 = 1 (orange curve). The blue curve shows the result of a more sophisticated numerical calculation. Notice that for short times the two curves agree well, although at longer times the deviation starts to become noticeable.

difference between these two functions, as a function of time, is given by

∆x (t) = x1 (t) − x2 (t) . (88)

The question I may now ask is, if the difference between these two solutions is
small at time zero,
∆x(0) ≪ 1 , ∆ẋ(0) ≪ 1, (89)
then how does ∆x (t) behave at large times?
For linear systems, the difference between two solutions is typically either a
constant, or decays to zero. For example, in the linear harmonic oscillator, the
difference between two solutions with zero initial velocity, and slightly different
starting positions, is given by

∆x (t) = (A1 − A2 ) cos (ωt) . (90)

The closer the initial conditions, the smaller this quantity. For this reason, we
say that the differential equation describing the linear oscillator demonstrates
stability - two solutions which are initially close to each other will stay
close to each other, for all time.
However, in stark contrast to this is chaotic behaviour. In a chaotic system,
the difference between two solutions, even those which are initially separated


Figure 8: A plot of the motion of our particle, using the short time approximation, for k = m = ε = y0 = 1 (orange curve). The blue curve shows the result of a more sophisticated numerical calculation. Notice that for short times the two curves agree well, although at longer times the deviation starts to become noticeable.

by a very small amount, can diverge exponentially,

∆x(t) ∼ ∆x0 e^(λt). (91)

The quantity λ is known as the Lyapunov exponent, and quantifies the ex-
tent to which a system is chaotic. Almost all nonlinear systems are capable of
demonstrating chaotic behaviour, although this behaviour may not occur across
the entire phase space - for example, in the forced nonlinear oscillator, only
certain forcing functions will result in chaotic behaviour. While a driving force
is necessary to elicit chaotic behaviour in the nonlinear oscillator, other nonlin-
ear systems (some of which involve conservative forces and thus conservation
of energy) can demonstrate chaotic behaviour even without an external driving
force.
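
A minimal sketch of how one might probe this sensitivity numerically: integrate the forced, damped equation (87) (with φ = 0) twice, from initial conditions differing by 10⁻⁸, and monitor ∆x(t). All parameter values here are illustrative, and only suitable choices of forcing actually produce chaotic behaviour:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Forced, damped Duffing oscillator, Eq. (87) with phi = 0 and f(t) = F cos(w t).
beta, omega, eps = 0.1, 1.0, 1.0     # illustrative values
F, w = 5.0, 1.4                      # illustrative forcing strength and frequency

def rhs(t, s):
    y, v = s
    return [v, -2 * beta * v - omega**2 * y - eps * y**3 + F * np.cos(w * t)]

t_eval = np.linspace(0.0, 60.0, 601)
sol1 = solve_ivp(rhs, (0.0, 60.0), [1.0, 0.0], t_eval=t_eval, rtol=1e-10, atol=1e-10)
sol2 = solve_ivp(rhs, (0.0, 60.0), [1.0 + 1e-8, 0.0], t_eval=t_eval, rtol=1e-10, atol=1e-10)

dx = np.abs(sol1.y[0] - sol2.y[0])
# In a chaotic regime, log(dx) grows roughly linearly in t; its slope
# estimates the Lyapunov exponent lambda of Eq. (91).
print(dx[::100])
```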
Whether or not a system is capable of demonstrating chaos is incredibly
important, for a variety of reasons. One of the most important reasons is that
a lack of stability in a system means that modelling its long time behaviour
can be very difficult. The reason for this is that any realistic system which
we study in a laboratory will have some initial conditions that we can measure
only to within some experimental precision. For this reason, we will not know
precisely what the initial conditions of our system are, and so if the system
demonstrates chaos, it will be essentially impossible to say anything about its

long-time behaviour, since our predicted motion can diverge exponentially from
the true motion. This is why systems which are, in principle, deterministic, can
still be, for all practical purposes, impossible to simulate. It’s also, more or less,
the reason why I can’t tell you what the weather will be like one month from
now.
Secondly, chaos is important because chaotic systems are capable of a be-
haviour known as thermalization. I know that if I take a complicated system
of particles and leave them in a box, eventually, after enough time, the gas inside
of the box will obey the laws of thermodynamics, regardless of the initial con-
ditions for each and every one of the particles in the box. The assumption that
this will occur is known as the ergodic hypothesis, and it is the fundamental
assumption behind all of statistical mechanics. For this reason, it is important
to understand exactly when a many-body system is capable of demonstrating
chaos. While this question was settled for classical systems many years ago, it
is in fact still an active research question as to the precise conditions necessary
for a closed, quantum system to come to thermal equilibrium, and in fact, this
question is at the heart of my own research.
While nonlinear systems are very interesting, they also tend to be very chal-
lenging to solve, and for this reason, we will not be able to dedicate any more
time to their study in this class. However, for those of you interested in learning
more about nonlinear systems, the course Physics 106 here at UCSB is dedicated
entirely to this subject.
