Launch Series Solutions Lab
Series Solutions Lab Instructions
In chapter 4 we are building new functions using Taylor series. These
functions are designed to be the solutions of differential equations.
Naturally, we will want to be able to evaluate these functions. Since they
are defined in terms of an infinite series, we can't just evaluate all
infinitely many terms. The easy and natural (and usually correct) way to
approximate the value is to sum the first several terms of the Taylor series,
which gives the Taylor polynomial approximation. Depending on the
situation, you may be able to sum just a few terms, or you may want to
sum hundreds of terms.
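For a function whose value we already know, this is easy to see in a few lines of code. Here is a minimal Python sketch (our own illustration, not part of the lab applet; the function name is arbitrary) using the series for e^x:

```python
import math

def taylor_exp(x, degree):
    """Partial sum 1 + x + x^2/2! + ... + x^degree/degree! of the
    Taylor series for e^x about 0."""
    term, total = 1.0, 1.0
    for k in range(1, degree + 1):
        term *= x / k          # x^k / k! from the previous term
        total += term
    return total

# Near the center of the expansion, a few terms already do well:
# taylor_exp(1.0, 3) misses e by about 0.05, while taylor_exp(1.0, 10)
# agrees with math.e to better than 1e-7.
```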
A key feature of a power series is the radius of convergence. If
you have a series
a_0 + a_1 x + a_2 x^2 + ..., then there is a constant
R such that the series converges in the region
|x| < R and diverges in the region
|x| > R. That is, the region of convergence
can't extend more in one direction than the other. While the function may
be defined outside the radius of convergence, the series diverges there
and the Taylor polynomials do not provide a reasonable
approximation outside the radius of convergence. For example, the function
(1 + x^2)^(-1) has Taylor series
1 − x^2 + x^4 − x^6 + ..., which has radius of convergence 1.
While the function is defined for all real x, the Taylor series
diverges and the Taylor polynomials do not provide an accurate
approximation for |x| ≥ 1. Obviously it is
important to be aware of where the Taylor polynomial approximation breaks
down if you are going to use it to approximate a solution to a
differential equation.
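This breakdown is easy to check numerically. A short Python sketch (our own illustration, not part of the applet; the names are arbitrary):

```python
def f(x):
    return 1.0 / (1.0 + x * x)

def taylor_poly(x, degree):
    """Partial sum 1 - x^2 + x^4 - ... up to the given degree."""
    total, term = 0.0, 1.0
    for _ in range(degree // 2 + 1):
        total += term
        term *= -x * x
    return total

# Inside the radius of convergence (|x| < 1) the degree-20 polynomial is
# excellent; outside it is wildly wrong, even though f(1.5) itself is
# perfectly well defined.
```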
In this lab,
we will look at Taylor polynomials that approximate the solution of
a differential equation. In this case, we won't know the exact solution,
so we won't be able to compare our approximations to the
actual function value in most cases. However, it usually isn't hard to
see where things seem to work, and this will build some intuition
about where series converge that we can justify with theorems in next
Monday's lecture. We will also look at a
couple of situations that illustrate dangers in using Taylor polynomial
approximations that you
should be aware of.
- The first problem we will consider is
(x^2+1)y'' + xy' + 2y = 0, y(0)=0, y'(0)=1.
Launch the lab (see the link at the top of the page) and change the
coefficients to match this equation. Now look at the 5, 10, 25, 50, 500
and 5000 degree Taylor polynomials. What do you think is the radius of
convergence of the series solution for this problem?
- Now change the coefficients of
y and y',
along with the initial values. How does changing the lower order
coefficients seem to affect the radius of convergence of the power
series? (You may find it easiest to
check the radius of convergence if you leave the degree set at
5000).
- Next consider the problem
(0.5x^2+x+1)y'' + (x+1)y' + 3y = 0, y(0)=1,
y'(0)=0. Where do you think the series
solution for this problem converges? How does changing the
lower order terms affect the radius of convergence?
- Based on your work so far, make a conjecture about the relationship
between radius of convergence and the coefficients of the differential
equation. Test your conjecture with at least two additional initial value
problems.
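The applet does these computations for you, but it is not hard to sketch the idea by hand. Substituting y = a_0 + a_1 x + a_2 x^2 + ... into the first problem's equation (x^2+1)y'' + xy' + 2y = 0 and collecting powers of x gives the recurrence a_{n+2} = -(n^2 + 2) a_n / ((n+2)(n+1)). A minimal Python sketch of this approach (our own, not the applet's code):

```python
def series_solution(x, degree):
    """Degree-`degree` Taylor polynomial of the series solution to
    (x^2+1)y'' + xy' + 2y = 0, y(0)=0, y'(0)=1, using the recurrence
    a_{n+2} = -(n^2 + 2) * a_n / ((n+2)(n+1))."""
    a = [0.0, 1.0]                      # a_0 = y(0), a_1 = y'(0)
    for n in range(degree - 1):
        a.append(-(n * n + 2) * a[n] / ((n + 2) * (n + 1)))
    total = 0.0
    for c in reversed(a):               # evaluate by Horner's rule
        total = c + total * x
    return total

# The partial sums settle down for |x| < 1 and blow up for |x| > 1,
# consistent with a radius of convergence of 1.
```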
The next two problems point out
places where looking at Taylor approximations may be misleading.
- A series solution will be valid up to the radius of convergence. The
approximation provided by a finite Taylor polynomial (as opposed to
the infinite Taylor series) won't work over quite as large a
region. In fact, sometimes it can be difficult or even impossible to discover
features of the solution near the radius of convergence from examining
Taylor polynomials. Consider the equation
(x^2+2x+1)y'' + (x+1)y' + y = 0, y(0)=1,
y'(0)=0.
The exact solution to this initial value problem is
y = cos(log(x+1)) (we will see how to find this
solution next week). The radius of convergence of this series solution is 1,
so the Taylor series represents the function in the region
-1 < x < 1. Look at the behaviour of the Taylor
polynomials near x = -1. Sketch how you would
guess the solution to the initial value problem behaves near
x = -1 based on the behaviour of the Taylor
polynomials up to degree 5000. Show that the solution actually oscillates
between -1 and 1 infinitely many times near
x = -1. Explain why no polynomial can represent
infinitely many oscillations (and in this case, it doesn't even come
close).
- Other problems may arise with the computation of the Taylor
polynomials even away from the radius of convergence. Consider the
equation
y'' + 400y = 0, y(0)=1, y'(0)=0. The true
solution
to this problem is y = cos(20x) of course, and
the radius of convergence is infinity.
Sketch the behaviour of the Taylor polynomial approximations. The
difficulty you observe is
caused by a build-up of round-off error in the computation of the Taylor
polynomial. Note that the applet uses double-precision arithmetic and is
coded to minimize the number of multiplications (and hence the amount of
accumulated round-off error). Sometimes the computation just doesn't want
to behave (and then you try to come up with some means other than blindly
computing the polynomial to find the answer). By the way, the problem here
is not that 400 is too large. Anytime you try to compute a cosine using
Taylor polynomials in double-precision, you will run into roundoff errors
a little after 6 periods. The same behavior happens with
y'' + y = 0, y(0)=1, y'(0)=0,
whose solution is y = cos(x), a little past
x=37.5. The choice of cos(20x) was made to get the bad
behavior in the graphing window.
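This effect is easy to reproduce outside the applet. The sketch below (our own, in Python, which also uses double-precision arithmetic) sums the Maclaurin series for cos(z) directly. For z = 40 (that is, cos(20x) at x = 2) the individual terms grow to roughly 10^16 before they shrink, so round-off of size around 1 contaminates a final answer of size at most 1:

```python
import math

def taylor_cos(z):
    """Sum the series 1 - z^2/2! + z^4/4! - ... in double precision,
    stopping once the terms become negligible."""
    term, total, k = 1.0, 1.0, 0
    while abs(term) > 1e-30:
        term *= -z * z / ((2 * k + 1) * (2 * k + 2))
        total += term
        k += 1
    return total

# At z = 20 the sum is still accurate to several digits; at z = 40 the
# accumulated round-off has destroyed the answer, just as in the applet.
```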
Write up a lab report with your answers to the questions in bold-face
above. Be sure to use complete sentences in your explanations.
Please report any problems with this page to
bennett@math.ksu.edu