
Deciding How To Solve A First Order Problem

Discussion

In real life, you are unlikely to be asked whether a problem is a Bernoulli equation. In fact, you are unlikely to be directly asked to solve any differential equation at all. More commonly, you will have some question that, after some work, can be reduced to solving a differential equation, without any clue as to what type of equation it is. Given such a situation, I check whether it is one of the types we have considered, in the following order.
  • Separable
  • Linear
  • Bernoulli
  • Exact
  • Homogeneous
There is nothing magical about this ordering. I just find it easy to check whether an equation is one of the first three types by inspection, so I do that first. Equations which are homogeneous are often solvable using some other method, and if they are, it is usually easier to use the other method, so I check whether an equation is homogeneous only after I have tried every other method. This is just the order I use; you are welcome to try the techniques in a different order if you want. Of course, lots of equations can't be solved using any of the five methods we have covered. There are many other, somewhat less common, techniques that we haven't covered in this class. The standard reference if you need to solve an equation that doesn't fit into any of our paradigms is Differentialgleichungen: Lösungsmethoden und Lösungen, by Erich Kamke. This book contains a vast assortment of tricks and techniques for solving differential equations. Unfortunately it is written in German, but you can't have everything.
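If you want a machine check on this sort of classification, computer algebra systems can do it. Here is a minimal sketch (assuming Python with the SymPy library installed; the particular equation is just an example, the same one solved later on this page):

    from sympy import Eq, Function, symbols, classify_ode

    x = symbols('x')
    y = Function('y')

    # The example equation used later on this page: dy/dx = 2xy
    eq = Eq(y(x).diff(x), 2*x*y(x))

    # classify_ode returns a tuple of every solution method SymPy
    # recognizes as applicable, with labels such as 'separable',
    # '1st_linear', 'Bernoulli', and '1st_exact'.
    print(classify_ode(eq, y(x)))

SymPy's labels don't match our names exactly, but the entries correspond roughly to the checklist above, and seeing several matches at once illustrates the point that one equation can often be solved by more than one method.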

If you have an equation you can't figure out how to solve explicitly, then you typically try using numerical methods to approximate the solution. But whenever you can't find an explicit solution to a mathematical problem and are about to use a numerical approximation instead, you ought to pause and think for a minute. Could the reason you can't find an explicit solution be that no such solution exists? How do we know there is any function whose derivative satisfies some equation we have written down? This can be a tricky and dangerous question in some situations. What makes it dangerous is that the numerical methods we've learned will (almost) always produce numbers. But the numbers they produce won't be meaningful if no solution exists. A proper investigation into numerical methods addresses how you know whether a solution exists and how closely your numerical techniques approximate that solution. In this class, we will just briefly discuss the basic existence theory for first order equations.

Theory of First Order Equations

Discussion

To answer the question of whether a solution exists for a given initial value problem, we have the following theorem:

Theorem:     Let the functions $f$ and $\partial f/\partial y$ be continuous in some rectangle $a<x<b$, $c<y<d$ containing the point $(p,q)$. Then in some interval $p-h<x<p+h$ contained in $a<x<b$, there is a unique solution $y=g(x)$ of the initial value problem $$\frac{dy}{dx} = f(x,y)\qquad y(p)=q$$

I won't give a formal proof of this theorem, but I will sketch out the approach used to build a careful proof. The first tool is that we convert our differential equation to an integral equation.

A function $y(x)$ solves the initial value problem $$\frac{dy}{dx}=f(x,y)\qquad y(p)=q$$ if and only if it solves the integral equation $$y(x)=q+\int_p^x f(t,y(t))\,dt.$$ You can check the equivalence of these two problems by differentiating the integral equation: the fundamental theorem of calculus recovers the differential equation, while setting $x=p$ makes the integral vanish and recovers the initial condition $y(p)=q$. Now we will build an iterative process for creating a solution to the integral equation.
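If you like, you can have a computer algebra system carry out that differentiation. This is a minimal sketch (again assuming Python with SymPy; $f$ and $y$ are left as unspecified symbolic functions):

    from sympy import Function, Integral, diff, symbols

    x, t, p, q = symbols('x t p q')
    f = Function('f')
    y = Function('y')

    # The right-hand side of the integral equation: q + integral from p to x
    rhs = q + Integral(f(t, y(t)), (t, p, x))

    # Differentiating with respect to the upper limit x recovers f(x, y(x)),
    # the right-hand side of the differential equation.
    print(diff(rhs, x))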

Our procedure is to start with an initial guess, $y_0(x)$, for the solution to the initial value problem. Since the one value we do know is that $y(p)=q$, we'll make our first guess the constant function $y_0(x)=q$. We then compute $$y_{n+1}(x)=q+\int_p^x f(t,y_n(t))\,dt.$$ Let's see how this works with an example. Consider the initial value problem $$\frac{dy}{dx}=2xy,\qquad y(0)=1.$$ We start with $y_0(x)=1$ and then compute $$ \begin{align} y_1(x)&=\int_0^x 2t(1)\,dt + 1 = \int_0^x 2t\,dt + 1 = x^2+1 \\ y_2(x)&=\int_0^x 2t(y_1(t))\,dt + 1 = \int_0^x 2t(t^2+1)\,dt + 1 = \int_0^x (2t^3+2t)\,dt + 1 = \frac{x^4}{2}+x^2+1 \\ &\cdots \end{align} $$ Notice that the iterates are the partial sums of the Taylor series of $e^{x^2}$, which is indeed the solution of this initial value problem (as you can check by separating variables).

This technique is called Picard iteration, and we can show that, as long as the hypotheses of the theorem about continuity of $f$ and $\partial f/\partial y$ are satisfied, the sequence of functions will converge to a limit function $y_{\infty}(x)$, at least in some interval of the form specified in the conclusion of the theorem. Taking the limit of both sides of the integral equation shows that $$y_{\infty}(x) = q+\int_p^x f(t,y_{\infty}(t))\,dt,$$ so $y_{\infty}(x)$ is a solution to the integral equation, and hence the solution to the initial value problem. You are welcome to come see me in my office if you are interested in seeing the full details.

We've already looked at iterative processes like Euler's method to build better and better approximations to the solution of a differential equation. Those techniques are easier and will usually converge much more quickly than Picard iteration, which is why they are used in practice. The advantage of using Picard iteration on the integral equation is that while it is a more complicated technique, it is much easier to show the process actually converges to a solution, because integration is a better-behaved process (in a technical numerical analysis sense) than differentiation. And since the process converges to a solution, we are guaranteed a solution exists. We will return to integral operators when we get to the chapter on Laplace transforms. The connections between differential equations and integral operators form a deep and fruitful area of both theoretical and practical research.
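Here is a minimal sketch of Picard iteration in code (assuming Python with SymPy; the function name picard and its argument names are my own, chosen for illustration):

    from sympy import integrate, sympify, symbols

    x, t = symbols('x t')

    def picard(f, p, q, n):
        # Start from the constant guess y_0(x) = q, then iterate
        # y_{k+1}(x) = q + integral from p to x of f(t, y_k(t)) dt.
        y = sympify(q)
        for _ in range(n):
            y = q + integrate(f(t, y.subs(x, t)), (t, p, x))
        return y

    # The example from above: dy/dx = 2xy, y(0) = 1
    for n in range(4):
        print(picard(lambda t, y: 2*t*y, 0, 1, n))
    # 1,  x**2 + 1,  x**4/2 + x**2 + 1,  x**6/6 + x**4/2 + x**2 + 1

Each pass integrates the previous iterate, so the expressions keep growing. That is exactly why Picard iteration is a tool for proving existence rather than a practical numerical method.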

You should note that this theorem only guarantees us a solution that exists in some interval around our initial point. We have no guarantee that the solution doesn't stop somewhere. In fact, it is quite possible for a solution to exist only on a finite interval, as we saw when we discussed explosions earlier. On the other hand, Euler's method and the improved Euler's method will almost always produce results on an infinite interval. It will be up to you to watch for signs that the numerical results no longer approximate the true solution (which may no longer exist). To detect explosions, you need to recognize from the differential equation itself that an explosion can occur, and/or use more sophisticated software that tests for singularities in the result. Unfortunately, numerical methods can sometimes be fooled into thinking a solution explodes even when it doesn't. This is a final reason for understanding the geometric approach to dealing with autonomous equations.
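To see the danger concretely, here is a minimal sketch (plain Python; the equation $dy/dx=y^2$, $y(0)=1$ is my own illustrative choice, with exact solution $y=1/(1-x)$, which explodes at $x=1$):

    def euler(f, x0, y0, h, steps):
        # Basic Euler's method: it happily produces a number at every
        # step, whether or not the true solution still exists there.
        x, y = x0, y0
        for _ in range(steps):
            y += h * f(x, y)
            x += h
        return x, y

    # dy/dx = y^2, y(0) = 1 has exact solution y = 1/(1-x), which
    # explodes at x = 1. Euler's method marches right past x = 1 and
    # returns an enormous but finite (and meaningless) value at x = 1.5.
    print(euler(lambda x, y: y**2, 0.0, 1.0, 0.05, 30))

Nothing in the output warns you that the true solution stopped existing at $x=1$; you have to recognize that from the equation itself.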


©2010, 2014 Andrew G. Bennett