
### Deciding How To Solve A First Order Problem

#### Discussion

In real life, you are unlikely to be asked whether a problem is Bernoulli. In fact, you are unlikely to be directly asked to solve any differential equation at all. More commonly, you will have some question that, after some work, can be reduced to solving a differential equation, with no clue as to what type of equation it is. Given such a situation, I check whether it is one of the types we have considered, in the following order.

- Separable
- Linear
- Bernoulli
- Exact
- Homogeneous
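If you have SymPy available, its `classify_ode` function runs a similar battery of tests and reports every form it recognizes. A minimal sketch (the particular equation is my own illustration, not one from the text):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# dy/dx + 2xy = x can be rewritten as dy/dx = x(1 - 2y),
# so it is both separable and first order linear.
eq = sp.Eq(y(x).diff(x) + 2*x*y(x), x)
hints = sp.classify_ode(eq, y(x))
print(hints)
```

The returned tuple lists solution methods in roughly the order SymPy's `dsolve` would try them, which parallels the checklist above.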

If none of these techniques works, a standard reference is *Differentialgleichungen: Lösungsmethoden und Lösungen* by Erich Kamke. This book contains a vast assortment of different tricks and techniques for solving differential equations. Unfortunately, it is written in German, but you can't have everything.

If you have an equation you can't figure out how to solve explicitly, you typically turn to numerical methods to approximate the solution. But whenever you can't find an explicit solution to a mathematical problem and are about to use a numerical approximation instead, you ought to pause and think for a minute. Could it be that the reason you can't find an explicit solution is that no such solution exists? How do we know there is any function whose derivative satisfies some equation we have written down? This can be a tricky and dangerous question in some situations. What makes it dangerous is that the numerical methods we've learned will (almost) always produce numbers. But the numbers they produce won't be meaningful if no solution exists. A proper investigation into numerical methods addresses how you know whether a solution exists and how closely your numerical techniques approximate that solution. In this class, we will just briefly discuss the basic existence theory for first order equations.

### Theory of First Order Equations

#### Discussion

To answer the question of whether a solution exists for a given initial value problem, we have the following theorem:

*Theorem:* Let the functions $f$ and $\partial f/\partial y$ be continuous in some rectangle $a<x<b$, $c<y<d$ containing the point $(p,q)$. Then in some interval $p-h<x<p+h$ contained in $a<x<b$, there is a unique solution $y=g(x)$ of the initial value problem
$$\frac{dy}{dx} = f(x,y)\qquad y(p)=q $$
I won't give a formal proof of this theorem but I will sketch out the
approach used to build a careful proof. The first tool is that we convert
our differential equation to an integral equation.

A function $y(x)$ solves the initial value problem $$\frac{dy}{dx}=f(x,y)\qquad y(p)=q $$ if and only if it solves the integral equation $$y(x)=\int_p^x f(t,y(t))\, dt\quad + \quad q$$ You can check the equivalence of these two problems by differentiating the integral equation to recover the differential equation, and by substituting $x=p$ to recover the initial condition. Now we will build an iterative process for creating a solution to the integral equation.
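Written out, the check uses the Fundamental Theorem of Calculus for the derivative and the vanishing of the integral at $x=p$ for the initial condition:
$$\frac{d}{dx}\left(\int_p^x f(t,y(t))\,dt + q\right) = f(x,y(x)), \qquad y(p) = \int_p^p f(t,y(t))\,dt + q = q.$$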

Our procedure is to start with an initial guess, $y_0(x)$, for the solution to the initial value problem. Since the one value we do know is $y(p)=q$, we'll make our first guess the constant function $y_0(x)=q$. We then compute $$ y_{n+1}(x)=\int_p^x f(t,y_n(t))\, dt\quad + \quad q. $$

Let's see how this works with an example. Consider the initial value problem $$\frac{dy}{dx}=2xy,\qquad y(0)=1.$$ We start with $y_0(x)=1$ and then compute $$ \begin{align} y_1(x)&=\int_0^x 2t(1)\, dt + 1 = \int_0^x 2t\, dt + 1 = x^2+1 \\ y_2(x)&=\int_0^x 2t(y_1(t))\,dt + 1 = \int_0^x 2t(t^2+1)\,dt + 1 = \int_0^x (2t^3+2t)\, dt + 1 = \frac{x^4}{2}+x^2+1 \\ &\cdots \end{align} $$

This technique is called Picard iteration, and we can show that, as long as the hypotheses of the theorem about continuity of $f$ and $\partial f/\partial y$ are satisfied, the sequence of functions converges to a limit function $y_{\infty}(x)$, at least in some interval of the form specified in the conclusion of the theorem. Taking the limit of both sides of the iteration then shows that $$ y_{\infty}(x) = \int_p^x f(t,y_{\infty}(t))\,dt\quad + \quad q, $$ so $y_{\infty}(x)$ is a solution to the integral equation, and hence the solution to the initial value problem. You are welcome to come see me in my office if you are interested in seeing the full details.

We've already looked at iterative processes like Euler's method to build better and better approximations to the solution of a differential equation. Those techniques are easier and will usually converge much more quickly than Picard iteration, which is why they are used in practice. The advantage of using Picard iteration on the integral equation is that, while it is a more complicated technique, it is much easier to show the process actually converges to a solution, because integration is a better behaved process (in a technical numerical analysis sense) than differentiation. And since the process converges to a solution, we are guaranteed a solution exists. We will return to integral operators when we get to the chapter on Laplace transforms. The connections between differential equations and integral operators form a deep and fruitful area of both theoretical and practical research.
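The iteration is easy to carry out with a computer algebra system. Here is a short sketch using SymPy (the helper function `picard` is my own, written for this illustration) that reproduces the iterates $y_1, y_2, \dots$ computed above for $dy/dx = 2xy$, $y(0)=1$:

```python
import sympy as sp

x, t = sp.symbols('x t')

def picard(f, p, q, n):
    """Return the n-th Picard iterate for dy/dx = f(x, y), y(p) = q."""
    y = sp.Integer(q)                         # y_0(x) = q, the constant first guess
    for _ in range(n):
        # y_{k+1}(x) = integral from p to x of f(t, y_k(t)) dt, plus q
        y = sp.integrate(f(t, y.subs(x, t)), (t, p, x)) + q
    return sp.expand(y)

# dy/dx = 2xy, y(0) = 1 -- the example worked above.
# The iterates are the partial sums of the Taylor series of e^(x^2),
# the exact solution of this initial value problem.
for n in range(1, 4):
    print(f"y_{n}(x) =", picard(lambda xx, yy: 2*xx*yy, 0, 1, n))
```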

You should note that this theorem only guarantees a solution that exists in some interval around the initial point. We have no guarantee that the solution doesn't stop somewhere. In fact, it is quite possible for a solution to exist only on a finite interval, as we saw when we discussed explosions earlier. On the other hand, Euler's method and the improved Euler's method will almost always produce results on an infinite interval. It will be up to you to watch for signs that the numerical results no longer approximate the true solution (which may no longer exist). To detect explosions, you need to recognize from the differential equation that an explosion is possible, and/or use more sophisticated software that tests for singularities in the result. Unfortunately, numerical methods can sometimes be fooled into thinking a solution explodes even when it doesn't. This is a final reason for understanding the geometric approach to dealing with autonomous equations.
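To see how a numerical method can march straight past a blow-up, here is a small sketch (the fixed-step Euler implementation is my own, written for this illustration) applied to $dy/dx = y^2$, $y(0)=1$, whose exact solution $y = 1/(1-x)$ explodes at $x=1$:

```python
import math

def euler(f, x0, y0, h, steps):
    """Fixed-step Euler's method; returns the list of (x, y) points."""
    pts = [(x0, y0)]
    for _ in range(steps):
        y0 = y0 + h * f(x0, y0)
        x0 = x0 + h
        pts.append((x0, y0))
    return pts

# dy/dx = y^2, y(0) = 1 has exact solution y = 1/(1 - x),
# which blows up at x = 1.  Euler keeps computing past that point,
# returning finite (but meaningless) numbers for x >= 1.
pts = euler(lambda x, y: y * y, 0.0, 1.0, 0.1, 15)
for x, y in pts:
    print(f"x = {x:.1f}  y = {y:12.3f}")
```

Nothing in the output announces that the true solution ceased to exist at $x=1$; you have to recognize that from the equation itself.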

If you have any problems with this page, please contact bennett@math.ksu.edu.

©2010, 2014 Andrew G. Bennett