Deciding How To Solve A First Order Problem
Discussion
In real life, you are unlikely to be asked if a problem is Bernoulli. In fact, you are unlikely to be directly asked to solve any differential equation at all. More commonly, you will have some question that, after some work, can be reduced to solving a differential equation, without any clue as to what type of equation it is. Given such a situation, I check to see if it is one of the types we have considered, in the following order.
- Separable
- Linear
- Bernoulli
- Exact
- Homogeneous
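As an aside beyond these notes, a computer algebra system can run essentially the same checklist mechanically. Here is a minimal sketch using SymPy's `classify_ode` (assuming SymPy is installed; the equation $dy/dx = xy$ is my own example, chosen because it matches several types at once):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# dy/dx = x*y matches several of the types on the checklist at once
eq = sp.Eq(y(x).diff(x), x * y(x))

# classify_ode returns a tuple of solver hints, roughly ordered from
# simpler to more involved solution methods
hints = sp.classify_ode(eq, y(x))
print(hints)
```

As with the checklist above, when several types match you would normally use the earliest (simplest) applicable method.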
Theory of First Order Equations
Discussion
To answer the question of whether a solution exists for a given initial value problem, we have the following theorem.

Theorem: Let the functions $f$ and $\partial f/\partial y$ be continuous in some rectangle $a<x<b$, $c<y<d$ containing the point $(p,q)$. Then in some interval $p-h<x<p+h$ contained in $a<x<b$, there is a unique solution $y=g(x)$ of the initial value problem $$\frac{dy}{dx} = f(x,y)\qquad y(p)=q $$

I won't give a formal proof of this theorem, but I will sketch out the approach used to build a careful proof. The first tool is that we convert our differential equation to an integral equation. A function $y(x)$ solves the initial value problem $$\frac{dy}{dx}=f(x,y)\qquad y(p)=q $$ if and only if it solves the integral equation $$y(x)=\int_p^x f(t,y(t))\, dt\quad + \quad q$$ You can check the equivalence of these two problems by differentiating the integral equation to show it leads back to the initial value problem.

Now we will build an iterative process for creating a solution to the integral equation. Our procedure is to start with an initial guess, $y_0(x)$, for the solution to the initial value problem. Since the one value we do know is that $y(p)=q$, we'll make our first guess the constant function $y_0(x)=q$. We then compute $$ y_{n+1}(x)=\int_p^x f(t,y_n(t))\, dt\quad + \quad q. $$ Let's see how this works with an example.
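To make the differentiation check concrete (this is the computation referred to above, written out): by the Fundamental Theorem of Calculus, $$\frac{d}{dx}\left(\int_p^x f(t,y(t))\,dt \quad + \quad q\right) = f(x,y(x)),$$ so a solution of the integral equation satisfies the differential equation, and evaluating the integral equation at $x=p$ gives $$y(p)=\int_p^p f(t,y(t))\,dt \quad + \quad q = q,$$ so it satisfies the initial condition as well.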
Consider the initial value problem $$\frac{dy}{dx}=2xy,\qquad y(0)=1.$$ We start with $y_0(x)=1$ and then compute $$ \begin{align} y_1(x)&=\int_0^x 2t(1)\, dt + 1 = \int_0^x 2t\, dt + 1 = x^2+1 \\ y_2(x)&=\int_0^x 2t(y_1(t))\,dt + 1 = \int_0^x 2t(t^2+1)\,dt + 1 = \int_0^x (2t^3+2t)\, dt + 1 = \frac{x^4}{2}+x^2+1 \\ &\cdots \end{align} $$

This technique is called Picard iteration, and we can show that, as long as the hypotheses of the theorem about continuity of $f$ and $\partial f/\partial y$ are satisfied, the sequence of functions will converge to a limit function $y_{\infty}(x)$, at least in some interval of the form specified in the conclusion of the theorem. Taking the limit of both sides of the integral equation then shows that $$ y_{\infty}(x) = \int_p^x f(t,y_{\infty}(t))\,dt\quad + \quad q, $$ so $y_{\infty}(x)$ is a solution to the integral equation, and hence the solution to the initial value problem. You are welcome to come see me in my office if you are interested in seeing the full details.

We've already looked at iterative processes like Euler's method that build better and better approximations to the solution of the differential equation. Those techniques are easier and will usually converge much more quickly than Picard iteration, which is why they are used in practice. The advantage of using Picard iteration on the integral equation is that while it is a more complicated technique, it is much easier to show the process actually converges to a solution, because integration is a better-behaved process (in a technical numerical analysis sense) than differentiation. And since this process converges to a solution, we are guaranteed a solution exists. We will return to using integral operators again when we get to the chapter on Laplace transforms. The connections between differential equations and integral operators form a deep and fruitful area of both theoretical and practical research.
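The iterates above can also be generated mechanically. Here is a short sketch (my own, not part of the notes) that runs Picard iteration for this specific example, $dy/dx=2xy$ with $y(0)=1$, representing each iterate by its list of polynomial coefficients:

```python
from fractions import Fraction

def picard_step(coeffs):
    """One Picard iterate for dy/dx = 2*x*y, y(0) = 1.

    coeffs[k] is the coefficient of x**k in y_n(x); the return value
    holds the coefficients of y_{n+1}(x) = 1 + integral_0^x 2*t*y_n(t) dt.
    """
    # Multiply y_n(t) by 2t: the coefficient of t^(k+1) is 2*coeffs[k].
    prod = [Fraction(0)] + [2 * c for c in coeffs]
    # Integrate term by term (t^k -> x^(k+1)/(k+1)), then add q = 1.
    return [Fraction(1)] + [c / (k + 1) for k, c in enumerate(prod)]

y = [Fraction(1)]        # y_0(x) = 1, the constant initial guess
for _ in range(2):
    y = picard_step(y)

print(y)                 # coefficients of y_2(x) = 1 + x^2 + x^4/2
```

Each pass reproduces the hand computation: the coefficient lists for $y_1$ and $y_2$ are $[1,0,1]$ (that is, $1+x^2$) and $[1,0,1,0,\tfrac12]$ (that is, $1+x^2+\tfrac{x^4}{2}$), and further iterates build up the Taylor series of the exact solution $y=e^{x^2}$.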
You should note that this theorem only guarantees us a solution that exists in some interval around our initial point. We have no guarantee that the solution doesn't stop somewhere. In fact, it is quite possible for a solution to exist only on a finite interval, as we saw when we discussed explosions earlier. On the other hand, Euler's method and the improved Euler's method will almost always produce results on an infinite interval. It will be up to you to watch for signs that the numerical results no longer approximate the true solution (which may no longer exist). To detect explosions, you need to recognize from the differential equation that an explosion exists, and/or use more sophisticated software that tests for singularities in the result. Unfortunately, the numerical methods can sometimes be fooled into thinking a solution explodes even when it doesn't. This is a final reason for understanding the geometric approach to dealing with autonomous equations.
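To see how a numerical method can sail right past an explosion, here is a small illustration (my own example, not from the notes) applying the forward Euler method to $dy/dx = y^2$, $y(0)=1$, whose exact solution $y = 1/(1-x)$ blows up at $x=1$:

```python
def euler(f, x0, y0, h, steps):
    """Forward Euler: advance the approximate solution h at a time."""
    x, y = x0, y0
    for _ in range(steps):
        y += h * f(x, y)
        x += h
    return x, y

# dy/dx = y^2 with y(0) = 1 has exact solution y = 1/(1 - x),
# which explodes at x = 1 -- yet Euler marches right past it.
x, y = euler(lambda x, y: y * y, 0.0, 1.0, 0.1, 20)
print(x, y)
```

The exact solution ceases to exist at $x=1$, but Euler happily reports a very large yet finite value at $x=2$; nothing in the output itself flags the singularity.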
©2010, 2014 Andrew G. Bennett