Differential Equations for Reliability, Maintainability, and Availability
Harry A. Watson, Jr.
September 1, 2005
Abstract
This is an electronic textbook on Differential Equations for Reliability, Maintainability, and Availability. It is written to fill the need for an introductory book which can be accessed on-line, stored on magnetic media or on CDs (Compact Disks), and cross-referenced electronically. This entire book was done in the (public domain) typesetting language LaTeX. The equations, figures, pictures, and tables were typeset in their respective LaTeX environments. Many thanks are due to those who assisted in the proofreading and problem sets: Eric Gentile (electrical engineer and physicist) and Ron Shortt (physicist).

Because computer software is rapidly replacing rote manipulations, and because modern numerical techniques allow qualitative real-time graphics, the traditional introductory course in ordinary differential equations is doomed to oblivion. However, many textbooks in physics, chemistry, engineering, medicine, biology, and ecology mention techniques, terms, and processes covered nowhere else. Advanced courses on dynamical systems, algebraic topology, and functional analysis never mention such items as “exact equations,” “integrating factors,” or the Clairaut equation. Advanced modern algebra treats differential forms and tensors in a totally different manner than is done either in physics or in engineering. Without a reference in introductory differential equations, one could never span the disciplines. One needs practice in modern transform methods in general and in the Laplace transform in particular. This practice should be done at the introductory level and not in functional analysis, where such topics are introduced in an abstract manner with many lemmata and with no correlation to physical reality.

Finally, in this book, we end each proof with the hypocycloid of four cusps, the diamond symbol (♦).
Limit of Liability/Disclaimer of Warranty: While every effort has been made to ensure the correctness of this material, the author makes no warranty, expressed or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, process, or procedure disclosed herein. In no event will the author be liable for any loss of profit or any other commercial damage, including but not limited to special, incidental, consequential, or other damages.
Copyright © 1997, Harry A. Watson, Jr. This book may be used by NWAD (Naval Warfare Assessment Division) and its sponsors freely without payment of any royalty, and any part may be copied freely, except that no alteration is allowed when the copyright symbol (©) is displayed.
Contents

1 Introduction
   1.1 First Encounters
   1.2 Basic Terminology
   1.3 Solutions
   1.4 Computers and Differential Equations
   1.5 Euler’s Method
   1.6 The equation y′ = F (x)
   1.7 Existence theorems
   1.8 Systems of Equations
   1.9 The General Solution
   1.10 The Primitive
   1.11 Summary

2 First Order, First Degree Equations
   2.1 Differential Form Versus Standard Form
   2.2 Exact Equations
   2.3 Separable Variables
   2.4 First Order Homogeneous Equations
   2.5 A Theorem on Exactness
   2.6 About Integrating Factors
   2.7 The First Order Linear Differential Equation
   2.8 Other Methods
   2.9 Summary

3 The Laplace Transform
   3.1 Laplace Transform Preliminaries
   3.2 Basic Theorems
   3.3 The Inverse Transform
   3.4 Transforms and Differential Equations
   3.5 Partial Fractions
   3.6 Sufficient Conditions
   3.7 Convolution
   3.8 Useful Functions and Functionals
   3.9 Second Order Differential Equations
   3.10 Systems of Differential Equations
   3.11 Heaviside Expansion Formula
   3.12 Table of Laplace Transform Theorems
   3.13 Table of Laplace Transforms
   3.14 Doing Laplace Transforms
   3.15 Summary
List of Figures

1.1 Inverted Exponential
1.2 Isoclines
1.3 Euler’s Method
1.4 Simple Difference Method
1.5 Closed Rectangle
2.1 Connected Set
2.2 Level Curves
2.3 A Closed Path
3.1 Transform Pairs
3.2 The Heaviside Function
3.3 The Dirac Delta Function
3.4 Heaviside Function
3.5 Ramp Function
3.6 Shifted Heaviside
3.7 Linearly Transformed Heaviside
3.8 Linearly Transformed Ramp
3.9 Sawtooth Function
List of Tables

1.1 Euler’s Method Calculations
1.2 An Approximation to e^x
2.1 Table of Symbols
2.2 Table of Exact Differentials
2.3 Table of Common Abbreviations
3.1 Named Functions
3.2 Laplace Transform Pairs
3.3 Theorem Table
Chapter 1
Introduction
1.1 First Encounters
Differential equations are of fundamental importance in science and engineering because many physical laws and relations are described mathematically by such equations. Roughly speaking, by an ordinary differential equation we mean a relation between an independent variable x, a function y of x, and one or more of the derivatives y′, y′′, . . . , y(n) of y with respect to x. For example,
y′ = 1− y, (1.1)
y′′ + 4y = 3 sinx, (1.2)
y′′ + 2x(y′)5 = xy (1.3)
are ordinary differential equations. In calculus we learn how to find the successive, that is, higher and higher order, derivatives of a given function y(x). Notice that we use parentheses to distinguish y(n) from yn, the nth power of y. If the function y depends on two or more independent variables, say x1, . . . , xm, then the derivatives are partial derivatives and the equation is called a partial differential equation.
Figure 1.1: The Curve y = 1 − e^(−x)
Definition 1 A differential equation is an equation which involves derivatives of a dependent variable with respect to one or more independent variables.
As an example of a differential equation, if y = 1− e−x, then
y′ = e−x = 1− y; (1.4)
thus for this function the relation (1.1) is satisfied. We say that y = 1 − e^(−x) is a solution of Equation (1.1). However, this is not the only solution, for y = 1 − 2e^(−x) also satisfies Equation (1.1):

y′ = 2e^(−x) = 1 − y.

We note that the function y = 1 − e^(−x) is sometimes called the “inverted exponential.” Curves of this type are seen in biology, where a population is limited by some factor, e.g., the food supply, and in reliability, where one is concerned with component failures.
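Checks like the one just carried out by hand are easy to automate. The sketch below (written in Python for illustration; this book’s own programs are in BASIC, and the helper names are ours) approximates y′ by a central difference and confirms that both y = 1 − e^(−x) and y = 1 − 2e^(−x) satisfy y′ = 1 − y:

```python
import math

def check_solution(f, rhs, xs, tol=1e-6):
    """Check numerically that f'(x) agrees with rhs(x, f(x)) at each x,
    approximating the derivative by a central difference."""
    h = 1e-5
    for x in xs:
        deriv = (f(x + h) - f(x - h)) / (2 * h)
        if abs(deriv - rhs(x, f(x))) > tol:
            return False
    return True

# Both y = 1 - e^(-x) and y = 1 - 2e^(-x) satisfy y' = 1 - y.
rhs = lambda x, y: 1 - y
sol1 = lambda x: 1 - math.exp(-x)
sol2 = lambda x: 1 - 2 * math.exp(-x)
xs = [0.0, 0.5, 1.0, 2.0, 5.0]
print(check_solution(sol1, rhs, xs), check_solution(sol2, rhs, xs))
```

The same routine works for any first order equation y′ = F(x, y) once a candidate solution is in hand.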
The first fundamental problem is how to determine
all possible solutions of a given differential equation. Even for the simplest example of a differential equation
dy/dx = F (x) or, equivalently, y′ = F (x),
where F is an explicit function of x alone, this may not be easy. In fact, entire tables of integrals and symbolic mathematical software cannot determine every such solution in closed form. To solve this equation, one must find a function y = f(x) whose derivative is the given function F (x). The solution, y, is the so-called antiderivative or indefinite integral of F (x).

There is a second fundamental problem. For many differential equations it is very difficult to obtain explicit formulas for all the solutions. However, a general existence theorem guarantees that there are solutions; in fact, infinitely many. The problem is to determine properties of the solutions, or of some of the solutions, from the differential equation itself. Many properties can be found without explicit formulas for the solutions; we can, in fact, obtain numerical values for solutions to any accuracy desired. Accordingly, we are led to regard a differential equation itself as a sort of explicit formula describing a certain collection of functions. For example, we can show that all solutions of Equation (1.2) are given by
y = sin x + c1 cos 2x + c2 sin 2x
where c1 and c2 are arbitrary constants. Equation (1.2) itself, y′′ + 4y = 3 sin x, is another way of describing all these functions.

Historically, differential equations came into existence at the same time as differential and integral calculus. It is an artifact of the education system that they are studied
in colleges and universities after calculus. A single differential equation may describe many of the laws of nature in a unified and concise form, due to the fact that several different functions can each be a solution of one differential equation. Thus, it is not surprising that most laws of physics are in the form of differential equations. One noteworthy example is the formulation of Newton’s second law:
force = mass × acceleration.
Let m denote the mass of a particle constrained to move along a straight line. Then the motion is described by the differential equation

m d^2x/dt^2 = F (dx/dt, x, t),

where x is the distance from an origin, t is time, and F is the force, which depends on the velocity dx/dt, the position x, and the time t.

We will be studying the problem of obtaining a solution as well as the theoretical problems associated with existence, uniqueness, and the like. The goal is to obtain explicit expressions for the solutions wherever possible. Such solutions are also referred to as solutions in closed form. In the course of study, we will observe that solutions have different appearances, including, but not limited to, infinite series. We will also try to make qualitative statements about the solutions, such as trajectories, graphs, asymptotes, etc., directly from the differential equations.

It has been observed that, except for a few special cases, there is no simple way of solving ordinary differential equations. If the unknown function and its derivatives appear linearly in the differential equation,
then it is called a linear differential equation; otherwise, it is said to be nonlinear. Among the cases for which simple methods of solution are possible are the linear equations. This is fortunate because of the frequency with which they occur in scientific phenomena. In fact, many of the fundamental laws of science are formulated in terms of linear ordinary differential equations. Consequently, we will devote a majority of this book to such equations.
1.2 Basic Terminology
An ordinary differential equation of order n is an equation of the form
F (x, y, dy/dx, . . . , d^ny/dx^n) = 0. (1.5)
The above equation involves an unknown function y and one or more of its derivatives, y(k) = d^ky/dx^k, where k = 1, 2, . . . , n. Of course, if k = 0, we have y(0) ≡ y, and Equation (1.5) reduces to an algebraic equation in x and y. Moreover, y(1) ≡ y′. (Sometimes we write y(iv) instead of y′′′′ or y(4).) For example, observe that
xy′′ + 3y′ − 2y + xex = 0, (1.6)
(y′′′)2 − 4y′y′′′ + (y′′)3 = 0 (1.7)
are ordinary differential equations of orders 2 and 3, respectively. If a differential equation has the form of an algebraic equation of degree m in the highest derivative, then we
say that the differential equation is of degree m. For example, Equation (1.7) is of degree 2 in its highest derivative, y′′′, whereas Equation (1.6) is of degree 1. In an introductory course, one generally is restricted to equations of the first degree. Moreover, the leading coefficient of the highest order derivative y(n) is generally 1, so that we have an expression of the form
y(n) = F (x, y, . . . , y(n−1)).
A linear ordinary differential equation is a restriction of Equation (1.5) to the form
b0(x)y(n) + b1(x)y(n−1) + · · · + bn−1(x)y′ + bn(x)y = Q(x). (1.8)
This corresponds to an algebraic equation where the coefficients are replaced by functions and the powers are replaced by derivatives, for example:
a0t^n + a1t^(n−1) + · · · + an−1t + an = 0,
where t^0 ≡ 1 and the “driving function,” Q(x), is absorbed into the coefficient an. This phenomenon will be a recurring theme throughout all of differential equations, operator theory, transform calculus, and tensor analysis. Familiar, common notions will be extended by more general, complicated concepts. In turn, the more general concepts will reduce to basic ideas, and many of the theorems and facts can be generalized. Each of the Equations (1.1), (1.2), and (1.6) above is linear. In particular, we can write the coefficients
for Equation (1.6) as follows:
b0(x) := x, b1(x) := 3, b2(x) := −2, Q(x) := −xex.
It is true that every linear differential equation is always of degree 1. The converse does not hold, as one can see from Equation (1.3), which is of first degree (in its highest derivative, y′′) and nonlinear (in its first derivative). The word “ordinary” implies that there is just one independent variable. If we have a function of two or more independent variables, say U(x, y, z), it is possible to have partial derivatives. An equation such as
∂^2U/∂x^2 + ∂^2U/∂y^2 + ∂^2U/∂z^2 = 0
is called a partial differential equation. In this book, with a few notable exceptions, we will be concerned only with ordinary differential equations. This being the case, the word “ordinary” will generally be omitted.
1.3 Solutions
Definition 2 A function y = f(x), defined on some interval a < x < b (possibly infinite), is said to be a solution of the differential equation
F (x, y, y′, . . . , y(n)) = 0 (1.9)
if Equation (1.9) is identically satisfied whenever y and its derivatives are replaced by f(x) and its derivatives.
Of course, it is implied in the definition that if the differential equation defined by Equation (1.9) is of order n, then f(x) has at least n derivatives throughout the interval (a, b). Moreover, whatever is valid for a general solution also holds for a particular solution. One may observe that the function y = f(x) is defined on an open interval, a < x < b, and not on a closed interval, a ≤ x ≤ b. It is generally true that derivatives are studied on open sets whereas continuous functions (and integrals) are studied on closed sets.

As an example of a solution, suppose that each of c1 and c2 is a constant. The function, which is related to Equation (1.2),
y = c1 cos 2x+ c2 sin 2x
is a solution of the differential equation
y′′ + 4y = 0 (1.10)
since y′′ = −4c1 cos 2x− 4c2 sin 2x, so that
y′′ + 4y = (−4c1 cos 2x− 4c2 sin 2x) + 4 (c1 cos 2x+ c2 sin 2x) ≡ 0.
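This verification can also be automated with a finite-difference check. The sketch below (Python for illustration; the chapter’s own programs are in BASIC, and the helper names are ours) approximates y′′ by a central second difference and confirms that the residual y′′ + 4y is essentially zero for several choices of c1 and c2:

```python
import math

def second_diff(f, x, h=1e-4):
    # Central second difference: f''(x) ~ (f(x+h) - 2 f(x) + f(x-h)) / h^2
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

def residual(c1, c2, x):
    # Residual of y'' + 4y for y = c1 cos 2x + c2 sin 2x; exactly zero in theory.
    y = lambda t: c1 * math.cos(2 * t) + c2 * math.sin(2 * t)
    return second_diff(y, x) + 4 * y(x)

worst = max(abs(residual(c1, c2, x))
            for c1 in (-1.0, 0.0, 2.5)
            for c2 in (0.0, 1.0, -3.0)
            for x in (0.0, 0.7, 1.9, 3.1))
print(worst)  # small; limited only by the finite-difference error
```

The residual is not exactly zero in floating point, only as small as the truncation and round-off errors of the difference quotient allow.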
Many differential equations have solutions that can be concisely written as
y = f (x; c1, . . . , cn) , (1.11)
where each of c1, . . . , cn is an arbitrary constant. (Two or more of the c’s can assume the same value.) It is not always possible to let the c’s assume any real value for unrestricted values of x; however, for given values of the c’s and an admissible range of values of the independent variable, x, Equation (1.11) gives all of the solutions of Equation (1.9).
For example, all of the solutions of Equation (1.10) are given by
y = c1 cos 2x+ c2 sin 2x; (1.12)
the solution y = cos 2x is obtained when c1 = 1, c2 = 0. When a function (1.11) is obtained, providing all solutions, it is called the general solution. In general, the number of arbitrary constants will equal the order, n, as will be explained in Section 1.7. However, there may be exceptions. For example, the equation
(y′)2 + y2 = 0
has exactly one solution, namely y ≡ 0. In order to gain some experience with the preceding material, we will preview some of the material from Section 1.6. The differential equation in that section is just
y′ = F (x), (1.13)
where F (x) is defined and continuous for a ≤ x ≤ b. All possible solutions of Equation (1.13) come from the integral equation
y = ∫ F (x) dx + c1, a < x < b. (1.14)
We observe that the arbitrary constant of the differential equation is simply the so-called constant of integration from the indefinite integral. This generalizes to higher order differential equations. For example, suppose that
y′′ = 30x. (1.15)
Integrate y′′ twice to obtain
y′ = 15x2 + c1, y = 5x3 + c1x+ c2. (1.16)
Successive integration of Equation (1.15), which is of order two, has produced exactly two arbitrary constants, c1 and c2.
Problems
1. Classify each of the following differential equations
as to order, degree, and linearity:
(a) y′′ + 3y′ + 6y = 0,
(b) y′ + P (x)y = Q(x),
(c) (y′)2 = x3 − y,
(d) y′′ − 2(y′)2 + xy = 0,
(e) (y′)2 + 9xy′ − y2 = 0,
(f) x3y′′ − xy′ + 5y = 2x,
(g) y(vi) − y′′ = 0,
(h) sin(y′′) + ey′ = 1.
2. Integrate each of the following differential equations
and include constants of integration as arbitrary constants to
give the general solution:
(a) y′ = xex,
(b) y′′′ = 0,
(c) y′′ = x,
(d) y(n) = x,
(e) y′ = log x,
(f) y′ = 1/x.
3. Show that the function y = f(x) is a solution of
the given differential equations:
(a) y = ex, for y′′ − y = 0,
(b) y = cos 2x, for y(iv) + 4y′′ = 0,
(c) y = c1 cos 2x + c2 sin 2x (each of c1 and c2 is a constant), for y′′ + 4y = 0,
(d) y = sin (ex), for y′′ − y′ + e2xy = 0.
4. Consider the differential equation y′ = 3x2.
(a) Verify that y = x3 + c1 is the general solution;
(b) Determine c1
such that the solution curve passes through
(1, 3);
(c) Determine c1 so that the solution
satisfies the integral equation ∫_0^1 y(x) dx = 1/2.
5. Discuss how a differential equation is a
generalization of an algebraic equation.
6. Write a computer program to generate the data points
for Figure 1.1.
Solutions
1. Classify each of the following differential equations
as to order, degree, and linearity:
(a) y′′ + 3y′ + 6y = 0,
2nd order, 1st degree, linear;
(b) y′ + P (x)y = Q(x),
1st order, 1st degree, linear;
(c) (y′)2 = x3 − y,
1st order, 2nd degree, nonlinear;
(d) y′′ − 2(y′)2 + xy = 0,
2nd order, 1st degree, nonlinear;
(e) (y′)2 + 9xy′ − y2 = 0,
1st order, 2nd degree, nonlinear;
(f) x3y′′ − xy′ + 5y = 2x,
2nd order, 1st degree, linear;
(g) y(vi) − y′′ = 0,
6th order, 1st degree, linear;
(h) sin(y′′) + ey′ = 1,
2nd order, degree undefined, nonlinear.
2. Integrate each of the following differential equations
and include constants of integration as arbitrary constants to
give the general solution:
(a) y′ = xex, for all real x, y = xex − ex + c1;
(b) y′′′ = 0, for all real x, y = c1x2 + c2x+ c3;
(c) y′′ = x, for all real x, y = (1/6)x^3 + c1x + c2;
(d) y(n) = x, for all real x, y = x^(n+1)/(n + 1)! + c1x^(n−1) + c2x^(n−2) + · · · + cn;
(e) y′ = log x, for all x > 0, y = x log(x)− x+ c1;
(f) y′ = 1/x, for all x ≠ 0 and c1 > 0, y = log (c1|x|).
3. Show that the function y = f(x) is a solution of
the given differential equations:
(a) y = ex, for y′′ − y = 0,
y′ = ex, y′′ = ex,
y′′ − y = ex − ex = 0,
by substitution.
(b) y = cos 2x, for y(iv) + 4y′′ = 0,
y = cos 2x, y′ = −2 sin 2x, y′′ = −4 cos 2x,
y′′′ = 8 sin 2x, y(iv) = 16 cos 2x.
y(iv) + 4y′′ = 16 cos 2x − 16 cos 2x = 0,
by successive differentiation and substitution.
(c) y = c1 cos 2x + c2 sin 2x (each of c1 and c2 is a constant), for y′′ + 4y = 0,
y′ = −2c1 sin 2x+ 2c2 cos 2x, y′′ = −4c1 cos 2x− 4c2 sin 2x.
y′′ + 4y = −4c1 cos 2x− 4c2 sin 2x+ 4 (c1 cos 2x+ c2 sin 2x) = 0
by differentiation and substitution.
(d) y = sin (ex), for y′′ − y′ + e2xy = 0.
y′ = ex cos (ex) , y′′ = ex cos (ex)− e2x sin (ex) .
y′′ − y′ + e2xy =
ex cos (ex)− e2x sin (ex)− [ex cos (ex)] + e2x sin (ex) = 0
by direct substitution.
4. Consider the differential equation y′ = 3x2.
(a) Verify that y = x^3 + c1 is the general solution. Differentiate: dy/dx = y′ = 3x^2.
(b) Determine c1 such that the solution curve passes through (1, 3). Solve the algebraic equation 3 = 1^3 + c1 for c1. By inspection, c1 = 2.
(c) Determine c1 so that the solution satisfies the integral equation ∫_0^1 y(x) dx = 1/2.
Integrate y(x) = x^3 + c1 from 0 to 1 and require the definite integral to be equal to 1/2.

∫_0^1 (x^3 + c1) dx = x^4/4 |_0^1 + c1x |_0^1 = 1/2.

1/4 + c1 = 1/2, thus c1 = 1/4.
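As a numerical cross-check of part (c), a midpoint-rule quadrature (a Python sketch for illustration; the helper name is ours) confirms that c1 = 1/4 makes the integral equal to 1/2:

```python
def integral_0_1(c1, n=10_000):
    # Midpoint-rule approximation of the integral of x^3 + c1 over [0, 1].
    h = 1.0 / n
    return sum(((k + 0.5) * h) ** 3 + c1 for k in range(n)) * h

print(integral_0_1(0.25))  # approximately 0.5, as required
```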
5. Discuss how a differential equation is a
generalization of an algebraic equation. Starting with an equation
a0t^n + a1t^(n−1) + · · · + an−1t + an = 0,
observe that t^1 = t and t^0 = 1. This gives

a0t^n + a1t^(n−1) + · · · + an−1t^1 + ant^0 = 0.
Substitute bk(x) for ak for all k = 0, 1, . . . , n, and substitute y(k) for t^k, with the understanding that y(0) is the function y itself. (Also, y′ := y(1).)
b0(x)y(n) + b1(x)y(n−1) + · · · + bn−1(x)y′ + bn(x)y = 0.
The solution of an algebraic equation is a set of numbers (either real or complex). On the other hand, the solution set of a differential equation is made up of functions. The last step is to add a “driving function” Q(x), and the generalization is complete.
6. Write a computer program to generate the data points for Figure 1.1. The plotting area in the figure is approximately 300 points by 200 points, with 72 points per inch. With scaling by points, we can write a BASIC (Beginners’ All-purpose Symbolic Instruction Code) program.
100 FOR k = 0 TO 10 STEP .1
110 y = 200 * (1 - EXP(-k / 20))
120 PRINT k, y
130 NEXT k
140 FOR k = 10 TO 20 STEP .2
150 y = 200 * (1 - EXP(-k / 20))
160 PRINT k, y
170 NEXT k
180 FOR k = 20 TO 300
190 y = 200 * (1 - EXP(-k / 20))
200 PRINT k, y
210 NEXT k
220 END
1.4 Computers and Differential Equations
The success that digital computers enjoyed in solving algebraic equations, both numerically and symbolically, quickly extended itself to differential equations. This should not be surprising because differential equations can be considered as a generalization, in some sense, of algebraic equations. For example, each solution of the algebraic equation x^2 + 3x + 1 = 0 is simply a number; each solution of a differential equation is a function. Computers have also added plotting and interactive graphics capabilities to the
Figure 1.2: Isoclines of y′ = x^2 + y^2
traditional quantitative numerical tables. Modern computer software delivers an approximating function to any differential equation which cannot be solved explicitly. This approximating function enables the user to retrieve output in any desired format: tables, plots, graphs, etc. The geometric interpretation of first order differential equations has been rendered obsolete by this technology. The process of determining solutions by isoclines, once the darling of the numerical analyst, is now viewed as a fond and vain thing no longer worth the time to study. For purely historical reasons, however, we will touch on the topic of isoclines. The equation
(Computed y-values for the numerical solution discussed below, at x = −1.1, −1.0, . . . , 1.1: −0.0355234, 0.0749577, 0.166858, 0.243498, 0.30751, 0.361096, 0.406214, 0.444702, 0.478376, 0.509112, 0.53891, 0.569969, 0.604761, 0.646141, 0.697497, 0.76299, 0.847958, 0.95961, 1.10828, 1.30986, 1.59084, 2., 2.63986.)
y′ = F (x, y) (1.17)
has a very simple geometric interpretation. One can construct from the function F (x, y) above a very useful graphical method to qualitatively describe the solutions of Equation (1.17) without actually obtaining a solution of the form y = f(x). This is done by setting the function F (x, y) equal to a constant and plotting curves in the xy-plane called isoclines, or curves of constant slope. Each isocline is determined by the equation
F (x, y) = m (1.18)
where m is a fixed constant. For example, consider the first order differential equation
dy/dx = x^2 + y^2. (1.19)
The isoclines are concentric circles centered at the origin. (See Figure 1.2.) We use some modern computer software to obtain a solution through the point (1, 2). The software indicates that a closed form, or explicit, solution cannot be obtained. We settle for a numerical solution and get a table of values for y in terms of x. For this example we let x march from −1.1 to 1.1 with a step size of 0.1. The software also produces an outstanding graph, which is
available as a computer graphics file. This method can be helpful in getting a qualitative knowledge of a solution curve in engineering problems involving first order differential equations whose solution set cannot be expressed in terms of known functions or where finding a solution is mathematically intractable.
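The numerical solution described above can be reproduced with any standard marching scheme. Here is a sketch using the classical fourth-order Runge–Kutta method (Python for illustration; the original computation used commercial software, and the function names are ours), advancing the solution of y′ = x^2 + y^2 from the point (1, 2) to x = 1.1, where the computed value listed above is 2.63986:

```python
def rk4(F, x0, y0, x1, n=1000):
    """Classical fourth-order Runge-Kutta march from (x0, y0) to x = x1."""
    h = (x1 - x0) / n
    x, y = x0, y0
    for _ in range(n):
        k1 = F(x, y)
        k2 = F(x + h / 2, y + h * k1 / 2)
        k3 = F(x + h / 2, y + h * k2 / 2)
        k4 = F(x + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += h
    return y

F = lambda x, y: x**2 + y**2
# Solution through (1, 2), advanced to x = 1.1.
print(rk4(F, 1.0, 2.0, 1.1))
```

Marching with a negative step from (1, 2) toward x = −1.1 recovers the rest of the tabulated values.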
1.5 Euler’s Method
To obtain a numerical solution to a differential equation, each of the arbitrary constants must have a definite value. One might view the numerical techniques as a generalization of the concept of the definite integral ∫_a^b F (x) dx. On the other hand, closed form, symbolic solutions may be considered as a generalization of the so-called indefinite integral ∫ F (x) dx + c1.
From an indefinite integral and a particular point (x0, y0), with a < x0 < b, one may compute the particular solution, f(x). Then

f(x0) := y0 = ∫_{x0}^{x0} F (t) dt + c1

implies that c1 = y0, so that

f(x) = y0 + ∫_{x0}^{x} F (t) dt.
Using this construction, the solution curve { (x, y) | a < x < b, y = f(x) } passes through the point (x0, y0), and there are no arbitrary constants. This is referred to as an initial value problem, and the subsidiary conditions that determine the values of the arbitrary constants are known as initial values. By placing some restrictions on the function F one may ensure that the initial
value problem has a unique solution. The etymology of the expression “initial value” lies in the problem of determining the behavior of the motion of a particle moving along a constrained path and subject to known forces. Knowing the initial position and the initial velocity is sufficient to uniquely determine all future behavior of the particle. The simplest, non-trivial initial value problem is that of determining a particular solution to
y′ ≡ dy/dx = F (x, y), (1.20)
where each of F and Fy ≡ ∂F/∂y is a continuous function in some rectangular region R having (x0, y0) in its interior. For a given set of discrete points x0 < x1 < · · · < xn−1 < xn we wish to compute a table of approximate numerical values (xk, yk) to points on the solution curve (xk, f(xk)). There are several ways of doing this, the most elementary being a simple difference method known as Euler’s method. The idea is straightforward enough. Starting with (x0, y0) and a step of size ∆x := x1 − x0,
compute
∆y := F (x0, y0) ∆x (1.21)
then set x1 = x0 + ∆x and y1 := y0 + ∆y. Repeat this procedure, redefining the increment ∆x so that ∆x := x2 − x1 and computing
∆y := F (x1, y1) ∆x
to obtain
y2 := y1 + ∆y = y1 + F (x1, y1) · [x2 − x1] .
Figure 1.3: Solution Obtained by Euler’s Method
If we define y′k = F (xk, yk) and hk = xk − xk−1, for all appropriate values of the integer k, then
yk+1 = yk + y′k · hk. (1.22)
This is illustrated in Figure 1.3. If the points are equally spaced, so that ∆x = x1 − x0 = x2 − x1 = · · · = xn − xn−1, then we simply write h for hk, k = 1, 2, . . . , n − 1, and Equation (1.22) becomes

yk+1 = yk + h y′k, k = 0, 1, . . . , n − 1.

One may observe that ∆x is actually an approximation to the differential dx and that ∆y := F (xk, yk) ∆x is an approximation to dy evaluated at the point (xk, yk). Henceforth we will only
x | y | Slope at (x, y): F (x, y) | Increment of y: ∆y
x0 | y0 | y′0 := F (x0, y0) | F (x0, y0) ∆x
x0 + ∆x | y0 + ∆y | y′1 := F (x0 + ∆x, y0 + ∆y) | F (x1, y1) ∆x
x0 + 2∆x | y1 + ∆y | y′2 := F (x1 + ∆x, y1 + ∆y) | F (x2, y2) ∆x
x0 + 3∆x | y2 + ∆y | y′3 := F (x2 + ∆x, y2 + ∆y) | · · ·
· · · | · · · | · · · | · · ·
x0 + n∆x | yn−1 + ∆y | y′n := F (xn−1 + ∆x, yn−1 + ∆y) | F (xn, yn) ∆x

Table 1.1: Euler’s Method Calculations
consider equally spaced points xk. Now look at Table 1.1, with ∆x kept constant, for Euler’s method. The example

y′ = y

with x0 = 0, y0 = 1, ∆x = 0.1, is worked out in Table 1.2. Of course, the solution to y′ = y is simply the exponential function y = e^x. This function is well behaved and is defined for all real numbers. We will compute the values for k = 0, . . . , 5 using a BASIC (Beginners’ All-purpose Symbolic Instruction Code) program.
k | xk | yk | ∆y | e^(xk)
0 | 0 | 1 | 0.1 | 1
1 | 0.1 | 1.1 | 0.21 | 1.105171
2 | 0.2 | 1.21 | 0.331 | 1.221403
3 | 0.3 | 1.331 | 0.4641 | 1.349859
4 | 0.4 | 1.4641 | 0.61051 | 1.491825
5 | 0.5 | 1.61051 | 0.771561 | 1.648721

Table 1.2: An Approximation to e^x
100 x0 = 0 : y0 = 1
110 h = .1 : n = 5
120 x = x0 : y = y0
130 PRINT x, y, EXP(x)
140 FOR k = 1 TO n
150 yprime = y
160 x = x + h
170 y = y + yprime * h
180 PRINT x, y, EXP(x)
190 NEXT k
200 END
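For comparison, the same computation can be written in a few lines of a modern language. This sketch (Python for illustration; the function names are ours) implements Equation (1.22) directly and reproduces the yk column of Table 1.2:

```python
from math import exp

def euler(F, x0, y0, h, n):
    """March Euler's method, y_{k+1} = y_k + h * F(x_k, y_k), for n steps."""
    x, y = x0, y0
    rows = [(x, y, exp(x))]
    for _ in range(n):
        y += h * F(x, y)
        x += h
        rows.append((x, y, exp(x)))
    return rows

# y' = y with x0 = 0, y0 = 1, step 0.1, five steps (cf. Table 1.2).
table = euler(lambda x, y: y, 0.0, 1.0, 0.1, 5)
for x, y, exact in table:
    print(f"{x:.1f}  {y:.6f}  {exact:.6f}")
```

For this example each Euler step simply multiplies y by 1.1, so yk = (1.1)^k, and the final row gives y5 = 1.61051 against the exact e^0.5 = 1.648721.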
One might observe that the smaller the step size, that is, the smaller the value of ∆x, the closer the approximation of Euler’s method is to the exact solution. We went from x0 to xn by positive increments h := ∆x; proceeding by negative increments would have yielded a solution to the left. With modern computer software, there is little need to apply such techniques as Euler’s method; however, there is much to be learned about which technique is best employed for the computation of a solution to a given differential equation and which should be avoided. This topic, the numerical approximation of solution curves to differential equations, is part of numerical analysis. The computer algorithms and techniques found in commercial software and applied to various problems require special skills and knowledge beyond the scope of this introductory course; however, certain topics in the estimation of the error in computed versus exact solutions will be covered in a later chapter. The reader is encouraged to experiment with computer software which calculates and plots solution curves to various ordinary differential equations and systems of ordinary differential equations. Some of the graphics are truly rad and awesome!
1.6 The equation y′ = F (x)
The most familiar of all differential equations are those of the form
y′ = F (x), (1.23)
where F (x) is continuous for all x, a ≤ x ≤ b.Solutions of (1.23) are obtained viathe Fundamental Theorem of Calculus.
Theorem 1 (Fundamental Theorem of Calculus). Let F(x) be continuous in the interval a ≤ x ≤ b. For each real number c1, there exists a unique solution f(x) of (1.23) in the interval [a, b] such that f(a) := c1. The solution is given by the definite integral

f(x) = ∫_a^x F(t) dt + c1. (1.24)
Letting ∫ F(t) dt denote the indefinite integral of F, we can write Equation (1.24) as

y = ∫ F(x) dx + c1,  a ≤ x ≤ b.
Another form of the above is

y = ∫_{x0}^x F(t) dt + c1,  a ≤ x ≤ b, (1.25)
where a ≤ x0 ≤ b. In the above equation the indefinite integral has been replaced by the definite integral with limits of integration x0, x, and with t as a "dummy" variable of integration. Again, it is the Fundamental Theorem of Calculus that says

d/dx ∫_{x0}^x F(t) dt = F(x), (1.26)

so that the integral on the left of Equation (1.26) is indeed an indefinite integral of F(x). In applications, it is frequently required that y := y0 when x = x0. The constant c1 must be chosen in Equation (1.25) such that
y0 = ∫_{x0}^{x0} F(u) du + c1 = 0 + c1.

Therefore c1 := y0 and the solution becomes

y = y0 + ∫_{x0}^x F(t) dt,  a ≤ x ≤ b. (1.27)
Just because the indefinite integral∫F (t) dt
cannot be evaluated in closed form does not mean that
[Figure 1.4: Integration by Simple Differences — the rectangle of height F(x0) and width ∆x, erected over the interval from x0 to x0 + ∆x under the curve y = F(t).]
it does not have a meaning. In fact, several important functions in statistics (e.g., the normal distribution) and physics (e.g., the Fresnel integrals) are expressed in terms of definite integrals. Equation (1.27) itself can be used to compute the value of y for each x. Fix x and evaluate the definite integral by any of the standard approximation techniques: the trapezoidal rule, Simpson's rule, etc. The most elementary approximation is given by taking the sums of "inscribed rectangles" (also called an inner sum) as follows. Divide the closed interval [x0, x] = { t | x0 ≤ t ≤ x } into n equal parts, each of length ∆x, so that x = x0 + n∆x.
y = y0 + ∫_{x0}^x F(u) du ≈ y0 + F(x0) ∆x + F(x0 + ∆x) ∆x + · · · + F(x0 + (n − 1)∆x) ∆x, (1.28)
as suggested in Figure 1.4. Notice that when F(x, y) ≡ F(x), Equation (1.28) and Euler's method are precisely the same. We tabulate values for (xk, yk), k = 0, 1, . . . , n as follows.
x0                y0 = y0
x1 := x0 + ∆x     y1 = y0 + ∆y = y0 + F(x0) ∆x
x2 := x0 + 2∆x    y2 = y0 + F(x0) ∆x + F(x0 + ∆x) ∆x
  ...               ...
xn := x0 + n∆x    yn = y0 + F(x0) ∆x + · · · + F(x0 + (n − 1)∆x) ∆x
Euler's method is simply a generalization of a first approximation, the inner sum, to an integral. In fact, if one defines an outer sum in a similar manner, it is possible to classify all integrable functions as those whose inner and outer sums converge in the limit as ∆x → 0 to a common number. In some sense, the numerical solution of a differential equation is a generalization of numerical integration. The earliest attempts at solving differential equations were via hard-wired electrical circuits and mechanical devices. These machines, such as the differential analyzer, were analog computers and have been made obsolete by modern digital computers. The techniques used to solve single equations using Euler's method can also be used to solve systems of simultaneous first order differential equations. Such problems as reduction of order and solving simultaneous first order differential equations are a staple in every elementary book on ordinary differential equations.
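The coincidence of Euler's method with the inner (left-endpoint) rectangle sum for y′ = F(x) can be sketched in Python; the choices F(x) = x², x0 = 0, y0 = 0 are assumed example values, not from the text:

```python
def F(x):
    # an assumed example integrand; any continuous F works
    return x * x

x0, y0, h, n = 0.0, 0.0, 0.01, 100

# left-endpoint (inner) rectangle sum for y0 + integral of F over [0, 1]
riemann = y0 + sum(F(x0 + k * h) * h for k in range(n))

# Euler's method applied to y' = F(x)
x, y = x0, y0
for _ in range(n):
    y += F(x) * h   # same term F(x0 + k*h) * h, accumulated step by step
    x += h

# the two agree up to floating-point roundoff, and both
# approximate the integral of x^2 over [0, 1], which is 1/3
```

Refining h shrinks the gap to 1/3 linearly, exactly as the step-size remark above predicts.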
Problems
1. Use Euler’s method, with ∆x = 0.01,
to find the value of y when x = 1.5 on the solution curve
of y′ = −y2 + x2 such that y0 = 1 when x0 = 1.
Compare with a solution from a software package.
2. Use Euler’s method, with ∆x = 0.01,
to find the value of y when x = 1.5 on the solution curve
of y′ = −y2 + x such that y0 = 1 when x0 = 1.
Compare with a solution from a software package.
3. Use Euler's method, with ∆x = 0.1, to find the value of y when x = 1.0 on the solution curve of y′ = x + y such that y0 = 1 when x0 = 0. Compare with a solution from a software package.
Solutions
1. Use Euler’s method, with ∆x = 0.01,
to find the value of y when x = 1.5 on the solution curve
of y′ = −y2 + x2 such that y0 = 1 when x0 = 1.
Compare with a solution from a software package. We start with a BASIC program:
100 x0 = 1 : y0 = 1
110 h = .01 : n = 50
120 x = x0 : y = y0
130 PRINT x,y
140 FOR k = 1 TO n
150 yprime = x ∗ x − y ∗ y
160 x = x + h
170 y = y + yprime ∗ h
180 NEXT k
190 PRINT x, y
200 END
The BASIC program yields y(1.5) = 1.210649. The mathematical software package yields y(1.5) = 1.2129.
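A quick cross-check in Python (an addition, not part of the original text), using the same update order as the BASIC template — slope first, then step x, then step y:

```python
# Euler's method for y' = x^2 - y^2 from (x0, y0) = (1, 1),
# h = 0.01, 50 steps, reaching x = 1.5.
x, y = 1.0, 1.0
h, n = 0.01, 50
for _ in range(n):
    yprime = x * x - y * y   # slope at the current point
    x += h
    y += yprime * h
print(y)   # approximately 1.21, matching the tabulated value
```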
2. Use Euler’s method, with ∆x = 0.01,
to find the value of y when x = 1.5 on the solution curve
of y′ = −y2 + x such that y0 = 1 when x0 = 1.
Compare with a solution from a mathematical software package. We start with a BASIC program:
100 x0 = 1 : y0 = 1
110 h = .01 : n = 50
120 x = x0 : y = y0
130 PRINT x,y
140 FOR k = 1 TO n
150 yprime = x − y ∗ y
160 x = x + h
170 y = y + yprime ∗ h
180 NEXT k
190 PRINT x, y
200 END
The BASIC program yields y(1.5) = 1.09031. The mathematical software package yields y(1.5) = 1.09119.
3. Use Euler's method, with ∆x = 0.1, to find the value of y when x = 1.0 on the solution curve of y′ = x + y such that y0 = 1 when x0 = 0. Compare with a solution from a software package. We start with a BASIC program:
100 x0 = 0 : y0 = 1
110 h = .1 : n = 10
120 x = x0 : y = y0
130 PRINT x,y
140 FOR k = 1 TO n
150 yprime = x + y
160 x = x + h
170 y = y + yprime ∗ h
180 NEXT k
190 PRINT x, y
200 END
The BASIC program yields y(1.0) = 3.187485. The mathematical software package yields y(1.0) = 3.43658. From the professional software, we have the successive values
{{1.}, {1.11034}, {1.24281}, {1.39972},
{1.58365}, {1.79745}, {2.04424}, {2.32751},
{2.65109}, {3.01922}, {3.43658}}
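Note that with h = 0.1 and n = 10 steps the program advances from x0 = 0 to x = 1.0, which is where the tabulated value 3.43658 lives; the exact solution of y′ = x + y, y(0) = 1 is y = 2e^x − x − 1. A Python check (an addition, not part of the original text):

```python
import math

# Euler's method for y' = x + y from (0, 1), h = 0.1, n = 10 steps,
# marching from x0 = 0 to x = 1.0.
x, y = 0.0, 1.0
h, n = 0.1, 10
for _ in range(n):
    yprime = x + y
    x += h
    y += yprime * h

# exact solution: y = 2 e^x - x - 1, so y(1) = 2e - 2
exact = 2.0 * math.exp(1.0) - 1.0 - 1.0
print(y, exact)   # Euler gives about 3.187485; the exact value is about 3.436564
```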
1.7 Existence theorems
Not every differential equation has a solution. Look no further than the differential equation

(y′)² + y² + 1 = 0.

If there exists a number x for which y(x) is defined, then y′(x) cannot be a real number. However, from what was just done in Section 1.6, one might guess that the differential equation

dy/dx = F(x, y) (1.29)
has a unique solution y = f(x) which passes through a given initial point (x0, y0). Under appropriate assumptions concerning the function F(x, y), existence of such a solution can indeed be guaranteed. For the equation of the nth order,

y^(n) = F(x, y, y′, . . . , y^(n−1)), (1.30)

we expect that there will be a unique solution satisfying initial conditions:

y(x0) := y0, y′(x0) := y′0, . . . , y^(n−1)(x0) := y0^(n−1), (1.31)

where each of x0, y0, y′0, . . . , y0^(n−1) is a real number. The following fundamental theorem justifies these expectations.
Theorem 2 (Existence Theorem). Let F(x, y, y′, . . . , y^(n−1)) be a function of the variables x, y, y′, . . . , y^(n−1), defined and continuous when

|x − x0| < h, |y − y0| < h, . . . , |y^(n−1) − y0^(n−1)| < h,

and having continuous first partial derivatives with respect to y, y′, . . . , y^(n−1). Then there exists a solution y = f(x) of the differential equation (1.30), defined in some interval |x − x0| < h1, and satisfying the initial conditions (1.31). Furthermore, the solution is unique; that is, if y = g(x) is a second solution satisfying (1.31), then f(x) ≡ g(x) whenever both functions are defined.
A proof of this theorem is long and technical. It isincluded here because several of the concepts that appearin the proof are frequently referred to in physics,engineering, and computer science applications. The student
should at least read through the proof to become acquainted with the terminology and with the thread of the argument. In a course on advanced calculus, proofs such as this are commonplace and require memorization.
Proof:We will prove this theorem for first order equations,that is, for n = 1,
y′ = F (x, y).
Let M denote the maximum of |F(x, y)| for |x − x0| ≤ h/2 and |y − y0| ≤ h/2. We choose a smaller closed rectangle inside the original open rectangle to ensure that F(x, y) attains its maximum there. The closed rectangle may be denoted as the set
R = { (x, y) |x0 − h/2 ≤ x ≤ x0 + h/2, y0 − h/2 ≤ y ≤ y0 + h/2 }.
(We have to have a closed and bounded set. A function like g(x) = 1/x is continuous at every point in the open interval (0, 1) but it is unbounded!) Let

h1 = h / (2(M + 1)).
The fact that F(x, y) has a continuous first partial derivative with respect to y in R says that it satisfies a Lipschitz condition with respect to y in R. This condition is a technical piece of mathematics; however, it is found in every advanced textbook in science and engineering, and no course in differential equations would be complete without mentioning it. One should be encouraged by the fact that there are several cases in the study of shock waves and detonation where the
[Figure 1.5: The Closed Region R — the rectangle centered at (x0, y0), extending from x0 − h1 to x0 + h1 horizontally and from y0 − h/2 to y0 + h/2 vertically.]
function F(x, y) does not have a partial derivative with respect to y but does satisfy the Lipschitz condition. By a Lipschitz condition with respect to y, we mean that there exists a positive number K such that

|F(x, y1) − F(x, y2)| ≤ K |y1 − y2|

for all points (x, y1), (x, y2) in R. We will demonstrate in a solved problem that if Fy(x, y) ≡ ∂F(x, y)/∂y is continuous in R, then F satisfies a Lipschitz condition in R with respect to y. Solving an initial value problem is the same as finding a continuous solution to the integral equation
y(x) = y0 + ∫_{x0}^x F(t, y(t)) dt for |x − x0| ≤ h1.
The equivalence of the integral equation and the differential equation follows from the Fundamental Theorem of Calculus. For convenience, let x0 ≤ x ≤ x0 + h1. A symmetric proof will hold for the case x0 − h1 ≤ x ≤ x0. We apply Picard's method (of successive approximations). This method is important for historical purposes, and it is a must for any student who claims to have completed a course in differential equations. But it is not something that needs to be dwelt on. Just remember that prior to the advent of computers, mathematicians had to rely solely on such methods to solve differential equations numerically.
y0(x) := y0
y1(x) := y0 + ∫_{x0}^x F(t, y0(t)) dt
  . . .
yn(x) := y0 + ∫_{x0}^x F(t, y_{n−1}(t)) dt
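Picard's iterates can be carried out numerically. The sketch below (an illustration added here, not from the text) applies them to y′ = y, y(0) = 1 on [0, 1], where the iterates are the partial sums of the series for e^x; each integral is replaced by a trapezoid sum on a fine grid:

```python
h, N = 0.001, 1000            # grid on [0, 1]
y = [1.0] * (N + 1)           # y_0(x) := y0 = 1

for _ in range(25):           # Picard iterations y_n -> y_{n+1}
    new = [1.0] * (N + 1)
    acc = 0.0
    for i in range(1, N + 1):
        # trapezoid approximation of the integral of F(t, y_n(t)) = y_n(t)
        acc += 0.5 * (y[i - 1] + y[i]) * h
        new[i] = 1.0 + acc    # y_{n+1}(x_i) = y0 + integral up to x_i
    y = new

# by the factorial bound derived below, the iterates converge
# rapidly to the exact solution e^x; y[N] approximates e
print(y[N])
```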
for n = 1, 2, . . . . To prove existence, we have to show two things: (1) the sequence of functions {yn(x)} converges to a limit function y(x); and (2) the limit y(x) is continuous on the interval x0 − h1 ≤ x ≤ x0 + h1. If |y_{n−1}(x) − y0| ≤ h/2, then

|yn(x) − y0| ≤ ∫_{x0}^x |F(t, y_{n−1}(t))| dt ≤ (x − x0)M ≤ h1 M ≤ hM/(2(M + 1)) ≤ h/2.
It follows by induction that |yn(x)− y0| ≤ h/2 foreach positive integer n. From this, we observe that
the Lipschitz condition applies to F(x, yn(x)). Going back to the operational definition of Fy and applying the mean value theorem for derivatives, we notice that

|F(x, yn(x)) − F(x, y0(x))| ≤ M · |yn(x) − y0(x)|.
In order to show that the function sequence {yn(x)}, x0 ≤ x ≤ x0 + h1, converges uniformly to a continuous function y(x) such that

y(x) = y0 + ∫_{x0}^x F(t, y(t)) dt for |x − x0| ≤ h1,

we consider the infinite series

y0(x) + [y1(x) − y0(x)] + [y2(x) − y1(x)] + · · · + [yn(x) − y_{n−1}(x)] + · · · .

Note that yn(x) is the nth partial sum, since the series "telescopes" to a finite sum.
|y1(x) − y0(x)| ≤ M |x − x0|.

|y2(x) − y1(x)| ≤ ∫_{x0}^x |F(t, y1(t)) − F(t, y0(t))| dt;

applying the Lipschitz condition for F (all the yn(x) are in the rectangle R), one gets

|y2(x) − y1(x)| ≤ M ∫_{x0}^x |y1(t) − y0(t)| dt ≤ M · M ∫_{x0}^x (t − x0) dt ≤ M²(x − x0)²/2! ≤ M² h1²/2!.
By induction, one obtains

|yn − y_{n−1}| ≤ (M^n/n!) h1^n.
The infinite series

Σ_{n=1}^∞ (M^n/n!) h1^n

converges for all h1 ≥ 0. From the fact that e^x = Σ_{n=0}^∞ x^n/n!, we can even compute the limit: the sum is e^{M h1} − 1. Now we define y by passing to the limit of the sequence of approximations.
y(x) = lim_{n→∞} yn(x) for all x0 − h1 ≤ x ≤ x0 + h1
     = y0 + lim_{n→∞} ∫_{x0}^x F(t, y_{n−1}(t)) dt
     = y0 + ∫_{x0}^x lim_{n→∞} F(t, y_{n−1}(t)) dt
     = y0 + ∫_{x0}^x F(t, y(t)) dt.
The fact that F (x, y) is continuous and that {yn(x)}is a uniformly convergent sequence justifies interchangingthe limit and integral. This shows that there exists asolution. (∃ y such that y′ = F (x, y).)
To prove uniqueness of the solution y(x), we assume that w(x) is a solution of y′ = F(x, y) for x0 ≤ x ≤ x0 + h1. It must be true that
w(x) = y0 + ∫_{x0}^x F(t, w(t)) dt

and

|yn(x) − w(x)| ≤ ∫_{x0}^x |F(t, y_{n−1}(t)) − F(t, w(t))| dt.
As before, we apply induction to show that

|yn(x) − w(x)| ≤ M^(n+1) h1^n / (n + 1)!,  x0 ≤ x ≤ x0 + h1.
Passing to the limit as n→ +∞,
|y(x)− w(x)| ≤ 0.
Therefore, y(x) ≡ w(x) for eachx ∈ [x0, x0 + h1]. This provesthat the solution is unique. ♦
Example.
Consider the differential equation
y′′ = e^{2x}. (1.32)

Integrate twice to obtain

y′ = (1/2) e^{2x} + c1,
y = (1/4) e^{2x} + c1 x + c2. (1.33)
Suppose that each of y0, y′0 is a real number, so that the initial values are

x0 := 0, y(0) := y0, and y′(0) := y′0.
From the above initial conditions, it is easy to compute definite values for the arbitrary constants c1, c2:

c1 = y′0 − 1/2,  c2 = y0 − 1/4.
Thus, the solution to the initial value problem is

y = (1/4) e^{2x} + (y′0 − 1/2) x + (y0 − 1/4). (1.34)
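A numerical spot-check of (1.34) can be done with finite differences; this is an addition to the text, and the sample values y0 = 2, y′0 = 3 are assumed:

```python
import math

y0, y0p = 2.0, 3.0   # assumed initial data y(0) = y0, y'(0) = y0p

def f(x):
    # candidate solution (1.34)
    return 0.25 * math.exp(2 * x) + (y0p - 0.5) * x + (y0 - 0.25)

# initial conditions
assert abs(f(0.0) - y0) < 1e-12
eps = 1e-6
d1 = (f(eps) - f(-eps)) / (2 * eps)          # central difference for y'(0)
assert abs(d1 - y0p) < 1e-5

# the differential equation y'' = e^{2x}, checked at an arbitrary point
x, eps2 = 0.7, 1e-5
d2 = (f(x + eps2) - 2 * f(x) + f(x - eps2)) / eps2**2
assert abs(d2 - math.exp(2 * x)) < 1e-3
```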
For the initial value problem for F(x, y, y′, . . . , y^(n)) = 0, we require that

y(x0) := y0, y′(x0) := y′0, . . . , y^(n−1)(x0) := y0^(n−1),

where each of x0, y0, y′0, . . . , y0^(n−1) is a real number. This will determine, in general, definite values for each of the n arbitrary constants c1, c2, . . . , cn. Recall from algebra that the equation of a straight line, y = mx + b (m is the slope and b is the so-called y-intercept), can be determined in several ways: from two points (x0, y0), (x1, y1) (where x0 ≠ x1); from one point and the slope, (x0, y0) and the real number m; by the slope-intercept method; etc. Likewise the n arbitrary constants arising from the solution of an nth order ordinary differential equation can be determined by other means than by initial values. One method of determining the constants relies on data from more than one value of the independent variable, x. These are called boundary conditions as opposed to initial conditions, because they arose in physical applications, of thermodynamics and of vibrating strings, where measurements could only be made on the edges, or boundaries, of the object being studied. As an example of the use of boundary conditions, consider Equation (1.33) and require that
y(0) = y0 and y(1) = y1. (1.35)
Solving for c1, c2 yields

c2 = y0 − 1/4,  c1 = y1 − y0 + (1/4)(1 − e²).
From the above application of boundary values, onemight be led to assume that for n arbitrary
constants one need only apply n boundary valuesto solve the problem. This is not true. Take, forinstance, the differential equation
y′′ = −y + 2 cosx, (1.36)
whose general solution is
y = c1 cos x + c2 sin x + x sin x.
If we try to apply the boundary conditionsto Equation (1.36)
y(0) = 0 and y(π) = 1
we have a problem. The arbitrary constant c1 must be equal both to 0 and to −1. Clearly additional conditions must be applied for boundary value problems to make sense, that is, to be consistent. Indeed, this is an old problem. The theorems for boundary value problems are complicated. Some of the more elementary boundary value problems will be dealt with in a later chapter.
1.8 Systems of Equations
There are applications where two or more functions sharea common independent variable. In a typical problem, one mightencounter a system of equations as follows
y′(x) = y(x) + z(x),  z′(x) = 2y(x), (1.37)
where each of y and z is a function of x alone.The solution to the above system of equations is
y(x) = c1 e^{2x} + c2 e^{−x},  z(x) = c1 e^{2x} − 2 c2 e^{−x},
as can be verified by differentiation (with respect to x) and substitution. In later chapters we will learn routine methods for solving such equations. This is similar to situations encountered in algebra, and it won't be surprising that many of the techniques from algebra will carry over. One particularly important feature of systems of differential equations is their application to a single ordinary differential equation of order two or higher. One can reduce a second order equation y′′ = F(x, y, y′) to two simultaneous first order equations by the following substitution:
w := dy/dx,  dw/dx := F(x, y, w).
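The system (1.37) can itself be attacked with Euler's method, stepping both unknowns at once. A Python sketch (an addition; the choice c1 = c2 = 1, i.e. y(0) = 2, z(0) = −1, is an assumed test case):

```python
import math

h, n = 1e-4, 10000       # march from x = 0 to x = 1
y, z = 2.0, -1.0         # c1 = c2 = 1 gives y = e^{2x} + e^{-x}, z = e^{2x} - 2e^{-x}

for _ in range(n):
    dy = y + z           # y' = y + z
    dz = 2.0 * y         # z' = 2y
    y += h * dy          # advance both unknowns with the same step
    z += h * dz

exact_y = math.exp(2.0) + math.exp(-1.0)
print(y, exact_y)        # the Euler value lands close to e^2 + e^{-1}
```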
The above is a special case of a more general concept called the method of reduction of order. It will also be considered in more detail in a later chapter. There is also one instance where an equation of higher degree can be solved by inspection. Consider the Clairaut equation

y = x y′ + G(y′),

where G(·) is an arbitrary function. A solution is found immediately: a one-parameter family of curves

y = c1 x + G(c1).
We will state an analog to the existence theorem in Section 1.7 for systems of n first order equations. Let

dy1/dx = F1(x, y1, y2, . . . , yn),
dy2/dx = F2(x, y1, y2, . . . , yn), (1.38)
  . . .
dyn/dx = Fn(x, y1, y2, . . . , yn),
be n simultaneous equations in the nunknown functions y1, y2, . . . , yn of x.Stating the theorem for two unknowns, y, z,is sufficient. It is clear how the theoremwill generalize for n unknown functions.
dy/dx = F(x, y, z),  dz/dx = G(x, y, z). (1.39)
Theorem 3 Suppose R is a rectangular region. If each of the functions F and G is continuous in R and if there exists h > 0 such that each has continuous first partial derivatives Fy, Fz, Gy, Gz with respect to y and z for |x − x0| < h, |y − y0| < h, |z − z0| < h, then there exists an h1 > 0 and a unique solution

y = f(x), z = g(x),  |x − x0| < h1, (1.40)

which passes through the point (x0, y0, z0).
A proof of this theorem can be found in many advanced textbooks on ordinary differential equations. (See [15] in the bibliography.) The previous theorem, Theorem 2, for one unknown function, is a special case of this theorem, because the first order equation (1.29) is simply the system (1.38) with n = 1. Higher order systems are also possible, such as:
d²x/dt² = F1(t, x, y, dx/dt, dy/dt),
d²y/dt² = F2(t, x, y, dx/dt, dy/dt). (1.41)
Recall Newton's law (force = mass × acceleration). When this formula is written for a system of particles, second order equations (1.41) occur. When x, y are each functions of a parameter t (time), one typically denotes x′(t), y′(t) as ẋ, ẏ and x′′(t), y′′(t) as ẍ, ÿ. There exists a unique solution for (1.41) satisfying

t = t0, x(t0) := x0, y(t0) := y0, ẋ(t0) := ẋ0, ẏ(t0) := ẏ0, (1.42)

where each of t0, x0, ẋ0, y0, and ẏ0 is a real number. For a particle moving along a constrained path and subject to known forces, one can determine the position and velocity at any time t ≥ t0, given the initial position x0, y0 and the initial velocity ẋ0, ẏ0. Electrical circuits obey similar laws.
1.9 The General Solution
It is important from both a theoretical and a practical perspective to derive one formula for a given differential equation which contains all possible solutions. Such a formula is called the general solution of the differential equation. From the existence theorem, found in Section 1.7, we can be assured that, under very general conditions, solutions do exist. Notice that y = c1 e^{3x} + c2 e^{−3x} is the general solution to y′′ − 9y = 0. The fact that y = c1 e^{3x} + c2 e^{−3x} is a solution is readily verified by successive differentiation. The fact that it comprises all the solutions (two, in fact, added together or superimposed) can be determined by calculation. We take the most general set of initial conditions, namely, for x := x0, we set y(x0) := y0, y′(x0) := y′0, for any real number set {x0, y0, y′0}. Without loss of generality, let x0 := 0.

c1 + c2 = y0,  3c1 − 3c2 = y′0,
c1 = y0 − c2,  3y0 − 3c2 − 3c2 = y′0,
6c2 = 3y0 − y′0,  c2 = (1/2) y0 − (1/6) y′0,

and c1 = (1/2) y0 + (1/6) y′0. Thus c1, c2 are uniquely determined for each choice of real numbers y0, y′0. Therefore, y = c1 e^{3x} + c2 e^{−3x} is the general solution of y′′ − 9y = 0. To verify that y = f(x; c1, c2) is a general solution for the second order differential equation
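The two linear equations for c1, c2 can be checked mechanically; this is a small addition, and the sample values y0 = 5, y′0 = −4 are assumed:

```python
y0, y0p = 5.0, -4.0                 # assumed initial data y(0), y'(0)

c1 = 0.5 * y0 + y0p / 6.0           # c1 = (1/2) y0 + (1/6) y'0
c2 = 0.5 * y0 - y0p / 6.0           # c2 = (1/2) y0 - (1/6) y'0

assert abs((c1 + c2) - y0) < 1e-12           # y(0)  = c1 + c2
assert abs((3 * c1 - 3 * c2) - y0p) < 1e-12  # y'(0) = 3 c1 - 3 c2
```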
y′′ = F(x, y, y′), (1.43)

one has to take successive derivatives and substitute them into Equation (1.43), ensuring that it is an identity, and one has to ensure that for every real number set {x0, y0, y′0} for which F has continuous first partial derivatives there exist numbers c1, c2 such that

y0 = f(x0; c1, c2) and y′0 = fx(x0; c1, c2).
The same procedure applies to systems of equations
as well as to higher order differential equations. The existence theorem cannot be weakened in its assumptions on the continuity of F and its partial derivatives. It should be stressed that the existence theorem is applicable only when the continuity conditions are satisfied. If 0 < β < 1 then the function F in the equation

y′ = F(x, y) = (1 − β)^{−1} y^β (1.44)

has a partial derivative Fy(x, y) = β(1 − β)^{−1} y^{β−1}, with −1 < β − 1 < 0, which has a discontinuity
at y = 0. We refer to the points at which y = 0 as singular points. Any solution curve containing a singular point must be dealt with as an exceptional case. Using the relations

β = (α − 1)/α and α = 1/(1 − β),

we will observe that

y = (x − c1)^α = (x − c1)^{1/(1−β)} (1.45)
is a solution to Equation (1.44) whenever y ≠ 0. This troublesome differential equation has other properties. Equation (1.45) does provide a solution through each point (x, 0); however, the trivial solution (y ≡ 0) also passes through each singular point, and it is not of the form (1.45). There are other, pathological examples which will be studied in later chapters. In physical applications, however, such singular situations rarely pose a problem. However, it is important in the setting up of an equation to be aware that such occurrences bespeak the need for
additional investigation of the underlyingassumption in the mathematical model.
1.10 The Primitive
Suppose that each of c1, c2 is a fixed, but arbitrary, constant and f(x; c1, c2) is a twice continuously differentiable function defined on an open interval a < x < b (possibly infinite). The problem is to construct a differential equation in terms of x, y, y′, and y′′ such that (1) f is a solution of F(x, y, y′, y′′) = 0 and (2) neither c1 nor c2 is present in F. In this case, the function f is usually referred to as the primitive of the differential equation F(x, y, y′, y′′) = 0. We will begin with the example
y = c1 + c2x2. (1.46)
Differentiating twice, we obtain
y′ = 2c2x,
y′′ = 2c2.
We eliminate the constants c1, c2 from Equation(1.46) by substitution
y′ = 2(
12y′′)x, xy′′ − y′ = 0.
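Since y′ = 2c2 x and y′′ = 2c2 involve neither c1 nor (after the elimination) c2, the result can be verified directly; a small illustration with assumed constants, added here:

```python
c1, c2 = 3.0, -2.0          # arbitrary assumed constants in y = c1 + c2 x^2

for x in (0.5, 1.0, 2.0):
    yp = 2.0 * c2 * x       # y'  = 2 c2 x
    ypp = 2.0 * c2          # y'' = 2 c2
    # x y'' - y' = 0 holds identically, independent of c1 and c2
    assert x * ypp - yp == 0.0
```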
Now, let’s start with xy′′ − y′ = 0 and see if we can recoverthe original equation. Set v = y′ and v′ = y′′ = dv/dx.The “reduced” equation becomes xv′ = v.
x dv/dx = v,  dv/v = dx/x,  ∫ dv/v = ∫ dx/x + c0,  ln v = ln x + c0.
Now we set c0 = ln(2c2). Notice that for each real number c0 there exists exactly one positive real number c2 such that c0 = ln(2c2). (The reason for choosing 2c2 rather than c2 will be clear momentarily.)

ln(v) = ln(2 c2 x),  v = dy/dx = 2 c2 x.

Integrating again, we obtain

y = c2 x² + c1,
the original equation. It would be fortunate if this procedure could be generalized to a function f(x; c1, c2, . . . , cn), where each of c1, . . . , cn is an arbitrary constant. However, a variety of problems may occur. (Viz., x y′ = 2y, with a general solution y(x) = c1 x² for x ≥ 0 and y(x) = c2 x² for x ≤ 0.) On the other hand, consider the Clairaut equation y = x y′ + G(y′). Despite its complicated appearance, its primitive contains exactly one arbitrary constant, c1. Elimination of the ck's by successive differentiation and substitution may prove to be mathematically intractable. The end result,
F(x, y, . . . , y^(n)) = 0,

may not be expressible as

y^(n) = F(x, y, . . . , y^(n−1)),
for instance. This leads to the next topic: the function

f(x, y; c1, . . . , cn) = 0

may define an implicit relationship between x and y. Frequently, though not always, differentiation and
substitution will generate a suitable differentialequation from the implicit relationship betweenx, y, and the arbitrary constant set { c1, . . . , cn }.One important application of a primitive in which thefunction f(x, y; c1) = 0 implicitly relates x and yis that of isotherms (curves of constant temperature)or isobars (curves of constant pressure). Isothermsand isobars are examples of the mathematical notionknown as level curves.These are equations of the form
f(x, y) = c1. (1.47)
If the temperature, θ, is given by the equation θ = x² − y², and it is held constant, θ = constant, we differentiate to obtain

dθ/dx = 0 = 2x − 2y (dy/dx) (1.48)
or
yy′ = x.
Notice that we may express Equation (1.48)as
x dx = y dy,
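Integrating x dx = y dy recovers the level curves x² − y² = constant. A numeric check (an addition to the text; θ = 1 and the y > 0 branch are assumed) that these curves satisfy y y′ = x:

```python
import math

theta = 1.0                                   # assumed constant temperature
f = lambda x: math.sqrt(x * x - theta)        # y > 0 branch of x^2 - y^2 = theta

eps = 1e-6
for x in (1.5, 2.0, 3.0):
    y = f(x)
    yp = (f(x + eps) - f(x - eps)) / (2 * eps)   # numerical derivative y'
    assert abs(y * yp - x) < 1e-4                # y y' = x along the level curve
```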
where the variables are treated independently.Here we are taking differentialsrather than derivatives.The notion of a differential is closely related tothe concept of the Dirac delta function. Both arelinear functionals enjoying certain properties andobeying certain rules.This notioncan be generalized into an equation of the form
Q(x, y) dy + P (x, y) dx = 0, (1.49)
a differential equation of first order. Suchexpressions as Equation (1.49) arepart of the important topic of exact differentialequations, to be studied later.
1.11 Summary
Differential equations are classified by type, order, degree, and linearity. There are two types of differential equations: ordinary and partial. The order of a differential equation is that of the highest order derivative which occurs in the equation. The degree of the equation is the exponent of its highest order derivative. Ordinary differential equations are either linear or nonlinear in their unknown functions y^(k)(x), k = 0, 1, 2, . . . , n. Many problems in physics and engineering, when reified mathematically, give rise to differential equations. Solutions of differential equations are functions satisfying (1.9), and are either general solutions with arbitrary constants or particular solutions having initial values or boundary values (or a combination of both initial conditions and boundary conditions). Modern software and computing devices permit the explicit solution of many differential equations and allow scientists and engineers to generate interpolating functions, plots, and tabular data for a larger class of equations F(x, y, y′, . . . , y^(n)) = 0.
The existence theorem answers the vital question about the conditions under which a solution exists. Two technical topics of historical interest and theoretical importance, the Lipschitz condition and Picard's method of successive approximations, are introduced. Euler's method tells how to manually compute numerical solutions. Any degree of accuracy can be obtained by reducing the step size ∆x in Equation (1.21). At this point, having carefully read this chapter, one might assume an adequate education on the subject of differential equations. There are some good reasons for not stopping here. There are certain elementary procedures which obtain explicit solutions to some widely used differential equations. These procedures are central for setting up physics and engineering models. Moreover, many books on physics and engineering assume that the reader has some exposure to traditional approaches to solving special kinds of equations. For a larger class of differential equations, qualitative information on a solution for a given differential equation may be readily available even though explicit or numerical solutions may be difficult to obtain. The form of the equation itself may yield important conclusions as to the nature of its solution(s). Finally, for the general equation, (1.5), there are alternatives to Euler's method (the so-called method of step-by-step integration) which converge more rapidly, require less computation, and exhibit greater stability.
Problems
1. Solve the following initial value problems (IVPs):
(a) y′ = ex, with the initial condition y(0) = 2;
(b) y′ = 2y, with the initial condition y(0) = y0;
(c) y′′ = cos x, with the initial conditions
y(0) = 0, y′(0) = 1.
2. Solve the following boundary value problems (BVPs):
(a) y′′ = ex, with the boundary conditions y(0) = 0,
y(1) = e;
(b) y′′′ = 0, with the boundary conditions y(0) = 1,
y(1) = 0, y(2) = 1.
3. Verify that y = c1 cos 2x+ c2 sin 2x is a solution
of the differential equation
y′′ + 4y = 0.
Show that it is the general solution.
4. Solve the following BVPs for y′′ + 4y = 0.
Some may be impossible—if so, explain why.
(a) y(0) = 0, y(π/4) = 1;
(b) y(0) = 1, y(π/2) = 0;
(c) y(0) = 1, y(π/4) = 0.
5. Apply the existence theorem to determine
the domains (of definition) of each of the
following differential equations:
(a) y′ = y2/x;
(b) y′ = y/(x2 + y2);
(c) y′ = Arctan(x).
6. Compute differential equations of the form F(x, y, y′, . . . , y^(n)) = 0 for each of the following primitives:
(a) y = c1x + c1³;
(b) y = c1x² + c2x + c3;
Solutions
1. Solve the following initial value problems (IVPs):
(a) y′ = ex, with the initial condition y(0) = 2. We begin by integrating ∫ ex dx:
y(x) = ex + c1,
where c1 is an arbitrary constant.
Solve for c1 when x = 0.
y(0) = 2 = e0 + c1 = 1 + c1.
Thus we have c1 := 1 and
y(x) = ex + 1.
(b) y′ = 2y, with the initial condition y(0) = y0. Separate variables and integrate to obtain

ln(y(x)) = 2x + c0.

Exponentiate both sides of the above to obtain

y = e^{2x + c0}.

Let c1 = e^{c0}, so that y(x) = c1 e^{2x}. When x = 0, y(0) = y0, so c1 := y0 and

y(x) = y0 e^{2x}.
(c) y′′ = cos x, with the initial conditions y(0) = 0, y′(0) = 1. Again, we integrate, twice this time, to get:

y(x) = − cos(x) + c1x + c2.

Set x = 0 and we have 0 = −1 + c2, so that c2 = 1. Now we determine c1. Differentiate y to obtain y′(x) = sin(x) + c1 and require that y′(0) = 1: 1 = sin(0) + c1, so c1 = 1. The resulting solution is:

y(x) = − cos(x) + x + 1.
2. Solve the following boundary value problems (BVPs):
(a) y′′ = ex, with the boundary conditions y(0) = 0,
y(1) = e;
We integrate to obtain the equation y(x) = ex + c1x + c2. When x = 0, e^0 = 1, and 0 = 1 + c1 · 0 + c2. Thus c2 := −1. We now solve at y(1) = e.

y(1) = e = e^1 + c1 · 1 − 1.

Since e^1 = e, we have c1 = 1 and the solution is

y(x) = ex + x − 1.
(b) y′′′ = 0, with the boundary conditions y(0) = 1,
y(1) = 0, y(2) = 1.
Solving by repeated integration, we have
y(x) = c1x² + c2x + c3.
With the boundary conditions, we have:
1 = c3, 0 = c1 + c2 + c3, 1 = 4c1 + 2c2 + c3.
Solving for c1, c2, c3, we have:
c1 := 1, c2 := −2, c3 := 1.

The solution becomes

y(x) = x² − 2x + 1 = (x − 1)².
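A direct check of the three boundary conditions (an addition to the text) for the candidate y(x) = x² − 2x + 1 = (x − 1)²:

```python
def y(x):
    # candidate solution of y''' = 0 fitted to the three boundary values
    return x * x - 2.0 * x + 1.0

assert y(0.0) == 1.0   # y(0) = 1
assert y(1.0) == 0.0   # y(1) = 0
assert y(2.0) == 1.0   # y(2) = 1
# y is a quadratic, so y''' = 0 identically
```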
3. Verify that y = c1 cos 2x+ c2 sin 2x is a solution
of the differential equation
y′′ + 4y = 0.
Show that it is the general solution.
Differentiating twice yields
y′(x) = −2c1 sin 2x+ 2c2 cos 2x,
y′′(x) = −4c1 cos 2x− 4c2 sin 2x.
Substitution shows that y = c1 cos 2x+ c2 sin 2x
is a solution to y′′ + 4y = 0.
y′′ + 4y = −4c1 cos 2x− 4c2 sin 2x+ 4 (c1 cos 2x+ c2 sin 2x) ≡ 0.
Consider a number set {y0, y′0}. If, for x = 0, y(0) := y0 and y′(0) := y′0, then c1 := y0 and c2 := y′0/2. Therefore, by the existence theorem applied to the initial value problem, the problem is done.
4. Solve the following BVPs for y′′ + 4y = 0.
Some may be impossible—if so, explain why.
(a) y(0) = 0, y(π/4) = 1;
The general solution is
y(x) = c1 cos(2x) + c2 sin(2x).
Substituting, we have

y(0) = 0 = c1 cos(2 · 0) + 0, so c1 = 0;
y(π/4) = 1 = 0 + c2 sin(π/2), so c2 = 1.

Thus y(x) = sin(2x).
(b) y(0) = 1, y(π/2) = 0;

Again, by substitution,

y(0) = 1 = c1 cos(2 · 0) + 0, so c1 = 1;
y(π/2) = 0 = c1 cos(π) + 0, so c1 = 0.

If c1 is a real number and c1 = 1, then c1 ≠ 0. This boundary value condition is impossible.
(c) y(0) = 1, y(π/4) = 0.

Again, we substitute directly:

y(0) = 1 = c1 cos(2 · 0) + 0, so c1 = 1;
y(π/4) = 0 = 0 + c2 sin(π/2), so c2 = 0.

This boundary value problem is consistent: the solution is y(x) = cos(2x).
5. Apply the existence theorem to determine
the domains (of definition) of each of the
following differential equations:
(a) y′ = y2/x;
We first have to look at F(x, y) = y²/x and take its partial derivative with respect to y:

∂F/∂y = 2y/x.
This derivative, ∂F/∂y is continuous
everywhere except when x = 0. Thus the domain
(of definition) of the differential equation
is (−∞, 0) ∪ (0,∞).
(b) y′ = y/(x² + y²);

We first have to look at F(x, y) = y/(x² + y²) and take its partial derivative with respect to y:

∂F/∂y = (x² − y²)/(x² + y²)².
This derivative, ∂F/∂y, is continuous everywhere except when (x, y) = (0, 0). Thus the solution curves of this differential equation exist for all (x, y) not passing through (0, 0).
(c) y′ = Arctan(x).

We first have to look at F(x, y) = Arctan(x) and take its partial derivative with respect to y:

∂F/∂y = 0.

This derivative, ∂F/∂y, is continuous everywhere; moreover, the function F(x, y) = Arctan(x) is defined for all x (its values lying between −π/2 and π/2). Thus the solution curves of this differential equation exist for all (x, y).
6. Compute differential equations of the
form F(x, y, y′, …, y^(n)) = 0 for
each of the following primitives:
(a) y = c1x + c1³;
This is Clairaut's equation.
Just make the substitution c1 := y′ to get
y = xy′ + (y′)³.
(b) y = c1x2 + c2x+ c3;
Differentiate successively three times.
The result is
y′′′ = 0,
a relation F(x, y, y′, y′′, y′′′) = 0 free of the
arbitrary constants and of the lowest
possible order.
Chapter 2
First Order, First Degree Equations
2.1 Differential Form Versus Standard Form
In Section 1.2 we defined a differential equation of the first order, first degree to be
M(x, y) + N(x, y) y′ = 0. (2.1)
If N(x, y) ≠ 0, this expression (that is, Equation (2.1)) is equivalent to
y′ = F(x, y), F(x, y) = −M(x, y)/N(x, y). (2.2)
Recall from the existence theorem that if each of F(x, y) and Fy(x, y) ≡ ∂F(x, y)/∂y is continuous in some rectangular region R, then Equation (2.2) has exactly one solution passing through each interior point (x0, y0) of R. By a region R in the real number plane, we mean a nonempty, open, connected set.

We will introduce some notation from symbolic logic at this point because the reader is likely to encounter it elsewhere. The so-called existential quantifier (∃) stands for “there exists” and the so-called
universal quantifier (∀) stands for “for all” or “for every.” Of course, the symbol (∅) stands for the empty set, the symbol (∈) stands for “is an element of” or “belongs to,” and (∉) stands for “is not in” or “is not an element of.” Symbolically, we write “R is not the empty set” as “R ≠ ∅.”

R is an open set means that ∀ (x, y) ∈ R ∃ r > 0 such that if B((x, y), r) is a ball having center at (x, y) and radius r, then B((x, y), r) is contained in R. The statement that B is a ball is then written, symbolically,
B((x, y), r) = { (s, t) | √((s − x)² + (t − y)²) = ‖(s, t) − (x, y)‖ < r }.
The statement that R is a connected set means that if (x, y) ∈ R and (x1, y1) ∈ R, then ∃ a broken line of finitely many linear line intervals, each of which is contained completely within R, connecting (x, y) and (x1, y1). (See Figure 2.1.) When we say that (x0, y0) is an interior point, we mean that there exists a positive number r such that the open ball centered at (x0, y0) and having radius r > 0 is totally contained in the region R.

The case when N(x, y) = 0 for some (x1, y1) must be considered separately. We refer to such points, (x1, y1), as singular points of F(x, y). Recall further that in Section 1.10 we discussed the differentials dx, dy. In terms of differentials, Equation (2.1) becomes
M(x, y) dx+N(x, y) dy = 0. (2.3)
Whenever N(x, y) ≠ 0, we can consider Equation (2.3) to be nothing more than Equation (2.2). On the other hand, if M(x, y) ≠ 0, then we may form
Figure 2.1: A Connected Set (a broken line of line intervals within R joining (x, y) to (x1, y1))
dx/dy = −N(x, y)/M(x, y) (2.4)
and consider y as the independent variable and x(y) as a function of y. If both M(x, y) = 0 and N(x, y) = 0 at some point (x1, y1), then we have an indeterminate of the form 0/0 for both M/N and N/M. In this case, we say that (x1, y1) is a singular point of Equation (2.3).

We will proceed formally throughout this chapter, in the sense that we will not apply the rigorous theory of differential forms. All functions are assumed to be well defined except at singular points. We will not attempt to deal with jump discontinuities; these will be handled by integral transform methods (i.e., the Laplace and the Fourier transforms) in a later chapter.
Symbol   Definition
=        Equals
≠        Does not equal
:=       Defines
≡        Is equivalent to
∃        There exists
∀        For each, for every
∈        Is an element of
∉        Is not an element of
∅        The empty, or null, set
∞        The infinity symbol, infinitely large
→        Approaches, goes to or towards
⇒        Implies
¬        The negation of, “it is not true that”
ℜ        The real part of, ℜ(x + y√−1) = x
ℑ        The imaginary part of, ℑ(x + y√−1) = y
≪        Much less than, is greatly exceeded by
≫        Much greater than, greatly exceeds
♦        Diamond symbol, used as “end of proof”

Table 2.1: Table of Symbols
2.2 Exact Equations
In its differential form, the first order, first degreedifferential equation becomes
M(x, y) dx+N(x, y) dy = 0. (2.5)
The above equation, Equation (2.5), is said to be exact (in some rectangular region R) if there exists a continuously differentiable function U(x, y) such that
∂U/∂x = M, ∂U/∂y = N. (2.6)
We observe that in this case,
dU = M dx+N dy,
throughout the rectangular region R. We return to the topic of level curves and observe that an exact solution of Equation (2.5) is simply a level curve of the function U(x, y). Recall that a level curve for U(x, y) is simply the locus, or set, of points (x, y) such that U(x, y) = k, where k is a fixed constant.

Since U(x, y) may be implicitly defined, that is, it may not be possible to solve U(x, y) = k for y in closed form, singular points may occur on certain level curves. In Figure 2.2, we observe two such singular points, P and Q. If we are able to express y as a function of x, then dy/dx makes sense and we may apply a classic calculus computation:
(∂U/∂x)·(dx/dx) + (∂U/∂y)·(dy/dx) = (∂U/∂x)·1 + (∂U/∂y)·(dy/dx) = M(x, y) + N(x, y)·(dy/dx) = 0.
Figure 2.2: Level Curves (level sets U(x, y) = √2, 1, √2/2, and 3/π, with singular points P and Q)
Examples are essential in understanding exact differentialequations.
Example 1.
Solve the differential expression
(x² + y²) dx + 2xy dy = 0.
Here M(x, y) = x² + y² and N(x, y) = 2xy. What is required is to determine a function U(x, y) such that
∂U/∂x = M(x, y) and ∂U/∂y = N(x, y).
We integrate with respect to x and with respect to y and combine the two results to obtain
U(x, y) = (1/3)x³ + xy² = c1.
The function is implicitly defined. The result can be verified by differentiation.
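That verification by differentiation can be delegated to a computer algebra system. A minimal sketch, assuming sympy is available, checking that ∂U/∂x = M and ∂U/∂y = N for Example 1:

```python
import sympy as sp

x, y = sp.symbols('x y')

M = x**2 + y**2
N = 2*x*y
U = x**3/3 + x*y**2      # candidate implicit solution U(x, y) = c1

# dU = M dx + N dy must hold identically
Ux = sp.diff(U, x)
Uy = sp.diff(U, y)
```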
Example 2. Show that the equation
(y2 − 1) dx+ (2xy + sin y) dy = 0
is exact. We require that
∂U/∂x = y² − 1 and ∂U/∂y = 2xy + sin y.
Integrating and combining results,we obtain U(x, y) = xy2 − x− cos y = c1.Again, the result can be verified by differentiation.
Example 3. Solve y dx + x dy = 0. Proceeding formally, we write
dx/x + dy/y = 0,
whenever x ≠ 0 and y ≠ 0. Integrating, we obtain
ln|x| + ln|y| = c0,
where c0 is a constant of integration. Exponentiating, |xy| = e^{c0} > 0; for any real c0 this gives xy = c1 for some nonzero real number c1. So we solve and obtain
y = c1/x
as the solution. We observe that the lines y = 0 and x = 0 are both solutions of the differential expression. The point (0, 0) is a singular point; it lies on two different solution curves.
There is a simple test for determining whether or nota given differential expression is exact. If each ofM(x, y) and N(x, y) is a continuously differentiable
function in some region in the xy-plane, thenM(x, y) dx+N(x, y) dy = 0 is exact if andonly if
∂M(x, y)/∂y = ∂N(x, y)/∂x.
This test for exactness will be stated later asa theorem and an outline for a proof will be given.
Problems
1. Show that the differential expression
(sin y − 2x) dx+ (x cos y − 2y) dy = 0
is exact by finding its general solution.
2. Show that the differential expression
(yexy + 2x) dx+ (xexy + 2y) dy = 0
is exact by finding its general solution.
3. The differential expression
x dy − y dx = 0
fails the test for exactness. However, by
dividing through by x², one can obtain
(x dy − y dx)/x² = 0,
and observe that now ∂M(x, y)/∂y = −x⁻² and ∂N(x, y)/∂x = −x⁻².
Determine the general solution.
4. Level curves are called contour plots
by certain mathematical software. Apply some
mathematical software to plot level curves for
each of the following functions U(x, y)
of x and y:
(a) U(x, y) = sin(xy);
(b) U(x, y) = ye−x;
(c) U(x, y) = y − ex;
(d) U(x, y) = arctan(x/y).
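A sketch of the plotting exercise in Python (an assumption; any system with a contour routine works). numpy evaluates each U on a grid, and handing (X, Y, Z) to matplotlib's plt.contour would then draw the level curves; part (a) is read here as sin(xy):

```python
import numpy as np

# Grid over a window of the xy-plane
X, Y = np.meshgrid(np.linspace(-2, 2, 201), np.linspace(-2, 2, 201))

Z_a = np.sin(X * Y)        # (a), reading sin(x, y) as sin(xy)
Z_b = Y * np.exp(-X)       # (b)
Z_c = Y - np.exp(X)        # (c)
Z_d = np.arctan2(X, Y)     # (d): quadrant-aware version of arctan(x/y)

# To draw: import matplotlib.pyplot as plt; plt.contour(X, Y, Z_b); plt.show()
```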
Solutions
1. Show that the differential expression
(sin y − 2x) dx+ (x cos y − 2y) dy = 0
is exact by finding its general solution.
We integrate M with respect to x and N with respect to y:
∫(sin y − 2x) dx = x sin y − x² + f(y),
∫(x cos y − 2y) dy = x sin y − y² + g(x).
Matching the two forms gives f(y) = −y² and g(x) = −x², so the
general solution becomes
x sin y − x² − y² = c1,
where c1 is an arbitrary constant.
2. Show that the differential expression
(yexy + 2x) dx+ (xexy + 2y) dy = 0
is exact by finding its general solution.
We integrate M with respect to x and N with respect to y:
∫(ye^{xy} + 2x) dx = e^{xy} + x² + f(y),
∫(xe^{xy} + 2y) dy = e^{xy} + y² + g(x).
Matching gives f(y) = y² and g(x) = x², so the
general solution becomes
e^{xy} + x² + y² = c1,
where c1 is an arbitrary constant.
3. The differential expression
x dy − y dx = 0
fails the test for exactness. However, by
dividing through by x², one can obtain
(x dy − y dx)/x² = 0,
and observe that now ∂M(x, y)/∂y = −x⁻² and ∂N(x, y)/∂x = −x⁻².
Determine the general solution.
Clearly there is a problem when x = 0.
Setting x ≠ 0 for the moment, we compute
d(y/x) = −(y/x²) dx + (1/x) dy = 0,
from which y/x = c1, with c1 a constant.
Therefore, from this elementary differential expression,
we write a general solution
y/x = c1 or y = c1x
for x ≠ 0. When x = 0, the y-axis is a solution.
2.3 Separable Variables
There is a distinct difference between the concept of a function and the notion of an equation. Historically, equations are much older than functions. At one time, however, the two were synonymous. An equation is generally defined to be an algebraic relation (implicit or explicit) between two or more variables. A function f may be defined as consisting of three things: (1) an initial set, called the domain; (2) a final set, called the range or codomain; and (3) a rule which assigns to each member of the initial set exactly one member of the final set. The rule itself may take the form of an algebraic equation, and it traditionally does. Certainly y = |x| is the same as y = √(x²). On the other hand, the equation
f(x) = x/|x|
has a problem at the point x = 0, whereas
f(x) = { 1 whenever x > 0; 0 at x = 0; −1 whenever x < 0 }
defines a function for each real number x.

While we are avoiding many pathological mathematical constructions and we are pursuing a formal rather than a rigorous development, some care must be taken in approaching the topic of separable variables to avoid troublesome anomalies. For instance, in the previous section we noticed that
−y dx + x dy = 0 and (−y dx + x dy)/x² = 0
both represent the same family of solution curves, yet the two expressions are not identical. In particular, the first expression is well defined everywhere whereas the second has a singularity at x = 0 that requires additional investigation. To go from
M(x, y) dx+N(x, y) dy = 0
to
M(x) dx+ N(y) dy = 0
via some suitable manipulation of M and Nby a function G(x, y) so that
M(x) := M(x, y)/G(x, y) and N(y) := N(x, y)/G(x, y)
may introduce singularities, discontinuities, or extraneous solutions. Some of the singularities may be “removable” and pose no problem; some of the discontinuities may be gotten around by carefully choosing the region in the xy-plane for the family of solution curves; and some extraneous solutions may be eliminated by inspection or from initial conditions. In each case, one must investigate additional conditions imposed by the function G(x, y).

When an equation
M(x, y) dx+N(x, y) dy = 0
can be converted into the form
M(x) dx+ N(y) dy = 0, (2.7)
then the equation is said to have separablevariables. As an example, considerthe equation
dy/dx = −(x/y)². (2.8)
We “multiply through by” y² dx to obtain the differential expression
x² dx + y² dy = 0, (2.9)
an exact differential expression. The general solution is
x³ + y³ = c1 (c1 real). (2.10)
Every differential expression of the form (2.7) is exact (except at discontinuities) because
U(x, y) = ∫M(x) dx + ∫N(y) dy, (2.11)
taking indefinite integrals, satisfies dU = M dx + N dy. Writing this in the notation of a total differential:
dU = (∂U/∂x) dx + (∂U/∂y) dy = M(x) dx + N(y) dy. (2.12)
Therefore, the level curves of the function (2.11) are the level curves
∫M(x) dx + ∫N(y) dy = c1. (2.13)
The differential equation y′ = F(x, y) will have separable variables whenever F is a product of the form
y′ = F1(x)F2(y). (2.14)
In this case, we may re-write Equation (2.14) as a differential expression
F1(x) dx − dy/F2(y) = 0. (2.15)
Example 1.
Solve the differential equation
yy′ + 4x = 0.
We re-write the differential equation as a differentialexpression of the form
y dy + 4x dx = 0
and integrate:
∫y dy + ∫4x dx = c1,
(1/2)y² + 2x² = c1, where c1 ≥ 0.
The set of solution curves is a family of ellipses. There is one degenerate solution curve, namely {(0, 0)}, when c1 = 0.
Example 2.
Solve the differential equation
y′ = −xy.
As suggested, we convert the differential equation into a differential expression and integrate term by term.
dy/y = −x dx.
∫dy/y = ∫−x dx + c0.
ln(|y|) = −(1/2)x² + c0.
Now, we immediately notice the above equation is well defined for each real number c0 and for all values of y except y = 0. We “exponentiate” both sides of the above equation to obtain
e^{ln(|y|)} = e^{−x²/2 + c0} = c̃1 e^{−x²/2}, where c̃1 = e^{c0} > 0.
If y > 0, then we choose c1 = c̃1 > 0; if y < 0, then we choose c1 = −c̃1 < 0. The equation becomes
y = c1 e^{−x²/2}, where c1 ≠ 0.
If we allow c1 to vanish, that is, to equal 0, then the line y = 0 is also allowed as a solution.
If we have (0, y0) as an initial condition, then the solution curve becomes
y = y0 e^{−x²/2},
the Gaussian curve from elementary statistics, also known as the bell-shaped curve.
Example 3.
Solve the differential equation
dy/dx = (y²x + x)/(x²y + y).
In the usual manner, create a differential expression
y dy/(1 + y²) = x dx/(1 + x²)
from the facts that y²x + x = x(y² + 1) and x²y + y = y(x² + 1). Now integrate term by term:
∫2y dy/(1 + y²) = ∫2x dx/(1 + x²) + c0,
with c0 the arbitrary constant of integration.
ln(1 + y²) = ln(1 + x²) + c0.
Let c0 = ln(c1), c1 > 0, and obtain, exponentiating both sides of the above equation:
1 + y² = c1(1 + x²).
We have observed that to solve separable variables differential equations one must really know how to do integrals and some algebra. We have also observed that problems can arise in the domain of definition of the solution function and that, in putting the differential equation into the correct format, some solutions may be lost. This same concept is central in the solution of partial differential equations: one will try to separate the function F(x, y) by writing it as a product of two functions F1(x), F2(y).

One final item of interest is that in some (older) books, differential equations which we have referred to as “separable variables” are also called variables separable.
Problems
1. How can one determine the solution of
a first-order separable variables differential
equation y′ = F(x, y)?
2. Solve the separable variables differential
equation
dy/dx − k1 y = 0,
where k1 is a nonzero constant.
3. Solve the differential expression
dx+√x dy = 0,
using the techniques of separable variables.
4. Solve the differential equation
(x cos y) dy/dx = 1 − y.
Solutions
1. How can one determine the solution of
a first-order separable variables differential
equation y′ = F(x, y)?
The above will have separable variables
whenever F is a product of the form
y′ = F1(x)F2(y).
In this case, we may re-write the above equation
as a differential expression
F1(x) dx − dy/F2(y) = 0,
provided, of course, that F2(y) ≠ 0.
Then integrate to obtain
∫F1(x) dx − ∫dy/F2(y) = c1,
where c1 is a constant of integration.
2. Solve the separable variables differential
equation
dy/dx − k1y = 0,
where k1 is a nonzero constant.
Manipulate the above expression into the form
dy/y − k1 dx = 0,
assuming that y ≠ 0. Later we'll deal with that
case as a separate issue. Integrate:
ln|y| − k1x = c0,
where c0 is the constant of integration.
Exponentiate, or take the antilogarithm, of
both sides of the above equation to get
y = c1 e^{k1x} (c1 = ±e^{c0}),
where c1 = e^{c0} for y > 0 and c1 = −e^{c0} for y < 0.
Notice that the x-axis, y = 0, is also a solution.
3. Solve the differential expression
dx+√x dy = 0,
using the techniques of separable variables.
The equation is not in the correct form. Multiply
through by x^{−1/2} (x > 0) to obtain
dx/√x + dy = 0.
Now we can integrate term by term:
∫dx/√x + ∫dy + c0 = 0,
2√x + y + c0 = 0,
where c0 is the constant of integration.
Set c1 = −c0 to obtain y as a function of x:
y = −2√x + c1.
4. Solve the differential equation
(x cos y) dy/dx = 1 − y.
This equation has separable variables:
cos y dy/(1 − y) = dx/x
when y ≠ 1 and x ≠ 0. Integrating yields
∫cos y dy/(1 − y) = ln(x) + c0,
where c0 is a constant of integration. But the indefinite integral ∫cos y dy/(1 − y) does not appear in any table of integrals. Using definite integrals instead, and assuming that (x0, y0) is on a solution curve (x, x0 > 0, y ≠ 1), we obtain
∫_{y0}^{y} cos t/(1 − t) dt = ln(x) − ln(x0) = ln(x/x0).
The function defined by the integral
∫_{y0}^{y} cos t/(1 − t) dt has to be evaluated
numerically. Try some quality software to plot
several solution curves.
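One way to carry out that numerical evaluation, sketched with the Python standard library only (the trapezoidal rule and the function name are our choices, not the text's):

```python
import math

def F(y0, y1, n=20_000):
    """Trapezoidal estimate of the non-elementary integral
    int_{y0}^{y1} cos(t)/(1 - t) dt; valid while [y0, y1] avoids t = 1."""
    f = lambda t: math.cos(t) / (1.0 - t)
    h = (y1 - y0) / n
    total = 0.5 * (f(y0) + f(y1))
    for k in range(1, n):
        total += f(y0 + k * h)
    return total * h

# A point (x, y) lies on the solution curve through (x0, y0) exactly
# when F(y0, y) == ln(x/x0), i.e. x = x0 * exp(F(y0, y)).
```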
2.4 First Order Homogeneous Equations
If the function F(x, y) of a first order differential equation
y′ = F(x, y) (2.16)
can be written in the form
y′ = dy/dx = F(y/x), (2.17)
then we call Equation (2.16) a first order homogeneous differential equation, or simply a homogeneous first order equation. Problems of this type can always be reduced to separable differential equations, a topic which was covered in Section 2.3, by the substitution
u = y/x.
We can compute
y = xu, y′ = xu′ + u,
so that
xu′ + u = F(u).
The variables can be separated at this point to yield
du/(F(u) − u) = dx/x.
(Of course, F(u) − u ≠ 0.) Integrate term by term to get
∫du/(F(u) − u) = ∫dx/x + c0 = ln(c1|x|), (c1 = e^{c0})
where the number c0 is the constant of integration. This can best be illustrated by the following examples:
Example 1.
Solve the differential equation
x² dy/dx − x²/4 − y² = 0.
The first thing we do is to observe that the substitution y = xu, dy = u dx + x du will change the equation to
x²(u + x du/dx) − x²/4 − x²u² = 0,
u + x du/dx − 1/4 − u² = 0,
du/(u² − u + 1/4) = dx/x,
du/(u − 1/2)² = dx/x.
Integrate term by term to obtain
∫du/(u − 1/2)² = ∫dx/x + c0,
−1/(u − 1/2) = ln|x| + c0.
We can substitute c0 = ln c1, where c0 is any real number and c1 > 0, and u = y/x to obtain
y/x − 1/2 = −1/ln(c1|x|).
Further simplification yields
y = x/2 − x/ln(c1|x|).
That the above equation is a solution of the originaldifferential equation (2.16) can bedetermined from differentiation and substitution.
Example 2.
Solve the differential equation
xy′ − x− y = 0.
Substitute y = ux, y′ = xu′ + u:
x(xu′ + u) − x − xu = 0.
Assume that x ≠ 0 and obtain
xu′ + u − 1 − u = 0, i.e., xu′ = 1.
Integrate:
u = ∫dx/x + c0 = ln|x| + c0 = ln(c1|x|),
where c0 = ln c1, c1 > 0. Back-substituting u = y/x gives y = x ln(c1|x|). Verify the solution by differentiation.
Example 3.
Solve the differential equation
x²y′ = y² + xy + x².
Substitute y = ux, y′ = xu′ + u:
x²(xu′ + u) = x²u² + x²u + x².
Again supposing that x ≠ 0, we divide out x²:
xu′ + u = u² + u + 1,
xu′ = u² + 1, du/(u² + 1) = dx/x.
Integrating and substituting u = y/x:
arctan(y/x) = ln(c1|x|),
y = x tan(ln(c1|x|)).
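A quick symbolic check of this last solution, taking c1 = 1 and x > 0 for simplicity (sympy assumed available):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
y = x * sp.tan(sp.log(x))            # the solution curve with c1 = 1

# Residual of x**2 * y' - (y**2 + x*y + x**2); it should vanish identically
residual = sp.simplify(x**2 * sp.diff(y, x) - (y**2 + x*y + x**2))
```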
Problems
1. Solve the following differential equation:
y² + (xy + y²) dy/dx = 0.
2. Solve the following differential equation:
(y/x) + (cos(y/x)) dy/dx = 0.
3. Solve the following differential equation:
y′ − 2 = e^{y/x}.
Solutions
1. Solve the following differential equation:
y² + (xy + y²) dy/dx = 0.
The above differential equation is homogeneous.
We make the substitution y = ux.
x²u² + (x²u + x²u²)(xu′ + u) = 0.
Factor out x²u to get
u + (1 + u)(xu′ + u) = 0.
Collect terms and write as a differential expression:
u(u + 2) + (1 + u)xu′ = 0; (1 + u) du/(u(u + 2)) = −dx/x.
We decompose the left side into partial fractions:
(1 + u)/(u(u + 2)) = A/u + B/(u + 2),
1 + u = Au + 2A + Bu, A = 1/2, B = 1/2.
Group and integrate to obtain
(1/2)(du/u + du/(u + 2)) = −dx/x; (1/2) ln[u(u + 2)] = −ln x + c0.
This is equivalent to
u(u + 2) = c1/x², c1 = e^{2c0}.
Substituting u = y/x, we have
(y/x)(y/x + 2) = c1/x².
The solution is
y(y + 2x) = c1.
We verify by differentiating.
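That verification can be sketched in sympy (assumed available): differentiate the implicit relation y(y + 2x) = c1, note that the partial-fraction computation produces the factor 2x, solve for y′, and substitute back into the original equation.

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')(x)

# Differentiating the implicit solution y*(y + 2x) = c1 kills the constant
yp = sp.solve(sp.Eq(sp.diff(y*(y + 2*x), x), 0), sp.diff(y, x))[0]
# yp is -y/(x + y); substitute into the left side of the original equation
residual = sp.simplify(y**2 + (x*y + y**2) * yp)
```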
2. Solve the following differential equation:
(y/x) + (cos(y/x)) dy/dx = 0.
The above differential equation is homogeneous.
We make the substitution y = ux.
3. Solve the following differential equation:
y′ − 2 = e^{y/x}.
The above differential equation is homogeneous.
We make the substitution y = ux.
2.5 A Theorem on Exactness
We mentioned in Section 2.2 that the test for exactness for first order equations,
∂M/∂y = ∂N/∂x,
is necessary and sufficient to ensure the solvability of the differential equation M(x, y) + N(x, y)y′ = 0, where the partial derivatives are assumed to be continuous in some region R in the xy-plane. On the other hand, if there exists a function U(x, y) such that
∂U/∂x = M(x, y), ∂U/∂y = N(x, y),
then, under suitable regularity conditions, namely that the second partial derivatives are continuous, a theorem in advanced calculus states that
∂²U/∂x∂y ≡ ∂²U/∂y∂x.
How can we prove the converse? Here we will have to constructa function U(x, y), having second partial derivatives, fromthe functions M(x, y) and N(x, y). This is actually a specialcase of a much more general theorem which is found in vectoranalysis concerning independence of path and exactness. For thiscase, however, we will be content to prove the following theorem.
Theorem 4 Let R be a rectangular region in the xy-plane, let each of M(x, y) and N(x, y) have continuous first partial derivatives at each point (x, y) ∈ R, and consider
M(x, y) dx + N(x, y) dy = 0 (2.18)
in R. The equality
∂M/∂y = ∂N/∂x (∀ (x, y) ∈ R) (2.19)
holds if and only if there exists a function U(x, y) such that
∂U/∂x = M(x, y), ∂U/∂y = N(x, y) (∀ (x, y) ∈ R). (2.20)
Proof: We have already seen that the existence of such a function U(x, y) is sufficient. To show necessity, we first need to form an initial value problem. Let c0 be a fixed but arbitrary constant and define U(0, 0) := c0. Define U as follows:
U(x, y) = U(0, 0) + ∫_{(0,0)}^{(x,0)} M(r, 0) dr + ∫_{(x,0)}^{(x,y)} N(x, s) ds (2.21)
≡ c0 + ∫_0^x M(r, 0) dr + ∫_0^y N(x, s) ds.
We can re-write Equation (2.21) as follows:
U(x, y) = U(0, 0) + ∫_{(0,0)}^{(0,y)} N(0, s) ds + ∫_{(0,y)}^{(x,y)} M(r, y) dr (2.22)
≡ c0 + ∫_0^y N(0, s) ds + ∫_0^x M(r, y) dr.
So, U has been defined in Equation (2.21)and in Equation (2.22). Are the two the same,that is, is U well defined? At this point we apply thehypothesis, namely Equation (2.19), and observethat Γ = [(0, 0), (x, 0)] ∪ [(x, 0), (x, y)] ∪ [(x, y), (0, y)] ∪ [(0, y), (0, 0)] is a
closed path. The notation [(x1, y1), (x2, y2)] stands for the closed line interval from the point (x1, y1) to the point (x2, y2) in the xy-plane. Γ starts at the origin (0, 0) and traverses counter-clockwise through
the points (x, 0), (x, y), and (0, y), respectively, returning to the origin. See Figure 2.3. Stokes' theorem (actually Green's Lemma in the plane) ensures that the line integral about the path Γ is zero, and so the function U is well defined. If S is the surface enclosed by the path Γ, then
∫_Γ M(x, y) dx + N(x, y) dy = ∬_S (∂N/∂x − ∂M/∂y) dx dy = 0.
Now, writing out ∫_Γ M dx + N dy, we have
∫_0^x M(r, 0) dr + ∫_0^y N(x, s) ds + ∫_y^0 N(0, s) ds + ∫_x^0 M(r, y) dr = 0.
The above equation is exactly the same as
∫_0^x M(r, 0) dr + ∫_0^y N(x, s) ds − ∫_0^y N(0, s) ds − ∫_0^x M(r, y) dr = 0,
which is also
c0 + ∫_0^x M(r, 0) dr + ∫_0^y N(x, s) ds = c0 + ∫_0^y N(0, s) ds + ∫_0^x M(r, y) dr.
Substitute back for U(x, y) on both the right-hand side and the left-hand side to ensure that U(x, y) is, in fact, well defined. Having shown that the function U(x, y) is well defined, we next must show that it satisfies
∂U/∂x = M(x, y), ∂U/∂y = N(x, y).
This can be done directly from Equations (2.21) and (2.22).
∂U(x, y)/∂y = ∂/∂y (U(0, 0) + ∫_{(0,0)}^{(x,0)} M(r, 0) dr + ∫_{(x,0)}^{(x,y)} N(x, s) ds)
= ∂/∂y ∫_{(x,0)}^{(x,y)} N(x, s) ds = N(x, y).
Figure 2.3: A Closed Path Γ (the rectangle (0, 0) → (x, 0) → (x, y) → (0, y) → (0, 0), traversed counter-clockwise)
∂U(x, y)/∂x = ∂/∂x (U(0, 0) + ∫_{(0,0)}^{(0,y)} N(0, s) ds + ∫_{(0,y)}^{(x,y)} M(r, y) dr)
= ∂/∂x ∫_{(0,y)}^{(x,y)} M(r, y) dr = M(x, y).
This completes the proof of the theorem. ♦

One might observe in the proof of the theorem that we made use of the fact that U(0, 0) := c0. The arbitrary constant c0 can be determined if U(0, 0) is known. In general, if U(x, y) = c1, an arbitrary constant, and one knows a single value of U at a point, say (x0, y0), then c1 := U(x0, y0). This can also be written in an integral form when the equation has separable variables:
∫_{x0}^{x} M(x) dx + ∫_{y0}^{y} N(y) dy = 0. (2.23)
The above integral equation, Equation (2.23),is frequently found in physics. Finally, a word of caution:
There exist equations which satisfy ∂M/∂y = ∂N/∂x everywhere except at isolated points, and whose integral ∫_Γ M dx + N dy is not independent of the choice of path Γ. One such is
−y/(x² + y²) dx + x/(x² + y²) dy = 0.
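For this example the failure of path independence can be seen numerically. A stdlib-only sketch integrates M dx + N dy once around the unit circle; a conservative field would return 0, but here the value is 2π:

```python
import math

# Parametrize the unit circle x = cos t, y = sin t, 0 <= t < 2*pi and
# sum M*x'(t) + N*y'(t) over small steps dt (midpoint rule).
n = 100_000
dt = 2 * math.pi / n
total = 0.0
for k in range(n):
    t = (k + 0.5) * dt
    x, y = math.cos(t), math.sin(t)
    xp, yp = -math.sin(t), math.cos(t)      # dx/dt, dy/dt
    r2 = x*x + y*y                          # = 1 on this path
    total += ((-y / r2) * xp + (x / r2) * yp) * dt
# total is approximately 2*pi, not 0: the integral depends on the path
```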
Problems
1. Test the following equation; if it is exact, then find
the function U such that U(x, y) = c1 is its solution.
e^x cos y dx − e^x sin y dy = 0.

2. Test the following equation; if it is exact, find
the function U such that U(x, y) = c1 is its solution.
e^x sin y dx − e^x cos y dy = 0.

3. Test the following equation; if it is exact, find
the function U such that U(x, y) = c1 is its solution.
(x³ − 3xy² + sin(y)) dy + (3x²y − y³) dx = 0.

4. If R is any rectangular region not containing the
origin, show that the following differential expression
is exact and find its solution U(x, y) = c1.
2xy/(x² + y²)² dx + (y² − x²)/(x² + y²)² dy = 0.
Solutions
1. Test the following equation; if it is exact, then find
the function U such that U(x, y) = c1 is its solution.
e^x cos y dx − e^x sin y dy = 0.
Apply the test ∂M/∂y = ∂N/∂x:
∂M/∂y = −e^x sin y, ∂N/∂x = −e^x sin y.
The equation is exact. By inspection, we find
U(x, y) = e^x cos(y) = c1.
2. Test the following equation; if it is exact, find
the function U such that U(x, y) = c1 is its solution.
e^x sin y dx − e^x cos y dy = 0.
Apply the test ∂M/∂y = ∂N/∂x:
∂M/∂y = +e^x cos y, ∂N/∂x = −e^x cos y.
The equation is not exact. No such U exists.
3. Test the following equation; if it is exact, find
the function U such that U(x, y) = c1 is its solution.
(x³ − 3xy² + sin(y)) dy + (3x²y − y³) dx = 0.
Apply the test ∂M/∂y = ∂N/∂x:
∂M/∂y = 3x² − 3y², ∂N/∂x = 3x² − 3y².
The equation is exact. We compute U(x, y):
∫^y (x³ − 3xs² + sin s) ds = yx³ − xy³ − cos y + F1(x);
∫^x (3t²y − y³) dt = x³y − xy³ + F2(y).
Therefore,
U(x, y) = yx³ − xy³ − cos y = c1,
where c1 is an arbitrary constant.
4. If R is any rectangular region not containing the
origin, show that the following differential expression
is exact and find its solution U(x, y) = c1.
2xy/(x² + y²)² dx + (y² − x²)/(x² + y²)² dy = 0.
Apply the test ∂M/∂y = ∂N/∂x:
∂M/∂y = 2x/(x² + y²)² − 8xy²/(x² + y²)³ = (2x³ − 6xy²)/(x² + y²)³,
∂N/∂x = −2x/(x² + y²)² − 4x(y² − x²)/(x² + y²)³ = (2x³ − 6xy²)/(x² + y²)³.
The equation is exact. We compute U(x, y):
U(x, y) = ∫^x 2ty/(t² + y²)² dt = −y/(x² + y²) = c1,
where c1 is an arbitrary constant.
2.6 About Integrating Factors
If the differential expression
M(x, y) dx + N(x, y) dy = 0 (2.24)
is not exact and there exists a function G(x, y) such that
G(x, y)[M(x, y) dx + N(x, y) dy] = 0
is exact, we call such a function G an integrating factor. An integrating factor is abbreviated IF. Otherwise stated, an integrating factor is a function G(x, y) of x and y such that
∂M/∂y ≠ ∂N/∂x and ∂(GM)/∂y = ∂(GN)/∂x.
The original equation, Equation (2.24), is modified when it is multiplied by an integrating factor. Particular attention has to be given to two special cases: (1) points (x0, y0) at which the IF vanishes (G(x0, y0) = 0), and (2) points (x1, y1) of singularity of the IF, that is,
lim_{(x,y)→(x1,y1)} G(x, y) = ∞.
When the IF vanishes, one encounters extraneous solutions. A singularity in the IF may alter the domain of definition of the solution. Once an appropriate IF has been applied, the resulting exact equation
M̄(x, y) dx + N̄(x, y) dy = 0,
where
M̄(x, y) := G(x, y)·M(x, y) and N̄(x, y) := G(x, y)·N(x, y),
can be solved by techniques from Section 2.2, Section 2.3, or Section 2.4.

For a given first order differential equation, Equation (2.24), there can be several possible integrating factors, each of which could make Equation (2.24) exact. In general, there is no unique IF for a given DE (differential equation). Three questions remain to be answered: (1) “Do IFs always exist?” (2) “Are there conditions which guarantee the existence of an IF?” and (3) “How can one find an IF for a given DE?”
Example 1.
Examine the differential equation 3xy + 2y² + (x² + 6xy)y′ = 0 and determine whether it is exact or not. If it is not exact, then locate a suitable integrating factor to make it exact and solve it.

Taking partials ∂M/∂y and ∂N/∂x, we get
∂M/∂y = ∂/∂y (3xy + 2y²) = 3x + 4y,
∂N/∂x = ∂/∂x (x² + 6xy) = 2x + 6y.
The equation is not exact. But, if we multiply through by the integrating factor (x + 2y), we obtain
(x + 2y)(3xy + 2y²) + (x + 2y)(x² + 6xy)y′ = 0.
And, taking partial derivatives again,
∂M/∂y = ∂/∂y [(x + 2y)(3xy + 2y²)] = 3x² + 16xy + 12y²,
∂N/∂x = ∂/∂x [(x + 2y)(x² + 6xy)] = (x² + 6xy) + (x + 2y)(2x + 6y) = 3x² + 16xy + 12y².
The equation is now exact and can be solved.
(3x²y + 8xy² + 4y³) + (x³ + 8x²y + 12xy²)y′ = 0.
The solution is
U(x, y) = x³y + 4x²y² + 4xy³ = c1.
And we may simplify this by factoring into
U(x, y) = (x² + 4xy + 4y²)(xy) = (x + 2y)²xy = c1.
Example 2.
Examine the differential equation y − xy′ = 0 and determine whether it is exact or not. If it is not exact, then locate a suitable integrating factor to make it exact and solve it.

Taking partials ∂M/∂y and ∂N/∂x, we get
∂M/∂y = ∂y/∂y = 1,
∂N/∂x = ∂(−x)/∂x = −1.
The equation is not exact. But, if we multiply through by the integrating factor y⁻², we obtain
1/y − (x/y²)y′ = 0,
which is exact. And we solve immediately:
U(x, y) = x/y = c1,
where c1 is an arbitrary constant.
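Both examples can be confirmed mechanically. A sketch assuming sympy is available (the helper name is ours) shows exactness failing before, and holding after, multiplication by the integrating factor:

```python
import sympy as sp

x, y = sp.symbols('x y')

def is_exact(M, N):
    return sp.simplify(sp.diff(M, y) - sp.diff(N, x)) == 0

# Example 1: 3xy + 2y**2 + (x**2 + 6xy)y' = 0 with IF = (x + 2y)
M1, N1, G1 = 3*x*y + 2*y**2, x**2 + 6*x*y, x + 2*y
# Example 2: y - x*y' = 0 with IF = 1/y**2
M2, N2, G2 = y, -x, 1/y**2

results = (is_exact(M1, N1), is_exact(G1*M1, G1*N1),
           is_exact(M2, N2), is_exact(G2*M2, G2*N2))
```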
These examples illustrate the principle of the integrating factor. How to determine such an integrating factor is another matter. One can best learn by doing plenty of examples and gaining insight into just which functions work best.
Differential                    Exact Differential
y dx + x dy                     d(xy)
(x dy − y dx)/x²                d(y/x)
(x dy − y dx)/(x² + y²)         d[arctan(y/x)]
(x dx + y dy)/(x² + y²)         d[log √(x² + y²)]

Table 2.2: Table of Exact Differentials
2.7 The First Order Linear Differential Equation
A first order differential equation is said to be linear whenever it can be written in the form
dy/dx + P(x) y = Q(x). (2.25)
The differential expression for Equation (2.25) is
[P(x) y − Q(x)] dx + dy = 0.
We observe that the above expression is not exact; however, one can apply an integrating factor, namely
IF := e^{∫P dx}.
If we set M(x, y) := e^{∫P dx} [P(x) y − Q(x)] and N(x, y) := e^{∫P dx}, then taking partial derivatives yields
∂/∂y (e^{∫P dx} [P(x)y − Q(x)]) = P(x) e^{∫P dx}
and
∂/∂x (e^{∫P dx}) = P(x) e^{∫P dx}.
Abbreviation   Definition
BC             Boundary Condition(s)
BVP            Boundary Value Problem
DE             Differential Equation
FTOC           Fundamental Theorem Of Calculus
IC             Initial Condition(s)
IF             Integrating Factor
IVP            Initial Value Problem
LC             Life Cycle
MDT            Mean Down Time
MTBF           Mean Time Between Failure
OA             Operational Availability
ODE            Ordinary Differential Equation
PDE            Partial Differential Equation
RM&A           Reliability, Maintainability, and Availability
WLOG           Without Loss Of Generality

Table 2.3: Table of Common Abbreviations
The above equation is exact, so we apply techniques from theprevious sections.This can be summarized in the following theorem.
Theorem 5 Suppose that [a, b] is a number interval and that each of P(x) and Q(x) is a continuous function on [a, b]. If c1 is an arbitrary constant then
y(x) = e^{−∫P dx} ∫ e^{∫P dx} Q(x) dx + c1 e^{−∫P dx} (2.26)
is a solution of Equation (2.25). For each real number y0 and for each x0 such that a < x0 < b, there exists a unique value for c1 such that Equation (2.26) passes through the point (x0, y0).
Proof: Let [a, b] be a number interval. Define a function u(x) on [a, b] such that
u(x) := exp[∫^x P(t) dt],
where ∫^x P(t) dt is one function representing the indefinite integral ∫P(x) dx. Observe that u(x), ∀x ∈ [a, b], is continuous: it is the composition of two continuous functions. Furthermore, u′ exists and is also continuous:
u′(x) = P(x) · exp[∫^x P(t) dt] = P(x)u(x).
If u(x) is an IF for Equation (2.25), then
d/dx (u(x)y(x)) = Q(x) · u(x).
Indeed,
d/dx (u(x)y(x)) = u′(x)y(x) + u(x)y′(x) = u(x)[y′(x) + P(x) y(x)];
and
d/dx ∫^x u(t)Q(t) dt = u(x)Q(x).
Therefore,
u(x)[y′(x) + P(x) y(x)] = u(x)Q(x),
and u(x) is an integrating factor for Equation (2.25). We have
u(x) y(x) = ∫^x Q(t)u(t) dt + c1,
where c1 is a constant of integration. Whenever u(x) ≠ 0, we may divide it out to get
y(x) = (1/u(x)) ∫^x Q(t)u(t) dt + c1/u(x).
Observe that we have Equation (2.26):
y(x) = exp[−∫^x P(t) dt] ∫^x exp[∫^t P(s) ds] Q(t) dt + c1 exp[−∫^x P(t) dt].
Since u is continuous, each of the above steps can be reversed. u(x) ≠ 0 for each x ∈ [a, b] because e^w > 0 for each real number w. From an initial condition, (x_0, y_0), we wish to determine c_1:

y(x_0) = exp[ −∫^{x_0} P(t) dt ] ∫^{x_0} exp[ ∫^t P(s) ds ] Q(t) dt + c_1 exp[ −∫^{x_0} P(t) dt ].

Set

R(x) = e^{−∫^x P(t) dt} ∫^x e^{∫^t P(s) ds} Q(t) dt

for convenience. Then

y_0 ≡ y(x_0) = R(x_0) + c_1/u(x_0),

c_1 = u(x_0) [y_0 − R(x_0)],    u(x_0) > 0.

This solves the IVP and completes the proof. ♦
Example 1. Solve the first order linear differential equation

y′ − y = e^x.

Here P(x) = −1, Q(x) = e^x, and u(x) = exp[ ∫^x P(t) dt ] = e^{−x}. We compute

R(x) = exp[ −∫^x (−1) dt ] { ∫^x exp[ ∫^t (−1) ds ] e^t dt },

y(x) = R(x) + c_1/u(x)
     = e^x ∫^x e^{−t} e^t dt + c_1 e^{−∫^x (−1) dt}
     = x e^x + c_1 e^x.
Example 2. Solve the first order differential equation

x y′ + y + 1 = 0.

We make it into a linear first order differential equation by dividing by x:

y′ + y/x = −1/x.

Here P(x) = 1/x, Q(x) = −1/x, and u(x) = exp[ ∫^x dt/t ] = e^{ln x} = x. Then

R(x) = exp[ −∫^x dt/t ] { ∫^x exp[ ∫^t ds/s ] (−1/t) dt }
     = e^{−ln x} ∫^x e^{ln t} (−1/t) dt
     = (1/x) ∫^x (−1) dt = (1/x)(−x) = −1.

Therefore,

y(x) = R(x) + c_1/u(x),
y(x) = −1 + c_1/x,

is the solution.
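As a quick machine check of Example 2, the hand-derived solution can be substituted back into the equation symbolically. The sketch below uses SymPy; the choice of SymPy is an assumption, since the book does not name its computer algebra system.

```python
# Sketch: verifying Example 2 with SymPy. The symbol names (x, c1)
# are illustrative, not the book's notation.
import sympy as sp

x, c1 = sp.symbols('x c1')
y = sp.Function('y')

# The equation of Example 2: x*y' + y + 1 = 0.
ode = sp.Eq(x * y(x).diff(x) + y(x) + 1, 0)
general = sp.dsolve(ode, y(x))  # software's general solution

# Substitute the hand-derived solution y = -1 + c1/x back into the ODE.
claimed = -1 + c1 / x
residual = sp.simplify(x * sp.diff(claimed, x) + claimed + 1)
```

The residual simplifies to zero, confirming that y(x) = −1 + c_1/x satisfies the equation.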
2.8 Other Methods
Not every first order differential equation can be solved by the methods previously discussed in this chapter. In this section some substitutions, artifacts, and special techniques are considered. In technical references, one may find many more methods, some of which are equation specific or computational in nature. Substitution, or transformation of variables, is the most common and useful of the alternate methods. An unknown equation may be converted into an equation of a known type by a suitable transform. One could try to alter the form of the equation M(x, y) + N(x, y) y′ = 0 by a suitable substitution

x = x(u, v),    y = y(u, v).
Proceeding formally, we look at the total differentials

dx = (∂x/∂u) du + (∂x/∂v) dv,    dy = (∂y/∂u) du + (∂y/∂v) dv.

Each of x, y, dx, dy is now expressed in terms of u, v, du, dv, and M(x, y) dx + N(x, y) dy = 0 is transformed into

M̃(u, v) ( (∂x/∂u) du + (∂x/∂v) dv ) + Ñ(u, v) ( (∂y/∂u) du + (∂y/∂v) dv ) = 0,    (2.27)

where

M̃(u, v) := M(x(u, v), y(u, v)),    Ñ(u, v) := N(x(u, v), y(u, v)).

Collecting terms, we have Equation (2.27) as

M̃(u, v) du + Ñ(u, v) dv = 0.    (2.28)

If Equation (2.28) can be solved, we have

U(u, v) = c_1.    (2.29)

Substitute back to get

U[u(x, y), v(x, y)] = c_1,    (2.30)

the desired solution.
2.9 Summary
We have examined a variety of techniques, methods, and procedures to solve first order, first degree equations. We have learned to recognize certain particular cases and know immediately which method to apply. It is also true that computer software can recognize certain types of first order, first degree equations and produce amazing results. But human manipulation is often better than even the cleverest algorithms, as will be seen in the following problem section. We proved that every differential expression of the form M dx + N dy = 0 has a unique solution if it is exact. We next examined separable variables, the most elementary form of exact expressions, and extended our knowledge of integral calculus. The first order homogeneous differential equations were our first encounter with substitution and with the pitfalls that accompany it. Integrating factors, in general, and the special integrating factors for the first order linear differential equations, in particular, rounded out our classification of first order differential equations and their standard solution methods. We then mentioned some exotic means of obtaining solutions.
Problems
Classify each of the following problems as separable variables, homogeneous first order, exact, or linear first order. Use mathematical software to obtain a solution whenever possible. If the software fails to obtain a solution, then solve the problem manually. One will observe that the computer is not infallible and that a human touch is sometimes necessary. Try to guess at reasons as to why the computer software might fail in each given case.
1. Classify and solve y′ = − ((y/x) + x(xy + 1)).
2. Classify and solve (y2 + 1)y′ = (x2 − 1).
3. Classify and solve (x2 − x)y′ = −(2xy − y + 2x).
4. Classify and solve y′ = −y + 2x+ 1.
5. Classify and solve yy′ = (x+ 1).
Solutions
1. Classify and solve y′ = − ((y/x) + x(xy + 1)).

The above equation is first order linear,

y′ + P(x) y = Q(x),

where P(x) = (1/x + x^2) and Q(x) = −x. The computer gives a solution:

{{y(x) → x^2 (−1 − xy)/2 + C(1) − y log(x)}}.
2. Classify and solve (y^2 + 1) y′ = (x^2 − 1).

The above equation can be written in the form

M(x) dx + N(y) dy = 0.

Thus it is separable variables. The computer generated the following solution:

{{y(x) → −x/(1 + y^2) + x^3/(3 (1 + y^2)) + C(1)}}
3. Classify and solve (x^2 − x) y′ = −(2xy − y + 2x).

The above equation is exact:

∂M/∂y = (2x − 1) = ∂N/∂x.

The computer gives a solution:

{{y(x) → [(1 − x) x C(1) + 2 e^{log(1−x)+log(x)} (1 + log(x/(1 − x)) + x log((1 − x)/x))] / (−1 + x)}}.
4. Classify and solve y′ = −y + 2x + 1.

The above equation is first order linear. The computer gives a solution:

{{y(x) → −1 + 2x + C(1)/e^x}}.
5. Classify and solve y y′ = (x + 1).

Written as (x + 1) dx − y dy = 0, the above equation is clearly exact since

∂M/∂y = 0 = ∂N/∂x.

The computer gives the solution

{{y(x) → −√(2x + x^2 + C(1))}, {y(x) → √(2x + x^2 + C(1))}}.
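To see what such a software run looks like in practice, Problem 4 can be solved and checked mechanically. The sketch below uses SymPy as an assumed stand-in for the unnamed software in the text.

```python
# Sketch: letting software classify-and-solve Problem 4, y' = -y + 2x + 1,
# with SymPy (an assumption; the book's software is not named).
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

ode = sp.Eq(y(x).diff(x), -y(x) + 2*x + 1)
sol = sp.dsolve(ode, y(x)).rhs  # general solution, e.g. C1*exp(-x) + 2*x - 1

# Check the solution by substituting it back into the equation.
residual = sp.simplify(sp.diff(sol, x) + sol - 2*x - 1)
```

A zero residual confirms that the software's answer agrees with the hand solution y = 2x − 1 + C e^{−x}.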
Chapter 3
The Laplace Transform
3.1 Laplace Transform Preliminaries
Suppose that f is a real-valued function from [0, ∞) = {x | 0 ≤ x < +∞}. The Laplace transform of f(x) is defined as the function F(s) such that

F(s) := ∫_0^∞ f(x) e^{−sx} dx,    (3.1)

wherever it exists. It is important to note that the improper integral in the definition of F(s) above usually means

∫_0^∞ f(x) e^{−sx} dx = lim_{r→0^+} ∫_r^1 f(x) e^{−sx} dx + lim_{R→+∞} ∫_1^R f(x) e^{−sx} dx,

where 0 < r ≪ 1 ≪ R < +∞. The function F(s), defined above, is a complex-valued function of one complex variable, s. Throughout this chapter, we will let lower case italic letters, f, g, h, etc., denote the real-valued functions and upper case italic letters, F, G, H, etc., denote their respective Laplace transforms. Moreover, we will always denote f as f(x), F as F(s), etc., to ensure that there is no misunderstanding as to whether a given function is a function of a real variable, x, or of a complex variable, s.

The operator L (calligraphic “L”) is defined as F(s) = L[f(x)]; F(s) results from the application of the integral operator L to f(x). One has to place certain restrictions on the function f(x) to guarantee that the integral in Equation (3.1) exists and is finite; moreover, the function F(s) may not exist for all values of the complex variable s. For a given function F(s), one must specify its domain of definition. In particular, if the integral in Equation (3.1) exists for some real number s_0, then F(s) exists for all complex numbers s whose real part exceeds s_0. In some textbooks f(x) is called the original function and F(s) is called the image function.

To work effectively with Laplace transforms, one needs to establish some elementary transform pairs, that is, definite functions f(x) and their corresponding Laplace transforms F(s). These function pairs and the elementary theorems on the Laplace transform give powerful tools for solving ordinary differential equations, especially those frequently encountered in RM&A. (Laplace transforms are also used in solving partial differential equations.) Some pairs will be presented in the following illustrative examples.
Example 1.

If x ∈ [0, ∞) and f(x) = 1, then

F(s) = L[1] = ∫_0^∞ e^{−sx} dx = 1/s.    (3.2)

Indeed,

∫_0^∞ e^{−sx} dx = e^{−sx}/(−s) |_0^∞ = lim_{R→∞} e^{−sR}/(−s) + e^0/s = 1/s.

Let ℜ(s) denote the real part of the complex number s. As long as ℜ(s) > 0, the above integral exists. If ℜ(s) < 0, then lim_{R→∞} e^{−sR}/(−s) fails to exist. So the strip of convergence of the function f(x) ≡ 1 is 0 < ℜ(s). Otherwise stated, this improper integral exists and is finite whenever s is real and positive or whenever s is complex with a positive real part.
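The pair L[1] = 1/s can also be reproduced symbolically. The sketch below uses SymPy's `laplace_transform`; the choice of SymPy is an assumption made for illustration.

```python
# Sketch: checking Example 1, L[1] = 1/s, with SymPy.
import sympy as sp

x, s = sp.symbols('x s', positive=True)

# noconds=True returns just F(s), dropping the convergence conditions.
F = sp.laplace_transform(sp.Integer(1), x, s, noconds=True)
residual = sp.simplify(F - 1/s)
```

The residual is zero; with the default `noconds=False`, SymPy would also report the strip of convergence discussed above.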
Example 2.

If x ∈ [0, ∞) and f(x) = x, then

F(s) = L[x] = ∫_0^∞ x e^{−sx} dx
     = lim_{R→∞} [x e^{−sx}/(−s)]_0^R + lim_{R→∞} (1/s) ∫_0^R e^{−sx} dx
     = lim_{R→∞} [R e^{−Rs}/(−s) − e^{−Rs}/s^2 + 1/s^2] = 1/s^2,

where ℜ(s) > 0.
Example 3.

If x ∈ [0, ∞), α > −1, and f(x) = x^α, then

F(s) = L[x^α] = ∫_0^∞ x^α e^{−sx} dx = (1/s^{α+1}) lim_{R→∞} ∫_0^{sR} y^α e^{−y} dy,

where −1 < α and ℜ(s) > 0. We made a change of variable in the integrand using y = sx. Restricting s to be a positive, real number and α to be a real number, α > −1, defines the improper integral

∫_0^∞ x^α e^{−x} dx

as the well-known Gamma function Γ(α + 1). Therefore,

L[x^α] = Γ(α + 1)/s^{α+1}    (x > 0, α > −1).

There is one special case of interest, namely α = −1/2:

L[x^{−1/2}] = ∫_0^∞ x^{−1/2} e^{−sx} dx = √(π/s).
Example 4.

If x ∈ [0, ∞) and f(x) = x^n, where n is a positive integer, then

F(s) = L[x^n] = ∫_0^∞ x^n e^{−sx} dx = x^n e^{−sx}/(−s) |_0^∞ + (n/s) ∫_0^∞ x^{n−1} e^{−sx} dx,

and hence, for ℜ(s) > 0,

L[x^n] = (n/s) L[x^{n−1}].

Using recursion and Equation (3.2), we have

F(s) = L[x^n] = n!/s^{n+1}    (n = 0, 1, 2, . . .).
One important property of the Laplace transform, which it shares with other operators, is that it is linear, in the sense that if each of f_1 and f_2 is a real-valued function from [0, ∞) and if each of c_1 and c_2 is a real number, then

L[c_1 f_1(x) + c_2 f_2(x)] = c_1 L[f_1(x)] + c_2 L[f_2(x)].
Under suitable assumptions on f(x) (namely lim_{R→∞} |f(R)| e^{−sR} = 0), one can define the Laplace transform of f′(x) as

L[f′(x)] = ∫_0^∞ f′(x) e^{−sx} dx
         = lim_{R→∞} f(x) e^{−sx} |_0^R + lim_{R→∞} ∫_0^R s f(x) e^{−sx} dx
         = −f(0) + s L[f(x)].

Recursively applying the above yields

L[f′(x)] = s L[f(x)] − f(0),
L[f′′(x)] = s^2 L[f(x)] − (s f(0) + f′(0)),
...
L[f^{(n)}(x)] = s^n L[f(x)] − (s^{n−1} f(0) + · · · + s f^{(n−2)}(0) + f^{(n−1)}(0)).

Whenever L[f(x)] = L[g(x)], we say that f, g belong to the same equivalence class with respect to the operator L and we write f ≡ g. If, in addition, each of f and g is continuous, then f = g. Later we will prove a theorem on the uniqueness of the Laplace transform.
3.2 Basic Theorems
Theorem 6 Suppose that s_0 is a real number. If F(s) = L[f(x)] exists for ℜ(s) > s_0, then for any real number a

L[e^{ax} f(x)] = F(s − a),    (ℜ(s) > s_0 + a).
Proof: Starting with the operational definition of the Laplace transform,

F(s) = ∫_0^∞ e^{−sx} f(x) dx.

[Figure 3.1: f(x) = 1 and F(s) = 1/s]

Replace s with s − a (that is, s := s − a) to obtain

F(s − a) = ∫_0^∞ e^{−(s−a)x} f(x) dx = ∫_0^∞ e^{−sx} (e^{ax} f(x)) dx = L[e^{ax} f(x)].

This ends the proof of the so-called “translation theorem.” ♦
Corollary 1 Let z be a complex number. Then L[e^{zx} f(x)] = F(s − z), ∀ ℜ(s) > s_0 + ℜ(z).

Proof: Make a change of variables and observe that the theorem implies that L[e^{zx} f(x)] = F(s − z) whenever ℜ(s − z) > s_0. Therefore, ℜ(s) > s_0 + ℜ(z). ♦
Theorem 7 Suppose that s_0 is a real number. If F(s) = L[f(x)] exists for ℜ(s) > s_0, then for any real number a > 0

L[f(ax)] = (1/a) F(s/a),    (ℜ(s) > a s_0).

Proof: Starting with the operational definition of the Laplace transform,

F(s) = ∫_0^∞ e^{−sx} f(x) dx.

Replace s with s/a (that is, s := s/a) to obtain

F(s/a) = ∫_0^∞ e^{−sx/a} f(x) dx.

Performing a change of variable by redefining x := ax yields

F(s/a) = a ∫_0^∞ e^{−sx} f(ax) dx.

Finally, divide both sides by the constant a > 0 to get

(1/a) F(s/a) = ∫_0^∞ e^{−sx} f(ax) dx = L[f(ax)].

This ends the proof of the so-called “change of scale theorem.” ♦
Theorem 8 Suppose that each of f(x) and f′(x) is a continuous function on the number set [0, ∞). Let s_0 be a positive real number. If each of L[f(x)](s) ≡ L[f(x)] and L[f′(x)](s) ≡ L[f′(x)] exists for ℜ(s) > s_0, then

L[f′(x)] = s L[f(x)] − f(0),    (ℜ(s) > s_0).
Proof: Starting with the operational definition of the Laplace transform,

L[f′(x)] = lim_{R→∞} ∫_0^R e^{−sx} f′(x) dx.

Perform integration by parts to obtain

∫_0^R e^{−sx} f′(x) dx = e^{−sx} f(x) |_0^R + s ∫_0^R e^{−sx} f(x) dx.

Therefore,

∫_0^R e^{−sx} f′(x) dx = s ∫_0^R e^{−sx} f(x) dx − f(0) + e^{−sR} f(R).    (3.3)

To complete the proof of this theorem, it remains to show that

lim_{R→∞} e^{−sR} f(R) = 0.

The existence of L[f(x)], L[f′(x)], and the fact that f(0) is finite imply that

lim sup_{R→∞} e^{−s_0 R} f(R) = M < +∞.

Suppose that ℜ(s) > s_0 is fixed, but arbitrary. Then

lim_{R→∞} e^{−sR} f(R) = [lim_{R→∞} e^{−s_0 R} f(R)] [lim_{R→∞} e^{−(s−s_0)R}] ≤ M · [lim_{R→∞} e^{−(s−s_0)R}] = M · 0 = 0.

Let R → +∞ in Equation (3.3). Then Equation (3.3) becomes

L[f′(x)] = ∫_0^∞ e^{−sx} f′(x) dx = s ∫_0^∞ e^{−sx} f(x) dx − f(0) = s L[f(x)] − f(0),

also written as

L[f′(x)] = s F(s) − f(0).

This ends the proof of the so-called “derivative theorem.” ♦
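A symbolic spot check of the derivative theorem is straightforward. The sketch below uses SymPy and the sample function f(x) = e^{−2x}, both assumptions made for illustration.

```python
# Sketch: spot-checking L[f'] = s*F(s) - f(0) for f(x) = e^{-2x}.
import sympy as sp

x, s = sp.symbols('x s', positive=True)
f = sp.exp(-2*x)

F = sp.laplace_transform(f, x, s, noconds=True)               # 1/(s + 2)
lhs = sp.laplace_transform(sp.diff(f, x), x, s, noconds=True)
rhs = s*F - f.subs(x, 0)
residual = sp.simplify(lhs - rhs)
```

Both sides equal −2/(s + 2), so the residual is zero.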
Name            Symbol and Definition

Bessel’s        J_0(x) := Σ_{n=0}^∞ (−1)^n x^{2n} / (2^{2n} (n!)^2).

Beta            B(x, y) := ∫_0^1 u^{x−1} (1 − u)^{y−1} du,    0 < x, y.

Dirac delta     δ(x) := L^{−1}[1];    ∫_{−∞}^∞ f(x) δ(x) dx = f(0);
                δ(x − x_0) := L^{−1}[e^{−s x_0}],    ℜ(s) > 0, x_0 real.

Delta prime     δ′(x − x_0) := L^{−1}[s e^{−s x_0}],    ℜ(s) > 0.

Gamma           Γ(x) := ∫_0^∞ e^{−u} u^{x−1} du;
                Γ(n) = (n − 1)!, n a positive integer;    Γ(1/2) = √π.

Gamma prime     Γ′(x) := ∫_0^∞ u^{x−1} e^{−u} ln(u) du,    x ∈ (0, ∞);
                Γ′(1) = −0.577215 . . ., the negative of the Euler constant.

Heaviside       H(x − x_0) := { 1, x ≥ x_0;  0, x < x_0.

Sine integral   Si(x) := ∫_x^∞ (sin u)/u du.

Table 3.1: Named Functions
Problems

1. Use the translation theorem to show that

L[x^n e^{ax}] = n!/(s − a)^{n+1}.

2. Prove that

L[sin(kx)] = k/(s^2 + k^2).

(Hint: Use integration by parts.)

3. Use the derivative theorem to show that if L[1] = 1/s, then L[x] = 1/s^2.

4. Use the derivative theorem to show that if L[sin(kx)] = k/(s^2 + k^2), then L[cos(kx)] = s/(s^2 + k^2).

5. Use the derivative theorem to show that if L[cos(kx)] = s/(s^2 + k^2) and L[sin(kx)] = k/(s^2 + k^2), then L[x cos(kx)] = (s^2 − k^2)/(s^2 + k^2)^2.

6. Use the derivative theorem to show that if L[cos(kx)] = s/(s^2 + k^2) and L[sin(kx)] = k/(s^2 + k^2), then L[x sin(kx)] = 2ks/(s^2 + k^2)^2.
Solutions

1. Use the translation theorem to show that

L[x^n e^{ax}] = n!/(s − a)^{n+1}.

Recall that L[x^n] = n!/s^{n+1}. The translation theorem states that

L[e^{ax} f(x)] = F(s − a)    ∀ a ∈ (−∞, ∞).

Therefore,

L[e^{ax} x^n] = n!/(s − a)^{n+1}    ∀ ℜ(s) > a.
2. Prove that

L[sin(kx)] = k/(s^2 + k^2).

(Hint: Use integration by parts.)

Starting from the operational definition of F(s),

L[sin(kx)] = ∫_0^∞ e^{−sx} sin(kx) dx
           = (−cos(kx)/k) e^{−sx} |_0^∞ − (s/k) ∫_0^∞ e^{−sx} cos(kx) dx
           = 1/k − (s/k) ∫_0^∞ e^{−sx} cos(kx) dx
           = 1/k − (s sin(kx)/k^2) e^{−sx} |_0^∞ − (s^2/k^2) ∫_0^∞ e^{−sx} sin(kx) dx
           = 1/k − (s^2/k^2) F(s).

Therefore,

F(s) (1 + s^2/k^2) = 1/k,

F(s) = (1/k) · k^2/(s^2 + k^2) = k/(s^2 + k^2).
3. Use the derivative theorem to show that if L[1] = 1/s, then L[x] = 1/s^2.

Recall the derivative theorem

L[f′(x)] = s L[f(x)] − f(0^+).

Take f(x) = x, so that f′(x) = 1 and L[1] = 1/s, ∀ x ≥ 0, ℜ(s) > 0. Then write

1/s = L[f′(x)] = s L[f(x)] − f(0).

Therefore,

L[f(x)] = L[x] = (1/s)(1/s − 0) = 1/s^2.
4. Use the derivative theorem to show that if L[sin(kx)] = k/(s^2 + k^2), then L[cos(kx)] = s/(s^2 + k^2).

Immediately we write f(x) = sin(kx) and obtain

L[k cos(kx)] = s L[sin(kx)] − sin(k · 0) = s L[sin(kx)].

From the fact that L is linear,

L[cos(kx)] = (s/k) (k/(s^2 + k^2)) = s/(s^2 + k^2).
5. Use the derivative theorem to show that if L[cos(kx)] = s/(s^2 + k^2) and L[sin(kx)] = k/(s^2 + k^2), then L[x cos(kx)] = (s^2 − k^2)/(s^2 + k^2)^2.

Set f(x) = x cos(kx) and use

L[f′′(x)] = s^2 L[f(x)] − s f(0) − f′(0).

Doing the calculations,

f(x) = x cos(kx), f(0) = 0;
f′(x) = cos(kx) − kx sin(kx), f′(0) = 1;
f′′(x) = −2k sin(kx) − k^2 x cos(kx).

Plug in the functions and compute:

L[−2k sin(kx) − k^2 x cos(kx)] = s^2 L[x cos(kx)] − s · 0 − 1.

Therefore,

L[x cos(kx)] = (1/(s^2 + k^2)) (−2k^2/(s^2 + k^2) + 1) = (s^2 − k^2)/(s^2 + k^2)^2.
6. Use the derivative theorem to show that if L[cos(kx)] = s/(s^2 + k^2) and L[sin(kx)] = k/(s^2 + k^2), then L[x sin(kx)] = 2ks/(s^2 + k^2)^2.

Set f(x) = x sin(kx) and use

L[f′′(x)] = s^2 L[f(x)] − s f(0) − f′(0).

Doing the calculations,

f(x) = x sin(kx), f(0) = 0;
f′(x) = sin(kx) + kx cos(kx), f′(0) = 0;
f′′(x) = 2k cos(kx) − k^2 x sin(kx).

Plug in the functions and compute:

L[2k cos(kx) − k^2 x sin(kx)] = s^2 L[x sin(kx)] − s · 0 − 0.

Therefore,

L[x sin(kx)] = (1/(s^2 + k^2)) (2ks/(s^2 + k^2)) = 2ks/(s^2 + k^2)^2.
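The results of Problems 5 and 6 can be cross-checked by machine. The sketch below uses SymPy as an assumed tool; the hand derivations above remain the actual proofs.

```python
# Sketch: machine verification of L[x cos(kx)] and L[x sin(kx)].
import sympy as sp

x, s, k = sp.symbols('x s k', positive=True)

r_cos = sp.simplify(sp.laplace_transform(x*sp.cos(k*x), x, s, noconds=True)
                    - (s**2 - k**2)/(s**2 + k**2)**2)
r_sin = sp.simplify(sp.laplace_transform(x*sp.sin(k*x), x, s, noconds=True)
                    - 2*k*s/(s**2 + k**2)**2)
```

Both residuals simplify to zero.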
3.3 The Inverse Transform

Instead of starting from a function f(x) defined for x ∈ [0, ∞), one might have a function F(s) defined for a complex variable s on a strip s_0 ≤ ℜ(s) ≤ s_1, where s_0 is a real number and s_1 is either a real number or the symbol +∞. The inverse problem is to determine a function f(x), given the function F(s), such that F(s) = L[f(x)]. We are tempted to write

f(x) = L^{−1}[F(s)],    (3.4)

but we have to ask if the operator L^{−1} is well defined. If F(s) is analytic on some nonempty open set, then it is clear from the definition of the integral that L^{−1} exists and is linear. Assuming for the moment that such a function f(x) can be uniquely determined, we say that f(x) is the inverse transform of F(s). The problems are resolved by the following theorem on the uniqueness of the inverse Laplace transform.
Theorem 9 Let s_0 be a real number. Suppose that each of f(x) and g(x) is continuous on [0, ∞) and that each of F(s) and G(s) is a continuous function of the complex variable s such that ℜ(s) ≥ s_0. If F(s) = G(s) ∀ s such that ℜ(s) ≥ s_0, then f(x) = g(x) for all x ∈ [0, ∞).
The above theorem is justification for the use of tables of inverse Laplace transforms. Such tables are invaluable in the solution of problems in physics, engineering, biology, statistics, and reliability. However, there are certain discontinuous functions, the prime examples being the so-called Dirac delta function and the unit step function, which also have Laplace transforms. Their uniqueness will be assumed until sufficient lemmata is developed to prove it rigorously. In Table 3.2, H(x) is the Heaviside function, which is 1 whenever x ≥ 0 and 0 whenever x < 0, δ(x) is the Dirac delta function, and a is a real number. The Dirac delta function is also known as the impulse function. It is actually a linear functional rather than a function; however, the name is now in common usage. The Heaviside function and the Dirac delta function are plotted in Figures 3.2 and 3.3, respectively.

We can resort to methods of complex analysis to recover an original function f(x) from an image function F(s). If the defining improper integral for F(s) exists for some real number s_0, then the image function F(s) is a single-valued analytic function of the complex variable s for all s such that ℜ(s) > s_0. The inverse Laplace transform of F(s) can be found directly by complex analysis from the improper integral

f(x) = (1/(2πi)) ∫_{c−i∞}^{c+i∞} e^{sx} F(s) ds,

where c is a real number such that every singularity of F(s) lies in the half plane ℜ(s) < c. Recall that

∫_{c−i∞}^{c+i∞} e^{sx} F(s) ds := lim_{R→∞} ∫_{c−iR}^{c+iR} e^{sx} F(s) ds.

Finally, the above theorem can be generalized. If we simply allow each of f(x) and g(x) to have Laplace transforms F(s) and G(s), respectively, without regard to their continuity, then whenever F(s) = G(s) ∀ ℜ(s) > s_0, f(x) ≡ g(x) in the sense that

∫_0^∞ e^{−sx} [f(x) − g(x)] dx = 0.

(Otherwise stated, f(x) = g(x) almost everywhere.)
[Figure 3.2: The Heaviside Function H(x − x_0) (x_0 > 0)]

[Figure 3.3: The Dirac Delta Function δ(x − x_0) = H′(x − x_0) (x_0 > 0)]
f(x)              L[f(x)]          Strip of convergence

e^{−ax} H(x)      1/(s + a)        −a < ℜ(s)
H(x)              1/s              0 < ℜ(s)
x H(x)            1/s^2            0 < ℜ(s)
δ(x)              1                all s
δ′(x)             s                all s
sin(kx)           k/(s^2 + k^2)    0 < ℜ(s)
cos(kx)           s/(s^2 + k^2)    0 < ℜ(s)
x e^{−ax} H(x)    1/(s + a)^2      −a < ℜ(s)

Table 3.2: Laplace Transform Pairs
3.4 Transforms and Differential Equations

Ordinary differential equations with constant coefficients can easily be solved with Laplace transform methods. One simply takes the transform of the function and its derivatives, performs algebraic manipulation in the complex variable, and computes the inverse transform. The only troublesome portion is the computation of the inverse Laplace transform. Modern symbolic mathematical software has been a boon in solving such problems.
Example 1.

Solve the differential equation using Laplace transform techniques

y′ + 2y = e^{−2x},

subject to the initial condition y(0) = 1. Take the Laplace transform of both sides of the above equation and apply the derivative theorem:

s Y(s) − 1 + 2 Y(s) = 1/(s + 2),    where Y(s) = L[y(x)].

Solve the above algebraic equation for Y(s):

Y(s) = 1/(s + 2)^2 + 1/(s + 2).

Therefore, we have

y(x) = L^{−1}[1/(s + 2)^2 + 1/(s + 2)] = x e^{−2x} + e^{−2x} = e^{−2x}(x + 1).

Of course, this is a first order linear differential equation and we could have used the integrating factor e^{2x} from Section 2.6.
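The same transform workflow can be carried out in software. The sketch below uses SymPy (an assumed tool): it solves the transformed algebraic equation for Y(s) and confirms that it matches the forward transform of the claimed solution y(x) = (x + 1)e^{−2x}.

```python
# Sketch: the Laplace workflow of Example 1 in SymPy.
import sympy as sp

x, s = sp.symbols('x s', positive=True)
Y = sp.symbols('Y')

# Transformed equation: s*Y - 1 + 2*Y = 1/(s + 2).
Ys = sp.solve(sp.Eq(s*Y - 1 + 2*Y, 1/(s + 2)), Y)[0]

# Forward-transform the claimed solution and compare.
claimed_F = sp.laplace_transform((x + 1)*sp.exp(-2*x), x, s, noconds=True)
residual = sp.simplify(Ys - claimed_F)
```

Both expressions reduce to (s + 3)/(s + 2)^2, so the residual is zero.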
3.5 Partial Fractions

Let each of N(s) and D(s) be a polynomial in the complex variable s having real coefficients. If the degree of N(s) is less than that of D(s), then the inverse transform of N(s)/D(s), that is,

L^{−1}[N(s)/D(s)],

exists. It is customary to evaluate this rational polynomial by means of a partial fraction decomposition. From the Fundamental Theorem of Algebra it follows that D(s) can be factored into a product of two types of factors, each with real coefficients: (1) (s − s_j)^n and (2) (s^2 + t_k s + s_k)^m, where each of m and n is a nonnegative integer. The first can be rationalized into terms of the form

a_1/(s − s_j) + a_2/(s − s_j)^2 + · · · + a_n/(s − s_j)^n;

the second can also be rationalized into

(b_1 s + c_1)/(s^2 + t_k s + s_k) + (b_2 s + c_2)/(s^2 + t_k s + s_k)^2 + · · · + (b_m s + c_m)/(s^2 + t_k s + s_k)^m.

The most elementary partial fraction decomposition occurs when each of the roots of D(s) is real and distinct. In this case, we have the following:

N(s)/D(s) = a_1/(s − s_1) + a_2/(s − s_2) + · · · + a_n/(s − s_n).
Example 1.

Given F(s), find f(x) = L^{−1}[F(s)].

F(s) = s/(s^2 − 1) = s/((s + 1)(s − 1)).

We decompose into partial fractions:

s/((s + 1)(s − 1)) = A/(s + 1) + B/(s − 1),

s = As − A + Bs + B,

A + B = 1, B − A = 0  ⇒  2B = 1, B = 1/2, A = 1/2.

L^{−1}[s/(s^2 − 1)] = (1/2) L^{−1}[1/(s + 1)] + (1/2) L^{−1}[1/(s − 1)]
                    = (1/2) e^{−x} + (1/2) e^x = cosh(x), for x ≥ 0.

We notice that for F(s) = 1/((s + 1)(s − 1)) = (1/2) 1/(s − 1) − (1/2) 1/(s + 1),

L^{−1}[1/((s + 1)(s − 1))] = (1/2) e^x − (1/2) e^{−x} = sinh(x).
Example 2.

Given F(s), find f(x) = L^{−1}[F(s)].

F(s) = 1/(s^2 (s + 1)).

We decompose into partial fractions:

1/(s^2 (s + 1)) = A/s + B/s^2 + C/(s + 1),

1 = As(s + 1) + B(s + 1) + Cs^2 = (A + C)s^2 + (A + B)s + B,

giving

B = 1, A + B = 0, A = −1, C = −A = 1.

Therefore,

L^{−1}[F(s)] = L^{−1}[−1/s + 1/s^2 + 1/(s + 1)] = −1 + x + e^{−x}.
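Modern symbolic software automates this decomposition. The sketch below shows SymPy's `apart` (an assumed tool) reproducing the partial fractions of Example 2.

```python
# Sketch: SymPy's apart reproduces the decomposition of Example 2.
import sympy as sp

s = sp.symbols('s')
F = 1/(s**2*(s + 1))
parts = sp.apart(F, s)
residual = sp.simplify(parts - (-1/s + 1/s**2 + 1/(s + 1)))
```

The residual is zero; applying an inverse-transform table term by term then yields −1 + x + e^{−x} as above.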
3.6 Sufficient Conditions

The following theorem expresses some conditions on the function f(x), x ∈ [0, ∞), sufficient to ensure that L[f(x)] = F(s) exists. These are not the weakest conditions possible; there are functions which possess Laplace transforms and fail to meet each of the conditions. However, the theorem covers real phenomena for most engineering work. Before stating and proving the theorem, we need to introduce some new concepts.

Definition 3 The statement that the function f(x) is little-o of g(x) as x → x_0 means that

lim_{x→x_0} |f(x)|/|g(x)| = 0.

The statement that the function f(x) is Big-O of g(x) as x → x_0 means that

lim sup_{x→x_0} |f(x)|/|g(x)| = M < ∞,

where the notation M < ∞ means “M is finite.” The above definitions are written f(x) = o(g(x)) and f(x) = O(g(x)), respectively.

If one simply says that f(x) is O(g(x)), then it is assumed that f(x) is O(g(x)) for all x in the domain of definition of f. A similar statement applies to o(g(x)). Let a be a real number. We say that f(x) is of exponential order if there exists some number x_0 such that

f(x) = O(e^{ax})

for all x ∈ [x_0, ∞). Clearly, if f(x) is defined on [0, ∞) and f(x) = O(e^{ax}), then f(x) = O(e^{ax}) as x → ∞. All bounded functions are O(e^{ax}) ∀ a ≥ 0. The exponential function e^{kx}, k ≥ 0, is O(e^{ax}) for all a > k.

We now proceed to the important theorem. While the proof is somewhat technical, it is important because it presents techniques that can be used with functions which fail to satisfy the hypotheses yet still possess Laplace transforms.
Theorem 10 Let a be a real number and let f(x) be defined for all x ∈ [0, ∞). If f(x) = O(e^{ax}) and f(x) is sectionally continuous, then

F(s) = L[f(x)] = ∫_0^∞ e^{−sx} f(x) dx

exists for ℜ(s) > a. Moreover, ∫_0^∞ e^{−sx} f(x) dx is absolutely convergent ∀ ℜ(s) > a.
Proof: Let {x_1, x_2, . . .} be an increasing sequence of positive numbers such that

lim_{n→∞} x_n = +∞

and let each point of discontinuity, x_k, of f(x) belong to {x_1, x_2, . . .}. (We may consider {x_k}_{k=1}^∞ to be the set of discontinuities of f(x) WLOG.) Let 0 < r ≪ x_1. Consider

∫_0^∞ e^{−sx} f(x) dx = lim_{r→0^+} ∫_r^{x_1} e^{−sx} f(x) dx + ∫_{x_1}^{x_2} e^{−sx} f(x) dx + · · · + ∫_{x_{n−1}}^{x_n} e^{−sx} f(x) dx + lim_{R→∞} ∫_{x_n}^R e^{−sx} f(x) dx

= ∫_0^{x_1} e^{−sx} f(x) dx + · · · + ∫_{x_{n−1}}^{x_n} e^{−sx} f(x) dx + · · ·

because f(x) is continuous from the right at 0, i.e., f(0^+) = lim_{r→0^+} f(r) = lim_{h→0} f(|h|) exists and is finite. Also,

lim_{R→∞} ∫_{x_n}^R e^{−sx} f(x) dx ≤ lim_{R→∞} ∫_{x_n}^R e^{−sx} M e^{ax} dx = lim_{R→∞} −(M/(a − s)) e^{−(s−a)x} |_{x_n}^R = (M/(s − a)) e^{−(s−a)x_n} → 0 as n → ∞.

Therefore, let x_0 := 0 so that

∫_0^∞ e^{−sx} f(x) dx = Σ_{k=1}^∞ ∫_{x_{k−1}}^{x_k} e^{−sx} f(x) dx;

∫_0^∞ e^{−sx} f(x) dx ≤ Σ_{k=1}^∞ ∫_{x_{k−1}}^{x_k} e^{−sx} M e^{ax} dx.

For each positive integer k, we have

∫_{x_{k−1}}^{x_k} e^{−sx} |f(x)| dx ≤ M e^{−(s−a)x}/(−(s − a)) |_{x_{k−1}}^{x_k} = (M/(s − a)) [e^{−(s−a)x_{k−1}} − e^{−(s−a)x_k}],    (s > a).

The series

Σ_{k=1}^∞ (M/(s − a)) [e^{−(s−a)x_{k−1}} − e^{−(s−a)x_k}]

telescopes to give (∀ n)

∫_0^{x_n} e^{−sx} |f(x)| dx ≤ (M/(s − a)) [e^{−(s−a)·0} − e^{−(s−a)x_n}] → M/(s − a) as n → ∞.

Hence

∫_0^∞ e^{−sx} |f(x)| dx ≤ M/(s − a) < ∞.

Therefore,

∫_0^∞ e^{−sx} f(x) dx

exists for all ℜ(s) > a. Therefore F(s) = L[f(x)] exists. ♦
Problems

1. Show that the function f(x) = e^{x^2} is continuous on [0, ∞) and that it does not possess a Laplace transform. Deduce that sectional continuity is not a sufficient condition to ensure that the Laplace transform exists.

2. Show that the function f(x) = 1/x^2 is O(e^x) on [1, ∞) (in fact, f(x) = 1/x^2 is O(e^x) on [ε, ∞) for any positive number ε). Prove that this function does not have a Laplace transform. This shows that there exist functions of exponential order which fail to possess a Laplace transform.

3. Find the Laplace transform of the real-valued function

f(x) = 2x cos(e^{x^2}) e^{x^2}

defined on the interval [0, ∞). Prove that this function is not of exponential order, that is, that this function is not O(e^{ax}) for any real number a and as x → x_0 for any x_0 ∈ [0, ∞).

4. Prove that if f′(x) is continuous on [0, ∞) and if each of L[f(x)] and L[f′(x)] exists for some real value of s, say s = a, then f(x) is of exponential order e^{ax}.
Solutions

1. Show that the function f(x) = e^{x^2} is continuous on [0, ∞) and that it does not possess a Laplace transform. Deduce that sectional continuity is not a sufficient condition to ensure that the Laplace transform exists.

A continuous function of a continuous function is a continuous function, that is, the composition of two (or more) continuous functions is continuous:

f ◦ g (x) = f(g(x))

is continuous whenever each of f and g is. The function f(x) = e^{x^2} does not possess a Laplace transform because for any complex number s we find

lim_{R→∞} ∫_0^R e^{−sx} e^{x^2} dx = e^{s^2/4} lim_{R→∞} ∫_0^R e^{(x−s/2)^2} dx ≥ e^{s^2/4} lim_{R→∞} ∫_0^R 1 · dx = +∞.

Thus the improper integral

∫_0^∞ e^{−sx} e^{x^2} dx

fails to exist for any complex number s.
2. Show that the function f(x) = 1/x^2 is O(e^x) on [1, ∞) (in fact, f(x) = 1/x^2 is O(e^x) on [ε, ∞) for any positive number ε). Prove that this function does not have a Laplace transform. This shows that there exist functions of exponential order which fail to possess a Laplace transform.

We observe that h(x) = f(x)/e^x = e^{−x}/x^2 is a strictly decreasing function on [ε, ∞) since

h′(x) = −(e^{−x}/x^3)(x + 2) < 0    ∀ x ≥ ε > 0.

Therefore, on the interval [ε, ∞), h(x) has an absolute maximum at ε:

max{|h(x)| | x ∈ [ε, ∞)} = e^{−ε}/ε^2  ⇒  f(x) = O(e^x) on [ε, ∞).

Now we argue by contradiction that f(x) does not have a Laplace transform. If it did, then the improper integral

∫_0^1 f(x) e^{−sx} dx = lim_{r→0^+} ∫_r^1 f(x) e^{−sx} dx

would exist (and be finite). However, for s > 0,

lim_{r→0^+} ∫_r^1 (e^{−sx}/x^2) dx ≥ e^{−s} lim_{r→0^+} ∫_r^1 x^{−2} dx = lim_{r→0^+} −x^{−1} |_r^1 = lim_{r→0^+} [1/r − 1] = +∞.

The improper integral fails to exist.
3. Find the Laplace transform of the real-valued function

f(x) = 2x cos(e^{x^2}) e^{x^2}

defined on the interval [0, ∞). Prove that this function is not of exponential order, that is, that this function is not O(e^{ax}) for any real number a and as x → x_0 for any x_0 ∈ [0, ∞).

From the operational definition of L[f(x)], compute

L[f(x)] = ∫_0^∞ e^{−sx} f(x) dx = ∫_0^∞ e^{−sx} 2x e^{x^2} cos(e^{x^2}) dx.

Integrate by parts with

u(x) = e^{−sx},    du(x) = −s e^{−sx} dx;
dv(x) = 2x e^{x^2} cos(e^{x^2}) dx,    v(x) = sin(e^{x^2}).

Then

L[f(x)] = e^{−sx} sin(e^{x^2}) |_0^∞ + s ∫_0^∞ e^{−sx} sin(e^{x^2}) dx
        = −sin(1) + s ∫_0^∞ e^{−sx} sin(e^{x^2}) dx,    (ℜ(s) > 0).

The integral

∫_0^∞ e^{−sx} sin(e^{x^2}) dx

can be shown to exist for all ℜ(s) > 0 from a theorem in advanced calculus.
4. Prove that if f′(x) is continuous on [0, ∞) and if each of L[f(x)] and L[f′(x)] exists for some real value of s, say s = a, then f(x) is of exponential order e^{ax}.

From the operational definition of F(s) = L[f(x)],

∫_0^∞ f′(x) e^{−ax} dx ≤ M < +∞    and    ∫_0^∞ f(x) e^{−ax} dx ≤ M̃ < +∞.

Integrate by parts:

lim_{R→∞} f(R) e^{−aR} − lim_{r→0^+} f(r) e^{−ar} + a ∫_0^∞ f(x) e^{−ax} dx ≤ M

⇒

lim_{R→∞} f(R) e^{−aR} ≤ M + f(0) + |a| M̃.

From the above and the continuity of f′(x), lim sup_{x→∞} |f(x)| e^{−ax} exists, and |f(x)| is finite on bounded intervals. Therefore, f(0) is finite and f(x) = O(e^{ax}).
3.7 Convolution

If each of f and g is a sectionally continuous function of exponential order, then

(f ∗ g)(x) := ∫_0^x f(u) g(x − u) du

is a sectionally continuous function of exponential order. We refer to f ∗ g as the convolution of f and g. Moreover,

L[f ∗ g] = F(s) G(s),    (ℜ(s) > max{a_1, a_2}),

where

f(x) = O(e^{a_1 x}) and g(x) = O(e^{a_2 x}).

For unbounded regions, multiple integrals can sometimes be separated into iterated integrals and further separated into a product of two (or more) usual Riemann integrals:

F(s) G(s) = [∫_0^∞ e^{−sx} f(x) dx] [∫_0^∞ e^{−sx} g(x) dx]
          = lim_{R→∞} ∫_0^{2R} e^{−sx} [∫_0^x f(u) g(x − u) du] dx,    (ℜ(s) > a).

The convolution property, L^{−1}[F(s) G(s)] = (f ∗ g)(x), can be used to prove a number of corollaries and determine a number of useful Laplace transforms. Let f(x) = O(e^{ax}); then

L[∫_0^x f(u) du] = (1/s) F(s),    (ℜ(s) > (1/2)(a + |a|)).

We can compute the inverse transform of F(s)/s^2, for example:

(1/s^2) F(s) = L[f(x) ∗ 1 ∗ 1] = L[(∫_0^x f(u) du) ∗ 1] = L[∫_0^x du ∫_0^u f(v) dv].
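The corollary L[∫_0^x f(u) du] = F(s)/s is a convolution with 1 and is easy to check symbolically. The sketch below assumes SymPy and the sample choice f = sin, both made for illustration.

```python
# Sketch: checking L[f * 1] = F(s)/s for the sample f(u) = sin(u).
import sympy as sp

x, u, s = sp.symbols('x u s', positive=True)

conv = sp.integrate(sp.sin(u), (u, 0, x))  # f * 1 = integral of f on [0, x]
lhs = sp.laplace_transform(conv, x, s, noconds=True)
F = sp.laplace_transform(sp.sin(x), x, s, noconds=True)
residual = sp.simplify(lhs - F/s)
```

Both sides reduce to 1/(s(s^2 + 1)), so the residual is zero.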
3.8 Useful Functions and Functionals

The most basic building block in Laplace transform theory, and the principal, prime paradigm of functions with jump discontinuities, is the so-called Heaviside function (also known as the unit step function). Its operational definition is

H(x − x_0) = { 1, ∀ x ≥ x_0;  0, ∀ x < x_0.

The graph is shown in Figure 3.2. In the definition, it is assumed that x_0 > 0. Although this function is discontinuous, it is O(e^{ax}) for each a > 0 and possesses a Laplace transform:

L[H(x − x_0)] = ∫_0^∞ H(x − x_0) e^{−sx} dx = ∫_0^{x_0} 0 · e^{−sx} dx + ∫_{x_0}^∞ 1 · e^{−sx} dx = e^{−s x_0}/s,    (ℜ(s) > 0).

The function f(x) = 1 ∀ x ∈ [0, ∞) is a special case when x_0 = 0. In this case,

L[1] = L[H(x − 0)] = (1/s) e^{−s·0} = 1/s.

Many functions can conveniently be written using the Heaviside function notation.

One of the most frequently encountered and useful properties that a real-valued function can possess is that of being periodic. A function f(x) is periodic (of period τ) if f(x) = f(x + τ) ∀ x ∈ (−∞, ∞). The Laplace transform of a periodic function has a special characterization:

F(s) = ∫_0^∞ e^{−sx} f(x) dx = ∫_0^τ e^{−sx} f(x) dx + ∫_τ^∞ e^{−sx} f(x) dx.
We will make a substitution, or change of variable, so that when x = τ, y = 0: y = x − τ, x = y + τ, and dy = dx. Then

∫_τ^∞ e^{−sx} f(x) dx = ∫_0^∞ e^{−s(y+τ)} f(y + τ) dy = ∫_0^∞ e^{−s(y+τ)} f(y) dy,

because f(y) = f(y + τ). The variable of integration, y, is a “dummy,” so we may define x := y:

∫_0^∞ e^{−s(y+τ)} f(y) dy = ∫_0^∞ e^{−s(x+τ)} f(x) dx = e^{−sτ} ∫_0^∞ e^{−sx} f(x) dx = e^{−sτ} F(s).

Therefore,

∫_0^∞ e^{−sx} f(x) dx = ∫_0^τ e^{−sx} f(x) dx + e^{−sτ} ∫_0^∞ e^{−sx} f(x) dx.

It follows that

F(s) = (1 − e^{−sτ})^{−1} ∫_0^τ e^{−sx} f(x) dx.
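The periodic-function formula can be tested on a concrete case. The sketch below (assuming SymPy, with f(x) = sin(x) and τ = 2π chosen for illustration) recovers the known transform 1/(s^2 + 1).

```python
# Sketch: periodic-function formula applied to f(x) = sin(x), tau = 2*pi.
import sympy as sp

x, s = sp.symbols('x s', positive=True)
tau = 2*sp.pi

one_period = sp.integrate(sp.exp(-s*x)*sp.sin(x), (x, 0, tau))
F = sp.simplify(one_period / (1 - sp.exp(-s*tau)))
residual = sp.simplify(F - 1/(s**2 + 1))
```

The factor (1 − e^{−2πs}) cancels and F reduces to 1/(s^2 + 1).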
We may employ the Heaviside function, H(x), under certain circumstances to characterize a periodic function. If f(x) is a periodic function, then

f_p(x) := H(x) f(x) − H(x − τ) f(x − τ),

where f_p is f on [0, τ] and zero elsewhere. For other functions f(x), we may also construct such a function f_p,

f_p(x) = { f(x), 0 ≤ x < τ;  0, otherwise,

by a variety of artifacts.

Example 1.

The so-called square wave function of period τ = 2a:

f_p(x) = H(x) − 2H(x − a) + H(x − 2a).

Example 2.

The so-called sawtooth function of period τ = 1:

f_p(x) = H(x) x − H(x − 1)(x − 1) − H(x − 1).

Example 3.

The so-called triangular wave function of period τ = 2a:

f_p(x) = (1/a) [H(x) x − 2H(x − a)(x − a) + H(x − 2a)(x − 2a)].

Example 4.

The so-called half-wave rectified sine wave function of period τ = 2π/k:

f_p(x) = H(x) sin(kx) + H(x − π/k) sin[k(x − π/k)].

Example 5.

The so-called full-wave rectified sine wave function of period τ = π/k:

f_p(x) = H(x) sin(kx) + H(x − π/k) sin[k(x − π/k)].
In reliability engineering (see [10, page 274]), one frequently has need of the age distribution function, g. It is customary to use t instead of x as the independent variable and write
\[
g(t; t_0) := \begin{cases}
\psi(t_0 - t)\,[1 - \phi(t)], & t < t_0; \\
\delta(t - t_0)\,[1 - \phi(t)], & t = t_0; \\
0, & t > t_0;
\end{cases}
\]
where δ(t − t0) is the so-called Dirac delta function and each of ψ and φ is a probability function, with φ(t) continuous in an open interval containing t0. We need an operational definition for the Dirac delta function (also known as the impulse function). One could say that
\[
\delta(x - x_0) := \mathcal{L}^{-1}\!\left[e^{-sx_0}\right],
\]
where ℜ(s) > 0. Could this definition suffice? Suppose we let 0 < ε ≪ 1 and
\[
\delta_\varepsilon(x - x_0) = \frac{1}{\varepsilon}\Bigl(H(x - x_0) - H(x - [x_0 + \varepsilon])\Bigr),
\]
so that
\[
\mathcal{L}[\delta_\varepsilon(x - x_0)] = e^{-sx_0}\left(\frac{1 - e^{-s\varepsilon}}{s\varepsilon}\right),
\qquad
\lim_{\varepsilon \to 0} \mathcal{L}[\delta_\varepsilon(x - x_0)] = e^{-sx_0}.
\]
Thus we have a function δ(x) = 0 for all x ≠ 0, with δ(0) undefined, whereas
\[
\int_{-\infty}^{+\infty} \delta(x - x_0)\, f(x)\, dx = f(x_0)
\]
whenever f(x) is continuous in some open neighborhood containing x0.
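The limit of the rectangular-pulse transform can be checked numerically. The short Python sketch below (illustrative only; the values s = 2 and x0 = 1.5 are our own) evaluates the exact transform of δ_ε for shrinking ε and compares it with e^{−s x0}:

```python
import math

def laplace_delta_eps(s, x0, eps):
    # exact transform of the rectangular pulse (H(x - x0) - H(x - (x0 + eps)))/eps
    return math.exp(-s * x0) * (1 - math.exp(-s * eps)) / (s * eps)

s, x0 = 2.0, 1.5
target = math.exp(-s * x0)
for eps in (1.0, 0.1, 0.01, 0.001):
    print(eps, laplace_delta_eps(s, x0, eps))
print(target)
```

The printed values approach e^{−3} ≈ 0.0498 as ε shrinks.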
3.9 Second Order Differential Equations
The most common second order, ordinary differential equation encountered anywhere—engineering, physics, biology, ecology, etc.—is the initial value problem with constant coefficients
\[
y'' + 2ay' + c^2 y = q(x),
\]
where a ≥ 0 and c > 0. This is a linear ODE subject to the initial conditions
\[
y_0 := y(x_0), \qquad y_0' := y'(x_0), \qquad \text{and usually } x_0 := 0,
\]
so that
\[
y_0 := y(0) \quad \text{and} \quad y_0' = y'(0).
\]
Taking the Laplace transform, we obtain
\[
s^2 Y(s) - s\, y(0) - y'(0) + 2as\, Y(s) - 2a\, y(0) + c^2 Y(s) = Q(s).
\]
Collecting terms yields
\[
Y(s) = \frac{Q(s) + (s + 2a)\, y_0 + y_0'}{s^2 + 2as + c^2}. \tag{3.5}
\]
Case 1.
c > a. In this case the roots of the denominator,
\[
\frac{-2a \pm \sqrt{4a^2 - 4c^2}}{2},
\]
form the solution set
\[
\left\{-a + \sqrt{a^2 - c^2},\; -a - \sqrt{a^2 - c^2}\right\}.
\]
Set b = \sqrt{c^2 - a^2} > 0 to obtain the solution set
\[
\{-a + ib,\; -a - ib\} \qquad (i = \sqrt{-1}).
\]
Re-write Equation (3.5) as
\[
Y(s) = \frac{Q(s) + (s + 2a)\, y_0 + y_0'}{(s + a)^2 + b^2}.
\]
At this point we simplify, by writing
\[
Y(s) = \frac{Q(s)}{(s + a)^2 + b^2} + \frac{a y_0 + y_0'}{(s + a)^2 + b^2} + \frac{y_0 (s + a)}{(s + a)^2 + b^2}. \tag{3.6}
\]
The three terms on the right-hand side of Equation (3.6) can be inverted with the standard processes for taking an inverse Laplace transform:
\[
\mathcal{L}^{-1}\!\left[\frac{Q(s)}{(s + a)^2 + b^2}\right] = q(x) * \left(\frac{e^{-ax} \sin(bx)}{b}\right);
\]
\[
\mathcal{L}^{-1}\!\left[\frac{a y_0 + y_0'}{(s + a)^2 + b^2}\right] = (a y_0 + y_0') \left(\frac{e^{-ax} \sin(bx)}{b}\right);
\]
\[
\mathcal{L}^{-1}\!\left[\frac{y_0 (s + a)}{(s + a)^2 + b^2}\right] = y_0\, e^{-ax} \cos(bx).
\]
The general solution to Case 1, with initial conditions, is given by the equation
\[
y(x) = (a y_0 + y_0')\,\frac{e^{-ax} \sin(bx)}{b} + y_0\, e^{-ax} \cos(bx)
+ \int_0^x q(x - \tau)\,\frac{e^{-a\tau} \sin(b\tau)}{b}\, d\tau. \tag{3.7}
\]
The term e^{-ax} is the damping factor. It is present whenever a > 0 and represents friction or energy loss in the system driven by q(x).
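Equation (3.7) can be spot-checked numerically. The Python sketch below (our own illustration; the parameter values are arbitrary, and q ≡ 0, so only the homogeneous part of (3.7) is exercised) confirms by central differences that the Case 1 solution satisfies y'' + 2ay' + c²y = 0:

```python
import math

a, c = 0.5, 2.0           # c > a: the underdamped case
b = math.sqrt(c * c - a * a)
y0, y0p = 1.0, -0.3       # initial conditions y(0), y'(0)

def y(x):
    # homogeneous part of Equation (3.7), i.e. the solution with q = 0
    return ((a * y0 + y0p) * math.exp(-a * x) * math.sin(b * x) / b
            + y0 * math.exp(-a * x) * math.cos(b * x))

h = 1e-4
for x in (0.5, 1.0, 3.0):
    ypp = (y(x + h) - 2 * y(x) + y(x - h)) / h**2   # central 2nd difference
    yp = (y(x + h) - y(x - h)) / (2 * h)            # central 1st difference
    residual = ypp + 2 * a * yp + c * c * y(x)
    print(x, residual)                               # residual is ~0
```

The residuals are at the level of the finite-difference error, and y(0) = y0 holds exactly.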
Case 2.
c = a. In this case the differential equation may be factored as follows
\[
(D + a)(D + a)\, y = (D + a)(y' + ay) = y'' + 2ay' + a^2 y,
\]
and the denominator in Equation (3.5) has a double root {−a, −a}. The equation becomes
\[
Y(s) = \frac{Q(s) + (s + 2a)\, y_0 + y_0'}{(s + a)^2}
\]
and is readily solved by applying the inverse Laplace transform
\[
\mathcal{L}^{-1}\!\left[\frac{n!}{(s + a)^{n+1}}\right] = e^{-ax} x^n.
\]
Case 3.
c < a. In this case the denominator in Equation (3.5) has two distinct real roots, −a + \sqrt{a^2 - c^2} and −a − \sqrt{a^2 - c^2}. Denote these two roots by r1 and r2, respectively, and apply the method of partial fractions to obtain solutions in terms of exponential functions.
3.10 Systems of Differential Equations
Laplace transforms are useful in solving systems of ordinary differential equations. In particular, the Laplace transform can readily convert a system of linear differential equations with constant coefficients into a system of simultaneous linear algebraic equations. Recall the system of equations in Section 1.8, Equation (1.37):
\[
y'(x) = y(x) + z(x), \qquad z'(x) = 2y(x).
\]
We will now solve this system of equations using Laplace transforms. The transformed system becomes
\[
s Y(s) - y_0 = Y(s) + Z(s),
\]
\[
s Z(s) - z_0 = 2 Y(s),
\]
where Y(s) = \mathcal{L}[y(x)] and Z(s) = \mathcal{L}[z(x)]. We define y_0 := y(x_0), z_0 := z(x_0), for a real number x_0 in the domain of each of y(x) and z(x). We re-write the transformed system as
\[
Y(s)(s - 1) = y_0 + Z(s),
\]
\[
s Z(s) = z_0 + 2 Y(s).
\]
First solve for Z(s):
\[
s Z(s) = z_0 + 2\left(\frac{y_0 + Z(s)}{s - 1}\right),
\]
\[
Z(s)\left(s^2 - s - 2\right) = z_0 (s - 1) + 2 y_0,
\]
\[
Z(s)(s - 2)(s + 1) = z_0 (s - 1) + 2 y_0,
\]
\[
Z(s) = \frac{z_0 (s - 1) + 2 y_0}{(s - 2)(s + 1)}
= \frac{1}{3}\,\frac{z_0 + 2 y_0}{s - 2} + \frac{2}{3}\,\frac{z_0 - y_0}{s + 1}.
\]
Then solve for Y(s):
\[
Y(s)(s - 1) = y_0 + \frac{z_0 + 2 Y(s)}{s},
\]
\[
s(s - 1)\, Y(s) = s y_0 + z_0 + 2 Y(s),
\]
\[
Y(s)\left(s^2 - s - 2\right) = s y_0 + z_0,
\]
\[
Y(s) = \frac{s y_0 + z_0}{(s - 2)(s + 1)}
= \frac{1}{3}\,\frac{2 y_0 + z_0}{s - 2} + \frac{1}{3}\,\frac{y_0 - z_0}{s + 1}.
\]
Applying the elementary inverse Laplace transform pairs, we obtain at once
\[
y(x) = \tfrac{1}{3}(2 y_0 + z_0)\, e^{2x} + \tfrac{1}{3}(y_0 - z_0)\, e^{-x}
\]
and
\[
z(x) = \tfrac{1}{3}(z_0 + 2 y_0)\, e^{2x} + \tfrac{2}{3}(z_0 - y_0)\, e^{-x}.
\]
If we define c_1, c_2 such that c_1 := \tfrac{1}{3}(2 y_0 + z_0) and c_2 := \tfrac{1}{3}(y_0 - z_0), then the solution to the system of equations becomes
\[
y(x) = c_1 e^{2x} + c_2 e^{-x}, \qquad z(x) = c_1 e^{2x} - 2 c_2 e^{-x}.
\]
The last equation is precisely the same as the one in Section 1.8.
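The solution pair can be verified directly. A small Python sketch (our own illustration; the constants c1 and c2 are arbitrary values) checks, by central differences, that y' = y + z and z' = 2y:

```python
import math

c1, c2 = 0.7, -1.2   # arbitrary constants, corresponding to some y0, z0

def y(x): return c1 * math.exp(2 * x) + c2 * math.exp(-x)
def z(x): return c1 * math.exp(2 * x) - 2 * c2 * math.exp(-x)

h = 1e-6
for x in (0.0, 0.5, 1.3):
    yp = (y(x + h) - y(x - h)) / (2 * h)   # numerical y'(x)
    zp = (z(x + h) - z(x - h)) / (2 * h)   # numerical z'(x)
    # both residuals are ~0
    print(x, yp - (y(x) + z(x)), zp - 2 * y(x))
```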
3.11 Heaviside Expansion Formula
One particularly important transform pair, which finds frequent use in reliability engineering, is the Heaviside expansion formula
\[
f(x) = \sum_{k=1}^{n} \frac{P(a_k)}{Q'(a_k)}\, e^{a_k x}
\qquad \longleftrightarrow \qquad
F(s) = \frac{P(s)}{Q(s)}, \tag{3.8}
\]
where each of P and Q is a polynomial, the degree of P is less than the degree of Q, and Q has exactly n distinct real roots {a_1, a_2, \ldots, a_n}. Also significant is the transform pair
\[
f(x) = e^{ax} \sum_{k=1}^{n} \frac{P^{(n-k)}(a)}{(n-k)!}\,\frac{x^{k-1}}{(k-1)!}
\qquad \longleftrightarrow \qquad
F(s) = \frac{P(s)}{(s - a)^n}. \tag{3.9}
\]
We will derive the Heaviside expansion formula of Equation (3.8). Since {a_1, a_2, \ldots, a_n} are the real, distinct roots of Q, we may write
\[
Q(s) = (s - a_1)(s - a_2) \cdots (s - a_n).
\]
Using the partial fractions expansion, we write
\[
\frac{P(s)}{Q(s)} = \frac{b_1}{s - a_1} + \frac{b_2}{s - a_2} + \cdots + \frac{b_n}{s - a_n}. \tag{3.10}
\]
Let k be a positive integer such that 1 ≤ k ≤ n. Multiply both sides by s − a_k and let s → a_k. This will yield
\[
b_k = \frac{P(s)}{Q(s)}(s - a_k)
- \frac{b_1 (s - a_k)}{s - a_1} - \frac{b_2 (s - a_k)}{s - a_2} - \cdots
- \frac{b_{k-1} (s - a_k)}{s - a_{k-1}} - \frac{b_{k+1} (s - a_k)}{s - a_{k+1}} - \cdots - \frac{b_n (s - a_k)}{s - a_n}.
\]
We see at once that, in the limit as s → a_k,
\[
b_k = \lim_{s \to a_k} \frac{P(s)}{Q(s)}(s - a_k) = \frac{0}{0},
\]
since all the terms b_j (s - a_k)/(s - a_j), j ≠ k, vanish. This indeterminate form, 0/0, indicates that we may apply l'Hospital's rule:
\[
b_k = \lim_{s \to a_k} \frac{P(s)}{Q(s)}(s - a_k)
= P(a_k) \lim_{s \to a_k} \frac{s - a_k}{Q(s)}
= P(a_k)\,\frac{1}{Q'(a_k)},
\]
since Q'(a_k) ≠ 0. Indeed, Q'(s) consists of a sum of n terms, one of which is
\[
\prod_{j \ne k} (s - a_j)
\]
and is nonzero at s = a_k, while each of the other (n − 1) terms contains the factor (s − a_k) and vanishes there. Thus Equation (3.10) can be written
\[
\frac{P(s)}{Q(s)} = \frac{P(a_1)}{Q'(a_1)}\,\frac{1}{s - a_1}
+ \frac{P(a_2)}{Q'(a_2)}\,\frac{1}{s - a_2}
+ \cdots + \frac{P(a_n)}{Q'(a_n)}\,\frac{1}{s - a_n}.
\]
Taking the inverse Laplace transform of both sides of the above equation, we have
\[
\mathcal{L}^{-1}\!\left[\frac{P(s)}{Q(s)}\right]
= \frac{P(a_1)}{Q'(a_1)}\, e^{a_1 x} + \frac{P(a_2)}{Q'(a_2)}\, e^{a_2 x}
+ \cdots + \frac{P(a_n)}{Q'(a_n)}\, e^{a_n x}.
\]
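The Heaviside expansion formula of Equation (3.8) is also easy to implement. The Python sketch below (our own illustration; the helper name heaviside_expansion is hypothetical, and Q is assumed monic with the given distinct real roots, as in the derivation) reproduces the known pair F(s) = (3s + 5)/((s + 1)(s + 2)), f(x) = 2e^{−x} + e^{−2x}:

```python
import math

def heaviside_expansion(P, roots):
    # P: polynomial coefficients of the numerator, highest degree first
    # roots: the distinct real roots a_k of the monic denominator Q
    # Since Q(s) = prod_k (s - a_k), we have Q'(a_k) = prod_{j != k} (a_k - a_j)
    def p(s):
        v = 0.0
        for coef in P:          # Horner evaluation of P(s)
            v = v * s + coef
        return v
    terms = []
    for k, ak in enumerate(roots):
        qp = 1.0
        for j, aj in enumerate(roots):
            if j != k:
                qp *= ak - aj
        terms.append((p(ak) / qp, ak))
    # f(x) = sum_k P(a_k)/Q'(a_k) * exp(a_k x)
    return lambda x: sum(c * math.exp(a * x) for c, a in terms)

# F(s) = (3s + 5)/((s + 1)(s + 2)); the table inverse is 2 e^{-x} + e^{-2x}
f = heaviside_expansion([3.0, 5.0], [-1.0, -2.0])
for x in (0.0, 0.5, 2.0):
    print(x, f(x), 2 * math.exp(-x) + math.exp(-2 * x))
```

The two columns of printed values coincide.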
Equations (3.8) and (3.9) are preferred by reliability engineers over the convolution integrals. Convolution is somewhat more general; however, the majority of reliability differential equations have rational transforms and yield immediate solutions by the Heaviside expansion techniques. Nevertheless, all specialized Laplace transform techniques are primarily of theoretical interest, owing to the availability and reliability of modern scientific mathematical software. The following examples illustrate computer-generated solutions to problems previously described in the Heaviside calculus.
In[1]:= InverseLaplaceTransform[(2s^2 - 9s + 19)/((s - 1)^2 (s + 3)), s, x]
Out[1]= 4 e^(-3x) - 2 e^x + 3 x e^x

In[2]:= InverseLaplaceTransform[(3s + 5)/((s + 1)(s + 2)), s, x]
Out[2]= e^(-2x) + 2 e^(-x)

In[3]:= InverseLaplaceTransform[(2s^2 - 6s + 5)/(s^3 - 6s^2 + 11s - 6), s, x]
Out[3]= e^x/2 - e^(2x) + (5 e^(3x))/2

In[4]:= InverseLaplaceTransform[(s + 5)/((s + 1)(s^2 + 1)), s, x]
Out[4]= 2 e^(-x) - 2 Cos[x] + 3 Sin[x]
In[5]:= InverseLaplaceTransform[(2s + 3)/((s + 1)^2 (s + 2)^2), s, x]
Out[5]= -x e^(-2x) + x e^(-x)
3.12 Table of Laplace Transform Theorems
Let each of f and g be a real-valued function on [0, ∞). We assume that f(x) and g(x) each have a Laplace transform on some (non-trivial) strip of convergence. The domain of definition of each of the complex-valued Laplace transforms, F(s) and G(s), of f(x) and g(x), respectively, is s_0 < ℜ(s) < s_1 and t_0 < ℜ(s) < t_1, where each of s_0, t_0 is a real number and each of s_1, t_1 is either a real number or the symbol ∞. Let each of a, b, and τ be a real number, s be a complex number, and H(x) denote the Heaviside function on (−∞, ∞). (See Figure 3.2.) The table below is simply an outline; the hypotheses are lacking. Probably one of the most difficult parts of applying mathematics is reading the hypotheses of a given theorem and ensuring that the function conforms to them. Proofs are important for situations in which a given function fails to meet one or more conditions in the hypothesis. By examining the argument in the proof, one can sometimes determine whether the theorem can be "extended" to cover the exceptional function or whether the function fails altogether.
real-valued function      Laplace transform                                strip of convergence
a f(x) + b g(x)           a F(s) + b G(s)                                  max{s_0, t_0} < ℜ(s) < min{s_1, t_1}
e^{ax} f(x)               F(s − a)                                         a + s_0 < ℜ(s) < a + s_1
f(ax)  (a > 0)            (1/a) F(s/a)                                     s_0/a < ℜ(s) < s_1/a
f'(x)                     s F(s) − f(0)                                    s_0 < ℜ(s) < s_1
f(x − a) H(x − a)         e^{−as} F(s)                                     s_0 < ℜ(s) < s_1
f(x + τ) = f(x)           (1 − e^{−sτ})^{−1} ∫_0^τ e^{−sx} f(x) dx        ℜ(s) > 0
f(x)/x                    ∫_s^∞ F(u) du                                    s_0 < ℜ(s) < s_1
(f ∗ g)(x)                F(s) G(s)                                        max{s_0, t_0} < ℜ(s) < min{s_1, t_1}

Table 3.3: Table of Theorems
3.13 Table of Laplace Transforms
Throughout this entire section, f(x) will denote a real-valued function of the real variable x, F(s) will denote a function of the complex variable s, ℜ(s) will denote the real part of s, ℑ(s) will denote the imaginary part of s, and the graph of F(s) will simply be the function F(s) plotted against ℜ(s), for simplicity.
[Figure 3.4: Heaviside Function. Graphs of H(x) = 1 for x ≥ 0, 0 for x < 0, and of F(s) = 1/s against ℜ(s), for ℜ(s) > 0.]
[Figure 3.5: Ramp Function. Graphs of f(x) = x for x ≥ 0, 0 for x < 0, and of F(s) = 1/s², for ℜ(s) > 0.]
[Figure 3.6: Shifted Heaviside Function. Graphs of H(x − x₀) = 1 for x ≥ x₀, 0 for x < x₀, and of F(s) = e^{−sx₀}/s, for ℜ(s) > 0.]
[Figure 3.7: Linearly Transformed Heaviside Function. Graphs of f(x) = e^{ax} H(x) and of F(s) = 1/(s − a), for ℜ(s) > a.]
[Figure 3.8: Linearly Transformed Ramp Function. Graphs of f(x) = x e^{ax} H(x) and of F(s) = 1/(s − a)², for ℜ(s) > a.]
[Figure 3.9: The Sawtooth Function. Graphs of the sawtooth function of § 3.8, Example 2, and of F(s) = 1/(as²) − e^{−as}/(s(1 − e^{−as})), for ℜ(s) > 0.]
3.14 Doing Laplace Transforms
Most textbooks devote time and energy to explaining how to use the shift theorem, the linearly translated transform theorem, and the derivative theorem to do transforms. They spend time developing various artifices and clever ploys to manually compute Laplace transforms. This book is concerned with the application of the Laplace transform to RM&A engineering, particularly to Reliability Engineering. One is expected to use modern mathematical software to verify each Laplace transform, even if it comes from a table book. With that in mind, we present here some sample values from symbolic software versus values from tables.
In[1]:= LaplaceTransform[1, x, s]
Out[1]= 1/s
Reference Table: f(x) = 1, F(s) = 1/s.

In[2]:= LaplaceTransform[x, x, s]
Out[2]= s^(-2)
Reference Table: f(x) = x, F(s) = 1/s².

In[3]:= LaplaceTransform[Exp[a x], x, s]
Out[3]= 1/(-a + s)
Reference Table: f(x) = e^{ax}, F(s) = 1/(s − a).

In[4]:= LaplaceTransform[x Exp[a x], x, s]
Out[4]= (-a + s)^(-2)
Reference Table: f(x) = x e^{ax}, F(s) = 1/(s − a)².

In[5]:= LaplaceTransform[x^(-1/2), x, s]
Out[5]= Sqrt[π]/Sqrt[s]
Reference Table: f(x) = x^{−1/2}, F(s) = √(π/s).

In[6]:= LaplaceTransform[x^a, x, s]
Out[6]= s^(-1 - a) Gamma[1 + a]
Reference Table: f(x) = x^a, F(s) = Γ(a + 1)/s^{a+1} (a > −1).

In[7]:= LaplaceTransform[Cos[k x], x, s]
Out[7]= s/(k^2 + s^2)
Reference Table: f(x) = cos(kx), F(s) = s/(s² + k²).

In[8]:= LaplaceTransform[Sin[k x], x, s]
Out[8]= k/(k^2 + s^2)
Reference Table: f(x) = sin(kx), F(s) = k/(s² + k²).

In[9]:= LaplaceTransform[Cosh[k x], x, s]
Out[9]= s/(-k^2 + s^2)
Reference Table: f(x) = cosh(kx), F(s) = s/(s² − k²).

In[10]:= LaplaceTransform[Sinh[k x], x, s]
Out[10]= k/(-k^2 + s^2)
Reference Table: f(x) = sinh(kx), F(s) = k/(s² − k²).

In[11]:= LaplaceTransform[Exp[a x] Cos[k x], x, s]
Out[11]= (-a + s)/(k^2 + (-a + s)^2)
Reference Table: f(x) = e^{ax} cos(kx), F(s) = (s − a)/((s − a)² + k²).

In[12]:= LaplaceTransform[x^n Exp[a x], x, s]
Out[12]= (-a + s)^(-1 - n) Gamma[1 + n]
Reference Table: f(x) = x^n e^{ax}, F(s) = n!/(s − a)^{n+1}.

In[13]:= LaplaceTransform[x Cos[k x], x, s]
Out[13]= 2 s^2/(k^2 + s^2)^2 - 1/(k^2 + s^2)
Reference Table: f(x) = x cos(kx), F(s) = (s² − k²)/(s² + k²)².

In[14]:= LaplaceTransform[BesselJ[0, a x], x, s]
Out[14]= 1/Sqrt[a^2 + s^2]
Reference Table: f(x) = J₀(ax), F(s) = 1/√(s² + a²).
In a like manner we may compute inverse Laplace transforms.

In[15]:= InverseLaplaceTransform[(s - a)^(-2), s, x]
Out[15]= e^(a x) x
Reference Table: F(s) = 1/(s − a)², f(x) = x e^{ax}.
3.15 Summary
In this chapter we investigated the powerful and extremely useful Laplace transform. We developed an operational definition for the Laplace transform, computed certain elementary transforms, stated and proved some basic theorems, and demonstrated how to apply the transform technique to problems in RM&A. Although we presented and proved many theorems and illustrated techniques with examples, we have only scratched the surface of this procedure. The Laplace transform procedure is particularly well-suited to engineering problems, especially initial value problems, because the entire IVP can be solved directly rather than by first finding a general solution and then using the initial conditions to determine the values of the arbitrary constants. Laplace transforms are also useful in problems where the so-called driving function, q(x) in the equation
\[
y'(x) + a y(x) = q(x),
\]
has discontinuities or is periodic. Despite the time and effort this transform saves in solving application problems, it does not enlarge the class of problems solvable by previously explored techniques. The Laplace transform does replace a differential equation with an algebraic equation. This is a simplification and a reduction, since differential equations are a generalization of algebraic equations.
The generalized derivative theorem
\[
\mathcal{L}\left[f^{(n)}(x)\right] = s^n F(s) - \sum_{k=0}^{n-1} s^{n-1-k} f^{(k)}(0^+)
= s^n F(s) - s^{n-1} f(0^+) - s^{n-2} f'(0^+) - \cdots - s f^{(n-2)}(0^+) - f^{(n-1)}(0^+),
\]
where
\[
f^{(k)}(0^+) := \lim_{h \to 0} f^{(k)}(|h|), \qquad k = 0, 1, \ldots, n - 1,
\]
makes the Laplace transform ideal for solving IVPs involving linear DEs with constant coefficients. In addition to the Laplace transform, we defined an inverse transform. The function f(x), whose Laplace transform is F(s), is called the inverse transform of F(s). The operator \(\mathcal{L}^{-1}[F(s)]\) was then shown to be a well-defined linear integral operator. We asked the question: "If one knows that F(s) is the Laplace transform of some function f(x), how can one compute the inverse transform f(x) from the information given about F(s)?" We stated (without proof) theorems which ensured the uniqueness of the inverse Laplace transform, under suitable conditions, and discussed methods of obtaining an inverse transform f(x) solely from information about the Laplace transform F(s). In particular, any two continuous functions having the same Laplace transform are identical. We mentioned that there exist certain tools from complex variable theory that permit a direct calculation of the inverse Laplace transform from an analytic function F(s) in the half plane ℜ(s) > s_0, where all the singularities of F(s) lie to the left of the line ℜ(s) = s_0. Finally, we presented tables of transform pairs, of transform theorems, and of named functions. One attractive feature of the Laplace transform is that it can be computed in a routine fashion
solely from transform tables and algebraic considerations. It is necessary, however, that the user of Laplace transforms be aware of the correct way to apply shifting, convolution, and composition of functions to obtain the solution of ordinary differential equations. Graphical illustrations help, and a number have been included. A table of Laplace transforms has the same relation to ordinary, linear differential equations as an integral table does to the integrand functions. From a relatively small table of Laplace transforms and the elementary Laplace transform theorems, nearly every differential equation in Reliability, Maintainability, and Availability can be successfully solved. Some references refer to the definition in Equation (3.1) as a one-sided Laplace transform, while others define the (two-sided) Laplace transform as
\[
\int_{-\infty}^{\infty} e^{-sx} f(x)\, dx
\]
and restrict the function f(x) to a subinterval of (−∞, ∞).
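As one final illustration of machine verification, the generalized derivative theorem stated above can be checked numerically. The Python sketch below (our own illustration; the choice f(x) = sin(kx) with k = 1.5, s = 1, and the truncated trapezoid quadrature are arbitrary) confirms that L[f''] = s²F(s) − s f(0) − f'(0):

```python
import math

k, s = 1.5, 1.0
f = lambda x: math.sin(k * x)
fpp = lambda x: -k * k * math.sin(k * x)     # f''(x)

def laplace(g, s, X=40.0, n=100000):
    # truncated numerical Laplace transform (trapezoid rule);
    # the tail beyond X contributes O(e^{-sX}) and is negligible here
    h = X / n
    total = 0.5 * (g(0.0) + g(X) * math.exp(-s * X))
    for i in range(1, n):
        x = i * h
        total += g(x) * math.exp(-s * x)
    return total * h

F = laplace(f, s)                            # should be near k/(s^2 + k^2)
lhs = laplace(fpp, s)                        # L[f''] computed directly
rhs = s * s * F - s * f(0.0) - k             # s^2 F(s) - s f(0) - f'(0), f'(0) = k
print(lhs, rhs)
```

The two printed values agree to within the quadrature error.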
Glossary
Absolute convergence A function series \(\sum_{n=1}^{\infty} f_n(x)\) is said to be absolutely convergent on some number set S if the function series \(\sum_{n=1}^{\infty} |f_n(x)|\) converges for each x ∈ S.
Autonomous differential equation A differential equation
that does not contain the independent variable explicitly.
Autonomous system A system of first-order differential
equations of the form
\[
\frac{dy_j}{dx} = F_j(y_1, \ldots, y_n), \qquad j = 1, \ldots, n,
\]
is said to be autonomous. The system is characterized by the
fact that each function, Fj, does not depend on the
independent variable x.
Bessel function The Bessel function is defined as the series
\[
J_n(x) = \frac{x^n}{2^n \Gamma(n + 1)} \left[1 - \frac{x^2}{2(2n + 2)} + \frac{x^4}{2 \cdot 4\,(2n + 2)(2n + 4)} - \cdots \right].
\]
Beta function The Beta function is defined as the (improper) integral
\[
B(x, y) = \int_0^1 u^{x-1} (1 - u)^{y-1}\, du, \qquad \text{where } x, y > 0.
\]
Big-O The statement that f(x) is Big-O of g(x) at x_0 means that ∃ M > 0 such that \(\limsup_{x \to x_0} |f(x)|/|g(x)| \le M\). It is written as f(x) = O(g(x)).
Boundary value problem The problem of finding a solution to a differ-ential equation (or system of differential equations) satisfying certainrequirements for a point set of the independent variable, the so-calledboundary conditions.
Complementary Error function The complementary error function is defined as the integral
\[
\operatorname{erfc}(x) = 1 - \operatorname{erf}(x) = \frac{2}{\sqrt{\pi}} \int_x^\infty e^{-t^2}\, dt.
\]
Cosine integral The cosine integral is defined as the (improper) integral
\[
\operatorname{Ci}(x) = \int_x^\infty \frac{\cos(t)}{t}\, dt.
\]
Error function The error function is defined as the integral
\[
\operatorname{erf}(x) = \frac{2}{\sqrt{\pi}} \int_0^x e^{-t^2}\, dt.
\]
Exponential integral The exponential integral is defined as the (improper) integral
\[
\operatorname{Ei}(x) = \int_x^\infty \frac{e^{-t}}{t}\, dt.
\]
Exponential order A function f(x) is said to be of exponential order α ifthere exists a constant x0 such that f(x) = O (eαx), for all x > x0.
Fresnel integrals The Fresnel (cosine and sine) integrals are defined as
\[
C(z) := \sqrt{\frac{2}{\pi}} \int_0^z \cos x^2\, dx, \qquad
S(z) := \sqrt{\frac{2}{\pi}} \int_0^z \sin x^2\, dx.
\]
Fundamental theorem of algebra Every polynomial equation with complex coefficients of degree n ≥ 1 has at least one root. The root may be real or complex.
Fundamental theorem of calculus If f(x) is continuous on [a, b] and
\[
F(x) = \int_a^x f(u)\, du,
\]
then F(x) has a derivative in (a, b) such that
\[
\frac{dF(x)}{dx} = f(x).
\]
Gamma function The Gamma function is defined as the (improper) integral
\[
\Gamma(x) = \int_0^\infty e^{-u} u^{x-1}\, du, \qquad \text{where } x \in (0, \infty).
\]
General solution A formula y = f(x; c_1, \ldots, c_n) which provides each solution of a given differential equation
\[
F\left(x, y, y', \ldots, y^{(m)}\right) = 0.
\]
Improper integral The statement that \(\int_a^b f(x)\, dx\) is an improper integral means that one or more of the following is true:
1. a = −∞,
2. b = +∞,
3. f(x) is undefined at x = a > −∞, at x = b < +∞, or at x = c for some c ∈ (a, b).
Indeterminate form Let each of f(x) and g(x) be defined in a neighborhood of x_0 and
\[
\lim_{x \to x_0} f(x) = \lim_{x \to x_0} g(x) = 0.
\]
Then
\[
\frac{f(x_0)}{g(x_0)} \equiv \frac{0}{0}
\]
is an indeterminate form. Other indeterminate forms include ∞/∞, 0 · ∞, 1^∞, ∞^0, and ∞ − ∞. Evaluation of the first two cases, when the limits exist, sometimes results from an application of l'Hospital's rule.
Iteration method A method that yields a sequence of approximating functions {y_0, y_1, \ldots, y_n, \ldots} to an unknown function y, where the nth approximation (n a positive integer) is obtained from the set {y_0, y_1, \ldots, y_{n-1}} by some well-defined operation, is known as an iteration method. One example is Picard's method.
l'Hospital's rule Let each of f(x) and g(x) be defined (and subject to certain regularity conditions) in a neighborhood of x_0. If
\[
\lim_{x \to x_0} \frac{f(x)}{g(x)}
\]
is of the indeterminate form 0/0 or ∞/∞, and if \(\lim_{x \to x_0} f'(x)/g'(x)\) exists, then
\[
\lim_{x \to x_0} \frac{f(x)}{g(x)} = \lim_{x \to x_0} \frac{f'(x)}{g'(x)}.
\]
Lipschitz condition The statement that a function f(x) satisfies a Lipschitz condition at a point x_0 means that ∃ K > 0 and ∃ δ > 0 such that if |x − x_0| < δ then |f(x) − f(x_0)| ≤ K|x − x_0|.
Little-o The statement that f(x) is Little-o of g(x) at x_0 means that
\[
\lim_{x \to x_0} |f(x)|/|g(x)| = 0.
\]
It is written as f(x) = o(g(x)).
Modified Bessel function The Modified Bessel function is defined as the series
\[
I_n(x) = i^{-n} J_n(ix)
= \frac{x^n}{2^n \Gamma(n + 1)} \left[1 + \frac{x^2}{2(2n + 2)} + \frac{x^4}{2 \cdot 4\,(2n + 2)(2n + 4)} + \cdots \right].
\]
Picard's method An iteration technique for solving ordinary differential equations of the type y' = F(x, y) with an initial condition (x_0, y_0), such that
\[
y_1(x) = y_0 + \int_{x_0}^x F(t, y_0)\, dt
\quad \text{and} \quad
y_n(x) = y_0 + \int_{x_0}^x F(t, y_{n-1}(t))\, dt.
\]
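A minimal numerical sketch of Picard's method in Python (an illustration, not part of the original glossary; the grid-based trapezoid integration and the test problem y' = y, y(0) = 1 are our own choices):

```python
import math

def picard_iterate(F, x0, y0, X, n=1000, iters=8):
    # values of the iterate on a grid over [x0, X]; each sweep computes
    # y_{k+1}(x) = y0 + integral from x0 to x of F(t, y_k(t)) dt
    h = (X - x0) / n
    xs = [x0 + i * h for i in range(n + 1)]
    y = [y0] * (n + 1)                       # y_0: the constant initial guess
    for _ in range(iters):
        fy = [F(t, v) for t, v in zip(xs, y)]
        new = [y0]
        acc = 0.0
        for i in range(n):                   # cumulative trapezoid rule
            acc += 0.5 * h * (fy[i] + fy[i + 1])
            new.append(y0 + acc)
        y = new
    return xs, y

# y' = y, y(0) = 1 on [0, 1]; the iterates approach e^x
xs, y = picard_iterate(lambda t, v: v, 0.0, 1.0, 1.0)
print(y[-1], math.e)   # y(1) is close to e
```

After k sweeps the iterate is (up to quadrature error) the degree-k Taylor polynomial of e^x.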
Sine integral The sine integral is defined as the (improper) integral
\[
\operatorname{Si}(x) = \int_0^x \frac{\sin(t)}{t}\, dt.
\]
Singular solution A solution to a differential equation which cannot be obtained from any solution formula containing an arbitrary constant. (E.g., the Clairaut equation (y')² − xy' + y = 0 has the singular solution y = x²/4; its general solution is y = cx − c².)
Taylor series Let f(x) be defined on (a, b) = {x | a < x < b} and x_0 ∈ (a, b). If f^{(n)}(x_0) exists for each n ≥ 0, then the series
\[
f(x_0) + f'(x_0)(x - x_0) + \tfrac{1}{2} f''(x_0)(x - x_0)^2 + \cdots
+ \frac{f^{(n)}(x_0)}{n!}(x - x_0)^n + \cdots
\equiv \sum_{n=0}^{\infty} \frac{f^{(n)}(x_0)}{n!}(x - x_0)^n,
\quad \text{where } f^{(0)}(x) := f(x)\ \forall x \in (a, b),
\]
is called the Taylor series of f(x) at x_0. Convergence of the series to f(x) depends on regularity conditions.
Uniform continuity The statement that f(x) is uniformly continuous on the finite interval [a, b] means that if ε > 0 there exists a δ = δ(ε) > 0 such that if x, y ∈ [a, b] and |x − y| < δ, then |f(x) − f(y)| < ε.
Uniform convergence A convergent function series \(\sum_{n=1}^{\infty} f_n(x)\) is said to converge uniformly on a number set S if ∀ ε > 0 ∃ N = N(ε), a positive integer, such that if n ≥ N then
\[
\left| \sum_{j=1}^{\infty} f_j(x) - \sum_{j=1}^{n} f_j(x) \right| < \varepsilon \quad \text{for each } x \in S.
\]
Bibliography
[1] Birkhoff, Garrett and Gian-Carlo Rota, Ordinary Differential Equations. Boston: Ginn and Company, 1962.
[2] Boyce, William E., and Richard C. DiPrima,
Elementary Differential Equations and Boundary
Value Problems—Third Edition. New York: John
Wiley & Sons, 1977.
[3] Bracewell, Ron,
The Fourier Transform and Its Applications.
New York: McGraw-Hill, Inc., 1965.
[4] Brauer, Fred and John A. Nohel,
Introduction to Differential Equations with
Applications. New York: Harper & Row,
Publishers, 1986.
[5] Bronson, Richard, Schaum’s
Outline Series Theory and Problems of Modern Introductory
Differential Equations with Laplace Transforms, Matrix
Methods, Numerical Methods, Eigenvalue Problems.
New York: McGraw-Hill Book Company, 1973.
[6] Copi, Irving M.,
Introduction to Logic.
New York: MacMillan Publishing Co., Inc., 1978.
[7] Hall, Dio L., et al.,
Introduction to the Laplace Transform.
New York: Appleton-Century Crofts, Inc., 1959.
[8] Kaplan, Wilfred,
Elements of Ordinary Differential Equations.
Reading, Massachusetts: Addison-Wesley Publishing Company, Inc.,
1964.
[9] Kreyszig, Erwin,
Advanced Engineering Mathematics—5th ed.
New York: John Wiley & Sons, 1983.
[10] Lloyd, David K. and Myron Lipow,
Reliability: Management, Methods, and
Mathematics. Englewood Cliffs, NJ: Prentice-Hall,
Inc., 1962.
[11] Olmsted, John M. H.,
Advanced Calculus.
New York: Appleton-Century-Crofts, Inc., 1956.
[12] Quinney, Douglas,
An Introduction to the
Numerical Solution of Differential Equations.
Letchworth, Hertfordshire, England:
Research Studies Press, Ltd,
1985.
[13] Rabenstein, Albert L.,
Elementary Differential Equations with Linear
Algebra—Second Edition.
New York: Academic Press, 1975.
[14] Rainville, Earl D., and Phillip
E. Bedient, Elementary Differential Equations–Fourth
Edition. New York: The Macmillan Company, 1969.
[15] Ritger, Paul D., and Rose, Nicholas J.,
Differential Equations with Applications.
New York: McGraw-Hill Book Company, 1968.
[16] Sagan, Hans,
Advanced Calculus. Boston: Houghton Mifflin
Company, 1974.
[17] Scheid, Francis, Schaum’s
Outline Series Theory and Problems of Numerical Analysis,
Second Edition.
New York: McGraw-Hill Book Company, 1973.
[18] Spiegel, Murray R.,
Applied Differential Equations—Second Edition.
Englewood Cliffs, N.J.:
Prentice-Hall, Inc., 1967.
[19] Spiegel, Murray R., Schaum’s
Outline Series Theory and Problems of Laplace Transforms.
New York: Schaum Publishing Co., 1965.
[20] Stark, Peter A., Introduction
to Numerical Methods. New York: Macmillan Publishing
Co., Inc., 1970.
[21] Thomas, George B., Calculus and Analytic Geometry—Third Edition. Reading, Massachusetts: Addison-Wesley Publishing Company, Inc., 1966.