
  • ENG1091

    Mathematics for Engineering

    Lecture notes

    Clayton Campus 2013

    Australia Malaysia South Africa Italy India monash.edu/science

    School of Mathematical Sciences Monash University

    Contents

    1. Vectors in 3-dimensions 3

    1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

    1.1.1 Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

    1.1.2 Algebraic properties . . . . . . . . . . . . . . . . . . . . . . . . . . 4

    1.2 Vector Dot Product . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

    1.2.1 Unit Vectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6

    1.3 Vector Cross Product . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

    1.3.1 Interpreting the cross product . . . . . . . . . . . . . . . . . . . . . 7

    1.3.2 Right Hand Thumb rule . . . . . . . . . . . . . . . . . . . . . . . . 8

    1.4 Scalar and Vector projections . . . . . . . . . . . . . . . . . . . . . . . . . 9

    1.4.1 Scalar Projections . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9

    1.4.2 Vector Projection . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9

    2. Three-Dimensional Euclidean Geometry. Lines. 11

    2.1 Lines in 3-dimensional space . . . . . . . . . . . . . . . . . . . . . . . . . . 12

    2.2 Vector equation of a line . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13

    3. Three-Dimensional Euclidean Geometry. Planes. 14

    3.1 Planes in 3-dimensional space . . . . . . . . . . . . . . . . . . . . . . . . . 15

    3.1.1 Constructing the equation of a plane . . . . . . . . . . . . . . . . . 15

    3.1.2 Parametric equations for a plane . . . . . . . . . . . . . . . . . . . 16

    3.2 Vector equation of a plane . . . . . . . . . . . . . . . . . . . . . . . . . . . 17

    4. Linear systems of equations 19

    4.1 Examples of Linear Systems . . . . . . . . . . . . . . . . . . . . . . . . . . 20

    4.1.1 Bags of coins . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20

    4.1.2 Silly puzzles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20

    4.1.3 Intersections of planes . . . . . . . . . . . . . . . . . . . . . . . . . 20

    4.2 A standard strategy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21

    4.3 Lines and planes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22

    5. Gaussian Elimination 25

    5.1 Gaussian elimination and back-substitution . . . . . . . . . . . . . . . . . 26

    5.2 Gaussian elimination . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26

    26-Jul-2013 2


    5.2.1 Gaussian elimination strategy . . . . . . . . . . . . . . . . . . . . . 27

    5.3 Exceptions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27

    6. Matrices 29

    6.1 Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30

    6.1.1 Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31

    6.1.2 Operations on matrices . . . . . . . . . . . . . . . . . . . . . . . . . 31

    6.1.3 Some special matrices . . . . . . . . . . . . . . . . . . . . . . . . . 32

    6.1.4 Properties of matrices . . . . . . . . . . . . . . . . . . . . . . . . . 32

    6.1.5 Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33

    7. Inverses of Square Matrices. 34

    7.1 Matrix Inverse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35

    7.1.1 Inverse by Gaussian elimination . . . . . . . . . . . . . . . . . . . . 35

    7.2 Determinants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36

    7.2.1 Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36

    7.3 Inverse using determinants . . . . . . . . . . . . . . . . . . . . . . . . . . . 37

    7.4 Vector Cross Products . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38

    8. Eigenvalues and eigenvectors. 39

    8.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40

    8.2 Eigenvalues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41

    8.3 Decomposing Symmetric matrices . . . . . . . . . . . . . . . . . . . . . . . 44

    8.4 Matrix inverse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46

    8.5 The Cayley-Hamilton theorem: Not examinable . . . . . . . . . . . . . . . 47

    9. Integration 49

    9.1 Integration : Revision . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50

    9.1.1 Some basic integrals . . . . . . . . . . . . . . . . . . . . . . . . . . 51

    9.1.2 Substitution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51

    9.2 Integration by parts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51

    10. Hyperbolic functions 54

    10.1 Hyperbolic functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55

    10.1.1 Hyperbolic functions . . . . . . . . . . . . . . . . . . . . . . . . . 56

    10.1.2 More hyperbolic functions . . . . . . . . . . . . . . . . . . . . . . 57


    10.2 Special functions: not examinable . . . . . . . . . . . . . . . . . . . . . . 58

    11. Improper integrals 60

    11.1 Improper Integrals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61

    11.1.1 A standard strategy . . . . . . . . . . . . . . . . . . . . . . . . . . 61

    12. Comparison test for convergence 65

    12.1 Comparison Test for Improper Integrals . . . . . . . . . . . . . . . . . . . 66

    12.2 The General Strategy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68

    13. Introduction to sequences and series. 70

    13.1 Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71

    13.1.1 Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71

    13.1.2 Partial sums . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71

    13.1.3 Arithmetic series . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72

    13.1.4 Fibonacci sequence . . . . . . . . . . . . . . . . . . . . . . . . . . 72

    13.1.5 Geometric series . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73

    13.1.6 Compound Interest . . . . . . . . . . . . . . . . . . . . . . . . . . 73

    14. Convergence of series. 75

    14.1 Infinite series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76

    14.1.1 Convergence and divergence . . . . . . . . . . . . . . . . . . . . . 76

    14.2 Tests for convergence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76

    14.2.1 Zero tail? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76

    14.2.2 The Comparison test . . . . . . . . . . . . . . . . . . . . . . . . . 76

    15. Integral and ratio tests. 78

    15.1 The Integral Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80

    15.2 The Ratio test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81

    16. Comparison test, alternating series. 83

    16.1 Alternating series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84

    16.2 Non-positive infinite series . . . . . . . . . . . . . . . . . . . . . . . . . . 85

    16.3 Re-ordering an infinite series . . . . . . . . . . . . . . . . . . . . . . . . . 85

    17. Power series 87

    17.1 Simple power series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88


    17.2 The general power series . . . . . . . . . . . . . . . . . . . . . . . . . . . 88

    17.3 Examples of Power Series . . . . . . . . . . . . . . . . . . . . . . . . . . . 89

    17.4 Maclaurin Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89

    17.5 Taylor Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90

    17.6 Uniqueness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91

    18. Radius of convergence 93

    18.1 Radius of convergence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94

    18.2 Computing the Radius of Convergence . . . . . . . . . . . . . . . . . . . 94

    18.3 Some theorems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95

    19. Function Approximation using Taylor Series 96

    19.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97

    19.2 Taylor polynomials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97

    19.3 Accuracy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100

    19.4 Using Taylor series to calculate limits . . . . . . . . . . . . . . . . . . . . 102

    19.5 lHopitals rule. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103

    20. Remainder term for Taylor series. 106

    20.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107

    20.2 Integration by parts and Taylor series . . . . . . . . . . . . . . . . . . . . 107

    21. Introduction to ODEs 111

    21.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112

    21.2 Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112

    21.3 Solution strategies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113

    21.4 General and particular solutions . . . . . . . . . . . . . . . . . . . . . . . 115

    22. Separable first order ODEs. 116

    22.1 Separable equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117

    22.2 First order linear ODEs . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118

    22.2.1 Solving the homogeneous ODE . . . . . . . . . . . . . . . . . . . . 120

    22.2.2 Finding a particular solution . . . . . . . . . . . . . . . . . . . . . 121

    23. The integrating factor. 122

    23.1 The Integrating Factor . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123


    24. Homogeneous Second order ODEs. 125

    24.1 Second order linear ODEs . . . . . . . . . . . . . . . . . . . . . . . . . . 126

    24.2 Homogeneous equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126

    25. Non-Homogeneous Second order ODEs. 131

    25.1 Non-homogeneous equations . . . . . . . . . . . . . . . . . . . . . . . . . 132

    25.2 Undetermined coefficients . . . . . . . . . . . . . . . . . . . . . . . . . . . 133

    25.3 Exceptions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134

    26. Coupled systems of ODEs 135

    26.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136

    26.2 First method: differentiation . . . . . . . . . . . . . . . . . . . . . . . . . 136

    26.3 Second method: eigenvectors and eigenvalues . . . . . . . . . . . . . . . . 138

    27. Applications of Differential Equations 141

    27.1 Applications of ODEs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142

    27.2 Newtons law of cooling . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142

    27.3 Pollution in swimming pools . . . . . . . . . . . . . . . . . . . . . . . . . 143

    27.4 Newtonian mechanics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144

    28. Functions of Several Variables 147

    28.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148

    28.2 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148

    28.3 Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148

    28.4 Surfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149

    28.5 Alternative forms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152

    29. Partial derivatives 154

    29.1 First derivatives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155

    29.2 Higher derivatives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156

    29.3 Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157

    29.4 Exceptions : when derivatives do not exist . . . . . . . . . . . . . . . . . 158

    30. Chain Rule, Gradient and Directional derivatives 160

    30.1 The Chain Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161

    30.2 Gradient and Directional Derivative . . . . . . . . . . . . . . . . . . . . . 163


    31. Tangent planes and linear approximations 166

    31.1 Tangent planes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167

    31.2 Linear Approximations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168

    32. Maxima and minima 170

    32.1 Maxima and minima . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171

    32.2 Local extrema . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172

    32.3 Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173

    32.4 Maxima, Minima or Saddle point? . . . . . . . . . . . . . . . . . . . . . . 173


    SCHOOL OF MATHEMATICAL SCIENCES

    ENG1091

    Mathematics for Engineering

    1. Vectors in 3-dimensions


    1.1 Introduction

    Vectors can be defined in (at least) two ways: algebraically, as objects like

    v = (1, 7, 3)
    u = (2, −1, 4)

    or geometrically, as arrows in space.

    How can we be sure that these two definitions actually describe the same object? Equally, how do we convert from one form to the other? That is, given (1, 2, 7), how do we draw the arrow, and likewise, given the arrow, how do we extract the numbers (1, 2, 7)?

    Suppose we are given two points P and Q. Suppose also that we find the change in coordinates from P to Q is (say) (1, 2, 7). We could also draw an arrow from P to Q. Thus we have two ways of recording the path from P to Q: either as the numbers (1, 2, 7) or as the arrow.

    Suppose now that we have another pair of points R and S, and further that we find the change in coordinates to be (1, 2, 7). Again, we can join the points with an arrow. This arrow will have the same direction and length as that for P to Q.

    In both cases the displacement, from start to finish, is represented either by the numbers (1, 2, 7) or by the arrow, thus we can use either form to represent the vector. Note that this means a vector does not live at any one place in space: it can be moved anywhere provided its length and direction are unchanged.

    To extract the numbers (1, 2, 7) given just the arrow, simply place the arrow somewhere in the x, y, z space, and measure the change in coordinates from tail to tip of the vector. Equally, to draw the vector given the numbers (1, 2, 7) is easy: choose (0, 0, 0) as the tail, then the point (1, 2, 7) is the tip.

    1.1.1 Notation

    The components of a vector are just the numbers we use to describe the vector. In the above, the components of the vector (1, 2, 7) are 1, 2 and 7.

    Another very common way to write a vector such as v = (1, 7, 3) is

    v = 1i + 7j + 3k

    The three vectors i, j, k are a simple way to remind us that the three numbers in v = (1, 7, 3) refer to directions parallel to the three coordinate axes (with i parallel to the x-axis, j parallel to the y-axis and k parallel to the z-axis).

    In this way we can always write down any 3-dimensional vector as a linear combination of i, j, k, and thus these vectors are also known as basis vectors.

    1.1.2 Algebraic properties

    What rules must we observe in playing with vectors?


    • Equality: v = w only when the arrows for v and w are identical.

    • Stretching: the vector λv is parallel to v but is stretched by a factor λ.

    • Addition: to add two vectors v and w, arrange the two so that they are tip to tail. Then v + w is the vector that starts at the first tail and ends at the second tip.

    Example 1.1

    Express each of the above rules in terms of the components of vectors (i.e. in terms of numbers like (1, 2, 7) and (a, b, c)).

    Example 1.2

    Given v = (3, 4, 2) and w = (1, 2, 3) compute v + w and 2v + 7w.

    Example 1.3

    Given v = (1, 2, 7) draw v, 2v and −v.

    Example 1.4

    Given v = (1, 2, 7) and w = (3, 4, 5) draw and compute v − w.

    .

    1.2 Vector Dot Product

    How do we multiply vectors? We have already seen one form, stretching: v → λv. This is called scalar multiplication.

    Here is another form. Let v = (vx, vy, vz) and w = (wx, wy, wz) be a pair of vectors. Then we define the dot product v · w by

    v · w = vxwx + vywy + vzwz

    Example 1.5

    Let v = (1, 2, 7) and w = (1, 3, 4). Compute v · v, w · w and v · w.

    What do we observe?

    • v · w is a single number, not a vector
    • v · w = w · v
    • (λv) · w = λ(v · w)
    • (a + b) · v = a · v + b · v

    The last two cases display what we call linearity.
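The definition and the linearity rules above are easy to check numerically. Here is a minimal Python sketch (not part of the original notes; the helper name `dot` is ours) that computes the component-wise sum and verifies symmetry and linearity for sample vectors:

```python
# Illustrative sketch: dot product on 3-tuples (helper names are ours).
def dot(v, w):
    """Component-wise dot product: vx*wx + vy*wy + vz*wz."""
    return sum(vi * wi for vi, wi in zip(v, w))

v = (1, 2, 7)
w = (1, 3, 4)

# v . w is a single number, and the product is symmetric: v . w = w . v
assert dot(v, w) == dot(w, v) == 1*1 + 2*3 + 7*4   # 35

# Linearity: (a + b) . v = a . v + b . v
a, b = (1, 0, 2), (3, -1, 5)
ab = tuple(ai + bi for ai, bi in zip(a, b))
assert dot(ab, v) == dot(a, v) + dot(b, v)
```

Running the checks silently (no assertion fires) confirms the properties for these particular vectors.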


    Example 1.6 : Length of a vector

    Let v = (1, 2, 7). Compute the distance from (0, 0, 0) to (1, 2, 7). Compare this with v · v.

    We can now show that

    v · w = |v||w| cos θ

    where

    |v| = the length of v = (vx² + vy² + vz²)^(1/2)
    |w| = the length of w = (wx² + wy² + wz²)^(1/2)

    and θ is the angle between the two vectors.

    How do we prove this? Simple: start with v − w and compute its length,

    |v − w|² = (v − w) · (v − w)
             = v · v − v · w − w · v + w · w
             = |v|² + |w|² − 2 v · w

    and from the Cosine Rule for triangles we know

    |v − w|² = |v|² + |w|² − 2|v||w| cos θ

    Thus we have

    v · w = |v||w| cos θ

    This gives us a convenient way to compute the angle between any pair of vectors. If we find cos θ = 0 then we say that v and w are orthogonal (sometimes also called perpendicular).

    Thus v and w are orthogonal when v · w = 0 (provided neither v nor w is zero).

    Example 1.7

    Find the angle between the vectors v = (2, 7, 1) and w = (3, 4, −2).

    1.2.1 Unit Vectors

    A vector is said to be a unit vector if its length is one. That is, v is a unit vector when v · v = 1.


    1.3 Vector Cross Product

    This is another way to multiply vectors. Start with v

    = (vx, vy, vz) and w

    = (wx, wy, wz).Then we define the cross product v

    w

    by

    v w

    = (vywz vzwy, vzwx vxwz, vxwy vywx)

    From this definition we observe

    I v w

    is a vector

    I v w

    = w v

    I v v

    = 0

    I (v

    ) w

    = (v w

    )

    I (a

    + b

    ) v

    = a v

    + b v

    I (v w

    ) v

    = (v w

    ) w

    = 0

    Example 1.8

    Verify all of the above.

    Example 1.9

    Given v = (1, 2, 7) and w = (2, 3, 5) compute v × w, and its dot product with each of v and w.

    1.3.1 Interpreting the cross product

    We know that v × w is a vector and we know how to compute it. But can we describe this vector? First we need a vector, so let's assume that v × w ≠ 0. Then what can we say about the direction and length of v × w?

    The first thing we should note is that the cross product is a vector which is orthogonal to both of the original vectors. Thus v × w is a vector that is orthogonal to v and to w. This fact follows from the definition of the cross product.

    Thus we must have

    v × w = α n

    where n is a unit vector orthogonal to both v and w, and α is some unknown number (at this stage).

    How do we construct n and α? Let's do it!


    1.3.2 Right Hand Thumb rule

    For any choice of v

    and w

    you can see that there are two choices for n

    one points inthe opposite direction to the other. Which one do we choose? Its up to us to make ahard rule. This is it. Place your right hand palm so that your fingers curl over from v

    to w

    . Your thumb then points in the direction of v w

    .

    Now for , we will show that

    |v w| = = |v||w| sin

    How? First we build a triangle from v

    and w

    and then compute the cross product foreach pair of vectors

    v w

    = n

    (v w

    ) v

    = n

    (v w

    ) w

    = n

    (one for each of the three vertices). We need to compute each .

    Now since (v

    ) w

    = (v w

    ) for any number we must have in v w

    = nproportional to |v||w|, likewise for the other s. Thus

    = |v||w| = |v||v w| = |w||v w|

    where each depends only on the angle between the two vectors on which it was built(i.e. depends only on the angle between v

    and v

    w

    ).

    But we also have vw

    = (vw

    ) v

    = (vw

    )w

    which implies that = = which in turn gives us

    |v w| =

    |w| =

    |v|

    (Were in the home straight...)

    But we also have the Sine Rule for triangles

    sin

    |v w| =sin

    |w| =sin

    |v|and so

    = k sin , = k sin, = k sin

    where k is a pure number that does not depend on any of the angles nor on any oflengths of the edges the value of k is the same for every triangle. We can choose atrivial case to compute k, simply put v

    = (1, 0, 0) and w

    = (0, 1, 0). Then we find k = 1.

    Its been a merry ride but weve found that

    |v w| = |v||w| sin
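The result just derived can be checked numerically against the earlier dot-product formula cos θ = v · w / (|v||w|). A Python sketch (helper names ours) comparing the two sides for an arbitrary pair of vectors:

```python
import math

# Illustrative helpers (our naming).
def dot(v, w):
    return sum(a * b for a, b in zip(v, w))

def length(v):
    return math.sqrt(dot(v, v))

def cross(v, w):
    return (v[1]*w[2] - v[2]*w[1],
            v[2]*w[0] - v[0]*w[2],
            v[0]*w[1] - v[1]*w[0])

v, w = (1, 2, 7), (2, 3, 5)

# theta from the dot product, then compare |v x w| with |v||w| sin(theta)
theta = math.acos(dot(v, w) / (length(v) * length(w)))
assert abs(length(cross(v, w)) - length(v) * length(w) * math.sin(theta)) < 1e-9
```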


    Example 1.10

    Show that |v × w| also equals the area of the parallelogram formed by v and w.

    Vector Dot and Cross products

    Let v = (vx, vy, vz) and w = (wx, wy, wz). Then the Dot Product of v and w is defined by

    v · w = vxwx + vywy + vzwz

    while the Cross Product is defined by

    v × w = (vywz − vzwy, vzwx − vxwz, vxwy − vywx)

    1.4 Scalar and Vector projections

    These are like shadows and there are two basic types, scalar and vector projections.

    1.4.1 Scalar Projections

    This is simply the shadow cast by one vector on another.

    Example 1.11

    What is the length (i.e. scalar projection) of v

    = (1, 2, 7) in the direction of the vectorw

    = (2, 3, 4)?

    Scalar projection

    The scalar projection, vw, of v

    in the direction of w

    is given by

    vw =v w|w|
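As a quick check of the boxed formula, here is a Python sketch (our naming) applying it to the vectors of Example 1.11; for these vectors v · w = 36 and |w| = √29, so the expected value is 36/√29:

```python
import math

# Illustrative helpers (our naming).
def dot(v, w):
    return sum(a * b for a, b in zip(v, w))

def scalar_projection(v, w):
    """Shadow of v in the direction of w: (v . w) / |w|."""
    return dot(v, w) / math.sqrt(dot(w, w))

# Example 1.11's vectors
v, w = (1, 2, 7), (2, 3, 4)
assert abs(scalar_projection(v, w) - 36 / math.sqrt(29)) < 1e-12
```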

    1.4.2 Vector Projection

    This time we produce a vector shadow with length equal to the scalar projection.


    Example 1.12

    Find the vector projection of v = (1, 2, 7) in the direction of w = (2, 3, 4).

    Vector projection

    The vector projection, vw, of v in the direction of w is given by

    vw = ( (v · w) / |w|² ) w

    Example 1.13

    Given v = (1, 2, 7) and w = (2, 3, 4), express v in terms of w and a vector perpendicular to w.

    This example shows how a vector may be resolved into its parts parallel and perpendicular to another vector.
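The resolution described in Example 1.13 can be sketched in Python (our own helper names): subtract the vector projection from v, and what remains is orthogonal to w.

```python
import math

# Illustrative helpers (our naming).
def dot(v, w):
    return sum(a * b for a, b in zip(v, w))

def vector_projection(v, w):
    """(v . w / |w|^2) w  --  the part of v parallel to w."""
    s = dot(v, w) / dot(w, w)
    return tuple(s * wi for wi in w)

v, w = (1, 2, 7), (2, 3, 4)
v_par = vector_projection(v, w)
v_perp = tuple(vi - pi for vi, pi in zip(v, v_par))

# v = parallel part + perpendicular part, with the latter orthogonal to w
assert all(abs(vi - (p + q)) < 1e-12 for vi, p, q in zip(v, v_par, v_perp))
assert abs(dot(v_perp, w)) < 1e-12
```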


    2. Three-Dimensional Euclidean Geometry. Lines.


    2.1 Lines in 3-dimensional space

    Through any pair of distinct points we can always construct a straight line. These lines are normally drawn to be infinitely long in both directions.

    Example 2.1

    Find all points on the line joining (2, 4, 0) and (2, 4, 7)

    Example 2.2

    Find all points on the line joining (2, 0, 0) and (2, 4, 7)

    These equations for the line are all of the form

    x(t) = a + pt ,  y(t) = b + qt ,  z(t) = c + rt

    where t is a parameter (it selects each point on the line) and the numbers a, b, c, p, q, r are computed from the coordinates of two points on the line. (There are other ways to write an equation for a line.)

    How do we compute a, b, c, p, q, r? First put t = 0; then x = a, y = b, z = c. That is, (a, b, c) are the coordinates of one point on the line and so a, b, c are known. Next, put t = 1; then x = a + p, y = b + q, z = c + r. Take this to be the second point on the line, and thus solve for p, q, r.

    A common interpretation is that (a, b, c) are the coordinates of one (any) point on the line and (p, q, r) are the components of a (any) vector parallel to the line.
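The recipe above (read off (a, b, c) at t = 0, then solve for (p, q, r) using t = 1) is mechanical enough to code. A minimal Python sketch with illustrative names of our own:

```python
# Illustrative sketch: parametric line through two points (names are ours).
def line_through(P, Q):
    """Return (a, b, c) and (p, q, r): the point at t = 0 and the direction P -> Q."""
    a, b, c = P                                  # t = 0 gives the first point
    p, q, r = (Q[i] - P[i] for i in range(3))    # t = 1 then gives the second
    return (a, b, c), (p, q, r)

def point_on_line(abc, pqr, t):
    return tuple(abc[i] + pqr[i] * t for i in range(3))

abc, pqr = line_through((1, 7, 3), (2, 0, -3))
assert point_on_line(abc, pqr, 0) == (1, 7, 3)
assert point_on_line(abc, pqr, 1) == (2, 0, -3)
```

Other values of t sweep out the rest of the line.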

    Example 2.3

    Find the equation of the line joining the two points (1, 7, 3) and (2, 0, −3).

    Example 2.4

    Show that a line may also be expressed as

    (x − a)/p = (y − b)/q = (z − c)/r

    provided p ≠ 0, q ≠ 0 and r ≠ 0. This is known as the Symmetric Form of the equation for a straight line.

    Example 2.5

    In some cases you may find a small problem with the form suggested in the previous example. What is that problem and how would you deal with it?

    Example 2.6

    Determine if the line defined by the points (1, 0, 1) and (1, 2, 0) intersects with the line defined by the points (3, −1, 0) and (1, 2, 5).


    Example 2.7

    Is the line defined by the points (3, 7, −1) and (2, −2, 1) parallel to the line defined by the points (1, 4, −1) and (0, −5, 1)?

    Example 2.8

    Is the line defined by the points (3, 7, −1) and (2, −2, 1) parallel to the line defined by the points (1, 4, −1) and (2, −23, 5)?

    2.2 Vector equation of a line

    The parametric equations of a line are

    x(t) = a + pt ,  y(t) = b + qt ,  z(t) = c + rt

    Note that

    (a, b, c) = the vector to one point on the line
    (p, q, r) = the vector from the first point to the second point on the line
              = a vector parallel to the line

    Let's put d = (a, b, c), v = (p, q, r) and r(t) = (x(t), y(t), z(t)); then

    r(t) = d + t v

    This is known as the vector equation of a line.

    Example 2.9

    Write down the vector equation of the line that passes through the points (1, 2, 7) and (2, 3, 4).

    Example 2.10

    Write down the vector equation of the line that passes through the points (2, 3, 7) and (4, 1, 2).

    Example 2.11

    Find the shortest distance between the pair of lines described in the two previous examples. Hint: find any vector that joins a point on one line to a point on the other, and then compute the scalar projection of this vector onto the vector orthogonal to both lines (it helps to draw a diagram).
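The hint in Example 2.11 turns into a short computation: a vector orthogonal to both lines is the cross product of the two direction vectors, and the distance is the scalar projection of any joining vector onto it. A Python sketch (our helper names; it assumes the lines are not parallel, since for parallel lines the cross product is zero):

```python
import math

# Illustrative helpers (our naming).
def dot(v, w):   return sum(a * b for a, b in zip(v, w))
def sub(v, w):   return tuple(a - b for a, b in zip(v, w))
def cross(v, w):
    return (v[1]*w[2] - v[2]*w[1],
            v[2]*w[0] - v[0]*w[2],
            v[0]*w[1] - v[1]*w[0])

def line_distance(P1, Q1, P2, Q2):
    """Shortest distance between the line through P1, Q1 and the line through P2, Q2.
    Assumes the lines are not parallel."""
    n = cross(sub(Q1, P1), sub(Q2, P2))    # orthogonal to both lines
    join = sub(P2, P1)                     # any vector joining the two lines
    return abs(dot(join, n)) / math.sqrt(dot(n, n))   # scalar projection onto n

# The lines of Examples 2.9 and 2.10
d = line_distance((1, 2, 7), (2, 3, 4), (2, 3, 7), (4, 1, 2))
assert abs(d - 12 / math.sqrt(138)) < 1e-12
```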


    3. Three-Dimensional Euclidean Geometry. Planes.


    3.1 Planes in 3-dimensional space

    A plane in 3-dimensional space is a flat 2-dimensional surface. The standard equation for a plane in 3-d is

    ax + by + cz = d

    where a, b, c and d are some bunch of numbers that identify this plane from all other planes. (There are other ways to write an equation for a plane, as we shall see.)

    Example 3.1

    Sketch each of the planes z = 1, y = 3 and x = 1.

    3.1.1 Constructing the equation of a plane

    A plane is uniquely determined by any three points (provided the three points are not all contained on a line). Recall that a line is fully determined by any pair of points on the line.

    Let's find the equation of the plane that passes through the three points (1, 0, 0), (0, 3, 0) and (0, 0, 2). Our game is to compute a, b, c and d. We do this by substituting each point into the above equation:

    1st point:  a·1 + b·0 + c·0 = d
    2nd point:  a·0 + b·3 + c·0 = d
    3rd point:  a·0 + b·0 + c·2 = d

    Now we have a slight problem: we are trying to compute 4 numbers, a, b, c, d, but we only have 3 equations. We have to make an arbitrary choice for one of the 4 numbers a, b, c, d. Let's set d = 6. Then we find from the above that a = 6, b = 2 and c = 3. Thus the equation of the plane is

    6x + 2y + 3z = 6
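It is worth confirming that the plane just found does pass through all three points. A minimal Python check (the variable names are ours):

```python
# The worked example's system with the arbitrary choice d = 6:
#   a*1 = d,  b*3 = d,  c*2 = d
d = 6
a, b, c = d / 1, d / 3, d / 2      # gives a = 6, b = 2, c = 3

# Every one of the three given points satisfies 6x + 2y + 3z = 6
for (x, y, z) in [(1, 0, 0), (0, 3, 0), (0, 0, 2)]:
    assert a*x + b*y + c*z == d
```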

    Example 3.2

    What equation do you get if you choose d = 1 in the previous example? What happens if you choose d = 0?

    Example 3.3

    Find an equation of the plane that passes through the three points (1, 0, 0), (1, 2, 0) and (2, −1, 5).


    3.1.2 Parametric equations for a plane

    Recall that a line could be written in the parametric form

    x(t) = a + pt
    y(t) = b + qt
    z(t) = c + rt

    A line is 1-dimensional, so its points can be selected by a single parameter t.

    However, a plane is 2-dimensional and so we need two parameters (say u and v) to select each point. Thus it is no surprise that every plane can also be described by the following equations:

    x(u, v) = a + pu + lv
    y(u, v) = b + qu + mv
    z(u, v) = c + ru + nv

    Now we have 9 parameters: a, b, c, p, q, r, l, m and n. These can be computed from the coordinates of three (distinct) points on the plane. For the first point put (u, v) = (0, 0), for the second put (u, v) = (1, 0) and for the final point put (u, v) = (0, 1). Then solve for a through to n (it's easy!).
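The three-point recipe above translates directly into code. A Python sketch (illustrative names of our own), using the points of Example 3.4 as printed here:

```python
# Illustrative sketch: parametric plane through three points (names are ours).
def plane_params(P0, P1, P2):
    """(u,v)=(0,0) -> P0 gives (a,b,c); (1,0) -> P1 gives (p,q,r); (0,1) -> P2 gives (l,m,n)."""
    a_b_c = P0
    p_q_r = tuple(P1[i] - P0[i] for i in range(3))
    l_m_n = tuple(P2[i] - P0[i] for i in range(3))
    return a_b_c, p_q_r, l_m_n

def plane_point(params, u, v):
    (a, b, c), (p, q, r), (l, m, n) = params
    return (a + p*u + l*v, b + q*u + m*v, c + r*u + n*v)

params = plane_params((1, 0, 0), (1, 2, 0), (2, -1, 5))
assert plane_point(params, 0, 0) == (1, 0, 0)
assert plane_point(params, 1, 0) == (1, 2, 0)
assert plane_point(params, 0, 1) == (2, -1, 5)
```

Sweeping u and v over all real values traces out the whole plane.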

    Example 3.4

    Find the parametric equations of the plane that passes through the three points (1, 0, 0), (1, 2, 0) and (2, −1, 5).

    Example 3.5

    Show that the parametric equations found in the previous example describe exactly the same plane as found in Example 3.3. (Hint: substitute the answers from Example 3.4 into the equation found in Example 3.3.)

    Example 3.6

    Find the parametric equations of the plane that passes through the three points (1, 2, 1), (1, 2, 3) and (2, −1, 5).

    Example 3.7

    Repeat the previous example but with the points re-arranged as (1, 2, 1), (2, −1, 5) and (1, 2, 3). You will find that the parametric equations look different, yet you know they describe the same plane. If you did not know this last fact, how would you prove that the two sets of parametric equations describe the same plane?


    3.2 Vector equation of a plane

    The Cartesian equation for a plane is

    ax+ by + cz = d

    for some bunch of numbers a, b, c and d. We will now re-express this in a vector form.

    Suppose we know one point on the plane, say (x, y, z) = (x, y, z)0, then

    ax0 + by0 + cz0 = d

    a(x x0) + b(y y0) + c(z z0) = 0This is an equivalent form of the above equation.

    Now suppose we have two more points on the plane (x, y, z)1 and (x, y, z)2. Then

    a(x1 x0) + b(y1 y0) + c(z1 z0) = 0a(x2 x0) + b(y2 y0) + c(z2 z0) = 0

    Put x

    10 = (x1x0, y1 y0, z1 z0) and x

    20 = (x2x0, y2 y0, z2 z0). Notice thatboth of these vectors lie in the plane and that

    (a, b, c) x

    10 = (a, b, c) x

    20 = 0

    What does this tell us? Simply that both vectors are orthogonal to the vector (a, b, c).Thus we must have that

    (a, b, c) = the normal vector to the plane

    Now let's put

    n = (a, b, c) = the normal vector to the plane
    d = (x0, y0, z0) = one (any) point on the plane
    r = (x, y, z) = a typical point on the plane

    Then we have

    n · (r - d) = 0

    This is the vector equation of a plane.
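    As a sketch (the helper names are my own, not from the notes), the normal n can be built as the cross product of two vectors lying in the plane, after which n · (r - d) = 0 is easy to test:

```python
def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    """Vector cross product a x b, which is orthogonal to both a and b."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def plane_normal(p0, p1, p2):
    """Normal to the plane through p0, p1 and p2."""
    return cross(sub(p1, p0), sub(p2, p0))

def on_plane(r, n, d):
    """The vector equation of the plane: n . (r - d) = 0."""
    return dot(n, sub(r, d)) == 0

# The three points of Example 3.8:
p0, p1, p2 = (1, 2, 7), (2, 3, 4), (1, 2, 1)
n = plane_normal(p0, p1, p2)
assert all(on_plane(p, n, p0) for p in (p0, p1, p2))
```

    Every one of the defining points satisfies the vector equation with d = p0, as it must.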

    Example 3.8

    Find the vector equation of the plane that contains the points (1, 2, 7), (2, 3, 4) and (1, 2, 1).

    Example 3.9

    Re-express the previous result in the form ax + by + cz = d.


    Example 3.10

    Find the shortest distance between the pair of planes 2x + 3y - 4z = 2 and 4x + 6y - 8z = 3.

    An investment firm is hiring mathematicians. After the first round of interviews, three hopeful recent graduates (a pure mathematician, an applied mathematician, and a graduate in mathematical finance) are asked what starting salary they are expecting. The pure mathematician: "Would $30,000 be too much?" The applied mathematician: "I think $60,000 would be OK." The maths finance person: "What about $300,000?" The personnel officer is flabbergasted: "Do you know that we have a graduate in pure mathematics who is willing to do the same work for a tenth of what you are demanding!?" "Well, I thought of $135,000 for me, $135,000 for you - and $30,000 for the pure mathematician who will do the work."


    SCHOOL OF MATHEMATICAL SCIENCES

    ENG1091

    Mathematics for Engineering

    4. Linear systems of equations


    4.1 Examples of Linear Systems

    4.1.1 Bags of coins

    We have three bags with a mixture of gold, silver and copper coins. We are given the following information

    Bag 1 contains 10 gold, 3 silver, 1 copper and weighs 60g
    Bag 2 contains 5 gold, 1 silver and 2 copper and weighs 30g
    Bag 3 contains 3 gold, 2 silver, 4 copper and weighs 25g

    The question is: What are the respective weights of the gold, silver and copper coins?

    Let G, S and C denote the weight of each of the gold, silver and copper coins. Then we have the system of equations

    10G + 3S + C = 60
    5G + S + 2C = 30
    3G + 2S + 4C = 25
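    To see where this chapter is heading, here is a small exact solver (a sketch of my own, using the elimination and back-substitution ideas developed in the following sections) applied to the coin system:

```python
from fractions import Fraction

def solve3(A, b):
    """Solve a 3x3 linear system exactly, by elimination followed by
    back-substitution (the method is explained in the next sections)."""
    rows = [[Fraction(v) for v in row] + [Fraction(rhs)]
            for row, rhs in zip(A, b)]
    n = 3
    for col in range(n):                      # eliminate below the diagonal
        pivot = next(r for r in range(col, n) if rows[r][col] != 0)
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for r in range(col + 1, n):
            f = rows[r][col] / rows[col][col]
            rows[r] = [x - f * y for x, y in zip(rows[r], rows[col])]
    x = [Fraction(0)] * n                     # back substitution
    for r in reversed(range(n)):
        s = rows[r][n] - sum(rows[r][c] * x[c] for c in range(r + 1, n))
        x[r] = s / rows[r][r]
    return x

G, S, C = solve3([[10, 3, 1], [5, 1, 2], [3, 2, 4]], [60, 30, 25])
```

    The coins come out as G = 5g, S = 3g and C = 1g, which indeed satisfies all three bag equations.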

    4.1.2 Silly puzzles

    John and Mary's ages add to 75 years. When John was half his present age John was twice as old as Mary. How old are they?

    We have just two equations,

    J + M = 75
    J/2 - 2M = 0

    4.1.3 Intersections of planes

    It's easy to imagine three planes in space. Is it possible that they share one point in common? Here are the equations for three such planes

    3x + 7y - 2z = 0
    6x + 16y - 3z = -1
    3x + 9y + 3z = 3

    Can we solve this system for (x, y, z)?

    In all of the above examples we need to unscramble the set of linear equations to extract the unknowns (e.g. G, S, C etc.).


    4.2 A standard strategy

    We start with the previous example

    3x + 7y - 2z = 0     (1)
    6x + 16y - 3z = -1   (2)
    3x + 9y + 3z = 3     (3)

    Suppose by some process we were able to rearrange these equations into the following form

    3x + 7y - 2z = 0     (1)
    2y + z = -1          (2)
    4z = 4               (3)

    Then we could solve (3) for z

    (3)  4z = 4  ⇒  z = 1

    and then substitute into (2) to solve for y

    (2)  2y + 1 = -1  ⇒  y = -1

    and substitute into (1) to solve for x

    (1)  3x - 7 - 2 = 0  ⇒  x = 3

    The question is: How do we get the modified equations (1), (2) and (3)?

    The general trick is to take suitable combinations of the equations so that we can eliminate various terms. The trick is applied as many times as we need to turn the original equations into a simple form like (1), (2) and (3).

    Let's start with the first pair of the original equations

    3x + 7y - 2z = 0     (1)
    6x + 16y - 3z = -1   (2)

    We can eliminate the 6x in equation (2) by replacing equation (2) with (2) - 2(1),

    0x + (16 - 14)y + (-3 + 4)z = -1  ⇒  2y + z = -1     (2)

    Likewise, for the 3x term in equation (3) we replace equation (3) with (3) - (1),

    2y + 5z = 3     (3)


    At this point our system of equations is

    3x + 7y - 2z = 0     (1)
    2y + z = -1          (2)
    2y + 5z = 3          (3)

    The last step is to eliminate the 2y term in the last equation. We do this by replacing equation (3) with (3) - (2)

    4z = 4     (3)

    So finally we arrive at the system of equations

    3x + 7y - 2z = 0     (1)
    2y + z = -1          (2)
    4z = 4               (3)

    which, as before, we solve to find z = 1, y = -1 and x = 3.

    The procedure we just went through is known as a reduction to upper triangular form and we used elementary row operations to do so. We then solved for the unknowns by back substitution.

    This procedure is applicable to any system of linear equations (though beware, for some systems the back substitution method requires special care, we'll see examples later).

    The general strategy is to eliminate all terms below the main diagonal, working column by column from left to right.
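    The row operations above can be replayed mechanically. A hedged sketch (some signs in the printed system are hard to read, so the numbers below assume it reads 3x + 7y - 2z = 0, 6x + 16y - 3z = -1, 3x + 9y + 3z = 3):

```python
from fractions import Fraction

def combine(row_a, row_b, factor):
    """Elementary row operation: row_a - factor * row_b."""
    return [a - factor * b for a, b in zip(row_a, row_b)]

# Augmented rows [coefficients | right hand side]:
e1 = [Fraction(3), Fraction(7), Fraction(-2), Fraction(0)]
e2 = [Fraction(6), Fraction(16), Fraction(-3), Fraction(-1)]
e3 = [Fraction(3), Fraction(9), Fraction(3), Fraction(3)]

e2 = combine(e2, e1, 2)            # (2) - 2(1):  2y + z = -1
e3 = combine(e3, e1, 1)            # (3) - (1):   2y + 5z = 3
e3 = combine(e3, e2, 1)            # (3) - (2):   4z = 4

z = e3[3] / e3[2]                              # 4z = 4
y = (e2[3] - e2[2] * z) / e2[1]                # 2y + z = -1
x = (e1[3] - e1[1] * y - e1[2] * z) / e1[0]    # 3x + 7y - 2z = 0
```

    Back substitution then reproduces z = 1, y = -1 and x = 3.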

    4.3 Lines and planes

    In previous lectures we saw how to construct the equations for lines and planes. Now we can answer some simple questions.

    How do we compute the intersection between a line and a plane? Can we be sure that they do intersect? And what about the intersection of two or more planes?

    The general approach to all of these questions is simply to write down equations for each of the lines and planes and then to search for a common point (i.e. a consistent solution to the system of equations).

    Example 4.1

    Find the intersection of the plane y = 0 with the plane 2x + 3y - 4z = 1.

    Example 4.2

    Find the intersection of the line x(t) = 1 + 3t, y(t) = 3 - 2t, z(t) = 1 - t with the plane 2x + 3y - 4z = 1.


    Example 4.3

    Find the intersection of the three planes 2x + 3y - z = 1, x - y = 2 and x = 1.

    In general, three planes may intersect at a single point, or along a common line, or not at all.

    Here are some examples (there are others) of how planes may (or may not) intersect.

    No point of intersection

    One point of intersection

    Intersection in a common line


    Example 4.4

    What other examples can you draw of intersecting planes?

    Three men are in a hot-air balloon. Soon, they find themselves lost in a canyon somewhere. One of the three men says, "I've got an idea. We can call for help in this canyon and the echo will carry our voices far."

    So he leans over the basket and yells out, "Helllloooooo! Where are we?" (They hear the echo several times).

    15 minutes later, they hear this echoing voice: "Helllloooooo! You're lost!!"

    One of the men says, "That must have been a mathematician." Puzzled, one of the other men asks, "Why do you say that?" The reply: "For three reasons.

    (1) he took a long time to answer,
    (2) he was absolutely correct, and
    (3) his answer was absolutely useless."


    5. Gaussian Elimination


    5.1 Gaussian elimination and back-substitution

    Example 5.1 : Typical layout

    2x + 3y + z = 10         (1)
    x + 2y + 2z = 10         (2) → 2(2) - (1)
    4x + 8y + 11z = 49       (3) → (3) - 2(1)

    2x + 3y + z = 10         (1)
    y + 3z = 10              (2)
    2y + 9z = 29             (3) → (3) - 2(2)

    2x + 3y + z = 10         (1)
    y + 3z = 10              (2)
    3z = 9                   (3)

    Now we solve this system using back-substitution: z = 3, y = 1, x = 2.

    Note how we record the next set of row-operations on each equation. This makes it much easier for someone else to see what you are doing and it also helps you track down any arithmetic errors.

    5.2 Gaussian elimination

    In the previous example we found

    2x + 3y + z = 10         (1)
    y + 3z = 10              (2)
    3z = 9                   (3)

    Why stop there? We can apply more row-operations to eliminate the terms above the diagonal. This does not involve back-substitution. This method is known as Gaussian elimination. Take note of the difference!

    Example 5.2

    Continue from the previous example and use row-operations to eliminate the terms above the diagonal. Hence solve the system of equations.


    5.2.1 Gaussian elimination strategy

    1. Use row-operations to eliminate elements below the diagonal.

    2. Use row-operations to eliminate elements above the diagonal.

    3. If possible, re-scale each equation so that each diagonal element = 1.

    4. The right hand side is now the solution of the system of equations.

    If you bail out after step 1 you are doing Gaussian elimination with back-substitution (this is usually the easier option).
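    The four-step strategy above can be sketched as one routine (the function name is my own; the notes simply call the overall process Gaussian elimination):

```python
from fractions import Fraction

def full_elimination(A, b):
    """Row-reduce [A|b] to [I|x] and return x (steps 1-4 above combined)."""
    n = len(A)
    rows = [[Fraction(v) for v in row] + [Fraction(rhs)]
            for row, rhs in zip(A, b)]
    for col in range(n):
        # Find a non-zero pivot, swapping rows up if necessary.
        pivot = next(r for r in range(col, n) if rows[r][col] != 0)
        rows[col], rows[pivot] = rows[pivot], rows[col]
        rows[col] = [v / rows[col][col] for v in rows[col]]   # rescale to 1
        for r in range(n):                # clear the column above and below
            if r != col and rows[r][col] != 0:
                rows[r] = [v - rows[r][col] * w
                           for v, w in zip(rows[r], rows[col])]
    return [row[n] for row in rows]       # the right hand side is the answer

# Example 5.1 again, this time without any back-substitution:
solution = full_elimination([[2, 3, 1], [1, 2, 2], [4, 8, 11]], [10, 10, 49])
```

    This returns x = 2, y = 1, z = 3 as before. The pivot search also copes with a zero landing on the diagonal, which is exactly the exception discussed next.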

    5.3 Exceptions

    Here are some examples where problems arise.

    Example 5.3 : A zero on the diagonal

    2x + y + 2z + w = 2      (1)
    2x + y - z + 2w = 1      (2) → (2) - (1)
    x - 2y + z - w = -2      (3) → 2(3) - (1)
    x + 3y - z + 2w = 2      (4) → 2(4) - (1)

    2x + y + 2z + w = 2      (1)
    0y - 3z + w = -1         (2) → swap with (3)
    -5y + 0z - 3w = -6       (3) → swap with (2)
    +5y - 4z + 3w = 2        (4)

    The zero on the diagonal in the second equation is a serious problem: it means we cannot use that row to eliminate the elements below the diagonal term. Hence we swap the second row with any other lower row so that we get a non-zero term on the diagonal. Then we proceed as usual. The result is w = 2, z = 1, y = 0 and x = -1.

    Example 5.4

    Complete the above example.


    Example 5.5 : A consistent and under-determined system

    Suppose we start with three equations and we wind up with

    2x + 3y - z = 1      (1)
    -5y + 5z = -1        (2)
    0z = 0               (3)

    The last equation tells us nothing! We can't solve it for any of x, y and z. We really only have 2 equations, not 3. That is, 2 equations for 3 unknowns. This is an under-determined system.

    We solve the system by choosing any number for one of the unknowns. Say we put z = λ where λ is any number (our choice). Then we can leap back into the equations and use back-substitution.

    The result is a one-parameter family of solutions

    x = 1/5 - λ ,   y = 1/5 + λ ,   z = λ

    Since we found a solution we say that the system is consistent.

    Example 5.6 : An inconsistent system

    Had we started with

    2x + 3y - z = 1      (1)
    x - y + 2z = 0       (2)
    3x + 2y + z = 0      (3)

    we would have arrived at

    2x + 3y - z = 1      (1)
    -5y + 5z = -1        (2)
    0z = -2              (3)

    This last equation makes no sense as there are no finite values for z such that 0z = -2 and thus we say that this system is inconsistent and that the system has no solution.


    6. Matrices


    6.1 Matrices

    When we use row-operations on systems of equations such as

    3x + 2y - z = 3
    x - y + z = 1
    2x + y - z = 0

    the x, y, z just hang around. All the action occurs on the coefficients and the right hand side. To assist in the bookkeeping we introduce a new notation, matrices,

    [ 3  2 -1 ] [x]   [3]
    [ 1 -1  1 ] [y] = [1]
    [ 2  1 -1 ] [z]   [0]

    Each [ ] is a matrix,

    [ 3  2 -1 ]
    [ 1 -1  1 ]
    [ 2  1 -1 ]

    is a square 3 × 3 matrix, while

    [x]     [3]
    [y] and [1]
    [z]     [0]

    are 1-dimensional matrices (also called column vectors).

    We can recover the original system of equations by defining a rule for multiplying matrices: the entry i built from the row (a, b, c, d, ...) of the first matrix and the column (e, f, g, h, ...) of the second is

    i = a·e + b·f + c·g + d·h + · · ·

    Example 6.1

    Write the above system of equations in matrix form.

    [ 3  2 -1 ] [x]   [ 3·x + 2·y - 1·z ]
    [ 1 -1  1 ] [y] = [ 1·x - 1·y + 1·z ]
    [ 2  1 -1 ] [z]   [ 2·x + 1·y - 1·z ]


    Example 6.2

    Compute

    [ 2 3 ] [ 1 7 ]        [ 1 7 ] [ 2 3 ]
    [ 4 1 ] [ 0 2 ]  and   [ 0 2 ] [ 4 1 ]

    Note that we can only multiply matrices that fit together. That is, if A and B are a pair of matrices then in order that AB makes sense we must have the number of columns of A equal to the number of rows of B.
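    A sketch of the row-times-column rule with the fit-together check built in (reading the matrices of Example 6.2 at face value):

```python
def matmul(A, B):
    """Multiply two matrices given as lists of rows; AB only makes sense
    when the number of columns of A equals the number of rows of B."""
    if len(A[0]) != len(B):
        raise ValueError("columns of A must equal rows of B")
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

P = matmul([[2, 3], [4, 1]], [[1, 7], [0, 2]])
Q = matmul([[1, 7], [0, 2]], [[2, 3], [4, 1]])
assert P != Q    # the two products of Example 6.2 differ: AB and BA need not agree
```

    For Example 6.3, a 2 × 2 matrix against a 3 × 2 matrix, the shape check fails and matmul raises an error, which is the sense in which that product does not make sense.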

    Example 6.3

    Does the following make sense?

    [ 2 3 ]   [ 1 7 ]
    [ 4 1 ] × [ 0 2 ]
              [ 4 1 ]

    6.1.1 Notation

    We use capital letters to represent matrices,

    A = [ 3  2 -1 ]       [x]       [3]
        [ 1 -1  1 ] , X = [y] , B = [1]
        [ 2  1 -1 ]       [z]       [0]

    and our previous system of equations can then be written as

    AX = B

    Entries within a matrix are denoted by subscripted lowercase letters. Thus for the matrix B above we have b1 = 3, b2 = 1 and b3 = 0 while for the matrix A we have

    A = [ 3  2 -1 ]   [ a11 a12 a13 ]
        [ 1 -1  1 ] = [ a21 a22 a23 ]
        [ 2  1 -1 ]   [ a31 a32 a33 ]

    aij = the entry in row i and column j of A

    To remind us that A is a square matrix with elements aij we sometimes write A = [aij].

    6.1.2 Operations on matrices

    • Equality : A = B only when all entries in A equal those in B.

    • Addition : normal addition of corresponding elements.


    • Multiplication by a number λ : λA = λ times each entry of A.

    • Multiplication of matrices : each entry of the product is a row of the first matrix times a column of the second, as defined above.

    • Transpose : flip rows and columns, denoted by [ ]T.

    [ 1 2 7 ]T   [ 1 0 ]
    [ 0 3 4 ]  = [ 2 3 ]
                 [ 7 4 ]

    6.1.3 Some special matrices

    • The Identity matrix :

    I = [ 1 0 0 0 ... ]
        [ 0 1 0 0 ... ]
        [ 0 0 1 0 ... ]
        [ 0 0 0 1 ... ]
        [ : : : :  .  ]

    For any square matrix A we have IA = AI = A.

    • The Zero matrix : a matrix full of zeroes!

    • Symmetric matrices : any matrix A for which A = AT.

    • Skew-symmetric matrices : any matrix A for which A = -AT. Sometimes also called anti-symmetric.

    6.1.4 Properties of matrices

    • AB ≠ BA (in general)

    • (AB)C = A(BC)

    • (AT)T = A

    • (AB)T = BT AT


    6.1.5 Notation

    For the system of equations

    3x + 2y - z = 1
    x - y + z = 4
    2x + y - z = -1

    we call

    [ 3  2 -1 ]
    [ 1 -1  1 ]
    [ 2  1 -1 ]

    the coefficient matrix and

    [ 3  2 -1  1 ]
    [ 1 -1  1  4 ]
    [ 2  1 -1 -1 ]

    the augmented matrix.

    When we do row-operations on a system we are manipulating the augmented matrix. But each incarnation represents a system of equations with the same original values for x, y and z. Thus if A and A′ are two augmented matrices for the same system, then we write

    A ∼ A′

    The squiggle ∼ means that even though A and A′ are not the same matrices, they do give us the same values for x, y and z.

    Example 6.4

    Solve the system of equations

    3x + 2y - z = 1
    x - y + z = 4
    2x + y - z = -1

    using matrix notation.

    An accountant is someone who is good with numbers but lacks the personality to be a statistician.


    7. Inverses of Square Matrices.


    7.1 Matrix Inverse

    Suppose we have a system of equations

    [ a b ] [x]   [u]
    [ c d ] [y] = [v]

    and that we write in the matrix form

    AX = B

    Can we find another matrix, call it A⁻¹, such that

    A⁻¹A = I = the identity matrix ?

    If so, then we have

    A⁻¹AX = A⁻¹B  ⇒  X = A⁻¹B

    Thus we have found the solution of the original system of equations.

    For a 2 × 2 matrix it is easy to verify that

    A⁻¹ = [ a b ]⁻¹ = (1/(ad - bc)) [  d -b ]
          [ c d ]                   [ -c  a ]

    But how do we compute the inverse A⁻¹ for other (square) matrices?

    Here is one method.

    7.1.1 Inverse by Gaussian elimination

    • Use row-operations to reduce A to the identity matrix.

    • Apply exactly the same row-operations to a matrix set initially to the identity.

    • The final matrix is the inverse of A.

    We usually record this process in a large augmented matrix.

    • Start with [A|I].
    • Apply row operations to obtain [I|A⁻¹].
    • Crack open the champagne.
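    The [A|I] → [I|A⁻¹] recipe can be sketched as a routine (the function name is mine):

```python
from fractions import Fraction

def inverse(A):
    """Invert a square matrix by row-reducing the augmented matrix [A|I]."""
    n = len(A)
    aug = [[Fraction(v) for v in row] +
           [Fraction(1 if i == j else 0) for j in range(n)]
           for i, row in enumerate(A)]
    for col in range(n):
        pivot = next(r for r in range(col, n) if aug[r][col] != 0)
        aug[col], aug[pivot] = aug[pivot], aug[col]
        aug[col] = [v / aug[col][col] for v in aug[col]]
        for r in range(n):
            if r != col and aug[r][col] != 0:
                aug[r] = [v - aug[r][col] * w
                          for v, w in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]     # the right-hand block is the inverse

# Example 7.1's matrix (here ad - bc = 4 - 21 = -17):
Ainv = inverse([[1, 7], [3, 4]])
```

    The result agrees with the 2 × 2 formula: Ainv = (1/(-17)) [4 -7; -3 1].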


    Example 7.1

    Find the inverse for A = [ 1 7 ]
                             [ 3 4 ]

    Note that not all matrices will have an inverse. For example, if

    A = [ a b ]
        [ c d ]

    then

    A⁻¹ = (1/(ad - bc)) [  d -b ]
                        [ -c  a ]

    and for this to be possible we must have ad - bc ≠ 0.

    We call this magic number the determinant of A. If it is zero then A does not have an inverse.

    The question is: is there a similar rule for an N × N matrix? That is, a rule which can identify those matrices which have an inverse.

    7.2 Determinants

    The definition is a bit involved; here it is.

    • For a 2 × 2 matrix A = [ a b ]
                             [ c d ]  define detA = ad - bc.

    • For an N × N matrix A create a sub-matrix Sij of A by deleting row i and column j.

    • Then define

    detA = a11 detS11 - a12 detS12 + a13 detS13 - · · · ± a1N detS1N

    Thus to compute detA you have to compute a chain of determinants, from (N-1) × (N-1) determinants all the way down to 2 × 2 determinants. This is tedious and very prone to arithmetic errors!

    Note the alternating plus/minus signs, it's very important!!!
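    The chain of sub-determinants above translates directly into a short recursive routine (expansion about the first row, alternating signs included):

```python
def det(A):
    """Determinant by cofactor expansion along the first row."""
    n = len(A)
    if n == 1:
        return A[0][0]
    if n == 2:
        return A[0][0] * A[1][1] - A[0][1] * A[1][0]
    total = 0
    for j in range(n):
        # Sub-matrix S1j: delete row 1 and column j of A.
        S = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det(S)
    return total

d1 = det([[1, 7, 2], [3, 4, 5], [6, 0, 9]])     # the matrix of Example 7.2
d2 = det([[1, 2, 7], [0, 0, 3], [1, 2, 1]])     # Example 7.4: this one is 0
```

    Here d2 vanishes because the second column is twice the first, so the matrix of Example 7.4 has no inverse.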

    7.2.1 Notation

    We often write detA = |A|.


    Example 7.2

    Compute the determinant of

    A = [ 1 7 2 ]
        [ 3 4 5 ]
        [ 6 0 9 ]

    We can also expand the determinant about any row or column provided we observe the following pattern of signs.

    [ + - + - ... ]
    [ - + - + ... ]
    [ + - + - ... ]
    [ :        .  ]

    Example 7.3

    By expanding about the second row compute the determinant of

    A = [ 1 7 2 ]
        [ 3 4 5 ]
        [ 6 0 9 ]

    Example 7.4

    Compute the determinant of

    A = [ 1 2 7 ]
        [ 0 0 3 ]
        [ 1 2 1 ]

    7.3 Inverse using determinants

    Here is another way to compute the inverse matrix.

    • Select a row i and column j of A.

    • Compute (-1)^(i+j) detSij / detA.

    • Store this at row j and column i in the inverse matrix.

    • Repeat for all other entries in A.

    That is, if A = [ aij ] then

    A⁻¹ = (1/detA) [ (-1)^(i+j) detSji ]

    This method for the inverse works but it is rather tedious.

    The best way is to compute the inverse by Gaussian elimination, i.e. [A|I] → [I|A⁻¹].


    7.4 Vector Cross Products

    The rule for a vector cross product can be conveniently expressed as a determinant. Thus if v = vx i + vy j + vz k and w = wx i + wy j + wz k then

            |  i   j   k  |
    v × w = | vx  vy  vz  |
            | wx  wy  wz  |
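    Expanding that determinant about its first row gives the familiar component formula, which we can sanity-check in code (a sketch of my own):

```python
def cross(v, w):
    """v x w from the cofactor expansion of the i, j, k determinant."""
    vx, vy, vz = v
    wx, wy, wz = w
    return (vy * wz - vz * wy,        # +i times det [vy vz; wy wz]
            -(vx * wz - vz * wx),     # -j times det [vx vz; wx wz]
            vx * wy - vy * wx)        # +k times det [vx vy; wx wy]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# The cross product is orthogonal to both of its factors:
a, b = (1, 2, 3), (4, 5, 6)
c = cross(a, b)
assert dot(a, c) == 0 and dot(b, c) == 0
```

    As a further check, cross((1, 0, 0), (0, 1, 0)) returns (0, 0, 1), i.e. i × j = k.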

    A graduate student from Trinity
    Computed the cube of infinity;
    But it gave him the fidgets
    To write down all those digits,
    So he dropped maths and took up divinity.


    8. Eigenvalues and eigenvectors.


    8.1 Introduction

    Okay, it's late in the afternoon, we're feeling a little sleepy and we need something to get our minds fired up. So we play a little game. We start with this simple 3 × 3 matrix

    R = [ 1 2 0 ]
        [ 2 1 0 ]
        [ 0 0 3 ]

    and when we apply R to any vector of the form v = [0, 0, 1]T we observe the curious fact that the vector remains unchanged apart from an overall scaling by 3. That is

    Rv = 3v

    Now we are wide awake and ready to play this game at full speed. Questions that come to mind would (should) include,

    • Can we find such a vector for any matrix?

    • How many distinct vectors are there?

    • Can we find vectors like v but with a different scaling?

    This is a simple example of what is known as an eigenvector equation. The key feature is that the action of the matrix on the vector produces a new vector that is parallel to the original vector (and in our case, it also happens to be 3 times as long).

    Eigenvalues and eigenvectors

    If A is a square matrix and v is a column vector with

    Av = λv

    for some non-zero vector v then we say that the matrix A has v as an eigenvector with eigenvalue λ.

    For the example of the 3 × 3 matrix given above we have an eigenvalue equal to 3 and an eigenvector of the form v = [0, 0, 1]T.

    Example 8.1

    Show that v = [-8, 1]T is an eigenvector of the matrix A = [  6  16 ]
                                                               [ -1  -4 ]

    Example 8.2

    The matrix in the previous example has a second eigenvector, this time with the eigenvalue -2. Find that eigenvector.


    Example 8.3

    Let v1 and v2 be two eigenvectors of some matrix. Is it possible to choose α and β so that αv1 + βv2 is also an eigenvector?

    Now we can re-express our earlier questions as follows.

    • Does every matrix possess an eigenvector?

    • How many eigenvalues can a matrix have?

    • How do we compute the eigenvalues?

    • Is this just pretty mathematics or is there a point to this game?

    Good questions indeed. Let's see what we make of them. We will start with the issue of constructing the eigenvalues (assuming, for the moment, that they exist).

    8.2 Eigenvalues

    Our game here is to find the values of λ, if any, that allow the equation

    Av = λv

    to have non-zero solutions for v. Take it that λ is given, then re-arrange the equation to

    (A - λI) v = 0

    where I is the identity matrix (of the same shape as A). Since we are chasing non-zero solutions for v we must have the determinant of A - λI equal to zero. That is, we require that 0 = det(A - λI). This is a polynomial equation in λ and is known as the characteristic equation for λ.

    Characteristic equation

    The eigenvalues λ of a matrix A are solutions of the polynomial equation

    0 = det(A - λI)

    This is called the characteristic equation of A. If A is an N × N matrix, then this equation will be a polynomial of degree N in λ. The eigenvalues may in general be complex numbers.
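    For a 2 × 2 matrix the characteristic equation works out to λ² - (a + d)λ + (ad - bc) = 0, so the eigenvalues come from the quadratic formula. A sketch, assuming the matrix of Example 8.1 reads [6 16; -1 -4] (some signs in the printed notes are hard to make out):

```python
import math

def eigenvalues_2x2(A):
    """Real roots of the characteristic polynomial det(A - lambda*I) = 0."""
    (a, b), (c, d) = A
    trace = a + d
    determinant = a * d - b * c
    disc = trace * trace - 4 * determinant   # negative => complex eigenvalues
    root = math.sqrt(disc)
    return (trace + root) / 2, (trace - root) / 2

lam1, lam2 = eigenvalues_2x2([[6, 16], [-1, -4]])
```

    This returns 4 and -2, consistent with Example 8.2's claim that the second eigenvalue is -2.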


    Example 8.4

    Compute both eigenvalues of A = [  6  16 ]
                                    [ -1  -4 ]

    We can now answer the previous question: How many eigenvalues can we find for a given matrix? If A is an N × N matrix then the characteristic equation will be a polynomial of degree N and so we can expect at most N distinct eigenvalues (one for each root). The keyword here is distinct: it is possible that the characteristic equation has repeated roots. In such cases we will find fewer than N (distinct) eigenvalues, as shown in the following example.

    Example 8.5

    Show that the matrix A = [ 1 3 ]
                             [ 0 1 ]

    has only one eigenvalue.

    Example 8.6

    Look carefully at the previous matrix. It describes a shear along the x-axis. Use this fact to argue that the matrix can have only one eigenvalue. This is a purely geometrical argument; you should not need to do any calculations.

    Example 8.7 A characteristic equation

    Show that the characteristic equation for the matrix

    A = [  5   8  16 ]
        [  4   1   8 ]
        [ -4  -4 -11 ]

    is given by

    0 = λ³ + 5λ² + 3λ - 9

    Example 8.8 The eigenvalues

    Show that the eigenvalues of the previous example are λ = 1 and λ = -3 (this is a double root of the characteristic equation).

    Example 8.9 Simple eigenvalue

    We now know that the matrix

    A = [  5   8  16 ]
        [  4   1   8 ]
        [ -4  -4 -11 ]

    has an eigenvalue equal to 1 (and two others which we will deal with in the next example). How do we compute the eigenvector? We return to the eigenvector equation with λ = 1, that is

    [  5   8  16 ] [a]   [a]
    [  4   1   8 ] [b] = [b]
    [ -4  -4 -11 ] [c]   [c]


    in which [a, b, c]T is the eigenvector. We can make our job a little bit tidier by shifting everything to the left hand side.

    [  4   8  16 ] [a]
    [  4   0   8 ] [b] = 0
    [ -4  -4 -12 ] [c]

    Our game now is to solve these equations for a, b and c. This we can do using Gaussian elimination. After the first stage, where we eliminate the lower triangular part, we obtain

    [ 4   8  16 ] [a]
    [ 0  -8  -8 ] [b] = 0
    [ 0   0   0 ] [c]

    Note that the last row is full of zeros. Are we surprised? No. Why not? Well, since we were told that the matrix A has λ = 1 as an eigenvalue we also know that det(A - 1I) = 0 which in turn tells us that at least one of the rows of A - 1I must be a (hidden) linear combination of the other rows (and Gaussian elimination reveals that hidden combination). So seeing a row of zeros is confirmation that we have det(A - 1I) = 0.

    Now let's return to the matter of solving the equations. Using back-substitution we find that every solution is of the form

    [a]     [ -2 ]
    [b] = α [ -1 ]
    [c]     [  1 ]

    where α is any number. We can set α = 1 and this will give us a typical eigenvector for the eigenvalue λ = 1. All other eigenvectors, for this eigenvalue, are parallel to this eigenvector (differing only in length). Is that what we expected, that there would be an infinite set of eigenvectors for a given eigenvalue? Yes: just look back at the definition, Av = λv. If v is a solution of this equation then so too is αv. This is exactly what we have just found.
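    A quick numerical confirmation of the two claims above, assuming the reconstructed signs A = [5 8 16; 4 1 8; -4 -4 -11] and eigenvector v = [-2, -1, 1]T:

```python
def matvec(A, v):
    """Matrix (list of rows) times column vector."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

A = [[5, 8, 16], [4, 1, 8], [-4, -4, -11]]
v = [-2, -1, 1]

assert matvec(A, v) == v          # A v = 1*v, so v is an eigenvector for 1

alpha = 7                         # any multiple of v is also an eigenvector
assert matvec(A, [alpha * x for x in v]) == [alpha * x for x in v]
```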

    Example 8.10 A double eigenvalue

    Now let's find the eigenvectors corresponding to λ = -3. We start with

    [  8   8  16 ] [a]
    [  4   4   8 ] [b] = 0
    [ -4  -4  -8 ] [c]

    After doing our Gaussian elimination we find

    [ 8  8  16 ] [a]
    [ 0  0   0 ] [b] = 0
    [ 0  0   0 ] [c]

    This time we find that we have two rows of zeros. This is not a surprise (agreed?) because we know that λ = -3 is a double root of the characteristic equation. With two rows of zeros we are forced to introduce two free parameters, say α and β, leading to

    [a]     [ -1 ]     [ -2 ]
    [b] = α [  1 ] + β [  0 ]
    [c]     [  0 ]     [  1 ]


    This shows that every eigenvector for λ = -3 is a linear combination of the pair of vectors [-1, 1, 0]T and [-2, 0, 1]T.

    Example 8.11

    Show that the eigenvectors of the previous example can also be constructed from linear combinations of [-1, 1, 0]T and [1, 1, -1]T.

    8.3 Decomposing Symmetric matrices

    Earlier on we asked: what is the point of computing eigenvectors and eigenvalues (other than pure fun)? Here we will develop some really nice results that follow once we know the eigenvalues and eigenvectors. Though many of the results we are about to explore also apply to general square matrices, they are much easier to present (and prove) for real symmetric matrices that possess a complete set of eigenvalues (i.e. no multiple roots in the characteristic equation). This restriction is not so severe as to be meaningless, for many of the matrices encountered in mathematical physics (and other fields) are of this class.

    Real symmetric matrices with complete eigenvalues

    If A is an N × N real symmetric matrix with N distinct eigenvalues λi, i = 1, 2, 3 ... N with corresponding eigenvectors vi, i = 1, 2, 3 ... N then

    • The eigenvalues are real, λ̄i = λi, i = 1, 2, 3 ... N, and

    • The eigenvectors for distinct eigenvalues are orthogonal, viT vj = 0, i ≠ j.

    We will only prove the first of these theorems; the second is left as an example for you to play with (it is not all that hard).

    We start by constructing v̄TAv (where the bar over the v means complex conjugation). This is just one number, that is, a 1 × 1 matrix. Thus it equals its own transpose. So we have

    v̄TAv = (v̄TAv)T      now use (BC)T = CT BT
          = (Av)T v̄      and again
          = vT AT v̄      but AT = A
          = vT A v̄

    Now from the definition Av = λv we also have, by taking complex conjugates and noting that A is real, Av̄ = λ̄v̄. Substitute this into the previous equation to obtain

    v̄TAv = vT λ̄v̄ = λ̄ vT v̄


    But look now at the left hand side. We can manipulate this as follows

    v̄TAv = v̄T (Av)
          = v̄T λv
          = λ v̄T v

    Compare this with our previous equation and you will see that we must have

    λ v̄T v = λ̄ vT v̄

    Finally we notice that vT v̄ = v̄T v = |v1|² + |v2|² + |v3|² + · · · + |vN|² ≠ 0. So this leaves just

    λ = λ̄

    Our job is done; we have proved that the eigenvalue must be real.

    Now here comes a very nice result. We will work with a simple 3 × 3 real symmetric matrix with 3 distinct eigenvalues simply to make the notation less cluttered than would be the case if we leapt straight into the general N × N case. We will have 3 eigenvalues α, β and γ. The corresponding eigenvectors will be u, v and w. Each eigenvector contains three numbers, so we will write v = [v1, v2, v3]T etc. We are free to stretch or shrink each eigenvector so let us assume that they have been scaled so that each is a unit vector, i.e. vTv = 1 etc. Now let's assemble the three separate eigenvalue equations into one big matrix equation, like this

    A [ u1 v1 w1 ]   [ u1 v1 w1 ] [ α 0 0 ]
      [ u2 v2 w2 ] = [ u2 v2 w2 ] [ 0 β 0 ]
      [ u3 v3 w3 ]   [ u3 v3 w3 ] [ 0 0 γ ]

    This looks pretty, but what can we do with this? Good question. The big trick is that we can easily (trust me) solve this set of equations for the matrix A. Really? Let's suppose that the 3 × 3 matrix to the right of A has an inverse. Then we could solve for A by multiplying by that inverse from the right, to obtain

    A = [ u1 v1 w1 ] [ α 0 0 ] [ u1 v1 w1 ]⁻¹
        [ u2 v2 w2 ] [ 0 β 0 ] [ u2 v2 w2 ]
        [ u3 v3 w3 ] [ 0 0 γ ] [ u3 v3 w3 ]

    This is nice, but can we compute the inverse? In fact we already have it; just look carefully at this equation

    [ u1 u2 u3 ] [ u1 v1 w1 ]   [ 1 0 0 ]
    [ v1 v2 v3 ] [ u2 v2 w2 ] = [ 0 1 0 ]
    [ w1 w2 w3 ] [ u3 v3 w3 ]   [ 0 0 1 ]

    This is just a simple way of stating that the eigenvectors are orthogonal and of unit length. This also shows that one matrix is the inverse of the other, that is

    [ u1 v1 w1 ]⁻¹   [ u1 u2 u3 ]
    [ u2 v2 w2 ]   = [ v1 v2 v3 ]
    [ u3 v3 w3 ]     [ w1 w2 w3 ]


    Now we have our final result

    A = [ u1 v1 w1 ] [ α 0 0 ] [ u1 u2 u3 ]
        [ u2 v2 w2 ] [ 0 β 0 ] [ v1 v2 v3 ]
        [ u3 v3 w3 ] [ 0 0 γ ] [ w1 w2 w3 ]

    This shows that any real symmetric 3 × 3 matrix, with three distinct eigenvalues, can be re-built from its eigenvalues and eigenvectors. This is not only a neat result, it is also an extremely useful result.

    In the following examples we will assume that the matrix A is a real symmetric 3 × 3 matrix with three distinct eigenvalues.
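    Here is the rebuilding result checked on a small symmetric matrix of my own choosing (2 × 2 rather than 3 × 3, purely to keep the numbers short): A = [2 1; 1 2] has eigenvalues 3 and 1 with orthonormal eigenvectors (1, 1)/√2 and (1, -1)/√2.

```python
import math

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

s = 1 / math.sqrt(2)
P = [[s, s], [s, -s]]                 # columns are the unit eigenvectors
D = [[3, 0], [0, 1]]                  # the matching eigenvalues
Pt = [list(col) for col in zip(*P)]   # the transpose, which is also the inverse

A = matmul(matmul(P, D), Pt)          # rebuild A from its eigen-data
```

    Up to floating-point round-off, A comes back as [2 1; 1 2], exactly the eigenvalue/eigenvector reconstruction claimed above.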

    Example 8.12

    Use the above expansion for A to compute A², A³, A⁴ and so on.

    Example 8.13

    Use the definition of an eigenvalue to show that A² has an eigenvalue λ², A³ an eigenvalue λ³ and so on. How does this compare with the previous example?

    Example 8.14

    Suppose that λ is an eigenvalue, with corresponding eigenvector v, of any square matrix B. Can you construct an eigenvalue and eigenvector for B⁻¹ (assuming that the inverse exists)?

    8.4 Matrix inverse

    The past few examples show, for our general class of real symmetric 3 × 3 matrices A with three distinct eigenvalues, that the powers of A can be written as

    An = [ u1 v1 w1 ] [ αⁿ  0   0  ] [ u1 u2 u3 ]
         [ u2 v2 w2 ] [ 0   βⁿ  0  ] [ v1 v2 v3 ]
         [ u3 v3 w3 ] [ 0   0   γⁿ ] [ w1 w2 w3 ]

    It is easy to see that this is true for any positive integer n. But it also applies (assuming α, β and γ are non-zero) when n is a negative integer. How can we be so sure? We know that A and A⁻¹ share the same eigenvectors. Good. We also know that if λ is an eigenvalue of A then 1/λ is an eigenvalue of A⁻¹. Finally we note that A⁻¹, like A, is a real symmetric 3 × 3 matrix with three (non-zero) distinct eigenvalues. Since we know all of its eigenvalues and eigenvectors we can use the eigenvalue expansion to write A⁻¹ as

    A⁻¹ = [ u1 v1 w1 ] [ 1/α  0    0   ] [ u1 u2 u3 ]
          [ u2 v2 w2 ] [ 0    1/β  0   ] [ v1 v2 v3 ]
          [ u3 v3 w3 ] [ 0    0    1/γ ] [ w1 w2 w3 ]


    which is just what we would have got by putting n = -1 in the previous equation. From here we could compute A⁻² = A⁻¹A⁻¹, A⁻³ = A⁻¹A⁻² and so on. In short, we have proved the above expression for An for any integer n, positive or negative.

    The above result (with n = -1) gives us yet another way to compute the inverse of A. Isn't this exciting (and unexpected)?

    8.5 The Cayley-Hamilton theorem: Not examinable

    What do we know about the three eigenvalues α, β and γ? We know that they are solutions of the characteristic polynomial

    0 = det(A - λI)

    which, after some simple algebra, leads to a polynomial of the form

    0 = λ³ + b1λ² + b2λ + b3

    where b1, b2 and b3 are some numbers (built from the numbers in A).

    Now let's do something un-expected (expect the un-expected). Let's replace the number λ with the 3 × 3 matrix A in the right hand side of the above polynomial. Where we encounter the powers of A we will use what we have learnt above, that we can use expansions in powers of the eigenvalues. Thus we have

\[
A^3 + b_1 A^2 + b_2 A + b_3 I =
\begin{pmatrix} u_1 & v_1 & w_1 \\ u_2 & v_2 & w_2 \\ u_3 & v_3 & w_3 \end{pmatrix}
\begin{pmatrix} \lambda^3 & 0 & 0 \\ 0 & \mu^3 & 0 \\ 0 & 0 & \nu^3 \end{pmatrix}
\begin{pmatrix} u_1 & u_2 & u_3 \\ v_1 & v_2 & v_3 \\ w_1 & w_2 & w_3 \end{pmatrix}
\]
\[
+\; b_1
\begin{pmatrix} u_1 & v_1 & w_1 \\ u_2 & v_2 & w_2 \\ u_3 & v_3 & w_3 \end{pmatrix}
\begin{pmatrix} \lambda^2 & 0 & 0 \\ 0 & \mu^2 & 0 \\ 0 & 0 & \nu^2 \end{pmatrix}
\begin{pmatrix} u_1 & u_2 & u_3 \\ v_1 & v_2 & v_3 \\ w_1 & w_2 & w_3 \end{pmatrix}
\]
\[
+\; b_2
\begin{pmatrix} u_1 & v_1 & w_1 \\ u_2 & v_2 & w_2 \\ u_3 & v_3 & w_3 \end{pmatrix}
\begin{pmatrix} \lambda & 0 & 0 \\ 0 & \mu & 0 \\ 0 & 0 & \nu \end{pmatrix}
\begin{pmatrix} u_1 & u_2 & u_3 \\ v_1 & v_2 & v_3 \\ w_1 & w_2 & w_3 \end{pmatrix}
\]
\[
+\; b_3
\begin{pmatrix} u_1 & v_1 & w_1 \\ u_2 & v_2 & w_2 \\ u_3 & v_3 & w_3 \end{pmatrix}
\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} u_1 & u_2 & u_3 \\ v_1 & v_2 & v_3 \\ w_1 & w_2 & w_3 \end{pmatrix}
\]

We can tidy this up by collecting the eigenvalue terms into one matrix

\[
A^3 + b_1 A^2 + b_2 A + b_3 I =
\begin{pmatrix} u_1 & v_1 & w_1 \\ u_2 & v_2 & w_2 \\ u_3 & v_3 & w_3 \end{pmatrix}
\begin{pmatrix} D_{11} & 0 & 0 \\ 0 & D_{22} & 0 \\ 0 & 0 & D_{33} \end{pmatrix}
\begin{pmatrix} u_1 & u_2 & u_3 \\ v_1 & v_2 & v_3 \\ w_1 & w_2 & w_3 \end{pmatrix}
\]

where

\[
D_{11} = \lambda^3 + b_1\lambda^2 + b_2\lambda + b_3, \qquad
D_{22} = \mu^3 + b_1\mu^2 + b_2\mu + b_3, \qquad
D_{33} = \nu^3 + b_1\nu^2 + b_2\nu + b_3
\]


However we know that each eigenvalue is a solution of the polynomial equation

\[ 0 = \lambda^3 + b_1\lambda^2 + b_2\lambda + b_3 \]

which means that D_{11} = D_{22} = D_{33} = 0, and thus the middle matrix is in fact the zero matrix. Thus we have shown that

\[ 0 = A^3 + b_1 A^2 + b_2 A + b_3 I \]

This is an example of the Cayley-Hamilton theorem. It is very much unexpected (agreed?).

It has been a long road but the journey was fun (yes it was) and it has led us to a famous theorem in the theory of matrices, the Cayley-Hamilton theorem. Though we have demonstrated the theorem for the particular case of real symmetric matrices with distinct eigenvalues, the theorem happens to be true for any square matrix. Proving that this is so is far from easy but sadly the margins of this textbook are too narrow to record the proof; you will have to wait until your second year of maths.

The Cayley-Hamilton theorem

Let A be any N×N matrix. Then define the polynomial P(λ) by

\[ P(\lambda) = \det(A - \lambda I) \]

where I is the N×N identity matrix. Then

\[ 0 = P(A) \]

Note that the eigenvalues of A are the solutions of 0 = P(λ).
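The theorem is easy to check numerically for a small matrix. In the sketch below (an illustration, not from the notes; the matrix A is an arbitrary made-up 2×2 example) the characteristic polynomial is P(s) = s² − trace(A)·s + det(A), and we verify that P(A) is the zero matrix.

```python
# A numerical illustration of Cayley-Hamilton (a sketch with a made-up
# matrix): for a 2x2 matrix, P(s) = s^2 - trace(A) s + det(A), and the
# theorem asserts that P(A) = 0 (the zero matrix).
A = [[4.0, 1.0],
     [2.0, 3.0]]

tr  = A[0][0] + A[1][1]                      # trace of A
det = A[0][0]*A[1][1] - A[0][1]*A[1][0]      # determinant of A

def matmul(X, Y):
    return [[sum(X[i][k]*Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A2 = matmul(A, A)
# P(A) = A^2 - tr*A + det*I, computed entry by entry
P_of_A = [[A2[i][j] - tr*A[i][j] + (det if i == j else 0.0)
           for j in range(2)] for i in range(2)]

assert all(abs(P_of_A[i][j]) < 1e-12 for i in range(2) for j in range(2))
print("P(A) is the zero matrix")
```

Note that the check works for any 2×2 matrix you substitute for A; the symmetry and distinct-eigenvalue assumptions of the derivation above are not needed, in line with the general statement of the theorem.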


    SCHOOL OF MATHEMATICAL SCIENCES

    ENG1091

    Mathematics for Engineering

    9. Integration


    9.1 Integration : Revision

Computing I = ∫ f(x) dx is no different from finding the function F(x) such that

\[ \frac{dF}{dx} = f(x) \]

The function F(x) is called the anti-derivative of f(x). Finding F(x) can be very tricky.

    Example 9.1

\[ I = \int \sin x \, dx \]

This means: find the function F(x) such that

\[ \frac{dF(x)}{dx} = \sin x \]

We know this to be F(x) = −cos x + C, where C is a constant of integration.

    Example 9.2

\[ I = \int \sin(3x) \, dx \]

For this we use a substitution,

\[ u = 3x, \qquad du = 3\,dx, \qquad dx = \frac{du}{3} \]

Thus we have

\[ I = \int \sin(3x)\,dx = \int (\sin u)\left(\frac{1}{3}\,du\right) = \frac{1}{3}\int \sin u \, du = \frac{1}{3}(-\cos u) + C \]

Now we flip back to the variable x,

\[ I = \int \sin(3x)\,dx = -\frac{1}{3}\cos(3x) + C \]

    Example 9.3

\[ I = \int x \exp(x^2) \, dx \]

Choose a substitution that targets the ugly bit in the integral. Thus put u(x) = x². Then du = 2x dx and x dx = du/2. This gives us

\[ I = \int \frac{1}{2}\exp(u)\,du = \frac{1}{2}\exp u + C = \frac{1}{2}\exp(x^2) + C \]
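Any claimed anti-derivative can be spot-checked by differentiating it numerically and comparing against the original integrand. The sketch below (not part of the notes) does this for Examples 9.1–9.3 with a central difference.

```python
# A numeric spot-check (a sketch): differentiate each worked anti-derivative
# with a central difference and compare against the original integrand at a
# few sample points.
from math import sin, cos, exp, isclose

def deriv(F, x, h=1e-6):
    # central-difference approximation to F'(x)
    return (F(x + h) - F(x - h)) / (2 * h)

checks = [
    (lambda x: -cos(x),          lambda x: sin(x)),          # Example 9.1
    (lambda x: -cos(3*x) / 3,    lambda x: sin(3*x)),        # Example 9.2
    (lambda x: exp(x**2) / 2,    lambda x: x * exp(x**2)),   # Example 9.3
]

for F, f in checks:
    for x in (-1.0, 0.3, 1.2):
        assert isclose(deriv(F, x), f(x), rel_tol=1e-6, abs_tol=1e-6)
print("all anti-derivatives check out")
```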


    9.1.1 Some basic integrals

You must remember the following integrals.

\[ \int \exp(x)\,dx = \exp(x) + C \]
\[ \int \cos(x)\,dx = \sin(x) + C \]
\[ \int \sin(x)\,dx = -\cos(x) + C \]
\[ \int x^n\,dx = \frac{1}{n+1}\,x^{n+1} + C, \qquad n \neq -1 \]
\[ \int \frac{1}{x}\,dx = \log(x) + C \]

    9.1.2 Substitution

If I = ∫ f(x) dx looks nasty, try changing the variable of integration. That is, put u = u(x) for some chosen function u(x) (usually inspired by some part of f(x)). Then we invert the function to find x = x(u) and substitute into the integral:

\[ I = \int f(x)\,dx = \int f(x(u))\,\frac{dx}{du}\,du \]

If we have chosen well, then this second integral will be easy to do.

    9.2 Integration by parts

This is a very powerful technique based upon the product rule for derivatives.

Recall that

\[ \frac{d(fg)}{dx} = g\frac{df}{dx} + f\frac{dg}{dx} \]

Now integrate both sides:

\[ \int \frac{d(fg)}{dx}\,dx = \int g\frac{df}{dx}\,dx + \int f\frac{dg}{dx}\,dx \]

But integration is the inverse of differentiation, thus we have

\[ fg = \int g\frac{df}{dx}\,dx + \int f\frac{dg}{dx}\,dx \]


which we can re-arrange to

\[ \int f\frac{dg}{dx}\,dx = fg - \int g\frac{df}{dx}\,dx \]

Thus we have converted one integral into another. The hope is that the second integral is easier than the first. This will depend on the choices we make for f and dg/dx.

    Example 9.4

\[ I = \int x \exp(x)\,dx \]

We have to split the integrand x exp(x) into two pieces, f and dg/dx. Choose

\[ f(x) = x, \qquad \frac{df}{dx} = 1, \qquad \frac{dg}{dx} = \exp(x), \qquad g(x) = \exp(x) \]

Then

\[ I = \int x\exp(x)\,dx = fg - \int g\frac{df}{dx}\,dx = x\exp(x) - \int 1\cdot\exp(x)\,dx = x\exp(x) - \exp(x) + C \]

    Example 9.5

\[ I = \int x \cos(x)\,dx \]

Choose

\[ f(x) = x, \qquad \frac{df}{dx} = 1, \qquad \frac{dg}{dx} = \cos(x), \qquad g(x) = \sin(x) \]

and thus

\[ I = \int x\cos(x)\,dx = x\sin(x) - \int 1\cdot\sin(x)\,dx = x\sin(x) + \cos(x) + C \]


    Example 9.6

\[ I = \int x \log(x)\,dx \]

Choose

\[ f(x) = x, \qquad \frac{df}{dx} = 1, \qquad \frac{dg}{dx} = \log(x), \qquad g(x) = ??? \]

We don't immediately know the anti-derivative for log(x), so we try another split. This time we choose

\[ f(x) = \log(x), \qquad \frac{df}{dx} = \frac{1}{x}, \qquad \frac{dg}{dx} = x, \qquad g(x) = \frac{1}{2}x^2 \]

and this leads to

\[ I = \int x\log(x)\,dx = \frac{1}{2}x^2\log(x) - \int \frac{1}{2}x^2\,\frac{1}{x}\,dx = \frac{1}{2}x^2\log(x) - \frac{1}{4}x^2 + C \]
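The by-parts results of Examples 9.4–9.6 can be spot-checked the same way as the substitution examples: differentiate each claimed anti-derivative numerically and compare with the integrand. A sketch (not part of the notes):

```python
# Numeric spot-check for the integration-by-parts examples (a sketch):
# each claimed anti-derivative, differentiated by a central difference,
# should reproduce the original integrand.
from math import exp, sin, cos, log, isclose

def deriv(F, x, h=1e-6):
    return (F(x + h) - F(x - h)) / (2 * h)

checks = [
    (lambda x: x*exp(x) - exp(x),        lambda x: x*exp(x)),   # Ex 9.4
    (lambda x: x*sin(x) + cos(x),        lambda x: x*cos(x)),   # Ex 9.5
    (lambda x: x*x*log(x)/2 - x*x/4,     lambda x: x*log(x)),   # Ex 9.6
]

for F, f in checks:
    for x in (0.5, 1.0, 2.0):   # positive x only, so log(x) is defined
        assert isclose(deriv(F, x), f(x), rel_tol=1e-6, abs_tol=1e-6)
print("by-parts results verified")
```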

    Example 9.7

Evaluate

\[ I = \int \frac{\log(x)}{x}\,dx \]

Two students were doing a maths test. The answer to the first question was log(1 + x). A weak student copied the answer from a good student, but didn't want to make it obvious that he was cheating, so he changed the answer slightly, to timber(1 + x).


    10. Hyperbolic functions


    10.1 Hyperbolic functions

Do you remember the time when you first encountered the sine and cosine functions? That would have been in early secondary school when you were studying trigonometry. These functions proved very useful when faced with problems to do with triangles. You may have been surprised when (many years later) you found that those same functions also proved useful when solving some integration problems. Here is a classic example.

    Example 10.1 Integration requiring trigonometric functions

Evaluate the following anti-derivative

\[ I = \int \frac{1}{\sqrt{1 - x^2}}\,dx \]

We will use a substitution, x(u) = sin u, as follows:

\[ I = \int \frac{1}{\sqrt{1 - x^2}}\,dx \qquad \text{put } x = \sin u \text{ and } dx = \cos u \, du \]
\[ \phantom{I} = \int \frac{1}{\cos u}\,\cos u\,du = \int du = u \]

and thus

\[ \int \frac{1}{\sqrt{1 - x^2}}\,dx = \sin^{-1} x \]

where, for simplicity, we have ignored the usual integration constant.

This example was very simple and contained nothing new. But if we had been given the following integral

\[ I = \int \frac{1}{\sqrt{1 + x^2}}\,dx \]

and continued to use a substitution based on simple sine and cosine functions then we would find the game to be rather drawn out. As you can easily verify, the correct substitution is x(u) = tan u and the integration (ignoring integration constants) leads to

\[ \int \frac{1}{\sqrt{1 + x^2}}\,dx = \log_e\left(x + \sqrt{1 + x^2}\right) \]


    Example 10.2

    Verify the above integration.

This situation is not all that satisfactory as it involves a series of tedious substitutions and takes far more work than the first example. Can we do a better job? Yes, but it involves a trick where we define new functions, known as hyperbolic functions, to do exactly that job.

For the moment we will leave behind the issue of integration and focus on this new class of functions. Later we will return to our integrals to show how easy the job can be.

    10.1.1 Hyperbolic functions

The hyperbolic functions are rather easy to define. It all begins with this pair of functions: sinh u, known as hyperbolic sine and pronounced either as "sinch" or "shine", and cosh u, known as hyperbolic cosine and pronounced as "cosh". They are defined by

\[ \sinh u = \frac{1}{2}\left(e^u - e^{-u}\right), \qquad \cosh u = \frac{1}{2}\left(e^u + e^{-u}\right) \]


From sinh and cosh we can build four more hyperbolic functions, named tanh, cotanh, sech and cosech. The following table provides some basic facts (which again you should verify).

Properties of Hyperbolic functions. Pt. 2

\[ \tanh x = \frac{\sinh x}{\cosh x}, \qquad \mathrm{cotanh}\,x = \frac{\cosh x}{\sinh x} \]
\[ \mathrm{sech}\,x = \frac{1}{\cosh x}, \qquad \mathrm{cosech}\,x = \frac{1}{\sinh x} \]
\[ \mathrm{sech}^2 x + \tanh^2 x = 1 \]
\[ \frac{d\tanh x}{dx} = \mathrm{sech}^2\,x, \qquad \frac{d\,\mathrm{cotanh}\,x}{dx} = -\mathrm{cosech}^2\,x \]
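These identities follow directly from the exponential definitions, and a numerical spot-check is straightforward. The sketch below (not part of the notes) builds sinh and cosh from exponentials and verifies sech²x + tanh²x = 1 together with the companion identity cosh²x − sinh²x = 1.

```python
# Checking the hyperbolic definitions and identities (a sketch): build
# sinh and cosh directly from exponentials, then test the identities at
# several sample points.
from math import exp, isclose

def sinh(u): return (exp(u) - exp(-u)) / 2
def cosh(u): return (exp(u) + exp(-u)) / 2

for x in (-2.0, -0.5, 0.0, 0.7, 3.0):
    tanh_x = sinh(x) / cosh(x)
    sech_x = 1 / cosh(x)
    assert isclose(sech_x**2 + tanh_x**2, 1.0, rel_tol=1e-12)
    # cosh^2 - sinh^2 = 1: the identity behind the name "hyperbolic"
    # (the point (cosh u, sinh u) lies on the hyperbola x^2 - y^2 = 1)
    assert isclose(cosh(x)**2 - sinh(x)**2, 1.0, rel_tol=1e-9)
print("hyperbolic identities verified")
```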

    10.2 Special functions: not examinable

In the previous examples we conveniently ignored the integration constants. But we should not be so flippant; instead we should have written

\[ \sinh^{-1} x = C + \int_0^x \frac{1}{\sqrt{1 + u^2}}\,du \]

Note that the integral on the right hand side vanishes when x = 0 and thus C = sinh^{-1}(0). The good thing is that we know that sinh(0) = 0, and this fact can be used to properly determine the integration constant, that is C = 0, and thus we have

\[ \sinh^{-1} x = \int_0^x \frac{1}{\sqrt{1 + u^2}}\,du \]

Now we come to an interesting re-interpretation. We could have begun our discussions on hyperbolic sine from this very equation. That is, we could use the right hand side to define the (inverse) hyperbolic sine. But now you might ask: how do we compute a number for sinh^{-1}(0.45)? One method would be to compute an approximation by estimating the area under the curve. A better method is to evaluate the right hand side using log_e(x + √(1 + x²)) as the anti-derivative. Either way it is a bit messy, but it does establish the point that this integral contains everything we could ever wish to know about sinh^{-1}.

What is the point of this discussion? Well, it shows how we can turn adversity into advantage. Where previously we had a difficult integral (not impossible, but difficult nonetheless) we invented new functions (the hyperbolic functions) that made such integrals trivial. The same idea can be applied to many, many more integrals. For example, the following integral

\[ \mathrm{erf}(x) = \frac{2}{\sqrt{\pi}} \int_0^x e^{-u^2}\,du \]

defines a special function known as the error function. It is used extensively in statistics and diffusion problems (such as the flow of heat). For this integral there is no known anti-derivative and thus values for erf(x) can only be obtained by some other means (e.g., the area under the graph).
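The "area under the graph" method really does work for erf. The sketch below (not part of the notes) applies a simple midpoint rule to the integrand and compares the result against Python's built-in math.erf.

```python
# Estimating erf(x) as an area under the graph (a sketch): a midpoint rule
# applied to (2/sqrt(pi)) exp(-u^2) on [0, x], checked against math.erf.
from math import exp, pi, sqrt, erf, isclose

def erf_estimate(x, n=10_000):
    h = x / n
    # sum of the integrand at the midpoint of each of the n strips
    total = sum(exp(-((i + 0.5) * h) ** 2) for i in range(n))
    return 2 / sqrt(pi) * h * total

for x in (0.5, 1.0, 2.0):
    assert isclose(erf_estimate(x), erf(x), abs_tol=1e-6)
print("midpoint-rule estimates agree with math.erf")
```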

Euclid's fashion tip: when attending Hyperbolic Functions always choose neat and casual.


    11. Improper integrals


    11.1 Improper Integrals

When a definite integral contains an infinity, either in the integrand or in the limits of integration, we say that we have an improper integral. All other integrals are proper integrals.

    Example 11.1

\[ I = \int_0^1 \frac{dx}{\sqrt{x}}, \qquad I = \int_0^\infty \frac{dx}{1 + x^2} \]
\[ I = \int_{-1}^{1} \frac{dx}{x^2}, \qquad I = \int_0^{\pi/2} \tan(x)\,dx \]

    Because of the infinity we must treat the improper integral with care.

    11.1.1 A standard strategy

We construct a related proper integral, say I(ε), that depends on a parameter ε. We choose I(ε) such that we recover the original integral as a limit, say as ε → 0.

    Example 11.2

\[ I = \int_0^1 \frac{dx}{\sqrt{x}} \]

For this we construct a related proper integral

\[ I(\varepsilon) = \int_\varepsilon^1 \frac{dx}{\sqrt{x}}, \qquad \varepsilon > 0 \]

Since this is a proper integral for ε > 0 we can evaluate it directly,

\[ I(\varepsilon) = \int_\varepsilon^1 \frac{dx}{\sqrt{x}} = \left[2\sqrt{x}\right]_\varepsilon^1 = 2 - 2\sqrt{\varepsilon} \]

Next we evaluate the limit as ε → 0:

\[ \lim_{\varepsilon\to 0} I(\varepsilon) = \lim_{\varepsilon\to 0}\left(2 - 2\sqrt{\varepsilon}\right) = 2 \]

As this answer is well defined (i.e. finite and independent of the way the limit is approached) we are justified in defining this to be the value of the improper integral:

\[ I = \int_0^1 \frac{dx}{\sqrt{x}} = \lim_{\varepsilon\to 0} \int_\varepsilon^1 \frac{dx}{\sqrt{x}} = 2 \]
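The limiting process can be watched numerically. A sketch (not part of the notes) evaluating I(ε) = 2 − 2√ε for a sequence of shrinking ε:

```python
# Watching the limit in the example above (a sketch): the truncated proper
# integral from eps to 1 of dx/sqrt(x) equals 2 - 2*sqrt(eps); as eps
# shrinks the value climbs monotonically toward 2.
from math import sqrt

def I(eps):
    return 2 - 2 * sqrt(eps)   # exact value of the truncated integral

values = [I(10.0 ** -k) for k in range(1, 9)]
assert all(a < b for a, b in zip(values, values[1:]))   # increasing
assert abs(values[-1] - 2.0) < 1e-3                     # near the limit 2
print(values[-1])
```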


In this case we say we have a convergent improper integral. Had we not got a finite answer we would say that we had a divergent improper integral.

    Example 11.3

\[ I = \int_0^1 \frac{dx}{x^2} \]

This time we choose

\[ I(\varepsilon) = \int_\varepsilon^1 \frac{dx}{x^2}, \qquad \varepsilon > 0 \]

which we can easily evaluate,

\[ I(\varepsilon) = \frac{1}{\varepsilon} - 1 \]

and thus

\[ \lim_{\varepsilon\to 0} I(\varepsilon) = \infty \]

This is not finite, so this time we say that the improper integral is a divergent improper integral.

    Example 11.4

What do you think of the following calculation? Be warned: the answer is wrong!

\[ I = \int_{-1}^{1} \frac{dx}{x^2} = \left[-\frac{1}{x}\right]_{-1}^{1} = -2 \]

    Example 11.5

\[ I = \int_{-1}^{1} \frac{dx}{x^3} \]

This time we have an improper integral because the integrand is singular inside the region of integration. We create our related proper integral by cutting out the singular point. Thus we define two separate proper integrals,

\[ I_1(\varepsilon) = \int_{-1}^{-\varepsilon} \frac{dx}{x^3}, \qquad \varepsilon > 0 \]
\[ I_2(\delta) = \int_{\delta}^{1} \frac{dx}{x^3}, \qquad \delta > 0 \]

If both I₁ and I₂ converge (i.e. have finite values) we say that I also converges, with the value

\[ I = \lim_{\varepsilon\to 0} I_1(\varepsilon) + \lim_{\delta\to 0} I_2(\delta) \]

But for our case

\[ I_1(\varepsilon) = \frac{1}{2} - \frac{1}{2\varepsilon^2}, \qquad \lim_{\varepsilon\to 0} I_1(\varepsilon) = -\infty \]
\[ I_2(\delta) = -\frac{1}{2} + \frac{1}{2\delta^2}, \qquad \lim_{\delta\to 0} I_2(\delta) = +\infty \]

Thus neither I₁ nor I₂ converges and thus I is a divergent improper integral.

    This may seem easy (it is) but it does require some care as the next example shows.

    Example 11.6

Suppose we chose I₁ and I₂ as before, but we set

\[ \varepsilon = \frac{\delta}{\sqrt{1 + 4\delta^2}} \]

Then we would find that

\[ I_1(\varepsilon) + I_2(\delta) = \frac{1}{2\delta^2} - \frac{1}{2\varepsilon^2} = -2 \]

for all δ > 0 and therefore

\[ \lim_{\delta\to 0}\left(I_1 + I_2\right) = -2 \]

But had we chosen

\[ \varepsilon = \delta \]

we would have found that

\[ \lim_{\delta\to 0}\left(I_1 + I_2\right) = 0 \]

How can this be? The answer is that in computing I₁ + I₂ we are eventually trying to make sense of ∞ − ∞. Depending on how we approach the limit we can get any answer we like for ∞ − ∞.

Consequently, when we say that an integral is divergent we mean that either its value is infinite or that it has no single well defined value.

A father who is very much concerned about his son's bad grades in maths decides to register him at a Catholic school. After his first term there, the son brings home his report card: he's getting As in maths. The father is, of course, pleased, but wants to know: "Why are your maths grades suddenly so good?" "You know," the son explains, "the first day when I walked into the classroom, I instantly knew that this place means business!" "And why was that?" inquired the father. "Dad! They've got a guy hanging on the wall and they've nailed him to a plus sign!" exclaimed his son.


    12. Comparison test for convergence


    12.1 Comparison Test for Improper Integrals

    Example 12.1

\[ I = \int_2^\infty e^{-x^2}\,dx \]

For this integral we would choose

\[ I(L) = \int_2^L e^{-x^2}\,dx \]

and, provided that the limit exists, we would write

\[ I = \lim_{L\to\infty} I(L) \]

The trouble is we do not have a simple anti-derivative for e^{−x²}. The trick here is to look at a simpler (improper) integral for which we can find a simple anti-derivative. Note that

\[ 0 < e^{-x^2} < e^{-x} \qquad \text{for } 2 < x \]

and hence I(L) < ∫₂^∞ e^{−x} dx = e^{−2}. Since I(L) increases with L and is bounded above, the improper integral I converges.


\[ 0 \le a_n \le b_n \qquad \text{for all } n > m \]

where m is some (possibly large) number.

    Example 14.1

Use the comparison test to show that the infinite series

\[ S = \sum_{n=0}^{\infty} \frac{1}{n^2 + n + 1} \]

is convergent.
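Numerical evidence in the spirit of the comparison test (a sketch, not a proof, and not part of the notes): the partial sums increase but stay below the convergent bound 1 + Σ 1/n², since 1/(n² + n + 1) < 1/n² for every n ≥ 1.

```python
# Numerical evidence for the series sum 1/(n^2+n+1) (a sketch, not a
# proof): its partial sums are increasing yet bounded above by
# 1 + sum 1/k^2, because 1/(n^2+n+1) < 1/n^2 for n >= 1.
partial = 0.0
history = []
for n in range(100_000):
    partial += 1.0 / (n * n + n + 1)
    if n in (10, 100, 1_000, 10_000, 99_999):
        history.append(partial)

bound = 1.0 + sum(1.0 / (k * k) for k in range(1, 100_000))  # < 1 + pi^2/6

assert all(a < b for a, b in zip(history, history[1:]))  # increasing
assert history[-1] < bound                               # bounded above
print(round(history[-1], 6))
```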

    Example 14.2

    Repeat the previous example, but this time use the integral test.


    15. Integral and ratio tests.


    Example 15.1

It was claimed earlier that

\[ S = 1 + \frac{1}{2} + \frac{1}{3} + \cdots + \frac{1}{n} + \cdots = \sum_{k=1}^{\infty} \frac{1}{k} \]

diverges. How can we be so sure?

Here is a table of the partial sums:

n      10        100       1000      10000     100000    1000000
S_n    2.92897   5.18738   7.48547   9.78761   12.0901   14.3927

This seems to suggest that S_n increases by a (nearly) constant amount every time we multiply the number of terms by 10. This is not a proof that the series diverges, but it is a strong indication!

How might we estimate S_n? There is a trick (yes, another trick) in which we use a specially constructed set of rectangles to estimate S_n by computing the total area contained in those rectangles.

[Figure: the curve y(x) = 1/x for 0 < x < 12, together with two sets of unit-width rectangles: one set lying above the curve (the over-estimate) and one set lying below it (the under-estimate).]

For the series 1 + 1/2 + 1/3 + ⋯ + 1/n we can construct two sets of rectangles, one set lying below the curve y(x) = 1/x, the other set lying above (as in the diagram). The crucial point is that, as each rectangle has unit width and height equal to 1/x, its area matches exactly the value of a term in the series. Thus we have the following.

    26-Jul-2013 84

  • School of Mathematical Sciences Monash University

Below rectangles: B(n) = 1 + 1/2 + 1/3 + ⋯ + 1/(n−1) = S_{n−1}

Above rectangles: A(n) = 1/2 + 1/3 + ⋯ + 1/n = S_n − 1

Clearly we also have

\[ B(n) > \int_1^n \frac{dx}{x} > A(n) \]

and since ∫₁ⁿ dx/x = log n this gives S_{n−1} > log n > S_n − 1. As log n grows without bound, so must the partial sums S_n: the Harmonic series diverges.

The ratio test: for a series S = Σ aₙ with positive terms, form the limit

\[ L = \lim_{n\to\infty} \frac{a_{n+1}}{a_n} \]

Then

Convergence : When L < 1
Divergence : When L > 1
Indeterminate : When L = 1

    Example 15.4

Show, using the ratio test, that the geometric series S = Σ_{n=0}^{∞} sⁿ is convergent when 0 < s < 1.

    Example 15.5

Show that

\[ S = \sum_{n=0}^{\infty} \frac{2^n}{n^2 + 1} \]

is a divergent series.

    Example 15.6

    What does the ratio test tell you about the Harmonic series?
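The three examples can be explored numerically by computing the ratio a_{n+1}/a_n at a large n. A sketch (not part of the notes):

```python
# Ratio-test ratios a_{n+1}/a_n at a large n for the examples above
# (a sketch): the geometric series with s = 0.5, the series 2^n/(n^2+1),
# and the Harmonic series 1/n.
def ratio(a, n=100):
    return a(n + 1) / a(n)

geometric = ratio(lambda n: 0.5 ** n)                 # L = 0.5 < 1: converges
growing   = ratio(lambda n: 2.0 ** n / (n * n + 1))   # L = 2   > 1: diverges
harmonic  = ratio(lambda n: 1.0 / n)                  # L = 1: indeterminate

assert abs(geometric - 0.5) < 1e-12
assert growing > 1.9
assert abs(harmonic - 1.0) < 0.02
print(geometric, round(growing, 4), round(harmonic, 4))
```

Note that the ratio for the Harmonic series tends to 1 from below, so the test is indeterminate there even though the series in fact diverges.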


    16. Comparison test, alternating series.


    16.1 Alternating series

So far we have only considered infinite series in which all of the terms were positive. But there are many infinite series in which the terms alternate from positive to negative. Here are two examples:

\[ S = 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots = \sum_{n=0}^{\infty} \frac{(-1)^n}{n+1} \]

\[ S = 1 - \frac{1}{3!} + \frac{1}{5!} - \frac{1}{7!} + \cdots = \sum_{n=0}^{\infty} \frac{(-1)^n}{(2n+1)!} \]

These are known as alternating series. There is one very simple test for the convergence of an alternating series.

If |aₙ| → 0 as n → ∞ then we have

Convergence : When |a_{n+1}| < |aₙ| for all n > m
Divergence : In all other cases

where m is some (possibly large) number.

    Example 16.1

Use the alternating series test to establish the convergence or otherwise of

\[ S = 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots = \sum_{n=0}^{\infty} \frac{(-1)^n}{n+1} \]

    Example 16.2

Show that

\[ S = 1 - \frac{1}{3!} + \frac{1}{5!} - \frac{1}{7!} + \cdots = \sum_{n=0}^{\infty} \frac{(-1)^n}{(2n+1)!} \]

is convergent.

    Example 16.3

Draw a graph of S_n versus n for a typical alternating series. Study this graph and explain why the infinite series is convergent when |a_{n+1}| < |aₙ|.


    16.2 Non-positive infinite series

In general, an infinite series may contain any mix of positive and negative terms. We have looked at two classes: one where all the terms are positive, and another where the terms alternate in sign.

What should we do with a general series? Given the general series

\[ S = \sum_{n=0}^{\infty} a_n \]

we construct a related series, S̄, by taking the absolute value of each term,

\[ \bar{S} = \sum_{n=0}^{\infty} |a_n| \]

The mix of positive and negative terms in S actually works in favour of convergence for S. Thus if we can show that S̄ is convergent then we will also have shown that S is convergent. On the other hand, if we find that S̄ is divergent then we can not say anything about S.

Convergence : If S̄ converges, then so too does S
Indeterminate : In all other cases

This type of convergence is known as absolute convergence. If a series converges but is not absolutely convergent, we say that it is conditionally convergent.

    Example 16.4

Show that −S̄ ≤ S ≤ S̄ and hence prove the previous assertion, that S converges whenever S̄ converges.

    Example 16.5

Show that

\[ S = 1 - \frac{1}{2^2} + \frac{1}{3^2} - \frac{1}{4^2} + \cdots = \sum_{n=0}^{\infty} \frac{(-1)^n}{(n+1)^2} \]

converges.

    16.3 Re-ordering an infinite series

    There is one very useful consequence of absolute convergence.

If a series is absolutely convergent then you may re-order the terms in the infinite series in whatever way you like; the series will converge to the same value every time.


    Example 16.6

We know that the series

\[ S = 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots = \sum_{n=0}^{\infty} \frac{(-1)^n}{n+1} \]

is convergent. But suppose we grouped all the positive terms into one series and all the negative terms into another,

\[ S = \left(1 + \frac{1}{3} + \frac{1}{5} + \frac{1}{7} + \cdots\right) - \left(\frac{1}{2} + \frac{1}{4} + \frac{1}{6} + \cdots\right) = S_+ - S_- \]

The two series S₊ and S₋ can be shown (by comparison with the Harmonic series) to diverge. Thus we would have

\[ S = \infty - \infty \]

and the right hand side is meaningless: it can be made to have any value you choose (by taking suitable limits for the partial sums for S₊ and S₋).

The moral is: do not re-order a conditionally convergent series!
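For contrast, the un-reordered series behaves perfectly well. A sketch (not part of the notes; the value of the limit, log 2, is quoted here rather than proved) showing consecutive partial sums straddling the limit:

```python
# The alternating series 1 - 1/2 + 1/3 - ... (a sketch): a partial sum
# ending on a negative term sits below the limit, one ending on a positive
# term sits above it, and both close in on log(2).
from math import log

def partial_sum(N):
    return sum((-1) ** n / (n + 1) for n in range(N))

S_even = partial_sum(100_000)   # ends on a negative term: below the limit
S_odd  = partial_sum(100_001)   # ends on a positive term: above the limit

assert S_even < log(2) < S_odd
assert abs(S_even - log(2)) < 1e-4
print(S_even, S_odd)
```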


    17. Power series


    17.1 Simple power series

Here are some typical examples of what are known as power series:

\[ f(x) = 1 + x + x^2 + x^3 + \cdots + x^n + \cdots \]

\[ g(x) = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots + \frac{x^n}{n!} + \cdots \]

\[ h(x) = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \cdots + (-1)^n \frac{x^{2n}}{(2n)!} + \cdots \]

Each power series is a function of one variable, in this case x, and so they are also referred to as power series in x.

We might like to ask:

► For what values of x does the series converge?

► If the series converges, what value does it converge to?

The first question is a simple extension of the ideas we developed in the previous lectures, with the one exception that the convergence of the series may now depend upon the choice of x.

The second question is generally much harder to answer. We will find, in the next lecture, that it is easier to start with a known function and then build a power series that has the same values as the function (for values of x for which the power series converges). By this method (Taylor series) we will see that the three power series above are representations of the functions f(x) = 1/(1 − x), g(x) = eˣ and h(x) = cos(x).
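These claimed closed forms are easy to test at a point inside each region of convergence. A sketch (not part of the notes), truncating each series after 30 terms at x = 0.3:

```python
# Checking the claimed closed forms (a sketch): truncated partial sums of
# the three power series against 1/(1-x), exp(x) and cos(x) at a point
# well inside the region of convergence.
from math import exp, cos, factorial, isclose

x, N = 0.3, 30
f = sum(x**n for n in range(N))                                 # geometric
g = sum(x**n / factorial(n) for n in range(N))                  # exponential
h = sum((-1)**n * x**(2*n) / factorial(2*n) for n in range(N))  # cosine

assert isclose(f, 1 / (1 - x), abs_tol=1e-12)
assert isclose(g, exp(x), abs_tol=1e-12)
assert isclose(h, cos(x), abs_tol=1e-12)
print("partial sums match the closed forms at x =", x)
```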

    17.2 The general power series

A power series in x around the point x = a is always of the form

\[ a_0 + a_1(x-a) + a_2(x-a)^2 + \cdots + a_n(x-a)^n + \cdots = \sum_{n=0}^{\infty} a_n (x-a)^n \]

The point x = a is often said to be the point around which the power series is based.


    17.3 Examples of Power Series

    In a pr