Algebra Lineal y Otros Temas (Linear Algebra and Other Topics)


  • 8/8/2019 Algebra Lineal y Otros Temas

    1/368

    LECTURE NOTES ON

    MATHEMATICAL METHODS

    Mihir Sen
    Joseph M. Powers

    Department of Aerospace and Mechanical Engineering
    University of Notre Dame
    Notre Dame, Indiana 46556-5637
    USA

    updated April 9, 2003


    Contents

    1 Multi-variable calculus 11
      1.1 Implicit functions 11
      1.2 Functional dependence 14
      1.3 Coordinate transformations 17
        1.3.1 Jacobians and metric tensors 19
        1.3.2 Covariance and contravariance 25
      1.4 Maxima and minima 30
        1.4.1 Derivatives of integral expressions 31
        1.4.2 Calculus of variations 33
      1.5 Lagrange multipliers 37
      Problems 39

    2 First-order ordinary differential equations 43
      2.1 Separation of variables 43
      2.2 Homogeneous equations 44
      2.3 Exact equations 45
      2.4 Integrating factors 47
      2.5 Bernoulli equation 50
      2.6 Riccati equation 51
      2.7 Reduction of order 52
        2.7.1 y absent 52
        2.7.2 x absent 53
      2.8 Uniqueness and singular solutions 55
      2.9 Clairaut equation 57
      Problems 59

    3 Linear ordinary differential equations 61
      3.1 Linearity and linear independence 61
      3.2 Complementary functions for equations with constant coefficients 63
        3.2.1 Arbitrary order 63
        3.2.2 First order 64
        3.2.3 Second order 65
      3.3 Complementary functions for equations with variable coefficients 66
        3.3.1 One solution to find another 66
        3.3.2 Euler equation 67
      3.4 Particular solutions 67
        3.4.1 Method of undetermined coefficients 68
        3.4.2 Variation of parameters 70
        3.4.3 Operator D 71
        3.4.4 Green's functions 74
      Problems 78

    4 Series solution methods 81
      4.1 Power series 81
        4.1.1 First-order equation 81
        4.1.2 Second-order equation 84
          4.1.2.1 Ordinary point 84
          4.1.2.2 Regular singular point 86
          4.1.2.3 Irregular singular point 89
        4.1.3 Higher order equations 89
      4.2 Perturbation methods 91
        4.2.1 Algebraic and transcendental equations 91
        4.2.2 Regular perturbations 95
        4.2.3 Strained coordinates 97
        4.2.4 Multiple scales 102
        4.2.5 Harmonic approximation 104
        4.2.6 Boundary layers 105
        4.2.7 WKB method 109
        4.2.8 Solutions of the type e^{S(x)} 112
        4.2.9 Repeated substitution 113
      Problems 113

    5 Special functions 121
      5.1 Sturm-Liouville equations 121
        5.1.1 Linear oscillator 123
        5.1.2 Legendre equation 125
        5.1.3 Chebyshev equation 127
        5.1.4 Hermite equation 129
        5.1.5 Laguerre equation 130
        5.1.6 Bessel equation 132
          5.1.6.1 First and second kind 132
          5.1.6.2 Third kind 135
          5.1.6.3 Modified Bessel functions 135
          5.1.6.4 Ber and bei functions 135
      5.2 Representation of arbitrary functions 135
      Problems 140

    6 Vectors and tensors 143
      6.1 Cartesian index notation 143
      6.2 Cartesian tensors 145
        6.2.1 Direction cosines 145
          6.2.1.1 Scalars 146
          6.2.1.2 Vectors 147
          6.2.1.3 Tensors 147
        6.2.2 Matrix representation 148
        6.2.3 Transpose of a tensor, symmetric and anti-symmetric tensors 148
        6.2.4 Dual vector of a tensor 149
        6.2.5 Principal axes 150
      6.3 Algebra of vectors 151
        6.3.1 Definition and properties 152
        6.3.2 Scalar product (dot product, inner product) 152
        6.3.3 Cross product 152
        6.3.4 Scalar triple product 153
        6.3.5 Identities 153
      6.4 Calculus of vectors 153
        6.4.1 Vector function of single scalar variable 153
        6.4.2 Differential geometry of curves 155
          6.4.2.1 Curves on a plane 156
          6.4.2.2 Curves in 3-dimensional space 157
      6.5 Line and surface integrals 160
        6.5.1 Line integrals 160
        6.5.2 Surface integrals 161
      6.6 Differential operators 161
        6.6.1 Gradient of a scalar 163
        6.6.2 Divergence 164
          6.6.2.1 Vectors 164
          6.6.2.2 Tensors 164
        6.6.3 Curl of a vector 164
        6.6.4 Laplacian 165
          6.6.4.1 Scalar 165
          6.6.4.2 Vector 165
        6.6.5 Identities 165
      6.7 Special theorems 166
        6.7.1 Path independence 166
        6.7.2 Green's theorem 166
        6.7.3 Gauss's theorem 167
        6.7.4 Green's identities 169
        6.7.5 Stokes' theorem 169
        6.7.6 Leibniz's theorem 170
      6.8 Orthogonal curvilinear coordinates 171
      Problems 172

    7 Linear analysis 175
      7.1 Sets 175
      7.2 Differentiation and integration 176
        7.2.1 Frechet derivative 176
        7.2.2 Riemann integral 176
        7.2.3 Lebesgue integral 177
      7.3 Vector spaces 179
        7.3.1 Normed spaces 182
        7.3.2 Inner product spaces 190
          7.3.2.1 Hilbert space 191
          7.3.2.2 Non-commutation of the inner product 192
          7.3.2.3 Minkowski space 193
          7.3.2.4 Orthogonality 197
          7.3.2.5 Gram-Schmidt procedure 198
          7.3.2.6 Representation of a vector 199
          7.3.2.7 Parseval's equation, convergence, and completeness 205
        7.3.3 Reciprocal bases 205
      7.4 Operators 208
        7.4.1 Linear operators 209
        7.4.2 Adjoint operators 210
        7.4.3 Inverse operators 213
        7.4.4 Eigenvalues and eigenvectors 216
      7.5 Equations 225
      7.6 Method of weighted residuals 229
      Problems 237

    8 Linear algebra 245
      8.1 Determinants and rank 245
      8.2 Matrix algebra 246
        8.2.1 Column, row, left and right null spaces 247
        8.2.2 Matrix multiplication 249
        8.2.3 Definitions and properties 250
          8.2.3.1 Diagonal matrices 250
          8.2.3.2 Inverse 252
          8.2.3.3 Similar matrices 253
        8.2.4 Equations 253
          8.2.4.1 Overconstrained systems 253
          8.2.4.2 Underconstrained systems 256
          8.2.4.3 Over- and underconstrained systems 257
          8.2.4.4 Square systems 259
        8.2.5 Eigenvalues and eigenvectors 261
        8.2.6 Complex matrices 263
      8.3 Orthogonal and unitary matrices 266
        8.3.1 Orthogonal matrices 266
        8.3.2 Unitary matrices 267
      8.4 Matrix decompositions 268
        8.4.1 L D U decomposition 268
        8.4.2 Echelon form 270
        8.4.3 Q R decomposition 273
        8.4.4 Diagonalization 274
        8.4.5 Jordan canonical form 280
        8.4.6 Schur decomposition 282
        8.4.7 Singular value decomposition 282
        8.4.8 Hessenberg form 284
      8.5 Projection matrix 285
      8.6 Method of least squares 286
        8.6.1 Unweighted least squares 286
        8.6.2 Weighted least squares 287
      8.7 Matrix exponential 289
      8.8 Quadratic form 291
      8.9 Moore-Penrose inverse 294
      Problems 296

    9 Dynamical systems 301
      9.1 Paradigm problems 301
        9.1.1 Autonomous example 301
        9.1.2 Non-autonomous example 305
      9.2 General theory 307
      9.3 Iterated maps 308
      9.4 High order scalar differential equations 311
      9.5 Linear systems 312
        9.5.1 Homogeneous equations with constant A 313
          9.5.1.1 n eigenvectors 314
          9.5.1.2 < n eigenvectors 315
          9.5.1.3 Summary of method 316
          9.5.1.4 Alternative method 316
          9.5.1.5 Fundamental matrix 319
        9.5.2 Inhomogeneous equations 320
          9.5.2.1 Undetermined coefficients 321
          9.5.2.2 Variation of parameters 321
      9.6 Nonlinear equations 322
        9.6.1 Definitions 322
        9.6.2 Linear stability 322
        9.6.3 Lyapunov functions 324
        9.6.4 Hamiltonian systems 327
      9.7 Fractals 329
        9.7.1 Cantor set 330
        9.7.2 Koch curve 330
        9.7.3 Weierstrass function 331
        9.7.4 Mandelbrot and Julia sets 331
      9.8 Bifurcations 332
        9.8.1 Pitchfork bifurcation 333
        9.8.2 Transcritical bifurcation 335
        9.8.3 Saddle-node bifurcation 336
        9.8.4 Hopf bifurcation 337
      9.9 Lorenz equations 338
        9.9.1 Linear stability 338
        9.9.2 Center manifold projection 341
      Problems 344

    10 Appendix 353
      10.1 Trigonometric relations 353
      10.2 Routh-Hurwitz criterion 354
      10.3 Infinite series 355
      10.4 Asymptotic expansions 356
        10.4.1 Expansion of integrals 356
        10.4.2 Integration and differentiation of series 356
      10.5 Limits and continuity 356
      10.6 Special functions 356
        10.6.1 Gamma function 356
        10.6.2 Beta function 356
        10.6.3 Riemann zeta function 357
        10.6.4 Error function 357
        10.6.5 Fresnel integrals 358
        10.6.6 Sine- and cosine-integral functions 358
        10.6.7 Elliptic integrals 359
        10.6.8 Gauss's hypergeometric function 360
        10.6.9 δ distribution and Heaviside function 360
      10.7 Singular integrals 361
      10.8 Chain rule 361
      10.9 Complex numbers 362
        10.9.1 Euler's formula 362
        10.9.2 Polar and Cartesian representations 363
        10.9.3 Cauchy-Riemann equations 364
      Problems 365

    Bibliography 367


    Preface

    These are lecture notes for AME 561 Mathematical Methods I, the first of a pair of courses on applied mathematics taught at the Department of Aerospace and Mechanical Engineering of the University of Notre Dame. Most of the students in this course are beginning graduate students in engineering coming from a wide variety of backgrounds. The objective of the course is to provide a survey of a variety of topics in applied mathematics, including multidimensional calculus, ordinary differential equations, perturbation methods, vectors and tensors, linear analysis, linear algebra, and dynamical systems. The companion course, AME 562, covers complex variables, integral transforms, and partial differential equations.

    These notes emphasize method and technique over rigor and completeness; the student should call on textbooks and other reference materials. It should also be remembered that practice is essential to the learning process; the student would do well to apply the techniques presented here by working as many problems as possible.

    The notes, along with much information on the course itself, can be found on the world wide web at http://www.nd.edu/powers/ame.561. At this stage, anyone is free to duplicate the notes on their own printers.

    These notes have appeared in various forms for the past few years; minor changes and additions have been made and will continue to be made. We would be happy to hear from you about errors or suggestions for improvement.

    Mihir Sen
    [email protected]
    http://www.nd.edu/msen

    Joseph M. Powers
    [email protected]
    http://www.nd.edu/powers

    Notre Dame, Indiana; USA
    April 9, 2003

    Copyright © 2003 by Mihir Sen and Joseph M. Powers. All rights reserved.


    Chapter 1

    Multi-variable calculus

    see Kaplan, Chapter 2: 2.1-2.22, Chapter 3: 3.9,
    see Riley, Hobson, Bence, Chapters 4, 19, 20,
    see Lopez, Chapters 32, 46, 47, 48.

    1.1 Implicit functions

    We can think of a relation such as $f(x_1, x_2, \ldots, x_n, y) = 0$, also written as $f(x_i, y) = 0$, in some region as an implicit function of $y$ with respect to the other variables. We cannot have $\partial f/\partial y = 0$, because then $f$ would not depend on $y$ in this region. In principle, we can write

    $$ y = y(x_1, x_2, \ldots, x_n) \quad \text{or} \quad y = y(x_i) \qquad (1.1) $$

    if $\partial f/\partial y \neq 0$.

    The derivative $\partial y/\partial x_i$ can be determined from $f = 0$ without explicitly solving for $y$. First, from the chain rule, we have

    $$ df = \frac{\partial f}{\partial x_1}\,dx_1 + \frac{\partial f}{\partial x_2}\,dx_2 + \ldots + \frac{\partial f}{\partial x_i}\,dx_i + \ldots + \frac{\partial f}{\partial x_n}\,dx_n + \frac{\partial f}{\partial y}\,dy = 0 \qquad (1.2) $$

    Differentiating with respect to $x_i$ while holding all the other $x_j$, $j \neq i$, constant, we get

    $$ \frac{\partial f}{\partial x_i} + \frac{\partial f}{\partial y}\frac{\partial y}{\partial x_i} = 0 \qquad (1.3) $$

    so that

    $$ \frac{\partial y}{\partial x_i} = -\frac{\partial f/\partial x_i}{\partial f/\partial y} \qquad (1.4) $$

    which can be found if $\partial f/\partial y \neq 0$. That is to say, $y$ can be considered a function of $x_i$ if $\partial f/\partial y \neq 0$.

    Let us now consider the equations

    f(x,y,u,v) = 0 (1.5)

    g(x,y,u,v) = 0 (1.6)


    Under certain circumstances, we can unravel these equations (either algebraically or numerically) to form $u = u(x, y)$, $v = v(x, y)$. The conditions for the existence of such a functional dependency can be found by differentiation of the original equations, for example:

    $$ df = \frac{\partial f}{\partial x}\,dx + \frac{\partial f}{\partial y}\,dy + \frac{\partial f}{\partial u}\,du + \frac{\partial f}{\partial v}\,dv = 0 $$

    holding $y$ constant and dividing by $dx$ we get

    $$ \frac{\partial f}{\partial x} + \frac{\partial f}{\partial u}\frac{\partial u}{\partial x} + \frac{\partial f}{\partial v}\frac{\partial v}{\partial x} = 0 $$

    in the same manner, we get

    $$ \frac{\partial g}{\partial x} + \frac{\partial g}{\partial u}\frac{\partial u}{\partial x} + \frac{\partial g}{\partial v}\frac{\partial v}{\partial x} = 0 $$

    $$ \frac{\partial f}{\partial y} + \frac{\partial f}{\partial u}\frac{\partial u}{\partial y} + \frac{\partial f}{\partial v}\frac{\partial v}{\partial y} = 0 $$

    $$ \frac{\partial g}{\partial y} + \frac{\partial g}{\partial u}\frac{\partial u}{\partial y} + \frac{\partial g}{\partial v}\frac{\partial v}{\partial y} = 0 $$

    Two of the above equations can be solved for $\partial u/\partial x$ and $\partial v/\partial x$, and two others for $\partial u/\partial y$ and $\partial v/\partial y$ by using Cramer's¹ rule. To solve for $\partial u/\partial x$ and $\partial v/\partial x$, we first write two of the previous equations in matrix form:

    $$ \begin{pmatrix} \frac{\partial f}{\partial u} & \frac{\partial f}{\partial v} \\ \frac{\partial g}{\partial u} & \frac{\partial g}{\partial v} \end{pmatrix} \begin{pmatrix} \frac{\partial u}{\partial x} \\ \frac{\partial v}{\partial x} \end{pmatrix} = \begin{pmatrix} -\frac{\partial f}{\partial x} \\ -\frac{\partial g}{\partial x} \end{pmatrix} \qquad (1.7) $$

    thus from Cramer's rule we have

    $$ \frac{\partial u}{\partial x} = -\frac{\begin{vmatrix} \frac{\partial f}{\partial x} & \frac{\partial f}{\partial v} \\ \frac{\partial g}{\partial x} & \frac{\partial g}{\partial v} \end{vmatrix}}{\begin{vmatrix} \frac{\partial f}{\partial u} & \frac{\partial f}{\partial v} \\ \frac{\partial g}{\partial u} & \frac{\partial g}{\partial v} \end{vmatrix}} \equiv -\frac{\partial(f,g)/\partial(x,v)}{\partial(f,g)/\partial(u,v)}, \qquad \frac{\partial v}{\partial x} = -\frac{\begin{vmatrix} \frac{\partial f}{\partial u} & \frac{\partial f}{\partial x} \\ \frac{\partial g}{\partial u} & \frac{\partial g}{\partial x} \end{vmatrix}}{\begin{vmatrix} \frac{\partial f}{\partial u} & \frac{\partial f}{\partial v} \\ \frac{\partial g}{\partial u} & \frac{\partial g}{\partial v} \end{vmatrix}} \equiv -\frac{\partial(f,g)/\partial(u,x)}{\partial(f,g)/\partial(u,v)} $$

    In a similar fashion, we can form expressions for $\partial u/\partial y$ and $\partial v/\partial y$:

    $$ \frac{\partial u}{\partial y} = -\frac{\begin{vmatrix} \frac{\partial f}{\partial y} & \frac{\partial f}{\partial v} \\ \frac{\partial g}{\partial y} & \frac{\partial g}{\partial v} \end{vmatrix}}{\begin{vmatrix} \frac{\partial f}{\partial u} & \frac{\partial f}{\partial v} \\ \frac{\partial g}{\partial u} & \frac{\partial g}{\partial v} \end{vmatrix}} \equiv -\frac{\partial(f,g)/\partial(y,v)}{\partial(f,g)/\partial(u,v)}, \qquad \frac{\partial v}{\partial y} = -\frac{\begin{vmatrix} \frac{\partial f}{\partial u} & \frac{\partial f}{\partial y} \\ \frac{\partial g}{\partial u} & \frac{\partial g}{\partial y} \end{vmatrix}}{\begin{vmatrix} \frac{\partial f}{\partial u} & \frac{\partial f}{\partial v} \\ \frac{\partial g}{\partial u} & \frac{\partial g}{\partial v} \end{vmatrix}} \equiv -\frac{\partial(f,g)/\partial(u,y)}{\partial(f,g)/\partial(u,v)} $$

    ¹ Gabriel Cramer, 1704-1752, well-travelled Swiss-born mathematician who did enunciate his well-known rule, but was not the first to do so.


    If the Jacobian² determinant, defined below, is non-zero, the derivatives exist, and we indeed can form $u(x, y)$ and $v(x, y)$:

    $$ \frac{\partial(f,g)}{\partial(u,v)} = \begin{vmatrix} \frac{\partial f}{\partial u} & \frac{\partial f}{\partial v} \\ \frac{\partial g}{\partial u} & \frac{\partial g}{\partial v} \end{vmatrix} \neq 0 \qquad (1.8) $$

    This is the condition for the implicit to explicit function conversion. Similar conditions hold for multiple implicit functions $f_i(x_1, \ldots, x_n, y_1, \ldots, y_m) = 0$, $i = 1, \ldots, m$. The derivatives $\partial y_i/\partial x_j$, $i = 1, \ldots, m$, $j = 1, \ldots, n$, exist in some region if the determinant of the matrix $\left(\partial f_i/\partial y_j\right) \neq 0$ ($i, j = 1, \ldots, m$) in this region.
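    The single-function formula (1.4) can be checked symbolically. The sketch below is an illustrative addition, not part of the original notes; it assumes the SymPy library is available, and the relation $x^2 + y^2 - 1 = 0$ is a hypothetical example chosen only because its explicit branch is easy to differentiate directly.

    ```python
    # Hedged check of dy/dx = -(df/dx)/(df/dy) for an implicit relation.
    # The relation x**2 + y**2 - 1 = 0 is illustrative, not from the notes.
    import sympy as sp

    x, y = sp.symbols('x y')
    f = x**2 + y**2 - 1

    # Implicit-function formula (1.4)
    dydx_implicit = -sp.diff(f, x) / sp.diff(f, y)   # -x/y

    # Direct differentiation of the explicit upper branch y = sqrt(1 - x**2)
    y_branch = sp.sqrt(1 - x**2)
    dydx_explicit = sp.diff(y_branch, x)

    # The two derivatives agree on the branch (where df/dy = 2y != 0)
    print(sp.simplify(dydx_implicit.subs(y, y_branch) - dydx_explicit))  # 0
    ```

    The same check works for any relation with $\partial f/\partial y \neq 0$ on the branch considered.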

    Example 1.1

    If

    $$ x + y + u^6 + u + v = 0 $$
    $$ xy + uv = 1 $$

    find $\partial u/\partial x$.

    Note that we have four unknowns in two equations. In principle we could solve for $u(x, y)$ and $v(x, y)$ and then determine all partial derivatives, such as the one desired. In practice this is not always possible; for example, there is no general solution to sixth order equations such as we have here.

    The two equations are rewritten as

    $$ f(x, y, u, v) = x + y + u^6 + u + v = 0 $$
    $$ g(x, y, u, v) = xy + uv - 1 = 0 $$

    Using the formula developed above to solve for the desired derivative, we get

    $$ \frac{\partial u}{\partial x} = -\frac{\begin{vmatrix} \frac{\partial f}{\partial x} & \frac{\partial f}{\partial v} \\ \frac{\partial g}{\partial x} & \frac{\partial g}{\partial v} \end{vmatrix}}{\begin{vmatrix} \frac{\partial f}{\partial u} & \frac{\partial f}{\partial v} \\ \frac{\partial g}{\partial u} & \frac{\partial g}{\partial v} \end{vmatrix}} $$

    Substituting, we get

    $$ \frac{\partial u}{\partial x} = -\frac{\begin{vmatrix} 1 & 1 \\ y & u \end{vmatrix}}{\begin{vmatrix} 6u^5 + 1 & 1 \\ v & u \end{vmatrix}} = \frac{y - u}{u(6u^5 + 1) - v} $$

    Note that when

    $$ v = 6u^6 + u $$

    the relevant Jacobian is zero; at such points we can determine neither $\partial u/\partial x$ nor $\partial u/\partial y$; thus we cannot form $u(x, y)$.

    At points where the relevant Jacobian $\partial(f,g)/\partial(u,v) \neq 0$ (which includes nearly all of the $(x, y)$ plane), given a local value of $(x, y)$, we can use algebra to find a corresponding $u$ and $v$, which may be multivalued, and use the formula developed to find the local value of the partial derivative.

    ² Carl Gustav Jacob Jacobi, 1804-1851, German/Prussian mathematician who used these determinants, which were first studied by Cauchy, in his work on partial differential equations.
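    The closed form obtained in Example 1.1 can be verified mechanically. The following sketch is an illustrative addition (not part of the original notes) and assumes SymPy is available; it builds the two Jacobian determinants from the formula above and compares against the result $\partial u/\partial x = (y - u)/(u(6u^5 + 1) - v)$.

    ```python
    # Verify du/dx for f = x + y + u^6 + u + v = 0, g = xy + uv - 1 = 0.
    import sympy as sp

    x, y, u, v = sp.symbols('x y u v')
    f = x + y + u**6 + u + v
    g = x*y + u*v - 1

    # du/dx = -[d(f,g)/d(x,v)] / [d(f,g)/d(u,v)]
    num = sp.Matrix([[sp.diff(f, x), sp.diff(f, v)],
                     [sp.diff(g, x), sp.diff(g, v)]]).det()
    den = sp.Matrix([[sp.diff(f, u), sp.diff(f, v)],
                     [sp.diff(g, u), sp.diff(g, v)]]).det()
    du_dx = -num / den

    expected = (y - u) / (u*(6*u**5 + 1) - v)
    print(sp.simplify(du_dx - expected))  # 0
    ```

    The denominator determinant is exactly the Jacobian whose vanishing (at $v = 6u^6 + u$) was flagged above.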


    1.2 Functional dependence

    Let $u = u(x, y)$ and $v = v(x, y)$. If we can write $u = g(v)$ or $v = h(u)$, then $u$ and $v$ are said to be functionally dependent. If functional dependence between $u$ and $v$ exists, then we can consider $f(u, v) = 0$. So,

    $$ \frac{\partial f}{\partial u}\frac{\partial u}{\partial x} + \frac{\partial f}{\partial v}\frac{\partial v}{\partial x} = 0, \qquad (1.9) $$

    $$ \frac{\partial f}{\partial u}\frac{\partial u}{\partial y} + \frac{\partial f}{\partial v}\frac{\partial v}{\partial y} = 0, \qquad (1.10) $$

    $$ \begin{pmatrix} \frac{\partial u}{\partial x} & \frac{\partial v}{\partial x} \\ \frac{\partial u}{\partial y} & \frac{\partial v}{\partial y} \end{pmatrix} \begin{pmatrix} \frac{\partial f}{\partial u} \\ \frac{\partial f}{\partial v} \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \qquad (1.11) $$

    Since the right hand side is zero, and we desire a non-trivial solution, the determinant of the coefficient matrix must be zero for functional dependency, i.e.

    $$ \begin{vmatrix} \frac{\partial u}{\partial x} & \frac{\partial v}{\partial x} \\ \frac{\partial u}{\partial y} & \frac{\partial v}{\partial y} \end{vmatrix} = 0. \qquad (1.12) $$

    Note, since $\det A = \det A^T$, that this is equivalent to

    $$ \begin{vmatrix} \frac{\partial u}{\partial x} & \frac{\partial u}{\partial y} \\ \frac{\partial v}{\partial x} & \frac{\partial v}{\partial y} \end{vmatrix} = \frac{\partial(u,v)}{\partial(x,y)} = 0. \qquad (1.13) $$

    That is, the Jacobian must be zero.

    Example 1.2

    Determine if

    $$ u = y + z $$
    $$ v = x + 2z^2 $$
    $$ w = x - 4yz - 2y^2 $$

    are functionally dependent.

    The determinant of the resulting coefficient matrix, by extension to three functions of three variables, is

    $$ \frac{\partial(u,v,w)}{\partial(x,y,z)} = \begin{vmatrix} \frac{\partial u}{\partial x} & \frac{\partial u}{\partial y} & \frac{\partial u}{\partial z} \\ \frac{\partial v}{\partial x} & \frac{\partial v}{\partial y} & \frac{\partial v}{\partial z} \\ \frac{\partial w}{\partial x} & \frac{\partial w}{\partial y} & \frac{\partial w}{\partial z} \end{vmatrix} = \begin{vmatrix} \frac{\partial u}{\partial x} & \frac{\partial v}{\partial x} & \frac{\partial w}{\partial x} \\ \frac{\partial u}{\partial y} & \frac{\partial v}{\partial y} & \frac{\partial w}{\partial y} \\ \frac{\partial u}{\partial z} & \frac{\partial v}{\partial z} & \frac{\partial w}{\partial z} \end{vmatrix} = \begin{vmatrix} 0 & 1 & 1 \\ 1 & 0 & -4(y + z) \\ 1 & 4z & -4y \end{vmatrix} $$

    $$ = (-1)\left(-4y - (-4)(y + z)\right) + (1)(4z) = 4y - 4y - 4z + 4z = 0 $$

    So, $u, v, w$ are functionally dependent. In fact $w = v - 2u^2$.
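    The Jacobian test of Example 1.2 is easy to mechanize. The sketch below is an illustrative addition (not from the notes) assuming SymPy is available; it recomputes the determinant and confirms the explicit dependence $w = v - 2u^2$.

    ```python
    # Jacobian test for functional dependence of u, v, w (Example 1.2).
    import sympy as sp

    x, y, z = sp.symbols('x y z')
    u = y + z
    v = x + 2*z**2
    w = x - 4*y*z - 2*y**2

    # d(u,v,w)/d(x,y,z): a vanishing determinant signals functional dependence
    J = sp.Matrix([u, v, w]).jacobian([x, y, z])
    print(sp.simplify(J.det()))        # 0

    # The dependence made explicit: w = v - 2*u**2
    print(sp.expand(v - 2*u**2 - w))   # 0
    ```

    A nonzero determinant at a point would instead show that $u$, $v$, $w$ are locally independent there.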


    Example 1.3

    Let

    $$ x + y + z = 0 $$
    $$ x^2 + y^2 + z^2 + 2xz = 1 $$

    Can $x$ and $y$ be considered as functions of $z$?

    If $x = x(z)$ and $y = y(z)$, then $dx/dz$ and $dy/dz$ must exist. If we take

    $$ f(x, y, z) = x + y + z = 0 $$
    $$ g(x, y, z) = x^2 + y^2 + z^2 + 2xz - 1 = 0 $$

    $$ df = \frac{\partial f}{\partial z}\,dz + \frac{\partial f}{\partial x}\,dx + \frac{\partial f}{\partial y}\,dy = 0 $$

    $$ dg = \frac{\partial g}{\partial z}\,dz + \frac{\partial g}{\partial x}\,dx + \frac{\partial g}{\partial y}\,dy = 0 $$

    $$ \frac{\partial f}{\partial z} + \frac{\partial f}{\partial x}\frac{dx}{dz} + \frac{\partial f}{\partial y}\frac{dy}{dz} = 0 $$

    $$ \frac{\partial g}{\partial z} + \frac{\partial g}{\partial x}\frac{dx}{dz} + \frac{\partial g}{\partial y}\frac{dy}{dz} = 0 $$

    $$ \begin{pmatrix} \frac{\partial f}{\partial x} & \frac{\partial f}{\partial y} \\ \frac{\partial g}{\partial x} & \frac{\partial g}{\partial y} \end{pmatrix} \begin{pmatrix} \frac{dx}{dz} \\ \frac{dy}{dz} \end{pmatrix} = \begin{pmatrix} -\frac{\partial f}{\partial z} \\ -\frac{\partial g}{\partial z} \end{pmatrix} $$

    then the solution vector $(dx/dz, dy/dz)^T$ can be obtained by Cramer's rule:

    $$ \frac{dx}{dz} = -\frac{\begin{vmatrix} \frac{\partial f}{\partial z} & \frac{\partial f}{\partial y} \\ \frac{\partial g}{\partial z} & \frac{\partial g}{\partial y} \end{vmatrix}}{\begin{vmatrix} \frac{\partial f}{\partial x} & \frac{\partial f}{\partial y} \\ \frac{\partial g}{\partial x} & \frac{\partial g}{\partial y} \end{vmatrix}} = -\frac{\begin{vmatrix} 1 & 1 \\ 2z + 2x & 2y \end{vmatrix}}{\begin{vmatrix} 1 & 1 \\ 2x + 2z & 2y \end{vmatrix}} = \frac{-2y + 2z + 2x}{2y - 2x - 2z} = -1 $$

    $$ \frac{dy}{dz} = -\frac{\begin{vmatrix} \frac{\partial f}{\partial x} & \frac{\partial f}{\partial z} \\ \frac{\partial g}{\partial x} & \frac{\partial g}{\partial z} \end{vmatrix}}{\begin{vmatrix} \frac{\partial f}{\partial x} & \frac{\partial f}{\partial y} \\ \frac{\partial g}{\partial x} & \frac{\partial g}{\partial y} \end{vmatrix}} = -\frac{\begin{vmatrix} 1 & 1 \\ 2x + 2z & 2z + 2x \end{vmatrix}}{\begin{vmatrix} 1 & 1 \\ 2x + 2z & 2y \end{vmatrix}} = \frac{0}{2y - 2x - 2z} $$

    Note here that in the expression for $dx/dz$ the numerator and denominator cancel; there is no special condition defined by the Jacobian determinant of the denominator being zero. In the second expression, $dy/dz = 0$ unless $y - x - z = 0$, in which case this formula cannot give us the derivative.

    Now in fact, it is easily shown by algebraic manipulations (which for more general functions are not possible) that

    $$ x(z) = -z \pm \frac{\sqrt{2}}{2}, \qquad y(z) = \mp\frac{\sqrt{2}}{2} $$

    On these solutions $y - x - z = \mp\sqrt{2} \neq 0$, so the Jacobian determinant $\partial(f,g)/\partial(x,y) \neq 0$ there; the formula is determinate and gives $dy/dz = 0$, consistent with the explicit expression $y = \mp\sqrt{2}/2$.
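    The Cramer's-rule derivatives for this system can be reproduced mechanically. The sketch below is an illustrative addition (not part of the original notes) and assumes SymPy is available.

    ```python
    # dx/dz and dy/dz for f = x + y + z = 0, g = x^2 + y^2 + z^2 + 2xz - 1 = 0.
    import sympy as sp

    x, y, z = sp.symbols('x y z')
    f = x + y + z
    g = x**2 + y**2 + z**2 + 2*x*z - 1

    den = sp.Matrix([[sp.diff(f, x), sp.diff(f, y)],
                     [sp.diff(g, x), sp.diff(g, y)]]).det()
    dx_dz = -sp.Matrix([[sp.diff(f, z), sp.diff(f, y)],
                        [sp.diff(g, z), sp.diff(g, y)]]).det() / den
    dy_dz = -sp.Matrix([[sp.diff(f, x), sp.diff(f, z)],
                        [sp.diff(g, x), sp.diff(g, z)]]).det() / den

    print(sp.simplify(dx_dz))  # -1 (numerator and denominator cancel)
    print(sp.simplify(dy_dz))  # 0  (numerator vanishes identically)
    ```

    The identically vanishing numerator of $dy/dz$ is visible here before any geometry is invoked.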


    Figure 1.1: Surfaces of $x + y + z = 0$ and $x^2 + y^2 + z^2 + 2xz = 1$, and their loci of intersection. (Plot not reproduced in this transcript.)

    The two original functions and their loci of intersection are plotted in Figure 1.1. It is seen that the surface represented by the quadratic function is an open cylindrical tube, and that represented by the linear function is a plane. Note that planes and cylinders may or may not intersect. If they intersect, it is most likely that the intersection will be a closed arc. However, when the plane is aligned with the axis of the cylinder, the intersection will be two non-intersecting lines; such is the case in this example.

    Lets see how slightly altering the equation for the plane removes the degeneracy. Take now

    5x + y + z = 0

    x2 + y2 + z2 + 2xz = 1

Can $x$ and $y$ be considered as functions of $z$? If $x = x(z)$ and $y = y(z)$, then $dx/dz$ and $dy/dz$ must exist. If we take

$$f(x, y, z) = 5x + y + z = 0$$
$$g(x, y, z) = x^2 + y^2 + z^2 + 2xz - 1 = 0$$

then the solution $(dx/dz,\, dy/dz)^T$ is found as before:

$$\frac{dx}{dz} = -\frac{\begin{vmatrix} f_z & f_y \\ g_z & g_y \end{vmatrix}}{\begin{vmatrix} f_x & f_y \\ g_x & g_y \end{vmatrix}} = -\frac{\begin{vmatrix} 1 & 1 \\ 2z+2x & 2y \end{vmatrix}}{\begin{vmatrix} 5 & 1 \\ 2x+2z & 2y \end{vmatrix}} = \frac{-2y + 2z + 2x}{10y - 2x - 2z}$$

$$\frac{dy}{dz} = -\frac{\begin{vmatrix} f_x & f_z \\ g_x & g_z \end{vmatrix}}{\begin{vmatrix} f_x & f_y \\ g_x & g_y \end{vmatrix}} = -\frac{\begin{vmatrix} 5 & 1 \\ 2x+2z & 2z+2x \end{vmatrix}}{\begin{vmatrix} 5 & 1 \\ 2x+2z & 2y \end{vmatrix}} = \frac{-8x - 8z}{10y - 2x - 2z}$$

The two original functions and their loci of intersection are plotted in Figure 1.2. Straightforward algebra in this case shows that an explicit dependency exists:

$$x(z) = \frac{-6z \pm \sqrt{2}\sqrt{13 - 8z^2}}{26}$$

$$y(z) = \frac{4z \mp 5\sqrt{2}\sqrt{13 - 8z^2}}{26}$$
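A quick numerical confirmation (an addition to the notes; the helper names are ad hoc) that this explicit branch satisfies both surfaces and that the Jacobian-formula derivatives agree with direct differentiation:

```python
import math

# Added check: the explicit branch x(z), y(z) of the 5x + y + z = 0 /
# x^2 + y^2 + z^2 + 2xz = 1 system, against the implicit-derivative formulas.

def xy(z):
    s = math.sqrt(2) * math.sqrt(13 - 8*z*z)
    return (-6*z + s) / 26, (4*z - 5*s) / 26

def jacobian_derivs(x, y, z):
    fx, fy, fz = 5.0, 1.0, 1.0
    gx, gy, gz = 2*x + 2*z, 2*y, 2*z + 2*x
    den = fx*gy - fy*gx
    return -(fz*gy - fy*gz)/den, -(fx*gz - fz*gx)/den

z = 0.4
x, y = xy(z)
assert abs(5*x + y + z) < 1e-9                       # on the plane
assert abs(x*x + y*y + z*z + 2*x*z - 1) < 1e-9       # on the cylinder

# compare against central differences of the explicit solution
h = 1e-6
xp, yp = xy(z + h)
xm, ym = xy(z - h)
dxdz, dydz = jacobian_derivs(x, y, z)
assert abs(dxdz - (xp - xm)/(2*h)) < 1e-6
assert abs(dydz - (yp - ym)/(2*h)) < 1e-6
```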


Figure 1.2: Surfaces of $5x + y + z = 0$ and $x^2 + y^2 + z^2 + 2xz = 1$, and their loci of intersection.

These curves represent the projections of the curve of intersection onto the $x$-$z$ and $y$-$z$ planes, respectively. In both cases, the projections are ellipses.

    1.3 Coordinate Transformations

Many problems are formulated in three-dimensional Cartesian space. However, many of these problems, especially those involving curved geometrical bodies, are better posed in a non-Cartesian, curvilinear coordinate system. As such, one needs techniques to transform from one coordinate system to another.

For this section, we will take Cartesian coordinates to be represented by $(\xi^1, \xi^2, \xi^3)$. Here the superscript is an index and does not represent a power of $\xi$. We will denote this point by $\xi^i$, where $i = 1, 2, 3$. Since the space is Cartesian, we have the usual Euclidean formula for arc length $s$:

$$(ds)^2 = (d\xi^1)^2 + (d\xi^2)^2 + (d\xi^3)^2 \qquad (1.14)$$

$$(ds)^2 = \sum_{i=1}^{3} d\xi^i\, d\xi^i \equiv d\xi^i\, d\xi^i \qquad (1.15)$$

Here we have adopted the summation convention that when an index appears twice, a summation from 1 to 3 is understood.

³René Descartes, 1596-1650, French mathematician and philosopher.
⁴Euclid of Alexandria, c. 325 B.C.-265 B.C., Greek geometer.


Now let us map a point in $(\xi^1, \xi^2, \xi^3)$ space to a point in a more convenient $(x^1, x^2, x^3)$ space. This mapping is achieved by defining the following functional dependencies:

$$x^1 = x^1(\xi^1, \xi^2, \xi^3) \qquad (1.16)$$
$$x^2 = x^2(\xi^1, \xi^2, \xi^3) \qquad (1.17)$$
$$x^3 = x^3(\xi^1, \xi^2, \xi^3) \qquad (1.18)$$

    Taking derivatives can tell us whether the inverse exists.

$$dx^1 = \frac{\partial x^1}{\partial \xi^1} d\xi^1 + \frac{\partial x^1}{\partial \xi^2} d\xi^2 + \frac{\partial x^1}{\partial \xi^3} d\xi^3 = \frac{\partial x^1}{\partial \xi^j} d\xi^j \qquad (1.19)$$

$$dx^2 = \frac{\partial x^2}{\partial \xi^1} d\xi^1 + \frac{\partial x^2}{\partial \xi^2} d\xi^2 + \frac{\partial x^2}{\partial \xi^3} d\xi^3 = \frac{\partial x^2}{\partial \xi^j} d\xi^j \qquad (1.20)$$

$$dx^3 = \frac{\partial x^3}{\partial \xi^1} d\xi^1 + \frac{\partial x^3}{\partial \xi^2} d\xi^2 + \frac{\partial x^3}{\partial \xi^3} d\xi^3 = \frac{\partial x^3}{\partial \xi^j} d\xi^j \qquad (1.21)$$

$$\begin{pmatrix} dx^1 \\ dx^2 \\ dx^3 \end{pmatrix} = \begin{pmatrix} \frac{\partial x^1}{\partial \xi^1} & \frac{\partial x^1}{\partial \xi^2} & \frac{\partial x^1}{\partial \xi^3} \\ \frac{\partial x^2}{\partial \xi^1} & \frac{\partial x^2}{\partial \xi^2} & \frac{\partial x^2}{\partial \xi^3} \\ \frac{\partial x^3}{\partial \xi^1} & \frac{\partial x^3}{\partial \xi^2} & \frac{\partial x^3}{\partial \xi^3} \end{pmatrix} \begin{pmatrix} d\xi^1 \\ d\xi^2 \\ d\xi^3 \end{pmatrix} \qquad (1.22)$$

$$dx^i = \frac{\partial x^i}{\partial \xi^j} d\xi^j \qquad (1.23)$$

In order for the inverse to exist, we must have a non-zero Jacobian determinant for the transformation, i.e.

$$\frac{\partial(x^1, x^2, x^3)}{\partial(\xi^1, \xi^2, \xi^3)} \neq 0 \qquad (1.24)$$

It can then be inferred that the inverse transformation exists:

$$\xi^1 = \xi^1(x^1, x^2, x^3) \qquad (1.25)$$
$$\xi^2 = \xi^2(x^1, x^2, x^3) \qquad (1.26)$$
$$\xi^3 = \xi^3(x^1, x^2, x^3) \qquad (1.27)$$

Likewise then,

$$d\xi^i = \frac{\partial \xi^i}{\partial x^j} dx^j \qquad (1.28)$$


    1.3.1 Jacobians and Metric Tensors

Defining⁵ the Jacobian matrix $J$, which we associate with the inverse transformation, that is, the transformation from non-Cartesian to Cartesian coordinates, to be

$$J = \frac{\partial \xi^i}{\partial x^j} = \begin{pmatrix} \frac{\partial \xi^1}{\partial x^1} & \frac{\partial \xi^1}{\partial x^2} & \frac{\partial \xi^1}{\partial x^3} \\ \frac{\partial \xi^2}{\partial x^1} & \frac{\partial \xi^2}{\partial x^2} & \frac{\partial \xi^2}{\partial x^3} \\ \frac{\partial \xi^3}{\partial x^1} & \frac{\partial \xi^3}{\partial x^2} & \frac{\partial \xi^3}{\partial x^3} \end{pmatrix} \qquad (1.29)$$

we can rewrite $d\xi^i$ in Gibbs⁶ vector notation as

$$d\boldsymbol{\xi} = J \cdot d\mathbf{x} \qquad (1.30)$$

Now for Euclidean spaces, distance must be independent of coordinate systems, so we require

$$(ds)^2 = d\xi^i\, d\xi^i = \left(\frac{\partial \xi^i}{\partial x^k} dx^k\right)\left(\frac{\partial \xi^i}{\partial x^l} dx^l\right) = \frac{\partial \xi^i}{\partial x^k}\frac{\partial \xi^i}{\partial x^l}\, dx^k\, dx^l \qquad (1.31)$$

In Gibbs vector notation this becomes

$$(ds)^2 = d\boldsymbol{\xi}^T \cdot d\boldsymbol{\xi} \qquad (1.32)$$
$$= (J \cdot d\mathbf{x})^T \cdot (J \cdot d\mathbf{x}) \qquad (1.33)$$
$$= d\mathbf{x}^T \cdot J^T \cdot J \cdot d\mathbf{x} \qquad (1.34)$$

If we define the metric tensor, $g_{kl}$ or $G$, as follows:

$$g_{kl} = \frac{\partial \xi^i}{\partial x^k}\frac{\partial \xi^i}{\partial x^l} \qquad (1.35)$$
$$G = J^T \cdot J \qquad (1.36)$$

then we have, equivalently in both index and Gibbs notation,

$$(ds)^2 = g_{kl}\, dx^k\, dx^l \qquad (1.37)$$
$$(ds)^2 = d\mathbf{x}^T \cdot G \cdot d\mathbf{x} \qquad (1.38)$$

Now $g_{kl}$ can be represented as a matrix. If we define

$$g = \det(g_{kl}), \qquad (1.39)$$

⁵The definition we adopt is that used in most texts, including Kaplan. A few, e.g. Aris, define the Jacobian determinant in terms of the transpose of the Jacobian matrix, which is not problematic since the two are the same. Extending this, an argument can be made that a better definition of the Jacobian matrix would be the transpose of the traditional Jacobian matrix. This is because, when one considers that the differential operator acts first, the Jacobian matrix is really $\partial x^j / \partial \xi^i$, and the alternative definition is more consistent with traditional matrix notation, which would have the first row as $\partial x^1/\partial \xi^1$, $\partial x^1/\partial \xi^2$, $\partial x^1/\partial \xi^3$. As long as one realizes the implications of the notation, however, the convention adopted ultimately does not matter.
⁶Josiah Willard Gibbs, 1839-1903, prolific American physicist and mathematician with a lifetime affiliation with Yale University.


it can be shown that the ratio of volumes of differential elements in one space to that of the other is given by

$$d\xi^1\, d\xi^2\, d\xi^3 = \sqrt{g}\; dx^1\, dx^2\, dx^3 \qquad (1.40)$$

We also require dependent variables and all derivatives to take on the same values at corresponding points in each space; e.g., if $S = f(\xi^1, \xi^2, \xi^3) = h(x^1, x^2, x^3)$ is a dependent variable defined at $(\xi^1, \xi^2, \xi^3)$, and $(\xi^1, \xi^2, \xi^3)$ maps into $(x^1, x^2, x^3)$, we require $f(\xi^1, \xi^2, \xi^3) = h(x^1, x^2, x^3)$.

The chain rule lets us transform derivatives to the other space:

$$\begin{pmatrix} \frac{\partial S}{\partial \xi^1} & \frac{\partial S}{\partial \xi^2} & \frac{\partial S}{\partial \xi^3} \end{pmatrix} = \begin{pmatrix} \frac{\partial S}{\partial x^1} & \frac{\partial S}{\partial x^2} & \frac{\partial S}{\partial x^3} \end{pmatrix} \begin{pmatrix} \frac{\partial x^1}{\partial \xi^1} & \frac{\partial x^1}{\partial \xi^2} & \frac{\partial x^1}{\partial \xi^3} \\ \frac{\partial x^2}{\partial \xi^1} & \frac{\partial x^2}{\partial \xi^2} & \frac{\partial x^2}{\partial \xi^3} \\ \frac{\partial x^3}{\partial \xi^1} & \frac{\partial x^3}{\partial \xi^2} & \frac{\partial x^3}{\partial \xi^3} \end{pmatrix} \qquad (1.41)$$

$$\frac{\partial S}{\partial \xi^i} = \frac{\partial S}{\partial x^j}\frac{\partial x^j}{\partial \xi^i} \qquad (1.42)$$

This can also be inverted, given that $g \neq 0$, to find $(\partial S/\partial x^1,\, \partial S/\partial x^2,\, \partial S/\partial x^3)^T$. The fact that the gradient operator required the use of row vectors in conjunction with the Jacobian matrix, while the transformation of distance, earlier in this section, required the use of column vectors, is of fundamental importance, and will be examined further in an upcoming section where we distinguish between what are known as covariant and contravariant vectors.

Example 1.4
Transform the Cartesian equation

$$\frac{\partial S}{\partial \xi^1} - S = (\xi^1)^2 + (\xi^2)^2$$

under the following:

1. Cartesian to affine coordinates.

Consider the following linear non-orthogonal transformation (transformations of this type are known as affine):

$$x^1 = 4\xi^1 + 2\xi^2$$
$$x^2 = 3\xi^1 + 2\xi^2$$
$$x^3 = \xi^3$$

This is a linear system of three equations in three unknowns; using standard techniques of linear algebra allows us to solve for $\xi^1, \xi^2, \xi^3$ in terms of $x^1, x^2, x^3$; that is, we find the inverse transformation, which is

$$\xi^1 = x^1 - x^2$$
$$\xi^2 = -\frac{3}{2}x^1 + 2x^2$$
$$\xi^3 = x^3$$

Lines of constant $x^1$ and $x^2$ in the $\xi^1, \xi^2$ plane are plotted in Figure 1.3.


Figure 1.3: Lines of constant $x^1$ and $x^2$ in the $\xi^1, \xi^2$ plane for the affine transformation of the example problem.

The appropriate Jacobian matrix for the inverse transformation is

$$J = \frac{\partial \xi^i}{\partial x^j} = \frac{\partial(\xi^1, \xi^2, \xi^3)}{\partial(x^1, x^2, x^3)} = \begin{pmatrix} \frac{\partial \xi^1}{\partial x^1} & \frac{\partial \xi^1}{\partial x^2} & \frac{\partial \xi^1}{\partial x^3} \\ \frac{\partial \xi^2}{\partial x^1} & \frac{\partial \xi^2}{\partial x^2} & \frac{\partial \xi^2}{\partial x^3} \\ \frac{\partial \xi^3}{\partial x^1} & \frac{\partial \xi^3}{\partial x^2} & \frac{\partial \xi^3}{\partial x^3} \end{pmatrix}$$

$$J = \begin{pmatrix} 1 & -1 & 0 \\ -\frac{3}{2} & 2 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$

The determinant of the Jacobian matrix is

$$2 - \frac{3}{2} = \frac{1}{2}$$

So a unique transformation always exists, since the Jacobian determinant is never zero. The metric tensor is

$$g_{kl} = \frac{\partial \xi^i}{\partial x^k}\frac{\partial \xi^i}{\partial x^l} = \frac{\partial \xi^1}{\partial x^k}\frac{\partial \xi^1}{\partial x^l} + \frac{\partial \xi^2}{\partial x^k}\frac{\partial \xi^2}{\partial x^l} + \frac{\partial \xi^3}{\partial x^k}\frac{\partial \xi^3}{\partial x^l}$$

For example, for $k = 1$, $l = 1$ we get

$$g_{11} = \frac{\partial \xi^i}{\partial x^1}\frac{\partial \xi^i}{\partial x^1} = \frac{\partial \xi^1}{\partial x^1}\frac{\partial \xi^1}{\partial x^1} + \frac{\partial \xi^2}{\partial x^1}\frac{\partial \xi^2}{\partial x^1} + \frac{\partial \xi^3}{\partial x^1}\frac{\partial \xi^3}{\partial x^1}$$

$$g_{11} = (1)(1) + \left(-\frac{3}{2}\right)\left(-\frac{3}{2}\right) + (0)(0) = \frac{13}{4}$$

Repeating this operation, we find the complete metric tensor is


$$g_{kl} = \begin{pmatrix} \frac{13}{4} & -4 & 0 \\ -4 & 5 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$

$$g = \det(g_{kl}) = \frac{65}{4} - 16 = \frac{1}{4}$$

This is equivalent to the calculation in Gibbs notation:

$$G = J^T \cdot J = \begin{pmatrix} 1 & -\frac{3}{2} & 0 \\ -1 & 2 & 0 \\ 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} 1 & -1 & 0 \\ -\frac{3}{2} & 2 & 0 \\ 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} \frac{13}{4} & -4 & 0 \\ -4 & 5 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$

Distance in the transformed system is given by

$$(ds)^2 = g_{kl}\, dx^k\, dx^l = d\mathbf{x}^T \cdot G \cdot d\mathbf{x}$$

$$(ds)^2 = \begin{pmatrix} dx^1 & dx^2 & dx^3 \end{pmatrix}\begin{pmatrix} \frac{13}{4} & -4 & 0 \\ -4 & 5 & 0 \\ 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} dx^1 \\ dx^2 \\ dx^3 \end{pmatrix} = \begin{pmatrix} dx^1 & dx^2 & dx^3 \end{pmatrix}\begin{pmatrix} \frac{13}{4}dx^1 - 4\,dx^2 \\ -4\,dx^1 + 5\,dx^2 \\ dx^3 \end{pmatrix}$$

$$(ds)^2 = \frac{13}{4}(dx^1)^2 + 5(dx^2)^2 + (dx^3)^2 - 8\,dx^1\,dx^2$$

Algebraic manipulation reveals that this can be rewritten as follows:

$$(ds)^2 = 8.22\,(0.627\,dx^1 - 0.779\,dx^2)^2 + 0.0304\,(0.779\,dx^1 + 0.627\,dx^2)^2 + (dx^3)^2$$

Note:

- The Jacobian matrix $J$ is not symmetric.
- The metric tensor $G = J^T \cdot J$ is symmetric.
- The fact that the metric tensor has non-zero off-diagonal elements is a consequence of the transformation being non-orthogonal.
- The distance is guaranteed to be positive. This will be true for all affine transformations in ordinary three-dimensional Euclidean space. In the generalized space-time continuum suggested by the theory of relativity, the generalized distance may in fact be negative; this generalized distance $ds$ for an infinitesimal change in space and time is given by $(ds)^2 = (d\xi^1)^2 + (d\xi^2)^2 + (d\xi^3)^2 - (d\xi^4)^2$, where the first three coordinates are the ordinary Cartesian space coordinates and the fourth is given by $(d\xi^4)^2 = (c\,dt)^2$, where $c$ is the speed of light.

Also, we have the volume ratio of differential elements as

$$d\xi^1\, d\xi^2\, d\xi^3 = \frac{1}{2}\, dx^1\, dx^2\, dx^3$$

Now

$$\frac{\partial S}{\partial \xi^1} = \frac{\partial S}{\partial x^1}\frac{\partial x^1}{\partial \xi^1} + \frac{\partial S}{\partial x^2}\frac{\partial x^2}{\partial \xi^1} + \frac{\partial S}{\partial x^3}\frac{\partial x^3}{\partial \xi^1} = 4\frac{\partial S}{\partial x^1} + 3\frac{\partial S}{\partial x^2}$$


Figure 1.4: Lines of constant $x^1$ and $x^2$ in the $\xi^1, \xi^2$ plane for the cylindrical transformation of the example problem.

So the transformed equation becomes

$$4\frac{\partial S}{\partial x^1} + 3\frac{\partial S}{\partial x^2} - S = (x^1 - x^2)^2 + \left(-\frac{3}{2}x^1 + 2x^2\right)^2$$

$$4\frac{\partial S}{\partial x^1} + 3\frac{\partial S}{\partial x^2} - S = \frac{13}{4}(x^1)^2 - 8x^1x^2 + 5(x^2)^2$$
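The matrix algebra of the affine case can be verified numerically. The following sketch (an addition to the notes) forms $G = J^T \cdot J$, checks $\det J = 1/2$, and recovers the eigenvalues $8.22$ and $0.0304$ appearing in the diagonalized form of $(ds)^2$:

```python
import math

# Added check of the affine example: J = d(xi)/d(x), G = J^T J.
J = [[1.0, -1.0, 0.0],
     [-1.5, 2.0, 0.0],
     [0.0, 0.0, 1.0]]

def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

JT = [[J[j][i] for j in range(3)] for i in range(3)]
G = matmul(JT, J)
assert G[0][0] == 13/4 and G[0][1] == -4.0 and G[1][1] == 5.0

# determinant of the upper 2x2 block (times the trailing 1) gives det J
detJ = J[0][0]*J[1][1] - J[0][1]*J[1][0]
assert detJ == 0.5

# eigenvalues of the 2x2 block of G solve lambda^2 - (33/4) lambda + 1/4 = 0
tr = G[0][0] + G[1][1]
det = G[0][0]*G[1][1] - G[0][1]*G[1][0]
lam1 = (tr + math.sqrt(tr*tr - 4*det)) / 2
lam2 = (tr - math.sqrt(tr*tr - 4*det)) / 2
assert abs(lam1 - 8.2196) < 1e-3 and abs(lam2 - 0.03042) < 1e-3
```

The two eigenvalues are the coefficients $8.22$ and $0.0304$ of the rotated quadratic form; their positivity is what guarantees $(ds)^2 > 0$.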

2. Cartesian to cylindrical coordinates.

The transformations are

$$x^1 = +\sqrt{(\xi^1)^2 + (\xi^2)^2}$$
$$x^2 = \tan^{-1}\left(\frac{\xi^2}{\xi^1}\right)$$
$$x^3 = \xi^3$$

Note this system of equations is non-linear. For such systems, we cannot always find an explicit algebraic expression for the inverse transformation. In this case, some straightforward algebraic and trigonometric manipulation reveals that we can find an explicit representation of the inverse transformation, which is

$$\xi^1 = x^1 \cos x^2$$
$$\xi^2 = x^1 \sin x^2$$
$$\xi^3 = x^3$$

Lines of constant $x^1$ and $x^2$ in the $\xi^1, \xi^2$ plane are plotted in Figure 1.4. Notice that the lines of constant $x^1$ are orthogonal to lines of constant $x^2$ in the Cartesian $\xi^1, \xi^2$ plane. For general transformations, this will not be the case.

The appropriate Jacobian matrix for the inverse transformation is

$$J = \frac{\partial \xi^i}{\partial x^j} = \frac{\partial(\xi^1, \xi^2, \xi^3)}{\partial(x^1, x^2, x^3)} = \begin{pmatrix} \frac{\partial \xi^1}{\partial x^1} & \frac{\partial \xi^1}{\partial x^2} & \frac{\partial \xi^1}{\partial x^3} \\ \frac{\partial \xi^2}{\partial x^1} & \frac{\partial \xi^2}{\partial x^2} & \frac{\partial \xi^2}{\partial x^3} \\ \frac{\partial \xi^3}{\partial x^1} & \frac{\partial \xi^3}{\partial x^2} & \frac{\partial \xi^3}{\partial x^3} \end{pmatrix}$$


$$J = \begin{pmatrix} \cos x^2 & -x^1 \sin x^2 & 0 \\ \sin x^2 & x^1 \cos x^2 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$

The determinant of the Jacobian matrix is

$$x^1 \cos^2 x^2 + x^1 \sin^2 x^2 = x^1$$

So a unique transformation fails to exist when $x^1 = 0$. The metric tensor is

$$g_{kl} = \frac{\partial \xi^i}{\partial x^k}\frac{\partial \xi^i}{\partial x^l} = \frac{\partial \xi^1}{\partial x^k}\frac{\partial \xi^1}{\partial x^l} + \frac{\partial \xi^2}{\partial x^k}\frac{\partial \xi^2}{\partial x^l} + \frac{\partial \xi^3}{\partial x^k}\frac{\partial \xi^3}{\partial x^l}$$

For example, for $k = 1$, $l = 1$ we get

$$g_{11} = \frac{\partial \xi^i}{\partial x^1}\frac{\partial \xi^i}{\partial x^1} = \cos^2 x^2 + \sin^2 x^2 + 0 = 1$$

Repeating this operation, we find the complete metric tensor is

$$g_{kl} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & (x^1)^2 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$

$$g = \det(g_{kl}) = (x^1)^2$$

This is equivalent to the calculation in Gibbs notation:

$$G = J^T \cdot J = \begin{pmatrix} \cos x^2 & \sin x^2 & 0 \\ -x^1 \sin x^2 & x^1 \cos x^2 & 0 \\ 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} \cos x^2 & -x^1 \sin x^2 & 0 \\ \sin x^2 & x^1 \cos x^2 & 0 \\ 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & (x^1)^2 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$

Distance in the transformed system is given by

$$(ds)^2 = g_{kl}\, dx^k\, dx^l = d\mathbf{x}^T \cdot G \cdot d\mathbf{x}$$

$$(ds)^2 = \begin{pmatrix} dx^1 & dx^2 & dx^3 \end{pmatrix}\begin{pmatrix} 1 & 0 & 0 \\ 0 & (x^1)^2 & 0 \\ 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} dx^1 \\ dx^2 \\ dx^3 \end{pmatrix} = \begin{pmatrix} dx^1 & dx^2 & dx^3 \end{pmatrix}\begin{pmatrix} dx^1 \\ (x^1)^2\, dx^2 \\ dx^3 \end{pmatrix}$$

$$(ds)^2 = (dx^1)^2 + (x^1\, dx^2)^2 + (dx^3)^2$$

Note:

- The fact that the metric tensor is diagonal can be attributed to the transformation being orthogonal.


- Since the product of any matrix with its transpose is guaranteed to yield a symmetric matrix, the metric tensor is always symmetric.

Also, we have the volume ratio of differential elements as

$$d\xi^1\, d\xi^2\, d\xi^3 = x^1\, dx^1\, dx^2\, dx^3$$

Now

$$\frac{\partial S}{\partial \xi^1} = \frac{\partial S}{\partial x^1}\frac{\partial x^1}{\partial \xi^1} + \frac{\partial S}{\partial x^2}\frac{\partial x^2}{\partial \xi^1} + \frac{\partial S}{\partial x^3}\frac{\partial x^3}{\partial \xi^1}$$

$$= \frac{\partial S}{\partial x^1}\frac{\xi^1}{\sqrt{(\xi^1)^2 + (\xi^2)^2}} - \frac{\partial S}{\partial x^2}\frac{\xi^2}{(\xi^1)^2 + (\xi^2)^2}$$

$$= \cos x^2\,\frac{\partial S}{\partial x^1} - \frac{\sin x^2}{x^1}\,\frac{\partial S}{\partial x^2}$$

So the transformed equation becomes

$$\cos x^2\,\frac{\partial S}{\partial x^1} - \frac{\sin x^2}{x^1}\,\frac{\partial S}{\partial x^2} - S = (x^1)^2$$
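The cylindrical metric can be checked without any hand differentiation. This added sketch (helper names are ad hoc) builds the Jacobian $J = \partial \xi^i / \partial x^j$ by central differences and confirms $G = J^T \cdot J = \mathrm{diag}(1, (x^1)^2, 1)$:

```python
import math

# Added numerical check of the cylindrical metric tensor.
def xi(x):
    # the inverse map: xi1 = x1 cos x2, xi2 = x1 sin x2, xi3 = x3
    x1, x2, x3 = x
    return [x1*math.cos(x2), x1*math.sin(x2), x3]

def jacobian(f, x, h=1e-6):
    # central-difference Jacobian J[i][j] = d f_i / d x_j
    n = len(x)
    J = [[0.0]*n for _ in range(n)]
    for j in range(n):
        xp = list(x); xp[j] += h
        xm = list(x); xm[j] -= h
        fp, fm = f(xp), f(xm)
        for i in range(n):
            J[i][j] = (fp[i] - fm[i]) / (2*h)
    return J

x = [2.0, 0.7, 1.5]      # (x1, x2, x3) = (r, theta, z)
J = jacobian(xi, x)
G = [[sum(J[k][i]*J[k][j] for k in range(3)) for j in range(3)]
     for i in range(3)]  # G = J^T J

assert abs(G[0][0] - 1.0) < 1e-6
assert abs(G[1][1] - x[0]**2) < 1e-5     # (x1)^2
assert abs(G[2][2] - 1.0) < 1e-6
assert abs(G[0][1]) < 1e-6 and abs(G[1][2]) < 1e-6   # off-diagonals vanish
```

The vanishing off-diagonal terms reflect the orthogonality of the cylindrical transformation noted above.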

    1.3.2 Covariance and Contravariance

Quantities known as contravariant vectors transform according to

$$\bar{u}^i = \frac{\partial \bar{x}^i}{\partial x^j} u^j \qquad (1.43)$$

Quantities known as covariant vectors transform according to

$$\bar{u}_i = \frac{\partial x^j}{\partial \bar{x}^i} u_j \qquad (1.44)$$

Here we have considered general transformations from one non-Cartesian coordinate system $(x^1, x^2, x^3)$ to another $(\bar{x}^1, \bar{x}^2, \bar{x}^3)$.

Example 1.5
Let's say $(x, y, z)$ is a normal Cartesian system and define the transformation

$$\bar{x} = \lambda x \qquad \bar{y} = \lambda y \qquad \bar{z} = \lambda z$$

Now we can assign velocities in both the unbarred and barred systems:

$$u^x = \frac{dx}{dt} \qquad u^y = \frac{dy}{dt} \qquad u^z = \frac{dz}{dt}$$

$$\bar{u}^{x} = \frac{d\bar{x}}{dt} \qquad \bar{u}^{y} = \frac{d\bar{y}}{dt} \qquad \bar{u}^{z} = \frac{d\bar{z}}{dt}$$

$$\bar{u}^{x} = \frac{\partial \bar{x}}{\partial x}\frac{dx}{dt} \qquad \bar{u}^{y} = \frac{\partial \bar{y}}{\partial y}\frac{dy}{dt} \qquad \bar{u}^{z} = \frac{\partial \bar{z}}{\partial z}\frac{dz}{dt}$$

$$\bar{u}^{x} = \lambda u^x \qquad \bar{u}^{y} = \lambda u^y \qquad \bar{u}^{z} = \lambda u^z$$

$$\bar{u}^{x} = \frac{\partial \bar{x}}{\partial x} u^x \qquad \bar{u}^{y} = \frac{\partial \bar{y}}{\partial y} u^y \qquad \bar{u}^{z} = \frac{\partial \bar{z}}{\partial z} u^z$$

This suggests the velocity vector is contravariant.
Now consider a vector which is the gradient of a function $f(x, y, z)$. For example, let

$$f(x, y, z) = x + y^2 + z^3$$

$$u_x = \frac{\partial f}{\partial x} \qquad u_y = \frac{\partial f}{\partial y} \qquad u_z = \frac{\partial f}{\partial z}$$

$$u_x = 1 \qquad u_y = 2y \qquad u_z = 3z^2$$

In the new coordinates

$$f\left(\frac{\bar{x}}{\lambda}, \frac{\bar{y}}{\lambda}, \frac{\bar{z}}{\lambda}\right) = \frac{\bar{x}}{\lambda} + \frac{\bar{y}^2}{\lambda^2} + \frac{\bar{z}^3}{\lambda^3}$$

so

$$\bar{f}(\bar{x}, \bar{y}, \bar{z}) = \frac{\bar{x}}{\lambda} + \frac{\bar{y}^2}{\lambda^2} + \frac{\bar{z}^3}{\lambda^3}$$

Now

$$\bar{u}_{x} = \frac{\partial \bar{f}}{\partial \bar{x}} \qquad \bar{u}_{y} = \frac{\partial \bar{f}}{\partial \bar{y}} \qquad \bar{u}_{z} = \frac{\partial \bar{f}}{\partial \bar{z}}$$

$$\bar{u}_{x} = \frac{1}{\lambda} \qquad \bar{u}_{y} = \frac{2\bar{y}}{\lambda^2} \qquad \bar{u}_{z} = \frac{3\bar{z}^2}{\lambda^3}$$

In terms of $x, y, z$, we have

$$\bar{u}_{x} = \frac{1}{\lambda} \qquad \bar{u}_{y} = \frac{2y}{\lambda} \qquad \bar{u}_{z} = \frac{3z^2}{\lambda}$$

So it is clear here that, in contrast to the velocity vector,

$$\bar{u}_{x} = \frac{1}{\lambda} u_x \qquad \bar{u}_{y} = \frac{1}{\lambda} u_y \qquad \bar{u}_{z} = \frac{1}{\lambda} u_z$$

Somewhat more generally, we find for this case that

$$\bar{u}_{x} = \frac{\partial x}{\partial \bar{x}} u_x \qquad \bar{u}_{y} = \frac{\partial y}{\partial \bar{y}} u_y \qquad \bar{u}_{z} = \frac{\partial z}{\partial \bar{z}} u_z$$

which suggests the gradient vector is covariant.
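The covariant scaling can be confirmed numerically. In this added sketch the stretch factor $\lambda$ of Example 1.5 is arbitrarily taken as 3, and the barred gradient is computed by finite differences rather than by hand:

```python
# Added illustration: under xbar = lam*x, ybar = lam*y, zbar = lam*z,
# gradient components transform covariantly (each picks up a factor 1/lam).
lam = 3.0
x, y, z = 1.0, 2.0, 0.5
xb, yb, zb = lam*x, lam*y, lam*z

def fbar(xb, yb, zb):
    # f = x + y^2 + z^3, rewritten in barred coordinates
    return xb/lam + (yb/lam)**2 + (zb/lam)**3

# unbarred gradient of f at (x, y, z)
grad = (1.0, 2*y, 3*z**2)

# barred gradient by central differences
h = 1e-6
gradbar = (
    (fbar(xb+h, yb, zb) - fbar(xb-h, yb, zb)) / (2*h),
    (fbar(xb, yb+h, zb) - fbar(xb, yb-h, zb)) / (2*h),
    (fbar(xb, yb, zb+h) - fbar(xb, yb, zb-h)) / (2*h),
)

# covariant rule: ubar_i = (dx^j/dxbar^i) u_j = u_i / lam
for ub, u in zip(gradbar, grad):
    assert abs(ub - u/lam) < 1e-6
```

A velocity, by contrast, picks up the factor $\lambda$ itself, since $d\bar{x}/dt = \lambda\, dx/dt$.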

Contravariant tensors transform according to

$$\bar{v}^{ij} = \frac{\partial \bar{x}^i}{\partial x^k}\frac{\partial \bar{x}^j}{\partial x^l} v^{kl}$$

Covariant tensors transform according to

$$\bar{v}_{ij} = \frac{\partial x^k}{\partial \bar{x}^i}\frac{\partial x^l}{\partial \bar{x}^j} v_{kl}$$

Mixed tensors transform according to

$$\bar{v}^i_j = \frac{\partial \bar{x}^i}{\partial x^k}\frac{\partial x^l}{\partial \bar{x}^j} v^k_l$$


The ideas of covariant and contravariant derivatives play an important role in mathematical physics, namely in that the equations should be formulated such that they are invariant under coordinate transformations. This is not particularly difficult for Cartesian systems, but for non-orthogonal systems, one cannot use differentiation in the ordinary sense but must instead use the notion of covariant and contravariant derivatives, depending on the problem. The role of these terms was especially important in the development of the theory of relativity.

Consider a contravariant vector $u^i$ defined in $x^i$ which has corresponding components $U^i$ in the Cartesian $\xi^i$. Take $w^i_j$ and $W^i_j$ to represent the covariant spatial derivative of $u^i$ and $U^i$, respectively. Let's use the chain rule and the definitions of tensorial quantities to arrive at a formula for covariant differentiation.

From the definition of contravariance,

$$U^i = \frac{\partial \xi^i}{\partial x^l} u^l \qquad (1.45)$$

Take the derivative in Cartesian space and then use the chain rule:

$$W^i_j = \frac{\partial U^i}{\partial \xi^j} = \frac{\partial U^i}{\partial x^k}\frac{\partial x^k}{\partial \xi^j} \qquad (1.46)$$

$$= \frac{\partial}{\partial x^k}\left(\frac{\partial \xi^i}{\partial x^l} u^l\right)\frac{\partial x^k}{\partial \xi^j} \qquad (1.47)$$

$$= \left(\frac{\partial^2 \xi^i}{\partial x^k \partial x^l} u^l + \frac{\partial \xi^i}{\partial x^l}\frac{\partial u^l}{\partial x^k}\right)\frac{\partial x^k}{\partial \xi^j} \qquad (1.48)$$

$$W^p_q = \left(\frac{\partial^2 \xi^p}{\partial x^k \partial x^l} u^l + \frac{\partial \xi^p}{\partial x^l}\frac{\partial u^l}{\partial x^k}\right)\frac{\partial x^k}{\partial \xi^q} \qquad (1.49)$$

From the definition of a mixed tensor,

$$w^i_j = W^p_q \frac{\partial x^i}{\partial \xi^p}\frac{\partial \xi^q}{\partial x^j} \qquad (1.50)$$

$$= \left(\frac{\partial^2 \xi^p}{\partial x^k \partial x^l} u^l + \frac{\partial \xi^p}{\partial x^l}\frac{\partial u^l}{\partial x^k}\right)\frac{\partial x^k}{\partial \xi^q}\frac{\partial x^i}{\partial \xi^p}\frac{\partial \xi^q}{\partial x^j} \qquad (1.51)$$

$$= \frac{\partial^2 \xi^p}{\partial x^k \partial x^l}\frac{\partial x^k}{\partial \xi^q}\frac{\partial \xi^q}{\partial x^j}\frac{\partial x^i}{\partial \xi^p}\, u^l + \frac{\partial \xi^p}{\partial x^l}\frac{\partial x^k}{\partial \xi^q}\frac{\partial \xi^q}{\partial x^j}\frac{\partial x^i}{\partial \xi^p}\frac{\partial u^l}{\partial x^k} \qquad (1.52)$$

$$= \frac{\partial^2 \xi^p}{\partial x^k \partial x^l}\frac{\partial x^k}{\partial x^j}\frac{\partial x^i}{\partial \xi^p}\, u^l + \frac{\partial x^i}{\partial x^l}\frac{\partial x^k}{\partial x^j}\frac{\partial u^l}{\partial x^k} \qquad (1.53)$$

$$= \frac{\partial^2 \xi^p}{\partial x^k \partial x^l}\,\delta^k_j\,\frac{\partial x^i}{\partial \xi^p}\, u^l + \delta^i_l\,\delta^k_j\,\frac{\partial u^l}{\partial x^k} \qquad (1.54)$$

$$= \frac{\partial^2 \xi^p}{\partial x^j \partial x^l}\frac{\partial x^i}{\partial \xi^p}\, u^l + \frac{\partial u^i}{\partial x^j} \qquad (1.55)$$

Here we have used the identity that $\frac{\partial x^i}{\partial x^j} = \delta^i_j$, where $\delta^i_j$ is the Kronecker⁷ delta: $\delta^i_j = 1$ for $i = j$, $\delta^i_j = 0$ for $i \neq j$. We define the Christoffel⁸ symbols $\Gamma^i_{jl}$ as follows:

⁷Leopold Kronecker, 1823-1891, German/Prussian mathematician.
⁸Elwin Bruno Christoffel, 1829-1900, German mathematician.


$$\Gamma^i_{jl} = \frac{\partial^2 \xi^p}{\partial x^j \partial x^l}\frac{\partial x^i}{\partial \xi^p} \qquad (1.56)$$

and use the term $\nabla_j$ to represent the covariant derivative. Thus we have found that the covariant derivative of a contravariant vector $u^i$ is as follows:

$$\nabla_j u^i = w^i_j = \frac{\partial u^i}{\partial x^j} + \Gamma^i_{jl} u^l \qquad (1.57)$$

Example 1.6
Find $\nabla \cdot \mathbf{u}$ in cylindrical coordinates. The transformations are

$$x^1 = +\sqrt{(\xi^1)^2 + (\xi^2)^2} \qquad x^2 = \tan^{-1}\left(\frac{\xi^2}{\xi^1}\right) \qquad x^3 = \xi^3$$

The inverse transformation is

$$\xi^1 = x^1 \cos x^2 \qquad \xi^2 = x^1 \sin x^2 \qquad \xi^3 = x^3$$

This corresponds to finding

$$\nabla_i u^i = w^i_i = \frac{\partial u^i}{\partial x^i} + \Gamma^i_{il} u^l$$

Now, for $i = j$,

$$\Gamma^i_{il} u^l = \frac{\partial^2 \xi^p}{\partial x^i \partial x^l}\frac{\partial x^i}{\partial \xi^p} u^l = \frac{\partial^2 \xi^1}{\partial x^i \partial x^l}\frac{\partial x^i}{\partial \xi^1} u^l + \frac{\partial^2 \xi^2}{\partial x^i \partial x^l}\frac{\partial x^i}{\partial \xi^2} u^l + \frac{\partial^2 \xi^3}{\partial x^i \partial x^l}\frac{\partial x^i}{\partial \xi^3} u^l$$

Noting that all second partials of $\xi^3$ are zero,

$$= \frac{\partial^2 \xi^1}{\partial x^i \partial x^l}\frac{\partial x^i}{\partial \xi^1} u^l + \frac{\partial^2 \xi^2}{\partial x^i \partial x^l}\frac{\partial x^i}{\partial \xi^2} u^l$$

Expanding the sum on $i$, and noting that partials of $x^3$ with respect to $\xi^1$ and $\xi^2$ are zero,

$$= \frac{\partial^2 \xi^1}{\partial x^1 \partial x^l}\frac{\partial x^1}{\partial \xi^1} u^l + \frac{\partial^2 \xi^1}{\partial x^2 \partial x^l}\frac{\partial x^2}{\partial \xi^1} u^l + \frac{\partial^2 \xi^2}{\partial x^1 \partial x^l}\frac{\partial x^1}{\partial \xi^2} u^l + \frac{\partial^2 \xi^2}{\partial x^2 \partial x^l}\frac{\partial x^2}{\partial \xi^2} u^l$$

Expanding the sum on $l$ and again removing the $x^3$ variation,

$$= \frac{\partial^2 \xi^1}{\partial x^1 \partial x^1}\frac{\partial x^1}{\partial \xi^1} u^1 + \frac{\partial^2 \xi^1}{\partial x^1 \partial x^2}\frac{\partial x^1}{\partial \xi^1} u^2 + \frac{\partial^2 \xi^1}{\partial x^2 \partial x^1}\frac{\partial x^2}{\partial \xi^1} u^1 + \frac{\partial^2 \xi^1}{\partial x^2 \partial x^2}\frac{\partial x^2}{\partial \xi^1} u^2$$
$$+ \frac{\partial^2 \xi^2}{\partial x^1 \partial x^1}\frac{\partial x^1}{\partial \xi^2} u^1 + \frac{\partial^2 \xi^2}{\partial x^1 \partial x^2}\frac{\partial x^1}{\partial \xi^2} u^2 + \frac{\partial^2 \xi^2}{\partial x^2 \partial x^1}\frac{\partial x^2}{\partial \xi^2} u^1 + \frac{\partial^2 \xi^2}{\partial x^2 \partial x^2}\frac{\partial x^2}{\partial \xi^2} u^2$$

Substituting for the partial derivatives, with $\partial x^1/\partial \xi^1 = \cos x^2$, $\partial x^1/\partial \xi^2 = \sin x^2$, $\partial x^2/\partial \xi^1 = -\sin x^2 / x^1$, $\partial x^2/\partial \xi^2 = \cos x^2 / x^1$,

$$= (0)u^1 - \sin x^2 \cos x^2\, u^2 + \frac{\sin^2 x^2}{x^1}\, u^1 + \sin x^2 \cos x^2\, u^2$$
$$+ (0)u^1 + \cos x^2 \sin x^2\, u^2 + \frac{\cos^2 x^2}{x^1}\, u^1 - \cos x^2 \sin x^2\, u^2$$

$$= \frac{u^1}{x^1}$$

So, in cylindrical coordinates,

$$\nabla \cdot \mathbf{u} = \frac{\partial u^1}{\partial x^1} + \frac{\partial u^2}{\partial x^2} + \frac{\partial u^3}{\partial x^3} + \frac{u^1}{x^1}$$

Note: In standard cylindrical notation, $x^1 = r$, $x^2 = \theta$, $x^3 = z$. Considering $\mathbf{u}$ to be a velocity vector, we get

$$\nabla \cdot \mathbf{u} = \frac{\partial}{\partial r}\left(\frac{dr}{dt}\right) + \frac{\partial}{\partial \theta}\left(\frac{d\theta}{dt}\right) + \frac{\partial}{\partial z}\left(\frac{dz}{dt}\right) + \frac{1}{r}\frac{dr}{dt}$$

$$= \frac{1}{r}\frac{\partial}{\partial r}\left(r\frac{dr}{dt}\right) + \frac{1}{r}\frac{\partial}{\partial \theta}\left(r\frac{d\theta}{dt}\right) + \frac{\partial}{\partial z}\left(\frac{dz}{dt}\right)$$

$$= \frac{1}{r}\frac{\partial}{\partial r}(r u_r) + \frac{1}{r}\frac{\partial u_\theta}{\partial \theta} + \frac{\partial u_z}{\partial z}$$

Here we have also used the more traditional $u_\theta = r\,\frac{d\theta}{dt} = x^1 u^2$, along with $u_r = u^1$ and $u_z = u^3$. For practical purposes, this ensures that $u_r$, $u_\theta$, $u_z$ all have the same dimensions.


We summarize some useful identities, all of which can be proved, as well as some other common notation, as follows:

$$g_{kl} = \frac{\partial \xi^i}{\partial x^k}\frac{\partial \xi^i}{\partial x^l} \qquad (1.58)$$
$$g = \det(g_{ij}) \qquad (1.59)$$
$$g_{ik}\, g^{kj} = \delta_i^j \qquad (1.60)$$
$$u_i = g_{ij} u^j \qquad (1.61)$$
$$u^i = g^{ij} u_j \qquad (1.62)$$
$$\mathbf{u} \cdot \mathbf{v} = u^i v_i = u_i v^i = g^{ij} u_j v_i = g_{ij} u^j v^i \qquad (1.63)$$
$$\mathbf{u} \times \mathbf{v} = \epsilon^{ijk} g_{jm} g_{kn} u^m v^n = \epsilon^{ijk} u_j v_k \qquad (1.64)$$
$$\Gamma^i_{jk} = \frac{\partial^2 \xi^p}{\partial x^j \partial x^k}\frac{\partial x^i}{\partial \xi^p} = \frac{1}{2} g^{ip}\left(\frac{\partial g_{pj}}{\partial x^k} + \frac{\partial g_{pk}}{\partial x^j} - \frac{\partial g_{jk}}{\partial x^p}\right) \qquad (1.65)$$
$$\nabla \mathbf{u} = \nabla_j u^i = u^i_{,j} = \frac{\partial u^i}{\partial x^j} + \Gamma^i_{jl} u^l \qquad (1.66)$$
$$\nabla \cdot \mathbf{u} = \nabla_i u^i = u^i_{,i} = \frac{\partial u^i}{\partial x^i} + \Gamma^i_{il} u^l = \frac{1}{\sqrt{g}}\frac{\partial}{\partial x^i}\left(\sqrt{g}\, u^i\right) \qquad (1.67)$$
$$\nabla \times \mathbf{u} = \frac{\partial u_j}{\partial x^i} - \frac{\partial u_i}{\partial x^j} \qquad (1.68)$$
$$\nabla \phi = \phi_{,i} = \frac{\partial \phi}{\partial x^i} \qquad (1.69)$$
$$\nabla^2 \phi = \nabla \cdot \nabla \phi = g^{ij}\phi_{,ij} = \frac{\partial}{\partial x^j}\left(g^{ij}\frac{\partial \phi}{\partial x^i}\right) + \Gamma^j_{jk}\, g^{ik}\frac{\partial \phi}{\partial x^i} \qquad (1.70)$$
$$= \frac{1}{\sqrt{g}}\frac{\partial}{\partial x^j}\left(\sqrt{g}\, g^{ij}\frac{\partial \phi}{\partial x^i}\right) \qquad (1.71)$$
$$\nabla \mathbf{T} = T^{ij}_{,k} = \frac{\partial T^{ij}}{\partial x^k} + \Gamma^i_{lk} T^{lj} + \Gamma^j_{lk} T^{il} \qquad (1.72)$$
$$\nabla \cdot \mathbf{T} = T^{ij}_{,j} = \frac{\partial T^{ij}}{\partial x^j} + \Gamma^i_{lj} T^{lj} + \Gamma^j_{lj} T^{il} = \frac{1}{\sqrt{g}}\frac{\partial}{\partial x^j}\left(\sqrt{g}\, T^{ij}\right) + \Gamma^i_{jk} T^{jk} \qquad (1.73)$$
$$= \frac{1}{\sqrt{g}}\frac{\partial}{\partial x^j}\left(\sqrt{g}\, T^{kj}\,\frac{\partial \xi^i}{\partial x^k}\right) \qquad (1.74)$$
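A few of these identities, (1.60)-(1.62), can be exercised directly in cylindrical coordinates, where $g_{kl} = \mathrm{diag}(1, r^2, 1)$. The following check is an addition to the notes:

```python
# Added check of (1.60)-(1.62) in cylindrical coordinates: the inverse metric
# is diag(1, 1/r^2, 1), and lowering then raising an index is the identity.
r = 2.5
g  = [[1.0, 0.0, 0.0], [0.0, r*r, 0.0], [0.0, 0.0, 1.0]]   # g_kl
gi = [[1.0, 0.0, 0.0], [0.0, 1.0/(r*r), 0.0], [0.0, 0.0, 1.0]]  # g^kl

# (1.60): g_ik g^kj = delta_i^j
for i in range(3):
    for j in range(3):
        s = sum(g[i][k]*gi[k][j] for k in range(3))
        assert abs(s - (1.0 if i == j else 0.0)) < 1e-12

# (1.61)-(1.62): lower an index, then raise it again
u_up = [0.7, -1.2, 3.0]                                         # u^i
u_dn = [sum(g[i][j]*u_up[j] for j in range(3)) for i in range(3)]   # u_i
u_back = [sum(gi[i][j]*u_dn[j] for j in range(3)) for i in range(3)]
for a, b in zip(u_up, u_back):
    assert abs(a - b) < 1e-12
```

Note that here $u_2 = r^2 u^2$: covariant and contravariant components differ whenever the metric is not the identity.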

    1.4 Maxima and minima

Consider the real function $f(x)$, where $x \in [a, b]$. Extrema are at $x = x_m$, where $f'(x_m) = 0$, if $x_m \in [a, b]$. It is a local minimum, a local maximum, or an inflection point according to whether $f''(x_m)$ is positive, negative, or zero, respectively.

Now consider a function of two variables $f(x, y)$, with $x \in [a, b]$, $y \in [c, d]$. A necessary condition for an extremum is

$$\frac{\partial f}{\partial x}(x_m, y_m) = \frac{\partial f}{\partial y}(x_m, y_m) = 0 \qquad (1.75)$$


where $x_m \in [a, b]$, $y_m \in [c, d]$. Next we find the Hessian⁹ matrix (Hildebrand 356)

$$H = \begin{pmatrix} \frac{\partial^2 f}{\partial x^2} & \frac{\partial^2 f}{\partial x \partial y} \\ \frac{\partial^2 f}{\partial x \partial y} & \frac{\partial^2 f}{\partial y^2} \end{pmatrix} \qquad (1.76)$$

and its determinant $D = \det H$. It can be shown that

- $f$ is a maximum if $\frac{\partial^2 f}{\partial x^2} < 0$ and $D > 0$
- $f$ is a minimum if $\frac{\partial^2 f}{\partial x^2} > 0$ and $D > 0$
- $f$ is a saddle if $D < 0$

Higher-order derivatives must be considered if $D = 0$.

Example 1.7

$$f = x^2 - y^2$$

Equating partial derivatives with respect to $x$ and to $y$ to zero, we get

$$2x = 0$$
$$-2y = 0$$

This gives $x = 0$, $y = 0$. For these values we find that

$$D = \begin{vmatrix} 2 & 0 \\ 0 & -2 \end{vmatrix} = -4$$

Since $D < 0$, the point $(0, 0)$ is a saddle point.
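The classification rule can be wrapped in a small helper. This added illustration uses the standard convention $D = f_{xx}f_{yy} - f_{xy}^2$, under which $D < 0$ signals a saddle:

```python
# Added illustration of second-derivative-test classification.
def classify(fxx, fyy, fxy):
    D = fxx*fyy - fxy**2      # determinant of the Hessian
    if D > 0:
        return "minimum" if fxx > 0 else "maximum"
    if D < 0:
        return "saddle"
    return "indeterminate"    # higher-order derivatives needed

# For f = x^2 - y^2 at (0, 0): f_xx = 2, f_yy = -2, f_xy = 0
assert classify(2.0, -2.0, 0.0) == "saddle"
assert classify(2.0, 2.0, 0.0) == "minimum"
assert classify(-2.0, -2.0, 0.0) == "maximum"
```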

    1.4.1 Derivatives of integral expressions

Often functions are expressed in terms of integrals. For example,

$$y(x) = \int_{a(x)}^{b(x)} f(x, t)\, dt$$

Here $t$ is a dummy variable of integration. Leibniz's¹⁰ rule tells us how to take derivatives of functions in integral form:

$$y(x) = \int_{a(x)}^{b(x)} f(x, t)\, dt \qquad (1.77)$$

$$\frac{dy(x)}{dx} = f(x, b(x))\frac{db(x)}{dx} - f(x, a(x))\frac{da(x)}{dx} + \int_{a(x)}^{b(x)} \frac{\partial f(x, t)}{\partial x}\, dt \qquad (1.78)$$

⁹Ludwig Otto Hesse, 1811-1874, German mathematician, studied under Jacobi.
¹⁰Gottfried Wilhelm von Leibniz, 1646-1716, German mathematician and philosopher of great influence; co-inventor with Sir Isaac Newton, 1643-1727, of the calculus.


Inverting this arrangement in a special case, we note that if

$$y(x) = y(x_0) + \int_{x_0}^{x} f(t)\, dt \qquad (1.79)$$

then

$$\frac{dy(x)}{dx} = f(x)\frac{dx}{dx} - f(x_0)\frac{dx_0}{dx} + \int_{x_0}^{x} \frac{\partial f(t)}{\partial x}\, dt \qquad (1.81)$$

$$\frac{dy(x)}{dx} = f(x) \qquad (1.82)$$

Note that the integral expression naturally includes the initial condition that when $x = x_0$, $y = y(x_0)$. This needs to be expressed separately for the differential version of the equation.

Example 1.8
Find $dy/dx$ if

$$y(x) = \int_{x}^{x^2} (x + 1)\,t^2\, dt \qquad (1.83)$$

Using Leibniz's rule we get

$$\frac{dy(x)}{dx} = [(x+1)x^4](2x) - [(x+1)x^2](1) + \int_x^{x^2} t^2\, dt \qquad (1.84)$$

$$= 2x^6 + 2x^5 - x^3 - x^2 + \left[\frac{t^3}{3}\right]_x^{x^2} \qquad (1.85)$$

$$= 2x^6 + 2x^5 - x^3 - x^2 + \frac{x^6}{3} - \frac{x^3}{3} \qquad (1.86)$$

$$= \frac{7x^6}{3} + 2x^5 - \frac{4x^3}{3} - x^2 \qquad (1.87)$$

In this case, but not all, we can achieve the same result from explicit formulation of $y(x)$:

$$y(x) = (x+1)\int_x^{x^2} t^2\, dt \qquad (1.89)$$

$$= (x+1)\left[\frac{t^3}{3}\right]_x^{x^2} \qquad (1.90)$$

$$= (x+1)\left(\frac{x^6}{3} - \frac{x^3}{3}\right) \qquad (1.91)$$

$$y(x) = \frac{x^7}{3} + \frac{x^6}{3} - \frac{x^4}{3} - \frac{x^3}{3} \qquad (1.92)$$

$$\frac{dy(x)}{dx} = \frac{7x^6}{3} + 2x^5 - \frac{4x^3}{3} - x^2 \qquad (1.93)$$

So the two methods give identical results.
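Leibniz's rule in Example 1.8 can also be confirmed against a direct finite difference of the integral itself. This added sketch uses a simple midpoint-rule quadrature:

```python
# Added check of Example 1.8: y(x) = int_x^{x^2} (x+1) t^2 dt.
def y(x, n=4000):
    # midpoint-rule quadrature of the integral
    a, b = x, x*x
    h = (b - a) / n
    return sum((x + 1) * (a + (k + 0.5)*h)**2 for k in range(n)) * h

def dydx_leibniz(x):
    # f(x, b) b' - f(x, a) a' + integral of df/dx over [a, b]
    return (x + 1)*x**4 * 2*x - (x + 1)*x**2 * 1 + (x**6 - x**3)/3

def dydx_closed(x):
    return 7*x**6/3 + 2*x**5 - 4*x**3/3 - x**2

xv = 1.7
h = 1e-5
fd = (y(xv + h) - y(xv - h)) / (2*h)
assert abs(dydx_leibniz(xv) - dydx_closed(xv)) < 1e-9   # the two formulas agree
assert abs(fd - dydx_closed(xv)) < 1e-2                 # and match the FD slope
```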


    1.4.2 Calculus of variations

(See Hildebrand, p. 360.)
The problem is to find the function $y(x)$, with $x \in [x_1, x_2]$, and boundary conditions $y(x_1) = y_1$, $y(x_2) = y_2$, such that the integral

$$I = \int_{x_1}^{x_2} f(x, y, y')\, dx \qquad (1.94)$$

is an extremum. If $y(x)$ is the desired solution, let $Y(x) = y(x) + \epsilon h(x)$, where $h(x_1) = h(x_2) = 0$. Thus $Y(x)$ also satisfies the boundary conditions; also $Y'(x) = y'(x) + \epsilon h'(x)$. We can write

$$I(\epsilon) = \int_{x_1}^{x_2} f(x, Y, Y')\, dx$$

Taking $dI/d\epsilon$, utilizing Leibniz's formula, we get

$$\frac{dI}{d\epsilon} = \int_{x_1}^{x_2}\left(\frac{\partial f}{\partial x}\frac{\partial x}{\partial \epsilon} + \frac{\partial f}{\partial Y}\frac{\partial Y}{\partial \epsilon} + \frac{\partial f}{\partial Y'}\frac{\partial Y'}{\partial \epsilon}\right) dx$$

Evaluating, we find

$$\frac{dI}{d\epsilon} = \int_{x_1}^{x_2}\left(\frac{\partial f}{\partial x}\cdot 0 + \frac{\partial f}{\partial Y}h(x) + \frac{\partial f}{\partial Y'}h'(x)\right) dx$$

Since $I$ is an extremum at $\epsilon = 0$, we have $dI/d\epsilon = 0$ for $\epsilon = 0$. This gives

$$0 = \int_{x_1}^{x_2}\left.\left(\frac{\partial f}{\partial Y}h(x) + \frac{\partial f}{\partial Y'}h'(x)\right)\right|_{\epsilon=0} dx$$

Also, when $\epsilon = 0$ we have $Y = y$, $Y' = y'$, so

$$0 = \int_{x_1}^{x_2}\left(\frac{\partial f}{\partial y}h(x) + \frac{\partial f}{\partial y'}h'(x)\right) dx$$

Look at the second term in this integral. From integration by parts we get

$$\int_{x_1}^{x_2} \frac{\partial f}{\partial y'}h'(x)\, dx = \int_{x_1}^{x_2} \frac{\partial f}{\partial y'}\, dh = \left.\frac{\partial f}{\partial y'}h(x)\right|_{x_1}^{x_2} - \int_{x_1}^{x_2} \frac{d}{dx}\left(\frac{\partial f}{\partial y'}\right)h(x)\, dx$$

The first term above is zero because of our conditions on $h(x_1)$ and $h(x_2)$. Thus, substituting into the original equation, we have

$$\int_{x_1}^{x_2}\left(\frac{\partial f}{\partial y} - \frac{d}{dx}\left(\frac{\partial f}{\partial y'}\right)\right)h(x)\, dx = 0 \qquad (1.95)$$

The equality holds for all $h(x)$, so that we must have

$$\frac{\partial f}{\partial y} - \frac{d}{dx}\left(\frac{\partial f}{\partial y'}\right) = 0 \qquad (1.96)$$


called the Euler¹¹ equation.
While this is, in general, the preferred form of the Euler equation, its structure is better displayed by considering a slightly different form. Expanding the total derivative term, that is,

$$\frac{d}{dx}\left(\frac{\partial f}{\partial y'}(x, y, y')\right) = \frac{\partial^2 f}{\partial y' \partial x} + \frac{\partial^2 f}{\partial y' \partial y}\frac{dy}{dx} + \frac{\partial^2 f}{\partial y' \partial y'}\frac{dy'}{dx} \qquad (1.97)$$

$$= \frac{\partial^2 f}{\partial y' \partial x} + \frac{\partial^2 f}{\partial y' \partial y}\,y' + \frac{\partial^2 f}{\partial y' \partial y'}\,y'' \qquad (1.98)$$

the Euler equation, after slight rearrangement, becomes

$$\frac{\partial^2 f}{\partial y' \partial y'}\,y'' + \frac{\partial^2 f}{\partial y' \partial y}\,y' + \frac{\partial^2 f}{\partial y' \partial x} - \frac{\partial f}{\partial y} = 0 \qquad (1.99)$$

$$f_{y'y'}\frac{d^2y}{dx^2} + f_{y'y}\frac{dy}{dx} + (f_{y'x} - f_y) = 0 \qquad (1.100)$$

This is clearly a second-order differential equation when $f_{y'y'} \neq 0$, and, in general, non-linear. If $f_{y'y'}$ is always non-zero, the problem is said to be regular. If $f_{y'y'} = 0$ at any point, the equation is no longer second order there, and the problem is said to be singular at such points. Note that satisfaction of two boundary conditions becomes problematic for equations of order less than two.

There are several special cases of the function $f$.

1. $f = f(x, y)$

The Euler equation is

$$\frac{\partial f}{\partial y} = 0 \qquad (1.101)$$

which is easily solved:

$$f(x, y) = A(x) \qquad (1.102)$$

which, knowing $f$, is then solved for $y(x)$.

2. $f = f(x, y')$

The Euler equation is

$$\frac{d}{dx}\left(\frac{\partial f}{\partial y'}\right) = 0 \qquad (1.103)$$

which yields

$$\frac{\partial f}{\partial y'} = A \qquad (1.104)$$

$$f(x, y') = Ay' + B(x) \qquad (1.105)$$

Again, knowing $f$, the equation is solved for $y'$ and then integrated to find $y(x)$.

¹¹Leonhard Euler, 1707-1783, prolific Swiss mathematician, born in Basel, died in St. Petersburg.


3. $f = f(y, y')$

The Euler equation is

$$\frac{\partial f}{\partial y} - \frac{d}{dx}\left(\frac{\partial f}{\partial y'}(y, y')\right) = 0 \qquad (1.106)$$

$$\frac{\partial f}{\partial y} - \frac{\partial^2 f}{\partial y \partial y'}\frac{dy}{dx} - \frac{\partial^2 f}{\partial y' \partial y'}\frac{dy'}{dx} = 0 \qquad (1.107)$$

$$\frac{\partial f}{\partial y} - \frac{\partial^2 f}{\partial y \partial y'}\frac{dy}{dx} - \frac{\partial^2 f}{\partial y' \partial y'}\frac{d^2y}{dx^2} = 0 \qquad (1.108)$$

Multiply by $y'$ to get

$$y'\left(\frac{\partial f}{\partial y} - \frac{\partial^2 f}{\partial y \partial y'}\frac{dy}{dx} - \frac{\partial^2 f}{\partial y' \partial y'}\frac{d^2y}{dx^2}\right) = 0 \qquad (1.109)$$

Adding and subtracting $\frac{\partial f}{\partial y'}y''$, this becomes

$$\frac{\partial f}{\partial y}y' + \frac{\partial f}{\partial y'}y'' - y'\left(\frac{\partial^2 f}{\partial y \partial y'}\frac{dy}{dx} + \frac{\partial^2 f}{\partial y' \partial y'}\frac{d^2y}{dx^2}\right) - \frac{\partial f}{\partial y'}y'' = 0 \qquad (1.110)$$

which can be regrouped as

$$\frac{d}{dx}\left(f - y'\frac{\partial f}{\partial y'}\right) = 0 \qquad (1.112)$$

which can be integrated. Thus

$$f(y, y') - y'\frac{\partial f}{\partial y'} = A \qquad (1.113)$$

which is effectively a first-order ordinary differential equation to be solved. Another integration constant arises; this, along with $A$, is determined by the two end point conditions.

Example 1.9
Find the curve of minimum length between the points $(x_1, y_1)$ and $(x_2, y_2)$.
If $y(x)$ is the curve, then $y(x_1) = y_1$ and $y(x_2) = y_2$. The length of the curve is

$$L = \int_{x_1}^{x_2} \sqrt{1 + (y')^2}\, dx$$

The Euler equation is

$$\frac{d}{dx}\left(\frac{y'}{\sqrt{1 + (y')^2}}\right) = 0$$

which can be integrated to give

$$\frac{y'}{\sqrt{1 + (y')^2}} = C$$

Solving for $y'$ we get

$$y' = A \equiv \sqrt{\frac{C^2}{1 - C^2}}$$

from which

$$y = Ax + B$$

The constants $A$ and $B$ are obtained from the boundary conditions $y(x_1) = y_1$ and $y(x_2) = y_2$.


Figure 1.5: Body of revolution of minimum surface area for $(x_1, y_1) = (-1, 3.08616)$ and $(x_2, y_2) = (2, 2.25525)$: the minimizing curve and the corresponding surface of revolution.

Example 1.10
Find the curve through the points $(x_1, y_1)$ and $(x_2, y_2)$ such that the surface area of the body of revolution formed by rotating the curve around the $x$-axis is a minimum.
We wish to minimize

$$I = \int_{x_1}^{x_2} y\sqrt{1 + (y')^2}\, dx$$

Here $f(y, y') = y\sqrt{1 + (y')^2}$, so the Euler equation reduces to

$$f(y, y') - y'\frac{\partial f}{\partial y'} = A$$

$$y\sqrt{1 + (y')^2} - y'\frac{y\, y'}{\sqrt{1 + (y')^2}} = A$$

$$y\left(1 + (y')^2\right) - y\,(y')^2 = A\sqrt{1 + (y')^2}$$

$$y = A\sqrt{1 + (y')^2}$$

$$y' = \sqrt{\left(\frac{y}{A}\right)^2 - 1}$$

$$y(x) = A \cosh\frac{x - B}{A}$$

This is a catenary. The constants $A$ and $B$ are determined from the boundary conditions $y(x_1) = y_1$ and $y(x_2) = y_2$. In general this requires a trial-and-error solution of simultaneous algebraic equations. If $(x_1, y_1) = (-1, 3.08616)$ and $(x_2, y_2) = (2, 2.25525)$, one finds that solution of the resulting algebraic equations gives $A = 2$, $B = 1$.
For these conditions, the curve $y(x)$, along with the resulting body of revolution of minimum surface area, is plotted in Figure 1.5.

    For these conditions, the curve y(x) along with the resulting body of revolution of minimum surfacearea are plotted in Figure 1.5.

    36

  • 8/8/2019 Algebra Lineal y Otros Temas

    37/368

    1.5 Lagrange multipliers

    Suppose we have to determine the extremum of f(x_1, x_2, ..., x_m) subject to the n constraints

    g_i(x_1, x_2, ..., x_m) = 0, \quad i = 1, 2, ..., n (1.114)

    Define

    f^* = f - \lambda_1 g_1 - \lambda_2 g_2 - ... - \lambda_n g_n (1.115)

    where the \lambda_i (i = 1, 2, ..., n) are unknown constants called Lagrange^{12} multipliers. To get the extremum of f^*, we equate to zero its derivatives with respect to x_1, x_2, ..., x_m. Thus we have

    \frac{\partial f^*}{\partial x_i} = 0, \quad i = 1, ..., m (1.116)

    g_i = 0, \quad i = 1, ..., n (1.117)

    which are (m + n) equations that can be solved for x_i (i = 1, 2, ..., m) and \lambda_i (i = 1, 2, ..., n).

    Example 1.11
    Extremize f = x^2 + y^2 subject to the constraint 5x^2 - 6xy + 5y^2 = 8.
    Let

    f^* = x^2 + y^2 - \lambda (5x^2 - 6xy + 5y^2 - 8)

    from which

    2x - 10 \lambda x + 6 \lambda y = 0

    2y + 6 \lambda x - 10 \lambda y = 0

    5x^2 - 6xy + 5y^2 = 8

    From the first equation

    \lambda = \frac{2x}{10x - 6y}

    which, when substituted into the second, gives

    x = \pm y

    The last equation gives the extrema to be at (x, y) = (\sqrt{2}, \sqrt{2}), (-\sqrt{2}, -\sqrt{2}), (\frac{1}{\sqrt{2}}, -\frac{1}{\sqrt{2}}), (-\frac{1}{\sqrt{2}}, \frac{1}{\sqrt{2}}).
    The first two sets give f = 4 (maximum) and the last two f = 1 (minimum).
    The function to be maximized along with the constraint function and its image are plotted in Figure 1.6.
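The four stationary points can be verified directly: each must satisfy the constraint exactly, and the values of f at them are 4 and 1 as claimed.

```python
f = lambda x, y: x**2 + y**2
g = lambda x, y: 5 * x**2 - 6 * x * y + 5 * y**2 - 8   # constraint g = 0

r = 2 ** 0.5
points = [(r, r), (-r, -r), (1 / r, -1 / r), (-1 / r, 1 / r)]
values = [f(x, y) for x, y in points]            # f at each candidate
residuals = [abs(g(x, y)) for x, y in points]    # constraint violation
```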

    A similar technique can be used for the extremization of a functional with constraint. We wish to find the function y(x), with x \in [x_1, x_2], and y(x_1) = y_1, y(x_2) = y_2, such that the integral

    I = \int_{x_1}^{x_2} f(x, y, y') \, dx (1.118)

    ^{12}Joseph-Louis Lagrange, 1736-1813, Italian-born French mathematician.


    Figure 1.6: Unconstrained function f(x, y) along with constrained function and constraint function (image of constrained function)

    is an extremum, and satisfies the constraint

    g = 0 (1.119)

    Define

    I^* = I - \lambda g (1.120)

    and continue as before.

    Example 1.12
    Extremize I, where

    I = \int_0^a y \sqrt{1 + (y')^2} \, dx

    with y(0) = y(a) = 0, and subject to the constraint

    \int_0^a \sqrt{1 + (y')^2} \, dx = \ell

    That is, find the maximum surface area of a body of revolution which has a constant length. Let

    g = \int_0^a \sqrt{1 + (y')^2} \, dx - \ell = 0

    Then let

    I^* = I - \lambda g = \int_0^a y \sqrt{1 + (y')^2} \, dx - \lambda \int_0^a \sqrt{1 + (y')^2} \, dx + \lambda \ell

    = \int_0^a (y - \lambda) \sqrt{1 + (y')^2} \, dx + \lambda \ell

    = \int_0^a \left( (y - \lambda) \sqrt{1 + (y')^2} + \frac{\lambda \ell}{a} \right) dx


    Figure 1.7: Curve of length \ell = 5/4 with y(0) = y(1) = 0 whose surface area of corresponding body of revolution (also shown) is maximum.

    With f = (y - \lambda) \sqrt{1 + (y')^2} + \frac{\lambda \ell}{a}, we have the Euler equation

    \frac{\partial f}{\partial y} - \frac{d}{dx} \left( \frac{\partial f}{\partial y'} \right) = 0

    Integrating from an earlier developed relationship for the case when f = f(y, y'), and absorbing \frac{\lambda \ell}{a} into a constant A, we have

    (y - \lambda) \sqrt{1 + (y')^2} - y' \frac{(y - \lambda) y'}{\sqrt{1 + (y')^2}} = A

    from which

    (y - \lambda)(1 + (y')^2) - (y')^2 (y - \lambda) = A \sqrt{1 + (y')^2}

    (y - \lambda) \left( 1 + (y')^2 - (y')^2 \right) = A \sqrt{1 + (y')^2}

    y - \lambda = A \sqrt{1 + (y')^2}

    y' = \sqrt{\left( \frac{y - \lambda}{A} \right)^2 - 1}

    y = \lambda + A \cosh \frac{x - B}{A}

    Here A, B, \lambda have to be numerically determined from the three conditions y(0) = y(a) = 0, g = 0. If we take the case where a = 1, \ell = 5/4, we find that A = 0.422752, B = 1/2, \lambda = -0.754549. For these values, the curve of interest, along with the surface of revolution, is plotted in Figure 1.7.
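The constants quoted for this case can be checked numerically; here the sign of \lambda is taken as negative, inferred so that the boundary condition y(0) = 0 is satisfied.

```python
import math

# constants quoted in the example; lam's sign is an inferred assumption
A, B, lam = 0.422752, 0.5, -0.754549
y = lambda x: lam + A * math.cosh((x - B) / A)

# arc length of the curve on [0, 1] by chord summation
n = 20000
length = sum(
    math.hypot(1.0 / n, y((i + 1) / n) - y(i / n)) for i in range(n)
)
end0, end1 = y(0.0), y(1.0)   # both should be (near) zero
```

The computed length should reproduce the constraint value \ell = 5/4.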

    Problems

    1. If z^3 + zx + x^4 y = 3y,

    (a) find a general expression for

    \left. \frac{\partial z}{\partial x} \right|_y, \quad \left. \frac{\partial z}{\partial y} \right|_x,


    13. Show that the functions

    u = \frac{x + y}{x - y}, \quad v = \frac{xy}{(x - y)^2}

    are functionally dependent.

    14. Find the point on the curve of intersection of z - xy = 10 and x + y + z = 1, that is closest to the origin.

    15. Find a function y(x) with y(0) = 1, y(1) = 0 that extremizes the integral

    I = \int_0^1 \frac{\sqrt{1 + \left( \frac{dy}{dx} \right)^2}}{y} \, dx

    Plot y(x) for this function.

    16. For elliptic cylindrical coordinates

    \xi_1 = \cosh x_1 \cos x_2

    \xi_2 = \sinh x_1 \sin x_2

    \xi_3 = x_3

    Find the Jacobian matrix J and the metric tensor G. Find the inverse transformation. Plot lines of constant x_1 and x_2 in the \xi_1 and \xi_2 plane.

    17. For the elliptic coordinate system of the previous problem, find \nabla \cdot u where u is an arbitrary vector.

    18. Find the covariant derivative of the contravariant velocity vector in cylindrical coordinates.


    Chapter 2

    First-order ordinary differential equations

    see Lopez, Chapters 1-3,
    see Riley, Hobson, and Bence, Chapter 12,
    see Bender and Orszag, 1.6.

    A first-order ordinary differential equation is of the form

    F(x, y, y') = 0 (2.1)

    where y' = \frac{dy}{dx}.

    2.1 Separation of variables

    Equation (2.1) is separable if it can be written in the form

    P(x) \, dx = Q(y) \, dy (2.2)

    which can then be integrated.

    Example 2.1
    Solve

    y y' = \frac{8x + 1}{y}, \quad \text{with } y(1) = -5.

    Separating variables,

    y^2 \, dy = 8x \, dx + dx.

    Integrating, we have

    \frac{y^3}{3} = 4x^2 + x + C.

    The initial condition gives C = -\frac{140}{3}, so that the solution is

    y^3 = 12x^2 + 3x - 140.

    The solution is plotted in Figure 2.1.
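The implicit solution can be checked numerically: taking the real cube root of 12x^2 + 3x - 140 recovers y(1) = -5, and a finite-difference derivative confirms that y y' = (8x + 1)/y at a sample point.

```python
def y_cubed(x):
    # implicit solution: y^3 = 12 x^2 + 3 x - 140
    return 12 * x**2 + 3 * x - 140

def y(x):
    c = y_cubed(x)
    # real cube root, valid for negative arguments too
    return -((-c) ** (1.0 / 3.0)) if c < 0 else c ** (1.0 / 3.0)

x0, h = 1.0, 1e-6
yp = (y(x0 + h) - y(x0 - h)) / (2 * h)   # numerical y'
lhs = y(x0) * yp                          # y y'
rhs = (8 * x0 + 1) / y(x0)                # (8x + 1)/y
```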


    Figure 2.1: y(x) which solves y y' = (8x + 1)/y with y(1) = -5.

    2.2 Homogeneous equations

    An equation is homogeneous if it can be written in the form

    y' = f \left( \frac{y}{x} \right). (2.3)

    Defining

    u = \frac{y}{x} (2.4)

    we get

    y = ux,

    from which

    y' = u + x u'.

    Substituting in equation (2.3) and separating variables, we have

    u + x u' = f(u) (2.5)

    u + x \frac{du}{dx} = f(u) (2.6)

    x \frac{du}{dx} = f(u) - u (2.7)

    \frac{du}{f(u) - u} = \frac{dx}{x} (2.8)

    which can be integrated.
    Equations of the form

    y' = f \left( \frac{ax + by + c}{dx + ey + h} \right) (2.9)


    can be similarly integrated.

    Example 2.2
    Solve

    x y' = 3y + \frac{y^2}{x}, \quad \text{with } y(1) = 4.

    This can be written as

    y' = 3 \frac{y}{x} + \left( \frac{y}{x} \right)^2.

    Letting u = y/x, we get

    \frac{du}{2u + u^2} = \frac{dx}{x}.

    Since

    \frac{1}{2u + u^2} = \frac{1}{2u} - \frac{1}{4 + 2u}

    both sides can be integrated to give

    \frac{1}{2} \left( \ln |u| - \ln |2 + u| \right) = \ln |x| + C.

    The initial condition gives C = \frac{1}{2} \ln \frac{2}{3}, so that the solution can be reduced to

    \left| \frac{y}{2x + y} \right| = \frac{2}{3} x^2.

    This can be solved explicitly for y(x) in each case of the absolute value. The first case,

    y(x) = \frac{\frac{4}{3} x^3}{1 - \frac{2}{3} x^2}

    is seen to satisfy the condition at x = 1. The second case is discarded as it does not satisfy the condition at x = 1.
    The solution is plotted in Figure 2.2.
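The explicit solution can be verified against the original equation: it gives y(1) = 4, and a finite-difference derivative reproduces x y' = 3y + y^2/x at that point.

```python
y = lambda x: (4.0 / 3.0) * x**3 / (1.0 - (2.0 / 3.0) * x**2)

x0, h = 1.0, 1e-6
yp = (y(x0 + h) - y(x0 - h)) / (2 * h)   # numerical y'
lhs = x0 * yp                             # x y'
rhs = 3 * y(x0) + y(x0)**2 / x0           # 3y + y^2/x
```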

    2.3 Exact equations

    A differential equation is exact if it can be written in the form

    dF(x, y) = 0 (2.10)

    where F(x, y) = 0 is a solution to the differential equation. Using the chain rule to expand the derivative,

    \frac{\partial F}{\partial x} dx + \frac{\partial F}{\partial y} dy = 0.

    So for an equation of the form

    P(x, y) \, dx + Q(x, y) \, dy = 0 (2.11)


    Figure 2.2: y(x) which solves x y' = 3y + y^2/x with y(1) = 4

    we have an exact differential if

    \frac{\partial F}{\partial x} = P(x, y), \quad \frac{\partial F}{\partial y} = Q(x, y) (2.12)

    \frac{\partial^2 F}{\partial x \partial y} = \frac{\partial P}{\partial y}, \quad \frac{\partial^2 F}{\partial y \partial x} = \frac{\partial Q}{\partial x} (2.13)

    As long as F(x, y) is continuous and differentiable, the mixed second partials are equal; thus,

    \frac{\partial P}{\partial y} = \frac{\partial Q}{\partial x} (2.14)

    must hold if F(x, y) is to exist and render the original differential equation exact.

    Example 2.3
    Solve

    \frac{dy}{dx} = \frac{e^{x - y}}{e^{x - y} - 1}

    e^{x - y} \, dx + (1 - e^{x - y}) \, dy = 0

    \frac{\partial P}{\partial y} = -e^{x - y}

    \frac{\partial Q}{\partial x} = -e^{x - y}

    Since \frac{\partial P}{\partial y} = \frac{\partial Q}{\partial x}, the equation is exact. Thus

    \frac{\partial F}{\partial x} = P(x, y)


    Figure 2.3: y(x) which solves dy/dx = e^{x-y}/(e^{x-y} - 1), for C = -2, -1, 0, 1, 2

    \frac{\partial F}{\partial x} = e^{x - y}

    F(x, y) = e^{x - y} + A(y)

    \frac{\partial F}{\partial y} = -e^{x - y} + \frac{dA}{dy} = Q(x, y) = 1 - e^{x - y}

    \frac{dA}{dy} = 1

    A(y) = y - C

    F(x, y) = e^{x - y} + y - C = 0

    e^{x - y} + y = C

    The solution for various values of C is plotted in Figure 2.3.
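The exactness test and the implicit solution can both be checked numerically: the mixed partials agree, and stepping along the implicit slope dy/dx = -P/Q leaves F(x, y) unchanged to first order.

```python
import math

P = lambda x, y: math.exp(x - y)
Q = lambda x, y: 1.0 - math.exp(x - y)
F = lambda x, y: math.exp(x - y) + y       # level sets F = C are solutions

x0, y0, h = 0.3, -0.2, 1e-6                # arbitrary sample point
dP_dy = (P(x0, y0 + h) - P(x0, y0 - h)) / (2 * h)
dQ_dx = (Q(x0 + h, y0) - Q(x0 - h, y0)) / (2 * h)

# the implicit slope dy/dx = -P/Q should leave F constant to first order
dydx = -P(x0, y0) / Q(x0, y0)
drift = (F(x0 + h, y0 + h * dydx) - F(x0, y0)) / h
```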

    2.4 Integrating factors

    Sometimes, an equation of the form (2.11) is not exact, but can be made so by multiplication by a function u(x, y), where u is called the integrating factor. It is not always obvious that integrating factors exist; sometimes they do not.

    Example 2.4
    Solve

    \frac{dy}{dx} = \frac{2xy}{x^2 - y^2}

    Separating variables, we get

    (x^2 - y^2) \, dy = 2xy \, dx.


    Figure 2.4: y(x) which solves y' = 2xy/(x^2 - y^2), for C = -3, ..., 3

    This is not exact according to criterion (2.14). It turns out that the integrating factor is y^{-2}, so that on multiplication, we get

    \frac{2x}{y} \, dx - \left( \frac{x^2}{y^2} - 1 \right) dy = 0.

    This can be written as

    d \left( \frac{x^2}{y} + y \right) = 0

    which gives

    x^2 + y^2 = Cy.

    The solution for various values of C is plotted in Figure 2.4.
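The solution family can be checked by solving x^2 + y^2 = Cy for one branch (the value C = 3 below is an arbitrary choice) and differentiating numerically.

```python
import math

# one member of the family x^2 + y^2 = C y (upper branch); C = 3 is arbitrary
C = 3.0
y = lambda x: (C + math.sqrt(C * C - 4.0 * x * x)) / 2.0

x0, h = 0.5, 1e-6
yp = (y(x0 + h) - y(x0 - h)) / (2 * h)            # numerical y'
rhs = 2 * x0 * y(x0) / (x0**2 - y(x0)**2)          # 2xy/(x^2 - y^2)
on_curve = x0**2 + y(x0)**2 - C * y(x0)            # should vanish
```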

    The general first-order linear equation

    \frac{dy(x)}{dx} + P(x) y(x) = Q(x) (2.15)

    with

    y(x_o) = y_o

    can be solved using the integrating factor

    e^{\int_a^x P(s) ds} = e^{F(x) - F(a)}.

    We choose a such that

    F(a) = 0.


    Multiply by the integrating factor and proceed:

    e^{\int_a^x P(s) ds} \frac{dy(x)}{dx} + e^{\int_a^x P(s) ds} P(x) y(x) = e^{\int_a^x P(s) ds} Q(x) (2.16)

    product rule:

    \frac{d}{dx} \left( e^{\int_a^x P(s) ds} y(x) \right) = e^{\int_a^x P(s) ds} Q(x) (2.17)

    replace x by t:

    \frac{d}{dt} \left( e^{\int_a^t P(s) ds} y(t) \right) = e^{\int_a^t P(s) ds} Q(t) (2.18)

    integrate:

    \int_{x_o}^x \frac{d}{dt} \left( e^{\int_a^t P(s) ds} y(t) \right) dt = \int_{x_o}^x e^{\int_a^t P(s) ds} Q(t) \, dt (2.19)

    e^{\int_a^x P(s) ds} y(x) - e^{\int_a^{x_o} P(s) ds} y(x_o) = \int_{x_o}^x e^{\int_a^t P(s) ds} Q(t) \, dt (2.20)

    which yields

    y(x) = e^{-\int_a^x P(s) ds} \left( e^{\int_a^{x_o} P(s) ds} y_o + \int_{x_o}^x e^{\int_a^t P(s) ds} Q(t) \, dt \right). (2.21)
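Formula (2.21) can be evaluated purely numerically, replacing each integral by quadrature; this sketch takes a = x_o so the exponential multiplying y_o reduces to 1. The test equation y' - y = e^{2x}, y(0) = 2 has the closed-form solution e^{2x} + e^x.

```python
import math

def integral(f, lo, hi, m=400):
    # midpoint-rule quadrature of f on [lo, hi]
    h = (hi - lo) / m
    return sum(f(lo + (i + 0.5) * h) for i in range(m)) * h

def solve_linear(P, Q, x0, y0, x):
    # Evaluate formula (2.21) numerically, choosing a = x0 so that
    # exp(integral of P from a to x0) = 1 and the formula simplifies.
    mu = lambda t: math.exp(integral(P, x0, t))   # exp(int_{x0}^t P(s) ds)
    return (y0 + integral(lambda t: mu(t) * Q(t), x0, x)) / mu(x)

# y' - y = e^{2x}, y(0) = 2; closed form is e^{2x} + e^x
approx = solve_linear(lambda t: -1.0, lambda t: math.exp(2 * t), 0.0, 2.0, 1.0)
exact = math.exp(2.0) + math.exp(1.0)
```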

    Example 2.5
    Solve

    y' - y = e^{2x}; \quad y(0) = y_o.

    Here

    P(x) = -1

    or

    P(s) = -1

    \int_a^x P(s) \, ds = \int_a^x (-1) \, ds = -s \Big|_a^x = a - x

    So

    F(\xi) = -\xi

    For F(a) = 0, take a = 0. So the integrating factor is

    e^{\int_a^x P(s) ds} = e^{a - x} = e^{0 - x} = e^{-x}

    Multiplying and rearranging, we get

    e^{-x} \frac{dy(x)}{dx} - e^{-x} y(x) = e^x

    \frac{d}{dx} \left( e^{-x} y(x) \right) = e^x

    \frac{d}{dt} \left( e^{-t} y(t) \right) = e^t

    \int_{x_o = 0}^x \frac{d}{dt} \left( e^{-t} y(t) \right) dt = \int_{x_o = 0}^x e^t \, dt

    e^{-x} y(x) - e^0 y(0) = e^x - e^0

    e^{-x} y(x) - y_o = e^x - 1

    y(x) = e^x (y_o + e^x - 1)

    y(x) = e^{2x} + (y_o - 1) e^x

    The solution for various values of y_o is plotted in Figure 2.5.
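The closed-form answer can be verified directly: it meets the initial condition, and the residual y' - y - e^{2x} vanishes at a sample point (the value y_o = 2 is an arbitrary test choice).

```python
import math

y0 = 2.0                                   # arbitrary initial value
y = lambda x: math.exp(2 * x) + (y0 - 1.0) * math.exp(x)

x0, h = 0.7, 1e-6
yp = (y(x0 + h) - y(x0 - h)) / (2 * h)     # numerical y'
residual = yp - y(x0) - math.exp(2 * x0)   # y' - y - e^{2x}
```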


    Figure 2.5: y(x) which solves y' - y = e^{2x} with y(0) = y_o, for y_o = -2, 0, 2

    2.5 Bernoulli equation

    Some first-order nonlinear equations also have analytical solutions. An example is the Bernoulli^1 equation

    y' + P(x) y = Q(x) y^n (2.22)

    where n \neq 1. Let

    u = y^{1 - n},

    so that

    y = u^{\frac{1}{1 - n}}.

    The derivative is

    y' = \frac{1}{1 - n} u^{\frac{n}{1 - n}} u'.

    Substituting in equation (2.22), we get

    \frac{1}{1 - n} u^{\frac{n}{1 - n}} u' + P(x) u^{\frac{1}{1 - n}} = Q(x) u^{\frac{n}{1 - n}}.

    This can be written as

    u' + (1 - n) P(x) u = (1 - n) Q(x) (2.23)

    which is a first-order linear equation of the form (2.15) and can be solved.

    ^1 after one of the members of the prolific Bernoulli family.
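The substitution can be exercised on a small test case; the equation y' + y = y^2 below is chosen for illustration and does not appear in the text.

```python
import math

# Illustrative Bernoulli equation (assumed, not from the text):
# y' + y = y^2, i.e. P = 1, Q = 1, n = 2.  Then u = y^{1-n} = 1/y
# satisfies u' - u = -1 by (2.23), so u = 1 + C e^x and y = 1/(1 + C e^x).
C = 0.5                                    # arbitrary integration constant
y = lambda x: 1.0 / (1.0 + C * math.exp(x))

x0, h = 0.3, 1e-6
yp = (y(x0 + h) - y(x0 - h)) / (2 * h)     # numerical y'
residual = yp + y(x0) - y(x0) ** 2         # y' + y - y^2 should vanish
```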


    Figure 2.6: y(x), for various values of C, which solves y' = e^{3x} x - y/x + 3 e^{3x}

    which is an equation of first order.

    Example 2.7
    Solve

    x y'' + 2y' = 4x^3.

    Let u = y', so that

    x \frac{du}{dx} + 2u = 4x^3.

    Multiplying by x,

    x^2 \frac{du}{dx} + 2xu = 4x^4

    \frac{d}{dx} (x^2 u) = 4x^4.

    This can be integrated to give

    u = \frac{4}{5} x^3 + \frac{C_1}{x^2}

    from which

    y = \frac{1}{5} x^4 - \frac{C_1}{x} + C_2

    for x \neq 0.
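A finite-difference check confirms that the two-constant family satisfies the original second-order equation (the constants below are arbitrary test values).

```python
C1, C2 = 2.0, -1.0                         # arbitrary integration constants
y = lambda x: x**4 / 5.0 - C1 / x + C2

x0, h = 1.5, 1e-4
yp = (y(x0 + h) - y(x0 - h)) / (2 * h)                 # numerical y'
ypp = (y(x0 + h) - 2 * y(x0) + y(x0 - h)) / h**2       # numerical y''
residual = x0 * ypp + 2 * yp - 4 * x0**3               # x y'' + 2 y' - 4 x^3
```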

    2.7.2 x absent

    If

    f(y, y', y'') = 0 (2.34)

    let u(x) = y', so that

    y'' = \frac{dy'}{dx} = \frac{dy'}{dy} \frac{dy}{dx} = \frac{du}{dy} u

    The equation becomes

    f \left( y, u, u \frac{du}{dy} \right) = 0 (2.35)

    which is also an equation of first order. Note however that the independent variable is now y while the dependent variable is u.

    Example 2.8
    Solve

    y'' - 2y y' = 0; \quad y(0) = y_o, \ y'(0) = y'_o.

    Let u = y', so that y'' = \frac{du}{dx} = \frac{dy}{dx} \frac{du}{dy} = u \frac{du}{dy}. The equation becomes

    u \frac{du}{dy} - 2yu = 0

    Now

    u = 0

    satisfies the equation. Thus

    \frac{dy}{dx} = 0

    y = C

    Applying one initial condition: y = y_o. This satisfies the initial conditions only under special circumstances, i.e. y'_o = 0. For u \neq 0,

    \frac{du}{dy} = 2y

    u = y^2 + C_1

    apply I.C.s: y'_o = y_o^2 + C_1

    C_1 = y'_o - y_o^2

    \frac{dy}{dx} = y^2 + y'_o - y_o^2

    \frac{dy}{y^2 + y'_o - y_o^2} = dx

    from which, for y'_o - y_o^2 > 0,

    \frac{1}{\sqrt{y'_o - y_o^2}} \tan^{-1} \frac{y}{\sqrt{y'_o - y_o^2}} = x + C_2

    \frac{1}{\sqrt{y'_o - y_o^2}} \tan^{-1} \frac{y_o}{\sqrt{y'_o - y_o^2}} = C_2

    y(x) = \sqrt{y'_o - y_o^2} \, \tan \left( x \sqrt{y'_o - y_o^2} + \tan^{-1} \frac{y_o}{\sqrt{y'_o - y_o^2}} \right)

    The solution for y_o = 0, y'_o = 1 is plotted in Figure 2.7.
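For the plotted case y_o = 0, y'_o = 1, the general solution collapses to y = tan(x); a finite-difference check confirms it satisfies y'' - 2y y' = 0.

```python
import math

# with y(0) = 0, y'(0) = 1 the general solution collapses to y = tan x
y = math.tan
x0, h = 0.4, 1e-4
yp = (y(x0 + h) - y(x0 - h)) / (2 * h)                 # numerical y'
ypp = (y(x0 + h) - 2 * y(x0) + y(x0 - h)) / h**2       # numerical y''
residual = ypp - 2 * y(x0) * yp                        # y'' - 2 y y'
```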


    Figure 2.8: Two solutions y(x) which satisfy y' = 3y^{2/3} with y(2) = 0

    2.9 Clairaut equation

    The solution of a Clairaut^3 equation

    y = x y' + f(y') (2.36)

    can be obtained by letting y' = u(x), so that

    y = x u + f(u). (2.37)

    Differentiating with respect to x, we get

    y' = x u' + u + \frac{df}{du} u' (2.38)

    u = x u' + u + \frac{df}{du} u' (2.39)

    u' \left( x + \frac{df}{du} \right) = 0. (2.40)

    If we take

    u' = \frac{du}{dx} = 0, (2.41)

    we can integrate to get

    u = C (2.42)

    ^3 Alexis Claude Clairaut, 1713-1765, Parisian/French mathematician.
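Substituting u = C back into (2.37) gives the line family y = Cx + f(C). This can be checked on a small test case; the choice f(y') = (y')^2 below is an assumed illustration, not from the text.

```python
# Clairaut equation with an assumed f(y') = (y')^2: substituting u = C
# back into y = x u + f(u) gives the line family y = C x + C^2.
C = 1.5                                    # arbitrary member of the family
y = lambda x: C * x + C * C

x0, h = 0.8, 1e-6
yp = (y(x0 + h) - y(x0 - h)) / (2 * h)     # equals C up to roundoff
residual = y(x0) - (x0 * yp + yp ** 2)     # y - (x y' + (y')^2)
```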


    (h) y' = \frac{x + 2y - 5}{-2x - y + 4}

    (i) y' + xy = y

    Plot solutions for y(0) = -1, 0, 1 (except for part e).

    10. Find all solutions of

    (x + 1) (y')^2 + (x - y) y' - y = 0

    11. Find an a for which a unique solution of

    (y')^4 + 8 (y')^3 + (3a + 16) (y')^2 + 12a y' + 2a^2 = 0, \quad \text{with } y(1) = 2

    exists. Find the solution.

    12. Solve

    y' - \frac{1}{x^2} y^2 + \frac{1}{x} y = 1
