Introduction to Methods of Applied Mathematics
or
Advanced Mathematical Methods for Scientists and Engineers
Sean Mauch
http://www.its.caltech.edu/~sean
January 24, 2004
Contents
Anti-Copyright xv
Preface xvii
0.1 Advice to Teachers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii
0.2 Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii
0.3 Warnings and Disclaimers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii
0.4 Suggested Use . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xviii
0.5 About the Title . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xviii
I Algebra 1
1 Sets and Functions 3
1.1 Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2 Single Valued Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.3 Inverses and Multi-Valued Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.4 Transforming Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.6 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.7 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2 Vectors 17
2.1 Vectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.1.1 Scalars and Vectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.1.2 The Kronecker Delta and Einstein Summation Convention . . . . . . . . . . . 19
2.1.3 The Dot and Cross Product . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.2 Sets of Vectors in n Dimensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.3 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.4 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.5 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
II Calculus 31
3 Differential Calculus 33
3.1 Limits of Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
3.2 Continuous Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
3.3 The Derivative . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
3.4 Implicit Differentiation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
3.5 Maxima and Minima . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
3.6 Mean Value Theorems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
3.6.1 Application: Using Taylor’s Theorem to Approximate Functions. . . . . . . . 45
3.6.2 Application: Finite Difference Schemes . . . . . . . . . . . . . . . . . . . . . . 47
3.7 L’Hospital’s Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
3.8 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
3.8.1 Limits of Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
3.8.2 Continuous Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
3.8.3 The Derivative . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
3.8.4 Implicit Differentiation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
3.8.5 Maxima and Minima . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
3.8.6 Mean Value Theorems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
3.8.7 L’Hospital’s Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
3.9 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
3.10 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
3.11 Quiz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
3.12 Quiz Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
4 Integral Calculus 75
4.1 The Indefinite Integral . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
4.2 The Definite Integral . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
4.2.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
4.2.2 Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
4.3 The Fundamental Theorem of Integral Calculus . . . . . . . . . . . . . . . . . . . . . 80
4.4 Techniques of Integration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
4.4.1 Partial Fractions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
4.5 Improper Integrals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
4.6 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
4.6.1 The Indefinite Integral . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
4.6.2 The Definite Integral . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
4.6.3 The Fundamental Theorem of Integration . . . . . . . . . . . . . . . . . . . . 86
4.6.4 Techniques of Integration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
4.6.5 Improper Integrals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
4.7 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
4.8 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
4.9 Quiz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
4.10 Quiz Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
5 Vector Calculus 99
5.1 Vector Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
5.2 Gradient, Divergence and Curl . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
5.3 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
5.4 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
5.5 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
5.6 Quiz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
5.7 Quiz Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
III Functions of a Complex Variable 117
6 Complex Numbers 119
6.1 Complex Numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
6.2 The Complex Plane . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
6.3 Polar Form . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
6.4 Arithmetic and Vectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
6.5 Integer Exponents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
6.6 Rational Exponents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
6.7 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
6.8 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
6.9 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
7 Functions of a Complex Variable 153
7.1 Curves and Regions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
7.2 The Point at Infinity and the Stereographic Projection . . . . . . . . . . . . . . . . . 155
7.3 A Gentle Introduction to Branch Points . . . . . . . . . . . . . . . . . . . . . . . . . 157
7.4 Cartesian and Modulus-Argument Form . . . . . . . . . . . . . . . . . . . . . . . . . 157
7.5 Graphing Functions of a Complex Variable . . . . . . . . . . . . . . . . . . . . . . . 159
7.6 Trigonometric Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
7.7 Inverse Trigonometric Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
7.8 Riemann Surfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
7.9 Branch Points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
7.10 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
7.11 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
7.12 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
8 Analytic Functions 223
8.1 Complex Derivatives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
8.2 Cauchy-Riemann Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
8.3 Harmonic Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230
8.4 Singularities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
8.4.1 Categorization of Singularities . . . . . . . . . . . . . . . . . . . . . . . . . . 233
8.4.2 Isolated and Non-Isolated Singularities . . . . . . . . . . . . . . . . . . . . . . 235
8.5 Application: Potential Flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
8.6 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239
8.7 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
8.8 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
9 Analytic Continuation 269
9.1 Analytic Continuation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
9.2 Analytic Continuation of Sums . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
9.3 Analytic Functions Defined in Terms of Real Variables . . . . . . . . . . . . . . . . . 271
9.3.1 Polar Coordinates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
9.3.2 Analytic Functions Defined in Terms of Their Real or Imaginary Parts . . . . 276
9.4 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
9.5 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 280
9.6 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
10 Contour Integration and the Cauchy-Goursat Theorem 285
10.1 Line Integrals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285
10.2 Contour Integrals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286
10.2.1 Maximum Modulus Integral Bound . . . . . . . . . . . . . . . . . . . . . . . . 287
10.3 The Cauchy-Goursat Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288
10.4 Contour Deformation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
10.5 Morera’s Theorem. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290
10.6 Indefinite Integrals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
10.7 Fundamental Theorem of Calculus via Primitives . . . . . . . . . . . . . . . . . . . . 292
10.7.1 Line Integrals and Primitives . . . . . . . . . . . . . . . . . . . . . . . . . . . 292
10.7.2 Contour Integrals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 292
10.8 Fundamental Theorem of Calculus via Complex Calculus . . . . . . . . . . . . . . . 292
10.9 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295
10.10 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
10.11 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 298
11 Cauchy’s Integral Formula 305
11.1 Cauchy’s Integral Formula . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305
11.2 The Argument Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
11.3 Rouché's Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311
11.4 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 312
11.5 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315
11.6 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316
12 Series and Convergence 325
12.1 Series of Constants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325
12.1.1 Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325
12.1.2 Special Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 326
12.1.3 Convergence Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327
12.2 Uniform Convergence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
12.2.1 Tests for Uniform Convergence . . . . . . . . . . . . . . . . . . . . . . . . . . 332
12.2.2 Uniform Convergence and Continuous Functions. . . . . . . . . . . . . . . . . 333
12.3 Uniformly Convergent Power Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333
12.4 Integration and Differentiation of Power Series . . . . . . . . . . . . . . . . . . . . . 337
12.5 Taylor Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
12.5.1 Newton’s Binomial Formula. . . . . . . . . . . . . . . . . . . . . . . . . . . . 341
12.6 Laurent Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342
12.7 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344
12.7.1 Series of Constants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344
12.7.2 Uniform Convergence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
12.7.3 Uniformly Convergent Power Series . . . . . . . . . . . . . . . . . . . . . . . . 347
12.7.4 Integration and Differentiation of Power Series . . . . . . . . . . . . . . . . . 349
12.7.5 Taylor Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
12.7.6 Laurent Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 351
12.8 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 353
12.9 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 358
13 The Residue Theorem 383
13.1 The Residue Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 383
13.2 Cauchy Principal Value for Real Integrals . . . . . . . . . . . . . . . . . . . . . . . . 387
13.2.1 The Cauchy Principal Value . . . . . . . . . . . . . . . . . . . . . . . . . . . . 387
13.3 Cauchy Principal Value for Contour Integrals . . . . . . . . . . . . . . . . . . . . . . 390
13.4 Integrals on the Real Axis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 393
13.5 Fourier Integrals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395
13.6 Fourier Cosine and Sine Integrals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 397
13.7 Contour Integration and Branch Cuts . . . . . . . . . . . . . . . . . . . . . . . . . . 398
13.8 Exploiting Symmetry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 400
13.8.1 Wedge Contours . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 400
13.8.2 Box Contours . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 402
13.9 Definite Integrals Involving Sine and Cosine . . . . . . . . . . . . . . . . . . . . . . . 403
13.10 Infinite Sums . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 404
13.11 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 407
13.12 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 416
13.13 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 420
IV Ordinary Differential Equations 471
14 First Order Differential Equations 473
14.1 Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 473
14.2 Example Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 474
14.2.1 Growth and Decay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 474
14.3 One Parameter Families of Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . 475
14.4 Integrable Forms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 477
14.4.1 Separable Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 477
14.4.2 Exact Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 478
14.4.3 Homogeneous Coefficient Equations . . . . . . . . . . . . . . . . . . . . . . . 480
14.5 The First Order, Linear Differential Equation . . . . . . . . . . . . . . . . . . . . . . 483
14.5.1 Homogeneous Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 483
14.5.2 Inhomogeneous Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 484
14.5.3 Variation of Parameters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 485
14.6 Initial Conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 486
14.6.1 Piecewise Continuous Coefficients and Inhomogeneities . . . . . . . . . . . . . 486
14.7 Well-Posed Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 489
14.8 Equations in the Complex Plane . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 490
14.8.1 Ordinary Points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 490
14.8.2 Regular Singular Points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 492
14.8.3 Irregular Singular Points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 495
14.8.4 The Point at Infinity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 496
14.9 Additional Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 498
14.10 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 500
14.11 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 502
14.12 Quiz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 513
14.13 Quiz Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 514
15 First Order Linear Systems of Differential Equations 515
15.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 515
15.2 Using Eigenvalues and Eigenvectors to find Homogeneous Solutions . . . . . . . . . . 515
15.3 Matrices and Jordan Canonical Form . . . . . . . . . . . . . . . . . . . . . . . . . . . 518
15.4 Using the Matrix Exponential . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 522
15.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 526
15.6 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 529
15.7 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 530
16 Theory of Linear Ordinary Differential Equations 547
16.1 Exact Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 547
16.2 Nature of Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 548
16.3 Transformation to a First Order System . . . . . . . . . . . . . . . . . . . . . . . . . 550
16.4 The Wronskian . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 550
16.4.1 Derivative of a Determinant. . . . . . . . . . . . . . . . . . . . . . . . . . . . 550
16.4.2 The Wronskian of a Set of Functions. . . . . . . . . . . . . . . . . . . . . . . 551
16.4.3 The Wronskian of the Solutions to a Differential Equation . . . . . . . . . . . 552
16.5 Well-Posed Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 554
16.6 The Fundamental Set of Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 555
16.7 Adjoint Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 556
16.8 Additional Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 559
16.9 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 560
16.10 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 561
16.11 Quiz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 565
16.12 Quiz Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 566
17 Techniques for Linear Differential Equations 567
17.1 Constant Coefficient Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 567
17.1.1 Second Order Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 567
17.1.2 Real-Valued Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 570
17.1.3 Higher Order Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 571
17.2 Euler Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 573
17.2.1 Real-Valued Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 574
17.3 Exact Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 576
17.4 Equations Without Explicit Dependence on y . . . . . . . . . . . . . . . . . . . . . . 577
17.5 Reduction of Order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 577
17.6 *Reduction of Order and the Adjoint Equation . . . . . . . . . . . . . . . . . . . . . 578
17.7 Additional Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 580
17.8 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 584
17.9 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 586
18 Techniques for Nonlinear Differential Equations 601
18.1 Bernoulli Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 601
18.2 Riccati Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 602
18.3 Exchanging the Dependent and Independent Variables . . . . . . . . . . . . . . . . . 604
18.4 Autonomous Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 605
18.5 *Equidimensional-in-x Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 607
18.6 *Equidimensional-in-y Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 608
18.7 *Scale-Invariant Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 610
18.8 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 611
18.9 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 613
18.10 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 614
19 Transformations and Canonical Forms 621
19.1 The Constant Coefficient Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 621
19.2 Normal Form . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 623
19.2.1 Second Order Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 623
19.2.2 Higher Order Differential Equations . . . . . . . . . . . . . . . . . . . . . . . 624
19.3 Transformations of the Independent Variable . . . . . . . . . . . . . . . . . . . . . . 624
19.3.1 Transformation to the form u″ + a(x) u = 0 . . . . . . . . . . . . . . . . . . 624
19.3.2 Transformation to a Constant Coefficient Equation . . . . . . . . . . . . . . . 625
19.4 Integral Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 626
19.4.1 Initial Value Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 626
19.4.2 Boundary Value Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 628
19.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 630
19.6 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 632
19.7 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 633
20 The Dirac Delta Function 637
20.1 Derivative of the Heaviside Function . . . . . . . . . . . . . . . . . . . . . . . . . . . 637
20.2 The Delta Function as a Limit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 638
20.3 Higher Dimensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 639
20.4 Non-Rectangular Coordinate Systems . . . . . . . . . . . . . . . . . . . . . . . . . . 639
20.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 641
20.6 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 643
20.7 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 644
21 Inhomogeneous Differential Equations 649
21.1 Particular Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 649
21.2 Method of Undetermined Coefficients . . . . . . . . . . . . . . . . . . . . . . . . . . . 650
21.3 Variation of Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 652
21.3.1 Second Order Differential Equations . . . . . . . . . . . . . . . . . . . . . . . 652
21.3.2 Higher Order Differential Equations . . . . . . . . . . . . . . . . . . . . . . . 654
21.4 Piecewise Continuous Coefficients and Inhomogeneities . . . . . . . . . . . . . . . . . 656
21.5 Inhomogeneous Boundary Conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . 658
21.5.1 Eliminating Inhomogeneous Boundary Conditions . . . . . . . . . . . . . . . 658
21.5.2 Separating Inhomogeneous Equations and Inhomogeneous Boundary Conditions 659
21.5.3 Existence of Solutions of Problems with Inhomogeneous Boundary Conditions 659
21.6 Green Functions for First Order Equations . . . . . . . . . . . . . . . . . . . . . . . 661
21.7 Green Functions for Second Order Equations . . . . . . . . . . . . . . . . . . . . . . 662
21.7.1 Green Functions for Sturm-Liouville Problems . . . . . . . . . . . . . . . . . 668
21.7.2 Initial Value Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 670
21.7.3 Problems with Unmixed Boundary Conditions . . . . . . . . . . . . . . . . . 671
21.7.4 Problems with Mixed Boundary Conditions . . . . . . . . . . . . . . . . . . . 672
21.8 Green Functions for Higher Order Problems . . . . . . . . . . . . . . . . . . . . . . . 674
21.9 Fredholm Alternative Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 677
21.10 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 682
21.11 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 686
21.12 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 688
21.13 Quiz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 710
21.14 Quiz Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 711
22 Difference Equations 713
22.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 713
22.2 Exact Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 714
22.3 Homogeneous First Order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 715
22.4 Inhomogeneous First Order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 716
22.5 Homogeneous Constant Coefficient Equations . . . . . . . . . . . . . . . . . . . . . . 717
22.6 Reduction of Order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 719
22.7 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 721
22.8 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 722
22.9 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 723
23 Series Solutions of Differential Equations 725
23.1 Ordinary Points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 725
23.1.1 Taylor Series Expansion for a Second Order Differential Equation . . . . . . . 728
23.2 Regular Singular Points of Second Order Equations . . . . . . . . . . . . . . . . . . . 733
23.2.1 Indicial Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 735
23.2.2 The Case: Double Root . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 736
23.2.3 The Case: Roots Differ by an Integer . . . . . . . . . . . . . . . . . . . . . . 738
23.3 Irregular Singular Points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 743
23.4 The Point at Infinity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 743
23.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 745
23.6 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 748
23.7 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 749
23.8 Quiz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 763
23.9 Quiz Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 764
24 Asymptotic Expansions 765
24.1 Asymptotic Relations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 765
24.2 Leading Order Behavior of Differential Equations . . . . . . . . . . . . . . . . . . . . 767
24.3 Integration by Parts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 772
24.4 Asymptotic Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 777
24.5 Asymptotic Expansions of Differential Equations . . . . . . . . . . . . . . . . . . . . 777
24.5.1 The Parabolic Cylinder Equation. . . . . . . . . . . . . . . . . . . . . . . . . 777
25 Hilbert Spaces 781
25.1 Linear Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 781
25.2 Inner Products . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 782
25.3 Norms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 783
25.4 Linear Independence. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 784
25.5 Orthogonality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 784
25.6 Gram-Schmidt Orthogonalization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 784
25.7 Orthonormal Function Expansion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 786
25.8 Sets Of Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 787
25.9 Least Squares Fit to a Function and Completeness . . . . . . . . . . . . . . . . . . . 790
25.10 Closure Relation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 792
25.11 Linear Operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 795
25.12 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 796
25.13 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 797
25.14 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 798
26 Self Adjoint Linear Operators 799
26.1 Adjoint Operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 799
26.2 Self-Adjoint Operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 799
26.3 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 801
26.4 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 802
26.5 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 803
27 Self-Adjoint Boundary Value Problems 805
27.1 Summary of Adjoint Operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 805
27.2 Formally Self-Adjoint Operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 806
27.3 Self-Adjoint Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 807
27.4 Self-Adjoint Eigenvalue Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 808
27.5 Inhomogeneous Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 811
27.6 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 813
27.7 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 814
27.8 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 815
28 Fourier Series 817
28.1 An Eigenvalue Problem. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 817
28.2 Fourier Series. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 819
28.3 Least Squares Fit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 821
28.4 Fourier Series for Functions Defined on Arbitrary Ranges . . . . . . . . . . . . . . . 824
28.5 Fourier Cosine Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 826
28.6 Fourier Sine Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 827
28.7 Complex Fourier Series and Parseval’s Theorem . . . . . . . . . . . . . . . . . . . . . 828
28.8 Behavior of Fourier Coefficients . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 829
28.9 Gibbs Phenomenon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 835
28.10 Integrating and Differentiating Fourier Series . . . . . . . . . . . . . . . . . . . . . 835
28.11 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 838
28.12 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 843
28.13 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 845
29 Regular Sturm-Liouville Problems 873
29.1 Derivation of the Sturm-Liouville Form . . . . . . . . . . . . . . . . . . . . . . . . . 873
29.2 Properties of Regular Sturm-Liouville Problems . . . . . . . . . . . . . . . . . . . . . 874
29.3 Solving Differential Equations With Eigenfunction Expansions . . . . . . . . . . . . 881
29.4 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 885
29.5 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 888
29.6 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 889
30 Integrals and Convergence 905
30.1 Uniform Convergence of Integrals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 905
30.2 The Riemann-Lebesgue Lemma . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 906
30.3 Cauchy Principal Value . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 906
30.3.1 Integrals on an Infinite Domain . . . . . . . . . . . . . . . . . . . . . . . . . . 906
30.3.2 Singular Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 907
31 The Laplace Transform 909
31.1 The Laplace Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 909
31.2 The Inverse Laplace Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 910
31.2.1 f̂(s) with Poles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 912
31.2.2 f̂(s) with Branch Points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 914
31.2.3 Asymptotic Behavior of f̂(s) . . . . . . . . . . . . . . . . . . . . . . . . . . . 916
31.3 Properties of the Laplace Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . 917
31.4 Constant Coefficient Differential Equations . . . . . . . . . . . . . . . . . . . . . . . 919
31.5 Systems of Constant Coefficient Differential Equations . . . . . . . . . . . . . . . . . 920
31.6 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 922
31.7 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 926
31.8 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 928
32 The Fourier Transform 947
32.1 Derivation from a Fourier Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 947
32.2 The Fourier Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 948
32.2.1 A Word of Caution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 949
32.3 Evaluating Fourier Integrals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 950
32.3.1 Integrals that Converge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 950
32.3.2 Cauchy Principal Value and Integrals that are Not Absolutely Convergent. . 952
32.3.3 Analytic Continuation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 953
32.4 Properties of the Fourier Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . 954
32.4.1 Closure Relation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 954
32.4.2 Fourier Transform of a Derivative. . . . . . . . . . . . . . . . . . . . . . . . . 955
32.4.3 Fourier Convolution Theorem. . . . . . . . . . . . . . . . . . . . . . . . . . . 955
32.4.4 Parseval’s Theorem. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 957
32.4.5 Shift Property. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 958
32.4.6 Fourier Transform of x f(x). . . . . . . . . . . . . . . . . . . . . . . . . . . . . 959
32.5 Solving Differential Equations with the Fourier Transform . . . . . . . . . . . . . . . 959
32.6 The Fourier Cosine and Sine Transform . . . . . . . . . . . . . . . . . . . . . . . . . 960
32.6.1 The Fourier Cosine Transform . . . . . . . . . . . . . . . . . . . . . . . . . . 960
32.6.2 The Fourier Sine Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . 961
32.7 Properties of the Fourier Cosine and Sine Transform . . . . . . . . . . . . . . . . . . 962
32.7.1 Transforms of Derivatives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 962
32.7.2 Convolution Theorems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 962
32.7.3 Cosine and Sine Transform in Terms of the Fourier Transform . . . . . . . . 964
32.8 Solving Differential Equations with the Fourier Cosine and Sine Transforms . . . . . 965
32.9 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 966
32.10 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 970
32.11 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 972
33 The Gamma Function 987
33.1 Euler’s Formula . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 987
33.2 Hankel’s Formula . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 988
33.3 Gauss’ Formula . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 989
33.4 Weierstrass’ Formula . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 990
33.5 Stirling’s Approximation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 991
33.6 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 995
33.7 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 996
33.8 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 997
34 Bessel Functions 999
34.1 Bessel’s Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 999
34.2 Frobenius Series Solution about z = 0 . . . . . . . . . . . . . . . . . . . . . . . . . . 999
34.2.1 Behavior at Infinity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1001
34.3 Bessel Functions of the First Kind . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1003
34.3.1 The Bessel Function Satisfies Bessel’s Equation . . . . . . . . . . . . . . . . . 1003
34.3.2 Series Expansion of the Bessel Function . . . . . . . . . . . . . . . . . . . . . 1004
34.3.3 Bessel Functions of Non-Integer Order . . . . . . . . . . . . . . . . . . . . . . 1005
34.3.4 Recursion Formulas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1007
34.3.5 Bessel Functions of Half-Integer Order . . . . . . . . . . . . . . . . . . . . . . 1009
34.4 Neumann Expansions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1010
34.5 Bessel Functions of the Second Kind . . . . . . . . . . . . . . . . . . . . . . . . . . . 1012
34.6 Hankel Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1013
34.7 The Modified Bessel Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1013
34.8 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1016
34.9 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1019
34.10 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1020
V Partial Differential Equations 1033
35 Transforming Equations 1035
35.1 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1036
35.2 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1037
35.3 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1038
36 Classification of Partial Differential Equations 1039
36.1 Classification of Second Order Quasi-Linear Equations . . . . . . . . . . . . . . . . . 1039
36.1.1 Hyperbolic Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1040
36.1.2 Parabolic equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1043
36.1.3 Elliptic Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1043
36.2 Equilibrium Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1044
36.3 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1046
36.4 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1047
36.5 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1048
37 Separation of Variables 1051
37.1 Eigensolutions of Homogeneous Equations . . . . . . . . . . . . . . . . . . . . . . . . 1051
37.2 Homogeneous Equations with Homogeneous Boundary Conditions . . . . . . . . . . 1051
37.3 Time-Independent Sources and Boundary Conditions . . . . . . . . . . . . . . . . . . 1052
37.4 Inhomogeneous Equations with Homogeneous Boundary Conditions . . . . . . . . . 1054
37.5 Inhomogeneous Boundary Conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . 1055
37.6 The Wave Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1056
37.7 General Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1058
37.8 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1059
37.9 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1069
37.10 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1072
38 Finite Transforms 1119
38.1 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1121
38.2 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1122
38.3 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1123
39 The Diffusion Equation 1127
39.1 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1128
39.2 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1129
39.3 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1130
40 Laplace’s Equation 1135
40.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1135
40.2 Fundamental Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1135
40.2.1 Two Dimensional Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1135
40.3 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1136
40.4 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1138
40.5 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1139
41 Waves 1147
41.1 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1148
41.2 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1152
41.3 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1154
42 Similarity Methods 1167
42.1 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1170
42.2 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1171
42.3 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1172
43 Method of Characteristics 1175
43.1 First Order Linear Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1175
43.2 First Order Quasi-Linear Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . 1176
43.3 The Method of Characteristics and the Wave Equation . . . . . . . . . . . . . . . . . 1176
43.4 The Wave Equation for an Infinite Domain . . . . . . . . . . . . . . . . . . . . . . . 1177
43.5 The Wave Equation for a Semi-Infinite Domain . . . . . . . . . . . . . . . . . . . . . 1178
43.6 The Wave Equation for a Finite Domain . . . . . . . . . . . . . . . . . . . . . . . . . 1179
43.7 Envelopes of Curves . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1180
43.8 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1182
43.9 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1183
43.10 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1184
44 Transform Methods 1189
44.1 Fourier Transform for Partial Differential Equations . . . . . . . . . . . . . . . . . . 1189
44.2 The Fourier Sine Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1190
44.3 Fourier Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1190
44.4 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1192
44.5 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1195
44.6 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1197
45 Green Functions 1211
45.1 Inhomogeneous Equations and Homogeneous Boundary Conditions . . . . . . . . . . 1211
45.2 Homogeneous Equations and Inhomogeneous Boundary Conditions . . . . . . . . . . 1211
45.3 Eigenfunction Expansions for Elliptic Equations . . . . . . . . . . . . . . . . . . . . . 1213
45.4 The Method of Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1215
45.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1217
45.6 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1224
45.7 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1226
46 Conformal Mapping 1261
46.1 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1262
46.2 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1264
46.3 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1265
47 Non-Cartesian Coordinates 1273
47.1 Spherical Coordinates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1273
47.2 Laplace’s Equation in a Disk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1273
47.3 Laplace’s Equation in an Annulus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1275
VI Calculus of Variations 1279
48 Calculus of Variations 1281
48.1 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1282
48.2 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1291
48.3 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1294
VII Nonlinear Differential Equations 1345
49 Nonlinear Ordinary Differential Equations 1347
49.1 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1348
49.2 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1351
49.3 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1352
50 Nonlinear Partial Differential Equations 1365
50.1 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1366
50.2 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1368
50.3 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1369
VIII Appendices 1381
A Greek Letters 1383
B Notation 1385
C Formulas from Complex Variables 1387
D Table of Derivatives 1389
E Table of Integrals 1391
F Definite Integrals 1393
G Table of Sums 1395
H Table of Taylor Series 1397
I Continuous Transforms 1399
I.1 Properties of Laplace Transforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1399
I.2 Table of Laplace Transforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1401
I.3 Table of Fourier Transforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1403
I.4 Table of Fourier Transforms in n Dimensions . . . . . . . . . . . . . . . . . . . . . . 1405
I.5 Table of Fourier Cosine Transforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1406
I.6 Table of Fourier Sine Transforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1407
J Table of Wronskians 1409
K Sturm-Liouville Eigenvalue Problems 1411
L Green Functions for Ordinary Differential Equations 1413
M Trigonometric Identities 1415
M.1 Circular Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1415
M.2 Hyperbolic Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1416
N Bessel Functions 1419
N.1 Definite Integrals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1419
O Formulas from Linear Algebra 1421
P Vector Analysis 1423
Q Partial Fractions 1425
R Finite Math 1427
S Physics 1429
T Probability 1431
T.1 Independent Events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1431
T.2 Playing the Odds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1431
U Economics 1433
V Glossary 1435
W whoami 1437
Anti-Copyright
Anti-Copyright @ 1995-2001 by Mauch Publishing Company, un-Incorporated.
No rights reserved. Any part of this publication may be reproduced, stored in a retrieval system,
transmitted or desecrated without permission.
Preface
During the summer before my final undergraduate year at Caltech I set out to write a math text
unlike any other, namely, one written by me. In that respect I have succeeded beautifully. Unfor-
tunately, the text is neither complete nor polished. I have a “Warnings and Disclaimers” section
below that is a little amusing, and an appendix on probability that I feel concisely captures the
essence of the subject. However, all the material in between is in some stage of development. I am
currently working to improve and expand this text.
This text is freely available from my web site. Currently I'm at http://www.its.caltech.edu/~sean.
I post new versions a couple of times a year.
0.1 Advice to Teachers
If you have something worth saying, write it down.
0.2 Acknowledgments
I would like to thank Professor Saffman for advising me on this project and the Caltech SURF
program for providing the funding for me to write the first edition of this book.
0.3 Warnings and Disclaimers
• This book is a work in progress. It contains quite a few mistakes and typos. I would greatly
appreciate your constructive criticism. You can reach me at ‘sean@caltech.edu’.
• Reading this book impairs your ability to drive a car or operate machinery.
• This book has been found to cause drowsiness in laboratory animals.
• This book contains twenty-three times the US RDA of fiber.
• Caution: FLAMMABLE - Do not read while smoking or near a fire.
• If infection, rash, or irritation develops, discontinue use and consult a physician.
• Warning: For external use only. Use only as directed. Intentional misuse by deliberately
concentrating contents can be harmful or fatal. KEEP OUT OF REACH OF CHILDREN.
• In the unlikely event of a water landing do not use this book as a flotation device.
• The material in this text is fiction; any resemblance to real theorems, living or dead, is purely
coincidental.
• This is by far the most amusing section of this book.
• Finding the typos and mistakes in this book is left as an exercise for the reader. (Eye ewes
a spelling chequer from thyme too thyme, sew their should knot bee two many misspellings.
Though I ain’t so sure the grammar’s too good.)
• The theorems and methods in this text are subject to change without notice.
• This is a chain book. If you do not make seven copies and distribute them to your friends
within ten days of obtaining this text you will suffer great misfortune and other nastiness.
• The surgeon general has determined that excessive studying is detrimental to your social life.
• This text has been buffered for your protection and ribbed for your pleasure.
• Stop reading this rubbish and get back to work!
0.4 Suggested Use
This text is well suited to the student, professional or lay-person. It makes a superb gift. This text
has a bouquet that is light and fruity, with some earthy undertones. It is ideal with dinner or as an
aperitif. Bon appétit!
0.5 About the Title
The title is only making light of naming conventions in the sciences and is not an insult to engineers.
If you want to learn about some mathematical subject, look for books with “Introduction” or
“Elementary” in the title. If it is an “Intermediate” text it will be incomprehensible. If it is
“Advanced” then not only will it be incomprehensible, it will have low production qualities, i.e. a
crappy typewriter font, no graphics and no examples. There is an exception to this rule: When the
title also contains the word “Scientists” or “Engineers” the advanced book may be quite suitable for
actually learning the material.
Part I
Algebra
Chapter 1
Sets and Functions
1.1 Sets
Definition. A set is a collection of objects. We call the objects elements. A set is denoted by
listing the elements between braces. For example: {e, ı, π, 1} is the set of the integer 1, the pure
imaginary number ı = √−1 and the transcendental numbers e = 2.7182818 . . . and π = 3.1415926 . . ..
For elements of a set, we do not count multiplicities. We regard the set {1, 2, 2, 3, 3, 3} as identical
to the set {1, 2, 3}. Order is not significant in sets. The set {1, 2, 3} is equivalent to {3, 2, 1}.
In enumerating the elements of a set, we use ellipses to indicate patterns. We denote the set of
positive integers as {1, 2, 3, . . .}. We also denote sets with the notation {x|conditions on x} for sets
that are more easily described than enumerated. This is read as “the set of elements x such that
. . . ”. x ∈ S is the notation for “x is an element of the set S.” To express the opposite we have
x ∉ S for “x is not an element of the set S.”
Examples. We have notations for denoting some of the commonly encountered sets.
• ∅ = {} is the empty set, the set containing no elements.
• Z = {. . . , −3, −2, −1, 0, 1, 2, 3 . . .} is the set of integers. (Z is for “Zahlen”, the German word
for “number”.)
• Q = {p/q | p, q ∈ Z, q ≠ 0} is the set of rational numbers. (Q is for quotient.)¹
• R = {x | x = a₁a₂ · · · aₙ.b₁b₂ · · · } is the set of real numbers, i.e. the set of numbers with decimal
expansions.²
• C = {a + ıb | a, b ∈ R, ı² = −1} is the set of complex numbers. ı is the square root of −1. (If
you haven’t seen complex numbers before, don’t dismay. We’ll cover them later.)
• Z⁺, Q⁺ and R⁺ are the sets of positive integers, rationals and reals, respectively. For example,
Z⁺ = {1, 2, 3, . . .}. We use a − superscript to denote the sets of negative numbers.
• Z⁰⁺, Q⁰⁺ and R⁰⁺ are the sets of non-negative integers, rationals and reals, respectively. For
example, Z⁰⁺ = {0, 1, 2, . . .}.
• (a . . . b) denotes an open interval on the real axis. (a . . . b) ≡ {x|x ∈ R, a < x < b}
• We use brackets to denote the closed interval. [a..b] ≡ {x|x ∈ R, a ≤ x ≤ b}
¹ Note that with this description, we enumerate each rational number an infinite number of times. For example:
1/2 = 2/4 = 3/6 = (−1)/(−2) = · · · . This does not pose a problem as we do not count multiplicities.
² Guess what R is for.
The cardinality or order of a set S is denoted |S|. For finite sets, the cardinality is the number
of elements in the set. The Cartesian product of two sets is the set of ordered pairs:
X × Y ≡ {(x, y)|x ∈ X, y ∈ Y }.
The Cartesian product of n sets is the set of ordered n-tuples:
X1 × X2 × · · · × Xn ≡ {(x1, x2, . . . , xn)|x1 ∈ X1, x2 ∈ X2, . . . , xn ∈ Xn}.
Equality. Two sets S and T are equal if each element of S is an element of T and vice versa. This
is denoted, S = T. Inequality is S ≠ T, of course. S is a subset of T, S ⊆ T, if every element of S
is an element of T. S is a proper subset of T, S ⊂ T, if S ⊆ T and S ≠ T. For example: The empty
set is a subset of every set, ∅ ⊆ S. The rational numbers are a proper subset of the real numbers,
Q ⊂ R.
Operations. The union of two sets, S ∪ T, is the set whose elements are in either of the two sets.
The union of n sets,
∪ⁿⱼ₌₁ Sⱼ ≡ S₁ ∪ S₂ ∪ · · · ∪ Sₙ,
is the set whose elements are in any of the sets Sⱼ. The intersection of two sets, S ∩ T, is the set
whose elements are in both of the two sets. In other words, the intersection of two sets is the set of
elements that the two sets have in common. The intersection of n sets,
∩ⁿⱼ₌₁ Sⱼ ≡ S₁ ∩ S₂ ∩ · · · ∩ Sₙ,
is the set whose elements are in all of the sets Sⱼ. If two sets have no elements in common, S ∩ T = ∅,
then the sets are disjoint. If T ⊆ S, then the difference between S and T, S \ T, is the set of elements
in S which are not in T.
S \ T ≡ {x | x ∈ S, x ∉ T}
The difference of sets is also denoted S − T.
Properties. The following properties are easily verified from the above definitions.
• S ∪ ∅ = S, S ∩ ∅ = ∅, S \ ∅ = S, S \ S = ∅.
• Commutative. S ∪ T = T ∪ S, S ∩ T = T ∩ S.
• Associative. (S ∪ T) ∪ U = S ∪ (T ∪ U) = S ∪ T ∪ U, (S ∩ T) ∩ U = S ∩ (T ∩ U) = S ∩ T ∩ U.
• Distributive. S ∪ (T ∩ U) = (S ∪ T) ∩ (S ∪ U), S ∩ (T ∪ U) = (S ∩ T) ∪ (S ∩ U).
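These identities are easy to experiment with. Here is a small illustrative sketch (an addition for this text, assuming a Python interpreter; Python’s built-in set type supports the same operations) that checks each property on example sets:

    S, T, U = {1, 2, 3}, {3, 4}, {2, 3, 5}
    # S ∪ ∅ = S, S ∩ ∅ = ∅, S \ ∅ = S, S \ S = ∅
    assert S | set() == S and S & set() == set()
    assert S - set() == S and S - S == set()
    # commutative laws
    assert S | T == T | S and S & T == T & S
    # associative laws
    assert (S | T) | U == S | (T | U) and (S & T) & U == S & (T & U)
    # distributive laws
    assert S | (T & U) == (S | T) & (S | U)
    assert S & (T | U) == (S & T) | (S & U)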
1.2 Single Valued Functions
Single-Valued Functions. A single-valued function or single-valued mapping is a mapping of the
elements x ∈ X into elements y ∈ Y . This is expressed as f : X → Y . If such a function
is well-defined, then for each x ∈ X there exists a unique element y ∈ Y such that f(x) = y. The set
X is the domain of the function, Y is the codomain, (not to be confused with the range, which we
introduce shortly). To denote the value of a function on a particular element we can use any of
the notations: f(x) = y, f : x → y or simply x → y. f is the identity map on X if f(x) = x for all
x ∈ X.
Let f : X → Y . The range or image of f is
f(X) = {y | y = f(x) for some x ∈ X}.
The range is a subset of the codomain. For each Z ⊆ Y , the inverse image of Z is defined:
f⁻¹(Z) ≡ {x ∈ X | f(x) = z for some z ∈ Z}.
Examples.
• Finite polynomials, f(x) = Σ_{k=0}^n aₖxᵏ, aₖ ∈ R, and the exponential function, f(x) = eˣ, are
examples of single valued functions which map real numbers to real numbers.
• The greatest integer function, f(x) = ⌊x⌋, is a mapping from R to Z. ⌊x⌋ is defined as the
greatest integer less than or equal to x. Likewise, the least integer function, f(x) = ⌈x⌉, is the
least integer greater than or equal to x.
The -jectives. A function is injective if for each x₁ ≠ x₂, f(x₁) ≠ f(x₂). In other words, distinct
elements are mapped to distinct elements. f is surjective if for each y in the codomain, there is an
x such that y = f(x). If a function is both injective and surjective, then it is bijective. A bijective
function is also called a one-to-one mapping.
Examples.
• The exponential function f(x) = eˣ, considered as a mapping from R to R⁺, is bijective, (a
one-to-one mapping).
• f(x) = x² is a bijection from R⁺ to R⁺. f is not injective from R to R⁺. For each positive y
in the range, there are two values of x such that y = x².
• f(x) = sin x is not injective from R to [−1..1]. For each y ∈ [−1..1] there exists an infinite
number of values of x such that y = sin x.
Figure 1.1: Depictions of Injective, Surjective and Bijective Functions
1.3 Inverses and Multi-Valued Functions
If y = f(x), then we can write x = f⁻¹(y) where f⁻¹ is the inverse of f. If y = f(x) is a one-to-one
function, then f⁻¹(y) is also a one-to-one function. In this case, x = f⁻¹(f(x)) = f(f⁻¹(x)) for
values of x where both f(x) and f⁻¹(x) are defined. For example ln x, which maps R⁺ to R, is the
inverse of eˣ. x = e^(ln x) = ln(eˣ) for all x ∈ R⁺. (Note the x ∈ R⁺ ensures that ln x is defined.)
If y = f(x) is a many-to-one function, then x = f⁻¹(y) is a one-to-many function. f⁻¹(y) is a
multi-valued function. We have x = f(f⁻¹(x)) for values of x where f⁻¹(x) is defined, however
x ≠ f⁻¹(f(x)). There are diagrams showing one-to-one, many-to-one and one-to-many functions in
Figure 1.2.
Example 1.3.1 y = x², a many-to-one function has the inverse x = y^(1/2). For each positive y, there
are two values of x such that x = y^(1/2). y = x² and y = x^(1/2) are graphed in Figure 1.3.
Figure 1.2: Diagrams of One-To-One, Many-To-One and One-To-Many Functions
Figure 1.3: y = x² and y = x^(1/2)
We say that there are two branches of y = x^(1/2): the positive and the negative branch. We denote
the positive branch as y = √x; the negative branch is y = −√x. We call √x the principal branch of
x^(1/2). Note that √x is a one-to-one function. Finally, x = (x^(1/2))² since (±√x)² = x, but
x ≠ (x²)^(1/2) since (x²)^(1/2) = ±x. y = √x is graphed in Figure 1.4.
Figure 1.4: y = √x
Now consider the many-to-one function y = sin x. The inverse is x = arcsin y. For each y ∈
[−1..1] there are an infinite number of values x such that x = arcsin y. In Figure 1.5 is a graph of
y = sin x and a graph of a few branches of y = arcsin x.
Figure 1.5: y = sin x and y = arcsin x
Example 1.3.2 arcsin x has an infinite number of branches. We will denote the principal branch
by Arcsin x which maps [−1..1] to [−π/2..π/2]. Note that x = sin(arcsin x), but x ≠ arcsin(sin x).
y = Arcsin x is graphed in Figure 1.6.
Figure 1.6: y = Arcsin x
Example 1.3.3 Consider 1^(1/3). Since x³ is a one-to-one function, x^(1/3) is a single-valued function.
(See Figure 1.7.) 1^(1/3) = 1.
Figure 1.7: y = x³ and y = x^(1/3)
Example 1.3.4 Consider arccos(1/2). cos x and a portion of arccos x are graphed in Figure 1.8.
The equation cos x = 1/2 has the two solutions x = ±π/3 in the range x ∈ (−π..π]. We use the
periodicity of the cosine, cos(x + 2π) = cos x, to find the remaining solutions.
arccos(1/2) = {±π/3 + 2nπ}, n ∈ Z.
Figure 1.8: y = cos x and y = arccos x
1.4 Transforming Equations
Consider the equation g(x) = h(x) and the single-valued function f(x). A particular value of x
is a solution of the equation if substituting that value into the equation results in an identity. In
determining the solutions of an equation, we often apply functions to each side of the equation in
order to simplify its form. We apply the function to obtain a second equation, f(g(x)) = f(h(x)). If
x = ξ is a solution of the former equation, (let ψ = g(ξ) = h(ξ)), then it is necessarily a solution of
the latter. This is because f(g(ξ)) = f(h(ξ)) reduces to the identity f(ψ) = f(ψ). If f(x) is bijective,
then the converse is true: any solution of the latter equation is a solution of the former equation.
Suppose that x = ξ is a solution of the latter, f(g(ξ)) = f(h(ξ)). That f(x) is a one-to-one mapping
implies that g(ξ) = h(ξ). Thus x = ξ is a solution of the former equation.
It is always safe to apply a one-to-one, (bijective), function to an equation, (provided it is defined
for that domain). For example, we can apply f(x) = x³ or f(x) = eˣ, considered as mappings on
R, to the equation x = 1. The equations x³ = 1 and eˣ = e each have the unique solution x = 1 for
x ∈ R.
In general, we must take care in applying functions to equations. If we apply a many-to-one
function, we may introduce spurious solutions. Applying f(x) = x² to the equation x = π/2 results in
x² = π²/4, which has the two solutions, x = {±π/2}. Applying f(x) = sin x results in sin x = 1, which
has an infinite number of solutions, x = {π/2 + 2nπ | n ∈ Z}.
We do not generally apply a one-to-many, (multi-valued), function to both sides of an equation
as this rarely is useful. Rather, we typically use the definition of the inverse function. Consider the
equation
sin² x = 1.
Applying the function f(x) = x^(1/2) to the equation would not get us anywhere.
(sin² x)^(1/2) = 1^(1/2)
Since (sin² x)^(1/2) ≠ sin x, we cannot simplify the left side of the equation. Instead we could use the
definition of f(x) = x^(1/2) as the inverse of the x² function to obtain
sin x = 1^(1/2) = ±1.
Now note that we should not just apply arcsin to both sides of the equation as arcsin(sin x) ≠ x.
Instead we use the definition of arcsin as the inverse of sin.
x = arcsin(±1)
x = arcsin(1) has the solutions x = π/2 + 2nπ and x = arcsin(−1) has the solutions x = −π/2 + 2nπ.
We enumerate the solutions.
x = {π/2 + nπ | n ∈ Z}
1.5 Exercises
Exercise 1.1
The area of a circle is directly proportional to the square of its diameter. What is the constant of
proportionality?
Hint, Solution
Exercise 1.2
Consider the equation
(x + 1)/(y − 2) = (x² − 1)/(y² − 4).
1. Why might one think that this is the equation of a line?
2. Graph the solutions of the equation to demonstrate that it is not the equation of a line.
Hint, Solution
Exercise 1.3
Consider the function of a real variable,
f(x) = 1/(x² + 2).
What is the domain and range of the function?
Hint, Solution
Exercise 1.4
The temperature measured in degrees Celsius³ is linearly related to the temperature measured in
degrees Fahrenheit⁴. Water freezes at 0°C = 32°F and boils at 100°C = 212°F. Write the
temperature in degrees Celsius as a function of degrees Fahrenheit.
Hint, Solution
Exercise 1.5
Consider the function graphed in Figure 1.9. Sketch graphs of f(−x), f(x + 3), f(3 − x) + 2, and
f⁻¹(x). You may use the blank grids in Figure 1.10.
Hint, Solution
Hint, Solution
Exercise 1.6
A culture of bacteria grows at the rate of 10% per minute. At 6:00 pm there are 1 billion bacteria.
How many bacteria are there at 7:00 pm? How many were there at 3:00 pm?
Hint, Solution
Exercise 1.7
The graph in Figure 1.11 shows an even function f(x) = p(x)/q(x) where p(x) and q(x) are rational
quadratic polynomials. Give possible formulas for p(x) and q(x).
Hint, Solution
Exercise 1.8
Find a polynomial of degree 100 which is zero only at x = −2, 1, π and is non-negative.
Hint, Solution
³ Originally, it was called degrees Centigrade. centi because there are 100 degrees between the two calibration
points. It is now called degrees Celsius in honor of the inventor.
⁴ The Fahrenheit scale, named for Daniel Fahrenheit, was originally calibrated with the freezing point of salt-
saturated water to be 0°. Later, the calibration points became the freezing point of water, 32°, and body temperature,
96°. With this method, there are 64 divisions between the calibration points. Finally, the upper calibration point
was changed to the boiling point of water at 212°. This gave 180 divisions, (the number of degrees in a half circle),
between the two calibration points.
Figure 1.9: Graph of the function.
Figure 1.10: Blank grids.
1.6 Hints
Hint 1.1
area = constant × diameter².
Hint 1.2
A pair (x, y) is a solution of the equation if it makes the equation an identity.
Hint 1.3
The domain is the subset of R on which the function is defined.
Figure 1.11: Plots of f(x) = p(x)/q(x).
Hint 1.4
Find the slope and x-intercept of the line.
Hint 1.5
The inverse of the function is the reflection of the function across the line y = x.
Hint 1.6
The formula for geometric growth/decay is x(t) = x₀rᵗ, where r is the rate.
Hint 1.7
Since p(x) and q(x) appear as a ratio, they are determined only up to a multiplicative constant.
We may take the leading coefficient of q(x) to be unity.
f(x) = p(x)/q(x) = (ax² + bx + c)/(x² + βx + χ)
Use the properties of the function to solve for the unknown parameters.
Hint 1.8
Write the polynomial in factored form.
1.7 Solutions
Solution 1.1
area = π × radius²
area = (π/4) × diameter²
The constant of proportionality is π/4.
Solution 1.2
1. If we multiply the equation by y² − 4 and divide by x + 1, we obtain the equation of a line.
y + 2 = x − 1
2. We factor the quadratics on the right side of the equation.
(x + 1)/(y − 2) = ((x + 1)(x − 1))/((y − 2)(y + 2)).
We note that one or both sides of the equation are undefined at y = ±2 because of division
by zero. There are no solutions for these two values of y and we assume from this point that
y ≠ ±2. We multiply by (y − 2)(y + 2).
(x + 1)(y + 2) = (x + 1)(x − 1)
For x = −1, the equation becomes the identity 0 = 0. Now we consider x ≠ −1. We divide by
x + 1 to obtain the equation of a line.
y + 2 = x − 1
y = x − 3
Now we collect the solutions we have found.
{(−1, y) : y ≠ ±2} ∪ {(x, x − 3) : x ≠ 1, 5}
The solutions are depicted in Figure 1.12.
Figure 1.12: The solutions of (x + 1)/(y − 2) = (x² − 1)/(y² − 4).
Solution 1.3
The denominator is nonzero for all x ∈ R. Since we don’t have any division by zero problems, the
domain of the function is R. For x ∈ R,
0 < 1/(x² + 2) ≤ 1/2.
Consider
y = 1/(x² + 2). (1.1)
For any y ∈ (0 . . . 1/2], there is at least one value of x that satisfies Equation 1.1.
x² + 2 = 1/y
x = ±√(1/y − 2)
Thus the range of the function is (0 . . . 1/2].
Solution 1.4
Let c denote degrees Celsius and f denote degrees Fahrenheit. The line passes through the points
(f, c) = (32, 0) and (f, c) = (212, 100). The x-intercept is f = 32. We calculate the slope of the line.
slope = (100 − 0)/(212 − 32) = 100/180 = 5/9
The relationship between Fahrenheit and Celsius is
c = (5/9)(f − 32).
Solution 1.5
We plot the various transformations of f(x) in Figure 1.13.
Figure 1.13: Graphs of f(−x), f(x + 3), f(3 − x) + 2, and f⁻¹(x).
Solution 1.6
The formula for geometric growth/decay is x(t) = x₀rᵗ, where r is the rate. Let t = 0 coincide with
6:00 pm. We determine x₀.
x(0) = 10⁹ = x₀(11/10)⁰ = x₀
x₀ = 10⁹
At 7:00 pm the number of bacteria is
10⁹(11/10)⁶⁰ = 11⁶⁰/10⁵¹ ≈ 3.04 × 10¹¹
At 3:00 pm the number of bacteria was
10⁹(11/10)⁻¹⁸⁰ = 10¹⁸⁹/11¹⁸⁰ ≈ 35.4
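As a quick numerical check of these figures, here is an illustrative Python sketch (an addition to the text, not part of the solution):

    x0 = 10**9           # bacteria at 6:00 pm
    r = 11 / 10          # 10% growth per minute
    print(x0 * r**60)    # 7:00 pm: about 3.04e11
    print(x0 * r**-180)  # 3:00 pm: about 35.4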
Solution 1.7
We write p(x) and q(x) as general quadratic polynomials.
f(x) = p(x)/q(x) = (ax² + bx + c)/(αx² + βx + χ)
We will use the properties of the function to solve for the unknown parameters.
Since p(x) and q(x) appear as a ratio, they are determined only up to a multiplicative
constant. We may take the leading coefficient of q(x) to be unity.
f(x) = p(x)/q(x) = (ax² + bx + c)/(x² + βx + χ)
f(x) has a second order zero at x = 0. This means that p(x) has a second order zero there, so
b = c = 0, and that χ ≠ 0.
f(x) = ax²/(x² + βx + χ)
We note that f(x) → 2 as x → ∞. This determines the parameter a.
lim_{x→∞} f(x) = lim_{x→∞} ax²/(x² + βx + χ) = lim_{x→∞} 2ax/(2x + β) = lim_{x→∞} 2a/2 = a = 2
f(x) = 2x²/(x² + βx + χ)
Now we use the fact that f(x) is even to conclude that q(x) is even and thus β = 0.
f(x) = 2x²/(x² + χ)
Finally, we use that f(1) = 1 to determine χ: 2/(1 + χ) = 1, so χ = 1.
f(x) = 2x²/(x² + 1)
Solution 1.8
Consider the polynomial
p(x) = (x + 2)⁴⁰(x − 1)³⁰(x − π)³⁰.
It is of degree 100. Since the factors only vanish at x = −2, 1, π, p(x) only vanishes there. Since
each factor is raised to an even power, the factors are non-negative and hence the polynomial is
non-negative.
Chapter 2
Vectors
2.1 Vectors
2.1.1 Scalars and Vectors
A vector is a quantity having both a magnitude and a direction. Examples of vector quantities are
velocity, force and position. One can represent a vector in n-dimensional space with an arrow whose
initial point is at the origin, (Figure 2.1). The magnitude is the length of the vector. Typographically,
variables representing vectors are often written in capital letters, bold face or with a vector over-line,
A, a, ā. The magnitude of a vector is denoted |a|.
x
z
y
Figure 2.1: Graphical representation of a vector in three dimensions.
A scalar has only a magnitude. Examples of scalar quantities are mass, time and speed.
Vector Algebra. Two vectors are equal if they have the same magnitude and direction. The
negative of a vector, denoted −a, is a vector of the same magnitude as a but in the opposite
direction. We add two vectors a and b by placing the tail of b at the head of a and defining a + b
to be the vector with tail at the origin and head at the head of b. (See Figure 2.2.)
Figure 2.2: Vector arithmetic.
The difference, a − b, is defined as the sum of a and the negative of b, a + (−b). The result of
multiplying a by a scalar α is a vector of magnitude |α| |a| with the same/opposite direction if α is
positive/negative. (See Figure 2.2.)
Here are the properties of adding vectors and multiplying them by a scalar. They are evident
from geometric considerations.
a + b = b + a,  αa = aα    commutative laws
(a + b) + c = a + (b + c),  α(βa) = (αβ)a    associative laws
α(a + b) = αa + αb,  (α + β)a = αa + βa    distributive laws
Zero and Unit Vectors. The additive identity element for vectors is the zero vector or null
vector. This is a vector of magnitude zero which is denoted as 0. A unit vector is a vector of
magnitude one. If a is nonzero then a/|a| is a unit vector in the direction of a. Unit vectors are
often denoted with a caret, n̂.
Rectangular Unit Vectors. In n dimensional Cartesian space, Rⁿ, the unit vectors in the
directions of the coordinate axes are e₁, . . . , eₙ. These are called the rectangular unit vectors. To cut
down on subscripts, the unit vectors in three dimensional space are often denoted with i, j and k.
(Figure 2.3).
Figure 2.3: Rectangular unit vectors.
Components of a Vector. Consider a vector a with tail at the origin and head having the Carte-
sian coordinates (a₁, . . . , aₙ). We can represent this vector as the sum of n rectangular component
vectors, a = a₁e₁ + · · · + aₙeₙ. (See Figure 2.4.) Another notation for the vector a is ⟨a₁, . . . , aₙ⟩.
By the Pythagorean theorem, the magnitude of the vector a is |a| = √(a₁² + · · · + aₙ²).
Figure 2.4: Components of a vector.
2.1.2 The Kronecker Delta and Einstein Summation Convention
The Kronecker Delta tensor is defined
δᵢⱼ = 1 if i = j, and δᵢⱼ = 0 if i ≠ j.
This notation will be useful in our work with vectors.
This notation will be useful in our work with vectors.
Consider writing a vector in terms of its rectangular components. Instead of using ellipses: a =
a₁e₁ + · · · + aₙeₙ, we could write the expression as a sum: a = Σ_{i=1}^n aᵢeᵢ. We can shorten this
notation by leaving out the sum: a = aᵢeᵢ, where it is understood that whenever an index is
repeated in a term we sum over that index from 1 to n. This is the Einstein summation convention.
A repeated index is called a summation index or a dummy index. Other indices can take any value
from 1 to n and are called free indices.
Example 2.1.1 Consider the matrix equation: A · x = b. We can write out the matrix and vectors
explicitly.
    [ a₁₁ · · · a₁ₙ ] [ x₁ ]   [ b₁ ]
    [  ⋮    ⋱    ⋮  ] [ ⋮  ] = [ ⋮  ]
    [ aₙ₁ · · · aₙₙ ] [ xₙ ]   [ bₙ ]
This takes much less space when we use the summation convention.
aᵢⱼxⱼ = bᵢ
Here j is a summation index and i is a free index.
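The summation convention maps directly onto index notation in numerical software. A small sketch (an illustrative addition, assuming the NumPy library; the values of A and x are arbitrary):

    import numpy as np

    A = np.array([[1.0, 2.0], [3.0, 4.0]])
    x = np.array([5.0, 6.0])
    # a_ij x_j = b_i: j is the summed (dummy) index, i is free
    b = np.einsum('ij,j->i', A, x)
    assert np.allclose(b, A @ x)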
2.1.3 The Dot and Cross Product
Dot Product. The dot product or scalar product of two vectors is defined,
a · b ≡ |a||b| cos θ,
where θ is the angle from a to b. From this definition one can derive the following properties:
• a · b = b · a, commutative.
• α(a · b) = (αa) · b = a · (αb), associativity of scalar multiplication.
• a · (b + c) = a · b + a · c, distributive. (See Exercise 2.1.)
• eᵢ · eⱼ = δᵢⱼ. In three dimensions, this is
i · i = j · j = k · k = 1, i · j = j · k = k · i = 0.
• a · b = aᵢbᵢ ≡ a₁b₁ + · · · + aₙbₙ, dot product in terms of rectangular components.
• If a · b = 0 then either a and b are orthogonal, (perpendicular), or one of a and b is zero.
The Angle Between Two Vectors. We can use the dot product to find the angle between two
vectors, a and b. From the definition of the dot product,
a · b = |a||b| cos θ.
If the vectors are nonzero, then
θ = arccos( (a · b)/(|a||b|) ).
Example 2.1.2 What is the angle between i and i + j?
θ = arccos( (i · (i + j))/(|i||i + j|) ) = arccos(1/√2) = π/4.
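The same computation can be done numerically; here is a sketch (an added illustration, assuming NumPy):

    import numpy as np

    a = np.array([1.0, 0.0])  # i
    b = np.array([1.0, 1.0])  # i + j
    theta = np.arccos(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    assert np.isclose(theta, np.pi / 4)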
Parametric Equation of a Line. Consider a line in Rⁿ that passes through the point a and is
parallel to the vector t, (tangent). A parametric equation of the line is
x = a + ut, u ∈ R.
Implicit Equation of a Line In 2D. Consider a line in R² that passes through the point a and
is normal, (orthogonal, perpendicular), to the vector n. All the lines that are normal to n have the
property that x · n is a constant, where x is any point on the line. (See Figure 2.5.) x · n = 0 is
the line that is normal to n and passes through the origin. The line that is normal to n and passes
through the point a is
x · n = a · n.
Figure 2.5: Equation for a line.
The normal to a line determines an orientation of the line. The normal points in the direction
that is above the line. A point b is (above/on/below) the line if (b−a)·n is (positive/zero/negative).
The signed distance of a point b from the line x · n = a · n is
(b − a) · n/|n|.
Implicit Equation of a Hyperplane. A hyperplane in Rⁿ is an n − 1 dimensional “sheet” which
passes through a given point and is normal to a given direction. In R³ we call this a plane. Consider
a hyperplane that passes through the point a and is normal to the vector n. All the hyperplanes that
are normal to n have the property that x · n is a constant, where x is any point in the hyperplane.
x · n = 0 is the hyperplane that is normal to n and passes through the origin. The hyperplane that
is normal to n and passes through the point a is
x · n = a · n.
The normal determines an orientation of the hyperplane. The normal points in the direction
that is above the hyperplane. A point b is (above/on/below) the hyperplane if (b − a) · n is
(positive/zero/negative). The signed distance of a point b from the hyperplane x · n = a · n is
(b − a) · n/|n|.
Right and Left-Handed Coordinate Systems. Consider a rectangular coordinate system in
two dimensions. Angles are measured from the positive x axis in the direction of the positive y
axis. There are two ways of labeling the axes. (See Figure 2.6.) In one the angle increases in
the counterclockwise direction and in the other the angle increases in the clockwise direction. The
former is the familiar Cartesian coordinate system.
Figure 2.6: There are two ways of labeling the axes in two dimensions.
There are also two ways of labeling the axes in a three-dimensional rectangular coordinate system.
These are called right-handed and left-handed coordinate systems. See Figure 2.7. Any other
labelling of the axes could be rotated into one of these configurations. The right-handed system
is the one that is used by default. If you put your right thumb in the direction of the z axis in a
right-handed coordinate system, then your fingers curl in the direction from the x axis to the y axis.
Figure 2.7: Right and left handed coordinate systems.
Cross Product. The cross product or vector product is defined,
a × b = |a||b| sin θ n,
where θ is the angle from a to b and n is a unit vector that is orthogonal to a and b and in the
direction such that the ordered triple of vectors a, b and n form a right-handed system.
You can visualize the direction of a × b by applying the right hand rule. Curl the fingers of your
right hand in the direction from a to b. Your thumb points in the direction of a × b. Warning:
Unless you are a lefty, get in the habit of putting down your pencil before applying the right hand
rule.
The dot and cross products behave a little differently. First note that unlike the dot product,
the cross product is not commutative. The magnitudes of a × b and b × a are the same, but their
directions are opposite. (See Figure 2.8.)
Let
a × b = |a||b| sin θ n and b × a = |b||a| sin φ m.
The angle from a to b is the same as the angle from b to a. Since {a, b, n} and {b, a, m} are
right-handed systems, m points in the opposite direction as n. Since a × b = −b × a we say that
the cross product is anti-commutative.
Figure 2.8: The cross product is anti-commutative.
Next we note that since
|a × b| = |a||b| sin θ,
the magnitude of a × b is the area of the parallelogram defined by the two vectors. (See Figure 2.9.)
The area of the triangle defined by two vectors is then (1/2)|a × b|.
Figure 2.9: The parallelogram and the triangle defined by two vectors.
From the definition of the cross product, one can derive the following properties:
• a × b = −b × a, anti-commutative.
• α(a × b) = (αa) × b = a × (αb), associativity of scalar multiplication.
• a × (b + c) = a × b + a × c, distributive.
• (a × b) × c ≠ a × (b × c). The cross product is not associative.
• i × i = j × j = k × k = 0.
• i × j = k, j × k = i, k × i = j.
• a × b = (a₂b₃ − a₃b₂)i + (a₃b₁ − a₁b₃)j + (a₁b₂ − a₂b₁)k =
| i  j  k  |
| a₁ a₂ a₃ |
| b₁ b₂ b₃ |,
cross product in terms of rectangular components.
• If a × b = 0 then either a and b are parallel or one of a or b is zero.
Scalar Triple Product. Consider the volume of the parallelopiped defined by three vectors. (See
Figure 2.10.) The area of the base is ||b||c| sin θ|, where θ is the angle between b and c. The height
is |a| cos φ, where φ is the angle between b × c and a. Thus the volume of the parallelopiped is
|a||b||c| sin θ cos φ.
Figure 2.10: The parallelopiped defined by three vectors.
Note that
|a · (b × c)| = |a · (|b||c| sin θ n)| = ||a||b||c| sin θ cos φ|.
Thus |a · (b × c)| is the volume of the parallelopiped. a · (b × c) is the volume or the negative of the
volume depending on whether {a, b, c} is a right or left-handed system.
Note that parentheses are unnecessary in a · b × c. There is only one way to interpret the
expression. If you did the dot product first then you would be left with the cross product of a scalar
and a vector which is meaningless. a · b × c is called the scalar triple product.
Plane Defined by Three Points. Three points which are not collinear define a plane. Consider
a plane that passes through the three points a, b and c. One way of expressing that the point x
lies in the plane is that the vectors x − a, b − a and c − a are coplanar. (See Figure 2.11.) If the
vectors are coplanar, then the parallelopiped defined by these three vectors will have zero volume.
We can express this in an equation using the scalar triple product,
(x − a) · (b − a) × (c − a) = 0.
Figure 2.11: Three points define a plane.
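The coplanarity condition is easy to evaluate numerically. A sketch (an added illustration, assuming NumPy; the three points are arbitrary illustrative values):

    import numpy as np

    a = np.array([1.0, 0.0, 0.0])
    b = np.array([0.0, 1.0, 0.0])
    c = np.array([0.0, 0.0, 1.0])
    x = np.array([1/3, 1/3, 1/3])  # the centroid, which lies in the plane
    # (x - a) · (b - a) × (c - a) = 0 for points x in the plane
    print(np.dot(x - a, np.cross(b - a, c - a)))  # 0.0 (up to rounding)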
2.2 Sets of Vectors in n Dimensions
Orthogonality. Consider two n-dimensional vectors
x = (x₁, x₂, . . . , xₙ), y = (y₁, y₂, . . . , yₙ).
The inner product of these vectors can be defined
⟨x|y⟩ ≡ x · y = Σ_{i=1}^n xᵢyᵢ.
The vectors are orthogonal if x · y = 0. The norm of a vector is the length of the vector generalized
to n dimensions.
‖x‖ = √(x · x)
Consider a set of vectors
{x1, x2, . . . , xm}.
If each pair of vectors in the set is orthogonal, then the set is orthogonal.
xᵢ · xⱼ = 0 if i ≠ j
If in addition each vector in the set has norm 1, then the set is orthonormal.
xᵢ · xⱼ = δᵢⱼ = 1 if i = j, 0 if i ≠ j
Here δᵢⱼ is known as the Kronecker delta function.
Completeness. A set of n, n-dimensional vectors
{x₁, x₂, . . . , xₙ}
is complete if any n-dimensional vector can be written as a linear combination of the vectors in the
set. That is, any vector y can be written
y = Σ_{i=1}^n cᵢxᵢ.
Suppose the set is orthogonal. Taking the inner product of each side of this equation with xₘ,
y · xₘ = (Σ_{i=1}^n cᵢxᵢ) · xₘ = Σ_{i=1}^n cᵢ(xᵢ · xₘ) = cₘ(xₘ · xₘ)
cₘ = (y · xₘ)/‖xₘ‖²
Thus y has the expansion
y = Σ_{i=1}^n ((y · xᵢ)/‖xᵢ‖²) xᵢ.
If in addition the set is orthonormal, then
y = Σ_{i=1}^n (y · xᵢ)xᵢ.
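A numerical sketch of the orthonormal expansion (an added illustration, assuming NumPy; the basis here is a rotated standard basis in R²):

    import numpy as np

    x1 = np.array([1.0, 1.0]) / np.sqrt(2)
    x2 = np.array([1.0, -1.0]) / np.sqrt(2)
    y = np.array([3.0, 4.0])
    # y = (y · x1) x1 + (y · x2) x2 for an orthonormal basis
    reconstruction = (y @ x1) * x1 + (y @ x2) * x2
    assert np.allclose(reconstruction, y)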
2.3 Exercises
The Dot and Cross Product
Exercise 2.1
Prove the distributive law for the dot product,
a · (b + c) = a · b + a · c.
Hint, Solution
Exercise 2.2
Prove that
a · b = aibi ≡ a1b1 + · · · + anbn.
Hint, Solution
Exercise 2.3
What is the angle between the vectors i + j and i + 3j?
Hint, Solution
Exercise 2.4
Prove the distributive law for the cross product,
a × (b + c) = a × b + a × c.
Hint, Solution
Exercise 2.5
Show that
a × b =
| i  j  k  |
| a₁ a₂ a₃ |
| b₁ b₂ b₃ |
Hint, Solution
Exercise 2.6
What is the area of the quadrilateral with vertices at (1, 1), (4, 2), (3, 7) and (2, 3)?
Hint, Solution
Exercise 2.7
What is the volume of the tetrahedron with vertices at (1, 1, 0), (3, 2, 1), (2, 4, 1) and (1, 2, 5)?
Hint, Solution
Exercise 2.8
What is the equation of the plane that passes through the points (1, 2, 3), (2, 3, 1) and (3, 1, 2)?
What is the distance from the point (2, 3, 5) to the plane?
Hint, Solution
2.4 Hints
The Dot and Cross Product
Hint 2.1
First prove the distributive law when the first vector is of unit length,
n · (b + c) = n · b + n · c.
Then all the quantities in the equation are projections onto the unit vector n and you can use
geometry.
Hint 2.2
First prove that the dot product of a rectangular unit vector with itself is one and the dot product
of two distinct rectangular unit vectors is zero. Then write a and b in rectangular components and
use the distributive law.
Hint 2.3
Use a · b = |a||b| cos θ.
Hint 2.4
First consider the case that both b and c are orthogonal to a. Prove the distributive law in this
case from geometric considerations.
Next consider two arbitrary vectors a and b. We can write b = b⊥ + b∥ where b⊥ is orthogonal
to a and b∥ is parallel to a. Show that
a × b = a × b⊥.
Finally prove the distributive law for arbitrary b and c.
Hint 2.5
Write the vectors in their rectangular components and use,
i × j = k, j × k = i, k × i = j,
and,
i × i = j × j = k × k = 0.
Hint 2.6
The quadrilateral is composed of two triangles. The area of a triangle defined by the two vectors a
and b is (1/2)|a × b|.
Hint 2.7
Justify that the volume of a tetrahedron determined by three vectors is one sixth the volume of the
parallelopiped determined by those three vectors. The volume of a parallelopiped determined by three
vectors is the magnitude of the scalar triple product of the vectors: |a · b × c|.
Hint 2.8
The equation of the plane that is orthogonal to a and passes through the point b is a · x = a · b. The
distance of a point c from the plane is
|(c − b) · a/|a||
2.5 Solutions
The Dot and Cross Product
Solution 2.1
First we prove the distributive law when the first vector is of unit length, i.e.,
n · (b + c) = n · b + n · c. (2.1)
From Figure 2.12 we see that the projection of the vector b + c onto n is equal to the sum of the
projections b · n and c · n.
Figure 2.12: The distributive law for the dot product.
Now we extend the result to the case when the first vector has arbitrary length. We define
a = |a|n and multiply Equation 2.1 by the scalar, |a|.
|a|n · (b + c) = |a|n · b + |a|n · c
a · (b + c) = a · b + a · c.
Solution 2.2
First note that
eᵢ · eᵢ = |eᵢ||eᵢ| cos(0) = 1.
Then note that the dot product of any two distinct rectangular unit vectors is zero because they are
orthogonal. Now we write a and b in terms of their rectangular components and use the distributive
law.
a · b = aᵢeᵢ · bⱼeⱼ = aᵢbⱼ eᵢ · eⱼ = aᵢbⱼδᵢⱼ = aᵢbᵢ
Solution 2.3
Since a · b = |a||b| cos θ, we have
θ = arccos( (a · b)/(|a||b|) )
when a and b are nonzero.
θ = arccos( ((i + j) · (i + 3j))/(|i + j||i + 3j|) ) = arccos( 4/(√2 √10) ) = arccos( 2√5/5 ) ≈ 0.463648
Solution 2.4
First consider the case that both b and c are orthogonal to a. b + c is the diagonal of the par-
allelogram defined by b and c, (see Figure 2.13). Since a is orthogonal to each of these vectors,
taking the cross product of a with these vectors has the effect of rotating the vectors through π/2
radians about a and multiplying their length by |a|. Note that a × (b + c) is the diagonal of the
parallelogram defined by a × b and a × c. Thus we see that the distributive law holds when a is
orthogonal to both b and c,
a × (b + c) = a × b + a × c.
Figure 2.13: The distributive law for the cross product.
Now consider two arbitrary vectors a and b. We can write b = b⊥ + b∥ where b⊥ is orthogonal
to a and b∥ is parallel to a, (see Figure 2.14).
Figure 2.14: The vector b written as a sum of components orthogonal and parallel to a.
By the definition of the cross product,
a × b = |a||b| sin θ n.
Note that
|b⊥| = |b| sin θ,
and that a × b⊥ is a vector in the same direction as a × b. Thus we see that
a × b = |a||b| sin θ n = |a|(sin θ |b|)n = |a||b⊥|n = |a||b⊥| sin(π/2) n
a × b = a × b⊥.
Now we are prepared to prove the distributive law for arbitrary b and c.
a × (b + c) = a × (b⊥ + b∥ + c⊥ + c∥)
= a × ((b + c)⊥ + (b + c)∥)
= a × ((b + c)⊥)
= a × b⊥ + a × c⊥
= a × b + a × c
a × (b + c) = a × b + a × c
Solution 2.5
We know that
i × j = k, j × k = i, k × i = j,
and that
i × i = j × j = k × k = 0.
Now we write a and b in terms of their rectangular components and use the distributive law to
expand the cross product.
a × b = (a₁i + a₂j + a₃k) × (b₁i + b₂j + b₃k)
= a₁i × (b₁i + b₂j + b₃k) + a₂j × (b₁i + b₂j + b₃k) + a₃k × (b₁i + b₂j + b₃k)
= a₁b₂k + a₁b₃(−j) + a₂b₁(−k) + a₂b₃i + a₃b₁j + a₃b₂(−i)
= (a₂b₃ − a₃b₂)i − (a₁b₃ − a₃b₁)j + (a₁b₂ − a₂b₁)k
Next we evaluate the determinant.
| i  j  k  |
| a₁ a₂ a₃ | = i | a₂ a₃ ; b₂ b₃ | − j | a₁ a₃ ; b₁ b₃ | + k | a₁ a₂ ; b₁ b₂ |
| b₁ b₂ b₃ |
= (a₂b₃ − a₃b₂)i − (a₁b₃ − a₃b₁)j + (a₁b₂ − a₂b₁)k
Thus we see that,
a × b =
| i  j  k  |
| a₁ a₂ a₃ |
| b₁ b₂ b₃ |
Solution 2.6
The area of the quadrilateral is the area of two triangles. The first triangle is defined by the
vector from (1, 1) to (4, 2) and the vector from (1, 1) to (2, 3). The second triangle is defined by
the vector from (3, 7) to (4, 2) and the vector from (3, 7) to (2, 3). (See Figure 2.15.) The area of a
triangle defined by the two vectors a and b is (1/2)|a × b|. The area of the quadrilateral is then,
(1/2)|(3i + j) × (i + 2j)| + (1/2)|(i − 5j) × (−i − 4j)| = (1/2)(5) + (1/2)(9) = 7.
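A quick check of the two triangle areas, written as an added Python sketch (cross2 computes the z-component a₁b₂ − a₂b₁ of the 2D cross product):

    def cross2(a, b):
        return a[0] * b[1] - a[1] * b[0]

    area = 0.5 * abs(cross2((3, 1), (1, 2))) + 0.5 * abs(cross2((1, -5), (-1, -4)))
    print(area)  # 7.0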
Solution 2.7
The tetrahedron is determined by the three vectors with tail at (1, 1, 0) and heads at (3, 2, 1), (2, 4, 1)
and (1, 2, 5). These are ⟨2, 1, 1⟩, ⟨1, 3, 1⟩ and ⟨0, 1, 5⟩. The volume of the tetrahedron is one sixth the
volume of the parallelopiped determined by these vectors. (This is because the volume of a pyramid is
(1/3)(base)(height). The base of the tetrahedron is half the base of the parallelopiped and the heights
are the same. (1/2)(1/3) = 1/6.) Thus the volume of a tetrahedron determined by three vectors is
(1/6)|a · b × c|. The volume of the tetrahedron is
(1/6)|⟨2, 1, 1⟩ · ⟨1, 3, 1⟩ × ⟨0, 1, 5⟩| = (1/6)|⟨2, 1, 1⟩ · ⟨14, −5, 1⟩| = 4
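Checking the scalar triple product numerically (an added sketch, assuming NumPy):

    import numpy as np

    a = np.array([2.0, 1.0, 1.0])
    b = np.array([1.0, 3.0, 1.0])
    c = np.array([0.0, 1.0, 5.0])
    volume = abs(np.dot(a, np.cross(b, c))) / 6  # |a · b × c| / 6
    print(volume)  # 4.0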
Figure 2.15: Quadrilateral.
Solution 2.8
The two vectors with tails at (1, 2, 3) and heads at (2, 3, 1) and (3, 1, 2) are parallel to the plane.
Taking the cross product of these two vectors gives us a vector that is orthogonal to the plane.
⟨1, 1, −2⟩ × ⟨2, −1, −1⟩ = ⟨−3, −3, −3⟩
We see that the plane is orthogonal to the vector ⟨1, 1, 1⟩ and passes through the point (1, 2, 3). The
equation of the plane is
⟨1, 1, 1⟩ · ⟨x, y, z⟩ = ⟨1, 1, 1⟩ · ⟨1, 2, 3⟩,
x + y + z = 6.
Consider the vector with tail at (1, 2, 3) and head at (2, 3, 5). The magnitude of the dot product of
this vector with the unit normal vector gives the distance from the plane.
|⟨1, 1, 2⟩ · ⟨1, 1, 1⟩/|⟨1, 1, 1⟩|| = 4/√3 = 4√3/3
Part II
Calculus
Chapter 3
Differential Calculus
3.1 Limits of Functions
Definition of a Limit. If the value of the function y(x) gets arbitrarily close to ψ as x approaches
the point ξ, then we say that the limit of the function as x approaches ξ is equal to ψ. This is written:
lim_{x→ξ} y(x) = ψ
Now we make the notion of “arbitrarily close” precise. For any ε > 0 there exists a δ > 0 such that
|y(x) − ψ| < ε for all 0 < |x − ξ| < δ. That is, there is an interval surrounding the point x = ξ for
which the function is within ε of ψ. See Figure 3.1. Note that the interval surrounding x = ξ is a
deleted neighborhood, that is it does not contain the point x = ξ. Thus the value of the function at
x = ξ need not be equal to ψ for the limit to exist. Indeed the function need not even be defined at
x = ξ.
Figure 3.1: The δ neighborhood of x = ξ such that |y(x) − ψ| < ε.
To prove that a function has a limit at a point ξ we first bound |y(x) − ψ| in terms of δ for values
of x satisfying 0 < |x − ξ| < δ. Denote this upper bound by u(δ). Then for an arbitrary ε > 0, we
determine a δ > 0 such that the upper bound u(δ) and hence |y(x) − ψ| is less than ε.
Example 3.1.1 Show that
lim_{x→1} x² = 1.
Consider any ε > 0. We need to show that there exists a δ > 0 such that |x² − 1| < ε for all
0 < |x − 1| < δ. First we obtain a bound on |x² − 1|.
|x² − 1| = |(x − 1)(x + 1)|
= |x − 1||x + 1|
< δ|x + 1|
= δ|(x − 1) + 2|
< δ(δ + 2)
Now we choose a positive δ such that,
δ(δ + 2) = ε.
We see that
δ = √(1 + ε) − 1,
is positive and satisfies the criterion that |x² − 1| < ε for all 0 < |x − 1| < δ. Thus the limit exists.
Example 3.1.2 Recall that the value of the function y(ξ) need not be equal to limx→ξ y(x) for the
limit to exist. We show an example of this. Consider the function
y(x) = 1 for x ∈ Z, 0 for x ∉ Z.
For what values of ξ does limx→ξ y(x) exist?
First consider ξ ∉ Z. Then there exists an open neighborhood a < ξ < b around ξ such that y(x)
is identically zero for x ∈ (a, b). Then trivially, limx→ξ y(x) = 0.
Now consider ξ ∈ Z. Consider any ε > 0. Then if 0 < |x − ξ| < 1 then |y(x) − 0| = 0 < ε. Thus we
see that limx→ξ y(x) = 0.
Thus, regardless of the value of ξ, limx→ξ y(x) = 0.
Left and Right Limits. With the notation limx→ξ⁺ y(x) we denote the right limit of y(x). This
is the limit as x approaches ξ from above. Mathematically: limx→ξ⁺ y(x) = ψ if for any ε > 0 there
exists a δ > 0 such that |y(x) − ψ| < ε for all 0 < x − ξ < δ. The left limit limx→ξ⁻ y(x) is defined
analogously.
Example 3.1.3 Consider the function, sin x/|x|, defined for x ≠ 0. (See Figure 3.2.) The left and right
limits exist as x approaches zero.
lim_{x→0⁺} sin x/|x| = 1,  lim_{x→0⁻} sin x/|x| = −1
However the limit,
lim_{x→0} sin x/|x|,
does not exist.
Figure 3.2: Plot of sin(x)/|x|.
Properties of Limits. Let lim_{x→ξ} f(x) and lim_{x→ξ} g(x) exist.
• lim_{x→ξ} (af(x) + bg(x)) = a lim_{x→ξ} f(x) + b lim_{x→ξ} g(x).
• lim_{x→ξ} (f(x)g(x)) = (lim_{x→ξ} f(x)) (lim_{x→ξ} g(x)).
• lim_{x→ξ} (f(x)/g(x)) = (lim_{x→ξ} f(x)) / (lim_{x→ξ} g(x)) if lim_{x→ξ} g(x) ≠ 0.
Example 3.1.4 We prove that if lim_{x→ξ} f(x) = φ and lim_{x→ξ} g(x) = γ exist then
lim_{x→ξ} (f(x)g(x)) = (lim_{x→ξ} f(x)) (lim_{x→ξ} g(x)).
Since the limit exists for f(x), we know that for all ε > 0 there exists δ > 0 such that |f(x) − φ| < ε
for |x − ξ| < δ. Likewise for g(x). We seek to show that for all ε > 0 there exists δ > 0 such that
|f(x)g(x) − φγ| < ε for |x − ξ| < δ. We proceed by writing |f(x)g(x) − φγ| in terms of |f(x) − φ|
and |g(x) − γ|, which we know how to bound.
|f(x)g(x) − φγ| = |f(x)(g(x) − γ) + (f(x) − φ)γ|
≤ |f(x)||g(x) − γ| + |f(x) − φ||γ|
If we choose a δ such that |f(x)||g(x) − γ| < ε/2 and |f(x) − φ||γ| < ε/2 then we will have the
desired result: |f(x)g(x) − φγ| < ε. Trying to ensure that |f(x)||g(x) − γ| < ε/2 is hard because of
the |f(x)| factor. We will replace that factor with a constant. We want to write |f(x) − φ||γ| < ε/2
as |f(x) − φ| < ε/(2|γ|), but this is problematic for the case γ = 0. We fix these two problems and
then proceed. We choose δ₁ such that |f(x) − φ| < 1 for |x − ξ| < δ₁. This gives us the desired form.
|f(x)g(x) − φγ| ≤ (|φ| + 1)|g(x) − γ| + |f(x) − φ|(|γ| + 1), for |x − ξ| < δ₁
Next we choose δ₂ such that |g(x) − γ| < ε/(2(|φ| + 1)) for |x − ξ| < δ₂ and choose δ₃ such that
|f(x) − φ| < ε/(2(|γ| + 1)) for |x − ξ| < δ₃. Let δ be the minimum of δ₁, δ₂ and δ₃.
|f(x)g(x) − φγ| ≤ (|φ| + 1)|g(x) − γ| + |f(x) − φ|(|γ| + 1) < ε/2 + ε/2, for |x − ξ| < δ
|f(x)g(x) − φγ| < ε, for |x − ξ| < δ
We conclude that the limit of a product is the product of the limits.
lim_{x→ξ} (f(x)g(x)) = (lim_{x→ξ} f(x)) (lim_{x→ξ} g(x)) = φγ.
Result 3.1.1 Definition of a Limit. The statement:
lim_{x→ξ} y(x) = ψ
means that y(x) gets arbitrarily close to ψ as x approaches ξ. For any ε > 0
there exists a δ > 0 such that |y(x) − ψ| < ε for all x in the neighborhood
0 < |x − ξ| < δ. The left and right limits,
lim_{x→ξ⁻} y(x) = ψ and lim_{x→ξ⁺} y(x) = ψ
denote the limiting value as x approaches ξ respectively from below and above.
The neighborhoods are respectively −δ < x − ξ < 0 and 0 < x − ξ < δ.
Properties of Limits. Let lim_{x→ξ} u(x) and lim_{x→ξ} v(x) exist.
• lim_{x→ξ} (au(x) + bv(x)) = a lim_{x→ξ} u(x) + b lim_{x→ξ} v(x).
• lim_{x→ξ} (u(x)v(x)) = (lim_{x→ξ} u(x)) (lim_{x→ξ} v(x)).
• lim_{x→ξ} (u(x)/v(x)) = (lim_{x→ξ} u(x)) / (lim_{x→ξ} v(x)) if lim_{x→ξ} v(x) ≠ 0.
3.2 Continuous Functions
Definition of Continuity. A function y(x) is said to be continuous at x = ξ if the value of the
function is equal to its limit, that is, limx→ξ y(x) = y(ξ). Note that this one condition is actually
the three conditions: y(ξ) is defined, limx→ξ y(x) exists and limx→ξ y(x) = y(ξ). A function is
continuous if it is continuous at each point in its domain. A function is continuous on the closed
interval [a, b] if the function is continuous for each point x ∈ (a, b) and lim_{x→a⁺} y(x) = y(a) and
lim_{x→b⁻} y(x) = y(b).
Discontinuous Functions. If a function is not continuous at a point it is called discontinuous
at that point. If limx→ξ y(x) exists but is not equal to y(ξ), then the function has a removable
discontinuity. It is thus named because we could define a continuous function
z(x) = y(x) for x ≠ ξ, lim_{x→ξ} y(x) for x = ξ,
to remove the discontinuity. If both the left and right limit of a function at a point exist, but are
not equal, then the function has a jump discontinuity at that point. If either the left or right limit
of a function does not exist, then the function is said to have an infinite discontinuity at that point.
Example 3.2.1 sin x/x has a removable discontinuity at x = 0. The Heaviside function,
H(x) = 0 for x < 0, 1/2 for x = 0, 1 for x > 0,
has a jump discontinuity at x = 0. 1/x has an infinite discontinuity at x = 0. See Figure 3.3.
Figure 3.3: A Removable discontinuity, a Jump Discontinuity and an Infinite Discontinuity
Properties of Continuous Functions.
Arithmetic. If u(x) and v(x) are continuous at x = ξ then u(x) ± v(x) and u(x)v(x) are continuous
at x = ξ. u(x)/v(x) is continuous at x = ξ if v(ξ) ≠ 0.
Function Composition. If u(x) is continuous at x = ξ and v(x) is continuous at x = µ = u(ξ)
then u(v(x)) is continuous at x = ξ. The composition of continuous functions is a continuous
function.
Boundedness. A function which is continuous on a closed interval is bounded in that closed interval.
Nonzero in a Neighborhood. If y(ξ) ≠ 0 then there exists a neighborhood (ξ − ε, ξ + ε), ε > 0 of
the point ξ such that y(x) ≠ 0 for x ∈ (ξ − ε, ξ + ε).
Intermediate Value Theorem. Let u(x) be continuous on [a, b]. If u(a) ≤ µ ≤ u(b) then there exists
ξ ∈ [a, b] such that u(ξ) = µ. This is known as the intermediate value theorem. A corollary of
this is that if u(a) and u(b) are of opposite sign then u(x) has at least one zero on the interval
(a, b).
Maxima and Minima. If u(x) is continuous on [a, b] then u(x) has a maximum and a minimum on
[a, b]. That is, there is at least one point ξ ∈ [a, b] such that u(ξ) ≥ u(x) for all x ∈ [a, b] and
there is at least one point ψ ∈ [a, b] such that u(ψ) ≤ u(x) for all x ∈ [a, b].
Piecewise Continuous Functions. A function is piecewise continuous on an interval if the
function is bounded on the interval and the interval can be divided into a finite number of intervals
on each of which the function is continuous. For example, the greatest integer function, ⌊x⌋, is
piecewise continuous. (⌊x⌋ is defined to be the greatest integer less than or equal to x.) See
Figure 3.4 for graphs of two piecewise continuous functions.
Figure 3.4: Piecewise Continuous Functions
Uniform Continuity. Consider a function f(x) that is continuous on an interval. This means
that for any point ξ in the interval and any positive ε there exists a δ > 0 such that |f(x) − f(ξ)| < ε
for all 0 < |x − ξ| < δ. In general, this value of δ depends on both ξ and ε. If δ can be chosen so
it is a function of ε alone and independent of ξ then the function is said to be uniformly continuous
on the interval. A sufficient condition for uniform continuity is that the function is continuous on a
closed interval.
3.3 The Derivative
Consider a function y(x) on the interval (x . . . x + ∆x) for some ∆x > 0. We define the increment
∆y = y(x + ∆x) − y(x). The average rate of change, (average velocity), of the function on the
interval is ∆y/∆x. The average rate of change is the slope of the secant line that passes through the
points (x, y(x)) and (x + ∆x, y(x + ∆x)). See Figure 3.5.
Figure 3.5: The increments ∆x and ∆y.
If the slope of the secant line has a limit as ∆x approaches zero then we call this slope the
derivative or instantaneous rate of change of the function at the point x. We denote the derivative
by dy/dx, which is a nice notation as the derivative is the limit of ∆y/∆x as ∆x → 0.
dy/dx ≡ lim_{∆x→0} (y(x + ∆x) − y(x))/∆x.
∆x may approach zero from below or above. It is common to denote the derivative dy/dx by
(d/dx)y, y′(x), y′ or Dy.
A function is said to be differentiable at a point if the derivative exists there. Note that differ-
entiability implies continuity, but not vice versa.
Example 3.3.1 Consider the derivative of y(x) = x² at the point x = 1.
y′(1) ≡ lim_{∆x→0} (y(1 + ∆x) − y(1))/∆x
= lim_{∆x→0} ((1 + ∆x)² − 1)/∆x
= lim_{∆x→0} (2 + ∆x)
= 2
Figure 3.6 shows the secant lines approaching the tangent line as ∆x approaches zero from above
and below.
Example 3.3.2 We can compute the derivative of y(x) = x² at an arbitrary point x.
(d/dx) x² = lim_{∆x→0} ((x + ∆x)² − x²)/∆x
= lim_{∆x→0} (2x + ∆x)
= 2x
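One can watch the difference quotient converge to 2x numerically; a small illustrative Python sketch (the point x = 3 is an arbitrary choice):

    x = 3.0
    for dx in (0.1, 0.01, 0.001):
        print(((x + dx)**2 - x**2) / dx)  # 6.1, 6.01, 6.001 -> 2x = 6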
Figure 3.6: Secant lines and the tangent to x² at x = 1.
Properties. Let u(x) and v(x) be differentiable. Let a and b be constants. Some fundamental
properties of derivatives are:
(d/dx)(au + bv) = a du/dx + b dv/dx    Linearity
(d/dx)(uv) = (du/dx)v + u(dv/dx)    Product Rule
(d/dx)(u/v) = (v du/dx − u dv/dx)/v²    Quotient Rule
(d/dx)(uᵃ) = a uᵃ⁻¹ du/dx    Power Rule
(d/dx)(u(v(x))) = (du/dv)(dv/dx) = u′(v(x))v′(x)    Chain Rule
These can be proved by using the definition of differentiation.
Example 3.3.3 Prove the quotient rule for derivatives.
(d/dx)(u/v) = lim_{∆x→0} ( u(x + ∆x)/v(x + ∆x) − u(x)/v(x) ) / ∆x
= lim_{∆x→0} ( u(x + ∆x)v(x) − u(x)v(x + ∆x) ) / ( ∆x v(x)v(x + ∆x) )
= lim_{∆x→0} ( u(x + ∆x)v(x) − u(x)v(x) − u(x)v(x + ∆x) + u(x)v(x) ) / ( ∆x v(x)v(x) )
= lim_{∆x→0} ( (u(x + ∆x) − u(x))v(x) − u(x)(v(x + ∆x) − v(x)) ) / ( ∆x v²(x) )
= ( lim_{∆x→0} ((u(x + ∆x) − u(x))/∆x) v(x) − u(x) lim_{∆x→0} ((v(x + ∆x) − v(x))/∆x) ) / v²(x)
= (v du/dx − u dv/dx)/v²
Trigonometric Functions. Some derivatives of trigonometric functions are:
d/dx sin x = cos x        d/dx arcsin x = 1/(1 − x²)^(1/2)
d/dx cos x = − sin x      d/dx arccos x = −1/(1 − x²)^(1/2)
d/dx tan x = 1/cos² x     d/dx arctan x = 1/(1 + x²)
d/dx eˣ = eˣ              d/dx ln x = 1/x
d/dx sinh x = cosh x      d/dx arcsinh x = 1/(x² + 1)^(1/2)
d/dx cosh x = sinh x      d/dx arccosh x = 1/(x² − 1)^(1/2)
d/dx tanh x = 1/cosh² x   d/dx arctanh x = 1/(1 − x²)
Example 3.3.4 We can evaluate the derivative of xˣ by using the identity aᵇ = e^(b ln a).
(d/dx) xˣ = (d/dx) e^(x ln x)
= e^(x ln x) (d/dx)(x ln x)
= xˣ (1 · ln x + x · (1/x))
= xˣ (1 + ln x)
Inverse Functions. If we have a function y(x), we can consider x as a function of y, x(y). For
example, if y(x) = 8x³ then x(y) = ∛y/2; if y(x) = (x + 2)/(x + 1) then x(y) = (2 − y)/(y − 1).
The derivative of an inverse function is
(d/dy) x(y) = 1/(dy/dx).
Example 3.3.5 The inverse function of y(x) = eˣ is x(y) = ln y. We can obtain the derivative of
the logarithm from the derivative of the exponential. The derivative of the exponential is
dy/dx = eˣ.
Thus the derivative of the logarithm is
(d/dy) ln y = (d/dy) x(y) = 1/(dy/dx) = 1/eˣ = 1/y.
3.4 Implicit Differentiation
An explicitly defined function has the form y = f(x). An implicitly defined function has the form
f(x, y) = 0. A few examples of implicit functions are x² + y² − 1 = 0 and x + y + sin(xy) = 0. Often
it is not possible to write an implicit equation in explicit form. This is true of the latter example
above. One can calculate the derivative of y(x) in terms of x and y even when y(x) is defined by an
implicit equation.
Example 3.4.1 Consider the implicit equation
x² − xy − y² = 1.
This implicit equation can be solved for the dependent variable.
y(x) = (1/2)(−x ± √(5x² − 4)).
We can differentiate this expression to obtain
y′ = (1/2)(−1 ± 5x/√(5x² − 4)).
One can obtain the same result without first solving for y. If we differentiate the implicit equation,
we obtain
2x − y − x dy/dx − 2y dy/dx = 0.
We can solve this equation for dy/dx.
dy/dx = (2x − y)/(x + 2y)
We can differentiate this expression to obtain the second derivative of y.
d²y/dx² = ((x + 2y)(2 − y′) − (2x − y)(1 + 2y′))/(x + 2y)²
= 5(y − xy′)/(x + 2y)²
Substitute in the expression for y′.
= −10(x² − xy − y²)/(x + 2y)³
Use the original implicit equation.
= −10/(x + 2y)³
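The first derivative can be checked symbolically via the implicit-function formula dy/dx = −f_x/f_y; a sketch (an added illustration, assuming the SymPy library is available):

    import sympy as sp

    x, y = sp.symbols('x y')
    f = x**2 - x*y - y**2 - 1
    dydx = sp.simplify(-sp.diff(f, x) / sp.diff(f, y))
    print(dydx)  # equivalent to (2*x - y)/(x + 2*y)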
3.5 Maxima and Minima
A differentiable function is increasing where f′(x) > 0, decreasing where f′(x) < 0 and stationary
where f′(x) = 0.
A function f(x) has a relative maxima at a point x = ξ if there exists a neighborhood around
ξ such that f(x) ≤ f(ξ) for x ∈ (ξ − δ, ξ + δ), δ > 0. The relative minima is defined analogously.
Note that this definition does not require that the function be differentiable, or even continuous.
We refer to relative maxima and minima collectively as relative extrema.
Relative Extrema and Stationary Points. If f(x) is differentiable and f(ξ) is a relative ex-
trema then x = ξ is a stationary point, f′(ξ) = 0. We can prove this using left and right limits.
Assume that f(ξ) is a relative maxima. Then there is a neighborhood (ξ − δ, ξ + δ), δ > 0 for which
f(x) ≤ f(ξ). Since f(x) is differentiable the derivative at x = ξ,
f′(ξ) = lim_{∆x→0} (f(ξ + ∆x) − f(ξ))/∆x,
exists. This in turn means that the left and right limits exist and are equal. Since f(x) ≤ f(ξ) for
ξ − δ < x < ξ, the numerator and denominator of the difference quotient are both non-positive
there, so the left limit is non-negative,
f′(ξ) = lim_{∆x→0⁻} (f(ξ + ∆x) − f(ξ))/∆x ≥ 0.
Since f(x) ≤ f(ξ) for ξ < x < ξ + δ the right limit is non-positive,
f′(ξ) = lim_{∆x→0⁺} (f(ξ + ∆x) − f(ξ))/∆x ≤ 0.
Thus we have 0 ≤ f′(ξ) ≤ 0 which implies that f′(ξ) = 0.
It is not true that all stationary points are relative extrema. That is, f′(ξ) = 0 does not imply
that x = ξ is an extrema. Consider the function f(x) = x³. x = 0 is a stationary point since
f′(x) = 3x², f′(0) = 0. However, x = 0 is neither a relative maxima nor a relative minima.
It is also not true that all relative extrema are stationary points. Consider the function f(x) = |x|.
The point x = 0 is a relative minima, but the derivative at that point is undefined.
First Derivative Test. Let f(x) be differentiable and f′(ξ) = 0.
• If f′(x) changes sign from positive to negative as we pass through x = ξ then the point is a
relative maxima.
• If f′(x) changes sign from negative to positive as we pass through x = ξ then the point is a
relative minima.
• If f′(x) is not identically zero in a neighborhood of x = ξ and it does not change sign as we
pass through the point then x = ξ is not a relative extrema.
Example 3.5.1 Consider y = x² and the point x = 0. The function is differentiable. The derivative,
y′ = 2x, vanishes at x = 0. Since y′(x) is negative for x < 0 and positive for x > 0, the point x = 0
is a relative minima. See Figure 3.7.
Example 3.5.2 Consider y = cos x and the point x = 0. The function is differentiable. The
derivative, y′ = − sin x is positive for −π < x < 0 and negative for 0 < x < π. Since the sign of y′
goes from positive to negative, x = 0 is a relative maxima. See Figure 3.7.
Example 3.5.3 Consider y = x³ and the point x = 0. The function is differentiable. The derivative,
y′ = 3x² is positive for x < 0 and positive for 0 < x. Since y′ is not identically zero and the sign of
y′ does not change, x = 0 is not a relative extrema. See Figure 3.7.
Figure 3.7: Graphs of x², cos x and x³.
Concavity. If the portion of a curve in some neighborhood of a point lies above the tangent line
through that point, the curve is said to be concave upward. If it lies below the tangent it is concave
downward. If a function is twice differentiable then f″(x) > 0 where it is concave upward and
f″(x) < 0 where it is concave downward. Note that f″(x) > 0 is a sufficient, but not a necessary
condition for a curve to be concave upward at a point. A curve may be concave upward at a point
where the second derivative vanishes. A point where the curve changes concavity is called a point
of inflection. At such a point the second derivative vanishes, f″(x) = 0. For twice continuously
differentiable functions, f″(x) = 0 is a necessary but not a sufficient condition for an inflection point.
The second derivative may vanish at places which are not inflection points. See Figure 3.8.
Figure 3.8: Concave Upward, Concave Downward and an Inflection Point.
Second Derivative Test. Let f(x) be twice differentiable and let x = ξ be a stationary point,
f′(ξ) = 0.
• If f″(ξ) < 0 then the point is a relative maxima.
• If f″(ξ) > 0 then the point is a relative minima.
• If f″(ξ) = 0 then the test fails.
Example 3.5.4 Consider the function f(x) = cos x and the point x = 0. The derivatives of the
function are f′(x) = − sin x, f″(x) = − cos x. The point x = 0 is a stationary point, f′(0) =
− sin(0) = 0. Since the second derivative is negative there, f″(0) = − cos(0) = −1, the point is a
relative maxima.
Example 3.5.5 Consider the function f(x) = x⁴ and the point x = 0. The derivatives of the
function are f′(x) = 4x³, f″(x) = 12x². The point x = 0 is a stationary point. Since the second
derivative also vanishes at that point the second derivative test fails. One must use the first derivative
test to determine that x = 0 is a relative minima.
3.6 Mean Value Theorems
Rolle’s Theorem. If f(x) is continuous in [a, b], differentiable in (a, b) and f(a) = f(b) = 0 then
there exists a point ξ ∈ (a, b) such that f′(ξ) = 0. See Figure 3.9.
Figure 3.9: Rolle’s Theorem.
To prove this we consider two cases. First we have the trivial case that f(x) ≡ 0. If f(x) is not
identically zero then continuity implies that it must have a nonzero relative maxima or minima in
(a, b). Let x = ξ be one of these relative extrema. Since f(x) is differentiable, x = ξ must be a
stationary point, f′(ξ) = 0.
Theorem of the Mean. If f(x) is continuous in [a, b] and differentiable in (a, b) then there exists
a point x = ξ such that
f′(ξ) = (f(b) − f(a))/(b − a).
That is, there is a point where the instantaneous velocity is equal to the average velocity on the
interval.
Figure 3.10: Theorem of the Mean.
We prove this theorem by applying Rolle’s theorem. Consider the new function
g(x) = f(x) − f(a) − ((f(b) − f(a))/(b − a))(x − a)
Note that g(a) = g(b) = 0, so it satisfies the conditions of Rolle’s theorem. There is a point x = ξ
such that g′(ξ) = 0. We differentiate the expression for g(x) and substitute in x = ξ to obtain the
result.
g′(x) = f′(x) − (f(b) − f(a))/(b − a)
g′(ξ) = f′(ξ) − (f(b) − f(a))/(b − a) = 0
f′(ξ) = (f(b) − f(a))/(b − a)
Generalized Theorem of the Mean. If f(x) and g(x) are continuous in [a, b] and differentiable
in (a, b), then there exists a point x = ξ such that

    f'(ξ)/g'(ξ) = (f(b) − f(a))/(g(b) − g(a)).

We have assumed that g(a) ≠ g(b) so that the denominator does not vanish and that f'(x) and g'(x)
are not simultaneously zero, which would produce an indeterminate form. Note that this theorem
reduces to the regular theorem of the mean when g(x) = x. The proof of the theorem is similar to
that for the theorem of the mean.
Taylor's Theorem of the Mean. If f(x) is n + 1 times continuously differentiable in (a, b) then
there exists a point x = ξ ∈ (a, b) such that

    f(b) = f(a) + (b − a)f'(a) + ((b − a)^2/2!) f''(a) + · · · + ((b − a)^n/n!) f^(n)(a)
           + ((b − a)^(n+1)/(n + 1)!) f^(n+1)(ξ).    (3.1)

For the case n = 0, the formula is

    f(b) = f(a) + (b − a)f'(ξ),

which is just a rearrangement of the terms in the theorem of the mean,

    f'(ξ) = (f(b) − f(a))/(b − a).
3.6.1 Application: Using Taylor’s Theorem to Approximate Functions.
One can use Taylor's theorem to approximate functions with polynomials. Consider an infinitely
differentiable function f(x) and a point x = a. Substituting x for b into Equation 3.1 we obtain,

    f(x) = f(a) + (x − a)f'(a) + ((x − a)^2/2!) f''(a) + · · · + ((x − a)^n/n!) f^(n)(a)
           + ((x − a)^(n+1)/(n + 1)!) f^(n+1)(ξ).

If the last term in the sum is small then we can approximate our function with an nth order
polynomial.

    f(x) ≈ f(a) + (x − a)f'(a) + ((x − a)^2/2!) f''(a) + · · · + ((x − a)^n/n!) f^(n)(a)
The last term in the expansion above is called the remainder or the error term,

    Rn = ((x − a)^(n+1)/(n + 1)!) f^(n+1)(ξ).

Since the function is infinitely differentiable, f^(n+1)(ξ) exists and is bounded. Therefore we
note that the error must vanish as x → a because of the (x − a)^(n+1) factor. We therefore suspect
that our approximation would be a good one if x is close to a. Also note that n! eventually grows
faster than (x − a)^n,

    lim_{n→∞} (x − a)^n/n! = 0.

So if the derivative term, f^(n+1)(ξ), does not grow too quickly, the error for a certain value of
x will get smaller with increasing n and the polynomial will become a better approximation of the
function. (It is also possible that the derivative factor grows very quickly and the approximation
gets worse with increasing n.)
Example 3.6.1 Consider the function f(x) = e^x. We want a polynomial approximation of this
function near the point x = 0. Since the derivative of e^x is e^x, the value of all the derivatives
at x = 0 is f^(n)(0) = e^0 = 1. Taylor's theorem thus states that

    e^x = 1 + x + x^2/2! + x^3/3! + · · · + x^n/n! + (x^(n+1)/(n + 1)!) e^ξ,

for some ξ ∈ (0, x). The first few polynomial approximations of the exponential about the point
x = 0 are

    f1(x) = 1
    f2(x) = 1 + x
    f3(x) = 1 + x + x^2/2
    f4(x) = 1 + x + x^2/2 + x^3/6

The four approximations are graphed in Figure 3.11.
Note that for the range of x we are looking at, the approximations become more accurate as the
number of terms increases.
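A small numerical sketch (an addition of mine, not from the text) that evaluates these partial
sums and their errors:

    import math

    def taylor_exp(x, n):
        # n-term Taylor polynomial of e^x about x = 0: sum of x^k/k! for k < n
        return sum(x**k / math.factorial(k) for k in range(n))

    x = 0.5
    for n in range(1, 5):
        print(n, taylor_exp(x, n), math.exp(x) - taylor_exp(x, n))
    # the error shrinks as the number of terms grows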
Example 3.6.2 Consider the function f(x) = cos x. We want a polynomial approximation of this
function near the point x = 0. The first few derivatives of f are

    f(x) = cos x
    f'(x) = −sin x
    f''(x) = −cos x
    f'''(x) = sin x
    f^(4)(x) = cos x
Figure 3.11: Four Finite Taylor Series Approximations of e^x
It's easy to pick out the pattern here,

    f^(n)(x) = { (−1)^(n/2) cos x       for even n,
                 (−1)^((n+1)/2) sin x   for odd n. }

Since cos(0) = 1 and sin(0) = 0 the n-term approximation of the cosine is,

    cos x = 1 − x^2/2! + x^4/4! − x^6/6! + · · · + (−1)^(n−1) x^(2(n−1))/(2(n − 1))!
            + (−1)^n (x^(2n)/(2n)!) cos ξ.
Here are graphs of the one, two, three and four term approximations.
Figure 3.12: Taylor Series Approximations of cos x
Note that for the range of x we are looking at, the approximations become more accurate as the
number of terms increases. Consider the ten term approximation of the cosine about x = 0,

    cos x = 1 − x^2/2! + x^4/4! − · · · − x^18/18! + (x^20/20!) cos ξ.

Note that for any value of ξ, |cos ξ| ≤ 1. Therefore the absolute value of the error term
satisfies,

    |R| = |(x^20/20!) cos ξ| ≤ |x|^20/20!.

x^20/20! is plotted in Figure 3.13.
Note that the error is very small for x < 6, fairly small but non-negligible for x ≈ 7 and large
for x > 8. The ten term approximation of the cosine, plotted in Figure 3.14, behaves just as we
would predict. The error is very small until it becomes non-negligible at x ≈ 7 and large at
x ≈ 8.
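The bound is easy to verify numerically; the following sketch (mine, not part of the original)
compares the actual error of the ten term approximation with |x|^20/20!:

    import math

    def cos10(x):
        # ten term approximation: sum of (-1)^k x^(2k)/(2k)! for k = 0..9
        return sum((-1)**k * x**(2*k) / math.factorial(2*k) for k in range(10))

    for x in (4.0, 6.0, 7.0, 8.0):
        err = abs(cos10(x) - math.cos(x))
        bound = abs(x)**20 / math.factorial(20)
        print(x, err, bound)
    # the error stays below the bound; both become non-negligible near x = 7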
Figure 3.13: Plot of x^20/20!.

Figure 3.14: Ten Term Taylor Series Approximation of cos x

Example 3.6.3 Consider the function f(x) = ln x. We want a polynomial approximation of this
function near the point x = 1. The first few derivatives of f are
    f(x) = ln x
    f'(x) = 1/x
    f''(x) = −1/x^2
    f'''(x) = 2/x^3
    f^(4)(x) = −3!/x^4

The derivatives evaluated at x = 1 are

    f(1) = 0,    f^(n)(1) = (−1)^(n−1) (n − 1)!, for n ≥ 1.
By Taylor's theorem of the mean we have,

    ln x = (x − 1) − (x − 1)^2/2 + (x − 1)^3/3 − (x − 1)^4/4 + · · · + (−1)^(n−1) (x − 1)^n/n
           + (−1)^n ((x − 1)^(n+1)/(n + 1)) (1/ξ^(n+1)).
Below are plots of the 2, 4, 10 and 50 term approximations.
Note that the approximation gets better on the interval (0, 2) and worse outside this interval as
the number of terms increases. The Taylor series converges to ln x only on this interval.
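A brief numerical illustration (my addition, not from the text) of this convergence interval,
using the partial sums of the series above:

    import math

    def ln_series(x, n):
        # n-term partial sum: sum of (-1)^(k-1) (x-1)^k / k for k = 1..n
        return sum((-1)**(k - 1) * (x - 1)**k / k for k in range(1, n + 1))

    for x in (0.5, 1.5, 2.5):
        print(x, [ln_series(x, n) - math.log(x) for n in (2, 10, 50)])
    # the error shrinks for x in (0, 2) and grows without bound for x = 2.5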
3.6.2 Application: Finite Difference Schemes
Example 3.6.4 Suppose you sample a function at the discrete points n∆x, n ∈ Z. In Figure 3.16
we sample the function f(x) = sin x on the interval [−4, 4] with ∆x = 1/4 and plot the data points.
Figure 3.15: The 2, 4, 10 and 50 Term Approximations of ln x

Figure 3.16: Sampling of sin x
We wish to approximate the derivative of the function on the grid points using only the value
of the function on those discrete points. From the definition of the derivative, one is led to the
formula

    f'(x) ≈ (f(x + ∆x) − f(x))/∆x.    (3.2)

Taylor's theorem states that

    f(x + ∆x) = f(x) + ∆x f'(x) + (∆x^2/2) f''(ξ).

Substituting this expression into our formula for approximating the derivative we obtain

    (f(x + ∆x) − f(x))/∆x = (f(x) + ∆x f'(x) + (∆x^2/2) f''(ξ) − f(x))/∆x
                          = f'(x) + (∆x/2) f''(ξ).

Thus we see that the error in our approximation of the first derivative is (∆x/2) f''(ξ). Since the
error has a linear factor of ∆x, we call this a first order accurate method. Equation 3.2 is called
the forward difference scheme for calculating the first derivative. Figure 3.17 shows a plot of the
value of this scheme for the function f(x) = sin x and ∆x = 1/4. The first derivative of the
function, f'(x) = cos x, is shown for comparison.
Another scheme for approximating the first derivative is the centered difference scheme,

    f'(x) ≈ (f(x + ∆x) − f(x − ∆x))/(2∆x).

Expanding the numerator using Taylor's theorem,

    (f(x + ∆x) − f(x − ∆x))/(2∆x)
      = (f(x) + ∆x f'(x) + (∆x^2/2) f''(x) + (∆x^3/6) f'''(ξ)
         − f(x) + ∆x f'(x) − (∆x^2/2) f''(x) + (∆x^3/6) f'''(ψ)) / (2∆x)
      = f'(x) + (∆x^2/12)(f'''(ξ) + f'''(ψ)).
Figure 3.17: The Forward Difference Scheme Approximation of the Derivative
The error in the approximation is quadratic in ∆x. Therefore this is a second order accurate
scheme. Below is a plot of the derivative of the function and the value of this scheme for the
function f(x) = sin x and ∆x = 1/4.
Figure 3.18: Centered Difference Scheme Approximation of the Derivative
Notice how the centered difference scheme gives a better approximation of the derivative than
the forward difference scheme.
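A direct numerical comparison of the two schemes (a sketch I have added, mirroring the figures)
for f(x) = sin x with ∆x = 1/4:

    import math

    dx = 0.25

    def forward(f, x):
        # first order accurate: error ~ (dx/2) f''(xi)
        return (f(x + dx) - f(x)) / dx

    def centered(f, x):
        # second order accurate: error ~ (dx^2/6) f'''(xi)
        return (f(x + dx) - f(x - dx)) / (2 * dx)

    x = 1.0
    print(abs(forward(math.sin, x) - math.cos(x)))   # error ~ 0.11
    print(abs(centered(math.sin, x) - math.cos(x)))  # error ~ 0.006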
3.7 L’Hospital’s Rule
Some singularities are easy to diagnose. Consider the function (cos x)/x at the point x = 0. The
function evaluates to 1/0 and is thus discontinuous at that point. Since the numerator and
denominator are continuous functions and the denominator vanishes while the numerator does not,
the left and right limits as x → 0 do not exist. Thus the function has an infinite discontinuity at
the point x = 0. More generally, a function which is composed of continuous functions and evaluates
to a/0 at a point where a ≠ 0 must have an infinite discontinuity there.
Other singularities require more analysis to diagnose. Consider the functions (sin x)/x,
(sin x)/|x| and (sin x)/(1 − cos x) at the point x = 0. All three functions evaluate to 0/0 at that
point, but have different kinds of singularities. The first has a removable discontinuity, the
second has a finite discontinuity and the third has an infinite discontinuity. See Figure 3.19.
An expression that evaluates to 0/0, ∞/∞, 0 · ∞, ∞ − ∞, 1^∞, 0^0 or ∞^0 is called an
indeterminate. A function f(x) which is indeterminate at the point x = ξ is singular at that point.
The singularity may be a removable discontinuity, a finite discontinuity or an infinite
discontinuity depending on the behavior of the function around that point. If lim_{x→ξ} f(x)
exists, then the function has a removable discontinuity. If the limit does not exist, but the left
and right limits do exist, then the function has a finite discontinuity. If either the left or
right limit does not exist then the function has an infinite discontinuity.

Figure 3.19: The functions (sin x)/x, (sin x)/|x| and (sin x)/(1 − cos x).
L'Hospital's Rule. Let f(x) and g(x) be differentiable and f(ξ) = g(ξ) = 0. Further, let g(x) be
nonzero in a deleted neighborhood of x = ξ, (g(x) ≠ 0 for 0 < |x − ξ| < δ). Then

    lim_{x→ξ} f(x)/g(x) = lim_{x→ξ} f'(x)/g'(x).

To prove this, we note that f(ξ) = g(ξ) = 0 and apply the generalized theorem of the mean. Note
that

    f(x)/g(x) = (f(x) − f(ξ))/(g(x) − g(ξ)) = f'(ψ)/g'(ψ)

for some ψ between ξ and x. Thus

    lim_{x→ξ} f(x)/g(x) = lim_{ψ→ξ} f'(ψ)/g'(ψ) = lim_{x→ξ} f'(x)/g'(x)

provided that the limits exist.

L'Hospital's Rule is also applicable when both functions tend to infinity instead of zero or when
the limit point, ξ, is at infinity. It is also valid for one-sided limits.

L'Hospital's rule is directly applicable to the indeterminate forms 0/0 and ∞/∞.
Example 3.7.1 Consider the three functions (sin x)/x, (sin x)/|x| and (sin x)/(1 − cos x) at the
point x = 0.

    lim_{x→0} (sin x)/x = lim_{x→0} (cos x)/1 = 1

Thus (sin x)/x has a removable discontinuity at x = 0.

    lim_{x→0+} (sin x)/|x| = lim_{x→0+} (sin x)/x = 1
    lim_{x→0−} (sin x)/|x| = lim_{x→0−} (sin x)/(−x) = −1

Thus (sin x)/|x| has a finite discontinuity at x = 0.

    lim_{x→0} (sin x)/(1 − cos x) = lim_{x→0} (cos x)/(sin x) = 1/0 = ∞

Thus (sin x)/(1 − cos x) has an infinite discontinuity at x = 0.
Example 3.7.2 Let a and d be nonzero.

    lim_{x→∞} (ax^2 + bx + c)/(dx^2 + ex + f) = lim_{x→∞} (2ax + b)/(2dx + e)
                                              = lim_{x→∞} (2a)/(2d)
                                              = a/d
Example 3.7.3 Consider

    lim_{x→0} (cos x − 1)/(x sin x).

This limit is an indeterminate of the form 0/0. Applying L'Hospital's rule we see that the limit is
equal to

    lim_{x→0} (−sin x)/(x cos x + sin x).

This limit is again an indeterminate of the form 0/0. We apply L'Hospital's rule again.

    lim_{x→0} (−cos x)/(−x sin x + 2 cos x) = −1/2

Thus the value of the original limit is −1/2. We could also obtain this result by expanding the
functions in Taylor series.

    lim_{x→0} (cos x − 1)/(x sin x) = lim_{x→0} ((1 − x^2/2 + x^4/24 − · · ·) − 1)/(x (x − x^3/6 + x^5/120 − · · ·))
      = lim_{x→0} (−x^2/2 + x^4/24 − · · ·)/(x^2 − x^4/6 + x^6/120 − · · ·)
      = lim_{x→0} (−1/2 + x^2/24 − · · ·)/(1 − x^2/6 + x^4/120 − · · ·)
      = −1/2
We can apply L’Hospital’s Rule to the indeterminate forms 0 · ∞ and ∞ − ∞ by rewriting the
expression in a different form, (perhaps putting the expression over a common denominator). If at
first you don’t succeed, try, try again. You may have to apply L’Hospital’s rule several times to
evaluate a limit.
Example 3.7.4

    lim_{x→0} (cot x − 1/x) = lim_{x→0} (x cos x − sin x)/(x sin x)
      = lim_{x→0} (cos x − x sin x − cos x)/(sin x + x cos x)
      = lim_{x→0} (−x sin x)/(sin x + x cos x)
      = lim_{x→0} (−x cos x − sin x)/(cos x + cos x − x sin x)
      = 0
You can apply L'Hospital's rule to the indeterminate forms 1^∞, 0^0 or ∞^0 by taking the logarithm
of the expression.
Example 3.7.5 Consider the limit,

    lim_{x→0} x^x,

which gives us the indeterminate form 0^0. The logarithm of the expression is

    ln(x^x) = x ln x.

As x → 0 we now have the indeterminate form 0 · ∞. By rewriting the expression, we can apply
L'Hospital's rule.

    lim_{x→0} (ln x)/(1/x) = lim_{x→0} (1/x)/(−1/x^2) = lim_{x→0} (−x) = 0

Thus the original limit is

    lim_{x→0} x^x = e^0 = 1.
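As a quick numerical sanity check (this snippet is an addition, not from the original), one can
watch x ln x vanish and x^x approach 1:

    import math

    for x in (0.1, 0.01, 1e-4, 1e-8):
        print(x, x * math.log(x), x**x)
    # x ln x -> 0 as x -> 0+, so x^x = exp(x ln x) -> 1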
3.8 Exercises
3.8.1 Limits of Functions
Exercise 3.1
Does lim_{x→0} sin(1/x) exist?
Hint, Solution

Exercise 3.2
Does lim_{x→0} x sin(1/x) exist?
Hint, Solution

Exercise 3.3
Evaluate the limit:

    lim_{n→∞} 5^(1/n).

Hint, Solution
3.8.2 Continuous Functions
Exercise 3.4
Is the function sin(1/x) continuous in the open interval (0, 1)? Is there a value of a such that
the function defined by

    f(x) = { sin(1/x) for x ≠ 0,
             a        for x = 0 }

is continuous on the closed interval [0, 1]?
Hint, Solution
Exercise 3.5
Is the function sin(1/x) uniformly continuous in the open interval (0, 1)?
Hint, Solution
Exercise 3.6
Are the functions √x and 1/x uniformly continuous on the interval (0, 1)?
Hint, Solution
Exercise 3.7
Prove that a function which is continuous on a closed interval is uniformly continuous on that
interval.
Hint, Solution
Exercise 3.8
Prove or disprove each of the following.

1. If lim_{n→∞} a_n = L then lim_{n→∞} a_n^2 = L^2.

2. If lim_{n→∞} a_n^2 = L^2 then lim_{n→∞} a_n = L.

3. If a_n > 0 for all n > 200, and lim_{n→∞} a_n = L, then L > 0.

4. If f : R → R is continuous and lim_{x→∞} f(x) = L, then for n ∈ Z, lim_{n→∞} f(n) = L.

5. If f : R → R is continuous and lim_{n→∞} f(n) = L, then for x ∈ R, lim_{x→∞} f(x) = L.
Hint, Solution
3.8.3 The Derivative
Exercise 3.9 (mathematica/calculus/differential/definition.nb)
Use the definition of differentiation to prove the following identities where f(x) and g(x) are
differentiable functions and n is a positive integer.

1. d/dx (x^n) = n x^(n−1).  (I suggest that you use Newton's binomial formula.)

2. d/dx (f(x)g(x)) = f (dg/dx) + g (df/dx)

3. d/dx (sin x) = cos x.  (You'll need to use some trig identities.)

4. d/dx (f(g(x))) = f'(g(x)) g'(x)
Hint, Solution
Exercise 3.10
Use the definition of differentiation to determine if the following functions are differentiable
at x = 0.

1. f(x) = x|x|

2. f(x) = √(1 + |x|)
Hint, Solution
Exercise 3.11 (mathematica/calculus/differential/rules.nb)
Find the first derivatives of the following:

a. x sin(cos x)

b. f(cos(g(x)))

c. 1/f(ln x)

d. x^(x^x)

e. |x| sin |x|
Hint, Solution
Exercise 3.12 (mathematica/calculus/differential/rules.nb)
Using

    d/dx sin x = cos x and d/dx tan x = 1/cos^2 x,

find the derivatives of arcsin x and arctan x.
Hint, Solution
3.8.4 Implicit Differentiation
Exercise 3.13 (mathematica/calculus/differential/implicit.nb)
Find y'(x), given that x^2 + y^2 = 1. What is y'(1/2)?
Hint, Solution
Exercise 3.14 (mathematica/calculus/differential/implicit.nb)
Find y'(x) and y''(x), given that x^2 − xy + y^2 = 3.
Hint, Solution
3.8.5 Maxima and Minima
Exercise 3.15 (mathematica/calculus/differential/maxima.nb)
Identify any maxima and minima of the following functions.

a. f(x) = x(12 − 2x)^2.

b. f(x) = (x − 2)^(2/3).
Hint, Solution
Exercise 3.16 (mathematica/calculus/differential/maxima.nb)
A cylindrical container with a circular base and an open top is to hold 64 cm^3. Find its
dimensions so that the surface area of the cup is a minimum.
Hint, Solution
3.8.6 Mean Value Theorems
Exercise 3.17
Prove the generalized theorem of the mean. If f(x) and g(x) are continuous in [a, b] and
differentiable in (a, b), then there exists a point x = ξ such that

    f'(ξ)/g'(ξ) = (f(b) − f(a))/(g(b) − g(a)).

Assume that g(a) ≠ g(b) so that the denominator does not vanish and that f'(x) and g'(x) are not
simultaneously zero, which would produce an indeterminate form.
Hint, Solution
Exercise 3.18 (mathematica/calculus/differential/taylor.nb)
Find a polynomial approximation of sin x on the interval [−1, 1] that has a maximum error of
1/1000. Don't use any more terms than you need to. Prove the error bound. Use your polynomial to
approximate sin 1.
Hint, Solution
Exercise 3.19 (mathematica/calculus/differential/taylor.nb)
You use the formula (f(x + ∆x) − 2f(x) + f(x − ∆x))/∆x^2 to approximate f''(x). What is the error
in this approximation?
Hint, Solution
Exercise 3.20
The formulas (f(x + ∆x) − f(x))/∆x and (f(x + ∆x) − f(x − ∆x))/(2∆x) are first and second order
accurate schemes for approximating the first derivative f'(x). Find a couple other schemes that
have successively higher orders of accuracy. Would these higher order schemes actually give a
better approximation of f'(x)? Remember that ∆x is small, but not infinitesimal.
Hint, Solution
3.8.7 L’Hospital’s Rule
Exercise 3.21 (mathematica/calculus/differential/lhospitals.nb)
Evaluate the following limits.

a. lim_{x→0} (x − sin x)/x^3

b. lim_{x→0} (csc x − 1/x)

c. lim_{x→+∞} (1 + 1/x)^x

d. lim_{x→0} (csc^2 x − 1/x^2).  (First evaluate using L'Hospital's rule then using a Taylor
   series expansion. You will find that the latter method is more convenient.)
Hint, Solution
Exercise 3.22 (mathematica/calculus/differential/lhospitals.nb)
Evaluate the following limits,

    lim_{x→∞} x^(a/x),    lim_{x→∞} (1 + a/x)^(bx),

where a and b are constants.
Hint, Solution
3.9 Hints
Hint 3.1
Apply the ε, δ definition of a limit.

Hint 3.2
Set y = 1/x. Consider the limit as y → ∞.

Hint 3.3
Write 5^(1/n) in terms of the exponential function.
Hint 3.4
The composition of continuous functions is continuous. Apply the definition of continuity and look
at the point x = 0.
Hint 3.5
Note that for x1 = 1/((n − 1/2)π) and x2 = 1/((n + 1/2)π) where n ∈ Z we have
|sin(1/x1) − sin(1/x2)| = 2.
Hint 3.6
Note that the function √(x + δ) − √x is a decreasing function of x and an increasing function of δ
for positive x and δ. Bound this function for fixed δ.
Consider any positive δ and ε. For what values of x is

    1/x − 1/(x + δ) > ε?
Hint 3.7
Let the function f(x) be continuous on a closed interval. Consider the function

    e(x, δ) = sup_{|ξ−x|<δ} |f(ξ) − f(x)|.

Bound e(x, δ) with a function of δ alone.
Hint 3.8
CONTINUE

1. If lim_{n→∞} a_n = L then lim_{n→∞} a_n^2 = L^2.

2. If lim_{n→∞} a_n^2 = L^2 then lim_{n→∞} a_n = L.

3. If a_n > 0 for all n > 200, and lim_{n→∞} a_n = L, then L > 0.

4. If f : R → R is continuous and lim_{x→∞} f(x) = L, then for n ∈ Z, lim_{n→∞} f(n) = L.

5. If f : R → R is continuous and lim_{n→∞} f(n) = L, then for x ∈ R, lim_{x→∞} f(x) = L.
Hint 3.9
a. Newton's binomial formula is

    (a + b)^n = Σ_{k=0}^{n} C(n, k) a^(n−k) b^k
              = a^n + n a^(n−1) b + (n(n − 1)/2) a^(n−2) b^2 + · · · + n a b^(n−1) + b^n.

Recall that the binomial coefficient is

    C(n, k) = n!/((n − k)! k!).

b. Note that

    d/dx (f(x)g(x)) = lim_{∆x→0} (f(x + ∆x)g(x + ∆x) − f(x)g(x))/∆x

and

    g(x)f'(x) + f(x)g'(x) = g(x) lim_{∆x→0} (f(x + ∆x) − f(x))/∆x
                            + f(x) lim_{∆x→0} (g(x + ∆x) − g(x))/∆x.

Fill in the blank.

c. First prove that

    lim_{θ→0} (sin θ)/θ = 1

and

    lim_{θ→0} (cos θ − 1)/θ = 0.

d. Let u = g(x). Consider a nonzero increment ∆x, which induces the increments ∆u and ∆f.
By definition,

    ∆f = f(u + ∆u) − f(u),    ∆u = g(x + ∆x) − g(x),

and ∆f, ∆u → 0 as ∆x → 0. If ∆u ≠ 0 then we have

    ε = ∆f/∆u − df/du → 0 as ∆u → 0.

If ∆u = 0 for some values of ∆x then ∆f also vanishes and we define ε = 0 for these values.
In either case,

    ∆f = (df/du) ∆u + ε ∆u.

Continue from here.
Hint 3.10
Hint 3.11
a. Use the product rule and the chain rule.
b. Use the chain rule.
c. Use the quotient rule and the chain rule.
d. Use the identity a^b = e^(b ln a).

e. For x > 0, the expression is x sin x; for x < 0, the expression is (−x) sin(−x) = x sin x. Do
both cases.

Hint 3.12
Use that x'(y) = 1/y'(x) and the identities cos x = (1 − sin^2 x)^(1/2) and
cos(arctan x) = 1/(1 + x^2)^(1/2).
Hint 3.13
Differentiating the equation

    x^2 + [y(x)]^2 = 1

yields

    2x + 2y(x)y'(x) = 0.

Solve this equation for y'(x) and write y(x) in terms of x.

Hint 3.14
Differentiate the equation and solve for y'(x) in terms of x and y(x). Differentiate the
expression for y'(x) to obtain y''(x). You'll use that

    x^2 − x y(x) + [y(x)]^2 = 3.
Hint 3.15
a. Use the second derivative test.

b. The function is not differentiable at the point x = 2 so you can't use a derivative test at
that point.

Hint 3.16
Let r be the radius and h the height of the cylinder. The volume of the cup is πr^2 h = 64. The
radius and height are related by h = 64/(πr^2). The surface area of the cup is
f(r) = πr^2 + 2πrh = πr^2 + 128/r. Use the second derivative test to find the minimum of f(r).
Hint 3.17
The proof is analogous to the proof of the theorem of the mean.
Hint 3.18
The first few terms in the Taylor series of sin(x) about x = 0 are

    sin(x) = x − x^3/6 + x^5/120 − x^7/5040 + x^9/362880 + · · · .

When determining the error, use the fact that |cos x0| ≤ 1 and |x^n| ≤ 1 for x ∈ [−1, 1].
Hint 3.19
The terms in the approximation have the Taylor series,

    f(x + ∆x) = f(x) + ∆x f'(x) + (∆x^2/2) f''(x) + (∆x^3/6) f'''(x) + (∆x^4/24) f''''(x1),
    f(x − ∆x) = f(x) − ∆x f'(x) + (∆x^2/2) f''(x) − (∆x^3/6) f'''(x) + (∆x^4/24) f''''(x2),

where x ≤ x1 ≤ x + ∆x and x − ∆x ≤ x2 ≤ x.
Hint 3.20
Hint 3.21
a. Apply L'Hospital's rule three times.

b. You can write the expression as

    (x − sin x)/(x sin x).

c. Find the limit of the logarithm of the expression.

d. It takes four successive applications of L'Hospital's rule to evaluate the limit.
For the Taylor series expansion method,

    csc^2 x − 1/x^2 = (x^2 − sin^2 x)/(x^2 sin^2 x)
                    = (x^2 − (x − x^3/6 + O(x^5))^2)/(x^2 (x + O(x^3))^2).
Hint 3.22
To evaluate the limits use the identity a^b = e^(b ln a) and then apply L'Hospital's rule.
3.10 Solutions
Solution 3.1
Note that in any open neighborhood of zero, (−δ, δ), the function sin(1/x) takes on all values in
the interval [−1, 1]. Thus if we choose a positive ε such that ε < 1 then there is no value of ψ
for which |sin(1/x) − ψ| < ε for all x ∈ (−δ, δ). Thus the limit does not exist.
Solution 3.2
We make the change of variables y = 1/x and consider y → ∞. We use that sin(y) is bounded.

    lim_{x→0} x sin(1/x) = lim_{y→∞} (1/y) sin(y) = 0
Solution 3.3
We write 5^(1/n) in terms of the exponential function and then evaluate the limit.

    lim_{n→∞} 5^(1/n) = lim_{n→∞} exp((ln 5)/n) = exp(lim_{n→∞} (ln 5)/n) = e^0 = 1
Solution 3.4
Since 1/x is continuous in the interval (0, 1) and the function sin(x) is continuous everywhere,
the composition sin(1/x) is continuous in the interval (0, 1).
Since lim_{x→0} sin(1/x) does not exist, there is no way of defining sin(1/x) at x = 0 to produce
a function that is continuous in [0, 1].
Solution 3.5
Note that for x1 = 1/((n − 1/2)π) and x2 = 1/((n + 1/2)π) where n ∈ Z we have
|sin(1/x1) − sin(1/x2)| = 2. Thus for any 0 < ε < 2 there is no value of δ > 0 such that
|sin(1/x1) − sin(1/x2)| < ε for all x1, x2 ∈ (0, 1) with |x1 − x2| < δ. Thus sin(1/x) is not
uniformly continuous in the open interval (0, 1).
Solution 3.6
First consider the function √x. Note that the function √(x + δ) − √x is a decreasing function of x
and an increasing function of δ for positive x and δ. Thus for any fixed δ, the maximum value of
√(x + δ) − √x is bounded by √δ. Therefore on the interval (0, 1), a sufficient condition for
|√x − √ξ| < ε is |x − ξ| < ε^2. The function √x is uniformly continuous on the interval (0, 1).
Consider any positive δ and ε. Note that

    1/x − 1/(x + δ) > ε

for

    x < (1/2)(√(δ^2 + 4δ/ε) − δ).

Thus there is no value of δ such that

    |1/x − 1/ξ| < ε

for all |x − ξ| < δ. The function 1/x is not uniformly continuous on the interval (0, 1).
Solution 3.7
Let the function f(x) be continuous on a closed interval. Consider the function

    e(x, δ) = sup_{|ξ−x|<δ} |f(ξ) − f(x)|.

Since f(x) is continuous, e(x, δ) is a continuous function of x on the same closed interval. Since
continuous functions on closed intervals are bounded, there is a continuous, increasing function
ε(δ) satisfying

    e(x, δ) ≤ ε(δ),

for all x in the closed interval. Since ε(δ) is continuous and increasing, it has an inverse δ(ε).
Now note that |f(x) − f(ξ)| < ε for all x and ξ in the closed interval satisfying |x − ξ| < δ(ε).
Thus the function is uniformly continuous in the closed interval.
Solution 3.8
1. The statement

    lim_{n→∞} a_n = L

is equivalent to

    ∀ ε > 0, ∃ N s.t. n > N ⇒ |a_n − L| < ε.

We want to show that

    ∀ δ > 0, ∃ M s.t. n > M ⇒ |a_n^2 − L^2| < δ.

Suppose that |a_n − L| < ε. We obtain an upper bound on |a_n^2 − L^2|.

    |a_n^2 − L^2| = |a_n − L||a_n + L| < ε(|2L| + ε)

Now we choose a value of ε such that |a_n^2 − L^2| < δ:

    ε(|2L| + ε) = δ,
    ε = √(L^2 + δ) − |L|.

Consider any fixed δ > 0. We see that since

    for ε = √(L^2 + δ) − |L|, ∃ N s.t. n > N ⇒ |a_n − L| < ε

implies that

    n > N ⇒ |a_n^2 − L^2| < δ,

we have

    ∀ δ > 0, ∃ M s.t. n > M ⇒ |a_n^2 − L^2| < δ.

We conclude that lim_{n→∞} a_n^2 = L^2.

2. lim_{n→∞} a_n^2 = L^2 does not imply that lim_{n→∞} a_n = L. Consider a_n = −1. In this case
lim_{n→∞} a_n^2 = 1 and lim_{n→∞} a_n = −1.

3. If a_n > 0 for all n > 200, and lim_{n→∞} a_n = L, then L is not necessarily positive.
Consider a_n = 1/n, which satisfies the two constraints.

    lim_{n→∞} 1/n = 0

4. The statement lim_{x→∞} f(x) = L is equivalent to

    ∀ ε > 0, ∃ X s.t. x > X ⇒ |f(x) − L| < ε.

This implies that for n > X, |f(n) − L| < ε. Hence

    ∀ ε > 0, ∃ N s.t. n > N ⇒ |f(n) − L| < ε,
    lim_{n→∞} f(n) = L.

5. If f : R → R is continuous and lim_{n→∞} f(n) = L, then for x ∈ R, it is not necessarily true
that lim_{x→∞} f(x) = L. Consider f(x) = sin(πx).

    lim_{n→∞} sin(πn) = lim_{n→∞} 0 = 0

lim_{x→∞} sin(πx) does not exist.
Solution 3.9
a.

    d/dx (x^n) = lim_{∆x→0} ((x + ∆x)^n − x^n)/∆x
               = lim_{∆x→0} ((x^n + n x^(n−1) ∆x + (n(n − 1)/2) x^(n−2) ∆x^2 + · · · + ∆x^n) − x^n)/∆x
               = lim_{∆x→0} (n x^(n−1) + (n(n − 1)/2) x^(n−2) ∆x + · · · + ∆x^(n−1))
               = n x^(n−1)

    d/dx (x^n) = n x^(n−1)

b.

    d/dx (f(x)g(x)) = lim_{∆x→0} (f(x + ∆x)g(x + ∆x) − f(x)g(x))/∆x
      = lim_{∆x→0} ([f(x + ∆x)g(x + ∆x) − f(x)g(x + ∆x)] + [f(x)g(x + ∆x) − f(x)g(x)])/∆x
      = lim_{∆x→0} g(x + ∆x) · lim_{∆x→0} (f(x + ∆x) − f(x))/∆x + f(x) lim_{∆x→0} (g(x + ∆x) − g(x))/∆x
      = g(x)f'(x) + f(x)g'(x)

    d/dx (f(x)g(x)) = f(x)g'(x) + f'(x)g(x)

c. Consider a right triangle with hypotenuse of length 1 in the first quadrant of the plane. Label
the vertices A, B, C, in clockwise order, starting with the vertex at the origin. The angle of A
is θ. The length of a circular arc of radius cos θ that connects C to the hypotenuse is θ cos θ.
The length of the side BC is sin θ. The length of a circular arc of radius 1 that connects B to
the x axis is θ. (See Figure 3.20.)

Figure 3.20: The right triangle and the circular arcs of lengths θ cos θ, sin θ and θ.

Considering the length of these three curves gives us the inequality:

    θ cos θ ≤ sin θ ≤ θ.

Dividing by θ,

    cos θ ≤ (sin θ)/θ ≤ 1.

Taking the limit as θ → 0 gives us

    lim_{θ→0} (sin θ)/θ = 1.

One more little tidbit we'll need to know is

    lim_{θ→0} (cos θ − 1)/θ = lim_{θ→0} ((cos θ − 1)/θ)((cos θ + 1)/(cos θ + 1))
                            = lim_{θ→0} (cos^2 θ − 1)/(θ(cos θ + 1))
                            = lim_{θ→0} (−sin^2 θ)/(θ(cos θ + 1))
                            = lim_{θ→0} ((−sin θ)/θ) · lim_{θ→0} (sin θ)/(cos θ + 1)
                            = (−1)(0/2)
                            = 0.

Now we're ready to find the derivative of sin x.

    d/dx (sin x) = lim_{∆x→0} (sin(x + ∆x) − sin x)/∆x
                 = lim_{∆x→0} (cos x sin ∆x + sin x cos ∆x − sin x)/∆x
                 = cos x lim_{∆x→0} (sin ∆x)/∆x + sin x lim_{∆x→0} (cos ∆x − 1)/∆x
                 = cos x

    d/dx (sin x) = cos x

d. Let u = g(x). Consider a nonzero increment ∆x, which induces the increments ∆u and ∆f.
By definition,

    ∆f = f(u + ∆u) − f(u),    ∆u = g(x + ∆x) − g(x),

and ∆f, ∆u → 0 as ∆x → 0. If ∆u ≠ 0 then we have

    ε = ∆f/∆u − df/du → 0 as ∆u → 0.

If ∆u = 0 for some values of ∆x then ∆f also vanishes and we define ε = 0 for these values.
In either case,

    ∆f = (df/du) ∆u + ε ∆u.

We divide this equation by ∆x and take the limit as ∆x → 0.

    df/dx = lim_{∆x→0} ∆f/∆x
          = lim_{∆x→0} ((df/du)(∆u/∆x) + ε(∆u/∆x))
          = (df/du) lim_{∆x→0} ∆u/∆x + (lim_{∆x→0} ε)(lim_{∆x→0} ∆u/∆x)
          = (df/du)(du/dx) + (0)(du/dx)
          = (df/du)(du/dx)

Thus we see that

    d/dx (f(g(x))) = f'(g(x)) g'(x).
Solution 3.10
1.

    f'(0) = lim_{ε→0} (ε|ε| − 0)/ε = lim_{ε→0} |ε| = 0

The function is differentiable at x = 0.

2.

    f'(0) = lim_{ε→0} (√(1 + |ε|) − 1)/ε
          = lim_{ε→0} ((1/2)(1 + |ε|)^(−1/2) sign(ε))/1
          = lim_{ε→0} (1/2) sign(ε)

Since the limit does not exist, the function is not differentiable at x = 0.
Solution 3.11
a.

    d/dx [x sin(cos x)] = (d/dx [x]) sin(cos x) + x d/dx [sin(cos x)]
                        = sin(cos x) + x cos(cos x) d/dx [cos x]
                        = sin(cos x) − x cos(cos x) sin x

    d/dx [x sin(cos x)] = sin(cos x) − x cos(cos x) sin x

b.

    d/dx [f(cos(g(x)))] = f'(cos(g(x))) d/dx [cos(g(x))]
                        = −f'(cos(g(x))) sin(g(x)) d/dx [g(x)]
                        = −f'(cos(g(x))) sin(g(x)) g'(x)

    d/dx [f(cos(g(x)))] = −f'(cos(g(x))) sin(g(x)) g'(x)

c.

    d/dx [1/f(ln x)] = −(d/dx [f(ln x)])/[f(ln x)]^2
                     = −(f'(ln x) d/dx [ln x])/[f(ln x)]^2
                     = −f'(ln x)/(x [f(ln x)]^2)

    d/dx [1/f(ln x)] = −f'(ln x)/(x [f(ln x)]^2)

d. First we write the expression in terms of exponentials and logarithms,

    x^(x^x) = x^exp(x ln x) = exp(exp(x ln x) ln x).

Then we differentiate using the chain rule and the product rule.

    d/dx exp(exp(x ln x) ln x) = exp(exp(x ln x) ln x) d/dx (exp(x ln x) ln x)
      = x^(x^x) (exp(x ln x) (d/dx (x ln x)) ln x + exp(x ln x) (1/x))
      = x^(x^x) x^x ((ln x + x(1/x)) ln x + x^(−1))
      = x^(x^x) x^x ((ln x + 1) ln x + x^(−1))
      = x^(x^x + x) (x^(−1) + ln x + ln^2 x)

    d/dx x^(x^x) = x^(x^x + x) (x^(−1) + ln x + ln^2 x)

e. For x > 0, the expression is x sin x; for x < 0, the expression is (−x) sin(−x) = x sin x. Thus
we see that

    |x| sin |x| = x sin x.

The first derivative of this is sin x + x cos x.

    d/dx (|x| sin |x|) = sin x + x cos x
Solution 3.12
Let y(x) = sin x. Then y'(x) = cos x.

    d/dy arcsin y = 1/y'(x)
                  = 1/cos x
                  = 1/(1 − sin^2 x)^(1/2)
                  = 1/(1 − y^2)^(1/2)

    d/dx arcsin x = 1/(1 − x^2)^(1/2)

Let y(x) = tan x. Then y'(x) = 1/cos^2 x.

    d/dy arctan y = 1/y'(x)
                  = cos^2 x
                  = cos^2(arctan y)
                  = (1/(1 + y^2)^(1/2))^2
                  = 1/(1 + y^2)

    d/dx arctan x = 1/(1 + x^2)
Solution 3.13
Differentiating the equation

    x^2 + [y(x)]^2 = 1

yields

    2x + 2y(x)y'(x) = 0.

We can solve this equation for y'(x).

    y'(x) = −x/y(x)

To find y'(1/2) we need to find y(x) in terms of x.

    y(x) = ±√(1 − x^2)

Thus y'(x) is

    y'(x) = ±x/√(1 − x^2).

y'(1/2) can have the two values:

    y'(1/2) = ±1/√3.
Solution 3.14
Differentiating the equation

    x^2 − x y(x) + [y(x)]^2 = 3

yields

    2x − y(x) − x y'(x) + 2y(x)y'(x) = 0.

Solving this equation for y'(x),

    y'(x) = (y(x) − 2x)/(2y(x) − x).

Now we differentiate y'(x) to get y''(x).

    y''(x) = ((y'(x) − 2)(2y(x) − x) − (y(x) − 2x)(2y'(x) − 1))/(2y(x) − x)^2
           = 3 (x y'(x) − y(x))/(2y(x) − x)^2
           = 3 (x (y(x) − 2x)/(2y(x) − x) − y(x))/(2y(x) − x)^2
           = 3 (x(y(x) − 2x) − y(x)(2y(x) − x))/(2y(x) − x)^3
           = −6 (x^2 − x y(x) + [y(x)]^2)/(2y(x) − x)^3
           = −18/(2y(x) − x)^3
Solution 3.15
a.

    f'(x) = (12 − 2x)^2 + 2x(12 − 2x)(−2)
          = 4(x − 6)^2 + 8x(x − 6)
          = 12(x − 2)(x − 6)

There are critical points at x = 2 and x = 6.

    f''(x) = 12(x − 2) + 12(x − 6) = 24(x − 4)

Since f''(2) = −48 < 0, x = 2 is a local maximum. Since f''(6) = 48 > 0, x = 6 is a local
minimum.

b.

    f'(x) = (2/3)(x − 2)^(−1/3)

The first derivative exists and is nonzero for x ≠ 2. At x = 2, the derivative does not exist
and thus x = 2 is a critical point. For x < 2, f'(x) < 0 and for x > 2, f'(x) > 0. x = 2 is a
local minimum.
Solution 3.16
Let r be the radius and h the height of the cylinder. The volume of the cup is πr^2 h = 64. The
radius and height are related by h = 64/(πr^2). The surface area of the cup is
f(r) = πr^2 + 2πrh = πr^2 + 128/r. The first derivative of the surface area is
f'(r) = 2πr − 128/r^2. Finding the zeros of f'(r),

    2πr − 128/r^2 = 0,
    2πr^3 − 128 = 0,
    r = 4/π^(1/3).

The second derivative of the surface area is f''(r) = 2π + 256/r^3. Since f''(4/π^(1/3)) = 6π,
r = 4/π^(1/3) is a local minimum of f(r). Since this is the only critical point for r > 0, it must
be a global minimum.
The cup has a radius of 4/π^(1/3) cm and a height of 4/π^(1/3) cm.
Solution 3.17
We define the function

    h(x) = f(x) − f(a) − ((f(b) − f(a))/(g(b) − g(a))) (g(x) − g(a)).

Note that h(x) is differentiable and that h(a) = h(b) = 0. Thus h(x) satisfies the conditions of
Rolle's theorem and there exists a point ξ ∈ (a, b) such that

    h'(ξ) = f'(ξ) − ((f(b) − f(a))/(g(b) − g(a))) g'(ξ) = 0,
    f'(ξ)/g'(ξ) = (f(b) − f(a))/(g(b) − g(a)).
Solution 3.18
The first few terms in the Taylor series of sin(x) about x = 0 are

    sin(x) = x − x^3/6 + x^5/120 − x^7/5040 + x^9/362880 + · · · .

The seventh derivative of sin x is −cos x. Thus we have that

    sin(x) = x − x^3/6 + x^5/120 − (cos x0/5040) x^7,

where 0 ≤ x0 ≤ x. Since we are considering x ∈ [−1, 1] and −1 ≤ cos(x0) ≤ 1, the approximation

    sin x ≈ x − x^3/6 + x^5/120

has a maximum error of 1/5040 ≈ 0.000198. Using this polynomial to approximate sin(1),

    1 − 1^3/6 + 1^5/120 ≈ 0.841667.

To see that this has the required accuracy,

    sin(1) ≈ 0.841471.
Solution 3.19
Expanding the terms in the approximation in Taylor series,

    f(x + ∆x) = f(x) + ∆x f'(x) + (∆x^2/2) f''(x) + (∆x^3/6) f'''(x) + (∆x^4/24) f''''(x1),
    f(x − ∆x) = f(x) − ∆x f'(x) + (∆x^2/2) f''(x) − (∆x^3/6) f'''(x) + (∆x^4/24) f''''(x2),

where x ≤ x1 ≤ x + ∆x and x − ∆x ≤ x2 ≤ x. Substituting the expansions into the formula,

    (f(x + ∆x) − 2f(x) + f(x − ∆x))/∆x^2 = f''(x) + (∆x^2/24)[f''''(x1) + f''''(x2)].

Thus the error in the approximation is

    (∆x^2/24)[f''''(x1) + f''''(x2)].
Solution 3.20
Solution 3.21
a.

    lim_{x→0} (x − sin x)/x^3 = lim_{x→0} (1 − cos x)/(3x^2)
                              = lim_{x→0} (sin x)/(6x)
                              = lim_{x→0} (cos x)/6
                              = 1/6

    lim_{x→0} (x − sin x)/x^3 = 1/6

b.

    lim_{x→0} (csc x − 1/x) = lim_{x→0} (1/sin x − 1/x)
                            = lim_{x→0} (x − sin x)/(x sin x)
                            = lim_{x→0} (1 − cos x)/(x cos x + sin x)
                            = lim_{x→0} (sin x)/(−x sin x + cos x + cos x)
                            = 0/2
                            = 0

    lim_{x→0} (csc x − 1/x) = 0
c.

    ln(lim_{x→+∞} (1 + 1/x)^x) = lim_{x→+∞} ln((1 + 1/x)^x)
                                = lim_{x→+∞} x ln(1 + 1/x)
                                = lim_{x→+∞} ln(1 + 1/x)/(1/x)
                                = lim_{x→+∞} ((1 + 1/x)^(−1) (−1/x^2))/(−1/x^2)
                                = lim_{x→+∞} (1 + 1/x)^(−1)
                                = 1

Thus we have

    lim_{x→+∞} (1 + 1/x)^x = e.
d. It takes four successive applications of L'Hospital's rule to evaluate the limit.

    lim_{x→0} (csc^2 x − 1/x^2)
      = lim_{x→0} (x^2 − sin^2 x)/(x^2 sin^2 x)
      = lim_{x→0} (2x − 2 cos x sin x)/(2x^2 cos x sin x + 2x sin^2 x)
      = lim_{x→0} (2 − 2 cos^2 x + 2 sin^2 x)/(2x^2 cos^2 x + 8x cos x sin x + 2 sin^2 x − 2x^2 sin^2 x)
      = lim_{x→0} (8 cos x sin x)/(12x cos^2 x + 12 cos x sin x − 8x^2 cos x sin x − 12x sin^2 x)
      = lim_{x→0} (8 cos^2 x − 8 sin^2 x)/(24 cos^2 x − 8x^2 cos^2 x − 64x cos x sin x − 24 sin^2 x + 8x^2 sin^2 x)
      = 1/3

It is easier to use a Taylor series expansion.

    lim_{x→0} (csc^2 x − 1/x^2) = lim_{x→0} (x^2 − sin^2 x)/(x^2 sin^2 x)
      = lim_{x→0} (x^2 − (x − x^3/6 + O(x^5))^2)/(x^2 (x + O(x^3))^2)
      = lim_{x→0} (x^2 − (x^2 − x^4/3 + O(x^6)))/(x^4 + O(x^6))
      = lim_{x→0} (1/3 + O(x^2))
      = 1/3
Solution 3.22
To evaluate the first limit, we use the identity a^b = e^(b ln a) and then apply L'Hospital's
rule.

    lim_{x→∞} x^(a/x) = lim_{x→∞} e^((a ln x)/x)
                      = exp(lim_{x→∞} (a ln x)/x)
                      = exp(lim_{x→∞} (a/x)/1)
                      = e^0

    lim_{x→∞} x^(a/x) = 1

We use the same method to evaluate the second limit.

    lim_{x→∞} (1 + a/x)^(bx) = lim_{x→∞} exp(bx ln(1 + a/x))
                             = exp(lim_{x→∞} bx ln(1 + a/x))
                             = exp(lim_{x→∞} b ln(1 + a/x)/(1/x))
                             = exp(lim_{x→∞} b ((−a/x^2)/(1 + a/x))/(−1/x^2))
                             = exp(lim_{x→∞} b a/(1 + a/x))

    lim_{x→∞} (1 + a/x)^(bx) = e^(ab)
3.11 Quiz
Problem 3.1
Define continuity.
Solution
Problem 3.2
Fill in the blank with necessary, sufficient or necessary and sufficient.
Continuity is a ________ condition for differentiability.
Differentiability is a ________ condition for continuity.
Existence of lim_{∆x→0} (f(x + ∆x) − f(x))/∆x is a ________ condition for differentiability.
Solution
Problem 3.3
Evaluate d/dx f(g(x)h(x)).
Solution

Problem 3.4
Evaluate d/dx f(x)^g(x).
Solution

Problem 3.5
State the Theorem of the Mean. Interpret the theorem physically.
Solution

Problem 3.6
State Taylor's Theorem of the Mean.
Solution

Problem 3.7
Evaluate lim_{x→0} (sin x)^(sin x).
Solution
3.12 Quiz Solutions
Solution 3.1
A function y(x) is said to be continuous at x = ξ if lim_{x→ξ} y(x) = y(ξ).
Solution 3.2
Continuity is a necessary condition for differentiability.
Differentiability is a sufficient condition for continuity.
Existence of lim_{∆x→0} (f(x + ∆x) − f(x))/∆x is a necessary and sufficient condition for
differentiability.
Solution 3.3

    d/dx f(g(x)h(x)) = f'(g(x)h(x)) d/dx (g(x)h(x)) = f'(g(x)h(x))(g'(x)h(x) + g(x)h'(x))

Solution 3.4

    d/dx f(x)^g(x) = d/dx e^(g(x) ln f(x))
                   = e^(g(x) ln f(x)) d/dx (g(x) ln f(x))
                   = f(x)^g(x) (g'(x) ln f(x) + g(x) f'(x)/f(x))
Solution 3.5
If f(x) is continuous in [a..b] and differentiable in (a..b) then there exists a point x = ξ such
that

    f'(ξ) = (f(b) − f(a))/(b − a).

That is, there is a point where the instantaneous velocity is equal to the average velocity on the
interval.

Solution 3.6
If f(x) is n + 1 times continuously differentiable in (a..b) then there exists a point
x = ξ ∈ (a..b) such that

    f(b) = f(a) + (b − a)f'(a) + ((b − a)^2/2!) f''(a) + · · · + ((b − a)^n/n!) f^(n)(a)
           + ((b − a)^(n+1)/(n + 1)!) f^(n+1)(ξ).
Solution 3.7
Consider lim_{x→0} (sin x)^(sin x). This is an indeterminate of the form 0^0. The limit of the
logarithm of the expression is lim_{x→0} sin x ln(sin x). This is an indeterminate of the form
0 · ∞. We can rearrange the expression to obtain an indeterminate of the form ∞/∞ and then apply
L'Hospital's rule.

    lim_{x→0} ln(sin x)/(1/sin x) = lim_{x→0} (cos x/sin x)/(−cos x/sin^2 x) = lim_{x→0} (−sin x) = 0

The original limit is

    lim_{x→0} (sin x)^(sin x) = e^0 = 1.
Chapter 4
Integral Calculus
4.1 The Indefinite Integral
The opposite of a derivative is the anti-derivative or the indefinite integral. The indefinite
integral of a function f(x) is denoted,

    ∫ f(x) dx.

It is defined by the property that

    d/dx ∫ f(x) dx = f(x).
While a function f(x) has a unique derivative if it is differentiable, it has an infinite number of
indefinite integrals, each of which differ by an additive constant.
Zero Slope Implies a Constant Function. If the value of a function's derivative is identically
zero, df/dx = 0, then the function is a constant, f(x) = c. To prove this, we assume that there
exists a non-constant differentiable function whose derivative is zero and obtain a contradiction.
Let f(x) be such a function. Since f(x) is non-constant, there exist points a and b such that
f(a) ≠ f(b). By the Mean Value Theorem of differential calculus, there exists a point ξ ∈ (a, b)
such that

    f'(ξ) = (f(b) − f(a))/(b − a) ≠ 0,

which contradicts that the derivative is everywhere zero.

Indefinite Integrals Differ by an Additive Constant. Suppose that F(x) and G(x) are
indefinite integrals of f(x). Then we have

    d/dx (F(x) − G(x)) = F'(x) − G'(x) = f(x) − f(x) = 0.

Thus we see that F(x) − G(x) = c and the two indefinite integrals must differ by a constant. For
example, we have ∫ sin x dx = −cos x + c. While every function that can be expressed in terms of
elementary functions, (the exponential, logarithm, trigonometric functions, etc.), has a derivative
that can be written explicitly in terms of elementary functions, the same is not true of integrals.
For example, ∫ sin(sin x) dx cannot be written explicitly in terms of elementary functions.
Properties. Since the derivative is linear, so is the indefinite integral. That is,

    ∫ (af(x) + bg(x)) dx = a ∫ f(x) dx + b ∫ g(x) dx.

For each derivative identity there is a corresponding integral identity. Consider the power law
identity, d/dx (f(x))^a = a (f(x))^(a−1) f'(x). The corresponding integral identity is

    ∫ (f(x))^a f'(x) dx = (f(x))^(a+1)/(a + 1) + c,    a ≠ −1,

where we require that a ≠ −1 to avoid division by zero. From the derivative of a logarithm,
d/dx ln(f(x)) = f'(x)/f(x), we obtain,

    ∫ f'(x)/f(x) dx = ln |f(x)| + c.

Note the absolute value signs. This is because d/dx ln |x| = 1/x for x ≠ 0. In Figure 4.1 is a
plot of ln |x| and 1/x to reinforce this.
Figure 4.1: Plot of ln |x| and 1/x.
Example 4.1.1 Consider

    I = ∫ x/(x^2 + 1)^2 dx.

We evaluate the integral by choosing u = x^2 + 1, du = 2x dx.

    I = (1/2) ∫ 2x/(x^2 + 1)^2 dx
      = (1/2) ∫ du/u^2
      = (1/2)(−1/u)
      = −1/(2(x^2 + 1)).

Example 4.1.2 Consider

    I = ∫ tan x dx = ∫ (sin x)/(cos x) dx.

By choosing f(x) = cos x, f'(x) = −sin x, we see that the integral is

    I = − ∫ (−sin x)/(cos x) dx = −ln |cos x| + c.
Change of Variable. The differential of a function g(x) is dg = g'(x) dx. Thus one might suspect
that for ξ = g(x),

    ∫ f(ξ) dξ = ∫ f(g(x))g'(x) dx,    (4.1)

since dξ = dg = g'(x) dx. This turns out to be true. To prove it we will appeal to the chain rule
for differentiation. Let ξ be a function of x. The chain rule is

    d/dx f(ξ) = f'(ξ)ξ'(x),
    d/dx f(ξ) = (df/dξ)(dξ/dx).

We can also write this as

    df/dξ = (dx/dξ)(df/dx),

or in operator notation,

    d/dξ = (dx/dξ) d/dx.

Now we're ready to start. The derivative of the left side of Equation 4.1 is

    d/dξ ∫ f(ξ) dξ = f(ξ).

Next we differentiate the right side,

    d/dξ ∫ f(g(x))g'(x) dx = (dx/dξ) d/dx ∫ f(g(x))g'(x) dx
                           = (dx/dξ) f(g(x))g'(x)
                           = (dx/dg) f(g(x)) (dg/dx)
                           = f(g(x))
                           = f(ξ)

to see that it is in fact an identity for ξ = g(x).
Example 4.1.3 Consider

    ∫ x sin(x^2) dx.

We choose ξ = x^2, dξ = 2x dx to evaluate the integral.

    ∫ x sin(x^2) dx = (1/2) ∫ sin(x^2) 2x dx
                    = (1/2) ∫ sin ξ dξ
                    = (1/2)(−cos ξ) + c
                    = −(1/2) cos(x^2) + c
Integration by Parts. The product rule for differentiation gives us an identity called
integration by parts. We start with the product rule and then integrate both sides of the
equation.

    d/dx (u(x)v(x)) = u'(x)v(x) + u(x)v'(x)
    ∫ (u'(x)v(x) + u(x)v'(x)) dx = u(x)v(x) + c
    ∫ u'(x)v(x) dx + ∫ u(x)v'(x) dx = u(x)v(x)
    ∫ u(x)v'(x) dx = u(x)v(x) − ∫ v(x)u'(x) dx

The theorem is most often written in the form

    ∫ u dv = uv − ∫ v du.

So what is the usefulness of this? Well, it may happen for some integrals and a good choice of u
and v that the integral on the right is easier to evaluate than the integral on the left.

Example 4.1.4 Consider ∫ x e^x dx. If we choose u = x, dv = e^x dx then integration by parts
yields

    ∫ x e^x dx = x e^x − ∫ e^x dx = (x − 1) e^x.

Now notice what happens when we choose u = e^x, dv = x dx.

    ∫ x e^x dx = (1/2) x^2 e^x − (1/2) ∫ x^2 e^x dx

The integral gets harder instead of easier.
When applying integration by parts, one must choose u and dv wisely. As general rules of thumb:

• Pick u so that u' is simpler than u.

• Pick dv so that v is not more complicated, (hopefully simpler), than dv.

Also note that you may have to apply integration by parts several times to evaluate some
integrals.
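If SymPy is available, one can confirm the result of the good choice of u and dv; a minimal
sketch (my addition, assuming the standard sympy API):

    import sympy as sp

    x = sp.symbols('x')
    print(sp.integrate(x * sp.exp(x), x))  # (x - 1)*exp(x), as in the example
    # u = x works because u' = 1 is simpler than u, while v = e^x is no more
    # complicated than dv.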
4.2 The Definite Integral
4.2.1 Definition
The area bounded by the x axis, the vertical lines x = a and x = b and the function f(x) is
denoted with a definite integral,

    ∫_a^b f(x) dx.

The area is signed, that is, if f(x) is negative, then the area is negative. We measure the area
with a divide-and-conquer strategy. First partition the interval (a, b) with a = x0 < x1 < · · · <
x_{n−1} < x_n = b. Note that the area under the curve on the subinterval is approximately the area
of a rectangle of base ∆x_i = x_{i+1} − x_i and height f(ξ_i), where ξ_i ∈ [x_i, x_{i+1}]. If we
add up the areas of the rectangles, we get an approximation of the area under the curve. See
Figure 4.2.

Figure 4.2: Divide-and-Conquer Strategy for Approximating a Definite Integral.

    ∫_a^b f(x) dx ≈ Σ_{i=0}^{n−1} f(ξ_i) ∆x_i
As the ∆x_i's get smaller, we expect the approximation of the area to get better. Let
∆x = max_{0≤i≤n−1} ∆x_i. We define the definite integral as the sum of the areas of the rectangles
in the limit that ∆x → 0.

    ∫_a^b f(x) dx = lim_{∆x→0} Σ_{i=0}^{n−1} f(ξ_i) ∆x_i

The integral is defined when the limit exists. This is known as the Riemann integral of f(x). f(x)
is called the integrand.
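The definition translates directly into code; here is a sketch (mine, not from the text) using
left endpoints, ξ_i = x_i, on a uniform partition:

    def riemann(f, a, b, n):
        # approximate the definite integral with n rectangles, xi_i = x_i
        dx = (b - a) / n
        return sum(f(a + i * dx) * dx for i in range(n))

    for n in (10, 100, 1000):
        print(n, riemann(lambda x: x, 0.0, 1.0, n))
    # 0.45, 0.495, 0.4995: tends to 1/2 as dx -> 0 (compare Exercise 4.6)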
4.2.2 Properties
Linearity and the Basics. Because summation is a linear operator, that is

    Σ_{i=0}^{n−1} (c f_i + d g_i) = c Σ_{i=0}^{n−1} f_i + d Σ_{i=0}^{n−1} g_i,

definite integrals are linear,

    ∫_a^b (c f(x) + d g(x)) dx = c ∫_a^b f(x) dx + d ∫_a^b g(x) dx.

One can also divide the range of integration.

    ∫_a^b f(x) dx = ∫_a^c f(x) dx + ∫_c^b f(x) dx

We assume that each of the above integrals exist. If a ≤ b, and we integrate from b to a, then
each of the ∆x_i will be negative. From this observation, it is clear that

    ∫_a^b f(x) dx = − ∫_b^a f(x) dx.

If we integrate any function from a point a to that same point a, then all the ∆x_i are zero and

    ∫_a^a f(x) dx = 0.
Bounding the Integral. Recall that if f_i ≤ g_i, then Σ_{i=0}^{n−1} f_i ≤ Σ_{i=0}^{n−1} g_i.
Let m = min_{x∈[a,b]} f(x) and M = max_{x∈[a,b]} f(x). Then

    (b − a)m = Σ_{i=0}^{n−1} m ∆x_i ≤ Σ_{i=0}^{n−1} f(ξ_i) ∆x_i ≤ Σ_{i=0}^{n−1} M ∆x_i = (b − a)M

implies that

    (b − a)m ≤ ∫_a^b f(x) dx ≤ (b − a)M.

Since

    |Σ_{i=0}^{n−1} f_i| ≤ Σ_{i=0}^{n−1} |f_i|,

we have

    |∫_a^b f(x) dx| ≤ ∫_a^b |f(x)| dx.
Mean Value Theorem of Integral Calculus. Let f(x) be continuous. We know from above that

    (b − a)m ≤ ∫_a^b f(x) dx ≤ (b − a)M.

Therefore there exists a constant c ∈ [m, M] satisfying

    ∫_a^b f(x) dx = (b − a)c.

Since f(x) is continuous, there is a point ξ ∈ [a, b] such that f(ξ) = c. Thus we see that

    ∫_a^b f(x) dx = (b − a)f(ξ),

for some ξ ∈ [a, b].
4.3 The Fundamental Theorem of Integral Calculus
Definite Integrals with Variable Limits of Integration. Consider a to be a constant and x
variable, then the function F(x) defined by

    F(x) = ∫_a^x f(t) dt    (4.2)

is an anti-derivative of f(x), that is F'(x) = f(x). To show this we apply the definition of
differentiation and the integral mean value theorem.

    F'(x) = lim_{∆x→0} (F(x + ∆x) − F(x))/∆x
          = lim_{∆x→0} (∫_a^{x+∆x} f(t) dt − ∫_a^x f(t) dt)/∆x
          = lim_{∆x→0} (∫_x^{x+∆x} f(t) dt)/∆x
          = lim_{∆x→0} (f(ξ)∆x)/∆x,    ξ ∈ [x, x + ∆x]
          = f(x)

The Fundamental Theorem of Integral Calculus. Let F(x) be any anti-derivative of f(x).
Noting that all anti-derivatives of f(x) differ by a constant and replacing x by b in Equation
4.2, we see that there exists a constant c such that

    ∫_a^b f(x) dx = F(b) + c.

Now to find the constant. By plugging in b = a,

    ∫_a^a f(x) dx = F(a) + c = 0,

we see that c = −F(a). This gives us a result known as the Fundamental Theorem of Integral
Calculus.

    ∫_a^b f(x) dx = F(b) − F(a).

We introduce the notation

    [F(x)]_a^b ≡ F(b) − F(a).

Example 4.3.1

    ∫_0^π sin x dx = [−cos x]_0^π = −cos(π) + cos(0) = 2
4.4 Techniques of Integration
4.4.1 Partial Fractions
A proper rational function

    p(x)/q(x) = p(x)/((x − α)^n r(x))

can be written in the form

    p(x)/((x − α)^n r(x)) = a0/(x − α)^n + a1/(x − α)^(n−1) + · · · + a_{n−1}/(x − α) + (· · ·)

where the a_k's are constants and the final ellipsis represents the partial fractions expansion of
the roots of r(x). The coefficients are

    a_k = (1/k!) [d^k/dx^k (p(x)/r(x))]_{x=α}.

Example 4.4.1 Consider the partial fraction expansion of

    (1 + x + x^2)/(x − 1)^3.

The expansion has the form

    a0/(x − 1)^3 + a1/(x − 1)^2 + a2/(x − 1).

The coefficients are

    a0 = (1/0!) (1 + x + x^2)|_{x=1} = 3,
    a1 = (1/1!) [d/dx (1 + x + x^2)]_{x=1} = (1 + 2x)|_{x=1} = 3,
    a2 = (1/2!) [d^2/dx^2 (1 + x + x^2)]_{x=1} = (1/2)(2)|_{x=1} = 1.

Thus we have

    (1 + x + x^2)/(x − 1)^3 = 3/(x − 1)^3 + 3/(x − 1)^2 + 1/(x − 1).
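One can spot-check both the coefficient formula and the expansion with SymPy; a sketch (my
addition, assuming the standard sympy API):

    import sympy as sp

    x = sp.symbols('x')
    print(sp.apart((1 + x + x**2) / (x - 1)**3))
    # 1/(x - 1) + 3/(x - 1)**2 + 3/(x - 1)**3

    # the same coefficients from a_k = (1/k!) d^k/dx^k [p(x)/r(x)] at x = 1,
    # with r(x) = 1 here:
    p = 1 + x + x**2
    for k in range(3):
        print(sp.diff(p, x, k).subs(x, 1) / sp.factorial(k))  # 3, 3, 1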
Example 4.4.2 Suppose we want to evaluate

    ∫ (1 + x + x^2)/(x − 1)^3 dx.

If we expand the integrand in a partial fraction expansion, then the integral becomes easy.

    ∫ (1 + x + x^2)/(x − 1)^3 dx = ∫ (3/(x − 1)^3 + 3/(x − 1)^2 + 1/(x − 1)) dx
                                 = −3/(2(x − 1)^2) − 3/(x − 1) + ln(x − 1)
Example 4.4.3 Consider the partial fraction expansion of

    (1 + x + x^2)/(x^2 (x − 1)^2).

The expansion has the form

    a0/x^2 + a1/x + b0/(x − 1)^2 + b1/(x − 1).

The coefficients are

    a0 = (1/0!) [(1 + x + x^2)/(x − 1)^2]_{x=0} = 1,
    a1 = (1/1!) [d/dx ((1 + x + x^2)/(x − 1)^2)]_{x=0}
       = [(1 + 2x)/(x − 1)^2 − 2(1 + x + x^2)/(x − 1)^3]_{x=0} = 3,
    b0 = (1/0!) [(1 + x + x^2)/x^2]_{x=1} = 3,
    b1 = (1/1!) [d/dx ((1 + x + x^2)/x^2)]_{x=1}
       = [(1 + 2x)/x^2 − 2(1 + x + x^2)/x^3]_{x=1} = −3.

Thus we have

    (1 + x + x^2)/(x^2 (x − 1)^2) = 1/x^2 + 3/x + 3/(x − 1)^2 − 3/(x − 1).
If the rational function has real coefficients and the denominator has complex roots, then you
can reduce the work in finding the partial fraction expansion with the following trick: Let α and
ᾱ be complex conjugate pairs of roots of the denominator.

    p(x)/((x − α)^n (x − ᾱ)^n r(x)) = a0/(x − α)^n + a1/(x − α)^(n−1) + · · · + a_{n−1}/(x − α)
                                     + ā0/(x − ᾱ)^n + ā1/(x − ᾱ)^(n−1) + · · · + ā_{n−1}/(x − ᾱ)
                                     + (· · ·)

Thus we don't have to calculate the coefficients for the root at ᾱ. We just take the complex
conjugate of the coefficients for α.

Example 4.4.4 Consider the partial fraction expansion of

    (1 + x)/(x^2 + 1).

The expansion has the form

    a0/(x − i) + ā0/(x + i).

The coefficients are

    a0 = (1/0!) [(1 + x)/(x + i)]_{x=i} = (1/2)(1 − i),
    ā0 = conjugate of (1/2)(1 − i) = (1/2)(1 + i).

Thus we have

    (1 + x)/(x^2 + 1) = (1 − i)/(2(x − i)) + (1 + i)/(2(x + i)).
4.5 Improper Integrals
If the range of integration is infinite or f(x) is discontinuous at some points then
∫_a^b f(x) dx is called an improper integral.

Discontinuous Functions. If f(x) is continuous on the interval a ≤ x ≤ b except at the point
x = c where a < c < b then

    ∫_a^b f(x) dx = lim_{δ→0+} ∫_a^{c−δ} f(x) dx + lim_{ε→0+} ∫_{c+ε}^b f(x) dx

provided that both limits exist.
Example 4.5.1 Consider the integral of ln x on the interval [0, 1]. Since the logarithm has a
singularity at x = 0, this is an improper integral. We write the integral in terms of a limit and
evaluate the limit with L'Hospital's rule.

    ∫_0^1 ln x dx = lim_{δ→0} ∫_δ^1 ln x dx
                  = lim_{δ→0} [x ln x − x]_δ^1
                  = 1 ln(1) − 1 − lim_{δ→0} (δ ln δ − δ)
                  = −1 − lim_{δ→0} (δ ln δ)
                  = −1 − lim_{δ→0} (ln δ)/(1/δ)
                  = −1 − lim_{δ→0} (1/δ)/(−1/δ^2)
                  = −1
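A numerical check (my addition, not from the text) that the limit defining this improper integral
is −1, using the antiderivative x ln x − x and shrinking δ:

    import math

    def F(x):
        # antiderivative of ln x
        return x * math.log(x) - x

    for delta in (1e-1, 1e-3, 1e-6):
        print(delta, F(1.0) - F(delta))
    # tends to -1 as delta -> 0+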
Example 4.5.2 Consider the integral of x^a on the range [0, 1]. If a < 0 then there is a
singularity at x = 0. First assume that a ≠ −1.

    ∫_0^1 x^a dx = lim_{δ→0+} [x^(a+1)/(a + 1)]_δ^1
                 = 1/(a + 1) − lim_{δ→0+} δ^(a+1)/(a + 1)

This limit exists only for a > −1. Now consider the case that a = −1.

    ∫_0^1 x^(−1) dx = lim_{δ→0+} [ln x]_δ^1
                    = ln(1) − lim_{δ→0+} ln δ

This limit does not exist. We obtain the result,

    ∫_0^1 x^a dx = 1/(a + 1),    for a > −1.
Infinite Limits of Integration. If the range of integration is infinite, say [a, ∞) then we
define the integral as

    ∫_a^∞ f(x) dx = lim_{α→∞} ∫_a^α f(x) dx,

provided that the limit exists. If the range of integration is (−∞, ∞) then

    ∫_{−∞}^∞ f(x) dx = lim_{α→−∞} ∫_α^a f(x) dx + lim_{β→+∞} ∫_a^β f(x) dx.
Example 4.5.3

    ∫_1^∞ (ln x)/x^2 dx = ∫_1^∞ ln x (d/dx (−1/x)) dx
                        = [−(ln x)/x]_1^∞ − ∫_1^∞ (−1/x)(1/x) dx
                        = lim_{x→+∞} (−(ln x)/x) − [1/x]_1^∞
                        = −lim_{x→+∞} (1/x)/1 − (lim_{x→∞} (1/x) − 1)
                        = 1
Example 4.5.4 Consider the integral of x^a on [1, ∞). First assume that a ≠ −1.

    ∫_1^∞ x^a dx = lim_{β→+∞} [x^(a+1)/(a + 1)]_1^β
                 = lim_{β→+∞} β^(a+1)/(a + 1) − 1/(a + 1)

The limit exists for a < −1. Now consider the case a = −1.

    ∫_1^∞ x^(−1) dx = lim_{β→+∞} [ln x]_1^β
                    = lim_{β→+∞} ln β

This limit does not exist. Thus we have

    ∫_1^∞ x^a dx = −1/(a + 1),    for a < −1.
4.6 Exercises
4.6.1 The Indefinite Integral
Exercise 4.1 (mathematica/calculus/integral/fundamental.nb)
Evaluate ∫ (2x + 3)^10 dx.
Hint, Solution

Exercise 4.2 (mathematica/calculus/integral/fundamental.nb)
Evaluate ∫ ((ln x)^2/x) dx.
Hint, Solution

Exercise 4.3 (mathematica/calculus/integral/fundamental.nb)
Evaluate ∫ x √(x^2 + 3) dx.
Hint, Solution

Exercise 4.4 (mathematica/calculus/integral/fundamental.nb)
Evaluate ∫ (cos x/sin x) dx.
Hint, Solution

Exercise 4.5 (mathematica/calculus/integral/fundamental.nb)
Evaluate ∫ (x^2/(x^3 − 5)) dx.
Hint, Solution
4.6.2 The Definite Integral
Exercise 4.6 (mathematica/calculus/integral/definite.nb)
Use the result

    ∫_a^b f(x) dx = lim_{N→∞} Σ_{n=0}^{N−1} f(x_n) ∆x,

where ∆x = (b − a)/N and x_n = a + n∆x, to show that

    ∫_0^1 x dx = 1/2.

Hint, Solution

Exercise 4.7 (mathematica/calculus/integral/definite.nb)
Evaluate the following integral using integration by parts and the Pythagorean identity.

    ∫_0^π sin^2 x dx
Hint, Solution
Exercise 4.8 (mathematica/calculus/integral/definite.nb)
Prove that

    d/dx ∫_{g(x)}^{f(x)} h(ξ) dξ = h(f(x))f'(x) − h(g(x))g'(x).

(Don't use the limit definition of differentiation, use the Fundamental Theorem of Integral
Calculus.)
Hint, Solution

Exercise 4.9 (mathematica/calculus/integral/definite.nb)
Let A_n be the area between the curves x and x^n on the interval [0 . . . 1]. What is
lim_{n→∞} A_n? Explain this result geometrically.
Hint, Solution
Exercise 4.10 (mathematica/calculus/integral/taylor.nb)
a. Show that

    f(x) = f(0) + ∫_0^x f'(x − ξ) dξ.

b. From the above identity show that

    f(x) = f(0) + xf'(0) + ∫_0^x ξ f''(x − ξ) dξ.

c. Using induction, show that

    f(x) = f(0) + xf'(0) + (1/2)x^2 f''(0) + · · · + (1/n!)x^n f^(n)(0)
           + ∫_0^x (1/n!) ξ^n f^(n+1)(x − ξ) dξ.
Hint, Solution
Exercise 4.11
Find a function f(x) whose arc length from 0 to x is 2x.
Hint, Solution
Exercise 4.12
Consider a curve C, bounded by −1 and 1, on the interval (−1 . . . 1). Can the length of C be
unbounded? What if we change to the closed interval [−1 . . . 1]?
Hint, Solution
4.6.3 The Fundamental Theorem of Integration
4.6.4 Techniques of Integration
Exercise 4.13 (mathematica/calculus/integral/parts.nb)
Evaluate ∫ x sin x dx.
Hint, Solution

Exercise 4.14 (mathematica/calculus/integral/parts.nb)
Evaluate ∫ x^3 e^(2x) dx.
Hint, Solution

Exercise 4.15 (mathematica/calculus/integral/partial.nb)
Evaluate ∫ 1/(x^2 − 4) dx.
Hint, Solution

Exercise 4.16 (mathematica/calculus/integral/partial.nb)
Evaluate ∫ (x + 1)/(x^3 + x^2 − 6x) dx.
Hint, Solution
4.6.5 Improper Integrals
Exercise 4.17 (mathematica/calculus/integral/improper.nb)
Evaluate ∫_0^4 1/(x − 1)^2 dx.
Hint, Solution

Exercise 4.18 (mathematica/calculus/integral/improper.nb)
Evaluate ∫_0^1 1/√x dx.
Hint, Solution

Exercise 4.19 (mathematica/calculus/integral/improper.nb)
Evaluate ∫_0^∞ 1/(x^2 + 4) dx.
Hint, Solution
4.7 Hints
Hint 4.1
Make the change of variables u = 2x + 3.
Hint 4.2
Make the change of variables u = ln x.
Hint 4.3
Make the change of variables u = x^2 + 3.

Hint 4.4
Make the change of variables u = sin x.

Hint 4.5
Make the change of variables u = x^3 − 5.
Hint 4.6

    ∫_0^1 x dx = lim_{N→∞} Σ_{n=0}^{N−1} x_n ∆x = lim_{N→∞} Σ_{n=0}^{N−1} (n∆x)∆x

Hint 4.7
Let u = sin x and dv = sin x dx. Integration by parts will give you an equation for
∫_0^π sin^2 x dx.

Hint 4.8
Let H'(x) = h(x) and evaluate the integral in terms of H(x).
Hint 4.9
CONTINUE
Hint 4.10
a. Evaluate the integral.

b. Use integration by parts to evaluate the integral.

c. Use integration by parts with u = f^(n+1)(x − ξ) and dv = (1/n!) ξ^n dξ.

Hint 4.11
The arc length from 0 to x is

    ∫_0^x √(1 + (f'(ξ))^2) dξ.    (4.3)

First show that the arc length of f(x) from a to b is 2(b − a). Then conclude that the integrand
in Equation 4.3 must everywhere be 2.

Hint 4.12
CONTINUE
Hint 4.13
Let u = x, and dv = sin x dx.
Hint 4.14
Perform integration by parts three successive times. For the first one let u = x^3 and
dv = e^(2x) dx.

Hint 4.15
Expanding the integrand in partial fractions,

    1/(x^2 − 4) = 1/((x − 2)(x + 2)) = a/(x − 2) + b/(x + 2)
    1 = a(x + 2) + b(x − 2)

Set x = 2 and x = −2 to solve for a and b.

Hint 4.16
Expanding the integrand in partial fractions,

    (x + 1)/(x^3 + x^2 − 6x) = (x + 1)/(x(x − 2)(x + 3)) = a/x + b/(x − 2) + c/(x + 3)
    x + 1 = a(x − 2)(x + 3) + bx(x + 3) + cx(x − 2)

Set x = 0, x = 2 and x = −3 to solve for a, b and c.
Hint 4.17

    ∫_0^4 1/(x − 1)^2 dx = lim_{δ→0+} ∫_0^{1−δ} 1/(x − 1)^2 dx + lim_{ε→0+} ∫_{1+ε}^4 1/(x − 1)^2 dx

Hint 4.18

    ∫_0^1 1/√x dx = lim_{ε→0+} ∫_ε^1 1/√x dx

Hint 4.19

    ∫ 1/(x^2 + a^2) dx = (1/a) arctan(x/a)
4.8 Solutions
Solution 4.1

    ∫ (2x + 3)^10 dx

Let u = 2x + 3, g(u) = x = (u − 3)/2, g'(u) = 1/2.

    ∫ (2x + 3)^10 dx = ∫ u^10 (1/2) du = (u^11/11)(1/2) = (2x + 3)^11/22

Solution 4.2

    ∫ ((ln x)^2/x) dx = ∫ (ln x)^2 (d(ln x)/dx) dx = (ln x)^3/3

Solution 4.3

    ∫ x √(x^2 + 3) dx = ∫ √(x^2 + 3) (1/2)(d(x^2)/dx) dx = (1/2)(x^2 + 3)^(3/2)/(3/2)
                      = (x^2 + 3)^(3/2)/3

Solution 4.4

    ∫ (cos x/sin x) dx = ∫ (1/sin x)(d(sin x)/dx) dx = ln |sin x|

Solution 4.5

    ∫ (x^2/(x^3 − 5)) dx = ∫ (1/(x^3 − 5))(1/3)(d(x^3)/dx) dx = (1/3) ln |x^3 − 5|
Solution 4.6

    ∫_0^1 x dx = lim_{N→∞} Σ_{n=0}^{N−1} x_n ∆x
               = lim_{N→∞} Σ_{n=0}^{N−1} (n∆x)∆x
               = lim_{N→∞} ∆x^2 Σ_{n=0}^{N−1} n
               = lim_{N→∞} ∆x^2 N(N − 1)/2
               = lim_{N→∞} (N(N − 1))/(2N^2)
               = 1/2
Solution 4.7
Let u = sin x and dv = sin x dx. Then du = cos x dx and v = −cos x.

    ∫_0^π sin^2 x dx = [−sin x cos x]_0^π + ∫_0^π cos^2 x dx
                     = ∫_0^π cos^2 x dx
                     = ∫_0^π (1 − sin^2 x) dx
                     = π − ∫_0^π sin^2 x dx

    2 ∫_0^π sin^2 x dx = π

    ∫_0^π sin^2 x dx = π/2
Solution 4.8
Let H'(x) = h(x).

    d/dx ∫_{g(x)}^{f(x)} h(ξ) dξ = d/dx (H(f(x)) − H(g(x)))
                                 = H'(f(x))f'(x) − H'(g(x))g'(x)
                                 = h(f(x))f'(x) − h(g(x))g'(x)
Solution 4.9
First we compute the area for positive integer n.

    A_n = ∫_0^1 (x − x^n) dx = [x^2/2 − x^(n+1)/(n + 1)]_0^1 = 1/2 − 1/(n + 1)

Then we consider the area in the limit as n → ∞.

    lim_{n→∞} A_n = lim_{n→∞} (1/2 − 1/(n + 1)) = 1/2

In Figure 4.3 we plot the functions x^1, x^2, x^4, x^8, . . . , x^1024. In the limit as n → ∞, x^n
on the interval [0 . . . 1] tends to the function

    { 0 for 0 ≤ x < 1,
      1 for x = 1. }

Thus the area tends to the area of the right triangle with unit base and height.

Figure 4.3: Plots of x^1, x^2, x^4, x^8, . . . , x^1024.
Solution 4.10
1.

    f(0) + ∫_0^x f'(x − ξ) dξ = f(0) + [−f(x − ξ)]_0^x = f(0) − f(0) + f(x) = f(x)

2.

    f(0) + xf'(0) + ∫_0^x ξ f''(x − ξ) dξ
      = f(0) + xf'(0) + [−ξ f'(x − ξ)]_0^x − ∫_0^x −f'(x − ξ) dξ
      = f(0) + xf'(0) − xf'(0) − [f(x − ξ)]_0^x
      = f(0) − f(0) + f(x)
      = f(x)

3. Above we showed that the hypothesis holds for n = 0 and n = 1. Assume that it holds for some
n ≥ 0.

    f(x) = f(0) + xf'(0) + (1/2)x^2 f''(0) + · · · + (1/n!)x^n f^(n)(0)
           + ∫_0^x (1/n!) ξ^n f^(n+1)(x − ξ) dξ
         = f(0) + xf'(0) + (1/2)x^2 f''(0) + · · · + (1/n!)x^n f^(n)(0)
           + [(1/(n + 1)!) ξ^(n+1) f^(n+1)(x − ξ)]_0^x − ∫_0^x −(1/(n + 1)!) ξ^(n+1) f^(n+2)(x − ξ) dξ
         = f(0) + xf'(0) + (1/2)x^2 f''(0) + · · · + (1/n!)x^n f^(n)(0)
           + (1/(n + 1)!) x^(n+1) f^(n+1)(0) + ∫_0^x (1/(n + 1)!) ξ^(n+1) f^(n+2)(x − ξ) dξ

This shows that the hypothesis holds for n + 1. By induction, the hypothesis holds for all n ≥ 0.
Solution 4.11
First note that the arc length from a to b is 2(b − a).

    ∫_a^b √(1 + (f'(x))^2) dx = ∫_0^b √(1 + (f'(x))^2) dx − ∫_0^a √(1 + (f'(x))^2) dx = 2b − 2a

Since a and b are arbitrary, we conclude that the integrand must everywhere be 2.

    √(1 + (f'(x))^2) = 2
    f'(x) = ±√3

f(x) is a continuous, piecewise differentiable function which satisfies f'(x) = ±√3 at the points
where it is differentiable. One example is

    f(x) = √3 x.

Solution 4.12
CONTINUE
Solution 4.13
Let u = x, and dv = sin x dx. Then du = dx and v = −cos x.

    ∫ x sin x dx = −x cos x + ∫ cos x dx = −x cos x + sin x + C

Solution 4.14
Let u = x^3 and dv = e^(2x) dx. Then du = 3x^2 dx and v = (1/2) e^(2x).

    ∫ x^3 e^(2x) dx = (1/2) x^3 e^(2x) − (3/2) ∫ x^2 e^(2x) dx

Let u = x^2 and dv = e^(2x) dx. Then du = 2x dx and v = (1/2) e^(2x).

    ∫ x^3 e^(2x) dx = (1/2) x^3 e^(2x) − (3/2)((1/2) x^2 e^(2x) − ∫ x e^(2x) dx)
    ∫ x^3 e^(2x) dx = (1/2) x^3 e^(2x) − (3/4) x^2 e^(2x) + (3/2) ∫ x e^(2x) dx

Let u = x and dv = e^(2x) dx. Then du = dx and v = (1/2) e^(2x).

    ∫ x^3 e^(2x) dx = (1/2) x^3 e^(2x) − (3/4) x^2 e^(2x) + (3/2)((1/2) x e^(2x) − (1/2) ∫ e^(2x) dx)

    ∫ x^3 e^(2x) dx = (1/2) x^3 e^(2x) − (3/4) x^2 e^(2x) + (3/4) x e^(2x) − (3/8) e^(2x) + C
Solution 4.15
Expanding the integrand in partial fractions,

    1/(x^2 − 4) = 1/((x − 2)(x + 2)) = A/(x − 2) + B/(x + 2)
    1 = A(x + 2) + B(x − 2)

Setting x = 2 yields A = 1/4. Setting x = −2 yields B = −1/4. Now we can do the integral.

    ∫ 1/(x^2 − 4) dx = ∫ (1/(4(x − 2)) − 1/(4(x + 2))) dx
                     = (1/4) ln |x − 2| − (1/4) ln |x + 2| + C
                     = (1/4) ln |(x − 2)/(x + 2)| + C
Solution 4.16
Expanding the integrand in partial fractions,
\[
\frac{x+1}{x^3+x^2-6x} = \frac{x+1}{x(x-2)(x+3)} = \frac{A}{x} + \frac{B}{x-2} + \frac{C}{x+3}
\]
\[
x+1 = A(x-2)(x+3) + Bx(x+3) + Cx(x-2)
\]
Setting $x = 0$ yields $A = -\frac{1}{6}$. Setting $x = 2$ yields $B = \frac{3}{10}$. Setting $x = -3$ yields $C = -\frac{2}{15}$.
\begin{align*}
\int \frac{x+1}{x^3+x^2-6x}\,dx &= \int\left(-\frac{1}{6x} + \frac{3}{10(x-2)} - \frac{2}{15(x+3)}\right) dx \\
&= -\frac{1}{6}\ln|x| + \frac{3}{10}\ln|x-2| - \frac{2}{15}\ln|x+3| + C \\
&= \ln\frac{|x-2|^{3/10}}{|x|^{1/6}\,|x+3|^{2/15}} + C
\end{align*}
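The constants in expansions like this come from the "cover-up" evaluation used above: to get the coefficient of $1/(x-r)$, delete that factor from the denominator and evaluate the rest at $x = r$. A small sketch of that idea (our own illustration, assuming distinct simple roots):

```python
def coverup_coefficients(numer, roots):
    """Partial fraction coefficients of numer(x) / prod(x - r) for distinct roots.

    The coefficient of 1/(x - r) is numer(r) divided by the product of (r - s)
    over the remaining roots s.
    """
    coeffs = {}
    for r in roots:
        denom = 1.0
        for s in roots:
            if s != r:
                denom *= (r - s)
        coeffs[r] = numer(r) / denom
    return coeffs

# (x + 1) / (x (x - 2)(x + 3)): expect A = -1/6, B = 3/10, C = -2/15.
print(coverup_coefficients(lambda x: x + 1, [0.0, 2.0, -3.0]))
```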
Solution 4.17
\begin{align*}
\int_0^4 \frac{dx}{(x-1)^2}
&= \lim_{\delta\to0^+} \int_0^{1-\delta} \frac{dx}{(x-1)^2}
 + \lim_{\epsilon\to0^+} \int_{1+\epsilon}^4 \frac{dx}{(x-1)^2} \\
&= \lim_{\delta\to0^+} \left[-\frac{1}{x-1}\right]_0^{1-\delta}
 + \lim_{\epsilon\to0^+} \left[-\frac{1}{x-1}\right]_{1+\epsilon}^4 \\
&= \lim_{\delta\to0^+} \left(\frac{1}{\delta} - 1\right)
 + \lim_{\epsilon\to0^+} \left(-\frac{1}{3} + \frac{1}{\epsilon}\right) \\
&= \infty + \infty
\end{align*}
The integral diverges.
Solution 4.18
\begin{align*}
\int_0^1 \frac{dx}{\sqrt{x}} &= \lim_{\epsilon\to0^+} \int_\epsilon^1 \frac{dx}{\sqrt{x}} \\
&= \lim_{\epsilon\to0^+} \left[2\sqrt{x}\right]_\epsilon^1 \\
&= \lim_{\epsilon\to0^+} 2\left(1 - \sqrt{\epsilon}\right) \\
&= 2
\end{align*}
Solution 4.19
\begin{align*}
\int_0^\infty \frac{dx}{x^2+4} &= \lim_{\alpha\to\infty} \int_0^\alpha \frac{dx}{x^2+4} \\
&= \lim_{\alpha\to\infty} \left[\frac{1}{2}\arctan\frac{x}{2}\right]_0^\alpha \\
&= \frac{1}{2}\left(\frac{\pi}{2} - 0\right) \\
&= \frac{\pi}{4}
\end{align*}
4.9 Quiz
Problem 4.1
Write the limit-sum definition of $\int_a^b f(x)\,dx$.
Solution
Problem 4.2
Evaluate $\int_{-1}^{2} \sqrt{|x|}\,dx$.
Solution
Problem 4.3
Evaluate $\frac{d}{dx}\int_x^{x^2} f(\xi)\,d\xi$.
Solution
Problem 4.4
Evaluate $\int \frac{1+x+x^2}{(x+1)^3}\,dx$.
Solution
Problem 4.5
State the integral mean value theorem.
Solution
Problem 4.6
What is the partial fraction expansion of $\frac{1}{x(x-1)(x-2)(x-3)}$?
Solution
4.10 Quiz Solutions
Solution 4.1
Let $a = x_0 < x_1 < \cdots < x_{n-1} < x_n = b$ be a partition of the interval $(a..b)$. We define $\Delta x_i = x_{i+1} - x_i$ and $\Delta x = \max_i \Delta x_i$ and choose $\xi_i \in [x_i..x_{i+1}]$.
\[
\int_a^b f(x)\,dx = \lim_{\Delta x\to0} \sum_{i=0}^{n-1} f(\xi_i)\,\Delta x_i
\]
Solution 4.2
\begin{align*}
\int_{-1}^{2} \sqrt{|x|}\,dx &= \int_{-1}^0 \sqrt{-x}\,dx + \int_0^2 \sqrt{x}\,dx \\
&= \int_0^1 \sqrt{x}\,dx + \int_0^2 \sqrt{x}\,dx \\
&= \left[\frac{2}{3}x^{3/2}\right]_0^1 + \left[\frac{2}{3}x^{3/2}\right]_0^2 \\
&= \frac{2}{3} + \frac{2}{3}\,2^{3/2} \\
&= \frac{2}{3}\left(1 + 2\sqrt{2}\right)
\end{align*}
Solution 4.3
\begin{align*}
\frac{d}{dx}\int_x^{x^2} f(\xi)\,d\xi &= f\left(x^2\right)\frac{d}{dx}\left(x^2\right) - f(x)\frac{d}{dx}(x) \\
&= 2xf\left(x^2\right) - f(x)
\end{align*}
Solution 4.4
First we expand the integrand in partial fractions.
\[
\frac{1+x+x^2}{(x+1)^3} = \frac{a}{(x+1)^3} + \frac{b}{(x+1)^2} + \frac{c}{x+1}
\]
\begin{align*}
a &= \left.\left(1+x+x^2\right)\right|_{x=-1} = 1 \\
b &= \left.\frac{1}{1!}\frac{d}{dx}\left(1+x+x^2\right)\right|_{x=-1} = \left.(1+2x)\right|_{x=-1} = -1 \\
c &= \left.\frac{1}{2!}\frac{d^2}{dx^2}\left(1+x+x^2\right)\right|_{x=-1} = \left.\frac{1}{2}(2)\right|_{x=-1} = 1
\end{align*}
Then we can do the integration.
\begin{align*}
\int \frac{1+x+x^2}{(x+1)^3}\,dx &= \int\left(\frac{1}{(x+1)^3} - \frac{1}{(x+1)^2} + \frac{1}{x+1}\right) dx \\
&= -\frac{1}{2(x+1)^2} + \frac{1}{x+1} + \ln|x+1| \\
&= \frac{x+1/2}{(x+1)^2} + \ln|x+1|
\end{align*}
Solution 4.5
Let $f(x)$ be continuous. Then
\[
\int_a^b f(x)\,dx = (b-a)f(\xi),
\]
for some $\xi \in [a..b]$.
Solution 4.6
\[
\frac{1}{x(x-1)(x-2)(x-3)} = \frac{a}{x} + \frac{b}{x-1} + \frac{c}{x-2} + \frac{d}{x-3}
\]
\begin{align*}
a &= \frac{1}{(0-1)(0-2)(0-3)} = -\frac{1}{6} \\
b &= \frac{1}{(1)(1-2)(1-3)} = \frac{1}{2} \\
c &= \frac{1}{(2)(2-1)(2-3)} = -\frac{1}{2} \\
d &= \frac{1}{(3)(3-1)(3-2)} = \frac{1}{6}
\end{align*}
\[
\frac{1}{x(x-1)(x-2)(x-3)} = -\frac{1}{6x} + \frac{1}{2(x-1)} - \frac{1}{2(x-2)} + \frac{1}{6(x-3)}
\]
Chapter 5
Vector Calculus
5.1 Vector Functions
Vector-valued Functions. A vector-valued function, $\mathbf{r}(t)$, is a mapping $\mathbf{r} : \mathbb{R} \to \mathbb{R}^n$ that assigns a vector to each value of $t$.
\[
\mathbf{r}(t) = r_1(t)\mathbf{e}_1 + \cdots + r_n(t)\mathbf{e}_n
\]
An example of a vector-valued function is the position of an object in space as a function of time. The function is continuous at a point $t = \tau$ if
\[
\lim_{t\to\tau} \mathbf{r}(t) = \mathbf{r}(\tau).
\]
This occurs if and only if the component functions are continuous. The function is differentiable if
\[
\frac{d\mathbf{r}}{dt} \equiv \lim_{\Delta t\to0} \frac{\mathbf{r}(t+\Delta t) - \mathbf{r}(t)}{\Delta t}
\]
exists. This occurs if and only if the component functions are differentiable.
If $\mathbf{r}(t)$ represents the position of a particle at time $t$, then the velocity and acceleration of the particle are
\[
\frac{d\mathbf{r}}{dt} \quad\text{and}\quad \frac{d^2\mathbf{r}}{dt^2},
\]
respectively. The speed of the particle is $|\mathbf{r}'(t)|$.
Differentiation Formulas. Let $\mathbf{f}(t)$ and $\mathbf{g}(t)$ be vector functions and $a(t)$ be a scalar function. By writing out components you can verify the differentiation formulas:
\begin{align*}
\frac{d}{dt}(\mathbf{f}\cdot\mathbf{g}) &= \mathbf{f}'\cdot\mathbf{g} + \mathbf{f}\cdot\mathbf{g}' \\
\frac{d}{dt}(\mathbf{f}\times\mathbf{g}) &= \mathbf{f}'\times\mathbf{g} + \mathbf{f}\times\mathbf{g}' \\
\frac{d}{dt}(a\mathbf{f}) &= a'\mathbf{f} + a\mathbf{f}'
\end{align*}
5.2 Gradient, Divergence and Curl
Scalar and Vector Fields. A scalar field is a function of position $u(\mathbf{x})$ that assigns a scalar to each point in space. A function that gives the temperature of a material is an example of a scalar field. In two dimensions, you can graph a scalar field as a surface plot (Figure 5.1), with the vertical axis for the value of the function.
A vector field is a function of position $\mathbf{u}(\mathbf{x})$ that assigns a vector to each point in space. Examples of vector fields are functions that give the acceleration due to gravity or the velocity of a fluid. You can graph a vector field in two or three dimensions by drawing vectors at regularly spaced points. (See Figure 5.1 for a vector field in two dimensions.)

Figure 5.1: A Scalar Field and a Vector Field
Partial Derivatives of Scalar Fields. Consider a scalar field $u(\mathbf{x})$. The partial derivative of $u$ with respect to $x_k$ is the derivative of $u$ in which $x_k$ is considered to be a variable and the remaining arguments are considered to be parameters. The partial derivative is denoted $\frac{\partial}{\partial x_k}u(\mathbf{x})$, $\frac{\partial u}{\partial x_k}$ or $u_{x_k}$ and is defined
\[
\frac{\partial u}{\partial x_k} \equiv \lim_{\Delta x\to0} \frac{u(x_1,\ldots,x_k+\Delta x,\ldots,x_n) - u(x_1,\ldots,x_k,\ldots,x_n)}{\Delta x}.
\]
Partial derivatives have the same differentiation formulas as ordinary derivatives.
Consider a scalar field in $\mathbb{R}^3$, $u(x,y,z)$. Higher derivatives of $u$ are denoted:
\begin{align*}
u_{xx} &\equiv \frac{\partial^2 u}{\partial x^2} \equiv \frac{\partial}{\partial x}\frac{\partial u}{\partial x}, \\
u_{xy} &\equiv \frac{\partial^2 u}{\partial x\,\partial y} \equiv \frac{\partial}{\partial x}\frac{\partial u}{\partial y}, \\
u_{xxyz} &\equiv \frac{\partial^4 u}{\partial x^2\,\partial y\,\partial z} \equiv \frac{\partial^2}{\partial x^2}\frac{\partial}{\partial y}\frac{\partial u}{\partial z}.
\end{align*}
If $u_{xy}$ and $u_{yx}$ are continuous, then
\[
\frac{\partial^2 u}{\partial x\,\partial y} = \frac{\partial^2 u}{\partial y\,\partial x}.
\]
This is referred to as the equality of mixed partial derivatives.
Partial Derivatives of Vector Fields. Consider a vector field $\mathbf{u}(\mathbf{x})$. The partial derivative of $\mathbf{u}$ with respect to $x_k$ is denoted $\frac{\partial}{\partial x_k}\mathbf{u}(\mathbf{x})$, $\frac{\partial\mathbf{u}}{\partial x_k}$ or $\mathbf{u}_{x_k}$ and is defined
\[
\frac{\partial\mathbf{u}}{\partial x_k} \equiv \lim_{\Delta x\to0} \frac{\mathbf{u}(x_1,\ldots,x_k+\Delta x,\ldots,x_n) - \mathbf{u}(x_1,\ldots,x_k,\ldots,x_n)}{\Delta x}.
\]
Partial derivatives of vector fields have the same differentiation formulas as ordinary derivatives.
Gradient. We introduce the vector differential operator,
\[
\nabla \equiv \frac{\partial}{\partial x_1}\mathbf{e}_1 + \cdots + \frac{\partial}{\partial x_n}\mathbf{e}_n,
\]
which is known as del or nabla. In $\mathbb{R}^3$ it is
\[
\nabla \equiv \frac{\partial}{\partial x}\mathbf{i} + \frac{\partial}{\partial y}\mathbf{j} + \frac{\partial}{\partial z}\mathbf{k}.
\]
Let $u(\mathbf{x})$ be a differentiable scalar field. The gradient of $u$ is
\[
\nabla u \equiv \frac{\partial u}{\partial x_1}\mathbf{e}_1 + \cdots + \frac{\partial u}{\partial x_n}\mathbf{e}_n.
\]
Directional Derivative. Suppose you are standing on some terrain. The slope of the ground in a particular direction is the directional derivative of the elevation in that direction. Consider a differentiable scalar field, $u(\mathbf{x})$. The derivative of the function in the direction of the unit vector $\mathbf{a}$ is the rate of change of the function in that direction. Thus the directional derivative, $D_{\mathbf{a}}u$, is defined:
\begin{align*}
D_{\mathbf{a}}u(\mathbf{x}) &= \lim_{\epsilon\to0} \frac{u(\mathbf{x}+\epsilon\mathbf{a}) - u(\mathbf{x})}{\epsilon} \\
&= \lim_{\epsilon\to0} \frac{u(x_1+\epsilon a_1,\ldots,x_n+\epsilon a_n) - u(x_1,\ldots,x_n)}{\epsilon} \\
&= \lim_{\epsilon\to0} \frac{u(\mathbf{x}) + \epsilon a_1 u_{x_1}(\mathbf{x}) + \cdots + \epsilon a_n u_{x_n}(\mathbf{x}) + O\left(\epsilon^2\right) - u(\mathbf{x})}{\epsilon} \\
&= a_1 u_{x_1}(\mathbf{x}) + \cdots + a_n u_{x_n}(\mathbf{x})
\end{align*}
\[
D_{\mathbf{a}}u(\mathbf{x}) = \nabla u(\mathbf{x})\cdot\mathbf{a}.
\]
Tangent to a Surface. The gradient, $\nabla f$, is orthogonal to the surface $f(\mathbf{x}) = 0$. Consider a point $\boldsymbol{\xi}$ on the surface. Let the differential $d\mathbf{r} = dx_1\mathbf{e}_1 + \cdots + dx_n\mathbf{e}_n$ lie in the tangent plane at $\boldsymbol{\xi}$. Then
\[
df = \frac{\partial f}{\partial x_1}dx_1 + \cdots + \frac{\partial f}{\partial x_n}dx_n = 0
\]
since $f(\mathbf{x}) = 0$ on the surface. Then
\begin{align*}
\nabla f\cdot d\mathbf{r} &= \left(\frac{\partial f}{\partial x_1}\mathbf{e}_1 + \cdots + \frac{\partial f}{\partial x_n}\mathbf{e}_n\right)\cdot(dx_1\mathbf{e}_1 + \cdots + dx_n\mathbf{e}_n) \\
&= \frac{\partial f}{\partial x_1}dx_1 + \cdots + \frac{\partial f}{\partial x_n}dx_n \\
&= 0
\end{align*}
Thus $\nabla f$ is orthogonal to the tangent plane and hence to the surface.
Example 5.2.1 Consider the paraboloid, $x^2 + y^2 - z = 0$. We want to find the tangent plane to the surface at the point $(1,1,2)$. The gradient is
\[
\nabla f = 2x\mathbf{i} + 2y\mathbf{j} - \mathbf{k}.
\]
At the point $(1,1,2)$ this is
\[
\nabla f(1,1,2) = 2\mathbf{i} + 2\mathbf{j} - \mathbf{k}.
\]
We know a point on the tangent plane, $(1,1,2)$, and the normal, $\nabla f(1,1,2)$. The equation of the plane is
\[
\nabla f(1,1,2)\cdot(x,y,z) = \nabla f(1,1,2)\cdot(1,1,2)
\]
\[
2x + 2y - z = 2
\]
The gradient of the function $f(\mathbf{x}) = 0$, $\nabla f(\mathbf{x})$, is in the direction of the maximum directional derivative. The magnitude of the gradient, $|\nabla f(\mathbf{x})|$, is the value of the directional derivative in that direction. To derive this, note that
\[
D_{\mathbf{a}}f = \nabla f\cdot\mathbf{a} = |\nabla f|\cos\theta,
\]
where $\theta$ is the angle between $\nabla f$ and $\mathbf{a}$. $D_{\mathbf{a}}f$ is maximum when $\theta = 0$, i.e. when $\mathbf{a}$ is in the same direction as $\nabla f$. In this direction, $D_{\mathbf{a}}f = |\nabla f|$. To use the elevation example, $\nabla f$ points in the uphill direction and $|\nabla f|$ is the uphill slope.
Example 5.2.2 Suppose that the two surfaces $f(\mathbf{x}) = 0$ and $g(\mathbf{x}) = 0$ intersect at the point $\mathbf{x} = \boldsymbol{\xi}$. What is the angle between their tangent planes at that point? First we note that the angle between the tangent planes is by definition the angle between their normals. These normals are in the direction of $\nabla f(\boldsymbol{\xi})$ and $\nabla g(\boldsymbol{\xi})$. (We assume these are nonzero.) The angle, $\theta$, between the tangent planes to the surfaces is
\[
\theta = \arccos\left(\frac{\nabla f(\boldsymbol{\xi})\cdot\nabla g(\boldsymbol{\xi})}{|\nabla f(\boldsymbol{\xi})|\,|\nabla g(\boldsymbol{\xi})|}\right).
\]
Example 5.2.3 Let $u$ be the distance from the origin:
\[
u(\mathbf{x}) = \sqrt{\mathbf{x}\cdot\mathbf{x}} = \sqrt{x_ix_i}.
\]
In three dimensions, this is
\[
u(x,y,z) = \sqrt{x^2+y^2+z^2}.
\]
The gradient of $u$, $\nabla u(\mathbf{x})$, is a unit vector in the direction of $\mathbf{x}$. The gradient is:
\[
\nabla u(\mathbf{x}) = \left(\frac{x_1}{\sqrt{\mathbf{x}\cdot\mathbf{x}}},\ldots,\frac{x_n}{\sqrt{\mathbf{x}\cdot\mathbf{x}}}\right) = \frac{x_i\mathbf{e}_i}{\sqrt{x_jx_j}}.
\]
In three dimensions, we have
\[
\nabla u(x,y,z) = \left(\frac{x}{\sqrt{x^2+y^2+z^2}}, \frac{y}{\sqrt{x^2+y^2+z^2}}, \frac{z}{\sqrt{x^2+y^2+z^2}}\right).
\]
This is a unit vector because the sum of its squared components is unity.
\[
\nabla u\cdot\nabla u = \frac{x_i\mathbf{e}_i}{\sqrt{x_jx_j}}\cdot\frac{x_k\mathbf{e}_k}{\sqrt{x_lx_l}} = \frac{x_ix_i}{x_jx_j} = 1
\]
Figure 5.2 shows a plot of the vector field of $\nabla u$ in two dimensions.
Figure 5.2: The gradient of the distance from the origin.
Example 5.2.4 Consider an ellipse. An implicit equation of an ellipse is
\[
\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1.
\]
We can also express an ellipse as $u(x,y) + v(x,y) = c$ where $u$ and $v$ are the distances from the two foci. That is, an ellipse is the set of points such that the sum of the distances from the two foci is a constant. Let $\mathbf{n} = \nabla(u+v)$. This is a vector which is orthogonal to the ellipse when evaluated on the surface. Let $\mathbf{t}$ be a unit tangent to the surface. Since $\mathbf{n}$ and $\mathbf{t}$ are orthogonal,
\begin{align*}
\mathbf{n}\cdot\mathbf{t} &= 0 \\
(\nabla u + \nabla v)\cdot\mathbf{t} &= 0 \\
\nabla u\cdot\mathbf{t} &= \nabla v\cdot(-\mathbf{t}).
\end{align*}
Since these are unit vectors, the angle between $\nabla u$ and $\mathbf{t}$ is equal to the angle between $\nabla v$ and $-\mathbf{t}$. In other words: If we draw rays from the foci to a point on the ellipse, the rays make equal angles with the ellipse. If the ellipse were a reflective surface, a wave starting at one focus would be reflected from the ellipse and travel to the other focus. See Figure 5.3. This result also holds for ellipsoids, $u(x,y,z) + v(x,y,z) = c$.

Figure 5.3: An ellipse and rays from the foci.
Figure 5.4: An elliptical dish.

We see that an ellipsoidal dish could be used to collect spherical waves (waves emanating from a point). If the dish is shaped so that the source of the waves is located at one focus and a collector is placed at the second, then any wave starting at the source and reflecting off the dish will travel to the collector. See Figure 5.4.
5.3 Exercises
Vector Functions
Exercise 5.1
Consider the parametric curve
\[
\mathbf{r} = \cos\frac{t}{2}\,\mathbf{i} + \sin\frac{t}{2}\,\mathbf{j}.
\]
Calculate $\frac{d\mathbf{r}}{dt}$ and $\frac{d^2\mathbf{r}}{dt^2}$. Plot the position and some velocity and acceleration vectors.
Hint, Solution
Exercise 5.2
Let $\mathbf{r}(t)$ be the position of an object moving with constant speed. Show that the acceleration of the object is orthogonal to the velocity of the object.
Hint, Solution
Vector Fields
Exercise 5.3
Consider the paraboloid $x^2 + y^2 - z = 0$. What is the angle between the two tangent planes that touch the surface at $(1,1,2)$ and $(1,-1,2)$? What are the equations of the tangent planes at these points?
Hint, Solution
Exercise 5.4
Consider the paraboloid $x^2 + y^2 - z = 0$. What is the point on the paraboloid that is closest to $(1,0,0)$?
Hint, Solution
Exercise 5.5
Consider the region $R$ defined by $x^2 + xy + y^2 \le 9$. What is the volume of the solid obtained by rotating $R$ about the $y$ axis?
Is this the same as the volume of the solid obtained by rotating $R$ about the $x$ axis? Give geometric and algebraic explanations of this.
Hint, Solution
Exercise 5.6
Two cylinders of unit radius intersect at right angles as shown in Figure 5.5. What is the volume of the solid enclosed by the cylinders?
Figure 5.5: Two cylinders intersecting.
Hint, Solution
Exercise 5.7
Consider the curve $f(x) = 1/x$ on the interval $[1\ldots\infty)$. Let $S$ be the solid obtained by rotating $f(x)$ about the $x$ axis. (See Figure 5.6.) Show that the length of $f(x)$ and the lateral area of $S$ are infinite. Find the volume of $S$.¹

Figure 5.6: The rotation of $1/x$ about the $x$ axis.

Hint, Solution
Exercise 5.8
Suppose that a deposit of oil looks like a cone in the ground as illustrated in Figure 5.7. Suppose that the oil has a density of $800\,\mathrm{kg/m^3}$ and its vertical depth is $12\,\mathrm{m}$. How much work² would it take to get the oil to the surface?

Figure 5.7: The oil deposit. (Figure labels: 32 m, 12 m, 12 m, ground, surface.)

Hint, Solution
Exercise 5.9
Find the area and volume of a sphere of radius $R$ by integrating in spherical coordinates.
Hint, Solution
¹You could fill $S$ with a finite amount of paint, but it would take an infinite amount of paint to cover its surface.
²Recall that work = force × distance and force = mass × acceleration.
5.4 Hints
Vector Functions
Hint 5.1
Plot the velocity and acceleration vectors at regular intervals along the path of motion.
Hint 5.2
If $\mathbf{r}(t)$ has constant speed, then $|\mathbf{r}'(t)| = c$. The condition that the acceleration is orthogonal to the velocity can be stated mathematically in terms of the dot product, $\mathbf{r}''(t)\cdot\mathbf{r}'(t) = 0$. Write the condition of constant speed in terms of a dot product and go from there.
Vector Fields
Hint 5.3
The angle between two planes is the angle between the vectors orthogonal to the planes. The angle between the two vectors is
\[
\theta = \arccos\left(\frac{\langle2,2,-1\rangle\cdot\langle2,-2,-1\rangle}{|\langle2,2,-1\rangle|\,|\langle2,-2,-1\rangle|}\right).
\]
The equation of a plane orthogonal to $\mathbf{a}$ and passing through the point $\mathbf{b}$ is $\mathbf{a}\cdot\mathbf{x} = \mathbf{a}\cdot\mathbf{b}$.
Hint 5.4
Since the paraboloid is a differentiable surface, the normal to the surface at the closest point will be parallel to the vector from the closest point to $(1,0,0)$. We can express this using the gradient and the cross product. If $(x,y,z)$ is the closest point on the paraboloid, then a vector orthogonal to the surface there is $\nabla f = \langle2x,2y,-1\rangle$. The vector from the surface to the point $(1,0,0)$ is $\langle1-x,-y,-z\rangle$. These two vectors are parallel if their cross product is zero.
Hint 5.5
CONTINUE
Hint 5.6
CONTINUE
Hint 5.7
CONTINUE
Hint 5.8
Start with the formula for the work required to move the oil to the surface. Integrate over the mass of the oil.
\[
\text{Work} = \int (\text{acceleration})(\text{distance})\,d(\text{mass})
\]
Here (distance) is the distance of the differential of mass from the surface. The acceleration is that of gravity, $g$.
Hint 5.9
CONTINUE
5.5 Solutions
Vector Functions
Solution 5.1
The velocity is
\[
\mathbf{r}' = -\frac{1}{2}\sin\frac{t}{2}\,\mathbf{i} + \frac{1}{2}\cos\frac{t}{2}\,\mathbf{j}.
\]
The acceleration is
\[
\mathbf{r}'' = -\frac{1}{4}\cos\frac{t}{2}\,\mathbf{i} - \frac{1}{4}\sin\frac{t}{2}\,\mathbf{j}.
\]
See Figure 5.8 for plots of position, velocity and acceleration.
Figure 5.8: A Graph of Position and Velocity and of Position and Acceleration
Solution 5.2
If $\mathbf{r}(t)$ has constant speed, then $|\mathbf{r}'(t)| = c$. The condition that the acceleration is orthogonal to the velocity can be stated mathematically in terms of the dot product, $\mathbf{r}''(t)\cdot\mathbf{r}'(t) = 0$. Note that we can write the condition of constant speed in terms of a dot product,
\[
\sqrt{\mathbf{r}'(t)\cdot\mathbf{r}'(t)} = c,
\qquad
\mathbf{r}'(t)\cdot\mathbf{r}'(t) = c^2.
\]
Differentiating this equation yields,
\[
\mathbf{r}''(t)\cdot\mathbf{r}'(t) + \mathbf{r}'(t)\cdot\mathbf{r}''(t) = 0
\]
\[
\mathbf{r}''(t)\cdot\mathbf{r}'(t) = 0.
\]
This shows that the acceleration is orthogonal to the velocity.
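A numerical illustration of this fact (our own sketch, not from the text): for circular motion the speed is constant, and the dot product $\mathbf{r}''\cdot\mathbf{r}'$ vanishes.

```python
import numpy as np

t = 1.7  # any time
# Circular motion at constant speed: r(t) = (cos t, sin t).
v = np.array([-np.sin(t),  np.cos(t)])   # r'(t), the velocity
a = np.array([-np.cos(t), -np.sin(t)])   # r''(t), the acceleration
print(np.dot(a, v))   # 0 up to rounding: acceleration is orthogonal to velocity
```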
Vector Fields
Solution 5.3
The gradient, which is orthogonal to the surface when evaluated there, is $\nabla f = 2x\mathbf{i} + 2y\mathbf{j} - \mathbf{k}$. $2\mathbf{i}+2\mathbf{j}-\mathbf{k}$ and $2\mathbf{i}-2\mathbf{j}-\mathbf{k}$ are orthogonal to the paraboloid (and hence the tangent planes) at the points $(1,1,2)$ and $(1,-1,2)$, respectively. The angle between the tangent planes is the angle between the vectors orthogonal to the planes. The angle between the two vectors is
\[
\theta = \arccos\left(\frac{\langle2,2,-1\rangle\cdot\langle2,-2,-1\rangle}{|\langle2,2,-1\rangle|\,|\langle2,-2,-1\rangle|}\right)
\]
\[
\theta = \arccos\frac{1}{9} \approx 1.45946.
\]
Recall that the equation of a plane orthogonal to $\mathbf{a}$ and passing through the point $\mathbf{b}$ is $\mathbf{a}\cdot\mathbf{x} = \mathbf{a}\cdot\mathbf{b}$. The equations of the tangent planes are
\[
\langle2,\pm2,-1\rangle\cdot\langle x,y,z\rangle = \langle2,\pm2,-1\rangle\cdot\langle1,\pm1,2\rangle,
\]
\[
2x \pm 2y - z = 2.
\]
The paraboloid and the tangent planes are shown in Figure 5.9.

Figure 5.9: Paraboloid and Two Tangent Planes
Solution 5.4
Since the paraboloid is a differentiable surface, the normal to the surface at the closest point will be parallel to the vector from the closest point to $(1,0,0)$. We can express this using the gradient and the cross product. If $(x,y,z)$ is the closest point on the paraboloid, then a vector orthogonal to the surface there is $\nabla f = \langle2x,2y,-1\rangle$. The vector from the surface to the point $(1,0,0)$ is $\langle1-x,-y,-z\rangle$. These two vectors are parallel if their cross product is zero,
\[
\langle2x,2y,-1\rangle\times\langle1-x,-y,-z\rangle = \langle-y-2yz,\;-1+x+2xz,\;-2y\rangle = \mathbf{0}.
\]
This gives us the three equations,
\begin{align*}
-y - 2yz &= 0, \\
-1 + x + 2xz &= 0, \\
-2y &= 0.
\end{align*}
The third equation requires that $y = 0$. The first equation then becomes trivial and we are left with the second equation,
\[
-1 + x + 2xz = 0.
\]
Substituting $z = x^2 + y^2$ into this equation yields,
\[
2x^3 + x - 1 = 0.
\]
The only real valued solution of this polynomial is
\[
x = \frac{6^{-2/3}\left(9+\sqrt{87}\right)^{2/3} - 6^{-1/3}}{\left(9+\sqrt{87}\right)^{1/3}} \approx 0.589755.
\]
Thus the closest point to $(1,0,0)$ on the paraboloid is
\[
\left(x, 0, x^2\right) \approx (0.589755, 0, 0.34781).
\]
The closest point is shown graphically in Figure 5.10.
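The cubic $2x^3 + x - 1 = 0$ is also easy to solve numerically; a brief sketch of ours using NumPy:

```python
import numpy as np

roots = np.roots([2.0, 0.0, 1.0, -1.0])  # coefficients of 2x^3 + 0x^2 + x - 1
x = min(roots, key=lambda r: abs(r.imag)).real  # select the single real root
print(x, x**2)  # ~0.589755 and ~0.347812: the closest point is (x, 0, x^2)
```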
Figure 5.10: Paraboloid, Tangent Plane and Line Connecting $(1,0,0)$ to Closest Point
Solution 5.5
We consider the region $R$ defined by $x^2 + xy + y^2 \le 9$. The boundary of the region is an ellipse. (See Figure 5.11 for the ellipse and the solid obtained by rotating the region.) Note that in rotating the region about the $y$ axis, only the portions in the second and fourth quadrants make a contribution. Since the solid is symmetric across the $xz$ plane, we will find the volume of the top half and then double this to get the volume of the whole solid. Now we consider rotating the region in the second quadrant about the $y$ axis. In the equation for the ellipse, $x^2 + xy + y^2 = 9$, we solve for $x$.
\[
x = \frac{1}{2}\left(-y \pm \sqrt{3}\sqrt{12-y^2}\right)
\]
In the second quadrant, the curve $\bigl(-y-\sqrt{3}\sqrt{12-y^2}\bigr)/2$ is defined on $y \in [0\ldots\sqrt{12}]$ and the curve $\bigl(-y+\sqrt{3}\sqrt{12-y^2}\bigr)/2$ is defined on $y \in [3\ldots\sqrt{12}]$. (See Figure 5.12.)

Figure 5.11: The curve $x^2 + xy + y^2 = 9$.
Figure 5.12: $\bigl(-y-\sqrt{3}\sqrt{12-y^2}\bigr)/2$ in red and $\bigl(-y+\sqrt{3}\sqrt{12-y^2}\bigr)/2$ in green.

We find the volume obtained
by rotating the first curve and subtract the volume from rotating the second curve.
\[
V = 2\left[\int_0^{\sqrt{12}} \pi\left(\frac{-y-\sqrt{3}\sqrt{12-y^2}}{2}\right)^2 dy - \int_3^{\sqrt{12}} \pi\left(\frac{-y+\sqrt{3}\sqrt{12-y^2}}{2}\right)^2 dy\right]
\]
\[
V = \frac{\pi}{2}\left[\int_0^{\sqrt{12}} \left(y+\sqrt{3}\sqrt{12-y^2}\right)^2 dy - \int_3^{\sqrt{12}} \left(-y+\sqrt{3}\sqrt{12-y^2}\right)^2 dy\right]
\]
\[
V = \frac{\pi}{2}\left[\int_0^{\sqrt{12}} \left(-2y^2 + \sqrt{12}\,y\sqrt{12-y^2} + 36\right) dy - \int_3^{\sqrt{12}} \left(-2y^2 - \sqrt{12}\,y\sqrt{12-y^2} + 36\right) dy\right]
\]
\[
V = \frac{\pi}{2}\left(\left[-\frac{2}{3}y^3 - \frac{2}{\sqrt{3}}\left(12-y^2\right)^{3/2} + 36y\right]_0^{\sqrt{12}} - \left[-\frac{2}{3}y^3 + \frac{2}{\sqrt{3}}\left(12-y^2\right)^{3/2} + 36y\right]_3^{\sqrt{12}}\right)
\]
\[
V = 72\pi
\]
Now consider the volume of the solid obtained by rotating $R$ about the $x$ axis. This is the same as the volume of the solid obtained by rotating $R$ about the $y$ axis. Geometrically we know this because $R$ is symmetric about the line $y = x$.
Now we justify it algebraically. Consider the phrase: Rotate the region $x^2 + xy + y^2 \le 9$ about the $x$ axis. We formally swap $x$ and $y$ to obtain: Rotate the region $y^2 + yx + x^2 \le 9$ about the $y$ axis. Which is the original problem.
Solution 5.6
We find the volume of the intersecting cylinders by summing the volumes of the two cylinders and then subtracting the volume of their intersection. The volume of each of the cylinders is $2\pi$. The intersection is shown in Figure 5.13. If we slice this solid along the plane $z = \text{const}$ we have a square with side length $2\sqrt{1-z^2}$. The volume of the intersection of the cylinders is
\[
\int_{-1}^{1} 4\left(1-z^2\right) dz.
\]
We compute the volume of the intersecting cylinders.

Figure 5.13: The intersection of the two cylinders.

\[
V = 2(2\pi) - 2\int_0^1 4\left(1-z^2\right) dz
\]
\[
V = 4\pi - \frac{16}{3}
\]
Solution 5.7
Since $f'(x) = -1/x^2$, the length of $f(x)$ is
\[
L = \int_1^\infty \sqrt{1 + 1/x^4}\,dx.
\]
Since $\sqrt{1+1/x^4} > 1/x$, the integral diverges. The length is infinite.
We find the area of $S$ by integrating the length of circles.
\[
A = \int_1^\infty \frac{2\pi}{x}\sqrt{1 + 1/x^4}\,dx
\]
This integral also diverges, since the integrand is greater than $2\pi/x$. The area is infinite.
Finally we find the volume of $S$ by integrating the area of disks.
\[
V = \int_1^\infty \frac{\pi}{x^2}\,dx = \left[-\frac{\pi}{x}\right]_1^\infty = \pi
\]
Solution 5.8
First we write the formula for the work required to move the oil to the surface. We integrate over the mass of the oil.
\[
\text{Work} = \int (\text{acceleration})(\text{distance})\,d(\text{mass})
\]
Here (distance) is the distance of the differential of mass from the surface. The acceleration is that of gravity, $g$. The differential of mass can be represented as a differential of volume times the density of the oil, $800\,\mathrm{kg/m^3}$.
\[
\text{Work} = \int 800\,g\,(\text{distance})\,d(\text{volume})
\]
We place the coordinate axis so that $z = 0$ coincides with the bottom of the cone. The oil lies between $z = 0$ and $z = 12$. The cross sectional area of the oil deposit at a fixed depth is $\pi z^2$. Thus the differential of volume is $\pi z^2\,dz$. This oil must be raised a distance of $24 - z$.
\[
W = \int_0^{12} 800\,g\,(24-z)\,\pi z^2\,dz
\]
\[
W = 6912000\,g\,\pi
\]
\[
W \approx 2.13\times10^8\ \frac{\mathrm{kg\,m^2}}{\mathrm{s^2}}
\]
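A quick numerical check of the work integral (our own sketch; the value $g \approx 9.81\,\mathrm{m/s^2}$ is our assumption):

```python
import math
from scipy.integrate import quad

g = 9.81  # m/s^2, assumed value for the acceleration of gravity

# Work integral: 800 g (24 - z) pi z^2 over z in [0, 12].
W, _ = quad(lambda z: 800 * g * (24 - z) * math.pi * z**2, 0, 12)
print(W)                      # ~2.13e8 J
print(6912000 * g * math.pi)  # the closed form gives the same value
```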
Solution 5.9
The Jacobian in spherical coordinates is $r^2\sin\phi$.
\begin{align*}
\text{area} &= \int_0^{2\pi}\int_0^\pi R^2\sin\phi\,d\phi\,d\theta \\
&= 2\pi R^2 \int_0^\pi \sin\phi\,d\phi \\
&= 2\pi R^2 \left[-\cos\phi\right]_0^\pi \\
\text{area} &= 4\pi R^2
\end{align*}
\begin{align*}
\text{volume} &= \int_0^R\int_0^{2\pi}\int_0^\pi r^2\sin\phi\,d\phi\,d\theta\,dr \\
&= 2\pi \int_0^R\int_0^\pi r^2\sin\phi\,d\phi\,dr \\
&= 2\pi \left[\frac{r^3}{3}\right]_0^R \left[-\cos\phi\right]_0^\pi \\
\text{volume} &= \frac{4}{3}\pi R^3
\end{align*}
5.6 Quiz
Problem 5.1
What is the distance from the origin to the plane $x + 2y + 3z = 4$?
Solution
Problem 5.2
A bead of mass $m$ slides frictionlessly on a wire determined parametrically by $\mathbf{w}(s)$. The bead moves under the force of gravity. What is the acceleration of the bead as a function of the parameter $s$?
Solution
5.7 Quiz Solutions
Solution 5.1
Recall that the equation of a plane is $\mathbf{x}\cdot\mathbf{n} = \mathbf{a}\cdot\mathbf{n}$ where $\mathbf{a}$ is a point in the plane and $\mathbf{n}$ is normal to the plane. We are considering the plane $x + 2y + 3z = 4$. A normal to the plane is $\langle1,2,3\rangle$. The unit normal is
\[
\mathbf{n} = \frac{1}{\sqrt{14}}\langle1,2,3\rangle.
\]
By substituting in $x = y = 0$, we see that a point in the plane is $\mathbf{a} = \langle0,0,4/3\rangle$. The distance of the plane from the origin is $\mathbf{a}\cdot\mathbf{n} = \frac{4}{\sqrt{14}}$.
Solution 5.2
The force of gravity is $-mg\mathbf{k}$. The unit tangent to the wire is $\mathbf{w}'(s)/|\mathbf{w}'(s)|$. The component of the gravitational force in the tangential direction is $-mg\,\mathbf{k}\cdot\mathbf{w}'(s)/|\mathbf{w}'(s)|$. Thus the acceleration of the bead is
\[
-\frac{g\,\mathbf{k}\cdot\mathbf{w}'(s)}{|\mathbf{w}'(s)|}.
\]

Part III
Functions of a Complex Variable
Chapter 6
Complex Numbers
I’m sorry. You have reached an imaginary number. Please rotate your phone 90 degrees and dial
again.
-Message on answering machine of Cathy Vargas.
6.1 Complex Numbers
Shortcomings of real numbers. When you started algebra, you learned that the quadratic equation $x^2 + 2ax + b = 0$ has either two, one or no solutions. For example:
• $x^2 - 3x + 2 = 0$ has the two solutions $x = 1$ and $x = 2$.
• For $x^2 - 2x + 1 = 0$, $x = 1$ is a solution of multiplicity two.
• $x^2 + 1 = 0$ has no solutions.
This is a little unsatisfactory. We can formally solve the general quadratic equation.
\begin{align*}
x^2 + 2ax + b &= 0 \\
(x+a)^2 &= a^2 - b \\
x &= -a \pm \sqrt{a^2-b}
\end{align*}
However, the solutions are defined only when the discriminant $a^2 - b$ is non-negative. This is because the square root function $\sqrt{x}$ is a bijection from $\mathbb{R}^{0+}$ to $\mathbb{R}^{0+}$. (See Figure 6.1.)
Figure 6.1: $y = \sqrt{x}$
A new mathematical constant. We cannot solve $x^2 = -1$ because the square root of $-1$ is not defined. To overcome this apparent shortcoming of the real number system, we create a new symbolic constant $\sqrt{-1}$. In performing arithmetic, we will treat $\sqrt{-1}$ as we would a real constant like $\pi$ or a formal variable like $x$, i.e. $\sqrt{-1} + \sqrt{-1} = 2\sqrt{-1}$. This constant has the property $\left(\sqrt{-1}\right)^2 = -1$. Now we can express the solutions of $x^2 = -1$ as $x = \sqrt{-1}$ and $x = -\sqrt{-1}$. These satisfy the equation since $\left(\sqrt{-1}\right)^2 = -1$ and $\left(-\sqrt{-1}\right)^2 = (-1)^2\left(\sqrt{-1}\right)^2 = -1$. Note that we can express the square root of any negative real number in terms of $\sqrt{-1}$: $\sqrt{-r} = \sqrt{-1}\sqrt{r}$ for $r \ge 0$.
Euler's notation. Euler introduced the notation of using the letter $i$ to denote $\sqrt{-1}$. We will use the symbol $\imath$, an $i$ without a dot, to denote $\sqrt{-1}$. This helps us distinguish it from $i$ used as a variable or index.¹ We call any number of the form $\imath b$, $b \in \mathbb{R}$, a pure imaginary number.² Let $a$ and $b$ be real numbers. The product of a real number and an imaginary number is an imaginary number: $(a)(\imath b) = \imath(ab)$. The product of two imaginary numbers is a real number: $(\imath a)(\imath b) = -ab$. However the sum of a real number and an imaginary number $a + \imath b$ is neither real nor imaginary. We call numbers of the form $a + \imath b$ complex numbers.³
The quadratic. Now we return to the quadratic with real coefficients, $x^2 + 2ax + b = 0$. It has the solutions $x = -a \pm \sqrt{a^2-b}$. The solutions are real-valued only if $a^2 - b \ge 0$. If not, then we can define solutions as complex numbers. If the discriminant is negative, we write $x = -a \pm \imath\sqrt{b-a^2}$. Thus every quadratic polynomial with real coefficients has exactly two solutions, counting multiplicities. The fundamental theorem of algebra states that an $n$th degree polynomial with complex coefficients has $n$, not necessarily distinct, complex roots. We will prove this result later using the theory of functions of a complex variable.
Component operations. Consider the complex number $z = x + \imath y$, ($x, y \in \mathbb{R}$). The real part of $z$ is $\Re(z) = x$; the imaginary part of $z$ is $\Im(z) = y$. Two complex numbers, $z = x + \imath y$ and $\zeta = \xi + \imath\psi$, are equal if and only if $x = \xi$ and $y = \psi$. The complex conjugate⁴ of $z = x + \imath y$ is $\bar{z} \equiv x - \imath y$. The notation $z^* \equiv x - \imath y$ is also used.
A little arithmetic. Consider two complex numbers: $z = x + \imath y$, $\zeta = \xi + \imath\psi$. It is easy to express the sum or difference as a complex number.
\[
z + \zeta = (x+\xi) + \imath(y+\psi), \qquad z - \zeta = (x-\xi) + \imath(y-\psi)
\]
It is also easy to form the product.
\[
z\zeta = (x+\imath y)(\xi+\imath\psi) = x\xi + \imath x\psi + \imath y\xi + \imath^2 y\psi = (x\xi - y\psi) + \imath(x\psi + y\xi)
\]
The quotient is a bit more difficult. (Assume that $\zeta$ is nonzero.) How do we express $z/\zeta = (x+\imath y)/(\xi+\imath\psi)$ as the sum of a real number and an imaginary number? The trick is to multiply the numerator and denominator by the complex conjugate of $\zeta$.
\[
\frac{z}{\zeta} = \frac{x+\imath y}{\xi+\imath\psi}
= \frac{x+\imath y}{\xi+\imath\psi}\,\frac{\xi-\imath\psi}{\xi-\imath\psi}
= \frac{x\xi - \imath x\psi + \imath y\xi - \imath^2 y\psi}{\xi^2 - \imath\xi\psi + \imath\psi\xi - \imath^2\psi^2}
= \frac{(x\xi+y\psi) + \imath(y\xi-x\psi)}{\xi^2+\psi^2}
= \frac{x\xi+y\psi}{\xi^2+\psi^2} + \imath\,\frac{y\xi-x\psi}{\xi^2+\psi^2}
\]
Now we recognize it as a complex number.
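Most languages have a built-in complex type that performs exactly this computation. A quick check of the quotient formula in Python (our own sketch):

```python
z, zeta = complex(1, 2), complex(3, -1)   # z = 1 + i2, zeta = 3 - i

# Built-in division...
print(z / zeta)   # (0.1+0.7j)

# ...agrees with the conjugate-multiplication formula derived above.
x, y, xi, psi = z.real, z.imag, zeta.real, zeta.imag
d = xi**2 + psi**2
print(complex((x*xi + y*psi) / d, (y*xi - x*psi) / d))   # (0.1+0.7j)
```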
¹Electrical engineering types prefer to use $\jmath$ or $j$ to denote $\sqrt{-1}$.
²"Imaginary" is an unfortunate term. Real numbers are artificial; constructs of the mind. Real numbers are no more real than imaginary numbers.
³Here complex means "composed of two or more parts", not "hard to separate, analyze, or solve". Those who disagree have a complex number complex.
⁴Conjugate: having features in common but opposite or inverse in some particular.
Field properties. The set of complex numbers $\mathbb{C}$ forms a field. That essentially means that we can do arithmetic with complex numbers. When performing arithmetic, we simply treat $\imath$ as a symbolic constant with the property that $\imath^2 = -1$. The field of complex numbers satisfies the following list of properties. Each one is easy to verify; some are proved below. (Let $z, \zeta, \omega \in \mathbb{C}$.)
1. Closure under addition and multiplication.
\begin{align*}
z + \zeta &= (x+\imath y) + (\xi+\imath\psi) = (x+\xi) + \imath(y+\psi) \in \mathbb{C} \\
z\zeta &= (x+\imath y)(\xi+\imath\psi) = x\xi + \imath x\psi + \imath y\xi + \imath^2 y\psi = (x\xi - y\psi) + \imath(x\psi + \xi y) \in \mathbb{C}
\end{align*}
2. Commutativity of addition and multiplication. $z + \zeta = \zeta + z$. $z\zeta = \zeta z$.
3. Associativity of addition and multiplication. $(z+\zeta)+\omega = z+(\zeta+\omega)$. $(z\zeta)\omega = z(\zeta\omega)$.
4. Distributive law. $z(\zeta+\omega) = z\zeta + z\omega$.
5. Identity with respect to addition and multiplication. Zero is the additive identity element, $z + 0 = z$; unity is the multiplicative identity element, $z(1) = z$.
6. Inverse with respect to addition. $z + (-z) = (x+\imath y) + (-x-\imath y) = (x-x) + \imath(y-y) = 0$.
7. Inverse with respect to multiplication for nonzero numbers. $zz^{-1} = 1$, where
\[
z^{-1} = \frac{1}{z} = \frac{1}{x+\imath y} = \frac{1}{x+\imath y}\,\frac{x-\imath y}{x-\imath y} = \frac{x-\imath y}{x^2+y^2} = \frac{x}{x^2+y^2} - \imath\,\frac{y}{x^2+y^2}
\]
Properties of the complex conjugate. Using the field properties of complex numbers, we can derive the following properties of the complex conjugate, $\bar{z} = x - \imath y$.
1. $\overline{\bar{z}} = z$,
2. $\overline{z+\zeta} = \bar{z} + \bar{\zeta}$,
3. $\overline{z\zeta} = \bar{z}\,\bar{\zeta}$,
4. $\overline{\left(\dfrac{z}{\zeta}\right)} = \dfrac{\bar{z}}{\bar{\zeta}}$.
6.2 The Complex Plane
Complex plane. We can denote a complex number $z = x + \imath y$ as an ordered pair of real numbers $(x, y)$. Thus we can represent a complex number as a point in $\mathbb{R}^2$ where the first component is the real part and the second component is the imaginary part of $z$. This is called the complex plane or the Argand diagram. (See Figure 6.2.) A complex number written as $z = x + \imath y$ is said to be in Cartesian form, or $a + \imath b$ form.
Recall that there are two ways of describing a point in the complex plane: an ordered pair of coordinates $(x, y)$ that give the horizontal and vertical offset from the origin, or the distance $r$ from the origin and the angle $\theta$ from the positive horizontal axis. The angle $\theta$ is not unique. It is only determined up to an additive integer multiple of $2\pi$.

Figure 6.2: The complex plane.
Modulus. The magnitude or modulus of a complex number is the distance of the point from the origin. It is defined as $|z| = |x+\imath y| = \sqrt{x^2+y^2}$. Note that $z\bar{z} = (x+\imath y)(x-\imath y) = x^2 + y^2 = |z|^2$. The modulus has the following properties.
1. $|z\zeta| = |z|\,|\zeta|$
2. $\left|\dfrac{z}{\zeta}\right| = \dfrac{|z|}{|\zeta|}$ for $\zeta \ne 0$.
3. $|z+\zeta| \le |z| + |\zeta|$
4. $|z+\zeta| \ge \bigl||z| - |\zeta|\bigr|$
We could prove the first two properties by expanding in $x + \imath y$ form, but it would be fairly messy. The proofs will become simple after polar form has been introduced. The second two properties follow from the triangle inequalities in geometry. This will become apparent after the relationship between complex numbers and vectors is introduced. One can show that
\[
|z_1z_2\cdots z_n| = |z_1|\,|z_2|\cdots|z_n|
\]
and
\[
|z_1+z_2+\cdots+z_n| \le |z_1| + |z_2| + \cdots + |z_n|
\]
with proof by induction.
Argument. The argument of a complex number is the angle that the vector with tail at the origin and head at $z = x+\imath y$ makes with the positive $x$-axis. The argument is denoted $\arg(z)$. Note that the argument is defined for all nonzero numbers and is only determined up to an additive integer multiple of $2\pi$. That is, the argument of a complex number is the set of values: $\{\theta + 2\pi n \mid n \in \mathbb{Z}\}$. The principal argument of a complex number is that angle in the set $\arg(z)$ which lies in the range $(-\pi, \pi]$. The principal argument is denoted $\operatorname{Arg}(z)$. We prove the following identities in Exercise 6.10.
\begin{align*}
\arg(z\zeta) &= \arg(z) + \arg(\zeta) \\
\operatorname{Arg}(z\zeta) &\ne \operatorname{Arg}(z) + \operatorname{Arg}(\zeta) \\
\arg\left(z^2\right) &= \arg(z) + \arg(z) \ne 2\arg(z)
\end{align*}
Example 6.2.1 Consider the equation $|z-1-\imath| = 2$. The set of points satisfying this equation is a circle of radius 2 and center at $1 + \imath$ in the complex plane. You can see this by noting that $|z-1-\imath|$ is the distance from the point $(1,1)$. (See Figure 6.3.)
Another way to derive this is to substitute $z = x + \imath y$ into the equation.
\begin{align*}
|x+\imath y-1-\imath| &= 2 \\
\sqrt{(x-1)^2+(y-1)^2} &= 2 \\
(x-1)^2 + (y-1)^2 &= 4
\end{align*}
This is the analytic geometry equation for a circle of radius 2 centered about $(1,1)$.

Figure 6.3: Solution of $|z-1-\imath| = 2$.
Example 6.2.2 Consider the curve described by
\[
|z| + |z-2| = 4.
\]
Note that $|z|$ is the distance from the origin in the complex plane and $|z-2|$ is the distance from $z = 2$. The equation is
\[
(\text{distance from }(0,0)) + (\text{distance from }(2,0)) = 4.
\]
From geometry, we know that this is an ellipse with foci at $(0,0)$ and $(2,0)$, semi-major axis 2, and semi-minor axis $\sqrt{3}$. (See Figure 6.4.)

Figure 6.4: Solution of $|z| + |z-2| = 4$.

We can use the substitution $z = x + \imath y$ to get the equation in algebraic form.
\begin{align*}
|z| + |z-2| &= 4 \\
|x+\imath y| + |x+\imath y-2| &= 4 \\
\sqrt{x^2+y^2} + \sqrt{(x-2)^2+y^2} &= 4 \\
x^2+y^2 &= 16 - 8\sqrt{(x-2)^2+y^2} + x^2 - 4x + 4 + y^2 \\
x - 5 &= -2\sqrt{(x-2)^2+y^2} \\
x^2 - 10x + 25 &= 4x^2 - 16x + 16 + 4y^2 \\
\frac{1}{4}(x-1)^2 + \frac{1}{3}y^2 &= 1
\end{align*}
Thus we have the standard form for an equation describing an ellipse.
6.3 Polar Form
Polar form. A complex number written in Cartesian form, $z = x + \imath y$, can be converted to polar form, $z = r(\cos\theta + \imath\sin\theta)$, using trigonometry. Here $r = |z|$ is the modulus and $\theta = \arctan(x, y)$ is the argument of $z$. The argument is the angle between the $x$ axis and the vector with its head at $(x, y)$. (See Figure 6.5.) Note that $\theta$ is not unique. If $z = r(\cos\theta + \imath\sin\theta)$ then $z = r(\cos(\theta+2n\pi) + \imath\sin(\theta+2n\pi))$ for any $n \in \mathbb{Z}$.

Figure 6.5: Polar form.
The arctangent. Note that $\arctan(x, y)$ is not the same thing as the old arctangent that you learned about in trigonometry; $\arctan(x, y)$ is sensitive to the quadrant of the point $(x, y)$, while $\arctan\frac{y}{x}$ is not. For example,
\[
\arctan(1,1) = \frac{\pi}{4} + 2n\pi \quad\text{and}\quad \arctan(-1,-1) = \frac{-3\pi}{4} + 2n\pi,
\]
whereas
\[
\arctan\frac{-1}{-1} = \arctan\frac{1}{1} = \arctan(1).
\]
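This quadrant-sensitive arctangent is available in most languages as `atan2`; note that its argument order is $(y, x)$, opposite to the $\arctan(x, y)$ notation used in this text. A brief sketch (ours):

```python
import math

# atan2 distinguishes quadrants; the arctangent of the ratio cannot.
print(math.atan2(1, 1))     #  pi/4   -- the point (1, 1)
print(math.atan2(-1, -1))   # -3pi/4  -- the point (-1, -1)
print(math.atan(-1 / -1))   #  pi/4   -- the ratio loses the quadrant
```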
Euler's formula. Euler's formula, $e^{\imath\theta} = \cos\theta + \imath\sin\theta$,⁵ allows us to write the polar form more compactly. Expressing the polar form in terms of the exponential function of imaginary argument makes arithmetic with complex numbers much more convenient.
\[
z = r(\cos\theta + \imath\sin\theta) = r\,e^{\imath\theta}
\]
The exponential of an imaginary argument has all the nice properties that we know from studying functions of a real variable, like $e^{\imath a}\,e^{\imath b} = e^{\imath(a+b)}$. Later on we will introduce the exponential of a complex number.
Using Euler's Formula, we can express the cosine and sine in terms of the exponential.
\begin{align*}
\frac{e^{\imath\theta}+e^{-\imath\theta}}{2} &= \frac{(\cos\theta+\imath\sin\theta) + (\cos(-\theta)+\imath\sin(-\theta))}{2} = \cos\theta \\
\frac{e^{\imath\theta}-e^{-\imath\theta}}{\imath2} &= \frac{(\cos\theta+\imath\sin\theta) - (\cos(-\theta)+\imath\sin(-\theta))}{\imath2} = \sin\theta
\end{align*}
Arithmetic with complex numbers. Note that it is convenient to add complex numbers in Cartesian form.
\[
z + \zeta = (x+\imath y) + (\xi+\imath\psi) = (x+\xi) + \imath(y+\psi)
\]
However, it is difficult to multiply or divide them in Cartesian form.
\[
z\zeta = (x+\imath y)(\xi+\imath\psi) = (x\xi - y\psi) + \imath(x\psi + \xi y)
\]
\[
\frac{z}{\zeta} = \frac{x+\imath y}{\xi+\imath\psi} = \frac{(x+\imath y)(\xi-\imath\psi)}{(\xi+\imath\psi)(\xi-\imath\psi)} = \frac{x\xi+y\psi}{\xi^2+\psi^2} + \imath\,\frac{\xi y - x\psi}{\xi^2+\psi^2}
\]
⁵See Exercise 6.17 for justification of Euler's formula.
On the other hand, it is difficult to add complex numbers in polar form.
\begin{align*}
z + \zeta &= r\,e^{\imath\theta} + \rho\,e^{\imath\phi} \\
&= r(\cos\theta+\imath\sin\theta) + \rho(\cos\phi+\imath\sin\phi) \\
&= r\cos\theta + \rho\cos\phi + \imath(r\sin\theta + \rho\sin\phi) \\
&= \sqrt{(r\cos\theta+\rho\cos\phi)^2 + (r\sin\theta+\rho\sin\phi)^2}\;e^{\imath\arctan(r\cos\theta+\rho\cos\phi,\;r\sin\theta+\rho\sin\phi)} \\
&= \sqrt{r^2+\rho^2+2r\rho\cos(\theta-\phi)}\;e^{\imath\arctan(r\cos\theta+\rho\cos\phi,\;r\sin\theta+\rho\sin\phi)}
\end{align*}
However, it is convenient to multiply and divide them in polar form.
\[
z\zeta = r\,e^{\imath\theta}\,\rho\,e^{\imath\phi} = r\rho\,e^{\imath(\theta+\phi)}
\]
\[
\frac{z}{\zeta} = \frac{r\,e^{\imath\theta}}{\rho\,e^{\imath\phi}} = \frac{r}{\rho}\,e^{\imath(\theta-\phi)}
\]
Keeping this in mind will make working with complex numbers a shade or two less grungy.
Result 6.3.1 Euler's formula is
\[
e^{\imath\theta} = \cos\theta + \imath\sin\theta.
\]
We can write the cosine and sine in terms of the exponential.
\[
\cos\theta = \frac{e^{\imath\theta}+e^{-\imath\theta}}{2}, \qquad \sin\theta = \frac{e^{\imath\theta}-e^{-\imath\theta}}{\imath2}
\]
To change between Cartesian and polar form, use the identities
\[
r\,e^{\imath\theta} = r\cos\theta + \imath r\sin\theta, \qquad
x + \imath y = \sqrt{x^2+y^2}\;e^{\imath\arctan(x,y)}.
\]
Cartesian form is convenient for addition. Polar form is convenient for multiplication and division.
Example 6.3.1 We write $5 + \imath7$ in polar form.
\[
5 + \imath7 = \sqrt{74}\,e^{\imath\arctan(5,7)}
\]
We write $2\,e^{\imath\pi/6}$ in Cartesian form.
\[
2\,e^{\imath\pi/6} = 2\cos\frac{\pi}{6} + 2\imath\sin\frac{\pi}{6} = \sqrt{3} + \imath
\]
Example 6.3.2 We will prove the trigonometric identity
\[
\cos^4\theta = \frac{1}{8}\cos(4\theta) + \frac{1}{2}\cos(2\theta) + \frac{3}{8}.
\]
We start by writing the cosine in terms of the exponential.
\begin{align*}
\cos^4\theta &= \left(\frac{e^{\imath\theta}+e^{-\imath\theta}}{2}\right)^4 \\
&= \frac{1}{16}\left(e^{\imath4\theta} + 4\,e^{\imath2\theta} + 6 + 4\,e^{-\imath2\theta} + e^{-\imath4\theta}\right) \\
&= \frac{1}{8}\,\frac{e^{\imath4\theta}+e^{-\imath4\theta}}{2} + \frac{1}{2}\,\frac{e^{\imath2\theta}+e^{-\imath2\theta}}{2} + \frac{3}{8} \\
&= \frac{1}{8}\cos(4\theta) + \frac{1}{2}\cos(2\theta) + \frac{3}{8}
\end{align*}
By the definition of exponentiation, we have $e^{\imath n\theta} = \left(e^{\imath\theta}\right)^n$. We apply Euler's formula to obtain a result which is useful in deriving trigonometric identities.
\[
\cos(n\theta) + \imath\sin(n\theta) = (\cos\theta + \imath\sin\theta)^n
\]
Result 6.3.2 DeMoivre's Theorem.ᵃ
\[
\cos(n\theta) + \imath\sin(n\theta) = (\cos\theta + \imath\sin\theta)^n
\]
ᵃIt's amazing what passes for a theorem these days. I would think that this would be a corollary at most.
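Identities like the one in Example 6.3.2, and DeMoivre's theorem itself, are easy to spot-check numerically; a sketch of ours:

```python
import math

theta = 0.937  # an arbitrary angle
lhs = math.cos(theta)**4
rhs = math.cos(4*theta)/8 + math.cos(2*theta)/2 + 3/8
print(abs(lhs - rhs))   # ~1e-16: the identity holds to rounding error

# DeMoivre: (cos t + i sin t)^n = cos(nt) + i sin(nt)
n = 5
print(complex(math.cos(theta), math.sin(theta))**n,
      complex(math.cos(n*theta), math.sin(n*theta)))   # the two agree
```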
Example 6.3.3 We will express $\cos(5\theta)$ in terms of $\cos\theta$ and $\sin(5\theta)$ in terms of $\sin\theta$. We start with DeMoivre's theorem.
\[
e^{\imath5\theta} = \left(e^{\imath\theta}\right)^5
\]
\begin{align*}
\cos(5\theta) + \imath\sin(5\theta) &= (\cos\theta + \imath\sin\theta)^5 \\
&= \binom{5}{0}\cos^5\theta + \imath\binom{5}{1}\cos^4\theta\sin\theta - \binom{5}{2}\cos^3\theta\sin^2\theta \\
&\qquad - \imath\binom{5}{3}\cos^2\theta\sin^3\theta + \binom{5}{4}\cos\theta\sin^4\theta + \imath\binom{5}{5}\sin^5\theta \\
&= \left(\cos^5\theta - 10\cos^3\theta\sin^2\theta + 5\cos\theta\sin^4\theta\right) + \imath\left(5\cos^4\theta\sin\theta - 10\cos^2\theta\sin^3\theta + \sin^5\theta\right)
\end{align*}
Then we equate the real and imaginary parts.
\[
\cos(5\theta) = \cos^5\theta - 10\cos^3\theta\sin^2\theta + 5\cos\theta\sin^4\theta
\]
\[
\sin(5\theta) = 5\cos^4\theta\sin\theta - 10\cos^2\theta\sin^3\theta + \sin^5\theta
\]
Finally we use the Pythagorean identity, $\cos^2\theta + \sin^2\theta = 1$.
\[
\cos(5\theta) = \cos^5\theta - 10\cos^3\theta\left(1-\cos^2\theta\right) + 5\cos\theta\left(1-\cos^2\theta\right)^2
\]
\[
\cos(5\theta) = 16\cos^5\theta - 20\cos^3\theta + 5\cos\theta
\]
\[
\sin(5\theta) = 5\left(1-\sin^2\theta\right)^2\sin\theta - 10\left(1-\sin^2\theta\right)\sin^3\theta + \sin^5\theta
\]
\[
\sin(5\theta) = 16\sin^5\theta - 20\sin^3\theta + 5\sin\theta
\]
6.4 Arithmetic and Vectors
Addition. We can represent the complex number $z = x + \imath y = r\,e^{\imath\theta}$ as a vector in Cartesian space with tail at the origin and head at $(x, y)$, or equivalently, the vector of length $r$ and angle $\theta$. With the vector representation, we can add complex numbers by connecting the tail of one vector to the head of the other. The vector $z + \zeta$ is the diagonal of the parallelogram defined by $z$ and $\zeta$. (See Figure 6.6.)
Negation. The negative of $z = x + \imath y$ is $-z = -x - \imath y$. In polar form we have $z = r\,e^{\imath\theta}$ and $-z = r\,e^{\imath(\theta+\pi)}$ (more generally, $-z = r\,e^{\imath(\theta+(2n+1)\pi)}$, $n \in \mathbb{Z}$). In terms of vectors, $-z$ has the same magnitude but opposite direction as $z$. (See Figure 6.6.)
Multiplication. The product of $z = r\,e^{\imath\theta}$ and $\zeta = \rho\,e^{\imath\phi}$ is $z\zeta = r\rho\,e^{\imath(\theta+\phi)}$. The length of the vector $z\zeta$ is the product of the lengths of $z$ and $\zeta$. The angle of $z\zeta$ is the sum of the angles of $z$ and $\zeta$. (See Figure 6.6.)
Note that $\arg(z\zeta) = \arg(z) + \arg(\zeta)$. Each of these arguments has an infinite number of values. If we write out the multi-valuedness explicitly, we have
\[
\{\theta+\phi+2\pi n : n \in \mathbb{Z}\} = \{\theta+2\pi n : n \in \mathbb{Z}\} + \{\phi+2\pi n : n \in \mathbb{Z}\}
\]
The same is not true of the principal argument. In general, $\operatorname{Arg}(z\zeta) \ne \operatorname{Arg}(z) + \operatorname{Arg}(\zeta)$. Consider the case $z = \zeta = e^{\imath3\pi/4}$. Then $\operatorname{Arg}(z) = \operatorname{Arg}(\zeta) = 3\pi/4$, however, $\operatorname{Arg}(z\zeta) = -\pi/2$.
Figure 6.6: Addition, negation and multiplication.
Multiplicative inverse. Assume that $z$ is nonzero. The multiplicative inverse of $z = r\,e^{\imath\theta}$ is $\frac{1}{z} = \frac{1}{r}\,e^{-\imath\theta}$. The length of $\frac{1}{z}$ is the multiplicative inverse of the length of $z$. The angle of $\frac{1}{z}$ is the negative of the angle of $z$. (See Figure 6.7.)
Division. Assume that $\zeta$ is nonzero. The quotient of $z = r\,e^{\imath\theta}$ and $\zeta = \rho\,e^{\imath\phi}$ is $\frac{z}{\zeta} = \frac{r}{\rho}\,e^{\imath(\theta-\phi)}$. The length of the vector $\frac{z}{\zeta}$ is the quotient of the lengths of $z$ and $\zeta$. The angle of $\frac{z}{\zeta}$ is the difference of the angles of $z$ and $\zeta$. (See Figure 6.7.)
Complex conjugate. The complex conjugate of $z = x + \imath y = r\,e^{\imath\theta}$ is $\bar{z} = x - \imath y = r\,e^{-\imath\theta}$. $\bar{z}$ is the mirror image of $z$, reflected across the $x$ axis. In other words, $\bar{z}$ has the same magnitude as $z$ and the angle of $\bar{z}$ is the negative of the angle of $z$. (See Figure 6.7.)
6.5 Integer Exponents
Consider the product $(a+\imath b)^n$, $n \in \mathbb{Z}$. If we know $\arctan(a, b)$ then it will be most convenient to expand the product working in polar form. If not, we can write $n$ in base 2 to efficiently do the multiplications.

Figure 6.7: Multiplicative inverse, division and complex conjugate.

Example 6.5.1 Suppose that we want to write $\left(\sqrt{3}+\imath\right)^{20}$ in Cartesian form.⁶
⁶No, I have no idea why we would want to do that. Just humor me. If you pretend that you're interested, I'll do the same. Believe me, expressing your real feelings here isn't going to do anyone any good.
We can do the multiplication directly. Note that 20 is 10100 in base 2. That is, $20 = 2^4 + 2^2$. We first calculate
the powers of the form $\left(\sqrt{3}+\imath\right)^{2^n}$ by successive squaring.
\begin{align*}
\left(\sqrt{3}+\imath\right)^2 &= 2 + \imath2\sqrt{3} \\
\left(\sqrt{3}+\imath\right)^4 &= -8 + \imath8\sqrt{3} \\
\left(\sqrt{3}+\imath\right)^8 &= -128 - \imath128\sqrt{3} \\
\left(\sqrt{3}+\imath\right)^{16} &= -32768 + \imath32768\sqrt{3}
\end{align*}
Next we multiply $\left(\sqrt{3}+\imath\right)^4$ and $\left(\sqrt{3}+\imath\right)^{16}$ to obtain the answer.
\[
\left(\sqrt{3}+\imath\right)^{20} = \left(-32768+\imath32768\sqrt{3}\right)\left(-8+\imath8\sqrt{3}\right) = -524288 - \imath524288\sqrt{3}
\]
Since we know that $\arctan\left(\sqrt{3},1\right) = \pi/6$, it is easiest to do this problem by first changing to modulus-argument form.
\begin{align*}
\left(\sqrt{3}+\imath\right)^{20} &= \left(\sqrt{\left(\sqrt{3}\right)^2+1^2}\;e^{\imath\arctan(\sqrt{3},1)}\right)^{20} \\
&= \left(2\,e^{\imath\pi/6}\right)^{20} \\
&= 2^{20}\,e^{\imath4\pi/3} \\
&= 1048576\left(-\frac{1}{2}-\imath\frac{\sqrt{3}}{2}\right) \\
&= -524288 - \imath524288\sqrt{3}
\end{align*}
Example 6.5.2 Consider $(5+\imath7)^{11}$. We will do the exponentiation in polar form and write the result in Cartesian form.
\begin{align*}
(5+\imath7)^{11} &= \left(\sqrt{74}\,e^{\imath\arctan(5,7)}\right)^{11} \\
&= 74^5\sqrt{74}\,(\cos(11\arctan(5,7)) + \imath\sin(11\arctan(5,7))) \\
&= 2219006624\sqrt{74}\cos(11\arctan(5,7)) + \imath2219006624\sqrt{74}\sin(11\arctan(5,7))
\end{align*}
The result is correct, but not very satisfying. This expression could be simplified. You could evaluate the trigonometric functions with some fairly messy trigonometric identities. This would take much more work than directly multiplying $(5+\imath7)^{11}$.
6.6 Rational Exponents
In this section we consider complex numbers with rational exponents, $z^{p/q}$, where $p/q$ is a rational number. First we consider unity raised to the $1/n$ power. We define $1^{1/n}$ as the set of numbers $\{z\}$ such that $z^n = 1$.
\[
1^{1/n} = \{z \mid z^n = 1\}
\]
We can find these values by writing $z$ in modulus-argument form.
\begin{align*}
z^n &= 1 \\
r^n\,e^{\imath n\theta} &= 1 \\
r^n = 1 &\qquad n\theta = 0 \bmod 2\pi \\
r = 1 &\qquad \theta = 2\pi k/n \text{ for } k \in \mathbb{Z}
\end{align*}
\[
1^{1/n} = \left\{e^{\imath2\pi k/n} \mid k \in \mathbb{Z}\right\}
\]
There are only $n$ distinct values as a result of the $2\pi$ periodicity of $e^{\imath\theta}$: $e^{\imath2\pi} = e^{\imath0}$.
\[
1^{1/n} = \left\{e^{\imath2\pi k/n} \mid k = 0,\ldots,n-1\right\}
\]
These values are equally spaced points on the unit circle in the complex plane.
These values are equally spaced points on the unit circle in the complex plane.
Example 6.6.1 11/6
has the 6 values,
eı0
, eıπ/3
, eı2π/3
, eıπ
, eı4π/3
, eı5π/3
.
In Cartesian form this is
1,
1 + ı
√
3
2
,
−1 + ı
√
3
2
, −1,
−1 − ı
√
3
2
,
1 − ı
√
3
2
.
The sixth roots of unity are plotted in Figure 6.8.
-1 1
-1
1
Figure 6.8: The sixth roots of unity.
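The $n$ roots of unity follow directly from the formula $e^{\imath2\pi k/n}$; a small sketch of ours:

```python
import cmath

n = 6
roots = [cmath.exp(2j * cmath.pi * k / n) for k in range(n)]
for r in roots:
    # Each root lies on the unit circle and satisfies r**6 = 1 up to rounding.
    print(round(r.real, 6), round(r.imag, 6), round(abs(r**n - 1), 12))
```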
The $n$th roots of the complex number $c = \alpha\,e^{\imath\beta}$ are the set of numbers $z = r\,e^{\imath\theta}$ such that
\begin{align*}
z^n = c &= \alpha\,e^{\imath\beta} \\
r^n\,e^{\imath n\theta} &= \alpha\,e^{\imath\beta} \\
r = \sqrt[n]{\alpha} &\qquad n\theta = \beta \bmod 2\pi \\
r = \sqrt[n]{\alpha} &\qquad \theta = (\beta+2\pi k)/n \text{ for } k = 0,\ldots,n-1.
\end{align*}
Thus
\[
c^{1/n} = \left\{\sqrt[n]{\alpha}\,e^{\imath(\beta+2\pi k)/n} \mid k=0,\ldots,n-1\right\} = \left\{\sqrt[n]{|c|}\,e^{\imath(\operatorname{Arg}(c)+2\pi k)/n} \mid k=0,\ldots,n-1\right\}
\]
Principal roots. The principal $n$th root is denoted
\[
\sqrt[n]{z} \equiv \sqrt[n]{|z|}\,e^{\imath\operatorname{Arg}(z)/n}.
\]
Thus the principal root has the property
\[
-\pi/n < \operatorname{Arg}\left(\sqrt[n]{z}\right) \le \pi/n.
\]
This is consistent with the notation from functions of a real variable: $\sqrt[n]{x}$ denotes the positive $n$th root of a positive real number. We adopt the convention that $z^{1/n}$ denotes the $n$th roots of $z$, which is a set of $n$ numbers, and $\sqrt[n]{z}$ is the principal $n$th root of $z$, which is a single number. The $n$th roots of $z$ are the principal $n$th root of $z$ times the $n$th roots of unity.
\[
z^{1/n} = \left\{\sqrt[n]{r}\,e^{\imath(\operatorname{Arg}(z)+2\pi k)/n} \mid k=0,\ldots,n-1\right\}
\]
\[
z^{1/n} = \left\{\sqrt[n]{z}\,e^{\imath2\pi k/n} \mid k=0,\ldots,n-1\right\}
\]
\[
z^{1/n} = \sqrt[n]{z}\;1^{1/n}
\]
Rational exponents. We interpret $z^{p/q}$ to mean $z^{(p/q)}$. That is, we first simplify the exponent, i.e. reduce the fraction, before carrying out the exponentiation. Therefore $z^{2/4} = z^{1/2}$ and $z^{10/5} = z^2$. If $p/q$ is a reduced fraction ($p$ and $q$ are relatively prime, in other words, they have no common factors), then
\[
z^{p/q} \equiv \left(z^p\right)^{1/q}.
\]
Thus $z^{p/q}$ is a set of $q$ values. Note that for an un-reduced fraction $r/s$,
\[
\left(z^r\right)^{1/s} \ne \left(z^{1/s}\right)^r.
\]
The former expression is a set of $s$ values while the latter is a set of no more than $s$ values. For instance, $\left(1^2\right)^{1/2} = 1^{1/2} = \pm1$ and $\left(1^{1/2}\right)^2 = (\pm1)^2 = 1$.
Example 6.6.2 Consider $2^{1/5}$, $(1+\imath)^{1/3}$ and $(2+\imath)^{5/6}$.
\[
2^{1/5} = \sqrt[5]{2}\,e^{\imath2\pi k/5}, \quad\text{for } k = 0,1,2,3,4
\]
\[
(1+\imath)^{1/3} = \left(\sqrt{2}\,e^{\imath\pi/4}\right)^{1/3} = \sqrt[6]{2}\,e^{\imath\pi/12}\,e^{\imath2\pi k/3}, \quad\text{for } k = 0,1,2
\]
\begin{align*}
(2+\imath)^{5/6} &= \left(\sqrt{5}\,e^{\imath\operatorname{Arctan}(2,1)}\right)^{5/6} \\
&= \left(\sqrt{5^5}\,e^{\imath5\operatorname{Arctan}(2,1)}\right)^{1/6} \\
&= \sqrt[12]{5^5}\,e^{\imath\frac{5}{6}\operatorname{Arctan}(2,1)}\,e^{\imath\pi k/3}, \quad\text{for } k = 0,1,2,3,4,5
\end{align*}
Example 6.6.3 We find the roots of $z^5 + 4$.
\[
(-4)^{1/5} = \left(4\,e^{\imath\pi}\right)^{1/5} = \sqrt[5]{4}\,e^{\imath\pi(1+2k)/5}, \quad\text{for } k = 0,1,2,3,4
\]
6.7 Exercises
Complex Numbers
Exercise 6.1
If $z = x + \imath y$, write the following in the form $a + \imath b$:
1. $(1+\imath2)^7$
2. $\dfrac{1}{\bar{z}^2}$
3. $\dfrac{\imath z + \bar{z}}{(3+\imath)^9}$
Hint, Solution
Exercise 6.2
Verify that:
1. $\dfrac{1+\imath2}{3-\imath4} + \dfrac{2-\imath}{\imath5} = -\dfrac{2}{5}$
2. $(1-\imath)^4 = -4$
Hint, Solution
Exercise 6.3
Write the following complex numbers in the form $a + \imath b$.
1. $\left(1+\imath\sqrt{3}\right)^{-10}$
2. $(11+\imath4)^2$
Hint, Solution
Exercise 6.4
Write the following complex numbers in the form $a + \imath b$
1. $\left(\dfrac{2+\imath}{\imath6-(1-\imath2)}\right)^2$
2. $(1-\imath)^7$
Hint, Solution
Exercise 6.5
If $z = x + \imath y$, write the following in the form $u(x,y) + \imath v(x,y)$.
1. $\dfrac{z}{\bar{z}}$
2. $\dfrac{z+\imath2}{2-\imath\bar{z}}$
Hint, Solution
Exercise 6.6
Quaternions are sometimes used as a generalization of complex numbers. A quaternion $u$ may be defined as
\[
u = u_0 + \imath u_1 + \jmath u_2 + ku_3
\]
where $u_0$, $u_1$, $u_2$ and $u_3$ are real numbers and $\imath$, $\jmath$ and $k$ are objects which satisfy
\[
\imath^2 = \jmath^2 = k^2 = -1, \qquad \imath\jmath = k, \qquad \jmath\imath = -k
\]
and the usual associative and distributive laws. Show that for any quaternions $u$, $w$ there exists a quaternion $v$ such that
\[
uv = w
\]
except for the case $u_0 = u_1 = u_2 = u_3 = 0$.
Hint, Solution
Exercise 6.7
Let $\alpha \ne 0$, $\beta \ne 0$ be two complex numbers. Show that $\alpha = t\beta$ for some real number $t$ (i.e. the vectors defined by $\alpha$ and $\beta$ are parallel) if and only if $\Im\left(\alpha\bar{\beta}\right) = 0$.
Hint, Solution
The Complex Plane
Exercise 6.8
Find and depict all values of
1. $(1+\imath)^{1/3}$
2. $\imath^{1/4}$
Identify the principal root.
Hint, Solution
Exercise 6.9
Sketch the regions of the complex plane:
1. $|\Re(z)| + 2|\Im(z)| \le 1$
2. $1 \le |z-\imath| \le 2$
3. $|z-\imath| \le |z+\imath|$
Hint, Solution
Exercise 6.10
Prove the following identities.
1. $\arg(z\zeta) = \arg(z) + \arg(\zeta)$
2. $\operatorname{Arg}(z\zeta) \ne \operatorname{Arg}(z) + \operatorname{Arg}(\zeta)$
3. $\arg\left(z^2\right) = \arg(z) + \arg(z) \ne 2\arg(z)$
Hint, Solution
Exercise 6.11
Show, both by geometric and algebraic arguments, that for complex numbers $z$ and $\zeta$ the inequalities
\[
\bigl||z| - |\zeta|\bigr| \le |z+\zeta| \le |z| + |\zeta|
\]
hold.
Hint, Solution
Exercise 6.12
Find all the values of
1. $(-1)^{-3/4}$
2. $8^{1/6}$
and show them graphically.
Hint, Solution
Exercise 6.13
Find all values of
1. $(-1)^{-1/4}$
2. $16^{1/8}$
and show them graphically.
Hint, Solution
Exercise 6.14
Sketch the regions or curves described by
1. $1 < |z-\imath2| < 2$
2. $|\Re(z)| + 5|\Im(z)| = 1$
3. $|z-\imath| = |z+\imath|$
Hint, Solution
Exercise 6.15
Sketch the regions or curves described by
1. $|z-1+\imath| \le 1$
2. $\Re(z) - \Im(z) = 5$
3. $|z-\imath| + |z+\imath| = 1$
Hint, Solution
Exercise 6.16
Solve the equation
\[
|e^{\imath\theta} - 1| = 2
\]
for $\theta$ ($0 \le \theta \le \pi$) and verify the solution geometrically.
Hint, Solution
Polar Form
Exercise 6.17
Show that Euler's formula, $e^{\imath\theta} = \cos\theta + \imath\sin\theta$, is formally consistent with the standard Taylor series expansions for the real functions $e^x$, $\cos x$ and $\sin x$. Consider the Taylor series of $e^x$ about $x = 0$ to be the definition of the exponential function for complex argument.
Hint, Solution
Exercise 6.18
Use de Moivre's formula to derive the trigonometric identity
\[
\cos(3\theta) = \cos^3\theta - 3\cos\theta\sin^2\theta.
\]
Hint, Solution
Exercise 6.19
Establish the formula
\[
1 + z + z^2 + \cdots + z^n = \frac{1-z^{n+1}}{1-z}, \qquad (z \ne 1),
\]
for the sum of a finite geometric series; then derive the formulas
1. $1 + \cos\theta + \cos(2\theta) + \cdots + \cos(n\theta) = \dfrac{1}{2} + \dfrac{\sin((n+1/2)\theta)}{2\sin(\theta/2)}$
2. $\sin\theta + \sin(2\theta) + \cdots + \sin(n\theta) = \dfrac{1}{2}\cot\dfrac{\theta}{2} - \dfrac{\cos((n+1/2)\theta)}{2\sin(\theta/2)}$
where $0 < \theta < 2\pi$.
Hint, Solution
Arithmetic and Vectors
Exercise 6.20
Prove $|z\zeta| = |z||\zeta|$ and $\left|\dfrac{z}{\zeta}\right| = \dfrac{|z|}{|\zeta|}$ using polar form.
Hint, Solution
Exercise 6.21
Prove that
\[
|z+\zeta|^2 + |z-\zeta|^2 = 2\left(|z|^2 + |\zeta|^2\right).
\]
Interpret this geometrically.
Hint, Solution
Integer Exponents
Exercise 6.22
Write $(1+\imath)^{10}$ in Cartesian form with the following two methods:
1. Just do the multiplication. If it takes you more than four multiplications, you suck.
2. Do the multiplication in polar form.
Hint, Solution
Rational Exponents
Exercise 6.23
Show that each of the numbers $z = -a + \left(a^2-b\right)^{1/2}$ satisfies the equation $z^2 + 2az + b = 0$.
Hint, Solution
6.8 Hints
Complex Numbers
Hint 6.1
Hint 6.2
Hint 6.3
Hint 6.4
Hint 6.5
Hint 6.6
Hint 6.7
The Complex Plane
Hint 6.8
Hint 6.9
Hint 6.10
Write the multivaluedness explicitly.
Hint 6.11
Consider a triangle with vertices at $0$, $z$ and $z+\zeta$.
Hint 6.12
Hint 6.13
Hint 6.14
Hint 6.15
Hint 6.16
Polar Form
Hint 6.17
Find the Taylor series of $e^{\imath\theta}$, $\cos\theta$ and $\sin\theta$. Note that $\imath^{2n} = (-1)^n$.
Hint 6.18
Hint 6.19
Arithmetic and Vectors
Hint 6.20
$|e^{\imath\theta}| = 1$.
Hint 6.21
Consider the parallelogram defined by $z$ and $\zeta$.
Integer Exponents
Hint 6.22
For the first part,
\[
(1+\imath)^{10} = \left(\left((1+\imath)^2\right)^2\right)^2(1+\imath)^2.
\]
Rational Exponents
Hint 6.23
Substitute the numbers into the equation.
6.9 Solutions
Complex Numbers
Solution 6.1
1. We can do the exponentiation by directly multiplying.
\begin{align*}
(1+\imath2)^7 &= (1+\imath2)(1+\imath2)^2(1+\imath2)^4 \\
&= (1+\imath2)(-3+\imath4)(-3+\imath4)^2 \\
&= (-11-\imath2)(-7-\imath24) \\
&= 29 + \imath278
\end{align*}
We can also do the problem using De Moivre's Theorem.
\begin{align*}
(1+\imath2)^7 &= \left(\sqrt{5}\,e^{\imath\arctan(1,2)}\right)^7 \\
&= 125\sqrt{5}\,e^{\imath7\arctan(1,2)} \\
&= 125\sqrt{5}\cos(7\arctan(1,2)) + \imath125\sqrt{5}\sin(7\arctan(1,2))
\end{align*}
2.
\begin{align*}
\frac{1}{\bar{z}^2} &= \frac{1}{(x-\imath y)^2} \\
&= \frac{1}{(x-\imath y)^2}\,\frac{(x+\imath y)^2}{(x+\imath y)^2} \\
&= \frac{(x+\imath y)^2}{(x^2+y^2)^2} \\
&= \frac{x^2-y^2}{(x^2+y^2)^2} + \imath\,\frac{2xy}{(x^2+y^2)^2}
\end{align*}
3. We can evaluate the expression using De Moivre's Theorem.
\begin{align*}
\frac{\imath z + \bar{z}}{(3+\imath)^9} &= (-y+\imath x+x-\imath y)(3+\imath)^{-9} \\
&= (1+\imath)(x-y)\left(\sqrt{10}\,e^{\imath\arctan(3,1)}\right)^{-9} \\
&= (1+\imath)(x-y)\,\frac{1}{10000\sqrt{10}}\,e^{-\imath9\arctan(3,1)} \\
&= \frac{(1+\imath)(x-y)}{10000\sqrt{10}}\,\bigl(\cos(9\arctan(3,1)) - \imath\sin(9\arctan(3,1))\bigr) \\
&= \frac{x-y}{10000\sqrt{10}}\,\bigl(\cos(9\arctan(3,1)) + \sin(9\arctan(3,1))\bigr) \\
&\qquad + \imath\,\frac{x-y}{10000\sqrt{10}}\,\bigl(\cos(9\arctan(3,1)) - \sin(9\arctan(3,1))\bigr)
\end{align*}
We can also do this problem by directly multiplying but it's a little grungy.
\begin{align*}
\frac{\imath z + \bar{z}}{(3+\imath)^9} &= \frac{(-y+\imath x+x-\imath y)(3-\imath)^9}{10^9} \\
&= \frac{(1+\imath)(x-y)(3-\imath)\left(\left((3-\imath)^2\right)^2\right)^2}{10^9} \\
&= \frac{(1+\imath)(x-y)(3-\imath)\left((8-\imath6)^2\right)^2}{10^9} \\
&= \frac{(1+\imath)(x-y)(3-\imath)(28-\imath96)^2}{10^9} \\
&= \frac{(1+\imath)(x-y)(3-\imath)(-8432-\imath5376)}{10^9} \\
&= \frac{(x-y)(-22976-\imath38368)}{10^9} \\
&= \frac{359(y-x)}{15625000} + \imath\,\frac{1199(y-x)}{31250000}
\end{align*}
Solution 6.2
1.
\begin{align*}
\frac{1+\imath2}{3-\imath4} + \frac{2-\imath}{\imath5}
&= \frac{1+\imath2}{3-\imath4}\,\frac{3+\imath4}{3+\imath4} + \frac{2-\imath}{\imath5}\,\frac{-\imath}{-\imath} \\
&= \frac{-5+\imath10}{25} + \frac{-1-\imath2}{5} \\
&= -\frac{2}{5}
\end{align*}
2.
\[
(1-\imath)^4 = (-\imath2)^2 = -4
\]
Solution 6.3
1. First we do the multiplication in Cartesian form.
\begin{align*}
\left(1+\imath\sqrt{3}\right)^{-10}
&= \left(\left(1+\imath\sqrt{3}\right)^2\left(1+\imath\sqrt{3}\right)^8\right)^{-1} \\
&= \left(\left(-2+\imath2\sqrt{3}\right)\left(-2+\imath2\sqrt{3}\right)^4\right)^{-1} \\
&= \left(\left(-2+\imath2\sqrt{3}\right)\left(-8-\imath8\sqrt{3}\right)^2\right)^{-1} \\
&= \left(\left(-2+\imath2\sqrt{3}\right)\left(-128+\imath128\sqrt{3}\right)\right)^{-1} \\
&= \left(-512-\imath512\sqrt{3}\right)^{-1} \\
&= -\frac{1}{512}\,\frac{1}{1+\imath\sqrt{3}} \\
&= -\frac{1}{512}\,\frac{1}{1+\imath\sqrt{3}}\,\frac{1-\imath\sqrt{3}}{1-\imath\sqrt{3}} \\
&= -\frac{1}{2048} + \imath\,\frac{\sqrt{3}}{2048}
\end{align*}
Now we do the multiplication in modulus-argument (polar) form.
\begin{align*}
\left(1+\imath\sqrt{3}\right)^{-10} &= \left(2\,e^{\imath\pi/3}\right)^{-10} \\
&= 2^{-10}\,e^{-\imath10\pi/3} \\
&= \frac{1}{1024}\left(\cos\left(-\frac{10\pi}{3}\right) + \imath\sin\left(-\frac{10\pi}{3}\right)\right) \\
&= \frac{1}{1024}\left(\cos\frac{4\pi}{3} - \imath\sin\frac{4\pi}{3}\right) \\
&= \frac{1}{1024}\left(-\frac{1}{2} + \imath\frac{\sqrt{3}}{2}\right) \\
&= -\frac{1}{2048} + \imath\,\frac{\sqrt{3}}{2048}
\end{align*}
2.
\[
(11+\imath4)^2 = 105 + \imath88
\]
1.
2 + ı
ı6 − (1 − ı2)
2
=
2 + ı
−1 + ı8
2
=
3 + ı4
−63 − ı16
=
3 + ı4
−63 − ı16
−63 + ı16
−63 + ı16
= −
253
4225
− ı
204
4225
2.
(1 − ı)7
= (1 − ı)2 2
(1 − ı)2
(1 − ı)
= (−ı2)2
(−ı2)(1 − ı)
= (−4)(−2 − ı2)
= 8 + ı8
Solution 6.5
1.
\begin{align*}
\frac{z}{\bar{z}} &= \frac{x+\imath y}{x-\imath y} \\
&= \frac{x+\imath y}{x-\imath y}\,\frac{x+\imath y}{x+\imath y} \\
&= \frac{x^2-y^2}{x^2+y^2} + \imath\,\frac{2xy}{x^2+y^2}
\end{align*}
2.
\begin{align*}
\frac{z+\imath2}{2-\imath\bar{z}} &= \frac{x+\imath y+\imath2}{2-\imath(x-\imath y)} \\
&= \frac{x+\imath(y+2)}{2-y-\imath x} \\
&= \frac{x+\imath(y+2)}{2-y-\imath x}\,\frac{2-y+\imath x}{2-y+\imath x} \\
&= \frac{x(2-y)-(y+2)x}{(2-y)^2+x^2} + \imath\,\frac{x^2+(y+2)(2-y)}{(2-y)^2+x^2} \\
&= \frac{-2xy}{(2-y)^2+x^2} + \imath\,\frac{4+x^2-y^2}{(2-y)^2+x^2}
\end{align*}
Solution 6.6
Method 1. We expand the equation $uv = w$ in its components.
\begin{gather*}
uv = w \\
(u_0+\imath u_1+\jmath u_2+ku_3)(v_0+\imath v_1+\jmath v_2+kv_3) = w_0+\imath w_1+\jmath w_2+kw_3 \\
(u_0v_0-u_1v_1-u_2v_2-u_3v_3) + \imath(u_1v_0+u_0v_1-u_3v_2+u_2v_3) + \jmath(u_2v_0+u_3v_1+u_0v_2-u_1v_3) \\
\quad + k(u_3v_0-u_2v_1+u_1v_2+u_0v_3) = w_0+\imath w_1+\jmath w_2+kw_3
\end{gather*}
We can write this as a matrix equation.
\[
\begin{pmatrix}
u_0 & -u_1 & -u_2 & -u_3 \\
u_1 & u_0 & -u_3 & u_2 \\
u_2 & u_3 & u_0 & -u_1 \\
u_3 & -u_2 & u_1 & u_0
\end{pmatrix}
\begin{pmatrix} v_0 \\ v_1 \\ v_2 \\ v_3 \end{pmatrix}
=
\begin{pmatrix} w_0 \\ w_1 \\ w_2 \\ w_3 \end{pmatrix}
\]
This linear system of equations has a unique solution for $v$ if and only if the determinant of the matrix is nonzero. The determinant of the matrix is $\left(u_0^2+u_1^2+u_2^2+u_3^2\right)^2$. This is zero if and only if $u_0 = u_1 = u_2 = u_3 = 0$. Thus there exists a unique $v$ such that $uv = w$ if $u$ is nonzero. This $v$ is
\begin{multline*}
v = \Bigl[(u_0w_0+u_1w_1+u_2w_2+u_3w_3) + \imath(-u_1w_0+u_0w_1+u_3w_2-u_2w_3) \\
+ \jmath(-u_2w_0-u_3w_1+u_0w_2+u_1w_3) + k(-u_3w_0+u_2w_1-u_1w_2+u_0w_3)\Bigr] \Big/ \left(u_0^2+u_1^2+u_2^2+u_3^2\right)
\end{multline*}
Method 2. Note that $u\bar{u}$ is a real number.
\begin{align*}
u\bar{u} &= (u_0-\imath u_1-\jmath u_2-ku_3)(u_0+\imath u_1+\jmath u_2+ku_3) \\
&= u_0^2+u_1^2+u_2^2+u_3^2 + \imath(u_0u_1-u_1u_0-u_2u_3+u_3u_2) \\
&\qquad + \jmath(u_0u_2+u_1u_3-u_2u_0-u_3u_1) + k(u_0u_3-u_1u_2+u_2u_1-u_3u_0) \\
&= u_0^2+u_1^2+u_2^2+u_3^2
\end{align*}
$u\bar{u} = 0$ only if $u = 0$. We solve for $v$ by multiplying by the conjugate of $u$ and dividing by $u\bar{u}$.
\begin{gather*}
uv = w \\
\bar{u}uv = \bar{u}w \\
v = \frac{\bar{u}w}{u\bar{u}} \\
v = \frac{(u_0-\imath u_1-\jmath u_2-ku_3)(w_0+\imath w_1+\jmath w_2+kw_3)}{u_0^2+u_1^2+u_2^2+u_3^2}
\end{gather*}
\begin{multline*}
v = \Bigl[(u_0w_0+u_1w_1+u_2w_2+u_3w_3) + \imath(-u_1w_0+u_0w_1+u_3w_2-u_2w_3) \\
+ \jmath(-u_2w_0-u_3w_1+u_0w_2+u_1w_3) + k(-u_3w_0+u_2w_1-u_1w_2+u_0w_3)\Bigr] \Big/ \left(u_0^2+u_1^2+u_2^2+u_3^2\right)
\end{multline*}
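Method 2 translates directly into code: divide by multiplying with the conjugate and scaling by $|u|^2$. A minimal sketch of ours (quaternions represented as 4-tuples $(u_0,u_1,u_2,u_3)$, using the product expansion above):

```python
def qmul(u, v):
    """Quaternion product, using the component expansion from Method 1."""
    u0, u1, u2, u3 = u
    v0, v1, v2, v3 = v
    return (u0*v0 - u1*v1 - u2*v2 - u3*v3,
            u1*v0 + u0*v1 - u3*v2 + u2*v3,
            u2*v0 + u3*v1 + u0*v2 - u1*v3,
            u3*v0 - u2*v1 + u1*v2 + u0*v3)

def qdiv(w, u):
    """Solve u v = w for v: v = conj(u) w / (u conj(u)), valid for u != 0."""
    n = sum(c * c for c in u)                  # u conj(u) = |u|^2, a real number
    ubar = (u[0], -u[1], -u[2], -u[3])
    return tuple(c / n for c in qmul(ubar, w))

u, w = (1, 2, 3, 4), (5, 6, 7, 8)
v = qdiv(w, u)
print(qmul(u, v))   # recovers (5, 6, 7, 8) up to rounding
```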
Solution 6.7
If $\alpha = t\beta$, then $\alpha\bar{\beta} = t|\beta|^2$, which is a real number. Hence $\Im\left(\alpha\bar{\beta}\right) = 0$.
Now assume that $\Im\left(\alpha\bar{\beta}\right) = 0$. This implies that $\alpha\bar{\beta} = r$ for some $r \in \mathbb{R}$. We multiply by $\beta$ and simplify.
\begin{gather*}
\alpha|\beta|^2 = r\beta \\
\alpha = \frac{r}{|\beta|^2}\,\beta
\end{gather*}
By taking $t = \frac{r}{|\beta|^2}$ we see that $\alpha = t\beta$ for some real number $t$.
The Complex Plane
Solution 6.8
1.
\begin{align*}
(1+\imath)^{1/3} &= \left(\sqrt{2}\,e^{\imath\pi/4}\right)^{1/3} \\
&= \sqrt[6]{2}\,e^{\imath\pi/12}\,1^{1/3} \\
&= \sqrt[6]{2}\,e^{\imath\pi/12}\,e^{\imath2\pi k/3}, \quad k = 0,1,2 \\
&= \left\{\sqrt[6]{2}\,e^{\imath\pi/12},\; \sqrt[6]{2}\,e^{\imath3\pi/4},\; \sqrt[6]{2}\,e^{\imath17\pi/12}\right\}
\end{align*}
The principal root is
\[
\sqrt[3]{1+\imath} = \sqrt[6]{2}\,e^{\imath\pi/12}.
\]
The roots are depicted in Figure 6.9.

Figure 6.9: $(1+\imath)^{1/3}$

2.
\begin{align*}
\imath^{1/4} &= \left(e^{\imath\pi/2}\right)^{1/4} \\
&= e^{\imath\pi/8}\,1^{1/4} \\
&= e^{\imath\pi/8}\,e^{\imath2\pi k/4}, \quad k = 0,1,2,3 \\
&= \left\{e^{\imath\pi/8},\; e^{\imath5\pi/8},\; e^{\imath9\pi/8},\; e^{\imath13\pi/8}\right\}
\end{align*}
The principal root is
\[
\sqrt[4]{\imath} = e^{\imath\pi/8}.
\]
The roots are depicted in Figure 6.10.

Figure 6.10: $\imath^{1/4}$
Solution 6.9
1.
\[
|\Re(z)| + 2|\Im(z)| \le 1
\]
\[
|x| + 2|y| \le 1
\]
In the first quadrant, this is the triangle below the line $y = (1-x)/2$. We reflect this triangle across the coordinate axes to obtain triangles in the other quadrants. Explicitly, we have the set of points: $\{z = x+\imath y \mid -1 \le x \le 1 \land |y| \le (1-|x|)/2\}$. See Figure 6.11.

Figure 6.11: $|\Re(z)| + 2|\Im(z)| \le 1$

2. $|z-\imath|$ is the distance from the point $\imath$ in the complex plane. Thus $1 \le |z-\imath| \le 2$ is an annulus centered at $z = \imath$ between the radii 1 and 2. See Figure 6.12.
3. The points which are closer to $z = \imath$ than $z = -\imath$ are those points in the upper half plane. See Figure 6.13.
Figure 6.12: $1 \le |z-\imath| \le 2$
Figure 6.13: The upper half plane.

Solution 6.10
Let $z = r\,e^{\imath\theta}$ and $\zeta = \rho\,e^{\imath\phi}$.
1.
\begin{gather*}
\arg(z\zeta) = \arg(z) + \arg(\zeta) \\
\arg\left(r\rho\,e^{\imath(\theta+\phi)}\right) = \{\theta+2\pi m\} + \{\phi+2\pi n\} \\
\{\theta+\phi+2\pi k\} = \{\theta+\phi+2\pi m\}
\end{gather*}
2.
\[
\operatorname{Arg}(z\zeta) \ne \operatorname{Arg}(z) + \operatorname{Arg}(\zeta)
\]
Consider $z = \zeta = -1$. $\operatorname{Arg}(z) = \operatorname{Arg}(\zeta) = \pi$, however $\operatorname{Arg}(z\zeta) = \operatorname{Arg}(1) = 0$. The identity would become $0 = 2\pi$.
3.
\begin{gather*}
\arg\left(z^2\right) = \arg(z) + \arg(z) \ne 2\arg(z) \\
\arg\left(r^2\,e^{\imath2\theta}\right) = \{\theta+2\pi k\} + \{\theta+2\pi m\} \ne 2\{\theta+2\pi n\} \\
\{2\theta+2\pi k\} = \{2\theta+2\pi m\} \ne \{2\theta+4\pi n\}
\end{gather*}
Solution 6.11
Consider a triangle in the complex plane with vertices at $0$, $z$ and $z+\zeta$. (See Figure 6.14.) The lengths of the sides of the triangle are $|z|$, $|\zeta|$ and $|z+\zeta|$. The second inequality shows that one side of the triangle must be less than or equal to the sum of the other two sides.
\[
|z+\zeta| \le |z| + |\zeta|
\]
The first inequality shows that the length of one side of the triangle must be greater than or equal to the difference in the length of the other two sides.
\[
|z+\zeta| \ge \bigl||z| - |\zeta|\bigr|
\]

Figure 6.14: Triangle inequality.

Now we prove the inequalities algebraically. We will reduce the inequality to an identity. Let $z = r\,e^{\imath\theta}$, $\zeta = \rho\,e^{\imath\phi}$.
\begin{gather*}
\bigl||z|-|\zeta|\bigr| \le |z+\zeta| \le |z|+|\zeta| \\
|r-\rho| \le \left|r\,e^{\imath\theta}+\rho\,e^{\imath\phi}\right| \le r+\rho \\
(r-\rho)^2 \le \left(r\,e^{\imath\theta}+\rho\,e^{\imath\phi}\right)\left(r\,e^{-\imath\theta}+\rho\,e^{-\imath\phi}\right) \le (r+\rho)^2 \\
r^2+\rho^2-2r\rho \le r^2+\rho^2+r\rho\,e^{\imath(\theta-\phi)}+r\rho\,e^{\imath(-\theta+\phi)} \le r^2+\rho^2+2r\rho \\
-2r\rho \le 2r\rho\cos(\theta-\phi) \le 2r\rho \\
-1 \le \cos(\theta-\phi) \le 1
\end{gather*}
Solution 6.12
1.
\begin{align*}
(-1)^{-3/4} &= \left((-1)^{-3}\right)^{1/4} \\
&= (-1)^{1/4} \\
&= \left(e^{\imath\pi}\right)^{1/4} \\
&= e^{\imath\pi/4}\,1^{1/4} \\
&= e^{\imath\pi/4}\,e^{\imath k\pi/2}, \quad k = 0,1,2,3 \\
&= \left\{e^{\imath\pi/4},\; e^{\imath3\pi/4},\; e^{\imath5\pi/4},\; e^{\imath7\pi/4}\right\} \\
&= \left\{\frac{1+\imath}{\sqrt{2}},\; \frac{-1+\imath}{\sqrt{2}},\; \frac{-1-\imath}{\sqrt{2}},\; \frac{1-\imath}{\sqrt{2}}\right\}
\end{align*}
See Figure 6.15.
2.
\begin{align*}
8^{1/6} &= \sqrt[6]{8}\,1^{1/6} \\
&= \sqrt{2}\,e^{\imath k\pi/3}, \quad k = 0,1,2,3,4,5 \\
&= \left\{\sqrt{2},\; \sqrt{2}\,e^{\imath\pi/3},\; \sqrt{2}\,e^{\imath2\pi/3},\; \sqrt{2}\,e^{\imath\pi},\; \sqrt{2}\,e^{\imath4\pi/3},\; \sqrt{2}\,e^{\imath5\pi/3}\right\} \\
&= \left\{\sqrt{2},\; \frac{1+\imath\sqrt{3}}{\sqrt{2}},\; \frac{-1+\imath\sqrt{3}}{\sqrt{2}},\; -\sqrt{2},\; \frac{-1-\imath\sqrt{3}}{\sqrt{2}},\; \frac{1-\imath\sqrt{3}}{\sqrt{2}}\right\}
\end{align*}
See Figure 6.16.

Figure 6.15: $(-1)^{-3/4}$
Figure 6.16: $8^{1/6}$
Solution 6.13
1.
\begin{align*}
(-1)^{-1/4} &= \left((-1)^{-1}\right)^{1/4} \\
&= (-1)^{1/4} \\
&= \left(e^{\imath\pi}\right)^{1/4} \\
&= e^{\imath\pi/4}\,1^{1/4} \\
&= e^{\imath\pi/4}\,e^{\imath k\pi/2}, \quad k = 0,1,2,3 \\
&= \left\{e^{\imath\pi/4},\; e^{\imath3\pi/4},\; e^{\imath5\pi/4},\; e^{\imath7\pi/4}\right\} \\
&= \left\{\frac{1+\imath}{\sqrt{2}},\; \frac{-1+\imath}{\sqrt{2}},\; \frac{-1-\imath}{\sqrt{2}},\; \frac{1-\imath}{\sqrt{2}}\right\}
\end{align*}
See Figure 6.17.
2.
\begin{align*}
16^{1/8} &= \sqrt[8]{16}\,1^{1/8} \\
&= \sqrt{2}\,e^{\imath k\pi/4}, \quad k = 0,1,\ldots,7 \\
&= \left\{\sqrt{2},\; \sqrt{2}\,e^{\imath\pi/4},\; \sqrt{2}\,e^{\imath\pi/2},\; \sqrt{2}\,e^{\imath3\pi/4},\; \sqrt{2}\,e^{\imath\pi},\; \sqrt{2}\,e^{\imath5\pi/4},\; \sqrt{2}\,e^{\imath3\pi/2},\; \sqrt{2}\,e^{\imath7\pi/4}\right\} \\
&= \left\{\sqrt{2},\; 1+\imath,\; \imath\sqrt{2},\; -1+\imath,\; -\sqrt{2},\; -1-\imath,\; -\imath\sqrt{2},\; 1-\imath\right\}
\end{align*}
See Figure 6.18.

Figure 6.17: $(-1)^{-1/4}$
Figure 6.18: $16^{1/8}$
Solution 6.14
1. $|z-\imath2|$ is the distance from the point $\imath2$ in the complex plane. Thus $1 < |z-\imath2| < 2$ is an annulus. See Figure 6.19.

Figure 6.19: $1 < |z-\imath2| < 2$

2.
\[
|\Re(z)| + 5|\Im(z)| = 1
\]
\[
|x| + 5|y| = 1
\]
In the first quadrant this is the line $y = (1-x)/5$. We reflect this line segment across the coordinate axes to obtain line segments in the other quadrants. Explicitly, we have the set of points: $\{z = x+\imath y \mid -1 < x < 1 \land y = \pm(1-|x|)/5\}$. See Figure 6.20.

Figure 6.20: $|\Re(z)| + 5|\Im(z)| = 1$

3. The set of points equidistant from $\imath$ and $-\imath$ is the real axis. See Figure 6.21.

Figure 6.21: $|z-\imath| = |z+\imath|$
Solution 6.15
1. $|z-1+\imath|$ is the distance from the point $(1-\imath)$. Thus $|z-1+\imath| \le 1$ is the disk of unit radius centered at $(1-\imath)$. See Figure 6.22.

Figure 6.22: $|z-1+\imath| \le 1$

2.
\begin{gather*}
\Re(z) - \Im(z) = 5 \\
x - y = 5 \\
y = x - 5
\end{gather*}
See Figure 6.23.

Figure 6.23: $\Re(z) - \Im(z) = 5$

3. Since $|z-\imath| + |z+\imath| \ge 2$, there are no solutions of $|z-\imath| + |z+\imath| = 1$.
Solution 6.16
\begin{gather*}
\left|e^{\imath\theta} - 1\right| = 2 \\
\left(e^{\imath\theta}-1\right)\left(e^{-\imath\theta}-1\right) = 4 \\
1 - e^{\imath\theta} - e^{-\imath\theta} + 1 = 4 \\
-2\cos\theta = 2 \\
\theta = \pi
\end{gather*}
$\left\{e^{\imath\theta} \mid 0 \le \theta \le \pi\right\}$ is a unit semi-circle in the upper half of the complex plane from 1 to $-1$. The only point on this semi-circle that is a distance 2 from the point 1 is the point $-1$, which corresponds to $\theta = \pi$.
Polar Form
Solution 6.17
We recall the Taylor series expansion of e^x about x = 0,
e^x = Σ_{n=0}^{∞} x^n/n!.
We take this as the definition of the exponential function for complex argument.
e^{ıθ} = Σ_{n=0}^{∞} (ıθ)^n/n!
= Σ_{n=0}^{∞} (ı^n/n!) θ^n
= Σ_{n=0}^{∞} ((−1)^n/(2n)!) θ^{2n} + ı Σ_{n=0}^{∞} ((−1)^n/(2n + 1)!) θ^{2n+1}
We compare this expression to the Taylor series for the sine and cosine,
cos θ = Σ_{n=0}^{∞} ((−1)^n/(2n)!) θ^{2n},   sin θ = Σ_{n=0}^{∞} ((−1)^n/(2n + 1)!) θ^{2n+1}.
Thus e^{ıθ} and cos θ + ı sin θ have the same Taylor series expansions about θ = 0.
e^{ıθ} = cos θ + ı sin θ
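We can illustrate the series argument numerically. In the Python sketch below (standard library only), a partial sum of the exponential series at x = ıθ matches cos θ + ı sin θ; the truncation order 30 and the test value θ = 1.3 are arbitrary.

```python
# Partial sums of the exponential series at i*theta converge to Euler's formula.
import math

theta = 1.3
partial = sum((1j*theta)**n / math.factorial(n) for n in range(30))
print(partial)                                    # ~ (0.2675 + 0.9636j)
print(complex(math.cos(theta), math.sin(theta)))  # the same value
```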
Solution 6.18
cos(3θ) + ı sin(3θ) = (cos θ + ı sin θ)^3
cos(3θ) + ı sin(3θ) = cos^3 θ + ı3 cos^2 θ sin θ − 3 cos θ sin^2 θ − ı sin^3 θ
We equate the real parts of the equation.
cos(3θ) = cos^3 θ − 3 cos θ sin^2 θ
Solution 6.19
Define the partial sum,
S_n(z) = Σ_{k=0}^{n} z^k.
Now consider (1 − z)S_n(z).
(1 − z)S_n(z) = (1 − z) Σ_{k=0}^{n} z^k
(1 − z)S_n(z) = Σ_{k=0}^{n} z^k − Σ_{k=1}^{n+1} z^k
(1 − z)S_n(z) = 1 − z^{n+1}
We divide by 1 − z. Note that 1 − z is nonzero.
S_n(z) = (1 − z^{n+1})/(1 − z)
1 + z + z^2 + ··· + z^n = (1 − z^{n+1})/(1 − z),   (z ≠ 1)
Now consider z = e^{ıθ} where 0 < θ < 2π so that z is not unity.
Σ_{k=0}^{n} (e^{ıθ})^k = (1 − (e^{ıθ})^{n+1})/(1 − e^{ıθ})
Σ_{k=0}^{n} e^{ıkθ} = (1 − e^{ı(n+1)θ})/(1 − e^{ıθ})
In order to get sin(θ/2) in the denominator, we multiply top and bottom by e^{−ıθ/2}.
Σ_{k=0}^{n} (cos(kθ) + ı sin(kθ)) = (e^{−ıθ/2} − e^{ı(n+1/2)θ})/(e^{−ıθ/2} − e^{ıθ/2})
Σ_{k=0}^{n} cos(kθ) + ı Σ_{k=0}^{n} sin(kθ) = (cos(θ/2) − ı sin(θ/2) − cos((n + 1/2)θ) − ı sin((n + 1/2)θ))/(−2ı sin(θ/2))
Σ_{k=0}^{n} cos(kθ) + ı Σ_{k=1}^{n} sin(kθ) = 1/2 + sin((n + 1/2)θ)/(2 sin(θ/2)) + ı((1/2) cot(θ/2) − cos((n + 1/2)θ)/(2 sin(θ/2)))
1. We take the real and imaginary part of this to obtain the identities.
Σ_{k=0}^{n} cos(kθ) = 1/2 + sin((n + 1/2)θ)/(2 sin(θ/2))
2.
Σ_{k=1}^{n} sin(kθ) = (1/2) cot(θ/2) − cos((n + 1/2)θ)/(2 sin(θ/2))
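A numerical spot-check of the two identities, with arbitrary test values n = 7 and θ = 0.9 (a Python sketch, standard library only):

```python
# Compare the finite sums against the closed forms derived above.
import math

n, theta = 7, 0.9
lhs_cos = sum(math.cos(k*theta) for k in range(n + 1))
rhs_cos = 0.5 + math.sin((n + 0.5)*theta) / (2*math.sin(theta/2))
lhs_sin = sum(math.sin(k*theta) for k in range(1, n + 1))
rhs_sin = 0.5/math.tan(theta/2) - math.cos((n + 0.5)*theta) / (2*math.sin(theta/2))
print(abs(lhs_cos - rhs_cos), abs(lhs_sin - rhs_sin))  # both ~ machine epsilon
```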
Arithmetic and Vectors
Solution 6.20
|zζ| = |r e^{ıθ} ρ e^{ıφ}|
= |rρ e^{ı(θ+φ)}|
= |rρ|
= |r||ρ|
= |z||ζ|

|z/ζ| = |r e^{ıθ}/(ρ e^{ıφ})|
= |(r/ρ) e^{ı(θ−φ)}|
= |r/ρ|
= |r|/|ρ|
= |z|/|ζ|
Solution 6.21
|z + ζ|^2 + |z − ζ|^2 = (z + ζ)(z̄ + ζ̄) + (z − ζ)(z̄ − ζ̄)
= zz̄ + zζ̄ + ζz̄ + ζζ̄ + zz̄ − zζ̄ − ζz̄ + ζζ̄
= 2(|z|^2 + |ζ|^2)
Consider the parallelogram defined by the vectors z and ζ. The lengths of the sides are |z| and |ζ|
and the lengths of the diagonals are |z + ζ| and |z − ζ|. We know from geometry that the sum of the
squared lengths of the diagonals of a parallelogram is equal to the sum of the squared lengths of the
four sides. (See Figure 6.24.)
Integer Exponents
Figure 6.24: The parallelogram defined by z and ζ.
Solution 6.22
1.
(1 + ı)^{10} = (((1 + ı)^2)^2)^2 (1 + ı)^2
= ((ı2)^2)^2 (ı2)
= (−4)^2 (ı2)
= 16(ı2)
= ı32
2.
(1 + ı)^{10} = (√2 e^{ıπ/4})^{10}
= (√2)^{10} e^{ı10π/4}
= 32 e^{ıπ/2}
= ı32
Rational Exponents
Solution 6.23
We substitute the numbers into the equation to obtain an identity.
z^2 + 2az + b = 0
(−a + (a^2 − b)^{1/2})^2 + 2a(−a + (a^2 − b)^{1/2}) + b = 0
a^2 − 2a(a^2 − b)^{1/2} + a^2 − b − 2a^2 + 2a(a^2 − b)^{1/2} + b = 0
0 = 0
Chapter 7
Functions of a Complex Variable
If brute force isn’t working, you’re not using enough of it.
-Tim Mauch
In this chapter we introduce the algebra of functions of a complex variable. We will cover the
trigonometric and inverse trigonometric functions. The properties of trigonometric functions carry
over directly from real-variable theory. However, because of multi-valuedness, the inverse trigono-
metric functions are significantly trickier than their real-variable counterparts.
7.1 Curves and Regions
In this section we introduce curves and regions in the complex plane. This material is necessary
for the study of branch points in this chapter and later for contour integration.
Curves. Consider two continuous functions x(t) and y(t) defined on the interval t ∈ [t_0 . . . t_1]. The
set of points in the complex plane,
{z(t) = x(t) + ıy(t) | t ∈ [t_0 . . . t_1]},
defines a continuous curve or simply a curve. If the endpoints coincide (z(t_0) = z(t_1)) it is a
closed curve. (We assume that t_0 ≠ t_1.) If the curve does not intersect itself, then it is said to be a
simple curve.
If x(t) and y(t) have continuous derivatives and the derivatives do not both vanish at any point,
then it is a smooth curve.¹ This essentially means that the curve does not have any corners or other
nastiness.
A continuous curve which is composed of a finite number of smooth curves is called a piecewise
smooth curve. We will use the word contour as a synonym for a piecewise smooth curve.
See Figure 7.1 for a smooth curve, a piecewise smooth curve, a simple closed curve and a non-
simple closed curve.
Regions. A region R is connected if any two points in R can be connected by a curve which lies
entirely in R. A region is simply-connected if every closed curve in R can be continuously shrunk to
a point without leaving R. A region which is not simply-connected is said to be multiply-connected.
Another way of defining simply-connected is that a path connecting two points in R can be
continuously deformed into any other path that connects those points. Figure 7.2 shows a simply-
connected region with two paths which can be continuously deformed into one another and two
multiply-connected regions with paths which cannot be deformed into one another.
1Why is it necessary that the derivatives do not both vanish?
Figure 7.1: (a) Smooth curve. (b) Piecewise smooth curve. (c) Simple closed curve. (d) Non-simple
closed curve.
Figure 7.2: A simply-connected and two multiply-connected regions.
Jordan curve theorem. A continuous, simple, closed curve is known as a Jordan curve. The
Jordan Curve Theorem, which seems intuitively obvious but is difficult to prove, states that a Jordan
curve divides the plane into a simply-connected, bounded region and an unbounded region. These
two regions are called the interior and exterior regions, respectively. The two regions share the curve
as a boundary. Points in the interior are said to be inside the curve; points in the exterior are said
to be outside the curve.
Traversal of a contour. Consider a Jordan curve. If you traverse the curve in the positive
direction, then the inside is to your left. If you traverse the curve in the opposite direction, then
the outside will be to your left and you will go around the curve in the negative direction. For
circles, the positive direction is the counter-clockwise direction. The positive direction is consistent
with the way angles are measured in a right-handed coordinate system, i.e. for a circle centered on
the origin, the positive direction is the direction of increasing angle. For an oriented contour C, we
denote the contour with opposite orientation as −C.
Boundary of a region. Consider a simply-connected region. The boundary of the region is
traversed in the positive direction if the region is to the left as you walk along the contour. For
multiply-connected regions, the boundary may be a set of contours. In this case the boundary is
traversed in the positive direction if each of the contours is traversed in the positive direction. When
we refer to the boundary of a region we will assume it is given the positive orientation. In Figure 7.3
the boundaries of three regions are traversed in the positive direction.
Figure 7.3: Traversing the boundary in the positive direction.
Two interpretations of a curve. Consider a simple closed curve as depicted in Figure 7.4a. By
giving it an orientation, we can make a contour that either encloses the bounded domain Figure 7.4b
or the unbounded domain Figure 7.4c. Thus a curve has two interpretations. It can be thought of
as enclosing either the points which are “inside” or the points which are “outside”.2
Figure 7.4: Two interpretations of a curve.
7.2 The Point at Infinity and the Stereographic Projection
Complex infinity. In real variables, there are only two ways to get to infinity. We can either go
up or down the number line. Thus signed infinity makes sense. By going up or down we respectively
approach +∞ and −∞. In the complex plane there are an infinite number of ways to approach
infinity. We stand at the origin, point ourselves in any direction and go straight. We could walk
along the positive real axis and approach infinity via positive real numbers. We could walk along
the positive imaginary axis and approach infinity via pure imaginary numbers. We could generalize
the real variable notion of signed infinity to a complex variable notion of directional infinity, but this
will not be useful for our purposes. Instead, we introduce complex infinity or the point at infinity
as the limit of going infinitely far along any direction in the complex plane. The complex plane
together with the point at infinity form the extended complex plane.
Stereographic projection. We can visualize the point at infinity with the stereographic projec-
tion. We place a unit sphere on top of the complex plane so that the south pole of the sphere is at
the origin. Consider a line passing through the north pole and a point z = x + ıy in the complex
plane. In the stereographic projection, the point z is mapped to the point where the line
intersects the sphere. (See Figure 7.5.) Each point z = x + ıy in the complex plane is mapped to a
unique point (X, Y, Z) on the sphere.
X = 4x/(|z|^2 + 4),   Y = 4y/(|z|^2 + 4),   Z = 2|z|^2/(|z|^2 + 4)
The origin is mapped to the south pole. The point at infinity, |z| = ∞, is mapped to the north pole.
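The projection formulas are easy to experiment with. Below is a Python sketch (standard library only; the function name stereographic is our own) that maps a point z to (X, Y, Z) and confirms the limiting behavior at the poles.

```python
# Map z = x + iy to the unit sphere sitting on the plane, per the formulas above.
def stereographic(z):
    d = abs(z)**2 + 4
    return (4*z.real/d, 4*z.imag/d, 2*abs(z)**2/d)

print(stereographic(0))      # (0.0, 0.0, 0.0): the south pole at the origin
print(stereographic(1e9))    # approaches (0, 0, 2): the north pole
```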
In the stereographic projection, circles in the complex plane are mapped to circles on the unit
sphere. Figure 7.6 shows circles along the real and imaginary axes under the mapping. Lines in the
complex plane are also mapped to circles on the unit sphere. The right diagram in Figure 7.6 shows
lines emanating from the origin under the mapping.
The stereographic projection helps us reason about the point at infinity. When we consider the
complex plane by itself, the point at infinity is an abstract notion. We can’t draw a picture of the
point at infinity. It may be hard to accept the notion of a Jordan curve enclosing the point at infinity.
However, in the stereographic projection, the point at infinity is just an ordinary point (namely the
north pole of the sphere).
2 A farmer wanted to know the most efficient way to build a pen to enclose his sheep, so he consulted an engineer,
a physicist and a mathematician. The engineer suggested that he build a circular pen to get the maximum area for
any given perimeter. The physicist suggested that he build a fence at infinity and then shrink it to fit the sheep. The
mathematician constructed a little fence around himself and then defined himself to be outside.
Figure 7.5: The stereographic projection.
Figure 7.6: The stereographic projection of circles and lines.
7.3 A Gentle Introduction to Branch Points
In this section we will introduce the concepts of branches, branch points and branch cuts. These
concepts (which are notoriously difficult to understand for beginners) are typically defined in terms
of functions of a complex variable. Here we will develop these ideas as they relate to the arctangent
function arctan(x, y). Hopefully this simple example will make the treatment in Section 7.9 more
palatable.
First we review some properties of the arctangent. It is a mapping from R^2 to R. It measures
the angle around the origin from the positive x axis. Thus it is a multi-valued function. For a fixed
point in the domain, the function values differ by integer multiples of 2π. The arctangent is not
defined at the origin nor at the point at infinity; it is singular at these two points. If we plot some
of the values of the arctangent, it looks like a corkscrew with axis through the origin. A portion of
this function is plotted in Figure 7.7.
Figure 7.7: Plots of ℜ(log z) and a portion of ℑ(log z).
Most of the tools we have for analyzing functions (continuity, differentiability, etc.) depend on
the fact that the function is single-valued. In order to work with the arctangent we need to select
a portion to obtain a single-valued function. Consider the domain (−1..2) × (1..4). On this domain
we select the value of the arctangent that is between 0 and π. The domain and a plot of the selected
values of the arctangent are shown in Figure 7.8.
Figure 7.8: A domain and a selected value of the arctangent for the points in the domain.
CONTINUE.
7.4 Cartesian and Modulus-Argument Form
We can write a function of a complex variable z as a function of x and y or as a function of r and
θ with the substitutions z = x + ıy and z = r eıθ
, respectively. Then we can separate the real and
imaginary components or write the function in modulus-argument form,
f(z) = u(x, y) + ıv(x, y), or f(z) = u(r, θ) + ıv(r, θ),
f(z) = ρ(x, y) e^{ıφ(x,y)}, or f(z) = ρ(r, θ) e^{ıφ(r,θ)}.
Example 7.4.1 Consider the functions f(z) = z, f(z) = z^3 and f(z) = 1/(1 − z). We write the functions
in terms of x and y and separate them into their real and imaginary components.
f(z) = z = x + ıy
f(z) = z^3
= (x + ıy)^3
= x^3 + ı3x^2 y − 3xy^2 − ıy^3
= x^3 − 3xy^2 + ı(3x^2 y − y^3)
f(z) = 1/(1 − z)
= 1/(1 − x − ıy)
= (1/(1 − x − ıy)) ((1 − x + ıy)/(1 − x + ıy))
= (1 − x)/((1 − x)^2 + y^2) + ı y/((1 − x)^2 + y^2)
Example 7.4.2 Consider the functions f(z) = z, f(z) = z^3 and f(z) = 1/(1 − z). We write the functions
in terms of r and θ and write them in modulus-argument form.
f(z) = z = r e^{ıθ}
f(z) = z^3 = (r e^{ıθ})^3 = r^3 e^{ı3θ}
f(z) = 1/(1 − z)
= 1/(1 − r e^{ıθ})
= (1/(1 − r e^{ıθ})) ((1 − r e^{−ıθ})/(1 − r e^{−ıθ}))
= (1 − r e^{−ıθ})/(1 − r e^{ıθ} − r e^{−ıθ} + r^2)
= (1 − r cos θ + ır sin θ)/(1 − 2r cos θ + r^2)
Note that the denominator is real and non-negative.
= (1/(1 − 2r cos θ + r^2)) |1 − r cos θ + ır sin θ| e^{ı arctan(1 − r cos θ, r sin θ)}
= (1/(1 − 2r cos θ + r^2)) √((1 − r cos θ)^2 + r^2 sin^2 θ) e^{ı arctan(1 − r cos θ, r sin θ)}
= (1/(1 − 2r cos θ + r^2)) √(1 − 2r cos θ + r^2 cos^2 θ + r^2 sin^2 θ) e^{ı arctan(1 − r cos θ, r sin θ)}
= (1/√(1 − 2r cos θ + r^2)) e^{ı arctan(1 − r cos θ, r sin θ)}
7.5 Graphing Functions of a Complex Variable
We cannot directly graph functions of a complex variable as they are mappings from R^2 to R^2. To
do so would require four dimensions. However, we can use a surface plot to graph the real part,
the imaginary part, the modulus or the argument of a function of a complex variable. Each of these
is a scalar field, a mapping from R^2 to R.
Example 7.5.1 Consider the identity function, f(z) = z. In Cartesian coordinates and Cartesian
form, the function is f(z) = x + ıy. The real and imaginary components are u(x, y) = x and
v(x, y) = y. (See Figure 7.9.) In modulus argument form the function is
Figure 7.9: The real and imaginary parts of f(z) = z = x + ıy.
f(z) = z = r e^{ıθ} = √(x^2 + y^2) e^{ı arctan(x,y)}.
The modulus of f(z) is a single-valued function which is the distance from the origin. The argument
of f(z) is a multi-valued function. Recall that arctan(x, y) has an infinite number of values each of
which differ by an integer multiple of 2π. A few branches of arg(f(z)) are plotted in Figure 7.10.
The modulus and principal argument of f(z) = z are plotted in Figure 7.11.
Figure 7.10: A few branches of arg(z).
Figure 7.11: Plots of |z| and Arg(z).
Example 7.5.2 Consider the function f(z) = z^2. In Cartesian coordinates and separated into its
real and imaginary components the function is
f(z) = z^2 = (x + ıy)^2 = x^2 − y^2 + ı2xy.
Figure 7.12 shows surface plots of the real and imaginary parts of z^2. The magnitude of z^2 is
Figure 7.12: Plots of ℜ(z^2) and ℑ(z^2).
|z^2| = √(z^2 z̄^2) = z z̄ = (x + ıy)(x − ıy) = x^2 + y^2.
Note that
z^2 = (r e^{ıθ})^2 = r^2 e^{ı2θ}.
In Figure 7.13 are plots of |z^2| and a branch of arg(z^2).
Figure 7.13: Plots of |z^2| and a branch of arg(z^2).
7.6 Trigonometric Functions
The exponential function. Consider the exponential function e^z. We can use Euler's formula
to write e^z = e^{x+ıy} in terms of its real and imaginary parts.
e^z = e^{x+ıy} = e^x e^{ıy} = e^x cos y + ı e^x sin y
From this we see that the exponential function is ı2π periodic: e^{z+ı2π} = e^z, and ıπ odd periodic:
e^{z+ıπ} = −e^z. Figure 7.14 has surface plots of the real and imaginary parts of e^z which show this
periodicity.
Figure 7.14: Plots of ℜ(e^z) and ℑ(e^z).
The modulus of e^z is a function of x alone.
|e^z| = |e^{x+ıy}| = e^x
The argument of e^z is a function of y alone.
arg(e^z) = arg(e^{x+ıy}) = {y + 2πn | n ∈ Z}
In Figure 7.15 are plots of |e^z| and a branch of arg(e^z).
Figure 7.15: Plots of |e^z| and a branch of arg(e^z).
Example 7.6.1 Show that the transformation w = e^z maps the infinite strip, −∞ < x < ∞,
0 < y < π, onto the upper half-plane.
Method 1. Consider the line z = x + ıc, −∞ < x < ∞. Under the transformation, this is
mapped to
w = e^{x+ıc} = e^{ıc} e^x, −∞ < x < ∞.
This is a ray from the origin to infinity in the direction of e^{ıc}. Thus we see that z = x is mapped to
the positive, real w axis, z = x + ıπ is mapped to the negative, real axis, and z = x + ıc, 0 < c < π
Figure 7.16: e^z maps horizontal lines to rays.
is mapped to a ray with angle c in the upper half-plane. Thus the strip is mapped to the upper
half-plane. See Figure 7.16.
Method 2. Consider the line z = c + ıy, 0 < y < π. Under the transformation, this is mapped
to
w = e^{c+ıy} = e^c e^{ıy}, 0 < y < π.
This is a semi-circle in the upper half-plane of radius e^c. As c → −∞, the radius goes to zero.
As c → ∞, the radius goes to infinity. Thus the strip is mapped to the upper half-plane. See
Figure 7.17.
Figure 7.17: e^z maps vertical lines to circular arcs.
The sine and cosine. We can write the sine and cosine in terms of the exponential function.
(e^{ız} + e^{−ız})/2 = (cos(z) + ı sin(z) + cos(−z) + ı sin(−z))/2
= (cos(z) + ı sin(z) + cos(z) − ı sin(z))/2
= cos z
(e^{ız} − e^{−ız})/(ı2) = (cos(z) + ı sin(z) − cos(−z) − ı sin(−z))/(ı2)
= (cos(z) + ı sin(z) − cos(z) + ı sin(z))/(ı2)
= sin z
We separate the sine and cosine into their real and imaginary parts.
cos z = cos x cosh y − ı sin x sinh y   sin z = sin x cosh y + ı cos x sinh y
For fixed y, the sine and cosine are oscillatory in x. The amplitude of the oscillations grows with
increasing |y|. See Figure 7.18 and Figure 7.19 for plots of the real and imaginary parts of the cosine
and sine, respectively. Figure 7.20 shows the modulus of the cosine and the sine.
Figure 7.18: Plots of ℜ(cos(z)) and ℑ(cos(z)).
Figure 7.19: Plots of ℜ(sin(z)) and ℑ(sin(z)).
Figure 7.20: Plots of |cos(z)| and |sin(z)|.
The hyperbolic sine and cosine. The hyperbolic sine and cosine have the familiar definitions
in terms of the exponential function. Thus not surprisingly, we can write the sine in terms of the
hyperbolic sine and write the cosine in terms of the hyperbolic cosine. Below is a collection of
trigonometric identities.
Result 7.6.1
e^z = e^x (cos y + ı sin y)
cos z = (e^{ız} + e^{−ız})/2   sin z = (e^{ız} − e^{−ız})/(ı2)
cos z = cos x cosh y − ı sin x sinh y   sin z = sin x cosh y + ı cos x sinh y
cosh z = (e^z + e^{−z})/2   sinh z = (e^z − e^{−z})/2
cosh z = cosh x cos y + ı sinh x sin y   sinh z = sinh x cos y + ı cosh x sin y
sin(ız) = ı sinh z   sinh(ız) = ı sin z
cos(ız) = cosh z   cosh(ız) = cos z
log z = ln |z| + ı arg(z) = ln |z| + ı Arg(z) + ı2πn, n ∈ Z
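A few of these identities can be spot-checked numerically. The Python sketch below (standard library only) tests the Cartesian form of the cosine and the identity sin(ız) = ı sinh z at an arbitrary point z = 1.2 + ı0.7.

```python
# Verify two identities from Result 7.6.1 at a sample point.
import cmath, math

x, y = 1.2, 0.7
z = complex(x, y)
print(cmath.cos(z))
print(complex(math.cos(x)*math.cosh(y), -math.sin(x)*math.sinh(y)))  # same value
print(cmath.sin(1j*z), 1j*cmath.sinh(z))                             # sin(iz) = i sinh(z)
```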
7.7 Inverse Trigonometric Functions
The logarithm. The logarithm, log(z), is defined as the inverse of the exponential function e^z.
The exponential function is many-to-one and thus has a multi-valued inverse. From what we know
of many-to-one functions, we conclude that
e^{log z} = z, but log(e^z) ≠ z.
This is because e^{log z} is single-valued but log(e^z) is not. Because e^z is ı2π periodic, the logarithm
of a number is a set of numbers which differ by integer multiples of ı2π. For instance, e^{ı2πn} = 1 so
that log(1) = {ı2πn : n ∈ Z}. The logarithmic function has an infinite number of branches. The
value of the function on the branches differs by integer multiples of ı2π. It has singularities at zero
and infinity. |log(z)| → ∞ as either z → 0 or z → ∞.
We will derive the formula for the complex variable logarithm. For now, let ln(x) denote the real
variable logarithm that is defined for positive real numbers. Consider w = log z. This means that
e^w = z. We write w = u + ıv in Cartesian form and z = r e^{ıθ} in polar form.
e^{u+ıv} = r e^{ıθ}
We equate the modulus and argument of this expression.
e^u = r   v = θ + 2πn
u = ln r   v = θ + 2πn
With log z = u + ıv, we have a formula for the logarithm.
log z = ln |z| + ı arg(z)
If we write out the multi-valuedness of the argument function we note that this has the form that
we expected.
log z = ln |z| + ı(Arg(z) + 2πn), n ∈ Z
We check that our formula is correct by showing that e^{log z} = z:
e^{log z} = e^{ln |z| + ı arg(z)} = e^{ln r + ıθ + ı2πn} = r e^{ıθ} = z
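The multi-valuedness is easy to see numerically. In the Python sketch below (standard library only; log_values is our own helper, not a library routine), each branch value of log(−1) exponentiates back to −1.

```python
# Enumerate a few branch values of log z = ln|z| + i(Arg z + 2*pi*n).
import cmath, math

def log_values(z, ns=range(-2, 3)):
    return [complex(math.log(abs(z)), cmath.phase(z) + 2*math.pi*n) for n in ns]

for w in log_values(-1):
    print(f"{w:.4f}  ->  exp(w) = {cmath.exp(w):.4f}")  # every value maps back to -1
```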
Note again that log(e^z) ≠ z.
log(e^z) = ln |e^z| + ı arg(e^z) = ln(e^x) + ı arg(e^{x+ıy}) = x + ı(y + 2πn) = z + ı2πn ≠ z
The real part of the logarithm is the single-valued ln r; the imaginary part is the multi-valued
arg(z). We define the principal branch of the logarithm Log z to be the branch that satisfies −π <
ℑ(Log z) ≤ π. For positive, real numbers the principal branch, Log x, is real-valued. We can write
Log z in terms of the principal argument, Arg z.
Log z = ln |z| + ı Arg(z)
See Figure 7.21 for plots of the real and imaginary part of Log z.
Figure 7.21: Plots of ℜ(Log z) and ℑ(Log z).
The form: a^b. Consider a^b where a and b are complex and a is nonzero. We define this expression
in terms of the exponential and the logarithm as
a^b = e^{b log a}.
Note that the multi-valuedness of the logarithm may make a^b multi-valued. First consider the case
that the exponent is an integer.
a^m = e^{m log a} = e^{m(Log a + ı2nπ)} = e^{m Log a} e^{ı2mnπ} = e^{m Log a}
Thus we see that a^m has a single value where m is an integer.
Now consider the case that the exponent is a rational number. Let p/q be a rational number in
reduced form.
a^{p/q} = e^{(p/q) log a} = e^{(p/q)(Log a + ı2nπ)} = e^{(p/q) Log a} e^{ı2npπ/q}.
This expression has q distinct values as
e^{ı2npπ/q} = e^{ı2mpπ/q} if and only if n = m mod q.
Finally consider the case that the exponent b is an irrational number.
a^b = e^{b log a} = e^{b(Log a + ı2nπ)} = e^{b Log a} e^{ı2bnπ}
Note that e^{ı2bnπ} and e^{ı2bmπ} are equal if and only if ı2bnπ and ı2bmπ differ by an integer multiple
of ı2π, which means that bn and bm differ by an integer. This occurs only when n = m. Thus
e^{ı2bnπ} has a distinct value for each different integer n. We conclude that a^b has an infinite number
of values.
You may have noticed something a little fishy. If b is not an integer and a is any non-zero complex
number, then a^b is multi-valued. Then why have we been treating e^b as single-valued, when it is
merely the case a = e? The answer is that in the realm of functions of a complex variable, e^z is an
abuse of notation. We write e^z when we mean exp(z), the single-valued exponential function. Thus
when we write e^z we do not mean "the number e raised to the z power", we mean "the exponential
function of z". We denote the former scenario as (e)^z, which is multi-valued.
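To see the infinite set of values concretely, the following Python sketch (standard library only) enumerates a few values of a^b = e^{b log a} for a = 2 and b = ı; since the values differ by factors e^{−2πn}, they even have different moduli.

```python
# A few of the infinitely many values of 2**i, one per branch of the logarithm.
import cmath, math

a, b = 2, 1j
for n in range(-1, 2):
    log_a = complex(math.log(abs(a)), cmath.phase(a) + 2*math.pi*n)
    print(cmath.exp(b*log_a))
```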
Logarithmic identities. Back in high school trigonometry when you thought that the logarithm
was only defined for positive real numbers you learned the identity log x^a = a log x. This identity
doesn't hold when the logarithm is defined for nonzero complex numbers. Consider the logarithm
of z^a.
log z^a = Log z^a + ı2πn
a log z = a(Log z + ı2πn) = a Log z + ı2aπn
Note that
log z^a ≠ a log z
Furthermore, since
Log z^a = ln |z^a| + ı Arg(z^a),   a Log z = a ln |z| + ıa Arg(z)
and Arg(z^a) is not necessarily the same as a Arg(z) we see that
Log z^a ≠ a Log z.
Consider the logarithm of a product.
log(ab) = ln |ab| + ı arg(ab)
= ln |a| + ln |b| + ı arg(a) + ı arg(b)
= log a + log b
There is not an analogous identity for the principal branch of the logarithm since Arg(ab) is not in
general the same as Arg(a) + Arg(b).
Using log(ab) = log(a) + log(b) we can deduce that log(a^n) = Σ_{k=1}^{n} log a = n log a, where n is a
positive integer. This result is simple, straightforward and wrong. I have led you down the merry
path to damnation.³ In fact, log a^2 ≠ 2 log a. Just write the multi-valuedness explicitly,
log a^2 = Log a^2 + ı2nπ,   2 log a = 2(Log a + ı2nπ) = 2 Log a + ı4nπ.
You can verify that
log(1/a) = − log a.
We can use this and the product identity to expand the logarithm of a quotient.
log(a/b) = log a − log b
For general values of a, log z^a ≠ a log z. However, for some values of a, equality holds. We already
know that a = 1 and a = −1 work. To determine if equality holds for other values of a, we explicitly
write the multi-valuedness.
log z^a = log(e^{a log z}) = a log z + ı2πk, k ∈ Z
a log z = a ln |z| + ıa Arg z + ıa2πm, m ∈ Z
We see that log z^a = a log z if and only if
{am | m ∈ Z} = {am + k | k, m ∈ Z}.
The sets are equal if and only if a = 1/n, n ∈ Z^±. Thus we have the identity:
log z^{1/n} = (1/n) log z, n ∈ Z^±
3 Don’t feel bad if you fell for it. The logarithm is a tricky bastard.
Result 7.7.1 Logarithmic Identities.
a^b = e^{b log a}
e^{log z} = e^{Log z} = z
log(ab) = log a + log b
log(1/a) = − log a
log(a/b) = log a − log b
log z^{1/n} = (1/n) log z, n ∈ Z^±
Logarithmic Inequalities.
Log(uv) ≠ Log(u) + Log(v)
log z^a ≠ a log z
Log z^a ≠ a Log z
log e^z ≠ z
Example 7.7.1 Consider 1^π. We apply the definition a^b = e^{b log a}.
1^π = e^{π log(1)} = e^{π(ln(1) + ı2nπ)} = e^{ı2nπ²}
Thus we see that 1^π has an infinite number of values, all of which lie on the unit circle |z| = 1 in the
complex plane. However, the set 1^π is not equal to the set |z| = 1. There are points in the latter
which are not in the former. This is analogous to the fact that the rational numbers are dense in
the real numbers, but are a subset of the real numbers.
Example 7.7.2 We find the zeros of sin z.
sin z = (e^{ız} − e^{−ız})/(ı2) = 0
e^{ız} = e^{−ız}
e^{ı2z} = 1
2z mod 2π = 0
z = nπ, n ∈ Z
Equivalently, we could use the identity
sin z = sin x cosh y + ı cos x sinh y = 0.
This becomes the two equations (for the real and imaginary parts)
sin x cosh y = 0 and cos x sinh y = 0.
Since cosh is real-valued and positive for real argument, the first equation dictates that x = nπ,
n ∈ Z. Since cos(nπ) = (−1)^n for n ∈ Z, the second equation implies that sinh y = 0. For real
argument, sinh y is only zero at y = 0. Thus the zeros are
z = nπ, n ∈ Z
Example 7.7.3 Since we can express sin z in terms of the exponential function, one would expect
that we could express sin^{−1} z in terms of the logarithm.
w = sin^{−1} z
z = sin w
z = (e^{ıw} − e^{−ıw})/(ı2)
e^{ı2w} − ı2z e^{ıw} − 1 = 0
e^{ıw} = ız ± (1 − z^2)^{1/2}
w = −ı log(ız ± (1 − z^2)^{1/2})
Thus we see how the multi-valued sin^{−1} is related to the logarithm.
sin^{−1} z = −ı log(ız ± (1 − z^2)^{1/2})
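Taking the + sign and the principal branches reproduces the principal arcsine, which we can confirm numerically (a Python sketch, standard library only; the test point is arbitrary):

```python
# The logarithmic form of arcsin agrees with the library's principal arcsine.
import cmath

z = 0.3 + 0.4j
w = -1j*cmath.log(1j*z + cmath.sqrt(1 - z*z))
print(w, cmath.asin(z))   # the two agree
print(cmath.sin(w))       # recovers z
```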
Example 7.7.4 Consider the equation sin^3 z = 1.
sin^3 z = 1
sin z = 1^{1/3}
(e^{ız} − e^{−ız})/(ı2) = 1^{1/3}
e^{ız} − ı2(1)^{1/3} − e^{−ız} = 0
e^{ı2z} − ı2(1)^{1/3} e^{ız} − 1 = 0
e^{ız} = (ı2(1)^{1/3} ± √(−4(1)^{2/3} + 4))/2
e^{ız} = ı(1)^{1/3} ± (1 − (1)^{2/3})^{1/2}
z = −ı log(ı(1)^{1/3} ± (1 − 1^{2/3})^{1/2})
Note that there are three sources of multi-valuedness in the expression for z. The two values of the
square root are shown explicitly. There are three cube roots of unity. Finally, the logarithm has an
infinite number of branches. To show this multi-valuedness explicitly, we could write
z = −ı Log(ı e^{ı2mπ/3} ± (1 − e^{ı4mπ/3})^{1/2}) + 2πn,   m = 0, 1, 2,   n = . . . , −1, 0, 1, . . .
Example 7.7.5 Consider the harmless looking equation, ı^z = 1.
Before we start with the algebra, note that the right side of the equation is a single number. ı^z
is single-valued only when z is an integer. Thus we know that if there are solutions for z, they are
integers. We now proceed to solve the equation.
ı^z = 1
(e^{ıπ/2})^z = 1
Use the fact that z is an integer.
e^{ıπz/2} = 1
ıπz/2 = ı2nπ, for some n ∈ Z
z = 4n, n ∈ Z
Here is a different approach. We write down the multi-valued form of ı^z. We solve the equation
by requiring that all the values of ı^z are 1.
ı^z = 1
e^{z log ı} = 1
z log ı = ı2πn, for some n ∈ Z
z(ıπ/2 + ı2πm) = ı2πn, ∀m ∈ Z, for some n ∈ Z
(ıπ/2)z + ı2πmz = ı2πn, ∀m ∈ Z, for some n ∈ Z
The only solutions that satisfy the above equation are
z = 4k, k ∈ Z.
Now let's consider a slightly different problem: 1 ∈ ı^z. For what values of z does ı^z have 1 as
one of its values?
1 ∈ ı^z
1 ∈ e^{z log ı}
1 ∈ {e^{z(ıπ/2 + ı2πn)} | n ∈ Z}
z(ıπ/2 + ı2πn) = ı2πm, m, n ∈ Z
z = 4m/(1 + 4n), m, n ∈ Z
There are an infinite set of rational numbers for which ı^z has 1 as one of its values. For example,
ı^{4/5} = 1^{1/5} = 1, e^{ı2π/5}, e^{ı4π/5}, e^{ı6π/5}, e^{ı8π/5}
7.8 Riemann Surfaces
Consider the mapping w = log(z). Each nonzero point in the z-plane is mapped to an infinite
number of points in the w plane.
w = {ln |z| + ı arg(z)} = {ln |z| + ı(Arg(z) + 2πn) | n ∈ Z}
This multi-valuedness makes it hard to work with the logarithm. We would like to select one of
the branches of the logarithm. One way of doing this is to decompose the z-plane into an infinite
number of sheets. The sheets lie above one another and are labeled with the integers, n ∈ Z. (See
Figure 7.22.) We label the point z on the nth
sheet as (z, n). Now each point (z, n) maps to a single
point in the w-plane. For instance, we can make the zeroth sheet map to the principal branch of
the logarithm. This would give us the following mapping.
log(z, n) = Log z + ı2πn
This is a nice idea, but it has some problems. The mappings are not continuous. Consider the
mapping on the zeroth sheet. As we approach the negative real axis from above, z is mapped to
ln |z| + ıπ; as we approach from below, it is mapped to ln |z| − ıπ. (Recall Figure 7.21.) The mapping
is not continuous across the negative real axis.
Let’s go back to the regular z-plane for a moment. We start at the point z = 1 and select
the branch of the logarithm for which log(1) = 0. (The possible values of log(1) are {ı2πn}.) We make the logarithm vary
continuously as we walk around the origin once in the positive direction and return to the point
z = 1. Since the argument of z has increased by 2π, the value of the logarithm has changed to ı2π.
If we walk around the origin again we will have log(1) = ı4π. Our flat sheet decomposition of the
Figure 7.22: The z-plane decomposed into flat sheets.
z-plane does not reflect this property. We need a decomposition with a geometry that makes the
mapping continuous and connects the various branches of the logarithm.
Drawing inspiration from the plot of arg(z), Figure 7.10, we decompose the z-plane into an
infinite corkscrew with axis at the origin. (See Figure 7.23.) We define the mapping so that the
logarithm varies continuously on this surface. Consider a point z on one of the sheets. The value
of the logarithm at that same point on the sheet directly above it is ı2π more than the original
value. We call this surface, the Riemann surface for the logarithm. The mapping from the Riemann
surface to the w-plane is continuous and one-to-one.
Figure 7.23: The Riemann surface for the logarithm.
7.9 Branch Points
Example 7.9.1 Consider the function z^{1/2}. For each value of z, there are two values of z^{1/2}. We
write z^{1/2} in modulus-argument and Cartesian form.
z^{1/2} = √|z| e^{ı arg(z)/2}
z^{1/2} = √|z| cos(arg(z)/2) + ı √|z| sin(arg(z)/2)
Figure 7.24 shows the real and imaginary parts of z^{1/2} from three different viewpoints. The second
and third views are looking down the x axis and y axis, respectively. Consider ℜ(z^{1/2}). This is a
double layered sheet which intersects itself on the negative real axis. (ℑ(z^{1/2}) has a similar structure,
but intersects itself on the positive real axis.) Let's start at a point on the positive real axis on the
lower sheet. If we walk around the origin once and return to the positive real axis, we will be on the
upper sheet. If we do this again, we will return to the lower sheet.
Suppose we are at a point in the complex plane. We pick one of the two values of z^{1/2}. If the
function varies continuously as we walk around the origin and back to our starting point, the value
of z^{1/2} will have changed. We will be on the other branch. Because walking around the point z = 0
takes us to a different branch of the function, we refer to z = 0 as a branch point.
Figure 7.24: Plots of ℜ(z^{1/2}) (left) and ℑ(z^{1/2}) (right) from three viewpoints.
Now consider the modulus-argument form of z^{1/2}:
z^{1/2} = √|z| e^{ı arg(z)/2}.
Figure 7.25 shows the modulus and the principal argument of z^{1/2}. We see that each time we walk
around the origin, the argument of z^{1/2} changes by π. This means that the value of the function
changes by the factor e^{ıπ} = −1, i.e. the function changes sign. If we walk around the origin twice,
the argument changes by 2π, so that the value of the function does not change, e^{ı2π} = 1.
Figure 7.25: Plots of |z^{1/2}| and Arg(z^{1/2}).
z^{1/2} is a continuous function except at z = 0. Suppose we start at z = 1 = e^{ı0} and the function
value (e^{ı0})^{1/2} = 1. If we follow the first path in Figure 7.26, the argument of z varies from 0 up to
about π/4, down to about −π/4 and back to 0. The value of the function is still (e^{ı0})^{1/2} = 1.
Figure 7.26: A path that does not encircle the origin and a path around the origin.
Now suppose we follow a circular path around the origin in the positive, counter-clockwise,
direction. (See the second path in Figure 7.26.) The argument of z increases by 2π. The value of
the function at half turns on the path is
(e^{ı0})^{1/2} = 1,
(e^{ıπ})^{1/2} = e^{ıπ/2} = ı,
(e^{ı2π})^{1/2} = e^{ıπ} = −1
As we return to the point z = 1, the argument of the function has changed by π and the value of the
function has changed from 1 to −1. If we were to walk along the circular path again, the argument
of z would increase by another 2π. The argument of the function would increase by another π and
the value of the function would return to 1.
(e^{ı4π})^{1/2} = e^{ı2π} = 1
In general, any time we walk around the origin, the value of z^{1/2} changes by the factor −1. We
call z = 0 a branch point. If we want a single-valued square root, we need something to prevent
us from walking around the origin. We achieve this by introducing a branch cut. Suppose we have
the complex plane drawn on an infinite sheet of paper. With a scissors we cut the paper from the
origin to −∞ along the real axis. Then if we start at z = e^{ı0}, and draw a continuous line without
leaving the paper, the argument of z will always be in the range −π < arg z < π. This means
that −π/2 < arg(z^{1/2}) < π/2. No matter what path we follow in this cut plane, z = 1 has argument
zero and (1)^{1/2} = 1. By never crossing the negative real axis, we have constructed a single valued
branch of the square root function. We call the cut along the negative real axis a branch cut.
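The jump across this branch cut is visible numerically: the principal square root, as implemented by cmath.sqrt, changes sign as we cross the negative real axis (a short Python sketch; the offset 10^{−12} is an arbitrary small number).

```python
# The principal square root jumps in sign across the cut on the negative real axis.
import cmath

above = cmath.sqrt(complex(-4, +1e-12))   # just above the cut
below = cmath.sqrt(complex(-4, -1e-12))   # just below the cut
print(above)   # ~ +2i
print(below)   # ~ -2i
```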
Example 7.9.2 Consider the logarithmic function log z. For each value of z, there are an infinite
number of values of log z. We write log z in Cartesian form.
log z = ln |z| + ı arg z
Figure 7.27 shows the real and imaginary parts of the logarithm. The real part is single-valued. The
imaginary part is multi-valued and has an infinite number of branches. The values of the logarithm
form an infinite-layered sheet. If we start on one of the sheets and walk around the origin once in
the positive direction, then the value of the logarithm increases by ı2π and we move to the next
branch. z = 0 is a branch point of the logarithm.
The logarithm is a continuous function except at z = 0. Suppose we start at z = 1 = e^{ı0} and the
function value log(e^{ı0}) = ln(1) + ı0 = 0. If we follow the first path in Figure 7.26, the argument of
z and thus the imaginary part of the logarithm varies from 0 up to about π/4, down to about −π/4 and
back to 0. The value of the logarithm is still 0.
Now suppose we follow a circular path around the origin in the positive direction. (See the second
path in Figure 7.26.) The argument of z increases by 2π. The value of the logarithm at half turns
Figure 7.27: Plots of ℜ(log z) and a portion of ℑ(log z).
on the path is
log(e^{ı0}) = 0,
log(e^{ıπ}) = ıπ,
log(e^{ı2π}) = ı2π
As we return to the point z = 1, the value of the logarithm has changed by ı2π. If we were to walk
along the circular path again, the argument of z would increase by another 2π and the value of the
logarithm would increase by another ı2π.
Result 7.9.1 A point z0 is a branch point of a function f(z) if the function
changes value when you walk around the point on any path that encloses no
singularities other than the one at z = z0.
Branch points at infinity : mapping infinity to the origin. Up to this point we have
considered only branch points in the finite plane. Now we consider the possibility of a branch point
at infinity. As a first method of approaching this problem we map the point at infinity to the origin
with the transformation ζ = 1/z and examine the point ζ = 0.
Example 7.9.3 Again consider the function z^{1/2}. Mapping the point at infinity to the origin, we
have f(ζ) = (1/ζ)^{1/2} = ζ^{−1/2}. For each value of ζ, there are two values of ζ^{−1/2}. We write ζ^{−1/2} in
modulus-argument form.
ζ^{−1/2} = (1/√|ζ|) e^{−ı arg(ζ)/2}
Like z^{1/2}, ζ^{−1/2} has a double-layered sheet of values. Figure 7.28 shows the modulus and the
principal argument of ζ^{−1/2}. We see that each time we walk around the origin, the argument of
ζ^{−1/2} changes by −π. This means that the value of the function changes by the factor e^{−ıπ} = −1,
i.e. the function changes sign. If we walk around the origin twice, the argument changes by −2π,
so that the value of the function does not change, e^{−ı2π} = 1.
Since ζ^{−1/2} has a branch point at zero, we conclude that z^{1/2} has a branch point at infinity.
Example 7.9.4 Again consider the logarithmic function log z. Mapping the point at infinity to
the origin, we have f(ζ) = log(1/ζ) = − log(ζ). From Example 7.9.2 we know that − log(ζ) has a
branch point at ζ = 0. Thus log z has a branch point at infinity.
Branch points at infinity : paths around infinity. We can also check for a branch point at
infinity by following a path that encloses the point at infinity and no other singularities. Just draw
a simple closed curve that separates the complex plane into a bounded component that contains all
Figure 7.28: Plots of |ζ^{−1/2}| and Arg(ζ^{−1/2}).
the singularities of the function in the finite plane. Then, depending on orientation, the curve is a
contour enclosing all the finite singularities, or the point at infinity and no other singularities.
Example 7.9.5 Once again consider the function z^{1/2}. We know that the function changes value
on a curve that goes once around the origin. Such a curve can be considered to be either a path
around the origin or a path around infinity. In either case the path encloses one singularity. There
are branch points at the origin and at infinity. Now consider a curve that does not go around the
origin. Such a curve can be considered to be either a path around neither of the branch points or
both of them. Thus we see that z^{1/2} does not change value when we follow a path that encloses
neither or both of its branch points.
Example 7.9.6 Consider f(z) = (z^2 − 1)^{1/2}. We factor the function.
f(z) = (z − 1)^{1/2} (z + 1)^{1/2}
There are branch points at z = ±1. Now consider the point at infinity.
f(ζ^{−1}) = (ζ^{−2} − 1)^{1/2} = ±ζ^{−1} (1 − ζ^2)^{1/2}
Since f(ζ^{−1}) does not have a branch point at ζ = 0, f(z) does not have a branch point at infinity.
We could reach the same conclusion by considering a path around infinity. Consider a path that
circles the branch points at z = ±1 once in the positive direction. Such a path circles the point at
infinity once in the negative direction. In traversing this path, the value of f(z) is multiplied by the
factor (e^{ı2π})^{1/2} (e^{ı2π})^{1/2} = e^{ı2π} = 1. Thus the value of the function does not change. There is no
branch point at infinity.
Diagnosing branch points. We have the definition of a branch point, but we do not have a
convenient criterion for determining if a particular function has a branch point. We have seen that
log z and z^α for non-integer α have branch points at zero and infinity. The inverse trigonometric
functions like the arcsine also have branch points, but they can be written in terms of the logarithm
and the square root. In fact all the elementary functions with branch points can be written in terms
of the functions log z and z^α. Furthermore, note that the multi-valuedness of z^α comes from the
logarithm, z^α = e^{α log z}. This gives us a way of quickly determining if and where a function may
have branch points.
Result 7.9.2 Let f(z) be a single-valued function. Then log(f(z)) and
(f(z))^α may have branch points only where f(z) is zero or singular.
Example 7.9.7 Consider the functions,
1. (z^2)^{1/2}
2. (z^{1/2})^2
3. (z^{1/2})^3
Are they multi-valued? Do they have branch points?
1.
(z^2)^{1/2} = ±√(z^2) = ±z
Because of the (·)^{1/2}, the function is multi-valued. The only possible branch points are at zero
and infinity. If ((e^{ı0})^2)^{1/2} = 1, then ((e^{ı2π})^2)^{1/2} = (e^{ı4π})^{1/2} = e^{ı2π} = 1. Thus we see that
the function does not change value when we walk around the origin. We can also consider this
to be a path around infinity. This function is multi-valued, but has no branch points.
2.
(z^{1/2})^2 = (±√z)^2 = z
This function is single-valued.
3.
(z^{1/2})^3 = (±√z)^3 = ±(√z)^3
This function is multi-valued. We consider the possible branch point at z = 0. If ((e^{ı0})^{1/2})^3 =
1, then ((e^{ı2π})^{1/2})^3 = (e^{ıπ})^3 = e^{ı3π} = −1. Since the function changes value when we walk
around the origin, it has a branch point at z = 0. Since this is also a path around infinity,
there is a branch point there.
Example 7.9.8 Consider the function f(z) = log(1/(z − 1)). Since 1/(z − 1) is only zero at infinity and its
only singularity is at z = 1, the only possibilities for branch points are at z = 1 and z = ∞. Since
log(1/(z − 1)) = − log(z − 1)
and log w has branch points at zero and infinity, we see that f(z) has branch points at z = 1 and
z = ∞.
Example 7.9.9 Consider the functions,
1. e^{log z}
2. log(e^z).
Are they multi-valued? Do they have branch points?
1.
e^{log z} = exp(Log z + ı2πn) = e^{Log z} e^{ı2πn} = z
This function is single-valued.
2.
log(e^z) = Log(e^z) + ı2πn = z + ı2πm
This function is multi-valued. It may have branch points only where e^z is zero or infinite. This
only occurs at z = ∞. Thus there are no branch points in the finite plane. The function does
not change when traversing a simple closed path. Since this path can be considered to enclose
infinity, there is no branch point at infinity.
Consider (f(z))^α where f(z) is single-valued and f(z) has either a zero or a singularity at z = z_0.
(f(z))^α may have a branch point at z = z_0. If f(z) is not a power of z, then it may be difficult to
tell if (f(z))^α changes value when we walk around z_0. Factor f(z) into f(z) = g(z)h(z) where h(z)
is nonzero and finite at z_0. Then g(z) captures the important behavior of f(z) at z_0. g(z) tells
us how fast f(z) vanishes or blows up. Since (f(z))^α = (g(z))^α (h(z))^α and (h(z))^α does not have a
branch point at z_0, (f(z))^α has a branch point at z_0 if and only if (g(z))^α has a branch point there.
Similarly, we can decompose
log(f(z)) = log(g(z)h(z)) = log(g(z)) + log(h(z))
to see that log(f(z)) has a branch point at z_0 if and only if log(g(z)) has a branch point there.
Result 7.9.3 Consider a single-valued function f(z) that has either a zero or
a singularity at z = z_0. Let f(z) = g(z)h(z) where h(z) is nonzero and finite.
(f(z))^α has a branch point at z = z_0 if and only if (g(z))^α has a branch point
there. log(f(z)) has a branch point at z = z_0 if and only if log(g(z)) has a
branch point there.
Example 7.9.10 Consider the functions,
1. sin(z^{1/2})
2. (sin z)^{1/2}
3. z^{1/2} sin(z^{1/2})
4. (sin z^2)^{1/2}
Find the branch points and the number of branches.
1.
sin(z^{1/2}) = sin(±√z) = ± sin(√z)
sin(z^{1/2}) is multi-valued. It has two branches. There may be branch points at zero and infinity.
Consider the unit circle which is a path around the origin or infinity. If sin((e^{ı0})^{1/2}) = sin(1),
then sin((e^{ı2π})^{1/2}) = sin(e^{ıπ}) = sin(−1) = − sin(1). There are branch points at the origin
and infinity.
2.
(sin z)^{1/2} = ±√(sin z)
The function is multi-valued with two branches. The sine vanishes at z = nπ and is singular
at infinity. There could be branch points at these locations. Consider the point z = nπ. We
can write
sin z = (z − nπ) (sin z)/(z − nπ)
Note that (sin z)/(z − nπ) is nonzero and has a removable singularity at z = nπ.
lim_{z→nπ} (sin z)/(z − nπ) = lim_{z→nπ} (cos z)/1 = (−1)^n
Since (z − nπ)^{1/2} has a branch point at z = nπ, (sin z)^{1/2} has branch points at z = nπ.
Since the branch points at z = nπ go all the way out to infinity, it is not possible to make a
path that encloses infinity and no other singularities. The point at infinity is a non-isolated
singularity. A point can be a branch point only if it is an isolated singularity.
3.
z^{1/2} sin(z^{1/2}) = ±√z sin(±√z) = ±√z (± sin √z) = √z sin √z
The function is single-valued. Thus there could be no branch points.
4.
(sin z^2)^{1/2} = ±√(sin z^2)
This function is multi-valued. Since sin z^2 = 0 at z = (nπ)^{1/2}, there may be branch points
there. First consider the point z = 0. We can write
sin z^2 = z^2 (sin z^2)/z^2
where (sin z^2)/z^2 is nonzero and has a removable singularity at z = 0.
lim_{z→0} (sin z^2)/z^2 = lim_{z→0} (2z cos z^2)/(2z) = 1.
Since (z^2)^{1/2} does not have a branch point at z = 0, (sin z^2)^{1/2} does not have a branch point
there either.
Now consider the point z = √(nπ).
sin z^2 = (z − √(nπ)) (sin z^2)/(z − √(nπ))
(sin z^2)/(z − √(nπ)) is nonzero and has a removable singularity at z = √(nπ).
lim_{z→√(nπ)} (sin z^2)/(z − √(nπ)) = lim_{z→√(nπ)} (2z cos z^2)/1 = 2√(nπ)(−1)^n
Since (z − √(nπ))^{1/2} has a branch point at z = √(nπ), (sin z^2)^{1/2} also has a branch point there.
Thus we see that (sin z^2)^{1/2} has branch points at z = (nπ)^{1/2} for n ∈ Z \ {0}. This is the
set of numbers: {±√π, ±√(2π), . . . , ±ı√π, ±ı√(2π), . . .}. The point at infinity is a non-isolated
singularity.
Example 7.9.11 Find the branch points of
f(z) = (z^3 − z)^{1/3}.
Introduce branch cuts. If f(2) = ³√6 then what is f(−2)?
We expand f(z).
f(z) = z^{1/3} (z − 1)^{1/3} (z + 1)^{1/3}.
There are branch points at z = −1, 0, 1. We consider the point at infinity.
f(1/ζ) = (1/ζ)^{1/3} (1/ζ − 1)^{1/3} (1/ζ + 1)^{1/3} = (1/ζ) (1 − ζ)^{1/3} (1 + ζ)^{1/3}
Since f(1/ζ) does not have a branch point at ζ = 0, f(z) does not have a branch point at infinity.
Consider the three possible branch cuts in Figure 7.29.
Figure 7.29: Three Possible Branch Cuts for f(z) = (z^3 − z)^{1/3}.
The first and the third branch cuts will make the function single valued, the second will not. It
is clear that the first set makes the function single valued since it is not possible to walk around any
of the branch points.
The second set of branch cuts would allow you to walk around the branch points at z = ±1. If
you walked around these two once in the positive direction, the value of the function would change
by the factor e^{ı4π/3}.
The third set of branch cuts would allow you to walk around all three branch points together.
You can verify that if you walk around the three branch points, the value of the function will not
change (e^{ı6π/3} = e^{ı2π} = 1).
Suppose we introduce the third set of branch cuts and are on the branch with f(2) = ³√6.
f(2) = (2 e^{ı0})^{1/3} (1 e^{ı0})^{1/3} (3 e^{ı0})^{1/3} = ³√6
The value of f(−2) is
f(−2) = (2 e^{ıπ})^{1/3} (3 e^{ıπ})^{1/3} (1 e^{ıπ})^{1/3}
= ³√2 e^{ıπ/3} ³√3 e^{ıπ/3} ³√1 e^{ıπ/3}
= ³√6 e^{ıπ}
= −³√6.
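The same arithmetic can be checked numerically. In the Python sketch below (standard library only), we start from f(2) = ³√6 and apply the phase e^{ıπ/3} that each of the three factors acquires between z = 2 and z = −2 on this branch.

```python
# On this branch the arguments of z, z - 1 and z + 1 each grow from 0 to pi,
# so each cube-root factor picks up the phase exp(i*pi/3).
import cmath, math

f2 = 6 ** (1/3)                              # f(2) = cbrt(6)
phase = cmath.exp(1j*math.pi/3) ** 3         # one exp(i*pi/3) per factor
print(f2 * phase)                            # ~ -1.817 = -cbrt(6), matching f(-2)
```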
Example 7.9.12 Find the branch points and number of branches for
f(z) = z^{z^2}.
z^{z^2} = exp(z^2 log z)
There may be branch points at the origin and infinity due to the logarithm. Consider walking around
a circle of radius r centered at the origin in the positive direction. Since the logarithm changes by
ı2π, the value of f(z) changes by the factor e^{ı2πr^2}. There are branch points at the origin and infinity.
The function has an infinite number of branches.
Example 7.9.13 Construct a branch of
f(z) = (z^2 + 1)^{1/3}
such that
f(0) = (1/2)(−1 + ı√3).
First we factor f(z).
f(z) = (z − ı)^{1/3} (z + ı)^{1/3}
There are branch points at z = ±ı. Figure 7.30 shows one way to introduce branch cuts.
Figure 7.30: Branch Cuts for f(z) = (z^2 + 1)^{1/3}.
Since it is not possible to walk around any branch point, these cuts make the function single
valued. We introduce the coordinates:
z − ı = ρ e^{ıφ},   z + ı = r e^{ıθ}.
f(z) = (ρ e^{ıφ})^{1/3} (r e^{ıθ})^{1/3} = ³√(ρr) e^{ı(φ+θ)/3}
The condition
f(0) = (1/2)(−1 + ı√3) = e^{ı(2π/3 + 2πn)}
can be stated
³√1 e^{ı(φ+θ)/3} = e^{ı(2π/3 + 2πn)}
φ + θ = 2π + 6πn
The angles must be defined to satisfy this relation. One choice is
π/2 < φ < 5π/2,   −π/2 < θ < 3π/2.
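One possible implementation of this branch is sketched below in Python (standard library only; the function f and the angle-folding logic are our own construction under the stated angle ranges, not a library facility).

```python
# Fold the angles from z - i and z + i into the ranges chosen above,
# then combine them as cbrt(rho*r) * exp(i*(phi + theta)/3).
import cmath, math

def f(z):
    phi = cmath.phase(z - 1j) % (2*math.pi)   # into [0, 2*pi)
    if phi <= math.pi/2:                      # fold into (pi/2, 5*pi/2)
        phi += 2*math.pi
    theta = cmath.phase(z + 1j)               # (-pi, pi]
    if theta <= -math.pi/2:                   # fold into (-pi/2, 3*pi/2)
        theta += 2*math.pi
    rho, r = abs(z - 1j), abs(z + 1j)
    return (rho*r) ** (1/3) * cmath.exp(1j*(phi + theta)/3)

print(f(0))   # ~ -0.5 + 0.866i = (1/2)(-1 + i*sqrt(3)), as required
```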
Principal branches. We construct the principal branch of the logarithm by putting a branch cut
on the negative real axis and choosing z = r e^{ıθ}, θ ∈ (−π, π). Thus the principal branch of the logarithm
is
Log z = ln r + ıθ,   −π < θ < π.
Note that if x is a negative real number (and thus lies on the branch cut), then Log x is
undefined.
The principal branch of z^α is
z^α = e^{α Log z}.
Note that there is a branch cut on the negative real axis.
−απ < arg(e^{α Log z}) < απ
The principal branch of z^{1/2} is denoted √z. The principal branch of z^{1/n} is denoted ⁿ√z.
Example 7.9.14 Construct √(1 − z^2), the principal branch of (1 − z^2)^{1/2}.
First note that since (1 − z^2)^{1/2} = (1 − z)^{1/2} (1 + z)^{1/2} there are branch points at z = 1 and
z = −1. The principal branch of the square root has a branch cut on the negative real axis. 1 − z^2
is a negative real number for z ∈ (−∞ . . . −1) ∪ (1 . . . ∞). Thus we put branch cuts on (−∞ . . . −1]
and [1 . . . ∞).
7.10 Exercises
Cartesian and Modulus-Argument Form
Exercise 7.1
Find the image of the strip 2 < x < 3 under the mapping w = f(z) = z^2. Does the image constitute
a domain?
Hint, Solution
Exercise 7.2
For a given real number φ, 0 ≤ φ < 2π, find the image of the sector 0 ≤ arg(z) < φ under the
transformation w = z^4. How large should φ be so that the w plane is covered exactly once?
Hint, Solution
Trigonometric Functions
Exercise 7.3
In Cartesian coordinates, z = x + ıy, write sin(z) in Cartesian and modulus-argument form.
Hint, Solution
Exercise 7.4
Show that e^z is nonzero for all finite z.
Hint, Solution
Exercise 7.5
Show that
|e^{z^2}| ≤ e^{|z|^2}.
When does equality hold?
Hint, Solution
Exercise 7.6
Solve coth(z) = 1.
Hint, Solution
Exercise 7.7
Solve 2 ∈ 2^z. That is, for what values of z is 2 one of the values of 2^z? Derive this result then verify
your answer by evaluating 2^z for the solutions that you find.
Hint, Solution
Exercise 7.8
Solve 1 ∈ 1^z. That is, for what values of z is 1 one of the values of 1^z? Derive this result then verify
your answer by evaluating 1^z for the solutions that you find.
Hint, Solution
Logarithmic Identities
Exercise 7.9
Show that if ℜ(z_1) > 0 and ℜ(z_2) > 0 then
Log(z_1 z_2) = Log(z_1) + Log(z_2)
and illustrate that this relationship does not hold in general.
Hint, Solution
Exercise 7.10
Find the fallacy in the following arguments:
1. log(−1) = log(1/(−1)) = log(1) − log(−1) = − log(−1), therefore, log(−1) = 0.
2. 1 = 1^{1/2} = ((−1)(−1))^{1/2} = (−1)^{1/2} (−1)^{1/2} = ı·ı = −1, therefore, 1 = −1.
Hint, Solution
Exercise 7.11
Write the following expressions in modulus-argument or Cartesian form. Denote any multi-valuedness
explicitly.
2^{2/5},   3^{1+ı},   (√3 − ı)^{1/4},   1^{ı/4}.
Hint, Solution
Exercise 7.12
Solve cos z = 69.
Hint, Solution
Exercise 7.13
Solve cot z = ı47.
Hint, Solution
Exercise 7.14
Determine all values of
1. log(−ı)
2. (−ı)−ı
3. 3π
4. log(log(ı))
and plot them in the complex plane.
Hint, Solution
Exercise 7.15
Evaluate and plot the following in the complex plane:
1. (cosh(ıπ))ı2
2. log
1
1 + ı
3. arctan(ı3)
Hint, Solution
Exercise 7.16
Determine all values of ı^ı and log((1 + ı)^{ıπ}) and plot them in the complex plane.
Hint, Solution
Exercise 7.17
Find all z for which
1. e^z = ı
2. cos z = sin z
3. tan^2 z = −1
Hint, Solution
Exercise 7.18
Prove the following identities and identify the branch points of the functions in the extended complex
plane.
1. arctan(z) = (ı/2) log((ı + z)/(ı − z))
2. arctanh(z) = (1/2) log((1 + z)/(1 − z))
3. arccosh(z) = log(z + (z^2 − 1)^{1/2})
Hint, Solution
Branch Points and Branch Cuts
Exercise 7.19
Identify the branch points of the function
f(z) = log(z(z + 1)/(z − 1))
and introduce appropriate branch cuts to ensure that the function is single-valued.
Hint, Solution
Exercise 7.20
Identify all the branch points of the function
w = f(z) = (z^3 + z^2 − 6z)^{1/2}
in the extended complex plane. Give a polar description of f(z) and specify branch cuts so that
your choice of angles gives a single-valued function that is continuous at z = −1 with f(−1) = −√6.
Sketch the branch cuts in the stereographic projection.
Hint, Solution
Exercise 7.21
Consider the mapping w = f(z) = z^{1/3} and the inverse mapping z = g(w) = w^3.
1. Describe the multiple-valuedness of f(z).
2. Describe a region of the w-plane that g(w) maps one-to-one to the whole z-plane.
3. Describe and attempt to draw a Riemann surface on which f(z) is single-valued and to which
g(w) maps one-to-one. Comment on the misleading nature of your picture.
4. Identify the branch points of f(z) and introduce a branch cut to make f(z) single-valued.
Hint, Solution
Exercise 7.22
Determine the branch points of the function
f(z) = (z^3 − 1)^{1/2}.
Construct cuts and define a branch so that z = 0 and z = −1 do not lie on a cut, and such that
f(0) = −ı. What is f(−1) for this branch?
Hint, Solution
Exercise 7.23
Determine the branch points of the function
w(z) = ((z − 1)(z − 6)(z + 2))^{1/2}.
Construct cuts and define a branch so that z = 4 does not lie on a cut, and such that w = ı6 when
z = 4.
Hint, Solution
Exercise 7.24
Give the number of branches and locations of the branch points for the functions
1. cos(z^{1/2})
2. (z + ı)^{−z}
Hint, Solution
Exercise 7.25
Find the branch points of the following functions in the extended complex plane, (the complex plane
including the point at infinity).
1. (z^2 + 1)^{1/2}
2. (z^3 − z)^{1/2}
3. log(z^2 − 1)
4. log((z + 1)/(z − 1))
Introduce branch cuts to make the functions single valued.
Hint, Solution
Exercise 7.26
Find all branch points and introduce cuts to make the following functions single-valued: For the
first function, choose cuts so that there is no cut within the disk |z| < 2.
1. f(z) = (z^3 + 8)^{1/2}
2. f(z) = log(5 + ((z + 1)/(z − 1))^{1/2})
3. f(z) = (z + ı3)^{1/2}
Hint, Solution
Exercise 7.27
Let f(z) have branch points at z = 0 and z = ±ı, but nowhere else in the extended complex plane.
How does the value and argument of f(z) change while traversing the contour in Figure 7.31? Does
the branch cut in Figure 7.31 make the function single-valued?
Hint, Solution
Exercise 7.28
Let f(z) be analytic except for no more than a countably infinite number of singularities. Suppose
that f(z) has only one branch point in the finite complex plane. Does f(z) have a branch point at
infinity? Now suppose that f(z) has two or more branch points in the finite complex plane. Does
f(z) have a branch point at infinity?
Hint, Solution
Figure 7.31: Contour around the branch points and the branch cut.
Exercise 7.29
Find all branch points of (z^4 + 1)^{1/4} in the extended complex plane. Which of the branch cuts in
Figure 7.32 make the function single-valued.
Figure 7.32: Four candidate sets of branch cuts for (z^4 + 1)^{1/4}.
Hint, Solution
Exercise 7.30
Find the branch points of
f(z) = (z/(z^2 + 1))^{1/3}
in the extended complex plane. Introduce branch cuts that make the function single-valued and
such that the function is defined on the positive real axis. Define a branch such that f(1) = 1/³√2.
Write down an explicit formula for the value of the branch. What is f(1 + ı)? What is the value of
f(z) on either side of the branch cuts?
Hint, Solution
Exercise 7.31
Find all branch points of
f(z) = ((z − 1)(z − 2)(z − 3))^{1/2}
in the extended complex plane. Which of the branch cuts in Figure 7.33 will make the function
single-valued. Using the first set of branch cuts in this figure define a branch on which f(0) = ı√6.
Write out an explicit formula for the value of the function on this branch.
Hint, Solution
Exercise 7.32
Determine the branch points of the function
w = ((z^2 − 2)(z + 2))^{1/3}.
Figure 7.33: Four candidate sets of branch cuts for ((z − 1)(z − 2)(z − 3))^{1/2}.
Construct cuts and define a branch so that the resulting cut is one line of finite extent and w(2) = 2.
What is w(−3) for this branch? What are the limiting values of w on either side of the branch cut?
Hint, Solution
Exercise 7.33
Construct the principal branch of arccos(z). (Arccos(z) has the property that if x ∈ [−1, 1] then
Arccos(x) ∈ [0, π]. In particular, Arccos(0) = π/2.)
Hint, Solution
Exercise 7.34
Find the branch points of (z^{1/2} − 1)^{1/2} in the finite complex plane. Introduce branch cuts to make
the function single-valued.
Hint, Solution
Exercise 7.35
For the linkage illustrated in Figure 7.34, use complex variables to outline a scheme for expressing
the angular position, velocity and acceleration of arm c in terms of those of arm a. (You needn’t
work out the equations.)
Figure 7.34: A linkage (arms a, b, c, base l; angles θ and φ).
Hint, Solution
Exercise 7.36
Find the image of the strip |ℜ(z)| < 1 and of the strip 1 < ℑ(z) < 2 under the transformations:
1. w = 2z^2
2. w = (z + 1)/(z − 1)
Hint, Solution
Exercise 7.37
Locate and classify all the singularities of the following functions:
1. (z + 1)^{1/2}/(z + 2)
2. cos(1/(1 + z))
3. 1/(1 − e^z)^2
In each case discuss the possibility of a singularity at the point ∞.
Hint, Solution
Exercise 7.38
Describe how the mapping w = sinh(z) transforms the infinite strip −∞ < x < ∞, 0 < y < π into
the w-plane. Find cuts in the w-plane which make the mapping continuous both ways. What are
the images of the lines (a) y = π/4; (b) x = 1?
Hint, Solution
7.11 Hints
Cartesian and Modulus-Argument Form
Hint 7.1
Hint 7.2
Trigonometric Functions
Hint 7.3
Recall that sin(z) = (1/ı2)(e^{ız} − e^{−ız}). Use Result 6.3.1 to convert between Cartesian and
modulus-argument form.
Hint 7.4
Write e^z in polar form.
Hint 7.5
The exponential is an increasing function for real variables.
Hint 7.6
Write the hyperbolic cotangent in terms of exponentials.
Hint 7.7
Write out the multi-valuedness of 2^z. There is a doubly-infinite set of solutions to this problem.
Hint 7.8
Write out the multi-valuedness of 1^z.
Logarithmic Identities
Hint 7.9
Hint 7.10
Write out the multi-valuedness of the expressions.
Hint 7.11
Do the exponentiations in polar form.
Hint 7.12
Write the cosine in terms of exponentials. Multiply by e^{ız} to get a quadratic equation for e^{ız}.
Hint 7.13
Write the cotangent in terms of exponentials. Get a quadratic equation for e^{ız}.
Hint 7.14
Hint 7.15
Hint 7.16
ı^ı has an infinite number of real, positive values: ı^ı = e^{ı log ı}. log((1 + ı)^{ıπ}) has a doubly infinite
set of values: log((1 + ı)^{ıπ}) = log(exp(ıπ log(1 + ı))).
Hint 7.17
Hint 7.18
Branch Points and Branch Cuts
Hint 7.19
Hint 7.20
Hint 7.21
Hint 7.22
Hint 7.23
Hint 7.24
Hint 7.25
1. (z^2 + 1)^{1/2} = (z − ı)^{1/2}(z + ı)^{1/2}
2. (z^3 − z)^{1/2} = z^{1/2}(z − 1)^{1/2}(z + 1)^{1/2}
3. log(z^2 − 1) = log(z − 1) + log(z + 1)
4. log((z + 1)/(z − 1)) = log(z + 1) − log(z − 1)
Hint 7.26
Hint 7.27
Reverse the orientation of the contour so that it encircles infinity and does not contain any branch
points.
Hint 7.28
Consider a contour that encircles all the branch points in the finite complex plane. Reverse the
orientation of the contour so that it contains the point at infinity and does not contain any branch
points in the finite complex plane.
Hint 7.29
Factor the polynomial. The argument of z^{1/4} changes by π/2 on a contour that goes around the
origin once in the positive direction.
Hint 7.30
Hint 7.31
To define the branch, define angles from each of the branch points in the finite complex plane.
Hint 7.32
Hint 7.33
Hint 7.34
Hint 7.35
Hint 7.36
Hint 7.37
Hint 7.38
7.12 Solutions
Cartesian and Modulus-Argument Form
Solution 7.1
Let w = u + ıv. We consider the strip 2 < x < 3 as composed of vertical lines. Consider the vertical
line: z = c + ıy, y ∈ R for constant c. We find the image of this line under the mapping.
w = (c + ıy)^2
w = c^2 − y^2 + ı2cy
u = c^2 − y^2, v = 2cy
This is a parabola that opens to the left. We can parameterize the curve in terms of v.
u = c^2 − v^2/(4c^2), v ∈ R
The boundaries of the region, x = 2 and x = 3, are respectively mapped to the parabolas:
u = 4 − v^2/16, v ∈ R and u = 9 − v^2/36, v ∈ R
We write the image of the mapping in set notation.
w = u + ıv : v ∈ R and 4 − v^2/16 < u < 9 − v^2/36 .
See Figure 7.35 for depictions of the strip and its image under the mapping. The mapping is
one-to-one. Since the image of the strip is open and connected, it is a domain.
Figure 7.35: The domain 2 < x < 3 and its image under the mapping w = z^2.
Solution 7.2
We write the mapping w = z^4 in polar coordinates.
w = z^4 = (r e^{ıθ})^4 = r^4 e^{ı4θ}
Thus we see that
w : {r e^{ıθ} | r ≥ 0, 0 ≤ θ < φ} → {r^4 e^{ı4θ} | r ≥ 0, 0 ≤ θ < φ} = {r e^{ıθ} | r ≥ 0, 0 ≤ θ < 4φ}.
We can state this in terms of the argument.
w : {z | 0 ≤ arg(z) < φ} → {z | 0 ≤ arg(z) < 4φ}
If φ = π/2, the sector will be mapped exactly to the whole complex plane.
Trigonometric Functions
Solution 7.3
sin z = (1/ı2)(e^{ız} − e^{−ız})
      = (1/ı2)(e^{−y+ıx} − e^{y−ıx})
      = (1/ı2)(e^{−y}(cos x + ı sin x) − e^{y}(cos x − ı sin x))
      = (1/2)(e^{−y}(sin x − ı cos x) + e^{y}(sin x + ı cos x))
      = sin x cosh y + ı cos x sinh y
sin z = √(sin^2 x cosh^2 y + cos^2 x sinh^2 y) exp(ı arctan(sin x cosh y, cos x sinh y))
      = √(cosh^2 y − cos^2 x) exp(ı arctan(sin x cosh y, cos x sinh y))
      = √((cosh(2y) − cos(2x))/2) exp(ı arctan(sin x cosh y, cos x sinh y))
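This decomposition is easy to corroborate numerically. The following is a minimal sketch (assuming
Python's standard cmath and math modules) that compares sin z with sin x cosh y + ı cos x sinh y
and checks the modulus formula at a sample point.

import cmath, math

z = complex(0.7, -1.3)
x, y = z.real, z.imag
lhs = cmath.sin(z)
rhs = complex(math.sin(x)*math.cosh(y), math.cos(x)*math.sinh(y))
print(abs(lhs - rhs))                                             # ~ 0
print(abs(lhs), math.sqrt((math.cosh(2*y) - math.cos(2*x))/2))    # equal moduli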
Solution 7.4
In order that e^z be zero, the modulus e^x must be zero. Since e^x = 0 has no finite solutions,
e^z = 0 has no finite solutions.
Solution 7.5
We write the expressions in terms of Cartesian coordinates.
|e^{z^2}| = |e^{(x+ıy)^2}| = |e^{x^2−y^2+ı2xy}| = e^{x^2−y^2}
e^{|z|^2} = e^{|x+ıy|^2} = e^{x^2+y^2}
The exponential function is an increasing function for real variables. Since x^2 − y^2 ≤ x^2 + y^2,
e^{x^2−y^2} ≤ e^{x^2+y^2}.
|e^{z^2}| ≤ e^{|z|^2}
Equality holds only when y = 0.
Solution 7.6
coth(z) = 1
((e^z + e^{−z})/2) / ((e^z − e^{−z})/2) = 1
e^z + e^{−z} = e^z − e^{−z}
e^{−z} = 0
There are no solutions.
Solution 7.7
We write out the multi-valuedness of 2^z.
2 ∈ 2^z
e^{ln 2} ∈ e^{z log(2)}
e^{ln 2} ∈ {e^{z(ln(2)+ı2πn)} | n ∈ Z}
ln(2) + ı2πm = z(ln(2) + ı2πn), m, n ∈ Z
z = {(ln(2) + ı2πm)/(ln(2) + ı2πn) | m, n ∈ Z}
We verify this solution. Consider m and n to be fixed integers. We express the multi-valuedness in
terms of k.
2^{(ln(2)+ı2πm)/(ln(2)+ı2πn)} = e^{((ln(2)+ı2πm)/(ln(2)+ı2πn)) log(2)} = e^{((ln(2)+ı2πm)/(ln(2)+ı2πn))(ln(2)+ı2πk)}, k ∈ Z
For k = n, this has the value e^{ln(2)+ı2πm} = e^{ln(2)} = 2.
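A numerical spot check, given as a sketch below; the branch index k = n is selected explicitly,
since the principal value of the power alone corresponds to k = 0.

import cmath, math

m, n = 3, -2
z = (math.log(2) + 2j*math.pi*m) / (math.log(2) + 2j*math.pi*n)
# the k = n branch of 2^z, i.e. exp(z*(ln 2 + i*2*pi*n))
print(cmath.exp(z*(math.log(2) + 2j*math.pi*n)))   # ~ (2+0j)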
Solution 7.8
We write out the multi-valuedness of 1^z.
1 ∈ 1^z
1 ∈ e^{z log(1)}
1 ∈ {e^{ız2πn} | n ∈ Z}
The element corresponding to n = 0 is e^0 = 1. Thus 1 ∈ 1^z has the solutions
z ∈ C.
That is, z may be any complex number. We verify this solution.
1^z = e^{z log(1)} = e^{ız2πn}
For n = 0, this has the value 1.
Logarithmic Identities
Solution 7.9
We write the relationship in terms of the natural logarithm and the principal argument.
Log(z1 z2) = Log(z1) + Log(z2)
ln |z1 z2| + ı Arg(z1 z2) = ln |z1| + ı Arg(z1) + ln |z2| + ı Arg(z2)
Arg(z1 z2) = Arg(z1) + Arg(z2)
ℜ(zk) > 0 implies that Arg(zk) ∈ (−π/2 . . . π/2). Thus Arg(z1) + Arg(z2) ∈ (−π . . . π). In this case
the relationship holds.
The relationship does not hold in general because Arg(z1) + Arg(z2) is not necessarily in the
interval (−π . . . π]. Consider z1 = z2 = −1.
Arg((−1)(−1)) = Arg(1) = 0, Arg(−1) + Arg(−1) = 2π
Log((−1)(−1)) = Log(1) = 0, Log(−1) + Log(−1) = ı2π
Solution 7.10
1. The algebraic manipulations are fine. We write out the multi-valuedness of the logarithms.
log(−1) = log(1/(−1)) = log(1) − log(−1) = − log(−1)
{ıπ + ı2πn : n ∈ Z} = {ıπ + ı2πn : n ∈ Z} = {ı2πn : n ∈ Z} − {ıπ + ı2πn : n ∈ Z} = {−ıπ − ı2πn : n ∈ Z}
Thus log(−1) = − log(−1). However this does not imply that log(−1) = 0. This is because
the logarithm is a set-valued function; log(−1) = − log(−1) is really saying:
{ıπ + ı2πn : n ∈ Z} = {−ıπ − ı2πn : n ∈ Z}
2. We consider
1 = 1^{1/2} = ((−1)(−1))^{1/2} = (−1)^{1/2}(−1)^{1/2} = ı·ı = −1.
There are three multi-valued expressions above.
1^{1/2} = ±1
((−1)(−1))^{1/2} = ±1
(−1)^{1/2}(−1)^{1/2} = (±ı)(±ı) = ±1
Thus we see that the first and fourth equalities are incorrect:
1 ≠ 1^{1/2}, (−1)^{1/2}(−1)^{1/2} ≠ ı·ı
Solution 7.11
2^{2/5} = 4^{1/5} = 4^{1/5} 1^{1/5} = 4^{1/5} e^{ı2nπ/5}, n = 0, 1, 2, 3, 4
3^{1+ı} = e^{(1+ı) log 3} = e^{(1+ı)(ln 3+ı2πn)} = e^{ln 3−2πn} e^{ı(ln 3+2πn)}, n ∈ Z
(√3 − ı)^{1/4} = (2 e^{−ıπ/6})^{1/4} = 2^{1/4} e^{−ıπ/24} 1^{1/4} = 2^{1/4} e^{ı(πn/2−π/24)}, n = 0, 1, 2, 3
1^{ı/4} = e^{(ı/4) log 1} = e^{(ı/4)(ı2πn)} = e^{−πn/2}, n ∈ Z
Solution 7.12
cos z = 69
(e^{ız} + e^{−ız})/2 = 69
e^{ı2z} − 138 e^{ız} + 1 = 0
e^{ız} = (138 ± √(138^2 − 4))/2 = 69 ± 2√1190
z = −ı log(69 ± 2√1190)
z = −ı(ln(69 ± 2√1190) + ı2πn)
z = 2πn − ı ln(69 ± 2√1190), n ∈ Z
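As a sanity check, the principal solution can be fed back into the cosine; a sketch using Python's
cmath (the other solutions differ by 2πn or the choice of sign):

import cmath, math

z = -1j * cmath.log(69 + 2*math.sqrt(1190))
print(cmath.cos(z))   # ~ (69+0j)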
Solution 7.13
cot z = ı47
((e^{ız} + e^{−ız})/2) / ((e^{ız} − e^{−ız})/(ı2)) = ı47
e^{ız} + e^{−ız} = 47(e^{ız} − e^{−ız})
46 e^{ı2z} − 48 = 0
e^{ı2z} = 24/23
ı2z = log(24/23)
z = −(ı/2) log(24/23)
z = −(ı/2)(ln(24/23) + ı2πn), n ∈ Z
z = πn − (ı/2) ln(24/23), n ∈ Z
Solution 7.14
1.
log(−ı) = ln |−ı| + ı arg(−ı) = ln(1) + ı(−π/2 + 2πn), n ∈ Z
log(−ı) = −ıπ/2 + ı2πn, n ∈ Z
These are equally spaced points on the imaginary axis. See Figure 7.36.
2.
(−ı)^{−ı} = e^{−ı log(−ı)} = e^{−ı(−ıπ/2+ı2πn)}, n ∈ Z
(−ı)^{−ı} = e^{−π/2+2πn}, n ∈ Z
These are points on the positive real axis with an accumulation point at the origin. See
Figure 7.37.
Figure 7.36: The values of log(−ı).
Figure 7.37: The values of (−ı)^{−ı}.
3.
3^π = e^{π log(3)} = e^{π(ln(3)+ı arg(3))}
3^π = e^{π(ln(3)+ı2πn)}, n ∈ Z
These points all lie on the circle of radius 3^π centered about the origin in the complex plane.
See Figure 7.38.
Figure 7.38: The values of 3^π.
4.
log(log(ı)) = log(ı(π/2 + 2πm)), m ∈ Z
            = ln |π/2 + 2πm| + ı Arg(ı(π/2 + 2πm)) + ı2πn, m, n ∈ Z
            = ln |π/2 + 2πm| + ı sign(1 + 4m) π/2 + ı2πn, m, n ∈ Z
These points all lie in the right half-plane. See Figure 7.39.
Figure 7.39: The values of log(log(ı)).
Solution 7.15
1.
(cosh(ıπ))^{ı2} = ((e^{ıπ} + e^{−ıπ})/2)^{ı2}
              = (−1)^{ı2}
              = e^{ı2 log(−1)}
              = e^{ı2(ln(1)+ıπ+ı2πn)}, n ∈ Z
              = e^{−2π(1+2n)}, n ∈ Z
These are points on the positive real axis with an accumulation point at the origin. See
Figure 7.40.
Figure 7.40: The values of (cosh(ıπ))^{ı2}.
2.
log(1/(1 + ı)) = − log(1 + ı)
              = − log(√2 e^{ıπ/4})
              = −(1/2) ln(2) − log(e^{ıπ/4})
              = −(1/2) ln(2) − ıπ/4 + ı2πn, n ∈ Z
These are points on a vertical line in the complex plane. See Figure 7.41.
Figure 7.41: The values of log(1/(1 + ı)).
3.
arctan(ı3) = (1/ı2) log((ı − ı3)/(ı + ı3))
           = (1/ı2) log(−1/2)
           = (1/ı2)(ln(1/2) + ıπ + ı2πn), n ∈ Z
           = π/2 + πn + (ı/2) ln(2)
These are points on a horizontal line in the complex plane. See Figure 7.42.
Figure 7.42: The values of arctan(ı3).
Solution 7.16
ı^ı = e^{ı log(ı)}
    = e^{ı(ln |ı|+ı Arg(ı)+ı2πn)}, n ∈ Z
    = e^{ı(ıπ/2+ı2πn)}, n ∈ Z
    = e^{−π(1/2+2n)}, n ∈ Z
These are points on the positive real axis. There is an accumulation point at z = 0. See Figure 7.43.
log((1 + ı)^{ıπ}) = log(e^{ıπ log(1+ı)})
                 = ıπ log(1 + ı) + ı2πn, n ∈ Z
                 = ıπ(ln |1 + ı| + ı Arg(1 + ı) + ı2πm) + ı2πn, m, n ∈ Z
                 = ıπ((1/2) ln 2 + ıπ/4 + ı2πm) + ı2πn, m, n ∈ Z
                 = −π^2(1/4 + 2m) + ıπ((1/2) ln 2 + 2n), m, n ∈ Z
Figure 7.43: The values of ı^ı.
See Figure 7.44 for a plot.
Figure 7.44: The values of log((1 + ı)^{ıπ}).
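The real values of ı^ı are easy to tabulate; a short sketch (n = 0 recovers the principal value
e^{−π/2}):

import math

for n in range(-2, 3):
    print(n, math.exp(-math.pi*(0.5 + 2*n)))
# n = 0 gives 0.20787..., which equals the principal value (1j**1j).real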
Solution 7.17
1.
e^z = ı
z = log(ı)
z = ln |ı| + ı arg(ı)
z = ln(1) + ı(π/2 + 2πn), n ∈ Z
z = ıπ/2 + ı2πn, n ∈ Z
2. We can solve the equation by writing the cosine and sine in terms of the exponential.
cos z = sin z
(e^{ız} + e^{−ız})/2 = (e^{ız} − e^{−ız})/ı2
(1 + ı) e^{ız} = (−1 + ı) e^{−ız}
e^{ı2z} = (−1 + ı)/(1 + ı)
e^{ı2z} = ı
ı2z = log(ı)
ı2z = ıπ/2 + ı2πn, n ∈ Z
z = π/4 + πn, n ∈ Z
3.
tan^2 z = −1
sin^2 z = − cos^2 z
cos z = ±ı sin z
(e^{ız} + e^{−ız})/2 = ±ı (e^{ız} − e^{−ız})/ı2
e^{−ız} = − e^{−ız} or e^{ız} = − e^{ız}
e^{−ız} = 0 or e^{ız} = 0
e^{y−ıx} = 0 or e^{−y+ıx} = 0
e^y = 0 or e^{−y} = 0
z = ∅
There are no solutions for finite z.
Solution 7.18
1.
w = arctan(z)
z = tan(w)
z = sin(w)/cos(w)
z = ((e^{ıw} − e^{−ıw})/ı2) / ((e^{ıw} + e^{−ıw})/2)
z e^{ıw} + z e^{−ıw} = −ı e^{ıw} + ı e^{−ıw}
(ı + z) e^{ı2w} = (ı − z)
e^{ıw} = ((ı − z)/(ı + z))^{1/2}
w = −ı log(((ı − z)/(ı + z))^{1/2})
arctan(z) = (ı/2) log((ı + z)/(ı − z))
We identify the branch points of the arctangent.
arctan(z) = (ı/2)(log(ı + z) − log(ı − z))
There are branch points at z = ±ı due to the logarithm terms. We examine the point at
infinity with the change of variables ζ = 1/z.
arctan(1/ζ) = (ı/2) log((ı + 1/ζ)/(ı − 1/ζ))
arctan(1/ζ) = (ı/2) log((ıζ + 1)/(ıζ − 1))
As ζ → 0, the argument of the logarithm term tends to −1. The logarithm does not have a
branch point at that point. Since arctan(1/ζ) does not have a branch point at ζ = 0, arctan(z)
does not have a branch point at infinity.
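Away from the branch cuts, this closed form agrees with the library arctangent. A quick sketch,
assuming Python's cmath (whose atan and log use the principal branches); the test points are
arbitrary, off-cut samples:

import cmath

def arctan(z):
    return (1j/2) * cmath.log((1j + z)/(1j - z))

for z in (0.3, 0.3 + 0.4j, -1.2 + 0.5j):
    print(arctan(z), cmath.atan(z))   # pairs agree to rounding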
2.
w = arctanh(z)
z = tanh(w)
z = sinh(w)/cosh(w)
z = ((e^w − e^{−w})/2) / ((e^w + e^{−w})/2)
z e^w + z e^{−w} = e^w − e^{−w}
(z − 1) e^{2w} = −z − 1
e^w = ((−z − 1)/(z − 1))^{1/2}
w = log(((z + 1)/(1 − z))^{1/2})
arctanh(z) = (1/2) log((1 + z)/(1 − z))
We identify the branch points of the hyperbolic arctangent.
arctanh(z) = (1/2)(log(1 + z) − log(1 − z))
There are branch points at z = ±1 due to the logarithm terms. We examine the point at
infinity with the change of variables ζ = 1/z.
arctanh(1/ζ) = (1/2) log((1 + 1/ζ)/(1 − 1/ζ))
arctanh(1/ζ) = (1/2) log((ζ + 1)/(ζ − 1))
As ζ → 0, the argument of the logarithm term tends to −1. The logarithm does not have
a branch point at that point. Since arctanh(1/ζ) does not have a branch point at ζ = 0,
arctanh(z) does not have a branch point at infinity.
3.
w = arccosh(z)
z = cosh(w)
z = (e^w + e^{−w})/2
e^{2w} − 2z e^w + 1 = 0
e^w = z + (z^2 − 1)^{1/2}
w = log(z + (z^2 − 1)^{1/2})
arccosh(z) = log(z + (z^2 − 1)^{1/2})
We identify the branch points of the hyperbolic arc-cosine.
arccosh(z) = log(z + (z − 1)^{1/2}(z + 1)^{1/2})
First we consider branch points due to the square root. There are branch points at z = ±1 due
to the square root terms. If we walk around the singularity at z = 1 and no other singularities,
the (z^2 − 1)^{1/2} term changes sign. This will change the value of arccosh(z). The same is true
for the point z = −1. The point at infinity is not a branch point for (z^2 − 1)^{1/2}. We factor
the expression to verify this.
(z^2 − 1)^{1/2} = (z^2)^{1/2} (1 − z^{−2})^{1/2}
(z^2)^{1/2} does not have a branch point at infinity. It is multi-valued, but it has no branch points.
(1 − z^{−2})^{1/2} does not have a branch point at infinity; the argument of the square root function
tends to unity there. In summary, there are branch points at z = ±1 due to the square root. If
we walk around either one of these branch points, the square root term will change value.
If we walk around both of these points, the square root term will not change value.
Now we consider branch points due to the logarithm. There may be branch points where the
argument of the logarithm vanishes or tends to infinity. We see if the argument of the logarithm
vanishes.
z + (z^2 − 1)^{1/2} = 0
z^2 = z^2 − 1
z + (z^2 − 1)^{1/2} is non-zero and finite everywhere in the complex plane. The only possibility
for a branch point in the logarithm term is the point at infinity. We see if the argument of
z + (z^2 − 1)^{1/2} changes when we walk around infinity but no other singularity. We consider a
circular path with center at the origin and radius greater than unity. We can either say that
this path encloses the two branch points at z = ±1 and no other singularities or we can say
that this path encloses the point at infinity and no other singularities. We examine the value
of the argument of the logarithm on this path.
z + (z^2 − 1)^{1/2} = z + (z^2)^{1/2} (1 − z^{−2})^{1/2}
Neither (z^2)^{1/2} nor (1 − z^{−2})^{1/2} changes value as we walk the path. Thus we can use the
principal branch of the square root in the expression.
z + (z^2 − 1)^{1/2} = z ± z √(1 − z^{−2}) = z(1 ± √(1 − z^{−2}))
First consider the “+” branch,
z(1 + √(1 − z^{−2})).
As we walk the path around infinity, the argument of z changes by 2π while the argument of
1 + √(1 − z^{−2}) does not change. Thus the argument of z + (z^2 − 1)^{1/2} changes by 2π when
we go around infinity. This makes the value of the logarithm change by ı2π. There is a branch
point at infinity.
Now consider the “−” branch,
z(1 − √(1 − z^{−2})) = z(1 − 1 + (1/2) z^{−2} + O(z^{−4})) = z((1/2) z^{−2} + O(z^{−4})) = (1/2) z^{−1} (1 + O(z^{−2})).
As we walk the path around infinity, the argument of z^{−1} changes by −2π while the argument
of 1 + O(z^{−2}) does not change. Thus the argument of z + (z^2 − 1)^{1/2} changes by −2π
when we go around infinity. This makes the value of the logarithm change by −ı2π. Again we
conclude that there is a branch point at infinity.
For the sole purpose of overkill, let's repeat the above analysis from a geometric viewpoint.
Again we consider the possibility of a branch point at infinity due to the logarithm. We walk
along the circle shown in the first plot of Figure 7.45. Traversing this path, we go around
infinity, but no other singularities. We consider the mapping w = z + (z^2 − 1)^{1/2}. Depending
on the branch of the square root, the circle is mapped to one of the contours shown in
the second plot. For each branch, the argument of w changes by ±2π as we traverse the circle
in the z-plane. Therefore the value of arccosh(z) = log(z + (z^2 − 1)^{1/2}) changes by ±ı2π as
we traverse the circle. We again conclude that there is a branch point at infinity due to the
logarithm.
Figure 7.45: The mapping of a circle under w = z + (z^2 − 1)^{1/2}.
To summarize: There are branch points at z = ±1 due to the square root and a branch point
at infinity due to the logarithm.
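The winding argument can be corroborated numerically. The sketch below (assuming numpy is
available) continues the square root continuously along the circle |z| = 2 and measures the net
change in arg w; a change of 2π confirms the branch point of the logarithm at infinity.

import numpy as np

t = np.linspace(0.0, 2.0*np.pi, 4001)
z = 2.0*np.exp(1j*t)                  # the circle |z| = 2
s = np.sqrt(z**2 - 1.0)               # principal root; may jump sign on the path
w = np.empty_like(z)
prev = s[0]
w[0] = z[0] + prev
for k in range(1, len(z)):
    cand = s[k]
    # choose the sign that continues the square root continuously
    if abs(cand - prev) > abs(cand + prev):
        cand = -cand
    w[k] = z[k] + cand
    prev = cand
darg = np.unwrap(np.angle(w))
print((darg[-1] - darg[0]) / (2.0*np.pi))   # ~ 1.0: arg w changes by 2*pi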
Branch Points and Branch Cuts
Solution 7.19
We expand the function to diagnose the branch points in the finite complex plane.
f(z) = log
z(z + 1)
z − 1
= log(z) + log(z + 1) − log(z − 1)
The are branch points at z = −1, 0, 1. Now we examine the point at infinity. We make the change
of variables z = 1/ζ.
f
1
ζ
= log
(1/ζ)(1/ζ + 1)
(1/ζ − 1)
= log
1
ζ
(1 + ζ
1 − ζ
= log(1 + ζ) − log(1 − ζ) − log(ζ)
log(ζ) has a branch point at ζ = 0. The other terms do not have branch points there. Since f(1/ζ)
has a branch point at ζ = 0 f(z) has a branch point at infinity.
Note that in walking around either z = −1 or z = 0 once in the positive direction, the argument
of z(z +1)/(z −1) changes by 2π. In walking around z = 1, the argument of z(z +1)/(z −1) changes
by −2π. This argument does not change if we walk around both z = 0 and z = 1. Thus we put a
branch cut between z = 0 and z = 1. Next be put a branch cut between z = −1 and the point at
infinity. This prevents us from walking around either of these branch points. These two branch cuts
separate the branches of the function. See Figure 7.46
Figure 7.46: Branch cuts for log(z(z + 1)/(z − 1)).
Solution 7.20
First we factor the function.
f(z) = (z(z + 3)(z − 2))^{1/2} = z^{1/2}(z + 3)^{1/2}(z − 2)^{1/2}
There are branch points at z = −3, 0, 2. Now we examine the point at infinity.
f(1/ζ) = ((1/ζ)(1/ζ + 3)(1/ζ − 2))^{1/2} = ζ^{−3/2}((1 + 3ζ)(1 − 2ζ))^{1/2}
Since ζ^{−3/2} has a branch point at ζ = 0 and the rest of the terms are analytic there, f(z) has a
branch point at infinity.
Consider the set of branch cuts in Figure 7.47. These cuts do not permit us to walk around any
single branch point. We can only walk around none or all of the branch points, (which is the same
thing). The cuts can be used to define a single-valued branch of the function.
Figure 7.47: Branch cuts for (z^3 + z^2 − 6z)^{1/2}.
Now to define the branch. We make a choice of angles.
z + 3 = r1 e^{ıθ1}, −π < θ1 < π
z = r2 e^{ıθ2}, −π/2 < θ2 < 3π/2
z − 2 = r3 e^{ıθ3}, 0 < θ3 < 2π
The function is
f(z) = (r1 e^{ıθ1} r2 e^{ıθ2} r3 e^{ıθ3})^{1/2} = √(r1 r2 r3) e^{ı(θ1+θ2+θ3)/2}.
We evaluate the function at z = −1.
f(−1) = √((2)(1)(3)) e^{ı(0+π+π)/2} = −√6
We see that our choice of angles gives us the desired branch.
The stereographic projection is the projection from the complex plane onto a unit sphere with
south pole at the origin. The point z = x + ıy is mapped to the point (X, Y, Z) on the sphere with
X = 4x/(|z|^2 + 4), Y = 4y/(|z|^2 + 4), Z = 2|z|^2/(|z|^2 + 4).
Figure 7.48 first shows the branch cuts and their stereographic projections and then shows the
stereographic projections alone.
Figure 7.48: Branch cuts for (z^3 + z^2 − 6z)^{1/2} and their stereographic projections.
Solution 7.21
1. For each value of z, f(z) = z^{1/3} has three values.
f(z) = z^{1/3} = ∛z e^{ık2π/3}, k = 0, 1, 2
2.
g(w) = w^3 = |w|^3 e^{ı3 arg(w)}
Any sector of the w plane of angle 2π/3 maps one-to-one to the whole z-plane.
g : {r e^{ıθ} | r ≥ 0, θ0 ≤ θ < θ0 + 2π/3} → {r^3 e^{ı3θ} | r ≥ 0, θ0 ≤ θ < θ0 + 2π/3}
  = {r e^{ıθ} | r ≥ 0, 3θ0 ≤ θ < 3θ0 + 2π}
  = C
See Figure 7.49 to see how g(w) maps the sector 0 ≤ θ < 2π/3.
3. See Figure 7.50 for a depiction of the Riemann surface for f(z) = z^{1/3}. We show two views of
the surface and a curve that traces the edge of the shown portion of the surface. The depiction
is misleading because the surface is not self-intersecting. We would need four dimensions to
properly visualize this Riemann surface.
4. f(z) = z^{1/3} has branch points at z = 0 and z = ∞. Any branch cut which connects these two
points would prevent us from walking around the points singly and would thus separate the
branches of the function. For example, we could put a branch cut on the negative real axis.
Defining the angle −π < θ < π for the mapping
f(r e^{ıθ}) = ∛r e^{ıθ/3}
defines a single-valued branch of the function.
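The three values and the principal-angle branch are simple to compute; a short sketch (the helper
name cube_roots is ours, purely illustrative):

import cmath, math

def cube_roots(z):
    r, theta = abs(z), cmath.phase(z)      # -pi < theta <= pi
    return [r**(1/3) * cmath.exp(1j*(theta/3 + 2*math.pi*k/3)) for k in range(3)]

print(cube_roots(-8))   # [1+1.732j, -2+0j, 1-1.732j] up to rounding and order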
Solution 7.22
The cube roots of 1 are
{1, e^{ı2π/3}, e^{ı4π/3}} = {1, (−1 + ı√3)/2, (−1 − ı√3)/2}.
Figure 7.49: The function g(w) = w^3 maps the sector 0 ≤ θ < 2π/3 one-to-one to the whole z-plane.
Figure 7.50: Riemann surface for f(z) = z^{1/3}.
We factor the polynomial.
(z^3 − 1)^{1/2} = (z − 1)^{1/2} (z + (1 − ı√3)/2)^{1/2} (z + (1 + ı√3)/2)^{1/2}
There are branch points at each of the cube roots of unity,
z = 1, (−1 + ı√3)/2, (−1 − ı√3)/2.
Now we examine the point at infinity. We make the change of variables z = 1/ζ.
f(1/ζ) = (1/ζ^3 − 1)^{1/2} = ζ^{−3/2} (1 − ζ^3)^{1/2}
ζ^{−3/2} has a branch point at ζ = 0, while (1 − ζ^3)^{1/2} is not singular there. Since f(1/ζ) has a branch
point at ζ = 0, f(z) has a branch point at infinity.
There are several ways of introducing branch cuts to separate the branches of the function. The
easiest approach is to put a branch cut from each of the three branch points in the finite complex
plane out to the branch point at infinity. See Figure 7.51a. Clearly this makes the function single-
valued as it is impossible to walk around any of the branch points. Another approach is to have a
branch cut from one of the branch points in the finite plane to the branch point at infinity and a
branch cut connecting the remaining two branch points. See Figure 7.51bcd. Note that in walking
around any one of the finite branch points, (in the positive direction), the argument of the function
changes by π. This means that the value of the function changes by e^{ıπ}, which is to say the value
of the function changes sign. In walking around any two of the finite branch points, (again in the
positive direction), the argument of the function changes by 2π. This means that the value of the
function changes by e^{ı2π}, which is to say that the value of the function does not change. This
demonstrates that the latter branch cut approach makes the function single-valued.
Figure 7.51: Suitable branch cuts for (z^3 − 1)^{1/2} (four sets, labeled a, b, c, d).
Now we construct a branch. We will use the branch cuts in Figure 7.51a. We introduce variables
to measure radii and angles from the three finite branch points.
z − 1 = r1 e^{ıθ1}, 0 < θ1 < 2π
z + (1 − ı√3)/2 = r2 e^{ıθ2}, −2π/3 < θ2 < π/3
z + (1 + ı√3)/2 = r3 e^{ıθ3}, −π/3 < θ3 < 2π/3
We compute f(0) to see if it has the desired value.
f(z) = √(r1 r2 r3) e^{ı(θ1+θ2+θ3)/2}
f(0) = e^{ı(π−π/3+π/3)/2} = ı
Since it does not have the desired value, we change the range of θ1.
z − 1 = r1 e^{ıθ1}, 2π < θ1 < 4π
f(0) now has the desired value.
f(0) = e^{ı(3π−π/3+π/3)/2} = −ı
We compute f(−1).
f(−1) = √2 e^{ı(3π−2π/3+2π/3)/2} = −ı√2
Solution 7.23
First we factor the function.
w(z) = ((z + 2)(z − 1)(z − 6))^{1/2} = (z + 2)^{1/2}(z − 1)^{1/2}(z − 6)^{1/2}
There are branch points at z = −2, 1, 6. Now we examine the point at infinity.
w(1/ζ) = ((1/ζ + 2)(1/ζ − 1)(1/ζ − 6))^{1/2} = ζ^{−3/2} ((1 + 2ζ)(1 − ζ)(1 − 6ζ))^{1/2}
Since ζ^{−3/2} has a branch point at ζ = 0 and the rest of the terms are analytic there, w(z) has a
branch point at infinity.
Consider the set of branch cuts in Figure 7.52. These cuts let us walk around the branch points
at z = −2 and z = 1 together or, if we change our perspective, we would be walking around the
branch points at z = 6 and z = ∞ together. Consider a contour in this cut plane that encircles the
branch points at z = −2 and z = 1. Since the argument of (z − z0)^{1/2} changes by π when we walk
around z0, the argument of w(z) changes by 2π when we traverse the contour. Thus the value of
the function does not change and it is a valid set of branch cuts.
Figure 7.52: Branch cuts for ((z + 2)(z − 1)(z − 6))^{1/2}.
Now to define the branch. We make a choice of angles.
z + 2 = r1 e^{ıθ1}, θ1 = θ2 for z ∈ (1 . . . 6)
z − 1 = r2 e^{ıθ2}, θ2 = θ1 for z ∈ (1 . . . 6)
z − 6 = r3 e^{ıθ3}, 0 < θ3 < 2π
The function is
w(z) = (r1 e^{ıθ1} r2 e^{ıθ2} r3 e^{ıθ3})^{1/2} = √(r1 r2 r3) e^{ı(θ1+θ2+θ3)/2}.
We evaluate the function at z = 4.
w(4) = √((6)(3)(2)) e^{ı(2πn+2πn+π)/2} = ı6
We see that our choice of angles gives us the desired branch.
Solution 7.24
1.
cos(z^{1/2}) = cos(±√z) = cos(√z)
This is a single-valued function. There are no branch points.
2.
(z + ı)^{−z} = e^{−z log(z+ı)} = e^{−z(ln |z+ı|+ı Arg(z+ı)+ı2πn)}, n ∈ Z
There is a branch point at z = −ı. There are an infinite number of branches.
Solution 7.25
1.
f(z) = (z^2 + 1)^{1/2} = (z + ı)^{1/2}(z − ı)^{1/2}
We see that there are branch points at z = ±ı. To examine the point at infinity, we substitute
z = 1/ζ and examine the point ζ = 0.
((1/ζ)^2 + 1)^{1/2} = (1/(ζ^2)^{1/2}) (1 + ζ^2)^{1/2}
Since there is no branch point at ζ = 0, f(z) has no branch point at infinity.
A branch cut connecting z = ±ı would make the function single-valued. We could also
accomplish this with two branch cuts starting at z = ±ı and going to infinity.
2.
f(z) = (z^3 − z)^{1/2} = z^{1/2}(z − 1)^{1/2}(z + 1)^{1/2}
There are branch points at z = −1, 0, 1. Now we consider the point at infinity.
f(1/ζ) = ((1/ζ)^3 − 1/ζ)^{1/2} = ζ^{−3/2} (1 − ζ^2)^{1/2}
There is a branch point at infinity.
One can make the function single-valued with three branch cuts that start at z = −1, 0, 1
and each go to infinity. We can also make the function single-valued with a branch cut that
connects two of the points z = −1, 0, 1 and another branch cut that starts at the remaining
point and goes to infinity.
3.
f(z) = log(z^2 − 1) = log(z − 1) + log(z + 1)
There are branch points at z = ±1.
f(1/ζ) = log(1/ζ^2 − 1) = log(ζ^{−2}) + log(1 − ζ^2)
log(ζ^{−2}) has a branch point at ζ = 0.
log(ζ^{−2}) = ln |ζ^{−2}| + ı arg(ζ^{−2}) = ln |ζ^{−2}| − ı2 arg(ζ)
Every time we walk around the point ζ = 0 in the positive direction, the value of the function
changes by −ı4π. f(z) has a branch point at infinity.
We can make the function single-valued by introducing two branch cuts that start at z = ±1
and each go to infinity.
4.
f(z) = log((z + 1)/(z − 1)) = log(z + 1) − log(z − 1)
There are branch points at z = ±1.
f(1/ζ) = log((1/ζ + 1)/(1/ζ − 1)) = log((1 + ζ)/(1 − ζ))
There is no branch point at ζ = 0. f(z) has no branch point at infinity.
We can make the function single-valued by introducing two branch cuts that start at z = ±1
and each go to infinity. We can also make the function single-valued with a branch cut that
connects the points z = ±1. This is because log(z + 1) and − log(z − 1) change by ı2π and
−ı2π, respectively, when you walk around their branch points once in the positive direction.
Solution 7.26
1. The cube roots of −8 are
{−2, −2 e^{ı2π/3}, −2 e^{ı4π/3}} = {−2, 1 + ı√3, 1 − ı√3}.
Thus we can write
(z^3 + 8)^{1/2} = (z + 2)^{1/2} (z − 1 − ı√3)^{1/2} (z − 1 + ı√3)^{1/2}.
There are three branch points on the circle of radius 2,
z = {−2, 1 + ı√3, 1 − ı√3}.
We examine the point at infinity.
f(1/ζ) = (1/ζ^3 + 8)^{1/2} = ζ^{−3/2} (1 + 8ζ^3)^{1/2}
Since f(1/ζ) has a branch point at ζ = 0, f(z) has a branch point at infinity.
There are several ways of introducing branch cuts outside of the disk |z| < 2 to separate the
branches of the function. The easiest approach is to put a branch cut from each of the three
branch points in the finite complex plane out to the branch point at infinity. See Figure 7.53a.
Clearly this makes the function single-valued as it is impossible to walk around any of the
branch points. Another approach is to have a branch cut from one of the branch points in
the finite plane to the branch point at infinity and a branch cut connecting the remaining two
branch points. See Figure 7.53bcd. Note that in walking around any one of the finite branch
points, (in the positive direction), the argument of the function changes by π. This means that
the value of the function changes by e^{ıπ}, which is to say the value of the function changes sign.
In walking around any two of the finite branch points, (again in the positive direction), the
argument of the function changes by 2π. This means that the value of the function changes by
e^{ı2π}, which is to say that the value of the function does not change. This demonstrates that
the latter branch cut approach makes the function single-valued.
Figure 7.53: Suitable branch cuts for (z^3 + 8)^{1/2} (four sets, labeled a, b, c, d).
2.
f(z) = log(5 + ((z + 1)/(z − 1))^{1/2})
First we deal with the function
g(z) = ((z + 1)/(z − 1))^{1/2}
Note that it has branch points at z = ±1. Consider the point at infinity.
g(1/ζ) = ((1/ζ + 1)/(1/ζ − 1))^{1/2} = ((1 + ζ)/(1 − ζ))^{1/2}
Since g(1/ζ) has no branch point at ζ = 0, g(z) has no branch point at infinity. This means
that if we walk around both of the branch points at z = ±1, the function does not change
value. We can verify this with another method: When we walk around the point z = −1 once
in the positive direction, the argument of z + 1 changes by 2π, the argument of (z + 1)^{1/2}
changes by π and thus the value of (z + 1)^{1/2} changes by e^{ıπ} = −1. When we walk around the
point z = 1 once in the positive direction, the argument of z − 1 changes by 2π, the argument
of (z − 1)^{−1/2} changes by −π and thus the value of (z − 1)^{−1/2} changes by e^{−ıπ} = −1. f(z)
has branch points at z = ±1. When we walk around both points z = ±1 once in the positive
direction, the value of ((z + 1)/(z − 1))^{1/2} does not change. Thus we can make the function
single-valued with a branch cut which enables us to walk around either none or both of these
branch points. We put a branch cut from −1 to 1 on the real axis.
f(z) has branch points where
5 + ((z + 1)/(z − 1))^{1/2}
is either zero or infinite. The only place in the extended complex plane where the expression
becomes infinite is at z = 1. Now we look for the zeros.
5 + ((z + 1)/(z − 1))^{1/2} = 0
((z + 1)/(z − 1))^{1/2} = −5
(z + 1)/(z − 1) = 25
z + 1 = 25z − 25
z = 13/12
Note that
((13/12 + 1)/(13/12 − 1))^{1/2} = 25^{1/2} = ±5.
On one branch, (which we call the positive branch), of the function g(z) the quantity
5 + ((z + 1)/(z − 1))^{1/2}
is always nonzero. On the other (negative) branch of the function, this quantity has a zero at
z = 13/12.
The logarithm introduces branch points at z = 1 on both the positive and negative branch of
g(z). It introduces a branch point at z = 13/12 on the negative branch of g(z). To determine
if additional branch cuts are needed to separate the branches, we consider
w = 5 + ((z + 1)/(z − 1))^{1/2}
and see where the branch cut between ±1 gets mapped to in the w plane. We rewrite the
mapping.
w = 5 + (1 + 2/(z − 1))^{1/2}
The mapping is the following sequence of simple transformations:
(a) z → z − 1
(b) z → 1/z
(c) z → 2z
(d) z → z + 1
(e) z → z^{1/2}
(f) z → z + 5
For the positive branch of g(z), the branch cut is mapped to the line x = 5 and the z plane is
mapped to the half-plane x > 5. log(w) has branch points at w = 0 and w = ∞. It is possible
to walk around only one of these points in the half-plane x > 5. Thus no additional branch
cuts are needed in the positive sheet of g(z).
For the negative branch of g(z), the branch cut is mapped to the line x = 5 and the z plane
is mapped to the half-plane x < 5. It is possible to walk around either w = 0 or w = ∞ alone
in this half-plane. Thus we need an additional branch cut. On the negative sheet of g(z), we
put a branch cut between z = 1 and z = 13/12. This puts a branch cut between w = ∞ and
w = 0 and thus separates the branches of the logarithm.
3. The function f(z) = (z + ı3)^{1/2} has a branch point at z = −ı3. The function is made
single-valued by connecting this point and the point at infinity with a branch cut.
Solution 7.27
Note that the curve with opposite orientation goes around infinity in the positive direction and does
not enclose any branch points. Thus the value of the function does not change when traversing
Figure 7.54: The branch cuts for f(z) = log(5 + ((z + 1)/(z − 1))^{1/2}), shown on the negative sheet
(where g(13/12) = −5) and on the positive sheet (where g(13/12) = 5).
.
the curve, (with either orientation, of course). This means that the argument of the function must
change my an integer multiple of 2π. Since the branch cut only allows us to encircle all three or
none of the branch points, it makes the function single valued.
Solution 7.28
We suppose that f(z) has only one branch point in the finite complex plane. Consider any contour
that encircles this branch point in the positive direction. f(z) changes value if we traverse the
contour. If we reverse the orientation of the contour, then it encircles infinity in the positive direction,
but contains no branch points in the finite complex plane. Since the function changes value when
we traverse the contour, we conclude that the point at infinity must be a branch point. If f(z) has
only a single branch point in the finite complex plane then it must have a branch point at infinity.
If f(z) has two or more branch points in the finite complex plane then it may or may not have
a branch point at infinity. This is because the value of the function may or may not change on a
contour that encircles all the branch points in the finite complex plane.
Solution 7.29
First we factor the function,
f(z) = (z^4 + 1)^{1/4} = (z − (1 + ı)/√2)^{1/4} (z − (−1 + ı)/√2)^{1/4} (z − (−1 − ı)/√2)^{1/4} (z − (1 − ı)/√2)^{1/4}.
There are branch points at z = (±1 ± ı)/√2. We make the substitution z = 1/ζ to examine the
point at infinity.
f(1/ζ) = (1/ζ^4 + 1)^{1/4} = (1/(ζ^4)^{1/4}) (1 + ζ^4)^{1/4}
(ζ^4)^{1/4} has a removable singularity at the point ζ = 0, but no branch point there. Thus (z^4 + 1)^{1/4}
has no branch point at infinity.
Note that the argument of (z − z0)^{1/4} changes by π/2 on a contour that goes around the point
z0 once in the positive direction. The argument of (z^4 + 1)^{1/4} changes by nπ/2 on a contour that
goes around n of its branch points. Thus any set of branch cuts that permit you to walk around
only one, two or three of the branch points will not make the function single-valued. A set of branch
cuts that permit us to walk around only zero or all four of the branch points will make the function
single-valued. Thus we see that the first two sets of branch cuts in Figure 7.32 will make the function
single-valued, while the remaining two will not.
Consider the contour in Figure 7.32. There are two ways to see that the function does not change
value while traversing the contour. The first is to note that each of the branch points makes the
argument of the function increase by π/2. Thus the argument of (z^4 + 1)^{1/4} changes by 4(π/2) = 2π
on the contour. This means that the value of the function changes by the factor e^{ı2π} = 1. If we
change the orientation of the contour, then it is a contour that encircles infinity once in the positive
direction. There are no branch points inside this contour with opposite orientation. (Recall that
the inside of a contour lies to your left as you walk around it.) Since there are no branch points
inside this contour, the function cannot change value as we traverse it.
Solution 7.30
f(z) = (z/(z^2 + 1))^{1/3} = z^{1/3} (z − ı)^{−1/3} (z + ı)^{−1/3}
There are branch points at z = 0, ±ı.
f(1/ζ) = ((1/ζ)/((1/ζ)^2 + 1))^{1/3} = ζ^{1/3}/(1 + ζ^2)^{1/3}
There is a branch point at ζ = 0. f(z) has a branch point at infinity.
We introduce branch cuts from z = 0 to infinity on the negative real axis, from z = ı to infinity
on the positive imaginary axis and from z = −ı to infinity on the negative imaginary axis. As we
cannot walk around any of the branch points, this makes the function single-valued.
We define a branch by defining angles from the branch points. Let
z = r e^{ıθ}, −π < θ < π,
z − ı = s e^{ıφ}, −3π/2 < φ < π/2,
z + ı = t e^{ıψ}, −π/2 < ψ < 3π/2.
With
f(z) = z^{1/3} (z − ı)^{−1/3} (z + ı)^{−1/3} = ∛r e^{ıθ/3} (1/∛s) e^{−ıφ/3} (1/∛t) e^{−ıψ/3} = ∛(r/(st)) e^{ı(θ−φ−ψ)/3}
we have an explicit formula for computing the value of the function for this branch. Now we compute
f(1) to see if we chose the correct ranges for the angles. (If not, we’ll just change one of them.)
f(1) = ∛(1/(√2 √2)) e^{ı(0−(−π/4)−π/4)/3} = 1/∛2
We made the right choice for the angles. Now to compute f(1 + ı).
f(1 + ı) = ∛(√2/(1·√5)) e^{ı(π/4−0−Arctan(2))/3} = (2/5)^{1/6} e^{ı(π/4−Arctan(2))/3}
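The branch is easy to implement and spot-check numerically. The sketch below (the helper name
branch_f is ours, purely illustrative) measures the three angles with the ranges stated above,
adjusting Python's principal phase where the ranges differ from (−π, π].

import cmath, math

def branch_f(z):
    theta = cmath.phase(z)                 # already in (-pi, pi]
    phi = cmath.phase(z - 1j)              # want -3*pi/2 < phi < pi/2
    if phi > math.pi/2:
        phi -= 2*math.pi
    psi = cmath.phase(z + 1j)              # want -pi/2 < psi < 3*pi/2
    if psi < -math.pi/2:
        psi += 2*math.pi
    r, s, t = abs(z), abs(z - 1j), abs(z + 1j)
    return (r/(s*t))**(1/3) * cmath.exp(1j*(theta - phi - psi)/3)

print(branch_f(1))        # ~ 0.7937 = 1/2**(1/3)
print(branch_f(1 + 1j))   # ~ (2/5)**(1/6) * exp(i*(pi/4 - atan(2))/3)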
Consider the value of the function above and below the branch cut on the negative real axis. Above
the branch cut the function is
f(−x + ı0) = ∛(x/(√(x^2 + 1) √(x^2 + 1))) e^{ı(π−φ−ψ)/3}
Note that φ = −ψ so that
f(−x + ı0) = ∛(x/(x^2 + 1)) e^{ıπ/3} = ∛(x/(x^2 + 1)) (1 + ı√3)/2.
Below the branch cut θ = −π and
f(−x − ı0) = ∛(x/(x^2 + 1)) e^{ı(−π)/3} = ∛(x/(x^2 + 1)) (1 − ı√3)/2.
For the branch cut along the positive imaginary axis,
f(ıy + 0) = ∛(y/((y − 1)(y + 1))) e^{ı(π/2−π/2−π/2)/3}
          = ∛(y/((y − 1)(y + 1))) e^{−ıπ/6}
          = ∛(y/((y − 1)(y + 1))) (√3 − ı)/2,
f(ıy − 0) = ∛(y/((y − 1)(y + 1))) e^{ı(π/2−(−3π/2)−π/2)/3}
          = ∛(y/((y − 1)(y + 1))) e^{ıπ/2}
          = ı ∛(y/((y − 1)(y + 1))).
For the branch cut along the negative imaginary axis,
f(−ıy + 0) = ∛(y/((y + 1)(y − 1))) e^{ı(−π/2−(−π/2)−(−π/2))/3}
           = ∛(y/((y + 1)(y − 1))) e^{ıπ/6}
           = ∛(y/((y + 1)(y − 1))) (√3 + ı)/2,
f(−ıy − 0) = ∛(y/((y + 1)(y − 1))) e^{ı(−π/2−(−π/2)−(3π/2))/3}
           = ∛(y/((y + 1)(y − 1))) e^{−ıπ/2}
           = −ı ∛(y/((y + 1)(y − 1))).
Solution 7.31
First we factor the function.
f(z) = ((z − 1)(z − 2)(z − 3))^{1/2} = (z − 1)^{1/2}(z − 2)^{1/2}(z − 3)^{1/2}
There are branch points at z = 1, 2, 3. Now we examine the point at infinity.
f(1/ζ) = ((1/ζ − 1)(1/ζ − 2)(1/ζ − 3))^{1/2} = ζ^{−3/2} ((1 − ζ)(1 − 2ζ)(1 − 3ζ))^{1/2}
Since ζ^{−3/2} has a branch point at ζ = 0 and the rest of the terms are analytic there, f(z) has a
branch point at infinity.
The first two sets of branch cuts in Figure 7.33 do not permit us to walk around any of the branch
points, including the point at infinity, and thus make the function single-valued. The third set of
branch cuts lets us walk around the branch points at z = 1 and z = 2 together or, if we change our
perspective, we would be walking around the branch points at z = 3 and z = ∞ together. Consider
a contour in this cut plane that encircles the branch points at z = 1 and z = 2. Since the argument
of (z − z0)^{1/2} changes by π when we walk around z0, the argument of f(z) changes by 2π when we
traverse the contour. Thus the value of the function does not change and it is a valid set of branch
cuts. Clearly the fourth set of branch cuts does not make the function single-valued as there are
contours that encircle the branch point at infinity and no other branch points. The other way to see
this is to note that the argument of f(z) changes by 3π as we traverse a contour that goes around
the branch points at z = 1, 2, 3 once in the positive direction.
Now to define the branch. We make the preliminary choice of angles,
z − 1 = r1 e^{ıθ1}, 0 < θ1 < 2π,
z − 2 = r2 e^{ıθ2}, 0 < θ2 < 2π,
z − 3 = r3 e^{ıθ3}, 0 < θ3 < 2π.
The function is
f(z) = (r1 e^{ıθ1} r2 e^{ıθ2} r3 e^{ıθ3})^{1/2} = √(r1 r2 r3) e^{ı(θ1+θ2+θ3)/2}.
The value of the function at the origin is
f(0) = √6 e^{ı(3π)/2} = −ı√6,
which is not what we wanted. We will change the range of one of the angles to get the desired result.
z − 1 = r1 e^{ıθ1}, 0 < θ1 < 2π,
z − 2 = r2 e^{ıθ2}, 0 < θ2 < 2π,
z − 3 = r3 e^{ıθ3}, 2π < θ3 < 4π.
f(0) = √6 e^{ı(5π)/2} = ı√6.
Solution 7.32
w = ((z^2 − 2)(z + 2))^{1/3} = (z + √2)^{1/3} (z − √2)^{1/3} (z + 2)^{1/3}
There are branch points at z = ±√2 and z = −2. If we walk around any one of the branch points
once in the positive direction, the argument of w changes by 2π/3 and thus the value of the function
changes by e^{ı2π/3}. If we walk around all three branch points then the argument of w changes by
3 × 2π/3 = 2π. The value of the function is unchanged as e^{ı2π} = 1. Thus the branch cut on the real
axis from −2 to √2 makes the function single-valued.
Now we define a branch. Let
z − √2 = a e^{ıα}, z + √2 = b e^{ıβ}, z + 2 = c e^{ıγ}.
We constrain the angles as follows: On the positive real axis, α = β = γ. See Figure 7.55.
Figure 7.55: A branch of ((z^2 − 2)(z + 2))^{1/3}.
Now we determine w(2).
w(2) = (2 − √2)^{1/3} (2 + √2)^{1/3} (2 + 2)^{1/3}
     = ∛(2 − √2) e^{ı0} ∛(2 + √2) e^{ı0} ∛4 e^{ı0}
     = ∛2 ∛4
     = 2.
Note that we didn’t have to choose the angle from each of the branch points as zero. Choosing any
integer multiple of 2π would give us the same result.
w(−3) = (−3 − √2)^{1/3} (−3 + √2)^{1/3} (−3 + 2)^{1/3}
      = ∛(3 + √2) e^{ıπ/3} ∛(3 − √2) e^{ıπ/3} ∛1 e^{ıπ/3}
      = ∛7 e^{ıπ}
      = −∛7
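A quick consistency check: on any branch, w(z)^3 must reproduce (z^2 − 2)(z + 2). A sketch:

w2 = 2.0                      # the branch value w(2)
w3 = -(7.0 ** (1.0/3.0))      # the branch value w(-3)
print(w2**3, (2.0**2 - 2.0) * (2.0 + 2.0))        # 8.0  8.0
print(w3**3, ((-3.0)**2 - 2.0) * (-3.0 + 2.0))    # ~ -7.0  -7.0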
The value of the function is
w = ∛(abc) e^{ı(α+β+γ)/3}.
Consider the interval (−√2 . . . √2). As we approach the branch cut from above, the function has
the value,
w = ∛(abc) e^{ıπ/3} = ∛((√2 − x)(x + √2)(x + 2)) e^{ıπ/3}.
As we approach the branch cut from below, the function has the value,
w = ∛(abc) e^{−ıπ/3} = ∛((√2 − x)(x + √2)(x + 2)) e^{−ıπ/3}.
Consider the interval (−2 . . . −√2). As we approach the branch cut from above, the function
has the value,
w = ∛(abc) e^{ı2π/3} = ∛((√2 − x)(−x − √2)(x + 2)) e^{ı2π/3}.
As we approach the branch cut from below, the function has the value,
w = ∛(abc) e^{−ı2π/3} = ∛((√2 − x)(−x − √2)(x + 2)) e^{−ı2π/3}.
Solution 7.33
Arccos(x) is shown in Figure 7.56 for real variables in the range [−1 . . . 1].
Figure 7.56: The principal branch of the arc cosine, Arccos(x).
First we write arccos(z) in terms of log(z). If cos(w) = z, then w = arccos(z).
cos(w) = z
(e^{ıw} + e^{−ıw})/2 = z
(e^{ıw})^2 − 2z e^{ıw} + 1 = 0
e^{ıw} = z + (z^2 − 1)^{1/2}
w = −ı log(z + (z^2 − 1)^{1/2})
Thus we have
arccos(z) = −ı log(z + (z^2 − 1)^{1/2}).
Since Arccos(0) = π/2, we must find the branch such that
−ı log(0 + (0^2 − 1)^{1/2}) = π/2
−ı log((−1)^{1/2}) = π/2.
Since
−ı log(ı) = −ı(ıπ/2 + ı2πn) = π/2 + 2πn
and
−ı log(−ı) = −ı(−ıπ/2 + ı2πn) = −π/2 + 2πn,
we must choose the branch of the square root such that (−1)^{1/2} = ı and the branch of the logarithm
such that log(ı) = ıπ/2.
First we construct the branch of the square root.
(z^2 − 1)^{1/2} = (z + 1)^{1/2}(z − 1)^{1/2}
We see that there are branch points at z = −1 and z = 1. In particular we want the Arccos to be
defined for z = x, x ∈ [−1 . . . 1]. Hence we introduce branch cuts on the lines −∞ < x ≤ −1 and
1 ≤ x < ∞. Define the local coordinates
z + 1 = r e^{ıθ}, z − 1 = ρ e^{ıφ}.
With the given branch cuts, the angles have the possible ranges
{θ} = {. . . , (−π . . . π), (π . . . 3π), . . .}, {φ} = {. . . , (0 . . . 2π), (2π . . . 4π), . . .}.
Now we choose ranges for θ and φ and see if we get the desired branch. If not, we choose a different
range for one of the angles. First we choose the ranges
θ ∈ (−π . . . π), φ ∈ (0 . . . 2π).
If we substitute in z = 0 we get
(0^2 − 1)^{1/2} = (1 e^{ı0})^{1/2} (1 e^{ıπ})^{1/2} = e^{ı0} e^{ıπ/2} = ı
Thus we see that this choice of angles gives us the desired branch.
Now we go back to the expression
arccos(z) = −ı log(z + (z^2 − 1)^{1/2}).
Figure 7.57: Branch cuts and angles (θ = ±π along the left cut, φ = 0, 2π along the right cut) for
(z^2 − 1)^{1/2}.
We have already seen that there are branch points at z = −1 and z = 1 because of (z^2 − 1)^{1/2}. Now
we must determine if the logarithm introduces additional branch points. The only possibilities for
branch points are where the argument of the logarithm is zero.
z + (z^2 − 1)^{1/2} = 0
z^2 = z^2 − 1
0 = −1
We see that the argument of the logarithm is nonzero and thus there are no additional branch points.
Introduce the variable w = z + (z^2 − 1)^{1/2}. What is the image of the branch cuts in the w plane?
We parameterize the branch cut connecting z = 1 and z = +∞ with z = r + 1, r ∈ [0 . . . ∞).
w = r + 1 + ((r + 1)^2 − 1)^{1/2} = r + 1 ± √(r(r + 2)) = r(1 ± √(1 + 2/r)) + 1
r(1 + √(1 + 2/r)) + 1 is the interval [1 . . . ∞); r(1 − √(1 + 2/r)) + 1 is the interval (0 . . . 1]. Thus
we see that this branch cut is mapped to the interval (0 . . . ∞) in the w plane. Similarly, we could
show that the branch cut (−∞ . . . −1] in the z plane is mapped to (−∞ . . . 0) in the w plane. In the
w plane there is a branch cut along the real w axis from −∞ to ∞. This cut makes the logarithm
single-valued. For the branch of the square root that we chose, all the points in the z plane get
mapped to the upper half of the w plane.
With the branch cuts we have introduced so far and the chosen branch of the square root we
have
arccos(0) = −ı log(0 + (0^2 − 1)^{1/2})
          = −ı log(ı)
          = −ı(ıπ/2 + ı2πn)
          = π/2 + 2πn
Choosing the n = 0 branch of the logarithm will give us Arccos(z). We see that we can write
Arccos(z) = −ı Log(z + (z^2 − 1)^{1/2}).
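On the branch constructed above, (z^2 − 1)^{1/2} = ı(1 − z^2)^{1/2} with the principal square root, so
the formula can be spot-checked against the library arc cosine. A sketch (assuming Python's cmath;
the test points are arbitrary samples away from the cuts):

import cmath

def Arccos(z):
    return -1j * cmath.log(z + 1j * cmath.sqrt(1 - z*z))

for z in (0.0, 0.3, 0.3 + 0.4j):
    print(Arccos(z), cmath.acos(z))   # the two columns agree to rounding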
Solution 7.34
We consider the function f(z) = (z^{1/2} − 1)^{1/2}. First note that z^{1/2} has a branch point at z = 0. We
place a branch cut on the negative real axis to make it single-valued. f(z) will have a branch point
where z^{1/2} − 1 = 0. This occurs at z = 1 on the branch of z^{1/2} on which 1^{1/2} = 1. (1^{1/2} has the
value 1 on one branch of z^{1/2} and −1 on the other branch.) For this branch we introduce a branch
cut connecting z = 1 with the point at infinity. (See Figure 7.58.)
Figure 7.58: Branch cuts for (z^{1/2} − 1)^{1/2} on the branch where 1^{1/2} = 1 and on the branch where
1^{1/2} = −1.
.
Solution 7.35
The distance between the end of rod a and the end of rod c is b. In the complex plane, these points
are a e^{ıθ} and l + c e^{ıφ}, respectively. We write this out mathematically.
|l + c e^{ıφ} − a e^{ıθ}| = b
(l + c e^{ıφ} − a e^{ıθ})(l + c e^{−ıφ} − a e^{−ıθ}) = b^2
l^2 + cl e^{−ıφ} − al e^{−ıθ} + cl e^{ıφ} + c^2 − ac e^{ı(φ−θ)} − al e^{ıθ} − ac e^{ı(θ−φ)} + a^2 = b^2
cl cos φ − ac cos(φ − θ) − al cos θ = (b^2 − a^2 − c^2 − l^2)/2
This equation relates the two angular positions. One could differentiate the equation to relate the
velocities and accelerations.
Solution 7.36
1. Let w = u + ıv. First we do the strip |ℜ(z)| < 1. Consider the vertical line z = c + ıy, y ∈ R.
This line is mapped to
w = 2(c + ıy)^2
w = 2c^2 − 2y^2 + ı4cy
u = 2c^2 − 2y^2, v = 4cy
This is a parabola that opens to the left. For the case c = 0 it is the negative u axis. We can
parametrize the curve in terms of v.
u = 2c^2 − v^2/(8c^2), v ∈ R
The boundaries of the region are both mapped to the parabola:
u = 2 − v^2/8, v ∈ R.
The image of the mapping is
w = u + ıv : v ∈ R and u < 2 − v^2/8 .
Note that the mapping is two-to-one.
Now we do the strip 1 < ℑ(z) < 2. Consider the horizontal line z = x + ıc, x ∈ R. This line
is mapped to
w = 2(x + ıc)^2
w = 2x^2 − 2c^2 + ı4cx
u = 2x^2 − 2c^2, v = 4cx
This is a parabola that opens upward. We can parametrize the curve in terms of v.
u = v^2/(8c^2) − 2c^2, v ∈ R
The boundary ℑ(z) = 1 is mapped to
u = v^2/8 − 2, v ∈ R.
The boundary ℑ(z) = 2 is mapped to
u = v^2/32 − 8, v ∈ R.
The image of the mapping is
w = u + ıv : v ∈ R and v^2/32 − 8 < u < v^2/8 − 2 .
2. We write the transformation as
(z + 1)/(z − 1) = 1 + 2/(z − 1).
Thus we see that the transformation is the sequence:
(a) translation by −1
(b) inversion
(c) magnification by 2
(d) translation by 1
Consider the strip |ℜ(z)| < 1. The translation by −1 maps this to −2 < ℜ(z) < 0. Now we
do the inversion. The left edge, ℜ(z) = 0, is mapped to itself. The right edge, ℜ(z) = −2, is
mapped to the circle |z + 1/4| = 1/4. Thus the current image is the left half plane minus a
circle:
ℜ(z) < 0 and |z + 1/4| > 1/4.
The magnification by 2 yields
ℜ(z) < 0 and |z + 1/2| > 1/2.
The final step is a translation by 1.
ℜ(z) < 1 and |z − 1/2| > 1/2.
Now consider the strip 1 < ℑ(z) < 2. The translation by −1 does not change the domain.
Now we do the inversion. The bottom edge, ℑ(z) = 1, is mapped to the circle |z + ı/2| = 1/2.
The top edge, ℑ(z) = 2, is mapped to the circle |z + ı/4| = 1/4. Thus the current image is the
region between two circles:
|z + ı/2| < 1/2 and |z + ı/4| > 1/4.
The magnification by 2 yields
|z + ı| < 1 and |z + ı/2| > 1/2.
The final step is a translation by 1.
|z − 1 + ı| < 1 and |z − 1 + ı/2| > 1/2.
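The final region is easy to spot-check by mapping sample points from the strip directly; a sketch
(the three test points are arbitrary choices with 1 < ℑ(z) < 2):

for z in (1.5j, 2.0 + 1.2j, -3.0 + 1.9j):
    w = (z + 1)/(z - 1)
    print(abs(w - 1 + 1j) < 1, abs(w - 1 + 0.5j) > 0.5)   # True True each time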
Solution 7.37
1. There is a simple pole at z = −2. The function has a branch point at z = −1. Since this is
the only branch point in the finite complex plane there is also a branch point at infinity. We
can verify this with the substitution z = 1/ζ.
f(1/ζ) = (1/ζ + 1)^{1/2}/(1/ζ + 2) = ζ^{1/2}(1 + ζ)^{1/2}/(1 + 2ζ)
Since f(1/ζ) has a branch point at ζ = 0, f(z) has a branch point at infinity.
2. cos z is an entire function with an essential singularity at infinity. Thus f(z) has singularities
only where 1/(1 + z) has singularities. 1/(1 + z) has a first order pole at z = −1. It is analytic
everywhere else, including the point at infinity. Thus we conclude that f(z) has an essential
singularity at z = −1 and is analytic elsewhere. To explicitly show that z = −1 is an essential
singularity, we can find the Laurent series expansion of f(z) about z = −1.
cos(1/(1 + z)) = Σ_{n=0}^∞ ((−1)^n/(2n)!) (z + 1)^{−2n}
3. 1 − e^z has simple zeros at z = ı2nπ, n ∈ Z. Thus f(z) has second order poles at those points.
The point at infinity is a non-isolated singularity. To justify this: Note that
f(z) = 1/(1 − e^z)^2
has second order poles at z = ı2nπ, n ∈ Z. This means that f(1/ζ) has second order poles at
ζ = 1/(ı2nπ), n ∈ Z. These second order poles get arbitrarily close to ζ = 0. There is no deleted
neighborhood around ζ = 0 in which f(1/ζ) is analytic. Thus the point ζ = 0, (z = ∞), is a
non-isolated singularity. There is no Laurent series expansion about the point ζ = 0, (z = ∞).
The point at infinity is neither a branch point nor a removable singularity. It is not a pole
either. If it were, there would be an n such that lim_{z→∞} z^{−n} f(z) = const ≠ 0. Since z^{−n} f(z)
has second order poles in every deleted neighborhood of infinity, the above limit does not exist.
Thus we conclude that the point at infinity is an essential singularity.
Solution 7.38
We write sinh z in Cartesian form.
w = sinh z = sinh x cos y + ı cosh x sin y = u + ıv
Consider the line segment x = c, y ∈ (0 . . . π). Its image is
{sinh c cos y + ı cosh c sin y | y ∈ (0 . . . π)}.
This is the parametric equation for the upper half of an ellipse. Also note that u and v satisfy the
equation for an ellipse:
u^2/sinh^2 c + v^2/cosh^2 c = 1
The ellipse starts at the point (sinh(c), 0), passes through the point (0, cosh(c)) and ends at (−sinh(c), 0).
As c varies from zero to ∞ or from zero to −∞, the semi-ellipses cover the upper half w plane. Thus
the mapping is 2-to-1.
Consider the infinite line y = c, x ∈ (−∞ . . . ∞). Its image is
{sinh x cos c + ı cosh x sin c | x ∈ (−∞ . . . ∞)}.
This is the parametric equation for the upper half of a hyperbola. Also note that u and v satisfy
the equation for a hyperbola:
−u^2/cos^2 c + v^2/sin^2 c = 1
As c varies from 0 to π/2 or from π/2 to π, the semi-hyperbolas cover the upper half w plane. Thus
the mapping is 2-to-1.
We look for branch points of sinh^{−1} w.
w = sinh z
w = (e^z − e^{−z})/2
e^{2z} − 2w e^z − 1 = 0
e^z = w + (w^2 + 1)^{1/2}
z = log(w + (w − ı)^{1/2}(w + ı)^{1/2})
There are branch points at w = ±ı. Since w + (w^2 + 1)^{1/2} is nonzero and finite in the finite complex
plane, the logarithm does not introduce any branch points in the finite plane. Thus the only branch
point in the upper half w plane is at w = ı. Any branch cut that connects w = ı with the boundary
of ℑ(w) > 0 will separate the branches under the inverse mapping.
Consider the line y = π/4. The image under the mapping is the upper half of the hyperbola
−2u^2 + 2v^2 = 1.
Consider the segment x = 1. The image under the mapping is the upper half of the ellipse
u^2/sinh^2 1 + v^2/cosh^2 1 = 1.
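The ellipse claim is easy to corroborate numerically; a sketch (the three y values are arbitrary
samples in (0, π)):

import cmath, math

for y in (0.3, 1.0, 2.5):
    w = cmath.sinh(complex(1.0, y))
    print((w.real/math.sinh(1))**2 + (w.imag/math.cosh(1))**2)   # ~ 1.0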
Chapter 8
Analytic Functions
Students need encouragement. So if a student gets an answer right, tell them it was a lucky guess.
That way, they develop a good, lucky feeling.¹
-Jack Handey
8.1 Complex Derivatives
Functions of a Real Variable. The derivative of a function of a real variable is
(d/dx) f(x) = lim_{∆x→0} (f(x + ∆x) − f(x))/∆x.
If the limit exists then the function is differentiable at the point x. Note that ∆x can approach zero
from above or below. The limit cannot depend on the direction in which ∆x vanishes.
Consider f(x) = |x|. The function is not differentiable at x = 0 since
lim_{∆x→0⁺} (|0 + ∆x| − |0|)/∆x = 1
and
lim_{∆x→0⁻} (|0 + ∆x| − |0|)/∆x = −1.
Analyticity. The complex derivative, (or simply derivative if the context is clear), is defined,
(d/dz) f(z) = lim_{∆z→0} (f(z + ∆z) − f(z))/∆z.
The complex derivative exists if this limit exists. This means that the value of the limit is independent
of the manner in which ∆z → 0. If the complex derivative exists at a point, then we say that the
function is complex differentiable there.
A function of a complex variable is analytic at a point z0 if the complex derivative exists in
a neighborhood about that point. The function is analytic in an open set if it has a complex
derivative at each point in that set. Note that complex differentiable has a different meaning than
analytic. Analyticity refers to the behavior of a function on an open set. A function can be complex
differentiable at isolated points, but the function would not be analytic at those points. Analytic
functions are also called regular or holomorphic. If a function is analytic everywhere in the finite
complex plane, it is called entire.
¹ Quote slightly modified.
Example 8.1.1 Consider z^n, n ∈ Z+. Is the function differentiable? Is it analytic? What is the
value of the derivative?
We determine differentiability by trying to differentiate the function. We use the limit definition
of differentiation. We will use Newton's binomial formula to expand (z + ∆z)^n.
(d/dz) z^n = lim_{∆z→0} ((z + ∆z)^n − z^n)/∆z
           = lim_{∆z→0} (z^n + n z^{n−1} ∆z + (n(n−1)/2) z^{n−2} ∆z^2 + · · · + ∆z^n − z^n)/∆z
           = lim_{∆z→0} (n z^{n−1} + (n(n−1)/2) z^{n−2} ∆z + · · · + ∆z^{n−1})
           = n z^{n−1}
The derivative exists everywhere. The function is analytic in the whole complex plane so it is entire.
The value of the derivative is (d/dz) z^n = n z^{n−1}.
Example 8.1.2 We will show that f(z) = z̄ is not differentiable. Consider its derivative.
(d/dz) f(z) = lim_{∆z→0} (f(z + ∆z) − f(z))/∆z.
(d/dz) z̄ = lim_{∆z→0} (z̄ + ∆z̄ − z̄)/∆z = lim_{∆z→0} ∆z̄/∆z
First we take ∆z = ∆x and evaluate the limit.
lim_{∆x→0} ∆x/∆x = 1
Then we take ∆z = ı∆y.
lim_{∆y→0} (−ı∆y)/(ı∆y) = −1
Since the limit depends on the way that ∆z → 0, the function is nowhere differentiable. Thus the
function is not analytic.
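A minimal numeric illustration of the direction dependence: difference quotients with real and
with imaginary steps agree for z^2 but disagree for the conjugate. A sketch (the step size h and
the base point are arbitrary):

z, h = 1 + 1j, 1e-6
for f in (lambda w: w*w, lambda w: w.conjugate()):
    dx = (f(z + h) - f(z)) / h
    dy = (f(z + 1j*h) - f(z)) / (1j*h)
    print(dx, dy)   # ~2+2j both ways for z^2; 1 versus -1 for the conjugate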
Complex Derivatives in Terms of Plane Coordinates. Let z = ζ(ξ, ψ) be a system of coordinates
in the complex plane. (For example, we could have Cartesian coordinates z = ζ(x, y) = x + ıy
or polar coordinates z = ζ(r, θ) = r e^{ıθ}.) Let f(z) = φ(ξ, ψ) be a complex-valued function. (For
example we might have a function in the form φ(x, y) = u(x, y) + ıv(x, y) or φ(r, θ) = R(r, θ) e^{ıΘ(r,θ)}.)
If f(z) = φ(ξ, ψ) is analytic, its complex derivative is equal to the derivative in any direction. In
particular, it is equal to the derivatives in the coordinate directions.
df/dz = lim_{∆ξ→0,∆ψ=0} (f(z + ∆z) − f(z))/∆z = lim_{∆ξ→0} (φ(ξ + ∆ξ, ψ) − φ(ξ, ψ))/((∂ζ/∂ξ) ∆ξ) = (∂ζ/∂ξ)^{−1} ∂φ/∂ξ
df/dz = lim_{∆ξ=0,∆ψ→0} (f(z + ∆z) − f(z))/∆z = lim_{∆ψ→0} (φ(ξ, ψ + ∆ψ) − φ(ξ, ψ))/((∂ζ/∂ψ) ∆ψ) = (∂ζ/∂ψ)^{−1} ∂φ/∂ψ
Example 8.1.3 Consider the Cartesian coordinates z = x + ıy. We write the complex derivative
as derivatives in the coordinate directions for f(z) = φ(x, y).
df/dz = (∂(x + ıy)/∂x)^{−1} ∂φ/∂x = ∂φ/∂x
df/dz = (∂(x + ıy)/∂y)^{−1} ∂φ/∂y = −ı ∂φ/∂y
We write this in operator notation.
d/dz = ∂/∂x = −ı ∂/∂y.
Example 8.1.4 In Example 8.1.1 we showed that z^n, n ∈ Z+, is an entire function and that
(d/dz) z^n = n z^{n−1}. Now we corroborate this by calculating the complex derivative in the Cartesian
coordinate directions.
(d/dz) z^n = ∂/∂x (x + ıy)^n = n(x + ıy)^{n−1} = n z^{n−1}
(d/dz) z^n = −ı ∂/∂y (x + ıy)^n = −ı·ı·n(x + ıy)^{n−1} = n z^{n−1}
Complex Derivatives are Not the Same as Partial Derivatives. Recall from calculus that
f(x, y) = g(s, t) → ∂f/∂x = (∂g/∂s)(∂s/∂x) + (∂g/∂t)(∂t/∂x)
Do not make the mistake of using a similar formula for functions of a complex variable. If f(z) =
φ(x, y) then
df/dz ≠ (∂φ/∂x)(∂x/∂z) + (∂φ/∂y)(∂y/∂z).
This is because the d/dz operator means “the derivative in any direction in the complex plane.”
Since f(z) is analytic, f′(z) is the same no matter in which direction we take the derivative.
Rules of Differentiation. For an analytic function defined in terms of z we can calculate the
complex derivative using all the usual rules of differentiation that we know from calculus, like the
product rule,
(d/dz)(f(z)g(z)) = f′(z)g(z) + f(z)g′(z),
or the chain rule,
(d/dz) f(g(z)) = f′(g(z)) g′(z).
This is because the complex derivative derives its properties from properties of limits, just like its
real variable counterpart.
Result 8.1.1 The complex derivative is
(d/dz) f(z) = lim_{∆z→0} (f(z + ∆z) − f(z))/∆z.
The complex derivative is defined if the limit exists and is independent of the
manner in which ∆z → 0. A function is analytic at a point if the complex
derivative exists in a neighborhood of that point.
Let z = ζ(ξ, ψ) define coordinates in the complex plane. The complex derivative
in the coordinate directions is
d/dz = (∂ζ/∂ξ)^{−1} ∂/∂ξ = (∂ζ/∂ψ)^{−1} ∂/∂ψ.
In Cartesian coordinates, this is
d/dz = ∂/∂x = −ı ∂/∂y.
In polar coordinates, this is
d/dz = e^{−ıθ} ∂/∂r = −(ı/r) e^{−ıθ} ∂/∂θ.
Since the complex derivative is defined with the same limit formula as real
derivatives, all the rules from the calculus of functions of a real variable may
be used to differentiate functions of a complex variable.
Example 8.1.5 We have shown that z^n, n ∈ Z⁺, is an entire function. Now we corroborate that d/dz z^n = n z^{n−1} by calculating the complex derivative in the polar coordinate directions.

    d/dz z^n = e^{−ıθ} ∂/∂r (r^n e^{ınθ}) = e^{−ıθ} n r^{n−1} e^{ınθ} = n r^{n−1} e^{ı(n−1)θ} = n z^{n−1}

    d/dz z^n = −(ı/r) e^{−ıθ} ∂/∂θ (r^n e^{ınθ}) = −(ı/r) e^{−ıθ} r^n ın e^{ınθ} = n r^{n−1} e^{ı(n−1)θ} = n z^{n−1}
Analytic Functions can be Written in Terms of z. Consider an analytic function expressed in terms of x and y, φ(x, y). We can write φ as a function of z = x + ıy and z̄ = x − ıy.

    f(z, z̄) = φ((z + z̄)/2, (z − z̄)/(ı2))

We treat z and z̄ as independent variables. We find the partial derivatives with respect to these variables.

    ∂/∂z = (∂x/∂z) ∂/∂x + (∂y/∂z) ∂/∂y = (1/2)(∂/∂x − ı ∂/∂y)

    ∂/∂z̄ = (∂x/∂z̄) ∂/∂x + (∂y/∂z̄) ∂/∂y = (1/2)(∂/∂x + ı ∂/∂y)
Since φ is analytic, the complex derivatives in the x and y directions are equal.

    ∂φ/∂x = −ı ∂φ/∂y

The partial derivative of f(z, z̄) with respect to z̄ is zero.

    ∂f/∂z̄ = (1/2)(∂φ/∂x + ı ∂φ/∂y) = 0

Thus f(z, z̄) has no functional dependence on z̄; it can be written as a function of z alone.
If we were considering an analytic function expressed in polar coordinates φ(r, θ), then we could write it in Cartesian coordinates with the substitutions

    r = √(x² + y²),  θ = arctan(x, y).

Thus we could write φ(r, θ) as a function of z alone.

Result 8.1.2 Any analytic function φ(x, y) or φ(r, θ) can be written as a function of z alone.
8.2 Cauchy-Riemann Equations
If we know that a function is analytic, then we have a convenient way of determining its complex
derivative. We just express the complex derivative in terms of the derivative in a coordinate direction.
However, we don’t have a nice way of determining if a function is analytic. The definition of complex
derivative in terms of a limit is cumbersome to work with. In this section we remedy this problem.
A necessary condition for analyticity. Consider a function f(z) = φ(x, y). If f(z) is analytic, the complex derivative is equal to the derivatives in the coordinate directions. We equate the derivatives in the x and y directions to obtain the Cauchy-Riemann equations in Cartesian coordinates.

    φ_x = −ı φ_y    (8.1)

This equation is a necessary condition for the analyticity of f(z).
Let φ(x, y) = u(x, y) + ıv(x, y) where u and v are real-valued functions. We equate the real and imaginary parts of Equation 8.1 to obtain another form for the Cauchy-Riemann equations in Cartesian coordinates.

    u_x = v_y,  u_y = −v_x.
Note that this is a necessary and not a sufficient condition for analyticity of f(z). That is, u and v may satisfy the Cauchy-Riemann equations but f(z) may not be analytic. At this point, the Cauchy-Riemann equations give us an easy test for determining if a function is not analytic.

Example 8.2.1 In Example 8.1.2 we showed that z̄ is not analytic using the definition of complex differentiation. Now we obtain the same result using the Cauchy-Riemann equations.

    z̄ = x − ıy
    u_x = 1,  v_y = −1

We see that the first Cauchy-Riemann equation is not satisfied; the function is not analytic at any point.
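This test is mechanical enough to script. Here is a minimal sketch using the third-party sympy library (an assumption; the computation is the same by hand):

```python
# Verify the Cauchy-Riemann equations u_x = v_y, u_y = -v_x symbolically
# for f(z) = conj(z) = x - i*y, i.e. u = x, v = -y.
import sympy as sp

x, y = sp.symbols('x y', real=True)
u, v = x, -y
cr1 = sp.simplify(sp.diff(u, x) - sp.diff(v, y))   # u_x - v_y
cr2 = sp.simplify(sp.diff(u, y) + sp.diff(v, x))   # u_y + v_x
print(cr1, cr2)  # 2, 0: the first equation fails, so conj(z) is nowhere analytic
```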
A sufficient condition for analyticity. A sufficient condition for f(z) = φ(x, y) to be analytic at a point z₀ = (x₀, y₀) is that the partial derivatives of φ(x, y) exist and are continuous in some neighborhood of z₀ and satisfy the Cauchy-Riemann equations there. If the partial derivatives of φ exist and are continuous, then

    φ(x + ∆x, y + ∆y) = φ(x, y) + ∆x φ_x(x, y) + ∆y φ_y(x, y) + o(∆x) + o(∆y).

Here the notation o(∆x) means “terms smaller than ∆x”. We calculate the derivative of f(z).

    f′(z) = lim_{∆z→0} (f(z + ∆z) − f(z))/∆z
          = lim_{∆x,∆y→0} (φ(x + ∆x, y + ∆y) − φ(x, y))/(∆x + ı∆y)
          = lim_{∆x,∆y→0} (∆x φ_x(x, y) + ∆y φ_y(x, y) + o(∆x) + o(∆y))/(∆x + ı∆y)

Here we use the Cauchy-Riemann equations.

          = lim_{∆x,∆y→0} ((∆x + ı∆y) φ_x(x, y))/(∆x + ı∆y) + lim_{∆x,∆y→0} (o(∆x) + o(∆y))/(∆x + ı∆y)
          = φ_x(x, y)

Thus we see that the derivative is well defined.
Cauchy-Riemann Equations in General Coordinates. Let z = ζ(ξ, ψ) be a system of coordinates in the complex plane, and let φ(ξ, ψ) be a function which we write in terms of these coordinates. A necessary condition for analyticity of φ(ξ, ψ) is that the complex derivatives in the coordinate directions exist and are equal. Equating the derivatives in the ξ and ψ directions gives us the Cauchy-Riemann equations.

    (∂ζ/∂ξ)^{−1} ∂φ/∂ξ = (∂ζ/∂ψ)^{−1} ∂φ/∂ψ

We could separate this into two equations by equating the real and imaginary parts or the modulus and argument.
Result 8.2.1 A necessary condition for analyticity of φ(ξ, ψ), where z = ζ(ξ, ψ), at z = z₀ is that the Cauchy-Riemann equations are satisfied in a neighborhood of z = z₀:

    (∂ζ/∂ξ)^{−1} ∂φ/∂ξ = (∂ζ/∂ψ)^{−1} ∂φ/∂ψ.

(We could equate the real and imaginary parts or the modulus and argument of this to obtain two equations.) A sufficient condition for analyticity of f(z) is that the Cauchy-Riemann equations hold and the first partial derivatives of φ exist and are continuous in a neighborhood of z = z₀.

Below are the Cauchy-Riemann equations for various forms of f(z).

    f(z) = φ(x, y):              φ_x = −ı φ_y
    f(z) = u(x, y) + ıv(x, y):   u_x = v_y,  u_y = −v_x
    f(z) = φ(r, θ):              φ_r = −(ı/r) φ_θ
    f(z) = u(r, θ) + ıv(r, θ):   u_r = (1/r) v_θ,  u_θ = −r v_r
    f(z) = R(r, θ) e^{ıΘ(r,θ)}:  R_r = (R/r) Θ_θ,  (1/r) R_θ = −R Θ_r
    f(z) = R(x, y) e^{ıΘ(x,y)}:  R_x = R Θ_y,  R_y = −R Θ_x
Example 8.2.2 Consider the Cauchy-Riemann equations for f(z) = u(r, θ) + ıv(r, θ). From Exercise 8.3 we know that the complex derivative in the polar coordinate directions is

    d/dz = e^{−ıθ} ∂/∂r = −(ı/r) e^{−ıθ} ∂/∂θ.

From Result 8.2.1 we have the equation

    e^{−ıθ} ∂/∂r [u + ıv] = −(ı/r) e^{−ıθ} ∂/∂θ [u + ıv].

We multiply by e^{ıθ} and equate the real and imaginary components to obtain the Cauchy-Riemann equations.

    u_r = (1/r) v_θ,  u_θ = −r v_r
Example 8.2.3 Consider the exponential function.

    e^z = φ(x, y) = e^x (cos y + ı sin y)

We use the Cauchy-Riemann equations to show that the function is entire.

    φ_x = −ı φ_y
    e^x (cos y + ı sin y) = −ı e^x (−sin y + ı cos y)
    e^x (cos y + ı sin y) = e^x (cos y + ı sin y)

Since the function satisfies the Cauchy-Riemann equations and the first partial derivatives are continuous everywhere in the finite complex plane, the exponential function is entire.
Now we find the value of the complex derivative.

    d/dz e^z = ∂φ/∂x = e^x (cos y + ı sin y) = e^z
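Both the Cauchy-Riemann check and the derivative formula can be confirmed symbolically. A minimal sympy sketch (library assumed):

```python
# Check the Cauchy-Riemann equations and the derivative formula for e^z,
# written as u + i*v with u = e^x cos y, v = e^x sin y.
import sympy as sp

x, y = sp.symbols('x y', real=True)
u = sp.exp(x) * sp.cos(y)
v = sp.exp(x) * sp.sin(y)
print(sp.simplify(sp.diff(u, x) - sp.diff(v, y)))   # 0: u_x = v_y
print(sp.simplify(sp.diff(u, y) + sp.diff(v, x)))   # 0: u_y = -v_x
# d/dz e^z = u_x + i*v_x should equal e^z = u + i*v:
print(sp.simplify((sp.diff(u, x) + sp.I * sp.diff(v, x)) - (u + sp.I * v)))  # 0
```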
The differentiability of the exponential function implies the differentiability of the trigonometric functions, as they can be written in terms of the exponential.
In Exercise 8.13 you can show that the logarithm log z is differentiable for z ≠ 0. This implies the differentiability of z^α and the inverse trigonometric functions, as they can be written in terms of the logarithm.
Example 8.2.4 We compute the derivative of z^z.

    d/dz (z^z) = d/dz e^{z log z} = (1 + log z) e^{z log z} = (1 + log z) z^z = z^z + z^z log z
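One can corroborate this with a computer algebra system. A minimal sympy sketch (library assumed; sympy's log is the principal branch, matching the principal value of z^z):

```python
# Corroborate d/dz z**z = z**z * (1 + log z).
import sympy as sp

z = sp.symbols('z')
deriv = sp.diff(z**z, z)
print(sp.simplify(deriv - z**z * (1 + sp.log(z))))  # 0
```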
8.3 Harmonic Functions
A function u is harmonic if its second partial derivatives exist, are continuous and satisfy Laplace’s equation ∆u = 0.² (In Cartesian coordinates the Laplacian is ∆u ≡ u_xx + u_yy.) If f(z) = u + ıv is an analytic function then u and v are harmonic functions. To see why this is so, we start with the Cauchy-Riemann equations.

    u_x = v_y,  u_y = −v_x

We differentiate the first equation with respect to x and the second with respect to y. (We assume that u and v are twice continuously differentiable. We will see later that they are infinitely differentiable.)

    u_xx = v_xy,  u_yy = −v_yx

Thus we see that u is harmonic.

    ∆u ≡ u_xx + u_yy = v_xy − v_yx = 0

One can use the same method to show that ∆v = 0.
If u is harmonic on some simply connected domain, then there exists a harmonic function v such that f(z) = u + ıv is analytic in the domain. v is called the harmonic conjugate of u. The harmonic conjugate is unique up to an additive constant. To demonstrate this, let w be another harmonic conjugate of u. Both the pair u and v and the pair u and w satisfy the Cauchy-Riemann equations.

    u_x = v_y,  u_y = −v_x,  u_x = w_y,  u_y = −w_x

We take the difference of these equations.

    v_x − w_x = 0,  v_y − w_y = 0

On a simply connected domain, the difference between v and w is thus a constant.
To prove the existence of the harmonic conjugate, we first write v as an integral.

    v(x, y) = v(x₀, y₀) + ∫_{(x₀,y₀)}^{(x,y)} (v_x dx + v_y dy)

² The capital Greek letter ∆ is used to denote the Laplacian, like ∆u(x, y), and differentials, like ∆x.
On a simply connected domain, the integral is path independent and defines a unique v in terms of v_x and v_y. We use the Cauchy-Riemann equations to write v in terms of u_x and u_y.

    v(x, y) = v(x₀, y₀) + ∫_{(x₀,y₀)}^{(x,y)} (−u_y dx + u_x dy)

Changing the starting point (x₀, y₀) changes v by an additive constant. The harmonic conjugate of u to within an additive constant is

    v(x, y) = ∫ (−u_y dx + u_x dy).

This proves the existence³ of the harmonic conjugate. This is not the formula one would use to construct the harmonic conjugate of a u. One accomplishes this by solving the Cauchy-Riemann equations.
Result 8.3.1 If f(z) = u + ıv is an analytic function then u and v are harmonic functions. That is, the Laplacians of u and v vanish: ∆u = ∆v = 0. The Laplacian in Cartesian and polar coordinates is

    ∆ = ∂²/∂x² + ∂²/∂y²,   ∆ = (1/r) ∂/∂r (r ∂/∂r) + (1/r²) ∂²/∂θ².

Given a harmonic function u in a simply connected domain, there exists a harmonic function v, (unique up to an additive constant), such that f(z) = u + ıv is analytic in the domain. One can construct v by solving the Cauchy-Riemann equations.
Example 8.3.1 Is x² the real part of an analytic function?
The Laplacian of x² is

    ∆[x²] = 2 + 0 = 2 ≠ 0.

x² is not harmonic and thus is not the real part of an analytic function.
Example 8.3.2 Show that u = e^{−x}(x sin y − y cos y) is harmonic.

    ∂u/∂x = e^{−x} sin y − e^{−x}(x sin y − y cos y)
          = e^{−x} sin y − x e^{−x} sin y + y e^{−x} cos y

    ∂²u/∂x² = −e^{−x} sin y − e^{−x} sin y + x e^{−x} sin y − y e^{−x} cos y
            = −2 e^{−x} sin y + x e^{−x} sin y − y e^{−x} cos y

    ∂u/∂y = e^{−x}(x cos y − cos y + y sin y)

    ∂²u/∂y² = e^{−x}(−x sin y + sin y + y cos y + sin y)
            = −x e^{−x} sin y + 2 e^{−x} sin y + y e^{−x} cos y

Thus we see that ∂²u/∂x² + ∂²u/∂y² = 0 and u is harmonic.
³ A mathematician returns to his office to find that a cigarette tossed in the trash has started a small fire. Being calm and a quick thinker he notes that there is a fire extinguisher by the window. He then closes the door and walks away because “the solution exists.”
Example 8.3.3 Consider u = cos x cosh y. This function is harmonic.

    u_xx + u_yy = −cos x cosh y + cos x cosh y = 0

Thus it is the real part of an analytic function, f(z). We find the harmonic conjugate, v, with the Cauchy-Riemann equations. We integrate the first Cauchy-Riemann equation.

    v_y = u_x = −sin x cosh y
    v = −sin x sinh y + a(x)

Here a(x) is a constant of integration. We substitute this into the second Cauchy-Riemann equation to determine a(x).

    v_x = −u_y
    −cos x sinh y + a′(x) = −cos x sinh y
    a′(x) = 0
    a(x) = c

Here c is a real constant. Thus the harmonic conjugate is

    v = −sin x sinh y + c.

The analytic function is

    f(z) = cos x cosh y − ı sin x sinh y + ıc.

We recognize this as

    f(z) = cos z + ıc.
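The same construction can be scripted: check harmonicity, integrate v_y = u_x with respect to y, then confirm the second Cauchy-Riemann equation fixes the integration “constant”. A minimal sympy sketch (library assumed):

```python
# Construct the harmonic conjugate of u = cos(x)*cosh(y), as in Example 8.3.3.
import sympy as sp

x, y = sp.symbols('x y', real=True)
u = sp.cos(x) * sp.cosh(y)

# u is harmonic:
assert sp.simplify(sp.diff(u, x, 2) + sp.diff(u, y, 2)) == 0

# Integrate the first CR equation v_y = u_x with respect to y:
v = sp.integrate(sp.diff(u, x), y)            # -sin(x)*sinh(y), up to a(x)
# The second CR equation v_x = -u_y then forces a'(x) = 0:
assert sp.simplify(sp.diff(v, x) + sp.diff(u, y)) == 0

# f = u + i*v agrees with cos(z) at z = x + i*y:
f = u + sp.I * v
assert sp.simplify(f - sp.expand_trig(sp.cos(x + sp.I * y))) == 0
print(v)  # -sin(x)*sinh(y)
```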
Example 8.3.4 Here we consider an example that demonstrates the need for a simply connected domain. Consider u = Log r in the multiply connected domain, r > 0. u is harmonic.

    ∆ Log r = (1/r) ∂/∂r (r ∂/∂r Log r) + (1/r²) ∂²/∂θ² Log r = 0

We solve the Cauchy-Riemann equations to try to find the harmonic conjugate.

    u_r = (1/r) v_θ,  u_θ = −r v_r
    v_r = 0,  v_θ = 1
    v = θ + c

We are able to solve for v, but it is multi-valued. Any single-valued branch of θ that we choose will not be continuous on the domain. Thus there is no harmonic conjugate of u = Log r for the domain r > 0.
If we had instead considered the simply connected domain r > 0, |arg(z)| < π, then the harmonic conjugate would be v = Arg(z) + c. The corresponding analytic function is f(z) = Log z + ıc.
Example 8.3.5 Consider u = x³ − 3xy² + x. This function is harmonic.

    u_xx + u_yy = 6x − 6x = 0

Thus it is the real part of an analytic function, f(z). We find the harmonic conjugate, v, with the Cauchy-Riemann equations. We integrate the first Cauchy-Riemann equation.

    v_y = u_x = 3x² − 3y² + 1
    v = 3x²y − y³ + y + a(x)

Here a(x) is a constant of integration. We substitute this into the second Cauchy-Riemann equation to determine a(x).

    v_x = −u_y
    6xy + a′(x) = 6xy
    a′(x) = 0
    a(x) = c

Here c is a real constant. The harmonic conjugate is

    v = 3x²y − y³ + y + c.

The analytic function is

    f(z) = x³ − 3xy² + x + ı(3x²y − y³ + y) + ıc
    f(z) = x³ + ı3x²y − 3xy² − ıy³ + x + ıy + ıc
    f(z) = z³ + z + ıc
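The final simplification to z³ + z is easy to confirm by machine. A minimal sympy sketch (library assumed; c is taken as 0):

```python
# Check that u + i*v from Example 8.3.5 collapses to z**3 + z at z = x + i*y.
import sympy as sp

x, y = sp.symbols('x y', real=True)
z = x + sp.I * y
u = x**3 - 3*x*y**2 + x
v = 3*x**2*y - y**3 + y
print(sp.simplify(sp.expand(z**3 + z) - (u + sp.I * v)))  # 0
```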
8.4 Singularities
Any point at which a function is not analytic is called a singularity. In this section we will classify
the different flavors of singularities.
Result 8.4.1 Singularities. If a function is not analytic at a point, then
that point is a singular point or a singularity of the function.
8.4.1 Categorization of Singularities
Branch Points. If f(z) has a branch point at z₀, then we cannot define a branch of f(z) that is continuous in a neighborhood of z₀. Continuity is necessary for analyticity. Thus all branch points are singularities. Since functions are discontinuous across branch cuts, all points on a branch cut are singularities.
Example 8.4.1 Consider f(z) = z^{3/2}. The origin and infinity are branch points and are thus singularities of f(z). We choose the branch g(z) = √(z³). All the points on the negative real axis, including the origin, are singularities of g(z).
Removable Singularities.
Example 8.4.2 Consider

    f(z) = sin z / z.

This function is undefined at z = 0 because f(0) is the indeterminate form 0/0. f(z) is analytic everywhere in the finite complex plane except z = 0. Note that the limit as z → 0 of f(z) exists.

    lim_{z→0} sin z / z = lim_{z→0} cos z / 1 = 1

If we were to fill in the hole in the definition of f(z), we could make it differentiable at z = 0. Consider the function

    g(z) = sin z / z for z ≠ 0,  g(z) = 1 for z = 0.

We calculate the derivative at z = 0 to verify that g(z) is analytic there.

    g′(0) = lim_{z→0} (g(z) − g(0))/z
          = lim_{z→0} (sin(z)/z − 1)/z
          = lim_{z→0} (sin(z) − z)/z²
          = lim_{z→0} (cos(z) − 1)/(2z)
          = lim_{z→0} −sin(z)/2
          = 0

We call the point at z = 0 a removable singularity of sin(z)/z because we can remove the singularity by defining the value of the function to be its limiting value there.
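Both limits above are one-liners in a computer algebra system. A minimal sympy sketch (library assumed):

```python
# The removable singularity of sin(z)/z: the limit at 0 exists and the
# derivative of the patched function vanishes there.
import sympy as sp

z = sp.symbols('z')
print(sp.limit(sp.sin(z) / z, z, 0))              # 1
print(sp.limit((sp.sin(z) / z - 1) / z, z, 0))    # 0, the value of g'(0)
```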
Consider a function f(z) that is analytic in a deleted neighborhood of z = z₀. If f(z) is not analytic at z₀, but lim_{z→z₀} f(z) exists, then the function has a removable singularity at z₀. The function

    g(z) = f(z) for z ≠ z₀,  g(z) = lim_{z→z₀} f(z) for z = z₀

is analytic in a neighborhood of z = z₀. We show this by calculating g′(z₀).

    g′(z₀) = lim_{z→z₀} (g(z₀) − g(z))/(z₀ − z) = lim_{z→z₀} (−g′(z))/(−1) = lim_{z→z₀} f′(z)

This limit exists because f(z) is analytic in a deleted neighborhood of z = z₀.
Poles. If a function f(z) behaves like c/(z − z₀)^n near z = z₀ then the function has an nth order pole at that point. More mathematically, we say

    lim_{z→z₀} (z − z₀)^n f(z) = c ≠ 0.

We require the constant c to be nonzero so we know that it is not a pole of lower order. We can denote a removable singularity as a pole of order zero.
Another way to say that a function has an nth order pole is that f(z) is not analytic at z = z₀, but (z − z₀)^n f(z) is either analytic or has a removable singularity at that point.

Example 8.4.3 1/sin(z²) has a second order pole at z = 0 and first order poles at z = (nπ)^{1/2}, n ∈ Z^±.
    lim_{z→0} z²/sin(z²) = lim_{z→0} 2z/(2z cos(z²)) = lim_{z→0} 2/(2 cos(z²) − 4z² sin(z²)) = 1

    lim_{z→(nπ)^{1/2}} (z − (nπ)^{1/2})/sin(z²) = lim_{z→(nπ)^{1/2}} 1/(2z cos(z²)) = 1/(2(nπ)^{1/2}(−1)^n)
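These pole-order computations can also be checked symbolically. A minimal sympy sketch (library assumed; the first zero of sin(z²) on the positive axis, z = √π, is used as the sample point):

```python
# Verify the pole orders of 1/sin(z**2) from Example 8.4.3.
import sympy as sp

z = sp.symbols('z')
print(sp.limit(z**2 / sp.sin(z**2), z, 0))       # 1, nonzero -> second order pole at 0
a = sp.sqrt(sp.pi)                               # first positive zero of sin(z**2)
print(sp.limit((z - a) / sp.sin(z**2), z, a))    # -1/(2*sqrt(pi)), nonzero -> simple pole
```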
Example 8.4.4 e^{1/z} is singular at z = 0. The function is not analytic as lim_{z→0} e^{1/z} does not exist. We check if the function has a pole of order n at z = 0.

    lim_{z→0} z^n e^{1/z} = lim_{ζ→∞} e^ζ/ζ^n = lim_{ζ→∞} e^ζ/n!

(applying L’Hospital’s rule n times). Since the limit does not exist for any value of n, the singularity is not a pole. We could say that e^{1/z} is more singular than any power of 1/z.
Essential Singularities. If a function f(z) is singular at z = z₀, but the singularity is neither a branch point nor a pole, then the point is an essential singularity of the function.

The point at infinity. We can consider the point at infinity z → ∞ by making the change of variables z = 1/ζ and considering ζ → 0. If f(1/ζ) is analytic at ζ = 0 then f(z) is analytic at infinity. We have encountered branch points at infinity before (Section 7.9). Assume that f(z) is not analytic at infinity. If lim_{z→∞} f(z) exists then f(z) has a removable singularity at infinity. If lim_{z→∞} f(z)/z^n = c ≠ 0 then f(z) has an nth order pole at infinity.
Result 8.4.2 Categorization of Singularities. Consider a function f(z) that has a singularity at the point z = z₀. Singularities come in four flavors:

Branch Points. Branch points of multi-valued functions are singularities.

Removable Singularities. If lim_{z→z₀} f(z) exists, then z₀ is a removable singularity. It is thus named because the singularity could be removed and the function made analytic at z₀ by redefining the value of f(z₀).

Poles. If lim_{z→z₀} (z − z₀)^n f(z) = const ≠ 0 then f(z) has an nth order pole at z₀.

Essential Singularities. Instead of defining what an essential singularity is, we say what it is not. If z₀ is neither a branch point, a removable singularity nor a pole, it is an essential singularity.

A pole may be called a non-essential singularity. This is because multiplying the function by an integral power of z − z₀ will make the function analytic. Then an essential singularity is a point z₀ such that there does not exist an n such that (z − z₀)^n f(z) is analytic there.
8.4.2 Isolated and Non-Isolated Singularities
Result 8.4.3 Isolated and Non-Isolated Singularities. Suppose f(z) has
a singularity at z0. If there exists a deleted neighborhood of z0 containing no
singularities then the point is an isolated singularity. Otherwise it is a
non-isolated singularity.
If you don’t like the abstract notion of a deleted neighborhood, you can work with a deleted circular
neighborhood. However, this will require the introduction of more math symbols and a Greek letter.
z = z0 is an isolated singularity if there exists a δ > 0 such that there are no singularities in
0 < |z − z0| < δ.
Example 8.4.5 We classify the singularities of f(z) = z/sin z.
z has a simple zero at z = 0. sin z has simple zeros at z = nπ. Thus f(z) has a removable singularity at z = 0 and has first order poles at z = nπ for n ∈ Z^±. We can corroborate this by taking limits.

    lim_{z→0} f(z) = lim_{z→0} z/sin z = lim_{z→0} 1/cos z = 1

    lim_{z→nπ} (z − nπ)f(z) = lim_{z→nπ} (z − nπ)z/sin z = lim_{z→nπ} (2z − nπ)/cos z = nπ/(−1)^n ≠ 0

Now to examine the behavior at infinity. There is no neighborhood of infinity that does not contain first order poles of f(z). (Another way of saying this is that there does not exist an R such that there are no singularities in R < |z| < ∞.) Thus z = ∞ is a non-isolated singularity.
We could also determine this by setting ζ = 1/z and examining the point ζ = 0. f(1/ζ) has first order poles at ζ = 1/(nπ) for n ∈ Z \ {0}. These first order poles come arbitrarily close to the point ζ = 0. There is no deleted neighborhood of ζ = 0 which does not contain singularities. Thus ζ = 0, and hence z = ∞, is a non-isolated singularity.
The point at infinity is an essential singularity. It is certainly not a branch point or a removable singularity. It is not a pole, because there is no n such that lim_{z→∞} z^{−n} f(z) = const ≠ 0. z^{−n} f(z) has first order poles in any neighborhood of infinity, so this limit does not exist.
8.5 Application: Potential Flow
Example 8.5.1 We consider two-dimensional uniform flow in a given direction. The flow corresponds to the complex potential

    Φ(z) = v₀ e^{−ıθ₀} z,

where v₀ is the fluid speed and θ₀ is the direction. We find the velocity potential φ and stream function ψ.

    Φ(z) = φ + ıψ
    φ = v₀(cos(θ₀)x + sin(θ₀)y),  ψ = v₀(−sin(θ₀)x + cos(θ₀)y)

These are plotted in Figure 8.1 for θ₀ = π/6.
Figure 8.1: The velocity potential φ and stream function ψ for Φ(z) = v₀ e^{−ıθ₀} z.
Next we find the stream lines, ψ = c.

    v₀(−sin(θ₀)x + cos(θ₀)y) = c
    y = c/(v₀ cos(θ₀)) + tan(θ₀) x

Figure 8.2 shows how the streamlines go straight along the θ₀ direction.

Figure 8.2: Streamlines for ψ = v₀(−sin(θ₀)x + cos(θ₀)y).

Next we find the velocity field.

    v = ∇φ
    v = φ_x x̂ + φ_y ŷ
    v = v₀ cos(θ₀) x̂ + v₀ sin(θ₀) ŷ

The velocity field is shown in Figure 8.3.
Figure 8.3: Velocity field and velocity direction field for φ = v0(cos(θ0)x + sin(θ0)y).
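The plots themselves cannot be reproduced here, but the underlying computation is short. A minimal numpy sketch (library assumed) that evaluates φ and ψ on a grid and checks the closed forms above:

```python
# Evaluate the uniform-flow potential Phi(z) = v0*exp(-i*theta0)*z on a grid
# and split it into phi = Re(Phi) and psi = Im(Phi); the velocity is
# (phi_x, phi_y) = v0*(cos(theta0), sin(theta0)), a constant field.
import numpy as np

v0, theta0 = 1.0, np.pi / 6
x, y = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))
Phi = v0 * np.exp(-1j * theta0) * (x + 1j * y)
phi, psi = Phi.real, Phi.imag

# Check against the closed forms in the example:
assert np.allclose(phi, v0 * (np.cos(theta0) * x + np.sin(theta0) * y))
assert np.allclose(psi, v0 * (-np.sin(theta0) * x + np.cos(theta0) * y))
print(v0 * np.cos(theta0), v0 * np.sin(theta0))  # constant velocity components
```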
Example 8.5.2 Steady, incompressible, inviscid, irrotational flow is governed by the Laplace equation. We consider flow around an infinite cylinder of radius a. Because the flow does not vary along the axis of the cylinder, this is a two-dimensional problem. The flow corresponds to the complex potential

    Φ(z) = v₀ (z + a²/z).
We find the velocity potential φ and stream function ψ.

    Φ(z) = φ + ıψ
    φ = v₀ (r + a²/r) cos θ,  ψ = v₀ (r − a²/r) sin θ

These are plotted in Figure 8.4.

Figure 8.4: The velocity potential φ and stream function ψ for Φ(z) = v₀(z + a²/z).
Next we find the stream lines, ψ = c.

    v₀ (r − a²/r) sin θ = c
    r = (c ± √(c² + 4v₀² a² sin² θ))/(2 v₀ sin θ)

Figure 8.5 shows how the streamlines go around the cylinder.

Figure 8.5: Streamlines for ψ = v₀(r − a²/r) sin θ.

Next we find the velocity field.

    v = ∇φ
    v = φ_r r̂ + (φ_θ/r) θ̂
    v = v₀ (1 − a²/r²) cos θ r̂ − v₀ (1 + a²/r²) sin θ θ̂

The velocity field is shown in Figure 8.6.

Figure 8.6: Velocity field and velocity direction field for φ = v₀(r + a²/r) cos θ.
8.6 Exercises
Complex Derivatives
Exercise 8.1
Consider two functions f(z) and g(z) analytic at z₀ with f(z₀) = g(z₀) = 0 and g′(z₀) ≠ 0.
1. Use the definition of the complex derivative to justify L’Hospital’s rule:

    lim_{z→z₀} f(z)/g(z) = f′(z₀)/g′(z₀)

2. Evaluate the limits

    lim_{z→ı} (1 + z²)/(2 + 2z⁶),   lim_{z→ıπ} sinh(z)/(e^z + 1)

Hint, Solution
Exercise 8.2
Show that if f(z) is analytic and φ(x, y) = f(z) is twice continuously differentiable then f′(z) is analytic.
Hint, Solution
Exercise 8.3
Find the complex derivative in the coordinate directions for f(z) = φ(r, θ).
Hint, Solution
Exercise 8.4
Show that the following functions are nowhere analytic by checking where the derivative with respect
to z exists.
1. sin x cosh y − ı cos x sinh y
2. x² − y² + x + ı(2xy − y)
Hint, Solution
Exercise 8.5
f(z) is analytic for all z, (|z| < ∞). f(z₁ + z₂) = f(z₁) f(z₂) for all z₁ and z₂. (This is known as a functional equation.) Prove that f(z) = exp(f′(0)z).
Hint, Solution
Cauchy-Riemann Equations
Exercise 8.6
If f(z) is analytic in a domain and has a constant real part, a constant imaginary part, or a constant
modulus, show that f(z) is constant.
Hint, Solution
Exercise 8.7
Show that the function

    f(z) = e^{−z^{−4}} for z ≠ 0,  f(z) = 0 for z = 0

satisfies the Cauchy-Riemann equations everywhere, including at z = 0, but f(z) is not analytic at the origin.
Hint, Solution
Exercise 8.8
Find the Cauchy-Riemann equations for the following forms.
1. f(z) = R(r, θ) e^{ıΘ(r,θ)}
2. f(z) = R(x, y) e^{ıΘ(x,y)}
Hint, Solution
Exercise 8.9
1. Show that e^{z̄} is not analytic.
2. f(z) is an analytic function of z. Show that f̄(z̄) (the conjugate of f evaluated at z̄) is also an analytic function of z.
Hint, Solution
Exercise 8.10
1. Determine all points z = x + ıy where the following functions are differentiable with respect to z:

    (a) x³ + y³
    (b) (x − 1)/((x − 1)² + y²) − ı y/((x − 1)² + y²)

2. Determine all points z where these functions are analytic.

3. Determine which of the following functions v(x, y) are the imaginary part of an analytic function u(x, y) + ıv(x, y). For those that are, compute the real part u(x, y) and re-express the answer as an explicit function of z = x + ıy:

    (a) x² − y²
    (b) 3x²y
Hint, Solution
Exercise 8.11
Let
    f(z) = (x^{4/3} y^{5/3} + ı x^{5/3} y^{4/3})/(x² + y²) for z ≠ 0,  f(z) = 0 for z = 0.

Show that the Cauchy-Riemann equations hold at z = 0, but that f is not differentiable at this point.
Hint, Solution
Exercise 8.12
Consider the complex function
    f(z) = u + ıv = (x³(1 + ı) − y³(1 − ı))/(x² + y²) for z ≠ 0,  f(z) = 0 for z = 0.

Show that the partial derivatives of u and v with respect to x and y exist at z = 0 and that u_x = v_y and u_y = −v_x there: the Cauchy-Riemann equations are satisfied at z = 0. On the other hand, show that

    lim_{z→0} f(z)/z

does not exist, that is, f is not complex-differentiable at z = 0.
Hint, Solution
Exercise 8.13
Show that the logarithm log z is differentiable for z ≠ 0. Find the derivative of the logarithm.
Hint, Solution
Exercise 8.14
Show that the Cauchy-Riemann equations for the analytic function f(z) = u(r, θ) + ıv(r, θ) are
    u_r = v_θ/r,  u_θ = −r v_r.
Hint, Solution
Exercise 8.15
w = u + ıv is an analytic function of z. φ(x, y) is an arbitrary smooth function of x and y. When expressed in terms of u and v, φ(x, y) = Φ(u, v). Show that (for dw/dz ≠ 0)

    ∂Φ/∂u − ı ∂Φ/∂v = (dw/dz)^{−1} (∂φ/∂x − ı ∂φ/∂y).

Deduce

    ∂²Φ/∂u² + ∂²Φ/∂v² = |dw/dz|^{−2} (∂²φ/∂x² + ∂²φ/∂y²).
Hint, Solution
Exercise 8.16
Show that the functions defined by f(z) = log |z| + ı arg(z) and f(z) = √|z| e^{ı arg(z)/2} are analytic in the sector |z| > 0, |arg(z)| < π. What are the corresponding derivatives df/dz?
Hint, Solution
Exercise 8.17
Show that the following functions are harmonic. For each one of them find its harmonic conjugate
and form the corresponding holomorphic function.
1. u(x, y) = x Log(r) − y arctan(x, y)  (r ≠ 0)
2. u(x, y) = arg(z)  (|arg(z)| < π, r ≠ 0)
3. u(x, y) = r^n cos(nθ)
4. u(x, y) = y/r²  (r ≠ 0)
Hint, Solution
Exercise 8.18
1. Use the Cauchy-Riemann equations to determine where the function

    f(z) = (x − y)² + ı2(x + y)

is differentiable and where it is analytic.

2. Evaluate the derivative of

    f(z) = e^{x²−y²} (cos(2xy) + ı sin(2xy))

and describe the domain of analyticity.
Hint, Solution
Exercise 8.19
Consider the function f(z) = u + ıv with real and imaginary parts expressed in terms of either x
and y or r and θ.
1. Show that the Cauchy-Riemann equations

    u_x = v_y,  u_y = −v_x

are satisfied and these partial derivatives are continuous at a point z if and only if the polar form of the Cauchy-Riemann equations

    u_r = (1/r) v_θ,  (1/r) u_θ = −v_r

is satisfied and these partial derivatives are continuous there.

2. Show that it is easy to verify that Log z is analytic for r > 0 and −π < θ < π using the polar form of the Cauchy-Riemann equations and that the value of the derivative is easily obtained from a polar differentiation formula.

3. Show that in polar coordinates, Laplace’s equation becomes

    φ_rr + (1/r) φ_r + (1/r²) φ_θθ = 0.
Hint, Solution
Exercise 8.20
Determine which of the following functions are the real parts of an analytic function.
1. u(x, y) = x³ − y³
2. u(x, y) = sinh x cos y + x
3. u(r, θ) = r^n cos(nθ)
and find f(z) for those that are.
Hint, Solution
Exercise 8.21
Consider steady, incompressible, inviscid, irrotational flow governed by the Laplace equation. De-
termine the form of the velocity potential and stream function contours for the complex potentials
1. Φ(z) = φ(x, y) + ıψ(x, y) = log z + ı log z
2. Φ(z) = log(z − 1) + log(z + 1)
Plot and describe the features of the flows you are considering.
Hint, Solution
Exercise 8.22
1. Classify all the singularities (removable, poles, isolated essential, branch points, non-isolated essential) of the following functions in the extended complex plane.

    (a) z/(z² + 1)
    (b) 1/sin z
    (c) log(1 + z²)
    (d) z sin(1/z)
    (e) tan⁻¹(z)/(z sinh²(πz))

2. Construct functions that have the following zeros or singularities:

    (a) a simple zero at z = ı and an isolated essential singularity at z = 1.
    (b) a removable singularity at z = 3, a pole of order 6 at z = −ı and an essential singularity at z = ∞.
Hint, Solution
8.7 Hints
Complex Derivatives
Hint 8.1
Hint 8.2
Start with the Cauchy-Riemann equation and then differentiate with respect to x.
Hint 8.3
Read Example 8.1.3 and use Result 8.1.1.
Hint 8.4
Use Result 8.1.1.
Hint 8.5
Take the logarithm of the equation to get a linear equation.
Cauchy-Riemann Equations
Hint 8.6
Hint 8.7
Hint 8.8
For the first part use the result of Exercise 8.3.
Hint 8.9
Use the Cauchy-Riemann equations.
Hint 8.10
Hint 8.11
To evaluate u_x(0, 0), etc., use the definition of differentiation. Try to find f′(0) with the definition of complex differentiation. Consider ∆z = ∆r e^{ıθ}.

Hint 8.12
To evaluate u_x(0, 0), etc., use the definition of differentiation. Try to find f′(0) with the definition of complex differentiation. Consider ∆z = ∆r e^{ıθ}.
Hint 8.13
Hint 8.14
Hint 8.15
Hint 8.16
Hint 8.17
Hint 8.18
Hint 8.19
Hint 8.20
Hint 8.21
Hint 8.22
CONTINUE
8.8 Solutions
Complex Derivatives
Solution 8.1
1. We consider L’Hospital’s rule.

    lim_{z→z₀} f(z)/g(z) = f′(z₀)/g′(z₀)

We start with the right side and show that it is equal to the left side. First we apply the definition of complex differentiation.

    f′(z₀)/g′(z₀) = [lim_{ε→0} (f(z₀ + ε) − f(z₀))/ε] / [lim_{δ→0} (g(z₀ + δ) − g(z₀))/δ]
                  = [lim_{ε→0} f(z₀ + ε)/ε] / [lim_{δ→0} g(z₀ + δ)/δ],

using f(z₀) = g(z₀) = 0. Since both of the limits exist, we may take the limits with ε = δ.

    f′(z₀)/g′(z₀) = lim_{ε→0} f(z₀ + ε)/g(z₀ + ε) = lim_{z→z₀} f(z)/g(z)

This proves L’Hospital’s rule.

2.

    lim_{z→ı} (1 + z²)/(2 + 2z⁶) = [2z/(12z⁵)]_{z=ı} = 1/6

    lim_{z→ıπ} sinh(z)/(e^z + 1) = [cosh(z)/e^z]_{z=ıπ} = 1
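Both limits are easy to confirm by machine. A minimal sympy sketch (library assumed):

```python
# Verify the two limits in Solution 8.1.
import sympy as sp

z = sp.symbols('z')
print(sp.limit((1 + z**2) / (2 + 2*z**6), z, sp.I))           # 1/6
print(sp.limit(sp.sinh(z) / (sp.exp(z) + 1), z, sp.I*sp.pi))  # 1
```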
Solution 8.2
We start with the Cauchy-Riemann equation and then differentiate with respect to x.

    φ_x = −ı φ_y
    φ_xx = −ı φ_yx

We interchange the order of differentiation.

    (φ_x)_x = −ı (φ_x)_y
    (f′)_x = −ı (f′)_y

Since f′(z) satisfies the Cauchy-Riemann equation and its partial derivatives exist and are continuous, it is analytic.
Solution 8.3
We calculate the complex derivative in the coordinate directions.

    df/dz = (∂(r e^{ıθ})/∂r)^{−1} ∂φ/∂r = e^{−ıθ} ∂φ/∂r,
    df/dz = (∂(r e^{ıθ})/∂θ)^{−1} ∂φ/∂θ = −(ı/r) e^{−ıθ} ∂φ/∂θ.

We can write this in operator notation.

    d/dz = e^{−ıθ} ∂/∂r = −(ı/r) e^{−ıθ} ∂/∂θ
Solution 8.4
1. Consider f(x, y) = sin x cosh y − ı cos x sinh y. The derivatives in the x and y directions are

    ∂f/∂x = cos x cosh y + ı sin x sinh y
    −ı ∂f/∂y = −cos x cosh y − ı sin x sinh y

These derivatives exist and are everywhere continuous. We equate the expressions to get a set of two equations.

    cos x cosh y = −cos x cosh y,  sin x sinh y = −sin x sinh y
    cos x cosh y = 0,  sin x sinh y = 0
    x = π/2 + nπ and (x = mπ or y = 0)

The function may be differentiable only at the points

    x = π/2 + nπ,  y = 0.

Thus the function is nowhere analytic.

2. Consider f(x, y) = x² − y² + x + ı(2xy − y). The derivatives in the x and y directions are

    ∂f/∂x = 2x + 1 + ı2y
    −ı ∂f/∂y = ı2y + 2x − 1

These derivatives exist and are everywhere continuous. We equate the expressions to get a set of two equations.

    2x + 1 = 2x − 1,  2y = 2y.

Since this set of equations has no solutions, there are no points at which the function is differentiable. The function is nowhere analytic.
Solution 8.5
    f(z₁ + z₂) = f(z₁) f(z₂)
    log(f(z₁ + z₂)) = log(f(z₁)) + log(f(z₂))

We define g(z) = log(f(z)).

    g(z₁ + z₂) = g(z₁) + g(z₂)

This is a linear equation which has exactly the solutions

    g(z) = cz.

Thus f(z) has the solutions

    f(z) = e^{cz},

where c is any complex constant. We can write this constant in terms of f′(0). We differentiate the original equation with respect to z₁ and then substitute z₁ = 0.

    f′(z₁ + z₂) = f′(z₁) f(z₂)
    f′(z₂) = f′(0) f(z₂)
    f′(z) = f′(0) f(z)

We substitute in the form of the solution.

    c e^{cz} = f′(0) e^{cz}
    c = f′(0)

Thus we see that

    f(z) = e^{f′(0) z}.
Cauchy-Riemann Equations
Solution 8.6
Constant Real Part. First assume that f(z) has constant real part. We solve the Cauchy-Riemann equations to determine the imaginary part.

    u_x = v_y,  u_y = −v_x
    v_x = 0,  v_y = 0

We integrate the first equation to obtain v = a + g(y) where a is a constant and g(y) is an arbitrary function. Then we substitute this into the second equation to determine g(y).

    g′(y) = 0
    g(y) = b

We see that the imaginary part of f(z) is a constant and conclude that f(z) is constant.

Constant Imaginary Part. Next assume that f(z) has constant imaginary part. We solve the Cauchy-Riemann equations to determine the real part.

    u_x = v_y,  u_y = −v_x
    u_x = 0,  u_y = 0

We integrate the first equation to obtain u = a + g(y) where a is a constant and g(y) is an arbitrary function. Then we substitute this into the second equation to determine g(y).

    g′(y) = 0
    g(y) = b

We see that the real part of f(z) is a constant and conclude that f(z) is constant.

Constant Modulus. Finally assume that f(z) has constant modulus.

    |f(z)| = constant
    √(u² + v²) = constant
    u² + v² = constant

We differentiate this equation with respect to x and y.

    2u u_x + 2v v_x = 0,  2u u_y + 2v v_y = 0

In matrix form,

    [ u_x  v_x ] [ u ]   [ 0 ]
    [ u_y  v_y ] [ v ] = [ 0 ].

This system has non-trivial solutions for u and v only if the matrix is singular. (The trivial solution u = v = 0 is the constant function f(z) = 0.) We set the determinant of the matrix to zero.

    u_x v_y − u_y v_x = 0

We use the Cauchy-Riemann equations to write this in terms of u_x and u_y.

    u_x² + u_y² = 0
    u_x = u_y = 0

Since its partial derivatives vanish, u is a constant. From the Cauchy-Riemann equations we see that the partial derivatives of v vanish as well, so it is constant. We conclude that f(z) is a constant.

Constant Modulus. Here is another method for the constant modulus case. We solve the Cauchy-Riemann equations in polar form to determine the argument of f(z) = R(x, y) e^{ıΘ(x,y)}. Since the function has constant modulus R, its partial derivatives vanish.

    R_x = R Θ_y,  R_y = −R Θ_x
    R Θ_y = 0,  R Θ_x = 0

The equations are satisfied for R = 0. For this case, f(z) = 0. We consider nonzero R.

    Θ_y = 0,  Θ_x = 0

We see that the argument of f(z) is a constant and conclude that f(z) is constant.
Solution 8.7
First we verify that the Cauchy-Riemann equations are satisfied for z ≠ 0. Note that the form

    f_x = −ı f_y

will be far more convenient than the form

    u_x = v_y,  u_y = −v_x

for this problem.

    f_x = 4(x + ıy)^{−5} e^{−(x+ıy)^{−4}}
    −ı f_y = −ı 4(x + ıy)^{−5} ı e^{−(x+ıy)^{−4}} = 4(x + ıy)^{−5} e^{−(x+ıy)^{−4}}

The Cauchy-Riemann equations are satisfied for z ≠ 0.
Now we consider the point z = 0.

    f_x(0, 0) = lim_{∆x→0} (f(∆x, 0) − f(0, 0))/∆x = lim_{∆x→0} e^{−∆x^{−4}}/∆x = 0

    −ı f_y(0, 0) = −ı lim_{∆y→0} (f(0, ∆y) − f(0, 0))/∆y = −ı lim_{∆y→0} e^{−∆y^{−4}}/∆y = 0

The Cauchy-Riemann equations are satisfied for z = 0.
f(z) is not analytic at the point z = 0. We show this by calculating the derivative.

    f′(0) = lim_{∆z→0} (f(∆z) − f(0))/∆z = lim_{∆z→0} f(∆z)/∆z

Let ∆z = ∆r e^{ıθ}, that is, we approach the origin at an angle of θ.

    f′(0) = lim_{∆r→0} f(∆r e^{ıθ})/(∆r e^{ıθ}) = lim_{∆r→0} e^{−∆r^{−4} e^{−ı4θ}}/(∆r e^{ıθ})

For most values of θ the limit does not exist. Consider θ = π/4.

    f′(0) = lim_{∆r→0} e^{∆r^{−4}}/(∆r e^{ıπ/4}) = ∞

Because the limit does not exist, the function is not differentiable at z = 0. Recall that satisfying the Cauchy-Riemann equations is a necessary, but not a sufficient condition for differentiability.
Solution 8.8
1. We find the Cauchy-Riemann equations for

    f(z) = R(r, θ) e^{ıΘ(r,θ)}.

From Exercise 8.3 we know that the complex derivative in the polar coordinate directions is

    d/dz = e^{−ıθ} ∂/∂r = −(ı/r) e^{−ıθ} ∂/∂θ.

We equate the derivatives in the two directions.

    e^{−ıθ} ∂/∂r (R e^{ıΘ}) = −(ı/r) e^{−ıθ} ∂/∂θ (R e^{ıΘ})
    (R_r + ıR Θ_r) e^{ıΘ} = −(ı/r)(R_θ + ıR Θ_θ) e^{ıΘ}

We divide by e^{ıΘ} and equate the real and imaginary components to obtain the Cauchy-Riemann equations.

    R_r = (R/r) Θ_θ,  (1/r) R_θ = −R Θ_r

2. We find the Cauchy-Riemann equations for

    f(z) = R(x, y) e^{ıΘ(x,y)}.

We equate the derivatives in the x and y directions.

    ∂/∂x (R e^{ıΘ}) = −ı ∂/∂y (R e^{ıΘ})
    (R_x + ıR Θ_x) e^{ıΘ} = −ı (R_y + ıR Θ_y) e^{ıΘ}

We divide by e^{ıΘ} and equate the real and imaginary components to obtain the Cauchy-Riemann equations.

    R_x = R Θ_y,  R_y = −R Θ_x
Solution 8.9
1. A necessary condition for analyticity in an open set is that the Cauchy-Riemann equations are satisfied in that set. We write e^{z̄} in Cartesian form.

    e^{z̄} = e^{x−ıy} = e^x cos y − ı e^x sin y.

Now we determine where u = e^x cos y and v = −e^x sin y satisfy the Cauchy-Riemann equations.

    u_x = v_y,  u_y = −v_x
    e^x cos y = −e^x cos y,  −e^x sin y = e^x sin y
    cos y = 0,  sin y = 0
    y = π/2 + πm,  y = πn

Thus we see that the Cauchy-Riemann equations are not satisfied anywhere. e^{z̄} is nowhere analytic.

2. Since f(z) = u + ıv is analytic, u and v satisfy the Cauchy-Riemann equations and their first partial derivatives are continuous. Note that f(z̄) = u(x, −y) + ıv(x, −y), so its conjugate is

    f̄(z̄) = u(x, −y) − ıv(x, −y).

We define f̄(z̄) ≡ µ(x, y) + ıν(x, y) = u(x, −y) − ıv(x, −y). Now we see if µ and ν satisfy the Cauchy-Riemann equations.

    µ_x = ν_y,  µ_y = −ν_x
    (u(x, −y))_x = (−v(x, −y))_y,  (u(x, −y))_y = −(−v(x, −y))_x
    u_x(x, −y) = v_y(x, −y),  −u_y(x, −y) = v_x(x, −y)
    u_x = v_y,  u_y = −v_x

Thus we see that the Cauchy-Riemann equations for µ and ν are satisfied if and only if the Cauchy-Riemann equations for u and v are satisfied. The continuity of the first partial derivatives of u and v implies the same of µ and ν. Thus f̄(z̄) is analytic.
Solution 8.10
1. The necessary condition for a function f(z) = u + ıv to be differentiable at a point is that the Cauchy-Riemann equations hold and the first partial derivatives of u and v are continuous at that point.

(a)

    f(z) = x³ + y³ + ı0

The Cauchy-Riemann equations are

    u_x = v_y and u_y = −v_x
    3x² = 0 and 3y² = 0
    x = 0 and y = 0

The first partial derivatives are continuous. Thus we see that the function is differentiable only at the point z = 0.

(b)

    f(z) = (x − 1)/((x − 1)² + y²) − ı y/((x − 1)² + y²)

The Cauchy-Riemann equations are

    u_x = v_y and u_y = −v_x
    (−(x − 1)² + y²)/((x − 1)² + y²)² = (−(x − 1)² + y²)/((x − 1)² + y²)²
    and 2(x − 1)y/((x − 1)² + y²)² = 2(x − 1)y/((x − 1)² + y²)²

The Cauchy-Riemann equations are each identities. The first partial derivatives are continuous everywhere except the point x = 1, y = 0. Thus the function is differentiable everywhere except z = 1.

2. (a) The function is not differentiable in any open set. Thus the function is nowhere analytic.

(b) The function is differentiable everywhere except z = 1. Thus the function is analytic everywhere except z = 1.

3. (a) First we determine if the function is harmonic.

    v = x² − y²
    v_xx + v_yy = 0
    2 − 2 = 0

The function is harmonic in the complex plane and this is the imaginary part of some analytic function. By inspection, we see that this function is

    ız² + c = −2xy + c + ı(x² − y²),

where c is a real constant. We can also find the function by solving the Cauchy-Riemann equations.

    u_x = v_y and u_y = −v_x
    u_x = −2y and u_y = −2x

We integrate the first equation.

    u = −2xy + g(y)

Here g(y) is a function of integration. We substitute this into the second Cauchy-Riemann equation to determine g(y).

    u_y = −2x
    −2x + g′(y) = −2x
    g′(y) = 0
    g(y) = c

    u = −2xy + c
    f(z) = −2xy + c + ı(x² − y²)
    f(z) = ız² + c

(b) First we determine if the function is harmonic.

    v = 3x²y
    v_xx + v_yy = 6y

The function is not harmonic. It is not the imaginary part of some analytic function.
Solution 8.11
We write the real and imaginary parts of f(z) = u + ıv.

    u = x^{4/3} y^{5/3}/(x² + y²) for z ≠ 0, u = 0 for z = 0;
    v = x^{5/3} y^{4/3}/(x² + y²) for z ≠ 0, v = 0 for z = 0.

The Cauchy-Riemann equations are

    u_x = v_y,  u_y = −v_x.

We calculate the partial derivatives of u and v at the point x = y = 0 using the definition of differentiation.

    u_x(0, 0) = lim_{∆x→0} (u(∆x, 0) − u(0, 0))/∆x = lim_{∆x→0} (0 − 0)/∆x = 0
    v_x(0, 0) = lim_{∆x→0} (v(∆x, 0) − v(0, 0))/∆x = lim_{∆x→0} (0 − 0)/∆x = 0
    u_y(0, 0) = lim_{∆y→0} (u(0, ∆y) − u(0, 0))/∆y = lim_{∆y→0} (0 − 0)/∆y = 0
    v_y(0, 0) = lim_{∆y→0} (v(0, ∆y) − v(0, 0))/∆y = lim_{∆y→0} (0 − 0)/∆y = 0

Since u_x(0, 0) = u_y(0, 0) = v_x(0, 0) = v_y(0, 0) = 0 the Cauchy-Riemann equations are satisfied.
f(z) is not analytic at the point z = 0. We show this by calculating the derivative there.

    f′(0) = lim_{∆z→0} (f(∆z) − f(0))/∆z = lim_{∆z→0} f(∆z)/∆z

We let ∆z = ∆r e^{ıθ}, that is, we approach the origin at an angle of θ. Then x = ∆r cos θ and y = ∆r sin θ.

    f′(0) = lim_{∆r→0} f(∆r e^{ıθ})/(∆r e^{ıθ})
          = lim_{∆r→0} [(∆r^{4/3} cos^{4/3}θ ∆r^{5/3} sin^{5/3}θ + ı ∆r^{5/3} cos^{5/3}θ ∆r^{4/3} sin^{4/3}θ)/∆r²] / (∆r e^{ıθ})
          = lim_{∆r→0} (cos^{4/3}θ sin^{5/3}θ + ı cos^{5/3}θ sin^{4/3}θ)/e^{ıθ}

The value of the limit depends on θ and is not a constant. Thus this limit does not exist. The function is not differentiable at z = 0.
Solution 8.12
    u = (x³ − y³)/(x² + y²) for z ≠ 0, u = 0 for z = 0;
    v = (x³ + y³)/(x² + y²) for z ≠ 0, v = 0 for z = 0.

The Cauchy-Riemann equations are

    u_x = v_y,  u_y = −v_x.

The partial derivatives of u and v at the point x = y = 0 are

    u_x(0, 0) = lim_{∆x→0} (u(∆x, 0) − u(0, 0))/∆x = lim_{∆x→0} (∆x − 0)/∆x = 1,
    v_x(0, 0) = lim_{∆x→0} (v(∆x, 0) − v(0, 0))/∆x = lim_{∆x→0} (∆x − 0)/∆x = 1,
    u_y(0, 0) = lim_{∆y→0} (u(0, ∆y) − u(0, 0))/∆y = lim_{∆y→0} (−∆y − 0)/∆y = −1,
    v_y(0, 0) = lim_{∆y→0} (v(0, ∆y) − v(0, 0))/∆y = lim_{∆y→0} (∆y − 0)/∆y = 1.

We see that the Cauchy-Riemann equations are satisfied at x = y = 0.
f(z) is not analytic at the point z = 0. We show this by calculating the derivative.

    f′(0) = lim_{∆z→0} (f(∆z) − f(0))/∆z = lim_{∆z→0} f(∆z)/∆z

Let ∆z = ∆r e^{ıθ}, that is, we approach the origin at an angle of θ. Then x = ∆r cos θ and y = ∆r sin θ.

    f′(0) = lim_{∆r→0} f(∆r e^{ıθ})/(∆r e^{ıθ})
          = lim_{∆r→0} [((1 + ı)∆r³ cos³θ − (1 − ı)∆r³ sin³θ)/∆r²] / (∆r e^{ıθ})
          = lim_{∆r→0} ((1 + ı) cos³θ − (1 − ı) sin³θ)/e^{ıθ}

The value of the limit depends on θ and is not a constant. Thus this limit does not exist. The function is not differentiable at z = 0. Recall that satisfying the Cauchy-Riemann equations is a necessary, but not a sufficient condition for differentiability.
Solution 8.13
We show that the logarithm log z = φ(r, θ) = Log r + ıθ satisfies the Cauchy-Riemann equations.

    φ_r = −(ı/r) φ_θ
    1/r = −(ı/r) ı
    1/r = 1/r

Since the logarithm satisfies the Cauchy-Riemann equations and the first partial derivatives are continuous for z ≠ 0, the logarithm is analytic for z ≠ 0.
Now we compute the derivative.

    d/dz log z = e^{−ıθ} ∂/∂r (Log r + ıθ) = e^{−ıθ} (1/r) = 1/z
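A quick numerical cross-check of d/dz log z = 1/z at a sample point, using difference quotients along two different directions (a sketch using the standard cmath module):

```python
# Corroborate d/dz log z = 1/z: the difference quotient is direction-independent.
import cmath

z0 = 0.7 + 1.1j
for h in (1e-8, 1e-8j):                 # approach along x and along iy
    dq = (cmath.log(z0 + h) - cmath.log(z0)) / h
    print(dq, 1 / z0)                   # both quotients match 1/z0
```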
Solution 8.14
The complex derivative in the coordinate directions is

    d/dz = e^{−ıθ} ∂/∂r = −(ı/r) e^{−ıθ} ∂/∂θ.

We substitute f = u + ıv into this identity to obtain the Cauchy-Riemann equation in polar coordinates.

    e^{−ıθ} ∂f/∂r = −(ı/r) e^{−ıθ} ∂f/∂θ
    ∂f/∂r = −(ı/r) ∂f/∂θ
    u_r + ıv_r = −(ı/r)(u_θ + ıv_θ)

We equate the real and imaginary parts.

    u_r = (1/r) v_θ,  v_r = −(1/r) u_θ
    u_r = (1/r) v_θ,  u_θ = −r v_r
Solution 8.15
Since w is analytic, u and v satisfy the Cauchy-Riemann equations,

    u_x = v_y and u_y = −v_x.

Using the chain rule we can write the derivatives with respect to x and y in terms of u and v.

    ∂/∂x = u_x ∂/∂u + v_x ∂/∂v
    ∂/∂y = u_y ∂/∂u + v_y ∂/∂v

Now we examine φ_x − ıφ_y.

    φ_x − ıφ_y = u_x Φ_u + v_x Φ_v − ı(u_y Φ_u + v_y Φ_v)
               = (u_x − ıu_y) Φ_u + (v_x − ıv_y) Φ_v
               = (u_x − ıu_y) Φ_u − ı(v_y + ıv_x) Φ_v

We use the Cauchy-Riemann equations to write u_y and v_y in terms of u_x and v_x.

    φ_x − ıφ_y = (u_x + ıv_x) Φ_u − ı(u_x + ıv_x) Φ_v

Recall that w′ = u_x + ıv_x = v_y − ıu_y.

    φ_x − ıφ_y = (dw/dz)(Φ_u − ıΦ_v)

Thus we see that

    ∂Φ/∂u − ı ∂Φ/∂v = (dw/dz)^{−1} (∂φ/∂x − ı ∂φ/∂y).

We write this in operator notation.

    ∂/∂u − ı ∂/∂v = (dw/dz)^{−1} (∂/∂x − ı ∂/∂y)

The complex conjugate of this relation is

    ∂/∂u + ı ∂/∂v = \overline{(dw/dz)^{−1}} (∂/∂x + ı ∂/∂y).

Now we apply both these operators to Φ = φ.

    (∂/∂u + ı ∂/∂v)(∂/∂u − ı ∂/∂v) Φ = \overline{(dw/dz)^{−1}} (∂/∂x + ı ∂/∂y) [(dw/dz)^{−1} (∂/∂x − ı ∂/∂y) φ]

    (∂²/∂u² + ı ∂²/∂u∂v − ı ∂²/∂v∂u + ∂²/∂v²) Φ
        = \overline{(dw/dz)^{−1}} [((∂/∂x + ı ∂/∂y)(dw/dz)^{−1}) (∂/∂x − ı ∂/∂y) + (dw/dz)^{−1} (∂/∂x + ı ∂/∂y)(∂/∂x − ı ∂/∂y)] φ

(w′)^{−1} is an analytic function. Recall that for analytic functions f, f′ = f_x = −ıf_y, so that f_x + ıf_y = 0. Thus the first term in the brackets vanishes, and the mixed partial derivatives on the left cancel.

    ∂²Φ/∂u² + ∂²Φ/∂v² = \overline{(dw/dz)^{−1}} (dw/dz)^{−1} (∂²/∂x² + ∂²/∂y²) φ

    ∂²Φ/∂u² + ∂²Φ/∂v² = |dw/dz|^{−2} (∂²φ/∂x² + ∂²φ/∂y²)
Solution 8.16
1. We consider

    f(z) = log |z| + ı arg(z) = log r + ıθ.

The Cauchy-Riemann equations in polar coordinates are

    u_r = (1/r) v_θ,  u_θ = −r v_r.

We calculate the derivatives.

    u_r = 1/r,  (1/r) v_θ = 1/r
    u_θ = 0,  −r v_r = 0

Since the Cauchy-Riemann equations are satisfied and the partial derivatives are continuous, f(z) is analytic in |z| > 0, |arg(z)| < π. The complex derivative in terms of polar coordinates is

    d/dz = e^{−ıθ} ∂/∂r = −(ı/r) e^{−ıθ} ∂/∂θ.

We use this to differentiate f(z).

    df/dz = e^{−ıθ} ∂/∂r [log r + ıθ] = e^{−ıθ} (1/r) = 1/z

2. Next we consider

    f(z) = √|z| e^{ı arg(z)/2} = √r e^{ıθ/2}.

The Cauchy-Riemann equations for polar coordinates and the polar form f(z) = R(r, θ) e^{ıΘ(r,θ)} are

    R_r = (R/r) Θ_θ,  (1/r) R_θ = −R Θ_r.

We calculate the derivatives for R = √r, Θ = θ/2.

    R_r = 1/(2√r),  (R/r) Θ_θ = 1/(2√r)
    (1/r) R_θ = 0,  −R Θ_r = 0

Since the Cauchy-Riemann equations are satisfied and the partial derivatives are continuous, f(z) is analytic in |z| > 0, |arg(z)| < π. The complex derivative in terms of polar coordinates is

    d/dz = e^{−ıθ} ∂/∂r = −(ı/r) e^{−ıθ} ∂/∂θ.

We use this to differentiate f(z).

    df/dz = e^{−ıθ} ∂/∂r [√r e^{ıθ/2}] = e^{−ıθ} e^{ıθ/2}/(2√r) = 1/(2√z)
Solution 8.17
1. We consider the function

    u = x Log r − y arctan(x, y) = r cos θ Log r − rθ sin θ.

We compute the Laplacian.

    ∆u = (1/r) ∂/∂r (r ∂u/∂r) + (1/r²) ∂²u/∂θ²
       = (1/r) ∂/∂r (cos θ (r + r Log r) − rθ sin θ) + (1/r²) (r(θ sin θ − 2 cos θ) − r cos θ Log r)
       = (1/r) (2 cos θ + cos θ Log r − θ sin θ) + (1/r) (θ sin θ − 2 cos θ − cos θ Log r)
       = 0

The function u is harmonic. We find the harmonic conjugate v by solving the Cauchy-Riemann equations.

    v_r = −(1/r) u_θ,  v_θ = r u_r
    v_r = sin θ (1 + Log r) + θ cos θ,  v_θ = r (cos θ (1 + Log r) − θ sin θ)

We integrate the first equation with respect to r to determine v to within the constant of integration g(θ).

    v = r(sin θ Log r + θ cos θ) + g(θ)

We differentiate this expression with respect to θ.

    v_θ = r (cos θ (1 + Log r) − θ sin θ) + g′(θ)

We compare this to the second Cauchy-Riemann equation to see that g′(θ) = 0. Thus g(θ) = c. We have determined the harmonic conjugate.

    v = r(sin θ Log r + θ cos θ) + c

The corresponding analytic function is

    f(z) = r cos θ Log r − rθ sin θ + ı(r sin θ Log r + rθ cos θ + c).

On the positive real axis, (θ = 0), the function has the value

    f(z = r) = r Log r + ıc.

We use analytic continuation to determine the function in the complex plane.

    f(z) = z log z + ıc

2. We consider the function

    u = Arg(z) = θ.

We compute the Laplacian.

    ∆u = (1/r) ∂/∂r (r ∂u/∂r) + (1/r²) ∂²u/∂θ² = 0

The function u is harmonic. We find the harmonic conjugate v by solving the Cauchy-Riemann equations.

    v_r = −(1/r) u_θ,  v_θ = r u_r
    v_r = −1/r,  v_θ = 0

We integrate the first equation with respect to r to determine v to within the constant of integration g(θ).

    v = −Log r + g(θ)

We differentiate this expression with respect to θ.

    v_θ = g′(θ)

We compare this to the second Cauchy-Riemann equation to see that g′(θ) = 0. Thus g(θ) = c. We have determined the harmonic conjugate.

    v = −Log r + c

The corresponding analytic function is

    f(z) = θ − ı Log r + ıc.

On the positive real axis, (θ = 0), the function has the value

    f(z = r) = −ı Log r + ıc.

We use analytic continuation to determine the function in the complex plane.

    f(z) = −ı log z + ıc

3. We consider the function

    u = r^n cos(nθ).

We compute the Laplacian.

    ∆u = (1/r) ∂/∂r (r ∂u/∂r) + (1/r²) ∂²u/∂θ²
       = (1/r) ∂/∂r (n r^n cos(nθ)) − n² r^{n−2} cos(nθ)
       = n² r^{n−2} cos(nθ) − n² r^{n−2} cos(nθ)
       = 0

The function u is harmonic. We find the harmonic conjugate v by solving the Cauchy-Riemann equations.

    v_r = −(1/r) u_θ,  v_θ = r u_r
    v_r = n r^{n−1} sin(nθ),  v_θ = n r^n cos(nθ)

We integrate the first equation with respect to r to determine v to within the constant of integration g(θ).

    v = r^n sin(nθ) + g(θ)

We differentiate this expression with respect to θ.

    v_θ = n r^n cos(nθ) + g′(θ)

We compare this to the second Cauchy-Riemann equation to see that g′(θ) = 0. Thus g(θ) = c. We have determined the harmonic conjugate.

    v = r^n sin(nθ) + c

The corresponding analytic function is

    f(z) = r^n cos(nθ) + ı r^n sin(nθ) + ıc.

On the positive real axis, (θ = 0), the function has the value

    f(z = r) = r^n + ıc.

We use analytic continuation to determine the function in the complex plane.

    f(z) = z^n + ıc

4. We consider the function

    u = y/r² = sin θ / r.

We compute the Laplacian.

    ∆u = (1/r) ∂/∂r (r ∂u/∂r) + (1/r²) ∂²u/∂θ²
       = (1/r) ∂/∂r (−sin θ / r) − sin θ / r³
       = sin θ / r³ − sin θ / r³
       = 0

The function u is harmonic. We find the harmonic conjugate v by solving the Cauchy-Riemann equations.

    v_r = −(1/r) u_θ,  v_θ = r u_r
    v_r = −cos θ / r²,  v_θ = −sin θ / r

We integrate the first equation with respect to r to determine v to within the constant of integration g(θ).

    v = cos θ / r + g(θ)

We differentiate this expression with respect to θ.

    v_θ = −sin θ / r + g′(θ)

We compare this to the second Cauchy-Riemann equation to see that g′(θ) = 0. Thus g(θ) = c. We have determined the harmonic conjugate.

    v = cos θ / r + c

The corresponding analytic function is

    f(z) = sin θ / r + ı cos θ / r + ıc.

On the positive real axis, (θ = 0), the function has the value

    f(z = r) = ı/r + ıc.

We use analytic continuation to determine the function in the complex plane.

    f(z) = ı/z + ıc
Solution 8.18
1. We calculate the first partial derivatives of u = (x − y)² and v = 2(x + y).

    u_x = 2(x − y)
    u_y = 2(y − x)
    v_x = 2
    v_y = 2

We substitute these expressions into the Cauchy-Riemann equations.

    u_x = v_y,  u_y = −v_x
    2(x − y) = 2,  2(y − x) = −2
    x − y = 1,  y − x = −1
    y = x − 1

Since the Cauchy-Riemann equations are satisfied along the line y = x − 1 and the partial derivatives are continuous, the function f(z) is differentiable there. Since the function is not differentiable in a neighborhood of any point, it is nowhere analytic.

2. We calculate the first partial derivatives of u and v.

    u_x = 2 e^{x²−y²} (x cos(2xy) − y sin(2xy))
    u_y = −2 e^{x²−y²} (y cos(2xy) + x sin(2xy))
    v_x = 2 e^{x²−y²} (y cos(2xy) + x sin(2xy))
    v_y = 2 e^{x²−y²} (x cos(2xy) − y sin(2xy))

Since the Cauchy-Riemann equations, u_x = v_y and u_y = −v_x, are satisfied everywhere and the partial derivatives are continuous, f(z) is everywhere differentiable. Since f(z) is differentiable in a neighborhood of every point, it is analytic in the complex plane. (f(z) is entire.)
Now to evaluate the derivative. The complex derivative is the derivative in any direction. We choose the x direction.

    f′(z) = u_x + ıv_x
    f′(z) = 2 e^{x²−y²} (x cos(2xy) − y sin(2xy)) + ı2 e^{x²−y²} (y cos(2xy) + x sin(2xy))
    f′(z) = 2 e^{x²−y²} ((x + ıy) cos(2xy) + (−y + ıx) sin(2xy))

Finding the derivative is easier if we first write f(z) in terms of the complex variable z and use complex differentiation.

    f(z) = e^{x²−y²} (cos(2xy) + ı sin(2xy))
    f(z) = e^{x²−y²} e^{ı2xy}
    f(z) = e^{(x+ıy)²}
    f(z) = e^{z²}
    f′(z) = 2z e^{z²}
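Both the identification f(z) = e^{z²} and the derivative can be confirmed symbolically. A minimal sympy sketch (library assumed):

```python
# Check f = exp(x**2 - y**2)*(cos(2xy) + i*sin(2xy)) = exp(z**2), and that the
# derivative in the x direction equals 2*z*exp(z**2).
import sympy as sp

x, y = sp.symbols('x y', real=True)
z = x + sp.I * y
f = sp.exp(x**2 - y**2) * (sp.cos(2*x*y) + sp.I * sp.sin(2*x*y))

diff1 = sp.expand(f - sp.exp(z**2), complex=True)
print(sp.simplify(diff1))                                  # 0

diff2 = sp.expand(sp.diff(f, x) - 2*z*sp.exp(z**2), complex=True)
print(sp.simplify(diff2))                                  # 0
```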
Solution 8.19
1. Assume that the Cauchy-Riemann equations in Cartesian coordinates

    u_x = v_y,  u_y = −v_x

are satisfied and these partial derivatives are continuous at a point z. We write the derivatives in polar coordinates in terms of derivatives in Cartesian coordinates to verify the Cauchy-Riemann equations in polar coordinates. First we calculate the derivatives.

    x = r cos θ,  y = r sin θ
    w_r = (∂x/∂r) w_x + (∂y/∂r) w_y = cos θ w_x + sin θ w_y
    w_θ = (∂x/∂θ) w_x + (∂y/∂θ) w_y = −r sin θ w_x + r cos θ w_y

Then we verify the Cauchy-Riemann equations in polar coordinates.

    u_r = cos θ u_x + sin θ u_y = cos θ v_y − sin θ v_x = (1/r) v_θ

    (1/r) u_θ = −sin θ u_x + cos θ u_y = −sin θ v_y − cos θ v_x = −v_r

This proves that the Cauchy-Riemann equations in Cartesian coordinates hold only if the Cauchy-Riemann equations in polar coordinates hold. (Given that the partial derivatives are continuous.) Next we prove the converse.
Assume that the Cauchy-Riemann equations in polar coordinates

    u_r = (1/r) v_θ,  (1/r) u_θ = −v_r

are satisfied and these partial derivatives are continuous at a point z. We write the derivatives in Cartesian coordinates in terms of derivatives in polar coordinates to verify the Cauchy-Riemann equations in Cartesian coordinates. First we calculate the derivatives.

    r = √(x² + y²),  θ = arctan(x, y)
    w_x = (∂r/∂x) w_r + (∂θ/∂x) w_θ = (x/r) w_r − (y/r²) w_θ
    w_y = (∂r/∂y) w_r + (∂θ/∂y) w_θ = (y/r) w_r + (x/r²) w_θ

Then we verify the Cauchy-Riemann equations in Cartesian coordinates.

    u_x = (x/r) u_r − (y/r²) u_θ = (x/r²) v_θ + (y/r) v_r = v_y

    u_y = (y/r) u_r + (x/r²) u_θ = (y/r²) v_θ − (x/r) v_r = −v_x

This proves that the Cauchy-Riemann equations in polar coordinates hold only if the Cauchy-Riemann equations in Cartesian coordinates hold. We have demonstrated the equivalence of the two forms.

2. We verify that Log z is analytic for r > 0 and −π < θ < π using the polar form of the Cauchy-Riemann equations.

    Log z = ln r + ıθ
    u_r = (1/r) v_θ,  (1/r) u_θ = −v_r
    1/r = (1/r) · 1,  (1/r) · 0 = −0

Since the Cauchy-Riemann equations are satisfied and the partial derivatives are continuous for r > 0, Log z is analytic there. We calculate the value of the derivative using the polar differentiation formulas.

    d/dz Log z = e^{−ıθ} ∂/∂r (ln r + ıθ) = e^{−ıθ} (1/r) = 1/z

    d/dz Log z = (−ı/z) ∂/∂θ (ln r + ıθ) = (−ı/z) ı = 1/z

3. Let {x_i} denote rectangular coordinates in two dimensions and let {ξ_i} be an orthogonal coordinate system. The distance metric coefficients h_i are defined

    h_i = √((∂x₁/∂ξ_i)² + (∂x₂/∂ξ_i)²).

The Laplacian is

    ∇²u = (1/(h₁h₂)) [∂/∂ξ₁ ((h₂/h₁) ∂u/∂ξ₁) + ∂/∂ξ₂ ((h₁/h₂) ∂u/∂ξ₂)].

First we calculate the distance metric coefficients in polar coordinates.

    h_r = √((∂x/∂r)² + (∂y/∂r)²) = √(cos²θ + sin²θ) = 1
    h_θ = √((∂x/∂θ)² + (∂y/∂θ)²) = √(r² sin²θ + r² cos²θ) = r

Then we find the Laplacian.

    ∇²φ = (1/r) [∂/∂r (r φ_r) + ∂/∂θ ((1/r) φ_θ)]

In polar coordinates, Laplace’s equation is

    φ_rr + (1/r) φ_r + (1/r²) φ_θθ = 0.
Solution 8.20
1. We compute the Laplacian of u(x, y) = x³ − y³.

    ∇²u = 6x − 6y

Since u is not harmonic, it is not the real part of an analytic function.

2. We compute the Laplacian of u(x, y) = sinh x cos y + x.

    ∇²u = sinh x cos y − sinh x cos y = 0

Since u is harmonic, it is the real part of an analytic function. We determine v by solving the Cauchy-Riemann equations.

    v_x = −u_y,  v_y = u_x
    v_x = sinh x sin y,  v_y = cosh x cos y + 1

We integrate the first equation to determine v up to an arbitrary additive function of y.

    v = cosh x sin y + g(y)

We substitute this into the second Cauchy-Riemann equation. This will determine v up to an additive constant.

    v_y = cosh x cos y + 1
    cosh x cos y + g′(y) = cosh x cos y + 1
    g′(y) = 1
    g(y) = y + a

    v = cosh x sin y + y + a
    f(z) = sinh x cos y + x + ı(cosh x sin y + y + a)

Here a is a real constant. We write the function in terms of z.

    f(z) = sinh z + z + ıa

3. We compute the Laplacian of u(r, θ) = r^n cos(nθ).

    ∇²u = n(n − 1)r^{n−2} cos(nθ) + n r^{n−2} cos(nθ) − n² r^{n−2} cos(nθ) = 0

Since u is harmonic, it is the real part of an analytic function. We determine v by solving the Cauchy-Riemann equations.

    v_r = −(1/r) u_θ,  v_θ = r u_r
    v_r = n r^{n−1} sin(nθ),  v_θ = n r^n cos(nθ)

We integrate the first equation to determine v up to an arbitrary additive function of θ.

    v = r^n sin(nθ) + g(θ)

We substitute this into the second Cauchy-Riemann equation. This will determine v up to an additive constant.

    v_θ = n r^n cos(nθ)
    n r^n cos(nθ) + g′(θ) = n r^n cos(nθ)
    g′(θ) = 0
    g(θ) = a

    v = r^n sin(nθ) + a
    f(z) = r^n cos(nθ) + ı(r^n sin(nθ) + a)

Here a is a real constant. We write the function in terms of z.

    f(z) = z^n + ıa
Solution 8.21
1. We find the velocity potential φ and stream function ψ.

    Φ(z) = log z + ı log z
    Φ(z) = ln r + ıθ + ı(ln r + ıθ)
    φ = ln r − θ,  ψ = ln r + θ

A branch of these is plotted in Figure 8.7.

Figure 8.7: The velocity potential φ and stream function ψ for Φ(z) = log z + ı log z.

Next we find the stream lines, ψ = c.

    ln r + θ = c
    r = e^{c−θ}

These are spirals which go counter-clockwise as we follow them to the origin. See Figure 8.8.

Figure 8.8: Streamlines for ψ = ln r + θ.

Next we find the velocity field.

    v = ∇φ
    v = φ_r r̂ + (φ_θ/r) θ̂
    v = r̂/r − θ̂/r

The velocity field is shown in the first plot of Figure 8.9. We see that the fluid flows out from the origin along the spiral paths of the streamlines. The second plot shows the direction of the velocity field.

Figure 8.9: Velocity field and velocity direction field for φ = ln r − θ.
2. We find the velocity potential φ and stream function ψ.

    Φ(z) = log(z − 1) + log(z + 1)
    Φ(z) = ln |z − 1| + ı arg(z − 1) + ln |z + 1| + ı arg(z + 1)
    φ = ln |z² − 1|,  ψ = arg(z − 1) + arg(z + 1)

The velocity potential and a branch of the stream function are plotted in Figure 8.10.

Figure 8.10: The velocity potential φ and stream function ψ for Φ(z) = log(z − 1) + log(z + 1).

The stream lines, arg(z − 1) + arg(z + 1) = c, are plotted in Figure 8.11.

Figure 8.11: Streamlines for ψ = arg(z − 1) + arg(z + 1).

Next we find the velocity field.

    v = ∇φ
    v = [2x(x² + y² − 1) x̂ + 2y(x² + y² + 1) ŷ] / (x⁴ + 2x²(y² − 1) + (y² + 1)²)

The velocity field is shown in the first plot of Figure 8.12. The fluid is flowing out of sources at z = ±1. The second plot shows the direction of the velocity field.

Figure 8.12: Velocity field and velocity direction field for φ = ln |z² − 1|.
Solution 8.22
1. (a) We factor the denominator to see that there are first order poles at z = ±ı.
z
z2 + 1
=
z
(z − ı)(z + ı)
266
Since the function behaves like 1/z at infinity, it is analytic there.
(b) The denominator of 1/ sin z has first order zeros at z = nπ, n ∈ Z. Thus the function has
first order poles at these locations. Now we examine the point at infinity with the change
of variables z = 1/ζ.
1
sin z
=
1
sin(1/ζ)
=
ı2
eı/ζ − e−ı/ζ
We see that the point at infinity is a singularity of the function. Since the denominator
grows exponentially, there is no multiplicative factor of ζn
that will make the function
analytic at ζ = 0. We conclude that the point at infinity is an essential singularity. Since
there is no deleted neighborhood of the point at infinity that does contain first order poles
at the locations z = nπ, the point at infinity is a non-isolated singularity.
(c)
log 1 + z2
= log(z + ı) + log(z − ı)
There are branch points at z = ±ı. Since the argument of the logarithm is unbounded
as z → ∞ there is a branch point at infinity as well. Branch points are non-isolated
singularities.
(d)
z sin(1/z) = z (e^{ı/z} − e^{−ı/z})/(ı2)
The point z = 0 is a singularity. Since the function grows exponentially at z = 0, there is no multiplicative factor of z^n that will make the function analytic. Thus z = 0 is an essential singularity.
There are no other singularities in the finite complex plane. We examine the point at
infinity.
z sin(1/z) = (1/ζ) sin ζ
The point at infinity is a singularity. We take the limit ζ → 0 to demonstrate that it is a
removable singularity.
lim_{ζ→0} sin ζ/ζ = lim_{ζ→0} cos ζ/1 = 1
(e)
tan⁻¹(z)/(z sinh²(πz)) = ı log((ı + z)/(ı − z)) / (2z sinh²(πz))
There are branch points at z = ±ı due to the logarithm. These are non-isolated singularities. Note that sinh(z) has first order zeros at z = ınπ, n ∈ Z. The arctangent has a first order zero at z = 0. Thus there is a second order pole at z = 0. There are second order poles at z = ın, n ∈ Z \ {0}, due to the hyperbolic sine. Since the hyperbolic sine has an essential singularity at infinity, the function has an essential singularity at infinity as well. The point at infinity is a non-isolated singularity because there is no neighborhood of infinity that does not contain second order poles.
2. (a) (z − ı) e^{1/(z−1)} has a simple zero at z = ı and an isolated essential singularity at z = 1.
(b) sin(z − 3)/((z − 3)(z + ı)⁶)
has a removable singularity at z = 3, a pole of order 6 at z = −ı and an essential singularity at z = ∞.
Chapter 9
Analytic Continuation
For every complex problem, there is a solution that is simple, neat, and wrong.
- H. L. Mencken
9.1 Analytic Continuation
Suppose there is a function f1(z) that is analytic in the domain D1 and another function f2(z) that is analytic in the domain D2. (See Figure 9.1.)
Figure 9.1: Overlapping Domains
If the two domains overlap and f1(z) = f2(z) in the overlap region D1 ∩ D2, then f2(z) is called
an analytic continuation of f1(z). This is an appropriate name since f2(z) continues the definition of
f1(z) outside of its original domain of definition D1. We can define a function f(z) that is analytic
in the union of the domains D1 ∪ D2. On the domain D1 we have f(z) = f1(z) and f(z) = f2(z) on
D2. f1(z) and f2(z) are called function elements. There is an analytic continuation even if the two
domains only share an arc and not a two dimensional region.
With more overlapping domains D3, D4, . . . we could perhaps extend f1(z) to more of the complex
plane. Sometimes it is impossible to extend a function beyond the boundary of a domain. This is
known as a natural boundary. If a function f1(z) is analytically continued to a domain Dn along
two different paths, (See Figure 9.2.), then the two analytic continuations are identical as long as
the paths do not enclose a branch point of the function. This is the uniqueness theorem of analytic
continuation.
Consider an analytic function f(z) defined in the domain D. Suppose that f(z) = 0 on the arc
AB, (see Figure 9.3.) Then f(z) = 0 in all of D.
Consider a point ζ on AB. The Taylor series expansion of f(z) about the point z = ζ converges
in a circle C at least up to the boundary of D. The derivative of f(z) at the point z = ζ is
f′(ζ) = lim_{∆z→0} [f(ζ + ∆z) − f(ζ)]/∆z
Figure 9.2: Two Paths of Analytic Continuation
Figure 9.3: Domain Containing Arc Along Which f(z) Vanishes
If ∆z is in the direction of the arc, then f′(ζ) vanishes, as well as all higher derivatives: f′(ζ) = f″(ζ) = f‴(ζ) = · · · = 0. Thus we see that f(z) = 0 inside C. By taking Taylor series expansions about points on AB or inside of C we see that f(z) = 0 in D.
Result 9.1.1 Let f1(z) and f2(z) be analytic functions defined in D. If
f1(z) = f2(z) for the points in a region or on an arc in D, then f1(z) = f2(z)
for all points in D.
To prove Result 9.1.1, we define the analytic function g(z) = f1(z) − f2(z). Since g(z) vanishes
in the region or on the arc, then g(z) = 0 and hence f1(z) = f2(z) for all points in D.
Result 9.1.2 Consider analytic functions f1(z) and f2(z) defined on the domains D1 and D2, respectively. Suppose that D1 ∩ D2 is a region or an arc and that f1(z) = f2(z) for all z ∈ D1 ∩ D2. (See Figure 9.4.) Then the function
f(z) = { f1(z) for z ∈ D1,   f2(z) for z ∈ D2 }
is analytic in D1 ∪ D2.
Figure 9.4: Domains that Intersect in a Region or an Arc
Result 9.1.2 follows directly from Result 9.1.1.
9.2 Analytic Continuation of Sums
Example 9.2.1 Consider the function
f1(z) = Σ_{n=0}^∞ z^n.
The sum converges uniformly for D1 = |z| ≤ r < 1. Since the derivative also converges in this
domain, the function is analytic there.
Figure 9.5: Domain of Convergence for Σ_{n=0}^∞ z^n.
Now consider the function
f2(z) = 1/(1 − z).
This function is analytic everywhere except the point z = 1. On the domain D1,
f2(z) = 1/(1 − z) = Σ_{n=0}^∞ z^n = f1(z)
Analytic continuation tells us that there is a function that is analytic on the union of the two
domains. Here, the domain is the entire z plane except the point z = 1 and the function is
f(z) = 1/(1 − z).
1/(1 − z) is said to be an analytic continuation of Σ_{n=0}^∞ z^n.
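The relationship between the series and its continuation is easy to check numerically. The following Python sketch is our own illustration, not part of the text; the test points are assumed values. It sums the series at a point inside the unit disk, where it agrees with 1/(1 − z), and evaluates the continuation at a point outside, where the series itself diverges.

```python
import numpy as np

# Partial sums of the geometric series versus its analytic continuation 1/(1-z).
z_in = 0.5 + 0.3j                      # inside |z| < 1: the series converges
z_out = 2.0 + 1.0j                     # outside: the series diverges

partial = sum(z_in**n for n in range(200))
print(abs(partial - 1 / (1 - z_in)))   # ~1e-16: series agrees with 1/(1-z)
print(1 / (1 - z_out))                 # the continuation is still defined here
```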
9.3 Analytic Functions Defined in Terms of Real Variables
Result 9.3.1 An analytic function, u(x, y) + ıv(x, y) can be written in terms
of a function of a complex variable, f(z) = u(x, y) + ıv(x, y).
Result 9.3.1 is proved in Exercise 9.1.
Example 9.3.1
f(z) = cosh y sin x (x e^x cos y − y e^x sin y) − cos x sinh y (y e^x cos y + x e^x sin y)
+ ı [cosh y sin x (y e^x cos y + x e^x sin y) + cos x sinh y (x e^x cos y − y e^x sin y)]
is an analytic function. Express f(z) in terms of z.
is an analytic function. Express f(z) in terms of z.
On the real line, y = 0, f(z) is
f(z = x) = x e^x sin x
(Recall that cos(0) = cosh(0) = 1 and sin(0) = sinh(0) = 0.)
The analytic continuation of f(z) into the complex plane is
f(z) = z e^z sin z.
Alternatively, for x = 0 we have
f(z = ıy) = y sinh y (cos y − ı sin y).
The analytic continuation from the imaginary axis to the complex plane is
f(z) = −ız sinh(−ız)(cos(−ız) − ı sin(−ız))
= ız sinh(ız)(cos(ız) + ı sin(ız))
= z e^z sin z.
Example 9.3.2 Consider u = e^{−x}(x sin y − y cos y). Find v such that f(z) = u + ıv is analytic.
From the Cauchy-Riemann equations,
∂v/∂y = ∂u/∂x = e^{−x} sin y − x e^{−x} sin y + y e^{−x} cos y
∂v/∂x = −∂u/∂y = e^{−x} cos y − x e^{−x} cos y − y e^{−x} sin y
Integrate the first equation with respect to y.
v = −e^{−x} cos y + x e^{−x} cos y + e^{−x}(y sin y + cos y) + F(x)
= y e^{−x} sin y + x e^{−x} cos y + F(x)
F(x) is an arbitrary function of x. Substitute this expression for v into the equation for ∂v/∂x.
−y e^{−x} sin y − x e^{−x} cos y + e^{−x} cos y + F′(x) = −y e^{−x} sin y − x e^{−x} cos y + e^{−x} cos y
Thus F′(x) = 0 and F(x) = c.
v = e^{−x}(y sin y + x cos y) + c
Example 9.3.3 Find f(z) in the previous example. (Up to the additive constant.)
Method 1
f(z) = u + ıv
= e^{−x}(x sin y − y cos y) + ı e^{−x}(y sin y + x cos y)
= e^{−x} [x (e^{ıy} − e^{−ıy})/(ı2) − y (e^{ıy} + e^{−ıy})/2] + ı e^{−x} [y (e^{ıy} − e^{−ıy})/(ı2) + x (e^{ıy} + e^{−ıy})/2]
= ı(x + ıy) e^{−(x+ıy)}
= ız e^{−z}
Method 2 f(z) = f(x + ıy) = u(x, y) + ıv(x, y) is an analytic function.
On the real axis, y = 0, f(z) is
f(z = x) = u(x, 0) + ıv(x, 0)
= e^{−x}(x sin 0 − 0 cos 0) + ı e^{−x}(0 sin 0 + x cos 0)
= ıx e^{−x}
Suppose there is an analytic continuation of f(z) into the complex plane. If such a continuation, f(z), exists, then it must be equal to f(z = x) on the real axis. An obvious choice for the analytic
continuation is
f(z) = u(z, 0) + ıv(z, 0)
since this is clearly equal to u(x, 0) + ıv(x, 0) when z is real. Thus we obtain
f(z) = ız e^{−z}
Example 9.3.4 Consider f(z) = u(x, y) + ıv(x, y). Show that f′(z) = u_x(z, 0) − ıu_y(z, 0).
f′(z) = u_x + ıv_x = u_x − ıu_y
f′(z) is an analytic function. On the real axis, z = x, f′(z) is
f′(z = x) = u_x(x, 0) − ıu_y(x, 0)
Now f′(z = x) is defined on the real line. An analytic continuation of f′(z = x) into the complex plane is
f′(z) = u_x(z, 0) − ıu_y(z, 0).
Example 9.3.5 Again consider the problem of finding f(z) given that u(x, y) = e^{−x}(x sin y − y cos y). Now we can use the result of the previous example to do this problem.
u_x(x, y) = ∂u/∂x = e^{−x} sin y − x e^{−x} sin y + y e^{−x} cos y
u_y(x, y) = ∂u/∂y = x e^{−x} cos y + y e^{−x} sin y − e^{−x} cos y
f′(z) = u_x(z, 0) − ıu_y(z, 0) = 0 − ı(z e^{−z} − e^{−z}) = ı(−z e^{−z} + e^{−z})
Integration yields the result
f(z) = ız e^{−z} + c
Example 9.3.6 Find f(z) given that
u(x, y) = cos x cosh² y sin x + cos x sin x sinh² y
v(x, y) = cos² x cosh y sinh y − cosh y sin² x sinh y
f(z) = u(x, y) + ıv(x, y) is an analytic function. On the real line, f(z) is
f(z = x) = u(x, 0) + ıv(x, 0)
= cos x cosh² 0 sin x + cos x sin x sinh² 0 + ı(cos² x cosh 0 sinh 0 − cosh 0 sin² x sinh 0)
= cos x sin x
Now we know the definition of f(z) on the real line. We would like to find an analytic continuation of f(z) into the complex plane. An obvious choice for f(z) is
f(z) = cos z sin z
Using trig identities we can write this as
f(z) = sin(2z)/2.
Example 9.3.7 Find f(z) given only that
u(x, y) = cos x cosh² y sin x + cos x sin x sinh² y.
Recall that
f′(z) = u_x + ıv_x = u_x − ıu_y
Differentiating u(x, y),
u_x = cos² x cosh² y − cosh² y sin² x + cos² x sinh² y − sin² x sinh² y
u_y = 4 cos x cosh y sin x sinh y
f′(z) is an analytic function. On the real axis, f′(z) is
f′(z = x) = cos² x − sin² x
Using trig identities we can write this as
f′(z = x) = cos(2x)
Now we find an analytic continuation of f′(z = x) into the complex plane.
f′(z) = cos(2z)
Integration yields the result
f(z) = sin(2z)/2 + c
9.3.1 Polar Coordinates
Example 9.3.8 Is
u(r, θ) = r(log r cos θ − θ sin θ)
the real part of an analytic function?
The Laplacian in polar coordinates is
∆φ = (1/r) ∂/∂r (r ∂φ/∂r) + (1/r²) ∂²φ/∂θ².
We calculate the partial derivatives of u.
∂u/∂r = cos θ + log r cos θ − θ sin θ
r ∂u/∂r = r cos θ + r log r cos θ − rθ sin θ
∂/∂r (r ∂u/∂r) = 2 cos θ + log r cos θ − θ sin θ
(1/r) ∂/∂r (r ∂u/∂r) = (1/r)(2 cos θ + log r cos θ − θ sin θ)
∂u/∂θ = −r(θ cos θ + sin θ + log r sin θ)
∂²u/∂θ² = r(−2 cos θ − log r cos θ + θ sin θ)
(1/r²) ∂²u/∂θ² = (1/r)(−2 cos θ − log r cos θ + θ sin θ)
From the above we see that
∆u = (1/r) ∂/∂r (r ∂u/∂r) + (1/r²) ∂²u/∂θ² = 0.
Therefore u is harmonic and is the real part of some analytic function.
Example 9.3.9 Find an analytic function f(z) whose real part is
u(r, θ) = r(log r cos θ − θ sin θ).
Let f(z) = u(r, θ) + ıv(r, θ). The Cauchy-Riemann equations are
u_r = v_θ/r,   u_θ = −r v_r.
Using the partial derivatives in the above example, we obtain two partial differential equations for v(r, θ).
v_r = −u_θ/r = θ cos θ + sin θ + log r sin θ
v_θ = r u_r = r(cos θ + log r cos θ − θ sin θ)
Integrating the equation for vθ yields
v = r(θ cos θ + log r sin θ) + F(r)
where F(r) is an arbitrary function of r arising from the integration.
Substituting our expression for v into the equation for vr yields
θ cos θ + log r sin θ + sin θ + F′(r) = θ cos θ + sin θ + log r sin θ
F′(r) = 0
F(r) = const
Thus we see that
f(z) = u + ıv = r(log r cos θ − θ sin θ) + ır(θ cos θ + log r sin θ) + const
f(z) is an analytic function. On the line θ = 0, f(z) is
f(z = r) = r log r + ır · 0 + const = r log r + const
The analytic continuation into the complex plane is
f(z) = z log z + const
Example 9.3.10 Find the formula in polar coordinates that is analogous to
f′(z) = u_x(z, 0) − ıu_y(z, 0).
We know that
df/dz = e^{−ıθ} ∂f/∂r.
If f(z) = u(r, θ) + ıv(r, θ) then
df/dz = e^{−ıθ}(u_r + ıv_r)
From the Cauchy-Riemann equations, we have vr = −uθ/r.
df/dz = e^{−ıθ}(u_r − ı u_θ/r)
f′(z) is an analytic function. On the line θ = 0, f′(z) is
f′(z = r) = u_r(r, 0) − ı u_θ(r, 0)/r
The analytic continuation of f′(z) into the complex plane is
f′(z) = u_r(z, 0) − (ı/z) u_θ(z, 0).
Example 9.3.11 Find an analytic function f(z) whose real part is
u(r, θ) = r (log r cos θ − θ sin θ) .
u_r(r, θ) = log r cos θ − θ sin θ + cos θ
u_θ(r, θ) = r(−log r sin θ − sin θ − θ cos θ)
f′(z) = u_r(z, 0) − (ı/z) u_θ(z, 0) = log z + 1
Integrating f′(z) yields
f(z) = z log z + ıc.
9.3.2 Analytic Functions Defined in Terms of Their Real or Imaginary
Parts
Consider an analytic function: f(z) = u(x, y) + ıv(x, y). We differentiate this expression.
f′(z) = u_x(x, y) + ıv_x(x, y)
We apply the Cauchy-Riemann equation v_x = −u_y.
f′(z) = u_x(x, y) − ıu_y(x, y). (9.1)
Now consider the function of a complex variable, g(ζ):
g(ζ) = u_x(x, ζ) − ıu_y(x, ζ) = u_x(x, ξ + ıψ) − ıu_y(x, ξ + ıψ).
This function is analytic where f(ζ) is analytic. To show this we first verify that the derivatives in
the ξ and ψ directions are equal.
∂/∂ξ g(ζ) = u_{xy}(x, ξ + ıψ) − ıu_{yy}(x, ξ + ıψ)
−ı ∂/∂ψ g(ζ) = −ı(ıu_{xy}(x, ξ + ıψ) + u_{yy}(x, ξ + ıψ)) = u_{xy}(x, ξ + ıψ) − ıu_{yy}(x, ξ + ıψ)
Since these partial derivatives are equal and continuous, g(ζ) is analytic. We evaluate the function g(ζ) at ζ = −ıx. (Substitute y = −ıx into Equation 9.1.)
f′(2x) = u_x(x, −ıx) − ıu_y(x, −ıx)
We make a change of variables to solve for f′(x).
f′(x) = u_x(x/2, −ıx/2) − ıu_y(x/2, −ıx/2).
If the expression is non-singular, then this defines the analytic function, f′(z), on the real axis. The analytic continuation to the complex plane is
f′(z) = u_x(z/2, −ız/2) − ıu_y(z/2, −ız/2).
Note that (d/dz) 2u(z/2, −ız/2) = u_x(z/2, −ız/2) − ıu_y(z/2, −ız/2). We integrate the equation to obtain:
f(z) = 2u(z/2, −ız/2) + c.
We know that the real part of an analytic function determines that function to within an additive
constant. Assuming that the above expression is non-singular, we have found a formula for writing
an analytic function in terms of its real part. With the same method, we can find how to write an
analytic function in terms of its imaginary part, v.
We can also derive formulas if u and v are expressed in polar coordinates:
f(z) = u(r, θ) + ıv(r, θ).
Result 9.3.2 If f(z) = u(x, y) + ıv(x, y) is analytic and the expressions are non-singular, then
f(z) = 2u(z/2, −ız/2) + const (9.2)
f(z) = ı2v(z/2, −ız/2) + const. (9.3)
If f(z) = u(r, θ) + ıv(r, θ) is analytic and the expressions are non-singular, then
f(z) = 2u(z^{1/2}, −(ı/2) log z) + const (9.4)
f(z) = ı2v(z^{1/2}, −(ı/2) log z) + const. (9.5)
Example 9.3.12 Consider the problem of finding f(z) given that u(x, y) = e^{−x}(x sin y − y cos y).
f(z) = 2u(z/2, −ız/2) + c
= 2 e^{−z/2}((z/2) sin(−ız/2) + ı(z/2) cos(−ız/2)) + c
= ız e^{−z/2}(ı sin(ız/2) + cos(−ız/2)) + c
= ız e^{−z/2} e^{−z/2} + c
= ız e^{−z} + c
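The formula of Result 9.3.2 can be spot-checked numerically. The Python sketch below is our own illustration with an assumed test point; since numpy's exp, sin and cos accept complex arguments, we can evaluate 2u(z/2, −ız/2) directly and compare it with ız e^{−z}.

```python
import numpy as np

# u(x, y) = e^{-x} (x sin y - y cos y); Result 9.3.2 should give i z e^{-z}.
def u(x, y):
    # evaluated with complex arguments, as the formula requires
    return np.exp(-x) * (x * np.sin(y) - y * np.cos(y))

z = 0.7 - 1.2j                                                # assumed test point
print(abs(2 * u(z / 2, -1j * z / 2) - 1j * z * np.exp(-z)))   # ~1e-16
```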
Example 9.3.13 Consider
Log z = (1/2) Log(x² + y²) + ı Arctan(x, y).
We try to construct the analytic function from its real part using Equation 9.2.
f(z) = 2u(z/2, −ız/2) + c
= 2 · (1/2) Log((z/2)² + (−ız/2)²) + c
= Log(0) + c
We obtain a singular expression, so the method fails.
Example 9.3.14 Again consider the logarithm, this time written in terms of polar coordinates.
Log z = Log r + ıθ
We try to construct the analytic function from its real part using Equation 9.4.
f(z) = 2u(z^{1/2}, −(ı/2) log z) + c
= 2 Log(z^{1/2}) + c
= Log z + c
With this method we recover the analytic function.
9.4 Exercises
Exercise 9.1
Consider two functions, f(x, y) and g(x, y). They are said to be functionally dependent if there is an h(g) such that
f(x, y) = h(g(x, y)).
f and g will be functionally dependent if and only if their Jacobian vanishes.
If f and g are functionally dependent, then the derivatives of f are
f_x = h′(g)g_x
f_y = h′(g)g_y.
Thus we have
∂(f, g)/∂(x, y) = | f_x  f_y ; g_x  g_y | = f_x g_y − f_y g_x = h′(g)g_x g_y − h′(g)g_y g_x = 0.
If the Jacobian of f and g vanishes, then
f_x g_y − f_y g_x = 0.
This is a first order partial differential equation for f that has the general solution
f(x, y) = h(g(x, y)).
Prove that an analytic function u(x, y) + ıv(x, y) can be written in terms of a function of a
complex variable, f(z) = u(x, y) + ıv(x, y).
Exercise 9.2
Which of the following functions are the real part of an analytic function? For those that are, find
the harmonic conjugate, v(x, y), and find the analytic function f(z) = u(x, y)+ıv(x, y) as a function
of z.
1. x³ − 3xy² − 2xy + y
2. e^x sinh y
3. e^x(sin x cos y cosh y − cos x sin y sinh y)
Exercise 9.3
For an analytic function, f(z) = u(r, θ) + ıv(r, θ) prove that under suitable restrictions:
f(z) = 2u(z^{1/2}, −(ı/2) log z) + const.
9.5 Hints
Hint 9.1
Show that u(x, y) + ıv(x, y) is functionally dependent on x + ıy so that you can write f(z) =
f(x + ıy) = u(x, y) + ıv(x, y).
Hint 9.2
Hint 9.3
Check out the derivation of Equation 9.2.
9.6 Solutions
Solution 9.1
u(x, y) + ıv(x, y) is functionally dependent on z = x + ıy if and only if
∂(u + ıv, x + ıy)/∂(x, y) = 0.
∂(u + ıv, x + ıy)/∂(x, y) = | u_x + ıv_x   u_y + ıv_y ; 1   ı | = −v_x − u_y + ı(u_x − v_y)
Since u and v satisfy the Cauchy-Riemann equations, this vanishes.
= 0
Thus we see that u(x, y) + ıv(x, y) is functionally dependent on x + ıy so we can write
f(z) = f(x + ıy) = u(x, y) + ıv(x, y).
Solution 9.2
1. Consider u(x, y) = x³ − 3xy² − 2xy + y. The Laplacian of this function is
∆u ≡ u_xx + u_yy = 6x − 6x = 0
Since the function is harmonic, it is the real part of an analytic function. Clearly the analytic function is of the form
az³ + bz² + cz + ıd,
with a, b and c complex-valued constants and d a real constant. Substituting z = x + ıy and expanding products yields
a(x³ + ı3x²y − 3xy² − ıy³) + b(x² + ı2xy − y²) + c(x + ıy) + ıd.
By inspection, we see that the analytic function is
f(z) = z³ + ız² − ız + ıd.
The harmonic conjugate of u is the imaginary part of f(z),
v(x, y) = 3x²y − y³ + x² − y² − x + d.
We can also do this problem with analytic continuation. The derivatives of u are
u_x = 3x² − 3y² − 2y,
u_y = −6xy − 2x + 1.
The derivative of f(z) is
f′(z) = u_x − ıu_y = 3x² − 3y² − 2y + ı(6xy + 2x − 1).
On the real axis we have
f′(z = x) = 3x² + ı2x − ı.
Using analytic continuation, we see that
f′(z) = 3z² + ı2z − ı.
Integration yields
f(z) = z³ + ız² − ız + const
2. Consider u(x, y) = e^x sinh y. The Laplacian of this function is
∆u = e^x sinh y + e^x sinh y = 2 e^x sinh y.
Since the function is not harmonic, it is not the real part of an analytic function.
3. Consider u(x, y) = e^x(sin x cos y cosh y − cos x sin y sinh y). The Laplacian of the function is
∆u = ∂/∂x [e^x(sin x cos y cosh y − cos x sin y sinh y + cos x cos y cosh y + sin x sin y sinh y)]
+ ∂/∂y [e^x(−sin x sin y cosh y − cos x cos y sinh y + sin x cos y sinh y − cos x sin y cosh y)]
= 2 e^x(cos x cos y cosh y + sin x sin y sinh y) − 2 e^x(cos x cos y cosh y + sin x sin y sinh y)
= 0.
Thus u is the real part of an analytic function. The derivative of the analytic function is
f′(z) = u_x + ıv_x = u_x − ıu_y
From the derivatives of u we computed before, we have
f′(z) = e^x(sin x cos y cosh y − cos x sin y sinh y + cos x cos y cosh y + sin x sin y sinh y)
− ı e^x(−sin x sin y cosh y − cos x cos y sinh y + sin x cos y sinh y − cos x sin y cosh y)
Along the real axis, f′(z) has the value
f′(z = x) = e^x(sin x + cos x).
By analytic continuation, f′(z) is
f′(z) = e^z(sin z + cos z)
We obtain f(z) by integrating.
f(z) = e^z sin z + const.
u is the real part of the analytic function
f(z) = e^z sin z + ıc,
where c is a real constant. We find the harmonic conjugate of u by taking the imaginary part of f.
f(z) = e^x(cos y + ı sin y)(sin x cosh y + ı cos x sinh y) + ıc
v(x, y) = e^x(sin x sin y cosh y + cos x cos y sinh y) + c
Solution 9.3
We consider the analytic function: f(z) = u(r, θ) + ıv(r, θ). Recall that the complex derivative in terms of polar coordinates is
d/dz = e^{−ıθ} ∂/∂r = −(ı/r) e^{−ıθ} ∂/∂θ.
The Cauchy-Riemann equations are
u_r = (1/r) v_θ,   v_r = −(1/r) u_θ.
We differentiate f(z) and use the partial derivative in r for the right side.
f′(z) = e^{−ıθ}(u_r + ıv_r)
We use the Cauchy-Riemann equations to write f′(z) in terms of the derivatives of u.
f′(z) = e^{−ıθ}(u_r − ı(1/r) u_θ) (9.6)
Now consider the function of a complex variable, g(ζ):
g(ζ) = e^{−ıζ}(u_r(r, ζ) − ı(1/r) u_θ(r, ζ)) = e^{ψ−ıξ}(u_r(r, ξ + ıψ) − ı(1/r) u_θ(r, ξ + ıψ))
This function is analytic where f(ζ) is analytic. It is a simple calculus exercise to show that the complex derivative in the ξ direction, ∂/∂ξ, and the complex derivative in the ψ direction, −ı ∂/∂ψ, are equal. Since these partial derivatives are equal and continuous, g(ζ) is analytic. We evaluate the function g(ζ) at ζ = −ı log r. (Substitute θ = −ı log r into Equation 9.6.)
f′(r e^{ı(−ı log r)}) = e^{−ı(−ı log r)}(u_r(r, −ı log r) − ı(1/r) u_θ(r, −ı log r))
r f′(r²) = u_r(r, −ı log r) − ı(1/r) u_θ(r, −ı log r)
If the expression is non-singular, then it defines the analytic function, f′(z), on a curve. The analytic continuation to the complex plane is
z f′(z²) = u_r(z, −ı log z) − ı(1/z) u_θ(z, −ı log z).
We integrate to obtain an expression for f(z²).
(1/2) f(z²) = u(z, −ı log z) + const
We make a change of variables and solve for f(z).
f(z) = 2u(z^{1/2}, −(ı/2) log z) + const.
Assuming that the above expression is non-singular, we have found a formula for writing the analytic
function in terms of its real part, u(r, θ). With the same method, we can find how to write an analytic
function in terms of its imaginary part, v(r, θ).
Chapter 10
Contour Integration and the
Cauchy-Goursat Theorem
Between two evils, I always pick the one I never tried before.
- Mae West
10.1 Line Integrals
In this section we will recall the definition of a line integral in the Cartesian plane. In the next
section we will use this to define the contour integral in the complex plane.
Limit Sum Definition. First we develop a limit sum definition of a line integral. Consider a
curve C in the Cartesian plane joining the points (a0, b0) and (a1, b1). We partition the curve into
n segments with the points (x0, y0), . . . , (xn, yn) where the first and last points are at the endpoints
of the curve. We define the differences, ∆xk = xk+1 − xk and ∆yk = yk+1 − yk, and let (ξk, ψk) be
points on the curve between (xk, yk) and (xk+1, yk+1). This is shown pictorially in Figure 10.1.
Figure 10.1: A curve in the Cartesian plane.
Consider the sum
Σ_{k=0}^{n−1} (P(ξ_k, ψ_k)∆x_k + Q(ξ_k, ψ_k)∆y_k),
where P and Q are continuous functions on the curve. (P and Q may be complex-valued.) In the limit as each of the ∆x_k and ∆y_k approach zero the value of the sum, (if the limit exists), is denoted by
∫_C P(x, y) dx + Q(x, y) dy.
This is a line integral along the curve C. The value of the line integral depends on the functions P(x, y) and Q(x, y), the endpoints of the curve and the curve C. We can also write a line integral in vector notation,
∫_C f(x) · dx,
where x = (x, y) and f(x) = (P(x, y), Q(x, y)).
Evaluating Line Integrals with Parameterization. Let the curve C be parametrized by x = x(t), y = y(t) for t0 ≤ t ≤ t1. Then the differentials on the curve are dx = x′(t) dt and dy = y′(t) dt. Using the parameterization we can evaluate a line integral in terms of a definite integral.
∫_C P(x, y) dx + Q(x, y) dy = ∫_{t0}^{t1} [P(x(t), y(t)) x′(t) + Q(x(t), y(t)) y′(t)] dt
Example 10.1.1 Consider the line integral
∫_C x² dx + (x + y) dy,
where C is the semi-circle from (1, 0) to (−1, 0) in the upper half plane. We parameterize the curve with x = cos t, y = sin t for 0 ≤ t ≤ π.
∫_C x² dx + (x + y) dy = ∫_0^π [cos² t (−sin t) + (cos t + sin t) cos t] dt = π/2 − 2/3
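A quick numeric check of this value, a sketch of our own and not part of the text: evaluate the parameterized integrand with the trapezoid rule.

```python
import numpy as np

# Trapezoid-rule check of the line integral over the upper semicircle.
t = np.linspace(0, np.pi, 100001)
integrand = np.cos(t)**2 * (-np.sin(t)) + (np.cos(t) + np.sin(t)) * np.cos(t)
value = np.sum((integrand[1:] + integrand[:-1]) / 2) * (t[1] - t[0])
print(value, np.pi / 2 - 2 / 3)        # both approx 0.90413
```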
10.2 Contour Integrals
Limit Sum Definition. We develop a limit sum definition for contour integrals. It will be anal-
ogous to the definition for line integrals except that the notation is cleaner in complex variables.
Consider a contour C in the complex plane joining the points c0 and c1. We partition the contour
into n segments with the points z0, . . . , zn where the first and last points are at the endpoints of the
contour. We define the differences ∆zk = zk+1 − zk and let ζk be points on the contour between zk
and z_{k+1}. Consider the sum
Σ_{k=0}^{n−1} f(ζ_k)∆z_k,
where f is a continuous function on the contour. In the limit as each of the ∆z_k approach zero the value of the sum, (if the limit exists), is denoted by
∫_C f(z) dz.
This is a contour integral along C.
We can write a contour integral in terms of a line integral. Let f(z) = φ(x, y), (φ : R² → C).
∫_C f(z) dz = ∫_C φ(x, y)(dx + ı dy)
∫_C f(z) dz = ∫_C (φ(x, y) dx + ıφ(x, y) dy) (10.1)
Further, we can write a contour integral in terms of two real-valued line integrals. Let f(z) = u(x, y) + ıv(x, y).
∫_C f(z) dz = ∫_C (u(x, y) + ıv(x, y))(dx + ı dy)
∫_C f(z) dz = ∫_C (u(x, y) dx − v(x, y) dy) + ı ∫_C (v(x, y) dx + u(x, y) dy) (10.2)
Evaluation. Let the contour C be parametrized by z = z(t) for t0 ≤ t ≤ t1. Then the differential on the contour is dz = z′(t) dt. Using the parameterization we can evaluate a contour integral in terms of a definite integral.
∫_C f(z) dz = ∫_{t0}^{t1} f(z(t)) z′(t) dt
Example 10.2.1 Let C be the positively oriented unit circle about the origin in the complex plane.
Evaluate:
1. ∫_C z dz
2. ∫_C (1/z) dz
3. ∫_C (1/z) |dz|
In each case we parameterize the contour and then do the integral.
1. z = e^{ıθ},   dz = ı e^{ıθ} dθ
∫_C z dz = ∫_0^{2π} e^{ıθ} ı e^{ıθ} dθ = [(1/2) e^{ı2θ}]_0^{2π} = (1/2) e^{ı4π} − (1/2) e^{ı0} = 0
2. ∫_C (1/z) dz = ∫_0^{2π} (1/e^{ıθ}) ı e^{ıθ} dθ = ı ∫_0^{2π} dθ = ı2π
3. |dz| = |ı e^{ıθ} dθ| = |ı e^{ıθ}| |dθ| = |dθ|
Since dθ is positive in this case, |dθ| = dθ.
∫_C (1/z) |dz| = ∫_0^{2π} e^{−ıθ} dθ = [ı e^{−ıθ}]_0^{2π} = 0
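All three values are easy to confirm numerically. The sketch below is our own illustration (not from the text); it applies the same parameterization and the trapezoid rule, using |dz| = |dz/dθ| dθ = dθ on the unit circle.

```python
import numpy as np

# z = e^{i theta} on the unit circle; dz = i e^{i theta} dtheta, |dz| = dtheta.
theta = np.linspace(0, 2 * np.pi, 100001)
z = np.exp(1j * theta)
dz = 1j * z                            # dz/dtheta

def trap(f):                           # trapezoid rule in theta
    return np.sum((f[1:] + f[:-1]) / 2) * (theta[1] - theta[0])

print(trap(z * dz))                    # integral of z dz        -> ~0
print(trap(dz / z))                    # integral of (1/z) dz    -> ~2 pi i
print(trap(1 / z))                     # integral of (1/z) |dz|  -> ~0
```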
10.2.1 Maximum Modulus Integral Bound
The absolute value of a real integral obeys the inequality
|∫_a^b f(x) dx| ≤ ∫_a^b |f(x)| |dx| ≤ (b − a) max_{a≤x≤b} |f(x)|.
Now we prove the analogous result for the modulus of a contour integral.
|∫_C f(z) dz| = |lim_{∆z→0} Σ_{k=0}^{n−1} f(ζ_k)∆z_k|
≤ lim_{∆z→0} Σ_{k=0}^{n−1} |f(ζ_k)| |∆z_k|
= ∫_C |f(z)| |dz|
≤ ∫_C max_{z∈C} |f(z)| |dz|
= max_{z∈C} |f(z)| ∫_C |dz|
= max_{z∈C} |f(z)| × (length of C)
Result 10.2.1 Maximum Modulus Integral Bound.
|∫_C f(z) dz| ≤ ∫_C |f(z)| |dz| ≤ max_{z∈C} |f(z)| (length of C)
10.3 The Cauchy-Goursat Theorem
Let f(z) be analytic in a compact, closed, connected domain D. We consider the integral of f(z) on the boundary of the domain.
∫_{∂D} f(z) dz = ∫_{∂D} ψ(x, y)(dx + ı dy) = ∫_{∂D} ψ dx + ıψ dy
Recall Green's Theorem.
∫_{∂D} P dx + Q dy = ∫_D (Q_x − P_y) dx dy
If we assume that f′(z) is continuous, we can apply Green's Theorem to the integral of f(z) on ∂D.
∫_{∂D} f(z) dz = ∫_{∂D} ψ dx + ıψ dy = ∫_D (ıψ_x − ψ_y) dx dy
Since f(z) is analytic, it satisfies the Cauchy-Riemann equation ψ_x = −ıψ_y. The integrand in the area integral, ıψ_x − ψ_y, is zero. Thus the contour integral vanishes.
∫_{∂D} f(z) dz = 0
This is known as Cauchy’s Theorem. The assumption that f (z) is continuous is not necessary, but
it makes the proof much simpler because we can use Green’s Theorem. If we remove this restriction
the result is known as the Cauchy-Goursat Theorem. The proof of this result is omitted.
Result 10.3.1 The Cauchy-Goursat Theorem. If f(z) is analytic in a compact, closed, connected domain D then the integral of f(z) on the boundary of the domain vanishes.
∫_{∂D} f(z) dz = Σ_k ∫_{C_k} f(z) dz = 0
Here the set of contours {C_k} make up the positively oriented boundary ∂D of the domain D.
As a special case of the Cauchy-Goursat theorem we can consider a simply-connected region.
For this the boundary is a Jordan curve. We can state the theorem in terms of this curve instead of
referring to the boundary.
Result 10.3.2 The Cauchy-Goursat Theorem for Jordan Curves. If f(z) is analytic inside and on a simple, closed contour C, then
∫_C f(z) dz = 0
Example 10.3.1 Let C be the unit circle about the origin with positive orientation. In Example 10.2.1 we calculated that
∫_C z dz = 0
Now we can evaluate the integral without parameterizing the curve. We simply note that the integrand is analytic inside and on the circle, which is simple and closed. By the Cauchy-Goursat Theorem, the integral vanishes.
We cannot apply the Cauchy-Goursat theorem to evaluate
∫_C (1/z) dz = ı2π
as the integrand is not analytic at z = 0.
Example 10.3.2 Consider the domain D = {z | |z| > 1}. The boundary of the domain is the unit circle with negative orientation. f(z) = 1/z is analytic on D and its boundary. However ∫_{∂D} f(z) dz does not vanish and we cannot apply the Cauchy-Goursat Theorem. This is because the domain is not compact.
10.4 Contour Deformation
Path Independence. Consider a function f(z) that is analytic on a simply connected domain and a contour C in that domain with end points a and b. The contour integral ∫_C f(z) dz is independent of the path connecting the end points and can be denoted ∫_a^b f(z) dz. This result is a direct consequence of the Cauchy-Goursat Theorem. Let C1 and C2 be two different paths connecting the points. Let −C2 denote the second contour with the opposite orientation. Let C be the contour which is the union of C1 and −C2. By the Cauchy-Goursat theorem, the integral along this contour vanishes.
∫_C f(z) dz = ∫_{C1} f(z) dz + ∫_{−C2} f(z) dz = 0
This implies that the integrals along C1 and C2 are equal.
∫_{C1} f(z) dz = ∫_{C2} f(z) dz
Thus contour integrals on simply connected domains are independent of path. This result does not
hold for multiply connected domains.
Result 10.4.1 Path Independence. Let f(z) be analytic on a simply connected domain. For points a and b in the domain, the contour integral
∫_a^b f(z) dz
is independent of the path connecting the points.
Deforming Contours. Consider two simple, closed, positively oriented contours, C1 and C2. Let C2 lie completely within C1. If f(z) is analytic on and between C1 and C2 then the integrals of f(z) along C1 and C2 are equal.
∫_{C1} f(z) dz = ∫_{C2} f(z) dz
Again, this is a direct consequence of the Cauchy-Goursat Theorem. Let D be the domain on and between C1 and C2. By the Cauchy-Goursat Theorem the integral along the boundary of D vanishes.
∫_{C1} f(z) dz + ∫_{−C2} f(z) dz = 0
∫_{C1} f(z) dz = ∫_{C2} f(z) dz
By following this line of reasoning, we see that we can deform a contour C without changing the value of ∫_C f(z) dz as long as we stay on the domain where f(z) is analytic.
Result 10.4.2 Contour Deformation. Let f(z) be analytic on a domain D. If a set of closed contours {C_m} can be continuously deformed on the domain D to a set of contours {Γ_n} then the integrals along {C_m} and {Γ_n} are equal.
∫_{{C_m}} f(z) dz = ∫_{{Γ_n}} f(z) dz
10.5 Morera’s Theorem.
The converse of the Cauchy-Goursat theorem is Morera's Theorem. If the integrals of a continuous function f(z) vanish along all possible simple, closed contours in a domain, then f(z) is analytic on that domain. To prove Morera's Theorem we will assume that the first partial derivatives of f(z) = u(x, y) + ıv(x, y) are continuous, although the result can be derived without this restriction. Let the simple, closed contour C be the boundary of D which is contained in the domain Ω.
∫_C f(z) dz = ∫_C (u + ıv)(dx + ı dy)
= ∫_C u dx − v dy + ı ∫_C v dx + u dy
= ∫_D (−v_x − u_y) dx dy + ı ∫_D (u_x − v_y) dx dy
= 0
Since the two integrands are continuous and vanish for all C in Ω, we conclude that the integrands are identically zero. This implies that the Cauchy-Riemann equations,
u_x = v_y,   u_y = −v_x,
are satisfied. f(z) is analytic in Ω.
We can carry out the same proof more compactly in the notation f(z) = φ(x, y), again assuming that the first partial derivatives of φ are continuous. Let the simple, closed contour C be the boundary of D which is contained in the domain Ω.
∫_C f(z) dz = ∫_C (φ dx + ıφ dy) = ∫_D (ıφ_x − φ_y) dx dy = 0
Since the integrand, ıφ_x − φ_y, is continuous and vanishes for all C in Ω, we conclude that the integrand is identically zero. This implies that the Cauchy-Riemann equation,
φ_x = −ıφ_y,
is satisfied. We conclude that f(z) is analytic in Ω.
Result 10.5.1 Morera’s Theorem. If f(z) is continuous in a simply con-
nected domain Ω and
C
f(z) dz = 0
for all possible simple, closed contours C in the domain, then f(z) is analytic
in Ω.
10.6 Indefinite Integrals
Consider a function f(z) which is analytic in a domain D. An anti-derivative or indefinite integral (or simply integral) is a function F(z) which satisfies F′(z) = f(z). This integral exists and is unique up to an additive constant. Note that if the domain is not connected, then the additive constants in each connected component are independent. The indefinite integrals are denoted:
∫ f(z) dz = F(z) + c.
We will prove existence later by writing an indefinite integral as a contour integral. We briefly consider uniqueness of the indefinite integral here. Let F(z) and G(z) be integrals of f(z). Then F′(z) − G′(z) = f(z) − f(z) = 0. Although we do not prove it, it certainly makes sense that F(z) − G(z) is a constant on each connected component of the domain. Indefinite integrals are unique up to an additive constant.
Integrals of analytic functions have all the nice properties of integrals of functions of a real variable. All the formulas from integral tables, including things like integration by parts, carry over directly.
10.7 Fundamental Theorem of Calculus via Primitives
10.7.1 Line Integrals and Primitives
Here we review some concepts from vector calculus. Analogous to an integral in functions of a single variable is a primitive in functions of several variables. Consider a function f(x). F(x) is an integral of f(x) if and only if dF = f dx. Now we move to functions of x and y. Let P(x, y) and Q(x, y) be defined on a simply connected domain. A primitive Φ satisfies
dΦ = P dx + Q dy.
A necessary and sufficient condition for the existence of a primitive is that P_y = Q_x. The definite integral can be evaluated in terms of the primitive.
∫_{(a,b)}^{(c,d)} P dx + Q dy = Φ(c, d) − Φ(a, b)
10.7.2 Contour Integrals
Now consider the integral along the contour C of the function f(z) = φ(x, y).
∫_C f(z) dz = ∫_C (φ dx + ıφ dy)
A primitive Φ of φ dx + ıφ dy exists if and only if φ_y = ıφ_x. We recognize this as the Cauchy-Riemann equation, φ_x = −ıφ_y. Thus a primitive exists if and only if f(z) is analytic. If so, then
dΦ = φ dx + ıφ dy.
How do we find the primitive Φ that satisfies Φ_x = φ and Φ_y = ıφ? Note that choosing Φ(x, y) = F(z), where F(z) is an anti-derivative of f(z), F′(z) = f(z), does the trick. We express the complex derivative as partial derivatives in the coordinate directions to show this.
F′(z) = f(z) = φ(x, y),   F′(z) = Φ_x = −ıΦ_y
From this we see that Φ_x = φ and Φ_y = ıφ so Φ(x, y) = F(z) is a primitive. Since we can evaluate the line integral of (φ dx + ıφ dy),
∫_{(a,b)}^{(c,d)} (φ dx + ıφ dy) = Φ(c, d) − Φ(a, b),
we can evaluate a definite integral of f in terms of its indefinite integral, F.
∫_a^b f(z) dz = F(b) − F(a)
This is the Fundamental Theorem of Calculus for functions of a complex variable.
10.8 Fundamental Theorem of Calculus via Complex Calculus
Result 10.8.1 Constructing an Indefinite Integral. If f(z) is analytic in a simply connected domain D and a is a point in the domain, then
F(z) = ∫_a^z f(ζ) dζ
is analytic in D and is an indefinite integral of f(z), (F′(z) = f(z)).
Now we consider anti-derivatives and definite integrals without using vector calculus. From real variables we know that we can construct an integral of f(x) with a definite integral.
F(x) = ∫_a^x f(ξ) dξ
Now we will prove the analogous property for functions of a complex variable.
F(z) = ∫_a^z f(ζ) dζ
Let f(z) be analytic in a simply connected domain D and let a be a point in the domain. To show that F(z) = ∫_a^z f(ζ) dζ is an integral of f(z), we apply the limit definition of differentiation.
F′(z) = lim_{∆z→0} [F(z + ∆z) − F(z)]/∆z
= lim_{∆z→0} (1/∆z) [∫_a^{z+∆z} f(ζ) dζ − ∫_a^z f(ζ) dζ]
= lim_{∆z→0} (1/∆z) ∫_z^{z+∆z} f(ζ) dζ
The integral is independent of path. We choose a straight line connecting z and z + ∆z. We add and subtract ∆z f(z) = ∫_z^{z+∆z} f(z) dζ from the expression for F′(z).
F′(z) = lim_{∆z→0} (1/∆z) [∆z f(z) + ∫_z^{z+∆z} (f(ζ) − f(z)) dζ]
= f(z) + lim_{∆z→0} (1/∆z) ∫_z^{z+∆z} (f(ζ) − f(z)) dζ
Since f(z) is analytic, it is certainly continuous. This means that
lim_{ζ→z} (f(ζ) − f(z)) = 0.
The limit term vanishes as a result of this continuity.
|lim_{∆z→0} (1/∆z) ∫_z^{z+∆z} (f(ζ) − f(z)) dζ| ≤ lim_{∆z→0} (1/|∆z|) |∆z| max_{ζ∈[z...z+∆z]} |f(ζ) − f(z)|
= lim_{∆z→0} max_{ζ∈[z...z+∆z]} |f(ζ) − f(z)|
= 0
Thus F′(z) = f(z).
This result demonstrates the existence of the indefinite integral. We will use this to prove the Fundamental Theorem of Calculus for functions of a complex variable.
Result 10.8.2 Fundamental Theorem of Calculus. If f(z) is analytic in a simply connected domain D then
∫_a^b f(z) dz = F(b) − F(a)
where F(z) is any indefinite integral of f(z).
From Result 10.8.1 we know that
∫_a^b f(z) dz = F(b) + c.
(Here we are considering b to be a variable.) The case b = a determines the constant.
∫_a^a f(z) dz = F(a) + c = 0
c = −F(a)
This proves the Fundamental Theorem of Calculus for functions of a complex variable.
Example 10.8.1 Consider the integral
∫_C 1/(z − a) dz
where C is any closed contour that goes around the point z = a once in the positive direction. We use the Fundamental Theorem of Calculus to evaluate the integral. We start at a point on the contour z − a = r e^{ıθ}. When we traverse the contour once in the positive direction we end at the point z − a = r e^{ı(θ+2π)}.
∫_C 1/(z − a) dz = [log(z − a)]_{z−a=r e^{ıθ}}^{z−a=r e^{ı(θ+2π)}} = Log r + ı(θ + 2π) − (Log r + ıθ) = ı2π
10.9 Exercises
Exercise 10.1
C is the arc corresponding to the unit semi-circle, |z| = 1, ℑ(z) ≥ 0, directed from z = −1 to z = 1. Evaluate
1. ∫_C z² dz
2. ∫_C |z²| dz
3. ∫_C z² |dz|
4. ∫_C |z²| |dz|
Hint, Solution
Exercise 10.2
Evaluate
∫_{−∞}^{∞} e^{−(ax²+bx)} dx,
where a, b ∈ C and ℜ(a) > 0. Use the fact that
∫_{−∞}^{∞} e^{−x²} dx = √π.
Hint, Solution
Exercise 10.3
Evaluate
2 ∫_0^∞ e^{−ax²} cos(ωx) dx,   and   2 ∫_0^∞ x e^{−ax²} sin(ωx) dx,
where ℜ(a) > 0 and ω ∈ R.
Hint, Solution
Exercise 10.4
Use an admissible parameterization to evaluate
∫_C (z − z0)^n dz,   n ∈ Z
for the following cases:
1. C is the circle |z − z0| = 1 traversed in the counterclockwise direction.
2. C is the circle |z − z0 − ı2| = 1 traversed in the counterclockwise direction.
3. z0 = 0, n = −1 and C is the closed contour defined by the polar equation
r = 2 − sin²(θ/4)
Is this result compatible with the results of part (a)?
Hint, Solution
Exercise 10.5
1. Use bounding arguments to show that
lim_{R→∞} ∫_{CR} (z + Log z)/(z³ + 1) dz = 0
where CR is the positive closed contour |z| = R.
2. Place a bound on
|∫_C Log z dz|
where C is the arc of the circle |z| = 2 from −ı2 to ı2.
3. Deduce that
|∫_C (z² − 1)/(z² + 1) dz| ≤ πR (R² + 1)/(R² − 1)
where C is a semicircle of radius R > 1 centered at the origin.
Hint, Solution
Exercise 10.6
Let C denote the entire positively oriented boundary of the half disk 0 ≤ r ≤ 1, 0 ≤ θ ≤ π in the upper half plane. Consider the branch
f(z) = √r e^{ıθ/2},   −π/2 < θ < 3π/2
of the multi-valued function z^{1/2}. Show by separate parametric evaluation of the semi-circle and the two radii constituting the boundary that
∫_C f(z) dz = 0.
Does the Cauchy-Goursat theorem apply here?
Hint, Solution
Exercise 10.7
Evaluate the following contour integrals using anti-derivatives and justify your approach for each.
1. ∫_C (ız³ + z^{−3}) dz,
where C is the line segment from z1 = 1 + ı to z2 = ı.
2. ∫_C sin² z cos z dz
where C is a right-handed spiral from z1 = π to z2 = ıπ.
3. ∫_C z^ı dz = (1 + e^{−π})/2 (1 − ı)
with
z^ı = e^{ı Log z},   −π < Arg z < π.
C joins z1 = −1 and z2 = 1, lying above the real axis except at the end points. (Hint: redefine z^ı so that it remains unchanged above the real axis and is defined continuously on the real axis.)
Hint, Solution
10.10 Hints
Hint 10.1
Hint 10.2
Let C be the parallelogram in the complex plane with corners at ±R and ±R + b/(2a). Consider the integral of e^{−az²} on this contour. Take the limit as R → ∞.
Hint 10.3
Extend the range of integration to (−∞ . . . ∞). Use e^{ıωx} = cos(ωx) + ı sin(ωx) and the result of Exercise 10.2.
Hint 10.4
Hint 10.5
Hint 10.6
Hint 10.7
10.11 Solutions
Solution 10.1
We parameterize the path with z = e^{ıθ}, with θ ranging from π to 0.
dz = ı e^{ıθ} dθ
|dz| = |ı e^{ıθ} dθ| = |dθ| = −dθ
1. ∫_C z² dz = ∫_π^0 e^{ı2θ} ı e^{ıθ} dθ = ∫_π^0 ı e^{ı3θ} dθ = [(1/3) e^{ı3θ}]_π^0 = (1/3)(e^{ı0} − e^{ı3π}) = (1/3)(1 − (−1)) = 2/3
2. ∫_C |z²| dz = ∫_π^0 |e^{ı2θ}| ı e^{ıθ} dθ = ∫_π^0 ı e^{ıθ} dθ = [e^{ıθ}]_π^0 = 1 − (−1) = 2
3. ∫_C z² |dz| = ∫_π^0 e^{ı2θ} |ı e^{ıθ} dθ| = ∫_π^0 −e^{ı2θ} dθ = [(ı/2) e^{ı2θ}]_π^0 = (ı/2)(1 − 1) = 0
4. ∫_C |z²| |dz| = ∫_π^0 |e^{ı2θ}| |ı e^{ıθ} dθ| = ∫_π^0 −dθ = [−θ]_π^0 = π
Solution 10.2
I = ∫_{−∞}^∞ e^{−(ax²+bx)} dx
First we complete the square in the argument of the exponential.
I = e^{b²/(4a)} ∫_{−∞}^∞ e^{−a(x+b/(2a))²} dx
Consider the parallelogram in the complex plane with corners at ±R and ±R + b/(2a). The integral of e^{−az²} on this contour vanishes as it is an entire function. We relate the integral along one side of the parallelogram to the integrals along the other three sides.
∫_{−R+b/(2a)}^{R+b/(2a)} e^{−az²} dz = (∫_{−R+b/(2a)}^{−R} + ∫_{−R}^{R} + ∫_R^{R+b/(2a)}) e^{−az²} dz.
The first and third integrals on the right side vanish as R → ∞ because the integrand vanishes and the lengths of the paths of integration are finite. Taking the limit as R → ∞ we have,
∫_{−∞+b/(2a)}^{∞+b/(2a)} e^{−az²} dz ≡ ∫_{−∞}^∞ e^{−a(x+b/(2a))²} dx = ∫_{−∞}^∞ e^{−ax²} dx.
Now we have
I = e^{b²/(4a)} ∫_{−∞}^∞ e^{−ax²} dx.
We make the change of variables ξ = √a x.
I = e^{b²/(4a)} (1/√a) ∫_{−∞}^∞ e^{−ξ²} dξ
∫_{−∞}^∞ e^{−(ax²+bx)} dx = √(π/a) e^{b²/(4a)}
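The closed form can be spot-checked numerically for complex a and b with ℜ(a) > 0. The parameter values in this Python sketch are our own assumed choices, not from the text.

```python
import numpy as np

# Spot-check: integral of e^{-(a x^2 + b x)} over R equals sqrt(pi/a) e^{b^2/(4a)}.
a, b = 1.5 + 0.5j, 2.0 - 1.0j              # assumed values with Re(a) > 0
x = np.linspace(-20, 20, 400001)
f = np.exp(-(a * x**2 + b * x))
numeric = np.sum((f[1:] + f[:-1]) / 2) * (x[1] - x[0])
exact = np.sqrt(np.pi / a) * np.exp(b**2 / (4 * a))
print(abs(numeric - exact))                # small
```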
Solution 10.3
Consider
I = 2 ∫_0^∞ e^{−ax²} cos(ωx) dx.
Since the integrand is an even function,
I = ∫_{−∞}^∞ e^{−ax²} cos(ωx) dx.
Since e^{−ax²} sin(ωx) is an odd function,
I = ∫_{−∞}^∞ e^{−ax²} e^{ıωx} dx.
We evaluate this integral with the result of Exercise 10.2.
2 ∫_0^∞ e^{−ax²} cos(ωx) dx = √(π/a) e^{−ω²/(4a)}
Consider
I = 2 ∫_0^∞ x e^{−ax²} sin(ωx) dx.
Since the integrand is an even function,
I = ∫_{−∞}^∞ x e^{−ax²} sin(ωx) dx.
Since x e^{−ax²} cos(ωx) is an odd function,
I = −ı ∫_{−∞}^∞ x e^{−ax²} e^{ıωx} dx.
We add a dash of integration by parts to get rid of the x factor.
I = −ı ([−(1/(2a)) e^{−ax²} e^{ıωx}]_{−∞}^∞ + (ıω/(2a)) ∫_{−∞}^∞ e^{−ax²} e^{ıωx} dx)
I = (ω/(2a)) ∫_{−∞}^∞ e^{−ax²} e^{ıωx} dx
2 ∫_0^∞ x e^{−ax²} sin(ωx) dx = (ω/(2a)) √(π/a) e^{−ω²/(4a)}
Solution 10.4
1. We parameterize the contour and do the integration.
z − z0 = e^{ıθ},   θ ∈ [0 . . . 2π)
∫_C (z − z0)^n dz = ∫_0^{2π} e^{ınθ} ı e^{ıθ} dθ
= [e^{ı(n+1)θ}/(n + 1)]_0^{2π} for n ≠ −1;   [ıθ]_0^{2π} for n = −1
= 0 for n ≠ −1;   ı2π for n = −1
2. We parameterize the contour and do the integration.
z − z0 = ı2 + e^{ıθ},   θ ∈ [0 . . . 2π)
∫_C (z − z0)^n dz = ∫_0^{2π} (ı2 + e^{ıθ})^n ı e^{ıθ} dθ
= [(ı2 + e^{ıθ})^{n+1}/(n + 1)]_0^{2π} for n ≠ −1;   [log(ı2 + e^{ıθ})]_0^{2π} for n = −1
= 0
3. We parameterize the contour and do the integration.
z = r e^{ıθ},   r = 2 − sin²(θ/4),   θ ∈ [0 . . . 4π)
The contour encircles the origin twice. See Figure 10.2.
∫_C z^{−1} dz = ∫_0^{4π} (1/(r(θ) e^{ıθ})) (r′(θ) + ır(θ)) e^{ıθ} dθ
= ∫_0^{4π} (r′(θ)/r(θ) + ı) dθ
= [log(r(θ)) + ıθ]_0^{4π}
Figure 10.2: The contour: r = 2 − sin²(θ/4).
Since r(θ) does not vanish, the argument of r(θ) does not change in traversing the contour and thus the logarithmic term has the same value at the beginning and end of the path.
∫_C z^{−1} dz = ı4π
This answer is twice what we found in part (a) because the contour goes around the origin twice.
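The double winding is easy to confirm numerically. The sketch below is our own check (not from the text), with the derivative r′(θ) computed by hand from r(θ) = 2 − sin²(θ/4).

```python
import numpy as np

# r(theta) = 2 - sin^2(theta/4), theta in [0, 4 pi): circles z = 0 twice.
theta = np.linspace(0, 4 * np.pi, 400001)
r = 2 - np.sin(theta / 4)**2
rp = -np.sin(theta / 2) / 4                # r'(theta)
dz = (rp + 1j * r) * np.exp(1j * theta)    # dz/dtheta for z = r e^{i theta}
f = dz / (r * np.exp(1j * theta))          # (1/z) dz/dtheta
value = np.sum((f[1:] + f[:-1]) / 2) * (theta[1] - theta[0])
print(value, 4j * np.pi)                   # both approx 12.566i
```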
Solution 10.5
1. We parameterize the contour with z = R e^{ıθ} and bound the modulus of the integral.
|∫_{CR} (z + Log z)/(z³ + 1) dz| ≤ ∫_{CR} |(z + Log z)/(z³ + 1)| |dz|
≤ ∫_0^{2π} [(R + ln R + π)/(R³ − 1)] R dθ
= 2πR (R + ln R + π)/(R³ − 1)
The upper bound on the modulus of the integral vanishes as R → ∞.
lim_{R→∞} 2πR (R + ln R + π)/(R³ − 1) = 0
We conclude that the integral vanishes as R → ∞.
lim_{R→∞} ∫_{CR} (z + Log z)/(z³ + 1) dz = 0
2. We parameterize the contour and bound the modulus of the integral.
z = 2 e^{ıθ},   θ ∈ [−π/2 . . . π/2]
|∫_C Log z dz| ≤ ∫_C |Log z| |dz|
= ∫_{−π/2}^{π/2} |ln 2 + ıθ| 2 dθ
≤ 2 ∫_{−π/2}^{π/2} (ln 2 + |θ|) dθ
= 4 ∫_0^{π/2} (ln 2 + θ) dθ
= (π/2)(π + 4 ln 2)
3. We parameterize the contour and bound the modulus of the integral.
z = R e^{ıθ},   θ ∈ [θ0 . . . θ0 + π]
|∫_C (z² − 1)/(z² + 1) dz| ≤ ∫_C |(z² − 1)/(z² + 1)| |dz|
≤ ∫_{θ0}^{θ0+π} |(R² e^{ı2θ} − 1)/(R² e^{ı2θ} + 1)| |R dθ|
≤ R ∫_{θ0}^{θ0+π} (R² + 1)/(R² − 1) dθ
= πR (R² + 1)/(R² − 1)
Solution 10.6
∫_C f(z) dz = ∫_0^1 √r dr + ∫_0^π e^{ıθ/2} ı e^{ıθ} dθ + ∫_1^0 ı√r (−dr)
= 2/3 + (−2/3 − ı2/3) + ı2/3
= 0
The Cauchy-Goursat theorem does not apply because the function is not analytic at z = 0, a point
on the boundary.
Solution 10.7
1. ∫_C (ız³ + z^{−3}) dz = [ız⁴/4 − 1/(2z²)]_{1+ı}^{ı} = 1/2 + ı
In this example, the anti-derivative is single-valued.
2. ∫_C sin² z cos z dz = [sin³ z/3]_π^{ıπ} = (1/3)(sin³(ıπ) − sin³(π)) = −ı sinh³(π)/3
Again the anti-derivative is single-valued.
3. We choose the branch of z^ı with −π/2 < arg(z) < 3π/2. This matches the principal value of z^ı above the real axis and is defined continuously on the path of integration.
∫_C z^ı dz = [z^{1+ı}/(1 + ı)]_{e^{ıπ}}^{e^{ı0}}
= [(1 − ı)/2 · e^{(1+ı) log z}]_{e^{ıπ}}^{e^{ı0}}
= (1 − ı)/2 (e^0 − e^{(1+ı)ıπ})
= (1 + e^{−π})/2 (1 − ı)
Chapter 11
Cauchy’s Integral Formula
If I were founding a university I would begin with a smoking room; next a dormitory; and then a
decent reading room and a library. After that, if I still had more money that I couldn’t use, I would
hire a professor and get some text books.
- Stephen Leacock
11.1 Cauchy’s Integral Formula
Result 11.1.1 Cauchy’s Integral Formula. If f(ζ) is analytic in a com-
pact, closed, connected domain D and z is a point in the interior of D then
f(z) =
1
ı2π ∂D
f(ζ)
ζ − z
dζ =
1
ı2π k Ck
f(ζ)
ζ − z
dζ. (11.1)
Here the set of contours {Ck} make up the positively oriented boundary ∂D
of the domain D. More generally, we have
f(n)
(z) =
n!
ı2π ∂D
f(ζ)
(ζ − z)n+1
dζ =
n!
ı2π k Ck
f(ζ)
(ζ − z)n+1
dζ. (11.2)
Cauchy’s Formula shows that the value of f(z) and all its derivatives in a domain are determined
by the value of f(z) on the boundary of the domain. Consider the first formula of the result,
Equation 11.1. We deform the contour to a circle of radius δ about the point ζ = z.
∫_C f(ζ)/(ζ − z) dζ = ∫_{Cδ} f(ζ)/(ζ − z) dζ
= ∫_{Cδ} f(z)/(ζ − z) dζ + ∫_{Cδ} (f(ζ) − f(z))/(ζ − z) dζ
We use the result of Example 10.8.1 to evaluate the first integral.
∫_C f(ζ)/(ζ − z) dζ = ı2πf(z) + ∫_{Cδ} (f(ζ) − f(z))/(ζ − z) dζ
The remaining integral along Cδ vanishes as δ → 0 because f(ζ) is continuous. We demonstrate this with the maximum modulus integral bound. The length of the path of integration is 2πδ.
lim_{δ→0} |∫_{Cδ} (f(ζ) − f(z))/(ζ − z) dζ| ≤ lim_{δ→0} (2πδ)(1/δ) max_{|ζ−z|=δ} |f(ζ) − f(z)|
≤ lim_{δ→0} 2π max_{|ζ−z|=δ} |f(ζ) − f(z)|
= 0
This gives us the desired result.
f(z) = 1/(ı2π) ∫_C f(ζ)/(ζ − z) dζ
We derive the second formula, Equation 11.2, from the first by differentiating with respect to z. Note that the integral converges uniformly for z in any closed subset of the interior of C. Thus we can differentiate with respect to z and interchange the order of differentiation and integration.
f^{(n)}(z) = 1/(ı2π) dⁿ/dzⁿ ∫_C f(ζ)/(ζ − z) dζ
= 1/(ı2π) ∫_C dⁿ/dzⁿ [f(ζ)/(ζ − z)] dζ
= n!/(ı2π) ∫_C f(ζ)/(ζ − z)^{n+1} dζ
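The derivative formula lends itself to a direct numeric test. In the sketch below (our own illustration; the choice f = exp, the center and the radius are assumptions) every derivative of the exponential should be recovered as e^{z0}.

```python
import numpy as np
from math import factorial

# f^{(n)}(z0) = n!/(2 pi i) * contour integral of f(zeta)/(zeta - z0)^{n+1}.
z0, n, rad = 0.3 + 0.2j, 3, 1.0            # assumed center, order, radius
theta = np.linspace(0, 2 * np.pi, 20001)
zeta = z0 + rad * np.exp(1j * theta)
g = np.exp(zeta) / (zeta - z0)**(n + 1) * (1j * rad * np.exp(1j * theta))
integral = np.sum((g[1:] + g[:-1]) / 2) * (theta[1] - theta[0])
deriv = factorial(n) / (2j * np.pi) * integral
print(abs(deriv - np.exp(z0)))             # every derivative of exp is exp
```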
Example 11.1.1 Consider the following integrals where C is the positive contour on the unit circle. For the third integral, the point z = −1 is removed from the contour.
1. ∫_C sin(cos(z⁵)) dz
2. ∫_C 1/((z − 3)(3z − 1)) dz
3. ∫_C √z dz
1. Since sin(cos(z⁵)) is an analytic function inside the unit circle,
∫_C sin(cos(z⁵)) dz = 0
2. 1/((z − 3)(3z − 1)) has singularities at z = 3 and z = 1/3. Since z = 3 is outside the contour, only the singularity at z = 1/3 will contribute to the value of the integral. We will evaluate this integral using the Cauchy integral formula.
∫_C 1/((z − 3)(3z − 1)) dz = ı2π (1/((1/3 − 3)3)) = −ıπ/4
3. Since the curve is not closed, we cannot apply the Cauchy integral formula. Note that √z is single-valued and analytic in the complex plane with a branch cut on the negative real axis. Thus we use the Fundamental Theorem of Calculus.
∫_C √z dz = [(2/3) √(z³)]_{e^{−ıπ}}^{e^{ıπ}}
= (2/3)(e^{ı3π/2} − e^{−ı3π/2})
= (2/3)(−ı − ı)
= −ı(4/3)
Cauchy’s Inequality. Suppose the f(ζ) is analytic in the closed disk |ζ − z| ≤ r. By Cauchy’s
integral formula,
f(n)
(z) =
n!
ı2π C
f(ζ)
(ζ − z)n+1
dζ,
where C is the circle of radius r centered about the point z. We use this to obtain an upper bound
on the modulus of f(n)
(z).
f(n)
(z) =
n!
2π C
f(ζ)
(ζ − z)n+1
dζ
≤
n!
2π
2πr max
|ζ−z|=r
f(ζ)
(ζ − z)n+1
=
n!
rn
max
|ζ−z|=r
|f(ζ)|
Result 11.1.2 Cauchy’s Inequality. If f(ζ) is analytic in |ζ − z| ≤ r then
f(n)
(z) ≤
n!M
rn
where |f(ζ)| ≤ M for all |ζ − z| = r.
Liouville’s Theorem. Consider a function f(z) that is analytic and bounded, (|f(z)| ≤ M), in
the complex plane. From Cauchy’s inequality,
|f (z)| ≤
M
r
for any positive r. By taking r → ∞, we see that f (z) is identically zero for all z. Thus f(z) is a
constant.
Result 11.1.3 Liouville’s Theorem. If f(z) is analytic and |f(z)| is
bounded in the complex plane then f(z) is a constant.
The Fundamental Theorem of Algebra. We will prove that every polynomial of degree n ≥ 1 has exactly n roots, counting multiplicities. First we demonstrate that each such polynomial has at least one root. Suppose that an nth degree polynomial p(z) has no roots. Let the lower bound on the modulus of p(z) be 0 < m ≤ |p(z)|. The function f(z) = 1/p(z) is analytic, (f′(z) = −p′(z)/p²(z)), and bounded, (|f(z)| ≤ 1/m), in the extended complex plane. Using Liouville's theorem we conclude that f(z) and hence p(z) are constants, which yields a contradiction. Therefore every such polynomial p(z) must have at least one root.
Now we show that we can factor the root out of the polynomial. Let
p(z) = Σ_{k=0}^n p_k z^k.
We note that
(z^n − c^n) = (z − c) Σ_{k=0}^{n−1} c^{n−1−k} z^k.
Suppose that the nth degree polynomial p(z) has a root at z = c.
p(z) = p(z) − p(c)
= Σ_{k=0}^n p_k z^k − Σ_{k=0}^n p_k c^k
= Σ_{k=0}^n p_k (z^k − c^k)
= Σ_{k=0}^n p_k (z − c) Σ_{j=0}^{k−1} c^{k−1−j} z^j
= (z − c) q(z)
Here q(z) is a polynomial of degree n − 1. By induction, we see that p(z) has exactly n roots.
Result 11.1.4 Fundamental Theorem of Algebra. Every polynomial of
degree n ≥ 1 has exactly n roots, counting multiplicities.
Gauss’ Mean Value Theorem. Let f(ζ) be analytic in |ζ−z| ≤ r. By Cauchy’s integral formula,
f(z) =
1
ı2π C
f(ζ)
ζ − z
dζ,
where C is the circle |ζ − z| = r. We parameterize the contour with ζ = z + r eıθ
.
f(z) =
1
ı2π
2π
0
f(z + r eıθ
)
r eıθ
ır eıθ
dθ
Writing this in the form,
f(z) =
1
2πr
2π
0
f(z + r eıθ
)r dθ,
we see that f(z) is the average value of f(ζ) on the circle of radius r about the point z.
Result 11.1.5 Gauss’ Average Value Theorem. If f(ζ) is analytic in
|ζ − z| ≤ r then
f(z) =
1
2π
2π
0
f(z + r eıθ
) dθ.
That is, f(z) is equal to its average value on a circle of radius r about the
point z.
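A one-line numeric illustration (our own, with an assumed test function): average an analytic function over a circle and compare with its value at the center.

```python
import numpy as np

# Average an analytic function over a circle; it equals the value at the center.
f = lambda z: z**2 + np.exp(z)             # assumed test function
z0, r = 1.0 - 0.5j, 2.0
theta = np.linspace(0, 2 * np.pi, 20001)
vals = f(z0 + r * np.exp(1j * theta))
avg = np.sum((vals[1:] + vals[:-1]) / 2) * (theta[1] - theta[0]) / (2 * np.pi)
print(abs(avg - f(z0)))                    # ~1e-15
```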
Extremum Modulus Theorem. Let f(z) be analytic in a closed, connected domain, D. The extreme values of the modulus of the function must occur on the boundary. If |f(z)| has an interior extrema, then the function is a constant. We will show this with proof by contradiction. Assume that |f(z)| has an interior maxima at the point z = c. This means that there exists a neighborhood of the point z = c for which |f(z)| ≤ |f(c)|. Choose an ε so that the set |z − c| ≤ ε lies inside this neighborhood. First we use Gauss' mean value theorem.
f(c) = 1/(2π) ∫_0^{2π} f(c + ε e^{ıθ}) dθ
We get an upper bound on |f(c)| with the maximum modulus integral bound.
|f(c)| ≤ 1/(2π) ∫_0^{2π} |f(c + ε e^{ıθ})| dθ
Since z = c is a maxima of |f(z)| we can get a lower bound on |f(c)|.
|f(c)| ≥ 1/(2π) ∫_0^{2π} |f(c + ε e^{ıθ})| dθ
If |f(z)| < |f(c)| for any point on |z − c| = ε, then the continuity of f(z) implies that |f(z)| < |f(c)| in a neighborhood of that point which would make the value of the integral of |f(z)| strictly less than |f(c)|. Thus we conclude that |f(z)| = |f(c)| for all |z − c| = ε. Since we can repeat the above procedure for any circle of radius smaller than ε, |f(z)| = |f(c)| for all |z − c| ≤ ε, i.e. all the points in the disk of radius ε about z = c are also maxima. By recursively repeating this procedure with points in this disk, we see that |f(z)| = |f(c)| for all z ∈ D. This implies that f(z) is a constant in the domain. By reversing the inequalities in the above method we see that the minimum modulus of f(z) must also occur on the boundary.
Result 11.1.6 Extremum Modulus Theorem. Let f(z) be analytic in
a closed, connected domain, D. The extreme values of the modulus of the
function must occur on the boundary. If |f(z)| has an interior extrema, then
the function is a constant.
11.2 The Argument Theorem
Result 11.2.1 The Argument Theorem. Let f(z) be analytic inside and on C except for isolated poles inside the contour. Let f(z) be nonzero on C.
1/(ı2π) ∫_C f′(z)/f(z) dz = N − P
Here N is the number of zeros and P the number of poles, counting multiplicities, of f(z) inside C.
First we will simplify the problem and consider a function f(z) that has one zero or one pole. Let f(z) be analytic and nonzero inside and on A except for a zero of order n at z = a. Then we can write f(z) = (z − a)^n g(z) where g(z) is analytic and nonzero inside and on A. The integral of f′(z)/f(z)
along A is
1/(ı2π) ∫_A f′(z)/f(z) dz = 1/(ı2π) ∫_A d/dz (log(f(z))) dz
= 1/(ı2π) ∫_A d/dz (log((z − a)^n) + log(g(z))) dz
= 1/(ı2π) ∫_A d/dz (log((z − a)^n)) dz
= 1/(ı2π) ∫_A n/(z − a) dz
= n
Now let f(z) be analytic and nonzero inside and on B except for a pole of order p at z = b. Then we can write f(z) = g(z)/(z − b)^p where g(z) is analytic and nonzero inside and on B. The integral of f′(z)/f(z) along B is
1/(ı2π) ∫_B f′(z)/f(z) dz = 1/(ı2π) ∫_B d/dz (log(f(z))) dz
= 1/(ı2π) ∫_B d/dz (log((z − b)^{−p}) + log(g(z))) dz
= 1/(ı2π) ∫_B d/dz (log((z − b)^{−p})) dz
= 1/(ı2π) ∫_B −p/(z − b) dz
= −p
Now consider a function f(z) that is analytic inside and on the contour C except for isolated poles at the points b1, . . . , bp. Let f(z) be nonzero except at the isolated points a1, . . . , an. Let the contours Ak, k = 1, . . . , n, be simple, positive contours which contain the zero at ak but no other poles or zeros of f(z). Likewise, let the contours Bk, k = 1, . . . , p, be simple, positive contours which contain the pole at bk but no other poles or zeros of f(z). (See Figure 11.1.) By deforming the contour we obtain
∫_C f′(z)/f(z) dz = Σ_{j=1}^n ∫_{Aj} f′(z)/f(z) dz + Σ_{k=1}^p ∫_{Bk} f′(z)/f(z) dz.
From this we obtain Result 11.2.1.
From this we obtain Result 11.2.1.
Figure 11.1: Deforming the contour C.
11.3 Rouche’s Theorem
Result 11.3.1 Rouche’s Theorem. Let f(z) and g(z) be analytic inside
and on a simple, closed contour C. If |f(z)| > |g(z)| on C then f(z) and
f(z) + g(z) have the same number of zeros inside C and no zeros on C.
First note that since |f(z)| > |g(z)| on C, f(z) is nonzero on C. The inequality implies that
|f(z) + g(z)| > 0 on C so f(z) + g(z) has no zeros on C. We well count the number of zeros of f(z)
and g(z) using the Argument Theorem, (Result 11.2.1). The number of zeros N of f(z) inside the
contour is
N =
1
ı2π C
f (z)
f(z)
dz.
Now consider the number of zeros M of f(z) + g(z). We introduce the function h(z) = g(z)/f(z).
M =
1
ı2π C
f (z) + g (z)
f(z) + g(z)
dz
=
1
ı2π C
f (z) + f (z)h(z) + f(z)h (z)
f(z) + f(z)h(z)
dz
=
1
ı2π C
f (z)
f(z)
dz +
1
ı2π C
h (z)
1 + h(z)
dz
= N +
1
ı2π
[log(1 + h(z))]C
= N
(Note that since |h(z)| < 1 on C, (1 + h(z)) > 0 on C and the value of log(1 + h(z)) does not
not change in traversing the contour.) This demonstrates that f(z) and f(z) + g(z) have the same
number of zeros inside C and proves the result.
11.4 Exercises
Exercise 11.1
What is
[arg(sin z)]_C,
the change in arg(sin z) in traversing C, where C is the unit circle?
Exercise 11.2
Let C be the circle of radius 2 centered about the origin and oriented in the positive direction.
Evaluate the following integrals:
1. ∫_C sin z/(z² + 5) dz
2. ∫_C z/(z² + 1) dz
3. ∫_C (z² + 1)/z dz
Exercise 11.3
Let f(z) be analytic and bounded (i.e. |f(z)| < M) for |z| > R, but not necessarily analytic for
|z| ≤ R. Let the points α and β lie inside the circle |z| = R. Evaluate
∫_C f(z)/((z − α)(z − β)) dz
where C is any closed contour outside |z| = R, containing the circle |z| = R. [Hint: consider the circle
at infinity] Now suppose that in addition f(z) is analytic everywhere. Deduce that f(α) = f(β).
Exercise 11.4
Using Rouche’s theorem show that all the roots of the equation p(z) = z6
− 5z2
+ 10 = 0 lie in the
annulus 1 < |z| < 2.
Exercise 11.5
Evaluate as a function of t
ω = 1/(ı2π) ∫_C e^{zt}/(z²(z² + a²)) dz,
where C is any positively oriented contour surrounding the circle |z| = a.
Exercise 11.6
Consider C1, (the positively oriented circle |z| = 4), and C2, (the positively oriented boundary of
the square whose sides lie along the lines x = ±1, y = ±1). Explain why
C1
f(z) dz =
C2
f(z) dz
for the functions
1. f(z) = 1/(3z² + 1)
2. f(z) = z/(1 − e^z)
Exercise 11.7
Show that if f(z) is of the form
f(z) = α_k/z^k + α_{k−1}/z^{k−1} + · · · + α_1/z + g(z),   k ≥ 1
where g is analytic inside and on C, (the positive circle |z| = 1), then
∫_C f(z) dz = ı2πα_1.
Exercise 11.8
Show that if f(z) is analytic within and on a simple closed contour C and z0 is not on C then
∫_C f′(z)/(z − z0) dz = ∫_C f(z)/(z − z0)² dz.
Note that z0 may be either inside or outside of C.
Exercise 11.9
If C is the positive circle z = e^{ıθ} show that for any real constant a,
∫_C e^{az}/z dz = ı2π
and hence
∫_0^π e^{a cos θ} cos(a sin θ) dθ = π.
Exercise 11.10
Use Cauchy-Goursat, the generalized Cauchy integral formula, and suitable extensions to multiply-
connected domains to evaluate the following integrals. Be sure to justify your approach in each
case.
1. ∫_C z/(z³ − 9) dz
where C is the positively oriented rectangle whose sides lie along x = ±5, y = ±3.
2. ∫_C sin z/(z²(z − 4)) dz,
where C is the positively oriented circle |z| = 2.
3. ∫_C (z³ + z + ı) sin z/(z⁴ + ız³) dz,
where C is the positively oriented circle |z| = π.
4. ∫_C e^{zt}/(z²(z + 1)) dz
where C is any positive simple closed contour surrounding |z| = 1.
Exercise 11.11
Use Liouville’s theorem to prove the following:
1. If f(z) is entire with (f(z)) ≤ M for all z then f(z) is constant.
2. If f(z) is entire with |f(5)
(z)| ≤ M for all z then f(z) is a polynomial of degree at most five.
Exercise 11.12
Find all functions f(z) analytic in the domain D : |z| < R that satisfy f(0) = e^ı and |f(z)| ≤ 1 for all z in D.
313
Exercise 11.13
Let f(z) =
∞
k=0 k4 z
4
k
and evaluate the following contour integrals, providing justification in each
case:
1.
C
cos(ız)f(z) dz C is the positive circle |z − 1| = 1.
2.
C
f(z)
z3
dz C is the positive circle |z| = π.
314
11.5 Hints
Hint 11.1
Use the argument theorem.
Hint 11.2
Hint 11.3
To evaluate the integral, consider the circle at infinity.
Hint 11.4
Hint 11.5
Hint 11.6
Hint 11.7
Hint 11.8
Hint 11.9
Hint 11.10
Hint 11.11
Hint 11.12
Hint 11.13
11.6 Solutions
Solution 11.1
Let f(z) be analytic inside and on the contour C. Let f(z) be nonzero on the contour. The argument theorem states that
1/(ı2π) ∫_C f′(z)/f(z) dz = N − P,
where N is the number of zeros and P is the number of poles, (counting multiplicities), of f(z) inside C. The theorem is aptly named, as
1/(ı2π) ∫_C f′(z)/f(z) dz = 1/(ı2π) [log(f(z))]_C
= 1/(ı2π) [log |f(z)| + ı arg(f(z))]_C
= 1/(2π) [arg(f(z))]_C.
Thus we could write the argument theorem as
1/(ı2π) ∫_C f′(z)/f(z) dz = 1/(2π) [arg(f(z))]_C = N − P.
Since sin z has a single zero and no poles inside the unit circle, we have
1/(2π) [arg(sin z)]_C = 1 − 0
[arg(sin z)]_C = 2π
Solution 11.2
1. Since the integrand sin z/(z² + 5) is analytic inside and on the contour, (the only singularities are at z = ±ı√5 and at infinity), the integral is zero by Cauchy's Theorem.
2. First we expand the integrand in partial fractions.
z/(z² + 1) = a/(z − ı) + b/(z + ı)
a = [z/(z + ı)]_{z=ı} = 1/2, b = [z/(z − ı)]_{z=−ı} = 1/2
Now we can do the integral with Cauchy's formula.
∫_C z/(z² + 1) dz = ∫_C (1/2)/(z − ı) dz + ∫_C (1/2)/(z + ı) dz
= (1/2) ı2π + (1/2) ı2π
= ı2π
3.
∫_C (z² + 1)/z dz = ∫_C (z + 1/z) dz
= ∫_C z dz + ∫_C (1/z) dz
= 0 + ı2π
= ı2π
Solution 11.3
Let C be the circle of radius r, (r > R), centered at the origin. We get an upper bound on the integral with the Maximum Modulus Integral Bound, (Result 10.2.1).
|∫_C f(z)/((z − α)(z − β)) dz| ≤ 2πr max_{|z|=r} |f(z)/((z − α)(z − β))| ≤ 2πr M/((r − |α|)(r − |β|))
By taking the limit as r → ∞ we see that the modulus of the integral is bounded above by zero. Thus the integral vanishes.
Now we assume that f(z) is analytic and evaluate the integral with Cauchy's Integral Formula. (We assume that α ≠ β.)
∫_C f(z)/((z − α)(z − β)) dz = 0
∫_C f(z)/((z − α)(α − β)) dz + ∫_C f(z)/((β − α)(z − β)) dz = 0
ı2π f(α)/(α − β) + ı2π f(β)/(β − α) = 0
f(α) = f(β)
Solution 11.4
Consider the circle |z| = 2. On this circle:
|z⁶| = 64
|−5z² + 10| ≤ |−5z²| + |10| = 30
Since |−5z² + 10| < |z⁶| on |z| = 2, p(z) has the same number of roots as z⁶ in |z| < 2. p(z) has 6 roots in |z| < 2.
Consider the circle |z| = 1. On this circle:
|10| = 10
|z⁶ − 5z²| ≤ |z⁶| + |−5z²| = 6
Since |z⁶ − 5z²| < |10| on |z| = 1, p(z) has the same number of roots as 10 in |z| < 1. p(z) has no roots in |z| < 1.
On the unit circle,
|p(z)| ≥ |10| − |z⁶| − |5z²| = 4.
Thus p(z) has no roots on the unit circle.
We conclude that p(z) has exactly 6 roots in 1 < |z| < 2.
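As a numerical sanity check (an illustration, not part of the original argument; the helper count_zeros and the step count n are hypothetical choices), one can evaluate the argument-theorem integral 1/(ı2π) ∫ p′(z)/p(z) dz on the two circles with the trapezoidal rule:

import numpy as np

def count_zeros(r, n=20000):
    # Approximate (1/(2 pi i)) * integral of p'(z)/p(z) dz over |z| = r,
    # which equals the number of zeros of p inside the circle.
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    z = r * np.exp(1j * theta)
    p = z**6 - 5 * z**2 + 10
    dp = 6 * z**5 - 10 * z
    integral = np.sum(dp / p * 1j * z) * (2.0 * np.pi / n)  # dz = i z dtheta
    return integral / (2.0j * np.pi)

print(round(count_zeros(2.0).real))  # 6: all six roots lie inside |z| = 2
print(round(count_zeros(1.0).real))  # 0: no roots inside |z| = 1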
Solution 11.5
We evaluate the integral with Cauchy's Integral Formula.
ω = 1/(ı2π) ∫_C e^{zt}/(z²(z² + a²)) dz
ω = 1/(ı2π) ∫_C ( e^{zt}/(a²z²) + ı e^{zt}/(2a³(z − ıa)) − ı e^{zt}/(2a³(z + ıa)) ) dz
ω = [d/dz (e^{zt}/a²)]_{z=0} + ı e^{ıat}/(2a³) − ı e^{−ıat}/(2a³)
ω = t/a² − sin(at)/a³
ω = (at − sin(at))/a³
Solution 11.6
1. We factor the denominator of the integrand.
1/(3z² + 1) = 1/(3(z − ı√3/3)(z + ı√3/3))
There are two first order poles which could contribute to the value of an integral on a closed path. Both poles lie inside both contours. See Figure 11.2.
[Figure 11.2: The contours and the singularities of 1/(3z² + 1).]
We see that C1 can be continuously deformed to C2 on the domain where the integrand is analytic. Thus the integrals have the same value.
2. We consider the integrand
z/(1 − e^z).
Since e^z = 1 has the solutions z = ı2πn for n ∈ ℤ, the integrand has singularities at these points. There is a removable singularity at z = 0 and first order poles at z = ı2πn for n ∈ ℤ \ {0}. Each contour contains only the singularity at z = 0. See Figure 11.3.
[Figure 11.3: The contours and the singularities of z/(1 − e^z).]
We see that C1 can be continuously deformed to C2 on the domain where the integrand is analytic. Thus the integrals have the same value.
Solution 11.7
First we write the integral of f(z) as a sum of integrals.
∫_C f(z) dz = ∫_C ( α_k/z^k + α_{k−1}/z^{k−1} + ··· + α_1/z + g(z) ) dz
= ∫_C α_k/z^k dz + ∫_C α_{k−1}/z^{k−1} dz + ··· + ∫_C α_1/z dz + ∫_C g(z) dz
The integral of g(z) vanishes by the Cauchy-Goursat theorem. We evaluate the integral of α_1/z with Cauchy's integral formula.
∫_C α_1/z dz = ı2πα_1
We evaluate the remaining α_n/zⁿ terms with anti-derivatives. Each of these integrals vanishes.
∫_C f(z) dz = ∫_C α_k/z^k dz + ∫_C α_{k−1}/z^{k−1} dz + ··· + ∫_C α_1/z dz + ∫_C g(z) dz
= [−α_k/((k − 1)z^{k−1})]_C + ··· + [−α_2/z]_C + ı2πα_1
= ı2πα_1
Solution 11.8
We evaluate the integrals with the Cauchy integral formula. (z0 is required to not be on C so the integrals exist.)
∫_C f′(z)/(z − z0) dz = { ı2π f′(z0) if z0 is inside C; 0 if z0 is outside C }
∫_C f(z)/(z − z0)² dz = { (ı2π/1!) f′(z0) if z0 is inside C; 0 if z0 is outside C }
Thus we see that the integrals are equal.
Solution 11.9
First we evaluate the integral using the Cauchy Integral Formula.
∫_C e^{az}/z dz = ı2π [e^{az}]_{z=0} = ı2π
Next we parameterize the path of integration. We use the periodicity of the cosine and sine to simplify the integral.
∫_C e^{az}/z dz = ı2π
∫_0^{2π} (e^{a e^{ıθ}}/e^{ıθ}) ı e^{ıθ} dθ = ı2π
∫_0^{2π} e^{a(cos θ + ı sin θ)} dθ = 2π
∫_0^{2π} e^{a cos θ} (cos(a sin θ) + ı sin(a sin θ)) dθ = 2π
∫_0^{2π} e^{a cos θ} cos(a sin θ) dθ = 2π
∫_0^π e^{a cos θ} cos(a sin θ) dθ = π
Solution 11.10
1. We factor the integrand to see that there are singularities at the cube roots of 9.
z/(z³ − 9) = z/((z − 9^{1/3})(z − 9^{1/3} e^{ı2π/3})(z − 9^{1/3} e^{−ı2π/3}))
Let C1, C2 and C3 be contours around z = 9^{1/3}, z = 9^{1/3} e^{ı2π/3} and z = 9^{1/3} e^{−ı2π/3}. See Figure 11.4. Let D be the domain between C, C1, C2 and C3, i.e. the boundary of D is the union of C, −C1, −C2 and −C3. Since the integrand is analytic in D, the integral along the boundary of D vanishes.
∫_{∂D} z/(z³ − 9) dz = ∫_C z/(z³ − 9) dz + ∫_{−C1} z/(z³ − 9) dz + ∫_{−C2} z/(z³ − 9) dz + ∫_{−C3} z/(z³ − 9) dz = 0
From this we see that the integral along C is equal to the sum of the integrals along C1, C2 and C3. (We could also see this by deforming C onto C1, C2 and C3.)
∫_C z/(z³ − 9) dz = ∫_{C1} z/(z³ − 9) dz + ∫_{C2} z/(z³ − 9) dz + ∫_{C3} z/(z³ − 9) dz
We use the Cauchy Integral Formula to evaluate the integrals along C1, C2 and C3.
∫_C z/(z³ − 9) dz = ı2π [z/((z − 9^{1/3} e^{ı2π/3})(z − 9^{1/3} e^{−ı2π/3}))]_{z=9^{1/3}}
+ ı2π [z/((z − 9^{1/3})(z − 9^{1/3} e^{−ı2π/3}))]_{z=9^{1/3} e^{ı2π/3}}
+ ı2π [z/((z − 9^{1/3})(z − 9^{1/3} e^{ı2π/3}))]_{z=9^{1/3} e^{−ı2π/3}}
= ı2π 3^{−5/3} (1 − e^{ıπ/3} + e^{ı2π/3})
= 0
[Figure 11.4: The contours for z/(z³ − 9).]
2. The integrand has singularities at z = 0 and z = 4. Only the singularity at z = 0 lies inside the contour. We use the Cauchy Integral Formula to evaluate the integral.
∫_C sin z/(z²(z − 4)) dz = ı2π [d/dz (sin z/(z − 4))]_{z=0}
= ı2π [cos z/(z − 4) − sin z/(z − 4)²]_{z=0}
= −ıπ/2
3. We factor the integrand to see that there are singularities at z = 0 and z = −ı.
∫_C (z³ + z + ı) sin z/(z⁴ + ız³) dz = ∫_C (z³ + z + ı) sin z/(z³(z + ı)) dz
Let C1 and C2 be contours around z = 0 and z = −ı. See Figure 11.5. Let D be the domain between C, C1 and C2, i.e. the boundary of D is the union of C, −C1 and −C2. Since the integrand is analytic in D, the integral along the boundary of D vanishes.
∫_{∂D} = ∫_C + ∫_{−C1} + ∫_{−C2} = 0
From this we see that the integral along C is equal to the sum of the integrals along C1 and C2. (We could also see this by deforming C onto C1 and C2.)
∫_C = ∫_{C1} + ∫_{C2}
We use the Cauchy Integral Formula to evaluate the integrals along C1 and C2.
∫_C (z³ + z + ı) sin z/(z⁴ + ız³) dz = ∫_{C1} (z³ + z + ı) sin z/(z³(z + ı)) dz + ∫_{C2} (z³ + z + ı) sin z/(z³(z + ı)) dz
= ı2π [(z³ + z + ı) sin z/z³]_{z=−ı} + (ı2π/2!) [d²/dz² ((z³ + z + ı) sin z/(z + ı))]_{z=0}
= ı2π(−ı sinh(1)) + ıπ [ 2((3z² + 1)/(z + ı) − (z³ + z + ı)/(z + ı)²) cos z + (6z/(z + ı) − 2(3z² + 1)/(z + ı)² + 2(z³ + z + ı)/(z + ı)³ − (z³ + z + ı)/(z + ı)) sin z ]_{z=0}
= 2π sinh(1)
[Figure 11.5: The contours for (z³ + z + ı) sin z/(z⁴ + ız³).]
4. We consider the integral
∫_C e^{zt}/(z²(z + 1)) dz.
There are singularities at z = 0 and z = −1.
Let C1 and C2 be contours around z = 0 and z = −1. See Figure 11.6. We deform C onto C1 and C2.
∫_C = ∫_{C1} + ∫_{C2}
We use the Cauchy Integral Formula to evaluate the integrals along C1 and C2.
∫_C e^{zt}/(z²(z + 1)) dz = ∫_{C1} e^{zt}/(z²(z + 1)) dz + ∫_{C2} e^{zt}/(z²(z + 1)) dz
= ı2π [e^{zt}/z²]_{z=−1} + ı2π [d/dz (e^{zt}/(z + 1))]_{z=0}
= ı2π e^{−t} + ı2π [t e^{zt}/(z + 1) − e^{zt}/(z + 1)²]_{z=0}
= ı2π(e^{−t} + t − 1)
[Figure 11.6: The contours for e^{zt}/(z²(z + 1)).]
Solution 11.11
Liouville's Theorem states that if f(z) is analytic and bounded in the complex plane then f(z) is a constant.
1. Since f(z) is analytic, e^{f(z)} is analytic. The modulus of e^{f(z)} is bounded.
|e^{f(z)}| = e^{ℜ(f(z))} ≤ e^M
By Liouville's Theorem we conclude that e^{f(z)} is constant and hence f(z) is constant.
2. We know that f(z) is entire and |f⁽⁵⁾(z)| is bounded in the complex plane. Since f(z) is analytic, so is f⁽⁵⁾(z). We apply Liouville's Theorem to f⁽⁵⁾(z) to conclude that it is a constant. Then we integrate five times to determine the form of f(z).
f(z) = c₅z⁵ + c₄z⁴ + c₃z³ + c₂z² + c₁z + c₀
Here c₅ is the constant value of f⁽⁵⁾(z) divided by 5! and c₄ through c₀ are constants of integration. We see that f(z) is a polynomial of degree at most five.
Solution 11.12
For this problem we will use the Extremum Modulus Theorem: Let f(z) be analytic in a closed, connected domain, D. The extreme values of the modulus of the function must occur on the boundary. If |f(z)| has an interior extremum, then the function is a constant.
Since |f(z)| has an interior extremum, |f(0)| = |e^ı| = 1, we conclude that f(z) is a constant on D. Since we know the value at z = 0, we know that f(z) = e^ı.
Solution 11.13
First we determine the radius of convergence of the series with the ratio test.
R = lim_{k→∞} (k⁴/4^k)/((k + 1)⁴/4^{k+1}) = 4 lim_{k→∞} k⁴/(k + 1)⁴ = 4
The series converges absolutely for |z| < 4.
1. Since the integrand is analytic inside and on the contour of integration, the integral vanishes by Cauchy's Theorem.
2.
∫_C f(z)/z³ dz = ∫_C Σ_{k=0}^∞ k⁴ (z/4)^k (1/z³) dz
= ∫_C Σ_{k=1}^∞ (k⁴/4^k) z^{k−3} dz
= ∫_C Σ_{k=−2}^∞ ((k + 3)⁴/4^{k+3}) z^k dz
= ∫_C 1/(4z²) dz + ∫_C (1/z) dz + ∫_C Σ_{k=0}^∞ ((k + 3)⁴/4^{k+3}) z^k dz
We can parameterize the first integral to show that it vanishes. The second integral has the value ı2π. The third integral vanishes by Cauchy's Theorem as the integrand is analytic inside and on the contour.
∫_C f(z)/z³ dz = ı2π
Chapter 12
Series and Convergence
You are not thinking. You are merely being logical.
- Niels Bohr
12.1 Series of Constants
12.1.1 Definitions
Convergence of Sequences. The infinite sequence {a_n}_{n=0}^∞ ≡ a₀, a₁, a₂, . . . is said to converge if
lim_{n→∞} a_n = a
for some constant a. If the limit does not exist, then the sequence diverges. Recall the definition of the limit in the above formula: For any ε > 0 there exists an N ∈ ℤ such that |a − a_n| < ε for all n > N.
Example 12.1.1 The sequence {sin(n)} is divergent. The sequence is bounded above and below, but boundedness does not imply convergence.
Cauchy Convergence Criterion. Note that there is something a little fishy about the above definition. We should be able to say if a sequence converges without first finding the constant to which it converges. We fix this problem with the Cauchy convergence criterion. A sequence {a_n} converges if and only if for any ε > 0 there exists an N such that |a_n − a_m| < ε for all n, m > N. The Cauchy convergence criterion is equivalent to the definition we had before. For some problems it is handier to use. Now we don't need to know the limit of a sequence to show that it converges.
Convergence of Series. The series Σ_{n=0}^∞ a_n converges if the sequence of partial sums, S_N = Σ_{n=0}^{N−1} a_n, converges. That is,
lim_{N→∞} S_N = lim_{N→∞} Σ_{n=0}^{N−1} a_n = constant.
If the limit does not exist, then the series diverges. A necessary condition for the convergence of a series is that
lim_{n→∞} a_n = 0.
(See Exercise 12.1.) Otherwise the sequence of partial sums would not converge.
Example 12.1.2 The series Σ_{n=0}^∞ (−1)ⁿ = 1 − 1 + 1 − 1 + ··· is divergent because the sequence of partial sums, {S_N} = 1, 0, 1, 0, 1, 0, . . . is divergent.
Tail of a Series. An infinite series, Σ_{n=0}^∞ a_n, converges or diverges with its tail. That is, for fixed N, Σ_{n=0}^∞ a_n converges if and only if Σ_{n=N}^∞ a_n converges. This is because the sum of the first N terms of a series is just a number. Adding or subtracting a number to a series does not change its convergence.
Absolute Convergence. The series Σ_{n=0}^∞ a_n converges absolutely if Σ_{n=0}^∞ |a_n| converges. Absolute convergence implies convergence. If a series is convergent, but not absolutely convergent, then it is said to be conditionally convergent.
The terms of an absolutely convergent series can be rearranged in any order and the series will still converge to the same sum. This is not true of conditionally convergent series. Rearranging the terms of a conditionally convergent series may change the sum. In fact, the terms of a conditionally convergent series may be rearranged to obtain any desired sum.
Example 12.1.3 The alternating harmonic series,
1 − 1/2 + 1/3 − 1/4 + ···,
converges, (Exercise 12.4). Since
1 + 1/2 + 1/3 + 1/4 + ···
diverges, (Exercise 12.5), the alternating harmonic series is not absolutely convergent. Thus the terms can be rearranged to obtain any sum, (Exercise 12.6).
Finite Series and Residuals. Consider the series f(z) = Σ_{n=0}^∞ a_n(z). We will denote the sum of the first N terms in the series as
S_N(z) = Σ_{n=0}^{N−1} a_n(z).
We will denote the residual after N terms as
R_N(z) ≡ f(z) − S_N(z) = Σ_{n=N}^∞ a_n(z).
12.1.2 Special Series
Geometric Series. One of the most important series in mathematics is the geometric series,¹
Σ_{n=0}^∞ zⁿ = 1 + z + z² + z³ + ···.
The series clearly diverges for |z| ≥ 1 since the terms do not vanish as n → ∞. Consider the partial sum, S_N(z) ≡ Σ_{n=0}^{N−1} zⁿ, for |z| < 1.
(1 − z)S_N(z) = (1 − z) Σ_{n=0}^{N−1} zⁿ
= Σ_{n=0}^{N−1} zⁿ − Σ_{n=1}^N zⁿ
= (1 + z + ··· + z^{N−1}) − (z + z² + ··· + z^N)
= 1 − z^N
Σ_{n=0}^{N−1} zⁿ = (1 − z^N)/(1 − z) → 1/(1 − z) as N → ∞.
The limit of the partial sums is 1/(1 − z).
Σ_{n=0}^∞ zⁿ = 1/(1 − z) for |z| < 1
¹ The series is so named because the terms grow or decay geometrically. Each term in the series is a constant times the previous term.
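A minimal numerical sketch of this convergence (plain Python; the sample point z is an arbitrary choice with |z| < 1): the error of the N-th partial sum is z^N/(1 − z), which decays geometrically.

z = 0.5 + 0.25j  # any point with |z| < 1
limit = 1 / (1 - z)
for N in (5, 10, 20, 40):
    S_N = sum(z**n for n in range(N))  # partial sum of the geometric series
    print(N, abs(S_N - limit))         # error = |z|**N / |1 - z|, decays geometrically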
Harmonic Series. Another important series is the harmonic series,
Σ_{n=1}^∞ 1/n^α = 1 + 1/2^α + 1/3^α + ···.
The series is absolutely convergent for ℜ(α) > 1 and absolutely divergent for ℜ(α) ≤ 1, (see Exercise 12.8). The Riemann zeta function ζ(α) is defined as the sum of the harmonic series.
ζ(α) = Σ_{n=1}^∞ 1/n^α
The alternating harmonic series is
Σ_{n=1}^∞ (−1)^{n+1}/n^α = 1 − 1/2^α + 1/3^α − 1/4^α + ···.
Again, the series is absolutely convergent for ℜ(α) > 1 and absolutely divergent for ℜ(α) ≤ 1.
12.1.3 Convergence Tests
The Comparison Test.
Result 12.1.1 The series of positive terms Σ a_n converges if there exists a convergent series Σ b_n such that a_n ≤ b_n for all n. Similarly, Σ a_n diverges if there exists a divergent series Σ b_n such that a_n ≥ b_n for all n.
Example 12.1.4 Consider the series
Σ_{n=1}^∞ 1/2^{n²}.
We can rewrite this as
Σ_{n=1, n a perfect square}^∞ 1/2ⁿ.
Then by comparing this series to the geometric series,
Σ_{n=1}^∞ 1/2ⁿ = 1,
we see that it is convergent.
Integral Test.
Result 12.1.2 If the coefficients a_n of a series Σ_{n=0}^∞ a_n are monotonically decreasing and can be extended to a monotonically decreasing function of the continuous variable x,
a(x) = a_n for x ∈ ℤ⁰⁺,
then the series converges or diverges with the integral
∫_0^∞ a(x) dx.
Example 12.1.5 Consider the series Σ_{n=1}^∞ 1/n². Define the functions s_l(x) and s_r(x), (left and right),
s_l(x) = 1/⌈x⌉², s_r(x) = 1/⌊x⌋².
Recall that ⌊x⌋ is the greatest integer function, the greatest integer which is less than or equal to x. ⌈x⌉ is the least integer function, the least integer greater than or equal to x. We can express the series as integrals of these functions.
Σ_{n=1}^∞ 1/n² = ∫_0^∞ s_l(x) dx = ∫_1^∞ s_r(x) dx
In Figure 12.1 these functions are plotted against y = 1/x². From the graph, it is clear that we can obtain a lower and upper bound for the series.
∫_1^∞ (1/x²) dx ≤ Σ_{n=1}^∞ 1/n² ≤ 1 + ∫_1^∞ (1/x²) dx
1 ≤ Σ_{n=1}^∞ 1/n² ≤ 2
[Figure 12.1: Upper and Lower bounds to Σ_{n=1}^∞ 1/n².]
In general, we have
∫_m^∞ a(x) dx ≤ Σ_{n=m}^∞ a_n ≤ a_m + ∫_m^∞ a(x) dx.
Thus we see that the sum converges or diverges with the integral.
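A quick numerical sketch of the bounds above (plain Python; the cutoff 10**6 is an arbitrary truncation): a long partial sum of Σ 1/n² indeed lands between the lower bound 1 and the upper bound 2.

from math import fsum

# The full sum is pi^2/6, approximately 1.6449, between the integral bounds 1 and 2.
partial = fsum(1.0 / n**2 for n in range(1, 10**6))
print(partial)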
The Ratio Test.
Result 12.1.3 The series Σ a_n converges absolutely if
lim_{n→∞} |a_{n+1}/a_n| < 1.
If the limit is greater than unity, then the series diverges. If the limit is unity, the test fails.
If the limit is greater than unity, then the terms are eventually increasing with n. Since the terms do not vanish, the sum is divergent. If the limit is less than unity, then there exists some N such that
|a_{n+1}/a_n| ≤ r < 1 for all n ≥ N.
From this we can show that Σ_{n=0}^∞ a_n is absolutely convergent by comparing it to the geometric series.
Σ_{n=N}^∞ |a_n| ≤ |a_N| Σ_{n=0}^∞ rⁿ = |a_N| 1/(1 − r)
Example 12.1.6 Consider the series,
Σ_{n=1}^∞ eⁿ/n!.
We apply the ratio test to test for absolute convergence.
lim_{n→∞} |a_{n+1}/a_n| = lim_{n→∞} e^{n+1} n!/(eⁿ (n + 1)!) = lim_{n→∞} e/(n + 1) = 0
The series is absolutely convergent.
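Since the ratios e/(n + 1) decay to zero, the partial sums converge rapidly. A short sketch (assuming the standard identity Σ_{n≥0} xⁿ/n! = e^x, so the sum here is e^e − 1; the truncation at 60 terms is an arbitrary choice):

from math import e, exp, factorial

partial = sum(e**n / factorial(n) for n in range(1, 60))
print(partial)       # about 14.1540
print(exp(e) - 1.0)  # the exact value e**e - 1, for comparison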
Example 12.1.7 Consider the series,
Σ_{n=1}^∞ 1/n²,
which we know to be absolutely convergent. We apply the ratio test.
lim_{n→∞} |a_{n+1}/a_n| = lim_{n→∞} (1/(n + 1)²)/(1/n²)
= lim_{n→∞} n²/(n² + 2n + 1)
= lim_{n→∞} 1/(1 + 2/n + 1/n²)
= 1
The test fails to predict the absolute convergence of the series.
The Root Test.
Result 12.1.4 The series Σ a_n converges absolutely if
lim_{n→∞} |a_n|^{1/n} < 1.
If the limit is greater than unity, then the series diverges. If the limit is unity, the test fails. More generally, we can test that
lim sup |a_n|^{1/n} < 1.
If the limit is greater than unity, then the terms in the series do not vanish as n → ∞. This implies that the sum does not converge. If the limit is less than unity, then there exists some N such that
|a_n|^{1/n} ≤ r < 1 for all n ≥ N.
We bound the tail of the series of |a_n|.
Σ_{n=N}^∞ |a_n| = Σ_{n=N}^∞ (|a_n|^{1/n})ⁿ ≤ Σ_{n=N}^∞ rⁿ = r^N/(1 − r)
Thus Σ_{n=0}^∞ a_n is absolutely convergent.
Example 12.1.8 Consider the series
Σ_{n=0}^∞ n^a bⁿ,
where a and b are real constants. We use the root test to check for absolute convergence.
lim_{n→∞} |n^a bⁿ|^{1/n} < 1
|b| lim_{n→∞} n^{a/n} < 1
|b| exp( lim_{n→∞} (a ln n)/n ) < 1
|b| e⁰ < 1
|b| < 1
Thus we see that the series converges absolutely for |b| < 1. Note that the value of a does not affect the absolute convergence.
Example 12.1.9 Consider the absolutely convergent series,
Σ_{n=1}^∞ 1/n².
We apply the root test.
lim_{n→∞} |a_n|^{1/n} = lim_{n→∞} (1/n²)^{1/n}
= lim_{n→∞} n^{−2/n}
= lim_{n→∞} e^{−(2/n) ln n}
= e⁰
= 1
It fails to predict the convergence of the series.
Raabe’s Test
Result 12.1.5 The series an converges absolutely if
lim
n→∞
n 1 −
an+1
an
> 1.
If the limit is less than unity, then the series diverges or converges conditionally.
If the limit is unity, the test fails.
Gauss’ Test
Result 12.1.6 Consider the series an. If
an+1
an
= 1 −
L
n
+
bn
n2
where bn is bounded then the series converges absolutely if L > 1. Otherwise
the series diverges or converges conditionally.
12.2 Uniform Convergence
Continuous Functions. A function f(z) is continuous in a closed domain if, given any ε > 0, there exists a δ > 0 such that |f(z) − f(ζ)| < ε for all |z − ζ| < δ in the domain.
An equivalent definition is that f(z) is continuous in a closed domain if
lim_{ζ→z} f(ζ) = f(z)
for all z in the domain.
Convergence. Consider a series in which the terms are functions of z, Σ_{n=0}^∞ a_n(z). The series is convergent in a domain if the series converges for each point z in the domain. We can then define the function f(z) = Σ_{n=0}^∞ a_n(z). We can state the convergence criterion as: For any given ε > 0 there exists a function N(z) such that
|f(z) − S_{N(z)}(z)| = |f(z) − Σ_{n=0}^{N(z)−1} a_n(z)| < ε
for all z in the domain. Note that the rate of convergence, i.e. the number of terms, N(z), required for the absolute error to be less than ε, is a function of z.
Uniform Convergence. Consider a series Σ_{n=0}^∞ a_n(z) that is convergent in some domain. If the rate of convergence is independent of z then the series is said to be uniformly convergent. Stating this a little more mathematically, the series is uniformly convergent in the domain if for any given ε > 0 there exists an N, independent of z, such that
|f(z) − S_N(z)| = |f(z) − Σ_{n=0}^{N−1} a_n(z)| < ε
for all z in the domain.
12.2.1 Tests for Uniform Convergence
Weierstrass M-test. The Weierstrass M-test is useful in determining if a series is uniformly convergent. The series Σ_{n=0}^∞ a_n(z) is uniformly and absolutely convergent in a domain if there exists a convergent series of positive terms Σ_{n=0}^∞ M_n such that |a_n(z)| ≤ M_n for all z in the domain. This condition first implies that the series is absolutely convergent for all z in the domain. The condition |a_n(z)| ≤ M_n also ensures that the rate of convergence is independent of z, which is the criterion for uniform convergence.
Note that absolute convergence and uniform convergence are independent. A series of functions may be absolutely convergent without being uniformly convergent or vice versa. The Weierstrass M-test is a sufficient but not a necessary condition for uniform convergence. The Weierstrass M-test can succeed only if the series is uniformly and absolutely convergent.
Example 12.2.1 The series
f(x) = Σ_{n=1}^∞ sin x/(n(n + 1))
is uniformly and absolutely convergent for all real x because |sin x/(n(n + 1))| < 1/n² and Σ_{n=1}^∞ 1/n² converges.
Dirichlet Test. Consider a sequence of monotone decreasing, positive constants c_n with limit zero. If all the partial sums of a_n(z) are bounded in some closed domain, that is
|Σ_{n=1}^N a_n(z)| < constant
for all N, then Σ_{n=1}^∞ c_n a_n(z) is uniformly convergent in that closed domain. Note that the Dirichlet test does not imply that the series is absolutely convergent.
Example 12.2.2 Consider the series,
Σ_{n=1}^∞ sin(nx)/n.
We cannot use the Weierstrass M-test to determine if the series is uniformly convergent on an interval. While it is easy to bound the terms with |sin(nx)/n| ≤ 1/n, the sum
Σ_{n=1}^∞ 1/n
does not converge. Thus we will try the Dirichlet test. Consider the sum Σ_{n=1}^{N−1} sin(nx). This sum can be evaluated in closed form. (See Exercise 12.9.)
Σ_{n=1}^{N−1} sin(nx) = { 0 for x = 2πk; (cos(x/2) − cos((N − 1/2)x))/(2 sin(x/2)) for x ≠ 2πk }
The partial sums have infinite discontinuities at x = 2πk, k ∈ ℤ. The partial sums are bounded on any closed interval that does not contain an integer multiple of 2π. By the Dirichlet test, the sum Σ_{n=1}^∞ sin(nx)/n is uniformly convergent on any such closed interval. The series may not be uniformly convergent in neighborhoods of x = 2kπ.
12.2.2 Uniform Convergence and Continuous Functions.
Consider a series f(z) = Σ_{n=1}^∞ a_n(z) that is uniformly convergent in some domain and whose terms a_n(z) are continuous functions. Since the series is uniformly convergent, for any given ε > 0 there exists an N such that |R_N| < ε for all z in the domain.
Since the finite sum S_N is continuous, for that ε there exists a δ > 0 such that |S_N(z) − S_N(ζ)| < ε for all ζ in the domain satisfying |z − ζ| < δ.
We combine these two results to show that f(z) is continuous.
|f(z) − f(ζ)| = |S_N(z) + R_N(z) − S_N(ζ) − R_N(ζ)|
≤ |S_N(z) − S_N(ζ)| + |R_N(z)| + |R_N(ζ)|
< 3ε for |z − ζ| < δ
Result 12.2.1 A uniformly convergent series of continuous terms represents a continuous function.
Example 12.2.3 Again consider Σ_{n=1}^∞ sin(nx)/n. In Example 12.2.2 we showed that the convergence is uniform in any closed interval that does not contain an integer multiple of 2π. In Figure 12.2 is a plot of the first 10 and then 50 terms in the series and finally the function to which the series converges. We see that the function has jump discontinuities at x = 2kπ and is continuous on any closed interval not containing one of those points.
[Figure 12.2: Ten, Fifty and all the Terms of Σ_{n=1}^∞ sin(nx)/n.]
12.3 Uniformly Convergent Power Series
Power Series. Power series are series of the form
Σ_{n=0}^∞ a_n(z − z0)ⁿ.
Domain of Convergence of a Power Series. Consider the series Σ_{n=0}^∞ a_n zⁿ. Let the series converge at some point z0. Then |a_n z0ⁿ| is bounded by some constant A for all n, so
|a_n zⁿ| = |a_n z0ⁿ| |z/z0|ⁿ < A |z/z0|ⁿ.
This comparison test shows that the series converges absolutely for all z satisfying |z| < |z0|.
Suppose that the series diverges at some point z1. Then the series could not converge for any |z| > |z1| since this would imply convergence at z1. Thus there exists some circle in the z plane such that the power series converges absolutely inside the circle and diverges outside the circle.
Result 12.3.1 The domain of convergence of a power series is a circle in the complex plane.
Radius of Convergence of Power Series. Consider a power series
f(z) = Σ_{n=0}^∞ a_n zⁿ.
Applying the ratio test, we see that the series converges if
lim_{n→∞} |a_{n+1} z^{n+1}|/|a_n zⁿ| < 1
lim_{n→∞} (|a_{n+1}|/|a_n|) |z| < 1
|z| < lim_{n→∞} |a_n|/|a_{n+1}|
Result 12.3.2 Ratio formula. The radius of convergence of the power series Σ_{n=0}^∞ a_n zⁿ is
R = lim_{n→∞} |a_n|/|a_{n+1}|
when the limit exists.
Result 12.3.3 Cauchy-Hadamard formula. The radius of convergence of the power series Σ_{n=0}^∞ a_n zⁿ is
R = 1/(lim sup |a_n|^{1/n}).
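A small illustration of the ratio formula (a sketch; the coefficient sequences below are arbitrary examples, not from the text):

from fractions import Fraction

# For a_n = n (the series sum n z^n), |a_n / a_{n+1}| = n/(n+1) -> R = 1.
# For a_n = 1/n! (the exponential series), |a_n / a_{n+1}| = n+1 -> R = infinity.
for n in (10, 100, 1000):
    print(n, float(Fraction(n, n + 1)), n + 1)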
Absolute Convergence of Power Series. Consider a power series
f(z) = Σ_{n=0}^∞ a_n zⁿ
that converges for z = z0. Let M be the value of the greatest term, |a_n z0ⁿ|. Consider any point z such that |z| < |z0|. We can bound the residual of Σ_{n=0}^∞ |a_n zⁿ|,
R_N(z) = Σ_{n=N}^∞ |a_n zⁿ|
= Σ_{n=N}^∞ |a_n zⁿ/(a_n z0ⁿ)| |a_n z0ⁿ|
≤ M Σ_{n=N}^∞ |z/z0|ⁿ
Since |z/z0| < 1, this is a convergent geometric series.
= M |z/z0|^N 1/(1 − |z/z0|)
→ 0 as N → ∞
Thus the power series is absolutely convergent for |z| < |z0|.
Result 12.3.4 If the power series Σ_{n=0}^∞ a_n zⁿ converges for z = z0, then the series converges absolutely for |z| < |z0|.
Example 12.3.1 Find the radii of convergence of the following series.
1. Σ_{n=1}^∞ n zⁿ
2. Σ_{n=1}^∞ n! zⁿ
3. Σ_{n=1}^∞ n! z^{n!}
1. We apply the ratio test to determine the radius of convergence.
R = lim_{n→∞} |a_n/a_{n+1}| = lim_{n→∞} n/(n + 1) = 1
The series converges absolutely for |z| < 1.
2. We apply the ratio test to the series.
R = lim_{n→∞} n!/(n + 1)! = lim_{n→∞} 1/(n + 1) = 0
The series has a vanishing radius of convergence. It converges only for z = 0.
3. Again we apply the ratio test to determine the radius of convergence.
lim_{n→∞} |(n + 1)! z^{(n+1)!}|/|n! z^{n!}| < 1
lim_{n→∞} (n + 1)|z|^{(n+1)!−n!} < 1
lim_{n→∞} (n + 1)|z|^{n·n!} < 1
lim_{n→∞} (ln(n + 1) + n·n! ln |z|) < 0
ln |z| < lim_{n→∞} (−ln(n + 1)/(n·n!))
ln |z| < 0
|z| < 1
The series converges absolutely for |z| < 1.
Alternatively we could determine the radius of convergence of the series with the comparison test.
Σ_{n=1}^∞ |n! z^{n!}| ≤ Σ_{n=1}^∞ |n zⁿ|
Σ_{n=1}^∞ n zⁿ has a radius of convergence of 1. Thus the series must have a radius of convergence of at least 1. Note that if |z| > 1 then the terms in the series do not vanish as n → ∞. Thus the series must diverge for all |z| ≥ 1. Again we see that the radius of convergence is 1.
Uniform Convergence of Power Series. Consider a power series Σ_{n=0}^∞ a_n zⁿ that converges in the disk |z| < r0. The sum converges absolutely for z in the closed disk, |z| ≤ r < r0. Since |a_n zⁿ| ≤ |a_n rⁿ| and Σ_{n=0}^∞ |a_n rⁿ| converges, the power series is uniformly convergent in |z| ≤ r < r0.
Result 12.3.5 If the power series Σ_{n=0}^∞ a_n zⁿ converges for |z| < r0 then the series converges uniformly for |z| ≤ r < r0.
Example 12.3.2 Convergence and Uniform Convergence. Consider the series
log(1 − z) = −Σ_{n=1}^∞ zⁿ/n.
This series converges for |z| ≤ 1, z ≠ 1. Is the series uniformly convergent in this domain? The residual after N terms R_N is
R_N(z) = Σ_{n=N+1}^∞ zⁿ/n.
We can get a lower bound on the absolute value of the residual for real, positive z.
|R_N(x)| = Σ_{n=N+1}^∞ xⁿ/n
≥ ∫_{N+1}^∞ (x^α/α) dα
= −Ei((N + 1) ln x)
The exponential integral function, Ei(z), is defined
Ei(z) = −∫_{−z}^∞ (e^{−t}/t) dt.
The exponential integral function is plotted in Figure 12.3. Since Ei(z) diverges as z → 0, by choosing x sufficiently close to 1 the residual can be made arbitrarily large. Thus this series is not uniformly convergent in the domain |z| ≤ 1, z ≠ 1. The series is uniformly convergent for |z| ≤ r < 1.
[Figure 12.3: The Exponential Integral Function.]
Analyticity. Recall that a sufficient condition for the analyticity of a function f(z) in a domain is that ∫_C f(z) dz = 0 for all simple, closed contours in the domain.
Consider a power series f(z) = Σ_{n=0}^∞ a_n zⁿ that is uniformly convergent in |z| ≤ r. If C is any simple, closed contour in the domain then ∫_C f(z) dz exists. Expanding f(z) into a finite series and a residual,
∫_C f(z) dz = ∫_C (S_N(z) + R_N(z)) dz.
Since the series is uniformly convergent, for any given ε > 0 there exists an N such that |R_N| < ε for all z in |z| ≤ r. Let L be the length of the contour C.
|∫_C R_N(z) dz| ≤ εL → 0 as N → ∞
∫_C f(z) dz = lim_{N→∞} ∫_C (Σ_{n=0}^{N−1} a_n zⁿ + R_N(z)) dz
= ∫_C Σ_{n=0}^∞ a_n zⁿ dz
= Σ_{n=0}^∞ a_n ∫_C zⁿ dz
= 0
Thus f(z) is analytic for |z| < r.
Result 12.3.6 A power series is analytic in its domain of uniform convergence.
12.4 Integration and Differentiation of Power Series
Consider a power series f(z) = Σ_{n=0}^∞ a_n zⁿ that is convergent in the disk |z| < r0. Let C be any contour of finite length L lying entirely within the closed domain |z| ≤ r < r0. The integral of f(z) along C is
∫_C f(z) dz = ∫_C (S_N(z) + R_N(z)) dz.
Since the series is uniformly convergent in the closed disk, for any given ε > 0, there exists an N such that
|R_N(z)| < ε for all |z| ≤ r.
We bound the absolute value of the integral of R_N(z).
|∫_C R_N(z) dz| ≤ ∫_C |R_N(z)| dz < εL → 0 as N → ∞
Thus
∫_C f(z) dz = lim_{N→∞} ∫_C Σ_{n=0}^N a_n zⁿ dz
= lim_{N→∞} Σ_{n=0}^N a_n ∫_C zⁿ dz
= Σ_{n=0}^∞ a_n ∫_C zⁿ dz
Result 12.4.1 If C is a contour lying in the domain of uniform convergence of the power series Σ_{n=0}^∞ a_n zⁿ then
∫_C Σ_{n=0}^∞ a_n zⁿ dz = Σ_{n=0}^∞ a_n ∫_C zⁿ dz.
In the domain of uniform convergence of a series we can interchange the order of summation and a limit process. That is,
lim_{z→z0} Σ_{n=0}^∞ a_n(z) = Σ_{n=0}^∞ lim_{z→z0} a_n(z).
We can do this because the rate of convergence does not depend on z. Since differentiation is a limit process,
d/dz f(z) = lim_{h→0} (f(z + h) − f(z))/h,
we would expect that we could differentiate a uniformly convergent series.
Since we showed that a uniformly convergent power series is equal to an analytic function, we can differentiate a power series in its domain of uniform convergence.
Result 12.4.2 Power series can be differentiated in their domain of uniform convergence.
d/dz Σ_{n=0}^∞ a_n zⁿ = Σ_{n=0}^∞ (n + 1)a_{n+1} zⁿ.
Example 12.4.1 Differentiating a Series. Consider the series from Example 12.3.2.
log(1 − z) = −Σ_{n=1}^∞ zⁿ/n
We differentiate this to obtain the geometric series.
−1/(1 − z) = −Σ_{n=1}^∞ z^{n−1}
1/(1 − z) = Σ_{n=0}^∞ zⁿ
The geometric series is convergent for |z| < 1 and uniformly convergent for |z| ≤ r < 1. Note that the domain of convergence is different than the series for log(1 − z). The geometric series does not converge anywhere on |z| = 1. However, the domain of uniform convergence has remained the same.
12.5 Taylor Series
Result 12.5.1 Taylor's Theorem. Let f(z) be a function that is single-valued and analytic in |z − z0| < R. For all z in this open disk, f(z) has the convergent Taylor series
f(z) = Σ_{n=0}^∞ (f⁽ⁿ⁾(z0)/n!) (z − z0)ⁿ. (12.1)
We can also write this as
f(z) = Σ_{n=0}^∞ a_n(z − z0)ⁿ, a_n = f⁽ⁿ⁾(z0)/n! = 1/(ı2π) ∫_C f(z)/(z − z0)^{n+1} dz, (12.2)
where C is a simple, positive, closed contour in 0 < |z − z0| < R that goes once around the point z0.
Proof of Taylor's Theorem. Let's see why Result 12.5.1 is true. Consider a function f(z) that is analytic in |z| < R. (Considering z0 = 0 is only trivially more general as we can introduce the change of variables ζ = z − z0.) According to Cauchy's Integral Formula, (Result ??),
f(z) = 1/(ı2π) ∫_C f(ζ)/(ζ − z) dζ, (12.3)
where C is a positive, simple, closed contour in 0 < |ζ − z| < R that goes once around z. We take this contour to be the circle about the origin of radius r where |z| < r < R. (See Figure 12.4.)
[Figure 12.4: Graph of Domain of Convergence and Contour of Integration.]
We expand 1/(ζ − z) in a geometric series,
1/(ζ − z) = (1/ζ)/(1 − z/ζ)
= (1/ζ) Σ_{n=0}^∞ (z/ζ)ⁿ, for |z| < |ζ|
= Σ_{n=0}^∞ zⁿ/ζ^{n+1}, for |z| < |ζ|
We substitute this series into Equation 12.3.
f(z) = 1/(ı2π) ∫_C Σ_{n=0}^∞ f(ζ)zⁿ/ζ^{n+1} dζ
The series converges uniformly so we can interchange integration and summation.
= Σ_{n=0}^∞ (zⁿ/(ı2π)) ∫_C f(ζ)/ζ^{n+1} dζ
Now we have derived Equation 12.2. To obtain Equation 12.1, we apply Cauchy's Integral Formula.
= Σ_{n=0}^∞ (f⁽ⁿ⁾(0)/n!) zⁿ
There is a table of some commonly encountered Taylor series in Appendix H.
Example 12.5.1 Consider the Taylor series expansion of 1/(1 − z) about z = 0. Previously, we showed that this function is the sum of the geometric series Σ_{n=0}^∞ zⁿ and we used the ratio test to show that the series converged absolutely for |z| < 1. Now we find the series using Taylor's theorem. Since the nearest singularity of the function is at z = 1, the radius of convergence of the series is 1. The coefficients in the series are
a_n = (1/n!) [dⁿ/dzⁿ (1/(1 − z))]_{z=0}
= (1/n!) [n!/(1 − z)^{n+1}]_{z=0}
= 1
Thus we have
1/(1 − z) = Σ_{n=0}^∞ zⁿ, for |z| < 1.
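A short symbolic sketch (assuming the SymPy library is available) recovering these coefficients from the formula a_n = f⁽ⁿ⁾(0)/n!:

import sympy as sp

z = sp.symbols('z')
f = 1 / (1 - z)
# a_n = f^(n)(0) / n!; every coefficient of the geometric series is 1.
coeffs = [sp.diff(f, z, n).subs(z, 0) / sp.factorial(n) for n in range(5)]
print(coeffs)  # [1, 1, 1, 1, 1]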
12.5.1 Newton's Binomial Formula.
Result 12.5.2 For all |z| < 1, a complex:
(1 + z)^a = 1 + \binom{a}{1} z + \binom{a}{2} z² + \binom{a}{3} z³ + ···
where
\binom{a}{r} = a(a − 1)(a − 2)···(a − r + 1)/r!.
If a is complex, then the expansion is of the principal branch of (1 + z)^a. We define
\binom{r}{0} = 1, \binom{0}{r} = 0 for r ≠ 0, \binom{0}{0} = 1.
Example 12.5.2 Evaluate lim_{n→∞} (1 + 1/n)ⁿ.
First we expand (1 + 1/n)ⁿ using Newton's binomial formula.
lim_{n→∞} (1 + 1/n)ⁿ = lim_{n→∞} ( 1 + \binom{n}{1} (1/n) + \binom{n}{2} (1/n²) + \binom{n}{3} (1/n³) + ··· )
= lim_{n→∞} ( 1 + 1 + n(n − 1)/(2!n²) + n(n − 1)(n − 2)/(3!n³) + ··· )
= 1 + 1 + 1/2! + 1/3! + ···
We recognize this as the Taylor series expansion of e¹.
= e
We can also evaluate the limit using L'Hospital's rule.
ln( lim_{x→∞} (1 + 1/x)^x ) = lim_{x→∞} ln( (1 + 1/x)^x )
= lim_{x→∞} x ln(1 + 1/x)
= lim_{x→∞} ln(1 + 1/x)/(1/x)
= lim_{x→∞} ( (−1/x²)/(1 + 1/x) )/( −1/x² )
= 1
lim_{x→∞} (1 + 1/x)^x = e
Example 12.5.3 Find the Taylor series expansion of 1/(1 + z) about z = 0.
For |z| < 1,
1/(1 + z) = 1 + \binom{−1}{1} z + \binom{−1}{2} z² + \binom{−1}{3} z³ + ···
= 1 + (−1)¹z + (−1)²z² + (−1)³z³ + ···
= 1 − z + z² − z³ + ···
Example 12.5.4 Find the first few terms in the Taylor series expansion of
1/√(z² + 5z + 6)
about the origin.
We factor the denominator and then apply Newton's binomial formula.
1/√(z² + 5z + 6) = (1/√(z + 3)) (1/√(z + 2))
= (1/(√3 √(1 + z/3))) (1/(√2 √(1 + z/2)))
= (1/√6) ( 1 + \binom{−1/2}{1} (z/3) + \binom{−1/2}{2} (z/3)² + ··· ) ( 1 + \binom{−1/2}{1} (z/2) + \binom{−1/2}{2} (z/2)² + ··· )
= (1/√6) (1 − z/6 + z²/24 + ···) (1 − z/4 + 3z²/32 + ···)
= (1/√6) (1 − (5/12)z + (17/96)z² + ···)
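A symbolic sketch (assuming SymPy) confirming the first terms of this expansion:

import sympy as sp

z = sp.symbols('z')
f = 1 / sp.sqrt(z**2 + 5 * z + 6)
# series() returns sqrt(6)/6 - 5*sqrt(6)*z/72 + 17*sqrt(6)*z**2/576 + O(z**3),
# which equals (1/sqrt(6)) (1 - 5z/12 + 17z^2/96 + ...) as derived above.
print(sp.series(f, z, 0, 3))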
12.6 Laurent Series
Result 12.6.1 Let f(z) be single-valued and analytic in the annulus R1 < |z − z0| < R2. For points in the annulus, the function has the convergent Laurent series
f(z) = Σ_{n=−∞}^∞ a_n (z − z0)ⁿ,
where
a_n = 1/(ı2π) ∫_C f(z)/(z − z0)^{n+1} dz
and C is a positively oriented, closed contour around z0 lying in the annulus.
To derive this result, consider a function f(ζ) that is analytic in the annulus R1 < |ζ| < R2. Consider any point z in the annulus. Let C1 be a circle of radius r1 with R1 < r1 < |z|. Let C2 be a circle of radius r2 with |z| < r2 < R2. Let Cz be a circle around z, lying entirely between C1 and C2. (See Figure 12.5 for an illustration.)
Consider the integral of f(ζ)/(ζ − z) around the C2 contour. Since the only singularities of f(ζ)/(ζ − z) occur at ζ = z and at points outside the annulus,
∫_{C2} f(ζ)/(ζ − z) dζ = ∫_{Cz} f(ζ)/(ζ − z) dζ + ∫_{C1} f(ζ)/(ζ − z) dζ.
By Cauchy's Integral Formula, the integral around Cz is
∫_{Cz} f(ζ)/(ζ − z) dζ = ı2πf(z).
This gives us an expression for f(z).
f(z) = 1/(ı2π) ∫_{C2} f(ζ)/(ζ − z) dζ − 1/(ı2π) ∫_{C1} f(ζ)/(ζ − z) dζ (12.4)
On the C2 contour, |z| < |ζ|. Thus
1/(ζ − z) = (1/ζ)/(1 − z/ζ)
= (1/ζ) Σ_{n=0}^∞ (z/ζ)ⁿ, for |z| < |ζ|
= Σ_{n=0}^∞ zⁿ/ζ^{n+1}, for |z| < |ζ|
On the C1 contour, |ζ| < |z|. Thus
−1/(ζ − z) = (1/z)/(1 − ζ/z)
= (1/z) Σ_{n=0}^∞ (ζ/z)ⁿ, for |ζ| < |z|
= Σ_{n=0}^∞ ζⁿ/z^{n+1}, for |ζ| < |z|
= Σ_{n=−∞}^{−1} zⁿ/ζ^{n+1}, for |ζ| < |z|
We substitute these geometric series into Equation 12.4.
f(z) = 1/(ı2π) ∫_{C2} Σ_{n=0}^∞ f(ζ)zⁿ/ζ^{n+1} dζ + 1/(ı2π) ∫_{C1} Σ_{n=−∞}^{−1} f(ζ)zⁿ/ζ^{n+1} dζ
Since the sums converge uniformly, we can interchange the order of integration and summation.
f(z) = 1/(ı2π) Σ_{n=0}^∞ ∫_{C2} f(ζ)zⁿ/ζ^{n+1} dζ + 1/(ı2π) Σ_{n=−∞}^{−1} ∫_{C1} f(ζ)zⁿ/ζ^{n+1} dζ
Since the only singularities of the integrands lie outside of the annulus, the C1 and C2 contours can be deformed to any positive, closed contour C that lies in the annulus and encloses the origin. (See Figure 12.5.) Finally, we combine the two integrals to obtain the desired result.
f(z) = Σ_{n=−∞}^∞ ( 1/(ı2π) ∫_C f(ζ)/ζ^{n+1} dζ ) zⁿ
For the case of arbitrary z0, simply make the transformation z → z − z0.
Example 12.6.1 Find the Laurent series expansions of 1/(1 + z).
For |z| < 1,
1/(1 + z) = 1 + \binom{−1}{1} z + \binom{−1}{2} z² + \binom{−1}{3} z³ + ···
= 1 + (−1)¹z + (−1)²z² + (−1)³z³ + ···
= 1 − z + z² − z³ + ···
For |z| > 1,
1/(1 + z) = (1/z)/(1 + 1/z)
= (1/z) ( 1 + \binom{−1}{1} z⁻¹ + \binom{−1}{2} z⁻² + ··· )
= z⁻¹ − z⁻² + z⁻³ − ···
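A symbolic sketch (assuming SymPy) of both expansions; the |z| > 1 series is obtained by expanding in w = 1/z:

import sympy as sp

z, w = sp.symbols('z w')
print(sp.series(1 / (1 + z), z, 0, 4))      # 1 - z + z**2 - z**3 + O(z**4), valid for |z| < 1
# For |z| > 1, substitute z = 1/w and expand about w = 0:
print(sp.series(1 / (1 + 1 / w), w, 0, 4))  # w - w**2 + w**3 + O(w**4), i.e. 1/z - 1/z**2 + 1/z**3 - ...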
[Figure 12.5: Contours for a Laurent Expansion in an Annulus.]
12.7 Exercises
12.7.1 Series of Constants
Exercise 12.1
Show that if Σ a_n converges then lim_{n→∞} a_n = 0. That is, lim_{n→∞} a_n = 0 is a necessary condition for the convergence of the series.
Hint, Solution
Exercise 12.2
Answer the following questions true or false. Justify your answers.
1. There exists a sequence which converges to both 1 and −1.
2. There exists a sequence {a_n} such that a_n > 1 for all n and lim_{n→∞} a_n = 1.
3. There exists a divergent geometric series whose terms converge.
4. There exists a sequence whose even terms are greater than 1, whose odd terms are less than 1 and that converges to 1.
5. There exists a divergent series of non-negative terms, Σ_{n=0}^∞ a_n, such that a_n < (1/2)ⁿ.
6. There exists a convergent sequence, {a_n}, such that lim_{n→∞}(a_{n+1} − a_n) = 0.
7. There exists a divergent sequence, {a_n}, such that lim_{n→∞} |a_n| = 2.
8. There exist divergent series, Σ a_n and Σ b_n, such that Σ(a_n + b_n) is convergent.
9. There exist 2 different series of nonzero terms that have the same sum.
10. There exists a series of nonzero terms that converges to zero.
11. There exists a series with an infinite number of non-real terms which converges to a real number.
12. There exists a convergent series Σ a_n with lim_{n→∞} |a_{n+1}/a_n| = 1.
13. There exists a divergent series Σ a_n with lim_{n→∞} |a_{n+1}/a_n| = 1.
14. There exists a convergent series Σ a_n with lim_{n→∞} |a_n|^{1/n} = 1.
15. There exists a divergent series Σ a_n with lim_{n→∞} |a_n|^{1/n} = 1.
16. There exists a convergent series of non-negative terms, Σ a_n, for which Σ a_n² diverges.
17. There exists a convergent series of non-negative terms, Σ a_n, for which Σ √a_n diverges.
18. There exists a convergent series, Σ a_n, for which Σ |a_n| diverges.
19. There exists a power series Σ a_n(z − z0)ⁿ which converges for z = 0 and z = 3 but diverges for z = 2.
20. There exists a power series Σ a_n(z − z0)ⁿ which converges for z = 0 and z = ı2 but diverges for z = 2.
Hint, Solution
Exercise 12.3
Determine if the following series converge.
1. Σ_{n=2}^∞ 1/(n ln(n))
2. Σ_{n=2}^∞ 1/ln(nⁿ)
3. Σ_{n=2}^∞ ln( (ln n)^{1/n} )
4. Σ_{n=10}^∞ 1/(n(ln n)(ln(ln n)))
5. Σ_{n=1}^∞ ln(2ⁿ)/(ln(3ⁿ) + 1)
6. Σ_{n=0}^∞ 1/ln(n + 20)
7. Σ_{n=0}^∞ (4ⁿ + 1)/(3ⁿ − 2)
8. Σ_{n=0}^∞ (Log_π 2)ⁿ
9. Σ_{n=2}^∞ (n² − 1)/(n⁴ − 1)
10. Σ_{n=2}^∞ n²/(ln n)ⁿ
11. Σ_{n=2}^∞ (−1)ⁿ ln(1/n)
12. Σ_{n=2}^∞ (n!)²/(2n)!
13. Σ_{n=2}^∞ (3ⁿ + 4ⁿ + 5)/(5ⁿ − 4ⁿ − 3)
14. Σ_{n=2}^∞ n!/(ln n)ⁿ
15. Σ_{n=2}^∞ eⁿ/ln(n!)
16. Σ_{n=1}^∞ (n!)²/(n²)!
17. Σ_{n=1}^∞ (n⁸ + 4n⁴ + 8)/(3n⁹ − n⁵ + 9n)
18. Σ_{n=1}^∞ (1/n − 1/(n + 1))
19. Σ_{n=1}^∞ cos(nπ)/n
20. Σ_{n=2}^∞ ln n/n^{11/10}
Hint, Solution
Exercise 12.4 (mathematica/fcv/series/constants.nb)
Show that the alternating harmonic series,
Σ_{n=1}^∞ (−1)^{n+1}/n = 1 − 1/2 + 1/3 − 1/4 + ···,
is convergent.
Hint, Solution
Exercise 12.5 (mathematica/fcv/series/constants.nb)
Show that the series
Σ_{n=1}^∞ 1/n
is divergent with the Cauchy convergence criterion.
Hint, Solution
Exercise 12.6
The alternating harmonic series has the sum:
Σ_{n=1}^∞ (−1)^{n+1}/n = ln(2).
Show that the terms in this series can be rearranged to sum to π.
Hint, Solution
Exercise 12.7 (mathematica/fcv/series/constants.nb)
Is the series,
Σ_{n=1}^∞ n!/nⁿ,
convergent?
Hint, Solution
Exercise 12.8
Show that the harmonic series,
Σ_{n=1}^∞ 1/n^α = 1 + 1/2^α + 1/3^α + ···,
converges for α > 1 and diverges for α ≤ 1.
Hint, Solution
Exercise 12.9
Evaluate Σ_{n=1}^{N−1} sin(nx).
Hint, Solution
Exercise 12.10
Evaluate
Σ_{k=1}^n k z^k and Σ_{k=1}^n k² z^k
for z ≠ 1.
Hint, Solution
Exercise 12.11
Which of the following series converge? Find the sum of those that do.
1. 1/2 + 1/6 + 1/12 + 1/20 + ···
2. 1 + (−1) + 1 + (−1) + ···
3. Σ_{n=1}^∞ 1/(2^{n−1} 3ⁿ 5^{n+1})
Hint, Solution
Exercise 12.12
Evaluate the following sum.
Σ_{k₁=0}^∞ Σ_{k₂=k₁}^∞ ··· Σ_{k_n=k_{n−1}}^∞ 1/2^{k_n}
Hint, Solution
12.7.2 Uniform Convergence
12.7.3 Uniformly Convergent Power Series
Exercise 12.13
Determine the domain of convergence of the following series.
1. Σ_{n=0}^∞ zⁿ/(z + 3)ⁿ
2. Σ_{n=2}^∞ Log z/ln n
3. Σ_{n=1}^∞ z/n
4. Σ_{n=1}^∞ (z + 2)²/n²
5. Σ_{n=1}^∞ (z − e)ⁿ/nⁿ
6. Σ_{n=1}^∞ z^{2n}/2^{nz}
7. Σ_{n=0}^∞ z^{n!}/(n!)²
8. Σ_{n=0}^∞ z^{ln(n!)}/n!
9. Σ_{n=0}^∞ (z − π)^{2n+1} n^π/n!
10. Σ_{n=0}^∞ ln n/zⁿ
Hint, Solution
Exercise 12.14
Find the circle of convergence of the following series.
1. z + (α − β) z²/2! + (α − β)(α − 2β) z³/3! + (α − β)(α − 2β)(α − 3β) z⁴/4! + ···
2. Σ_{n=1}^∞ (n/2ⁿ)(z − ı)ⁿ
3. Σ_{n=1}^∞ nⁿ zⁿ
4. Σ_{n=1}^∞ (n!/nⁿ) zⁿ
5. Σ_{n=1}^∞ (3 + (−1)ⁿ)ⁿ zⁿ
6. Σ_{n=1}^∞ (n + αⁿ) zⁿ (|α| > 1)
Hint, Solution
Exercise 12.15
Find the circle of convergence of the following series:
1. Σ_{k=0}^∞ k z^k
2. Σ_{k=1}^∞ k^k z^k
3. Σ_{k=1}^∞ (k!/k^k) z^k
4. Σ_{k=0}^∞ (z + ı5)^{2k}/(k + 1)²
5. Σ_{k=0}^∞ (k + 2^k) z^k
Hint, Solution
12.7.4 Integration and Differentiation of Power Series
Exercise 12.16
Using the geometric series, show that
1/(1 − z)² = Σ_{n=0}^∞ (n + 1)zⁿ, for |z| < 1,
and
log(1 − z) = −Σ_{n=1}^∞ zⁿ/n, for |z| < 1.
Hint, Solution
12.7.5 Taylor Series
Exercise 12.17
Find the Taylor series of 1/(1 + z²) about z = 0. Determine the radius of convergence of the Taylor series from the singularities of the function. Determine the radius of convergence with the ratio test.
Hint, Solution
Exercise 12.18
Use two methods to find the Taylor series expansion of log(1 + z) about z = 0 and determine the circle of convergence. First directly apply Taylor's theorem, then differentiate a geometric series.
Hint, Solution
Exercise 12.19
Let f(z) = (1 + z)^α be the branch for which f(0) = 1. Find its Taylor series expansion about z = 0. What is the radius of convergence of the series? (α is an arbitrary complex number.)
Hint, Solution
Exercise 12.20
Find the Taylor series expansions about the point z = 1 for the following functions. What are the radii of convergence?
1. 1/z
2. Log z
3. 1/z²
4. z Log z − z
Hint, Solution
Exercise 12.21
Find the Taylor series expansion about the point z = 0 for e^z. What is the radius of convergence? Use this to find the Taylor series expansions of cos z and sin z about z = 0.
Hint, Solution
Exercise 12.22
Find the Taylor series expansion about the point z = π for the cosine and sine.
Hint, Solution
Exercise 12.23
Sum the following series.
1. Σ_{n=0}^∞ (ln 2)ⁿ/n!
2. Σ_{n=0}^∞ (n + 1)(n + 2)/2ⁿ
3. Σ_{n=0}^∞ (−1)ⁿ/n!
4. Σ_{n=0}^∞ (−1)ⁿ π^{2n+1}/(2n + 1)!
5. Σ_{n=0}^∞ (−1)ⁿ π^{2n}/(2n)!
6. Σ_{n=0}^∞ (−π)ⁿ/(2n)!
Hint, Solution
Exercise 12.24
1. Find the first three terms in the following Taylor series and state the convergence properties for the following.
(a) e^{−z} around z0 = 0
(b) (1 + z)/(1 − z) around z0 = ı
(c) e^z/(z − 1) around z0 = 0
It may be convenient to use the Cauchy product of two Taylor series.
2. Consider a function f(z) analytic for |z − z0| < R. Show that the series obtained by differentiating the Taylor series for f(z) termwise is actually the Taylor series for f′(z) and hence argue that this series converges uniformly to f′(z) for |z − z0| ≤ ρ < R.
3. Find the Taylor series for
1/(1 − z)³
by appropriate differentiation of the geometric series and state the radius of convergence.
4. Consider the branch of f(z) = (z + 1)^ı corresponding to f(0) = 1. Find the Taylor series expansion about z0 = 0 and state the radius of convergence.
Hint, Solution
12.7.6 Laurent Series
Exercise 12.25
Find the Laurent series about z = 0 of 1/(z − ı) for |z| < 1 and |z| > 1.
Hint, Solution
Exercise 12.26
Obtain the Laurent expansion of
f(z) = 1/((z + 1)(z + 2))
centered on z = 0 for the three regions:
1. |z| < 1
2. 1 < |z| < 2
3. 2 < |z|
Hint, Solution
Exercise 12.27
By comparing the Laurent expansion of (z + 1/z)^m, m ∈ ℤ⁺, with the binomial expansion of this quantity, show that
∫_0^{2π} (cos θ)^m cos(nθ) dθ = { (π/2^{m−1}) \binom{m}{(m−n)/2} for −m ≤ n ≤ m and m − n even; 0 otherwise }
Hint, Solution
Exercise 12.28
The function f(z) is analytic in the entire z-plane, including ∞, except at the point z = ı/2, where it has a simple pole, and at z = 2, where it has a pole of order 2. In addition
∫_{|z|=1} f(z) dz = ı2π, ∫_{|z|=3} f(z) dz = 0, ∫_{|z|=3} (z − 1)f(z) dz = 0.
Find f(z) and its complete Laurent expansion about z = 0.
Hint, Solution
Exercise 12.29
Let f(z) = Σ_{k=1}^∞ k³ (z/3)^k. Compute each of the following, giving justification in each case. The contours are circles of radius one about the origin.
1. ∫_{|z|=1} e^{ız} f(z) dz
2. ∫_{|z|=1} f(z)/z⁴ dz
3. ∫_{|z|=1} f(z) e^z/z² dz
Hint, Solution
Exercise 12.30
1. Expand f(z) = 1/(z(1 − z)) in Laurent series that converge in the following domains:
(a) 0 < |z| < 1
(b) |z| > 1
(c) |z + 1| > 2
2. Without determining the series, specify the region of convergence for a Laurent series representing f(z) = 1/(z⁴ + 4) in powers of z − 1 that converges at z = ı.
Hint, Solution
12.8 Hints
Hint 12.1
Use the Cauchy convergence criterion for series. In particular, consider |S_{N+1} − S_N|.
Hint 12.2
CONTINUE
Hint 12.3
1. Σ_{n=2}^∞ 1/(n ln(n)): Use the integral test.
2. Σ_{n=2}^∞ 1/ln(nⁿ): Simplify the summand.
3. Σ_{n=2}^∞ ln( (ln n)^{1/n} ): Simplify the summand. Use the comparison test.
4. Σ_{n=10}^∞ 1/(n(ln n)(ln(ln n))): Use the integral test.
5. Σ_{n=1}^∞ ln(2ⁿ)/(ln(3ⁿ) + 1): Show that the terms in the sum do not vanish as n → ∞.
6. Σ_{n=0}^∞ 1/ln(n + 20): Shift the indices.
7. Σ_{n=0}^∞ (4ⁿ + 1)/(3ⁿ − 2): Show that the terms in the sum do not vanish as n → ∞.
8. Σ_{n=0}^∞ (Log_π 2)ⁿ: This is a geometric series.
9. Σ_{n=2}^∞ (n² − 1)/(n⁴ − 1): Simplify the summand. Use the comparison test.
10. Σ_{n=2}^∞ n²/(ln n)ⁿ: Compare to a geometric series.
11. Σ_{n=2}^∞ (−1)ⁿ ln(1/n): Group pairs of consecutive terms to obtain a series of positive terms.
12. Σ_{n=2}^∞ (n!)²/(2n)!: Use the comparison test.
13. Σ_{n=2}^∞ (3ⁿ + 4ⁿ + 5)/(5ⁿ − 4ⁿ − 3): Use the root test.
14. Σ_{n=2}^∞ n!/(ln n)ⁿ: Show that the terms do not vanish as n → ∞.
15. Σ_{n=2}^∞ eⁿ/ln(n!): Show that the terms do not vanish as n → ∞.
16. Σ_{n=1}^∞ (n!)²/(n²)!: Apply the ratio test.
17. Σ_{n=1}^∞ (n⁸ + 4n⁴ + 8)/(3n⁹ − n⁵ + 9n): Use the comparison test.
18. Σ_{n=1}^∞ (1/n − 1/(n + 1)): Use the comparison test.
19. Σ_{n=1}^∞ cos(nπ)/n: Simplify the summand.
20. Σ_{n=2}^∞ ln n/n^{11/10}: Use the integral test.
Hint 12.4
Group the terms.
1 − 1/2 = 1/2
1/3 − 1/4 = 1/12
1/5 − 1/6 = 1/30
···
Hint 12.5
Show that
|S_{2n} − S_n| > 1/2.
Hint 12.6
The alternating harmonic series is conditionally convergent. Let {a_n} and {b_n} be the positive and negative terms in the sum, respectively, ordered in decreasing magnitude. Note that both Σ_{n=1}^∞ a_n and Σ_{n=1}^∞ b_n are divergent. Devise a method for alternately taking terms from {a_n} and {b_n}.
Hint 12.7
Use the ratio test.
Hint 12.8
Use the integral test.
Hint 12.9
Note that sin(nx) = ℑ(e^{ınx}). This substitution will yield a finite geometric series.
Hint 12.10
Let S_n be the sum. Consider S_n − zS_n. Use the finite geometric sum.
Hint 12.11
1. The summand is a rational function of the summation index. Find the first few partial sums.
2.
3. This is a geometric series.
Hint 12.12
CONTINUE
Hint 12.13
CONTINUE
1. Σ_{n=0}^∞ zⁿ/(z + 3)ⁿ
2. Σ_{n=2}^∞ Log z/ln n
3. Σ_{n=1}^∞ z/n
4. Σ_{n=1}^∞ (z + 2)²/n²
5. Σ_{n=1}^∞ (z − e)ⁿ/nⁿ
6. Σ_{n=1}^∞ z^{2n}/2^{nz}
7. Σ_{n=0}^∞ z^{n!}/(n!)²
8. Σ_{n=0}^∞ z^{ln(n!)}/n!
9. Σ_{n=0}^∞ (z − π)^{2n+1} n^π/n!
10. Σ_{n=0}^∞ ln n/zⁿ
Hint 12.14
Hint 12.15
CONTINUE
Hint 12.16
Differentiate the geometric series. Integrate the geometric series.
Hint 12.17
The Taylor series is a geometric series.
Hint 12.18
Hint 12.19
Hint 12.20
1. 1/z = 1/(1 + (z − 1)). The right side is the sum of a geometric series.
2. Integrate the series for 1/z.
3. Differentiate the series for 1/z.
4. Integrate the series for Log z.
Hint 12.21
Evaluate the derivatives of e^z at z = 0. Use Taylor's Theorem. Write the cosine and sine in terms of the exponential function.
Hint 12.22
cos z = −cos(z − π)
sin z = −sin(z − π)
Hint 12.23
CONTINUE
Hint 12.24
CONTINUE
Hint 12.25
Hint 12.26
Hint 12.27
Hint 12.28
Hint 12.29
Hint 12.30
CONTINUE
12.9 Solutions
Solution 12.1
Σ_{n=0}^∞ a_n converges only if the partial sums, S_n, are a Cauchy sequence.
∀ ε > 0 ∃ N s.t. m, n > N ⇒ |S_m − S_n| < ε.
In particular, we can consider m = n + 1.
∀ ε > 0 ∃ N s.t. n > N ⇒ |S_{n+1} − S_n| < ε
Now we note that S_{n+1} − S_n = a_n.
∀ ε > 0 ∃ N s.t. n > N ⇒ |a_n| < ε
This is exactly the statement that lim_{n→∞} a_n = 0. Thus we see that lim_{n→∞} a_n = 0 is a necessary condition for the convergence of the series Σ_{n=0}^∞ a_n.
Solution 12.2
CONTINUE
Solution 12.3
1. Σ_{n=2}^∞ 1/(n ln(n))
Since this is a series of positive, monotone decreasing terms, the sum converges or diverges with the integral,
∫_2^∞ 1/(x ln x) dx = ∫_{ln 2}^∞ (1/ξ) dξ.
Since the integral diverges, the series also diverges.
2. Σ_{n=2}^∞ 1/ln(nⁿ) = Σ_{n=2}^∞ 1/(n ln(n))
The sum diverges, by part 1.
3. Σ_{n=2}^∞ ln( (ln n)^{1/n} ) = Σ_{n=2}^∞ (1/n) ln(ln n)
For n ≥ 16 we have ln(ln n) > 1, so the terms eventually dominate those of Σ_{n=2}^∞ 1/n. The sum is divergent by the comparison test.
4. Σ_{n=10}^∞ 1/(n(ln n)(ln(ln n)))
Since this is a series of positive, monotone decreasing terms, the sum converges or diverges with the integral,
∫_{10}^∞ 1/(x ln x ln(ln x)) dx = ∫_{ln(10)}^∞ 1/(y ln y) dy = ∫_{ln(ln(10))}^∞ (1/z) dz
Since the integral diverges, the series also diverges.
5. Σ_{n=1}^∞ ln(2ⁿ)/(ln(3ⁿ) + 1) = Σ_{n=1}^∞ n ln 2/(n ln 3 + 1) = Σ_{n=1}^∞ ln 2/(ln 3 + 1/n)
Since the terms in the sum do not vanish as n → ∞, the series is divergent.
6. Σ_{n=0}^∞ 1/ln(n + 20) = Σ_{n=20}^∞ 1/ln n
The series diverges.
7. Σ_{n=0}^∞ (4ⁿ + 1)/(3ⁿ − 2)
Since the terms in the sum do not vanish as n → ∞, the series is divergent.
8. Σ_{n=0}^∞ (Log_π 2)ⁿ
This is a geometric series. Since |Log_π 2| < 1, the series converges.
9. Σ_{n=2}^∞ (n² − 1)/(n⁴ − 1) = Σ_{n=2}^∞ 1/(n² + 1) < Σ_{n=2}^∞ 1/n²
The series converges by comparison to the harmonic series.
10. Σ_{n=2}^∞ n²/(ln n)ⁿ = Σ_{n=2}^∞ (n^{2/n}/ln n)ⁿ
Since n^{2/n} → 1 as n → ∞, n^{2/n}/ln n → 0 as n → ∞. The series converges by comparison to a geometric series.
11. We group pairs of consecutive terms to obtain a series of positive terms.
Σ_{n=2}^∞ (−1)ⁿ ln(1/n) = Σ_{n=1}^∞ ( ln(1/(2n)) − ln(1/(2n + 1)) ) = Σ_{n=1}^∞ ln((2n + 1)/(2n))
The series on the right side diverges by comparison to the harmonic series, since ln(1 + 1/(2n)) > 1/(2n + 1).
12. Σ_{n=2}^∞ (n!)²/(2n)! = Σ_{n=2}^∞ (1)(2)···n/((n + 1)(n + 2)···(2n)) < Σ_{n=2}^∞ 1/2ⁿ
The series converges by comparison with a geometric series.
13. Σ_{n=2}^∞ (3ⁿ + 4ⁿ + 5)/(5ⁿ − 4ⁿ − 3)
We use the root test to check for convergence.
lim_{n→∞} |a_n|^{1/n} = lim_{n→∞} ( (3ⁿ + 4ⁿ + 5)/(5ⁿ − 4ⁿ − 3) )^{1/n}
= lim_{n→∞} (4/5) ( ((3/4)ⁿ + 1 + 5/4ⁿ)/(1 − (4/5)ⁿ − 3/5ⁿ) )^{1/n}
= 4/5
< 1
We see that the series is absolutely convergent.
14. We will use the comparison test.
Σ_{n=2}^∞ n!/(ln n)ⁿ > Σ_{n=2}^∞ (n/2)^{n/2}/(ln n)ⁿ = Σ_{n=2}^∞ ( √(n/2)/ln n )ⁿ
Since the terms in the series on the right side do not vanish as n → ∞, the series is divergent.
15. We will use the comparison test.
Σ_{n=2}^∞ eⁿ/ln(n!) > Σ_{n=2}^∞ eⁿ/ln(nⁿ) = Σ_{n=2}^∞ eⁿ/(n ln(n))
Since the terms in the series on the right side do not vanish as n → ∞, the series is divergent.
16. Σ_{n=1}^∞ (n!)²/(n²)!
We apply the ratio test.
lim_{n→∞} |a_{n+1}/a_n| = lim_{n→∞} ((n + 1)!)² (n²)!/( ((n + 1)²)! (n!)² )
= lim_{n→∞} (n + 1)² (n²)!/((n + 1)²)!
≤ lim_{n→∞} (n + 1)²/(2n + 1)!
= 0
The series is convergent.
17. Σ_{n=1}^∞ (n⁸ + 4n⁴ + 8)/(3n⁹ − n⁵ + 9n) = Σ_{n=1}^∞ (1/n) (1 + 4n⁻⁴ + 8n⁻⁸)/(3 − n⁻⁴ + 9n⁻⁸) > (1/4) Σ_{n=1}^∞ 1/n
We see that the series is divergent by comparison to the harmonic series.
18. Σ_{n=1}^∞ (1/n − 1/(n + 1)) = Σ_{n=1}^∞ 1/(n² + n) < Σ_{n=1}^∞ 1/n²
The series converges by the comparison test.
19. Σ_{n=1}^∞ cos(nπ)/n = Σ_{n=1}^∞ (−1)ⁿ/n
We recognize this as the alternating harmonic series, which is conditionally convergent.
20. Σ_{n=2}^∞ ln n/n^{11/10}
Since this is a series of positive, monotone decreasing terms, the sum converges or diverges with the integral,
∫_2^∞ (ln x/x^{11/10}) dx = ∫_{ln 2}^∞ y e^{−y/10} dy
Since the integral is convergent, so is the series.
Solution 12.4
Σ_{n=1}^∞ (−1)^{n+1}/n = Σ_{n=1}^∞ ( 1/(2n − 1) − 1/(2n) )
= Σ_{n=1}^∞ 1/((2n − 1)(2n))
< Σ_{n=1}^∞ 1/n²
= π²/6
Thus the series is convergent.
Solution 12.5
Since
|S_{2n} − S_n| = Σ_{j=n}^{2n−1} 1/j ≥ Σ_{j=n}^{2n−1} 1/(2n − 1) = n/(2n − 1) > 1/2,
the series does not satisfy the Cauchy convergence criterion.
Solution 12.6
The alternating harmonic series is conditionally convergent. That is, the sum is convergent but not absolutely convergent. Let {a_n} and {b_n} be the positive and negative terms in the sum, respectively, ordered in decreasing magnitude. Note that both Σ_{n=1}^∞ a_n and Σ_{n=1}^∞ b_n are divergent. Otherwise the alternating harmonic series would be absolutely convergent.
To sum the terms in the series to π we repeat the following two steps indefinitely:
1. Take terms from {a_n} until the sum is greater than π.
2. Take terms from {b_n} until the sum is less than π.
Each of these steps can always be accomplished because the sums, Σ_{n=1}^∞ a_n and Σ_{n=1}^∞ b_n, are both divergent. Hence the tails of the series are divergent. No matter how many terms we take, the remaining terms in each series are divergent. In each step a finite, nonzero number of terms from the respective series is taken. Thus all the terms will be used. Since the terms in each series vanish as n → ∞, the running sum converges to π.
Solution 12.7
Applying the ratio test,
lim_{n→∞} |a_{n+1}/a_n| = lim_{n→∞} (n + 1)! nⁿ/(n! (n + 1)^{n+1})
= lim_{n→∞} nⁿ/(n + 1)ⁿ
= lim_{n→∞} ( n/(n + 1) )ⁿ
= 1/e
< 1,
we see that the series is absolutely convergent.
Solution 12.8
The harmonic series,
Σ_{n=1}^∞ 1/n^α = 1 + 1/2^α + 1/3^α + ···,
converges or diverges absolutely with the integral,
∫_1^∞ 1/|x^α| dx = ∫_1^∞ 1/x^{ℜ(α)} dx = { [ln x]_1^∞ for ℜ(α) = 1; [x^{1−ℜ(α)}/(1 − ℜ(α))]_1^∞ for ℜ(α) ≠ 1. }
The integral converges only for ℜ(α) > 1. Thus the harmonic series converges absolutely for ℜ(α) > 1 and diverges absolutely for ℜ(α) ≤ 1.
Solution 12.9
Σ_{n=1}^{N−1} sin(nx) = Σ_{n=0}^{N−1} sin(nx)
= ℑ( Σ_{n=0}^{N−1} e^{ınx} )
= ℑ( Σ_{n=0}^{N−1} (e^{ıx})ⁿ )
= { ℑ(N) for x = 2πk; ℑ( (1 − e^{ıNx})/(1 − e^{ıx}) ) for x ≠ 2πk }
= { 0 for x = 2πk; ℑ( (e^{−ıx/2} − e^{ı(N−1/2)x})/(e^{−ıx/2} − e^{ıx/2}) ) for x ≠ 2πk }
= { 0 for x = 2πk; ℑ( (e^{−ıx/2} − e^{ı(N−1/2)x})/(−ı2 sin(x/2)) ) for x ≠ 2πk }
= { 0 for x = 2πk; ℜ( e^{−ıx/2} − e^{ı(N−1/2)x} )/(2 sin(x/2)) for x ≠ 2πk }
Σ_{n=1}^{N−1} sin(nx) = { 0 for x = 2πk; (cos(x/2) − cos((N − 1/2)x))/(2 sin(x/2)) for x ≠ 2πk }
Solution 12.10
Let
S_n = Σ_{k=1}^n k z^k.
S_n − zS_n = Σ_{k=1}^n k z^k − Σ_{k=1}^n k z^{k+1}
= Σ_{k=1}^n k z^k − Σ_{k=2}^{n+1} (k − 1)z^k
= Σ_{k=1}^n z^k − n z^{n+1}
= (z − z^{n+1})/(1 − z) − n z^{n+1}
Σ_{k=1}^n k z^k = z(1 − (n + 1)zⁿ + n z^{n+1})/(1 − z)²
Let
S_n = Σ_{k=1}^n k² z^k.
S_n − zS_n = Σ_{k=1}^n (k² − (k − 1)²)z^k − n² z^{n+1}
= 2 Σ_{k=1}^n k z^k − Σ_{k=1}^n z^k − n² z^{n+1}
= 2 z(1 − (n + 1)zⁿ + n z^{n+1})/(1 − z)² − (z − z^{n+1})/(1 − z) − n² z^{n+1}
Σ_{k=1}^n k² z^k = z(1 + z − zⁿ(1 + z + n(n(z − 1) − 2)(z − 1)))/(1 − z)³
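A numerical spot-check of the first closed form (a sketch; the sample values of z and n are arbitrary choices):

z, n = 0.3 + 0.4j, 25
direct = sum(k * z**k for k in range(1, n + 1))
closed = z * (1 - (n + 1) * z**n + n * z**(n + 1)) / (1 - z)**2
print(abs(direct - closed))  # agreement to roundoff, on the order of 1e-16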
Solution 12.11
1.
Σ_{n=1}^∞ a_n = 1/2 + 1/6 + 1/12 + 1/20 + ···
We conjecture that the terms in the sum are rational functions of the summation index. That is, a_n = 1/p(n) where p(n) is a polynomial. We use divided differences to determine the order of the polynomial.
2 6 12 20
4 6 8
2 2
We see that the polynomial is second order. p(n) = an² + bn + c. We solve for the coefficients.
a + b + c = 2
4a + 2b + c = 6
9a + 3b + c = 12
p(n) = n² + n
We examine the first few partial sums.
S₁ = 1/2
S₂ = 2/3
S₃ = 3/4
S₄ = 4/5
We conjecture that S_n = n/(n + 1). We prove this with induction. The base case is n = 1. S₁ = 1/(1 + 1) = 1/2. Now we assume the induction hypothesis and calculate S_{n+1}.
S_{n+1} = S_n + a_{n+1}
= n/(n + 1) + 1/((n + 1)² + (n + 1))
= (n + 1)/(n + 2)
This proves the induction hypothesis. We calculate the limit of the partial sums to evaluate the series.
Σ_{n=1}^∞ 1/(n² + n) = lim_{n→∞} n/(n + 1)
Σ_{n=1}^∞ 1/(n² + n) = 1
2.
Σ_{n=0}^∞ (−1)ⁿ = 1 + (−1) + 1 + (−1) + ···
Since the terms in the series do not vanish as n → ∞, the series is divergent.
3. We can directly sum this geometric series.
Σ_{n=1}^∞ 1/(2^{n−1} 3ⁿ 5^{n+1}) = (1/75) · 1/(1 − 1/30) = 2/145
CONTINUE
Solution 12.12
The innermost sum is a geometric series.
Σ_{k_n=k_{n−1}}^∞ 1/2^{k_n} = (1/2^{k_{n−1}}) · 1/(1 − 1/2) = 2^{1−k_{n−1}}
This gives us a relationship between n nested sums and n − 1 nested sums.
Σ_{k₁=0}^∞ Σ_{k₂=k₁}^∞ ··· Σ_{k_n=k_{n−1}}^∞ 1/2^{k_n} = 2 Σ_{k₁=0}^∞ Σ_{k₂=k₁}^∞ ··· Σ_{k_{n−1}=k_{n−2}}^∞ 1/2^{k_{n−1}}
We evaluate the n nested sums by induction.
Σ_{k₁=0}^∞ Σ_{k₂=k₁}^∞ ··· Σ_{k_n=k_{n−1}}^∞ 1/2^{k_n} = 2ⁿ
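A brute-force sketch of the base cases n = 1 and n = 2 (plain Python; the truncation K = 60 is an arbitrary cutoff whose tail is negligible):

K = 60  # truncation of the infinite sums
single = sum(2.0**(-k1) for k1 in range(K))                               # n = 1
double = sum(sum(2.0**(-k2) for k2 in range(k1, K)) for k1 in range(K))   # n = 2
print(single, double)  # -> 2.0, 4.0, matching 2**1 and 2**2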
Solution 12.13
CONTINUE.
1. Σ_{n=0}^∞ zⁿ/(z + 3)ⁿ
2. Σ_{n=2}^∞ Log z/ln n
3. Σ_{n=1}^∞ z/n
4. Σ_{n=1}^∞ (z + 2)²/n²
5. Σ_{n=1}^∞ (z − e)ⁿ/nⁿ
6. Σ_{n=1}^∞ z^{2n}/2^{nz}
7. Σ_{n=0}^∞ z^{n!}/(n!)²
8. Σ_{n=0}^∞ z^{ln(n!)}/n!
9. Σ_{n=0}^∞ (z − π)^{2n+1} n^π/n!
10. Σ_{n=0}^∞ ln n/zⁿ
Solution 12.14
1. We assume that $\beta \neq 0$. We determine the radius of convergence with the ratio test.
\begin{align*}
R &= \lim_{n\to\infty} \left| \frac{a_n}{a_{n+1}} \right| \\
&= \lim_{n\to\infty} \left| \frac{(\alpha-\beta)\cdots(\alpha-(n-1)\beta)/n!}{(\alpha-\beta)\cdots(\alpha-n\beta)/(n+1)!} \right| \\
&= \lim_{n\to\infty} \left| \frac{n+1}{\alpha - n\beta} \right| \\
&= \frac{1}{|\beta|}
\end{align*}
The series converges absolutely for $|z| < 1/|\beta|$.
2. By the ratio test formula, the radius of absolute convergence is
\[
R = \lim_{n\to\infty} \left| \frac{n/2^n}{(n+1)/2^{n+1}} \right| = 2 \lim_{n\to\infty} \frac{n}{n+1} = 2
\]
By the root test formula, the radius of absolute convergence is
\[
R = \frac{1}{\lim_{n\to\infty} \sqrt[n]{|n/2^n|}} = \frac{2}{\lim_{n\to\infty} \sqrt[n]{n}} = 2
\]
The series converges absolutely for $|z - ı| < 2$.
3. We determine the radius of convergence with the Cauchy-Hadamard formula.
\[
R = \frac{1}{\limsup \sqrt[n]{|a_n|}} = \frac{1}{\limsup \sqrt[n]{|n^n|}} = \frac{1}{\limsup n} = 0
\]
The series converges only for $z = 0$.
4. By the ratio test formula, the radius of absolute convergence is
\begin{align*}
R &= \lim_{n\to\infty} \frac{n!/n^n}{(n+1)!/(n+1)^{n+1}} \\
&= \lim_{n\to\infty} \frac{(n+1)^n}{n^n} \\
&= \lim_{n\to\infty} \left( \frac{n+1}{n} \right)^n \\
&= \exp\left( \lim_{n\to\infty} \ln \left( \frac{n+1}{n} \right)^n \right) \\
&= \exp\left( \lim_{n\to\infty} n \ln \frac{n+1}{n} \right) \\
&= \exp\left( \lim_{n\to\infty} \frac{\ln(n+1) - \ln(n)}{1/n} \right) \\
&= \exp\left( \lim_{n\to\infty} \frac{1/(n+1) - 1/n}{-1/n^2} \right) \\
&= \exp\left( \lim_{n\to\infty} \frac{n}{n+1} \right) \\
&= e^1
\end{align*}
The series converges absolutely in the circle $|z| < e$.
5. By the Cauchy-Hadamard formula, the radius of absolute convergence is
\[
R = \frac{1}{\limsup \sqrt[n]{\left| (3 + (-1)^n)^n \right|}} = \frac{1}{\limsup\,(3 + (-1)^n)} = \frac{1}{4}
\]
Thus the series converges absolutely for $|z| < 1/4$.
6. By the Cauchy-Hadamard formula, the radius of absolute convergence is
\[
R = \frac{1}{\limsup \sqrt[n]{|n + \alpha^n|}} = \frac{1}{\limsup |\alpha| \sqrt[n]{|1 + n/\alpha^n|}} = \frac{1}{|\alpha|}
\]
Thus the sum converges absolutely for $|z| < 1/|\alpha|$.
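The limit in part 4 can also be checked numerically. The sketch below assumes plain Python; it uses the log-gamma function so that $n!/n^n$ never underflows.
\begin{verbatim}
import math

# Part 4: with a_n = n!/n^n, the ratio a_n/a_{n+1} should approach e.
def log_ratio(n):
    # log(a_n) - log(a_{n+1}), via lgamma to avoid overflow/underflow
    log_a = lambda m: math.lgamma(m + 1) - m * math.log(m)
    return log_a(n) - log_a(n + 1)

for n in (10, 100, 10000):
    print(n, math.exp(log_ratio(n)))  # tends to e = 2.71828...
\end{verbatim}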
Solution 12.15
1.
\[ \sum_{k=0}^{\infty} k z^k \]
We determine the radius of convergence with the ratio formula.
\[
R = \lim_{k\to\infty} \frac{k}{k+1} = \lim_{k\to\infty} \frac{1}{1} = 1
\]
The series converges absolutely for $|z| < 1$.
2.
\[ \sum_{k=1}^{\infty} k^k z^k \]
We determine the radius of convergence with the Cauchy-Hadamard formula.
\[
R = \frac{1}{\limsup \sqrt[k]{|k^k|}} = \frac{1}{\limsup k} = 0
\]
The series converges only for $z = 0$.
3.
\[ \sum_{k=1}^{\infty} \frac{k!}{k^k} z^k \]
We determine the radius of convergence with the ratio formula.
\begin{align*}
R &= \lim_{k\to\infty} \frac{k!/k^k}{(k+1)!/(k+1)^{k+1}} \\
&= \lim_{k\to\infty} \frac{(k+1)^k}{k^k} \\
&= \exp\left( \lim_{k\to\infty} k \ln \frac{k+1}{k} \right) \\
&= \exp\left( \lim_{k\to\infty} \frac{\ln(k+1) - \ln(k)}{1/k} \right) \\
&= \exp\left( \lim_{k\to\infty} \frac{1/(k+1) - 1/k}{-1/k^2} \right) \\
&= \exp\left( \lim_{k\to\infty} \frac{k}{k+1} \right) \\
&= \exp(1) = e
\end{align*}
The series converges absolutely for $|z| < e$.
4.
\[ \sum_{k=0}^{\infty} \frac{(z+ı5)^{2k}}{(k+1)^2} \]
We use the ratio formula to determine the domain of convergence.
\begin{align*}
\lim_{k\to\infty} \left| \frac{(z+ı5)^{2(k+1)}/(k+2)^2}{(z+ı5)^{2k}/(k+1)^2} \right| &< 1 \\
|z+ı5|^2 \lim_{k\to\infty} \frac{(k+1)^2}{(k+2)^2} &< 1 \\
|z+ı5|^2 \lim_{k\to\infty} \frac{2(k+1)}{2(k+2)} &< 1 \\
|z+ı5|^2 \lim_{k\to\infty} \frac{2}{2} &< 1 \\
|z+ı5|^2 &< 1
\end{align*}
Thus the series converges absolutely for $|z+ı5| < 1$.
5.
\[ \sum_{k=0}^{\infty} \left(k + 2^k\right) z^k \]
We determine the radius of convergence with the Cauchy-Hadamard formula.
\[
R = \frac{1}{\limsup \sqrt[k]{|k + 2^k|}} = \frac{1}{\limsup 2 \sqrt[k]{|1 + k/2^k|}} = \frac{1}{2}
\]
The series converges for $|z| < 1/2$.
Solution 12.16
The geometric series is
\[
\frac{1}{1-z} = \sum_{n=0}^{\infty} z^n.
\]
This series is uniformly convergent in the domain $|z| \leq r < 1$. Differentiating this equation yields
\[
\frac{1}{(1-z)^2} = \sum_{n=1}^{\infty} n z^{n-1} = \sum_{n=0}^{\infty} (n+1) z^n \quad \text{for } |z| < 1.
\]
Integrating the geometric series yields
\[
-\log(1-z) = \sum_{n=0}^{\infty} \frac{z^{n+1}}{n+1},
\]
\[
\log(1-z) = -\sum_{n=1}^{\infty} \frac{z^n}{n}, \quad \text{for } |z| < 1.
\]
Solution 12.17
\[
\frac{1}{1+z^2} = \sum_{n=0}^{\infty} \left(-z^2\right)^n = \sum_{n=0}^{\infty} (-1)^n z^{2n}
\]
The function $\frac{1}{1+z^2} = \frac{1}{(1-ız)(1+ız)}$ has singularities at $z = \pm ı$. Thus the radius of convergence is 1. Now we use the ratio test to corroborate that the radius of convergence is 1.
\begin{align*}
\lim_{n\to\infty} \left| \frac{a_{n+1}(z)}{a_n(z)} \right| &< 1 \\
\lim_{n\to\infty} \left| \frac{(-1)^{n+1} z^{2(n+1)}}{(-1)^n z^{2n}} \right| &< 1 \\
\lim_{n\to\infty} \left| z^2 \right| &< 1 \\
|z| &< 1
\end{align*}
Solution 12.18
Method 1.
\begin{align*}
\log(1+z) &= \left[\log(1+z)\right]_{z=0} + \left[ \frac{d}{dz} \log(1+z) \right]_{z=0} \frac{z}{1!} + \left[ \frac{d^2}{dz^2} \log(1+z) \right]_{z=0} \frac{z^2}{2!} + \cdots \\
&= 0 + \left[ \frac{1}{1+z} \right]_{z=0} \frac{z}{1!} + \left[ \frac{-1}{(1+z)^2} \right]_{z=0} \frac{z^2}{2!} + \left[ \frac{2}{(1+z)^3} \right]_{z=0} \frac{z^3}{3!} + \cdots \\
&= z - \frac{z^2}{2} + \frac{z^3}{3} - \frac{z^4}{4} + \cdots \\
&= \sum_{n=1}^{\infty} (-1)^{n+1} \frac{z^n}{n}
\end{align*}
Since the nearest singularity of $\log(1+z)$ is at $z = -1$, the radius of convergence is 1.
Method 2. We know the geometric series converges for $|z| < 1$.
\[
\frac{1}{1+z} = \sum_{n=0}^{\infty} (-1)^n z^n
\]
We integrate this equation to get the series for $\log(1+z)$ in the domain $|z| < 1$.
\[
\log(1+z) = \sum_{n=0}^{\infty} (-1)^n \frac{z^{n+1}}{n+1} = \sum_{n=1}^{\infty} (-1)^{n+1} \frac{z^n}{n}
\]
We calculate the radius of convergence with the ratio test.
\[
R = \lim_{n\to\infty} \left| \frac{a_n}{a_{n+1}} \right| = \lim_{n\to\infty} \left| \frac{-(n+1)}{n} \right| = 1
\]
Thus the series converges absolutely for $|z| < 1$.
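One can corroborate both the expansion and its convergence inside the unit circle numerically; the sketch below assumes Python with sympy.
\begin{verbatim}
import math
import sympy as sp

z = sp.symbols('z')
# Taylor series of log(1+z) about z = 0: z - z^2/2 + z^3/3 - ...
print(sp.series(sp.log(1 + z), z, 0, 6))

# Partial sums converge for |z| < 1; test at z = 1/2.
approx = sum((-1) ** (n + 1) * 0.5 ** n / n for n in range(1, 40))
assert abs(approx - math.log(1.5)) < 1e-12
\end{verbatim}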
Solution 12.19
The Taylor series expansion of $f(z)$ about $z = 0$ is
\[
f(z) = \sum_{n=0}^{\infty} \frac{f^{(n)}(0)}{n!} z^n.
\]
The derivatives of $f(z)$ are
\[
f^{(n)}(z) = \left( \prod_{k=0}^{n-1} (\alpha - k) \right) (1+z)^{\alpha-n}.
\]
Thus $f^{(n)}(0)$ is
\[
f^{(n)}(0) = \prod_{k=0}^{n-1} (\alpha - k).
\]
If $\alpha = m$ is a non-negative integer, then only the first $m+1$ terms are nonzero. The Taylor series is a polynomial and the series has an infinite radius of convergence.
\[
(1+z)^m = \sum_{n=0}^{m} \frac{\prod_{k=0}^{n-1} (\alpha - k)}{n!} z^n
\]
If $\alpha$ is not a non-negative integer, then all of the terms in the series are non-zero.
\[
(1+z)^{\alpha} = \sum_{n=0}^{\infty} \frac{\prod_{k=0}^{n-1} (\alpha - k)}{n!} z^n
\]
The radius of convergence of the series is the distance to the nearest singularity of $(1+z)^{\alpha}$. This occurs at $z = -1$. Thus the series converges for $|z| < 1$. We can corroborate this with the ratio test. The radius of convergence is
\[
R = \lim_{n\to\infty} \left| \frac{\left(\prod_{k=0}^{n-1}(\alpha-k)\right)/n!}{\left(\prod_{k=0}^{n}(\alpha-k)\right)/(n+1)!} \right| = \lim_{n\to\infty} \left| \frac{n+1}{\alpha-n} \right| = 1.
\]
If we use the binomial coefficient, we can write the series in a compact form.
\[
\binom{\alpha}{n} \equiv \frac{\prod_{k=0}^{n-1}(\alpha-k)}{n!}
\]
\[
(1+z)^{\alpha} = \sum_{n=0}^{\infty} \binom{\alpha}{n} z^n
\]
Solution 12.20
1. We find the series for $1/z$ by writing it in terms of $z - 1$ and using the geometric series.
\[
\frac{1}{z} = \frac{1}{1 + (z-1)}
\]
\[
\frac{1}{z} = \sum_{n=0}^{\infty} (-1)^n (z-1)^n \quad \text{for } |z-1| < 1
\]
Since the nearest singularity is at $z = 0$, the radius of convergence is 1. The series converges absolutely for $|z-1| < 1$. We could also determine the radius of convergence with the Cauchy-Hadamard formula.
\[
R = \frac{1}{\limsup \sqrt[n]{|a_n|}} = \frac{1}{\limsup \sqrt[n]{|(-1)^n|}} = 1
\]
2. We integrate $1/\zeta$ from 1 to $z$ for $z$ in the circle $|z-1| < 1$.
\[
\int_1^z \frac{1}{\zeta}\,d\zeta = \left[\operatorname{Log} \zeta\right]_1^z = \operatorname{Log} z
\]
The series we derived for $1/z$ is uniformly convergent for $|z-1| \leq r < 1$. We can integrate the series in this domain.
\begin{align*}
\operatorname{Log} z &= \int_1^z \sum_{n=0}^{\infty} (-1)^n (\zeta-1)^n\,d\zeta \\
&= \sum_{n=0}^{\infty} (-1)^n \int_1^z (\zeta-1)^n\,d\zeta \\
&= \sum_{n=0}^{\infty} (-1)^n \frac{(z-1)^{n+1}}{n+1}
\end{align*}
\[
\operatorname{Log} z = \sum_{n=1}^{\infty} (-1)^{n-1} \frac{(z-1)^n}{n} \quad \text{for } |z-1| < 1
\]
3. The series we derived for $1/z$ is uniformly convergent for $|z-1| \leq r < 1$. We can differentiate the series in this domain.
\begin{align*}
\frac{1}{z^2} &= -\frac{d}{dz}\frac{1}{z} = -\frac{d}{dz} \sum_{n=0}^{\infty} (-1)^n (z-1)^n \\
&= \sum_{n=1}^{\infty} (-1)^{n+1} n (z-1)^{n-1}
\end{align*}
\[
\frac{1}{z^2} = \sum_{n=0}^{\infty} (-1)^n (n+1)(z-1)^n \quad \text{for } |z-1| < 1
\]
4. We integrate $\operatorname{Log} \zeta$ from 1 to $z$ for $z$ in the circle $|z-1| < 1$.
\[
\int_1^z \operatorname{Log} \zeta\,d\zeta = \left[\zeta \operatorname{Log} \zeta - \zeta\right]_1^z = z \operatorname{Log} z - z + 1
\]
The series we derived for $\operatorname{Log} z$ is uniformly convergent for $|z-1| \leq r < 1$. We can integrate the series in this domain.
\begin{align*}
z \operatorname{Log} z - z &= -1 + \int_1^z \operatorname{Log} \zeta\,d\zeta \\
&= -1 + \int_1^z \sum_{n=1}^{\infty} (-1)^{n-1} \frac{(\zeta-1)^n}{n}\,d\zeta \\
&= -1 + \sum_{n=1}^{\infty} (-1)^{n-1} \frac{(z-1)^{n+1}}{n(n+1)}
\end{align*}
\[
z \operatorname{Log} z - z = -1 + \sum_{n=2}^{\infty} (-1)^n \frac{(z-1)^n}{n(n-1)} \quad \text{for } |z-1| < 1
\]
Solution 12.21
We evaluate the derivatives of $e^z$ at $z = 0$. Then we use Taylor's Theorem.
\[
\frac{d^n}{dz^n} e^z = e^z, \qquad \left[ \frac{d^n}{dz^n} e^z \right]_{z=0} = 1
\]
\[
e^z = \sum_{n=0}^{\infty} \frac{z^n}{n!}
\]
Since the exponential function has no singularities in the finite complex plane, the radius of convergence is infinite.
We find the Taylor series for the cosine and sine by writing them in terms of the exponential function.
\begin{align*}
\cos z &= \frac{e^{ız} + e^{-ız}}{2} \\
&= \frac{1}{2} \left( \sum_{n=0}^{\infty} \frac{(ız)^n}{n!} + \sum_{n=0}^{\infty} \frac{(-ız)^n}{n!} \right) \\
&= \sum_{\substack{n=0 \\ \text{even } n}}^{\infty} \frac{(ız)^n}{n!}
\end{align*}
\[
\cos z = \sum_{n=0}^{\infty} (-1)^n \frac{z^{2n}}{(2n)!}
\]
\begin{align*}
\sin z &= \frac{e^{ız} - e^{-ız}}{ı2} \\
&= \frac{1}{ı2} \left( \sum_{n=0}^{\infty} \frac{(ız)^n}{n!} - \sum_{n=0}^{\infty} \frac{(-ız)^n}{n!} \right) \\
&= -ı \sum_{\substack{n=0 \\ \text{odd } n}}^{\infty} \frac{(ız)^n}{n!}
\end{align*}
\[
\sin z = \sum_{n=0}^{\infty} (-1)^n \frac{z^{2n+1}}{(2n+1)!}
\]
Solution 12.22
\begin{align*}
\cos z &= -\cos(z-\pi) \\
&= -\sum_{n=0}^{\infty} (-1)^n \frac{(z-\pi)^{2n}}{(2n)!} \\
&= \sum_{n=0}^{\infty} (-1)^{n+1} \frac{(z-\pi)^{2n}}{(2n)!}
\end{align*}
\begin{align*}
\sin z &= -\sin(z-\pi) \\
&= -\sum_{n=0}^{\infty} (-1)^n \frac{(z-\pi)^{2n+1}}{(2n+1)!} \\
&= \sum_{n=0}^{\infty} (-1)^{n+1} \frac{(z-\pi)^{2n+1}}{(2n+1)!}
\end{align*}
Solution 12.23
CONTINUE
Solution 12.24
1. (a)
\[
f(z) = e^{-z}, \qquad f(0) = 1, \qquad f'(0) = -1, \qquad f''(0) = 1
\]
\[
e^{-z} = 1 - z + \frac{z^2}{2} + O\left(z^3\right)
\]
Since $e^{-z}$ is entire, the Taylor series converges in the entire complex plane.
(b)
\begin{align*}
f(z) &= \frac{1+z}{1-z}, & f(ı) &= ı \\
f'(z) &= \frac{2}{(1-z)^2}, & f'(ı) &= ı \\
f''(z) &= \frac{4}{(1-z)^3}, & f''(ı) &= -1+ı
\end{align*}
\[
\frac{1+z}{1-z} = ı + ı(z-ı) + \frac{-1+ı}{2}(z-ı)^2 + O\left((z-ı)^3\right)
\]
Since the nearest singularity, (at $z = 1$), is a distance of $\sqrt{2}$ from $z_0 = ı$, the radius of convergence is $\sqrt{2}$. The series converges absolutely for $|z-ı| < \sqrt{2}$.
(c)
\begin{align*}
\frac{e^z}{z-1} &= -\left(1 + z + \frac{z^2}{2} + O\left(z^3\right)\right)\left(1 + z + z^2 + O\left(z^3\right)\right) \\
&= -1 - 2z - \frac{5}{2} z^2 + O\left(z^3\right)
\end{align*}
Since the nearest singularity, (at $z = 1$), is a distance of 1 from $z_0 = 0$, the radius of convergence is 1. The series converges absolutely for $|z| < 1$.
2. Since $f(z)$ is analytic in $|z - z_0| < R$, its Taylor series converges absolutely on this domain.
\[
f(z) = \sum_{n=0}^{\infty} \frac{f^{(n)}(z_0)(z-z_0)^n}{n!}
\]
The Taylor series converges uniformly on any closed sub-domain of $|z - z_0| < R$. We consider the sub-domain $|z - z_0| \leq \rho < R$. On the domain of uniform convergence we can interchange differentiation and summation.
\begin{align*}
f'(z) &= \frac{d}{dz} \sum_{n=0}^{\infty} \frac{f^{(n)}(z_0)(z-z_0)^n}{n!} \\
&= \sum_{n=1}^{\infty} \frac{n f^{(n)}(z_0)(z-z_0)^{n-1}}{n!}
\end{align*}
\[
f'(z) = \sum_{n=0}^{\infty} \frac{f^{(n+1)}(z_0)(z-z_0)^n}{n!}
\]
Note that this is the Taylor series that we could obtain directly for $f'(z)$. Since $f(z)$ is analytic on $|z - z_0| < R$ so is $f'(z)$.
3.
\begin{align*}
\frac{1}{(1-z)^3} &= \frac{1}{2}\frac{d^2}{dz^2} \frac{1}{1-z} \\
&= \frac{1}{2}\frac{d^2}{dz^2} \sum_{n=0}^{\infty} z^n \\
&= \frac{1}{2} \sum_{n=2}^{\infty} n(n-1) z^{n-2} \\
&= \frac{1}{2} \sum_{n=0}^{\infty} (n+2)(n+1) z^n
\end{align*}
The radius of convergence is 1, which is the distance to the nearest singularity at $z = 1$.
4. The Taylor series expansion of $f(z)$ about $z = 0$ is
\[
f(z) = \sum_{n=0}^{\infty} \frac{f^{(n)}(0)}{n!} z^n.
\]
We compute the derivatives of $f(z)$.
\[
f^{(n)}(z) = \left( \prod_{k=0}^{n-1} (ı-k) \right) (1+z)^{ı-n}
\]
Now we determine the coefficients in the series.
\[
f^{(n)}(0) = \prod_{k=0}^{n-1} (ı-k)
\]
\[
(1+z)^{ı} = \sum_{n=0}^{\infty} \frac{\prod_{k=0}^{n-1}(ı-k)}{n!} z^n
\]
The radius of convergence of the series is the distance to the nearest singularity of $(1+z)^{ı}$. This occurs at $z = -1$. Thus the series converges for $|z| < 1$. We can corroborate this with the ratio test. We compute the radius of convergence.
\[
R = \lim_{n\to\infty} \left| \frac{\left(\prod_{k=0}^{n-1}(ı-k)\right)/n!}{\left(\prod_{k=0}^{n}(ı-k)\right)/(n+1)!} \right| = \lim_{n\to\infty} \left| \frac{n+1}{ı-n} \right| = 1
\]
If we use the binomial coefficient,
\[
\binom{\alpha}{n} \equiv \frac{\prod_{k=0}^{n-1}(\alpha-k)}{n!},
\]
then we can write the series in a compact form.
\[
(1+z)^{ı} = \sum_{n=0}^{\infty} \binom{ı}{n} z^n
\]
Solution 12.25
For $|z| < 1$:
\[
\frac{1}{z-ı} = \frac{ı}{1+ız} = ı \sum_{n=0}^{\infty} (-ız)^n
\]
(Note that $|z| < 1 \Leftrightarrow |-ız| < 1$.) For $|z| > 1$:
\begin{align*}
\frac{1}{z-ı} &= \frac{1}{z}\,\frac{1}{1 - ı/z} \qquad \text{(Note that } |z| > 1 \Leftrightarrow |ı/z| < 1\text{.)} \\
&= \frac{1}{z} \sum_{n=0}^{\infty} \left( \frac{ı}{z} \right)^n \\
&= \frac{1}{z} \sum_{n=-\infty}^{0} ı^{-n} z^n \\
&= \sum_{n=-\infty}^{0} (-ı)^n z^{n-1} \\
&= \sum_{n=-\infty}^{-1} (-ı)^{n+1} z^n
\end{align*}
Solution 12.26
We expand the function in partial fractions.
\[
f(z) = \frac{1}{(z+1)(z+2)} = \frac{1}{z+1} - \frac{1}{z+2}
\]
The Taylor series about $z = 0$ for $1/(z+1)$ is
\begin{align*}
\frac{1}{1+z} &= \frac{1}{1-(-z)} \\
&= \sum_{n=0}^{\infty} (-z)^n, \quad \text{for } |z| < 1 \\
&= \sum_{n=0}^{\infty} (-1)^n z^n, \quad \text{for } |z| < 1
\end{align*}
The series about $z = \infty$ for $1/(z+1)$ is
\begin{align*}
\frac{1}{1+z} &= \frac{1/z}{1+1/z} \\
&= \frac{1}{z} \sum_{n=0}^{\infty} (-1/z)^n, \quad \text{for } |1/z| < 1 \\
&= \sum_{n=0}^{\infty} (-1)^n z^{-n-1}, \quad \text{for } |z| > 1 \\
&= \sum_{n=-\infty}^{-1} (-1)^{n+1} z^n, \quad \text{for } |z| > 1
\end{align*}
The Taylor series about $z = 0$ for $1/(z+2)$ is
\begin{align*}
\frac{1}{2+z} &= \frac{1/2}{1+z/2} \\
&= \frac{1}{2} \sum_{n=0}^{\infty} (-z/2)^n, \quad \text{for } |z/2| < 1 \\
&= \sum_{n=0}^{\infty} \frac{(-1)^n}{2^{n+1}} z^n, \quad \text{for } |z| < 2
\end{align*}
The series about $z = \infty$ for $1/(z+2)$ is
\begin{align*}
\frac{1}{2+z} &= \frac{1/z}{1+2/z} \\
&= \frac{1}{z} \sum_{n=0}^{\infty} (-2/z)^n, \quad \text{for } |2/z| < 1 \\
&= \sum_{n=0}^{\infty} (-1)^n 2^n z^{-n-1}, \quad \text{for } |z| > 2 \\
&= \sum_{n=-\infty}^{-1} \frac{(-1)^{n+1}}{2^{n+1}} z^n, \quad \text{for } |z| > 2
\end{align*}
To find the expansions in the three regions, we just choose the appropriate series.
1.
\begin{align*}
f(z) &= \frac{1}{1+z} - \frac{1}{2+z} \\
&= \sum_{n=0}^{\infty} (-1)^n z^n - \sum_{n=0}^{\infty} \frac{(-1)^n}{2^{n+1}} z^n, \quad \text{for } |z| < 1 \\
&= \sum_{n=0}^{\infty} (-1)^n \left( 1 - \frac{1}{2^{n+1}} \right) z^n, \quad \text{for } |z| < 1
\end{align*}
\[
f(z) = \sum_{n=0}^{\infty} (-1)^n \frac{2^{n+1}-1}{2^{n+1}} z^n, \quad \text{for } |z| < 1
\]
2.
\[
f(z) = \frac{1}{1+z} - \frac{1}{2+z}
\]
\[
f(z) = \sum_{n=-\infty}^{-1} (-1)^{n+1} z^n - \sum_{n=0}^{\infty} \frac{(-1)^n}{2^{n+1}} z^n, \quad \text{for } 1 < |z| < 2
\]
3.
\begin{align*}
f(z) &= \frac{1}{1+z} - \frac{1}{2+z} \\
&= \sum_{n=-\infty}^{-1} (-1)^{n+1} z^n - \sum_{n=-\infty}^{-1} \frac{(-1)^{n+1}}{2^{n+1}} z^n, \quad \text{for } 2 < |z|
\end{align*}
\[
f(z) = \sum_{n=-\infty}^{-1} (-1)^{n+1} \frac{2^{n+1}-1}{2^{n+1}} z^n, \quad \text{for } 2 < |z|
\]
Solution 12.27
Laurent Series. We assume that $m$ is a non-negative integer and that $n$ is an integer. The Laurent series about the point $z = 0$ of
\[
f(z) = \left( z + \frac{1}{z} \right)^m
\]
is
\[
f(z) = \sum_{n=-\infty}^{\infty} a_n z^n,
\]
where
\[
a_n = \frac{1}{ı2\pi} \oint_C \frac{f(z)}{z^{n+1}}\,dz
\]
and $C$ is a contour going around the origin once in the positive direction. We manipulate the coefficient integral into the desired form.
\begin{align*}
a_n &= \frac{1}{ı2\pi} \oint_C \frac{(z+1/z)^m}{z^{n+1}}\,dz \\
&= \frac{1}{ı2\pi} \int_0^{2\pi} \frac{\left(e^{ıθ} + e^{-ıθ}\right)^m}{e^{ı(n+1)θ}}\,ı e^{ıθ}\,dθ \\
&= \frac{1}{2\pi} \int_0^{2\pi} 2^m \cos^m θ\,e^{-ınθ}\,dθ \\
&= \frac{2^{m-1}}{\pi} \int_0^{2\pi} \cos^m θ\,(\cos(nθ) - ı\sin(nθ))\,dθ
\end{align*}
Note that $\cos^m θ$ is even and $\sin(nθ)$ is odd about $θ = \pi$.
\[
a_n = \frac{2^{m-1}}{\pi} \int_0^{2\pi} \cos^m θ \cos(nθ)\,dθ
\]
Binomial Series. Now we find the binomial series expansion of $f(z)$.
\begin{align*}
\left( z + \frac{1}{z} \right)^m &= \sum_{n=0}^{m} \binom{m}{n} z^{m-n} \left( \frac{1}{z} \right)^n \\
&= \sum_{n=0}^{m} \binom{m}{n} z^{m-2n} \\
&= \sum_{\substack{n=-m \\ m-n \text{ even}}}^{m} \binom{m}{(m-n)/2} z^n
\end{align*}
The coefficients in the series $f(z) = \sum_{n=-\infty}^{\infty} a_n z^n$ are
\[
a_n = \begin{cases} \dbinom{m}{(m-n)/2} & -m \leq n \leq m \text{ and } m-n \text{ even}, \\ 0 & \text{otherwise}. \end{cases}
\]
By equating the coefficients found by the two methods, we evaluate the desired integral.
\[
\int_0^{2\pi} (\cos θ)^m \cos(nθ)\,dθ = \begin{cases} \dfrac{\pi}{2^{m-1}} \dbinom{m}{(m-n)/2} & -m \leq n \leq m \text{ and } m-n \text{ even}, \\ 0 & \text{otherwise}. \end{cases}
\]
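The coefficient identity is easy to spot-check by quadrature; this sketch assumes Python with scipy, which the text does not otherwise use.
\begin{verbatim}
import math
from scipy.integrate import quad

def rhs(m, n):
    # pi/2^(m-1) * binom(m, (m-n)/2) when |n| <= m and m-n even, else 0
    if abs(n) <= m and (m - n) % 2 == 0:
        return math.pi / 2 ** (m - 1) * math.comb(m, (m - n) // 2)
    return 0.0

for m in range(0, 6):
    for n in range(-m - 1, m + 2):
        lhs, _ = quad(lambda t: math.cos(t) ** m * math.cos(n * t),
                      0, 2 * math.pi)
        assert abs(lhs - rhs(m, n)) < 1e-8
print("coefficient identity verified for m = 0..5")
\end{verbatim}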
Solution 12.28
First we write $f(z)$ in the form
\[
f(z) = \frac{g(z)}{(z-ı/2)(z-2)^2}.
\]
$g(z)$ is an entire function which grows no faster than $z^3$ at infinity. By expanding $g(z)$ in a Taylor series about the origin, we see that it is a polynomial of degree no greater than 3.
\[
f(z) = \frac{\alpha z^3 + \beta z^2 + \gamma z + \delta}{(z-ı/2)(z-2)^2}
\]
Since $f(z)$ is a rational function we expand it in partial fractions to obtain a form that is convenient to integrate.
\[
f(z) = \frac{a}{z-ı/2} + \frac{b}{z-2} + \frac{c}{(z-2)^2} + d
\]
We use the value of the integrals of $f(z)$ to determine the constants $a$, $b$, $c$ and $d$.
\[
\oint_{|z|=1} \left( \frac{a}{z-ı/2} + \frac{b}{z-2} + \frac{c}{(z-2)^2} + d \right) dz = ı2\pi
\]
\[
ı2\pi a = ı2\pi
\]
\[
a = 1
\]
\[
\oint_{|z|=3} \left( \frac{1}{z-ı/2} + \frac{b}{z-2} + \frac{c}{(z-2)^2} + d \right) dz = 0
\]
\[
ı2\pi(1+b) = 0
\]
\[
b = -1
\]
Note that by applying the second constraint, we can change the third constraint to
\[
\oint_{|z|=3} z f(z)\,dz = 0.
\]
\[
\oint_{|z|=3} z \left( \frac{1}{z-ı/2} - \frac{1}{z-2} + \frac{c}{(z-2)^2} + d \right) dz = 0
\]
\[
\oint_{|z|=3} \left( \frac{(z-ı/2)+ı/2}{z-ı/2} - \frac{(z-2)+2}{z-2} + \frac{c(z-2)+2c}{(z-2)^2} \right) dz = 0
\]
\[
ı2\pi \left( \frac{ı}{2} - 2 + c \right) = 0
\]
\[
c = 2 - \frac{ı}{2}
\]
Thus we see that the function is
\[
f(z) = \frac{1}{z-ı/2} - \frac{1}{z-2} + \frac{2-ı/2}{(z-2)^2} + d,
\]
where $d$ is an arbitrary constant. We can also write the function in the form:
\[
f(z) = \frac{d z^3 + 15 - ı8}{4(z-ı/2)(z-2)^2}.
\]
Complete Laurent Series. We find the complete Laurent series about $z = 0$ for each of the terms in the partial fraction expansion of $f(z)$.
\begin{align*}
\frac{1}{z-ı/2} &= \frac{ı2}{1+ı2z} \\
&= ı2 \sum_{n=0}^{\infty} (-ı2z)^n, \quad \text{for } |-ı2z| < 1 \\
&= -\sum_{n=0}^{\infty} (-ı2)^{n+1} z^n, \quad \text{for } |z| < 1/2
\end{align*}
\begin{align*}
\frac{1}{z-ı/2} &= \frac{1/z}{1-ı/(2z)} \\
&= \frac{1}{z} \sum_{n=0}^{\infty} \left( \frac{ı}{2z} \right)^n, \quad \text{for } |ı/(2z)| < 1 \\
&= \sum_{n=0}^{\infty} \left( \frac{ı}{2} \right)^n z^{-n-1}, \quad \text{for } |z| > 1/2 \\
&= \sum_{n=-\infty}^{-1} \left( \frac{ı}{2} \right)^{-n-1} z^n, \quad \text{for } |z| > 1/2 \\
&= \sum_{n=-\infty}^{-1} (-ı2)^{n+1} z^n, \quad \text{for } |z| > 1/2
\end{align*}
\begin{align*}
-\frac{1}{z-2} &= \frac{1/2}{1-z/2} \\
&= \frac{1}{2} \sum_{n=0}^{\infty} \left( \frac{z}{2} \right)^n, \quad \text{for } |z/2| < 1 \\
&= \sum_{n=0}^{\infty} \frac{z^n}{2^{n+1}}, \quad \text{for } |z| < 2
\end{align*}
\begin{align*}
-\frac{1}{z-2} &= -\frac{1/z}{1-2/z} \\
&= -\frac{1}{z} \sum_{n=0}^{\infty} \left( \frac{2}{z} \right)^n, \quad \text{for } |2/z| < 1 \\
&= -\sum_{n=0}^{\infty} 2^n z^{-n-1}, \quad \text{for } |z| > 2 \\
&= -\sum_{n=-\infty}^{-1} 2^{-n-1} z^n, \quad \text{for } |z| > 2
\end{align*}
\begin{align*}
\frac{2-ı/2}{(z-2)^2} &= (2-ı/2)\,\frac{1}{4}\,(1-z/2)^{-2} \\
&= \frac{4-ı}{8} \sum_{n=0}^{\infty} \binom{-2}{n} \left( -\frac{z}{2} \right)^n, \quad \text{for } |z/2| < 1 \\
&= \frac{4-ı}{8} \sum_{n=0}^{\infty} (-1)^n (n+1)(-1)^n 2^{-n} z^n, \quad \text{for } |z| < 2 \\
&= \frac{4-ı}{8} \sum_{n=0}^{\infty} \frac{n+1}{2^n} z^n, \quad \text{for } |z| < 2
\end{align*}
\begin{align*}
\frac{2-ı/2}{(z-2)^2} &= \frac{2-ı/2}{z^2} \left( 1 - \frac{2}{z} \right)^{-2} \\
&= \frac{2-ı/2}{z^2} \sum_{n=0}^{\infty} \binom{-2}{n} \left( -\frac{2}{z} \right)^n, \quad \text{for } |2/z| < 1 \\
&= (2-ı/2) \sum_{n=0}^{\infty} (-1)^n (n+1)(-1)^n 2^n z^{-n-2}, \quad \text{for } |z| > 2 \\
&= (2-ı/2) \sum_{n=-\infty}^{-2} (-n-1) 2^{-n-2} z^n, \quad \text{for } |z| > 2 \\
&= -(2-ı/2) \sum_{n=-\infty}^{-2} \frac{n+1}{2^{n+2}} z^n, \quad \text{for } |z| > 2
\end{align*}
We take the appropriate combination of these series to find the Laurent series expansions in the regions: $|z| < 1/2$, $1/2 < |z| < 2$ and $2 < |z|$. For $|z| < 1/2$, we have
\[
f(z) = -\sum_{n=0}^{\infty} (-ı2)^{n+1} z^n + \sum_{n=0}^{\infty} \frac{z^n}{2^{n+1}} + \frac{4-ı}{8} \sum_{n=0}^{\infty} \frac{n+1}{2^n} z^n + d
\]
\[
f(z) = \sum_{n=0}^{\infty} \left( -(-ı2)^{n+1} + \frac{1}{2^{n+1}} + \frac{4-ı}{8}\,\frac{n+1}{2^n} \right) z^n + d
\]
\[
f(z) = \sum_{n=0}^{\infty} \left( -(-ı2)^{n+1} + \frac{1}{2^{n+1}} \left( 1 + \frac{4-ı}{4}(n+1) \right) \right) z^n + d, \quad \text{for } |z| < 1/2
\]
For $1/2 < |z| < 2$, we have
\[
f(z) = \sum_{n=-\infty}^{-1} (-ı2)^{n+1} z^n + \sum_{n=0}^{\infty} \frac{z^n}{2^{n+1}} + \frac{4-ı}{8} \sum_{n=0}^{\infty} \frac{n+1}{2^n} z^n + d
\]
\[
f(z) = \sum_{n=-\infty}^{-1} (-ı2)^{n+1} z^n + \sum_{n=0}^{\infty} \frac{1}{2^{n+1}} \left( 1 + \frac{4-ı}{4}(n+1) \right) z^n + d, \quad \text{for } 1/2 < |z| < 2
\]
For $2 < |z|$, we have
\[
f(z) = \sum_{n=-\infty}^{-1} (-ı2)^{n+1} z^n - \sum_{n=-\infty}^{-1} 2^{-n-1} z^n - (2-ı/2) \sum_{n=-\infty}^{-2} \frac{n+1}{2^{n+2}} z^n + d
\]
\[
f(z) = \sum_{n=-\infty}^{-2} \left( (-ı2)^{n+1} - \frac{1}{2^{n+1}} \left( 1 + (1-ı/4)(n+1) \right) \right) z^n + d, \quad \text{for } 2 < |z|
\]
Solution 12.29
The radius of convergence of the series for $f(z)$ is
\[
R = \lim_{k\to\infty} \left| \frac{k^3/3^k}{(k+1)^3/3^{k+1}} \right| = 3 \lim_{k\to\infty} \frac{k^3}{(k+1)^3} = 3.
\]
Thus $f(z)$ is a function which is analytic inside the circle of radius 3.
1. The integrand is analytic. Thus by Cauchy's theorem the value of the integral is zero.
\[
\oint_{|z|=1} e^{ız} f(z)\,dz = 0
\]
2. We use Cauchy's integral formula to evaluate the integral.
\[
\oint_{|z|=1} \frac{f(z)}{z^4}\,dz = \frac{ı2\pi}{3!} f^{(3)}(0) = \frac{ı2\pi}{3!}\,\frac{3!\,3^3}{3^3} = ı2\pi
\]
\[
\oint_{|z|=1} \frac{f(z)}{z^4}\,dz = ı2\pi
\]
3. We use Cauchy's integral formula to evaluate the integral.
\[
\oint_{|z|=1} \frac{f(z) e^z}{z^2}\,dz = \frac{ı2\pi}{1!} \left[ \frac{d}{dz}\left( f(z) e^z \right) \right]_{z=0} = ı2\pi\,\frac{1!\,1^3}{3^1}
\]
\[
\oint_{|z|=1} \frac{f(z) e^z}{z^2}\,dz = \frac{ı2\pi}{3}
\]
Solution 12.30
1. (a)
\begin{align*}
\frac{1}{z(1-z)} &= \frac{1}{z} + \frac{1}{1-z} \\
&= \frac{1}{z} + \sum_{n=0}^{\infty} z^n, \quad \text{for } 0 < |z| < 1 \\
&= \sum_{n=-1}^{\infty} z^n, \quad \text{for } 0 < |z| < 1
\end{align*}
(b)
\begin{align*}
\frac{1}{z(1-z)} &= \frac{1}{z} + \frac{1}{1-z} \\
&= \frac{1}{z} - \frac{1}{z}\,\frac{1}{1-1/z} \\
&= \frac{1}{z} - \frac{1}{z} \sum_{n=0}^{\infty} \left( \frac{1}{z} \right)^n, \quad \text{for } |z| > 1 \\
&= -\frac{1}{z} \sum_{n=1}^{\infty} z^{-n}, \quad \text{for } |z| > 1 \\
&= -\sum_{n=-\infty}^{-2} z^n, \quad \text{for } |z| > 1
\end{align*}
(c)
\begin{align*}
\frac{1}{z(1-z)} &= \frac{1}{z} + \frac{1}{1-z} \\
&= \frac{1}{(z+1)-1} + \frac{1}{2-(z+1)} \\
&= \frac{1}{z+1}\,\frac{1}{1-1/(z+1)} - \frac{1}{z+1}\,\frac{1}{1-2/(z+1)}, \quad \text{for } |z+1| > 1 \text{ and } |z+1| > 2 \\
&= \frac{1}{z+1} \sum_{n=0}^{\infty} \frac{1}{(z+1)^n} - \frac{1}{z+1} \sum_{n=0}^{\infty} \frac{2^n}{(z+1)^n}, \quad \text{for } |z+1| > 2 \\
&= \frac{1}{z+1} \sum_{n=0}^{\infty} \frac{1-2^n}{(z+1)^n}, \quad \text{for } |z+1| > 2 \\
&= \sum_{n=1}^{\infty} \frac{1-2^n}{(z+1)^{n+1}}, \quad \text{for } |z+1| > 2 \\
&= \sum_{n=-\infty}^{-2} \left( 1 - 2^{-n-1} \right) (z+1)^n, \quad \text{for } |z+1| > 2
\end{align*}
2. First we factor the denominator of $f(z) = 1/(z^4+4)$.
\[
z^4 + 4 = (z-1-ı)(z-1+ı)(z+1-ı)(z+1+ı)
\]
We look for an annulus about $z = 1$ containing the point $z = ı$ where $f(z)$ is analytic. The singularities at $z = 1 \pm ı$ are a distance of 1 from $z = 1$; the singularities at $z = -1 \pm ı$ are at a distance of $\sqrt{5}$. Since $f(z)$ is analytic in the domain $1 < |z-1| < \sqrt{5}$ there is a convergent Laurent series in that domain.
Chapter 13
The Residue Theorem
Man will occasionally stumble over the truth, but most of the time he will pick himself up and
continue on.
- Winston Churchill
13.1 The Residue Theorem
We will find that many integrals on closed contours may be evaluated in terms of the residues of a
function. We first define residues and then prove the Residue Theorem.
Result 13.1.1 Residues. Let $f(z)$ be single-valued and analytic in a deleted neighborhood of $z_0$. Then $f(z)$ has the Laurent series expansion
\[
f(z) = \sum_{n=-\infty}^{\infty} a_n (z-z_0)^n.
\]
The residue of $f(z)$ at $z = z_0$ is the coefficient of the $\frac{1}{z-z_0}$ term:
\[
\operatorname{Res}(f(z), z_0) = a_{-1}.
\]
The residue at a branch point or non-isolated singularity is undefined as the Laurent series does not exist. If $f(z)$ has a pole of order $n$ at $z = z_0$ then we can use the Residue Formula:
\[
\operatorname{Res}(f(z), z_0) = \lim_{z\to z_0} \frac{1}{(n-1)!} \frac{d^{n-1}}{dz^{n-1}} \left[ (z-z_0)^n f(z) \right].
\]
See Exercise 13.4 for a proof of the Residue Formula.
Example 13.1.1 In Example 8.4.5 we showed that $f(z) = z/\sin z$ has first order poles at $z = n\pi$, $n \in \mathbb{Z} \setminus \{0\}$. Now we find the residues at these isolated singularities.
\begin{align*}
\operatorname{Res}\left( \frac{z}{\sin z}, z = n\pi \right) &= \lim_{z\to n\pi} (z-n\pi) \frac{z}{\sin z} \\
&= n\pi \lim_{z\to n\pi} \frac{z-n\pi}{\sin z} \\
&= n\pi \lim_{z\to n\pi} \frac{1}{\cos z} \\
&= n\pi\,\frac{1}{(-1)^n} \\
&= (-1)^n n\pi
\end{align*}
Residue Theorem. We can evaluate many integrals in terms of the residues of a function. Suppose $f(z)$ has only one singularity, (at $z = z_0$), inside the simple, closed, positively oriented contour $C$. $f(z)$ has a convergent Laurent series in some deleted disk about $z_0$. We deform $C$ to a contour $B$ lying in the disk. (See Figure 13.1: deform the contour to lie in the deleted disk.) We now evaluate $\oint_C f(z)\,dz$ by deforming the contour and using the Laurent series expansion of the function.
\begin{align*}
\oint_C f(z)\,dz &= \oint_B f(z)\,dz \\
&= \oint_B \sum_{n=-\infty}^{\infty} a_n (z-z_0)^n\,dz \\
&= \sum_{\substack{n=-\infty \\ n\neq -1}}^{\infty} a_n \left[ \frac{(z-z_0)^{n+1}}{n+1} \right]_{r e^{ıθ}}^{r e^{ı(θ+2\pi)}} + a_{-1} \left[ \log(z-z_0) \right]_{r e^{ıθ}}^{r e^{ı(θ+2\pi)}} \\
&= a_{-1}\,ı2\pi
\end{align*}
\[
\oint_C f(z)\,dz = ı2\pi \operatorname{Res}(f(z), z_0)
\]
Now assume that $f(z)$ has $n$ singularities at $\{z_1, \ldots, z_n\}$. We deform $C$ to $n$ contours $C_1, \ldots, C_n$ which enclose the singularities and lie in deleted disks about the singularities in which $f(z)$ has convergent Laurent series. (See Figure 13.2: deform the contour into $n$ contours which enclose the $n$ singularities.) We evaluate $\oint_C f(z)\,dz$ by deforming the contour.
\[
\oint_C f(z)\,dz = \sum_{k=1}^{n} \oint_{C_k} f(z)\,dz = ı2\pi \sum_{k=1}^{n} \operatorname{Res}(f(z), z_k)
\]
Now instead let $f(z)$ be analytic outside and on $C$ except for isolated singularities at $\{\zeta_n\}$ in the domain outside $C$ and perhaps an isolated singularity at infinity. Let $a$ be any point in the interior of $C$. To evaluate $\oint_C f(z)\,dz$ we make the change of variables $\zeta = 1/(z-a)$. This maps the contour $C$ to $C'$. (See Figure 13.3: the change of variables $\zeta = 1/(z-a)$. Note that $C'$ is negatively oriented.) All the points outside $C$ are mapped to points inside $C'$ and vice versa. We can then evaluate the integral in terms of the singularities inside $C'$.
\begin{align*}
\oint_C f(z)\,dz &= \oint_{C'} f\left( \frac{1}{\zeta} + a \right) \frac{-1}{\zeta^2}\,d\zeta \\
&= \oint_{-C'} \frac{1}{z^2}\,f\left( \frac{1}{z} + a \right) dz \\
&= ı2\pi \sum_n \operatorname{Res}\left( \frac{1}{z^2}\,f\left( \frac{1}{z} + a \right), \frac{1}{\zeta_n - a} \right) + ı2\pi \operatorname{Res}\left( \frac{1}{z^2}\,f\left( \frac{1}{z} + a \right), 0 \right).
\end{align*}
Result 13.1.2 Residue Theorem. If $f(z)$ is analytic in a compact, closed, connected domain $D$ except for isolated singularities at $\{z_n\}$ in the interior of $D$ then
\[
\oint_{\partial D} f(z)\,dz = \sum_k \oint_{C_k} f(z)\,dz = ı2\pi \sum_n \operatorname{Res}(f(z), z_n).
\]
Here the set of contours $\{C_k\}$ make up the positively oriented boundary $\partial D$ of the domain $D$. If the boundary of the domain is a single contour $C$ then the formula simplifies.
\[
\oint_C f(z)\,dz = ı2\pi \sum_n \operatorname{Res}(f(z), z_n)
\]
If instead $f(z)$ is analytic outside and on $C$ except for isolated singularities at $\{\zeta_n\}$ in the domain outside $C$ and perhaps an isolated singularity at infinity then
\[
\oint_C f(z)\,dz = ı2\pi \sum_n \operatorname{Res}\left( \frac{1}{z^2}\,f\left( \frac{1}{z} + a \right), \frac{1}{\zeta_n - a} \right) + ı2\pi \operatorname{Res}\left( \frac{1}{z^2}\,f\left( \frac{1}{z} + a \right), 0 \right).
\]
Here $a$ is any point in the interior of $C$.
Example 13.1.2 Consider
\[
\frac{1}{ı2\pi} \oint_C \frac{\sin z}{z(z-1)}\,dz
\]
where $C$ is the positively oriented circle of radius 2 centered at the origin. Since the integrand is single-valued with only isolated singularities, the Residue Theorem applies. The value of the integral is the sum of the residues from singularities inside the contour.
The only places that the integrand could have singularities are $z = 0$ and $z = 1$. Since
\[
\lim_{z\to 0} \frac{\sin z}{z} = \lim_{z\to 0} \frac{\cos z}{1} = 1,
\]
there is a removable singularity at the point $z = 0$. There is no residue at this point.
Now we consider the point $z = 1$. Since $\sin(z)/z$ is analytic and nonzero at $z = 1$, that point is a first order pole of the integrand. The residue there is
\[
\operatorname{Res}\left( \frac{\sin z}{z(z-1)}, z = 1 \right) = \lim_{z\to 1} (z-1) \frac{\sin z}{z(z-1)} = \sin(1).
\]
There is only one singular point with a residue inside the path of integration. The residue at this point is $\sin(1)$. Thus the value of the integral is
\[
\frac{1}{ı2\pi} \oint_C \frac{\sin z}{z(z-1)}\,dz = \sin(1)
\]
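Both residues can be corroborated symbolically; the following sketch assumes Python with sympy, which the text does not otherwise use.
\begin{verbatim}
import sympy as sp

z = sp.symbols('z')
f = sp.sin(z) / (z * (z - 1))

# z = 0 is removable (no residue); z = 1 is a first order pole.
print(sp.residue(f, z, 0))  # 0
print(sp.residue(f, z, 1))  # sin(1)
\end{verbatim}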
Example 13.1.3 Evaluate the integral
\[
\oint_C \frac{\cot z \coth z}{z^3}\,dz
\]
where $C$ is the unit circle about the origin in the positive direction.
The integrand is
\[
\frac{\cot z \coth z}{z^3} = \frac{\cos z \cosh z}{z^3 \sin z \sinh z}.
\]
$\sin z$ has zeros at $n\pi$. $\sinh z$ has zeros at $ın\pi$. Thus the only pole inside the contour of integration is at $z = 0$. Since $\sin z$ and $\sinh z$ both have simple zeros at $z = 0$,
\[
\sin z = z + O(z^3), \qquad \sinh z = z + O(z^3),
\]
the integrand has a pole of order 5 at the origin. The residue at $z = 0$ is
\begin{align*}
\lim_{z\to 0} \frac{1}{4!} \frac{d^4}{dz^4} \left( z^5\,\frac{\cot z \coth z}{z^3} \right) &= \lim_{z\to 0} \frac{1}{4!} \frac{d^4}{dz^4} \left( z^2 \cot z \coth z \right) \\
&= \frac{1}{4!} \lim_{z\to 0} \Big( 24 \cot z \coth z \csc^2 z - 32 z \coth z \csc^4 z \\
&\quad - 16 z \cos(2z) \coth z \csc^4 z + 22 z^2 \cot z \coth z \csc^4 z \\
&\quad + 2 z^2 \cos(3z) \coth z \csc^5 z + 24 \cot z \coth z \operatorname{csch}^2 z \\
&\quad + 24 \csc^2 z \operatorname{csch}^2 z - 48 z \cot z \csc^2 z \operatorname{csch}^2 z \\
&\quad - 48 z \coth z \csc^2 z \operatorname{csch}^2 z + 24 z^2 \cot z \coth z \csc^2 z \operatorname{csch}^2 z \\
&\quad + 16 z^2 \csc^4 z \operatorname{csch}^2 z + 8 z^2 \cos(2z) \csc^4 z \operatorname{csch}^2 z \\
&\quad - 32 z \cot z \operatorname{csch}^4 z - 16 z \cosh(2z) \cot z \operatorname{csch}^4 z \\
&\quad + 22 z^2 \cot z \coth z \operatorname{csch}^4 z + 16 z^2 \csc^2 z \operatorname{csch}^4 z \\
&\quad + 8 z^2 \cosh(2z) \csc^2 z \operatorname{csch}^4 z + 2 z^2 \cosh(3z) \cot z \operatorname{csch}^5 z \Big) \\
&= \frac{1}{4!} \left( -\frac{56}{15} \right) \\
&= -\frac{7}{45}
\end{align*}
Since taking the fourth derivative of $z^2 \cot z \coth z$ really sucks, we would like a more elegant way of finding the residue. We expand the functions in the integrand in Taylor series about the origin.
\begin{align*}
\frac{\cos z \cosh z}{z^3 \sin z \sinh z} &= \frac{\left(1 - \frac{z^2}{2} + \frac{z^4}{24} - \cdots\right)\left(1 + \frac{z^2}{2} + \frac{z^4}{24} + \cdots\right)}{z^3 \left(z - \frac{z^3}{6} + \frac{z^5}{120} - \cdots\right)\left(z + \frac{z^3}{6} + \frac{z^5}{120} + \cdots\right)} \\
&= \frac{1 - \frac{z^4}{6} + \cdots}{z^3 \left( z^2 + z^6 \left( -\frac{1}{36} + \frac{1}{60} \right) + \cdots \right)} \\
&= \frac{1}{z^5}\,\frac{1 - \frac{z^4}{6} + \cdots}{1 - \frac{z^4}{90} + \cdots} \\
&= \frac{1}{z^5} \left( 1 - \frac{z^4}{6} + \cdots \right) \left( 1 + \frac{z^4}{90} + \cdots \right) \\
&= \frac{1}{z^5} \left( 1 - \frac{7}{45} z^4 + \cdots \right) \\
&= \frac{1}{z^5} - \frac{7}{45}\,\frac{1}{z} + \cdots
\end{align*}
Thus we see that the residue is $-\frac{7}{45}$. Now we can evaluate the integral.
\[
\oint_C \frac{\cot z \coth z}{z^3}\,dz = -ı\,\frac{14}{45}\,\pi
\]
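A computer algebra system will happily take the ugly route for us; this sketch assumes Python with sympy, which the text does not otherwise use.
\begin{verbatim}
import sympy as sp

z = sp.symbols('z')
integrand = sp.cot(z) * sp.coth(z) / z**3

# Laurent expansion about 0; the 1/z coefficient is the residue.
print(sp.series(integrand, z, 0, 1))   # 1/z**5 - 7/(45*z) + O(z)
print(sp.residue(integrand, z, 0))     # -7/45
\end{verbatim}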
13.2 Cauchy Principal Value for Real Integrals

13.2.1 The Cauchy Principal Value
First we recap improper integrals. If $f(x)$ has a singularity at $x_0 \in (a \ldots b)$ then
\[
\int_a^b f(x)\,dx \equiv \lim_{\epsilon\to 0^+} \int_a^{x_0-\epsilon} f(x)\,dx + \lim_{\delta\to 0^+} \int_{x_0+\delta}^b f(x)\,dx.
\]
For integrals on $(-\infty \ldots \infty)$,
\[
\int_{-\infty}^{\infty} f(x)\,dx \equiv \lim_{a\to-\infty,\ b\to\infty} \int_a^b f(x)\,dx.
\]
Example 13.2.1 $\int_{-1}^1 \frac{1}{x}\,dx$ is divergent. We show this with the definition of improper integrals.
\begin{align*}
\int_{-1}^1 \frac{1}{x}\,dx &= \lim_{\epsilon\to 0^+} \int_{-1}^{-\epsilon} \frac{1}{x}\,dx + \lim_{\delta\to 0^+} \int_{\delta}^{1} \frac{1}{x}\,dx \\
&= \lim_{\epsilon\to 0^+} \left[\ln|x|\right]_{-1}^{-\epsilon} + \lim_{\delta\to 0^+} \left[\ln|x|\right]_{\delta}^{1} \\
&= \lim_{\epsilon\to 0^+} \ln \epsilon - \lim_{\delta\to 0^+} \ln \delta
\end{align*}
The integral diverges because $\epsilon$ and $\delta$ approach zero independently.
Since $1/x$ is an odd function, it appears that the area under the curve is zero. Consider what would happen if $\epsilon$ and $\delta$ were not independent. If they approached zero symmetrically, $\delta = \epsilon$, then the value of the integral would be zero.
\[
\lim_{\epsilon\to 0^+} \left( \int_{-1}^{-\epsilon} + \int_{\epsilon}^{1} \right) \frac{1}{x}\,dx = \lim_{\epsilon\to 0^+} (\ln \epsilon - \ln \epsilon) = 0
\]
We could make the integral have any value we pleased by choosing $\delta = c\epsilon$.¹
\[
\lim_{\epsilon\to 0^+} \left( \int_{-1}^{-\epsilon} + \int_{c\epsilon}^{1} \right) \frac{1}{x}\,dx = \lim_{\epsilon\to 0^+} (\ln \epsilon - \ln(c\epsilon)) = -\ln c
\]
We have seen it is reasonable that
\[
\int_{-1}^1 \frac{1}{x}\,dx
\]
has some meaning, and if we could evaluate the integral, the most reasonable value would be zero. The Cauchy principal value provides us with a way of evaluating such integrals. If $f(x)$ is continuous on $(a,b)$ except at the point $x_0 \in (a,b)$ then the Cauchy principal value of the integral is defined
\[
⨍_a^b f(x)\,dx = \lim_{\epsilon\to 0^+} \left( \int_a^{x_0-\epsilon} f(x)\,dx + \int_{x_0+\epsilon}^b f(x)\,dx \right).
\]
The Cauchy principal value is obtained by approaching the singularity symmetrically. The principal value of the integral may exist when the integral diverges. If the integral exists, it is equal to the principal value of the integral.
The Cauchy principal value of $\int_{-1}^1 \frac{1}{x}\,dx$ is defined
\begin{align*}
⨍_{-1}^1 \frac{1}{x}\,dx &\equiv \lim_{\epsilon\to 0^+} \left( \int_{-1}^{-\epsilon} \frac{1}{x}\,dx + \int_{\epsilon}^{1} \frac{1}{x}\,dx \right) \\
&= \lim_{\epsilon\to 0^+} \left( \left[\log|x|\right]_{-1}^{-\epsilon} + \left[\log|x|\right]_{\epsilon}^{1} \right) \\
&= \lim_{\epsilon\to 0^+} \left( \log|-\epsilon| - \log|\epsilon| \right) \\
&= 0.
\end{align*}
(Another notation for the principal value of an integral is $\operatorname{PV} \int f(x)\,dx$.) Since the limits of integration approach zero symmetrically, the two halves of the integral cancel. If the limits of integration approached zero independently, (the definition of the integral), then the two halves would both diverge.

¹This may remind you of conditionally convergent series. You can rearrange the terms to make the series sum to any number.
Example 13.2.2 $\int_{-\infty}^{\infty} \frac{x}{x^2+1}\,dx$ is divergent. We show this with the definition of improper integrals.
\begin{align*}
\int_{-\infty}^{\infty} \frac{x}{x^2+1}\,dx &= \lim_{a\to-\infty,\ b\to\infty} \int_a^b \frac{x}{x^2+1}\,dx \\
&= \lim_{a\to-\infty,\ b\to\infty} \left[ \frac{1}{2} \ln\left(x^2+1\right) \right]_a^b \\
&= \frac{1}{2} \lim_{a\to-\infty,\ b\to\infty} \ln \frac{b^2+1}{a^2+1}
\end{align*}
The integral diverges because $a$ and $b$ approach infinity independently. Now consider what would happen if $a$ and $b$ were not independent. If they approached infinity symmetrically, $a = -b$, then the value of the integral would be zero.
\[
\frac{1}{2} \lim_{b\to\infty} \ln \frac{b^2+1}{b^2+1} = 0
\]
We could make the integral have any value we pleased by choosing $a = -cb$.
We can assign a meaning to divergent integrals of the form $\int_{-\infty}^{\infty} f(x)\,dx$ with the Cauchy principal value. The Cauchy principal value of the integral is defined
\[
⨍_{-\infty}^{\infty} f(x)\,dx = \lim_{a\to\infty} \int_{-a}^{a} f(x)\,dx.
\]
The Cauchy principal value is obtained by approaching infinity symmetrically.
The Cauchy principal value of $\int_{-\infty}^{\infty} \frac{x}{x^2+1}\,dx$ is defined
\begin{align*}
⨍_{-\infty}^{\infty} \frac{x}{x^2+1}\,dx &= \lim_{a\to\infty} \int_{-a}^{a} \frac{x}{x^2+1}\,dx \\
&= \lim_{a\to\infty} \left[ \frac{1}{2} \ln\left(x^2+1\right) \right]_{-a}^{a} \\
&= 0.
\end{align*}
Result 13.2.1 Cauchy Principal Value. If $f(x)$ is continuous on $(a,b)$ except at the point $x_0 \in (a,b)$ then the integral of $f(x)$ is defined
\[
\int_a^b f(x)\,dx = \lim_{\epsilon\to 0^+} \int_a^{x_0-\epsilon} f(x)\,dx + \lim_{\delta\to 0^+} \int_{x_0+\delta}^b f(x)\,dx.
\]
The Cauchy principal value of the integral is defined
\[
⨍_a^b f(x)\,dx = \lim_{\epsilon\to 0^+} \left( \int_a^{x_0-\epsilon} f(x)\,dx + \int_{x_0+\epsilon}^b f(x)\,dx \right).
\]
If $f(x)$ is continuous on $(-\infty, \infty)$ then the integral of $f(x)$ is defined
\[
\int_{-\infty}^{\infty} f(x)\,dx = \lim_{a\to-\infty,\ b\to\infty} \int_a^b f(x)\,dx.
\]
The Cauchy principal value of the integral is defined
\[
⨍_{-\infty}^{\infty} f(x)\,dx = \lim_{a\to\infty} \int_{-a}^{a} f(x)\,dx.
\]
The principal value of the integral may exist when the integral diverges. If the integral exists, it is equal to the principal value of the integral.

Example 13.2.3 Clearly $\int_{-\infty}^{\infty} x\,dx$ diverges, however the Cauchy principal value exists.
\[
⨍_{-\infty}^{\infty} x\,dx = \lim_{a\to\infty} \left[ \frac{x^2}{2} \right]_{-a}^{a} = 0
\]
In general, if $f(x)$ is an odd function with no singularities on the finite real axis then
\[
⨍_{-\infty}^{\infty} f(x)\,dx = 0.
\]
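The symmetric-excision definition translates directly into a few lines of code. The sketch below assumes Python with scipy; the weight='cauchy' option of scipy's quad computes principal values of integrands of the form $f(x)/(x - c)$ directly.
\begin{verbatim}
from scipy.integrate import quad

# PV of integral_{-1}^{1} dx/x via symmetric excision of the singularity.
def pv_one_over_x(eps):
    left, _ = quad(lambda x: 1 / x, -1, -eps)
    right, _ = quad(lambda x: 1 / x, eps, 1)
    return left + right

for eps in (1e-1, 1e-3, 1e-6):
    print(eps, pv_one_over_x(eps))   # -> 0 for every eps

# The same principal value, computed directly: PV of 1/(x - 0) on [-1, 1].
val, _ = quad(lambda x: 1.0, -1, 1, weight='cauchy', wvar=0.0)
print(val)  # 0.0
\end{verbatim}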
13.3 Cauchy Principal Value for Contour Integrals
Example 13.3.1 Consider the integral
\[
\oint_{C_r} \frac{1}{z-1}\,dz,
\]
where $C_r$ is the positively oriented circle of radius $r$ and center at the origin. From the residue theorem, we know that the integral is
\[
\oint_{C_r} \frac{1}{z-1}\,dz = \begin{cases} 0 & \text{for } r < 1, \\ ı2\pi & \text{for } r > 1. \end{cases}
\]
When $r = 1$, the integral diverges, as there is a first order pole on the path of integration. However, the principal value of the integral exists.
\begin{align*}
⨍_{C_r} \frac{1}{z-1}\,dz &= \lim_{\epsilon\to 0^+} \int_{\epsilon}^{2\pi-\epsilon} \frac{1}{e^{ıθ}-1}\,ı e^{ıθ}\,dθ \\
&= \lim_{\epsilon\to 0^+} \left[ \log\left(e^{ıθ}-1\right) \right]_{\epsilon}^{2\pi-\epsilon}
\end{align*}
We choose the branch of the logarithm with a branch cut on the positive real axis and $\arg \log z \in (0, 2\pi)$.
\begin{align*}
&= \lim_{\epsilon\to 0^+} \left( \log\left(e^{ı(2\pi-\epsilon)}-1\right) - \log\left(e^{ı\epsilon}-1\right) \right) \\
&= \lim_{\epsilon\to 0^+} \left( \log\left(1-ı\epsilon+O(\epsilon^2)-1\right) - \log\left(1+ı\epsilon+O(\epsilon^2)-1\right) \right) \\
&= \lim_{\epsilon\to 0^+} \left( \log\left(-ı\epsilon+O(\epsilon^2)\right) - \log\left(ı\epsilon+O(\epsilon^2)\right) \right) \\
&= \lim_{\epsilon\to 0^+} \Big( \operatorname{Log}\left(\epsilon+O(\epsilon^2)\right) + ı\arg\left(-ı\epsilon+O(\epsilon^2)\right) - \operatorname{Log}\left(\epsilon+O(\epsilon^2)\right) - ı\arg\left(ı\epsilon+O(\epsilon^2)\right) \Big) \\
&= ı\,\frac{3\pi}{2} - ı\,\frac{\pi}{2} \\
&= ı\pi
\end{align*}
Thus we obtain
\[
⨍_{C_r} \frac{1}{z-1}\,dz = \begin{cases} 0 & \text{for } r < 1, \\ ı\pi & \text{for } r = 1, \\ ı2\pi & \text{for } r > 1. \end{cases}
\]
In the above example we evaluated the contour integral by parameterizing the contour. This approach is only feasible when the integrand is simple. We would like to use the residue theorem to more easily evaluate the principal value of the integral. But before we do that, we will need a preliminary result.

Result 13.3.1 Let $f(z)$ have a first order pole at $z = z_0$ and let $(z-z_0)f(z)$ be analytic in some neighborhood of $z_0$. Let the contour $C_\epsilon$ be a circular arc from $z_0 + \epsilon e^{ıα}$ to $z_0 + \epsilon e^{ıβ}$. (We assume that $β > α$ and $β - α < 2\pi$.)
\[
\lim_{\epsilon\to 0^+} \int_{C_\epsilon} f(z)\,dz = ı(β-α) \operatorname{Res}(f(z), z_0)
\]
The contour is shown in Figure 13.4 (the arc $C_\epsilon$ of radius $\epsilon$ about $z_0$, subtending the angle $β - α$). (See Exercise 13.9 for a proof of this result.)
Example 13.3.2 Consider
\[
⨍_C \frac{1}{z-1}\,dz
\]
where $C$ is the unit circle. Let $C_p$ be the circular arc of radius 1 that starts and ends a distance of $\epsilon$ from $z = 1$. Let $C_\epsilon$ be the positive, circular arc of radius $\epsilon$ with center at $z = 1$ that joins the endpoints of $C_p$. Let $C_i$ be the union of $C_p$ and $C_\epsilon$. ($C_p$ stands for Principal value Contour; $C_i$ stands for Indented Contour.) $C_i$ is an indented contour that avoids the first order pole at $z = 1$. Figure 13.5 shows the three contours.
Note that the principal value of the integral is
\[
⨍_C \frac{1}{z-1}\,dz = \lim_{\epsilon\to 0^+} \int_{C_p} \frac{1}{z-1}\,dz.
\]
We can calculate the integral along $C_i$ with the residue theorem.
\[
\int_{C_i} \frac{1}{z-1}\,dz = ı2\pi
\]
We can calculate the integral along $C_\epsilon$ using Result 13.3.1. Note that as $\epsilon \to 0^+$, the contour becomes a semi-circle, a circular arc of $\pi$ radians.
\[
\lim_{\epsilon\to 0^+} \int_{C_\epsilon} \frac{1}{z-1}\,dz = ı\pi \operatorname{Res}\left( \frac{1}{z-1}, 1 \right) = ı\pi
\]
Now we can write the principal value of the integral along $C$ in terms of the two known integrals.
\begin{align*}
⨍_C \frac{1}{z-1}\,dz &= \int_{C_i} \frac{1}{z-1}\,dz - \int_{C_\epsilon} \frac{1}{z-1}\,dz \\
&= ı2\pi - ı\pi \\
&= ı\pi
\end{align*}
In the previous example, we formed an indented contour that included the first order pole. You can show that if we had indented the contour to exclude the pole, we would obtain the same result. (See Exercise 13.11.)
We can extend the residue theorem to principal values of integrals. (See Exercise 13.10.)

Result 13.3.2 Residue Theorem for Principal Values. Let $f(z)$ be analytic inside and on a simple, closed, positive contour $C$, except for isolated singularities at $z_1, \ldots, z_m$ inside the contour and first order poles at $ζ_1, \ldots, ζ_n$ on the contour. Further, let the contour be $C^1$ at the locations of these first order poles. (i.e., the contour does not have a corner at any of the first order poles.) Then the principal value of the integral of $f(z)$ along $C$ is
\[
⨍_C f(z)\,dz = ı2\pi \sum_{j=1}^{m} \operatorname{Res}(f(z), z_j) + ı\pi \sum_{j=1}^{n} \operatorname{Res}(f(z), ζ_j).
\]
13.4 Integrals on the Real Axis
Example 13.4.1 We wish to evaluate the integral
\[
\int_{-\infty}^{\infty} \frac{1}{x^2+1}\,dx.
\]
We can evaluate this integral directly using calculus.
\[
\int_{-\infty}^{\infty} \frac{1}{x^2+1}\,dx = \left[\arctan x\right]_{-\infty}^{\infty} = \pi
\]
Now we will evaluate the integral using contour integration. Let $C_R$ be the semicircular arc from $R$ to $-R$ in the upper half plane. Let $C$ be the union of $C_R$ and the interval $[-R, R]$.
We can evaluate the integral along $C$ with the residue theorem. The integrand has first order poles at $z = \pm ı$. For $R > 1$, we have
\[
\oint_C \frac{1}{z^2+1}\,dz = ı2\pi \operatorname{Res}\left( \frac{1}{z^2+1}, ı \right) = ı2\pi\,\frac{1}{ı2} = \pi.
\]
Now we examine the integral along $C_R$. We use the maximum modulus integral bound to show that the value of the integral vanishes as $R \to \infty$.
\[
\left| \int_{C_R} \frac{1}{z^2+1}\,dz \right| \leq \pi R \max_{z\in C_R} \left| \frac{1}{z^2+1} \right| = \pi R\,\frac{1}{R^2-1} \to 0 \quad \text{as } R \to \infty.
\]
Now we are prepared to evaluate the original real integral.
\[
\oint_C \frac{1}{z^2+1}\,dz = \pi
\]
\[
\int_{-R}^{R} \frac{1}{x^2+1}\,dx + \int_{C_R} \frac{1}{z^2+1}\,dz = \pi
\]
We take the limit as $R \to \infty$.
\[
\int_{-\infty}^{\infty} \frac{1}{x^2+1}\,dx = \pi
\]
We would get the same result by closing the path of integration in the lower half plane. Note that in this case the closed contour would be in the negative direction.
If you are really observant, you may have noticed that we did something a little funny in evaluating
\[
\int_{-\infty}^{\infty} \frac{1}{x^2+1}\,dx.
\]
The definition of this improper integral is
\[
\int_{-\infty}^{\infty} \frac{1}{x^2+1}\,dx = \lim_{a\to+\infty} \int_{-a}^{0} \frac{1}{x^2+1}\,dx + \lim_{b\to+\infty} \int_0^b \frac{1}{x^2+1}\,dx.
\]
In the above example we instead computed
\[
\lim_{R\to+\infty} \int_{-R}^{R} \frac{1}{x^2+1}\,dx.
\]
Note that for some integrands, the former and latter are not the same. Consider the integral of $\frac{x}{x^2+1}$.
\begin{align*}
\int_{-\infty}^{\infty} \frac{x}{x^2+1}\,dx &= \lim_{a\to+\infty} \int_{-a}^{0} \frac{x}{x^2+1}\,dx + \lim_{b\to+\infty} \int_0^b \frac{x}{x^2+1}\,dx \\
&= \lim_{a\to+\infty} \left( -\frac{1}{2} \log|a^2+1| \right) + \lim_{b\to+\infty} \left( \frac{1}{2} \log|b^2+1| \right)
\end{align*}
Note that the limits do not exist and hence the integral diverges. We get a different result if the limits of integration approach infinity symmetrically.
\begin{align*}
\lim_{R\to+\infty} \int_{-R}^{R} \frac{x}{x^2+1}\,dx &= \lim_{R\to+\infty} \frac{1}{2} \left( \log|R^2+1| - \log|R^2+1| \right) \\
&= 0
\end{align*}
(Note that the integrand is an odd function, so the integral from $-R$ to $R$ is zero.) We call this the principal value of the integral and denote it by writing "PV" in front of the integral sign or putting a dash through the integral.
\[
\operatorname{PV} \int_{-\infty}^{\infty} f(x)\,dx \equiv ⨍_{-\infty}^{\infty} f(x)\,dx \equiv \lim_{R\to+\infty} \int_{-R}^{R} f(x)\,dx
\]
The principal value of an integral may exist when the integral diverges. If the integral does converge, then it is equal to its principal value.
We can use the method of Example 13.4.1 to evaluate the principal value of integrals of functions that vanish fast enough at infinity.
Result 13.4.1 Let $f(z)$ be analytic except for isolated singularities, with only first order poles on the real axis. Let $C_R$ be the semi-circle from $R$ to $-R$ in the upper half plane. If
\[
\lim_{R\to\infty} \left( R \max_{z\in C_R} |f(z)| \right) = 0
\]
then
\[
⨍_{-\infty}^{\infty} f(x)\,dx = ı2\pi \sum_{k=1}^{m} \operatorname{Res}(f(z), z_k) + ı\pi \sum_{k=1}^{n} \operatorname{Res}(f(z), x_k)
\]
where $z_1, \ldots, z_m$ are the singularities of $f(z)$ in the upper half plane and $x_1, \ldots, x_n$ are the first order poles on the real axis.
Now let $C_R$ be the semi-circle from $R$ to $-R$ in the lower half plane. If
\[
\lim_{R\to\infty} \left( R \max_{z\in C_R} |f(z)| \right) = 0
\]
then
\[
⨍_{-\infty}^{\infty} f(x)\,dx = -ı2\pi \sum_{k=1}^{m} \operatorname{Res}(f(z), z_k) - ı\pi \sum_{k=1}^{n} \operatorname{Res}(f(z), x_k)
\]
where $z_1, \ldots, z_m$ are the singularities of $f(z)$ in the lower half plane and $x_1, \ldots, x_n$ are the first order poles on the real axis.

This result is proved in Exercise 13.13. Of course we can use this result to evaluate the integrals of the form
\[
\int_0^{\infty} f(z)\,dz,
\]
where $f(x)$ is an even function.
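The residue recipe reduces to a few lines in code. The sketch below (assuming Python with scipy) redoes Example 13.4.1: the single upper-half-plane pole of $1/(z^2+1)$ is $z = ı$ with residue $1/(2ı)$, so the integral is $2\pi ı \cdot 1/(2ı) = \pi$.
\begin{verbatim}
import math
from scipy.integrate import quad

residue_at_i = 1 / (2j)                       # Res(1/(z^2+1), i)
by_residues = (2j * math.pi * residue_at_i).real
numeric, _ = quad(lambda x: 1 / (x**2 + 1), -math.inf, math.inf)
assert abs(by_residues - math.pi) < 1e-15
assert abs(numeric - math.pi) < 1e-8
print(by_residues, numeric)
\end{verbatim}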
13.5 Fourier Integrals
In order to do Fourier transforms, which are useful in solving differential equations, it is necessary to be able to calculate Fourier integrals. Fourier integrals have the form
\[
\int_{-\infty}^{\infty} e^{ıωx} f(x)\,dx.
\]
We evaluate these integrals by closing the path of integration in the lower or upper half plane and using techniques of contour integration.
Consider the integral
\[
\int_0^{\pi/2} e^{-R\sin θ}\,dθ.
\]
Since $2θ/\pi \leq \sin θ$ for $0 \leq θ \leq \pi/2$,
\[
e^{-R\sin θ} \leq e^{-R2θ/\pi} \quad \text{for } 0 \leq θ \leq \pi/2
\]
\begin{align*}
\int_0^{\pi/2} e^{-R\sin θ}\,dθ &\leq \int_0^{\pi/2} e^{-R2θ/\pi}\,dθ \\
&= \left[ -\frac{\pi}{2R}\,e^{-R2θ/\pi} \right]_0^{\pi/2} \\
&= -\frac{\pi}{2R} \left( e^{-R} - 1 \right) \\
&\leq \frac{\pi}{2R} \\
&\to 0 \quad \text{as } R \to \infty
\end{align*}
We can use this to prove the following Result 13.5.1. (See Exercise 13.17.)

Result 13.5.1 Jordan's Lemma.
\[
\int_0^{\pi} e^{-R\sin θ}\,dθ < \frac{\pi}{R}.
\]
Suppose that $f(z)$ vanishes as $|z| \to \infty$. If $ω$ is a (positive/negative) real number and $C_R$ is a semi-circle of radius $R$ in the (upper/lower) half plane then the integral
\[
\int_{C_R} f(z) e^{ıωz}\,dz
\]
vanishes as $R \to \infty$.

We can use Jordan's Lemma and the Residue Theorem to evaluate many Fourier integrals. Consider $⨍_{-\infty}^{\infty} f(x) e^{ıωx}\,dx$, where $ω$ is a positive real number. Let $f(z)$ be analytic except for isolated singularities, with only first order poles on the real axis. Let $C$ be the contour from $-R$ to $R$ on the real axis and then back to $-R$ along a semi-circle in the upper half plane. If $R$ is large enough so that $C$ encloses all the singularities of $f(z)$ in the upper half plane then
\[
\oint_C f(z) e^{ıωz}\,dz = ı2\pi \sum_{k=1}^{m} \operatorname{Res}\left(f(z) e^{ıωz}, z_k\right) + ı\pi \sum_{k=1}^{n} \operatorname{Res}\left(f(z) e^{ıωz}, x_k\right)
\]
where $z_1, \ldots, z_m$ are the singularities of $f(z)$ in the upper half plane and $x_1, \ldots, x_n$ are the first order poles on the real axis. If $f(z)$ vanishes as $|z| \to \infty$ then the integral on $C_R$ vanishes as $R \to \infty$ by Jordan's Lemma.
\[
⨍_{-\infty}^{\infty} f(x) e^{ıωx}\,dx = ı2\pi \sum_{k=1}^{m} \operatorname{Res}\left(f(z) e^{ıωz}, z_k\right) + ı\pi \sum_{k=1}^{n} \operatorname{Res}\left(f(z) e^{ıωz}, x_k\right)
\]
For negative $ω$ we close the path of integration in the lower half plane. Note that the contour is then in the negative direction.

Result 13.5.2 Fourier Integrals. Let $f(z)$ be analytic except for isolated singularities, with only first order poles on the real axis. Suppose that $f(z)$ vanishes as $|z| \to \infty$. If $ω$ is a positive real number then
\[
⨍_{-\infty}^{\infty} f(x) e^{ıωx}\,dx = ı2\pi \sum_{k=1}^{m} \operatorname{Res}\left(f(z) e^{ıωz}, z_k\right) + ı\pi \sum_{k=1}^{n} \operatorname{Res}\left(f(z) e^{ıωz}, x_k\right)
\]
where $z_1, \ldots, z_m$ are the singularities of $f(z)$ in the upper half plane and $x_1, \ldots, x_n$ are the first order poles on the real axis. If $ω$ is a negative real number then
\[
⨍_{-\infty}^{\infty} f(x) e^{ıωx}\,dx = -ı2\pi \sum_{k=1}^{m} \operatorname{Res}\left(f(z) e^{ıωz}, z_k\right) - ı\pi \sum_{k=1}^{n} \operatorname{Res}\left(f(z) e^{ıωz}, x_k\right)
\]
where $z_1, \ldots, z_m$ are the singularities of $f(z)$ in the lower half plane and $x_1, \ldots, x_n$ are the first order poles on the real axis.
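As a concrete instance of the result, take $f(x) = 1/(x^2+1)$ and $ω > 0$: the only upper-half-plane pole is $z = ı$, so the formula gives $ı2\pi \operatorname{Res}(e^{ıωz}/(z^2+1), ı) = ı2\pi\,e^{-ω}/(2ı) = \pi e^{-ω}$. The sketch below assumes Python with scipy (its weight='cos' option is designed for exactly this kind of oscillatory integral).
\begin{verbatim}
import math
import cmath
from scipy.integrate import quad

omega = 1.7
# 2*pi*i * Res(e^{i omega z}/(z^2+1), i) = pi * e^{-omega}
by_residues = (2j * math.pi * cmath.exp(1j * omega * 1j) / (2j)).real

# The integrand is even, so the Fourier integral is twice the cosine half.
half, _ = quad(lambda x: 1 / (x**2 + 1), 0, math.inf,
               weight='cos', wvar=omega)
assert abs(by_residues - math.pi * math.exp(-omega)) < 1e-12
assert abs(2 * half - by_residues) < 1e-6
print(by_residues, 2 * half)
\end{verbatim}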
13.6 Fourier Cosine and Sine Integrals
Fourier cosine and sine integrals have the form,
\[
\int_0^{\infty} f(x) \cos(ωx)\,dx \quad \text{and} \quad \int_0^{\infty} f(x) \sin(ωx)\,dx.
\]
If $f(x)$ is even/odd then we can evaluate the cosine/sine integral with the method we developed for Fourier integrals.
Let $f(z)$ be analytic except for isolated singularities, with only first order poles on the real axis. Suppose that $f(x)$ is an even function and that $f(z)$ vanishes as $|z| \to \infty$. We consider real $ω > 0$.
\[
⨍_0^{\infty} f(x) \cos(ωx)\,dx = \frac{1}{2}\,⨍_{-\infty}^{\infty} f(x) \cos(ωx)\,dx
\]
Since $f(x)\sin(ωx)$ is an odd function,
\[
\frac{1}{2}\,⨍_{-\infty}^{\infty} f(x) \sin(ωx)\,dx = 0.
\]
Thus
\[
⨍_0^{\infty} f(x) \cos(ωx)\,dx = \frac{1}{2}\,⨍_{-\infty}^{\infty} f(x) e^{ıωx}\,dx
\]
Now we apply Result 13.5.2.
\[
⨍_0^{\infty} f(x) \cos(ωx)\,dx = ı\pi \sum_{k=1}^{m} \operatorname{Res}\left(f(z) e^{ıωz}, z_k\right) + \frac{ı\pi}{2} \sum_{k=1}^{n} \operatorname{Res}\left(f(z) e^{ıωz}, x_k\right)
\]
where $z_1, \ldots, z_m$ are the singularities of $f(z)$ in the upper half plane and $x_1, \ldots, x_n$ are the first order poles on the real axis.
If $f(x)$ is an odd function, we note that $f(x)\cos(ωx)$ is an odd function to obtain the analogous result for Fourier sine integrals.

Result 13.6.1 Fourier Cosine and Sine Integrals. Let $f(z)$ be analytic except for isolated singularities, with only first order poles on the real axis. Suppose that $f(x)$ is an even function and that $f(z)$ vanishes as $|z| \to \infty$. We consider real $ω > 0$.
\[
⨍_0^{\infty} f(x) \cos(ωx)\,dx = ı\pi \sum_{k=1}^{m} \operatorname{Res}\left(f(z) e^{ıωz}, z_k\right) + \frac{ı\pi}{2} \sum_{k=1}^{n} \operatorname{Res}\left(f(z) e^{ıωz}, x_k\right)
\]
where $z_1, \ldots, z_m$ are the singularities of $f(z)$ in the upper half plane and $x_1, \ldots, x_n$ are the first order poles on the real axis. If $f(x)$ is an odd function then,
\[
⨍_0^{\infty} f(x) \sin(ωx)\,dx = \pi \sum_{k=1}^{µ} \operatorname{Res}\left(f(z) e^{ıωz}, ζ_k\right) + \frac{\pi}{2} \sum_{k=1}^{n} \operatorname{Res}\left(f(z) e^{ıωz}, x_k\right)
\]
where $ζ_1, \ldots, ζ_µ$ are the singularities of $f(z)$ in the upper half plane and $x_1, \ldots, x_n$ are the first order poles on the real axis.

Now suppose that $f(x)$ is neither even nor odd. We can evaluate integrals of the form:
\[
\int_{-\infty}^{\infty} f(x) \cos(ωx)\,dx \quad \text{and} \quad \int_{-\infty}^{\infty} f(x) \sin(ωx)\,dx
\]
by writing them in terms of Fourier integrals
\begin{align*}
\int_{-\infty}^{\infty} f(x) \cos(ωx)\,dx &= \frac{1}{2} \int_{-\infty}^{\infty} f(x) e^{ıωx}\,dx + \frac{1}{2} \int_{-\infty}^{\infty} f(x) e^{-ıωx}\,dx \\
\int_{-\infty}^{\infty} f(x) \sin(ωx)\,dx &= -\frac{ı}{2} \int_{-\infty}^{\infty} f(x) e^{ıωx}\,dx + \frac{ı}{2} \int_{-\infty}^{\infty} f(x) e^{-ıωx}\,dx
\end{align*}
13.7 Contour Integration and Branch Cuts
Example 13.7.1 Consider
\[
\int_0^{\infty} \frac{x^{-a}}{x+1}\,dx, \quad 0 < a < 1,
\]
where $x^{-a}$ denotes $\exp(-a\ln(x))$. We choose the branch of the function
\[
f(z) = \frac{z^{-a}}{z+1}, \quad |z| > 0,\ 0 < \arg z < 2\pi
\]
with a branch cut on the positive real axis.
Let $C_\epsilon$ and $C_R$ denote the circular arcs of radius $\epsilon$ and $R$ where $\epsilon < 1 < R$. $C_\epsilon$ is negatively oriented; $C_R$ is positively oriented. Consider the closed contour $C$ that is traced by a point moving from $C_\epsilon$ to $C_R$ above the branch cut, next around $C_R$, then below the cut to $C_\epsilon$, and finally around $C_\epsilon$. (See Figure 13.6: the keyhole contour formed by $C_\epsilon$ and $C_R$.)
We write $f(z)$ in polar coordinates.
\[
f(z) = \frac{\exp(-a\log z)}{z+1} = \frac{\exp(-a(\log r + ıθ))}{r e^{ıθ} + 1}
\]
We evaluate the function above, ($z = r e^{ı0}$), and below, ($z = r e^{ı2\pi}$), the branch cut.
\[
f\left(r e^{ı0}\right) = \frac{\exp[-a(\log r + ı0)]}{r+1} = \frac{r^{-a}}{r+1}
\]
\[
f\left(r e^{ı2\pi}\right) = \frac{\exp[-a(\log r + ı2\pi)]}{r+1} = \frac{r^{-a} e^{-ı2a\pi}}{r+1}.
\]
We use the residue theorem to evaluate the integral along $C$.
\[
\oint_C f(z)\,dz = ı2\pi \operatorname{Res}(f(z), -1)
\]
\[
\int_{\epsilon}^{R} \frac{r^{-a}}{r+1}\,dr + \int_{C_R} f(z)\,dz - \int_{\epsilon}^{R} \frac{r^{-a} e^{-ı2a\pi}}{r+1}\,dr + \int_{C_\epsilon} f(z)\,dz = ı2\pi \operatorname{Res}(f(z), -1)
\]
The residue is
\[
\operatorname{Res}(f(z), -1) = \exp(-a\log(-1)) = \exp(-a(\log 1 + ı\pi)) = e^{-ıa\pi}.
\]
We bound the integrals along $C_\epsilon$ and $C_R$ with the maximum modulus integral bound.
\[
\left| \int_{C_\epsilon} f(z)\,dz \right| \leq 2\pi\epsilon\,\frac{\epsilon^{-a}}{1-\epsilon} = 2\pi\,\frac{\epsilon^{1-a}}{1-\epsilon}
\]
\[
\left| \int_{C_R} f(z)\,dz \right| \leq 2\pi R\,\frac{R^{-a}}{R-1} = 2\pi\,\frac{R^{1-a}}{R-1}
\]
Since $0 < a < 1$, the values of the integrals tend to zero as $\epsilon \to 0$ and $R \to \infty$. Thus we have
\[
\int_0^{\infty} \frac{r^{-a}}{r+1}\,dr = ı2\pi\,\frac{e^{-ıa\pi}}{1 - e^{-ı2a\pi}}
\]
\[
\int_0^{\infty} \frac{x^{-a}}{x+1}\,dx = \frac{\pi}{\sin a\pi}
\]
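The keyhole-contour result is easy to spot-check by quadrature; the sketch below assumes Python with scipy and splits the integral at $x = 1$ so the endpoint singularity and the infinite tail are handled separately.
\begin{verbatim}
import math
from scipy.integrate import quad

# integral_0^inf x^{-a}/(x+1) dx = pi / sin(a pi), for 0 < a < 1
for a in (0.25, 0.5, 0.9):
    v1, _ = quad(lambda x: x ** -a / (x + 1), 0, 1)
    v2, _ = quad(lambda x: x ** -a / (x + 1), 1, math.inf)
    assert abs((v1 + v2) - math.pi / math.sin(a * math.pi)) < 1e-6
print("keyhole-contour result confirmed numerically")
\end{verbatim}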
Result 13.7.1 Integrals from Zero to Infinity. Let $f(z)$ be a single-valued analytic function with only isolated singularities and no singularities on the positive, real axis, $[0, \infty)$. Let $a \notin \mathbb{Z}$. If the integrals exist then,
\[
\int_0^{\infty} f(x)\,dx = -\sum_{k=1}^{n} \operatorname{Res}\left( f(z)\log z, z_k \right),
\]
\[
\int_0^{\infty} x^a f(x)\,dx = \frac{ı2\pi}{1 - e^{ı2\pi a}} \sum_{k=1}^{n} \operatorname{Res}\left( z^a f(z), z_k \right),
\]
\[
\int_0^{\infty} f(x) \log x\,dx = -\frac{1}{2} \sum_{k=1}^{n} \operatorname{Res}\left( f(z)\log^2 z, z_k \right) + ı\pi \sum_{k=1}^{n} \operatorname{Res}\left( f(z)\log z, z_k \right),
\]
\[
\int_0^{\infty} x^a f(x) \log x\,dx = \frac{ı2\pi}{1 - e^{ı2\pi a}} \sum_{k=1}^{n} \operatorname{Res}\left( z^a f(z)\log z, z_k \right) + \frac{\pi^2 a}{\sin^2(\pi a)} \sum_{k=1}^{n} \operatorname{Res}\left( z^a f(z), z_k \right),
\]
\[
\int_0^{\infty} x^a f(x) \log^m x\,dx = \frac{\partial^m}{\partial a^m} \left( \frac{ı2\pi}{1 - e^{ı2\pi a}} \sum_{k=1}^{n} \operatorname{Res}\left( z^a f(z), z_k \right) \right),
\]
where $z_1, \ldots, z_n$ are the singularities of $f(z)$ and there is a branch cut on the positive real axis with $0 < \arg(z) < 2\pi$.
13.8 Exploiting Symmetry
We have already used symmetry of the integrand to evaluate certain integrals. For $f(x)$ an even function we were able to evaluate $\int_0^{\infty} f(x)\,dx$ by extending the range of integration from $-\infty$ to $\infty$. For
\[
\int_0^{\infty} x^{\alpha} f(x)\,dx
\]
we put a branch cut on the positive real axis and noted that the value of the integrand below the branch cut is a constant multiple of the value of the function above the branch cut. This enabled us to evaluate the real integral with contour integration. In this section we will use other kinds of symmetry to evaluate integrals. We will discover that periodicity of the integrand will produce this symmetry.

13.8.1 Wedge Contours
We note that $z^n = r^n e^{ınθ}$ is periodic in $θ$ with period $2\pi/n$. The real and imaginary parts of $z^n$ are odd periodic in $θ$ with period $\pi/n$. This observation suggests that certain integrals on the positive real axis may be evaluated by closing the path of integration with a wedge contour.

Example 13.8.1 Consider
\[
\int_0^{\infty} \frac{1}{1+x^n}\,dx
\]
where $n \in \mathbb{N}$, $n \geq 2$. We can evaluate this integral using Result 13.7.1.
\begin{align*}
\int_0^{\infty} \frac{1}{1+x^n}\,dx &= -\sum_{k=0}^{n-1} \operatorname{Res}\left( \frac{\log z}{1+z^n}, e^{ı\pi(1+2k)/n} \right) \\
&= -\sum_{k=0}^{n-1} \lim_{z\to e^{ı\pi(1+2k)/n}} \frac{\left( z - e^{ı\pi(1+2k)/n} \right) \log z}{1+z^n} \\
&= -\sum_{k=0}^{n-1} \lim_{z\to e^{ı\pi(1+2k)/n}} \frac{\log z + \left( z - e^{ı\pi(1+2k)/n} \right)/z}{n z^{n-1}} \\
&= -\sum_{k=0}^{n-1} \frac{ı\pi(1+2k)/n}{n\,e^{ı\pi(1+2k)(n-1)/n}} \\
&= -\frac{ı\pi}{n^2\,e^{ı\pi(n-1)/n}} \sum_{k=0}^{n-1} (1+2k)\,e^{ı2\pi k/n} \\
&= \frac{ı2\pi\,e^{ı\pi/n}}{n^2} \sum_{k=1}^{n-1} k\,e^{ı2\pi k/n} \\
&= \frac{ı2\pi\,e^{ı\pi/n}}{n^2}\,\frac{n}{e^{ı2\pi/n}-1} \\
&= \frac{\pi}{n\sin(\pi/n)}
\end{align*}
This is a bit grungy. To find a spiffier way to evaluate the integral we note that if we write the integrand as a function of $r$ and $θ$, it is periodic in $θ$ with period $2\pi/n$.
\[
\frac{1}{1+z^n} = \frac{1}{1 + r^n e^{ınθ}}
\]
The integrand along the rays $θ = 2\pi/n, 4\pi/n, 6\pi/n, \ldots$ has the same value as the integrand on the real axis. Consider the contour $C$ that is the boundary of the wedge $0 < r < R$, $0 < θ < 2\pi/n$. There is one singularity inside the contour. We evaluate the residue there.
\[
\operatorname{Res}\left( \frac{1}{1+z^n}, e^{ı\pi/n} \right) = \lim_{z\to e^{ı\pi/n}} \frac{z - e^{ı\pi/n}}{1+z^n} = \lim_{z\to e^{ı\pi/n}} \frac{1}{n z^{n-1}} = -\frac{e^{ı\pi/n}}{n}
\]
We evaluate the integral along $C$ with the residue theorem.
\[
\oint_C \frac{1}{1+z^n}\,dz = \frac{-ı2\pi\,e^{ı\pi/n}}{n}
\]
Let $C_R$ be the circular arc. The integral along $C_R$ vanishes as $R \to \infty$.
\[
\left| \int_{C_R} \frac{1}{1+z^n}\,dz \right| \leq \frac{2\pi R}{n} \max_{z\in C_R} \left| \frac{1}{1+z^n} \right| \leq \frac{2\pi R}{n}\,\frac{1}{R^n-1} \to 0 \quad \text{as } R \to \infty
\]
We parametrize the contour to evaluate the desired integral.
\[
\int_0^{\infty} \frac{1}{1+x^n}\,dx + \int_{\infty}^{0} \frac{1}{1+x^n}\,e^{ı2\pi/n}\,dx = \frac{-ı2\pi\,e^{ı\pi/n}}{n}
\]
\[
\int_0^{\infty} \frac{1}{1+x^n}\,dx = \frac{-ı2\pi\,e^{ı\pi/n}}{n\left(1 - e^{ı2\pi/n}\right)}
\]
\[
\int_0^{\infty} \frac{1}{1+x^n}\,dx = \frac{\pi}{n\sin(\pi/n)}
\]
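A quick numerical corroboration of the wedge-contour result (assuming Python with scipy, which the text does not otherwise use):
\begin{verbatim}
import math
from scipy.integrate import quad

# integral_0^inf dx/(1+x^n) = pi / (n sin(pi/n)), for n >= 2
for n in (2, 3, 6, 10):
    val, _ = quad(lambda x: 1 / (1 + x ** n), 0, math.inf)
    assert abs(val - math.pi / (n * math.sin(math.pi / n))) < 1e-8
print("wedge-contour result confirmed numerically")
\end{verbatim}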
13.8.2 Box Contours
Recall that $e^z = e^{x+ıy}$ is periodic in $y$ with period $2\pi$. This implies that the hyperbolic trigonometric functions $\cosh z$, $\sinh z$ and $\tanh z$ are periodic in $y$ with period $2\pi$ and odd periodic in $y$ with period $\pi$. We can exploit this property to evaluate certain integrals on the real axis by closing the path of integration with a box contour.

Example 13.8.2 Consider the integral
\begin{align*}
\int_{-\infty}^{\infty} \frac{1}{\cosh x}\,dx &= \left[ ı\log\tanh\left( \frac{ı\pi}{4} + \frac{x}{2} \right) \right]_{-\infty}^{\infty} \\
&= ı\log(1) - ı\log(-1) \\
&= \pi.
\end{align*}
We will evaluate this integral using contour integration. Note that
\[
\cosh(x+ı\pi) = \frac{e^{x+ı\pi} + e^{-x-ı\pi}}{2} = -\cosh(x).
\]
Consider the box contour $C$ that is the boundary of the region $-R < x < R$, $0 < y < \pi$. The only singularity of the integrand inside the contour is a first order pole at $z = ı\pi/2$. We evaluate the integral along $C$ with the residue theorem.
\begin{align*}
\oint_C \frac{1}{\cosh z}\,dz &= ı2\pi \operatorname{Res}\left( \frac{1}{\cosh z}, \frac{ı\pi}{2} \right) \\
&= ı2\pi \lim_{z\to ı\pi/2} \frac{z-ı\pi/2}{\cosh z} \\
&= ı2\pi \lim_{z\to ı\pi/2} \frac{1}{\sinh z} \\
&= 2\pi
\end{align*}
The integrals along the sides of the box vanish as $R \to \infty$.
\begin{align*}
\left| \int_{\pm R}^{\pm R+ı\pi} \frac{1}{\cosh z}\,dz \right| &\leq \pi \max_{z\in[\pm R \ldots \pm R+ı\pi]} \left| \frac{1}{\cosh z} \right| \\
&\leq \pi \max_{y\in[0\ldots\pi]} \left| \frac{2}{e^{\pm R+ıy} + e^{\mp R-ıy}} \right| \\
&\leq \pi\,\frac{2}{e^R - e^{-R}} \\
&= \frac{\pi}{\sinh R} \\
&\to 0 \quad \text{as } R \to \infty
\end{align*}
The value of the integrand on the top of the box is the negative of its value on the bottom. We take the limit as $R \to \infty$.
\[
\int_{-\infty}^{\infty} \frac{1}{\cosh x}\,dx + \int_{\infty}^{-\infty} \frac{1}{-\cosh x}\,dx = 2\pi
\]
\[
\int_{-\infty}^{\infty} \frac{1}{\cosh x}\,dx = \pi
\]
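A one-line numerical corroboration of the box-contour result (assuming Python with scipy):
\begin{verbatim}
import math
from scipy.integrate import quad

# integral over R of dx/cosh(x) = pi, from the box-contour argument
val, _ = quad(lambda x: 1 / math.cosh(x), -math.inf, math.inf)
assert abs(val - math.pi) < 1e-10
print(val)
\end{verbatim}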
13.9 Definite Integrals Involving Sine and Cosine
Example 13.9.1 For real-valued $a$, evaluate the integral:
\[
f(a) = \int_0^{2\pi} \frac{dθ}{1 + a\sin θ}.
\]
What is the value of the integral for complex-valued $a$?
Real-Valued a. For $-1 < a < 1$, the integrand is bounded, hence the integral exists. For $|a| = 1$, the integrand has a second order pole on the path of integration. For $|a| > 1$ the integrand has two first order poles on the path of integration. The integral is divergent for these two cases. Thus we see that the integral exists for $-1 < a < 1$.
For $a = 0$, the value of the integral is $2\pi$. Now consider $a \neq 0$. We make the change of variables $z = e^{ıθ}$. The real integral from $θ = 0$ to $θ = 2\pi$ becomes a contour integral along the unit circle, $|z| = 1$. We write the sine, cosine and the differential in terms of $z$.
\[
\sin θ = \frac{z-z^{-1}}{ı2}, \quad \cos θ = \frac{z+z^{-1}}{2}, \quad dz = ı e^{ıθ}\,dθ, \quad dθ = \frac{dz}{ız}
\]
We write $f(a)$ as an integral along $C$, the positively oriented unit circle $|z| = 1$.
\[
f(a) = \oint_C \frac{1/(ız)}{1 + a(z-z^{-1})/(ı2)}\,dz = \oint_C \frac{2/a}{z^2 + (ı2/a)z - 1}\,dz
\]
We factor the denominator of the integrand.
\[
f(a) = \oint_C \frac{2/a}{(z-z_1)(z-z_2)}\,dz
\]
\[
z_1 = ı\,\frac{-1+\sqrt{1-a^2}}{a}, \qquad z_2 = ı\,\frac{-1-\sqrt{1-a^2}}{a}
\]
Because $|a| < 1$, the second root is outside the unit circle.
\[
|z_2| = \frac{1+\sqrt{1-a^2}}{|a|} > 1.
\]
Since $|z_1 z_2| = 1$, $|z_1| < 1$. Thus the pole at $z_1$ is inside the contour and the pole at $z_2$ is outside. We evaluate the contour integral with the residue theorem.
\begin{align*}
f(a) &= \oint_C \frac{2/a}{z^2 + (ı2/a)z - 1}\,dz \\
&= ı2\pi\,\frac{2/a}{z_1 - z_2} \\
&= ı2\pi\,\frac{1}{ı\sqrt{1-a^2}}
\end{align*}
\[
f(a) = \frac{2\pi}{\sqrt{1-a^2}}
\]
Complex-Valued a. We note that the integral converges except for real-valued $a$ satisfying $|a| \geq 1$. On any closed subset of $\mathbb{C} \setminus \{a \in \mathbb{R} \mid |a| \geq 1\}$ the integral is uniformly convergent. Thus except for the values $\{a \in \mathbb{R} \mid |a| \geq 1\}$, we can differentiate the integral with respect to $a$. $f(a)$ is analytic in the complex plane except for the set of points on the real axis: $a \in (-\infty \ldots -1]$ and $a \in [1 \ldots \infty)$. The value of the analytic function $f(a)$ on the real axis for the interval $(-1 \ldots 1)$ is
\[
f(a) = \frac{2\pi}{\sqrt{1-a^2}}.
\]
By analytic continuation we see that the value of $f(a)$ in the complex plane is the branch of the function
\[
f(a) = \frac{2\pi}{(1-a^2)^{1/2}}
\]
where $f(a)$ is positive, real-valued for $a \in (-1 \ldots 1)$ and there are branch cuts on the real axis on the intervals: $(-\infty \ldots -1]$ and $[1 \ldots \infty)$.
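For real $a$ in $(-1, 1)$ the residue formula is easy to corroborate by quadrature; this sketch assumes Python with scipy.
\begin{verbatim}
import math
from scipy.integrate import quad

# f(a) = integral_0^{2 pi} dtheta/(1 + a sin(theta)) = 2 pi/sqrt(1 - a^2)
for a in (0.0, 0.3, -0.8, 0.95):
    val, _ = quad(lambda t: 1 / (1 + a * math.sin(t)), 0, 2 * math.pi)
    assert abs(val - 2 * math.pi / math.sqrt(1 - a * a)) < 1e-7
print("residue formula matches quadrature for -1 < a < 1")
\end{verbatim}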
Result 13.9.1 For evaluating integrals of the form
\[
\int_a^{a+2\pi} F(\sin θ, \cos θ)\,dθ
\]
it may be useful to make the change of variables $z = e^{ıθ}$. This gives us a contour integral along the unit circle about the origin. We can write the sine, cosine and differential in terms of $z$.
\[
\sin θ = \frac{z-z^{-1}}{ı2}, \quad \cos θ = \frac{z+z^{-1}}{2}, \quad dθ = \frac{dz}{ız}
\]
13.10 Infinite Sums
The function $g(z) = \pi\cot(\pi z)$ has simple poles at $z = n \in \mathbb{Z}$. The residues at these points are all unity.
\begin{align*}
\operatorname{Res}(\pi\cot(\pi z), n) &= \lim_{z\to n} \frac{\pi(z-n)\cos(\pi z)}{\sin(\pi z)} \\
&= \lim_{z\to n} \frac{\pi\cos(\pi z) - \pi^2(z-n)\sin(\pi z)}{\pi\cos(\pi z)} \\
&= 1
\end{align*}
Let $C_n$ be the square contour with corners at $z = (n+1/2)(\pm 1 \pm ı)$. Recall that
\[
\cos z = \cos x\cosh y - ı\sin x\sinh y \quad \text{and} \quad \sin z = \sin x\cosh y + ı\cos x\sinh y.
\]
First we bound the modulus of $\cot(z)$.
\begin{align*}
|\cot(z)| &= \left| \frac{\cos x\cosh y - ı\sin x\sinh y}{\sin x\cosh y + ı\cos x\sinh y} \right| \\
&= \sqrt{ \frac{\cos^2 x\cosh^2 y + \sin^2 x\sinh^2 y}{\sin^2 x\cosh^2 y + \cos^2 x\sinh^2 y} } \\
&\leq \sqrt{ \frac{\cosh^2 y}{\sinh^2 y} } \\
&= |\coth(y)|
\end{align*}
The hyperbolic cotangent, $\coth(y)$, has a simple pole at $y = 0$ and tends to $\pm 1$ as $y \to \pm\infty$.
Along the top and bottom of $C_n$, ($z = x \pm ı(n+1/2)$), we bound the modulus of $g(z) = \pi\cot(\pi z)$.
\[
|\pi\cot(\pi z)| \leq \pi\coth(\pi(n+1/2))
\]
Along the left and right sides of $C_n$, ($z = \pm(n+1/2)+ıy$), the modulus of the function is bounded by a constant.
\[
|g(\pm(n+1/2)+ıy)| = \pi \left| \frac{\cos(\pi(n+1/2))\cosh(\pi y) - ı\sin(\pi(n+1/2))\sinh(\pi y)}{\sin(\pi(n+1/2))\cosh(\pi y) + ı\cos(\pi(n+1/2))\sinh(\pi y)} \right| = |{-ı\pi}\tanh(\pi y)| \leq \pi
\]
Thus the modulus of $\pi\cot(\pi z)$ can be bounded by a constant $M$ on $C_n$.
Let $f(z)$ be analytic except for isolated singularities. Consider the integral,
\[
\oint_{C_n} \pi\cot(\pi z) f(z)\,dz.
\]
We use the maximum modulus integral bound.
\[
\left| \oint_{C_n} \pi\cot(\pi z) f(z)\,dz \right| \leq (8n+4)M \max_{z\in C_n} |f(z)|
\]
Note that if
\[
\lim_{|z|\to\infty} |z f(z)| = 0,
\]
then
\[
\lim_{n\to\infty} \oint_{C_n} \pi\cot(\pi z) f(z)\,dz = 0.
\]
This implies that the sum of all residues of $\pi\cot(\pi z) f(z)$ is zero. Suppose further that $f(z)$ is analytic at $z = n \in \mathbb{Z}$. The residues of $\pi\cot(\pi z) f(z)$ at $z = n$ are $f(n)$. This means
\[
\sum_{n=-\infty}^{\infty} f(n) = -(\text{sum of the residues of } \pi\cot(\pi z) f(z) \text{ at the poles of } f(z)).
\]

Result 13.10.1 If
\[
\lim_{|z|\to\infty} |z f(z)| = 0,
\]
then the sum of all the residues of $\pi\cot(\pi z) f(z)$ is zero. If in addition $f(z)$ is analytic at $z = n \in \mathbb{Z}$ then
\[
\sum_{n=-\infty}^{\infty} f(n) = -(\text{sum of the residues of } \pi\cot(\pi z) f(z) \text{ at the poles of } f(z)).
\]
Example 13.10.1 Consider the sum
\[
\sum_{n=-\infty}^{\infty} \frac{1}{(n+a)^2}, \quad a \notin \mathbb{Z}.
\]
By Result 13.10.1 with $f(z) = 1/(z+a)^2$ we have
\begin{align*}
\sum_{n=-\infty}^{\infty} \frac{1}{(n+a)^2} &= -\operatorname{Res}\left( \pi\cot(\pi z)\,\frac{1}{(z+a)^2}, -a \right) \\
&= -\pi \lim_{z\to -a} \frac{d}{dz} \cot(\pi z) \\
&= -\pi \lim_{z\to -a} \frac{-\pi\sin^2(\pi z) - \pi\cos^2(\pi z)}{\sin^2(\pi z)}.
\end{align*}
\[
\sum_{n=-\infty}^{\infty} \frac{1}{(n+a)^2} = \frac{\pi^2}{\sin^2(\pi a)}
\]
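A direct partial sum corroborates the closed form; this sketch assumes plain Python, and the tolerance reflects the $O(1/N)$ tail of the truncated sum.
\begin{verbatim}
import math

# sum over n in Z of 1/(n+a)^2 = pi^2/sin^2(pi a), for a not an integer
a = 0.3
N = 100000
partial = sum(1 / (n + a) ** 2 for n in range(-N, N + 1))
exact = math.pi ** 2 / math.sin(math.pi * a) ** 2
assert abs(partial - exact) < 1e-4   # truncation error ~ 2/N
print(partial, exact)
\end{verbatim}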
Example 13.10.2 Derive $\pi/4 = 1 - 1/3 + 1/5 - 1/7 + 1/9 - \cdots$.
Consider the integral
\[
I_n = \frac{1}{ı2\pi} \oint_{C_n} \frac{dw}{w(w-z)\sin w}
\]
where $C_n$ is the square with corners at $w = (n+1/2)(\pm 1 \pm ı)\pi$, $n \in \mathbb{Z}^+$. With the substitution $w = x+ıy$,
\[
|\sin w|^2 = \sin^2 x + \sinh^2 y,
\]
we see that $|1/\sin w| \leq 1$ on $C_n$. Thus $I_n \to 0$ as $n \to \infty$. We use the residue theorem and take the limit $n \to \infty$.
\[
0 = \sum_{n=1}^{\infty} \left( \frac{(-1)^n}{n\pi(n\pi-z)} + \frac{(-1)^n}{n\pi(n\pi+z)} \right) + \frac{1}{z\sin z} - \frac{1}{z^2}
\]
\begin{align*}
\frac{1}{\sin z} &= \frac{1}{z} - 2z \sum_{n=1}^{\infty} \frac{(-1)^n}{n^2\pi^2 - z^2} \\
&= \frac{1}{z} - \sum_{n=1}^{\infty} \left( \frac{(-1)^n}{n\pi-z} - \frac{(-1)^n}{n\pi+z} \right)
\end{align*}
We substitute $z = \pi/2$ into the above expression to obtain
\[
\pi/4 = 1 - 1/3 + 1/5 - 1/7 + 1/9 - \cdots
\]
13.11 Exercises

The Residue Theorem
Exercise 13.1
Evaluate the following closed contour integrals using Cauchy's residue theorem.
1. $\displaystyle \oint_C \frac{dz}{z^2-1}$, where $C$ is the contour parameterized by $r = 2\cos(2θ)$, $0 \leq θ \leq 2\pi$.
2. $\displaystyle \oint_C \frac{e^{ız}}{z^2(z-2)(z+ı5)}\,dz$, where $C$ is the positive circle $|z| = 3$.
3. $\displaystyle \oint_C e^{1/z} \sin(1/z)\,dz$, where $C$ is the positive circle $|z| = 1$.
Hint, Solution
Exercise 13.2
Derive Cauchy's integral formula from Cauchy's residue theorem.
Hint, Solution
Exercise 13.3
Calculate the residues of the following functions at each of the poles in the finite part of the plane.
1. $\dfrac{1}{z^4-a^4}$
2. $\dfrac{\sin z}{z^2}$
3. $\dfrac{1+z^2}{z(z-1)^2}$
4. $\dfrac{e^z}{z^2+a^2}$
5. $\dfrac{(1-\cos z)^2}{z^7}$
Hint, Solution
Exercise 13.4
Let $f(z)$ have a pole of order $n$ at $z = z_0$. Prove the Residue Formula:
\[
\operatorname{Res}(f(z), z_0) = \lim_{z\to z_0} \frac{1}{(n-1)!} \frac{d^{n-1}}{dz^{n-1}} \left[ (z-z_0)^n f(z) \right].
\]
Hint, Solution
Exercise 13.5
Consider the function
\[
f(z) = \frac{z^4}{z^2+1}.
\]
Classify the singularities of $f(z)$ in the extended complex plane. Calculate the residue at each pole and at infinity. Find the Laurent series expansions and their domains of convergence about the points $z = 0$, $z = ı$ and $z = \infty$.
Hint, Solution
Exercise 13.6
Let $P(z)$ be a polynomial none of whose roots lie on the closed contour $Γ$. Show that
\[
\frac{1}{ı2\pi} \oint_Γ \frac{P'(z)}{P(z)}\,dz = \text{number of roots of } P(z) \text{ which lie inside } Γ,
\]
where the roots are counted according to their multiplicity.
Hint: From the fundamental theorem of algebra, it is always possible to factor $P(z)$ in the form $P(z) = (z-z_1)(z-z_2)\cdots(z-z_n)$. Using this form of $P(z)$ the integrand $P'(z)/P(z)$ reduces to a very simple expression.
Hint, Solution
Exercise 13.7
Find the value of
\[
\oint_C \frac{e^z}{(z-\pi)\tan z}\,dz
\]
where $C$ is the positively-oriented circle
1. $|z| = 2$
2. $|z| = 4$
Hint, Solution
Cauchy Principal Value for Real Integrals
Solution 13.1
Show that the integral
\[
\int_{-1}^{1} \frac{1}{x}\,dx
\]
is divergent. Evaluate the integral
\[
\int_{-1}^{1} \frac{1}{x-ıα}\,dx, \quad α \in \mathbb{R},\ α \neq 0.
\]
Evaluate
\[
\lim_{α\to 0^+} \int_{-1}^{1} \frac{1}{x-ıα}\,dx \quad \text{and} \quad \lim_{α\to 0^-} \int_{-1}^{1} \frac{1}{x-ıα}\,dx.
\]
The integral exists for $α$ arbitrarily close to zero, but diverges when $α = 0$. Plot the real and imaginary part of the integrand. If one were to assign meaning to the integral for $α = 0$, what would the value of the integral be?

Exercise 13.8
Do the principal values of the following integrals exist?
1. $\displaystyle \int_{-1}^{1} \frac{1}{x^2}\,dx$,
2. $\displaystyle \int_{-1}^{1} \frac{1}{x^3}\,dx$,
3. $\displaystyle \int_{-1}^{1} \frac{f(x)}{x^3}\,dx$.
Assume that $f(x)$ is real analytic on the interval $(-1, 1)$.
Hint, Solution
Cauchy Principal Value for Contour Integrals
Exercise 13.9
Let $f(z)$ have a first order pole at $z = z_0$ and let $(z-z_0)f(z)$ be analytic in some neighborhood of $z_0$. Let the contour $C_\epsilon$ be a circular arc from $z_0 + \epsilon e^{ıα}$ to $z_0 + \epsilon e^{ıβ}$. (Assume that $β > α$ and $β - α < 2\pi$.) Show that
\[
\lim_{\epsilon\to 0^+} \int_{C_\epsilon} f(z)\,dz = ı(β-α)\operatorname{Res}(f(z), z_0)
\]
Hint, Solution
Exercise 13.10
Let $f(z)$ be analytic inside and on a simple, closed, positive contour $C$, except for isolated singularities at $z_1, \ldots, z_m$ inside the contour and first order poles at $ζ_1, \ldots, ζ_n$ on the contour. Further, let the contour be $C^1$ at the locations of these first order poles. (i.e., the contour does not have a corner at any of the first order poles.) Show that the principal value of the integral of $f(z)$ along $C$ is
\[
⨍_C f(z)\,dz = ı2\pi \sum_{j=1}^{m} \operatorname{Res}(f(z), z_j) + ı\pi \sum_{j=1}^{n} \operatorname{Res}(f(z), ζ_j).
\]
Hint, Solution
Exercise 13.11
Let $C$ be the unit circle. Evaluate
\[
⨍_C \frac{1}{z-1}\,dz
\]
by indenting the contour to exclude the first order pole at $z = 1$.
Hint, Solution

Integrals on the Real Axis
Exercise 13.12
Evaluate the following improper integrals.
1. $\displaystyle \int_0^{\infty} \frac{x^2}{(x^2+1)(x^2+4)}\,dx = \frac{\pi}{6}$
2. $\displaystyle \int_{-\infty}^{\infty} \frac{dx}{(x+b)^2+a^2}$, $a > 0$
Hint, Solution
Exercise 13.13
Prove Result 13.4.1.
Hint, Solution
Exercise 13.14
Evaluate
\[
⨍_{-\infty}^{\infty} \frac{2x}{x^2+x+1}\,dx.
\]
Hint, Solution
Exercise 13.15
Use contour integration to evaluate the integrals
1. $\displaystyle \int_{-\infty}^{\infty} \frac{dx}{1+x^4}$,
2. $\displaystyle \int_{-\infty}^{\infty} \frac{x^2\,dx}{(1+x^2)^2}$,
3. $\displaystyle \int_{-\infty}^{\infty} \frac{\cos(x)}{1+x^2}\,dx$.
Hint, Solution
Exercise 13.16
Evaluate by contour integration
\[
\int_0^{\infty} \frac{x^6}{(x^4+1)^2}\,dx.
\]
Hint, Solution

Fourier Integrals
Exercise 13.17
Suppose that $f(z)$ vanishes as $|z| \to \infty$. If $ω$ is a (positive / negative) real number and $C_R$ is a semi-circle of radius $R$ in the (upper / lower) half plane then show that the integral
\[
\int_{C_R} f(z) e^{ıωz}\,dz
\]
vanishes as $R \to \infty$.
Hint, Solution
Exercise 13.18
Evaluate by contour integration
\[
\int_{-\infty}^{\infty} \frac{\cos 2x}{x-ı\pi}\,dx.
\]
Hint, Solution

Fourier Cosine and Sine Integrals
Exercise 13.19
Evaluate
\[
\int_{-\infty}^{\infty} \frac{\sin x}{x}\,dx.
\]
Hint, Solution
Exercise 13.20
Evaluate
\[
\int_{-\infty}^{\infty} \frac{1-\cos x}{x^2}\,dx.
\]
Hint, Solution
Exercise 13.21
Evaluate
\[
\int_0^{\infty} \frac{\sin(\pi x)}{x(1-x^2)}\,dx.
\]
Hint, Solution
Contour Integration and Branch Cuts
Exercise 13.22
Evaluate the following integrals.
1. $\displaystyle \int_0^{\infty} \frac{\ln^2 x}{1+x^2}\,dx = \frac{\pi^3}{8}$
2. $\displaystyle \int_0^{\infty} \frac{\ln x}{1+x^2}\,dx = 0$
Hint, Solution
Exercise 13.23
By methods of contour integration find
\[
\int_0^{\infty} \frac{dx}{x^2+5x+6}
\]
[ Recall the trick of considering $\int_Γ f(z)\log z\,dz$ with a suitably chosen contour $Γ$ and branch for $\log z$. ]
Hint, Solution
Exercise 13.24
Show that
\[
\int_0^{\infty} \frac{x^a}{(x+1)^2}\,dx = \frac{\pi a}{\sin(\pi a)} \quad \text{for } -1 < \Re(a) < 1.
\]
From this derive that
\[
\int_0^{\infty} \frac{\log x}{(x+1)^2}\,dx = 0, \qquad \int_0^{\infty} \frac{\log^2 x}{(x+1)^2}\,dx = \frac{\pi^2}{3}.
\]
Hint, Solution
Exercise 13.25
Consider the integral
\[
I(a) = \int_0^{\infty} \frac{x^a}{1+x^2}\,dx.
\]
1. For what values of $a$ does the integral exist?
2. Evaluate the integral. Show that
\[
I(a) = \frac{\pi}{2\cos(\pi a/2)}
\]
3. Deduce from your answer in part (b) the results
\[
\int_0^{\infty} \frac{\log x}{1+x^2}\,dx = 0, \qquad \int_0^{\infty} \frac{\log^2 x}{1+x^2}\,dx = \frac{\pi^3}{8}.
\]
You may assume that it is valid to differentiate under the integral sign.
Hint, Solution
Exercise 13.26
Let $f(z)$ be a single-valued analytic function with only isolated singularities and no singularities on the positive real axis, $[0, \infty)$. Give sufficient conditions on $f(x)$ for absolute convergence of the integral
\[
\int_0^{\infty} x^a f(x)\,dx.
\]
Assume that $a$ is not an integer. Evaluate the integral by considering the integral of $z^a f(z)$ on a suitable contour. (Consider the branch of $z^a$ on which $1^a = 1$.)
Hint, Solution
Exercise 13.27
Using the solution to Exercise 13.26, evaluate
\[
\int_0^{\infty} x^a f(x) \log x\,dx,
\]
and
\[
\int_0^{\infty} x^a f(x) \log^m x\,dx,
\]
where $m$ is a positive integer.
Hint, Solution
Exercise 13.28
Using the solution to Exercise 13.26, evaluate
\[
\int_0^{\infty} f(x)\,dx,
\]
i.e. examine $a = 0$. The solution will suggest a way to evaluate the integral with contour integration. Do the contour integration to corroborate the value of
\[
\int_0^{\infty} f(x)\,dx.
\]
Hint, Solution
Exercise 13.29
Let $f(z)$ be an analytic function with only isolated singularities and no singularities on the positive real axis, $[0, \infty)$. Give sufficient conditions on $f(x)$ for absolute convergence of the integral
\[
\int_0^{\infty} f(x) \log x\,dx
\]
Evaluate the integral with contour integration.
Hint, Solution
Exercise 13.30
For what values of $a$ does the following integral exist?
\[
\int_0^{\infty} \frac{x^a}{1+x^4}\,dx.
\]
Evaluate the integral. (Consider the branch of $x^a$ on which $1^a = 1$.)
Hint, Solution
Exercise 13.31
By considering the integral of $f(z) = z^{1/2}\log z/(z+1)^2$ on a suitable contour, show that
\[
\int_0^{\infty} \frac{x^{1/2}\log x}{(x+1)^2}\,dx = \pi, \qquad \int_0^{\infty} \frac{x^{1/2}}{(x+1)^2}\,dx = \frac{\pi}{2}.
\]
Hint, Solution

Exploiting Symmetry
Exercise 13.32
Evaluate by contour integration, the principal value integral
\[
I(a) = ⨍_{-\infty}^{\infty} \frac{e^{ax}}{e^x - e^{-x}}\,dx
\]
for $a$ real and $|a| < 1$. [Hint: Consider the contour that is the boundary of the box, $-R < x < R$, $0 < y < \pi$, but indented around $z = 0$ and $z = ı\pi$.]
Hint, Solution
Exercise 13.33
Evaluate the following integrals.
1.
∞
0
dx
(1 + x2)2
,
2.
∞
0
dx
1 + x3
.
Hint, Solution
Exercise 13.34
Find the value of the integral I
I =
∞
0
dx
1 + x6
by considering the contour integral
Γ
dz
1 + z6
with an appropriately chosen contour Γ.
Hint, Solution
Exercise 13.35
Let C be the boundary of the sector 0 < r < R, 0 < θ < π/4. By integrating \(e^{-z^2}\) on C and letting
R → ∞ show that
\[ \int_0^\infty \cos(x^2) \,dx = \int_0^\infty \sin(x^2) \,dx = \frac{1}{\sqrt{2}} \int_0^\infty e^{-x^2} \,dx. \]
Hint, Solution
Exercise 13.36
Evaluate
\[ \int_{-\infty}^{\infty} \frac{x}{\sinh x} \,dx \]
using contour integration.
Hint, Solution
Exercise 13.37
Show that
\[ \int_{-\infty}^{\infty} \frac{e^{ax}}{e^x + 1} \,dx = \frac{\pi}{\sin(\pi a)} \quad \text{for } 0 < a < 1. \]
Use this to derive that
\[ \int_{-\infty}^{\infty} \frac{\cosh(bx)}{\cosh x} \,dx = \frac{\pi}{\cos(\pi b/2)} \quad \text{for } -1 < b < 1. \]
Hint, Solution
Exercise 13.38
Using techniques of contour integration find, for real a and b,
\[ F(a, b) = \int_0^\pi \frac{d\theta}{(a + b\cos\theta)^2}. \]
What are the restrictions on a and b, if any? Can the result be applied for complex a, b? How?
Hint, Solution
Exercise 13.39
Show that
\[ \int_{-\infty}^{\infty} \frac{\cos x}{e^x + e^{-x}} \,dx = \frac{\pi}{e^{\pi/2} + e^{-\pi/2}}. \]
[ Hint: Begin by considering the integral of \(e^{\imath z}/(e^z + e^{-z})\) around a rectangle with vertices ±R,
±R + ıπ. ]
Hint, Solution
Definite Integrals Involving Sine and Cosine
Exercise 13.40
Evaluate the following real integrals.
1. \[ \int_{-\pi}^{\pi} \frac{d\theta}{1 + \sin^2\theta} = \sqrt{2}\,\pi \]
2. \[ \int_0^{\pi/2} \sin^4\theta \,d\theta \]
Hint, Solution
Exercise 13.41
Use contour integration to evaluate the integrals
1. \[ \int_0^{2\pi} \frac{d\theta}{2 + \sin\theta}, \]
2. \[ \int_{-\pi}^{\pi} \frac{\cos(n\theta)}{1 - 2a\cos\theta + a^2} \,d\theta \quad \text{for } |a| < 1,\ n \in \mathbb{Z}^{0+}. \]
Hint, Solution
Exercise 13.42
By integration around the unit circle, suitably indented, show that
\[ \mathrm{PV}\!\int_0^\pi \frac{\cos(n\theta)}{\cos\theta - \cos\alpha} \,d\theta = \pi\frac{\sin(n\alpha)}{\sin\alpha}. \]
Hint, Solution
Exercise 13.43
Evaluate
\[ \int_0^1 \frac{x^2}{(1 + x^2)\sqrt{1 - x^2}} \,dx. \]
Hint, Solution
Infinite Sums
Exercise 13.44
Evaluate
\[ \sum_{n=1}^{\infty} \frac{1}{n^4}. \]
Hint, Solution
Exercise 13.45
Sum the following series using contour integration:
\[ \sum_{n=-\infty}^{\infty} \frac{1}{n^2 - \alpha^2} \]
Hint, Solution
13.12 Hints
The Residue Theorem
Hint 13.1
Hint 13.2
Hint 13.3
Hint 13.4
Substitute the Laurent series into the formula and simplify.
Hint 13.5
Use that the sum of all residues of the function in the extended complex plane is zero in calculating
the residue at infinity. To obtain the Laurent series expansion about z = ı, write the function as
a proper rational function, (numerator has a lower degree than the denominator) and expand in
partial fractions.
Hint 13.6
Hint 13.7
Cauchy Principal Value for Real Integrals
Hint 13.8
Hint 13.9
For the third part, does the integrand have a term that behaves like \(1/x^2\)?
Cauchy Principal Value for Contour Integrals
Hint 13.10
Expand f(z) in a Laurent series. Only the first term will make a contribution to the integral in the
limit as ε → 0⁺.
Hint 13.11
Use the result of Exercise 13.9.
Hint 13.12
Look at Example 13.3.2.
Integrals on the Real Axis
Hint 13.13
Hint 13.14
Close the path of integration in the upper or lower half plane with a semi-circle. Use the maximum
modulus integral bound, (Result 10.2.1), to show that the integral along the semi-circle vanishes.
Hint 13.15
Make the change of variables x = 1/ξ.
Hint 13.16
Use Result 13.4.1.
Hint 13.17
Fourier Integrals
Hint 13.18
Use
\[ \int_0^\pi e^{-R\sin\theta} \,d\theta < \frac{\pi}{R}. \]
Hint 13.19
Fourier Cosine and Sine Integrals
Hint 13.20
Consider the integral of \(e^{\imath x}/(\imath x)\).
Hint 13.21
Show that
\[ \int_{-\infty}^{\infty} \frac{1 - \cos x}{x^2} \,dx = \mathrm{PV}\!\int_{-\infty}^{\infty} \frac{1 - e^{\imath x}}{x^2} \,dx. \]
Hint 13.22
Show that
\[ \int_0^\infty \frac{\sin(\pi x)}{x(1 - x^2)} \,dx = -\frac{\imath}{2}\,\mathrm{PV}\!\int_{-\infty}^{\infty} \frac{e^{\imath x}}{x(1 - x^2)} \,dx. \]
Contour Integration and Branch Cuts
Hint 13.23
Integrate a branch of \(\log^2 z/(1 + z^2)\) along the boundary of the domain ε < r < R, 0 < θ < π.
Hint 13.24
Hint 13.25
Note that
\[ \int_0^1 x^a \,dx \]
converges for \(\Re(a) > -1\); and
\[ \int_1^\infty x^{a-2} \,dx \]
converges for \(\Re(a) < 1\).
Consider \(f(z) = z^a/(z+1)^2\) with a branch cut along the positive real axis and the contour in
Figure ?? in the limit as ρ → 0 and R → ∞.
To derive the last two integrals, differentiate with respect to a.
Hint 13.26
Hint 13.27
Consider the integral of \(z^a f(z)\) on the contour in Figure ??.
Hint 13.28
Differentiate with respect to a.
Hint 13.29
Take the limit as a → 0. Use L’Hospital’s rule. To corroborate the result, consider the integral of
f(z) log z on an appropriate contour.
Hint 13.30
Consider the integral of \(f(z)\log^2 z\) on the contour in Figure ??.
Hint 13.31
Consider the integral of
\[ f(z) = \frac{z^a}{1 + z^4} \]
on the boundary of the region ε < r < R, 0 < θ < π/2. Take the limits as ε → 0 and R → ∞.
Hint 13.32
Consider the branch of \(f(z) = z^{1/2}\log z/(z+1)^2\) with a branch cut on the positive real axis and
0 < arg z < 2π. Integrate this function on the contour in Figure ??.
Exploiting Symmetry
Hint 13.33
Hint 13.34
For the second part, consider the integral along the boundary of the region, 0 < r < R, 0 < θ < 2π/3.
Hint 13.35
Hint 13.36
To show that the integral on the quarter-circle vanishes as R → ∞, establish the inequality
\[ \cos 2\theta \geq 1 - \frac{4}{\pi}\theta, \qquad 0 \leq \theta \leq \frac{\pi}{4}. \]
Hint 13.37
Consider the box contour C that is the boundary of the rectangle −R ≤ x ≤ R, 0 ≤ y ≤ π. The
value of the integral is \(\pi^2/2\).
Hint 13.38
Consider the rectangular contour with corners at ±R and ±R + ı2π. Let R → ∞.
Hint 13.39
Hint 13.40
Definite Integrals Involving Sine and Cosine
Hint 13.41
Hint 13.42
Hint 13.43
Hint 13.44
Make the changes of variables x = sin ξ and then \(z = e^{\imath\xi}\).
Infinite Sums
Hint 13.45
Use Result 13.10.1.
Hint 13.46
Figure 13.7: The contour r = 2 cos(2θ).
13.13 Solutions
The Residue Theorem
Solution 13.2
1. We consider
\[ \int_C \frac{dz}{z^2 - 1} \]
where C is the contour parameterized by r = 2 cos(2θ), 0 ≤ θ ≤ 2π. (See Figure 13.7.) There
are first order poles at z = ±1. We evaluate the integral with Cauchy's residue theorem.
\begin{align*}
\int_C \frac{dz}{z^2 - 1}
&= \imath 2\pi \left( \operatorname{Res}\left( \frac{1}{z^2-1}, z=1 \right) + \operatorname{Res}\left( \frac{1}{z^2-1}, z=-1 \right) \right) \\
&= \imath 2\pi \left( \left. \frac{1}{z+1} \right|_{z=1} + \left. \frac{1}{z-1} \right|_{z=-1} \right) \\
&= 0
\end{align*}
2. We consider the integral
\[ \int_C \frac{e^{\imath z}}{z^2(z-2)(z+\imath 5)} \,dz, \]
where C is the positive circle |z| = 3. There is a second order pole at z = 0, and first order
poles at z = 2 and z = −ı5. The poles at z = 0 and z = 2 lie inside the contour. We evaluate
the integral with Cauchy's residue theorem.
\begin{align*}
\int_C \frac{e^{\imath z}}{z^2(z-2)(z+\imath 5)} \,dz
&= \imath 2\pi \left( \operatorname{Res}\left( \frac{e^{\imath z}}{z^2(z-2)(z+\imath 5)}, z=0 \right) + \operatorname{Res}\left( \frac{e^{\imath z}}{z^2(z-2)(z+\imath 5)}, z=2 \right) \right) \\
&= \imath 2\pi \left( \left. \frac{d}{dz}\frac{e^{\imath z}}{(z-2)(z+\imath 5)} \right|_{z=0} + \left. \frac{e^{\imath z}}{z^2(z+\imath 5)} \right|_{z=2} \right) \\
&= \imath 2\pi \left( \left. \frac{\left( \imath z^2 + (\imath 7 - 2)z - 5 - \imath 12 \right) e^{\imath z}}{(z-2)^2(z+\imath 5)^2} \right|_{z=0} + \left( \frac{1}{58} - \imath\frac{5}{116} \right) e^{\imath 2} \right) \\
&= \imath 2\pi \left( -\frac{3}{25} + \frac{\imath}{20} + \left( \frac{1}{58} - \imath\frac{5}{116} \right) e^{\imath 2} \right) \\
&= -\frac{\pi}{10} + \frac{5}{58}\pi\cos 2 - \frac{1}{29}\pi\sin 2 + \imath\left( -\frac{6\pi}{25} + \frac{1}{29}\pi\cos 2 + \frac{5}{58}\pi\sin 2 \right)
\end{align*}
3. We consider the integral
\[ \int_C e^{1/z}\sin(1/z) \,dz \]
where C is the positive circle |z| = 1. There is an essential singularity at z = 0. We determine
the residue there by expanding the integrand in a Laurent series.
\[ e^{1/z}\sin(1/z) = \left( 1 + \frac{1}{z} + O\!\left( \frac{1}{z^2} \right) \right)\left( \frac{1}{z} + O\!\left( \frac{1}{z^3} \right) \right) = \frac{1}{z} + O\!\left( \frac{1}{z^2} \right) \]
The residue at z = 0 is 1. We evaluate the integral with the residue theorem.
\[ \int_C e^{1/z}\sin(1/z) \,dz = \imath 2\pi \]
Solution 13.3
If f(ζ) is analytic in a compact, closed, connected domain D and z is a point in the interior of D
then Cauchy's integral formula states
\[ f^{(n)}(z) = \frac{n!}{\imath 2\pi} \int_{\partial D} \frac{f(\zeta)}{(\zeta - z)^{n+1}} \,d\zeta. \]
To corroborate this, we evaluate the integral with Cauchy's residue theorem. There is a pole of order
n + 1 at the point ζ = z.
\[ \frac{n!}{\imath 2\pi} \int_{\partial D} \frac{f(\zeta)}{(\zeta - z)^{n+1}} \,d\zeta = \frac{n!}{\imath 2\pi}\,\frac{\imath 2\pi}{n!} \left. \frac{d^n}{d\zeta^n} f(\zeta) \right|_{\zeta=z} = f^{(n)}(z) \]
Solution 13.4
1.
\[ \frac{1}{z^4 - a^4} = \frac{1}{(z-a)(z+a)(z-\imath a)(z+\imath a)} \]
There are first order poles at z = ±a and z = ±ıa. We calculate the residues there.
\begin{align*}
\operatorname{Res}\left( \frac{1}{z^4 - a^4}, z = a \right) &= \left. \frac{1}{(z+a)(z-\imath a)(z+\imath a)} \right|_{z=a} = \frac{1}{4a^3} \\
\operatorname{Res}\left( \frac{1}{z^4 - a^4}, z = -a \right) &= \left. \frac{1}{(z-a)(z-\imath a)(z+\imath a)} \right|_{z=-a} = -\frac{1}{4a^3} \\
\operatorname{Res}\left( \frac{1}{z^4 - a^4}, z = \imath a \right) &= \left. \frac{1}{(z-a)(z+a)(z+\imath a)} \right|_{z=\imath a} = \frac{\imath}{4a^3} \\
\operatorname{Res}\left( \frac{1}{z^4 - a^4}, z = -\imath a \right) &= \left. \frac{1}{(z-a)(z+a)(z-\imath a)} \right|_{z=-\imath a} = -\frac{\imath}{4a^3}
\end{align*}
2.
\[ \frac{\sin z}{z^2} \]
Since the denominator has a second order zero at z = 0 and the numerator has a first order zero
there, the function has a first order pole at z = 0. We calculate the residue there.
\[ \operatorname{Res}\left( \frac{\sin z}{z^2}, z = 0 \right) = \lim_{z \to 0} \frac{\sin z}{z} = \lim_{z \to 0} \frac{\cos z}{1} = 1 \]
3.
\[ \frac{1 + z^2}{z(z-1)^2} \]
There is a first order pole at z = 0 and a second order pole at z = 1.
\begin{align*}
\operatorname{Res}\left( \frac{1+z^2}{z(z-1)^2}, z = 0 \right) &= \left. \frac{1+z^2}{(z-1)^2} \right|_{z=0} = 1 \\
\operatorname{Res}\left( \frac{1+z^2}{z(z-1)^2}, z = 1 \right) &= \left. \frac{d}{dz}\frac{1+z^2}{z} \right|_{z=1} = \left. \left( 1 - \frac{1}{z^2} \right) \right|_{z=1} = 0
\end{align*}
4. \(e^z/(z^2 + a^2)\) has first order poles at z = ±ıa. We calculate the residues there.
\begin{align*}
\operatorname{Res}\left( \frac{e^z}{z^2 + a^2}, z = \imath a \right) &= \left. \frac{e^z}{z + \imath a} \right|_{z=\imath a} = -\frac{\imath e^{\imath a}}{2a} \\
\operatorname{Res}\left( \frac{e^z}{z^2 + a^2}, z = -\imath a \right) &= \left. \frac{e^z}{z - \imath a} \right|_{z=-\imath a} = \frac{\imath e^{-\imath a}}{2a}
\end{align*}
5. Since 1 − cos z has a second order zero at z = 0, \((1-\cos z)^2/z^7\) has a third order pole at that point.
We find the residue by expanding the function in a Laurent series.
\begin{align*}
\frac{(1-\cos z)^2}{z^7}
&= z^{-7}\left( 1 - \left( 1 - \frac{z^2}{2} + \frac{z^4}{24} + O(z^6) \right) \right)^2 \\
&= z^{-7}\left( \frac{z^2}{2} - \frac{z^4}{24} + O(z^6) \right)^2 \\
&= z^{-7}\left( \frac{z^4}{4} - \frac{z^6}{24} + O(z^8) \right) \\
&= \frac{1}{4z^3} - \frac{1}{24z} + O(z)
\end{align*}
The residue at z = 0 is −1/24.
Solution 13.5
Since f(z) has an isolated pole of order n at z = z0, it has a Laurent series that is convergent in a
deleted neighborhood about that point. We substitute this Laurent series into the Residue Formula
to verify it.
\begin{align*}
\operatorname{Res}(f(z), z_0)
&= \lim_{z \to z_0} \frac{1}{(n-1)!} \frac{d^{n-1}}{dz^{n-1}} \left[ (z - z_0)^n f(z) \right] \\
&= \lim_{z \to z_0} \frac{1}{(n-1)!} \frac{d^{n-1}}{dz^{n-1}} \left( (z - z_0)^n \sum_{k=-n}^{\infty} a_k (z - z_0)^k \right) \\
&= \lim_{z \to z_0} \frac{1}{(n-1)!} \frac{d^{n-1}}{dz^{n-1}} \sum_{k=0}^{\infty} a_{k-n} (z - z_0)^k \\
&= \lim_{z \to z_0} \frac{1}{(n-1)!} \sum_{k=n-1}^{\infty} a_{k-n} \frac{k!}{(k-n+1)!} (z - z_0)^{k-n+1} \\
&= \lim_{z \to z_0} \frac{1}{(n-1)!} \sum_{k=0}^{\infty} a_{k-1} \frac{(k+n-1)!}{k!} (z - z_0)^k \\
&= \frac{1}{(n-1)!}\, a_{-1} \frac{(n-1)!}{0!} \\
&= a_{-1}
\end{align*}
This proves the Residue Formula.
Solution 13.6
Classify Singularities.
\[ f(z) = \frac{z^4}{z^2 + 1} = \frac{z^4}{(z - \imath)(z + \imath)}. \]
There are first order poles at z = ±ı. Since the function behaves like \(z^2\) at infinity, there is a second
order pole there. To see this more slowly, we can make the substitution z = 1/ζ and examine the
point ζ = 0.
\[ f\!\left( \frac{1}{\zeta} \right) = \frac{\zeta^{-4}}{\zeta^{-2} + 1} = \frac{1}{\zeta^2 + \zeta^4} = \frac{1}{\zeta^2(1 + \zeta^2)} \]
f(1/ζ) has a second order pole at ζ = 0, which implies that f(z) has a second order pole at infinity.
Residues. The residues at z = ±ı are,
\[ \operatorname{Res}\left( \frac{z^4}{z^2+1}, \imath \right) = \lim_{z \to \imath} \frac{z^4}{z + \imath} = -\frac{\imath}{2}, \qquad \operatorname{Res}\left( \frac{z^4}{z^2+1}, -\imath \right) = \lim_{z \to -\imath} \frac{z^4}{z - \imath} = \frac{\imath}{2}. \]
The residue at infinity is
\begin{align*}
\operatorname{Res}(f(z), \infty)
&= \operatorname{Res}\left( \frac{-1}{\zeta^2} f\!\left( \frac{1}{\zeta} \right), \zeta = 0 \right) \\
&= \operatorname{Res}\left( \frac{-1}{\zeta^2}\,\frac{\zeta^{-4}}{\zeta^{-2} + 1}, \zeta = 0 \right) \\
&= \operatorname{Res}\left( -\frac{\zeta^{-4}}{1 + \zeta^2}, \zeta = 0 \right)
\end{align*}
Here we could use the residue formula, but it's easier to find the Laurent expansion.
\[ = \operatorname{Res}\left( -\zeta^{-4} \sum_{n=0}^{\infty} (-1)^n \zeta^{2n}, \zeta = 0 \right) = 0 \]
We could also calculate the residue at infinity by recalling that the sum of all residues of this function
in the extended complex plane is zero.
\[ -\frac{\imath}{2} + \frac{\imath}{2} + \operatorname{Res}(f(z), \infty) = 0 \qquad\Longrightarrow\qquad \operatorname{Res}(f(z), \infty) = 0 \]
Laurent Series about z = 0. Since the nearest singularities are at z = ±ı, the Taylor series
will converge in the disk |z| < 1.
\[ \frac{z^4}{z^2 + 1} = z^4 \frac{1}{1 - (-z^2)} = z^4 \sum_{n=0}^{\infty} (-z^2)^n = z^4 \sum_{n=0}^{\infty} (-1)^n z^{2n} = \sum_{n=2}^{\infty} (-1)^n z^{2n} \]
This geometric series converges for \(|-z^2| < 1\), or |z| < 1. The series expansion of the function is
\[ \frac{z^4}{z^2 + 1} = \sum_{n=2}^{\infty} (-1)^n z^{2n} \quad \text{for } |z| < 1. \]
Laurent Series about z = ı. We expand f(z) in partial fractions. First we write the function
as a proper rational function, (i.e. the numerator has lower degree than the denominator). By
polynomial division, we see that
\[ f(z) = z^2 - 1 + \frac{1}{z^2 + 1}. \]
Now we expand the last term in partial fractions.
\[ f(z) = z^2 - 1 + \frac{-\imath/2}{z - \imath} + \frac{\imath/2}{z + \imath} \]
Since the nearest singularity is at z = −ı, the Laurent series will converge in the annulus 0 < |z − ı| < 2.
\begin{align*}
z^2 - 1 &= ((z - \imath) + \imath)^2 - 1 = (z - \imath)^2 + \imath 2(z - \imath) - 2 \\
\frac{\imath/2}{z + \imath} &= \frac{\imath/2}{\imath 2 + (z - \imath)} = \frac{1/4}{1 - \imath(z - \imath)/2}
= \frac{1}{4} \sum_{n=0}^{\infty} \left( \frac{\imath(z - \imath)}{2} \right)^n
= \frac{1}{4} \sum_{n=0}^{\infty} \frac{\imath^n}{2^n} (z - \imath)^n
\end{align*}
This geometric series converges for |ı(z − ı)/2| < 1, or |z − ı| < 2. The series expansion of f(z) is
\[ \frac{z^4}{z^2 + 1} = \frac{-\imath/2}{z - \imath} - 2 + \imath 2(z - \imath) + (z - \imath)^2 + \frac{1}{4} \sum_{n=0}^{\infty} \frac{\imath^n}{2^n} (z - \imath)^n \quad \text{for } |z - \imath| < 2. \]
Laurent Series about z = ∞. Since the nearest singularities are at z = ±ı, the Laurent series
will converge in the annulus 1 < |z| < ∞.
\[ \frac{z^4}{z^2 + 1} = \frac{z^2}{1 + 1/z^2} = z^2 \sum_{n=0}^{\infty} \left( -\frac{1}{z^2} \right)^n = \sum_{n=-\infty}^{0} (-1)^n z^{2(n+1)} = \sum_{n=-\infty}^{1} (-1)^{n+1} z^{2n} \]
This geometric series converges for \(|-1/z^2| < 1\), or |z| > 1. The series expansion of f(z) is
\[ \frac{z^4}{z^2 + 1} = \sum_{n=-\infty}^{1} (-1)^{n+1} z^{2n} \quad \text{for } 1 < |z| < \infty. \]
Solution 13.7
Method 1: Residue Theorem. We factor P(z). Let m be the number of roots, counting
multiplicities, that lie inside the contour Γ. We find a simple expression for P'(z)/P(z).
\[ P(z) = c \prod_{k=1}^{n} (z - z_k), \qquad P'(z) = c \sum_{k=1}^{n} \prod_{\substack{j=1 \\ j \neq k}}^{n} (z - z_j) \]
\[ \frac{P'(z)}{P(z)} = \frac{c \sum_{k=1}^{n} \prod_{j=1,\, j \neq k}^{n} (z - z_j)}{c \prod_{k=1}^{n} (z - z_k)}
= \sum_{k=1}^{n} \frac{\prod_{j=1,\, j \neq k}^{n} (z - z_j)}{\prod_{j=1}^{n} (z - z_j)}
= \sum_{k=1}^{n} \frac{1}{z - z_k} \]
Now we do the integration using the residue theorem.
\begin{align*}
\frac{1}{\imath 2\pi} \int_\Gamma \frac{P'(z)}{P(z)} \,dz
&= \frac{1}{\imath 2\pi} \int_\Gamma \sum_{k=1}^{n} \frac{1}{z - z_k} \,dz \\
&= \sum_{k=1}^{n} \frac{1}{\imath 2\pi} \int_\Gamma \frac{1}{z - z_k} \,dz \\
&= \sum_{z_k \text{ inside } \Gamma} \frac{1}{\imath 2\pi} \int_\Gamma \frac{1}{z - z_k} \,dz \\
&= \sum_{z_k \text{ inside } \Gamma} 1 \\
&= m
\end{align*}
Method 2: Fundamental Theorem of Calculus. We factor the polynomial, \(P(z) = c \prod_{k=1}^{n} (z - z_k)\).
Let m be the number of roots, counting multiplicities, that lie inside the contour Γ.
\begin{align*}
\frac{1}{\imath 2\pi} \int_\Gamma \frac{P'(z)}{P(z)} \,dz
&= \frac{1}{\imath 2\pi} \Big[ \log P(z) \Big]_C \\
&= \frac{1}{\imath 2\pi} \left[ \log \prod_{k=1}^{n} (z - z_k) \right]_C \\
&= \frac{1}{\imath 2\pi} \left[ \sum_{k=1}^{n} \log(z - z_k) \right]_C
\end{align*}
The value of the logarithm changes by ı2π for the terms in which z_k is inside the contour. Its value
does not change for the terms in which z_k is outside the contour.
\[ = \frac{1}{\imath 2\pi} \left[ \sum_{z_k \text{ inside } \Gamma} \log(z - z_k) \right]_C = \frac{1}{\imath 2\pi} \sum_{z_k \text{ inside } \Gamma} \imath 2\pi = m \]
Solution 13.8
1.
\[ \int_C \frac{e^z}{(z - \pi)\tan z} \,dz = \int_C \frac{e^z \cos z}{(z - \pi)\sin z} \,dz \]
The integrand has first order poles at z = nπ, n ∈ Z, n ≠ 1, and a double pole at z = π.
The only pole inside the contour occurs at z = 0. We evaluate the integral with the residue
theorem.
\begin{align*}
\int_C \frac{e^z \cos z}{(z - \pi)\sin z} \,dz
&= \imath 2\pi \operatorname{Res}\left( \frac{e^z \cos z}{(z - \pi)\sin z}, z = 0 \right) \\
&= \imath 2\pi \lim_{z \to 0} \frac{z\, e^z \cos z}{(z - \pi)\sin z} \\
&= -\imath 2 \lim_{z \to 0} \frac{z}{\sin z} \\
&= -\imath 2 \lim_{z \to 0} \frac{1}{\cos z} \\
&= -\imath 2
\end{align*}
\[ \int_C \frac{e^z}{(z - \pi)\tan z} \,dz = -\imath 2 \]
2. The integrand has first order poles at z = 0, −π and a second order pole at z = π inside the
contour. The value of the integral is ı2π times the sum of the residues at these points. From
the previous part we know the residue at z = 0.
\[ \operatorname{Res}\left( \frac{e^z \cos z}{(z - \pi)\sin z}, z = 0 \right) = -\frac{1}{\pi} \]
We find the residue at z = −π with the residue formula.
\begin{align*}
\operatorname{Res}\left( \frac{e^z \cos z}{(z - \pi)\sin z}, z = -\pi \right)
&= \lim_{z \to -\pi} (z + \pi)\frac{e^z \cos z}{(z - \pi)\sin z} \\
&= \frac{e^{-\pi}(-1)}{-2\pi} \lim_{z \to -\pi} \frac{z + \pi}{\sin z} \\
&= \frac{e^{-\pi}}{2\pi} \lim_{z \to -\pi} \frac{1}{\cos z} \\
&= -\frac{e^{-\pi}}{2\pi}
\end{align*}
We find the residue at z = π by finding the first few terms in the Laurent series of the integrand.
\begin{align*}
\frac{e^z \cos z}{(z - \pi)\sin z}
&= \frac{\left( e^\pi + e^\pi (z - \pi) + O\!\left( (z-\pi)^2 \right) \right)\left( -1 + O\!\left( (z-\pi)^2 \right) \right)}{(z - \pi)\left( -(z - \pi) + O\!\left( (z-\pi)^3 \right) \right)} \\
&= \frac{-e^\pi - e^\pi (z - \pi) + O\!\left( (z-\pi)^2 \right)}{-(z - \pi)^2 + O\!\left( (z-\pi)^4 \right)} \\
&= \frac{\frac{e^\pi}{(z-\pi)^2} + \frac{e^\pi}{z-\pi} + O(1)}{1 + O\!\left( (z-\pi)^2 \right)} \\
&= \left( \frac{e^\pi}{(z-\pi)^2} + \frac{e^\pi}{z-\pi} + O(1) \right)\left( 1 + O\!\left( (z-\pi)^2 \right) \right) \\
&= \frac{e^\pi}{(z-\pi)^2} + \frac{e^\pi}{z-\pi} + O(1)
\end{align*}
With this we see that
\[ \operatorname{Res}\left( \frac{e^z \cos z}{(z - \pi)\sin z}, z = \pi \right) = e^\pi. \]
The integral is ı2π times the sum of the residues at z = −π, z = 0 and z = π.
\[ \int_C \frac{e^z \cos z}{(z - \pi)\sin z} \,dz = \imath 2\pi \left( -\frac{1}{\pi} - \frac{e^{-\pi}}{2\pi} + e^\pi \right) \]
\[ \int_C \frac{e^z}{(z - \pi)\tan z} \,dz = \imath\left( 2\pi e^\pi - 2 - e^{-\pi} \right) \]
Cauchy Principal Value for Real Integrals
Solution 13.9
Consider the integral
\[ \int_{-1}^{1} \frac{1}{x} \,dx. \]
By the definition of improper integrals we have
\begin{align*}
\int_{-1}^{1} \frac{1}{x} \,dx
&= \lim_{\epsilon \to 0^+} \int_{-1}^{-\epsilon} \frac{1}{x} \,dx + \lim_{\delta \to 0^+} \int_{\delta}^{1} \frac{1}{x} \,dx \\
&= \lim_{\epsilon \to 0^+} \Big[ \log|x| \Big]_{-1}^{-\epsilon} + \lim_{\delta \to 0^+} \Big[ \log|x| \Big]_{\delta}^{1} \\
&= \lim_{\epsilon \to 0^+} \log\epsilon - \lim_{\delta \to 0^+} \log\delta
\end{align*}
This limit diverges. Thus the integral diverges.
Now consider the integral
\[ \int_{-1}^{1} \frac{1}{x - \imath\alpha} \,dx \]
where α ∈ R, α ≠ 0. Since the integrand is bounded, the integral exists.
\begin{align*}
\int_{-1}^{1} \frac{1}{x - \imath\alpha} \,dx
&= \int_{-1}^{1} \frac{x + \imath\alpha}{x^2 + \alpha^2} \,dx \\
&= \int_{-1}^{1} \frac{\imath\alpha}{x^2 + \alpha^2} \,dx \\
&= \imath 2 \int_{0}^{1} \frac{\alpha}{x^2 + \alpha^2} \,dx \\
&= \imath 2 \int_{0}^{1/\alpha} \frac{1}{\xi^2 + 1} \,d\xi \\
&= \imath 2 \Big[ \arctan\xi \Big]_{0}^{1/\alpha} \\
&= \imath 2 \arctan\!\left( \frac{1}{\alpha} \right)
\end{align*}
Note that the integral exists for all nonzero real α and that
\[ \lim_{\alpha \to 0^+} \int_{-1}^{1} \frac{1}{x - \imath\alpha} \,dx = \imath\pi
\quad \text{and} \quad
\lim_{\alpha \to 0^-} \int_{-1}^{1} \frac{1}{x - \imath\alpha} \,dx = -\imath\pi. \]
Figure 13.8: The real and imaginary part of the integrand for several values of α.
The integral exists for α arbitrarily close to zero, but diverges when α = 0. The real part of the
integrand is an odd function with two humps that get thinner and taller with decreasing α. The
imaginary part of the integrand is an even function with a hump that gets thinner and taller with
decreasing α. (See Figure 13.8.)
\[ \Re\!\left( \frac{1}{x - \imath\alpha} \right) = \frac{x}{x^2 + \alpha^2}, \qquad \Im\!\left( \frac{1}{x - \imath\alpha} \right) = \frac{\alpha}{x^2 + \alpha^2} \]
Note that
\[ \Re \int_{0}^{1} \frac{1}{x - \imath\alpha} \,dx \to +\infty \text{ as } \alpha \to 0^+
\quad \text{and} \quad
\Re \int_{-1}^{0} \frac{1}{x - \imath\alpha} \,dx \to -\infty \text{ as } \alpha \to 0^-. \]
However,
\[ \lim_{\alpha \to 0} \Re \int_{-1}^{1} \frac{1}{x - \imath\alpha} \,dx = 0 \]
because the two integrals above cancel each other.
Now note that when α = 0, the integrand is real. Of course the integral doesn't converge for this
case, but if we could assign some value to
\[ \int_{-1}^{1} \frac{1}{x} \,dx \]
it would be a real number. Since
\[ \lim_{\alpha \to 0} \Re \int_{-1}^{1} \frac{1}{x - \imath\alpha} \,dx = 0, \]
this number should be zero.
Solution 13.10
1.
\begin{align*}
\mathrm{PV}\!\int_{-1}^{1} \frac{1}{x^2} \,dx
&= \lim_{\epsilon \to 0^+} \left( \int_{-1}^{-\epsilon} \frac{1}{x^2} \,dx + \int_{\epsilon}^{1} \frac{1}{x^2} \,dx \right) \\
&= \lim_{\epsilon \to 0^+} \left( \left[ -\frac{1}{x} \right]_{-1}^{-\epsilon} + \left[ -\frac{1}{x} \right]_{\epsilon}^{1} \right) \\
&= \lim_{\epsilon \to 0^+} \left( \frac{1}{\epsilon} - 1 - 1 + \frac{1}{\epsilon} \right)
\end{align*}
The principal value of the integral does not exist.
2.
\begin{align*}
\mathrm{PV}\!\int_{-1}^{1} \frac{1}{x^3} \,dx
&= \lim_{\epsilon \to 0^+} \left( \int_{-1}^{-\epsilon} \frac{1}{x^3} \,dx + \int_{\epsilon}^{1} \frac{1}{x^3} \,dx \right) \\
&= \lim_{\epsilon \to 0^+} \left( \left[ -\frac{1}{2x^2} \right]_{-1}^{-\epsilon} + \left[ -\frac{1}{2x^2} \right]_{\epsilon}^{1} \right) \\
&= \lim_{\epsilon \to 0^+} \left( -\frac{1}{2(-\epsilon)^2} + \frac{1}{2(-1)^2} - \frac{1}{2(1)^2} + \frac{1}{2\epsilon^2} \right) \\
&= 0
\end{align*}
3. Since f(x) is real analytic,
\[ f(x) = \sum_{n=0}^{\infty} f_n x^n \quad \text{for } x \in (-1, 1). \]
We can rewrite the integrand as
\[ \frac{f(x)}{x^3} = \frac{f_0}{x^3} + \frac{f_1}{x^2} + \frac{f_2}{x} + \frac{f(x) - f_0 - f_1 x - f_2 x^2}{x^3}. \]
Note that the final term is real analytic on (−1, 1), and that by the first two parts the principal
values of the integrals of \(f_0/x^3\) and \(f_2/x\) vanish while that of \(f_1/x^2\) diverges. Thus the
principal value of the integral exists if and only if f_1 = 0.
Cauchy Principal Value for Contour Integrals
Solution 13.11
We can write f(z) as
\[ f(z) = \frac{f_0}{z - z_0} + \frac{(z - z_0)f(z) - f_0}{z - z_0}. \]
Note that the second term is analytic in a neighborhood of z0. Thus it is bounded on the contour.
Let \(M_\epsilon\) be the maximum modulus of \(\frac{(z - z_0)f(z) - f_0}{z - z_0}\) on \(C_\epsilon\). By using the maximum modulus integral
bound, we have
\[ \left| \int_{C_\epsilon} \frac{(z - z_0)f(z) - f_0}{z - z_0} \,dz \right| \leq (\beta - \alpha)\epsilon M_\epsilon \to 0 \text{ as } \epsilon \to 0^+. \]
Thus we see that
\[ \lim_{\epsilon \to 0^+} \int_{C_\epsilon} f(z) \,dz = \lim_{\epsilon \to 0^+} \int_{C_\epsilon} \frac{f_0}{z - z_0} \,dz. \]
We parameterize the path of integration with
\[ z = z_0 + \epsilon e^{\imath\theta}, \quad \theta \in (\alpha, \beta). \]
Now we evaluate the integral.
\begin{align*}
\lim_{\epsilon \to 0^+} \int_{C_\epsilon} \frac{f_0}{z - z_0} \,dz
&= \lim_{\epsilon \to 0^+} \int_{\alpha}^{\beta} \frac{f_0}{\epsilon e^{\imath\theta}}\, \imath\epsilon e^{\imath\theta} \,d\theta \\
&= \lim_{\epsilon \to 0^+} \int_{\alpha}^{\beta} \imath f_0 \,d\theta \\
&= \imath(\beta - \alpha) f_0 \\
&\equiv \imath(\beta - \alpha)\operatorname{Res}(f(z), z_0)
\end{align*}
This proves the result.
CONTINUE
Figure 13.9: The Indented Contour.
Solution 13.12
Let Ci be the contour that is indented with circular arcs of radius ε at each of the first order poles
on C so as to enclose these poles. Let A1, . . . , An be these circular arcs of radius ε centered at the
points ζ1, . . . , ζn. Let Cp be the contour, (not necessarily connected), obtained by subtracting each
of the Aj's from Ci.
Since the curve is C¹, (or continuously differentiable), at each of the first order poles on C, the
Aj's become semi-circles as ε → 0⁺. Thus
\[ \int_{A_j} f(z) \,dz = \imath\pi \operatorname{Res}(f(z), \zeta_j) \quad \text{for } j = 1, \ldots, n. \]
The principal value of the integral along C is
\begin{align*}
\mathrm{PV}\!\int_C f(z) \,dz
&= \lim_{\epsilon \to 0^+} \int_{C_p} f(z) \,dz \\
&= \lim_{\epsilon \to 0^+} \left( \int_{C_i} f(z) \,dz - \sum_{j=1}^{n} \int_{A_j} f(z) \,dz \right) \\
&= \imath 2\pi \left( \sum_{j=1}^{m} \operatorname{Res}(f(z), z_j) + \sum_{j=1}^{n} \operatorname{Res}(f(z), \zeta_j) \right) - \imath\pi \sum_{j=1}^{n} \operatorname{Res}(f(z), \zeta_j)
\end{align*}
\[ \mathrm{PV}\!\int_C f(z) \,dz = \imath 2\pi \sum_{j=1}^{m} \operatorname{Res}(f(z), z_j) + \imath\pi \sum_{j=1}^{n} \operatorname{Res}(f(z), \zeta_j). \]
Solution 13.13
Consider
\[ \mathrm{PV}\!\int_C \frac{1}{z - 1} \,dz \]
where C is the unit circle. Let Cp be the circular arc of radius 1 that starts and ends a distance of
ε from z = 1. Let Cε be the negative, circular arc of radius ε with center at z = 1 that joins the
endpoints of Cp. Let Ci be the union of Cp and Cε. (Cp stands for Principal value Contour; Ci
stands for Indented Contour.) Ci is an indented contour that avoids the first order pole at z = 1.
Figure 13.9 shows the three contours.
Note that the principal value of the integral is
\[ \mathrm{PV}\!\int_C \frac{1}{z - 1} \,dz = \lim_{\epsilon \to 0^+} \int_{C_p} \frac{1}{z - 1} \,dz. \]
We can calculate the integral along Ci with Cauchy's theorem. The integrand is analytic inside the
contour.
\[ \int_{C_i} \frac{1}{z - 1} \,dz = 0 \]
We can calculate the integral along Cε using Result 13.3.1. Note that as ε → 0⁺, the contour
becomes a semi-circle, a circular arc of π radians in the negative direction.
\[ \lim_{\epsilon \to 0^+} \int_{C_\epsilon} \frac{1}{z - 1} \,dz = -\imath\pi \operatorname{Res}\left( \frac{1}{z - 1}, 1 \right) = -\imath\pi \]
Now we can write the principal value of the integral along C in terms of the two known integrals.
\[ \mathrm{PV}\!\int_C \frac{1}{z - 1} \,dz = \int_{C_i} \frac{1}{z - 1} \,dz - \int_{C_\epsilon} \frac{1}{z - 1} \,dz = 0 - (-\imath\pi) = \imath\pi \]
Integrals on the Real Axis
Solution 13.14
1. First we note that the integrand is an even function and extend the domain of integration.
\[ \int_0^\infty \frac{x^2}{(x^2+1)(x^2+4)} \,dx = \frac{1}{2} \int_{-\infty}^{\infty} \frac{x^2}{(x^2+1)(x^2+4)} \,dx \]
Next we close the path of integration in the upper half plane. Consider the integral along the
boundary of the domain 0 < r < R, 0 < θ < π.
\begin{align*}
\frac{1}{2} \int_C \frac{z^2}{(z^2+1)(z^2+4)} \,dz
&= \frac{1}{2} \int_C \frac{z^2}{(z-\imath)(z+\imath)(z-\imath 2)(z+\imath 2)} \,dz \\
&= \imath 2\pi\,\frac{1}{2} \left( \operatorname{Res}\left( \frac{z^2}{(z^2+1)(z^2+4)}, z = \imath \right) + \operatorname{Res}\left( \frac{z^2}{(z^2+1)(z^2+4)}, z = \imath 2 \right) \right) \\
&= \imath\pi \left( \left. \frac{z^2}{(z+\imath)(z^2+4)} \right|_{z=\imath} + \left. \frac{z^2}{(z^2+1)(z+\imath 2)} \right|_{z=\imath 2} \right) \\
&= \imath\pi \left( \frac{\imath}{6} - \frac{\imath}{3} \right) \\
&= \frac{\pi}{6}
\end{align*}
Let CR be the circular arc portion of the contour, \(\int_C = \int_{-R}^{R} + \int_{C_R}\). We show that the integral
along CR vanishes as R → ∞ with the maximum modulus bound.
\[ \left| \int_{C_R} \frac{z^2}{(z^2+1)(z^2+4)} \,dz \right| \leq \pi R \max_{z \in C_R} \left| \frac{z^2}{(z^2+1)(z^2+4)} \right| = \pi R \frac{R^2}{(R^2-1)(R^2-4)} \to 0 \text{ as } R \to \infty \]
We take the limit as R → ∞ to evaluate the integral along the real axis.
\[ \lim_{R \to \infty} \frac{1}{2} \int_{-R}^{R} \frac{x^2}{(x^2+1)(x^2+4)} \,dx = \frac{\pi}{6} \]
\[ \int_0^\infty \frac{x^2}{(x^2+1)(x^2+4)} \,dx = \frac{\pi}{6} \]
2. We close the path of integration in the upper half plane. Consider the integral along the
boundary of the domain 0 < r < R, 0 < θ < π.
\begin{align*}
\int_C \frac{dz}{(z+b)^2 + a^2}
&= \int_C \frac{dz}{(z + b - \imath a)(z + b + \imath a)} \\
&= \imath 2\pi \operatorname{Res}\left( \frac{1}{(z+b-\imath a)(z+b+\imath a)}, z = -b + \imath a \right) \\
&= \imath 2\pi \left. \frac{1}{z + b + \imath a} \right|_{z=-b+\imath a} \\
&= \frac{\pi}{a}
\end{align*}
Let CR be the circular arc portion of the contour, \(\int_C = \int_{-R}^{R} + \int_{C_R}\). We show that the integral
along CR vanishes as R → ∞ with the maximum modulus bound.
\[ \left| \int_{C_R} \frac{dz}{(z+b)^2 + a^2} \right| \leq \pi R \max_{z \in C_R} \left| \frac{1}{(z+b)^2 + a^2} \right| = \pi R \frac{1}{(R-b)^2 + a^2} \to 0 \text{ as } R \to \infty \]
We take the limit as R → ∞ to evaluate the integral along the real axis.
\[ \lim_{R \to \infty} \int_{-R}^{R} \frac{dx}{(x+b)^2 + a^2} = \frac{\pi}{a} \]
\[ \int_{-\infty}^{\infty} \frac{dx}{(x+b)^2 + a^2} = \frac{\pi}{a} \]
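The snippet below is not part of the original solution; it is a quick numerical corroboration of both
results using SciPy's adaptive quadrature, assuming Python with the numpy and scipy packages
available.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# First integral: x^2/((x^2+1)(x^2+4)) on (0, oo) should equal pi/6.
v1, _ = quad(lambda x: x**2 / ((x**2 + 1) * (x**2 + 4)), 0, np.inf)
print(v1, np.pi / 6)          # both approximately 0.523599

# Second integral: 1/((x+b)^2 + a^2) on (-oo, oo) should equal pi/a; try a=2, b=3.
a, b = 2.0, 3.0
v2, _ = quad(lambda x: 1 / ((x + b)**2 + a**2), -np.inf, np.inf)
print(v2, np.pi / a)          # both approximately 1.570796
\end{verbatim}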
Solution 13.15
Let CR be the semicircular arc from R to −R in the upper half plane. Let C be the union of CR and
the interval [−R, R]. We can evaluate the principal value of the integral along C with Result 13.3.2.
\[ \mathrm{PV}\!\int_C f(x) \,dx = \imath 2\pi \sum_{k=1}^{m} \operatorname{Res}(f(z), z_k) + \imath\pi \sum_{k=1}^{n} \operatorname{Res}(f(z), x_k) \]
We examine the integral along CR as R → ∞.
\[ \left| \int_{C_R} f(z) \,dz \right| \leq \pi R \max_{z \in C_R} |f(z)| \to 0 \text{ as } R \to \infty. \]
Now we are prepared to evaluate the real integral.
\begin{align*}
\mathrm{PV}\!\int_{-\infty}^{\infty} f(x) \,dx
&= \lim_{R \to \infty} \mathrm{PV}\!\int_{-R}^{R} f(x) \,dx \\
&= \lim_{R \to \infty} \mathrm{PV}\!\int_C f(z) \,dz \\
&= \imath 2\pi \sum_{k=1}^{m} \operatorname{Res}(f(z), z_k) + \imath\pi \sum_{k=1}^{n} \operatorname{Res}(f(z), x_k)
\end{align*}
If we close the path of integration in the lower half plane, the contour will be in the negative direction.
\[ \mathrm{PV}\!\int_{-\infty}^{\infty} f(x) \,dx = -\imath 2\pi \sum_{k=1}^{m} \operatorname{Res}(f(z), z_k) - \imath\pi \sum_{k=1}^{n} \operatorname{Res}(f(z), x_k) \]
Solution 13.16
We consider
\[ \mathrm{PV}\!\int_{-\infty}^{\infty} \frac{2x}{x^2 + x + 1} \,dx. \]
With the change of variables x = 1/ξ, this becomes
\[ \mathrm{PV}\!\int_{\infty}^{-\infty} \frac{2\xi^{-1}}{\xi^{-2} + \xi^{-1} + 1} \left( \frac{-1}{\xi^2} \right) d\xi, \]
\[ \mathrm{PV}\!\int_{-\infty}^{\infty} \frac{2\xi^{-1}}{\xi^2 + \xi + 1} \,d\xi \]
There are first order poles at ξ = 0 and ξ = −1/2 ± ı√3/2. We close the path of integration in
the upper half plane with a semi-circle. Since the integrand decays like \(\xi^{-3}\) the integral along the
semi-circle vanishes as the radius tends to infinity. The value of the integral is thus
\[ \imath\pi \operatorname{Res}\left( \frac{2z^{-1}}{z^2 + z + 1}, z = 0 \right) + \imath 2\pi \operatorname{Res}\left( \frac{2z^{-1}}{z^2 + z + 1}, z = -\frac{1}{2} + \imath\frac{\sqrt{3}}{2} \right) \]
\[ \imath\pi \lim_{z \to 0} \frac{2}{z^2 + z + 1} + \imath 2\pi \lim_{z \to (-1+\imath\sqrt{3})/2} \frac{2z^{-1}}{z + (1 + \imath\sqrt{3})/2} \]
\[ \mathrm{PV}\!\int_{-\infty}^{\infty} \frac{2x}{x^2 + x + 1} \,dx = -\frac{2\pi}{\sqrt{3}} \]
Solution 13.17
1. Consider
\[ \int_{-\infty}^{\infty} \frac{1}{x^4 + 1} \,dx. \]
The integrand \(\frac{1}{z^4+1}\) is analytic on the real axis and has isolated singularities at the points
\(z = \{ e^{\imath\pi/4}, e^{\imath 3\pi/4}, e^{\imath 5\pi/4}, e^{\imath 7\pi/4} \}\).
Let CR be the semi-circle of radius R in the upper half plane. Since
\[ \lim_{R \to \infty} R \max_{z \in C_R} \left| \frac{1}{z^4 + 1} \right| = \lim_{R \to \infty} R \frac{1}{R^4 - 1} = 0, \]
we can apply Result 13.4.1.
\[ \int_{-\infty}^{\infty} \frac{1}{x^4 + 1} \,dx = \imath 2\pi \left( \operatorname{Res}\left( \frac{1}{z^4+1}, e^{\imath\pi/4} \right) + \operatorname{Res}\left( \frac{1}{z^4+1}, e^{\imath 3\pi/4} \right) \right) \]
The appropriate residues are,
\[ \operatorname{Res}\left( \frac{1}{z^4+1}, e^{\imath\pi/4} \right) = \lim_{z \to e^{\imath\pi/4}} \frac{z - e^{\imath\pi/4}}{z^4 + 1} = \lim_{z \to e^{\imath\pi/4}} \frac{1}{4z^3} = \frac{1}{4} e^{-\imath 3\pi/4} = \frac{-1 - \imath}{4\sqrt{2}}, \]
\[ \operatorname{Res}\left( \frac{1}{z^4+1}, e^{\imath 3\pi/4} \right) = \frac{1}{4\left( e^{\imath 3\pi/4} \right)^3} = \frac{1}{4} e^{-\imath\pi/4} = \frac{1 - \imath}{4\sqrt{2}}. \]
We evaluate the integral with the residue theorem.
\[ \int_{-\infty}^{\infty} \frac{1}{x^4 + 1} \,dx = \imath 2\pi \left( \frac{-1-\imath}{4\sqrt{2}} + \frac{1-\imath}{4\sqrt{2}} \right) = \frac{\pi}{\sqrt{2}} \]
2. Now consider
\[ \int_{-\infty}^{\infty} \frac{x^2}{(x^2+1)^2} \,dx. \]
The integrand is analytic on the real axis and has second order poles at z = ±ı. Since the
integrand decays sufficiently fast at infinity,
\[ \lim_{R \to \infty} R \max_{z \in C_R} \left| \frac{z^2}{(z^2+1)^2} \right| = \lim_{R \to \infty} R \frac{R^2}{(R^2-1)^2} = 0, \]
we can apply Result 13.4.1.
\[ \int_{-\infty}^{\infty} \frac{x^2}{(x^2+1)^2} \,dx = \imath 2\pi \operatorname{Res}\left( \frac{z^2}{(z^2+1)^2}, z = \imath \right) \]
\begin{align*}
\operatorname{Res}\left( \frac{z^2}{(z^2+1)^2}, z = \imath \right)
&= \lim_{z \to \imath} \frac{d}{dz}\left( (z - \imath)^2 \frac{z^2}{(z^2+1)^2} \right) \\
&= \lim_{z \to \imath} \frac{d}{dz} \frac{z^2}{(z+\imath)^2} \\
&= \lim_{z \to \imath} \frac{(z+\imath)^2\, 2z - z^2\, 2(z+\imath)}{(z+\imath)^4} \\
&= -\frac{\imath}{4}
\end{align*}
\[ \int_{-\infty}^{\infty} \frac{x^2}{(x^2+1)^2} \,dx = \frac{\pi}{2} \]
3. Since
\[ \frac{\sin x}{1 + x^2} \]
is an odd function,
\[ \int_{-\infty}^{\infty} \frac{\cos x}{1 + x^2} \,dx = \int_{-\infty}^{\infty} \frac{e^{\imath x}}{1 + x^2} \,dx. \]
Since \(e^{\imath z}/(1 + z^2)\) is analytic except for simple poles at z = ±ı and the integrand decays
sufficiently fast in the upper half plane,
\[ \lim_{R \to \infty} R \max_{z \in C_R} \left| \frac{e^{\imath z}}{1 + z^2} \right| = \lim_{R \to \infty} R \frac{1}{R^2 - 1} = 0, \]
we can apply Result 13.4.1.
\[ \int_{-\infty}^{\infty} \frac{e^{\imath x}}{1 + x^2} \,dx = \imath 2\pi \operatorname{Res}\left( \frac{e^{\imath z}}{(z - \imath)(z + \imath)}, z = \imath \right) = \imath 2\pi\,\frac{e^{-1}}{\imath 2} \]
\[ \int_{-\infty}^{\infty} \frac{\cos x}{1 + x^2} \,dx = \frac{\pi}{e} \]
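The three closed forms above are easy to corroborate numerically. The following check is not part
of the original text; it assumes Python with numpy and scipy, and uses QUADPACK's oscillatory
weight routine for the cosine integral.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

v1, _ = quad(lambda x: 1 / (x**4 + 1), -np.inf, np.inf)
print(v1, np.pi / np.sqrt(2))                      # ~2.221441

v2, _ = quad(lambda x: x**2 / (x**2 + 1)**2, -np.inf, np.inf)
print(v2, np.pi / 2)                               # ~1.570796

# cos(x)/(1+x^2): integrate with the 'cos' weight and double the half-line value.
v3, _ = quad(lambda x: 1 / (1 + x**2), 0, np.inf, weight='cos', wvar=1.0)
print(2 * v3, np.pi / np.e)                        # ~1.155727
\end{verbatim}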
Solution 13.18
Consider the function
\[ f(z) = \frac{z^6}{(z^4 + 1)^2}. \]
The value of the function on the imaginary axis,
\[ \frac{-y^6}{(y^4 + 1)^2}, \]
is a constant multiple of the value of the function on the real axis,
\[ \frac{x^6}{(x^4 + 1)^2}. \]
Thus to evaluate the real integral we consider the path of integration, C, which starts at the origin,
follows the real axis to R, follows a circular path to ıR and then follows the imaginary axis back
down to the origin. f(z) has second order poles at the fourth roots of −1: (±1 ± ı)/√2. Of these
only (1 + ı)/√2 lies inside the path of integration. We evaluate the contour integral with the Residue
Theorem. For R > 1:
\begin{align*}
\int_C \frac{z^6}{(z^4+1)^2} \,dz
&= \imath 2\pi \operatorname{Res}\left( \frac{z^6}{(z^4+1)^2}, z = e^{\imath\pi/4} \right) \\
&= \imath 2\pi \lim_{z \to e^{\imath\pi/4}} \frac{d}{dz}\left( \left( z - e^{\imath\pi/4} \right)^2 \frac{z^6}{(z^4+1)^2} \right) \\
&= \imath 2\pi \lim_{z \to e^{\imath\pi/4}} \frac{d}{dz}\left( \frac{z^6}{(z - e^{\imath 3\pi/4})^2 (z - e^{\imath 5\pi/4})^2 (z - e^{\imath 7\pi/4})^2} \right) \\
&= \imath 2\pi \lim_{z \to e^{\imath\pi/4}} \frac{z^6}{(z - e^{\imath 3\pi/4})^2 (z - e^{\imath 5\pi/4})^2 (z - e^{\imath 7\pi/4})^2}
\left( \frac{6}{z} - \frac{2}{z - e^{\imath 3\pi/4}} - \frac{2}{z - e^{\imath 5\pi/4}} - \frac{2}{z - e^{\imath 7\pi/4}} \right) \\
&= \imath 2\pi\,\frac{-\imath}{(2)(\imath 4)(-2)} \left( \frac{6\sqrt{2}}{1 + \imath} - \frac{2}{\sqrt{2}} - \frac{2\sqrt{2}}{2 + \imath 2} - \frac{2}{\imath\sqrt{2}} \right) \\
&= \imath 2\pi\,\frac{3}{32}(1 - \imath)\sqrt{2} \\
&= \frac{3\pi}{8\sqrt{2}}(1 + \imath)
\end{align*}
The integral along the circular part of the contour, CR, vanishes as R → ∞. We demonstrate this
with the maximum modulus integral bound.
\[ \left| \int_{C_R} \frac{z^6}{(z^4+1)^2} \,dz \right| \leq \frac{\pi R}{4} \max_{z \in C_R} \left| \frac{z^6}{(z^4+1)^2} \right| = \frac{\pi R}{4}\,\frac{R^6}{(R^4-1)^2} \to 0 \text{ as } R \to \infty \]
Taking the limit R → ∞, we have:
\[ \int_0^\infty \frac{x^6}{(x^4+1)^2} \,dx + \int_{\infty}^{0} \frac{(\imath y)^6}{((\imath y)^4 + 1)^2}\, \imath \,dy = \frac{3\pi}{8\sqrt{2}}(1 + \imath) \]
\[ \int_0^\infty \frac{x^6}{(x^4+1)^2} \,dx + \imath \int_0^\infty \frac{y^6}{(y^4+1)^2} \,dy = \frac{3\pi}{8\sqrt{2}}(1 + \imath) \]
\[ (1 + \imath) \int_0^\infty \frac{x^6}{(x^4+1)^2} \,dx = \frac{3\pi}{8\sqrt{2}}(1 + \imath) \]
\[ \int_0^\infty \frac{x^6}{(x^4+1)^2} \,dx = \frac{3\pi}{8\sqrt{2}} \]
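As a sanity check, not part of the original solution, the result is easy to confirm numerically
(assuming Python with numpy and scipy):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

val, _ = quad(lambda x: x**6 / (x**4 + 1)**2, 0, np.inf)
print(val, 3 * np.pi / (8 * np.sqrt(2)))   # both approximately 0.833040
\end{verbatim}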
Fourier Integrals
Solution 13.19
We know that
\[ \int_0^\pi e^{-R\sin\theta} \,d\theta < \frac{\pi}{R}. \]
First take the case that ω is positive and the semi-circle is in the upper half plane.
\begin{align*}
\left| \int_{C_R} f(z) e^{\imath\omega z} \,dz \right|
&\leq \int_{C_R} \left| e^{\imath\omega z} \right| |dz|\, \max_{z \in C_R} |f(z)| \\
&\leq \int_0^\pi \left| e^{\imath\omega R e^{\imath\theta}} R e^{\imath\theta} \right| d\theta\, \max_{z \in C_R} |f(z)| \\
&= R \int_0^\pi e^{-\omega R\sin\theta} \,d\theta\, \max_{z \in C_R} |f(z)| \\
&< R\,\frac{\pi}{\omega R}\, \max_{z \in C_R} |f(z)| \\
&= \frac{\pi}{\omega} \max_{z \in C_R} |f(z)| \\
&\to 0 \text{ as } R \to \infty
\end{align*}
The procedure is almost the same for negative ω.
Solution 13.20
First we write the integral in terms of Fourier integrals.
\[ \int_{-\infty}^{\infty} \frac{\cos 2x}{x - \imath\pi} \,dx = \int_{-\infty}^{\infty} \frac{e^{\imath 2x}}{2(x - \imath\pi)} \,dx + \int_{-\infty}^{\infty} \frac{e^{-\imath 2x}}{2(x - \imath\pi)} \,dx \]
Note that \(\frac{1}{2(z - \imath\pi)}\) vanishes as |z| → ∞. We close the former Fourier integral in the upper half plane
and the latter in the lower half plane. There is a first order pole at z = ıπ in the upper half plane.
\[ \int_{-\infty}^{\infty} \frac{e^{\imath 2x}}{2(x - \imath\pi)} \,dx = \imath 2\pi \operatorname{Res}\left( \frac{e^{\imath 2z}}{2(z - \imath\pi)}, z = \imath\pi \right) = \imath 2\pi\,\frac{e^{-2\pi}}{2} \]
There are no singularities in the lower half plane.
\[ \int_{-\infty}^{\infty} \frac{e^{-\imath 2x}}{2(x - \imath\pi)} \,dx = 0 \]
Thus the value of the original real integral is
\[ \int_{-\infty}^{\infty} \frac{\cos 2x}{x - \imath\pi} \,dx = \imath\pi e^{-2\pi} \]
Fourier Cosine and Sine Integrals
Solution 13.21
We are considering the integral
\[ \int_{-\infty}^{\infty} \frac{\sin x}{x} \,dx. \]
The integrand is an entire function. So it doesn't appear that the residue theorem would directly
apply. Also the integrand is unbounded as x → +ı∞ and x → −ı∞, so closing the integral in the
upper or lower half plane is not directly applicable. In order to proceed, we must write the integrand
in a different form. Note that
\[ \mathrm{PV}\!\int_{-\infty}^{\infty} \frac{\cos x}{x} \,dx = 0 \]
since the integrand is odd and has only a first order pole at x = 0. Thus
\[ \int_{-\infty}^{\infty} \frac{\sin x}{x} \,dx = \mathrm{PV}\!\int_{-\infty}^{\infty} \frac{e^{\imath x}}{\imath x} \,dx. \]
Let CR be the semicircular arc in the upper half plane from R to −R. Let C be the closed contour
that is the union of CR and the real interval [−R, R]. If we close the path of integration with a
semicircular arc in the upper half plane, we have
\[ \int_{-\infty}^{\infty} \frac{\sin x}{x} \,dx = \lim_{R \to \infty} \left( \mathrm{PV}\!\int_C \frac{e^{\imath z}}{\imath z} \,dz - \int_{C_R} \frac{e^{\imath z}}{\imath z} \,dz \right), \]
provided that all the integrals exist.
The integral along CR vanishes as R → ∞ by Jordan's lemma. By the residue theorem for
principal values we have
\[ \mathrm{PV}\!\int_C \frac{e^{\imath z}}{\imath z} \,dz = \imath\pi \operatorname{Res}\left( \frac{e^{\imath z}}{\imath z}, 0 \right) = \pi. \]
Combining these results,
\[ \int_{-\infty}^{\infty} \frac{\sin x}{x} \,dx = \pi. \]
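A numerical cross-check, not from the original text (it assumes Python with numpy and scipy):
the head of the integral is smooth, and the slowly decaying oscillatory tail is handled by
QUADPACK's sine-weighted routine.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

head, _ = quad(lambda x: np.sin(x) / x, 0, 1)
tail, _ = quad(lambda x: 1.0 / x, 1, np.inf, weight='sin', wvar=1.0)
print(2 * (head + tail), np.pi)   # both approximately 3.141593
\end{verbatim}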
Solution 13.22
Note that \((1 - \cos x)/x^2\) has a removable singularity at x = 0. The integrand decays like \(1/x^2\) at infinity,
so the integral exists. Since \((\sin x)/x^2\) is an odd function with a simple pole at x = 0, the principal
value of its integral vanishes.
\[ \mathrm{PV}\!\int_{-\infty}^{\infty} \frac{\sin x}{x^2} \,dx = 0 \]
\[ \int_{-\infty}^{\infty} \frac{1 - \cos x}{x^2} \,dx = \mathrm{PV}\!\int_{-\infty}^{\infty} \frac{1 - \cos x - \imath\sin x}{x^2} \,dx = \mathrm{PV}\!\int_{-\infty}^{\infty} \frac{1 - e^{\imath x}}{x^2} \,dx \]
Let CR be the semi-circle of radius R in the upper half plane. Since
\[ \lim_{R \to \infty} R \max_{z \in C_R} \left| \frac{1 - e^{\imath z}}{z^2} \right| = \lim_{R \to \infty} R\,\frac{2}{R^2} = 0, \]
the integral along CR vanishes as R → ∞.
\[ \int_{C_R} \frac{1 - e^{\imath z}}{z^2} \,dz \to 0 \text{ as } R \to \infty \]
We can apply Result 13.4.1.
\[ \mathrm{PV}\!\int_{-\infty}^{\infty} \frac{1 - e^{\imath x}}{x^2} \,dx = \imath\pi \operatorname{Res}\left( \frac{1 - e^{\imath z}}{z^2}, z = 0 \right) = \imath\pi \lim_{z \to 0} \frac{1 - e^{\imath z}}{z} = \imath\pi \lim_{z \to 0} \frac{-\imath e^{\imath z}}{1} \]
\[ \int_{-\infty}^{\infty} \frac{1 - \cos x}{x^2} \,dx = \pi \]
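For corroboration (not part of the original solution; assumes Python with numpy and scipy), split
the integral into a smooth head and a tail, and handle the oscillatory part of the tail with the
cosine-weighted quadrature:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

g = lambda x: (1 - np.cos(x)) / x**2        # even; removable singularity at 0
head, _ = quad(g, 0, 10)
tail1, _ = quad(lambda x: 1 / x**2, 10, np.inf)                        # the 1/x^2 piece
tail2, _ = quad(lambda x: 1 / x**2, 10, np.inf, weight='cos', wvar=1)  # the cos(x)/x^2 piece
print(2 * (head + tail1 - tail2), np.pi)    # both approximately 3.141593
\end{verbatim}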
Solution 13.23
Consider
\[ \int_0^\infty \frac{\sin(\pi x)}{x(1 - x^2)} \,dx. \]
Note that the integrand has removable singularities at the points x = 0, ±1 and is an even function.
\[ \int_0^\infty \frac{\sin(\pi x)}{x(1 - x^2)} \,dx = \frac{1}{2} \int_{-\infty}^{\infty} \frac{\sin(\pi x)}{x(1 - x^2)} \,dx. \]
Note that
\[ \frac{\cos(\pi x)}{x(1 - x^2)} \]
is an odd function with first order poles at x = 0, ±1.
\[ \mathrm{PV}\!\int_{-\infty}^{\infty} \frac{\cos(\pi x)}{x(1 - x^2)} \,dx = 0 \]
\[ \int_0^\infty \frac{\sin(\pi x)}{x(1 - x^2)} \,dx = -\frac{\imath}{2}\,\mathrm{PV}\!\int_{-\infty}^{\infty} \frac{e^{\imath\pi x}}{x(1 - x^2)} \,dx. \]
Let CR be the semi-circle of radius R in the upper half plane. Since
\[ \lim_{R \to \infty} R \max_{z \in C_R} \left| \frac{e^{\imath\pi z}}{z(1 - z^2)} \right| = \lim_{R \to \infty} R\,\frac{1}{R(R^2 - 1)} = 0, \]
the integral along CR vanishes as R → ∞.
\[ \int_{C_R} \frac{e^{\imath\pi z}}{z(1 - z^2)} \,dz \to 0 \text{ as } R \to \infty \]
We can apply Result 13.4.1.
\begin{align*}
-\frac{\imath}{2}\,\mathrm{PV}\!\int_{-\infty}^{\infty} \frac{e^{\imath\pi x}}{x(1 - x^2)} \,dx
&= \imath\pi\,\frac{-\imath}{2} \left( \operatorname{Res}\left( \frac{e^{\imath\pi z}}{z(1 - z^2)}, z = 0 \right)
+ \operatorname{Res}\left( \frac{e^{\imath\pi z}}{z(1 - z^2)}, z = 1 \right)
+ \operatorname{Res}\left( \frac{e^{\imath\pi z}}{z(1 - z^2)}, z = -1 \right) \right) \\
&= \frac{\pi}{2} \left( \lim_{z \to 0} \frac{e^{\imath\pi z}}{1 - z^2} - \lim_{z \to 1} \frac{e^{\imath\pi z}}{z(1 + z)} + \lim_{z \to -1} \frac{e^{\imath\pi z}}{z(1 - z)} \right) \\
&= \frac{\pi}{2} \left( 1 - \frac{-1}{2} + \frac{-1}{-2} \right)
\end{align*}
\[ \int_0^\infty \frac{\sin(\pi x)}{x(1 - x^2)} \,dx = \pi \]
Contour Integration and Branch Cuts
Solution 13.24
Let C be the boundary of the region ε < r < R, 0 < θ < π. Choose the branch of the logarithm with
a branch cut on the negative imaginary axis and the angle range −π/2 < θ < 3π/2. We consider
the integral of \(\log^2 z/(1 + z^2)\) on this contour.
\begin{align*}
\int_C \frac{\log^2 z}{1 + z^2} \,dz
&= \imath 2\pi \operatorname{Res}\left( \frac{\log^2 z}{1 + z^2}, z = \imath \right) \\
&= \imath 2\pi \lim_{z \to \imath} \frac{\log^2 z}{z + \imath} \\
&= \imath 2\pi\,\frac{(\imath\pi/2)^2}{\imath 2} \\
&= -\frac{\pi^3}{4}
\end{align*}
Let CR be the semi-circle from R to −R in the upper half plane. We show that the integral along
CR vanishes as R → ∞ with the maximum modulus integral bound.
\[ \left| \int_{C_R} \frac{\log^2 z}{1 + z^2} \,dz \right| \leq \pi R \max_{z \in C_R} \left| \frac{\log^2 z}{1 + z^2} \right| \leq \pi R\,\frac{\ln^2 R + 2\pi\ln R + \pi^2}{R^2 - 1} \to 0 \text{ as } R \to \infty \]
Let Cε be the semi-circle from −ε to ε in the upper half plane. We show that the integral along Cε
vanishes as ε → 0 with the maximum modulus integral bound.
\[ \left| \int_{C_\epsilon} \frac{\log^2 z}{1 + z^2} \,dz \right| \leq \pi\epsilon \max_{z \in C_\epsilon} \left| \frac{\log^2 z}{1 + z^2} \right| \leq \pi\epsilon\,\frac{\ln^2\epsilon - 2\pi\ln\epsilon + \pi^2}{1 - \epsilon^2} \to 0 \text{ as } \epsilon \to 0 \]
Now we take the limit as ε → 0 and R → ∞ for the integral along C.
\[ \int_C \frac{\log^2 z}{1 + z^2} \,dz = -\frac{\pi^3}{4} \]
\[ \int_0^\infty \frac{\ln^2 r}{1 + r^2} \,dr + \int_0^\infty \frac{(\ln r + \imath\pi)^2}{1 + r^2} \,dr = -\frac{\pi^3}{4} \]
\[ 2\int_0^\infty \frac{\ln^2 x}{1 + x^2} \,dx + \imath 2\pi \int_0^\infty \frac{\ln x}{1 + x^2} \,dx = \pi^2 \int_0^\infty \frac{1}{1 + x^2} \,dx - \frac{\pi^3}{4} \qquad (13.1) \]
We evaluate the integral of \(1/(1 + x^2)\) by extending the path of integration to (−∞ . . . ∞) and
closing the path of integration in the upper half plane. Since
\[ \lim_{R \to \infty} R \max_{z \in C_R} \left| \frac{1}{1 + z^2} \right| \leq \lim_{R \to \infty} R\,\frac{1}{R^2 - 1} = 0, \]
the integral of \(1/(1 + z^2)\) along CR vanishes as R → ∞. We evaluate the integral with the Residue
Theorem.
Figure 13.10: The path of integration.
\begin{align*}
\pi^2 \int_0^\infty \frac{1}{1 + x^2} \,dx
&= \frac{\pi^2}{2} \int_{-\infty}^{\infty} \frac{1}{1 + x^2} \,dx \\
&= \frac{\pi^2}{2}\, \imath 2\pi \operatorname{Res}\left( \frac{1}{1 + z^2}, z = \imath \right) \\
&= \imath\pi^3 \lim_{z \to \imath} \frac{1}{z + \imath} \\
&= \frac{\pi^3}{2}
\end{align*}
Now we return to Equation 13.1.
\[ 2\int_0^\infty \frac{\ln^2 x}{1 + x^2} \,dx + \imath 2\pi \int_0^\infty \frac{\ln x}{1 + x^2} \,dx = \frac{\pi^3}{4} \]
We equate the real and imaginary parts to solve for the desired integrals.
\[ \int_0^\infty \frac{\ln^2 x}{1 + x^2} \,dx = \frac{\pi^3}{8}, \qquad \int_0^\infty \frac{\ln x}{1 + x^2} \,dx = 0 \]
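Both results check out numerically; the sketch below is not part of the original text and assumes
Python with numpy and scipy. Splitting at x = 1 keeps the logarithmic endpoint singularity mild.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

f2 = lambda x: np.log(x)**2 / (1 + x**2)
v2 = quad(f2, 0, 1)[0] + quad(f2, 1, np.inf)[0]
print(v2, np.pi**3 / 8)   # both approximately 3.875785

f1 = lambda x: np.log(x) / (1 + x**2)
v1 = quad(f1, 0, 1)[0] + quad(f1, 1, np.inf)[0]
print(v1, 0.0)            # ~0: the halves on (0,1) and (1,oo) cancel
\end{verbatim}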
Solution 13.25
We consider the branch of the function
\[ f(z) = \frac{\log z}{z^2 + 5z + 6} \]
with a branch cut on the real axis and 0 < arg(z) < 2π.
Let Cε and CR denote the circles of radius ε and R where ε < 1 < R. Cε is negatively oriented;
CR is positively oriented. Consider the closed contour, C, that is traced by a point moving from ε
to R above the branch cut, next around CR back to R, then below the cut to ε, and finally around
Cε back to ε. (See Figure 13.10.)
We can evaluate the integral of f(z) along C with the residue theorem. For R > 3, there are
first order poles inside the path of integration at z = −2 and z = −3.
\begin{align*}
\int_C \frac{\log z}{z^2 + 5z + 6} \,dz
&= \imath 2\pi \left( \operatorname{Res}\left( \frac{\log z}{z^2 + 5z + 6}, z = -2 \right) + \operatorname{Res}\left( \frac{\log z}{z^2 + 5z + 6}, z = -3 \right) \right) \\
&= \imath 2\pi \left( \lim_{z \to -2} \frac{\log z}{z + 3} + \lim_{z \to -3} \frac{\log z}{z + 2} \right) \\
&= \imath 2\pi \left( \frac{\log(-2)}{1} + \frac{\log(-3)}{-1} \right) \\
&= \imath 2\pi \left( \log 2 + \imath\pi - \log 3 - \imath\pi \right) \\
&= \imath 2\pi \log\frac{2}{3}
\end{align*}
In the limit as ε → 0, the integral along Cε vanishes. We demonstrate this with the maximum
modulus theorem.
\[ \left| \int_{C_\epsilon} \frac{\log z}{z^2 + 5z + 6} \,dz \right| \leq 2\pi\epsilon \max_{z \in C_\epsilon} \left| \frac{\log z}{z^2 + 5z + 6} \right| \leq 2\pi\epsilon\,\frac{2\pi - \log\epsilon}{6 - 5\epsilon - \epsilon^2} \to 0 \text{ as } \epsilon \to 0 \]
In the limit as R → ∞, the integral along CR vanishes. We again demonstrate this with the
maximum modulus theorem.
\[ \left| \int_{C_R} \frac{\log z}{z^2 + 5z + 6} \,dz \right| \leq 2\pi R \max_{z \in C_R} \left| \frac{\log z}{z^2 + 5z + 6} \right| \leq 2\pi R\,\frac{\log R + 2\pi}{R^2 - 5R - 6} \to 0 \text{ as } R \to \infty \]
Taking the limit as ε → 0 and R → ∞, the integral along C is:
\begin{align*}
\int_C \frac{\log z}{z^2 + 5z + 6} \,dz
&= \int_0^\infty \frac{\log x}{x^2 + 5x + 6} \,dx + \int_{\infty}^{0} \frac{\log x + \imath 2\pi}{x^2 + 5x + 6} \,dx \\
&= -\imath 2\pi \int_0^\infty \frac{dx}{x^2 + 5x + 6}
\end{align*}
The logarithms cancel, leaving only the integral we are after. Now we can evaluate the real integral.
\[ -\imath 2\pi \int_0^\infty \frac{dx}{x^2 + 5x + 6} = \imath 2\pi \log\frac{2}{3} \]
\[ \int_0^\infty \frac{dx}{x^2 + 5x + 6} = \log\frac{3}{2} \]
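The closed form is simple enough to verify directly; the check below is not part of the original
solution and assumes Python with numpy and scipy.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

val, _ = quad(lambda x: 1 / (x**2 + 5*x + 6), 0, np.inf)
print(val, np.log(3 / 2))   # both approximately 0.405465
\end{verbatim}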
Solution 13.26
We consider the integral
\[ I(a) = \int_0^\infty \frac{x^a}{(x + 1)^2} \,dx. \]
To examine convergence, we split the domain of integration.
\[ \int_0^\infty \frac{x^a}{(x+1)^2} \,dx = \int_0^1 \frac{x^a}{(x+1)^2} \,dx + \int_1^\infty \frac{x^a}{(x+1)^2} \,dx \]
First we work with the integral on (0 . . . 1).
\[ \left| \int_0^1 \frac{x^a}{(x+1)^2} \,dx \right| \leq \int_0^1 \left| \frac{x^a}{(x+1)^2} \right| |dx| = \int_0^1 \frac{x^{\Re(a)}}{(x+1)^2} \,dx \leq \int_0^1 x^{\Re(a)} \,dx \]
This integral converges for \(\Re(a) > -1\).
Next we work with the integral on (1 . . . ∞).
\[ \left| \int_1^\infty \frac{x^a}{(x+1)^2} \,dx \right| \leq \int_1^\infty \left| \frac{x^a}{(x+1)^2} \right| |dx| = \int_1^\infty \frac{x^{\Re(a)}}{(x+1)^2} \,dx \leq \int_1^\infty x^{\Re(a)-2} \,dx \]
This integral converges for \(\Re(a) < 1\).
Thus we see that the integral defining I(a) converges in the strip −1 < ℜ(a) < 1. The integral
converges uniformly in any closed subset of this domain. Uniform convergence means that we can
differentiate the integral with respect to a and interchange the order of integration and differentiation.
\[ I'(a) = \int_0^\infty \frac{x^a \log x}{(x+1)^2} \,dx \]
Thus we see that I(a) is analytic for −1 < ℜ(a) < 1.
For −1 < ℜ(a) < 1 and a ≠ 0, \(z^a\) is multi-valued. Consider the branch of the function
\(f(z) = z^a/(z+1)^2\) with a branch cut on the positive real axis and 0 < arg(z) < 2π. We integrate
along the contour in Figure ??.
The integral on Cε vanishes as ε → 0. We show this with the maximum modulus integral bound.
First we write \(z^a\) in modulus-argument form, \(z = \epsilon e^{\imath\theta}\), where a = α + ıβ.
\[ z^a = e^{a\log z} = e^{(\alpha + \imath\beta)(\ln\epsilon + \imath\theta)} = e^{\alpha\ln\epsilon - \beta\theta + \imath(\beta\ln\epsilon + \alpha\theta)} = \epsilon^\alpha e^{-\beta\theta} e^{\imath(\beta\ln\epsilon + \alpha\theta)} \]
Now we bound the integral.
\[ \left| \int_{C_\epsilon} \frac{z^a}{(z+1)^2} \,dz \right| \leq 2\pi\epsilon \max_{z \in C_\epsilon} \left| \frac{z^a}{(z+1)^2} \right| \leq 2\pi\epsilon\,\frac{\epsilon^\alpha e^{2\pi|\beta|}}{(1 - \epsilon)^2} \to 0 \text{ as } \epsilon \to 0 \]
The integral on CR vanishes as R → ∞.
\[ \left| \int_{C_R} \frac{z^a}{(z+1)^2} \,dz \right| \leq 2\pi R \max_{z \in C_R} \left| \frac{z^a}{(z+1)^2} \right| \leq 2\pi R\,\frac{R^\alpha e^{2\pi|\beta|}}{(R - 1)^2} \to 0 \text{ as } R \to \infty \]
Above the branch cut, \((z = r e^{\imath 0})\), the integrand is
\[ f(r e^{\imath 0}) = \frac{r^a}{(r+1)^2}. \]
Below the branch cut, \((z = r e^{\imath 2\pi})\), we have,
\[ f(r e^{\imath 2\pi}) = \frac{e^{\imath 2\pi a} r^a}{(r+1)^2}. \]
Now we use the residue theorem.
\begin{align*}
\int_0^\infty \frac{r^a}{(r+1)^2} \,dr + \int_{\infty}^{0} \frac{e^{\imath 2\pi a} r^a}{(r+1)^2} \,dr
&= \imath 2\pi \operatorname{Res}\left( \frac{z^a}{(z+1)^2}, -1 \right) \\
\left( 1 - e^{\imath 2\pi a} \right) \int_0^\infty \frac{r^a}{(r+1)^2} \,dr
&= \imath 2\pi \lim_{z \to -1} \frac{d}{dz}\left( z^a \right) \\
\left( 1 - e^{\imath 2\pi a} \right) \int_0^\infty \frac{r^a}{(r+1)^2} \,dr
&= \imath 2\pi\, a e^{\imath\pi(a-1)} \\
\int_0^\infty \frac{r^a}{(r+1)^2} \,dr
&= \frac{-\imath 2\pi a}{e^{-\imath\pi a} - e^{\imath\pi a}}
\end{align*}
\[ \int_0^\infty \frac{x^a}{(x+1)^2} \,dx = \frac{\pi a}{\sin(\pi a)} \quad \text{for } -1 < \Re(a) < 1,\ a \neq 0 \]
The right side has a removable singularity at a = 0. We use analytic continuation to extend the
answer to a = 0.
\[ I(a) = \int_0^\infty \frac{x^a}{(x+1)^2} \,dx =
\begin{cases} \dfrac{\pi a}{\sin(\pi a)} & \text{for } -1 < \Re(a) < 1,\ a \neq 0, \\ 1 & \text{for } a = 0. \end{cases} \]
We can derive the last two integrals by differentiating this formula with respect to a and taking
the limit a → 0.
\[ I'(a) = \int_0^\infty \frac{x^a \log x}{(x+1)^2} \,dx, \qquad I''(a) = \int_0^\infty \frac{x^a \log^2 x}{(x+1)^2} \,dx \]
\[ I'(0) = \int_0^\infty \frac{\log x}{(x+1)^2} \,dx, \qquad I''(0) = \int_0^\infty \frac{\log^2 x}{(x+1)^2} \,dx \]
We can find I'(0) and I''(0) either by differentiating the expression for I(a) or by finding the first
few terms in the Taylor series expansion of I(a) about a = 0. The latter approach is a little easier.
\[ I(a) = \sum_{n=0}^{\infty} \frac{I^{(n)}(0)}{n!} a^n \]
\[ I(a) = \frac{\pi a}{\sin(\pi a)} = \frac{\pi a}{\pi a - (\pi a)^3/6 + O(a^5)} = \frac{1}{1 - (\pi a)^2/6 + O(a^4)} = 1 + \frac{\pi^2 a^2}{6} + O(a^4) \]
\[ I'(0) = \int_0^\infty \frac{\log x}{(x+1)^2} \,dx = 0 \]
\[ I''(0) = \int_0^\infty \frac{\log^2 x}{(x+1)^2} \,dx = \frac{\pi^2}{3} \]
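A spot check of the keyhole-contour formula at a sample exponent, not part of the original text
(assumes Python with numpy and scipy):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

a = 0.5   # any value in (-1, 1)
val = quad(lambda x: x**a / (x + 1)**2, 0, 1)[0] + \
      quad(lambda x: x**a / (x + 1)**2, 1, np.inf)[0]
print(val, np.pi * a / np.sin(np.pi * a))   # both approximately 1.570796
\end{verbatim}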
Solution 13.27
1. We consider the integral
\[ I(a) = \int_0^\infty \frac{x^a}{1 + x^2} \,dx. \]
To examine convergence, we split the domain of integration.
\[ \int_0^\infty \frac{x^a}{1 + x^2} \,dx = \int_0^1 \frac{x^a}{1 + x^2} \,dx + \int_1^\infty \frac{x^a}{1 + x^2} \,dx \]
First we work with the integral on (0 . . . 1).
\[ \left| \int_0^1 \frac{x^a}{1 + x^2} \,dx \right| \leq \int_0^1 \left| \frac{x^a}{1 + x^2} \right| |dx| = \int_0^1 \frac{x^{\Re(a)}}{1 + x^2} \,dx \leq \int_0^1 x^{\Re(a)} \,dx \]
This integral converges for \(\Re(a) > -1\).
Next we work with the integral on (1 . . . ∞).
\[ \left| \int_1^\infty \frac{x^a}{1 + x^2} \,dx \right| \leq \int_1^\infty \left| \frac{x^a}{1 + x^2} \right| |dx| = \int_1^\infty \frac{x^{\Re(a)}}{1 + x^2} \,dx \leq \int_1^\infty x^{\Re(a)-2} \,dx \]
This integral converges for \(\Re(a) < 1\).
Thus we see that the integral defining I(a) converges in the strip −1 < ℜ(a) < 1. The integral
converges uniformly in any closed subset of this domain. Uniform convergence means that we
can differentiate the integral with respect to a and interchange the order of integration and
differentiation.
\[ I'(a) = \int_0^\infty \frac{x^a \log x}{1 + x^2} \,dx \]
Thus we see that I(a) is analytic for −1 < ℜ(a) < 1.
2. For −1 < ℜ(a) < 1 and a ≠ 0, \(z^a\) is multi-valued. Consider the branch of the function
\(f(z) = z^a/(1 + z^2)\) with a branch cut on the positive real axis and 0 < arg(z) < 2π. We
integrate along the contour in Figure 13.11.
Figure 13.11:
The integral on Cρ vanishes as ρ → 0. We show this with the maximum modulus integral
bound. First we write \(z^a\) in modulus-argument form, where \(z = \rho e^{\imath\theta}\) and a = α + ıβ.
\[ z^a = e^{a\log z} = e^{(\alpha+\imath\beta)(\log\rho + \imath\theta)} = e^{\alpha\log\rho - \beta\theta + \imath(\beta\log\rho + \alpha\theta)} = \rho^\alpha e^{-\beta\theta} e^{\imath(\beta\log\rho + \alpha\theta)} \]
Now we bound the integral.
\[ \left| \int_{C_\rho} \frac{z^a}{1 + z^2} \,dz \right| \leq 2\pi\rho \max_{z \in C_\rho} \left| \frac{z^a}{1 + z^2} \right| \leq 2\pi\rho\,\frac{\rho^\alpha e^{2\pi|\beta|}}{1 - \rho^2} \to 0 \text{ as } \rho \to 0 \]
The integral on CR vanishes as R → ∞.
\[ \left| \int_{C_R} \frac{z^a}{1 + z^2} \,dz \right| \leq 2\pi R \max_{z \in C_R} \left| \frac{z^a}{1 + z^2} \right| \leq 2\pi R\,\frac{R^\alpha e^{2\pi|\beta|}}{R^2 - 1} \to 0 \text{ as } R \to \infty \]
Above the branch cut, \((z = r e^{\imath 0})\), the integrand is
\[ f(r e^{\imath 0}) = \frac{r^a}{1 + r^2}. \]
Below the branch cut, \((z = r e^{\imath 2\pi})\), we have,
\[ f(r e^{\imath 2\pi}) = \frac{e^{\imath 2\pi a} r^a}{1 + r^2}. \]
Now we use the residue theorem.
\begin{align*}
\int_0^\infty \frac{r^a}{1 + r^2} \,dr + \int_{\infty}^{0} \frac{e^{\imath 2\pi a} r^a}{1 + r^2} \,dr
&= \imath 2\pi \left( \operatorname{Res}\left( \frac{z^a}{1 + z^2}, \imath \right) + \operatorname{Res}\left( \frac{z^a}{1 + z^2}, -\imath \right) \right) \\
\left( 1 - e^{\imath 2\pi a} \right) \int_0^\infty \frac{x^a}{1 + x^2} \,dx
&= \imath 2\pi \left( \lim_{z \to \imath} \frac{z^a}{z + \imath} + \lim_{z \to -\imath} \frac{z^a}{z - \imath} \right) \\
\left( 1 - e^{\imath 2\pi a} \right) \int_0^\infty \frac{x^a}{1 + x^2} \,dx
&= \imath 2\pi \left( \frac{e^{\imath a\pi/2}}{\imath 2} + \frac{e^{\imath a 3\pi/2}}{-\imath 2} \right) \\
\int_0^\infty \frac{x^a}{1 + x^2} \,dx
&= \pi\,\frac{e^{\imath a\pi/2} - e^{\imath a 3\pi/2}}{1 - e^{\imath 2a\pi}} \\
\int_0^\infty \frac{x^a}{1 + x^2} \,dx
&= \pi\,\frac{e^{\imath a\pi/2}\left( 1 - e^{\imath a\pi} \right)}{\left( 1 + e^{\imath a\pi} \right)\left( 1 - e^{\imath a\pi} \right)} \\
\int_0^\infty \frac{x^a}{1 + x^2} \,dx
&= \frac{\pi}{e^{-\imath a\pi/2} + e^{\imath a\pi/2}} \\
\int_0^\infty \frac{x^a}{1 + x^2} \,dx
&= \frac{\pi}{2\cos(\pi a/2)} \quad \text{for } -1 < \Re(a) < 1,\ a \neq 0
\end{align*}
We use analytic continuation to extend the answer to a = 0.
\[ I(a) = \int_0^\infty \frac{x^a}{1 + x^2} \,dx = \frac{\pi}{2\cos(\pi a/2)} \quad \text{for } -1 < \Re(a) < 1 \]
3. We can derive the last two integrals by differentiating this formula with respect to a and taking
the limit a → 0.
\[ I'(a) = \int_0^\infty \frac{x^a \log x}{1 + x^2} \,dx, \qquad I''(a) = \int_0^\infty \frac{x^a \log^2 x}{1 + x^2} \,dx \]
\[ I'(0) = \int_0^\infty \frac{\log x}{1 + x^2} \,dx, \qquad I''(0) = \int_0^\infty \frac{\log^2 x}{1 + x^2} \,dx \]
We can find I'(0) and I''(0) either by differentiating the expression for I(a) or by finding the
first few terms in the Taylor series expansion of I(a) about a = 0. The latter approach is a
little easier.
\[ I(a) = \sum_{n=0}^{\infty} \frac{I^{(n)}(0)}{n!} a^n \]
\begin{align*}
I(a) &= \frac{\pi}{2\cos(\pi a/2)} \\
&= \frac{\pi}{2}\,\frac{1}{1 - (\pi a/2)^2/2 + O(a^4)} \\
&= \frac{\pi}{2} \left( 1 + \frac{(\pi a/2)^2}{2} + O(a^4) \right) \\
&= \frac{\pi}{2} + \frac{\pi^3/8}{2} a^2 + O(a^4)
\end{align*}
\[ I'(0) = \int_0^\infty \frac{\log x}{1 + x^2} \,dx = 0 \]
\[ I''(0) = \int_0^\infty \frac{\log^2 x}{1 + x^2} \,dx = \frac{\pi^3}{8} \]
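The formula for I(a) can be spot-checked numerically; the snippet below is not part of the original
solution (it assumes Python with numpy and scipy):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

a = 1.0 / 3.0   # any value in (-1, 1)
val, _ = quad(lambda x: x**a / (1 + x**2), 0, np.inf)
print(val, np.pi / (2 * np.cos(np.pi * a / 2)))   # both approximately 1.813799
\end{verbatim}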
Solution 13.28
Convergence. If \(x^a f(x) \sim x^\alpha\) as x → 0 for some α > −1 then the integral
\[ \int_0^1 x^a f(x) \,dx \]
will converge absolutely. If \(x^a f(x) \sim x^\beta\) as x → ∞ for some β < −1 then the integral
\[ \int_1^\infty x^a f(x) \,dx \]
will converge absolutely. These are sufficient conditions for the absolute convergence of
\[ \int_0^\infty x^a f(x) \,dx. \]
Contour Integration. We put a branch cut on the positive real axis and choose 0 < arg(z) < 2π.
We consider the integral of \(z^a f(z)\) on the contour in Figure ??. Let the singularities of f(z)
occur at z1, . . . , zn. By the residue theorem,
\[ \int_C z^a f(z) \,dz = \imath 2\pi \sum_{k=1}^{n} \operatorname{Res}\left( z^a f(z), z_k \right). \]
On the circle of radius ε, the integrand is \(o(\epsilon^{-1})\). Since the length of Cε is 2πε, the integral on
Cε vanishes as ε → 0. On the circle of radius R, the integrand is \(o(R^{-1})\). Since the length of CR is
2πR, the integral on CR vanishes as R → ∞.
The value of the integrand below the branch cut, \(z = x e^{\imath 2\pi}\), is
\[ f(x e^{\imath 2\pi}) = x^a e^{\imath 2\pi a} f(x). \]
In the limit as ε → 0 and R → ∞ we have
\[ \int_0^\infty x^a f(x) \,dx + \int_{\infty}^{0} x^a e^{\imath 2\pi a} f(x) \,dx = \imath 2\pi \sum_{k=1}^{n} \operatorname{Res}\left( z^a f(z), z_k \right). \]
\[ \int_0^\infty x^a f(x) \,dx = \frac{\imath 2\pi}{1 - e^{\imath 2\pi a}} \sum_{k=1}^{n} \operatorname{Res}\left( z^a f(z), z_k \right). \]
Solution 13.29
In the interval of uniform convergence of the integral, we can differentiate the formula
\[ \int_0^\infty x^a f(x) \,dx = \frac{\imath 2\pi}{1 - e^{\imath 2\pi a}} \sum_{k=1}^{n} \operatorname{Res}\left( z^a f(z), z_k \right) \]
with respect to a to obtain,
\[ \int_0^\infty x^a f(x) \log x \,dx = \frac{\imath 2\pi}{1 - e^{\imath 2\pi a}} \sum_{k=1}^{n} \operatorname{Res}\left( z^a f(z) \log z, z_k \right)
- \frac{4\pi^2 e^{\imath 2\pi a}}{\left( 1 - e^{\imath 2\pi a} \right)^2} \sum_{k=1}^{n} \operatorname{Res}\left( z^a f(z), z_k \right). \]
\[ \int_0^\infty x^a f(x) \log x \,dx = \frac{\imath 2\pi}{1 - e^{\imath 2\pi a}} \sum_{k=1}^{n} \operatorname{Res}\left( z^a f(z) \log z, z_k \right)
+ \frac{\pi^2}{\sin^2(\pi a)} \sum_{k=1}^{n} \operatorname{Res}\left( z^a f(z), z_k \right) \]
Differentiating the solution of Exercise 13.26 m times with respect to a yields
\[ \int_0^\infty x^a f(x) \log^m x \,dx = \frac{\partial^m}{\partial a^m} \left( \frac{\imath 2\pi}{1 - e^{\imath 2\pi a}} \sum_{k=1}^{n} \operatorname{Res}\left( z^a f(z), z_k \right) \right). \]
Solution 13.30
Taking the limit as a → 0 in the solution of Exercise 13.26 yields
\[ \int_0^\infty f(x) \,dx = \imath 2\pi \lim_{a \to 0} \frac{\sum_{k=1}^{n} \operatorname{Res}\left( z^a f(z), z_k \right)}{1 - e^{\imath 2\pi a}} \]
The numerator vanishes because the sum of all residues of f(z) is zero. Thus we can use
L'Hospital's rule.
\[ \int_0^\infty f(x) \,dx = \imath 2\pi \lim_{a \to 0} \frac{\sum_{k=1}^{n} \operatorname{Res}\left( z^a f(z) \log z, z_k \right)}{-\imath 2\pi e^{\imath 2\pi a}} \]
\[ \int_0^\infty f(x) \,dx = -\sum_{k=1}^{n} \operatorname{Res}\left( f(z) \log z, z_k \right) \]
This suggests that we could have derived the result directly by considering the integral of f(z) log z
on the contour in Figure ??. We put a branch cut on the positive real axis and choose the branch
with 0 < arg z < 2π. Recall that we have assumed that f(z) has only isolated singularities and no
singularities on the positive real axis, [0, ∞). By the residue theorem,
\[ \int_C f(z) \log z \,dz = \imath 2\pi \sum_{k=1}^{n} \operatorname{Res}\left( f(z) \log z, z = z_k \right). \]
By assuming that \(f(z) \sim z^\alpha\) as z → 0 where α > −1 the integral on Cε will vanish as ε → 0. By
assuming that \(f(z) \sim z^\beta\) as z → ∞ where β < −1 the integral on CR will vanish as R → ∞. The
value of the integrand below the branch cut, \(z = x e^{\imath 2\pi}\), is f(x)(log x + ı2π). Taking the limit as
ε → 0 and R → ∞, we have
\[ \int_0^\infty f(x) \log x \,dx + \int_{\infty}^{0} f(x)\left( \log x + \imath 2\pi \right) dx = \imath 2\pi \sum_{k=1}^{n} \operatorname{Res}\left( f(z) \log z, z_k \right). \]
Thus we corroborate the result.
\[ \int_0^\infty f(x) \,dx = -\sum_{k=1}^{n} \operatorname{Res}\left( f(z) \log z, z_k \right) \]
Solution 13.31
Consider the integral of \(f(z)\log^2 z\) on the contour in Figure ??. We put a branch cut on the positive
real axis and choose the branch 0 < arg z < 2π. Let z1, . . . , zn be the singularities of f(z). By the
residue theorem,
\[ \int_C f(z)\log^2 z \,dz = \imath 2\pi \sum_{k=1}^{n} \operatorname{Res}\left( f(z)\log^2 z, z_k \right). \]
If \(f(z) \sim z^\alpha\) as z → 0 for some α > −1 then the integral on Cε will vanish as ε → 0. If \(f(z) \sim z^\beta\)
as z → ∞ for some β < −1 then the integral on CR will vanish as R → ∞. Below the branch cut the
integrand is \(f(x)(\log x + \imath 2\pi)^2\). Thus we have
\[ \int_0^\infty f(x)\log^2 x \,dx + \int_{\infty}^{0} f(x)\left( \log^2 x + \imath 4\pi\log x - 4\pi^2 \right) dx = \imath 2\pi \sum_{k=1}^{n} \operatorname{Res}\left( f(z)\log^2 z, z_k \right). \]
\[ -\imath 4\pi \int_0^\infty f(x)\log x \,dx + 4\pi^2 \int_0^\infty f(x) \,dx = \imath 2\pi \sum_{k=1}^{n} \operatorname{Res}\left( f(z)\log^2 z, z_k \right). \]
\[ \int_0^\infty f(x)\log x \,dx = -\frac{1}{2} \sum_{k=1}^{n} \operatorname{Res}\left( f(z)\log^2 z, z_k \right) + \imath\pi \sum_{k=1}^{n} \operatorname{Res}\left( f(z)\log z, z_k \right) \]
Figure 13.12: Possible path of integration for \(f(z) = z^a/(1+z^4)\).
Solution 13.32
Convergence. We consider
\[ \int_0^\infty \frac{x^a}{1 + x^4} \,dx. \]
Since the integrand behaves like \(x^a\) near x = 0 we must have ℜ(a) > −1. Since the integrand behaves
like \(x^{a-4}\) at infinity we must have ℜ(a − 4) < −1. The integral converges for −1 < ℜ(a) < 3.
Contour Integration. The function
\[ f(z) = \frac{z^a}{1 + z^4} \]
has first order poles at z = (±1 ± ı)/√2 and a branch point at z = 0. We could evaluate the real
integral by putting a branch cut on the positive real axis with 0 < arg(z) < 2π and integrating f(z)
on the contour in Figure 13.12.
Integrating on this contour would work because the value of the integrand below the branch cut
is a constant times the value of the integrand above the branch cut. After demonstrating that the
integrals along Cε and CR vanish in the limits as ε → 0 and R → ∞ we would see that the value of
the integral is a constant times the sum of the residues at the four poles. However, this is not the
only, (and not the best), contour that can be used to evaluate the real integral. Consider the value
of the integral on the line arg(z) = θ.
\[ f(r e^{\imath\theta}) = \frac{r^a e^{\imath a\theta}}{1 + r^4 e^{\imath 4\theta}} \]
If θ is an integer multiple of π/2 then the integrand is a constant multiple of
\[ f(r) = \frac{r^a}{1 + r^4}. \]
Thus any of the contours in Figure 13.13 can be used to evaluate the real integral. The only difference
is how many residues we have to calculate. Thus we choose the first contour in Figure 13.13. We put
a branch cut on the negative real axis and choose the branch −π < arg(z) < π to satisfy f(1) = 1.
We evaluate the integral along C with the Residue Theorem.
\[ \int_C \frac{z^a}{1 + z^4} \,dz = \imath 2\pi \operatorname{Res}\left( \frac{z^a}{1 + z^4}, z = \frac{1 + \imath}{\sqrt{2}} \right) \]
Let a = α + ıβ and \(z = r e^{\imath\theta}\). Note that
\[ |z^a| = \left| \left( r e^{\imath\theta} \right)^{\alpha+\imath\beta} \right| = r^\alpha e^{-\beta\theta}. \]
Figure 13.13: Possible paths of integration for \(f(z) = z^a/(1+z^4)\).
The integral on Cε vanishes as ε → 0. We demonstrate this with the maximum modulus integral
bound.
\[ \left| \int_{C_\epsilon} \frac{z^a}{1 + z^4} \,dz \right| \leq \frac{\pi\epsilon}{2} \max_{z \in C_\epsilon} \left| \frac{z^a}{1 + z^4} \right| \leq \frac{\pi\epsilon}{2}\,\frac{\epsilon^\alpha e^{\pi|\beta|/2}}{1 - \epsilon^4} \to 0 \text{ as } \epsilon \to 0 \]
The integral on CR vanishes as R → ∞.
\[ \left| \int_{C_R} \frac{z^a}{1 + z^4} \,dz \right| \leq \frac{\pi R}{2} \max_{z \in C_R} \left| \frac{z^a}{1 + z^4} \right| \leq \frac{\pi R}{2}\,\frac{R^\alpha e^{\pi|\beta|/2}}{R^4 - 1} \to 0 \text{ as } R \to \infty \]
The value of the integrand on the positive imaginary axis, \(z = x e^{\imath\pi/2}\), is
\[ \frac{\left( x e^{\imath\pi/2} \right)^a}{1 + \left( x e^{\imath\pi/2} \right)^4} = \frac{x^a e^{\imath\pi a/2}}{1 + x^4}. \]
We take the limit as ε → 0 and R → ∞.
\begin{align*}
\int_0^\infty \frac{x^a}{1 + x^4} \,dx + \int_{\infty}^{0} \frac{x^a e^{\imath\pi a/2}}{1 + x^4} e^{\imath\pi/2} \,dx
&= \imath 2\pi \operatorname{Res}\left( \frac{z^a}{1 + z^4}, e^{\imath\pi/4} \right) \\
\left( 1 - e^{\imath\pi(a+1)/2} \right) \int_0^\infty \frac{x^a}{1 + x^4} \,dx
&= \imath 2\pi \lim_{z \to e^{\imath\pi/4}} \frac{z^a\left( z - e^{\imath\pi/4} \right)}{1 + z^4} \\
\int_0^\infty \frac{x^a}{1 + x^4} \,dx
&= \frac{\imath 2\pi}{1 - e^{\imath\pi(a+1)/2}} \lim_{z \to e^{\imath\pi/4}} \frac{a z^a\left( z - e^{\imath\pi/4} \right) + z^a}{4z^3} \\
\int_0^\infty \frac{x^a}{1 + x^4} \,dx
&= \frac{\imath 2\pi}{1 - e^{\imath\pi(a+1)/2}}\,\frac{e^{\imath\pi a/4}}{4 e^{\imath 3\pi/4}} \\
\int_0^\infty \frac{x^a}{1 + x^4} \,dx
&= \frac{-\imath\pi}{2\left( e^{-\imath\pi(a+1)/4} - e^{\imath\pi(a+1)/4} \right)} \\
\int_0^\infty \frac{x^a}{1 + x^4} \,dx
&= \frac{\pi}{4}\csc\!\left( \frac{\pi(a+1)}{4} \right)
\end{align*}
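A quick numerical confirmation at a sample exponent, not part of the original solution (assumes
Python with numpy and scipy):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

a = 1.0   # any value in (-1, 3)
val, _ = quad(lambda x: x**a / (1 + x**4), 0, np.inf)
print(val, (np.pi / 4) / np.sin(np.pi * (a + 1) / 4))   # both approximately 0.785398
\end{verbatim}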
Solution 13.33
Consider the branch of \(f(z) = z^{1/2}\log z/(z+1)^2\) with a branch cut on the positive real axis and
0 < arg z < 2π. We integrate this function on the contour in Figure ??.
We use the maximum modulus integral bound to show that the integral on Cρ vanishes as ρ → 0.
\[ \left| \int_{C_\rho} \frac{z^{1/2}\log z}{(z+1)^2} \,dz \right| \leq 2\pi\rho \max_{C_\rho} \left| \frac{z^{1/2}\log z}{(z+1)^2} \right| = 2\pi\rho\,\frac{\rho^{1/2}(2\pi - \log\rho)}{(1 - \rho)^2} \to 0 \text{ as } \rho \to 0 \]
The integral on CR vanishes as R → ∞.
\[ \left| \int_{C_R} \frac{z^{1/2}\log z}{(z+1)^2} \,dz \right| \leq 2\pi R \max_{C_R} \left| \frac{z^{1/2}\log z}{(z+1)^2} \right| = 2\pi R\,\frac{R^{1/2}(\log R + 2\pi)}{(R - 1)^2} \to 0 \text{ as } R \to \infty \]
Above the branch cut, \((z = x e^{\imath 0})\), the integrand is,
\[ f(x e^{\imath 0}) = \frac{x^{1/2}\log x}{(x+1)^2}. \]
Below the branch cut, \((z = x e^{\imath 2\pi})\), we have,
\[ f(x e^{\imath 2\pi}) = \frac{-x^{1/2}(\log x + \imath 2\pi)}{(x+1)^2}. \]
Taking the limit as ρ → 0 and R → ∞, the residue theorem gives us
\[ \int_0^\infty \frac{x^{1/2}\log x}{(x+1)^2} \,dx + \int_{\infty}^{0} \frac{-x^{1/2}(\log x + \imath 2\pi)}{(x+1)^2} \,dx = \imath 2\pi \operatorname{Res}\left( \frac{z^{1/2}\log z}{(z+1)^2}, -1 \right). \]
\begin{align*}
2\int_0^\infty \frac{x^{1/2}\log x}{(x+1)^2} \,dx + \imath 2\pi \int_0^\infty \frac{x^{1/2}}{(x+1)^2} \,dx
&= \imath 2\pi \lim_{z \to -1} \frac{d}{dz}\left( z^{1/2}\log z \right) \\
&= \imath 2\pi \lim_{z \to -1} \left( \frac{1}{2} z^{-1/2}\log z + z^{1/2}\,\frac{1}{z} \right) \\
&= \imath 2\pi \left( \frac{1}{2}(-\imath)(\imath\pi) - \imath \right) \\
&= 2\pi + \imath\pi^2
\end{align*}
Equating real and imaginary parts,
\[ \int_0^\infty \frac{x^{1/2}\log x}{(x+1)^2} \,dx = \pi, \qquad \int_0^\infty \frac{x^{1/2}}{(x+1)^2} \,dx = \frac{\pi}{2}. \]
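Both values can be confirmed numerically; the following sketch is not part of the original text
(assumes Python with numpy and scipy):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

g = lambda x: np.sqrt(x) * np.log(x) / (x + 1)**2
v1 = quad(g, 0, 1)[0] + quad(g, 1, np.inf)[0]
print(v1, np.pi)        # both approximately 3.141593

h = lambda x: np.sqrt(x) / (x + 1)**2
v2, _ = quad(h, 0, np.inf)
print(v2, np.pi / 2)    # both approximately 1.570796
\end{verbatim}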
Exploiting Symmetry
Solution 13.34
Convergence. The integrand,
\[ \frac{e^{az}}{e^z - e^{-z}} = \frac{e^{az}}{2\sinh z}, \]
has first order poles at z = ınπ, n ∈ Z. To study convergence, we split the domain of integration.
\[ \int_{-\infty}^{\infty} = \int_{-\infty}^{-1} + \int_{-1}^{1} + \int_{1}^{\infty} \]
The principal value integral
\[ \mathrm{PV}\!\int_{-1}^{1} \frac{e^{ax}}{e^x - e^{-x}} \,dx \]
exists for any a because the integrand has only a first order pole on the path of integration.
Now consider the integral on (1 . . . ∞).
\[ \left| \int_1^\infty \frac{e^{ax}}{e^x - e^{-x}} \,dx \right| = \int_1^\infty \frac{e^{(a-1)x}}{1 - e^{-2x}} \,dx \leq \frac{1}{1 - e^{-2}} \int_1^\infty e^{(a-1)x} \,dx \]
This integral converges for a − 1 < 0; a < 1.
Finally consider the integral on (−∞ . . . −1).
\[ \left| \int_{-\infty}^{-1} \frac{e^{ax}}{e^x - e^{-x}} \,dx \right| = \int_{-\infty}^{-1} \frac{e^{(a+1)x}}{1 - e^{2x}} \,dx \leq \frac{1}{1 - e^{-2}} \int_{-\infty}^{-1} e^{(a+1)x} \,dx \]
This integral converges for a + 1 > 0; a > −1.
Thus we see that the integral for I(a) converges for real a, |a| < 1.
Choice of Contour. Consider the contour C that is the boundary of the region −R < x < R,
0 < y < π. The integrand has no singularities inside the contour. There are first order poles on the
contour at z = 0 and z = ıπ. The value of the integral along the contour is ıπ times the sum of
these two residues.
The integrals along the vertical sides of the contour vanish as R → ∞.
\[ \left| \int_R^{R+\imath\pi} \frac{e^{az}}{e^z - e^{-z}} \,dz \right| \leq \pi \max_{z \in (R \ldots R+\imath\pi)} \left| \frac{e^{az}}{e^z - e^{-z}} \right| \leq \pi\,\frac{e^{aR}}{e^R - e^{-R}} \to 0 \text{ as } R \to \infty \]
\[ \left| \int_{-R}^{-R+\imath\pi} \frac{e^{az}}{e^z - e^{-z}} \,dz \right| \leq \pi \max_{z \in (-R \ldots -R+\imath\pi)} \left| \frac{e^{az}}{e^z - e^{-z}} \right| \leq \pi\,\frac{e^{-aR}}{e^R - e^{-R}} \to 0 \text{ as } R \to \infty \]
Evaluating the Integral. We take the limit as R → ∞ and apply the residue theorem.
\[ \mathrm{PV}\!\int_{-\infty}^{\infty} \frac{e^{ax}}{e^x - e^{-x}} \,dx + \mathrm{PV}\!\int_{\infty+\imath\pi}^{-\infty+\imath\pi} \frac{e^{az}}{e^z - e^{-z}} \,dz
= \imath\pi \operatorname{Res}\left( \frac{e^{az}}{e^z - e^{-z}}, z = 0 \right) + \imath\pi \operatorname{Res}\left( \frac{e^{az}}{e^z - e^{-z}}, z = \imath\pi \right) \]
\begin{align*}
\mathrm{PV}\!\int_{-\infty}^{\infty} \frac{e^{ax}}{e^x - e^{-x}} \,dx + \mathrm{PV}\!\int_{\infty}^{-\infty} \frac{e^{a(x+\imath\pi)}}{e^{x+\imath\pi} - e^{-x-\imath\pi}} \,dx
&= \imath\pi \lim_{z \to 0} \frac{z e^{az}}{2\sinh z} + \imath\pi \lim_{z \to \imath\pi} \frac{(z - \imath\pi) e^{az}}{2\sinh z} \\
\left( 1 + e^{\imath a\pi} \right) \mathrm{PV}\!\int_{-\infty}^{\infty} \frac{e^{ax}}{e^x - e^{-x}} \,dx
&= \imath\pi \lim_{z \to 0} \frac{e^{az} + az e^{az}}{2\cosh z} + \imath\pi \lim_{z \to \imath\pi} \frac{e^{az} + a(z - \imath\pi) e^{az}}{2\cosh z} \\
\left( 1 + e^{\imath a\pi} \right) \mathrm{PV}\!\int_{-\infty}^{\infty} \frac{e^{ax}}{e^x - e^{-x}} \,dx
&= \imath\pi\,\frac{1}{2} + \imath\pi\,\frac{e^{\imath a\pi}}{-2} \\
\mathrm{PV}\!\int_{-\infty}^{\infty} \frac{e^{ax}}{e^x - e^{-x}} \,dx
&= \frac{\imath\pi\left( 1 - e^{\imath a\pi} \right)}{2\left( 1 + e^{\imath a\pi} \right)} \\
\mathrm{PV}\!\int_{-\infty}^{\infty} \frac{e^{ax}}{e^x - e^{-x}} \,dx
&= \frac{\pi}{2}\,\frac{\imath\left( e^{-\imath a\pi/2} - e^{\imath a\pi/2} \right)}{e^{-\imath a\pi/2} + e^{\imath a\pi/2}} \\
\mathrm{PV}\!\int_{-\infty}^{\infty} \frac{e^{ax}}{e^x - e^{-x}} \,dx
&= \frac{\pi}{2}\tan\!\left( \frac{a\pi}{2} \right)
\end{align*}
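Since folding x → −x turns the principal value into a proper integral of sinh(ax)/sinh(x), which is
smooth at x = 0, the result is easy to check numerically. The sketch is not part of the original
solution (it assumes Python with numpy and scipy):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

a = 0.5
# PV integral of e^{ax}/(e^x - e^{-x}) equals the proper integral of sinh(ax)/sinh(x) on (0, oo).
val, _ = quad(lambda x: np.sinh(a * x) / np.sinh(x), 0, np.inf)
print(val, (np.pi / 2) * np.tan(np.pi * a / 2))   # both approximately 1.570796
\end{verbatim}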
Solution 13.35
1.
\[ \int_0^\infty \frac{dx}{(1 + x^2)^2} = \frac{1}{2} \int_{-\infty}^{\infty} \frac{dx}{(1 + x^2)^2} \]
We apply Result 13.4.1 to the integral on the real axis. First we verify that the integrand
vanishes fast enough in the upper half plane.
\[ \lim_{R \to \infty} R \max_{z \in C_R} \left| \frac{1}{(1 + z^2)^2} \right| = \lim_{R \to \infty} R\,\frac{1}{(R^2 - 1)^2} = 0 \]
Then we evaluate the integral with the residue theorem.
\begin{align*}
\int_{-\infty}^{\infty} \frac{dx}{(1 + x^2)^2}
&= \imath 2\pi \operatorname{Res}\left( \frac{1}{(1 + z^2)^2}, z = \imath \right) \\
&= \imath 2\pi \operatorname{Res}\left( \frac{1}{(z - \imath)^2 (z + \imath)^2}, z = \imath \right) \\
&= \imath 2\pi \lim_{z \to \imath} \frac{d}{dz} \frac{1}{(z + \imath)^2} \\
&= \imath 2\pi \lim_{z \to \imath} \frac{-2}{(z + \imath)^3} \\
&= \frac{\pi}{2}
\end{align*}
\[ \int_0^\infty \frac{dx}{(1 + x^2)^2} = \frac{\pi}{4} \]
2. We wish to evaluate
\[ \int_0^\infty \frac{dx}{x^3 + 1}. \]
Let the contour C be the boundary of the region 0 < r < R, 0 < θ < 2π/3. We factor the
denominator of the integrand to see that the contour encloses the simple pole at \(e^{\imath\pi/3}\) for
R > 1.
\[ z^3 + 1 = \left( z - e^{\imath\pi/3} \right)(z + 1)\left( z - e^{-\imath\pi/3} \right) \]
We calculate the residue at that point.
\begin{align*}
\operatorname{Res}\left( \frac{1}{z^3 + 1}, z = e^{\imath\pi/3} \right)
&= \lim_{z \to e^{\imath\pi/3}} \left( z - e^{\imath\pi/3} \right)\frac{1}{z^3 + 1} \\
&= \lim_{z \to e^{\imath\pi/3}} \frac{1}{(z + 1)\left( z - e^{-\imath\pi/3} \right)} \\
&= \frac{1}{\left( e^{\imath\pi/3} + 1 \right)\left( e^{\imath\pi/3} - e^{-\imath\pi/3} \right)} \\
&= -\frac{e^{\imath\pi/3}}{3}
\end{align*}
We use the residue theorem to evaluate the integral.
\[ \int_C \frac{dz}{z^3 + 1} = -\frac{\imath 2\pi e^{\imath\pi/3}}{3} \]
Let CR be the circular arc portion of the contour.
\begin{align*}
\int_C \frac{dz}{z^3 + 1}
&= \int_0^R \frac{dx}{x^3 + 1} + \int_{C_R} \frac{dz}{z^3 + 1} - \int_0^R \frac{e^{\imath 2\pi/3}}{x^3 + 1} \,dx \\
&= \left( 1 + e^{-\imath\pi/3} \right) \int_0^R \frac{dx}{x^3 + 1} + \int_{C_R} \frac{dz}{z^3 + 1}
\end{align*}
We show that the integral along CR vanishes as R → ∞ with the maximum modulus integral
bound.
\[ \left| \int_{C_R} \frac{dz}{z^3 + 1} \right| \leq \frac{2\pi R}{3}\,\frac{1}{R^3 - 1} \to 0 \text{ as } R \to \infty \]
We take R → ∞ and solve for the desired integral.
\[ \left( 1 + e^{-\imath\pi/3} \right) \int_0^\infty \frac{dx}{x^3 + 1} = -\frac{\imath 2\pi e^{\imath\pi/3}}{3} \]
\[ \int_0^\infty \frac{dx}{x^3 + 1} = \frac{2\pi}{3\sqrt{3}} \]
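A numerical cross-check of both parts, not in the original text (assumes Python with numpy and
scipy):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

v1, _ = quad(lambda x: 1 / (1 + x**2)**2, 0, np.inf)
print(v1, np.pi / 4)                      # both approximately 0.785398

v2, _ = quad(lambda x: 1 / (1 + x**3), 0, np.inf)
print(v2, 2 * np.pi / (3 * np.sqrt(3)))   # both approximately 1.209200
\end{verbatim}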
Solution 13.36
Method 1: Semi-Circle Contour. We wish to evaluate the integral
\[ I = \int_0^\infty \frac{dx}{1 + x^6}. \]
We note that the integrand is an even function and express I as an integral over the whole real axis.
\[ I = \frac{1}{2} \int_{-\infty}^{\infty} \frac{dx}{1 + x^6} \]
Now we will evaluate the integral using contour integration. We close the path of integration in the
upper half plane. Let ΓR be the semicircular arc from R to −R in the upper half plane. Let Γ be
the union of ΓR and the interval [−R, R]. (See Figure 13.14.)
Figure 13.14: The semi-circle contour.
We can evaluate the integral along Γ with the residue theorem. The integrand has first order
poles at \(z = e^{\imath\pi(1+2k)/6}\), k = 0, 1, 2, 3, 4, 5. Three of these poles are in the upper half plane. For
R > 1, we have
\begin{align*}
\int_\Gamma \frac{1}{z^6 + 1} \,dz
&= \imath 2\pi \sum_{k=0}^{2} \operatorname{Res}\left( \frac{1}{z^6 + 1}, e^{\imath\pi(1+2k)/6} \right) \\
&= \imath 2\pi \sum_{k=0}^{2} \lim_{z \to e^{\imath\pi(1+2k)/6}} \frac{z - e^{\imath\pi(1+2k)/6}}{z^6 + 1}
\end{align*}
Since the numerator and denominator vanish, we apply L'Hospital's rule.
\begin{align*}
&= \imath 2\pi \sum_{k=0}^{2} \lim_{z \to e^{\imath\pi(1+2k)/6}} \frac{1}{6z^5} \\
&= \frac{\imath\pi}{3} \sum_{k=0}^{2} e^{-\imath\pi 5(1+2k)/6} \\
&= \frac{\imath\pi}{3} \left( e^{-\imath\pi 5/6} + e^{-\imath\pi 15/6} + e^{-\imath\pi 25/6} \right) \\
&= \frac{\imath\pi}{3} \left( e^{-\imath\pi 5/6} + e^{-\imath\pi/2} + e^{-\imath\pi/6} \right) \\
&= \frac{\imath\pi}{3} \left( \frac{-\sqrt{3} - \imath}{2} - \imath + \frac{\sqrt{3} - \imath}{2} \right) \\
&= \frac{2\pi}{3}
\end{align*}
Now we examine the integral along ΓR. We use the maximum modulus integral bound to show that
the value of the integral vanishes as R → ∞.
\[ \left| \int_{\Gamma_R} \frac{1}{z^6 + 1} \,dz \right| \leq \pi R \max_{z \in \Gamma_R} \left| \frac{1}{z^6 + 1} \right| = \pi R\,\frac{1}{R^6 - 1} \to 0 \text{ as } R \to \infty. \]
Now we are prepared to evaluate the original real integral.
\[ \int_\Gamma \frac{1}{z^6 + 1} \,dz = \frac{2\pi}{3} \]
\[ \int_{-R}^{R} \frac{1}{x^6 + 1} \,dx + \int_{\Gamma_R} \frac{1}{z^6 + 1} \,dz = \frac{2\pi}{3} \]
We take the limit as R → ∞.
\[ \int_{-\infty}^{\infty} \frac{1}{x^6 + 1} \,dx = \frac{2\pi}{3} \]
\[ \int_0^\infty \frac{1}{x^6 + 1} \,dx = \frac{\pi}{3} \]
We would get the same result by closing the path of integration in the lower half plane. Note that
in this case the closed contour would be in the negative direction.
Method 2: Wedge Contour. Consider the contour Γ, which starts at the origin, goes to the
point R along the real axis, then to the point \(R e^{\imath\pi/3}\) along a circle of radius R and then back to the
origin along the ray θ = π/3. (See Figure 13.15.)
Figure 13.15: The wedge contour.
We can evaluate the integral along Γ with the residue theorem. The integrand has one first order
pole inside the contour at \(z = e^{\imath\pi/6}\). For R > 1, we have
\[ \int_\Gamma \frac{1}{z^6 + 1} \,dz = \imath 2\pi \operatorname{Res}\left( \frac{1}{z^6 + 1}, e^{\imath\pi/6} \right) = \imath 2\pi \lim_{z \to e^{\imath\pi/6}} \frac{z - e^{\imath\pi/6}}{z^6 + 1} \]
Since the numerator and denominator vanish, we apply L'Hospital's rule.
\[ = \imath 2\pi \lim_{z \to e^{\imath\pi/6}} \frac{1}{6z^5} = \frac{\imath\pi}{3} e^{-\imath\pi 5/6} = \frac{\pi}{3} e^{-\imath\pi/3} \]
Now we examine the integral along the circular arc, ΓR. We use the maximum modulus integral
bound to show that the value of the integral vanishes as R → ∞.
\[ \left| \int_{\Gamma_R} \frac{1}{z^6 + 1} \,dz \right| \leq \frac{\pi R}{3} \max_{z \in \Gamma_R} \left| \frac{1}{z^6 + 1} \right| = \frac{\pi R}{3}\,\frac{1}{R^6 - 1} \to 0 \text{ as } R \to \infty. \]
Now we are prepared to evaluate the original real integral.
\[ \int_\Gamma \frac{1}{z^6 + 1} \,dz = \frac{\pi}{3} e^{-\imath\pi/3} \]
\[ \int_0^R \frac{1}{x^6 + 1} \,dx + \int_{\Gamma_R} \frac{1}{z^6 + 1} \,dz + \int_{R e^{\imath\pi/3}}^{0} \frac{1}{z^6 + 1} \,dz = \frac{\pi}{3} e^{-\imath\pi/3} \]
\[ \int_0^R \frac{1}{x^6 + 1} \,dx + \int_{\Gamma_R} \frac{1}{z^6 + 1} \,dz - \int_0^R \frac{e^{\imath\pi/3}}{x^6 + 1} \,dx = \frac{\pi}{3} e^{-\imath\pi/3} \]
We take the limit as R → ∞.
\[ \left( 1 - e^{\imath\pi/3} \right) \int_0^\infty \frac{1}{x^6 + 1} \,dx = \frac{\pi}{3} e^{-\imath\pi/3} \]
\[ \int_0^\infty \frac{1}{x^6 + 1} \,dx = \frac{\pi}{3}\,\frac{e^{-\imath\pi/3}}{1 - e^{\imath\pi/3}} \]
\[ \int_0^\infty \frac{1}{x^6 + 1} \,dx = \frac{\pi}{3}\,\frac{(1 - \imath\sqrt{3})/2}{1 - (1 + \imath\sqrt{3})/2} \]
\[ \int_0^\infty \frac{1}{x^6 + 1} \,dx = \frac{\pi}{3} \]
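Both methods agree with direct quadrature; the check below is not part of the original text
(assumes Python with numpy and scipy):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

val, _ = quad(lambda x: 1 / (1 + x**6), 0, np.inf)
print(val, np.pi / 3)   # both approximately 1.047198
\end{verbatim}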
Figure 13.16: cos(2θ) and 1 − (4/π)θ.
Solution 13.37
First note that
\[ \cos(2\theta) \geq 1 - \frac{4}{\pi}\theta, \qquad 0 \leq \theta \leq \frac{\pi}{4}. \]
These two functions are plotted in Figure 13.16. To prove this inequality analytically, note that the
two functions are equal at the endpoints of the interval and that cos(2θ) is concave downward on
the interval,
\[ \frac{d^2}{d\theta^2}\cos(2\theta) = -4\cos(2\theta) \leq 0 \quad \text{for } 0 \leq \theta \leq \frac{\pi}{4}, \]
while 1 − 4θ/π is linear.
Let CR be the quarter circle of radius R from θ = 0 to θ = π/4. The integral along this contour
vanishes as R → ∞.
\begin{align*}
\left| \int_{C_R} e^{-z^2} \,dz \right|
&\leq \int_0^{\pi/4} \left| e^{-(R e^{\imath\theta})^2} R\imath e^{\imath\theta} \right| d\theta \\
&\leq \int_0^{\pi/4} R e^{-R^2\cos(2\theta)} \,d\theta \\
&\leq \int_0^{\pi/4} R e^{-R^2(1 - 4\theta/\pi)} \,d\theta \\
&= R \left[ \frac{\pi}{4R^2} e^{-R^2(1 - 4\theta/\pi)} \right]_0^{\pi/4} \\
&= \frac{\pi}{4R} \left( 1 - e^{-R^2} \right) \\
&\to 0 \text{ as } R \to \infty
\end{align*}
Let C be the boundary of the domain 0 < r < R, 0 < θ < π/4. Since the integrand is analytic
inside C the integral along C is zero. Taking the limit as R → ∞, the integral from r = 0 to ∞
along θ = 0 is equal to the integral from r = 0 to ∞ along θ = π/4.
\[ \int_0^\infty e^{-x^2} \,dx = \int_0^\infty e^{-\left( \frac{1+\imath}{\sqrt{2}} x \right)^2} \frac{1 + \imath}{\sqrt{2}} \,dx \]
\[ \int_0^\infty e^{-x^2} \,dx = \frac{1 + \imath}{\sqrt{2}} \int_0^\infty e^{-\imath x^2} \,dx \]
\[ \int_0^\infty e^{-x^2} \,dx = \frac{1 + \imath}{\sqrt{2}} \int_0^\infty \left( \cos(x^2) - \imath\sin(x^2) \right) dx \]
\[ \int_0^\infty e^{-x^2} \,dx = \frac{1}{\sqrt{2}} \left( \int_0^\infty \cos(x^2) \,dx + \int_0^\infty \sin(x^2) \,dx \right)
+ \frac{\imath}{\sqrt{2}} \left( \int_0^\infty \cos(x^2) \,dx - \int_0^\infty \sin(x^2) \,dx \right) \]
We equate the imaginary part of this equation to see that the integrals of cos(x²) and sin(x²) are
equal.
\[ \int_0^\infty \cos(x^2) \,dx = \int_0^\infty \sin(x^2) \,dx \]
The real part of the equation then gives us the desired identity.
\[ \int_0^\infty \cos(x^2) \,dx = \int_0^\infty \sin(x^2) \,dx = \frac{1}{\sqrt{2}} \int_0^\infty e^{-x^2} \,dx \]
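The identity can be checked against SciPy's Fresnel integrals. The sketch below is not part of the
original solution; it assumes Python with numpy and scipy, and uses the convention that
scipy.special.fresnel returns (S, C) for the integrals of sin(πt²/2) and cos(πt²/2), which tend to 1/2.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import fresnel

gauss, _ = quad(lambda x: np.exp(-x**2), 0, np.inf)   # sqrt(pi)/2
S_inf, C_inf = fresnel(1e8)                           # effectively the limits, both ~1/2
cos_int = np.sqrt(np.pi / 2) * C_inf                  # integral of cos(x^2) on (0, oo)
sin_int = np.sqrt(np.pi / 2) * S_inf                  # integral of sin(x^2) on (0, oo)
print(cos_int, sin_int, gauss / np.sqrt(2))           # all approximately 0.626657
\end{verbatim}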
Solution 13.38
Consider the box contour C that is the boundary of the rectangle −R ≤ x ≤ R, 0 ≤ y ≤ π. There
is a removable singularity at z = 0 and a first order pole at z = ıπ. By the residue theorem,
\begin{align*}
\mathrm{PV}\!\int_C \frac{z}{\sinh z} \,dz
&= \imath\pi \operatorname{Res}\left( \frac{z}{\sinh z}, \imath\pi \right) \\
&= \imath\pi \lim_{z \to \imath\pi} \frac{z(z - \imath\pi)}{\sinh z} \\
&= \imath\pi \lim_{z \to \imath\pi} \frac{2z - \imath\pi}{\cosh z} \\
&= \pi^2
\end{align*}
The integrals along the sides of the box vanish as R → ∞.
\[ \left| \int_{\pm R}^{\pm R + \imath\pi} \frac{z}{\sinh z} \,dz \right| \leq \pi \max_{z \in [\pm R, \pm R + \imath\pi]} \left| \frac{z}{\sinh z} \right| \leq \pi\,\frac{R + \pi}{\sinh R} \to 0 \text{ as } R \to \infty \]
The value of the integrand on the top of the box is
\[ \frac{x + \imath\pi}{\sinh(x + \imath\pi)} = -\frac{x + \imath\pi}{\sinh x}. \]
Taking the limit as R → ∞,
\[ \int_{-\infty}^{\infty} \frac{x}{\sinh x} \,dx + \mathrm{PV}\!\int_{-\infty}^{\infty} \frac{x + \imath\pi}{\sinh x} \,dx = \pi^2. \]
Note that
\[ \mathrm{PV}\!\int_{-\infty}^{\infty} \frac{1}{\sinh x} \,dx = 0 \]
as there is a first order pole at x = 0 and the integrand is odd.
\[ \int_{-\infty}^{\infty} \frac{x}{\sinh x} \,dx = \frac{\pi^2}{2} \]
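Since x/sinh x is even with a removable singularity at the origin, the value is easy to confirm
numerically; this check is not part of the original text (assumes Python with numpy and scipy):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

val, _ = quad(lambda x: x / np.sinh(x), 0, np.inf)
print(2 * val, np.pi**2 / 2)   # both approximately 4.934802
\end{verbatim}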
Solution 13.39
First we evaluate
\[ \int_{-\infty}^{\infty} \frac{e^{ax}}{e^x + 1} \,dx. \]
Consider the rectangular contour in the positive direction with corners at ±R and ±R + ı2π. With
the maximum modulus integral bound we see that the integrals on the vertical sides of the contour
vanish as R → ∞.
\[ \left| \int_R^{R+\imath 2\pi} \frac{e^{az}}{e^z + 1} \,dz \right| \leq 2\pi\,\frac{e^{aR}}{e^R - 1} \to 0 \text{ as } R \to \infty \]
\[ \left| \int_{-R+\imath 2\pi}^{-R} \frac{e^{az}}{e^z + 1} \,dz \right| \leq 2\pi\,\frac{e^{-aR}}{1 - e^{-R}} \to 0 \text{ as } R \to \infty \]
In the limit as R tends to infinity, the integral on the rectangular contour is the sum of the integrals
along the top and bottom sides.
\[ \int_C \frac{e^{az}}{e^z + 1} \,dz = \int_{-\infty}^{\infty} \frac{e^{ax}}{e^x + 1} \,dx + \int_{\infty}^{-\infty} \frac{e^{a(x+\imath 2\pi)}}{e^{x+\imath 2\pi} + 1} \,dx \]
\[ \int_C \frac{e^{az}}{e^z + 1} \,dz = \left( 1 - e^{\imath 2a\pi} \right) \int_{-\infty}^{\infty} \frac{e^{ax}}{e^x + 1} \,dx \]
The only singularity of the integrand inside the contour is a first order pole at z = ıπ. We use the
residue theorem to evaluate the integral.
\begin{align*}
\int_C \frac{e^{az}}{e^z + 1} \,dz
&= \imath 2\pi \operatorname{Res}\left( \frac{e^{az}}{e^z + 1}, \imath\pi \right) \\
&= \imath 2\pi \lim_{z \to \imath\pi} \frac{(z - \imath\pi) e^{az}}{e^z + 1} \\
&= \imath 2\pi \lim_{z \to \imath\pi} \frac{a(z - \imath\pi) e^{az} + e^{az}}{e^z} \\
&= -\imath 2\pi e^{\imath a\pi}
\end{align*}
We equate the two results for the value of the contour integral.
\[ \left( 1 - e^{\imath 2a\pi} \right) \int_{-\infty}^{\infty} \frac{e^{ax}}{e^x + 1} \,dx = -\imath 2\pi e^{\imath a\pi} \]
\[ \int_{-\infty}^{\infty} \frac{e^{ax}}{e^x + 1} \,dx = \frac{\imath 2\pi}{e^{\imath a\pi} - e^{-\imath a\pi}} \]
\[ \int_{-\infty}^{\infty} \frac{e^{ax}}{e^x + 1} \,dx = \frac{\pi}{\sin(\pi a)} \]
Now we derive the value of
\[ \int_{-\infty}^{\infty} \frac{\cosh(bx)}{\cosh x} \,dx. \]
First make the change of variables x → 2x in the previous result.
\[ \int_{-\infty}^{\infty} \frac{e^{2ax}}{e^{2x} + 1}\, 2 \,dx = \frac{\pi}{\sin(\pi a)} \]
\[ \int_{-\infty}^{\infty} \frac{2 e^{(2a-1)x}}{e^x + e^{-x}} \,dx = \frac{\pi}{\sin(\pi a)} \]
Now we set b = 2a − 1.
\[ \int_{-\infty}^{\infty} \frac{e^{bx}}{\cosh x} \,dx = \frac{\pi}{\sin(\pi(b+1)/2)} = \frac{\pi}{\cos(\pi b/2)} \quad \text{for } -1 < b < 1 \]
Since the cosine is an even function, we also have,
\[ \int_{-\infty}^{\infty} \frac{e^{-bx}}{\cosh x} \,dx = \frac{\pi}{\cos(\pi b/2)} \quad \text{for } -1 < b < 1 \]
Adding these two equations and dividing by 2 yields the desired result.
\[ \int_{-\infty}^{\infty} \frac{\cosh(bx)}{\cosh x} \,dx = \frac{\pi}{\cos(\pi b/2)} \quad \text{for } -1 < b < 1 \]
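Both formulas admit a quick numerical check; the snippet is not part of the original solution
(assumes Python with numpy and scipy):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

a = 0.25
v1, _ = quad(lambda x: np.exp(a * x) / (np.exp(x) + 1), -np.inf, np.inf)
print(v1, np.pi / np.sin(np.pi * a))        # both approximately 4.442883

b = 0.5
v2, _ = quad(lambda x: np.cosh(b * x) / np.cosh(x), -np.inf, np.inf)
print(v2, np.pi / np.cos(np.pi * b / 2))    # both approximately 4.442883
\end{verbatim}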
Solution 13.40
Real-Valued Parameters. For b = 0, the integral has the value \(\pi/a^2\). If b is nonzero, then we
can write the integral as
\[ F(a, b) = \frac{1}{b^2} \int_0^\pi \frac{d\theta}{(a/b + \cos\theta)^2}. \]
We define the new parameter c = a/b and the function,
\[ G(c) = b^2 F(a, b) = \int_0^\pi \frac{d\theta}{(c + \cos\theta)^2}. \]
If −1 ≤ c ≤ 1 then the integrand has a double pole on the path of integration. The integral diverges.
Otherwise the integral exists. To evaluate the integral, we extend the range of integration to (0..2π)
and make the change of variables, \(z = e^{\imath\theta}\), to integrate along the unit circle in the complex plane.
\[ G(c) = \frac{1}{2} \int_0^{2\pi} \frac{d\theta}{(c + \cos\theta)^2} \]
For this change of variables, we have,
\[ \cos\theta = \frac{z + z^{-1}}{2}, \qquad d\theta = \frac{dz}{\imath z}. \]
\begin{align*}
G(c) &= \frac{1}{2} \int_C \frac{dz/(\imath z)}{\left( c + (z + z^{-1})/2 \right)^2} \\
&= -\imath 2 \int_C \frac{z}{\left( 2cz + z^2 + 1 \right)^2} \,dz \\
&= -\imath 2 \int_C \frac{z}{\left( z + c + \sqrt{c^2 - 1} \right)^2 \left( z + c - \sqrt{c^2 - 1} \right)^2} \,dz
\end{align*}
If c > 1, then \(-c - \sqrt{c^2 - 1}\) is outside the unit circle and \(-c + \sqrt{c^2 - 1}\) is inside the unit circle.
The integrand has a second order pole inside the path of integration. We evaluate the integral with
the residue theorem.
\begin{align*}
G(c) &= -\imath 2\,\imath 2\pi \operatorname{Res}\left( \frac{z}{\left( z + c + \sqrt{c^2-1} \right)^2 \left( z + c - \sqrt{c^2-1} \right)^2}, z = -c + \sqrt{c^2 - 1} \right) \\
&= 4\pi \lim_{z \to -c+\sqrt{c^2-1}} \frac{d}{dz} \frac{z}{\left( z + c + \sqrt{c^2-1} \right)^2} \\
&= 4\pi \lim_{z \to -c+\sqrt{c^2-1}} \left( \frac{1}{\left( z + c + \sqrt{c^2-1} \right)^2} - \frac{2z}{\left( z + c + \sqrt{c^2-1} \right)^3} \right) \\
&= 4\pi \lim_{z \to -c+\sqrt{c^2-1}} \frac{c + \sqrt{c^2-1} - z}{\left( z + c + \sqrt{c^2-1} \right)^3} \\
&= 4\pi\,\frac{2c}{\left( 2\sqrt{c^2-1} \right)^3} \\
&= \frac{\pi c}{\sqrt{(c^2 - 1)^3}}
\end{align*}
If c < −1, then \(-c - \sqrt{c^2 - 1}\) is inside the unit circle and \(-c + \sqrt{c^2 - 1}\) is outside the unit circle.
\begin{align*}
G(c) &= -\imath 2\,\imath 2\pi \operatorname{Res}\left( \frac{z}{\left( z + c + \sqrt{c^2-1} \right)^2 \left( z + c - \sqrt{c^2-1} \right)^2}, z = -c - \sqrt{c^2 - 1} \right) \\
&= 4\pi \lim_{z \to -c-\sqrt{c^2-1}} \frac{d}{dz} \frac{z}{\left( z + c - \sqrt{c^2-1} \right)^2} \\
&= 4\pi \lim_{z \to -c-\sqrt{c^2-1}} \left( \frac{1}{\left( z + c - \sqrt{c^2-1} \right)^2} - \frac{2z}{\left( z + c - \sqrt{c^2-1} \right)^3} \right) \\
&= 4\pi \lim_{z \to -c-\sqrt{c^2-1}} \frac{c - \sqrt{c^2-1} - z}{\left( z + c - \sqrt{c^2-1} \right)^3} \\
&= 4\pi\,\frac{2c}{\left( -2\sqrt{c^2-1} \right)^3} \\
&= -\frac{\pi c}{\sqrt{(c^2 - 1)^3}}
\end{align*}
Thus we see that
\[ G(c) \begin{cases} = \dfrac{\pi c}{\sqrt{(c^2-1)^3}} & \text{for } c > 1, \\[1ex] = -\dfrac{\pi c}{\sqrt{(c^2-1)^3}} & \text{for } c < -1, \\[1ex] \text{is divergent} & \text{for } -1 \leq c \leq 1. \end{cases} \]
In terms of F(a, b), this is
\[ F(a, b) \begin{cases} = \dfrac{a\pi}{\sqrt{(a^2-b^2)^3}} & \text{for } a/b > 1, \\[1ex] = -\dfrac{a\pi}{\sqrt{(a^2-b^2)^3}} & \text{for } a/b < -1, \\[1ex] \text{is divergent} & \text{for } -1 \leq a/b \leq 1. \end{cases} \]
Complex-Valued Parameters. Consider
\[ G(c) = \int_0^\pi \frac{d\theta}{(c + \cos\theta)^2}, \]
for complex c. Except for real-valued c between −1 and 1, the integral converges uniformly. We can
interchange differentiation and integration. The derivative of G(c) is
\[ G'(c) = \frac{d}{dc} \int_0^\pi \frac{d\theta}{(c + \cos\theta)^2} = \int_0^\pi \frac{-2}{(c + \cos\theta)^3} \,d\theta \]
Thus we see that G(c) is analytic in the complex plane with a cut on the real axis from −1 to 1.
The value of the function on the positive real axis for c > 1 is
\[ G(c) = \frac{\pi c}{\sqrt{(c^2 - 1)^3}}. \]
We use analytic continuation to determine G(c) for complex c. By inspection we see that G(c) is
the branch of
\[ \frac{\pi c}{(c^2 - 1)^{3/2}}, \]
with a branch cut on the real axis from −1 to 1 and which is real-valued and positive for real c > 1.
Using \(F(a, b) = G(c)/b^2\) we can determine F for complex-valued a and b.
Solution 13.41
First note that
$$\int_{-\infty}^{\infty}\frac{\cos x}{e^x+e^{-x}}\,dx = \int_{-\infty}^{\infty}\frac{e^{ıx}}{e^x+e^{-x}}\,dx$$
since sin x/(eˣ + e^{−x}) is an odd function. For the function
$$f(z) = \frac{e^{ız}}{e^z+e^{-z}}$$
we have
$$f(x+ıπ) = \frac{e^{ıx-π}}{e^{x+ıπ}+e^{-x-ıπ}} = -e^{-π}\,\frac{e^{ıx}}{e^x+e^{-x}} = -e^{-π}f(x).$$
Thus we consider the integral
$$\int_C\frac{e^{ız}}{e^z+e^{-z}}\,dz$$
where C is the box contour with corners at ±R and ±R + ıπ. We can evaluate this integral with the residue theorem. We can write the integrand as
$$\frac{e^{ız}}{2\cosh z}.$$
We see that the integrand has first order poles at z = ıπ(n + 1/2). The only pole inside the path of integration is at z = ıπ/2.
$$\int_C\frac{e^{ız}}{e^z+e^{-z}}\,dz = ı2π\,\mathrm{Res}\left(\frac{e^{ız}}{e^z+e^{-z}},\,z=\frac{ıπ}{2}\right) = ı2π\lim_{z\to ıπ/2}\frac{(z-ıπ/2)\,e^{ız}}{e^z+e^{-z}} = ı2π\lim_{z\to ıπ/2}\frac{e^{ız}+ı(z-ıπ/2)\,e^{ız}}{e^z-e^{-z}} = ı2π\,\frac{e^{-π/2}}{e^{ıπ/2}-e^{-ıπ/2}} = π\,e^{-π/2}$$
The integrals along the vertical sides of the box vanish as R → ∞.
$$\left|\int_{\pm R}^{\pm R+ıπ}\frac{e^{ız}}{e^z+e^{-z}}\,dz\right| \le π\max_{z\in[\pm R\ldots\pm R+ıπ]}\left|\frac{e^{ız}}{e^z+e^{-z}}\right| \le π\max_{y\in[0\ldotsπ]}\frac{1}{\left|e^{R+ıy}+e^{-R-ıy}\right|} \le π\max_{y\in[0\ldotsπ]}\frac{1}{\left|e^{R}+e^{-R-ı2y}\right|} \le π\,\frac{1}{2\sinh R} \to 0 \quad\text{as } R\to\infty$$
Taking the limit as R → ∞, we have
$$\int_{-\infty}^{\infty}\frac{e^{ıx}}{e^x+e^{-x}}\,dx + \int_{\infty+ıπ}^{-\infty+ıπ}\frac{e^{ız}}{e^z+e^{-z}}\,dz = π\,e^{-π/2}$$
$$\left(1+e^{-π}\right)\int_{-\infty}^{\infty}\frac{e^{ıx}}{e^x+e^{-x}}\,dx = π\,e^{-π/2}$$
$$\int_{-\infty}^{\infty}\frac{e^{ıx}}{e^x+e^{-x}}\,dx = \frac{π}{e^{π/2}+e^{-π/2}}$$
Finally we have,
$$\int_{-\infty}^{\infty}\frac{\cos x}{e^x+e^{-x}}\,dx = \frac{π}{e^{π/2}+e^{-π/2}}.$$
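As with the previous result, this value is easy to confirm numerically. A minimal sketch (my addition, assuming NumPy and SciPy) follows; the integrand decays like e^{−|x|}, so a modest finite interval is plenty.

```python
import numpy as np
from scipy.integrate import quad

val, _ = quad(lambda x: np.cos(x) / (np.exp(x) + np.exp(-x)), -40, 40)
exact = np.pi / (np.exp(np.pi / 2) + np.exp(-np.pi / 2))
print(val, exact)  # both are approximately 0.6269
```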
Definite Integrals Involving Sine and Cosine
Solution 13.42
1. To evaluate the integral we make the change of variables z = e^{ıθ}. The path of integration in the complex plane is the positively oriented unit circle.
$$\int_{-π}^{π}\frac{dθ}{1+\sin^2θ} = \int_C\frac{1}{1-(z-z^{-1})^2/4}\,\frac{dz}{ız} = \int_C\frac{ı4z}{z^4-6z^2+1}\,dz = \int_C\frac{ı4z}{\left(z-1-\sqrt{2}\right)\left(z-1+\sqrt{2}\right)\left(z+1-\sqrt{2}\right)\left(z+1+\sqrt{2}\right)}\,dz$$
There are first order poles at z = ±1 ± √2. The poles at z = −1 + √2 and z = 1 − √2 are inside the path of integration. We evaluate the integral with Cauchy's Residue Formula.
$$\int_C\frac{ı4z}{z^4-6z^2+1}\,dz = ı2π\left(\mathrm{Res}\left(\frac{ı4z}{z^4-6z^2+1},\,z=-1+\sqrt{2}\right)+\mathrm{Res}\left(\frac{ı4z}{z^4-6z^2+1},\,z=1-\sqrt{2}\right)\right)$$
$$= -8π\left(\left.\frac{z}{\left(z-1-\sqrt{2}\right)\left(z-1+\sqrt{2}\right)\left(z+1+\sqrt{2}\right)}\right|_{z=-1+\sqrt{2}}+\left.\frac{z}{\left(z-1-\sqrt{2}\right)\left(z+1-\sqrt{2}\right)\left(z+1+\sqrt{2}\right)}\right|_{z=1-\sqrt{2}}\right)$$
$$= -8π\left(-\frac{1}{8\sqrt{2}}-\frac{1}{8\sqrt{2}}\right) = \sqrt{2}\,π$$
2. First we use symmetry to expand the domain of integration.
$$\int_0^{π/2}\sin^4θ\,dθ = \frac{1}{4}\int_0^{2π}\sin^4θ\,dθ$$
Next we make the change of variables z = e^{ıθ}. The path of integration in the complex plane is the positively oriented unit circle. We evaluate the integral with the residue theorem.
$$\frac{1}{4}\int_0^{2π}\sin^4θ\,dθ = \frac{1}{4}\int_C\frac{1}{16}\left(z-\frac{1}{z}\right)^4\frac{dz}{ız} = \frac{1}{64}\int_C-ı\,\frac{(z^2-1)^4}{z^5}\,dz = \frac{-ı}{64}\int_C\left(z^3-4z+\frac{6}{z}-\frac{4}{z^3}+\frac{1}{z^5}\right)dz = ı2π\,\frac{-ı}{64}\,6 = \frac{3π}{16}$$
Solution 13.43
1. Let C be the positively oriented unit circle about the origin. We parametrize this contour.
$$z = e^{ıθ}, \quad dz = ı\,e^{ıθ}\,dθ, \quad θ\in(0\ldots2π)$$
We write sin θ and the differential dθ in terms of z. Then we evaluate the integral with the Residue theorem.
$$\int_0^{2π}\frac{1}{2+\sin θ}\,dθ = \int_C\frac{1}{2+(z-1/z)/(ı2)}\,\frac{dz}{ız} = \int_C\frac{2}{z^2+ı4z-1}\,dz = \int_C\frac{2}{\left(z+ı\left(2+\sqrt{3}\right)\right)\left(z+ı\left(2-\sqrt{3}\right)\right)}\,dz$$
$$= ı2π\,\mathrm{Res}\left(\frac{2}{\left(z+ı\left(2+\sqrt{3}\right)\right)\left(z+ı\left(2-\sqrt{3}\right)\right)},\,z=ı\left(-2+\sqrt{3}\right)\right) = ı2π\,\frac{2}{ı2\sqrt{3}} = \frac{2π}{\sqrt{3}}$$
2. First consider the case a = 0.
$$\int_{-π}^{π}\cos(nθ)\,dθ = \begin{cases}0 & \text{for } n\in\mathbb{Z}^+\\ 2π & \text{for } n=0\end{cases}$$
Now we consider |a| < 1, a ≠ 0. Since
$$\frac{\sin(nθ)}{1-2a\cos θ+a^2}$$
is an odd function,
$$\int_{-π}^{π}\frac{\cos(nθ)}{1-2a\cos θ+a^2}\,dθ = \int_{-π}^{π}\frac{e^{ınθ}}{1-2a\cos θ+a^2}\,dθ$$
Let C be the positively oriented unit circle about the origin. We parametrize this contour.
$$z = e^{ıθ}, \quad dz = ı\,e^{ıθ}\,dθ, \quad θ\in(-π\ldotsπ)$$
We write the integrand and the differential dθ in terms of z. Then we evaluate the integral with the Residue theorem.
$$\int_{-π}^{π}\frac{e^{ınθ}}{1-2a\cos θ+a^2}\,dθ = \int_C\frac{z^n}{1-a(z+1/z)+a^2}\,\frac{dz}{ız} = -ı\int_C\frac{z^n}{-az^2+(1+a^2)z-a}\,dz = \frac{ı}{a}\int_C\frac{z^n}{z^2-(a+1/a)z+1}\,dz = \frac{ı}{a}\int_C\frac{z^n}{(z-a)(z-1/a)}\,dz$$
$$= ı2π\,\frac{ı}{a}\,\mathrm{Res}\left(\frac{z^n}{(z-a)(z-1/a)},\,z=a\right) = -\frac{2π}{a}\,\frac{a^n}{a-1/a} = \frac{2πa^n}{1-a^2}$$
We write the value of the integral for |a| < 1 and n ∈ ℤ⁰⁺.
$$\int_{-π}^{π}\frac{\cos(nθ)}{1-2a\cos θ+a^2}\,dθ = \begin{cases}2π & \text{for } a=0,\ n=0\\ \dfrac{2πa^n}{1-a^2} & \text{otherwise}\end{cases}$$
Solution 13.44
Convergence. We consider the integral
$$I(α) = \mathrm{PV}\int_0^{π}\frac{\cos(nθ)}{\cos θ-\cos α}\,dθ = π\,\frac{\sin(nα)}{\sin α}.$$
We assume that α is real-valued. If α is an integer multiple of π, then the integrand has a second order pole on the path of integration, and the principal value of the integral does not exist. If α is real, but not an integer multiple of π, then the integrand has a first order pole on the path of integration. The integral diverges, but its principal value exists.
Contour Integration. We will evaluate the integral for real α that is not an integer multiple of π.
$$I(α) = \mathrm{PV}\int_0^{π}\frac{\cos(nθ)}{\cos θ-\cos α}\,dθ = \frac{1}{2}\,\mathrm{PV}\int_0^{2π}\frac{\cos(nθ)}{\cos θ-\cos α}\,dθ = \frac{1}{2}\,\mathrm{PV}\int_0^{2π}\frac{e^{ınθ}}{\cos θ-\cos α}\,dθ$$
We make the change of variables: z = e^{ıθ}.
$$I(α) = \frac{1}{2}\,\mathrm{PV}\int_C\frac{z^n}{(z+1/z)/2-\cos α}\,\frac{dz}{ız} = \mathrm{PV}\int_C\frac{-ız^n}{\left(z-e^{ıα}\right)\left(z-e^{-ıα}\right)}\,dz$$
Now we use the residue theorem. (The poles lie on the unit circle itself, so the principal value picks up ıπ times each residue.)
$$= ıπ(-ı)\left(\mathrm{Res}\left(\frac{z^n}{\left(z-e^{ıα}\right)\left(z-e^{-ıα}\right)},\,z=e^{ıα}\right)+\mathrm{Res}\left(\frac{z^n}{\left(z-e^{ıα}\right)\left(z-e^{-ıα}\right)},\,z=e^{-ıα}\right)\right)$$
$$= π\left(\lim_{z\to e^{ıα}}\frac{z^n}{z-e^{-ıα}}+\lim_{z\to e^{-ıα}}\frac{z^n}{z-e^{ıα}}\right)$$
$$= π\left(\frac{e^{ınα}}{e^{ıα}-e^{-ıα}}+\frac{e^{-ınα}}{e^{-ıα}-e^{ıα}}\right) = π\,\frac{e^{ınα}-e^{-ınα}}{e^{ıα}-e^{-ıα}} = π\,\frac{\sin(nα)}{\sin(α)}$$
$$I(α) = \mathrm{PV}\int_0^{π}\frac{\cos(nθ)}{\cos θ-\cos α}\,dθ = π\,\frac{\sin(nα)}{\sin α}.$$
Solution 13.45
Consider the integral
$$\int_0^1\frac{x^2}{(1+x^2)\sqrt{1-x^2}}\,dx.$$
We make the change of variables x = sin ξ to obtain,
$$\int_0^{π/2}\frac{\sin^2ξ}{\left(1+\sin^2ξ\right)\sqrt{1-\sin^2ξ}}\cos ξ\,dξ$$
$$\int_0^{π/2}\frac{\sin^2ξ}{1+\sin^2ξ}\,dξ$$
$$\int_0^{π/2}\frac{1-\cos(2ξ)}{3-\cos(2ξ)}\,dξ$$
$$\frac{1}{4}\int_0^{2π}\frac{1-\cos ξ}{3-\cos ξ}\,dξ$$
Now we make the change of variables z = e^{ıξ} to obtain a contour integral on the unit circle.
$$\frac{1}{4}\int_C\frac{1-(z+1/z)/2}{3-(z+1/z)/2}\,\frac{-ı}{z}\,dz$$
$$\frac{-ı}{4}\int_C\frac{(z-1)^2}{z\left(z-3+2\sqrt{2}\right)\left(z-3-2\sqrt{2}\right)}\,dz$$
There are two first order poles inside the contour. The value of the integral is
$$ı2π\,\frac{-ı}{4}\left(\mathrm{Res}\left(\frac{(z-1)^2}{z\left(z-3+2\sqrt{2}\right)\left(z-3-2\sqrt{2}\right)},\,z=0\right)+\mathrm{Res}\left(\frac{(z-1)^2}{z\left(z-3+2\sqrt{2}\right)\left(z-3-2\sqrt{2}\right)},\,z=3-2\sqrt{2}\right)\right)$$
$$\frac{π}{2}\left(\lim_{z\to0}\frac{(z-1)^2}{\left(z-3+2\sqrt{2}\right)\left(z-3-2\sqrt{2}\right)}+\lim_{z\to3-2\sqrt{2}}\frac{(z-1)^2}{z\left(z-3-2\sqrt{2}\right)}\right).$$
$$\int_0^1\frac{x^2}{(1+x^2)\sqrt{1-x^2}}\,dx = \frac{\left(2-\sqrt{2}\right)π}{4}$$
Infinite Sums
Solution 13.46
From Result 13.10.1 we see that the sum of the residues of π cot(πz)/z⁴ is zero. This function has simple poles at nonzero integers z = n with residue 1/n⁴. There is a fifth order pole at z = 0. Finding the residue with the formula
$$\frac{1}{4!}\lim_{z\to0}\frac{d^4}{dz^4}\left(πz\cot(πz)\right)$$
would be a real pain. After doing the differentiation, we would have to apply L'Hospital's rule multiple times. A better way of finding the residue is with the Laurent series expansion of the function. Note that
$$\frac{1}{\sin(πz)} = \frac{1}{πz-(πz)^3/6+(πz)^5/120-\cdots} = \frac{1}{πz}\,\frac{1}{1-(πz)^2/6+(πz)^4/120-\cdots} = \frac{1}{πz}\left(1+\left(\frac{π^2}{6}z^2-\frac{π^4}{120}z^4+\cdots\right)+\left(\frac{π^2}{6}z^2-\frac{π^4}{120}z^4+\cdots\right)^2+\cdots\right).$$
Now we find the z^{−1} term in the Laurent series expansion of π cot(πz)/z⁴.
$$\frac{π\cos(πz)}{z^4\sin(πz)} = \frac{π}{z^4}\left(1-\frac{π^2}{2}z^2+\frac{π^4}{24}z^4-\cdots\right)\frac{1}{πz}\left(1+\left(\frac{π^2}{6}z^2-\frac{π^4}{120}z^4+\cdots\right)+\left(\frac{π^2}{6}z^2-\frac{π^4}{120}z^4+\cdots\right)^2+\cdots\right)$$
$$= \frac{1}{z^5}\left(\cdots+\left(-\frac{π^4}{120}+\frac{π^4}{36}-\frac{π^4}{12}+\frac{π^4}{24}\right)z^4+\cdots\right)$$
$$= \cdots-\frac{π^4}{45}\,\frac{1}{z}+\cdots$$
Thus the residue at z = 0 is −π⁴/45. Summing the residues,
$$\sum_{n=-\infty}^{-1}\frac{1}{n^4}-\frac{π^4}{45}+\sum_{n=1}^{\infty}\frac{1}{n^4} = 0.$$
$$\sum_{n=1}^{\infty}\frac{1}{n^4} = \frac{π^4}{90}$$
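For a quick confidence check of this classical value, the partial sums converge fast enough to compare directly; the following sketch (my addition, plain standard-library Python) agrees with π⁴/90 to roughly fifteen digits since the tail beyond n = 10⁵ is about 3·10⁻¹⁶.

```python
import math

partial = sum(1 / n**4 for n in range(1, 100001))  # truncated zeta(4)
print(partial, math.pi**4 / 90)
```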
Solution 13.47
For this problem we will use the following result: If
$$\lim_{|z|\to\infty}|zf(z)| = 0,$$
then the sum of all the residues of π cot(πz)f(z) is zero. If in addition, f(z) is analytic at z = n ∈ ℤ then
$$\sum_{n=-\infty}^{\infty}f(n) = -(\text{sum of the residues of } π\cot(πz)f(z) \text{ at the poles of } f(z)).$$
We assume that α is not an integer, otherwise the sum is not defined. Consider f(z) = 1/(z² − α²). Since
$$\lim_{|z|\to\infty}\left|z\,\frac{1}{z^2-α^2}\right| = 0,$$
and f(z) is analytic at z = n, n ∈ ℤ, we have
$$\sum_{n=-\infty}^{\infty}\frac{1}{n^2-α^2} = -(\text{sum of the residues of } π\cot(πz)f(z) \text{ at the poles of } f(z)).$$
f(z) has first order poles at z = ±α.
$$\sum_{n=-\infty}^{\infty}\frac{1}{n^2-α^2} = -\mathrm{Res}\left(\frac{π\cot(πz)}{z^2-α^2},\,z=α\right)-\mathrm{Res}\left(\frac{π\cot(πz)}{z^2-α^2},\,z=-α\right) = -\lim_{z\toα}\frac{π\cot(πz)}{z+α}-\lim_{z\to-α}\frac{π\cot(πz)}{z-α} = -\frac{π\cot(πα)}{2α}-\frac{π\cot(-πα)}{-2α}$$
$$\sum_{n=-\infty}^{\infty}\frac{1}{n^2-α^2} = -\frac{π\cot(πα)}{α}$$
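This identity can also be checked numerically with symmetric partial sums; a sketch (my addition, standard-library Python) follows. The tail decays like 2/N, so expect agreement only to about five digits at N = 2·10⁵.

```python
import math

alpha = 0.3  # any non-integer value of the parameter
N = 200000
s = sum(1 / (n**2 - alpha**2) for n in range(-N, N + 1))
exact = -math.pi / (math.tan(math.pi * alpha) * alpha)
print(s, exact)  # approximately equal
```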
Part IV
Ordinary Differential Equations
Chapter 14
First Order Differential Equations
Don’t show me your technique. Show me your heart.
-Tetsuyasu Uekuma
14.1 Notation
A differential equation is an equation involving a function, its derivatives, and independent variables. If there is only one independent variable, then it is an ordinary differential equation. Identities such as
$$\frac{d}{dx}f^2(x) = 2f(x)f'(x), \qquad\text{and}\qquad \frac{dy}{dx}\,\frac{dx}{dy} = 1$$
are not differential equations.
The order of a differential equation is the order of the highest derivative. The following equations for y(x) are first, second and third order, respectively.
• y′ = xy²
• y″ + 3xy′ + 2y = x²
• y‴ = y″y′
The degree of a differential equation is the highest power of the highest derivative in the equation. The following equations are first, second and third degree, respectively.
• y′ − 3y² = sin x
• (y″)² + 2x cos y = eˣ
• (y‴)³ + y⁵ = 0
An equation is said to be linear if it is linear in the dependent variable.
• y′ cos x + x²y = 0 is a linear differential equation.
• y′ + xy² = 0 is a nonlinear differential equation.
A differential equation is homogeneous if it has no terms that are functions of the independent variable alone. Thus an inhomogeneous equation is one in which there are terms that are functions of the independent variables alone.
• y″ + xy′ + y = 0 is a homogeneous equation.
• y′ + y + x² = 0 is an inhomogeneous equation.
Figure 14.1: The population of bacteria.
Figure 14.2: The discrete population of bacteria and a continuous population approximation.
A first order differential equation may be written in terms of differentials. Recall that for the function y(x) the differential dy is defined dy = y′(x) dx. Thus the differential equations
$$y' = x^2y \qquad\text{and}\qquad y' + xy^2 = \sin(x)$$
can be denoted:
$$dy = x^2y\,dx \qquad\text{and}\qquad dy + xy^2\,dx = \sin(x)\,dx.$$
A solution of a differential equation is a function which when substituted into the equation yields an identity. For example, y = x ln|x| is a solution of
$$y' - \frac{y}{x} = 1.$$
We verify this by substituting it into the differential equation.
$$\ln|x| + 1 - \ln|x| = 1$$
We can also verify that y = c eˣ is a solution of y′ − y = 0 for any value of the parameter c.
$$c\,e^x - c\,e^x = 0$$
14.2 Example Problems
In this section we will discuss physical and geometrical problems that lead to first order differential
equations.
14.2.1 Growth and Decay
Example 14.2.1 Consider a culture of bacteria in which each bacterium divides once per hour. Let n(t) ∈ ℕ denote the population, let t denote the time in hours and let n₀ be the population at time t = 0. The population doubles every hour. Thus for integer t, the population is n₀2^t. Figure 14.1 shows two possible populations when there is initially a single bacterium. In the first plot, each of the bacteria divide at times t = m for m ∈ ℕ. In the second plot, they divide at times t = m − 1/2. For both plots the population is 2^t for integer t.
We model this problem by considering a continuous population y(t) ∈ ℝ which approximates the discrete population. In Figure 14.2 we first show the population when there are initially 8 bacteria. The divisions of bacteria are spread out over each one-hour interval. For integer t, the population is 8 · 2^t. Next we show the population with a plot of the continuous function y(t) = 8 · 2^t. We see that y(t) is a reasonable approximation of the discrete population.
In the discrete problem, the growth of the population is proportional to its number; the population doubles every hour. For the continuous problem, we assume that this is true for y(t). We write this as an equation:
$$y'(t) = αy(t).$$
That is, the rate of change y′(t) in the population is proportional to the population y(t) (with constant of proportionality α). We specify the population at time t = 0 with the initial condition y(0) = n₀. Note that y(t) = n₀ e^{αt} satisfies the problem:
$$y'(t) = αy(t), \qquad y(0) = n_0.$$
For our bacteria example, α = ln 2.
Result 14.2.1 A quantity y(t) whose growth or decay is proportional to y(t) is modelled by the problem:
$$y'(t) = αy(t), \qquad y(t_0) = y_0.$$
Here we assume that the quantity is known at time t = t₀. e^α is the factor by which the quantity grows/decays in unit time. The solution of this problem is
$$y(t) = y_0\,e^{α(t-t_0)}.$$
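Result 14.2.1 can be illustrated numerically. The following sketch (an addition of mine, assuming NumPy and SciPy) integrates y′ = αy for the bacteria example and compares the result with the closed form y₀e^{α(t−t₀)}:

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha, t0, y0 = np.log(2), 0.0, 1.0  # doubling once per unit time
sol = solve_ivp(lambda t, y: alpha * y, (t0, 4.0), [y0],
                rtol=1e-10, atol=1e-12, t_eval=np.linspace(t0, 4.0, 5))
print(sol.y[0])                         # approximately [1, 2, 4, 8, 16]
print(y0 * np.exp(alpha * (sol.t - t0)))  # the exact values
```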
14.3 One Parameter Families of Functions
Consider the equation:
$$F(x, y(x), c) = 0, \qquad (14.1)$$
which implicitly defines a one-parameter family of functions y(x; c). Here y is a function of the variable x and the parameter c. For simplicity, we will write y(x) and not explicitly show the parameter dependence.

Example 14.3.1 The equation y = cx defines a family of lines with slope c, passing through the origin. The equation x² + y² = c² defines circles of radius c, centered at the origin.
Consider a chicken dropped from a height h. The elevation y of the chicken at time t after its release is y(t) = h − gt²/2, where g is the acceleration due to gravity. This is a family of functions for the parameter h.

It turns out that the general solution of any first order differential equation is a one-parameter family of functions. This is not easy to prove. However, it is easy to verify the converse. We differentiate Equation 14.1 with respect to x.
$$F_x + F_y y' = 0$$
(We assume that F has a non-trivial dependence on y, that is F_y ≠ 0.) This gives us two equations involving the independent variable x, the dependent variable y(x) and its derivative and the parameter c. If we algebraically eliminate c between the two equations, the eliminant will be a first order differential equation for y(x). Thus we see that every one-parameter family of functions y(x) satisfies a first order differential equation. This y(x) is the primitive of the differential equation. Later we will discuss why y(x) is the general solution of the differential equation.
Example 14.3.2 Consider the family of circles of radius c centered about the origin.
$$x^2 + y^2 = c^2$$
Differentiating this yields:
$$2x + 2yy' = 0.$$
It is trivial to eliminate the parameter and obtain a differential equation for the family of circles.
$$x + yy' = 0$$
We can see the geometric meaning in this equation by writing it in the form:
$$y' = -\frac{x}{y}.$$
For a point on the circle, the slope of the tangent, y′ = −x/y, is the negative cotangent of the polar angle. (See Figure 14.3.)
Figure 14.3: A circle and its tangent.
Example 14.3.3 Consider the one-parameter family of functions:
$$y(x) = f(x) + cg(x),$$
where f(x) and g(x) are known functions. The derivative is
$$y' = f' + cg'.$$
We eliminate the parameter.
$$gy' - g'y = gf' - g'f$$
$$y' - \frac{g'}{g}y = f' - \frac{g'f}{g}$$
Thus we see that y(x) = f(x) + cg(x) satisfies a first order linear differential equation. Later we will prove the converse: the general solution of a first order linear differential equation has the form: y(x) = f(x) + cg(x).
We have shown that every one-parameter family of functions satisfies a first order differential equation. We do not prove it here, but the converse is true as well.
We have shown that every one-parameter family of functions satisfies a first order differential
equation. We do not prove it here, but the converse is true as well.
Result 14.3.1 Every first order differential equation has a one-parameter
family of solutions y(x) defined by an equation of the form:
F(x, y(x); c) = 0.
This y(x) is called the general solution. If the equation is linear then the
general solution expresses the totality of solutions of the differential equation.
If the equation is nonlinear, there may be other special singular solutions,
which do not depend on a parameter.
This is strictly an existence result. It does not say that the general solution of a first order
differential equation can be determined by some method, it just says that it exists. There is no
method for solving the general first order differential equation. However, there are some special
forms that are soluble. We will devote the rest of this chapter to studying these forms.
14.4 Integrable Forms
In this section we will introduce a few forms of differential equations that we may solve through integration.

14.4.1 Separable Equations
Any differential equation that can be written in the form
$$P(x) + Q(y)y' = 0$$
is a separable equation, (because the dependent and independent variables are separated). We can obtain an implicit solution by integrating with respect to x.
$$\int P(x)\,dx + \int Q(y)\frac{dy}{dx}\,dx = c$$
$$\int P(x)\,dx + \int Q(y)\,dy = c$$

Result 14.4.1 The separable equation P(x) + Q(y)y′ = 0 may be solved by integrating with respect to x. The general solution is
$$\int P(x)\,dx + \int Q(y)\,dy = c.$$
Example 14.4.1 Consider the differential equation y′ = xy². We separate the dependent and independent variables and integrate to find the solution.
$$\frac{dy}{dx} = xy^2$$
$$y^{-2}\,dy = x\,dx$$
$$\int y^{-2}\,dy = \int x\,dx + c$$
$$-y^{-1} = \frac{x^2}{2} + c$$
$$y = -\frac{1}{x^2/2 + c}$$
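For readers following along with a computer algebra system, the same family can be recovered mechanically; a minimal SymPy sketch (my addition; SymPy reports an equivalent form of the constant) follows.

```python
import sympy as sp

x = sp.Symbol('x')
y = sp.Function('y')
# dsolve separates the variables and integrates, as done by hand above.
sol = sp.dsolve(sp.Eq(y(x).diff(x), x * y(x)**2), y(x))
print(sol)  # y(x) = -2/(C1 + x**2), the same family with c = C1/2
```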
Example 14.4.2 The equation y′ = y − y² is separable.
$$\frac{y'}{y-y^2} = 1$$
We expand in partial fractions and integrate.
$$\left(\frac{1}{y}-\frac{1}{y-1}\right)y' = 1$$
$$\ln|y| - \ln|y-1| = x + c$$
We have an implicit equation for y(x). Now we solve for y(x).
$$\ln\left|\frac{y}{y-1}\right| = x + c$$
$$\left|\frac{y}{y-1}\right| = e^{x+c}$$
$$\frac{y}{y-1} = \pm e^{x+c}$$
$$\frac{y}{y-1} = c\,e^x$$
$$y = \frac{c\,e^x}{c\,e^x-1}$$
$$y = \frac{1}{1+c\,e^{-x}}$$
(In the last two steps the arbitrary constant is redefined to absorb the sign and the reciprocal.)
14.4.2 Exact Equations
Any first order ordinary differential equation of the first degree can be written as the total differential equation,
$$P(x,y)\,dx + Q(x,y)\,dy = 0.$$
If this equation can be integrated directly, that is if there is a primitive, u(x,y), such that
$$du = P\,dx + Q\,dy,$$
then this equation is called exact. The (implicit) solution of the differential equation is
$$u(x,y) = c,$$
where c is an arbitrary constant. Since the differential of a function, u(x,y), is
$$du \equiv \frac{\partial u}{\partial x}\,dx + \frac{\partial u}{\partial y}\,dy,$$
P and Q are the partial derivatives of u:
$$P(x,y) = \frac{\partial u}{\partial x}, \qquad Q(x,y) = \frac{\partial u}{\partial y}.$$
In an alternate notation, the differential equation
$$P(x,y) + Q(x,y)\frac{dy}{dx} = 0, \qquad (14.2)$$
is exact if there is a primitive u(x,y) such that
$$\frac{du}{dx} \equiv \frac{\partial u}{\partial x} + \frac{\partial u}{\partial y}\frac{dy}{dx} = P(x,y) + Q(x,y)\frac{dy}{dx}.$$
The solution of the differential equation is u(x,y) = c.

Example 14.4.3
$$x + y\frac{dy}{dx} = 0$$
is an exact differential equation since
$$\frac{d}{dx}\left(\frac{1}{2}\left(x^2+y^2\right)\right) = x + y\frac{dy}{dx}$$
The solution of the differential equation is
$$\frac{1}{2}\left(x^2+y^2\right) = c.$$
Example 14.4.4 Let f(x) and g(x) be known functions.
$$g(x)y' + g'(x)y = f(x)$$
is an exact differential equation since
$$\frac{d}{dx}\left(g(x)y(x)\right) = gy' + g'y.$$
The solution of the differential equation is
$$g(x)y(x) = \int f(x)\,dx + c$$
$$y(x) = \frac{1}{g(x)}\int f(x)\,dx + \frac{c}{g(x)}.$$
A necessary condition for exactness. The solution of the exact equation P + Qy′ = 0 is u = c where u is the primitive of the equation, du/dx = P + Qy′. At present the only method we have for determining the primitive is guessing. This is fine for simple equations, but for more difficult cases we would like a method more concrete than divine inspiration. As a first step toward this goal we determine a criterion for determining if an equation is exact.
Consider the exact equation,
$$P + Qy' = 0,$$
with primitive u, where we assume that the functions P and Q are continuously differentiable. Since the mixed partial derivatives of u are equal,
$$\frac{\partial^2 u}{\partial x\,\partial y} = \frac{\partial^2 u}{\partial y\,\partial x},$$
a necessary condition for exactness is
$$\frac{\partial P}{\partial y} = \frac{\partial Q}{\partial x}.$$
A sufficient condition for exactness. This necessary condition for exactness is also a sufficient condition. We demonstrate this by deriving the general solution of (14.2). Assume that P + Qy′ = 0 is not necessarily exact, but satisfies the condition P_y = Q_x. If the equation has a primitive,
$$\frac{du}{dx} \equiv \frac{\partial u}{\partial x} + \frac{\partial u}{\partial y}\frac{dy}{dx} = P(x,y) + Q(x,y)\frac{dy}{dx},$$
then it satisfies
$$\frac{\partial u}{\partial x} = P, \qquad \frac{\partial u}{\partial y} = Q. \qquad (14.3)$$
Integrating the first equation of (14.3), we see that the primitive has the form
$$u(x,y) = \int_{x_0}^{x} P(ξ,y)\,dξ + f(y),$$
for some f(y). Now we substitute this form into the second equation of (14.3).
$$\frac{\partial u}{\partial y} = Q(x,y)$$
$$\int_{x_0}^{x} P_y(ξ,y)\,dξ + f'(y) = Q(x,y)$$
Now we use the condition P_y = Q_x.
$$\int_{x_0}^{x} Q_x(ξ,y)\,dξ + f'(y) = Q(x,y)$$
$$Q(x,y) - Q(x_0,y) + f'(y) = Q(x,y)$$
$$f'(y) = Q(x_0,y)$$
$$f(y) = \int_{y_0}^{y} Q(x_0,ψ)\,dψ$$
Thus we see that
$$u = \int_{x_0}^{x} P(ξ,y)\,dξ + \int_{y_0}^{y} Q(x_0,ψ)\,dψ$$
is a primitive of the derivative; the equation is exact. The solution of the differential equation is
$$\int_{x_0}^{x} P(ξ,y)\,dξ + \int_{y_0}^{y} Q(x_0,ψ)\,dψ = c.$$
Even though there are three arbitrary constants: x₀, y₀ and c, the solution is a one-parameter family. This is because changing x₀ or y₀ only changes the left side by an additive constant.
Result 14.4.2 Any first order differential equation of the first degree can be written in the form
$$P(x,y) + Q(x,y)\frac{dy}{dx} = 0.$$
This equation is exact if and only if
$$P_y = Q_x.$$
In this case the solution of the differential equation is given by
$$\int_{x_0}^{x} P(ξ,y)\,dξ + \int_{y_0}^{y} Q(x_0,ψ)\,dψ = c.$$
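Result 14.4.2 is mechanical enough to automate. The following SymPy sketch (an addition, not from the text) tests P_y = Q_x and, when it holds, builds the primitive from the two quadratures in the result:

```python
import sympy as sp

x, y, xi, psi = sp.symbols('x y xi psi')

def solve_exact(P, Q, x0=0, y0=0):
    # Exactness criterion from Result 14.4.2: P_y = Q_x.
    if sp.simplify(sp.diff(P, y) - sp.diff(Q, x)) != 0:
        raise ValueError("equation is not exact")
    # u = int_{x0}^{x} P(xi, y) dxi + int_{y0}^{y} Q(x0, psi) dpsi
    u = (sp.integrate(P.subs(x, xi), (xi, x0, x))
         + sp.integrate(Q.subs({x: x0, y: psi}), (psi, y0, y)))
    return sp.simplify(u)

# Example 14.4.3: x + y y' = 0 has P = x, Q = y.
print(solve_exact(x, y))  # x**2/2 + y**2/2; the solution is u = c
```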
Exercise 14.1
Solve the following differential equations by inspection. That is, group terms into exact derivatives and then integrate. f(x) and g(x) are known functions.
1. y′(x)/y(x) = f(x)
2. y^α(x) y′(x) = f(x)
3. y′/cos x + y tan x/cos x = cos x
Hint, Solution
14.4.3 Homogeneous Coefficient Equations
Homogeneous coefficient, first order differential equations form another class of soluble equations.
We will find that a change of dependent variable will make such equations separable or we can
determine an integrating factor that will make such equations exact. First we define homogeneous
functions.
Euler's Theorem on Homogeneous Functions. The function F(x, y) is homogeneous of degree n if
$$F(λx, λy) = λ^n F(x,y).$$
From this definition we see that
$$F(x,y) = x^n F\left(1, \frac{y}{x}\right).$$
(Just formally substitute 1/x for λ.) For example,
$$xy^2, \qquad \frac{x^2y+2y^3}{x+y}, \qquad x\cos(y/x)$$
are homogeneous functions of orders 3, 2 and 1, respectively.
Euler's theorem for a homogeneous function of order n is:
$$xF_x + yF_y = nF.$$
To prove this, we define ξ = λx, ψ = λy. From the definition of homogeneous functions, we have
$$F(ξ,ψ) = λ^n F(x,y).$$
We differentiate this equation with respect to λ.
$$\frac{\partial F(ξ,ψ)}{\partial ξ}\frac{\partial ξ}{\partial λ} + \frac{\partial F(ξ,ψ)}{\partial ψ}\frac{\partial ψ}{\partial λ} = nλ^{n-1}F(x,y)$$
$$xF_ξ + yF_ψ = nλ^{n-1}F(x,y)$$
Setting λ = 1, (and hence ξ = x, ψ = y), proves Euler's theorem.
Result 14.4.3 Euler's Theorem on Homogeneous Functions. If F(x, y) is a homogeneous function of degree n, then
$$xF_x + yF_y = nF.$$
Homogeneous Coefficient Differential Equations. If the coefficient functions P(x, y) and Q(x, y) are homogeneous of degree n then the differential equation,
$$P(x,y) + Q(x,y)\frac{dy}{dx} = 0, \qquad (14.4)$$
is called a homogeneous coefficient equation. They are often referred to simply as homogeneous equations.
Transformation to a Separable Equation. We can write the homogeneous equation in the form,
$$x^n P\left(1,\frac{y}{x}\right) + x^n Q\left(1,\frac{y}{x}\right)\frac{dy}{dx} = 0,$$
$$P\left(1,\frac{y}{x}\right) + Q\left(1,\frac{y}{x}\right)\frac{dy}{dx} = 0.$$
This suggests the change of dependent variable u(x) = y(x)/x.
$$P(1,u) + Q(1,u)\left(u + x\frac{du}{dx}\right) = 0$$
This equation is separable.
$$P(1,u) + uQ(1,u) + xQ(1,u)\frac{du}{dx} = 0$$
$$\frac{1}{x} + \frac{Q(1,u)}{P(1,u)+uQ(1,u)}\frac{du}{dx} = 0$$
$$\ln|x| + \int\frac{1}{u+P(1,u)/Q(1,u)}\,du = c$$
By substituting ln|c| for c, we can write this in a simpler form.
$$\int\frac{1}{u+P(1,u)/Q(1,u)}\,du = \ln\left|\frac{c}{x}\right|.$$
Integrating Factor. One can show that
$$µ(x,y) = \frac{1}{xP(x,y)+yQ(x,y)}$$
is an integrating factor for the Equation 14.4. The proof of this is left as an exercise for the reader. (See Exercise 14.2.)
Result 14.4.4 Homogeneous Coefficient Differential Equations. If P(x, y) and Q(x, y) are homogeneous functions of degree n, then the equation
$$P(x,y) + Q(x,y)\frac{dy}{dx} = 0$$
is made separable by the change of dependent variable u(x) = y(x)/x. The solution is determined by
$$\int\frac{1}{u+P(1,u)/Q(1,u)}\,du = \ln\left|\frac{c}{x}\right|.$$
Alternatively, the homogeneous equation can be made exact with the integrating factor
$$µ(x,y) = \frac{1}{xP(x,y)+yQ(x,y)}.$$
Example 14.4.5 Consider the homogeneous coefficient equation
$$x^2 - y^2 + xy\frac{dy}{dx} = 0.$$
The solution for u(x) = y(x)/x is determined by
$$\int\frac{1}{u+\frac{1-u^2}{u}}\,du = \ln\left|\frac{c}{x}\right|$$
$$\int u\,du = \ln\left|\frac{c}{x}\right|$$
$$\frac{1}{2}u^2 = \ln\left|\frac{c}{x}\right|$$
$$u = \pm\sqrt{2\ln|c/x|}$$
Thus the solution of the differential equation is
$$y = \pm x\sqrt{2\ln|c/x|}$$
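The result can be verified symbolically; the following SymPy sketch (an addition of mine; the positivity assumptions are only there to keep the square root single-valued) substitutes the solution back into the equation and simplifies the residual to zero.

```python
import sympy as sp

x, c = sp.symbols('x c', positive=True)
y = x * sp.sqrt(2 * sp.log(c / x))        # one branch of the solution
residual = x**2 - y**2 + x * y * sp.diff(y, x)
print(sp.simplify(residual))              # 0
```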
Exercise 14.2
Show that
$$µ(x,y) = \frac{1}{xP(x,y)+yQ(x,y)}$$
is an integrating factor for the homogeneous equation,
$$P(x,y) + Q(x,y)\frac{dy}{dx} = 0.$$
Hint, Solution

Exercise 14.3 (mathematica/ode/first order/exact.nb)
Find the general solution of the equation
$$\frac{dy}{dt} = 2\frac{y}{t} + \left(\frac{y}{t}\right)^2.$$
Hint, Solution
14.5 The First Order, Linear Differential Equation
14.5.1 Homogeneous Equations
The first order, linear, homogeneous equation has the form
$$\frac{dy}{dx} + p(x)y = 0.$$
Note that if we can find one solution, then any constant times that solution also satisfies the equation. In fact, all the solutions of this equation differ only by multiplicative constants. We can solve any equation of this type because it is separable.
$$\frac{y'}{y} = -p(x)$$
$$\ln|y| = -\int p(x)\,dx + c$$
$$y = \pm e^{-\int p(x)\,dx + c}$$
$$y = c\,e^{-\int p(x)\,dx}$$

Result 14.5.1 First Order, Linear Homogeneous Differential Equations. The first order, linear, homogeneous differential equation,
$$\frac{dy}{dx} + p(x)y = 0,$$
has the solution
$$y = c\,e^{-\int p(x)\,dx}. \qquad (14.5)$$
The solutions differ by multiplicative constants.
Example 14.5.1 Consider the equation
$$\frac{dy}{dx} + \frac{1}{x}y = 0.$$
We use Equation 14.5 to determine the solution.
$$y(x) = c\,e^{-\int 1/x\,dx}, \quad\text{for } x \ne 0$$
$$y(x) = c\,e^{-\ln|x|}$$
$$y(x) = \frac{c}{|x|}$$
$$y(x) = \frac{c}{x}$$
14.5.2 Inhomogeneous Equations
The first order, linear, inhomogeneous differential equation has the form
$$\frac{dy}{dx} + p(x)y = f(x). \qquad (14.6)$$
This equation is not separable. Note that it is similar to the exact equation we solved in Example 14.4.4,
$$g(x)y'(x) + g'(x)y(x) = f(x).$$
To solve Equation 14.6, we multiply by an integrating factor. Multiplying a differential equation by its integrating factor changes it to an exact equation. Multiplying Equation 14.6 by the function, I(x), yields,
$$I(x)\frac{dy}{dx} + p(x)I(x)y = f(x)I(x).$$
In order that I(x) be an integrating factor, it must satisfy
$$\frac{d}{dx}I(x) = p(x)I(x).$$
This is a first order, linear, homogeneous equation with the solution
$$I(x) = c\,e^{\int p(x)\,dx}.$$
This is an integrating factor for any constant c. For simplicity we will choose c = 1.
To solve Equation 14.6 we multiply by the integrating factor and integrate. Let P(x) = ∫p(x) dx.
$$e^{P(x)}\frac{dy}{dx} + p(x)\,e^{P(x)}y = e^{P(x)}f(x)$$
$$\frac{d}{dx}\left(e^{P(x)}y\right) = e^{P(x)}f(x)$$
$$y = e^{-P(x)}\int e^{P(x)}f(x)\,dx + c\,e^{-P(x)}$$
$$y \equiv y_p + c\,y_h$$
Note that the general solution is the sum of a particular solution, y_p, that satisfies y′ + p(x)y = f(x), and an arbitrary constant times a homogeneous solution, y_h, that satisfies y′ + p(x)y = 0.
Example 14.5.2 Consider the differential equation
$$y' + \frac{1}{x}y = x^2, \quad x > 0.$$
First we find the integrating factor.
$$I(x) = \exp\left(\int\frac{1}{x}\,dx\right) = e^{\ln x} = x$$
We multiply by the integrating factor and integrate.
$$\frac{d}{dx}(xy) = x^3$$
$$xy = \frac{1}{4}x^4 + c$$
$$y = \frac{1}{4}x^3 + \frac{c}{x}.$$
The particular and homogeneous solutions are
$$y_p = \frac{1}{4}x^3 \qquad\text{and}\qquad y_h = \frac{1}{x}.$$
Note that the general solution to the differential equation is a one-parameter family of functions. The general solution is plotted in Figure 14.4 for various values of c.
Figure 14.4: Solutions to y′ + y/x = x².
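The closed form is easy to test against a numerical integrator. A sketch (my addition, assuming NumPy and SciPy; the choice c = 2 is arbitrary) follows:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Integrate y' = x^2 - y/x from x = 1, with y(1) chosen to match
# the closed form y = x^3/4 + c/x for a sample constant c.
c = 2.0
sol = solve_ivp(lambda x, y: x**2 - y / x, (1.0, 4.0), [0.25 + c],
                rtol=1e-10, atol=1e-12, t_eval=np.linspace(1.0, 4.0, 7))
err = np.max(np.abs(sol.y[0] - (sol.t**3 / 4 + c / sol.t)))
print(err)  # tiny, on the order of the integrator tolerance
```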
Exercise 14.4 (mathematica/ode/first order/linear.nb)
Solve the differential equation
$$y' - \frac{1}{x}y = x^α, \quad x > 0.$$
Hint, Solution
14.5.3 Variation of Parameters.
We could also have found the particular solution with the method of variation of parameters. Although we can solve first order equations without this method, it will become important in the study of higher order inhomogeneous equations. We begin by assuming that the particular solution has the form y_p = u(x)y_h(x) where u(x) is an unknown function. We substitute this into the differential equation.
$$\frac{d}{dx}y_p + p(x)y_p = f(x)$$
$$\frac{d}{dx}(uy_h) + p(x)uy_h = f(x)$$
$$u'y_h + u\left(y_h' + p(x)y_h\right) = f(x)$$
Since y_h is a homogeneous solution, y_h′ + p(x)y_h = 0.
$$u' = \frac{f(x)}{y_h}$$
$$u = \int\frac{f(x)}{y_h(x)}\,dx$$
Recall that the homogeneous solution is y_h = e^{−P(x)}.
$$u = \int e^{P(x)}f(x)\,dx$$
Thus the particular solution is
$$y_p = e^{-P(x)}\int e^{P(x)}f(x)\,dx.$$
14.6 Initial Conditions
In physical problems involving first order differential equations, the solution satisfies both the differential equation and a constraint which we call the initial condition. Consider a first order linear differential equation subject to the initial condition y(x₀) = y₀. The general solution is
$$y = y_p + cy_h = e^{-P(x)}\int e^{P(x)}f(x)\,dx + c\,e^{-P(x)}.$$
For the moment, we will assume that this problem is well-posed. A problem is well-posed if there is a unique solution to the differential equation that satisfies the constraint(s). Recall that ∫e^{P(x)}f(x) dx denotes any integral of e^{P(x)}f(x). For convenience, we choose
$$\int_{x_0}^{x} e^{P(ξ)}f(ξ)\,dξ.$$
The initial condition requires that
$$y(x_0) = y_0 = e^{-P(x_0)}\int_{x_0}^{x_0} e^{P(ξ)}f(ξ)\,dξ + c\,e^{-P(x_0)} = c\,e^{-P(x_0)}.$$
Thus c = y₀ e^{P(x₀)}. The solution subject to the initial condition is
$$y = e^{-P(x)}\int_{x_0}^{x} e^{P(ξ)}f(ξ)\,dξ + y_0\,e^{P(x_0)-P(x)}.$$

Example 14.6.1 Consider the problem
$$y' + (\cos x)y = x, \qquad y(0) = 2.$$
From Result 14.6.1, the solution subject to the initial condition is
$$y = e^{-\sin x}\int_0^x ξ\,e^{\sin ξ}\,dξ + 2\,e^{-\sin x}.$$
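A numerical cross-check of Example 14.6.1 (an addition of mine, assuming NumPy and SciPy): evaluate the quadrature form at one point and compare with a direct integration of the initial value problem.

```python
import numpy as np
from scipy.integrate import quad, solve_ivp

def y_closed(x):
    # y = e^{-sin x} * integral_0^x xi e^{sin xi} dxi + 2 e^{-sin x}
    integral, _ = quad(lambda xi: xi * np.exp(np.sin(xi)), 0.0, x)
    return np.exp(-np.sin(x)) * (integral + 2.0)

sol = solve_ivp(lambda x, y: x - np.cos(x) * y, (0.0, 3.0), [2.0],
                rtol=1e-10, atol=1e-12)
print(sol.y[0, -1], y_closed(3.0))  # the two values should match
```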
14.6.1 Piecewise Continuous Coefficients and Inhomogeneities
If the coefficient function p(x) and the inhomogeneous term f(x) in the first order linear differential equation
$$\frac{dy}{dx} + p(x)y = f(x)$$
are continuous, then the solution is continuous and has a continuous first derivative. To see this, we note that the solution
$$y = e^{-P(x)}\int e^{P(x)}f(x)\,dx + c\,e^{-P(x)}$$
is continuous since the integral of a piecewise continuous function is continuous. The first derivative of the solution can be found directly from the differential equation.
$$y' = -p(x)y + f(x)$$
Since p(x), y, and f(x) are continuous, y′ is continuous.
If p(x) or f(x) is only piecewise continuous, then the solution will be continuous since the integral of a piecewise continuous function is continuous. The first derivative of the solution will be piecewise continuous.
Figure 14.5: Solution to y′ − y = H(x − 1).
Example 14.6.2 Consider the problem
$$y' - y = H(x-1), \qquad y(0) = 1,$$
where H(x) is the Heaviside function.
$$H(x) = \begin{cases}1 & \text{for } x > 0,\\ 0 & \text{for } x < 0.\end{cases}$$
To solve this problem, we divide it into two equations on separate domains.
$$y_1' - y_1 = 0, \quad y_1(0) = 1, \quad\text{for } x < 1$$
$$y_2' - y_2 = 1, \quad y_2(1) = y_1(1), \quad\text{for } x > 1$$
With the condition y₂(1) = y₁(1) on the second equation, we demand that the solution be continuous. The solution to the first equation is y = eˣ. The solution for the second equation is
$$y = e^x\int_1^x e^{-ξ}\,dξ + e^1\,e^{x-1} = -1 + e^{x-1} + e^x.$$
Thus the solution over the whole domain is
$$y = \begin{cases}e^x & \text{for } x < 1,\\ \left(1+e^{-1}\right)e^x - 1 & \text{for } x > 1.\end{cases}$$
The solution is graphed in Figure 14.5.
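A direct numerical integration through the jump reproduces the piecewise formula; the sketch below is an addition of mine (assuming NumPy and SciPy; the small max_step keeps the integrator honest across the discontinuity at x = 1).

```python
import numpy as np
from scipy.integrate import solve_ivp

step = lambda x: 1.0 if x > 1.0 else 0.0  # H(x - 1)
sol = solve_ivp(lambda x, y: y + step(x), (0.0, 2.0), [1.0],
                rtol=1e-10, atol=1e-12, max_step=0.01)
exact = (1 + np.exp(-1)) * np.exp(sol.t[-1]) - 1  # branch for x > 1
print(sol.y[0, -1], exact)  # approximately equal
```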
Example 14.6.3 Consider the problem,
$$y' + \mathrm{sign}(x)\,y = 0, \qquad y(1) = 1.$$
Recall that
$$\mathrm{sign}\,x = \begin{cases}-1 & \text{for } x < 0\\ 0 & \text{for } x = 0\\ 1 & \text{for } x > 0.\end{cases}$$
Since sign x is piecewise defined, we solve the two problems,
$$y_+' + y_+ = 0, \quad y_+(1) = 1, \quad\text{for } x > 0$$
$$y_-' - y_- = 0, \quad y_-(0) = y_+(0), \quad\text{for } x < 0,$$
and define the solution, y, to be
$$y(x) = \begin{cases}y_+(x) & \text{for } x \ge 0,\\ y_-(x) & \text{for } x \le 0.\end{cases}$$
The initial condition for y₋ demands that the solution be continuous.
Solving the two problems for positive and negative x, we obtain
$$y(x) = \begin{cases}e^{1-x} & \text{for } x > 0,\\ e^{1+x} & \text{for } x < 0.\end{cases}$$
This can be simplified to
$$y(x) = e^{1-|x|}.$$
This solution is graphed in Figure 14.6.
Figure 14.6: Solution to y′ + sign(x)y = 0.
Result 14.6.1 Existence, Uniqueness Theorem. Let p(x) and f(x) be piecewise continuous on the interval [a, b] and let x₀ ∈ [a, b]. Consider the problem,
$$\frac{dy}{dx} + p(x)y = f(x), \qquad y(x_0) = y_0.$$
The general solution of the differential equation is
$$y = e^{-P(x)}\int e^{P(x)}f(x)\,dx + c\,e^{-P(x)}.$$
The unique, continuous solution of the differential equation subject to the initial condition is
$$y = e^{-P(x)}\int_{x_0}^{x} e^{P(ξ)}f(ξ)\,dξ + y_0\,e^{P(x_0)-P(x)},$$
where P(x) = ∫p(x) dx.
Figure 14.7: Solutions to y′ − y/x = 0.
Exercise 14.5 (mathematica/ode/first order/exact.nb)
Find the solutions of the following differential equations which satisfy the given initial conditions:
1. dy/dx + xy = x^{2n+1}, y(1) = 1, n ∈ ℤ
2. dy/dx − 2xy = 1, y(0) = 1
Hint, Solution

Exercise 14.6 (mathematica/ode/first order/exact.nb)
Show that if α > 0 and λ > 0, then for any real β, every solution of
$$\frac{dy}{dx} + αy(x) = β\,e^{-λx}$$
satisfies lim_{x→+∞} y(x) = 0. (The case α = λ requires special treatment.) Find the solution for β = λ = 1 which satisfies y(0) = 1. Sketch this solution for 0 ≤ x < ∞ for several values of α. In particular, show what happens when α → 0 and α → ∞.
Hint, Solution
14.7 Well-Posed Problems
Example 14.7.1 Consider the problem,
$$y' - \frac{1}{x}y = 0, \qquad y(0) = 1.$$
The general solution is y = cx. Applying the initial condition demands that 1 = c · 0, which cannot be satisfied. The general solution for various values of c is plotted in Figure 14.7.

Example 14.7.2 Consider the problem
$$y' - \frac{1}{x}y = -\frac{1}{x}, \qquad y(0) = 1.$$
The general solution is
$$y = 1 + cx.$$
The initial condition is satisfied for any value of c so there are an infinite number of solutions.

Example 14.7.3 Consider the problem
$$y' + \frac{1}{x}y = 0, \qquad y(0) = 1.$$
The general solution is y = c/x. Depending on whether c is nonzero, the solution is either singular or zero at the origin and cannot satisfy the initial condition.
Consider the problem,
y + p(x)y = f(x), y(x0) = y0.
We assume that f(x) bounded in a neighborhood of x = x0. The differential equation has the
general solution,
y = e−P (x)
eP (x)
f(x) dx + c e−P (x)
.
If the homogeneous solution, e−P (x)
, is nonzero and finite at x = x0, then there is a unique value of
c for which the initial condition is satisfied. If the homogeneous solution vanishes at x = x0 then
either the initial condition cannot be satisfied or the initial condition is satisfied for all values of c.
The homogeneous solution can vanish or be infinite only if P(x) → ±∞ as x → x0. This can occur
only if the coefficient function, p(x), is unbounded at that point.
Result 14.7.1 If the initial condition is given where the homogeneous solution
to a first order, linear differential equation is zero or infinite then the problem
may be ill-posed. This may occur only if the coefficient function, p(x), is
unbounded at that point.
14.8 Equations in the Complex Plane
14.8.1 Ordinary Points
Consider the first order homogeneous equation
$$\frac{dw}{dz} + p(z)w = 0,$$
where p(z), a function of a complex variable, is analytic in some domain D. The integrating factor,
$$I(z) = \exp\left(\int p(z)\,dz\right),$$
is an analytic function in that domain. As with the case of real variables, multiplying by the integrating factor and integrating yields the solution,
$$w(z) = c\exp\left(-\int p(z)\,dz\right).$$
We see that the solution is analytic in D.

Example 14.8.1 It does not make sense to pose the equation
$$\frac{dw}{dz} + |z|w = 0.$$
For the solution to exist, w and hence w′(z) must be analytic. Since p(z) = |z| is not analytic anywhere in the complex plane, the equation has no solution.

Any point at which p(z) is analytic is called an ordinary point of the differential equation. Since the solution is analytic we can expand it in a Taylor series about an ordinary point. The radius of convergence of the series will be at least the distance to the nearest singularity of p(z) in the complex plane.
Example 14.8.2 Consider the equation
$$\frac{dw}{dz} - \frac{1}{1-z}w = 0.$$
The general solution is w = c/(1−z). Expanding this solution about the origin,
$$w = \frac{c}{1-z} = c\sum_{n=0}^{\infty}z^n.$$
The radius of convergence of the series is,
$$R = \lim_{n\to\infty}\left|\frac{a_n}{a_{n+1}}\right| = 1,$$
which is the distance from the origin to the nearest singularity of p(z) = 1/(1−z).

We do not need to solve the differential equation to find the Taylor series expansion of the homogeneous solution. We could substitute a general Taylor series expansion into the differential equation and solve for the coefficients. Since we can always solve first order equations, this method is of limited usefulness. However, when we consider higher order equations in which we cannot solve the equations exactly, this will become an important method.
Example 14.8.3 Again consider the equation
$$\frac{dw}{dz} - \frac{1}{1-z}w = 0.$$
Since we know that the solution has a Taylor series expansion about z = 0, we substitute w = ∑_{n=0}^∞ aₙzⁿ into the differential equation.
$$(1-z)\frac{d}{dz}\sum_{n=0}^{\infty}a_nz^n - \sum_{n=0}^{\infty}a_nz^n = 0$$
$$\sum_{n=1}^{\infty}na_nz^{n-1} - \sum_{n=1}^{\infty}na_nz^n - \sum_{n=0}^{\infty}a_nz^n = 0$$
$$\sum_{n=0}^{\infty}(n+1)a_{n+1}z^n - \sum_{n=0}^{\infty}na_nz^n - \sum_{n=0}^{\infty}a_nz^n = 0$$
$$\sum_{n=0}^{\infty}\left((n+1)a_{n+1}-(n+1)a_n\right)z^n = 0.$$
Now we equate powers of z to zero. For zⁿ, the equation is (n+1)aₙ₊₁ − (n+1)aₙ = 0, or aₙ₊₁ = aₙ. Thus we have that aₙ = a₀ for all n ≥ 1. The solution is then
$$w = a_0\sum_{n=0}^{\infty}z^n,$$
which is the result we obtained by expanding the solution in Example 14.8.2.
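The coefficient matching can be carried out mechanically with a computer algebra system; the sketch below (my addition, assuming SymPy) truncates the series, expands the equation, and solves the resulting linear system for the coefficients.

```python
import sympy as sp

z = sp.Symbol('z')
N = 6
a = sp.symbols(f'a0:{N}')
w = sum(a[n] * z**n for n in range(N))
# Multiply the equation by (1 - z): (1 - z) w' - w = 0.
expr = sp.expand((1 - z) * sp.diff(w, z) - w)
eqs = [sp.Eq(expr.coeff(z, n), 0) for n in range(N - 1)]
print(sp.solve(eqs, a[1:]))  # every a_n equals a_0, as derived above
```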
Result 14.8.1 Consider the equation
$$\frac{dw}{dz} + p(z)w = 0.$$
If p(z) is analytic at z = z₀ then z₀ is called an ordinary point of the differential equation. The Taylor series expansion of the solution can be found by substituting w = ∑_{n=0}^∞ aₙ(z−z₀)ⁿ into the equation and equating powers of (z−z₀). The radius of convergence of the series is at least the distance to the nearest singularity of p(z) in the complex plane.
Exercise 14.7
Find the Taylor series expansion about the origin of the solution to
$$\frac{dw}{dz} + \frac{1}{1-z}w = 0$$
with the substitution w = ∑_{n=0}^∞ aₙzⁿ. What is the radius of convergence of the series? What is the distance to the nearest singularity of 1/(1−z)?
Hint, Solution
14.8.2 Regular Singular Points
If the coefficient function p(z) has a simple pole at z = z₀ then z₀ is a regular singular point of the first order differential equation.

Example 14.8.4 Consider the equation
$$\frac{dw}{dz} + \frac{α}{z}w = 0, \quad α \ne 0.$$
This equation has a regular singular point at z = 0. The solution is w = cz^{−α}. Depending on the value of α, the solution can have three different kinds of behavior.
α is a negative integer. The solution is analytic in the finite complex plane.
α is a positive integer. The solution has a pole at the origin. w is analytic in the annulus, 0 < |z|.
α is not an integer. w has a branch point at z = 0. The solution is analytic in the cut annulus 0 < |z| < ∞, θ₀ < arg z < θ₀ + 2π.
Consider the differential equation
$$\frac{dw}{dz} + p(z)w = 0,$$
where p(z) has a simple pole at the origin and is analytic in the annulus, 0 < |z| < r, for some positive r. Recall that the solution is
$$w = c\exp\left(-\int p(z)\,dz\right) = c\exp\left(-\int\left(\frac{b_0}{z}+p(z)-\frac{b_0}{z}\right)dz\right) = c\exp\left(-b_0\log z-\int\frac{zp(z)-b_0}{z}\,dz\right) = cz^{-b_0}\exp\left(-\int\frac{zp(z)-b_0}{z}\,dz\right)$$
The exponential factor has a removable singularity at z = 0 and is analytic in |z| < r. We consider the following cases for the z^{−b₀} factor:
b₀ is a negative integer. Since z^{−b₀} is analytic at the origin, the solution to the differential equation is analytic in the circle |z| < r.
b₀ is a positive integer. The solution has a pole of order b₀ at the origin and is analytic in the annulus 0 < |z| < r.
b₀ is not an integer. The solution has a branch point at the origin and thus is not single-valued. The solution is analytic in the cut annulus 0 < |z| < r, θ₀ < arg z < θ₀ + 2π.
Since the exponential factor has a convergent Taylor series in |z| < r, the solution can be expanded in a series of the form
$$w = z^{-b_0}\sum_{n=0}^{\infty}a_nz^n, \quad\text{where } a_0 \ne 0 \text{ and } b_0 = \lim_{z\to0}z\,p(z).$$
In the case of a regular singular point at z = z₀, the series is
$$w = (z-z_0)^{-b_0}\sum_{n=0}^{\infty}a_n(z-z_0)^n, \quad\text{where } a_0 \ne 0 \text{ and } b_0 = \lim_{z\to z_0}(z-z_0)\,p(z).$$
Series of this form are known as Frobenius series. Since we can write the solution as
$$w = c(z-z_0)^{-b_0}\exp\left(-\int\left(p(z)-\frac{b_0}{z-z_0}\right)dz\right),$$
we see that the Frobenius expansion of the solution will have a radius of convergence at least the distance to the nearest singularity of p(z).
Result 14.8.2 Consider the equation,
$$\frac{dw}{dz} + p(z)w = 0,$$
where p(z) has a simple pole at z = z₀, p(z) is analytic in some annulus, 0 < |z − z₀| < r, and lim_{z→z₀}(z−z₀)p(z) = β. The solution to the differential equation has a Frobenius series expansion of the form
$$w = (z-z_0)^{-β}\sum_{n=0}^{\infty}a_n(z-z_0)^n, \quad a_0 \ne 0.$$
The radius of convergence of the expansion will be at least the distance to the nearest singularity of p(z).
Example 14.8.5 We will find the first two nonzero terms in the series solution about z = 0 of the differential equation,
$$\frac{dw}{dz} + \frac{1}{\sin z}w = 0.$$
First we note that the coefficient function has a simple pole at z = 0 and
$$\lim_{z\to0}\frac{z}{\sin z} = \lim_{z\to0}\frac{1}{\cos z} = 1.$$
Thus we look for a series solution of the form
$$w = z^{-1}\sum_{n=0}^{\infty}a_nz^n, \quad a_0 \ne 0.$$
The nearest singularities of 1/sin z in the complex plane are at z = ±π. Thus the radius of convergence of the series will be at least π.
Substituting the first three terms of the expansion into the differential equation,
$$\frac{d}{dz}\left(a_0z^{-1}+a_1+a_2z\right) + \frac{1}{\sin z}\left(a_0z^{-1}+a_1+a_2z\right) = O(z).$$
Recall that the Taylor expansion of sin z is sin z = z − z³/6 + O(z⁵).
$$\left(z-\frac{z^3}{6}+O(z^5)\right)\left(-a_0z^{-2}+a_2\right) + \left(a_0z^{-1}+a_1+a_2z\right) = O(z^2)$$
$$-a_0z^{-1} + a_2 + \frac{a_0}{6}z + a_0z^{-1} + a_1 + a_2z = O(z^2)$$
$$a_1 + \left(2a_2+\frac{a_0}{6}\right)z = O(z^2)$$
a₀ is arbitrary. Equating powers of z,
$$z^0:\ a_1 = 0.$$
$$z^1:\ 2a_2 + \frac{a_0}{6} = 0.$$
Thus the solution has the expansion,
$$w = a_0\left(z^{-1}-\frac{z}{12}\right) + O(z^2).$$
In Figure 14.8 the exact solution is plotted in a solid line and the two term approximation is plotted in a dashed line. The two term approximation is very good near the point z = 0.
Figure 14.8: Plot of the exact solution and the two term approximation.
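The comparison in Figure 14.8 is easy to reproduce. Here, note that the exact solution is w = c cot(z/2), since ∫csc z dz = ln tan(z/2); the sketch below (my addition, assuming NumPy, and taking a₀ = 1, i.e. c = 1/2) tabulates the error of the two-term approximation.

```python
import numpy as np

z = np.linspace(0.1, 1.5, 8)
exact = 0.5 / np.tan(z / 2)     # c*cot(z/2) with c = a0/2 = 1/2
approx = 1 / z - z / 12          # two-term Frobenius approximation
print(np.abs(exact - approx))    # tiny near z = 0, growing with z
```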
Example 14.8.6 Find the first two nonzero terms in the series expansion about z = 0 of the solution to
$$w' - ı\,\frac{\cos z}{z}\,w = 0.$$
Since (cos z)/z has a simple pole at z = 0 and lim_{z→0}(−ı cos z) = −ı we see that the Frobenius series will have the form
$$w = z^{ı}\sum_{n=0}^{\infty}a_nz^n, \quad a_0 \ne 0.$$
Recall that cos z has the Taylor expansion ∑_{n=0}^∞ (−1)ⁿz^{2n}/(2n)!. Substituting the Frobenius expansion into the differential equation yields
$$z\left(ız^{ı-1}\sum_{n=0}^{\infty}a_nz^n + z^{ı}\sum_{n=0}^{\infty}na_nz^{n-1}\right) - ı\left(\sum_{n=0}^{\infty}\frac{(-1)^nz^{2n}}{(2n)!}\right)z^{ı}\sum_{n=0}^{\infty}a_nz^n = 0$$
$$\sum_{n=0}^{\infty}(n+ı)a_nz^n - ı\left(\sum_{n=0}^{\infty}\frac{(-1)^nz^{2n}}{(2n)!}\right)\sum_{n=0}^{\infty}a_nz^n = 0.$$
Equating powers of z,
$$z^0:\ ıa_0 - ıa_0 = 0 \quad\Rightarrow\quad a_0 \text{ is arbitrary}$$
$$z^1:\ (1+ı)a_1 - ıa_1 = 0 \quad\Rightarrow\quad a_1 = 0$$
$$z^2:\ (2+ı)a_2 - ıa_2 + \frac{ı}{2}a_0 = 0 \quad\Rightarrow\quad a_2 = -\frac{ı}{4}a_0.$$
Thus the solution is
$$w = a_0z^{ı}\left(1-\frac{ı}{4}z^2+O(z^3)\right).$$
14.8.3 Irregular Singular Points
If a point is not an ordinary point or a regular singular point then it is called an irregular singular point. The following equations have irregular singular points at the origin.
• w′ + √z w = 0
• w′ − z^{−2}w = 0
• w′ + exp(1/z)w = 0

Example 14.8.7 Consider the differential equation
$$\frac{dw}{dz} + αz^βw = 0, \quad α \ne 0, \quad β \ne -1, 0, 1, 2, \ldots$$
This equation has an irregular singular point at the origin. Solving this equation,
$$\frac{d}{dz}\left(\exp\left(\int αz^β\,dz\right)w\right) = 0$$
$$w = c\exp\left(-\frac{α}{β+1}z^{β+1}\right) = c\sum_{n=0}^{\infty}\frac{(-1)^n}{n!}\left(\frac{α}{β+1}\right)^nz^{(β+1)n}.$$
If β is not an integer, then the solution has a branch point at the origin. If β is an integer, β < −1, then the solution has an essential singularity at the origin. The solution cannot be expanded in a Frobenius series, w = z^λ∑_{n=0}^∞ aₙzⁿ.
Although we will not show it, this result holds for any irregular singular point of the differential equation. We cannot approximate the solution near an irregular singular point using a Frobenius expansion.
Now would be a good time to summarize what we have discovered about solutions of first order differential equations in the complex plane.
Result 14.8.3 Consider the first order differential equation
$$\frac{dw}{dz} + p(z)w = 0.$$
Ordinary Points If p(z) is analytic at z = z₀ then z₀ is an ordinary point of the differential equation. The solution can be expanded in the Taylor series w = ∑_{n=0}^∞ aₙ(z−z₀)ⁿ. The radius of convergence of the series is at least the distance to the nearest singularity of p(z) in the complex plane.
Regular Singular Points If p(z) has a simple pole at z = z₀ and is analytic in some annulus 0 < |z−z₀| < r then z₀ is a regular singular point of the differential equation. The solution at z₀ will either be analytic, have a pole, or have a branch point. The solution can be expanded in the Frobenius series w = (z−z₀)^{−β}∑_{n=0}^∞ aₙ(z−z₀)ⁿ where a₀ ≠ 0 and β = lim_{z→z₀}(z−z₀)p(z). The radius of convergence of the Frobenius series will be at least the distance to the nearest singularity of p(z).
Irregular Singular Points If the point z = z₀ is not an ordinary point or a regular singular point, then it is an irregular singular point of the differential equation. The solution cannot be expanded in a Frobenius series about that point.
14.8.4 The Point at Infinity
Now we consider the behavior of first order linear differential equations at the point at infinity. Recall from complex variables that the complex plane together with the point at infinity is called the extended complex plane. To study the behavior of a function f(z) at infinity, we make the transformation z = 1/ζ and study the behavior of f(1/ζ) at ζ = 0.

Example 14.8.8 Let's examine the behavior of sin z at infinity. We make the substitution z = 1/ζ and find the Laurent expansion about ζ = 0.
$$\sin(1/ζ) = \sum_{n=0}^{\infty}\frac{(-1)^n}{(2n+1)!\,ζ^{2n+1}}$$
Since sin(1/ζ) has an essential singularity at ζ = 0, sin z has an essential singularity at infinity.

We use the same approach if we want to examine the behavior at infinity of a differential equation. Starting with the first order differential equation,
$$\frac{dw}{dz} + p(z)w = 0,$$
we make the substitution
$$z = \frac{1}{ζ}, \qquad \frac{d}{dz} = -ζ^2\frac{d}{dζ}, \qquad w(z) = u(ζ)$$
to obtain
$$-ζ^2\frac{du}{dζ} + p(1/ζ)u = 0$$
$$\frac{du}{dζ} - \frac{p(1/ζ)}{ζ^2}u = 0.$$
Result 14.8.4 The behavior at infinity of
$$\frac{dw}{dz} + p(z)w = 0$$
is the same as the behavior at ζ = 0 of
$$\frac{du}{dζ} - \frac{p(1/ζ)}{ζ^2}u = 0.$$
Example 14.8.9 We classify the singular points of the equation
$$\frac{dw}{dz} + \frac{1}{z^2+9}w = 0.$$
We factor the denominator of the fraction to see that z = ı3 and z = −ı3 are regular singular points.
$$\frac{dw}{dz} + \frac{1}{(z-ı3)(z+ı3)}w = 0$$
We make the transformation z = 1/ζ to examine the point at infinity.
$$\frac{du}{dζ} - \frac{1}{ζ^2}\,\frac{1}{(1/ζ)^2+9}u = 0$$
$$\frac{du}{dζ} - \frac{1}{9ζ^2+1}u = 0$$
Since the equation for u has an ordinary point at ζ = 0, z = ∞ is an ordinary point of the equation for w.
14.9 Additional Exercises

Exact Equations
Exercise 14.8 (mathematica/ode/first order/exact.nb)
Find the general solution y = y(x) of the equations
1. dy/dx = (x² + xy + y²)/x²,
2. (4y − 3x) dx + (y − 2x) dy = 0.
Hint, Solution

Exercise 14.9 (mathematica/ode/first order/exact.nb)
Determine whether or not the following equations can be made exact. If so find the corresponding general solution.
1. (3x² − 2xy + 2) dx + (6y² − x² + 3) dy = 0
2. dy/dx = −(ax + by)/(bx + cy)
Hint, Solution

Exercise 14.10 (mathematica/ode/first order/exact.nb)
Find the solutions of the following differential equations which satisfy the given initial condition. In each case determine the interval in which the solution is defined.
1. dy/dx = (1 − 2x)y², y(0) = −1/6.
2. x dx + y e^{−x} dy = 0, y(0) = 1.
Hint, Solution

Exercise 14.11
Are the following equations exact? If so, solve them.
1. (4y − x)y′ − (9x² + y − 1) = 0
2. (2x − 2y)y′ + (2x + 4y) = 0.
Hint, Solution

Exercise 14.12 (mathematica/ode/first order/exact.nb)
Find all functions f(t) such that the differential equation
$$y^2\sin t + yf(t)\frac{dy}{dt} = 0 \qquad (14.7)$$
is exact. Solve the differential equation for these f(t).
Hint, Solution

The First Order, Linear Differential Equation
Exercise 14.13 (mathematica/ode/first order/linear.nb)
Solve the differential equation
$$y' + \frac{y}{\sin x} = 0.$$
Hint, Solution
Initial Conditions

Well-Posed Problems
Exercise 14.14
Find the solutions of
$$t\frac{dy}{dt} + Ay = 1 + t^2, \quad t > 0$$
which are bounded at t = 0. Consider all (real) values of A.
Hint, Solution

Equations in the Complex Plane
Exercise 14.15
Classify the singular points of the following first order differential equations, (include the point at infinity).
1. w′ + (sin z / z) w = 0
2. w′ + 1/(z − 3) w = 0
3. w′ + z^{1/2} w = 0
Hint, Solution

Exercise 14.16
Consider the equation
$$w' + z^{-2}w = 0.$$
The point z = 0 is an irregular singular point of the differential equation. Thus we know that we cannot expand the solution about z = 0 in a Frobenius series. Try substituting the series solution
$$w = z^λ\sum_{n=0}^{\infty}a_nz^n, \quad a_0 \ne 0$$
into the differential equation anyway. What happens?
Hint, Solution
14.10 Hints
Hint 14.1
1. d/dx ln|u| = u′/u
2. d/dx u^c = c u^{c−1} u′

Hint 14.2

Hint 14.3
The equation is homogeneous. Make the change of variables u = y/t.

Hint 14.4
Make sure you consider the case α = 0.

Hint 14.5

Hint 14.6

Hint 14.7
The radius of convergence of the series and the distance to the nearest singularity of 1/(1−z) are not the same.

Exact Equations
Hint 14.8
1.
2.

Hint 14.9
1. The equation is exact. Determine the primitive u by solving the equations u_x = P, u_y = Q.
2. The equation can be made exact.

Hint 14.10
1. This equation is separable. Integrate to get the general solution. Apply the initial condition to determine the constant of integration.
2. Ditto. You will have to numerically solve an equation to determine where the solution is defined.

Hint 14.11

Hint 14.12

The First Order, Linear Differential Equation
Hint 14.13
Look in the appendix for the integral of csc x.

Initial Conditions

Well-Posed Problems
Hint 14.14

Equations in the Complex Plane
Hint 14.15

Hint 14.16
Try to find the value of λ by substituting the series into the differential equation and equating powers of z.
14.11 Solutions
Solution 14.1
1.
$$\frac{y'(x)}{y(x)} = f(x)$$
$$\frac{d}{dx}\ln|y(x)| = f(x)$$
$$\ln|y(x)| = \int f(x)\,dx + c$$
$$y(x) = \pm e^{\int f(x)\,dx + c}$$
$$y(x) = c\,e^{\int f(x)\,dx}$$
2.
$$y^α(x)y'(x) = f(x)$$
$$\frac{y^{α+1}(x)}{α+1} = \int f(x)\,dx + c$$
$$y(x) = \left((α+1)\int f(x)\,dx + a\right)^{1/(α+1)}$$
3.
$$\frac{y'}{\cos x} + y\,\frac{\tan x}{\cos x} = \cos x$$
$$\frac{d}{dx}\left(\frac{y}{\cos x}\right) = \cos x$$
$$\frac{y}{\cos x} = \sin x + c$$
$$y(x) = \sin x\cos x + c\cos x$$
Solution 14.2
We consider the homogeneous equation,
$$P(x,y) + Q(x,y)\frac{dy}{dx} = 0.$$
That is, both P and Q are homogeneous of degree n. We hypothesize that multiplying by
$$µ(x,y) = \frac{1}{xP(x,y)+yQ(x,y)}$$
will make the equation exact. To prove this we use the result that
$$M(x,y) + N(x,y)\frac{dy}{dx} = 0$$
is exact if and only if M_y = N_x.
$$M_y = \frac{\partial}{\partial y}\frac{P}{xP+yQ} = \frac{P_y(xP+yQ)-P(xP_y+Q+yQ_y)}{(xP+yQ)^2}$$
$$N_x = \frac{\partial}{\partial x}\frac{Q}{xP+yQ} = \frac{Q_x(xP+yQ)-Q(P+xP_x+yQ_x)}{(xP+yQ)^2}$$
$$M_y = N_x$$
$$P_y(xP+yQ)-P(xP_y+Q+yQ_y) = Q_x(xP+yQ)-Q(P+xP_x+yQ_x)$$
$$yP_yQ - yPQ_y = xPQ_x - xP_xQ$$
$$xP_xQ + yP_yQ = xPQ_x + yPQ_y$$
$$(xP_x+yP_y)Q = P(xQ_x+yQ_y)$$
With Euler's theorem, this reduces to an identity.
$$nPQ = PnQ$$
Thus the equation is exact. µ(x, y) is an integrating factor for the homogeneous equation.
Solution 14.3
We note that this is a homogeneous differential equation. The coefficient of dy/dt and the inhomogeneity are homogeneous of degree zero.
$$\frac{dy}{dt} = 2\left(\frac{y}{t}\right) + \left(\frac{y}{t}\right)^2.$$
We make the change of variables u = y/t to obtain a separable equation.
$$tu' + u = 2u + u^2$$
$$\frac{u'}{u^2+u} = \frac{1}{t}$$
Now we integrate to solve for u.
$$\frac{u'}{u(u+1)} = \frac{1}{t}$$
$$\frac{u'}{u} - \frac{u'}{u+1} = \frac{1}{t}$$
$$\ln|u| - \ln|u+1| = \ln|t| + c$$
$$\ln\left|\frac{u}{u+1}\right| = \ln|ct|$$
$$\frac{u}{u+1} = \pm ct$$
$$\frac{u}{u+1} = ct$$
$$u = \frac{ct}{1-ct}$$
$$u = \frac{t}{c-t}$$
$$y = \frac{t^2}{c-t}$$
Solution 14.4
We consider
$$y' - \frac{1}{x}y = x^α, \quad x > 0.$$
First we find the integrating factor.
$$I(x) = \exp\left(-\int\frac{1}{x}\,dx\right) = \exp(-\ln x) = \frac{1}{x}.$$
We multiply by the integrating factor and integrate.
$$\frac{1}{x}y' - \frac{1}{x^2}y = x^{α-1}$$
$$\frac{d}{dx}\left(\frac{1}{x}y\right) = x^{α-1}$$
$$\frac{1}{x}y = \int x^{α-1}\,dx + c$$
$$y = x\int x^{α-1}\,dx + cx$$
$$y = \begin{cases}\dfrac{x^{α+1}}{α} + cx & \text{for } α \ne 0,\\ x\ln x + cx & \text{for } α = 0.\end{cases}$$
Solution 14.5
1.
$$y' + xy = x^{2n+1}, \qquad y(1) = 1, \qquad n \in \mathbb{Z}$$
We find the integrating factor.
$$I(x) = e^{\int x\,dx} = e^{x^2/2}$$
We multiply by the integrating factor and integrate. Since the initial condition is given at x = 1, we will take the lower bound of integration to be that point.
$$\frac{d}{dx}\left(e^{x^2/2}y\right) = x^{2n+1}\,e^{x^2/2}$$
$$y = e^{-x^2/2}\int_1^x ξ^{2n+1}\,e^{ξ^2/2}\,dξ + c\,e^{-x^2/2}$$
We choose the constant of integration to satisfy the initial condition.
$$y = e^{-x^2/2}\int_1^x ξ^{2n+1}\,e^{ξ^2/2}\,dξ + e^{(1-x^2)/2}$$
If n ≥ 0 then we can use integration by parts to write the integral as a sum of terms. If n < 0 we can write the integral in terms of the exponential integral function. However, the integral form above is as nice as any other and we leave the answer in that form.
2.
$$\frac{dy}{dx} - 2xy(x) = 1, \qquad y(0) = 1.$$
We determine the integrating factor and then integrate the equation.
$$I(x) = e^{\int -2x\,dx} = e^{-x^2}$$
$$\frac{d}{dx}\left(e^{-x^2}y\right) = e^{-x^2}$$
$$y = e^{x^2}\int_0^x e^{-ξ^2}\,dξ + c\,e^{x^2}$$
We choose the constant of integration to satisfy the initial condition.
$$y = e^{x^2}\left(1 + \int_0^x e^{-ξ^2}\,dξ\right)$$
We can write the answer in terms of the Error function,
$$\mathrm{erf}(x) \equiv \frac{2}{\sqrt{π}}\int_0^x e^{-ξ^2}\,dξ.$$
$$y = e^{x^2}\left(1 + \frac{\sqrt{π}}{2}\,\mathrm{erf}(x)\right)$$
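The erf form of part 2 is convenient to validate numerically; the following sketch (my addition, assuming NumPy and SciPy) integrates the initial value problem and compares against the closed form at the endpoint.

```python
import numpy as np
from scipy.special import erf
from scipy.integrate import solve_ivp

# y' - 2xy = 1, y(0) = 1, rewritten as y' = 1 + 2xy.
sol = solve_ivp(lambda x, y: 1 + 2 * x * y, (0.0, 1.5), [1.0],
                rtol=1e-11, atol=1e-13)
x = sol.t[-1]
closed = np.exp(x**2) * (1 + np.sqrt(np.pi) / 2 * erf(x))
print(sol.y[0, -1], closed)  # the two values should agree
```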
Solution 14.6
We determine the integrating factor and then integrate the equation.
$$I(x) = e^{\int α\,dx} = e^{αx}$$
$$\frac{d}{dx}\left(e^{αx}y\right) = β\,e^{(α-λ)x}$$
$$y = β\,e^{-αx}\int e^{(α-λ)x}\,dx + c\,e^{-αx}$$
First consider the case α ≠ λ.
$$y = β\,e^{-αx}\,\frac{e^{(α-λ)x}}{α-λ} + c\,e^{-αx}$$
$$y = \frac{β}{α-λ}\,e^{-λx} + c\,e^{-αx}$$
Clearly the solution vanishes as x → ∞.
Next consider α = λ.
$$y = β\,e^{-αx}x + c\,e^{-αx}$$
$$y = (c+βx)\,e^{-αx}$$
We use L'Hospital's rule to show that the solution vanishes as x → ∞.
$$\lim_{x\to\infty}\frac{c+βx}{e^{αx}} = \lim_{x\to\infty}\frac{β}{α\,e^{αx}} = 0$$
For β = λ = 1, the solution is
$$y = \begin{cases}\dfrac{1}{α-1}\,e^{-x} + c\,e^{-αx} & \text{for } α \ne 1,\\ (c+x)\,e^{-x} & \text{for } α = 1.\end{cases}$$
The solution which satisfies the initial condition is
$$y = \begin{cases}\dfrac{1}{α-1}\left(e^{-x} + (α-2)\,e^{-αx}\right) & \text{for } α \ne 1,\\ (1+x)\,e^{-x} & \text{for } α = 1.\end{cases}$$
In Figure 14.9 the solution is plotted for α = 1/16, 1/8, …, 16.
Consider the solution in the limit as α → 0.
$$\lim_{α\to0}y(x) = \lim_{α\to0}\frac{1}{α-1}\left(e^{-x}+(α-2)\,e^{-αx}\right) = 2 - e^{-x}$$
In the limit as α → ∞ we have,
$$\lim_{α\to\infty}y(x) = \lim_{α\to\infty}\frac{α-2}{α-1}\,e^{-αx} = \begin{cases}1 & \text{for } x = 0,\\ 0 & \text{for } x > 0.\end{cases}$$
Figure 14.9: The Solution for a Range of α
Figure 14.10: The Solution as α → 0 and α → ∞
This behavior is shown in Figure 14.10. The first graph plots the solutions for α = 1/128, 1/64, …, 1. The second graph plots the solutions for α = 1, 2, …, 128.
Solution 14.7
We substitute w = ∑_{n=0}^∞ aₙzⁿ into the equation dw/dz + w/(1−z) = 0.
$$\frac{d}{dz}\sum_{n=0}^{\infty}a_nz^n + \frac{1}{1-z}\sum_{n=0}^{\infty}a_nz^n = 0$$
$$(1-z)\sum_{n=1}^{\infty}na_nz^{n-1} + \sum_{n=0}^{\infty}a_nz^n = 0$$
$$\sum_{n=0}^{\infty}(n+1)a_{n+1}z^n - \sum_{n=0}^{\infty}na_nz^n + \sum_{n=0}^{\infty}a_nz^n = 0$$
$$\sum_{n=0}^{\infty}\left((n+1)a_{n+1}-(n-1)a_n\right)z^n = 0$$
Equating powers of z to zero, we obtain the relation,
$$a_{n+1} = \frac{n-1}{n+1}\,a_n.$$
a₀ is arbitrary. We can compute the rest of the coefficients from the recurrence relation.
$$a_1 = \frac{-1}{1}\,a_0 = -a_0$$
$$a_2 = \frac{0}{2}\,a_1 = 0$$
We see that the coefficients are zero for n ≥ 2. Thus the Taylor series expansion, (and the exact solution), is
$$w = a_0(1-z).$$
The radius of convergence of the series is infinite. The nearest singularity of 1/(1−z) is at z = 1. Thus we see the radius of convergence can be greater than the distance to the nearest singularity of the coefficient function, p(z).
Exact Equations
Solution 14.8
1.
$$\frac{dy}{dx} = \frac{x^2+xy+y^2}{x^2}$$
Since the right side is a homogeneous function of order zero, this is a homogeneous differential equation. We make the change of variables u = y/x and then solve the differential equation for u.
$$xu' + u = 1 + u + u^2$$
$$\frac{du}{1+u^2} = \frac{dx}{x}$$
$$\arctan(u) = \ln|x| + c$$
$$u = \tan(\ln(|cx|))$$
$$y = x\tan(\ln(|cx|))$$
2.
$$(4y-3x)\,dx + (y-2x)\,dy = 0$$
Since the coefficients are homogeneous functions of order one, this is a homogeneous differential equation. We make the change of variables u = y/x and then solve the differential equation for u.
$$\left(4\frac{y}{x}-3\right)dx + \left(\frac{y}{x}-2\right)dy = 0$$
$$(4u-3)\,dx + (u-2)(u\,dx+x\,du) = 0$$
$$(u^2+2u-3)\,dx + x(u-2)\,du = 0$$
$$\frac{dx}{x} + \frac{u-2}{(u+3)(u-1)}\,du = 0$$
$$\frac{dx}{x} + \left(\frac{5/4}{u+3}-\frac{1/4}{u-1}\right)du = 0$$
$$\ln(x) + \frac{5}{4}\ln(u+3) - \frac{1}{4}\ln(u-1) = c$$
$$\frac{x^4(u+3)^5}{u-1} = c$$
$$\frac{x^4(y/x+3)^5}{y/x-1} = c$$
$$\frac{(y+3x)^5}{y-x} = c$$
Solution 14.9
1.
\[ (3x^2 - 2xy + 2)\,dx + (6y^2 - x^2 + 3)\,dy = 0 \]
We check if this form of the equation, P dx + Q dy = 0, is exact.
\[ P_y = -2x, \qquad Q_x = -2x \]
Since P_y = Q_x, the equation is exact. Now we find the primitive u(x, y) which satisfies
\[ du = (3x^2 - 2xy + 2)\,dx + (6y^2 - x^2 + 3)\,dy. \]
The primitive satisfies the partial differential equations
\[ u_x = P, \qquad u_y = Q. \tag{14.8} \]
We integrate the first equation of 14.8 to determine u up to a function of integration.
\[ u_x = 3x^2 - 2xy + 2 \]
\[ u = x^3 - x^2 y + 2x + f(y) \]
We substitute this into the second equation of 14.8 to determine the function of integration up to an additive constant.
\[ -x^2 + f'(y) = 6y^2 - x^2 + 3 \]
\[ f'(y) = 6y^2 + 3 \]
\[ f(y) = 2y^3 + 3y \]
The solution of the differential equation is determined by the implicit equation u = c.
\[ x^3 - x^2 y + 2x + 2y^3 + 3y = c \]

2.
\[ \frac{dy}{dx} = -\frac{ax + by}{bx + cy} \]
\[ (ax + by)\,dx + (bx + cy)\,dy = 0 \]
We check if this form of the equation, P dx + Q dy = 0, is exact.
\[ P_y = b, \qquad Q_x = b \]
Since P_y = Q_x, the equation is exact. Now we find the primitive u(x, y) which satisfies
\[ du = (ax + by)\,dx + (bx + cy)\,dy \]
The primitive satisfies the partial differential equations
\[ u_x = P, \qquad u_y = Q. \tag{14.9} \]
We integrate the first equation of 14.9 to determine u up to a function of integration.
\[ u_x = ax + by \]
\[ u = \tfrac{1}{2} a x^2 + bxy + f(y) \]
We substitute this into the second equation of 14.9 to determine the function of integration up to an additive constant.
\[ bx + f'(y) = bx + cy \]
\[ f'(y) = cy \]
\[ f(y) = \tfrac{1}{2} c y^2 \]
The solution of the differential equation is determined by the implicit equation u = d.
\[ ax^2 + 2bxy + cy^2 = d \]
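The two-step procedure above (integrate u_x = P in x, then fix the function of integration from u_y = Q) is mechanical enough to automate. Here is a minimal sympy sketch, applied to the first equation of Solution 14.9:

```python
import sympy as sp

x, y = sp.symbols('x y')
P = 3*x**2 - 2*x*y + 2
Q = 6*y**2 - x**2 + 3

# Exactness check: P_y = Q_x
assert sp.diff(P, y) == sp.diff(Q, x)

# Integrate u_x = P, then determine f(y) from u_y = Q.
u = sp.integrate(P, x)                   # x^3 - x^2 y + 2x + f(y)
f_prime = sp.expand(Q - sp.diff(u, y))   # f'(y) = 6y^2 + 3
u += sp.integrate(f_prime, y)            # adds f(y) = 2y^3 + 3y
print(u)  # x**3 - x**2*y + 2*x + 2*y**3 + 3*y
```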
Solution 14.10
Note that since these equations are nonlinear, we cannot predict where the solutions will be defined from the equation alone.

1. This equation is separable. We integrate to get the general solution.
\[ \frac{dy}{dx} = (1 - 2x) y^2 \]
\[ \frac{dy}{y^2} = (1 - 2x)\,dx \]
\[ -\frac{1}{y} = x - x^2 + c \]
\[ y = \frac{1}{x^2 - x - c} \]
Now we apply the initial condition.
\[ y(0) = \frac{1}{-c} = -\frac{1}{6} \]
\[ y = \frac{1}{x^2 - x - 6} = \frac{1}{(x+2)(x-3)} \]
The solution is defined on the interval (−2 . . . 3).

2. This equation is separable. We integrate to get the general solution.
\[ x\,dx + y\,\mathrm{e}^{-x}\,dy = 0 \]
\[ x\,\mathrm{e}^{x}\,dx + y\,dy = 0 \]
\[ (x-1)\,\mathrm{e}^{x} + \frac{1}{2} y^2 = c \]
\[ y = \sqrt{2\left( c + (1-x)\,\mathrm{e}^{x} \right)} \]
We apply the initial condition to determine the constant of integration.
\[ y(0) = \sqrt{2(c+1)} = 1 \]
\[ c = -\frac{1}{2} \]
\[ y = \sqrt{2(1-x)\,\mathrm{e}^{x} - 1} \]
The function \( 2(1-x)\,\mathrm{e}^{x} - 1 \) is plotted in Figure 14.11. We see that the argument of the square root in the solution is non-negative only on an interval about the origin. Because \( 2(1-x)\,\mathrm{e}^{x} - 1 = 0 \) is a mixed algebraic/transcendental equation, we cannot solve it analytically. The solution of the differential equation is defined on the interval (−1.67835 . . . 0.768039).
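The interval endpoints quoted above are the two roots of the radicand. A short numerical sketch recovers them (scipy's brentq; the bracketing intervals are illustrative choices, not from the text):

```python
import numpy as np
from scipy.optimize import brentq

f = lambda x: 2*(1 - x)*np.exp(x) - 1  # the radicand

# f changes sign on each bracket, so brentq finds the endpoints.
left = brentq(f, -3.0, 0.0)
right = brentq(f, 0.0, 1.0)
print(left, right)  # approximately -1.67835 and 0.768039
```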
Solution 14.11
1. We consider the differential equation,
\[ (4y - x) y' - (9x^2 + y - 1) = 0. \]
\[ P_y = \frac{\partial}{\partial y}\left( 1 - y - 9x^2 \right) = -1, \qquad Q_x = \frac{\partial}{\partial x}(4y - x) = -1 \]
[Figure 14.11: The function 2(1 − x) e^x − 1.]
This equation is exact. It is simplest to solve the equation by rearranging terms to form exact derivatives.
\[ 4y y' - x y' - y + 1 - 9x^2 = 0 \]
\[ \frac{d}{dx}\left[ 2y^2 - xy \right] + 1 - 9x^2 = 0 \]
\[ 2y^2 - xy + x - 3x^3 + c = 0 \]
\[ y = \frac{1}{4}\left( x \pm \sqrt{x^2 - 8(c + x - 3x^3)} \right) \]

2. We consider the differential equation,
\[ (2x - 2y) y' + (2x + 4y) = 0. \]
\[ P_y = \frac{\partial}{\partial y}(2x + 4y) = 4, \qquad Q_x = \frac{\partial}{\partial x}(2x - 2y) = 2 \]
Since P_y ≠ Q_x, this is not an exact equation.
Solution 14.12
Recall that the differential equation
\[ P(x, y) + Q(x, y) y' = 0 \]
is exact if and only if P_y = Q_x. For Equation 14.7, this criterion is
\[ 2y \sin t = y f'(t) \]
\[ f'(t) = 2 \sin t \]
\[ f(t) = 2(a - \cos t). \]
In this case, the differential equation is
\[ y^2 \sin t + 2 y y' (a - \cos t) = 0. \]
We can integrate this exact equation by inspection.
\[ \frac{d}{dt}\left[ y^2 (a - \cos t) \right] = 0 \]
\[ y^2 (a - \cos t) = c \]
\[ y = \pm \frac{\sqrt{c}}{\sqrt{a - \cos t}} \]
The First Order, Linear Differential Equation
Solution 14.13
Consider the differential equation
\[ y' + \frac{y}{\sin x} = 0. \]
We use Equation 14.5 to determine the solution.
\[ y = c\, \mathrm{e}^{\int -1/\sin x \, dx} \]
\[ y = c\, \mathrm{e}^{-\ln|\tan(x/2)|} \]
\[ y = c \cot\left( \frac{x}{2} \right) \]
Initial Conditions
Well-Posed Problems
Solution 14.14
First we write the differential equation in the standard form.
\[ \frac{dy}{dt} + \frac{A}{t} y = \frac{1}{t} + t, \qquad t > 0 \]
We determine the integrating factor.
\[ I(t) = \mathrm{e}^{\int A/t \, dt} = \mathrm{e}^{A \ln t} = t^A \]
We multiply the differential equation by the integrating factor and integrate.
\[ \frac{d}{dt}\left[ t^A y \right] = t^{A-1} + t^{A+1} \]
\[ t^A y = \begin{cases} \dfrac{t^A}{A} + \dfrac{t^{A+2}}{A+2} + c, & A \neq 0, -2 \\[6pt] \ln t + \tfrac{1}{2} t^2 + c, & A = 0 \\[6pt] -\tfrac{1}{2} t^{-2} + \ln t + c, & A = -2 \end{cases} \]
\[ y = \begin{cases} \dfrac{1}{A} + \dfrac{t^2}{A+2} + c\,t^{-A}, & A \neq 0, -2 \\[6pt] \ln t + \tfrac{1}{2} t^2 + c, & A = 0 \\[6pt] -\tfrac{1}{2} + t^2 \ln t + c\,t^2, & A = -2 \end{cases} \]
For positive A, the solution is bounded at the origin only for c = 0. For A = 0, there are no bounded solutions. For negative A, the solution is bounded there for any value of c and thus we have a one-parameter family of solutions.
In summary, the solutions which are bounded at the origin are:
\[ y = \begin{cases} \dfrac{1}{A} + \dfrac{t^2}{A+2}, & A > 0 \\[6pt] \dfrac{1}{A} + \dfrac{t^2}{A+2} + c\,t^{-A}, & A < 0,\ A \neq -2 \\[6pt] -\tfrac{1}{2} + t^2 \ln t + c\,t^2, & A = -2 \end{cases} \]
Equations in the Complex Plane
Solution 14.15
1. Consider the equation \( w' + \frac{\sin z}{z} w = 0 \). The point z = 0 is the only point we need to examine in the finite plane. Since \( \frac{\sin z}{z} \) has a removable singularity at z = 0, there are no singular points in the finite plane. The substitution \( z = \frac{1}{\zeta} \) yields the equation
\[ u' - \frac{\sin(1/\zeta)}{\zeta} u = 0. \]
Since \( \frac{\sin(1/\zeta)}{\zeta} \) has an essential singularity at ζ = 0, the point at infinity is an irregular singular point of the original differential equation.

2. Consider the equation \( w' + \frac{1}{z-3} w = 0 \). Since \( \frac{1}{z-3} \) has a simple pole at z = 3, the differential equation has a regular singular point there. Making the substitution z = 1/ζ, w(z) = u(ζ),
\[ u' - \frac{1}{\zeta^2 (1/\zeta - 3)} u = 0 \]
\[ u' - \frac{1}{\zeta (1 - 3\zeta)} u = 0. \]
Since this equation has a simple pole at ζ = 0, the original equation has a regular singular point at infinity.

3. Consider the equation \( w' + z^{1/2} w = 0 \). There is an irregular singular point at z = 0. With the substitution z = 1/ζ, w(z) = u(ζ),
\[ u' - \frac{\zeta^{-1/2}}{\zeta^2} u = 0 \]
\[ u' - \zeta^{-5/2} u = 0. \]
We see that the point at infinity is also an irregular singular point of the original differential equation.
Solution 14.16
We start with the equation
\[ w' + z^{-2} w = 0. \]
Substituting \( w = z^{\lambda} \sum_{n=0}^{\infty} a_n z^n \), \( a_0 \neq 0 \), yields
\[ \frac{d}{dz}\left[ z^{\lambda} \sum_{n=0}^{\infty} a_n z^n \right] + z^{-2} z^{\lambda} \sum_{n=0}^{\infty} a_n z^n = 0 \]
\[ \lambda z^{\lambda-1} \sum_{n=0}^{\infty} a_n z^n + z^{\lambda} \sum_{n=1}^{\infty} n a_n z^{n-1} + z^{\lambda} \sum_{n=0}^{\infty} a_n z^{n-2} = 0 \]
The lowest power of z in the expansion is \( z^{\lambda-2} \). The coefficient of this term is a_0. Equating powers of z demands that a_0 = 0, which contradicts our initial assumption that it was nonzero. Thus we cannot find a λ such that the solution can be expanded in the form,
\[ w = z^{\lambda} \sum_{n=0}^{\infty} a_n z^n, \qquad a_0 \neq 0. \]
14.12 Quiz
Problem 14.1
What is the general solution of a first order differential equation?
Solution
Problem 14.2
Write a statement about the functions P and Q to make the following statement correct.
The first order differential equation
\[ P(x, y) + Q(x, y) \frac{dy}{dx} = 0 \]
is exact if and only if ________. It is separable if ________.
Solution
Problem 14.3
Derive the general solution of
\[ \frac{dy}{dx} + p(x) y = f(x). \]
Solution
Problem 14.4
Solve \( y' = y - y^2 \).
Solution
14.13 Quiz Solutions
Solution 14.1
The general solution of a first order differential equation is a one-parameter family of functions which
satisfies the equation.
Solution 14.2
The first order differential equation
\[ P(x, y) + Q(x, y) \frac{dy}{dx} = 0 \]
is exact if and only if \( P_y = Q_x \). It is separable if P = P(x) and Q = Q(y).
Solution 14.3
\[ \frac{dy}{dx} + p(x) y = f(x) \]
We multiply by the integrating factor \( \mu(x) = \exp(P(x)) = \exp\left( \int p(x)\,dx \right) \), and integrate.
\[ \frac{dy}{dx}\, \mathrm{e}^{P(x)} + p(x) y\, \mathrm{e}^{P(x)} = \mathrm{e}^{P(x)} f(x) \]
\[ \frac{d}{dx}\left[ y\, \mathrm{e}^{P(x)} \right] = \mathrm{e}^{P(x)} f(x) \]
\[ y\, \mathrm{e}^{P(x)} = \int \mathrm{e}^{P(x)} f(x)\,dx + c \]
\[ y = \mathrm{e}^{-P(x)} \int \mathrm{e}^{P(x)} f(x)\,dx + c\, \mathrm{e}^{-P(x)} \]
Solution 14.4
\( y' = y - y^2 \) is separable.
\[ \frac{y'}{y - y^2} = 1 \]
\[ \frac{y'}{y} - \frac{y'}{y - 1} = 1 \]
\[ \ln y - \ln(y - 1) = x + c \]
We do algebraic simplifications and rename the constant of integration to write the solution in a nice form.
\[ \frac{y}{y-1} = c\, \mathrm{e}^{x} \]
\[ y = (y - 1)\, c\, \mathrm{e}^{x} \]
\[ y = \frac{-c\, \mathrm{e}^{x}}{1 - c\, \mathrm{e}^{x}} = \frac{\mathrm{e}^{x}}{\mathrm{e}^{x} - c} \]
\[ y = \frac{1}{1 - c\, \mathrm{e}^{-x}} \]
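As a quick check, sympy's ODE solver reproduces this solution, up to its own convention for the integration constant. An illustrative sketch:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

sol = sp.dsolve(sp.Eq(y(x).diff(x), y(x) - y(x)**2), y(x))
print(sol)  # equivalent to y = 1/(1 - c*exp(-x)) after renaming the constant
```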
Chapter 15
First Order Linear Systems of
Differential Equations
We all agree that your theory is crazy, but is it crazy enough?
- Niels Bohr
15.1 Introduction
In this chapter we consider first order linear systems of differential equations. That is, we consider
equations of the form,
\[ x'(t) = A x(t) + f(t), \]
\[ x(t) = \begin{pmatrix} x_1(t) \\ \vdots \\ x_n(t) \end{pmatrix}, \qquad A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix}. \]
Initially we will consider the homogeneous problem, x′(t) = Ax(t). (Later we will find particular
solutions with variation of parameters.) The best way to solve these equations is through the use
of the matrix exponential. Unfortunately, using the matrix exponential requires knowledge of the
Jordan canonical form and matrix functions. Fortunately, we can solve a certain class of problems
using only the concepts of eigenvalues and eigenvectors of a matrix. We present this simple method
in the next section. In the following section we will take a detour into matrix theory to cover Jordan
canonical form and its applications. Then we will be able to solve the general case.
15.2 Using Eigenvalues and Eigenvectors to find Homogeneous Solutions
If you have forgotten what eigenvalues and eigenvectors are and how to compute them, go find a book on linear algebra and spend a few minutes re-acquainting yourself with the rudimentary material.

Recall that the single differential equation x′(t) = Ax has the general solution x = c e^{At}. Maybe the system of differential equations
\[ x'(t) = A x(t) \tag{15.1} \]
has similar solutions. Perhaps it has a solution of the form x(t) = ξ e^{λt} for some constant vector ξ and some value λ. Let's substitute this into the differential equation and see what happens.
\[ x'(t) = A x(t) \]
\[ \xi \lambda\, \mathrm{e}^{\lambda t} = A \xi\, \mathrm{e}^{\lambda t} \]
\[ A \xi = \lambda \xi \]
We see that if λ is an eigenvalue of A with eigenvector ξ then x(t) = ξ e^{λt} satisfies the differential equation. Since the differential equation is linear, c ξ e^{λt} is a solution.

Suppose that the n × n matrix A has the eigenvalues {λ_k} with a complete set of linearly independent eigenvectors {ξ_k}. Then each of ξ_k e^{λ_k t} is a homogeneous solution of Equation 15.1. We note that each of these solutions is linearly independent. Without any kind of justification I will tell you that the general solution of the differential equation is a linear combination of these n linearly independent solutions.
Result 15.2.1 Suppose that the n × n matrix A has the eigenvalues {λ_k} with a complete set of linearly independent eigenvectors {ξ_k}. The system of differential equations,
\[ x'(t) = A x(t), \]
has the general solution,
\[ x(t) = \sum_{k=1}^{n} c_k \xi_k\, \mathrm{e}^{\lambda_k t}. \]
Example 15.2.1 (mathematica/ode/systems/systems.nb) Find the solution of the following initial value problem. Describe the behavior of the solution as t → ∞.
\[ x' = A x \equiv \begin{pmatrix} -2 & 1 \\ -5 & 4 \end{pmatrix} x, \qquad x(0) = x_0 \equiv \begin{pmatrix} 1 \\ 3 \end{pmatrix} \]
The matrix has the distinct eigenvalues λ_1 = −1, λ_2 = 3. The corresponding eigenvectors are
\[ \xi_1 = \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \qquad \xi_2 = \begin{pmatrix} 1 \\ 5 \end{pmatrix}. \]
The general solution of the system of differential equations is
\[ x = c_1 \begin{pmatrix} 1 \\ 1 \end{pmatrix} \mathrm{e}^{-t} + c_2 \begin{pmatrix} 1 \\ 5 \end{pmatrix} \mathrm{e}^{3t}. \]
We apply the initial condition to determine the constants.
\[ \begin{pmatrix} 1 & 1 \\ 1 & 5 \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \begin{pmatrix} 1 \\ 3 \end{pmatrix}, \qquad c_1 = \frac{1}{2}, \quad c_2 = \frac{1}{2} \]
The solution subject to the initial condition is
\[ x = \frac{1}{2} \begin{pmatrix} 1 \\ 1 \end{pmatrix} \mathrm{e}^{-t} + \frac{1}{2} \begin{pmatrix} 1 \\ 5 \end{pmatrix} \mathrm{e}^{3t}. \]
For large t, the solution looks like
\[ x \approx \frac{1}{2} \begin{pmatrix} 1 \\ 5 \end{pmatrix} \mathrm{e}^{3t}. \]
[Figure 15.1: Homogeneous solutions in the phase plane.]
Both coordinates tend to infinity.
Figure 15.1 shows some homogeneous solutions in the phase plane.
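Result 15.2.1 translates directly into a few lines of numpy. The sketch below is illustrative (numpy normalizes the eigenvectors, so the coefficients differ from the 1/2's above while the solution is the same); it reproduces Example 15.2.1.

```python
import numpy as np

A = np.array([[-2.0, 1.0], [-5.0, 4.0]])
x0 = np.array([1.0, 3.0])

lam, V = np.linalg.eig(A)   # columns of V are eigenvectors
c = np.linalg.solve(V, x0)  # fit the initial condition: V c = x0

def x(t):
    # x(t) = sum_k c_k xi_k e^(lambda_k t)
    return V @ (c * np.exp(lam * t))

print(lam)     # -1 and 3
print(x(0.0))  # recovers [1, 3]
```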
Example 15.2.2 (mathematica/ode/systems/systems.nb) Find the solution of the following initial value problem. Describe the behavior of the solution as t → ∞.
\[ x' = A x \equiv \begin{pmatrix} 1 & 1 & 2 \\ 0 & 2 & 2 \\ -1 & 1 & 3 \end{pmatrix} x, \qquad x(0) = x_0 \equiv \begin{pmatrix} 2 \\ 0 \\ 1 \end{pmatrix} \]
The matrix has the distinct eigenvalues λ_1 = 1, λ_2 = 2, λ_3 = 3. The corresponding eigenvectors are
\[ \xi_1 = \begin{pmatrix} 0 \\ -2 \\ 1 \end{pmatrix}, \qquad \xi_2 = \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix}, \qquad \xi_3 = \begin{pmatrix} 2 \\ 2 \\ 1 \end{pmatrix}. \]
The general solution of the system of differential equations is
\[ x = c_1 \begin{pmatrix} 0 \\ -2 \\ 1 \end{pmatrix} \mathrm{e}^{t} + c_2 \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} \mathrm{e}^{2t} + c_3 \begin{pmatrix} 2 \\ 2 \\ 1 \end{pmatrix} \mathrm{e}^{3t}. \]
We apply the initial condition to determine the constants.
\[ \begin{pmatrix} 0 & 1 & 2 \\ -2 & 1 & 2 \\ 1 & 0 & 1 \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \\ c_3 \end{pmatrix} = \begin{pmatrix} 2 \\ 0 \\ 1 \end{pmatrix}, \qquad c_1 = 1, \quad c_2 = 2, \quad c_3 = 0 \]
The solution subject to the initial condition is
\[ x = \begin{pmatrix} 0 \\ -2 \\ 1 \end{pmatrix} \mathrm{e}^{t} + 2 \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} \mathrm{e}^{2t}. \]
As t → ∞, all coordinates tend to infinity.
Exercise 15.1 (mathematica/ode/systems/systems.nb)
Find the solution of the following initial value problem. Describe the behavior of the solution as t → ∞.
\[ x' = A x \equiv \begin{pmatrix} 1 & -5 \\ 1 & -3 \end{pmatrix} x, \qquad x(0) = x_0 \equiv \begin{pmatrix} 1 \\ 1 \end{pmatrix} \]
Hint, Solution
Exercise 15.2 (mathematica/ode/systems/systems.nb)
Find the solution of the following initial value problem. Describe the behavior of the solution as t → ∞.
\[ x' = A x \equiv \begin{pmatrix} -3 & 0 & 2 \\ 1 & -1 & 0 \\ -2 & -1 & 0 \end{pmatrix} x, \qquad x(0) = x_0 \equiv \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} \]
Hint, Solution
Exercise 15.3
Use the matrix form of the method of variation of parameters to find the general solution of
\[ \frac{dx}{dt} = \begin{pmatrix} 4 & -2 \\ 8 & -4 \end{pmatrix} x + \begin{pmatrix} t^{-3} \\ -t^{-2} \end{pmatrix}, \qquad t > 0. \]
Hint, Solution
15.3 Matrices and Jordan Canonical Form
Functions of Square Matrices. Consider a function f(x) with a Taylor series.
\[ f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(0)}{n!} x^n \]
We can define the function to take square matrices as arguments. The function of the square matrix A is defined in terms of the Taylor series.
\[ f(A) = \sum_{n=0}^{\infty} \frac{f^{(n)}(0)}{n!} A^n \]
(Note that this definition is usually not the most convenient method for computing a function of a matrix. Use the Jordan canonical form for that.)
Eigenvalues and Eigenvectors. Consider a square matrix A. A nonzero vector x is an eigenvector of the matrix with eigenvalue λ if
\[ A x = \lambda x. \]
Note that we can write this equation as
\[ (A - \lambda I) x = 0. \]
This equation has solutions for nonzero x if and only if A − λI is singular, (det(A − λI) = 0). We define the characteristic polynomial of the matrix, χ(λ), as this determinant.
\[ \chi(\lambda) = \det(A - \lambda I) \]
The roots of the characteristic polynomial are the eigenvalues of the matrix. The eigenvectors of distinct eigenvalues are linearly independent. Thus if a matrix has distinct eigenvalues, the eigenvectors form a basis.

If λ is a root of χ(λ) of multiplicity m then there are up to m linearly independent eigenvectors corresponding to that eigenvalue. That is, it has from 1 to m eigenvectors.
Diagonalizing Matrices. Consider an n × n matrix A that has a complete set of n linearly independent eigenvectors. A may or may not have distinct eigenvalues. Consider the matrix S with eigenvectors as columns.
\[ S = \begin{pmatrix} x_1 & x_2 & \cdots & x_n \end{pmatrix} \]
A is diagonalized by the similarity transformation:
\[ \Lambda = S^{-1} A S. \]
Λ is a diagonal matrix with the eigenvalues of A as the diagonal elements. Furthermore, the kth diagonal element is λ_k, the eigenvalue corresponding to the eigenvector x_k.

Generalized Eigenvectors. A vector x_k is a generalized eigenvector of rank k if
\[ (A - \lambda I)^k x_k = 0 \quad \text{but} \quad (A - \lambda I)^{k-1} x_k \neq 0. \]
Eigenvectors are generalized eigenvectors of rank 1. An n × n matrix has n linearly independent generalized eigenvectors. A chain of generalized eigenvectors generated by the rank m generalized eigenvector x_m is the set {x_1, x_2, . . . , x_m}, where
\[ x_k = (A - \lambda I) x_{k+1}, \qquad \text{for } k = m-1, \ldots, 1. \]
Computing Generalized Eigenvectors. Let λ be an eigenvalue of multiplicity m. Let n be the smallest integer such that
\[ \operatorname{rank}\left( \operatorname{nullspace}\left( (A - \lambda I)^n \right) \right) = m. \]
Let N_k denote the number of generalized eigenvectors of rank k. These have the value:
\[ N_k = \operatorname{rank}\left( \operatorname{nullspace}\left( (A - \lambda I)^k \right) \right) - \operatorname{rank}\left( \operatorname{nullspace}\left( (A - \lambda I)^{k-1} \right) \right). \]
One can compute the generalized eigenvectors of a matrix by looping through the following three steps until all the N_k are zero (a computational sketch follows Example 15.3.1):

1. Select the largest k for which N_k is positive. Find a generalized eigenvector x_k of rank k which is linearly independent of all the generalized eigenvectors found thus far.
2. From x_k generate the chain of eigenvectors {x_1, x_2, . . . , x_k}. Add this chain to the known generalized eigenvectors.
3. Decrement each positive N_k by one.
Example 15.3.1 Consider the matrix
\[ A = \begin{pmatrix} 1 & 1 & 1 \\ 2 & 1 & -1 \\ -3 & 2 & 4 \end{pmatrix}. \]
The characteristic polynomial of the matrix is
\[ \chi(\lambda) = \begin{vmatrix} 1-\lambda & 1 & 1 \\ 2 & 1-\lambda & -1 \\ -3 & 2 & 4-\lambda \end{vmatrix} = (1-\lambda)^2 (4-\lambda) + 3 + 4 + 3(1-\lambda) - 2(4-\lambda) + 2(1-\lambda) = -(\lambda - 2)^3. \]
Thus we see that λ = 2 is an eigenvalue of multiplicity 3. A − 2I is
\[ A - 2I = \begin{pmatrix} -1 & 1 & 1 \\ 2 & -1 & -1 \\ -3 & 2 & 2 \end{pmatrix} \]
The rank of the nullspace of A − 2I is less than 3.
\[ (A - 2I)^2 = \begin{pmatrix} 0 & 0 & 0 \\ -1 & 1 & 1 \\ 1 & -1 & -1 \end{pmatrix} \]
The rank of nullspace((A − 2I)^2) is less than 3 as well, so we have to take one more step.
\[ (A - 2I)^3 = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} \]
The rank of nullspace((A − 2I)^3) is 3. Thus there are generalized eigenvectors of ranks 1, 2 and 3. The generalized eigenvector of rank 3 satisfies:
\[ (A - 2I)^3 x_3 = 0 \]
We choose the solution
\[ x_3 = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}. \]
Now to compute the chain generated by x_3.
\[ x_2 = (A - 2I) x_3 = \begin{pmatrix} -1 \\ 2 \\ -3 \end{pmatrix}, \qquad x_1 = (A - 2I) x_2 = \begin{pmatrix} 0 \\ -1 \\ 1 \end{pmatrix} \]
Thus a set of generalized eigenvectors corresponding to the eigenvalue λ = 2 are
\[ x_1 = \begin{pmatrix} 0 \\ -1 \\ 1 \end{pmatrix}, \qquad x_2 = \begin{pmatrix} -1 \\ 2 \\ -3 \end{pmatrix}, \qquad x_3 = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}. \]
Jordan Block. A Jordan block is a square matrix which has the constant, λ, on the diagonal and ones on the first super-diagonal:
\[ \begin{pmatrix} \lambda & 1 & 0 & \cdots & 0 & 0 \\ 0 & \lambda & 1 & \cdots & 0 & 0 \\ 0 & 0 & \lambda & \ddots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \ddots & \vdots \\ 0 & 0 & 0 & \ddots & \lambda & 1 \\ 0 & 0 & 0 & \cdots & 0 & \lambda \end{pmatrix} \]
Jordan Canonical Form. A matrix J is in Jordan canonical form if all the elements are zero except for Jordan blocks J_k along the diagonal.
\[ J = \begin{pmatrix} J_1 & 0 & \cdots & 0 & 0 \\ 0 & J_2 & \ddots & 0 & 0 \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ 0 & 0 & \ddots & J_{n-1} & 0 \\ 0 & 0 & \cdots & 0 & J_n \end{pmatrix} \]
The Jordan canonical form of a matrix is obtained with the similarity transformation:
\[ J = S^{-1} A S, \]
where S is the matrix of the generalized eigenvectors of A and the generalized eigenvectors are grouped in chains.
Example 15.3.2 Again consider the matrix
\[ A = \begin{pmatrix} 1 & 1 & 1 \\ 2 & 1 & -1 \\ -3 & 2 & 4 \end{pmatrix}. \]
Since λ = 2 is an eigenvalue of multiplicity 3, the Jordan canonical form of the matrix is
\[ J = \begin{pmatrix} 2 & 1 & 0 \\ 0 & 2 & 1 \\ 0 & 0 & 2 \end{pmatrix}. \]
In Example 15.3.1 we found the generalized eigenvectors of A. We define the matrix with generalized eigenvectors as columns:
\[ S = \begin{pmatrix} 0 & -1 & 1 \\ -1 & 2 & 0 \\ 1 & -3 & 0 \end{pmatrix}. \]
We can verify that J = S^{-1} A S.
\[ J = S^{-1} A S = \begin{pmatrix} 0 & -3 & -2 \\ 0 & -1 & -1 \\ 1 & -1 & -1 \end{pmatrix} \begin{pmatrix} 1 & 1 & 1 \\ 2 & 1 & -1 \\ -3 & 2 & 4 \end{pmatrix} \begin{pmatrix} 0 & -1 & 1 \\ -1 & 2 & 0 \\ 1 & -3 & 0 \end{pmatrix} = \begin{pmatrix} 2 & 1 & 0 \\ 0 & 2 & 1 \\ 0 & 0 & 2 \end{pmatrix} \]
Functions of Matrices in Jordan Canonical Form. The function of an n × n Jordan block is the upper-triangular matrix:
\[ f(J_k) = \begin{pmatrix} f(\lambda) & \frac{f'(\lambda)}{1!} & \frac{f''(\lambda)}{2!} & \cdots & \frac{f^{(n-2)}(\lambda)}{(n-2)!} & \frac{f^{(n-1)}(\lambda)}{(n-1)!} \\ 0 & f(\lambda) & \frac{f'(\lambda)}{1!} & \cdots & \frac{f^{(n-3)}(\lambda)}{(n-3)!} & \frac{f^{(n-2)}(\lambda)}{(n-2)!} \\ 0 & 0 & f(\lambda) & \ddots & \frac{f^{(n-4)}(\lambda)}{(n-4)!} & \frac{f^{(n-3)}(\lambda)}{(n-3)!} \\ \vdots & \vdots & \vdots & \ddots & \ddots & \vdots \\ 0 & 0 & 0 & \ddots & f(\lambda) & \frac{f'(\lambda)}{1!} \\ 0 & 0 & 0 & \cdots & 0 & f(\lambda) \end{pmatrix} \]
The function of a matrix in Jordan canonical form is
\[ f(J) = \begin{pmatrix} f(J_1) & 0 & \cdots & 0 & 0 \\ 0 & f(J_2) & \ddots & 0 & 0 \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ 0 & 0 & \ddots & f(J_{n-1}) & 0 \\ 0 & 0 & \cdots & 0 & f(J_n) \end{pmatrix} \]
The Jordan canonical form of a matrix satisfies:
\[ f(J) = S^{-1} f(A) S, \]
where S is the matrix of the generalized eigenvectors of A. This gives us a convenient method for computing functions of matrices.
Example 15.3.3 Consider the matrix exponential function e^A for our old friend:
\[ A = \begin{pmatrix} 1 & 1 & 1 \\ 2 & 1 & -1 \\ -3 & 2 & 4 \end{pmatrix}. \]
In Example 15.3.2 we showed that the Jordan canonical form of the matrix is
\[ J = \begin{pmatrix} 2 & 1 & 0 \\ 0 & 2 & 1 \\ 0 & 0 & 2 \end{pmatrix}. \]
Since all the derivatives of e^λ are just e^λ, it is especially easy to compute e^J.
\[ \mathrm{e}^{J} = \begin{pmatrix} \mathrm{e}^{2} & \mathrm{e}^{2} & \mathrm{e}^{2}/2 \\ 0 & \mathrm{e}^{2} & \mathrm{e}^{2} \\ 0 & 0 & \mathrm{e}^{2} \end{pmatrix} \]
We find e^A with a similarity transformation of e^J. We use the matrix of generalized eigenvectors found in Example 15.3.2.
\[ \mathrm{e}^{A} = S\, \mathrm{e}^{J} S^{-1} = \begin{pmatrix} 0 & -1 & 1 \\ -1 & 2 & 0 \\ 1 & -3 & 0 \end{pmatrix} \begin{pmatrix} \mathrm{e}^{2} & \mathrm{e}^{2} & \mathrm{e}^{2}/2 \\ 0 & \mathrm{e}^{2} & \mathrm{e}^{2} \\ 0 & 0 & \mathrm{e}^{2} \end{pmatrix} \begin{pmatrix} 0 & -3 & -2 \\ 0 & -1 & -1 \\ 1 & -1 & -1 \end{pmatrix} \]
\[ \mathrm{e}^{A} = \begin{pmatrix} 0 & 2 & 2 \\ 3 & 1 & -1 \\ -5 & 3 & 5 \end{pmatrix} \frac{\mathrm{e}^{2}}{2} \]
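sympy will confirm this value of e^A; the sketch below is illustrative.

```python
import sympy as sp

A = sp.Matrix([[1, 1, 1], [2, 1, -1], [-3, 2, 4]])
target = sp.exp(2) / 2 * sp.Matrix([[0, 2, 2], [3, 1, -1], [-5, 3, 5]])

# Matrix.exp() computes the exponential via the Jordan decomposition.
print(sp.simplify(A.exp() - target))  # zero matrix
```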
15.4 Using the Matrix Exponential
The homogeneous differential equation
\[ x'(t) = A x(t) \]
has the solution
\[ x(t) = \mathrm{e}^{A t} c \]
where c is a vector of constants. The solution subject to the initial condition, x(t_0) = x_0, is
\[ x(t) = \mathrm{e}^{A (t - t_0)} x_0. \]
The homogeneous differential equation
\[ x'(t) = \frac{1}{t} A x(t) \]
has the solution
\[ x(t) = t^{A} c \equiv \mathrm{e}^{A \log t} c, \]
where c is a vector of constants. The solution subject to the initial condition, x(t_0) = x_0, is
\[ x(t) = \left( \frac{t}{t_0} \right)^{A} x_0 \equiv \mathrm{e}^{A \log(t/t_0)} x_0. \]
The inhomogeneous problem
\[ x'(t) = A x(t) + f(t), \qquad x(t_0) = x_0 \]
has the solution
\[ x(t) = \mathrm{e}^{A (t - t_0)} x_0 + \mathrm{e}^{A t} \int_{t_0}^{t} \mathrm{e}^{-A \tau} f(\tau)\,d\tau. \]
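Numerically, e^{A(t−t0)} is available as scipy's expm. A minimal sketch of the homogeneous solution formula (the matrix and initial data are illustrative):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[-2.0, 1.0], [-5.0, 4.0]])
x0 = np.array([1.0, 3.0])

def x(t, t0=0.0):
    # x(t) = e^(A (t - t0)) x0
    return expm(A * (t - t0)) @ x0

print(x(1.0))
```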
Example 15.4.1 Consider the system
\[ \frac{dx}{dt} = \begin{pmatrix} 1 & 1 & 1 \\ 2 & 1 & -1 \\ -3 & 2 & 4 \end{pmatrix} x. \]
The general solution of the system of differential equations is
\[ x(t) = \mathrm{e}^{A t} c. \]
In Example 15.3.3 we found e^A. At is just a constant times A. The eigenvalues of At are {λ_k t} where {λ_k} are the eigenvalues of A. The generalized eigenvectors of At are the same as those of A.

Consider e^{Jt}. The derivatives of f(λ) = e^{λt} are f'(λ) = t e^{λt} and f''(λ) = t² e^{λt}. Thus we have
\[ \mathrm{e}^{J t} = \begin{pmatrix} \mathrm{e}^{2t} & t\,\mathrm{e}^{2t} & t^2\,\mathrm{e}^{2t}/2 \\ 0 & \mathrm{e}^{2t} & t\,\mathrm{e}^{2t} \\ 0 & 0 & \mathrm{e}^{2t} \end{pmatrix} = \begin{pmatrix} 1 & t & t^2/2 \\ 0 & 1 & t \\ 0 & 0 & 1 \end{pmatrix} \mathrm{e}^{2t} \]
We find e^{At} with a similarity transformation.
\[ \mathrm{e}^{A t} = S\, \mathrm{e}^{J t} S^{-1} = \begin{pmatrix} 0 & -1 & 1 \\ -1 & 2 & 0 \\ 1 & -3 & 0 \end{pmatrix} \begin{pmatrix} 1 & t & t^2/2 \\ 0 & 1 & t \\ 0 & 0 & 1 \end{pmatrix} \mathrm{e}^{2t} \begin{pmatrix} 0 & -3 & -2 \\ 0 & -1 & -1 \\ 1 & -1 & -1 \end{pmatrix} \]
\[ \mathrm{e}^{A t} = \begin{pmatrix} 1-t & t & t \\ 2t - t^2/2 & 1 - t + t^2/2 & -t + t^2/2 \\ -3t + t^2/2 & 2t - t^2/2 & 1 + 2t - t^2/2 \end{pmatrix} \mathrm{e}^{2t} \]
The solution of the system of differential equations is
\[ x(t) = \left[ c_1 \begin{pmatrix} 1-t \\ 2t - t^2/2 \\ -3t + t^2/2 \end{pmatrix} + c_2 \begin{pmatrix} t \\ 1 - t + t^2/2 \\ 2t - t^2/2 \end{pmatrix} + c_3 \begin{pmatrix} t \\ -t + t^2/2 \\ 1 + 2t - t^2/2 \end{pmatrix} \right] \mathrm{e}^{2t} \]
Example 15.4.2 Consider the Euler equation system
\[ \frac{dx}{dt} = \frac{1}{t} A x \equiv \frac{1}{t} \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix} x. \]
The solution is x(t) = t^A c. Note that A is almost in Jordan canonical form. It has a one on the sub-diagonal instead of the super-diagonal. It is clear that a function of A is defined
\[ f(A) = \begin{pmatrix} f(1) & 0 \\ f'(1) & f(1) \end{pmatrix}. \]
The function f(λ) = t^λ has the derivative f'(λ) = t^λ log t. Thus the solution of the system is
\[ x(t) = \begin{pmatrix} t & 0 \\ t \log t & t \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = c_1 \begin{pmatrix} t \\ t \log t \end{pmatrix} + c_2 \begin{pmatrix} 0 \\ t \end{pmatrix} \]
Example 15.4.3 Consider an inhomogeneous system of differential equations.
\[ \frac{dx}{dt} = A x + f(t) \equiv \begin{pmatrix} 4 & -2 \\ 8 & -4 \end{pmatrix} x + \begin{pmatrix} t^{-3} \\ -t^{-2} \end{pmatrix}, \qquad t > 0. \]
The general solution is
\[ x(t) = \mathrm{e}^{A t} c + \mathrm{e}^{A t} \int \mathrm{e}^{-A t} f(t)\,dt. \]
First we find homogeneous solutions. The characteristic equation for the matrix is
\[ \chi(\lambda) = \begin{vmatrix} 4-\lambda & -2 \\ 8 & -4-\lambda \end{vmatrix} = \lambda^2 = 0 \]
λ = 0 is an eigenvalue of multiplicity 2. Thus the Jordan canonical form of the matrix is
\[ J = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}. \]
Since rank(nullspace(A − 0I)) = 1 there is only one eigenvector. A generalized eigenvector of rank 2 satisfies
\[ (A - 0I)^2 x_2 = 0, \qquad \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix} x_2 = 0. \]
We choose
\[ x_2 = \begin{pmatrix} 1 \\ 0 \end{pmatrix} \]
Now we generate the chain from x_2.
\[ x_1 = (A - 0I) x_2 = \begin{pmatrix} 4 \\ 8 \end{pmatrix} \]
We define the matrix of generalized eigenvectors S.
\[ S = \begin{pmatrix} 4 & 1 \\ 8 & 0 \end{pmatrix} \]
The derivative of f(λ) = e^{λt} is f'(λ) = t e^{λt}. Thus
\[ \mathrm{e}^{J t} = \begin{pmatrix} 1 & t \\ 0 & 1 \end{pmatrix} \]
The homogeneous solution of the differential equation system is x_h = e^{At} c where
\[ \mathrm{e}^{A t} = S\, \mathrm{e}^{J t} S^{-1} = \begin{pmatrix} 4 & 1 \\ 8 & 0 \end{pmatrix} \begin{pmatrix} 1 & t \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 0 & 1/8 \\ 1 & -1/2 \end{pmatrix} = \begin{pmatrix} 1+4t & -2t \\ 8t & 1-4t \end{pmatrix} \]
The general solution of the inhomogeneous system of equations is
\[ x(t) = \begin{pmatrix} 1+4t & -2t \\ 8t & 1-4t \end{pmatrix} c + \begin{pmatrix} 1+4t & -2t \\ 8t & 1-4t \end{pmatrix} \int \begin{pmatrix} 1-4t & 2t \\ -8t & 1+4t \end{pmatrix} \begin{pmatrix} t^{-3} \\ -t^{-2} \end{pmatrix} dt \]
Carrying out the integration gives
\[ x(t) = c_1 \begin{pmatrix} 1+4t \\ 8t \end{pmatrix} + c_2 \begin{pmatrix} -2t \\ 1-4t \end{pmatrix} + \begin{pmatrix} -2 - 2\log t + \frac{2}{t} - \frac{1}{2t^2} \\[4pt] -4 - 4\log t + \frac{5}{t} \end{pmatrix} \]
We can tidy up the answer a little bit. First we take linear combinations of the homogeneous solutions to obtain a simpler form.
\[ x(t) = c_1 \begin{pmatrix} 1 \\ 2 \end{pmatrix} + c_2 \begin{pmatrix} 2t \\ 4t-1 \end{pmatrix} + \begin{pmatrix} -2 - 2\log t + \frac{2}{t} - \frac{1}{2t^2} \\[4pt] -4 - 4\log t + \frac{5}{t} \end{pmatrix} \]
Then we add 2 times the first homogeneous solution to the particular solution.
\[ x(t) = c_1 \begin{pmatrix} 1 \\ 2 \end{pmatrix} + c_2 \begin{pmatrix} 2t \\ 4t-1 \end{pmatrix} + \begin{pmatrix} -2\log t + \frac{2}{t} - \frac{1}{2t^2} \\[4pt] -4\log t + \frac{5}{t} \end{pmatrix} \]
15.5 Exercises
Exercise 15.4 (mathematica/ode/systems/systems.nb)
Find the solution of the following initial value problem.
\[ x' = A x \equiv \begin{pmatrix} -2 & 1 \\ -5 & 4 \end{pmatrix} x, \qquad x(0) = x_0 \equiv \begin{pmatrix} 1 \\ 3 \end{pmatrix} \]
Hint, Solution
Exercise 15.5 (mathematica/ode/systems/systems.nb)
Find the solution of the following initial value problem.
\[ x' = A x \equiv \begin{pmatrix} 1 & 1 & 2 \\ 0 & 2 & 2 \\ -1 & 1 & 3 \end{pmatrix} x, \qquad x(0) = x_0 \equiv \begin{pmatrix} 2 \\ 0 \\ 1 \end{pmatrix} \]
Hint, Solution
Exercise 15.6 (mathematica/ode/systems/systems.nb)
Find the solution of the following initial value problem. Describe the behavior of the solution as t → ∞.
\[ x' = A x \equiv \begin{pmatrix} 1 & -5 \\ 1 & -3 \end{pmatrix} x, \qquad x(0) = x_0 \equiv \begin{pmatrix} 1 \\ 1 \end{pmatrix} \]
Hint, Solution
Exercise 15.7 (mathematica/ode/systems/systems.nb)
Find the solution of the following initial value problem. Describe the behavior of the solution as t → ∞.
\[ x' = A x \equiv \begin{pmatrix} -3 & 0 & 2 \\ 1 & -1 & 0 \\ -2 & -1 & 0 \end{pmatrix} x, \qquad x(0) = x_0 \equiv \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} \]
Hint, Solution
Exercise 15.8 (mathematica/ode/systems/systems.nb)
Find the solution of the following initial value problem. Describe the behavior of the solution as t → ∞.
\[ x' = A x \equiv \begin{pmatrix} 1 & -4 \\ 4 & -7 \end{pmatrix} x, \qquad x(0) = x_0 \equiv \begin{pmatrix} 3 \\ 2 \end{pmatrix} \]
Hint, Solution
Exercise 15.9 (mathematica/ode/systems/systems.nb)
Find the solution of the following initial value problem. Describe the behavior of the solution as t → ∞.
\[ x' = A x \equiv \begin{pmatrix} -1 & 0 & 0 \\ -4 & 1 & 0 \\ 3 & 6 & 2 \end{pmatrix} x, \qquad x(0) = x_0 \equiv \begin{pmatrix} -1 \\ 2 \\ -30 \end{pmatrix} \]
Hint, Solution
Exercise 15.10
1. Consider the system
\[ x' = A x = \begin{pmatrix} 1 & 1 & 1 \\ 2 & 1 & -1 \\ -3 & 2 & 4 \end{pmatrix} x. \tag{15.2} \]
(a) Show that λ = 2 is an eigenvalue of multiplicity 3 of the coefficient matrix A, and that there is only one corresponding eigenvector, namely
\[ \xi^{(1)} = \begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix}. \]
(b) Using the information in part (i), write down one solution x^{(1)}(t) of the system (15.2). There is no other solution of a purely exponential form x = ξ e^{λt}.
(c) To find a second solution use the form x = ξ t e^{2t} + η e^{2t}, and find appropriate vectors ξ and η. This gives a solution of the system (15.2) which is independent of the one obtained in part (ii).
(d) To find a third linearly independent solution use the form x = ξ (t²/2) e^{2t} + η t e^{2t} + ζ e^{2t}. Show that ξ, η and ζ satisfy the equations
\[ (A - 2I)\xi = 0, \qquad (A - 2I)\eta = \xi, \qquad (A - 2I)\zeta = \eta. \]
The first two equations can be taken to coincide with those obtained in part (iii). Solve the third equation, and write down a third independent solution of the system (15.2).

2. Consider the system
\[ x' = A x = \begin{pmatrix} 5 & -3 & -2 \\ 8 & -5 & -4 \\ -4 & 3 & 3 \end{pmatrix} x. \tag{15.3} \]
(a) Show that λ = 1 is an eigenvalue of multiplicity 3 of the coefficient matrix A, and that there are only two linearly independent eigenvectors, which we may take as
\[ \xi^{(1)} = \begin{pmatrix} 1 \\ 0 \\ 2 \end{pmatrix}, \qquad \xi^{(2)} = \begin{pmatrix} 0 \\ 2 \\ -3 \end{pmatrix} \]
Find two independent solutions of equation (15.3).
(b) To find a third solution use the form x = ξ t e^{t} + η e^{t}; then show that ξ and η must satisfy
\[ (A - I)\xi = 0, \qquad (A - I)\eta = \xi. \]
Show that the most general solution of the first of these equations is ξ = c_1 ξ^{(1)} + c_2 ξ^{(2)}, where c_1 and c_2 are arbitrary constants. Show that, in order to solve the second of these equations, it is necessary to take c_1 = c_2. Obtain such a vector η, and use it to obtain a third independent solution of the system (15.3).
Hint, Solution
Exercise 15.11 (mathematica/ode/systems/systems.nb)
Consider the system of ODE's
\[ \frac{dx}{dt} = A x, \qquad x(0) = x_0 \]
where A is the constant 3 × 3 matrix
\[ A = \begin{pmatrix} 1 & 1 & 1 \\ 2 & 1 & -1 \\ -8 & -5 & -3 \end{pmatrix} \]
1. Find the eigenvalues and associated eigenvectors of A. [HINT: notice that λ = −1 is a root of the characteristic polynomial of A.]
2. Use the results from part (a) to construct e^{At} and therefore the solution to the initial value problem above.
3. Use the results of part (a) to find the general solution to
\[ \frac{dx}{dt} = \frac{1}{t} A x. \]
Hint, Solution
Exercise 15.12 (mathematica/ode/systems/systems.nb)
1. Find the general solution to
\[ \frac{dx}{dt} = A x \]
where
\[ A = \begin{pmatrix} 2 & 0 & 1 \\ 0 & 2 & 0 \\ 0 & 1 & 3 \end{pmatrix} \]
2. Solve
\[ \frac{dx}{dt} = A x + g(t), \qquad x(0) = 0 \]
using A from part (a).
Hint, Solution
Exercise 15.13
Let A be an n × n matrix of constants. The system
\[ \frac{dx}{dt} = \frac{1}{t} A x, \tag{15.4} \]
is analogous to the Euler equation.
1. Verify that when A is a 2 × 2 constant matrix, elimination of (15.4) yields a second order Euler differential equation.
2. Now assume that A is an n × n matrix of constants. Show that this system, in analogy with the Euler equation, has solutions of the form x = a t^λ where a is a constant vector provided a and λ satisfy certain conditions.
3. Based on your experience with the treatment of multiple roots in the solution of constant coefficient systems, what form will the general solution of (15.4) take if λ is a multiple eigenvalue in the eigenvalue problem derived in part (b)?
4. Verify your prediction by deriving the general solution for the system
\[ \frac{dx}{dt} = \frac{1}{t} \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix} x. \]
Hint, Solution
15.6 Hints
Hint 15.1
Hint 15.2
Hint 15.3
Hint 15.4
Hint 15.5
Hint 15.6
Hint 15.7
Hint 15.8
Hint 15.9
Hint 15.10
Hint 15.11
Hint 15.12
Hint 15.13
15.7 Solutions
Solution 15.1
We consider an initial value problem.
\[ x' = A x \equiv \begin{pmatrix} 1 & -5 \\ 1 & -3 \end{pmatrix} x, \qquad x(0) = x_0 \equiv \begin{pmatrix} 1 \\ 1 \end{pmatrix} \]
The matrix has the distinct eigenvalues λ_1 = −1 − ı, λ_2 = −1 + ı. The corresponding eigenvectors are
\[ \xi_1 = \begin{pmatrix} 2 - \imath \\ 1 \end{pmatrix}, \qquad \xi_2 = \begin{pmatrix} 2 + \imath \\ 1 \end{pmatrix}. \]
The general solution of the system of differential equations is
\[ x = c_1 \begin{pmatrix} 2 - \imath \\ 1 \end{pmatrix} \mathrm{e}^{(-1-\imath)t} + c_2 \begin{pmatrix} 2 + \imath \\ 1 \end{pmatrix} \mathrm{e}^{(-1+\imath)t}. \]
We can take the real and imaginary parts of either of these solutions to obtain real-valued solutions.
\[ \begin{pmatrix} 2 + \imath \\ 1 \end{pmatrix} \mathrm{e}^{(-1+\imath)t} = \begin{pmatrix} 2\cos(t) - \sin(t) \\ \cos(t) \end{pmatrix} \mathrm{e}^{-t} + \imath \begin{pmatrix} \cos(t) + 2\sin(t) \\ \sin(t) \end{pmatrix} \mathrm{e}^{-t} \]
\[ x = c_1 \begin{pmatrix} 2\cos(t) - \sin(t) \\ \cos(t) \end{pmatrix} \mathrm{e}^{-t} + c_2 \begin{pmatrix} \cos(t) + 2\sin(t) \\ \sin(t) \end{pmatrix} \mathrm{e}^{-t} \]
We apply the initial condition to determine the constants.
\[ \begin{pmatrix} 2 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \qquad c_1 = 1, \quad c_2 = -1 \]
The solution subject to the initial condition is
\[ x = \begin{pmatrix} \cos(t) - 3\sin(t) \\ \cos(t) - \sin(t) \end{pmatrix} \mathrm{e}^{-t}. \]
Plotted in the phase plane, the solution spirals in to the origin as t increases. Both coordinates tend to zero as t → ∞.
Solution 15.2
We consider an initial value problem.
\[ x' = A x \equiv \begin{pmatrix} -3 & 0 & 2 \\ 1 & -1 & 0 \\ -2 & -1 & 0 \end{pmatrix} x, \qquad x(0) = x_0 \equiv \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} \]
The matrix has the distinct eigenvalues λ_1 = −2, λ_2 = −1 − ı√2, λ_3 = −1 + ı√2. The corresponding eigenvectors are
\[ \xi_1 = \begin{pmatrix} 2 \\ -2 \\ 1 \end{pmatrix}, \qquad \xi_2 = \begin{pmatrix} 2 + \imath\sqrt{2} \\ -1 + \imath\sqrt{2} \\ 3 \end{pmatrix}, \qquad \xi_3 = \begin{pmatrix} 2 - \imath\sqrt{2} \\ -1 - \imath\sqrt{2} \\ 3 \end{pmatrix}. \]
The general solution of the system of differential equations is
\[ x = c_1 \begin{pmatrix} 2 \\ -2 \\ 1 \end{pmatrix} \mathrm{e}^{-2t} + c_2 \begin{pmatrix} 2 + \imath\sqrt{2} \\ -1 + \imath\sqrt{2} \\ 3 \end{pmatrix} \mathrm{e}^{(-1-\imath\sqrt{2})t} + c_3 \begin{pmatrix} 2 - \imath\sqrt{2} \\ -1 - \imath\sqrt{2} \\ 3 \end{pmatrix} \mathrm{e}^{(-1+\imath\sqrt{2})t}. \]
We can take the real and imaginary parts of the second or third solution to obtain two real-valued solutions.
\[ \begin{pmatrix} 2 + \imath\sqrt{2} \\ -1 + \imath\sqrt{2} \\ 3 \end{pmatrix} \mathrm{e}^{(-1-\imath\sqrt{2})t} = \begin{pmatrix} 2\cos(\sqrt{2}t) + \sqrt{2}\sin(\sqrt{2}t) \\ -\cos(\sqrt{2}t) + \sqrt{2}\sin(\sqrt{2}t) \\ 3\cos(\sqrt{2}t) \end{pmatrix} \mathrm{e}^{-t} + \imath \begin{pmatrix} \sqrt{2}\cos(\sqrt{2}t) - 2\sin(\sqrt{2}t) \\ \sqrt{2}\cos(\sqrt{2}t) + \sin(\sqrt{2}t) \\ -3\sin(\sqrt{2}t) \end{pmatrix} \mathrm{e}^{-t} \]
\[ x = c_1 \begin{pmatrix} 2 \\ -2 \\ 1 \end{pmatrix} \mathrm{e}^{-2t} + c_2 \begin{pmatrix} 2\cos(\sqrt{2}t) + \sqrt{2}\sin(\sqrt{2}t) \\ -\cos(\sqrt{2}t) + \sqrt{2}\sin(\sqrt{2}t) \\ 3\cos(\sqrt{2}t) \end{pmatrix} \mathrm{e}^{-t} + c_3 \begin{pmatrix} \sqrt{2}\cos(\sqrt{2}t) - 2\sin(\sqrt{2}t) \\ \sqrt{2}\cos(\sqrt{2}t) + \sin(\sqrt{2}t) \\ -3\sin(\sqrt{2}t) \end{pmatrix} \mathrm{e}^{-t} \]
We apply the initial condition to determine the constants.
\[ \begin{pmatrix} 2 & 2 & \sqrt{2} \\ -2 & -1 & \sqrt{2} \\ 1 & 3 & 0 \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \\ c_3 \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}, \qquad c_1 = \frac{1}{3}, \quad c_2 = -\frac{1}{9}, \quad c_3 = \frac{5}{9\sqrt{2}} \]
The solution subject to the initial condition is
\[ x = \frac{1}{3} \begin{pmatrix} 2 \\ -2 \\ 1 \end{pmatrix} \mathrm{e}^{-2t} + \frac{1}{6} \begin{pmatrix} 2\cos(\sqrt{2}t) - 4\sqrt{2}\sin(\sqrt{2}t) \\ 4\cos(\sqrt{2}t) + \sqrt{2}\sin(\sqrt{2}t) \\ -2\cos(\sqrt{2}t) - 5\sqrt{2}\sin(\sqrt{2}t) \end{pmatrix} \mathrm{e}^{-t}. \]
Since every eigenvalue has negative real part, all coordinates tend to zero as t → ∞. Plotted in the phase plane, the solution would spiral in to the origin.
Solution 15.3
Homogeneous Solution, Method 1. We designate the inhomogeneous system of differential equations
\[ x' = A x + g(t). \]
First we find homogeneous solutions. The characteristic equation for the matrix is
\[ \chi(\lambda) = \begin{vmatrix} 4-\lambda & -2 \\ 8 & -4-\lambda \end{vmatrix} = \lambda^2 = 0 \]
λ = 0 is an eigenvalue of multiplicity 2. The eigenvectors satisfy
\[ \begin{pmatrix} 4 & -2 \\ 8 & -4 \end{pmatrix} \begin{pmatrix} \xi_1 \\ \xi_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}. \]
Thus we see that there is only one linearly independent eigenvector. We choose
\[ \xi = \begin{pmatrix} 1 \\ 2 \end{pmatrix}. \]
One homogeneous solution is then
\[ x_1 = \begin{pmatrix} 1 \\ 2 \end{pmatrix} \mathrm{e}^{0t} = \begin{pmatrix} 1 \\ 2 \end{pmatrix}. \]
We look for a second homogeneous solution of the form
\[ x_2 = \xi t + \eta. \]
We substitute this into the homogeneous equation.
\[ x_2' = A x_2 \]
\[ \xi = A(\xi t + \eta) \]
We see that ξ and η satisfy
\[ A \xi = 0, \qquad A \eta = \xi. \]
We choose ξ to be the eigenvector that we found previously. The equation for η is then
\[ \begin{pmatrix} 4 & -2 \\ 8 & -4 \end{pmatrix} \begin{pmatrix} \eta_1 \\ \eta_2 \end{pmatrix} = \begin{pmatrix} 1 \\ 2 \end{pmatrix}. \]
η is determined up to an additive multiple of ξ. We choose
\[ \eta = \begin{pmatrix} 0 \\ -1/2 \end{pmatrix}. \]
Thus a second homogeneous solution is
\[ x_2 = \begin{pmatrix} 1 \\ 2 \end{pmatrix} t + \begin{pmatrix} 0 \\ -1/2 \end{pmatrix}. \]
The general homogeneous solution of the system is
\[ x_h = c_1 \begin{pmatrix} 1 \\ 2 \end{pmatrix} + c_2 \begin{pmatrix} t \\ 2t - 1/2 \end{pmatrix} \]
We can write this in matrix notation using the fundamental matrix Ψ(t).
\[ x_h = \Psi(t) c = \begin{pmatrix} 1 & t \\ 2 & 2t - 1/2 \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \end{pmatrix} \]
Homogeneous Solution, Method 2. The similarity transform C^{-1} A C with
\[ C = \begin{pmatrix} 1 & 0 \\ 2 & -1/2 \end{pmatrix} \]
will convert the matrix
\[ A = \begin{pmatrix} 4 & -2 \\ 8 & -4 \end{pmatrix} \]
to Jordan canonical form. We make the change of variables x = C y. The homogeneous system becomes
\[ \frac{dy}{dt} = \begin{pmatrix} 1 & 0 \\ 4 & -2 \end{pmatrix} \begin{pmatrix} 4 & -2 \\ 8 & -4 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 2 & -1/2 \end{pmatrix} y \]
\[ \begin{pmatrix} y_1' \\ y_2' \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} y_1 \\ y_2 \end{pmatrix} \]
The equation for y_2 is
\[ y_2' = 0, \qquad y_2 = c_2 \]
The equation for y_1 becomes
\[ y_1' = c_2, \qquad y_1 = c_1 + c_2 t \]
The solution for y is then
\[ y = c_1 \begin{pmatrix} 1 \\ 0 \end{pmatrix} + c_2 \begin{pmatrix} t \\ 1 \end{pmatrix}. \]
We multiply this by C to obtain the homogeneous solution for x.
\[ x_h = c_1 \begin{pmatrix} 1 \\ 2 \end{pmatrix} + c_2 \begin{pmatrix} t \\ 2t - 1/2 \end{pmatrix} \]
Inhomogeneous Solution. By the method of variation of parameters, a particular solution is
\[ x_p = \Psi(t) \int \Psi^{-1}(t)\, g(t)\,dt. \]
\[ x_p = \begin{pmatrix} 1 & t \\ 2 & 2t - 1/2 \end{pmatrix} \int \begin{pmatrix} 1-4t & 2t \\ 4 & -2 \end{pmatrix} \begin{pmatrix} t^{-3} \\ -t^{-2} \end{pmatrix} dt \]
\[ x_p = \begin{pmatrix} 1 & t \\ 2 & 2t - 1/2 \end{pmatrix} \int \begin{pmatrix} t^{-3} - 4t^{-2} - 2t^{-1} \\ 4t^{-3} + 2t^{-2} \end{pmatrix} dt \]
\[ x_p = \begin{pmatrix} 1 & t \\ 2 & 2t - 1/2 \end{pmatrix} \begin{pmatrix} -2\log t + 4t^{-1} - \tfrac{1}{2}t^{-2} \\ -2t^{-1} - 2t^{-2} \end{pmatrix} \]
\[ x_p = \begin{pmatrix} -2 - 2\log t + 2t^{-1} - \tfrac{1}{2}t^{-2} \\ -4 - 4\log t + 5t^{-1} \end{pmatrix} \]
By adding 2 times our first homogeneous solution, we obtain
\[ x_p = \begin{pmatrix} -2\log t + 2t^{-1} - \tfrac{1}{2}t^{-2} \\ -4\log t + 5t^{-1} \end{pmatrix} \]
The general solution of the system of differential equations is
\[ x = c_1 \begin{pmatrix} 1 \\ 2 \end{pmatrix} + c_2 \begin{pmatrix} t \\ 2t - 1/2 \end{pmatrix} + \begin{pmatrix} -2\log t + 2t^{-1} - \tfrac{1}{2}t^{-2} \\ -4\log t + 5t^{-1} \end{pmatrix} \]
Solution 15.4
We consider an initial value problem.
\[ x' = A x \equiv \begin{pmatrix} -2 & 1 \\ -5 & 4 \end{pmatrix} x, \qquad x(0) = x_0 \equiv \begin{pmatrix} 1 \\ 3 \end{pmatrix} \]
The Jordan canonical form of the matrix is
\[ J = \begin{pmatrix} -1 & 0 \\ 0 & 3 \end{pmatrix}. \]
The solution of the initial value problem is x = e^{At} x_0.
\[ x = \mathrm{e}^{A t} x_0 = S\, \mathrm{e}^{J t} S^{-1} x_0 = \begin{pmatrix} 1 & 1 \\ 1 & 5 \end{pmatrix} \begin{pmatrix} \mathrm{e}^{-t} & 0 \\ 0 & \mathrm{e}^{3t} \end{pmatrix} \frac{1}{4} \begin{pmatrix} 5 & -1 \\ -1 & 1 \end{pmatrix} \begin{pmatrix} 1 \\ 3 \end{pmatrix} = \frac{1}{2} \begin{pmatrix} \mathrm{e}^{-t} + \mathrm{e}^{3t} \\ \mathrm{e}^{-t} + 5\,\mathrm{e}^{3t} \end{pmatrix} \]
\[ x = \frac{1}{2} \begin{pmatrix} 1 \\ 1 \end{pmatrix} \mathrm{e}^{-t} + \frac{1}{2} \begin{pmatrix} 1 \\ 5 \end{pmatrix} \mathrm{e}^{3t} \]
Solution 15.5
We consider an initial value problem.
\[ x' = A x \equiv \begin{pmatrix} 1 & 1 & 2 \\ 0 & 2 & 2 \\ -1 & 1 & 3 \end{pmatrix} x, \qquad x(0) = x_0 \equiv \begin{pmatrix} 2 \\ 0 \\ 1 \end{pmatrix} \]
The Jordan canonical form of the matrix is
\[ J = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{pmatrix}. \]
The solution of the initial value problem is x = e^{At} x_0.
\[ x = \mathrm{e}^{A t} x_0 = S\, \mathrm{e}^{J t} S^{-1} x_0 = \begin{pmatrix} 0 & 1 & 2 \\ -2 & 1 & 2 \\ 1 & 0 & 1 \end{pmatrix} \begin{pmatrix} \mathrm{e}^{t} & 0 & 0 \\ 0 & \mathrm{e}^{2t} & 0 \\ 0 & 0 & \mathrm{e}^{3t} \end{pmatrix} \frac{1}{2} \begin{pmatrix} 1 & -1 & 0 \\ 4 & -2 & -4 \\ -1 & 1 & 2 \end{pmatrix} \begin{pmatrix} 2 \\ 0 \\ 1 \end{pmatrix} = \begin{pmatrix} 2\,\mathrm{e}^{2t} \\ -2\,\mathrm{e}^{t} + 2\,\mathrm{e}^{2t} \\ \mathrm{e}^{t} \end{pmatrix} \]
\[ x = \begin{pmatrix} 0 \\ -2 \\ 1 \end{pmatrix} \mathrm{e}^{t} + \begin{pmatrix} 2 \\ 2 \\ 0 \end{pmatrix} \mathrm{e}^{2t}. \]
Solution 15.6
We consider an initial value problem.
\[ x' = A x \equiv \begin{pmatrix} 1 & -5 \\ 1 & -3 \end{pmatrix} x, \qquad x(0) = x_0 \equiv \begin{pmatrix} 1 \\ 1 \end{pmatrix} \]
The Jordan canonical form of the matrix is
\[ J = \begin{pmatrix} -1-\imath & 0 \\ 0 & -1+\imath \end{pmatrix}. \]
The solution of the initial value problem is x = e^{At} x_0.
\[ x = \mathrm{e}^{A t} x_0 = S\, \mathrm{e}^{J t} S^{-1} x_0 = \begin{pmatrix} 2-\imath & 2+\imath \\ 1 & 1 \end{pmatrix} \begin{pmatrix} \mathrm{e}^{(-1-\imath)t} & 0 \\ 0 & \mathrm{e}^{(-1+\imath)t} \end{pmatrix} \frac{1}{2} \begin{pmatrix} \imath & 1 - \imath 2 \\ -\imath & 1 + \imath 2 \end{pmatrix} \begin{pmatrix} 1 \\ 1 \end{pmatrix} = \begin{pmatrix} (\cos(t) - 3\sin(t))\,\mathrm{e}^{-t} \\ (\cos(t) - \sin(t))\,\mathrm{e}^{-t} \end{pmatrix} \]
\[ x = \begin{pmatrix} 1 \\ 1 \end{pmatrix} \mathrm{e}^{-t} \cos(t) - \begin{pmatrix} 3 \\ 1 \end{pmatrix} \mathrm{e}^{-t} \sin(t) \]
Solution 15.7
We consider an initial value problem.
\[ x' = A x \equiv \begin{pmatrix} -3 & 0 & 2 \\ 1 & -1 & 0 \\ -2 & -1 & 0 \end{pmatrix} x, \qquad x(0) = x_0 \equiv \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} \]
The Jordan canonical form of the matrix is
\[ J = \begin{pmatrix} -2 & 0 & 0 \\ 0 & -1-\imath\sqrt{2} & 0 \\ 0 & 0 & -1+\imath\sqrt{2} \end{pmatrix}. \]
The solution of the initial value problem is x = e^{At} x_0.
\[ x = \mathrm{e}^{A t} x_0 = S\, \mathrm{e}^{J t} S^{-1} x_0 \]
\[ = \frac{1}{3} \begin{pmatrix} 6 & 2+\imath\sqrt{2} & 2-\imath\sqrt{2} \\ -6 & -1+\imath\sqrt{2} & -1-\imath\sqrt{2} \\ 3 & 3 & 3 \end{pmatrix} \begin{pmatrix} \mathrm{e}^{-2t} & 0 & 0 \\ 0 & \mathrm{e}^{(-1-\imath\sqrt{2})t} & 0 \\ 0 & 0 & \mathrm{e}^{(-1+\imath\sqrt{2})t} \end{pmatrix} \frac{1}{6} \begin{pmatrix} 2 & -2 & -2 \\ -1 - \imath 5\sqrt{2}/2 & 1 - \imath 2\sqrt{2} & 4 + \imath\sqrt{2} \\ -1 + \imath 5\sqrt{2}/2 & 1 + \imath 2\sqrt{2} & 4 - \imath\sqrt{2} \end{pmatrix} \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} \]
\[ x = \frac{1}{3} \begin{pmatrix} 2 \\ -2 \\ 1 \end{pmatrix} \mathrm{e}^{-2t} + \frac{1}{6} \begin{pmatrix} 2\cos(\sqrt{2}t) - 4\sqrt{2}\sin(\sqrt{2}t) \\ 4\cos(\sqrt{2}t) + \sqrt{2}\sin(\sqrt{2}t) \\ -2\cos(\sqrt{2}t) - 5\sqrt{2}\sin(\sqrt{2}t) \end{pmatrix} \mathrm{e}^{-t}. \]
Solution 15.8
We consider an initial value problem.
\[ x' = A x \equiv \begin{pmatrix} 1 & -4 \\ 4 & -7 \end{pmatrix} x, \qquad x(0) = x_0 \equiv \begin{pmatrix} 3 \\ 2 \end{pmatrix} \]
Method 1. Find Homogeneous Solutions. The matrix has the double eigenvalue λ_1 = λ_2 = −3. There is only one corresponding eigenvector. We compute a chain of generalized eigenvectors.
\[ (A + 3I)^2 x_2 = 0, \qquad 0 \cdot x_2 = 0, \qquad x_2 = \begin{pmatrix} 1 \\ 0 \end{pmatrix} \]
\[ (A + 3I) x_2 = x_1, \qquad x_1 = \begin{pmatrix} 4 \\ 4 \end{pmatrix} \]
The general solution of the system of differential equations is
\[ x = c_1 \begin{pmatrix} 1 \\ 1 \end{pmatrix} \mathrm{e}^{-3t} + c_2 \left[ \begin{pmatrix} 4 \\ 4 \end{pmatrix} t + \begin{pmatrix} 1 \\ 0 \end{pmatrix} \right] \mathrm{e}^{-3t}. \]
We apply the initial condition to determine the constants.
\[ \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \begin{pmatrix} 3 \\ 2 \end{pmatrix}, \qquad c_1 = 2, \quad c_2 = 1 \]
The solution subject to the initial condition is
\[ x = \begin{pmatrix} 3 + 4t \\ 2 + 4t \end{pmatrix} \mathrm{e}^{-3t}. \]
Both coordinates tend to zero as t → ∞.

Method 2. Use the Exponential Matrix. The Jordan canonical form of the matrix is
\[ J = \begin{pmatrix} -3 & 1 \\ 0 & -3 \end{pmatrix}. \]
The solution of the initial value problem is x = e^{At} x_0.
\[ x = \mathrm{e}^{A t} x_0 = S\, \mathrm{e}^{J t} S^{-1} x_0 = \begin{pmatrix} 1 & 1/4 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} \mathrm{e}^{-3t} & t\,\mathrm{e}^{-3t} \\ 0 & \mathrm{e}^{-3t} \end{pmatrix} \begin{pmatrix} 0 & 1 \\ 4 & -4 \end{pmatrix} \begin{pmatrix} 3 \\ 2 \end{pmatrix} \]
\[ x = \begin{pmatrix} 3 + 4t \\ 2 + 4t \end{pmatrix} \mathrm{e}^{-3t}. \]
Solution 15.9
We consider an initial value problem.
\[ x' = A x \equiv \begin{pmatrix} -1 & 0 & 0 \\ -4 & 1 & 0 \\ 3 & 6 & 2 \end{pmatrix} x, \qquad x(0) = x_0 \equiv \begin{pmatrix} -1 \\ 2 \\ -30 \end{pmatrix} \]
Method 1. Find Homogeneous Solutions. The matrix has the distinct eigenvalues λ_1 = −1, λ_2 = 1, λ_3 = 2. The corresponding eigenvectors are
\[ \xi_1 = \begin{pmatrix} -1 \\ -2 \\ 5 \end{pmatrix}, \qquad \xi_2 = \begin{pmatrix} 0 \\ -1 \\ 6 \end{pmatrix}, \qquad \xi_3 = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}. \]
The general solution of the system of differential equations is
\[ x = c_1 \begin{pmatrix} -1 \\ -2 \\ 5 \end{pmatrix} \mathrm{e}^{-t} + c_2 \begin{pmatrix} 0 \\ -1 \\ 6 \end{pmatrix} \mathrm{e}^{t} + c_3 \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} \mathrm{e}^{2t}. \]
We apply the initial condition to determine the constants.
\[ \begin{pmatrix} -1 & 0 & 0 \\ -2 & -1 & 0 \\ 5 & 6 & 1 \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \\ c_3 \end{pmatrix} = \begin{pmatrix} -1 \\ 2 \\ -30 \end{pmatrix}, \qquad c_1 = 1, \quad c_2 = -4, \quad c_3 = -11 \]
The solution subject to the initial condition is
\[ x = \begin{pmatrix} -1 \\ -2 \\ 5 \end{pmatrix} \mathrm{e}^{-t} - 4 \begin{pmatrix} 0 \\ -1 \\ 6 \end{pmatrix} \mathrm{e}^{t} - 11 \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} \mathrm{e}^{2t}. \]
As t → ∞, the first coordinate vanishes, the second coordinate tends to ∞ and the third coordinate tends to −∞.

Method 2. Use the Exponential Matrix. The Jordan canonical form of the matrix is
\[ J = \begin{pmatrix} -1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 2 \end{pmatrix}. \]
The solution of the initial value problem is x = e^{At} x_0.
\[ x = \mathrm{e}^{A t} x_0 = S\, \mathrm{e}^{J t} S^{-1} x_0 = \begin{pmatrix} -1 & 0 & 0 \\ -2 & -1 & 0 \\ 5 & 6 & 1 \end{pmatrix} \begin{pmatrix} \mathrm{e}^{-t} & 0 & 0 \\ 0 & \mathrm{e}^{t} & 0 \\ 0 & 0 & \mathrm{e}^{2t} \end{pmatrix} \begin{pmatrix} -1 & 0 & 0 \\ 2 & -1 & 0 \\ -7 & 6 & 1 \end{pmatrix} \begin{pmatrix} -1 \\ 2 \\ -30 \end{pmatrix} \]
\[ x = \begin{pmatrix} -1 \\ -2 \\ 5 \end{pmatrix} \mathrm{e}^{-t} - 4 \begin{pmatrix} 0 \\ -1 \\ 6 \end{pmatrix} \mathrm{e}^{t} - 11 \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} \mathrm{e}^{2t}. \]
Solution 15.10
1. (a) We compute the eigenvalues of the matrix.
\[ \chi(\lambda) = \begin{vmatrix} 1-\lambda & 1 & 1 \\ 2 & 1-\lambda & -1 \\ -3 & 2 & 4-\lambda \end{vmatrix} = -\lambda^3 + 6\lambda^2 - 12\lambda + 8 = -(\lambda-2)^3 \]
λ = 2 is an eigenvalue of multiplicity 3. The rank of the null space of A − 2I is 1. (The first two rows are linearly independent, but the third is a linear combination of the first two.)
\[ A - 2I = \begin{pmatrix} -1 & 1 & 1 \\ 2 & -1 & -1 \\ -3 & 2 & 2 \end{pmatrix} \]
Thus there is only one eigenvector.
\[ \begin{pmatrix} -1 & 1 & 1 \\ 2 & -1 & -1 \\ -3 & 2 & 2 \end{pmatrix} \begin{pmatrix} \xi_1 \\ \xi_2 \\ \xi_3 \end{pmatrix} = 0, \qquad \xi^{(1)} = \begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix} \]
(b) One solution of the system of differential equations is
\[ x^{(1)} = \begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix} \mathrm{e}^{2t}. \]
(c) We substitute the form x = ξ t e^{2t} + η e^{2t} into the differential equation.
\[ x' = A x \]
\[ \xi\, \mathrm{e}^{2t} + 2\xi t\, \mathrm{e}^{2t} + 2\eta\, \mathrm{e}^{2t} = A \xi t\, \mathrm{e}^{2t} + A \eta\, \mathrm{e}^{2t} \]
\[ (A - 2I)\xi = 0, \qquad (A - 2I)\eta = \xi \]
We already have a solution of the first equation; we need the generalized eigenvector η. Note that η is only determined up to a constant times ξ. Thus we look for the solution whose second component vanishes to simplify the algebra.
\[ \begin{pmatrix} -1 & 1 & 1 \\ 2 & -1 & -1 \\ -3 & 2 & 2 \end{pmatrix} \begin{pmatrix} \eta_1 \\ 0 \\ \eta_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix} \]
\[ -\eta_1 + \eta_3 = 0, \qquad 2\eta_1 - \eta_3 = 1, \qquad -3\eta_1 + 2\eta_3 = -1 \]
\[ \eta = \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix} \]
A second linearly independent solution is
\[ x^{(2)} = \begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix} t\, \mathrm{e}^{2t} + \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix} \mathrm{e}^{2t}. \]
(d) To find a third solution we substitute the form x = ξ (t²/2) e^{2t} + η t e^{2t} + ζ e^{2t} into the differential equation.
\[ x' = A x \]
\[ 2\xi (t^2/2)\, \mathrm{e}^{2t} + (\xi + 2\eta) t\, \mathrm{e}^{2t} + (\eta + 2\zeta)\, \mathrm{e}^{2t} = A \xi (t^2/2)\, \mathrm{e}^{2t} + A \eta t\, \mathrm{e}^{2t} + A \zeta\, \mathrm{e}^{2t} \]
\[ (A - 2I)\xi = 0, \qquad (A - 2I)\eta = \xi, \qquad (A - 2I)\zeta = \eta \]
We have already solved the first two equations; we need the generalized eigenvector ζ. Note that ζ is only determined up to a constant times ξ. Thus we look for the solution whose second component vanishes to simplify the algebra.
\[ \begin{pmatrix} -1 & 1 & 1 \\ 2 & -1 & -1 \\ -3 & 2 & 2 \end{pmatrix} \begin{pmatrix} \zeta_1 \\ 0 \\ \zeta_3 \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix} \]
\[ -\zeta_1 + \zeta_3 = 1, \qquad 2\zeta_1 - \zeta_3 = 0, \qquad -3\zeta_1 + 2\zeta_3 = 1 \]
\[ \zeta = \begin{pmatrix} 1 \\ 0 \\ 2 \end{pmatrix} \]
A third linearly independent solution is
\[ x^{(3)} = \begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix} (t^2/2)\, \mathrm{e}^{2t} + \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix} t\, \mathrm{e}^{2t} + \begin{pmatrix} 1 \\ 0 \\ 2 \end{pmatrix} \mathrm{e}^{2t} \]

2. (a) We compute the eigenvalues of the matrix.
\[ \chi(\lambda) = \begin{vmatrix} 5-\lambda & -3 & -2 \\ 8 & -5-\lambda & -4 \\ -4 & 3 & 3-\lambda \end{vmatrix} = -\lambda^3 + 3\lambda^2 - 3\lambda + 1 = -(\lambda-1)^3 \]
λ = 1 is an eigenvalue of multiplicity 3. The rank of the null space of A − I is 2. (The second and third rows are multiples of the first.)
\[ A - I = \begin{pmatrix} 4 & -3 & -2 \\ 8 & -6 & -4 \\ -4 & 3 & 2 \end{pmatrix} \]
Thus there are two eigenvectors.
\[ \begin{pmatrix} 4 & -3 & -2 \\ 8 & -6 & -4 \\ -4 & 3 & 2 \end{pmatrix} \begin{pmatrix} \xi_1 \\ \xi_2 \\ \xi_3 \end{pmatrix} = 0, \qquad \xi^{(1)} = \begin{pmatrix} 1 \\ 0 \\ 2 \end{pmatrix}, \quad \xi^{(2)} = \begin{pmatrix} 0 \\ 2 \\ -3 \end{pmatrix} \]
Two linearly independent solutions of the differential equation are
\[ x^{(1)} = \begin{pmatrix} 1 \\ 0 \\ 2 \end{pmatrix} \mathrm{e}^{t}, \qquad x^{(2)} = \begin{pmatrix} 0 \\ 2 \\ -3 \end{pmatrix} \mathrm{e}^{t}. \]
(b) We substitute the form x = ξ t e^{t} + η e^{t} into the differential equation.
\[ x' = A x \]
\[ \xi\, \mathrm{e}^{t} + \xi t\, \mathrm{e}^{t} + \eta\, \mathrm{e}^{t} = A \xi t\, \mathrm{e}^{t} + A \eta\, \mathrm{e}^{t} \]
\[ (A - I)\xi = 0, \qquad (A - I)\eta = \xi \]
The general solution of the first equation is a linear combination of the two solutions we found in the previous part.
\[ \xi = c_1 \xi^{(1)} + c_2 \xi^{(2)} \]
Now we find the generalized eigenvector, η. Note that η is only determined up to a linear combination of ξ^{(1)} and ξ^{(2)}. Thus we can take the first two components of η to be zero.
\[ \begin{pmatrix} 4 & -3 & -2 \\ 8 & -6 & -4 \\ -4 & 3 & 2 \end{pmatrix} \begin{pmatrix} 0 \\ 0 \\ \eta_3 \end{pmatrix} = c_1 \begin{pmatrix} 1 \\ 0 \\ 2 \end{pmatrix} + c_2 \begin{pmatrix} 0 \\ 2 \\ -3 \end{pmatrix} \]
\[ -2\eta_3 = c_1, \qquad -4\eta_3 = 2c_2, \qquad 2\eta_3 = 2c_1 - 3c_2 \]
\[ c_1 = c_2, \qquad \eta_3 = -\frac{c_1}{2} \]
We see that we must take c_1 = c_2 in order to obtain a solution. We choose c_1 = c_2 = 2. A third linearly independent solution of the differential equation is
\[ x^{(3)} = \begin{pmatrix} 2 \\ 4 \\ -2 \end{pmatrix} t\, \mathrm{e}^{t} + \begin{pmatrix} 0 \\ 0 \\ -1 \end{pmatrix} \mathrm{e}^{t}. \]
Solution 15.11
1. The characteristic polynomial of the matrix is
\[ \chi(\lambda) = \begin{vmatrix} 1-\lambda & 1 & 1 \\ 2 & 1-\lambda & -1 \\ -8 & -5 & -3-\lambda \end{vmatrix} = (1-\lambda)^2(-3-\lambda) + 8 - 10 - 5(1-\lambda) - 2(-3-\lambda) - 8(1-\lambda) \]
\[ = -\lambda^3 - \lambda^2 + 4\lambda + 4 = -(\lambda+2)(\lambda+1)(\lambda-2) \]
Thus we see that the eigenvalues are λ = −2, −1, 2. The eigenvectors ξ satisfy
\[ (A - \lambda I)\xi = 0. \]
For λ = −2, we have
\[ (A + 2I)\xi = 0, \qquad \begin{pmatrix} 3 & 1 & 1 \\ 2 & 3 & -1 \\ -8 & -5 & -1 \end{pmatrix} \begin{pmatrix} \xi_1 \\ \xi_2 \\ \xi_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix} \]
If we take ξ_3 = 1 then the first two rows give us the system,
\[ \begin{pmatrix} 3 & 1 \\ 2 & 3 \end{pmatrix} \begin{pmatrix} \xi_1 \\ \xi_2 \end{pmatrix} = \begin{pmatrix} -1 \\ 1 \end{pmatrix} \]
which has the solution ξ_1 = −4/7, ξ_2 = 5/7. For the first eigenvector we choose:
\[ \xi = \begin{pmatrix} -4 \\ 5 \\ 7 \end{pmatrix} \]
For λ = −1, we have
\[ (A + I)\xi = 0, \qquad \begin{pmatrix} 2 & 1 & 1 \\ 2 & 2 & -1 \\ -8 & -5 & -2 \end{pmatrix} \begin{pmatrix} \xi_1 \\ \xi_2 \\ \xi_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix} \]
If we take ξ_3 = 1 then the first two rows give us the system,
\[ \begin{pmatrix} 2 & 1 \\ 2 & 2 \end{pmatrix} \begin{pmatrix} \xi_1 \\ \xi_2 \end{pmatrix} = \begin{pmatrix} -1 \\ 1 \end{pmatrix} \]
which has the solution ξ_1 = −3/2, ξ_2 = 2. For the second eigenvector we choose:
\[ \xi = \begin{pmatrix} -3 \\ 4 \\ 2 \end{pmatrix} \]
For λ = 2, we have
\[ (A - 2I)\xi = 0, \qquad \begin{pmatrix} -1 & 1 & 1 \\ 2 & -1 & -1 \\ -8 & -5 & -5 \end{pmatrix} \begin{pmatrix} \xi_1 \\ \xi_2 \\ \xi_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix} \]
If we take ξ_3 = 1 then the first two rows give us the system,
\[ \begin{pmatrix} -1 & 1 \\ 2 & -1 \end{pmatrix} \begin{pmatrix} \xi_1 \\ \xi_2 \end{pmatrix} = \begin{pmatrix} -1 \\ 1 \end{pmatrix} \]
which has the solution ξ_1 = 0, ξ_2 = −1. For the third eigenvector we choose:
\[ \xi = \begin{pmatrix} 0 \\ -1 \\ 1 \end{pmatrix} \]
In summary, the eigenvalues and eigenvectors are
\[ \lambda = \{-2, -1, 2\}, \qquad \xi = \left\{ \begin{pmatrix} -4 \\ 5 \\ 7 \end{pmatrix}, \begin{pmatrix} -3 \\ 4 \\ 2 \end{pmatrix}, \begin{pmatrix} 0 \\ -1 \\ 1 \end{pmatrix} \right\} \]

2. The matrix is diagonalized with the similarity transformation
\[ J = S^{-1} A S, \]
where S is the matrix with eigenvectors as columns:
\[ S = \begin{pmatrix} -4 & -3 & 0 \\ 5 & 4 & -1 \\ 7 & 2 & 1 \end{pmatrix} \]
The matrix exponential, e^{At}, is given by
\[ \mathrm{e}^{A t} = S\, \mathrm{e}^{J t} S^{-1} = \begin{pmatrix} -4 & -3 & 0 \\ 5 & 4 & -1 \\ 7 & 2 & 1 \end{pmatrix} \begin{pmatrix} \mathrm{e}^{-2t} & 0 & 0 \\ 0 & \mathrm{e}^{-t} & 0 \\ 0 & 0 & \mathrm{e}^{2t} \end{pmatrix} \frac{1}{12} \begin{pmatrix} 6 & 3 & 3 \\ -12 & -4 & -4 \\ -18 & -13 & -1 \end{pmatrix}. \]
\[ \mathrm{e}^{A t} = \begin{pmatrix} -2\,\mathrm{e}^{-2t} + 3\,\mathrm{e}^{-t} & -\mathrm{e}^{-2t} + \mathrm{e}^{-t} & -\mathrm{e}^{-2t} + \mathrm{e}^{-t} \\[4pt] \dfrac{5\,\mathrm{e}^{-2t} - 8\,\mathrm{e}^{-t} + 3\,\mathrm{e}^{2t}}{2} & \dfrac{15\,\mathrm{e}^{-2t} - 16\,\mathrm{e}^{-t} + 13\,\mathrm{e}^{2t}}{12} & \dfrac{15\,\mathrm{e}^{-2t} - 16\,\mathrm{e}^{-t} + \mathrm{e}^{2t}}{12} \\[4pt] \dfrac{7\,\mathrm{e}^{-2t} - 4\,\mathrm{e}^{-t} - 3\,\mathrm{e}^{2t}}{2} & \dfrac{21\,\mathrm{e}^{-2t} - 8\,\mathrm{e}^{-t} - 13\,\mathrm{e}^{2t}}{12} & \dfrac{21\,\mathrm{e}^{-2t} - 8\,\mathrm{e}^{-t} - \mathrm{e}^{2t}}{12} \end{pmatrix} \]
The solution of the initial value problem is e^{At} x_0.

3. The general solution of the Euler equation is
\[ x = c_1 \begin{pmatrix} -4 \\ 5 \\ 7 \end{pmatrix} t^{-2} + c_2 \begin{pmatrix} -3 \\ 4 \\ 2 \end{pmatrix} t^{-1} + c_3 \begin{pmatrix} 0 \\ -1 \\ 1 \end{pmatrix} t^{2}. \]
We could also write the solution as
\[ x = t^{A} c \equiv \mathrm{e}^{A \log t} c. \]
Solution 15.12
1. The characteristic polynomial of the matrix is
\[ \chi(\lambda) = \begin{vmatrix} 2-\lambda & 0 & 1 \\ 0 & 2-\lambda & 0 \\ 0 & 1 & 3-\lambda \end{vmatrix} = (2-\lambda)^2 (3-\lambda) \]
Thus we see that the eigenvalues are λ = 2, 2, 3. Consider
\[ A - 2I = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ 0 & 1 & 1 \end{pmatrix}. \]
Since rank(nullspace(A − 2I)) = 1 there is one eigenvector and one generalized eigenvector of rank two for λ = 2. The generalized eigenvector of rank two satisfies
\[ (A - 2I)^2 \xi_2 = 0, \qquad \begin{pmatrix} 0 & 1 & 1 \\ 0 & 0 & 0 \\ 0 & 1 & 1 \end{pmatrix} \xi_2 = 0 \]
We choose the solution
\[ \xi_2 = \begin{pmatrix} 0 \\ -1 \\ 1 \end{pmatrix}. \]
The eigenvector for λ = 2 is
\[ \xi_1 = (A - 2I)\xi_2 = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}. \]
The eigenvector for λ = 3 satisfies
\[ (A - 3I)\xi = 0, \qquad \begin{pmatrix} -1 & 0 & 1 \\ 0 & -1 & 0 \\ 0 & 1 & 0 \end{pmatrix} \xi = 0 \]
We choose the solution
\[ \xi = \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}. \]
The eigenvalues and generalized eigenvectors are
\[ \lambda = \{2, 2, 3\}, \qquad \xi = \left\{ \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ -1 \\ 1 \end{pmatrix}, \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix} \right\}. \]
The matrix of eigenvectors and its inverse is
\[ S = \begin{pmatrix} 1 & 0 & 1 \\ 0 & -1 & 0 \\ 0 & 1 & 1 \end{pmatrix}, \qquad S^{-1} = \begin{pmatrix} 1 & -1 & -1 \\ 0 & -1 & 0 \\ 0 & 1 & 1 \end{pmatrix}. \]
The Jordan canonical form of the matrix, which satisfies J = S^{-1} A S, is
\[ J = \begin{pmatrix} 2 & 1 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{pmatrix} \]
Recall that the function of a Jordan block is:
\[ f\left( \begin{pmatrix} \lambda & 1 & 0 & 0 \\ 0 & \lambda & 1 & 0 \\ 0 & 0 & \lambda & 1 \\ 0 & 0 & 0 & \lambda \end{pmatrix} \right) = \begin{pmatrix} f(\lambda) & \frac{f'(\lambda)}{1!} & \frac{f''(\lambda)}{2!} & \frac{f'''(\lambda)}{3!} \\ 0 & f(\lambda) & \frac{f'(\lambda)}{1!} & \frac{f''(\lambda)}{2!} \\ 0 & 0 & f(\lambda) & \frac{f'(\lambda)}{1!} \\ 0 & 0 & 0 & f(\lambda) \end{pmatrix}, \]
and that the function of a matrix in Jordan canonical form is
\[ f\left( \begin{pmatrix} J_1 & 0 & 0 & 0 \\ 0 & J_2 & 0 & 0 \\ 0 & 0 & J_3 & 0 \\ 0 & 0 & 0 & J_4 \end{pmatrix} \right) = \begin{pmatrix} f(J_1) & 0 & 0 & 0 \\ 0 & f(J_2) & 0 & 0 \\ 0 & 0 & f(J_3) & 0 \\ 0 & 0 & 0 & f(J_4) \end{pmatrix}. \]
We want to compute e^{Jt} so we consider the function f(λ) = e^{λt}, which has the derivative f'(λ) = t e^{λt}. Thus we see that
\[ \mathrm{e}^{J t} = \begin{pmatrix} \mathrm{e}^{2t} & t\,\mathrm{e}^{2t} & 0 \\ 0 & \mathrm{e}^{2t} & 0 \\ 0 & 0 & \mathrm{e}^{3t} \end{pmatrix} \]
The exponential matrix is
\[ \mathrm{e}^{A t} = S\, \mathrm{e}^{J t} S^{-1} = \begin{pmatrix} \mathrm{e}^{2t} & -(1+t)\,\mathrm{e}^{2t} + \mathrm{e}^{3t} & -\mathrm{e}^{2t} + \mathrm{e}^{3t} \\ 0 & \mathrm{e}^{2t} & 0 \\ 0 & -\mathrm{e}^{2t} + \mathrm{e}^{3t} & \mathrm{e}^{3t} \end{pmatrix}. \]
The general solution of the homogeneous differential equation is
\[ x = \mathrm{e}^{A t} c. \]

2. The solution of the inhomogeneous differential equation subject to the initial condition is
\[ x = \mathrm{e}^{A t} \cdot 0 + \mathrm{e}^{A t} \int_0^t \mathrm{e}^{-A \tau} g(\tau)\,d\tau \]
\[ x = \mathrm{e}^{A t} \int_0^t \mathrm{e}^{-A \tau} g(\tau)\,d\tau \]
Solution 15.13
1.
\[ \frac{dx}{dt} = \frac{1}{t} A x, \qquad t \begin{pmatrix} x_1' \\ x_2' \end{pmatrix} = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \]
The first component of this equation is
\[ t x_1' = a x_1 + b x_2. \]
We differentiate and multiply by t to obtain a second order coupled equation for x_1. We use (15.4) to eliminate the dependence on x_2.
\[ t^2 x_1'' + t x_1' = a t x_1' + b t x_2' \]
\[ t^2 x_1'' + (1-a) t x_1' = b(c x_1 + d x_2) \]
\[ t^2 x_1'' + (1-a) t x_1' - b c x_1 = d(t x_1' - a x_1) \]
\[ t^2 x_1'' + (1 - a - d) t x_1' + (a d - b c) x_1 = 0 \]
Thus we see that x_1 satisfies a second order, Euler equation. By symmetry we see that x_2 satisfies the same Euler equation,
\[ t^2 x_2'' + (1 - a - d) t x_2' + (a d - b c) x_2 = 0. \]

2. We substitute x = a t^λ into (15.4).
\[ \lambda a t^{\lambda-1} = \frac{1}{t} A a t^{\lambda} \]
\[ A a = \lambda a \]
Thus we see that x = a t^λ is a solution if λ is an eigenvalue of A with eigenvector a.

3. Suppose that λ = α is an eigenvalue of multiplicity 2. If λ = α has two linearly independent eigenvectors, a and b, then a t^α and b t^α are linearly independent solutions. If λ = α has only one linearly independent eigenvector, a, then a t^α is a solution. We look for a second solution of the form
\[ x = \xi t^{\alpha} \log t + \eta t^{\alpha}. \]
Substituting this into the differential equation yields
\[ \alpha \xi t^{\alpha-1} \log t + \xi t^{\alpha-1} + \alpha \eta t^{\alpha-1} = A \xi t^{\alpha-1} \log t + A \eta t^{\alpha-1} \]
We equate coefficients of t^{α−1} log t and t^{α−1} to determine ξ and η.
\[ (A - \alpha I)\xi = 0, \qquad (A - \alpha I)\eta = \xi \]
These equations have solutions because λ = α has generalized eigenvectors of first and second order.

Note that the change of independent variable τ = log t, y(τ) = x(t), will transform (15.4) into a constant coefficient system.
\[ \frac{dy}{d\tau} = A y \]
Thus all the methods for solving constant coefficient systems carry over directly to solving (15.4). In the case of eigenvalues with multiplicity greater than one, we will have solutions of the form,
\[ \xi t^{\alpha}, \qquad \xi t^{\alpha} \log t + \eta t^{\alpha}, \qquad \xi t^{\alpha} (\log t)^2 + \eta t^{\alpha} \log t + \zeta t^{\alpha}, \qquad \ldots, \]
analogous to the form of the solutions for a constant coefficient system,
\[ \xi\,\mathrm{e}^{\alpha\tau}, \qquad \xi\tau\,\mathrm{e}^{\alpha\tau} + \eta\,\mathrm{e}^{\alpha\tau}, \qquad \xi\tau^2\,\mathrm{e}^{\alpha\tau} + \eta\tau\,\mathrm{e}^{\alpha\tau} + \zeta\,\mathrm{e}^{\alpha\tau}, \qquad \ldots. \]

4. Method 1. Now we consider
\[ \frac{dx}{dt} = \frac{1}{t} \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix} x. \]
The characteristic polynomial of the matrix is
\[ \chi(\lambda) = \begin{vmatrix} 1-\lambda & 0 \\ 1 & 1-\lambda \end{vmatrix} = (1-\lambda)^2. \]
λ = 1 is an eigenvalue of multiplicity 2. The equation for the associated eigenvectors is
\[ \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} \xi_1 \\ \xi_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}. \]
There is only one linearly independent eigenvector, which we choose to be
\[ a = \begin{pmatrix} 0 \\ 1 \end{pmatrix}. \]
One solution of the differential equation is
\[ x_1 = \begin{pmatrix} 0 \\ 1 \end{pmatrix} t. \]
We look for a second solution of the form
\[ x_2 = a t \log t + \eta t. \]
η satisfies the equation
\[ (A - I)\eta = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} \eta = \begin{pmatrix} 0 \\ 1 \end{pmatrix}. \]
The solution is determined only up to an additive multiple of a. We choose
\[ \eta = \begin{pmatrix} 1 \\ 0 \end{pmatrix}. \]
Thus a second linearly independent solution is
\[ x_2 = \begin{pmatrix} 0 \\ 1 \end{pmatrix} t \log t + \begin{pmatrix} 1 \\ 0 \end{pmatrix} t. \]
The general solution of the differential equation is
\[ x = c_1 \begin{pmatrix} 0 \\ 1 \end{pmatrix} t + c_2 \left[ \begin{pmatrix} 0 \\ 1 \end{pmatrix} t \log t + \begin{pmatrix} 1 \\ 0 \end{pmatrix} t \right]. \]
Method 2. Note that the matrix is lower triangular.
\[ \begin{pmatrix} x_1' \\ x_2' \end{pmatrix} = \frac{1}{t} \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \tag{15.5} \]
We have an uncoupled equation for x_1.
\[ x_1' = \frac{1}{t} x_1, \qquad x_1 = c_1 t \]
By substituting the solution for x_1 into (15.5), we obtain an uncoupled equation for x_2.
\[ x_2' = \frac{1}{t}(c_1 t + x_2) \]
\[ x_2' - \frac{1}{t} x_2 = c_1 \]
\[ \left( \frac{1}{t} x_2 \right)' = \frac{c_1}{t} \]
\[ \frac{1}{t} x_2 = c_1 \log t + c_2 \]
\[ x_2 = c_1 t \log t + c_2 t \]
Thus the solution of the system is
\[ x = \begin{pmatrix} c_1 t \\ c_1 t \log t + c_2 t \end{pmatrix} = c_1 \begin{pmatrix} t \\ t \log t \end{pmatrix} + c_2 \begin{pmatrix} 0 \\ t \end{pmatrix}, \]
which is equivalent to the solution we obtained previously.
Chapter 16
Theory of Linear Ordinary
Differential Equations
A little partyin’ is good for the soul.
-Matt Metz
16.1 Exact Equations
Exercise 16.1
Consider a second order, linear, homogeneous differential equation:
\[ P(x) y'' + Q(x) y' + R(x) y = 0. \tag{16.1} \]
Show that P'' − Q' + R = 0 is a necessary and sufficient condition for this equation to be exact.
Hint, Solution
Exercise 16.2
Determine an equation for the integrating factor µ(x) for Equation 16.1.
Hint, Solution
Exercise 16.3
Show that
\[ y'' + x y' + y = 0 \]
is exact. Find the solution.
Hint, Solution
16.2 Nature of Solutions
Result 16.2.1 Consider the nth order ordinary differential equation of the form
\[ L[y] = \frac{d^n y}{dx^n} + p_{n-1}(x) \frac{d^{n-1} y}{dx^{n-1}} + \cdots + p_1(x) \frac{dy}{dx} + p_0(x) y = f(x). \tag{16.2} \]
If the coefficient functions p_{n−1}(x), . . . , p_0(x) and the inhomogeneity f(x) are continuous on some interval a < x < b then the differential equation subject to the conditions,
\[ y(x_0) = v_0, \quad y'(x_0) = v_1, \quad \ldots, \quad y^{(n-1)}(x_0) = v_{n-1}, \qquad a < x_0 < b, \]
has a unique solution on the interval.
Exercise 16.4
On what intervals do the following problems have unique solutions?
1. \( x y'' + 3y = x \)
2. \( x(x-1) y'' + 3x y' + 4y = 2 \)
3. \( \mathrm{e}^{x} y'' + x^2 y' + y = \tan x \)
Hint, Solution
Linearity of the Operator. The differential operator L is linear. To verify this,
\[ L[cy] = \frac{d^n}{dx^n}(cy) + p_{n-1}(x) \frac{d^{n-1}}{dx^{n-1}}(cy) + \cdots + p_1(x) \frac{d}{dx}(cy) + p_0(x)(cy) \]
\[ = c \frac{d^n}{dx^n} y + c\,p_{n-1}(x) \frac{d^{n-1}}{dx^{n-1}} y + \cdots + c\,p_1(x) \frac{d}{dx} y + c\,p_0(x) y = c L[y] \]
\[ L[y_1 + y_2] = \frac{d^n}{dx^n}(y_1 + y_2) + p_{n-1}(x) \frac{d^{n-1}}{dx^{n-1}}(y_1 + y_2) + \cdots + p_1(x) \frac{d}{dx}(y_1 + y_2) + p_0(x)(y_1 + y_2) \]
\[ = \left[ \frac{d^n}{dx^n} y_1 + p_{n-1}(x) \frac{d^{n-1}}{dx^{n-1}} y_1 + \cdots + p_1(x) \frac{d}{dx} y_1 + p_0(x) y_1 \right] + \left[ \frac{d^n}{dx^n} y_2 + p_{n-1}(x) \frac{d^{n-1}}{dx^{n-1}} y_2 + \cdots + p_1(x) \frac{d}{dx} y_2 + p_0(x) y_2 \right] \]
\[ = L[y_1] + L[y_2]. \]
Homogeneous Solutions. The general homogeneous equation has the form
\[ L[y] = \frac{d^n y}{dx^n} + p_{n-1}(x) \frac{d^{n-1} y}{dx^{n-1}} + \cdots + p_1(x) \frac{dy}{dx} + p_0(x) y = 0. \]
From the linearity of L, we see that if y_1 and y_2 are solutions to the homogeneous equation then c_1 y_1 + c_2 y_2 is also a solution, (L[c_1 y_1 + c_2 y_2] = 0).
On any interval where the coefficient functions are continuous, the nth order linear homogeneous equation has n linearly independent solutions, y_1, y_2, . . . , y_n. (We will study linear independence in Section 16.4.) The general solution to the homogeneous problem is then
\[ y_h = c_1 y_1 + c_2 y_2 + \cdots + c_n y_n. \]
Particular Solutions. Any function, yp, that satisfies the inhomogeneous equation, L[yp] = f(x),
is called a particular solution or particular integral of the equation. Note that for linear differential
equations the particular solution is not unique. If yp is a particular solution then yp + yh is also a
particular solution where yh is any homogeneous solution.
The general solution to the problem L[y] = f(x) is the sum of a particular solution and a linear
combination of the homogeneous solutions
y = yp + c1y1 + · · · + cnyn.
Example 16.2.1 Consider the differential equation
\[ y'' - y' = 1. \]
You can verify that two homogeneous solutions are e^x and 1. A particular solution is −x. Thus the general solution is
\[ y = -x + c_1\, \mathrm{e}^{x} + c_2. \]
Exercise 16.5
Suppose you are able to find three linearly independent particular solutions u1(x), u2(x) and u3(x)
of the second order linear differential equation L[y] = f(x). What is the general solution?
Hint, Solution
Real-Valued Solutions. If the coefficient functions and the inhomogeneity in Equation 16.2 are real-valued, then the general solution can be written in terms of real-valued functions. Let y be any homogeneous solution, (perhaps complex-valued). By taking the complex conjugate of the equation L[y] = 0 we show that ȳ is a homogeneous solution as well.
\[ L[y] = 0 \]
\[ \overline{L[y]} = 0 \]
\[ \overline{y^{(n)} + p_{n-1} y^{(n-1)} + \cdots + p_0 y} = 0 \]
\[ \bar{y}^{(n)} + p_{n-1} \bar{y}^{(n-1)} + \cdots + p_0 \bar{y} = 0 \]
\[ L[\bar{y}] = 0 \]
For the same reason, if y_p is a particular solution, then \( \bar{y}_p \) is a particular solution as well.
Since the real and imaginary parts of a function y are linear combinations of y and ȳ,
\[ \Re(y) = \frac{y + \bar{y}}{2}, \qquad \Im(y) = \frac{y - \bar{y}}{\imath 2}, \]
if y is a homogeneous solution then both ℜ(y) and ℑ(y) are homogeneous solutions. Likewise, if y_p is a particular solution then ℜ(y_p) is a particular solution.
\[ L[\Re(y_p)] = L\left[ \frac{y_p + \bar{y}_p}{2} \right] = \frac{f}{2} + \frac{f}{2} = f \]
Thus we see that the homogeneous solution, the particular solution and the general solution of a linear differential equation with real-valued coefficients and inhomogeneity can be written in terms of real-valued functions.
Result 16.2.2 The differential equation
\[ L[y] = \frac{d^n y}{dx^n} + p_{n-1}(x) \frac{d^{n-1} y}{dx^{n-1}} + \cdots + p_1(x) \frac{dy}{dx} + p_0(x) y = f(x) \]
with continuous coefficients and inhomogeneity has a general solution of the form
\[ y = y_p + c_1 y_1 + \cdots + c_n y_n \]
where y_p is a particular solution, L[y_p] = f, and the y_k are linearly independent homogeneous solutions, L[y_k] = 0. If the coefficient functions and inhomogeneity are real-valued, then the general solution can be written in terms of real-valued functions.
16.3 Transformation to a First Order System
Any linear differential equation can be put in the form of a system of first order differential equations. Consider
\[ y^{(n)} + p_{n-1} y^{(n-1)} + \cdots + p_0 y = f(x). \]
We introduce the functions,
\[ y_1 = y, \quad y_2 = y', \quad \ldots, \quad y_n = y^{(n-1)}. \]
The differential equation is equivalent to the system
\[ y_1' = y_2, \quad y_2' = y_3, \quad \ldots, \quad y_n' = f(x) - p_{n-1} y_n - \cdots - p_0 y_1. \]
The first order system is more useful when numerically solving the differential equation.

Example 16.3.1 Consider the differential equation
\[ y'' + x^2 y' + \cos x \; y = \sin x. \]
The corresponding system of first order equations is
\[ y_1' = y_2, \qquad y_2' = \sin x - x^2 y_2 - \cos x \; y_1. \]
16.4 The Wronskian
16.4.1 Derivative of a Determinant.
Before investigating the Wronskian, we will need a preliminary result from matrix theory. Consider
an n × n matrix A whose elements aij(x) are functions of x. We will denote the determinant by
∆[A(x)]. We then have the following theorem.
Result 16.4.1 Let a_{ij}(x), the elements of the matrix A, be differentiable functions of x. Then
\[ \frac{d}{dx} \Delta[A(x)] = \sum_{k=1}^{n} \Delta_k[A(x)] \]
where Δ_k[A(x)] is the determinant of the matrix A with the kth row replaced by the derivative of the kth row.
Example 16.4.1 Consider the matrix
\[ A(x) = \begin{pmatrix} x & x^2 \\ x^2 & x^4 \end{pmatrix} \]
The determinant is x^5 − x^4, thus the derivative of the determinant is 5x^4 − 4x^3. To check the theorem,
\[ \frac{d}{dx} \Delta[A(x)] = \begin{vmatrix} 1 & 2x \\ x^2 & x^4 \end{vmatrix} + \begin{vmatrix} x & x^2 \\ 2x & 4x^3 \end{vmatrix} = x^4 - 2x^3 + 4x^4 - 2x^3 = 5x^4 - 4x^3. \]
16.4.2 The Wronskian of a Set of Functions.
A set of functions {y_1, y_2, . . . , y_n} is linearly dependent on an interval if there are constants c_1, . . . , c_n, not all zero, such that
\[ c_1 y_1 + c_2 y_2 + \cdots + c_n y_n = 0 \tag{16.3} \]
identically on the interval. The set is linearly independent if all of the constants must be zero to satisfy c_1 y_1 + · · · + c_n y_n = 0 on the interval.
Consider a set of functions {y_1, y_2, . . . , y_n} that are linearly dependent on a given interval and n − 1 times differentiable. There is a set of constants, not all zero, that satisfies Equation 16.3. Differentiating Equation 16.3 n − 1 times gives the equations,
\[ c_1 y_1' + c_2 y_2' + \cdots + c_n y_n' = 0 \]
\[ c_1 y_1'' + c_2 y_2'' + \cdots + c_n y_n'' = 0 \]
\[ \cdots \]
\[ c_1 y_1^{(n-1)} + c_2 y_2^{(n-1)} + \cdots + c_n y_n^{(n-1)} = 0. \]
We could write the problem to find the constants as
\[ \begin{pmatrix} y_1 & y_2 & \cdots & y_n \\ y_1' & y_2' & \cdots & y_n' \\ y_1'' & y_2'' & \cdots & y_n'' \\ \vdots & \vdots & \ddots & \vdots \\ y_1^{(n-1)} & y_2^{(n-1)} & \cdots & y_n^{(n-1)} \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \\ c_3 \\ \vdots \\ c_n \end{pmatrix} = 0 \]
From linear algebra, we know that this equation has a solution for a nonzero constant vector only if the determinant of the matrix is zero. Here we define the Wronskian, W(x), of a set of functions.
\[ W(x) = \begin{vmatrix} y_1 & y_2 & \cdots & y_n \\ y_1' & y_2' & \cdots & y_n' \\ \vdots & \vdots & \ddots & \vdots \\ y_1^{(n-1)} & y_2^{(n-1)} & \cdots & y_n^{(n-1)} \end{vmatrix} \]
Thus if a set of functions is linearly dependent on an interval, then the Wronskian is identically zero on that interval. Alternatively, if the Wronskian is identically zero, then the above matrix equation has a solution for a nonzero constant vector. This implies that the set of functions is linearly dependent.

Result 16.4.2 The Wronskian of a set of functions vanishes identically over an interval if and only if the set of functions is linearly dependent on that interval. The Wronskian of a set of linearly independent functions does not vanish except possibly at isolated points.
Example 16.4.2 Consider the set {x, x^2}. The Wronskian is
\[ W(x) = \begin{vmatrix} x & x^2 \\ 1 & 2x \end{vmatrix} = 2x^2 - x^2 = x^2. \]
Thus the functions are independent.

Example 16.4.3 Consider the set {sin x, cos x, e^{ıx}}. The Wronskian is
\[ W(x) = \begin{vmatrix} \sin x & \cos x & \mathrm{e}^{\imath x} \\ \cos x & -\sin x & \imath\,\mathrm{e}^{\imath x} \\ -\sin x & -\cos x & -\mathrm{e}^{\imath x} \end{vmatrix}. \]
Since the last row is a constant multiple of the first row, the determinant is zero. The functions are dependent. We could also see this with the identity e^{ıx} = cos x + ı sin x.
16.4.3 The Wronskian of the Solutions to a Differential Equation
Consider the nth order linear homogeneous differential equation
\[ y^{(n)} + p_{n-1}(x) y^{(n-1)} + \cdots + p_0(x) y = 0. \]
Let {y_1, y_2, . . . , y_n} be any set of n linearly independent solutions. Let Y(x) be the matrix such that W(x) = Δ[Y(x)]. Now let's differentiate W(x).
\[ W'(x) = \frac{d}{dx} \Delta[Y(x)] = \sum_{k=1}^{n} \Delta_k[Y(x)] \]
We note that all but the last term in this sum is zero. To see this, let's take a look at the first term.
\[ \Delta_1[Y(x)] = \begin{vmatrix} y_1' & y_2' & \cdots & y_n' \\ y_1' & y_2' & \cdots & y_n' \\ \vdots & \vdots & \ddots & \vdots \\ y_1^{(n-1)} & y_2^{(n-1)} & \cdots & y_n^{(n-1)} \end{vmatrix} \]
The first two rows in the matrix are identical. Since the rows are dependent, the determinant is zero.
The last term in the sum is
\[ \Delta_n[Y(x)] = \begin{vmatrix} y_1 & y_2 & \cdots & y_n \\ \vdots & \vdots & \ddots & \vdots \\ y_1^{(n-2)} & y_2^{(n-2)} & \cdots & y_n^{(n-2)} \\ y_1^{(n)} & y_2^{(n)} & \cdots & y_n^{(n)} \end{vmatrix}. \]
In the last row of this matrix we make the substitution \( y_i^{(n)} = -p_{n-1}(x) y_i^{(n-1)} - \cdots - p_0(x) y_i \). Recalling that we can add a multiple of a row to another without changing the determinant, we add p_0(x) times the first row, and p_1(x) times the second row, etc., to the last row. Thus we have the determinant,
\[ W'(x) = \begin{vmatrix} y_1 & y_2 & \cdots & y_n \\ \vdots & \vdots & \ddots & \vdots \\ y_1^{(n-2)} & y_2^{(n-2)} & \cdots & y_n^{(n-2)} \\ -p_{n-1}(x) y_1^{(n-1)} & -p_{n-1}(x) y_2^{(n-1)} & \cdots & -p_{n-1}(x) y_n^{(n-1)} \end{vmatrix} \]
\[ = -p_{n-1}(x) \begin{vmatrix} y_1 & y_2 & \cdots & y_n \\ \vdots & \vdots & \ddots & \vdots \\ y_1^{(n-2)} & y_2^{(n-2)} & \cdots & y_n^{(n-2)} \\ y_1^{(n-1)} & y_2^{(n-1)} & \cdots & y_n^{(n-1)} \end{vmatrix} = -p_{n-1}(x) W(x) \]
Thus the Wronskian satisfies the first order differential equation,
\[ W'(x) = -p_{n-1}(x) W(x). \]
Solving this equation we get a result known as Abel's formula.
\[ W(x) = c \exp\left( -\int p_{n-1}(x)\,dx \right) \]
Thus regardless of the particular set of solutions that we choose, we can compute their Wronskian up to a constant factor.
Result 16.4.3 The Wronskian of any linearly independent set of solutions to the equation
\[ y^{(n)} + p_{n-1}(x) y^{(n-1)} + \cdots + p_0(x) y = 0 \]
is, (up to a multiplicative constant), given by
\[ W(x) = \exp\left( -\int p_{n-1}(x)\,dx \right). \]
Example 16.4.4 Consider the differential equation
\[ y'' - 3y' + 2y = 0. \]
The Wronskian of the two independent solutions is
\[ W(x) = c \exp\left( -\int -3\,dx \right) = c\, \mathrm{e}^{3x}. \]
For the choice of solutions {e^x, e^{2x}}, the Wronskian is
\[ W(x) = \begin{vmatrix} \mathrm{e}^{x} & \mathrm{e}^{2x} \\ \mathrm{e}^{x} & 2\,\mathrm{e}^{2x} \end{vmatrix} = 2\,\mathrm{e}^{3x} - \mathrm{e}^{3x} = \mathrm{e}^{3x}. \]
16.5 Well-Posed Problems
Consider the initial value problem for an nth order linear differential equation.
\[ \frac{d^n y}{dx^n} + p_{n-1}(x) \frac{d^{n-1} y}{dx^{n-1}} + \cdots + p_1(x) \frac{dy}{dx} + p_0(x) y = f(x) \]
\[ y(x_0) = v_1, \quad y'(x_0) = v_2, \quad \ldots, \quad y^{(n-1)}(x_0) = v_n \]
Since the general solution to the differential equation is a linear combination of the n homogeneous
solutions plus the particular solution
y = yp + c1y1 + c2y2 + · · · + cnyn,
the problem to find the constants ci can be written





y1(x0) y2(x0) . . . yn(x0)
y1(x0) y2(x0) . . . yn(x0)
...
...
... . . .
y
(n−1)
1 (x0) y
(n−1)
2 (x0) . . . y
(n−1)
n (x0)










c1
c2
...
cn





+





yp(x0)
yp(x0)
...
y
(n−1)
p (x0)





=





v1
v2
...
vn





.
From linear algebra we know that this system of equations has a unique solution only if the deter-
minant of the matrix is nonzero. Note that the determinant of the matrix is just the Wronskian
evaluated at x0. Thus if the Wronskian vanishes at x0, the initial value problem for the differential
equation either has no solutions or infinitely many solutions. Such problems are said to be ill-posed.
From Abel's formula for the Wronskian,
$$ W(x) = \exp\left( -\int p_{n-1}(x)\, dx \right), $$
we see that the only way the Wronskian can vanish is if the value of the integral goes to $\infty$.
Example 16.5.1 Consider the initial value problem
$$ y'' - \frac{2}{x} y' + \frac{2}{x^2} y = 0, \qquad y(0) = y'(0) = 1. $$
The Wronskian
$$ W(x) = \exp\left( -\int -\frac{2}{x}\, dx \right) = \exp(2\log x) = x^2 $$
vanishes at $x = 0$. Thus this problem is not well-posed.
The general solution of the differential equation is
$$ y = c_1 x + c_2 x^2. $$
We see that the general solution cannot satisfy the initial conditions. If instead we had the initial conditions $y(0) = 0$, $y'(0) = 1$, then there would be an infinite number of solutions.
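The following sympy sketch (added here as an illustration, not from the original text) confirms both claims: every member of the general solution vanishes at $x = 0$, so $y(0) = 1$ is unattainable, while $y(0) = 0$, $y'(0) = 1$ pins down only $c_1$ and leaves $c_2$ free.

```python
import sympy as sp

x, c1, c2 = sp.symbols('x c1 c2')

# General solution of Example 16.5.1.
y = c1*x + c2*x**2
print(y.subs(x, 0))  # 0: y(0) = 1 is impossible for any c1, c2.

# y'(0) = 1 forces c1 = 1; c2 remains free, so infinitely many solutions.
print(sp.solve(sp.Eq(sp.diff(y, x).subs(x, 0), 1), c1))  # [1]
```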
Example 16.5.2 Consider the initial value problem
$$ y'' - \frac{2}{x^2} y = 0, \qquad y(0) = y'(0) = 1. $$
The Wronskian
$$ W(x) = \exp\left( -\int 0\, dx \right) = 1 $$
does not vanish anywhere. However, this problem is not well-posed.
The general solution,
$$ y = c_1 x^{-1} + c_2 x^2, $$
cannot satisfy the initial conditions. Thus we see that a non-vanishing Wronskian does not imply that the problem is well-posed.
Result 16.5.1 Consider the initial value problem
$$ \frac{d^n y}{dx^n} + p_{n-1}(x)\frac{d^{n-1} y}{dx^{n-1}} + \cdots + p_1(x)\frac{dy}{dx} + p_0(x) y = 0 $$
$$ y(x_0) = v_1, \quad y'(x_0) = v_2, \quad \ldots, \quad y^{(n-1)}(x_0) = v_n. $$
If the Wronskian,
$$ W(x) = \exp\left( -\int p_{n-1}(x)\, dx \right), $$
vanishes at $x = x_0$ then the problem is ill-posed. The problem may be ill-posed even if the Wronskian does not vanish.
16.6 The Fundamental Set of Solutions
Consider a set of linearly independent solutions $\{u_1, u_2, \ldots, u_n\}$ to an $n^{th}$ order linear homogeneous differential equation. This is called the fundamental set of solutions at $x_0$ if they satisfy the relations
$$ \begin{array}{cccc} u_1(x_0) = 1 & u_2(x_0) = 0 & \cdots & u_n(x_0) = 0 \\ u_1'(x_0) = 0 & u_2'(x_0) = 1 & \cdots & u_n'(x_0) = 0 \\ \vdots & \vdots & \ddots & \vdots \\ u_1^{(n-1)}(x_0) = 0 & u_2^{(n-1)}(x_0) = 0 & \cdots & u_n^{(n-1)}(x_0) = 1 \end{array} $$
Knowing the fundamental set of solutions is handy because it makes the task of solving an initial value problem trivial. Say we are given the initial conditions,
$$ y(x_0) = v_1, \quad y'(x_0) = v_2, \quad \ldots, \quad y^{(n-1)}(x_0) = v_n. $$
If the $u_i$'s are a fundamental set then the solution that satisfies these constraints is just
$$ y = v_1 u_1(x) + v_2 u_2(x) + \cdots + v_n u_n(x). $$
Of course in general, a set of solutions is not the fundamental set. If the Wronskian of the solutions is nonzero and finite we can generate a fundamental set of solutions that are linear combinations of our original set. Consider the case of a second order equation. Let $\{y_1, y_2\}$ be two linearly independent solutions. We will generate the fundamental set of solutions, $\{u_1, u_2\}$.
$$ \begin{pmatrix} u_1 \\ u_2 \end{pmatrix} = \begin{pmatrix} c_{11} & c_{12} \\ c_{21} & c_{22} \end{pmatrix} \begin{pmatrix} y_1 \\ y_2 \end{pmatrix} $$
For $\{u_1, u_2\}$ to satisfy the relations that define a fundamental set, it must satisfy the matrix equation
$$ \begin{pmatrix} u_1(x_0) & u_1'(x_0) \\ u_2(x_0) & u_2'(x_0) \end{pmatrix} = \begin{pmatrix} c_{11} & c_{12} \\ c_{21} & c_{22} \end{pmatrix} \begin{pmatrix} y_1(x_0) & y_1'(x_0) \\ y_2(x_0) & y_2'(x_0) \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} $$
$$ \begin{pmatrix} c_{11} & c_{12} \\ c_{21} & c_{22} \end{pmatrix} = \begin{pmatrix} y_1(x_0) & y_1'(x_0) \\ y_2(x_0) & y_2'(x_0) \end{pmatrix}^{-1} $$
If the Wronskian is non-zero and finite, we can solve for the constants, $c_{ij}$, and thus find the fundamental set of solutions. To generalize this result to an equation of order $n$, simply replace all the $2 \times 2$ matrices and vectors of length 2 with $n \times n$ matrices and vectors of length $n$. I presented the case of $n = 2$ simply to save having to write out all the ellipses involved in the general case. (It also makes for easier reading.)
Example 16.6.1 Two linearly independent solutions to the differential equation $y'' + y = 0$ are $y_1 = e^{ıx}$ and $y_2 = e^{-ıx}$.
$$ \begin{pmatrix} y_1(0) & y_1'(0) \\ y_2(0) & y_2'(0) \end{pmatrix} = \begin{pmatrix} 1 & ı \\ 1 & -ı \end{pmatrix} $$
To find the fundamental set of solutions, $\{u_1, u_2\}$, at $x = 0$ we solve the equation
$$ \begin{pmatrix} c_{11} & c_{12} \\ c_{21} & c_{22} \end{pmatrix} = \begin{pmatrix} 1 & ı \\ 1 & -ı \end{pmatrix}^{-1} = \frac{1}{ı2}\begin{pmatrix} ı & ı \\ 1 & -1 \end{pmatrix} $$
The fundamental set is
$$ u_1 = \frac{e^{ıx} + e^{-ıx}}{2}, \qquad u_2 = \frac{e^{ıx} - e^{-ıx}}{ı2}. $$
Using trigonometric identities we can rewrite these as
$$ u_1 = \cos x, \qquad u_2 = \sin x. $$
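A short sympy sketch of the same construction (added as an illustration, not from the original text): build the matrix of initial values, invert it, and form the fundamental set as linear combinations.

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Matrix([sp.exp(sp.I*x), sp.exp(-sp.I*x)])  # {y1, y2}

# Matrix of initial values [[y1(0), y1'(0)], [y2(0), y2'(0)]].
M = sp.Matrix([[yi.subs(x, 0), sp.diff(yi, x).subs(x, 0)] for yi in y])
C = M.inv()       # the coefficients c_ij
u = C * y         # fundamental set at x = 0

print([sp.simplify(ui.rewrite(sp.cos)) for ui in u])  # [cos(x), sin(x)]
```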
Result 16.6.1 The fundamental set of solutions at $x = x_0$, $\{u_1, u_2, \ldots, u_n\}$, to an $n^{th}$ order linear differential equation, satisfies the relations
$$ \begin{array}{cccc} u_1(x_0) = 1 & u_2(x_0) = 0 & \cdots & u_n(x_0) = 0 \\ u_1'(x_0) = 0 & u_2'(x_0) = 1 & \cdots & u_n'(x_0) = 0 \\ \vdots & \vdots & \ddots & \vdots \\ u_1^{(n-1)}(x_0) = 0 & u_2^{(n-1)}(x_0) = 0 & \cdots & u_n^{(n-1)}(x_0) = 1. \end{array} $$
If the Wronskian of the solutions is nonzero and finite at the point $x_0$ then you can generate the fundamental set of solutions from any linearly independent set of solutions.
Exercise 16.6
Two solutions of $y'' - y = 0$ are $e^x$ and $e^{-x}$. Show that the solutions are independent. Find the fundamental set of solutions at $x = 0$.
Hint, Solution
16.7 Adjoint Equations
For the $n^{th}$ order linear differential operator
$$ L[y] = p_n \frac{d^n y}{dx^n} + p_{n-1} \frac{d^{n-1} y}{dx^{n-1}} + \cdots + p_0 y $$
(where the $p_j$ are complex-valued functions) we define the adjoint of $L$
$$ L^*[y] = (-1)^n \frac{d^n}{dx^n}\left( \overline{p_n}\, y \right) + (-1)^{n-1} \frac{d^{n-1}}{dx^{n-1}}\left( \overline{p_{n-1}}\, y \right) + \cdots + \overline{p_0}\, y. $$
Here $\overline{f}$ denotes the complex conjugate of $f$.
Example 16.7.1
$$ L[y] = x y'' + \frac{1}{x} y' + y $$
has the adjoint
$$ L^*[y] = \frac{d^2}{dx^2}[x y] - \frac{d}{dx}\left[ \frac{1}{x} y \right] + y = x y'' + 2y' - \frac{1}{x} y' + \frac{1}{x^2} y + y = x y'' + \left( 2 - \frac{1}{x} \right) y' + \left( 1 + \frac{1}{x^2} \right) y. $$
Taking the adjoint of $L^*$ yields
$$ L^{**}[y] = \frac{d^2}{dx^2}[x y] - \frac{d}{dx}\left[ \left( 2 - \frac{1}{x} \right) y \right] + \left( 1 + \frac{1}{x^2} \right) y = x y'' + 2y' - \left( 2 - \frac{1}{x} \right) y' - \frac{1}{x^2} y + \left( 1 + \frac{1}{x^2} \right) y = x y'' + \frac{1}{x} y' + y. $$
Thus by taking the adjoint of $L^*$, we obtain the original operator.
In general, $L^{**} = L$.
Consider $L[y] = p_n y^{(n)} + \cdots + p_0 y$. If each of the $p_k$ is $k$ times continuously differentiable and $u$ and $v$ are $n$ times continuously differentiable on some interval, then on that interval
$$ \overline{v} L[u] - u \overline{L^*[v]} = \frac{d}{dx} B[u, v] $$
where $B[u, v]$, the bilinear concomitant, is the bilinear form
$$ B[u, v] = \sum_{m=1}^{n} \sum_{\substack{j+k=m-1 \\ j \ge 0,\, k \ge 0}} (-1)^j u^{(k)} \left( p_m \overline{v} \right)^{(j)}. $$
This equation is known as Lagrange's identity. If $L$ is a second order operator then
$$ \overline{v} L[u] - u \overline{L^*[v]} = \frac{d}{dx}\left[ u p_1 \overline{v} + u' p_2 \overline{v} - u \left( p_2 \overline{v} \right)' \right] = u'' p_2 \overline{v} + u' p_1 \overline{v} + u \left( -p_2 \overline{v}'' + (-2p_2' + p_1)\overline{v}' + (-p_2'' + p_1')\overline{v} \right). $$
Example 16.7.2 Verify Lagrange's identity for the second order operator, $L[y] = p_2 y'' + p_1 y' + p_0 y$.
$$ \overline{v} L[u] - u \overline{L^*[v]} = \overline{v}\left( p_2 u'' + p_1 u' + p_0 u \right) - u \overline{\left( \frac{d^2}{dx^2}(\overline{p_2} v) - \frac{d}{dx}(\overline{p_1} v) + \overline{p_0} v \right)} $$
$$ = \overline{v}\left( p_2 u'' + p_1 u' + p_0 u \right) - u \left( p_2 \overline{v}'' + (2p_2' - p_1)\overline{v}' + (p_2'' - p_1' + p_0)\overline{v} \right) $$
$$ = u'' p_2 \overline{v} + u' p_1 \overline{v} + u \left( -p_2 \overline{v}'' + (-2p_2' + p_1)\overline{v}' + (-p_2'' + p_1')\overline{v} \right). $$
We will not verify Lagrange's identity for the general case.
Integrating Lagrange's identity on its interval of validity gives us Green's formula.
$$ \int_a^b \left( \overline{v} L[u] - u \overline{L^*[v]} \right) dx = B[u, v]\big|_{x=b} - B[u, v]\big|_{x=a} $$
Result 16.7.1 The adjoint of the operator
$$ L[y] = p_n \frac{d^n y}{dx^n} + p_{n-1} \frac{d^{n-1} y}{dx^{n-1}} + \cdots + p_0 y $$
is defined
$$ L^*[y] = (-1)^n \frac{d^n}{dx^n}\left( \overline{p_n}\, y \right) + (-1)^{n-1} \frac{d^{n-1}}{dx^{n-1}}\left( \overline{p_{n-1}}\, y \right) + \cdots + \overline{p_0}\, y. $$
If each of the $p_k$ is $k$ times continuously differentiable and $u$ and $v$ are $n$ times continuously differentiable, then Lagrange's identity states
$$ \overline{v} L[u] - u \overline{L^*[v]} = \frac{d}{dx} B[u, v] = \frac{d}{dx} \sum_{m=1}^{n} \sum_{\substack{j+k=m-1 \\ j \ge 0,\, k \ge 0}} (-1)^j u^{(k)} \left( p_m \overline{v} \right)^{(j)}. $$
Integrating Lagrange's identity on its domain of validity yields Green's formula,
$$ \int_a^b \left( \overline{v} L[u] - u \overline{L^*[v]} \right) dx = B[u, v]\big|_{x=b} - B[u, v]\big|_{x=a}. $$
16.8 Additional Exercises
Exact Equations
Nature of Solutions
Transformation to a First Order System
The Wronskian
Well-Posed Problems
The Fundamental Set of Solutions
Adjoint Equations
Exercise 16.7
Find the adjoint of the Bessel equation of order $\nu$,
$$ x^2 y'' + x y' + (x^2 - \nu^2) y = 0, $$
and the Legendre equation of order $\alpha$,
$$ (1 - x^2) y'' - 2x y' + \alpha(\alpha+1) y = 0. $$
Hint, Solution
Exercise 16.8
Find the adjoint of
$$ x^2 y'' - x y' + 3y = 0. $$
Hint, Solution
16.9 Hints
Hint 16.1
Hint 16.2
Hint 16.3
Hint 16.4
Hint 16.5
The difference of any two of the ui’s is a homogeneous solution.
Hint 16.6
Exact Equations
Nature of Solutions
Transformation to a First Order System
The Wronskian
Well-Posed Problems
The Fundamental Set of Solutions
Adjoint Equations
Hint 16.7
Hint 16.8
16.10 Solutions
Solution 16.1
The second order, linear, homogeneous differential equation is
$$ P(x) y'' + Q(x) y' + R(x) y = 0. \tag{16.4} $$
An exact equation can be written in the form:
$$ \frac{d}{dx}\left[ a(x) y' + b(x) y \right] = 0. $$
If Equation 16.4 is exact, then we can write it in the form:
$$ \frac{d}{dx}\left[ P(x) y' + f(x) y \right] = 0 $$
for some function $f(x)$. We carry out the differentiation to write the equation in standard form:
$$ P(x) y'' + \left( P'(x) + f(x) \right) y' + f'(x) y = 0 \tag{16.5} $$
We equate the coefficients of Equations 16.4 and 16.5 to obtain a set of equations.
$$ P'(x) + f(x) = Q(x), \qquad f'(x) = R(x) $$
In order to eliminate $f(x)$, we differentiate the first equation and substitute in the expression for $f'(x)$ from the second equation. This gives us a necessary condition for Equation 16.4 to be exact:
$$ P''(x) - Q'(x) + R(x) = 0 \tag{16.6} $$
Now we demonstrate that Equation 16.6 is a sufficient condition for exactness. Suppose that Equation 16.6 holds. Then we can replace $R$ by $Q' - P''$ in the differential equation.
$$ P y'' + Q y' + (Q' - P'') y = 0 $$
We recognize the right side as an exact differential.
$$ \left( P y' + (Q - P') y \right)' = 0 $$
Thus Equation 16.6 is a sufficient condition for exactness. We can integrate to reduce the problem to a first order differential equation.
$$ P y' + (Q - P') y = c $$
Solution 16.2
Suppose that there is an integrating factor $\mu(x)$ that will make
$$ P(x) y'' + Q(x) y' + R(x) y = 0 $$
exact. We multiply by this integrating factor.
$$ \mu(x) P(x) y'' + \mu(x) Q(x) y' + \mu(x) R(x) y = 0 \tag{16.7} $$
We apply the exactness condition from Exercise 16.1 to obtain a differential equation for the integrating factor.
$$ (\mu P)'' - (\mu Q)' + \mu R = 0 $$
$$ \mu'' P + 2\mu' P' + \mu P'' - \mu' Q - \mu Q' + \mu R = 0 $$
$$ P \mu'' + (2P' - Q)\mu' + (P'' - Q' + R)\mu = 0 $$
Solution 16.3
We consider the differential equation,
$$ y'' + x y' + y = 0. $$
Since
$$ (1)'' - (x)' + 1 = 0 $$
we see that this is an exact equation. We rearrange terms to form exact derivatives and then integrate.
$$ (y')' + (x y)' = 0 $$
$$ y' + x y = c $$
$$ \frac{d}{dx}\left[ e^{x^2/2} y \right] = c\, e^{x^2/2} $$
$$ y = c\, e^{-x^2/2} \int e^{x^2/2}\, dx + d\, e^{-x^2/2} $$
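A brief sympy check of this solution's exactness argument (an added sketch, not from the original text): the condition $P'' - Q' + R = 0$ holds, and the equation really is the derivative of $y' + xy$.

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# Exactness condition P'' - Q' + R = 0 with P = 1, Q = x, R = 1.
P, Q, R = sp.S(1), x, sp.S(1)
print(sp.diff(P, x, 2) - sp.diff(Q, x) + R)  # 0

# Verify that y'' + x*y' + y is the exact derivative of (y' + x*y).
expr = sp.diff(y(x).diff(x) + x*y(x), x)
print(sp.expand(expr))  # Derivative(y(x), (x, 2)) + x*Derivative(y(x), x) + y(x)
```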
Solution 16.4
Consider the initial value problem,
$$ y'' + p(x) y' + q(x) y = f(x), \qquad y(x_0) = y_0, \quad y'(x_0) = y_1. $$
If $p(x)$, $q(x)$ and $f(x)$ are continuous on an interval $(a \ldots b)$ with $x_0 \in (a \ldots b)$, then the problem has a unique solution on that interval.
1.
$$ x y'' + 3y = x $$
$$ y'' + \frac{3}{x} y = 1 $$
Unique solutions exist on the intervals $(-\infty \ldots 0)$ and $(0 \ldots \infty)$.
2.
$$ x(x-1) y'' + 3x y' + 4y = 2 $$
$$ y'' + \frac{3}{x-1} y' + \frac{4}{x(x-1)} y = \frac{2}{x(x-1)} $$
Unique solutions exist on the intervals $(-\infty \ldots 0)$, $(0 \ldots 1)$ and $(1 \ldots \infty)$.
3.
$$ e^x y'' + x^2 y' + y = \tan x $$
$$ y'' + x^2 e^{-x} y' + e^{-x} y = e^{-x} \tan x $$
Unique solutions exist on the intervals $\left( \frac{(2n-1)\pi}{2} \ldots \frac{(2n+1)\pi}{2} \right)$ for $n \in \mathbb{Z}$.
Solution 16.5
We know that the general solution is
y = yp + c1y1 + c2y2,
where yp is a particular solution and y1 and y2 are linearly independent homogeneous solutions.
Since yp can be any particular solution, we choose yp = u1. Now we need to find two homogeneous
solutions. Since L[ui] = f(x), L[u1 − u2] = L[u2 − u3] = 0. Finally, we note that since the ui’s are
linearly independent, y1 = u1 − u2 and y2 = u2 − u3 are linearly independent. Thus the general
solution is
y = u1 + c1(u1 − u2) + c2(u2 − u3).
Solution 16.6
The Wronskian of the solutions is
$$ W(x) = \begin{vmatrix} e^x & e^{-x} \\ e^x & -e^{-x} \end{vmatrix} = -2. $$
Since the Wronskian is nonzero, the solutions are independent.
The fundamental set of solutions, $\{u_1, u_2\}$, is a linear combination of $e^x$ and $e^{-x}$.
$$ \begin{pmatrix} u_1 \\ u_2 \end{pmatrix} = \begin{pmatrix} c_{11} & c_{12} \\ c_{21} & c_{22} \end{pmatrix} \begin{pmatrix} e^x \\ e^{-x} \end{pmatrix} $$
The coefficients are
$$ \begin{pmatrix} c_{11} & c_{12} \\ c_{21} & c_{22} \end{pmatrix} = \begin{pmatrix} e^0 & e^0 \\ e^{-0} & -e^{-0} \end{pmatrix}^{-1} = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}^{-1} = \frac{1}{-2}\begin{pmatrix} -1 & -1 \\ -1 & 1 \end{pmatrix} = \frac{1}{2}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} $$
$$ u_1 = \frac{1}{2}\left( e^x + e^{-x} \right), \qquad u_2 = \frac{1}{2}\left( e^x - e^{-x} \right). $$
The fundamental set of solutions at $x = 0$ is
$$ \{ \cosh x, \sinh x \}. $$
Exact Equations
Nature of Solutions
Transformation to a First Order System
The Wronskian
Well-Posed Problems
The Fundamental Set of Solutions
Adjoint Equations
Solution 16.7
1. The Bessel equation of order $\nu$ is
$$ x^2 y'' + x y' + (x^2 - \nu^2) y = 0. $$
The adjoint equation is
$$ x^2 \mu'' + (4x - x)\mu' + (2 - 1 + x^2 - \nu^2)\mu = 0 $$
$$ x^2 \mu'' + 3x \mu' + (1 + x^2 - \nu^2)\mu = 0. $$
2. The Legendre equation of order $\alpha$ is
$$ (1 - x^2) y'' - 2x y' + \alpha(\alpha+1) y = 0. $$
The adjoint equation is
$$ (1 - x^2)\mu'' + (-4x + 2x)\mu' + (-2 + 2 + \alpha(\alpha+1))\mu = 0 $$
$$ (1 - x^2)\mu'' - 2x \mu' + \alpha(\alpha+1)\mu = 0. $$
Solution 16.8
The adjoint of
$$ x^2 y'' - x y' + 3y = 0 $$
is
$$ \frac{d^2}{dx^2}\left( x^2 y \right) + \frac{d}{dx}(x y) + 3y = 0 $$
$$ \left( x^2 y'' + 4x y' + 2y \right) + \left( x y' + y \right) + 3y = 0 $$
$$ x^2 y'' + 5x y' + 6y = 0. $$
16.11 Quiz
Problem 16.1
What is the differential equation whose solution is the two parameter family of curves $y = c_1 \sin(2x + c_2)$?
Solution
16.12 Quiz Solutions
Solution 16.1
We take the first and second derivative of $y = c_1 \sin(2x + c_2)$.
$$ y' = 2c_1 \cos(2x + c_2) $$
$$ y'' = -4c_1 \sin(2x + c_2) $$
This gives us three equations involving $x$, $y$, $y'$, $y''$ and the parameters $c_1$ and $c_2$. We eliminate the parameters to obtain the differential equation. Clearly we have,
$$ y'' + 4y = 0. $$
Chapter 17
Techniques for Linear Differential
Equations
My new goal in life is to take the meaningless drivel out of human interaction.
-Dave Ozenne
The $n^{th}$ order linear homogeneous differential equation can be written in the form:
$$ y^{(n)} + a_{n-1}(x) y^{(n-1)} + \cdots + a_1(x) y' + a_0(x) y = 0. $$
In general it is not possible to solve second order and higher linear differential equations. In this
chapter we will examine equations that have special forms which allow us to either reduce the order
of the equation or solve it.
17.1 Constant Coefficient Equations
The $n^{th}$ order constant coefficient differential equation has the form:
$$ y^{(n)} + a_{n-1} y^{(n-1)} + \cdots + a_1 y' + a_0 y = 0. $$
We will find that solving a constant coefficient differential equation is no more difficult than finding
the roots of a polynomial. For notational simplicity, we will first consider second order equations.
Then we will apply the same techniques to higher order equations.
17.1.1 Second Order Equations
Factoring the Differential Equation. Consider the second order constant coefficient differential
equation:
$$ y'' + 2a y' + b y = 0. \tag{17.1} $$
Just as we can factor a second degree polynomial:
$$ \lambda^2 + 2a\lambda + b = (\lambda - \alpha)(\lambda - \beta), \qquad \alpha = -a + \sqrt{a^2 - b} \quad \text{and} \quad \beta = -a - \sqrt{a^2 - b}, $$
we can factor Equation 17.1.
$$ \left[ \frac{d^2}{dx^2} + 2a\frac{d}{dx} + b \right] y = \left[ \frac{d}{dx} - \alpha \right]\left[ \frac{d}{dx} - \beta \right] y $$
Once we have factored the differential equation, we can solve it by solving a series of two first order differential equations. We set $u = \left[ \frac{d}{dx} - \beta \right] y$ to obtain a first order equation:
$$ \left[ \frac{d}{dx} - \alpha \right] u = 0, $$
which has the solution:
$$ u = c_1 e^{\alpha x}. $$
To find the solution of Equation 17.1, we solve
$$ \left[ \frac{d}{dx} - \beta \right] y = u = c_1 e^{\alpha x}. $$
We multiply by the integrating factor and integrate.
$$ \frac{d}{dx}\left( e^{-\beta x} y \right) = c_1 e^{(\alpha - \beta)x} $$
$$ y = c_1 e^{\beta x} \int e^{(\alpha - \beta)x}\, dx + c_2 e^{\beta x} $$
We first consider the case that $\alpha$ and $\beta$ are distinct.
$$ y = c_1 e^{\beta x} \frac{1}{\alpha - \beta} e^{(\alpha - \beta)x} + c_2 e^{\beta x} $$
We choose new constants to write the solution in a simpler form.
$$ y = c_1 e^{\alpha x} + c_2 e^{\beta x} $$
Now we consider the case $\alpha = \beta$.
$$ y = c_1 e^{\alpha x} \int 1\, dx + c_2 e^{\alpha x} $$
$$ y = c_1 x e^{\alpha x} + c_2 e^{\alpha x} $$
The solution of Equation 17.1 is
$$ y = \begin{cases} c_1 e^{\alpha x} + c_2 e^{\beta x}, & \alpha \neq \beta, \\ c_1 e^{\alpha x} + c_2 x e^{\alpha x}, & \alpha = \beta. \end{cases} \tag{17.2} $$
Example 17.1.1 Consider the differential equation: $y'' + y = 0$. To obtain the general solution, we factor the equation and apply the result in Equation 17.2.
$$ \left[ \frac{d}{dx} - ı \right]\left[ \frac{d}{dx} + ı \right] y = 0 $$
$$ y = c_1 e^{ıx} + c_2 e^{-ıx}. $$
Example 17.1.2 Next we solve $y'' = 0$.
$$ \left[ \frac{d}{dx} - 0 \right]\left[ \frac{d}{dx} - 0 \right] y = 0 $$
$$ y = c_1 e^{0x} + c_2 x e^{0x} $$
$$ y = c_1 + c_2 x $$
Substituting the Form of the Solution into the Differential Equation. Note that if we substitute $y = e^{\lambda x}$ into the differential equation (17.1), we will obtain the quadratic polynomial (17.1.1) for $\lambda$.
$$ y'' + 2a y' + b y = 0 $$
$$ \lambda^2 e^{\lambda x} + 2a\lambda e^{\lambda x} + b e^{\lambda x} = 0 $$
$$ \lambda^2 + 2a\lambda + b = 0 $$
This gives us a superficially different method for solving constant coefficient equations. We substitute $y = e^{\lambda x}$ into the differential equation. Let $\alpha$ and $\beta$ be the roots of the quadratic in $\lambda$. If the roots are distinct, then the linearly independent solutions are $y_1 = e^{\alpha x}$ and $y_2 = e^{\beta x}$. If the quadratic has a double root at $\lambda = \alpha$, then the linearly independent solutions are $y_1 = e^{\alpha x}$ and $y_2 = x e^{\alpha x}$.
Example 17.1.3 Consider the equation:
$$ y'' - 3y' + 2y = 0. $$
The substitution $y = e^{\lambda x}$ yields
$$ \lambda^2 - 3\lambda + 2 = (\lambda - 1)(\lambda - 2) = 0. $$
Thus the solutions are $e^x$ and $e^{2x}$.
Example 17.1.4 Next consider the equation:
$$ y'' - 4y' + 4y = 0. $$
The substitution $y = e^{\lambda x}$ yields
$$ \lambda^2 - 4\lambda + 4 = (\lambda - 2)^2 = 0. $$
Because the polynomial has a double root, the solutions are $e^{2x}$ and $x e^{2x}$.
Result 17.1.1 Consider the second order constant coefficient differential equation:
$$ y'' + 2a y' + b y = 0. $$
We can factor the differential equation into the form:
$$ \left[ \frac{d}{dx} - \alpha \right]\left[ \frac{d}{dx} - \beta \right] y = 0, $$
which has the solution:
$$ y = \begin{cases} c_1 e^{\alpha x} + c_2 e^{\beta x}, & \alpha \neq \beta, \\ c_1 e^{\alpha x} + c_2 x e^{\alpha x}, & \alpha = \beta. \end{cases} $$
We can also determine $\alpha$ and $\beta$ by substituting $y = e^{\lambda x}$ into the differential equation and factoring the polynomial in $\lambda$.
Shift Invariance. Note that if u(x) is a solution of a constant coefficient equation, then u(x + c)
is also a solution. This is useful in applying initial or boundary conditions.
569
Example 17.1.5 Consider the problem
$$ y'' - 3y' + 2y = 0, \qquad y(0) = a, \quad y'(0) = b. $$
We know that the general solution is
$$ y = c_1 e^x + c_2 e^{2x}. $$
Applying the initial conditions, we obtain the equations,
$$ c_1 + c_2 = a, \qquad c_1 + 2c_2 = b. $$
The solution is
$$ y = (2a - b) e^x + (b - a) e^{2x}. $$
Now suppose we wish to solve the same differential equation with the boundary conditions $y(1) = a$ and $y'(1) = b$. All we have to do is shift the solution to the right.
$$ y = (2a - b) e^{x-1} + (b - a) e^{2(x-1)}. $$
17.1.2 Real-Valued Solutions
If the coefficients of the differential equation are real, then the solution can be written in terms of real-valued functions (Result 16.2.2). For a real root $\lambda = \alpha$ of the polynomial in $\lambda$, the corresponding solution, $y = e^{\alpha x}$, is real-valued.
Now recall that the complex roots of a polynomial with real coefficients occur in complex conjugate pairs. Assume that $\alpha \pm ı\beta$ are roots of
$$ \lambda^n + a_{n-1}\lambda^{n-1} + \cdots + a_1\lambda + a_0 = 0. $$
The corresponding solutions of the differential equation are $e^{(\alpha+ı\beta)x}$ and $e^{(\alpha-ı\beta)x}$. Note that the linear combinations
$$ \frac{e^{(\alpha+ı\beta)x} + e^{(\alpha-ı\beta)x}}{2} = e^{\alpha x}\cos(\beta x), \qquad \frac{e^{(\alpha+ı\beta)x} - e^{(\alpha-ı\beta)x}}{ı2} = e^{\alpha x}\sin(\beta x), $$
are real-valued solutions of the differential equation. We could also obtain real-valued solutions by taking the real and imaginary parts of either $e^{(\alpha+ı\beta)x}$ or $e^{(\alpha-ı\beta)x}$.
$$ \Re\left( e^{(\alpha+ı\beta)x} \right) = e^{\alpha x}\cos(\beta x), \qquad \Im\left( e^{(\alpha+ı\beta)x} \right) = e^{\alpha x}\sin(\beta x) $$
Example 17.1.6 Consider the equation
$$ y'' - 2y' + 2y = 0. $$
The substitution $y = e^{\lambda x}$ yields
$$ \lambda^2 - 2\lambda + 2 = (\lambda - 1 - ı)(\lambda - 1 + ı) = 0. $$
The linearly independent solutions are
$$ e^{(1+ı)x}, \quad \text{and} \quad e^{(1-ı)x}. $$
We can write the general solution in terms of real functions.
$$ y = c_1 e^x \cos x + c_2 e^x \sin x $$
Exercise 17.1
Find the general solution of
$$ y'' + 2a y' + b y = 0 $$
for $a, b \in \mathbb{R}$. There are three distinct forms of the solution depending on the sign of $a^2 - b$.
Hint, Solution
Exercise 17.2
Find the fundamental set of solutions of
$$ y'' + 2a y' + b y = 0 $$
at the point $x = 0$, for $a, b \in \mathbb{R}$. Use the general solutions obtained in Exercise 17.1.
Hint, Solution
Result 17.1.2 Consider the second order constant coefficient equation
$$ y'' + 2a y' + b y = 0. $$
The general solution of this differential equation is
$$ y = \begin{cases} e^{-ax}\left( c_1 e^{\sqrt{a^2-b}\,x} + c_2 e^{-\sqrt{a^2-b}\,x} \right) & \text{if } a^2 > b, \\ e^{-ax}\left( c_1 \cos\left( \sqrt{b-a^2}\,x \right) + c_2 \sin\left( \sqrt{b-a^2}\,x \right) \right) & \text{if } a^2 < b, \\ e^{-ax}\left( c_1 + c_2 x \right) & \text{if } a^2 = b. \end{cases} $$
The fundamental set of solutions at $x = 0$ is
$$ \begin{cases} \left\{ e^{-ax}\left( \cosh\left( \sqrt{a^2-b}\,x \right) + \frac{a}{\sqrt{a^2-b}} \sinh\left( \sqrt{a^2-b}\,x \right) \right),\; e^{-ax}\frac{1}{\sqrt{a^2-b}} \sinh\left( \sqrt{a^2-b}\,x \right) \right\} & \text{if } a^2 > b, \\ \left\{ e^{-ax}\left( \cos\left( \sqrt{b-a^2}\,x \right) + \frac{a}{\sqrt{b-a^2}} \sin\left( \sqrt{b-a^2}\,x \right) \right),\; e^{-ax}\frac{1}{\sqrt{b-a^2}} \sin\left( \sqrt{b-a^2}\,x \right) \right\} & \text{if } a^2 < b, \\ \left\{ (1 + ax) e^{-ax},\; x e^{-ax} \right\} & \text{if } a^2 = b. \end{cases} $$
To obtain the fundamental set of solutions at the point $x = \xi$, substitute $(x - \xi)$ for $x$ in the above solutions.
17.1.3 Higher Order Equations
The constant coefficient equation of order $n$ has the form
$$ L[y] = y^{(n)} + a_{n-1} y^{(n-1)} + \cdots + a_1 y' + a_0 y = 0. \tag{17.3} $$
The substitution $y = e^{\lambda x}$ will transform this differential equation into an algebraic equation.
$$ L[e^{\lambda x}] = \lambda^n e^{\lambda x} + a_{n-1}\lambda^{n-1} e^{\lambda x} + \cdots + a_1\lambda e^{\lambda x} + a_0 e^{\lambda x} = 0 $$
$$ \left( \lambda^n + a_{n-1}\lambda^{n-1} + \cdots + a_1\lambda + a_0 \right) e^{\lambda x} = 0 $$
$$ \lambda^n + a_{n-1}\lambda^{n-1} + \cdots + a_1\lambda + a_0 = 0 $$
Assume that the roots of this equation, $\lambda_1, \ldots, \lambda_n$, are distinct. Then the $n$ linearly independent solutions of Equation 17.3 are
$$ e^{\lambda_1 x}, \ldots, e^{\lambda_n x}. $$
If the roots of the algebraic equation are not distinct, then we will not obtain all the solutions of the differential equation. Suppose that $\lambda_1 = \alpha$ is a double root. We substitute $y = e^{\lambda x}$ into the differential equation.
$$ L[e^{\lambda x}] = \left[ (\lambda - \alpha)^2 (\lambda - \lambda_3) \cdots (\lambda - \lambda_n) \right] e^{\lambda x} = 0 $$
Setting $\lambda = \alpha$ will make the left side of the equation zero. Thus $y = e^{\alpha x}$ is a solution. Now we differentiate both sides of the equation with respect to $\lambda$ and interchange the order of differentiation.
$$ \frac{d}{d\lambda} L[e^{\lambda x}] = L\left[ \frac{d}{d\lambda} e^{\lambda x} \right] = L\left[ x e^{\lambda x} \right] $$
Let $p(\lambda) = (\lambda - \lambda_3) \cdots (\lambda - \lambda_n)$. We calculate $L\left[ x e^{\lambda x} \right]$ by applying $L$ and then differentiating with respect to $\lambda$.
$$ L\left[ x e^{\lambda x} \right] = \frac{d}{d\lambda} L[e^{\lambda x}] = \frac{d}{d\lambda} \left[ (\lambda - \alpha)^2 (\lambda - \lambda_3) \cdots (\lambda - \lambda_n) \right] e^{\lambda x} = \frac{d}{d\lambda} \left[ (\lambda - \alpha)^2 p(\lambda) \right] e^{\lambda x} $$
$$ = \left[ 2(\lambda - \alpha)p(\lambda) + (\lambda - \alpha)^2 p'(\lambda) + (\lambda - \alpha)^2 p(\lambda) x \right] e^{\lambda x} $$
$$ = (\lambda - \alpha)\left[ 2p(\lambda) + (\lambda - \alpha)p'(\lambda) + (\lambda - \alpha)p(\lambda) x \right] e^{\lambda x} $$
Since setting $\lambda = \alpha$ will make this expression zero, $L[x e^{\alpha x}] = 0$; $x e^{\alpha x}$ is a solution of Equation 17.3. You can verify that $e^{\alpha x}$ and $x e^{\alpha x}$ are linearly independent. Now we have generated all of the solutions for the differential equation.
If $\lambda = \alpha$ is a root of multiplicity $m$ then by repeatedly differentiating with respect to $\lambda$ you can show that the corresponding solutions are
$$ e^{\alpha x}, \quad x e^{\alpha x}, \quad x^2 e^{\alpha x}, \quad \ldots, \quad x^{m-1} e^{\alpha x}. $$
Example 17.1.7 Consider the equation
$$ y''' - 3y' + 2y = 0. $$
The substitution $y = e^{\lambda x}$ yields
$$ \lambda^3 - 3\lambda + 2 = (\lambda - 1)^2 (\lambda + 2) = 0. $$
Thus the general solution is
$$ y = c_1 e^x + c_2 x e^x + c_3 e^{-2x}. $$
Result 17.1.3 Consider the $n^{th}$ order constant coefficient equation
$$ \frac{d^n y}{dx^n} + a_{n-1}\frac{d^{n-1} y}{dx^{n-1}} + \cdots + a_1\frac{dy}{dx} + a_0 y = 0. $$
Let the factorization of the algebraic equation obtained with the substitution $y = e^{\lambda x}$ be
$$ (\lambda - \lambda_1)^{m_1} (\lambda - \lambda_2)^{m_2} \cdots (\lambda - \lambda_p)^{m_p} = 0. $$
A set of linearly independent solutions is given by
$$ \{ e^{\lambda_1 x}, x e^{\lambda_1 x}, \ldots, x^{m_1-1} e^{\lambda_1 x}, \ldots, e^{\lambda_p x}, x e^{\lambda_p x}, \ldots, x^{m_p-1} e^{\lambda_p x} \}. $$
If the coefficients of the differential equation are real, then we can find a real-valued set of solutions.
Example 17.1.8 Consider the equation
$$ \frac{d^4 y}{dx^4} + 2\frac{d^2 y}{dx^2} + y = 0. $$
The substitution $y = e^{\lambda x}$ yields
$$ \lambda^4 + 2\lambda^2 + 1 = (\lambda - ı)^2 (\lambda + ı)^2 = 0. $$
Thus the linearly independent solutions are
$$ e^{ıx}, \quad x e^{ıx}, \quad e^{-ıx} \quad \text{and} \quad x e^{-ıx}. $$
Noting that
$$ e^{ıx} = \cos(x) + ı\sin(x), $$
we can write the general solution in terms of sines and cosines.
$$ y = c_1 \cos x + c_2 \sin x + c_3 x\cos x + c_4 x\sin x $$
17.2 Euler Equations
Consider the equation
$$ L[y] = x^2\frac{d^2 y}{dx^2} + ax\frac{dy}{dx} + by = 0, \qquad x > 0. $$
Let's say, for example, that $y$ has units of distance and $x$ has units of time. Note that each term in the differential equation has the same dimension.
$$ (\text{time})^2\,\frac{(\text{distance})}{(\text{time})^2} = (\text{time})\,\frac{(\text{distance})}{(\text{time})} = (\text{distance}) $$
Thus this is a second order Euler, or equidimensional equation. We know that the first order Euler equation, $xy' - ay = 0$, has the solution $y = cx^a$. Thus for the second order equation we will try a solution of the form $y = x^\lambda$. The substitution $y = x^\lambda$ will transform the differential equation into an algebraic equation.
$$ L[x^\lambda] = x^2\frac{d^2}{dx^2}[x^\lambda] + ax\frac{d}{dx}[x^\lambda] + bx^\lambda = 0 $$
$$ \lambda(\lambda - 1)x^\lambda + a\lambda x^\lambda + bx^\lambda = 0 $$
$$ \lambda(\lambda - 1) + a\lambda + b = 0 $$
Factoring yields
$$ (\lambda - \lambda_1)(\lambda - \lambda_2) = 0. $$
If the two roots, $\lambda_1$ and $\lambda_2$, are distinct then the general solution is
$$ y = c_1 x^{\lambda_1} + c_2 x^{\lambda_2}. $$
If the roots are not distinct, $\lambda_1 = \lambda_2 = \lambda$, then we only have the one solution, $y = x^\lambda$. To generate the other solution we use the same approach as for the constant coefficient equation. We substitute $y = x^\lambda$ into the differential equation and differentiate with respect to $\lambda$.
$$ \frac{d}{d\lambda}L[x^\lambda] = L\left[ \frac{d}{d\lambda}x^\lambda \right] = L[\ln x\; x^\lambda] $$
Note that
$$ \frac{d}{d\lambda}x^\lambda = \frac{d}{d\lambda}e^{\lambda\ln x} = \ln x\; e^{\lambda\ln x} = \ln x\; x^\lambda. $$
Now we apply $L$ and then differentiate with respect to $\lambda$.
$$ \frac{d}{d\lambda}L[x^\lambda] = \frac{d}{d\lambda}\left( (\lambda - \alpha)^2 x^\lambda \right) = 2(\lambda - \alpha)x^\lambda + (\lambda - \alpha)^2 \ln x\; x^\lambda $$
Equating these two results,
$$ L[\ln x\; x^\lambda] = 2(\lambda - \alpha)x^\lambda + (\lambda - \alpha)^2 \ln x\; x^\lambda. $$
Setting $\lambda = \alpha$ will make the right hand side zero. Thus $y = \ln x\; x^\alpha$ is a solution.
If you are in the mood for a little algebra you can show by repeatedly differentiating with respect to $\lambda$ that if $\lambda = \alpha$ is a root of multiplicity $m$ in an $n^{th}$ order Euler equation then the associated solutions are
$$ x^\alpha, \quad \ln x\; x^\alpha, \quad (\ln x)^2 x^\alpha, \quad \ldots, \quad (\ln x)^{m-1} x^\alpha. $$
Example 17.2.1 Consider the Euler equation
$$ x y'' - y' + \frac{y}{x} = 0. $$
The substitution $y = x^\lambda$ yields the algebraic equation
$$ \lambda(\lambda - 1) - \lambda + 1 = (\lambda - 1)^2 = 0. $$
Thus the general solution is
$$ y = c_1 x + c_2 x\ln x. $$
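The following sympy sketch (added as an illustration, not from the original text) mechanizes the substitution $y = x^\lambda$ for this example and then verifies both solutions from the double root.

```python
import sympy as sp

x, lam = sp.symbols('x lambda', positive=True)

# Substitute y = x**lam into x*y'' - y' + y/x = 0 (Example 17.2.1).
y = x**lam
residual = x*sp.diff(y, x, 2) - sp.diff(y, x) + y/x
poly = sp.simplify(residual / x**(lam - 1))
print(sp.factor(poly))  # (lambda - 1)**2: a double root at lambda = 1.

# The general solution is c1*x + c2*x*log(x); both pieces satisfy the ODE.
for sol in (x, x*sp.log(x)):
    print(sp.simplify(x*sp.diff(sol, x, 2) - sp.diff(sol, x) + sol/x))  # 0
```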
17.2.1 Real-Valued Solutions
If the coefficients of the Euler equation are real, then the solution can be written in terms of functions that are real-valued when $x$ is real and positive (Result 16.2.2). If $\alpha \pm ı\beta$ are the roots of
$$ \lambda(\lambda - 1) + a\lambda + b = 0 $$
then the corresponding solutions of the Euler equation are
$$ x^{\alpha+ı\beta} \quad \text{and} \quad x^{\alpha-ı\beta}. $$
We can rewrite these as
$$ x^\alpha e^{ı\beta\ln x} \quad \text{and} \quad x^\alpha e^{-ı\beta\ln x}. $$
Note that the linear combinations
$$ \frac{x^\alpha e^{ı\beta\ln x} + x^\alpha e^{-ı\beta\ln x}}{2} = x^\alpha\cos(\beta\ln x), \qquad \frac{x^\alpha e^{ı\beta\ln x} - x^\alpha e^{-ı\beta\ln x}}{ı2} = x^\alpha\sin(\beta\ln x), $$
are real-valued solutions when $x$ is real and positive. Equivalently, we could take the real and imaginary parts of either $x^{\alpha+ı\beta}$ or $x^{\alpha-ı\beta}$.
$$ \Re\left( x^\alpha e^{ı\beta\ln x} \right) = x^\alpha\cos(\beta\ln x), \qquad \Im\left( x^\alpha e^{ı\beta\ln x} \right) = x^\alpha\sin(\beta\ln x) $$
Result 17.2.1 Consider the second order Euler equation
$$ x^2 y'' + (2a + 1)x y' + b y = 0. $$
The general solution of this differential equation is
$$ y = \begin{cases} x^{-a}\left( c_1 x^{\sqrt{a^2-b}} + c_2 x^{-\sqrt{a^2-b}} \right) & \text{if } a^2 > b, \\ x^{-a}\left( c_1 \cos\left( \sqrt{b-a^2}\,\ln x \right) + c_2 \sin\left( \sqrt{b-a^2}\,\ln x \right) \right) & \text{if } a^2 < b, \\ x^{-a}\left( c_1 + c_2 \ln x \right) & \text{if } a^2 = b. \end{cases} $$
The fundamental set of solutions at $x = \xi$ is
$$ \begin{cases} \left\{ \left( \frac{x}{\xi} \right)^{-a}\left( \cosh\left( \sqrt{a^2-b}\,\ln\frac{x}{\xi} \right) + \frac{a}{\sqrt{a^2-b}}\sinh\left( \sqrt{a^2-b}\,\ln\frac{x}{\xi} \right) \right),\; \left( \frac{x}{\xi} \right)^{-a}\frac{\xi}{\sqrt{a^2-b}}\sinh\left( \sqrt{a^2-b}\,\ln\frac{x}{\xi} \right) \right\} & \text{if } a^2 > b, \\ \left\{ \left( \frac{x}{\xi} \right)^{-a}\left( \cos\left( \sqrt{b-a^2}\,\ln\frac{x}{\xi} \right) + \frac{a}{\sqrt{b-a^2}}\sin\left( \sqrt{b-a^2}\,\ln\frac{x}{\xi} \right) \right),\; \left( \frac{x}{\xi} \right)^{-a}\frac{\xi}{\sqrt{b-a^2}}\sin\left( \sqrt{b-a^2}\,\ln\frac{x}{\xi} \right) \right\} & \text{if } a^2 < b, \\ \left\{ \left( \frac{x}{\xi} \right)^{-a}\left( 1 + a\ln\frac{x}{\xi} \right),\; \left( \frac{x}{\xi} \right)^{-a}\xi\ln\frac{x}{\xi} \right\} & \text{if } a^2 = b. \end{cases} $$
Example 17.2.2 Consider the Euler equation
$$ x^2 y'' - 3x y' + 13y = 0. $$
The substitution $y = x^\lambda$ yields
$$ \lambda(\lambda - 1) - 3\lambda + 13 = (\lambda - 2 - ı3)(\lambda - 2 + ı3) = 0. $$
The linearly independent solutions are
$$ \left\{ x^{2+ı3}, x^{2-ı3} \right\}. $$
We can put this in a more understandable form.
$$ x^{2+ı3} = x^2 e^{ı3\ln x} = x^2\cos(3\ln x) + ı x^2\sin(3\ln x) $$
We can write the general solution in terms of real-valued functions.
$$ y = c_1 x^2\cos(3\ln x) + c_2 x^2\sin(3\ln x) $$
Result 17.2.2 Consider the $n^{th}$ order Euler equation
$$ x^n\frac{d^n y}{dx^n} + a_{n-1}x^{n-1}\frac{d^{n-1} y}{dx^{n-1}} + \cdots + a_1 x\frac{dy}{dx} + a_0 y = 0. $$
Let the factorization of the algebraic equation obtained with the substitution $y = x^\lambda$ be
$$ (\lambda - \lambda_1)^{m_1} (\lambda - \lambda_2)^{m_2} \cdots (\lambda - \lambda_p)^{m_p} = 0. $$
A set of linearly independent solutions is given by
$$ \{ x^{\lambda_1}, \ln x\; x^{\lambda_1}, \ldots, (\ln x)^{m_1-1} x^{\lambda_1}, \ldots, x^{\lambda_p}, \ln x\; x^{\lambda_p}, \ldots, (\ln x)^{m_p-1} x^{\lambda_p} \}. $$
If the coefficients of the differential equation are real, then we can find a set of solutions that are real valued when $x$ is real and positive.
17.3 Exact Equations
Exact equations have the form
$$ \frac{d}{dx}F(x, y, y', y'', \ldots) = f(x). $$
If you can write an equation in the form of an exact equation, you can integrate to reduce the order by one (or, for a first order equation, solve it). We will consider a few examples to illustrate the method.
Example 17.3.1 Consider the equation
$$ y'' + x^2 y' + 2xy = 0. $$
We can rewrite this as
$$ \frac{d}{dx}\left[ y' + x^2 y \right] = 0. $$
Integrating yields a first order inhomogeneous equation.
$$ y' + x^2 y = c_1 $$
We multiply by the integrating factor $I(x) = \exp\left( \int x^2\, dx \right)$ to make this an exact equation.
$$ \frac{d}{dx}\left[ e^{x^3/3} y \right] = c_1 e^{x^3/3} $$
$$ e^{x^3/3} y = c_1 \int e^{x^3/3}\, dx + c_2 $$
$$ y = c_1 e^{-x^3/3} \int e^{x^3/3}\, dx + c_2 e^{-x^3/3} $$
Result 17.3.1 If you can write a differential equation in the form
$$ \frac{d}{dx}F(x, y, y', y'', \ldots) = f(x), $$
then you can integrate to reduce the order of the equation.
$$ F(x, y, y', y'', \ldots) = \int f(x)\, dx + c $$
17.4 Equations Without Explicit Dependence on y
Example 17.4.1 Consider the equation
$$ y'' + \sqrt{x}\, y' = 0. $$
This is a second order equation for $y$, but note that it is a first order equation for $y'$. We can solve directly for $y'$.
$$ \frac{d}{dx}\left[ \exp\left( \frac{2}{3}x^{3/2} \right) y' \right] = 0 $$
$$ y' = c_1 \exp\left( -\frac{2}{3}x^{3/2} \right) $$
Now we just integrate to get the solution for $y$.
$$ y = c_1 \int \exp\left( -\frac{2}{3}x^{3/2} \right) dx + c_2 $$
Result 17.4.1 If an $n^{th}$ order equation does not explicitly depend on $y$ then you can consider it as an equation of order $n - 1$ for $y'$.
17.5 Reduction of Order
Consider the second order linear equation
$$ L[y] \equiv y'' + p(x)y' + q(x)y = f(x). $$
Suppose that we know one homogeneous solution $y_1$. We make the substitution $y = uy_1$ and use the fact that $L[y_1] = 0$.
$$ L[uy_1] = u''y_1 + 2u'y_1' + uy_1'' + p(u'y_1 + uy_1') + quy_1 = 0 $$
$$ u''y_1 + u'(2y_1' + py_1) + u(y_1'' + py_1' + qy_1) = 0 $$
$$ u''y_1 + u'(2y_1' + py_1) = 0 $$
Thus we have reduced the problem to a first order equation for $u'$. An analogous result holds for higher order equations.
higher order equations.
Result 17.5.1 Consider the $n^{th}$ order linear differential equation
$$ y^{(n)} + p_{n-1}(x)y^{(n-1)} + \cdots + p_1(x)y' + p_0(x)y = f(x). $$
Let $y_1$ be a solution of the homogeneous equation. The substitution $y = uy_1$ will transform the problem into an $(n-1)^{th}$ order equation for $u'$. For the second order problem
$$ y'' + p(x)y' + q(x)y = f(x) $$
this reduced equation is
$$ u''y_1 + u'(2y_1' + py_1) = f(x). $$
Example 17.5.1 Consider the equation
$$ y'' + xy' - y = 0. $$
By inspection we see that $y_1 = x$ is a solution. We would like to find another linearly independent solution. The substitution $y = xu$ yields
$$ xu'' + (2 + x^2)u' = 0 $$
$$ u'' + \left( \frac{2}{x} + x \right)u' = 0 $$
The integrating factor is $I(x) = \exp(2\ln x + x^2/2) = x^2\exp(x^2/2)$.
$$ \frac{d}{dx}\left[ x^2 e^{x^2/2} u' \right] = 0 $$
$$ u' = c_1 x^{-2} e^{-x^2/2} $$
$$ u = c_1 \int x^{-2} e^{-x^2/2}\, dx + c_2 $$
$$ y = c_1 x \int x^{-2} e^{-x^2/2}\, dx + c_2 x $$
Thus we see that a second solution is
$$ y_2 = x \int x^{-2} e^{-x^2/2}\, dx. $$
17.6 *Reduction of Order and the Adjoint Equation
Let $L$ be the linear differential operator
$$ L[y] = p_n\frac{d^n y}{dx^n} + p_{n-1}\frac{d^{n-1} y}{dx^{n-1}} + \cdots + p_0 y, $$
where each $p_j$ is a $j$ times continuously differentiable complex valued function. Recall that the adjoint of $L$ is
$$ L^*[y] = (-1)^n\frac{d^n}{dx^n}\left( \overline{p_n}\, y \right) + (-1)^{n-1}\frac{d^{n-1}}{dx^{n-1}}\left( \overline{p_{n-1}}\, y \right) + \cdots + \overline{p_0}\, y. $$
If $u$ and $v$ are $n$ times continuously differentiable, then Lagrange's identity states
$$ \overline{v}L[u] - u\overline{L^*[v]} = \frac{d}{dx}B[u, v], $$
where
$$ B[u, v] = \sum_{m=1}^{n} \sum_{\substack{j+k=m-1 \\ j \ge 0,\, k \ge 0}} (-1)^j u^{(k)}\left( p_m\overline{v} \right)^{(j)}. $$
For second order equations,
$$ B[u, v] = u p_1\overline{v} + u' p_2\overline{v} - u\left( p_2\overline{v} \right)'. $$
(See Section 16.7.)
(See Section 16.7.)
If we can find a solution to the homogeneous adjoint equation, $L^*[y] = 0$, then we can reduce the order of the equation $L[y] = f(x)$. Let $\psi$ satisfy $L^*[\psi] = 0$. Substituting $u = y$, $v = \psi$ into Lagrange's identity yields
$$ \overline{\psi}L[y] - y\overline{L^*[\psi]} = \frac{d}{dx}B[y, \psi] $$
$$ \overline{\psi}L[y] = \frac{d}{dx}B[y, \psi]. $$
The equation $L[y] = f(x)$ is equivalent to the equation
$$ \frac{d}{dx}B[y, \psi] = \overline{\psi}f $$
$$ B[y, \psi] = \int \overline{\psi(x)}f(x)\, dx, $$
which is a linear equation in $y$ of order $n - 1$.
Example 17.6.1 Consider the equation
$$ L[y] = y'' - x^2 y' - 2xy = 0. $$
Method 1. Note that this is an exact equation.
$$ \frac{d}{dx}\left( y' - x^2 y \right) = 0 $$
$$ y' - x^2 y = c_1 $$
$$ \frac{d}{dx}\left( e^{-x^3/3} y \right) = c_1 e^{-x^3/3} $$
$$ y = c_1 e^{x^3/3} \int e^{-x^3/3}\, dx + c_2 e^{x^3/3} $$
Method 2. The adjoint equation is
$$ L^*[y] = y'' + x^2 y' = 0. $$
By inspection we see that $\psi = (\text{constant})$ is a solution of the adjoint equation. To simplify the algebra we will choose $\psi = 1$. Thus the equation $L[y] = 0$ is equivalent to
$$ B[y, 1] = c_1 $$
$$ y(-x^2) + \frac{d}{dx}[y](1) - y\frac{d}{dx}[1] = c_1 $$
$$ y' - x^2 y = c_1. $$
By using the adjoint equation to reduce the order we obtain the same solution as with Method 1.
17.7 Additional Exercises
Constant Coefficient Equations
Exercise 17.3 (mathematica/ode/techniques linear/constant.nb)
Find the solution of each one of the following initial value problems. Sketch the graph of the solution
and describe its behavior as t increases.
1. $6y'' - 5y' + y = 0$, $\quad y(0) = 4$, $\quad y'(0) = 0$
2. $y'' - 2y' + 5y = 0$, $\quad y(\pi/2) = 0$, $\quad y'(\pi/2) = 2$
3. $y'' + 4y' + 4y = 0$, $\quad y(-1) = 2$, $\quad y'(-1) = 1$
Hint, Solution
Exercise 17.4 (mathematica/ode/techniques linear/constant.nb)
Substitute $y = e^{\lambda x}$ to find two linearly independent solutions to
$$ y'' - 4y' + 13y = 0 $$
that are real-valued when $x$ is real-valued.
Hint, Solution
Exercise 17.5 (mathematica/ode/techniques linear/constant.nb)
Find the general solution to
$$ y''' - y'' + y' - y = 0. $$
Write the solution in terms of functions that are real-valued when x is real-valued.
Hint, Solution
Exercise 17.6
Substitute $y = e^{\lambda x}$ to find the fundamental set of solutions at $x = 0$ for each of the equations:
1. $y'' + y = 0$,
2. $y'' - y = 0$,
3. $y'' = 0$.
What is the fundamental set of solutions at $x = 1$ for each of these equations?
Hint, Solution
Exercise 17.7
Consider a ball of mass m hanging by an ideal spring of spring constant k. The ball is suspended in
a fluid which damps the motion. This resistance has a coefficient of friction, µ. Find the differential
equation for the displacement of the mass from its equilibrium position by balancing forces. Denote
this displacement by y(t). If the damping force is weak, the mass will have a decaying, oscillatory
motion. If the damping force is strong, the mass will not oscillate. The displacement will decay to
zero. The value of the damping which separates these two behaviors is called critical damping.
Find the solution which satisfies the initial conditions $y(0) = 0$, $y'(0) = 1$. Use the solutions
obtained in Exercise 17.2 or refer to Result 17.1.2.
Consider the case m = k = 1. Find the coefficient of friction for which the displacement of the
mass decays most rapidly. Plot the displacement for strong, weak and critical damping.
Hint, Solution
Exercise 17.8
Show that $y = c\cos(x - \phi)$ is the general solution of $y'' + y = 0$ where $c$ and $\phi$ are constants of integration. (It is not sufficient to show that $y = c\cos(x - \phi)$ satisfies the differential equation. $y = 0$ satisfies the differential equation, but it is certainly not the general solution.) Find constants $c$ and $\phi$ such that $y = \sin(x)$.
Is $y = c\cosh(x - \phi)$ the general solution of $y'' - y = 0$? Are there constants $c$ and $\phi$ such that $y = \sinh(x)$?
Hint, Solution
Exercise 17.9 (mathematica/ode/techniques linear/constant.nb)
Let y(t) be the solution of the initial-value problem
$$ y'' + 5y' + 6y = 0; \qquad y(0) = 1, \quad y'(0) = V. $$
For what values of V does y(t) remain nonnegative for all t > 0?
Hint, Solution
Exercise 17.10 (mathematica/ode/techniques linear/constant.nb)
Find two linearly independent solutions of
$$ y'' + \mathrm{sign}(x)\, y = 0, \qquad -\infty < x < \infty, $$
where $\mathrm{sign}(x) = \pm 1$ according as $x$ is positive or negative. (The solution should be continuous and
have a continuous first derivative.)
Hint, Solution
Euler Equations
Exercise 17.11
Find the general solution of
$$ x^2 y'' + xy' + y = 0, \qquad x > 0. $$
Hint, Solution
Exercise 17.12
Substitute $y = x^\lambda$ to find the general solution of
$$ x^2 y'' - 2xy' + 2y = 0. $$
Hint, Solution
Exercise 17.13 (mathematica/ode/techniques linear/constant.nb)
Substitute $y = x^\lambda$ to find the general solution of
$$ x y''' + y'' + \frac{1}{x} y' = 0. $$
Write the solution in terms of functions that are real-valued when $x$ is real-valued and positive.
Hint, Solution
Exercise 17.14
Find the general solution of
$$ x^2 y'' + (2a + 1)xy' + by = 0. $$
Hint, Solution
Exercise 17.15
Show that
$$ y_1 = e^{ax}, \qquad y_2 = \lim_{\alpha \to a} \frac{e^{\alpha x} - e^{-\alpha x}}{\alpha} $$
are linearly independent solutions of
$$ y'' - a^2 y = 0 $$
for all values of $a$. It is common to abuse notation and write the second solution as
$$ y_2 = \frac{e^{ax} - e^{-ax}}{a} $$
where the limit is taken if $a = 0$. Likewise show that
$$ y_1 = x^a, \qquad y_2 = \frac{x^a - x^{-a}}{a} $$
are linearly independent solutions of
$$ x^2 y'' + xy' - a^2 y = 0 $$
for all values of $a$.
Hint, Solution
Exercise 17.16 (mathematica/ode/techniques linear/constant.nb)
Find two linearly independent solutions (i.e., the general solution) of
(a) $x^2 y'' - 2xy' + 2y = 0$, \quad (b) $x^2 y'' - 2y = 0$, \quad (c) $x^2 y'' - xy' + y = 0$.
Hint, Solution
Exact Equations
Exercise 17.17
Solve the differential equation
$$ y'' + y'\sin x + y\cos x = 0. $$
Hint, Solution
Equations Without Explicit Dependence on y
Reduction of Order
Exercise 17.18
Consider
$$ (1 - x^2)y'' - 2xy' + 2y = 0, \qquad -1 < x < 1. $$
Verify that $y = x$ is a solution. Find the general solution.
Hint, Solution
Exercise 17.19
Consider the differential equation
$$ y'' - \frac{x+1}{x}y' + \frac{1}{x}y = 0. $$
Since the coefficients sum to zero, $\left( 1 - \frac{x+1}{x} + \frac{1}{x} = 0 \right)$, $y = e^x$ is a solution. Find another linearly independent solution.
Hint, Solution
Exercise 17.20
One solution of
$$ (1 - 2x)y'' + 4xy' - 4y = 0 $$
is $y = x$. Find the general solution.
Hint, Solution
Exercise 17.21
Find the general solution of
$$ (x - 1)y'' - xy' + y = 0, $$
given that one solution is $y = e^x$. (You may assume $x > 1$.)
Hint, Solution
*Reduction of Order and the Adjoint Equation
17.8 Hints
Hint 17.1
Substitute $y = e^{\lambda x}$ into the differential equation.
Hint 17.2
The fundamental set of solutions is a linear combination of the homogeneous solutions.
Constant Coefficient Equations
Hint 17.3
Hint 17.4
Hint 17.5
It is a constant coefficient equation.
Hint 17.6
Use the fact that if u(x) is a solution of a constant coefficient equation, then u(x + c) is also a
solution.
Hint 17.7
The force on the mass due to the spring is $-ky(t)$. The frictional force is $-\mu y'(t)$.
Note that the initial conditions describe the second fundamental solution at $t = 0$.
Note that for large $t$, $t e^{\alpha t}$ is much smaller than $e^{\beta t}$ if $\alpha < \beta$. (Prove this.)
Hint 17.8
By definition, the general solution of a second order differential equation is a two parameter family
of functions that satisfies the differential equation. The trigonometric identities in Appendix M may
be useful.
Hint 17.9
Hint 17.10
Euler Equations
Hint 17.11
Hint 17.12
Hint 17.13
Hint 17.14
Substitute $y = x^\lambda$ into the differential equation. Consider the three cases: $a^2 > b$, $a^2 < b$ and $a^2 = b$.
Hint 17.15
Hint 17.16
Exact Equations
Hint 17.17
It is an exact equation.
Equations Without Explicit Dependence on y
Reduction of Order
Hint 17.18
Hint 17.19
Use reduction of order to find the other solution.
Hint 17.20
Use reduction of order to find the other solution.
Hint 17.21
*Reduction of Order and the Adjoint Equation
17.9 Solutions
Solution 17.1
We substitute $y = e^{\lambda x}$ into the differential equation.
$$ y'' + 2ay' + by = 0 $$
$$ \lambda^2 + 2a\lambda + b = 0 $$
$$ \lambda = -a \pm \sqrt{a^2 - b} $$
If $a^2 > b$ then the two roots are distinct and real. The general solution is
$$ y = c_1 e^{(-a+\sqrt{a^2-b})x} + c_2 e^{(-a-\sqrt{a^2-b})x}. $$
If $a^2 < b$ then the two roots are distinct and complex-valued. We can write them as
$$ \lambda = -a \pm ı\sqrt{b - a^2}. $$
The general solution is
$$ y = c_1 e^{(-a+ı\sqrt{b-a^2})x} + c_2 e^{(-a-ı\sqrt{b-a^2})x}. $$
By taking the sum and difference of the two linearly independent solutions above, we can write the general solution as
$$ y = c_1 e^{-ax}\cos\left( \sqrt{b-a^2}\,x \right) + c_2 e^{-ax}\sin\left( \sqrt{b-a^2}\,x \right). $$
If $a^2 = b$ then the only root is $\lambda = -a$. The general solution in this case is then
$$ y = c_1 e^{-ax} + c_2 x e^{-ax}. $$
In summary, the general solution is
$$ y = \begin{cases} e^{-ax}\left( c_1 e^{\sqrt{a^2-b}\,x} + c_2 e^{-\sqrt{a^2-b}\,x} \right) & \text{if } a^2 > b, \\ e^{-ax}\left( c_1\cos\left( \sqrt{b-a^2}\,x \right) + c_2\sin\left( \sqrt{b-a^2}\,x \right) \right) & \text{if } a^2 < b, \\ e^{-ax}\left( c_1 + c_2 x \right) & \text{if } a^2 = b. \end{cases} $$
Solution 17.2
First we note that the general solution can be written,
$$ y = \begin{cases} e^{-ax}\left( c_1\cosh\left( \sqrt{a^2-b}\,x \right) + c_2\sinh\left( \sqrt{a^2-b}\,x \right) \right) & \text{if } a^2 > b, \\ e^{-ax}\left( c_1\cos\left( \sqrt{b-a^2}\,x \right) + c_2\sin\left( \sqrt{b-a^2}\,x \right) \right) & \text{if } a^2 < b, \\ e^{-ax}\left( c_1 + c_2 x \right) & \text{if } a^2 = b. \end{cases} $$
We first consider the case $a^2 > b$. The derivative is
$$ y' = e^{-ax}\left[ \left( -ac_1 + \sqrt{a^2-b}\,c_2 \right)\cosh\left( \sqrt{a^2-b}\,x \right) + \left( -ac_2 + \sqrt{a^2-b}\,c_1 \right)\sinh\left( \sqrt{a^2-b}\,x \right) \right]. $$
The conditions, $y_1(0) = 1$ and $y_1'(0) = 0$, for the first solution become,
$$ c_1 = 1, \qquad -ac_1 + \sqrt{a^2-b}\,c_2 = 0, $$
$$ c_1 = 1, \qquad c_2 = \frac{a}{\sqrt{a^2-b}}. $$
The conditions, $y_2(0) = 0$ and $y_2'(0) = 1$, for the second solution become,
$$ c_1 = 0, \qquad -ac_1 + \sqrt{a^2-b}\,c_2 = 1, $$
$$ c_1 = 0, \qquad c_2 = \frac{1}{\sqrt{a^2-b}}. $$
The fundamental set of solutions is
$$ \left\{ e^{-ax}\left( \cosh\left( \sqrt{a^2-b}\,x \right) + \frac{a}{\sqrt{a^2-b}}\sinh\left( \sqrt{a^2-b}\,x \right) \right),\; e^{-ax}\frac{1}{\sqrt{a^2-b}}\sinh\left( \sqrt{a^2-b}\,x \right) \right\}. $$
Now consider the case $a^2 < b$. The derivative is
$$ y' = e^{-ax}\left[ \left( -ac_1 + \sqrt{b-a^2}\,c_2 \right)\cos\left( \sqrt{b-a^2}\,x \right) + \left( -ac_2 - \sqrt{b-a^2}\,c_1 \right)\sin\left( \sqrt{b-a^2}\,x \right) \right]. $$
Clearly, the fundamental set of solutions is
$$ \left\{ e^{-ax}\left( \cos\left( \sqrt{b-a^2}\,x \right) + \frac{a}{\sqrt{b-a^2}}\sin\left( \sqrt{b-a^2}\,x \right) \right),\; e^{-ax}\frac{1}{\sqrt{b-a^2}}\sin\left( \sqrt{b-a^2}\,x \right) \right\}. $$
Finally we consider the case $a^2 = b$. The derivative is
$$ y' = e^{-ax}\left( -ac_1 + c_2 - ac_2 x \right). $$
The conditions, $y_1(0) = 1$ and $y_1'(0) = 0$, for the first solution become,
$$ c_1 = 1, \qquad -ac_1 + c_2 = 0, $$
$$ c_1 = 1, \qquad c_2 = a. $$
The conditions, $y_2(0) = 0$ and $y_2'(0) = 1$, for the second solution become,
$$ c_1 = 0, \qquad -ac_1 + c_2 = 1, $$
$$ c_1 = 0, \qquad c_2 = 1. $$
The fundamental set of solutions is
$$ \left\{ (1 + ax)e^{-ax},\; x e^{-ax} \right\}. $$
In summary, the fundamental set of solutions at $x = 0$ is
$$ \begin{cases} \left\{ e^{-ax}\left( \cosh\left( \sqrt{a^2-b}\,x \right) + \frac{a}{\sqrt{a^2-b}}\sinh\left( \sqrt{a^2-b}\,x \right) \right),\; e^{-ax}\frac{1}{\sqrt{a^2-b}}\sinh\left( \sqrt{a^2-b}\,x \right) \right\} & \text{if } a^2 > b, \\ \left\{ e^{-ax}\left( \cos\left( \sqrt{b-a^2}\,x \right) + \frac{a}{\sqrt{b-a^2}}\sin\left( \sqrt{b-a^2}\,x \right) \right),\; e^{-ax}\frac{1}{\sqrt{b-a^2}}\sin\left( \sqrt{b-a^2}\,x \right) \right\} & \text{if } a^2 < b, \\ \left\{ (1 + ax)e^{-ax},\; x e^{-ax} \right\} & \text{if } a^2 = b. \end{cases} $$
Constant Coefficient Equations
Solution 17.3
1. We consider the problem
$$ 6y'' - 5y' + y = 0, \qquad y(0) = 4, \quad y'(0) = 0. $$
We make the substitution $y = e^{\lambda t}$ in the differential equation.
$$ 6\lambda^2 - 5\lambda + 1 = 0 $$
$$ (2\lambda - 1)(3\lambda - 1) = 0 $$
$$ \lambda = \left\{ \frac{1}{3}, \frac{1}{2} \right\} $$
The general solution of the differential equation is
$$ y = c_1 e^{t/3} + c_2 e^{t/2}. $$
[Figure 17.1: The solution of $6y'' - 5y' + y = 0$, $y(0) = 4$, $y'(0) = 0$.]
We apply the initial conditions to determine the constants.
$$ c_1 + c_2 = 4, \qquad \frac{c_1}{3} + \frac{c_2}{2} = 0 $$
$$ c_1 = 12, \qquad c_2 = -8 $$
The solution subject to the initial conditions is
$$ y = 12 e^{t/3} - 8 e^{t/2}. $$
The solution is plotted in Figure 17.1. The solution tends to $-\infty$ as $t \to \infty$.
2. We consider the problem
$$ y'' - 2y' + 5y = 0, \qquad y(\pi/2) = 0, \quad y'(\pi/2) = 2. $$
We make the substitution $y = e^{\lambda t}$ in the differential equation.
$$ \lambda^2 - 2\lambda + 5 = 0 $$
$$ \lambda = 1 \pm \sqrt{1 - 5} $$
$$ \lambda = \{ 1 + ı2, 1 - ı2 \} $$
The general solution of the differential equation is
$$ y = c_1 e^t\cos(2t) + c_2 e^t\sin(2t). $$
We apply the initial conditions to determine the constants.
$$ y(\pi/2) = 0 \quad \Rightarrow \quad -c_1 e^{\pi/2} = 0 \quad \Rightarrow \quad c_1 = 0 $$
$$ y'(\pi/2) = 2 \quad \Rightarrow \quad -2c_2 e^{\pi/2} = 2 \quad \Rightarrow \quad c_2 = -e^{-\pi/2} $$
The solution subject to the initial conditions is
$$ y = -e^{t-\pi/2}\sin(2t). $$
The solution is plotted in Figure 17.2. The solution oscillates with an amplitude that tends to $\infty$ as $t \to \infty$.
3. We consider the problem
$$ y'' + 4y' + 4y = 0, \qquad y(-1) = 2, \quad y'(-1) = 1. $$
We make the substitution $y = e^{\lambda t}$ in the differential equation.
$$ \lambda^2 + 4\lambda + 4 = 0 $$
$$ (\lambda + 2)^2 = 0 $$
$$ \lambda = -2 $$
[Figure 17.2: The solution of $y'' - 2y' + 5y = 0$, $y(\pi/2) = 0$, $y'(\pi/2) = 2$.]
[Figure 17.3: The solution of $y'' + 4y' + 4y = 0$, $y(-1) = 2$, $y'(-1) = 1$.]
The general solution of the differential equation is
$$ y = c_1 e^{-2t} + c_2 t e^{-2t}. $$
We apply the initial conditions to determine the constants.
$$ c_1 e^2 - c_2 e^2 = 2, \qquad -2c_1 e^2 + 3c_2 e^2 = 1 $$
$$ c_1 = 7 e^{-2}, \qquad c_2 = 5 e^{-2} $$
The solution subject to the initial conditions is
$$ y = (7 + 5t) e^{-2(t+1)} $$
The solution is plotted in Figure 17.3. The solution vanishes as $t \to \infty$.
$$ \lim_{t \to \infty} (7 + 5t) e^{-2(t+1)} = \lim_{t \to \infty} \frac{7 + 5t}{e^{2(t+1)}} = \lim_{t \to \infty} \frac{5}{2 e^{2(t+1)}} = 0 $$
Solution 17.4
$$ y'' - 4y' + 13y = 0. $$
With the substitution $y = e^{\lambda x}$ we obtain
$$ \lambda^2 e^{\lambda x} - 4\lambda e^{\lambda x} + 13 e^{\lambda x} = 0 $$
$$ \lambda^2 - 4\lambda + 13 = 0 $$
$$ \lambda = 2 \pm ı3. $$
Thus two linearly independent solutions are
$$ e^{(2+ı3)x} \quad \text{and} \quad e^{(2-ı3)x}. $$
Noting that
$$ e^{(2+ı3)x} = e^{2x}\left[ \cos(3x) + ı\sin(3x) \right] $$
$$ e^{(2-ı3)x} = e^{2x}\left[ \cos(3x) - ı\sin(3x) \right], $$
we can write the two linearly independent solutions
$$ y_1 = e^{2x}\cos(3x), \qquad y_2 = e^{2x}\sin(3x). $$
Solution 17.5
We note that
$$ y''' - y'' + y' - y = 0 $$
is a constant coefficient equation. The substitution, $y = e^{\lambda x}$, yields
$$ \lambda^3 - \lambda^2 + \lambda - 1 = 0 $$
$$ (\lambda - 1)(\lambda - ı)(\lambda + ı) = 0. $$
The corresponding solutions are $e^x$, $e^{ıx}$, and $e^{-ıx}$. We can write the general solution as
$$ y = c_1 e^x + c_2\cos x + c_3\sin x. $$
Solution 17.6
We start with the equation $y'' + y = 0$. We substitute $y = e^{\lambda x}$ into the differential equation to obtain
$$ \lambda^2 + 1 = 0, \qquad \lambda = \pm ı. $$
A linearly independent set of solutions is
$$ \{ e^{ıx}, e^{-ıx} \}. $$
The fundamental set of solutions has the form
$$ y_1 = c_1 e^{ıx} + c_2 e^{-ıx}, \qquad y_2 = c_3 e^{ıx} + c_4 e^{-ıx}. $$
By applying the constraints
$$ y_1(0) = 1, \quad y_1'(0) = 0, \qquad y_2(0) = 0, \quad y_2'(0) = 1, $$
we obtain
$$ y_1 = \frac{e^{ıx} + e^{-ıx}}{2} = \cos x, \qquad y_2 = \frac{e^{ıx} - e^{-ıx}}{ı2} = \sin x. $$
Now consider the equation $y'' - y = 0$. By substituting $y = e^{\lambda x}$ we find that a set of solutions is
$$ \{ e^x, e^{-x} \}. $$
By taking linear combinations of these we see that another set of solutions is
$$ \{ \cosh x, \sinh x \}. $$
Note that this is the fundamental set of solutions.
Next consider $y'' = 0$. We can find the solutions by substituting $y = e^{\lambda x}$ or by integrating the equation twice. The fundamental set of solutions at $x = 0$ is
$$ \{ 1, x \}. $$
Note that if $u(x)$ is a solution of a constant coefficient differential equation, then $u(x + c)$ is also a solution. Also note that if $u(x)$ satisfies $y(0) = a$, $y'(0) = b$, then $u(x - x_0)$ satisfies $y(x_0) = a$, $y'(x_0) = b$. Thus the fundamental sets of solutions at $x = 1$ are
1. $\{ \cos(x - 1), \sin(x - 1) \}$,
2. $\{ \cosh(x - 1), \sinh(x - 1) \}$,
3. $\{ 1, x - 1 \}$.
Solution 17.7
Let $y(t)$ denote the displacement of the mass from equilibrium. The forces on the mass are $-ky(t)$ due to the spring and $-\mu y'(t)$ due to friction. We equate the external forces to $my''(t)$ to find the differential equation of the motion.
$$ my'' = -ky - \mu y' $$
$$ y'' + \frac{\mu}{m}y' + \frac{k}{m}y = 0 $$
The solution which satisfies the initial conditions $y(0) = 0$, $y'(0) = 1$ is
$$ y(t) = \begin{cases} e^{-\mu t/(2m)}\frac{2m}{\sqrt{\mu^2 - 4km}}\sinh\left( \sqrt{\mu^2 - 4km}\; t/(2m) \right) & \text{if } \mu^2 > 4km, \\ e^{-\mu t/(2m)}\frac{2m}{\sqrt{4km - \mu^2}}\sin\left( \sqrt{4km - \mu^2}\; t/(2m) \right) & \text{if } \mu^2 < 4km, \\ t\, e^{-\mu t/(2m)} & \text{if } \mu^2 = 4km. \end{cases} $$
We respectively call these cases: strongly damped, weakly damped and critically damped. In the case that $m = k = 1$ the solution is
$$ y(t) = \begin{cases} e^{-\mu t/2}\frac{2}{\sqrt{\mu^2 - 4}}\sinh\left( \sqrt{\mu^2 - 4}\; t/2 \right) & \text{if } \mu > 2, \\ e^{-\mu t/2}\frac{2}{\sqrt{4 - \mu^2}}\sin\left( \sqrt{4 - \mu^2}\; t/2 \right) & \text{if } \mu < 2, \\ t\, e^{-t} & \text{if } \mu = 2. \end{cases} $$
Note that when $t$ is large, $t e^{-t}$ is much smaller than $e^{-\mu t/2}$ for $\mu < 2$. To prove this we examine the ratio of these functions as $t \to \infty$.
$$ \lim_{t \to \infty} \frac{t e^{-t}}{e^{-\mu t/2}} = \lim_{t \to \infty} \frac{t}{e^{(1-\mu/2)t}} = \lim_{t \to \infty} \frac{1}{(1 - \mu/2)e^{(1-\mu/2)t}} = 0 $$
Using this result, we see that the critically damped solution decays faster than the weakly damped solution.
We can write the strongly damped solution as
$$ e^{-\mu t/2}\frac{1}{\sqrt{\mu^2 - 4}}\left( e^{\sqrt{\mu^2-4}\; t/2} - e^{-\sqrt{\mu^2-4}\; t/2} \right). $$
[Figure 17.4: Strongly, weakly and critically damped solutions.]
For large $t$, the dominant factor is $e^{\left( \sqrt{\mu^2-4} - \mu \right)t/2}$. Note that for $\mu > 2$,
$$ \sqrt{\mu^2 - 4} = \sqrt{(\mu + 2)(\mu - 2)} > \mu - 2. $$
Therefore we have the bounds
$$ -2 < \sqrt{\mu^2 - 4} - \mu < 0. $$
This shows that the critically damped solution decays faster than the strongly damped solution. $\mu = 2$ gives the fastest decaying solution. Figure 17.4 shows the solution for $\mu = 4$, $\mu = 1$ and $\mu = 2$.
Solution 17.8
Clearly $y = c\cos(x - \phi)$ satisfies the differential equation $y'' + y = 0$. Since it is a two-parameter family of functions, it must be the general solution.
Using a trigonometric identity we can rewrite the solution as
$$ y = c\cos\phi\cos x + c\sin\phi\sin x. $$
Setting this equal to $\sin x$ gives us the two equations
$$ c\cos\phi = 0, \qquad c\sin\phi = 1, $$
which have the solutions $c = 1$, $\phi = (2n + 1/2)\pi$, and $c = -1$, $\phi = (2n - 1/2)\pi$, for $n \in \mathbb{Z}$.
Clearly $y = c\cosh(x - \phi)$ satisfies the differential equation $y'' - y = 0$. Since it is a two-parameter family of functions, it must be the general solution.
Using a hyperbolic identity we can rewrite the solution as
$$ y = c\cosh\phi\cosh x - c\sinh\phi\sinh x. $$
Setting this equal to $\sinh x$ gives us the two equations
$$ c\cosh\phi = 0, \qquad -c\sinh\phi = 1, $$
which have the solutions $c = ı$, $\phi = ı(2n + 1/2)\pi$, and $c = -ı$, $\phi = ı(2n - 1/2)\pi$, for $n \in \mathbb{Z}$.
Solution 17.9
We substitute $y = e^{\lambda t}$ into the differential equation.
$$ \lambda^2 e^{\lambda t} + 5\lambda e^{\lambda t} + 6 e^{\lambda t} = 0 $$
$$ \lambda^2 + 5\lambda + 6 = 0 $$
$$ (\lambda + 2)(\lambda + 3) = 0 $$
The general solution of the differential equation is
$$ y = c_1 e^{-2t} + c_2 e^{-3t}. $$
The initial conditions give us the constraints:
$$ c_1 + c_2 = 1, \qquad -2c_1 - 3c_2 = V. $$
The solution subject to the initial conditions is
$$ y = (3 + V) e^{-2t} - (2 + V) e^{-3t}. $$
This solution will be non-negative for $t > 0$ if $V \ge -3$.
Solution 17.10
For negative $x$, the differential equation is
$$ y'' - y = 0. $$
We substitute $y = e^{\lambda x}$ into the differential equation to find the solutions.
$$ \lambda^2 - 1 = 0, \qquad \lambda = \pm 1, \qquad y = \{ e^x, e^{-x} \} $$
We can take linear combinations to write the solutions in terms of the hyperbolic sine and cosine.
$$ y = \{ \cosh(x), \sinh(x) \} $$
For positive $x$, the differential equation is
$$ y'' + y = 0. $$
We substitute $y = e^{\lambda x}$ into the differential equation to find the solutions.
$$ \lambda^2 + 1 = 0, \qquad \lambda = \pm ı, \qquad y = \{ e^{ıx}, e^{-ıx} \} $$
We can take linear combinations to write the solutions in terms of the sine and cosine.
$$ y = \{ \cos(x), \sin(x) \} $$
We will find the fundamental set of solutions at $x = 0$. That is, we will find a set of solutions, $\{y_1, y_2\}$, that satisfy the conditions:
$$ y_1(0) = 1, \quad y_1'(0) = 0, \qquad y_2(0) = 0, \quad y_2'(0) = 1. $$
Clearly, these solutions are
$$ y_1 = \begin{cases} \cosh(x) & x < 0 \\ \cos(x) & x \ge 0 \end{cases} \qquad y_2 = \begin{cases} \sinh(x) & x < 0 \\ \sin(x) & x \ge 0 \end{cases} $$
Euler Equations
Solution 17.11
We consider an Euler equation,
$$ x^2 y'' + xy' + y = 0, \qquad x > 0. $$
We make the change of independent variable $\xi = \ln x$, $u(\xi) = y(x)$ to obtain
$$ u'' + u = 0. $$
We make the substitution $u(\xi) = e^{\lambda\xi}$.
$$ \lambda^2 + 1 = 0, \qquad \lambda = \pm ı $$
A set of linearly independent solutions for $u(\xi)$ is
$$ \{ e^{ı\xi}, e^{-ı\xi} \}. $$
Since
$$ \cos\xi = \frac{e^{ı\xi} + e^{-ı\xi}}{2} \quad \text{and} \quad \sin\xi = \frac{e^{ı\xi} - e^{-ı\xi}}{ı2}, $$
another linearly independent set of solutions is
$$ \{ \cos\xi, \sin\xi \}. $$
The general solution for $y(x)$ is
$$ y(x) = c_1\cos(\ln x) + c_2\sin(\ln x). $$
Solution 17.12
Consider the differential equation
$$ x^2 y'' - 2xy' + 2y = 0. $$
With the substitution $y = x^\lambda$ this equation becomes
$$ \lambda(\lambda - 1) - 2\lambda + 2 = 0 $$
$$ \lambda^2 - 3\lambda + 2 = 0 $$
$$ \lambda = 1, 2. $$
The general solution is then
$$ y = c_1 x + c_2 x^2. $$
Solution 17.13
We note that
$$ xy''' + y'' + \frac{1}{x}y' = 0 $$
is an Euler equation. The substitution $y = x^\lambda$ yields
$$ \lambda^3 - 3\lambda^2 + 2\lambda + \lambda^2 - \lambda + \lambda = 0 $$
$$ \lambda^3 - 2\lambda^2 + 2\lambda = 0. $$
The three roots of this algebraic equation are
$$ \lambda = 0, \quad \lambda = 1 + ı, \quad \lambda = 1 - ı. $$
The corresponding solutions to the differential equation are
$$ y = x^0, \quad y = x^{1+ı}, \quad y = x^{1-ı}, $$
$$ y = 1, \quad y = x e^{ı\ln x}, \quad y = x e^{-ı\ln x}. $$
We can write the general solution as
$$ y = c_1 + c_2 x\cos(\ln x) + c_3 x\sin(\ln x). $$
Solution 17.14
We substitute $y = x^\lambda$ into the differential equation.
$$ x^2 y'' + (2a + 1)xy' + by = 0 $$
$$ \lambda(\lambda - 1) + (2a + 1)\lambda + b = 0 $$
$$ \lambda^2 + 2a\lambda + b = 0 $$
$$ \lambda = -a \pm \sqrt{a^2 - b} $$
For $a^2 > b$ the general solution is
$$ y = c_1 x^{-a+\sqrt{a^2-b}} + c_2 x^{-a-\sqrt{a^2-b}}. $$
For $a^2 < b$, the general solution is
$$ y = c_1 x^{-a+ı\sqrt{b-a^2}} + c_2 x^{-a-ı\sqrt{b-a^2}}. $$
By taking the sum and difference of these solutions, we can write the general solution as
$$ y = c_1 x^{-a}\cos\left( \sqrt{b-a^2}\,\ln x \right) + c_2 x^{-a}\sin\left( \sqrt{b-a^2}\,\ln x \right). $$
For $a^2 = b$, the quadratic in $\lambda$ has a double root at $\lambda = -a$. The general solution of the differential equation is
$$ y = c_1 x^{-a} + c_2 x^{-a}\ln x. $$
In summary, the general solution is:
$$ y = \begin{cases} x^{-a}\left( c_1 x^{\sqrt{a^2-b}} + c_2 x^{-\sqrt{a^2-b}} \right) & \text{if } a^2 > b, \\ x^{-a}\left( c_1\cos\left( \sqrt{b-a^2}\,\ln x \right) + c_2\sin\left( \sqrt{b-a^2}\,\ln x \right) \right) & \text{if } a^2 < b, \\ x^{-a}\left( c_1 + c_2\ln x \right) & \text{if } a^2 = b. \end{cases} $$
Solution 17.15
For $a \neq 0$, two linearly independent solutions of
$$ y'' - a^2 y = 0 $$
are
$$ y_1 = e^{ax}, \qquad y_2 = e^{-ax}. $$
For $a = 0$, we have
$$ y_1 = e^{0x} = 1, \qquad y_2 = x e^{0x} = x. $$
In this case the solutions are defined by
$$ y_1 = \left[ e^{ax} \right]_{a=0}, \qquad y_2 = \left[ \frac{d}{da}e^{ax} \right]_{a=0}. $$
By the definition of differentiation, $f'(0)$ is
$$ f'(0) = \lim_{a \to 0} \frac{f(a) - f(-a)}{2a}. $$
Thus the second solution in the case $a = 0$ is
$$ y_2 = \lim_{a \to 0} \frac{e^{ax} - e^{-ax}}{a}. $$
Consider the solutions
$$ y_1 = e^{ax}, \qquad y_2 = \lim_{\alpha \to a} \frac{e^{\alpha x} - e^{-\alpha x}}{\alpha}. $$
Clearly $y_1$ is a solution for all $a$. For $a \neq 0$, $y_2$ is a linear combination of $e^{ax}$ and $e^{-ax}$ and is thus a solution. Since the coefficient of $e^{-ax}$ in this linear combination is non-zero, it is linearly independent to $y_1$. For $a = 0$, $y_2$ is twice the derivative of $e^{ax}$ evaluated at $a = 0$. Thus it is a solution.
For $a \neq 0$, two linearly independent solutions of
$$ x^2 y'' + xy' - a^2 y = 0 $$
are
$$ y_1 = x^a, \qquad y_2 = x^{-a}. $$
For $a = 0$, we have
$$ y_1 = \left[ x^a \right]_{a=0} = 1, \qquad y_2 = \left[ \frac{d}{da}x^a \right]_{a=0} = \ln x. $$
Consider the solutions
$$ y_1 = x^a, \qquad y_2 = \frac{x^a - x^{-a}}{a}. $$
Clearly $y_1$ is a solution for all $a$. For $a \neq 0$, $y_2$ is a linear combination of $x^a$ and $x^{-a}$ and is thus a solution. For $a = 0$, $y_2$ is twice the derivative of $x^a$ evaluated at $a = 0$. Thus it is a solution.
Solution 17.16
1.
$$ x^2 y'' - 2xy' + 2y = 0 $$
We substitute $y = x^\lambda$ into the differential equation.
$$ \lambda(\lambda - 1) - 2\lambda + 2 = 0 $$
$$ \lambda^2 - 3\lambda + 2 = 0 $$
$$ (\lambda - 1)(\lambda - 2) = 0 $$
$$ y = c_1 x + c_2 x^2 $$
2.
$$ x^2 y'' - 2y = 0 $$
We substitute $y = x^\lambda$ into the differential equation.
$$ \lambda(\lambda - 1) - 2 = 0 $$
$$ \lambda^2 - \lambda - 2 = 0 $$
$$ (\lambda + 1)(\lambda - 2) = 0 $$
$$ y = \frac{c_1}{x} + c_2 x^2 $$
3.
$$ x^2 y'' - xy' + y = 0 $$
We substitute $y = x^\lambda$ into the differential equation.
$$ \lambda(\lambda - 1) - \lambda + 1 = 0 $$
$$ \lambda^2 - 2\lambda + 1 = 0 $$
$$ (\lambda - 1)^2 = 0 $$
Since there is a double root, the solution is:
$$ y = c_1 x + c_2 x\ln x. $$
Exact Equations
Solution 17.17
We note that
$$ y'' + y'\sin x + y\cos x = 0 $$
is an exact equation.
$$ \frac{d}{dx}\left[ y' + y\sin x \right] = 0 $$
$$ y' + y\sin x = c_1 $$
$$ \frac{d}{dx}\left[ y\, e^{-\cos x} \right] = c_1 e^{-\cos x} $$
$$ y = c_1 e^{\cos x} \int e^{-\cos x}\, dx + c_2 e^{\cos x} $$
Equations Without Explicit Dependence on y
Reduction of Order
Solution 17.18
$$ (1 - x^2)y'' - 2xy' + 2y = 0, \qquad -1 < x < 1 $$
We substitute $y = x$ into the differential equation to check that it is a solution.
$$ (1 - x^2)(0) - 2x(1) + 2x = 0 $$
We look for a second solution of the form $y = xu$. We substitute this into the differential equation and use the fact that $x$ is a solution.
$$ (1 - x^2)(xu'' + 2u') - 2x(xu' + u) + 2xu = 0 $$
$$ (1 - x^2)(xu'' + 2u') - 2x(xu') = 0 $$
$$ (1 - x^2)xu'' + (2 - 4x^2)u' = 0 $$
$$ \frac{u''}{u'} = \frac{2 - 4x^2}{x(x^2 - 1)} $$
$$ \frac{u''}{u'} = -\frac{2}{x} + \frac{1}{1-x} - \frac{1}{1+x} $$
$$ \ln(u') = -2\ln(x) - \ln(1 - x) - \ln(1 + x) + \text{const} $$
$$ \ln(u') = \ln\left( \frac{c}{x^2(1-x)(1+x)} \right) $$
$$ u' = \frac{c}{x^2(1-x)(1+x)} $$
$$ u' = c\left( \frac{1}{x^2} + \frac{1}{2(1-x)} + \frac{1}{2(1+x)} \right) $$
$$ u = c\left( -\frac{1}{x} - \frac{1}{2}\ln(1-x) + \frac{1}{2}\ln(1+x) \right) + \text{const} $$
$$ u = c\left( -\frac{1}{x} + \frac{1}{2}\ln\left( \frac{1+x}{1-x} \right) \right) + \text{const} $$
A second linearly independent solution is
$$ y = -1 + \frac{x}{2}\ln\left( \frac{1+x}{1-x} \right). $$
Solution 17.19
We are given that $y = e^x$ is a solution of
$$ y'' - \frac{x+1}{x}y' + \frac{1}{x}y = 0. $$
To find another linearly independent solution, we will use reduction of order. Substituting
$$ y = u e^x, \qquad y' = (u' + u)e^x, \qquad y'' = (u'' + 2u' + u)e^x $$
into the differential equation yields
$$ u'' + 2u' + u - \frac{x+1}{x}(u' + u) + \frac{1}{x}u = 0 $$
$$ u'' + \frac{x-1}{x}u' = 0 $$
$$ \frac{d}{dx}\left[ u' \exp\left( \int \left( 1 - \frac{1}{x} \right) dx \right) \right] = 0 $$
$$ u' e^{x - \ln x} = c_1 $$
$$ u' = c_1 x e^{-x} $$
$$ u = c_1 \int x e^{-x}\, dx + c_2 $$
$$ u = c_1(x e^{-x} + e^{-x}) + c_2 $$
$$ y = c_1(x + 1) + c_2 e^x $$
Thus a second linearly independent solution is
$$ y = x + 1. $$
Solution 17.20
We are given that $y = x$ is a solution of
$$ (1 - 2x)y'' + 4xy' - 4y = 0. $$
To find another linearly independent solution, we will use reduction of order. Substituting
$$ y = xu, \qquad y' = xu' + u, \qquad y'' = xu'' + 2u' $$
into the differential equation yields
$$ (1 - 2x)(xu'' + 2u') + 4x(xu' + u) - 4xu = 0, $$
$$ (1 - 2x)xu'' + (4x^2 - 4x + 2)u' = 0, $$
$$ \frac{u''}{u'} = \frac{4x^2 - 4x + 2}{x(2x - 1)}, $$
$$ \frac{u''}{u'} = 2 - \frac{2}{x} + \frac{2}{2x - 1}, $$
$$ \ln(u') = 2x - 2\ln x + \ln(2x - 1) + \text{const}, $$
$$ u' = c_1\left( \frac{2}{x} - \frac{1}{x^2} \right)e^{2x}, $$
$$ u = c_1\frac{1}{x}e^{2x} + c_2, $$
$$ y = c_1 e^{2x} + c_2 x. $$
Solution 17.21
One solution of
$$ (x - 1)y'' - xy' + y = 0, $$
is $y_1 = e^x$. We find a second solution with reduction of order. We make the substitution $y_2 = u e^x$ in the differential equation. We determine $u$ up to an additive constant.
$$ (x - 1)(u'' + 2u' + u)e^x - x(u' + u)e^x + u e^x = 0 $$
$$ (x - 1)u'' + (x - 2)u' = 0 $$
$$ \frac{u''}{u'} = -\frac{x - 2}{x - 1} = -1 + \frac{1}{x - 1} $$
$$ \ln|u'| = -x + \ln|x - 1| + c $$
$$ u' = c(x - 1)e^{-x} $$
$$ u = -cx e^{-x} $$
The second solution of the differential equation is $y_2 = x$.
*Reduction of Order and the Adjoint Equation
Chapter 18
Techniques for Nonlinear
Differential Equations
In mathematics you don’t understand things. You just get used to them.
- Johann von Neumann
18.1 Bernoulli Equations
Sometimes it is possible to solve a nonlinear equation by making a change of the dependent variable that converts it into a linear equation. One of the most important such equations is the Bernoulli equation
$$ \frac{dy}{dt} + p(t)y = q(t)y^\alpha, \qquad \alpha \neq 1. $$
The change of dependent variable $u = y^{1-\alpha}$ will yield a first order linear equation for $u$ which when solved will give us an implicit solution for $y$. (See Exercise 18.4.)
Result 18.1.1 The Bernoulli equation $y' + p(t)y = q(t)y^\alpha$, $\alpha \neq 1$ can be transformed to the first order linear equation
$$ \frac{du}{dt} + (1 - \alpha)p(t)u = (1 - \alpha)q(t) $$
with the change of variables $u = y^{1-\alpha}$.
Example 18.1.1 Consider the Bernoulli equation
$$ y' = \frac{2}{x}y + y^2. $$
First we divide by $y^2$.
$$ y^{-2}y' = \frac{2}{x}y^{-1} + 1 $$
We make the change of variable $u = y^{-1}$.
$$ -u' = \frac{2}{x}u + 1 $$
$$ u' + \frac{2}{x}u = -1 $$
The integrating factor is $I(x) = \exp\left( \int \frac{2}{x}\, dx \right) = x^2$.
$$ \frac{d}{dx}\left( x^2 u \right) = -x^2 $$
$$ x^2 u = -\frac{1}{3}x^3 + c $$
$$ u = -\frac{1}{3}x + \frac{c}{x^2} $$
$$ y = \left( -\frac{1}{3}x + \frac{c}{x^2} \right)^{-1} $$
Thus the solution for $y$ is
$$ y = \frac{3x^2}{c - x^3}. $$
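A one-line symbolic check of this solution (an added sketch, not from the original text):

```python
import sympy as sp

x, c = sp.symbols('x c')

# Candidate solution of the Bernoulli equation y' = (2/x)*y + y**2.
y = 3*x**2 / (c - x**3)
residual = sp.diff(y, x) - (2*y/x + y**2)
print(sp.simplify(residual))  # 0
```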
18.2 Riccati Equations
Factoring Second Order Operators. Consider the second order linear equation
$$ L[y] = \left[ \frac{d^2}{dx^2} + p(x)\frac{d}{dx} + q(x) \right] y = y'' + p(x)y' + q(x)y = f(x). $$
If we were able to factor the linear operator $L$ into the form
$$ L = \left[ \frac{d}{dx} + a(x) \right]\left[ \frac{d}{dx} + b(x) \right], \tag{18.1} $$
then we would be able to solve the differential equation. Factoring reduces the problem to a system of first order equations. We start with the factored equation
$$ \left[ \frac{d}{dx} + a(x) \right]\left[ \frac{d}{dx} + b(x) \right] y = f(x). $$
We set $u = \left[ \frac{d}{dx} + b(x) \right] y$ and solve the problem
$$ \left[ \frac{d}{dx} + a(x) \right] u = f(x). $$
Then to obtain the solution we solve
$$ \left[ \frac{d}{dx} + b(x) \right] y = u. $$
Example 18.2.1 Consider the equation
$$ y'' + \left( x - \frac{1}{x} \right)y' + \left( \frac{1}{x^2} - 1 \right)y = 0. $$
Let's say by some insight or just random luck we are able to see that this equation can be factored into
$$ \left[ \frac{d}{dx} + x \right]\left[ \frac{d}{dx} - \frac{1}{x} \right] y = 0. $$
We first solve the equation
$$ \left[ \frac{d}{dx} + x \right] u = 0. $$
$$ u' + xu = 0 $$
$$ \frac{d}{dx}\left( e^{x^2/2}u \right) = 0 $$
$$ u = c_1 e^{-x^2/2} $$
Then we solve for $y$ with the equation
$$ \left[ \frac{d}{dx} - \frac{1}{x} \right] y = u = c_1 e^{-x^2/2}. $$
$$ y' - \frac{1}{x}y = c_1 e^{-x^2/2} $$
$$ \frac{d}{dx}\left( x^{-1}y \right) = c_1 x^{-1} e^{-x^2/2} $$
$$ y = c_1 x \int x^{-1} e^{-x^2/2}\, dx + c_2 x $$
If we were able to solve for $a$ and $b$ in Equation 18.1 in terms of $p$ and $q$ then we would be able to solve any second order differential equation. Equating the two operators,
$$ \frac{d^2}{dx^2} + p\frac{d}{dx} + q = \left[ \frac{d}{dx} + a \right]\left[ \frac{d}{dx} + b \right] = \frac{d^2}{dx^2} + (a + b)\frac{d}{dx} + (b' + ab). $$
Thus we have the two equations
$$ a + b = p, \qquad b' + ab = q. $$
Eliminating $a$,
$$ b' + (p - b)b = q $$
$$ b' = b^2 - pb + q $$
Now we have a nonlinear equation for $b$ that is no easier to solve than the original second order linear equation.
Riccati Equations. Equations of the form
$$ y' = a(x)y^2 + b(x)y + c(x) $$
are called Riccati equations. From the above derivation we see that for every second order differential equation there is a corresponding Riccati equation. Now we will show that the converse is true.
We make the substitution
$$ y = -\frac{u'}{au}, \qquad y' = -\frac{u''}{au} + \frac{(u')^2}{au^2} + \frac{a'u'}{a^2u}, $$
in the Riccati equation.
$$ y' = ay^2 + by + c $$
$$ -\frac{u''}{au} + \frac{(u')^2}{au^2} + \frac{a'u'}{a^2u} = a\frac{(u')^2}{a^2u^2} - b\frac{u'}{au} + c $$
$$ -\frac{u''}{au} + \frac{a'u'}{a^2u} + b\frac{u'}{au} - c = 0 $$
$$ u'' - \left( \frac{a'}{a} + b \right)u' + acu = 0 $$
Now we have a second order linear equation for $u$.
Now we have a second order linear equation for u.
Result 18.2.1 The substitution $y = -\frac{u'}{au}$ transforms the Riccati equation
$$ y' = a(x)y^2 + b(x)y + c(x) $$
into the second order linear equation
$$ u'' - \left( \frac{a'}{a} + b \right)u' + acu = 0. $$
Example 18.2.2 Consider the Riccati equation
$$ y' = y^2 + \frac{1}{x}y + \frac{1}{x^2}. $$
With the substitution $y = -\frac{u'}{u}$ we obtain
$$ u'' - \frac{1}{x}u' + \frac{1}{x^2}u = 0. $$
This is an Euler equation. The substitution $u = x^\lambda$ yields
$$ \lambda(\lambda - 1) - \lambda + 1 = (\lambda - 1)^2 = 0. $$
Thus the general solution for $u$ is
$$ u = c_1 x + c_2 x\log x. $$
Since $y = -\frac{u'}{u}$,
$$ y = -\frac{c_1 + c_2(1 + \log x)}{c_1 x + c_2 x\log x} $$
$$ y = -\frac{1 + c(1 + \log x)}{x + cx\log x} $$
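The chain of substitutions is easy to get wrong by a sign, so a symbolic check is worthwhile. The sympy sketch below (added as an illustration, not from the original text) takes $u$ from the Euler equation with $c_1 = 1$, $c_2 = c$, forms $y = -u'/u$, and verifies the Riccati equation.

```python
import sympy as sp

x, c = sp.symbols('x c', positive=True)

# u solves u'' - u'/x + u/x**2 = 0; take u = x + c*x*log(x).
u = x + c*x*sp.log(x)
y = -sp.diff(u, x)/u  # the Riccati substitution with a(x) = 1

residual = sp.diff(y, x) - (y**2 + y/x + 1/x**2)
print(sp.simplify(residual))  # 0
```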
18.3 Exchanging the Dependent and Independent Variables
Some differential equations can be put in a more elementary form by exchanging the dependent
and independent variables. If the new equation can be solved, you will have an implicit solution for
the initial equation. We will consider a few examples to illustrate the method.
Example 18.3.1 Consider the equation
$$ y' = \frac{1}{y^3 - xy^2}. $$
Instead of considering $y$ to be a function of $x$, consider $x$ to be a function of $y$. That is, $x = x(y)$, $x' = \frac{dx}{dy}$.
$$ \frac{dy}{dx} = \frac{1}{y^3 - xy^2} $$
$$ \frac{dx}{dy} = y^3 - xy^2 $$
$$ x' + y^2 x = y^3 $$
Now we have a first order equation for $x$.
$$ \frac{d}{dy}\left( e^{y^3/3}x \right) = y^3 e^{y^3/3} $$
$$ x = e^{-y^3/3} \int y^3 e^{y^3/3}\, dy + c\, e^{-y^3/3} $$
Example 18.3.2 Consider the equation
$$ y' = \frac{y}{y^2 + 2x}. $$
Interchanging the dependent and independent variables yields
$$ \frac{1}{x'} = \frac{y}{y^2 + 2x} $$
$$ x' = y + 2\frac{x}{y} $$
$$ x' - 2\frac{x}{y} = y $$
$$ \frac{d}{dy}\left( y^{-2}x \right) = y^{-1} $$
$$ y^{-2}x = \log y + c $$
$$ x = y^2\log y + cy^2 $$
Result 18.3.1 Some differential equations can be put in a simpler form by
exchanging the dependent and independent variables. Thus a differential equa-
tion for y(x) can be written as an equation for x(y). Solving the equation for
x(y) will give an implicit solution for y(x).
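We can confirm the implicit solution of Example 18.3.2 with a short computation. The sketch below is illustrative, not part of the text; it assumes SymPy is available.

```python
# Check that x = y^2 log y + c y^2 implicitly solves y' = y/(y^2 + 2x)
# by verifying dx/dy = y + 2x/y along the solution family.
import sympy as sp

y, c = sp.symbols('y c', positive=True)
x = y**2*sp.log(y) + c*y**2
print(sp.simplify(sp.diff(x, y) - (y + 2*x/y)))  # prints 0
```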
18.4 Autonomous Equations
Autonomous equations have no explicit dependence on x. The following are examples.
• y'' + 3y' − 2y = 0
• y'' = y' + (y')²
• y'' + y'y = 0
The change of variables u(y) = y' reduces an nth order autonomous equation in y to a non-autonomous equation of order n − 1 in u(y). Writing the derivatives of y in terms of u,
y' = u(y)
y'' = d/dx u(y) = (dy/dx) d/dy u(y) = y' u' = u'u
y''' = (u''u + (u')²) u.
Thus we see that the equation for u(y) will have an order of one less than the original equation.
Result 18.4.1 Consider an autonomous differential equation for y(x), (autonomous equations have no explicit dependence on x.) The change of variables u(y) = y' reduces an nth order autonomous equation in y to a non-autonomous equation of order n − 1 in u(y).
Example 18.4.1 Consider the equation

y'' = y + (y')².

With the substitution u(y) = y', the equation becomes

u'u = y + u²
u' = u + yu⁻¹.

We recognize this as a Bernoulli equation. The substitution v = u² yields

(1/2) v' = v + y
v' − 2v = 2y
d/dy (e^{−2y} v) = 2y e^{−2y}
v(y) = c1 e^{2y} + e^{2y} ∫ 2y e^{−2y} dy
v(y) = c1 e^{2y} + e^{2y} (−y e^{−2y} + ∫ e^{−2y} dy)
v(y) = c1 e^{2y} + e^{2y} (−y e^{−2y} − (1/2) e^{−2y})
v(y) = c1 e^{2y} − y − 1/2.

Now we solve for u.

u(y) = (c1 e^{2y} − y − 1/2)^{1/2}
dy/dx = (c1 e^{2y} − y − 1/2)^{1/2}

This equation is separable.

dx = dy / (c1 e^{2y} − y − 1/2)^{1/2}
x + c2 = ∫ dy / (c1 e^{2y} − y − 1/2)^{1/2}

Thus we finally have arrived at an implicit solution for y(x).
Example 18.4.2 Consider the equation

y'' + y³ = 0.

With the change of variables, u(y) = y', the equation becomes

u'u + y³ = 0.

This equation is separable.

u du = −y³ dy
(1/2) u² = −(1/4) y⁴ + c1
u = (2c1 − (1/2) y⁴)^{1/2}
y' = (2c1 − (1/2) y⁴)^{1/2}
dy / (2c1 − (1/2) y⁴)^{1/2} = dx

Integrating gives us the implicit solution

∫ dy / (2c1 − (1/2) y⁴)^{1/2} = x + c2.
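The separation above rests on the first integral (1/2)(y')² + (1/4)y⁴ = c1, which should be constant along any trajectory. The following numerical sketch (not from the text; it assumes SciPy is available) confirms this.

```python
# For y'' + y^3 = 0, check that (1/2)(y')^2 + (1/4)y^4 is conserved.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, s):
    y, v = s          # v = y'
    return [v, -y**3]

sol = solve_ivp(rhs, [0.0, 10.0], [1.0, 0.0], rtol=1e-10, atol=1e-12,
                dense_output=True)
t = np.linspace(0.0, 10.0, 5)
y, v = sol.sol(t)
print(0.5*v**2 + 0.25*y**4)   # all values ~0.25, the initial energy
```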
18.5 *Equidimensional-in-x Equations
Differential equations that are invariant under the change of variables x = cξ are said to be equidimensional-in-x. For a familiar example from linear equations, we note that the Euler equation is equidimensional-in-x. Writing the new derivatives under the change of variables,

x = cξ,    d/dx = (1/c) d/dξ,    d²/dx² = (1/c²) d²/dξ², . . . .
Example 18.5.1 Consider the Euler equation

y'' + (2/x) y' + (3/x²) y = 0.

Under the change of variables, x = cξ, y(x) = u(ξ), this equation becomes

(1/c²) u'' + (2/(cξ)) (1/c) u' + (3/(c²ξ²)) u = 0
u'' + (2/ξ) u' + (3/ξ²) u = 0.

Thus this equation is invariant under the change of variables x = cξ.
Example 18.5.2 For a nonlinear example, consider the equation

y''y' + y''/(x²y') + y'/x² = 0.

With the change of variables x = cξ, y(x) = u(ξ) the equation becomes

(u''/c²)(u'/c) + (u''/c²)/(c²ξ² u'/c) + (u'/c)/(c²ξ²) = 0
u''u' + u''/(ξ²u') + u'/ξ² = 0.

We see that this equation is also equidimensional-in-x.
You may recall that the change of variables x = e^t reduces an Euler equation to a constant coefficient equation. To generalize this result to nonlinear equations we will see that the same change of variables reduces an equidimensional-in-x equation to an autonomous equation.
Writing the derivatives with respect to x in terms of t,
x = e^t,    d/dx = (dt/dx) d/dt = e^{−t} d/dt
x d/dx = d/dt
x² d²/dx² = x d/dx (x d/dx) − x d/dx = d²/dt² − d/dt.
Example 18.5.3 Consider the equation in Example 18.5.2,

y''y' + y''/(x²y') + y'/x² = 0.

Applying the change of variables x = e^t, y(x) = u(t) yields an autonomous equation for u(t).

x²y'' · xy' + (x²y'')/(xy') + xy' = 0
(u'' − u')u' + (u'' − u')/u' + u' = 0
Result 18.5.1 A differential equation that is invariant under the change of variables x = cξ is equidimensional-in-x. Such an equation can be reduced to an autonomous equation of the same order with the change of variables x = e^t.
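The reduction is easy to check on the Euler equation of Example 18.5.1. The sketch below is illustrative, not part of the text; it assumes SymPy is available.

```python
# Substitute x = e^t into y'' + (2/x)y' + (3/x^2)y = 0.  Multiplying by x^2
# and using x d/dx = d/dt, x^2 d^2/dx^2 = d^2/dt^2 - d/dt gives an equation
# with constant coefficients, hence autonomous.
import sympy as sp

t = sp.symbols('t')
u = sp.Function('u')(t)
eq = (u.diff(t, 2) - u.diff(t)) + 2*u.diff(t) + 3*u
print(sp.expand(eq))   # u'' + u' + 3u: no explicit t dependence
```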
18.6 *Equidimensional-in-y Equations
A differential equation is said to be equidimensional-in-y if it is invariant under the change of
variables y(x) = c v(x). Note that all linear homogeneous equations are equidimensional-in-y.
Example 18.6.1 Consider the linear equation

y'' + p(x)y' + q(x)y = 0.

With the change of variables y(x) = cv(x) the equation becomes

cv'' + p(x)cv' + q(x)cv = 0
v'' + p(x)v' + q(x)v = 0
Thus we see that the equation is invariant under the change of variables.
Example 18.6.2 For a nonlinear example, consider the equation

y''y + (y')² − y² = 0.

Under the change of variables y(x) = cv(x) the equation becomes

cv'' · cv + (cv')² − (cv)² = 0
v''v + (v')² − v² = 0.

Thus we see that this equation is also equidimensional-in-y.
The change of variables y(x) = e^{u(x)} reduces an nth order equidimensional-in-y equation to an equation of order n − 1 for u'. Writing the derivatives of e^{u(x)},

d/dx e^u = u' e^u
d²/dx² e^u = (u'' + (u')²) e^u
d³/dx³ e^u = (u''' + 3u''u' + (u')³) e^u.
Example 18.6.3 Consider the linear equation in Example 18.6.1,

y'' + p(x)y' + q(x)y = 0.

Under the change of variables y(x) = e^{u(x)} the equation becomes

(u'' + (u')²) e^u + p(x)u' e^u + q(x) e^u = 0
u'' + (u')² + p(x)u' + q(x) = 0.

Thus we have a Riccati equation for u'. This transformation might seem rather useless since linear equations are usually easier to work with than nonlinear equations, but it is often useful in determining the asymptotic behavior of the equation.
Example 18.6.4 From Example 18.6.2 we have the equation

y''y + (y')² − y² = 0.

The change of variables y(x) = e^{u(x)} yields

(u'' + (u')²) e^u e^u + (u' e^u)² − (e^u)² = 0
u'' + 2(u')² − 1 = 0
u'' = −2(u')² + 1

Now we have a Riccati equation for u'. We make the substitution u' = v'/(2v).

v''/(2v) − (v')²/(2v²) = −2(v')²/(4v²) + 1
v'' − 2v = 0
v = c1 e^{√2 x} + c2 e^{−√2 x}

u' = (1/2) (√2 c1 e^{√2 x} − √2 c2 e^{−√2 x}) / (c1 e^{√2 x} + c2 e^{−√2 x})

u = (1/2) ∫ (√2 c1 e^{√2 x} − √2 c2 e^{−√2 x}) / (c1 e^{√2 x} + c2 e^{−√2 x}) dx + c3

u = (1/2) log(c1 e^{√2 x} + c2 e^{−√2 x}) + c3

y = (c1 e^{√2 x} + c2 e^{−√2 x})^{1/2} e^{c3}

The constants are redundant; the general solution is

y = (c1 e^{√2 x} + c2 e^{−√2 x})^{1/2}
Result 18.6.1 A differential equation is equidimensional-in-y if it is invariant under the change of variables y(x) = cv(x). An nth order equidimensional-in-y equation can be reduced to an equation of order n − 1 in u' with the change of variables y(x) = e^{u(x)}.
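We can verify the result of Example 18.6.4 directly. This short sketch is illustrative, not part of the text; it assumes SymPy is available.

```python
# Check that y = (c1 e^{sqrt(2)x} + c2 e^{-sqrt(2)x})^{1/2} satisfies
# y y'' + (y')^2 - y^2 = 0.
import sympy as sp

x, c1, c2 = sp.symbols('x c1 c2', positive=True)
v = c1*sp.exp(sp.sqrt(2)*x) + c2*sp.exp(-sp.sqrt(2)*x)
y = sp.sqrt(v)
residual = y*sp.diff(y, x, 2) + sp.diff(y, x)**2 - y**2
print(sp.simplify(residual))  # prints 0
```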
18.7 *Scale-Invariant Equations
Result 18.7.1 An equation is scale invariant if it is invariant under the change of variables, x = cξ, y(x) = c^α v(ξ), for some value of α. A scale-invariant equation can be transformed to an equidimensional-in-x equation with the change of variables, y(x) = x^α u(x).
Example 18.7.1 Consider the equation

y'' + x²y² = 0.

Under the change of variables x = cξ, y(x) = c^α v(ξ) this equation becomes

c^{α−2} v''(ξ) + c²ξ² c^{2α} v²(ξ) = 0.

Equating powers of c in the two terms, α − 2 = 2α + 2, yields α = −4.
Introducing the change of variables y(x) = x⁻⁴ u(x) yields

d²/dx² (x⁻⁴ u(x)) + x² (x⁻⁴ u(x))² = 0
x⁻⁴ u'' − 8x⁻⁵ u' + 20x⁻⁶ u + x⁻⁶ u² = 0
x² u'' − 8xu' + 20u + u² = 0.

We see that the equation for u is equidimensional-in-x.
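The exponent α = −4 can be checked by scaling the equation symbolically. The following sketch is illustrative, not part of the text; it assumes SymPy is available.

```python
# Confirm that y'' + x^2 y^2 = 0 is scale invariant with alpha = -4:
# under x = c xi, y = c^{-4} v(xi), every term picks up the factor c^{-6}.
import sympy as sp

xi, c = sp.symbols('xi c', positive=True)
v = sp.Function('v')
y = c**-4 * v(xi)
expr = c**-2 * y.diff(xi, 2) + (c*xi)**2 * y**2
print(sp.simplify(expr * c**6))  # v''(xi) + xi**2 v(xi)**2, no c remains
```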
18.8 Exercises
Exercise 18.1
1. Find the general solution and the singular solution of the Clairaut equation,

y = xp + p², where p ≡ y'.
2. Show that the singular solution is the envelope of the general solution.
Hint, Solution
Bernoulli Equations
Exercise 18.2 (mathematica/ode/techniques nonlinear/bernoulli.nb)
Consider the Bernoulli equation

dy/dt + p(t)y = q(t)y^α.

1. Solve the Bernoulli equation for α = 1.
2. Show that for α ≠ 1 the substitution u = y^{1−α} reduces Bernoulli's equation to a linear equation.
3. Find the general solution to the following equations.

(a) t² dy/dt + 2ty − y³ = 0, t > 0

(b) dy/dx + 2xy + y² = 0
Hint, Solution
Exercise 18.3
Consider a population, y. Let the birth rate of the population be proportional to y with constant
of proportionality 1. Let the death rate of the population be proportional to y² with constant of
proportionality 1/1000. Assume that the population is large enough so that you can consider y to
be continuous. What is the population as a function of time if the initial population is y0?
Hint, Solution
Exercise 18.4
Show that the transformation u = y^{1−n} reduces the equation to a linear first order equation. Solve the equations

1. t² dy/dt + 2ty − y³ = 0, t > 0

2. dy/dt = (Γ cos t + T) y − y³, Γ and T are real constants. (From a fluid flow stability problem.)
Hint, Solution
Riccati Equations
Exercise 18.5
1. Consider the Riccati equation,

dy/dx = a(x)y² + b(x)y + c(x).

Substitute

y = y_p(x) + 1/u(x)

into the Riccati equation, where y_p is some particular solution, to obtain a first order linear differential equation for u.

2. Consider a Riccati equation,

y' = 1 + x² − 2xy + y².

Verify that y_p(x) = x is a particular solution. Make the substitution y = y_p + 1/u to find the general solution.
What would happen if you continued this method, taking the general solution for y_p? Would you be able to find a more general solution?

3. The substitution

y = −u'/(au)

gives us the second order, linear, homogeneous differential equation,

u'' − (a'/a + b) u' + acu = 0.

The general solution for u has two constants of integration. However, the solution for y should only have one constant of integration as it satisfies a first order equation. Write y in terms of the solution for u and verify that y has only one constant of integration.
Hint, Solution
Exchanging the Dependent and Independent Variables
Exercise 18.6
Solve the differential equation

y' = √y / (xy + y).
Hint, Solution
Autonomous Equations
*Equidimensional-in-x Equations
*Equidimensional-in-y Equations
*Scale-Invariant Equations
18.9 Hints
Hint 18.1
Bernoulli Equations
Hint 18.2
Hint 18.3
The differential equation governing the population is

dy/dt = y − y²/1000,    y(0) = y0.
This is a Bernoulli equation.
Hint 18.4
Riccati Equations
Hint 18.5
Exchanging the Dependent and Independent Variables
Hint 18.6
Exchange the dependent and independent variables.
Autonomous Equations
*Equidimensional-in-x Equations
*Equidimensional-in-y Equations
*Scale-Invariant Equations
[Figure 18.1: The Envelope of y = cx + c².]
18.10 Solutions
Solution 18.1
We consider the Clairaut equation,

y = xp + p².    (18.2)

1. We differentiate Equation 18.2 with respect to x to obtain a second order differential equation.

y' = y' + xy'' + 2y'y''
y''(2y' + x) = 0

Equating the first or second factor to zero will lead us to two distinct solutions.

y'' = 0 or y' = −x/2

If y'' = 0 then y' ≡ p is a constant, (say y' = c). From Equation 18.2 we see that the general solution is,

y(x) = cx + c².    (18.3)

Recall that the general solution of a first order differential equation has one constant of integration.
If y' = −x/2 then y = −x²/4 + const. We determine the constant by substituting the expression into Equation 18.2.

−x²/4 + c = x(−x/2) + (−x/2)²

Thus we see that a singular solution of the Clairaut equation is

y(x) = −(1/4)x².    (18.4)

Recall that a singular solution of a first order nonlinear differential equation has no constant of integration.

2. Equating the general and singular solutions, y(x), and their derivatives, y'(x), gives us the system of equations,

cx + c² = −(1/4)x²,    c = −(1/2)x.

Since the first equation is satisfied for c = −x/2, we see that the solution y = cx + c² is tangent to the solution y = −x²/4 at the point (−2c, −c²). The solution y = cx + c² is plotted for c = . . . , −1/4, 0, 1/4, . . . in Figure 18.1.
The envelope of a one-parameter family F(x, y, c) = 0 is given by the system of equations,

F(x, y, c) = 0,    F_c(x, y, c) = 0.

For the family of solutions y = cx + c² these equations are

y = cx + c²,    0 = x + 2c.

Substituting the solution of the second equation, c = −x/2, into the first equation gives the envelope,

y = (−(1/2)x)x + (−(1/2)x)² = −(1/4)x².

Thus we see that the singular solution is the envelope of the general solution.
Bernoulli Equations
Solution 18.2
1.

dy/dt + p(t)y = q(t)y
dy/y = (q − p) dt
ln y = ∫ (q − p) dt + c
y = c e^{∫ (q−p) dt}

2. We consider the Bernoulli equation,

dy/dt + p(t)y = q(t)y^α,    α ≠ 1.

We divide by y^α.

y^{−α} y' + p(t)y^{1−α} = q(t)

This suggests the change of dependent variable u = y^{1−α}, u' = (1 − α)y^{−α}y'.

(1/(1 − α)) d/dt (y^{1−α}) + p(t)y^{1−α} = q(t)
du/dt + (1 − α)p(t)u = (1 − α)q(t)

Thus we obtain a linear equation for u which when solved will give us an implicit solution for y.

3. (a)

t² dy/dt + 2ty − y³ = 0,    t > 0
t² y'/y³ + 2t/y² = 1

We make the change of variables u = y⁻².

−(1/2) t² u' + 2tu = 1
u' − (4/t) u = −2/t²

The integrating factor is

μ = e^{∫ (−4/t) dt} = e^{−4 ln t} = t⁻⁴.

We multiply by the integrating factor and integrate to obtain the solution.

d/dt (t⁻⁴ u) = −2t⁻⁶
u = (2/5) t⁻¹ + ct⁴
y⁻² = (2/5) t⁻¹ + ct⁴
y = ±1 / ((2/5) t⁻¹ + ct⁴)^{1/2}
y = ±√(5t) / (2 + ct⁵)^{1/2}

(b)

dy/dx + 2xy + y² = 0
y'/y² + 2x/y = −1

We make the change of variables u = y⁻¹.

u' − 2xu = 1

The integrating factor is

μ = e^{∫ (−2x) dx} = e^{−x²}.

We multiply by the integrating factor and integrate to obtain the solution.

d/dx (e^{−x²} u) = e^{−x²}
u = e^{x²} ∫ e^{−x²} dx + c e^{x²}
y = e^{−x²} / (∫ e^{−x²} dx + c)
Solution 18.3
The differential equation governing the population is

dy/dt = y − y²/1000,    y(0) = y0.

We recognize this as a Bernoulli equation. The substitution u(t) = 1/y(t) yields

−du/dt = u − 1/1000,    u(0) = 1/y0.

u' + u = 1/1000
u = (1/y0) e^{−t} + (e^{−t}/1000) ∫_0^t e^τ dτ
u = 1/1000 + (1/y0 − 1/1000) e^{−t}

Solving for y(t),

y(t) = (1/1000 + (1/y0 − 1/1000) e^{−t})⁻¹.

As a check, we see that as t → ∞, y(t) → 1000, which is an equilibrium solution of the differential equation.

dy/dt = 0 = y − y²/1000 → y = 1000
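A numerical experiment shows the same behavior. The sketch below is illustrative, not part of the text; it assumes SciPy is available, and the initial population y0 = 10 is chosen arbitrarily.

```python
# Integrate dy/dt = y - y^2/1000 numerically and compare with the closed
# form; both approach the equilibrium value 1000.
import numpy as np
from scipy.integrate import solve_ivp

y0 = 10.0
sol = solve_ivp(lambda t, y: y - y**2/1000, [0, 20], [y0], rtol=1e-8)
exact = 1.0/(1/1000 + (1/y0 - 1/1000)*np.exp(-sol.t))
print(abs(sol.y[0] - exact).max())   # small discrepancy
print(sol.y[0][-1])                  # close to 1000
```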
Solution 18.4
1.

t² dy/dt + 2ty − y³ = 0
dy/dt + 2t⁻¹ y = t⁻² y³

We make the change of variables u(t) = y⁻²(t).

u' − 4t⁻¹ u = −2t⁻²

This gives us a first order, linear equation. The integrating factor is

I(t) = e^{∫ −4t⁻¹ dt} = e^{−4 log t} = t⁻⁴.

We multiply by the integrating factor and integrate.

d/dt (t⁻⁴ u) = −2t⁻⁶
t⁻⁴ u = (2/5) t⁻⁵ + c
u = (2/5) t⁻¹ + ct⁴

Finally we write the solution in terms of y(t).

y(t) = ±1 / ((2/5) t⁻¹ + ct⁴)^{1/2}
y(t) = ±√(5t) / (2 + ct⁵)^{1/2}

2.

dy/dt − (Γ cos t + T) y = −y³

We make the change of variables u(t) = y⁻²(t).

u' + 2(Γ cos t + T) u = 2

This gives us a first order, linear equation. The integrating factor is

I(t) = e^{∫ 2(Γ cos t + T) dt} = e^{2(Γ sin t + T t)}

We multiply by the integrating factor and integrate.

d/dt (e^{2(Γ sin t + T t)} u) = 2 e^{2(Γ sin t + T t)}
u = 2 e^{−2(Γ sin t + T t)} ∫ e^{2(Γ sin t + T t)} dt + c e^{−2(Γ sin t + T t)}

Finally we write the solution in terms of y(t).

y = ± e^{Γ sin t + T t} / (2 ∫ e^{2(Γ sin t + T t)} dt + c)^{1/2}
Riccati Equations
Solution 18.5
We consider the Riccati equation,

dy/dx = a(x)y² + b(x)y + c(x).    (18.5)

1. We substitute

y = y_p(x) + 1/u(x)

into the Riccati equation, where y_p is some particular solution.

y_p' − u'/u² = a(x) (y_p² + 2y_p/u + 1/u²) + b(x) (y_p + 1/u) + c(x)
−u'/u² = b(x) (1/u) + a(x) (2y_p/u + 1/u²)
u' = −(b + 2ay_p) u − a

We obtain a first order linear differential equation for u whose solution will contain one constant of integration.

2. We consider a Riccati equation,

y' = 1 + x² − 2xy + y².    (18.6)

We verify that y_p(x) = x is a solution.

1 = 1 + x² − 2x·x + x²

Substituting y = y_p + 1/u into Equation 18.6 yields,

u' = −(−2x + 2x) u − 1
u = −x + c
y = x + 1/(c − x)

What would happen if we continued this method? Since y = x + 1/(c − x) is a solution of the Riccati equation we can make the substitution,

y = x + 1/(c − x) + 1/u(x),    (18.7)

which will lead to a solution for y which has two constants of integration. Then we could repeat the process, substituting the sum of that solution and 1/u(x) into the Riccati equation to find a solution with three constants of integration. We know that the general solution of a first order, ordinary differential equation has only one constant of integration. Does this method for Riccati equations violate this theorem? There's only one way to find out. We substitute Equation 18.7 into the Riccati equation.

u' = −(−2x + 2(x + 1/(c − x))) u − 1
u' = −(2/(c − x)) u − 1
u' + (2/(c − x)) u = −1

The integrating factor is

I(x) = e^{∫ 2/(c−x) dx} = e^{−2 log(c−x)} = 1/(c − x)².

Upon multiplying by the integrating factor, the equation becomes exact.

d/dx (u/(c − x)²) = −1/(c − x)²
u = (c − x)² (−1/(c − x)) + b(c − x)²
u = x − c + b(c − x)²

Thus the Riccati equation has the solution,

y = x + 1/(c − x) + 1/(x − c + b(c − x)²).

It appears that we have found a solution that has two constants of integration, but appearances can be deceptive. We do a little algebraic simplification of the solution.

y = x + 1/(c − x) + 1/((b(c − x) − 1)(c − x))
y = x + ((b(c − x) − 1) + 1) / ((b(c − x) − 1)(c − x))
y = x + b/(b(c − x) − 1)
y = x + 1/((c − 1/b) − x)

This is actually a solution, (namely the solution we had before), with one constant of integration, (namely c − 1/b). Thus we see that repeated applications of the procedure will not produce more general solutions.

3. The substitution

y = −u'/(au)

gives us the second order, linear, homogeneous differential equation,

u'' − (a'/a + b) u' + acu = 0.

The solution to this linear equation is a linear combination of two homogeneous solutions, u1 and u2.

u = c1 u1(x) + c2 u2(x)

The solution of the Riccati equation is then

y = −(c1 u1'(x) + c2 u2'(x)) / (a(x)(c1 u1(x) + c2 u2(x))).

Since we can divide the numerator and denominator by either c1 or c2, this answer has only one constant of integration, (namely c1/c2 or c2/c1).
Exchanging the Dependent and Independent Variables
Solution 18.6
Exchanging the dependent and independent variables in the differential equation,

y' = √y / (xy + y),

yields

x'(y) = y^{1/2} x + y^{1/2}.

This is a first order differential equation for x(y).

x' − y^{1/2} x = y^{1/2}
d/dy (x exp(−2y^{3/2}/3)) = y^{1/2} exp(−2y^{3/2}/3)
x exp(−2y^{3/2}/3) = −exp(−2y^{3/2}/3) + c1
x = −1 + c1 exp(2y^{3/2}/3)
(x + 1)/c1 = exp(2y^{3/2}/3)
log((x + 1)/c1) = (2/3) y^{3/2}
y = ((3/2) log((x + 1)/c1))^{2/3}
y = (c + (3/2) log(x + 1))^{2/3}
Autonomous Equations
*Equidimensional-in-x Equations
*Equidimensional-in-y Equations
*Scale-Invariant Equations
Chapter 19
Transformations and Canonical
Forms
Prize intensity more than extent. Excellence resides in quality not in quantity. The best is always
few and rare - abundance lowers value. Even among men, the giants are usually really dwarfs.
Some reckon books by the thickness, as if they were written to exercise the brawn more than the
brain. Extent alone never rises above mediocrity; it is the misfortune of universal geniuses that in
attempting to be at home everywhere are so nowhere. Intensity gives eminence and rises to the
heroic in matters sublime.
-Balthasar Gracian
19.1 The Constant Coefficient Equation
The solution of any second order linear homogeneous differential equation can be written in terms of the solutions to either

y'' = 0, or y'' − y = 0.

Consider the general equation

y'' + ay' + by = 0.

We can solve this differential equation by making the substitution y = e^{λx}. This yields the algebraic equation

λ² + aλ + b = 0.
λ = (1/2)(−a ± √(a² − 4b))

There are two cases to consider. If a² ≠ 4b then the solutions are

y1 = e^{(−a+√(a²−4b))x/2},    y2 = e^{(−a−√(a²−4b))x/2}

If a² = 4b then we have

y1 = e^{−ax/2},    y2 = x e^{−ax/2}

Note that regardless of the values of a and b the solutions are of the form

y = e^{−ax/2} u(x)

We would like to write the solutions to the general differential equation in terms of the solutions to simpler differential equations. We make the substitution

y = e^{λx} u

The derivatives of y are

y' = e^{λx} (u' + λu)
y'' = e^{λx} (u'' + 2λu' + λ²u)

Substituting these into the differential equation yields

u'' + (2λ + a)u' + (λ² + aλ + b)u = 0

In order to get rid of the u' term we choose

λ = −a/2.

The equation is then

u'' + (b − a²/4) u = 0.

There are now two cases to consider.
Case 1. If b = a²/4 then the differential equation is

u'' = 0

which has solutions 1 and x. The general solution for y is then

y = e^{−ax/2} (c1 + c2 x).

Case 2. If b ≠ a²/4 then the differential equation is

u'' − (a²/4 − b) u = 0.

We make the change of variables

u(x) = v(ξ),    x = μξ.

The derivatives in terms of ξ are

d/dx = (dξ/dx) d/dξ = (1/μ) d/dξ
d²/dx² = (1/μ) d/dξ ((1/μ) d/dξ) = (1/μ²) d²/dξ².

The differential equation for v is

(1/μ²) v'' − (a²/4 − b) v = 0
v'' − μ² (a²/4 − b) v = 0

We choose

μ = (a²/4 − b)^{−1/2}

to obtain

v'' − v = 0

which has solutions e^{±ξ}. The solution for y is

y = e^{λx} (c1 e^{x/μ} + c2 e^{−x/μ})
y = e^{−ax/2} (c1 e^{√(a²/4−b) x} + c2 e^{−√(a²/4−b) x})
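A computer algebra system reproduces this form directly. The sketch below is illustrative, not part of the text; it assumes SymPy is available and picks sample values of a and b.

```python
# For y'' + a y' + b y = 0 with a^2/4 - b > 0, dsolve's exponents should
# be -a/2 +- sqrt(a^2/4 - b), matching the derivation above.
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')
a, b = 3, 1                      # a^2/4 - b = 5/4 > 0
sol = sp.dsolve(y(x).diff(x, 2) + a*y(x).diff(x) + b*y(x), y(x))
print(sol)                       # exponents (-3 +- sqrt(5))/2, as claimed
```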
19.2 Normal Form
19.2.1 Second Order Equations
Consider the second order equation

y'' + p(x)y' + q(x)y = 0.    (19.1)

Through a change of dependent variable, this equation can be transformed to

u'' + I(x)u = 0.

This is known as the normal form of (19.1). The function I(x) is known as the invariant of the equation.
Now to find the change of variables that will accomplish this transformation. We make the substitution y(x) = a(x)u(x) in (19.1).

au'' + 2a'u' + a''u + p(au' + a'u) + qau = 0
u'' + (2a'/a + p) u' + (a''/a + pa'/a + q) u = 0

To eliminate the u' term, a(x) must satisfy

2a'/a + p = 0
a' + (1/2) pa = 0
a = c exp(−(1/2) ∫ p(x) dx).

For this choice of a, our differential equation for u becomes

u'' + (q − p²/4 − p'/2) u = 0.
Two differential equations having the same normal form are called equivalent.
Result 19.2.1 The change of variables

y(x) = exp(−(1/2) ∫ p(x) dx) u(x)

transforms the differential equation

y'' + p(x)y' + q(x)y = 0

into its normal form

u'' + I(x)u = 0

where the invariant of the equation, I(x), is

I(x) = q − p²/4 − p'/2.
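The invariant is a simple formula in p and q, so it is easy to compute mechanically. The sketch below is illustrative, not part of the text; it assumes SymPy is available and uses the equation that is solved later in Exercise 19.1.

```python
# Compute I(x) = q - p^2/4 - p'/2 for p = 2 + (4/3)x and
# q = (1/9)(24 + 12x + 4x^2); the invariant turns out to be constant.
import sympy as sp

x = sp.symbols('x')
p = 2 + sp.Rational(4, 3)*x
q = sp.Rational(1, 9)*(24 + 12*x + 4*x**2)
I = sp.simplify(q - p**2/4 - sp.diff(p, x)/2)
print(I)   # prints 1, so the normal form is u'' + u = 0
```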
19.2.2 Higher Order Differential Equations
Consider the third order differential equation

y''' + p(x)y'' + q(x)y' + r(x)y = 0.

We can eliminate the y'' term. Making the change of dependent variable

y = u exp(−(1/3) ∫ p(x) dx)
y' = (u' − (1/3) pu) exp(−(1/3) ∫ p(x) dx)
y'' = (u'' − (2/3) pu' + (1/9)(p² − 3p')u) exp(−(1/3) ∫ p(x) dx)
y''' = (u''' − pu'' + (1/3)(p² − 3p')u' + (1/27)(9pp' − 9p'' − p³)u) exp(−(1/3) ∫ p(x) dx)

yields the differential equation

u''' + (1/3)(3q − 3p' − p²)u' + (1/27)(27r − 9pq − 9p'' + 2p³)u = 0.
Result 19.2.2 The change of variables

y(x) = exp(−(1/n) ∫ p_{n−1}(x) dx) u(x)

transforms the differential equation

y^{(n)} + p_{n−1}(x)y^{(n−1)} + p_{n−2}(x)y^{(n−2)} + · · · + p0(x)y = 0

into the form

u^{(n)} + a_{n−2}(x)u^{(n−2)} + a_{n−3}(x)u^{(n−3)} + · · · + a0(x)u = 0.
19.3 Transformations of the Independent Variable
19.3.1 Transformation to the form u'' + a(x)u = 0
Consider the second order linear differential equation

y'' + p(x)y' + q(x)y = 0.

We make the change of independent variable

ξ = f(x),    u(ξ) = y(x).

The derivatives in terms of ξ are

d/dx = (dξ/dx) d/dξ = f' d/dξ
d²/dx² = f' d/dξ (f' d/dξ) = (f')² d²/dξ² + f'' d/dξ

The differential equation becomes

(f')² u'' + f'' u' + pf' u' + qu = 0.

In order to eliminate the u' term, f must satisfy

f'' + pf' = 0
f' = exp(−∫ p(x) dx)
f = ∫ exp(−∫ p(x) dx) dx.

The differential equation for u is then

u'' + (q/(f')²) u = 0
u''(ξ) + q(x) exp(2 ∫ p(x) dx) u(ξ) = 0.

Result 19.3.1 The change of variables

ξ = ∫ exp(−∫ p(x) dx) dx,    u(ξ) = y(x)

transforms the differential equation

y'' + p(x)y' + q(x)y = 0

into

u''(ξ) + q(x) exp(2 ∫ p(x) dx) u(ξ) = 0.
19.3.2 Transformation to a Constant Coefficient Equation
Consider the second order linear differential equation

y'' + p(x)y' + q(x)y = 0.

With the change of independent variable

ξ = f(x),    u(ξ) = y(x),

the differential equation becomes

(f')² u'' + (f'' + pf') u' + qu = 0.

For this to be a constant coefficient equation we must have

(f')² = c1 q, and f'' + pf' = c2 q,

for some constants c1 and c2. Solving the first condition,

f' = c √q,
f = c ∫ √(q(x)) dx.

The second constraint becomes

(f'' + pf')/q = const
((1/2) c q^{−1/2} q' + pc q^{1/2})/q = const
(q' + 2pq)/q^{3/2} = const.

Result 19.3.2 Consider the differential equation

y'' + p(x)y' + q(x)y = 0.

If the expression

(q' + 2pq)/q^{3/2}

is a constant then the change of variables

ξ = c ∫ √(q(x)) dx,    u(ξ) = y(x),

will yield a constant coefficient differential equation. (Here c is an arbitrary constant.)
19.4 Integral Equations
Volterra's Equations. Volterra's integral equation of the first kind has the form

∫_a^x N(x, ξ) f(ξ) dξ = f(x).

The Volterra equation of the second kind is

y(x) = f(x) + λ ∫_a^x N(x, ξ) y(ξ) dξ.

N(x, ξ) is known as the kernel of the equation.

Fredholm's Equations. Fredholm's integral equations of the first and second kinds are

∫_a^b N(x, ξ) f(ξ) dξ = f(x),
y(x) = f(x) + λ ∫_a^b N(x, ξ) y(ξ) dξ.
19.4.1 Initial Value Problems
Consider the initial value problem

y'' + p(x)y' + q(x)y = f(x),    y(a) = α,    y'(a) = β.

Integrating this equation twice yields

∫_a^x ∫_a^η (y''(ξ) + p(ξ)y'(ξ) + q(ξ)y(ξ)) dξ dη = ∫_a^x ∫_a^η f(ξ) dξ dη
∫_a^x (x − ξ)(y''(ξ) + p(ξ)y'(ξ) + q(ξ)y(ξ)) dξ = ∫_a^x (x − ξ)f(ξ) dξ.

Now we use integration by parts.

[(x − ξ)y'(ξ)]_a^x − ∫_a^x −y'(ξ) dξ + [(x − ξ)p(ξ)y(ξ)]_a^x − ∫_a^x ((x − ξ)p'(ξ) − p(ξ))y(ξ) dξ + ∫_a^x (x − ξ)q(ξ)y(ξ) dξ = ∫_a^x (x − ξ)f(ξ) dξ.

−(x − a)y'(a) + y(x) − y(a) − (x − a)p(a)y(a) − ∫_a^x ((x − ξ)p'(ξ) − p(ξ))y(ξ) dξ + ∫_a^x (x − ξ)q(ξ)y(ξ) dξ = ∫_a^x (x − ξ)f(ξ) dξ.

We obtain a Volterra integral equation of the second kind for y(x).

y(x) = ∫_a^x (x − ξ)f(ξ) dξ + (x − a)(αp(a) + β) + α + ∫_a^x ((x − ξ)(p'(ξ) − q(ξ)) − p(ξ)) y(ξ) dξ.

Note that the initial conditions for the differential equation are "built into" the Volterra equation. Setting x = a in the Volterra equation yields y(a) = α. Differentiating the Volterra equation,

y'(x) = ∫_a^x f(ξ) dξ + (αp(a) + β) − p(x)y(x) + ∫_a^x (p'(ξ) − q(ξ)) y(ξ) dξ

and setting x = a yields

y'(a) = αp(a) + β − p(a)α = β.

(Recall from calculus that

d/dx ∫^x g(x, ξ) dξ = g(x, x) + ∫^x ∂/∂x (g(x, ξ)) dξ.)

Result 19.4.1 The initial value problem

y'' + p(x)y' + q(x)y = f(x),    y(a) = α,    y'(a) = β,

is equivalent to the Volterra equation of the second kind

y(x) = F(x) + ∫_a^x N(x, ξ) y(ξ) dξ

where

F(x) = ∫_a^x (x − ξ)f(ξ) dξ + (x − a)(αp(a) + β) + α
N(x, ξ) = (x − ξ)(p'(ξ) − q(ξ)) − p(ξ).
627
19.4.2 Boundary Value Problems
Consider the boundary value problem

y'' = f(x),    y(a) = α,    y(b) = β.    (19.2)

To obtain a problem with homogeneous boundary conditions, we make the change of variable

y(x) = u(x) + α + ((β − α)/(b − a))(x − a)

to obtain the problem

u'' = f(x),    u(a) = u(b) = 0.

Now we will use Green's functions to write the solution as an integral. First we solve the problem

G'' = δ(x − ξ),    G(a|ξ) = G(b|ξ) = 0.

The homogeneous solutions of the differential equation that satisfy the left and right boundary conditions are

c1(x − a) and c2(x − b).

Thus the Green's function has the form

G(x|ξ) = { c1(x − a), for x ≤ ξ
           c2(x − b), for x ≥ ξ

Imposing continuity of G(x|ξ) at x = ξ and a unit jump of G'(x|ξ) at x = ξ, we obtain

G(x|ξ) = { (x − a)(ξ − b)/(b − a), for x ≤ ξ
           (x − b)(ξ − a)/(b − a), for x ≥ ξ

Thus the solution of (19.2) is

y(x) = α + ((β − α)/(b − a))(x − a) + ∫_a^b G(x|ξ)f(ξ) dξ.
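This integral representation is easy to evaluate numerically. The following sketch is illustrative, not part of the text; it assumes SciPy is available and takes f(x) = sin x with boundary values chosen for the test.

```python
# Solve y'' = sin x, y(0) = 1, y(pi) = 2 via the Green's function formula
# and compare with the exact solution -sin x + x/pi + 1.
import numpy as np
from scipy.integrate import quad

a, b, alpha, beta = 0.0, np.pi, 1.0, 2.0
f = np.sin

def G(x, xi):
    # piecewise Green's function; continuous, unit jump in dG/dx at x = xi
    return (x - a)*(xi - b)/(b - a) if x <= xi else (x - b)*(xi - a)/(b - a)

def y(x):
    integral, _ = quad(lambda xi: G(x, xi)*f(xi), a, b, points=[x])
    return alpha + (beta - alpha)/(b - a)*(x - a) + integral

x0 = 1.0
print(y(x0), -np.sin(x0) + x0/np.pi + 1.0)  # the two values agree
```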
Now consider the boundary value problem

y'' + p(x)y' + q(x)y = f(x),    y(a) = α,    y(b) = β.

From the above result we can see that the solution satisfies

y(x) = α + ((β − α)/(b − a))(x − a) + ∫_a^b G(x|ξ)(f(ξ) − p(ξ)y'(ξ) − q(ξ)y(ξ)) dξ.

Using integration by parts, we can write

−∫_a^b G(x|ξ)p(ξ)y'(ξ) dξ = −[G(x|ξ)p(ξ)y(ξ)]_a^b + ∫_a^b ((∂G(x|ξ)/∂ξ) p(ξ) + G(x|ξ)p'(ξ)) y(ξ) dξ
= ∫_a^b ((∂G(x|ξ)/∂ξ) p(ξ) + G(x|ξ)p'(ξ)) y(ξ) dξ.

Substituting this into our expression for y(x),

y(x) = α + ((β − α)/(b − a))(x − a) + ∫_a^b G(x|ξ)f(ξ) dξ + ∫_a^b ((∂G(x|ξ)/∂ξ) p(ξ) + G(x|ξ)(p'(ξ) − q(ξ))) y(ξ) dξ,

we obtain a Fredholm integral equation of the second kind.
Result 19.4.2 The boundary value problem

y'' + p(x)y' + q(x)y = f(x),    y(a) = α,    y(b) = β,

is equivalent to the Fredholm equation of the second kind

y(x) = F(x) + ∫_a^b N(x, ξ) y(ξ) dξ

where

F(x) = α + ((β − α)/(b − a))(x − a) + ∫_a^b G(x|ξ)f(ξ) dξ,
N(x, ξ) = H(x|ξ),
G(x|ξ) = { (x − a)(ξ − b)/(b − a), for x ≤ ξ
           (x − b)(ξ − a)/(b − a), for x ≥ ξ,
H(x|ξ) = { ((x − a)/(b − a)) p(ξ) + ((x − a)(ξ − b)/(b − a)) (p'(ξ) − q(ξ)), for x ≤ ξ
           ((x − b)/(b − a)) p(ξ) + ((x − b)(ξ − a)/(b − a)) (p'(ξ) − q(ξ)), for x ≥ ξ.
19.5 Exercises
The Constant Coefficient Equation
Normal Form
Exercise 19.1
Solve the differential equation

y'' + (2 + (4/3)x) y' + (1/9)(24 + 12x + 4x²) y = 0.

Hint, Solution
Transformations of the Independent Variable
Integral Equations
Exercise 19.2
Show that the solution of the differential equation

y'' + 2(a + bx)y' + (c + dx + ex²)y = 0

can be written in terms of one of the following canonical forms:

v'' + (ξ² + A)v = 0
v'' = ξv
v'' + v = 0
v'' = 0.

Hint, Solution
Exercise 19.3
Show that the solution of the differential equation

y'' + 2(a + b/x)y' + (c + d/x + e/x²)y = 0

can be written in terms of one of the following canonical forms:

v'' + (1 + A/ξ + B/ξ²)v = 0
v'' + (1/ξ + A/ξ²)v = 0
v'' + (A/ξ²)v = 0

Hint, Solution
Exercise 19.4
Show that the second order Euler equation

x² d²y/dx² + a1 x dy/dx + a0 y = 0

can be transformed to a constant coefficient equation.
Hint, Solution
Exercise 19.5
Solve Bessel's equation of order 1/2,

y'' + (1/x) y' + (1 − 1/(4x²)) y = 0.

Hint, Solution
19.6 Hints
The Constant Coefficient Equation
Normal Form
Hint 19.1
Transform the equation to normal form.
Transformations of the Independent Variable
Integral Equations
Hint 19.2
Transform the equation to normal form and then apply the scale transformation x = λξ + µ.
Hint 19.3
Transform the equation to normal form and then apply the scale transformation x = λξ.
Hint 19.4
Make the change of variables x = e^t, y(x) = u(t). Write the derivatives with respect to x in terms of t.

x = e^t
dx = e^t dt
d/dx = e^{−t} d/dt
x d/dx = d/dt
Hint 19.5
Transform the equation to normal form.
19.7 Solutions
The Constant Coefficient Equation
Normal Form
Solution 19.1

y'' + (2 + (4/3)x) y' + (1/9)(24 + 12x + 4x²) y = 0

To transform the equation to normal form we make the substitution

y = exp(−(1/2) ∫ (2 + (4/3)x) dx) u = e^{−x−x²/3} u

The invariant of the equation is

I(x) = (1/9)(24 + 12x + 4x²) − (1/4)(2 + (4/3)x)² − (1/2) d/dx (2 + (4/3)x) = 1.

The normal form of the differential equation is then

u'' + u = 0

which has the general solution

u = c1 cos x + c2 sin x

Thus the equation for y has the general solution

y = c1 e^{−x−x²/3} cos x + c2 e^{−x−x²/3} sin x.
Transformations of the Independent Variable
Integral Equations
Solution 19.2
The substitution that will transform the equation to normal form is

y = exp(−(1/2) ∫ 2(a + bx) dx) u = e^{−ax−bx²/2} u.

The invariant of the equation is

I(x) = c + dx + ex² − (1/4)(2(a + bx))² − (1/2) d/dx (2(a + bx))
     = c − b − a² + (d − 2ab)x + (e − b²)x²
     ≡ α + βx + γx²

The normal form of the differential equation is

u'' + (α + βx + γx²)u = 0

We consider the following cases:

γ = 0.
  β = 0.
    α = 0. We immediately have the equation u'' = 0.
    α ≠ 0. With the change of variables v(ξ) = u(x), x = α^{−1/2} ξ, we obtain v'' + v = 0.
  β ≠ 0. We have the equation y'' + (α + βx)y = 0. The scale transformation x = λξ + μ yields

    v'' + λ²(α + β(λξ + μ))v = 0
    v'' = −(βλ³ ξ + λ²(βμ + α))v.

    Choosing λ = (−β)^{−1/3}, μ = −α/β yields the differential equation v'' = ξv.

γ ≠ 0. The scale transformation x = λξ + μ yields

    v'' + λ²(α + β(λξ + μ) + γ(λξ + μ)²)v = 0
    v'' + λ²(α + βμ + γμ² + λ(β + 2γμ)ξ + λ²γξ²)v = 0.

    Choosing λ = γ^{−1/4}, μ = −β/(2γ) yields the differential equation

    v'' + (ξ² + A)v = 0

    where A = αγ^{−1/2} − (1/4)β²γ^{−3/2}.
Solution 19.3
The substitution that will transform the equation to normal form is

y = exp(−(1/2) ∫ 2(a + b/x) dx) u = x^{−b} e^{−ax} u.

The invariant of the equation is

I(x) = c + d/x + e/x² − (1/4)(2(a + b/x))² − (1/2) d/dx (2(a + b/x))
     = c − a² + (d − 2ab)/x + (e + b − b²)/x²
     ≡ α + β/x + γ/x².

The invariant form of the differential equation is

u'' + (α + β/x + γ/x²)u = 0.

We consider the following cases:

α = 0.
  β = 0. We immediately have the equation u'' + (γ/x²)u = 0.
  β ≠ 0. We have the equation u'' + (β/x + γ/x²)u = 0. The scale transformation u(x) = v(ξ), x = λξ yields

    v'' + (βλ/ξ + γ/ξ²)v = 0.

    Choosing λ = β^{−1}, we obtain v'' + (1/ξ + γ/ξ²)v = 0.

α ≠ 0. The scale transformation x = λξ yields

    v'' + (αλ² + βλ/ξ + γ/ξ²)v = 0.

    Choosing λ = α^{−1/2}, we obtain v'' + (1 + α^{−1/2}β/ξ + γ/ξ²)v = 0.
Solution 19.4
We write the derivatives with respect to x in terms of t.

x = e^t
dx = e^t dt
d/dx = e^{−t} d/dt
x d/dx = d/dt

Now we express x² d²/dx² in terms of t.

x² d²/dx² = x d/dx (x d/dx) − x d/dx = d²/dt² − d/dt

Thus under the change of variables, x = e^t, y(x) = u(t), the Euler equation becomes

u'' − u' + a1 u' + a0 u = 0
u'' + (a1 − 1)u' + a0 u = 0.
Solution 19.5
The transformation

y = exp(−(1/2) ∫ (1/x) dx) u = x^{−1/2} u

will put the equation in normal form. The invariant is

I(x) = 1 − 1/(4x²) − (1/4)(1/x²) − (1/2)(−1/x²) = 1.

Thus we have the differential equation

u'' + u = 0,

with the solution

u = c1 cos x + c2 sin x.

The solution of Bessel's equation of order 1/2 is

y = c1 x^{−1/2} cos x + c2 x^{−1/2} sin x.
Chapter 20
The Dirac Delta Function
I do not know what I appear to the world; but to myself I seem to have been only like a boy
playing on a seashore, and diverting myself now and then by finding a smoother pebble or a prettier
shell than ordinary, whilst the great ocean of truth lay all undiscovered before me.
- Sir Isaac Newton
20.1 Derivative of the Heaviside Function
The Heaviside function H(x) is defined

H(x) = { 0 for x < 0,
         1 for x > 0.

The derivative of the Heaviside function is zero for x ≠ 0. At x = 0 the derivative is undefined. We will represent the derivative of the Heaviside function by the Dirac delta function, δ(x). The delta function is zero for x ≠ 0 and infinite at the point x = 0. Since the derivative of H(x) is undefined, δ(x) is not a function in the conventional sense of the word. One can derive the properties of the delta function rigorously, but the treatment in this text will be almost entirely heuristic.
The Dirac delta function is defined by the properties

δ(x) = { 0 for x ≠ 0,
         ∞ for x = 0,

and

∫_{−∞}^{∞} δ(x) dx = 1.

The second property comes from the fact that δ(x) represents the derivative of H(x). The Dirac delta function is conceptually pictured in Figure 20.1.

[Figure 20.1: The Dirac Delta Function.]
Let f(x) be a continuous function that vanishes at infinity. Consider the integral

∫_{−∞}^{∞} f(x)δ(x) dx.

We use integration by parts to evaluate the integral.

∫_{−∞}^{∞} f(x)δ(x) dx = [f(x)H(x)]_{−∞}^{∞} − ∫_{−∞}^{∞} f'(x)H(x) dx
= −∫_0^{∞} f'(x) dx
= [−f(x)]_0^{∞}
= f(0)

We assumed that f(x) vanishes at infinity in order to use integration by parts to evaluate the integral. However, since the delta function is zero for x ≠ 0, the integrand is nonzero only at x = 0. Thus the behavior of the function at infinity should not affect the value of the integral. Thus it is reasonable that f(0) = ∫_{−∞}^{∞} f(x)δ(x) dx holds for all continuous functions. By changing variables and noting that δ(x) is symmetric we can derive a more general formula.

f(0) = ∫_{−∞}^{∞} f(ξ)δ(ξ) dξ
f(x) = ∫_{−∞}^{∞} f(ξ + x)δ(ξ) dξ
f(x) = ∫_{−∞}^{∞} f(ξ)δ(ξ − x) dξ
f(x) = ∫_{−∞}^{∞} f(ξ)δ(x − ξ) dξ

This formula is very important in solving inhomogeneous differential equations.
20.2 The Delta Function as a Limit
Consider a function b(x, ε) defined by

b(x, ε) = { 0 for |x| > ε/2,
            1/ε for |x| < ε/2.

The graph of b(x, 1/10) is shown in Figure 20.2.

[Figure 20.2: Graph of b(x, 1/10).]

The Dirac delta function δ(x) can be thought of as b(x, ε) in the limit as ε → 0. Note that the delta function so defined satisfies the properties,

δ(x) = { 0 for x ≠ 0,
         ∞ for x = 0

and

∫_{−∞}^{∞} δ(x) dx = 1

Delayed Limiting Process. When the Dirac delta function appears inside an integral, we can think of the delta function as a delayed limiting process.

∫_{−∞}^{∞} f(x)δ(x) dx ≡ lim_{ε→0} ∫_{−∞}^{∞} f(x)b(x, ε) dx.

Let f(x) be a continuous function and let F'(x) = f(x). We compute the integral of f(x)δ(x).

∫_{−∞}^{∞} f(x)δ(x) dx = lim_{ε→0} (1/ε) ∫_{−ε/2}^{ε/2} f(x) dx
= lim_{ε→0} (1/ε) [F(x)]_{−ε/2}^{ε/2}
= lim_{ε→0} (F(ε/2) − F(−ε/2))/ε
= F'(0)
= f(0)
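The delayed limiting process is easy to see numerically. The sketch below is illustrative, not part of the text; it assumes SciPy is available and takes f(x) = cos x, so f(0) = 1.

```python
# The integral of f(x) b(x, eps) tends to f(0) as eps -> 0.
import numpy as np
from scipy.integrate import quad

f = np.cos   # f(0) = 1

for eps in [1.0, 0.1, 0.01]:
    val, _ = quad(f, -eps/2, eps/2)
    print(eps, val/eps)   # approaches 1 as eps -> 0
```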
20.3 Higher Dimensions
We can define a Dirac delta function in n-dimensional Cartesian space, δ_n(x), x ∈ R^n. It is defined by the following two properties.

δ_n(x) = 0 for x ≠ 0
∫_{R^n} δ_n(x) dx = 1

It is easy to verify that the n-dimensional Dirac delta function can be written as a product of 1-dimensional Dirac delta functions.

δ_n(x) = ∏_{k=1}^{n} δ(x_k)
20.4 Non-Rectangular Coordinate Systems
We can derive Dirac delta functions in non-rectangular coordinate systems by making a change of variables in the relation,

∫_{R^n} δ_n(x) dx = 1

Where the transformation is non-singular, one merely divides the Dirac delta function by the Jacobian of the transformation to the coordinate system.

Example 20.4.1 Consider the Dirac delta function in cylindrical coordinates, (r, θ, z). The Jacobian is J = r.

∫_{−∞}^{∞} ∫_0^{2π} ∫_0^{∞} δ_3(x − x0) r dr dθ dz = 1

For r0 ≠ 0, the Dirac delta function is

δ_3(x − x0) = (1/r) δ(r − r0) δ(θ − θ0) δ(z − z0)

since it satisfies the two defining properties.

(1/r) δ(r − r0) δ(θ − θ0) δ(z − z0) = 0 for (r, θ, z) ≠ (r0, θ0, z0)

∫_{−∞}^{∞} ∫_0^{2π} ∫_0^{∞} (1/r) δ(r − r0) δ(θ − θ0) δ(z − z0) r dr dθ dz
= ∫_0^{∞} δ(r − r0) dr ∫_0^{2π} δ(θ − θ0) dθ ∫_{−∞}^{∞} δ(z − z0) dz = 1

For r0 = 0, we have

δ_3(x − x0) = (1/(2πr)) δ(r) δ(z − z0)

since this again satisfies the two defining properties.

(1/(2πr)) δ(r) δ(z − z0) = 0 for (r, z) ≠ (0, z0)

∫_{−∞}^{∞} ∫_0^{2π} ∫_0^{∞} (1/(2πr)) δ(r) δ(z − z0) r dr dθ dz = (1/(2π)) ∫_0^{∞} δ(r) dr ∫_0^{2π} dθ ∫_{−∞}^{∞} δ(z − z0) dz = 1
20.5 Exercises
Exercise 20.1
Let f(x) be a function that is continuous except for a jump discontinuity at x = 0. Using a delayed limiting process, show that

(f(0⁻) + f(0⁺))/2 = ∫_{−∞}^{∞} f(x)δ(x) dx.
Hint, Solution
Exercise 20.2
Show that the Dirac delta function is symmetric.
δ(−x) = δ(x)
Hint, Solution
Exercise 20.3
Show that

δ(cx) = δ(x)/|c|.
Hint, Solution
Exercise 20.4
We will consider the Dirac delta function with a function as an argument, δ(y(x)). Assume that y(x) has simple zeros at the points {x_n}.

y(x_n) = 0,    y'(x_n) ≠ 0

Further assume that y(x) has no multiple zeros. (If y(x) has multiple zeros δ(y(x)) is not well-defined in the same sense that 1/0 is not well-defined.) Prove that

δ(y(x)) = Σ_n δ(x − x_n)/|y'(x_n)|.
Hint, Solution
Exercise 20.5
Justify the identity

∫_{−∞}^{∞} f(x)δ^{(n)}(x) dx = (−1)^n f^{(n)}(0)

From this show that

δ^{(n)}(−x) = (−1)^n δ^{(n)}(x) and xδ^{(n)}(x) = −nδ^{(n−1)}(x).
Hint, Solution
Exercise 20.6
Consider x = (x1, . . . , xn) ∈ R^n and the curvilinear coordinate system ξ = (ξ1, . . . , ξn). Show that

δ(x − a) = δ(ξ − α)/|J|

where a and α are corresponding points in the two coordinate systems and J is the Jacobian of the transformation from x to ξ.

J ≡ ∂x/∂ξ
Exercise 20.7
Determine the Dirac delta function in spherical coordinates, (r, θ, φ).
x = r cos θ sin φ, y = r sin θ sin φ, z = r cos φ
Hint, Solution
20.6 Hints
Hint 20.1
Hint 20.2
Verify that δ(−x) satisfies the two properties of the Dirac delta function.
Hint 20.3
Evaluate the integral,

∫_{−∞}^{∞} f(x)δ(cx) dx,

by noting that the Dirac delta function is symmetric and making a change of variables.
Hint 20.4
Let the points {ξm} partition the interval (−∞ . . . ∞) such that y (x) is monotone on each interval
(ξm . . . ξm+1). Consider some such interval, (a . . . b) ≡ (ξm . . . ξm+1). Show that
b
a
δ(y(x)) dx =
β
α
δ(y)
|y (xn)| dy if y(xn) = 0 for a < xn < b
0 otherwise
for α = min(y(a), y(b)) and β = max(y(a), y(b)). Now consider the integral on the interval
(−∞ . . . ∞) as the sum of integrals on the intervals {(ξm . . . ξm+1)}.
Hint 20.5
Justify the identity,

∫_{−∞}^{∞} f(x)δ^{(n)}(x) dx = (−1)^n f^{(n)}(0),

with integration by parts.
Hint 20.6
The Dirac delta function is defined by the following two properties.

δ(x − a) = 0 for x ≠ a
∫_{R^n} δ(x − a) dx = 1
Verify that δ(ξ − α)/|J| satisfies these properties in the ξ coordinate system.
Hint 20.7
Consider the special cases φ0 = 0, π and r0 = 0.
20.7 Solutions
Solution 20.1
Let F'(x) = f(x).

∫_{−∞}^{∞} f(x)δ(x) dx = lim_{ε→0} ∫_{−∞}^{∞} f(x)b(x, ε) dx
= lim_{ε→0} (1/ε) (∫_{−ε/2}^0 f(x) dx + ∫_0^{ε/2} f(x) dx)
= lim_{ε→0} (1/ε) ((F(0) − F(−ε/2)) + (F(ε/2) − F(0)))
= lim_{ε→0} (1/2) ((F(0) − F(−ε/2))/(ε/2) + (F(ε/2) − F(0))/(ε/2))
= (F'(0⁻) + F'(0⁺))/2
= (f(0⁻) + f(0⁺))/2
Solution 20.2
δ(−x) satisfies the two properties of the Dirac delta function.

δ(−x) = 0 for x ≠ 0
∫_{−∞}^{∞} δ(−x) dx = ∫_{∞}^{−∞} δ(x) (−dx) = ∫_{−∞}^{∞} δ(x) dx = 1
Therefore δ(−x) = δ(x).
Solution 20.3
We note that the Dirac delta function is symmetric and we make a change of variables to derive the identity.

∫_{−∞}^{∞} δ(cx) dx = ∫_{−∞}^{∞} δ(|c|x) dx
= ∫_{−∞}^{∞} δ(x)/|c| dx

δ(cx) = δ(x)/|c|
Solution 20.4
Let the points {ξ_m} partition the interval (−∞ . . . ∞) such that y'(x) is monotone on each interval (ξ_m . . . ξ_{m+1}). Consider some such interval, (a . . . b) ≡ (ξ_m . . . ξ_{m+1}). Note that y'(x) is either entirely positive or entirely negative in the interval. First consider the case when it is positive. In this case y(a) < y(b).

∫_a^b δ(y(x)) dx = ∫_{y(a)}^{y(b)} δ(y) (dy/dx)⁻¹ dy
= ∫_{y(a)}^{y(b)} δ(y)/y'(x) dy
= { ∫_{y(a)}^{y(b)} δ(y)/y'(x_n) dy for y(x_n) = 0 if y(a) < 0 < y(b),
    0 otherwise

Now consider the case that y'(x) is negative on the interval so y(a) > y(b).

∫_a^b δ(y(x)) dx = ∫_{y(a)}^{y(b)} δ(y) (dy/dx)⁻¹ dy
= ∫_{y(a)}^{y(b)} δ(y)/y'(x) dy
= ∫_{y(b)}^{y(a)} δ(y)/(−y'(x)) dy
= { ∫_{y(b)}^{y(a)} δ(y)/(−y'(x_n)) dy for y(x_n) = 0 if y(b) < 0 < y(a),
    0 otherwise

We conclude that

∫_a^b δ(y(x)) dx = { ∫_α^β δ(y)/|y'(x_n)| dy if y(x_n) = 0 for a < x_n < b,
                     0 otherwise

for α = min(y(a), y(b)) and β = max(y(a), y(b)).
Now we turn to the integral of δ(y(x)) on (−∞ . . . ∞). Let α_m = min(y(ξ_m), y(ξ_{m+1})) and β_m = max(y(ξ_m), y(ξ_{m+1})).

∫_{−∞}^{∞} δ(y(x)) dx = Σ_m ∫_{ξ_m}^{ξ_{m+1}} δ(y(x)) dx
= Σ_{m : x_n ∈ (ξ_m . . . ξ_{m+1})} ∫_{ξ_m}^{ξ_{m+1}} δ(y(x)) dx
= Σ_{m : x_n ∈ (ξ_m . . . ξ_{m+1})} ∫_{α_m}^{β_m} δ(y)/|y'(x_n)| dy
= Σ_n ∫_{−∞}^{∞} δ(y)/|y'(x_n)| dy
= ∫_{−∞}^{∞} Σ_n δ(y)/|y'(x_n)| dy

δ(y(x)) = Σ_n δ(x − x_n)/|y'(x_n)|
Solution 20.5
To justify the identity,

∫_{−∞}^{∞} f(x)δ^{(n)}(x) dx = (−1)^n f^{(n)}(0),

we will use integration by parts.

∫_{−∞}^{∞} f(x)δ^{(n)}(x) dx = [f(x)δ^{(n−1)}(x)]_{−∞}^{∞} − ∫_{−∞}^{∞} f'(x)δ^{(n−1)}(x) dx
= −∫_{−∞}^{∞} f'(x)δ^{(n−1)}(x) dx
= (−1)^n ∫_{−∞}^{∞} f^{(n)}(x)δ(x) dx
= (−1)^n f^{(n)}(0)

CONTINUE HERE

δ^{(n)}(−x) = (−1)^n δ^{(n)}(x) and xδ^{(n)}(x) = −nδ^{(n−1)}(x).
Solution 20.6
The Dirac delta function is defined by the following two properties.

δ(x − a) = 0 for x ≠ a
∫_{R^n} δ(x − a) dx = 1

We verify that δ(ξ − α)/|J| satisfies these properties in the ξ coordinate system.

δ(ξ − α)/|J| = δ(ξ1 − α1) · · · δ(ξn − αn)/|J| = 0 for ξ ≠ α

∫ (δ(ξ − α)/|J|) |J| dξ = ∫ δ(ξ − α) dξ
= ∫ δ(ξ1 − α1) · · · δ(ξn − αn) dξ
= ∫ δ(ξ1 − α1) dξ1 · · · ∫ δ(ξn − αn) dξn
= 1

We conclude that δ(ξ − α)/|J| is the Dirac delta function in the ξ coordinate system.

δ(x − a) = δ(ξ − α)/|J|
Solution 20.7
We consider the Dirac delta function in spherical coordinates, (r, θ, φ). The Jacobian is J = r² sin(φ).

∫_0^{π} ∫_0^{2π} ∫_0^{∞} δ_3(x − x0) r² sin(φ) dr dθ dφ = 1

For r0 ≠ 0, and φ0 ≠ 0, π, the Dirac delta function is

δ_3(x − x0) = (1/(r² sin(φ))) δ(r − r0) δ(θ − θ0) δ(φ − φ0)

since it satisfies the two defining properties.

(1/(r² sin(φ))) δ(r − r0) δ(θ − θ0) δ(φ − φ0) = 0 for (r, θ, φ) ≠ (r0, θ0, φ0)

∫_0^{π} ∫_0^{2π} ∫_0^{∞} (1/(r² sin(φ))) δ(r − r0) δ(θ − θ0) δ(φ − φ0) r² sin(φ) dr dθ dφ
= ∫_0^{∞} δ(r − r0) dr ∫_0^{2π} δ(θ − θ0) dθ ∫_0^{π} δ(φ − φ0) dφ = 1

For φ0 = 0 or φ0 = π, the Dirac delta function is

δ_3(x − x0) = (1/(2πr² sin(φ))) δ(r − r0) δ(φ − φ0).

We check that the value of the integral is unity.

∫_0^{π} ∫_0^{2π} ∫_0^{∞} (1/(2πr² sin(φ))) δ(r − r0) δ(φ − φ0) r² sin(φ) dr dθ dφ
= (1/(2π)) ∫_0^{∞} δ(r − r0) dr ∫_0^{2π} dθ ∫_0^{π} δ(φ − φ0) dφ = 1

For r0 = 0 the Dirac delta function is

δ_3(x) = (1/(4πr²)) δ(r)

We verify that the value of the integral is unity.

∫_0^{π} ∫_0^{2π} ∫_0^{∞} (1/(4πr²)) δ(r) r² sin(φ) dr dθ dφ = (1/(4π)) ∫_0^{∞} δ(r) dr ∫_0^{2π} dθ ∫_0^{π} sin(φ) dφ = 1
Chapter 21
Inhomogeneous Differential
Equations
Feelin’ stupid? I know I am!
-Homer Simpson
21.1 Particular Solutions
Consider the nth order linear homogeneous equation

L[y] ≡ y^{(n)} + p_{n−1}(x)y^{(n−1)} + · · · + p1(x)y' + p0(x)y = 0.

Let {y1, y2, . . . , yn} be a set of linearly independent homogeneous solutions, L[y_k] = 0. We know that the general solution of the homogeneous equation is a linear combination of the homogeneous solutions.

y_h = Σ_{k=1}^{n} c_k y_k(x)

Now consider the nth order linear inhomogeneous equation

L[y] ≡ y^{(n)} + p_{n−1}(x)y^{(n−1)} + · · · + p1(x)y' + p0(x)y = f(x).

Any function y_p which satisfies this equation is called a particular solution of the differential equation. We want to know the general solution of the inhomogeneous equation. Later in this chapter we will cover methods of constructing this solution; now we consider the form of the solution.
Let y_p be a particular solution. Note that y_p + h is a particular solution if h satisfies the homogeneous equation.

L[y_p + h] = L[y_p] + L[h] = f + 0 = f

Therefore y_p + y_h satisfies the inhomogeneous equation. We show that this is the general solution of the inhomogeneous equation. Let y_p and η_p both be solutions of the inhomogeneous equation L[y] = f. The difference of y_p and η_p is a homogeneous solution.

L[y_p − η_p] = L[y_p] − L[η_p] = f − f = 0

y_p and η_p differ by a linear combination of the homogeneous solutions {y_k}. Therefore the general solution of L[y] = f is the sum of any particular solution y_p and the general homogeneous solution y_h.

y_p + y_h = y_p(x) + Σ_{k=1}^{n} c_k y_k(x)
Result 21.1.1 The general solution of the nth order linear inhomogeneous equation L[y] = f(x) is

y = y_p + c1 y1 + c2 y2 + · · · + cn yn,

where y_p is a particular solution, {y1, . . . , yn} is a set of linearly independent homogeneous solutions, and the c_k's are arbitrary constants.
Example 21.1.1 The differential equation

y'' + y = sin(2x)

has the two homogeneous solutions

y1 = cos x,    y2 = sin x,

and a particular solution

y_p = −(1/3) sin(2x).

We can add any combination of the homogeneous solutions to y_p and it will still be a particular solution. For example,

η_p = −(1/3) sin(2x) − (1/3) sin x
    = −(2/3) sin(3x/2) cos(x/2)

is a particular solution.
21.2 Method of Undetermined Coefficients
The first method we present for computing particular solutions is the method of undetermined
coefficients. For some simple differential equations, (primarily constant coefficient equations), and
some simple inhomogeneities we are able to guess the form of a particular solution. This form
will contain some unknown parameters. We substitute this form into the differential equation to
determine the parameters and thus determine a particular solution.
Later in this chapter we will present general methods which work for any linear differential
equation and any inhomogeneity. Thus one might wonder why I would present a method that works
only for some simple problems. (And why it is called a “method” if it amounts to no more than
guessing.) The answer is that guessing an answer is less grungy than computing it with the formulas
we will develop later. Also, the process of this guessing is not random, there is rhyme and reason to
it.
Consider an nth order constant coefficient, inhomogeneous equation.

L[y] ≡ y^{(n)} + a_{n−1}y^{(n−1)} + · · · + a1 y' + a0 y = f(x)

If f(x) is one of a few simple forms, then we can guess the form of a particular solution. Below we enumerate some cases.

f = p(x). If f is an mth order polynomial, f(x) = p_m x^m + · · · + p1 x + p0, then guess

y_p = c_m x^m + · · · + c1 x + c0.

f = p(x) e^{ax}. If f is a polynomial times an exponential then guess

y_p = (c_m x^m + · · · + c1 x + c0) e^{ax}.

f = p(x) e^{ax} cos(bx). If f is a cosine or sine times a polynomial and perhaps an exponential, f(x) = p(x) e^{ax} cos(bx) or f(x) = p(x) e^{ax} sin(bx), then guess

y_p = (c_m x^m + · · · + c1 x + c0) e^{ax} cos(bx) + (d_m x^m + · · · + d1 x + d0) e^{ax} sin(bx).

Likewise for hyperbolic sines and hyperbolic cosines.
Example 21.2.1 Consider

y'' − 2y' + y = t².

The homogeneous solutions are y1 = e^t and y2 = t e^t. We guess a particular solution of the form

y_p = at² + bt + c.

We substitute the expression into the differential equation and equate coefficients of powers of t to determine the parameters.

y_p'' − 2y_p' + y_p = t²
(2a) − 2(2at + b) + (at² + bt + c) = t²
(a − 1)t² + (b − 4a)t + (2a − 2b + c) = 0
a − 1 = 0,    b − 4a = 0,    2a − 2b + c = 0
a = 1,    b = 4,    c = 6

A particular solution is

y_p = t² + 4t + 6.
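A one-line symbolic check confirms the guess. The sketch is illustrative, not part of the text; it assumes SymPy is available.

```python
# Verify that y_p = t^2 + 4t + 6 satisfies y'' - 2y' + y = t^2.
import sympy as sp

t = sp.symbols('t')
yp = t**2 + 4*t + 6
print(sp.simplify(yp.diff(t, 2) - 2*yp.diff(t) + yp))  # prints t**2
```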
If the inhomogeneity is a sum of terms, L[y] = f ≡ f1 + · · · + fk, you can solve the problems
L[y] = f1, . . . , L[y] = fk independently and then take the sum of the solutions as a particular
solution of L[y] = f.
Example 21.2.2 Consider

L[y] ≡ y'' − 2y' + y = t² + e^{2t}.    (21.1)

The homogeneous solutions are y1 = e^t and y2 = t e^t. We already know a particular solution to L[y] = t². We seek a particular solution to L[y] = e^{2t}. We guess a particular solution of the form

y_p = a e^{2t}.

We substitute the expression into the differential equation to determine the parameter.

y_p'' − 2y_p' + y_p = e^{2t}
4a e^{2t} − 4a e^{2t} + a e^{2t} = e^{2t}
a = 1

A particular solution of L[y] = e^{2t} is y_p = e^{2t}. Thus a particular solution of Equation 21.1 is

y_p = t² + 4t + 6 + e^{2t}.
The above guesses will not work if the inhomogeneity is a homogeneous solution. In this case,
multiply the guess by the lowest power of x such that the guess does not contain homogeneous
solutions.
Example 21.2.3 Consider

L[y] ≡ y'' − 2y' + y = e^t.

The homogeneous solutions are y1 = e^t and y2 = t e^t. Guessing a particular solution of the form y_p = a e^t would not work because L[e^t] = 0. We guess a particular solution of the form

y_p = at² e^t

We substitute the expression into the differential equation and equate coefficients of like terms to determine the parameters.

y_p'' − 2y_p' + y_p = e^t
(at² + 4at + 2a) e^t − 2(at² + 2at) e^t + at² e^t = e^t
2a e^t = e^t
a = 1/2

A particular solution is

y_p = (t²/2) e^t.
Example 21.2.4 Consider

y'' + (1/x) y' + (1/x²) y = x,    x > 0.

The homogeneous solutions are y1 = cos(ln x) and y2 = sin(ln x). We guess a particular solution of the form

y_p = ax³

We substitute the expression into the differential equation and equate coefficients of like terms to determine the parameter.

y_p'' + (1/x) y_p' + (1/x²) y_p = x
6ax + 3ax + ax = x
a = 1/10

A particular solution is

y_p = x³/10.
21.3 Variation of Parameters
In this section we present a method for computing a particular solution of an inhomogeneous equation given that we know the homogeneous solutions. We will first consider second order equations and then generalize the result for nth order equations.
21.3.1 Second Order Differential Equations
Consider the second order inhomogeneous equation,

L[y] ≡ y'' + p(x)y' + q(x)y = f(x), on a < x < b.

We assume that the coefficient functions in the differential equation are continuous on [a . . . b]. Let y1(x) and y2(x) be two linearly independent solutions to the homogeneous equation. Since the Wronskian,

W(x) = exp(−∫ p(x) dx),

is non-vanishing, we know that these solutions exist. We seek a particular solution of the form,

y_p = u1(x)y1 + u2(x)y2.

We compute the derivatives of y_p.

y_p' = u1'y1 + u1y1' + u2'y2 + u2y2'
y_p'' = u1''y1 + 2u1'y1' + u1y1'' + u2''y2 + 2u2'y2' + u2y2''

We substitute the expression for y_p and its derivatives into the inhomogeneous equation and use the fact that y1 and y2 are homogeneous solutions to simplify the equation.

u1''y1 + 2u1'y1' + u1y1'' + u2''y2 + 2u2'y2' + u2y2'' + p(u1'y1 + u1y1' + u2'y2 + u2y2') + q(u1y1 + u2y2) = f
u1''y1 + 2u1'y1' + u2''y2 + 2u2'y2' + p(u1'y1 + u2'y2) = f

This is an ugly equation for u1 and u2, however, we have an ace up our sleeve. Since u1 and u2 are undetermined functions of x, we are free to impose a constraint. We choose this constraint to simplify the algebra.

u1'y1 + u2'y2 = 0

This constraint simplifies the derivatives of y_p,

y_p' = u1'y1 + u1y1' + u2'y2 + u2y2' = u1y1' + u2y2'
y_p'' = u1'y1' + u1y1'' + u2'y2' + u2y2''.

We substitute the new expressions for y_p and its derivatives into the inhomogeneous differential equation to obtain a much simpler equation than before.

u1'y1' + u1y1'' + u2'y2' + u2y2'' + p(u1y1' + u2y2') + q(u1y1 + u2y2) = f(x)
u1'y1' + u2'y2' + u1L[y1] + u2L[y2] = f(x)
u1'y1' + u2'y2' = f(x).

With the constraint, we have a system of linear equations for u1' and u2'.

u1'y1 + u2'y2 = 0
u1'y1' + u2'y2' = f(x).

( y1   y2  ) ( u1' )   ( 0 )
( y1'  y2' ) ( u2' ) = ( f )

We solve this system using Cramer's rule. (See Appendix O.)

u1' = −f(x)y2/W(x),    u2' = f(x)y1/W(x)

Here W(x) is the Wronskian.

W(x) = | y1   y2  |
       | y1'  y2' |

We integrate to get u1 and u2. This gives us a particular solution.

y_p = −y1 ∫ f(x)y2(x)/W(x) dx + y2 ∫ f(x)y1(x)/W(x) dx.
Result 21.3.1 Let y1 and y2 be linearly independent homogeneous solutions of

L[y] = y'' + p(x)y' + q(x)y = f(x).

A particular solution is

y_p = −y1(x) ∫ f(x)y2(x)/W(x) dx + y2(x) ∫ f(x)y1(x)/W(x) dx,

where W(x) is the Wronskian of y1 and y2.
Example 21.3.1 Consider the equation,

y'' + y = cos(2x).

The homogeneous solutions are y1 = cos x and y2 = sin x. We compute the Wronskian.

W(x) = | cos x   sin x |
       | −sin x  cos x | = cos² x + sin² x = 1

We use variation of parameters to find a particular solution.

y_p = −cos(x) ∫ cos(2x) sin(x) dx + sin(x) ∫ cos(2x) cos(x) dx
= −(1/2) cos(x) ∫ (sin(3x) − sin(x)) dx + (1/2) sin(x) ∫ (cos(3x) + cos(x)) dx
= −(1/2) cos(x) (−(1/3) cos(3x) + cos(x)) + (1/2) sin(x) ((1/3) sin(3x) + sin(x))
= (1/2) (sin²(x) − cos²(x)) + (1/6) (cos(3x) cos(x) + sin(3x) sin(x))
= −(1/2) cos(2x) + (1/6) cos(2x)
= −(1/3) cos(2x)

The general solution of the inhomogeneous equation is

y = −(1/3) cos(2x) + c1 cos(x) + c2 sin(x).
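The formula of Result 21.3.1 can be applied mechanically. The sketch below is illustrative, not part of the text; it assumes SymPy is available and reproduces Example 21.3.1.

```python
# Variation of parameters for y'' + y = cos(2x) with y1 = cos x, y2 = sin x.
import sympy as sp

x = sp.symbols('x')
y1, y2, f = sp.cos(x), sp.sin(x), sp.cos(2*x)
W = sp.simplify(y1*sp.diff(y2, x) - y2*sp.diff(y1, x))   # Wronskian = 1
yp = -y1*sp.integrate(f*y2/W, x) + y2*sp.integrate(f*y1/W, x)
print(sp.simplify(yp))   # -cos(2x)/3, possibly in an equivalent trig form
```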
21.3.2 Higher Order Differential Equations
Consider the nth order inhomogeneous equation,

L[y] = y^{(n)} + p_{n−1}(x)y^{(n−1)} + · · · + p1(x)y' + p0(x)y = f(x), on a < x < b.

We assume that the coefficient functions in the differential equation are continuous on [a . . . b]. Let {y1, . . . , yn} be a set of linearly independent solutions to the homogeneous equation. Since the Wronskian,

W(x) = exp(−∫ p_{n−1}(x) dx),

is non-vanishing, we know that these solutions exist. We seek a particular solution of the form

y_p = u1y1 + u2y2 + · · · + un yn.

Since {u1, . . . , un} are undetermined functions of x, we are free to impose n − 1 constraints. We choose these constraints to simplify the algebra.

u1'y1 + u2'y2 + · · · + un'yn = 0
u1'y1' + u2'y2' + · · · + un'yn' = 0
. . .
u1'y1^{(n−2)} + u2'y2^{(n−2)} + · · · + un'yn^{(n−2)} = 0

We differentiate the expression for y_p, utilizing our constraints.

y_p' = u1y1' + u2y2' + · · · + un yn'
y_p'' = u1y1'' + u2y2'' + · · · + un yn''
. . .
y_p^{(n)} = u1y1^{(n)} + u2y2^{(n)} + · · · + un yn^{(n)} + u1'y1^{(n−1)} + u2'y2^{(n−1)} + · · · + un'yn^{(n−1)}

We substitute y_p and its derivatives into the inhomogeneous differential equation and use the fact that the y_k are homogeneous solutions.

u1y1^{(n)} + · · · + un yn^{(n)} + u1'y1^{(n−1)} + · · · + un'yn^{(n−1)} + p_{n−1}(u1y1^{(n−1)} + · · · + un yn^{(n−1)}) + · · · + p0(u1y1 + · · · + un yn) = f
u1L[y1] + u2L[y2] + · · · + unL[yn] + u1'y1^{(n−1)} + u2'y2^{(n−1)} + · · · + un'yn^{(n−1)} = f
u1'y1^{(n−1)} + u2'y2^{(n−1)} + · · · + un'yn^{(n−1)} = f.

With the constraints, we have a system of linear equations for {u1', . . . , un'}.

( y1         y2         · · ·  yn         ) ( u1' )   ( 0 )
( y1'        y2'        · · ·  yn'        ) ( u2' )   ( . )
( . . .                                    ) ( . .  ) = ( 0 )
( y1^{(n−1)}  y2^{(n−1)}  · · ·  yn^{(n−1)} ) ( un' )   ( f )

We solve this system using Cramer's rule. (See Appendix O.)

uk' = (−1)^{n+k+1} (W[y1, . . . , y_{k−1}, y_{k+1}, . . . , yn] / W[y1, y2, . . . , yn]) f, for k = 1, . . . , n.

Here W is the Wronskian.
We integrate to obtain the uk's.

uk = ∫ (−1)^{n+k+1} (W[y1, . . . , y_{k−1}, y_{k+1}, . . . , yn](x) / W[y1, y2, . . . , yn](x)) f(x) dx, for k = 1, . . . , n
Result 21.3.2 Let {y1, . . . , yn} be linearly independent homogeneous solutions of

L[y] = y^{(n)} + p_{n−1}(x)y^{(n−1)} + · · · + p1(x)y' + p0(x)y = f(x), on a < x < b.

A particular solution is

y_p = u1y1 + u2y2 + · · · + un yn,

where

uk = ∫ (−1)^{n+k+1} (W[y1, . . . , y_{k−1}, y_{k+1}, . . . , yn](x) / W[y1, y2, . . . , yn](x)) f(x) dx, for k = 1, . . . , n,

and W[y1, y2, . . . , yn](x) is the Wronskian of {y1(x), . . . , yn(x)}.
21.4 Piecewise Continuous Coefficients and Inhomogeneities
Example 21.4.1 Consider the problem

y'' − y = e^{−α|x|},    y(±∞) = 0,    α > 0, α ≠ 1.

The homogeneous solutions of the differential equation are e^x and e^{−x}. We use variation of parameters to find a particular solution for x > 0.

y_p = −e^x ∫^x (e^{−ξ} e^{−αξ}/(−2)) dξ + e^{−x} ∫^x (e^{ξ} e^{−αξ}/(−2)) dξ
= (1/2) e^x ∫^x e^{−(α+1)ξ} dξ − (1/2) e^{−x} ∫^x e^{(1−α)ξ} dξ
= −(1/(2(α + 1))) e^{−αx} + (1/(2(α − 1))) e^{−αx}
= e^{−αx}/(α² − 1), for x > 0

A particular solution for x < 0 is

y_p = e^{αx}/(α² − 1), for x < 0.

Thus a particular solution is

y_p = e^{−α|x|}/(α² − 1).

The general solution is

y = e^{−α|x|}/(α² − 1) + c1 e^x + c2 e^{−x}.

Applying the boundary conditions, we see that c1 = c2 = 0. Apparently the solution is

y = e^{−α|x|}/(α² − 1).

This function is plotted in Figure 21.1. This function satisfies the differential equation for positive and negative x. It also satisfies the boundary conditions. However, this is NOT a solution to the differential equation. Since the differential equation has no singular points and the inhomogeneous term is continuous, the solution must be twice continuously differentiable. Since the derivative of e^{−α|x|}/(α² − 1) has a jump discontinuity at x = 0, the second derivative does not exist. Thus this function could not possibly be a solution to the differential equation. In the next example we examine the right way to solve this problem.

[Figure 21.1: The Incorrect and Correct Solution to the Differential Equation.]
Example 21.4.2 Again consider

y'' − y = e^{−α|x|},    y(±∞) = 0,    α > 0, α ≠ 1.

Separating this into two problems for positive and negative x,

y−'' − y− = e^{αx},    y−(−∞) = 0, on −∞ < x ≤ 0,
y+'' − y+ = e^{−αx},    y+(∞) = 0, on 0 ≤ x < ∞.

In order for the solution over the whole domain to be twice differentiable, the solution and its first derivative must be continuous. Thus we impose the additional boundary conditions

y−(0) = y+(0),    y−'(0) = y+'(0).

The solutions that satisfy the two differential equations and the boundary conditions at infinity are

y− = e^{αx}/(α² − 1) + c− e^x,    y+ = e^{−αx}/(α² − 1) + c+ e^{−x}.

The two additional boundary conditions give us the equations

y−(0) = y+(0) → c− = c+
y−'(0) = y+'(0) → α/(α² − 1) + c− = −α/(α² − 1) − c+.

We solve these two equations to determine c− and c+.

c− = c+ = −α/(α² − 1)

Thus the solution over the whole domain is

y = { (e^{αx} − α e^x)/(α² − 1) for x < 0,
      (e^{−αx} − α e^{−x})/(α² − 1) for x > 0

y = (e^{−α|x|} − α e^{−|x|})/(α² − 1).

This function is plotted in Figure 21.1. You can verify that this solution is twice continuously differentiable.
21.5 Inhomogeneous Boundary Conditions
21.5.1 Eliminating Inhomogeneous Boundary Conditions
Consider the nth order equation
    L[y] = f(x), for a < x < b,
subject to the linear inhomogeneous boundary conditions
    Bj[y] = γj, for j = 1, . . . , n,
where the boundary conditions are of the form
    B[y] ≡ α0 y(a) + α1 y'(a) + · · · + α_(n−1) y^(n−1)(a) + β0 y(b) + β1 y'(b) + · · · + β_(n−1) y^(n−1)(b).
Let g(x) be an n-times continuously differentiable function that satisfies the boundary conditions. Substituting y = u + g into the differential equation and boundary conditions yields
    L[u] = f(x) − L[g], Bj[u] = γj − Bj[g] = 0 for j = 1, . . . , n.
Note that the problem for u has homogeneous boundary conditions. Thus a problem with inhomogeneous boundary conditions can be reduced to one with homogeneous boundary conditions. This technique is of limited usefulness for ordinary differential equations but is important for solving some partial differential equation problems.
Example 21.5.1 Consider the problem
    y'' + y = cos 2x, y(0) = 1, y(π) = 2.
g(x) = x/π + 1 satisfies the boundary conditions. Substituting y = u + g yields
    u'' + u = cos 2x − x/π − 1, u(0) = u(π) = 0.

Example 21.5.2 Consider
    y'' + y = cos 2x, y'(0) = y(π) = 1.
g(x) = sin x − cos x satisfies the inhomogeneous boundary conditions. Substituting y = u + sin x − cos x yields
    u'' + u = cos 2x, u'(0) = u(π) = 0.
Note that since g(x) satisfies the homogeneous equation, the inhomogeneous term in the equation for u is the same as that in the equation for y.

Example 21.5.3 Consider
    y'' + y = cos 2x, y(0) = 2/3, y(π) = −4/3.
g(x) = cos x − 1/3 satisfies the boundary conditions. Substituting y = u + cos x − 1/3 yields
    u'' + u = cos 2x + 1/3, u(0) = u(π) = 0.
Result 21.5.1 The nth order differential equation with boundary conditions
    L[y] = f(x), Bj[y] = γj, for j = 1, . . . , n
has the solution y = u + g where u satisfies
    L[u] = f(x) − L[g], Bj[u] = 0, for j = 1, . . . , n
and g is any n-times continuously differentiable function that satisfies the inhomogeneous boundary conditions.
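The reduction is mechanical enough to script. A sketch (SymPy; Example 21.5.1 is used as the test case):

```python
# Example 21.5.1: with g(x) = x/pi + 1, the substitution y = u + g turns
# y'' + y = cos(2x), y(0) = 1, y(pi) = 2 into a problem for u with
# homogeneous boundary conditions and right-hand side cos(2x) - L[g].
import sympy as sp

x = sp.symbols('x')
g = x/sp.pi + 1
Lg = sp.diff(g, x, 2) + g                # L[g] for L[y] = y'' + y

print(g.subs(x, 0), g.subs(x, sp.pi))    # 1, 2: g meets the original BCs
print(sp.simplify(sp.cos(2*x) - Lg))     # cos(2x) - x/pi - 1: the new RHS
```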
21.5.2 Separating Inhomogeneous Equations and Inhomogeneous Boundary Conditions
Now consider a problem with inhomogeneous boundary conditions
L[y] = f(x), B1[y] = γ1, B2[y] = γ2.
In order to solve this problem, we solve the two problems
L[u] = f(x), B1[u] = B2[u] = 0, and
L[v] = 0, B1[v] = γ1, B2[v] = γ2.
The solution for the problem with an inhomogeneous equation and inhomogeneous boundary con-
ditions will be the sum of u and v. To verify this,
L[u + v] = L[u] + L[v] = f(x) + 0 = f(x),
Bi[u + v] = Bi[u] + Bi[v] = 0 + γi = γi.
This will be a useful technique when we develop Green functions.
Result 21.5.2 The solution to
L[y] = f(x), B1[y] = γ1, B2[y] = γ2,
is y = u + v where
L[u] = f(x), B1[u] = 0, B2[u] = 0, and
L[v] = 0, B1[v] = γ1, B2[v] = γ2.
21.5.3 Existence of Solutions of Problems with Inhomogeneous Boundary
Conditions
Consider the nth order linear differential equation
    L[y] = y^(n) + p_(n−1) y^(n−1) + · · · + p1 y' + p0 y = f(x), for a < x < b,
subject to the n inhomogeneous boundary conditions
    Bj[y] = γj, for j = 1, . . . , n
where each boundary condition is of the form
    B[y] ≡ α0 y(a) + α1 y'(a) + · · · + α_(n−1) y^(n−1)(a) + β0 y(b) + β1 y'(b) + · · · + β_(n−1) y^(n−1)(b).
We assume that the coefficients in the differential equation are continuous on [a, b]. Since the Wronskian of the solutions of the differential equation,
    W(x) = exp(−∫ p_(n−1)(x) dx),
is non-vanishing on [a, b], there are n linearly independent solutions on that range. Let {y1, . . . , yn} be a set of linearly independent solutions of the homogeneous equation. From Result 21.3.2 we know that a particular solution yp exists. The general solution of the differential equation is
    y = yp + c1y1 + c2y2 + · · · + cnyn.
The n boundary conditions impose the matrix equation,

    [ B1[y1]  B1[y2]  ...  B1[yn] ] [ c1 ]   [ γ1 − B1[yp] ]
    [ B2[y1]  B2[y2]  ...  B2[yn] ] [ c2 ]   [ γ2 − B2[yp] ]
    [   :       :            :    ] [ :  ] = [      :       ]
    [ Bn[y1]  Bn[y2]  ...  Bn[yn] ] [ cn ]   [ γn − Bn[yp] ]
This equation has a unique solution if and only if the equation

    [ B1[y1]  B1[y2]  ...  B1[yn] ] [ c1 ]   [ 0 ]
    [ B2[y1]  B2[y2]  ...  B2[yn] ] [ c2 ]   [ 0 ]
    [   :       :            :    ] [ :  ] = [ : ]
    [ Bn[y1]  Bn[y2]  ...  Bn[yn] ] [ cn ]   [ 0 ]
has only the trivial solution. (This is the case if and only if the determinant of the matrix is nonzero.)
Thus the problem
    L[y] = y^(n) + p_(n−1) y^(n−1) + · · · + p1 y' + p0 y = f(x), for a < x < b,
subject to the n inhomogeneous boundary conditions
    Bj[y] = γj, for j = 1, . . . , n,
has a unique solution if and only if the problem
    L[y] = y^(n) + p_(n−1) y^(n−1) + · · · + p1 y' + p0 y = 0, for a < x < b,
subject to the n homogeneous boundary conditions
    Bj[y] = 0, for j = 1, . . . , n,
has only the trivial solution.
Result 21.5.3 The problem
    L[y] = y^(n) + p_(n−1) y^(n−1) + · · · + p1 y' + p0 y = f(x), for a < x < b,
subject to the n inhomogeneous boundary conditions
    Bj[y] = γj, for j = 1, . . . , n,
has a unique solution if and only if the problem
    L[y] = y^(n) + p_(n−1) y^(n−1) + · · · + p1 y' + p0 y = 0, for a < x < b,
subject to
    Bj[y] = 0, for j = 1, . . . , n,
has only the trivial solution.
21.6 Green Functions for First Order Equations
Consider the first order inhomogeneous equation
    L[y] ≡ y' + p(x)y = f(x), for x > a, (21.2)
subject to a homogeneous initial condition, B[y] ≡ y(a) = 0.
The Green function G(x|ξ) is defined as the solution to
    L[G(x|ξ)] = δ(x − ξ) subject to G(a|ξ) = 0.
We can represent the solution to the inhomogeneous problem in Equation 21.2 as an integral involving the Green function. To show that
    y(x) = ∫_a^∞ G(x|ξ) f(ξ) dξ
is the solution, we apply the linear operator L to the integral. (Assume that the integral is uniformly convergent.)
    L[ ∫_a^∞ G(x|ξ) f(ξ) dξ ] = ∫_a^∞ L[G(x|ξ)] f(ξ) dξ
                              = ∫_a^∞ δ(x − ξ) f(ξ) dξ
                              = f(x)
The integral also satisfies the initial condition.
    B[ ∫_a^∞ G(x|ξ) f(ξ) dξ ] = ∫_a^∞ B[G(x|ξ)] f(ξ) dξ
                              = ∫_a^∞ (0) f(ξ) dξ
                              = 0
Now we consider the qualitative behavior of the Green function. For x ≠ ξ, the Green function is simply a homogeneous solution of the differential equation; however, at x = ξ we expect some singular behavior. G'(x|ξ) will have a Dirac delta function type singularity. This means that G(x|ξ) will have a jump discontinuity at x = ξ. We integrate the differential equation on the vanishing interval (ξ− . . . ξ+) to determine this jump.
    G' + p(x)G = δ(x − ξ)
    G(ξ+|ξ) − G(ξ−|ξ) + ∫_(ξ−)^(ξ+) p(x)G(x|ξ) dx = 1
    G(ξ+|ξ) − G(ξ−|ξ) = 1 (21.3)
The homogeneous solution of the differential equation is
    yh = e^(−∫ p(x) dx).
Since the Green function satisfies the homogeneous equation for x ≠ ξ, it will be a constant times this homogeneous solution for x < ξ and x > ξ.
    G(x|ξ) = { c1 e^(−∫ p(x) dx)  a < x < ξ
               c2 e^(−∫ p(x) dx)  ξ < x
In order to satisfy the homogeneous initial condition G(a|ξ) = 0, the Green function must vanish on the interval (a . . . ξ).
    G(x|ξ) = { 0                   a < x < ξ
               c e^(−∫ p(x) dx)    ξ < x
The jump condition, (Equation 21.3), gives us the constraint G(ξ+|ξ) = 1. This determines the constant in the homogeneous solution for x > ξ.
    G(x|ξ) = { 0                     a < x < ξ
               e^(−∫_ξ^x p(t) dt)    ξ < x
We can use the Heaviside function to write the Green function without using a case statement.
    G(x|ξ) = e^(−∫_ξ^x p(t) dt) H(x − ξ)
Clearly the Green function is of little value in solving the inhomogeneous differential equation in
Equation 21.2, as we can solve that problem directly. However, we will encounter first order Green
function problems in solving some partial differential equations.
Result 21.6.1 The first order inhomogeneous differential equation with homogeneous initial condition
    L[y] ≡ y' + p(x)y = f(x), for a < x, y(a) = 0,
has the solution
    y = ∫_a^∞ G(x|ξ) f(ξ) dξ,
where G(x|ξ) satisfies the equation
    L[G(x|ξ)] = δ(x − ξ), for a < x, G(a|ξ) = 0.
The Green function is
    G(x|ξ) = e^(−∫_ξ^x p(t) dt) H(x − ξ).
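A numerical sketch of Result 21.6.1 (our own example: p(x) = 1, f(x) = sin x, a = 0; quad and solve_ivp from SciPy are assumed available):

```python
# For y' + y = sin(x), y(0) = 0, the Green function is
# G(x|xi) = exp(-(x - xi)) H(x - xi), so y(x) = int_0^x exp(-(x - xi)) f(xi) dxi.
import numpy as np
from scipy.integrate import quad, solve_ivp

f = lambda x: np.sin(x)

def y_green(x):
    val, _ = quad(lambda xi: np.exp(-(x - xi))*f(xi), 0.0, x)
    return val

# Compare against a direct numerical solve of the same initial value problem.
sol = solve_ivp(lambda x, y: [-y[0] + f(x)], (0.0, 5.0), [0.0],
                dense_output=True, rtol=1e-9, atol=1e-12)
for x in (1.0, 2.5, 5.0):
    print(x, y_green(x), sol.sol(x)[0])   # the two values agree
```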
21.7 Green Functions for Second Order Equations
Consider the second order inhomogeneous equation
    L[y] = y'' + p(x)y' + q(x)y = f(x), for a < x < b, (21.4)
subject to the homogeneous boundary conditions
    B1[y] = B2[y] = 0.
The Green function G(x|ξ) is defined as the solution to
    L[G(x|ξ)] = δ(x − ξ) subject to B1[G] = B2[G] = 0.
The Green function is useful because you can represent the solution to the inhomogeneous problem in Equation 21.4 as an integral involving the Green function. To show that
    y(x) = ∫_a^b G(x|ξ) f(ξ) dξ
is the solution, we apply the linear operator L to the integral. (Assume that the integral is uniformly convergent.)
    L[ ∫_a^b G(x|ξ) f(ξ) dξ ] = ∫_a^b L[G(x|ξ)] f(ξ) dξ
                              = ∫_a^b δ(x − ξ) f(ξ) dξ
                              = f(x)
The integral also satisfies the boundary conditions.
    Bi[ ∫_a^b G(x|ξ) f(ξ) dξ ] = ∫_a^b Bi[G(x|ξ)] f(ξ) dξ
                               = ∫_a^b [0] f(ξ) dξ
                               = 0
One of the advantages of using Green functions is that once you find the Green function for a linear operator and certain homogeneous boundary conditions,
    L[G] = δ(x − ξ), B1[G] = B2[G] = 0,
you can write the solution for any inhomogeneity, f(x).
    L[y] = f(x), B1[y] = B2[y] = 0
You do not need to do any extra work to obtain the solution for a different inhomogeneous term.
Qualitatively, what kind of behavior will the Green function for a second order differential equa-
tion have? Will it have a delta function singularity; will it be continuous? To answer these questions
we will first look at the behavior of integrals and derivatives of δ(x).
The integral of δ(x) is the Heaviside function, H(x).
    H(x) = ∫_(−∞)^x δ(t) dt = { 0 for x < 0
                                1 for x > 0
The integral of the Heaviside function is the ramp function, r(x).
    r(x) = ∫_(−∞)^x H(t) dt = { 0 for x < 0
                                x for x > 0
The derivative of the delta function is zero for x ≠ 0. At x = 0 it goes from 0 up to +∞, down to −∞ and then back up to 0.
In Figure 21.2 we see conceptually the behavior of the ramp function, the Heaviside function, the delta function, and the derivative of the delta function.
We write the differential equation for the Green function.
    G''(x|ξ) + p(x)G'(x|ξ) + q(x)G(x|ξ) = δ(x − ξ)
We see that only the G''(x|ξ) term can have a delta function type singularity. If one of the other terms had a delta function type singularity then G''(x|ξ) would be more singular than a delta function and there would be nothing in the right hand side of the equation to match this kind of singularity. Analogous to the progression from a delta function to a Heaviside function to a ramp function, we see that G'(x|ξ) will have a jump discontinuity and G(x|ξ) will be continuous.

[Figure 21.2: r(x), H(x), δ(x) and (d/dx)δ(x)]
Let y1 and y2 be two linearly independent solutions to the homogeneous equation, L[y] = 0. Since the Green function satisfies the homogeneous equation for x ≠ ξ, it will be a linear combination of the homogeneous solutions.
    G(x|ξ) = { c1y1 + c2y2 for x < ξ
               d1y1 + d2y2 for x > ξ
We require that G(x|ξ) be continuous.
    G(x|ξ)|_(x→ξ−) = G(x|ξ)|_(x→ξ+)
We can write this in terms of the homogeneous solutions.
    c1y1(ξ) + c2y2(ξ) = d1y1(ξ) + d2y2(ξ)
We integrate L[G(x|ξ)] = δ(x − ξ) from ξ− to ξ+.
    ∫_(ξ−)^(ξ+) [G''(x|ξ) + p(x)G'(x|ξ) + q(x)G(x|ξ)] dx = ∫_(ξ−)^(ξ+) δ(x − ξ) dx.
Since G(x|ξ) is continuous and G'(x|ξ) has only a jump discontinuity, two of the terms vanish.
    ∫_(ξ−)^(ξ+) p(x)G'(x|ξ) dx = 0 and ∫_(ξ−)^(ξ+) q(x)G(x|ξ) dx = 0
    ∫_(ξ−)^(ξ+) G''(x|ξ) dx = ∫_(ξ−)^(ξ+) δ(x − ξ) dx
    [G'(x|ξ)]_(ξ−)^(ξ+) = [H(x − ξ)]_(ξ−)^(ξ+)
    G'(ξ+|ξ) − G'(ξ−|ξ) = 1
We write this jump condition in terms of the homogeneous solutions.
    d1y1'(ξ) + d2y2'(ξ) − c1y1'(ξ) − c2y2'(ξ) = 1
Combined with the two boundary conditions, this gives us a total of four equations to determine our four constants, c1, c2, d1, and d2.
Result 21.7.1 The second order inhomogeneous differential equation with homogeneous boundary conditions
    L[y] = y'' + p(x)y' + q(x)y = f(x), for a < x < b, B1[y] = B2[y] = 0,
has the solution
    y = ∫_a^b G(x|ξ) f(ξ) dξ,
where G(x|ξ) satisfies the equation
    L[G(x|ξ)] = δ(x − ξ), for a < x < b, B1[G(x|ξ)] = B2[G(x|ξ)] = 0.
G(x|ξ) is continuous and G'(x|ξ) has a jump discontinuity of height 1 at x = ξ.
Example 21.7.1 Solve the boundary value problem
    y'' = f(x), y(0) = y(1) = 0,
using a Green function.
A pair of solutions to the homogeneous equation are y1 = 1 and y2 = x. First note that only the trivial solution to the homogeneous equation satisfies the homogeneous boundary conditions. Thus there is a unique solution to this problem.
The Green function satisfies
    G''(x|ξ) = δ(x − ξ), G(0|ξ) = G(1|ξ) = 0.
The Green function has the form
    G(x|ξ) = { c1 + c2x for x < ξ
               d1 + d2x for x > ξ.
Applying the two boundary conditions, we see that c1 = 0 and d1 = −d2. The Green function now has the form
    G(x|ξ) = { cx        for x < ξ
               d(x − 1)  for x > ξ.
Since the Green function must be continuous,
    cξ = d(ξ − 1) → d = c ξ/(ξ − 1).
From the jump condition,
    (d/dx)[c (ξ/(ξ − 1))(x − 1)]|_(x=ξ) − (d/dx)[cx]|_(x=ξ) = 1
    c ξ/(ξ − 1) − c = 1
    c = ξ − 1.
Thus the Green function is
    G(x|ξ) = { (ξ − 1)x  for x < ξ
               ξ(x − 1)  for x > ξ.
The Green function is plotted in Figure 21.3 for various values of ξ.

[Figure 21.3: Plot of G(x|0.05), G(x|0.25), G(x|0.5) and G(x|0.75).]

The solution to y'' = f(x) is
    y(x) = ∫_0^1 G(x|ξ) f(ξ) dξ
    y(x) = (x − 1) ∫_0^x ξ f(ξ) dξ + x ∫_x^1 (ξ − 1) f(ξ) dξ.
Example 21.7.2 Solve the boundary value problem
    y'' = f(x), y(0) = 1, y(1) = 2.
In Example 21.7.1 we saw that the solution to
    u'' = f(x), u(0) = u(1) = 0
is
    u(x) = (x − 1) ∫_0^x ξ f(ξ) dξ + x ∫_x^1 (ξ − 1) f(ξ) dξ.
Now we have to find the solution to
    v'' = 0, v(0) = 1, v(1) = 2.
The general solution is
    v = c1 + c2x.
Applying the boundary conditions yields
    v = 1 + x.
Thus the solution for y is
    y = 1 + x + (x − 1) ∫_0^x ξ f(ξ) dξ + x ∫_x^1 (ξ − 1) f(ξ) dξ.
Example 21.7.3 Consider
    y'' = x, y(0) = y(1) = 0.
Method 1. Integrating the differential equation twice yields
    y = (1/6)x³ + c1x + c2.
Applying the boundary conditions, we find that the solution is
    y = (1/6)(x³ − x).
Method 2. Using the Green function to find the solution,
    y = (x − 1) ∫_0^x ξ² dξ + x ∫_x^1 (ξ − 1)ξ dξ
      = (x − 1)(1/3)x³ + x(1/3 − 1/2 − (1/3)x³ + (1/2)x²)
    y = (1/6)(x³ − x).
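The Green-function integral in Method 2 is easily reproduced symbolically (a sketch using SymPy):

```python
# Method 2 of Example 21.7.3: f(xi) = xi, so the integrands are
# xi*f(xi) = xi**2 and (xi - 1)*f(xi) = (xi - 1)*xi.
import sympy as sp

x, xi = sp.symbols('x xi')
y = (x - 1)*sp.integrate(xi**2, (xi, 0, x)) \
    + x*sp.integrate((xi - 1)*xi, (xi, x, 1))
print(sp.factor(sp.expand(y)))   # x*(x - 1)*(x + 1)/6, i.e. (x**3 - x)/6
```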
Example 21.7.4 Find the solution to the differential equation
    y'' − y = sin x,
that is bounded for all x.
The Green function for this problem satisfies
    G''(x|ξ) − G(x|ξ) = δ(x − ξ).
The homogeneous solutions are y1 = e^x and y2 = e^(−x). The Green function has the form
    G(x|ξ) = { c1 e^x + c2 e^(−x)  for x < ξ
               d1 e^x + d2 e^(−x)  for x > ξ.
Since the solution must be bounded for all x, the Green function must also be bounded. Thus c2 = d1 = 0. The Green function now has the form
    G(x|ξ) = { c e^x     for x < ξ
               d e^(−x)  for x > ξ.
Requiring that G(x|ξ) be continuous gives us the condition
    c e^ξ = d e^(−ξ) → d = c e^(2ξ).
G'(x|ξ) has a jump discontinuity of height 1 at x = ξ.
    (d/dx)[c e^(2ξ) e^(−x)]|_(x=ξ) − (d/dx)[c e^x]|_(x=ξ) = 1
    −c e^(2ξ) e^(−ξ) − c e^ξ = 1
    c = −(1/2) e^(−ξ)
The Green function is then
    G(x|ξ) = { −(1/2) e^(x−ξ)   for x < ξ
               −(1/2) e^(−x+ξ)  for x > ξ
    G(x|ξ) = −(1/2) e^(−|x−ξ|).
A plot of G(x|0) is given in Figure 21.4. The solution to y'' − y = sin x is
    y(x) = ∫_(−∞)^∞ −(1/2) e^(−|x−ξ|) sin ξ dξ
         = −(1/2) [ ∫_(−∞)^x sin ξ e^(ξ−x) dξ + ∫_x^∞ sin ξ e^(x−ξ) dξ ]
         = −(1/2) ( (sin x − cos x)/2 + (sin x + cos x)/2 )
    y = −(1/2) sin x.
[Figure 21.4: Plot of G(x|0).]
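Two quick checks of this example (illustrative; the truncation of the infinite integral is our choice):

```python
# (1) y = -sin(x)/2 satisfies y'' - y = sin(x).
# (2) The convolution with G(x|xi) = -exp(-|x - xi|)/2 reproduces it numerically.
import numpy as np
import sympy as sp
from scipy.integrate import quad

t = sp.symbols('t')
y = -sp.sin(t)/2
print(sp.simplify(sp.diff(y, t, 2) - y - sp.sin(t)))   # 0

def y_conv(x, cutoff=40.0):
    # The kernel decays like exp(-|x - xi|), so a finite window suffices.
    val, _ = quad(lambda xi: -0.5*np.exp(-abs(x - xi))*np.sin(xi),
                  x - cutoff, x + cutoff, limit=400, points=[x])
    return val

for x in (0.0, 1.0, 2.0):
    print(x, y_conv(x), -0.5*np.sin(x))    # columns agree
```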
21.7.1 Green Functions for Sturm-Liouville Problems
Consider the problem
    L[y] = (p(x)y')' + q(x)y = f(x), subject to
    B1[y] = α1y(a) + α2y'(a) = 0, B2[y] = β1y(b) + β2y'(b) = 0.
This is known as a Sturm-Liouville problem. Equations of this type often occur when solving partial differential equations. The Green function associated with this problem satisfies
    L[G(x|ξ)] = δ(x − ξ), B1[G(x|ξ)] = B2[G(x|ξ)] = 0.
Let y1 and y2 be two non-trivial homogeneous solutions that satisfy the left and right boundary conditions, respectively.
    L[y1] = 0, B1[y1] = 0, L[y2] = 0, B2[y2] = 0.
The Green function satisfies the homogeneous equation for x ≠ ξ and satisfies the homogeneous boundary conditions. Thus it must have the following form.
    G(x|ξ) = { c1(ξ)y1(x)  for a ≤ x ≤ ξ,
               c2(ξ)y2(x)  for ξ ≤ x ≤ b,
Here c1 and c2 are unknown functions of ξ.
The first constraint on c1 and c2 comes from the continuity condition.
    G(ξ−|ξ) = G(ξ+|ξ)
    c1(ξ)y1(ξ) = c2(ξ)y2(ξ)
We write the inhomogeneous equation in the standard form.
    G''(x|ξ) + (p'/p)G'(x|ξ) + (q/p)G(x|ξ) = δ(x − ξ)/p
The second constraint on c1 and c2 comes from the jump condition.
    G'(ξ+|ξ) − G'(ξ−|ξ) = 1/p(ξ)
    c2(ξ)y2'(ξ) − c1(ξ)y1'(ξ) = 1/p(ξ)
Now we have a system of equations to determine c1 and c2.
    c1(ξ)y1(ξ) − c2(ξ)y2(ξ) = 0
    c1(ξ)y1'(ξ) − c2(ξ)y2'(ξ) = −1/p(ξ)
We solve this system with Cramer's rule.
    c1(ξ) = −y2(ξ)/(p(ξ)(−W(ξ))) = y2(ξ)/(p(ξ)W(ξ)), c2(ξ) = −y1(ξ)/(p(ξ)(−W(ξ))) = y1(ξ)/(p(ξ)W(ξ))
Here W(x) is the Wronskian of y1(x) and y2(x). The Green function is
    G(x|ξ) = { y1(x)y2(ξ)/(p(ξ)W(ξ))  for a ≤ x ≤ ξ,
               y2(x)y1(ξ)/(p(ξ)W(ξ))  for ξ ≤ x ≤ b.
The solution of the Sturm-Liouville problem is
    y = ∫_a^b G(x|ξ) f(ξ) dξ.
Result 21.7.2 The problem
    L[y] = (p(x)y')' + q(x)y = f(x), subject to
    B1[y] = α1y(a) + α2y'(a) = 0, B2[y] = β1y(b) + β2y'(b) = 0,
has the Green function
    G(x|ξ) = { y1(x)y2(ξ)/(p(ξ)W(ξ))  for a ≤ x ≤ ξ,
               y2(x)y1(ξ)/(p(ξ)W(ξ))  for ξ ≤ x ≤ b,
where y1 and y2 are non-trivial homogeneous solutions that satisfy B1[y1] = B2[y2] = 0, and W(x) is the Wronskian of y1 and y2.
Example 21.7.5 Consider the equation
    y'' − y = f(x), y(0) = y(1) = 0.
A set of solutions to the homogeneous equation is {e^x, e^(−x)}. Equivalently, one could use the set {cosh x, sinh x}. Note that sinh x satisfies the left boundary condition and sinh(x − 1) satisfies the right boundary condition. The Wronskian of these two homogeneous solutions is
    W(x) = | sinh x  sinh(x − 1) |
           | cosh x  cosh(x − 1) |
         = sinh x cosh(x − 1) − cosh x sinh(x − 1)
         = (1/2)[sinh(2x − 1) + sinh(1)] − (1/2)[sinh(2x − 1) − sinh(1)]
         = sinh(1).
The Green function for the problem is then
    G(x|ξ) = { sinh x sinh(ξ − 1)/sinh(1)  for 0 ≤ x ≤ ξ
               sinh(x − 1) sinh ξ/sinh(1)  for ξ ≤ x ≤ 1.
The solution to the problem is
    y = (sinh(x − 1)/sinh(1)) ∫_0^x sinh(ξ) f(ξ) dξ + (sinh(x)/sinh(1)) ∫_x^1 sinh(ξ − 1) f(ξ) dξ.
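A numerical sanity check of this formula (a sketch; the test inhomogeneity f(x) = 1 and the closed-form comparison solution are ours):

```python
# For f = 1, the exact solution of y'' - y = 1, y(0) = y(1) = 0 is
# y = cosh(x - 1/2)/cosh(1/2) - 1; compare it with the Green-function integral.
import numpy as np
from scipy.integrate import quad

def y_green(x):
    left, _  = quad(lambda xi: np.sinh(xi),       0.0, x)    # f(xi) = 1
    right, _ = quad(lambda xi: np.sinh(xi - 1.0), x,   1.0)
    return (np.sinh(x - 1.0)*left + np.sinh(x)*right)/np.sinh(1.0)

for x in (0.25, 0.5, 0.75):
    print(x, y_green(x), np.cosh(x - 0.5)/np.cosh(0.5) - 1.0)   # agree
```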
21.7.2 Initial Value Problems
Consider
    L[y] = y'' + p(x)y' + q(x)y = f(x), for a < x < b,
subject to the initial conditions
    y(a) = γ1, y'(a) = γ2.
The solution is y = u + v where
    u'' + p(x)u' + q(x)u = f(x), u(a) = 0, u'(a) = 0,
and
    v'' + p(x)v' + q(x)v = 0, v(a) = γ1, v'(a) = γ2.
Since the Wronskian
    W(x) = c exp(−∫ p(x) dx)
is non-vanishing, the solutions of the differential equation for v are linearly independent. Thus there is a unique solution for v that satisfies the initial conditions.
The Green function for u satisfies
    G''(x|ξ) + p(x)G'(x|ξ) + q(x)G(x|ξ) = δ(x − ξ), G(a|ξ) = 0, G'(a|ξ) = 0.
The continuity and jump conditions are
    G(ξ−|ξ) = G(ξ+|ξ), G'(ξ−|ξ) + 1 = G'(ξ+|ξ).
Let u1 and u2 be two linearly independent solutions of the differential equation. For x < ξ, G(x|ξ) is a linear combination of these solutions. Since the Wronskian is non-vanishing, only the trivial solution satisfies the homogeneous initial conditions. The Green function must be
    G(x|ξ) = { 0       for x < ξ
               uξ(x)   for x > ξ,
where uξ(x) is the linear combination of u1 and u2 that satisfies
    uξ(ξ) = 0, uξ'(ξ) = 1.
Note that the non-vanishing Wronskian ensures a unique solution for uξ. We can write the Green function in the form
    G(x|ξ) = H(x − ξ)uξ(x).
This is known as the causal solution. The solution for u is
    u = ∫_a^b G(x|ξ) f(ξ) dξ
      = ∫_a^b H(x − ξ)uξ(x) f(ξ) dξ
      = ∫_a^x uξ(x) f(ξ) dξ
Now we have the solution for y,
    y = v + ∫_a^x uξ(x) f(ξ) dξ.

Result 21.7.3 The solution of the problem
    y'' + p(x)y' + q(x)y = f(x), y(a) = γ1, y'(a) = γ2,
is
    y = yh + ∫_a^x yξ(x) f(ξ) dξ
where yh is the combination of the homogeneous solutions of the equation that satisfies the initial conditions and yξ(x) is the linear combination of homogeneous solutions that satisfies yξ(ξ) = 0, yξ'(ξ) = 1.
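A numerical sketch of Result 21.7.3 (our example: y'' + y = e^(−x) with zero initial data, so yh = 0 and yξ(x) = sin(x − ξ), since sin(x − ξ) vanishes at x = ξ and has unit slope there):

```python
# The causal-solution integral is compared with a direct numerical solve.
import numpy as np
from scipy.integrate import quad, solve_ivp

f = lambda x: np.exp(-x)

def y_causal(x):
    val, _ = quad(lambda xi: np.sin(x - xi)*f(xi), 0.0, x)
    return val

sol = solve_ivp(lambda x, s: [s[1], -s[0] + f(x)], (0.0, 6.0), [0.0, 0.0],
                dense_output=True, rtol=1e-9, atol=1e-12)
for x in (1.0, 3.0, 6.0):
    print(x, y_causal(x), sol.sol(x)[0])   # agreement to solver tolerance
```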
21.7.3 Problems with Unmixed Boundary Conditions
Consider
    L[y] = y'' + p(x)y' + q(x)y = f(x), for a < x < b,
subject to the unmixed boundary conditions
    α1y(a) + α2y'(a) = γ1, β1y(b) + β2y'(b) = γ2.
The solution is y = u + v where
    u'' + p(x)u' + q(x)u = f(x), α1u(a) + α2u'(a) = 0, β1u(b) + β2u'(b) = 0,
and
    v'' + p(x)v' + q(x)v = 0, α1v(a) + α2v'(a) = γ1, β1v(b) + β2v'(b) = γ2.
The problem for v may have no solution, a unique solution or an infinite number of solutions. We consider only the case that there is a unique solution for v. In this case the homogeneous equation subject to homogeneous boundary conditions has only the trivial solution.
The Green function for u satisfies
    G''(x|ξ) + p(x)G'(x|ξ) + q(x)G(x|ξ) = δ(x − ξ),
    α1G(a|ξ) + α2G'(a|ξ) = 0, β1G(b|ξ) + β2G'(b|ξ) = 0.
The continuity and jump conditions are
    G(ξ−|ξ) = G(ξ+|ξ), G'(ξ−|ξ) + 1 = G'(ξ+|ξ).
Let u1 and u2 be two solutions of the homogeneous equation that satisfy the left and right boundary conditions, respectively. The non-vanishing of the Wronskian ensures that these solutions exist. Let W(x) denote the Wronskian of u1 and u2. Since the homogeneous equation with homogeneous boundary conditions has only the trivial solution, W(x) is nonzero on [a, b]. The Green function has the form
    G(x|ξ) = { c1u1  for x < ξ,
               c2u2  for x > ξ.
The continuity and jump conditions for the Green function give us the equations
    c1u1(ξ) − c2u2(ξ) = 0
    c1u1'(ξ) − c2u2'(ξ) = −1.
Using Cramer's rule, the solution is
    c1 = u2(ξ)/W(ξ), c2 = u1(ξ)/W(ξ).
Thus the Green function is
    G(x|ξ) = { u1(x)u2(ξ)/W(ξ)  for x < ξ,
               u1(ξ)u2(x)/W(ξ)  for x > ξ.
The solution for u is
    u = ∫_a^b G(x|ξ) f(ξ) dξ.
Thus if there is a unique solution for v, the solution for y is
    y = v + ∫_a^b G(x|ξ) f(ξ) dξ.
Result 21.7.4 Consider the problem
    y'' + p(x)y' + q(x)y = f(x),
    α1y(a) + α2y'(a) = γ1, β1y(b) + β2y'(b) = γ2.
If the homogeneous differential equation subject to the inhomogeneous boundary conditions has the unique solution yh, then the problem has the unique solution
    y = yh + ∫_a^b G(x|ξ) f(ξ) dξ
where
    G(x|ξ) = { u1(x)u2(ξ)/W(ξ)  for x < ξ,
               u1(ξ)u2(x)/W(ξ)  for x > ξ,
u1 and u2 are solutions of the homogeneous differential equation that satisfy the left and right boundary conditions, respectively, and W(x) is the Wronskian of u1 and u2.
21.7.4 Problems with Mixed Boundary Conditions
Consider
    L[y] = y'' + p(x)y' + q(x)y = f(x), for a < x < b,
subject to the mixed boundary conditions
    B1[y] = α11y(a) + α12y'(a) + β11y(b) + β12y'(b) = γ1,
    B2[y] = α21y(a) + α22y'(a) + β21y(b) + β22y'(b) = γ2.
The solution is y = u + v where
    u'' + p(x)u' + q(x)u = f(x), B1[u] = 0, B2[u] = 0,
and
    v'' + p(x)v' + q(x)v = 0, B1[v] = γ1, B2[v] = γ2.
The problem for v may have no solution, a unique solution or an infinite number of solutions. Again we consider only the case that there is a unique solution for v. In this case the homogeneous equation subject to homogeneous boundary conditions has only the trivial solution.
Let y1 and y2 be two solutions of the homogeneous equation that satisfy the boundary conditions B1[y1] = 0 and B2[y2] = 0. Since the completely homogeneous problem has no nontrivial solutions, we know that B1[y2] and B2[y1] are nonzero. The solution for v has the form
    v = c1y1 + c2y2.
Applying the two boundary conditions yields
    v = (γ2/B2[y1]) y1 + (γ1/B1[y2]) y2.
The Green function for u satisfies
    G''(x|ξ) + p(x)G'(x|ξ) + q(x)G(x|ξ) = δ(x − ξ), B1[G] = 0, B2[G] = 0.
The continuity and jump conditions are
    G(ξ−|ξ) = G(ξ+|ξ), G'(ξ−|ξ) + 1 = G'(ξ+|ξ).
We write the Green function as the sum of the causal solution and the two homogeneous solutions
    G(x|ξ) = H(x − ξ)yξ(x) + c1y1(x) + c2y2(x).
With this form, the continuity and jump conditions are automatically satisfied. Applying the boundary conditions yields
    B1[G] = B1[H(x − ξ)yξ] + c2B1[y2] = 0,
    B2[G] = B2[H(x − ξ)yξ] + c1B2[y1] = 0,
    B1[G] = β11yξ(b) + β12yξ'(b) + c2B1[y2] = 0,
    B2[G] = β21yξ(b) + β22yξ'(b) + c1B2[y1] = 0,
    G(x|ξ) = H(x − ξ)yξ(x) − ((β21yξ(b) + β22yξ'(b))/B2[y1]) y1(x) − ((β11yξ(b) + β12yξ'(b))/B1[y2]) y2(x).
Note that the Green function is well defined since B2[y1] and B1[y2] are nonzero. The solution for u is
    u = ∫_a^b G(x|ξ) f(ξ) dξ.
Thus if there is a unique solution for v, the solution for y is
    y = ∫_a^b G(x|ξ) f(ξ) dξ + (γ2/B2[y1]) y1 + (γ1/B1[y2]) y2.
Result 21.7.5 Consider the problem
    y'' + p(x)y' + q(x)y = f(x),
    B1[y] = α11y(a) + α12y'(a) + β11y(b) + β12y'(b) = γ1,
    B2[y] = α21y(a) + α22y'(a) + β21y(b) + β22y'(b) = γ2.
If the homogeneous differential equation subject to the homogeneous boundary conditions has no nontrivial solution, then the problem has the unique solution
    y = ∫_a^b G(x|ξ) f(ξ) dξ + (γ2/B2[y1]) y1 + (γ1/B1[y2]) y2,
where
    G(x|ξ) = H(x − ξ)yξ(x) − ((β21yξ(b) + β22yξ'(b))/B2[y1]) y1(x) − ((β11yξ(b) + β12yξ'(b))/B1[y2]) y2(x),
y1 and y2 are solutions of the homogeneous differential equation that satisfy the first and second boundary conditions, respectively, and yξ(x) is the solution of the homogeneous equation that satisfies yξ(ξ) = 0, yξ'(ξ) = 1.
21.8 Green Functions for Higher Order Problems
Consider the nth order differential equation
    L[y] = y^(n) + p_(n−1)(x)y^(n−1) + · · · + p1(x)y' + p0y = f(x) on a < x < b,
subject to the n independent boundary conditions
    Bj[y] = γj
where the boundary conditions are of the form
    B[y] ≡ Σ_(k=0)^(n−1) αk y^(k)(a) + Σ_(k=0)^(n−1) βk y^(k)(b).
We assume that the coefficient functions in the differential equation are continuous on [a, b]. The solution is y = u + v where u and v satisfy
    L[u] = f(x), with Bj[u] = 0,
and
    L[v] = 0, with Bj[v] = γj.
From Result 21.5.3, we know that if the completely homogeneous problem
    L[w] = 0, with Bj[w] = 0,
has only the trivial solution, then the solution for y exists and is unique. We will construct this solution using Green functions.
First we consider the problem for v. Let {y1, . . . , yn} be a set of linearly independent solutions. The solution for v has the form
    v = c1y1 + · · · + cnyn
where the constants are determined by the matrix equation

    [ B1[y1]  B1[y2]  ...  B1[yn] ] [ c1 ]   [ γ1 ]
    [ B2[y1]  B2[y2]  ...  B2[yn] ] [ c2 ]   [ γ2 ]
    [   :       :            :    ] [ :  ] = [ :  ]
    [ Bn[y1]  Bn[y2]  ...  Bn[yn] ] [ cn ]   [ γn ]
To solve the problem for u we consider the Green function satisfying
    L[G(x|ξ)] = δ(x − ξ), with Bj[G] = 0.
Let yξ(x) be the linear combination of the homogeneous solutions that satisfies the conditions
    yξ(ξ) = 0
    yξ'(ξ) = 0
    ...
    yξ^(n−2)(ξ) = 0
    yξ^(n−1)(ξ) = 1.
The causal solution is then
    yc(x) = H(x − ξ)yξ(x).
The Green function has the form
    G(x|ξ) = H(x − ξ)yξ(x) + d1y1(x) + · · · + dnyn(x).
The constants are determined by the matrix equation

    [ B1[y1]  B1[y2]  ...  B1[yn] ] [ d1 ]   [ −B1[H(x − ξ)yξ(x)] ]
    [ B2[y1]  B2[y2]  ...  B2[yn] ] [ d2 ]   [ −B2[H(x − ξ)yξ(x)] ]
    [   :       :            :    ] [ :  ] = [         :           ]
    [ Bn[y1]  Bn[y2]  ...  Bn[yn] ] [ dn ]   [ −Bn[H(x − ξ)yξ(x)] ]

The solution for u then is
    u = ∫_a^b G(x|ξ) f(ξ) dξ.
Result 21.8.1 Consider the nth order differential equation
    L[y] = y^(n) + p_(n−1)(x)y^(n−1) + · · · + p1(x)y' + p0y = f(x) on a < x < b,
subject to the n independent boundary conditions
    Bj[y] = γj.
If the homogeneous differential equation subject to the homogeneous boundary conditions has only the trivial solution, then the problem has the unique solution
    y = ∫_a^b G(x|ξ) f(ξ) dξ + c1y1 + · · · + cnyn
where
    G(x|ξ) = H(x − ξ)yξ(x) + d1y1(x) + · · · + dnyn(x),
{y1, . . . , yn} is a set of solutions of the homogeneous differential equation, and the constants cj and dj can be determined by solving sets of linear equations.
Example 21.8.1 Consider the problem
    y''' − y'' + y' − y = f(x),
    y(0) = 1, y'(0) = 2, y(1) = 3.
The completely homogeneous associated problem is
    w''' − w'' + w' − w = 0, w(0) = w'(0) = w(1) = 0.
The solution of the differential equation is
    w = c1 cos x + c2 sin x + c3 e^x.
The boundary conditions give us the equation

    [ 1      0      1 ] [ c1 ]   [ 0 ]
    [ 0      1      1 ] [ c2 ] = [ 0 ]
    [ cos 1  sin 1  e ] [ c3 ]   [ 0 ]

The determinant of the matrix is e − cos 1 − sin 1 ≠ 0. Thus the homogeneous problem has only the trivial solution and the inhomogeneous problem has a unique solution.
We separate the inhomogeneous problem into the two problems
    u''' − u'' + u' − u = f(x), u(0) = u'(0) = u(1) = 0,
    v''' − v'' + v' − v = 0, v(0) = 1, v'(0) = 2, v(1) = 3.
First we solve the problem for v. The solution of the differential equation is
    v = c1 cos x + c2 sin x + c3 e^x.
The boundary conditions yield the equation

    [ 1      0      1 ] [ c1 ]   [ 1 ]
    [ 0      1      1 ] [ c2 ] = [ 2 ]
    [ cos 1  sin 1  e ] [ c3 ]   [ 3 ]
The solution for v is
    v = (1/(e − cos 1 − sin 1)) [ (e + sin 1 − 3) cos x + (2e − cos 1 − 3) sin x + (3 − cos 1 − 2 sin 1) e^x ].
Now we find the Green function for the problem in u. The causal solution is
    H(x − ξ)uξ(x) = H(x − ξ) (1/2) [ (sin ξ − cos ξ) cos x − (sin ξ + cos ξ) sin x + e^(−ξ) e^x ],
    H(x − ξ)uξ(x) = (1/2) H(x − ξ) [ e^(x−ξ) − cos(x − ξ) − sin(x − ξ) ].
The Green function has the form
    G(x|ξ) = H(x − ξ)uξ(x) + c1 cos x + c2 sin x + c3 e^x.
The constants are determined by the three conditions
    [ c1 cos x + c2 sin x + c3 e^x ]|_(x=0) = 0,
    [ (∂/∂x)(c1 cos x + c2 sin x + c3 e^x) ]|_(x=0) = 0,
    [ uξ(x) + c1 cos x + c2 sin x + c3 e^x ]|_(x=1) = 0.
The Green function is
    G(x|ξ) = (1/2) H(x − ξ) [ e^(x−ξ) − cos(x − ξ) − sin(x − ξ) ] + ((cos(1 − ξ) + sin(1 − ξ) − e^(1−ξ))/(2(cos 1 + sin 1 − e))) [ cos x + sin x − e^x ]
The solution for u is
    u = ∫_0^1 G(x|ξ) f(ξ) dξ.
Thus the solution for y is
    y = ∫_0^1 G(x|ξ) f(ξ) dξ + (1/(e − cos 1 − sin 1)) [ (e + sin 1 − 3) cos x + (2e − cos 1 − 3) sin x + (3 − cos 1 − 2 sin 1) e^x ].
21.9 Fredholm Alternative Theorem
Orthogonality. Two real vectors, u and v, are orthogonal if u · v = 0. Consider two functions, u(x) and v(x), defined on [a, b]. The dot product in vector space is analogous to the integral
    ∫_a^b u(x)v(x) dx
in function space. Thus two real functions are orthogonal if
    ∫_a^b u(x)v(x) dx = 0.
Consider the nth order linear inhomogeneous differential equation
    L[y] = f(x) on [a, b],
subject to the homogeneous boundary conditions
    Bj[y] = 0, for j = 1, 2, . . . , n.
The Fredholm alternative theorem tells us if the problem has a unique solution, an infinite number of solutions, or no solution. Before presenting the theorem, we will consider a few motivating examples.
No Nontrivial Homogeneous Solutions. In the section on Green functions we showed that if
the completely homogeneous problem has only the trivial solution then the inhomogeneous problem
has a unique solution.
Nontrivial Homogeneous Solutions Exist. If there are nonzero solutions to the homogeneous
problem L[y] = 0 that satisfy the homogeneous boundary conditions Bj[y] = 0 then the inhomoge-
neous problem L[y] = f(x) subject to the same boundary conditions either has no solution or an
infinite number of solutions.
Suppose there is a particular solution yp that satisfies the boundary conditions. If there is a
solution yh to the homogeneous equation that satisfies the boundary conditions then there will be
an infinite number of solutions since yp + cyh is also a particular solution.
The question now remains: Given that there are homogeneous solutions that satisfy the boundary
conditions, how do we know if a particular solution that satisfies the boundary conditions exists?
Before we address this question we will consider a few examples.
Example 21.9.1 Consider the problem
    y'' + y = cos x, y(0) = y(π) = 0.
The two homogeneous solutions of the differential equation are
    y1 = cos x, and y2 = sin x.
y2 = sin x satisfies the boundary conditions. Thus we know that there are either no solutions or an infinite number of solutions. A particular solution is
    yp = −cos x ∫ (cos x sin x)/1 dx + sin x ∫ (cos² x)/1 dx
       = −cos x ∫ (1/2) sin(2x) dx + sin x ∫ (1/2 + (1/2) cos(2x)) dx
       = (1/4) cos x cos(2x) + sin x ((1/2)x + (1/4) sin(2x))
       = (1/2) x sin x + (1/4) (cos x cos(2x) + sin x sin(2x))
       = (1/2) x sin x + (1/4) cos x
The general solution is
    y = (1/2) x sin x + c1 cos x + c2 sin x.
Applying the two boundary conditions yields
    y = (1/2) x sin x + c sin x.
Thus there are an infinite number of solutions.
Example 21.9.2 Consider the differential equation
    y'' + y = sin x, y(0) = y(π) = 0.
The general solution is
    y = −(1/2) x cos x + c1 cos x + c2 sin x.
Applying the boundary conditions,
    y(0) = 0 → c1 = 0
    y(π) = 0 → −(1/2) π cos(π) + c2 sin(π) = 0
             → π/2 = 0.
Since this equation has no solution, there are no solutions to the inhomogeneous problem.
In both of the above examples there is a homogeneous solution y = sin x that satisfies the bound-
ary conditions. In Example 21.9.1, the inhomogeneous term is cos x and there are an infinite number
of solutions. In Example 21.9.2, the inhomogeneity is sin x and there are no solutions. In general,
if the inhomogeneous term is orthogonal to all the homogeneous solutions that satisfy the bound-
ary conditions then there are an infinite number of solutions. If not, there are no inhomogeneous
solutions.
Result 21.9.1 Fredholm Alternative Theorem. Consider the nth order inhomogeneous problem
    L[y] = f(x) on [a, b] subject to Bj[y] = 0 for j = 1, 2, . . . , n,
and the associated homogeneous problem
    L[y] = 0 on [a, b] subject to Bj[y] = 0 for j = 1, 2, . . . , n.
If the homogeneous problem has only the trivial solution then the inhomogeneous problem has a unique solution. If the homogeneous problem has m independent solutions, {y1, y2, . . . , ym}, then there are two possibilities:
• If f(x) is orthogonal to each of the homogeneous solutions then there are an infinite number of solutions of the form
    y = yp + Σ_(j=1)^m cj yj.
• If f(x) is not orthogonal to each of the homogeneous solutions then there are no inhomogeneous solutions.
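The orthogonality test is a one-line integral; a sketch applying it to Examples 21.9.1 and 21.9.2 (SymPy):

```python
# sin(x) solves the homogeneous problem with these boundary conditions.
# cos(x) is orthogonal to it on [0, pi]; sin(x) is not.
import sympy as sp

x = sp.symbols('x')
print(sp.integrate(sp.sin(x)*sp.cos(x), (x, 0, sp.pi)))  # 0    -> infinitely many solutions
print(sp.integrate(sp.sin(x)**2,        (x, 0, sp.pi)))  # pi/2 -> no solution
```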
Example 21.9.3 Consider the problem
    y'' + y = cos 2x, y(0) = 1, y(π) = 2.
cos x and sin x are two linearly independent solutions to the homogeneous equation. sin x satisfies the homogeneous boundary conditions. Thus there are either an infinite number of solutions, or no solution.
To transform this problem to one with homogeneous boundary conditions, we note that g(x) = x/π + 1 and make the change of variables y = u + g to obtain
    u'' + u = cos 2x − x/π − 1, u(0) = 0, u(π) = 0.
Since cos 2x − x/π − 1 is not orthogonal to sin x, there is no solution to the inhomogeneous problem.
To check this, the general solution is
    y = −(1/3) cos 2x + c1 cos x + c2 sin x.
Applying the boundary conditions,
    y(0) = 1 → c1 = 4/3
    y(π) = 2 → −(1/3) − (4/3) = 2.
Thus we see that the right boundary condition cannot be satisfied.
Example 21.9.4 Consider
    y'' + y = cos 2x, y'(0) = y(π) = 1.
There are no solutions to the homogeneous equation that satisfy the homogeneous boundary conditions. To check this, note that all solutions of the homogeneous equation have the form uh = c1 cos x + c2 sin x.
    uh'(0) = 0 → c2 = 0
    uh(π) = 0 → c1 = 0.
From the Fredholm Alternative Theorem we see that the inhomogeneous problem has a unique solution.
To find the solution, start with
    y = −(1/3) cos 2x + c1 cos x + c2 sin x.
    y'(0) = 1 → c2 = 1
    y(π) = 1 → −(1/3) − c1 = 1
Thus the solution is
    y = −(1/3) cos 2x − (4/3) cos x + sin x.
Example 21.9.5 Consider
    y'' + y = cos 2x, y(0) = 2/3, y(π) = −4/3.
cos x and sin x satisfy the homogeneous differential equation. sin x satisfies the homogeneous boundary conditions. Since g(x) = cos x − 1/3 satisfies the boundary conditions, the substitution y = u + g yields
    u'' + u = cos 2x + 1/3, u(0) = 0, u(π) = 0.
Now we check if sin x is orthogonal to cos 2x + 1/3.
    ∫_0^π sin x (cos 2x + 1/3) dx = ∫_0^π ((1/2) sin 3x − (1/2) sin x + (1/3) sin x) dx
                                  = [ −(1/6) cos 3x + (1/6) cos x ]_0^π
                                  = 0
Since sin x is orthogonal to the inhomogeneity, there are an infinite number of solutions to the problem for u, (and hence the problem for y).
As a check, the general solution for y is
    y = −(1/3) cos 2x + c1 cos x + c2 sin x.
Applying the boundary conditions,
    y(0) = 2/3 → c1 = 1
    y(π) = −4/3 → −4/3 = −4/3.
Thus we see that c2 is arbitrary. There are an infinite number of solutions of the form
    y = −(1/3) cos 2x + cos x + c sin x.
21.10 Exercises
Undetermined Coefficients
Exercise 21.1 (mathematica/ode/inhomogeneous/undetermined.nb)
Find the general solution of the following equations.
1. y'' + 2y' + 5y = 3 sin(2t)
2. 2y'' + 3y' + y = t² + 3 sin(t)
Hint, Solution
Exercise 21.2 (mathematica/ode/inhomogeneous/undetermined.nb)
Find the solution of each one of the following initial value problems.
1. y'' − 2y' + y = t e^t + 4, y(0) = 1, y'(0) = 1
2. y'' + 2y' + 5y = 4 e^(−t) cos(2t), y(0) = 1, y'(0) = 0
Hint, Solution
Variation of Parameters
Exercise 21.3 (mathematica/ode/inhomogeneous/variation.nb)
Use the method of variation of parameters to find a particular solution of the given differential
equation.
1. y'' − 5y' + 6y = 2 e^t
2. y'' + y = tan(t), 0 < t < π/2
3. y'' − 5y' + 6y = g(t), for a given function g.
Hint, Solution
Exercise 21.4 (mathematica/ode/inhomogeneous/variation.nb)
Solve
y''(x) + y(x) = x, y(0) = 1, y'(0) = 0.
Hint, Solution
Exercise 21.5 (mathematica/ode/inhomogeneous/variation.nb)
Solve
x²y''(x) − xy'(x) + y(x) = x.
Hint, Solution
Exercise 21.6 (mathematica/ode/inhomogeneous/variation.nb)
1. Find the general solution of y'' + y = e^x.
2. Solve y'' + λ²y = sin x, y(0) = y'(0) = 0. λ is an arbitrary real constant. Is there anything special about λ = 1?
Hint, Solution
Exercise 21.7 (mathematica/ode/inhomogeneous/variation.nb)
Consider the problem of solving the initial value problem
y'' + y = g(t), y(0) = 0, y'(0) = 0.
1. Show that the general solution of y'' + y = g(t) is
    y(t) = (c1 − ∫_a^t g(τ) sin τ dτ) cos t + (c2 + ∫_b^t g(τ) cos τ dτ) sin t,
where c1 and c2 are arbitrary constants and a and b are any conveniently chosen points.
2. Using the result of part (a) show that the solution satisfying the initial conditions y(0) = 0 and y'(0) = 0 is given by
    y(t) = ∫_0^t g(τ) sin(t − τ) dτ.
Notice that this equation gives a formula for computing the solution of the original initial value problem for any given inhomogeneous term g(t). The integral is referred to as the convolution of g(t) with sin t.
3. Use the result of part (b) to solve the initial value problem,
    y'' + y = sin(λt), y(0) = 0, y'(0) = 0,
where λ is a real constant. How does the solution for λ ≠ 1 differ from that for λ = 1? The λ = 1 case provides an example of resonant forcing. Plot the solution for resonant and non-resonant forcing.
Hint, Solution
Exercise 21.8
Find the variation of parameters solution for the third order differential equation
    y''' + p2(x)y'' + p1(x)y' + p0(x)y = f(x).
Hint, Solution
Green Functions
Exercise 21.9
Use a Green function to solve
    y'' = f(x), y(−∞) = y'(−∞) = 0.
Verify that the solution satisfies the differential equation.
Hint, Solution
Exercise 21.10
Solve the initial value problem
    y'' + (1/x)y' − (1/x²)y = x², y(0) = 0, y'(0) = 1.
First use variation of parameters, and then solve the problem with a Green function.
Hint, Solution
Exercise 21.11
What are the continuity conditions at x = ξ for the Green function for the problem
    y''' + p2(x)y'' + p1(x)y' + p0(x)y = f(x).
Hint, Solution
Exercise 21.12
Use variation of parameters and Green functions to solve
    x²y'' − 2xy' + 2y = e^(−x), y(1) = 0, y'(1) = 1.
Hint, Solution
Exercise 21.13
Find the Green function for
    y'' − y = f(x), y'(0) = y(1) = 0.
Hint, Solution
Exercise 21.14
Find the Green function for
    y'' − y = f(x), y(0) = y(∞) = 0.
Hint, Solution
Exercise 21.15
Find the Green function for each of the following:
a) xu'' + u' = f(x), u(0+) bounded, u(1) = 0.
b) u'' − u = f(x), u(−a) = u(a) = 0.
c) u'' − u = f(x), u(x) bounded as |x| → ∞.
d) Show that the Green function for (b) approaches that for (c) as a → ∞.
Hint, Solution
Exercise 21.16
1. For what values of λ does the problem
    y'' + λy = f(x), y(0) = y(π) = 0, (21.5)
have a unique solution? Find the Green functions for these cases.
2. For what values of α does the problem
    y'' + 9y = 1 + αx, y(0) = y(π) = 0,
have a solution? Find the solution.
3. For λ = n², n ∈ Z+, state in general the conditions on f in Equation 21.5 so that a solution will exist. What is the appropriate modified Green function (in terms of eigenfunctions)?
Hint, Solution
Exercise 21.17
Show that the inhomogeneous boundary value problem:
    Lu ≡ (pu')' + qu = f(x), a < x < b, u(a) = α, u(b) = β
has the solution:
    u(x) = ∫_a^b g(x; ξ)f(ξ) dξ − αp(a)gξ(x; a) + βp(b)gξ(x; b).
Hint, Solution
Exercise 21.18
The Green function for
    u'' − k²u = f(x), −∞ < x < ∞,
subject to |u(±∞)| < ∞, is
    G(x; ξ) = −(1/(2k)) e^(−k|x−ξ|).
(We assume that k > 0.) Use the image method to find the Green function for the same equation on the semi-infinite interval 0 < x < ∞ satisfying the boundary conditions,
    i) u(0) = 0, |u(∞)| < ∞,
    ii) u'(0) = 0, |u(∞)| < ∞.
Express these results in simplified forms without absolute values.
Hint, Solution
Exercise 21.19
1. Determine the Green function for solving:
    y'' − a²y = f(x), y(0) = y'(L) = 0.
2. Take the limit as L → ∞ to find the Green function on (0, ∞) for the boundary conditions: y(0) = 0, y'(∞) = 0. We assume here that a > 0. Use the limiting Green function to solve:
    y'' − a²y = e^(−x), y(0) = 0, y'(∞) = 0.
Check that your solution satisfies all the conditions of the problem.
Hint, Solution
21.11 Hints
Undetermined Coefficients
Hint 21.1
Hint 21.2
Variation of Parameters
Hint 21.3
Hint 21.4
Hint 21.5
Hint 21.6
Hint 21.7
Hint 21.8
Look for a particular solution of the form
    yp = u1y1 + u2y2 + u3y3,
where the yj's are homogeneous solutions. Impose the constraints
    u1'y1 + u2'y2 + u3'y3 = 0
    u1'y1' + u2'y2' + u3'y3' = 0.
To avoid some messy algebra when solving for uj', use Cramer's rule.
Green Functions
Hint 21.9
Hint 21.10
Hint 21.11
Hint 21.12
Hint 21.13
cosh(x) and sinh(x−1) are homogeneous solutions that satisfy the left and right boundary conditions,
respectively.
Hint 21.14
sinh(x) and e^(−x) are homogeneous solutions that satisfy the left and right boundary conditions, respectively.
Hint 21.15
The Green function for the differential equation
    L[y] ≡ (d/dx)(p(x)y') + q(x)y = f(x),
subject to unmixed, homogeneous boundary conditions is
    G(x|ξ) = y1(x<)y2(x>)/(p(ξ)W(ξ)),
    G(x|ξ) = { y1(x)y2(ξ)/(p(ξ)W(ξ))  for a ≤ x ≤ ξ,
               y1(ξ)y2(x)/(p(ξ)W(ξ))  for ξ ≤ x ≤ b,
where y1 and y2 are homogeneous solutions that satisfy the left and right boundary conditions, respectively.
Recall that if y(x) is a solution of a homogeneous, constant coefficient differential equation then y(x + c) is also a solution.
Hint 21.16
The problem has a Green function if and only if the inhomogeneous problem has a unique solution.
The inhomogeneous problem has a unique solution if and only if the homogeneous problem has only
the trivial solution.
Hint 21.17
Show that gξ(x; a) and gξ(x; b) are solutions of the homogeneous differential equation. Determine
the value of these solutions at the boundary.
Hint 21.18
Hint 21.19
21.12 Solutions
Undetermined Coefficients
Solution 21.1
1. We consider
    y'' + 2y' + 5y = 3 sin(2t).
We first find the homogeneous solution with the substitution y = e^(λt).
    λ² + 2λ + 5 = 0
    λ = −1 ± 2i
The homogeneous solution is
    yh = c1 e^(−t) cos(2t) + c2 e^(−t) sin(2t).
We guess a particular solution of the form
    yp = a cos(2t) + b sin(2t).
We substitute this into the differential equation to determine the coefficients.
    yp'' + 2yp' + 5yp = 3 sin(2t)
    −4a cos(2t) − 4b sin(2t) − 4a sin(2t) + 4b cos(2t) + 5a cos(2t) + 5b sin(2t) = 3 sin(2t)
    (a + 4b) cos(2t) + (b − 4a − 3) sin(2t) = 0
    a + 4b = 0, −4a + b = 3
    a = −12/17, b = 3/17
A particular solution is
    yp = (3/17)(sin(2t) − 4 cos(2t)).
The general solution of the differential equation is
    y = c1 e^(−t) cos(2t) + c2 e^(−t) sin(2t) + (3/17)(sin(2t) − 4 cos(2t)).
2. We consider
    2y'' + 3y' + y = t² + 3 sin(t).
We first find the homogeneous solution with the substitution y = e^(λt).
    2λ² + 3λ + 1 = 0
    λ = {−1, −1/2}
The homogeneous solution is
    yh = c1 e^(−t) + c2 e^(−t/2).
We guess a particular solution of the form
    yp = at² + bt + c + d cos(t) + e sin(t).
We substitute this into the differential equation to determine the coefficients.
    2yp'' + 3yp' + yp = t² + 3 sin(t)
    2(2a − d cos(t) − e sin(t)) + 3(2at + b − d sin(t) + e cos(t)) + at² + bt + c + d cos(t) + e sin(t) = t² + 3 sin(t)
    (a − 1)t² + (6a + b)t + (4a + 3b + c) + (−d + 3e) cos(t) − (3 + 3d + e) sin(t) = 0
    a − 1 = 0, 6a + b = 0, 4a + 3b + c = 0, −d + 3e = 0, 3 + 3d + e = 0
    a = 1, b = −6, c = 14, d = −9/10, e = −3/10
A particular solution is
    yp = t² − 6t + 14 − (3/10)(3 cos(t) + sin(t)).
The general solution of the differential equation is
    y = c1 e^(−t) + c2 e^(−t/2) + t² − 6t + 14 − (3/10)(3 cos(t) + sin(t)).
Solution 21.2
1. We consider the problem
    y'' − 2y' + y = t e^t + 4, y(0) = 1, y'(0) = 1.
First we solve the homogeneous equation with the substitution y = e^(λt).
    λ² − 2λ + 1 = 0
    (λ − 1)² = 0
    λ = 1
The homogeneous solution is
    yh = c1 e^t + c2 t e^t.
We guess a particular solution of the form
    yp = at³ e^t + bt² e^t + 4.
We substitute this into the inhomogeneous differential equation to determine the coefficients.
    yp'' − 2yp' + yp = t e^t + 4
    (a(t³ + 6t² + 6t) + b(t² + 4t + 2)) e^t − 2(a(t³ + 3t²) + b(t² + 2t)) e^t + at³ e^t + bt² e^t + 4 = t e^t + 4
    (6a − 1)t + 2b = 0
    6a − 1 = 0, 2b = 0
    a = 1/6, b = 0
A particular solution is
    yp = (t³/6) e^t + 4.
The general solution of the differential equation is
    y = c1 e^t + c2 t e^t + (t³/6) e^t + 4.
We use the initial conditions to determine the constants of integration.
    y(0) = 1, y'(0) = 1
    c1 + 4 = 1, c1 + c2 = 1
    c1 = −3, c2 = 4
The solution of the initial value problem is
    y = (t³/6 + 4t − 3) e^t + 4.
2. We consider the problem
    y'' + 2y' + 5y = 4 e^(−t) cos(2t), y(0) = 1, y'(0) = 0.
First we solve the homogeneous equation with the substitution y = e^(λt).
    λ² + 2λ + 5 = 0
    λ = −1 ± √(1 − 5)
    λ = −1 ± ı2
The homogeneous solution is
    yh = c1 e^(−t) cos(2t) + c2 e^(−t) sin(2t).
We guess a particular solution of the form
    yp = t e^(−t) (a cos(2t) + b sin(2t)).
We substitute this into the inhomogeneous differential equation to determine the coefficients.
    yp'' + 2yp' + 5yp = 4 e^(−t) cos(2t)
    e^(−t) ((−(2 + 3t)a + 4(1 − t)b) cos(2t) + (4(t − 1)a − (2 + 3t)b) sin(2t))
      + 2 e^(−t) (((1 − t)a + 2tb) cos(2t) + (−2ta + (1 − t)b) sin(2t))
      + 5 e^(−t) (ta cos(2t) + tb sin(2t)) = 4 e^(−t) cos(2t)
    4(b − 1) cos(2t) − 4a sin(2t) = 0
    a = 0, b = 1
A particular solution is
    yp = t e^(−t) sin(2t).
The general solution of the differential equation is
    y = c1 e^(−t) cos(2t) + c2 e^(−t) sin(2t) + t e^(−t) sin(2t).
We use the initial conditions to determine the constants of integration.
    y(0) = 1, y'(0) = 0
    c1 = 1, −c1 + 2c2 = 0
    c1 = 1, c2 = 1/2
The solution of the initial value problem is
    y = (1/2) e^(−t) (2 cos(2t) + (2t + 1) sin(2t)).
Solution 21.3
1. We consider the equation
    y'' − 5y' + 6y = 2 e^t.
We find homogeneous solutions with the substitution y = e^(λt).
    λ² − 5λ + 6 = 0
    λ = {2, 3}
The homogeneous solutions are
    y1 = e^(2t), y2 = e^(3t).
We compute the Wronskian of these solutions.
    W(t) = | e^(2t)    e^(3t)   |
           | 2 e^(2t)  3 e^(3t) | = e^(5t)
We find a particular solution with variation of parameters.
    yp = −e^(2t) ∫ (2 e^t e^(3t))/e^(5t) dt + e^(3t) ∫ (2 e^t e^(2t))/e^(5t) dt
       = −2 e^(2t) ∫ e^(−t) dt + 2 e^(3t) ∫ e^(−2t) dt
       = 2 e^t − e^t
    yp = e^t
2. We consider the equation
    y'' + y = tan(t), 0 < t < π/2.
We find homogeneous solutions with the substitution y = e^(λt).
    λ² + 1 = 0
    λ = ±i
The homogeneous solutions are
    y1 = cos(t), y2 = sin(t).
We compute the Wronskian of these solutions.
    W(t) = | cos(t)   sin(t) |
           | −sin(t)  cos(t) | = cos²(t) + sin²(t) = 1
We find a particular solution with variation of parameters.
    yp = −cos(t) ∫ tan(t) sin(t) dt + sin(t) ∫ tan(t) cos(t) dt
       = −cos(t) ∫ sin²(t)/cos(t) dt + sin(t) ∫ sin(t) dt
       = cos(t) ln( (cos(t/2) − sin(t/2))/(cos(t/2) + sin(t/2)) ) + sin(t) cos(t) − sin(t) cos(t)
    yp = cos(t) ln( (cos(t/2) − sin(t/2))/(cos(t/2) + sin(t/2)) )
3. We consider the equation
    y'' − 5y' + 6y = g(t).
The homogeneous solutions are
    y1 = e^(2t), y2 = e^(3t).
The Wronskian of these solutions is W(t) = e^(5t). We find a particular solution with variation of parameters.
    yp = −e^(2t) ∫ (g(t) e^(3t))/e^(5t) dt + e^(3t) ∫ (g(t) e^(2t))/e^(5t) dt
    yp = −e^(2t) ∫ g(t) e^(−2t) dt + e^(3t) ∫ g(t) e^(−3t) dt
Solution 21.4
Solve
    y''(x) + y(x) = x, y(0) = 1, y'(0) = 0.
The solutions of the homogeneous equation are
    y1(x) = cos x, y2(x) = sin x.
The Wronskian of these solutions is
    W[cos x, sin x] = | cos x   sin x |
                      | −sin x  cos x | = cos² x + sin² x = 1.
The variation of parameters solution for the particular solution is
    yp = −cos x ∫ x sin x dx + sin x ∫ x cos x dx
       = −cos x ( −x cos x + ∫ cos x dx ) + sin x ( x sin x − ∫ sin x dx )
       = −cos x ( −x cos x + sin x ) + sin x ( x sin x + cos x )
       = x cos² x − cos x sin x + x sin² x + cos x sin x
       = x
The general solution of the differential equation is thus
    y = c1 cos x + c2 sin x + x.
Applying the two initial conditions gives us the equations
    c1 = 1, c2 + 1 = 0.
The solution subject to the initial conditions is
    y = cos x − sin x + x.
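As a cross-check (illustrative), SymPy's dsolve reproduces this result directly:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')
sol = sp.dsolve(sp.Eq(y(x).diff(x, 2) + y(x), x), y(x),
                ics={y(0): 1, y(x).diff(x).subs(x, 0): 0})
print(sol)   # Eq(y(x), x + cos(x) - sin(x))
```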
Solution 21.5
Solve
    x² y''(x) − x y'(x) + y(x) = x.
The homogeneous equation is
    x² y''(x) − x y'(x) + y(x) = 0.
Substituting y = x^λ into the homogeneous differential equation yields
    x² λ(λ − 1)x^(λ−2) − xλx^(λ−1) + x^λ = 0
    λ² − 2λ + 1 = 0
    (λ − 1)² = 0
    λ = 1.
The homogeneous solutions are
    y1 = x, y2 = x log x.
The Wronskian of the homogeneous solutions is
    W[x, x log x] = | x  x log x    |
                    | 1  1 + log x | = x + x log x − x log x = x.
Writing the inhomogeneous equation in the standard form:
    y''(x) − (1/x) y'(x) + (1/x²) y(x) = 1/x.
Using variation of parameters to find the particular solution,
    yp = −x ∫ (log x/x) dx + x log x ∫ (1/x) dx
       = −x (1/2) log² x + x log x log x
       = (1/2) x log² x.
Thus the general solution of the inhomogeneous differential equation is
    y = c1 x + c2 x log x + (1/2) x log² x.
Solution 21.6
1. First we find the homogeneous solutions. We substitute y = e^(λx) into the homogeneous differential equation.
    y'' + y = 0
    λ² + 1 = 0
    λ = ±ı
    y = { e^(ıx), e^(−ıx) }
We can also write the solutions in terms of real-valued functions.
    y = {cos x, sin x}
The Wronskian of the homogeneous solutions is
    W[cos x, sin x] = | cos x   sin x |
                      | −sin x  cos x | = cos² x + sin² x = 1.
We obtain a particular solution with the variation of parameters formula.
    yp = −cos x ∫ e^x sin x dx + sin x ∫ e^x cos x dx
    yp = −cos x (1/2) e^x (sin x − cos x) + sin x (1/2) e^x (sin x + cos x)
    yp = (1/2) e^x
The general solution is the particular solution plus a linear combination of the homogeneous solutions.
    y = (1/2) e^x + c1 cos x + c2 sin x
2.
    y'' + λ² y = sin x, y(0) = y'(0) = 0
Assume that λ is positive. First we find the homogeneous solutions by substituting y = e^(αx) into the homogeneous differential equation.
    y'' + λ² y = 0
    α² + λ² = 0
    α = ±ıλ
    y = { e^(ıλx), e^(−ıλx) }
    y = {cos(λx), sin(λx)}
The Wronskian of these homogeneous solutions is
    W[cos(λx), sin(λx)] = | cos(λx)     sin(λx)   |
                          | −λ sin(λx)  λ cos(λx) | = λ cos²(λx) + λ sin²(λx) = λ.
We obtain a particular solution with the variation of parameters formula.
    yp = −cos(λx) ∫ (sin(λx) sin x)/λ dx + sin(λx) ∫ (cos(λx) sin x)/λ dx
We evaluate the integrals for λ ≠ 1.
    yp = −cos(λx) (cos(x) sin(λx) − λ sin x cos(λx))/(λ(λ² − 1)) + sin(λx) (cos(x) cos(λx) + λ sin x sin(λx))/(λ(λ² − 1))
    yp = sin x/(λ² − 1)
The general solution for λ ≠ 1 is
    y = sin x/(λ² − 1) + c1 cos(λx) + c2 sin(λx).
The initial conditions give us the constraints:
    c1 = 0, 1/(λ² − 1) + λc2 = 0.
For λ ≠ 1, (non-resonant forcing), the solution subject to the initial conditions is
    y = (λ sin(x) − sin(λx))/(λ(λ² − 1)).
Now consider the case λ = 1. We obtain a particular solution with the variation of parameters formula.
    yp = −cos(x) ∫ sin²(x) dx + sin(x) ∫ cos(x) sin x dx
    yp = −cos(x) (1/2)(x − cos(x) sin(x)) + sin(x) (−(1/2) cos²(x))
    yp = −(1/2) x cos(x)
The general solution for λ = 1 is
    y = −(1/2) x cos(x) + c1 cos(x) + c2 sin(x).
The initial conditions give us the constraints:
    c1 = 0
    −(1/2) + c2 = 0
For λ = 1, (resonant forcing), the solution subject to the initial conditions is
    y = (1/2)(sin(x) − x cos x).
Solution 21.7
1. A set of linearly independent, homogeneous solutions is {cos t, sin t}. The Wronskian of these solutions is
    W(t) = | cos t   sin t |
           | −sin t  cos t | = cos² t + sin² t = 1.
We use variation of parameters to find a particular solution.
    yp = −cos t ∫ g(t) sin t dt + sin t ∫ g(t) cos t dt
The general solution can be written in the form,
    y(t) = (c1 − ∫_a^t g(τ) sin τ dτ) cos t + (c2 + ∫_b^t g(τ) cos τ dτ) sin t.
2. Since the initial conditions are given at t = 0 we choose the lower bounds of integration in the general solution to be that point.
    y = (c1 − ∫_0^t g(τ) sin τ dτ) cos t + (c2 + ∫_0^t g(τ) cos τ dτ) sin t
The initial condition y(0) = 0 gives the constraint, c1 = 0. The derivative of y(t) is then,
    y'(t) = −g(t) sin t cos t + (∫_0^t g(τ) sin τ dτ) sin t + g(t) cos t sin t + (c2 + ∫_0^t g(τ) cos τ dτ) cos t,
    y'(t) = (∫_0^t g(τ) sin τ dτ) sin t + (c2 + ∫_0^t g(τ) cos τ dτ) cos t.
The initial condition y'(0) = 0 gives the constraint c2 = 0. The solution subject to the initial conditions is
    y = ∫_0^t g(τ)(sin t cos τ − cos t sin τ) dτ
    y = ∫_0^t g(τ) sin(t − τ) dτ
[Figure 21.5: Non-resonant Forcing]
3. The solution of the initial value problem
    y'' + y = sin(λt), y(0) = 0, y'(0) = 0,
is
    y = ∫_0^t sin(λτ) sin(t − τ) dτ.
For λ ≠ 1, this is
    y = (1/2) ∫_0^t [cos(t − τ − λτ) − cos(t − τ + λτ)] dτ
      = (1/2) [ −sin(t − τ − λτ)/(1 + λ) + sin(t − τ + λτ)/(1 − λ) ]_0^t
      = (1/2) [ (sin(t) − sin(−λt))/(1 + λ) + (−sin(t) + sin(λt))/(1 − λ) ]
    y = −λ sin t/(1 − λ²) + sin(λt)/(1 − λ²). (21.6)
The solution is the sum of two periodic functions of period 2π and 2π/λ. This solution is plotted in Figure 21.5 on the interval t ∈ [0, 16π] for the values λ = 1/4, 7/8, 5/2.
For λ = 1, we have
    y = (1/2) ∫_0^t [cos(t − 2τ) − cos(t)] dτ
      = (1/2) [ −(1/2) sin(t − 2τ) − τ cos t ]_0^t
    y = (1/2)(sin t − t cos t). (21.7)
The solution has a periodic term and a term whose amplitude grows linearly in t. This solution is plotted in Figure 21.6 on the interval t ∈ [0, 16π].
Note that we can derive (21.7) from (21.6) by taking the limit as λ → 1.
    lim_(λ→1) (sin(λt) − λ sin t)/(1 − λ²) = lim_(λ→1) (t cos(λt) − sin t)/(−2λ)
                                           = (1/2)(sin t − t cos t)
[Figure 21.6: Resonant Forcing]
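The qualitative difference between (21.6) and (21.7) is easy to see numerically (a sketch; the sample points are ours):

```python
# The non-resonant solution stays bounded; the resonant one grows like t/2.
import numpy as np

def y_nonresonant(t, lam):
    return (np.sin(lam*t) - lam*np.sin(t))/(1.0 - lam**2)

def y_resonant(t):
    return 0.5*(np.sin(t) - t*np.cos(t))

t = np.linspace(0.0, 16*np.pi, 5)
print(y_nonresonant(t, 0.25))   # bounded oscillation
print(y_resonant(t))            # amplitude grows linearly with t
```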
Solution 21.8
Let y1, y2 and y3 be linearly independent homogeneous solutions to the differential equation
    L[y] = y''' + p2y'' + p1y' + p0y = f(x).
We will look for a particular solution of the form
    yp = u1y1 + u2y2 + u3y3.
Since the uj's are undetermined functions, we are free to impose two constraints. We choose the constraints to simplify the algebra.
    u1'y1 + u2'y2 + u3'y3 = 0
    u1'y1' + u2'y2' + u3'y3' = 0
Differentiating the expression for yp,
    yp' = u1'y1 + u1y1' + u2'y2 + u2y2' + u3'y3 + u3y3'
        = u1y1' + u2y2' + u3y3'
    yp'' = u1'y1' + u1y1'' + u2'y2' + u2y2'' + u3'y3' + u3y3''
         = u1y1'' + u2y2'' + u3y3''
    yp''' = u1'y1'' + u1y1''' + u2'y2'' + u2y2''' + u3'y3'' + u3y3'''
Substituting the expressions for yp and its derivatives into the differential equation,
    u1'y1'' + u1y1''' + u2'y2'' + u2y2''' + u3'y3'' + u3y3''' + p2(u1y1'' + u2y2'' + u3y3'') + p1(u1y1' + u2y2' + u3y3') + p0(u1y1 + u2y2 + u3y3) = f(x)
    u1'y1'' + u2'y2'' + u3'y3'' + u1L[y1] + u2L[y2] + u3L[y3] = f(x)
    u1'y1'' + u2'y2'' + u3'y3'' = f(x).
With the two constraints, we have the system of equations,
    u1'y1 + u2'y2 + u3'y3 = 0
    u1'y1' + u2'y2' + u3'y3' = 0
    u1'y1'' + u2'y2'' + u3'y3'' = f(x)
We solve for the uj' using Cramer's rule.
    u1' = (y2y3' − y2'y3)f(x)/W(x), u2' = −(y1y3' − y1'y3)f(x)/W(x), u3' = (y1y2' − y1'y2)f(x)/W(x)
Here W(x) is the Wronskian of {y1, y2, y3}. Integrating the expressions for uj', the particular solution is
    yp = y1 ∫ (y2y3' − y2'y3)f(x)/W(x) dx + y2 ∫ (y3y1' − y3'y1)f(x)/W(x) dx + y3 ∫ (y1y2' − y1'y2)f(x)/W(x) dx.
Green Functions
Solution 21.9
We consider the Green function problem
    G''(x|ξ) = δ(x − ξ), G(−∞|ξ) = G'(−∞|ξ) = 0.
The homogeneous solution is y = c1 + c2x. The homogeneous solution that satisfies the boundary conditions is y = 0. Thus the Green function has the form
    G(x|ξ) = { 0         for x < ξ,
               c1 + c2x  for x > ξ.
The continuity and jump conditions are then
    G(ξ+|ξ) = 0, G'(ξ+|ξ) = 1.
Thus the Green function is
    G(x|ξ) = { 0      for x < ξ,
               x − ξ  for x > ξ
            = (x − ξ)H(x − ξ).
The solution of the problem
    y'' = f(x), y(−∞) = y'(−∞) = 0
is
    y = ∫_(−∞)^∞ f(ξ)G(x|ξ) dξ
    y = ∫_(−∞)^∞ f(ξ)(x − ξ)H(x − ξ) dξ
    y = ∫_(−∞)^x f(ξ)(x − ξ) dξ
We differentiate this solution to verify that it satisfies the differential equation.
    y' = [f(ξ)(x − ξ)]_(ξ=x) + ∫_(−∞)^x (∂/∂x)(f(ξ)(x − ξ)) dξ = ∫_(−∞)^x f(ξ) dξ
    y'' = [f(ξ)]_(ξ=x) = f(x)
Solution 21.10
Since we are dealing with an Euler equation, we substitute y = x^λ to find the homogeneous solutions.
    λ(λ − 1) + λ − 1 = 0
    (λ − 1)(λ + 1) = 0
    y1 = x, y2 = 1/x
Variation of Parameters. The Wronskian of the homogeneous solutions is
    W(x) = | x  1/x   |
           | 1  −1/x² | = −1/x − 1/x = −2/x.
A particular solution is
    yp = −x ∫ (x²(1/x))/(−2/x) dx + (1/x) ∫ (x² x)/(−2/x) dx
       = −x ∫ (−x²/2) dx + (1/x) ∫ (−x⁴/2) dx
       = x⁴/6 − x⁴/10
       = x⁴/15.
The general solution is
    y = x⁴/15 + c1x + c2/x.
Applying the initial conditions,
    y(0) = 0 → c2 = 0
    y'(0) = 1 → c1 = 1.
Thus we have the solution
    y = x⁴/15 + x.
Green Function. Since this problem has both an inhomogeneous term in the differential equation and inhomogeneous boundary conditions, we separate it into the two problems
    u'' + (1/x)u' − (1/x²)u = x², u(0) = u'(0) = 0,
    v'' + (1/x)v' − (1/x²)v = 0, v(0) = 0, v'(0) = 1.
First we solve the inhomogeneous differential equation with the homogeneous boundary conditions. The Green function for this problem satisfies
    L[G(x|ξ)] = δ(x − ξ), G(0|ξ) = G'(0|ξ) = 0.
Since the Green function must satisfy the homogeneous boundary conditions, it has the form
    G(x|ξ) = { 0         for x < ξ
               cx + d/x  for x > ξ.
From the continuity condition,
    0 = cξ + d/ξ.
The jump condition yields
    c − d/ξ² = 1.
Solving these two equations, we obtain
    G(x|ξ) = { 0                     for x < ξ
               (1/2)x − ξ²/(2x)      for x > ξ
Thus the solution is
    u(x) = ∫_0^∞ G(x|ξ)ξ² dξ
         = ∫_0^x ((1/2)x − ξ²/(2x)) ξ² dξ
         = (1/6)x⁴ − (1/10)x⁴
         = x⁴/15.
Now to solve the homogeneous differential equation with inhomogeneous boundary conditions. The general solution for v is
    v = cx + d/x.
Applying the two boundary conditions gives
    v = x.
Thus the solution for y is
    y = x + x⁴/15.
Solution 21.11
The Green function satisfies
    G'''(x|ξ) + p2(x)G''(x|ξ) + p1(x)G'(x|ξ) + p0(x)G(x|ξ) = δ(x − ξ).
First note that only the G'''(x|ξ) term can have a delta function singularity. If a lower derivative had a delta function type singularity, then G'''(x|ξ) would be more singular than a delta function and there would be no other term in the equation to balance that behavior. Thus we see that G'''(x|ξ) will have a delta function singularity; G''(x|ξ) will have a jump discontinuity; G'(x|ξ) will be continuous at x = ξ. Integrating the differential equation from ξ− to ξ+ yields
    ∫_(ξ−)^(ξ+) G'''(x|ξ) dx = ∫_(ξ−)^(ξ+) δ(x − ξ) dx
    G''(ξ+|ξ) − G''(ξ−|ξ) = 1.
Thus we have the three continuity conditions:
    G''(ξ+|ξ) = G''(ξ−|ξ) + 1
    G'(ξ+|ξ) = G'(ξ−|ξ)
    G(ξ+|ξ) = G(ξ−|ξ)
Solution 21.12
Variation of Parameters. Consider the problem

x²y'' − 2xy' + 2y = e^{−x}, y(1) = 0, y'(1) = 1.

Previously we showed that two homogeneous solutions are

y1 = x, y2 = x².
The Wronskian of these solutions is

W(x) = det[ x, x²; 1, 2x ] = 2x² − x² = x².
In the variation of parameters formula, we will choose 1 as the lower bound of integration. (This
will simplify the algebra in applying the initial conditions.)

yp = −x ∫_1^x (e^{−ξ} ξ²/ξ⁴) dξ + x² ∫_1^x (e^{−ξ} ξ/ξ⁴) dξ
= −x ∫_1^x (e^{−ξ}/ξ²) dξ + x² ∫_1^x (e^{−ξ}/ξ³) dξ
= −x [ e^{−1} − e^{−x}/x − ∫_1^x (e^{−ξ}/ξ) dξ ] + x² [ e^{−x}/(2x) − e^{−x}/(2x²) + (1/2) ∫_1^x (e^{−ξ}/ξ) dξ ]
= −x e^{−1} + (1/2)(1 + x) e^{−x} + (x + x²/2) ∫_1^x (e^{−ξ}/ξ) dξ
If you wanted to, you could write the last integral in terms of exponential integral functions.
The general solution is

y = c1 x + c2 x² − x e^{−1} + (1/2)(1 + x) e^{−x} + (x + x²/2) ∫_1^x (e^{−ξ}/ξ) dξ.
Applying the boundary conditions,

y(1) = 0 → c1 + c2 = 0
y'(1) = 1 → c1 + 2c2 = 1,

we find that c1 = −1, c2 = 1.
Thus the solution subject to the initial conditions is

y = −(1 + e^{−1}) x + x² + (1/2)(1 + x) e^{−x} + (x + x²/2) ∫_1^x (e^{−ξ}/ξ) dξ.
Green Functions. The solution to the problem is y = u + v where

u'' − (2/x)u' + (2/x²)u = e^{−x}/x², u(1) = 0, u'(1) = 0,

and

v'' − (2/x)v' + (2/x²)v = 0, v(1) = 0, v'(1) = 1.
The problem for v has the solution

v = −x + x².

The Green function for u is

G(x|ξ) = H(x − ξ)uξ(x)

where

uξ(ξ) = 0, and uξ'(ξ) = 1.

Thus the Green function is

G(x|ξ) = H(x − ξ)(−x + x²/ξ).
The solution for u is then

u = ∫_1^∞ G(x|ξ)(e^{−ξ}/ξ²) dξ
= ∫_1^x (−x + x²/ξ)(e^{−ξ}/ξ²) dξ
= −x e^{−1} + (1/2)(1 + x) e^{−x} + (x + x²/2) ∫_1^x (e^{−ξ}/ξ) dξ.
Thus we find the solution for y is

y = −(1 + e^{−1}) x + x² + (1/2)(1 + x) e^{−x} + (x + x²/2) ∫_1^x (e^{−ξ}/ξ) dξ.
Solution 21.13
The differential equation for the Green function is

G'' − G = δ(x − ξ), Gx(0|ξ) = G(1|ξ) = 0.

Note that cosh(x) and sinh(x − 1) are homogeneous solutions that satisfy the left and right boundary
conditions, respectively. The Wronskian of these two solutions is

W(x) = det[ cosh(x), sinh(x − 1); sinh(x), cosh(x − 1) ]
= cosh(x) cosh(x − 1) − sinh(x) sinh(x − 1)
= (1/4)[(e^x + e^{−x})(e^{x−1} + e^{−x+1}) − (e^x − e^{−x})(e^{x−1} − e^{−x+1})]
= (1/2)(e¹ + e^{−1})
= cosh(1).

The Green function for the problem is then

G(x|ξ) = cosh(x<) sinh(x> − 1)/cosh(1),

G(x|ξ) = { cosh(x) sinh(ξ − 1)/cosh(1) for 0 ≤ x ≤ ξ; cosh(ξ) sinh(x − 1)/cosh(1) for ξ ≤ x ≤ 1. }
Solution 21.14
The differential equation for the Green function is

G'' − G = δ(x − ξ), G(0|ξ) = G(∞|ξ) = 0.

Note that sinh(x) and e^{−x} are homogeneous solutions that satisfy the left and right boundary
conditions, respectively. The Wronskian of these two solutions is

W(x) = det[ sinh(x), e^{−x}; cosh(x), −e^{−x} ]
= −sinh(x) e^{−x} − cosh(x) e^{−x}
= −(1/2)(e^x − e^{−x}) e^{−x} − (1/2)(e^x + e^{−x}) e^{−x}
= −1.

The Green function for the problem is then

G(x|ξ) = −sinh(x<) e^{−x>},

G(x|ξ) = { −sinh(x) e^{−ξ} for 0 ≤ x ≤ ξ; −sinh(ξ) e^{−x} for ξ ≤ x < ∞. }
Solution 21.15
a) The Green function problem is

xG''(x|ξ) + G'(x|ξ) = δ(x − ξ), G(0|ξ) bounded, G(1|ξ) = 0.

First we find the homogeneous solutions of the differential equation.

xy'' + y' = 0

This is an exact equation.

d/dx [xy'] = 0
y' = c1/x
y = c1 log x + c2

The homogeneous solutions y1 = 1 and y2 = log x satisfy the left and right boundary conditions,
respectively. The Wronskian of these solutions is

W(x) = det[ 1, log x; 0, 1/x ] = 1/x.

The Green function is

G(x|ξ) = (1 · log x>)/(ξ(1/ξ)),

G(x|ξ) = log x>.
b) The Green function problem is

G''(x|ξ) − G(x|ξ) = δ(x − ξ), G(−a|ξ) = G(a|ξ) = 0.

{e^x, e^{−x}} and {cosh x, sinh x} are both linearly independent sets of homogeneous solutions.
sinh(x + a) and sinh(x − a) are homogeneous solutions that satisfy the left and right boundary
conditions, respectively. The Wronskian of these two solutions is

W(x) = det[ sinh(x + a), sinh(x − a); cosh(x + a), cosh(x − a) ]
= sinh(x + a) cosh(x − a) − sinh(x − a) cosh(x + a)
= sinh(2a).

The Green function is

G(x|ξ) = sinh(x< + a) sinh(x> − a)/sinh(2a).
c) The Green function problem is

G''(x|ξ) − G(x|ξ) = δ(x − ξ), G(x|ξ) bounded as |x| → ∞.

e^x and e^{−x} are homogeneous solutions that satisfy the left and right boundary conditions,
respectively. The Wronskian of these solutions is

W(x) = det[ e^x, e^{−x}; e^x, −e^{−x} ] = −2.

The Green function is

G(x|ξ) = e^{x<} e^{−x>}/(−2),

G(x|ξ) = −(1/2) e^{x< − x>}.
d) The Green function from part (b) is

G(x|ξ) = sinh(x< + a) sinh(x> − a)/sinh(2a).

We take the limit as a → ∞.

lim_{a→∞} sinh(x< + a) sinh(x> − a)/sinh(2a)
= lim_{a→∞} (e^{x<+a} − e^{−x<−a})(e^{x>−a} − e^{−x>+a})/(2(e^{2a} − e^{−2a}))
= lim_{a→∞} (−e^{x<−x>} + e^{x<+x>−2a} + e^{−x<−x>−2a} − e^{−x<+x>−4a})/(2 − 2 e^{−4a})
= −e^{x<−x>}/2

Thus we see that the solution from part (b) approaches the solution from part (c) as a → ∞.
Solution 21.16
1. The problem,

y'' + λy = f(x), y(0) = y(π) = 0,

has a Green function if and only if it has a unique solution. This inhomogeneous problem has
a unique solution if and only if the homogeneous problem has only the trivial solution.

First consider the case λ = 0. We find the general solution of the homogeneous differential
equation.

y = c1 + c2x

Only the trivial solution satisfies the boundary conditions. The problem has a unique solution
for λ = 0.
Now consider non-zero λ. We find the general solution of the homogeneous differential equation.

y = c1 cos(√λ x) + c2 sin(√λ x).

The solution that satisfies the left boundary condition is

y = c sin(√λ x).

We apply the right boundary condition and find nontrivial solutions.

sin(√λ π) = 0
λ = n², n ∈ Z⁺

Thus the problem has a unique solution for all complex λ except λ = n², n ∈ Z⁺.
Consider the case λ = 0. We find solutions of the homogeneous equation that satisfy the left
and right boundary conditions, respectively.

y1 = x, y2 = x − π.

We compute the Wronskian of these functions.

W(x) = det[ x, x − π; 1, 1 ] = π.

The Green function for this case is

G(x|ξ) = x<(x> − π)/π.
We consider the case λ ≠ n², λ ≠ 0. We find the solutions of the homogeneous equation that
satisfy the left and right boundary conditions, respectively.

y1 = sin(√λ x), y2 = sin(√λ (x − π)).

We compute the Wronskian of these functions.

W(x) = det[ sin(√λ x), sin(√λ (x − π)); √λ cos(√λ x), √λ cos(√λ (x − π)) ] = √λ sin(√λ π).

The Green function for this case is

G(x|ξ) = sin(√λ x<) sin(√λ (x> − π))/(√λ sin(√λ π)).
2. Now we consider the problem

y'' + 9y = 1 + αx, y(0) = y(π) = 0.

The homogeneous solutions of the problem are constant multiples of sin(3x). Thus for each
value of α, the problem either has no solution or an infinite number of solutions. There will be
an infinite number of solutions if the inhomogeneity 1 + αx is orthogonal to the homogeneous
solution sin(3x) and no solution otherwise.

∫_0^π (1 + αx) sin(3x) dx = (πα + 2)/3

The problem has a solution only for α = −2/π. For this case the general solution of the
inhomogeneous differential equation is

y = (1/9)(1 − 2x/π) + c1 cos(3x) + c2 sin(3x).

The one-parameter family of solutions that satisfies the boundary conditions is

y = (1/9)(1 − 2x/π − cos(3x)) + c sin(3x).
3. For λ = n², n ∈ Z⁺, y = sin(nx) is a solution of the homogeneous equation that satisfies the
boundary conditions. Equation 21.5 has a (non-unique) solution only if f is orthogonal to
sin(nx).

∫_0^π f(x) sin(nx) dx = 0

The modified Green function satisfies

G'' + n²G = δ(x − ξ) − sin(nx) sin(nξ)/(π/2).

We expand G in a series of the eigenfunctions.

G(x|ξ) = Σ_{k=1}^∞ gk sin(kx)

We substitute the expansion into the differential equation to determine the coefficients. This
will not determine gn. We choose gn = 0, which is one of the choices that will make the
modified Green function symmetric in x and ξ.

Σ_{k=1, k≠n}^∞ gk(n² − k²) sin(kx) = (2/π) Σ_{k=1, k≠n}^∞ sin(kx) sin(kξ)

G(x|ξ) = (2/π) Σ_{k=1, k≠n}^∞ sin(kx) sin(kξ)/(n² − k²)

The solution of the inhomogeneous problem is

y(x) = ∫_0^π f(ξ)G(x|ξ) dξ.
Solution 21.17
We separate the problem for u into the two problems:

Lv ≡ (pv')' + qv = f(x), a < x < b, v(a) = 0, v(b) = 0,
Lw ≡ (pw')' + qw = 0, a < x < b, w(a) = α, w(b) = β,

and note that the solution for u is u = v + w.

The problem for v has the solution

v = ∫_a^b g(x; ξ)f(ξ) dξ,

with the Green function

g(x; ξ) = v1(x<)v2(x>)/(p(ξ)W(ξ)) ≡ { v1(x)v2(ξ)/(p(ξ)W(ξ)) for a ≤ x ≤ ξ; v1(ξ)v2(x)/(p(ξ)W(ξ)) for ξ ≤ x ≤ b. }

Here v1 and v2 are homogeneous solutions that respectively satisfy the left and right homogeneous
boundary conditions.
Since g(x; ξ) is a solution of the homogeneous equation for x ≠ ξ, gξ(x; ξ) is a solution of the
homogeneous equation for x ≠ ξ. This is because for x ≠ ξ,

L[∂g/∂ξ] = ∂/∂ξ L[g] = ∂/∂ξ δ(x − ξ) = 0.

If ξ is outside of the domain, (a, b), then g(x; ξ) and gξ(x; ξ) are homogeneous solutions on that
domain. In particular gξ(x; a) and gξ(x; b) are homogeneous solutions,

L[gξ(x; a)] = L[gξ(x; b)] = 0.
Now we use the definition of the Green function and v1(a) = v2(b) = 0 to determine simple expressions
for these homogeneous solutions.

gξ(x; a) = v1'(a)v2(x)/(p(a)W(a)) − (p'(a)W(a) + p(a)W'(a))v1(a)v2(x)/(p(a)W(a))²
= v1'(a)v2(x)/(p(a)W(a))
= v1'(a)v2(x)/(p(a)(v1(a)v2'(a) − v1'(a)v2(a)))
= −v1'(a)v2(x)/(p(a)v1'(a)v2(a))
= −v2(x)/(p(a)v2(a))
Figure 21.7: G(x; 1) and G(x; −1)
We note that this solution has the boundary values,

gξ(a; a) = −v2(a)/(p(a)v2(a)) = −1/p(a), gξ(b; a) = −v2(b)/(p(a)v2(a)) = 0.
We examine the second solution.

gξ(x; b) = v1(x)v2'(b)/(p(b)W(b)) − (p'(b)W(b) + p(b)W'(b))v1(x)v2(b)/(p(b)W(b))²
= v1(x)v2'(b)/(p(b)W(b))
= v1(x)v2'(b)/(p(b)(v1(b)v2'(b) − v1'(b)v2(b)))
= v1(x)v2'(b)/(p(b)v1(b)v2'(b))
= v1(x)/(p(b)v1(b))

This solution has the boundary values,

gξ(a; b) = v1(a)/(p(b)v1(b)) = 0, gξ(b; b) = v1(b)/(p(b)v1(b)) = 1/p(b).
Thus we see that the solution of

Lw = (pw')' + qw = 0, a < x < b, w(a) = α, w(b) = β,

is

w = −αp(a)gξ(x; a) + βp(b)gξ(x; b).

Therefore the solution of the problem for u is

u = ∫_a^b g(x; ξ)f(ξ) dξ − αp(a)gξ(x; a) + βp(b)gξ(x; b).
Solution 21.18
Figure 21.7 shows a plot of G(x; 1) and G(x; −1) for k = 1.
First we consider the boundary condition u(0) = 0. Note that the solution of

G'' − k²G = δ(x − ξ) − δ(x + ξ), |G(±∞; ξ)| < ∞,

satisfies the condition G(0; ξ) = 0. Thus the Green function which satisfies G(0; ξ) = 0 is

G(x; ξ) = −(1/(2k)) e^{−k|x−ξ|} + (1/(2k)) e^{−k|x+ξ|}.
Figure 21.8: G(x; 1) and G(x; −1)
Since x, ξ > 0 we can write this as

G(x; ξ) = −(1/(2k)) e^{−k|x−ξ|} + (1/(2k)) e^{−k(x+ξ)}
= { −(1/(2k)) e^{−k(ξ−x)} + (1/(2k)) e^{−k(x+ξ)} for x < ξ; −(1/(2k)) e^{−k(x−ξ)} + (1/(2k)) e^{−k(x+ξ)} for ξ < x }
= { −(1/k) e^{−kξ} sinh(kx) for x < ξ; −(1/k) e^{−kx} sinh(kξ) for ξ < x }

G(x; ξ) = −(1/k) e^{−kx>} sinh(kx<)
Now consider the boundary condition u'(0) = 0. Note that the solution of

G'' − k²G = δ(x − ξ) + δ(x + ξ), |G(±∞; ξ)| < ∞,

satisfies the boundary condition Gx(0; ξ) = 0. Thus the Green function is

G(x; ξ) = −(1/(2k)) e^{−k|x−ξ|} − (1/(2k)) e^{−k|x+ξ|}.
Since x, ξ > 0 we can write this as

G(x; ξ) = −(1/(2k)) e^{−k|x−ξ|} − (1/(2k)) e^{−k(x+ξ)}
= { −(1/(2k)) e^{−k(ξ−x)} − (1/(2k)) e^{−k(x+ξ)} for x < ξ; −(1/(2k)) e^{−k(x−ξ)} − (1/(2k)) e^{−k(x+ξ)} for ξ < x }
= { −(1/k) e^{−kξ} cosh(kx) for x < ξ; −(1/k) e^{−kx} cosh(kξ) for ξ < x }

G(x; ξ) = −(1/k) e^{−kx>} cosh(kx<)
The Green functions which satisfy G(0; ξ) = 0 and Gx(0; ξ) = 0 are shown in Figure 21.8.
Solution 21.19
1. The Green function satisfies

g'' − a²g = δ(x − ξ), g(0; ξ) = g'(L; ξ) = 0.

We can write the set of homogeneous solutions as

{e^{ax}, e^{−ax}} or {cosh(ax), sinh(ax)}.

The solutions that respectively satisfy the left and right boundary conditions are

u1 = sinh(ax), u2 = cosh(a(x − L)).

The Wronskian of these solutions is

W(x) = det[ sinh(ax), cosh(a(x − L)); a cosh(ax), a sinh(a(x − L)) ] = −a cosh(aL).

Thus the Green function is

g(x; ξ) = { −sinh(ax) cosh(a(ξ − L))/(a cosh(aL)) for x ≤ ξ; −sinh(aξ) cosh(a(x − L))/(a cosh(aL)) for ξ ≤ x }
= −sinh(ax<) cosh(a(x> − L))/(a cosh(aL)).
2. We take the limit as L → ∞.

g(x; ξ) = lim_{L→∞} ( −sinh(ax<) cosh(a(x> − L))/(a cosh(aL)) )
= −(sinh(ax<)/a) lim_{L→∞} (cosh(ax>) cosh(aL) − sinh(ax>) sinh(aL))/cosh(aL)
= −(sinh(ax<)/a)(cosh(ax>) − sinh(ax>))

g(x; ξ) = −(1/a) sinh(ax<) e^{−ax>}
The solution of

y'' − a²y = e^{−x}, y(0) = y'(∞) = 0

is

y = ∫_0^∞ g(x; ξ) e^{−ξ} dξ
= −(1/a) ∫_0^∞ sinh(ax<) e^{−ax>} e^{−ξ} dξ
= −(1/a) [ ∫_0^x sinh(aξ) e^{−ax} e^{−ξ} dξ + ∫_x^∞ sinh(ax) e^{−aξ} e^{−ξ} dξ ].
We first consider the case a ≠ 1.

y = −(1/a) [ (e^{−ax}/(a² − 1))(−a + e^{−x}(a cosh(ax) + sinh(ax))) + (1/(a + 1)) e^{−(a+1)x} sinh(ax) ]
= (e^{−ax} − e^{−x})/(a² − 1)
For a = 1, we have

y = −(1/4) e^{−x}(−1 + 2x + e^{−2x}) − (1/2) e^{−2x} sinh(x)
= −(1/2) x e^{−x}.
Thus the solution of the problem is

y = { (e^{−ax} − e^{−x})/(a² − 1) for a ≠ 1; −(1/2) x e^{−x} for a = 1. }

We note that this solution satisfies the differential equation and the boundary conditions.
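The check is quick to automate. A minimal Python sketch (the values of a and the test point are arbitrary choices):

import math

def y(x, a):
    if a == 1.0:
        return -0.5 * x * math.exp(-x)
    return (math.exp(-a * x) - math.exp(-x)) / (a * a - 1)

a, x, h = 2.0, 1.3, 1e-4
ypp = (y(x + h, a) - 2 * y(x, a) + y(x - h, a)) / h**2   # central difference
print(ypp - a * a * y(x, a), math.exp(-x))   # both sides of the equation agree
print(y(0.0, a))                             # boundary value y(0) = 0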
21.13 Quiz
Problem 21.1
Find the general solution of
y'' − y = f(x),
where f(x) is a known function.
Solution
21.14 Quiz Solutions
Solution 21.1
y'' − y = f(x)

We substitute y = e^{λx} into the homogeneous differential equation.

y'' − y = 0
λ² e^{λx} − e^{λx} = 0
λ = ±1

The homogeneous solutions are e^x and e^{−x}. The Wronskian of these solutions is

det[ e^x, e^{−x}; e^x, −e^{−x} ] = −2.

We find a particular solution with variation of parameters.

yp = −e^x ∫ (e^{−x} f(x)/(−2)) dx + e^{−x} ∫ (e^x f(x)/(−2)) dx

The general solution is

y = c1 e^x + c2 e^{−x} − e^x ∫ (e^{−x} f(x)/(−2)) dx + e^{−x} ∫ (e^x f(x)/(−2)) dx.
Chapter 22
Difference Equations
Televisions should have a dial to turn up the intelligence. There is a brightness knob, but it
doesn’t work.
-?
22.1 Introduction
Example 22.1.1 Gambler’s ruin problem. Consider a gambler that initially has n dollars. He
plays a game in which he has a probability p of winning a dollar and q of losing a dollar. (Note that
p + q = 1.) The gambler has decided that if he attains N dollars he will stop playing the game. In
this case we will say that he has succeeded. Of course if he runs out of money before that happens,
we will say that he is ruined. What is the probability of the gambler’s ruin? Let us denote this
probability by an. We know that if he has no money left, then his ruin is certain, so a0 = 1. If he
reaches N dollars he will quit the game, so that aN = 0. If he is somewhere in between ruin and
success then the probability of his ruin is equal to p times the probability of his ruin if he had n + 1
dollars plus q times the probability of his ruin if he had n − 1 dollars. Writing this in an equation,
an = pan+1 + qan−1 subject to a0 = 1, aN = 0.
This is an example of a difference equation. You will learn how to solve this particular problem in
the section on constant coefficient equations.
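Even before any theory is developed, the boundary-value recurrence can be solved numerically. The following Python sketch (the choices N = 10 and p = 1/2 are arbitrary) marches the recurrence forward and uses the fact that aN depends linearly on the unknown a1:

def ruin_probabilities(N, p):
    q = 1.0 - p
    def march(a1):
        # iterate a_{n+1} = (a_n - q a_{n-1}) / p from a_0 = 1 and a trial a_1
        a = [1.0, a1]
        for n in range(1, N):
            a.append((a[n] - q * a[n - 1]) / p)
        return a
    # a_N is linear in a_1; interpolate between two trials to force a_N = 0
    lo, hi = march(0.0), march(1.0)
    t = -lo[N] / (hi[N] - lo[N])
    return [u + t * (v - u) for u, v in zip(lo, hi)]

print(ruin_probabilities(10, 0.5)[:4])   # 1.0, 0.9, 0.8, 0.7 for p = 1/2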
Consider the sequence a1, a2, a3, . . . Analogous to a derivative of a continuous function, we can
define a discrete derivative on the sequence
Dan = an+1 − an.
The second discrete derivative is then defined as
D2
an = D[an+1 − an] = an+2 − 2an+1 + an.
The discrete integral of an is

Σ_{i=n0}^{n} ai.

Corresponding to

∫_α^β (df/dx) dx = f(β) − f(α),

in the discrete realm we have

Σ_{n=α}^{β−1} D[an] = Σ_{n=α}^{β−1} (an+1 − an) = aβ − aα.
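A two-line Python sketch makes the telescoping identity concrete (the sample sequence an = n² is an arbitrary choice):

def D(a):
    # discrete derivative of a finite sequence
    return [a[n + 1] - a[n] for n in range(len(a) - 1)]

a = [n * n for n in range(10)]                  # a_n = n^2
print(D(a)[:5])                                  # [1, 3, 5, 7, 9], i.e. 2n + 1
alpha, beta = 2, 7
print(sum(D(a)[alpha:beta]), a[beta] - a[alpha]) # both equal a_beta - a_alpha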
Linear difference equations have the form

D^r an + pr−1(n) D^{r−1} an + · · · + p1(n) Dan + p0(n) an = f(n).

From the definition of the discrete derivative an equivalent form is

an+r + qr−1(n) an+r−1 + · · · + q1(n) an+1 + q0(n) an = f(n).
Besides being important in their own right, we will need to solve difference equations in order to
develop series solutions of differential equations. Also, some methods of solving differential equations
numerically are based on approximating them with difference equations.
There are many similarities between differential and difference equations. Like differential equa-
tions, an rth
order homogeneous difference equation has r linearly independent solutions. The
general solution to the rth
order inhomogeneous equation is the sum of the particular solution and
an arbitrary linear combination of the homogeneous solutions.
For an rth
order difference equation, the initial condition is given by specifying the values of the
first r an’s.
Example 22.1.2 Consider the difference equation an+2 − an+1 − an = 0 subject to the initial
condition a1 = a2 = 1. Note that although we may not know a closed-form formula for the an,
we can calculate the an in order by substituting into the difference equation. The first few an are
1, 1, 2, 3, 5, 8, 13, 21, . . . We recognize this as the Fibonacci sequence.
22.2 Exact Equations
Consider the sequence a1, a2, . . .. Exact difference equations on this sequence have the form
D[F(an, an+1, . . . , n)] = g(n).
We can reduce the order of this equation (or, if it is first order, solve it) by summing from 1 to n − 1.
Σ_{j=1}^{n−1} D[F(aj, aj+1, . . . , j)] = Σ_{j=1}^{n−1} g(j)
F(an, an+1, . . . , n) − F(a1, a2, . . . , 1) = Σ_{j=1}^{n−1} g(j)
F(an, an+1, . . . , n) = Σ_{j=1}^{n−1} g(j) + F(a1, a2, . . . , 1)
Result 22.2.1 We can reduce the order of the exact difference equation

D[F(an, an+1, . . . , n)] = g(n), for n ≥ 1

by summing both sides of the equation to obtain

F(an, an+1, . . . , n) = Σ_{j=1}^{n−1} g(j) + F(a1, a2, . . . , 1).
Example 22.2.1 Consider the difference equation, D[nan] = 1. Summing both sides of this equation,

Σ_{j=1}^{n−1} D[jaj] = Σ_{j=1}^{n−1} 1
nan − a1 = n − 1
an = (n + a1 − 1)/n.
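As a quick check, this solution is easy to test numerically; a minimal Python sketch (a1 = 3 is an arbitrary choice):

a1 = 3.0
a = lambda n: (n + a1 - 1) / n
for n in range(1, 6):
    print((n + 1) * a(n + 1) - n * a(n))   # D[n a_n] = 1 for every n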
22.3 Homogeneous First Order
Consider the homogeneous first order difference equation

an+1 = p(n)an, for n ≥ 1.

We can directly solve for an.

an = an (an−1/an−1)(an−2/an−2) · · · (a1/a1)
= a1 (an/an−1)(an−1/an−2) · · · (a2/a1)
= a1 p(n − 1)p(n − 2) · · · p(1)
= a1 ∏_{j=1}^{n−1} p(j)
Alternatively, we could solve this equation by making it exact. Analogous to an integrating
factor for differential equations, we multiply the equation by the summing factor

S(n) = ( ∏_{j=1}^{n} p(j) )^{−1}.

an+1 − p(n)an = 0
an+1/∏_{j=1}^{n} p(j) − an/∏_{j=1}^{n−1} p(j) = 0
D[ an/∏_{j=1}^{n−1} p(j) ] = 0

Now we sum from 1 to n − 1.

an/∏_{j=1}^{n−1} p(j) − a1 = 0
an = a1 ∏_{j=1}^{n−1} p(j)
Result 22.3.1 The solution of the homogeneous first order difference equation

an+1 = p(n)an, for n ≥ 1,

is

an = a1 ∏_{j=1}^{n−1} p(j).
Example 22.3.1 Consider the equation an+1 = nan with the initial condition a1 = 1.

an = a1 ∏_{j=1}^{n−1} j = (1)(n − 1)! = Γ(n)

Recall that Γ(z) is the generalization of the factorial function. For positive integral values of the
argument, Γ(n) = (n − 1)!.
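A short Python sketch comparing the iterated recurrence with the Gamma function (math.gamma is the standard-library Gamma function):

import math

a = 1.0
for n in range(1, 8):
    print(n, a, math.gamma(n))   # the two columns agree
    a = n * a                    # a_{n+1} = n a_n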
22.4 Inhomogeneous First Order
Consider the equation

an+1 = p(n)an + q(n) for n ≥ 1.

Multiplying by S(n) = (∏_{j=1}^{n} p(j))^{−1} yields

an+1/∏_{j=1}^{n} p(j) − an/∏_{j=1}^{n−1} p(j) = q(n)/∏_{j=1}^{n} p(j).

The left hand side is a discrete derivative.

D[ an/∏_{j=1}^{n−1} p(j) ] = q(n)/∏_{j=1}^{n} p(j)

Summing both sides from 1 to n − 1,

an/∏_{j=1}^{n−1} p(j) − a1 = Σ_{k=1}^{n−1} ( q(k)/∏_{j=1}^{k} p(j) )
an = ( ∏_{m=1}^{n−1} p(m) ) [ Σ_{k=1}^{n−1} ( q(k)/∏_{j=1}^{k} p(j) ) + a1 ].
Result 22.4.1 The solution of the inhomogeneous first order difference equation

an+1 = p(n)an + q(n) for n ≥ 1

is

an = ( ∏_{m=1}^{n−1} p(m) ) [ Σ_{k=1}^{n−1} ( q(k)/∏_{j=1}^{k} p(j) ) + a1 ].
Example 22.4.1 Consider the equation an+1 = nan + 1 for n ≥ 1. The summing factor is

S(n) = ( ∏_{j=1}^{n} j )^{−1} = 1/n!.
Multiplying the difference equation by the summing factor,

an+1/n! − an/(n − 1)! = 1/n!
D[ an/(n − 1)! ] = 1/n!
an/(n − 1)! − a1 = Σ_{k=1}^{n−1} 1/k!
an = (n − 1)! ( Σ_{k=1}^{n−1} 1/k! + a1 ).
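The formula can be verified directly against the recurrence; a Python sketch (a1 = 2 is an arbitrary choice):

import math

a1 = 2.0
def a(n):   # a_n = (n-1)! (sum_{k=1}^{n-1} 1/k! + a_1)
    return math.factorial(n - 1) * (sum(1.0 / math.factorial(k) for k in range(1, n)) + a1)

for n in range(1, 6):
    print(a(n + 1), n * a(n) + 1)   # both sides of a_{n+1} = n a_n + 1 agree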
Example 22.4.2 Consider the equation

an+1 = λan + µ, for n ≥ 0.

From the above result, (with the products and sums starting at zero instead of one), the solution is

an = ( ∏_{m=0}^{n−1} λ ) [ Σ_{k=0}^{n−1} ( µ/∏_{j=0}^{k} λ ) + a0 ]
= λⁿ ( Σ_{k=0}^{n−1} µ/λ^{k+1} + a0 )
= λⁿ ( µ (λ^{−n−1} − λ^{−1})/(λ^{−1} − 1) + a0 )
= λⁿ ( µ (λ^{−n} − 1)/(1 − λ) + a0 )
= µ (1 − λⁿ)/(1 − λ) + a0 λⁿ.
22.5 Homogeneous Constant Coefficient Equations
Homogeneous constant coefficient equations have the form

an+N + pN−1 an+N−1 + · · · + p1 an+1 + p0 an = 0.

The substitution an = rⁿ yields

r^N + pN−1 r^{N−1} + · · · + p1 r + p0 = 0
(r − r1)^{m1} · · · (r − rk)^{mk} = 0.

If r1 is a distinct root then the associated linearly independent solution is r1ⁿ. If r1 is a root of
multiplicity m > 1 then the associated solutions are r1ⁿ, n r1ⁿ, n² r1ⁿ, . . . , n^{m−1} r1ⁿ.
Result 22.5.1 Consider the homogeneous constant coefficient difference equation

an+N + pN−1 an+N−1 + · · · + p1 an+1 + p0 an = 0.

The substitution an = rⁿ yields the equation

(r − r1)^{m1} · · · (r − rk)^{mk} = 0.

A set of linearly independent solutions is

{ r1ⁿ, n r1ⁿ, . . . , n^{m1−1} r1ⁿ, . . . , rkⁿ, n rkⁿ, . . . , n^{mk−1} rkⁿ }.
Example 22.5.1 Consider the equation an+2 − 3an+1 + 2an = 0 with the initial conditions a1 = 1
and a2 = 3. The substitution an = rⁿ yields

r² − 3r + 2 = (r − 1)(r − 2) = 0.

Thus the general solution is

an = c1 1ⁿ + c2 2ⁿ.

The initial conditions give the two equations,

a1 = 1 = c1 + 2c2
a2 = 3 = c1 + 4c2

Since c1 = −1 and c2 = 1, the solution to the difference equation subject to the initial conditions is

an = 2ⁿ − 1.
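A one-line Python check of this closed form against the recurrence and the initial conditions:

a = lambda n: 2**n - 1
print(a(1), a(2))                                               # 1 and 3
print(all(a(n + 2) - 3 * a(n + 1) + 2 * a(n) == 0 for n in range(1, 20)))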
Example 22.5.2 Consider the gambler's ruin problem that was introduced in Example 22.1.1. The
equation for the probability of the gambler's ruin at n dollars is

an = pan+1 + qan−1 subject to a0 = 1, aN = 0.

We assume that 0 < p < 1. With the substitution an = rⁿ we obtain

r = pr² + q.

The roots of this equation are

r = (1 ± √(1 − 4pq))/(2p)
= (1 ± √(1 − 4p(1 − p)))/(2p)
= (1 ± √((1 − 2p)²))/(2p)
= (1 ± |1 − 2p|)/(2p).
We will consider the two cases p ≠ 1/2 and p = 1/2.

p ≠ 1/2. If p < 1/2, the roots are

r = (1 ± (1 − 2p))/(2p)
r1 = (1 − p)/p = q/p, r2 = 1.

If p > 1/2 the roots are

r = (1 ± (2p − 1))/(2p)
r1 = 1, r2 = (−p + 1)/p = q/p.

Thus the general solution for p ≠ 1/2 is

an = c1 + c2 (q/p)ⁿ.
The boundary condition a0 = 1 requires that c1 + c2 = 1. From the boundary condition aN = 0
we have

(1 − c2) + c2 (q/p)^N = 0
c2 = −1/(−1 + (q/p)^N)
c2 = p^N/(p^N − q^N).

Solving for c1,

c1 = 1 − p^N/(p^N − q^N)
c1 = −q^N/(p^N − q^N).

Thus we have

an = −q^N/(p^N − q^N) + (p^N/(p^N − q^N)) (q/p)ⁿ.
p = 1/2. In this case, the two roots of the polynomial are both 1. The general solution is

an = c1 + c2 n.

The left boundary condition demands that c1 = 1. From the right boundary condition we
obtain

1 + c2 N = 0
c2 = −1/N.

Thus the solution for this case is

an = 1 − n/N.

As a check that this formula makes sense, we see that for n = N/2 the probability of ruin is
1 − (N/2)/N = 1/2.
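Both closed forms can also be checked against direct simulation of the game. A Monte Carlo sketch in Python (the parameters n, N, p and the number of trials are arbitrary choices):

import random

def ruin_exact(n, N, p):
    q = 1.0 - p
    if p == 0.5:
        return 1.0 - n / N
    r = q / p
    return -q**N / (p**N - q**N) + p**N / (p**N - q**N) * r**n

def ruin_mc(n, N, p, trials=200_000):
    ruined = 0
    for _ in range(trials):
        m = n
        while 0 < m < N:
            m += 1 if random.random() < p else -1
        ruined += m == 0
    return ruined / trials

n, N, p = 3, 10, 0.45
print(ruin_exact(n, N, p), ruin_mc(n, N, p))   # agree to about two digits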
22.6 Reduction of Order
Consider the difference equation

(n + 1)(n + 2)an+2 − 3(n + 1)an+1 + 2an = 0 for n ≥ 0 (22.1)

We see that one solution to this equation is an = 1/n!. Analogous to the reduction of order for
differential equations, the substitution an = bn/n! will reduce the order of the difference equation.

(n + 1)(n + 2)bn+2/(n + 2)! − 3(n + 1)bn+1/(n + 1)! + 2bn/n! = 0
bn+2 − 3bn+1 + 2bn = 0 (22.2)

At first glance it appears that we have not reduced the order of the equation, but writing it in terms
of discrete derivatives

D²bn − Dbn = 0

shows that we now have a first order difference equation for Dbn. The substitution bn = rⁿ in
Equation 22.2 yields the algebraic equation

r² − 3r + 2 = (r − 1)(r − 2) = 0.

Thus the solutions are bn = 1 and bn = 2ⁿ. Only the bn = 2ⁿ solution will give us another linearly
independent solution for an. Thus the second solution for an is an = bn/n! = 2ⁿ/n!. The general
solution to Equation 22.1 is then

an = c1 (1/n!) + c2 (2ⁿ/n!).
Result 22.6.1 Let an = sn be a homogeneous solution of a linear difference
equation. The substitution an = snbn will yield a difference equation for bn
that is of order one less than the equation for an.
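Both solutions of Equation 22.1 can be checked in exact arithmetic; a Python sketch using the standard-library Fraction type:

from math import factorial
from fractions import Fraction

def residual(a, n):   # (n+1)(n+2) a_{n+2} - 3(n+1) a_{n+1} + 2 a_n
    return (n + 1) * (n + 2) * a(n + 2) - 3 * (n + 1) * a(n + 1) + 2 * a(n)

s1 = lambda n: Fraction(1, factorial(n))       # a_n = 1/n!
s2 = lambda n: Fraction(2**n, factorial(n))    # a_n = 2^n/n!
print(all(residual(s1, n) == 0 for n in range(10)))   # True
print(all(residual(s2, n) == 0 for n in range(10)))   # True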
22.7 Exercises
Exercise 22.1
Find a formula for the nth
term in the Fibonacci sequence 1, 1, 2, 3, 5, 8, 13, . . ..
Hint, Solution
Exercise 22.2
Solve the difference equation
an+2 = (2/n) an, a1 = a2 = 1.
Hint, Solution
22.8 Hints
Hint 22.1
The difference equation corresponding to the Fibonacci sequence is
an+2 − an+1 − an = 0, a1 = a2 = 1.
Hint 22.2
Consider this exercise as two first order difference equations; one for the even terms, one for the odd
terms.
22.9 Solutions
Solution 22.1
We can describe the Fibonacci sequence with the difference equation

an+2 − an+1 − an = 0, a1 = a2 = 1.

With the substitution an = rⁿ we obtain the equation

r² − r − 1 = 0.

This equation has the two distinct roots

r1 = (1 + √5)/2, r2 = (1 − √5)/2.

Thus the general solution is

an = c1 ((1 + √5)/2)ⁿ + c2 ((1 − √5)/2)ⁿ.
From the initial conditions we have

c1 r1 + c2 r2 = 1
c1 r1² + c2 r2² = 1.

Solving for c2 in the first equation,

c2 = (1/r2)(1 − c1 r1).

We substitute this into the second equation.

c1 r1² + (1/r2)(1 − c1 r1) r2² = 1
c1 (r1² − r1 r2) = 1 − r2
c1 = (1 − r2)/(r1² − r1 r2)
= (1 − (1 − √5)/2) / ( ((1 + √5)/2) √5 )
= ((1 + √5)/2) / ( ((1 + √5)/2) √5 )
= 1/√5
c2 =
1
r2
1 −
1
√
5
r1
=
2
1 −
√
5
1 −
1
√
5
1 +
√
5
2
= −
2
1 −
√
5
1 −
√
5
2
√
5
= −
1
√
5
723
Thus the nth term in the Fibonacci sequence has the formula

an = (1/√5) ((1 + √5)/2)ⁿ − (1/√5) ((1 − √5)/2)ⁿ.

It is interesting to note that although the Fibonacci sequence is defined in terms of integers, the
formula for the nth element cannot be expressed in terms of rational numbers alone.
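The closed form is easy to compare with direct iteration of the recurrence; a Python sketch:

from math import sqrt

def binet(n):
    s5 = sqrt(5.0)
    return (((1 + s5) / 2)**n - ((1 - s5) / 2)**n) / s5

a, b = 1, 1          # a_1 = a_2 = 1
for n in range(1, 10):
    print(n, a, round(binet(n)))   # the iterated and closed-form values agree
    a, b = b, a + b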
Solution 22.2
We can consider

an+2 = (2/n) an, a1 = a2 = 1

to be a first order difference equation. First consider the odd terms.

a1 = 1
a3 = 2/1
a5 = (2/3)(2/1)
an = 2^{(n−1)/2}/((n − 2)(n − 4) · · · (1))

For the even terms,

a2 = 1
a4 = 2/2
a6 = (2/4)(2/2)
an = 2^{(n−2)/2}/((n − 2)(n − 4) · · · (2)).

Thus

an = { 2^{(n−1)/2}/((n − 2)(n − 4) · · · (1)) for odd n; 2^{(n−2)/2}/((n − 2)(n − 4) · · · (2)) for even n. }
Chapter 23
Series Solutions of Differential
Equations
Skill beats honesty any day.
-?
23.1 Ordinary Points
Big O and Little o Notation. The notation O(zⁿ) means "terms no bigger than zⁿ." This gives
us a convenient shorthand for manipulating series. For example,

sin z = z − z³/6 + O(z⁵)
1/(1 − z) = 1 + O(z)

The notation o(zⁿ) means "terms smaller than zⁿ." For example,

cos z = 1 + o(1)
e^z = 1 + z + o(z)
Example 23.1.1 Consider the equation

w''(z) − 3w'(z) + 2w(z) = 0.

The general solution to this constant coefficient equation is

w = c1 e^z + c2 e^{2z}.

The functions e^z and e^{2z} are analytic in the finite complex plane. Recall that a function is analytic
at a point z0 if and only if the function has a Taylor series about z0 with a nonzero radius of
convergence. If we substitute the Taylor series expansions about z = 0 of e^z and e^{2z} into the general
solution, we obtain

w = c1 Σ_{n=0}^∞ zⁿ/n! + c2 Σ_{n=0}^∞ 2ⁿzⁿ/n!.
Thus we have a series solution of the differential equation.
Alternatively, we could try substituting a Taylor series into the differential equation and solving
for the coefficients. Substituting w = Σ_{n=0}^∞ an zⁿ into the differential equation yields

d²/dz² Σ_{n=0}^∞ an zⁿ − 3 d/dz Σ_{n=0}^∞ an zⁿ + 2 Σ_{n=0}^∞ an zⁿ = 0
Σ_{n=2}^∞ n(n − 1) an z^{n−2} − 3 Σ_{n=1}^∞ n an z^{n−1} + 2 Σ_{n=0}^∞ an zⁿ = 0
Σ_{n=0}^∞ (n + 2)(n + 1) an+2 zⁿ − 3 Σ_{n=0}^∞ (n + 1) an+1 zⁿ + 2 Σ_{n=0}^∞ an zⁿ = 0
Σ_{n=0}^∞ [ (n + 2)(n + 1) an+2 − 3(n + 1) an+1 + 2 an ] zⁿ = 0.

Equating powers of z, we obtain the difference equation

(n + 2)(n + 1) an+2 − 3(n + 1) an+1 + 2 an = 0, n ≥ 0.

We see that an = 1/n! is one solution since

(n + 2)(n + 1)/(n + 2)! − 3(n + 1)/(n + 1)! + 2(1/n!) = (1 − 3 + 2)/n! = 0.
We use reduction of order for difference equations to find the other solution. Substituting an = bn/n!
into the difference equation yields

(n + 2)(n + 1) bn+2/(n + 2)! − 3(n + 1) bn+1/(n + 1)! + 2 bn/n! = 0
bn+2 − 3bn+1 + 2bn = 0.

At first glance it appears that we have not reduced the order of the difference equation. However,
writing this equation in terms of discrete derivatives,

D²bn − Dbn = 0,

we see that this is a first order difference equation for Dbn. Since this is a constant coefficient
difference equation we substitute bn = rⁿ into the equation to obtain an algebraic equation for r.

r² − 3r + 2 = (r − 1)(r − 2) = 0

Thus the two solutions are bn = 1ⁿ b0 and bn = 2ⁿ b0. Only bn = 2ⁿ b0 will give us a second
independent solution for an. Thus the two solutions for an are

an = a0/n! and an = 2ⁿ a0/n!.

Thus we can write the general solution to the differential equation as

w = c1 Σ_{n=0}^∞ zⁿ/n! + c2 Σ_{n=0}^∞ 2ⁿzⁿ/n!.

We recognize these two sums as the Taylor expansions of e^z and e^{2z}. Thus we obtain the same result
as we did solving the differential equation directly.
Of course it would be pretty silly to go through all the grunge involved in developing a series
expansion of the solution in a problem like Example 23.1.1 since we can solve the problem exactly.
However if we could not solve a differential equation, then having a Taylor series expansion of the
solution about a point z0 would be useful in determining the behavior of the solutions near that
point.
For this method of substituting a Taylor series into the differential equation to be useful we have
to know at what points the solutions are analytic. Let’s say we were considering a second order
differential equation whose solutions were
w1 = 1/z, and w2 = log z.
Trying to find a Taylor series expansion of the solutions about the point z = 0 would fail because
the solutions are not analytic at z = 0. This brings us to two important questions.
1. Can we tell if the solutions to a linear differential equation are analytic at a point without
knowing the solutions?
2. If there are Taylor series expansions of the solutions to a differential equation, what are the
radii of convergence of the series?
In order to answer these questions, we will introduce the concept of an ordinary point. Consider
the nth order linear homogeneous equation

dⁿw/dzⁿ + pn−1(z) d^{n−1}w/dz^{n−1} + · · · + p1(z) dw/dz + p0(z)w = 0.

If each of the coefficient functions pi(z) are analytic at z = z0 then z0 is an ordinary point of the
differential equation.
For reasons of typography we will restrict our attention to second order equations and the point
z0 = 0 for a while. The generalization to an nth
order equation will be apparent. Considering the
point z0 = 0 is only trivially more general as we could introduce the transformation z − z0 → z to
move the point to the origin.
In the chapter on first order differential equations we showed that the solution is analytic at
ordinary points. One would guess that this remains true for higher order equations. Consider the
second order equation

y'' + p(z)y' + q(z)y = 0,

where p and q are analytic at the origin.

p(z) = Σ_{n=0}^∞ pn zⁿ, and q(z) = Σ_{n=0}^∞ qn zⁿ

Assume that one of the solutions is not analytic at the origin and behaves like z^α at z = 0 where
α ≠ 0, 1, 2, . . .. That is, we can approximate the solution with w(z) = z^α + o(z^α). Let's substitute
w = z^α + o(z^α) into the differential equation and look at the lowest power of z in each of the terms.

α(α − 1)z^{α−2} + o(z^{α−2}) + (αz^{α−1} + o(z^{α−1})) Σ_{n=0}^∞ pn zⁿ + (z^α + o(z^α)) Σ_{n=0}^∞ qn zⁿ = 0.

We see that the solution could not possibly behave like z^α, α ≠ 0, 1, 2, · · · because there is no term
on the left to cancel out the z^{α−2} term. The terms on the left side could not add to zero.

You could also check that a solution could not possibly behave like log z at the origin. Though
we will not prove it, if z0 is an ordinary point of a homogeneous differential equation, then all the
solutions are analytic at the point z0. Since the solution is analytic at z0 we can expand it in a
Taylor series.
Now we are prepared to answer our second question. From complex variables, we know that the
radius of convergence of the Taylor series expansion of a function is the distance to the nearest
singularity of that function. Since the solutions to a differential equation are analytic at ordinary
points of the equation, the series expansion about an ordinary point will have a radius of convergence
at least as large as the distance to the nearest singularity of the coefficient functions.
Example 23.1.2 Consider the equation

w'' + (1/cos z) w' + z²w = 0.

If we expand the solution to the differential equation in Taylor series about z = 0, the radius of
convergence will be at least π/2. This is because the coefficient functions are analytic at the origin,
and the nearest singularities of 1/cos z are at z = ±π/2.
23.1.1 Taylor Series Expansion for a Second Order Differential Equation
Consider the differential equation

w'' + p(z)w' + q(z)w = 0

where p(z) and q(z) are analytic in some neighborhood of the origin.

p(z) = Σ_{n=0}^∞ pn zⁿ and q(z) = Σ_{n=0}^∞ qn zⁿ

We substitute a Taylor series and its derivatives

w = Σ_{n=0}^∞ an zⁿ
w' = Σ_{n=1}^∞ n an z^{n−1} = Σ_{n=0}^∞ (n + 1) an+1 zⁿ
w'' = Σ_{n=2}^∞ n(n − 1) an z^{n−2} = Σ_{n=0}^∞ (n + 2)(n + 1) an+2 zⁿ

into the differential equation to obtain

Σ_{n=0}^∞ (n + 2)(n + 1) an+2 zⁿ + ( Σ_{n=0}^∞ pn zⁿ )( Σ_{n=0}^∞ (n + 1) an+1 zⁿ ) + ( Σ_{n=0}^∞ qn zⁿ )( Σ_{n=0}^∞ an zⁿ ) = 0
Σ_{n=0}^∞ (n + 2)(n + 1) an+2 zⁿ + Σ_{n=0}^∞ ( Σ_{m=0}^n (m + 1) am+1 pn−m ) zⁿ + Σ_{n=0}^∞ ( Σ_{m=0}^n am qn−m ) zⁿ = 0
Σ_{n=0}^∞ [ (n + 2)(n + 1) an+2 + Σ_{m=0}^n ( (m + 1) am+1 pn−m + am qn−m ) ] zⁿ = 0.

Equating coefficients of powers of z,

(n + 2)(n + 1) an+2 + Σ_{m=0}^n ( (m + 1) am+1 pn−m + am qn−m ) = 0 for n ≥ 0.
Figure 23.1: Plot of the Numerical Solution and the First Three Terms in the Taylor Series.
We see that a0 and a1 are arbitrary and the rest of the coefficients are determined by the recurrence
relation

an+2 = −(1/((n + 1)(n + 2))) Σ_{m=0}^n ( (m + 1) am+1 pn−m + am qn−m ) for n ≥ 0.
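This recurrence is simple to implement. A minimal Python sketch (the helper name and the truncation order are arbitrary choices); given the Taylor coefficients of p and q and the two free values a0 and a1, it generates the series coefficients:

def taylor_coefficients(p, q, a0, a1, order):
    # p, q are lists of Taylor coefficients; returns a_0, ..., a_{order}
    a = [a0, a1]
    for n in range(order - 1):
        s = sum((m + 1) * a[m + 1] * p[n - m] + a[m] * q[n - m]
                for m in range(n + 1))
        a.append(-s / ((n + 1) * (n + 2)))
    return a

# Example 23.1.1: w'' - 3w' + 2w = 0, i.e. p(z) = -3, q(z) = 2
N = 8
p = [-3.0] + [0.0] * N
q = [2.0] + [0.0] * N
print(taylor_coefficients(p, q, 1.0, 1.0, N))   # matches e^z: 1, 1, 1/2, 1/6, ...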
Example 23.1.3 Consider the problem

y'' + (1/cos x) y' + e^x y = 0, y(0) = y'(0) = 1.

Let's expand the solution in a Taylor series about the origin.

y(x) = Σ_{n=0}^∞ an xⁿ

Since y(0) = a0 and y'(0) = a1, we see that a0 = a1 = 1. The Taylor expansions of the coefficient
functions are

1/cos x = 1 + O(x), and e^x = 1 + O(x).

Now we can calculate a2 from the recurrence relation.

a2 = −(1/(1 · 2)) Σ_{m=0}^0 ( (m + 1) am+1 p0−m + am q0−m )
= −(1/2)(1 · 1 · 1 + 1 · 1)
= −1

Thus the solution to the problem is

y(x) = 1 + x − x² + O(x³).

In Figure 23.1 the numerical solution is plotted in a solid line and the sum of the first three terms
of the Taylor series is plotted in a dashed line.
The general recurrence relation for the an’s is useful if you only want to calculate the first few
terms in the Taylor expansion. However, for many problems substituting the Taylor series for the
coefficient functions into the differential equation will enable you to find a simpler form of the
solution. We consider the following example to illustrate this point.
Example 23.1.4 Develop a series expansion of the solution to the initial value problem

w'' + (1/(z² + 1)) w = 0, w(0) = 1, w'(0) = 0.

Solution using the General Recurrence Relation. The coefficient function has the Taylor
expansion

1/(1 + z²) = Σ_{n=0}^∞ (−1)ⁿ z^{2n}.

From the initial condition we obtain a0 = 1 and a1 = 0. Thus we see that the solution is

w = Σ_{n=0}^∞ an zⁿ,

where

an+2 = −(1/((n + 1)(n + 2))) Σ_{m=0}^n am qn−m

and

qn = { 0 for odd n; (−1)^{n/2} for even n. }

Although this formula is fine if you only want to calculate the first few an's, it is just a tad unwieldy
to work with. Let's see if we can get a better expression for the solution.

Substitute the Taylor Series into the Differential Equation. Substituting a Taylor series
for w yields

d²/dz² Σ_{n=0}^∞ an zⁿ + (1/(z² + 1)) Σ_{n=0}^∞ an zⁿ = 0.

Note that the algebra will be easier if we multiply by z² + 1. The polynomial z² + 1 has only two
terms, but the Taylor series for 1/(z² + 1) has an infinite number of terms.

(z² + 1) d²/dz² Σ_{n=0}^∞ an zⁿ + Σ_{n=0}^∞ an zⁿ = 0
Σ_{n=2}^∞ n(n − 1) an zⁿ + Σ_{n=2}^∞ n(n − 1) an z^{n−2} + Σ_{n=0}^∞ an zⁿ = 0
Σ_{n=0}^∞ n(n − 1) an zⁿ + Σ_{n=0}^∞ (n + 2)(n + 1) an+2 zⁿ + Σ_{n=0}^∞ an zⁿ = 0
Σ_{n=0}^∞ [ (n + 2)(n + 1) an+2 + (n(n − 1) + 1) an ] zⁿ = 0

Equating powers of z gives us the difference equation

an+2 = −(n² − n + 1)/((n + 2)(n + 1)) an, for n ≥ 0.

From the initial conditions we see that a0 = 1 and a1 = 0. All of the odd terms in the series will
be zero. For the even terms, it is easier to reformulate the problem with the change of variables
bn = a2n. In terms of bn the difference equation is

bn+1 = −((2n)² − 2n + 1)/((2n + 2)(2n + 1)) bn, b0 = 1.
Figure 23.2: Plot of the solution and approximations.
This is a first order difference equation with the solution

bn = ∏_{j=0}^{n−1} ( −(4j² − 2j + 1)/((2j + 2)(2j + 1)) ).

Thus we have that

an = { ∏_{j=0}^{n/2−1} ( −(4j² − 2j + 1)/((2j + 2)(2j + 1)) ) for even n; 0 for odd n. }

Note that the nearest singularities of 1/(z² + 1) in the complex plane are at z = ±i. Thus the
radius of convergence must be at least 1. Applying the ratio test, the series converges for values of
|z| such that

lim_{n→∞} | an+2 z^{n+2}/(an zⁿ) | < 1
lim_{n→∞} | −(n² − n + 1)/((n + 2)(n + 1)) | |z|² < 1
|z|² < 1.

The radius of convergence is 1.

The first few terms in the Taylor expansion are

w = 1 − (1/2) z² + (1/8) z⁴ − (13/240) z⁶ + · · · .

In Figure 23.2 the plot of the first two nonzero terms is shown in a short dashed line, the plot of
the first four nonzero terms is shown in a long dashed line, and the numerical solution is shown in
a solid line.
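The coefficients above follow directly from the difference equation; a Python sketch in exact arithmetic (the truncation is arbitrary):

from fractions import Fraction

a = [Fraction(1), Fraction(0)]            # a_0 = 1, a_1 = 0
for n in range(10):
    a.append(-Fraction(n * n - n + 1, (n + 2) * (n + 1)) * a[n])
print(a[2], a[4], a[6])                    # -1/2, 1/8, -13/240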
In general, if the coefficient functions are rational functions, that is they are fractions of poly-
nomials, multiplying the equations by the quotient will reduce the algebra involved in finding the
series solution.
Example 23.1.5 If we were going to find the Taylor series expansion about z = 0 of the solution
to

w'' + (z/(1 + z)) w' + (1/(1 − z²)) w = 0,

we would first want to multiply the equation by 1 − z² to obtain

(1 − z²) w'' + z(1 − z) w' + w = 0.
Example 23.1.6 Find the series expansions about z = 0 of the fundamental set of solutions for

w'' + z²w = 0.

Recall that the fundamental set of solutions {w1, w2} satisfy

w1(0) = 1, w2(0) = 0,
w1'(0) = 0, w2'(0) = 1.

Thus if

w1 = Σ_{n=0}^∞ an zⁿ and w2 = Σ_{n=0}^∞ bn zⁿ,

then the coefficients must satisfy

a0 = 1, a1 = 0, and b0 = 0, b1 = 1.

Substituting the Taylor expansion w = Σ_{n=0}^∞ cn zⁿ into the differential equation,

Σ_{n=2}^∞ n(n − 1) cn z^{n−2} + Σ_{n=0}^∞ cn z^{n+2} = 0
Σ_{n=0}^∞ (n + 2)(n + 1) cn+2 zⁿ + Σ_{n=2}^∞ cn−2 zⁿ = 0
2c2 + 6c3 z + Σ_{n=2}^∞ [ (n + 2)(n + 1) cn+2 + cn−2 ] zⁿ = 0

Equating coefficients of powers of z,

z⁰: c2 = 0
z¹: c3 = 0
zⁿ: (n + 2)(n + 1) cn+2 + cn−2 = 0, for n ≥ 2
cn+4 = −cn/((n + 4)(n + 3))

For our first solution we have the difference equation

a0 = 1, a1 = 0, a2 = 0, a3 = 0, an+4 = −an/((n + 4)(n + 3)).

For our second solution,

b0 = 0, b1 = 1, b2 = 0, b3 = 0, bn+4 = −bn/((n + 4)(n + 3)).

The first few terms in the fundamental set of solutions are

w1 = 1 − (1/12) z⁴ + (1/672) z⁸ − · · · , w2 = z − (1/20) z⁵ + (1/1440) z⁹ − · · · .

In Figure 23.3 the five term approximation is graphed in a coarse dashed line, the ten term
approximation is graphed in a fine dashed line, and the numerical solution of w1 is graphed in a
solid line. The same is done for w2.
Figure 23.3: The graph of approximations and numerical solution of w1 and w2.
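The recurrence generates both fundamental solutions mechanically; a Python sketch in exact arithmetic (the number of retained terms is arbitrary):

from fractions import Fraction

def series(c0, c1, terms):
    # c_2 = c_3 = 0, then c_{n+4} = -c_n / ((n+4)(n+3))
    c = [Fraction(c0), Fraction(c1), Fraction(0), Fraction(0)]
    for n in range(terms - 4):
        c.append(-c[n] / ((n + 4) * (n + 3)))
    return c

w1 = series(1, 0, 12)
w2 = series(0, 1, 12)
print(w1[4], w1[8])   # -1/12, 1/672
print(w2[5], w2[9])   # -1/20, 1/1440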
Result 23.1.1 Consider the nth order linear homogeneous equation

dⁿw/dzⁿ + pn−1(z) d^{n−1}w/dz^{n−1} + · · · + p1(z) dw/dz + p0(z)w = 0.

If each of the coefficient functions pi(z) are analytic at z = z0 then z0 is an
ordinary point of the differential equation. The solution is analytic in some
region containing z0 and can be expanded in a Taylor series. The radius of
convergence of the series will be at least the distance to the nearest singularity
of the coefficient functions in the complex plane.
23.2 Regular Singular Points of Second Order Equations
Consider the differential equation

w'' + (p(z)/(z − z0)) w' + (q(z)/(z − z0)²) w = 0.

If z = z0 is not an ordinary point but both p(z) and q(z) are analytic at z = z0 then z0 is a regular
singular point of the differential equation. The following equations have a regular singular point
at z = 0.

• w'' + (1/z) w' + z²w = 0
• w'' + (1/sin z) w' − w = 0
• w'' − z w' + (1/(z sin z)) w = 0
Concerning regular singular points of second order linear equations there is good news and bad
news.
The Good News. We will find that with the use of the Frobenius method we can always find
series expansions of two linearly independent solutions at a regular singular point. We will illustrate
this theory with several examples.
The Bad News. Instead of a tidy little theory like we have for ordinary points, the solutions can
be of several different forms. Also, for some of the problems the algebra can get pretty ugly.
Example 23.2.1 Consider the equation

w'' + (3(1 + z)/(16z²)) w = 0.

We wish to find series solutions about the point z = 0. First we try a Taylor series w = Σ_{n=0}^∞ an zⁿ.
Substituting this into the differential equation (multiplied by z²),

z² Σ_{n=2}^∞ n(n − 1) an z^{n−2} + (3/16)(1 + z) Σ_{n=0}^∞ an zⁿ = 0
Σ_{n=0}^∞ n(n − 1) an zⁿ + (3/16) Σ_{n=0}^∞ an zⁿ + (3/16) Σ_{n=1}^∞ an−1 zⁿ = 0.

Equating powers of z,

z⁰: (3/16) a0 = 0
zⁿ: (n(n − 1) + 3/16) an + (3/16) an−1 = 0
an = −(3/16) an−1/(n(n − 1) + 3/16).

Since a0 = 0, this difference equation has the solution an = 0 for all n. Thus we have obtained only the trivial
solution to the differential equation. We must try an expansion of a more general form. We recall
that for regular singular points of first order equations we can always find a solution in the form of a
Frobenius series w = z^α Σ_{n=0}^∞ an zⁿ, a0 ≠ 0. We substitute this series into the differential equation.

z² Σ_{n=0}^∞ (α(α − 1) + 2αn + n(n − 1)) an z^{n+α−2} + (3/16)(1 + z) z^α Σ_{n=0}^∞ an zⁿ = 0
Σ_{n=0}^∞ (α(α − 1) + 2αn + n(n − 1)) an zⁿ + (3/16) Σ_{n=0}^∞ an zⁿ + (3/16) Σ_{n=1}^∞ an−1 zⁿ = 0

Equating the z⁰ term to zero yields the equation

(α(α − 1) + 3/16) a0 = 0.

Since we have assumed that a0 ≠ 0, the polynomial in α must be zero. The two roots of the
polynomial are

α1 = (1 + √(1 − 3/4))/2 = 3/4, α2 = (1 − √(1 − 3/4))/2 = 1/4.

Thus our two series solutions will be of the form

w1 = z^{3/4} Σ_{n=0}^∞ an zⁿ, w2 = z^{1/4} Σ_{n=0}^∞ bn zⁿ.

Substituting the first series (α = 3/4) into the differential equation,

Σ_{n=0}^∞ (−3/16 + (3/2)n + n(n − 1) + 3/16) an zⁿ + (3/16) Σ_{n=1}^∞ an−1 zⁿ = 0.

Equating powers of z, we see that a0 is arbitrary and

an = −(3/(16n(n + 1/2))) an−1 for n ≥ 1.

This difference equation has the solution

an = a0 ∏_{j=1}^n ( −3/(16j(j + 1/2)) ) for n ≥ 1.

Substituting the second series (α = 1/4) into the differential equation,

Σ_{n=0}^∞ (−3/16 + n/2 + n(n − 1) + 3/16) bn zⁿ + (3/16) Σ_{n=1}^∞ bn−1 zⁿ = 0.

The difference equation for bn has the same form, with j + 1/2 replaced by j − 1/2, so that

bn = b0 ∏_{j=1}^n ( −3/(16j(j − 1/2)) ) for n ≥ 1.

Thus we can write the general solution to the differential equation as

w = c1 z^{3/4} [ 1 + Σ_{n=1}^∞ ∏_{j=1}^n ( −3/(16j(j + 1/2)) ) zⁿ ] + c2 z^{1/4} [ 1 + Σ_{n=1}^∞ ∏_{j=1}^n ( −3/(16j(j − 1/2)) ) zⁿ ].
23.2.1 Indicial Equation
Now let's consider the general equation for a regular singular point at z = 0,

w'' + (p(z)/z) w' + (q(z)/z²) w = 0.

Since p(z) and q(z) are analytic at z = 0 we can expand them in Taylor series.

p(z) = Σ_{n=0}^∞ pn zⁿ, q(z) = Σ_{n=0}^∞ qn zⁿ

Substituting a Frobenius series w = z^α Σ_{n=0}^∞ an zⁿ, a0 ≠ 0, and the Taylor series for p(z) and q(z)
into the differential equation yields

Σ_{n=0}^∞ (α + n)(α + n − 1) an zⁿ + ( Σ_{n=0}^∞ pn zⁿ )( Σ_{n=0}^∞ (α + n) an zⁿ ) + ( Σ_{n=0}^∞ qn zⁿ )( Σ_{n=0}^∞ an zⁿ ) = 0
Σ_{n=0}^∞ [ (α + n)² − (α + n) + p0(α + n) + q0 ] an zⁿ + ( Σ_{n=1}^∞ pn zⁿ )( Σ_{n=0}^∞ (α + n) an zⁿ ) + ( Σ_{n=1}^∞ qn zⁿ )( Σ_{n=0}^∞ an zⁿ ) = 0
Σ_{n=0}^∞ [ (α + n)² + (p0 − 1)(α + n) + q0 ] an zⁿ + Σ_{n=1}^∞ ( Σ_{j=0}^{n−1} (α + j) aj pn−j ) zⁿ + Σ_{n=1}^∞ ( Σ_{j=0}^{n−1} aj qn−j ) zⁿ = 0

Equating powers of z,

z⁰: [ α² + (p0 − 1)α + q0 ] a0 = 0
zⁿ: [ (α + n)² + (p0 − 1)(α + n) + q0 ] an = −Σ_{j=0}^{n−1} [ (α + j) pn−j + qn−j ] aj.

Let

I(α) = α² + (p0 − 1)α + q0 = 0.

This is known as the indicial equation. The indicial equation gives us the form of the solutions.
The equation for a0 is I(α)a0 = 0. Since we assumed that a0 is nonzero, I(α) = 0. Let the two
roots of I(α) be α1 and α2 where ℜ(α1) ≥ ℜ(α2).

Rewriting the difference equation for an(α),

I(α + n) an(α) = −Σ_{j=0}^{n−1} [ (α + j) pn−j + qn−j ] aj(α) for n ≥ 1. (23.1)

If the roots are distinct and do not differ by an integer then we can use Equation 23.1 to solve
for an(α1) and an(α2), which will give us the two solutions

w1 = z^{α1} Σ_{n=0}^∞ an(α1) zⁿ, and w2 = z^{α2} Σ_{n=0}^∞ an(α2) zⁿ.

If the roots are not distinct, α1 = α2, we will only have one solution and will have to generate
another. If the roots differ by an integer, α1 − α2 = N, there is one solution corresponding to α1,
but when we try to solve Equation 23.1 for an(α2), we will encounter the equation

I(α2 + N) aN(α2) = I(α1) aN(α2) = 0 · aN(α2) = −Σ_{j=0}^{N−1} [ (α2 + j) pN−j + qN−j ] aj(α2).

If the right side of the equation is nonzero, then aN(α2) is undefined. On the other hand, if the
right side is zero then aN(α2) is arbitrary. The rest of this section is devoted to considering the
cases α1 = α2 and α1 − α2 = N.
23.2.2 The Case: Double Root
Consider a second order equation L[w] = 0 with a regular singular point at z = 0. Suppose the
indicial equation has a double root.

I(α) = (α − α1)² = 0

One solution has the form

w1 = z^{α1} Σ_{n=0}^∞ an zⁿ.

In order to find the second solution, we will differentiate with respect to the parameter, α. Let an(α)
satisfy Equation 23.1. Substituting the Frobenius expansion into the differential equation,

L[ z^α Σ_{n=0}^∞ an(α) zⁿ ] = 0.

Setting α = α1 will make the left hand side of the equation zero. Differentiating this equation with
respect to α,

∂/∂α L[ z^α Σ_{n=0}^∞ an(α) zⁿ ] = 0.

Interchanging the order of differentiation,

L[ log z · z^α Σ_{n=0}^∞ an(α) zⁿ + z^α Σ_{n=0}^∞ (dan(α)/dα) zⁿ ] = 0.

Since setting α = α1 will make the left hand side of this equation zero, the second linearly indepen-
dent solution is

w2 = log z · z^{α1} Σ_{n=0}^∞ an(α1) zⁿ + z^{α1} Σ_{n=0}^∞ (dan(α)/dα)|_{α=α1} zⁿ

w2 = w1 log z + z^{α1} Σ_{n=0}^∞ an'(α1) zⁿ.
Example 23.2.2 Consider the differential equation

w'' + ((1 + z)/(4z²)) w = 0.

There is a regular singular point at z = 0. The indicial equation is

α(α − 1) + 1/4 = (α − 1/2)² = 0.

One solution will have the form

w1 = z^{1/2} Σ_{n=0}^∞ an zⁿ, a0 ≠ 0.

Substituting the Frobenius expansion

z^α Σ_{n=0}^∞ an(α) zⁿ

into the differential equation yields

z²w'' + (1/4)(1 + z)w = 0
Σ_{n=0}^∞ (α(α − 1) + 2αn + n(n − 1)) an(α) z^{n+α} + (1/4) Σ_{n=0}^∞ an(α) z^{n+α} + (1/4) Σ_{n=0}^∞ an(α) z^{n+α+1} = 0.

Divide by z^α and adjust the summation indices.

Σ_{n=0}^∞ (α(α − 1) + 2αn + n(n − 1)) an(α) zⁿ + (1/4) Σ_{n=0}^∞ an(α) zⁿ + (1/4) Σ_{n=1}^∞ an−1(α) zⁿ = 0
(α(α − 1) + 1/4) a0 + Σ_{n=1}^∞ [ (α(α − 1) + 2αn + n(n − 1) + 1/4) an(α) + (1/4) an−1(α) ] zⁿ = 0

Equating the coefficient of z⁰ to zero yields I(α)a0 = 0. Equating the coefficients of zⁿ to zero yields
the difference equation

(α(α − 1) + 2αn + n(n − 1) + 1/4) an(α) + (1/4) an−1(α) = 0
an(α) = −an−1(α)/(4(α + n − 1/2)²),

since α(α − 1) + 2αn + n(n − 1) + 1/4 = (α + n − 1/2)². This difference equation has the solution

an(α) = a0 (−1/4)ⁿ ∏_{j=1}^n 1/(α + j − 1/2)².

Setting α = 1/2, the coefficients for the first solution are

an(1/2) = (−1/4)ⁿ a0/(n!)²: a0, −(1/4)a0, (1/64)a0, . . .

The second solution has the form

w2 = w1 log z + z^{1/2} Σ_{n=0}^∞ an'(1/2) zⁿ.

Differentiating the an(α) logarithmically,

dan(α)/dα = −2 an(α) Σ_{j=1}^n 1/(α + j − 1/2).

Setting α = 1/2 in this equation yields

an'(1/2) = −2 (−1/4)ⁿ (a0/(n!)²) Σ_{j=1}^n 1/j.

Thus the second solution is

w2 = w1 log z − 2 z^{1/2} Σ_{n=1}^∞ (−1/4)ⁿ (1/(n!)²) ( Σ_{j=1}^n 1/j ) a0 zⁿ.

The first few terms in the general solution are

(c1 + c2 log z) z^{1/2} ( 1 − (1/4) z + (1/64) z² − · · · ) + c2 z^{1/2} ( (1/2) z − (3/64) z² + · · · ).
23.2.3 The Case: Roots Differ by an Integer
Consider the case in which the roots of the indicial equation α1 and α2 differ by an integer, (α1 − α2 =
N). Recall the equation that determines an(α),

I(α + n) an = [ (α + n)² + (p0 − 1)(α + n) + q0 ] an = −Σ_{j=0}^{n−1} [ (α + j) pn−j + qn−j ] aj.

When α = α2 the equation for aN is

I(α2 + N) aN(α2) = 0 · aN(α2) = −Σ_{j=0}^{N−1} [ (α2 + j) pN−j + qN−j ] aj.

If the right hand side of this equation is zero, then aN is arbitrary. There will be two solutions of
the Frobenius form.

w1 = z^{α1} Σ_{n=0}^∞ an(α1) zⁿ and w2 = z^{α2} Σ_{n=0}^∞ an(α2) zⁿ.
If the right hand side of the equation is nonzero then aN(α2) will be undefined. We will have to
generate the second solution. Let

w(z, α) = z^α Σ_{n=0}^∞ an(α) zⁿ,

where an(α) satisfies the recurrence formula. Substituting this series into the differential equation
yields

L[w(z, α)] = 0.

We will multiply by (α − α2), differentiate this equation with respect to α and then set α = α2.
This will generate a linearly independent solution.

∂/∂α L[(α − α2) w(z, α)] = L[ ∂/∂α ((α − α2) w(z, α)) ]
= L[ ∂/∂α ( (α − α2) z^α Σ_{n=0}^∞ an(α) zⁿ ) ]
= L[ log z · z^α Σ_{n=0}^∞ (α − α2) an(α) zⁿ + z^α Σ_{n=0}^∞ d/dα [(α − α2) an(α)] zⁿ ]

Setting α = α2 will make this expression zero, thus

log z · z^{α2} Σ_{n=0}^∞ lim_{α→α2} {(α − α2) an(α)} zⁿ + z^{α2} Σ_{n=0}^∞ lim_{α→α2} d/dα [(α − α2) an(α)] zⁿ

is a solution. Now let's look at the first term in this solution,

log z · z^{α2} Σ_{n=0}^∞ lim_{α→α2} {(α − α2) an(α)} zⁿ.

The first N terms in the sum will be zero. That is because a0, . . . , aN−1 are finite, so multiplying by
(α − α2) and taking the limit as α → α2 will make the coefficients vanish. The equation for aN(α)
is

I(α + N) aN(α) = −Σ_{j=0}^{N−1} [ (α + j) pN−j + qN−j ] aj(α).

Thus the coefficient of the Nth term is

lim_{α→α2} (α − α2) aN(α) = −lim_{α→α2} [ ((α − α2)/I(α + N)) Σ_{j=0}^{N−1} [ (α + j) pN−j + qN−j ] aj(α) ]
= −lim_{α→α2} [ ((α − α2)/((α + N − α1)(α + N − α2))) Σ_{j=0}^{N−1} [ (α + j) pN−j + qN−j ] aj(α) ].

Since α1 = α2 + N, lim_{α→α2} (α − α2)/(α + N − α1) = 1 and this limit is

= −(1/(α1 − α2)) Σ_{j=0}^{N−1} [ (α2 + j) pN−j + qN−j ] aj(α2).

Using this you can show that the first term in the solution can be written

d−1 log z · w1,
where d−1 is a constant. Thus the second linearly independent solution is

w2 = d−1 log z · w1 + z^{α2} Σ_{n=0}^∞ dn zⁿ,

where

d−1 = −(1/a0)(1/(α1 − α2)) Σ_{j=0}^{N−1} [ (α2 + j) pN−j + qN−j ] aj(α2)

and

dn = lim_{α→α2} d/dα [(α − α2) an(α)] for n ≥ 0.
Example 23.2.3 Consider the differential equation

w'' + (1 − 2/z) w' + (2/z²) w = 0.

The point z = 0 is a regular singular point. In order to find series expansions of the solutions, we
first calculate the indicial equation. We can write the coefficient functions in the form

p(z)/z = (1/z)(−2 + z), and q(z)/z² = (1/z²)(2).

Thus the indicial equation is

α² + (−2 − 1)α + 2 = 0
(α − 1)(α − 2) = 0.

The First Solution. The first solution will have the Frobenius form

w1 = z² Σ_{n=0}^∞ an(α1) zⁿ.

Substituting a Frobenius series into the differential equation,

z²w'' + (z² − 2z)w' + 2w = 0
Σ_{n=0}^∞ (n + α)(n + α − 1) an z^{n+α} + (z² − 2z) Σ_{n=0}^∞ (n + α) an z^{n+α−1} + 2 Σ_{n=0}^∞ an z^{n+α} = 0
[α² − 3α + 2] a0 + Σ_{n=1}^∞ [ (n + α)(n + α − 1) an + (n + α − 1) an−1 − 2(n + α) an + 2 an ] z^{n+α} = 0.

Equating powers of z,

[ (n + α)(n + α − 1) − 2(n + α) + 2 ] an = −(n + α − 1) an−1
an = −an−1/(n + α − 2).

Setting α = α1 = 2, the recurrence relation becomes

an(α1) = −an−1(α1)/n = a0 (−1)ⁿ/n!.

The first solution is

w1 = z² a0 Σ_{n=0}^∞ ((−1)ⁿ/n!) zⁿ = a0 z² e^{−z}.
The Second Solution. The equation for a1(α2) is

0 · a1(α2) = −a0.

Since the right hand side of this equation is not zero, the second solution will have the form

w2 = d−1 log z · w1 + z^{α2} Σ_{n=0}^∞ lim_{α→α2} d/dα [(α − α2) an(α)] zⁿ

First we will calculate d−1 as we defined it previously.

d−1 = −(1/a0) · (1/(2 − 1)) · a0 = −1.

The expression for an(α) is

an(α) = (−1)ⁿ a0 / ((α − 1)α(α + 1) · · · (α + n − 2)).

The first few an(α) are

a1(α) = −a0/(α − 1)
a2(α) = a0/(α(α − 1))
a3(α) = −a0/((α + 1)α(α − 1)).

We would like to calculate

dn = lim_{α→1} d/dα [(α − 1) an(α)].

The first few dn are

d0 = lim_{α→1} d/dα [(α − 1) a0] = a0
d1 = lim_{α→1} d/dα [(α − 1)(−a0/(α − 1))] = lim_{α→1} d/dα [−a0] = 0
d2 = lim_{α→1} d/dα [(α − 1) a0/(α(α − 1))] = lim_{α→1} d/dα [a0/α] = −a0
d3 = lim_{α→1} d/dα [(α − 1)(−a0/((α + 1)α(α − 1)))] = lim_{α→1} d/dα [−a0/((α + 1)α)] = (3/4) a0.
It will take a little work to find the general expression for dn. We will need the following relations.

Γ(n) = (n − 1)!, Γ'(z) = Γ(z)ψ(z), ψ(n) = −γ + Σ_{k=1}^{n−1} 1/k.

See the chapter on the Gamma function for explanations of these equations.

dn = lim_{α→1} d/dα [ (α − 1)(−1)ⁿ a0 / ((α − 1)α(α + 1) · · · (α + n − 2)) ]
= lim_{α→1} d/dα [ (−1)ⁿ a0 / (α(α + 1) · · · (α + n − 2)) ]
= lim_{α→1} d/dα [ (−1)ⁿ a0 Γ(α) / Γ(α + n − 1) ]
= (−1)ⁿ a0 lim_{α→1} [ Γ(α)ψ(α)/Γ(α + n − 1) − Γ(α)ψ(α + n − 1)/Γ(α + n − 1) ]
= (−1)ⁿ a0 lim_{α→1} Γ(α)[ψ(α) − ψ(α + n − 1)]/Γ(α + n − 1)
= (−1)ⁿ a0 (ψ(1) − ψ(n))/(n − 1)!
= ((−1)^{n+1} a0/(n − 1)!) Σ_{k=1}^{n−1} 1/k, for n ≥ 1.
Thus the second solution is

w2 = −w1 log z + z [ a0 + Σ_{n=2}^∞ ((−1)^{n+1} a0/(n − 1)!) ( Σ_{k=1}^{n−1} 1/k ) zⁿ ].

The general solution is

w = c1 z² e^{−z} − c2 z² e^{−z} log z + c2 z [ 1 + Σ_{n=2}^∞ ((−1)^{n+1}/(n − 1)!) ( Σ_{k=1}^{n−1} 1/k ) zⁿ ].
We see that even in problems that are chosen for their simplicity, the algebra involved in the
Frobenius method can be pretty involved.
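As a safeguard against the algebra, both solutions can be checked numerically; a Python sketch with a0 = 1 (the evaluation point, step size and truncation are arbitrary choices):

import math

def w1(z):
    return z * z * math.exp(-z)

def w2(z, terms=30):
    s = 1.0
    for n in range(2, terms):
        h = sum(1.0 / k for k in range(1, n))
        s += (-1)**(n + 1) / math.factorial(n - 1) * h * z**n
    return -w1(z) * math.log(z) + z * s

def residual(w, z, h=1e-5):
    # w'' + (1 - 2/z) w' + (2/z^2) w, via central differences
    wpp = (w(z + h) - 2 * w(z) + w(z - h)) / h**2
    wp = (w(z + h) - w(z - h)) / (2 * h)
    return wpp + (1 - 2 / z) * wp + 2 / z**2 * w(z)

print(residual(w1, 0.8), residual(w2, 0.8))   # both near zero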
Example 23.2.4 Consider a series expansion about the origin of the equation
w'' + ((1 − z)/z) w' − (1/z²) w = 0.

The indicial equation is

α² − 1 = 0
α = ±1.

Substituting a Frobenius series into the differential equation,

z² Σ_{n=0}^∞ (n + α)(n + α − 1) an z^{n+α−2} + (z − z²) Σ_{n=0}^∞ (n + α) an z^{n+α−1} − Σ_{n=0}^∞ an z^{n+α} = 0
Σ_{n=0}^∞ (n + α)(n + α − 1) an zⁿ + Σ_{n=0}^∞ (n + α) an zⁿ − Σ_{n=1}^∞ (n + α − 1) an−1 zⁿ − Σ_{n=0}^∞ an zⁿ = 0
[α(α − 1) + α − 1] a0 + Σ_{n=1}^∞ [ (n + α)(n + α − 1) an + (n + α) an − an − (n + α − 1) an−1 ] zⁿ = 0.

Equating powers of z to zero,

an(α) = an−1(α)/(n + α + 1).

We know that the first solution has the form

w1 = z Σ_{n=0}^∞ an zⁿ.

Setting α = 1 in the recurrence formula,

an = an−1/(n + 2) = 2a0/(n + 2)!.

Thus the first solution is

w1 = z Σ_{n=0}^∞ (2a0/(n + 2)!) zⁿ
= 2a0 (1/z) Σ_{n=0}^∞ z^{n+2}/(n + 2)!
= (2a0/z) ( Σ_{n=0}^∞ zⁿ/n! − 1 − z )
= (2a0/z)(e^z − 1 − z).

Now to find the second solution. Setting α = −1 in the recurrence formula,

an = an−1/n = a0/n!.

We see that in this case there is no trouble in defining a2(α2). The second solution is

w2 = (a0/z) Σ_{n=0}^∞ zⁿ/n! = (a0/z) e^z.

Thus we see that the general solution is

w = (c1/z)(e^z − 1 − z) + (c2/z) e^z

w = (d1/z) e^z + d2 (1 + 1/z).
23.3 Irregular Singular Points
If a point z0 of a differential equation is not ordinary or regular singular, then it is an irregular
singular point. At least one of the solutions at an irregular singular point will not be of the
Frobenius form. We will examine how to obtain series expansions about an irregular singular point
in the chapter on asymptotic expansions.
23.4 The Point at Infinity
If we want to determine the behavior of a function f(z) at infinity, we can make the transformation
ζ = 1/z and examine the point ζ = 0.
Example 23.4.1 Consider the behavior of f(z) = sin z at infinity. This is the same as considering
the point ζ = 0 of sin(1/ζ), which has the series expansion

sin(1/ζ) = Σ_{n=0}^∞ (−1)ⁿ/((2n + 1)! ζ^{2n+1}).

Thus we see that the point ζ = 0 is an essential singularity of sin(1/ζ). Hence sin z has an essential
singularity at z = ∞.

Example 23.4.2 Consider the behavior at infinity of z e^{1/z}. We make the transformation ζ = 1/z.

(1/ζ) e^ζ = (1/ζ) Σ_{n=0}^∞ ζⁿ/n!

Thus z e^{1/z} has a pole of order 1 at infinity.
In order to classify the point at infinity of a differential equation in w(z), we apply the transfor-
mation ζ = 1/z, u(ζ) = w(z). We write the derivatives with respect to z in terms of ζ.

z = 1/ζ
dz = −(1/ζ²) dζ
d/dz = −ζ² d/dζ
d²/dz² = (−ζ² d/dζ)(−ζ² d/dζ) = ζ⁴ d²/dζ² + 2ζ³ d/dζ

Now we apply the transformation to the differential equation.

w'' + p(z)w' + q(z)w = 0
ζ⁴u'' + 2ζ³u' + p(1/ζ)(−ζ²)u' + q(1/ζ)u = 0
u'' + ( 2/ζ − p(1/ζ)/ζ² ) u' + ( q(1/ζ)/ζ⁴ ) u = 0
Example 23.4.3 Classify the singular points of the differential equation

w'' + (1/z) w' + 2w = 0.

There is a regular singular point at z = 0. To examine the point at infinity we make the
transformation ζ = 1/z, u(ζ) = w(z).

u'' + ( 2/ζ − 1/ζ ) u' + ( 2/ζ⁴ ) u = 0
u'' + (1/ζ) u' + (2/ζ⁴) u = 0

Thus we see that the differential equation for w(z) has an irregular singular point at infinity.
23.5 Exercises
Exercise 23.1 (mathematica/ode/series/series.nb)
f(x) satisfies the Hermite equation
d²f/dx² − 2x df/dx + 2λf = 0.
Construct two linearly independent solutions of the equation as Taylor series about x = 0. For what
values of x do the series converge?
Show that for certain values of λ, called eigenvalues, one of the solutions is a polynomial, called
an eigenfunction. Calculate the first four eigenfunctions H0(x), H1(x), H2(x), H3(x), ordered by
degree.
Hint, Solution
Exercise 23.2
Consider the Legendre equation
(1 − x²)y'' − 2xy' + α(α + 1)y = 0.
1. Find two linearly independent solutions in the form of power series about x = 0.
2. Compute the radius of convergence of the series. Explain why it is possible to predict the
radius of convergence without actually deriving the series.
3. Show that if α = 2n, with n an integer and n ≥ 0, the series for one of the solutions reduces
to an even polynomial of degree 2n.
4. Show that if α = 2n+1, with n an integer and n ≥ 0, the series for one of the solutions reduces
to an odd polynomial of degree 2n + 1.
5. Show that the first 4 polynomial solutions Pn(x) (known as Legendre polynomials) ordered by
their degree and normalized so that Pn(1) = 1 are

P0 = 1, P1 = x, P2 = (1/2)(3x² − 1), P3 = (1/2)(5x³ − 3x).

6. Show that the Legendre equation can also be written as

((1 − x²)y')' = −α(α + 1)y.

Note that two Legendre polynomials Pn(x) and Pm(x) must satisfy this relation for α = n and
α = m respectively. By multiplying the first relation by Pm(x) and the second by Pn(x) and
integrating by parts show that Legendre polynomials satisfy the orthogonality relation

∫_{−1}^{1} Pn(x)Pm(x) dx = 0 if n ≠ m.

If n = m, it can be shown that the value of the integral is 2/(2n + 1). Verify this for the first
three polynomials (but you needn't prove it in general).
Hint, Solution
Exercise 23.3
Find the forms of two linearly independent series expansions about the point z = 0 for the differential
equation
w'' + (1/sin z) w' + ((1 − z)/z²) w = 0,
such that the series are real-valued on the positive real axis. Do not calculate the coefficients in the
expansions.
Hint, Solution
Exercise 23.4
Classify the singular points of the equation
w'' + w'/(z − 1) + 2w = 0.
Hint, Solution
Exercise 23.5
Find the series expansions about z = 0 for
w'' + (5/(4z)) w' + ((z − 1)/(8z²)) w = 0.
Hint, Solution
Exercise 23.6
Find the series expansions about z = 0 of the fundamental solutions of
w'' + zw' + w = 0.
Hint, Solution
Exercise 23.7
Find the series expansions about z = 0 of the two linearly independent solutions of
w'' + (1/(2z)) w' + (1/z) w = 0.
Hint, Solution
Exercise 23.8
Classify the singularity at infinity of the differential equation
w'' + (2/z + 3/z²) w' + (1/z²) w = 0.
Find the forms of the series solutions of the differential equation about infinity that are real-valued
when z is real-valued and positive. Do not calculate the coefficients in the expansions.
Hint, Solution
Exercise 23.9
Consider the second order differential equation
x
d2
y
dx2
+ (b − x)
dy
dx
− ay = 0,
where a, b are real constants.
1. Show that x = 0 is a regular singular point. Determine the location of any additional singular
points and classify them. Include the point at infinity.
2. Compute the indicial equation for the point x = 0.
3. By solving an appropriate recursion relation, show that one solution has the form
$$y_1(x) = 1 + \frac{a x}{b} + \frac{(a)_2\, x^2}{(b)_2\, 2!} + \cdots + \frac{(a)_n\, x^n}{(b)_n\, n!} + \cdots,$$
where the notation $(a)_n$ is defined by
$$(a)_n = a(a+1)(a+2)\cdots(a+n-1), \qquad (a)_0 = 1.$$
Assume throughout this problem that b ≠ n where n is a non-negative integer.
4. Show that when a = −m, where m is a non-negative integer, there are polynomial solutions to this equation. Compute the radius of convergence of the series above when a = −m. Verify that the result you get is in accord with the Frobenius theory.
5. Show that if b = n + 1 where n = 0, 1, 2, . . ., then the second solution of this equation has
logarithmic terms. Indicate the form of the second solution in this case. You need not compute
any coefficients.
Hint, Solution
Exercise 23.10
Consider the equation
$$x y'' + 2x y' + 6\, e^x y = 0.$$
Find the first three non-zero terms in each of two linearly independent series solutions about x = 0.
Hint, Solution
23.6 Hints
Hint 23.1
Hint 23.2
Hint 23.3
Hint 23.4
Hint 23.5
Hint 23.6
Hint 23.7
Hint 23.8
Hint 23.9
Hint 23.10
23.7 Solutions
Solution 23.1
f(x) is a Taylor series about x = 0.
$$f(x) = \sum_{n=0}^{\infty} a_n x^n$$
$$f'(x) = \sum_{n=1}^{\infty} n a_n x^{n-1} = \sum_{n=0}^{\infty} n a_n x^{n-1}$$
$$f''(x) = \sum_{n=2}^{\infty} n(n-1) a_n x^{n-2} = \sum_{n=0}^{\infty} (n+2)(n+1) a_{n+2} x^n$$
We substitute the Taylor series into the differential equation.
$$f''(x) - 2x f'(x) + 2\lambda f = 0$$
$$\sum_{n=0}^{\infty} (n+2)(n+1) a_{n+2} x^n - 2 \sum_{n=0}^{\infty} n a_n x^n + 2\lambda \sum_{n=0}^{\infty} a_n x^n = 0$$
Equating coefficients gives us a difference equation for a_n:
$$(n+2)(n+1) a_{n+2} - 2n a_n + 2\lambda a_n = 0$$
$$a_{n+2} = 2\,\frac{n - \lambda}{(n+1)(n+2)}\, a_n.$$
The first two coefficients, a0 and a1, are arbitrary. The remaining coefficients are determined by the recurrence relation. We will find the fundamental set of solutions at x = 0. That is, for the first solution we choose a0 = 1 and a1 = 0; for the second solution we choose a0 = 0, a1 = 1. The difference equation for y1 is
$$a_{n+2} = 2\,\frac{n - \lambda}{(n+1)(n+2)}\, a_n, \qquad a_0 = 1,\ a_1 = 0,$$
which has the solution
$$a_{2n} = \frac{2^n \prod_{k=1}^{n} \bigl(2(n-k) - \lambda\bigr)}{(2n)!}, \qquad a_{2n+1} = 0.$$
The difference equation for y2 is
$$a_{n+2} = 2\,\frac{n - \lambda}{(n+1)(n+2)}\, a_n, \qquad a_0 = 0,\ a_1 = 1,$$
which has the solution
$$a_{2n} = 0, \qquad a_{2n+1} = \frac{2^n \prod_{k=0}^{n-1} \bigl(2(n-k) - 1 - \lambda\bigr)}{(2n+1)!}.$$
A set of linearly independent solutions (in fact the fundamental set of solutions at x = 0) is
$$y_1(x) = \sum_{n=0}^{\infty} \frac{2^n \prod_{k=1}^{n} \bigl(2(n-k) - \lambda\bigr)}{(2n)!}\, x^{2n}, \qquad y_2(x) = \sum_{n=0}^{\infty} \frac{2^n \prod_{k=0}^{n-1} \bigl(2(n-k) - 1 - \lambda\bigr)}{(2n+1)!}\, x^{2n+1}.$$
Since the coefficient functions in the differential equation do not have any singularities in the finite complex plane, the radius of convergence of the series is infinite.
If λ = n is a non-negative even integer, then the first solution, y1, is a polynomial of degree n. If λ = n is a positive odd integer, then the second solution, y2, is a polynomial of degree n. For λ = 0, 1, 2, 3, we have
$$H_0(x) = 1, \qquad H_1(x) = x, \qquad H_2(x) = 1 - 2x^2, \qquad H_3(x) = x - \frac{2}{3} x^3.$$
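As a sanity check (ours, not part of the original text), the short sympy sketch below builds the Taylor solutions directly from the recurrence a_{n+2} = 2(n − λ)/((n+1)(n+2)) a_n and reproduces the four eigenfunctions; the helper name `hermite_series` is our own.

```python
# Sketch: generate the Taylor-series solutions of f'' - 2x f' + 2*lam*f = 0
# from the recurrence a_{n+2} = 2(n - lam)/((n+1)(n+2)) a_n.
import sympy as sp

x = sp.symbols('x')

def hermite_series(lam, a0, a1, N=8):
    # first N+2 Taylor coefficients from the recurrence
    a = [sp.Integer(0)] * (N + 2)
    a[0], a[1] = sp.Integer(a0), sp.Integer(a1)
    for n in range(N):
        a[n + 2] = sp.Rational(2 * (n - lam), (n + 1) * (n + 2)) * a[n]
    return sum(a[n] * x**n for n in range(N + 2))

for lam in range(4):
    # an even lam truncates the even (a0 = 1) solution, an odd lam the odd one
    a0, a1 = (1, 0) if lam % 2 == 0 else (0, 1)
    print(lam, sp.expand(hermite_series(lam, a0, a1)))
# prints: 1, x, 1 - 2*x**2, x - 2*x**3/3
```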
Solution 23.2
1. First we write the differential equation in the standard form.
$$(1 - x^2)\, y'' - 2x\, y' + \alpha(\alpha+1)\, y = 0 \tag{23.2}$$
$$y'' - \frac{2x}{1 - x^2}\, y' + \frac{\alpha(\alpha+1)}{1 - x^2}\, y = 0. \tag{23.3}$$
Since the coefficients of y' and y are analytic in a neighborhood of x = 0, we can find two Taylor series solutions about that point. We find the Taylor series for y and its derivatives.
$$y = \sum_{n=0}^{\infty} a_n x^n, \qquad y' = \sum_{n=1}^{\infty} n a_n x^{n-1}, \qquad y'' = \sum_{n=2}^{\infty} (n-1) n a_n x^{n-2} = \sum_{n=0}^{\infty} (n+1)(n+2) a_{n+2} x^n$$
Here we used index shifting to explicitly write the two forms that we will need for y''. Note that we can take the lower bound of summation to be n = 0 for all the sums above; the terms added by this operation are zero. We substitute the Taylor series into Equation 23.2.
$$\sum_{n=0}^{\infty} (n+1)(n+2) a_{n+2} x^n - \sum_{n=0}^{\infty} (n-1) n a_n x^n - 2 \sum_{n=0}^{\infty} n a_n x^n + \alpha(\alpha+1) \sum_{n=0}^{\infty} a_n x^n = 0$$
$$\sum_{n=0}^{\infty} \Bigl[ (n+1)(n+2) a_{n+2} - \bigl( (n-1)n + 2n - \alpha(\alpha+1) \bigr) a_n \Bigr] x^n = 0$$
We equate coefficients of x^n to obtain a recurrence relation.
$$(n+1)(n+2) a_{n+2} = \bigl( n(n+1) - \alpha(\alpha+1) \bigr) a_n$$
$$a_{n+2} = \frac{n(n+1) - \alpha(\alpha+1)}{(n+1)(n+2)}\, a_n, \qquad n \geq 0$$
We can solve this difference equation to determine the a_n's (a0 and a1 are arbitrary).
$$a_n = \begin{cases} \dfrac{a_0}{n!} \displaystyle\prod_{\substack{k=0 \\ k\ \text{even}}}^{n-2} \bigl( k(k+1) - \alpha(\alpha+1) \bigr), & n\ \text{even}, \\[2ex] \dfrac{a_1}{n!} \displaystyle\prod_{\substack{k=1 \\ k\ \text{odd}}}^{n-2} \bigl( k(k+1) - \alpha(\alpha+1) \bigr), & n\ \text{odd} \end{cases}$$
We will find the fundamental set of solutions at x = 0, that is the set {y1, y2} that satisfies
$$y_1(0) = 1, \quad y_1'(0) = 0, \qquad y_2(0) = 0, \quad y_2'(0) = 1.$$
For y1 we take a0 = 1 and a1 = 0; for y2 we take a0 = 0 and a1 = 1. The rest of the coefficients are determined from the recurrence relation.
$$y_1 = \sum_{\substack{n=0 \\ n\ \text{even}}}^{\infty} \Biggl[ \frac{1}{n!} \prod_{\substack{k=0 \\ k\ \text{even}}}^{n-2} \bigl( k(k+1) - \alpha(\alpha+1) \bigr) \Biggr] x^n, \qquad y_2 = \sum_{\substack{n=1 \\ n\ \text{odd}}}^{\infty} \Biggl[ \frac{1}{n!} \prod_{\substack{k=1 \\ k\ \text{odd}}}^{n-2} \bigl( k(k+1) - \alpha(\alpha+1) \bigr) \Biggr] x^n$$
2. We determine the radius of convergence of the series solutions with the ratio test.
$$\lim_{n\to\infty} \left| \frac{a_{n+2} x^{n+2}}{a_n x^n} \right| < 1$$
$$\lim_{n\to\infty} \left| \frac{n(n+1) - \alpha(\alpha+1)}{(n+1)(n+2)} \right| |x|^2 < 1$$
$$|x|^2 < 1$$
Thus we see that the radius of convergence of the series is 1. We knew that the radius of convergence would be at least one, because the nearest singularities of the coefficients of (23.3) occur at x = ±1, a distance of 1 from the origin. This implies that the solutions of the equation are analytic in the unit circle about x = 0. The radius of convergence of the Taylor series expansion of an analytic function is the distance to the nearest singularity.
3. If α = 2n then a_{2n+2} = 0 in our first solution. From the recurrence relation, we see that all subsequent coefficients are also zero. The solution becomes an even polynomial.
$$y_1 = \sum_{\substack{m=0 \\ m\ \text{even}}}^{2n} \Biggl[ \frac{1}{m!} \prod_{\substack{k=0 \\ k\ \text{even}}}^{m-2} \bigl( k(k+1) - \alpha(\alpha+1) \bigr) \Biggr] x^m$$
4. If α = 2n + 1 then a2n+3 = 0 in our second solution. From the recurrence relation, we see that
all subsequent coefficients are also zero. The solution becomes an odd polynomial.
y2 =
2n+1
m=1
odd m



1
m!
m−2
k=1
odd k
k(k + 1) − α(α + 1)


 xm
5. From our solutions above, the first four polynomials are
$$1, \qquad x, \qquad 1 - 3x^2, \qquad x - \frac{5}{3} x^3.$$
Figure 23.4: The First Four Legendre Polynomials
To obtain the Legendre polynomials we normalize these to have value unity at x = 1:
$$P_0 = 1, \qquad P_1 = x, \qquad P_2 = \tfrac{1}{2}\bigl(3x^2 - 1\bigr), \qquad P_3 = \tfrac{1}{2}\bigl(5x^3 - 3x\bigr).$$
These four Legendre polynomials are plotted in Figure 23.4.
6. We note that the first two terms in the Legendre equation form an exact derivative. Thus the Legendre equation can also be written as
$$\bigl( (1 - x^2)\, y' \bigr)' = -\alpha(\alpha+1)\, y.$$
P_n and P_m are solutions of the Legendre equation:
$$\bigl( (1 - x^2)\, P_n' \bigr)' = -n(n+1) P_n, \qquad \bigl( (1 - x^2)\, P_m' \bigr)' = -m(m+1) P_m. \tag{23.4}$$
We multiply the first relation of Equation 23.4 by P_m and integrate by parts.
$$\bigl( (1 - x^2)\, P_n' \bigr)' P_m = -n(n+1) P_n P_m$$
$$\int_{-1}^{1} \bigl( (1 - x^2)\, P_n' \bigr)' P_m \, dx = -n(n+1) \int_{-1}^{1} P_n P_m \, dx$$
$$\Bigl[ (1 - x^2)\, P_n' P_m \Bigr]_{-1}^{1} - \int_{-1}^{1} (1 - x^2)\, P_n' P_m' \, dx = -n(n+1) \int_{-1}^{1} P_n P_m \, dx$$
$$\int_{-1}^{1} (1 - x^2)\, P_n' P_m' \, dx = n(n+1) \int_{-1}^{1} P_n P_m \, dx$$
We multiply the second relation of Equation 23.4 by P_n and integrate by parts to obtain a different expression for $\int_{-1}^{1} (1 - x^2) P_m' P_n' \, dx$:
$$\int_{-1}^{1} (1 - x^2)\, P_m' P_n' \, dx = m(m+1) \int_{-1}^{1} P_m P_n \, dx$$
We equate the two expressions for $\int_{-1}^{1} (1 - x^2) P_m' P_n' \, dx$ to obtain an orthogonality relation:
$$\bigl( n(n+1) - m(m+1) \bigr) \int_{-1}^{1} P_n P_m \, dx = 0$$
$$\int_{-1}^{1} P_n(x) P_m(x) \, dx = 0 \quad \text{if } n \neq m.$$
We verify that for the first four polynomials the value of the integral is 2/(2n + 1) for n = m.
$$\int_{-1}^{1} P_0(x) P_0(x) \, dx = \int_{-1}^{1} 1 \, dx = 2$$
$$\int_{-1}^{1} P_1(x) P_1(x) \, dx = \int_{-1}^{1} x^2 \, dx = \left[ \frac{x^3}{3} \right]_{-1}^{1} = \frac{2}{3}$$
$$\int_{-1}^{1} P_2(x) P_2(x) \, dx = \int_{-1}^{1} \frac{1}{4} \bigl( 9x^4 - 6x^2 + 1 \bigr) \, dx = \frac{1}{4} \left[ \frac{9x^5}{5} - 2x^3 + x \right]_{-1}^{1} = \frac{2}{5}$$
$$\int_{-1}^{1} P_3(x) P_3(x) \, dx = \int_{-1}^{1} \frac{1}{4} \bigl( 25x^6 - 30x^4 + 9x^2 \bigr) \, dx = \frac{1}{4} \left[ \frac{25x^7}{7} - 6x^5 + 3x^3 \right]_{-1}^{1} = \frac{2}{7}$$
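The orthogonality relation and the 2/(2n + 1) normalization are easy to confirm symbolically; the sympy sketch below (ours, not from the text) checks all pairs n, m = 0, ..., 3, relying on sympy's `legendre`, which uses the same P_n(1) = 1 normalization as above.

```python
# Sketch: verify int_{-1}^{1} P_n P_m dx = 0 for n != m and 2/(2n+1) for n = m.
import sympy as sp

x = sp.symbols('x')
for n in range(4):
    for m in range(4):
        I = sp.integrate(sp.legendre(n, x) * sp.legendre(m, x), (x, -1, 1))
        expected = sp.Rational(2, 2 * n + 1) if n == m else 0
        assert I == expected, (n, m, I)
print("orthogonality relation verified for n, m = 0..3")
```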
Solution 23.3
The indicial equation for this problem is
$$\alpha^2 + 1 = 0.$$
Since the two roots α1 = i and α2 = −i are distinct and do not differ by an integer, there are two solutions in the Frobenius form,
$$w_1 = z^{i} \sum_{n=0}^{\infty} a_n z^n, \qquad w_2 = z^{-i} \sum_{n=0}^{\infty} b_n z^n.$$
However, these series are not real-valued on the positive real axis. Recalling that
$$z^{i} = e^{i \log z} = \cos(\log z) + i \sin(\log z) \quad \text{and} \quad z^{-i} = e^{-i \log z} = \cos(\log z) - i \sin(\log z),$$
we can write a new set of solutions that are real-valued on the positive real axis as linear combinations of w1 and w2:
$$u_1 = \frac{1}{2}(w_1 + w_2), \qquad u_2 = \frac{1}{2i}(w_1 - w_2)$$
$$u_1 = \cos(\log z) \sum_{n=0}^{\infty} c_n z^n, \qquad u_2 = \sin(\log z) \sum_{n=0}^{\infty} d_n z^n.$$
Solution 23.4
Consider the equation w'' + w'/(z − 1) + 2w = 0.
We see that there is a regular singular point at z = 1. All other finite values of z are ordinary points of the equation. To examine the point at infinity we introduce the transformation z = 1/t, w(z) = u(t). Writing the derivatives with respect to z in terms of t yields
$$\frac{d}{dz} = -t^2 \frac{d}{dt}, \qquad \frac{d^2}{dz^2} = t^4 \frac{d^2}{dt^2} + 2t^3 \frac{d}{dt}.$$
Substituting into the differential equation gives us
$$t^4 u'' + 2t^3 u' - \frac{t^2 u'}{1/t - 1} + 2u = 0$$
$$u'' + \left( \frac{2}{t} - \frac{1}{t(1 - t)} \right) u' + \frac{2}{t^4}\, u = 0.$$
Since t = 0 is an irregular singular point in the equation for u(t), z = ∞ is an irregular singular point in the equation for w(z).
Solution 23.5
Find the series expansions about z = 0 for
$$w'' + \frac{5}{4z}\, w' + \frac{z - 1}{8z^2}\, w = 0.$$
We see that z = 0 is a regular singular point of the equation. The indicial equation is
$$\alpha^2 + \frac{1}{4}\,\alpha - \frac{1}{8} = 0$$
$$\left( \alpha + \frac{1}{2} \right)\left( \alpha - \frac{1}{4} \right) = 0.$$
Since the roots are distinct and do not differ by an integer, there will be two solutions in the Frobenius form,
$$w_1 = z^{1/4} \sum_{n=0}^{\infty} a_n(\alpha_1) z^n, \qquad w_2 = z^{-1/2} \sum_{n=0}^{\infty} a_n(\alpha_2) z^n.$$
We multiply the differential equation by 8z² to put it in a better form. Substituting a Frobenius series into the differential equation,
$$8z^2 \sum_{n=0}^{\infty} (n+\alpha)(n+\alpha-1) a_n z^{n+\alpha-2} + 10z \sum_{n=0}^{\infty} (n+\alpha) a_n z^{n+\alpha-1} + (z - 1) \sum_{n=0}^{\infty} a_n z^{n+\alpha} = 0$$
$$8 \sum_{n=0}^{\infty} (n+\alpha)(n+\alpha-1) a_n z^n + 10 \sum_{n=0}^{\infty} (n+\alpha) a_n z^n + \sum_{n=1}^{\infty} a_{n-1} z^n - \sum_{n=0}^{\infty} a_n z^n = 0.$$
Equating coefficients of powers of z,
$$\bigl[ 8(n+\alpha)(n+\alpha-1) + 10(n+\alpha) - 1 \bigr] a_n = -a_{n-1}$$
$$a_n = -\frac{a_{n-1}}{8(n+\alpha)^2 + 2(n+\alpha) - 1}.$$
The First Solution. Setting α = 1/4 in the recurrence formula,
$$a_n(\alpha_1) = -\frac{a_{n-1}}{8(n + 1/4)^2 + 2(n + 1/4) - 1} = -\frac{a_{n-1}}{2n(4n + 3)}.$$
Thus the first solution is
$$w_1 = z^{1/4} \sum_{n=0}^{\infty} a_n(\alpha_1) z^n = a_0 z^{1/4} \left( 1 - \frac{1}{14}\, z + \frac{1}{616}\, z^2 + \cdots \right).$$
The Second Solution. Setting α = −1/2 in the recurrence formula,
$$a_n(\alpha_2) = -\frac{a_{n-1}}{8(n - 1/2)^2 + 2(n - 1/2) - 1} = -\frac{a_{n-1}}{2n(4n - 3)}.$$
Thus the second linearly independent solution is
$$w_2 = z^{-1/2} \sum_{n=0}^{\infty} a_n(\alpha_2) z^n = a_0 z^{-1/2} \left( 1 - \frac{1}{2}\, z + \frac{1}{40}\, z^2 + \cdots \right).$$
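The two recurrences are simple enough to iterate directly; the fragment below (our check, not from the text, with the helper `coeffs` ours) confirms the printed coefficients 1 − z/14 + z²/616 and 1 − z/2 + z²/40.

```python
# Sketch: iterate a_n = -a_{n-1}/(2n(4n+3)) and a_n = -a_{n-1}/(2n(4n-3)).
from fractions import Fraction

def coeffs(step, N=3):
    a = [Fraction(1)]
    for n in range(1, N):
        a.append(-a[-1] / step(n))
    return a

print(coeffs(lambda n: 2 * n * (4 * n + 3)))  # alpha =  1/4: [1, -1/14, 1/616]
print(coeffs(lambda n: 2 * n * (4 * n - 3)))  # alpha = -1/2: [1, -1/2, 1/40]
```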
Solution 23.6
We consider the series solutions of
$$w'' + z w' + w = 0.$$
We would like to find the expansions of the fundamental set of solutions about z = 0. Since z = 0 is a regular point (the coefficient functions are analytic there), we expand the solutions in Taylor series. Differentiating the series expansion for w(z),
$$w = \sum_{n=0}^{\infty} a_n z^n, \qquad w' = \sum_{n=1}^{\infty} n a_n z^{n-1}, \qquad w'' = \sum_{n=2}^{\infty} n(n-1) a_n z^{n-2} = \sum_{n=0}^{\infty} (n+2)(n+1) a_{n+2} z^n.$$
We may take the lower limit of summation to be zero without changing the sums. Substituting these expressions into the differential equation,
$$\sum_{n=0}^{\infty} (n+2)(n+1) a_{n+2} z^n + \sum_{n=0}^{\infty} n a_n z^n + \sum_{n=0}^{\infty} a_n z^n = 0$$
$$\sum_{n=0}^{\infty} \bigl[ (n+2)(n+1) a_{n+2} + (n+1) a_n \bigr] z^n = 0.$$
Equating the coefficient of the z^n term gives us
$$(n+2)(n+1) a_{n+2} + (n+1) a_n = 0, \qquad n \geq 0$$
$$a_{n+2} = -\frac{a_n}{n+2}, \qquad n \geq 0.$$
a0 and a1 are arbitrary. We determine the rest of the coefficients from the recurrence relation. We consider the cases for even and odd n separately.
$$a_{2n} = -\frac{a_{2n-2}}{2n} = \frac{a_{2n-4}}{(2n)(2n-2)} = \cdots = (-1)^n \frac{a_0}{(2n)(2n-2)\cdots 4 \cdot 2} = (-1)^n \frac{a_0}{\prod_{m=1}^{n} 2m}, \qquad n \geq 0$$
$$a_{2n+1} = -\frac{a_{2n-1}}{2n+1} = \frac{a_{2n-3}}{(2n+1)(2n-1)} = \cdots = (-1)^n \frac{a_1}{(2n+1)(2n-1)\cdots 5 \cdot 3} = (-1)^n \frac{a_1}{\prod_{m=1}^{n} (2m+1)}, \qquad n \geq 0$$
If {w1, w2} is the fundamental set of solutions, then the initial conditions demand that w1 = 1 + 0·z + ··· and w2 = 0 + z + ···. We see that w1 will have only even powers of z and w2 will have only odd powers of z.
$$w_1 = \sum_{n=0}^{\infty} \frac{(-1)^n}{\prod_{m=1}^{n} 2m}\, z^{2n}, \qquad w_2 = \sum_{n=0}^{\infty} \frac{(-1)^n}{\prod_{m=1}^{n} (2m+1)}\, z^{2n+1}$$
Since the coefficient functions in the differential equation are entire (analytic in the finite complex plane), the radius of convergence of these series solutions is infinite.
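A quick symbolic check (ours, not part of the text) substitutes truncations of w1 and w2 back into the equation; every coefficient below the truncation order cancels. It also confirms that the even series sums to e^{−z²/2}, an observation the text does not make explicitly but which follows from the coefficients (−1)ⁿ/(2ⁿ n!).

```python
# Sketch: check that the truncated series satisfy w'' + z w' + w = 0
# up to truncation error, and that w1 matches exp(-z^2/2).
import sympy as sp
from math import prod

z = sp.symbols('z')
N = 6
w1 = sum((-1)**n * z**(2 * n) / (2**n * sp.factorial(n)) for n in range(N))
w2 = sum((-1)**n * z**(2 * n + 1) / prod(2 * m + 1 for m in range(1, n + 1))
         for n in range(N))
for w in (w1, w2):
    resid = sp.expand(sp.diff(w, z, 2) + z * sp.diff(w, z) + w)
    # cancellation is term-by-term; only the truncation tail survives
    assert all(resid.coeff(z, k) == 0 for k in range(2 * N - 2))
assert sp.expand(w1 - sp.series(sp.exp(-z**2 / 2), z, 0, 2 * N - 1).removeO()) == 0
print("series verified through order", 2 * N - 3)
```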
Solution 23.7
$$w'' + \frac{1}{2z}\, w' + \frac{1}{z}\, w = 0.$$
We can find the indicial equation by substituting w = z^α + O(z^{α+1}) into the differential equation.
$$\alpha(\alpha - 1) z^{\alpha-2} + \frac{1}{2}\,\alpha z^{\alpha-2} + z^{\alpha-1} = O(z^{\alpha-1})$$
Equating the coefficient of the z^{α−2} term,
$$\alpha(\alpha - 1) + \frac{1}{2}\,\alpha = 0 \qquad \Rightarrow \qquad \alpha = 0,\ \frac{1}{2}.$$
Since the roots are distinct and do not differ by an integer, the solutions are of the form
$$w_1 = \sum_{n=0}^{\infty} a_n z^n, \qquad w_2 = z^{1/2} \sum_{n=0}^{\infty} b_n z^n.$$
Differentiating the series for the first solution,
$$w_1 = \sum_{n=0}^{\infty} a_n z^n, \qquad w_1' = \sum_{n=1}^{\infty} n a_n z^{n-1} = \sum_{n=0}^{\infty} (n+1) a_{n+1} z^n, \qquad w_1'' = \sum_{n=1}^{\infty} n(n+1) a_{n+1} z^{n-1}.$$
Substituting this series into the differential equation,
$$\sum_{n=1}^{\infty} n(n+1) a_{n+1} z^{n-1} + \frac{1}{2z} \sum_{n=0}^{\infty} (n+1) a_{n+1} z^n + \frac{1}{z} \sum_{n=0}^{\infty} a_n z^n = 0$$
$$\sum_{n=1}^{\infty} \left[ n(n+1) a_{n+1} + \frac{1}{2}(n+1) a_{n+1} + a_n \right] z^{n-1} + \frac{1}{2z}\, a_1 + \frac{1}{z}\, a_0 = 0.$$
Equating powers of z,
$$z^{-1}: \quad \frac{a_1}{2} + a_0 = 0 \quad \Rightarrow \quad a_1 = -2 a_0$$
$$z^{n-1}: \quad \left( n + \frac{1}{2} \right)(n+1) a_{n+1} + a_n = 0 \quad \Rightarrow \quad a_{n+1} = -\frac{a_n}{(n + 1/2)(n+1)}.$$
We can combine the above two equations for a_n:
$$a_{n+1} = -\frac{a_n}{(n + 1/2)(n+1)}, \qquad \text{for } n \geq 0.$$
Solving this difference equation for a_n,
$$a_n = a_0 \prod_{j=0}^{n-1} \frac{-1}{(j + 1/2)(j + 1)} = a_0\, \frac{(-1)^n}{n!} \prod_{j=0}^{n-1} \frac{1}{j + 1/2}.$$
Now let's find the second solution. Differentiating w2,
$$w_2' = \sum_{n=0}^{\infty} (n + 1/2) b_n z^{n - 1/2}, \qquad w_2'' = \sum_{n=0}^{\infty} (n + 1/2)(n - 1/2) b_n z^{n - 3/2}.$$
Substituting these expansions into the differential equation,
$$\sum_{n=0}^{\infty} (n + 1/2)(n - 1/2) b_n z^{n - 3/2} + \frac{1}{2} \sum_{n=0}^{\infty} (n + 1/2) b_n z^{n - 3/2} + \sum_{n=1}^{\infty} b_{n-1} z^{n - 3/2} = 0.$$
Equating the coefficient of the z^{−3/2} term,
$$\left( \frac{1}{2} \right)\left( -\frac{1}{2} \right) b_0 + \frac{1}{2} \cdot \frac{1}{2}\, b_0 = 0,$$
we see that b0 is arbitrary. Equating the other coefficients of powers of z,
$$(n + 1/2)(n - 1/2)\, b_n + \frac{1}{2}(n + 1/2)\, b_n + b_{n-1} = 0$$
$$b_n = -\frac{b_{n-1}}{n(n + 1/2)}$$
Calculating the b_n's,
$$b_1 = -\frac{b_0}{1 \cdot \frac{3}{2}}, \qquad b_2 = \frac{b_0}{1 \cdot 2 \cdot \frac{3}{2} \cdot \frac{5}{2}}, \qquad \ldots, \qquad b_n = \frac{(-1)^n 2^n\, b_0}{n!\ 3 \cdot 5 \cdots (2n+1)}.$$
Thus the second solution is
$$w_2 = b_0 z^{1/2} \sum_{n=0}^{\infty} \frac{(-1)^n 2^n z^n}{n!\ 3 \cdot 5 \cdots (2n+1)}.$$
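Both recurrences and the closed form for b_n can be checked in a few lines; the snippet below (ours, not from the text) iterates them with exact rationals.

```python
# Sketch: iterate a_{n+1} = -a_n/((n+1/2)(n+1)) and b_n = -b_{n-1}/(n(n+1/2)),
# then compare b_n with the closed form (-1)^n 2^n / (n! * 3*5*...*(2n+1)).
from fractions import Fraction
from math import prod

a, b = [Fraction(1)], [Fraction(1)]
for n in range(6):
    a.append(-a[-1] * 2 / ((2 * n + 1) * (n + 1)))   # (n + 1/2)(n + 1) = (2n+1)(n+1)/2
for n in range(1, 7):
    b.append(-b[-1] * 2 / (n * (2 * n + 1)))          # n(n + 1/2) = n(2n+1)/2

for n in range(7):
    closed = Fraction((-1)**n * 2**n,
                      prod(range(1, n + 1)) * prod(2 * k + 1 for k in range(1, n + 1)))
    assert b[n] == closed
print("a:", a[:4], " b:", b[:4])   # a starts 1, -2, ... as derived above
```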
Solution 23.8
$$w'' + \left( \frac{2}{z} + \frac{3}{z^2} \right) w' + \frac{1}{z^2}\, w = 0.$$
In order to analyze the behavior at infinity we make the change of variables t = 1/z, u(t) = w(z) and examine the point t = 0. Writing the derivatives with respect to z in terms of t yields
$$z = \frac{1}{t}, \qquad dz = -\frac{1}{t^2}\, dt, \qquad \frac{d}{dz} = -t^2 \frac{d}{dt}, \qquad \frac{d^2}{dz^2} = -t^2 \frac{d}{dt}\left( -t^2 \frac{d}{dt} \right) = t^4 \frac{d^2}{dt^2} + 2t^3 \frac{d}{dt}.$$
The equation for u is then
$$t^4 u'' + 2t^3 u' + (2t + 3t^2)(-t^2)\, u' + t^2 u = 0$$
$$u'' - 3u' + \frac{1}{t^2}\, u = 0.$$
We see that t = 0 is a regular singular point. To find the indicial equation, we substitute u = t^α + O(t^{α+1}) into the differential equation.
$$\alpha(\alpha - 1) t^{\alpha-2} - 3\alpha t^{\alpha-1} + t^{\alpha-2} = O(t^{\alpha-1})$$
Equating the coefficients of the t^{α−2} terms,
$$\alpha(\alpha - 1) + 1 = 0 \qquad \Rightarrow \qquad \alpha = \frac{1 \pm i\sqrt{3}}{2}.$$
Since the roots of the indicial equation are distinct and do not differ by an integer, a set of solutions has the form
$$t^{(1 + i\sqrt{3})/2} \sum_{n=0}^{\infty} a_n t^n, \qquad t^{(1 - i\sqrt{3})/2} \sum_{n=0}^{\infty} b_n t^n.$$
Noting that
$$t^{(1 + i\sqrt{3})/2} = t^{1/2} \exp\left( \frac{i\sqrt{3}}{2} \log t \right) \quad \text{and} \quad t^{(1 - i\sqrt{3})/2} = t^{1/2} \exp\left( -\frac{i\sqrt{3}}{2} \log t \right),$$
we can take the sum and difference of the above solutions to obtain the form
$$u_1 = t^{1/2} \cos\left( \frac{\sqrt{3}}{2} \log t \right) \sum_{n=0}^{\infty} a_n t^n, \qquad u_2 = t^{1/2} \sin\left( \frac{\sqrt{3}}{2} \log t \right) \sum_{n=0}^{\infty} b_n t^n.$$
Putting the answer in terms of z, we have the form of the two Frobenius expansions about infinity:
$$w_1 = z^{-1/2} \cos\left( \frac{\sqrt{3}}{2} \log z \right) \sum_{n=0}^{\infty} \frac{a_n}{z^n}, \qquad w_2 = z^{-1/2} \sin\left( \frac{\sqrt{3}}{2} \log z \right) \sum_{n=0}^{\infty} \frac{b_n}{z^n}.$$
Solution 23.9
1. We write the equation in the standard form.
$$y'' + \frac{b - x}{x}\, y' - \frac{a}{x}\, y = 0$$
Since (b − x)/x has no worse than a first order pole and a/x has no worse than a second order pole at x = 0, that is a regular singular point. Since the coefficient functions have no other singularities in the finite complex plane, all the other points in the finite complex plane are regular points.
Now to examine the point at infinity. We make the change of variables u(ξ) = y(x), ξ = 1/x.
$$y' = \frac{d\xi}{dx}\,\frac{du}{d\xi} = -\frac{1}{x^2}\, u' = -\xi^2 u'$$
$$y'' = -\xi^2 \frac{d}{d\xi}\left( -\xi^2 \frac{d}{d\xi} \right) u = \xi^4 u'' + 2\xi^3 u'$$
The differential equation becomes
$$x y'' + (b - x) y' - a y = 0$$
$$\frac{1}{\xi}\left( \xi^4 u'' + 2\xi^3 u' \right) + \left( b - \frac{1}{\xi} \right)\left( -\xi^2 u' \right) - a u = 0$$
$$\xi^3 u'' + \bigl( (2 - b)\xi^2 + \xi \bigr) u' - a u = 0$$
$$u'' + \left( \frac{2 - b}{\xi} + \frac{1}{\xi^2} \right) u' - \frac{a}{\xi^3}\, u = 0$$
Since this equation has an irregular singular point at ξ = 0, the equation for y(x) has an irregular singular point at infinity.
2. The coefficient functions are
$$p(x) \equiv \frac{1}{x} \sum_{n=0}^{\infty} p_n x^n = \frac{1}{x}(b - x), \qquad q(x) \equiv \frac{1}{x^2} \sum_{n=0}^{\infty} q_n x^n = \frac{1}{x^2}(0 - a x).$$
The indicial equation is
$$\alpha^2 + (p_0 - 1)\alpha + q_0 = 0$$
$$\alpha^2 + (b - 1)\alpha + 0 = 0$$
$$\alpha(\alpha + b - 1) = 0.$$
3. Since one of the roots of the indicial equation is zero, and the other root is not a negative integer, one of the solutions of the differential equation is a Taylor series.
$$y_1 = \sum_{k=0}^{\infty} c_k x^k$$
$$y_1' = \sum_{k=1}^{\infty} k c_k x^{k-1} = \sum_{k=0}^{\infty} (k+1) c_{k+1} x^k = \sum_{k=0}^{\infty} k c_k x^{k-1}$$
$$y_1'' = \sum_{k=2}^{\infty} k(k-1) c_k x^{k-2} = \sum_{k=1}^{\infty} (k+1) k c_{k+1} x^{k-1} = \sum_{k=0}^{\infty} (k+1) k c_{k+1} x^{k-1}$$
We substitute the Taylor series into the differential equation.
$$x y'' + (b - x) y' - a y = 0$$
$$\sum_{k=0}^{\infty} (k+1) k c_{k+1} x^k + b \sum_{k=0}^{\infty} (k+1) c_{k+1} x^k - \sum_{k=0}^{\infty} k c_k x^k - a \sum_{k=0}^{\infty} c_k x^k = 0$$
We equate coefficients to determine a recurrence relation for the coefficients.
$$(k+1) k\, c_{k+1} + b(k+1)\, c_{k+1} - k c_k - a c_k = 0$$
$$c_{k+1} = \frac{k + a}{(k+1)(k + b)}\, c_k$$
For c0 = 1, the recurrence relation has the solution
$$c_k = \frac{(a)_k}{(b)_k\, k!}.$$
Thus one solution is
$$y_1(x) = \sum_{k=0}^{\infty} \frac{(a)_k}{(b)_k\, k!}\, x^k.$$
(A short symbolic check of this series appears after this solution.)
4. If a = −m, where m is a non-negative integer, then (a)_k = 0 for k > m. This makes y1 a polynomial:
$$y_1(x) = \sum_{k=0}^{m} \frac{(a)_k}{(b)_k\, k!}\, x^k.$$
Since the series terminates, its radius of convergence is infinite. This is in accord with the Frobenius theory, which guarantees convergence at least out to the nearest singularity of the coefficient functions; here there are no other singularities in the finite complex plane.
5. If b = n + 1, where n = 0, 1, 2, . . ., the indicial equation is
$$\alpha(\alpha + n) = 0.$$
For the case n = 0, the indicial equation has a double root at zero. Thus the solutions have the form
$$y_1(x) = \sum_{k=0}^{m} \frac{(a)_k}{(b)_k\, k!}\, x^k, \qquad y_2(x) = y_1(x) \log x + \sum_{k=0}^{\infty} d_k x^k.$$
For the case n > 0 the roots of the indicial equation differ by an integer. The solutions have the form
$$y_1(x) = \sum_{k=0}^{m} \frac{(a)_k}{(b)_k\, k!}\, x^k, \qquad y_2(x) = d_{-1}\, y_1(x) \log x + x^{-n} \sum_{k=0}^{\infty} d_k x^k.$$
The form of the solution for y2 can be substituted into the equation to determine the coefficients d_k.
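The series in part 3 is straightforward to verify symbolically. The sympy sketch below (ours; `sp.rf` is sympy's rising factorial, matching the Pochhammer notation (a)_k) substitutes a truncation of y1 into the equation and checks that every coefficient below the truncation order cancels.

```python
# Sketch: check that y1 = sum_k (a)_k/((b)_k k!) x^k satisfies
# x y'' + (b - x) y' - a y = 0 through the truncation order.
import sympy as sp

x, a, b = sp.symbols('x a b')
N = 8
y1 = sum(sp.rf(a, k) / (sp.rf(b, k) * sp.factorial(k)) * x**k for k in range(N))
resid = sp.expand(x * sp.diff(y1, x, 2) + (b - x) * sp.diff(y1, x) - a * y1)
assert all(sp.simplify(resid.coeff(x, k)) == 0 for k in range(N - 1))
print("recurrence verified symbolically through order", N - 2)
```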
Solution 23.10
We write the equation in the standard form.
$$x y'' + 2x y' + 6\, e^x y = 0$$
$$y'' + 2 y' + \frac{6\, e^x}{x}\, y = 0$$
We see that x = 0 is a regular singular point. The indicial equation is
$$\alpha^2 - \alpha = 0 \qquad \Rightarrow \qquad \alpha = 0,\ 1.$$
The first solution has the Frobenius form
$$y_1 = x + a_2 x^2 + a_3 x^3 + O(x^4).$$
We substitute y1 into the differential equation and equate coefficients of powers of x.
$$x y'' + 2x y' + 6\, e^x y = 0$$
$$x \bigl( 2a_2 + 6a_3 x + O(x^2) \bigr) + 2x \bigl( 1 + 2a_2 x + 3a_3 x^2 + O(x^3) \bigr) + 6 \bigl( 1 + x + x^2/2 + O(x^3) \bigr)\bigl( x + a_2 x^2 + a_3 x^3 + O(x^4) \bigr) = 0$$
$$\bigl( 2a_2 x + 6a_3 x^2 \bigr) + \bigl( 2x + 4a_2 x^2 \bigr) + \bigl( 6x + 6(1 + a_2) x^2 \bigr) = O(x^3)$$
$$a_2 = -4, \qquad a_3 = \frac{17}{3}$$
$$y_1 = x - 4x^2 + \frac{17}{3}\, x^3 + O(x^4)$$
Now we see if the second solution has the Frobenius form. There is no a1 x term because y2 is only determined up to an additive constant times y1.
$$y_2 = 1 + O(x^2)$$
We substitute y2 into the differential equation and equate coefficients of powers of x.
$$x y'' + 2x y' + 6\, e^x y = 0$$
$$O(x) + O(x) + 6 \bigl( 1 + O(x) \bigr)\bigl( 1 + O(x^2) \bigr) = 0$$
$$6 = O(x)$$
The substitution y2 = 1 + O(x) has yielded a contradiction. Since the second solution is not of the Frobenius form, it has the following form:
$$y_2 = y_1 \ln(x) + a_0 + a_2 x^2 + O(x^3)$$
The first three terms in the solution are
$$y_2 = a_0 + x \ln x - 4x^2 \ln x + O(x^2).$$
We calculate the derivatives of y2.
$$y_2' = \ln(x) + O(1)$$
$$y_2'' = \frac{1}{x} + O(\ln x)$$
We substitute y2 into the differential equation and equate coefficients.
$$x y'' + 2x y' + 6\, e^x y = 0$$
$$\bigl( 1 + O(x \ln x) \bigr) + 2 \bigl( O(x \ln x) \bigr) + 6 \bigl( a_0 + O(x \ln x) \bigr) = 0$$
$$1 + 6 a_0 = 0$$
$$y_2 = -\frac{1}{6} + x \ln x - 4x^2 \ln x + O(x^2)$$
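The coefficients of y1 are quick to confirm; the sympy fragment below (ours, not from the text) substitutes y1 = x − 4x² + (17/3)x³ into the equation and checks that no terms below x³ survive.

```python
# Sketch: verify that y1 = x - 4x^2 + (17/3)x^3 satisfies
# x y'' + 2x y' + 6 e^x y = O(x^3).
import sympy as sp

x = sp.symbols('x')
y1 = x - 4 * x**2 + sp.Rational(17, 3) * x**3
resid = x * sp.diff(y1, x, 2) + 2 * x * sp.diff(y1, x) + 6 * sp.exp(x) * y1
assert sp.series(resid, x, 0, 3).removeO() == 0  # the O(x) and O(x^2) terms cancel
print(sp.series(resid, x, 0, 4))                 # leftover starts at x^3
```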
23.8 Quiz
Problem 23.1
Write the definition of convergence of the series $\sum_{n=1}^{\infty} a_n$.
Solution
Problem 23.2
What is the Cauchy convergence criterion for series?
Solution
Problem 23.3
Define absolute convergence and uniform convergence. What is the relationship between the two?
Solution
Problem 23.4
Write the geometric series and the function to which it converges. For what values of the variable
does the series converge?
Solution
Problem 23.5
For what real values of a does the series $\sum_{n=1}^{\infty} n^a$ converge?
Solution
Problem 23.6
State the ratio and root convergence tests.
Solution
Problem 23.7
State the integral convergence test.
Solution
23.9 Quiz Solutions
Solution 23.1
The series $\sum_{n=1}^{\infty} a_n$ converges if the sequence of partial sums, $S_N = \sum_{n=1}^{N} a_n$, converges. That is,
$$\lim_{N\to\infty} S_N = \lim_{N\to\infty} \sum_{n=1}^{N} a_n = \text{constant}.$$
Solution 23.2
A series converges if and only if for any ε > 0 there exists an N such that |S_n − S_m| < ε for all n, m > N.
Solution 23.3
The series $\sum_{n=1}^{\infty} a_n$ converges absolutely if $\sum_{n=1}^{\infty} |a_n|$ converges. If the rate of convergence of $\sum_{n=1}^{\infty} a_n(z)$ is independent of z, then the series is uniformly convergent. The series is uniformly convergent in a domain if for any given ε > 0 there exists an N, independent of z, such that
$$|f(z) - S_N(z)| = \left| f(z) - \sum_{n=1}^{N} a_n(z) \right| < \varepsilon$$
for all z in the domain.
There is no relationship between absolute convergence and uniform convergence.
Solution 23.4
$$\frac{1}{1 - z} = \sum_{n=0}^{\infty} z^n \quad \text{for } |z| < 1.$$
Solution 23.5
The series converges for a < −1.
Solution 23.6
The series $\sum_{n=1}^{\infty} a_n$ converges absolutely if
$$\lim_{n\to\infty} \left| \frac{a_{n+1}}{a_n} \right| < 1.$$
If the limit is greater than unity, then the series diverges. If the limit is unity, the test fails.
The series $\sum_{n=1}^{\infty} a_n$ converges absolutely if
$$\lim_{n\to\infty} |a_n|^{1/n} < 1.$$
If the limit is greater than unity, then the series diverges. If the limit is unity, the test fails.
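As a concrete illustration (ours, not part of the quiz), both tests applied to $\sum_{n=1}^{\infty} n/2^n$ give the limit 1/2 < 1, so that series converges absolutely; sympy computes the limits directly.

```python
# Sketch: ratio and root tests for a_n = n/2^n; both limits are 1/2.
import sympy as sp

n = sp.symbols('n', positive=True)
a = n / 2**n
print(sp.limit(a.subs(n, n + 1) / a, n, sp.oo))  # ratio test -> 1/2
print(sp.limit(a**(1 / n), n, sp.oo))            # root test  -> 1/2
```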
Solution 23.7
If the coefficients a_n of a series $\sum_{n=1}^{\infty} a_n$ are monotonically decreasing and can be extended to a monotonically decreasing function of the continuous variable x,
$$a(x) = a_n \quad \text{for integer } x = n,$$
then the sum converges or diverges with the integral
$$\int_{1}^{\infty} a(x)\, dx.$$
Chapter 24
Asymptotic Expansions
The more you sweat in practice, the less you bleed in battle.
-Navy Seal Saying
24.1 Asymptotic Relations
The ≪ and ∼ symbols. First we will introduce two new symbols used in asymptotic relations.
$$f(x) \ll g(x) \quad \text{as } x \to x_0,$$
is read, "f(x) is much smaller than g(x) as x tends to x0". This means
$$\lim_{x \to x_0} \frac{f(x)}{g(x)} = 0.$$
The notation
$$f(x) \sim g(x) \quad \text{as } x \to x_0,$$
is read "f(x) is asymptotic to g(x) as x tends to x0", which means
$$\lim_{x \to x_0} \frac{f(x)}{g(x)} = 1.$$
A few simple examples are
• −e^x ≪ x as x → +∞
• sin x ∼ x as x → 0
• 1/x ≪ 1 as x → +∞
• e^{−1/x} ≪ x^{−n} as x → 0⁺ for all n
An equivalent definition of f(x) ∼ g(x) as x → x0 is
$$f(x) - g(x) \ll g(x) \quad \text{as } x \to x_0.$$
Note that it does not make sense to say that a function f(x) is asymptotic to zero. Using the above definition this would imply
$$f(x) \ll 0 \quad \text{as } x \to x_0.$$
If you encounter an expression like f(x) + g(x) ∼ 0, take this to mean f(x) ∼ −g(x).
The Big O and Little o Notation. If |f(x)| ≤ m|g(x)| for some constant m in some neighborhood of the point x = x0, then we say that
$$f(x) = O(g(x)) \quad \text{as } x \to x_0.$$
We read this as "f is big O of g as x goes to x0". If g(x) does not vanish, an equivalent definition is that f(x)/g(x) is bounded as x → x0.
If for any given positive δ there exists a neighborhood of x = x0 in which |f(x)| ≤ δ|g(x)|, then
$$f(x) = o(g(x)) \quad \text{as } x \to x_0.$$
This is read, "f is little o of g as x goes to x0."
For a few examples of the use of this notation,
• e^{−x} = o(x^{−n}) as x → ∞ for any n.
• sin x = O(x) as x → 0.
• cos x − 1 = o(1) as x → 0.
• log x = o(x^α) as x → +∞ for any positive α.
Operations on Asymptotic Relations. You can perform the ordinary arithmetic operations on
asymptotic relations. Addition, multiplication, and division are valid.
You can always integrate an asymptotic relation. Integration is a smoothing operation. However,
it is necessary to exercise some care.
Example 24.1.1 Consider
$$f'(x) \sim \frac{1}{x^2} \quad \text{as } x \to \infty.$$
This does not imply that
$$f(x) \sim \frac{-1}{x} \quad \text{as } x \to \infty.$$
We have forgotten the constant of integration. Integrating the asymptotic relation for f'(x) yields
$$f(x) \sim \frac{-1}{x} + c \quad \text{as } x \to \infty.$$
If c is nonzero then
$$f(x) \sim c \quad \text{as } x \to \infty.$$
It is not always valid to differentiate an asymptotic relation.
Example 24.1.2 Consider f(x) = 1/x + (1/x²) sin(x³). Then
$$f(x) \sim \frac{1}{x} \quad \text{as } x \to \infty.$$
Differentiating this relation yields
$$f'(x) \sim -\frac{1}{x^2} \quad \text{as } x \to \infty.$$
However, this is not true, since
$$f'(x) = -\frac{1}{x^2} - \frac{2}{x^3} \sin(x^3) + 3 \cos(x^3) \not\sim -\frac{1}{x^2} \quad \text{as } x \to \infty.$$
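A direct numeric check (ours, not from the text; the helper `fprime` is our own) makes the failure vivid: if f'(x) ∼ −1/x² held, then x² f'(x) would tend to −1, but the 3 cos(x³) term makes x² f'(x) swing over values of size x².

```python
# Sketch: x^2 f'(x) does not tend to -1; the oscillatory term dominates.
import math

def fprime(x):
    return -1 / x**2 - 2 * math.sin(x**3) / x**3 + 3 * math.cos(x**3)

for x in [10.0, 100.0, 1000.0]:
    print(x, fprime(x) * x**2)   # O(x^2)-sized oscillations, not -> -1
```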
The Controlling Factor. The controlling factor is the most rapidly varying factor in an asymptotic relation. Consider a function f(x) that is asymptotic to x² e^x as x goes to infinity. The controlling factor is e^x. For a few examples of this,
• x log x has the controlling factor x as x → ∞.
• x^{−2} e^{1/x} has the controlling factor e^{1/x} as x → 0.
• x^{−1} sin x has the controlling factor sin x as x → ∞.
The Leading Behavior. Consider a function that is asymptotic to a sum of terms,
$$f(x) \sim a_0(x) + a_1(x) + a_2(x) + \cdots \quad \text{as } x \to x_0,$$
where
$$a_0(x) \gg a_1(x) \gg a_2(x) \gg \cdots \quad \text{as } x \to x_0.$$
The first term in the sum is the leading order behavior. For a few examples,
• For sin x ∼ x − x³/6 + x⁵/120 − ··· as x → 0, the leading order behavior is x.
• For f(x) ∼ e^x (1 − 1/x + 1/x² − ···) as x → ∞, the leading order behavior is e^x.
24.2 Leading Order Behavior of Differential Equations
It is often useful to know the leading order behavior of the solutions to a differential equation. If we are considering a regular point or a regular singular point, the approach is straightforward: we simply use a Taylor expansion or the Frobenius method. However, if we are considering an irregular singular point, w
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers
Introduction to methods of applied mathematics or Advanced Mathematical Methods for Scientist and Engineers


Introduction to Methods of Applied Mathematics or Advanced Mathematical Methods for Scientists and Engineers

3.8.4 Implicit Differentiation . . . . . . . . . . . . . . . . . . . . 54
3.8.5 Maxima and Minima . . . . . . . . . . . . . . . . . . . . 55
3.8.6 Mean Value Theorems . . . . . . . . . . . . . . . . . . . . 55
3.8.7 L’Hospital’s Rule . . . . . . . . . . . . . . . . . . . . 55
3.9 Hints . . . . . . . . . . . . . . . . . . . . 57
3.10 Solutions . . . . . . . . . . . . . . . . . . . . 60
3.11 Quiz . . . . . . . . . . . . . . . . . . . . 72
3.12 Quiz Solutions . . . . . . . . . . . . . . . . . . . . 73
4 Integral Calculus 75
4.1 The Indefinite Integral . . . . . . . . . . . . . . . . . . . . 75
4.2 The Definite Integral . . . . . . . . . . . . . . . . . . . . 78
4.2.1 Definition . . . . . . . . . . . . . . . . . . . . 78
4.2.2 Properties . . . . . . . . . . . . . . . . . . . . 79
4.3 The Fundamental Theorem of Integral Calculus . . . . . . . . . . . . . . . . . . . . 80
4.4 Techniques of Integration . . . . . . . . . . . . . . . . . . . . 81
4.4.1 Partial Fractions . . . . . . . . . . . . . . . . . . . . 81
4.5 Improper Integrals . . . . . . . . . . . . . . . . . . . . 83
4.6 Exercises . . . . . . . . . . . . . . . . . . . . 85
4.6.1 The Indefinite Integral . . . . . . . . . . . . . . . . . . . . 85
4.6.2 The Definite Integral . . . . . . . . . . . . . . . . . . . . 85
4.6.3 The Fundamental Theorem of Integration . . . . . . . . . . . . . . . . . . . . 86
4.6.4 Techniques of Integration . . . . . . . . . . . . . . . . . . . . 86
4.6.5 Improper Integrals . . . . . . . . . . . . . . . . . . . . 86
4.7 Hints . . . . . . . . . . . . . . . . . . . . 88
4.8 Solutions . . . . . . . . . . . . . . . . . . . . 90
4.9 Quiz . . . . . . . . . . . . . . . . . . . . 96
4.10 Quiz Solutions . . . . . . . . . . . . . . . . . . . . 97
5 Vector Calculus 99
5.1 Vector Functions . . . . . . . . . . . . . . . . . . . . 99
5.2 Gradient, Divergence and Curl . . . . . . . . . . . . . . . . . . . . 99
5.3 Exercises . . . . . . . . . . . . . . . . . . . . 105
5.4 Hints . . . . . . . . . . . . . . . . . . . . 107
5.5 Solutions . . . . . . . . . . . . . . . . . . . . 108
5.6 Quiz . . . . . . . . . . . . . . . . . . . . 114
5.7 Quiz Solutions . . . . . . . . . . . . . . . . . . . . 115
III Functions of a Complex Variable 117
6 Complex Numbers 119
6.1 Complex Numbers . . . . . . . . . . . . . . . . . . . . 119
6.2 The Complex Plane . . . . . . . . . . . . . . . . . . . . 121
6.3 Polar Form . . . . . . . . . . . . . . . . . . . . 124
6.4 Arithmetic and Vectors . . . . . . . . . . . . . . . . . . . . 126
6.5 Integer Exponents . . . . . . . . . . . . . . . . . . . . 127
6.6 Rational Exponents . . . . . . . . . . . . . . . . . . . . 129
6.7 Exercises . . . . . . . . . . . . . . . . . . . . 131
6.8 Hints . . . . . . . . . . . . . . . . . . . . 135
6.9 Solutions . . . . . . . . . . . . . . . . . . . . 137
7 Functions of a Complex Variable 153
7.1 Curves and Regions . . . . . . . . . . . . . . . . . . . . 153
7.2 The Point at Infinity and the Stereographic Projection . . . . . . . . . . . . . . . . . . . . 155
7.3 A Gentle Introduction to Branch Points . . . . . . . . . . . . . . . . . . . . 157
7.4 Cartesian and Modulus-Argument Form . . . . . . . . . . . . . . . . . . . . 157
7.5 Graphing Functions of a Complex Variable . . . . . . . . . . . . . . . . . . . . 159
7.6 Trigonometric Functions . . . . . . . . . . . . . . . . . . . . 161
7.7 Inverse Trigonometric Functions . . . . . . . . . . . . . . . . . . . . 164
7.8 Riemann Surfaces . . . . . . . . . . . . . . . . . . . . 169
7.9 Branch Points . . . . . . . . . . . . . . . . . . . . 170
7.10 Exercises . . . . . . . . . . . . . . . . . . . . 180
7.11 Hints . . . . . . . . . . . . . . . . . . . . 187
7.12 Solutions . . . . . . . . . . . . . . . . . . . . 190
8 Analytic Functions 223
8.1 Complex Derivatives . . . . . . . . . . . . . . . . . . . . 223
8.2 Cauchy-Riemann Equations . . . . . . . . . . . . . . . . . . . . 227
8.3 Harmonic Functions . . . . . . . . . . . . . . . . . . . . 230
8.4 Singularities . . . . . . . . . . . . . . . . . . . . 233
8.4.1 Categorization of Singularities . . . . . . . . . . . . . . . . . . . . 233
8.4.2 Isolated and Non-Isolated Singularities . . . . . . . . . . . . . . . . . . . . 235
8.5 Application: Potential Flow . . . . . . . . . . . . . . . . . . . . 236
8.6 Exercises . . . . . . . . . . . . . . . . . . . . 239
8.7 Hints . . . . . . . . . . . . . . . . . . . . 244
8.8 Solutions . . . . . . . . . . . . . . . . . . . . 246
9 Analytic Continuation 269
9.1 Analytic Continuation . . . . . . . . . . . . . . . . . . . . 269
9.2 Analytic Continuation of Sums . . . . . . . . . . . . . . . . . . . . 271
9.3 Analytic Functions Defined in Terms of Real Variables . . . . . . . . . . . . . . . . . . . . 271
9.3.1 Polar Coordinates . . . . . . . . . . . . . . . . . . . . 274
9.3.2 Analytic Functions Defined in Terms of Their Real or Imaginary Parts . . . . . . . . . . . . . . . . . . . . 276
9.4 Exercises . . . . . . . . . . . . . . . . . . . . 279
9.5 Hints . . . . . . . . . . . . . . . . . . . . 280
9.6 Solutions . . . . . . . . . . . . . . . . . . . . 281
10 Contour Integration and the Cauchy-Goursat Theorem 285
10.1 Line Integrals . . . . . . . . . . . . . . . . . . . . 285
10.2 Contour Integrals . . . . . . . . . . . . . . . . . . . . 286
10.2.1 Maximum Modulus Integral Bound . . . . . . . . . . . . . . . . . . . . 287
10.3 The Cauchy-Goursat Theorem . . . . . . . . . . . . . . . . . . . . 288
10.4 Contour Deformation . . . . . . . . . . . . . . . . . . . . 289
10.5 Morera’s Theorem . . . . . . . . . . . . . . . . . . . . 290
10.6 Indefinite Integrals . . . . . . . . . . . . . . . . . . . . 291
10.7 Fundamental Theorem of Calculus via Primitives . . . . . . . . . . . . . . . . . . . . 292
10.7.1 Line Integrals and Primitives . . . . . . . . . . . . . . . . . . . . 292
10.7.2 Contour Integrals . . . . . . . . . . . . . . . . . . . . 292
10.8 Fundamental Theorem of Calculus via Complex Calculus . . . . . . . . . . . . . . . . . . . . 292
10.9 Exercises . . . . . . . . . . . . . . . . . . . . 295
10.10 Hints . . . . . . . . . . . . . . . . . . . . 297
10.11 Solutions . . . . . . . . . . . . . . . . . . . . 298
11 Cauchy’s Integral Formula 305
11.1 Cauchy’s Integral Formula . . . . . . . . . . . . . . . . . . . . 305
11.2 The Argument Theorem . . . . . . . . . . . . . . . . . . . . 309
11.3 Rouche’s Theorem . . . . . . . . . . . . . . . . . . . . 311
11.4 Exercises . . . . . . . . . . . . . . . . . . . . 312
11.5 Hints . . . . . . . . . . . . . . . . . . . . 315
11.6 Solutions . . . . . . . . . . . . . . . . . . . . 316
12 Series and Convergence 325
12.1 Series of Constants . . . . . . . . . . . . . . . . . . . . 325
12.1.1 Definitions . . . . . . . . . . . . . . . . . . . . 325
12.1.2 Special Series . . . . . . . . . . . . . . . . . . . . 326
12.1.3 Convergence Tests . . . . . . . . . . . . . . . . . . . . 327
12.2 Uniform Convergence . . . . . . . . . . . . . . . . . . . . 331
12.2.1 Tests for Uniform Convergence . . . . . . . . . . . . . . . . . . . . 332
12.2.2 Uniform Convergence and Continuous Functions . . . . . . . . . . . . . . . . . . . . 333
12.3 Uniformly Convergent Power Series . . . . . . . . . . . . . . . . . . . . 333
12.4 Integration and Differentiation of Power Series . . . . . . . . . . . . . . . . . . . . 337
12.5 Taylor Series . . . . . . . . . . . . . . . . . . . . 339
12.5.1 Newton’s Binomial Formula . . . . . . . . . . . . . . . . . . . . 341
12.6 Laurent Series . . . . . . . . . . . . . . . . . . . . 342
12.7 Exercises . . . . . . . . . . . . . . . . . . . . 344
12.7.1 Series of Constants . . . . . . . . . . . . . . . . . . . . 344
12.7.2 Uniform Convergence . . . . . . . . . . . . . . . . . . . . 347
12.7.3 Uniformly Convergent Power Series . . . . . . . . . . . . . . . . . . . . 347
12.7.4 Integration and Differentiation of Power Series . . . . . . . . . . . . . . . . . . . . 349
12.7.5 Taylor Series . . . . . . . . . . . . . . . . . . . . 349
12.7.6 Laurent Series . . . . . . . . . . . . . . . . . . . . 351
12.8 Hints . . . . . . . . . . . . . . . . . . . . 353
12.9 Solutions . . . . . . . . . . . . . . . . . . . . 358
13 The Residue Theorem 383
13.1 The Residue Theorem . . . . . . . . . . . . . . . . . . . . 383
13.2 Cauchy Principal Value for Real Integrals . . . . . . . . . . . . . . . . . . . . 387
13.2.1 The Cauchy Principal Value . . . . . . . . . . . . . . . . . . . . 387
13.3 Cauchy Principal Value for Contour Integrals . . . . . . . . . . . . . . . . . . . . 390
13.4 Integrals on the Real Axis . . . . . . . . . . . . . . . . . . . . 393
13.5 Fourier Integrals . . . . . . . . . . . . . . . . . . . . 395
13.6 Fourier Cosine and Sine Integrals . . . . . . . . . . . . . . . . . . . . 397
13.7 Contour Integration and Branch Cuts . . . . . . . . . . . . . . . . . . . . 398
13.8 Exploiting Symmetry . . . . . . . . . . . . . . . . . . . . 400
13.8.1 Wedge Contours . . . . . . . . . . . . . . . . . . . . 400
13.8.2 Box Contours . . . . . . . . . . . . . . . . . . . . 402
13.9 Definite Integrals Involving Sine and Cosine . . . . . . . . . . . . . . . . . . . . 403
13.10 Infinite Sums . . . . . . . . . . . . . . . . . . . . 404
13.11 Exercises . . . . . . . . . . . . . . . . . . . . 407
13.12 Hints . . . . . . . . . . . . . . . . . . . . 416
13.13 Solutions . . . . . . . . . . . . . . . . . . . . 420
  • 7. IV Ordinary Differential Equations 471 14 First Order Differential Equations 473 14.1 Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 473 14.2 Example Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 474 14.2.1 Growth and Decay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 474 14.3 One Parameter Families of Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . 475 14.4 Integrable Forms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 477 14.4.1 Separable Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 477 14.4.2 Exact Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 478 14.4.3 Homogeneous Coefficient Equations . . . . . . . . . . . . . . . . . . . . . . . 480 14.5 The First Order, Linear Differential Equation . . . . . . . . . . . . . . . . . . . . . . 483 14.5.1 Homogeneous Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 483 14.5.2 Inhomogeneous Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 484 14.5.3 Variation of Parameters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 485 14.6 Initial Conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 486 14.6.1 Piecewise Continuous Coefficients and Inhomogeneities . . . . . . . . . . . . . 486 14.7 Well-Posed Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 489 14.8 Equations in the Complex Plane . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 490 14.8.1 Ordinary Points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 490 14.8.2 Regular Singular Points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 492 14.8.3 Irregular Singular Points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 495 14.8.4 The Point at Infinity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 496 14.9 Additional Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 498 14.10Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 500 14.11Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 502 14.12Quiz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 513 14.13Quiz Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 514 15 First Order Linear Systems of Differential Equations 515 15.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 515 15.2 Using Eigenvalues and Eigenvectors to find Homogeneous Solutions . . . . . . . . . . 515 15.3 Matrices and Jordan Canonical Form . . . . . . . . . . . . . . . . . . . . . . . . . . . 518 15.4 Using the Matrix Exponential . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 522 15.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 526 15.6 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 529 15.7 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 530 16 Theory of Linear Ordinary Differential Equations 547 16.1 Exact Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 547 16.2 Nature of Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . 548 16.3 Transformation to a First Order System . . . . . . . . . . . . . . . . . . . . . . . . . 550 16.4 The Wronskian . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 550 16.4.1 Derivative of a Determinant. . . . . . . . . . . . . . . . . . . . . . . . . . . . 550 16.4.2 The Wronskian of a Set of Functions. . . . . . . . . . . . . . . . . . . . . . . 551 16.4.3 The Wronskian of the Solutions to a Differential Equation . . . . . . . . . . . 552 16.5 Well-Posed Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 554 16.6 The Fundamental Set of Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 555 16.7 Adjoint Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 556 16.8 Additional Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 559 16.9 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 560 16.10Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 561 16.11Quiz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 565 16.12Quiz Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 566 v
  • 8. 17 Techniques for Linear Differential Equations 567 17.1 Constant Coefficient Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 567 17.1.1 Second Order Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 567 17.1.2 Real-Valued Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 570 17.1.3 Higher Order Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 571 17.2 Euler Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 573 17.2.1 Real-Valued Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 574 17.3 Exact Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 576 17.4 Equations Without Explicit Dependence on y . . . . . . . . . . . . . . . . . . . . . . 577 17.5 Reduction of Order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 577 17.6 *Reduction of Order and the Adjoint Equation . . . . . . . . . . . . . . . . . . . . . 578 17.7 Additional Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 580 17.8 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 584 17.9 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 586 18 Techniques for Nonlinear Differential Equations 601 18.1 Bernoulli Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 601 18.2 Riccati Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 602 18.3 Exchanging the Dependent and Independent Variables . . . . . . . . . . . . . . . . . 604 18.4 Autonomous Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 605 18.5 *Equidimensional-in-x Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 607 18.6 *Equidimensional-in-y Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 608 18.7 *Scale-Invariant Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 610 18.8 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 611 18.9 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 613 18.10Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 614 19 Transformations and Canonical Forms 621 19.1 The Constant Coefficient Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 621 19.2 Normal Form . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 623 19.2.1 Second Order Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 623 19.2.2 Higher Order Differential Equations . . . . . . . . . . . . . . . . . . . . . . . 624 19.3 Transformations of the Independent Variable . . . . . . . . . . . . . . . . . . . . . . 624 19.3.1 Transformation to the form u” + a(x) u = 0 . . . . . . . . . . . . . . . . . . 624 19.3.2 Transformation to a Constant Coefficient Equation . . . . . . . . . . . . . . . 625 19.4 Integral Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 626 19.4.1 Initial Value Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 626 19.4.2 Boundary Value Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 628 19.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 630 19.6 Hints . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . . 632 19.7 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 633 20 The Dirac Delta Function 637 20.1 Derivative of the Heaviside Function . . . . . . . . . . . . . . . . . . . . . . . . . . . 637 20.2 The Delta Function as a Limit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 638 20.3 Higher Dimensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 639 20.4 Non-Rectangular Coordinate Systems . . . . . . . . . . . . . . . . . . . . . . . . . . 639 20.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 641 20.6 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 643 20.7 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 644 vi
  • 9. 21 Inhomogeneous Differential Equations 649 21.1 Particular Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 649 21.2 Method of Undetermined Coefficients . . . . . . . . . . . . . . . . . . . . . . . . . . . 650 21.3 Variation of Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 652 21.3.1 Second Order Differential Equations . . . . . . . . . . . . . . . . . . . . . . . 652 21.3.2 Higher Order Differential Equations . . . . . . . . . . . . . . . . . . . . . . . 654 21.4 Piecewise Continuous Coefficients and Inhomogeneities . . . . . . . . . . . . . . . . . 656 21.5 Inhomogeneous Boundary Conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . 658 21.5.1 Eliminating Inhomogeneous Boundary Conditions . . . . . . . . . . . . . . . 658 21.5.2 Separating Inhomogeneous Equations and Inhomogeneous Boundary Conditions659 21.5.3 Existence of Solutions of Problems with Inhomogeneous Boundary Conditions 659 21.6 Green Functions for First Order Equations . . . . . . . . . . . . . . . . . . . . . . . 661 21.7 Green Functions for Second Order Equations . . . . . . . . . . . . . . . . . . . . . . 662 21.7.1 Green Functions for Sturm-Liouville Problems . . . . . . . . . . . . . . . . . 668 21.7.2 Initial Value Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 670 21.7.3 Problems with Unmixed Boundary Conditions . . . . . . . . . . . . . . . . . 671 21.7.4 Problems with Mixed Boundary Conditions . . . . . . . . . . . . . . . . . . . 672 21.8 Green Functions for Higher Order Problems . . . . . . . . . . . . . . . . . . . . . . . 674 21.9 Fredholm Alternative Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 677 21.10Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 682 21.11Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 686 21.12Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 688 21.13Quiz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 710 21.14Quiz Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 711 22 Difference Equations 713 22.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 713 22.2 Exact Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 714 22.3 Homogeneous First Order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 715 22.4 Inhomogeneous First Order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 716 22.5 Homogeneous Constant Coefficient Equations . . . . . . . . . . . . . . . . . . . . . . 717 22.6 Reduction of Order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 719 22.7 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 721 22.8 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 722 22.9 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 723 23 Series Solutions of Differential Equations 725 23.1 Ordinary Points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 725 23.1.1 Taylor Series Expansion for a Second Order Differential Equation . . . . . . . 728 23.2 Regular Singular Points of Second Order Equations . . . . . . . . . . . . . . . . . . . 
733 23.2.1 Indicial Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 735 23.2.2 The Case: Double Root . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 736 23.2.3 The Case: Roots Differ by an Integer . . . . . . . . . . . . . . . . . . . . . . 738 23.3 Irregular Singular Points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 743 23.4 The Point at Infinity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 743 23.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 745 23.6 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 748 23.7 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 749 23.8 Quiz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 763 23.9 Quiz Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 764 vii
  • 10. 24 Asymptotic Expansions 765 24.1 Asymptotic Relations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 765 24.2 Leading Order Behavior of Differential Equations . . . . . . . . . . . . . . . . . . . . 767 24.3 Integration by Parts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 772 24.4 Asymptotic Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 777 24.5 Asymptotic Expansions of Differential Equations . . . . . . . . . . . . . . . . . . . . 777 24.5.1 The Parabolic Cylinder Equation. . . . . . . . . . . . . . . . . . . . . . . . . 777 25 Hilbert Spaces 781 25.1 Linear Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 781 25.2 Inner Products . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 782 25.3 Norms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 783 25.4 Linear Independence. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 784 25.5 Orthogonality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 784 25.6 Gramm-Schmidt Orthogonalization . . . . . . . . . . . . . . . . . . . . . . . . . . . . 784 25.7 Orthonormal Function Expansion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 786 25.8 Sets Of Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 787 25.9 Least Squares Fit to a Function and Completeness . . . . . . . . . . . . . . . . . . . 790 25.10Closure Relation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 792 25.11Linear Operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 795 25.12Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 796 25.13Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 797 25.14Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 798 26 Self Adjoint Linear Operators 799 26.1 Adjoint Operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 799 26.2 Self-Adjoint Operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 799 26.3 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 801 26.4 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 802 26.5 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 803 27 Self-Adjoint Boundary Value Problems 805 27.1 Summary of Adjoint Operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 805 27.2 Formally Self-Adjoint Operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 806 27.3 Self-Adjoint Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 807 27.4 Self-Adjoint Eigenvalue Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 808 27.5 Inhomogeneous Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 811 27.6 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 813 27.7 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 814 27.8 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 815 28 Fourier Series 817 28.1 An Eigenvalue Problem. . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . 817 28.2 Fourier Series. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 819 28.3 Least Squares Fit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 821 28.4 Fourier Series for Functions Defined on Arbitrary Ranges . . . . . . . . . . . . . . . 824 28.5 Fourier Cosine Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 826 28.6 Fourier Sine Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 827 28.7 Complex Fourier Series and Parseval’s Theorem . . . . . . . . . . . . . . . . . . . . . 828 28.8 Behavior of Fourier Coefficients . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 829 28.9 Gibb’s Phenomenon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 835 28.10Integrating and Differentiating Fourier Series . . . . . . . . . . . . . . . . . . . . . . 835 28.11Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 838 28.12Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 843 28.13Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 845 viii
  • 11. 29 Regular Sturm-Liouville Problems 873 29.1 Derivation of the Sturm-Liouville Form . . . . . . . . . . . . . . . . . . . . . . . . . 873 29.2 Properties of Regular Sturm-Liouville Problems . . . . . . . . . . . . . . . . . . . . . 874 29.3 Solving Differential Equations With Eigenfunction Expansions . . . . . . . . . . . . 881 29.4 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 885 29.5 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 888 29.6 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 889 30 Integrals and Convergence 905 30.1 Uniform Convergence of Integrals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 905 30.2 The Riemann-Lebesgue Lemma . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 906 30.3 Cauchy Principal Value . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 906 30.3.1 Integrals on an Infinite Domain . . . . . . . . . . . . . . . . . . . . . . . . . . 906 30.3.2 Singular Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 907 31 The Laplace Transform 909 31.1 The Laplace Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 909 31.2 The Inverse Laplace Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 910 31.2.1 ˆf(s) with Poles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 912 31.2.2 ˆf(s) with Branch Points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 914 31.2.3 Asymptotic Behavior of ˆf(s) . . . . . . . . . . . . . . . . . . . . . . . . . . . 916 31.3 Properties of the Laplace Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . 917 31.4 Constant Coefficient Differential Equations . . . . . . . . . . . . . . . . . . . . . . . 919 31.5 Systems of Constant Coefficient Differential Equations . . . . . . . . . . . . . . . . . 920 31.6 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 922 31.7 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 926 31.8 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 928 32 The Fourier Transform 947 32.1 Derivation from a Fourier Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 947 32.2 The Fourier Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 948 32.2.1 A Word of Caution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 949 32.3 Evaluating Fourier Integrals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 950 32.3.1 Integrals that Converge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 950 32.3.2 Cauchy Principal Value and Integrals that are Not Absolutely Convergent. . 952 32.3.3 Analytic Continuation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 953 32.4 Properties of the Fourier Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . 954 32.4.1 Closure Relation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 954 32.4.2 Fourier Transform of a Derivative. . . . . . . . . . . . . . . . . . . . . . . . . 955 32.4.3 Fourier Convolution Theorem. . . . . . . . . . . . . . . . . . . . . . . . . . . 955 32.4.4 Parseval’s Theorem. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 957 32.4.5 Shift Property. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . 958 32.4.6 Fourier Transform of x f(x). . . . . . . . . . . . . . . . . . . . . . . . . . . . . 959 32.5 Solving Differential Equations with the Fourier Transform . . . . . . . . . . . . . . . 959 32.6 The Fourier Cosine and Sine Transform . . . . . . . . . . . . . . . . . . . . . . . . . 960 32.6.1 The Fourier Cosine Transform . . . . . . . . . . . . . . . . . . . . . . . . . . 960 32.6.2 The Fourier Sine Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . 961 32.7 Properties of the Fourier Cosine and Sine Transform . . . . . . . . . . . . . . . . . . 962 32.7.1 Transforms of Derivatives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 962 32.7.2 Convolution Theorems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 962 32.7.3 Cosine and Sine Transform in Terms of the Fourier Transform . . . . . . . . 964 32.8 Solving Differential Equations with the Fourier Cosine and Sine Transforms . . . . . 965 32.9 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 966 32.10Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 970 ix
  • 12. 32.11Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 972 33 The Gamma Function 987 33.1 Euler’s Formula . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 987 33.2 Hankel’s Formula . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 988 33.3 Gauss’ Formula . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 989 33.4 Weierstrass’ Formula . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 990 33.5 Stirling’s Approximation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 991 33.6 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 995 33.7 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 996 33.8 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 997 34 Bessel Functions 999 34.1 Bessel’s Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 999 34.2 Frobeneius Series Solution about z = 0 . . . . . . . . . . . . . . . . . . . . . . . . . . 999 34.2.1 Behavior at Infinity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1001 34.3 Bessel Functions of the First Kind . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1003 34.3.1 The Bessel Function Satisfies Bessel’s Equation . . . . . . . . . . . . . . . . . 1003 34.3.2 Series Expansion of the Bessel Function . . . . . . . . . . . . . . . . . . . . . 1004 34.3.3 Bessel Functions of Non-Integer Order . . . . . . . . . . . . . . . . . . . . . . 1005 34.3.4 Recursion Formulas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1007 34.3.5 Bessel Functions of Half-Integer Order . . . . . . . . . . . . . . . . . . . . . . 1009 34.4 Neumann Expansions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1010 34.5 Bessel Functions of the Second Kind . . . . . . . . . . . . . . . . . . . . . . . . . . . 1012 34.6 Hankel Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1013 34.7 The Modified Bessel Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1013 34.8 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1016 34.9 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1019 34.10Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1020 V Partial Differential Equations 1033 35 Transforming Equations 1035 35.1 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1036 35.2 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1037 35.3 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1038 36 Classification of Partial Differential Equations 1039 36.1 Classification of Second Order Quasi-Linear Equations . . . . . . . . . . . . . . . . . 1039 36.1.1 Hyperbolic Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1040 36.1.2 Parabolic equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1043 36.1.3 Elliptic Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1043 36.2 Equilibrium Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1044 36.3 Exercises . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . . . . . 1046 36.4 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1047 36.5 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1048 37 Separation of Variables 1051 37.1 Eigensolutions of Homogeneous Equations . . . . . . . . . . . . . . . . . . . . . . . . 1051 37.2 Homogeneous Equations with Homogeneous Boundary Conditions . . . . . . . . . . 1051 37.3 Time-Independent Sources and Boundary Conditions . . . . . . . . . . . . . . . . . . 1052 37.4 Inhomogeneous Equations with Homogeneous Boundary Conditions . . . . . . . . . 1054 37.5 Inhomogeneous Boundary Conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . 1055 37.6 The Wave Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1056 x
  • 13. 37.7 General Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1058 37.8 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1059 37.9 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1069 37.10Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1072 38 Finite Transforms 1119 38.1 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1121 38.2 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1122 38.3 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1123 39 The Diffusion Equation 1127 39.1 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1128 39.2 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1129 39.3 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1130 40 Laplace’s Equation 1135 40.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1135 40.2 Fundamental Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1135 40.2.1 Two Dimensional Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1135 40.3 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1136 40.4 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1138 40.5 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1139 41 Waves 1147 41.1 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1148 41.2 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1152 41.3 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1154 42 Similarity Methods 1167 42.1 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1170 42.2 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1171 42.3 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1172 43 Method of Characteristics 1175 43.1 First Order Linear Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1175 43.2 First Order Quasi-Linear Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . 1176 43.3 The Method of Characteristics and the Wave Equation . . . . . . . . . . . . . . . . . 1176 43.4 The Wave Equation for an Infinite Domain . . . . . . . . . . . . . . . . . . . . . . . 1177 43.5 The Wave Equation for a Semi-Infinite Domain . . . . . . . . . . . . . . . . . . . . . 1178 43.6 The Wave Equation for a Finite Domain . . . . . . . . . . . . . . . . . . . . . . . . . 1179 43.7 Envelopes of Curves . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1180 43.8 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1182 43.9 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1183 43.10Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1184 44 Transform Methods 1189 44.1 Fourier Transform for Partial Differential Equations . . . . . . . 
. . . . . . . . . . . 1189 44.2 The Fourier Sine Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1190 44.3 Fourier Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1190 44.4 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1192 44.5 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1195 44.6 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1197 xi
  • 14. 45 Green Functions 1211 45.1 Inhomogeneous Equations and Homogeneous Boundary Conditions . . . . . . . . . . 1211 45.2 Homogeneous Equations and Inhomogeneous Boundary Conditions . . . . . . . . . . 1211 45.3 Eigenfunction Expansions for Elliptic Equations . . . . . . . . . . . . . . . . . . . . . 1213 45.4 The Method of Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1215 45.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1217 45.6 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1224 45.7 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1226 46 Conformal Mapping 1261 46.1 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1262 46.2 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1264 46.3 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1265 47 Non-Cartesian Coordinates 1273 47.1 Spherical Coordinates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1273 47.2 Laplace’s Equation in a Disk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1273 47.3 Laplace’s Equation in an Annulus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1275 VI Calculus of Variations 1279 48 Calculus of Variations 1281 48.1 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1282 48.2 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1291 48.3 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1294 VII Nonlinear Differential Equations 1345 49 Nonlinear Ordinary Differential Equations 1347 49.1 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1348 49.2 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1351 49.3 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1352 50 Nonlinear Partial Differential Equations 1365 50.1 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1366 50.2 Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1368 50.3 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1369 VIII Appendices 1381 A Greek Letters 1383 B Notation 1385 C Formulas from Complex Variables 1387 D Table of Derivatives 1389 E Table of Integrals 1391 F Definite Integrals 1393 G Table of Sums 1395 xii
  • 15. H Table of Taylor Series 1397 I Continuous Transforms 1399 I.1 Properties of Laplace Transforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1399 I.2 Table of Laplace Transforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1401 I.3 Table of Fourier Transforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1403 I.4 Table of Fourier Transforms in n Dimensions . . . . . . . . . . . . . . . . . . . . . . 1405 I.5 Table of Fourier Cosine Transforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1406 I.6 Table of Fourier Sine Transforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1407 J Table of Wronskians 1409 K Sturm-Liouville Eigenvalue Problems 1411 L Green Functions for Ordinary Differential Equations 1413 M Trigonometric Identities 1415 M.1 Circular Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1415 M.2 Hyperbolic Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1416 N Bessel Functions 1419 N.1 Definite Integrals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1419 O Formulas from Linear Algebra 1421 P Vector Analysis 1423 Q Partial Fractions 1425 R Finite Math 1427 S Physics 1429 T Probability 1431 T.1 Independent Events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1431 T.2 Playing the Odds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1431 U Economics 1433 V Glossary 1435 W whoami 1437 xiii
Anti-Copyright

Anti-Copyright @ 1995-2001 by Mauch Publishing Company, un-Incorporated.

No rights reserved. Any part of this publication may be reproduced, stored in a retrieval system, transmitted or desecrated without permission.
Preface

During the summer before my final undergraduate year at Caltech I set out to write a math text unlike any other, namely, one written by me. In that respect I have succeeded beautifully. Unfortunately, the text is neither complete nor polished. I have a "Warnings and Disclaimers" section below that is a little amusing, and an appendix on probability that I feel concisely captures the essence of the subject. However, all the material in between is in some stage of development. I am currently working to improve and expand this text.

This text is freely available from my web site. Currently I'm at http://www.its.caltech.edu/˜sean. I post new versions a couple of times a year.

0.1 Advice to Teachers

If you have something worth saying, write it down.

0.2 Acknowledgments

I would like to thank Professor Saffman for advising me on this project and the Caltech SURF program for providing the funding for me to write the first edition of this book.

0.3 Warnings and Disclaimers

• This book is a work in progress. It contains quite a few mistakes and typos. I would greatly appreciate your constructive criticism. You can reach me at 'sean@caltech.edu'.

• Reading this book impairs your ability to drive a car or operate machinery.

• This book has been found to cause drowsiness in laboratory animals.

• This book contains twenty-three times the US RDA of fiber.

• Caution: FLAMMABLE - Do not read while smoking or near a fire.

• If infection, rash, or irritation develops, discontinue use and consult a physician.

• Warning: For external use only. Use only as directed. Intentional misuse by deliberately concentrating contents can be harmful or fatal. KEEP OUT OF REACH OF CHILDREN.

• In the unlikely event of a water landing do not use this book as a flotation device.

• The material in this text is fiction; any resemblance to real theorems, living or dead, is purely coincidental.

• This is by far the most amusing section of this book.
• Finding the typos and mistakes in this book is left as an exercise for the reader. (Eye ewes a spelling chequer from thyme too thyme, sew their should knot bee two many misspellings. Though I ain't so sure the grammar's too good.)

• The theorems and methods in this text are subject to change without notice.

• This is a chain book. If you do not make seven copies and distribute them to your friends within ten days of obtaining this text you will suffer great misfortune and other nastiness.

• The surgeon general has determined that excessive studying is detrimental to your social life.

• This text has been buffered for your protection and ribbed for your pleasure.

• Stop reading this rubbish and get back to work!

0.4 Suggested Use

This text is well suited to the student, professional or lay-person. It makes a superb gift. This text has a bouquet that is light and fruity, with some earthy undertones. It is ideal with dinner or as an aperitif. Bon appétit!

0.5 About the Title

The title is only making light of naming conventions in the sciences and is not an insult to engineers. If you want to learn about some mathematical subject, look for books with "Introduction" or "Elementary" in the title. If it is an "Intermediate" text it will be incomprehensible. If it is "Advanced" then not only will it be incomprehensible, it will have low production qualities, i.e. a crappy typewriter font, no graphics and no examples. There is an exception to this rule: when the title also contains the word "Scientists" or "Engineers" the advanced book may be quite suitable for actually learning the material.
Chapter 1

Sets and Functions

1.1 Sets

Definition. A set is a collection of objects. We call the objects elements. A set is denoted by listing the elements between braces. For example, {e, ı, π, 1} is the set of the integer 1, the pure imaginary number ı = √−1 and the transcendental numbers e = 2.7182818... and π = 3.1415926.... For elements of a set, we do not count multiplicities. We regard the set {1, 2, 2, 3, 3, 3} as identical to the set {1, 2, 3}. Order is not significant in sets. The set {1, 2, 3} is equivalent to {3, 2, 1}. In enumerating the elements of a set, we use ellipses to indicate patterns. We denote the set of positive integers as {1, 2, 3, ...}. We also denote sets with the notation {x | conditions on x} for sets that are more easily described than enumerated. This is read as "the set of elements x such that ...". x ∈ S is the notation for "x is an element of the set S." To express the opposite we have x ∉ S for "x is not an element of the set S."

Examples. We have notations for denoting some of the commonly encountered sets.

• ∅ = {} is the empty set, the set containing no elements.

• Z = {..., −3, −2, −1, 0, 1, 2, 3, ...} is the set of integers. (Z is for "Zahlen", the German word for "number".)

• Q = {p/q | p, q ∈ Z, q ≠ 0} is the set of rational numbers. (Q is for quotient.)¹

• R = {x | x = a₁a₂···aₙ.b₁b₂···} is the set of real numbers, i.e. the set of numbers with decimal expansions.²

• C = {a + ıb | a, b ∈ R, ı² = −1} is the set of complex numbers. ı is the square root of −1. (If you haven't seen complex numbers before, don't dismay. We'll cover them later.)

• Z⁺, Q⁺ and R⁺ are the sets of positive integers, rationals and reals, respectively. For example, Z⁺ = {1, 2, 3, ...}. We use a − superscript to denote the sets of negative numbers.

• Z⁰⁺, Q⁰⁺ and R⁰⁺ are the sets of non-negative integers, rationals and reals, respectively. For example, Z⁰⁺ = {0, 1, 2, ...}.

• (a . . . b) denotes an open interval on the real axis: (a . . . b) ≡ {x | x ∈ R, a < x < b}.

• We use brackets to denote the closed interval: [a..b] ≡ {x | x ∈ R, a ≤ x ≤ b}.

¹ Note that with this description, we enumerate each rational number an infinite number of times. For example: 1/2 = 2/4 = 3/6 = (−1)/(−2) = ···. This does not pose a problem as we do not count multiplicities.

² Guess what R is for.
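These conventions map directly onto the set type of most programming languages. As an illustrative aside, not part of the original text, here is a short Python sketch; the variable names are our own:

    # Sets ignore multiplicity and order: {1, 2, 2, 3, 3, 3} = {1, 2, 3} = {3, 2, 1}.
    A = {1, 2, 2, 3, 3, 3}
    assert A == {1, 2, 3} == {3, 2, 1}

    # The notation {x | conditions on x} corresponds to a set comprehension.
    squares = {x**2 for x in range(-3, 4)}
    assert squares == {0, 1, 4, 9}

    # Membership tests play the role of x ∈ S and x ∉ S.
    assert 2 in A
    assert 7 not in A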
The cardinality or order of a set S is denoted |S|. For finite sets, the cardinality is the number of elements in the set. The Cartesian product of two sets is the set of ordered pairs:

X × Y ≡ {(x, y) | x ∈ X, y ∈ Y}.

The Cartesian product of n sets is the set of ordered n-tuples:

X₁ × X₂ × ··· × Xₙ ≡ {(x₁, x₂, ..., xₙ) | x₁ ∈ X₁, x₂ ∈ X₂, ..., xₙ ∈ Xₙ}.

Equality. Two sets S and T are equal if each element of S is an element of T and vice versa. This is denoted S = T. Inequality is S ≠ T, of course. S is a subset of T, S ⊆ T, if every element of S is an element of T. S is a proper subset of T, S ⊂ T, if S ⊆ T and S ≠ T. For example: the empty set is a subset of every set, ∅ ⊆ S. The rational numbers are a proper subset of the real numbers, Q ⊂ R.

Operations. The union of two sets, S ∪ T, is the set whose elements are in either of the two sets. The union of n sets,

∪_{j=1}^{n} S_j ≡ S₁ ∪ S₂ ∪ ··· ∪ Sₙ,

is the set whose elements are in any of the sets S_j. The intersection of two sets, S ∩ T, is the set whose elements are in both of the two sets. In other words, the intersection of two sets is the set of elements that the two sets have in common. The intersection of n sets,

∩_{j=1}^{n} S_j ≡ S₁ ∩ S₂ ∩ ··· ∩ Sₙ,

is the set whose elements are in all of the sets S_j. If two sets have no elements in common, S ∩ T = ∅, then the sets are disjoint. If T ⊆ S, then the difference between S and T, S \ T, is the set of elements in S which are not in T:

S \ T ≡ {x | x ∈ S, x ∉ T}.

The difference of sets is also denoted S − T.

Properties. The following properties are easily verified from the above definitions.

• S ∪ ∅ = S, S ∩ ∅ = ∅, S \ ∅ = S, S \ S = ∅.

• Commutative. S ∪ T = T ∪ S, S ∩ T = T ∩ S.

• Associative. (S ∪ T) ∪ U = S ∪ (T ∪ U) = S ∪ T ∪ U, (S ∩ T) ∩ U = S ∩ (T ∩ U) = S ∩ T ∩ U.

• Distributive. S ∪ (T ∩ U) = (S ∪ T) ∩ (S ∪ U), S ∩ (T ∪ U) = (S ∩ T) ∪ (S ∩ U).

1.2 Single Valued Functions

Single-Valued Functions. A single-valued function or single-valued mapping is a mapping of the elements x ∈ X into elements y ∈ Y. This is expressed as f : X → Y, or as X →f Y with the function name written over the arrow. If such a function is well-defined, then for each x ∈ X there exists a unique element y ∈ Y such that f(x) = y. The set X is the domain of the function; Y is the codomain (not to be confused with the range, which we introduce shortly). To denote the value of a function on a particular element we can use any of the notations f(x) = y, f : x ↦ y, or simply x ↦ y. f is the identity map on X if f(x) = x for all x ∈ X.

Let f : X → Y. The range or image of f is

f(X) = {y | y = f(x) for some x ∈ X}.

The range is a subset of the codomain. For each Z ⊆ Y, the inverse image of Z is defined:

f⁻¹(Z) ≡ {x ∈ X | f(x) = z for some z ∈ Z}.
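The operations and the distributive law above are easy to check on small examples. A brief Python sketch, again our own illustration rather than the book's:

    import itertools

    S, T, U = {1, 2, 3}, {3, 4}, {2, 3, 5}

    # Union, intersection and difference correspond to S ∪ T, S ∩ T and S \ T.
    assert S | T == {1, 2, 3, 4}
    assert S & T == {3}
    assert S - T == {1, 2}

    # One distributive law: S ∩ (T ∪ U) = (S ∩ T) ∪ (S ∩ U).
    assert S & (T | U) == (S & T) | (S & U)

    # The Cartesian product X × Y as a set of ordered pairs.
    X, Y = {0, 1}, {"a", "b"}
    assert set(itertools.product(X, Y)) == {(0, "a"), (0, "b"), (1, "a"), (1, "b")}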
Examples.

• Finite polynomials, f(x) = Σ_{k=0}^{n} a_k x^k with a_k ∈ R, and the exponential function, f(x) = eˣ, are examples of single valued functions which map real numbers to real numbers.

• The greatest integer function, f(x) = ⌊x⌋, is a mapping from R to Z. ⌊x⌋ is defined as the greatest integer less than or equal to x. Likewise, the least integer function, f(x) = ⌈x⌉, is the least integer greater than or equal to x.

The -jectives. A function is injective if for each x₁ ≠ x₂, f(x₁) ≠ f(x₂). In other words, distinct elements are mapped to distinct elements. f is surjective if for each y in the codomain, there is an x such that y = f(x). If a function is both injective and surjective, then it is bijective. A bijective function is also called a one-to-one mapping.

Examples.

• The exponential function f(x) = eˣ, considered as a mapping from R to R⁺, is bijective (a one-to-one mapping).

• f(x) = x² is a bijection from R⁺ to R⁺. f is not injective from R to R⁺. For each positive y in the range, there are two values of x such that y = x².

• f(x) = sin x is not injective from R to [−1..1]. For each y ∈ [−1..1] there exists an infinite number of values of x such that y = sin x.

Figure 1.1: Depictions of Injective, Surjective and Bijective Functions

1.3 Inverses and Multi-Valued Functions

If y = f(x), then we can write x = f⁻¹(y) where f⁻¹ is the inverse of f. If y = f(x) is a one-to-one function, then f⁻¹(y) is also a one-to-one function. In this case, x = f⁻¹(f(x)) = f(f⁻¹(x)) for values of x where both f(x) and f⁻¹(x) are defined. For example ln x, which maps R⁺ to R, is the inverse of eˣ. x = e^{ln x} = ln(eˣ) for all x ∈ R⁺. (Note that x ∈ R⁺ ensures that ln x is defined.)

If y = f(x) is a many-to-one function, then x = f⁻¹(y) is a one-to-many function. f⁻¹(y) is a multi-valued function. We have x = f(f⁻¹(x)) for values of x where f⁻¹(x) is defined, however x ≠ f⁻¹(f(x)). There are diagrams showing one-to-one, many-to-one and one-to-many functions in Figure 1.2.

Example 1.3.1 y = x², a many-to-one function, has the inverse x = y^{1/2}. For each positive y, there are two values of x such that x = y^{1/2}. y = x² and y = x^{1/2} are graphed in Figure 1.3.
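On a finite domain these properties can be tested exhaustively. A small Python sketch of such a brute-force check (our own illustration; the helper names are hypothetical):

    def is_injective(f, domain):
        # Distinct inputs must yield distinct outputs.
        images = [f(x) for x in domain]
        return len(set(images)) == len(images)

    def is_surjective(f, domain, codomain):
        # Every element of the codomain must be attained.
        return {f(x) for x in domain} == set(codomain)

    square = lambda x: x * x
    # x² is injective on the positive integers, but not on all integers.
    assert is_injective(square, [1, 2, 3, 4])
    assert not is_injective(square, [-2, -1, 0, 1, 2])
    # From {-2, ..., 2} onto {0, 1, 4} it is surjective (but still not injective).
    assert is_surjective(square, [-2, -1, 0, 1, 2], [0, 1, 4])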
Figure 1.2: Diagrams of One-To-One, Many-To-One and One-To-Many Functions

Figure 1.3: y = x² and y = x^{1/2}

We say that there are two branches of y = x^{1/2}: the positive and the negative branch. We denote the positive branch as y = √x; the negative branch is y = −√x. We call √x the principal branch of x^{1/2}. Note that √x is a one-to-one function. Finally, x = (x^{1/2})² since (±√x)² = x, but x ≠ (x²)^{1/2} since (x²)^{1/2} = ±x. y = √x is graphed in Figure 1.4.

Figure 1.4: y = √x

Now consider the many-to-one function y = sin x. The inverse is x = arcsin y. For each y ∈ [−1..1] there are an infinite number of values x such that x = arcsin y. In Figure 1.5 is a graph of y = sin x and a graph of a few branches of y = arcsin x.

Figure 1.5: y = sin x and y = arcsin x
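In floating-point arithmetic the principal branch is what a square-root routine returns, which makes the asymmetry above easy to see. A quick Python check, an aside of ours:

    import math

    # The principal branch satisfies (√x)² = x for x ≥ 0 ...
    assert math.isclose(math.sqrt(2.0) ** 2, 2.0)

    # ... but (x²)^(1/2) computed as sqrt(x**2) recovers |x|, not x.
    x = -3.0
    assert math.sqrt(x ** 2) == abs(x) == 3.0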
Example 1.3.2 arcsin x has an infinite number of branches. We will denote the principal branch by Arcsin x, which maps [−1..1] to [−π/2 .. π/2]. Note that x = sin(arcsin x), but x ≠ arcsin(sin x). y = Arcsin x is graphed in Figure 1.6.

Figure 1.6: y = Arcsin x

Example 1.3.3 Consider 1^{1/3}. Since x³ is a one-to-one function, x^{1/3} is a single-valued function. (See Figure 1.7.) 1^{1/3} = 1.

Figure 1.7: y = x³ and y = x^{1/3}

Example 1.3.4 Consider arccos(1/2). cos x and a portion of arccos x are graphed in Figure 1.8. The equation cos x = 1/2 has the two solutions x = ±π/3 in the range x ∈ (−π..π]. We use the periodicity of the cosine, cos(x + 2π) = cos x, to find the remaining solutions:

arccos(1/2) = {±π/3 + 2nπ}, n ∈ Z.

Figure 1.8: y = cos x and y = arccos x
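The branch structure of Example 1.3.4 can be verified numerically; here is a short Python check of our own (math.acos returns the principal value):

    import math

    # Solutions of cos x = 1/2 are x = ±π/3 + 2nπ.
    principal = math.acos(0.5)
    assert math.isclose(principal, math.pi / 3)

    # Every branch solves the original equation.
    for n in range(-2, 3):
        for x in (principal + 2 * n * math.pi, -principal + 2 * n * math.pi):
            assert math.isclose(math.cos(x), 0.5, abs_tol=1e-12)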
1.4 Transforming Equations

Consider the equation g(x) = h(x) and the single-valued function f(x). A particular value of x is a solution of the equation if substituting that value into the equation results in an identity. In determining the solutions of an equation, we often apply functions to each side of the equation in order to simplify its form. We apply the function to obtain a second equation, f(g(x)) = f(h(x)). If x = ξ is a solution of the former equation (let ψ = g(ξ) = h(ξ)), then it is necessarily a solution of the latter. This is because f(g(ξ)) = f(h(ξ)) reduces to the identity f(ψ) = f(ψ). If f(x) is bijective, then the converse is true: any solution of the latter equation is a solution of the former equation. Suppose that x = ξ is a solution of the latter, f(g(ξ)) = f(h(ξ)). That f(x) is a one-to-one mapping implies that g(ξ) = h(ξ). Thus x = ξ is a solution of the former equation.

It is always safe to apply a one-to-one (bijective) function to an equation, provided it is defined for that domain. For example, we can apply f(x) = x³ or f(x) = eˣ, considered as mappings on R, to the equation x = 1. The equations x³ = 1 and eˣ = e each have the unique solution x = 1 for x ∈ R.

In general, we must take care in applying functions to equations. If we apply a many-to-one function, we may introduce spurious solutions. Applying f(x) = x² to the equation x = π/2 results in x² = π²/4, which has the two solutions x = {±π/2}. Applying f(x) = sin x results in sin x = 1, which has an infinite number of solutions, x = {π/2 + 2nπ | n ∈ Z}.

We do not generally apply a one-to-many (multi-valued) function to both sides of an equation, as this rarely is useful. Rather, we typically use the definition of the inverse function. Consider the equation

sin² x = 1.

Applying the function f(x) = x^{1/2} to the equation would not get us anywhere:

(sin² x)^{1/2} = 1^{1/2}.

Since (sin² x)^{1/2} ≠ sin x, we cannot simplify the left side of the equation. Instead we could use the definition of f(x) = x^{1/2} as the inverse of the x² function to obtain

sin x = 1^{1/2} = ±1.

Now note that we should not just apply arcsin to both sides of the equation, as arcsin(sin x) ≠ x. Instead we use the definition of arcsin as the inverse of sin:

x = arcsin(±1).

x = arcsin(1) has the solutions x = π/2 + 2nπ and x = arcsin(−1) has the solutions x = −π/2 + 2nπ. We enumerate the solutions:

x = {π/2 + nπ | n ∈ Z}.
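The spurious-solution phenomenon and the final answer can both be sanity-checked numerically; a small Python aside of ours:

    import math

    # Squaring x = π/2 admits the spurious root x = -π/2.
    candidates = [math.pi / 2, -math.pi / 2]
    assert all(math.isclose(x**2, (math.pi / 2) ** 2) for x in candidates)
    # Only one candidate solves the original equation.
    assert sum(math.isclose(x, math.pi / 2) for x in candidates) == 1

    # sin² x = 1 holds exactly at x = π/2 + nπ.
    for n in range(-3, 4):
        x = math.pi / 2 + n * math.pi
        assert math.isclose(math.sin(x) ** 2, 1.0)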
1.5 Exercises

Exercise 1.1
The area of a circle is directly proportional to the square of its diameter. What is the constant of proportionality?
Hint, Solution

Exercise 1.2
Consider the equation

(x + 1)/(y − 2) = (x² − 1)/(y² − 4).

1. Why might one think that this is the equation of a line?
2. Graph the solutions of the equation to demonstrate that it is not the equation of a line.

Hint, Solution

Exercise 1.3
Consider the function of a real variable,

f(x) = 1/(x² + 2).

What is the domain and range of the function?
Hint, Solution

Exercise 1.4
The temperature measured in degrees Celsius³ is linearly related to the temperature measured in degrees Fahrenheit⁴. Water freezes at 0°C = 32°F and boils at 100°C = 212°F. Write the temperature in degrees Celsius as a function of degrees Fahrenheit.
Hint, Solution

Exercise 1.5
Consider the function graphed in Figure 1.9. Sketch graphs of f(−x), f(x + 3), f(3 − x) + 2, and f⁻¹(x). You may use the blank grids in Figure 1.10.
Hint, Solution

Exercise 1.6
A culture of bacteria grows at the rate of 10% per minute. At 6:00 pm there are 1 billion bacteria. How many bacteria are there at 7:00 pm? How many were there at 3:00 pm?
Hint, Solution

Exercise 1.7
The graph in Figure 1.11 shows an even function f(x) = p(x)/q(x) where p(x) and q(x) are quadratic polynomials. Give possible formulas for p(x) and q(x).
Hint, Solution

Exercise 1.8
Find a polynomial of degree 100 which is zero only at x = −2, 1, π and is non-negative.
Hint, Solution

³ Originally, it was called degrees Centigrade: centi because there are 100 degrees between the two calibration points. It is now called degrees Celsius in honor of the inventor.

⁴ The Fahrenheit scale, named for Daniel Fahrenheit, was originally calibrated with the freezing point of salt-saturated water to be 0°. Later, the calibration points became the freezing point of water, 32°, and body temperature, 96°. With this method, there are 64 divisions between the calibration points. Finally, the upper calibration point was changed to the boiling point of water at 212°. This gave 180 divisions (the number of degrees in a half circle) between the two calibration points.
Figure 1.9: Graph of the function.

Figure 1.10: Blank grids.

1.6 Hints

Hint 1.1
area = constant × diameter².

Hint 1.2
A pair (x, y) is a solution of the equation if it makes the equation an identity.

Hint 1.3
The domain is the subset of R on which the function is defined.
Figure 1.11: Plots of f(x) = p(x)/q(x).

Hint 1.4
Find the slope and x-intercept of the line.

Hint 1.5
The inverse of the function is the reflection of the function across the line y = x.

Hint 1.6
The formula for geometric growth/decay is x(t) = x₀rᵗ, where r is the rate.

Hint 1.7
Since p(x) and q(x) appear only as a ratio, they are determined up to a multiplicative constant. We may take the leading coefficient of q(x) to be unity:
f(x) = p(x)/q(x) = (ax² + bx + c)/(x² + βx + χ).
Use the properties of the function to solve for the unknown parameters.

Hint 1.8
Write the polynomial in factored form.
1.7 Solutions

Solution 1.1
area = π × radius²
area = (π/4) × diameter²
The constant of proportionality is π/4.

Solution 1.2
1. If we multiply the equation by y² − 4 and divide by x + 1, we obtain the equation of a line: y + 2 = x − 1.

2. We factor the quadratics on the right side of the equation:
(x + 1)/(y − 2) = ((x + 1)(x − 1))/((y − 2)(y + 2)).
We note that one or both sides of the equation are undefined at y = ±2 because of division by zero. There are no solutions for these two values of y, and we assume from this point that y ≠ ±2. We multiply by (y − 2)(y + 2):
(x + 1)(y + 2) = (x + 1)(x − 1).
For x = −1, the equation becomes the identity 0 = 0. Now we consider x ≠ −1. We divide by x + 1 to obtain the equation of a line:
y + 2 = x − 1
y = x − 3.
Now we collect the solutions we have found:
{(−1, y) : y ≠ ±2} ∪ {(x, x − 3) : x ≠ 1, 5}.
The solutions are depicted in Figure 1.12.

Figure 1.12: The solutions of (x + 1)/(y − 2) = (x² − 1)/(y² − 4).
Solution 1.3
The denominator is nonzero for all x ∈ R. Since we don't have any division by zero problems, the domain of the function is R. For x ∈ R,
0 < 1/(x² + 2) ≤ 1/2.
Consider
y = 1/(x² + 2).     (1.1)
For any y ∈ (0 . . . 1/2], there is at least one value of x that satisfies Equation 1.1:
x² + 2 = 1/y
x = ±√(1/y − 2).
Thus the range of the function is (0 . . . 1/2].

Solution 1.4
Let c denote degrees Celsius and f denote degrees Fahrenheit. The line passes through the points (f, c) = (32, 0) and (f, c) = (212, 100). The f-intercept is f = 32. We calculate the slope of the line:
slope = (100 − 0)/(212 − 32) = 100/180 = 5/9.
The relationship between Fahrenheit and Celsius is
c = (5/9)(f − 32).

Solution 1.5
We plot the various transformations of f(x).

Figure 1.13: Graphs of f(−x), f(x + 3), f(3 − x) + 2, and f⁻¹(x).

Solution 1.6
The formula for geometric growth/decay is x(t) = x₀rᵗ, where r is the rate. Let t = 0 coincide with 6:00 pm. We determine x₀:
x(0) = 10⁹ = x₀(11/10)⁰ = x₀.
At 7:00 pm the number of bacteria is
10⁹ (11/10)⁶⁰ = 11⁶⁰/10⁵¹ ≈ 3.04 × 10¹¹.
At 3:00 pm the number of bacteria was
10⁹ (11/10)⁻¹⁸⁰ = 10¹⁸⁹/11¹⁸⁰ ≈ 35.4.
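A quick numerical confirmation of Solution 1.6 (an illustrative check, not part of the original text):

    # Geometric growth x(t) = x0 * r**t with r = 11/10 per minute and x0 = 10**9 at 6:00 pm.
    x0, r = 10**9, 11 / 10
    print(f"7:00 pm: {x0 * r**60:.3e}")    # ≈ 3.044e+11 bacteria
    print(f"3:00 pm: {x0 * r**-180:.1f}")  # ≈ 35.4 bacteria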
Solution 1.7
We write p(x) and q(x) as general quadratic polynomials:
f(x) = p(x)/q(x) = (ax² + bx + c)/(αx² + βx + χ).
We will use the properties of the function to solve for the unknown parameters. Since p(x) and q(x) appear only as a ratio, they are determined up to a multiplicative constant. We may take the leading coefficient of q(x) to be unity:
f(x) = p(x)/q(x) = (ax² + bx + c)/(x² + βx + χ).
f(x) has a second order zero at x = 0. This means that p(x) has a second order zero there, so b = c = 0, and that χ ≠ 0:
f(x) = ax²/(x² + βx + χ).
We note that f(x) → 2 as x → ∞. This determines the parameter a:
lim_{x→∞} f(x) = lim_{x→∞} ax²/(x² + βx + χ) = lim_{x→∞} 2ax/(2x + β) = lim_{x→∞} 2a/2 = a,
so a = 2:
f(x) = 2x²/(x² + βx + χ).
Now we use the fact that f(x) is even to conclude that q(x) is even, and thus β = 0:
f(x) = 2x²/(x² + χ).
Finally, we use f(1) = 1 to determine χ: 2/(1 + χ) = 1, so χ = 1 and
f(x) = 2x²/(x² + 1).

Solution 1.8
Consider the polynomial
p(x) = (x + 2)⁴⁰(x − 1)³⁰(x − π)³⁰.
It is of degree 100. Since the factors only vanish at x = −2, 1, π, p(x) vanishes only there. Since each factor is raised to an even power, the polynomial is non-negative.
  • 37. Chapter 2 Vectors 2.1 Vectors 2.1.1 Scalars and Vectors A vector is a quantity having both a magnitude and a direction. Examples of vector quantities are velocity, force and position. One can represent a vector in n-dimensional space with an arrow whose initial point is at the origin, (Figure 2.1). The magnitude is the length of the vector. Typographically, variables representing vectors are often written in capital letters, bold face or with a vector over-line, A, a, a. The magnitude of a vector is denoted |a|. x z y Figure 2.1: Graphical representation of a vector in three dimensions. A scalar has only a magnitude. Examples of scalar quantities are mass, time and speed. Vector Algebra. Two vectors are equal if they have the same magnitude and direction. The negative of a vector, denoted −a, is a vector of the same magnitude as a but in the opposite direction. We add two vectors a and b by placing the tail of b at the head of a and defining a + b to be the vector with tail at the origin and head at the head of b. (See Figure 2.2.) a+b a b -a a 2a Figure 2.2: Vector arithmetic. 17
  • 38. The difference, a − b, is defined as the sum of a and the negative of b, a + (−b). The result of multiplying a by a scalar α is a vector of magnitude |α| |a| with the same/opposite direction if α is positive/negative. (See Figure 2.2.) Here are the properties of adding vectors and multiplying them by a scalar. They are evident from geometric considerations. a + b = b + a αa = aα commutative laws (a + b) + c = a + (b + c) α(βa) = (αβ)a associative laws α(a + b) = αa + αb (α + β)a = αa + βa distributive laws Zero and Unit Vectors. The additive identity element for vectors is the zero vector or null vector. This is a vector of magnitude zero which is denoted as 0. A unit vector is a vector of magnitude one. If a is nonzero then a/|a| is a unit vector in the direction of a. Unit vectors are often denoted with a caret over-line, ˆn. Rectangular Unit Vectors. In n dimensional Cartesian space, Rn , the unit vectors in the di- rections of the coordinates axes are e1, . . . en. These are called the rectangular unit vectors. To cut down on subscripts, the unit vectors in three dimensional space are often denoted with i, j and k. (Figure 2.3). x z y j k i Figure 2.3: Rectangular unit vectors. Components of a Vector. Consider a vector a with tail at the origin and head having the Carte- sian coordinates (a1, . . . , an). We can represent this vector as the sum of n rectangular component vectors, a = a1e1 + · · · + anen. (See Figure 2.4.) Another notation for the vector a is a1, . . . , an . By the Pythagorean theorem, the magnitude of the vector a is |a| = a2 1 + · · · + a2 n. x z y a a a 1 3 i k ja2 Figure 2.4: Components of a vector. 18
2.1.2 The Kronecker Delta and Einstein Summation Convention

The Kronecker Delta tensor is defined
δᵢⱼ = 1 if i = j, 0 if i ≠ j.
This notation will be useful in our work with vectors.

Consider writing a vector in terms of its rectangular components. Instead of using ellipses, a = a₁e₁ + · · · + aₙeₙ, we could write the expression as a sum: a = Σᵢ₌₁ⁿ aᵢeᵢ. We can shorten this notation by leaving out the sum: a = aᵢeᵢ, where it is understood that whenever an index is repeated in a term we sum over that index from 1 to n. This is the Einstein summation convention. A repeated index is called a summation index or a dummy index. Other indices can take any value from 1 to n and are called free indices.

Example 2.1.1 Consider the matrix equation A · x = b. We can write out the matrix and vectors explicitly:

    [ a₁₁ · · · a₁ₙ ] [ x₁ ]   [ b₁ ]
    [  ⋮   ⋱   ⋮  ] [ ⋮  ] = [ ⋮  ]
    [ aₙ₁ · · · aₙₙ ] [ xₙ ]   [ bₙ ]

This takes much less space when we use the summation convention:
aᵢⱼxⱼ = bᵢ.
Here j is a summation index and i is a free index.

2.1.3 The Dot and Cross Product

Dot Product. The dot product or scalar product of two vectors is defined
a · b ≡ |a||b| cos θ,
where θ is the angle from a to b. From this definition one can derive the following properties:
• a · b = b · a, commutative.
• α(a · b) = (αa) · b = a · (αb), associativity of scalar multiplication.
• a · (b + c) = a · b + a · c, distributive. (See Exercise 2.1.)
• eᵢ · eⱼ = δᵢⱼ. In three dimensions, this is
  i · i = j · j = k · k = 1,  i · j = j · k = k · i = 0.
• a · b = aᵢbᵢ ≡ a₁b₁ + · · · + aₙbₙ, the dot product in terms of rectangular components.
• If a · b = 0 then either a and b are orthogonal (perpendicular), or one of a and b is zero.

The Angle Between Two Vectors. We can use the dot product to find the angle between two vectors, a and b. From the definition of the dot product, a · b = |a||b| cos θ. If the vectors are nonzero, then
θ = arccos((a · b)/(|a||b|)).
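A concrete numerical illustration of this formula (a sketch using NumPy; not part of the original text). The same numbers appear in Example 2.1.2, which follows:

    import numpy as np

    def angle(a, b):
        # theta = arccos(a . b / (|a| |b|)) for nonzero vectors a and b
        a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
        return np.arccos(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    print(angle([1, 0], [1, 1]))  # pi/4 ≈ 0.7853981..., the angle between i and i + j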
  • 40. Example 2.1.2 What is the angle between i and i + j? θ = arccos i · (i + j) |i||i + j| = arccos 1 √ 2 = π 4 . Parametric Equation of a Line. Consider a line in Rn that passes through the point a and is parallel to the vector t, (tangent). A parametric equation of the line is x = a + ut, u ∈ R. Implicit Equation of a Line In 2D. Consider a line in R2 that passes through the point a and is normal, (orthogonal, perpendicular), to the vector n. All the lines that are normal to n have the property that x · n is a constant, where x is any point on the line. (See Figure 2.5.) x · n = 0 is the line that is normal to n and passes through the origin. The line that is normal to n and passes through the point a is x · n = a · n. =0 =1 = a n n a =-1 x n x n x n x n Figure 2.5: Equation for a line. The normal to a line determines an orientation of the line. The normal points in the direction that is above the line. A point b is (above/on/below) the line if (b−a)·n is (positive/zero/negative). The signed distance of a point b from the line x · n = a · n is (b − a) · n |n| . Implicit Equation of a Hyperplane. A hyperplane in Rn is an n−1 dimensional “sheet” which passes through a given point and is normal to a given direction. In R3 we call this a plane. Consider a hyperplane that passes through the point a and is normal to the vector n. All the hyperplanes that are normal to n have the property that x · n is a constant, where x is any point in the hyperplane. x · n = 0 is the hyperplane that is normal to n and passes through the origin. The hyperplane that is normal to n and passes through the point a is x · n = a · n. The normal determines an orientation of the hyperplane. The normal points in the direction that is above the hyperplane. A point b is (above/on/below) the hyperplane if (b − a) · n is 20
  • 41. (positive/zero/negative). The signed distance of a point b from the hyperplane x · n = a · n is (b − a) · n |n| . Right and Left-Handed Coordinate Systems. Consider a rectangular coordinate system in two dimensions. Angles are measured from the positive x axis in the direction of the positive y axis. There are two ways of labeling the axes. (See Figure 2.6.) In one the angle increases in the counterclockwise direction and in the other the angle increases in the clockwise direction. The former is the familiar Cartesian coordinate system. x y xy θ θ Figure 2.6: There are two ways of labeling the axes in two dimensions. There are also two ways of labeling the axes in a three-dimensional rectangular coordinate system. These are called right-handed and left-handed coordinate systems. See Figure 2.7. Any other labelling of the axes could be rotated into one of these configurations. The right-handed system is the one that is used by default. If you put your right thumb in the direction of the z axis in a right-handed coordinate system, then your fingers curl in the direction from the x axis to the y axis. x z yj i k z k j i y x Figure 2.7: Right and left handed coordinate systems. Cross Product. The cross product or vector product is defined, a × b = |a||b| sin θ n, where θ is the angle from a to b and n is a unit vector that is orthogonal to a and b and in the direction such that the ordered triple of vectors a, b and n form a right-handed system. You can visualize the direction of a × b by applying the right hand rule. Curl the fingers of your right hand in the direction from a to b. Your thumb points in the direction of a × b. Warning: Unless you are a lefty, get in the habit of putting down your pencil before applying the right hand rule. The dot and cross products behave a little differently. First note that unlike the dot product, the cross product is not commutative. The magnitudes of a × b and b × a are the same, but their directions are opposite. (See Figure 2.8.) Let a × b = |a||b| sin θ n and b × a = |b||a| sin φ m. The angle from a to b is the same as the angle from b to a. Since {a, b, n} and {b, a, m} are right-handed systems, m points in the opposite direction as n. Since a × b = −b × a we say that the cross product is anti-commutative. 21
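The anti-commutativity is easy to check numerically; an illustrative sketch (not from the text):

    import numpy as np

    a, b = np.array([1.0, 2.0, 3.0]), np.array([4.0, 5.0, 6.0])
    print(np.cross(a, b))   # [-3.  6. -3.]
    print(np.cross(b, a))   # [ 3. -6.  3.], the negative of a x b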
Figure 2.8: The cross product is anti-commutative.

Next we note that since
|a × b| = |a||b| sin θ,
the magnitude of a × b is the area of the parallelogram defined by the two vectors. (See Figure 2.9.) The area of the triangle defined by two vectors is then (1/2)|a × b|.

Figure 2.9: The parallelogram and the triangle defined by two vectors.

From the definition of the cross product, one can derive the following properties:
• a × b = −b × a, anti-commutative.
• α(a × b) = (αa) × b = a × (αb), associativity of scalar multiplication.
• a × (b + c) = a × b + a × c, distributive.
• (a × b) × c ≠ a × (b × c). The cross product is not associative.
• i × i = j × j = k × k = 0.
• i × j = k, j × k = i, k × i = j.
• a × b = (a₂b₃ − a₃b₂)i + (a₃b₁ − a₁b₃)j + (a₁b₂ − a₂b₁)k, which equals the determinant

    | i  j  k  |
    | a₁ a₂ a₃ |
    | b₁ b₂ b₃ |,

  the cross product in terms of rectangular components.
• If a × b = 0 then either a and b are parallel, or one of a or b is zero.

Scalar Triple Product. Consider the volume of the parallelopiped defined by three vectors. (See Figure 2.10.) The area of the base is |b||c| sin θ, where θ is the angle between b and c. The height is |a| cos φ, where φ is the angle between b × c and a. Thus the volume of the parallelopiped is |a||b||c| sin θ cos φ.
  • 43. φ θ b c a b c Figure 2.10: The parallelopiped defined by three vectors. Note that |a · (b × c)| = |a · (|b||c| sin θ n)| = ||a||b||c| sin θ cos φ| . Thus |a · (b × c)| is the volume of the parallelopiped. a · (b × c) is the volume or the negative of the volume depending on whether {a, b, c} is a right or left-handed system. Note that parentheses are unnecessary in a · b × c. There is only one way to interpret the expression. If you did the dot product first then you would be left with the cross product of a scalar and a vector which is meaningless. a · b × c is called the scalar triple product. Plane Defined by Three Points. Three points which are not collinear define a plane. Consider a plane that passes through the three points a, b and c. One way of expressing that the point x lies in the plane is that the vectors x − a, b − a and c − a are coplanar. (See Figure 2.11.) If the vectors are coplanar, then the parallelopiped defined by these three vectors will have zero volume. We can express this in an equation using the scalar triple product, (x − a) · (b − a) × (c − a) = 0. b c x a Figure 2.11: Three points define a plane. 2.2 Sets of Vectors in n Dimensions Orthogonality. Consider two n-dimensional vectors x = (x1, x2, . . . , xn), y = (y1, y2, . . . , yn). The inner product of these vectors can be defined x|y ≡ x · y = n i=1 xiyi. The vectors are orthogonal if x · y = 0. The norm of a vector is the length of the vector generalized to n dimensions. x = √ x · x 23
Consider a set of vectors {x₁, x₂, . . . , xₘ}. If each pair of vectors in the set is orthogonal, then the set is orthogonal:
xᵢ · xⱼ = 0 if i ≠ j.
If in addition each vector in the set has norm 1, then the set is orthonormal:
xᵢ · xⱼ = δᵢⱼ = 1 if i = j, 0 if i ≠ j.
Here δᵢⱼ is known as the Kronecker delta function.

Completeness. A set of n, n-dimensional vectors {x₁, x₂, . . . , xₙ} is complete if any n-dimensional vector can be written as a linear combination of the vectors in the set. That is, any vector y can be written
y = Σᵢ₌₁ⁿ cᵢxᵢ.
Taking the inner product of each side of this equation with xₘ (for an orthogonal set),
y · xₘ = (Σᵢ₌₁ⁿ cᵢxᵢ) · xₘ = Σᵢ₌₁ⁿ cᵢ(xᵢ · xₘ) = cₘ(xₘ · xₘ),
cₘ = (y · xₘ)/‖xₘ‖².
Thus y has the expansion
y = Σᵢ₌₁ⁿ ((y · xᵢ)/‖xᵢ‖²) xᵢ.
If in addition the set is orthonormal, then
y = Σᵢ₌₁ⁿ (y · xᵢ)xᵢ.
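A short numerical illustration of the orthonormal expansion (a sketch; the basis below is an arbitrary choice, not from the text):

    import numpy as np

    # An orthonormal basis of R^3: the rows are the basis vectors x_i.
    basis = np.array([[1, 1, 0], [1, -1, 0], [0, 0, 1]], dtype=float)
    basis[:2] /= np.sqrt(2)
    y = np.array([3.0, -1.0, 2.0])
    c = basis @ y                      # coefficients c_i = y . x_i
    print(np.allclose(c @ basis, y))   # True: y = sum_i (y . x_i) x_i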
  • 45. 2.3 Exercises The Dot and Cross Product Exercise 2.1 Prove the distributive law for the dot product, a · (b + c) = a · b + a · c. Hint, Solution Exercise 2.2 Prove that a · b = aibi ≡ a1b1 + · · · + anbn. Hint, Solution Exercise 2.3 What is the angle between the vectors i + j and i + 3j? Hint, Solution Exercise 2.4 Prove the distributive law for the cross product, a × (b + c) = a × b + a × b. Hint, Solution Exercise 2.5 Show that a × b = i j k a1 a2 a3 b1 b2 b3 Hint, Solution Exercise 2.6 What is the area of the quadrilateral with vertices at (1, 1), (4, 2), (3, 7) and (2, 3)? Hint, Solution Exercise 2.7 What is the volume of the tetrahedron with vertices at (1, 1, 0), (3, 2, 1), (2, 4, 1) and (1, 2, 5)? Hint, Solution Exercise 2.8 What is the equation of the plane that passes through the points (1, 2, 3), (2, 3, 1) and (3, 1, 2)? What is the distance from the point (2, 3, 5) to the plane? Hint, Solution 25
2.4 Hints

The Dot and Cross Product

Hint 2.1
First prove the distributive law when the first vector is of unit length,
n · (b + c) = n · b + n · c.
Then all the quantities in the equation are projections onto the unit vector n and you can use geometry.

Hint 2.2
First prove that the dot product of a rectangular unit vector with itself is one and the dot product of two distinct rectangular unit vectors is zero. Then write a and b in rectangular components and use the distributive law.

Hint 2.3
Use a · b = |a||b| cos θ.

Hint 2.4
First consider the case that both b and c are orthogonal to a. Prove the distributive law in this case from geometric considerations. Next consider two arbitrary vectors a and b. We can write b = b⊥ + b∥, where b⊥ is orthogonal to a and b∥ is parallel to a. Show that a × b = a × b⊥. Finally prove the distributive law for arbitrary b and c.

Hint 2.5
Write the vectors in their rectangular components and use
i × j = k, j × k = i, k × i = j,
and
i × i = j × j = k × k = 0.

Hint 2.6
The quadrilateral is composed of two triangles. The area of a triangle defined by the two vectors a and b is (1/2)|a × b|.

Hint 2.7
Justify that the volume of a tetrahedron determined by three vectors is one sixth the volume of the parallelopiped determined by those three vectors. The volume of a parallelopiped determined by three vectors is the magnitude of the scalar triple product of the vectors, |a · b × c|.

Hint 2.8
The equation of a plane that is orthogonal to a and passes through the point b is a · x = a · b. The distance of a point c from the plane is
((c − b) · a)/|a|.
  • 47. 2.5 Solutions The Dot and Cross Product Solution 2.1 First we prove the distributive law when the first vector is of unit length, i.e., n · (b + c) = n · b + n · c. (2.1) From Figure 2.12 we see that the projection of the vector b + c onto n is equal to the sum of the projections b · n and c · n. b c n b n c b+c n n (b+c) Figure 2.12: The distributive law for the dot product. Now we extend the result to the case when the first vector has arbitrary length. We define a = |a|n and multiply Equation 2.1 by the scalar, |a|. |a|n · (b + c) = |a|n · b + |a|n · c a · (b + c) = a · b + a · c. Solution 2.2 First note that ei · ei = |ei||ei| cos(0) = 1. Then note that that dot product of any two distinct rectangular unit vectors is zero because they are orthogonal. Now we write a and b in terms of their rectangular components and use the distributive law. a · b = aiei · bjej = aibjei · ej = aibjδij = aibi Solution 2.3 Since a · b = |a||b| cos θ, we have θ = arccos a · b |a||b| 27
  • 48. when a and b are nonzero. θ = arccos (i + j) · (i + 3j) |i + j||i + 3j| = arccos 4 √ 2 √ 10 = arccos 2 √ 5 5 ≈ 0.463648 Solution 2.4 First consider the case that both b and c are orthogonal to a. b + c is the diagonal of the par- allelogram defined by b and c, (see Figure 2.13). Since a is orthogonal to each of these vectors, taking the cross product of a with these vectors has the effect of rotating the vectors through π/2 radians about a and multiplying their length by |a|. Note that a × (b + c) is the diagonal of the parallelogram defined by a × b and a × c. Thus we see that the distributive law holds when a is orthogonal to both b and c, a × (b + c) = a × b + a × c. b cb+c a c a a b a (b+c) Figure 2.13: The distributive law for the cross product. Now consider two arbitrary vectors a and b. We can write b = b⊥ + b where b⊥ is orthogonal to a and b is parallel to a, (see Figure 2.14). a b b θ b Figure 2.14: The vector b written as a sum of components orthogonal and parallel to a. By the definition of the cross product, a × b = |a||b| sin θ n. Note that |b⊥| = |b| sin θ, and that a × b⊥ is a vector in the same direction as a × b. Thus we see that a × b = |a||b| sin θ n = |a|(sin θ|b|)n = |a||b⊥|n = |a||b⊥| sin(π/2)n 28
a × b = a × b⊥.

Now we are prepared to prove the distributive law for arbitrary b and c:
a × (b + c) = a × (b⊥ + b∥ + c⊥ + c∥)
= a × ((b + c)⊥ + (b + c)∥)
= a × (b + c)⊥
= a × b⊥ + a × c⊥
= a × b + a × c.

Solution 2.5
We know that
i × j = k, j × k = i, k × i = j,
and that
i × i = j × j = k × k = 0.
Now we write a and b in terms of their rectangular components and use the distributive law to expand the cross product:
a × b = (a₁i + a₂j + a₃k) × (b₁i + b₂j + b₃k)
= a₁i × (b₁i + b₂j + b₃k) + a₂j × (b₁i + b₂j + b₃k) + a₃k × (b₁i + b₂j + b₃k)
= a₁b₂k − a₁b₃j − a₂b₁k + a₂b₃i + a₃b₁j − a₃b₂i
= (a₂b₃ − a₃b₂)i − (a₁b₃ − a₃b₁)j + (a₁b₂ − a₂b₁)k.
Next we evaluate the determinant, expanding along the first row:

    | i  j  k  |
    | a₁ a₂ a₃ |  =  i(a₂b₃ − a₃b₂) − j(a₁b₃ − a₃b₁) + k(a₁b₂ − a₂b₁).
    | b₁ b₂ b₃ |

Thus we see that a × b equals the determinant.

Solution 2.6
The area of the quadrilateral is the sum of the areas of two triangles. The first triangle is defined by the vector from (1, 1) to (4, 2) and the vector from (1, 1) to (2, 3). The second triangle is defined by the vector from (3, 7) to (4, 2) and the vector from (3, 7) to (2, 3). (See Figure 2.15.) The area of a triangle defined by the two vectors a and b is (1/2)|a × b|. The area of the quadrilateral is then
(1/2)|(3i + j) × (i + 2j)| + (1/2)|(i − 5j) × (−i − 4j)| = (1/2)(5) + (1/2)(9) = 7.

Solution 2.7
The tetrahedron is determined by the three vectors with tail at (1, 1, 0) and heads at (3, 2, 1), (2, 4, 1) and (1, 2, 5). These are ⟨2, 1, 1⟩, ⟨1, 3, 1⟩ and ⟨0, 1, 5⟩. The volume of the tetrahedron is one sixth the volume of the parallelopiped determined by these vectors. (This is because the volume of a pyramid is (1/3)(base area)(height). The base of the tetrahedron is half the base of the parallelopiped and the heights are the same: (1/2)(1/3) = 1/6.) Thus the volume of a tetrahedron determined by three vectors is (1/6)|a · b × c|. The volume of the tetrahedron is
(1/6)|⟨2, 1, 1⟩ · ⟨1, 3, 1⟩ × ⟨0, 1, 5⟩| = (1/6)|⟨2, 1, 1⟩ · ⟨14, −5, 1⟩| = (1/6)(24) = 4.
Figure 2.15: The quadrilateral with vertices (1, 1), (4, 2), (3, 7) and (2, 3).

Solution 2.8
The two vectors with tails at (1, 2, 3) and heads at (2, 3, 1) and (3, 1, 2) are parallel to the plane. Taking the cross product of these two vectors gives us a vector that is orthogonal to the plane:
⟨1, 1, −2⟩ × ⟨2, −1, −1⟩ = ⟨−3, −3, −3⟩.
We see that the plane is orthogonal to the vector ⟨1, 1, 1⟩ and passes through the point (1, 2, 3). The equation of the plane is
⟨1, 1, 1⟩ · ⟨x, y, z⟩ = ⟨1, 1, 1⟩ · ⟨1, 2, 3⟩,
x + y + z = 6.
Consider the vector with tail at (1, 2, 3) and head at (2, 3, 5). The magnitude of the dot product of this vector with the unit normal vector gives the distance from the plane:
|⟨1, 1, 2⟩ · ⟨1, 1, 1⟩| / |⟨1, 1, 1⟩| = 4/√3 = 4√3/3.
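The answers above can be cross-checked numerically; an illustrative sketch (not part of the text):

    import numpy as np

    def cross2(a, b):  # z-component of the cross product of two plane vectors
        return a[0] * b[1] - a[1] * b[0]

    # Solution 2.6: quadrilateral area as the sum of two triangle areas (1/2)|a x b|.
    print(0.5 * abs(cross2((3, 1), (1, 2))) + 0.5 * abs(cross2((1, -5), (-1, -4))))  # 7.0

    # Solution 2.7: tetrahedron volume (1/6)|a . (b x c)|.
    print(abs(np.dot([2, 1, 1], np.cross([1, 3, 1], [0, 1, 5]))) / 6)  # 4.0

    # Solution 2.8: distance from (2, 3, 5) to the plane x + y + z = 6.
    n = np.array([1.0, 1.0, 1.0])
    print(abs(np.dot([1, 1, 2], n)) / np.linalg.norm(n))  # 4/sqrt(3) ≈ 2.3094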
Chapter 3

Differential Calculus

3.1 Limits of Functions

Definition of a Limit. If the value of the function y(x) gets arbitrarily close to ψ as x approaches the point ξ, then we say that the limit of the function as x approaches ξ is equal to ψ. This is written:
lim_{x→ξ} y(x) = ψ.
Now we make the notion of "arbitrarily close" precise. For any ε > 0 there exists a δ > 0 such that |y(x) − ψ| < ε for all 0 < |x − ξ| < δ. That is, there is an interval surrounding the point x = ξ for which the function is within ε of ψ. See Figure 3.1. Note that the interval surrounding x = ξ is a deleted neighborhood, that is, it does not contain the point x = ξ. Thus the value of the function at x = ξ need not be equal to ψ for the limit to exist. Indeed the function need not even be defined at x = ξ.

Figure 3.1: The δ neighborhood of x = ξ such that |y(x) − ψ| < ε.

To prove that a function has a limit at a point ξ we first bound |y(x) − ψ| in terms of δ for values of x satisfying 0 < |x − ξ| < δ. Denote this upper bound by u(δ). Then for an arbitrary ε > 0, we determine a δ > 0 such that the upper bound u(δ), and hence |y(x) − ψ|, is less than ε.

Example 3.1.1 Show that
lim_{x→1} x² = 1.
Consider any ε > 0. We need to show that there exists a δ > 0 such that |x² − 1| < ε for all 0 < |x − 1| < δ. First we obtain a bound on |x² − 1|:
|x² − 1| = |(x − 1)(x + 1)| = |x − 1||x + 1| < δ|x + 1| = δ|(x − 1) + 2| < δ(δ + 2).
Now we choose a positive δ such that δ(δ + 2) = ε. We see that
δ = √(1 + ε) − 1
is positive and satisfies the criterion that |x² − 1| < ε for all 0 < |x − 1| < δ. Thus the limit exists.
Example 3.1.2 Recall that the value of the function y(ξ) need not be equal to lim_{x→ξ} y(x) for the limit to exist. We show an example of this. Consider the function
y(x) = 1 for x ∈ Z, 0 for x ∉ Z.
For what values of ξ does lim_{x→ξ} y(x) exist?

First consider ξ ∉ Z. Then there exists an open neighborhood a < ξ < b around ξ such that y(x) is identically zero for x ∈ (a, b). Then trivially, lim_{x→ξ} y(x) = 0.

Now consider ξ ∈ Z. Consider any ε > 0. If 0 < |x − ξ| < 1 then |y(x) − 0| = 0 < ε. Thus we see that lim_{x→ξ} y(x) = 0.

Thus, regardless of the value of ξ, lim_{x→ξ} y(x) = 0.

Left and Right Limits. With the notation lim_{x→ξ⁺} y(x) we denote the right limit of y(x). This is the limit as x approaches ξ from above. Mathematically: lim_{x→ξ⁺} y(x) = ψ if for any ε > 0 there exists a δ > 0 such that |y(x) − ψ| < ε for all 0 < x − ξ < δ. The left limit lim_{x→ξ⁻} y(x) is defined analogously.

Example 3.1.3 Consider the function
sin x / |x|,
defined for x ≠ 0. (See Figure 3.2.) The left and right limits exist as x approaches zero:
lim_{x→0⁺} sin x / |x| = 1,  lim_{x→0⁻} sin x / |x| = −1.
However the limit
lim_{x→0} sin x / |x|
does not exist.

Figure 3.2: Plot of sin(x)/|x|.
Properties of Limits. Let lim_{x→ξ} f(x) and lim_{x→ξ} g(x) exist.
• lim_{x→ξ} (af(x) + bg(x)) = a lim_{x→ξ} f(x) + b lim_{x→ξ} g(x).
• lim_{x→ξ} (f(x)g(x)) = (lim_{x→ξ} f(x))(lim_{x→ξ} g(x)).
• lim_{x→ξ} f(x)/g(x) = (lim_{x→ξ} f(x))/(lim_{x→ξ} g(x)) if lim_{x→ξ} g(x) ≠ 0.

Example 3.1.4 We prove that if lim_{x→ξ} f(x) = φ and lim_{x→ξ} g(x) = γ exist, then
lim_{x→ξ} (f(x)g(x)) = (lim_{x→ξ} f(x))(lim_{x→ξ} g(x)).
Since the limit exists for f(x), we know that for all ε > 0 there exists δ > 0 such that |f(x) − φ| < ε for 0 < |x − ξ| < δ. Likewise for g(x). We seek to show that for all ε > 0 there exists δ > 0 such that |f(x)g(x) − φγ| < ε for 0 < |x − ξ| < δ. We proceed by writing |f(x)g(x) − φγ| in terms of |f(x) − φ| and |g(x) − γ|, which we know how to bound:
|f(x)g(x) − φγ| = |f(x)(g(x) − γ) + (f(x) − φ)γ| ≤ |f(x)||g(x) − γ| + |f(x) − φ||γ|.
If we choose a δ such that |f(x)||g(x) − γ| < ε/2 and |f(x) − φ||γ| < ε/2 then we will have the desired result: |f(x)g(x) − φγ| < ε. Trying to ensure that |f(x)||g(x) − γ| < ε/2 is hard because of the |f(x)| factor. We will replace that factor with a constant. We want to write |f(x) − φ||γ| < ε/2 as |f(x) − φ| < ε/(2|γ|), but this is problematic for the case γ = 0. We fix these two problems and then proceed. We choose δ₁ such that |f(x) − φ| < 1 for 0 < |x − ξ| < δ₁, so that |f(x)| < |φ| + 1. This gives us the desired form:
|f(x)g(x) − φγ| ≤ (|φ| + 1)|g(x) − γ| + |f(x) − φ|(|γ| + 1), for 0 < |x − ξ| < δ₁.
Next we choose δ₂ such that |g(x) − γ| < ε/(2(|φ| + 1)) for 0 < |x − ξ| < δ₂, and choose δ₃ such that |f(x) − φ| < ε/(2(|γ| + 1)) for 0 < |x − ξ| < δ₃. Let δ be the minimum of δ₁, δ₂ and δ₃. Then
|f(x)g(x) − φγ| ≤ (|φ| + 1)|g(x) − γ| + |f(x) − φ|(|γ| + 1) < ε/2 + ε/2, for 0 < |x − ξ| < δ,
so |f(x)g(x) − φγ| < ε. We conclude that the limit of a product is the product of the limits:
lim_{x→ξ} (f(x)g(x)) = (lim_{x→ξ} f(x))(lim_{x→ξ} g(x)) = φγ.
Result 3.1.1 Definition of a Limit. The statement
lim_{x→ξ} y(x) = ψ
means that y(x) gets arbitrarily close to ψ as x approaches ξ: for any ε > 0 there exists a δ > 0 such that |y(x) − ψ| < ε for all x in the deleted neighborhood 0 < |x − ξ| < δ. The left and right limits,
lim_{x→ξ⁻} y(x) = ψ and lim_{x→ξ⁺} y(x) = ψ,
denote the limiting value as x approaches ξ respectively from below and above. The neighborhoods are respectively −δ < x − ξ < 0 and 0 < x − ξ < δ.

Properties of Limits. Let lim_{x→ξ} u(x) and lim_{x→ξ} v(x) exist.
• lim_{x→ξ} (au(x) + bv(x)) = a lim_{x→ξ} u(x) + b lim_{x→ξ} v(x).
• lim_{x→ξ} (u(x)v(x)) = (lim_{x→ξ} u(x))(lim_{x→ξ} v(x)).
• lim_{x→ξ} u(x)/v(x) = (lim_{x→ξ} u(x))/(lim_{x→ξ} v(x)) if lim_{x→ξ} v(x) ≠ 0.

3.2 Continuous Functions

Definition of Continuity. A function y(x) is said to be continuous at x = ξ if the value of the function is equal to its limit, that is, lim_{x→ξ} y(x) = y(ξ). Note that this one condition is actually three conditions: y(ξ) is defined, lim_{x→ξ} y(x) exists, and lim_{x→ξ} y(x) = y(ξ). A function is continuous if it is continuous at each point in its domain. A function is continuous on the closed interval [a, b] if the function is continuous for each point x ∈ (a, b) and lim_{x→a⁺} y(x) = y(a) and lim_{x→b⁻} y(x) = y(b).

Discontinuous Functions. If a function is not continuous at a point it is called discontinuous at that point. If lim_{x→ξ} y(x) exists but is not equal to y(ξ), then the function has a removable discontinuity. It is thus named because we could define a continuous function
z(x) = y(x) for x ≠ ξ, lim_{x→ξ} y(x) for x = ξ,
to remove the discontinuity. If both the left and right limits of a function at a point exist, but are not equal, then the function has a jump discontinuity at that point. If either the left or right limit of a function does not exist, then the function is said to have an infinite discontinuity at that point.

Example 3.2.1 sin x / x has a removable discontinuity at x = 0. The Heaviside function,
H(x) = 0 for x < 0, 1/2 for x = 0, 1 for x > 0,
has a jump discontinuity at x = 0. 1/x has an infinite discontinuity at x = 0. See Figure 3.3.
Figure 3.3: A removable discontinuity, a jump discontinuity and an infinite discontinuity.

Properties of Continuous Functions.

Arithmetic. If u(x) and v(x) are continuous at x = ξ then u(x) ± v(x) and u(x)v(x) are continuous at x = ξ. u(x)/v(x) is continuous at x = ξ if v(ξ) ≠ 0.

Function Composition. If u(x) is continuous at x = ξ and v(x) is continuous at x = µ = u(ξ) then v(u(x)) is continuous at x = ξ. The composition of continuous functions is a continuous function.

Boundedness. A function which is continuous on a closed interval is bounded on that closed interval.

Nonzero in a Neighborhood. If y(ξ) ≠ 0 then there exists a neighborhood (ξ − ε, ξ + ε), ε > 0, of the point ξ such that y(x) ≠ 0 for x ∈ (ξ − ε, ξ + ε).

Intermediate Value Theorem. Let u(x) be continuous on [a, b]. If u(a) ≤ µ ≤ u(b) then there exists ξ ∈ [a, b] such that u(ξ) = µ. This is known as the intermediate value theorem. A corollary of this is that if u(a) and u(b) are of opposite sign then u(x) has at least one zero on the interval (a, b).

Maxima and Minima. If u(x) is continuous on [a, b] then u(x) has a maximum and a minimum on [a, b]. That is, there is at least one point ξ ∈ [a, b] such that u(ξ) ≥ u(x) for all x ∈ [a, b] and there is at least one point ψ ∈ [a, b] such that u(ψ) ≤ u(x) for all x ∈ [a, b].

Piecewise Continuous Functions. A function is piecewise continuous on an interval if the function is bounded on the interval and the interval can be divided into a finite number of intervals on each of which the function is continuous. For example, the greatest integer function, ⌊x⌋, is piecewise continuous. (⌊x⌋ is defined to be the greatest integer less than or equal to x.) See Figure 3.4 for graphs of two piecewise continuous functions.

Figure 3.4: Piecewise continuous functions.

Uniform Continuity. Consider a function f(x) that is continuous on an interval. This means that for any point ξ in the interval and any positive ε there exists a δ > 0 such that |f(x) − f(ξ)| < ε for all 0 < |x − ξ| < δ. In general, this value of δ depends on both ξ and ε. If δ can be chosen so it is a function of ε alone and independent of ξ, then the function is said to be uniformly continuous on the interval. A sufficient condition for uniform continuity is that the function is continuous on a closed interval.
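The zero-finding corollary of the intermediate value theorem underlies the bisection method; a minimal sketch (illustrative, not from the text):

    import math

    def bisect(u, a, b, tol=1e-12):
        # Assumes u is continuous on [a, b] and u(a), u(b) have opposite sign.
        fa = u(a)
        while b - a > tol:
            m = (a + b) / 2
            if fa * u(m) <= 0:
                b = m              # a sign change lies in [a, m]
            else:
                a, fa = m, u(m)    # a sign change lies in [m, b]
        return (a + b) / 2

    print(bisect(math.cos, 0.0, 2.0))  # ≈ 1.5707963..., the zero of cos at pi/2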
  • 58. 3.3 The Derivative Consider a function y(x) on the interval (x . . . x + ∆x) for some ∆x > 0. We define the increment ∆y = y(x + ∆x) − y(x). The average rate of change, (average velocity), of the function on the interval is ∆y ∆x . The average rate of change is the slope of the secant line that passes through the points (x, y(x)) and (x + ∆x, y(x + ∆x)). See Figure 3.5. y x ∆y ∆x Figure 3.5: The increments ∆x and ∆y. If the slope of the secant line has a limit as ∆x approaches zero then we call this slope the derivative or instantaneous rate of change of the function at the point x. We denote the derivative by dy dx , which is a nice notation as the derivative is the limit of ∆y ∆x as ∆x → 0. dy dx ≡ lim ∆x→0 y(x + ∆x) − y(x) ∆x . ∆x may approach zero from below or above. It is common to denote the derivative dy dx by d dx y, y (x), y or Dy. A function is said to be differentiable at a point if the derivative exists there. Note that differ- entiability implies continuity, but not vice versa. Example 3.3.1 Consider the derivative of y(x) = x2 at the point x = 1. y (1) ≡ lim ∆x→0 y(1 + ∆x) − y(1) ∆x = lim ∆x→0 (1 + ∆x)2 − 1 ∆x = lim ∆x→0 (2 + ∆x) = 2 Figure 3.6 shows the secant lines approaching the tangent line as ∆x approaches zero from above and below. Example 3.3.2 We can compute the derivative of y(x) = x2 at an arbitrary point x. d dx x2 = lim ∆x→0 (x + ∆x)2 − x2 ∆x = lim ∆x→0 (2x + ∆x) = 2x 38
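The convergence of the difference quotient can be watched numerically; a sketch (not from the text) at the point x = 1 of Example 3.3.1:

    y = lambda x: x**2
    for dx in [0.1, 0.01, 0.001]:
        print(dx, (y(1 + dx) - y(1)) / dx)  # 2.1, 2.01, 2.001: approaches y'(1) = 2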
  • 59. 0.5 1 1.5 2 0.5 1 1.5 2 2.5 3 3.5 4 0.5 1 1.5 2 0.5 1 1.5 2 2.5 3 3.5 4 Figure 3.6: Secant lines and the tangent to x2 at x = 1. Properties. Let u(x) and v(x) be differentiable. Let a and b be constants. Some fundamental properties of derivatives are: d dx (au + bv) = a du dx + b dv dx Linearity d dx (uv) = du dx v + u dv dx Product Rule d dx u v = v du dx − udv dx v2 Quotient Rule d dx (ua ) = aua−1 du dx Power Rule d dx (u(v(x))) = du dv dv dx = u (v(x))v (x) Chain Rule These can be proved by using the definition of differentiation. Example 3.3.3 Prove the quotient rule for derivatives. d dx u v = lim ∆x→0 u(x+∆x) v(x+∆x) − u(x) v(x) ∆x = lim ∆x→0 u(x + ∆x)v(x) − u(x)v(x + ∆x) ∆xv(x)v(x + ∆x) = lim ∆x→0 u(x + ∆x)v(x) − u(x)v(x) − u(x)v(x + ∆x) + u(x)v(x) ∆xv(x)v(x) = lim ∆x→0 (u(x + ∆x) − u(x))v(x) − u(x)(v(x + ∆x) − v(x)) ∆xv2(x) = lim∆x→0 u(x+∆x)−u(x) ∆x v(x) − u(x) lim∆x→0 v(x+∆x)−v(x) ∆x v2(x) = v du dx − udv dx v2 39
Trigonometric Functions. Some derivatives of trigonometric and related functions are:

    d/dx sin x = cos x         d/dx arcsin x = 1/(1 − x²)^(1/2)
    d/dx cos x = −sin x        d/dx arccos x = −1/(1 − x²)^(1/2)
    d/dx tan x = 1/cos² x      d/dx arctan x = 1/(1 + x²)
    d/dx eˣ = eˣ               d/dx ln x = 1/x
    d/dx sinh x = cosh x       d/dx arcsinh x = 1/(x² + 1)^(1/2)
    d/dx cosh x = sinh x       d/dx arccosh x = 1/(x² − 1)^(1/2)
    d/dx tanh x = 1/cosh² x    d/dx arctanh x = 1/(1 − x²)

Example 3.3.4 We can evaluate the derivative of xˣ by using the identity aᵇ = e^(b ln a):
d/dx xˣ = d/dx e^(x ln x) = e^(x ln x) d/dx (x ln x) = xˣ(1 · ln x + x · (1/x)) = xˣ(1 + ln x).

Inverse Functions. If we have a function y(x), we can consider x as a function of y, x(y). For example, if y(x) = 8x³ then x(y) = ∛y/2; if y(x) = (x + 2)/(x + 1) then x(y) = (2 − y)/(y − 1). The derivative of an inverse function is
d/dy x(y) = 1/(dy/dx).

Example 3.3.5 The inverse function of y(x) = eˣ is x(y) = ln y. We can obtain the derivative of the logarithm from the derivative of the exponential. The derivative of the exponential is dy/dx = eˣ. Thus the derivative of the logarithm is
d/dy ln y = d/dy x(y) = 1/(dy/dx) = 1/eˣ = 1/y.

3.4 Implicit Differentiation

An explicitly defined function has the form y = f(x). An implicitly defined function has the form f(x, y) = 0. A few examples of implicit functions are x² + y² − 1 = 0 and x + y + sin(xy) = 0. Often it is not possible to write an implicit equation in explicit form. This is true of the latter example above. One can calculate the derivative of y(x) in terms of x and y even when y(x) is defined by an implicit equation.

Example 3.4.1 Consider the implicit equation
x² − xy − y² = 1.
This implicit equation can be solved for the dependent variable:
y(x) = (1/2)(−x ± √(5x² − 4)).
We can differentiate this expression to obtain
y′ = (1/2)(−1 ± 5x/√(5x² − 4)).
One can obtain the same result without first solving for y. If we differentiate the implicit equation, we obtain
2x − y − x(dy/dx) − 2y(dy/dx) = 0.
We can solve this equation for dy/dx:
dy/dx = (2x − y)/(x + 2y).
We can differentiate this expression to obtain the second derivative of y:
d²y/dx² = ((x + 2y)(2 − y′) − (2x − y)(1 + 2y′))/(x + 2y)² = 5(y − xy′)/(x + 2y)².
Substituting in the expression for y′,
d²y/dx² = −10(x² − xy − y²)/(x + 2y)³.
Using the original implicit equation,
d²y/dx² = −10/(x + 2y)³.
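Both results can be checked with a computer algebra system; a sketch using SymPy (assumed to be available; not part of the original text):

    import sympy as sp

    x = sp.Symbol('x', positive=True)
    print(sp.diff(x**x, x))  # x**x*(log(x) + 1), as in Example 3.3.4

    # Implicit differentiation of x**2 - x*y - y**2 = 1, as in Example 3.4.1.
    y = sp.Function('y')(x)
    eq = x**2 - x*y - y**2 - 1
    yp = sp.solve(sp.diff(eq, x), sp.Derivative(y, x))[0]
    print(sp.simplify(yp))   # (2*x - y(x))/(x + 2*y(x))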
3.5 Maxima and Minima

A differentiable function is increasing where f′(x) > 0, decreasing where f′(x) < 0 and stationary where f′(x) = 0.

A function f(x) has a relative maximum at a point x = ξ if there exists a neighborhood around ξ such that f(x) ≤ f(ξ) for x ∈ (ξ − δ, ξ + δ), δ > 0. A relative minimum is defined analogously. Note that this definition does not require that the function be differentiable, or even continuous. We refer to relative maxima and minima collectively as relative extrema.

Relative Extrema and Stationary Points. If f(x) is differentiable and f(ξ) is a relative extremum then x = ξ is a stationary point, f′(ξ) = 0. We can prove this using left and right limits. Assume that f(ξ) is a relative maximum. Then there is a neighborhood (ξ − δ, ξ + δ), δ > 0, for which f(x) ≤ f(ξ). Since f(x) is differentiable, the derivative at x = ξ,
f′(ξ) = lim_{∆x→0} (f(ξ + ∆x) − f(ξ))/∆x,
exists. This in turn means that the left and right limits exist and are equal. Since f(x) ≤ f(ξ) for ξ − δ < x < ξ, the difference quotient is non-negative there and the left limit satisfies
f′(ξ) = lim_{∆x→0⁻} (f(ξ + ∆x) − f(ξ))/∆x ≥ 0.
Since f(x) ≤ f(ξ) for ξ < x < ξ + δ, the difference quotient is non-positive there and the right limit satisfies
f′(ξ) = lim_{∆x→0⁺} (f(ξ + ∆x) − f(ξ))/∆x ≤ 0.
Thus we have 0 ≤ f′(ξ) ≤ 0, which implies that f′(ξ) = 0.

It is not true that all stationary points are relative extrema. That is, f′(ξ) = 0 does not imply that x = ξ is an extremum. Consider the function f(x) = x³. x = 0 is a stationary point since f′(x) = 3x², f′(0) = 0. However, x = 0 is neither a relative maximum nor a relative minimum.

It is also not true that all relative extrema are stationary points. Consider the function f(x) = |x|. The point x = 0 is a relative minimum, but the derivative at that point is undefined.

First Derivative Test. Let f(x) be differentiable and f′(ξ) = 0.
• If f′(x) changes sign from positive to negative as we pass through x = ξ then the point is a relative maximum.
• If f′(x) changes sign from negative to positive as we pass through x = ξ then the point is a relative minimum.
• If f′(x) is not identically zero in a neighborhood of x = ξ and it does not change sign as we pass through the point, then x = ξ is not a relative extremum.

Example 3.5.1 Consider y = x² and the point x = 0. The function is differentiable. The derivative, y′ = 2x, vanishes at x = 0. Since y′(x) is negative for x < 0 and positive for x > 0, the point x = 0 is a relative minimum. See Figure 3.7.

Example 3.5.2 Consider y = cos x and the point x = 0. The function is differentiable. The derivative, y′ = −sin x, is positive for −π < x < 0 and negative for 0 < x < π. Since the sign of y′ goes from positive to negative, x = 0 is a relative maximum. See Figure 3.7.

Example 3.5.3 Consider y = x³ and the point x = 0. The function is differentiable. The derivative, y′ = 3x², is positive for x < 0 and positive for 0 < x. Since y′ is not identically zero and the sign of y′ does not change, x = 0 is not a relative extremum. See Figure 3.7.

Figure 3.7: Graphs of x², cos x and x³.

Concavity. If the portion of a curve in some neighborhood of a point lies above the tangent line through that point, the curve is said to be concave upward. If it lies below the tangent it is concave downward. If a function is twice differentiable, then it is concave upward where f″(x) > 0 and concave downward where f″(x) < 0. Note that f″(x) > 0 is a sufficient, but not a necessary, condition for a curve to be concave upward at a point. A curve may be concave upward at a point where the second derivative vanishes. A point where the curve changes concavity is called a point
  • 63. of inflection. At such a point the second derivative vanishes, f (x) = 0. For twice continuously differentiable functions, f (x) = 0 is a necessary but not a sufficient condition for an inflection point. The second derivative may vanish at places which are not inflection points. See Figure 3.8. Figure 3.8: Concave Upward, Concave Downward and an Inflection Point. Second Derivative Test. Let f(x) be twice differentiable and let x = ξ be a stationary point, f (ξ) = 0. • If f (ξ) < 0 then the point is a relative maxima. • If f (ξ) > 0 then the point is a relative minima. • If f (ξ) = 0 then the test fails. Example 3.5.4 Consider the function f(x) = cos x and the point x = 0. The derivatives of the function are f (x) = − sin x, f (x) = − cos x. The point x = 0 is a stationary point, f (0) = − sin(0) = 0. Since the second derivative is negative there, f (0) = − cos(0) = −1, the point is a relative maxima. Example 3.5.5 Consider the function f(x) = x4 and the point x = 0. The derivatives of the function are f (x) = 4x3 , f (x) = 12x2 . The point x = 0 is a stationary point. Since the second derivative also vanishes at that point the second derivative test fails. One must use the first derivative test to determine that x = 0 is a relative minima. 3.6 Mean Value Theorems Rolle’s Theorem. If f(x) is continuous in [a, b], differentiable in (a, b) and f(a) = f(b) = 0 then there exists a point ξ ∈ (a, b) such that f (ξ) = 0. See Figure 3.9. Figure 3.9: Rolle’s Theorem. To prove this we consider two cases. First we have the trivial case that f(x) ≡ 0. If f(x) is not identically zero then continuity implies that it must have a nonzero relative maxima or minima in (a, b). Let x = ξ be one of these relative extrema. Since f(x) is differentiable, x = ξ must be a stationary point, f (ξ) = 0. 43
  • 64. Theorem of the Mean. If f(x) is continuous in [a, b] and differentiable in (a, b) then there exists a point x = ξ such that f (ξ) = f(b) − f(a) b − a . That is, there is a point where the instantaneous velocity is equal to the average velocity on the interval. Figure 3.10: Theorem of the Mean. We prove this theorem by applying Rolle’s theorem. Consider the new function g(x) = f(x) − f(a) − f(b) − f(a) b − a (x − a) Note that g(a) = g(b) = 0, so it satisfies the conditions of Rolle’s theorem. There is a point x = ξ such that g (ξ) = 0. We differentiate the expression for g(x) and substitute in x = ξ to obtain the result. g (x) = f (x) − f(b) − f(a) b − a g (ξ) = f (ξ) − f(b) − f(a) b − a = 0 f (ξ) = f(b) − f(a) b − a Generalized Theorem of the Mean. If f(x) and g(x) are continuous in [a, b] and differentiable in (a, b), then there exists a point x = ξ such that f (ξ) g (ξ) = f(b) − f(a) g(b) − g(a) . We have assumed that g(a) = g(b) so that the denominator does not vanish and that f (x) and g (x) are not simultaneously zero which would produce an indeterminate form. Note that this theorem reduces to the regular theorem of the mean when g(x) = x. The proof of the theorem is similar to that for the theorem of the mean. Taylor’s Theorem of the Mean. If f(x) is n + 1 times continuously differentiable in (a, b) then there exists a point x = ξ ∈ (a, b) such that f(b) = f(a) + (b − a)f (a) + (b − a)2 2! f (a) + · · · + (b − a)n n! f(n) (a) + (b − a)n+1 (n + 1)! f(n+1) (ξ). (3.1) For the case n = 0, the formula is f(b) = f(a) + (b − a)f (ξ), which is just a rearrangement of the terms in the theorem of the mean, f (ξ) = f(b) − f(a) b − a . 44
3.6.1 Application: Using Taylor's Theorem to Approximate Functions.

One can use Taylor's theorem to approximate functions with polynomials. Consider an infinitely differentiable function f(x) and a point x = a. Substituting x for b into Equation 3.1 we obtain
f(x) = f(a) + (x − a)f′(a) + ((x − a)²/2!)f″(a) + · · · + ((x − a)ⁿ/n!)f⁽ⁿ⁾(a) + ((x − a)ⁿ⁺¹/(n + 1)!)f⁽ⁿ⁺¹⁾(ξ).
If the last term in the sum is small then we can approximate our function with an nth order polynomial:
f(x) ≈ f(a) + (x − a)f′(a) + ((x − a)²/2!)f″(a) + · · · + ((x − a)ⁿ/n!)f⁽ⁿ⁾(a).
The last term is called the remainder or the error term,
Rₙ = ((x − a)ⁿ⁺¹/(n + 1)!)f⁽ⁿ⁺¹⁾(ξ).
Since the function is infinitely differentiable, f⁽ⁿ⁺¹⁾(ξ) exists and is bounded. Therefore we note that the error must vanish as x → a because of the (x − a)ⁿ⁺¹ factor. We therefore suspect that our approximation would be a good one if x is close to a. Also note that n! eventually grows faster than (x − a)ⁿ:
lim_{n→∞} (x − a)ⁿ/n! = 0.
So if the derivative term f⁽ⁿ⁺¹⁾(ξ) does not grow too quickly, the error for a given value of x will get smaller with increasing n and the polynomial will become a better approximation of the function. (It is also possible that the derivative factor grows very quickly and the approximation gets worse with increasing n.)

Example 3.6.1 Consider the function f(x) = eˣ. We want a polynomial approximation of this function near the point x = 0. Since the derivative of eˣ is eˣ, the value of all the derivatives at x = 0 is f⁽ⁿ⁾(0) = e⁰ = 1. Taylor's theorem thus states that
eˣ = 1 + x + x²/2! + x³/3! + · · · + xⁿ/n! + (xⁿ⁺¹/(n + 1)!) e^ξ,
for some ξ ∈ (0, x). The first few polynomial approximations of the exponential about the point x = 0 are
f₁(x) = 1
f₂(x) = 1 + x
f₃(x) = 1 + x + x²/2
f₄(x) = 1 + x + x²/2 + x³/6.
The four approximations are graphed in Figure 3.11. Note that for the range of x we are looking at, the approximations become more accurate as the number of terms increases.

Figure 3.11: Four Finite Taylor Series Approximations of eˣ
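A short numerical version of this construction (an illustrative sketch, not from the text):

    import math

    def taylor_exp(x, n):
        # n-term Taylor polynomial of e**x about 0: sum of x**k / k! for k < n
        return sum(x**k / math.factorial(k) for k in range(n))

    for n in [1, 2, 3, 4, 8]:
        print(n, taylor_exp(1.0, n))  # 1, 2, 2.5, 2.666..., 2.71825...: approaches e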
  • 66. -1 -0.5 0.5 1 0.5 1 1.5 2 2.5 -1 -0.5 0.5 1 0.5 1 1.5 2 2.5 -1 -0.5 0.5 1 0.5 1 1.5 2 2.5 -1 -0.5 0.5 1 0.5 1 1.5 2 2.5 Figure 3.11: Four Finite Taylor Series Approximations of ex It’s easy to pick out the pattern here, f(n) (x) = (−1)n/2 cos x for even n, (−1)(n+1)/2 sin x for odd n. Since cos(0) = 1 and sin(0) = 0 the n-term approximation of the cosine is, cos x = 1 − x2 2! + x4 4! − x6 6! + · · · + (−1)2(n−1) x2(n−1) (2(n − 1))! + x2n (2n)! cos ξ. Here are graphs of the one, two, three and four term approximations. -3 -2 -1 1 2 3 -1 -0.5 0.5 1 -3 -2 -1 1 2 3 -1 -0.5 0.5 1 -3 -2 -1 1 2 3 -1 -0.5 0.5 1 -3 -2 -1 1 2 3 -1 -0.5 0.5 1 Figure 3.12: Taylor Series Approximations of cos x Note that for the range of x we are looking at, the approximations become more accurate as the number of terms increases. Consider the ten term approximation of the cosine about x = 0, cos x = 1 − x2 2! + x4 4! − · · · − x18 18! + x20 20! cos ξ. Note that for any value of ξ, | cos ξ| ≤ 1. Therefore the absolute value of the error term satisfies, |R| = x20 20! cos ξ ≤ |x|20 20! . x20 /20! is plotted in Figure 3.13. Note that the error is very small for x < 6, fairly small but non-negligible for x ≈ 7 and large for x > 8. The ten term approximation of the cosine, plotted below, behaves just we would predict. The error is very small until it becomes non-negligible at x ≈ 7 and large at x ≈ 8. Example 3.6.3 Consider the function f(x) = ln x. We want a polynomial approximation of this 46
The first few derivatives of f are
f(x) = ln x
f′(x) = 1/x
f″(x) = −1/x²
f‴(x) = 2/x³
f⁽⁴⁾(x) = −3!/x⁴.
The derivatives evaluated at x = 1 are
f(1) = 0,  f⁽ⁿ⁾(1) = (−1)ⁿ⁻¹(n − 1)! for n ≥ 1.
By Taylor's theorem of the mean we have
ln x = (x − 1) − (x − 1)²/2 + (x − 1)³/3 − (x − 1)⁴/4 + · · · + (−1)ⁿ⁻¹(x − 1)ⁿ/n + ((−1)ⁿ(x − 1)ⁿ⁺¹/(n + 1))(1/ξⁿ⁺¹).
Below are plots of the 2, 4, 10 and 50 term approximations. Note that the approximation gets better on the interval (0, 2) and worse outside this interval as the number of terms increases. The Taylor series converges to ln x only on this interval.

3.6.2 Application: Finite Difference Schemes

Example 3.6.4 Suppose you sample a function at the discrete points n∆x, n ∈ Z. In Figure 3.16 we sample the function f(x) = sin x on the interval [−4, 4] with ∆x = 1/4 and plot the data points.
Figure 3.15: The 2, 4, 10 and 50 Term Approximations of ln x

Figure 3.16: Sampling of sin x

We wish to approximate the derivative of the function on the grid points using only the value of the function on those discrete points. From the definition of the derivative, one is led to the formula
f′(x) ≈ (f(x + ∆x) − f(x))/∆x.     (3.2)
Taylor's theorem states that
f(x + ∆x) = f(x) + ∆x f′(x) + (∆x²/2) f″(ξ).
Substituting this expression into our formula for approximating the derivative, we obtain
(f(x + ∆x) − f(x))/∆x = (f(x) + ∆x f′(x) + (∆x²/2) f″(ξ) − f(x))/∆x = f′(x) + (∆x/2) f″(ξ).
Thus we see that the error in our approximation of the first derivative is (∆x/2) f″(ξ). Since the error has a linear factor of ∆x, we call this a first order accurate method. Equation 3.2 is called the forward difference scheme for calculating the first derivative. Figure 3.17 shows a plot of the value of this scheme for the function f(x) = sin x and ∆x = 1/4. The first derivative of the function, f′(x) = cos x, is shown for comparison.

Another scheme for approximating the first derivative is the centered difference scheme,
f′(x) ≈ (f(x + ∆x) − f(x − ∆x))/(2∆x).
Expanding the numerator using Taylor's theorem,
(f(x + ∆x) − f(x − ∆x))/(2∆x)
= (f(x) + ∆x f′(x) + (∆x²/2) f″(x) + (∆x³/6) f‴(ξ) − f(x) + ∆x f′(x) − (∆x²/2) f″(x) + (∆x³/6) f‴(ψ))/(2∆x)
= f′(x) + (∆x²/12)(f‴(ξ) + f‴(ψ)).
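The two orders of accuracy can be observed numerically; a sketch (illustrative, not from the text) comparing the schemes for f(x) = sin x at x = 1:

    import math

    f, fp, x = math.sin, math.cos, 1.0
    for dx in [0.1, 0.05, 0.025]:
        fwd = (f(x + dx) - f(x)) / dx             # forward difference, error ~ dx
        ctr = (f(x + dx) - f(x - dx)) / (2 * dx)  # centered difference, error ~ dx**2
        print(dx, abs(fwd - fp(x)), abs(ctr - fp(x)))
    # Halving dx roughly halves the forward error and quarters the centered error.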
Figure 3.17: The Forward Difference Scheme Approximation of the Derivative

The error in the approximation is quadratic in ∆x. Therefore this is a second order accurate scheme. Figure 3.18 shows a plot of the derivative of the function and the value of this scheme for the function f(x) = sin x and ∆x = 1/4.

Figure 3.18: Centered Difference Scheme Approximation of the Derivative

Notice how the centered difference scheme gives a better approximation of the derivative than the forward difference scheme.

3.7 L'Hospital's Rule

Some singularities are easy to diagnose. Consider the function cos x / x at the point x = 0. The function evaluates to 1/0 and is thus discontinuous at that point. Since the numerator and denominator are continuous functions and the denominator vanishes while the numerator does not, the left and right limits as x → 0 do not exist. Thus the function has an infinite discontinuity at the point x = 0. More generally, a function which is composed of continuous functions and evaluates to a/0 at a point, where a ≠ 0, must have an infinite discontinuity there.

Other singularities require more analysis to diagnose. Consider the functions sin x / x, sin x / |x| and sin x / (1 − cos x) at the point x = 0. All three functions evaluate to 0/0 at that point, but have different kinds of singularities. The first has a removable discontinuity, the second has a finite discontinuity and the third has an infinite discontinuity. See Figure 3.19.

An expression that evaluates to 0/0, ∞/∞, 0 · ∞, ∞ − ∞, 1^∞, 0⁰ or ∞⁰ is called an indeterminate. A function f(x) which is indeterminate at the point x = ξ is singular at that point. The singularity may be a removable discontinuity, a finite discontinuity or an infinite discontinuity depending on the behavior of the function around that point. If lim_{x→ξ} f(x) exists, then the function has a removable discontinuity. If the limit does not exist, but the left and right limits do exist, then the function has
  • 70. Figure 3.19: The functions sin x x , sin x |x| and sin x 1−cos x . a finite discontinuity. If either the left or right limit does not exist then the function has an infinite discontinuity. L’Hospital’s Rule. Let f(x) and g(x) be differentiable and f(ξ) = g(ξ) = 0. Further, let g(x) be nonzero in a deleted neighborhood of x = ξ, (g(x) = 0 for x ∈ 0 < |x − ξ| < δ). Then lim x→ξ f(x) g(x) = lim x→ξ f (x) g (x) . To prove this, we note that f(ξ) = g(ξ) = 0 and apply the generalized theorem of the mean. Note that f(x) g(x) = f(x) − f(ξ) g(x) − g(ξ) = f (ψ) g (ψ) for some ψ between ξ and x. Thus lim x→ξ f(x) g(x) = lim ψ→ξ f (ψ) g (ψ) = lim x→ξ f (x) g (x) provided that the limits exist. L’Hospital’s Rule is also applicable when both functions tend to infinity instead of zero or when the limit point, ξ, is at infinity. It is also valid for one-sided limits. L’Hospital’s rule is directly applicable to the indeterminate forms 0 0 and ∞ ∞ . Example 3.7.1 Consider the three functions sin x x , sin x |x| and sin x 1−cos x at the point x = 0. lim x→0 sin x x = lim x→0 cos x 1 = 1 Thus sin x x has a removable discontinuity at x = 0. lim x→0+ sin x |x| = lim x→0+ sin x x = 1 lim x→0− sin x |x| = lim x→0− sin x −x = −1 Thus sin x |x| has a finite discontinuity at x = 0. lim x→0 sin x 1 − cos x = lim x→0 cos x sin x = 1 0 = ∞ Thus sin x 1−cos x has an infinite discontinuity at x = 0. 50
  • 71. Example 3.7.2 Let a and d be nonzero. lim x→∞ ax2 + bx + c dx2 + ex + f = lim x→∞ 2ax + b 2dx + e = lim x→∞ 2a 2d = a d Example 3.7.3 Consider lim x→0 cos x − 1 x sin x . This limit is an indeterminate of the form 0 0 . Applying L’Hospital’s rule we see that limit is equal to lim x→0 − sin x x cos x + sin x . This limit is again an indeterminate of the form 0 0 . We apply L’Hospital’s rule again. lim x→0 − cos x −x sin x + 2 cos x = − 1 2 Thus the value of the original limit is −1 2 . We could also obtain this result by expanding the functions in Taylor series. lim x→0 cos x − 1 x sin x = lim x→0 1 − x2 2 + x4 24 − · · · − 1 x x − x3 6 + x5 120 − · · · = lim x→0 −x2 2 + x4 24 − · · · x2 − x4 6 + x6 120 − · · · = lim x→0 −1 2 + x2 24 − · · · 1 − x2 6 + x4 120 − · · · = − 1 2 We can apply L’Hospital’s Rule to the indeterminate forms 0 · ∞ and ∞ − ∞ by rewriting the expression in a different form, (perhaps putting the expression over a common denominator). If at first you don’t succeed, try, try again. You may have to apply L’Hospital’s rule several times to evaluate a limit. Example 3.7.4 lim x→0 cot x − 1 x = lim x→0 x cos x − sin x x sin x = lim x→0 cos x − x sin x − cos x sin x + x cos x = lim x→0 −x sin x sin x + x cos x = lim x→0 −x cos x − sin x cos x + cos x − x sin x = 0 You can apply L’Hospital’s rule to the indeterminate forms 1∞ , 00 or ∞0 by taking the logarithm of the expression. 51
  • 72. Example 3.7.5 Consider the limit, lim x→0 xx , which gives us the indeterminate form 00 . The logarithm of the expression is ln(xx ) = x ln x. As x → 0 we now have the indeterminate form 0 · ∞. By rewriting the expression, we can apply L’Hospital’s rule. lim x→0 ln x 1/x = lim x→0 1/x −1/x2 = lim x→0 (−x) = 0 Thus the original limit is lim x→0 xx = e0 = 1. 52
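A quick numerical sanity check of this limit (illustrative; not from the text):

    for x in [0.1, 0.01, 0.001, 1e-6]:
        print(x, x**x)  # 0.794..., 0.955..., 0.993..., 0.999986...: tends to 1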
  • 73. 3.8 Exercises 3.8.1 Limits of Functions Exercise 3.1 Does lim x→0 sin 1 x exist? Hint, Solution Exercise 3.2 Does lim x→0 x sin 1 x exist? Hint, Solution Exercise 3.3 Evaluate the limit: lim n→∞ n √ 5. Hint, Solution 3.8.2 Continuous Functions Exercise 3.4 Is the function sin(1/x) continuous in the open interval (0, 1)? Is there a value of a such that the function defined by f(x) = sin(1/x) for x = 0, a for x = 0 is continuous on the closed interval [0, 1]? Hint, Solution Exercise 3.5 Is the function sin(1/x) uniformly continuous in the open interval (0, 1)? Hint, Solution Exercise 3.6 Are the functions √ x and 1 x uniformly continuous on the interval (0, 1)? Hint, Solution Exercise 3.7 Prove that a function which is continuous on a closed interval is uniformly continuous on that interval. Hint, Solution Exercise 3.8 Prove or disprove each of the following. 1. If limn→∞ an = L then limn→∞ a2 n = L2 . 2. If limn→∞ a2 n = L2 then limn→∞ an = L. 3. If an > 0 for all n > 200, and limn→∞ an = L, then L > 0. 53
• 74. 4. If f : R → R is continuous and limx→∞ f(x) = L, then for n ∈ Z, limn→∞ f(n) = L.

5. If f : R → R is continuous and limn→∞ f(n) = L, then for x ∈ R, limx→∞ f(x) = L.

Hint, Solution

3.8.3 The Derivative

Exercise 3.9 (mathematica/calculus/differential/definition.nb) Use the definition of differentiation to prove the following identities where f(x) and g(x) are differentiable functions and n is a positive integer.

1. d/dx (x^n) = n x^(n−1), (I suggest that you use Newton's binomial formula.)
2. d/dx (f(x)g(x)) = f dg/dx + g df/dx
3. d/dx (sin x) = cos x. (You'll need to use some trig identities.)
4. d/dx (f(g(x))) = f′(g(x)) g′(x)

Hint, Solution

Exercise 3.10 Use the definition of differentiation to determine if the following functions are differentiable at x = 0.

1. f(x) = x|x|
2. f(x) = √(1 + |x|)

Hint, Solution

Exercise 3.11 (mathematica/calculus/differential/rules.nb) Find the first derivatives of the following:

a. x sin(cos x)
b. f(cos(g(x)))
c. 1/f(ln x)
d. x^(x^x)
e. |x| sin |x|

Hint, Solution

Exercise 3.12 (mathematica/calculus/differential/rules.nb) Using d/dx sin x = cos x and d/dx tan x = 1/cos² x, find the derivatives of arcsin x and arctan x. Hint, Solution

3.8.4 Implicit Differentiation

Exercise 3.13 (mathematica/calculus/differential/implicit.nb) Find y′(x), given that x² + y² = 1. What is y′(1/2)? Hint, Solution

Exercise 3.14 (mathematica/calculus/differential/implicit.nb) Find y′(x) and y″(x), given that x² − xy + y² = 3. Hint, Solution 54
• 75. 3.8.5 Maxima and Minima

Exercise 3.15 (mathematica/calculus/differential/maxima.nb) Identify any maxima and minima of the following functions.

a. f(x) = x(12 − 2x)².
b. f(x) = (x − 2)^(2/3).

Hint, Solution

Exercise 3.16 (mathematica/calculus/differential/maxima.nb) A cylindrical container with a circular base and an open top is to hold 64 cm³. Find its dimensions so that the surface area of the cup is a minimum. Hint, Solution

3.8.6 Mean Value Theorems

Exercise 3.17 Prove the generalized theorem of the mean. If f(x) and g(x) are continuous in [a, b] and differentiable in (a, b), then there exists a point x = ξ such that

f′(ξ)/g′(ξ) = (f(b) − f(a))/(g(b) − g(a)).

Assume that g(a) ≠ g(b) so that the denominator does not vanish and that f′(x) and g′(x) are not simultaneously zero which would produce an indeterminate form. Hint, Solution

Exercise 3.18 (mathematica/calculus/differential/taylor.nb) Find a polynomial approximation of sin x on the interval [−1, 1] that has a maximum error of 1/1000. Don't use any more terms than you need to. Prove the error bound. Use your polynomial to approximate sin 1. Hint, Solution

Exercise 3.19 (mathematica/calculus/differential/taylor.nb) You use the formula (f(x + ∆x) − 2f(x) + f(x − ∆x))/∆x² to approximate f″(x). What is the error in this approximation? Hint, Solution

Exercise 3.20 The formulas (f(x + ∆x) − f(x))/∆x and (f(x + ∆x) − f(x − ∆x))/(2∆x) are first and second order accurate schemes for approximating the first derivative f′(x). Find a couple of other schemes that have successively higher orders of accuracy. Would these higher order schemes actually give a better approximation of f′(x)? Remember that ∆x is small, but not infinitesimal. Hint, Solution

3.8.7 L'Hospital's Rule

Exercise 3.21 (mathematica/calculus/differential/lhospitals.nb) Evaluate the following limits.

a. lim_{x→0} (x − sin x)/x³
b. lim_{x→0} (csc x − 1/x)
c. lim_{x→+∞} (1 + 1/x)^x 55
  • 76. d. limx→0 csc2 x − 1 x2 . (First evaluate using L’Hospital’s rule then using a Taylor series expan- sion. You will find that the latter method is more convenient.) Hint, Solution Exercise 3.22 (mathematica/calculus/differential/lhospitals.nb) Evaluate the following limits, lim x→∞ xa/x , lim x→∞ 1 + a x bx , where a and b are constants. Hint, Solution 56
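Exercises 3.19 and 3.20 are easy to explore numerically. The sketch below is an illustrative Python addition (numpy assumed available): it compares the first order forward difference with the second order centered difference, and shows why "∆x is small, but not infinitesimal" matters — past a point, floating point roundoff swamps the truncation error, so a higher order scheme is not automatically better.

    # Errors of two difference schemes for f'(x), with f = exp at x = 1.
    import numpy as np

    f = np.exp          # test function; its exact derivative is also exp
    x = 1.0
    exact = np.exp(x)
    for dx in [1e-1, 1e-2, 1e-3, 1e-8]:
        fwd = (f(x + dx) - f(x)) / dx              # first order accurate
        ctr = (f(x + dx) - f(x - dx)) / (2 * dx)   # second order accurate
        print(f"{dx:8.0e}  {abs(fwd - exact):.2e}  {abs(ctr - exact):.2e}")
    # The forward error shrinks like dx and the centered error like dx**2,
    # until dx = 1e-8, where cancellation in the differences dominates both.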
  • 77. 3.9 Hints Hint 3.1 Apply the , δ definition of a limit. Hint 3.2 Set y = 1/x. Consider limy→∞. Hint 3.3 Write n √ 5 in terms of the exponential function. Hint 3.4 The composition of continuous functions is continuous. Apply the definition of continuity and look at the point x = 0. Hint 3.5 Note that for x1 = 1 (n−1/2)π and x2 = 1 (n+1/2)π where n ∈ Z we have | sin(1/x1) − sin(1/x2)| = 2. Hint 3.6 Note that the function √ x + δ − √ x is a decreasing function of x and an increasing function of δ for positive x and δ. Bound this function for fixed δ. Consider any positive δ and . For what values of x is 1 x − 1 x + δ > . Hint 3.7 Let the function f(x) be continuous on a closed interval. Consider the function e(x, δ) = sup |ξ−x|<δ |f(ξ) − f(x)|. Bound e(x, δ) with a function of δ alone. Hint 3.8 CONTINUE 1. If limn→∞ an = L then limn→∞ a2 n = L2 . 2. If limn→∞ a2 n = L2 then limn→∞ an = L. 3. If an > 0 for all n > 200, and limn→∞ an = L, then L > 0. 4. If f : R → R is continuous and limx→∞ f(x) = L, then for n ∈ Z, limn→∞ f(n) = L. 5. If f : R → R is continuous and limn→∞ f(n) = L, then for x ∈ R, limx→∞ f(x) = L. Hint 3.9 a. Newton’s binomial formula is (a + b)n = n k=0 n k an−k bk = an + an−1 b + n(n − 1) 2 an−2 b2 + · · · + nabn−1 + bn . Recall that the binomial coefficient is n k = n! (n − k)!k! . 57
• 78. b. Note that

d/dx (f(x)g(x)) = lim_{∆x→0} [f(x + ∆x)g(x + ∆x) − f(x)g(x)]/∆x

and

g(x)f′(x) + f(x)g′(x) = g(x) lim_{∆x→0} [f(x + ∆x) − f(x)]/∆x + f(x) lim_{∆x→0} [g(x + ∆x) − g(x)]/∆x.

Fill in the blank.

c. First prove that lim_{θ→0} sin θ/θ = 1 and lim_{θ→0} (cos θ − 1)/θ = 0.

d. Let u = g(x). Consider a nonzero increment ∆x, which induces the increments ∆u and ∆f. By definition, ∆f = f(u + ∆u) − f(u), ∆u = g(x + ∆x) − g(x), and ∆f, ∆u → 0 as ∆x → 0. If ∆u ≠ 0 then we have ε = ∆f/∆u − df/du → 0 as ∆u → 0. If ∆u = 0 for some values of ∆x then ∆f also vanishes and we define ε = 0 for these values. In either case, ∆y = (df/du)∆u + ε∆u. Continue from here.

Hint 3.10

Hint 3.11

a. Use the product rule and the chain rule.
b. Use the chain rule.
c. Use the quotient rule and the chain rule.
d. Use the identity a^b = e^(b ln a).
e. For x > 0, the expression is x sin x; for x < 0, the expression is (−x) sin(−x) = x sin x. Do both cases.

Hint 3.12 Use that x′(y) = 1/y′(x) and the identities cos x = (1 − sin² x)^(1/2) and cos(arctan x) = 1/(1 + x²)^(1/2).

Hint 3.13 Differentiating the equation x² + [y(x)]² = 1 yields 2x + 2y(x)y′(x) = 0. Solve this equation for y′(x) and write y(x) in terms of x. 58
  • 79. Hint 3.14 Differentiate the equation and solve for y (x) in terms of x and y(x). Differentiate the expression for y (x) to obtain y (x). You’ll use that x2 − xy(x) + [y(x)]2 = 3 Hint 3.15 a. Use the second derivative test. b. The function is not differentiable at the point x = 2 so you can’t use a derivative test at that point. Hint 3.16 Let r be the radius and h the height of the cylinder. The volume of the cup is πr2 h = 64. The radius and height are related by h = 64 πr2 . The surface area of the cup is f(r) = πr2 + 2πrh = πr2 + 128 r . Use the second derivative test to find the minimum of f(r). Hint 3.17 The proof is analogous to the proof of the theorem of the mean. Hint 3.18 The first few terms in the Taylor series of sin(x) about x = 0 are sin(x) = x − x3 6 + x5 120 − x7 5040 + x9 362880 + · · · . When determining the error, use the fact that | cos x0| ≤ 1 and |xn | ≤ 1 for x ∈ [−1, 1]. Hint 3.19 The terms in the approximation have the Taylor series, f(x + ∆x) = f(x) + ∆xf (x) + ∆x2 2 f (x) + ∆x3 6 f (x) + ∆x4 24 f (x1), f(x − ∆x) = f(x) − ∆xf (x) + ∆x2 2 f (x) − ∆x3 6 f (x) + ∆x4 24 f (x2), where x ≤ x1 ≤ x + ∆x and x − ∆x ≤ x2 ≤ x. Hint 3.20 Hint 3.21 a. Apply L’Hospital’s rule three times. b. You can write the expression as x − sin x x sin x . c. Find the limit of the logarithm of the expression. d. It takes four successive applications of L’Hospital’s rule to evaluate the limit. For the Taylor series expansion method, csc2 x − 1 x2 = x2 − sin2 x x2 sin2 x = x2 − (x − x3 /6 + O(x5 ))2 x2(x + O(x3))2 Hint 3.22 To evaluate the limits use the identity ab = eb ln a and then apply L’Hospital’s rule. 59
  • 80. 3.10 Solutions Solution 3.1 Note that in any open neighborhood of zero, (−δ, δ), the function sin(1/x) takes on all values in the interval [−1, 1]. Thus if we choose a positive such that < 1 then there is no value of ψ for which | sin(1/x) − ψ| < for all x ∈ (− , ). Thus the limit does not exist. Solution 3.2 We make the change of variables y = 1/x and consider y → ∞. We use that sin(y) is bounded. lim x→0 x sin 1 x = lim y→∞ 1 y sin(y) = 0 Solution 3.3 We write n √ 5 in terms of the exponential function and then evaluate the limit. lim n→∞ n √ 5 = lim n→∞ exp ln 5 n = exp lim n→∞ ln 5 n = e0 = 1 Solution 3.4 Since 1 x is continuous in the interval (0, 1) and the function sin(x) is continuous everywhere, the composition sin(1/x) is continuous in the interval (0, 1). Since limx→0 sin(1/x) does not exist, there is no way of defining sin(1/x) at x = 0 to produce a function that is continuous in [0, 1]. Solution 3.5 Note that for x1 = 1 (n−1/2)π and x2 = 1 (n+1/2)π where n ∈ Z we have | sin(1/x1) − sin(1/x2)| = 2. Thus for any 0 < < 2 there is no value of δ > 0 such that | sin(1/x1) − sin(1/x2)| < for all x1, x2 ∈ (0, 1) and |x1 − x2| < δ. Thus sin(1/x) is not uniformly continuous in the open interval (0, 1). Solution 3.6 First consider the function √ x. Note that the function √ x + δ − √ x is a decreasing function of x and an increasing function of δ for positive x and δ. Thus for any fixed δ, the maximum value of √ x + δ − √ x is bounded by √ δ. Therefore on the interval (0, 1), a sufficient condition for | √ x − √ ξ| < is |x − ξ| < 2 . The function √ x is uniformly continuous on the interval (0, 1). Consider any positive δ and . Note that 1 x − 1 x + δ > for x < 1 2 δ2 + 4δ − δ . Thus there is no value of δ such that 1 x − 1 ξ < for all |x − ξ| < δ. The function 1 x is not uniformly continuous on the interval (0, 1). 60
  • 81. Solution 3.7 Let the function f(x) be continuous on a closed interval. Consider the function e(x, δ) = sup |ξ−x|<δ |f(ξ) − f(x)|. Since f(x) is continuous, e(x, δ) is a continuous function of x on the same closed interval. Since continuous functions on closed intervals are bounded, there is a continuous, increasing function (δ) satisfying, e(x, δ) ≤ (δ), for all x in the closed interval. Since (δ) is continuous and increasing, it has an inverse δ( ). Now note that |f(x) − f(ξ)| < for all x and ξ in the closed interval satisfying |x − ξ| < δ( ). Thus the function is uniformly continuous in the closed interval. Solution 3.8 1. The statement lim n→∞ an = L is equivalent to ∀ > 0, ∃ N s.t. n > N ⇒ |an − L| < . We want to show that ∀ δ > 0, ∃ M s.t. m > M ⇒ |a2 n − L2 | < δ. Suppose that |an − L| < . We obtain an upper bound on |a2 n − L2 |. |a2 n − L2 | = |an − L||an + L| < (|2L| + ) Now we choose a value of such that |a2 n − L2 | < δ (|2L| + ) = δ = L2 + δ − |L| Consider any fixed δ > 0. We see that since for = L2 + δ − |L|, ∃ N s.t. n > N ⇒ |an − L| < implies that n > N ⇒ |a2 n − L2 | < δ. Therefore ∀ δ > 0, ∃ M s.t. m > M ⇒ |a2 n − L2 | < δ. We conclude that limn→∞ a2 n = L2 . 2. limn→∞ a2 n = L2 does not imply that limn→∞ an = L. Consider an = −1. In this case limn→∞ a2 n = 1 and limn→∞ an = −1. 3. If an > 0 for all n > 200, and limn→∞ an = L, then L is not necessarily positive. Consider an = 1/n, which satisfies the two constraints. lim n→∞ 1 n = 0 4. The statement limx→∞ f(x) = L is equivalent to ∀ > 0, ∃ X s.t. x > X ⇒ |f(x) − L| < . This implies that for n > X , |f(n) − L| < . ∀ > 0, ∃ N s.t. n > N ⇒ |f(n) − L| < lim n→∞ f(n) = L 61
  • 82. 5. If f : R → R is continuous and limn→∞ f(n) = L, then for x ∈ R, it is not necessarily true that limx→∞ f(x) = L. Consider f(x) = sin(πx). lim n→∞ sin(πn) = lim n→∞ 0 = 0 limx→∞ sin(πx) does not exist. Solution 3.9 a. d dx (xn ) = lim ∆x→0 (x + ∆x)n − xn ∆x = lim ∆x→0   xn + nxn−1 ∆x + n(n−1) 2 xn−2 ∆x2 + · · · + ∆xn − xn ∆x   = lim ∆x→0 nxn−1 + n(n − 1) 2 xn−2 ∆x + · · · + ∆xn−1 = nxn−1 d dx (xn ) = nxn−1 b. d dx (f(x)g(x)) = lim ∆x→0 f(x + ∆x)g(x + ∆x) − f(x)g(x) ∆x = lim ∆x→0 [f(x + ∆x)g(x + ∆x) − f(x)g(x + ∆x)] + [f(x)g(x + ∆x) − f(x)g(x)] ∆x = lim ∆x→0 [g(x + ∆x)] lim ∆x→0 f(x + ∆x) − f(x) ∆x + f(x) lim ∆x→0 g(x + ∆x) − g(x) ∆x = g(x)f (x) + f(x)g (x) d dx (f(x)g(x)) = f(x)g (x) + f (x)g(x) c. Consider a right triangle with hypotenuse of length 1 in the first quadrant of the plane. Label the vertices A, B, C, in clockwise order, starting with the vertex at the origin. The angle of A is θ. The length of a circular arc of radius cos θ that connects C to the hypotenuse is θ cos θ. The length of the side BC is sin θ. The length of a circular arc of radius 1 that connects B to the x axis is θ. (See Figure 3.20.) Considering the length of these three curves gives us the inequality: θ cos θ ≤ sin θ ≤ θ. Dividing by θ, cos θ ≤ sin θ θ ≤ 1. Taking the limit as θ → 0 gives us lim θ→0 sin θ θ = 1. 62
• 83. Figure 3.20: The right triangle and the two circular arcs used to bound θ cos θ ≤ sin θ ≤ θ.

One more little tidbit we'll need to know is

lim_{θ→0} (cos θ − 1)/θ = lim_{θ→0} [(cos θ − 1)/θ][(cos θ + 1)/(cos θ + 1)]
= lim_{θ→0} (cos² θ − 1)/[θ(cos θ + 1)]
= lim_{θ→0} (− sin² θ)/[θ(cos θ + 1)]
= [lim_{θ→0} (− sin θ/θ)][lim_{θ→0} sin θ/(cos θ + 1)]
= (−1)(0/2)
= 0.

Now we're ready to find the derivative of sin x.

d/dx (sin x) = lim_{∆x→0} [sin(x + ∆x) − sin x]/∆x
= lim_{∆x→0} [cos x sin ∆x + sin x cos ∆x − sin x]/∆x
= cos x lim_{∆x→0} (sin ∆x/∆x) + sin x lim_{∆x→0} (cos ∆x − 1)/∆x
= cos x

d/dx (sin x) = cos x

d. Let u = g(x). Consider a nonzero increment ∆x, which induces the increments ∆u and ∆f. By definition, ∆f = f(u + ∆u) − f(u), ∆u = g(x + ∆x) − g(x), and ∆f, ∆u → 0 as ∆x → 0. If ∆u ≠ 0 then we have ε = ∆f/∆u − df/du → 0 as ∆u → 0. 63
• 84. If ∆u = 0 for some values of ∆x then ∆f also vanishes and we define ε = 0 for these values. In either case, ∆y = (df/du)∆u + ε∆u. We divide this equation by ∆x and take the limit as ∆x → 0.

df/dx = lim_{∆x→0} ∆f/∆x
= lim_{∆x→0} [(df/du)(∆u/∆x) + ε(∆u/∆x)]
= (df/du) lim_{∆x→0} (∆u/∆x) + (lim_{∆x→0} ε)(lim_{∆x→0} ∆u/∆x)
= (df/du)(du/dx) + (0)(du/dx)
= (df/du)(du/dx)

Thus we see that d/dx (f(g(x))) = f′(g(x)) g′(x).

Solution 3.10

1. f′(0) = lim_{ε→0} (ε|ε| − 0)/ε = lim_{ε→0} |ε| = 0. The function is differentiable at x = 0.

2. f′(0) = lim_{ε→0} (√(1 + |ε|) − 1)/ε = lim_{ε→0} [(1/2)(1 + |ε|)^(−1/2) sign(ε)]/1 = lim_{ε→0} (1/2) sign(ε). Since the limit does not exist, the function is not differentiable at x = 0.

Solution 3.11

a. d/dx [x sin(cos x)] = (d/dx [x]) sin(cos x) + x d/dx [sin(cos x)] = sin(cos x) + x cos(cos x) d/dx [cos x] = sin(cos x) − x cos(cos x) sin x

d/dx [x sin(cos x)] = sin(cos x) − x cos(cos x) sin x 64
  • 85. b. d dx [f(cos(g(x)))] = f (cos(g(x))) d dx [cos(g(x))] = −f (cos(g(x))) sin(g(x)) d dx [g(x)] = −f (cos(g(x))) sin(g(x))g (x) d dx [f(cos(g(x)))] = −f (cos(g(x))) sin(g(x))g (x) c. d dx 1 f(ln x) = − d dx [f(ln x)] [f(ln x)]2 = − f (ln x) d dx [ln x] [f(ln x)]2 = − f (ln x) x[f(ln x)]2 d dx 1 f(ln x) = − f (ln x) x[f(ln x)]2 d. First we write the expression in terms exponentials and logarithms, xxx = xexp(x ln x) = exp(exp(x ln x) ln x). Then we differentiate using the chain rule and the product rule. d dx exp(exp(x ln x) ln x) = exp(exp(x ln x) ln x) d dx (exp(x ln x) ln x) = xxx exp(x ln x) d dx (x ln x) ln x + exp(x ln x) 1 x = xxx xx (ln x + x 1 x ) ln x + x−1 exp(x ln x) = xxx xx (ln x + 1) ln x + x−1 xx = xxx +x x−1 + ln x + ln2 x d dx xxx = xxx +x x−1 + ln x + ln2 x e. For x > 0, the expression is x sin x; for x < 0, the expression is (−x) sin(−x) = x sin x. Thus we see that |x| sin |x| = x sin x. The first derivative of this is sin x + x cos x. d dx (|x| sin |x|) = sin x + x cos x 65
  • 86. Solution 3.12 Let y(x) = sin x. Then y (x) = cos x. d dy arcsin y = 1 y (x) = 1 cos x = 1 (1 − sin2 x)1/2 = 1 (1 − y2)1/2 d dx arcsin x = 1 (1 − x2)1/2 Let y(x) = tan x. Then y (x) = 1/ cos2 x. d dy arctan y = 1 y (x) = cos2 x = cos2 (arctan y) = 1 (1 + y2)1/2 = 1 1 + y2 d dx arctan x = 1 1 + x2 Solution 3.13 Differentiating the equation x2 + [y(x)]2 = 1 yields 2x + 2y(x)y (x) = 0. We can solve this equation for y (x). y (x) = − x y(x) To find y (1/2) we need to find y(x) in terms of x. y(x) = ± 1 − x2 Thus y (x) is y (x) = ± x √ 1 − x2 . y (1/2) can have the two values: y 1 2 = ± 1 √ 3 . Solution 3.14 Differentiating the equation x2 − xy(x) + [y(x)]2 = 3 66
  • 87. yields 2x − y(x) − xy (x) + 2y(x)y (x) = 0. Solving this equation for y (x) y (x) = y(x) − 2x 2y(x) − x . Now we differentiate y (x) to get y (x). y (x) = (y (x) − 2)(2y(x) − x) − (y(x) − 2x)(2y (x) − 1) (2y(x) − x)2 , y (x) = 3 xy (x) − y(x) (2y(x) − x)2 , y (x) = 3 xy(x)−2x 2y(x)−x − y(x) (2y(x) − x)2 , y (x) = 3 x(y(x) − 2x) − y(x)(2y(x) − x) (2y(x) − x)3 , y (x) = −6 x2 − xy(x) + [y(x)]2 (2y(x) − x)3 , y (x) = −18 (2y(x) − x)3 , Solution 3.15 a. f (x) = (12 − 2x)2 + 2x(12 − 2x)(−2) = 4(x − 6)2 + 8x(x − 6) = 12(x − 2)(x − 6) There are critical points at x = 2 and x = 6. f (x) = 12(x − 2) + 12(x − 6) = 24(x − 4) Since f (2) = −48 < 0, x = 2 is a local maximum. Since f (6) = 48 > 0, x = 6 is a local minimum. b. f (x) = 2 3 (x − 2)−1/3 The first derivative exists and is nonzero for x = 2. At x = 2, the derivative does not exist and thus x = 2 is a critical point. For x < 2, f (x) < 0 and for x > 2, f (x) > 0. x = 2 is a local minimum. Solution 3.16 Let r be the radius and h the height of the cylinder. The volume of the cup is πr2 h = 64. The radius and height are related by h = 64 πr2 . The surface area of the cup is f(r) = πr2 + 2πrh = πr2 + 128 r . The first derivative of the surface area is f (r) = 2πr − 128 r2 . Finding the zeros of f (r), 2πr − 128 r2 = 0, 2πr3 − 128 = 0, 67
  • 88. r = 4 3 √ π . The second derivative of the surface area is f (r) = 2π + 256 r3 . Since f ( 4 3 √ π ) = 6π, r = 4 3 √ π is a local minimum of f(r). Since this is the only critical point for r > 0, it must be a global minimum. The cup has a radius of 4 3 √ π cm and a height of 4 3 √ π . Solution 3.17 We define the function h(x) = f(x) − f(a) − f(b) − f(a) g(b) − g(a) (g(x) − g(a)). Note that h(x) is differentiable and that h(a) = h(b) = 0. Thus h(x) satisfies the conditions of Rolle’s theorem and there exists a point ξ ∈ (a, b) such that h (ξ) = f (ξ) − f(b) − f(a) g(b) − g(a) g (ξ) = 0, f (ξ) g (ξ) = f(b) − f(a) g(b) − g(a) . Solution 3.18 The first few terms in the Taylor series of sin(x) about x = 0 are sin(x) = x − x3 6 + x5 120 − x7 5040 + x9 362880 + · · · . The seventh derivative of sin x is − cos x. Thus we have that sin(x) = x − x3 6 + x5 120 − cos x0 5040 x7 , where 0 ≤ x0 ≤ x. Since we are considering x ∈ [−1, 1] and −1 ≤ cos(x0) ≤ 1, the approximation sin x ≈ x − x3 6 + x5 120 has a maximum error of 1 5040 ≈ 0.000198. Using this polynomial to approximate sin(1), 1 − 13 6 + 15 120 ≈ 0.841667. To see that this has the required accuracy, sin(1) ≈ 0.841471. Solution 3.19 Expanding the terms in the approximation in Taylor series, f(x + ∆x) = f(x) + ∆xf (x) + ∆x2 2 f (x) + ∆x3 6 f (x) + ∆x4 24 f (x1), f(x − ∆x) = f(x) − ∆xf (x) + ∆x2 2 f (x) − ∆x3 6 f (x) + ∆x4 24 f (x2), where x ≤ x1 ≤ x + ∆x and x − ∆x ≤ x2 ≤ x. Substituting the expansions into the formula, f(x + ∆x) − 2f(x) + f(x − ∆x) ∆x2 = f (x) + ∆x2 24 [f (x1) + f (x2)]. 68
  • 89. Thus the error in the approximation is ∆x2 24 [f (x1) + f (x2)]. Solution 3.20 Solution 3.21 a. lim x→0 x − sin x x3 = lim x→0 1 − cos x 3x2 = lim x→0 sin x 6x = lim x→0 cos x 6 = 1 6 lim x→0 x − sin x x3 = 1 6 b. lim x→0 csc x − 1 x = lim x→0 1 sin x − 1 x = lim x→0 x − sin x x sin x = lim x→0 1 − cos x x cos x + sin x = lim x→0 sin x −x sin x + cos x + cos x = 0 2 = 0 lim x→0 csc x − 1 x = 0 69
  • 90. c. ln lim x→+∞ 1 + 1 x x = lim x→+∞ ln 1 + 1 x x = lim x→+∞ x ln 1 + 1 x = lim x→+∞ ln 1 + 1 x 1/x = lim x→+∞ 1 + 1 x −1 − 1 x2 −1/x2 = lim x→+∞ 1 + 1 x −1 = 1 Thus we have lim x→+∞ 1 + 1 x x = e. d. It takes four successive applications of L’Hospital’s rule to evaluate the limit. lim x→0 csc2 x − 1 x2 = lim x→0 x2 − sin2 x x2 sin2 x = lim x→0 2x − 2 cos x sin x 2x2 cos x sin x + 2x sin2 x = lim x→0 2 − 2 cos2 x + 2 sin2 x 2x2 cos2 x + 8x cos x sin x + 2 sin2 x − 2x2 sin2 x = lim x→0 8 cos x sin x 12x cos2 x + 12 cos x sin x − 8x2 cos x sin x − 12x sin2 x = lim x→0 8 cos2 x − 8 sin2 x 24 cos2 x − 8x2 cos2 x − 64x cos x sin x − 24 sin2 x + 8x2 sin2 x = 1 3 It is easier to use a Taylor series expansion. lim x→0 csc2 x − 1 x2 = lim x→0 x2 − sin2 x x2 sin2 x = lim x→0 x2 − (x − x3 /6 + O(x5 ))2 x2(x + O(x3))2 = lim x→0 x2 − (x2 − x4 /3 + O(x6 )) x4 + O(x6) = lim x→0 1 3 + O(x2 ) = 1 3 70
  • 91. Solution 3.22 To evaluate the first limit, we use the identity ab = eb ln a and then apply L’Hospital’s rule. lim x→∞ xa/x = lim x→∞ e a ln x x = exp lim x→∞ a ln x x = exp lim x→∞ a/x 1 = e0 lim x→∞ xa/x = 1 We use the same method to evaluate the second limit. lim x→∞ 1 + a x bx = lim x→∞ exp bx ln 1 + a x = exp lim x→∞ bx ln 1 + a x = exp lim x→∞ b ln(1 + a/x) 1/x = exp   lim x→∞ b −a/x2 1+a/x −1/x2   = exp lim x→∞ b a 1 + a/x lim x→∞ 1 + a x bx = eab 71
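Both limits in Solution 3.22 can be reproduced symbolically. The sketch below is an illustrative Python addition (sympy assumed available; whether the symbolic case simplifies fully may depend on the sympy version, so treat this as a check under those assumptions):

    # The limits of Exercise 3.22 with symbolic positive constants a, b.
    import sympy as sp

    x = sp.symbols('x', positive=True)
    a, b = sp.symbols('a b', positive=True)
    print(sp.limit(x**(a/x), x, sp.oo))           # 1
    print(sp.limit((1 + a/x)**(b*x), x, sp.oo))   # exp(a*b)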
  • 92. 3.11 Quiz Problem 3.1 Define continuity. Solution Problem 3.2 Fill in the blank with necessary, sufficient or necessary and sufficient. Continuity is a condition for differentiability. Differentiability is a condition for continuity. Existence of lim∆x→0 f(x+∆x)−f(x) ∆x is a condition for differentiability. Solution Problem 3.3 Evaluate d dx f(g(x)h(x)). Solution Problem 3.4 Evaluate d dx f(x)g(x) . Solution Problem 3.5 State the Theorem of the Mean. Interpret the theorem physically. Solution Problem 3.6 State Taylor’s Theorem of the Mean. Solution Problem 3.7 Evaluate limx→0(sin x)sin x . Solution 72
  • 93. 3.12 Quiz Solutions Solution 3.1 A function y(x) is said to be continuous at x = ξ if limx→ξ y(x) = y(ξ). Solution 3.2 Continuity is a necessary condition for differentiability. Differentiability is a sufficient condition for continuity. Existence of lim∆x→0 f(x+∆x)−f(x) ∆x is a necessary and sufficient condition for differentiability. Solution 3.3 d dx f(g(x)h(x)) = f (g(x)h(x)) d dx (g(x)h(x)) = f (g(x)h(x))(g (x)h(x) + g(x)h (x)) Solution 3.4 d dx f(x)g(x) = d dx eg(x) ln f(x) = eg(x) ln f(x) d dx (g(x) ln f(x)) = f(x)g(x) g (x) ln f(x) + g(x) f (x) f(x) Solution 3.5 If f(x) is continuous in [a..b] and differentiable in (a..b) then there exists a point x = ξ such that f (ξ) = f(b) − f(a) b − a . That is, there is a point where the instantaneous velocity is equal to the average velocity on the interval. Solution 3.6 If f(x) is n + 1 times continuously differentiable in (a..b) then there exists a point x = ξ ∈ (a..b) such that f(b) = f(a) + (b − a)f (a) + (b − a)2 2! f (a) + · · · + (b − a)n n! f(n) (a) + (b − a)n+1 (n + 1)! f(n+1) (ξ). Solution 3.7 Consider limx→0(sin x)sin x . This is an indeterminate of the form 00 . The limit of the logarithm of the expression is limx→0 sin x ln(sin x). This is an indeterminate of the form 0·∞. We can rearrange the expression to obtain an indeterminate of the form ∞ ∞ and then apply L’Hospital’s rule. lim x→0 ln(sin x) 1/ sin x = lim x→0 cos x/ sin x − cos x/ sin2 x = lim x→0 (− sin x) = 0 The original limit is lim x→0 (sin x)sin x = e0 = 1. 73
  • 95. Chapter 4 Integral Calculus 4.1 The Indefinite Integral The opposite of a derivative is the anti-derivative or the indefinite integral. The indefinite integral of a function f(x) is denoted, f(x) dx. It is defined by the property that d dx f(x) dx = f(x). While a function f(x) has a unique derivative if it is differentiable, it has an infinite number of indefinite integrals, each of which differ by an additive constant. Zero Slope Implies a Constant Function. If the value of a function’s derivative is identically zero, df dx = 0, then the function is a constant, f(x) = c. To prove this, we assume that there exists a non-constant differentiable function whose derivative is zero and obtain a contradiction. Let f(x) be such a function. Since f(x) is non-constant, there exist points a and b such that f(a) = f(b). By the Mean Value Theorem of differential calculus, there exists a point ξ ∈ (a, b) such that f (ξ) = f(b) − f(a) b − a = 0, which contradicts that the derivative is everywhere zero. Indefinite Integrals Differ by an Additive Constant. Suppose that F(x) and G(x) are in- definite integrals of f(x). Then we have d dx (F(x) − G(x)) = F (x) − G (x) = f(x) − f(x) = 0. Thus we see that F(x) − G(x) = c and the two indefinite integrals must differ by a constant. For example, we have sin x dx = − cos x + c. While every function that can be expressed in terms of elementary functions, (the exponent, logarithm, trigonometric functions, etc.), has a derivative that can be written explicitly in terms of elementary functions, the same is not true of integrals. For example, sin(sin x) dx cannot be written explicitly in terms of elementary functions. Properties. Since the derivative is linear, so is the indefinite integral. That is, (af(x) + bg(x)) dx = a f(x) dx + b g(x) dx. 75
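As a concrete illustration of these definitions, here is a short Python sketch (an addition for illustration; sympy assumed available). Note that sympy returns one particular anti-derivative and omits the arbitrary additive constant:

    # Indefinite integrals with sympy; the additive constant is implicit.
    import sympy as sp

    x = sp.symbols('x')
    F = sp.integrate(sp.sin(x), x)
    print(F, sp.diff(F, x))   # -cos(x), sin(x): d/dx of the integral is f

    # sin(sin(x)) has no elementary anti-derivative; sympy returns the
    # integral unevaluated, consistent with the remark above.
    print(sp.integrate(sp.sin(sp.sin(x)), x))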
  • 96. For each derivative identity there is a corresponding integral identity. Consider the power law identity, d dx (f(x))a = a(f(x))a−1 f (x). The corresponding integral identity is (f(x))a f (x) dx = (f(x))a+1 a + 1 + c, a = −1, where we require that a = −1 to avoid division by zero. From the derivative of a logarithm, d dx ln(f(x)) = f (x) f(x) , we obtain, f (x) f(x) dx = ln |f(x)| + c. Note the absolute value signs. This is because d dx ln |x| = 1 x for x = 0. In Figure 4.1 is a plot of ln |x| and 1 x to reinforce this. Figure 4.1: Plot of ln |x| and 1/x. Example 4.1.1 Consider I = x (x2 + 1)2 dx. We evaluate the integral by choosing u = x2 + 1, du = 2x dx. I = 1 2 2x (x2 + 1)2 dx = 1 2 du u2 = 1 2 −1 u = − 1 2(x2 + 1) . Example 4.1.2 Consider I = tan x dx = sin x cos x dx. By choosing f(x) = cos x, f (x) = − sin x, we see that the integral is I = − − sin x cos x dx = − ln | cos x| + c. Change of Variable. The differential of a function g(x) is dg = g (x) dx. Thus one might suspect that for ξ = g(x), f(ξ) dξ = f(g(x))g (x) dx, (4.1) since dξ = dg = g (x) dx. This turns out to be true. To prove it we will appeal to the the chain rule for differentiation. Let ξ be a function of x. The chain rule is d dx f(ξ) = f (ξ)ξ (x), 76
  • 97. d dx f(ξ) = df dξ dξ dx . We can also write this as df dξ = dx dξ df dx , or in operator notation, d dξ = dx dξ d dx . Now we’re ready to start. The derivative of the left side of Equation 4.1 is d dξ f(ξ) dξ = f(ξ). Next we differentiate the right side, d dξ f(g(x))g (x) dx = dx dξ d dx f(g(x))g (x) dx = dx dξ f(g(x))g (x) = dx dg f(g(x)) dg dx = f(g(x)) = f(ξ) to see that it is in fact an identity for ξ = g(x). Example 4.1.3 Consider x sin(x2 ) dx. We choose ξ = x2 , dξ = 2xdx to evaluate the integral. x sin(x2 ) dx = 1 2 sin(x2 )2x dx = 1 2 sin ξ dξ = 1 2 (− cos ξ) + c = − 1 2 cos(x2 ) + c Integration by Parts. The product rule for differentiation gives us an identity called integration by parts. We start with the product rule and then integrate both sides of the equation. d dx (u(x)v(x)) = u (x)v(x) + u(x)v (x) (u (x)v(x) + u(x)v (x)) dx = u(x)v(x) + c u (x)v(x) dx + u(x)v (x)) dx = u(x)v(x) u(x)v (x)) dx = u(x)v(x) − v(x)u (x) dx The theorem is most often written in the form u dv = uv − v du. 77
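The change of variable in Example 4.1.3 is easy to confirm symbolically. A minimal sympy sketch (an illustrative addition, not part of the original text):

    # Check Example 4.1.3: the substitution xi = x**2.
    import sympy as sp

    x = sp.symbols('x')
    F = sp.integrate(x * sp.sin(x**2), x)
    print(F)                                              # -cos(x**2)/2
    print(sp.simplify(sp.diff(F, x) - x * sp.sin(x**2)))  # 0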
  • 98. So what is the usefulness of this? Well, it may happen for some integrals and a good choice of u and v that the integral on the right is easier to evaluate than the integral on the left. Example 4.1.4 Consider x ex dx. If we choose u = x, dv = ex dx then integration by parts yields x ex dx = x ex − ex dx = (x − 1) ex . Now notice what happens when we choose u = ex , dv = x dx. x ex dx = 1 2 x2 ex − 1 2 x2 ex dx The integral gets harder instead of easier. When applying integration by parts, one must choose u and dv wisely. As general rules of thumb: • Pick u so that u is simpler than u. • Pick dv so that v is not more complicated, (hopefully simpler), than dv. Also note that you may have to apply integration by parts several times to evaluate some integrals. 4.2 The Definite Integral 4.2.1 Definition The area bounded by the x axis, the vertical lines x = a and x = b and the function f(x) is denoted with a definite integral, b a f(x) dx. The area is signed, that is, if f(x) is negative, then the area is negative. We measure the area with a divide-and-conquer strategy. First partition the interval (a, b) with a = x0 < x1 < · · · < xn−1 < xn = b. Note that the area under the curve on the subinterval is approximately the area of a rectangle of base ∆xi = xi+1 − xi and height f(ξi), where ξi ∈ [xi, xi+1]. If we add up the areas of the rectangles, we get an approximation of the area under the curve. See Figure 4.2 a x x x xx x∆ 1 2 3 i n-2 n-1 b f( )ξ1 Figure 4.2: Divide-and-Conquer Strategy for Approximating a Definite Integral. b a f(x) dx ≈ n−1 i=0 f(ξi)∆xi 78
  • 99. As the ∆xi’s get smaller, we expect the approximation of the area to get better. Let ∆x = max0≤i≤n−1 ∆xi. We define the definite integral as the sum of the areas of the rectangles in the limit that ∆x → 0. b a f(x) dx = lim ∆x→0 n−1 i=0 f(ξi)∆xi The integral is defined when the limit exists. This is known as the Riemann integral of f(x). f(x) is called the integrand. 4.2.2 Properties Linearity and the Basics. Because summation is a linear operator, that is n−1 i=0 (cfi + dgi) = c n−1 i=0 fi + d n−1 i=0 gi, definite integrals are linear, b a (cf(x) + dg(x)) dx = c b a f(x) dx + d b a g(x) dx. One can also divide the range of integration. b a f(x) dx = c a f(x) dx + b c f(x) dx We assume that each of the above integrals exist. If a ≤ b, and we integrate from b to a, then each of the ∆xi will be negative. From this observation, it is clear that b a f(x) dx = − a b f(x) dx. If we integrate any function from a point a to that same point a, then all the ∆xi are zero and a a f(x) dx = 0. Bounding the Integral. Recall that if fi ≤ gi, then n−1 i=0 fi ≤ n−1 i=0 gi. Let m = minx∈[a,b] f(x) and M = maxx∈[a,b] f(x). Then (b − a)m = n−1 i=0 m∆xi ≤ n−1 i=0 f(ξi)∆xi ≤ n−1 i=0 M∆xi = (b − a)M implies that (b − a)m ≤ b a f(x) dx ≤ (b − a)M. Since n−1 i=0 fi ≤ n−1 i=0 |fi|, we have b a f(x) dx ≤ b a |f(x)| dx. 79
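The limit-sum definition can be watched converging numerically. Below is a small Python sketch (an illustrative addition; numpy assumed, and the midpoint of each cell is chosen as the ξi) approximating the integral of sin x on [0, π], whose exact value, 2, is computed in Example 4.3.1:

    # Riemann sums for the integral of sin(x) on [0, pi].
    import numpy as np

    a, b = 0.0, np.pi
    for n in [10, 100, 1000]:
        edges = np.linspace(a, b, n + 1)       # the partition x_0 < ... < x_n
        xi = 0.5 * (edges[:-1] + edges[1:])    # xi_i in [x_i, x_{i+1}]
        dx = (b - a) / n
        print(n, np.sum(np.sin(xi)) * dx)      # tends to 2 as n grows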
  • 100. Mean Value Theorem of Integral Calculus. Let f(x) be continuous. We know from above that (b − a)m ≤ b a f(x) dx ≤ (b − a)M. Therefore there exists a constant c ∈ [m, M] satisfying b a f(x) dx = (b − a)c. Since f(x) is continuous, there is a point ξ ∈ [a, b] such that f(ξ) = c. Thus we see that b a f(x) dx = (b − a)f(ξ), for some ξ ∈ [a, b]. 4.3 The Fundamental Theorem of Integral Calculus Definite Integrals with Variable Limits of Integration. Consider a to be a constant and x variable, then the function F(x) defined by F(x) = x a f(t) dt (4.2) is an anti-derivative of f(x), that is F (x) = f(x). To show this we apply the definition of differen- tiation and the integral mean value theorem. F (x) = lim ∆x→0 F(x + ∆x) − F(x) ∆x = lim ∆x→0 x+∆x a f(t) dt − x a f(t) dt ∆x = lim ∆x→0 x+∆x x f(t) dt ∆x = lim ∆x→0 f(ξ)∆x ∆x , ξ ∈ [x, x + ∆x] = f(x) The Fundamental Theorem of Integral Calculus. Let F(x) be any anti-derivative of f(x). Noting that all anti-derivatives of f(x) differ by a constant and replacing x by b in Equation 4.2, we see that there exists a constant c such that b a f(x) dx = F(b) + c. Now to find the constant. By plugging in b = a, a a f(x) dx = F(a) + c = 0, we see that c = −F(a). This gives us a result known as the Fundamental Theorem of Integral Calculus. b a f(x) dx = F(b) − F(a). We introduce the notation [F(x)]b a ≡ F(b) − F(a). 80
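The fact that F(x) = ∫ from a to x of f(t) dt is an anti-derivative of f holds even when F has no elementary form. A short sympy sketch illustrating this (an addition; sympy expresses this particular integral through the error function erf):

    # F(x) = integral of exp(-t**2) from 0 to x, and its derivative.
    import sympy as sp

    x, t = sp.symbols('x t')
    F = sp.integrate(sp.exp(-t**2), (t, 0, x))
    print(F)               # sqrt(pi)*erf(x)/2
    print(sp.diff(F, x))   # exp(-x**2), as the theorem requires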
  • 101. Example 4.3.1 π 0 sin x dx = [− cos x]π 0 = − cos(π) + cos(0) = 2 4.4 Techniques of Integration 4.4.1 Partial Fractions A proper rational function p(x) q(x) = p(x) (x − a)nr(x) Can be written in the form p(x) (x − α)nr(x) = a0 (x − α)n + a1 (x − α)n−1 + · · · + an−1 x − α + (· · · ) where the ak’s are constants and the last ellipses represents the partial fractions expansion of the roots of r(x). The coefficients are ak = 1 k! dk dxk p(x) r(x) x=α . Example 4.4.1 Consider the partial fraction expansion of 1 + x + x2 (x − 1)3 . The expansion has the form a0 (x − 1)3 + a1 (x − 1)2 + a2 x − 1 . The coefficients are a0 = 1 0! (1 + x + x2 )|x=1 = 3, a1 = 1 1! d dx (1 + x + x2 )|x=1 = (1 + 2x)|x=1 = 3, a2 = 1 2! d2 dx2 (1 + x + x2 )|x=1 = 1 2 (2)|x=1 = 1. Thus we have 1 + x + x2 (x − 1)3 = 3 (x − 1)3 + 3 (x − 1)2 + 1 x − 1 . Example 4.4.2 Suppose we want to evaluate 1 + x + x2 (x − 1)3 dx. If we expand the integrand in a partial fraction expansion, then the integral becomes easy. 1 + x + x2 (x − 1)3 dx. = 3 (x − 1)3 + 3 (x − 1)2 + 1 x − 1 dx = − 3 2(x − 1)2 − 3 (x − 1) + ln(x − 1) 81
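The coefficient formula of this section is implemented in most computer algebra systems. As a cross-check of Example 4.4.1 (an illustrative Python addition using sympy):

    # Partial fraction expansion of Example 4.4.1 with sympy.
    import sympy as sp

    x = sp.symbols('x')
    print(sp.apart((1 + x + x**2) / (x - 1)**3))
    # 1/(x - 1) + 3/(x - 1)**2 + 3/(x - 1)**3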
  • 102. Example 4.4.3 Consider the partial fraction expansion of 1 + x + x2 x2(x − 1)2 . The expansion has the form a0 x2 + a1 x + b0 (x − 1)2 + b1 x − 1 . The coefficients are a0 = 1 0! 1 + x + x2 (x − 1)2 x=0 = 1, a1 = 1 1! d dx 1 + x + x2 (x − 1)2 x=0 = 1 + 2x (x − 1)2 − 2(1 + x + x2 ) (x − 1)3 x=0 = 3, b0 = 1 0! 1 + x + x2 x2 x=1 = 3, b1 = 1 1! d dx 1 + x + x2 x2 x=1 = 1 + 2x x2 − 2(1 + x + x2 ) x3 x=1 = −3, Thus we have 1 + x + x2 x2(x − 1)2 = 1 x2 + 3 x + 3 (x − 1)2 − 3 x − 1 . If the rational function has real coefficients and the denominator has complex roots, then you can reduce the work in finding the partial fraction expansion with the following trick: Let α and α be complex conjugate pairs of roots of the denominator. p(x) (x − α)n(x − α)nr(x) = a0 (x − α)n + a1 (x − α)n−1 + · · · + an−1 x − α + a0 (x − α)n + a1 (x − α)n−1 + · · · + an−1 x − α + (· · · ) Thus we don’t have to calculate the coefficients for the root at α. We just take the complex conjugate of the coefficients for α. Example 4.4.4 Consider the partial fraction expansion of 1 + x x2 + 1 . The expansion has the form a0 x − i + a0 x + i The coefficients are a0 = 1 0! 1 + x x + i x=i = 1 2 (1 − i), a0 = 1 2 (1 − i) = 1 2 (1 + i) Thus we have 1 + x x2 + 1 = 1 − i 2(x − i) + 1 + i 2(x + i) . 82
  • 103. 4.5 Improper Integrals If the range of integration is infinite or f(x) is discontinuous at some points then b a f(x) dx is called an improper integral. Discontinuous Functions. If f(x) is continuous on the interval a ≤ x ≤ b except at the point x = c where a < c < b then b a f(x) dx = lim δ→0+ c−δ a f(x) dx + lim →0+ b c+ f(x) dx provided that both limits exist. Example 4.5.1 Consider the integral of ln x on the interval [0, 1]. Since the logarithm has a singu- larity at x = 0, this is an improper integral. We write the integral in terms of a limit and evaluate the limit with L’Hospital’s rule. 1 0 ln x dx = lim δ→0 1 δ ln x dx = lim δ→0 [x ln x − x]1 δ = 1 ln(1) − 1 − lim δ→0 (δ ln δ − δ) = −1 − lim δ→0 (δ ln δ) = −1 − lim δ→0 ln δ 1/δ = −1 − lim δ→0 1/δ −1/δ2 = −1 Example 4.5.2 Consider the integral of xa on the range [0, 1]. If a < 0 then there is a singularity at x = 0. First assume that a = −1. 1 0 xa dx = lim δ→0+ xa+1 a + 1 1 δ = 1 a + 1 − lim δ→0+ δa+1 a + 1 This limit exists only for a > −1. Now consider the case that a = −1. 1 0 x−1 dx = lim δ→0+ [ln x] 1 δ = ln(0) − lim δ→0+ ln δ This limit does not exist. We obtain the result, 1 0 xa dx = 1 a + 1 , for a > −1. Infinite Limits of Integration. If the range of integration is infinite, say [a, ∞) then we define the integral as ∞ a f(x) dx = lim α→∞ α a f(x) dx, 83
• 104. provided that the limit exists. If the range of integration is (−∞, ∞) then

∫_{−∞}^{∞} f(x) dx = lim_{α→−∞} ∫_α^a f(x) dx + lim_{β→+∞} ∫_a^β f(x) dx.

Example 4.5.3

∫₁^∞ (ln x)/x² dx = ∫₁^∞ ln x (d/dx)(−1/x) dx
= [ln x (−1/x)]₁^∞ − ∫₁^∞ (−1/x)(1/x) dx
= [−(ln x)/x − 1/x]₁^∞
= lim_{x→+∞} (−(1/x)/1) − lim_{x→∞} (1/x) + 1
= 1

Example 4.5.4 Consider the integral of x^a on [1, ∞). First assume that a ≠ −1.

∫₁^∞ x^a dx = lim_{β→+∞} [x^(a+1)/(a + 1)]₁^β = lim_{β→+∞} β^(a+1)/(a + 1) − 1/(a + 1)

The limit exists only for a < −1. Now consider the case a = −1.

∫₁^∞ x^(−1) dx = lim_{β→+∞} [ln x]₁^β = lim_{β→+∞} ln β

This limit does not exist. Thus we have

∫₁^∞ x^a dx = −1/(a + 1), for a < −1. 84
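sympy evaluates both kinds of improper integrals directly, taking the limits internally. A brief illustrative addition (sympy assumed available):

    # The improper integrals of Examples 4.5.1 and 4.5.3.
    import sympy as sp

    x = sp.symbols('x', positive=True)
    print(sp.integrate(sp.log(x), (x, 0, 1)))           # -1
    print(sp.integrate(sp.log(x)/x**2, (x, 1, sp.oo)))  # 1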
  • 105. 4.6 Exercises 4.6.1 The Indefinite Integral Exercise 4.1 (mathematica/calculus/integral/fundamental.nb) Evaluate (2x + 3)10 dx. Hint, Solution Exercise 4.2 (mathematica/calculus/integral/fundamental.nb) Evaluate (ln x)2 x dx. Hint, Solution Exercise 4.3 (mathematica/calculus/integral/fundamental.nb) Evaluate x √ x2 + 3 dx. Hint, Solution Exercise 4.4 (mathematica/calculus/integral/fundamental.nb) Evaluate cos x sin x dx. Hint, Solution Exercise 4.5 (mathematica/calculus/integral/fundamental.nb) Evaluate x2 x3−5 dx. Hint, Solution 4.6.2 The Definite Integral Exercise 4.6 (mathematica/calculus/integral/definite.nb) Use the result b a f(x) dx = lim N→∞ N−1 n=0 f(xn)∆x where ∆x = b−a N and xn = a + n∆x, to show that 1 0 x dx = 1 2 . Hint, Solution Exercise 4.7 (mathematica/calculus/integral/definite.nb) Evaluate the following integral using integration by parts and the Pythagorean identity. π 0 sin2 x dx Hint, Solution Exercise 4.8 (mathematica/calculus/integral/definite.nb) Prove that d dx f(x) g(x) h(ξ) dξ = h(f(x))f (x) − h(g(x))g (x). (Don’t use the limit definition of differentiation, use the Fundamental Theorem of Integral Calculus.) Hint, Solution Exercise 4.9 (mathematica/calculus/integral/definite.nb) Let An be the area between the curves x and xn on the interval [0 . . . 1]. What is limn→∞ An? Explain this result geometrically. Hint, Solution 85
  • 106. Exercise 4.10 (mathematica/calculus/integral/taylor.nb) a. Show that f(x) = f(0) + x 0 f (x − ξ) dξ. b. From the above identity show that f(x) = f(0) + xf (0) + x 0 ξf (x − ξ) dξ. c. Using induction, show that f(x) = f(0) + xf (0) + 1 2 x2 f (0) + · · · + 1 n! xn f(n) (0) + x 0 1 n! ξn f(n+1) (x − ξ) dξ. Hint, Solution Exercise 4.11 Find a function f(x) whose arc length from 0 to x is 2x. Hint, Solution Exercise 4.12 Consider a curve C, bounded by −1 and 1, on the interval (−1 . . . 1). Can the length of C be unbounded? What if we change to the closed interval [−1 . . . 1]? Hint, Solution 4.6.3 The Fundamental Theorem of Integration 4.6.4 Techniques of Integration Exercise 4.13 (mathematica/calculus/integral/parts.nb) Evaluate x sin x dx. Hint, Solution Exercise 4.14 (mathematica/calculus/integral/parts.nb) Evaluate x3 e2x dx. Hint, Solution Exercise 4.15 (mathematica/calculus/integral/partial.nb) Evaluate 1 x2−4 dx. Hint, Solution Exercise 4.16 (mathematica/calculus/integral/partial.nb) Evaluate x+1 x3+x2−6x dx. Hint, Solution 4.6.5 Improper Integrals Exercise 4.17 (mathematica/calculus/integral/improper.nb) Evaluate 4 0 1 (x−1)2 dx. Hint, Solution Exercise 4.18 (mathematica/calculus/integral/improper.nb) Evaluate 1 0 1√ x dx. Hint, Solution 86
  • 108. 4.7 Hints Hint 4.1 Make the change of variables u = 2x + 3. Hint 4.2 Make the change of variables u = ln x. Hint 4.3 Make the change of variables u = x2 + 3. Hint 4.4 Make the change of variables u = sin x. Hint 4.5 Make the change of variables u = x3 − 5. Hint 4.6 1 0 x dx = lim N→∞ N−1 n=0 xn∆x = lim N→∞ N−1 n=0 (n∆x)∆x Hint 4.7 Let u = sin x and dv = sin x dx. Integration by parts will give you an equation for π 0 sin2 x dx. Hint 4.8 Let H (x) = h(x) and evaluate the integral in terms of H(x). Hint 4.9 CONTINUE Hint 4.10 a. Evaluate the integral. b. Use integration by parts to evaluate the integral. c. Use integration by parts with u = f(n+1) (x − ξ) and dv = 1 n! ξn . Hint 4.11 The arc length from 0 to x is x 0 1 + (f (ξ))2 dξ (4.3) First show that the arc length of f(x) from a to b is 2(b − a). Then conclude that the integrand in Equation 4.3 must everywhere be 2. Hint 4.12 CONTINUE Hint 4.13 Let u = x, and dv = sin x dx. 88
  • 109. Hint 4.14 Perform integration by parts three successive times. For the first one let u = x3 and dv = e2x dx. Hint 4.15 Expanding the integrand in partial fractions, 1 x2 − 4 = 1 (x − 2)(x + 2) = a (x − 2) + b (x + 2) 1 = a(x + 2) + b(x − 2) Set x = 2 and x = −2 to solve for a and b. Hint 4.16 Expanding the integral in partial fractions, x + 1 x3 + x2 − 6x = x + 1 x(x − 2)(x + 3) = a x + b x − 2 + c x + 3 x + 1 = a(x − 2)(x + 3) + bx(x + 3) + cx(x − 2) Set x = 0, x = 2 and x = −3 to solve for a, b and c. Hint 4.17 4 0 1 (x − 1)2 dx = lim δ→0+ 1−δ 0 1 (x − 1)2 dx + lim →0+ 4 1+ 1 (x − 1)2 dx Hint 4.18 1 0 1 √ x dx = lim →0+ 1 1 √ x dx Hint 4.19 1 x2 + a2 dx = 1 a arctan x a 89
  • 110. 4.8 Solutions Solution 4.1 (2x + 3)10 dx Let u = 2x + 3, g(u) = x = u−3 2 , g (u) = 1 2 . (2x + 3)10 dx = u10 1 2 du = u11 11 1 2 = (2x + 3)11 22 Solution 4.2 (ln x)2 x dx = (ln x)2 d(ln x) dx dx = (ln x)3 3 Solution 4.3 x x2 + 3 dx = x2 + 3 1 2 d(x2 ) dx dx = 1 2 (x2 + 3)3/2 3/2 = (x2 + 3)3/2 3 Solution 4.4 cos x sin x dx = 1 sin x d(sin x) dx dx = ln | sin x| Solution 4.5 x2 x3 − 5 dx = 1 x3 − 5 1 3 d(x3 ) dx dx = 1 3 ln |x3 − 5| 90
  • 111. Solution 4.6 1 0 x dx = lim N→∞ N−1 n=0 xn∆x = lim N→∞ N−1 n=0 (n∆x)∆x = lim N→∞ ∆x2 N−1 n=0 n = lim N→∞ ∆x2 N(N − 1) 2 = lim N→∞ N(N − 1) 2N2 = 1 2 Solution 4.7 Let u = sin x and dv = sin x dx. Then du = cos x dx and v = − cos x. π 0 sin2 x dx = − sin x cos x π 0 + π 0 cos2 x dx = π 0 cos2 x dx = π 0 (1 − sin2 x) dx = π − π 0 sin2 x dx 2 π 0 sin2 x dx = π π 0 sin2 x dx = π 2 Solution 4.8 Let H (x) = h(x). d dx f(x) g(x) h(ξ) dξ = d dx (H(f(x)) − H(g(x))) = H (f(x))f (x) − H (g(x))g (x) = h(f(x))f (x) − h(g(x))g (x) Solution 4.9 First we compute the area for positive integer n. An = 1 0 (x − xn ) dx = x2 2 − xn+1 n + 1 1 0 = 1 2 − 1 n + 1 Then we consider the area in the limit as n → ∞. lim n→∞ An = lim n→∞ 1 2 − 1 n + 1 = 1 2 91
  • 112. In Figure 4.3 we plot the functions x1 , x2 , x4 , x8 , . . . , x1024 . In the limit as n → ∞, xn on the interval [0 . . . 1] tends to the function 0 0 ≤ x < 1 1 x = 1 Thus the area tends to the area of the right triangle with unit base and height. 0.2 0.4 0.6 0.8 1 0.2 0.4 0.6 0.8 1 Figure 4.3: Plots of x1 , x2 , x4 , x8 , . . . , x1024 . Solution 4.10 1. f(0) + x 0 f (x − ξ) dξ = f(0) + [−f(x − ξ)] x 0 = f(0) − f(0) + f(x) = f(x) 2. f(0) + xf (0) + x 0 ξf (x − ξ) dξ = f(0) + xf (0) + [−ξf (x − ξ)] x 0 − x 0 −f (x − ξ) dξ = f(0) + xf (0) − xf (0) − [f(x − ξ)] x 0 = f(0) − f(0) + f(x) = f(x) 3. Above we showed that the hypothesis holds for n = 0 and n = 1. Assume that it holds for some n = m ≥ 0. f(x) = f(0) + xf (0) + 1 2 x2 f (0) + · · · + 1 n! xn f(n) (0) + x 0 1 n! ξn f(n+1) (x − ξ) dξ = f(0) + xf (0) + 1 2 x2 f (0) + · · · + 1 n! xn f(n) (0) + 1 (n + 1)! ξn+1 f(n+1) (x − ξ) x 0 − x 0 − 1 (n + 1)! ξn+1 f(n+2) (x − ξ) dξ = f(0) + xf (0) + 1 2 x2 f (0) + · · · + 1 n! xn f(n) (0) + 1 (n + 1)! xn+1 f(n+1) (0) + x 0 1 (n + 1)! ξn+1 f(n+2) (x − ξ) dξ 92
  • 113. This shows that the hypothesis holds for n = m + 1. By induction, the hypothesis hold for all n ≥ 0. Solution 4.11 First note that the arc length from a to b is 2(b − a). b a 1 + (f (x))2 dx = b 0 1 + (f (x))2 dx − a 0 1 + (f (x))2 dx = 2b − 2a Since a and b are arbitrary, we conclude that the integrand must everywhere be 2. 1 + (f (x))2 = 2 f (x) = ± √ 3 f(x) is a continuous, piecewise differentiable function which satisfies f (x) = ± √ 3 at the points where it is differentiable. One example is f(x) = √ 3x Solution 4.12 CONTINUE Solution 4.13 Let u = x, and dv = sin x dx. Then du = dx and v = − cos x. x sin x dx = −x cos x + cos x dx = −x cos x + sin x + C Solution 4.14 Let u = x3 and dv = e2x dx. Then du = 3x2 dx and v = 1 2 e2x . x3 e2x dx = 1 2 x3 e2x − 3 2 x2 e2x dx Let u = x2 and dv = e2x dx. Then du = 2x dx and v = 1 2 e2x . x3 e2x dx = 1 2 x3 e2x − 3 2 1 2 x2 e2x − x e2x dx x3 e2x dx = 1 2 x3 e2x − 3 4 x2 e2x + 3 2 x e2x dx Let u = x and dv = e2x dx. Then du = dx and v = 1 2 e2x . x3 e2x dx = 1 2 x3 e2x − 3 4 x2 e2x + 3 2 1 2 x e2x − 1 2 e2x dx x3 e2x dx = 1 2 x3 e2x − 3 4 x2 e2x + 3 4 x e2x − 3 8 e2x +C Solution 4.15 Expanding the integrand in partial fractions, 1 x2 − 4 = 1 (x − 2)(x + 2) = A (x − 2) + B (x + 2) 93
• 113. 1 = A(x + 2) + B(x − 2)

Setting x = 2 yields A = 1/4. Setting x = −2 yields B = −1/4. Now we can do the integral.

∫ 1/(x² − 4) dx = ∫ [1/(4(x − 2)) − 1/(4(x + 2))] dx
= (1/4) ln |x − 2| − (1/4) ln |x + 2| + C
= (1/4) ln |(x − 2)/(x + 2)| + C

Solution 4.16 Expanding the integral in partial fractions,

(x + 1)/(x³ + x² − 6x) = (x + 1)/(x(x − 2)(x + 3)) = A/x + B/(x − 2) + C/(x + 3)

x + 1 = A(x − 2)(x + 3) + Bx(x + 3) + Cx(x − 2)

Setting x = 0 yields A = −1/6. Setting x = 2 yields B = 3/10. Setting x = −3 yields C = −2/15.

∫ (x + 1)/(x³ + x² − 6x) dx = ∫ [−1/(6x) + 3/(10(x − 2)) − 2/(15(x + 3))] dx
= −(1/6) ln |x| + (3/10) ln |x − 2| − (2/15) ln |x + 3| + C
= ln( |x − 2|^(3/10) / (|x|^(1/6) |x + 3|^(2/15)) ) + C

Solution 4.17

∫₀⁴ 1/(x − 1)² dx = lim_{δ→0+} ∫₀^(1−δ) 1/(x − 1)² dx + lim_{ε→0+} ∫_(1+ε)^4 1/(x − 1)² dx
= lim_{δ→0+} [−1/(x − 1)]₀^(1−δ) + lim_{ε→0+} [−1/(x − 1)]_(1+ε)^4
= lim_{δ→0+} (1/δ − 1) + lim_{ε→0+} (−1/3 + 1/ε)
= ∞ + ∞

The integral diverges.

Solution 4.18

∫₀¹ 1/√x dx = lim_{ε→0+} ∫_ε^1 1/√x dx = lim_{ε→0+} [2√x]_ε^1 = lim_{ε→0+} 2(1 − √ε) = 2 94
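The solutions above can all be cross-checked symbolically. A short sympy sketch (an illustrative addition; sympy may arrange the anti-derivatives differently, equal up to an additive constant):

    # Cross-checks for Solutions 4.13 through 4.16.
    import sympy as sp

    x = sp.symbols('x')
    print(sp.integrate(x * sp.sin(x), x))        # -x*cos(x) + sin(x)
    print(sp.integrate(x**3 * sp.exp(2*x), x))   # equivalent to Solution 4.14
    print(sp.apart(1 / (x**2 - 4)))              # 1/(4*(x-2)) - 1/(4*(x+2))
    print(sp.apart((x + 1) / (x**3 + x**2 - 6*x)))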
  • 115. Solution 4.19 ∞ 0 1 x2 + 4 dx = lim α→∞ α 0 1 x2 + 4 dx = lim α→∞ 1 2 arctan x 2 α 0 = 1 2 π 2 − 0 = π 4 95
  • 116. 4.9 Quiz Problem 4.1 Write the limit-sum definition of b a f(x) dx. Solution Problem 4.2 Evaluate 2 −1 |x| dx. Solution Problem 4.3 Evaluate d dx x2 x f(ξ) dξ. Solution Problem 4.4 Evaluate 1+x+x2 (x+1)3 dx. Solution Problem 4.5 State the integral mean value theorem. Solution Problem 4.6 What is the partial fraction expansion of 1 x(x−1)(x−2)(x−3) ? Solution 96
  • 117. 4.10 Quiz Solutions Solution 4.1 Let a = x0 < x1 < · · · < xn−1 < xn = b be a partition of the interval (a..b). We define ∆xi = xi+1 − xi and ∆x = maxi ∆xi and choose ξi ∈ [xi..xi+1]. b a f(x) dx = lim ∆x→0 n−1 i=0 f(ξi)∆xi Solution 4.2 2 −1 |x| dx = 0 −1 √ −x dx + 2 0 √ x dx = 1 0 √ x dx + 2 0 √ x dx = 2 3 x3/2 1 0 + 2 3 x3/2 2 0 = 2 3 + 2 3 23/2 = 2 3 (1 + 2 √ 2) Solution 4.3 d dx x2 x f(ξ) dξ = f(x2 ) d dx (x2 ) − f(x) d dx (x) = 2xf(x2 ) − f(x) Solution 4.4 First we expand the integrand in partial fractions. 1 + x + x2 (x + 1)3 = a (x + 1)3 + b (x + 1)2 + c x + 1 a = (1 + x + x2 ) x=−1 = 1 b = 1 1! d dx (1 + x + x2 ) x=−1 = (1 + 2x) x=−1 = −1 c = 1 2! d2 dx2 (1 + x + x2 ) x=−1 = 1 2 (2) x=−1 = 1 Then we can do the integration. 1 + x + x2 (x + 1)3 dx = 1 (x + 1)3 − 1 (x + 1)2 + 1 x + 1 dx = − 1 2(x + 1)2 + 1 x + 1 + ln |x + 1| = x + 1/2 (x + 1)2 + ln |x + 1| 97
  • 118. Solution 4.5 Let f(x) be continuous. Then b a f(x) dx = (b − a)f(ξ), for some ξ ∈ [a..b]. Solution 4.6 1 x(x − 1)(x − 2)(x − 3) = a x + b x − 1 + c x − 2 + d x − 3 a = 1 (0 − 1)(0 − 2)(0 − 3) = − 1 6 b = 1 (1)(1 − 2)(1 − 3) = 1 2 c = 1 (2)(2 − 1)(2 − 3) = − 1 2 d = 1 (3)(3 − 1)(3 − 2) = 1 6 1 x(x − 1)(x − 2)(x − 3) = − 1 6x + 1 2(x − 1) − 1 2(x − 2) + 1 6(x − 3) 98
  • 119. Chapter 5 Vector Calculus 5.1 Vector Functions Vector-valued Functions. A vector-valued function, r(t), is a mapping r : R → Rn that assigns a vector to each value of t. r(t) = r1(t)e1 + · · · + rn(t)en. An example of a vector-valued function is the position of an object in space as a function of time. The function is continous at a point t = τ if lim t→τ r(t) = r(τ). This occurs if and only if the component functions are continuous. The function is differentiable if dr dt ≡ lim ∆t→0 r(t + ∆t) − r(t) ∆t exists. This occurs if and only if the component functions are differentiable. If r(t) represents the position of a particle at time t, then the velocity and acceleration of the particle are dr dt and d2 r dt2 , respectively. The speed of the particle is |r (t)|. Differentiation Formulas. Let f(t) and g(t) be vector functions and a(t) be a scalar function. By writing out components you can verify the differentiation formulas: d dt (f · g) = f · g + f · g d dt (f × g) = f × g + f × g d dt (af) = a f + af 5.2 Gradient, Divergence and Curl Scalar and Vector Fields. A scalar field is a function of position u(x) that assigns a scalar to each point in space. A function that gives the temperature of a material is an example of a scalar field. In two dimensions, you can graph a scalar field as a surface plot, (Figure 5.1), with the vertical axis for the value of the function. A vector field is a function of position u(x) that assigns a vector to each point in space. Examples of vectors fields are functions that give the acceleration due to gravity or the velocity of a fluid. You 99
  • 120. can graph a vector field in two or three dimension by drawing vectors at regularly spaced points. (See Figure 5.1 for a vector field in two dimensions.) 0 2 4 6 0 2 4 6 -1 -0.5 0 0.5 1 0 2 4 6 Figure 5.1: A Scalar Field and a Vector Field Partial Derivatives of Scalar Fields. Consider a scalar field u(x). The partial derivative of u with respect to xk is the derivative of u in which xk is considered to be a variable and the remaining arguments are considered to be parameters. The partial derivative is denoted ∂ ∂xk u(x), ∂u ∂xk or uxk and is defined ∂u ∂xk ≡ lim ∆x→0 u(x1, . . . , xk + ∆x, . . . , xn) − u(x1, . . . , xk, . . . , xn) ∆x . Partial derivatives have the same differentiation formulas as ordinary derivatives. 100
  • 121. Consider a scalar field in R3 , u(x, y, z). Higher derivatives of u are denoted: uxx ≡ ∂2 u ∂x2 ≡ ∂ ∂x ∂u ∂x , uxy ≡ ∂2 u ∂x∂y ≡ ∂ ∂x ∂u ∂y , uxxyz ≡ ∂4 u ∂x2∂y∂z ≡ ∂2 ∂x2 ∂ ∂y ∂u ∂z . If uxy and uyx are continuous, then ∂2 u ∂x∂y = ∂2 u ∂y∂x . This is referred to as the equality of mixed partial derivatives. Partial Derivatives of Vector Fields. Consider a vector field u(x). The partial derivative of u with respect to xk is denoted ∂ ∂xk u(x), ∂u ∂xk or uxk and is defined ∂u ∂xk ≡ lim ∆x→0 u(x1, . . . , xk + ∆x, . . . , xn) − u(x1, . . . , xk, . . . , xn) ∆x . Partial derivatives of vector fields have the same differentiation formulas as ordinary derivatives. Gradient. We introduce the vector differential operator, ≡ ∂ ∂x1 e1 + · · · + ∂ ∂xn en, which is known as del or nabla. In R3 it is ≡ ∂ ∂x i + ∂ ∂y j + ∂ ∂z k. Let u(x) be a differential scalar field. The gradient of u is, u ≡ ∂u ∂x1 e1 + · · · + ∂u ∂xn en, Directional Derivative. Suppose you are standing on some terrain. The slope of the ground in a particular direction is the directional derivative of the elevation in that direction. Consider a differentiable scalar field, u(x). The derivative of the function in the direction of the unit vector a is the rate of change of the function in that direction. Thus the directional derivative, Dau, is defined: Dau(x) = lim →0 u(x + a) − u(x) = lim →0 u(x1 + a1, . . . , xn + an) − u(x1, . . . , xn) = lim →0 u(x) + a1ux1 (x) + · · · + anuxn (x) + O( 2 ) − u(x) = a1ux1 (x) + · · · + anuxn (x) Dau(x) = u(x) · a. 101
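The gradient and the directional derivative are mechanical to compute with a computer algebra system. Below is a minimal sympy sketch (an illustrative addition, using the paraboloid function that appears in the next example):

    # Gradient and directional derivative of u = x**2 + y**2 - z.
    import sympy as sp

    x, y, z = sp.symbols('x y z')
    u = x**2 + y**2 - z
    grad_u = sp.Matrix([u.diff(v) for v in (x, y, z)])
    a = sp.Matrix([1, 1, 0]) / sp.sqrt(2)   # a unit direction
    D_a = (grad_u.T * a)[0]                 # D_a u = grad(u) . a
    pt = {x: 1, y: 1, z: 2}
    print(grad_u.subs(pt).T)                # [2, 2, -1]
    print(sp.simplify(D_a.subs(pt)))        # 2*sqrt(2)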
  • 122. Tangent to a Surface. The gradient, f, is orthogonal to the surface f(x) = 0. Consider a point ξ on the surface. Let the differential dr = dx1e1 + · · · dxnen lie in the tangent plane at ξ. Then df = ∂f ∂x1 dx1 + · · · + ∂f ∂xn dxn = 0 since f(x) = 0 on the surface. Then f · dr = ∂f ∂x1 e1 + · · · + ∂f ∂xn en · (dx1e1 + · · · + dxnen) = ∂f ∂x1 dx1 + · · · + ∂f ∂xn dxn = 0 Thus f is orthogonal to the tangent plane and hence to the surface. Example 5.2.1 Consider the paraboloid, x2 + y2 − z = 0. We want to find the tangent plane to the surface at the point (1, 1, 2). The gradient is f = 2xi + 2yj − k. At the point (1, 1, 2) this is f(1, 1, 2) = 2i + 2j − k. We know a point on the tangent plane, (1, 1, 2), and the normal, f(1, 1, 2). The equation of the plane is f(1, 1, 2) · (x, y, z) = f(1, 1, 2) · (1, 1, 2) 2x + 2y − z = 2 The gradient of the function f(x) = 0, f(x), is in the direction of the maximum directional derivative. The magnitude of the gradient, | f(x)|, is the value of the directional derivative in that direction. To derive this, note that Daf = f · a = | f| cos θ, where θ is the angle between f and a. Daf is maximum when θ = 0, i.e. when a is the same direction as f. In this direction, Daf = | f|. To use the elevation example, f points in the uphill direction and | f| is the uphill slope. Example 5.2.2 Suppose that the two surfaces f(x) = 0 and g(x) = 0 intersect at the point x = ξ. What is the angle between their tangent planes at that point? First we note that the angle between the tangent planes is by definition the angle between their normals. These normals are in the direction of f(ξ) and g(ξ). (We assume these are nonzero.) The angle, θ, between the tangent planes to the surfaces is θ = arccos f(ξ) · g(ξ) | f(ξ)| | g(ξ)| . Example 5.2.3 Let u be the distance from the origin: u(x) = √ x · x = √ xixi. In three dimensions, this is u(x, y, z) = x2 + y2 + z2. 102
  • 123. The gradient of u, (x), is a unit vector in the direction of x. The gradient is: u(x) = x1 √ x · x , . . . , xn √ x · x = xiei √ xjxj . In three dimensions, we have u(x, y, z) = x x2 + y2 + z2 , y x2 + y2 + z2 , z x2 + y2 + z2 . This is a unit vector because the sum of the squared components sums to unity. u · u = xiei √ xjxj · xkek √ xlxl xixi xjxj = 1 Figure 5.2 shows a plot of the vector field of u in two dimensions. Figure 5.2: The gradient of the distance from the origin. Example 5.2.4 Consider an ellipse. An implicit equation of an ellipse is x2 a2 + y2 b2 = 1. We can also express an ellipse as u(x, y) + v(x, y) = c where u and v are the distance from the two foci. That is, an ellipse is the set of points such that the sum of the distances from the two foci is a constant. Let n = (u + v). This is a vector which is orthogonal to the ellipse when evaluated on the surface. Let t be a unit tangent to the surface. Since n and t are orthogonal, n · t = 0 ( u + v) · t = 0 u · t = v · (−t). Since these are unit vectors, the angle between u and t is equal to the angle between v and −t. In other words: If we draw rays from the foci to a point on the ellipse, the rays make equal angles with the ellipse. If the ellipse were a reflective surface, a wave starting at one focus would be reflected from the ellipse and travel to the other focus. See Figure 5.3. This result also holds for ellipsoids, u(x, y, z) + v(x, y, z) = c. 103
  • 124. u v θ θ n t v u-t θ θ Figure 5.3: An ellipse and rays from the foci. Figure 5.4: An elliptical dish. We see that an ellipsoidal dish could be used to collect spherical waves, (waves emanating from a point). If the dish is shaped so that the source of the waves is located at one foci and a collector is placed at the second, then any wave starting at the source and reflecting off the dish will travel to the collector. See Figure 5.4. 104
  • 125. 5.3 Exercises Vector Functions Exercise 5.1 Consider the parametric curve r = cos t 2 i + sin t 2 j. Calculate dr dt and d2 r dt2 . Plot the position and some velocity and acceleration vectors. Hint, Solution Exercise 5.2 Let r(t) be the position of an object moving with constant speed. Show that the acceleration of the object is orthogonal to the velocity of the object. Hint, Solution Vector Fields Exercise 5.3 Consider the paraboloid x2 + y2 − z = 0. What is the angle between the two tangent planes that touch the surface at (1, 1, 2) and (1, −1, 2)? What are the equations of the tangent planes at these points? Hint, Solution Exercise 5.4 Consider the paraboloid x2 + y2 − z = 0. What is the point on the paraboloid that is closest to (1, 0, 0)? Hint, Solution Exercise 5.5 Consider the region R defined by x2 + xy + y2 ≤ 9. What is the volume of the solid obtained by rotating R about the y axis? Is this the same as the volume of the solid obtained by rotating R about the x axis? Give geometric and algebraic explanations of this. Hint, Solution Exercise 5.6 Two cylinders of unit radius intersect at right angles as shown in Figure 5.5. What is the volume of the solid enclosed by the cylinders? Figure 5.5: Two cylinders intersecting. 105
  • 126. Hint, Solution Exercise 5.7 Consider the curve f(x) = 1/x on the interval [1 . . . ∞). Let S be the solid obtained by rotating f(x) about the x axis. (See Figure 5.6.) Show that the length of f(x) and the lateral area of S are infinite. Find the volume of S. 1 1 2 3 4 5 -1 0 1-1 0 1 1 2 3 4 5 -1 0 1 Figure 5.6: The rotation of 1/x about the x axis. Hint, Solution Exercise 5.8 Suppose that a deposit of oil looks like a cone in the ground as illustrated in Figure 5.7. Suppose that the oil has a density of 800kg/m3 and it’s vertical depth is 12m. How much work2 would it take to get the oil to the surface. 32 m 12 m 12 m ground surface Figure 5.7: The oil deposit. Hint, Solution Exercise 5.9 Find the area and volume of a sphere of radius R by integrating in spherical coordinates. Hint, Solution 1You could fill S with a finite amount of paint, but it would take an infinite amount of paint to cover its surface. 2 Recall that work = force × distance and force = mass × acceleration. 106
  • 127. 5.4 Hints Vector Functions Hint 5.1 Plot the velocity and acceleration vectors at regular intervals along the path of motion. Hint 5.2 If r(t) has constant speed, then |r (t)| = c. The condition that the acceleration is orthogonal to the velocity can be stated mathematically in terms of the dot product, r (t) · r (t) = 0. Write the condition of constant speed in terms of a dot product and go from there. Vector Fields Hint 5.3 The angle between two planes is the angle between the vectors orthogonal to the planes. The angle between the two vectors is θ = arccos 2, 2, −1 · 2, −2, −1 | 2, 2, −1 || 2, −2, −1 | The equation of a line orthogonal to a and passing through the point b is a · x = a · b. Hint 5.4 Since the paraboloid is a differentiable surface, the normal to the surface at the closest point will be parallel to the vector from the closest point to (1, 0, 0). We can express this using the gradient and the cross product. If (x, y, z) is the closest point on the paraboloid, then a vector orthogonal to the surface there is f = 2x, 2y, −1 . The vector from the surface to the point (1, 0, 0) is 1−x, −y, −z . These two vectors are parallel if their cross product is zero. Hint 5.5 CONTINUE Hint 5.6 CONTINUE Hint 5.7 CONTINUE Hint 5.8 Start with the formula for the work required to move the oil to the surface. Integrate over the mass of the oil. Work = (acceleration) (distance) d(mass) Here (distance) is the distance of the differential of mass from the surface. The acceleration is that of gravity, g. Hint 5.9 CONTINUE 107
  • 128. 5.5 Solutions Vector Functions Solution 5.1 The velocity is r′ = −(1/2) sin(t/2) i + (1/2) cos(t/2) j. The acceleration is r″ = −(1/4) cos(t/2) i − (1/4) sin(t/2) j. See Figure 5.8 for plots of position, velocity and acceleration. Figure 5.8: A Graph of Position and Velocity and of Position and Acceleration Solution 5.2 If r(t) has constant speed, then |r′(t)| = c. The condition that the acceleration is orthogonal to the velocity can be stated mathematically in terms of the dot product, r′(t) · r″(t) = 0. Note that we can write the condition of constant speed in terms of a dot product: √(r′(t) · r′(t)) = c, r′(t) · r′(t) = c². Differentiating this equation yields r″(t) · r′(t) + r′(t) · r″(t) = 0, that is, r′(t) · r″(t) = 0. This shows that the acceleration is orthogonal to the velocity. Vector Fields Solution 5.3 The gradient, which is orthogonal to the surface when evaluated there, is ∇f = 2x i + 2y j − k. 2i + 2j − k and 2i − 2j − k are orthogonal to the paraboloid (and hence the tangent planes) at the points (1, 1, 2) and (1, −1, 2), respectively. The angle between the tangent planes is the angle between the vectors orthogonal to the planes. The angle between the two vectors is θ = arccos( (⟨2, 2, −1⟩ · ⟨2, −2, −1⟩) / (|⟨2, 2, −1⟩| |⟨2, −2, −1⟩|) )
  • 129. θ = arccos(1/9) ≈ 1.45946. Recall that the equation of a plane orthogonal to a and passing through the point b is a · x = a · b. The equations of the tangent planes are ⟨2, ±2, −1⟩ · ⟨x, y, z⟩ = ⟨2, ±2, −1⟩ · ⟨1, ±1, 2⟩, that is, 2x ± 2y − z = 2. The paraboloid and the tangent planes are shown in Figure 5.9. Figure 5.9: Paraboloid and Two Tangent Planes Solution 5.4 Since the paraboloid is a differentiable surface, the normal to the surface at the closest point will be parallel to the vector from the closest point to (1, 0, 0). We can express this using the gradient and the cross product. If (x, y, z) is the closest point on the paraboloid, then a vector orthogonal to the surface there is ∇f = ⟨2x, 2y, −1⟩. The vector from the surface to the point (1, 0, 0) is ⟨1 − x, −y, −z⟩. These two vectors are parallel if their cross product is zero, ⟨2x, 2y, −1⟩ × ⟨1 − x, −y, −z⟩ = ⟨−y − 2yz, −1 + x + 2xz, −2y⟩ = 0. This gives us the three equations −y − 2yz = 0, −1 + x + 2xz = 0, −2y = 0. The third equation requires that y = 0. The first equation then becomes trivial and we are left with the second equation, −1 + x + 2xz = 0. Substituting z = x² + y² into this equation yields 2x³ + x − 1 = 0. The only real valued solution of this polynomial is x = (6^(−2/3)(9 + √87)^(2/3) − 6^(−1/3)) / (9 + √87)^(1/3) ≈ 0.589755. Thus the closest point to (1, 0, 0) on the paraboloid is (x, 0, x²) with this value of x, approximately (0.589755, 0, 0.34781). The closest point is shown graphically in Figure 5.10.
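  Both of these solutions are easy to spot-check by machine. The following sketch (sympy; the variable names are mine, not the text's) verifies the orthogonality claim of Solution 5.2 for the curve of Exercise 5.1, and the real root of 2x³ + x − 1 = 0 from Solution 5.4.

```python
import sympy as sp

t, x = sp.symbols('t x', real=True)

# Solution 5.2: constant speed implies r'(t) . r''(t) = 0
r = sp.Matrix([sp.cos(t/2), sp.sin(t/2)])   # the curve of Exercise 5.1
v, a = r.diff(t), r.diff(t, 2)
print(sp.simplify(v.dot(v)))                # 1/4: the speed is the constant 1/2
print(sp.simplify(v.dot(a)))                # 0: acceleration orthogonal to velocity

# Solution 5.4: the real root of 2x^3 + x - 1 = 0 and the closest point
root = sp.real_roots(2*x**3 + x - 1)[0]
print(sp.N(root), sp.N(root**2))            # 0.589755..., 0.347810...
```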
  • 130. Figure 5.10: Paraboloid, Tangent Plane and Line Connecting (1, 0, 0) to Closest Point Solution 5.5 We consider the region R defined by x² + xy + y² ≤ 9. The boundary of the region is an ellipse. (See Figure 5.11 for the ellipse and the solid obtained by rotating the region.) Figure 5.11: The curve x² + xy + y² = 9. Note that in rotating the region about the y axis, only the portions in the second and fourth quadrants make a contribution. Since the solid is symmetric across the xz plane, we will find the volume of the top half and then double this to get the volume of the whole solid. Now we consider rotating the region in the second quadrant about the y axis. In the equation for the ellipse, x² + xy + y² = 9, we solve for x: x = (−y ± √3 √(12 − y²))/2. In the second quadrant, the curve (−y − √3 √(12 − y²))/2 is defined on y ∈ [0 . . . √12] and the curve (−y + √3 √(12 − y²))/2 is defined on y ∈ [3 . . . √12]. (See Figure 5.12.) We find the volume obtained
  • 131. by rotating the first curve and subtract the volume from rotating the second curve. Figure 5.12: (−y − √3 √(12 − y²))/2 in red and (−y + √3 √(12 − y²))/2 in green. V = 2[ ∫₀^√12 π((−y − √3√(12 − y²))/2)² dy − ∫₃^√12 π((−y + √3√(12 − y²))/2)² dy ] V = (π/2)[ ∫₀^√12 (y + √3√(12 − y²))² dy − ∫₃^√12 (−y + √3√(12 − y²))² dy ] V = (π/2)[ ∫₀^√12 (−2y² + √12 y√(12 − y²) + 36) dy − ∫₃^√12 (−2y² − √12 y√(12 − y²) + 36) dy ] V = (π/2)[ (−(2/3)y³ − (2/√3)(12 − y²)^(3/2) + 36y)|₀^√12 − (−(2/3)y³ + (2/√3)(12 − y²)^(3/2) + 36y)|₃^√12 ] V = 72π Now consider the volume of the solid obtained by rotating R about the x axis. This is the same as the volume of the solid obtained by rotating R about the y axis. Geometrically we know this because R is symmetric about the line y = x. Now we justify it algebraically. Consider the phrase: Rotate the region x² + xy + y² ≤ 9 about the x axis. We formally swap x and y to obtain: Rotate the region y² + yx + x² ≤ 9 about the y axis. Which is the original problem. Solution 5.6 We find the volume of the intersecting cylinders by summing the volumes of the two cylinders and then subtracting the volume of their intersection. The volume of each of the cylinders is 2π. The intersection is shown in Figure 5.13. If we slice this solid along the plane z = const we have a square with side length 2√(1 − z²). The volume of the intersection of the cylinders is ∫₋₁¹ 4(1 − z²) dz. We compute the volume of the intersecting cylinders.
  • 132. Figure 5.13: The intersection of the two cylinders. V = 2(2π) − 2 ∫₀¹ 4(1 − z²) dz V = 4π − 16/3 Solution 5.7 The length of f(x) is L = ∫₁^∞ √(1 + 1/x²) dx. Since √(1 + 1/x²) > 1/x, the integral diverges. The length is infinite. We find the area of S by integrating the length of circles. A = ∫₁^∞ (2π/x) dx This integral also diverges. The area is infinite. Finally we find the volume of S by integrating the area of disks. V = ∫₁^∞ (π/x²) dx = [−π/x]₁^∞ = π Solution 5.8 First we write the formula for the work required to move the oil to the surface. We integrate over the mass of the oil. Work = ∫ (acceleration)(distance) d(mass) Here (distance) is the distance of the differential of mass from the surface. The acceleration is that of gravity, g. The differential of mass can be represented as a differential of volume times the density of the oil, 800 kg/m³. Work = ∫ 800 g (distance) d(volume) We place the coordinate axis so that z = 0 coincides with the bottom of the cone. The oil lies between z = 0 and z = 12. The cross sectional area of the oil deposit at a fixed depth is πz². Thus
  • 133. the differential of volume is πz² dz. This oil must be raised a distance of 24 − z. W = ∫₀¹² 800 g (24 − z) π z² dz W = 6912000πg W ≈ 2.13 × 10⁸ kg m²/s² Solution 5.9 The Jacobian in spherical coordinates is r² sin φ. area = ∫₀^2π ∫₀^π R² sin φ dφ dθ = 2πR² ∫₀^π sin φ dφ = 2πR² [−cos φ]₀^π area = 4πR² volume = ∫₀^R ∫₀^2π ∫₀^π r² sin φ dφ dθ dr = 2π ∫₀^R ∫₀^π r² sin φ dφ dr = 2π [r³/3]₀^R [−cos φ]₀^π volume = (4/3)πR³
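  The integrals in Solutions 5.6–5.9 can all be checked symbolically. A sketch using sympy (the code and names are mine; the numeric value assumes g = 9.8 m/s²):

```python
import sympy as sp

z, x = sp.symbols('z x', real=True)
r, phi, theta, g, R = sp.symbols('r phi theta g R', positive=True)

# Solution 5.6: square cross sections of side 2*sqrt(1 - z^2)
print(sp.integrate(4*(1 - z**2), (z, -1, 1)))      # 16/3, so V = 4*pi - 16/3

# Solution 5.7: Gabriel's horn has infinite area but finite volume
print(sp.integrate(2*sp.pi/x, (x, 1, sp.oo)))      # oo
print(sp.integrate(sp.pi/x**2, (x, 1, sp.oo)))     # pi

# Solution 5.8: W = integral of 800 g (24 - z) pi z^2 over 0 <= z <= 12
W = sp.integrate(800*g*(24 - z)*sp.pi*z**2, (z, 0, 12))
print(W, float(W.subs(g, 9.8)))                    # 6912000*pi*g, ~2.13e8

# Solution 5.9: sphere area and volume in spherical coordinates
print(sp.integrate(R**2*sp.sin(phi), (phi, 0, sp.pi), (theta, 0, 2*sp.pi)))
print(sp.integrate(r**2*sp.sin(phi), (phi, 0, sp.pi), (theta, 0, 2*sp.pi), (r, 0, R)))
```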
  • 134. 5.6 Quiz Problem 5.1 What is the distance from the origin to the plane x + 2y + 3z = 4? Solution Problem 5.2 A bead of mass m slides frictionlessly on a wire determined parametrically by w(s). The bead moves under the force of gravity. What is the acceleration of the bead as a function of the parameter s? Solution 114
  • 135. 5.7 Quiz Solutions Solution 5.1 Recall that the equation of a plane is x · n = a · n where a is a point in the plane and n is normal to the plane. We are considering the plane x + 2y + 3z = 4. A normal to the plane is ⟨1, 2, 3⟩. The unit normal is n = (1/√14)⟨1, 2, 3⟩. By substituting in x = y = 0, we see that a point in the plane is a = ⟨0, 0, 4/3⟩. The distance of the plane from the origin is a · n = 4/√14. Solution 5.2 The force of gravity is −gk. The unit tangent to the wire is w′(s)/|w′(s)|. The component of the gravitational force in the tangential direction is −gk · w′(s)/|w′(s)|. Thus the acceleration of the bead is −(g k · w′(s))/(m|w′(s)|).
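  A quick numeric cross-check of Solution 5.1 (numpy; my own sketch) confirms the corrected distance 4/√14:

```python
import numpy as np

# Distance from the origin to the plane x + 2y + 3z = 4
n = np.array([1.0, 2.0, 3.0])           # normal vector; |n| = sqrt(14)
a = np.array([0.0, 0.0, 4.0/3.0])       # a point in the plane (x = y = 0)
print(a @ (n / np.linalg.norm(n)))      # 1.0690... = 4/sqrt(14)
print(4/np.sqrt(14))
```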
  • 137. Part III Functions of a Complex Variable 117
  • 139. Chapter 6 Complex Numbers I’m sorry. You have reached an imaginary number. Please rotate your phone 90 degrees and dial again. -Message on answering machine of Cathy Vargas. 6.1 Complex Numbers Shortcomings of real numbers. When you started algebra, you learned that the quadratic equation: x2 + 2ax + b = 0 has either two, one or no solutions. For example: • x2 − 3x + 2 = 0 has the two solutions x = 1 and x = 2. • For x2 − 2x + 1 = 0, x = 1 is a solution of multiplicity two. • x2 + 1 = 0 has no solutions. This is a little unsatisfactory. We can formally solve the general quadratic equation. x2 + 2ax + b = 0 (x + a)2 = a2 − b x = −a ± a2 − b However, the solutions are defined only when the discriminant a2 −b is non-negative. This is because the square root function √ x is a bijection from R0+ to R0+ . (See Figure 6.1.) Figure 6.1: y = √ x 119
  • 140. A new mathematical constant. We cannot solve x2 = −1 because the square root of −1 is not defined. To overcome this apparent shortcoming of the real number system, we create a new symbolic constant √ −1. In performing arithmetic, we will treat √ −1 as we would a real constant like π or a formal variable like x, i.e. √ −1 + √ −1 = 2 √ −1. This constant has the property: √ −1 2 = −1. Now we can express the solutions of x2 = −1 as x = √ −1 and x = − √ −1. These satisfy the equation since √ −1 2 = −1 and − √ −1 2 = (−1)2 √ −1 2 = −1. Note that we can express the square root of any negative real number in terms of √ −1: √ −r = √ −1 √ r for r ≥ 0. Euler’s notation. Euler introduced the notation of using the letter i to denote √ −1. We will use the symbol ı, an i without a dot, to denote √ −1. This helps us distinguish it from i used as a variable or index.1 We call any number of the form ıb, b ∈ R, a pure imaginary number.2 Let a and b be real numbers. The product of a real number and an imaginary number is an imaginary number: (a)(ıb) = ı(ab). The product of two imaginary numbers is a real number: (ıa)(ıb) = −ab. However the sum of a real number and an imaginary number a + ıb is neither real nor imaginary. We call numbers of the form a + ıb complex numbers.3 The quadratic. Now we return to the quadratic with real coefficients, x2 +2ax+b = 0. It has the solutions x = −a± √ a2 − b. The solutions are real-valued only if a2 −b ≥ 0. If not, then we can define solutions as complex numbers. If the discriminant is negative, we write x = −a ± ı √ b − a2. Thus every quadratic polynomial with real coefficients has exactly two solutions, counting multiplicities. The fundamental theorem of algebra states that an nth degree polynomial with complex coefficients has n, not necessarily distinct, complex roots. We will prove this result later using the theory of functions of a complex variable. Component operations. Consider the complex number z = x+ıy, (x, y ∈ R). The real part of z is (z) = x; the imaginary part of z is (z) = y. Two complex numbers, z = x + ıy and ζ = ξ + ıψ, are equal if and only if x = ξ and y = ψ. The complex conjugate4 of z = x + ıy is z ≡ x − ıy. The notation z∗ ≡ x − ıy is also used. A little arithmetic. Consider two complex numbers: z = x + ıy, ζ = ξ + ıψ. It is easy to express the sum or difference as a complex number. z + ζ = (x + ξ) + ı(y + ψ), z − ζ = (x − ξ) + ı(y − ψ) It is also easy to form the product. zζ = (x + ıy)(ξ + ıψ) = xξ + ıxψ + ıyξ + ı2 yψ = (xξ − yψ) + ı(xψ + yξ) The quotient is a bit more difficult. (Assume that ζ is nonzero.) How do we express z/ζ = (x + ıy)/(ξ + ıψ) as the sum of a real number and an imaginary number? The trick is to multiply the numerator and denominator by the complex conjugate of ζ. z ζ = x + ıy ξ + ıψ = x + ıy ξ + ıψ ξ − ıψ ξ − ıψ = xξ − ıxψ − ıyξ − ı2 yψ ξ2 − ıξψ + ıψξ − ı2ψ2 = (xξ + yψ) − ı(xψ + yξ) ξ2 + ψ2 = (xξ + yψ) ξ2 + ψ2 −ı xψ + yξ ξ2 + ψ2 Now we recognize it as a complex number. 1 Electrical engineering types prefer to use  or j to denote √ −1. 2 “Imaginary” is an unfortunate term. Real numbers are artificial; constructs of the mind. Real numbers are no more real than imaginary numbers. 3 Here complex means “composed of two or more parts”, not “hard to separate, analyze, or solve”. Those who disagree have a complex number complex. 4 Conjugate: having features in common but opposite or inverse in some particular. 120
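  As an aside, Python's built-in complex type carries out exactly this conjugate trick; a minimal sketch (the numbers are arbitrary):

```python
z, zeta = 3 + 4j, 1 + 2j

# Multiply numerator and denominator by the conjugate of zeta
num = z * zeta.conjugate()
den = (zeta * zeta.conjugate()).real    # xi^2 + psi^2, a real number
print(num / den)                        # (2.2-0.4j)
print(z / zeta)                         # the built-in division agrees
```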
  • 141. Field properties. The set of complex numbers C form a field. That essentially means that we can do arithmetic with complex numbers. When performing arithmetic, we simply treat ı as a symbolic constant with the property that ı2 = −1. The field of complex numbers satisfy the following list of properties. Each one is easy to verify; some are proved below. (Let z, ζ, ω ∈ C.) 1. Closure under addition and multiplication. z + ζ = (x + ıy) + (ξ + ıψ) = (x + ξ) + ı (y + ψ) ∈ C zζ = (x + ıy) (ξ + ıψ) = xξ + ıxψ + ıyξ + ı2 yψ = (xξ − yψ) + ı (xψ + ξy) ∈ C 2. Commutativity of addition and multiplication. z + ζ = ζ + z. zζ = ζz. 3. Associativity of addition and multiplication. (z + ζ) + ω = z + (ζ + ω). (zζ) ω = z (ζω). 4. Distributive law. z (ζ + ω) = zζ + zω. 5. Identity with respect to addition and multiplication. Zero is the additive identity element, z + 0 = z; unity is the muliplicative identity element, z(1) = z. 6. Inverse with respect to addition. z + (−z) = (x + ıy) + (−x − ıy) = (x − x) + ı(y − y) = 0. 7. Inverse with respect to multiplication for nonzero numbers. zz−1 = 1, where z−1 = 1 z = 1 x + ıy = 1 x + ıy x − ıy x − ıy = x − ıy x2 + y2 = x x2 + y2 − ı y x2 + y2 Properties of the complex conjugate. Using the field properties of complex numbers, we can derive the following properties of the complex conjugate, z = x − ıy. 1. (z) = z, 2. z + ζ = z + ζ, 3. zζ = zζ, 4. z ζ = (z) ζ . 6.2 The Complex Plane Complex plane. We can denote a complex number z = x + ıy as an ordered pair of real numbers (x, y). Thus we can represent a complex number as a point in R2 where the first component is the real part and the second component is the imaginary part of z. This is called the complex plane or the Argand diagram. (See Figure 6.2.) A complex number written as z = x + ıy is said to be in Cartesian form, or a + ıb form. Recall that there are two ways of describing a point in the complex plane: an ordered pair of coordinates (x, y) that give the horizontal and vertical offset from the origin or the distance r from the origin and the angle θ from the positive horizontal axis. The angle θ is not unique. It is only determined up to an additive integer multiple of 2π. 121
  • 142. Im(z) Re(z) r (x,y) θ Figure 6.2: The complex plane. Modulus. The magnitude or modulus of a complex number is the distance of the point from the origin. It is defined as |z| = |x + ıy| = x2 + y2. Note that zz = (x + ıy)(x − ıy) = x2 + y2 = |z|2 . The modulus has the following properties. 1. |zζ| = |z| |ζ| 2. z ζ = |z| |ζ| for ζ = 0. 3. |z + ζ| ≤ |z| + |ζ| 4. |z + ζ| ≥ ||z| − |ζ|| We could prove the first two properties by expanding in x + ıy form, but it would be fairly messy. The proofs will become simple after polar form has been introduced. The second two properties follow from the triangle inequalities in geometry. This will become apparent after the relationship between complex numbers and vectors is introduced. One can show that |z1z2 · · · zn| = |z1| |z2| · · · |zn| and |z1 + z2 + · · · + zn| ≤ |z1| + |z2| + · · · + |zn| with proof by induction. Argument. The argument of a complex number is the angle that the vector with tail at the origin and head at z = x+ıy makes with the positive x-axis. The argument is denoted arg(z). Note that the argument is defined for all nonzero numbers and is only determined up to an additive integer multiple of 2π. That is, the argument of a complex number is the set of values: {θ + 2πn | n ∈ Z}. The principal argument of a complex number is that angle in the set arg(z) which lies in the range (−π, π]. The principal argument is denoted Arg(z). We prove the following identities in Exercise 6.10. arg(zζ) = arg(z) + arg(ζ) Arg(zζ) = Arg(z) + Arg(ζ) arg z2 = arg(z) + arg(z) = 2 arg(z) Example 6.2.1 Consider the equation |z −1−ı| = 2. The set of points satisfying this equation is a circle of radius 2 and center at 1 + ı in the complex plane. You can see this by noting that |z − 1 − ı| is the distance from the point (1, 1). (See Figure 6.3.) Another way to derive this is to substitute z = x + ıy into the equation. |x + ıy − 1 − ı| = 2 (x − 1)2 + (y − 1)2 = 2 (x − 1)2 + (y − 1)2 = 4 This is the analytic geometry equation for a circle of radius 2 centered about (1, 1). 122
  • 143. Figure 6.3: Solution of |z − 1 − ı| = 2. Example 6.2.2 Consider the curve described by |z| + |z − 2| = 4. Note that |z| is the distance from the origin in the complex plane and |z − 2| is the distance from z = 2. The equation is (distance from (0, 0)) + (distance from (2, 0)) = 4. From geometry, we know that this is an ellipse with foci at (0, 0) and (2, 0), semi-major axis 2, and semi-minor axis √3. (See Figure 6.4.) Figure 6.4: Solution of |z| + |z − 2| = 4. We can use the substitution z = x + ıy to get the equation in algebraic form. |z| + |z − 2| = 4 |x + ıy| + |x + ıy − 2| = 4 √(x² + y²) + √((x − 2)² + y²) = 4 x² + y² = 16 − 8√((x − 2)² + y²) + x² − 4x + 4 + y² x − 5 = −2√((x − 2)² + y²) x² − 10x + 25 = 4x² − 16x + 16 + 4y² (1/4)(x − 1)² + (1/3)y² = 1 Thus we have the standard form for an equation describing an ellipse.
  • 144. 6.3 Polar Form Polar form. A complex number written in Cartesian form, z = x + ıy, can be converted polar form, z = r(cos θ + ı sin θ), using trigonometry. Here r = |z| is the modulus and θ = arctan(x, y) is the argument of z. The argument is the angle between the x axis and the vector with its head at (x, y). (See Figure 6.5.) Note that θ is not unique. If z = r(cos θ +ı sin θ) then z = r(cos(θ +2nπ)+ ı sin(θ + 2nπ)) for any n ∈ Z. Re( ) r Im( ) (x,y) r z θ sinθ z θcosr Figure 6.5: Polar form. The arctangent. Note that arctan(x, y) is not the same thing as the old arctangent that you learned about in trigonometry arctan(x, y) is sensitive to the quadrant of the point (x, y), while arctan y x is not. For example, arctan(1, 1) = π 4 + 2nπ and arctan(−1, −1) = −3π 4 + 2nπ, whereas arctan −1 −1 = arctan 1 1 = arctan(1). Euler’s formula. Euler’s formula, eıθ = cos θ + ı sin θ,5 allows us to write the polar form more compactly. Expressing the polar form in terms of the exponential function of imaginary argument makes arithmetic with complex numbers much more convenient. z = r(cos θ + ı sin θ) = r eıθ The exponential of an imaginary argument has all the nice properties that we know from studying functions of a real variable, like eıa eıb = eı(a+b) . Later on we will introduce the exponential of a complex number. Using Euler’s Formula, we can express the cosine and sine in terms of the exponential. eıθ + e−ıθ 2 = (cos(θ) + ı sin(θ)) + (cos(−θ) + ı sin(−θ)) 2 = cos(θ) eıθ − e−ıθ ı2 = (cos(θ) + ı sin(θ)) − (cos(−θ) + ı sin(−θ)) ı2 = sin(θ) Arithmetic with complex numbers. Note that it is convenient to add complex numbers in Cartesian form. z + ζ = (x + ıy) + (ξ + ıψ) = (x + ξ) + ı (y + ψ) However, it is difficult to multiply or divide them in Cartesian form. zζ = (x + ıy) (ξ + ıψ) = (xξ − yψ) + ı (xψ + ξy) z ζ = x + ıy ξ + ıψ = (x + ıy) (ξ − ıψ) (ξ + ıψ) (ξ − ıψ) = xξ + yψ ξ2 + ψ2 + ı ξy − xψ ξ2 + ψ2 5 See Exercise 6.17 for justification of Euler’s formula. 124
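  In most programming languages the two-argument arctangent used here is available as atan2; note that it takes its arguments in the order (y, x), the reverse of the text's arctan(x, y). A small sketch with Python's math and cmath modules:

```python
import math, cmath

print(math.atan2(1, 1))           #  pi/4,   i.e. arctan(1, 1)
print(math.atan2(-1, -1))         # -3*pi/4, i.e. arctan(-1, -1)
print(math.atan(1))               #  pi/4: the one-argument form loses the quadrant

# Converting between Cartesian and polar form
print(cmath.polar(1 + 1j))        # (sqrt(2), pi/4)
print(cmath.rect(2, math.pi/6))   # 2 e^{i pi/6} = sqrt(3) + i
```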
  • 145. On the other hand, it is difficult to add complex numbers in polar form. z + ζ = r eıθ + ρ eıφ = r(cos θ + ı sin θ) + ρ(cos φ + ı sin φ) = (r cos θ + ρ cos φ) + ı(r sin θ + ρ sin φ) = √((r cos θ + ρ cos φ)² + (r sin θ + ρ sin φ)²) eı arctan(r cos θ + ρ cos φ, r sin θ + ρ sin φ) = √(r² + ρ² + 2rρ cos(θ − φ)) eı arctan(r cos θ + ρ cos φ, r sin θ + ρ sin φ) However, it is convenient to multiply and divide them in polar form. zζ = r eıθ ρ eıφ = rρ eı(θ+φ) z/ζ = r eıθ / (ρ eıφ) = (r/ρ) eı(θ−φ) Keeping this in mind will make working with complex numbers a shade or two less grungy. Result 6.3.1 Euler's formula is eıθ = cos θ + ı sin θ. We can write the cosine and sine in terms of the exponential: cos(θ) = (eıθ + e−ıθ)/2, sin(θ) = (eıθ − e−ıθ)/(ı2). To change between Cartesian and polar form, use the identities r eıθ = r cos θ + ır sin θ, x + ıy = √(x² + y²) eı arctan(x,y). Cartesian form is convenient for addition. Polar form is convenient for multiplication and division. Example 6.3.1 We write 5 + ı7 in polar form: 5 + ı7 = √74 eı arctan(5,7). We write 2 eıπ/6 in Cartesian form: 2 eıπ/6 = 2 cos(π/6) + 2ı sin(π/6) = √3 + ı. Example 6.3.2 We will prove the trigonometric identity cos⁴θ = (1/8) cos(4θ) + (1/2) cos(2θ) + 3/8.
  • 146. We start by writing the cosine in terms of the exponential. cos4 θ = eıθ + e−ıθ 2 4 = 1 16 eı4θ +4 eı2θ +6 + 4 e−ı2θ + e−ı4θ = 1 8 eı4θ + e−ı4θ 2 + 1 2 eı2θ + e−ı2θ 2 + 3 8 = 1 8 cos(4θ) + 1 2 cos(2θ) + 3 8 By the definition of exponentiation, we have eınθ = eıθ n We apply Euler’s formula to obtain a result which is useful in deriving trigonometric identities. cos(nθ) + ı sin(nθ) = (cos θ + ı sin θ)n Result 6.3.2 DeMoivre’s Theorem.a cos(nθ) + ı sin(nθ) = (cos θ + ı sin θ)n aIt’s amazing what passes for a theorem these days. I would think that this would be a corollary at most. Example 6.3.3 We will express cos(5θ) in terms of cos θ and sin(5θ) in terms of sin θ. We start with DeMoivre’s theorem. eı5θ = eıθ 5 cos(5θ) + ı sin(5θ) = (cos θ + ı sin θ)5 = 5 0 cos5 θ + ı 5 1 cos4 θ sin θ − 5 2 cos3 θ sin2 θ − ı 5 3 cos2 θ sin3 θ + 5 4 cos θ sin4 θ + ı 5 5 sin5 θ = cos5 θ − 10 cos3 θ sin2 θ + 5 cos θ sin4 θ + ı 5 cos4 θ sin θ − 10 cos2 θ sin3 θ + sin5 θ Then we equate the real and imaginary parts. cos(5θ) = cos5 θ − 10 cos3 θ sin2 θ + 5 cos θ sin4 θ sin(5θ) = 5 cos4 θ sin θ − 10 cos2 θ sin3 θ + sin5 θ Finally we use the Pythagorean identity, cos2 θ + sin2 θ = 1. cos(5θ) = cos5 θ − 10 cos3 θ 1 − cos2 θ + 5 cos θ 1 − cos2 θ 2 cos(5θ) = 16 cos5 θ − 20 cos3 θ + 5 cos θ sin(5θ) = 5 1 − sin2 θ 2 sin θ − 10 1 − sin2 θ sin3 θ + sin5 θ sin(5θ) = 16 sin5 θ − 20 sin3 θ + 5 sin θ 6.4 Arithmetic and Vectors Addition. We can represent the complex number z = x+ıy = r eıθ as a vector in Cartesian space with tail at the origin and head at (x, y), or equivalently, the vector of length r and angle θ. With the vector representation, we can add complex numbers by connecting the tail of one vector to the head of the other. The vector z + ζ is the diagonal of the parallelogram defined by z and ζ. (See Figure 6.6.) 126
  • 147. Negation. The negative of z = x + ıy is −z = −x − ıy. In polar form we have z = r eıθ and −z = r eı(θ+π) (more generally, −z = r eı(θ+(2n+1)π), n ∈ Z). In terms of vectors, −z has the same magnitude but opposite direction as z. (See Figure 6.6.) Multiplication. The product of z = r eıθ and ζ = ρ eıφ is zζ = rρ eı(θ+φ). The length of the vector zζ is the product of the lengths of z and ζ. The angle of zζ is the sum of the angles of z and ζ. (See Figure 6.6.) Note that arg(zζ) = arg(z) + arg(ζ). Each of these arguments has an infinite number of values. If we write out the multi-valuedness explicitly, we have {θ + φ + 2πn : n ∈ Z} = {θ + 2πn : n ∈ Z} + {φ + 2πn : n ∈ Z}. The same is not true of the principal argument. In general, Arg(zζ) ≠ Arg(z) + Arg(ζ). Consider the case z = ζ = eı3π/4. Then Arg(z) = Arg(ζ) = 3π/4, however, Arg(zζ) = −π/2. Figure 6.6: Addition, negation and multiplication. Multiplicative inverse. Assume that z is nonzero. The multiplicative inverse of z = r eıθ is 1/z = (1/r) e−ıθ. The length of 1/z is the multiplicative inverse of the length of z. The angle of 1/z is the negative of the angle of z. (See Figure 6.7.) Division. Assume that ζ is nonzero. The quotient of z = r eıθ and ζ = ρ eıφ is z/ζ = (r/ρ) eı(θ−φ). The length of the vector z/ζ is the quotient of the lengths of z and ζ. The angle of z/ζ is the difference of the angles of z and ζ. (See Figure 6.7.) Complex conjugate. The complex conjugate of z = x + ıy = r eıθ is z̄ = x − ıy = r e−ıθ. z̄ is the mirror image of z, reflected across the x axis. In other words, z̄ has the same magnitude as z and the angle of z̄ is the negative of the angle of z. (See Figure 6.7.) 6.5 Integer Exponents Consider the product (a + ıb)ⁿ, n ∈ Z. If we know arctan(a, b) then it will be most convenient to expand the product working in polar form. If not, we can write n in base 2 to efficiently do the multiplications. Example 6.5.1 Suppose that we want to write (√3 + ı)²⁰ in Cartesian form.⁶ We can do the multiplication directly. Note that 20 is 10100 in base 2. That is, 20 = 2⁴ + 2². We first calculate ⁶No, I have no idea why we would want to do that. Just humor me. If you pretend that you’re interested, I’ll do the same. Believe me, expressing your real feelings here isn’t going to do anyone any good.
  • 148. Figure 6.7: Multiplicative inverse, division and complex conjugate. the powers of the form (√3 + ı)^(2ⁿ) by successive squaring. (√3 + ı)² = 2 + ı2√3 (√3 + ı)⁴ = −8 + ı8√3 (√3 + ı)⁸ = −128 − ı128√3 (√3 + ı)¹⁶ = −32768 + ı32768√3 Next we multiply (√3 + ı)⁴ and (√3 + ı)¹⁶ to obtain the answer. (√3 + ı)²⁰ = (−32768 + ı32768√3)(−8 + ı8√3) = −524288 − ı524288√3 Since we know that arctan(√3, 1) = π/6, it is easiest to do this problem by first changing to modulus-argument form. (√3 + ı)²⁰ = (√((√3)² + 1²) eı arctan(√3,1))²⁰ = (2 eıπ/6)²⁰ = 2²⁰ eı4π/3 = 1048576 (−1/2 − ı√3/2) = −524288 − ı524288√3 Example 6.5.2 Consider (5 + ı7)¹¹. We will do the exponentiation in polar form and write the result in Cartesian form. (5 + ı7)¹¹ = (√74 eı arctan(5,7))¹¹ = 74⁵ √74 (cos(11 arctan(5, 7)) + ı sin(11 arctan(5, 7))) = 2219006624√74 cos(11 arctan(5, 7)) + ı2219006624√74 sin(11 arctan(5, 7)) The result is correct, but not very satisfying. This expression could be simplified. You could evaluate the trigonometric functions with some fairly messy trigonometric identities. This would take much more work than directly multiplying (5 + ı7)¹¹.
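  The successive-squaring trick of Example 6.5.1 is just binary exponentiation; a sketch of the general algorithm (my own code, not from the text):

```python
import cmath, math

def power_by_squaring(z, n):
    """Compute z**n for a positive integer n with O(log n) multiplications."""
    result = 1
    while n > 0:
        if n & 1:          # the current binary digit of n is 1:
            result *= z    # fold the current power of z into the result
        z *= z             # successive squaring
        n >>= 1
    return result

z = complex(math.sqrt(3), 1)
print(power_by_squaring(z, 20))           # -524288 - 524288*sqrt(3) i
print(cmath.rect(2**20, 20*math.pi/6))    # the same via (2 e^{i pi/6})^20
```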
  • 149. 6.6 Rational Exponents In this section we consider complex numbers with rational exponents, zp/q , where p/q is a rational number. First we consider unity raised to the 1/n power. We define 11/n as the set of numbers {z} such that zn = 1. 11/n = {z | zn = 1} We can find these values by writing z in modulus-argument form. zn = 1 rn eınθ = 1 rn = 1 nθ = 0 mod 2π r = 1 θ = 2πk for k ∈ Z 11/n = eı2πk/n | k ∈ Z There are only n distinct values as a result of the 2π periodicity of eıθ . eı2π = eı0 . 11/n = eı2πk/n | k = 0, . . . , n − 1 These values are equally spaced points on the unit circle in the complex plane. Example 6.6.1 11/6 has the 6 values, eı0 , eıπ/3 , eı2π/3 , eıπ , eı4π/3 , eı5π/3 . In Cartesian form this is 1, 1 + ı √ 3 2 , −1 + ı √ 3 2 , −1, −1 − ı √ 3 2 , 1 − ı √ 3 2 . The sixth roots of unity are plotted in Figure 6.8. -1 1 -1 1 Figure 6.8: The sixth roots of unity. The nth roots of the complex number c = α eıβ are the set of numbers z = r eıθ such that zn = c = α eıβ rn eınθ = α eıβ r = n √ α nθ = β mod 2π r = n √ α θ = (β + 2πk)/n for k = 0, . . . , n − 1. Thus c1/n = n √ α eı(β+2πk)/n | k = 0, . . . , n − 1 = n |c| eı(Arg(c)+2πk)/n | k = 0, . . . , n − 1 129
  • 150. Principal roots. The principal nth root is denoted ⁿ√z ≡ ⁿ√|z| eı Arg(z)/n. Thus the principal root has the property −π/n < Arg(ⁿ√z) ≤ π/n. This is consistent with the notation from functions of a real variable: ⁿ√x denotes the positive nth root of a positive real number. We adopt the convention that z^(1/n) denotes the nth roots of z, which is a set of n numbers, and ⁿ√z is the principal nth root of z, which is a single number. The nth roots of z are the principal nth root of z times the nth roots of unity. z^(1/n) = { ⁿ√r eı(Arg(z)+2πk)/n | k = 0, . . . , n − 1 } z^(1/n) = { ⁿ√z eı2πk/n | k = 0, . . . , n − 1 } z^(1/n) = ⁿ√z 1^(1/n) Rational exponents. We interpret z^(p/q) to mean z^((p/q)). That is, we first simplify the exponent, i.e. reduce the fraction, before carrying out the exponentiation. Therefore z^(2/4) = z^(1/2) and z^(10/5) = z². If p/q is a reduced fraction (p and q are relatively prime, in other words, they have no common factors), then z^(p/q) ≡ (z^p)^(1/q). Thus z^(p/q) is a set of q values. Note that for an un-reduced fraction r/s, (z^r)^(1/s) ≠ (z^(1/s))^r. The former expression is a set of s values while the latter is a set of no more than s values. For instance, (1²)^(1/2) = 1^(1/2) = ±1 and (1^(1/2))² = (±1)² = 1. Example 6.6.2 Consider 2^(1/5), (1 + ı)^(1/3) and (2 + ı)^(5/6). 2^(1/5) = ⁵√2 eı2πk/5, for k = 0, 1, 2, 3, 4 (1 + ı)^(1/3) = (√2 eıπ/4)^(1/3) = ⁶√2 eıπ/12 eı2πk/3, for k = 0, 1, 2 (2 + ı)^(5/6) = (√5 eı Arctan(2,1))^(5/6) = (√5⁵ eı5 Arctan(2,1))^(1/6) = ¹²√5⁵ eı(5/6) Arctan(2,1) eıπk/3, for k = 0, 1, 2, 3, 4, 5 Example 6.6.3 We find the roots of z⁵ + 4. (−4)^(1/5) = (4 eıπ)^(1/5) = ⁵√4 eıπ(1+2k)/5, for k = 0, 1, 2, 3, 4
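  A short sketch (cmath; my own helper name) that computes the set c^(1/n) exactly as described above, applied to Example 6.6.3:

```python
import cmath, math

def nth_roots(c, n):
    """All n values of c**(1/n): |c|**(1/n) times the n roots of unity."""
    alpha, beta = cmath.polar(c)    # c = alpha e^{i beta}, beta = Arg(c)
    return [alpha**(1/n) * cmath.exp(1j*(beta + 2*math.pi*k)/n)
            for k in range(n)]

for w in nth_roots(-4, 5):          # the roots of z^5 + 4 = 0
    print(w, w**5)                  # each satisfies w^5 ≈ -4
```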
  • 151. 6.7 Exercises Complex Numbers Exercise 6.1 If z = x + ıy, write the following in the form a + ıb: 1. (1 + ı2)7 2. 1 (zz) 3. ız + z (3 + ı)9 Hint, Solution Exercise 6.2 Verify that: 1. 1 + ı2 3 − ı4 + 2 − ı ı5 = − 2 5 2. (1 − ı)4 = −4 Hint, Solution Exercise 6.3 Write the following complex numbers in the form a + ıb. 1. 1 + ı √ 3 −10 2. (11 + ı4)2 Hint, Solution Exercise 6.4 Write the following complex numbers in the form a + ıb 1. 2 + ı ı6 − (1 − ı2) 2 2. (1 − ı)7 Hint, Solution Exercise 6.5 If z = x + ıy, write the following in the form u(x, y) + ıv(x, y). 1. z z 2. z + ı2 2 − ız Hint, Solution 131
  • 152. Exercise 6.6 Quaternions are sometimes used as a generalization of complex numbers. A quaternion u may be defined as u = u0 + ıu1 + u2 + ku3 where u0, u1, u2 and u3 are real numbers and ı,  and k are objects which satisfy ı2 = 2 = k2 = −1, ı = k, ı = −k and the usual associative and distributive laws. Show that for any quaternions u, w there exists a quaternion v such that uv = w except for the case u0 = u1 = u2 = u3. Hint, Solution Exercise 6.7 Let α = 0, β = 0 be two complex numbers. Show that α = tβ for some real number t (i.e. the vectors defined by α and β are parallel) if and only if αβ = 0. Hint, Solution The Complex Plane Exercise 6.8 Find and depict all values of 1. (1 + ı)1/3 2. ı1/4 Identify the principal root. Hint, Solution Exercise 6.9 Sketch the regions of the complex plane: 1. | (z)| + 2| (z)| ≤ 1 2. 1 ≤ |z − ı| ≤ 2 3. |z − ı| ≤ |z + ı| Hint, Solution Exercise 6.10 Prove the following identities. 1. arg(zζ) = arg(z) + arg(ζ) 2. Arg(zζ) = Arg(z) + Arg(ζ) 3. arg z2 = arg(z) + arg(z) = 2 arg(z) Hint, Solution Exercise 6.11 Show, both by geometric and algebraic arguments, that for complex numbers z and ζ the inequalities ||z| − |ζ|| ≤ |z + ζ| ≤ |z| + |ζ| hold. Hint, Solution 132
  • 153. Exercise 6.12 Find all the values of 1. (−1)−3/4 2. 81/6 and show them graphically. Hint, Solution Exercise 6.13 Find all values of 1. (−1)−1/4 2. 161/8 and show them graphically. Hint, Solution Exercise 6.14 Sketch the regions or curves described by 1. 1 < |z − ı2| < 2 2. | (z)| + 5| (z)| = 1 3. |z − ı| = |z + ı| Hint, Solution Exercise 6.15 Sketch the regions or curves described by 1. |z − 1 + ı| ≤ 1 2. (z) − (z) = 5 3. |z − ı| + |z + ı| = 1 Hint, Solution Exercise 6.16 Solve the equation | eıθ −1| = 2 for θ (0 ≤ θ ≤ π) and verify the solution geometrically. Hint, Solution Polar Form Exercise 6.17 Show that Euler’s formula, eıθ = cos θ +ı sin θ, is formally consistent with the standard Taylor series expansions for the real functions ex , cos x and sin x. Consider the Taylor series of ex about x = 0 to be the definition of the exponential function for complex argument. Hint, Solution Exercise 6.18 Use de Moivre’s formula to derive the trigonometric identity cos(3θ) = cos3 (θ) − 3 cos(θ) sin2 (θ). Hint, Solution 133
  • 154. Exercise 6.19 Establish the formula 1 + z + z2 + · · · + zn = 1 − zn+1 1 − z , (z = 1), for the sum of a finite geometric series; then derive the formulas 1. 1 + cos(θ) + cos(2θ) + · · · + cos(nθ) = 1 2 + sin((n + 1/2)) 2 sin(θ/2) 2. sin(θ) + sin(2θ) + · · · + sin(nθ) = 1 2 cot θ 2 − cos((n + 1/2)) 2 sin(θ/2) where 0 < θ < 2π. Hint, Solution Arithmetic and Vectors Exercise 6.20 Prove |zζ| = |z||ζ| and z ζ = |z| |ζ| using polar form. Hint, Solution Exercise 6.21 Prove that |z + ζ| 2 + |z − ζ| 2 = 2 |z| 2 + |ζ| 2 . Interpret this geometrically. Hint, Solution Integer Exponents Exercise 6.22 Write (1 + ı)10 in Cartesian form with the following two methods: 1. Just do the multiplication. If it takes you more than four multiplications, you suck. 2. Do the multiplication in polar form. Hint, Solution Rational Exponents Exercise 6.23 Show that each of the numbers z = −a + a2 − b 1/2 satisfies the equation z2 + 2az + b = 0. Hint, Solution 134
  • 155. 6.8 Hints Complex Numbers Hint 6.1 Hint 6.2 Hint 6.3 Hint 6.4 Hint 6.5 Hint 6.6 Hint 6.7 The Complex Plane Hint 6.8 Hint 6.9 Hint 6.10 Write the multivaluedness explicitly. Hint 6.11 Consider a triangle with vertices at 0, z and z + ζ. Hint 6.12 Hint 6.13 Hint 6.14 Hint 6.15 Hint 6.16 Polar Form 135
  • 156. Hint 6.17 Find the Taylor series of eıθ, cos θ and sin θ. Note that ı²ⁿ = (−1)ⁿ. Hint 6.18 Hint 6.19 Arithmetic and Vectors Hint 6.20 |eıθ| = 1. Hint 6.21 Consider the parallelogram defined by z and ζ. Integer Exponents Hint 6.22 For the first part, (1 + ı)¹⁰ = (((1 + ı)²)²)² (1 + ı)². Rational Exponents Hint 6.23 Substitute the numbers into the equation.
  • 157. 6.9 Solutions Complex Numbers Solution 6.1 1. We can do the exponentiation by directly multiplying. (1 + ı2)7 = (1 + ı2)(1 + ı2)2 (1 + ı2)4 = (1 + ı2)(−3 + ı4)(−3 + ı4)2 = (11 − ı2)(−7 − ı24) = 29 + ı278 We can also do the problem using De Moivre’s Theorem. (1 + ı2)7 = √ 5 eı arctan(1,2) 7 = 125 √ 5 eı7 arctan(1,2) = 125 √ 5 cos(7 arctan(1, 2)) + ı125 √ 5 sin(7 arctan(1, 2)) 2. 1 (zz) = 1 (x − ıy)2 = 1 (x − ıy)2 (x + ıy)2 (x + ıy)2 = (x + ıy)2 (x2 + y2)2 = x2 − y2 (x2 + y2)2 + ı 2xy (x2 + y2)2 3. We can evaluate the expression using De Moivre’s Theorem. ız + z (3 + ı)9 = (−y + ıx + x − ıy)(3 + ı)−9 = (1 + ı)(x − y) √ 10 eı arctan(3,1) −9 = (1 + ı)(x − y) 1 10000 √ 10 e−ı9 arctan(3,1) = (1 + ı)(x − y) 10000 √ 10 (cos(9 arctan(3, 1)) − ı sin(9 arctan(3, 1))) = (x − y) 10000 √ 10 (cos(9 arctan(3, 1)) + sin(9 arctan(3, 1))) + ı (x − y) 10000 √ 10 (cos(9 arctan(3, 1)) − sin(9 arctan(3, 1))) 137
  • 158. We can also do this problem by directly multiplying but it’s a little grungy. ız + z (3 + ı)9 = (−y + ıx + x − ıy)(3 − ı)9 109 = (1 + ı)(x − y)(3 − ı) (3 − ı)2 2 2 109 = (1 + ı)(x − y)(3 − ı) (8 − ı6) 2 2 109 = (1 + ı)(x − y)(3 − ı)(28 − ı96)2 109 = (1 + ı)(x − y)(3 − ı)(−8432 − ı5376) 109 = (x − y)(−22976 − ı38368) 109 = 359(y − x) 15625000 + ı 1199(y − x) 31250000 Solution 6.2 1. 1 + ı2 3 − ı4 + 2 − ı ı5 = 1 + ı2 3 − ı4 3 + ı4 3 + ı4 + 2 − ı ı5 −ı −ı = −5 + ı10 25 + −1 − ı2 5 = − 2 5 2. (1 − ı)4 = (−ı2)2 = −4 Solution 6.3 1. First we do the multiplication in Cartesian form. 1 + ı √ 3 −10 = 1 + ı √ 3 2 1 + ı √ 3 8 −1 = −2 + ı2 √ 3 −2 + ı2 √ 3 4 −1 = −2 + ı2 √ 3 −8 − ı8 √ 3 2 −1 = −2 + ı2 √ 3 −128 + ı128 √ 3 −1 = −512 − ı512 √ 3 −1 = 1 512 −1 1 + ı √ 3 = 1 512 −1 1 + ı √ 3 1 − ı √ 3 1 − ı √ 3 = − 1 2048 + ı √ 3 2048 138
  • 159. Now we do the multiplication in modulus-argument, (polar), form. 1 + ı √ 3 −10 = 2 eıπ/3 −10 = 2−10 e−ı10π/3 = 1 1024 cos − 10π 3 + ı sin − 10π 3 = 1 1024 cos 4π 3 − ı sin 4π 3 = 1 1024 − 1 2 + ı √ 3 2 = − 1 2048 + ı √ 3 2048 2. (11 + ı4)2 = 105 + ı88 Solution 6.4 1. 2 + ı ı6 − (1 − ı2) 2 = 2 + ı −1 + ı8 2 = 3 + ı4 −63 − ı16 = 3 + ı4 −63 − ı16 −63 + ı16 −63 + ı16 = − 253 4225 − ı 204 4225 2. (1 − ı)7 = (1 − ı)2 2 (1 − ı)2 (1 − ı) = (−ı2)2 (−ı2)(1 − ı) = (−4)(−2 − ı2) = 8 + ı8 Solution 6.5 1. z z = x + ıy x + ıy = x − ıy x + ıy = x + ıy x − ıy = x + ıy x − ıy x + ıy x + ıy = x2 − y2 x2 + y2 + ı 2xy x2 + y2 139
  • 160. 2. z + ı2 2 − ız = x + ıy + ı2 2 − ı(x − ıy) = x + ı(y + 2) 2 − y − ıx = x + ı(y + 2) 2 − y − ıx 2 − y + ıx 2 − y + ıx = x(2 − y) − (y + 2)x (2 − y)2 + x2 + ı x2 + (y + 2)(2 − y) (2 − y)2 + x2 = −2xy (2 − y)2 + x2 + ı 4 + x2 − y2 (2 − y)2 + x2 Solution 6.6 Method 1. We expand the equation uv = w in its components. uv = w (u0 + ıu1 + u2 + ku3) (v0 + ıv1 + v2 + kv3) = w0 + ıw1 + w2 + kw3 (u0v0 − u1v1 − u2v2 − u3v3) + ı (u1v0 + u0v1 − u3v2 + u2v3) +  (u2v0 + u3v1 + u0v2 − u1v3) + k (u3v0 − u2v1 + u1v2 + u0v3) = w0 + ıw1 + w2 + kw3 We can write this as a matrix equation.     u0 −u1 −u2 −u3 u1 u0 −u3 u2 u2 u3 u0 −u1 u3 −u2 u1 u0         v0 v1 v2 v3     =     w0 w1 w2 w3     This linear system of equations has a unique solution for v if and only if the determinant of the matrix is nonzero. The determinant of the matrix is u2 0 + u2 1 + u2 2 + u2 3 2 . This is zero if and only if u0 = u1 = u2 = u3 = 0. Thus there exists a unique v such that uv = w if u is nonzero. This v is v = (u0w0 + u1w1 + u2w2 + u3w3)+ı (−u1w0 + u0w1 + u3w2 − u2w3)+ (−u2w0 − u3w1 + u0w2 + u1w3) + k (−u3w0 + u2w1 − u1w2 + u0w3) / u2 0 + u2 1 + u2 2 + u2 3 Method 2. Note that uu is a real number. uu = (u0 − ıu1 − u2 − ku3) (u0 + ıu1 + u2 + ku3) = u2 0 + u2 1 + u2 2 + u2 3 + ı (u0u1 − u1u0 − u2u3 + u3u2) +  (u0u2 + u1u3 − u2u0 − u3u1) + k (u0u3 − u1u2 + u2u1 − u3u0) = u2 0 + u2 1 + u2 2 + u2 3 uu = 0 only if u = 0. We solve for v by multiplying by the conjugate of u and dividing by uu. uv = w uuv = uw v = uw uu v = (u0 − ıu1 − u2 − ku3) (w0 + ıw1 + w2 + kw3) u2 0 + u2 1 + u2 2 + u2 3 v = (u0w0 + u1w1 + u2w2 + u3w3)+ı (−u1w0 + u0w1 + u3w2 − u2w3)+ (−u2w0 − u3w1 + u0w2 + u1w3) + k (−u3w0 + u2w1 − u1w2 + u0w3) / u2 0 + u2 1 + u2 2 + u2 3 140
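  Method 2 of Solution 6.6 translates directly into code. A sketch (numpy; hypothetical helper names qmul and qdiv) representing a quaternion as the array [u0, u1, u2, u3]:

```python
import numpy as np

def qmul(u, v):
    """Quaternion product u v, using the component formulas above."""
    u0, u1, u2, u3 = u
    v0, v1, v2, v3 = v
    return np.array([u0*v0 - u1*v1 - u2*v2 - u3*v3,
                     u1*v0 + u0*v1 - u3*v2 + u2*v3,
                     u2*v0 + u3*v1 + u0*v2 - u1*v3,
                     u3*v0 - u2*v1 + u1*v2 + u0*v3])

def qdiv(w, u):
    """Solve u v = w: multiply by the conjugate of u and divide by u u-bar."""
    u_conj = np.array([u[0], -u[1], -u[2], -u[3]])
    return qmul(u_conj, w) / (u @ u)    # u @ u = u0^2 + u1^2 + u2^2 + u3^2

u = np.array([1.0, 2.0, -1.0, 0.5])
w = np.array([0.0, 3.0, 1.0, -2.0])
v = qdiv(w, u)
print(qmul(u, v), w)                    # the two agree
```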
  • 161. Solution 6.7 If α = tβ, then αβ = t|β|2 , which is a real number. Hence αβ = 0. Now assume that αβ = 0. This implies that αβ = r for some r ∈ R. We multiply by β and simplify. α|β|2 = rβ α = r |β|2 β By taking t = r |β|2 We see that α = tβ for some real number t. The Complex Plane Solution 6.8 1. (1 + ı)1/3 = √ 2 eıπ/4 1/3 = 6 √ 2 eıπ/12 11/3 = 6 √ 2 eıπ/12 eı2πk/3 , k = 0, 1, 2 = 6 √ 2 eıπ/12 , 6 √ 2 eı3π/4 , 6 √ 2 eı17π/12 The principal root is 3 √ 1 + ı = 6 √ 2 eıπ/12 . The roots are depicted in Figure 6.9. -1 1 -1 1 Figure 6.9: (1 + ı)1/3 2. ı1/4 = eıπ/2 1/4 = eıπ/8 11/4 = eıπ/8 eı2πk/4 , k = 0, 1, 2, 3 = eıπ/8 , eı5π/8 , eı9π/8 , eı13π/8 The principal root is 4 √ ı = eıπ/8 . The roots are depicted in Figure 6.10. 141
  • 162. -1 1 -1 1 Figure 6.10: ı1/4 Solution 6.9 1. | (z)| + 2| (z)| ≤ 1 |x| + 2|y| ≤ 1 In the first quadrant, this is the triangle below the line y = (1 − x)/2. We reflect this triangle across the coordinate axes to obtain triangles in the other quadrants. Explicitly, we have the set of points: {z = x + ıy | −1 ≤ x ≤ 1 ∧ |y| ≤ (1 − |x|)/2}. See Figure 6.11. 1 1 −1 −1 Figure 6.11: | (z)| + 2| (z)| ≤ 1 2. |z − ı| is the distance from the point ı in the complex plane. Thus 1 < |z − ı| < 2 is an annulus centered at z = ı between the radii 1 and 2. See Figure 6.12. 3. The points which are closer to z = ı than z = −ı are those points in the upper half plane. See Figure 6.13. Solution 6.10 Let z = r eıθ and ζ = ρ eıφ . 1. arg(zζ) = arg(z) + arg(ζ) arg rρ eı(θ+φ) = {θ + 2πm} + {φ + 2πn} {θ + φ + 2πk} = {θ + φ + 2πm} 142
  • 163. -3 -2 -1 1 2 3 -2 -1 1 2 3 4 Figure 6.12: 1 < |z − ı| < 2 1 1 −1 −1 Figure 6.13: The upper half plane. 2. Arg(zζ) = Arg(z) + Arg(ζ) Consider z = ζ = −1. Arg(z) = Arg(ζ) = π, however Arg(zζ) = Arg(1) = 0. The identity becomes 0 = 2π. 3. arg z2 = arg(z) + arg(z) = 2 arg(z) arg r2 eı2θ = {θ + 2πk} + {θ + 2πm} = 2{θ + 2πn} {2θ + 2πk} = {2θ + 2πm} = {2θ + 4πn} Solution 6.11 Consider a triangle in the complex plane with vertices at 0, z and z + ζ. (See Figure 6.14.) The lengths of the sides of the triangle are |z|, |ζ| and |z + ζ| The second inequality shows that one side of the triangle must be less than or equal to the sum of the other two sides. |z + ζ| ≤ |z| + |ζ| The first inequality shows that the length of one side of the triangle must be greater than or equal to the difference in the length of the other two sides. |z + ζ| ≥ ||z| − |ζ|| 143
  • 164. z ζ |ζ| z+ ζ |z+ |ζ|z| Figure 6.14: Triangle inequality. Now we prove the inequalities algebraically. We will reduce the inequality to an identity. Let z = r eıθ , ζ = ρ eıφ . ||z| − |ζ|| ≤ |z + ζ| ≤ |z| + |ζ| |r − ρ| ≤ |r eıθ +ρ eıφ | ≤ r + ρ (r − ρ) 2 ≤ r eıθ +ρ eıφ r e−ıθ +ρ e−ıφ ≤ (r + ρ) 2 r2 + ρ2 − 2rρ ≤ r2 + ρ2 + rρ eı(θ−φ) +rρ eı(−θ+φ) ≤ r2 + ρ2 + 2rρ −2rρ ≤ 2rρ cos (θ − φ) ≤ 2rρ −1 ≤ cos(θ − φ) ≤ 1 Solution 6.12 1. (−1)−3/4 = (−1)−3 1/4 = (−1)1/4 = (eıπ ) 1/4 = eıπ/4 11/4 = eıπ/4 eıkπ/2 , k = 0, 1, 2, 3 = eıπ/4 , eı3π/4 , eı5π/4 , eı7π/4 = 1 + ı √ 2 , −1 + ı √ 2 , −1 − ı √ 2 , 1 − ı √ 2 See Figure 6.15. 2. 81/6 = 6 √ 811/6 = √ 2 eıkπ/3 , k = 0, 1, 2, 3, 4, 5 = √ 2, √ 2 eıπ/3 , √ 2 eı2π/3 , √ 2 eıπ , √ 2 eı4π/3 , √ 2 eı5π/3 = √ 2, 1 + ı √ 3 √ 2 , −1 + ı √ 3 √ 2 , − √ 2, −1 − ı √ 3 √ 2 , 1 − ı √ 3 √ 2 See Figure 6.16. 144
  • 165. -1 1 -1 1 Figure 6.15: (−1)−3/4 -2 -1 1 2 -2 -1 1 2 Figure 6.16: 81/6 Solution 6.13 1. (−1)−1/4 = ((−1)−1 )1/4 = (−1)1/4 = (eıπ ) 1/4 = eıπ/4 11/4 = eıπ/4 eıkπ/2 , k = 0, 1, 2, 3 = eıπ/4 , eı3π/4 , eı5π/4 , eı7π/4 = 1 + ı √ 2 , −1 + ı √ 2 , −1 − ı √ 2 , 1 − ı √ 2 See Figure 6.17. 2. 161/8 = 8 √ 1611/8 = √ 2 eıkπ/4 , k = 0, 1, 2, 3, 4, 5, 6, 7 = √ 2, √ 2 eıπ/4 , √ 2 eıπ/2 , √ 2 eı3π/4 , √ 2 eıπ , √ 2 eı5π/4 , √ 2 eı3π/2 , √ 2 eı7π/4 = √ 2, 1 + ı, ı √ 2, −1 + ı, − √ 2, −1 − ı, −ı √ 2, 1 − ı 145
  • 166. -1 1 -1 1 Figure 6.17: (−1)−1/4 -1 1 -1 1 Figure 6.18: 16−1/8 See Figure 6.18. Solution 6.14 1. |z − ı2| is the distance from the point ı2 in the complex plane. Thus 1 < |z − ı2| < 2 is an annulus. See Figure 6.19. -3 -2 -1 1 2 3 -1 1 2 3 4 5 Figure 6.19: 1 < |z − ı2| < 2 2. | (z)| + 5| (z)| = 1 |x| + 5|y| = 1 146
  • 167. In the first quadrant this is the line y = (1 − x)/5. We reflect this line segment across the coordinate axes to obtain line segments in the other quadrants. Explicitly, we have the set of points: {z = x + ıy | −1 < x < 1 ∧ y = ±(1 − |x|)/5}. See Figure 6.20. -1 1 -0.4 -0.2 0.2 0.4 Figure 6.20: | (z)| + 5| (z)| = 1 3. The set of points equidistant from ı and −ı is the real axis. See Figure 6.21. -1 1 -1 1 Figure 6.21: |z − ı| = |z + ı| Solution 6.15 1. |z − 1 + ı| is the distance from the point (1 − ı). Thus |z − 1 + ı| ≤ 1 is the disk of unit radius centered at (1 − ı). See Figure 6.22. -1 1 2 3 -3 -2 -1 1 Figure 6.22: |z − 1 + ı| < 1 2. (z) − (z) = 5 x − y = 5 y = x − 5 147
  • 168. See Figure 6.23. -10 -5 5 10 -15 -10 -5 5 Figure 6.23: (z) − (z) = 5 3. Since |z − ı| + |z + ı| ≥ 2, there are no solutions of |z − ı| + |z + ı| = 1. Solution 6.16 | eıθ −1| = 2 eıθ −1 e−ıθ −1 = 4 1 − eıθ − e−ıθ +1 = 4 −2 cos(θ) = 2 θ = π eıθ | 0 ≤ θ ≤ π is a unit semi-circle in the upper half of the complex plane from 1 to −1. The only point on this semi-circle that is a distance 2 from the point 1 is the point −1, which corresponds to θ = π. Polar Form Solution 6.17 We recall the Taylor series expansion of ex about x = 0. ex = ∞ n=0 xn n! . We take this as the definition of the exponential function for complex argument. eıθ = ∞ n=0 (ıθ)n n! = ∞ n=0 ın n! θn = ∞ n=0 (−1)n (2n)! θ2n + ı ∞ n=0 (−1)n (2n + 1)! θ2n+1 We compare this expression to the Taylor series for the sine and cosine. cos θ = ∞ n=0 (−1)n (2n)! θ2n , sin θ = ∞ n=0 (−1)n (2n + 1)! θ2n+1 , 148
  • 169. Thus eıθ and cos θ + ı sin θ have the same Taylor series expansions about θ = 0. eıθ = cos θ + ı sin θ Solution 6.18 cos(3θ) + ı sin(3θ) = (cos θ + ı sin θ)3 cos(3θ) + ı sin(3θ) = cos3 θ + ı3 cos2 θ sin θ − 3 cos θ sin2 θ − ı sin3 θ We equate the real parts of the equation. cos(3θ) = cos3 θ − 3 cos θ sin2 θ Solution 6.19 Define the partial sum, Sn(z) = n k=0 zk . Now consider (1 − z)Sn(z). (1 − z)Sn(z) = (1 − z) n k=0 zk (1 − z)Sn(z) = n k=0 zk − n+1 k=1 zk (1 − z)Sn(z) = 1 − zn+1 We divide by 1 − z. Note that 1 − z is nonzero. Sn(z) = 1 − zn+1 1 − z 1 + z + z2 + · · · + zn = 1 − zn+1 1 − z , (z = 1) Now consider z = eıθ where 0 < θ < 2π so that z is not unity. n k=0 eıθ k = 1 − eıθ n+1 1 − eıθ n k=0 eıkθ = 1 − eı(n+1)θ 1 − eıθ In order to get sin(θ/2) in the denominator, we multiply top and bottom by e−ıθ/2 . n k=0 (cos(kθ) + ı sin(kθ)) = e−ıθ/2 − eı(n+1/2)θ e−ıθ/2 − eıθ/2 n k=0 cos(kθ) + ı n k=0 sin(kθ) = cos(θ/2) − ı sin(θ/2) − cos((n + 1/2)θ) − ı sin((n + 1/2)θ) −2ı sin(θ/2) n k=0 cos(kθ) + ı n k=1 sin(kθ) = 1 2 + sin((n + 1/2)θ) sin(θ/2) + ı 1 2 cot(θ/2) − cos((n + 1/2)θ) sin(θ/2) 149
  • 170. 1. We take the real and imaginary part of this to obtain the identities. n k=0 cos(kθ) = 1 2 + sin((n + 1/2)θ) 2 sin(θ/2) 2. n k=1 sin(kθ) = 1 2 cot(θ/2) − cos((n + 1/2)θ) 2 sin(θ/2) Arithmetic and Vectors Solution 6.20 |zζ| = |r eıθ ρ eıφ | = |rρ eı(θ+φ) | = |rρ| = |r||ρ| = |z||ζ| z ζ = r eıθ ρ eıφ = r ρ eı(θ−φ) = r ρ = |r| |ρ| = |z| |ζ| Solution 6.21 |z + ζ| 2 + |z − ζ| 2 = (z + ζ) z + ζ + (z − ζ) z − ζ = zz + zζ + ζz + ζζ + zz − zζ − ζz + ζζ = 2 |z| 2 + |ζ| 2 Consider the parallelogram defined by the vectors z and ζ. The lengths of the sides are z and ζ and the lengths of the diagonals are z + ζ and z − ζ. We know from geometry that the sum of the squared lengths of the diagonals of a parallelogram is equal to the sum of the squared lengths of the four sides. (See Figure 6.24.) Integer Exponents 150
  • 171. Figure 6.24: The parallelogram defined by z and ζ. Solution 6.22 1. (1 + ı)¹⁰ = (((1 + ı)²)²)² (1 + ı)² = ((ı2)²)² (ı2) = (−4)² (ı2) = 16(ı2) = ı32 2. (1 + ı)¹⁰ = (√2 eıπ/4)¹⁰ = (√2)¹⁰ eı10π/4 = 32 eıπ/2 = ı32 Rational Exponents Solution 6.23 We substitute the numbers into the equation to obtain an identity. z² + 2az + b = 0 (−a + (a² − b)^(1/2))² + 2a(−a + (a² − b)^(1/2)) + b = 0 a² − 2a(a² − b)^(1/2) + a² − b − 2a² + 2a(a² − b)^(1/2) + b = 0 0 = 0
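  Several of the identities established in these solutions lend themselves to machine verification; a sympy sketch (my own code) checking Solutions 6.17, 6.18 and, numerically, 6.19:

```python
import sympy as sp

th = sp.symbols('theta', real=True)

# Solution 6.17: e^{i theta} and cos theta + i sin theta share a Taylor series
lhs = sp.exp(sp.I*th).series(th, 0, 8).removeO()
rhs = (sp.cos(th) + sp.I*sp.sin(th)).series(th, 0, 8).removeO()
print(sp.expand(lhs - rhs))                         # 0

# Solution 6.18: cos(3 theta) = cos^3(theta) - 3 cos(theta) sin^2(theta)
z3 = sp.expand((sp.cos(th) + sp.I*sp.sin(th))**3)
print(sp.simplify(sp.re(z3) - (sp.cos(th)**3 - 3*sp.cos(th)*sp.sin(th)**2)))

# Solution 6.19: the cosine sum against its closed form, at a sample point
n, t = 5, 0.7
total = sum(sp.cos(k*t) for k in range(n + 1))
closed = sp.Rational(1, 2) + sp.sin((n + sp.Rational(1, 2))*t)/(2*sp.sin(t/2))
print(sp.N(total - closed))                         # ~0
```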
  • 173. Chapter 7 Functions of a Complex Variable If brute force isn’t working, you’re not using enough of it. -Tim Mauch In this chapter we introduce the algebra of functions of a complex variable. We will cover the trigonometric and inverse trigonometric functions. The properties of trigonometric functions carry over directly from real-variable theory. However, because of multi-valuedness, the inverse trigono- metric functions are significantly trickier than their real-variable counterparts. 7.1 Curves and Regions In this section we introduce curves and regions in the complex plane. This material is necessary for the study of branch points in this chapter and later for contour integration. Curves. Consider two continuous functions x(t) and y(t) defined on the interval t ∈ [t0..t1]. The set of points in the complex plane, {z(t) = x(t) + ıy(t) | t ∈ [t0 . . . t1]}, defines a continuous curve or simply a curve. If the endpoints coincide ( z (t0) = z (t1) ) it is a closed curve. (We assume that t0 = t1.) If the curve does not intersect itself, then it is said to be a simple curve. If x(t) and y(t) have continuous derivatives and the derivatives do not both vanish at any point, then it is a smooth curve.1 This essentially means that the curve does not have any corners or other nastiness. A continuous curve which is composed of a finite number of smooth curves is called a piecewise smooth curve. We will use the word contour as a synonym for a piecewise smooth curve. See Figure 7.1 for a smooth curve, a piecewise smooth curve, a simple closed curve and a non- simple closed curve. Regions. A region R is connected if any two points in R can be connected by a curve which lies entirely in R. A region is simply-connected if every closed curve in R can be continuously shrunk to a point without leaving R. A region which is not simply-connected is said to be multiply-connected region. Another way of defining simply-connected is that a path connecting two points in R can be continuously deformed into any other path that connects those points. Figure 7.2 shows a simply- connected region with two paths which can be continuously deformed into one another and two multiply-connected regions with paths which cannot be deformed into one another. 1Why is it necessary that the derivatives do not both vanish? 153
  • 174. (a) (b) (c) (d) Figure 7.1: (a) Smooth curve. (b) Piecewise smooth curve. (c) Simple closed curve. (d) Non-simple closed curve. Figure 7.2: A simply-connected and two multiply-connected regions. Jordan curve theorem. A continuous, simple, closed curve is known as a Jordan curve. The Jordan Curve Theorem, which seems intuitively obvious but is difficult to prove, states that a Jordan curve divides the plane into a simply-connected, bounded region and an unbounded region. These two regions are called the interior and exterior regions, respectively. The two regions share the curve as a boundary. Points in the interior are said to be inside the curve; points in the exterior are said to be outside the curve. Traversal of a contour. Consider a Jordan curve. If you traverse the curve in the positive direction, then the inside is to your left. If you traverse the curve in the opposite direction, then the outside will be to your left and you will go around the curve in the negative direction. For circles, the positive direction is the counter-clockwise direction. The positive direction is consistent with the way angles are measured in a right-handed coordinate system, i.e. for a circle centered on the origin, the positive direction is the direction of increasing angle. For an oriented contour C, we denote the contour with opposite orientation as −C. Boundary of a region. Consider a simply-connected region. The boundary of the region is traversed in the positive direction if the region is to the left as you walk along the contour. For multiply-connected regions, the boundary may be a set of contours. In this case the boundary is traversed in the positive direction if each of the contours is traversed in the positive direction. When we refer to the boundary of a region we will assume it is given the positive orientation. In Figure 7.3 the boundaries of three regions are traversed in the positive direction. Figure 7.3: Traversing the boundary in the positive direction. 154
  • 175. Two interpretations of a curve. Consider a simple closed curve as depicted in Figure 7.4a. By giving it an orientation, we can make a contour that either encloses the bounded domain, Figure 7.4b, or the unbounded domain, Figure 7.4c. Thus a curve has two interpretations. It can be thought of as enclosing either the points which are “inside” or the points which are “outside”.² Figure 7.4: Two interpretations of a curve. 7.2 The Point at Infinity and the Stereographic Projection Complex infinity. In real variables, there are only two ways to get to infinity. We can either go up or down the number line. Thus signed infinity makes sense. By going up or down we respectively approach +∞ and −∞. In the complex plane there are an infinite number of ways to approach infinity. We stand at the origin, point ourselves in any direction and go straight. We could walk along the positive real axis and approach infinity via positive real numbers. We could walk along the positive imaginary axis and approach infinity via pure imaginary numbers. We could generalize the real variable notion of signed infinity to a complex variable notion of directional infinity, but this will not be useful for our purposes. Instead, we introduce complex infinity or the point at infinity as the limit of going infinitely far along any direction in the complex plane. The complex plane together with the point at infinity form the extended complex plane. Stereographic projection. We can visualize the point at infinity with the stereographic projection. We place a unit sphere on top of the complex plane so that the south pole of the sphere is at the origin. Consider a line passing through the north pole and a point z = x + ıy in the complex plane. In the stereographic projection, the point z is mapped to the point where the line intersects the sphere. (See Figure 7.5.) Each point z = x + ıy in the complex plane is mapped to a unique point (X, Y, Z) on the sphere. X = 4x/(|z|² + 4), Y = 4y/(|z|² + 4), Z = 2|z|²/(|z|² + 4) The origin is mapped to the south pole. The point at infinity, |z| = ∞, is mapped to the north pole. In the stereographic projection, circles in the complex plane are mapped to circles on the unit sphere. Figure 7.6 shows circles along the real and imaginary axes under the mapping. Lines in the complex plane are also mapped to circles on the unit sphere. The right diagram in Figure 7.6 shows lines emanating from the origin under the mapping. The stereographic projection helps us reason about the point at infinity. When we consider the complex plane by itself, the point at infinity is an abstract notion. We can’t draw a picture of the point at infinity. It may be hard to accept the notion of a Jordan curve enclosing the point at infinity. However, in the stereographic projection, the point at infinity is just an ordinary point (namely the north pole of the sphere). ² A farmer wanted to know the most efficient way to build a pen to enclose his sheep, so he consulted an engineer, a physicist and a mathematician. The engineer suggested that he build a circular pen to get the maximum area for any given perimeter. The physicist suggested that he build a fence at infinity and then shrink it to fit the sheep. The mathematician constructed a little fence around himself and then defined himself to be outside.
  • 176. Figure 7.5: The stereographic projection. Figure 7.6: The stereographic projection of circles and lines.
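  The projection formulas are easy to experiment with. A sketch (numpy; my own function name) that also confirms every image point lies on the sphere of radius 1 centered at (0, 0, 1):

```python
import numpy as np

def stereographic(z):
    """Map z = x + iy to (X, Y, Z) on the unit sphere resting on the origin."""
    x, y, m2 = z.real, z.imag, abs(z)**2
    return np.array([4*x, 4*y, 2*m2]) / (m2 + 4)

print(stereographic(0j))          # [0, 0, 0]: the origin -> the south pole
print(stereographic(1e9 + 0j))    # ~[0, 0, 2]: large |z| -> the north pole
P = stereographic(3 - 4j)
print(np.linalg.norm(P - np.array([0.0, 0.0, 1.0])))   # 1.0: P is on the sphere
```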
  • 177. 7.3 A Gentle Introduction to Branch Points In this section we will introduce the concepts of branches, branch points and branch cuts. These concepts (which are notoriously difficult to understand for beginners) are typically defined in terms of functions of a complex variable. Here we will develop these ideas as they relate to the arctangent function arctan(x, y). Hopefully this simple example will make the treatment in Section 7.9 more palatable. First we review some properties of the arctangent. It is a mapping from R² to R. It measures the angle around the origin from the positive x axis. Thus it is a multi-valued function. For a fixed point in the domain, the function values differ by integer multiples of 2π. The arctangent is not defined at the origin nor at the point at infinity; it is singular at these two points. If we plot some of the values of the arctangent, it looks like a corkscrew with axis through the origin. A portion of this function is plotted in Figure 7.7. Figure 7.7: A portion of the multi-valued arctangent. Most of the tools we have for analyzing functions (continuity, differentiability, etc.) depend on the fact that the function is single-valued. In order to work with the arctangent we need to select a portion to obtain a single-valued function. Consider the domain (−1..2) × (1..4). On this domain we select the value of the arctangent that is between 0 and π. The domain and a plot of the selected values of the arctangent are shown in Figure 7.8. Figure 7.8: A domain and a selected value of the arctangent for the points in the domain. CONTINUE. 7.4 Cartesian and Modulus-Argument Form We can write a function of a complex variable z as a function of x and y or as a function of r and θ with the substitutions z = x + ıy and z = r eıθ, respectively. Then we can separate the real and
  • 178. imaginary components or write the function in modulus-argument form, f(z) = u(x, y) + ıv(x, y), or f(z) = u(r, θ) + ıv(r, θ), f(z) = ρ(x, y) eıφ(x,y) , or f(z) = ρ(r, θ) eıφ(r,θ) . Example 7.4.1 Consider the functions f(z) = z, f(z) = z3 and f(z) = 1 1−z . We write the functions in terms of x and y and separate them into their real and imaginary components. f(z) = z = x + ıy f(z) = z3 = (x + ıy)3 = x3 + ıx2 y − xy2 − ıy3 = x3 − xy2 + ı x2 y − y3 f(z) = 1 1 − z = 1 1 − x − ıy = 1 1 − x − ıy 1 − x + ıy 1 − x + ıy = 1 − x (1 − x)2 + y2 + ı y (1 − x)2 + y2 Example 7.4.2 Consider the functions f(z) = z, f(z) = z3 and f(z) = 1 1−z . We write the functions in terms of r and θ and write them in modulus-argument form. f(z) = z = r eıθ f(z) = z3 = r eıθ 3 = r3 eı3θ f(z) = 1 1 − z = 1 1 − r eıθ = 1 1 − r eıθ 1 1 − r e−ıθ = 1 − r e−ıθ 1 − r eıθ −r e−ıθ +r2 = 1 − r cos θ + ır sin θ 1 − 2r cos θ + r2 158
  • 179. Note that the denominator is real and non-negative. = 1 1 − 2r cos θ + r2 |1 − r cos θ + ır sin θ| eı arctan(1−r cos θ,r sin θ) = 1 1 − 2r cos θ + r2 (1 − r cos θ)2 + r2 sin2 θ eı arctan(1−r cos θ,r sin θ) = 1 1 − 2r cos θ + r2 1 − 2r cos θ + r2 cos2 θ + r2 sin2 θ eı arctan(1−r cos θ,r sin θ) = 1 √ 1 − 2r cos θ + r2 eı arctan(1−r cos θ,r sin θ) 7.5 Graphing Functions of a Complex Variable We cannot directly graph functions of a complex variable as they are mappings from R2 to R2 . To do so would require four dimensions. However, we can can use a surface plot to graph the real part, the imaginary part, the modulus or the argument of a function of a complex variable. Each of these are scalar fields, mappings from R2 to R. Example 7.5.1 Consider the identity function, f(z) = z. In Cartesian coordinates and Cartesian form, the function is f(z) = x + ıy. The real and imaginary components are u(x, y) = x and v(x, y) = y. (See Figure 7.9.) In modulus argument form the function is -2 -1 0 1 2 x -2 -1 0 1 2 y -2 -1 0 1 2 -2 -1 0 1 2 x -2 -1 0 1 2 x -2 -1 0 1 2 y -2 -1 0 1 2 -2 -1 0 1 2 x Figure 7.9: The real and imaginary parts of f(z) = z = x + ıy. f(z) = z = r eıθ = x2 + y2 eı arctan(x,y) . The modulus of f(z) is a single-valued function which is the distance from the origin. The argument of f(z) is a multi-valued function. Recall that arctan(x, y) has an infinite number of values each of which differ by an integer multiple of 2π. A few branches of arg(f(z)) are plotted in Figure 7.10. The modulus and principal argument of f(z) = z are plotted in Figure 7.11. -2 -1 0 1 2 x -2 -1 0 1 2y -5 0 5 -2 -1 0 1 2 x -2 -1 0 1 2y Figure 7.10: A few branches of arg(z). 159
Figure 7.11: Plots of |z| and Arg(z).

Example 7.5.2 Consider the function f(z) = z^2. In Cartesian coordinates and separated into its real and imaginary components the function is

f(z) = z^2 = (x + ıy)^2 = (x^2 − y^2) + ı2xy.

Figure 7.12 shows surface plots of the real and imaginary parts of z^2. The magnitude of z^2 is

|z^2| = √(z^2 z̄^2) = z z̄ = (x + ıy)(x − ıy) = x^2 + y^2.

Figure 7.12: Plots of ℜ(z^2) and ℑ(z^2).

Note that

z^2 = (r e^{ıθ})^2 = r^2 e^{ı2θ}.

In Figure 7.13 are plots of |z^2| and a branch of arg(z^2).

Figure 7.13: Plots of |z^2| and a branch of arg(z^2).
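As a sanity check on Example 7.5.2 (ours, not from the text), one can confirm numerically that |z^2| = x^2 + y^2 and that squaring doubles the argument. Note that the principal argument of z^2 may differ from 2 Arg(z) by 2π, which is precisely the branch phenomenon taken up in Section 7.9.

import cmath

z = complex(-1.2, 0.5)
r, theta = abs(z), cmath.phase(z)            # z = r e^{i theta}

assert abs(abs(z**2) - (z.real**2 + z.imag**2)) < 1e-12   # |z^2| = x^2 + y^2
assert abs(z**2 - r**2 * cmath.exp(2j * theta)) < 1e-12   # z^2 = r^2 e^{i 2 theta}

# The principal argument of z^2 need not equal 2*Arg(z); here they differ by 2*pi.
print(cmath.phase(z**2), 2 * theta)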
7.6 Trigonometric Functions

The exponential function. Consider the exponential function e^z. We can use Euler's formula to write e^z = e^{x+ıy} in terms of its real and imaginary parts.

e^z = e^{x+ıy} = e^x e^{ıy} = e^x cos y + ı e^x sin y

From this we see that the exponential function is ı2π periodic: e^{z+ı2π} = e^z, and ıπ odd periodic: e^{z+ıπ} = −e^z. Figure 7.14 has surface plots of the real and imaginary parts of e^z which show this periodicity.

Figure 7.14: Plots of ℜ(e^z) and ℑ(e^z).

The modulus of e^z is a function of x alone.

|e^z| = |e^{x+ıy}| = e^x

The argument of e^z is a function of y alone.

arg(e^z) = arg(e^{x+ıy}) = {y + 2πn | n ∈ Z}

In Figure 7.15 are plots of |e^z| and a branch of arg(e^z).

Figure 7.15: Plots of |e^z| and a branch of arg(e^z).
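These two facts are easy to confirm numerically; the check below is ours and previews the mapping property proved in Example 7.6.1 that follows.

import cmath, math

# i2pi periodicity: e^{z + i 2 pi} = e^z
z = complex(0.3, 1.1)
assert abs(cmath.exp(z + 2j * math.pi) - cmath.exp(z)) < 1e-12

# e^z maps the horizontal line z = x + i c to a ray at angle c from the origin
c = math.pi / 3
for x in (-2.0, 0.0, 2.0):
    w = cmath.exp(complex(x, c))
    assert abs(cmath.phase(w) - c) < 1e-12      # constant argument c
    assert abs(abs(w) - math.exp(x)) < 1e-12    # modulus e^x along the ray
print("periodicity and the ray mapping check out")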
Example 7.6.1 Show that the transformation w = e^z maps the infinite strip, −∞ < x < ∞, 0 < y < π, onto the upper half-plane.

Method 1. Consider the line z = x + ıc, −∞ < x < ∞. Under the transformation, this is mapped to

w = e^{x+ıc} = e^{ıc} e^x, −∞ < x < ∞.

This is a ray from the origin to infinity in the direction of e^{ıc}. Thus we see that z = x is mapped to the positive, real w axis, z = x + ıπ is mapped to the negative, real axis, and z = x + ıc, 0 < c < π, is mapped to a ray with angle c in the upper half-plane. Thus the strip is mapped to the upper half-plane. See Figure 7.16.

Figure 7.16: e^z maps horizontal lines to rays.

Method 2. Consider the line z = c + ıy, 0 < y < π. Under the transformation, this is mapped to

w = e^{c+ıy} = e^c e^{ıy}, 0 < y < π.

This is a semi-circle in the upper half-plane of radius e^c. As c → −∞, the radius goes to zero. As c → ∞, the radius goes to infinity. Thus the strip is mapped to the upper half-plane. See Figure 7.17.

Figure 7.17: e^z maps vertical lines to circular arcs.

The sine and cosine. We can write the sine and cosine in terms of the exponential function.

(e^{ız} + e^{−ız})/2 = (cos(z) + ı sin(z) + cos(−z) + ı sin(−z))/2
    = (cos(z) + ı sin(z) + cos(z) − ı sin(z))/2
    = cos z

(e^{ız} − e^{−ız})/(ı2) = (cos(z) + ı sin(z) − cos(−z) − ı sin(−z))/(ı2)
    = (cos(z) + ı sin(z) − cos(z) + ı sin(z))/(ı2)
    = sin z

We separate the sine and cosine into their real and imaginary parts.

cos z = cos x cosh y − ı sin x sinh y        sin z = sin x cosh y + ı cos x sinh y

For fixed y, the sine and cosine are oscillatory in x. The amplitude of the oscillations grows with increasing |y|. See Figure 7.18 and Figure 7.19 for plots of the real and imaginary parts of the cosine and sine, respectively. Figure 7.20 shows the modulus of the cosine and the sine.
Figure 7.18: Plots of ℜ(cos(z)) and ℑ(cos(z)).

Figure 7.19: Plots of ℜ(sin(z)) and ℑ(sin(z)).

Figure 7.20: Plots of |cos(z)| and |sin(z)|.
The hyperbolic sine and cosine. The hyperbolic sine and cosine have the familiar definitions in terms of the exponential function. Thus not surprisingly, we can write the sine in terms of the hyperbolic sine and write the cosine in terms of the hyperbolic cosine. Below is a collection of trigonometric identities.

Result 7.6.1

e^z = e^x (cos y + ı sin y)
cos z = (e^{ız} + e^{−ız})/2        sin z = (e^{ız} − e^{−ız})/(ı2)
cos z = cos x cosh y − ı sin x sinh y        sin z = sin x cosh y + ı cos x sinh y
cosh z = (e^z + e^{−z})/2        sinh z = (e^z − e^{−z})/2
cosh z = cosh x cos y + ı sinh x sin y        sinh z = sinh x cos y + ı cosh x sin y
sin(ız) = ı sinh z        sinh(ız) = ı sin z
cos(ız) = cosh z        cosh(ız) = cos z
log z = ln |z| + ı arg(z) = ln |z| + ı Arg(z) + ı2πn, n ∈ Z

7.7 Inverse Trigonometric Functions

The logarithm. The logarithm, log(z), is defined as the inverse of the exponential function e^z. The exponential function is many-to-one and thus has a multi-valued inverse. From what we know of many-to-one functions, we conclude that

e^{log z} = z, but log(e^z) ≠ z.

This is because e^{log z} is single-valued but log(e^z) is not. Because e^z is ı2π periodic, the logarithm of a number is a set of numbers which differ by integer multiples of ı2π. For instance, e^{ı2πn} = 1 so that log(1) = {ı2πn : n ∈ Z}. The logarithmic function has an infinite number of branches. The value of the function on the branches differs by integer multiples of ı2π. It has singularities at zero and infinity. |log(z)| → ∞ as either z → 0 or z → ∞.

We will derive the formula for the complex variable logarithm. For now, let ln(x) denote the real variable logarithm that is defined for positive real numbers. Consider w = log z. This means that e^w = z. We write w = u + ıv in Cartesian form and z = r e^{ıθ} in polar form.

e^{u+ıv} = r e^{ıθ}

We equate the modulus and argument of this expression.

e^u = r        v = θ + 2πn
u = ln r       v = θ + 2πn

With log z = u + ıv, we have a formula for the logarithm.

log z = ln |z| + ı arg(z)

If we write out the multi-valuedness of the argument function we note that this has the form that we expected.

log z = ln |z| + ı(Arg(z) + 2πn), n ∈ Z

We check that our formula is correct by showing that e^{log z} = z:

e^{log z} = e^{ln |z| + ı arg(z)} = e^{ln r + ıθ + ı2πn} = r e^{ıθ} = z
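The same verification can be carried out numerically. This sketch is ours; cmath.log returns the principal branch Log z, and the other branches differ from it by ı2πn.

import cmath, math

def log_values(z, ns=range(-2, 3)):
    """A few branches of log z = ln|z| + i(Arg z + 2 pi n)."""
    return [cmath.log(z) + 2j * math.pi * n for n in ns]

z = complex(-1.0, 1.0)
for w in log_values(z):
    assert abs(cmath.exp(w) - z) < 1e-12   # every branch satisfies e^{log z} = z
print(log_values(z))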
Note again that log(e^z) ≠ z.

log(e^z) = ln |e^z| + ı arg(e^z) = ln(e^x) + ı arg(e^{x+ıy}) = x + ı(y + 2πn) = z + ı2πn ≠ z

The real part of the logarithm is the single-valued ln r; the imaginary part is the multi-valued arg(z). We define the principal branch of the logarithm Log z to be the branch that satisfies −π < ℑ(Log z) ≤ π. For positive, real numbers the principal branch, Log x, is real-valued. We can write Log z in terms of the principal argument, Arg z.

Log z = ln |z| + ı Arg(z)

See Figure 7.21 for plots of the real and imaginary part of Log z.

Figure 7.21: Plots of ℜ(Log z) and ℑ(Log z).

The form: a^b. Consider a^b where a and b are complex and a is nonzero. We define this expression in terms of the exponential and the logarithm as

a^b = e^{b log a}.

Note that the multi-valuedness of the logarithm may make a^b multi-valued. First consider the case that the exponent is an integer.

a^m = e^{m log a} = e^{m(Log a + ı2πn)} = e^{m Log a} e^{ı2mnπ} = e^{m Log a}

Thus we see that a^m has a single value when m is an integer. Now consider the case that the exponent is a rational number. Let p/q be a rational number in reduced form.

a^{p/q} = e^{(p/q) log a} = e^{(p/q)(Log a + ı2πn)} = e^{(p/q) Log a} e^{ı2npπ/q}.

This expression has q distinct values, as e^{ı2npπ/q} = e^{ı2mpπ/q} if and only if n = m mod q.

Finally consider the case that the exponent b is an irrational number.

a^b = e^{b log a} = e^{b(Log a + ı2πn)} = e^{b Log a} e^{ı2bnπ}

Note that e^{ı2bnπ} and e^{ı2bmπ} are equal if and only if ı2bnπ and ı2bmπ differ by an integer multiple of ı2π, which means that bn and bm differ by an integer. This occurs only when n = m. Thus e^{ı2bnπ} has a distinct value for each different integer n. We conclude that a^b has an infinite number of values.

You may have noticed something a little fishy. If b is not an integer and a is any non-zero complex number, then a^b is multi-valued. Then why have we been treating e^b as single-valued, when it is merely the case a = e? The answer is that in the realm of functions of a complex variable, e^z is an abuse of notation. We write e^z when we mean exp(z), the single-valued exponential function. Thus when we write e^z we do not mean "the number e raised to the z power", we mean "the exponential function of z". We denote the former scenario as (e)^z, which is multi-valued.
Logarithmic identities. Back in high school trigonometry, when you thought that the logarithm was only defined for positive real numbers, you learned the identity log x^a = a log x. This identity doesn't hold when the logarithm is defined for nonzero complex numbers. Consider the logarithm of z^a.

log z^a = Log z^a + ı2πn

a log z = a(Log z + ı2πn) = a Log z + ı2aπn

Note that

log z^a ≠ a log z.

Furthermore, since

Log z^a = ln |z^a| + ı Arg(z^a),        a Log z = a ln |z| + ıa Arg(z),

and Arg(z^a) is not necessarily the same as a Arg(z), we see that

Log z^a ≠ a Log z.

Consider the logarithm of a product.

log(ab) = ln |ab| + ı arg(ab) = ln |a| + ln |b| + ı arg(a) + ı arg(b) = log a + log b

There is not an analogous identity for the principal branch of the logarithm since Arg(ab) is not in general the same as Arg(a) + Arg(b).

Using log(ab) = log(a) + log(b) we can deduce that log(a^n) = Σ_{k=1}^{n} log a = n log a, where n is a positive integer. This result is simple, straightforward and wrong. I have led you down the merry path to damnation.³ In fact, log a^2 ≠ 2 log a. Just write the multi-valuedness explicitly,

log a^2 = Log a^2 + ı2nπ,        2 log a = 2(Log a + ı2nπ) = 2 Log a + ı4nπ.

You can verify that

log(1/a) = − log a.

We can use this and the product identity to expand the logarithm of a quotient.

log(a/b) = log a − log b

For general values of a, log z^a ≠ a log z. However, for some values of a, equality holds. We already know that a = 1 and a = −1 work. To determine if equality holds for other values of a, we explicitly write the multi-valuedness.

log z^a = log e^{a log z} = a log z + ı2πk, k ∈ Z

a log z = a ln |z| + ıa Arg z + ıa2πm, m ∈ Z

We see that log z^a = a log z if and only if

{am | m ∈ Z} = {am + k | k, m ∈ Z}.

The sets are equal if and only if a = 1/n, n ∈ Z±. Thus we have the identity:

log(z^{1/n}) = (1/n) log z, n ∈ Z±

³ Don't feel bad if you fell for it. The logarithm is a tricky bastard.
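To see the failure of log a^2 = 2 log a concretely, one can compare the two sets of values numerically. The snippet below is our illustration: the values of log a^2 step by ı2π while those of 2 log a step by ı4π, so the latter set is a proper subset of the former.

import cmath, math

a = 1 + 1j
log_a2    = [cmath.log(a**2) + 2j * math.pi * n for n in range(-2, 3)]  # step i2pi
two_log_a = [2 * cmath.log(a) + 4j * math.pi * n for n in range(-1, 2)] # step i4pi

def member(w, values, tol=1e-9):
    return any(abs(w - v) < tol for v in values)

print(all(member(w, log_a2) for w in two_log_a))   # True: 2 log a is contained in log a^2
print(all(member(w, two_log_a) for w in log_a2))   # False: log a^2 has extra values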
Result 7.7.1 Logarithmic Identities.

a^b = e^{b log a}
e^{log z} = e^{Log z} = z
log(ab) = log a + log b
log(1/a) = − log a
log(a/b) = log a − log b
log(z^{1/n}) = (1/n) log z, n ∈ Z±

Logarithmic Inequalities.

Log(uv) ≠ Log(u) + Log(v)
log z^a ≠ a log z
Log z^a ≠ a Log z
log e^z ≠ z

Example 7.7.1 Consider 1^π. We apply the definition a^b = e^{b log a}.

1^π = e^{π log(1)} = e^{π(ln(1) + ı2nπ)} = e^{ı2nπ²}

Thus we see that 1^π has an infinite number of values, all of which lie on the unit circle |z| = 1 in the complex plane. However, the set 1^π is not equal to the set |z| = 1. There are points in the latter which are not in the former. This is analogous to the fact that the rational numbers are dense in the real numbers, but are a subset of the real numbers.

Example 7.7.2 We find the zeros of sin z.

sin z = (e^{ız} − e^{−ız})/(ı2) = 0
e^{ız} = e^{−ız}
e^{ı2z} = 1
2z mod 2π = 0

z = nπ, n ∈ Z

Equivalently, we could use the identity

sin z = sin x cosh y + ı cos x sinh y = 0.

This becomes the two equations (for the real and imaginary parts) sin x cosh y = 0 and cos x sinh y = 0. Since cosh is real-valued and positive for real argument, the first equation dictates that x = nπ, n ∈ Z. Since cos(nπ) = (−1)^n for n ∈ Z, the second equation implies that sinh y = 0. For real argument, sinh y is only zero at y = 0. Thus the zeros are

z = nπ, n ∈ Z.
Example 7.7.3 Since we can express sin z in terms of the exponential function, one would expect that we could express sin^{−1} z in terms of the logarithm.

w = sin^{−1} z
z = sin w
z = (e^{ıw} − e^{−ıw})/(ı2)
e^{ı2w} − ı2z e^{ıw} − 1 = 0
e^{ıw} = ız ± √(1 − z^2)
w = −ı log(ız ± √(1 − z^2))

Thus we see how the multi-valued sin^{−1} is related to the logarithm.

sin^{−1} z = −ı log(ız ± √(1 − z^2))
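A numerical check (ours, not from the text) confirms that both sign choices in this formula really do invert the sine.

import cmath

z = complex(0.3, -0.8)
for sign in (+1, -1):
    w = -1j * cmath.log(1j * z + sign * cmath.sqrt(1 - z**2))
    assert abs(cmath.sin(w) - z) < 1e-12   # sin(w) = z for either sign
print("both values of the square root yield arcsines of z")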
Example 7.7.4 Consider the equation sin^3 z = 1.

sin^3 z = 1
sin z = 1^{1/3}
(e^{ız} − e^{−ız})/(ı2) = 1^{1/3}
e^{ız} − ı2(1)^{1/3} − e^{−ız} = 0
e^{ı2z} − ı2(1)^{1/3} e^{ız} − 1 = 0
e^{ız} = [ı2(1)^{1/3} ± √(−4(1)^{2/3} + 4)]/2
e^{ız} = ı(1)^{1/3} ± √(1 − (1)^{2/3})

z = −ı log(ı(1)^{1/3} ± √(1 − 1^{2/3}))

Note that there are three sources of multi-valuedness in the expression for z. The two values of the square root are shown explicitly. There are three cube roots of unity. Finally, the logarithm has an infinite number of branches. To show this multi-valuedness explicitly, we could write

z = −ı Log(ı e^{ı2mπ/3} ± √(1 − e^{ı4mπ/3})) + 2πn, m = 0, 1, 2, n = . . . , −1, 0, 1, . . .

Example 7.7.5 Consider the harmless looking equation, ı^z = 1. Before we start with the algebra, note that the right side of the equation is a single number. ı^z is single-valued only when z is an integer. Thus we know that if there are solutions for z, they are integers. We now proceed to solve the equation.

ı^z = 1
(e^{ıπ/2})^z = 1

Use the fact that z is an integer.

e^{ıπz/2} = 1
ıπz/2 = ı2nπ, for some n ∈ Z

z = 4n, n ∈ Z

Here is a different approach. We write down the multi-valued form of ı^z. We solve the equation by requiring that all the values of ı^z are 1.

ı^z = 1
e^{z log ı} = 1
z log ı = ı2πn, for some n ∈ Z
z(ıπ/2 + ı2πm) = ı2πn, ∀m ∈ Z, for some n ∈ Z
ıπz/2 + ı2πmz = ı2πn, ∀m ∈ Z, for some n ∈ Z

The only solutions that satisfy the above equation are

z = 4k, k ∈ Z.

Now let's consider a slightly different problem: 1 ∈ ı^z. For what values of z does ı^z have 1 as one of its values?

1 ∈ ı^z
1 ∈ e^{z log ı}
1 ∈ {e^{z(ıπ/2 + ı2πn)} | n ∈ Z}
z(ıπ/2 + ı2πn) = ı2πm, m, n ∈ Z

z = 4m/(1 + 4n), m, n ∈ Z

There are an infinite set of rational numbers for which ı^z has 1 as one of its values. For example,

ı^{4/5} = 1^{1/5} = {1, e^{ı2π/5}, e^{ı4π/5}, e^{ı6π/5}, e^{ı8π/5}}.

7.8 Riemann Surfaces

Consider the mapping w = log(z). Each nonzero point in the z-plane is mapped to an infinite number of points in the w plane.

w = {ln |z| + ı arg(z)} = {ln |z| + ı(Arg(z) + 2πn) | n ∈ Z}

This multi-valuedness makes it hard to work with the logarithm. We would like to select one of the branches of the logarithm. One way of doing this is to decompose the z-plane into an infinite number of sheets. The sheets lie above one another and are labeled with the integers, n ∈ Z. (See Figure 7.22.) We label the point z on the nth sheet as (z, n). Now each point (z, n) maps to a single point in the w-plane. For instance, we can make the zeroth sheet map to the principal branch of the logarithm. This would give us the following mapping.

log(z, n) = Log z + ı2πn

This is a nice idea, but it has some problems. The mappings are not continuous. Consider the mapping on the zeroth sheet. As we approach the negative real axis from above, z is mapped to ln |z| + ıπ; as we approach from below, it is mapped to ln |z| − ıπ. (Recall Figure 7.21.) The mapping is not continuous across the negative real axis.

Let's go back to the regular z-plane for a moment. We start at the point z = 1, selecting the branch of the logarithm that maps to zero (log(1) = 0). We make the logarithm vary continuously as we walk around the origin once in the positive direction and return to the point z = 1. Since the argument of z has increased by 2π, the value of the logarithm has changed to ı2π. If we walk around the origin again we will have log(1) = ı4π. Our flat sheet decomposition of the z-plane does not reflect this property.
Figure 7.22: The z-plane decomposed into flat sheets.

We need a decomposition with a geometry that makes the mapping continuous and connects the various branches of the logarithm. Drawing inspiration from the plot of arg(z), Figure 7.10, we decompose the z-plane into an infinite corkscrew with axis at the origin. (See Figure 7.23.) We define the mapping so that the logarithm varies continuously on this surface. Consider a point z on one of the sheets. The value of the logarithm at that same point on the sheet directly above it is ı2π more than the original value. We call this surface the Riemann surface for the logarithm. The mapping from the Riemann surface to the w-plane is continuous and one-to-one.

Figure 7.23: The Riemann surface for the logarithm.

7.9 Branch Points

Example 7.9.1 Consider the function z^{1/2}. For each value of z, there are two values of z^{1/2}. We write z^{1/2} in modulus-argument and Cartesian form.

z^{1/2} = √|z| e^{ı arg(z)/2}

z^{1/2} = √|z| cos(arg(z)/2) + ı √|z| sin(arg(z)/2)

Figure 7.24 shows the real and imaginary parts of z^{1/2} from three different viewpoints. The second and third views are looking down the x axis and y axis, respectively. Consider ℜ(z^{1/2}). This is a double layered sheet which intersects itself on the negative real axis. (ℑ(z^{1/2}) has a similar structure, but intersects itself on the positive real axis.) Let's start at a point on the positive real axis on the lower sheet. If we walk around the origin once and return to the positive real axis, we will be on the upper sheet. If we do this again, we will return to the lower sheet.

Suppose we are at a point in the complex plane. We pick one of the two values of z^{1/2}. If the function varies continuously as we walk around the origin and back to our starting point, the value of z^{1/2} will have changed.
We will be on the other branch. Because walking around the point z = 0 takes us to a different branch of the function, we refer to z = 0 as a branch point.

Figure 7.24: Plots of ℜ(z^{1/2}) (left) and ℑ(z^{1/2}) (right) from three viewpoints.

Now consider the modulus-argument form of z^{1/2}:

z^{1/2} = √|z| e^{ı arg(z)/2}.

Figure 7.25 shows the modulus and the principal argument of z^{1/2}. We see that each time we walk around the origin, the argument of z^{1/2} changes by π. This means that the value of the function changes by the factor e^{ıπ} = −1, i.e. the function changes sign. If we walk around the origin twice, the argument changes by 2π, so that the value of the function does not change, e^{ı2π} = 1.

Figure 7.25: Plots of |z^{1/2}| and Arg(z^{1/2}).

z^{1/2} is a continuous function except at z = 0. Suppose we start at z = 1 = e^{ı0} and the function value (e^{ı0})^{1/2} = 1. If we follow the first path in Figure 7.26, the argument of z varies from 0 up to about π/4, down to about −π/4 and back to 0. The value of the function is still (e^{ı0})^{1/2} = 1.
Figure 7.26: A path that does not encircle the origin and a path around the origin.

Now suppose we follow a circular path around the origin in the positive, counter-clockwise, direction. (See the second path in Figure 7.26.) The argument of z increases by 2π. The value of the function at half turns on the path is

(e^{ı0})^{1/2} = 1,
(e^{ıπ})^{1/2} = e^{ıπ/2} = ı,
(e^{ı2π})^{1/2} = e^{ıπ} = −1.

As we return to the point z = 1, the argument of the function has changed by π and the value of the function has changed from 1 to −1. If we were to walk along the circular path again, the argument of z would increase by another 2π. The argument of the function would increase by another π and the value of the function would return to 1.

(e^{ı4π})^{1/2} = e^{ı2π} = 1

In general, any time we walk around the origin, the value of z^{1/2} changes by the factor −1. We call z = 0 a branch point.

If we want a single-valued square root, we need something to prevent us from walking around the origin. We achieve this by introducing a branch cut. Suppose we have the complex plane drawn on an infinite sheet of paper. With a scissors we cut the paper from the origin to −∞ along the real axis. Then if we start at z = e^{ı0}, and draw a continuous line without leaving the paper, the argument of z will always be in the range −π < arg z < π. This means that −π/2 < arg(z^{1/2}) < π/2. No matter what path we follow in this cut plane, z = 1 has argument zero and (1)^{1/2} = 1. By never crossing the negative real axis, we have constructed a single valued branch of the square root function. We call the cut along the negative real axis a branch cut.
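The sign change can be observed numerically by continuing the square root along the circle by hand: at each step we pick whichever of the two roots is closer to the previous value. This walk is our own illustration, not part of the text.

import cmath, math

n = 1000
w = 1.0 + 0j                                   # start at z = 1 with (e^{i0})^{1/2} = 1
for k in range(1, n + 1):
    z = cmath.exp(2j * math.pi * k / n)        # unit circle, positive direction
    r = cmath.sqrt(z)                          # one of the two square roots of z
    w = r if abs(r - w) <= abs(-r - w) else -r # continue the branch continuously
print(w)   # approximately -1: one loop around the branch point flips the sign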
Example 7.9.2 Consider the logarithmic function log z. For each value of z, there are an infinite number of values of log z. We write log z in Cartesian form.

log z = ln |z| + ı arg z

Figure 7.27 shows the real and imaginary parts of the logarithm. The real part is single-valued. The imaginary part is multi-valued and has an infinite number of branches. The values of the logarithm form an infinite-layered sheet. If we start on one of the sheets and walk around the origin once in the positive direction, then the value of the logarithm increases by ı2π and we move to the next branch. z = 0 is a branch point of the logarithm.

Figure 7.27: Plots of ℜ(log z) and a portion of ℑ(log z).

The logarithm is a continuous function except at z = 0. Suppose we start at z = 1 = e^{ı0} and the function value log(e^{ı0}) = ln(1) + ı0 = 0. If we follow the first path in Figure 7.26, the argument of z, and thus the imaginary part of the logarithm, varies from 0 up to about π/4, down to about −π/4 and back to 0. The value of the logarithm is still 0.

Now suppose we follow a circular path around the origin in the positive direction. (See the second path in Figure 7.26.) The argument of z increases by 2π. The value of the logarithm at half turns on the path is

log(e^{ı0}) = 0,
log(e^{ıπ}) = ıπ,
log(e^{ı2π}) = ı2π.

As we return to the point z = 1, the value of the logarithm has changed by ı2π. If we were to walk along the circular path again, the argument of z would increase by another 2π and the value of the logarithm would increase by another ı2π.

Result 7.9.1 A point z0 is a branch point of a function f(z) if the function changes value when you walk around the point on any path that encloses no singularities other than the one at z = z0.

Branch points at infinity: mapping infinity to the origin. Up to this point we have considered only branch points in the finite plane. Now we consider the possibility of a branch point at infinity. As a first method of approaching this problem we map the point at infinity to the origin with the transformation ζ = 1/z and examine the point ζ = 0.

Example 7.9.3 Again consider the function z^{1/2}. Mapping the point at infinity to the origin, we have f(ζ) = (1/ζ)^{1/2} = ζ^{−1/2}. For each value of ζ, there are two values of ζ^{−1/2}. We write ζ^{−1/2} in modulus-argument form.

ζ^{−1/2} = (1/√|ζ|) e^{−ı arg(ζ)/2}

Like z^{1/2}, ζ^{−1/2} has a double-layered sheet of values. Figure 7.28 shows the modulus and the principal argument of ζ^{−1/2}. We see that each time we walk around the origin, the argument of ζ^{−1/2} changes by −π. This means that the value of the function changes by the factor e^{−ıπ} = −1, i.e. the function changes sign. If we walk around the origin twice, the argument changes by −2π, so that the value of the function does not change, e^{−ı2π} = 1.

Since ζ^{−1/2} has a branch point at zero, we conclude that z^{1/2} has a branch point at infinity.

Example 7.9.4 Again consider the logarithmic function log z. Mapping the point at infinity to the origin, we have f(ζ) = log(1/ζ) = − log(ζ). From Example 7.9.2 we know that − log(ζ) has a branch point at ζ = 0. Thus log z has a branch point at infinity.

Branch points at infinity: paths around infinity. We can also check for a branch point at infinity by following a path that encloses the point at infinity and no other singularities. Just draw a simple closed curve that separates the complex plane into a bounded component that contains all the singularities of the function in the finite plane.
Figure 7.28: Plots of |ζ^{−1/2}| and Arg(ζ^{−1/2}).

Then, depending on orientation, the curve is a contour enclosing all the finite singularities, or the point at infinity and no other singularities.

Example 7.9.5 Once again consider the function z^{1/2}. We know that the function changes value on a curve that goes once around the origin. Such a curve can be considered to be either a path around the origin or a path around infinity. In either case the path encloses one singularity. There are branch points at the origin and at infinity. Now consider a curve that does not go around the origin. Such a curve can be considered to be either a path around neither of the branch points or both of them. Thus we see that z^{1/2} does not change value when we follow a path that encloses neither or both of its branch points.

Example 7.9.6 Consider f(z) = (z^2 − 1)^{1/2}. We factor the function.

f(z) = (z − 1)^{1/2} (z + 1)^{1/2}

There are branch points at z = ±1. Now consider the point at infinity.

f(ζ^{−1}) = (ζ^{−2} − 1)^{1/2} = ±ζ^{−1} (1 − ζ^2)^{1/2}

Since f(ζ^{−1}) does not have a branch point at ζ = 0, f(z) does not have a branch point at infinity. We could reach the same conclusion by considering a path around infinity. Consider a path that circles the branch points at z = ±1 once in the positive direction. Such a path circles the point at infinity once in the negative direction. In traversing this path, the value of f(z) is multiplied by the factor (e^{ı2π})^{1/2} (e^{ı2π})^{1/2} = e^{ı2π} = 1. Thus the value of the function does not change. There is no branch point at infinity.

Diagnosing branch points. We have the definition of a branch point, but we do not have a convenient criterion for determining if a particular function has a branch point. We have seen that log z and z^α for non-integer α have branch points at zero and infinity. The inverse trigonometric functions like the arcsine also have branch points, but they can be written in terms of the logarithm and the square root. In fact all the elementary functions with branch points can be written in terms of the functions log z and z^α. Furthermore, note that the multi-valuedness of z^α comes from the logarithm, z^α = e^{α log z}. This gives us a way of quickly determining if and where a function may have branch points.

Result 7.9.2 Let f(z) be a single-valued function. Then log(f(z)) and (f(z))^α may have branch points only where f(z) is zero or singular.
Example 7.9.7 Consider the functions,

1. (z^2)^{1/2}

2. (z^{1/2})^2

3. (z^{1/2})^3

Are they multi-valued? Do they have branch points?

1. (z^2)^{1/2} = ±√(z^2) = ±z

Because of the (·)^{1/2}, the function is multi-valued. The only possible branch points are at zero and infinity. If ((e^{ı0})^2)^{1/2} = 1, then ((e^{ı2π})^2)^{1/2} = (e^{ı4π})^{1/2} = e^{ı2π} = 1. Thus we see that the function does not change value when we walk around the origin. We can also consider this to be a path around infinity. This function is multi-valued, but has no branch points.

2. (z^{1/2})^2 = (±√z)^2 = z

This function is single-valued.

3. (z^{1/2})^3 = (±√z)^3 = ±(√z)^3

This function is multi-valued. We consider the possible branch point at z = 0. If ((e^{ı0})^{1/2})^3 = 1, then ((e^{ı2π})^{1/2})^3 = (e^{ıπ})^3 = e^{ı3π} = −1. Since the function changes value when we walk around the origin, it has a branch point at z = 0. Since this is also a path around infinity, there is a branch point there.

Example 7.9.8 Consider the function f(z) = log(1/(z − 1)). Since 1/(z − 1) is only zero at infinity and its only singularity is at z = 1, the only possibilities for branch points are at z = 1 and z = ∞. Since

log(1/(z − 1)) = − log(z − 1)

and log w has branch points at zero and infinity, we see that f(z) has branch points at z = 1 and z = ∞.

Example 7.9.9 Consider the functions,

1. e^{log z}

2. log(e^z).

Are they multi-valued? Do they have branch points?

1. e^{log z} = exp(Log z + ı2πn) = e^{Log z} e^{ı2πn} = z

This function is single-valued.

2. log(e^z) = Log(e^z) + ı2πn = z + ı2πm

This function is multi-valued. It may have branch points only where e^z is zero or infinite. This only occurs at z = ∞. Thus there are no branch points in the finite plane. The function does not change when traversing a simple closed path. Since this path can be considered to enclose infinity, there is no branch point at infinity.
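The conclusions of Example 7.9.7 can be checked at a sample point; the snippet below is our illustration, not part of the text.

import cmath

z = complex(0.5, 1.2)
roots = [cmath.sqrt(z), -cmath.sqrt(z)]       # the two values of z^{1/2}

# (z^{1/2})^2 collapses to the single value z:
assert all(abs(r**2 - z) < 1e-12 for r in roots)

# (z^2)^{1/2} takes the two values +z and -z:
vals = [cmath.sqrt(z**2), -cmath.sqrt(z**2)]
assert any(abs(v - z) < 1e-12 for v in vals) and any(abs(v + z) < 1e-12 for v in vals)

# (z^{1/2})^3 takes two distinct values, +/-(sqrt z)^3:
cubes = [r**3 for r in roots]
assert abs(cubes[0] + cubes[1]) < 1e-12 and abs(cubes[0]) > 0
print("Example 7.9.7 verified at z =", z)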
Consider (f(z))^α where f(z) is single-valued and f(z) has either a zero or a singularity at z = z0. (f(z))^α may have a branch point at z = z0. If f(z) is not a power of z, then it may be difficult to tell if (f(z))^α changes value when we walk around z0. Factor f(z) into f(z) = g(z)h(z) where h(z) is nonzero and finite at z0. Then g(z) captures the important behavior of f(z) at z0: g(z) tells us how fast f(z) vanishes or blows up. Since (f(z))^α = (g(z))^α (h(z))^α and (h(z))^α does not have a branch point at z0, (f(z))^α has a branch point at z0 if and only if (g(z))^α has a branch point there. Similarly, we can decompose

log(f(z)) = log(g(z)h(z)) = log(g(z)) + log(h(z))

to see that log(f(z)) has a branch point at z0 if and only if log(g(z)) has a branch point there.

Result 7.9.3 Consider a single-valued function f(z) that has either a zero or a singularity at z = z0. Let f(z) = g(z)h(z) where h(z) is nonzero and finite. (f(z))^α has a branch point at z = z0 if and only if (g(z))^α has a branch point there. log(f(z)) has a branch point at z = z0 if and only if log(g(z)) has a branch point there.

Example 7.9.10 Consider the functions,

1. sin(z^{1/2})

2. (sin z)^{1/2}

3. z^{1/2} sin(z^{1/2})

4. (sin z^2)^{1/2}

Find the branch points and the number of branches.

1. sin(z^{1/2}) = sin(±√z) = ± sin(√z)

sin(z^{1/2}) is multi-valued. It has two branches. There may be branch points at zero and infinity. Consider the unit circle, which is a path around the origin or infinity. If sin((e^{ı0})^{1/2}) = sin(1), then sin((e^{ı2π})^{1/2}) = sin(e^{ıπ}) = sin(−1) = − sin(1). There are branch points at the origin and infinity.

2. (sin z)^{1/2} = ±√(sin z)

The function is multi-valued with two branches. The sine vanishes at z = nπ and is singular at infinity. There could be branch points at these locations. Consider the point z = nπ. We can write

sin z = (z − nπ) · sin z/(z − nπ).

Note that sin z/(z − nπ) is nonzero and has a removable singularity at z = nπ.

lim_{z→nπ} sin z/(z − nπ) = lim_{z→nπ} cos z/1 = (−1)^n

Since (z − nπ)^{1/2} has a branch point at z = nπ, (sin z)^{1/2} has branch points at z = nπ. Since the branch points at z = nπ go all the way out to infinity, it is not possible to make a path that encloses infinity and no other singularities. The point at infinity is a non-isolated singularity. A point can be a branch point only if it is an isolated singularity.
3. z^{1/2} sin(z^{1/2}) = ±√z sin(±√z) = ±√z (± sin √z) = √z sin √z

The function is single-valued. Thus there could be no branch points.

4. (sin z^2)^{1/2} = ±√(sin z^2)

This function is multi-valued. Since sin z^2 = 0 at z = (nπ)^{1/2}, there may be branch points there. First consider the point z = 0. We can write

sin z^2 = z^2 · sin z^2/z^2

where sin(z^2)/z^2 is nonzero and has a removable singularity at z = 0.

lim_{z→0} sin(z^2)/z^2 = lim_{z→0} 2z cos(z^2)/(2z) = 1.

Since (z^2)^{1/2} does not have a branch point at z = 0, (sin z^2)^{1/2} does not have a branch point there either. Now consider the point z = √(nπ).

sin z^2 = (z − √(nπ)) · sin z^2/(z − √(nπ))

sin(z^2)/(z − √(nπ)) is nonzero and has a removable singularity at z = √(nπ).

lim_{z→√(nπ)} sin(z^2)/(z − √(nπ)) = lim_{z→√(nπ)} 2z cos(z^2)/1 = 2√(nπ)(−1)^n

Since (z − √(nπ))^{1/2} has a branch point at z = √(nπ), (sin z^2)^{1/2} also has a branch point there. Thus we see that (sin z^2)^{1/2} has branch points at z = (nπ)^{1/2} for n ∈ Z \ {0}. This is the set of numbers: {±√π, ±√(2π), . . . , ±ı√π, ±ı√(2π), . . .}. The point at infinity is a non-isolated singularity.

Example 7.9.11 Find the branch points of

f(z) = (z^3 − z)^{1/3}.

Introduce branch cuts. If f(2) = ³√6 then what is f(−2)?

We expand f(z).

f(z) = z^{1/3} (z − 1)^{1/3} (z + 1)^{1/3}.

There are branch points at z = −1, 0, 1. We consider the point at infinity.

f(1/ζ) = (1/ζ)^{1/3} (1/ζ − 1)^{1/3} (1/ζ + 1)^{1/3} = (1/ζ) (1 − ζ)^{1/3} (1 + ζ)^{1/3}

Since f(1/ζ) does not have a branch point at ζ = 0, f(z) does not have a branch point at infinity. Consider the three possible branch cuts in Figure 7.29.
Figure 7.29: Three possible branch cuts for f(z) = (z^3 − z)^{1/3}.

The first and the third branch cuts will make the function single-valued; the second will not. It is clear that the first set makes the function single-valued since it is not possible to walk around any of the branch points. The second set of branch cuts would allow you to walk around the branch points at z = ±1. If you walked around these two once in the positive direction, the value of the function would change by the factor e^{ı4π/3}. The third set of branch cuts would allow you to walk around all three branch points together. You can verify that if you walk around the three branch points, the value of the function will not change (e^{ı6π/3} = e^{ı2π} = 1).

Suppose we introduce the third set of branch cuts and are on the branch with f(2) = ³√6.

f(2) = (2 e^{ı0})^{1/3} (1 e^{ı0})^{1/3} (3 e^{ı0})^{1/3} = ³√6

The value of f(−2) is

f(−2) = (2 e^{ıπ})^{1/3} (3 e^{ıπ})^{1/3} (1 e^{ıπ})^{1/3} = ³√2 e^{ıπ/3} · ³√3 e^{ıπ/3} · ³√1 e^{ıπ/3} = ³√6 e^{ıπ} = −³√6.

Example 7.9.12 Find the branch points and number of branches for

f(z) = z^{z^2}.

z^{z^2} = exp(z^2 log z)

There may be branch points at the origin and infinity due to the logarithm. Consider walking around a circle of radius r centered at the origin in the positive direction. Since the logarithm changes by ı2π, the value of f(z) changes by the factor e^{ı2πr^2}. There are branch points at the origin and infinity. The function has an infinite number of branches.
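Returning to Example 7.9.11, that branch can be evaluated numerically. The arrangement below is ours: measuring each factor's angle in (−π, π] reproduces f(2) = ³√6, and at z = −2 all three angles equal π, giving the value −³√6 found above. This angle convention agrees with the text's branch at these two points, though its cuts differ.

import cmath

def f(z):
    """One branch of (z^3 - z)^{1/3} = z^{1/3} (z-1)^{1/3} (z+1)^{1/3},
    with each factor's angle taken in (-pi, pi]."""
    w = 1.0 + 0j
    for p in (z, z - 1, z + 1):
        r, th = abs(p), cmath.phase(p)
        w *= r**(1/3) * cmath.exp(1j * th / 3)
    return w

print(f(2))    # ~ 1.817 = 6^{1/3}
print(f(-2))   # ~ -1.817 = -6^{1/3}, as computed in Example 7.9.11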
Example 7.9.13 Construct a branch of

f(z) = (z^2 + 1)^{1/3}

such that

f(0) = (1/2)(−1 + ı√3).

First we factor f(z).

f(z) = (z − ı)^{1/3} (z + ı)^{1/3}

There are branch points at z = ±ı. Figure 7.30 shows one way to introduce branch cuts.

Figure 7.30: Branch cuts for f(z) = (z^2 + 1)^{1/3}, with the coordinates z − ı = ρ e^{ıφ} and z + ı = r e^{ıθ} marked.

Since it is not possible to walk around any branch point, these cuts make the function single valued. We introduce the coordinates:

z − ı = ρ e^{ıφ},    z + ı = r e^{ıθ}.

f(z) = (ρ e^{ıφ})^{1/3} (r e^{ıθ})^{1/3} = ³√(ρr) e^{ı(φ+θ)/3}

The condition

f(0) = (1/2)(−1 + ı√3) = e^{ı(2π/3 + 2πn)}

can be stated

³√1 e^{ı(φ+θ)/3} = e^{ı(2π/3 + 2πn)}

φ + θ = 2π + 6πn

The angles must be defined to satisfy this relation. One choice is

π/2 < φ < 5π/2,    −π/2 < θ < 3π/2.

Principal branches. We construct the principal branch of the logarithm by putting a branch cut on the negative real axis and choosing z = r e^{ıθ}, θ ∈ (−π, π). Thus the principal branch of the logarithm is

Log z = ln r + ıθ, −π < θ < π.

Note that if x is a negative real number (and thus lies on the branch cut), then Log x is undefined. The principal branch of z^α is

z^α = e^{α Log z}.

Note that there is a branch cut on the negative real axis.

−απ < arg(e^{α Log z}) < απ

The principal branch of z^{1/2} is denoted √z. The principal branch of z^{1/n} is denoted ⁿ√z.

Example 7.9.14 Construct √(1 − z^2), the principal branch of (1 − z^2)^{1/2}. First note that since (1 − z^2)^{1/2} = (1 − z)^{1/2} (1 + z)^{1/2} there are branch points at z = 1 and z = −1. The principal branch of the square root has a branch cut on the negative real axis. 1 − z^2 is a negative real number for z ∈ (−∞ . . . −1) ∪ (1 . . . ∞). Thus we put branch cuts on (−∞ . . . −1] and [1 . . . ∞).
7.10 Exercises

Cartesian and Modulus-Argument Form

Exercise 7.1
Find the image of the strip 2 < x < 3 under the mapping w = f(z) = z^2. Does the image constitute a domain?
Hint, Solution

Exercise 7.2
For a given real number φ, 0 ≤ φ < 2π, find the image of the sector 0 ≤ arg(z) < φ under the transformation w = z^4. How large should φ be so that the w plane is covered exactly once?
Hint, Solution

Trigonometric Functions

Exercise 7.3
In Cartesian coordinates, z = x + ıy, write sin(z) in Cartesian and modulus-argument form.
Hint, Solution

Exercise 7.4
Show that e^z is nonzero for all finite z.
Hint, Solution

Exercise 7.5
Show that

|e^{z^2}| ≤ e^{|z|^2}.

When does equality hold?
Hint, Solution

Exercise 7.6
Solve coth(z) = 1.
Hint, Solution

Exercise 7.7
Solve 2 ∈ 2^z. That is, for what values of z is 2 one of the values of 2^z? Derive this result, then verify your answer by evaluating 2^z for the solutions that you find.
Hint, Solution

Exercise 7.8
Solve 1 ∈ 1^z. That is, for what values of z is 1 one of the values of 1^z? Derive this result, then verify your answer by evaluating 1^z for the solutions that you find.
Hint, Solution

Logarithmic Identities

Exercise 7.9
Show that if ℜ(z1) > 0 and ℜ(z2) > 0 then

Log(z1 z2) = Log(z1) + Log(z2)

and illustrate that this relationship does not hold in general.
Hint, Solution

Exercise 7.10
Find the fallacy in the following arguments:
1. log(−1) = log(1/(−1)) = log(1) − log(−1) = − log(−1), therefore, log(−1) = 0.

2. 1 = 1^{1/2} = ((−1)(−1))^{1/2} = (−1)^{1/2} (−1)^{1/2} = ı·ı = −1, therefore, 1 = −1.

Hint, Solution

Exercise 7.11
Write the following expressions in modulus-argument or Cartesian form. Denote any multi-valuedness explicitly.

2^{2/5}, 3^{1+ı}, (√3 − ı)^{1/4}, 1^{ı/4}.

Hint, Solution

Exercise 7.12
Solve cos z = 69.
Hint, Solution

Exercise 7.13
Solve cot z = ı47.
Hint, Solution

Exercise 7.14
Determine all values of

1. log(−ı)
2. (−ı)^{−ı}
3. 3^π
4. log(log(ı))

and plot them in the complex plane.
Hint, Solution

Exercise 7.15
Evaluate and plot the following in the complex plane:

1. (cosh(ıπ))^{ı2}
2. log(1/(1 + ı))
3. arctan(ı3)

Hint, Solution

Exercise 7.16
Determine all values of ı^ı and log((1 + ı)^{ıπ}) and plot them in the complex plane.
Hint, Solution

Exercise 7.17
Find all z for which

1. e^z = ı
2. cos z = sin z
3. tan^2 z = −1
Hint, Solution

Exercise 7.18
Prove the following identities and identify the branch points of the functions in the extended complex plane.

1. arctan(z) = (ı/2) log((ı + z)/(ı − z))

2. arctanh(z) = (1/2) log((1 + z)/(1 − z))

3. arccosh(z) = log(z + (z^2 − 1)^{1/2})

Hint, Solution

Branch Points and Branch Cuts

Exercise 7.19
Identify the branch points of the function

f(z) = log(z(z + 1)/(z − 1))

and introduce appropriate branch cuts to ensure that the function is single-valued.
Hint, Solution

Exercise 7.20
Identify all the branch points of the function

w = f(z) = (z^3 + z^2 − 6z)^{1/2}

in the extended complex plane. Give a polar description of f(z) and specify branch cuts so that your choice of angles gives a single-valued function that is continuous at z = −1 with f(−1) = −√6. Sketch the branch cuts in the stereographic projection.
Hint, Solution

Exercise 7.21
Consider the mapping w = f(z) = z^{1/3} and the inverse mapping z = g(w) = w^3.

1. Describe the multiple-valuedness of f(z).
2. Describe a region of the w-plane that g(w) maps one-to-one to the whole z-plane.
3. Describe and attempt to draw a Riemann surface on which f(z) is single-valued and to which g(w) maps one-to-one. Comment on the misleading nature of your picture.
4. Identify the branch points of f(z) and introduce a branch cut to make f(z) single-valued.

Hint, Solution

Exercise 7.22
Determine the branch points of the function

f(z) = (z^3 − 1)^{1/2}.

Construct cuts and define a branch so that z = 0 and z = −1 do not lie on a cut, and such that f(0) = −ı. What is f(−1) for this branch?
Hint, Solution
Exercise 7.23
Determine the branch points of the function

w(z) = ((z − 1)(z − 6)(z + 2))^{1/2}.

Construct cuts and define a branch so that z = 4 does not lie on a cut, and such that w = ı6 when z = 4.
Hint, Solution

Exercise 7.24
Give the number of branches and locations of the branch points for the functions

1. cos(z^{1/2})
2. (z + ı)^{−z}

Hint, Solution

Exercise 7.25
Find the branch points of the following functions in the extended complex plane, (the complex plane including the point at infinity).

1. (z^2 + 1)^{1/2}
2. (z^3 − z)^{1/2}
3. log(z^2 − 1)
4. log((z + 1)/(z − 1))

Introduce branch cuts to make the functions single valued.
Hint, Solution

Exercise 7.26
Find all branch points and introduce cuts to make the following functions single-valued. For the first function, choose cuts so that there is no cut within the disk |z| < 2.

1. f(z) = (z^3 + 8)^{1/2}
2. f(z) = log(5 + ((z + 1)/(z − 1))^{1/2})
3. f(z) = (z + ı3)^{1/2}

Hint, Solution

Exercise 7.27
Let f(z) have branch points at z = 0 and z = ±ı, but nowhere else in the extended complex plane. How does the value and argument of f(z) change while traversing the contour in Figure 7.31? Does the branch cut in Figure 7.31 make the function single-valued?
Hint, Solution

Exercise 7.28
Let f(z) be analytic except for no more than a countably infinite number of singularities. Suppose that f(z) has only one branch point in the finite complex plane. Does f(z) have a branch point at infinity? Now suppose that f(z) has two or more branch points in the finite complex plane. Does f(z) have a branch point at infinity?
Hint, Solution
Figure 7.31: Contour around the branch points and the branch cut.

Exercise 7.29
Find all branch points of (z^4 + 1)^{1/4} in the extended complex plane. Which of the branch cuts in Figure 7.32 make the function single-valued?

Figure 7.32: Four candidate sets of branch cuts for (z^4 + 1)^{1/4}.

Hint, Solution

Exercise 7.30
Find the branch points of

f(z) = (z/(z^2 + 1))^{1/3}

in the extended complex plane. Introduce branch cuts that make the function single-valued and such that the function is defined on the positive real axis. Define a branch such that f(1) = 1/³√2. Write down an explicit formula for the value of the branch. What is f(1 + ı)? What is the value of f(z) on either side of the branch cuts?
Hint, Solution

Exercise 7.31
Find all branch points of

f(z) = ((z − 1)(z − 2)(z − 3))^{1/2}

in the extended complex plane. Which of the branch cuts in Figure 7.33 will make the function single-valued? Using the first set of branch cuts in this figure, define a branch on which f(0) = ı√6. Write out an explicit formula for the value of the function on this branch.
Hint, Solution

Exercise 7.32
Determine the branch points of the function

w = ((z^2 − 2)(z + 2))^{1/3}.
Figure 7.33: Four candidate sets of branch cuts for ((z − 1)(z − 2)(z − 3))^{1/2}.

Construct cuts and define a branch so that the resulting cut is one line of finite extent and w(2) = 2. What is w(−3) for this branch? What are the limiting values of w on either side of the branch cut?
Hint, Solution

Exercise 7.33
Construct the principal branch of arccos(z). (Arccos(z) has the property that if x ∈ [−1, 1] then Arccos(x) ∈ [0, π]. In particular, Arccos(0) = π/2.)
Hint, Solution

Exercise 7.34
Find the branch points of (z^{1/2} − 1)^{1/2} in the finite complex plane. Introduce branch cuts to make the function single-valued.
Hint, Solution

Exercise 7.35
For the linkage illustrated in Figure 7.34, use complex variables to outline a scheme for expressing the angular position, velocity and acceleration of arm c in terms of those of arm a. (You needn't work out the equations.)

Figure 7.34: A linkage with arms a, b, c, base length l, and angles θ, φ.

Hint, Solution

Exercise 7.36
Find the image of the strip |ℜ(z)| < 1 and of the strip 1 < ℑ(z) < 2 under the transformations:

1. w = 2z^2
2. w = (z + 1)/(z − 1)

Hint, Solution
Exercise 7.37
Locate and classify all the singularities of the following functions:

1. (z + 1)^{1/2}/(z + 2)
2. cos(1/(1 + z))
3. 1/(1 − e^z)^2

In each case discuss the possibility of a singularity at the point ∞.
Hint, Solution

Exercise 7.38
Describe how the mapping w = sinh(z) transforms the infinite strip −∞ < x < ∞, 0 < y < π into the w-plane. Find cuts in the w-plane which make the mapping continuous both ways. What are the images of the lines (a) y = π/4; (b) x = 1?
Hint, Solution
7.11 Hints

Cartesian and Modulus-Argument Form

Hint 7.1

Hint 7.2

Trigonometric Functions

Hint 7.3
Recall that sin(z) = (e^{ız} − e^{−ız})/(ı2). Use Result 6.3.1 to convert between Cartesian and modulus-argument form.

Hint 7.4
Write e^z in polar form.

Hint 7.5
The exponential is an increasing function for real variables.

Hint 7.6
Write the hyperbolic cotangent in terms of exponentials.

Hint 7.7
Write out the multi-valuedness of 2^z. There is a doubly-infinite set of solutions to this problem.

Hint 7.8
Write out the multi-valuedness of 1^z.

Logarithmic Identities

Hint 7.9

Hint 7.10
Write out the multi-valuedness of the expressions.

Hint 7.11
Do the exponentiations in polar form.

Hint 7.12
Write the cosine in terms of exponentials. Multiply by e^{ız} to get a quadratic equation for e^{ız}.

Hint 7.13
Write the cotangent in terms of exponentials. Get a quadratic equation for e^{ız}.

Hint 7.14

Hint 7.15
Hint 7.16
ı^ı has an infinite number of real, positive values. ı^ı = e^{ı log ı}. log((1 + ı)^{ıπ}) has a doubly infinite set of values. log((1 + ı)^{ıπ}) = log(exp(ıπ log(1 + ı))).

Hint 7.17

Hint 7.18

Branch Points and Branch Cuts

Hint 7.19

Hint 7.20

Hint 7.21

Hint 7.22

Hint 7.23

Hint 7.24

Hint 7.25

1. (z^2 + 1)^{1/2} = (z − ı)^{1/2} (z + ı)^{1/2}
2. (z^3 − z)^{1/2} = z^{1/2} (z − 1)^{1/2} (z + 1)^{1/2}
3. log(z^2 − 1) = log(z − 1) + log(z + 1)
4. log((z + 1)/(z − 1)) = log(z + 1) − log(z − 1)

Hint 7.26

Hint 7.27
Reverse the orientation of the contour so that it encircles infinity and does not contain any branch points.

Hint 7.28
Consider a contour that encircles all the branch points in the finite complex plane. Reverse the orientation of the contour so that it contains the point at infinity and does not contain any branch points in the finite complex plane.

Hint 7.29
Factor the polynomial. The argument of z^{1/4} changes by π/2 on a contour that goes around the origin once in the positive direction.
Hint 7.30

Hint 7.31
To define the branch, define angles from each of the branch points in the finite complex plane.

Hint 7.32

Hint 7.33

Hint 7.34

Hint 7.35

Hint 7.36

Hint 7.37

Hint 7.38
7.12 Solutions

Cartesian and Modulus-Argument Form

Solution 7.1
Let w = u + ıv. We consider the strip 2 < x < 3 as composed of vertical lines. Consider the vertical line: z = c + ıy, y ∈ R, for constant c. We find the image of this line under the mapping.

w = (c + ıy)^2 = c^2 − y^2 + ı2cy

u = c^2 − y^2,    v = 2cy

This is a parabola that opens to the left. We can parameterize the curve in terms of v.

u = c^2 − v^2/(4c^2), v ∈ R

The boundaries of the region, x = 2 and x = 3, are respectively mapped to the parabolas:

u = 4 − v^2/16, v ∈ R    and    u = 9 − v^2/36, v ∈ R.

We write the image of the mapping in set notation.

w = {u + ıv : v ∈ R and 4 − v^2/16 < u < 9 − v^2/36}.

See Figure 7.35 for depictions of the strip and its image under the mapping. The mapping is one-to-one. Since the image of the strip is open and connected, it is a domain.

Figure 7.35: The domain 2 < x < 3 and its image under the mapping w = z^2.

Solution 7.2
We write the mapping w = z^4 in polar coordinates.

w = z^4 = (r e^{ıθ})^4 = r^4 e^{ı4θ}

Thus we see that

w : {r e^{ıθ} | r ≥ 0, 0 ≤ θ < φ} → {r^4 e^{ı4θ} | r ≥ 0, 0 ≤ θ < φ} = {r e^{ıθ} | r ≥ 0, 0 ≤ θ < 4φ}.

We can state this in terms of the argument.

w : {z | 0 ≤ arg(z) < φ} → {z | 0 ≤ arg(z) < 4φ}

If φ = π/2, the sector will be mapped exactly to the whole complex plane.
Trigonometric Functions

Solution 7.3

sin z = (1/ı2)(e^{ız} − e^{−ız})
  = (1/ı2)(e^{−y+ıx} − e^{y−ıx})
  = (1/ı2)(e^{−y}(cos x + ı sin x) − e^{y}(cos x − ı sin x))
  = (1/2)(e^{−y}(sin x − ı cos x) + e^{y}(sin x + ı cos x))
  = sin x cosh y + ı cos x sinh y

sin z = √(sin^2 x cosh^2 y + cos^2 x sinh^2 y) exp(ı arctan(sin x cosh y, cos x sinh y))
  = √(cosh^2 y − cos^2 x) exp(ı arctan(sin x cosh y, cos x sinh y))
  = √((cosh(2y) − cos(2x))/2) exp(ı arctan(sin x cosh y, cos x sinh y))

Solution 7.4
In order that e^z be zero, the modulus, e^x, must be zero. Since e^x has no finite solutions, e^z = 0 has no finite solutions.

Solution 7.5
We write the expressions in terms of Cartesian coordinates.

|e^{z^2}| = |e^{(x+ıy)^2}| = |e^{x^2−y^2+ı2xy}| = e^{x^2−y^2}

e^{|z|^2} = e^{|x+ıy|^2} = e^{x^2+y^2}

The exponential function is an increasing function for real variables. Since x^2 − y^2 ≤ x^2 + y^2, we have e^{x^2−y^2} ≤ e^{x^2+y^2}.

|e^{z^2}| ≤ e^{|z|^2}

Equality holds only when y = 0.

Solution 7.6

coth(z) = 1
((e^z + e^{−z})/2) / ((e^z − e^{−z})/2) = 1
e^z + e^{−z} = e^z − e^{−z}
e^{−z} = 0

There are no solutions.
Solution 7.7
We write out the multi-valuedness of 2^z.

2 ∈ 2^z
e^{ln 2} ∈ e^{z log(2)}
e^{ln 2} ∈ {e^{z(ln(2)+ı2πn)} | n ∈ Z}
z(ln(2) + ı2πn) = ln(2) + ı2πm, for some m, n ∈ Z

z = (ln(2) + ı2πm)/(ln(2) + ı2πn), m, n ∈ Z

We verify this solution. Consider m and n to be fixed integers. We express the multi-valuedness in terms of k.

2^{(ln(2)+ı2πm)/(ln(2)+ı2πn)} = e^{[(ln(2)+ı2πm)/(ln(2)+ı2πn)] log(2)} = e^{[(ln(2)+ı2πm)/(ln(2)+ı2πn)](ln(2)+ı2πk)}

For k = n, this has the value e^{ln(2)+ı2πm} = e^{ln(2)} = 2.

Solution 7.8
We write out the multi-valuedness of 1^z.

1 ∈ 1^z
1 ∈ e^{z log(1)}
1 ∈ {e^{ız2πn} | n ∈ Z}

The element corresponding to n = 0 is e^0 = 1. Thus 1 ∈ 1^z has the solutions, z ∈ C. That is, z may be any complex number. We verify this solution.

1^z = e^{z log(1)} = e^{ız2πn}

For n = 0, this has the value 1.

Logarithmic Identities

Solution 7.9
We write the relationship in terms of the natural logarithm and the principal argument.

Log(z1 z2) = Log(z1) + Log(z2)
ln |z1 z2| + ı Arg(z1 z2) = ln |z1| + ı Arg(z1) + ln |z2| + ı Arg(z2)
Arg(z1 z2) = Arg(z1) + Arg(z2)

ℜ(zk) > 0 implies that Arg(zk) ∈ (−π/2 . . . π/2). Thus Arg(z1) + Arg(z2) ∈ (−π . . . π). In this case the relationship holds.

The relationship does not hold in general because Arg(z1) + Arg(z2) is not necessarily in the interval (−π . . . π]. Consider z1 = z2 = −1.

Arg((−1)(−1)) = Arg(1) = 0,    Arg(−1) + Arg(−1) = 2π
Log((−1)(−1)) = Log(1) = 0,    Log(−1) + Log(−1) = ı2π
Solution 7.10
1. The algebraic manipulations are fine. We write out the multi-valuedness of the logarithms.

log(−1) = log(1/(−1)) = log(1) − log(−1) = − log(−1)

{ıπ + ı2πn : n ∈ Z} = {ıπ + ı2πn : n ∈ Z} = {ı2πn : n ∈ Z} − {ıπ + ı2πn : n ∈ Z} = {−ıπ − ı2πn : n ∈ Z}

Thus log(−1) = − log(−1). However this does not imply that log(−1) = 0. This is because the logarithm is a set-valued function. log(−1) = − log(−1) is really saying:

{ıπ + ı2πn : n ∈ Z} = {−ıπ − ı2πn : n ∈ Z}

2. We consider

1 = 1^{1/2} = ((−1)(−1))^{1/2} = (−1)^{1/2} (−1)^{1/2} = ı·ı = −1.

There are three multi-valued expressions above.

1^{1/2} = ±1
((−1)(−1))^{1/2} = ±1
(−1)^{1/2} (−1)^{1/2} = (±ı)(±ı) = ±1

Thus we see that the first and fourth equalities are incorrect.

1 ≠ 1^{1/2},    (−1)^{1/2} (−1)^{1/2} ≠ ı·ı

Solution 7.11

2^{2/5} = 4^{1/5} = ⁵√4 · 1^{1/5} = ⁵√4 e^{ı2nπ/5}, n = 0, 1, 2, 3, 4

3^{1+ı} = e^{(1+ı) log 3} = e^{(1+ı)(ln 3 + ı2πn)} = e^{ln 3 − 2πn} e^{ı(ln 3 + 2πn)}, n ∈ Z

(√3 − ı)^{1/4} = (2 e^{−ıπ/6})^{1/4} = ⁴√2 e^{−ıπ/24} · 1^{1/4} = ⁴√2 e^{ı(πn/2 − π/24)}, n = 0, 1, 2, 3

1^{ı/4} = e^{(ı/4) log 1} = e^{(ı/4)(ı2πn)} = e^{−πn/2}, n ∈ Z
Solution 7.12

cos z = 69
(e^{ız} + e^{−ız})/2 = 69
e^{ı2z} − 138 e^{ız} + 1 = 0
e^{ız} = (1/2)(138 ± √(138^2 − 4))
z = −ı log(69 ± 2√1190)
z = −ı(ln(69 ± 2√1190) + ı2πn)

z = 2πn − ı ln(69 ± 2√1190), n ∈ Z

Solution 7.13

cot z = ı47
((e^{ız} + e^{−ız})/2) / ((e^{ız} − e^{−ız})/(ı2)) = ı47
e^{ız} + e^{−ız} = 47(e^{ız} − e^{−ız})
46 e^{ı2z} − 48 = 0
ı2z = log(24/23)
z = −(ı/2) log(24/23)
z = −(ı/2)(ln(24/23) + ı2πn), n ∈ Z

z = πn − (ı/2) ln(24/23), n ∈ Z
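As a quick numerical confirmation of Solution 7.12 (ours, not part of the text), each candidate root really does satisfy cos z = 69.

import cmath, math

for sign in (+1, -1):
    for n in (-1, 0, 1):
        z = 2 * math.pi * n - 1j * math.log(69 + sign * 2 * math.sqrt(1190))
        assert abs(cmath.cos(z) - 69) < 1e-8
print("all roots 2 pi n - i ln(69 +/- 2 sqrt(1190)) satisfy cos z = 69")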
Solution 7.14
1.

log(−ı) = ln |−ı| + ı arg(−ı) = ln(1) + ı(−π/2 + 2πn), n ∈ Z

log(−ı) = −ıπ/2 + ı2πn, n ∈ Z

These are equally spaced points on the imaginary axis. See Figure 7.36.

Figure 7.36: The values of log(−ı).

2.

(−ı)^{−ı} = e^{−ı log(−ı)} = e^{−ı(−ıπ/2 + ı2πn)}, n ∈ Z

(−ı)^{−ı} = e^{−π/2 + 2πn}, n ∈ Z

These are points on the positive real axis with an accumulation point at the origin. See Figure 7.37.

Figure 7.37: The values of (−ı)^{−ı}.

3.

3^π = e^{π log(3)} = e^{π(ln(3) + ı arg(3))}

3^π = e^{π(ln(3) + ı2πn)}, n ∈ Z

These points all lie on the circle of radius 3^π centered about the origin in the complex plane. See Figure 7.38.

Figure 7.38: The values of 3^π.

4.

log(log(ı)) = log(ı(π/2 + 2πm)), m ∈ Z
  = ln |π/2 + 2πm| + ı Arg(ı(π/2 + 2πm)) + ı2πn, m, n ∈ Z
  = ln |π/2 + 2πm| + ı sign(1 + 4m) π/2 + ı2πn, m, n ∈ Z

These points all lie in the right half-plane. See Figure 7.39.
Figure 7.39: The values of log(log(ı)).

Solution 7.15
1.

(cosh(ıπ))^{ı2} = ((e^{ıπ} + e^{−ıπ})/2)^{ı2}
  = (−1)^{ı2}
  = e^{ı2 log(−1)}
  = e^{ı2(ln(1) + ıπ + ı2πn)}, n ∈ Z
  = e^{−2π(1+2n)}, n ∈ Z

These are points on the positive real axis with an accumulation point at the origin. See Figure 7.40.

Figure 7.40: The values of (cosh(ıπ))^{ı2}.

2.

log(1/(1 + ı)) = − log(1 + ı)
  = − log(√2 e^{ıπ/4})
  = −(1/2) ln(2) − log(e^{ıπ/4})
  = −(1/2) ln(2) − ıπ/4 + ı2πn, n ∈ Z

These are points on a vertical line in the complex plane. See Figure 7.41.
Figure 7.41: The values of log(1/(1 + ı)).

3.

arctan(ı3) = (1/ı2) log((ı − ı3)/(ı + ı3))
  = (1/ı2) log(−1/2)
  = (1/ı2)(ln(1/2) + ıπ + ı2πn), n ∈ Z
  = π/2 + πn + (ı/2) ln(2)

These are points on a horizontal line in the complex plane. See Figure 7.42.

Figure 7.42: The values of arctan(ı3).

Solution 7.16

ı^ı = e^{ı log(ı)}
  = e^{ı(ln |ı| + ı Arg(ı) + ı2πn)}, n ∈ Z
  = e^{ı(ıπ/2 + ı2πn)}, n ∈ Z
  = e^{−π(1/2 + 2n)}, n ∈ Z

These are points on the positive real axis. There is an accumulation point at z = 0. See Figure 7.43.

log((1 + ı)^{ıπ}) = log(e^{ıπ log(1+ı)})
  = ıπ log(1 + ı) + ı2πn, n ∈ Z
  = ıπ(ln |1 + ı| + ı Arg(1 + ı) + ı2πm) + ı2πn, m, n ∈ Z
  = ıπ((1/2) ln 2 + ıπ/4 + ı2πm) + ı2πn, m, n ∈ Z
  = −π^2(1/4 + 2m) + ıπ((1/2) ln 2 + 2n), m, n ∈ Z
Figure 7.43: The values of ı^ı.

See Figure 7.44 for a plot.

Figure 7.44: The values of log((1 + ı)^{ıπ}).

Solution 7.17
1.

e^z = ı
z = log ı
z = ln |ı| + ı arg(ı)
z = ln(1) + ı(π/2 + 2πn), n ∈ Z

z = ıπ/2 + ı2πn, n ∈ Z

2. We can solve the equation by writing the cosine and sine in terms of the exponential.

cos z = sin z
(e^{ız} + e^{−ız})/2 = (e^{ız} − e^{−ız})/(ı2)
(1 + ı) e^{ız} = (−1 + ı) e^{−ız}
e^{ı2z} = (−1 + ı)/(1 + ı)
e^{ı2z} = ı
ı2z = log(ı)
ı2z = ıπ/2 + ı2πn, n ∈ Z

z = π/4 + πn, n ∈ Z
3.

tan^2 z = −1
sin^2 z = − cos^2 z
cos z = ±ı sin z
(e^{ız} + e^{−ız})/2 = ±ı (e^{ız} − e^{−ız})/(ı2)
e^{−ız} = − e^{−ız} or e^{ız} = − e^{ız}
e^{−ız} = 0 or e^{ız} = 0
e^{y−ıx} = 0 or e^{−y+ıx} = 0
e^y = 0 or e^{−y} = 0

z = ∅

There are no solutions for finite z.

Solution 7.18
1.

w = arctan(z)
z = tan(w)
z = sin(w)/cos(w)
z = ((e^{ıw} − e^{−ıw})/(ı2)) / ((e^{ıw} + e^{−ıw})/2)
z e^{ıw} + z e^{−ıw} = −ı e^{ıw} + ı e^{−ıw}
(ı + z) e^{ı2w} = ı − z
e^{ıw} = ((ı − z)/(ı + z))^{1/2}
w = −ı log(((ı − z)/(ı + z))^{1/2})

arctan(z) = (ı/2) log((ı + z)/(ı − z))

We identify the branch points of the arctangent.

arctan(z) = (ı/2)(log(ı + z) − log(ı − z))

There are branch points at z = ±ı due to the logarithm terms. We examine the point at infinity with the change of variables ζ = 1/z.

arctan(1/ζ) = (ı/2) log((ı + 1/ζ)/(ı − 1/ζ))
arctan(1/ζ) = (ı/2) log((ıζ + 1)/(ıζ − 1))

As ζ → 0, the argument of the logarithm term tends to −1. The logarithm does not have a branch point at that point. Since arctan(1/ζ) does not have a branch point at ζ = 0, arctan(z) does not have a branch point at infinity.
2.

w = arctanh(z)
z = tanh(w)
z = sinh(w)/cosh(w)
z = ((e^w − e^{−w})/2) / ((e^w + e^{−w})/2)
z e^w + z e^{−w} = e^w − e^{−w}
(z − 1) e^{2w} = −z − 1
e^w = ((−z − 1)/(z − 1))^{1/2}
w = log(((z + 1)/(1 − z))^{1/2})

arctanh(z) = (1/2) log((1 + z)/(1 − z))

We identify the branch points of the hyperbolic arctangent.

arctanh(z) = (1/2)(log(1 + z) − log(1 − z))

There are branch points at z = ±1 due to the logarithm terms. We examine the point at infinity with the change of variables ζ = 1/z.

arctanh(1/ζ) = (1/2) log((1 + 1/ζ)/(1 − 1/ζ))
arctanh(1/ζ) = (1/2) log((ζ + 1)/(ζ − 1))

As ζ → 0, the argument of the logarithm term tends to −1. The logarithm does not have a branch point at that point. Since arctanh(1/ζ) does not have a branch point at ζ = 0, arctanh(z) does not have a branch point at infinity.

3.

w = arccosh(z)
z = cosh(w)
z = (e^w + e^{−w})/2
e^{2w} − 2z e^w + 1 = 0
e^w = z + (z^2 − 1)^{1/2}
w = log(z + (z^2 − 1)^{1/2})

arccosh(z) = log(z + (z^2 − 1)^{1/2})

We identify the branch points of the hyperbolic arc-cosine.

arccosh(z) = log(z + (z − 1)^{1/2} (z + 1)^{1/2})

First we consider branch points due to the square root. There are branch points at z = ±1 due to the square root terms. If we walk around the singularity at z = 1 and no other singularities,
the (z^2 − 1)^{1/2} term changes sign. This will change the value of arccosh(z). The same is true for the point z = −1. The point at infinity is not a branch point for (z^2 − 1)^{1/2}. We factor the expression to verify this.

(z^2 − 1)^{1/2} = (z^2)^{1/2} (1 − z^{−2})^{1/2}

(z^2)^{1/2} does not have a branch point at infinity. It is multi-valued, but it has no branch points. (1 − z^{−2})^{1/2} does not have a branch point at infinity; the argument of the square root function tends to unity there. In summary, there are branch points at z = ±1 due to the square root. If we walk around either one of these branch points, the square root term will change value. If we walk around both of these points, the square root term will not change value.

Now we consider branch points due to the logarithm. There may be branch points where the argument of the logarithm vanishes or tends to infinity. We see if the argument of the logarithm vanishes.

z + (z^2 − 1)^{1/2} = 0
z^2 = z^2 − 1

z + (z^2 − 1)^{1/2} is non-zero and finite everywhere in the complex plane. The only possibility for a branch point in the logarithm term is the point at infinity. We see if the argument of z + (z^2 − 1)^{1/2} changes when we walk around infinity but no other singularity. We consider a circular path with center at the origin and radius greater than unity. We can either say that this path encloses the two branch points at z = ±1 and no other singularities or we can say that this path encloses the point at infinity and no other singularities. We examine the value of the argument of the logarithm on this path.

z + (z^2 − 1)^{1/2} = z + (z^2)^{1/2} (1 − z^{−2})^{1/2}

Neither (z^2)^{1/2} nor (1 − z^{−2})^{1/2} changes value as we walk the path. Thus we can use the principal branch of the square root in the expression.

z + (z^2 − 1)^{1/2} = z ± z √(1 − z^{−2}) = z(1 ± √(1 − z^{−2}))

First consider the "+" branch.

z(1 + √(1 − z^{−2}))

As we walk the path around infinity, the argument of z changes by 2π while the argument of 1 + √(1 − z^{−2}) does not change. Thus the argument of z + (z^2 − 1)^{1/2} changes by 2π when we go around infinity. This makes the value of the logarithm change by ı2π. There is a branch point at infinity.

Now consider the "−" branch.

z(1 − √(1 − z^{−2})) = z(1 − (1 − (1/2)z^{−2} + O(z^{−4}))) = z((1/2)z^{−2} + O(z^{−4})) = (1/2)z^{−1}(1 + O(z^{−2}))

As we walk the path around infinity, the argument of z^{−1} changes by −2π while the argument of 1 + O(z^{−2}) does not change. Thus the argument of z + (z^2 − 1)^{1/2} changes by −2π
• 222. when we go around infinity. This makes the value of the logarithm change by -ı2π. Again we conclude that there is a branch point at infinity.

For the sole purpose of overkill, let's repeat the above analysis from a geometric viewpoint. Again we consider the possibility of a branch point at infinity due to the logarithm. We walk along the circle shown in the first plot of Figure 7.45. Traversing this path, we go around infinity, but no other singularities. We consider the mapping w = z + ( z^2 - 1 )^{1/2}. Depending on the branch of the square root, the circle is mapped to one of the contours shown in the second plot. For each branch, the argument of w changes by ±2π as we traverse the circle in the z-plane. Therefore the value of arccosh(z) = log( z + ( z^2 - 1 )^{1/2} ) changes by ±ı2π as we traverse the circle. We again conclude that there is a branch point at infinity due to the logarithm.

Figure 7.45: The mapping of a circle under w = z + ( z^2 - 1 )^{1/2}.

To summarize: There are branch points at z = ±1 due to the square root and a branch point at infinity due to the logarithm.

Branch Points and Branch Cuts

Solution 7.19
We expand the function to diagnose the branch points in the finite complex plane.

    f(z) = log( z(z + 1)/(z - 1) ) = log(z) + log(z + 1) - log(z - 1)

There are branch points at z = -1, 0, 1. Now we examine the point at infinity. We make the change of variables z = 1/ζ.

    f(1/ζ) = log( (1/ζ)(1/ζ + 1)/(1/ζ - 1) )
           = log( (1/ζ)(1 + ζ)/(1 - ζ) )
           = log(1 + ζ) - log(1 - ζ) - log(ζ)

log(ζ) has a branch point at ζ = 0. The other terms do not have branch points there. Since f(1/ζ) has a branch point at ζ = 0, f(z) has a branch point at infinity.

Note that in walking around either z = -1 or z = 0 once in the positive direction, the argument of z(z + 1)/(z - 1) changes by 2π. In walking around z = 1, the argument of z(z + 1)/(z - 1) changes by -2π. This argument does not change if we walk around both z = 0 and z = 1. Thus we put a branch cut between z = 0 and z = 1. Next we put a branch cut between z = -1 and the point at infinity. This prevents us from walking around either of these branch points. These two branch cuts separate the branches of the function. See Figure 7.46.

Figure 7.46: Branch cuts for log( z(z + 1)/(z - 1) ).
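The winding numbers quoted above can be checked in a few lines. This is a minimal sketch in Python (standard library only; the function winding is ours): it accumulates the continuous change of arg( z(z+1)/(z-1) ) around a small circle about each branch point.

    import cmath, math

    def winding(z0, radius=0.25, n=2000):
        # Net change of arg( z(z+1)/(z-1) ) around z0, in full turns.
        total, prev = 0.0, None
        for k in range(n + 1):
            z = z0 + radius * cmath.exp(2j * math.pi * k / n)
            g = z * (z + 1) / (z - 1)
            if prev is not None:
                total += cmath.phase(g / prev)
            prev = g
        return total / (2 * math.pi)

    print(winding(0), winding(-1), winding(1))   # ~1, ~1, ~-1 turns, as claimed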
• 223. Solution 7.20
First we factor the function.

    f(z) = ( z(z + 3)(z - 2) )^{1/2} = z^{1/2} (z + 3)^{1/2} (z - 2)^{1/2}

There are branch points at z = -3, 0, 2. Now we examine the point at infinity.

    f(1/ζ) = ( (1/ζ)(1/ζ + 3)(1/ζ - 2) )^{1/2} = ζ^{-3/2} ( (1 + 3ζ)(1 - 2ζ) )^{1/2}

Since ζ^{-3/2} has a branch point at ζ = 0 and the rest of the terms are analytic there, f(z) has a branch point at infinity. Consider the set of branch cuts in Figure 7.47. These cuts do not permit us to walk around any single branch point. We can only walk around none or all of the branch points, (which is the same thing). The cuts can be used to define a single-valued branch of the function.

Figure 7.47: Branch cuts for ( z^3 + z^2 - 6z )^{1/2}.

Now to define the branch. We make a choice of angles.

    z + 3 = r1 e^{ıθ1},   -π < θ1 < π
    z     = r2 e^{ıθ2},   -π/2 < θ2 < 3π/2
    z - 2 = r3 e^{ıθ3},   0 < θ3 < 2π

The function is

    f(z) = ( r1 e^{ıθ1} r2 e^{ıθ2} r3 e^{ıθ3} )^{1/2} = sqrt(r1 r2 r3) e^{ı(θ1 + θ2 + θ3)/2}.

We evaluate the function at z = -1.

    f(-1) = sqrt( (2)(1)(3) ) e^{ı(0 + π + π)/2} = -sqrt(6)

We see that our choice of angles gives us the desired branch.
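A quick numerical check of this branch, under the assumption of Python with the standard cmath module (the helper wrap, which shifts an angle into a prescribed 2π-window, is our own):

    import cmath, math

    def wrap(angle, low):
        # Shift angle into the interval [low, low + 2*pi).
        return (angle - low) % (2 * math.pi) + low

    def f(z):
        # The branch of ( z(z+3)(z-2) )^{1/2} defined by the angle ranges above.
        t1 = wrap(cmath.phase(z + 3), -math.pi)
        t2 = wrap(cmath.phase(z), -math.pi / 2)
        t3 = wrap(cmath.phase(z - 2), 0.0)
        r = abs(z + 3) * abs(z) * abs(z - 2)
        return math.sqrt(r) * cmath.exp(0.5j * (t1 + t2 + t3))

    print(f(-1))   # (-2.449...+0j), i.e. -sqrt(6), the desired branch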
• 224. The stereographic projection is the projection from the complex plane onto a unit sphere with south pole at the origin. The point z = x + ıy is mapped to the point (X, Y, Z) on the sphere with

    X = 4x/(|z|^2 + 4),   Y = 4y/(|z|^2 + 4),   Z = 2|z|^2/(|z|^2 + 4).

Figure 7.48 first shows the branch cuts and their stereographic projections and then shows the stereographic projections alone.

Figure 7.48: Branch cuts for ( z^3 + z^2 - 6z )^{1/2} and their stereographic projections.

Solution 7.21
1. For each value of z, f(z) = z^{1/3} has three values.

    f(z) = z^{1/3} = ∛z e^{ık2π/3},   k = 0, 1, 2

2.  g(w) = w^3 = |w|^3 e^{ı3 arg(w)}

Any sector of the w plane of angle 2π/3 maps one-to-one to the whole z-plane.

    g : { r e^{ıθ} | r ≥ 0, θ0 ≤ θ < θ0 + 2π/3 } → { r^3 e^{ı3θ} | r ≥ 0, θ0 ≤ θ < θ0 + 2π/3 }
    g : { r e^{ıθ} | r ≥ 0, θ0 ≤ θ < θ0 + 2π/3 } → { r e^{ıθ} | r ≥ 0, 3θ0 ≤ θ < 3θ0 + 2π }
    g : { r e^{ıθ} | r ≥ 0, θ0 ≤ θ < θ0 + 2π/3 } → C

See Figure 7.49 to see how g(w) maps the sector 0 ≤ θ < 2π/3.

3. See Figure 7.50 for a depiction of the Riemann surface for f(z) = z^{1/3}. We show two views of the surface and a curve that traces the edge of the shown portion of the surface. The depiction is misleading because the surface is not self-intersecting. We would need four dimensions to properly visualize this Riemann surface.

4. f(z) = z^{1/3} has branch points at z = 0 and z = ∞. Any branch cut which connects these two points would prevent us from walking around the points singly and would thus separate the branches of the function. For example, we could put a branch cut on the negative real axis. Defining the angle -π < θ < π for the mapping

    f( r e^{ıθ} ) = ∛r e^{ıθ/3}

defines a single-valued branch of the function.

Solution 7.22
The cube roots of 1 are

    { 1, e^{ı2π/3}, e^{ı4π/3} } = { 1, (-1 + ı sqrt(3))/2, (-1 - ı sqrt(3))/2 }.
  • 225. Figure 7.49: The function g(w) = w3 maps the sector 0 ≤ θ < 2π/3 one-to-one to the whole z-plane. Figure 7.50: Riemann surface for f(z) = z1/3 . 205
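Before factoring z^3 - 1 in Solution 7.22, it may help to list the cube roots of unity concretely. A minimal sketch (Python, standard library):

    import cmath, math

    # The three cube roots of unity, e^{i 2 pi k / 3} for k = 0, 1, 2.
    roots = [cmath.exp(2j * math.pi * k / 3) for k in range(3)]
    for w in roots:
        print(w, w ** 3)   # 1, (-1 + i sqrt(3))/2, (-1 - i sqrt(3))/2; each cubes to ~1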
  • 226. We factor the polynomial. z3 − 1 1/2 = (z − 1)1/2 z + 1 − ı √ 3 2 1/2 z + 1 + ı √ 3 2 1/2 There are branch points at each of the cube roots of unity. z = 1, −1 + ı √ 3 2 , −1 − ı √ 3 2 Now we examine the point at infinity. We make the change of variables z = 1/ζ. f(1/ζ) = 1/ζ3 − 1 1/2 = ζ−3/2 1 − ζ3 1/2 ζ−3/2 has a branch point at ζ = 0, while 1 − ζ3 1/2 is not singular there. Since f(1/ζ) has a branch point at ζ = 0, f(z) has a branch point at infinity. There are several ways of introducing branch cuts to separate the branches of the function. The easiest approach is to put a branch cut from each of the three branch points in the finite complex plane out to the branch point at infinity. See Figure 7.51a. Clearly this makes the function single valued as it is impossible to walk around any of the branch points. Another approach is to have a branch cut from one of the branch points in the finite plane to the branch point at infinity and a branch cut connecting the remaining two branch points. See Figure 7.51bcd. Note that in walking around any one of the finite branch points, (in the positive direction), the argument of the function changes by π. This means that the value of the function changes by eıπ , which is to say the value of the function changes sign. In walking around any two of the finite branch points, (again in the positive direction), the argument of the function changes by 2π. This means that the value of the function changes by eı2π , which is to say that the value of the function does not change. This demonstrates that the latter branch cut approach makes the function single-valued. a b c d Figure 7.51: Suitable branch cuts for z3 − 1 1/2 . Now we construct a branch. We will use the branch cuts in Figure 7.51a. We introduce variables to measure radii and angles from the three finite branch points. z − 1 = r1 eıθ1 , 0 < θ1 < 2π z + 1 − ı √ 3 2 = r2 eıθ2 , − 2π 3 < θ2 < π 3 z + 1 + ı √ 3 2 = r3 eıθ3 , − π 3 < θ3 < 2π 3 We compute f(0) to see if it has the desired value. f(z) = √ r1r2r3 eı(θ1+θ2+θ3)/2 f(0) = eı(π−π/3+π/3)/2 = ı Since it does not have the desired value, we change the range of θ1. z − 1 = r1 eıθ1 , 2π < θ1 < 4π 206
• 227. f(0) now has the desired value.

    f(0) = e^{ı(3π - π/3 + π/3)/2} = -ı

We compute f(-1).

    f(-1) = sqrt(2) e^{ı(3π - 2π/3 + 2π/3)/2} = -ı sqrt(2)

Solution 7.23
First we factor the function.

    w(z) = ( (z + 2)(z - 1)(z - 6) )^{1/2} = (z + 2)^{1/2} (z - 1)^{1/2} (z - 6)^{1/2}

There are branch points at z = -2, 1, 6. Now we examine the point at infinity.

    w(1/ζ) = ( (1/ζ + 2)(1/ζ - 1)(1/ζ - 6) )^{1/2} = ζ^{-3/2} ( (1 + 2ζ)(1 - ζ)(1 - 6ζ) )^{1/2}

Since ζ^{-3/2} has a branch point at ζ = 0 and the rest of the terms are analytic there, w(z) has a branch point at infinity. Consider the set of branch cuts in Figure 7.52. These cuts let us walk around the branch points at z = -2 and z = 1 together, or, if we change our perspective, we would be walking around the branch points at z = 6 and z = ∞ together. Consider a contour in this cut plane that encircles the branch points at z = -2 and z = 1. Since the argument of (z - z0)^{1/2} changes by π when we walk around z0, the argument of w(z) changes by 2π when we traverse the contour. Thus the value of the function does not change and it is a valid set of branch cuts.

Figure 7.52: Branch cuts for ( (z + 2)(z - 1)(z - 6) )^{1/2}.

Now to define the branch. We make a choice of angles.

    z + 2 = r1 e^{ıθ1},   θ1 = θ2 for z ∈ (1 . . . 6),
    z - 1 = r2 e^{ıθ2},   θ2 = θ1 for z ∈ (1 . . . 6),
    z - 6 = r3 e^{ıθ3},   0 < θ3 < 2π

The function is

    w(z) = ( r1 e^{ıθ1} r2 e^{ıθ2} r3 e^{ıθ3} )^{1/2} = sqrt(r1 r2 r3) e^{ı(θ1 + θ2 + θ3)/2}.

We evaluate the function at z = 4.

    w(4) = sqrt( (6)(3)(2) ) e^{ı(2πn + 2πn + π)/2} = ı6

We see that our choice of angles gives us the desired branch.
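The claim that these cuts give a valid single-valued branch can be corroborated numerically by tracking w(z) continuously around a contour that encircles z = -2 and z = 1 but not z = 6. A minimal sketch (Python standard library; the bookkeeping helper turns is ours):

    import cmath, math

    def turns(center=-0.5, radius=2.0, n=4000):
        # Net change of arg w(z), in full turns, around the circle
        # |z - center| = radius, tracking each factor's argument continuously.
        args = [0.0, 0.0, 0.0]
        prev = None
        for k in range(n + 1):
            z = center + radius * cmath.exp(2j * math.pi * k / n)
            cur = [z + 2, z - 1, z - 6]
            if prev is not None:
                for i in range(3):
                    args[i] += cmath.phase(cur[i] / prev[i])
            prev = cur
        return sum(args) / 2 / (2 * math.pi)   # the square root halves the argument

    print(turns())   # ~1.0: arg w changes by 2*pi, so w is unchanged, as claimed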
• 228. Solution 7.24
1.  cos( z^{1/2} ) = cos( ±sqrt(z) ) = cos( sqrt(z) )

This is a single-valued function. There are no branch points.

2.  (z + ı)^{-z} = e^{-z log(z + ı)} = e^{-z ( ln|z + ı| + ı Arg(z + ı) + ı2πn )},   n ∈ Z

There is a branch point at z = -ı. There are an infinite number of branches.

Solution 7.25
1.  f(z) = ( z^2 + 1 )^{1/2} = (z + ı)^{1/2} (z - ı)^{1/2}

We see that there are branch points at z = ±ı. To examine the point at infinity, we substitute z = 1/ζ and examine the point ζ = 0.

    ( (1/ζ)^2 + 1 )^{1/2} = ( 1/ζ^2 )^{1/2} ( 1 + ζ^2 )^{1/2}

Since there is no branch point at ζ = 0, f(z) has no branch point at infinity. A branch cut connecting z = ±ı would make the function single-valued. We could also accomplish this with two branch cuts starting at z = ±ı and going to infinity.

2.  f(z) = ( z^3 - z )^{1/2} = z^{1/2} (z - 1)^{1/2} (z + 1)^{1/2}

There are branch points at z = -1, 0, 1. Now we consider the point at infinity.

    f(1/ζ) = ( (1/ζ)^3 - 1/ζ )^{1/2} = ζ^{-3/2} ( 1 - ζ^2 )^{1/2}

There is a branch point at infinity. One can make the function single-valued with three branch cuts that start at z = -1, 0, 1 and each go to infinity. We can also make the function single-valued with a branch cut that connects two of the points z = -1, 0, 1 and another branch cut that starts at the remaining point and goes to infinity.

3.  f(z) = log( z^2 - 1 ) = log(z - 1) + log(z + 1)

There are branch points at z = ±1.

    f(1/ζ) = log( 1/ζ^2 - 1 ) = log( ζ^{-2} ) + log( 1 - ζ^2 )

log( ζ^{-2} ) has a branch point at ζ = 0.

    log( ζ^{-2} ) = ln| ζ^{-2} | + ı arg( ζ^{-2} ) = ln| ζ^{-2} | - ı2 arg(ζ)

Every time we walk around the point ζ = 0 in the positive direction, the value of the function changes by -ı4π. f(z) has a branch point at infinity. We can make the function single-valued by introducing two branch cuts that start at z = ±1 and each go to infinity.

4.  f(z) = log( (z + 1)/(z - 1) ) = log(z + 1) - log(z - 1)
  • 229. There are branch points at z = ±1. f 1 ζ = log 1/ζ + 1 1/ζ − 1 = log 1 + ζ 1 − ζ There is no branch point at ζ = 0. f(z) has no branch point at infinity. We can make the function single-valued by introducing two branch cuts that start at z = ±1 and each go to infinity. We can also make the function single-valued with a branch cut that connects the points z = ±1. This is because log(z + 1) and − log(z − 1) change by ı2π and −ı2π, respectively, when you walk around their branch points once in the positive direction. Solution 7.26 1. The cube roots of −8 are −2, −2 eı2π/3 , −2 eı4π/3 = −2, 1 + ı √ 3, 1 − ı √ 3 . Thus we can write z3 + 8 1/2 = (z + 2)1/2 z − 1 − ı √ 3 1/2 z − 1 + ı √ 3 1/2 . There are three branch points on the circle of radius 2. z = −2, 1 + ı √ 3, 1 − ı √ 3 . We examine the point at infinity. f(1/ζ) = 1/ζ3 + 8 1/2 = ζ−3/2 1 + 8ζ3 1/2 Since f(1/ζ) has a branch point at ζ = 0, f(z) has a branch point at infinity. There are several ways of introducing branch cuts outside of the disk |z| < 2 to separate the branches of the function. The easiest approach is to put a branch cut from each of the three branch points in the finite complex plane out to the branch point at infinity. See Figure 7.53a. Clearly this makes the function single valued as it is impossible to walk around any of the branch points. Another approach is to have a branch cut from one of the branch points in the finite plane to the branch point at infinity and a branch cut connecting the remaining two branch points. See Figure 7.53bcd. Note that in walking around any one of the finite branch points, (in the positive direction), the argument of the function changes by π. This means that the value of the function changes by eıπ , which is to say the value of the function changes sign. In walking around any two of the finite branch points, (again in the positive direction), the argument of the function changes by 2π. This means that the value of the function changes by eı2π , which is to say that the value of the function does not change. This demonstrates that the latter branch cut approach makes the function single-valued. a b c d Figure 7.53: Suitable branch cuts for z3 + 8 1/2 . 209
  • 230. 2. f(z) = log 5 + z + 1 z − 1 1/2 First we deal with the function g(z) = z + 1 z − 1 1/2 Note that it has branch points at z = ±1. Consider the point at infinity. g(1/ζ) = 1/ζ + 1 1/ζ − 1 1/2 = 1 + ζ 1 − ζ 1/2 Since g(1/ζ) has no branch point at ζ = 0, g(z) has no branch point at infinity. This means that if we walk around both of the branch points at z = ±1, the function does not change value. We can verify this with another method: When we walk around the point z = −1 once in the positive direction, the argument of z + 1 changes by 2π, the argument of (z + 1)1/2 changes by π and thus the value of (z + 1)1/2 changes by eıπ = −1. When we walk around the point z = 1 once in the positive direction, the argument of z − 1 changes by 2π, the argument of (z − 1)−1/2 changes by −π and thus the value of (z − 1)−1/2 changes by e−ıπ = −1. f(z) has branch points at z = ±1. When we walk around both points z = ±1 once in the positive direction, the value of z+1 z−1 1/2 does not change. Thus we can make the function single-valued with a branch cut which enables us to walk around either none or both of these branch points. We put a branch cut from −1 to 1 on the real axis. f(z) has branch points where 5 + z + 1 z − 1 1/2 is either zero or infinite. The only place in the extended complex plane where the expression becomes infinite is at z = 1. Now we look for the zeros. 5 + z + 1 z − 1 1/2 = 0 z + 1 z − 1 1/2 = −5 z + 1 z − 1 = 25 z + 1 = 25z − 25 z = 13 12 Note that 13/12 + 1 13/12 − 1 1/2 = 251/2 = ±5. On one branch, (which we call the positive branch), of the function g(z) the quantity 5 + z + 1 z − 1 1/2 is always nonzero. On the other (negative) branch of the function, this quantity has a zero at z = 13/12. 210
• 231. The logarithm introduces branch points at z = 1 on both the positive and negative branch of g(z). It introduces a branch point at z = 13/12 on the negative branch of g(z). To determine if additional branch cuts are needed to separate the branches, we consider

    w = 5 + ( (z + 1)/(z - 1) )^{1/2}

and see where the branch cut between ±1 gets mapped to in the w plane. We rewrite the mapping.

    w = 5 + ( 1 + 2/(z - 1) )^{1/2}

The mapping is the following sequence of simple transformations:

(a) z → z - 1
(b) z → 1/z
(c) z → 2z
(d) z → z + 1
(e) z → z^{1/2}
(f) z → z + 5

We show these transformations graphically below.

For the positive branch of g(z), the branch cut is mapped to the line x = 5 and the z plane is mapped to the half-plane x > 5. log(w) has branch points at w = 0 and w = ∞. It is not possible to walk around either of these points alone in the half-plane x > 5. Thus no additional branch cuts are needed in the positive sheet of g(z).

For the negative branch of g(z), the branch cut is mapped to the line x = 5 and the z plane is mapped to the half-plane x < 5. It is possible to walk around either w = 0 or w = ∞ alone in this half-plane. Thus we need an additional branch cut. On the negative sheet of g(z), we put a branch cut between z = 1 and z = 13/12. This puts a branch cut between w = ∞ and w = 0 and thus separates the branches of the logarithm. Figure 7.54 shows the branch cuts in the positive and negative sheets of g(z).

3. The function f(z) = (z + ı3)^{1/2} has a branch point at z = -ı3. The function is made single-valued by connecting this point and the point at infinity with a branch cut.

Solution 7.27
Note that the curve with opposite orientation goes around infinity in the positive direction and does not enclose any branch points. Thus the value of the function does not change when traversing
• 232. Figure 7.54: The branch cuts for f(z) = log( 5 + ((z + 1)/(z - 1))^{1/2} ), on the sheet where g(13/12) = 5 and on the sheet where g(13/12) = -5.

the curve, (with either orientation, of course). This means that the argument of the function must change by an integer multiple of 2π. Since the branch cut only allows us to encircle all three or none of the branch points, it makes the function single valued.

Solution 7.28
We suppose that f(z) has only one branch point in the finite complex plane. Consider any contour that encircles this branch point in the positive direction. f(z) changes value if we traverse the contour. If we reverse the orientation of the contour, then it encircles infinity in the positive direction, but contains no branch points in the finite complex plane. Since the function changes value when we traverse the contour, we conclude that the point at infinity must be a branch point. If f(z) has only a single branch point in the finite complex plane then it must have a branch point at infinity.

If f(z) has two or more branch points in the finite complex plane then it may or may not have a branch point at infinity. This is because the value of the function may or may not change on a contour that encircles all the branch points in the finite complex plane.

Solution 7.29
First we factor the function,

    f(z) = ( z^4 + 1 )^{1/4}
         = ( z - (1 + ı)/sqrt(2) )^{1/4} ( z - (-1 + ı)/sqrt(2) )^{1/4} ( z - (-1 - ı)/sqrt(2) )^{1/4} ( z - (1 - ı)/sqrt(2) )^{1/4}.

There are branch points at z = (±1 ± ı)/sqrt(2). We make the substitution z = 1/ζ to examine the point at infinity.

    f(1/ζ) = ( 1/ζ^4 + 1 )^{1/4} = ( 1/ζ^4 )^{1/4} ( 1 + ζ^4 )^{1/4}

( 1/ζ^4 )^{1/4} is multi-valued, but it has no branch point at ζ = 0. Thus ( z^4 + 1 )^{1/4} has no branch point at infinity.

Note that the argument of ( z - z0 )^{1/4} changes by π/2 on a contour that goes around the point z0 once in the positive direction. The argument of ( z^4 + 1 )^{1/4} changes by nπ/2 on a contour that goes around n of its branch points. Thus any set of branch cuts that permit you to walk around only one, two or three of the branch points will not make the function single valued. A set of branch cuts that permit us to walk around only zero or all four of the branch points will make the function single-valued. Thus we see that the first two sets of branch cuts in Figure 7.32 will make the function single-valued, while the remaining two will not.

Consider the contour in Figure 7.32. There are two ways to see that the function does not change value while traversing the contour. The first is to note that each of the branch points makes the argument of the function increase by π/2. Thus the argument of ( z^4 + 1 )^{1/4} changes by 4(π/2) = 2π on the contour. This means that the value of the function changes by the factor e^{ı2π} = 1. If we change the orientation of the contour, then it is a contour that encircles infinity once in the positive direction. There are no branch points inside this contour with opposite orientation. (Recall that
• 233. the inside of a contour lies to your left as you walk around it.) Since there are no branch points inside this contour, the function cannot change value as we traverse it.

Solution 7.30

    f(z) = ( z/(z^2 + 1) )^{1/3} = z^{1/3} (z - ı)^{-1/3} (z + ı)^{-1/3}

There are branch points at z = 0, ±ı.

    f(1/ζ) = ( (1/ζ)/((1/ζ)^2 + 1) )^{1/3} = ζ^{1/3} / ( 1 + ζ^2 )^{1/3}

There is a branch point at ζ = 0. f(z) has a branch point at infinity. We introduce branch cuts from z = 0 to infinity on the negative real axis, from z = ı to infinity on the positive imaginary axis and from z = -ı to infinity on the negative imaginary axis. As we cannot walk around any of the branch points, this makes the function single-valued. We define a branch by defining angles from the branch points. Let

    z = r e^{ıθ},       -π < θ < π,
    z - ı = s e^{ıφ},   -3π/2 < φ < π/2,
    z + ı = t e^{ıψ},   -π/2 < ψ < 3π/2.

With

    f(z) = z^{1/3} (z - ı)^{-1/3} (z + ı)^{-1/3}
         = ∛r e^{ıθ/3} (1/∛s) e^{-ıφ/3} (1/∛t) e^{-ıψ/3}
         = ∛( r/(s t) ) e^{ı(θ - φ - ψ)/3}

we have an explicit formula for computing the value of the function for this branch. Now we compute f(1) to see if we chose the correct ranges for the angles. (If not, we'll just change one of them.)

    f(1) = ∛( 1/( sqrt(2) sqrt(2) ) ) e^{ı(0 - π/4 - (-π/4))/3} = 1/∛2

We made the right choice for the angles. Now to compute f(1 + ı).

    f(1 + ı) = ∛( sqrt(2)/( 1 · sqrt(5) ) ) e^{ı(π/4 - 0 - Arctan(2))/3} = (2/5)^{1/6} e^{ı(π/4 - Arctan(2))/3}

Consider the value of the function above and below the branch cut on the negative real axis. Above the branch cut the function is

    f(-x + ı0) = ∛( x/( sqrt(x^2 + 1) sqrt(x^2 + 1) ) ) e^{ı(π - φ - ψ)/3}

Note that φ = -ψ so that

    f(-x + ı0) = ∛( x/(x^2 + 1) ) e^{ıπ/3} = ∛( x/(x^2 + 1) ) (1 + ı sqrt(3))/2.

Below the branch cut θ = -π and

    f(-x - ı0) = ∛( x/(x^2 + 1) ) e^{-ıπ/3} = ∛( x/(x^2 + 1) ) (1 - ı sqrt(3))/2.
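Before evaluating along the remaining cuts, the branch formula can be spot-checked numerically. A minimal sketch (Python, standard library; wrap is our helper for placing an angle in a given 2π-window):

    import cmath, math

    def wrap(angle, low):
        return (angle - low) % (2 * math.pi) + low

    def f(z):
        # The branch of ( z/(z^2+1) )^{1/3} with the angle ranges chosen above.
        theta = wrap(cmath.phase(z), -math.pi)
        phi = wrap(cmath.phase(z - 1j), -3 * math.pi / 2)
        psi = wrap(cmath.phase(z + 1j), -math.pi / 2)
        r, s, t = abs(z), abs(z - 1j), abs(z + 1j)
        return (r / (s * t)) ** (1 / 3) * cmath.exp(1j * (theta - phi - psi) / 3)

    print(f(1))        # ~0.7937 = 1 / 2**(1/3)
    print(f(1 + 1j))   # ~ (2/5)**(1/6) * exp(1j*(pi/4 - atan(2))/3)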
  • 234. For the branch cut along the positive imaginary axis, f(ıy + 0) = 3 y (y − 1)(y + 1) eı(π/2−π/2−π/2)/3 = 3 y (y − 1)(y + 1) e−ıπ/6 = 3 y (y − 1)(y + 1) √ 3 − ı 2 , f(ıy − 0) = 3 y (y − 1)(y + 1) eı(π/2−(−3π/2)−π/2)/3 = 3 y (y − 1)(y + 1) eıπ/2 = ı 3 y (y − 1)(y + 1) . For the branch cut along the negative imaginary axis, f(−ıy + 0) = 3 y (y + 1)(y − 1) eı(−π/2−(−π/2)−(−π/2))/3 = 3 y (y + 1)(y − 1) eıπ/6 = 3 y (y + 1)(y − 1) √ 3 + ı 2 , f(−ıy − 0) = 3 y (y + 1)(y − 1) eı(−π/2−(−π/2)−(3π/2))/3 = 3 y (y + 1)(y − 1) e−ıπ/2 = −ı 3 y (y + 1)(y − 1) . Solution 7.31 First we factor the function. f(z) = ((z − 1)(z − 2)(z − 3)) 1/2 = (z − 1)1/2 (z − 2)1/2 (z − 3)1/2 There are branch points at z = 1, 2, 3. Now we examine the point at infinity. f 1 ζ = 1 ζ − 1 1 ζ − 2 1 ζ − 3 1/2 = ζ−3/2 1 − 1 ζ 1 − 2 ζ 1 − 3 ζ 1/2 Since ζ−3/2 has a branch point at ζ = 0 and the rest of the terms are analytic there, f(z) has a branch point at infinity. The first two sets of branch cuts in Figure 7.33 do not permit us to walk around any of the branch points, including the point at infinity, and thus make the function single-valued. The third set of branch cuts lets us walk around the branch points at z = 1 and z = 2 together or if we change our perspective, we would be walking around the branch points at z = 3 and z = ∞ together. Consider a contour in this cut plane that encircles the branch points at z = 1 and z = 2. Since the argument of (z − z0) 1/2 changes by π when we walk around z0, the argument of f(z) changes by 2π when we traverse the contour. Thus the value of the function does not change and it is a valid set of branch 214
  • 235. cuts. Clearly the fourth set of branch cuts does not make the function single-valued as there are contours that encircle the branch point at infinity and no other branch points. The other way to see this is to note that the argument of f(z) changes by 3π as we traverse a contour that goes around the branch points at z = 1, 2, 3 once in the positive direction. Now to define the branch. We make the preliminary choice of angles, z − 1 = r1 eıθ1 , 0 < θ1 < 2π, z − 2 = r2 eıθ2 , 0 < θ2 < 2π, z − 3 = r3 eıθ3 , 0 < θ3 < 2π. The function is f(z) = r1 eıθ1 r2 eıθ2 r3 eıθ3 1/2 = √ r1r2r3 eı(θ1+θ2+θ3)/2 . The value of the function at the origin is f(0) = √ 6 eı(3π)/2 = −ı √ 6, which is not what we wanted. We will change range of one of the angles to get the desired result. z − 1 = r1 eıθ1 , 0 < θ1 < 2π, z − 2 = r2 eıθ2 , 0 < θ2 < 2π, z − 3 = r3 eıθ3 , 2π < θ3 < 4π. f(0) = √ 6 eı(5π)/2 = ı √ 6, Solution 7.32 w = z2 − 2 (z + 2) 1/3 z + √ 2 1/3 z − √ 2 1/3 (z + 2)1/3 There are branch points at z = ± √ 2 and z = −2. If we walk around any one of the branch points once in the positive direction, the argument of w changes by 2π/3 and thus the value of the function changes by eı2π/3 . If we walk around all three branch points then the argument of w changes by 3 × 2π/3 = 2π. The value of the function is unchanged as eı2π = 1. Thus the branch cut on the real axis from −2 to √ 2 makes the function single-valued. Now we define a branch. Let z − √ 2 = a eıα , z + √ 2 = b eıβ , z + 2 = c eıγ . We constrain the angles as follows: On the positive real axis, α = β = γ. See Figure 7.55. αβ γ ac b Re(z) Im(z) Figure 7.55: A branch of z2 − 2 (z + 2) 1/3 . 215
  • 236. Now we determine w(2). w(2) = 2 − √ 2 1/3 2 + √ 2 1/3 (2 + 2)1/3 = 3 2 − √ 2 eı0 3 2 + √ 2 eı0 3 √ 4 eı0 = 3 √ 2 3 √ 4 = 2. Note that we didn’t have to choose the angle from each of the branch points as zero. Choosing any integer multiple of 2π would give us the same result. w(−3) = −3 − √ 2 1/3 −3 + √ 2 1/3 (−3 + 2)1/3 = 3 3 + √ 2 eıπ/3 3 3 − √ 2 eıπ/3 3 √ 1 eıπ/3 = 3 √ 7 eıπ = − 3 √ 7 The value of the function is w = 3 √ abc eı(α+β+γ)/3 . Consider the interval − √ 2 . . . √ 2 . As we approach the branch cut from above, the function has the value, w = 3 √ abc eıπ/3 = 3 √ 2 − x x + √ 2 (x + 2) eıπ/3 . As we approach the branch cut from below, the function has the value, w = 3 √ abc e−ıπ/3 = 3 √ 2 − x x + √ 2 (x + 2) e−ıπ/3 . Consider the interval −2 . . . − √ 2 . As we approach the branch cut from above, the function has the value, w = 3 √ abc eı2π/3 = 3 √ 2 − x −x − √ 2 (x + 2) eı2π/3 . As we approach the branch cut from below, the function has the value, w = 3 √ abc e−ı2π/3 = 3 √ 2 − x −x − √ 2 (x + 2) e−ı2π/3 . Solution 7.33 Arccos(x) is shown in Figure 7.56 for real variables in the range [−1 . . . 1]. -1 -0.5 0.5 1 0.5 1 1.5 2 2.5 3 Figure 7.56: The principal branch of the arc cosine, Arccos(x). 216
  • 237. First we write arccos(z) in terms of log(z). If cos(w) = z, then w = arccos(z). cos(w) = z eıw + e−ıw 2 = z (eıw ) 2 − 2z eıw +1 = 0 eıw = z + z2 − 1 1/2 w = −ı log z + z2 − 1 1/2 Thus we have arccos(z) = −ı log z + z2 − 1 1/2 . Since Arccos(0) = π 2 , we must find the branch such that −ı log 0 + 02 − 1 1/2 = 0 −ı log (−1)1/2 = 0. Since −ı log(ı) = −ı ı π 2 + ı2πn = π 2 + 2πn and −ı log(−ı) = −ı −ı π 2 + ı2πn = − π 2 + 2πn we must choose the branch of the square root such that (−1)1/2 = ı and the branch of the logarithm such that log(ı) = ıπ 2 . First we construct the branch of the square root. z2 − 1 1/2 = (z + 1)1/2 (z − 1)1/2 We see that there are branch points at z = −1 and z = 1. In particular we want the Arccos to be defined for z = x, x ∈ [−1 . . . 1]. Hence we introduce branch cuts on the lines −∞ < x ≤ −1 and 1 ≤ x < ∞. Define the local coordinates z + 1 = r eıθ , z − 1 = ρ eıφ . With the given branch cuts, the angles have the possible ranges {θ} = {. . . , (−π . . . π), (π . . . 3π), . . .}, {φ} = {. . . , (0 . . . 2π), (2π . . . 4π), . . .}. Now we choose ranges for θ and φ and see if we get the desired branch. If not, we choose a different range for one of the angles. First we choose the ranges θ ∈ (−π . . . π), φ ∈ (0 . . . 2π). If we substitute in z = 0 we get 02 − 1 1/2 = 1 eı0 1/2 (1 eıπ ) 1/2 = eı0 eıπ/2 = ı Thus we see that this choice of angles gives us the desired branch. Now we go back to the expression arccos(z) = −ı log z + z2 − 1 1/2 . 217
• 238. Figure 7.57: Branch cuts and angles for ( z^2 - 1 )^{1/2}. (The angles θ = -π, π and φ = 0, 2π label the edges of the cuts.)

We have already seen that there are branch points at z = -1 and z = 1 because of ( z^2 - 1 )^{1/2}. Now we must determine if the logarithm introduces additional branch points. The only possibilities for branch points are where the argument of the logarithm is zero.

    z + ( z^2 - 1 )^{1/2} = 0
    z^2 = z^2 - 1
    0 = -1

We see that the argument of the logarithm is nonzero and thus there are no additional branch points. Introduce the variable,

    w = z + ( z^2 - 1 )^{1/2}.

What is the image of the branch cuts in the w plane? We parameterize the branch cut connecting z = 1 and z = +∞ with z = r + 1, r ∈ [0 . . . ∞).

    w = r + 1 + ( (r + 1)^2 - 1 )^{1/2}
      = r + 1 ± sqrt( r(r + 2) )
      = r ( 1 ± sqrt(1 + 2/r) ) + 1

r ( 1 + sqrt(1 + 2/r) ) + 1 is the interval [1 . . . ∞); r ( 1 - sqrt(1 + 2/r) ) + 1 is the interval (0 . . . 1]. Thus we see that this branch cut is mapped to the interval (0 . . . ∞) in the w plane. Similarly, we could show that the branch cut (−∞ . . . −1] in the z plane is mapped to (−∞ . . . 0) in the w plane. In the w plane there is a branch cut along the real w axis from −∞ to ∞. This cut makes the logarithm single-valued. For the branch of the square root that we chose, all the points in the z plane get mapped to the upper half of the w plane. With the branch cuts we have introduced so far and the chosen branch of the square root we have

    arccos(0) = -ı log( 0 + (0^2 - 1)^{1/2} )
              = -ı log(ı)
              = -ı ( ı π/2 + ı2πn )
              = π/2 + 2πn

Choosing the n = 0 branch of the logarithm will give us Arccos(z). We see that we can write

    Arccos(z) = -ı Log( z + ( z^2 - 1 )^{1/2} ).

Solution 7.34
We consider the function f(z) = ( z^{1/2} - 1 )^{1/2}. First note that z^{1/2} has a branch point at z = 0. We place a branch cut on the negative real axis to make it single valued. f(z) will have a branch point where z^{1/2} - 1 = 0. This occurs at z = 1 on the branch of z^{1/2} on which 1^{1/2} = 1. (1^{1/2} has the value 1 on one branch of z^{1/2} and -1 on the other branch.) For this branch we introduce a branch cut connecting z = 1 with the point at infinity. (See Figure 7.58.)

Figure 7.58: Branch cuts for ( z^{1/2} - 1 )^{1/2}, on the sheet where 1^{1/2} = 1 and on the sheet where 1^{1/2} = -1.
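Returning to Solution 7.33, the boxed formula is easy to validate numerically. The sketch below (Python, standard cmath; wrap is our helper) builds the chosen branch of ( z^2 - 1 )^{1/2} and compares -ı Log( z + ( z^2 - 1 )^{1/2} ) with the library's principal arc cosine at a few sample points:

    import cmath, math

    def wrap(angle, low):
        return (angle - low) % (2 * math.pi) + low

    def sqrt_z2m1(z):
        # Branch of (z^2 - 1)^{1/2} with -pi < theta < pi (from -1) and
        # 0 < phi < 2 pi (from 1), as constructed in Solution 7.33.
        theta = wrap(cmath.phase(z + 1), -math.pi)
        phi = wrap(cmath.phase(z - 1), 0.0)
        return math.sqrt(abs(z + 1) * abs(z - 1)) * cmath.exp(1j * (theta + phi) / 2)

    def Arccos(z):
        return -1j * cmath.log(z + sqrt_z2m1(z))

    for z in [0, 0.5, 1j, -0.3 - 0.4j]:
        print(Arccos(z), cmath.acos(z))   # the two columns agree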
• 239. Solution 7.35
The distance between the end of rod a and the end of rod c is b. In the complex plane, these points are a e^{ıθ} and l + c e^{ıφ}, respectively. We write this out mathematically.

    | l + c e^{ıφ} - a e^{ıθ} | = b
    ( l + c e^{ıφ} - a e^{ıθ} ) ( l + c e^{-ıφ} - a e^{-ıθ} ) = b^2
    l^2 + cl e^{-ıφ} - al e^{-ıθ} + cl e^{ıφ} + c^2 - ac e^{ı(φ-θ)} - al e^{ıθ} - ac e^{ı(θ-φ)} + a^2 = b^2
    cl cos φ - ac cos(φ - θ) - al cos θ = (1/2) ( b^2 - a^2 - c^2 - l^2 )

This equation relates the two angular positions. One could differentiate the equation to relate the velocities and accelerations.

Solution 7.36
1. Let w = u + ıv. First we do the strip: |Re(z)| < 1. Consider the vertical line: z = c + ıy, y ∈ R. This line is mapped to

    w = 2(c + ıy)^2 = 2c^2 - 2y^2 + ı4cy
    u = 2c^2 - 2y^2,   v = 4cy

This is a parabola that opens to the left. For the case c = 0 it is the negative u axis. We can parametrize the curve in terms of v.

    u = 2c^2 - v^2/(8c^2),   v ∈ R

The boundaries of the region are both mapped to the parabolas:

    u = 2 - v^2/8,   v ∈ R.

The image of the mapping is

    { w = u + ıv : v ∈ R and u < 2 - v^2/8 }.

Note that the mapping is two-to-one. Now we do the strip 1 < Im(z) < 2. Consider the horizontal line: z = x + ıc, x ∈ R. This line is mapped to

    w = 2(x + ıc)^2 = 2x^2 - 2c^2 + ı4cx
    u = 2x^2 - 2c^2,   v = 4cx
• 240. This is a parabola that opens upward. We can parametrize the curve in terms of v.

    u = v^2/(8c^2) - 2c^2,   v ∈ R

The boundary Im(z) = 1 is mapped to u = v^2/8 - 2, v ∈ R. The boundary Im(z) = 2 is mapped to u = v^2/32 - 8, v ∈ R. The image of the mapping is

    { w = u + ıv : v ∈ R and v^2/32 - 8 < u < v^2/8 - 2 }.

2. We write the transformation as

    (z + 1)/(z - 1) = 1 + 2/(z - 1).

Thus we see that the transformation is the sequence:
(a) translation by -1
(b) inversion
(c) magnification by 2
(d) translation by 1

Consider the strip |Re(z)| < 1. The translation by -1 maps this to -2 < Re(z) < 0. Now we do the inversion. The left edge, Re(z) = 0, is mapped to itself. The right edge, Re(z) = -2, is mapped to the circle |z + 1/4| = 1/4. Thus the current image is the left half plane minus a circle:

    Re(z) < 0 and |z + 1/4| > 1/4.

The magnification by 2 yields

    Re(z) < 0 and |z + 1/2| > 1/2.

The final step is a translation by 1.

    Re(z) < 1 and |z - 1/2| > 1/2.

Now consider the strip 1 < Im(z) < 2. The translation by -1 does not change the domain. Now we do the inversion. The bottom edge, Im(z) = 1, is mapped to the circle |z + ı/2| = 1/2. The top edge, Im(z) = 2, is mapped to the circle |z + ı/4| = 1/4. Thus the current image is the region between two circles:

    |z + ı/2| < 1/2 and |z + ı/4| > 1/4.

The magnification by 2 yields

    |z + ı| < 1 and |z + ı/2| > 1/2.

The final step is a translation by 1.

    |z - 1 + ı| < 1 and |z - 1 + ı/2| > 1/2.
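The parabola formulas in part 1 are simple to confirm numerically. A minimal sketch, assuming Python with numpy:

    import numpy as np

    # Under w = 2 z^2 the vertical line z = c + i y maps to the parabola
    # u = 2 c^2 - v^2/(8 c^2); the horizontal line z = x + i c maps to
    # u = v^2/(8 c^2) - 2 c^2.
    c = 1.0
    y = np.linspace(-3, 3, 101)
    w = 2 * (c + 1j * y) ** 2
    u, v = w.real, w.imag
    print(np.max(np.abs(u - (2 * c**2 - v**2 / (8 * c**2)))))   # ~0

    x = np.linspace(-3, 3, 101)
    w = 2 * (x + 1j * c) ** 2
    u, v = w.real, w.imag
    print(np.max(np.abs(u - (v**2 / (8 * c**2) - 2 * c**2))))   # ~0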
• 241. Solution 7.37
1. There is a simple pole at z = -2. The function has a branch point at z = -1. Since this is the only branch point in the finite complex plane there is also a branch point at infinity. We can verify this with the substitution z = 1/ζ.

    f(1/ζ) = (1/ζ + 1)^{1/2} / (1/ζ + 2) = ζ^{1/2} (1 + ζ)^{1/2} / (1 + 2ζ)

Since f(1/ζ) has a branch point at ζ = 0, f(z) has a branch point at infinity.

2. cos z is an entire function with an essential singularity at infinity. Thus f(z) has singularities only where 1/(1 + z) has singularities. 1/(1 + z) has a first order pole at z = -1. It is analytic everywhere else, including the point at infinity. Thus we conclude that f(z) has an essential singularity at z = -1 and is analytic elsewhere. To explicitly show that z = -1 is an essential singularity, we can find the Laurent series expansion of f(z) about z = -1.

    cos( 1/(1 + z) ) = Σ_{n=0}^{∞} ( (-1)^n / (2n)! ) (z + 1)^{-2n}

3. 1 - e^z has simple zeros at z = ı2nπ, n ∈ Z. Thus f(z) has second order poles at those points. The point at infinity is a non-isolated singularity. To justify this: Note that

    f(z) = 1/( 1 - e^z )^2

has second order poles at z = ı2nπ, n ∈ Z. This means that f(1/ζ) has second order poles at ζ = 1/(ı2nπ), n ∈ Z, n ≠ 0. These second order poles get arbitrarily close to ζ = 0. There is no deleted neighborhood around ζ = 0 in which f(1/ζ) is analytic. Thus the point ζ = 0, (z = ∞), is a non-isolated singularity. There is no Laurent series expansion about the point ζ = 0, (z = ∞). The point at infinity is neither a branch point nor a removable singularity. It is not a pole either. If it were, there would be an n such that limz→∞ z^{-n} f(z) = const ≠ 0. Since z^{-n} f(z) has second order poles in every deleted neighborhood of infinity, the above limit does not exist. Thus we conclude that the point at infinity is an essential singularity.

Solution 7.38
We write sinh z in Cartesian form.

    w = sinh z = sinh x cos y + ı cosh x sin y = u + ıv

Consider the line segment x = c, y ∈ (0 . . . π). Its image is { sinh c cos y + ı cosh c sin y | y ∈ (0 . . . π) }. This is the parametric equation for the upper half of an ellipse. Also note that u and v satisfy the equation for an ellipse.

    u^2/sinh^2(c) + v^2/cosh^2(c) = 1

The ellipse starts at the point (sinh(c), 0), passes through the point (0, cosh(c)) and ends at (-sinh(c), 0). As c varies from zero to ∞ or from zero to -∞, the semi-ellipses cover the upper half w plane. Thus the mapping is 2-to-1. Consider the infinite line y = c, x ∈ (−∞ . . . ∞). Its image is { sinh x cos c + ı cosh x sin c | x ∈ (−∞ . . . ∞) }.
• 242. This is the parametric equation for the upper half of a hyperbola. Also note that u and v satisfy the equation for a hyperbola.

    -u^2/cos^2(c) + v^2/sin^2(c) = 1

As c varies from 0 to π/2 or from π/2 to π, the semi-hyperbolae cover the upper half w plane. Thus the mapping is 2-to-1.

We look for branch points of sinh^{-1}(w).

    w = sinh z
    w = (e^z - e^{-z})/2
    e^{2z} - 2w e^z - 1 = 0
    e^z = w + ( w^2 + 1 )^{1/2}
    z = log( w + (w - ı)^{1/2} (w + ı)^{1/2} )

There are branch points at w = ±ı. Since w + ( w^2 + 1 )^{1/2} is nonzero and finite in the finite complex plane, the logarithm does not introduce any branch points in the finite plane. Thus the only branch point in the upper half w plane is at w = ı. Any branch cut that connects w = ı with the boundary of Im(w) > 0 will separate the branches under the inverse mapping.

Consider the line y = π/4. The image under the mapping is the upper half of the hyperbola -2u^2 + 2v^2 = 1. Consider the segment x = 1. The image under the mapping is the upper half of the ellipse u^2/sinh^2(1) + v^2/cosh^2(1) = 1.
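Both image curves can be confirmed numerically; a minimal sketch with numpy:

    import numpy as np

    # The segment x = 1, 0 < y < pi maps under w = sinh z to the upper half
    # of the ellipse u^2/sinh^2(1) + v^2/cosh^2(1) = 1.
    y = np.linspace(0.01, np.pi - 0.01, 101)
    w = np.sinh(1.0 + 1j * y)
    u, v = w.real, w.imag
    print(np.max(np.abs(u**2 / np.sinh(1.0)**2 + v**2 / np.cosh(1.0)**2 - 1)))  # ~0

    # The line y = pi/4 maps to the upper half of -2 u^2 + 2 v^2 = 1.
    x = np.linspace(-3, 3, 101)
    w = np.sinh(x + 1j * np.pi / 4)
    u, v = w.real, w.imag
    print(np.max(np.abs(-2 * u**2 + 2 * v**2 - 1)))                             # ~0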
  • 243. Chapter 8 Analytic Functions Students need encouragement. So if a student gets an answer right, tell them it was a lucky guess. That way, they develop a good, lucky feeling.1 -Jack Handey 8.1 Complex Derivatives Functions of a Real Variable. The derivative of a function of a real variable is d dx f(x) = lim ∆x→0 f(x + ∆x) − f(x) ∆x . If the limit exists then the function is differentiable at the point x. Note that ∆x can approach zero from above or below. The limit cannot depend on the direction in which ∆x vanishes. Consider f(x) = |x|. The function is not differentiable at x = 0 since lim ∆x→0+ |0 + ∆x| − |0| ∆x = 1 and lim ∆x→0− |0 + ∆x| − |0| ∆x = −1. Analyticity. The complex derivative, (or simply derivative if the context is clear), is defined, d dz f(z) = lim ∆z→0 f(z + ∆z) − f(z) ∆z . The complex derivative exists if this limit exists. This means that the value of the limit is independent of the manner in which ∆z → 0. If the complex derivative exists at a point, then we say that the function is complex differentiable there. A function of a complex variable is analytic at a point z0 if the complex derivative exists in a neighborhood about that point. The function is analytic in an open set if it has a complex derivative at each point in that set. Note that complex differentiable has a different meaning than analytic. Analyticity refers to the behavior of a function on an open set. A function can be complex differentiable at isolated points, but the function would not be analytic at those points. Analytic functions are also called regular or holomorphic. If a function is analytic everywhere in the finite complex plane, it is called entire. 1Quote slightly modified. 223
• 244. Example 8.1.1 Consider z^n, n ∈ Z+. Is the function differentiable? Is it analytic? What is the value of the derivative?

We determine differentiability by trying to differentiate the function. We use the limit definition of differentiation. We will use Newton's binomial formula to expand (z + ∆z)^n.

    d/dz z^n = lim_{∆z→0} ( (z + ∆z)^n - z^n ) / ∆z
             = lim_{∆z→0} ( z^n + n z^{n-1} ∆z + (n(n-1)/2) z^{n-2} ∆z^2 + · · · + ∆z^n - z^n ) / ∆z
             = lim_{∆z→0} ( n z^{n-1} + (n(n-1)/2) z^{n-2} ∆z + · · · + ∆z^{n-1} )
             = n z^{n-1}

The derivative exists everywhere. The function is analytic in the whole complex plane so it is entire. The value of the derivative is d/dz z^n = n z^{n-1}.

Example 8.1.2 We will show that f(z) = z̄ is not differentiable. Consider its derivative.

    d/dz f(z) = lim_{∆z→0} ( f(z + ∆z) - f(z) ) / ∆z

    d/dz z̄ = lim_{∆z→0} ( conj(z + ∆z) - z̄ ) / ∆z = lim_{∆z→0} conj(∆z)/∆z

First we take ∆z = ∆x and evaluate the limit.

    lim_{∆x→0} ∆x/∆x = 1

Then we take ∆z = ı∆y.

    lim_{∆y→0} -ı∆y/(ı∆y) = -1

Since the limit depends on the way that ∆z → 0, the function is nowhere differentiable. Thus the function is not analytic.

Complex Derivatives in Terms of Plane Coordinates. Let z = ζ(ξ, ψ) be a system of coordinates in the complex plane. (For example, we could have Cartesian coordinates z = ζ(x, y) = x + ıy or polar coordinates z = ζ(r, θ) = r e^{ıθ}). Let f(z) = φ(ξ, ψ) be a complex-valued function. (For example we might have a function in the form φ(x, y) = u(x, y) + ıv(x, y) or φ(r, θ) = R(r, θ) e^{ıΘ(r,θ)}.) If f(z) = φ(ξ, ψ) is analytic, its complex derivative is equal to the derivative in any direction. In particular, it is equal to the derivatives in the coordinate directions.

    df/dz = lim_{∆ξ→0, ∆ψ=0} ( f(z + ∆z) - f(z) ) / ∆z = lim_{∆ξ→0} ( φ(ξ + ∆ξ, ψ) - φ(ξ, ψ) ) / ( (∂ζ/∂ξ) ∆ξ ) = (∂ζ/∂ξ)^{-1} ∂φ/∂ξ

    df/dz = lim_{∆ξ=0, ∆ψ→0} ( f(z + ∆z) - f(z) ) / ∆z = lim_{∆ψ→0} ( φ(ξ, ψ + ∆ψ) - φ(ξ, ψ) ) / ( (∂ζ/∂ψ) ∆ψ ) = (∂ζ/∂ψ)^{-1} ∂φ/∂ψ

Example 8.1.3 Consider the Cartesian coordinates z = x + ıy. We write the complex derivative as derivatives in the coordinate directions for f(z) = φ(x, y).

    df/dz = ( ∂(x + ıy)/∂x )^{-1} ∂φ/∂x = ∂φ/∂x
    df/dz = ( ∂(x + ıy)/∂y )^{-1} ∂φ/∂y = -ı ∂φ/∂y
  • 245. We write this in operator notation. d dz = ∂ ∂x = −ı ∂ ∂y . Example 8.1.4 In Example 8.1.1 we showed that zn , n ∈ Z+ , is an entire function and that d dz zn = nzn−1 . Now we corroborate this by calculating the complex derivative in the Cartesian coordinate directions. d dz zn = ∂ ∂x (x + ıy)n = n(x + ıy)n−1 = nzn−1 d dz zn = −ı ∂ ∂y (x + ıy)n = −ıın(x + ıy)n−1 = nzn−1 Complex Derivatives are Not the Same as Partial Derivatives Recall from calculus that f(x, y) = g(s, t) → ∂f ∂x = ∂g ∂s ∂s ∂x + ∂g ∂t ∂t ∂x Do not make the mistake of using a similar formula for functions of a complex variable. If f(z) = φ(x, y) then df dz = ∂φ ∂x ∂x ∂z + ∂φ ∂y ∂y ∂z . This is because the d dz operator means “The derivative in any direction in the complex plane.” Since f(z) is analytic, f (z) is the same no matter in which direction we take the derivative. Rules of Differentiation. For an analytic function defined in terms of z we can calculate the complex derivative using all the usual rules of differentiation that we know from calculus like the product rule, d dz f(z)g(z) = f (z)g(z) + f(z)g (z), or the chain rule, d dz f(g(z)) = f (g(z))g (z). This is because the complex derivative derives its properties from properties of limits, just like its real variable counterpart. 225
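The direction-dependence that kills differentiability of z̄ in Example 8.1.2 can be seen numerically. A short sketch (Python, standard library): the difference quotient of z̄ changes with the direction of ∆z, while that of the analytic function z^2 does not.

    # Difference quotients along different directions of dz.
    z0 = 1 + 1j
    h = 1e-6
    for dz in [h, 1j * h, (1 + 1j) * h]:
        q_conj = ((z0 + dz).conjugate() - z0.conjugate()) / dz
        q_sq = ((z0 + dz) ** 2 - z0 ** 2) / dz
        print(q_conj, q_sq)
    # q_conj comes out 1, -1, -1j (direction dependent);
    # q_sq is ~2 + 2j = 2 z0 in every direction.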
  • 246. Result 8.1.1 The complex derivative is, d dz f(z) = lim ∆z→0 f(z + ∆z) − f(z) ∆z . The complex derivative is defined if the limit exists and is independent of the manner in which ∆z → 0. A function is analytic at a point if the complex derivative exists in a neighborhood of that point. Let z = ζ(ξ, ψ) define coordinates in the complex plane. The complex deriva- tive in the coordinate directions is d dz = ∂ζ ∂ξ −1 ∂ ∂ξ = ∂ζ ∂ψ −1 ∂ ∂ψ . In Cartesian coordinates, this is d dz = ∂ ∂x = −ı ∂ ∂y . In polar coordinates, this is d dz = e−ıθ ∂ ∂r = − ı r e−ıθ ∂ ∂θ Since the complex derivative is defined with the same limit formula as real derivatives, all the rules from the calculus of functions of a real variable may be used to differentiate functions of a complex variable. Example 8.1.5 We have shown that zn , n ∈ Z+ , is an entire function. Now we corroborate that d dz zn = nzn−1 by calculating the complex derivative in the polar coordinate directions. d dz zn = e−ıθ ∂ ∂r rn eınθ = e−ıθ nrn−1 eınθ = nrn−1 eı(n−1)θ = nzn−1 d dz zn = − ı r e−ıθ ∂ ∂θ rn eınθ = − ı r e−ıθ rn ın eınθ = nrn−1 eı(n−1)θ = nzn−1 Analytic Functions can be Written in Terms of z. Consider an analytic function expressed in terms of x and y, φ(x, y). We can write φ as a function of z = x + ıy and z = x − ıy. f (z, z) = φ z + z 2 , z − z ı2 226
  • 247. We treat z and z as independent variables. We find the partial derivatives with respect to these variables. ∂ ∂z = ∂x ∂z ∂ ∂x + ∂y ∂z ∂ ∂y = 1 2 ∂ ∂x − ı ∂ ∂y ∂ ∂z = ∂x ∂z ∂ ∂x + ∂y ∂z ∂ ∂y = 1 2 ∂ ∂x + ı ∂ ∂y Since φ is analytic, the complex derivatives in the x and y directions are equal. ∂φ ∂x = −ı ∂φ ∂y The partial derivative of f (z, z) with respect to z is zero. ∂f ∂z = 1 2 ∂φ ∂x + ı ∂φ ∂y = 0 Thus f (z, z) has no functional dependence on z, it can be written as a function of z alone. If we were considering an analytic function expressed in polar coordinates φ(r, θ), then we could write it in Cartesian coordinates with the substitutions: r = x2 + y2, θ = arctan(x, y). Thus we could write φ(r, θ) as a function of z alone. Result 8.1.2 Any analytic function φ(x, y) or φ(r, θ) can be written as a function of z alone. 8.2 Cauchy-Riemann Equations If we know that a function is analytic, then we have a convenient way of determining its complex derivative. We just express the complex derivative in terms of the derivative in a coordinate direction. However, we don’t have a nice way of determining if a function is analytic. The definition of complex derivative in terms of a limit is cumbersome to work with. In this section we remedy this problem. A necessary condition for analyticity. Consider a function f(z) = φ(x, y). If f(z) is analytic, the complex derivative is equal to the derivatives in the coordinate directions. We equate the deriva- tives in the x and y directions to obtain the Cauchy-Riemann equations in Cartesian coordinates. φx = −ıφy (8.1) This equation is a necessary condition for the analyticity of f(z). Let φ(x, y) = u(x, y) + ıv(x, y) where u and v are real-valued functions. We equate the real and imaginary parts of Equation 8.1 to obtain another form for the Cauchy-Riemann equations in Cartesian coordinates. ux = vy, uy = −vx. Note that this is a necessary and not a sufficient condition for analyticity of f(z). That is, u and v may satisfy the Cauchy-Riemann equations but f(z) may not be analytic. At this point, Cauchy-Riemann equations give us an easy test for determining if a function is not analytic. Example 8.2.1 In Example 8.1.2 we showed that z is not analytic using the definition of complex differentiation. Now we obtain the same result using the Cauchy-Riemann equations. z = x − ıy ux = 1, vy = −1 We see that the first Cauchy-Riemann equation is not satisfied; the function is not analytic at any point. 227
  • 248. A sufficient condition for analyticity. A sufficient condition for f(z) = φ(x, y) to be analytic at a point z0 = (x0, y0) is that the partial derivatives of φ(x, y) exist and are continuous in some neighborhood of z0 and satisfy the Cauchy-Riemann equations there. If the partial derivatives of φ exist and are continuous then φ(x + ∆x, y + ∆y) = φ(x, y) + ∆xφx(x, y) + ∆yφy(x, y) + o(∆x) + o(∆y). Here the notation o(∆x) means “terms smaller than ∆x”. We calculate the derivative of f(z). f (z) = lim ∆z→0 f(z + ∆z) − f(z) ∆z = lim ∆x,∆y→0 φ(x + ∆x, y + ∆y) − φ(x, y) ∆x + ı∆y = lim ∆x,∆y→0 φ(x, y) + ∆xφx(x, y) + ∆yφy(x, y) + o(∆x) + o(∆y) − φ(x, y) ∆x + ı∆y = lim ∆x,∆y→0 ∆xφx(x, y) + ∆yφy(x, y) + o(∆x) + o(∆y) ∆x + ı∆y Here we use the Cauchy-Riemann equations. = lim ∆x,∆y→0 (∆x + ı∆y)φx(x, y) ∆x + ı∆y + lim ∆x,∆y→0 o(∆x) + o(∆y) ∆x + ı∆y = φx(x, y) Thus we see that the derivative is well defined. Cauchy-Riemann Equations in General Coordinates Let z = ζ(ξ, ψ) be a system of coordi- nates in the complex plane. Let φ(ξ, ψ) be a function which we write in terms of these coordinates, A necessary condition for analyticity of φ(ξ, ψ) is that the complex derivatives in the coordinate directions exist and are equal. Equating the derivatives in the ξ and ψ directions gives us the Cauchy-Riemann equations. ∂ζ ∂ξ −1 ∂φ ∂ξ = ∂ζ ∂ψ −1 ∂φ ∂ψ We could separate this into two equations by equating the real and imaginary parts or the modulus and argument. 228
  • 249. Result 8.2.1 A necessary condition for analyticity of φ(ξ, ψ), where z = ζ(ξ, ψ), at z = z0 is that the Cauchy-Riemann equations are satisfied in a neighborhood of z = z0. ∂ζ ∂ξ −1 ∂φ ∂ξ = ∂ζ ∂ψ −1 ∂φ ∂ψ . (We could equate the real and imaginary parts or the modulus and argument of this to obtain two equations.) A sufficient condition for analyticity of f(z) is that the Cauchy-Riemann equations hold and the first partial derivatives of φ exist and are continuous in a neighborhood of z = z0. Below are the Cauchy-Riemann equations for various forms of f(z). f(z) = φ(x, y), φx = −ıφy f(z) = u(x, y) + ıv(x, y), ux = vy, uy = −vx f(z) = φ(r, θ), φr = − ı r φθ f(z) = u(r, θ) + ıv(r, θ), ur = 1 r vθ, uθ = −rvr f(z) = R(r, θ) eıΘ(r,θ) , Rr = R r Θθ, 1 r Rθ = −RΘr f(z) = R(x, y) eıΘ(x,y) , Rx = RΘy, Ry = −RΘx Example 8.2.2 Consider the Cauchy-Riemann equations for f(z) = u(r, θ) + ıv(r, θ). From Exer- cise 8.3 we know that the complex derivative in the polar coordinate directions is d dz = e−ıθ ∂ ∂r = − ı r e−ıθ ∂ ∂θ . From Result 8.2.1 we have the equation, e−ıθ ∂ ∂r [u + ıv] = − ı r e−ıθ ∂ ∂θ [u + ıv]. We multiply by eıθ and equate the real and imaginary components to obtain the Cauchy-Riemann equations. ur = 1 r vθ, uθ = −rvr Example 8.2.3 Consider the exponential function. ez = φ(x, y) = ex (cos y + ı sin(y)) We use the Cauchy-Riemann equations to show that the function is entire. φx = −ıφy ex (cos y + ı sin(y)) = −ı ex (− sin y + ı cos(y)) ex (cos y + ı sin(y)) = ex (cos y + ı sin(y)) Since the function satisfies the Cauchy-Riemann equations and the first partial derivatives are con- tinuous everywhere in the finite complex plane, the exponential function is entire. 229
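This verification mechanizes nicely. A minimal sketch, assuming sympy is available, checks the Cartesian Cauchy-Riemann equations for the exponential symbolically:

    import sympy as sp

    x, y = sp.symbols('x y', real=True)
    u = sp.exp(x) * sp.cos(y)   # real part of e^z
    v = sp.exp(x) * sp.sin(y)   # imaginary part of e^z

    # Cauchy-Riemann: u_x = v_y and u_y = -v_x.
    print(sp.simplify(sp.diff(u, x) - sp.diff(v, y)))   # 0
    print(sp.simplify(sp.diff(u, y) + sp.diff(v, x)))   # 0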
  • 250. Now we find the value of the complex derivative. d dz ez = ∂φ ∂x = ex (cos y + ı sin(y)) = ez The differentiability of the exponential function implies the differentiability of the trigonometric functions, as they can be written in terms of the exponential. In Exercise 8.13 you can show that the logarithm log z is differentiable for z = 0. This implies the differentiability of zα and the inverse trigonometric functions as they can be written in terms of the logarithm. Example 8.2.4 We compute the derivative of zz . d dz (zz ) = d dz ez log z = (1 + log z) ez log z = (1 + log z)zz = zz + zz log z 8.3 Harmonic Functions A function u is harmonic if its second partial derivatives exist, are continuous and satisfy Laplace’s equation ∆u = 0.2 (In Cartesian coordinates the Laplacian is ∆u ≡ uxx + uyy.) If f(z) = u + ıv is an analytic function then u and v are harmonic functions. To see why this is so, we start with the Cauchy-Riemann equations. ux = vy, uy = −vx We differentiate the first equation with respect to x and the second with respect to y. (We as- sume that u and v are twice continuously differentiable. We will see later that they are infinitely differentiable.) uxx = vxy, uyy = −vyx Thus we see that u is harmonic. ∆u ≡ uxx + uyy = vxy − vyx = 0 One can use the same method to show that ∆v = 0. If u is harmonic on some simply-connected domain, then there exists a harmonic function v such that f(z) = u + ıv is analytic in the domain. v is called the harmonic conjugate of u. The harmonic conjugate is unique up to an additive constant. To demonstrate this, let w be another harmonic conjugate of u. Both the pair u and v and the pair u and w satisfy the Cauchy-Riemann equations. ux = vy, uy = −vx, ux = wy, uy = −wx We take the difference of these equations. vx − wx = 0, vy − wy = 0 On a simply connected domain, the difference between v and w is thus a constant. To prove the existence of the harmonic conjugate, we first write v as an integral. v(x, y) = v (x0, y0) + (x,y) (x0,y0) vx dx + vy dy 2 The capital Greek letter ∆ is used to denote the Laplacian, like ∆u(x, y), and differentials, like ∆x. 230
  • 251. On a simply connected domain, the integral is path independent and defines a unique v in terms of vx and vy. We use the Cauchy-Riemann equations to write v in terms of ux and uy. v(x, y) = v (x0, y0) + (x,y) (x0,y0) −uy dx + ux dy Changing the starting point (x0, y0) changes v by an additive constant. The harmonic conjugate of u to within an additive constant is v(x, y) = −uy dx + ux dy. This proves the existence3 of the harmonic conjugate. This is not the formula one would use to construct the harmonic conjugate of a u. One accomplishes this by solving the Cauchy-Riemann equations. Result 8.3.1 If f(z) = u+ıv is an analytic function then u and v are harmonic functions. That is, the Laplacians of u and v vanish ∆u = ∆v = 0. The Laplacian in Cartesian and polar coordinates is ∆ = ∂2 ∂x2 + ∂2 ∂y2 , ∆ = 1 r ∂ ∂r r ∂ ∂r + 1 r2 ∂2 ∂θ2 . Given a harmonic function u in a simply connected domain, there exists a harmonic function v, (unique up to an additive constant), such that f(z) = u + ıv is analytic in the domain. One can construct v by solving the Cauchy- Riemann equations. Example 8.3.1 Is x2 the real part of an analytic function? The Laplacian of x2 is ∆[x2 ] = 2 + 0 x2 is not harmonic and thus is not the real part of an analytic function. Example 8.3.2 Show that u = e−x (x sin y − y cos y) is harmonic. ∂u ∂x = e−x sin y − ex (x sin y − y cos y) = e−x sin y − x e−x sin y + y e−x cos y ∂2 u ∂x2 = − e−x sin y − e−x sin y + x e−x sin y − y e−x cos y = −2 e−x sin y + x e−x sin y − y e−x cos y ∂u ∂y = e−x (x cos y − cos y + y sin y) ∂2 u ∂y2 = e−x (−x sin y + sin y + y cos y + sin y) = −x e−x sin y + 2 e−x sin y + y e−x cos y Thus we see that ∂2 u ∂x2 + ∂2 u ∂y2 = 0 and u is harmonic. 3 A mathematician returns to his office to find that a cigarette tossed in the trash has started a small fire. Being calm and a quick thinker he notes that there is a fire extinguisher by the window. He then closes the door and walks away because “the solution exists.” 231
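A sympy sketch of the harmonic check for Example 8.3.2's u, and of the conjugate construction used in the examples that follow (integrate v_y = u_x with respect to y, then fix the x-dependent constant of integration from v_x = -u_y):

    import sympy as sp

    x, y = sp.symbols('x y', real=True)
    u = sp.exp(-x) * (x * sp.sin(y) - y * sp.cos(y))

    # u is harmonic: its Laplacian vanishes.
    print(sp.simplify(sp.diff(u, x, 2) + sp.diff(u, y, 2)))    # 0

    # Harmonic conjugate: v_y = u_x determines v up to a(x); then
    # a'(x) = -u_y - v_x must depend on x alone (here it is 0).
    v = sp.integrate(sp.diff(u, x), y)
    print(sp.simplify(-sp.diff(u, y) - sp.diff(v, x)))          # 0
    print(sp.simplify(v))                                       # the conjugate, up to a constant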
  • 252. Example 8.3.3 Consider u = cos x cosh y. This function is harmonic. uxx + uyy = − cos x cosh y + cos x cosh y = 0 Thus it is the real part of an analytic function, f(z). We find the harmonic conjugate, v, with the Cauchy-Riemann equations. We integrate the first Cauchy-Riemann equation. vy = ux = − sin x cosh y v = − sin x sinh y + a(x) Here a(x) is a constant of integration. We substitute this into the second Cauchy-Riemann equation to determine a(x). vx = −uy − cos x sinh y + a (x) = − cos x sinh y a (x) = 0 a(x) = c Here c is a real constant. Thus the harmonic conjugate is v = − sin x sinh y + c. The analytic function is f(z) = cos x cosh y − ı sin x sinh y + ıc We recognize this as f(z) = cos z + ıc. Example 8.3.4 Here we consider an example that demonstrates the need for a simply connected domain. Consider u = Log r in the multiply connected domain, r > 0. u is harmonic. ∆ Log r = 1 r ∂ ∂r r ∂ ∂r Log r + 1 r2 ∂2 ∂θ2 Log r = 0 We solve the Cauchy-Riemann equations to try to find the harmonic conjugate. ur = 1 r vθ, uθ = −rvr vr = 0, vθ = 1 v = θ + c We are able to solve for v, but it is multi-valued. Any single-valued branch of θ that we choose will not be continuous on the domain. Thus there is no harmonic conjugate of u = Log r for the domain r > 0. If we had instead considered the simply-connected domain r > 0, | arg(z)| < π then the harmonic conjugate would be v = Arg(z) + c. The corresponding analytic function is f(z) = Log z + ıc. Example 8.3.5 Consider u = x3 − 3xy2 + x. This function is harmonic. uxx + uyy = 6x − 6x = 0 Thus it is the real part of an analytic function, f(z). We find the harmonic conjugate, v, with the Cauchy-Riemann equations. We integrate the first Cauchy-Riemann equation. vy = ux = 3x2 − 3y2 + 1 v = 3x2 y − y3 + y + a(x) 232
  • 253. Here a(x) is a constant of integration. We substitute this into the second Cauchy-Riemann equation to determine a(x). vx = −uy 6xy + a (x) = 6xy a (x) = 0 a(x) = c Here c is a real constant. The harmonic conjugate is v = 3x2 y − y3 + y + c. The analytic function is f(z) = x3 − 3xy2 + x + ı 3x2 y − y3 + y + ıc f(z) = x3 + ı3x2 y − 3xy2 − ıy2 + x + ıy + ıc f(z) = z3 + z + ıc 8.4 Singularities Any point at which a function is not analytic is called a singularity. In this section we will classify the different flavors of singularities. Result 8.4.1 Singularities. If a function is not analytic at a point, then that point is a singular point or a singularity of the function. 8.4.1 Categorization of Singularities Branch Points. If f(z) has a branch point at z0, then we cannot define a branch of f(z) that is continuous in a neighborhood of z0. Continuity is necessary for analyticity. Thus all branch points are singularities. Since function are discontinuous across branch cuts, all points on a branch cut are singularities. Example 8.4.1 Consider f(z) = z3/2 . The origin and infinity are branch points and are thus singularities of f(z). We choose the branch g(z) = √ z3. All the points on the negative real axis, including the origin, are singularities of g(z). Removable Singularities. Example 8.4.2 Consider f(z) = sin z z . This function is undefined at z = 0 because f(0) is the indeterminate form 0/0. f(z) is analytic everywhere in the finite complex plane except z = 0. Note that the limit as z → 0 of f(z) exists. lim z→0 sin z z = lim z→0 cos z 1 = 1 If we were to fill in the hole in the definition of f(z), we could make it differentiable at z = 0. Consider the function g(z) = sin z z z = 0, 1 z = 0. 233
• 254. We calculate the derivative at z = 0 to verify that g(z) is analytic there.

    g'(0) = lim_{z→0} ( g(z) - g(0) ) / z
          = lim_{z→0} ( sin(z)/z - 1 ) / z
          = lim_{z→0} ( sin(z) - z ) / z^2
          = lim_{z→0} ( cos(z) - 1 ) / (2z)
          = lim_{z→0} -sin(z)/2
          = 0

We call the point at z = 0 a removable singularity of sin(z)/z because we can remove the singularity by defining the value of the function to be its limiting value there.

Consider a function f(z) that is analytic in a deleted neighborhood of z = z0. If f(z) is not analytic at z0, but limz→z0 f(z) exists, then the function has a removable singularity at z0. The function

    g(z) = f(z) for z ≠ z0,   g(z0) = lim_{z→z0} f(z)

is analytic in a neighborhood of z = z0. We show this by calculating g'(z0).

    g'(z0) = lim_{z→z0} ( g(z0) - g(z) ) / ( z0 - z ) = lim_{z→z0} ( -g'(z) ) / ( -1 ) = lim_{z→z0} f'(z)

This limit exists because f(z) is analytic in a deleted neighborhood of z = z0.

Poles. If a function f(z) behaves like c/(z - z0)^n near z = z0 then the function has an nth order pole at that point. More mathematically we say

    lim_{z→z0} (z - z0)^n f(z) = c ≠ 0.

We require the constant c to be nonzero so we know that it is not a pole of lower order. We can denote a removable singularity as a pole of order zero. Another way to say that a function has an nth order pole is that f(z) is not analytic at z = z0, but (z - z0)^n f(z) is either analytic or has a removable singularity at that point.

Example 8.4.3 1/sin(z^2) has a second order pole at z = 0 and first order poles at z = (nπ)^{1/2}, n ∈ Z±.

    lim_{z→0} z^2/sin(z^2) = lim_{z→0} 2z/( 2z cos(z^2) )
                           = lim_{z→0} 2/( 2 cos(z^2) - 4z^2 sin(z^2) )
                           = 1

    lim_{z→(nπ)^{1/2}} ( z - (nπ)^{1/2} )/sin(z^2) = lim_{z→(nπ)^{1/2}} 1/( 2z cos(z^2) )
                                                   = 1/( 2(nπ)^{1/2} (-1)^n )
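The limit criterion lends itself to a quick numerical probe of pole order: multiply by (z - z0)^n and watch which n gives a finite nonzero value. A minimal sketch (Python, standard cmath):

    import cmath

    def f(z):
        return 1 / cmath.sin(z * z)

    z = 1e-3   # a point close to z0 = 0
    for n in range(4):
        print(n, abs(z ** n * f(z)))
    # n = 0, 1 give huge values, n = 2 gives ~1 and n = 3 gives ~0:
    # the pole at z = 0 is second order, matching Example 8.4.3.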
Example 8.4.4 e^{1/z} is singular at z = 0. The function is not analytic as lim_{z→0} e^{1/z} does not exist. We check if the function has a pole of order n at z = 0.

lim_{z→0} z^n e^{1/z} = lim_{ζ→∞} e^ζ/ζ^n = lim_{ζ→∞} e^ζ/n! = ∞

Since the limit does not exist for any value of n, the singularity is not a pole. We could say that e^{1/z} is more singular than any power of 1/z.

Essential Singularities. If a function f(z) is singular at z = z0, but the singularity is not a branch point, or a pole, then the point is an essential singularity of the function.

The point at infinity. We can consider the point at infinity z → ∞ by making the change of variables z = 1/ζ and considering ζ → 0. If f(1/ζ) is analytic at ζ = 0 then f(z) is analytic at infinity. We have encountered branch points at infinity before (Section 7.9). Assume that f(z) is not analytic at infinity. If lim_{z→∞} f(z) exists then f(z) has a removable singularity at infinity. If lim_{z→∞} f(z)/z^n = c ≠ 0 then f(z) has an nth order pole at infinity.

Result 8.4.2 Categorization of Singularities. Consider a function f(z) that has a singularity at the point z = z0. Singularities come in four flavors:

Branch Points. Branch points of multi-valued functions are singularities.

Removable Singularities. If lim_{z→z0} f(z) exists, then z0 is a removable singularity. It is thus named because the singularity could be removed and thus the function made analytic at z0 by redefining the value of f(z0).

Poles. If lim_{z→z0} (z − z0)^n f(z) = const ≠ 0 then f(z) has an nth order pole at z0.

Essential Singularities. Instead of defining what an essential singularity is, we say what it is not. If z0 is neither a branch point, a removable singularity nor a pole, it is an essential singularity.

A pole may be called a non-essential singularity. This is because multiplying the function by an integral power of z − z0 will make the function analytic. Then an essential singularity is a point z0 such that there does not exist an n such that (z − z0)^n f(z) is analytic there.

8.4.2 Isolated and Non-Isolated Singularities

Result 8.4.3 Isolated and Non-Isolated Singularities. Suppose f(z) has a singularity at z0. If there exists a deleted neighborhood of z0 containing no singularities then the point is an isolated singularity. Otherwise it is a non-isolated singularity.

If you don't like the abstract notion of a deleted neighborhood, you can work with a deleted circular neighborhood. However, this will require the introduction of more math symbols and a Greek letter. z = z0 is an isolated singularity if there exists a δ > 0 such that there are no singularities in 0 < |z − z0| < δ.
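Returning to Example 8.4.4, a short numeric sketch (plain Python with the standard cmath module; the function name is ours) makes it vivid that no power of z tames e^{1/z}: along the positive real axis z^n e^{1/z} blows up for every n, while along the negative real axis it tends to zero, so the limit cannot exist.

```python
import cmath

def zn_exp_inv(z, n):
    """Compute z**n * exp(1/z)."""
    return z**n * cmath.exp(1 / z)

for n in (1, 2, 3):
    for r in (0.1, 0.05, 0.02):
        pos = abs(zn_exp_inv(complex(r, 0), n))    # along the positive real axis
        neg = abs(zn_exp_inv(complex(-r, 0), n))   # along the negative real axis
        print(n, r, pos, neg)
# For each n the first column of magnitudes grows without bound as r shrinks,
# while the second tends to zero: z^n e^(1/z) has no limit at z = 0 for any n.
```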
Example 8.4.5 We classify the singularities of f(z) = z/sin z. z has a simple zero at z = 0. sin z has simple zeros at z = nπ. Thus f(z) has a removable singularity at z = 0 and has first order poles at z = nπ for n ∈ Z±. We can corroborate this by taking limits.

lim_{z→0} f(z) = lim_{z→0} z/sin z = lim_{z→0} 1/cos z = 1

lim_{z→nπ} (z − nπ)f(z) = lim_{z→nπ} (z − nπ)z/sin z = lim_{z→nπ} (2z − nπ)/cos z = nπ/(−1)^n ≠ 0

Now to examine the behavior at infinity. There is no neighborhood of infinity that does not contain first order poles of f(z). (Another way of saying this is that there does not exist an R such that there are no singularities in R < |z| < ∞.) Thus z = ∞ is a non-isolated singularity. We could also determine this by setting ζ = 1/z and examining the point ζ = 0. f(1/ζ) has first order poles at ζ = 1/(nπ) for n ∈ Z \ {0}. These first order poles come arbitrarily close to the point ζ = 0. There is no deleted neighborhood of ζ = 0 which does not contain singularities. Thus ζ = 0, and hence z = ∞, is a non-isolated singularity.

The point at infinity is an essential singularity. It is certainly not a branch point or a removable singularity. It is not a pole, because there is no n such that lim_{z→∞} z^{−n} f(z) = const ≠ 0. z^{−n} f(z) has first order poles in any neighborhood of infinity, so this limit does not exist.

8.5 Application: Potential Flow

Example 8.5.1 We consider two-dimensional uniform flow in a given direction. The flow corresponds to the complex potential

Φ(z) = v0 e^{−ıθ0} z,

where v0 is the fluid speed and θ0 is the direction. We find the velocity potential φ and stream function ψ.

Φ(z) = φ + ıψ
φ = v0(cos(θ0)x + sin(θ0)y), ψ = v0(−sin(θ0)x + cos(θ0)y)

These are plotted in Figure 8.1 for θ0 = π/6.

Figure 8.1: The velocity potential φ and stream function ψ for Φ(z) = v0 e^{−ıθ0} z.
Next we find the stream lines, ψ = c.

v0(−sin(θ0)x + cos(θ0)y) = c
y = c/(v0 cos(θ0)) + tan(θ0)x

Figure 8.2 shows how the streamlines go straight along the θ0 direction.

Figure 8.2: Streamlines for ψ = v0(−sin(θ0)x + cos(θ0)y).

Next we find the velocity field.

v = ∇φ
v = φ_x x̂ + φ_y ŷ
v = v0 cos(θ0) x̂ + v0 sin(θ0) ŷ

The velocity field is shown in Figure 8.3.

Figure 8.3: Velocity field and velocity direction field for φ = v0(cos(θ0)x + sin(θ0)y).

Example 8.5.2 Steady, incompressible, inviscid, irrotational flow is governed by the Laplace equation. We consider flow around an infinite cylinder of radius a. Because the flow does not vary along the axis of the cylinder, this is a two-dimensional problem. The flow corresponds to the complex potential

Φ(z) = v0(z + a²/z).
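Before working through the cylinder flow by hand, note that both potentials can be handled uniformly: φ and ψ are the real and imaginary parts of Φ, and since u − ıv = Φ′(z), the velocity vector is the conjugate of Φ′(z). A hedged sketch in plain Python (the helper names are ours, not the text's):

```python
import cmath

def flow_data(Phi, dPhi, z):
    """Velocity potential, stream function, and velocity at the point z
    for a complex potential Phi with derivative dPhi."""
    w = Phi(z)
    vel = dPhi(z).conjugate()   # u - i*v = Phi'(z), so velocity = conj(Phi')
    return w.real, w.imag, (vel.real, vel.imag)

v0, theta0, a = 1.0, cmath.pi / 6, 1.0

uniform = lambda z: v0 * cmath.exp(-1j * theta0) * z        # Example 8.5.1
d_uniform = lambda z: v0 * cmath.exp(-1j * theta0)
cylinder = lambda z: v0 * (z + a**2 / z)                    # Example 8.5.2
d_cylinder = lambda z: v0 * (1 - a**2 / z**2)

print(flow_data(uniform, d_uniform, 1 + 1j))
# On the cylinder r = a the stream function vanishes: the surface is a streamline.
print(flow_data(cylinder, d_cylinder, a * cmath.exp(1j * 0.7))[1])  # ~0
```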
We find the velocity potential φ and stream function ψ.

Φ(z) = φ + ıψ
φ = v0(r + a²/r) cos θ, ψ = v0(r − a²/r) sin θ

These are plotted in Figure 8.4.

Figure 8.4: The velocity potential φ and stream function ψ for Φ(z) = v0(z + a²/z).

Next we find the stream lines, ψ = c.

v0(r − a²/r) sin θ = c
r = (c ± √(c² + 4v0²a² sin²θ)) / (2v0 sin θ)

Figure 8.5 shows how the streamlines go around the cylinder.

Figure 8.5: Streamlines for ψ = v0(r − a²/r) sin θ.

Next we find the velocity field.

v = ∇φ
v = φ_r r̂ + (φ_θ/r) θ̂
v = v0(1 − a²/r²) cos θ r̂ − v0(1 + a²/r²) sin θ θ̂

The velocity field is shown in Figure 8.6.
  • 259. Figure 8.6: Velocity field and velocity direction field for φ = v0 r + a2 r cos θ. 8.6 Exercises Complex Derivatives Exercise 8.1 Consider two functions f(z) and g(z) analytic at z0 with f(z0) = g(z0) = 0 and g (z0) = 0. 1. Use the definition of the complex derivative to justify L’Hospital’s rule: lim z→z0 f(z) g(z) = f (z0) g (z0) 2. Evaluate the limits lim z→ı 1 + z2 2 + 2z6 , lim z→ıπ sinh(z) ez +1 Hint, Solution Exercise 8.2 Show that if f(z) is analytic and φ(x, y) = f(z) is twice continuously differentiable then f (z) is analytic. Hint, Solution Exercise 8.3 Find the complex derivative in the coordinate directions for f(z) = φ(r, θ). Hint, Solution Exercise 8.4 Show that the following functions are nowhere analytic by checking where the derivative with respect to z exists. 1. sin x cosh y − ı cos x sinh y 2. x2 − y2 + x + ı(2xy − y) Hint, Solution Exercise 8.5 f(z) is analytic for all z, (|z| < ∞). f (z1 + z2) = f (z1) f (z2) for all z1 and z2. (This is known as a functional equation). Prove that f(z) = exp (f (0)z). Hint, Solution 239
Cauchy-Riemann Equations

Exercise 8.6
If f(z) is analytic in a domain and has a constant real part, a constant imaginary part, or a constant modulus, show that f(z) is constant.
Hint, Solution

Exercise 8.7
Show that the function

f(z) = { e^{−z^{−4}} for z ≠ 0, 0 for z = 0 }

satisfies the Cauchy-Riemann equations everywhere, including at z = 0, but f(z) is not analytic at the origin.
Hint, Solution

Exercise 8.8
Find the Cauchy-Riemann equations for the following forms.
1. f(z) = R(r, θ) e^{ıΘ(r,θ)}
2. f(z) = R(x, y) e^{ıΘ(x,y)}
Hint, Solution

Exercise 8.9
1. Show that e^{z̄} is not analytic.
2. f(z) is an analytic function of z. Show that \overline{f(z̄)} is also an analytic function of z.
Hint, Solution

Exercise 8.10
1. Determine all points z = x + ıy where the following functions are differentiable with respect to z:
(a) x³ + y³
(b) (x − 1)/((x − 1)² + y²) − ı y/((x − 1)² + y²)
2. Determine all points z where these functions are analytic.
3. Determine which of the following functions v(x, y) are the imaginary part of an analytic function u(x, y) + ıv(x, y). For those that are, compute the real part u(x, y) and re-express the answer as an explicit function of z = x + ıy:
(a) x² − y²
(b) 3x²y
Hint, Solution

Exercise 8.11
Let

f(z) = { (x^{4/3} y^{5/3} + ı x^{5/3} y^{4/3})/(x² + y²) for z ≠ 0, 0 for z = 0. }

Show that the Cauchy-Riemann equations hold at z = 0, but that f is not differentiable at this point.
Hint, Solution
Exercise 8.12
Consider the complex function

f(z) = u + ıv = { (x³(1 + ı) − y³(1 − ı))/(x² + y²) for z ≠ 0, 0 for z = 0. }

Show that the partial derivatives of u and v with respect to x and y exist at z = 0 and that ux = vy and uy = −vx there: the Cauchy-Riemann equations are satisfied at z = 0. On the other hand, show that

lim_{z→0} f(z)/z

does not exist, that is, f is not complex-differentiable at z = 0.
Hint, Solution

Exercise 8.13
Show that the logarithm log z is differentiable for z ≠ 0. Find the derivative of the logarithm.
Hint, Solution

Exercise 8.14
Show that the Cauchy-Riemann equations for the analytic function f(z) = u(r, θ) + ıv(r, θ) are

ur = vθ/r, uθ = −r vr.

Hint, Solution

Exercise 8.15
w = u + ıv is an analytic function of z. φ(x, y) is an arbitrary smooth function of x and y. When expressed in terms of u and v, φ(x, y) = Φ(u, v). Show that (for w′ ≠ 0)

∂Φ/∂u − ı ∂Φ/∂v = (dw/dz)^{−1} (∂φ/∂x − ı ∂φ/∂y).

Deduce

∂²Φ/∂u² + ∂²Φ/∂v² = |dw/dz|^{−2} (∂²φ/∂x² + ∂²φ/∂y²).

Hint, Solution

Exercise 8.16
Show that the functions defined by f(z) = log|z| + ı arg(z) and f(z) = √|z| e^{ı arg(z)/2} are analytic in the sector |z| > 0, |arg(z)| < π. What are the corresponding derivatives df/dz?
Hint, Solution

Exercise 8.17
Show that the following functions are harmonic. For each one of them find its harmonic conjugate and form the corresponding holomorphic function.
1. u(x, y) = x Log(r) − y arctan(x, y) (r ≠ 0)
2. u(x, y) = arg(z) (|arg(z)| < π, r ≠ 0)
3. u(x, y) = rⁿ cos(nθ)
4. u(x, y) = y/r² (r ≠ 0)
Hint, Solution
  • 262. Exercise 8.18 1. Use the Cauchy-Riemann equations to determine where the function f(z) = (x − y)2 + ı2(x + y) is differentiable and where it is analytic. 2. Evaluate the derivative of f(z) = ex2 −y2 (cos(2xy) + ı sin(2xy)) and describe the domain of analyticity. Hint, Solution Exercise 8.19 Consider the function f(z) = u + ıv with real and imaginary parts expressed in terms of either x and y or r and θ. 1. Show that the Cauchy-Riemann equations ux = vy, uy = −vx are satisfied and these partial derivatives are continuous at a point z if and only if the polar form of the Cauchy-Riemann equations ur = 1 r vθ, 1 r uθ = −vr is satisfied and these partial derivatives are continuous there. 2. Show that it is easy to verify that Log z is analytic for r > 0 and −π < θ < π using the polar form of the Cauchy-Riemann equations and that the value of the derivative is easily obtained from a polar differentiation formula. 3. Show that in polar coordinates, Laplace’s equation becomes φrr + 1 r φr + 1 r2 φθθ = 0. Hint, Solution Exercise 8.20 Determine which of the following functions are the real parts of an analytic function. 1. u(x, y) = x3 − y3 2. u(x, y) = sinh x cos y + x 3. u(r, θ) = rn cos(nθ) and find f(z) for those that are. Hint, Solution Exercise 8.21 Consider steady, incompressible, inviscid, irrotational flow governed by the Laplace equation. De- termine the form of the velocity potential and stream function contours for the complex potentials 1. Φ(z) = φ(x, y) + ıψ(x, y) = log z + ı log z 2. Φ(z) = log(z − 1) + log(z + 1) 242
  • 263. Plot and describe the features of the flows you are considering. Hint, Solution Exercise 8.22 1. Classify all the singularities (removable, poles, isolated essential, branch points, non-isolated essential) of the following functions in the extended complex plane (a) z z2 + 1 (b) 1 sin z (c) log 1 + z2 (d) z sin(1/z) (e) tan−1 (z) z sinh2 (πz) 2. Construct functions that have the following zeros or singularities: (a) a simple zero at z = ı and an isolated essential singularity at z = 1. (b) a removable singularity at z = 3, a pole of order 6 at z = −ı and an essential singularity at z∞. Hint, Solution 243
  • 264. 8.7 Hints Complex Derivatives Hint 8.1 Hint 8.2 Start with the Cauchy-Riemann equation and then differentiate with respect to x. Hint 8.3 Read Example 8.1.3 and use Result 8.1.1. Hint 8.4 Use Result 8.1.1. Hint 8.5 Take the logarithm of the equation to get a linear equation. Cauchy-Riemann Equations Hint 8.6 Hint 8.7 Hint 8.8 For the first part use the result of Exercise 8.3. Hint 8.9 Use the Cauchy-Riemann equations. Hint 8.10 Hint 8.11 To evaluate ux(0, 0), etc. use the definition of differentiation. Try to find f (z) with the definition of complex differentiation. Consider ∆z = ∆r eıθ . Hint 8.12 To evaluate ux(0, 0), etc. use the definition of differentiation. Try to find f (z) with the definition of complex differentiation. Consider ∆z = ∆r eıθ . Hint 8.13 Hint 8.14 Hint 8.15 Hint 8.16 244
  • 265. Hint 8.17 Hint 8.18 Hint 8.19 Hint 8.20 Hint 8.21 Hint 8.22 CONTINUE 245
8.8 Solutions

Complex Derivatives

Solution 8.1
1. We consider L'Hospital's rule.

lim_{z→z0} f(z)/g(z) = f′(z0)/g′(z0)

We start with the right side and show that it is equal to the left side. First we apply the definition of complex differentiation.

f′(z0)/g′(z0) = [lim_{ε→0} (f(z0 + ε) − f(z0))/ε] / [lim_{δ→0} (g(z0 + δ) − g(z0))/δ]
             = [lim_{ε→0} f(z0 + ε)/ε] / [lim_{δ→0} g(z0 + δ)/δ]

(Here we used f(z0) = g(z0) = 0.) Since both of the limits exist, we may take the limits with ε = δ.

f′(z0)/g′(z0) = lim_{ε→0} f(z0 + ε)/g(z0 + ε)
f′(z0)/g′(z0) = lim_{z→z0} f(z)/g(z)

This proves L'Hospital's rule.

2.

lim_{z→ı} (1 + z²)/(2 + 2z⁶) = [2z/(12z⁵)]_{z=ı} = 1/6

lim_{z→ıπ} sinh(z)/(e^z + 1) = [cosh(z)/e^z]_{z=ıπ} = 1

Solution 8.2
We start with the Cauchy-Riemann equation and then differentiate with respect to x.

φx = −ıφy
φxx = −ıφyx

We interchange the order of differentiation.

(φx)x = −ı(φx)y
(f′)x = −ı(f′)y

Since f′(z) satisfies the Cauchy-Riemann equation and its partial derivatives exist and are continuous, it is analytic.

Solution 8.3
We calculate the complex derivative in the coordinate directions.

df/dz = (∂(r e^{ıθ})/∂r)^{−1} ∂φ/∂r = e^{−ıθ} ∂φ/∂r,
df/dz = (∂(r e^{ıθ})/∂θ)^{−1} ∂φ/∂θ = −(ı/r) e^{−ıθ} ∂φ/∂θ.

We can write this in operator notation.

d/dz = e^{−ıθ} ∂/∂r = −(ı/r) e^{−ıθ} ∂/∂θ
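The two limits in Solution 8.1 are quick to confirm symbolically. A small check (assuming SymPy is available; the helper name is ours, and this is our verification rather than part of the text) that applies one step of L'Hospital's rule exactly as in the solution:

```python
import sympy as sp

z = sp.symbols('z')

def lhospital(num, den, z0):
    """One application of L'Hospital's rule at a 0/0 point."""
    assert num.subs(z, z0).simplify() == 0 and den.subs(z, z0).simplify() == 0
    return sp.simplify(sp.diff(num, z).subs(z, z0) / sp.diff(den, z).subs(z, z0))

print(lhospital(1 + z**2, 2 + 2 * z**6, sp.I))               # 1/6
print(lhospital(sp.sinh(z), sp.exp(z) + 1, sp.I * sp.pi))    # 1
```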
  • 267. Solution 8.4 1. Consider f(x, y) = sin x cosh y − ı cos x sinh y. The derivatives in the x and y directions are ∂f ∂x = cos x cosh y + ı sin x sinh y −ı ∂f ∂y = − cos x cosh y − ı sin x sinh y These derivatives exist and are everywhere continuous. We equate the expressions to get a set of two equations. cos x cosh y = − cos x cosh y, sin x sinh y = − sin x sinh y cos x cosh y = 0, sin x sinh y = 0 x = π 2 + nπ and (x = mπ or y = 0) The function may be differentiable only at the points x = π 2 + nπ, y = 0. Thus the function is nowhere analytic. 2. Consider f(x, y) = x2 − y2 + x + ı(2xy − y). The derivatives in the x and y directions are ∂f ∂x = 2x + 1 + ı2y −ı ∂f ∂y = ı2y + 2x − 1 These derivatives exist and are everywhere continuous. We equate the expressions to get a set of two equations. 2x + 1 = 2x − 1, 2y = 2y. Since this set of equations has no solutions, there are no points at which the function is differentiable. The function is nowhere analytic. Solution 8.5 f (z1 + z2) = f (z1) f (z2) log (f (z1 + z2)) = log (f (z1)) + log (f (z2)) We define g(z) = log(f(z)). g (z1 + z2) = g (z1) + g (z2) This is a linear equation which has exactly the solutions: g(z) = cz. Thus f(z) has the solutions: f(z) = ecz , where c is any complex constant. We can write this constant in terms of f (0). We differentiate the original equation with respect to z1 and then substitute z1 = 0. f (z1 + z2) = f (z1) f (z2) f (z2) = f (0)f (z2) f (z) = f (0)f(z) 247
  • 268. We substitute in the form of the solution. c ecz = f (0) ecz c = f (0) Thus we see that f(z) = ef (0)z . Cauchy-Riemann Equations Solution 8.6 Constant Real Part. First assume that f(z) has constant real part. We solve the Cauchy-Riemann equations to determine the imaginary part. ux = vy, uy = −vx vx = 0, vy = 0 We integrate the first equation to obtain v = a + g(y) where a is a constant and g(y) is an arbitrary function. Then we substitute this into the second equation to determine g(y). g (y) = 0 g(y) = b We see that the imaginary part of f(z) is a constant and conclude that f(z) is constant. Constant Imaginary Part. Next assume that f(z) has constant imaginary part. We solve the Cauchy-Riemann equations to determine the real part. ux = vy, uy = −vx ux = 0, uy = 0 We integrate the first equation to obtain u = a + g(y) where a is a constant and g(y) is an arbitrary function. Then we substitute this into the second equation to determine g(y). g (y) = 0 g(y) = b We see that the real part of f(z) is a constant and conclude that f(z) is constant. Constant Modulus. Finally assume that f(z) has constant modulus. |f(z)| = constant u2 + v2 = constant u2 + v2 = constant We differentiate this equation with respect to x and y. 2uux + 2vvx = 0, 2uuy + 2vvy = 0 ux vx uy vy u v = 0 This system has non-trivial solutions for u and v only if the matrix is non-singular. (The trivial solution u = v = 0 is the constant function f(z) = 0.) We set the determinant of the matrix to zero. uxvy − uyvx = 0 248
We use the Cauchy-Riemann equations to write this in terms of ux and uy.

ux² + uy² = 0
ux = uy = 0

Since its partial derivatives vanish, u is a constant. From the Cauchy-Riemann equations we see that the partial derivatives of v vanish as well, so it is constant. We conclude that f(z) is a constant.

Constant Modulus. Here is another method for the constant modulus case. We solve the Cauchy-Riemann equations in polar form to determine the argument of f(z) = R(x, y) e^{ıΘ(x,y)}. Since the function has constant modulus R, its partial derivatives vanish.

Rx = RΘy, Ry = −RΘx
RΘy = 0, RΘx = 0

The equations are satisfied for R = 0. For this case, f(z) = 0. We consider nonzero R.

Θy = 0, Θx = 0

We see that the argument of f(z) is a constant and conclude that f(z) is constant.

Solution 8.7
First we verify that the Cauchy-Riemann equations are satisfied for z ≠ 0. Note that the form fx = −ıfy will be far more convenient than the form ux = vy, uy = −vx for this problem.

fx = 4(x + ıy)^{−5} e^{−(x+ıy)^{−4}}
−ıfy = −ı 4(x + ıy)^{−5} ı e^{−(x+ıy)^{−4}} = 4(x + ıy)^{−5} e^{−(x+ıy)^{−4}}

The Cauchy-Riemann equations are satisfied for z ≠ 0.

Now we consider the point z = 0.

fx(0, 0) = lim_{∆x→0} (f(∆x, 0) − f(0, 0))/∆x = lim_{∆x→0} e^{−∆x^{−4}}/∆x = 0

−ıfy(0, 0) = −ı lim_{∆y→0} (f(0, ∆y) − f(0, 0))/∆y = −ı lim_{∆y→0} e^{−∆y^{−4}}/∆y = 0

The Cauchy-Riemann equations are satisfied at z = 0.

f(z) is not analytic at the point z = 0. We show this by calculating the derivative.

f′(0) = lim_{∆z→0} (f(∆z) − f(0))/∆z = lim_{∆z→0} f(∆z)/∆z
Let ∆z = ∆r e^{ıθ}, that is, we approach the origin at an angle of θ.

f′(0) = lim_{∆r→0} f(∆r e^{ıθ})/(∆r e^{ıθ}) = lim_{∆r→0} e^{−∆r^{−4} e^{−ı4θ}}/(∆r e^{ıθ})

For most values of θ the limit does not exist. Consider θ = π/4.

f′(0) = lim_{∆r→0} e^{∆r^{−4}}/(∆r e^{ıπ/4}) = ∞

Because the limit does not exist, the function is not differentiable at z = 0. Recall that satisfying the Cauchy-Riemann equations is a necessary, but not a sufficient condition for differentiability.

Solution 8.8
1. We find the Cauchy-Riemann equations for f(z) = R(r, θ) e^{ıΘ(r,θ)}. From Exercise 8.3 we know that the complex derivative in the polar coordinate directions is

d/dz = e^{−ıθ} ∂/∂r = −(ı/r) e^{−ıθ} ∂/∂θ.

We equate the derivatives in the two directions.

e^{−ıθ} ∂/∂r (R e^{ıΘ}) = −(ı/r) e^{−ıθ} ∂/∂θ (R e^{ıΘ})
(Rr + ıRΘr) e^{ıΘ} = −(ı/r)(Rθ + ıRΘθ) e^{ıΘ}

We divide by e^{ıΘ} and equate the real and imaginary components to obtain the Cauchy-Riemann equations.

Rr = (R/r) Θθ, (1/r) Rθ = −R Θr

2. We find the Cauchy-Riemann equations for f(z) = R(x, y) e^{ıΘ(x,y)}. We equate the derivatives in the x and y directions.

∂/∂x (R e^{ıΘ}) = −ı ∂/∂y (R e^{ıΘ})
(Rx + ıRΘx) e^{ıΘ} = −ı(Ry + ıRΘy) e^{ıΘ}

We divide by e^{ıΘ} and equate the real and imaginary components to obtain the Cauchy-Riemann equations.

Rx = RΘy, Ry = −RΘx

Solution 8.9
1. A necessary condition for analyticity in an open set is that the Cauchy-Riemann equations are satisfied in that set. We write e^{z̄} in Cartesian form.

e^{z̄} = e^{x−ıy} = e^x cos y − ı e^x sin y.
Now we determine where u = e^x cos y and v = −e^x sin y satisfy the Cauchy-Riemann equations.

ux = vy, uy = −vx
e^x cos y = −e^x cos y, −e^x sin y = e^x sin y
cos y = 0, sin y = 0
y = π/2 + πm, y = πn

Thus we see that the Cauchy-Riemann equations are not satisfied anywhere. e^{z̄} is nowhere analytic.

2. Since f(z) = u + ıv is analytic, u and v satisfy the Cauchy-Riemann equations and their first partial derivatives are continuous.

\overline{f(z̄)} = \overline{u(x, −y) + ıv(x, −y)} = u(x, −y) − ıv(x, −y)

We define \overline{f(z̄)} ≡ µ(x, y) + ıν(x, y) = u(x, −y) − ıv(x, −y). Now we see if µ and ν satisfy the Cauchy-Riemann equations.

µx = νy, µy = −νx
(u(x, −y))x = (−v(x, −y))y, (u(x, −y))y = −(−v(x, −y))x
ux(x, −y) = vy(x, −y), −uy(x, −y) = vx(x, −y)
ux = vy, uy = −vx

Thus we see that the Cauchy-Riemann equations for µ and ν are satisfied if and only if the Cauchy-Riemann equations for u and v are satisfied. The continuity of the first partial derivatives of u and v implies the same of µ and ν. Thus \overline{f(z̄)} is analytic.

Solution 8.10
1. The necessary condition for a function f(z) = u + ıv to be differentiable at a point is that the Cauchy-Riemann equations hold and the first partial derivatives of u and v are continuous at that point.

(a) f(z) = x³ + y³ + ı0

The Cauchy-Riemann equations are

ux = vy and uy = −vx
3x² = 0 and 3y² = 0
x = 0 and y = 0

The first partial derivatives are continuous. Thus we see that the function is differentiable only at the point z = 0.

(b) f(z) = (x − 1)/((x − 1)² + y²) − ı y/((x − 1)² + y²)

The Cauchy-Riemann equations are

ux = vy and uy = −vx
(−(x − 1)² + y²)/((x − 1)² + y²)² = (−(x − 1)² + y²)/((x − 1)² + y²)² and −2(x − 1)y/((x − 1)² + y²)² = −2(x − 1)y/((x − 1)² + y²)²

The Cauchy-Riemann equations are each identities. The first partial derivatives are continuous everywhere except the point x = 1, y = 0. Thus the function is differentiable everywhere except z = 1.
  • 272. 2. (a) The function is not differentiable in any open set. Thus the function is nowhere analytic. (b) The function is differentiable everywhere except z = 1. Thus the function is analytic everywhere except z = 1. 3. (a) First we determine if the function is harmonic. v = x2 − y2 vxx + vyy = 0 2 − 2 = 0 The function is harmonic in the complex plane and this is the imaginary part of some analytic function. By inspection, we see that this function is ız2 + c = −2xy + c + ı x2 − y2 , where c is a real constant. We can also find the function by solving the Cauchy-Riemann equations. ux = vy and uy = −vx ux = −2y and uy = −2x We integrate the first equation. u = −2xy + g(y) Here g(y) is a function of integration. We substitute this into the second Cauchy-Riemann equation to determine g(y). uy = −2x −2x + g (y) = −2x g (y) = 0 g(y) = c u = −2xy + c f(z) = −2xy + c + ı x2 − y2 f(z) = ız2 + c (b) First we determine if the function is harmonic. v = 3x2 y vxx + vyy = 6y The function is not harmonic. It is not the imaginary part of some analytic function. Solution 8.11 We write the real and imaginary parts of f(z) = u + ıv. u = x4/3 y5/3 x2+y2 for z = 0, 0 for z = 0. , v = x5/3 y4/3 x2+y2 for z = 0, 0 for z = 0. The Cauchy-Riemann equations are ux = vy, uy = −vx. 252
  • 273. We calculate the partial derivatives of u and v at the point x = y = 0 using the definition of differentiation. ux(0, 0) = lim ∆x→0 u(∆x, 0) − u(0, 0) ∆x = lim ∆x→0 0 − 0 ∆x = 0 vx(0, 0) = lim ∆x→0 v(∆x, 0) − v(0, 0) ∆x = lim ∆x→0 0 − 0 ∆x = 0 uy(0, 0) = lim ∆y→0 u(0, ∆y) − u(0, 0) ∆y = lim ∆y→0 0 − 0 ∆y = 0 vy(0, 0) = lim ∆y→0 v(0, ∆y) − v(0, 0) ∆y = lim ∆y→0 0 − 0 ∆y = 0 Since ux(0, 0) = uy(0, 0) = vx(0, 0) = vy(0, 0) = 0 the Cauchy-Riemann equations are satisfied. f(z) is not analytic at the point z = 0. We show this by calculating the derivative there. f (0) = lim ∆z→0 f(∆z) − f(0) ∆z = lim ∆z→0 f(∆z) ∆z We let ∆z = ∆r eıθ , that is, we approach the origin at an angle of θ. Then x = ∆r cos θ and y = ∆r sin θ. f (0) = lim ∆r→0 f ∆r eıθ ∆r eıθ = lim ∆r→0 ∆r4/3 cos4/3 θ∆r5/3 sin5/3 θ+ı∆r5/3 cos5/3 θ∆r4/3 sin4/3 θ ∆r2 ∆r eıθ = lim ∆r→0 cos4/3 θ sin5/3 θ + ı cos5/3 θ sin4/3 θ eıθ The value of the limit depends on θ and is not a constant. Thus this limit does not exist. The function is not differentiable at z = 0. Solution 8.12 u = x3 −y3 x2+y2 for z = 0, 0 for z = 0. , v = x3 +y3 x2+y2 for z = 0, 0 for z = 0. The Cauchy-Riemann equations are ux = vy, uy = −vx. The partial derivatives of u and v at the point x = y = 0 are, ux(0, 0) = lim ∆x→0 u(∆x, 0) − u(0, 0) ∆x = lim ∆x→0 ∆x − 0 ∆x = 1, vx(0, 0) = lim ∆x→0 v(∆x, 0) − v(0, 0) ∆x = lim ∆x→0 ∆x − 0 ∆x = 1, 253
uy(0, 0) = lim_{∆y→0} (u(0, ∆y) − u(0, 0))/∆y = lim_{∆y→0} (−∆y − 0)/∆y = −1,

vy(0, 0) = lim_{∆y→0} (v(0, ∆y) − v(0, 0))/∆y = lim_{∆y→0} (∆y − 0)/∆y = 1.

We see that the Cauchy-Riemann equations are satisfied at x = y = 0.

f(z) is not analytic at the point z = 0. We show this by calculating the derivative.

f′(0) = lim_{∆z→0} (f(∆z) − f(0))/∆z = lim_{∆z→0} f(∆z)/∆z

Let ∆z = ∆r e^{ıθ}, that is, we approach the origin at an angle of θ. Then x = ∆r cos θ and y = ∆r sin θ.

f′(0) = lim_{∆r→0} f(∆r e^{ıθ})/(∆r e^{ıθ})
      = lim_{∆r→0} [((1 + ı)∆r³ cos³θ − (1 − ı)∆r³ sin³θ)/∆r²] / (∆r e^{ıθ})
      = lim_{∆r→0} ((1 + ı) cos³θ − (1 − ı) sin³θ)/e^{ıθ}

The value of the limit depends on θ and is not a constant. Thus this limit does not exist. The function is not differentiable at z = 0. Recall that satisfying the Cauchy-Riemann equations is a necessary, but not a sufficient condition for differentiability.

Solution 8.13
We show that the logarithm log z = φ(r, θ) = Log r + ıθ satisfies the Cauchy-Riemann equations.

φr = −(ı/r) φθ
1/r = −(ı/r) ı
1/r = 1/r

Since the logarithm satisfies the Cauchy-Riemann equations and the first partial derivatives are continuous for z ≠ 0, the logarithm is analytic for z ≠ 0. Now we compute the derivative.

d/dz log z = e^{−ıθ} ∂/∂r (Log r + ıθ) = e^{−ıθ} (1/r) = 1/z

Solution 8.14
The complex derivative in the coordinate directions is

d/dz = e^{−ıθ} ∂/∂r = −(ı/r) e^{−ıθ} ∂/∂θ.
  • 275. We substitute f = u + ıv into this identity to obtain the Cauchy-Riemann equation in polar coor- dinates. e−ıθ ∂f ∂r = − ı r e−ıθ ∂f ∂θ ∂f ∂r = − ı r ∂f ∂θ ur + ıvr = − ı r (uθ + ıvθ) We equate the real and imaginary parts. ur = 1 r vθ, vr = − 1 r uθ ur = 1 r vθ, uθ = −rvr Solution 8.15 Since w is analytic, u and v satisfy the Cauchy-Riemann equations, ux = vy and uy = −vx. Using the chain rule we can write the derivatives with respect to x and y in terms of u and v. ∂ ∂x = ux ∂ ∂u + vx ∂ ∂v ∂ ∂y = uy ∂ ∂u + vy ∂ ∂v Now we examine φx − ıφy. φx − ıφy = uxΦu + vxΦv − ı (uyΦu + vyΦv) φx − ıφy = (ux − ıuy) Φu + (vx − ıvy) Φv φx − ıφy = (ux − ıuy) Φu − ı (vy + ıvx) Φv We use the Cauchy-Riemann equations to write uy and vy in terms of ux and vx. φx − ıφy = (ux + ıvx) Φu − ı (ux + ıvx) Φv Recall that w = ux + ıvx = vy − ıuy. φx − ıφy = dw dz (Φu − ıΦv) Thus we see that, ∂Φ ∂u − ı ∂Φ ∂v = dw dz −1 ∂φ ∂x − ı ∂φ ∂y . We write this in operator notation. ∂ ∂u − ı ∂ ∂v = dw dz −1 ∂ ∂x − ı ∂ ∂y The complex conjugate of this relation is ∂ ∂u + ı ∂ ∂v = dw dz −1 ∂ ∂x + ı ∂ ∂y 255
  • 276. Now we apply both these operators to Φ = φ. ∂ ∂u + ı ∂ ∂v ∂ ∂u − ı ∂ ∂v Φ = dw dz −1 ∂ ∂x + ı ∂ ∂y dw dz −1 ∂ ∂x − ı ∂ ∂y φ ∂2 ∂u2 + ı ∂2 ∂u∂v − ı ∂2 ∂v∂u + ∂2 ∂v2 Φ = dw dz −1 ∂ ∂x + ı ∂ ∂y dw dz −1 ∂ ∂x − ı ∂ ∂y + dw dz −1 ∂ ∂x + ı ∂ ∂y ∂ ∂x − ı ∂ ∂y φ (w ) −1 is an analytic function. Recall that for analytic functions f, f = fx = −ıfy. So that fx + ıfy = 0. ∂2 Φ ∂u2 + ∂2 Φ ∂v2 = dw dz −1 dw dz −1 ∂2 ∂x2 + ∂2 ∂y2 φ ∂2 Φ ∂u2 + ∂2 Φ ∂v2 = dw dz −2 ∂2 φ ∂x2 + ∂2 φ ∂y2 Solution 8.16 1. We consider f(z) = log |z| + ı arg(z) = log r + ıθ. The Cauchy-Riemann equations in polar coordinates are ur = 1 r vθ, uθ = −rvr. We calculate the derivatives. ur = 1 r , 1 r vθ = 1 r uθ = 0, −rvr = 0 Since the Cauchy-Riemann equations are satisfied and the partial derivatives are continuous, f(z) is analytic in |z| > 0, | arg(z)| < π. The complex derivative in terms of polar coordinates is d dz = e−ıθ ∂ ∂r = − ı r e−ıθ ∂ ∂θ . We use this to differentiate f(z). df dz = e−ıθ ∂ ∂r [log r + ıθ] = e−ıθ 1 r = 1 z 2. Next we consider f(z) = |z| eı arg(z)/2 = √ r eıθ/2 . The Cauchy-Riemann equations for polar coordinates and the polar form f(z) = R(r, θ) eıΘ(r,θ) are Rr = R r Θθ, 1 r Rθ = −RΘr. We calculate the derivatives for R = √ r, Θ = θ/2. Rr = 1 2 √ r , R r Θθ = 1 2 √ r 1 r Rθ = 0, −RΘr = 0 256
  • 277. Since the Cauchy-Riemann equations are satisfied and the partial derivatives are continuous, f(z) is analytic in |z| > 0, | arg(z)| < π. The complex derivative in terms of polar coordinates is d dz = e−ıθ ∂ ∂r = − ı r e−ıθ ∂ ∂θ . We use this to differentiate f(z). df dz = e−ıθ ∂ ∂r [ √ r eıθ/2 ] = 1 2 eıθ/2 √ r = 1 2 √ z Solution 8.17 1. We consider the function u = x Log r − y arctan(x, y) = r cos θ Log r − rθ sin θ We compute the Laplacian. ∆u = 1 r ∂ ∂r r ∂u ∂r + 1 r2 ∂2 u ∂θ2 = 1 r ∂ ∂r (cos θ(r + r Log r) − θ sin θ) + 1 r2 (r(θ sin θ − 2 cos θ) − r cos θ Log r) = 1 r (2 cos θ + cos θ Log r − θ sin θ) + 1 r (θ sin θ − 2 cos θ − cos θ Log r) = 0 The function u is harmonic. We find the harmonic conjugate v by solving the Cauchy-Riemann equations. vr = − 1 r uθ, vθ = rur vr = sin θ(1 + Log r) + θ cos θ, vθ = r (cos θ(1 + Log r) − θ sin θ) We integrate the first equation with respect to r to determine v to within the constant of integration g(θ). v = r(sin θ Log r + θ cos θ) + g(θ) We differentiate this expression with respect to θ. vθ = r (cos θ(1 + Log r) − θ sin θ) + g (θ) We compare this to the second Cauchy-Riemann equation to see that g (θ) = 0. Thus g(θ) = c. We have determined the harmonic conjugate. v = r(sin θ Log r + θ cos θ) + c The corresponding analytic function is f(z) = r cos θ Log r − rθ sin θ + ı(r sin θ Log r + rθ cos θ + c). On the positive real axis, (θ = 0), the function has the value f(z = r) = r Log r + ıc. We use analytic continuation to determine the function in the complex plane. f(z) = z log z + ıc 257
  • 278. 2. We consider the function u = Arg(z) = θ. We compute the Laplacian. ∆u = 1 r ∂ ∂r r ∂u ∂r + 1 r2 ∂2 u ∂θ2 = 0 The function u is harmonic. We find the harmonic conjugate v by solving the Cauchy-Riemann equations. vr = − 1 r uθ, vθ = rur vr = − 1 r , vθ = 0 We integrate the first equation with respect to r to determine v to within the constant of integration g(θ). v = − Log r + g(θ) We differentiate this expression with respect to θ. vθ = g (θ) We compare this to the second Cauchy-Riemann equation to see that g (θ) = 0. Thus g(θ) = c. We have determined the harmonic conjugate. v = − Log r + c The corresponding analytic function is f(z) = θ − ı Log r + ıc On the positive real axis, (θ = 0), the function has the value f(z = r) = −ı Log r + ıc We use analytic continuation to determine the function in the complex plane. f(z) = −ı log z + ıc 3. We consider the function u = rn cos(nθ) We compute the Laplacian. ∆u = 1 r ∂ ∂r r ∂u ∂r + 1 r2 ∂2 u ∂θ2 = 1 r ∂ ∂r (nrn cos(nθ)) − n2 rn−2 cos(nθ) = n2 rn−2 cos(nθ) − n2 rn−2 cos(nθ) = 0 The function u is harmonic. We find the harmonic conjugate v by solving the Cauchy-Riemann equations. vr = − 1 r uθ, vθ = rur vr = nrn−1 sin(nθ), vθ = nrn cos(nθ) 258
  • 279. We integrate the first equation with respect to r to determine v to within the constant of integration g(θ). v = rn sin(nθ) + g(θ) We differentiate this expression with respect to θ. vθ = nrn cos(nθ) + g (θ) We compare this to the second Cauchy-Riemann equation to see that g (θ) = 0. Thus g(θ) = c. We have determined the harmonic conjugate. v = rn sin(nθ) + c The corresponding analytic function is f(z) = rn cos(nθ) + ırn sin(nθ) + ıc On the positive real axis, (θ = 0), the function has the value f(z = r) = rn + ıc We use analytic continuation to determine the function in the complex plane. f(z) = zn 4. We consider the function u = y r2 = sin θ r We compute the Laplacian. ∆u = 1 r ∂ ∂r r ∂u ∂r + 1 r2 ∂2 u ∂θ2 = 1 r ∂ ∂r − sin θ r − sin θ r3 = sin θ r3 − sin θ r3 = 0 The function u is harmonic. We find the harmonic conjugate v by solving the Cauchy-Riemann equations. vr = − 1 r uθ, vθ = rur vr = − cos θ r2 , vθ = − sin θ r We integrate the first equation with respect to r to determine v to within the constant of integration g(θ). v = cos θ r + g(θ) We differentiate this expression with respect to θ. vθ = − sin θ r + g (θ) We compare this to the second Cauchy-Riemann equation to see that g (θ) = 0. Thus g(θ) = c. We have determined the harmonic conjugate. v = cos θ r + c 259
  • 280. The corresponding analytic function is f(z) = sin θ r + ı cos θ r + ıc On the positive real axis, (θ = 0), the function has the value f(z = r) = ı r + ıc. We use analytic continuation to determine the function in the complex plane. f(z) = ı z + ıc Solution 8.18 1. We calculate the first partial derivatives of u = (x − y)2 and v = 2(x + y). ux = 2(x − y) uy = 2(y − x) vx = 2 vy = 2 We substitute these expressions into the Cauchy-Riemann equations. ux = vy, uy = −vx 2(x − y) = 2, 2(y − x) = −2 x − y = 1, y − x = −1 y = x − 1 Since the Cauchy-Riemann equation are satisfied along the line y = x − 1 and the partial derivatives are continuous, the function f(z) is differentiable there. Since the function is not differentiable in a neighborhood of any point, it is nowhere analytic. 2. We calculate the first partial derivatives of u and v. ux = 2 ex2 −y2 (x cos(2xy) − y sin(2xy)) uy = −2 ex2 −y2 (y cos(2xy) + x sin(2xy)) vx = 2 ex2 −y2 (y cos(2xy) + x sin(2xy)) vy = 2 ex2 −y2 (x cos(2xy) − y sin(2xy)) Since the Cauchy-Riemann equations, ux = vy and uy = −vx, are satisfied everywhere and the partial derivatives are continuous, f(z) is everywhere differentiable. Since f(z) is differentiable in a neighborhood of every point, it is analytic in the complex plane. (f(z) is entire.) Now to evaluate the derivative. The complex derivative is the derivative in any direction. We choose the x direction. f (z) = ux + ıvx f (z) = 2 ex2 −y2 (x cos(2xy) − y sin(2xy)) + ı2 ex2 −y2 (y cos(2xy) + x sin(2xy)) f (z) = 2 ex2 −y2 ((x + ıy) cos(2xy) + (−y + ıx) sin(2xy)) 260
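A symbolic check (SymPy; this is our verification, not the text's) that the Cartesian expression for f′(z) just obtained agrees with the closed form 2z e^{z²} derived in the next step:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
z = x + sp.I * y

# f'(z) = u_x + i v_x, computed in Cartesian form above.
fp_cartesian = 2 * sp.exp(x**2 - y**2) * ((x + sp.I * y) * sp.cos(2 * x * y)
                                          + (-y + sp.I * x) * sp.sin(2 * x * y))
# The closed form found by writing f(z) = e^(z^2).
fp_complex = 2 * z * sp.exp(z**2)

print(sp.simplify(sp.expand(fp_cartesian - fp_complex.expand(complex=True))))  # 0
```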
  • 281. Finding the derivative is easier if we first write f(z) in terms of the complex variable z and use complex differentiation. f(z) = ex2 −y2 (cos(2x, y) + ı sin(2xy)) f(z) = ex2 −y2 eı2xy f(z) = e(x+ıy)2 f(z) = ez2 f (z) = 2z ez2 Solution 8.19 1. Assume that the Cauchy-Riemann equations in Cartesian coordinates ux = vy, uy = −vx are satisfied and these partial derivatives are continuous at a point z. We write the derivatives in polar coordinates in terms of derivatives in Cartesian coordinates to verify the Cauchy- Riemann equations in polar coordinates. First we calculate the derivatives. x = r cos θ, y = r sin θ wr = ∂x ∂r wx + ∂y ∂r wy = cos θwx + sin θwy wθ = ∂x ∂θ wx + ∂y ∂θ wy = −r sin θwx + r cos θwy Then we verify the Cauchy-Riemann equations in polar coordinates. ur = cos θux + sin θuy = cos θvy − sin θvx = 1 r vθ 1 r uθ = − sin θux + cos θuy = − sin θvy − cos θvx = −vr This proves that the Cauchy-Riemann equations in Cartesian coordinates hold only if the Cauchy-Riemann equations in polar coordinates hold. (Given that the partial derivatives are continuous.) Next we prove the converse. Assume that the Cauchy-Riemann equations in polar coordinates ur = 1 r vθ, 1 r uθ = −vr are satisfied and these partial derivatives are continuous at a point z. We write the derivatives in Cartesian coordinates in terms of derivatives in polar coordinates to verify the Cauchy- Riemann equations in Cartesian coordinates. First we calculate the derivatives. r = x2 + y2, θ = arctan(x, y) wx = ∂r ∂x wr + ∂θ ∂x wθ = x r wr − y r2 wθ wy = ∂r ∂y wr + ∂θ ∂y wθ = y r wr + x r2 wθ 261
Then we verify the Cauchy-Riemann equations in Cartesian coordinates.

ux = (x/r) ur − (y/r²) uθ = (x/r²) vθ + (y/r) vr = vy
uy = (y/r) ur + (x/r²) uθ = (y/r²) vθ − (x/r) vr = −vx

This proves that the Cauchy-Riemann equations in polar coordinates hold only if the Cauchy-Riemann equations in Cartesian coordinates hold. We have demonstrated the equivalence of the two forms.

2. We verify that Log z is analytic for r > 0 and −π < θ < π using the polar form of the Cauchy-Riemann equations.

Log z = ln r + ıθ
ur = (1/r) vθ, (1/r) uθ = −vr
1/r = (1/r)·1, (1/r)·0 = −0

Since the Cauchy-Riemann equations are satisfied and the partial derivatives are continuous for r > 0, Log z is analytic there. We calculate the value of the derivative using the polar differentiation formulas.

d/dz Log z = e^{−ıθ} ∂/∂r (ln r + ıθ) = e^{−ıθ} (1/r) = 1/z
d/dz Log z = (−ı/z) ∂/∂θ (ln r + ıθ) = (−ı/z) ı = 1/z

3. Let {x_i} denote rectangular coordinates in two dimensions and let {ξ_i} be an orthogonal coordinate system. The distance metric coefficients h_i are defined

h_i = √((∂x1/∂ξi)² + (∂x2/∂ξi)²).

The Laplacian is

∇²u = (1/(h1 h2)) [∂/∂ξ1 ((h2/h1) ∂u/∂ξ1) + ∂/∂ξ2 ((h1/h2) ∂u/∂ξ2)].

First we calculate the distance metric coefficients in polar coordinates.

hr = √((∂x/∂r)² + (∂y/∂r)²) = √(cos²θ + sin²θ) = 1
hθ = √((∂x/∂θ)² + (∂y/∂θ)²) = √(r² sin²θ + r² cos²θ) = r

Then we find the Laplacian.

∇²φ = (1/r) [∂/∂r (rφr) + ∂/∂θ ((1/r)φθ)]
In polar coordinates, Laplace's equation is

φrr + (1/r)φr + (1/r²)φθθ = 0.

Solution 8.20
1. We compute the Laplacian of u(x, y) = x³ − y³.

∇²u = 6x − 6y

Since u is not harmonic, it is not the real part of an analytic function.

2. We compute the Laplacian of u(x, y) = sinh x cos y + x.

∇²u = sinh x cos y − sinh x cos y = 0

Since u is harmonic, it is the real part of an analytic function. We determine v by solving the Cauchy-Riemann equations.

vx = −uy, vy = ux
vx = sinh x sin y, vy = cosh x cos y + 1

We integrate the first equation to determine v up to an arbitrary additive function of y.

v = cosh x sin y + g(y)

We substitute this into the second Cauchy-Riemann equation. This will determine v up to an additive constant.

vy = cosh x cos y + 1
cosh x cos y + g′(y) = cosh x cos y + 1
g′(y) = 1
g(y) = y + a
v = cosh x sin y + y + a
f(z) = sinh x cos y + x + ı(cosh x sin y + y + a)

Here a is a real constant. We write the function in terms of z.

f(z) = sinh z + z + ıa

3. We compute the Laplacian of u(r, θ) = rⁿ cos(nθ).

∇²u = n(n − 1)r^{n−2} cos(nθ) + n r^{n−2} cos(nθ) − n² r^{n−2} cos(nθ) = 0

Since u is harmonic, it is the real part of an analytic function. We determine v by solving the Cauchy-Riemann equations.

vr = −(1/r)uθ, vθ = r ur
vr = n r^{n−1} sin(nθ), vθ = n rⁿ cos(nθ)

We integrate the first equation to determine v up to an arbitrary additive function of θ.

v = rⁿ sin(nθ) + g(θ)
  • 284. We substitute this into the second Cauchy-Riemann equation. This will determine v up to an additive constant. vθ = nrn cos(nθ) nrn cos(nθ) + g (θ) = nrn cos(nθ) g (θ) = 0 g(θ) = a v = rn sin(nθ) + a f(z) = rn cos(nθ) + ı(rn sin(nθ) + a) Here a is a real constant. We write the function in terms of z. f(z) = zn + ıa Solution 8.21 1. We find the velocity potential φ and stream function ψ. Φ(z) = log z + ı log z Φ(z) = ln r + ıθ + ı(ln r + ıθ) φ = ln r − θ, ψ = ln r + θ A branch of these are plotted in Figure 8.7. Figure 8.7: The velocity potential φ and stream function ψ for Φ(z) = log z + ı log z. Next we find the stream lines, ψ = c. ln r + θ = c r = ec−θ These are spirals which go counter-clockwise as we follow them to the origin. See Figure 8.8. Next we find the velocity field. v = φ v = φrˆr + φθ r ˆθ v = ˆr r − ˆθ r The velocity field is shown in the first plot of Figure 8.9. We see that the fluid flows out from the origin along the spiral paths of the streamlines. The second plot shows the direction of the velocity field. 264
  • 285. Figure 8.8: Streamlines for ψ = ln r + θ. Figure 8.9: Velocity field and velocity direction field for φ = ln r − θ. 2. We find the velocity potential φ and stream function ψ. Φ(z) = log(z − 1) + log(z + 1) Φ(z) = ln |z − 1| + ı arg(z − 1) + ln |z + 1| + ı arg(z + 1) φ = ln |z2 − 1|, ψ = arg(z − 1) + arg(z + 1) The velocity potential and a branch of the stream function are plotted in Figure 8.10. -2 -1 0 1 2-2 -1 0 1 2 -1 0 1 2 -2 -1 0 1 2 -2 -1 0 1 2-2 -1 0 1 2 0 2 4 6 -2 -1 0 1 2 Figure 8.10: The velocity potential φ and stream function ψ for Φ(z) = log(z − 1) + log(z + 1). 265
  • 286. The stream lines, arg(z − 1) + arg(z + 1) = c, are plotted in Figure 8.11. -2 -1 0 1 2 -2 -1 0 1 2 Figure 8.11: Streamlines for ψ = arg(z − 1) + arg(z + 1). Next we find the velocity field. v = φ v = 2x(x2 + y2 − 1) x4 + 2x2(y2 − 1) + (y2 + 1)2 ˆx + 2y(x2 + y2 + 1) x4 + 2x2(y2 − 1) + (y2 + 1)2 ˆy The velocity field is shown in the first plot of Figure 8.12. The fluid is flowing out of sources at z = ±1. The second plot shows the direction of the velocity field. Figure 8.12: Velocity field and velocity direction field for φ = ln |z2 − 1|. Solution 8.22 1. (a) We factor the denominator to see that there are first order poles at z = ±ı. z z2 + 1 = z (z − ı)(z + ı) 266
Since the function behaves like 1/z at infinity, it is analytic there.

(b) The denominator of 1/sin z has first order zeros at z = nπ, n ∈ Z. Thus the function has first order poles at these locations. Now we examine the point at infinity with the change of variables z = 1/ζ.

1/sin z = 1/sin(1/ζ) = ı2/(e^{ı/ζ} − e^{−ı/ζ})

We see that the point at infinity is a singularity of the function. Since the denominator grows exponentially, there is no multiplicative factor of ζⁿ that will make the function analytic at ζ = 0. We conclude that the point at infinity is an essential singularity. Since there is no deleted neighborhood of the point at infinity that does not contain the first order poles at the locations z = nπ, the point at infinity is a non-isolated singularity.

(c) log(1 + z²) = log(z + ı) + log(z − ı)

There are branch points at z = ±ı. Since the argument of the logarithm is unbounded as z → ∞ there is a branch point at infinity as well. Branch points are non-isolated singularities.

(d) z sin(1/z) = (z/(ı2))(e^{ı/z} − e^{−ı/z})

The point z = 0 is a singularity. Since the function grows exponentially at z = 0, there is no multiplicative factor of zⁿ that will make the function analytic. Thus z = 0 is an essential singularity. There are no other singularities in the finite complex plane. We examine the point at infinity.

z sin(1/z) = (1/ζ) sin ζ

The point at infinity is a singularity. We take the limit ζ → 0 to demonstrate that it is a removable singularity.

lim_{ζ→0} sin ζ/ζ = lim_{ζ→0} cos ζ/1 = 1

(e) tan^{−1}(z)/(z sinh²(πz)) = ı log((ı + z)/(ı − z))/(2z sinh²(πz))

There are branch points at z = ±ı due to the logarithm. These are non-isolated singularities. Note that sinh(z) has first order zeros at z = ınπ, n ∈ Z. The arctangent has a first order zero at z = 0. Thus there is a second order pole at z = 0. There are second order poles at z = ın, n ∈ Z \ {0} due to the hyperbolic sine. Since the hyperbolic sine has an essential singularity at infinity, the function has an essential singularity at infinity as well. The point at infinity is a non-isolated singularity because there is no neighborhood of infinity that does not contain second order poles.

2. (a) (z − ı) e^{1/(z−1)} has a simple zero at z = ı and an isolated essential singularity at z = 1.

(b) sin(z − 3)/((z − 3)(z + ı)⁶) has a removable singularity at z = 3, a pole of order 6 at z = −ı and an essential singularity at z = ∞.
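A hedged check of the two constructed functions in part 2 (SymPy again; our verification, not part of the text): the limit at z = 3 exists and is finite, so the singularity is removable, and the zero at z = ı is simple because the function vanishes there while its derivative does not.

```python
import sympy as sp

z = sp.symbols('z')

# (b): removable singularity at z = 3 -- the limit exists and is finite.
g = sp.sin(z - 3) / ((z - 3) * (z + sp.I)**6)
print(sp.limit(g, z, 3))                          # 1/(3 + I)**6

# (a): simple zero at z = i -- the function vanishes, its derivative does not.
h = (z - sp.I) * sp.exp(1 / (z - 1))
print(h.subs(z, sp.I))                            # 0
print(sp.simplify(sp.diff(h, z).subs(z, sp.I)))   # exp(1/(I - 1)), nonzero
```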
  • 289. Chapter 9 Analytic Continuation For every complex problem, there is a solution that is simple, neat, and wrong. - H. L. Mencken 9.1 Analytic Continuation Suppose there is a function, f1(z) that is analytic in the domain D1 and another analytic function, f2(z) that is analytic in the domain D2. (See Figure 9.1.) Im(z) Re(z) D D1 2 Figure 9.1: Overlapping Domains If the two domains overlap and f1(z) = f2(z) in the overlap region D1 ∩ D2, then f2(z) is called an analytic continuation of f1(z). This is an appropriate name since f2(z) continues the definition of f1(z) outside of its original domain of definition D1. We can define a function f(z) that is analytic in the union of the domains D1 ∪ D2. On the domain D1 we have f(z) = f1(z) and f(z) = f2(z) on D2. f1(z) and f2(z) are called function elements. There is an analytic continuation even if the two domains only share an arc and not a two dimensional region. With more overlapping domains D3, D4, . . . we could perhaps extend f1(z) to more of the complex plane. Sometimes it is impossible to extend a function beyond the boundary of a domain. This is known as a natural boundary. If a function f1(z) is analytically continued to a domain Dn along two different paths, (See Figure 9.2.), then the two analytic continuations are identical as long as the paths do not enclose a branch point of the function. This is the uniqueness theorem of analytic continuation. Consider an analytic function f(z) defined in the domain D. Suppose that f(z) = 0 on the arc AB, (see Figure 9.3.) Then f(z) = 0 in all of D. Consider a point ζ on AB. The Taylor series expansion of f(z) about the point z = ζ converges in a circle C at least up to the boundary of D. The derivative of f(z) at the point z = ζ is f (ζ) = lim ∆z→0 f(ζ + ∆z) − f(ζ) ∆z 269
  • 290. D1 Dn Figure 9.2: Two Paths of Analytic Continuation D B ζ C A Figure 9.3: Domain Containing Arc Along Which f(z) Vanishes If ∆z is in the direction of the arc, then f (ζ) vanishes as well as all higher derivatives, f (ζ) = f (ζ) = f (ζ) = · · · = 0. Thus we see that f(z) = 0 inside C. By taking Taylor series expansions about points on AB or inside of C we see that f(z) = 0 in D. Result 9.1.1 Let f1(z) and f2(z) be analytic functions defined in D. If f1(z) = f2(z) for the points in a region or on an arc in D, then f1(z) = f2(z) for all points in D. To prove Result 9.1.1, we define the analytic function g(z) = f1(z) − f2(z). Since g(z) vanishes in the region or on the arc, then g(z) = 0 and hence f1(z) = f2(z) for all points in D. Result 9.1.2 Consider analytic functions f1(z) and f2(z) defined on the do- mains D1 and D2, respectively. Suppose that D1 ∩D2 is a region or an arc and that f1(z) = f2(z) for all z ∈ D1 ∩ D2. (See Figure 9.4.) Then the function f(z) = f1(z) for z ∈ D1, f2(z) for z ∈ D2, is analytic in D1 ∪ D2. D1 D2 D1 D2 Figure 9.4: Domains that Intersect in a Region or an Arc Result 9.1.2 follows directly from Result 9.1.1. 270
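A numerical taste of the next section's example: the partial sums of Σ zⁿ and the function 1/(1 − z) agree inside the unit disk, while 1/(1 − z) continues to make sense far outside the disk where the series diverges. (A sketch in plain Python; the names are ours.)

```python
def partial_sum(z, terms=200):
    """Partial sum of the geometric series sum_n z**n."""
    total, power = 0, 1
    for _ in range(terms):
        total += power
        power *= z
    return total

f2 = lambda z: 1 / (1 - z)   # analytic everywhere except z = 1

z_inside = 0.3 + 0.4j        # |z| < 1: the two function elements agree
print(partial_sum(z_inside), f2(z_inside))

z_outside = 2 + 1j           # |z| > 1: the series diverges, f2 is still defined
print(f2(z_outside))
```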
  • 291. 9.2 Analytic Continuation of Sums Example 9.2.1 Consider the function f1(z) = ∞ n=0 zn . The sum converges uniformly for D1 = |z| ≤ r < 1. Since the derivative also converges in this domain, the function is analytic there. Im(z) Re(z) Im(z) Re(z) D2 D1 Figure 9.5: Domain of Convergence for ∞ n=0 zn . Now consider the function f2(z) = 1 1 − z . This function is analytic everywhere except the point z = 1. On the domain D1, f2(z) = 1 1 − z = ∞ n=0 zn = f1(z) Analytic continuation tells us that there is a function that is analytic on the union of the two domains. Here, the domain is the entire z plane except the point z = 1 and the function is f(z) = 1 1 − z . 1 1−z is said to be an analytic continuation of ∞ n=0 zn . 9.3 Analytic Functions Defined in Terms of Real Variables Result 9.3.1 An analytic function, u(x, y) + ıv(x, y) can be written in terms of a function of a complex variable, f(z) = u(x, y) + ıv(x, y). Result 9.3.1 is proved in Exercise 9.1. Example 9.3.1 f(z) = cosh y sin x (x ex cos y − y ex sin y) − cos x sinh y (y ex cos y + x ex sin y) + ı cosh y sin x (y ex cos y + x ex sin y) + cos x sinh y (x ex cos y − y ex sin y) is an analytic function. Express f(z) in terms of z. On the real line, y = 0, f(z) is f(z = x) = x ex sin x 271
  • 292. (Recall that cos(0) = cosh(0) = 1 and sin(0) = sinh(0) = 0.) The analytic continuation of f(z) into the complex plane is f(z) = z ez sin z. Alternatively, for x = 0 we have f(z = ıy) = y sinh y(cos y − ı sin y). The analytic continuation from the imaginary axis to the complex plane is f(z) = −ız sinh(−ız)(cos(−ız) − ı sin(−ız)) = ız sinh(ız)(cos(ız) + ı sin(ız)) = z sin z ez . Example 9.3.2 Consider u = e−x (x sin y − y cos y). Find v such that f(z) = u + ıv is analytic. From the Cauchy-Riemann equations, ∂v ∂y = ∂u ∂x = e−x sin y − x e−x sin y + y e−x cos y ∂v ∂x = − ∂u ∂y = e−x cos y − x e−x cos y − y e−x sin y Integrate the first equation with respect to y. v = − e−x cos y + x e−x cos y + e−x (y sin y + cos y) + F(x) = y e−x sin y + x e−x cos y + F(x) F(x) is an arbitrary function of x. Substitute this expression for v into the equation for ∂v/∂x. −y e−x sin y − x e−x cos y + e−x cos y + F (x) = −y e−x sin y − x e−x cos y + e−x cos y Thus F (x) = 0 and F(x) = c. v = e−x (y sin y + x cos y) + c Example 9.3.3 Find f(z) in the previous example. (Up to the additive constant.) Method 1 f(z) = u + ıv = e−x (x sin y − y cos y) + ı e−x (y sin y + x cos y) = e−x x eıy − e−ıy ı2 − y eıy + e−ıy 2 + ı e−x y eıy − e−ıy ı2 + x eıy + e−ıy 2 = ı(x + ıy) e−(x+ıy) = ız e−z Method 2 f(z) = f(x + ıy) = u(x, y) + ıv(x, y) is an analytic function. On the real axis, y = 0, f(z) is f(z = x) = u(x, 0) + ıv(x, 0) = e−x (x sin 0 − 0 cos 0) + ı e−x (0 sin 0 + x cos 0) = ıx e−x 272
Suppose there is an analytic continuation of f(z) into the complex plane. If such a continuation, f(z), exists, then it must be equal to f(z = x) on the real axis. An obvious choice for the analytic continuation is f(z) = u(z, 0) + ıv(z, 0) since this is clearly equal to u(x, 0) + ıv(x, 0) when z is real. Thus we obtain

f(z) = ız e^{−z}

Example 9.3.4 Consider f(z) = u(x, y) + ıv(x, y). Show that f′(z) = ux(z, 0) − ıuy(z, 0).

f′(z) = ux + ıvx = ux − ıuy

f′(z) is an analytic function. On the real axis, z = x, f′(z) is

f′(z = x) = ux(x, 0) − ıuy(x, 0)

Now f′(z = x) is defined on the real line. An analytic continuation of f′(z = x) into the complex plane is

f′(z) = ux(z, 0) − ıuy(z, 0).

Example 9.3.5 Again consider the problem of finding f(z) given that u(x, y) = e^{−x}(x sin y − y cos y). Now we can use the result of the previous example to do this problem.

ux(x, y) = ∂u/∂x = e^{−x} sin y − x e^{−x} sin y + y e^{−x} cos y
uy(x, y) = ∂u/∂y = x e^{−x} cos y + y e^{−x} sin y − e^{−x} cos y

f′(z) = ux(z, 0) − ıuy(z, 0) = 0 − ı(z e^{−z} − e^{−z}) = ı(−z e^{−z} + e^{−z})

Integration yields the result

f(z) = ız e^{−z} + c

Example 9.3.6 Find f(z) given that

u(x, y) = cos x cosh² y sin x + cos x sin x sinh² y
v(x, y) = cos² x cosh y sinh y − cosh y sin² x sinh y

f(z) = u(x, y) + ıv(x, y) is an analytic function. On the real line, f(z) is

f(z = x) = u(x, 0) + ıv(x, 0)
         = cos x cosh² 0 sin x + cos x sin x sinh² 0 + ı(cos² x cosh 0 sinh 0 − cosh 0 sin² x sinh 0)
         = cos x sin x

Now we know the definition of f(z) on the real line. We would like to find an analytic continuation of f(z) into the complex plane. An obvious choice for f(z) is

f(z) = cos z sin z

Using trig identities we can write this as

f(z) = sin(2z)/2.
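A symbolic sanity check of Example 9.3.6 (SymPy; our verification): expand sin(2z)/2 at z = x + ıy and compare its real and imaginary parts with the given u and v.

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
z = x + sp.I * y

f = (sp.sin(2 * z) / 2).expand(complex=True)
u_given = sp.cos(x) * sp.cosh(y)**2 * sp.sin(x) + sp.cos(x) * sp.sin(x) * sp.sinh(y)**2
v_given = sp.cos(x)**2 * sp.cosh(y) * sp.sinh(y) - sp.cosh(y) * sp.sin(x)**2 * sp.sinh(y)

print(sp.simplify(sp.re(f) - u_given))   # 0
print(sp.simplify(sp.im(f) - v_given))   # 0
```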
  • 294. Example 9.3.7 Find f(z) given only that u(x, y) = cos x cosh2 y sin x + cos x sin x sinh2 y. Recall that f (z) = ux + ıvx = ux − ıuy Differentiating u(x, y), ux = cos2 x cosh2 y − cosh2 y sin2 x + cos2 x sinh2 y − sin2 x sinh2 y uy = 4 cos x cosh y sin x sinh y f (z) is an analytic function. On the real axis, f (z) is f (z = x) = cos2 x − sin2 x Using trig identities we can write this as f (z = x) = cos(2x) Now we find an analytic continuation of f (z = x) into the complex plane. f (z) = cos(2z) Integration yields the result f(z) = sin(2z) 2 + c 9.3.1 Polar Coordinates Example 9.3.8 Is u(r, θ) = r(log r cos θ − θ sin θ) the real part of an analytic function? The Laplacian in polar coordinates is ∆φ = 1 r ∂ ∂r r ∂φ ∂r + 1 r2 ∂2 φ ∂θ2 . We calculate the partial derivatives of u. ∂u ∂r = cos θ + log r cos θ − θ sin θ r ∂u ∂r = r cos θ + r log r cos θ − rθ sin θ ∂ ∂r r ∂u ∂r = 2 cos θ + log r cos θ − θ sin θ 1 r ∂ ∂r r ∂u ∂r = 1 r (2 cos θ + log r cos θ − θ sin θ) ∂u ∂θ = −r (θ cos θ + sin θ + log r sin θ) ∂2 u ∂θ2 = r (−2 cos θ − log r cos θ + θ sin θ) 1 r2 ∂2 u ∂θ2 = 1 r (−2 cos θ − log r cos θ + θ sin θ) 274
  • 295. From the above we see that ∆u = 1 r ∂ ∂r r ∂u ∂r + 1 r2 ∂2 u ∂θ2 = 0. Therefore u is harmonic and is the real part of some analytic function. Example 9.3.9 Find an analytic function f(z) whose real part is u(r, θ) = r (log r cos θ − θ sin θ) . Let f(z) = u(r, θ) + ıv(r, θ). The Cauchy-Riemann equations are ur = vθ r , uθ = −rvr. Using the partial derivatives in the above example, we obtain two partial differential equations for v(r, θ). vr = − uθ r = θ cos θ + sin θ + log r sin θ vθ = rur = r (cos θ + log r cos θ − θ sin θ) Integrating the equation for vθ yields v = r (θ cos θ + log r sin θ) + F(r) where F(r) is a constant of integration. Substituting our expression for v into the equation for vr yields θ cos θ + log r sin θ + sin θ + F (r) = θ cos θ + sin θ + log r sin θ F (r) = 0 F(r) = const Thus we see that f(z) = u + ıv = r (log r cos θ − θ sin θ) + ır (θ cos θ + log r sin θ) + const f(z) is an analytic function. On the line θ = 0, f(z) is f(z = r) = r(log r) + ır(0) + const = r log r + const The analytic continuation into the complex plane is f(z) = z log z + const Example 9.3.10 Find the formula in polar coordinates that is analogous to f (z) = ux(z, 0) − ıuy(z, 0). We know that df dz = e−ıθ ∂f ∂r . If f(z) = u(r, θ) + ıv(r, θ) then df dz = e−ıθ (ur + ıvr) 275
  • 296. From the Cauchy-Riemann equations, we have vr = −uθ/r. df dz = e−ıθ ur − ı uθ r f (z) is an analytic function. On the line θ = 0, f(z) is f (z = r) = ur(r, 0) − ı uθ(r, 0) r The analytic continuation of f (z) into the complex plane is f (z) = ur(z, 0) − ı r uθ(z, 0). Example 9.3.11 Find an analytic function f(z) whose real part is u(r, θ) = r (log r cos θ − θ sin θ) . ur(r, θ) = (log r cos θ − θ sin θ) + cos θ uθ(r, θ) = r (− log r sin θ − sin θ − θ cos θ) f (z) = ur(z, 0) − ı r uθ(z, 0) = log z + 1 Integrating f (z) yields f(z) = z log z + ıc. 9.3.2 Analytic Functions Defined in Terms of Their Real or Imaginary Parts Consider an analytic function: f(z) = u(x, y) + ıv(x, y). We differentiate this expression. f (z) = ux(x, y) + ıvx(x, y) We apply the Cauchy-Riemann equation vx = −uy. f (z) = ux(x, y) − ıuy(x, y). (9.1) Now consider the function of a complex variable, g(ζ): g(ζ) = ux(x, ζ) − ıuy(x, ζ) = ux(x, ξ + ıψ) − ıuy(x, ξ + ıψ). This function is analytic where f(ζ) is analytic. To show this we first verify that the derivatives in the ξ and ψ directions are equal. ∂ ∂ξ g(ζ) = uxy(x, ξ + ıψ) − ıuyy(x, ξ + ıψ) −ı ∂ ∂ψ g(ζ) = −ı (ıuxy(x, ξ + ıψ) + uyy(x, ξ + ıψ)) = uxy(x, ξ + ıψ) − ıuyy(x, ξ + ıψ) Since these partial derivatives are equal and continuous, g(ζ) is analytic. We evaluate the function g(ζ) at ζ = −ıx. (Substitute y = −ıx into Equation 9.1.) f (2x) = ux(x, −ıx) − ıuy(x, −ıx) 276
  • 297. We make a change of variables to solve for f (x). f (x) = ux x 2 , −ı x 2 − ıuy x 2 , −ı x 2 . If the expression is non-singular, then this defines the analytic function, f (z), on the real axis. The analytic continuation to the complex plane is f (z) = ux z 2 , −ı z 2 − ıuy z 2 , −ı z 2 . Note that d dz 2u(z/2, −ız/2) = ux(z/2, −ız/2)−ıuy(z/2, −ız/2). We integrate the equation to obtain: f(z) = 2u z 2 , −ı z 2 + c. We know that the real part of an analytic function determines that function to within an additive constant. Assuming that the above expression is non-singular, we have found a formula for writing an analytic function in terms of its real part. With the same method, we can find how to write an analytic function in terms of its imaginary part, v. We can also derive formulas if u and v are expressed in polar coordinates: f(z) = u(r, θ) + ıv(r, θ). Result 9.3.2 If f(z) = u(x, y) + ıv(x, y) is analytic and the expressions are non-singular, then f(z) = 2u z 2 , −ı z 2 + const (9.2) f(z) = ı2v z 2 , −ı z 2 + const. (9.3) If f(z) = u(r, θ) + ıv(r, θ) is analytic and the expressions are non-singular, then f(z) = 2u z1/2 , − ı 2 log z + const (9.4) f(z) = ı2v z1/2 , − ı 2 log z + const. (9.5) Example 9.3.12 Consider the problem of finding f(z) given that u(x, y) = e−x (x sin y − y cos y). f(z) = 2u z 2 , −ı z 2 = 2 e−z/2 z 2 sin −ı z 2 + ı z 2 cos −ı z 2 + c = ız e−z/2 ı sin ı z 2 + cos −ı z 2 + c = ız e−z/2 e−z/2 + c = ız e−z +c Example 9.3.13 Consider Log z = 1 2 Log x2 + y2 + ı Arctan(x, y). 277
We try to construct the analytic function from its real part using Equation 9.2.

f(z) = 2u(z/2, −ız/2) + c
     = 2·(1/2) Log((z/2)² + (−ız/2)²) + c
     = Log(0) + c

We obtain a singular expression, so the method fails.

Example 9.3.14 Again consider the logarithm, this time written in terms of polar coordinates.

Log z = Log r + ıθ

We try to construct the analytic function from its real part using Equation 9.4.

f(z) = 2u(z^{1/2}, −(ı/2) log z) + c
     = 2 Log(z^{1/2}) + c
     = Log z + c

With this method we recover the analytic function.
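Equation 9.2 is easy to exercise symbolically on Example 9.3.12. A sketch (SymPy; our check, not the text's): substituting x → z/2, y → −ız/2 into u = e^{−x}(x sin y − y cos y) and doubling should reproduce ız e^{−z} up to the additive constant.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

u = sp.exp(-x) * (x * sp.sin(y) - y * sp.cos(y))

# Equation 9.2: f(z) = 2 u(z/2, -i z/2) + const
f = 2 * u.subs({x: z / 2, y: -sp.I * z / 2})

print(sp.simplify(f - sp.I * z * sp.exp(-z)))   # 0
```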
  • 299. 9.4 Exercises Exercise 9.1 Consider two functions, f(x, y) and g(x, y). They are said to be functionally dependent if there is a an h(g) such that f(x, y) = h(g(x, y)). f and g will be functionally dependent if and only if their Jacobian vanishes. If f and g are functionally dependent, then the derivatives of f are fx = h (g)gx fy = h (g)gy. Thus we have ∂(f, g) ∂(x, y) = fx fy gx gy = fxgy − fygx = h (g)gxgy − h (g)gygx = 0. If the Jacobian of f and g vanishes, then fxgy − fygx = 0. This is a first order partial differential equation for f that has the general solution f(x, y) = h(g(x, y)). Prove that an analytic function u(x, y) + ıv(x, y) can be written in terms of a function of a complex variable, f(z) = u(x, y) + ıv(x, y). Exercise 9.2 Which of the following functions are the real part of an analytic function? For those that are, find the harmonic conjugate, v(x, y), and find the analytic function f(z) = u(x, y)+ıv(x, y) as a function of z. 1. x3 − 3xy2 − 2xy + y 2. ex sinh y 3. ex (sin x cos y cosh y − cos x sin y sinh y) Exercise 9.3 For an analytic function, f(z) = u(r, θ) + ıv(r, θ) prove that under suitable restrictions: f(z) = 2u z1/2 , − ı 2 log z + const. 279
  • 300. 9.5 Hints Hint 9.1 Show that u(x, y) + ıv(x, y) is functionally dependent on x + ıy so that you can write f(z) = f(x + ıy) = u(x, y) + ıv(x, y). Hint 9.2 Hint 9.3 Check out the derivation of Equation 9.2. 280
  • 301. 9.6 Solutions Solution 9.1 u(x, y) + ıv(x, y) is functionally dependent on z = x + ıy if and only if ∂(u + ıv, x + ıy) ∂(x, y) = 0. ∂(u + ıv, x + ıy) ∂(x, y) = ux + ıvx uy + ıvy 1 ı = −vx − uy + ı (ux − vy) Since u and v satisfy the Cauchy-Riemann equations, this vanishes. = 0 Thus we see that u(x, y) + ıv(x, y) is functionally dependent on x + ıy so we can write f(z) = f(x + ıy) = u(x, y) + ıv(x, y). Solution 9.2 1. Consider u(x, y) = x3 − 3xy2 − 2xy + y. The Laplacian of this function is ∆u ≡ uxx + uyy = 6x − 6x = 0 Since the function is harmonic, it is the real part of an analytic function. Clearly the analytic function is of the form, az3 + bz2 + cz + ıd, with a, b and c complex-valued constants and d a real constant. Substituting z = x + ıy and expanding products yields, a x3 + ı3x2 y − 3xy2 − ıy3 + b x2 + ı2xy − y2 + c(x + ıy) + ıd. By inspection, we see that the analytic function is f(z) = z3 + ız2 − ız + ıd. The harmonic conjugate of u is the imaginary part of f(z), v(x, y) = 3x2 y − y3 + x2 − y2 − x + d. We can also do this problem with analytic continuation. The derivatives of u are ux = 3x2 − 3y2 − 2y, uy = −6xy − 2x + 1. The derivative of f(z) is f (z) = ux − ıuy = 3x2 − 2y2 − 2y + ı(6xy − 2x + 1). On the real axis we have f (z = x) = 3x2 − ı2x + ı. Using analytic continuation, we see that f (z) = 3z2 − ı2z + ı. Integration yields f(z) = z3 − ız2 + ız + const 281
  • 302. 2. Consider u(x, y) = ex sinh y. The Laplacian of this function is ∆u = ex sinh y + ex sinh y = 2 ex sinh y. Since the function is not harmonic, it is not the real part of an analytic function. 3. Consider u(x, y) = ex (sin x cos y cosh y − cos x sin y sinh y). The Laplacian of the function is ∆u = ∂ ∂x (ex (sin x cos y cosh y − cos x sin y sinh y + cos x cos y cosh y + sin x sin y sinh y)) + ∂ ∂y (ex (− sin x sin y cosh y − cos x cos y sinh y + sin x cos y sinh y − cos x sin y cosh y)) = 2 ex (cos x cos y cosh y + sin x sin y sinh y) − 2 ex (cos x cos y cosh y + sin x sin y sinh y) = 0. Thus u is the real part of an analytic function. The derivative of the analytic function is f (z) = ux + ıvx = ux − ıuy From the derivatives of u we computed before, we have f(z) = (ex (sin x cos y cosh y − cos x sin y sinh y + cos x cos y cosh y + sin x sin y sinh y)) − ı (ex (− sin x sin y cosh y − cos x cos y sinh y + sin x cos y sinh y − cos x sin y cosh y)) Along the real axis, f (z) has the value, f (z = x) = ex (sin x + cos x). By analytic continuation, f (z) is f (z) = ez (sin z + cos z) We obtain f(z) by integrating. f(z) = ez sin z + const. u is the real part of the analytic function f(z) = ez sin z + ıc, where c is a real constant. We find the harmonic conjugate of u by taking the imaginary part of f. f(z) = ex (cosy + ı sin y)(sin x cosh y + ı cos x sinh y) + ıc v(x, y) = ex sin x sin y cosh y + cos x cos y sinh y + c Solution 9.3 We consider the analytic function: f(z) = u(r, θ) + ıv(r, θ). Recall that the complex derivative in terms of polar coordinates is d dz = e−ıθ ∂ ∂r = − ı r e−ıθ ∂ ∂θ . The Cauchy-Riemann equations are ur = 1 r vθ, vr = − 1 r uθ. 282
• 303. We differentiate f(z) and use the partial derivative in r for the right side.
$$f'(z) = e^{-ıθ}(u_r + ıv_r)$$
We use the Cauchy-Riemann equations to write f′(z) in terms of the derivatives of u.
$$f'(z) = e^{-ıθ}\left(u_r - ı\frac{1}{r}u_θ\right) \qquad (9.6)$$
Now consider the function of a complex variable, g(ζ):
$$g(ζ) = e^{-ıζ}\left(u_r(r, ζ) - ı\frac{1}{r}u_θ(r, ζ)\right) = e^{ψ-ıξ}\left(u_r(r, ξ+ıψ) - ı\frac{1}{r}u_θ(r, ξ+ıψ)\right)$$
This function is analytic where f(ζ) is analytic. It is a simple calculus exercise to show that the complex derivative in the ξ direction, ∂/∂ξ, and the complex derivative in the ψ direction, −ı ∂/∂ψ, are equal. Since these partial derivatives are equal and continuous, g(ζ) is analytic. We evaluate the function g(ζ) at ζ = −ı log r. (Substitute θ = −ı log r into Equation 9.6.)
$$f'\left(r e^{ı(-ı\log r)}\right) = e^{-ı(-ı\log r)}\left(u_r(r, -ı\log r) - ı\frac{1}{r}u_θ(r, -ı\log r)\right)$$
$$r f'\left(r^2\right) = u_r(r, -ı\log r) - ı\frac{1}{r}u_θ(r, -ı\log r)$$
If the expression is non-singular, then it defines the analytic function f′(z) on a curve. The analytic continuation to the complex plane is
$$z f'\left(z^2\right) = u_r(z, -ı\log z) - ı\frac{1}{z}u_θ(z, -ı\log z).$$
We integrate to obtain an expression for f(z²).
$$\frac{1}{2}f\left(z^2\right) = u(z, -ı\log z) + \text{const}$$
We make a change of variables and solve for f(z).
$$f(z) = 2u\left(z^{1/2}, -\frac{ı}{2}\log z\right) + \text{const}.$$
Assuming that the above expression is non-singular, we have found a formula for writing the analytic function in terms of its real part, u(r, θ). With the same method, we can find how to write an analytic function in terms of its imaginary part, v(r, θ).
• 305. Chapter 10 Contour Integration and the Cauchy-Goursat Theorem

Between two evils, I always pick the one I never tried before.
- Mae West

10.1 Line Integrals

In this section we will recall the definition of a line integral in the Cartesian plane. In the next section we will use this to define the contour integral in the complex plane.

Limit Sum Definition. First we develop a limit sum definition of a line integral. Consider a curve C in the Cartesian plane joining the points (a₀, b₀) and (a₁, b₁). We partition the curve into n segments with the points (x₀, y₀), . . . , (xₙ, yₙ) where the first and last points are at the endpoints of the curve. We define the differences, Δx_k = x_{k+1} − x_k and Δy_k = y_{k+1} − y_k, and let (ξ_k, ψ_k) be points on the curve between (x_k, y_k) and (x_{k+1}, y_{k+1}). This is shown pictorially in Figure 10.1.

Figure 10.1: A curve in the Cartesian plane.

Consider the sum
$$\sum_{k=0}^{n-1} \left(P(ξ_k, ψ_k)Δx_k + Q(ξ_k, ψ_k)Δy_k\right),$$
where P and Q are continuous functions on the curve. (P and Q may be complex-valued.) In the limit as each of the Δx_k and Δy_k approach zero the value of the sum, (if the limit exists), is denoted by
$$\int_C P(x, y)\,dx + Q(x, y)\,dy.$$
  • 306. This is a line integral along the curve C. The value of the line integral depends on the functions P(x, y) and Q(x, y), the endpoints of the curve and the curve C. We can also write a line integral in vector notation. C f(x) · dx Here x = (x, y) and f(x) = (P(x, y), Q(x, y)). Evaluating Line Integrals with Parameterization. Let the curve C be parametrized by x = x(t), y = y(t) for t0 ≤ t ≤ t1. Then the differentials on the curve are dx = x (t) dt and dy = y (t) dt. Using the parameterization we can evaluate a line integral in terms of a definite integral. C P(x, y) dx + Q(x, y) dy = t1 t0 P(x(t), y(t))x (t) + Q(x(t), y(t))y (t) dt Example 10.1.1 Consider the line integral C x2 dx + (x + y) dy, where C is the semi-circle from (1, 0) to (−1, 0) in the upper half plane. We parameterize the curve with x = cos t, y = sin t for 0 ≤ t ≤ π. C x2 dx + (x + y) dy = π 0 cos2 t(− sin t) + (cos t + sin t) cos t dt = π 2 − 2 3 10.2 Contour Integrals Limit Sum Definition. We develop a limit sum definition for contour integrals. It will be anal- ogous to the definition for line integrals except that the notation is cleaner in complex variables. Consider a contour C in the complex plane joining the points c0 and c1. We partition the contour into n segments with the points z0, . . . , zn where the first and last points are at the endpoints of the contour. We define the differences ∆zk = zk+1 − zk and let ζk be points on the contour between zk and zk+1. Consider the sum n−1 k=0 f(ζk)∆zk, where f is a continuous function on the contour. In the limit as each of the ∆zk approach zero the value of the sum, (if the limit exists), is denoted by C f(z) dz. This is a contour integral along C. We can write a contour integral in terms of a line integral. Let f(z) = φ(x, y). (φ : R2 → C.) C f(z) dz = C φ(x, y)(dx + ı dy) C f(z) dz = C (φ(x, y) dx + ıφ(x, y) dy) (10.1) Further, we can write a contour integral in terms of two real-valued line integrals. Let f(z) = u(x, y) + ıv(x, y). C f(z) dz = C (u(x, y) + ıv(x, y))(dx + ı dy) C f(z) dz = C (u(x, y) dx − v(x, y) dy) + ı C (v(x, y) dx + u(x, y) dy) (10.2) 286
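As a quick numerical sanity check on the parameterization method, the line integral of Example 10.1.1 can be handed to a quadrature routine. This is a minimal sketch, assuming numpy and scipy are available:

```python
import numpy as np
from scipy.integrate import quad

# C: x = cos t, y = sin t, 0 <= t <= pi; integrand of x^2 dx + (x + y) dy
def integrand(t):
    x, y = np.cos(t), np.sin(t)
    dx, dy = -np.sin(t), np.cos(t)   # dx/dt, dy/dt
    return x**2 * dx + (x + y) * dy

value, _ = quad(integrand, 0.0, np.pi)
print(value, np.pi/2 - 2/3)          # both approximately 0.904
```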
  • 307. Evaluation. Let the contour C be parametrized by z = z(t) for t0 ≤ t ≤ t1. Then the differential on the contour is dz = z (t) dt. Using the parameterization we can evaluate a contour integral in terms of a definite integral. C f(z) dz = t1 t0 f(z(t))z (t) dt Example 10.2.1 Let C be the positively oriented unit circle about the origin in the complex plane. Evaluate: 1. C z dz 2. C 1 z dz 3. C 1 z |dz| In each case we parameterize the contour and then do the integral. 1. z = eıθ , dz = ı eıθ dθ C z dz = 2π 0 eıθ ı eıθ dθ = 1 2 eı2θ 2π 0 = 1 2 eı4π − 1 2 eı0 = 0 2. C 1 z dz = 2π 0 1 eıθ ı eıθ dθ = ı 2π 0 dθ = ı2π 3. |dz| = ı eıθ dθ = ı eıθ |dθ| = |dθ| Since dθ is positive in this case, |dθ| = dθ. C 1 z |dz| = 2π 0 1 eıθ dθ = ı e−ıθ 2π 0 = 0 10.2.1 Maximum Modulus Integral Bound The absolute value of a real integral obeys the inequality b a f(x) dx ≤ b a |f(x)| |dx| ≤ (b − a) max a≤x≤b |f(x)|. 287
  • 308. Now we prove the analogous result for the modulus of a contour integral. C f(z) dz = lim ∆z→0 n−1 k=0 f(ζk)∆zk ≤ lim ∆z→0 n−1 k=0 |f(ζk)| |∆zk| = C |f(z)| |dz| ≤ C max z∈C |f(z)| |dz| = max z∈C |f(z)| C |dz| = max z∈C |f(z)| × (length of C) Result 10.2.1 Maximum Modulus Integral Bound. C f(z) dz ≤ C |f(z)| |dz| ≤ max z∈C |f(z)| (length of C) 10.3 The Cauchy-Goursat Theorem Let f(z) be analytic in a compact, closed, connected domain D. We consider the integral of f(z) on the boundary of the domain. ∂D f(z) dz = ∂D ψ(x, y)(dx + ı dy) = ∂D ψ dx + ıψ dy Recall Green’s Theorem. ∂D P dx + Q dy = D (Qx − Py) dx dy If we assume that f (z) is continuous, we can apply Green’s Theorem to the integral of f(z) on ∂D. ∂D f(z) dz = ∂D ψ dx + ıψ dy = D (ıψx − ψy) dx dy Since f(z) is analytic, it satisfies the Cauchy-Riemann equation ψx = −ıψy. The integrand in the area integral, ıψx − ψy, is zero. Thus the contour integral vanishes. ∂D f(z) dz = 0 This is known as Cauchy’s Theorem. The assumption that f (z) is continuous is not necessary, but it makes the proof much simpler because we can use Green’s Theorem. If we remove this restriction the result is known as the Cauchy-Goursat Theorem. The proof of this result is omitted. 288
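Before stating the theorem formally, it is instructive to check Example 10.2.1 and the bound of Result 10.2.1 numerically. A minimal sketch, assuming numpy; note that for f(z) = 1/z on the unit circle the maximum modulus bound is attained with equality.

```python
import numpy as np

def trap(f, t):
    # composite trapezoid rule for samples f over the parameter grid t
    return np.sum(0.5*(f[1:] + f[:-1]) * np.diff(t))

t = np.linspace(0.0, 2*np.pi, 200001)
z = np.exp(1j*t)                    # positively oriented unit circle
dz = 1j*np.exp(1j*t)                # dz = i e^{it} dt

print(trap(z*dz, t))                # integral of z dz   -> 0   (analytic integrand)
print(trap(dz/z, t))                # integral of dz/z   -> 2*pi*i (not analytic at 0)
print(abs(trap(dz/z, t)), 2*np.pi)  # here |integral| equals max|1/z| * length(C)
```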
  • 309. Result 10.3.1 The Cauchy-Goursat Theorem. If f(z) is analytic in a compact, closed, connected domain D then the integral of f(z) on the bound- ary of the domain vanishes. ∂D f(z) dz = k Ck f(z) dz = 0 Here the set of contours {Ck} make up the positively oriented boundary ∂D of the domain D. As a special case of the Cauchy-Goursat theorem we can consider a simply-connected region. For this the boundary is a Jordan curve. We can state the theorem in terms of this curve instead of referring to the boundary. Result 10.3.2 The Cauchy-Goursat Theorem for Jordan Curves. If f(z) is analytic inside and on a simple, closed contour C, then C f(z) dz = 0 Example 10.3.1 Let C be the unit circle about the origin with positive orientation. In Exam- ple 10.2.1 we calculated that C z dz = 0 Now we can evaluate the integral without parameterizing the curve. We simply note that the integrand is analytic inside and on the circle, which is simple and closed. By the Cauchy-Goursat Theorem, the integral vanishes. We cannot apply the Cauchy-Goursat theorem to evaluate C 1 z dz = ı2π as the integrand is not analytic at z = 0. Example 10.3.2 Consider the domain D = {z | |z| > 1}. The boundary of the domain is the unit circle with negative orientation. f(z) = 1/z is analytic on D and its boundary. However ∂D f(z) dz does not vanish and we cannot apply the Cauchy-Goursat Theorem. This is because the domain is not compact. 10.4 Contour Deformation Path Independence. Consider a function f(z) that is analytic on a simply connected domain a contour C in that domain with end points a and b. The contour integral C f(z) dz is independent of the path connecting the end points and can be denoted b a f(z) dz. This result is a direct consequence of the Cauchy-Goursat Theorem. Let C1 and C2 be two different paths connecting the points. Let −C2 denote the second contour with the opposite orientation. Let C be the contour which is the union of C1 and −C2. By the Cauchy-Goursat theorem, the integral along this contour vanishes. C f(z) dz = C1 f(z) dz + −C2 f(z) dz = 0 This implies that the integrals along C1 and C2 are equal. C1 f(z) dz = C2 f(z) dz 289
  • 310. Thus contour integrals on simply connected domains are independent of path. This result does not hold for multiply connected domains. Result 10.4.1 Path Independence. Let f(z) be analytic on a simply con- nected domain. For points a and b in the domain, the contour integral, b a f(z) dz is independent of the path connecting the points. Deforming Contours. Consider two simple, closed, positively oriented contours, C1 and C2. Let C2 lie completely within C1. If f(z) is analytic on and between C1 and C2 then the integrals of f(z) along C1 and C2 are equal. C1 f(z) dz = C2 f(z) dz Again, this is a direct consequence of the Cauchy-Goursat Theorem. Let D be the domain on and between C1 and C2. By the Cauchy-Goursat Theorem the integral along the boundary of D vanishes. C1 f(z) dz + −C2 f(z) dz = 0 C1 f(z) dz = C2 f(z) dz By following this line of reasoning, we see that we can deform a contour C without changing the value of C f(z) dz as long as we stay on the domain where f(z) is analytic. Result 10.4.2 Contour Deformation. Let f(z) be analytic on a domain D. If a set of closed contours {Cm} can be continuously deformed on the domain D to a set of contours {Γn} then the integrals along {Cm} and {Γn} are equal. {Cm} f(z) dz = {Γn} f(z) dz 10.5 Morera’s Theorem. The converse of the Cauchy-Goursat theorem is Morera’s Theorem. If the integrals of a continuous function f(z) vanish along all possible simple, closed contours in a domain, then f(z) is analytic on that domain. To prove Morera’s Theorem we will assume that first partial derivatives of f(z) = u(x, y)+ıv(x, y) are continuous, although the result can be derived without this restriction. Let the simple, closed contour C be the boundary of D which is contained in the domain Ω. C f(z) dz = C (u + ıv)(dx + ı dy) = C u dx − v dy + ı C v dx + u dy = D (−vx − uy) dx dy + ı D (ux − vy) dx dy = 0 290
• 311. Since the two integrands are continuous and vanish for all C in Ω, we conclude that the integrands are identically zero. This implies that the Cauchy-Riemann equations,
$$u_x = v_y, \qquad u_y = -v_x,$$
are satisfied. f(z) is analytic in Ω.

The same argument is more compact in terms of f(z) = φ(x, y). Again assume that the first partial derivatives of φ are continuous, and let the simple, closed contour C be the boundary of D, which is contained in the domain Ω.
$$\oint_C f(z)\,dz = \oint_C (φ\,dx + ıφ\,dy) = \int_D (ıφ_x - φ_y)\,dx\,dy = 0$$
Since the integrand, ıφ_x − φ_y, is continuous and vanishes for all C in Ω, we conclude that the integrand is identically zero. This implies that the Cauchy-Riemann equation, φ_x = −ıφ_y, is satisfied. We conclude that f(z) is analytic in Ω.

Result 10.5.1 Morera's Theorem. If f(z) is continuous in a simply connected domain Ω and ∮_C f(z) dz = 0 for all possible simple, closed contours C in the domain, then f(z) is analytic in Ω.

10.6 Indefinite Integrals

Consider a function f(z) which is analytic in a domain D. An anti-derivative or indefinite integral (or simply integral) is a function F(z) which satisfies F′(z) = f(z). This integral exists and is unique up to an additive constant. Note that if the domain is not connected, then the additive constants in each connected component are independent. The indefinite integrals are denoted:
$$\int f(z)\,dz = F(z) + c.$$
We will prove existence later by writing an indefinite integral as a contour integral. We briefly consider uniqueness of the indefinite integral here. Let F(z) and G(z) be integrals of f(z). Then F′(z) − G′(z) = f(z) − f(z) = 0. Although we do not prove it, it certainly makes sense that F(z) − G(z) is a constant on each connected component of the domain. Indefinite integrals are unique up to an additive constant.

Integrals of analytic functions have all the nice properties of integrals of functions of a real variable. All the formulas from integral tables, including things like integration by parts, carry over directly.
• 312. 10.7 Fundamental Theorem of Calculus via Primitives

10.7.1 Line Integrals and Primitives

Here we review some concepts from vector calculus. Analogous to an integral in functions of a single variable is a primitive in functions of several variables. Consider a function f(x). F(x) is an integral of f(x) if and only if dF = f dx. Now we move to functions of x and y. Let P(x, y) and Q(x, y) be defined on a simply connected domain. A primitive Φ satisfies
$$dΦ = P\,dx + Q\,dy.$$
A necessary and sufficient condition for the existence of a primitive is that P_y = Q_x. The definite integral can be evaluated in terms of the primitive.
$$\int_{(a,b)}^{(c,d)} P\,dx + Q\,dy = Φ(c, d) - Φ(a, b)$$

10.7.2 Contour Integrals

Now consider the integral along the contour C of the function f(z) = φ(x, y).
$$\int_C f(z)\,dz = \int_C (φ\,dx + ıφ\,dy)$$
A primitive Φ of φ dx + ıφ dy exists if and only if φ_y = ıφ_x. We recognize this as the Cauchy-Riemann equation, φ_x = −ıφ_y. Thus a primitive exists if and only if f(z) is analytic. If so, then
$$dΦ = φ\,dx + ıφ\,dy.$$
How do we find the primitive Φ that satisfies Φ_x = φ and Φ_y = ıφ? Note that choosing Φ(x, y) = F(z), where F(z) is an anti-derivative of f(z), F′(z) = f(z), does the trick. We express the complex derivative as partial derivatives in the coordinate directions to show this.
$$F'(z) = f(z) = φ(x, y), \qquad F'(z) = Φ_x = -ıΦ_y$$
From this we see that Φ_x = φ and Φ_y = ıφ, so Φ(x, y) = F(z) is a primitive. Since we can evaluate the line integral of (φ dx + ıφ dy),
$$\int_{(a,b)}^{(c,d)} (φ\,dx + ıφ\,dy) = Φ(c, d) - Φ(a, b),$$
we can evaluate a definite integral of f in terms of its indefinite integral, F.
$$\int_a^b f(z)\,dz = F(b) - F(a)$$
This is the Fundamental Theorem of Calculus for functions of a complex variable.

10.8 Fundamental Theorem of Calculus via Complex Calculus

Result 10.8.1 Constructing an Indefinite Integral. If f(z) is analytic in a simply connected domain D and a is a point in the domain, then
$$F(z) = \int_a^z f(ζ)\,dζ$$
is analytic in D and is an indefinite integral of f(z), (F′(z) = f(z)).
• 313. Now we consider anti-derivatives and definite integrals without using vector calculus. From real variables we know that we can construct an integral of f(x) with a definite integral.
$$F(x) = \int_a^x f(ξ)\,dξ$$
Now we will prove the analogous property for functions of a complex variable.
$$F(z) = \int_a^z f(ζ)\,dζ$$
Let f(z) be analytic in a simply connected domain D and let a be a point in the domain. To show that F(z) = ∫_a^z f(ζ) dζ is an integral of f(z), we apply the limit definition of differentiation.
$$F'(z) = \lim_{Δz→0} \frac{F(z+Δz) - F(z)}{Δz} = \lim_{Δz→0} \frac{1}{Δz}\left(\int_a^{z+Δz} f(ζ)\,dζ - \int_a^z f(ζ)\,dζ\right) = \lim_{Δz→0} \frac{1}{Δz}\int_z^{z+Δz} f(ζ)\,dζ$$
The integral is independent of path. We choose a straight line connecting z and z + Δz. We add and subtract Δz f(z) = ∫_z^{z+Δz} f(z) dζ from the expression for F′(z).
$$F'(z) = \lim_{Δz→0} \frac{1}{Δz}\left(Δz f(z) + \int_z^{z+Δz} (f(ζ) - f(z))\,dζ\right) = f(z) + \lim_{Δz→0} \frac{1}{Δz}\int_z^{z+Δz} (f(ζ) - f(z))\,dζ$$
Since f(z) is analytic, it is certainly continuous. This means that lim_{ζ→z} (f(ζ) − f(z)) = 0. The limit term vanishes as a result of this continuity.
$$\lim_{Δz→0} \left|\frac{1}{Δz}\int_z^{z+Δz} (f(ζ) - f(z))\,dζ\right| ≤ \lim_{Δz→0} \frac{1}{|Δz|}\,|Δz| \max_{ζ∈[z...z+Δz]} |f(ζ) - f(z)| = \lim_{Δz→0} \max_{ζ∈[z...z+Δz]} |f(ζ) - f(z)| = 0$$
Thus F′(z) = f(z). This result demonstrates the existence of the indefinite integral. We will use this to prove the Fundamental Theorem of Calculus for functions of a complex variable.

Result 10.8.2 Fundamental Theorem of Calculus. If f(z) is analytic in a simply connected domain D then
$$\int_a^b f(z)\,dz = F(b) - F(a)$$
where F(z) is any indefinite integral of f(z).
  • 314. From Result 10.8.1 we know that b a f(z) dz = F(b) + c. (Here we are considering b to be a variable.) The case b = a determines the constant. a a f(z) dz = F(a) + c = 0 c = −F(a) This proves the Fundamental Theorem of Calculus for functions of a complex variable. Example 10.8.1 Consider the integral C 1 z − a dz where C is any closed contour that goes around the point z = a once in the positive direction. We use the Fundamental Theorem of Calculus to evaluate the integral. We start at a point on the contour z − a = r eıθ . When we traverse the contour once in the positive direction we end at the point z − a = r eı(θ+2π) . C 1 z − a dz = [log(z − a)] z−a=r eı(θ+2π) z−a=r eıθ = Log r + ı(θ + 2π) − (Log r + ıθ) = ı2π 294
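Path independence (Result 10.4.1) and the Fundamental Theorem of Calculus invite a direct numerical check: integrate z² from 0 to 1 + ı along two different paths and compare with F(b) − F(a) for F(z) = z³/3. A minimal sketch, assuming numpy:

```python
import numpy as np

def trap(f, t):
    return np.sum(0.5*(f[1:] + f[:-1]) * np.diff(t))

t = np.linspace(0.0, 1.0, 100001)

# Path 1: the straight segment z = (1 + i) t
z1, dz1 = (1+1j)*t, (1+1j)*np.ones_like(t)
# Path 2: the parabola z = t + i t^2
z2, dz2 = t + 1j*t**2, 1.0 + 2j*t

print(trap(z1**2 * dz1, t))   # same value on both paths ...
print(trap(z2**2 * dz2, t))
print((1+1j)**3 / 3)          # ... and equal to F(1+i) - F(0)
```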
• 315. 10.9 Exercises

Exercise 10.1
C is the arc corresponding to the unit semi-circle, |z| = 1, ℑ(z) ≥ 0, directed from z = −1 to z = 1. Evaluate
1. $\int_C z^2\,dz$
2. $\int_C |z^2|\,dz$
3. $\int_C z^2\,|dz|$
4. $\int_C |z^2|\,|dz|$
Hint, Solution

Exercise 10.2
Evaluate
$$\int_{-∞}^{∞} e^{-(ax^2+bx)}\,dx,$$
where a, b ∈ C and ℜ(a) > 0. Use the fact that ∫_{−∞}^{∞} e^{−x²} dx = √π.
Hint, Solution

Exercise 10.3
Evaluate
$$2\int_0^∞ e^{-ax^2}\cos(ωx)\,dx, \quad\text{and}\quad 2\int_0^∞ x\,e^{-ax^2}\sin(ωx)\,dx,$$
where ℜ(a) > 0 and ω ∈ R.
Hint, Solution

Exercise 10.4
Use an admissible parameterization to evaluate
$$\int_C (z - z_0)^n\,dz, \qquad n ∈ Z$$
for the following cases:
1. C is the circle |z − z₀| = 1 traversed in the counterclockwise direction.
2. C is the circle |z − z₀ − ı2| = 1 traversed in the counterclockwise direction.
3. z₀ = 0, n = −1 and C is the closed contour defined by the polar equation
$$r = 2 - \sin^2\left(\frac{θ}{4}\right).$$
Is this result compatible with the results of part (a)?
Hint, Solution
  • 316. Exercise 10.5 1. Use bounding arguments to show that lim R→∞ CR z + Log z z3 + 1 dz = 0 where CR is the positive closed contour |z| = R. 2. Place a bound on C Log z dz where C is the arc of the circle |z| = 2 from −ı2 to ı2. 3. Deduce that C z2 − 1 z2 + 1 dz ≤ πr R2 + 1 R2 − 1 where C is a semicircle of radius R > 1 centered at the origin. Hint, Solution Exercise 10.6 Let C denote the entire positively oriented boundary of the half disk 0 ≤ r ≤ 1, 0 ≤ θ ≤ π in the upper half plane. Consider the branch f(z) = √ r eıθ/2 , − π 2 < θ < 3π 2 of the multi-valued function z1/2 . Show by separate parametric evaluation of the semi-circle and the two radii constituting the boundary that C f(z) dz = 0. Does the Cauchy-Goursat theorem apply here? Hint, Solution Exercise 10.7 Evaluate the following contour integrals using anti-derivatives and justify your approach for each. 1. C ız3 + z−3 dz, where C is the line segment from z1 = 1 + ı to z2 = ı. 2. C sin2 z cos z dz where C is a right-handed spiral from z1 = π to z2 = ıπ. 3. C zı dz = 1 + e−π 2 (1 − ı) with zı = eı Log z , −π < Arg z < π. C joins z1 = −1 and z2 = 1, lying above the real axis except at the end points. (Hint: redefine zı so that it remains unchanged above the real axis and is defined continuously on the real axis.) Hint, Solution 296
  • 317. 10.10 Hints Hint 10.1 Hint 10.2 Let C be the parallelogram in the complex plane with corners at ±R and ±R + b/(2a). Consider the integral of e−az2 on this contour. Take the limit as R → ∞. Hint 10.3 Extend the range of integration to (−∞ . . . ∞). Use eıωx = cos(ωx) + ı sin(ωx) and the result of Exercise 10.2. Hint 10.4 Hint 10.5 Hint 10.6 Hint 10.7 297
  • 318. 10.11 Solutions Solution 10.1 We parameterize the path with z = eıθ , with θ ranging from π to 0. dz = ı eıθ dθ |dz| = |ı eıθ dθ| = |dθ| = −dθ 1. C z2 dz = 0 π eı2θ ı eıθ dθ = 0 π ı eı3θ dθ = 1 3 eı3θ 0 π = 1 3 eı0 − eı3π = 1 3 (1 − (−1)) = 2 3 2. C |z2 | dz = 0 π | eı2θ |ı eıθ dθ = 0 π ı eıθ dθ = eıθ 0 π = 1 − (−1) = 2 3. C z2 |dz| = 0 π eı2θ |ı eıθ dθ| = 0 π − eı2θ dθ = ı 2 eı2θ 0 π = ı 2 (1 − 1) = 0 4. C |z2 | |dz| = 0 π | eı2θ ||ı eıθ dθ| = 0 π −dθ = [−θ] 0 π = π 298
  • 319. Solution 10.2 I = ∞ −∞ e−(ax2 +bx) dx First we complete the square in the argument of the exponential. I = eb2 /(4a) ∞ −∞ e−a(x+b/(2a))2 dx Consider the parallelogram in the complex plane with corners at ±R and ±R +b/(2a). The integral of e−az2 on this contour vanishes as it is an entire function. We relate the integral along one side of the parallelogram to the integrals along the other three sides. R+b/(2a) −R+b/(2a) e−az2 dz = −R −R+b/(2a) + R −R + R+b/(2a) R e−az2 dz. The first and third integrals on the right side vanish as R → ∞ because the integrand vanishes and the lengths of the paths of integration are finite. Taking the limit as R → ∞ we have, ∞+b/(2a) −∞+b/(2a) e−az2 dz ≡ ∞ −∞ e−a(x+b/(2a))2 dx = ∞ −∞ e−ax2 dx. Now we have I = eb2 /(4a) ∞ −∞ e−ax2 dx. We make the change of variables ξ = √ ax. I = eb2 /(4a) 1 √ a ∞ −∞ e−ξ2 dξ ∞ −∞ e−(ax2 +bx) dx = π a eb2 /(4a) Solution 10.3 Consider I = 2 ∞ 0 e−ax2 cos(ωx) dx. Since the integrand is an even function, I = ∞ −∞ e−ax2 cos(ωx) dx. Since e−ax2 sin(ωx) is an odd function, I = ∞ −∞ e−ax2 eıωx dx. We evaluate this integral with the result of Exercise 10.2. 2 ∞ 0 e−ax2 cos(ωx) dx = π a e−ω2 /(4a) Consider I = 2 ∞ 0 x e−ax2 sin(ωx) dx. 299
  • 320. Since the integrand is an even function, I = ∞ −∞ x e−ax2 sin(ωx) dx. Since x e−ax2 cos(ωx) is an odd function, I = −ı ∞ −∞ x e−ax2 eıωx dx. We add a dash of integration by parts to get rid of the x factor. I = −ı − 1 2a e−ax2 eıωx ∞ −∞ + ı ∞ −∞ − 1 2a e−ax2 ıω eıωx dx I = ω 2a ∞ −∞ e−ax2 eıωx dx 2 ∞ 0 x e−ax2 sin(ωx) dx = ω 2a π a e−ω2 /(4a) Solution 10.4 1. We parameterize the contour and do the integration. z − z0 = eıθ , θ ∈ [0 . . . 2π) C (z − z0)n dz = 2π 0 eınθ ı eıθ dθ =    eı(n+1)θ n+1 2π 0 for n = −1 [ıθ] 2π 0 for n = −1 = 0 for n = −1 ı2π for n = −1 2. We parameterize the contour and do the integration. z − z0 = ı2 + eıθ , θ ∈ [0 . . . 2π) C (z − z0)n dz = 2π 0 ı2 + eıθ n ı eıθ dθ =    (ı2+eıθ ) n+1 n+1 2π 0 for n = −1 log ı2 + eıθ 2π 0 for n = −1 = 0 3. We parameterize the contour and do the integration. z = r eıθ , r = 2 − sin2 θ 4 , θ ∈ [0 . . . 4π) The contour encircles the origin twice. See Figure 10.2. C z−1 dz = 4π 0 1 r(θ) eıθ (r (θ) + ır(θ)) eıθ dθ = 4π 0 r (θ) r(θ) + ı dθ = [log(r(θ)) + ıθ] 4π 0 300
  • 321. -1 1 -1 1 Figure 10.2: The contour: r = 2 − sin2 θ 4 . Since r(θ) does not vanish, the argument of r(θ) does not change in traversing the contour and thus the logarithmic term has the same value at the beginning and end of the path. C z−1 dz = ı4π This answer is twice what we found in part (a) because the contour goes around the origin twice. Solution 10.5 1. We parameterize the contour with z = R eıθ and bound the modulus of the integral. CR z + Log z z3 + 1 dz ≤ CR z + Log z z3 + 1 |dz| ≤ 2π 0 R + ln R + π R3 − 1 R dθ = 2πr R + ln R + π R3 − 1 The upper bound on the modulus on the integral vanishes as R → ∞. lim R→∞ 2πr R + ln R + π R3 − 1 = 0 We conclude that the integral vanishes as R → ∞. lim R→∞ CR z + Log z z3 + 1 dz = 0 2. We parameterize the contour and bound the modulus of the integral. z = 2 eıθ , θ ∈ [−π/2 . . . π/2] 301
  • 322. C Log z dz ≤ C |Log z| |dz| = π/2 −π/2 | ln 2 + ıθ|2 dθ ≤ 2 π/2 −π/2 (ln 2 + |θ|) dθ = 4 π/2 0 (ln 2 + θ) dθ = π 2 (π + 4 ln 2) 3. We parameterize the contour and bound the modulus of the integral. z = R eıθ , θ ∈ [θ0 . . . θ0 + π] C z2 − 1 z2 + 1 dz ≤ C z2 − 1 z2 + 1 |dz| ≤ θ0+π θ0 R2 eı2θ −1 R2 eı2θ +1 |R dθ| ≤ R θ0+π θ0 R2 + 1 R2 − 1 dθ = πr R2 + 1 R2 − 1 Solution 10.6 C f(z) dz = 1 0 √ r dr + π 0 eıθ/2 ı eıθ dθ + 0 1 ı √ r (−dr) = 2 3 + − 2 3 − ı 2 3 + ı 2 3 = 0 The Cauchy-Goursat theorem does not apply because the function is not analytic at z = 0, a point on the boundary. Solution 10.7 1. C ız3 + z−3 dz = ız4 4 − 1 2z2 ı 1+ı = 1 2 + ı In this example, the anti-derivative is single-valued. 2. C sin2 z cos z dz = sin3 z 3 ıπ π = 1 3 sin3 (ıπ) − sin3 (π) = −ı sinh3 (π) 3 302
  • 323. Again the anti-derivative is single-valued. 3. We choose the branch of zı with −π/2 < arg(z) < 3π/2. This matches the principal value of zı above the real axis and is defined continuously on the path of integration. C zı dz = z1+ı 1 + ı eı0 eıπ = 1 − ı 2 e(1+ı) log z eı0 eıπ = 1 − ı 2 e0 − e(1+ı)ıπ = 1 + e−π 2 (1 − ı) 303
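The closed forms in Solutions 10.2 and 10.3 can be cross-checked by direct quadrature. The sketch below assumes numpy and scipy; the sample values of a, b and ω are arbitrary.

```python
import numpy as np
from scipy.integrate import quad

a, b, omega = 1.5, 0.7 + 0.4j, 2.0

def cquad(f, lo, hi):
    # quadrature of a complex-valued integrand via real and imaginary parts
    re, _ = quad(lambda x: f(x).real, lo, hi)
    im, _ = quad(lambda x: f(x).imag, lo, hi)
    return re + 1j*im

I1 = cquad(lambda x: np.exp(-(a*x**2 + b*x)), -np.inf, np.inf)
print(I1, np.sqrt(np.pi/a) * np.exp(b**2/(4*a)))          # Solution 10.2

I2, _ = quad(lambda x: 2*np.exp(-a*x**2)*np.cos(omega*x), 0.0, np.inf)
print(I2, np.sqrt(np.pi/a) * np.exp(-omega**2/(4*a)))     # Solution 10.3
```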
  • 325. Chapter 11 Cauchy’s Integral Formula If I were founding a university I would begin with a smoking room; next a dormitory; and then a decent reading room and a library. After that, if I still had more money that I couldn’t use, I would hire a professor and get some text books. - Stephen Leacock 11.1 Cauchy’s Integral Formula Result 11.1.1 Cauchy’s Integral Formula. If f(ζ) is analytic in a com- pact, closed, connected domain D and z is a point in the interior of D then f(z) = 1 ı2π ∂D f(ζ) ζ − z dζ = 1 ı2π k Ck f(ζ) ζ − z dζ. (11.1) Here the set of contours {Ck} make up the positively oriented boundary ∂D of the domain D. More generally, we have f(n) (z) = n! ı2π ∂D f(ζ) (ζ − z)n+1 dζ = n! ı2π k Ck f(ζ) (ζ − z)n+1 dζ. (11.2) Cauchy’s Formula shows that the value of f(z) and all its derivatives in a domain are determined by the value of f(z) on the boundary of the domain. Consider the first formula of the result, Equation 11.1. We deform the contour to a circle of radius δ about the point ζ = z. C f(ζ) ζ − z dζ = Cδ f(ζ) ζ − z dζ = Cδ f(z) ζ − z dζ + Cδ f(ζ) − f(z) ζ − z dζ We use the result of Example 10.8.1 to evaluate the first integral. C f(ζ) ζ − z dζ = ı2πf(z) + Cδ f(ζ) − f(z) ζ − z dζ 305
  • 326. The remaining integral along Cδ vanishes as δ → 0 because f(ζ) is continuous. We demonstrate this with the maximum modulus integral bound. The length of the path of integration is 2πδ. lim δ→0 Cδ f(ζ) − f(z) ζ − z dζ ≤ lim δ→0 (2πδ) 1 δ max |ζ−z|=δ |f(ζ) − f(z)| ≤ lim δ→0 2π max |ζ−z|=δ |f(ζ) − f(z)| = 0 This gives us the desired result. f(z) = 1 ı2π C f(ζ) ζ − z dζ We derive the second formula, Equation 11.2, from the first by differentiating with respect to z. Note that the integral converges uniformly for z in any closed subset of the interior of C. Thus we can differentiate with respect to z and interchange the order of differentiation and integration. f(n) (z) = 1 ı2π dn dzn C f(ζ) ζ − z dζ = 1 ı2π C dn dzn f(ζ) ζ − z dζ = n! ı2π C f(ζ) (ζ − z)n+1 dζ Example 11.1.1 Consider the following integrals where C is the positive contour on the unit circle. For the third integral, the point z = −1 is removed from the contour. 1. C sin cos z5 dz 2. C 1 (z − 3)(3z − 1) dz 3. C √ z dz 1. Since sin cos z5 is an analytic function inside the unit circle, C sin cos z5 dz = 0 2. 1 (z−3)(3z−1) has singularities at z = 3 and z = 1/3. Since z = 3 is outside the contour, only the singularity at z = 1/3 will contribute to the value of the integral. We will evaluate this integral using the Cauchy integral formula. C 1 (z − 3)(3z − 1) dz = ı2π 1 (1/3 − 3)3 = − ıπ 4 3. Since the curve is not closed, we cannot apply the Cauchy integral formula. Note that √ z is single-valued and analytic in the complex plane with a branch cut on the negative real axis. 306
• 327. Thus we use the Fundamental Theorem of Calculus.
$$\int_C \sqrt{z}\,dz = \left[\frac{2}{3}\sqrt{z^3}\right]_{e^{-ıπ}}^{e^{ıπ}} = \frac{2}{3}\left(e^{ı3π/2} - e^{-ı3π/2}\right) = \frac{2}{3}(-ı - ı) = -ı\frac{4}{3}$$

Cauchy's Inequality. Suppose that f(ζ) is analytic in the closed disk |ζ − z| ≤ r. By Cauchy's integral formula,
$$f^{(n)}(z) = \frac{n!}{ı2π}\oint_C \frac{f(ζ)}{(ζ-z)^{n+1}}\,dζ,$$
where C is the circle of radius r centered about the point z. We use this to obtain an upper bound on the modulus of f⁽ⁿ⁾(z).
$$\left|f^{(n)}(z)\right| = \frac{n!}{2π}\left|\oint_C \frac{f(ζ)}{(ζ-z)^{n+1}}\,dζ\right| ≤ \frac{n!}{2π}\,2πr \max_{|ζ-z|=r} \left|\frac{f(ζ)}{(ζ-z)^{n+1}}\right| = \frac{n!}{r^n} \max_{|ζ-z|=r} |f(ζ)|$$

Result 11.1.2 Cauchy's Inequality. If f(ζ) is analytic in |ζ − z| ≤ r then
$$\left|f^{(n)}(z)\right| ≤ \frac{n!\,M}{r^n}$$
where |f(ζ)| ≤ M for all |ζ − z| = r.

Liouville's Theorem. Consider a function f(z) that is analytic and bounded, (|f(z)| ≤ M), in the complex plane. From Cauchy's inequality,
$$|f'(z)| ≤ \frac{M}{r}$$
for any positive r. By taking r → ∞, we see that f′(z) is identically zero for all z. Thus f(z) is a constant.

Result 11.1.3 Liouville's Theorem. If f(z) is analytic and |f(z)| is bounded in the complex plane then f(z) is a constant.

The Fundamental Theorem of Algebra. We will prove that every polynomial of degree n ≥ 1 has exactly n roots, counting multiplicities. First we demonstrate that each such polynomial has at least one root. Suppose that an nth degree polynomial p(z) has no roots. Let the lower bound on the modulus of p(z) be 0 < m ≤ |p(z)|. The function f(z) = 1/p(z) is analytic, (f′(z) = −p′(z)/p²(z)), and bounded, (|f(z)| ≤ 1/m), in the extended complex plane. Using Liouville's theorem we conclude that f(z) and hence p(z) are constants, which yields a contradiction. Therefore every such polynomial p(z) must have at least one root.
  • 328. Now we show that we can factor the root out of the polynomial. Let p(z) = n k=0 pkzk . We note that (zn − cn ) = (z − c) n−1 k=0 cn−1−k zk . Suppose that the nth degree polynomial p(z) has a root at z = c. p(z) = p(z) − p(c) = n k=0 pkzk − n k=0 pkck = n k=0 pk zk − ck = n k=0 pk(z − c) k−1 j=0 ck−1−j zj = (z − c)q(z) Here q(z) is a polynomial of degree n − 1. By induction, we see that p(z) has exactly n roots. Result 11.1.4 Fundamental Theorem of Algebra. Every polynomial of degree n ≥ 1 has exactly n roots, counting multiplicities. Gauss’ Mean Value Theorem. Let f(ζ) be analytic in |ζ−z| ≤ r. By Cauchy’s integral formula, f(z) = 1 ı2π C f(ζ) ζ − z dζ, where C is the circle |ζ − z| = r. We parameterize the contour with ζ = z + r eıθ . f(z) = 1 ı2π 2π 0 f(z + r eıθ ) r eıθ ır eıθ dθ Writing this in the form, f(z) = 1 2πr 2π 0 f(z + r eıθ )r dθ, we see that f(z) is the average value of f(ζ) on the circle of radius r about the point z. Result 11.1.5 Gauss’ Average Value Theorem. If f(ζ) is analytic in |ζ − z| ≤ r then f(z) = 1 2π 2π 0 f(z + r eıθ ) dθ. That is, f(z) is equal to its average value on a circle of radius r about the point z. 308
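Cauchy's integral formula and Gauss' mean value theorem are both easy to confirm numerically; the mean value statement is just Equation 11.1 with the circle parameterized. A minimal sketch, assuming numpy, with f = exp as an arbitrary test function:

```python
import numpy as np

def trap(f, t):
    return np.sum(0.5*(f[1:] + f[:-1]) * np.diff(t))

z0, r = 0.5 + 0.2j, 1.3
t = np.linspace(0.0, 2*np.pi, 200001)
zeta = z0 + r*np.exp(1j*t)             # circle |zeta - z0| = r
dzeta = 1j*r*np.exp(1j*t)
f = np.exp(zeta)

# Cauchy's integral formula for f and f' (n = 0, 1); both should print e^{z0}.
print(trap(f/(zeta - z0) * dzeta, t) / (2j*np.pi))
print(trap(f/(zeta - z0)**2 * dzeta, t) / (2j*np.pi))

# Gauss' mean value theorem: the average of f on the circle equals f(z0).
print(trap(f, t) / (2*np.pi), np.exp(z0))
```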
• 329. Extremum Modulus Theorem. Let f(z) be analytic in a closed, connected domain, D. The extreme values of the modulus of the function must occur on the boundary. If |f(z)| has an interior extremum, then the function is a constant. We will show this with proof by contradiction. Assume that |f(z)| has an interior maximum at the point z = c. This means that there exists a neighborhood of the point z = c for which |f(z)| ≤ |f(c)|. Choose an ε so that the set |z − c| ≤ ε lies inside this neighborhood. First we use Gauss' mean value theorem.
$$f(c) = \frac{1}{2π}\int_0^{2π} f\left(c + ε e^{ıθ}\right)\,dθ$$
We get an upper bound on |f(c)| with the maximum modulus integral bound.
$$|f(c)| ≤ \frac{1}{2π}\int_0^{2π} \left|f\left(c + ε e^{ıθ}\right)\right|\,dθ$$
Since z = c is a maximum of |f(z)| we can get a lower bound on |f(c)|.
$$|f(c)| ≥ \frac{1}{2π}\int_0^{2π} \left|f\left(c + ε e^{ıθ}\right)\right|\,dθ$$
If |f(z)| < |f(c)| for any point on |z − c| = ε, then the continuity of f(z) implies that |f(z)| < |f(c)| in a neighborhood of that point which would make the value of the integral of |f(z)| strictly less than |f(c)|. Thus we conclude that |f(z)| = |f(c)| for all |z − c| = ε. Since we can repeat the above procedure for any circle of radius smaller than ε, |f(z)| = |f(c)| for all |z − c| ≤ ε, i.e. all the points in the disk of radius ε about z = c are also maxima. By recursively repeating this procedure with points in this disk, we see that |f(z)| = |f(c)| for all z ∈ D. This implies that f(z) is a constant in the domain. By reversing the inequalities in the above method we see that the minimum modulus of f(z) must also occur on the boundary.

Result 11.1.6 Extremum Modulus Theorem. Let f(z) be analytic in a closed, connected domain, D. The extreme values of the modulus of the function must occur on the boundary. If |f(z)| has an interior extremum, then the function is a constant.

11.2 The Argument Theorem

Result 11.2.1 The Argument Theorem. Let f(z) be analytic inside and on C except for isolated poles inside the contour. Let f(z) be nonzero on C.
$$\frac{1}{ı2π}\oint_C \frac{f'(z)}{f(z)}\,dz = N - P$$
Here N is the number of zeros and P the number of poles, counting multiplicities, of f(z) inside C.

First we will simplify the problem and consider a function f(z) that has one zero or one pole. Let f(z) be analytic and nonzero inside and on A except for a zero of order n at z = a. Then we can write f(z) = (z − a)ⁿ g(z) where g(z) is analytic and nonzero inside and on A. The integral of f′(z)/f(z)
• 330. along A is
$$\frac{1}{ı2π}\oint_A \frac{f'(z)}{f(z)}\,dz = \frac{1}{ı2π}\oint_A \frac{d}{dz}\left(\log(f(z))\right)dz = \frac{1}{ı2π}\oint_A \frac{d}{dz}\left(\log((z-a)^n) + \log(g(z))\right)dz = \frac{1}{ı2π}\oint_A \frac{d}{dz}\log((z-a)^n)\,dz = \frac{1}{ı2π}\oint_A \frac{n}{z-a}\,dz = n$$
Now let f(z) be analytic and nonzero inside and on B except for a pole of order p at z = b. Then we can write f(z) = g(z)/(z − b)ᵖ where g(z) is analytic and nonzero inside and on B. The integral of f′(z)/f(z) along B is
$$\frac{1}{ı2π}\oint_B \frac{f'(z)}{f(z)}\,dz = \frac{1}{ı2π}\oint_B \frac{d}{dz}\left(\log(f(z))\right)dz = \frac{1}{ı2π}\oint_B \frac{d}{dz}\left(\log((z-b)^{-p}) + \log(g(z))\right)dz = \frac{1}{ı2π}\oint_B \frac{d}{dz}\log((z-b)^{-p})\,dz = \frac{1}{ı2π}\oint_B \frac{-p}{z-b}\,dz = -p$$
Now consider a function f(z) that is analytic inside and on the contour C except for isolated poles at the points b₁, . . . , b_p. Let f(z) be nonzero except at the isolated points a₁, . . . , aₙ. Let the contours A_k, k = 1, . . . , n, be simple, positive contours which contain the zero at a_k but no other poles or zeros of f(z). Likewise, let the contours B_k, k = 1, . . . , p, be simple, positive contours which contain the pole at b_k but no other poles or zeros of f(z). (See Figure 11.1.) By deforming the contour we obtain
$$\oint_C \frac{f'(z)}{f(z)}\,dz = \sum_{j=1}^{n}\oint_{A_j} \frac{f'(z)}{f(z)}\,dz + \sum_{k=1}^{p}\oint_{B_k} \frac{f'(z)}{f(z)}\,dz.$$
From this we obtain Result 11.2.1.

Figure 11.1: Deforming the contour C.
  • 331. 11.3 Rouche’s Theorem Result 11.3.1 Rouche’s Theorem. Let f(z) and g(z) be analytic inside and on a simple, closed contour C. If |f(z)| > |g(z)| on C then f(z) and f(z) + g(z) have the same number of zeros inside C and no zeros on C. First note that since |f(z)| > |g(z)| on C, f(z) is nonzero on C. The inequality implies that |f(z) + g(z)| > 0 on C so f(z) + g(z) has no zeros on C. We well count the number of zeros of f(z) and g(z) using the Argument Theorem, (Result 11.2.1). The number of zeros N of f(z) inside the contour is N = 1 ı2π C f (z) f(z) dz. Now consider the number of zeros M of f(z) + g(z). We introduce the function h(z) = g(z)/f(z). M = 1 ı2π C f (z) + g (z) f(z) + g(z) dz = 1 ı2π C f (z) + f (z)h(z) + f(z)h (z) f(z) + f(z)h(z) dz = 1 ı2π C f (z) f(z) dz + 1 ı2π C h (z) 1 + h(z) dz = N + 1 ı2π [log(1 + h(z))]C = N (Note that since |h(z)| < 1 on C, (1 + h(z)) > 0 on C and the value of log(1 + h(z)) does not not change in traversing the contour.) This demonstrates that f(z) and f(z) + g(z) have the same number of zeros inside C and proves the result. 311
  • 332. 11.4 Exercises Exercise 11.1 What is (arg(sin z)) C where C is the unit circle? Exercise 11.2 Let C be the circle of radius 2 centered about the origin and oriented in the positive direction. Evaluate the following integrals: 1. C sin z z2+5 dz 2. C z z2+1 dz 3. C z2 +1 z dz Exercise 11.3 Let f(z) be analytic and bounded (i.e. |f(z)| < M) for |z| > R, but not necessarily analytic for |z| ≤ R. Let the points α and β lie inside the circle |z| = R. Evaluate C f(z) (z − α)(z − β) dz where C is any closed contour outside |z| = R, containing the circle |z| = R. [Hint: consider the circle at infinity] Now suppose that in addition f(z) is analytic everywhere. Deduce that f(α) = f(β). Exercise 11.4 Using Rouche’s theorem show that all the roots of the equation p(z) = z6 − 5z2 + 10 = 0 lie in the annulus 1 < |z| < 2. Exercise 11.5 Evaluate as a function of t ω = 1 ı2π C ezt z2(z2 + a2) dz, where C is any positively oriented contour surrounding the circle |z| = a. Exercise 11.6 Consider C1, (the positively oriented circle |z| = 4), and C2, (the positively oriented boundary of the square whose sides lie along the lines x = ±1, y = ±1). Explain why C1 f(z) dz = C2 f(z) dz for the functions 1. f(z) = 1 3z2 + 1 2. f(z) = z 1 − ez Exercise 11.7 Show that if f(z) is of the form f(z) = αk zk + αk−1 zk−1 + · · · + α1 z + g(z), k ≥ 1 312
  • 333. where g is analytic inside and on C, (the positive circle |z| = 1), then C f(z) dz = ı2πα1. Exercise 11.8 Show that if f(z) is analytic within and on a simple closed contour C and z0 is not on C then C f (z) z − z0 dz = C f(z) (z − z0)2 dz. Note that z0 may be either inside or outside of C. Exercise 11.9 If C is the positive circle z = eıθ show that for any real constant a, C eaz z dz = ı2π and hence π 0 ea cos θ cos(a sin θ) dθ = π. Exercise 11.10 Use Cauchy-Goursat, the generalized Cauchy integral formula, and suitable extensions to multiply- connected domains to evaluate the following integrals. Be sure to justify your approach in each case. 1. C z z3 − 9 dz where C is the positively oriented rectangle whose sides lie along x = ±5, y = ±3. 2. C sin z z2(z − 4) dz, where C is the positively oriented circle |z| = 2. 3. C (z3 + z + ı) sin z z4 + ız3 dz, where C is the positively oriented circle |z| = π. 4. C ezt z2(z + 1) dz where C is any positive simple closed contour surrounding |z| = 1. Exercise 11.11 Use Liouville’s theorem to prove the following: 1. If f(z) is entire with (f(z)) ≤ M for all z then f(z) is constant. 2. If f(z) is entire with |f(5) (z)| ≤ M for all z then f(z) is a polynomial of degree at most five. Exercise 11.12 Find all functions f(z) analytic in the domain D : |z| < R that satisfy f(0) = eı and |f(z)| ≤ 1 for all z in D. 313
  • 334. Exercise 11.13 Let f(z) = ∞ k=0 k4 z 4 k and evaluate the following contour integrals, providing justification in each case: 1. C cos(ız)f(z) dz C is the positive circle |z − 1| = 1. 2. C f(z) z3 dz C is the positive circle |z| = π. 314
  • 335. 11.5 Hints Hint 11.1 Use the argument theorem. Hint 11.2 Hint 11.3 To evaluate the integral, consider the circle at infinity. Hint 11.4 Hint 11.5 Hint 11.6 Hint 11.7 Hint 11.8 Hint 11.9 Hint 11.10 Hint 11.11 Hint 11.12 Hint 11.13 315
  • 336. 11.6 Solutions Solution 11.1 Let f(z) be analytic inside and on the contour C. Let f(z) be nonzero on the contour. The argument theorem states that 1 ı2π C f (z) f(z) dz = N − P, where N is the number of zeros and P is the number of poles, (counting multiplicities), of f(z) inside C. The theorem is aptly named, as 1 ı2π C f (z) f(z) dz = 1 ı2π [log(f(z))]C = 1 ı2π [log |f(z)| + ı arg(f(z))]C = 1 2π [arg(f(z))]C . Thus we could write the argument theorem as 1 ı2π C f (z) f(z) dz = 1 2π [arg(f(z))]C = N − P. Since sin z has a single zero and no poles inside the unit circle, we have 1 2π arg(sin(z)) C = 1 − 0 arg(sin(z)) C = 2π Solution 11.2 1. Since the integrand sin z z2+5 is analytic inside and on the contour, (the only singularities are at z = ±ı √ 5 and at infinity), the integral is zero by Cauchy’s Theorem. 2. First we expand the integrand in partial fractions. z z2 + 1 = a z − ı + b z + ı a = z z + ı z=ı = 1 2 , b = z z − ı z=−ı = 1 2 Now we can do the integral with Cauchy’s formula. C z z2 + 1 dz = C 1/2 z − ı dz + C 1/2 z + ı dz = 1 2 ı2π + 1 2 ı2π = ı2π 3. C z2 + 1 z dz = C z + 1 z dz = C z dz + C 1 z dz = 0 + ı2π = ı2π 316
  • 337. Solution 11.3 Let C be the circle of radius r, (r > R), centered at the origin. We get an upper bound on the integral with the Maximum Modulus Integral Bound, (Result 10.2.1). C f(z) (z − α)(z − β) dz ≤ 2πr max |z|=r f(z) (z − α)(z − β) ≤ 2πr M (r − |α|)(r − |β|) By taking the limit as r → ∞ we see that the modulus of the integral is bounded above by zero. Thus the integral vanishes. Now we assume that f(z) is analytic and evaluate the integral with Cauchy’s Integral Formula. (We assume that α = β.) C f(z) (z − α)(z − β) dz = 0 C f(z) (z − α)(α − β) dz + C f(z) (β − α)(z − β) dz = 0 ı2π f(α) α − β + ı2π f(β) β − α = 0 f(α) = f(β) Solution 11.4 Consider the circle |z| = 2. On this circle: |z6 | = 64 | − 5z2 + 10| ≤ | − 5z2 | + |10| = 30 Since |z6 | < | − 5z2 + 10| on |z| = 2, p(z) has the same number of roots as z6 in |z| < 2. p(z) has 6 roots in |z| < 2. Consider the circle |z| = 1. On this circle: |10| = 10 |z6 − 5z2 | ≤ |z6 | + | − 5z2 | = 6 Since |z6 − 5z2 | < |10| on |z| = 1, p(z) has the same number of roots as 10 in |z| < 1. p(z) has no roots in |z| < 1. On the unit circle, |p(z)| ≥ |10| − |z6 | − |5z2 | = 4. Thus p(z) has no roots on the unit circle. We conclude that p(z) has exactly 6 roots in 1 < |z| < 2. Solution 11.5 We evaluate the integral with Cauchy’s Integral Formula. ω = 1 ı2π C ezt z2(z2 + a2) dz ω = 1 ı2π C ezt a2z2 + ı ezt 2a3(z − ıa) − ı ezt 2a3(z + ıa) dz ω = d dz ezt a2 z=0 + ı eıat 2a3 − ı e−ıat 2a3 ω = t a2 − sin(at) a3 ω = at − sin(at) a3 317
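Solution 11.5 can be verified by quadrature for sample values of a and t (both arbitrary here; any contour radius larger than a will do). A minimal sketch, assuming numpy:

```python
import numpy as np

a, t = 1.7, 2.3
s = np.linspace(0.0, 2*np.pi, 200001)
z = 2*a*np.exp(1j*s)                    # a contour surrounding |z| = a
dz = 1j*z
f = np.exp(z*t) / (z**2 * (z**2 + a**2)) * dz
omega = np.sum(0.5*(f[1:] + f[:-1]) * np.diff(s)) / (2j*np.pi)
print(omega, (a*t - np.sin(a*t))/a**3)  # agree to quadrature accuracy
```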
  • 338. Solution 11.6 1. We factor the denominator of the integrand. 1 3z2 + 1 = 1 3(z − ı √ 3/3)(z + ı √ 3/3) There are two first order poles which could contribute to the value of an integral on a closed path. Both poles lie inside both contours. See Figure 11.2. We see that C1 can be continuously -4 -2 2 4 -4 -2 2 4 Figure 11.2: The contours and the singularities of 1 3z2+1 . deformed to C2 on the domain where the integrand is analytic. Thus the integrals have the same value. 2. We consider the integrand z 1 − ez . Since ez = 1 has the solutions z = ı2πn for n ∈ Z, the integrand has singularities at these points. There is a removable singularity at z = 0 and first order poles at z = ı2πn for n ∈ Z{0}. Each contour contains only the singularity at z = 0. See Figure 11.3. We see that -6 -4 -2 2 4 6 -6 -4 -2 2 4 6 Figure 11.3: The contours and the singularities of z 1−ez . C1 can be continuously deformed to C2 on the domain where the integrand is analytic. Thus the integrals have the same value. Solution 11.7 First we write the integral of f(z) as a sum of integrals. C f(z) dz = C αk zk + αk−1 zk−1 + · · · + α1 z + g(z) dz = C αk zk dz + C αk−1 zk−1 dz + · · · + C α1 z dz + C g(z) dz 318
  • 339. The integral of g(z) vanishes by the Cauchy-Goursat theorem. We evaluate the integral of α1/z with Cauchy’s integral formula. C α1 z dz = ı2πα1 We evaluate the remaining αn/zn terms with anti-derivatives. Each of these integrals vanish. C f(z) dz = C αk zk dz + C αk−1 zk−1 dz + · · · + C α1 z dz + C g(z) dz = − αk (k − 1)zk−1 C + · · · + − α2 z C + ı2πα1 = ı2πα1 Solution 11.8 We evaluate the integrals with the Cauchy integral formula. (z0 is required to not be on C so the integrals exist.) C f (z) z − z0 dz = ı2πf (z0) if z0 is inside C 0 if z0 is outside C C f(z) (z − z0)2 dz = ı2π 1! f (z0) if z0 is inside C 0 if z0 is outside C Thus we see that the integrals are equal. Solution 11.9 First we evaluate the integral using the Cauchy Integral Formula. C eaz z dz = [eaz ]z=0 = ı2π Next we parameterize the path of integration. We use the periodicity of the cosine and sine to simplify the integral. C eaz z dz = ı2π 2π 0 ea eıθ eıθ ı eıθ dθ = ı2π 2π 0 ea(cos θ+ı sin θ) dθ = 2π 2π 0 ea cos θ (cos(sin θ) + ı sin(sin θ)) dθ = 2π 2π 0 ea cos θ cos(sin θ) dθ = 2π π 0 ea cos θ cos(sin θ) dθ = π Solution 11.10 1. We factor the integrand to see that there are singularities at the cube roots of 9. z z3 − 9 = z z − 3 √ 9 z − 3 √ 9 eı2π/3 z − 3 √ 9 e−ı2π/3 Let C1, C2 and C3 be contours around z = 3 √ 9, z = 3 √ 9 eı2π/3 and z = 3 √ 9 e−ı2π/3 . See Figure 11.4. Let D be the domain between C, C1 and C2, i.e. the boundary of D is the union 319
  • 340. of C, −C1 and −C2. Since the integrand is analytic in D, the integral along the boundary of D vanishes. ∂D z z3 − 9 dz = C z z3 − 9 dz + −C1 z z3 − 9 dz + −C2 z z3 − 9 dz + −C3 z z3 − 9 dz = 0 From this we see that the integral along C is equal to the sum of the integrals along C1, C2 and C3. (We could also see this by deforming C onto C1, C2 and C3.) C z z3 − 9 dz = C1 z z3 − 9 dz + C2 z z3 − 9 dz + C3 z z3 − 9 dz We use the Cauchy Integral Formula to evaluate the integrals along C1, C2 and C2. C z z3 − 9 dz = C1 z z − 3 √ 9 z − 3 √ 9 eı2π/3 z − 3 √ 9 e−ı2π/3 dz + C2 z z − 3 √ 9 z − 3 √ 9 eı2π/3 z − 3 √ 9 e−ı2π/3 dz + C3 z z − 3 √ 9 z − 3 √ 9 eı2π/3 z − 3 √ 9 e−ı2π/3 dz = ı2π z z − 3 √ 9 eı2π/3 z − 3 √ 9 e−ı2π/3 z= 3√ 9 + ı2π z z − 3 √ 9 z − 3 √ 9 e−ı2π/3 z= 3√ 9 eı2π/3 + ı2π z z − 3 √ 9 z − 3 √ 9 eı2π/3 z= 3√ 9 e−ı2π/3 = ı2π3−5/3 1 − eıπ/3 + eı2π/3 = 0 -6 -4 -2 2 4 6 -4 -2 2 4 C C1 C2 C3 Figure 11.4: The contours for z z3−9 . 2. The integrand has singularities at z = 0 and z = 4. Only the singularity at z = 0 lies inside the contour. We use the Cauchy Integral Formula to evaluate the integral. C sin z z2(z − 4) dz = ı2π d dz sin z z − 4 z=0 = ı2π cos z z − 4 − sin z (z − 4)2 z=0 = − ıπ 2 320
  • 341. 3. We factor the integrand to see that there are singularities at z = 0 and z = −ı. C (z3 + z + ı) sin z z4 + ız3 dz = C (z3 + z + ı) sin z z3(z + ı) dz Let C1 and C2 be contours around z = 0 and z = −ı. See Figure 11.5. Let D be the domain between C, C1 and C2, i.e. the boundary of D is the union of C, −C1 and −C2. Since the integrand is analytic in D, the integral along the boundary of D vanishes. ∂D = C + −C1 + −C2 = 0 From this we see that the integral along C is equal to the sum of the integrals along C1 and C2. (We could also see this by deforming C onto C1 and C2.) C = C1 + C2 We use the Cauchy Integral Formula to evaluate the integrals along C1 and C2. C (z3 + z + ı) sin z z4 + ız3 dz = C1 (z3 + z + ı) sin z z3(z + ı) dz + C2 (z3 + z + ı) sin z z3(z + ı) dz = ı2π (z3 + z + ı) sin z z3 z=−ı + ı2π 2! d2 dz2 (z3 + z + ı) sin z z + ı z=0 = ı2π(−ı sinh(1)) + ıπ 2 3z2 + 1 z + ı − z3 + z + ı (z + ı)2 cos z + 6z z + ı − 2(3z2 + 1) (z + ı)2 + 2(z3 + z + ı) (z + ı)3 − z3 + z + ı z + ı sin z z=0 = 2π sinh(1) -4 -2 2 4 -4 -2 2 4 CC1 C2 Figure 11.5: The contours for (z3 +z+ı) sin z z4+ız3 . 4. We consider the integral C ezt z2(z + 1) dz. There are singularities at z = 0 and z = −1. 321
  • 342. Let C1 and C2 be contours around z = 0 and z = −1. See Figure 11.6. We deform C onto C1 and C2. C = C1 + C2 We use the Cauchy Integral Formula to evaluate the integrals along C1 and C2. C ezt z2(z + 1) dz = C1 ezt z2(z + 1) dz + C1 ezt z2(z + 1) dz = ı2π ezt z2 z=−1 + ı2π d dz ezt (z + 1) z=0 = ı2π e−t +ı2π t ezt (z + 1) − ezt (z + 1)2 z=0 = ı2π(e−t +t − 1) -2 -1 1 2 -2 -1 1 2 CC1 C2 Figure 11.6: The contours for ezt z2(z+1) . Solution 11.11 Liouville’s Theorem states that if f(z) is analytic and bounded in the complex plane then f(z) is a constant. 1. Since f(z) is analytic, ef(z) is analytic. The modulus of ef(z) is bounded. ef(z) = e (f(z)) ≤ eM By Liouville’s Theorem we conclude that ef(z) is constant and hence f(z) is constant. 2. We know that f(z) is entire and |f(5) (z)| is bounded in the complex plane. Since f(z) is analytic, so is f(5) (z). We apply Liouville’s Theorem to f(5) (z) to conclude that it is a constant. Then we integrate to determine the form of f(z). f(z) = c5z5 + c4z4 + c3z3 + c2z2 + c1z + c0 Here c5 is the value of f(5) (z) and c4 through c0 are constants of integration. We see that f(z) is a polynomial of degree at most five. Solution 11.12 For this problem we will use the Extremum Modulus Theorem: Let f(z) be analytic in a closed, connected domain, D. The extreme values of the modulus of the function must occur on the boundary. If |f(z)| has an interior extrema, then the function is a constant. Since |f(z)| has an interior extrema, |f(0)| = | eı | = 1, we conclude that f(z) is a constant on D. Since we know the value at z = 0, we know that f(z) = eı . 322
  • 343. Solution 11.13 First we determine the radius of convergence of the series with the ratio test. R = lim k→∞ k4 /4k (k + 1)4/4k+1 = 4 lim k→∞ k4 (k + 1)4 = 4 lim k→∞ 24 24 = 4 The series converges absolutely for |z| < 4. 1. Since the integrand is analytic inside and on the contour of integration, the integral vanishes by Cauchy’s Theorem. 2. C f(z) z3 dz = C ∞ k=0 k4 z 4 k 1 z3 dz = C ∞ k=1 k4 4k zk−3 dz = C ∞ k=−2 (k + 3)4 4k+3 zk dz = C 1 4z2 dz + C 1 z dz + C ∞ k=0 (k + 3)4 4k+3 zk dz We can parameterize the first integral to show that it vanishes. The second integral has the value ı2π by the Cauchy-Goursat Theorem. The third integral vanishes by Cauchy’s Theorem as the integrand is analytic inside and on the contour. C f(z) z3 dz = ı2π 323
• 345. Chapter 12 Series and Convergence

You are not thinking. You are merely being logical.
- Niels Bohr

12.1 Series of Constants

12.1.1 Definitions

Convergence of Sequences. The infinite sequence {aₙ}ₙ₌₀^∞ ≡ a₀, a₁, a₂, . . . is said to converge if
$$\lim_{n→∞} a_n = a$$
for some constant a. If the limit does not exist, then the sequence diverges. Recall the definition of the limit in the above formula: For any ε > 0 there exists an N ∈ Z such that |a − aₙ| < ε for all n > N.

Example 12.1.1 The sequence {sin(n)} is divergent. The sequence is bounded above and below, but boundedness does not imply convergence.

Cauchy Convergence Criterion. Note that there is something a little fishy about the above definition. We should be able to say if a sequence converges without first finding the constant to which it converges. We fix this problem with the Cauchy convergence criterion. A sequence {aₙ} converges if and only if for any ε > 0 there exists an N such that |aₙ − aₘ| < ε for all n, m > N. The Cauchy convergence criterion is equivalent to the definition we had before. For some problems it is handier to use. Now we don't need to know the limit of a sequence to show that it converges.

Convergence of Series. The series ∑ₙ₌₀^∞ aₙ converges if the sequence of partial sums,
$$S_N = \sum_{n=0}^{N-1} a_n,$$
converges. That is,
$$\lim_{N→∞} S_N = \lim_{N→∞} \sum_{n=0}^{N-1} a_n = \text{constant}.$$
If the limit does not exist, then the series diverges. A necessary condition for the convergence of a series is that
$$\lim_{n→∞} a_n = 0.$$
(See Exercise 12.1.) Otherwise the sequence of partial sums would not converge.

Example 12.1.2 The series ∑ₙ₌₀^∞ (−1)ⁿ = 1 − 1 + 1 − 1 + · · · is divergent because the sequence of partial sums, {S_N} = 1, 0, 1, 0, 1, 0, . . . is divergent.
  • 346. Tail of a Series. An infinite series, ∞ n=0 an, converges or diverges with its tail. That is, for fixed N, ∞ n=0 an converges if and only if ∞ n=N an converges. This is because the sum of the first N terms of a series is just a number. Adding or subtracting a number to a series does not change its convergence. Absolute Convergence. The series ∞ n=0 an converges absolutely if ∞ n=0 |an| converges. Abso- lute convergence implies convergence. If a series is convergent, but not absolutely convergent, then it is said to be conditionally convergent. The terms of an absolutely convergent series can be rearranged in any order and the series will still converge to the same sum. This is not true of conditionally convergent series. Rearranging the terms of a conditionally convergent series may change the sum. In fact, the terms of a conditionally convergent series may be rearranged to obtain any desired sum. Example 12.1.3 The alternating harmonic series, 1 − 1 2 + 1 3 − 1 4 + · · · , converges, (Exercise 12.4). Since 1 + 1 2 + 1 3 + 1 4 + · · · diverges, (Exercise 12.5), the alternating harmonic series is not absolutely convergent. Thus the terms can be rearranged to obtain any sum, (Exercise 12.6). Finite Series and Residuals. Consider the series f(z) = ∞ n=0 an(z). We will denote the sum of the first N terms in the series as SN (z) = N−1 n=0 an(z). We will denote the residual after N terms as RN (z) ≡ f(z) − SN (z) = ∞ n=N an(z). 12.1.2 Special Series Geometric Series. One of the most important series in mathematics is the geometric series, 1 ∞ n=0 zn = 1 + z + z2 + z3 + · · · . The series clearly diverges for |z| ≥ 1 since the terms do not vanish as n → ∞. Consider the partial sum, SN (z) ≡ N−1 n=0 zn , for |z| < 1. (1 − z)SN (z) = (1 − z) N−1 n=0 zn = N−1 n=0 zn − N n=1 zn = 1 + z + · · · + zN−1 − z + z2 + · · · + zN = 1 − zN 1 The series is so named because the terms grow or decay geometrically. Each term in the series is a constant times the previous term. 326
  • 347. N−1 n=0 zn = 1 − zN 1 − z → 1 1 − z as N → ∞. The limit of the partial sums is 1 1−z . ∞ n=0 zn = 1 1 − z for |z| < 1 Harmonic Series. Another important series is the harmonic series, ∞ n=1 1 nα = 1 + 1 2α + 1 3α + · · · . The series is absolutely convergent for (α) > 1 and absolutely divergent for (α) ≤ 1, (see the Exercise 12.8). The Riemann zeta function ζ(α) is defined as the sum of the harmonic series. ζ(α) = ∞ n=1 1 nα The alternating harmonic series is ∞ n=1 (−1)n+1 nα = 1 − 1 2α + 1 3α − 1 4α + · · · . Again, the series is absolutely convergent for (α) > 1 and absolutely divergent for (α) ≤ 1. 12.1.3 Convergence Tests The Comparison Test. Result 12.1.1 The series of positive terms an converges if there exists a convergent series bn such that an ≤ bn for all n. Similarly, an diverges if there exists a divergent series bn such that an ≥ bn for all n. Example 12.1.4 Consider the series ∞ n=1 1 2n2 . We can rewrite this as ∞ n=1 n a perfect square 1 2n . Then by comparing this series to the geometric series, ∞ n=1 1 2n = 1, we see that it is convergent. 327
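A few lines of code make the geometric limit concrete; the sample point z is arbitrary with |z| < 1, and the residual decays like |z|^N.

```python
z = 0.5 + 0.3j
S = 0.0
for n in range(60):
    S += z**n                  # S_N = 1 + z + ... + z^{N-1}
print(S, 1/(1 - z))            # partial sum vs. the limit 1/(1 - z)
print(abs(S - 1/(1 - z)))      # residual z^N/(1 - z), tiny for N = 60
```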
  • 348. Integral Test. Result 12.1.2 If the coefficients an of a series ∞ n=0 an are monotonically decreasing and can be extended to a monotonically decreasing function of the continuous variable x, a(x) = an for x ∈ Z0+ , then the series converges or diverges with the integral ∞ 0 a(x) dx. Example 12.1.5 Consider the series ∞ n=1 1 n2 . Define the functions sl(x) and sr(x), (left and right), sl(x) = 1 ( x ) 2 , sr(x) = 1 ( x ) 2 . Recall that x is the greatest integer function, the greatest integer which is less than or equal to x. x is the least integer function, the least integer greater than or equal to x. We can express the series as integrals of these functions. ∞ n=1 1 n2 = ∞ 0 sl(x) dx = ∞ 1 sr(x) dx In Figure 12.1 these functions are plotted against y = 1/x2 . From the graph, it is clear that we can obtain a lower and upper bound for the series. ∞ 1 1 x2 dx ≤ ∞ n=1 1 n2 ≤ 1 + ∞ 1 1 x2 dx 1 ≤ ∞ n=1 1 n2 ≤ 2 1 2 3 4 1 1 2 3 4 1 Figure 12.1: Upper and Lower bounds to ∞ n=1 1/n2 . In general, we have ∞ m a(x) dx ≤ ∞ n=m an ≤ am + ∞ m a(x) dx. Thus we see that the sum converges or diverges with the integral. 328
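The bounds 1 ≤ ∑ 1/n² ≤ 2 from Example 12.1.5 are easy to observe numerically (the exact sum is π²/6, a fact not needed for the test itself). A plain-Python sketch:

```python
import math

s = sum(1.0/n**2 for n in range(1, 1_000_000))
print(1.0, "<=", s, "<=", 2.0)   # the integral-test bounds
print(math.pi**2 / 6)            # the exact sum, ~1.6449
```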
The Ratio Test.

Result 12.1.3 The series ∑ a_n converges absolutely if

    lim_{n→∞} |a_{n+1}/a_n| < 1.

If the limit is greater than unity, then the series diverges. If the limit is unity, the test fails.

If the limit is greater than unity, then the terms are eventually increasing with n. Since the terms do not vanish, the sum is divergent. If the limit is less than unity, then there exists some N such that

    |a_{n+1}/a_n| ≤ r < 1 for all n ≥ N.

From this we can show that ∑_{n=0}^∞ a_n is absolutely convergent by comparing it to the geometric series.

    ∑_{n=N}^∞ |a_n| ≤ |a_N| ∑_{n=0}^∞ r^n = |a_N| · 1/(1 − r)

Example 12.1.6 Consider the series, ∑_{n=1}^∞ e^n/n!. We apply the ratio test to test for absolute convergence.

    lim_{n→∞} |a_{n+1}/a_n| = lim_{n→∞} (e^{n+1} n!)/(e^n (n + 1)!) = lim_{n→∞} e/(n + 1) = 0

The series is absolutely convergent.

Example 12.1.7 Consider the series, ∑_{n=1}^∞ 1/n², which we know to be absolutely convergent. We apply the ratio test.

    lim_{n→∞} |a_{n+1}/a_n| = lim_{n→∞} (1/(n + 1)²)/(1/n²) = lim_{n→∞} n²/(n² + 2n + 1) = lim_{n→∞} 1/(1 + 2/n + 1/n²) = 1

The test fails to predict the absolute convergence of the series.
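A numeric illustration of the two examples (a sketch; the sample values of n are arbitrary):

    import math

    # Successive-term ratios |a_{n+1}/a_n| for Examples 12.1.6 and 12.1.7.
    # They approach 0 for e^n/n! (absolute convergence) and approach 1 for
    # 1/n^2 (the test is inconclusive, though the series converges).

    def ratio(a, n):
        return abs(a(n + 1) / a(n))

    for n in (5, 20, 80):
        print(n,
              ratio(lambda k: math.e**k / math.factorial(k), n),  # -> 0
              ratio(lambda k: 1.0 / k**2, n))                     # -> 1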
The Root Test.

Result 12.1.4 The series ∑ a_n converges absolutely if

    lim_{n→∞} |a_n|^{1/n} < 1.

If the limit is greater than unity, then the series diverges. If the limit is unity, the test fails.

More generally, we can test whether

    lim sup |a_n|^{1/n} < 1.

If the limit is greater than unity, then the terms in the series do not vanish as n → ∞. This implies that the sum does not converge. If the limit is less than unity, then there exists some N such that

    |a_n|^{1/n} ≤ r < 1 for all n ≥ N.

We bound the tail of the series of |a_n|.

    ∑_{n=N}^∞ |a_n| = ∑_{n=N}^∞ (|a_n|^{1/n})^n ≤ ∑_{n=N}^∞ r^n = r^N/(1 − r)

Thus ∑_{n=0}^∞ a_n is absolutely convergent.

Example 12.1.8 Consider the series

    ∑_{n=0}^∞ n^a b^n,

where a and b are real constants. We use the root test to check for absolute convergence.

    lim_{n→∞} |n^a b^n|^{1/n} < 1
    |b| lim_{n→∞} n^{a/n} < 1
    |b| exp( lim_{n→∞} (a ln n)/n ) < 1
    |b| e⁰ < 1
    |b| < 1

Thus we see that the series converges absolutely for |b| < 1. Note that the value of a does not affect the absolute convergence.

Example 12.1.9 Consider the absolutely convergent series,

    ∑_{n=1}^∞ 1/n².
We apply the root test.

    lim_{n→∞} |a_n|^{1/n} = lim_{n→∞} (1/n²)^{1/n} = lim_{n→∞} n^{−2/n} = lim_{n→∞} e^{−(2/n) ln n} = e⁰ = 1

It fails to predict the convergence of the series.

Raabe's Test

Result 12.1.5 The series ∑ a_n converges absolutely if

    lim_{n→∞} n(1 − |a_{n+1}/a_n|) > 1.

If the limit is less than unity, then the series diverges or converges conditionally. If the limit is unity, the test fails.

Gauss' Test

Result 12.1.6 Consider the series ∑ a_n. If

    |a_{n+1}/a_n| = 1 − L/n + b_n/n²

where b_n is bounded then the series converges absolutely if L > 1. Otherwise the series diverges or converges conditionally.

12.2 Uniform Convergence

Continuous Functions. A function f(z) is continuous in a closed domain if, given any ε > 0, there exists a δ > 0 such that |f(z) − f(ζ)| < ε for all |z − ζ| < δ in the domain. An equivalent definition is that f(z) is continuous in a closed domain if

    lim_{ζ→z} f(ζ) = f(z)

for all z in the domain.

Convergence. Consider a series in which the terms are functions of z, ∑_{n=0}^∞ a_n(z). The series is convergent in a domain if the series converges for each point z in the domain. We can then define the function f(z) = ∑_{n=0}^∞ a_n(z). We can state the convergence criterion as: For any given ε > 0 there exists a function N(z) such that

    |f(z) − S_{N(z)}(z)| = |f(z) − ∑_{n=0}^{N(z)−1} a_n(z)| < ε

for all z in the domain. Note that the rate of convergence, i.e. the number of terms, N(z), required for the absolute error to be less than ε, is a function of z.
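For the geometric series this z-dependence is explicit: the error after N terms is |z|^N/|1 − z|, so the N needed for a fixed tolerance blows up as |z| → 1. A Python sketch (the tolerance ε = 10⁻³ is an arbitrary choice):

    # Smallest N with |f(z) - S_N(z)| < eps for the geometric series, where
    # the exact error is |z|^N / |1 - z|.  N grows without bound as |z| -> 1,
    # so the convergence on |z| < 1 is pointwise but not uniform.

    def N_required(z, eps=1e-3):
        N = 0
        while abs(z)**N / abs(1 - z) >= eps:
            N += 1
        return N

    for z in (0.5, 0.9, 0.99, 0.999):
        print(z, N_required(z))   # roughly 11, 88, 1146, 13809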
Uniform Convergence. Consider a series ∑_{n=0}^∞ a_n(z) that is convergent in some domain. If the rate of convergence is independent of z then the series is said to be uniformly convergent. Stating this a little more mathematically, the series is uniformly convergent in the domain if for any given ε > 0 there exists an N, independent of z, such that

    |f(z) − S_N(z)| = |f(z) − ∑_{n=0}^{N−1} a_n(z)| < ε

for all z in the domain.

12.2.1 Tests for Uniform Convergence

Weierstrass M-test. The Weierstrass M-test is useful in determining if a series is uniformly convergent. The series ∑_{n=0}^∞ a_n(z) is uniformly and absolutely convergent in a domain if there exists a convergent series of positive terms ∑_{n=0}^∞ M_n such that |a_n(z)| ≤ M_n for all z in the domain. This condition first implies that the series is absolutely convergent for all z in the domain. The condition |a_n(z)| ≤ M_n also ensures that the rate of convergence is independent of z, which is the criterion for uniform convergence.

Note that absolute convergence and uniform convergence are independent. A series of functions may be absolutely convergent without being uniformly convergent or vice versa. The Weierstrass M-test is a sufficient but not a necessary condition for uniform convergence. The Weierstrass M-test can succeed only if the series is uniformly and absolutely convergent.

Example 12.2.1 The series

    f(x) = ∑_{n=1}^∞ sin x/(n(n + 1))

is uniformly and absolutely convergent for all real x because |sin x/(n(n + 1))| < 1/n² and ∑_{n=1}^∞ 1/n² converges.

Dirichlet Test. Consider a sequence of monotone decreasing, positive constants c_n with limit zero. If all the partial sums of a_n(z) are bounded in some closed domain, that is

    |∑_{n=1}^N a_n(z)| < constant for all N,

then ∑_{n=1}^∞ c_n a_n(z) is uniformly convergent in that closed domain. Note that the Dirichlet test does not imply that the series is absolutely convergent.

Example 12.2.2 Consider the series,

    ∑_{n=1}^∞ sin(nx)/n.

We cannot use the Weierstrass M-test to determine if the series is uniformly convergent on an interval. While it is easy to bound the terms with |sin(nx)/n| ≤ 1/n, the sum ∑_{n=1}^∞ 1/n does not converge. Thus we will try the Dirichlet test. Consider the sum ∑_{n=1}^{N−1} sin(nx). This sum can be evaluated in closed form. (See Exercise 12.9.)

    ∑_{n=1}^{N−1} sin(nx) = 0 for x = 2πk,
    ∑_{n=1}^{N−1} sin(nx) = (cos(x/2) − cos((N − 1/2)x))/(2 sin(x/2)) for x ≠ 2πk.
The closed form is unbounded near x = 2πk, k ∈ Z, so no bound on the partial sums holds there. The partial sums are bounded on any closed interval that does not contain an integer multiple of 2π. By the Dirichlet test, the sum ∑_{n=1}^∞ sin(nx)/n is uniformly convergent on any such closed interval. The series may not be uniformly convergent in neighborhoods of x = 2kπ.

12.2.2 Uniform Convergence and Continuous Functions.

Consider a series f(z) = ∑_{n=1}^∞ a_n(z) that is uniformly convergent in some domain and whose terms a_n(z) are continuous functions. Since the series is uniformly convergent, for any given ε > 0 there exists an N such that |R_N| < ε for all z in the domain. Since the finite sum S_N is continuous, for that ε there exists a δ > 0 such that |S_N(z) − S_N(ζ)| < ε for all ζ in the domain satisfying |z − ζ| < δ. We combine these two results to show that f(z) is continuous.

    |f(z) − f(ζ)| = |S_N(z) + R_N(z) − S_N(ζ) − R_N(ζ)|
                  ≤ |S_N(z) − S_N(ζ)| + |R_N(z)| + |R_N(ζ)|
                  < 3ε for |z − ζ| < δ

Result 12.2.1 A uniformly convergent series of continuous terms represents a continuous function.

Example 12.2.3 Again consider ∑_{n=1}^∞ sin(nx)/n. In Example 12.2.2 we showed that the convergence is uniform in any closed interval that does not contain an integer multiple of 2π. In Figure 12.2 is a plot of the first 10 and then 50 terms in the series and finally the function to which the series converges. We see that the function has jump discontinuities at x = 2kπ and is continuous on any closed interval not containing one of those points.

Figure 12.2: Ten, Fifty and all the Terms of ∑_{n=1}^∞ sin(nx)/n.

12.3 Uniformly Convergent Power Series

Power Series. Power series are series of the form

    ∑_{n=0}^∞ a_n(z − z₀)^n.

Domain of Convergence of a Power Series. Consider the series ∑_{n=0}^∞ a_n z^n. Let the series converge at some point z₀. Then |a_n z₀^n| is bounded by some constant A for all n, so

    |a_n z^n| = |a_n z₀^n| |z/z₀|^n < A |z/z₀|^n
This comparison test shows that the series converges absolutely for all z satisfying |z| < |z₀|.

Suppose that the series diverges at some point z₁. Then the series could not converge for any |z| > |z₁| since this would imply convergence at z₁. Thus there exists some circle in the z plane such that the power series converges absolutely inside the circle and diverges outside the circle.

Result 12.3.1 The domain of convergence of a power series is a circle in the complex plane.

Radius of Convergence of Power Series. Consider a power series

    f(z) = ∑_{n=0}^∞ a_n z^n.

Applying the ratio test, we see that the series converges if

    lim_{n→∞} |a_{n+1} z^{n+1}|/|a_n z^n| < 1
    lim_{n→∞} (|a_{n+1}|/|a_n|) |z| < 1
    |z| < lim_{n→∞} |a_n|/|a_{n+1}|

Result 12.3.2 Ratio formula. The radius of convergence of the power series ∑_{n=0}^∞ a_n z^n is

    R = lim_{n→∞} |a_n|/|a_{n+1}|

when the limit exists.

Result 12.3.3 Cauchy-Hadamard formula. The radius of convergence of the power series ∑_{n=0}^∞ a_n z^n is

    R = 1/(lim sup |a_n|^{1/n}).

Absolute Convergence of Power Series. Consider a power series

    f(z) = ∑_{n=0}^∞ a_n z^n
that converges for z = z₀. Let M be the value of the greatest term, |a_n z₀^n|. Consider any point z such that |z| < |z₀|. We can bound the residual of ∑_{n=0}^∞ |a_n z^n|,

    R_N(z) = ∑_{n=N}^∞ |a_n z^n|
           = ∑_{n=N}^∞ |a_n z^n / (a_n z₀^n)| |a_n z₀^n|
           ≤ M ∑_{n=N}^∞ |z/z₀|^n

Since |z/z₀| < 1, this is a convergent geometric series.

           = M |z/z₀|^N · 1/(1 − |z/z₀|)
           → 0 as N → ∞

Thus the power series is absolutely convergent for |z| < |z₀|.

Result 12.3.4 If the power series ∑_{n=0}^∞ a_n z^n converges for z = z₀, then the series converges absolutely for |z| < |z₀|.

Example 12.3.1 Find the radii of convergence of the following series.

1. ∑_{n=1}^∞ n z^n
2. ∑_{n=1}^∞ n! z^n
3. ∑_{n=1}^∞ n! z^{n!}

1. We apply the ratio test to determine the radius of convergence.

    R = lim_{n→∞} |a_n/a_{n+1}| = lim_{n→∞} n/(n + 1) = 1

The series converges absolutely for |z| < 1.

2. We apply the ratio test to the series.

    R = lim_{n→∞} n!/(n + 1)! = lim_{n→∞} 1/(n + 1) = 0

The series has a vanishing radius of convergence. It converges only for z = 0.
3. Again we apply the ratio test to determine the radius of convergence.

    lim_{n→∞} |(n + 1)! z^{(n+1)!} / (n! z^{n!})| < 1
    lim_{n→∞} (n + 1)|z|^{(n+1)!−n!} < 1
    lim_{n→∞} (n + 1)|z|^{n·n!} < 1
    lim_{n→∞} (ln(n + 1) + n·n! ln |z|) < 0
    ln |z| < lim_{n→∞} −ln(n + 1)/(n·n!)
    ln |z| < 0
    |z| < 1

The series converges absolutely for |z| < 1. Alternatively we could determine the radius of convergence of the series with the comparison test. Each term of the first series below also appears in the second (as the term of index n!), so

    ∑_{n=1}^∞ |n! z^{n!}| ≤ ∑_{n=1}^∞ |n z^n|.

∑_{n=1}^∞ n z^n has a radius of convergence of 1. Thus the series must have a radius of convergence of at least 1. Note that if |z| > 1 then the terms in the series do not vanish as n → ∞. Thus the series must diverge for all |z| ≥ 1. Again we see that the radius of convergence is 1.

Uniform Convergence of Power Series. Consider a power series ∑_{n=0}^∞ a_n z^n that converges in the disk |z| < r₀. The sum converges absolutely for z in the closed disk, |z| ≤ r < r₀. Since |a_n z^n| ≤ |a_n r^n| and ∑_{n=0}^∞ |a_n r^n| converges, the power series is uniformly convergent in |z| ≤ r < r₀.

Result 12.3.5 If the power series ∑_{n=0}^∞ a_n z^n converges for |z| < r₀ then the series converges uniformly for |z| ≤ r < r₀.

Example 12.3.2 Convergence and Uniform Convergence. Consider the series

    log(1 − z) = −∑_{n=1}^∞ z^n/n.

This series converges for |z| ≤ 1, z ≠ 1. Is the series uniformly convergent in this domain? The residual after N terms R_N is

    R_N(z) = ∑_{n=N+1}^∞ z^n/n.

We can get a lower bound on the absolute value of the residual for real, positive z.

    |R_N(x)| = ∑_{n=N+1}^∞ x^n/n ≥ ∫_{N+1}^∞ (x^α/α) dα = −Ei((N + 1) ln x)

The exponential integral function, Ei(z), is defined

    Ei(z) = −∫_{−z}^∞ (e^{−t}/t) dt.
The exponential integral function is plotted in Figure 12.3. Since Ei(z) diverges as z → 0, by choosing x sufficiently close to 1 the residual can be made arbitrarily large. Thus this series is not uniformly convergent in the domain |z| ≤ 1, z ≠ 1. The series is uniformly convergent for |z| ≤ r < 1.

Figure 12.3: The Exponential Integral Function.

Analyticity. Recall that a sufficient condition for the analyticity of a function f(z) in a domain is that ∮_C f(z) dz = 0 for all simple, closed contours in the domain. Consider a power series f(z) = ∑_{n=0}^∞ a_n z^n that is uniformly convergent in |z| ≤ r. If C is any simple, closed contour in the domain then ∮_C f(z) dz exists. Expanding f(z) into a finite series and a residual,

    ∮_C f(z) dz = ∮_C (S_N(z) + R_N(z)) dz.

Since the series is uniformly convergent, for any given ε > 0 there exists an N such that |R_N| < ε for all z in |z| ≤ r. Let L be the length of the contour C.

    |∮_C R_N(z) dz| ≤ εL → 0 as N → ∞

    ∮_C f(z) dz = lim_{N→∞} ∮_C ( ∑_{n=0}^{N−1} a_n z^n + R_N(z) ) dz
                = ∮_C ∑_{n=0}^∞ a_n z^n dz
                = ∑_{n=0}^∞ a_n ∮_C z^n dz
                = 0

Thus f(z) is analytic for |z| < r.

Result 12.3.6 A power series is analytic in its domain of uniform convergence.

12.4 Integration and Differentiation of Power Series

Consider a power series f(z) = ∑_{n=0}^∞ a_n z^n that is convergent in the disk |z| < r₀. Let C be any contour of finite length L lying entirely within the closed domain |z| ≤ r < r₀. The integral of f(z) along C is

    ∫_C f(z) dz = ∫_C (S_N(z) + R_N(z)) dz.
Since the series is uniformly convergent in the closed disk, for any given ε > 0, there exists an N such that |R_N(z)| < ε for all |z| ≤ r. We bound the absolute value of the integral of R_N(z).

    |∫_C R_N(z) dz| ≤ ∫_C |R_N(z)| dz < εL → 0 as N → ∞

Thus

    ∫_C f(z) dz = lim_{N→∞} ∫_C ∑_{n=0}^{N−1} a_n z^n dz
                = lim_{N→∞} ∑_{n=0}^{N−1} a_n ∫_C z^n dz
                = ∑_{n=0}^∞ a_n ∫_C z^n dz

Result 12.4.1 If C is a contour lying in the domain of uniform convergence of the power series ∑_{n=0}^∞ a_n z^n then

    ∫_C ∑_{n=0}^∞ a_n z^n dz = ∑_{n=0}^∞ a_n ∫_C z^n dz.

In the domain of uniform convergence of a series we can interchange the order of summation and a limit process. That is,

    lim_{z→z₀} ∑_{n=0}^∞ a_n(z) = ∑_{n=0}^∞ lim_{z→z₀} a_n(z).

We can do this because the rate of convergence does not depend on z. Since differentiation is a limit process,

    d/dz f(z) = lim_{h→0} (f(z + h) − f(z))/h,

we would expect that we could differentiate a uniformly convergent series. Since we showed that a uniformly convergent power series is equal to an analytic function, we can differentiate a power series in its domain of uniform convergence.

Result 12.4.2 Power series can be differentiated in their domain of uniform convergence.

    d/dz ∑_{n=0}^∞ a_n z^n = ∑_{n=0}^∞ (n + 1)a_{n+1} z^n.

Example 12.4.1 Differentiating a Series. Consider the series from Example 12.3.2.

    log(1 − z) = −∑_{n=1}^∞ z^n/n
We differentiate this to obtain the geometric series.

    −1/(1 − z) = −∑_{n=1}^∞ z^{n−1}
    1/(1 − z) = ∑_{n=0}^∞ z^n

The geometric series is convergent for |z| < 1 and uniformly convergent for |z| ≤ r < 1. Note that the domain of convergence is different than the series for log(1 − z). The geometric series does not converge for |z| = 1, z ≠ 1. However, the domain of uniform convergence has remained the same.

12.5 Taylor Series

Result 12.5.1 Taylor's Theorem. Let f(z) be a function that is single-valued and analytic in |z − z₀| < R. For all z in this open disk, f(z) has the convergent Taylor series

    f(z) = ∑_{n=0}^∞ (f^{(n)}(z₀)/n!) (z − z₀)^n.    (12.1)

We can also write this as

    f(z) = ∑_{n=0}^∞ a_n(z − z₀)^n,    a_n = f^{(n)}(z₀)/n! = (1/(ı2π)) ∮_C f(z)/(z − z₀)^{n+1} dz,    (12.2)

where C is a simple, positive, closed contour in 0 < |z − z₀| < R that goes once around the point z₀.

Proof of Taylor's Theorem. Let's see why Result 12.5.1 is true. Consider a function f(z) that is analytic in |z| < R. (Considering z₀ = 0 loses no generality, as we can introduce the change of variables ζ = z − z₀.) According to Cauchy's Integral Formula, (Result ??),

    f(z) = (1/(ı2π)) ∮_C f(ζ)/(ζ − z) dζ,    (12.3)

where C is a positive, simple, closed contour in 0 < |ζ − z| < R that goes once around z. We take this contour to be the circle about the origin of radius r where |z| < r < R. (See Figure 12.4.)

Figure 12.4: Graph of Domain of Convergence and Contour of Integration.
We expand 1/(ζ − z) in a geometric series,

    1/(ζ − z) = (1/ζ)/(1 − z/ζ)
              = (1/ζ) ∑_{n=0}^∞ (z/ζ)^n, for |z| < |ζ|
              = ∑_{n=0}^∞ z^n/ζ^{n+1}, for |z| < |ζ|

We substitute this series into Equation 12.3.

    f(z) = (1/(ı2π)) ∮_C ∑_{n=0}^∞ f(ζ)z^n/ζ^{n+1} dζ

The series converges uniformly so we can interchange integration and summation.

    = ∑_{n=0}^∞ (z^n/(ı2π)) ∮_C f(ζ)/ζ^{n+1} dζ

Now we have derived Equation 12.2. To obtain Equation 12.1, we apply Cauchy's Integral Formula.

    = ∑_{n=0}^∞ (f^{(n)}(0)/n!) z^n

There is a table of some commonly encountered Taylor series in Appendix H.

Example 12.5.1 Consider the Taylor series expansion of 1/(1 − z) about z = 0. Previously, we showed that this function is the sum of the geometric series ∑_{n=0}^∞ z^n and we used the ratio test to show that the series converged absolutely for |z| < 1.

Now we find the series using Taylor's theorem. Since the nearest singularity of the function is at z = 1, the radius of convergence of the series is 1. The coefficients in the series are

    a_n = (1/n!) dⁿ/dzⁿ [1/(1 − z)]|_{z=0}
        = (1/n!) [n!/(1 − z)^{n+1}]|_{z=0}
        = 1

Thus we have

    1/(1 − z) = ∑_{n=0}^∞ z^n, for |z| < 1.
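The coefficient formula (12.2) lends itself to a direct numerical check. A Python sketch, using the trapezoid rule on a circle of radius r = 1/2 (the radius and the number of sample points are arbitrary choices):

    import cmath

    # Approximate a_n = (1/(i 2 pi)) * contour integral of f(z)/z^(n+1) dz
    # on the circle |z| = r.  With z = r e^(i theta), dz = i z dtheta, so
    # the integrand reduces to f(z) z^(-n) / (2 pi), and the trapezoid rule
    # on a periodic integrand is just the average over equispaced points.

    def taylor_coefficient(f, n, r=0.5, M=256):
        samples = (f(r * cmath.exp(2j * cmath.pi * k / M))
                   * (r * cmath.exp(2j * cmath.pi * k / M))**(-n)
                   for k in range(M))
        return sum(samples) / M

    f = lambda z: 1 / (1 - z)
    print([round(taylor_coefficient(f, n).real, 6) for n in range(5)])
    # [1.0, 1.0, 1.0, 1.0, 1.0], matching Example 12.5.1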
12.5.1 Newton's Binomial Formula.

Result 12.5.2 For all |z| < 1 and complex a:

    (1 + z)^a = 1 + \binom{a}{1} z + \binom{a}{2} z² + \binom{a}{3} z³ + ···

where

    \binom{a}{r} = a(a − 1)(a − 2) ··· (a − r + 1)/r!.

If a is complex, then the expansion is of the principal branch of (1 + z)^a. We define

    \binom{a}{0} = 1,    \binom{0}{r} = 0 for r ≠ 0,    \binom{0}{0} = 1.

Example 12.5.2 Evaluate lim_{n→∞}(1 + 1/n)^n.

First we expand (1 + 1/n)^n using Newton's binomial formula.

    lim_{n→∞} (1 + 1/n)^n = lim_{n→∞} [ 1 + \binom{n}{1} (1/n) + \binom{n}{2} (1/n²) + \binom{n}{3} (1/n³) + ··· ]
                          = lim_{n→∞} [ 1 + 1 + n(n − 1)/(2!n²) + n(n − 1)(n − 2)/(3!n³) + ··· ]
                          = 1 + 1 + 1/2! + 1/3! + ···

We recognize this as the Taylor series expansion of e¹.

    = e

We can also evaluate the limit using L'Hospital's rule.

    ln( lim_{x→∞} (1 + 1/x)^x ) = lim_{x→∞} ln( (1 + 1/x)^x )
                                = lim_{x→∞} x ln(1 + 1/x)
                                = lim_{x→∞} ln(1 + 1/x)/(1/x)
                                = lim_{x→∞} [ (−1/x²)/(1 + 1/x) ] / (−1/x²)
                                = 1
    lim_{x→∞} (1 + 1/x)^x = e

Example 12.5.3 Find the Taylor series expansion of 1/(1 + z) about z = 0.

For |z| < 1,

    1/(1 + z) = 1 + \binom{−1}{1} z + \binom{−1}{2} z² + \binom{−1}{3} z³ + ···
              = 1 + (−1)¹z + (−1)²z² + (−1)³z³ + ···
              = 1 − z + z² − z³ + ···
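A numerical spot check of Result 12.5.2 for a non-integer exponent (the values a = 1/2, z = 0.3, and the truncation at 40 terms are arbitrary choices):

    # Compare the truncated binomial series for (1 + z)^a against direct
    # evaluation.  binom(a, r) = a(a-1)...(a-r+1)/r! is the generalized
    # binomial coefficient from Result 12.5.2.

    def binom(a, r):
        c = 1.0
        for k in range(r):
            c *= (a - k) / (k + 1)
        return c

    a, z = 0.5, 0.3
    series = sum(binom(a, r) * z**r for r in range(40))
    print(series, (1 + z)**a)   # both approximately 1.1401754...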
Example 12.5.4 Find the first few terms in the Taylor series expansion of

    1/√(z² + 5z + 6)

about the origin.

We factor the denominator and then apply Newton's binomial formula.

    1/√(z² + 5z + 6) = (1/√(z + 3)) (1/√(z + 2))
                     = (1/(√3 √(1 + z/3))) (1/(√2 √(1 + z/2)))
                     = (1/√6) [ 1 + \binom{−1/2}{1}(z/3) + \binom{−1/2}{2}(z/3)² + ··· ][ 1 + \binom{−1/2}{1}(z/2) + \binom{−1/2}{2}(z/2)² + ··· ]
                     = (1/√6) [ 1 − z/6 + z²/24 + ··· ][ 1 − z/4 + 3z²/32 + ··· ]
                     = (1/√6) [ 1 − (5/12)z + (17/96)z² + ··· ]

12.6 Laurent Series

Result 12.6.1 Let f(z) be single-valued and analytic in the annulus R₁ < |z − z₀| < R₂. For points in the annulus, the function has the convergent Laurent series

    f(z) = ∑_{n=−∞}^∞ a_n(z − z₀)^n,

where

    a_n = (1/(ı2π)) ∮_C f(z)/(z − z₀)^{n+1} dz

and C is a positively oriented, closed contour around z₀ lying in the annulus.

To derive this result, consider a function f(ζ) that is analytic in the annulus R₁ < |ζ| < R₂. Consider any point z in the annulus. Let C₁ be a circle of radius r₁ with R₁ < r₁ < |z|. Let C₂ be a circle of radius r₂ with |z| < r₂ < R₂. Let C_z be a circle around z, lying entirely between C₁ and C₂. (See Figure 12.5 for an illustration.)

Consider the integral of f(ζ)/(ζ − z) around the C₂ contour. Since the only singularities of f(ζ)/(ζ − z) occur at ζ = z and at points outside the annulus,

    ∮_{C₂} f(ζ)/(ζ − z) dζ = ∮_{C_z} f(ζ)/(ζ − z) dζ + ∮_{C₁} f(ζ)/(ζ − z) dζ.

By Cauchy's Integral Formula, the integral around C_z is

    ∮_{C_z} f(ζ)/(ζ − z) dζ = ı2πf(z).

This gives us an expression for f(z).

    f(z) = (1/(ı2π)) ∮_{C₂} f(ζ)/(ζ − z) dζ − (1/(ı2π)) ∮_{C₁} f(ζ)/(ζ − z) dζ    (12.4)
On the C₂ contour, |z| < |ζ|. Thus

    1/(ζ − z) = (1/ζ)/(1 − z/ζ)
              = (1/ζ) ∑_{n=0}^∞ (z/ζ)^n, for |z| < |ζ|
              = ∑_{n=0}^∞ z^n/ζ^{n+1}, for |z| < |ζ|

On the C₁ contour, |ζ| < |z|. Thus

    −1/(ζ − z) = (1/z)/(1 − ζ/z)
               = (1/z) ∑_{n=0}^∞ (ζ/z)^n, for |ζ| < |z|
               = ∑_{n=0}^∞ ζ^n/z^{n+1}, for |ζ| < |z|
               = ∑_{n=−∞}^{−1} z^n/ζ^{n+1}, for |ζ| < |z|

We substitute these geometric series into Equation 12.4.

    f(z) = (1/(ı2π)) ∮_{C₂} ∑_{n=0}^∞ f(ζ)z^n/ζ^{n+1} dζ + (1/(ı2π)) ∮_{C₁} ∑_{n=−∞}^{−1} f(ζ)z^n/ζ^{n+1} dζ

Since the sums converge uniformly, we can interchange the order of integration and summation.

    f(z) = (1/(ı2π)) ∑_{n=0}^∞ ∮_{C₂} f(ζ)z^n/ζ^{n+1} dζ + (1/(ı2π)) ∑_{n=−∞}^{−1} ∮_{C₁} f(ζ)z^n/ζ^{n+1} dζ

Since the only singularities of the integrands lie outside of the annulus, the C₁ and C₂ contours can be deformed to any positive, closed contour C that lies in the annulus and encloses the origin. (See Figure 12.5.) Finally, we combine the two integrals to obtain the desired result.

    f(z) = ∑_{n=−∞}^∞ ( (1/(ı2π)) ∮_C f(ζ)/ζ^{n+1} dζ ) z^n

For the case of arbitrary z₀, simply make the transformation z → z − z₀.

Example 12.6.1 Find the Laurent series expansions of 1/(1 + z).

For |z| < 1,

    1/(1 + z) = 1 + \binom{−1}{1} z + \binom{−1}{2} z² + \binom{−1}{3} z³ + ···
              = 1 + (−1)¹z + (−1)²z² + (−1)³z³ + ···
              = 1 − z + z² − z³ + ···

For |z| > 1,

    1/(1 + z) = (1/z)/(1 + 1/z)
              = (1/z) ( 1 + \binom{−1}{1} z^{−1} + \binom{−1}{2} z^{−2} + ··· )
              = z^{−1} − z^{−2} + z^{−3} − ···
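As with the Taylor coefficients, the Laurent coefficient formula of Result 12.6.1 can be spot-checked numerically; which expansion the coefficients describe depends on the radius of the sample circle. A Python sketch (M = 512 sample points is an arbitrary choice):

    import cmath

    # Laurent coefficients of f(z) = 1/(1 + z) about z = 0, computed from
    # a_n = (1/(i 2 pi)) * contour integral of f(z)/z^(n+1) dz on |z| = r.
    # As in the Taylor case, dz = i z dtheta collapses this to an average.

    def laurent_coefficient(f, n, r, M=512):
        pts = (r * cmath.exp(2j * cmath.pi * k / M) for k in range(M))
        return sum(f(z) * z**(-n) for z in pts) / M

    f = lambda z: 1 / (1 + z)
    print([round(laurent_coefficient(f, n, 0.5).real) for n in range(-2, 3)])
    # [0, 0, 1, -1, 1]: the expansion 1 - z + z^2 - ... valid for |z| < 1
    print([round(laurent_coefficient(f, n, 2.0).real) for n in range(-3, 2)])
    # [1, -1, 1, 0, 0]: the expansion z^-1 - z^-2 + z^-3 - ... for |z| > 1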
Figure 12.5: Contours for a Laurent Expansion in an Annulus.

12.7 Exercises

12.7.1 Series of Constants

Exercise 12.1
Show that if ∑ a_n converges then lim_{n→∞} a_n = 0. That is, lim_{n→∞} a_n = 0 is a necessary condition for the convergence of the series.
Hint, Solution

Exercise 12.2
Answer the following questions true or false. Justify your answers.
1. There exists a sequence which converges to both 1 and −1.
2. There exists a sequence {a_n} such that a_n > 1 for all n and lim_{n→∞} a_n = 1.
3. There exists a divergent geometric series whose terms converge.
4. There exists a sequence whose even terms are greater than 1, whose odd terms are less than 1 and that converges to 1.
5. There exists a divergent series of non-negative terms, ∑_{n=0}^∞ a_n, such that a_n < (1/2)^n.
6. There exists a convergent sequence, {a_n}, such that lim_{n→∞}(a_{n+1} − a_n) = 0.
7. There exists a divergent sequence, {a_n}, such that lim_{n→∞} |a_n| = 2.
8. There exist divergent series, ∑ a_n and ∑ b_n, such that ∑(a_n + b_n) is convergent.
9. There exist two different series of nonzero terms that have the same sum.
10. There exists a series of nonzero terms that converges to zero.
11. There exists a series with an infinite number of non-real terms which converges to a real number.
12. There exists a convergent series ∑ a_n with lim_{n→∞} |a_{n+1}/a_n| = 1.
13. There exists a divergent series ∑ a_n with lim_{n→∞} |a_{n+1}/a_n| = 1.
14. There exists a convergent series ∑ a_n with lim_{n→∞} |a_n|^{1/n} = 1.
15. There exists a divergent series ∑ a_n with lim_{n→∞} |a_n|^{1/n} = 1.
  • 365. 16. There exists a convergent series of non-negative terms, an, for which a2 n diverges. 17. There exists a convergent series of non-negative terms, an, for which √ an diverges. 18. There exists a convergent series, an, for which |an| diverges. 19. There exists a power series an(z − z0)n which converges for z = 0 and z = 3 but diverges for z = 2. 20. There exists a power series an(z − z0)n which converges for z = 0 and z = ı2 but diverges for z = 2. Hint, Solution Exercise 12.3 Determine if the following series converge. 1. ∞ n=2 1 n ln(n) 2. ∞ n=2 1 ln (nn) 3. ∞ n=2 ln n √ ln n 4. ∞ n=10 1 n(ln n)(ln(ln n)) 5. ∞ n=1 ln (2n ) ln (3n) + 1 6. ∞ n=0 1 ln(n + 20) 7. ∞ n=0 4n + 1 3n − 2 8. ∞ n=0 (Logπ 2)n 9. ∞ n=2 n2 − 1 n4 − 1 10. ∞ n=2 n2 (ln n)n 11. ∞ n=2 (−1)n ln 1 n 12. ∞ n=2 (n!)2 (2n)! 13. ∞ n=2 3n + 4n + 5 5n − 4n − 3 345
14. ∑_{n=2}^∞ n!/(ln n)^n

15. ∑_{n=2}^∞ e^n/ln(n!)

16. ∑_{n=1}^∞ (n!)²/(n²)!

17. ∑_{n=1}^∞ (n⁸ + 4n⁴ + 8)/(3n⁹ − n⁵ + 9n)

18. ∑_{n=1}^∞ (1/n − 1/(n + 1))

19. ∑_{n=1}^∞ cos(nπ)/n

20. ∑_{n=2}^∞ ln n/n^{11/10}

Hint, Solution

Exercise 12.4 (mathematica/fcv/series/constants.nb)
Show that the alternating harmonic series,

    ∑_{n=1}^∞ (−1)^{n+1}/n = 1 − 1/2 + 1/3 − 1/4 + ··· ,

is convergent.
Hint, Solution

Exercise 12.5 (mathematica/fcv/series/constants.nb)
Show that the series

    ∑_{n=1}^∞ 1/n

is divergent with the Cauchy convergence criterion.
Hint, Solution

Exercise 12.6
The alternating harmonic series has the sum:

    ∑_{n=1}^∞ (−1)^{n+1}/n = ln(2).

Show that the terms in this series can be rearranged to sum to π.
Hint, Solution

Exercise 12.7 (mathematica/fcv/series/constants.nb)
Is the series,

    ∑_{n=1}^∞ n!/n^n,

convergent?
Hint, Solution
  • 367. Exercise 12.8 Show that the harmonic series, ∞ n=1 1 nα = 1 + 1 2α + 1 3α + · · · , converges for α > 1 and diverges for α ≤ 1. Hint, Solution Exercise 12.9 Evaluate N−1 n=1 sin(nx). Hint, Solution Exercise 12.10 Evaluate n k=1 kzk and n k=1 k2 zk for z = 1. Hint, Solution Exercise 12.11 Which of the following series converge? Find the sum of those that do. 1. 1 2 + 1 6 + 1 12 + 1 20 + · · · 2. 1 + (−1) + 1 + (−1) + · · · 3. ∞ n=1 1 2n−1 1 3n 1 5n+1 Hint, Solution Exercise 12.12 Evaluate the following sum. ∞ k1=0 ∞ k2=k1 · · · ∞ kn=kn−1 1 2kn Hint, Solution 12.7.2 Uniform Convergence 12.7.3 Uniformly Convergent Power Series Exercise 12.13 Determine the domain of convergence of the following series. 1. ∞ n=0 zn (z + 3)n 2. ∞ n=2 Log z ln n 3. ∞ n=1 z n 347
  • 368. 4. ∞ n=1 (z + 2)2 n2 5. ∞ n=1 (z − e)n nn 6. ∞ n=1 z2n 2nz 7. ∞ n=0 zn! (n!)2 8. ∞ n=0 zln(n!) n! 9. ∞ n=0 (z − π)2n+1 nπ n! 10. ∞ n=0 ln n zn Hint, Solution Exercise 12.14 Find the circle of convergence of the following series. 1. z + (α − β) z2 2! + (α − β)(α − 2β) z3 3! + (α − β)(α − 2β)(α − 3β) z4 4! + · · · 2. ∞ n=1 n 2n (z − ı)n 3. ∞ n=1 nn zn 4. ∞ n=1 n! nn zn 5. ∞ n=1 (3 + (−1)n ) n zn 6. ∞ n=1 (n + αn ) zn (|α| > 1) Hint, Solution Exercise 12.15 Find the circle of convergence of the following series: 1. ∞ k=0 kzk 2. ∞ k=1 kk zk 348
  • 369. 3. ∞ k=1 k! kk zk 4. ∞ k=0 (z + ı5)2k (k + 1)2 5. ∞ k=0 (k + 2k )zk Hint, Solution 12.7.4 Integration and Differentiation of Power Series Exercise 12.16 Using the geometric series, show that 1 (1 − z)2 = ∞ n=0 (n + 1)zn , for |z| < 1, and log(1 − z) = − ∞ n=1 zn n , for |z| < 1. Hint, Solution 12.7.5 Taylor Series Exercise 12.17 Find the Taylor series of 1 1+z2 about the z = 0. Determine the radius of convergence of the Taylor series from the singularities of the function. Determine the radius of convergence with the ratio test. Hint, Solution Exercise 12.18 Use two methods to find the Taylor series expansion of log(1 + z) about z = 0 and determine the circle of convergence. First directly apply Taylor’s theorem, then differentiate a geometric series. Hint, Solution Exercise 12.19 Let f(z) = (1 + z)α be the branch for which f(0) = 1. Find its Taylor series expansion about z = 0. What is the radius of convergence of the series? (α is an arbitrary complex number.) Hint, Solution Exercise 12.20 Find the Taylor series expansions about the point z = 1 for the following functions. What are the radii of convergence? 1. 1 z 2. Log z 3. 1 z2 4. z Log z − z Hint, Solution 349
  • 370. Exercise 12.21 Find the Taylor series expansion about the point z = 0 for ez . What is the radius of convergence? Use this to find the Taylor series expansions of cos z and sin z about z = 0. Hint, Solution Exercise 12.22 Find the Taylor series expansion about the point z = π for the cosine and sine. Hint, Solution Exercise 12.23 Sum the following series. 1. ∞ n=0 (ln 2)n n! 2. ∞ n=0 (n + 1)(n + 2) 2n 3. ∞ n=0 (−1)n n! 4. ∞ n=0 (−1)n π2n+1 (2n + 1)! 5. ∞ n=0 (−1)n π2n (2n)! 6. ∞ n=0 (−π)n (2n)! Hint, Solution Exercise 12.24 1. Find the first three terms in the following Taylor series and state the convergence properties for the following. (a) e−z around z0 = 0 (b) 1 + z 1 − z around z0 = ı (c) ez z − 1 around z0 = 0 It may be convenient to use the Cauchy product of two Taylor series. 2. Consider a function f(z) analytic for |z − z0| < R. Show that the series obtained by differ- entiating the Taylor series for f(z) termwise is actually the Taylor series for f (z) and hence argue that this series converges uniformly to f (z) for |z − z0| ≤ ρ < R. 3. Find the Taylor series for 1 (1 − z)3 by appropriate differentiation of the geometric series and state the radius of convergence. 4. Consider the branch of f(z) = (z + 1)ı corresponding to f(0) = 1. Find the Taylor series expansion about z0 = 0 and state the radius of convergence. Hint, Solution 350
  • 371. 12.7.6 Laurent Series Exercise 12.25 Find the Laurent series about z = 0 of 1/(z − ı) for |z| < 1 and |z| > 1. Hint, Solution Exercise 12.26 Obtain the Laurent expansion of f(z) = 1 (z + 1)(z + 2) centered on z = 0 for the three regions: 1. |z| < 1 2. 1 < |z| < 2 3. 2 < |z| Hint, Solution Exercise 12.27 By comparing the Laurent expansion of (z + 1/z)m , m ∈ Z+ , with the binomial expansion of this quantity, show that 2π 0 (cos θ)m cos(nθ) dθ = π 2m−1 m (m−n)/2 −m ≤ n ≤ m and m − n even 0 otherwise Hint, Solution Exercise 12.28 The function f(z) is analytic in the entire z-plane, including ∞, except at the point z = ı/2, where it has a simple pole, and at z = 2, where it has a pole of order 2. In addition |z|=1 f(z) dz = ı2π, |z|=3 f(z) dz = 0, |z|=3 (z − 1)f(z) dz = 0. Find f(z) and its complete Laurent expansion about z = 0. Hint, Solution Exercise 12.29 Let f(z) = ∞ k=1 k3 z 3 k . Compute each of the following, giving justification in each case. The contours are circles of radius one about the origin. 1. |z|=1 eız f(z) dz 2. |z|=1 f(z) z4 dz 3. |z|=1 f(z) ez z2 dz Hint, Solution Exercise 12.30 1. Expand f(z) = 1 z(1−z) in Laurent series that converge in the following domains: (a) 0 < |z| < 1 351
  • 372. (b) |z| > 1 (c) |z + 1| > 2 2. Without determining the series, specify the region of convergence for a Laurent series repre- senting f(z) = 1/(z4 + 4) in powers of z − 1 that converges at z = ı. Hint, Solution 352
  • 373. 12.8 Hints Hint 12.1 Use the Cauchy convergence criterion for series. In particular, consider |SN+1 − SN |. Hint 12.2 CONTINUE Hint 12.3 1. ∞ n=2 1 n ln(n) Use the integral test. 2. ∞ n=2 1 ln (nn) Simplify the summand. 3. ∞ n=2 ln n √ ln n Simplify the summand. Use the comparison test. 4. ∞ n=10 1 n(ln n)(ln(ln n)) Use the integral test. 5. ∞ n=1 ln (2n ) ln (3n) + 1 Show that the terms in the sum do not vanish as n → ∞ 6. ∞ n=0 1 ln(n + 20) Shift the indices. 7. ∞ n=0 4n + 1 3n − 2 Show that the terms in the sum do not vanish as n → ∞ 8. ∞ n=0 (Logπ 2)n This is a geometric series. 9. ∞ n=2 n2 − 1 n4 − 1 Simplify the integrand. Use the comparison test. 353
  • 374. 10. ∞ n=2 n2 (ln n)n Compare to a geometric series. 11. ∞ n=2 (−1)n ln 1 n Group pairs of consecutive terms to obtain a series of positive terms. 12. ∞ n=2 (n!)2 (2n)! Use the comparison test. 13. ∞ n=2 3n + 4n + 5 5n − 4n − 3 Use the root test. 14. ∞ n=2 n! (ln n)n Show that the terms do not vanish as n → ∞. 15. ∞ n=2 en ln(n!) Show that the terms do not vanish as n → ∞. 16. ∞ n=1 (n!)2 (n2)! Apply the ratio test. 17. ∞ n=1 n8 + 4n4 + 8 3n9 − n5 + 9n Use the comparison test. 18. ∞ n=1 1 n − 1 n + 1 Use the comparison test. 19. ∞ n=1 cos(nπ) n Simplify the integrand. 354
  • 375. 20. ∞ n=2 ln n n11/10 Use the integral test. Hint 12.4 Group the terms. 1 − 1 2 = 1 2 1 3 − 1 4 = 1 12 1 5 − 1 6 = 1 30 · · · Hint 12.5 Show that |S2n − Sn| > 1 2 . Hint 12.6 The alternating harmonic series is conditionally convergent. Let {an} and {bn} be the positive and negative terms in the sum, respectively, ordered in decreasing magnitude. Note that both ∞ n=1 an and ∞ n=1 bn are divergent. Devise a method for alternately taking terms from {an} and {bn}. Hint 12.7 Use the ratio test. Hint 12.8 Use the integral test. Hint 12.9 Note that sin(nx) = (eınx ). This substitute will yield a finite geometric series. Hint 12.10 Let Sn be the sum. Consider Sn − zSn. Use the finite geometric sum. Hint 12.11 1. The summand is a rational function. Find the first few partial sums. 2. 3. This a geometric series. Hint 12.12 CONTINUE Hint 12.13 CONTINUE 1. ∞ n=0 zn (z + 3)n 2. ∞ n=2 Log z ln n 355
  • 376. 3. ∞ n=1 z n 4. ∞ n=1 (z + 2)2 n2 5. ∞ n=1 (z − e)n nn 6. ∞ n=1 z2n 2nz 7. ∞ n=0 zn! (n!)2 8. ∞ n=0 zln(n!) n! 9. ∞ n=0 (z − π)2n+1 nπ n! 10. ∞ n=0 ln n zn Hint 12.14 Hint 12.15 CONTINUE Hint 12.16 Differentiate the geometric series. Integrate the geometric series. Hint 12.17 The Taylor series is a geometric series. Hint 12.18 Hint 12.19 Hint 12.20 1. 1 z = 1 1 + (z − 1) The right side is the sum of a geometric series. 2. Integrate the series for 1/z. 3. Differentiate the series for 1/z. 4. Integrate the series for Log z. 356
  • 377. Hint 12.21 Evaluate the derivatives of ez at z = 0. Use Taylor’s Theorem. Write the cosine and sine in terms of the exponential function. Hint 12.22 cos z = − cos(z − π) sin z = − sin(z − π) Hint 12.23 CONTINUE Hint 12.24 CONTINUE Hint 12.25 Hint 12.26 Hint 12.27 Hint 12.28 Hint 12.29 Hint 12.30 CONTINUE 357
12.9 Solutions

Solution 12.1
∑_{n=0}^∞ a_n converges only if the partial sums, S_n, are a Cauchy sequence.

    ∀ε > 0 ∃N s.t. m, n > N ⇒ |S_m − S_n| < ε.

In particular, we can consider m = n + 1.

    ∀ε > 0 ∃N s.t. n > N ⇒ |S_{n+1} − S_n| < ε

Now we note that S_{n+1} − S_n = a_n.

    ∀ε > 0 ∃N s.t. n > N ⇒ |a_n| < ε

This is exactly the Cauchy convergence criterion for the sequence {a_n}. Thus we see that lim_{n→∞} a_n = 0 is a necessary condition for the convergence of the series ∑_{n=0}^∞ a_n.

Solution 12.2
CONTINUE

Solution 12.3
1. ∑_{n=2}^∞ 1/(n ln(n))
Since this is a series of positive, monotone decreasing terms, the sum converges or diverges with the integral,

    ∫_2^∞ 1/(x ln x) dx = ∫_{ln 2}^∞ 1/ξ dξ.

Since the integral diverges, the series also diverges.

2. ∑_{n=2}^∞ 1/ln(n^n) = ∑_{n=2}^∞ 1/(n ln(n))
By part 1, the sum diverges.

3. ∑_{n=2}^∞ 1/(ln n)^{√ln n} = ∑_{n=2}^∞ 1/n^{ln(ln n)/√ln n} ≥ ∑_{n=2}^∞ 1/n
The sum is divergent by the comparison test.

4. ∑_{n=10}^∞ 1/(n(ln n)(ln(ln n)))
Since this is a series of positive, monotone decreasing terms, the sum converges or diverges with the integral,

    ∫_{10}^∞ 1/(x ln x ln(ln x)) dx = ∫_{ln(10)}^∞ 1/(y ln y) dy = ∫_{ln(ln(10))}^∞ 1/z dz.

Since the integral diverges, the series also diverges.

5. ∑_{n=1}^∞ ln(2^n)/(ln(3^n) + 1) = ∑_{n=1}^∞ (n ln 2)/(n ln 3 + 1) = ∑_{n=1}^∞ ln 2/(ln 3 + 1/n)
Since the terms in the sum do not vanish as n → ∞, the series is divergent.
  • 379. 6. ∞ n=0 1 ln(n + 20) = ∞ n=20 1 ln n The series diverges. 7. ∞ n=0 4n + 1 3n − 2 Since the terms in the sum do not vanish as n → ∞, the series is divergent. 8. ∞ n=0 (Logπ 2)n This is a geometric series. Since | Logπ 2| < 1, the series converges. 9. ∞ n=2 n2 − 1 n4 − 1 = ∞ n=2 1 n2 + 1 < ∞ n=2 1 n2 The series converges by comparison to the harmonic series. 10. ∞ n=2 n2 (ln n)n = ∞ n=2 n2/n ln n n Since n2/n → 1 as n → ∞, n2/n / ln n → 0 as n → ∞. The series converges by comparison to a geometric series. 11. We group pairs of consecutive terms to obtain a series of positive terms. ∞ n=2 (−1)n ln 1 n = ∞ n=1 ln 1 2n − ln 1 2n + 1 = ∞ n=1 ln 2n + 1 2n The series on the right side diverges because the terms do not vanish as n → ∞. 12. ∞ n=2 (n!)2 (2n)! = ∞ n=2 (1)(2) · · · n (n + 1)(n + 2) · · · (2n) < ∞ n=2 1 2n The series converges by comparison with a geometric series. 13. ∞ n=2 3n + 4n + 5 5n − 4n − 3 We use the root test to check for convergence. lim n→∞ |an| 1/n = lim n→∞ 3n + 4n + 5 5n − 4n − 3 1/n = lim n→∞ 4 5 (3/4)n + 1 + 5/4n 1 − (4/5)n − 3/5n 1/n = 4 5 < 1 We see that the series is absolutely convergent. 359
  • 380. 14. We will use the comparison test. ∞ n=2 n! (ln n)n > ∞ n=2 (n/2)n/2 (ln n)n = ∞ n=2 n/2 ln n n Since the terms in the series on the right side do not vanish as n → ∞, the series is divergent. 15. We will use the comparison test. ∞ n=2 en ln(n!) > ∞ n=2 en ln(nn) = ∞ n=2 en n ln(n) Since the terms in the series on the right side do not vanish as n → ∞, the series is divergent. 16. ∞ n=1 (n!)2 (n2)! We apply the ratio test. lim n→∞ an+1 an = lim n→∞ ((n + 1)!)2 (n2 )! ((n + 1)2)!(n!)2 = lim n→∞ (n + 1)2 ((n + 1)2 − n2)! = lim n→∞ (n + 1)2 (2n + 1)! = 0 The series is convergent. 17. ∞ n=1 n8 + 4n4 + 8 3n9 − n5 + 9n = ∞ n=1 1 n 1 + 4n−4 + 8n−8 3 − n−4 + 9n−8 > 1 4 ∞ n=1 1 n We see that the series is divergent by comparison to the harmonic series. 18. ∞ n=1 1 n − 1 n + 1 = ∞ n=1 1 n2 + n < ∞ n=1 1 n2 The series converges by the comparison test. 19. ∞ n=1 cos(nπ) n = ∞ n=1 (−1)n n We recognize this as the alternating harmonic series, which is conditionally convergent. 20. ∞ n=2 ln n n11/10 Since this is a series of positive, monotone decreasing terms, the sum converges or diverges with the integral, ∞ 2 ln x x11/10 dx = ∞ ln 2 y e−y/10 dy Since the integral is convergent, so is the series. 360
  • 381. Solution 12.4 ∞ n=1 (−1)n+1 n = ∞ n=1 1 2n − 1 − 1 2n = ∞ n=1 1 (2n − 1)(2n) < ∞ n=1 1 (2n − 1)2 < 1 2 ∞ n=1 1 n2 = π2 12 Thus the series is convergent. Solution 12.5 Since |S2n − Sn| = 2n−1 j=n 1 j ≥ 2n−1 j=n 1 2n − 1 = n 2n − 1 > 1 2 the series does not satisfy the Cauchy convergence criterion. Solution 12.6 The alternating harmonic series is conditionally convergent. That is, the sum is convergent but not absolutely convergent. Let {an} and {bn} be the positive and negative terms in the sum, respectively, ordered in decreasing magnitude. Note that both ∞ n=1 an and ∞ n=1 bn are divergent. Otherwise the alternating harmonic series would be absolutely convergent. To sum the terms in the series to π we repeat the following two steps indefinitely: 1. Take terms from {an} until the sum is greater than π. 2. Take terms from {bn} until the sum is less than π. Each of these steps can always be accomplished because the sums, ∞ n=1 an and ∞ n=1 bn are both divergent. Hence the tails of the series are divergent. No matter how many terms we take, the remaining terms in each series are divergent. In each step a finite, nonzero number of terms from the respective series is taken. Thus all the terms will be used. Since the terms in each series vanish as n → ∞, the running sum converges to π. 361
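The two-step scheme in Solution 12.6 is short to implement. A Python sketch (the step cap is an arbitrary choice):

    import math

    # Rearrange the alternating harmonic series to sum to pi, following
    # Solution 12.6: take positive terms 1/1, 1/3, 1/5, ... while the
    # running sum is at or below the target, otherwise negative terms
    # -1/2, -1/4, ...

    def rearranged_sum(target=math.pi, steps=1_000_000):
        total, p, q = 0.0, 1, 2   # next odd and next even denominator
        for _ in range(steps):
            if total <= target:
                total += 1.0 / p
                p += 2
            else:
                total -= 1.0 / q
                q += 2
        return total

    print(rearranged_sum())   # approximately 3.14159...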
Solution 12.7
Applying the ratio test,

    lim_{n→∞} |a_{n+1}/a_n| = lim_{n→∞} ((n + 1)! n^n)/(n! (n + 1)^{n+1})
                            = lim_{n→∞} n^n/(n + 1)^n
                            = lim_{n→∞} (n/(n + 1))^n
                            = 1/e < 1,

we see that the series is absolutely convergent.

Solution 12.8
The harmonic series,

    ∑_{n=1}^∞ 1/n^α = 1 + 1/2^α + 1/3^α + ··· ,

converges or diverges absolutely with the integral,

    ∫_1^∞ 1/|x^α| dx = ∫_1^∞ 1/x^{ℜ(α)} dx = [ln x]_1^∞ for ℜ(α) = 1;  [x^{1−ℜ(α)}/(1 − ℜ(α))]_1^∞ for ℜ(α) ≠ 1.

The integral converges only for ℜ(α) > 1. Thus the harmonic series converges absolutely for ℜ(α) > 1 and diverges absolutely for ℜ(α) ≤ 1.

Solution 12.9

    ∑_{n=1}^{N−1} sin(nx) = ∑_{n=0}^{N−1} sin(nx)
                          = ∑_{n=0}^{N−1} ℑ(e^{ınx})
                          = ℑ( ∑_{n=0}^{N−1} (e^{ıx})^n )
                          = ℑ(N) for x = 2πk;  ℑ( (1 − e^{ıNx})/(1 − e^{ıx}) ) for x ≠ 2πk
                          = 0 for x = 2πk;  ℑ( (e^{−ıx/2} − e^{ı(N−1/2)x})/(e^{−ıx/2} − e^{ıx/2}) ) for x ≠ 2πk
                          = 0 for x = 2πk;  ℑ( (e^{−ıx/2} − e^{ı(N−1/2)x})/(−ı2 sin(x/2)) ) for x ≠ 2πk
                          = 0 for x = 2πk;  ℜ( e^{−ıx/2} − e^{ı(N−1/2)x} )/(2 sin(x/2)) for x ≠ 2πk

    ∑_{n=1}^{N−1} sin(nx) = 0 for x = 2πk;  (cos(x/2) − cos((N − 1/2)x))/(2 sin(x/2)) for x ≠ 2πk
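A quick numerical check of this closed form (x = 1.7 and N = 50 are arbitrary test values):

    import math

    # Compare the direct partial sum of sin(n x) against the closed form
    # derived in Solution 12.9, for x not a multiple of 2 pi.

    N, x = 50, 1.7
    direct = sum(math.sin(n * x) for n in range(1, N))
    closed = (math.cos(x / 2) - math.cos((N - 0.5) * x)) / (2 * math.sin(x / 2))
    print(direct, closed)   # the two values agree to machine precision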
  • 383. Solution 12.10 Let Sn = n k=1 kzk . Sn − zSn = n k=1 kzk − n k=1 kzk+1 = n k=1 kzk − n+1 k=2 (k − 1)zk = n k=1 zk − nzn+1 = z − zn+1 1 − z − nzn+1 n k=1 kzk = z(1 − (n + 1)zn + nzn+1 ) (1 − z)2 Let Sn = n k=1 k2 zk . Sn − zSn = n k=1 (k2 − (k − 1)2 )zk − n2 zn+1 = 2 n k=1 kzk − n k=1 zk − n2 zn+1 = 2 z(1 − (n + 1)zn + nzn+1 ) (1 − z)2 − z − zn+1 1 − z − n2 zn+1 n k=1 k2 zk = z(1 + z − zn (1 + z + n(n(z − 1) − 2)(z − 1))) (1 − z)3 Solution 12.11 1. ∞ n=1 an = 1 2 + 1 6 + 1 12 + 1 20 + · · · We conjecture that the terms in the sum are rational functions of summation index. That is, an = 1/p(n) where p(n) is a polynomial. We use divided differences to determine the order of the polynomial. 2 6 12 20 4 6 8 2 2 We see that the polynomial is second order. p(n) = an2 + bn + c. We solve for the coefficients. a + b + c = 2 4a + 2b + c = 6 9a + 3b + c = 12 363
  • 384. p(n) = n2 + n We examine the first few partial sums. S1 = 1 2 S2 = 2 3 S3 = 3 4 S4 = 4 5 We conjecture that Sn = n/(n + 1). We prove this with induction. The base case is n = 1. S1 = 1/(1 + 1) = 1/2. Now we assume the induction hypothesis and calculate Sn+1. Sn+1 = Sn + an+1 = n n + 1 + 1 (n + 1)2 + (n + 1) = n + 1 n + 2 This proves the induction hypothesis. We calculate the limit of the partial sums to evaluate the series. ∞ n=1 1 n2 + n = lim n→∞ n n + 1 ∞ n=1 1 n2 + n = 1 2. ∞ n=0 (−1)n = 1 + (−1) + 1 + (−1) + · · · Since the terms in the series do not vanish as n → ∞, the series is divergent. 3. We can directly sum this geometric series. ∞ n=1 1 2n−1 1 3n 1 5n+1 = 1 75 1 1 − 1/30 = 2 145 CONTINUE Solution 12.12 The innermost sum is a geometric series. ∞ kn=kn−1 1 2kn = 1 2kn−1 1 1 − 1/2 = 21−kn−1 This gives us a relationship between n nested sums and n − 1 nested sums. ∞ k1=0 ∞ k2=k1 · · · ∞ kn=kn−1 1 2kn = 2 ∞ k1=0 ∞ k2=k1 · · · ∞ kn−1=kn−2 1 2kn−1 364
  • 385. We evaluate the n nested sums by induction. ∞ k1=0 ∞ k2=k1 · · · ∞ kn=kn−1 1 2kn = 2n Solution 12.13 CONTINUE. 1. ∞ n=0 zn (z + 3)n 2. ∞ n=2 Log z ln n 3. ∞ n=1 z n 4. ∞ n=1 (z + 2)2 n2 5. ∞ n=1 (z − e)n nn 6. ∞ n=1 z2n 2nz 7. ∞ n=0 zn! (n!)2 8. ∞ n=0 zln(n!) n! 9. ∞ n=0 (z − π)2n+1 nπ n! 10. ∞ n=0 ln n zn Solution 12.14 1. We assume that β = 0. We determine the radius of convergence with the ratio test. R = lim n→∞ an an+1 = lim n→∞ (α − β) · · · (α − (n − 1)β)/n! (α − β) · · · (α − nβ)/(n + 1)! = lim n→∞ n + 1 α − nβ = 1 |β| The series converges absolutely for |z| < 1/|β|. 365
  • 386. 2. By the ratio test formula, the radius of absolute convergence is R = lim n→∞ n/2n (n + 1)/2n+1 = 2 lim n→∞ n n + 1 = 2 By the root test formula, the radius of absolute convergence is R = 1 limn→∞ n |n/2n| = 2 limn→∞ n √ n = 2 The series converges absolutely for |z − ı| < 2. 3. We determine the radius of convergence with the Cauchy-Hadamard formula. R = 1 lim sup n |an| = 1 lim sup n |nn| = 1 lim sup n = 0 The series converges only for z = 0. 4. By the ratio test formula, the radius of absolute convergence is R = lim n→∞ n!/nn (n + 1)!/(n + 1)n+1 = lim n→∞ (n + 1)n nn = lim n→∞ n + 1 n n = exp lim n→∞ ln n + 1 n n = exp lim n→∞ n ln n + 1 n = exp lim n→∞ ln(n + 1) − ln(n) 1/n = exp lim n→∞ 1/(n + 1) − 1/n −1/n2 = exp lim n→∞ n n + 1 = e1 The series converges absolutely in the circle, |z| < e. 366
  • 387. 5. By the Cauchy-Hadamard formula, the radius of absolute convergence is R = 1 lim sup n | (3 + (−1)n) n | = 1 lim sup (3 + (−1)n) = 1 4 Thus the series converges absolutely for |z| < 1/4. 6. By the Cauchy-Hadamard formula, the radius of absolute convergence is R = 1 lim sup n |n + αn| = 1 lim sup |α| n |1 + n/αn| = 1 |α| Thus the sum converges absolutely for |z| < 1/|α|. Solution 12.15 1. ∞ k=0 kzk We determine the radius of convergence with the ratio formula. R = lim k→∞ k k + 1 = lim k→∞ 1 1 = 1 The series converges absolutely for |z| < 1. 2. ∞ k=1 kk zk We determine the radius of convergence with the Cauchy-Hadamard formula. R = 1 lim sup k |kk| = 1 lim sup k = 0 The series converges only for z = 0. 3. ∞ k=1 k! kk zk 367
  • 388. We determine the radius of convergence with the ratio formula. R = lim k→∞ k!/kk (k + 1)!/(k + 1)(k+1) = lim k→∞ (k + 1)k kk = exp lim k→∞ k ln k + 1 k = exp lim k→∞ ln(k + 1) − ln(k) 1/k = exp lim k→∞ 1/(k + 1) − 1/k −1/k2 = exp lim k→∞ k k + 1 = exp(1) = e The series converges absolutely for |z| < e. 4. ∞ k=0 (z + ı5)2k (k + 1)2 We use the ratio formula to determine the domain of convergence. lim k→∞ (z + ı5)2(k+1) (k + 2)2 (z + ı5)2k(k + 1)2 < 1 |z + ı5|2 lim k→∞ (k + 2)2 (k + 1)2 < 1 |z + ı5|2 lim k→∞ 2(k + 2) 2(k + 1) < 1 |z + ı5|2 lim k→∞ 2 2 < 1 |z + ı5|2 < 1 5. ∞ k=0 (k + 2k )zk We determine the radius of convergence with the Cauchy-Hadamard formula. R = 1 lim sup k |k + 2k| = 1 lim sup 2 k |1 + k/2k| = 1 2 The series converges for |z| < 1/2. Solution 12.16 The geometric series is 1 1 − z = ∞ n=0 zn . 368
  • 389. This series is uniformly convergent in the domain, |z| ≤ r < 1. Differentiating this equation yields, 1 (1 − z)2 = ∞ n=1 nzn−1 = ∞ n=0 (n + 1)zn for |z| < 1. Integrating the geometric series yields − log(1 − z) = ∞ n=0 zn+1 n + 1 log(1 − z) = − ∞ n=1 zn n , for |z| < 1. Solution 12.17 1 1 + z2 = ∞ n=0 −z2 n = ∞ n=0 (−1)n z2n The function 1 1+z2 = 1 (1−ız)(1+ız) has singularities at z = ±ı. Thus the radius of convergence is 1. Now we use the ratio test to corroborate that the radius of convergence is 1. lim n→∞ an+1(z) an(z) < 1 lim n→∞ (−1)n+1 z2(n+1) (−1)nz2n < 1 lim n→∞ z2 < 1 |z| < 1 Solution 12.18 Method 1. log(1 + z) = [log(1 + z)]z=0 + d dz log(1 + z) z=0 z 1! + d2 dz2 log(1 + z) z=0 z2 2! + · · · = 0 + 1 1 + z z=0 z 1! + −1 (1 + z)2 z=0 z2 2! + 2 (1 + z)3 z=0 z3 3! + · · · = z − z2 2 + z3 3 − z4 4 + · · · = ∞ n=1 (−1)n+1 zn n Since the nearest singularity of log(1 + z) is at z = −1, the radius of convergence is 1. Method 2. We know the geometric series converges for |z| < 1. 1 1 + z = ∞ n=0 (−1)n zn We integrate this equation to get the series for log(1 + z) in the domain |z| < 1. log(1 + z) = ∞ n=0 (−1)n zn+1 n + 1 = ∞ n=1 (−1)n+1 zn n 369
  • 390. We calculate the radius of convergence with the ratio test. R = lim n→∞ an an+1 = lim n→∞ −(n + 1) n = 1 Thus the series converges absolutely for |z| < 1. Solution 12.19 The Taylor series expansion of f(z) about z = 0 is f(z) = ∞ n=0 f(n) (0) n! zn . The derivatives of f(z) are f(n) (z) = n−1 k=0 (α − k) (1 + z)α−n . Thus f(n) (0) is f(n) (0) = n−1 k=0 (α − k). If α = m is a non-negative integer, then only the first m + 1 terms are nonzero. The Taylor series is a polynomial and the series has an infinite radius of convergence. (1 + z)m = m n=0 n−1 k=0 (α − k) n! zn If α is not a non-negative integer, then all of the terms in the series are non-zero. (1 + z)α = ∞ n=0 n−1 k=0 (α − k) n! zn The radius of convergence of the series is the distance to the nearest singularity of (1 + z)α . This occurs at z = −1. Thus the series converges for |z| < 1. We can corroborate this with the ratio test. The radius of convergence is R = lim n→∞ n−1 k=0 (α − k) /n! ( n k=0(α − k)) /(n + 1)! = lim n→∞ n + 1 α − n = 1. If we use the binomial coefficient, we can write the series in a compact form. α n ≡ n−1 k=0 (α − k) n! (1 + z)α = ∞ n=0 α n zn Solution 12.20 1. We find the series for 1/z by writing it in terms of z − 1 and using the geometric series. 1 z = 1 1 + (z − 1) 1 z = ∞ n=0 (−1)n (z − 1)n for |z − 1| < 1 370
  • 391. Since the nearest singularity is at z = 0, the radius of convergence is 1. The series converges absolutely for |z −1| < 1. We could also determine the radius of convergence with the Cauchy- Hadamard formula. R = 1 lim sup n |an| = 1 lim sup n |(−1)n| = 1 2. We integrate 1/ζ from 1 to z for in the circle |z − 1| < 1. z 1 1 ζ dζ = [Log ζ]z 1 = Log z The series we derived for 1/z is uniformly convergent for |z − 1| ≤ r < 1. We can integrate the series in this domain. Log z = z 1 ∞ n=0 (−1)n (ζ − 1)n dζ = ∞ n=0 (−1)n z 1 (ζ − 1)n dζ = ∞ n=0 (−1)n (z − 1)n+1 n + 1 Log z = ∞ n=1 (−1)n−1 (z − 1)n n for |z − 1| < 1 3. The series we derived for 1/z is uniformly convergent for |z − 1| ≤ r < 1. We can differentiate the series in this domain. 1 z2 = − d dz 1 z = − d dz ∞ n=0 (−1)n (z − 1)n = ∞ n=1 (−1)n+1 n(z − 1)n−1 1 z2 = ∞ n=0 (−1)n (n + 1)(z − 1)n for |z − 1| < 1 4. We integrate Log ζ from 1 to z for in the circle |z − 1| < 1. z 1 Log ζ dζ = [ζ Log ζ − ζ]z 1 = z Log z − z + 1 The series we derived for Log z is uniformly convergent for |z − 1| ≤ r < 1. We can integrate 371
  • 392. the series in this domain. z Log z − z = = −1 + z 1 Log ζ dζ = −1 + z 1 ∞ n=1 (−1)n−1 (ζ − 1)n n dζ = −1 + ∞ n=1 (−1)n−1 (z − 1)n+1 n(n + 1) z Log z − z = −1 + ∞ n=2 (−1)n (z − 1)n n(n − 1) for |z − 1| < 1 Solution 12.21 We evaluate the derivatives of ez at z = 0. Then we use Taylor’s Theorem. dn dzn ez = ez dn dzn ez = ez z=0 = 1 ez = ∞ n=0 zn n! Since the exponential function has no singularities in the finite complex plane, the radius of conver- gence is infinite. We find the Taylor series for the cosine and sine by writing them in terms of the exponential function. cos z = eız + e−ız 2 = 1 2 ∞ n=0 (ız)n n! + ∞ n=0 (−ız)n n! = ∞ n=0 even n (ız)n n! cos z = ∞ n=0 (−1)n z2n (2n)! sin z = eız − e−ız ı2 = 1 ı2 ∞ n=0 (ız)n n! − ∞ n=0 (−ız)n n! = −ı ∞ n=0 odd n (ız)n n! sin z = ∞ n=0 (−1)n z2n+1 (2n + 1)! 372
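A numeric spot check of these two expansions (z = 1.2 and 20 terms are arbitrary choices):

    import math

    # Evaluate the truncated Taylor series from Solution 12.21 and compare
    # with the library cosine and sine.

    z, N = 1.2, 20
    cos_series = sum((-1)**n * z**(2 * n) / math.factorial(2 * n)
                     for n in range(N))
    sin_series = sum((-1)**n * z**(2 * n + 1) / math.factorial(2 * n + 1)
                     for n in range(N))
    print(cos_series, math.cos(z))   # both approximately 0.362358
    print(sin_series, math.sin(z))   # both approximately 0.932039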
  • 393. Solution 12.22 cos z = − cos(z − π) = − ∞ n=0 (−1)n (z − π)2n (2n)! = ∞ n=0 (−1)n+1 (z − π)2n (2n)! sin z = − sin(z − π) = − ∞ n=0 (−1)n (z − π)2n+1 (2n + 1)! = ∞ n=0 (−1)n+1 (z − π)2n+1 (2n + 1)! Solution 12.23 CONTINUE Solution 12.24 1. (a) f(z) = e−z f(0) = 1 f (0) = −1 f (0) = 1 e−z = 1 − z + z2 2 + O z3 Since e−z is entire, the Taylor series converges in the complex plane. (b) f(z) = 1 + z 1 − z , f(ı) = ı f (z) = 2 (1 − z)2 , f (ı) = ı f (z) = 4 (1 − z)3 , f (ı) = −1 + ı 1 + z 1 − z = ı + ı(z − ı) + −1 + ı 2 (z − ı)2 + O (z − ı)3 Since the nearest singularity, (at z = 1), is a distance of √ 2 from z0 = ı, the radius of convergence is √ 2. The series converges absolutely for |z − ı| < √ 2. (c) ez z − 1 = − 1 + z + z2 2 + O z3 1 + z + z2 + O z3 = −1 − 2z − 5 2 z2 + O z3 Since the nearest singularity, (at z = 1), is a distance of 1 from z0 = 0, the radius of convergence is 1. The series converges absolutely for |z| < 1. 373
  • 394. 2. Since f(z) is analytic in |z − z0| < R, its Taylor series converges absolutely on this domain. f(z) = ∞ n=0 f(n) (z0)zn n! The Taylor series converges uniformly on any closed sub-domain of |z − z0| < R. We consider the sub-domain |z − z0| ≤ ρ < R. On the domain of uniform convergence we can interchange differentiation and summation. f (z) = d dz ∞ n=0 f(n) (z0)zn n! f (z) = ∞ n=1 nf(n) (z0)zn−1 n! f (z) = ∞ n=0 f(n+1) (z0)zn n! Note that this is the Taylor series that we could obtain directly for f (z). Since f(z) is analytic on |z − z0| < R so is f (z). f (z) = ∞ n=0 f(n+1) (z0)zn n! 3. 1 (1 − z)3 = d2 dz2 1 2 1 1 − z = 1 2 d2 dz2 ∞ n=0 zn = 1 2 ∞ n=2 n(n − 1)zn−2 = 1 2 ∞ n=0 (n + 2)(n + 1)zn The radius of convergence is 1, which is the distance to the nearest singularity at z = 1. 4. The Taylor series expansion of f(z) about z = 0 is f(z) = ∞ n=0 f(n) (0) n! zn . We compute the derivatives of f(z). f(n) (z) = n−1 k=0 (ı − k) (1 + z)ı−n . Now we determine the coefficients in the series. f(n) (0) = n−1 k=0 (ı − k) (1 + z)ı = ∞ n=0 n−1 k=0 (ı − k) n! zn 374
  • 395. The radius of convergence of the series is the distance to the nearest singularity of (1 + z)ı . This occurs at z = −1. Thus the series converges for |z| < 1. We can corroborate this with the ratio test. We compute the radius of convergence. R = lim n→∞ n−1 k=0 (ı − k) /n! ( n k=0(ı − k)) /(n + 1)! = lim n→∞ n + 1 ı − n = 1 If we use the binomial coefficient, α n ≡ n−1 k=0 (α − k) n! , then we can write the series in a compact form. (1 + z)ı = ∞ n=0 ı n zn Solution 12.25 For |z| < 1: 1 z − ı = ı 1 + ız = ı ∞ n=0 (−ız)n (Note that |z| < 1 ⇔ | − ız| < 1.) For |z| > 1: 1 z − ı = 1 z 1 (1 − ı/z) (Note that |z| > 1 ⇔ | − ı/z| < 1.) = 1 z ∞ n=0 ı z n = 1 z 0 n=−∞ ı−n zn = 0 n=−∞ (−ı)n zn−1 = −1 n=−∞ (−ı)n+1 zn Solution 12.26 We expand the function in partial fractions. f(z) = 1 (z + 1)(z + 2) = 1 z + 1 − 1 z + 2 375
  • 396. The Taylor series about z = 0 for 1/(z + 1) is 1 1 + z = 1 1 − (−z) = ∞ n=0 (−z)n , for |z| < 1 = ∞ n=0 (−1)n zn , for |z| < 1 The series about z = ∞ for 1/(z + 1) is 1 1 + z = 1/z 1 + 1/z = 1 z ∞ n=0 (−1/z)n , for |1/z| < 1 = ∞ n=0 (−1)n z−n−1 , for |z| > 1 = −1 n=−∞ (−1)n+1 zn , for |z| > 1 The Taylor series about z = 0 for 1/(z + 2) is 1 2 + z = 1/2 1 + z/2 = 1 2 ∞ n=0 (−z/2)n , for |z/2| < 1 = ∞ n=0 (−1)n 2n+1 zn , for |z| < 2 The series about z = ∞ for 1/(z + 2) is 1 2 + z = 1/z 1 + 2/z = 1 z ∞ n=0 (−2/z)n , for |2/z| < 1 = ∞ n=0 (−1)n 2n z−n−1 , for |z| > 2 = −1 n=−∞ (−1)n+1 2n+1 zn , for |z| > 2 To find the expansions in the three regions, we just choose the appropriate series. 1. f(z) = 1 1 + z − 1 2 + z = ∞ n=0 (−1)n zn − ∞ n=0 (−1)n 2n+1 zn , for |z| < 1 = ∞ n=0 (−1)n 1 − 1 2n+1 zn , for |z| < 1 376
  • 397. f(z) = ∞ n=0 (−1)n 2n+1 − 1 2n+1 zn , for |z| < 1 2. f(z) = 1 1 + z − 1 2 + z f(z) = −1 n=−∞ (−1)n+1 zn − ∞ n=0 (−1)n 2n+1 zn , for 1 < |z| < 2 3. f(z) = 1 1 + z − 1 2 + z = −1 n=−∞ (−1)n+1 zn − −1 n=−∞ (−1)n+1 2n+1 zn , for 2 < |z| f(z) = −1 n=−∞ (−1)n+1 2n+1 − 1 2n+1 zn , for 2 < |z| Solution 12.27 Laurent Series. We assume that m is a non-negative integer and that n is an integer. The Laurent series about the point z = 0 of f(z) = z + 1 z m is f(z) = ∞ n=−∞ anzn where an = 1 ı2π C f(z) zn+1 dz and C is a contour going around the origin once in the positive direction. We manipulate the coefficient integral into the desired form. an = 1 ı2π C (z + 1/z)m zn+1 dz = 1 ı2π 2π 0 (eıθ + e−ıθ )m eı(n+1)θ ı eıθ dθ = 1 2π 2π 0 2m cosm θ e−ınθ dθ = 2m−1 π 2π 0 cosm θ(cos(nθ) − ı sin(nθ)) dθ Note that cosm θ is even and sin(nθ) is odd about θ = π. = 2m−1 π 2π 0 cosm θ cos(nθ) dθ 377
  • 398. Binomial Series. Now we find the binomial series expansion of f(z). z + 1 z m = m n=0 m n zm−n 1 z n = m n=0 m n zm−2n = m n=−m m−n even m (m − n)/2 zn The coefficients in the series f(z) = ∞ n=−∞ anzn are an = m (m−n)/2 −m ≤ n ≤ m and m − n even 0 otherwise By equating the coefficients found by the two methods, we evaluate the desired integral. 2π 0 (cos θ)m cos(nθ) dθ = π 2m−1 m (m−n)/2 −m ≤ n ≤ m and m − n even 0 otherwise Solution 12.28 First we write f(z) in the form f(z) = g(z) (z − ı/2)(z − 2)2 . g(z) is an entire function which grows no faster that z3 at infinity. By expanding g(z) in a Taylor series about the origin, we see that it is a polynomial of degree no greater than 3. f(z) = αz3 + βz2 + γz + δ (z − ı/2)(z − 2)2 Since f(z) is a rational function we expand it in partial fractions to obtain a form that is convenient to integrate. f(z) = a z − ı/2 + b z − 2 + c (z − 2)2 + d We use the value of the integrals of f(z) to determine the constants, a, b, c and d. |z|=1 a z − ı/2 + b z − 2 + c (z − 2)2 + d dz = ı2π ı2πa = ı2π a = 1 |z|=3 1 z − ı/2 + b z − 2 + c (z − 2)2 + d dz = 0 ı2π(1 + b) = 0 b = −1 Note that by applying the second constraint, we can change the third constraint to |z|=3 zf(z) dz = 0. 378
  • 399. |z|=3 z 1 z − ı/2 − 1 z − 2 + c (z − 2)2 + d dz = 0 |z|=3 (z − ı/2) + ı/2 z − ı/2 − (z − 2) + 2 z − 2 + c(z − 2) + 2c (z − 2)2 dz = 0 ı2π ı 2 − 2 + c = 0 c = 2 − ı 2 Thus we see that the function is f(z) = 1 z − ı/2 − 1 z − 2 + 2 − ı/2 (z − 2)2 + d, where d is an arbitrary constant. We can also write the function in the form: f(z) = dz3 + 15 − ı8 4(z − ı/2)(z − 2)2 . Complete Laurent Series. We find the complete Laurent series about z = 0 for each of the terms in the partial fraction expansion of f(z). 1 z − ı/2 = ı2 1 + ı2z = ı2 ∞ n=0 (−ı2z)n , for | − ı2z| < 1 = − ∞ n=0 (−ı2)n+1 zn , for |z| < 1/2 1 z − ı/2 = 1/z 1 − ı/(2z) = 1 z ∞ n=0 ı 2z n , for |ı/(2z)| < 1 = ∞ n=0 ı 2 n z−n−1 , for |z| < 2 = −1 n=−∞ ı 2 −n−1 zn , for |z| < 2 = −1 n=−∞ (−ı2)n+1 zn , for |z| < 2 − 1 z − 2 = 1/2 1 − z/2 = 1 2 ∞ n=0 z 2 n , for |z/2| < 1 = ∞ n=0 zn 2n+1 , for |z| < 2 379
  • 400. − 1 z − 2 = − 1/z 1 − 2/z = − 1 z ∞ n=0 2 z n , for |2/z| < 1 = − ∞ n=0 2n z−n−1 , for |z| > 2 = − −1 n=−∞ 2−n−1 zn , for |z| > 2 2 − ı/2 (z − 2)2 = (2 − ı/2) 1 4 (1 − z/2)−2 = 4 − ı 8 ∞ n=0 −2 n − z 2 n , for |z/2| < 1 = 4 − ı 8 ∞ n=0 (−1)n (n + 1)(−1)n 2−n zn , for |z| < 2 = 4 − ı 8 ∞ n=0 n + 1 2n zn , for |z| < 2 2 − ı/2 (z − 2)2 = 2 − ı/2 z2 1 − 2 z −2 = 2 − ı/2 z2 ∞ n=0 −2 n − 2 z n , for |2/z| < 1 = (2 − ı/2) ∞ n=0 (−1)n (n + 1)(−1)n 2n z−n−2 , for |z| > 2 = (2 − ı/2) −2 n=−∞ (−n − 1)2−n−2 zn , for |z| > 2 = −(2 − ı/2) −2 n=−∞ n + 1 2n+2 zn , for |z| > 2 We take the appropriate combination of these series to find the Laurent series expansions in the regions: |z| < 1/2, 1/2 < |z| < 2 and 2 < |z|. For |z| < 1/2, we have f(z) = − ∞ n=0 (−ı2)n+1 zn + ∞ n=0 zn 2n+1 + 4 − ı 8 ∞ n=0 n + 1 2n zn + d f(z) = ∞ n=0 −(−ı2)n+1 + 1 2n+1 + 4 − ı 8 n + 1 2n zn + d f(z) = ∞ n=0 −(−ı2)n+1 + 1 2n+1 1 + 4 − ı 4 (n + 1) zn + d, for |z| < 1/2 380
  • 401. For 1/2 < |z| < 2, we have f(z) = −1 n=−∞ (−ı2)n+1 zn + ∞ n=0 zn 2n+1 + 4 − ı 8 ∞ n=0 n + 1 2n zn + d f(z) = −1 n=−∞ (−ı2)n+1 zn + ∞ n=0 1 2n+1 1 + 4 − ı 4 (n + 1) zn + d, for 1/2 < |z| < 2 For 2 < |z|, we have f(z) = −1 n=−∞ (−ı2)n+1 zn − −1 n=−∞ 2−n−1 zn − (2 − ı/2) −2 n=−∞ n + 1 2n+2 zn + d f(z) = −2 n=−∞ (−ı2)n+1 − 1 2n+1 (1 + (1 − ı/4)(n + 1)) zn + d, for 2 < |z| Solution 12.29 The radius of convergence of the series for f(z) is R = lim n→∞ k3 /3k (k + 1)3/3k+1 = 3 lim n→∞ k3 (k + 1)3 = 3. Thus f(z) is a function which is analytic inside the circle of radius 3. 1. The integrand is analytic. Thus by Cauchy’s theorem the value of the integral is zero. |z|=1 eız f(z) dz = 0 2. We use Cauchy’s integral formula to evaluate the integral. |z|=1 f(z) z4 dz = ı2π 3! f(3) (0) = ı2π 3! 3!33 33 = ı2π |z|=1 f(z) z4 dz = ı2π 3. We use Cauchy’s integral formula to evaluate the integral. |z|=1 f(z) ez z2 dz = ı2π 1! d dz (f(z) ez ) z=0 = ı2π 1!13 31 |z|=1 f(z) ez z2 dz = ı2π 3 Solution 12.30 1. (a) 1 z(1 − z) = 1 z + 1 1 − z = 1 z + ∞ n=0 zn , for 0 < |z| < 1 = 1 z + ∞ n=−1 zn , for 0 < |z| < 1 381
  • 402. (b) 1 z(1 − z) = 1 z + 1 1 − z = 1 z − 1 z 1 1 − 1/z = 1 z − 1 z ∞ n=0 1 z n , for |z| > 1 = − 1 z ∞ n=1 z−n , for |z| > 1 = − −∞ n=−2 zn , for |z| > 1 (c) 1 z(1 − z) = 1 z + 1 1 − z = 1 (z + 1) − 1 + 1 2 − (z + 1) = 1 (z + 1) 1 1 − 1/(z + 1) − 1 (z + 1) 1 1 − 2/(z + 1) , for |z + 1| > 1 and |z + 1| > 2 = 1 (z + 1) ∞ n=0 1 (z + 1)n − 1 (z + 1) ∞ n=0 2n (z + 1)n , for |z + 1| > 1 and |z + 1| > 2 = 1 (z + 1) ∞ n=0 1 − 2n (z + 1)n , for |z + 1| > 2 = ∞ n=1 1 − 2n (z + 1)n+1 , for |z + 1| > 2 = −∞ n=−2 1 − 2−n−1 (z + 1)n , for |z + 1| > 2 2. First we factor the denominator of f(z) = 1/(z4 + 4). z4 + 4 = (z − 1 − ı)(z − 1 + ı)(z + 1 − ı)(z + 1 + ı) We look for an annulus about z = 1 containing the point z = ı where f(z) is analytic. The singularities at z = 1 ± ı are a distance of 1 from z = 1; the singularities at z = −1 ± ı are at a distance of √ 5. Since f(z) is analytic in the domain 1 < |z − 1| < √ 5 there is a convergent Laurent series in that domain. 382
Chapter 13

The Residue Theorem

    Man will occasionally stumble over the truth, but most of the time he will pick himself up and continue on.

    - Winston Churchill

13.1 The Residue Theorem

We will find that many integrals on closed contours may be evaluated in terms of the residues of a function. We first define residues and then prove the Residue Theorem.

Result 13.1.1 Residues. Let f(z) be single-valued and analytic in a deleted neighborhood of z0. Then f(z) has the Laurent series expansion

    f(z) = \sum_{n=-\infty}^{\infty} a_n (z - z_0)^n.

The residue of f(z) at z = z0 is the coefficient of the 1/(z - z_0) term:

    Res(f(z), z_0) = a_{-1}.

The residue at a branch point or non-isolated singularity is undefined as the Laurent series does not exist. If f(z) has a pole of order n at z = z0 then we can use the Residue Formula:

    Res(f(z), z_0) = \lim_{z \to z_0} \frac{1}{(n-1)!} \frac{d^{n-1}}{dz^{n-1}} \left[ (z - z_0)^n f(z) \right].

See Exercise 13.4 for a proof of the Residue Formula.

Example 13.1.1 In Example 8.4.5 we showed that f(z) = z/ sin z has first order poles at z = nπ,
[Figure 13.1: Deform the contour to lie in the deleted disk.]

n ∈ Z \ {0}. Now we find the residues at these isolated singularities.

    Res\left( \frac{z}{\sin z}, z = nπ \right) = \lim_{z \to nπ} (z - nπ) \frac{z}{\sin z} = nπ \lim_{z \to nπ} \frac{z - nπ}{\sin z} = nπ \lim_{z \to nπ} \frac{1}{\cos z} = nπ \frac{1}{(-1)^n} = (-1)^n nπ

Residue Theorem. We can evaluate many integrals in terms of the residues of a function. Suppose f(z) has only one singularity, (at z = z0), inside the simple, closed, positively oriented contour C. f(z) has a convergent Laurent series in some deleted disk about z0. We deform C to lie in the disk. See Figure 13.1. We now evaluate \int_C f(z) dz by deforming the contour and using the Laurent series expansion of the function.

    \int_C f(z)\,dz = \int_B f(z)\,dz
                    = \int_B \sum_{n=-\infty}^{\infty} a_n (z - z_0)^n\,dz
                    = \sum_{\substack{n=-\infty \\ n \neq -1}}^{\infty} a_n \left[ \frac{(z - z_0)^{n+1}}{n+1} \right]_{r e^{ıθ}}^{r e^{ı(θ+2π)}} + a_{-1} \left[ \log(z - z_0) \right]_{r e^{ıθ}}^{r e^{ı(θ+2π)}}
                    = a_{-1}\, ı2π

    \int_C f(z)\,dz = ı2π\,Res(f(z), z_0)

Now assume that f(z) has n singularities at {z1, . . . , zn}. We deform C to n contours C1, . . . , Cn which enclose the singularities and lie in deleted disks about the singularities in which f(z) has convergent Laurent series. See Figure 13.2. We evaluate \int_C f(z) dz by deforming the contour.

    \int_C f(z)\,dz = \sum_{k=1}^{n} \int_{C_k} f(z)\,dz = ı2π \sum_{k=1}^{n} Res(f(z), z_k)

Now instead let f(z) be analytic outside and on C except for isolated singularities at {ζn} in the domain outside C and perhaps an isolated singularity at infinity. Let a be any point in the interior of C. To evaluate \int_C f(z) dz we make the change of variables ζ = 1/(z − a). This maps the contour C to C'. (Note that C' is negatively oriented.) All the points outside C are mapped to points inside C' and vice versa. We can then evaluate the integral in terms of the singularities inside C'.
[Figure 13.2: Deform the contour to n contours which enclose the n singularities.]

[Figure 13.3: The change of variables ζ = 1/(z − a).]

    \int_C f(z)\,dz = \int_{C'} f\left( \frac{1}{ζ} + a \right) \frac{-1}{ζ^2}\,dζ
                    = \int_{-C'} \frac{1}{z^2} f\left( \frac{1}{z} + a \right) dz
                    = ı2π \sum_n Res\left( \frac{1}{z^2} f\left( \frac{1}{z} + a \right), \frac{1}{ζ_n - a} \right) + ı2π\,Res\left( \frac{1}{z^2} f\left( \frac{1}{z} + a \right), 0 \right).

Result 13.1.2 Residue Theorem. If f(z) is analytic in a compact, closed, connected domain D except for isolated singularities at {zn} in the interior of D then

    \int_{\partial D} f(z)\,dz = \sum_k \int_{C_k} f(z)\,dz = ı2π \sum_n Res(f(z), z_n).

Here the set of contours {Ck} make up the positively oriented boundary ∂D of the domain D. If the boundary of the domain is a single contour C then the formula simplifies.

    \int_C f(z)\,dz = ı2π \sum_n Res(f(z), z_n)

If instead f(z) is analytic outside and on C except for isolated singularities at {ζn} in the domain outside C and perhaps an isolated singularity at infinity then

    \int_C f(z)\,dz = ı2π \sum_n Res\left( \frac{1}{z^2} f\left( \frac{1}{z} + a \right), \frac{1}{ζ_n - a} \right) + ı2π\,Res\left( \frac{1}{z^2} f\left( \frac{1}{z} + a \right), 0 \right).

Here a is any point in the interior of C.
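As a quick sanity check on Result 13.1.2 (an aside, not part of the text), the following sketch integrates a rational function around the unit circle with the trapezoid rule and compares the result with ı2π times the sum of the residues inside. Python with numpy is an assumed tool here.

```python
import numpy as np

def contour_integral(f, center=0.0, radius=1.0, n=20000):
    """Trapezoid rule for the integral of f over the positively
    oriented circle |z - center| = radius; for smooth periodic
    integrands this converges very quickly."""
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    z = center + radius * np.exp(1j * theta)
    dz = 1j * radius * np.exp(1j * theta)  # dz/dtheta
    return 2.0 * np.pi * np.mean(f(z) * dz)

# One first order pole inside with residue 1; the double pole at
# z = -1/2 contributes no residue (no 1/(z + 1/2) term).
f = lambda z: 1.0 / (z - 0.5) + 1.0 / (z + 0.5) ** 2
print(contour_integral(f))   # approx 6.2832j
print(2j * np.pi)            # i*2*pi times the residue sum
```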
Example 13.1.2 Consider

    \frac{1}{ı2π} \int_C \frac{\sin z}{z(z - 1)}\,dz

where C is the positively oriented circle of radius 2 centered at the origin. Since the integrand is single-valued with only isolated singularities, the Residue Theorem applies. The value of the integral is the sum of the residues from singularities inside the contour.
    The only places that the integrand could have singularities are z = 0 and z = 1. Since

    \lim_{z \to 0} \frac{\sin z}{z} = \lim_{z \to 0} \frac{\cos z}{1} = 1,

there is a removable singularity at the point z = 0. There is no residue at this point. Now we consider the point z = 1. Since sin(z)/z is analytic and nonzero at z = 1, that point is a first order pole of the integrand. The residue there is

    Res\left( \frac{\sin z}{z(z - 1)}, z = 1 \right) = \lim_{z \to 1} (z - 1) \frac{\sin z}{z(z - 1)} = \sin(1).

There is only one singular point with a residue inside the path of integration. The residue at this point is sin(1). Thus the value of the integral is

    \frac{1}{ı2π} \int_C \frac{\sin z}{z(z - 1)}\,dz = \sin(1)

Example 13.1.3 Evaluate the integral

    \int_C \frac{\cot z \coth z}{z^3}\,dz

where C is the unit circle about the origin in the positive direction. The integrand is

    \frac{\cot z \coth z}{z^3} = \frac{\cos z \cosh z}{z^3 \sin z \sinh z}

sin z has zeros at nπ. sinh z has zeros at ınπ. Thus the only pole inside the contour of integration is at z = 0. Since sin z and sinh z both have simple zeros at z = 0,

    \sin z = z + O(z^3), \quad \sinh z = z + O(z^3),
the integrand has a pole of order 5 at the origin. The residue at z = 0 is

    \lim_{z \to 0} \frac{1}{4!} \frac{d^4}{dz^4} \left( z^5 \frac{\cot z \coth z}{z^3} \right) = \lim_{z \to 0} \frac{1}{4!} \frac{d^4}{dz^4} \left( z^2 \cot z \coth z \right)
    = \frac{1}{4!} \lim_{z \to 0} \Big( 24 \cot z \coth z \csc^2 z - 32 z \coth z \csc^4 z - 16 z \cos(2z) \coth z \csc^4 z
        + 22 z^2 \cot z \coth z \csc^4 z + 2 z^2 \cos(3z) \coth z \csc^5 z + 24 \cot z \coth z \operatorname{csch}^2 z
        + 24 \csc^2 z \operatorname{csch}^2 z - 48 z \cot z \csc^2 z \operatorname{csch}^2 z - 48 z \coth z \csc^2 z \operatorname{csch}^2 z
        + 24 z^2 \cot z \coth z \csc^2 z \operatorname{csch}^2 z + 16 z^2 \csc^4 z \operatorname{csch}^2 z + 8 z^2 \cos(2z) \csc^4 z \operatorname{csch}^2 z
        - 32 z \cot z \operatorname{csch}^4 z - 16 z \cosh(2z) \cot z \operatorname{csch}^4 z + 22 z^2 \cot z \coth z \operatorname{csch}^4 z
        + 16 z^2 \csc^2 z \operatorname{csch}^4 z + 8 z^2 \cosh(2z) \csc^2 z \operatorname{csch}^4 z + 2 z^2 \cosh(3z) \cot z \operatorname{csch}^5 z \Big)
    = \frac{1}{4!} \left( -\frac{56}{15} \right)
    = -\frac{7}{45}

Since taking the fourth derivative of z² cot z coth z really sucks, we would like a more elegant way of finding the residue. We expand the functions in the integrand in Taylor series about the origin.

    \frac{\cos z \cosh z}{z^3 \sin z \sinh z}
      = \frac{ \left( 1 - \frac{z^2}{2} + \frac{z^4}{24} - \cdots \right) \left( 1 + \frac{z^2}{2} + \frac{z^4}{24} + \cdots \right) }{ z^3 \left( z - \frac{z^3}{6} + \frac{z^5}{120} - \cdots \right) \left( z + \frac{z^3}{6} + \frac{z^5}{120} + \cdots \right) }
      = \frac{ 1 - \frac{z^4}{6} + \cdots }{ z^3 \left( z^2 + z^6 \left( -\frac{1}{36} + \frac{1}{60} \right) + \cdots \right) }
      = \frac{1}{z^5} \frac{ 1 - \frac{z^4}{6} + \cdots }{ 1 - \frac{z^4}{90} + \cdots }
      = \frac{1}{z^5} \left( 1 - \frac{z^4}{6} + \cdots \right) \left( 1 + \frac{z^4}{90} + \cdots \right)
      = \frac{1}{z^5} \left( 1 - \frac{7}{45} z^4 + \cdots \right)
      = \frac{1}{z^5} - \frac{7}{45} \frac{1}{z} + \cdots

Thus we see that the residue is −7/45. Now we can evaluate the integral.

    \int_C \frac{\cot z \coth z}{z^3}\,dz = -ı \frac{14}{45} π
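If symbolic software is at hand, the residue can also be cross-checked without the painful fourth derivative. A minimal sketch, assuming the sympy library (not part of the text):

```python
import sympy as sp

z = sp.symbols('z')
# Residue of cot(z)*coth(z)/z**3 at z = 0, found via series expansion.
print(sp.residue(sp.cot(z) * sp.coth(z) / z**3, z, 0))  # -7/45
```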
13.2 Cauchy Principal Value for Real Integrals

13.2.1 The Cauchy Principal Value

First we recap improper integrals. If f(x) has a singularity at x0 ∈ (a . . . b) then

    \int_a^b f(x)\,dx ≡ \lim_{ε \to 0^+} \int_a^{x_0 - ε} f(x)\,dx + \lim_{δ \to 0^+} \int_{x_0 + δ}^b f(x)\,dx.

For integrals on (−∞ . . . ∞),

    \int_{-∞}^{∞} f(x)\,dx ≡ \lim_{a \to -∞,\ b \to ∞} \int_a^b f(x)\,dx.

Example 13.2.1 \int_{-1}^{1} \frac{1}{x}\,dx is divergent. We show this with the definition of improper integrals.

    \int_{-1}^{1} \frac{1}{x}\,dx = \lim_{ε \to 0^+} \int_{-1}^{-ε} \frac{1}{x}\,dx + \lim_{δ \to 0^+} \int_{δ}^{1} \frac{1}{x}\,dx
                                 = \lim_{ε \to 0^+} [\ln |x|]_{-1}^{-ε} + \lim_{δ \to 0^+} [\ln |x|]_{δ}^{1}
                                 = \lim_{ε \to 0^+} \ln ε - \lim_{δ \to 0^+} \ln δ

The integral diverges because ε and δ approach zero independently.
    Since 1/x is an odd function, it appears that the area under the curve is zero. Consider what would happen if ε and δ were not independent. If they approached zero symmetrically, δ = ε, then the value of the integral would be zero.

    \lim_{ε \to 0^+} \left( \int_{-1}^{-ε} + \int_{ε}^{1} \right) \frac{1}{x}\,dx = \lim_{ε \to 0^+} (\ln ε - \ln ε) = 0

We could make the integral have any value we pleased by choosing δ = cε.¹

    \lim_{ε \to 0^+} \left( \int_{-1}^{-ε} + \int_{cε}^{1} \right) \frac{1}{x}\,dx = \lim_{ε \to 0^+} (\ln ε - \ln(cε)) = -\ln c

We have seen it is reasonable that \int_{-1}^{1} \frac{1}{x}\,dx has some meaning, and if we could evaluate the integral, the most reasonable value would be zero. The Cauchy principal value provides us with a way of evaluating such integrals. If f(x) is continuous on (a, b) except at the point x0 ∈ (a, b) then the Cauchy principal value of the integral is defined

    ⨍_a^b f(x)\,dx = \lim_{ε \to 0^+} \left( \int_a^{x_0 - ε} f(x)\,dx + \int_{x_0 + ε}^b f(x)\,dx \right).

The Cauchy principal value is obtained by approaching the singularity symmetrically. The principal value of the integral may exist when the integral diverges. If the integral exists, it is equal to the principal value of the integral.
    The Cauchy principal value of \int_{-1}^{1} \frac{1}{x}\,dx is defined

    ⨍_{-1}^{1} \frac{1}{x}\,dx ≡ \lim_{ε \to 0^+} \left( \int_{-1}^{-ε} \frac{1}{x}\,dx + \int_{ε}^{1} \frac{1}{x}\,dx \right)
                               = \lim_{ε \to 0^+} \left( [\log |x|]_{-1}^{-ε} + [\log |x|]_{ε}^{1} \right)
                               = \lim_{ε \to 0^+} (\log |{-ε}| - \log |ε|)
                               = 0.

(Another notation for the principal value of an integral is PV \int f(x)\,dx.) Since the limits of integration approach zero symmetrically, the two halves of the integral cancel. If the limits of integration approached zero independently, (the definition of the integral), then the two halves would both diverge.

¹This may remind you of conditionally convergent series. You can rearrange the terms to make the series sum to any number.
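The dependence on how the cutoffs approach zero is easy to see numerically. A rough sketch, assuming Python with scipy (not part of the text); two_sided is a helper introduced only for this illustration:

```python
import numpy as np
from scipy.integrate import quad

def two_sided(f, eps, delta):
    """Integral of f on (-1, -eps) plus the integral on (delta, 1)."""
    return quad(f, -1.0, -eps)[0] + quad(f, delta, 1.0)[0]

f = lambda x: 1.0 / x
for eps in (1e-2, 1e-4, 1e-6):
    print(two_sided(f, eps, eps),         # symmetric cutoffs: the PV, 0
          two_sided(f, eps, 3.0 * eps))   # delta = 3*eps: tends to -log 3
print(-np.log(3.0))
```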
Example 13.2.2 \int_{-∞}^{∞} \frac{x}{x^2+1}\,dx is divergent. We show this with the definition of improper integrals.

    \int_{-∞}^{∞} \frac{x}{x^2 + 1}\,dx = \lim_{a \to -∞,\ b \to ∞} \int_a^b \frac{x}{x^2 + 1}\,dx
                                        = \lim_{a \to -∞,\ b \to ∞} \left[ \frac{1}{2} \ln(x^2 + 1) \right]_a^b
                                        = \frac{1}{2} \lim_{a \to -∞,\ b \to ∞} \ln \frac{b^2 + 1}{a^2 + 1}

The integral diverges because a and b approach infinity independently. Now consider what would happen if a and b were not independent. If they approached infinity symmetrically, a = −b, then the value of the integral would be zero.

    \frac{1}{2} \lim_{b \to ∞} \ln \frac{b^2 + 1}{b^2 + 1} = 0

We could make the integral have any value we pleased by choosing a = −cb.
    We can assign a meaning to divergent integrals of the form \int_{-∞}^{∞} f(x)\,dx with the Cauchy principal value. The Cauchy principal value of the integral is defined

    ⨍_{-∞}^{∞} f(x)\,dx = \lim_{a \to ∞} \int_{-a}^{a} f(x)\,dx.

The Cauchy principal value is obtained by approaching infinity symmetrically.
    The Cauchy principal value of \int_{-∞}^{∞} \frac{x}{x^2+1}\,dx is defined

    ⨍_{-∞}^{∞} \frac{x}{x^2 + 1}\,dx = \lim_{a \to ∞} \int_{-a}^{a} \frac{x}{x^2 + 1}\,dx = \lim_{a \to ∞} \left[ \frac{1}{2} \ln(x^2 + 1) \right]_{-a}^{a} = 0.
Result 13.2.1 Cauchy Principal Value. If f(x) is continuous on (a, b) except at the point x0 ∈ (a, b) then the integral of f(x) is defined

    \int_a^b f(x)\,dx = \lim_{ε \to 0^+} \int_a^{x_0 - ε} f(x)\,dx + \lim_{δ \to 0^+} \int_{x_0 + δ}^b f(x)\,dx.

The Cauchy principal value of the integral is defined

    ⨍_a^b f(x)\,dx = \lim_{ε \to 0^+} \left( \int_a^{x_0 - ε} f(x)\,dx + \int_{x_0 + ε}^b f(x)\,dx \right).

If f(x) is continuous on (−∞, ∞) then the integral of f(x) is defined

    \int_{-∞}^{∞} f(x)\,dx = \lim_{a \to -∞,\ b \to ∞} \int_a^b f(x)\,dx.

The Cauchy principal value of the integral is defined

    ⨍_{-∞}^{∞} f(x)\,dx = \lim_{a \to ∞} \int_{-a}^{a} f(x)\,dx.

The principal value of the integral may exist when the integral diverges. If the integral exists, it is equal to the principal value of the integral.

Example 13.2.3 Clearly \int_{-∞}^{∞} x\,dx diverges, however the Cauchy principal value exists.

    ⨍_{-∞}^{∞} x\,dx = \lim_{a \to ∞} \left[ \frac{x^2}{2} \right]_{-a}^{a} = 0

In general, if f(x) is an odd function with no singularities on the finite real axis then

    ⨍_{-∞}^{∞} f(x)\,dx = 0.

13.3 Cauchy Principal Value for Contour Integrals

Example 13.3.1 Consider the integral

    \int_{C_r} \frac{1}{z - 1}\,dz,

where Cr is the positively oriented circle of radius r and center at the origin. From the residue theorem, we know that the integral is

    \int_{C_r} \frac{1}{z - 1}\,dz = \begin{cases} 0 & \text{for } r < 1, \\ ı2π & \text{for } r > 1. \end{cases}

When r = 1, the integral diverges, as there is a first order pole on the path of integration. However, the principal value of the integral exists.

    ⨍_{C_r} \frac{1}{z - 1}\,dz = \lim_{ε \to 0^+} \int_{ε}^{2π - ε} \frac{1}{e^{ıθ} - 1}\, ı e^{ıθ}\,dθ
                               = \lim_{ε \to 0^+} \left[ \log\left( e^{ıθ} - 1 \right) \right]_{ε}^{2π - ε}
[Figure 13.4: The Cε contour, a circular arc of angle β − α and radius ε about z0.]

We choose the branch of the logarithm with a branch cut on the positive real axis and arg z ∈ (0, 2π).

    = \lim_{ε \to 0^+} \left( \log\left( e^{ı(2π-ε)} - 1 \right) - \log\left( e^{ıε} - 1 \right) \right)
    = \lim_{ε \to 0^+} \left( \log\left( 1 - ıε + O(ε^2) - 1 \right) - \log\left( 1 + ıε + O(ε^2) - 1 \right) \right)
    = \lim_{ε \to 0^+} \left( \log\left( -ıε + O(ε^2) \right) - \log\left( ıε + O(ε^2) \right) \right)
    = \lim_{ε \to 0^+} \left( \operatorname{Log}\left( ε + O(ε^2) \right) + ı \arg\left( -ıε + O(ε^2) \right) - \operatorname{Log}\left( ε + O(ε^2) \right) - ı \arg\left( ıε + O(ε^2) \right) \right)
    = ı \frac{3π}{2} - ı \frac{π}{2}
    = ıπ

Thus we obtain

    ⨍_{C_r} \frac{1}{z - 1}\,dz = \begin{cases} 0 & \text{for } r < 1, \\ ıπ & \text{for } r = 1, \\ ı2π & \text{for } r > 1. \end{cases}

In the above example we evaluated the contour integral by parameterizing the contour. This approach is only feasible when the integrand is simple. We would like to use the residue theorem to more easily evaluate the principal value of the integral. But before we do that, we will need a preliminary result.

Result 13.3.1 Let f(z) have a first order pole at z = z0 and let (z − z0)f(z) be analytic in some neighborhood of z0. Let the contour Cε be a circular arc from z0 + ε e^{ıα} to z0 + ε e^{ıβ}. (We assume that β > α and β − α < 2π.)

    \lim_{ε \to 0^+} \int_{C_ε} f(z)\,dz = ı(β - α)\,Res(f(z), z_0)

The contour is shown in Figure 13.4. (See Exercise 13.9 for a proof of this result.)

Example 13.3.2 Consider

    ⨍_C \frac{1}{z - 1}\,dz

where C is the unit circle. Let Cp be the circular arc of radius 1 that starts and ends a distance of ε from z = 1. Let Cε be the positive, circular arc of radius ε with center at z = 1 that joins the endpoints of Cp. Let Ci be the union of Cp and Cε. (Cp stands for Principal value Contour; Ci stands for Indented Contour.) Ci is an indented contour that avoids the first order pole at z = 1. Figure 13.5 shows the three contours.
[Figure 13.5: The indented contour.]

Note that the principal value of the integral is

    ⨍_C \frac{1}{z - 1}\,dz = \lim_{ε \to 0^+} \int_{C_p} \frac{1}{z - 1}\,dz.

We can calculate the integral along Ci with the residue theorem.

    \int_{C_i} \frac{1}{z - 1}\,dz = ı2π

We can calculate the integral along Cε using Result 13.3.1. Note that as ε → 0+, the contour becomes a semi-circle, a circular arc of π radians.

    \lim_{ε \to 0^+} \int_{C_ε} \frac{1}{z - 1}\,dz = ıπ\,Res\left( \frac{1}{z - 1}, 1 \right) = ıπ

Now we can write the principal value of the integral along C in terms of the two known integrals.

    ⨍_C \frac{1}{z - 1}\,dz = \int_{C_i} \frac{1}{z - 1}\,dz - \int_{C_ε} \frac{1}{z - 1}\,dz = ı2π - ıπ = ıπ

In the previous example, we formed an indented contour that included the first order pole. You can show that if we had indented the contour to exclude the pole, we would obtain the same result. (See Exercise 13.11.)
    We can extend the residue theorem to principal values of integrals. (See Exercise 13.10.)

Result 13.3.2 Residue Theorem for Principal Values. Let f(z) be analytic inside and on a simple, closed, positive contour C, except for isolated singularities at z1, . . . , zm inside the contour and first order poles at ζ1, . . . , ζn on the contour. Further, let the contour be C¹ (continuously differentiable) at the locations of these first order poles. (i.e., the contour does not have a corner at any of the first order poles.) Then the principal value of the integral of f(z) along C is

    ⨍_C f(z)\,dz = ı2π \sum_{j=1}^{m} Res(f(z), z_j) + ıπ \sum_{j=1}^{n} Res(f(z), ζ_j).
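Result 13.3.1 can be corroborated numerically by shrinking an arc around a first order pole. A sketch assuming Python with numpy (not part of the text); arc_integral is a helper introduced only here:

```python
import numpy as np

def arc_integral(f, z0, eps, alpha, beta, n=20001):
    """Trapezoid rule for the integral of f along the circular arc
    z = z0 + eps*exp(i*theta), alpha <= theta <= beta."""
    theta = np.linspace(alpha, beta, n)
    z = z0 + eps * np.exp(1j * theta)
    w = f(z) * 1j * eps * np.exp(1j * theta)  # f(z(theta)) * dz/dtheta
    return np.sum((w[:-1] + w[1:]) / 2.0) * (beta - alpha) / (n - 1)

f = lambda z: np.exp(z) / (z - 1.0)  # first order pole at 1, residue e
for eps in (1e-1, 1e-3, 1e-5):
    print(eps, arc_integral(f, 1.0, eps, 0.0, np.pi))
# The values tend to i*pi*e, i.e. i*(beta - alpha)*Res(f, 1).
```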
13.4 Integrals on the Real Axis

Example 13.4.1 We wish to evaluate the integral

    \int_{-∞}^{∞} \frac{1}{x^2 + 1}\,dx.

We can evaluate this integral directly using calculus.

    \int_{-∞}^{∞} \frac{1}{x^2 + 1}\,dx = [\arctan x]_{-∞}^{∞} = π

Now we will evaluate the integral using contour integration. Let CR be the semicircular arc from R to −R in the upper half plane. Let C be the union of CR and the interval [−R, R]. We can evaluate the integral along C with the residue theorem. The integrand has first order poles at z = ±ı. For R > 1, we have

    \int_C \frac{1}{z^2 + 1}\,dz = ı2π\,Res\left( \frac{1}{z^2 + 1}, ı \right) = ı2π \frac{1}{ı2} = π.

Now we examine the integral along CR. We use the maximum modulus integral bound to show that the value of the integral vanishes as R → ∞.

    \left| \int_{C_R} \frac{1}{z^2 + 1}\,dz \right| ≤ πR \max_{z \in C_R} \left| \frac{1}{z^2 + 1} \right| = πR \frac{1}{R^2 - 1} → 0 \text{ as } R → ∞.

Now we are prepared to evaluate the original real integral.

    \int_C \frac{1}{z^2 + 1}\,dz = π

    \int_{-R}^{R} \frac{1}{x^2 + 1}\,dx + \int_{C_R} \frac{1}{z^2 + 1}\,dz = π

We take the limit as R → ∞.

    \int_{-∞}^{∞} \frac{1}{x^2 + 1}\,dx = π

We would get the same result by closing the path of integration in the lower half plane. Note that in this case the closed contour would be in the negative direction.
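A quick numerical corroboration of this example (an aside, assuming Python with scipy):

```python
import numpy as np
from scipy.integrate import quad

val, err = quad(lambda x: 1.0 / (x**2 + 1.0), -np.inf, np.inf)
print(val, np.pi)  # both approx 3.14159
```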
If you are really observant, you may have noticed that we did something a little funny in evaluating

    \int_{-∞}^{∞} \frac{1}{x^2 + 1}\,dx.

The definition of this improper integral is

    \int_{-∞}^{∞} \frac{1}{x^2 + 1}\,dx = \lim_{a→+∞} \int_{-a}^{0} \frac{1}{x^2 + 1}\,dx + \lim_{b→+∞} \int_{0}^{b} \frac{1}{x^2 + 1}\,dx.

In the above example we instead computed

    \lim_{R→+∞} \int_{-R}^{R} \frac{1}{x^2 + 1}\,dx.

Note that for some integrands, the former and the latter are not the same. Consider the integral of x/(x² + 1).

    \int_{-∞}^{∞} \frac{x}{x^2 + 1}\,dx = \lim_{a→+∞} \int_{-a}^{0} \frac{x}{x^2 + 1}\,dx + \lim_{b→+∞} \int_{0}^{b} \frac{x}{x^2 + 1}\,dx
                                        = \lim_{a→+∞} \left( -\frac{1}{2} \log|a^2 + 1| \right) + \lim_{b→+∞} \left( \frac{1}{2} \log|b^2 + 1| \right)

Note that the limits do not exist and hence the integral diverges. We get a different result if the limits of integration approach infinity symmetrically.

    \lim_{R→+∞} \int_{-R}^{R} \frac{x}{x^2 + 1}\,dx = \lim_{R→+∞} \frac{1}{2} \left( \log|R^2 + 1| - \log|R^2 + 1| \right) = 0

(Note that the integrand is an odd function, so the integral from −R to R is zero.) We call this the principal value of the integral and denote it by writing “PV” in front of the integral sign or putting a dash through the integral.

    PV \int_{-∞}^{∞} f(x)\,dx ≡ ⨍_{-∞}^{∞} f(x)\,dx ≡ \lim_{R→+∞} \int_{-R}^{R} f(x)\,dx

The principal value of an integral may exist when the integral diverges. If the integral does converge, then it is equal to its principal value.
    We can use the method of Example 13.4.1 to evaluate the principal value of integrals of functions that vanish fast enough at infinity.
Result 13.4.1 Let f(z) be analytic except for isolated singularities, with only first order poles on the real axis. Let CR be the semi-circle from R to −R in the upper half plane. If

    \lim_{R→∞} \left( R \max_{z \in C_R} |f(z)| \right) = 0

then

    ⨍_{-∞}^{∞} f(x)\,dx = ı2π \sum_{k=1}^{m} Res(f(z), z_k) + ıπ \sum_{k=1}^{n} Res(f(z), x_k)

where z1, . . . , zm are the singularities of f(z) in the upper half plane and x1, . . . , xn are the first order poles on the real axis.
    Now let CR be the semi-circle from R to −R in the lower half plane. If

    \lim_{R→∞} \left( R \max_{z \in C_R} |f(z)| \right) = 0

then

    ⨍_{-∞}^{∞} f(x)\,dx = -ı2π \sum_{k=1}^{m} Res(f(z), z_k) - ıπ \sum_{k=1}^{n} Res(f(z), x_k)

where z1, . . . , zm are the singularities of f(z) in the lower half plane and x1, . . . , xn are the first order poles on the real axis.

This result is proved in Exercise 13.13. Of course we can use this result to evaluate the integrals of the form

    \int_0^∞ f(x)\,dx,

where f(x) is an even function.
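As a sketch of how Result 13.4.1 is used (an aside, not part of the text), take f(x) = 1/((x − 1)(x² + 1)), which has a first order pole on the real axis at x = 1 and a pole at z = ı in the upper half plane. The residues are Res(f, ı) = (−1 + ı)/4 and Res(f, 1) = 1/2, so the formula predicts the principal value ı2π(−1 + ı)/4 + ıπ/2 = −π/2. We can check this against the symmetric limit numerically, assuming Python with scipy:

```python
import numpy as np
from scipy.integrate import quad

f = lambda x: 1.0 / ((x - 1.0) * (x**2 + 1.0))
for eps in (1e-2, 1e-4):
    # Symmetric deleted neighborhood of the real-axis pole at x = 1.
    pv = (quad(f, -1e4, 1.0 - eps, limit=500)[0]
          + quad(f, 1.0 + eps, 1e4, limit=500)[0])
    print(eps, pv)
print(-np.pi / 2.0)  # the predicted principal value, approx -1.5708
```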
13.5 Fourier Integrals

In order to do Fourier transforms, which are useful in solving differential equations, it is necessary to be able to calculate Fourier integrals. Fourier integrals have the form

    \int_{-∞}^{∞} e^{ıωx} f(x)\,dx.

We evaluate these integrals by closing the path of integration in the lower or upper half plane and using techniques of contour integration.
    Consider the integral

    \int_0^{π/2} e^{-R \sin θ}\,dθ.

Since 2θ/π ≤ sin θ for 0 ≤ θ ≤ π/2,

    e^{-R \sin θ} ≤ e^{-R 2θ/π} \quad \text{for } 0 ≤ θ ≤ π/2

    \int_0^{π/2} e^{-R \sin θ}\,dθ ≤ \int_0^{π/2} e^{-R 2θ/π}\,dθ = \left[ -\frac{π}{2R} e^{-R 2θ/π} \right]_0^{π/2} = -\frac{π}{2R} \left( e^{-R} - 1 \right) ≤ \frac{π}{2R} → 0 \text{ as } R → ∞

We can use this to prove the following Result 13.5.1. (See Exercise 13.17.)

Result 13.5.1 Jordan’s Lemma.

    \int_0^{π} e^{-R \sin θ}\,dθ < \frac{π}{R}.

Suppose that f(z) vanishes as |z| → ∞. If ω is a (positive/negative) real number and CR is a semi-circle of radius R in the (upper/lower) half plane then the integral

    \int_{C_R} f(z) e^{ıωz}\,dz

vanishes as R → ∞.

We can use Jordan’s Lemma and the Residue Theorem to evaluate many Fourier integrals. Consider \int_{-∞}^{∞} f(x) e^{ıωx}\,dx, where ω is a positive real number. Let f(z) be analytic except for isolated singularities, with only first order poles on the real axis. Let C be the contour from −R to R on the real axis and then back to −R along a semi-circle in the upper half plane. If R is large enough so that C encloses all the singularities of f(z) in the upper half plane then

    \int_C f(z) e^{ıωz}\,dz = ı2π \sum_{k=1}^{m} Res(f(z) e^{ıωz}, z_k) + ıπ \sum_{k=1}^{n} Res(f(z) e^{ıωz}, x_k)

where z1, . . . , zm are the singularities of f(z) in the upper half plane and x1, . . . , xn are the first order poles on the real axis. If f(z) vanishes as |z| → ∞ then the integral on CR vanishes as R → ∞ by Jordan’s Lemma.

    \int_{-∞}^{∞} f(x) e^{ıωx}\,dx = ı2π \sum_{k=1}^{m} Res(f(z) e^{ıωz}, z_k) + ıπ \sum_{k=1}^{n} Res(f(z) e^{ıωz}, x_k)

For negative ω we close the path of integration in the lower half plane. Note that the contour is then in the negative direction.
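Numerically, the estimate behind Jordan’s Lemma is easy to watch in action. A sketch, assuming Python with scipy (not part of the text):

```python
import numpy as np
from scipy.integrate import quad

for R in (1.0, 10.0, 100.0):
    val, _ = quad(lambda t: np.exp(-R * np.sin(t)), 0.0, np.pi)
    print(R, val, np.pi / R)  # the integral stays below pi/R
```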
Result 13.5.2 Fourier Integrals. Let f(z) be analytic except for isolated singularities, with only first order poles on the real axis. Suppose that f(z) vanishes as |z| → ∞. If ω is a positive real number then

    \int_{-∞}^{∞} f(x) e^{ıωx}\,dx = ı2π \sum_{k=1}^{m} Res(f(z) e^{ıωz}, z_k) + ıπ \sum_{k=1}^{n} Res(f(z) e^{ıωz}, x_k)

where z1, . . . , zm are the singularities of f(z) in the upper half plane and x1, . . . , xn are the first order poles on the real axis. If ω is a negative real number then

    \int_{-∞}^{∞} f(x) e^{ıωx}\,dx = -ı2π \sum_{k=1}^{m} Res(f(z) e^{ıωz}, z_k) - ıπ \sum_{k=1}^{n} Res(f(z) e^{ıωz}, x_k)

where z1, . . . , zm are the singularities of f(z) in the lower half plane and x1, . . . , xn are the first order poles on the real axis.

13.6 Fourier Cosine and Sine Integrals

Fourier cosine and sine integrals have the form,

    \int_0^∞ f(x) \cos(ωx)\,dx \quad \text{and} \quad \int_0^∞ f(x) \sin(ωx)\,dx.

If f(x) is even/odd then we can evaluate the cosine/sine integral with the method we developed for Fourier integrals.
    Let f(z) be analytic except for isolated singularities, with only first order poles on the real axis. Suppose that f(x) is an even function and that f(z) vanishes as |z| → ∞. We consider real ω > 0.

    ⨍_0^∞ f(x) \cos(ωx)\,dx = \frac{1}{2} ⨍_{-∞}^{∞} f(x) \cos(ωx)\,dx

Since f(x) sin(ωx) is an odd function,

    \frac{1}{2} ⨍_{-∞}^{∞} f(x) \sin(ωx)\,dx = 0.

Thus

    ⨍_0^∞ f(x) \cos(ωx)\,dx = \frac{1}{2} ⨍_{-∞}^{∞} f(x) e^{ıωx}\,dx

Now we apply Result 13.5.2.

    ⨍_0^∞ f(x) \cos(ωx)\,dx = ıπ \sum_{k=1}^{m} Res(f(z) e^{ıωz}, z_k) + \frac{ıπ}{2} \sum_{k=1}^{n} Res(f(z) e^{ıωz}, x_k)

where z1, . . . , zm are the singularities of f(z) in the upper half plane and x1, . . . , xn are the first order poles on the real axis.
    If f(x) is an odd function, we note that f(x) cos(ωx) is an odd function to obtain the analogous result for Fourier sine integrals.
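As a sketch applying this derivation (an aside, not from the text): for the even function f(x) = 1/(1 + x²) and ω > 0, the only upper half plane singularity is z = ı with Res(e^{ıωz}/(z² + 1), ı) = e^{−ω}/(ı2), so the full-line Fourier integral is π e^{−ω}. scipy’s Fourier-weighted quadrature (an assumed tool) corroborates this:

```python
import numpy as np
from scipy.integrate import quad

w = 2.0
# Integrate cos(w*x)/(1+x^2) on (0, inf) and double it, since the
# integrand is even (the sine part of the Fourier integral vanishes).
half, _ = quad(lambda x: 1.0 / (1.0 + x**2), 0.0, np.inf,
               weight='cos', wvar=w)
print(2.0 * half, np.pi * np.exp(-w))  # both approx 0.42521
```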
Result 13.6.1 Fourier Cosine and Sine Integrals. Let f(z) be analytic except for isolated singularities, with only first order poles on the real axis. Suppose that f(x) is an even function and that f(z) vanishes as |z| → ∞. We consider real ω > 0.

    ⨍_0^∞ f(x) \cos(ωx)\,dx = ıπ \sum_{k=1}^{m} Res(f(z) e^{ıωz}, z_k) + \frac{ıπ}{2} \sum_{k=1}^{n} Res(f(z) e^{ıωz}, x_k)

where z1, . . . , zm are the singularities of f(z) in the upper half plane and x1, . . . , xn are the first order poles on the real axis. If f(x) is an odd function then,

    ⨍_0^∞ f(x) \sin(ωx)\,dx = π \sum_{k=1}^{µ} Res(f(z) e^{ıωz}, ζ_k) + \frac{π}{2} \sum_{k=1}^{n} Res(f(z) e^{ıωz}, x_k)

where ζ1, . . . , ζµ are the singularities of f(z) in the upper half plane and x1, . . . , xn are the first order poles on the real axis.

Now suppose that f(x) is neither even nor odd. We can evaluate integrals of the form:

    \int_{-∞}^{∞} f(x) \cos(ωx)\,dx \quad \text{and} \quad \int_{-∞}^{∞} f(x) \sin(ωx)\,dx

by writing them in terms of Fourier integrals

    \int_{-∞}^{∞} f(x) \cos(ωx)\,dx = \frac{1}{2} \int_{-∞}^{∞} f(x) e^{ıωx}\,dx + \frac{1}{2} \int_{-∞}^{∞} f(x) e^{-ıωx}\,dx

    \int_{-∞}^{∞} f(x) \sin(ωx)\,dx = -\frac{ı}{2} \int_{-∞}^{∞} f(x) e^{ıωx}\,dx + \frac{ı}{2} \int_{-∞}^{∞} f(x) e^{-ıωx}\,dx

13.7 Contour Integration and Branch Cuts

Example 13.7.1 Consider

    \int_0^∞ \frac{x^{-a}}{x + 1}\,dx, \quad 0 < a < 1,

where x^{−a} denotes exp(−a ln(x)). We choose the branch of the function

    f(z) = \frac{z^{-a}}{z + 1}, \quad |z| > 0, \quad 0 < \arg z < 2π

with a branch cut on the positive real axis.
    Let Cε and CR denote the circular arcs of radius ε and R where ε < 1 < R. Cε is negatively oriented; CR is positively oriented. Consider the closed contour C that is traced by a point moving from Cε to CR above the branch cut, next around CR, then below the cut to Cε, and finally around Cε. (See Figure 13.6.)
    We write f(z) in polar coordinates.

    f(z) = \frac{\exp(-a \log z)}{z + 1} = \frac{\exp(-a(\log r + ıθ))}{r e^{ıθ} + 1}
[Figure 13.6: The closed contour C, with the circular arcs Cε and CR about the branch cut on the positive real axis.]

We evaluate the function above, (z = r e^{ı0}), and below, (z = r e^{ı2π}), the branch cut.

    f(r e^{ı0}) = \frac{\exp[-a(\log r + ı0)]}{r + 1} = \frac{r^{-a}}{r + 1}

    f(r e^{ı2π}) = \frac{\exp[-a(\log r + ı2π)]}{r + 1} = \frac{r^{-a} e^{-ı2aπ}}{r + 1}.

We use the residue theorem to evaluate the integral along C.

    \int_C f(z)\,dz = ı2π\,Res(f(z), -1)

    \int_ε^R \frac{r^{-a}}{r + 1}\,dr + \int_{C_R} f(z)\,dz - \int_ε^R \frac{r^{-a} e^{-ı2aπ}}{r + 1}\,dr + \int_{C_ε} f(z)\,dz = ı2π\,Res(f(z), -1)

The residue is

    Res(f(z), -1) = \exp(-a \log(-1)) = \exp(-a(\log 1 + ıπ)) = e^{-ıaπ}.

We bound the integrals along Cε and CR with the maximum modulus integral bound.

    \left| \int_{C_ε} f(z)\,dz \right| ≤ 2πε \frac{ε^{-a}}{1 - ε} = 2π \frac{ε^{1-a}}{1 - ε}

    \left| \int_{C_R} f(z)\,dz \right| ≤ 2πR \frac{R^{-a}}{R - 1} = 2π \frac{R^{1-a}}{R - 1}

Since 0 < a < 1, the values of the integrals tend to zero as ε → 0 and R → ∞. Thus we have

    \int_0^∞ \frac{r^{-a}}{r + 1}\,dr = \frac{ı2π e^{-ıaπ}}{1 - e^{-ı2aπ}}

    \int_0^∞ \frac{x^{-a}}{x + 1}\,dx = \frac{π}{\sin(aπ)}
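A numerical spot-check of this example at a = 1/3 (an aside, assuming Python with scipy):

```python
import numpy as np
from scipy.integrate import quad

a = 1.0 / 3.0
f = lambda x: x**(-a) / (x + 1.0)
val = quad(f, 0.0, 1.0)[0] + quad(f, 1.0, np.inf)[0]  # split at x = 1
print(val, np.pi / np.sin(a * np.pi))  # both approx 3.6276
```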
Result 13.7.1 Integrals from Zero to Infinity. Let f(z) be a single-valued analytic function with only isolated singularities and no singularities on the positive, real axis, [0, ∞). Let a ∉ Z. If the integrals exist then,

    \int_0^∞ f(x)\,dx = -\sum_{k=1}^{n} Res(f(z) \log z, z_k),

    \int_0^∞ x^a f(x)\,dx = \frac{ı2π}{1 - e^{ı2πa}} \sum_{k=1}^{n} Res(z^a f(z), z_k),

    \int_0^∞ f(x) \log x\,dx = -\frac{1}{2} \sum_{k=1}^{n} Res(f(z) \log^2 z, z_k) + ıπ \sum_{k=1}^{n} Res(f(z) \log z, z_k),

    \int_0^∞ x^a f(x) \log x\,dx = \frac{ı2π}{1 - e^{ı2πa}} \sum_{k=1}^{n} Res(z^a f(z) \log z, z_k) + \frac{π^2}{\sin^2(πa)} \sum_{k=1}^{n} Res(z^a f(z), z_k),

    \int_0^∞ x^a f(x) \log^m x\,dx = \frac{∂^m}{∂a^m} \left( \frac{ı2π}{1 - e^{ı2πa}} \sum_{k=1}^{n} Res(z^a f(z), z_k) \right),

where z1, . . . , zn are the singularities of f(z) and there is a branch cut on the positive real axis with 0 < arg(z) < 2π.

13.8 Exploiting Symmetry

We have already used symmetry of the integrand to evaluate certain integrals. For f(x) an even function we were able to evaluate \int_0^∞ f(x)\,dx by extending the range of integration from −∞ to ∞. For \int_0^∞ x^α f(x)\,dx we put a branch cut on the positive real axis and noted that the value of the integrand below the branch cut is a constant multiple of the value of the function above the branch cut. This enabled us to evaluate the real integral with contour integration. In this section we will use other kinds of symmetry to evaluate integrals. We will discover that periodicity of the integrand will produce this symmetry.

13.8.1 Wedge Contours

We note that z^n = r^n e^{ınθ} is periodic in θ with period 2π/n. The real and imaginary parts of z^n are odd periodic in θ with period π/n. This observation suggests that certain integrals on the positive real axis may be evaluated by closing the path of integration with a wedge contour.

Example 13.8.1 Consider

    \int_0^∞ \frac{1}{1 + x^n}\,dx
where n ∈ N, n ≥ 2. We can evaluate this integral using Result 13.7.1.

    \int_0^∞ \frac{1}{1 + x^n}\,dx = -\sum_{k=0}^{n-1} Res\left( \frac{\log z}{1 + z^n}, e^{ıπ(1+2k)/n} \right)
        = -\sum_{k=0}^{n-1} \lim_{z \to e^{ıπ(1+2k)/n}} \frac{(z - e^{ıπ(1+2k)/n}) \log z}{1 + z^n}
        = -\sum_{k=0}^{n-1} \lim_{z \to e^{ıπ(1+2k)/n}} \frac{\log z + (z - e^{ıπ(1+2k)/n})/z}{n z^{n-1}}
        = -\sum_{k=0}^{n-1} \frac{ıπ(1+2k)/n}{n e^{ıπ(1+2k)(n-1)/n}}
        = -\frac{ıπ}{n^2} e^{-ıπ(n-1)/n} \sum_{k=0}^{n-1} (1 + 2k) e^{ı2πk/n}
        = \frac{ı2π e^{ıπ/n}}{n^2} \sum_{k=1}^{n-1} k e^{ı2πk/n}
        = \frac{ı2π e^{ıπ/n}}{n^2} \frac{n}{e^{ı2π/n} - 1}
        = \frac{π}{n \sin(π/n)}

This is a bit grungy. To find a spiffier way to evaluate the integral we note that if we write the integrand as a function of r and θ, it is periodic in θ with period 2π/n.

    \frac{1}{1 + z^n} = \frac{1}{1 + r^n e^{ınθ}}

The integrand along the rays θ = 2π/n, 4π/n, 6π/n, . . . has the same value as the integrand on the real axis. Consider the contour C that is the boundary of the wedge 0 < r < R, 0 < θ < 2π/n. There is one singularity inside the contour. We evaluate the residue there.

    Res\left( \frac{1}{1 + z^n}, e^{ıπ/n} \right) = \lim_{z \to e^{ıπ/n}} \frac{z - e^{ıπ/n}}{1 + z^n} = \lim_{z \to e^{ıπ/n}} \frac{1}{n z^{n-1}} = -\frac{e^{ıπ/n}}{n}

We evaluate the integral along C with the residue theorem.

    \int_C \frac{1}{1 + z^n}\,dz = \frac{-ı2π e^{ıπ/n}}{n}

Let CR be the circular arc. The integral along CR vanishes as R → ∞.

    \left| \int_{C_R} \frac{1}{1 + z^n}\,dz \right| ≤ \frac{2πR}{n} \max_{z \in C_R} \left| \frac{1}{1 + z^n} \right| ≤ \frac{2πR}{n} \frac{1}{R^n - 1} → 0 \text{ as } R → ∞
We parametrize the contour to evaluate the desired integral.

    \int_0^∞ \frac{1}{1 + x^n}\,dx + \int_∞^0 \frac{1}{1 + x^n} e^{ı2π/n}\,dx = \frac{-ı2π e^{ıπ/n}}{n}

    \int_0^∞ \frac{1}{1 + x^n}\,dx = \frac{-ı2π e^{ıπ/n}}{n \left( 1 - e^{ı2π/n} \right)}

    \int_0^∞ \frac{1}{1 + x^n}\,dx = \frac{π}{n \sin(π/n)}

13.8.2 Box Contours

Recall that e^z = e^{x+ıy} is periodic in y with period 2π. This implies that the hyperbolic trigonometric functions cosh z, sinh z and tanh z are periodic in y with period 2π and odd periodic in y with period π. We can exploit this property to evaluate certain integrals on the real axis by closing the path of integration with a box contour.

Example 13.8.2 Consider the integral

    \int_{-∞}^{∞} \frac{1}{\cosh x}\,dx = \left[ ı \log \tanh\left( \frac{ıπ}{4} + \frac{x}{2} \right) \right]_{-∞}^{∞} = ı \log(1) - ı \log(-1) = π.

We will evaluate this integral using contour integration. Note that

    \cosh(x + ıπ) = \frac{e^{x+ıπ} + e^{-x-ıπ}}{2} = -\cosh(x).

Consider the box contour C that is the boundary of the region −R < x < R, 0 < y < π. The only singularity of the integrand inside the contour is a first order pole at z = ıπ/2. We evaluate the integral along C with the residue theorem.

    \int_C \frac{1}{\cosh z}\,dz = ı2π\,Res\left( \frac{1}{\cosh z}, \frac{ıπ}{2} \right) = ı2π \lim_{z \to ıπ/2} \frac{z - ıπ/2}{\cosh z} = ı2π \lim_{z \to ıπ/2} \frac{1}{\sinh z} = 2π

The integrals along the sides of the box vanish as R → ∞.

    \left| \int_{±R}^{±R+ıπ} \frac{1}{\cosh z}\,dz \right| ≤ π \max_{z \in [±R \ldots ±R+ıπ]} \left| \frac{1}{\cosh z} \right| ≤ π \max_{y \in [0 \ldots π]} \left| \frac{2}{e^{±R+ıy} + e^{∓R-ıy}} \right| ≤ \frac{2π}{e^R - e^{-R}} = \frac{π}{\sinh R} → 0 \text{ as } R → ∞
The value of the integrand on the top of the box is the negative of its value on the bottom. We take the limit as R → ∞.

    \int_{-∞}^{∞} \frac{1}{\cosh x}\,dx + \int_{∞}^{-∞} \frac{1}{-\cosh x}\,dx = 2π

    \int_{-∞}^{∞} \frac{1}{\cosh x}\,dx = π

13.9 Definite Integrals Involving Sine and Cosine

Example 13.9.1 For real-valued a, evaluate the integral:

    f(a) = \int_0^{2π} \frac{dθ}{1 + a \sin θ}.

What is the value of the integral for complex-valued a?
    Real-Valued a. For −1 < a < 1, the integrand is bounded, hence the integral exists. For |a| = 1, the integrand has a second order pole on the path of integration. For |a| > 1 the integrand has two first order poles on the path of integration. The integral is divergent for these two cases. Thus we see that the integral exists for −1 < a < 1.
    For a = 0, the value of the integral is 2π. Now consider a ≠ 0. We make the change of variables z = e^{ıθ}. The real integral from θ = 0 to θ = 2π becomes a contour integral along the unit circle, |z| = 1. We write the sine, cosine and the differential in terms of z.

    \sin θ = \frac{z - z^{-1}}{ı2}, \quad \cos θ = \frac{z + z^{-1}}{2}, \quad dz = ı e^{ıθ}\,dθ, \quad dθ = \frac{dz}{ız}

We write f(a) as an integral along C, the positively oriented unit circle |z| = 1.

    f(a) = \int_C \frac{1/(ız)}{1 + a(z - z^{-1})/(ı2)}\,dz = \int_C \frac{2/a}{z^2 + (ı2/a)z - 1}\,dz

We factor the denominator of the integrand.

    f(a) = \int_C \frac{2/a}{(z - z_1)(z - z_2)}\,dz

    z_1 = ı \frac{-1 + \sqrt{1 - a^2}}{a}, \quad z_2 = ı \frac{-1 - \sqrt{1 - a^2}}{a}

Because |a| < 1, the second root is outside the unit circle.

    |z_2| = \frac{1 + \sqrt{1 - a^2}}{|a|} > 1.

Since |z1 z2| = 1, |z1| < 1. Thus the pole at z1 is inside the contour and the pole at z2 is outside. We evaluate the contour integral with the residue theorem.

    f(a) = \int_C \frac{2/a}{z^2 + (ı2/a)z - 1}\,dz = ı2π \frac{2/a}{z_1 - z_2} = ı2π \frac{1}{ı\sqrt{1 - a^2}}

    f(a) = \frac{2π}{\sqrt{1 - a^2}}
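Numerical spot-checks of the wedge contour, box contour and unit circle substitution results above (an aside, assuming Python with scipy):

```python
import numpy as np
from scipy.integrate import quad

# Wedge contour: integral of 1/(1+x^n) on (0, inf) is pi/(n sin(pi/n)).
for n in (2, 3, 5):
    print(n, quad(lambda x: 1.0 / (1.0 + x**n), 0.0, np.inf)[0],
          np.pi / (n * np.sin(np.pi / n)))

# Box contour: integral of 1/cosh(x) on (-inf, inf) is pi.
print(quad(lambda x: 1.0 / np.cosh(x), -np.inf, np.inf)[0], np.pi)

# Unit circle substitution: f(a) = 2*pi/sqrt(1 - a^2) for -1 < a < 1.
a = 0.5
print(quad(lambda t: 1.0 / (1.0 + a * np.sin(t)), 0.0, 2.0 * np.pi)[0],
      2.0 * np.pi / np.sqrt(1.0 - a**2))
```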
Complex-Valued a. We note that the integral converges except for real-valued a satisfying |a| ≥ 1. On any closed subset of C \ {a ∈ R | |a| ≥ 1} the integral is uniformly convergent. Thus except for the values {a ∈ R | |a| ≥ 1}, we can differentiate the integral with respect to a. f(a) is analytic in the complex plane except for the set of points on the real axis: a ∈ (−∞ . . . −1] and a ∈ [1 . . . ∞). The value of the analytic function f(a) on the real axis for the interval (−1 . . . 1) is

    f(a) = \frac{2π}{\sqrt{1 - a^2}}.

By analytic continuation we see that the value of f(a) in the complex plane is the branch of the function

    f(a) = \frac{2π}{(1 - a^2)^{1/2}}

where f(a) is positive, real-valued for a ∈ (−1 . . . 1) and there are branch cuts on the real axis on the intervals: (−∞ . . . −1] and [1 . . . ∞).

Result 13.9.1 For evaluating integrals of the form

    \int_a^{a+2π} F(\sin θ, \cos θ)\,dθ

it may be useful to make the change of variables z = e^{ıθ}. This gives us a contour integral along the unit circle about the origin. We can write the sine, cosine and differential in terms of z.

    \sin θ = \frac{z - z^{-1}}{ı2}, \quad \cos θ = \frac{z + z^{-1}}{2}, \quad dθ = \frac{dz}{ız}

13.10 Infinite Sums

The function g(z) = π cot(πz) has simple poles at z = n ∈ Z. The residues at these points are all unity.

    Res(π \cot(πz), n) = \lim_{z→n} \frac{π(z - n) \cos(πz)}{\sin(πz)} = \lim_{z→n} \frac{π \cos(πz) - π^2 (z - n) \sin(πz)}{π \cos(πz)} = 1

Let Cn be the square contour with corners at z = (n + 1/2)(±1 ± ı). Recall that

    \cos z = \cos x \cosh y - ı \sin x \sinh y \quad \text{and} \quad \sin z = \sin x \cosh y + ı \cos x \sinh y.

First we bound the modulus of cot(z).

    |\cot(z)| = \left| \frac{\cos x \cosh y - ı \sin x \sinh y}{\sin x \cosh y + ı \cos x \sinh y} \right|
              = \sqrt{ \frac{\cos^2 x \cosh^2 y + \sin^2 x \sinh^2 y}{\sin^2 x \cosh^2 y + \cos^2 x \sinh^2 y} }
              ≤ \sqrt{ \frac{\cosh^2 y}{\sinh^2 y} }
              = |\coth(y)|
The hyperbolic cotangent, coth(y), has a simple pole at y = 0 and tends to ±1 as y → ±∞.
    Along the top and bottom of Cn, (z = x ± ı(n + 1/2)), we bound the modulus of g(z) = π cot(πz).

    |π \cot(πz)| ≤ π \coth(π(n + 1/2))

Along the left and right sides of Cn, (z = ±(n + 1/2) + ıy), the modulus of the function is bounded by a constant.

    |g(±(n + 1/2) + ıy)| = π \left| \frac{\cos(π(n + 1/2)) \cosh(πy) - ı \sin(π(n + 1/2)) \sinh(πy)}{\sin(π(n + 1/2)) \cosh(πy) + ı \cos(π(n + 1/2)) \sinh(πy)} \right| = |ıπ \tanh(πy)| ≤ π

Thus the modulus of π cot(πz) can be bounded by a constant M on Cn.
    Let f(z) be analytic except for isolated singularities. Consider the integral,

    \int_{C_n} π \cot(πz) f(z)\,dz.

We use the maximum modulus integral bound.

    \left| \int_{C_n} π \cot(πz) f(z)\,dz \right| ≤ (8n + 4) M \max_{z \in C_n} |f(z)|

Note that if

    \lim_{|z|→∞} |z f(z)| = 0,

then

    \lim_{n→∞} \int_{C_n} π \cot(πz) f(z)\,dz = 0.

This implies that the sum of all residues of π cot(πz) f(z) is zero. Suppose further that f(z) is analytic at z = n ∈ Z. The residues of π cot(πz) f(z) at z = n are f(n). This means

    \sum_{n=-∞}^{∞} f(n) = -(\text{sum of the residues of } π \cot(πz) f(z) \text{ at the poles of } f(z)).

Result 13.10.1 If

    \lim_{|z|→∞} |z f(z)| = 0,

then the sum of all the residues of π cot(πz) f(z) is zero. If in addition f(z) is analytic at z = n ∈ Z then

    \sum_{n=-∞}^{∞} f(n) = -(\text{sum of the residues of } π \cot(πz) f(z) \text{ at the poles of } f(z)).

Example 13.10.1 Consider the sum

    \sum_{n=-∞}^{∞} \frac{1}{(n + a)^2}, \quad a ∉ Z.
By Result 13.10.1 with f(z) = 1/(z + a)² we have

    \sum_{n=-∞}^{∞} \frac{1}{(n + a)^2} = -Res\left( π \cot(πz) \frac{1}{(z + a)^2}, -a \right)
        = -π \lim_{z \to -a} \frac{d}{dz} \cot(πz)
        = -π \lim_{z \to -a} \frac{-π \sin^2(πz) - π \cos^2(πz)}{\sin^2(πz)}.

    \sum_{n=-∞}^{∞} \frac{1}{(n + a)^2} = \frac{π^2}{\sin^2(πa)}

Example 13.10.2 Derive π/4 = 1 − 1/3 + 1/5 − 1/7 + 1/9 − · · · .
    Consider the integral

    I_n = \frac{1}{ı2π} \int_{C_n} \frac{dw}{w(w - z) \sin w}

where Cn is the square with corners at w = (n + 1/2)(±1 ± ı)π, n ∈ Z⁺. With the substitution w = x + ıy,

    |\sin w|^2 = \sin^2 x + \sinh^2 y,

we see that |1/ sin w| ≤ 1 on Cn. Thus In → 0 as n → ∞. We use the residue theorem and take the limit n → ∞.

    0 = \sum_{n=1}^{∞} \left( \frac{(-1)^n}{nπ(nπ - z)} + \frac{(-1)^n}{nπ(nπ + z)} \right) + \frac{1}{z \sin z} - \frac{1}{z^2}

    \frac{1}{\sin z} = \frac{1}{z} - 2z \sum_{n=1}^{∞} \frac{(-1)^n}{n^2π^2 - z^2}
                     = \frac{1}{z} - \sum_{n=1}^{∞} \left( \frac{(-1)^n}{nπ - z} - \frac{(-1)^n}{nπ + z} \right)

We substitute z = π/2 into the above expression to obtain

    π/4 = 1 − 1/3 + 1/5 − 1/7 + 1/9 − · · ·
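Partial sums corroborate both examples. A sketch assuming Python with numpy (not part of the text):

```python
import numpy as np

# Example 13.10.1 with a = 1/3: sum of 1/(n+a)^2 over the integers.
a = 1.0 / 3.0
n = np.arange(-10**6, 10**6 + 1)
print(np.sum(1.0 / (n + a) ** 2), (np.pi / np.sin(np.pi * a)) ** 2)

# Example 13.10.2: partial sums of the series for pi/4.
k = np.arange(10**6)
print(np.sum((-1.0) ** k / (2.0 * k + 1.0)), np.pi / 4.0)
```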
  • 427. 13.11 Exercises The Residue Theorem Exercise 13.1 Evaluate the following closed contour integrals using Cauchy’s residue theorem. 1. C dz z2 − 1 , where C is the contour parameterized by r = 2 cos(2θ), 0 ≤ θ ≤ 2π. 2. C eız z2(z − 2)(z + ı5) dz, where C is the positive circle |z| = 3. 3. C e1/z sin(1/z) dz, where C is the positive circle |z| = 1. Hint, Solution Exercise 13.2 Derive Cauchy’s integral formula from Cauchy’s residue theorem. Hint, Solution Exercise 13.3 Calculate the residues of the following functions at each of the poles in the finite part of the plane. 1. 1 z4 − a4 2. sin z z2 3. 1 + z2 z(z − 1)2 4. ez z2 + a2 5. (1 − cos z)2 z7 Hint, Solution Exercise 13.4 Let f(z) have a pole of order n at z = z0. Prove the Residue Formula: Res(f(z), z0) = lim z→z0 1 (n − 1)! dn−1 dzn−1 [(z − z0)n f(z)] . Hint, Solution Exercise 13.5 Consider the function f(z) = z4 z2 + 1 . Classify the singularities of f(z) in the extended complex plane. Calculate the residue at each pole and at infinity. Find the Laurent series expansions and their domains of convergence about the points z = 0, z = ı and z = ∞. Hint, Solution 407
  • 428. Exercise 13.6 Let P(z) be a polynomial none of whose roots lie on the closed contour Γ. Show that 1 ı2π P (z) P(z) dz = number of roots of P(z) which lie inside Γ. where the roots are counted according to their multiplicity. Hint: From the fundamental theorem of algebra, it is always possible to factor P(z) in the form P(z) = (z − z1)(z − z2) · · · (z − zn). Using this form of P(z) the integrand P (z)/P(z) reduces to a very simple expression. Hint, Solution Exercise 13.7 Find the value of C ez (z − π) tan z dz where C is the positively-oriented circle 1. |z| = 2 2. |z| = 4 Hint, Solution Cauchy Principal Value for Real Integrals Solution 13.1 Show that the integral 1 −1 1 x dx. is divergent. Evaluate the integral 1 −1 1 x − ıα dx, α ∈ R, α = 0. Evaluate lim α→0+ 1 −1 1 x − ıα dx and lim α→0− 1 −1 1 x − ıα dx. The integral exists for α arbitrarily close to zero, but diverges when α = 0. Plot the real and imaginary part of the integrand. If one were to assign meaning to the integral for α = 0, what would the value of the integral be? Exercise 13.8 Do the principal values of the following integrals exist? 1. 1 −1 1 x2 dx, 2. 1 −1 1 x3 dx, 3. 1 −1 f(x) x3 dx. Assume that f(x) is real analytic on the interval (−1, 1). Hint, Solution 408
  • 429. Cauchy Principal Value for Contour Integrals Exercise 13.9 Let f(z) have a first order pole at z = z0 and let (z − z0)f(z) be analytic in some neighborhood of z0. Let the contour C be a circular arc from z0 + eıα to z0 + eıβ . (Assume that β > α and β − α < 2π.) Show that lim →0+ C f(z) dz = ı(β − α) Res(f(z), z0) Hint, Solution Exercise 13.10 Let f(z) be analytic inside and on a simple, closed, positive contour C, except for isolated singu- larities at z1, . . . , zm inside the contour and first order poles at ζ1, . . . , ζn on the contour. Further, let the contour be C1 at the locations of these first order poles. (i.e., the contour does not have a corner at any of the first order poles.) Show that the principal value of the integral of f(z) along C is − C f(z) dz = ı2π m j=1 Res(f(z), zj) + ıπ n j=1 Res(f(z), ζj). Hint, Solution Exercise 13.11 Let C be the unit circle. Evaluate − C 1 z − 1 dz by indenting the contour to exclude the first order pole at z = 1. Hint, Solution Integrals on the Real Axis Exercise 13.12 Evaluate the following improper integrals. 1. ∞ 0 x2 (x2 + 1)(x2 + 4) dx = π 6 2. ∞ −∞ dx (x + b)2 + a2 , a > 0 Hint, Solution Exercise 13.13 Prove Result 13.4.1. Hint, Solution Exercise 13.14 Evaluate − ∞ −∞ 2x x2 + x + 1 . Hint, Solution Exercise 13.15 Use contour integration to evaluate the integrals 1. ∞ −∞ dx 1 + x4 , 409
  • 430. 2. ∞ −∞ x2 dx (1 + x2)2 , 3. ∞ −∞ cos(x) 1 + x2 dx. Hint, Solution Exercise 13.16 Evaluate by contour integration ∞ 0 x6 (x4 + 1)2 dx. Hint, Solution Fourier Integrals Exercise 13.17 Suppose that f(z) vanishes as |z| → ∞. If ω is a (positive / negative) real number and CR is a semi-circle of radius R in the (upper / lower) half plane then show that the integral CR f(z) eıωz dz vanishes as R → ∞. Hint, Solution Exercise 13.18 Evaluate by contour integration ∞ −∞ cos 2x x − ıπ dx. Hint, Solution Fourier Cosine and Sine Integrals Exercise 13.19 Evaluate ∞ −∞ sin x x dx. Hint, Solution Exercise 13.20 Evaluate ∞ −∞ 1 − cos x x2 dx. Hint, Solution Exercise 13.21 Evaluate ∞ 0 sin(πx) x(1 − x2) dx. Hint, Solution Contour Integration and Branch Cuts 410
  • 431. Exercise 13.22 Evaluate the following integrals. 1. ∞ 0 ln2 x 1 + x2 dx = π3 8 2. ∞ 0 ln x 1 + x2 dx = 0 Hint, Solution Exercise 13.23 By methods of contour integration find ∞ 0 dx x2 + 5x + 6 [ Recall the trick of considering Γ f(z) log z dz with a suitably chosen contour Γ and branch for log z. ] Hint, Solution Exercise 13.24 Show that ∞ 0 xa (x + 1)2 dx = πa sin(πa) for − 1 < (a) < 1. From this derive that ∞ 0 log x (x + 1)2 dx = 0, ∞ 0 log2 x (x + 1)2 dx = π2 3 . Hint, Solution Exercise 13.25 Consider the integral I(a) = ∞ 0 xa 1 + x2 dx. 1. For what values of a does the integral exist? 2. Evaluate the integral. Show that I(a) = π 2 cos(πa/2) 3. Deduce from your answer in part (b) the results ∞ 0 log x 1 + x2 dx = 0, ∞ 0 log2 x 1 + x2 dx = π3 8 . You may assume that it is valid to differentiate under the integral sign. Hint, Solution Exercise 13.26 Let f(z) be a single-valued analytic function with only isolated singularities and no singularities on the positive real axis, [0, ∞). Give sufficient conditions on f(x) for absolute convergence of the integral ∞ 0 xa f(x) dx. Assume that a is not an integer. Evaluate the integral by considering the integral of za f(z) on a suitable contour. (Consider the branch of za on which 1a = 1.) Hint, Solution 411
  • 432. Exercise 13.27 Using the solution to Exercise 13.26, evaluate ∞ 0 xa f(x) log x dx, and ∞ 0 xa f(x) logm x dx, where m is a positive integer. Hint, Solution Exercise 13.28 Using the solution to Exercise 13.26, evaluate ∞ 0 f(x) dx, i.e. examine a = 0. The solution will suggest a way to evaluate the integral with contour integration. Do the contour integration to corroborate the value of ∞ 0 f(x) dx. Hint, Solution Exercise 13.29 Let f(z) be an analytic function with only isolated singularities and no singularities on the positive real axis, [0, ∞). Give sufficient conditions on f(x) for absolute convergence of the integral ∞ 0 f(x) log x dx Evaluate the integral with contour integration. Hint, Solution Exercise 13.30 For what values of a does the following integral exist? ∞ 0 xa 1 + x4 dx. Evaluate the integral. (Consider the branch of xa on which 1a = 1.) Hint, Solution Exercise 13.31 By considering the integral of f(z) = z1/2 log z/(z + 1)2 on a suitable contour, show that ∞ 0 x1/2 log x (x + 1)2 dx = π, ∞ 0 x1/2 (x + 1)2 dx = π 2 . Hint, Solution Exploiting Symmetry Exercise 13.32 Evaluate by contour integration, the principal value integral I(a) = − ∞ −∞ eax ex − e−x dx for a real and |a| < 1. [Hint: Consider the contour that is the boundary of the box, −R < x < R, 0 < y < π, but indented around z = 0 and z = ıπ. Hint, Solution 412
  • 433. Exercise 13.33 Evaluate the following integrals. 1. ∞ 0 dx (1 + x2)2 , 2. ∞ 0 dx 1 + x3 . Hint, Solution Exercise 13.34 Find the value of the integral I I = ∞ 0 dx 1 + x6 by considering the contour integral Γ dz 1 + z6 with an appropriately chosen contour Γ. Hint, Solution Exercise 13.35 Let C be the boundary of the sector 0 < r < R, 0 < θ < π/4. By integrating e−z2 on C and letting R → ∞ show that ∞ 0 cos(x2 ) dx = ∞ 0 sin(x2 ) dx = 1 √ 2 ∞ 0 e−x2 dx. Hint, Solution Exercise 13.36 Evaluate ∞ −∞ x sinh x dx using contour integration. Hint, Solution Exercise 13.37 Show that ∞ −∞ eax ex +1 dx = π sin(πa) for 0 < a < 1. Use this to derive that ∞ −∞ cosh(bx) cosh x dx = π cos(πb/2) for − 1 < b < 1. Hint, Solution Exercise 13.38 Using techniques of contour integration find for real a and b: F(a, b) = π 0 dθ (a + b cos θ)2 What are the restrictions on a and b if any? Can the result be applied for complex a, b? How? Hint, Solution 413
  • 434. Exercise 13.39 Show that ∞ −∞ cos x ex + e−x dx = π eπ/2 + e−π/2 [ Hint: Begin by considering the integral of eız /(ez + e−z ) around a rectangle with vertices: ±R, ±R + ıπ.] Hint, Solution Definite Integrals Involving Sine and Cosine Exercise 13.40 Evaluate the following real integrals. 1. π −π dθ 1 + sin2 θ = √ 2π 2. π/2 0 sin4 θ dθ Hint, Solution Exercise 13.41 Use contour integration to evaluate the integrals 1. 2π 0 dθ 2 + sin(θ) , 2. π −π cos(nθ) 1 − 2a cos(θ) + a2 dθ for |a| < 1, n ∈ Z0+ . Hint, Solution Exercise 13.42 By integration around the unit circle, suitably indented, show that − π 0 cos(nθ) cos θ − cos α dθ = π sin(nα) sin α . Hint, Solution Exercise 13.43 Evaluate 1 0 x2 (1 + x2) √ 1 − x2 dx. Hint, Solution Infinite Sums Exercise 13.44 Evaluate ∞ n=1 1 n4 . Hint, Solution 414
  • 435. Exercise 13.45 Sum the following series using contour integration: ∞ n=−∞ 1 n2 − α2 Hint, Solution 415
  • 436. 13.12 Hints The Residue Theorem Hint 13.1 Hint 13.2 Hint 13.3 Hint 13.4 Substitute the Laurent series into the formula and simplify. Hint 13.5 Use that the sum of all residues of the function in the extended complex plane is zero in calculating the residue at infinity. To obtain the Laurent series expansion about z = ı, write the function as a proper rational function, (numerator has a lower degree than the denominator) and expand in partial fractions. Hint 13.6 Hint 13.7 Cauchy Principal Value for Real Integrals Hint 13.8 Hint 13.9 For the third part, does the integrand have a term that behaves like 1/x2 ? Cauchy Principal Value for Contour Integrals Hint 13.10 Expand f(z) in a Laurent series. Only the first term will make a contribution to the integral in the limit as → 0+ . Hint 13.11 Use the result of Exercise 13.9. Hint 13.12 Look at Example 13.3.2. Integrals on the Real Axis Hint 13.13 Hint 13.14 Close the path of integration in the upper or lower half plane with a semi-circle. Use the maximum modulus integral bound, (Result 10.2.1), to show that the integral along the semi-circle vanishes. 416
  • 437. Hint 13.15 Make the change of variables x = 1/ξ. Hint 13.16 Use Result 13.4.1. Hint 13.17 Fourier Integrals Hint 13.18 Use π 0 e−R sin θ dθ < π R . Hint 13.19 Fourier Cosine and Sine Integrals Hint 13.20 Consider the integral of eıx ıx . Hint 13.21 Show that ∞ −∞ 1 − cos x x2 dx = − ∞ −∞ 1 − eıx x2 dx. Hint 13.22 Show that ∞ 0 sin(πx) x(1 − x2) dx = − ı 2 − ∞ −∞ eıx x(1 − x2) dx. Contour Integration and Branch Cuts Hint 13.23 Integrate a branch of log2 z/(1 + z2 ) along the boundary of the domain < r < R, 0 < θ < π. Hint 13.24 Hint 13.25 Note that 1 0 xa dx converges for (a) > −1; and ∞ 1 xa dx converges for (a) < 1. Consider f(z) = za /(z + 1)2 with a branch cut along the positive real axis and the contour in Figure ?? in the limit as ρ → 0 and R → ∞. To derive the last two integrals, differentiate with respect to a. 417
  • 438. Hint 13.26 Hint 13.27 Consider the integral of za f(z) on the contour in Figure ??. Hint 13.28 Differentiate with respect to a. Hint 13.29 Take the limit as a → 0. Use L’Hospital’s rule. To corroborate the result, consider the integral of f(z) log z on an appropriate contour. Hint 13.30 Consider the integral of f(z) log2 z on the contour in Figure ??. Hint 13.31 Consider the integral of f(z) = za 1 + z4 on the boundary of the region < r < R, 0 < θ < π/2. Take the limits as → 0 and R → ∞. Hint 13.32 Consider the branch of f(z) = z1/2 log z/(z + 1)2 with a branch cut on the positive real axis and 0 < arg z < 2π. Integrate this function on the contour in Figure ??. Exploiting Symmetry Hint 13.33 Hint 13.34 For the second part, consider the integral along the boundary of the region, 0 < r < R, 0 < θ < 2π/3. Hint 13.35 Hint 13.36 To show that the integral on the quarter-circle vanishes as R → ∞ establish the inequality, cos 2θ ≥ 1 − 4 π θ, 0 ≤ θ ≤ π 4 . Hint 13.37 Consider the box contour C this is the boundary of the rectangle, −R ≤ x ≤ R, 0 ≤ y ≤ π. The value of the integral is π2 /2. Hint 13.38 Consider the rectangular contour with corners at ±R and ±R + ı2π. Let R → ∞. Hint 13.39 Hint 13.40 418
  • 439. Definite Integrals Involving Sine and Cosine Hint 13.41 Hint 13.42 Hint 13.43 Hint 13.44 Make the changes of variables x = sin ξ and then z = eıξ . Infinite Sums Hint 13.45 Use Result 13.10.1. Hint 13.46 419
  • 440. -1 1 -1 1 Figure 13.7: The contour r = 2 cos(2θ). 13.13 Solutions The Residue Theorem Solution 13.2 1. We consider C dz z2 − 1 where C is the contour parameterized by r = 2 cos(2θ), 0 ≤ θ ≤ 2π. (See Figure 13.7.) There are first order poles at z = ±1. We evaluate the integral with Cauchy’s residue theorem. C dz z2 − 1 = ı2π Res 1 z2 − 1 , z = 1 + Res 1 z2 − 1 , z = −1 = ı2π 1 z + 1 z=1 + 1 z − 1 z=−1 = 0 2. We consider the integral C eız z2(z − 2)(z + ı5) dz, where C is the positive circle |z| = 3. There is a second order pole at z = 0, and first order poles at z = 2 and z = −ı5. The poles at z = 0 and z = 2 lie inside the contour. We evaluate 420
  • 441. the integral with Cauchy’s residue theorem. C eız z2(z − 2)(z + ı5) dz = ı2π Res eız z2(z − 2)(z + ı5) , z = 0 + Res eız z2(z − 2)(z + ı5) , z = 2 = ı2π d dz eız (z − 2)(z + ı5) z=0 + eız z2(z + ı5) z=2 = ı2π d dz eız (z − 2)(z + ı5) z=0 + eız z2(z + ı5) z=2 = ı2π ı z2 + (ı7 − 2)z − 5 − ı12 eız (z − 2)2(z + ı5)2 z=0 + 1 58 − ı 5 116 eı2 = ı2π − 3 25 + ı 20 + 1 58 − ı 5 116 eı2 = − π 10 + 5 58 π cos 2 − 1 29 π sin 2 + ı − 6π 25 + 1 29 π cos 2 + 5 58 π sin 2 3. We consider the integral C e1/z sin(1/z) dz where C is the positive circle |z| = 1. There is an essential singularity at z = 0. We determine the residue there by expanding the integrand in a Laurent series. e1/z sin(1/z) = 1 + 1 z + O 1 z2 1 z + O 1 z3 = 1 z + O 1 z2 The residue at z = 0 is 1. We evaluate the integral with the residue theorem. C e1/z sin(1/z) dz = ı2π Solution 13.3 If f(ζ) is analytic in a compact, closed, connected domain D and z is a point in the interior of D then Cauchy’s integral formula states f(n) (z) = n! ı2π ∂D f(ζ) (ζ − z)n+1 dζ. To corroborate this, we evaluate the integral with Cauchy’s residue theorem. There is a pole of order n + 1 at the point ζ = z. n! ı2π ∂D f(ζ) (ζ − z)n+1 dζ. = n! ı2π ı2π n! dn dζn f(ζ) ζ=z = f(n) (z) Solution 13.4 1. 1 z4 − a4 = 1 (z − a)(z + a)(z − ıa)(z + ıa) 421
  • 442. There are first order poles at z = ±a and z = ±ıa. We calculate the residues there. Res 1 z4 − a4 , z = a = 1 (z + a)(z − ıa)(z + ıa) z=a = 1 4a3 Res 1 z4 − a4 , z = −a = 1 (z − a)(z − ıa)(z + ıa) z=−a = − 1 4a3 Res 1 z4 − a4 , z = ıa = 1 (z − a)(z + a)(z + ıa) z=ıa = ı 4a3 Res 1 z4 − a4 , z = −ıa = 1 (z − a)(z + a)(z − ıa) z=−ıa = − ı 4a3 2. sin z z2 Since denominator has a second order zero at z = 0 and the numerator has a first order zero there, the function has a first order pole at z = 0. We calculate the residue there. Res sin z z2 , z = 0 = lim z→0 sin z z = lim z→0 cos z 1 = 1 3. 1 + z2 z(z − 1)2 There is a first order pole at z = 0 and a second order pole at z = 1. Res 1 + z2 z(z − 1)2 , z = 0 = 1 + z2 (z − 1)2 z=0 = 1 Res 1 + z2 z(z − 1)2 , z = 1 = d dz 1 + z2 z z=1 = 1 − 1 z2 z=1 = 0 4. ez / z2 + a2 has first order poles at z = ±ıa. We calculate the residues there. Res ez z2 + a2 , z = ıa = ez z + ıa z=ıa = − ı eıa 2a Res ez z2 + a2 , z = −ıa = ez z − ıa z=−ıa = ı e−ıa 2a 5. Since 1 − cos z has a second order zero at z = 0, (1−cos z)2 z7 has a third order pole at that point. 422
  • 443. We find the residue by expanding the function in a Laurent series. (1 − cos z)2 z7 = z−7 1 − 1 − z2 2 + z4 24 + O z6 2 = z−7 z2 2 − z4 24 + O z6 2 = z−7 z4 4 − z6 24 + O z8 = 1 4z3 − 1 24z + O(z) The residue at z = 0 is −1/24. Solution 13.5 Since f(z) has an isolated pole of order n at z = z0, it has a Laurent series that is convergent in a deleted neighborhood about that point. We substitute this Laurent series into the Residue Formula to verify it. Res(f(z), z0) = lim z→z0 1 (n − 1)! dn−1 dzn−1 [(z − z0)n f(z)] = lim z→z0 1 (n − 1)! dn−1 dzn−1 (z − z0)n ∞ k=−n ak(z − z0)k = lim z→z0 1 (n − 1)! dn−1 dzn−1 ∞ k=0 ak−n(z − z0)k = lim z→z0 1 (n − 1)! ∞ k=n−1 ak−n k! (k − n + 1)! (z − z0)k−n+1 = lim z→z0 1 (n − 1)! ∞ k=0 ak−1 (k + n − 1)! k! (z − z0)k = 1 (n − 1)! a−1 (n − 1)! 0! = a−1 This proves the Residue Formula. Solution 13.6 Classify Singularities. f(z) = z4 z2 + 1 = z4 (z − ı)(z + ı) . There are first order poles at z = ±ı. Since the function behaves like z2 at infinity, there is a second order pole there. To see this more slowly, we can make the substitution z = 1/ζ and examine the point ζ = 0. f 1 ζ = ζ−4 ζ−2 + 1 = 1 ζ2 + ζ4 = 1 ζ2(1 + ζ2) f(1/ζ) has a second order pole at ζ = 0, which implies that f(z) has a second order pole at infinity. Residues. The residues at z = ±ı are, Res z4 z2 + 1 , ı = lim z→ı z4 z + ı = − ı 2 , 423
  • 444. Res z4 z2 + 1 , −ı = lim z→−ı z4 z − ı = ı 2 . The residue at infinity is Res(f(z), ∞) = Res −1 ζ2 f 1 ζ , ζ = 0 = Res −1 ζ2 ζ−4 ζ−2 + 1 , ζ = 0 = Res − ζ−4 1 + ζ2 , ζ = 0 Here we could use the residue formula, but it’s easier to find the Laurent expansion. = Res −ζ−4 ∞ n=0 (−1)n ζ2n , ζ = 0 = 0 We could also calculate the residue at infinity by recalling that the sum of all residues of this function in the extended complex plane is zero. −ı 2 + ı 2 + Res(f(z), ∞) = 0 Res(f(z), ∞) = 0 Laurent Series about z = 0. Since the nearest singularities are at z = ±ı, the Taylor series will converge in the disk |z| < 1. z4 z2 + 1 = z4 1 1 − (−z)2 = z4 ∞ n=0 (−z2 )n = z4 ∞ n=0 (−1)n z2n = ∞ n=2 (−1)n z2n This geometric series converges for | − z2 | < 1, or |z| < 1. The series expansion of the function is z4 z2 + 1 = ∞ n=2 (−1)n z2n for |z| < 1 Laurent Series about z = ı. We expand f(z) in partial fractions. First we write the function as a proper rational function, (i.e. the numerator has lower degree than the denominator). By polynomial division, we see that f(z) = z2 − 1 + 1 z2 + 1 . Now we expand the last term in partial fractions. f(z) = z2 − 1 + −ı/2 z − ı + ı/2 z + ı 424
  • 445. Since the nearest singularity is at z = −ı, the Laurent series will converge in the annulus 0 < |z−ı| < 2. z2 − 1 = ((z − ı) + ı)2 − 1 = (z − ı)2 + ı2(z − ı) − 2 ı/2 z + ı = ı/2 ı2 + (z − ı) = 1/4 1 − ı(z − ı)/2 = 1 4 ∞ n=0 ı(z − ı) 2 n = 1 4 ∞ n=0 ın 2n (z − ı)n This geometric series converges for |ı(z − ı)/2| < 1, or |z − ı| < 2. The series expansion of f(z) is f(z) = −ı/2 z − ı − 2 + ı2(z − ı) + (z − ı)2 + 1 4 ∞ n=0 ın 2n (z − ı)n . z4 z2 + 1 = −ı/2 z − ı − 2 + ı2(z − ı) + (z − ı)2 + 1 4 ∞ n=0 ın 2n (z − ı)n for |z − ı| < 2 Laurent Series about z = ∞. Since the nearest singularities are at z = ±ı, the Laurent series will converge in the annulus 1 < |z| < ∞. z4 z2 + 1 = z2 1 + 1/z2 = z2 ∞ n=0 − 1 z2 n = 0 n=−∞ (−1)n z2(n+1) = 1 n=−∞ (−1)n+1 z2n This geometric series converges for | − 1/z2 | < 1, or |z| > 1. The series expansion of f(z) is z4 z2 + 1 = 1 n=−∞ (−1)n+1 z2n for 1 < |z| < ∞ Solution 13.7 Method 1: Residue Theorem. We factor P(z). Let m be the number of roots, counting multiplicities, that lie inside the contour Γ. We find a simple expression for P (z)/P(z). P(z) = c n k=1 (z − zk) P (z) = c n k=1 n j=1 j=k (z − zj) 425
  • 446. P (z) P(z) = c n k=1 n j=1 j=k (z − zj) c n k=1(z − zk) = n k=1 n j=1 j=k (z − zj) n j=1(z − zj) = n k=1 1 z − zk Now we do the integration using the residue theorem. 1 ı2π Γ P (z) P(z) dz = 1 ı2π Γ n k=1 1 z − zk dz = n k=1 1 ı2π Γ 1 z − zk dz = zk inside Γ 1 ı2π Γ 1 z − zk dz = zk inside Γ 1 = m Method 2: Fundamental Theorem of Calculus. We factor the polynomial, P(z) = c n k=1(z − zk). Let m be the number of roots, counting multiplicities, that lie inside the contour Γ. 1 ı2π Γ P (z) P(z) dz = 1 ı2π [log P(z)]C = 1 ı2π log n k=1 (z − zk) C = 1 ı2π n k=1 log(z − zk) C The value of the logarithm changes by ı2π for the terms in which zk is inside the contour. Its value does not change for the terms in which zk is outside the contour. = 1 ı2π zk inside Γ log(z − zk) C = 1 ı2π zk inside Γ ı2π = m Solution 13.8 1. C ez (z − π) tan z dz = C ez cos z (z − π) sin z dz The integrand has first order poles at z = nπ, n ∈ Z, n = 1 and a double pole at z = π. The only pole inside the contour occurs at z = 0. We evaluate the integral with the residue 426
  • 447. theorem. C ez cos z (z − π) sin z dz = ı2π Res ez cos z (z − π) sin z , z = 0 = ı2π lim z=0 z ez cos z (z − π) sin z = −ı2 lim z=0 z sin z = −ı2 lim z=0 1 cos z = −ı2 C ez (z − π) tan z dz = −ı2 2. The integrand has a first order poles at z = 0, −π and a second order pole at z = π inside the contour. The value of the integral is ı2π times the sum of the residues at these points. From the previous part we know that residue at z = 0. Res ez cos z (z − π) sin z , z = 0 = − 1 π We find the residue at z = −π with the residue formula. Res ez cos z (z − π) sin z , z = −π = lim z→−π (z + π) ez cos z (z − π) sin z = e−π (−1) −2π lim z→−π z + π sin z = e−π 2π lim z→−π 1 cos z = − e−π 2π We find the residue at z = π by finding the first few terms in the Laurent series of the integrand. ez cos z (z − π) sin z = eπ + eπ (z − π) + O (z − π)2 1 + O (z − π)2 (z − π) (−(z − π) + O ((z − π)3)) = − eπ − eπ (z − π) + O (z − π)2 −(z − π)2 + O ((z − π)4) = eπ (z−π)2 + eπ z−π + O(1) 1 + O ((z − π)2) = eπ (z − π)2 + eπ z − π + O(1) 1 + O (z − π)2 = eπ (z − π)2 + eπ z − π + O(1) With this we see that Res ez cos z (z − π) sin z , z = π = eπ . 427
  • 448. The integral is C ez cos z (z − π) sin z dz = ı2π Res ez cos z (z − π) sin z , z = −π + Res ez cos z (z − π) sin z , z = 0 + Res ez cos z (z − π) sin z , z = π = ı2π − 1 π − e−π 2π + eπ C ez (z − π) tan z dz = ı 2π eπ −2 − e−π Cauchy Principal Value for Real Integrals Solution 13.9 Consider the integral 1 −1 1 x dx. By the definition of improper integrals we have 1 −1 1 x dx = lim →0+ − −1 1 x dx + lim δ→0+ 1 δ 1 x dx = lim →0+ [log |x|] − −1 + lim δ→0+ [log |x|] 1 δ = lim →0+ log − lim δ→0+ log δ This limit diverges. Thus the integral diverges. Now consider the integral 1 −1 1 x − ıα dx where α ∈ R, α = 0. Since the integrand is bounded, the integral exists. 1 −1 1 x − ıα dx = 1 −1 x + ıα x2 + α2 dx = 1 −1 ıα x2 + α2 dx = ı2 1 0 α x2 + α2 dx = ı2 1/α 0 1 ξ2 + 1 dξ = ı2 [arctan ξ] 1/α 0 = ı2 arctan 1 α Note that the integral exists for all nonzero real α and that lim α→0+ 1 −1 1 x − ıα dx = ıπ and lim α→0− 1 −1 1 x − ıα dx = −ıπ. 428
Figure 13.8: The real and imaginary part of the integrand for several values of α.

The integral exists for α arbitrarily close to zero, but diverges when α = 0. The real part of the integrand is an odd function with two humps that get thinner and taller with decreasing α; the imaginary part is an even function with a single such hump. (See Figure 13.8.)
$$\Re\left( \frac{1}{x - ı\alpha} \right) = \frac{x}{x^2 + \alpha^2}, \qquad \Im\left( \frac{1}{x - ı\alpha} \right) = \frac{\alpha}{x^2 + \alpha^2}$$
Note that
$$\int_0^1 \Re\left( \frac{1}{x - ı\alpha} \right) dx \to +\infty \quad \text{and} \quad \int_{-1}^0 \Re\left( \frac{1}{x - ı\alpha} \right) dx \to -\infty \quad \text{as } \alpha \to 0.$$
However,
$$\lim_{\alpha \to 0} \Re \int_{-1}^1 \frac{dx}{x - ı\alpha} = 0$$
because the two integrals cancel each other. Now note that when α = 0 the integrand is real. Of course the integral does not converge in this case, but if we could assign some value to $\int_{-1}^1 \frac{1}{x} \, dx$, it would be a real number. Since the limit of the real part is zero, this number should be zero.
Solution 13.10
We write $\mathrm{PV}\!\int$ for the Cauchy principal value.
1. $$\mathrm{PV}\!\int_{-1}^1 \frac{dx}{x^2} = \lim_{\epsilon \to 0^+} \left( \int_{-1}^{-\epsilon} \frac{dx}{x^2} + \int_\epsilon^1 \frac{dx}{x^2} \right) = \lim_{\epsilon \to 0^+} \left( \left[ -\frac{1}{x} \right]_{-1}^{-\epsilon} + \left[ -\frac{1}{x} \right]_\epsilon^1 \right) = \lim_{\epsilon \to 0^+} \left( \frac{1}{\epsilon} - 1 - 1 + \frac{1}{\epsilon} \right)$$
The principal value of the integral does not exist.
2. $$\mathrm{PV}\!\int_{-1}^1 \frac{dx}{x^3} = \lim_{\epsilon \to 0^+} \left( \left[ -\frac{1}{2x^2} \right]_{-1}^{-\epsilon} + \left[ -\frac{1}{2x^2} \right]_\epsilon^1 \right) = \lim_{\epsilon \to 0^+} \left( -\frac{1}{2\epsilon^2} + \frac{1}{2} - \frac{1}{2} + \frac{1}{2\epsilon^2} \right) = 0$$
3. Since f(x) is real analytic, $f(x) = \sum_{n=0}^\infty f_n x^n$ for $x \in (-1, 1)$. We can rewrite the integrand as
$$\frac{f(x)}{x^3} = \frac{f_0}{x^3} + \frac{f_1}{x^2} + \frac{f_2}{x} + \frac{f(x) - f_0 - f_1 x - f_2 x^2}{x^3}.$$
The final term is real analytic on (−1, 1). By the previous parts, the principal values of the $1/x^3$ and $1/x$ terms exist (and vanish), while that of the $1/x^2$ term does not. Thus the principal value of the integral exists if and only if $f_1 = 0$.
Cauchy Principal Value for Contour Integrals
Solution 13.11
We can write f(z) as
$$f(z) = \frac{f_0}{z - z_0} + \frac{(z - z_0) f(z) - f_0}{z - z_0}.$$
Note that the second term is analytic in a neighborhood of $z_0$; thus it is bounded on the contour. Let M be the maximum modulus of $\frac{(z - z_0) f(z) - f_0}{z - z_0}$ on $C_\epsilon$. By the maximum modulus integral bound,
$$\left| \int_{C_\epsilon} \frac{(z - z_0) f(z) - f_0}{z - z_0} \, dz \right| \le (\beta - \alpha) \epsilon M \to 0 \quad \text{as } \epsilon \to 0^+.$$
Thus we see that
$$\lim_{\epsilon \to 0^+} \int_{C_\epsilon} f(z) \, dz = \lim_{\epsilon \to 0^+} \int_{C_\epsilon} \frac{f_0}{z - z_0} \, dz.$$
We parameterize the path of integration with $z = z_0 + \epsilon e^{ı\theta}$, $\theta \in (\alpha, \beta)$, and evaluate the integral:
$$\lim_{\epsilon \to 0^+} \int_{C_\epsilon} \frac{f_0}{z - z_0} \, dz = \lim_{\epsilon \to 0^+} \int_\alpha^\beta \frac{f_0}{\epsilon e^{ı\theta}} \, ı\epsilon e^{ı\theta} \, d\theta = ı(\beta - \alpha) f_0 \equiv ı(\beta - \alpha) \operatorname{Res}(f(z), z_0).$$
This proves the result.

Figure 13.9: The Indented Contour.

Solution 13.12
Let $C_i$ be the contour obtained by indenting C with circular arcs of radius ε at each of the first order poles on C, so as to enclose these poles. Let $A_1, \dots, A_n$ be these circular arcs of radius ε centered at the points $\zeta_1, \dots, \zeta_n$, and let $C_p$ be the contour, not necessarily connected, obtained by subtracting each of the $A_j$ from $C_i$. Since the curve is $C^1$ (continuously differentiable) at each of the first order poles on C, the $A_j$ become semi-circles as $\epsilon \to 0^+$. Thus
$$\int_{A_j} f(z) \, dz = ı\pi \operatorname{Res}(f(z), \zeta_j) \quad \text{for } j = 1, \dots, n.$$
The principal value of the integral along C is
$$\mathrm{PV}\!\int_C f(z) \, dz = \lim_{\epsilon \to 0^+} \int_{C_p} f(z) \, dz = \lim_{\epsilon \to 0^+} \left( \int_{C_i} f(z) \, dz - \sum_{j=1}^n \int_{A_j} f(z) \, dz \right) = ı2\pi \left( \sum_{j=1}^m \operatorname{Res}(f(z), z_j) + \sum_{j=1}^n \operatorname{Res}(f(z), \zeta_j) \right) - ı\pi \sum_{j=1}^n \operatorname{Res}(f(z), \zeta_j),$$
$$\mathrm{PV}\!\int_C f(z) \, dz = ı2\pi \sum_{j=1}^m \operatorname{Res}(f(z), z_j) + ı\pi \sum_{j=1}^n \operatorname{Res}(f(z), \zeta_j).$$
Solution 13.13
Consider
$$\mathrm{PV}\!\int_C \frac{1}{z - 1} \, dz$$
where C is the unit circle. Let $C_p$ be the circular arc of radius 1 that starts and ends a distance ε from z = 1. Let $C_\epsilon$ be the negative, circular arc of radius ε with center at z = 1 that joins the endpoints of $C_p$. Let $C_i$ be the union of $C_p$ and $C_\epsilon$. ($C_p$ stands for Principal value Contour; $C_i$ stands for Indented Contour.) $C_i$ is an indented contour that avoids the first order pole at z = 1. Figure 13.9 shows the three contours. Note that the principal value of the integral is
$$\mathrm{PV}\!\int_C \frac{dz}{z - 1} = \lim_{\epsilon \to 0^+} \int_{C_p} \frac{dz}{z - 1}.$$
We can calculate the integral along $C_i$ with Cauchy's theorem; the integrand is analytic inside the contour, so $\int_{C_i} \frac{dz}{z - 1} = 0$. We can calculate the integral along $C_\epsilon$ using Result 13.3.1. Note that as $\epsilon \to 0^+$ the contour becomes a semi-circle, a circular arc of π radians in the negative direction:
$$\lim_{\epsilon \to 0^+} \int_{C_\epsilon} \frac{dz}{z - 1} = -ı\pi \operatorname{Res}\left( \frac{1}{z - 1}, 1 \right) = -ı\pi.$$
Now we can write the principal value of the integral along C in terms of the two known integrals:
$$\mathrm{PV}\!\int_C \frac{dz}{z - 1} = \int_{C_i} \frac{dz}{z - 1} - \int_{C_\epsilon} \frac{dz}{z - 1} = 0 - (-ı\pi) = ı\pi.$$
Integrals on the Real Axis
Solution 13.14
1. First we note that the integrand is an even function and extend the domain of integration:
$$\int_0^\infty \frac{x^2}{(x^2 + 1)(x^2 + 4)} \, dx = \frac{1}{2} \int_{-\infty}^\infty \frac{x^2}{(x^2 + 1)(x^2 + 4)} \, dx.$$
Next we close the path of integration in the upper half plane. Let C be the boundary of the domain $0 < r < R$, $0 < \theta < \pi$:
$$\frac{1}{2} \int_C \frac{z^2 \, dz}{(z - ı)(z + ı)(z - ı2)(z + ı2)} = ı\pi \left( \left. \frac{z^2}{(z + ı)(z^2 + 4)} \right|_{z = ı} + \left. \frac{z^2}{(z^2 + 1)(z + ı2)} \right|_{z = ı2} \right) = ı\pi \left( \frac{ı}{6} - \frac{ı}{3} \right) = \frac{\pi}{6}.$$
Let $C_R$ be the circular arc portion of the contour, $C = [-R, R] \cup C_R$. We show that the integral along $C_R$ vanishes as $R \to \infty$ with the maximum modulus bound:
$$\left| \int_{C_R} \frac{z^2}{(z^2 + 1)(z^2 + 4)} \, dz \right| \le \pi R \, \frac{R^2}{(R^2 - 1)(R^2 - 4)} \to 0 \quad \text{as } R \to \infty.$$
We take the limit $R \to \infty$ to evaluate the integral along the real axis:
$$\int_0^\infty \frac{x^2}{(x^2 + 1)(x^2 + 4)} \, dx = \frac{\pi}{6}.$$
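A quick numerical cross-check of this value (an aside, not from the text):

```python
import numpy as np
from scipy.integrate import quad

val, err = quad(lambda x: x**2 / ((x**2 + 1) * (x**2 + 4)), 0, np.inf)
print(val, np.pi / 6)  # both ~0.5235987755982988
```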
2. We close the path of integration in the upper half plane. Let C be the boundary of the domain $0 < r < R$, $0 < \theta < \pi$:
$$\int_C \frac{dz}{(z + b)^2 + a^2} = \int_C \frac{dz}{(z + b - ıa)(z + b + ıa)} = ı2\pi \left. \frac{1}{z + b + ıa} \right|_{z = -b + ıa} = \frac{\pi}{a}.$$
The integral along the circular arc $C_R$ vanishes as $R \to \infty$ by the maximum modulus bound:
$$\left| \int_{C_R} \frac{dz}{(z + b)^2 + a^2} \right| \le \frac{\pi R}{(R - b)^2 + a^2} \to 0 \quad \text{as } R \to \infty.$$
We take the limit $R \to \infty$ to evaluate the integral along the real axis:
$$\int_{-\infty}^\infty \frac{dx}{(x + b)^2 + a^2} = \frac{\pi}{a}.$$
Solution 13.15
Let $C_R$ be the semicircular arc from R to −R in the upper half plane, and let C be the union of $C_R$ and the interval [−R, R]. We can evaluate the principal value of the integral along C with Result 13.3.2:
$$\mathrm{PV}\!\int_C f(z) \, dz = ı2\pi \sum_{k=1}^m \operatorname{Res}(f(z), z_k) + ı\pi \sum_{k=1}^n \operatorname{Res}(f(z), x_k).$$
We examine the integral along $C_R$ as $R \to \infty$:
$$\left| \int_{C_R} f(z) \, dz \right| \le \pi R \max_{z \in C_R} |f(z)| \to 0 \quad \text{as } R \to \infty.$$
Now we are prepared to evaluate the real integral:
$$\mathrm{PV}\!\int_{-\infty}^\infty f(x) \, dx = \lim_{R \to \infty} \mathrm{PV}\!\int_{-R}^R f(x) \, dx = ı2\pi \sum_{k=1}^m \operatorname{Res}(f(z), z_k) + ı\pi \sum_{k=1}^n \operatorname{Res}(f(z), x_k).$$
If we close the path of integration in the lower half plane, the contour is in the negative direction and
$$\mathrm{PV}\!\int_{-\infty}^\infty f(x) \, dx = -ı2\pi \sum_{k=1}^m \operatorname{Res}(f(z), z_k) - ı\pi \sum_{k=1}^n \operatorname{Res}(f(z), x_k).$$
Solution 13.16
We consider
$$\mathrm{PV}\!\int_{-\infty}^\infty \frac{2x}{x^2 + x + 1} \, dx.$$
With the change of variables $x = 1/\xi$, this becomes
$$\mathrm{PV}\!\int_{-\infty}^\infty \frac{2\xi^{-1}}{\xi^2 + \xi + 1} \, d\xi.$$
There are first order poles at $\xi = 0$ and $\xi = -1/2 \pm ı\sqrt{3}/2$. We close the path of integration in the upper half plane with a semi-circle. Since the integrand decays like $\xi^{-3}$, the integral along the semi-circle vanishes as the radius tends to infinity. The value of the integral is thus
$$ı\pi \operatorname{Res}\left( \frac{2z^{-1}}{z^2 + z + 1}, z = 0 \right) + ı2\pi \operatorname{Res}\left( \frac{2z^{-1}}{z^2 + z + 1}, z = \frac{-1 + ı\sqrt{3}}{2} \right) = ı\pi \lim_{z \to 0} \frac{2}{z^2 + z + 1} + ı2\pi \lim_{z \to (-1 + ı\sqrt{3})/2} \frac{2z^{-1}}{z + (1 + ı\sqrt{3})/2},$$
$$\mathrm{PV}\!\int_{-\infty}^\infty \frac{2x}{x^2 + x + 1} \, dx = -\frac{2\pi}{\sqrt{3}}.$$
Solution 13.17
1. Consider $\int_{-\infty}^\infty \frac{1}{x^4 + 1} \, dx$. The integrand $\frac{1}{z^4 + 1}$ is analytic on the real axis and has isolated singularities at the points $z = e^{ı\pi/4}, e^{ı3\pi/4}, e^{ı5\pi/4}, e^{ı7\pi/4}$. Let $C_R$ be the semi-circle of radius R in the upper half plane. Since
$$\lim_{R \to \infty} R \max_{z \in C_R} \left| \frac{1}{z^4 + 1} \right| = \lim_{R \to \infty} \frac{R}{R^4 - 1} = 0,$$
we can apply Result 13.4.1:
$$\int_{-\infty}^\infty \frac{dx}{x^4 + 1} = ı2\pi \left( \operatorname{Res}\left( \frac{1}{z^4 + 1}, e^{ı\pi/4} \right) + \operatorname{Res}\left( \frac{1}{z^4 + 1}, e^{ı3\pi/4} \right) \right).$$
The appropriate residues are
$$\operatorname{Res}\left( \frac{1}{z^4 + 1}, e^{ı\pi/4} \right) = \lim_{z \to e^{ı\pi/4}} \frac{1}{4z^3} = \frac{1}{4} e^{-ı3\pi/4} = \frac{-1 - ı}{4\sqrt{2}}, \qquad \operatorname{Res}\left( \frac{1}{z^4 + 1}, e^{ı3\pi/4} \right) = \frac{1}{4} e^{-ı\pi/4} = \frac{1 - ı}{4\sqrt{2}}.$$
We evaluate the integral with the residue theorem:
$$\int_{-\infty}^\infty \frac{dx}{x^4 + 1} = ı2\pi \left( \frac{-1 - ı}{4\sqrt{2}} + \frac{1 - ı}{4\sqrt{2}} \right) = \frac{\pi}{\sqrt{2}}.$$
2. Now consider $\int_{-\infty}^\infty \frac{x^2}{(x^2 + 1)^2} \, dx$. The integrand is analytic on the real axis and has second order poles at $z = \pm ı$. Since the integrand decays sufficiently fast at infinity,
$$\lim_{R \to \infty} R \max_{z \in C_R} \left| \frac{z^2}{(z^2 + 1)^2} \right| = \lim_{R \to \infty} \frac{R^3}{(R^2 - 1)^2} = 0,$$
we can apply Result 13.4.1:
$$\int_{-\infty}^\infty \frac{x^2}{(x^2 + 1)^2} \, dx = ı2\pi \operatorname{Res}\left( \frac{z^2}{(z^2 + 1)^2}, z = ı \right) = ı2\pi \lim_{z \to ı} \frac{d}{dz} \frac{z^2}{(z + ı)^2} = ı2\pi \lim_{z \to ı} \frac{(z + ı)^2 \, 2z - z^2 \, 2(z + ı)}{(z + ı)^4} = ı2\pi \left( -\frac{ı}{4} \right) = \frac{\pi}{2}.$$
3. Since $\frac{\sin x}{1 + x^2}$ is an odd function,
$$\int_{-\infty}^\infty \frac{\cos x}{1 + x^2} \, dx = \int_{-\infty}^\infty \frac{e^{ıx}}{1 + x^2} \, dx.$$
Since $e^{ız}/(1 + z^2)$ is analytic except for simple poles at $z = \pm ı$ and decays sufficiently fast in the upper half plane,
$$\lim_{R \to \infty} R \max_{z \in C_R} \left| \frac{e^{ız}}{1 + z^2} \right| = \lim_{R \to \infty} \frac{R}{R^2 - 1} = 0,$$
we can apply Result 13.4.1:
$$\int_{-\infty}^\infty \frac{e^{ıx}}{1 + x^2} \, dx = ı2\pi \operatorname{Res}\left( \frac{e^{ız}}{(z - ı)(z + ı)}, z = ı \right) = ı2\pi \frac{e^{-1}}{ı2}, \qquad \int_{-\infty}^\infty \frac{\cos x}{1 + x^2} \, dx = \frac{\pi}{e}.$$
Solution 13.18
Consider the function
$$f(z) = \frac{z^6}{(z^4 + 1)^2}.$$
The value of the function on the imaginary axis, $\frac{-y^6}{(y^4 + 1)^2}$, is a constant multiple of the value of the function on the real axis, $\frac{x^6}{(x^4 + 1)^2}$. Thus to evaluate the real integral we consider the path of integration C which starts at the origin, follows the real axis to R, follows a circular path to $ıR$, and then follows the imaginary axis back down to the origin. f(z) has second order poles at the fourth roots of −1, $(\pm 1 \pm ı)/\sqrt{2}$; of these only $(1 + ı)/\sqrt{2} = e^{ı\pi/4}$ lies inside the path of integration. We evaluate the contour integral with the Residue Theorem. For R > 1:
$$\int_C \frac{z^6}{(z^4 + 1)^2} \, dz = ı2\pi \lim_{z \to e^{ı\pi/4}} \frac{d}{dz} \frac{z^6}{(z - e^{ı3\pi/4})^2 (z - e^{ı5\pi/4})^2 (z - e^{ı7\pi/4})^2}$$
$$= ı2\pi \lim_{z \to e^{ı\pi/4}} \frac{z^6}{(z - e^{ı3\pi/4})^2 (z - e^{ı5\pi/4})^2 (z - e^{ı7\pi/4})^2} \left( \frac{6}{z} - \frac{2}{z - e^{ı3\pi/4}} - \frac{2}{z - e^{ı5\pi/4}} - \frac{2}{z - e^{ı7\pi/4}} \right) = ı2\pi \, \frac{3}{32} \frac{1 - ı}{\sqrt{2}} = \frac{3\pi}{8\sqrt{2}} (1 + ı)$$
The integral along the circular part of the contour, $C_R$, vanishes as $R \to \infty$. We demonstrate this with the maximum modulus integral bound:
$$\left| \int_{C_R} \frac{z^6}{(z^4 + 1)^2} \, dz \right| \le \frac{\pi R}{4} \, \frac{R^6}{(R^4 - 1)^2} \to 0 \quad \text{as } R \to \infty.$$
Taking the limit $R \to \infty$, we have
$$\int_0^\infty \frac{x^6}{(x^4 + 1)^2} \, dx + \int_\infty^0 \frac{(ıy)^6}{((ıy)^4 + 1)^2} \, ı \, dy = \frac{3\pi}{8\sqrt{2}} (1 + ı),$$
$$(1 + ı) \int_0^\infty \frac{x^6}{(x^4 + 1)^2} \, dx = \frac{3\pi}{8\sqrt{2}} (1 + ı), \qquad \int_0^\infty \frac{x^6}{(x^4 + 1)^2} \, dx = \frac{3\pi}{8\sqrt{2}}.$$
Fourier Integrals
Solution 13.19
We know that $\int_0^\pi e^{-R\sin\theta} \, d\theta < \frac{\pi}{R}$. First take the case that ω is positive and the semi-circle is in the upper half plane:
$$\left| \int_{C_R} f(z) e^{ı\omega z} \, dz \right| \le \int_0^\pi \left| e^{ı\omega R e^{ı\theta}} \right| R \, d\theta \, \max_{z \in C_R} |f(z)| = R \int_0^\pi e^{-\omega R \sin\theta} \, d\theta \, \max_{z \in C_R} |f(z)| < \frac{\pi}{\omega} \max_{z \in C_R} |f(z)| \to 0 \quad \text{as } R \to \infty.$$
The procedure is almost the same for negative ω.
Solution 13.20
First we write the integral in terms of Fourier integrals:
$$\int_{-\infty}^\infty \frac{\cos 2x}{x - ı\pi} \, dx = \int_{-\infty}^\infty \frac{e^{ı2x}}{2(x - ı\pi)} \, dx + \int_{-\infty}^\infty \frac{e^{-ı2x}}{2(x - ı\pi)} \, dx.$$
Note that $\frac{1}{2(z - ı\pi)}$ vanishes as $|z| \to \infty$. We close the former Fourier integral in the upper half plane and the latter in the lower half plane. There is a first order pole at $z = ı\pi$ in the upper half plane:
$$\int_{-\infty}^\infty \frac{e^{ı2x}}{2(x - ı\pi)} \, dx = ı2\pi \operatorname{Res}\left( \frac{e^{ı2z}}{2(z - ı\pi)}, z = ı\pi \right) = ı2\pi \frac{e^{-2\pi}}{2}.$$
There are no singularities in the lower half plane, so
$$\int_{-\infty}^\infty \frac{e^{-ı2x}}{2(x - ı\pi)} \, dx = 0.$$
Thus the value of the original real integral is
$$\int_{-\infty}^\infty \frac{\cos 2x}{x - ı\pi} \, dx = ı\pi e^{-2\pi}.$$
Fourier Cosine and Sine Integrals
Solution 13.21
We are considering the integral
$$\int_{-\infty}^\infty \frac{\sin x}{x} \, dx.$$
The integrand is an entire function, so it doesn't appear that the residue theorem would directly apply. Also the integrand is unbounded as $x \to +ı\infty$ and $x \to -ı\infty$, so closing the integral in the upper or lower half plane is not directly applicable. In order to proceed, we must write the integrand in a different form. Note that
$$\mathrm{PV}\!\int_{-\infty}^\infty \frac{\cos x}{x} \, dx = 0$$
since the integrand is odd and has only a first order pole at x = 0. Thus
$$\int_{-\infty}^\infty \frac{\sin x}{x} \, dx = \mathrm{PV}\!\int_{-\infty}^\infty \frac{e^{ıx}}{ıx} \, dx.$$
Let $C_R$ be the semicircular arc in the upper half plane from R to −R, and let C be the closed contour that is the union of $C_R$ and the real interval [−R, R]. If we close the path of integration with a semicircular arc in the upper half plane, we have
$$\int_{-\infty}^\infty \frac{\sin x}{x} \, dx = \lim_{R \to \infty} \left( \mathrm{PV}\!\int_C \frac{e^{ız}}{ız} \, dz - \int_{C_R} \frac{e^{ız}}{ız} \, dz \right),$$
provided that all the integrals exist. The integral along $C_R$ vanishes as $R \to \infty$ by Jordan's lemma. By the residue theorem for principal values we have
$$\mathrm{PV}\!\int \frac{e^{ız}}{ız} \, dz = ı\pi \operatorname{Res}\left( \frac{e^{ız}}{ız}, 0 \right) = \pi.$$
Combining these results,
$$\int_{-\infty}^\infty \frac{\sin x}{x} \, dx = \pi.$$
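A numerical aside (not from the text): the sine integral $\mathrm{Si}(x) = \int_0^x \frac{\sin t}{t} \, dt$ is available in scipy, and $\mathrm{Si}(x) \to \pi/2$ as $x \to \infty$, consistent with the result above.

```python
import numpy as np
from scipy.special import sici

si, ci = sici(1e6)    # Si(x) and Ci(x) at a large argument
print(2 * si, np.pi)  # 2*Si(1e6) ~ 3.14159..., matching pi
```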
Solution 13.22
Note that $(1 - \cos x)/x^2$ has a removable singularity at x = 0 and decays like $1/x^2$ at infinity, so the integral exists. Since $(\sin x)/x^2$ is an odd function with a simple pole at x = 0, the principal value of its integral vanishes:
$$\mathrm{PV}\!\int_{-\infty}^\infty \frac{\sin x}{x^2} \, dx = 0.$$
Therefore
$$\int_{-\infty}^\infty \frac{1 - \cos x}{x^2} \, dx = \mathrm{PV}\!\int_{-\infty}^\infty \frac{1 - \cos x - ı\sin x}{x^2} \, dx = \mathrm{PV}\!\int_{-\infty}^\infty \frac{1 - e^{ıx}}{x^2} \, dx.$$
Let $C_R$ be the semi-circle of radius R in the upper half plane. Since
$$\lim_{R \to \infty} R \max_{z \in C_R} \left| \frac{1 - e^{ız}}{z^2} \right| = \lim_{R \to \infty} R \frac{2}{R^2} = 0,$$
the integral along $C_R$ vanishes as $R \to \infty$. We can apply Result 13.4.1:
$$\mathrm{PV}\!\int_{-\infty}^\infty \frac{1 - e^{ıx}}{x^2} \, dx = ı\pi \operatorname{Res}\left( \frac{1 - e^{ız}}{z^2}, z = 0 \right) = ı\pi \lim_{z \to 0} \frac{1 - e^{ız}}{z} = ı\pi \lim_{z \to 0} \frac{-ı e^{ız}}{1},$$
$$\int_{-\infty}^\infty \frac{1 - \cos x}{x^2} \, dx = \pi.$$
Solution 13.23
Consider
$$\int_0^\infty \frac{\sin(\pi x)}{x(1 - x^2)} \, dx.$$
Note that the integrand has removable singularities at the points $x = 0, \pm 1$ and is an even function:
$$\int_0^\infty \frac{\sin(\pi x)}{x(1 - x^2)} \, dx = \frac{1}{2} \int_{-\infty}^\infty \frac{\sin(\pi x)}{x(1 - x^2)} \, dx.$$
Note that $\frac{\cos(\pi x)}{x(1 - x^2)}$ is an odd function with first order poles at $x = 0, \pm 1$, so
$$\mathrm{PV}\!\int_{-\infty}^\infty \frac{\cos(\pi x)}{x(1 - x^2)} \, dx = 0, \qquad \int_0^\infty \frac{\sin(\pi x)}{x(1 - x^2)} \, dx = -\frac{ı}{2} \mathrm{PV}\!\int_{-\infty}^\infty \frac{e^{ı\pi x}}{x(1 - x^2)} \, dx.$$
Let $C_R$ be the semi-circle of radius R in the upper half plane. Since
$$\lim_{R \to \infty} R \max_{z \in C_R} \left| \frac{e^{ı\pi z}}{z(1 - z^2)} \right| = \lim_{R \to \infty} \frac{R}{R(R^2 - 1)} = 0,$$
the integral along $C_R$ vanishes as $R \to \infty$. We can apply Result 13.4.1:
$$-\frac{ı}{2} \mathrm{PV}\!\int_{-\infty}^\infty \frac{e^{ı\pi x}}{x(1 - x^2)} \, dx = ı\pi \frac{-ı}{2} \bigl( \operatorname{Res}(\cdot, 0) + \operatorname{Res}(\cdot, 1) + \operatorname{Res}(\cdot, -1) \bigr) = \frac{\pi}{2} \left( \lim_{z \to 0} \frac{e^{ı\pi z}}{1 - z^2} - \lim_{z \to 1} \frac{e^{ı\pi z}}{z(1 + z)} + \lim_{z \to -1} \frac{e^{ı\pi z}}{z(1 - z)} \right) = \frac{\pi}{2} \left( 1 - \frac{-1}{2} + \frac{-1}{-2} \right),$$
$$\int_0^\infty \frac{\sin(\pi x)}{x(1 - x^2)} \, dx = \pi.$$
Contour Integration and Branch Cuts
Solution 13.24
Let C be the boundary of the region $\epsilon < r < R$, $0 < \theta < \pi$. Choose the branch of the logarithm with a branch cut on the negative imaginary axis and the angle range $-\pi/2 < \theta < 3\pi/2$. We consider the integral of $\log^2 z/(1 + z^2)$ on this contour:
$$\int_C \frac{\log^2 z}{1 + z^2} \, dz = ı2\pi \operatorname{Res}\left( \frac{\log^2 z}{1 + z^2}, z = ı \right) = ı2\pi \lim_{z \to ı} \frac{\log^2 z}{z + ı} = ı2\pi \frac{(ı\pi/2)^2}{ı2} = -\frac{\pi^3}{4}.$$
Let $C_R$ be the semi-circle from R to −R in the upper half plane. The integral along $C_R$ vanishes as $R \to \infty$ by the maximum modulus integral bound:
$$\left| \int_{C_R} \frac{\log^2 z}{1 + z^2} \, dz \right| \le \pi R \, \frac{\ln^2 R + 2\pi \ln R + \pi^2}{R^2 - 1} \to 0 \quad \text{as } R \to \infty.$$
Let $C_\epsilon$ be the semi-circle from −ε to ε in the upper half plane. The integral along $C_\epsilon$ vanishes as $\epsilon \to 0$:
$$\left| \int_{C_\epsilon} \frac{\log^2 z}{1 + z^2} \, dz \right| \le \pi\epsilon \, \frac{\ln^2 \epsilon - 2\pi \ln \epsilon + \pi^2}{1 - \epsilon^2} \to 0 \quad \text{as } \epsilon \to 0.$$
Now we take the limits $\epsilon \to 0$ and $R \to \infty$ for the integral along C:
$$\int_0^\infty \frac{\ln^2 r}{1 + r^2} \, dr + \int_\infty^0 \frac{(\ln r + ı\pi)^2}{1 + r^2} \, dr = -\frac{\pi^3}{4},$$
$$2 \int_0^\infty \frac{\ln^2 x}{1 + x^2} \, dx + ı2\pi \int_0^\infty \frac{\ln x}{1 + x^2} \, dx = \pi^2 \int_0^\infty \frac{dx}{1 + x^2} - \frac{\pi^3}{4}. \tag{13.1}$$

Figure 13.10: The path of integration.

We evaluate the integral of $1/(1 + x^2)$ by extending the path of integration to $(-\infty, \infty)$ and closing it in the upper half plane. Since
$$\lim_{R \to \infty} R \max_{z \in C_R} \left| \frac{1}{1 + z^2} \right| \le \lim_{R \to \infty} \frac{R}{R^2 - 1} = 0,$$
the integral of $1/(1 + z^2)$ along $C_R$ vanishes as $R \to \infty$. We evaluate the integral with the Residue Theorem:
$$\pi^2 \int_0^\infty \frac{dx}{1 + x^2} = \frac{\pi^2}{2} \int_{-\infty}^\infty \frac{dx}{1 + x^2} = \frac{\pi^2}{2} \, ı2\pi \operatorname{Res}\left( \frac{1}{1 + z^2}, z = ı \right) = ı\pi^3 \lim_{z \to ı} \frac{1}{z + ı} = \frac{\pi^3}{2}.$$
Now we return to Equation 13.1:
$$2 \int_0^\infty \frac{\ln^2 x}{1 + x^2} \, dx + ı2\pi \int_0^\infty \frac{\ln x}{1 + x^2} \, dx = \frac{\pi^3}{4}.$$
We equate the real and imaginary parts to solve for the desired integrals:
$$\int_0^\infty \frac{\ln^2 x}{1 + x^2} \, dx = \frac{\pi^3}{8}, \qquad \int_0^\infty \frac{\ln x}{1 + x^2} \, dx = 0.$$
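A numerical aside (not from the text) confirming both values:

```python
import numpy as np
from scipy.integrate import quad

i2, _ = quad(lambda x: np.log(x)**2 / (1 + x**2), 0, np.inf)
i1, _ = quad(lambda x: np.log(x) / (1 + x**2), 0, np.inf)
print(i2, np.pi**3 / 8)  # both ~3.87578
print(i1)                # ~0
```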
Solution 13.25
We consider the branch of the function
$$f(z) = \frac{\log z}{z^2 + 5z + 6}$$
with a branch cut on the positive real axis and $0 < \arg(z) < 2\pi$. Let $C_\epsilon$ and $C_R$ denote the circles of radius ε and R, where $\epsilon < 1 < R$; $C_\epsilon$ is negatively oriented and $C_R$ is positively oriented. Consider the closed contour C traced by a point moving from ε to R above the branch cut, next around $C_R$ back to R, then below the cut to ε, and finally around $C_\epsilon$ back to ε. (See Figure 13.10.)
We can evaluate the integral of f(z) along C with the residue theorem. For R > 3, there are first order poles inside the path of integration at z = −2 and z = −3:
$$\int_C \frac{\log z}{z^2 + 5z + 6} \, dz = ı2\pi \left( \lim_{z \to -2} \frac{\log z}{z + 3} + \lim_{z \to -3} \frac{\log z}{z + 2} \right) = ı2\pi \bigl( \log(-2) - \log(-3) \bigr) = ı2\pi \bigl( \log 2 + ı\pi - \log 3 - ı\pi \bigr) = ı2\pi \log \frac{2}{3}.$$
In the limit $\epsilon \to 0$, the integral along $C_\epsilon$ vanishes. We demonstrate this with the maximum modulus theorem:
$$\left| \int_{C_\epsilon} \frac{\log z}{z^2 + 5z + 6} \, dz \right| \le 2\pi\epsilon \, \frac{2\pi - \log \epsilon}{6 - 5\epsilon - \epsilon^2} \to 0 \quad \text{as } \epsilon \to 0.$$
In the limit $R \to \infty$, the integral along $C_R$ vanishes as well:
$$\left| \int_{C_R} \frac{\log z}{z^2 + 5z + 6} \, dz \right| \le 2\pi R \, \frac{\log R + 2\pi}{R^2 - 5R - 6} \to 0 \quad \text{as } R \to \infty.$$
Taking the limits $\epsilon \to 0$ and $R \to \infty$, the integral along C is
$$\int_C \frac{\log z}{z^2 + 5z + 6} \, dz = \int_0^\infty \frac{\log x}{x^2 + 5x + 6} \, dx + \int_\infty^0 \frac{\log x + ı2\pi}{x^2 + 5x + 6} \, dx = -ı2\pi \int_0^\infty \frac{dx}{x^2 + 5x + 6}.$$
Now we can evaluate the real integral:
$$-ı2\pi \int_0^\infty \frac{dx}{x^2 + 5x + 6} = ı2\pi \log \frac{2}{3}, \qquad \int_0^\infty \frac{dx}{x^2 + 5x + 6} = \log \frac{3}{2}.$$
Solution 13.26
We consider the integral
$$I(a) = \int_0^\infty \frac{x^a}{(x + 1)^2} \, dx.$$
To examine convergence, we split the domain of integration at x = 1. On (0, 1):
$$\left| \int_0^1 \frac{x^a}{(x + 1)^2} \, dx \right| \le \int_0^1 \frac{x^{\Re(a)}}{(x + 1)^2} \, dx \le \int_0^1 x^{\Re(a)} \, dx,$$
which converges for $\Re(a) > -1$. On $(1, \infty)$:
$$\left| \int_1^\infty \frac{x^a}{(x + 1)^2} \, dx \right| \le \int_1^\infty \frac{x^{\Re(a)}}{(x + 1)^2} \, dx \le \int_1^\infty x^{\Re(a) - 2} \, dx,$$
which converges for $\Re(a) < 1$. Thus the integral defining I(a) converges in the strip $-1 < \Re(a) < 1$. The integral converges uniformly in any closed subset of this domain. Uniform convergence means that we can differentiate the integral with respect to a and interchange the order of integration and differentiation:
$$I'(a) = \int_0^\infty \frac{x^a \log x}{(x + 1)^2} \, dx.$$
Thus we see that I(a) is analytic for $-1 < \Re(a) < 1$.
For $-1 < \Re(a) < 1$ and $a \neq 0$, $z^a$ is multi-valued. Consider the branch of the function $f(z) = z^a/(z + 1)^2$ with a branch cut on the positive real axis and $0 < \arg(z) < 2\pi$. We integrate along a keyhole contour like the one in Figure 13.10. To show that the integral on $C_\epsilon$ vanishes as $\epsilon \to 0$, we first write $z^a$ in modulus-argument form, with $z = \epsilon e^{ı\theta}$ and $a = \alpha + ı\beta$:
$$z^a = e^{a \log z} = e^{(\alpha + ı\beta)(\ln \epsilon + ı\theta)} = \epsilon^\alpha e^{-\beta\theta} e^{ı(\beta \ln \epsilon + \alpha\theta)}.$$
Now we bound the integrals:
$$\left| \int_{C_\epsilon} \frac{z^a}{(z + 1)^2} \, dz \right| \le 2\pi\epsilon \, \frac{\epsilon^\alpha e^{2\pi|\beta|}}{(1 - \epsilon)^2} \to 0 \quad \text{as } \epsilon \to 0, \qquad \left| \int_{C_R} \frac{z^a}{(z + 1)^2} \, dz \right| \le 2\pi R \, \frac{R^\alpha e^{2\pi|\beta|}}{(R - 1)^2} \to 0 \quad \text{as } R \to \infty.$$
Above the branch cut, $z = r e^{ı0}$, the integrand is $f(r e^{ı0}) = \frac{r^a}{(r + 1)^2}$; below the branch cut, $z = r e^{ı2\pi}$, we have $f(r e^{ı2\pi}) = \frac{e^{ı2\pi a} r^a}{(r + 1)^2}$. Now we use the residue theorem:
$$\bigl( 1 - e^{ı2\pi a} \bigr) \int_0^\infty \frac{r^a}{(r + 1)^2} \, dr = ı2\pi \operatorname{Res}\left( \frac{z^a}{(z + 1)^2}, -1 \right) = ı2\pi \lim_{z \to -1} \frac{d}{dz} z^a = ı2\pi \, a \, e^{ı\pi(a - 1)},$$
$$\int_0^\infty \frac{x^a}{(x + 1)^2} \, dx = \frac{-ı2\pi a}{e^{-ı\pi a} - e^{ı\pi a}} = \frac{\pi a}{\sin(\pi a)} \quad \text{for } -1 < \Re(a) < 1, \; a \neq 0.$$
The right side has a removable singularity at a = 0. We use analytic continuation to extend the answer to a = 0:
$$I(a) = \int_0^\infty \frac{x^a}{(x + 1)^2} \, dx = \begin{cases} \dfrac{\pi a}{\sin(\pi a)} & \text{for } -1 < \Re(a) < 1, \; a \neq 0, \\[1ex] 1 & \text{for } a = 0. \end{cases}$$
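A quick numerical spot check of this formula (an aside, not from the text; a = 1/2 is an arbitrary choice):

```python
import numpy as np
from scipy.integrate import quad

a = 0.5
val, _ = quad(lambda x: x**a / (x + 1)**2, 0, np.inf)
print(val, np.pi * a / np.sin(np.pi * a))  # both ~1.5707963 (= pi/2)
```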
We can derive two more integrals by differentiating this formula with respect to a and taking the limit $a \to 0$:
$$I'(a) = \int_0^\infty \frac{x^a \log x}{(x + 1)^2} \, dx, \qquad I''(a) = \int_0^\infty \frac{x^a \log^2 x}{(x + 1)^2} \, dx,$$
$$I'(0) = \int_0^\infty \frac{\log x}{(x + 1)^2} \, dx, \qquad I''(0) = \int_0^\infty \frac{\log^2 x}{(x + 1)^2} \, dx.$$
We can find I'(0) and I''(0) either by differentiating the expression for I(a) or by finding the first few terms in the Taylor series expansion of I(a) about a = 0. The latter approach is a little easier:
$$I(a) = \sum_{n=0}^\infty \frac{I^{(n)}(0)}{n!} a^n, \qquad I(a) = \frac{\pi a}{\sin(\pi a)} = \frac{\pi a}{\pi a - (\pi a)^3/6 + O(a^5)} = \frac{1}{1 - (\pi a)^2/6 + O(a^4)} = 1 + \frac{\pi^2 a^2}{6} + O(a^4),$$
$$I'(0) = \int_0^\infty \frac{\log x}{(x + 1)^2} \, dx = 0, \qquad I''(0) = \int_0^\infty \frac{\log^2 x}{(x + 1)^2} \, dx = \frac{\pi^2}{3}.$$
Solution 13.27
1. We consider the integral
$$I(a) = \int_0^\infty \frac{x^a}{1 + x^2} \, dx.$$
To examine convergence, we split the domain of integration at x = 1. As in Solution 13.26,
$$\left| \int_0^1 \frac{x^a}{1 + x^2} \, dx \right| \le \int_0^1 x^{\Re(a)} \, dx \quad \text{and} \quad \left| \int_1^\infty \frac{x^a}{1 + x^2} \, dx \right| \le \int_1^\infty x^{\Re(a) - 2} \, dx,$$
so the integral defining I(a) converges in the strip $-1 < \Re(a) < 1$. The integral converges uniformly in any closed subset of this domain, so we can differentiate under the integral sign,
$$I'(a) = \int_0^\infty \frac{x^a \log x}{1 + x^2} \, dx,$$
and I(a) is analytic for $-1 < \Re(a) < 1$.
2. For $-1 < \Re(a) < 1$ and $a \neq 0$, $z^a$ is multi-valued. Consider the branch of the function $f(z) = z^a/(1 + z^2)$ with a branch cut on the positive real axis and $0 < \arg(z) < 2\pi$. We integrate along the contour in Figure 13.11. The integral on $C_\rho$ vanishes as $\rho \to 0$. To show this with the maximum modulus integral bound, first write $z^a$ in modulus-argument form, where $z = \rho e^{ı\theta}$ and $a = \alpha + ı\beta$:
$$z^a = e^{a \log z} = e^{(\alpha + ı\beta)(\log \rho + ı\theta)} = \rho^\alpha e^{-\beta\theta} e^{ı(\beta \log \rho + \alpha\theta)}.$$

Figure 13.11: The path of integration.

Now we bound the integrals:
$$\left| \int_{C_\rho} \frac{z^a}{1 + z^2} \, dz \right| \le 2\pi\rho \, \frac{\rho^\alpha e^{2\pi|\beta|}}{1 - \rho^2} \to 0 \quad \text{as } \rho \to 0, \qquad \left| \int_{C_R} \frac{z^a}{1 + z^2} \, dz \right| \le 2\pi R \, \frac{R^\alpha e^{2\pi|\beta|}}{R^2 - 1} \to 0 \quad \text{as } R \to \infty.$$
Above the branch cut, $z = r e^{ı0}$, the integrand is $f(r e^{ı0}) = \frac{r^a}{1 + r^2}$; below the branch cut, $z = r e^{ı2\pi}$, we have $f(r e^{ı2\pi}) = \frac{e^{ı2\pi a} r^a}{1 + r^2}$. Now we use the residue theorem:
$$\bigl( 1 - e^{ı2\pi a} \bigr) \int_0^\infty \frac{x^a}{1 + x^2} \, dx = ı2\pi \left( \lim_{z \to ı} \frac{z^a}{z + ı} + \lim_{z \to -ı} \frac{z^a}{z - ı} \right) = ı2\pi \left( \frac{e^{ıa\pi/2}}{ı2} + \frac{e^{ıa3\pi/2}}{-ı2} \right),$$
$$\int_0^\infty \frac{x^a}{1 + x^2} \, dx = \pi \frac{e^{ıa\pi/2} - e^{ıa3\pi/2}}{1 - e^{ı2a\pi}} = \pi \frac{e^{ıa\pi/2} (1 - e^{ıa\pi})}{(1 + e^{ıa\pi})(1 - e^{ıa\pi})} = \frac{\pi}{e^{-ıa\pi/2} + e^{ıa\pi/2}} = \frac{\pi}{2\cos(\pi a/2)} \quad \text{for } -1 < \Re(a) < 1, \; a \neq 0.$$
We use analytic continuation to extend the answer to a = 0:
$$I(a) = \int_0^\infty \frac{x^a}{1 + x^2} \, dx = \frac{\pi}{2\cos(\pi a/2)} \quad \text{for } -1 < \Re(a) < 1.$$
3. We can derive the last two integrals by differentiating this formula with respect to a and taking the limit $a \to 0$:
$$I'(0) = \int_0^\infty \frac{\log x}{1 + x^2} \, dx, \qquad I''(0) = \int_0^\infty \frac{\log^2 x}{1 + x^2} \, dx.$$
Again the Taylor series approach is a little easier:
$$I(a) = \frac{\pi}{2\cos(\pi a/2)} = \frac{\pi}{2} \frac{1}{1 - (\pi a/2)^2/2 + O(a^4)} = \frac{\pi}{2} \left( 1 + \frac{(\pi a/2)^2}{2} + O(a^4) \right) = \frac{\pi}{2} + \frac{\pi^3}{16} a^2 + O(a^4),$$
$$I'(0) = \int_0^\infty \frac{\log x}{1 + x^2} \, dx = 0, \qquad I''(0) = \int_0^\infty \frac{\log^2 x}{1 + x^2} \, dx = \frac{\pi^3}{8}.$$
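A quick numerical spot check (an aside, not from the text; a = 1/3 is an arbitrary choice):

```python
import numpy as np
from scipy.integrate import quad

a = 1.0 / 3.0
val, _ = quad(lambda x: x**a / (1 + x**2), 0, np.inf)
print(val, np.pi / (2 * np.cos(np.pi * a / 2)))  # both ~1.8138
```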
Solution 13.28
Convergence. If $x^a f(x) \sim x^\alpha$ as $x \to 0$ for some $\alpha > -1$, then the integral $\int_0^1 x^a f(x) \, dx$ converges absolutely. If $x^a f(x) \sim x^\beta$ as $x \to \infty$ for some $\beta < -1$, then the integral $\int_1^\infty x^a f(x) \, dx$ converges absolutely. These are sufficient conditions for the absolute convergence of $\int_0^\infty x^a f(x) \, dx$.
Contour Integration. We put a branch cut on the positive real axis, choose $0 < \arg(z) < 2\pi$, and consider the integral of $z^a f(z)$ on the keyhole contour of Figure 13.10. Let the singularities of f(z) occur at $z_1, \dots, z_n$. By the residue theorem,
$$\int_C z^a f(z) \, dz = ı2\pi \sum_{k=1}^n \operatorname{Res}(z^a f(z), z_k).$$
On the circle of radius ε, the integrand is $o(\epsilon^{-1})$; since the length of $C_\epsilon$ is $2\pi\epsilon$, the integral on $C_\epsilon$ vanishes as $\epsilon \to 0$. On the circle of radius R, the integrand is $o(R^{-1})$; since the length of $C_R$ is $2\pi R$, the integral on $C_R$ vanishes as $R \to \infty$. The value of the integrand below the branch cut, $z = x e^{ı2\pi}$, is
$$f(x e^{ı2\pi}) = x^a e^{ı2\pi a} f(x).$$
In the limit $\epsilon \to 0$ and $R \to \infty$ we have
$$\int_0^\infty x^a f(x) \, dx + \int_\infty^0 x^a e^{ı2\pi a} f(x) \, dx = ı2\pi \sum_{k=1}^n \operatorname{Res}(z^a f(z), z_k),$$
$$\int_0^\infty x^a f(x) \, dx = \frac{ı2\pi}{1 - e^{ı2\pi a}} \sum_{k=1}^n \operatorname{Res}(z^a f(z), z_k).$$
Solution 13.29
In the interval of uniform convergence of the integral, we can differentiate the formula
$$\int_0^\infty x^a f(x) \, dx = \frac{ı2\pi}{1 - e^{ı2\pi a}} \sum_{k=1}^n \operatorname{Res}(z^a f(z), z_k)$$
with respect to a to obtain
$$\int_0^\infty x^a f(x) \log x \, dx = \frac{ı2\pi}{1 - e^{ı2\pi a}} \sum_{k=1}^n \operatorname{Res}(z^a f(z) \log z, z_k) - \frac{4\pi^2 e^{ı2\pi a}}{(1 - e^{ı2\pi a})^2} \sum_{k=1}^n \operatorname{Res}(z^a f(z), z_k).$$
Since $(1 - e^{ı2\pi a})^2 = -4 e^{ı2\pi a} \sin^2(\pi a)$, this simplifies to
$$\int_0^\infty x^a f(x) \log x \, dx = \frac{ı2\pi}{1 - e^{ı2\pi a}} \sum_{k=1}^n \operatorname{Res}(z^a f(z) \log z, z_k) + \frac{\pi^2}{\sin^2(\pi a)} \sum_{k=1}^n \operatorname{Res}(z^a f(z), z_k).$$
Differentiating the solution of Exercise 13.28 m times with respect to a yields
$$\int_0^\infty x^a f(x) \log^m x \, dx = \frac{\partial^m}{\partial a^m} \left( \frac{ı2\pi}{1 - e^{ı2\pi a}} \sum_{k=1}^n \operatorname{Res}(z^a f(z), z_k) \right).$$
Solution 13.30
Taking the limit $a \to 0$ in the solution of Exercise 13.28 yields
$$\int_0^\infty f(x) \, dx = ı2\pi \lim_{a \to 0} \frac{\sum_{k=1}^n \operatorname{Res}(z^a f(z), z_k)}{1 - e^{ı2\pi a}}.$$
The numerator vanishes in the limit because the sum of all the residues of f(z) is zero. Thus we can use L'Hospital's rule:
$$\int_0^\infty f(x) \, dx = ı2\pi \lim_{a \to 0} \frac{\sum_{k=1}^n \operatorname{Res}(z^a f(z) \log z, z_k)}{-ı2\pi e^{ı2\pi a}} = -\sum_{k=1}^n \operatorname{Res}(f(z) \log z, z_k).$$
This suggests that we could have derived the result directly by considering the integral of $f(z) \log z$ on the keyhole contour of Figure 13.10. We put a branch cut on the positive real axis and choose the branch with $\arg z = 0$ above the cut. Recall that we have assumed that f(z) has only isolated singularities and no singularities on the positive real axis $[0, \infty)$. By the residue theorem,
$$\int_C f(z) \log z \, dz = ı2\pi \sum_{k=1}^n \operatorname{Res}(f(z) \log z, z = z_k).$$
By assuming that $f(z) \sim z^\alpha$ as $z \to 0$, where $\alpha > -1$, the integral on $C_\epsilon$ vanishes as $\epsilon \to 0$; by assuming that $f(z) \sim z^\beta$ as $z \to \infty$, where $\beta < -1$, the integral on $C_R$ vanishes as $R \to \infty$. The value of the integrand below the branch cut, $z = x e^{ı2\pi}$, is $f(x)(\log x + ı2\pi)$. Taking the limits $\epsilon \to 0$ and $R \to \infty$, we have
$$\int_0^\infty f(x) \log x \, dx - \int_0^\infty f(x) (\log x + ı2\pi) \, dx = ı2\pi \sum_{k=1}^n \operatorname{Res}(f(z) \log z, z_k).$$
Thus we corroborate the result:
$$\int_0^\infty f(x) \, dx = -\sum_{k=1}^n \operatorname{Res}(f(z) \log z, z_k).$$
Solution 13.31
Consider the integral of $f(z) \log^2 z$ on the same keyhole contour, with a branch cut on the positive real axis and $0 < \arg z < 2\pi$. Let $z_1, \dots, z_n$ be the singularities of f(z). By the residue theorem,
$$\int_C f(z) \log^2 z \, dz = ı2\pi \sum_{k=1}^n \operatorname{Res}\bigl( f(z) \log^2 z, z_k \bigr).$$
If $f(z) \sim z^\alpha$ as $z \to 0$ for some $\alpha > -1$, then the integral on $C_\epsilon$ vanishes as $\epsilon \to 0$; if $f(z) \sim z^\beta$ as $z \to \infty$ for some $\beta < -1$, then the integral on $C_R$ vanishes as $R \to \infty$. Below the branch cut the integrand is $f(x)(\log x + ı2\pi)^2$. Thus we have
$$\int_0^\infty f(x) \log^2 x \, dx - \int_0^\infty f(x) \bigl( \log^2 x + ı4\pi \log x - 4\pi^2 \bigr) \, dx = ı2\pi \sum_{k=1}^n \operatorname{Res}\bigl( f(z) \log^2 z, z_k \bigr),$$
$$-ı4\pi \int_0^\infty f(x) \log x \, dx + 4\pi^2 \int_0^\infty f(x) \, dx = ı2\pi \sum_{k=1}^n \operatorname{Res}\bigl( f(z) \log^2 z, z_k \bigr),$$
and, using the result of Solution 13.30,
$$\int_0^\infty f(x) \log x \, dx = -\frac{1}{2} \sum_{k=1}^n \operatorname{Res}\bigl( f(z) \log^2 z, z_k \bigr) + ı\pi \sum_{k=1}^n \operatorname{Res}(f(z) \log z, z_k).$$

Figure 13.12: Possible path of integration for $f(z) = \frac{z^a}{1 + z^4}$.

Solution 13.32
Convergence. We consider
$$\int_0^\infty \frac{x^a}{1 + x^4} \, dx.$$
Since the integrand behaves like $x^a$ near x = 0, we must have $\Re(a) > -1$; since it behaves like $x^{a - 4}$ at infinity, we must have $\Re(a - 4) < -1$. The integral converges for $-1 < \Re(a) < 3$.
Contour Integration. The function
$$f(z) = \frac{z^a}{1 + z^4}$$
has first order poles at $z = (\pm 1 \pm ı)/\sqrt{2}$ and a branch point at z = 0. We could evaluate the real integral by putting a branch cut on the positive real axis with $0 < \arg(z) < 2\pi$ and integrating f(z) on the contour in Figure 13.12. Integrating on this contour would work because the value of the integrand below the branch cut is a constant times the value of the integrand above the branch cut. After demonstrating that the integrals along $C_\epsilon$ and $C_R$ vanish in the limits $\epsilon \to 0$ and $R \to \infty$, we would see that the value of the integral is a constant times the sum of the residues at the four poles. However, this is not the only (and not the best) contour that can be used to evaluate the real integral. Consider the value of the integral on the line $\arg(z) = \theta$:
$$f(r e^{ı\theta}) = \frac{r^a e^{ıa\theta}}{1 + r^4 e^{ı4\theta}}.$$
If θ is an integer multiple of π/2, then the integrand is a constant multiple of
$$f(r) = \frac{r^a}{1 + r^4}.$$
Thus any of the contours in Figure 13.13 can be used to evaluate the real integral; the only difference is how many residues we have to calculate. We choose the first contour in Figure 13.13. We put a branch cut on the negative real axis and choose the branch $-\pi < \arg(z) < \pi$ to satisfy f(1) = 1. We evaluate the integral along C with the Residue Theorem:
$$\int_C \frac{z^a}{1 + z^4} \, dz = ı2\pi \operatorname{Res}\left( \frac{z^a}{1 + z^4}, z = \frac{1 + ı}{\sqrt{2}} \right).$$
Let $a = \alpha + ı\beta$ and $z = r e^{ı\theta}$. Note that $|z^a| = |(r e^{ı\theta})^{\alpha + ı\beta}| = r^\alpha e^{-\beta\theta}$.

Figure 13.13: Possible paths of integration for $f(z) = \frac{z^a}{1 + z^4}$.

The integral on $C_\epsilon$ vanishes as $\epsilon \to 0$. We demonstrate this with the maximum modulus integral bound:
$$\left| \int_{C_\epsilon} \frac{z^a}{1 + z^4} \, dz \right| \le \frac{\pi\epsilon}{2} \, \frac{\epsilon^\alpha e^{\pi|\beta|/2}}{1 - \epsilon^4} \to 0 \quad \text{as } \epsilon \to 0.$$
The integral on $C_R$ vanishes as $R \to \infty$:
$$\left| \int_{C_R} \frac{z^a}{1 + z^4} \, dz \right| \le \frac{\pi R}{2} \, \frac{R^\alpha e^{\pi|\beta|/2}}{R^4 - 1} \to 0 \quad \text{as } R \to \infty.$$
The value of the integrand on the positive imaginary axis, $z = x e^{ı\pi/2}$, is
$$\frac{(x e^{ı\pi/2})^a}{1 + (x e^{ı\pi/2})^4} = \frac{x^a e^{ı\pi a/2}}{1 + x^4}.$$
We take the limits $\epsilon \to 0$ and $R \to \infty$:
$$\int_0^\infty \frac{x^a}{1 + x^4} \, dx + \int_\infty^0 \frac{x^a e^{ı\pi a/2}}{1 + x^4} \, e^{ı\pi/2} \, dx = ı2\pi \operatorname{Res}\left( \frac{z^a}{1 + z^4}, e^{ı\pi/4} \right),$$
$$\bigl( 1 - e^{ı\pi(a + 1)/2} \bigr) \int_0^\infty \frac{x^a}{1 + x^4} \, dx = ı2\pi \lim_{z \to e^{ı\pi/4}} \frac{z^a (z - e^{ı\pi/4})}{1 + z^4} = ı2\pi \lim_{z \to e^{ı\pi/4}} \frac{a z^a (z - e^{ı\pi/4}) + z^a}{4z^3} = ı2\pi \, \frac{e^{ı\pi a/4}}{4 e^{ı3\pi/4}},$$
$$\int_0^\infty \frac{x^a}{1 + x^4} \, dx = \frac{-ı\pi}{2 \bigl( e^{-ı\pi(a + 1)/4} - e^{ı\pi(a + 1)/4} \bigr)} = \frac{\pi}{4} \csc\left( \frac{\pi(a + 1)}{4} \right).$$
Solution 13.33
Consider the branch of $f(z) = z^{1/2} \log z/(z + 1)^2$ with a branch cut on the positive real axis and $0 < \arg z < 2\pi$. We integrate this function on the keyhole contour of Figure 13.10. We use the maximum modulus integral bound to show that the integral on $C_\rho$ vanishes as $\rho \to 0$:
$$\left| \int_{C_\rho} \frac{z^{1/2} \log z}{(z + 1)^2} \, dz \right| \le 2\pi\rho \, \frac{\rho^{1/2} (2\pi - \log \rho)}{(1 - \rho)^2} \to 0 \quad \text{as } \rho \to 0.$$
The integral on $C_R$ vanishes as $R \to \infty$:
$$\left| \int_{C_R} \frac{z^{1/2} \log z}{(z + 1)^2} \, dz \right| \le 2\pi R \, \frac{R^{1/2} (\log R + 2\pi)}{(R - 1)^2} \to 0 \quad \text{as } R \to \infty.$$
Above the branch cut, $z = x e^{ı0}$, the integrand is
$$f(x e^{ı0}) = \frac{x^{1/2} \log x}{(x + 1)^2};$$
below the branch cut, $z = x e^{ı2\pi}$, we have
$$f(x e^{ı2\pi}) = \frac{-x^{1/2} (\log x + ı2\pi)}{(x + 1)^2}.$$
Taking the limits $\rho \to 0$ and $R \to \infty$, the residue theorem gives us
$$\int_0^\infty \frac{x^{1/2} \log x}{(x + 1)^2} \, dx + \int_0^\infty \frac{x^{1/2} (\log x + ı2\pi)}{(x + 1)^2} \, dx = ı2\pi \operatorname{Res}\left( \frac{z^{1/2} \log z}{(z + 1)^2}, -1 \right),$$
$$2 \int_0^\infty \frac{x^{1/2} \log x}{(x + 1)^2} \, dx + ı2\pi \int_0^\infty \frac{x^{1/2}}{(x + 1)^2} \, dx = ı2\pi \lim_{z \to -1} \left( \frac{1}{2} z^{-1/2} \log z + z^{1/2} \frac{1}{z} \right) = ı2\pi \left( \frac{1}{2} (-ı)(ı\pi) - ı \right) = 2\pi + ı\pi^2.$$
Equating real and imaginary parts,
$$\int_0^\infty \frac{x^{1/2} \log x}{(x + 1)^2} \, dx = \pi, \qquad \int_0^\infty \frac{x^{1/2}}{(x + 1)^2} \, dx = \frac{\pi}{2}.$$
Exploiting Symmetry
Solution 13.34
Convergence. The integrand
$$\frac{e^{az}}{e^z - e^{-z}} = \frac{e^{az}}{2 \sinh(z)}$$
has first order poles at $z = ın\pi$, $n \in \mathbb{Z}$. To study convergence, we split the domain of integration:
$$\int_{-\infty}^\infty = \int_{-\infty}^{-1} + \int_{-1}^1 + \int_1^\infty.$$
The principal value integral
$$\mathrm{PV}\!\int_{-1}^1 \frac{e^{ax}}{e^x - e^{-x}} \, dx$$
exists for any a because the integrand has only a first order pole on the path of integration. Now consider the integral on $(1, \infty)$:
$$\left| \int_1^\infty \frac{e^{ax}}{e^x - e^{-x}} \, dx \right| = \left| \int_1^\infty \frac{e^{(a - 1)x}}{1 - e^{-2x}} \, dx \right| \le \frac{1}{1 - e^{-2}} \int_1^\infty e^{(a - 1)x} \, dx.$$
This integral converges for a − 1 < 0, i.e. a < 1. The integral on $(-\infty, -1)$ converges similarly for a > −1. Thus we see that the integral converges for real a with |a| < 1.
Choice of Contour. Consider the contour C that is the boundary of the region $-R < x < R$, $0 < y < \pi$. The integrand has no singularities inside the contour; there are first order poles on the contour at z = 0 and z = ıπ. The value of the integral along the contour is ıπ times the sum of these two residues. The integrals along the vertical sides of the contour vanish as $R \to \infty$:
$$\left| \int_{\pm R}^{\pm R + ı\pi} \frac{e^{az}}{e^z - e^{-z}} \, dz \right| \le \pi \, \frac{e^{\pm aR}}{e^R - e^{-R}} \to 0 \quad \text{as } R \to \infty.$$
Evaluating the Integral. We take the limit $R \to \infty$ and apply the residue theorem. Since $e^{x + ı\pi} - e^{-x - ı\pi} = -(e^x - e^{-x})$, the integral along the top of the box, traversed from right to left, equals $e^{ıa\pi}$ times the integral along the bottom. Thus
$$\bigl( 1 + e^{ıa\pi} \bigr) \mathrm{PV}\!\int_{-\infty}^\infty \frac{e^{ax}}{e^x - e^{-x}} \, dx = ı\pi \lim_{z \to 0} \frac{z \, e^{az}}{2 \sinh(z)} + ı\pi \lim_{z \to ı\pi} \frac{(z - ı\pi) e^{az}}{2 \sinh(z)} = ı\pi \frac{1}{2} + ı\pi \frac{e^{ıa\pi}}{-2},$$
$$\mathrm{PV}\!\int_{-\infty}^\infty \frac{e^{ax}}{e^x - e^{-x}} \, dx = \frac{ı\pi (1 - e^{ıa\pi})}{2(1 + e^{ıa\pi})} = \frac{\pi}{2} \frac{ı \bigl( e^{-ıa\pi/2} - e^{ıa\pi/2} \bigr)}{e^{-ıa\pi/2} + e^{ıa\pi/2}} = \frac{\pi}{2} \tan\left( \frac{a\pi}{2} \right).$$
Solution 13.35
1. $$\int_0^\infty \frac{dx}{(1 + x^2)^2} = \frac{1}{2} \int_{-\infty}^\infty \frac{dx}{(1 + x^2)^2}$$
We apply Result 13.4.1 to the integral on the real axis. First we verify that the integrand vanishes fast enough in the upper half plane:
$$\lim_{R \to \infty} R \max_{z \in C_R} \left| \frac{1}{(1 + z^2)^2} \right| = \lim_{R \to \infty} \frac{R}{(R^2 - 1)^2} = 0.$$
Then we evaluate the integral with the residue theorem:
$$\int_{-\infty}^\infty \frac{dx}{(1 + x^2)^2} = ı2\pi \operatorname{Res}\left( \frac{1}{(z - ı)^2 (z + ı)^2}, z = ı \right) = ı2\pi \lim_{z \to ı} \frac{d}{dz} \frac{1}{(z + ı)^2} = ı2\pi \lim_{z \to ı} \frac{-2}{(z + ı)^3} = \frac{\pi}{2},$$
$$\int_0^\infty \frac{dx}{(1 + x^2)^2} = \frac{\pi}{4}.$$
2. We wish to evaluate
$$\int_0^\infty \frac{dx}{x^3 + 1}.$$
Let the contour C be the boundary of the region $0 < r < R$, $0 < \theta < 2\pi/3$. We factor the denominator of the integrand to see that the contour encloses the simple pole at $e^{ı\pi/3}$ for R > 1:
$$z^3 + 1 = (z - e^{ı\pi/3})(z + 1)(z - e^{-ı\pi/3}).$$
We calculate the residue at that point:
$$\operatorname{Res}\left( \frac{1}{z^3 + 1}, z = e^{ı\pi/3} \right) = \lim_{z \to e^{ı\pi/3}} \frac{1}{(z + 1)(z - e^{-ı\pi/3})} = \frac{1}{(e^{ı\pi/3} + 1)(e^{ı\pi/3} - e^{-ı\pi/3})} = -\frac{e^{ı\pi/3}}{3}.$$
We use the residue theorem to evaluate the integral:
$$\int_C \frac{dz}{z^3 + 1} = -\frac{ı2\pi e^{ı\pi/3}}{3}.$$
Let $C_R$ be the circular arc portion of the contour. On the ray $\theta = 2\pi/3$ we have $z = x e^{ı2\pi/3}$ and $z^3 = x^3$, so
$$\int_C \frac{dz}{z^3 + 1} = \int_0^R \frac{dx}{x^3 + 1} + \int_{C_R} \frac{dz}{z^3 + 1} - \int_0^R \frac{e^{ı2\pi/3} \, dx}{x^3 + 1} = \bigl( 1 + e^{-ı\pi/3} \bigr) \int_0^R \frac{dx}{x^3 + 1} + \int_{C_R} \frac{dz}{z^3 + 1}.$$
We show that the integral along $C_R$ vanishes as $R \to \infty$ with the maximum modulus integral bound:
$$\left| \int_{C_R} \frac{dz}{z^3 + 1} \right| \le \frac{2\pi R}{3} \, \frac{1}{R^3 - 1} \to 0 \quad \text{as } R \to \infty.$$
We take $R \to \infty$ and solve for the desired integral:
$$\bigl( 1 + e^{-ı\pi/3} \bigr) \int_0^\infty \frac{dx}{x^3 + 1} = -\frac{ı2\pi e^{ı\pi/3}}{3}, \qquad \int_0^\infty \frac{dx}{x^3 + 1} = \frac{2\pi}{3\sqrt{3}}.$$
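A quick numerical cross-check of this value (an aside, not from the text):

```python
import numpy as np
from scipy.integrate import quad

val, _ = quad(lambda x: 1.0 / (x**3 + 1), 0, np.inf)
print(val, 2 * np.pi / (3 * np.sqrt(3)))  # both ~1.2091995
```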
Figure 13.14: The semi-circle contour.

Solution 13.36
Method 1: Semi-Circle Contour. We wish to evaluate the integral
$$I = \int_0^\infty \frac{dx}{1 + x^6}.$$
We note that the integrand is an even function and express I as an integral over the whole real axis:
$$I = \frac{1}{2} \int_{-\infty}^\infty \frac{dx}{1 + x^6}.$$
Now we evaluate the integral using contour integration. We close the path of integration in the upper half plane. Let $\Gamma_R$ be the semicircular arc from R to −R in the upper half plane, and let Γ be the union of $\Gamma_R$ and the interval [−R, R]. (See Figure 13.14.) We can evaluate the integral along Γ with the residue theorem. The integrand has first order poles at $z = e^{ı\pi(1 + 2k)/6}$, k = 0, 1, 2, 3, 4, 5; three of these poles are in the upper half plane. For R > 1 we have
$$\int_\Gamma \frac{dz}{z^6 + 1} = ı2\pi \sum_{k=0}^2 \lim_{z \to e^{ı\pi(1 + 2k)/6}} \frac{z - e^{ı\pi(1 + 2k)/6}}{z^6 + 1}.$$
Since the numerator and denominator vanish, we apply L'Hospital's rule:
$$= ı2\pi \sum_{k=0}^2 \lim \frac{1}{6z^5} = \frac{ı\pi}{3} \bigl( e^{-ı5\pi/6} + e^{-ı\pi/2} + e^{-ı\pi/6} \bigr) = \frac{ı\pi}{3} \left( \frac{-\sqrt{3} - ı}{2} - ı + \frac{\sqrt{3} - ı}{2} \right) = \frac{2\pi}{3}.$$
Now we examine the integral along $\Gamma_R$. We use the maximum modulus integral bound to show that its value vanishes as $R \to \infty$:
$$\left| \int_{\Gamma_R} \frac{dz}{z^6 + 1} \right| \le \pi R \, \frac{1}{R^6 - 1} \to 0 \quad \text{as } R \to \infty.$$
Now we are prepared to evaluate the original real integral. Taking the limit $R \to \infty$,
$$\int_{-\infty}^\infty \frac{dx}{x^6 + 1} = \frac{2\pi}{3}, \qquad \int_0^\infty \frac{dx}{x^6 + 1} = \frac{\pi}{3}.$$
We would get the same result by closing the path of integration in the lower half plane; note that in this case the closed contour would be in the negative direction.

Figure 13.15: The wedge contour.

Method 2: Wedge Contour. Consider the contour Γ which starts at the origin, goes to the point R along the real axis, then to the point $R e^{ı\pi/3}$ along a circle of radius R, and then back to the origin along the ray $\theta = \pi/3$. (See Figure 13.15.) We can evaluate the integral along Γ with the residue theorem. The integrand has one first order pole inside the contour, at $z = e^{ı\pi/6}$. For R > 1, applying L'Hospital's rule as before,
$$\int_\Gamma \frac{dz}{z^6 + 1} = ı2\pi \lim_{z \to e^{ı\pi/6}} \frac{z - e^{ı\pi/6}}{z^6 + 1} = ı2\pi \lim_{z \to e^{ı\pi/6}} \frac{1}{6z^5} = \frac{\pi}{3} e^{-ı\pi/3}.$$
The integral along the circular arc $\Gamma_R$ vanishes as $R \to \infty$:
$$\left| \int_{\Gamma_R} \frac{dz}{z^6 + 1} \right| \le \frac{\pi R}{3} \, \frac{1}{R^6 - 1} \to 0 \quad \text{as } R \to \infty.$$
On the ray $\theta = \pi/3$ we have $z = x e^{ı\pi/3}$ and $z^6 = x^6$, so, taking the limit $R \to \infty$,
$$\bigl( 1 - e^{ı\pi/3} \bigr) \int_0^\infty \frac{dx}{x^6 + 1} = \frac{\pi}{3} e^{-ı\pi/3}, \qquad \int_0^\infty \frac{dx}{x^6 + 1} = \frac{\pi}{3} \frac{(1 - ı\sqrt{3})/2}{1 - (1 + ı\sqrt{3})/2} = \frac{\pi}{3}.$$

Figure 13.16: cos(2θ) and 1 − (4/π)θ.

Solution 13.37
First note that
$$\cos(2\theta) \ge 1 - \frac{4}{\pi} \theta, \qquad 0 \le \theta \le \frac{\pi}{4}.$$
These two functions are plotted in Figure 13.16. To prove this inequality analytically, note that the two functions are equal at the endpoints of the interval and that cos(2θ) is concave downward on the interval,
$$\frac{d^2}{d\theta^2} \cos(2\theta) = -4\cos(2\theta) \le 0 \quad \text{for } 0 \le \theta \le \frac{\pi}{4},$$
while 1 − 4θ/π is linear.
Let $C_R$ be the circular arc of radius R from θ = 0 to θ = π/4. The integral along this contour vanishes as $R \to \infty$:
$$\left| \int_{C_R} e^{-z^2} \, dz \right| \le \int_0^{\pi/4} \left| e^{-(R e^{ı\theta})^2} \right| R \, d\theta = \int_0^{\pi/4} R \, e^{-R^2 \cos(2\theta)} \, d\theta \le \int_0^{\pi/4} R \, e^{-R^2 (1 - 4\theta/\pi)} \, d\theta = \frac{\pi}{4R} \bigl( 1 - e^{-R^2} \bigr) \to 0 \quad \text{as } R \to \infty.$$
Let C be the boundary of the domain $0 < r < R$, $0 < \theta < \pi/4$. Since the integrand is analytic inside C, the integral along C is zero. Taking the limit $R \to \infty$, the integral from r = 0 to ∞ along θ = 0 is equal to the integral from r = 0 to ∞ along θ = π/4:
$$\int_0^\infty e^{-x^2} \, dx = \int_0^\infty e^{-\left( \frac{1 + ı}{\sqrt{2}} x \right)^2} \frac{1 + ı}{\sqrt{2}} \, dx = \frac{1 + ı}{\sqrt{2}} \int_0^\infty e^{-ıx^2} \, dx = \frac{1 + ı}{\sqrt{2}} \int_0^\infty \bigl( \cos(x^2) - ı\sin(x^2) \bigr) \, dx$$
$$= \frac{1}{\sqrt{2}} \left( \int_0^\infty \cos(x^2) \, dx + \int_0^\infty \sin(x^2) \, dx \right) + \frac{ı}{\sqrt{2}} \left( \int_0^\infty \cos(x^2) \, dx - \int_0^\infty \sin(x^2) \, dx \right).$$
We equate the imaginary part of this equation to see that the integrals of cos(x²) and sin(x²) are equal; the real part then gives us the desired identity:
$$\int_0^\infty \cos(x^2) \, dx = \int_0^\infty \sin(x^2) \, dx = \frac{1}{\sqrt{2}} \int_0^\infty e^{-x^2} \, dx.$$
Solution 13.38
Consider the box contour C that is the boundary of the rectangle $-R \le x \le R$, $0 \le y \le \pi$. There is a removable singularity at z = 0 and a first order pole at z = ıπ. By the residue theorem,
$$\mathrm{PV}\!\int_C \frac{z}{\sinh z} \, dz = ı\pi \operatorname{Res}\left( \frac{z}{\sinh z}, ı\pi \right) = ı\pi \lim_{z \to ı\pi} \frac{z(z - ı\pi)}{\sinh z} = ı\pi \lim_{z \to ı\pi} \frac{2z - ı\pi}{\cosh z} = \pi^2.$$
The integrals along the sides of the box vanish as $R \to \infty$:
$$\left| \int_{\pm R}^{\pm R + ı\pi} \frac{z}{\sinh z} \, dz \right| \le \pi \, \frac{R + \pi}{\sinh R} \to 0 \quad \text{as } R \to \infty.$$
The value of the integrand on the top of the box is
$$\frac{x + ı\pi}{\sinh(x + ı\pi)} = -\frac{x + ı\pi}{\sinh x}.$$
The top is traversed from right to left, so taking the limit $R \to \infty$ gives
$$\int_{-\infty}^\infty \frac{x}{\sinh x} \, dx + \mathrm{PV}\!\int_{-\infty}^\infty \frac{x + ı\pi}{\sinh x} \, dx = \pi^2.$$
Note that
$$\mathrm{PV}\!\int_{-\infty}^\infty \frac{dx}{\sinh x} = 0$$
since there is a first order pole at x = 0 and the integrand is odd. Thus
$$\int_{-\infty}^\infty \frac{x}{\sinh x} \, dx = \frac{\pi^2}{2}.$$
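Two numerical asides (not from the text). scipy's Fresnel integrals use the normalization $S(t), C(t) = \int_0^t \sin(\pi u^2/2)\,du, \int_0^t \cos(\pi u^2/2)\,du$, so $\int_0^\infty \cos(x^2)\,dx = \sqrt{\pi/2}\,C(\infty) = \frac{1}{2}\sqrt{\pi/2}$, matching $\frac{1}{\sqrt{2}} \cdot \frac{\sqrt{\pi}}{2}$ from Solution 13.37.

```python
import numpy as np
from scipy.special import fresnel
from scipy.integrate import quad

S_big, C_big = fresnel(1e8)   # both tend to 1/2 at large argument
print(np.sqrt(np.pi / 2) * C_big, np.sqrt(np.pi) / (2 * np.sqrt(2)))

# And the integral of x/sinh(x) over the real line from Solution 13.38:
val, _ = quad(lambda x: x / np.sinh(x) if x != 0 else 1.0, -50, 50)
print(val, np.pi**2 / 2)      # both ~4.9348022
```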
Solution 13.39
First we evaluate
$$\int_{-\infty}^\infty \frac{e^{ax}}{e^x + 1} \, dx.$$
Consider the rectangular contour in the positive direction with corners at ±R and ±R + ı2π. With the maximum modulus integral bound we see that the integrals on the vertical sides of the contour vanish as $R \to \infty$:
$$\left| \int_R^{R + ı2\pi} \frac{e^{az}}{e^z + 1} \, dz \right| \le 2\pi \, \frac{e^{aR}}{e^R - 1} \to 0, \qquad \left| \int_{-R + ı2\pi}^{-R} \frac{e^{az}}{e^z + 1} \, dz \right| \le 2\pi \, \frac{e^{-aR}}{1 - e^{-R}} \to 0 \quad \text{as } R \to \infty.$$
In the limit as R tends to infinity, the integral on the rectangular contour is the sum of the integrals along the top and bottom sides:
$$\int_C \frac{e^{az}}{e^z + 1} \, dz = \int_{-\infty}^\infty \frac{e^{ax}}{e^x + 1} \, dx + \int_\infty^{-\infty} \frac{e^{a(x + ı2\pi)}}{e^{x + ı2\pi} + 1} \, dx = \bigl( 1 - e^{ı2a\pi} \bigr) \int_{-\infty}^\infty \frac{e^{ax}}{e^x + 1} \, dx.$$
The only singularity of the integrand inside the contour is a first order pole at z = ıπ. We use the residue theorem to evaluate the integral:
$$\int_C \frac{e^{az}}{e^z + 1} \, dz = ı2\pi \lim_{z \to ı\pi} \frac{(z - ı\pi) e^{az}}{e^z + 1} = ı2\pi \lim_{z \to ı\pi} \frac{a(z - ı\pi) e^{az} + e^{az}}{e^z} = -ı2\pi e^{ıa\pi}.$$
We equate the two results for the value of the contour integral:
$$\bigl( 1 - e^{ı2a\pi} \bigr) \int_{-\infty}^\infty \frac{e^{ax}}{e^x + 1} \, dx = -ı2\pi e^{ıa\pi}, \qquad \int_{-\infty}^\infty \frac{e^{ax}}{e^x + 1} \, dx = \frac{ı2\pi}{e^{ıa\pi} - e^{-ıa\pi}} = \frac{\pi}{\sin(\pi a)}.$$
Now we derive the value of
$$\int_{-\infty}^\infty \frac{\cosh(bx)}{\cosh x} \, dx.$$
First make the change of variables $x \to 2x$ in the previous result:
$$\int_{-\infty}^\infty \frac{2 e^{2ax}}{e^{2x} + 1} \, dx = \frac{\pi}{\sin(\pi a)}, \qquad \int_{-\infty}^\infty \frac{e^{(2a - 1)x}}{e^x + e^{-x}} \, dx = \frac{\pi}{2\sin(\pi a)}.$$
Now we set b = 2a − 1. Since $e^x + e^{-x} = 2\cosh x$,
$$\int_{-\infty}^\infty \frac{e^{bx}}{\cosh x} \, dx = \frac{\pi}{\sin(\pi(b + 1)/2)} = \frac{\pi}{\cos(\pi b/2)} \quad \text{for } -1 < b < 1.$$
Since the cosine is an even function, we also have
$$\int_{-\infty}^\infty \frac{e^{-bx}}{\cosh x} \, dx = \frac{\pi}{\cos(\pi b/2)} \quad \text{for } -1 < b < 1.$$
Adding these two equations and dividing by 2 yields the desired result:
$$\int_{-\infty}^\infty \frac{\cosh(bx)}{\cosh x} \, dx = \frac{\pi}{\cos(\pi b/2)} \quad \text{for } -1 < b < 1.$$
Solution 13.40
Real-Valued Parameters. For b = 0, the integral has the value π/a². If b is nonzero, then we can write the integral as
$$F(a, b) = \frac{1}{b^2} \int_0^\pi \frac{d\theta}{(a/b + \cos\theta)^2}.$$
We define the new parameter c = a/b and the function
$$G(c) = b^2 F(a, b) = \int_0^\pi \frac{d\theta}{(c + \cos\theta)^2}.$$
If $-1 \le c \le 1$ then the integrand has a double pole on the path of integration and the integral diverges; otherwise the integral exists. To evaluate the integral, we extend the range of integration to (0, 2π) and make the change of variables $z = e^{ı\theta}$ to integrate along the unit circle in the complex plane:
$$G(c) = \frac{1}{2} \int_0^{2\pi} \frac{d\theta}{(c + \cos\theta)^2}, \qquad \cos\theta = \frac{z + z^{-1}}{2}, \qquad d\theta = \frac{dz}{ız},$$
$$G(c) = \frac{1}{2} \int_C \frac{dz/(ız)}{(c + (z + z^{-1})/2)^2} = -ı2 \int_C \frac{z}{(z^2 + 2cz + 1)^2} \, dz = -ı2 \int_C \frac{z \, dz}{(z + c + \sqrt{c^2 - 1})^2 (z + c - \sqrt{c^2 - 1})^2}.$$
If c > 1, then $-c - \sqrt{c^2 - 1}$ is outside the unit circle and $-c + \sqrt{c^2 - 1}$ is inside. The integrand has a second order pole inside the path of integration. We evaluate the integral with the residue theorem:
$$G(c) = 4\pi \lim_{z \to -c + \sqrt{c^2 - 1}} \frac{d}{dz} \frac{z}{(z + c + \sqrt{c^2 - 1})^2} = 4\pi \lim_{z \to -c + \sqrt{c^2 - 1}} \frac{c + \sqrt{c^2 - 1} - z}{(z + c + \sqrt{c^2 - 1})^3} = 4\pi \frac{2c}{(2\sqrt{c^2 - 1})^3} = \frac{\pi c}{\sqrt{(c^2 - 1)^3}}.$$
If c < −1, then $-c - \sqrt{c^2 - 1}$ is inside the unit circle and $-c + \sqrt{c^2 - 1}$ is outside. The same computation at the other pole gives
$$G(c) = 4\pi \frac{2c}{(-2\sqrt{c^2 - 1})^3} = -\frac{\pi c}{\sqrt{(c^2 - 1)^3}}.$$
Thus we see that
$$G(c) = \begin{cases} \pi c / \sqrt{(c^2 - 1)^3} & \text{for } c > 1, \\ -\pi c / \sqrt{(c^2 - 1)^3} & \text{for } c < -1, \end{cases}$$
and G is divergent for $-1 \le c \le 1$. In terms of F(a, b), this is
$$F(a, b) = \begin{cases} a\pi / \sqrt{(a^2 - b^2)^3} & \text{for } a/b > 1, \\ -a\pi / \sqrt{(a^2 - b^2)^3} & \text{for } a/b < -1, \end{cases}$$
divergent for $-1 \le a/b \le 1$.
Complex-Valued Parameters. Consider
$$G(c) = \int_0^\pi \frac{d\theta}{(c + \cos\theta)^2}$$
for complex c. Except for real-valued c between −1 and 1, the integral converges uniformly. We can interchange differentiation and integration; the derivative of G(c) is
$$G'(c) = \int_0^\pi \frac{-2}{(c + \cos\theta)^3} \, d\theta.$$
Thus we see that G(c) is analytic in the complex plane with a cut on the real axis from −1 to 1. The value of the function on the positive real axis for c > 1 is
$$G(c) = \frac{\pi c}{\sqrt{(c^2 - 1)^3}}.$$
We use analytic continuation to determine G(c) for complex c. By inspection we see that G(c) is the branch of
$$\frac{\pi c}{(c^2 - 1)^{3/2}}$$
with a branch cut on the real axis from −1 to 1 which is real-valued and positive for real c > 1. Using $F(a, b) = G(c)/b^2$, we can determine F for complex-valued a and b.
Solution 13.41
First note that
$$\int_{-\infty}^\infty \frac{\cos x}{e^x + e^{-x}} \, dx = \int_{-\infty}^\infty \frac{e^{ıx}}{e^x + e^{-x}} \, dx$$
since $\sin x/(e^x + e^{-x})$ is an odd function. For the function
$$f(z) = \frac{e^{ız}}{e^z + e^{-z}}$$
we have
$$f(x + ı\pi) = \frac{e^{ıx - \pi}}{e^{x + ı\pi} + e^{-x - ı\pi}} = -e^{-\pi} \frac{e^{ıx}}{e^x + e^{-x}} = -e^{-\pi} f(x).$$
Thus we consider the integral
$$\int_C \frac{e^{ız}}{e^z + e^{-z}} \, dz$$
where C is the box contour with corners at ±R and ±R + ıπ. Writing the integrand as $\frac{e^{ız}}{2\cosh z}$, we see that it has first order poles at $z = ı\pi(n + 1/2)$; the only pole inside the path of integration is at z = ıπ/2. We evaluate the contour integral with the residue theorem:
$$\int_C \frac{e^{ız}}{e^z + e^{-z}} \, dz = ı2\pi \lim_{z \to ı\pi/2} \frac{(z - ı\pi/2) e^{ız}}{e^z + e^{-z}} = ı2\pi \lim_{z \to ı\pi/2} \frac{e^{ız} + ı(z - ı\pi/2) e^{ız}}{e^z - e^{-z}} = ı2\pi \frac{e^{-\pi/2}}{e^{ı\pi/2} - e^{-ı\pi/2}} = \pi e^{-\pi/2}.$$
The integrals along the vertical sides of the box vanish as $R \to \infty$:
$$\left| \int_{\pm R}^{\pm R + ı\pi} \frac{e^{ız}}{e^z + e^{-z}} \, dz \right| \le \pi \max_{y \in [0, \pi]} \left| \frac{1}{e^{\pm R + ıy} + e^{\mp R - ıy}} \right| \le \frac{\pi}{2\sinh R} \to 0 \quad \text{as } R \to \infty.$$
Taking the limit $R \to \infty$, and using $f(x + ı\pi) = -e^{-\pi} f(x)$ on the top of the box (traversed right to left), we have
$$\bigl( 1 + e^{-\pi} \bigr) \int_{-\infty}^\infty \frac{e^{ıx}}{e^x + e^{-x}} \, dx = \pi e^{-\pi/2}, \qquad \int_{-\infty}^\infty \frac{e^{ıx}}{e^x + e^{-x}} \, dx = \frac{\pi}{e^{\pi/2} + e^{-\pi/2}}.$$
Finally we have
$$\int_{-\infty}^\infty \frac{\cos x}{e^x + e^{-x}} \, dx = \frac{\pi}{e^{\pi/2} + e^{-\pi/2}}.$$
Definite Integrals Involving Sine and Cosine
Solution 13.42
1. To evaluate the integral we make the change of variables $z = e^{ı\theta}$; the path of integration in the complex plane is the positively oriented unit circle.
$$\int_{-\pi}^\pi \frac{d\theta}{1 + \sin^2\theta} = \int_C \frac{1}{1 - (z - z^{-1})^2/4} \frac{dz}{ız} = \int_C \frac{ı4z}{z^4 - 6z^2 + 1} \, dz = \int_C \frac{ı4z \, dz}{(z - 1 - \sqrt{2})(z - 1 + \sqrt{2})(z + 1 - \sqrt{2})(z + 1 + \sqrt{2})}$$
There are first order poles at $z = \pm 1 \pm \sqrt{2}$; the poles at $z = -1 + \sqrt{2}$ and $z = 1 - \sqrt{2}$ are inside the path of integration. We evaluate the integral with Cauchy's Residue Formula:
$$\int_C \frac{ı4z}{z^4 - 6z^2 + 1} \, dz = -8\pi \left( \left. \frac{z}{(z - 1 - \sqrt{2})(z - 1 + \sqrt{2})(z + 1 + \sqrt{2})} \right|_{z = -1 + \sqrt{2}} + \left. \frac{z}{(z - 1 - \sqrt{2})(z + 1 - \sqrt{2})(z + 1 + \sqrt{2})} \right|_{z = 1 - \sqrt{2}} \right) = -8\pi \left( -\frac{1}{8\sqrt{2}} - \frac{1}{8\sqrt{2}} \right) = \sqrt{2}\,\pi.$$
2. First we use symmetry to expand the domain of integration:
$$\int_0^{\pi/2} \sin^4\theta \, d\theta = \frac{1}{4} \int_0^{2\pi} \sin^4\theta \, d\theta.$$
Next we make the change of variables $z = e^{ı\theta}$ and evaluate with the residue theorem:
$$\frac{1}{4} \int_0^{2\pi} \sin^4\theta \, d\theta = \frac{1}{4} \int_C \frac{1}{16} \left( z - \frac{1}{z} \right)^4 \frac{dz}{ız} = \frac{-ı}{64} \int_C \frac{(z^2 - 1)^4}{z^5} \, dz = \frac{-ı}{64} \int_C \left( z^3 - 4z + \frac{6}{z} - \frac{4}{z^3} + \frac{1}{z^5} \right) dz = ı2\pi \, \frac{-ı}{64} \, 6 = \frac{3\pi}{16}.$$
Solution 13.43
1. Let C be the positively oriented unit circle about the origin, parameterized by $z = e^{ı\theta}$, $dz = ı e^{ı\theta} \, d\theta$, $\theta \in (0, 2\pi)$. We write sin θ and the differential dθ in terms of z and evaluate the integral with the Residue theorem:
$$\int_0^{2\pi} \frac{d\theta}{2 + \sin\theta} = \int_C \frac{1}{2 + (z - 1/z)/(ı2)} \frac{dz}{ız} = \int_C \frac{2 \, dz}{z^2 + ı4z - 1} = \int_C \frac{2 \, dz}{\bigl( z + ı(2 + \sqrt{3}) \bigr)\bigl( z + ı(2 - \sqrt{3}) \bigr)} = ı2\pi \frac{2}{ı2\sqrt{3}} = \frac{2\pi}{\sqrt{3}}.$$
2. First consider the case a = 0:
$$\int_{-\pi}^\pi \cos(n\theta) \, d\theta = \begin{cases} 0 & \text{for } n \in \mathbb{Z}^+, \\ 2\pi & \text{for } n = 0. \end{cases}$$
Now we consider $|a| < 1$, $a \neq 0$. Since $\frac{\sin(n\theta)}{1 - 2a\cos\theta + a^2}$ is an even–odd split with odd sine part,
$$\int_{-\pi}^\pi \frac{\cos(n\theta)}{1 - 2a\cos\theta + a^2} \, d\theta = \int_{-\pi}^\pi \frac{e^{ın\theta}}{1 - 2a\cos\theta + a^2} \, d\theta.$$
Let C be the positively oriented unit circle about the origin, parameterized by $z = e^{ı\theta}$, $\theta \in (-\pi, \pi)$. We write the integrand and the differential dθ in terms of z and evaluate with the Residue theorem:
$$\int_{-\pi}^\pi \frac{e^{ın\theta}}{1 - 2a\cos\theta + a^2} \, d\theta = \int_C \frac{z^n}{1 - a(z + 1/z) + a^2} \frac{dz}{ız} = \frac{ı}{a} \int_C \frac{z^n \, dz}{(z - a)(z - 1/a)} = ı2\pi \frac{ı}{a} \operatorname{Res}\left( \frac{z^n}{(z - a)(z - 1/a)}, z = a \right) = -\frac{2\pi}{a} \frac{a^n}{a - 1/a} = \frac{2\pi a^n}{1 - a^2}.$$
We write the value of the integral for $|a| < 1$ and $n \in \mathbb{Z}^{0+}$:
$$\int_{-\pi}^\pi \frac{\cos(n\theta)}{1 - 2a\cos\theta + a^2} \, d\theta = \begin{cases} 2\pi & \text{for } a = 0, \; n = 0, \\[0.5ex] \dfrac{2\pi a^n}{1 - a^2} & \text{otherwise.} \end{cases}$$
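A numerical spot check of this formula (an aside, not from the text; a = 1/2 and n = 3 are arbitrary choices):

```python
import numpy as np
from scipy.integrate import quad

a, n = 0.5, 3
val, _ = quad(lambda t: np.cos(n * t) / (1 - 2 * a * np.cos(t) + a**2),
              -np.pi, np.pi)
print(val, 2 * np.pi * a**n / (1 - a**2))  # both ~1.0471975 (= pi/3)
```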
Solution 13.44
Convergence. We consider the integral
$$I(\alpha) = \mathrm{PV}\!\int_0^\pi \frac{\cos(n\theta)}{\cos\theta - \cos\alpha} \, d\theta = \pi \frac{\sin(n\alpha)}{\sin\alpha}.$$
We assume that α is real-valued. If α is an integer multiple of π, then the integrand has a second order pole on the path of integration and the principal value of the integral does not exist. For other real α, the integrand has a first order pole on the path of integration; the integral diverges, but its principal value exists.
Contour Integration. We will evaluate the integral for real α that is not an integer multiple of π:
$$I(\alpha) = \frac{1}{2} \mathrm{PV}\!\int_0^{2\pi} \frac{\cos(n\theta)}{\cos\theta - \cos\alpha} \, d\theta = \frac{1}{2} \mathrm{PV}\!\int_0^{2\pi} \frac{e^{ın\theta}}{\cos\theta - \cos\alpha} \, d\theta.$$
We make the change of variables $z = e^{ı\theta}$:
$$I(\alpha) = \frac{1}{2} \mathrm{PV}\!\int_C \frac{z^n}{(z + 1/z)/2 - \cos\alpha} \frac{dz}{ız} = \mathrm{PV}\!\int_C \frac{-ız^n}{(z - e^{ı\alpha})(z - e^{-ı\alpha})} \, dz.$$
Both poles lie on the path of integration, so by the residue theorem for principal values,
$$I(\alpha) = ı\pi(-ı) \left( \operatorname{Res}\left( \frac{z^n}{(z - e^{ı\alpha})(z - e^{-ı\alpha})}, e^{ı\alpha} \right) + \operatorname{Res}\left( \frac{z^n}{(z - e^{ı\alpha})(z - e^{-ı\alpha})}, e^{-ı\alpha} \right) \right) = \pi \left( \frac{e^{ın\alpha}}{e^{ı\alpha} - e^{-ı\alpha}} + \frac{e^{-ın\alpha}}{e^{-ı\alpha} - e^{ı\alpha}} \right) = \pi \frac{e^{ın\alpha} - e^{-ın\alpha}}{e^{ı\alpha} - e^{-ı\alpha}},$$
$$I(\alpha) = \mathrm{PV}\!\int_0^\pi \frac{\cos(n\theta)}{\cos\theta - \cos\alpha} \, d\theta = \pi \frac{\sin(n\alpha)}{\sin\alpha}.$$
Solution 13.45
Consider the integral
$$\int_0^1 \frac{x^2}{(1 + x^2)\sqrt{1 - x^2}} \, dx.$$
We make the change of variables $x = \sin\xi$ to obtain
$$\int_0^{\pi/2} \frac{\sin^2\xi}{1 + \sin^2\xi} \, d\xi = \int_0^{\pi/2} \frac{1 - \cos(2\xi)}{3 - \cos(2\xi)} \, d\xi = \frac{1}{4} \int_0^{2\pi} \frac{1 - \cos\xi}{3 - \cos\xi} \, d\xi.$$
Now we make the change of variables $z = e^{ı\xi}$ to obtain a contour integral on the unit circle:
$$\frac{1}{4} \int_C \frac{1 - (z + 1/z)/2}{3 - (z + 1/z)/2} \frac{-ı \, dz}{z} = \frac{-ı}{4} \int_C \frac{(z - 1)^2 \, dz}{z(z - 3 + 2\sqrt{2})(z - 3 - 2\sqrt{2})}.$$
There are two first order poles inside the contour, at z = 0 and $z = 3 - 2\sqrt{2}$. The value of the integral is
$$ı2\pi \frac{-ı}{4} \left( \lim_{z \to 0} \frac{(z - 1)^2}{(z - 3 + 2\sqrt{2})(z - 3 - 2\sqrt{2})} + \lim_{z \to 3 - 2\sqrt{2}} \frac{(z - 1)^2}{z(z - 3 - 2\sqrt{2})} \right) = \frac{\pi}{2} \left( 1 - \frac{1}{\sqrt{2}} \right),$$
$$\int_0^1 \frac{x^2}{(1 + x^2)\sqrt{1 - x^2}} \, dx = \frac{(2 - \sqrt{2})\pi}{4}.$$
Infinite Sums
Solution 13.46
From Result 13.10.1 we see that the sum of the residues of $\pi \cot(\pi z)/z^4$ is zero. This function has simple poles at the nonzero integers z = n with residue $1/n^4$, and a fifth order pole at z = 0. Finding the residue at z = 0 with the formula
$$\frac{1}{4!} \lim_{z \to 0} \frac{d^4}{dz^4} \bigl( \pi z \cot(\pi z) \bigr)$$
would be a real pain: after doing the differentiation, we would have to apply L'Hospital's rule multiple times. A better way of finding the residue is with the Laurent series expansion of the function. Note that
$$\frac{1}{\sin(\pi z)} = \frac{1}{\pi z - (\pi z)^3/6 + (\pi z)^5/120 - \cdots} = \frac{1}{\pi z} \left( 1 + \left( \frac{\pi^2}{6} z^2 - \frac{\pi^4}{120} z^4 + \cdots \right) + \left( \frac{\pi^2}{6} z^2 - \frac{\pi^4}{120} z^4 + \cdots \right)^2 + \cdots \right).$$
Now we find the $z^{-1}$ term in the Laurent series expansion of $\pi \cot(\pi z)/z^4$:
$$\frac{\pi \cos(\pi z)}{z^4 \sin(\pi z)} = \frac{1}{z^5} \left( 1 - \frac{(\pi z)^2}{2} + \frac{(\pi z)^4}{24} - \cdots \right) \left( 1 + \frac{(\pi z)^2}{6} + \left( \frac{\pi^4}{36} - \frac{\pi^4}{120} \right) z^4 + \cdots \right) = \frac{1}{z^5} \left( \cdots + \left( -\frac{\pi^4}{120} + \frac{\pi^4}{36} - \frac{\pi^4}{12} + \frac{\pi^4}{24} \right) z^4 + \cdots \right) = \cdots - \frac{\pi^4}{45} \frac{1}{z} + \cdots.$$
Thus the residue at z = 0 is $-\pi^4/45$. Summing the residues,
$$\sum_{n=-\infty}^{-1} \frac{1}{n^4} - \frac{\pi^4}{45} + \sum_{n=1}^\infty \frac{1}{n^4} = 0, \qquad \sum_{n=1}^\infty \frac{1}{n^4} = \frac{\pi^4}{90}.$$
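A numerical aside (not from the text): the partial sums converge quickly to $\pi^4/90$.

```python
import numpy as np

n = np.arange(1, 200001)
print(np.sum(1.0 / n**4), np.pi**4 / 90)  # both ~1.0823232337
```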
Solution 13.47
For this problem we will use the following result: if
$$\lim_{|z| \to \infty} |z f(z)| = 0,$$
then the sum of all the residues of $\pi \cot(\pi z) f(z)$ is zero. If in addition f(z) is analytic at $z = n \in \mathbb{Z}$, then
$$\sum_{n=-\infty}^\infty f(n) = -\bigl( \text{sum of the residues of } \pi \cot(\pi z) f(z) \text{ at the poles of } f(z) \bigr).$$
We assume that α is not an integer; otherwise the sum is not defined. Consider $f(z) = 1/(z^2 - \alpha^2)$. Since
$$\lim_{|z| \to \infty} \left| z \, \frac{1}{z^2 - \alpha^2} \right| = 0$$
and f(z) is analytic at $z = n$, $n \in \mathbb{Z}$, we have
$$\sum_{n=-\infty}^\infty \frac{1}{n^2 - \alpha^2} = -\bigl( \text{sum of the residues of } \pi \cot(\pi z) f(z) \text{ at the poles of } f(z) \bigr).$$
f(z) has first order poles at $z = \pm\alpha$:
$$\sum_{n=-\infty}^\infty \frac{1}{n^2 - \alpha^2} = -\operatorname{Res}\left( \frac{\pi \cot(\pi z)}{z^2 - \alpha^2}, \alpha \right) - \operatorname{Res}\left( \frac{\pi \cot(\pi z)}{z^2 - \alpha^2}, -\alpha \right) = -\lim_{z \to \alpha} \frac{\pi \cot(\pi z)}{z + \alpha} - \lim_{z \to -\alpha} \frac{\pi \cot(\pi z)}{z - \alpha} = -\frac{\pi \cot(\pi\alpha)}{2\alpha} - \frac{\pi \cot(-\pi\alpha)}{-2\alpha},$$
$$\sum_{n=-\infty}^\infty \frac{1}{n^2 - \alpha^2} = -\frac{\pi \cot(\pi\alpha)}{\alpha}.$$
Chapter 14

First Order Differential Equations

Don't show me your technique. Show me your heart.
-Tetsuyasu Uekuma

14.1 Notation

A differential equation is an equation involving a function, its derivatives, and independent variables. If there is only one independent variable, then it is an ordinary differential equation. Identities such as
$$\frac{d}{dx} f^2(x) = 2 f(x) f'(x) \quad \text{and} \quad \frac{dy}{dx} \frac{dx}{dy} = 1$$
are not differential equations.
The order of a differential equation is the order of the highest derivative. The following equations for y(x) are first, second and third order, respectively:
• $y' = x y^2$
• $y'' + 3x y' + 2y = x^2$
• $y''' = y'' y$
The degree of a differential equation is the highest power of the highest derivative in the equation. The following equations are first, second and third degree, respectively:
• $y' - 3y^2 = \sin x$
• $(y'')^2 + 2x \cos y = e^x$
• $(y')^3 + y^5 = 0$
An equation is said to be linear if it is linear in the dependent variable.
• $y'' \cos x + x^2 y = 0$ is a linear differential equation.
• $y' + x y^2 = 0$ is a nonlinear differential equation.
A differential equation is homogeneous if it has no terms that are functions of the independent variable alone. Thus an inhomogeneous equation is one in which there are terms that are functions of the independent variables alone.
• $y'' + x y' + y = 0$ is a homogeneous equation.
• $y' + y + x^2 = 0$ is an inhomogeneous equation.
A first order differential equation may be written in terms of differentials. Recall that for the function y(x) the differential dy is defined $dy = y'(x) \, dx$. Thus the differential equations
$$y' = x^2 y \quad \text{and} \quad y' + x y^2 = \sin(x)$$
can be denoted
$$dy = x^2 y \, dx \quad \text{and} \quad dy + x y^2 \, dx = \sin(x) \, dx.$$
A solution of a differential equation is a function which, when substituted into the equation, yields an identity. For example, $y = x \ln|x|$ is a solution of
$$y' - \frac{y}{x} = 1.$$
We verify this by substituting it into the differential equation:
$$\ln|x| + 1 - \ln|x| = 1.$$
We can also verify that $y = c e^x$ is a solution of $y' - y = 0$ for any value of the parameter c: $c e^x - c e^x = 0$.

14.2 Example Problems

In this section we will discuss physical and geometrical problems that lead to first order differential equations.

14.2.1 Growth and Decay

Example 14.2.1 Consider a culture of bacteria in which each bacterium divides once per hour. Let $n(t) \in \mathbb{N}$ denote the population, let t denote the time in hours, and let $n_0$ be the population at time t = 0. The population doubles every hour, so for integer t the population is $n_0 2^t$. Figure 14.1 shows two possible populations when there is initially a single bacterium: in the first plot, each of the bacteria divides at times t = m for $m \in \mathbb{N}$; in the second plot, they divide at times t = m − 1/2. For both plots the population is $2^t$ for integer t.

Figure 14.1: The population of bacteria.

We model this problem by considering a continuous population $y(t) \in \mathbb{R}$ which approximates the discrete population. In Figure 14.2 we first show the population when there are initially 8 bacteria; the divisions of the bacteria are spread out over each one-hour interval. For integer t, the population is $8 \cdot 2^t$. Next we show the population with a plot of the continuous function $y(t) = 8 \cdot 2^t$. We see that y(t) is a reasonable approximation of the discrete population.

Figure 14.2: The discrete population of bacteria and a continuous population approximation.

In the discrete problem, the growth of the population is proportional to its number: the population doubles every hour. For the continuous problem, we assume that this is true for y(t). We write this as an equation:
$$y'(t) = \alpha y(t).$$
That is, the rate of change y'(t) in the population is proportional to the population y(t), with constant of proportionality α. We specify the population at time t = 0 with the initial condition y(0) = n₀. Note that
$$y(t) = n_0 e^{\alpha t}$$
satisfies the problem
$$y'(t) = \alpha y(t), \quad y(0) = n_0.$$
For our bacteria example, α = ln 2.

Result 14.2.1 A quantity y(t) whose growth or decay is proportional to y(t) is modelled by the problem
$$y'(t) = \alpha y(t), \quad y(t_0) = y_0.$$
Here we assume that the quantity is known at time $t = t_0$. $e^\alpha$ is the factor by which the quantity grows/decays in unit time. The solution of this problem is
$$y(t) = y_0 e^{\alpha(t - t_0)}.$$
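The growth model in Result 14.2.1 is easy to check numerically. Below is a minimal sketch (not from the text) that integrates $y' = \alpha y$ with scipy and compares against the exact solution; the choice α = ln 2 reproduces the doubling bacteria population.

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha = np.log(2.0)   # doubling once per unit time
sol = solve_ivp(lambda t, y: alpha * y, (0.0, 4.0), [1.0],
                t_eval=np.linspace(0.0, 4.0, 5), rtol=1e-8)
print(sol.y[0])       # ~[1, 2, 4, 8, 16]
print(2.0 ** sol.t)   # the exact solution y = 2**t
```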
  • 495. In the discrete problem, the growth of the population is proportional to its number; the popula- tion doubles every hour. For the continuous problem, we assume that this is true for y(t). We write this as an equation: y (t) = αy(t). That is, the rate of change y (t) in the population is proportional to the population y(t), (with constant of proportionality α). We specify the population at time t = 0 with the initial condition: y(0) = n0. Note that y(t) = n0 eαt satisfies the problem: y (t) = αy(t), y(0) = n0. For our bacteria example, α = ln 2. Result 14.2.1 A quantity y(t) whose growth or decay is proportional to y(t) is modelled by the problem: y (t) = αy(t), y(t0) = y0. Here we assume that the quantity is known at time t = t0. eα is the factor by which the quantity grows/decays in unit time. The solution of this problem is y(t) = y0 eα(t−t0) . 14.3 One Parameter Families of Functions Consider the equation: F(x, y(x), c) = 0, (14.1) which implicitly defines a one-parameter family of functions y(x; c). Here y is a function of the variable x and the parameter c. For simplicity, we will write y(x) and not explicitly show the parameter dependence. Example 14.3.1 The equation y = cx defines family of lines with slope c, passing through the origin. The equation x2 + y2 = c2 defines circles of radius c, centered at the origin. Consider a chicken dropped from a height h. The elevation y of the chicken at time t after its release is y(t) = h − gt2 , where g is the acceleration due to gravity. This is family of functions for the parameter h. It turns out that the general solution of any first order differential equation is a one-parameter family of functions. This is not easy to prove. However, it is easy to verify the converse. We differentiate Equation 14.1 with respect to x. Fx + Fyy = 0 (We assume that F has a non-trivial dependence on y, that is Fy = 0.) This gives us two equa- tions involving the independent variable x, the dependent variable y(x) and its derivative and the parameter c. If we algebraically eliminate c between the two equations, the eliminant will be a first order differential equation for y(x). Thus we see that every one-parameter family of functions y(x) satisfies a first order differential equation. This y(x) is the primitive of the differential equation. Later we will discuss why y(x) is the general solution of the differential equation. Example 14.3.2 Consider the family of circles of radius c centered about the origin. x2 + y2 = c2 475
  • 496. x y y’ = −x/y Figure 14.3: A circle and its tangent. Differentiating this yields: 2x + 2yy = 0. It is trivial to eliminate the parameter and obtain a differential equation for the family of circles. x + yy = 0 We can see the geometric meaning in this equation by writing it in the form: y = − x y . For a point on the circle, the slope of the tangent y is the negative of the cotangent of the angle x/y. (See Figure 14.3.) Example 14.3.3 Consider the one-parameter family of functions: y(x) = f(x) + cg(x), where f(x) and g(x) are known functions. The derivative is y = f + cg . We eliminate the parameter. gy − g y = gf − g f y − g g y = f − g f g Thus we see that y(x) = f(x) + cg(x) satisfies a first order linear differential equation. Later we will prove the converse: the general solution of a first order linear differential equation has the form: y(x) = f(x) + cg(x). We have shown that every one-parameter family of functions satisfies a first order differential equation. We do not prove it here, but the converse is true as well. Result 14.3.1 Every first order differential equation has a one-parameter family of solutions y(x) defined by an equation of the form: F(x, y(x); c) = 0. This y(x) is called the general solution. If the equation is linear then the general solution expresses the totality of solutions of the differential equation. If the equation is nonlinear, there may be other special singular solutions, which do not depend on a parameter. 476
This is strictly an existence result. It does not say that the general solution of a first order differential equation can be determined by some method, it just says that it exists. There is no method for solving the general first order differential equation. However, there are some special forms that are soluble. We will devote the rest of this chapter to studying these forms.

14.4 Integrable Forms

In this section we will introduce a few forms of differential equations that we may solve through integration.

14.4.1 Separable Equations

Any differential equation that can be written in the form

P(x) + Q(y)y' = 0

is a separable equation (because the dependent and independent variables are separated). We can obtain an implicit solution by integrating with respect to x.

∫ P(x) dx + ∫ Q(y) (dy/dx) dx = c
∫ P(x) dx + ∫ Q(y) dy = c

Result 14.4.1 The separable equation P(x) + Q(y)y' = 0 may be solved by integrating with respect to x. The general solution is

∫ P(x) dx + ∫ Q(y) dy = c.

Example 14.4.1 Consider the differential equation y' = xy^2. We separate the dependent and independent variables and integrate to find the solution.

dy/dx = xy^2
y^{−2} dy = x dx
∫ y^{−2} dy = ∫ x dx + c
−y^{−1} = x^2/2 + c
y = −1/(x^2/2 + c)

Example 14.4.2 The equation y' = y − y^2 is separable.

y'/(y − y^2) = 1

We expand in partial fractions and integrate.

(1/y − 1/(y − 1)) y' = 1
ln |y| − ln |y − 1| = x + c
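Before continuing, the partial-fraction integration just performed can be checked symbolically. A minimal sketch with SymPy (our tool choice, not the text's):

```python
from sympy import symbols, Function, log, simplify

x = symbols('x')
y = Function('y')

# Example 14.4.2: d/dx [ln y - ln(y - 1)] should equal y'/(y - y**2),
# the left side of the separated equation.
lhs = (log(y(x)) - log(y(x) - 1)).diff(x)
rhs = y(x).diff(x) / (y(x) - y(x)**2)
print(simplify(lhs - rhs))  # 0
```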
  • 498. We have an implicit equation for y(x). Now we solve for y(x). ln y y − 1 = x + c y y − 1 = ex+c y y − 1 = ± ex+c y y − 1 = c ex 1 y = c ex c ex −1 y = 1 1 + c ex 14.4.2 Exact Equations Any first order ordinary differential equation of the first degree can be written as the total differ- ential equation, P(x, y) dx + Q(x, y) dy = 0. If this equation can be integrated directly, that is if there is a primitive, u(x, y), such that du = P dx + Q dy, then this equation is called exact. The (implicit) solution of the differential equation is u(x, y) = c, where c is an arbitrary constant. Since the differential of a function, u(x, y), is du ≡ ∂u ∂x dx + ∂u ∂y dy, P and Q are the partial derivatives of u: P(x, y) = ∂u ∂x , Q(x, y) = ∂u ∂y . In an alternate notation, the differential equation P(x, y) + Q(x, y) dy dx = 0, (14.2) is exact if there is a primitive u(x, y) such that du dx ≡ ∂u ∂x + ∂u ∂y dy dx = P(x, y) + Q(x, y) dy dx . The solution of the differential equation is u(x, y) = c. Example 14.4.3 x + y dy dx = 0 is an exact differential equation since d dx 1 2 (x2 + y2 ) = x + y dy dx 478
The solution of the differential equation is

(1/2)(x^2 + y^2) = c.

Example 14.4.4 Let f(x) and g(x) be known functions. g(x)y' + g'(x)y = f(x) is an exact differential equation since

d/dx (g(x)y(x)) = gy' + g'y.

The solution of the differential equation is

g(x)y(x) = ∫ f(x) dx + c
y(x) = (1/g(x)) ∫ f(x) dx + c/g(x).

A necessary condition for exactness. The solution of the exact equation P + Qy' = 0 is u = c where u is the primitive of the equation, du/dx = P + Qy'. At present the only method we have for determining the primitive is guessing. This is fine for simple equations, but for more difficult cases we would like a method more concrete than divine inspiration. As a first step toward this goal we determine a criterion for determining if an equation is exact.

Consider the exact equation, P + Qy' = 0, with primitive u, where we assume that the functions P and Q are continuously differentiable. Since the mixed partial derivatives of u are equal,

∂²u/∂x∂y = ∂²u/∂y∂x,

a necessary condition for exactness is

∂P/∂y = ∂Q/∂x.

A sufficient condition for exactness. This necessary condition for exactness is also a sufficient condition. We demonstrate this by deriving the general solution of (14.2). Assume that P + Qy' = 0 is not necessarily exact, but satisfies the condition Py = Qx. If the equation has a primitive,

du/dx ≡ ∂u/∂x + (∂u/∂y)(dy/dx) = P(x, y) + Q(x, y)(dy/dx),

then it satisfies

∂u/∂x = P, ∂u/∂y = Q. (14.3)

Integrating the first equation of (14.3), we see that the primitive has the form

u(x, y) = ∫_{x0}^{x} P(ξ, y) dξ + f(y),

for some f(y). Now we substitute this form into the second equation of (14.3).

∂u/∂y = Q(x, y)
∫_{x0}^{x} P_y(ξ, y) dξ + f'(y) = Q(x, y)
  • 500. Now we use the condition Py = Qx. x x0 Qx(ξ, y) dξ + f (y) = Q(x, y) Q(x, y) − Q(x0, y) + f (y) = Q(x, y) f (y) = Q(x0, y) f(y) = y y0 Q(x0, ψ) dψ Thus we see that u = x x0 P(ξ, y) dξ + y y0 Q(x0, ψ) dψ is a primitive of the derivative; the equation is exact. The solution of the differential equation is x x0 P(ξ, y) dξ + y y0 Q(x0, ψ) dψ = c. Even though there are three arbitrary constants: x0, y0 and c, the solution is a one-parameter family. This is because changing x0 or y0 only changes the left side by an additive constant. Result 14.4.2 Any first order differential equation of the first degree can be written in the form P(x, y) + Q(x, y) dy dx = 0. This equation is exact if and only if Py = Qx. In this case the solution of the differential equation is given by x x0 P(ξ, y) dξ + y y0 Q(x0, ψ) dψ = c. Exercise 14.1 Solve the following differential equations by inspection. That is, group terms into exact derivatives and then integrate. f(x) and g(x) are known functions. 1. y (x) y(x) = f(x) 2. yα (x)y (x) = f(x) 3. y cos x + y tan x cos x = cos x Hint, Solution 14.4.3 Homogeneous Coefficient Equations Homogeneous coefficient, first order differential equations form another class of soluble equations. We will find that a change of dependent variable will make such equations separable or we can determine an integrating factor that will make such equations exact. First we define homogeneous functions. 480
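Before developing the machinery of homogeneous functions, here is a mechanical check of the exactness criterion Py = Qx and of the primitive formula from Result 14.4.2, sketched with SymPy (our tool choice; the base point x0 = y0 = 0 is our arbitrary selection):

```python
from sympy import symbols, diff, integrate, simplify

x, y, xi, psi = symbols('x y xi psi')

# Example 14.4.3: x + y*y' = 0, so P = x and Q = y.
P, Q = x, y
print(simplify(diff(P, y) - diff(Q, x)) == 0)  # True: the equation is exact

# Primitive from Result 14.4.2 with base point (x0, y0) = (0, 0):
# u = Int_{x0}^{x} P(xi, y) dxi + Int_{y0}^{y} Q(x0, psi) dpsi
u = (integrate(P.subs(x, xi), (xi, 0, x))
     + integrate(Q.subs({x: 0, y: psi}), (psi, 0, y)))
print(u)  # x**2/2 + y**2/2, recovering the solution (x**2 + y**2)/2 = c
```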
  • 501. Euler’s Theorem on Homogeneous Functions. The function F(x, y) is homogeneous of degree n if F(λx, λy) = λn F(x, y). From this definition we see that F(x, y) = xn F 1, y x . (Just formally substitute 1/x for λ.) For example, xy2 , x2 y + 2y3 x + y , x cos(y/x) are homogeneous functions of orders 3, 2 and 1, respectively. Euler’s theorem for a homogeneous function of order n is: xFx + yFy = nF. To prove this, we define ξ = λx, ψ = λy. From the definition of homogeneous functions, we have F(ξ, ψ) = λn F(x, y). We differentiate this equation with respect to λ. ∂F(ξ, ψ) ∂ξ ∂ξ ∂λ + ∂F(ξ, ψ) ∂ψ ∂ψ ∂λ = nλn−1 F(x, y) xFξ + yFψ = nλn−1 F(x, y) Setting λ = 1, (and hence ξ = x, ψ = y), proves Euler’s theorem. Result 14.4.3 Euler’s Theorem on Homogeneous Functions. If F(x, y) is a homogeneous function of degree n, then xFx + yFy = nF. Homogeneous Coefficient Differential Equations. If the coefficient functions P(x, y) and Q(x, y) are homogeneous of degree n then the differential equation, P(x, y) + Q(x, y) dy dx = 0, (14.4) is called a homogeneous coefficient equation. They are often referred to simply as homogeneous equations. Transformation to a Separable Equation. We can write the homogeneous equation in the form, xn P 1, y x + xn Q 1, y x dy dx = 0, P 1, y x + Q 1, y x dy dx = 0. This suggests the change of dependent variable u(x) = y(x) x . P(1, u) + Q(1, u) u + x du dx = 0 481
This equation is separable.

P(1, u) + uQ(1, u) + xQ(1, u) du/dx = 0
1/x + (Q(1, u)/(P(1, u) + uQ(1, u))) du/dx = 0
ln |x| + ∫ 1/(u + P(1, u)/Q(1, u)) du = c

By substituting ln |c| for c, we can write this in a simpler form.

∫ 1/(u + P(1, u)/Q(1, u)) du = ln |c/x|.

Integrating Factor. One can show that

µ(x, y) = 1/(xP(x, y) + yQ(x, y))

is an integrating factor for the Equation 14.4. The proof of this is left as an exercise for the reader. (See Exercise 14.2.)

Result 14.4.4 Homogeneous Coefficient Differential Equations. If P(x, y) and Q(x, y) are homogeneous functions of degree n, then the equation

P(x, y) + Q(x, y) dy/dx = 0

is made separable by the change of dependent variable u(x) = y(x)/x. The solution is determined by

∫ 1/(u + P(1, u)/Q(1, u)) du = ln |c/x|.

Alternatively, the homogeneous equation can be made exact with the integrating factor

µ(x, y) = 1/(xP(x, y) + yQ(x, y)).

Example 14.4.5 Consider the homogeneous coefficient equation

x^2 − y^2 + xy dy/dx = 0.

The solution for u(x) = y(x)/x is determined by

∫ 1/(u + (1 − u^2)/u) du = ln |c/x|
∫ u du = ln |c/x|
(1/2)u^2 = ln |c/x|
u = ±√(2 ln |c/x|)
Thus the solution of the differential equation is

y = ±x√(2 ln |c/x|).

Exercise 14.2
Show that

µ(x, y) = 1/(xP(x, y) + yQ(x, y))

is an integrating factor for the homogeneous equation,

P(x, y) + Q(x, y) dy/dx = 0.

Hint, Solution

Exercise 14.3 (mathematica/ode/first order/exact.nb)
Find the general solution of the equation

dy/dt = 2(y/t) + (y/t)^2.

Hint, Solution

14.5 The First Order, Linear Differential Equation

14.5.1 Homogeneous Equations

The first order, linear, homogeneous equation has the form

dy/dx + p(x)y = 0.

Note that if we can find one solution, then any constant times that solution also satisfies the equation. In fact, all the solutions of this equation differ only by multiplicative constants. We can solve any equation of this type because it is separable.

y'/y = −p(x)
ln |y| = −∫ p(x) dx + c
y = ± e^{−∫ p(x) dx + c}
y = c e^{−∫ p(x) dx}

Result 14.5.1 First Order, Linear Homogeneous Differential Equations. The first order, linear, homogeneous differential equation,

dy/dx + p(x)y = 0,

has the solution

y = c e^{−∫ p(x) dx}. (14.5)

The solutions differ by multiplicative constants.
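Result 14.5.1 is easy to exercise mechanically. A minimal SymPy sketch (the choice p(x) = 2x is ours, purely for illustration):

```python
from sympy import Function, Eq, dsolve, symbols

x = symbols('x')
y = Function('y')

# y' + p(x)*y = 0 with p(x) = 2*x; Result 14.5.1 predicts y = c*exp(-x**2).
sol = dsolve(Eq(y(x).diff(x) + 2*x*y(x), 0), y(x))
print(sol)  # Eq(y(x), C1*exp(-x**2))
```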
  • 504. Example 14.5.1 Consider the equation dy dx + 1 x y = 0. We use Equation 14.5 to determine the solution. y(x) = c e− R 1/x dx , for x = 0 y(x) = c e− ln |x| y(x) = c |x| y(x) = c x 14.5.2 Inhomogeneous Equations The first order, linear, inhomogeneous differential equation has the form dy dx + p(x)y = f(x). (14.6) This equation is not separable. Note that it is similar to the exact equation we solved in Exam- ple 14.4.4, g(x)y (x) + g (x)y(x) = f(x). To solve Equation 14.6, we multiply by an integrating factor. Multiplying a differential equation by its integrating factor changes it to an exact equation. Multiplying Equation 14.6 by the function, I(x), yields, I(x) dy dx + p(x)I(x)y = f(x)I(x). In order that I(x) be an integrating factor, it must satisfy d dx I(x) = p(x)I(x). This is a first order, linear, homogeneous equation with the solution I(x) = c e R p(x) dx . This is an integrating factor for any constant c. For simplicity we will choose c = 1. To solve Equation 14.6 we multiply by the integrating factor and integrate. Let P(x) = p(x) dx. eP (x) dy dx + p(x) eP (x) y = eP (x) f(x) d dx eP (x) y = eP (x) f(x) y = e−P (x) eP (x) f(x) dx + c e−P (x) y ≡ yp + c yh Note that the general solution is the sum of a particular solution, yp, that satisfies y +p(x)y = f(x), and an arbitrary constant times a homogeneous solution, yh, that satisfies y + p(x)y = 0. Example 14.5.2 Consider the differential equation y + 1 x y = x2 , x > 0. 484
First we find the integrating factor.

I(x) = exp(∫ (1/x) dx) = e^{ln x} = x

We multiply by the integrating factor and integrate.

d/dx (xy) = x^3
xy = (1/4)x^4 + c
y = (1/4)x^3 + c/x.

The particular and homogeneous solutions are

yp = (1/4)x^3 and yh = 1/x.

Note that the general solution to the differential equation is a one-parameter family of functions. The general solution is plotted in Figure 14.4 for various values of c.

Figure 14.4: Solutions to y' + y/x = x^2.

Exercise 14.4 (mathematica/ode/first order/linear.nb)
Solve the differential equation

y' − (1/x)y = x^α, x > 0.

Hint, Solution

14.5.3 Variation of Parameters.

We could also have found the particular solution with the method of variation of parameters. Although we can solve first order equations without this method, it will become important in the study of higher order inhomogeneous equations. We begin by assuming that the particular solution has the form yp = u(x)yh(x) where u(x) is an unknown function. We substitute this into the differential equation.

d/dx yp + p(x)yp = f(x)
d/dx (uyh) + p(x)uyh = f(x)
u'yh + u(yh' + p(x)yh) = f(x)
  • 506. Since yh is a homogeneous solution, yh + p(x)yh = 0. u = f(x) yh u = f(x) yh(x) dx Recall that the homogeneous solution is yh = e−P (x) . u = eP (x) f(x) dx Thus the particular solution is yp = e−P (x) eP (x) f(x) dx. 14.6 Initial Conditions In physical problems involving first order differential equations, the solution satisfies both the differential equation and a constraint which we call the initial condition. Consider a first order linear differential equation subject to the initial condition y(x0) = y0. The general solution is y = yp + cyh = e−P (x) eP (x) f(x) dx + c e−P (x) . For the moment, we will assume that this problem is well-posed. A problem is well-posed if there is a unique solution to the differential equation that satisfies the constraint(s). Recall that eP (x) f(x) dx denotes any integral of eP (x) f(x). For convenience, we choose x x0 eP (ξ) f(ξ) dξ. The initial condition requires that y(x0) = y0 = e−P (x0) x0 x0 eP (ξ) f(ξ) dξ + c e−P (x0) = c e−P (x0) . Thus c = y0 eP (x0) . The solution subject to the initial condition is y = e−P (x) x x0 eP (ξ) f(ξ) dξ + y0 eP (x0)−P (x) . Example 14.6.1 Consider the problem y + (cos x)y = x, y(0) = 2. From Result 14.6.1, the solution subject to the initial condition is y = e− sin x x 0 ξ esin ξ dξ + 2 e− sin x . 14.6.1 Piecewise Continuous Coefficients and Inhomogeneities If the coefficient function p(x) and the inhomogeneous term f(x) in the first order linear differential equation dy dx + p(x)y = f(x) are continuous, then the solution is continuous and has a continuous first derivative. To see this, we note that the solution y = e−P (x) eP (x) f(x) dx + c e−P (x) 486
is continuous since the integral of a piecewise continuous function is continuous. The first derivative of the solution can be found directly from the differential equation.

y' = −p(x)y + f(x)

Since p(x), y, and f(x) are continuous, y' is continuous.

If p(x) or f(x) is only piecewise continuous, then the solution will be continuous since the integral of a piecewise continuous function is continuous. The first derivative of the solution will be piecewise continuous.

Example 14.6.2 Consider the problem

y' − y = H(x − 1), y(0) = 1,

where H(x) is the Heaviside function.

H(x) = 1 for x > 0, 0 for x < 0.

To solve this problem, we divide it into two equations on separate domains.

y1' − y1 = 0, y1(0) = 1, for x < 1
y2' − y2 = 1, y2(1) = y1(1), for x > 1

With the condition y2(1) = y1(1) on the second equation, we demand that the solution be continuous. The solution to the first equation is y = e^x. The solution for the second equation is

y = e^x ∫_1^x e^{−ξ} dξ + e^1 e^{x−1} = −1 + e^{x−1} + e^x.

Thus the solution over the whole domain is

y = e^x for x < 1, (1 + e^{−1}) e^x − 1 for x > 1.

The solution is graphed in Figure 14.5.

Figure 14.5: Solution to y' − y = H(x − 1).
Since sign x is piecewise defined, we solve the two problems,

y+' + y+ = 0, y+(1) = 1, for x > 0
y−' − y− = 0, y−(0) = y+(0), for x < 0,

and define the solution, y, to be

y(x) = y+(x) for x ≥ 0, y−(x) for x ≤ 0.

The initial condition for y− demands that the solution be continuous. Solving the two problems for positive and negative x, we obtain

y(x) = e^{1−x} for x > 0, e^{1+x} for x < 0.

This can be simplified to

y(x) = e^{1−|x|}.

This solution is graphed in Figure 14.6.

Figure 14.6: Solution to y' + sign(x)y = 0.

Result 14.6.1 Existence, Uniqueness Theorem. Let p(x) and f(x) be piecewise continuous on the interval [a, b] and let x0 ∈ [a, b]. Consider the problem,

dy/dx + p(x)y = f(x), y(x0) = y0.

The general solution of the differential equation is

y = e^{−P(x)} ∫ e^{P(x)} f(x) dx + c e^{−P(x)}.

The unique, continuous solution of the differential equation subject to the initial condition is

y = e^{−P(x)} ∫_{x0}^{x} e^{P(ξ)} f(ξ) dξ + y0 e^{P(x0)−P(x)},

where P(x) = ∫ p(x) dx.
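The closed form in Result 14.6.1 is straightforward to evaluate numerically. Below is a minimal sketch for Example 14.6.1 (y' + (cos x)y = x, y(0) = 2), using NumPy/SciPy (our tool choice), cross-checked against a general-purpose ODE solver:

```python
import numpy as np
from scipy.integrate import quad, solve_ivp

# Result 14.6.1 with p(x) = cos(x), f(x) = x, x0 = 0, y0 = 2, so P(x) = sin(x):
# y(x) = exp(-sin x) * Int_0^x xi*exp(sin xi) dxi + 2*exp(-sin x).
def y_formula(x):
    integral, _ = quad(lambda xi: xi * np.exp(np.sin(xi)), 0.0, x)
    return np.exp(-np.sin(x)) * (integral + 2.0)

xs = np.linspace(0.0, 5.0, 6)
num = solve_ivp(lambda x, y: x - np.cos(x) * y, (0.0, 5.0), [2.0],
                t_eval=xs, rtol=1e-10, atol=1e-12)
err = max(abs(num.y[0][i] - y_formula(xs[i])) for i in range(len(xs)))
print(err)  # small, limited only by the quadrature/solver tolerances
```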
Exercise 14.5 (mathematica/ode/first order/exact.nb)
Find the solutions of the following differential equations which satisfy the given initial conditions:

1. dy/dx + xy = x^{2n+1}, y(1) = 1, n ∈ Z
2. dy/dx − 2xy = 1, y(0) = 1

Hint, Solution

Exercise 14.6 (mathematica/ode/first order/exact.nb)
Show that if α > 0 and λ > 0, then for any real β, every solution of

dy/dx + αy(x) = β e^{−λx}

satisfies limx→+∞ y(x) = 0. (The case α = λ requires special treatment.) Find the solution for β = λ = 1 which satisfies y(0) = 1. Sketch this solution for 0 ≤ x < ∞ for several values of α. In particular, show what happens when α → 0 and α → ∞.
Hint, Solution

14.7 Well-Posed Problems

Example 14.7.1 Consider the problem,

y' − (1/x)y = 0, y(0) = 1.

The general solution is y = cx. Applying the initial condition demands that 1 = c · 0, which cannot be satisfied. The general solution for various values of c is plotted in Figure 14.7.

Figure 14.7: Solutions to y' − y/x = 0.

Example 14.7.2 Consider the problem

y' − (1/x)y = −1/x, y(0) = 1.

The general solution is y = 1 + cx. The initial condition is satisfied for any value of c so there are an infinite number of solutions.
Example 14.7.3 Consider the problem

y' + (1/x)y = 0, y(0) = 1.

The general solution is

y = c/x.

Depending on whether c is nonzero, the solution is either singular or zero at the origin and cannot satisfy the initial condition.

The above problems in which there were either no solutions or an infinite number of solutions are said to be ill-posed. If there is a unique solution that satisfies the initial condition, the problem is said to be well-posed. We should have suspected that we would run into trouble in the above examples as the initial condition was given at a singularity of the coefficient function, p(x) = 1/x.

Consider the problem,

y' + p(x)y = f(x), y(x0) = y0.

We assume that f(x) is bounded in a neighborhood of x = x0. The differential equation has the general solution,

y = e^{−P(x)} ∫ e^{P(x)} f(x) dx + c e^{−P(x)}.

If the homogeneous solution, e^{−P(x)}, is nonzero and finite at x = x0, then there is a unique value of c for which the initial condition is satisfied. If the homogeneous solution vanishes at x = x0 then either the initial condition cannot be satisfied or the initial condition is satisfied for all values of c. The homogeneous solution can vanish or be infinite only if P(x) → ±∞ as x → x0. This can occur only if the coefficient function, p(x), is unbounded at that point.

Result 14.7.1 If the initial condition is given where the homogeneous solution to a first order, linear differential equation is zero or infinite then the problem may be ill-posed. This may occur only if the coefficient function, p(x), is unbounded at that point.

14.8 Equations in the Complex Plane

14.8.1 Ordinary Points

Consider the first order homogeneous equation

dw/dz + p(z)w = 0,

where p(z), a function of a complex variable, is analytic in some domain D. The integrating factor,

I(z) = exp(∫ p(z) dz),

is an analytic function in that domain. As with the case of real variables, multiplying by the integrating factor and integrating yields the solution,

w(z) = c exp(−∫ p(z) dz).

We see that the solution is analytic in D.

Example 14.8.1 It does not make sense to pose the equation

dw/dz + |z|w = 0.

For the solution to exist, w and hence w'(z) must be analytic. Since p(z) = |z| is not analytic anywhere in the complex plane, the equation has no solution.
  • 511. Any point at which p(z) is analytic is called an ordinary point of the differential equation. Since the solution is analytic we can expand it in a Taylor series about an ordinary point. The radius of convergence of the series will be at least the distance to the nearest singularity of p(z) in the complex plane. Example 14.8.2 Consider the equation dw dz − 1 1 − z w = 0. The general solution is w = c 1−z . Expanding this solution about the origin, w = c 1 − z = c ∞ n=0 zn . The radius of convergence of the series is, R = lim n→∞ an an+1 = 1, which is the distance from the origin to the nearest singularity of p(z) = 1 1−z . We do not need to solve the differential equation to find the Taylor series expansion of the homogeneous solution. We could substitute a general Taylor series expansion into the differential equation and solve for the coefficients. Since we can always solve first order equations, this method is of limited usefulness. However, when we consider higher order equations in which we cannot solve the equations exactly, this will become an important method. Example 14.8.3 Again consider the equation dw dz − 1 1 − z w = 0. Since we know that the solution has a Taylor series expansion about z = 0, we substitute w = ∞ n=0 anzn into the differential equation. (1 − z) d dz ∞ n=0 anzn − ∞ n=0 anzn = 0 ∞ n=1 nanzn−1 − ∞ n=1 nanzn − ∞ n=0 anzn = 0 ∞ n=0 (n + 1)an+1zn − ∞ n=0 nanzn − ∞ n=0 anzn = 0 ∞ n=0 ((n + 1)an+1 − (n + 1)an) zn = 0. Now we equate powers of z to zero. For zn , the equation is (n+1)an+1 −(n+1)an = 0, or an+1 = an. Thus we have that an = a0 for all n ≥ 1. The solution is then w = a0 ∞ n=0 zn , which is the result we obtained by expanding the solution in Example 14.8.2. 491
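The recurrence an+1 = an reproduces the geometric series for 1/(1 − z). As a check, here is a short SymPy sketch (our tool choice) that solves the same equation directly and expands the result:

```python
from sympy import Function, Eq, dsolve, symbols, series

z = symbols('z')
w = Function('w')

# Example 14.8.3: w' - w/(1 - z) = 0; expect w = a0/(1 - z).
sol = dsolve(Eq(w(z).diff(z) - w(z)/(1 - z), 0), w(z))
print(sol)  # Eq(w(z), C1/(z - 1)), i.e. a0/(1 - z) with a0 = -C1
print(series(1/(1 - z), z, 0, 5))  # 1 + z + z**2 + z**3 + z**4 + O(z**5)
```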
  • 512. Result 14.8.1 Consider the equation dw dz + p(z)w = 0. If p(z) is analytic at z = z0 then z0 is called an ordinary point of the differ- ential equation. The Taylor series expansion of the solution can be found by substituting w = ∞ n=0 an(z − z0)n into the equation and equating powers of (z − z0). The radius of convergence of the series is at least the distance to the nearest singularity of p(z) in the complex plane. Exercise 14.7 Find the Taylor series expansion about the origin of the solution to dw dz + 1 1 − z w = 0 with the substitution w = ∞ n=0 anzn . What is the radius of convergence of the series? What is the distance to the nearest singularity of 1 1−z ? Hint, Solution 14.8.2 Regular Singular Points If the coefficient function p(z) has a simple pole at z = z0 then z0 is a regular singular point of the first order differential equation. Example 14.8.4 Consider the equation dw dz + α z w = 0, α = 0. This equation has a regular singular point at z = 0. The solution is w = cz−α . Depending on the value of α, the solution can have three different kinds of behavior. α is a negative integer. The solution is analytic in the finite complex plane. α is a positive integer The solution has a pole at the origin. w is analytic in the annulus, 0 < |z|. α is not an integer. w has a branch point at z = 0. The solution is analytic in the cut annulus 0 < |z| < ∞, θ0 < arg z < θ0 + 2π. Consider the differential equation dw dz + p(z)w = 0, where p(z) has a simple pole at the origin and is analytic in the annulus, 0 < |z| < r, for some positive r. Recall that the solution is w = c exp − p(z) dz = c exp − b0 z + p(z) − b0 z dz = c exp −b0 log z − zp(z) − b0 z dz = cz−b0 exp − zp(z) − b0 z dz 492
  • 513. The exponential factor has a removable singularity at z = 0 and is analytic in |z| < r. We consider the following cases for the z−b0 factor: b0 is a negative integer. Since z−b0 is analytic at the origin, the solution to the differential equation is analytic in the circle |z| < r. b0 is a positive integer. The solution has a pole of order −b0 at the origin and is analytic in the annulus 0 < |z| < r. b0 is not an integer. The solution has a branch point at the origin and thus is not single-valued. The solution is analytic in the cut annulus 0 < |z| < r, θ0 < arg z < θ0 + 2π. Since the exponential factor has a convergent Taylor series in |z| < r, the solution can be expanded in a series of the form w = z−b0 ∞ n=0 anzn , where a0 = 0 and b0 = lim z→0 z p(z). In the case of a regular singular point at z = z0, the series is w = (z − z0)−b0 ∞ n=0 an(z − z0)n , where a0 = 0 and b0 = lim z→z0 (z − z0) p(z). Series of this form are known as Frobenius series. Since we can write the solution as w = c(z − z0)−b0 exp − p(z) − b0 z − z0 dz , we see that the Frobenius expansion of the solution will have a radius of convergence at least the distance to the nearest singularity of p(z). Result 14.8.2 Consider the equation, dw dz + p(z)w = 0, where p(z) has a simple pole at z = z0, p(z) is analytic in some annulus, 0 < |z − z0| < r, and limz→z0 (z − z0)p(z) = β. The solution to the differential equation has a Frobenius series expansion of the form w = (z − z0)−β ∞ n=0 an(z − z0)n , a0 = 0. The radius of convergence of the expansion will be at least the distance to the nearest singularity of p(z). Example 14.8.5 We will find the first two nonzero terms in the series solution about z = 0 of the differential equation, dw dz + 1 sin z w = 0. First we note that the coefficient function has a simple pole at z = 0 and lim z→0 z sin z = lim z→0 1 cos z = 1. 493
Thus we look for a series solution of the form

w = z^{−1} Σ_{n=0}^{∞} an z^n, a0 ≠ 0.

The nearest singularities of 1/sin z in the complex plane are at z = ±π. Thus the radius of convergence of the series will be at least π. Substituting the first three terms of the expansion into the differential equation,

d/dz (a0 z^{−1} + a1 + a2 z) + (1/sin z)(a0 z^{−1} + a1 + a2 z) = O(z).

Recall that the Taylor expansion of sin z is sin z = z − z^3/6 + O(z^5).

(z − z^3/6 + O(z^5))(−a0 z^{−2} + a2) + (a0 z^{−1} + a1 + a2 z) = O(z^2)
−a0 z^{−1} + a2 z + (a0/6) z + a0 z^{−1} + a1 + a2 z = O(z^2)
a1 + (2a2 + a0/6) z = O(z^2)

a0 is arbitrary. Equating powers of z,

z^0: a1 = 0.
z^1: 2a2 + a0/6 = 0.

Thus the solution has the expansion,

w = a0 (z^{−1} − z/12) + O(z^2).

In Figure 14.8 the exact solution is plotted in a solid line and the two term approximation is plotted in a dashed line. The two term approximation is very good near the point z = 0.

Figure 14.8: Plot of the exact solution and the two term approximation.

Example 14.8.6 Find the first two nonzero terms in the series expansion about z = 0 of the solution to

w' − i (cos z/z) w = 0.

Since (cos z)/z has a simple pole at z = 0 and limz→0 (−i cos z) = −i we see that the Frobenius series will have the form

w = z^i Σ_{n=0}^{∞} an z^n, a0 ≠ 0.
  • 515. Recall that cos z has the Taylor expansion ∞ n=0 (−1)n z2n (2n)! . Substituting the Frobenius expansion into the differential equation yields z izi−1 ∞ n=0 anzn + zi ∞ n=0 nanzn−1 − i ∞ n=0 (−1)n z2n (2n)! zi ∞ n=0 anzn = 0 ∞ n=0 (n + i)anzn − i ∞ n=0 (−1)n z2n (2n)! ∞ n=0 anzn = 0. Equating powers of z, z0 : ia0 − ia0 = 0 → a0 is arbitrary z1 : (1 + i)a1 − ia1 = 0 → a1 = 0 z2 : (2 + i)a2 − ia2 + i 2 a0 = 0 → a2 = − i 4 a0. Thus the solution is w = a0zi 1 − i 4 z2 + O(z3 ) . 14.8.3 Irregular Singular Points If a point is not an ordinary point or a regular singular point then it is called an irregular singular point. The following equations have irregular singular points at the origin. • w + √ zw = 0 • w − z−2 w = 0 • w + exp(1/z)w = 0 Example 14.8.7 Consider the differential equation dw dz + αzβ w = 0, α = 0, β = −1, 0, 1, 2, . . . This equation has an irregular singular point at the origin. Solving this equation, d dz exp αzβ dz w = 0 w = c exp − α β + 1 zβ+1 = c ∞ n=0 (−1)n n! α β + 1 n z(β+1)n . If β is not an integer, then the solution has a branch point at the origin. If β is an integer, β < −1, then the solution has an essential singularity at the origin. The solution cannot be expanded in a Frobenius series, w = zλ ∞ n=0 anzn . Although we will not show it, this result holds for any irregular singular point of the differential equation. We cannot approximate the solution near an irregular singular point using a Frobenius expansion. Now would be a good time to summarize what we have discovered about solutions of first order differential equations in the complex plane. 495
  • 516. Result 14.8.3 Consider the first order differential equation dw dz + p(z)w = 0. Ordinary Points If p(z) is analytic at z = z0 then z0 is an ordinary point of the differential equation. The solution can be expanded in the Taylor series w = ∞ n=0 an(z −z0)n . The radius of convergence of the series is at least the distance to the nearest singularity of p(z) in the complex plane. Regular Singular Points If p(z) has a simple pole at z = z0 and is analytic in some annulus 0 < |z − z0| < r then z0 is a regular singular point of the differential equation. The solution at z0 will either be analytic, have a pole, or have a branch point. The solution can be expanded in the Frobenius series w = (z − z0)−β ∞ n=0 an(z − z0)n where a0 = 0 and β = limz→z0 (z−z0)p(z). The radius of convergence of the Frobenius series will be at least the distance to the nearest singularity of p(z). Irregular Singular Points If the point z = z0 is not an ordinary point or a regular singular point, then it is an irregular singular point of the differential equation. The solution cannot be expanded in a Frobenius series about that point. 14.8.4 The Point at Infinity Now we consider the behavior of first order linear differential equations at the point at infinity. Recall from complex variables that the complex plane together with the point at infinity is called the extended complex plane. To study the behavior of a function f(z) at infinity, we make the transformation z = 1 ζ and study the behavior of f(1/ζ) at ζ = 0. Example 14.8.8 Let’s examine the behavior of sin z at infinity. We make the substitution z = 1/ζ and find the Laurent expansion about ζ = 0. sin(1/ζ) = ∞ n=0 (−1)n (2n + 1)! ζ(2n+1) Since sin(1/ζ) has an essential singularity at ζ = 0, sin z has an essential singularity at infinity. We use the same approach if we want to examine the behavior at infinity of a differential equation. Starting with the first order differential equation, dw dz + p(z)w = 0, we make the substitution z = 1 ζ , d dz = −ζ2 d dζ , w(z) = u(ζ) to obtain −ζ2 du dζ + p(1/ζ)u = 0 du dζ − p(1/ζ) ζ2 u = 0. 496
Result 14.8.4 The behavior at infinity of

dw/dz + p(z)w = 0

is the same as the behavior at ζ = 0 of

du/dζ − (p(1/ζ)/ζ^2) u = 0.

Example 14.8.9 We classify the singular points of the equation

dw/dz + (1/(z^2 + 9)) w = 0.

We factor the denominator of the fraction to see that z = ı3 and z = −ı3 are regular singular points.

dw/dz + (1/((z − ı3)(z + ı3))) w = 0

We make the transformation z = 1/ζ to examine the point at infinity.

du/dζ − (1/ζ^2)(1/((1/ζ)^2 + 9)) u = 0
du/dζ − (1/(9ζ^2 + 1)) u = 0

Since the equation for u has an ordinary point at ζ = 0, z = ∞ is an ordinary point of the equation for w.
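The substitution z = 1/ζ is another purely mechanical step. A minimal SymPy sketch (our tool choice) for Example 14.8.9:

```python
from sympy import symbols, simplify

z, zeta = symbols('z zeta')

# Result 14.8.4: under z = 1/zeta, w' + p(z)w = 0 becomes
# u' - (p(1/zeta)/zeta**2) u = 0.
p = 1 / (z**2 + 9)
coeff = simplify(-p.subs(z, 1/zeta) / zeta**2)
print(coeff)  # -1/(9*zeta**2 + 1): analytic at zeta = 0, so z = oo is ordinary
```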
  • 518. 14.9 Additional Exercises Exact Equations Exercise 14.8 (mathematica/ode/first order/exact.nb) Find the general solution y = y(x) of the equations 1. dy dx = x2 + xy + y2 x2 , 2. (4y − 3x) dx + (y − 2x) dy = 0. Hint, Solution Exercise 14.9 (mathematica/ode/first order/exact.nb) Determine whether or not the following equations can be made exact. If so find the corresponding general solution. 1. (3x2 − 2xy + 2) dx + (6y2 − x2 + 3) dy = 0 2. dy dx = − ax + by bx + cy Hint, Solution Exercise 14.10 (mathematica/ode/first order/exact.nb) Find the solutions of the following differential equations which satisfy the given initial condition. In each case determine the interval in which the solution is defined. 1. dy dx = (1 − 2x)y2 , y(0) = −1/6. 2. x dx + y e−x dy = 0, y(0) = 1. Hint, Solution Exercise 14.11 Are the following equations exact? If so, solve them. 1. (4y − x)y − (9x2 + y − 1) = 0 2. (2x − 2y)y + (2x + 4y) = 0. Hint, Solution Exercise 14.12 (mathematica/ode/first order/exact.nb) Find all functions f(t) such that the differential equation y2 sin t + yf(t) dy dt = 0 (14.7) is exact. Solve the differential equation for these f(t). Hint, Solution The First Order, Linear Differential Equation Exercise 14.13 (mathematica/ode/first order/linear.nb) Solve the differential equation y + y sin x = 0. Hint, Solution 498
  • 519. Initial Conditions Well-Posed Problems Exercise 14.14 Find the solutions of t dy dt + Ay = 1 + t2 , t > 0 which are bounded at t = 0. Consider all (real) values of A. Hint, Solution Equations in the Complex Plane Exercise 14.15 Classify the singular points of the following first order differential equations, (include the point at infinity). 1. w + sin z z w = 0 2. w + 1 z−3 w = 0 3. w + z1/2 w = 0 Hint, Solution Exercise 14.16 Consider the equation w + z−2 w = 0. The point z = 0 is an irregular singular point of the differential equation. Thus we know that we cannot expand the solution about z = 0 in a Frobenius series. Try substituting the series solution w = zλ ∞ n=0 anzn , a0 = 0 into the differential equation anyway. What happens? Hint, Solution 499
  • 520. 14.10 Hints Hint 14.1 1. d dx ln |u| = 1 u 2. d dx uc = uc−1 u Hint 14.2 Hint 14.3 The equation is homogeneous. Make the change of variables u = y/t. Hint 14.4 Make sure you consider the case α = 0. Hint 14.5 Hint 14.6 Hint 14.7 The radius of convergence of the series and the distance to the nearest singularity of 1 1−z are not the same. Exact Equations Hint 14.8 1. 2. Hint 14.9 1. The equation is exact. Determine the primitive u by solving the equations ux = P, uy = Q. 2. The equation can be made exact. Hint 14.10 1. This equation is separable. Integrate to get the general solution. Apply the initial condition to determine the constant of integration. 2. Ditto. You will have to numerically solve an equation to determine where the solution is defined. Hint 14.11 Hint 14.12 The First Order, Linear Differential Equation Hint 14.13 Look in the appendix for the integral of csc x. 500
  • 521. Initial Conditions Well-Posed Problems Hint 14.14 Equations in the Complex Plane Hint 14.15 Hint 14.16 Try to find the value of λ by substituting the series into the differential equation and equating powers of z. 501
  • 522. 14.11 Solutions Solution 14.1 1. y (x) y(x) = f(x) d dx ln |y(x)| = f(x) ln |y(x)| = f(x) dx + c y(x) = ± e R f(x) dx+c y(x) = c e R f(x) dx 2. yα (x)y (x) = f(x) yα+1 (x) α + 1 = f(x) dx + c y(x) = (α + 1) f(x) dx + a 1/(α+1) 3. y cos x + y tan x cos x = cos x d dx y cos x = cos x y cos x = sin x + c y(x) = sin x cos x + c cos x Solution 14.2 We consider the homogeneous equation, P(x, y) + Q(x, y) dy dx = 0. That is, both P and Q are homogeneous of degree n. We hypothesize that multiplying by µ(x, y) = 1 xP(x, y) + yQ(x, y) will make the equation exact. To prove this we use the result that M(x, y) + N(x, y) dy dx = 0 is exact if and only if My = Nx. My = ∂ ∂y P xP + yQ = Py(xP + yQ) − P(xPy + Q + yQy) (xP + yQ)2 502
  • 523. Nx = ∂ ∂x Q xP + yQ = Qx(xP + yQ) − Q(P + xPx + yQx) (xP + yQ)2 My = Nx Py(xP + yQ) − P(xPy + Q + yQy) = Qx(xP + yQ) − Q(P + xPx + yQx) yPyQ − yPQy = xPQx − xPxQ xPxQ + yPyQ = xPQx + yPQy (xPx + yPy)Q = P(xQx + yQy) With Euler’s theorem, this reduces to an identity. nPQ = PnQ Thus the equation is exact. µ(x, y) is an integrating factor for the homogeneous equation. Solution 14.3 We note that this is a homogeneous differential equation. The coefficient of dy/dt and the inhomo- geneity are homogeneous of degree zero. dy dt = 2 y t + y t 2 . We make the change of variables u = y/t to obtain a separable equation. tu + u = 2u + u2 u u2 + u = 1 t Now we integrate to solve for u. u u(u + 1) = 1 t u u − u u + 1 = 1 t ln |u| − ln |u + 1| = ln |t| + c ln u u + 1 = ln |ct| u u + 1 = ±ct u u + 1 = ct u = ct 1 − ct u = t c − t y = t2 c − t Solution 14.4 We consider y − 1 x y = xα , x > 0. 503
  • 524. First we find the integrating factor. I(x) = exp − 1 x dx = exp (− ln x) = 1 x . We multiply by the integrating factor and integrate. 1 x y − 1 x2 y = xα−1 d dx 1 x y = xα−1 1 x y = xα−1 dx + c y = x xα−1 dx + cx y = xα+1 α + cx for α = 0, x ln x + cx for α = 0. Solution 14.5 1. y + xy = x2n+1 , y(1) = 1, n ∈ Z We find the integrating factor. I(x) = e R x dx = ex2 /2 We multiply by the integrating factor and integrate. Since the initial condition is given at x = 1, we will take the lower bound of integration to be that point. d dx ex2 /2 y = x2n+1 ex2 /2 y = e−x2 /2 x 1 ξ2n+1 eξ2 /2 dξ + c e−x2 /2 We choose the constant of integration to satisfy the initial condition. y = e−x2 /2 x 1 ξ2n+1 eξ2 /2 dξ + e(1−x2 )/2 If n ≥ 0 then we can use integration by parts to write the integral as a sum of terms. If n < 0 we can write the integral in terms of the exponential integral function. However, the integral form above is as nice as any other and we leave the answer in that form. 2. dy dx − 2xy(x) = 1, y(0) = 1. We determine the integrating factor and then integrate the equation. I(x) = e R −2x dx = e−x2 d dx e−x2 y = e−x2 y = ex2 x 0 e−ξ2 dξ + c ex2 We choose the constant of integration to satisfy the initial condition. y = ex2 1 + x 0 e−ξ2 dξ 504
  • 525. We can write the answer in terms of the Error function, erf(x) ≡ 2 √ π x 0 e−ξ2 dξ. y = ex2 1 + √ π 2 erf(x) Solution 14.6 We determine the integrating factor and then integrate the equation. I(x) = e R α dx = eαx d dx (eαx y) = β e(α−λ)x y = β e−αx e(α−λ)x dx + c e−αx First consider the case α = λ. y = β e−αx e(α−λ)x α − λ + c e−αx y = β α − λ e−λx +c e−αx Clearly the solution vanishes as x → ∞. Next consider α = λ. y = β e−αx x + c e−αx y = (c + βx) e−αx We use L’Hospital’s rule to show that the solution vanishes as x → ∞. lim x→∞ c + βx eαx = lim x→∞ β α eαx = 0 For β = λ = 1, the solution is y = 1 α−1 e−x +c e−αx for α = 1, (c + x) e−x for α = 1. The solution which satisfies the initial condition is y = 1 α−1 (e−x +(α − 2) e−αx ) for α = 1, (1 + x) e−x for α = 1. In Figure 14.9 the solution is plotted for α = 1/16, 1/8, . . . , 16. Consider the solution in the limit as α → 0. lim α→0 y(x) = lim α→0 1 α − 1 e−x +(α − 2) e−αx = 2 − e−x In the limit as α → ∞ we have, lim α→∞ y(x) = lim α→∞ 1 α − 1 e−x +(α − 2) e−αx = lim α→∞ α − 2 α − 1 e−αx = 1 for x = 0, 0 for x > 0. 505
Figure 14.9: The Solution for a Range of α

Figure 14.10: The Solution as α → 0 and α → ∞

This behavior is shown in Figure 14.10. The first graph plots the solutions for α = 1/128, 1/64, . . . , 1. The second graph plots the solutions for α = 1, 2, . . . , 128.

Solution 14.7
We substitute w = Σ_{n=0}^{∞} an z^n into the equation dw/dz + (1/(1 − z)) w = 0.

d/dz Σ_{n=0}^{∞} an z^n + (1/(1 − z)) Σ_{n=0}^{∞} an z^n = 0
(1 − z) Σ_{n=1}^{∞} n an z^{n−1} + Σ_{n=0}^{∞} an z^n = 0
Σ_{n=0}^{∞} (n + 1)an+1 z^n − Σ_{n=0}^{∞} n an z^n + Σ_{n=0}^{∞} an z^n = 0
Σ_{n=0}^{∞} ((n + 1)an+1 − (n − 1)an) z^n = 0

Equating powers of z to zero, we obtain the relation,

an+1 = ((n − 1)/(n + 1)) an.

a0 is arbitrary. We can compute the rest of the coefficients from the recurrence relation.

a1 = (−1/1) a0 = −a0
a2 = (0/2) a1 = 0

We see that the coefficients are zero for n ≥ 2. Thus the Taylor series expansion, (and the exact solution), is

w = a0(1 − z).
The radius of convergence of the series is infinite. The nearest singularity of 1/(1 − z) is at z = 1. Thus we see the radius of convergence can be greater than the distance to the nearest singularity of the coefficient function, p(z).

Exact Equations

Solution 14.8
1.

dy/dx = (x^2 + xy + y^2)/x^2

Since the right side is a homogeneous function of order zero, this is a homogeneous differential equation. We make the change of variables u = y/x and then solve the differential equation for u.

xu' + u = 1 + u + u^2
du/(1 + u^2) = dx/x
arctan(u) = ln |x| + c
u = tan(ln(|cx|))
y = x tan(ln(|cx|))

2.

(4y − 3x) dx + (y − 2x) dy = 0

Since the coefficients are homogeneous functions of order one, this is a homogeneous differential equation. We make the change of variables u = y/x and then solve the differential equation for u.

(4(y/x) − 3) dx + ((y/x) − 2) dy = 0
(4u − 3) dx + (u − 2)(u dx + x du) = 0
(u^2 + 2u − 3) dx + x(u − 2) du = 0
dx/x + ((u − 2)/((u + 3)(u − 1))) du = 0
dx/x + ((5/4)/(u + 3) − (1/4)/(u − 1)) du = 0
ln(x) + (5/4) ln(u + 3) − (1/4) ln(u − 1) = c
x^4 (u + 3)^5/(u − 1) = c
x^4 ((y/x) + 3)^5/((y/x) − 1) = c
(y + 3x)^5/(y − x) = c
  • 528. Since Py = Qx, the equation is exact. Now we find the primitive u(x, y) which satisfies du = (3x2 − 2xy + 2) dx + (6y2 − x2 + 3) dy. The primitive satisfies the partial differential equations ux = P, uy = Q. (14.8) We integrate the first equation of 14.8 to determine u up to a function of integration. ux = 3x2 − 2xy + 2 u = x3 − x2 y + 2x + f(y) We substitute this into the second equation of 14.8 to determine the function of integration up to an additive constant. −x2 + f (y) = 6y2 − x2 + 3 f (y) = 6y2 + 3 f(y) = 2y3 + 3y The solution of the differential equation is determined by the implicit equation u = c. x3 − x2 y + 2x + 2y3 + 3y = c 2. dy dx = − ax + by bx + cy (ax + by) dx + (bx + cy) dy = 0 We check if this form of the equation, P dx + Q dy = 0, is exact. Py = b, Qx = b Since Py = Qx, the equation is exact. Now we find the primitive u(x, y) which satisfies du = (ax + by) dx + (bx + cy) dy The primitive satisfies the partial differential equations ux = P, uy = Q. (14.9) We integrate the first equation of 14.9 to determine u up to a function of integration. ux = ax + by u = 1 2 ax2 + bxy + f(y) We substitute this into the second equation of 14.9 to determine the function of integration up to an additive constant. bx + f (y) = bx + cy f (y) = cy f(y) = 1 2 cy2 The solution of the differential equation is determined by the implicit equation u = d. ax2 + 2bxy + cy2 = d 508
  • 529. Solution 14.10 Note that since these equations are nonlinear, we cannot predict where the solutions will be defined from the equation alone. 1. This equation is separable. We integrate to get the general solution. dy dx = (1 − 2x)y2 dy y2 = (1 − 2x) dx − 1 y = x − x2 + c y = 1 x2 − x − c Now we apply the initial condition. y(0) = 1 −c = − 1 6 y = 1 x2 − x − 6 y = 1 (x + 2)(x − 3) The solution is defined on the interval (−2 . . . 3). 2. This equation is separable. We integrate to get the general solution. x dx + y e−x dy = 0 x ex dx + y dy = 0 (x − 1) ex + 1 2 y2 = c y = 2(c + (1 − x) ex) We apply the initial condition to determine the constant of integration. y(0) = 2(c + 1) = 1 c = − 1 2 y = 2(1 − x) ex −1 The function 2(1 − x) ex −1 is plotted in Figure 14.11. We see that the argument of the square root in the solution is non-negative only on an interval about the origin. Because 2(1− x) ex −1 == 0 is a mixed algebraic / transcendental equation, we cannot solve it analytically. The solution of the differential equation is defined on the interval (−1.67835 . . . 0.768039). Solution 14.11 1. We consider the differential equation, (4y − x)y − (9x2 + y − 1) = 0. Py = ∂ ∂y 1 − y − 9x2 = −1 Qx = ∂ ∂x (4y − x) = −1 509
Figure 14.11: The function 2(1 − x) e^x − 1.

This equation is exact. It is simplest to solve the equation by rearranging terms to form exact derivatives.

4yy' − xy' − y + 1 − 9x^2 = 0
d/dx (2y^2 − xy) + 1 − 9x^2 = 0
2y^2 − xy + x − 3x^3 + c = 0
y = (1/4)(x ± √(x^2 − 8(c + x − 3x^3)))

2. We consider the differential equation,

(2x − 2y)y' + (2x + 4y) = 0.

Py = ∂/∂y (2x + 4y) = 4
Qx = ∂/∂x (2x − 2y) = 2

Since Py ≠ Qx, this is not an exact equation.

Solution 14.12
Recall that the differential equation

P(x, y) + Q(x, y)y' = 0

is exact if and only if Py = Qx. For Equation 14.7, this criterion is

2y sin t = yf'(t)
f'(t) = 2 sin t
f(t) = 2(a − cos t).

In this case, the differential equation is

y^2 sin t + 2yy'(a − cos t) = 0.

We can integrate this exact equation by inspection.

d/dt (y^2(a − cos t)) = 0
y^2(a − cos t) = c
y = ±√(c/(a − cos t))
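Referring back to Solution 14.10, the endpoints of the interval of existence solve the transcendental equation 2(1 − x) e^x − 1 = 0, which must be handled numerically. A minimal SciPy sketch (the brackets [−3, 0] and [0, 1] are our choices):

```python
import numpy as np
from scipy.optimize import brentq

# Solution 14.10.2: y = sqrt(2*(1 - x)*exp(x) - 1) exists where the radicand
# is non-negative; find the endpoints of that interval.
g = lambda x: 2.0 * (1.0 - x) * np.exp(x) - 1.0
left = brentq(g, -3.0, 0.0)
right = brentq(g, 0.0, 1.0)
print(left, right)  # approximately -1.67835 and 0.768039, as quoted
```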
  • 531. The First Order, Linear Differential Equation Solution 14.13 Consider the differential equation y + y sin x = 0. We use Equation 14.5 to determine the solution. y = c e R −1/ sin x dx y = c e− ln | tan(x/2)| y = c cot x 2 y = c cot x 2 Initial Conditions Well-Posed Problems Solution 14.14 First we write the differential equation in the standard form. dy dt + A t y = 1 t + t, t > 0 We determine the integrating factor. I(t) = e R A/t dt = eA ln t = tA We multiply the differential equation by the integrating factor and integrate. dy dt + A t y = 1 t + t d dt tA y = tA−1 + tA+1 tA y =    tA A + tA+2 A+2 + c, A = 0, −2 ln t + 1 2 t2 + c, A = 0 −1 2 t−2 + ln t + c, A = −2 y =    1 A + t2 A+2 + ct−A , A = −2 ln t + 1 2 t2 + c, A = 0 −1 2 + t2 ln t + ct2 , A = −2 For positive A, the solution is bounded at the origin only for c = 0. For A = 0, there are no bounded solutions. For negative A, the solution is bounded there for any value of c and thus we have a one-parameter family of solutions. In summary, the solutions which are bounded at the origin are: y =    1 A + t2 A+2 , A > 0 1 A + t2 A+2 + ct−A , A < 0, A = −2 −1 2 + t2 ln t + ct2 , A = −2 Equations in the Complex Plane Solution 14.15 511
  • 532. 1. Consider the equation w + sin z z w = 0. The point z = 0 is the only point we need to examine in the finite plane. Since sin z z has a removable singularity at z = 0, there are no singular points in the finite plane. The substitution z = 1 ζ yields the equation u − sin(1/ζ) ζ u = 0. Since sin(1/ζ) ζ has an essential singularity at ζ = 0, the point at infinity is an irregular singular point of the original differential equation. 2. Consider the equation w + 1 z−3 w = 0. Since 1 z−3 has a simple pole at z = 3, the differential equation has a regular singular point there. Making the substitution z = 1/ζ, w(z) = u(ζ) u − 1 ζ2(1/ζ − 3) u = 0 u − 1 ζ(1 − 3ζ) u = 0. Since this equation has a simple pole at ζ = 0, the original equation has a regular singular point at infinity. 3. Consider the equation w + z1/2 w = 0. There is an irregular singular point at z = 0. With the substitution z = 1/ζ, w(z) = u(ζ), u − ζ−1/2 ζ2 u = 0 u − ζ−5/2 u = 0. We see that the point at infinity is also an irregular singular point of the original differential equation. Solution 14.16 We start with the equation w + z−2 w = 0. Substituting w = zλ ∞ n=0 anzn , a0 = 0 yields d dz zλ ∞ n=0 anzn + z−2 zλ ∞ n=0 anzn = 0 λzλ−1 ∞ n=0 anzn + zλ ∞ n=1 nanzn−1 + zλ ∞ n=0 anzn−2 = 0 The lowest power of z in the expansion is zλ−2 . The coefficient of this term is a0. Equating powers of z demands that a0 = 0 which contradicts our initial assumption that it was nonzero. Thus we cannot find a λ such that the solution can be expanded in the form, w = zλ ∞ n=0 anzn , a0 = 0. 512
  • 533. 14.12 Quiz Problem 14.1 What is the general solution of a first order differential equation? Solution Problem 14.2 Write a statement about the functions P and Q to make the following statement correct. The first order differential equation P(x, y) + Q(x, y) dy dx = 0 is exact if and only if . It is separable if . Solution Problem 14.3 Derive the general solution of dy dx + p(x)y = f(x). Solution Problem 14.4 Solve y = y − y2 . Solution 513
  • 534. 14.13 Quiz Solutions Solution 14.1 The general solution of a first order differential equation is a one-parameter family of functions which satisfies the equation. Solution 14.2 The first order differential equation P(x, y) + Q(x, y) dy dx = 0 is exact if and only if Py = Qx. It is separable if P = P(x) and Q = Q(y). Solution 14.3 dy dx + p(x)y = f(x) We multiply by the integrating factor µ(x) = exp(P(x)) = exp p(x) dx , and integrate. dy dx eP (x) +p(x)y eP (x) = eP (x) f(x) d dx y eP (x) = eP (x) f(x) y eP (x) = eP (x) f(x) dx + c y = e−P (x) eP (x) f(x) dx + c e−P (x) Solution 14.4 y = y − y2 is separable. y = y − y2 y y − y2 = 1 y y − y y − 1 = 1 ln y − ln(y − 1) = x + c We do algebraic simplifications and rename the constant of integration to write the solution in a nice form. y y − 1 = c ex y = (y − 1)c ex y = −c ex 1 − c ex y = ex ex −c y = 1 1 − c e−x 514
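The quiz solution above is easy to verify symbolically. A minimal SymPy sketch (our tool choice):

```python
from sympy import symbols, simplify, exp

x, c = symbols('x c')

# Quiz Solution 14.4: check that y = 1/(1 - c*exp(-x)) satisfies y' = y - y**2.
y = 1 / (1 - c * exp(-x))
print(simplify(y.diff(x) - (y - y**2)))  # 0
```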
  • 535. Chapter 15 First Order Linear Systems of Differential Equations We all agree that your theory is crazy, but is it crazy enough? - Niels Bohr 15.1 Introduction In this chapter we consider first order linear systems of differential equations. That is, we consider equations of the form, x (t) = Ax(t) + f(t), x(t) =    x1(t) ... xn(t)    , A =      a11 a12 . . . a1n a21 a22 . . . a2n ... ... ... ... an1 an2 . . . ann      . Initially we will consider the homogeneous problem, x (t) = Ax(t). (Later we will find particular solutions with variation of parameters.) The best way to solve these equations is through the use of the matrix exponential. Unfortunately, using the matrix exponential requires knowledge of the Jordan canonical form and matrix functions. Fortunately, we can solve a certain class of problems using only the concepts of eigenvalues and eigenvectors of a matrix. We present this simple method in the next section. In the following section we will take a detour into matrix theory to cover Jordan canonical form and its applications. Then we will be able to solve the general case. 15.2 Using Eigenvalues and Eigenvectors to find Homoge- neous Solutions If you have forgotten what eigenvalues and eigenvectors are and how to compute them, go find a book on linear algebra and spend a few minutes re-aquainting yourself with the rudimentary material. Recall that the single differential equation x (t) = Ax has the general solution x = c eAt . Maybe the system of differential equations x (t) = Ax(t) (15.1) 515
has similar solutions. Perhaps it has a solution of the form x(t) = xi e^{λt} for some constant vector xi and some value λ. Let's substitute this into the differential equation and see what happens.

x'(t) = Ax(t)
xi λ e^{λt} = Axi e^{λt}
Axi = λxi

We see that if λ is an eigenvalue of A with eigenvector xi then x(t) = xi e^{λt} satisfies the differential equation. Since the differential equation is linear, cxi e^{λt} is a solution.

Suppose that the n × n matrix A has the eigenvalues {λk} with a complete set of linearly independent eigenvectors {xik}. Then each of xik e^{λkt} is a homogeneous solution of Equation 15.1. We note that each of these solutions is linearly independent. Without any kind of justification I will tell you that the general solution of the differential equation is a linear combination of these n linearly independent solutions.

Result 15.2.1 Suppose that the n × n matrix A has the eigenvalues {λk} with a complete set of linearly independent eigenvectors {xik}. The system of differential equations, x'(t) = Ax(t), has the general solution,

x(t) = Σ_{k=1}^{n} ck xik e^{λkt}.

Example 15.2.1 (mathematica/ode/systems/systems.nb) Find the solution of the following initial value problem. Describe the behavior of the solution as t → ∞.

x' = Ax ≡ [−2 1; −5 4] x, x(0) = x0 ≡ [1, 3]ᵀ

The matrix has the distinct eigenvalues λ1 = −1, λ2 = 3. The corresponding eigenvectors are

x1 = [1, 1]ᵀ, x2 = [1, 5]ᵀ.

The general solution of the system of differential equations is

x = c1 [1, 1]ᵀ e^{−t} + c2 [1, 5]ᵀ e^{3t}.

We apply the initial condition to determine the constants.

[1 1; 1 5][c1, c2]ᵀ = [1, 3]ᵀ
c1 = 1/2, c2 = 1/2

The solution subject to the initial condition is

x = (1/2)[1, 1]ᵀ e^{−t} + (1/2)[1, 5]ᵀ e^{3t}.

For large t, the solution looks like

x ≈ (1/2)[1, 5]ᵀ e^{3t}.
Figure 15.1: Homogeneous solutions in the phase plane.

Both coordinates tend to infinity. Figure 15.1 shows some homogeneous solutions in the phase plane.

Example 15.2.2 (mathematica/ode/systems/systems.nb) Find the solution of the following initial value problem. Describe the behavior of the solution as t → ∞.

x' = Ax ≡ [1 1 2; 0 2 2; −1 1 3] x, x(0) = x0 ≡ [2, 0, 1]ᵀ

The matrix has the distinct eigenvalues λ1 = 1, λ2 = 2, λ3 = 3. The corresponding eigenvectors are

x1 = [0, −2, 1]ᵀ, x2 = [1, 1, 0]ᵀ, x3 = [2, 2, 1]ᵀ.

The general solution of the system of differential equations is

x = c1 [0, −2, 1]ᵀ e^t + c2 [1, 1, 0]ᵀ e^{2t} + c3 [2, 2, 1]ᵀ e^{3t}.

We apply the initial condition to determine the constants.

[0 1 2; −2 1 2; 1 0 1][c1, c2, c3]ᵀ = [2, 0, 1]ᵀ
c1 = 1, c2 = 2, c3 = 0

The solution subject to the initial condition is

x = [0, −2, 1]ᵀ e^t + 2 [1, 1, 0]ᵀ e^{2t}.

As t → ∞, all coordinates tend to infinity.
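The eigenvalue method translates directly into a few lines of linear algebra. A minimal NumPy/SciPy sketch (our tool choice) for Example 15.2.1:

```python
import numpy as np
from scipy.linalg import expm

# x' = A x, x(0) = x0: expand x0 in eigenvectors, then
# x(t) = sum_k c_k * v_k * exp(lambda_k * t).
A = np.array([[-2.0, 1.0], [-5.0, 4.0]])
x0 = np.array([1.0, 3.0])

lam, V = np.linalg.eig(A)    # columns of V are eigenvectors
c = np.linalg.solve(V, x0)   # coefficients of x0 in the eigenvector basis

def x(t):
    return V @ (c * np.exp(lam * t))

print(x(1.0))                # eigenvalue-method solution at t = 1
print(expm(A) @ x0)          # matrix-exponential cross-check; should agree
```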
  • 538. Exercise 15.1 (mathematica/ode/systems/systems.nb) Find the solution of the following initial value problem. Describe the behavior of the solution as t → ∞. x = Ax ≡ 1 −5 1 −3 x, x(0) = x0 ≡ 1 1 Hint, Solution Exercise 15.2 (mathematica/ode/systems/systems.nb) Find the solution of the following initial value problem. Describe the behavior of the solution as t → ∞. x = Ax ≡   −3 0 2 1 −1 0 −2 −1 0   x, x(0) = x0 ≡   1 0 0   Hint, Solution Exercise 15.3 Use the matrix form of the method of variation of parameters to find the general solution of dx dt = 4 −2 8 −4 x + t−3 −t−2 , t > 0. Hint, Solution 15.3 Matrices and Jordan Canonical Form Functions of Square Matrices. Consider a function f(x) with a Taylor series. f(x) = ∞ n=0 f(n) (0) n! xn We can define the function to take square matrices as arguments. The function of the square matrix A is defined in terms of the Taylor series. f(A) = ∞ n=0 f(n) (0) n! An (Note that this definition is usually not the most convenient method for computing a function of a matrix. Use the Jordan canonical form for that.) Eigenvalues and Eigenvectors. Consider a square matrix A. A nonzero vector x is an eigen- vector of the matrix with eigenvalue λ if Ax = λx. Note that we can write this equation as (A − λI)x = 0. This equation has solutions for nonzero x if and only if A − λI is singular, (det(A − λI) = 0). We define the characteristic polynomial of the matrix χ(λ) as this determinant. χ(λ) = det(A − λI) The roots of the characteristic polynomial are the eigenvalues of the matrix. The eigenvectors of distinct eigenvalues are linearly independent. Thus if a matrix has distinct eigenvalues, the eigenvectors form a basis. If λ is a root of χ(λ) of multiplicity m then there are up to m linearly independent eigenvectors corresponding to that eigenvalue. That is, it has from 1 to m eigenvectors. 518
Diagonalizing Matrices. Consider an n × n matrix A that has a complete set of n linearly independent eigenvectors. A may or may not have distinct eigenvalues. Consider the matrix S with eigenvectors as columns.

S = [x1 x2 · · · xn]

A is diagonalized by the similarity transformation:

Λ = S^{−1} A S.

Λ is a diagonal matrix with the eigenvalues of A as the diagonal elements. Furthermore, the kth diagonal element is λk, the eigenvalue corresponding to the eigenvector, xk.

Generalized Eigenvectors. A vector xk is a generalized eigenvector of rank k if

(A − λI)^k xk = 0 but (A − λI)^{k−1} xk ≠ 0.

Eigenvectors are generalized eigenvectors of rank 1. An n × n matrix has n linearly independent generalized eigenvectors. A chain of generalized eigenvectors generated by the rank m generalized eigenvector xm is the set: {x1, x2, . . . , xm}, where

xk = (A − λI)xk+1, for k = m − 1, . . . , 1.

Computing Generalized Eigenvectors. Let λ be an eigenvalue of multiplicity m. Let n be the smallest integer such that

rank(nullspace((A − λI)^n)) = m.

Let Nk denote the number of generalized eigenvectors of rank k. These have the value:

Nk = rank(nullspace((A − λI)^k)) − rank(nullspace((A − λI)^{k−1})).

One can compute the generalized eigenvectors of a matrix by looping through the following three steps until all the Nk are zero:

1. Select the largest k for which Nk is positive. Find a generalized eigenvector xk of rank k which is linearly independent of all the generalized eigenvectors found thus far.
2. From xk generate the chain of eigenvectors {x1, x2, . . . , xk}. Add this chain to the known generalized eigenvectors.
3. Decrement each positive Nk by one.

Example 15.3.1 Consider the matrix

A = [1 1 1; 2 1 −1; −3 2 4].

The characteristic polynomial of the matrix is

χ(λ) = det([1 − λ, 1, 1; 2, 1 − λ, −1; −3, 2, 4 − λ])
= (1 − λ)^2(4 − λ) + 3 + 4 + 3(1 − λ) − 2(4 − λ) + 2(1 − λ)
= −(λ − 2)^3.

Thus we see that λ = 2 is an eigenvalue of multiplicity 3. A − 2I is

A − 2I = [−1 1 1; 2 −1 −1; −3 2 2].
  • 540. The rank of the nullspace space of A − 2I is less than 3. (A − 2I)2 =   0 0 0 −1 1 1 1 −1 −1   The rank of nullspace((A − 2I)2 ) is less than 3 as well, so we have to take one more step. (A − 2I)3 =   0 0 0 0 0 0 0 0 0   The rank of nullspace((A − 2I)3 ) is 3. Thus there are generalized eigenvectors of ranks 1, 2 and 3. The generalized eigenvector of rank 3 satisfies: (A − 2I)3 x3 = 0   0 0 0 0 0 0 0 0 0   x3 = 0 We choose the solution x3 =   1 0 0   . Now to compute the chain generated by x3. x2 = (A − 2I)x3 =   −1 2 −3   x1 = (A − 2I)x2 =   0 −1 1   Thus a set of generalized eigenvectors corresponding to the eigenvalue λ = 2 are x1 =   0 −1 1   , x2 =   −1 2 −3   , x3 =   1 0 0   . Jordan Block. A Jordan block is a square matrix which has the constant, λ, on the diagonal and ones on the first super-diagonal:            λ 1 0 · · · 0 0 0 λ 1 · · · 0 0 0 0 λ ... 0 0 ... ... ... ... ... ... 0 0 0 ... λ 1 0 0 0 · · · 0 λ            520
  • 541. Jordan Canonical Form. A matrix J is in Jordan canonical form if all the elements are zero except for Jordan blocks Jk along the diagonal. J =          J1 0 · · · 0 0 0 J2 ... 0 0 ... ... ... ... ... 0 0 ... Jn−1 0 0 0 · · · 0 Jn          The Jordan canonical form of a matrix is obtained with the similarity transformation: J = S−1 AS, where S is the matrix of the generalized eigenvectors of A and the generalized eigenvectors are grouped in chains. Example 15.3.2 Again consider the matrix A =   1 1 1 2 1 −1 −3 2 4   . Since λ = 2 is an eigenvalue of multiplicity 3, the Jordan canonical form of the matrix is J =   2 1 0 0 2 1 0 0 2   . In Example 15.3.1 we found the generalized eigenvectors of A. We define the matrix with generalized eigenvectors as columns: S =   0 −1 1 −1 2 0 1 −3 0   . We can verify that J = S−1 AS. J = S−1 AS =   0 −3 −2 0 −1 −1 1 −1 −1     1 1 1 2 1 −1 −3 2 4     0 −1 1 −1 2 0 1 −3 0   =   2 1 0 0 2 1 0 0 2   Functions of Matrices in Jordan Canonical Form. The function of an n × n Jordan block is the upper-triangular matrix: f(Jk) =              f(λ) f (λ) 1! f (λ) 2! · · · f(n−2) (λ) (n−2)! f(n−1) (λ) (n−1)! 0 f(λ) f (λ) 1! · · · f(n−3) (λ) (n−3)! f(n−2) (λ) (n−2)! 0 0 f(λ) ... f(n−4) (λ) (n−4)! f(n−3) (λ) (n−3)! ... ... ... ... ... ... 0 0 0 ... f(λ) f (λ) 1! 0 0 0 · · · 0 f(λ)              521
The function of a matrix in Jordan canonical form is
$$f(J) = \begin{pmatrix} f(J_1) & 0 & \cdots & 0 & 0 \\ 0 & f(J_2) & \ddots & 0 & 0 \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ 0 & 0 & \ddots & f(J_{n-1}) & 0 \\ 0 & 0 & \cdots & 0 & f(J_n) \end{pmatrix}$$
The Jordan canonical form of a matrix satisfies
$$f(J) = S^{-1} f(A) S,$$
where $S$ is the matrix of the generalized eigenvectors of $A$. This gives us a convenient method for computing functions of matrices.
Example 15.3.3 Consider the matrix exponential function $e^A$ for our old friend:
$$A = \begin{pmatrix} 1 & 1 & 1 \\ 2 & 1 & -1 \\ -3 & 2 & 4 \end{pmatrix}.$$
In Example 15.3.2 we showed that the Jordan canonical form of the matrix is
$$J = \begin{pmatrix} 2 & 1 & 0 \\ 0 & 2 & 1 \\ 0 & 0 & 2 \end{pmatrix}.$$
Since all the derivatives of $e^\lambda$ are just $e^\lambda$, it is especially easy to compute $e^J$:
$$e^J = \begin{pmatrix} e^2 & e^2 & e^2/2 \\ 0 & e^2 & e^2 \\ 0 & 0 & e^2 \end{pmatrix}.$$
We find $e^A$ with a similarity transformation of $e^J$. We use the matrix of generalized eigenvectors found in Example 15.3.2.
$$e^A = S e^J S^{-1} = \begin{pmatrix} 0 & -1 & 1 \\ -1 & 2 & 0 \\ 1 & -3 & 0 \end{pmatrix}\begin{pmatrix} e^2 & e^2 & e^2/2 \\ 0 & e^2 & e^2 \\ 0 & 0 & e^2 \end{pmatrix}\begin{pmatrix} 0 & -3 & -2 \\ 0 & -1 & -1 \\ 1 & -1 & -1 \end{pmatrix}$$
$$e^A = \begin{pmatrix} 0 & 2 & 2 \\ 3 & 1 & -1 \\ -5 & 3 & 5 \end{pmatrix}\frac{e^2}{2}$$

15.4 Using the Matrix Exponential

The homogeneous differential equation
$$\mathbf{x}'(t) = A\mathbf{x}(t)$$
has the solution
$$\mathbf{x}(t) = e^{At}\mathbf{c}$$
where $\mathbf{c}$ is a vector of constants. The solution subject to the initial condition, $\mathbf{x}(t_0) = \mathbf{x}_0$, is
$$\mathbf{x}(t) = e^{A(t-t_0)}\mathbf{x}_0.$$
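As a numerical spot check of Example 15.3.3, one can compare the Jordan-form result for $e^A$ against a general-purpose matrix exponential routine. A minimal sketch with numpy and scipy (assumed available):

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[1.0, 1.0, 1.0], [2.0, 1.0, -1.0], [-3.0, 2.0, 4.0]])

    # Closed form from Example 15.3.3: e^A = (e^2/2) [[0,2,2],[3,1,-1],[-5,3,5]]
    closed = (np.e**2 / 2) * np.array([[0.0, 2.0, 2.0],
                                       [3.0, 1.0, -1.0],
                                       [-5.0, 3.0, 5.0]])
    assert np.allclose(expm(A), closed)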
The homogeneous differential equation
$$\mathbf{x}'(t) = \frac{1}{t}A\mathbf{x}(t)$$
has the solution
$$\mathbf{x}(t) = t^A \mathbf{c} \equiv e^{A \operatorname{Log} t}\mathbf{c},$$
where $\mathbf{c}$ is a vector of constants. The solution subject to the initial condition, $\mathbf{x}(t_0) = \mathbf{x}_0$, is
$$\mathbf{x}(t) = \left(\frac{t}{t_0}\right)^A \mathbf{x}_0 \equiv e^{A \operatorname{Log}(t/t_0)}\mathbf{x}_0.$$
The inhomogeneous problem
$$\mathbf{x}'(t) = A\mathbf{x}(t) + \mathbf{f}(t), \quad \mathbf{x}(t_0) = \mathbf{x}_0$$
has the solution
$$\mathbf{x}(t) = e^{A(t-t_0)}\mathbf{x}_0 + e^{At}\int_{t_0}^{t} e^{-A\tau}\mathbf{f}(\tau)\, d\tau.$$
Example 15.4.1 Consider the system
$$\frac{d\mathbf{x}}{dt} = \begin{pmatrix} 1 & 1 & 1 \\ 2 & 1 & -1 \\ -3 & 2 & 4 \end{pmatrix}\mathbf{x}.$$
The general solution of the system of differential equations is $\mathbf{x}(t) = e^{At}\mathbf{c}$. In Example 15.3.3 we found $e^A$. $At$ is just a constant times $A$. The eigenvalues of $At$ are $\{\lambda_k t\}$ where $\{\lambda_k\}$ are the eigenvalues of $A$. The generalized eigenvectors of $At$ are the same as those of $A$. Consider $e^{Jt}$. The derivatives of $f(\lambda) = e^{\lambda t}$ are $f'(\lambda) = t e^{\lambda t}$ and $f''(\lambda) = t^2 e^{\lambda t}$. Thus we have
$$e^{Jt} = \begin{pmatrix} e^{2t} & t e^{2t} & t^2 e^{2t}/2 \\ 0 & e^{2t} & t e^{2t} \\ 0 & 0 & e^{2t} \end{pmatrix} = \begin{pmatrix} 1 & t & t^2/2 \\ 0 & 1 & t \\ 0 & 0 & 1 \end{pmatrix} e^{2t}.$$
We find $e^{At}$ with a similarity transformation:
$$e^{At} = S e^{Jt} S^{-1} = \begin{pmatrix} 0 & -1 & 1 \\ -1 & 2 & 0 \\ 1 & -3 & 0 \end{pmatrix}\begin{pmatrix} 1 & t & t^2/2 \\ 0 & 1 & t \\ 0 & 0 & 1 \end{pmatrix} e^{2t}\begin{pmatrix} 0 & -3 & -2 \\ 0 & -1 & -1 \\ 1 & -1 & -1 \end{pmatrix}$$
$$e^{At} = \begin{pmatrix} 1-t & t & t \\ 2t - t^2/2 & 1 - t + t^2/2 & -t + t^2/2 \\ -3t + t^2/2 & 2t - t^2/2 & 1 + 2t - t^2/2 \end{pmatrix} e^{2t}.$$
The solution of the system of differential equations is
$$\mathbf{x}(t) = \left( c_1\begin{pmatrix} 1-t \\ 2t - t^2/2 \\ -3t + t^2/2 \end{pmatrix} + c_2\begin{pmatrix} t \\ 1 - t + t^2/2 \\ 2t - t^2/2 \end{pmatrix} + c_3\begin{pmatrix} t \\ -t + t^2/2 \\ 1 + 2t - t^2/2 \end{pmatrix} \right) e^{2t}.$$
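The matrix exponential $e^{At}$ of Example 15.4.1 can also be verified symbolically. A sketch assuming sympy:

    from sympy import Matrix, symbols, exp, simplify, zeros

    t = symbols('t')
    A = Matrix([[1, 1, 1], [2, 1, -1], [-3, 2, 4]])

    # sympy computes the matrix exponential directly; compare with the result above.
    eAt = (A * t).exp()
    expected = exp(2*t) * Matrix([
        [1 - t,          t,                t],
        [2*t - t**2/2,   1 - t + t**2/2,   -t + t**2/2],
        [-3*t + t**2/2,  2*t - t**2/2,     1 + 2*t - t**2/2],
    ])
    assert simplify(eAt - expected) == zeros(3, 3)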
Example 15.4.2 Consider the Euler equation system
$$\frac{d\mathbf{x}}{dt} = \frac{1}{t}A\mathbf{x} \equiv \frac{1}{t}\begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}\mathbf{x}.$$
The solution is $\mathbf{x}(t) = t^A \mathbf{c}$. Note that $A$ is almost in Jordan canonical form; it has a one on the sub-diagonal instead of the super-diagonal. It is clear that a function of $A$ is defined by
$$f(A) = \begin{pmatrix} f(1) & 0 \\ f'(1) & f(1) \end{pmatrix}.$$
The function $f(\lambda) = t^\lambda$ has the derivative $f'(\lambda) = t^\lambda \log t$. Thus the solution of the system is
$$\mathbf{x}(t) = \begin{pmatrix} t & 0 \\ t\log t & t \end{pmatrix}\begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = c_1\begin{pmatrix} t \\ t\log t \end{pmatrix} + c_2\begin{pmatrix} 0 \\ t \end{pmatrix}.$$
Example 15.4.3 Consider an inhomogeneous system of differential equations:
$$\frac{d\mathbf{x}}{dt} = A\mathbf{x} + \mathbf{f}(t) \equiv \begin{pmatrix} 4 & -2 \\ 8 & -4 \end{pmatrix}\mathbf{x} + \begin{pmatrix} t^{-3} \\ -t^{-2} \end{pmatrix}, \quad t > 0.$$
The general solution is
$$\mathbf{x}(t) = e^{At}\mathbf{c} + e^{At}\int e^{-At}\mathbf{f}(t)\, dt.$$
First we find homogeneous solutions. The characteristic equation for the matrix is
$$\chi(\lambda) = \begin{vmatrix} 4-\lambda & -2 \\ 8 & -4-\lambda \end{vmatrix} = \lambda^2 = 0.$$
$\lambda = 0$ is an eigenvalue of multiplicity 2. Thus the Jordan canonical form of the matrix is
$$J = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}.$$
Since $\operatorname{rank}(\operatorname{nullspace}(A - 0I)) = 1$ there is only one eigenvector. A generalized eigenvector of rank 2 satisfies
$$(A - 0I)^2\mathbf{x}_2 = 0, \qquad \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}\mathbf{x}_2 = 0.$$
We choose
$$\mathbf{x}_2 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}.$$
Now we generate the chain from $\mathbf{x}_2$:
$$\mathbf{x}_1 = (A - 0I)\mathbf{x}_2 = \begin{pmatrix} 4 \\ 8 \end{pmatrix}.$$
We define the matrix of generalized eigenvectors,
$$S = \begin{pmatrix} 4 & 1 \\ 8 & 0 \end{pmatrix}.$$
The derivative of $f(\lambda) = e^{\lambda t}$ is $f'(\lambda) = t e^{\lambda t}$. Thus
$$e^{Jt} = \begin{pmatrix} 1 & t \\ 0 & 1 \end{pmatrix}.$$
The homogeneous solution of the differential equation system is $\mathbf{x}_h = e^{At}\mathbf{c}$, where
$$e^{At} = S e^{Jt} S^{-1} = \begin{pmatrix} 4 & 1 \\ 8 & 0 \end{pmatrix}\begin{pmatrix} 1 & t \\ 0 & 1 \end{pmatrix}\begin{pmatrix} 0 & 1/8 \\ 1 & -1/2 \end{pmatrix} = \begin{pmatrix} 1+4t & -2t \\ 8t & 1-4t \end{pmatrix}.$$
The general solution of the inhomogeneous system of equations is
$$\mathbf{x}(t) = e^{At}\mathbf{c} + e^{At}\int e^{-At}\mathbf{f}(t)\, dt$$
$$\mathbf{x}(t) = \begin{pmatrix} 1+4t & -2t \\ 8t & 1-4t \end{pmatrix}\mathbf{c} + \begin{pmatrix} 1+4t & -2t \\ 8t & 1-4t \end{pmatrix}\int\begin{pmatrix} 1-4t & 2t \\ -8t & 1+4t \end{pmatrix}\begin{pmatrix} t^{-3} \\ -t^{-2} \end{pmatrix} dt$$
$$\mathbf{x}(t) = c_1\begin{pmatrix} 1+4t \\ 8t \end{pmatrix} + c_2\begin{pmatrix} -2t \\ 1-4t \end{pmatrix} + \begin{pmatrix} 2 - 2\operatorname{Log} t + \frac{2}{t} - \frac{1}{2t^2} \\ 4 - 4\operatorname{Log} t + \frac{5}{t} \end{pmatrix}.$$
We can tidy up the answer a little bit. First we take linear combinations of the homogeneous solutions to obtain a simpler form:
$$\mathbf{x}(t) = c_1\begin{pmatrix} 1 \\ 2 \end{pmatrix} + c_2\begin{pmatrix} 2t \\ 4t-1 \end{pmatrix} + \begin{pmatrix} 2 - 2\operatorname{Log} t + \frac{2}{t} - \frac{1}{2t^2} \\ 4 - 4\operatorname{Log} t + \frac{5}{t} \end{pmatrix}.$$
Then we subtract 2 times the first homogeneous solution from the particular solution:
$$\mathbf{x}(t) = c_1\begin{pmatrix} 1 \\ 2 \end{pmatrix} + c_2\begin{pmatrix} 2t \\ 4t-1 \end{pmatrix} + \begin{pmatrix} -2\operatorname{Log} t + \frac{2}{t} - \frac{1}{2t^2} \\ -4\operatorname{Log} t + \frac{5}{t} \end{pmatrix}.$$
  • 546. 15.5 Exercises Exercise 15.4 (mathematica/ode/systems/systems.nb) Find the solution of the following initial value problem. x = Ax ≡ −2 1 −5 4 x, x(0) = x0 ≡ 1 3 Hint, Solution Exercise 15.5 (mathematica/ode/systems/systems.nb) Find the solution of the following initial value problem. x = Ax ≡   1 1 2 0 2 2 −1 1 3   x, x(0) = x0 ≡   2 0 1   Hint, Solution Exercise 15.6 (mathematica/ode/systems/systems.nb) Find the solution of the following initial value problem. Describe the behavior of the solution as t → ∞. x = Ax ≡ 1 −5 1 −3 x, x(0) = x0 ≡ 1 1 Hint, Solution Exercise 15.7 (mathematica/ode/systems/systems.nb) Find the solution of the following initial value problem. Describe the behavior of the solution as t → ∞. x = Ax ≡   −3 0 2 1 −1 0 −2 −1 0   x, x(0) = x0 ≡   1 0 0   Hint, Solution Exercise 15.8 (mathematica/ode/systems/systems.nb) Find the solution of the following initial value problem. Describe the behavior of the solution as t → ∞. x = Ax ≡ 1 −4 4 −7 x, x(0) = x0 ≡ 3 2 Hint, Solution Exercise 15.9 (mathematica/ode/systems/systems.nb) Find the solution of the following initial value problem. Describe the behavior of the solution as t → ∞. x = Ax ≡   −1 0 0 −4 1 0 3 6 2   x, x(0) = x0 ≡   −1 2 −30   Hint, Solution Exercise 15.10 1. Consider the system x = Ax =   1 1 1 2 1 −1 −3 2 4   x. (15.2) 526
(a) Show that $\lambda = 2$ is an eigenvalue of multiplicity 3 of the coefficient matrix $A$, and that there is only one corresponding eigenvector, namely
$$\xi^{(1)} = \begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix}.$$
(b) Using the information in part (a), write down one solution $\mathbf{x}^{(1)}(t)$ of the system (15.2). There is no other solution of a purely exponential form $\mathbf{x} = \xi e^{\lambda t}$.
(c) To find a second solution use the form $\mathbf{x} = \xi t e^{2t} + \eta e^{2t}$, and find appropriate vectors $\xi$ and $\eta$. This gives a solution of the system (15.2) which is independent of the one obtained in part (b).
(d) To find a third linearly independent solution use the form $\mathbf{x} = \xi(t^2/2)e^{2t} + \eta t e^{2t} + \zeta e^{2t}$. Show that $\xi$, $\eta$ and $\zeta$ satisfy the equations
$$(A - 2I)\xi = 0, \quad (A - 2I)\eta = \xi, \quad (A - 2I)\zeta = \eta.$$
The first two equations can be taken to coincide with those obtained in part (c). Solve the third equation, and write down a third independent solution of the system (15.2).
2. Consider the system
$$\mathbf{x}' = A\mathbf{x} = \begin{pmatrix} 5 & -3 & -2 \\ 8 & -5 & -4 \\ -4 & 3 & 3 \end{pmatrix}\mathbf{x}. \tag{15.3}$$
(a) Show that $\lambda = 1$ is an eigenvalue of multiplicity 3 of the coefficient matrix $A$, and that there are only two linearly independent eigenvectors, which we may take as
$$\xi^{(1)} = \begin{pmatrix} 1 \\ 0 \\ 2 \end{pmatrix}, \qquad \xi^{(2)} = \begin{pmatrix} 0 \\ 2 \\ -3 \end{pmatrix}.$$
Find two independent solutions of equation (15.3).
(b) To find a third solution use the form $\mathbf{x} = \xi t e^{t} + \eta e^{t}$; then show that $\xi$ and $\eta$ must satisfy
$$(A - I)\xi = 0, \quad (A - I)\eta = \xi.$$
Show that the most general solution of the first of these equations is $\xi = c_1\xi^{(1)} + c_2\xi^{(2)}$, where $c_1$ and $c_2$ are arbitrary constants. Show that, in order to solve the second of these equations, it is necessary to take $c_1 = c_2$. Obtain such a vector $\eta$, and use it to obtain a third independent solution of the system (15.3).
Hint, Solution
Exercise 15.11 (mathematica/ode/systems/systems.nb)
Consider the system of ODE's
$$\frac{d\mathbf{x}}{dt} = A\mathbf{x}, \quad \mathbf{x}(0) = \mathbf{x}_0,$$
where $A$ is the constant $3 \times 3$ matrix
$$A = \begin{pmatrix} 1 & 1 & 1 \\ 2 & 1 & -1 \\ -8 & -5 & -3 \end{pmatrix}.$$
1. Find the eigenvalues and associated eigenvectors of $A$. [HINT: notice that $\lambda = -1$ is a root of the characteristic polynomial of $A$.]
  • 548. 2. Use the results from part (a) to construct eAt and therefore the solution to the initial value problem above. 3. Use the results of part (a) to find the general solution to dx dt = 1 t Ax. Hint, Solution Exercise 15.12 (mathematica/ode/systems/systems.nb) 1. Find the general solution to dx dt = Ax where A =   2 0 1 0 2 0 0 1 3   2. Solve dx dt = Ax + g(t), x(0) = 0 using A from part (a). Hint, Solution Exercise 15.13 Let A be an n × n matrix of constants. The system dx dt = 1 t Ax, (15.4) is analogous to the Euler equation. 1. Verify that when A is a 2×2 constant matrix, elimination of (15.4) yields a second order Euler differential equation. 2. Now assume that A is an n × n matrix of constants. Show that this system, in analogy with the Euler equation has solutions of the form x = atλ where a is a constant vector provided a and λ satisfy certain conditions. 3. Based on your experience with the treatment of multiple roots in the solution of constant coefficient systems, what form will the general solution of (15.4) take if λ is a multiple eigenvalue in the eigenvalue problem derived in part (b)? 4. Verify your prediction by deriving the general solution for the system dx dt = 1 t 1 0 1 1 x. Hint, Solution 528
  • 549. 15.6 Hints Hint 15.1 Hint 15.2 Hint 15.3 Hint 15.4 Hint 15.5 Hint 15.6 Hint 15.7 Hint 15.8 Hint 15.9 Hint 15.10 Hint 15.11 Hint 15.12 Hint 15.13 529
  • 550. 15.7 Solutions Solution 15.1 We consider an initial value problem. x = Ax ≡ 1 −5 1 −3 x, x(0) = x0 ≡ 1 1 The matrix has the distinct eigenvalues λ1 = −1−ı, λ2 = −1+ı. The corresponding eigenvectors are x1 = 2 − ı 1 , x2 = 2 + ı 1 . The general solution of the system of differential equations is x = c1 2 − ı 1 e(−1−ı)t +c2 2 + ı 1 e(−1+ı)t . We can take the real and imaginary parts of either of these solution to obtain real-valued solutions. 2 + ı 1 e(−1+ı)t = 2 cos(t) − sin(t) cos(t) e−t +ı cos(t) + 2 sin(t) sin(t) e−t x = c1 2 cos(t) − sin(t) cos(t) e−t +c2 cos(t) + 2 sin(t) sin(t) e−t We apply the initial condition to determine the constants. 2 1 1 0 c1 c2 = 1 1 c1 = 1, c2 = −1 The solution subject to the initial condition is x = cos(t) − 3 sin(t) cos(t) − sin(t) e−t . Plotted in the phase plane, the solution spirals in to the origin as t increases. Both coordinates tend to zero as t → ∞. Solution 15.2 We consider an initial value problem. x = Ax ≡   −3 0 2 1 −1 0 −2 −1 0   x, x(0) = x0 ≡   1 0 0   The matrix has the distinct eigenvalues λ1 = −2, λ2 = −1 − ı √ 2, λ3 = −1 + ı √ 2. The corresponding eigenvectors are x1 =   2 −2 1   , x2 =   2 + ı √ 2 −1 + ı √ 2 3   , x3 =   2 − ı √ 2 −1 − ı √ 2 3   . The general solution of the system of differential equations is x = c1   2 −2 1   e−2t +c2   2 + ı √ 2 −1 + ı √ 2 3   e(−1−ı √ 2)t +c3   2 − ı √ 2 −1 − ı √ 2 3   e(−1+ı √ 2)t . 530
We can take the real and imaginary parts of the second or third solution to obtain two real-valued solutions.
$$\begin{pmatrix} 2 + \imath\sqrt{2} \\ -1 + \imath\sqrt{2} \\ 3 \end{pmatrix} e^{(-1-\imath\sqrt{2})t} = \begin{pmatrix} 2\cos(\sqrt{2}t) + \sqrt{2}\sin(\sqrt{2}t) \\ -\cos(\sqrt{2}t) + \sqrt{2}\sin(\sqrt{2}t) \\ 3\cos(\sqrt{2}t) \end{pmatrix} e^{-t} + \imath\begin{pmatrix} \sqrt{2}\cos(\sqrt{2}t) - 2\sin(\sqrt{2}t) \\ \sqrt{2}\cos(\sqrt{2}t) + \sin(\sqrt{2}t) \\ -3\sin(\sqrt{2}t) \end{pmatrix} e^{-t}$$
$$\mathbf{x} = c_1\begin{pmatrix} 2 \\ -2 \\ 1 \end{pmatrix} e^{-2t} + c_2\begin{pmatrix} 2\cos(\sqrt{2}t) + \sqrt{2}\sin(\sqrt{2}t) \\ -\cos(\sqrt{2}t) + \sqrt{2}\sin(\sqrt{2}t) \\ 3\cos(\sqrt{2}t) \end{pmatrix} e^{-t} + c_3\begin{pmatrix} \sqrt{2}\cos(\sqrt{2}t) - 2\sin(\sqrt{2}t) \\ \sqrt{2}\cos(\sqrt{2}t) + \sin(\sqrt{2}t) \\ -3\sin(\sqrt{2}t) \end{pmatrix} e^{-t}$$
We apply the initial condition to determine the constants.
$$\begin{pmatrix} 2 & 2 & \sqrt{2} \\ -2 & -1 & \sqrt{2} \\ 1 & 3 & 0 \end{pmatrix}\begin{pmatrix} c_1 \\ c_2 \\ c_3 \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}, \qquad c_1 = \frac{1}{3}, \quad c_2 = -\frac{1}{9}, \quad c_3 = \frac{5}{9\sqrt{2}}$$
The solution subject to the initial condition is
$$\mathbf{x} = \frac{1}{3}\begin{pmatrix} 2 \\ -2 \\ 1 \end{pmatrix} e^{-2t} + \frac{1}{6}\begin{pmatrix} 2\cos(\sqrt{2}t) - 4\sqrt{2}\sin(\sqrt{2}t) \\ 4\cos(\sqrt{2}t) + \sqrt{2}\sin(\sqrt{2}t) \\ -2\cos(\sqrt{2}t) - 5\sqrt{2}\sin(\sqrt{2}t) \end{pmatrix} e^{-t}.$$
Since the eigenvalues all have negative real part, every coordinate tends to zero as $t \to \infty$; plotted in phase space, the solution spirals in to the origin.
Solution 15.3
Homogeneous Solution, Method 1. We designate the inhomogeneous system of differential equations
$$\mathbf{x}' = A\mathbf{x} + \mathbf{g}(t).$$
First we find homogeneous solutions. The characteristic equation for the matrix is
$$\chi(\lambda) = \begin{vmatrix} 4-\lambda & -2 \\ 8 & -4-\lambda \end{vmatrix} = \lambda^2 = 0.$$
$\lambda = 0$ is an eigenvalue of multiplicity 2. The eigenvectors satisfy
$$\begin{pmatrix} 4 & -2 \\ 8 & -4 \end{pmatrix}\begin{pmatrix} \xi_1 \\ \xi_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.$$
Thus we see that there is only one linearly independent eigenvector. We choose
$$\xi = \begin{pmatrix} 1 \\ 2 \end{pmatrix}.$$
One homogeneous solution is then
$$\mathbf{x}_1 = \begin{pmatrix} 1 \\ 2 \end{pmatrix} e^{0t} = \begin{pmatrix} 1 \\ 2 \end{pmatrix}.$$
We look for a second homogeneous solution of the form $\mathbf{x}_2 = \xi t + \eta$. We substitute this into the homogeneous equation:
$$\mathbf{x}_2' = A\mathbf{x}_2, \qquad \xi = A(\xi t + \eta).$$
  • 552. We see that xi and η satisfy Axi = 0, Aη = xi. We choose xi to be the eigenvector that we found previously. The equation for η is then 4 −2 8 −4 η1 η2 = 1 2 . η is determined up to an additive multiple of xi. We choose η = 0 −1/2 . Thus a second homogeneous solution is x2 = 1 2 t + 0 −1/2 . The general homogeneous solution of the system is xh = c1 1 2 + c2 t 2t − 1/2 We can write this in matrix notation using the fundamental matrix Ψ(t). xh = Ψ(t)c = 1 t 2 2t − 1/2 c1 c2 Homogeneous Solution, Method 2. The similarity transform c−1 Ac with c = 1 0 2 −1/2 will convert the matrix A = 4 −2 8 −4 to Jordan canonical form. We make the change of variables, y = 1 0 2 −1/2 x. The homogeneous system becomes dy dt = 1 0 4 −2 4 −2 8 −4 1 0 2 −1/2 y y1 y2 = 0 1 0 0 y1 y2 The equation for y2 is y2 = 0. y2 = c2 The equation for y1 becomes y1 = c2. y1 = c1 + c2t 532
  • 553. The solution for y is then y = c1 1 0 + c2 t 1 . We multiply this by c to obtain the homogeneous solution for x. xh = c1 1 2 + c2 t 2t − 1/2 Inhomogeneous Solution. By the method of variation of parameters, a particular solution is xp = Ψ(t) Ψ−1 (t)g(t) dt. xp = 1 t 2 2t − 1/2 1 − 4t 2t 4 −2 t−3 −t−2 dt xp = 1 t 2 2t − 1/2 −2t−1 − 4t−2 + t−3 2t−2 + 4t−3 dt xp = 1 t 2 2t − 1/2 −2 log t + 4t−1 − 1 2 t−2 −2t−1 − 2t−2 xp = −2 − 2 log t + 2t−1 − 1 2 t−2 −4 − 4 log t + 5t−1 By adding 2 times our first homogeneous solution, we obtain xp = −2 log t + 2t−1 − 1 2 t−2 −4 log t + 5t−1 The general solution of the system of differential equations is x = c1 1 2 + c2 t 2t − 1/2 + −2 log t + 2t−1 − 1 2 t−2 −4 log t + 5t−1 Solution 15.4 We consider an initial value problem. x = Ax ≡ −2 1 −5 4 x, x(0) = x0 ≡ 1 3 The Jordan canonical form of the matrix is J = −1 0 0 3 . The solution of the initial value problem is x = eAt x0. x = eAt x0 = S eJt S−1 x0 = 1 1 1 5 e−t 0 0 e3t 1 4 5 −1 −1 1 1 3 = 1 2 e−t + e3t e−t +5 e3t x = 1 2 1 1 e−t + 1 2 1 5 e3t 533
  • 554. Solution 15.5 We consider an initial value problem. x = Ax ≡   1 1 2 0 2 2 −1 1 3   x, x(0) = x0 ≡   2 0 1   The Jordan canonical form of the matrix is J =   1 0 0 0 2 0 0 0 3   . The solution of the initial value problem is x = eAt x0. x = eAt x0 = S eJt S−1 x0 =   0 1 2 −2 1 2 1 0 1     et 0 0 0 e2t 0 0 0 e3t   1 2   1 −1 0 4 −2 −4 −1 1 2     2 0 1   =   2 e2t −2 et +2 e2t et   x =   0 −2 1   et +   2 2 0   e2t . Solution 15.6 We consider an initial value problem. x = Ax ≡ 1 −5 1 −3 x, x(0) = x0 ≡ 1 1 The Jordan canonical form of the matrix is J = −1 − ı 0 0 −1 + ı . The solution of the initial value problem is x = eAt x0. x = eAt x0 = S eJt S−1 x0 = 2 − ı 2 + ı 1 1 e(−1−ı)t 0 0 e(−1+ı)t 1 2 ı 1 − ı2 −ı 1 + ı2 1 1 = (cos(t) − 3 sin(t)) e−t (cos(t) − sin(t)) e−t x = 1 1 e−t cos(t) − 3 1 e−t sin(t) Solution 15.7 We consider an initial value problem. x = Ax ≡   −3 0 2 1 −1 0 −2 −1 0   x, x(0) = x0 ≡   1 0 0   534
  • 555. The Jordan canonical form of the matrix is J =   −2 0 0 0 −1 − ı √ 2 0 0 0 −1 + ı √ 2   . The solution of the initial value problem is x = eAt x0. x = eAt x0 = S eJt S−1 x0 = 1 3   6 2 + ı √ 2 2 − ı √ 2 −6 −1 + ı √ 2 −1 − ı √ 2 3 3 3     e−2t 0 0 0 e(−1−ı √ 2)t 0 0 0 e(−1+ı √ 2)t   1 6   2 −2 −2 −1 − ı5 √ 2/2 1 − ı2 √ 2 4 + ı √ 2 −1 + ı5 √ 2/2 1 + ı2 √ 2 4 − ı √ 2     1 0 0   x = 1 3   2 −2 1   e−2t + 1 6   2 cos( √ 2t) − 4 √ 2 sin( √ 2t) 4 cos( √ 2t) + √ 2 sin( √ 2t) −2 cos( √ 2t) − 5 √ 2 sin( √ 2t)   e−t . Solution 15.8 We consider an initial value problem. x = Ax ≡ 1 −4 4 −7 x, x(0) = x0 ≡ 3 2 Method 1. Find Homogeneous Solutions. The matrix has the double eigenvalue λ1 = λ2 = −3. There is only one corresponding eigenvector. We compute a chain of generalized eigenvectors. (A + 3I)2 x2 = 0 0x2 = 0 x2 = 1 0 (A + 3I)x2 = x1 x1 = 4 4 The general solution of the system of differential equations is x = c1 1 1 e−3t +c2 4 4 t + 1 0 e−3t . We apply the initial condition to determine the constants. 1 1 1 0 c1 c2 = 3 2 c1 = 2, c2 = 1 The solution subject to the initial condition is x = 3 + 4t 2 + 4t e−3t . 535
  • 556. Both coordinates tend to zero as t → ∞. Method 2. Use the Exponential Matrix. The Jordan canonical form of the matrix is J = −3 1 0 −3 . The solution of the initial value problem is x = eAt x0. x = eAt x0 = S eJt S−1 x0 = 1 1/4 1 0 e−3t t e−3t 0 e−3t 0 1 4 −4 3 2 x = 3 + 4t 2 + 4t e−3t . Solution 15.9 We consider an initial value problem. x = Ax ≡   −1 0 0 −4 1 0 3 6 2   x, x(0) = x0 ≡   −1 2 −30   Method 1. Find Homogeneous Solutions. The matrix has the distinct eigenvalues λ1 = −1, λ2 = 1, λ3 = 2. The corresponding eigenvectors are x1 =   −1 −2 5   , x2 =   0 −1 6   , x3 =   0 0 1   . The general solution of the system of differential equations is x = c1   −1 −2 5   e−t +c2   0 −1 6   et +c3   0 0 1   e2t . We apply the initial condition to determine the constants.   −1 0 0 −2 −1 0 5 6 1     c1 c2 c3   =   −1 2 −30   c1 = 1, c2 = −4, c3 = −11 The solution subject to the initial condition is x =   −1 −2 5   e−t −4   0 −1 6   et −11   0 0 1   e2t . As t → ∞, the first coordinate vanishes, the second coordinate tends to ∞ and the third coordinate tends to −∞ Method 2. Use the Exponential Matrix. The Jordan canonical form of the matrix is J =   −1 0 0 0 1 0 0 0 2   . 536
  • 557. The solution of the initial value problem is x = eAt x0. x = eAt x0 = S eJt S−1 x0 =   −1 0 0 −2 −1 0 5 6 1     e−t 0 0 0 et 0 0 0 e2t   1 2   −1 0 0 2 −1 0 −7 6 1     −1 2 −30   x =   −1 −2 5   e−t −4   0 −1 6   et −11   0 0 1   e2t . Solution 15.10 1. (a) We compute the eigenvalues of the matrix. χ(λ) = 1 − λ 1 1 2 1 − λ −1 −3 2 4 − λ = −λ3 + 6λ2 − 12λ + 8 = −(λ − 2)3 λ = 2 is an eigenvalue of multiplicity 3. The rank of the null space of A − 2I is 1. (The first two rows are linearly independent, but the third is a linear combination of the first two.) A − 2I =   −1 1 1 2 −1 −1 −3 2 2   Thus there is only one eigenvector.   −1 1 1 2 −1 −1 −3 2 2     ξ1 ξ2 ξ3   = 0 xi(1) =   0 1 −1   (b) One solution of the system of differential equations is x(1) =   0 1 −1   e2t . (c) We substitute the form x = xit e2t +η e2t into the differential equation. x = Ax xi e2t +2xit e2t +2η e2t = Axit e2t +Aη e2t (A − 2I)xi = 0, (A − 2I)η = xi We already have a solution of the first equation, we need the generalized eigenvector η. Note that η is only determined up to a constant times xi. Thus we look for the solution 537
  • 558. whose second component vanishes to simplify the algebra. (A − 2I)η = xi   −1 1 1 2 −1 −1 −3 2 2     η1 0 η3   =   0 1 −1   −η1 + η3 = 0, 2η1 − η3 = 1, −3η1 + 2η3 = −1 η =   1 0 1   A second linearly independent solution is x(2) =   0 1 −1   t e2t +   1 0 1   e2t . (d) To find a third solution we substutite the form x = xi(t2 /2) e2t +ηt e2t +ζ e2t into the differential equation. x = Ax 2xi(t2 /2) e2t +(xi + 2η)t e2t +(η + 2ζ) e2t = Axi(t2 /2) e2t +Aηt e2t +Aζ e2t (A − 2I)xi = 0, (A − 2I)η = xi, (A − 2I)ζ = η We have already solved the first two equations, we need the generalized eigenvector ζ. Note that ζ is only determined up to a constant times xi. Thus we look for the solution whose second component vanishes to simplify the algebra. (A − 2I)ζ = η   −1 1 1 2 −1 −1 −3 2 2     ζ1 0 ζ3   =   1 0 1   −ζ1 + ζ3 = 1, 2ζ1 − ζ3 = 0, −3ζ1 + 2ζ3 = 1 ζ =   1 0 2   A third linearly independent solution is x(3) =   0 1 −1   (t2 /2) e2t +   1 0 1   t e2t +   1 0 2   e2t 2. (a) We compute the eigenvalues of the matrix. χ(λ) = 5 − λ −3 −2 8 −5 − λ −4 −4 3 3 − λ = −λ3 + 3λ2 − 3λ + 1 = −(λ − 1)3 λ = 1 is an eigenvalue of multiplicity 3. The rank of the null space of A − I is 2. (The second and third rows are multiples of the first.) A − I =   4 −3 −2 8 −6 −4 −4 3 2   538
  • 559. Thus there are two eigenvectors.   4 −3 −2 8 −6 −4 −4 3 2     ξ1 ξ2 ξ3   = 0 xi(1) =   1 0 2   , xi(2) =   0 2 −3   Two linearly independent solutions of the differential equation are x(1) =   1 0 2   et , x(2) =   0 2 −3   et . (b) We substitute the form x = xit et +η et into the differential equation. x = Ax xi et +xit et +η et = Axit et +Aη et (A − I)xi = 0, (A − I)η = xi The general solution of the first equation is a linear combination of the two solutions we found in the previous part. xi = c1xi1 + c2xi2 Now we find the generalized eigenvector, η. Note that η is only determined up to a linear combination of xi1 and xi2. Thus we can take the first two components of η to be zero.   4 −3 −2 8 −6 −4 −4 3 2     0 0 η3   = c1   1 0 2   + c2   0 2 −3   −2η3 = c1, −4η3 = 2c2, 2η3 = 2c1 − 3c2 c1 = c2, η3 = − c1 2 We see that we must take c1 = c2 in order to obtain a solution. We choose c1 = c2 = 2 A third linearly independent solution of the differential equation is x(3) =   2 4 −2   t et +   0 0 −1   et . Solution 15.11 1. The characteristic polynomial of the matrix is χ(λ) = 1 − λ 1 1 2 1 − λ −1 −8 −5 −3 − λ = (1 − λ)2 (−3 − λ) + 8 − 10 − 5(1 − λ) − 2(−3 − λ) − 8(1 − λ) = −λ3 − λ2 + 4λ + 4 = −(λ + 2)(λ + 1)(λ − 2) Thus we see that the eigenvalues are λ = −2, −1, 2. The eigenvectors xi satisfy (A − λI)xi = 0. 539
For $\lambda = -2$, we have $(A + 2I)\xi = 0$:
$$\begin{pmatrix} 3 & 1 & 1 \\ 2 & 3 & -1 \\ -8 & -5 & -1 \end{pmatrix}\begin{pmatrix} \xi_1 \\ \xi_2 \\ \xi_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}.$$
If we take $\xi_3 = 1$ then the first two rows give us the system
$$\begin{pmatrix} 3 & 1 \\ 2 & 3 \end{pmatrix}\begin{pmatrix} \xi_1 \\ \xi_2 \end{pmatrix} = \begin{pmatrix} -1 \\ 1 \end{pmatrix},$$
which has the solution $\xi_1 = -4/7$, $\xi_2 = 5/7$. For the first eigenvector we choose:
$$\xi = \begin{pmatrix} -4 \\ 5 \\ 7 \end{pmatrix}.$$
For $\lambda = -1$, we have $(A + I)\xi = 0$:
$$\begin{pmatrix} 2 & 1 & 1 \\ 2 & 2 & -1 \\ -8 & -5 & -2 \end{pmatrix}\begin{pmatrix} \xi_1 \\ \xi_2 \\ \xi_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}.$$
If we take $\xi_3 = 1$ then the first two rows give us the system
$$\begin{pmatrix} 2 & 1 \\ 2 & 2 \end{pmatrix}\begin{pmatrix} \xi_1 \\ \xi_2 \end{pmatrix} = \begin{pmatrix} -1 \\ 1 \end{pmatrix},$$
which has the solution $\xi_1 = -3/2$, $\xi_2 = 2$. For the second eigenvector we choose:
$$\xi = \begin{pmatrix} -3 \\ 4 \\ 2 \end{pmatrix}.$$
For $\lambda = 2$, we have $(A - 2I)\xi = 0$:
$$\begin{pmatrix} -1 & 1 & 1 \\ 2 & -1 & -1 \\ -8 & -5 & -5 \end{pmatrix}\begin{pmatrix} \xi_1 \\ \xi_2 \\ \xi_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}.$$
If we take $\xi_3 = 1$ then the first two rows give us the system
$$\begin{pmatrix} -1 & 1 \\ 2 & -1 \end{pmatrix}\begin{pmatrix} \xi_1 \\ \xi_2 \end{pmatrix} = \begin{pmatrix} -1 \\ 1 \end{pmatrix},$$
which has the solution $\xi_1 = 0$, $\xi_2 = -1$. For the third eigenvector we choose:
$$\xi = \begin{pmatrix} 0 \\ -1 \\ 1 \end{pmatrix}.$$
In summary, the eigenvalues and eigenvectors are
$$\lambda = \{-2, -1, 2\}, \qquad \xi = \left\{ \begin{pmatrix} -4 \\ 5 \\ 7 \end{pmatrix}, \begin{pmatrix} -3 \\ 4 \\ 2 \end{pmatrix}, \begin{pmatrix} 0 \\ -1 \\ 1 \end{pmatrix} \right\}.$$
  • 561. 2. The matrix is diagonalized with the similarity transformation J = S−1 AS, where S is the matrix with eigenvectors as columns: S =   −4 −3 0 5 4 −1 7 2 1   The matrix exponential, eAt is given by eA = S eJ S−1 . eA =   −4 −3 0 5 4 −1 7 2 1     e−2t 0 0 0 e−t 0 0 0 e2t   1 12   6 3 3 −12 −4 −4 −18 −13 −1   . eAt =   −2 e−2t +3 e−t − e−2t + e−t − e−2t + e−t 5 e−2t −8 e−t +3 et 2 15 e−2t −16 e−t +13 et 12 15 e−2t −16 e−t + et 12 7 e−2t −4 e−t −3 et 2 21 e−2t −8 e−t −13 et 12 21 e−2t −8 e−t − et 12   The solution of the initial value problem is eAt x0. 3. The general solution of the Euler equation is c1   −4 5 7   t−2 + c2   −3 4 2   t−1 + c3   0 −1 1   t2 . We could also write the solution as x = tA c ≡ eA log t c, Solution 15.12 1. The characteristic polynomial of the matrix is χ(λ) = 2 − λ 0 1 0 2 − λ 0 0 1 3 − λ = (2 − λ)2 (3 − λ) Thus we see that the eigenvalues are λ = 2, 2, 3. Consider A − 2I =   0 0 1 0 0 0 0 1 3   . Since rank(nullspace(A − 2I)) = 1 there is one eigenvector and one generalized eigenvector of rank two for λ = 2. The generalized eigenvector of rank two satisfies (A − 2I)2 xi2 = 0   0 1 1 0 0 0 0 1 1   xi2 = 0 541
  • 562. We choose the solution xi2 =   0 −1 1   . The eigenvector for λ = 2 is xi1 = (A − 2I)xi2 =   1 0 0   . The eigenvector for λ = 3 satisfies (A − 3I)2 xi = 0   −1 0 1 0 −1 0 0 1 0   xi = 0 We choose the solution xi =   1 0 1   . The eigenvalues and generalized eigenvectors are λ = {2, 2, 3}, xi =      1 0 0   ,   0 −1 1   ,   1 0 1      . The matrix of eigenvectors and its inverse is S =   1 0 1 0 −1 0 0 1 1   , S−1 =   1 −1 −1 0 −1 0 0 1 1   . The Jordan canonical form of the matrix, which satisfies J = S−1 AS is J =   2 1 0 0 2 0 0 0 3   Recall that the function of a Jordan block is: f         λ 1 0 0 0 λ 1 0 0 0 λ 1 0 0 0 λ         =      f(λ) f (λ) 1! f (λ) 2! f (λ) 3! 0 f(λ) f (λ) 1! f (λ) 2! 0 0 f(λ) f (λ) 1! 0 0 0 f(λ)      , and that the function of a matrix in Jordan canonical form is f         J1 0 0 0 0 J2 0 0 0 0 J3 0 0 0 0 J4         =     f(J1) 0 0 0 0 f(J2) 0 0 0 0 f(J3) 0 0 0 0 f(J4)     . We want to compute eJt so we consider the function f(λ) = eλt , which has the derivative f (λ) = t eλt . Thus we see that eJt =   e2t t e2t 0 0 e2t 0 0 0 e3t   542
  • 563. The exponential matrix is eAt = S eJt S−1 , eAt =   e2t −(1 + t) e2t + e3t − e2t + e3t 0 e2t 0 0 − e2t + e3t e3t   . The general solution of the homogeneous differential equation is x = eAt c. 2. The solution of the inhomogeneous differential equation subject to the initial condition is x = eAt 0 + eAt t 0 e−Aτ g(τ) dτ x = eAt t 0 e−Aτ g(τ) dτ Solution 15.13 1. dx dt = 1 t Ax t x1 x2 = a b c d x1 x2 The first component of this equation is tx1 = ax1 + bx2. We differentiate and multiply by t to obtain a second order coupled equation for x1. We use (15.4) to eliminate the dependence on x2. t2 x1 + tx1 = atx1 + btx2 t2 x1 + (1 − a)tx1 = b(cx1 + dx2) t2 x1 + (1 − a)tx1 − bcx1 = d(tx1 − ax1) t2 x1 + (1 − a − d)tx1 + (ad − bc)x1 = 0 Thus we see that x1 satisfies a second order, Euler equation. By symmetry we see that x2 satisfies, t2 x2 + (1 − b − c)tx2 + (bc − ad)x2 = 0. 2. We substitute x = atλ into (15.4). λatλ−1 = 1 t Aatλ Aa = λa Thus we see that x = atλ is a solution if λ is an eigenvalue of A with eigenvector a. 3. Suppose that λ = α is an eigenvalue of multiplicity 2. If λ = α has two linearly independent eigenvectors, a and b then atα and btα are linearly independent solutions. If λ = α has only one linearly independent eigenvector, a, then atα is a solution. We look for a second solution of the form x = xitα log t + ηtα . 543
  • 564. Substituting this into the differential equation yields αxitα−1 log t + xitα−1 + αηtα−1 = Axitα−1 log t + Aηtα−1 We equate coefficients of tα−1 log t and tα−1 to determine xi and η. (A − αI)xi = 0, (A − αI)η = xi These equations have solutions because λ = α has generalized eigenvectors of first and second order. Note that the change of independent variable τ = log t, y(τ) = x(t), will transform (15.4) into a constant coefficient system. dy dτ = Ay Thus all the methods for solving constant coefficient systems carry over directly to solving (15.4). In the case of eigenvalues with multiplicity greater than one, we will have solutions of the form, xitα , xitα log t + ηtα , xitα (log t) 2 + ηtα log t + ζtα , . . . , analogous to the form of the solutions for a constant coefficient system, xi eατ , xiτ eατ +η eατ , xiτ2 eατ +ητ eατ +ζ eατ , . . . . 4. Method 1. Now we consider dx dt = 1 t 1 0 1 1 x. The characteristic polynomial of the matrix is χ(λ) = 1 − λ 0 1 1 − λ = (1 − λ)2 . λ = 1 is an eigenvalue of multiplicity 2. The equation for the associated eigenvectors is 0 0 1 0 ξ1 ξ2 = 0 0 . There is only one linearly independent eigenvector, which we choose to be a = 0 1 . One solution of the differential equation is x1 = 0 1 t. We look for a second solution of the form x2 = at log t + ηt. η satisfies the equation (A − I)η = 0 0 1 0 η = 0 1 . The solution is determined only up to an additive multiple of a. We choose η = 1 0 . 544
  • 565. Thus a second linearly independent solution is x2 = 0 1 t log t + 1 0 t. The general solution of the differential equation is x = c1 0 1 t + c2 0 1 t log t + 1 0 t . Method 2. Note that the matrix is lower triangular. x1 x2 = 1 t 1 0 1 1 x1 x2 (15.5) We have an uncoupled equation for x1. x1 = 1 t x1 x1 = c1t By substituting the solution for x1 into (15.5), we obtain an uncoupled equation for x2. x2 = 1 t (c1t + x2) x2 − 1 t x2 = c1 1 t x2 = c1 t 1 t x2 = c1 log t + c2 x2 = c1t log t + c2t Thus the solution of the system is x = c1t c1t log t + c2t , x = c1 t t log t + c2 0 t , which is equivalent to the solution we obtained previously. 545
Chapter 16

Theory of Linear Ordinary Differential Equations

A little partyin' is good for the soul. -Matt Metz

16.1 Exact Equations

Exercise 16.1
Consider a second order, linear, homogeneous differential equation:
$$P(x)y'' + Q(x)y' + R(x)y = 0. \tag{16.1}$$
Show that $P'' - Q' + R = 0$ is a necessary and sufficient condition for this equation to be exact.
Hint, Solution
Exercise 16.2
Determine an equation for the integrating factor $\mu(x)$ for Equation 16.1.
Hint, Solution
Exercise 16.3
Show that
$$y'' + xy' + y = 0$$
is exact. Find the solution.
Hint, Solution
  • 568. 16.2 Nature of Solutions Result 16.2.1 Consider the nth order ordinary differential equation of the form L[y] = dn y dxn + pn−1(x) dn−1 y dxn−1 + · · · + p1(x) dy dx + p0(x)y = f(x). (16.2) If the coefficient functions pn−1(x), . . . , p0(x) and the inhomogeneity f(x) are continuous on some interval a < x < b then the differential equation subject to the conditions, y(x0) = v0, y (x0) = v1, . . . y(n−1) (x0) = vn−1, a < x0 < b, has a unique solution on the interval. Exercise 16.4 On what intervals do the following problems have unique solutions? 1. xy + 3y = x 2. x(x − 1)y + 3xy + 4y = 2 3. ex y + x2 y + y = tan x Hint, Solution Linearity of the Operator. The differential operator L is linear. To verify this, L[cy] = dn dxn (cy) + pn−1(x) dn−1 dxn−1 (cy) + · · · + p1(x) d dx (cy) + p0(x)(cy) = c dn dxn y + cpn−1(x) dn−1 dxn−1 y + · · · + cp1(x) d dx y + cp0(x)y = cL[y] L[y1 + y2] = dn dxn (y1 + y2) + pn−1(x) dn−1 dxn−1 (y1 + y2) + · · · + p1(x) d dx (y1 + y2) + p0(x)(y1 + y2) = dn dxn (y1) + pn−1(x) dn−1 dxn−1 (y1) + · · · + p1(x) d dx (y1) + p0(x)(y1) + dn dxn (y2) + pn−1(x) dn−1 dxn−1 (y2) + · · · + p1(x) d dx (y2) + p0(x)(y2) = L[y1] + L[y2]. Homogeneous Solutions. The general homogeneous equation has the form L[y] = dn y dxn + pn−1(x) dn−1 y dxn−1 + · · · + p1(x) dy dx + p0(x)y = 0. From the linearity of L, we see that if y1 and y2 are solutions to the homogeneous equation then c1y1 + c2y2 is also a solution, (L[c1y1 + c2y2] = 0). On any interval where the coefficient functions are continuous, the nth order linear homogeneous equation has n linearly independent solutions, y1, y2, . . . , yn. (We will study linear independence in Section 16.4.) The general solution to the homogeneous problem is then yh = c1y1 + c2y2 + · · · + cnyn. 548
Particular Solutions. Any function, $y_p$, that satisfies the inhomogeneous equation, $L[y_p] = f(x)$, is called a particular solution or particular integral of the equation. Note that for linear differential equations the particular solution is not unique. If $y_p$ is a particular solution then $y_p + y_h$ is also a particular solution, where $y_h$ is any homogeneous solution.
The general solution to the problem $L[y] = f(x)$ is the sum of a particular solution and a linear combination of the homogeneous solutions:
$$y = y_p + c_1 y_1 + \cdots + c_n y_n.$$
Example 16.2.1 Consider the differential equation $y'' - y' = 1$. You can verify that two homogeneous solutions are $e^x$ and $1$. A particular solution is $-x$. Thus the general solution is
$$y = -x + c_1 e^x + c_2.$$
Exercise 16.5
Suppose you are able to find three linearly independent particular solutions $u_1(x)$, $u_2(x)$ and $u_3(x)$ of the second order linear differential equation $L[y] = f(x)$. What is the general solution?
Hint, Solution
Real-Valued Solutions. If the coefficient functions and the inhomogeneity in Equation 16.2 are real-valued, then the general solution can be written in terms of real-valued functions. Let $y$ be any homogeneous solution (perhaps complex-valued). By taking the complex conjugate of the equation $L[y] = 0$ we show that $\bar{y}$ is a homogeneous solution as well:
$$L[y] = 0$$
$$\overline{L[y]} = 0$$
$$\overline{y^{(n)} + p_{n-1}y^{(n-1)} + \cdots + p_0 y} = 0$$
$$\bar{y}^{(n)} + p_{n-1}\bar{y}^{(n-1)} + \cdots + p_0\bar{y} = 0$$
$$L[\bar{y}] = 0$$
For the same reason, if $y_p$ is a particular solution, then $\overline{y_p}$ is a particular solution as well. Since the real and imaginary parts of a function $y$ are linear combinations of $y$ and $\bar{y}$,
$$\Re(y) = \frac{y + \bar{y}}{2}, \qquad \Im(y) = \frac{y - \bar{y}}{\imath 2},$$
if $y$ is a homogeneous solution then both $\Re(y)$ and $\Im(y)$ are homogeneous solutions. Likewise, if $y_p$ is a particular solution then $\Re(y_p)$ is a particular solution:
$$L[\Re(y_p)] = L\left[\frac{y_p + \overline{y_p}}{2}\right] = \frac{f}{2} + \frac{f}{2} = f.$$
Thus we see that the homogeneous solution, the particular solution and the general solution of a linear differential equation with real-valued coefficients and inhomogeneity can be written in terms of real-valued functions.
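For concreteness, the general solution of Example 16.2.1 can be checked with a computer algebra system. A minimal sketch assuming sympy; dsolve returns the particular-plus-homogeneous form discussed above, possibly with the constants grouped differently:

    from sympy import Function, dsolve, symbols, Eq

    x = symbols('x')
    y = Function('y')

    # y'' - y' = 1: homogeneous solutions e^x and 1, particular solution -x.
    print(dsolve(Eq(y(x).diff(x, 2) - y(x).diff(x), 1), y(x)))
    # Eq(y(x), C1 + C2*exp(x) - x)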
Result 16.2.2 The differential equation
$$L[y] = \frac{d^n y}{dx^n} + p_{n-1}(x)\frac{d^{n-1}y}{dx^{n-1}} + \cdots + p_1(x)\frac{dy}{dx} + p_0(x)y = f(x)$$
with continuous coefficients and inhomogeneity has a general solution of the form
$$y = y_p + c_1 y_1 + \cdots + c_n y_n$$
where $y_p$ is a particular solution, $L[y_p] = f$, and the $y_k$ are linearly independent homogeneous solutions, $L[y_k] = 0$. If the coefficient functions and inhomogeneity are real-valued, then the general solution can be written in terms of real-valued functions.

16.3 Transformation to a First Order System

Any linear differential equation can be put in the form of a system of first order differential equations. Consider
$$y^{(n)} + p_{n-1}y^{(n-1)} + \cdots + p_0 y = f(x).$$
We introduce the functions
$$y_1 = y, \quad y_2 = y', \quad \dots, \quad y_n = y^{(n-1)}.$$
The differential equation is equivalent to the system
$$y_1' = y_2$$
$$y_2' = y_3$$
$$\vdots$$
$$y_n' = f(x) - p_{n-1}y_n - \cdots - p_0 y_1.$$
The first order system is more useful when numerically solving the differential equation.
Example 16.3.1 Consider the differential equation
$$y'' + x^2 y' + \cos x\; y = \sin x.$$
The corresponding system of first order equations is
$$y_1' = y_2$$
$$y_2' = \sin x - x^2 y_2 - \cos x\; y_1.$$

16.4 The Wronskian

16.4.1 Derivative of a Determinant.

Before investigating the Wronskian, we will need a preliminary result from matrix theory. Consider an $n \times n$ matrix $A$ whose elements $a_{ij}(x)$ are functions of $x$. We will denote the determinant by $\Delta[A(x)]$. We then have the following theorem.
Result 16.4.1 Let $a_{ij}(x)$, the elements of the matrix $A$, be differentiable functions of $x$. Then
$$\frac{d}{dx}\Delta[A(x)] = \sum_{k=1}^{n}\Delta_k[A(x)],$$
where $\Delta_k[A(x)]$ is the determinant of the matrix $A$ with the $k^{\text{th}}$ row replaced by the derivative of the $k^{\text{th}}$ row.

Example 16.4.1 Consider the matrix
$$A(x) = \begin{pmatrix} x & x^2 \\ x^2 & x^4 \end{pmatrix}.$$
The determinant is $x^5 - x^4$, thus the derivative of the determinant is $5x^4 - 4x^3$. To check the theorem,
$$\frac{d}{dx}\Delta[A(x)] = \frac{d}{dx}\begin{vmatrix} x & x^2 \\ x^2 & x^4 \end{vmatrix} = \begin{vmatrix} 1 & 2x \\ x^2 & x^4 \end{vmatrix} + \begin{vmatrix} x & x^2 \\ 2x & 4x^3 \end{vmatrix} = x^4 - 2x^3 + 4x^4 - 2x^3 = 5x^4 - 4x^3.$$

16.4.2 The Wronskian of a Set of Functions.

A set of functions $\{y_1, y_2, \dots, y_n\}$ is linearly dependent on an interval if there are constants $c_1, \dots, c_n$, not all zero, such that
$$c_1 y_1 + c_2 y_2 + \cdots + c_n y_n = 0 \tag{16.3}$$
identically on the interval. The set is linearly independent if all of the constants must be zero to satisfy $c_1 y_1 + \cdots + c_n y_n = 0$ on the interval.
Consider a set of functions $\{y_1, y_2, \dots, y_n\}$ that are linearly dependent on a given interval and $n-1$ times differentiable. There is a set of constants, not all zero, that satisfies Equation 16.3. Differentiating Equation 16.3 $n-1$ times gives the equations
$$c_1 y_1' + c_2 y_2' + \cdots + c_n y_n' = 0,$$
$$c_1 y_1'' + c_2 y_2'' + \cdots + c_n y_n'' = 0,$$
$$\cdots$$
$$c_1 y_1^{(n-1)} + c_2 y_2^{(n-1)} + \cdots + c_n y_n^{(n-1)} = 0.$$
We could write the problem to find the constants as
$$\begin{pmatrix} y_1 & y_2 & \dots & y_n \\ y_1' & y_2' & \dots & y_n' \\ y_1'' & y_2'' & \dots & y_n'' \\ \vdots & \vdots & & \vdots \\ y_1^{(n-1)} & y_2^{(n-1)} & \dots & y_n^{(n-1)} \end{pmatrix}\begin{pmatrix} c_1 \\ c_2 \\ c_3 \\ \vdots \\ c_n \end{pmatrix} = 0.$$
From linear algebra, we know that this equation has a solution for a nonzero constant vector only if the determinant of the matrix is zero. Here we define the Wronskian, $W(x)$, of a set of functions:
$$W(x) = \begin{vmatrix} y_1 & y_2 & \dots & y_n \\ y_1' & y_2' & \dots & y_n' \\ \vdots & \vdots & & \vdots \\ y_1^{(n-1)} & y_2^{(n-1)} & \dots & y_n^{(n-1)} \end{vmatrix}.$$
Thus if a set of functions is linearly dependent on an interval, then the Wronskian is identically zero on that interval. Alternatively, if the Wronskian is identically zero, then the above matrix equation has a solution for a nonzero constant vector. This implies that the set of functions is linearly dependent.

Result 16.4.2 The Wronskian of a set of functions vanishes identically over an interval if and only if the set of functions is linearly dependent on that interval. The Wronskian of a set of linearly independent functions does not vanish except possibly at isolated points.

Example 16.4.2 Consider the set $\{x, x^2\}$. The Wronskian is
$$W(x) = \begin{vmatrix} x & x^2 \\ 1 & 2x \end{vmatrix} = 2x^2 - x^2 = x^2.$$
Thus the functions are independent.

Example 16.4.3 Consider the set $\{\sin x, \cos x, e^{\imath x}\}$. The Wronskian is
$$W(x) = \begin{vmatrix} \sin x & \cos x & e^{\imath x} \\ \cos x & -\sin x & \imath e^{\imath x} \\ -\sin x & -\cos x & -e^{\imath x} \end{vmatrix}.$$
Since the last row is a constant multiple of the first row, the determinant is zero. The functions are dependent. We could also see this with the identity $e^{\imath x} = \cos x + \imath\sin x$.

16.4.3 The Wronskian of the Solutions to a Differential Equation

Consider the $n^{\text{th}}$ order linear homogeneous differential equation
$$y^{(n)} + p_{n-1}(x)y^{(n-1)} + \cdots + p_0(x)y = 0.$$
Let $\{y_1, y_2, \dots, y_n\}$ be any set of $n$ linearly independent solutions. Let $Y(x)$ be the matrix such that $W(x) = \Delta[Y(x)]$. Now let's differentiate $W(x)$:
$$W'(x) = \frac{d}{dx}\Delta[Y(x)] = \sum_{k=1}^{n}\Delta_k[Y(x)].$$
We note that all but the last term in this sum are zero. To see this, let's take a look at the first term:
$$\Delta_1[Y(x)] = \begin{vmatrix} y_1' & y_2' & \cdots & y_n' \\ y_1' & y_2' & \cdots & y_n' \\ \vdots & \vdots & & \vdots \\ y_1^{(n-1)} & y_2^{(n-1)} & \cdots & y_n^{(n-1)} \end{vmatrix}.$$
The first two rows in the matrix are identical. Since the rows are dependent, the determinant is zero.
The last term in the sum is
$$\Delta_n[Y(x)] = \begin{vmatrix} y_1 & y_2 & \cdots & y_n \\ \vdots & \vdots & & \vdots \\ y_1^{(n-2)} & y_2^{(n-2)} & \cdots & y_n^{(n-2)} \\ y_1^{(n)} & y_2^{(n)} & \cdots & y_n^{(n)} \end{vmatrix}.$$
In the last row of this matrix we make the substitution $y_i^{(n)} = -p_{n-1}(x)y_i^{(n-1)} - \cdots - p_0(x)y_i$. Recalling that we can add a multiple of a row to another without changing the determinant, we add $p_0(x)$ times the first row, and $p_1(x)$ times the second row, etc., to the last row. Thus we have the determinant
$$W'(x) = \begin{vmatrix} y_1 & y_2 & \cdots & y_n \\ \vdots & \vdots & & \vdots \\ y_1^{(n-2)} & y_2^{(n-2)} & \cdots & y_n^{(n-2)} \\ -p_{n-1}(x)y_1^{(n-1)} & -p_{n-1}(x)y_2^{(n-1)} & \cdots & -p_{n-1}(x)y_n^{(n-1)} \end{vmatrix}$$
$$= -p_{n-1}(x)\begin{vmatrix} y_1 & y_2 & \cdots & y_n \\ \vdots & \vdots & & \vdots \\ y_1^{(n-2)} & y_2^{(n-2)} & \cdots & y_n^{(n-2)} \\ y_1^{(n-1)} & y_2^{(n-1)} & \cdots & y_n^{(n-1)} \end{vmatrix} = -p_{n-1}(x)W(x).$$
Thus the Wronskian satisfies the first order differential equation
$$W'(x) = -p_{n-1}(x)W(x).$$
Solving this equation we get a result known as Abel's formula:
$$W(x) = c\exp\left(-\int p_{n-1}(x)\, dx\right).$$
Thus regardless of the particular set of solutions that we choose, we can compute their Wronskian up to a constant factor.

Result 16.4.3 The Wronskian of any linearly independent set of solutions to the equation
$$y^{(n)} + p_{n-1}(x)y^{(n-1)} + \cdots + p_0(x)y = 0$$
is, up to a multiplicative constant, given by
$$W(x) = \exp\left(-\int p_{n-1}(x)\, dx\right).$$

Example 16.4.4 Consider the differential equation
$$y'' - 3y' + 2y = 0.$$
The Wronskian of the two independent solutions is
$$W(x) = c\exp\left(-\int -3\, dx\right) = c\, e^{3x}.$$
For the choice of solutions $\{e^x, e^{2x}\}$, the Wronskian is
$$W(x) = \begin{vmatrix} e^x & e^{2x} \\ e^x & 2e^{2x} \end{vmatrix} = 2e^{3x} - e^{3x} = e^{3x}.$$
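Both Result 16.4.2 and Abel's formula are easy to experiment with in sympy, which provides a `wronskian` helper (this is an assumption that a recent sympy is available; the call is `wronskian(list_of_functions, variable)`):

    from sympy import symbols, sin, cos, exp, I, wronskian, simplify

    x = symbols('x')

    # Example 16.4.2: {x, x^2} has Wronskian x^2, so the set is independent.
    print(wronskian([x, x**2], x))                             # x**2

    # Example 16.4.3: {sin x, cos x, e^{ix}} is dependent; the Wronskian vanishes.
    print(simplify(wronskian([sin(x), cos(x), exp(I*x)], x)))  # 0

    # Example 16.4.4: for y'' - 3y' + 2y = 0, Abel's formula gives W = c e^{3x}.
    W = wronskian([exp(x), exp(2*x)], x)
    assert simplify(W - exp(3*x)) == 0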
16.5 Well-Posed Problems

Consider the initial value problem for an $n^{\text{th}}$ order linear differential equation:
$$\frac{d^n y}{dx^n} + p_{n-1}(x)\frac{d^{n-1}y}{dx^{n-1}} + \cdots + p_1(x)\frac{dy}{dx} + p_0(x)y = f(x)$$
$$y(x_0) = v_1, \quad y'(x_0) = v_2, \quad \dots, \quad y^{(n-1)}(x_0) = v_n.$$
Since the general solution to the differential equation is a linear combination of the $n$ homogeneous solutions plus the particular solution
$$y = y_p + c_1 y_1 + c_2 y_2 + \cdots + c_n y_n,$$
the problem to find the constants $c_i$ can be written
$$\begin{pmatrix} y_1(x_0) & y_2(x_0) & \dots & y_n(x_0) \\ y_1'(x_0) & y_2'(x_0) & \dots & y_n'(x_0) \\ \vdots & \vdots & & \vdots \\ y_1^{(n-1)}(x_0) & y_2^{(n-1)}(x_0) & \dots & y_n^{(n-1)}(x_0) \end{pmatrix}\begin{pmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{pmatrix} + \begin{pmatrix} y_p(x_0) \\ y_p'(x_0) \\ \vdots \\ y_p^{(n-1)}(x_0) \end{pmatrix} = \begin{pmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{pmatrix}.$$
From linear algebra we know that this system of equations has a unique solution only if the determinant of the matrix is nonzero. Note that the determinant of the matrix is just the Wronskian evaluated at $x_0$. Thus if the Wronskian vanishes at $x_0$, the initial value problem for the differential equation either has no solution or infinitely many solutions. Such problems are said to be ill-posed. From Abel's formula for the Wronskian,
$$W(x) = \exp\left(-\int p_{n-1}(x)\, dx\right),$$
we see that the only way the Wronskian can vanish is if the value of the integral goes to $+\infty$.
Example 16.5.1 Consider the initial value problem
$$y'' - \frac{2}{x}y' + \frac{2}{x^2}y = 0, \quad y(0) = y'(0) = 1.$$
The Wronskian
$$W(x) = \exp\left(-\int -\frac{2}{x}\, dx\right) = \exp(2\log x) = x^2$$
vanishes at $x = 0$. Thus this problem is not well-posed. The general solution of the differential equation is
$$y = c_1 x + c_2 x^2.$$
We see that the general solution cannot satisfy the initial conditions. If instead we had the initial conditions $y(0) = 0$, $y'(0) = 1$, then there would be an infinite number of solutions.
Example 16.5.2 Consider the initial value problem
$$y'' - \frac{2}{x^2}y = 0, \quad y(0) = y'(0) = 1.$$
The Wronskian
$$W(x) = \exp\left(-\int 0\, dx\right) = 1$$
does not vanish anywhere. However, this problem is not well-posed. The general solution,
$$y = c_1 x^{-1} + c_2 x^2,$$
cannot satisfy the initial conditions. Thus we see that a non-vanishing Wronskian does not imply that the problem is well-posed.
Result 16.5.1 Consider the initial value problem
$$\frac{d^n y}{dx^n} + p_{n-1}(x)\frac{d^{n-1}y}{dx^{n-1}} + \cdots + p_1(x)\frac{dy}{dx} + p_0(x)y = 0$$
$$y(x_0) = v_1, \quad y'(x_0) = v_2, \quad \dots, \quad y^{(n-1)}(x_0) = v_n.$$
If the Wronskian,
$$W(x) = \exp\left(-\int p_{n-1}(x)\, dx\right),$$
vanishes at $x = x_0$ then the problem is ill-posed. The problem may be ill-posed even if the Wronskian does not vanish.

16.6 The Fundamental Set of Solutions

Consider a set of linearly independent solutions $\{u_1, u_2, \dots, u_n\}$ to an $n^{\text{th}}$ order linear homogeneous differential equation. This is called the fundamental set of solutions at $x_0$ if they satisfy the relations
$$u_1(x_0) = 1, \quad u_2(x_0) = 0, \quad \dots, \quad u_n(x_0) = 0$$
$$u_1'(x_0) = 0, \quad u_2'(x_0) = 1, \quad \dots, \quad u_n'(x_0) = 0$$
$$\vdots$$
$$u_1^{(n-1)}(x_0) = 0, \quad u_2^{(n-1)}(x_0) = 0, \quad \dots, \quad u_n^{(n-1)}(x_0) = 1.$$
Knowing the fundamental set of solutions is handy because it makes the task of solving an initial value problem trivial. Say we are given the initial conditions,
$$y(x_0) = v_1, \quad y'(x_0) = v_2, \quad \dots, \quad y^{(n-1)}(x_0) = v_n.$$
If the $u_i$'s are a fundamental set then the solution that satisfies these constraints is just
$$y = v_1 u_1(x) + v_2 u_2(x) + \cdots + v_n u_n(x).$$
Of course in general, a set of solutions is not the fundamental set. If the Wronskian of the solutions is nonzero and finite we can generate a fundamental set of solutions that are linear combinations of our original set. Consider the case of a second order equation. Let $\{y_1, y_2\}$ be two linearly independent solutions. We will generate the fundamental set of solutions, $\{u_1, u_2\}$:
$$\begin{pmatrix} u_1 \\ u_2 \end{pmatrix} = \begin{pmatrix} c_{11} & c_{12} \\ c_{21} & c_{22} \end{pmatrix}\begin{pmatrix} y_1 \\ y_2 \end{pmatrix}.$$
For $\{u_1, u_2\}$ to satisfy the relations that define a fundamental set, it must satisfy the matrix equation
$$\begin{pmatrix} u_1(x_0) & u_1'(x_0) \\ u_2(x_0) & u_2'(x_0) \end{pmatrix} = \begin{pmatrix} c_{11} & c_{12} \\ c_{21} & c_{22} \end{pmatrix}\begin{pmatrix} y_1(x_0) & y_1'(x_0) \\ y_2(x_0) & y_2'(x_0) \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$$
$$\begin{pmatrix} c_{11} & c_{12} \\ c_{21} & c_{22} \end{pmatrix} = \begin{pmatrix} y_1(x_0) & y_1'(x_0) \\ y_2(x_0) & y_2'(x_0) \end{pmatrix}^{-1}$$
If the Wronskian is non-zero and finite, we can solve for the constants, $c_{ij}$, and thus find the fundamental set of solutions. To generalize this result to an equation of order $n$, simply replace all the $2 \times 2$ matrices and vectors of length 2 with $n \times n$ matrices and vectors of length $n$. I presented the case of $n = 2$ simply to save having to write out all the ellipses involved in the general case. (It also makes for easier reading.)
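The $2 \times 2$ construction above is mechanical enough to script. The sketch below (assuming sympy) builds the fundamental set at $x_0 = 0$ from $\{e^{\imath x}, e^{-\imath x}\}$, anticipating the example that follows:

    from sympy import Matrix, I, exp, cos, symbols

    x = symbols('x')
    y = [exp(I*x), exp(-I*x)]

    # Row i holds (y_i(0), y_i'(0)); the coefficients c_ij form its inverse.
    M = Matrix([[yi.subs(x, 0), yi.diff(x).subs(x, 0)] for yi in y])
    u = M.inv() * Matrix(y)          # fundamental set as combinations of y1, y2
    print([ui.rewrite(cos).simplify() for ui in u])   # [cos(x), sin(x)]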
Example 16.6.1 Two linearly independent solutions to the differential equation $y'' + y = 0$ are $y_1 = e^{\imath x}$ and $y_2 = e^{-\imath x}$.
$$\begin{pmatrix} y_1(0) & y_1'(0) \\ y_2(0) & y_2'(0) \end{pmatrix} = \begin{pmatrix} 1 & \imath \\ 1 & -\imath \end{pmatrix}$$
To find the fundamental set of solutions, $\{u_1, u_2\}$, at $x = 0$ we solve the equation
$$\begin{pmatrix} c_{11} & c_{12} \\ c_{21} & c_{22} \end{pmatrix} = \begin{pmatrix} 1 & \imath \\ 1 & -\imath \end{pmatrix}^{-1} = \frac{1}{\imath 2}\begin{pmatrix} \imath & \imath \\ 1 & -1 \end{pmatrix}.$$
The fundamental set is
$$u_1 = \frac{e^{\imath x} + e^{-\imath x}}{2}, \qquad u_2 = \frac{e^{\imath x} - e^{-\imath x}}{\imath 2}.$$
Using trigonometric identities we can rewrite these as
$$u_1 = \cos x, \qquad u_2 = \sin x.$$

Result 16.6.1 The fundamental set of solutions at $x = x_0$, $\{u_1, u_2, \dots, u_n\}$, to an $n^{\text{th}}$ order linear differential equation, satisfy the relations
$$u_1(x_0) = 1, \quad u_2(x_0) = 0, \quad \dots, \quad u_n(x_0) = 0$$
$$u_1'(x_0) = 0, \quad u_2'(x_0) = 1, \quad \dots, \quad u_n'(x_0) = 0$$
$$\vdots$$
$$u_1^{(n-1)}(x_0) = 0, \quad u_2^{(n-1)}(x_0) = 0, \quad \dots, \quad u_n^{(n-1)}(x_0) = 1.$$
If the Wronskian of the solutions is nonzero and finite at the point $x_0$ then you can generate the fundamental set of solutions from any linearly independent set of solutions.

Exercise 16.6
Two solutions of $y'' - y = 0$ are $e^x$ and $e^{-x}$. Show that the solutions are independent. Find the fundamental set of solutions at $x = 0$.
Hint, Solution

16.7 Adjoint Equations

For the $n^{\text{th}}$ order linear differential operator
$$L[y] = p_n\frac{d^n y}{dx^n} + p_{n-1}\frac{d^{n-1}y}{dx^{n-1}} + \cdots + p_0 y$$
(where the $p_j$ are complex-valued functions) we define the adjoint of $L$:
$$L^*[y] = (-1)^n\frac{d^n}{dx^n}(\overline{p_n}y) + (-1)^{n-1}\frac{d^{n-1}}{dx^{n-1}}(\overline{p_{n-1}}y) + \cdots + \overline{p_0}y.$$
Here $\overline{f}$ denotes the complex conjugate of $f$.
Example 16.7.1
$$L[y] = xy'' + \frac{1}{x}y' + y$$
has the adjoint
$$L^*[y] = \frac{d^2}{dx^2}[xy] - \frac{d}{dx}\left[\frac{1}{x}y\right] + y = xy'' + 2y' - \frac{1}{x}y' + \frac{1}{x^2}y + y = xy'' + \left(2 - \frac{1}{x}\right)y' + \left(1 + \frac{1}{x^2}\right)y.$$
Taking the adjoint of $L^*$ yields
$$L^{**}[y] = \frac{d^2}{dx^2}[xy] - \frac{d}{dx}\left[\left(2 - \frac{1}{x}\right)y\right] + \left(1 + \frac{1}{x^2}\right)y = xy'' + 2y' - \left(2 - \frac{1}{x}\right)y' - \frac{1}{x^2}y + \left(1 + \frac{1}{x^2}\right)y = xy'' + \frac{1}{x}y' + y.$$
Thus by taking the adjoint of $L^*$, we obtain the original operator. In general, $L^{**} = L$.
Consider $L[y] = p_n y^{(n)} + \cdots + p_0 y$. If each of the $p_k$ is $k$ times continuously differentiable and $u$ and $v$ are $n$ times continuously differentiable on some interval, then on that interval
$$\overline{v}L[u] - u\,\overline{L^*[v]} = \frac{d}{dx}B[u, v],$$
where $B[u, v]$, the bilinear concomitant, is the bilinear form
$$B[u, v] = \sum_{m=1}^{n}\;\sum_{\substack{j+k=m-1 \\ j\geq 0,\, k\geq 0}} (-1)^j u^{(k)}\left(p_m\overline{v}\right)^{(j)}.$$
This equation is known as Lagrange's identity. If $L$ is a second order operator then
$$\overline{v}L[u] - u\,\overline{L^*[v]} = \frac{d}{dx}\left[ u p_1\overline{v} + u' p_2\overline{v} - u\left(p_2\overline{v}\right)' \right] = u'' p_2\overline{v} + u' p_1\overline{v} + u\left( -p_2\overline{v}'' + (-2p_2' + p_1)\overline{v}' + (-p_2'' + p_1')\overline{v} \right).$$
Example 16.7.2 Verify Lagrange's identity for the second order operator, $L[y] = p_2 y'' + p_1 y' + p_0 y$.
$$\overline{v}L[u] - u\,\overline{L^*[v]} = \overline{v}\left(p_2 u'' + p_1 u' + p_0 u\right) - u\left( p_2\overline{v}'' + (2p_2' - p_1)\overline{v}' + (p_2'' - p_1' + p_0)\overline{v} \right)$$
$$= u'' p_2\overline{v} + u' p_1\overline{v} + u\left( -p_2\overline{v}'' + (-2p_2' + p_1)\overline{v}' + (-p_2'' + p_1')\overline{v} \right).$$
We will not verify Lagrange's identity for the general case.
Integrating Lagrange's identity on its interval of validity gives us Green's formula:
$$\int_a^b\left( \overline{v}L[u] - u\,\overline{L^*[v]} \right)dx = B[u, v]\Big|_{x=b} - B[u, v]\Big|_{x=a}.$$
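Lagrange's identity for the second order operator can also be verified symbolically. The sketch below (assuming sympy, and taking the coefficients $p_k$ and the function $v$ real, so that the conjugation bars drop out) checks that $vL[u] - uL^*[v] - \frac{d}{dx}B[u,v]$ vanishes identically:

    from sympy import Function, symbols, diff, simplify

    x = symbols('x')
    u, v = Function('u')(x), Function('v')(x)
    p2, p1, p0 = (Function(name)(x) for name in ('p2', 'p1', 'p0'))

    L_u  = p2*u.diff(x, 2) + p1*u.diff(x) + p0*u         # L[u]
    Ls_v = diff(p2*v, x, 2) - diff(p1*v, x) + p0*v       # L*[v] for real p_k
    B    = u*p1*v + u.diff(x)*p2*v - u*diff(p2*v, x)     # bilinear concomitant

    assert simplify(v*L_u - u*Ls_v - diff(B, x)) == 0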
Result 16.7.1 The adjoint of the operator
$$L[y] = p_n\frac{d^n y}{dx^n} + p_{n-1}\frac{d^{n-1}y}{dx^{n-1}} + \cdots + p_0 y$$
is defined
$$L^*[y] = (-1)^n\frac{d^n}{dx^n}(\overline{p_n}y) + (-1)^{n-1}\frac{d^{n-1}}{dx^{n-1}}(\overline{p_{n-1}}y) + \cdots + \overline{p_0}y.$$
If each of the $p_k$ is $k$ times continuously differentiable and $u$ and $v$ are $n$ times continuously differentiable, then Lagrange's identity states
$$\overline{v}L[u] - u\,\overline{L^*[v]} = \frac{d}{dx}B[u, v] = \frac{d}{dx}\sum_{m=1}^{n}\;\sum_{\substack{j+k=m-1 \\ j\geq 0,\, k\geq 0}} (-1)^j u^{(k)}\left(p_m\overline{v}\right)^{(j)}.$$
Integrating Lagrange's identity on its domain of validity yields Green's formula,
$$\int_a^b\left( \overline{v}L[u] - u\,\overline{L^*[v]} \right)dx = B[u, v]\Big|_{x=b} - B[u, v]\Big|_{x=a}.$$
16.8 Additional Exercises

Exact Equations
Nature of Solutions
Transformation to a First Order System
The Wronskian
Well-Posed Problems
The Fundamental Set of Solutions
Adjoint Equations

Exercise 16.7
Find the adjoint of the Bessel equation of order $\nu$,
$$x^2 y'' + xy' + (x^2 - \nu^2)y = 0,$$
and the Legendre equation of order $\alpha$,
$$(1 - x^2)y'' - 2xy' + \alpha(\alpha + 1)y = 0.$$
Hint, Solution
Exercise 16.8
Find the adjoint of
$$x^2 y'' - xy' + 3y = 0.$$
Hint, Solution
  • 580. 16.9 Hints Hint 16.1 Hint 16.2 Hint 16.3 Hint 16.4 Hint 16.5 The difference of any two of the ui’s is a homogeneous solution. Hint 16.6 Exact Equations Nature of Solutions Transformation to a First Order System The Wronskian Well-Posed Problems The Fundamental Set of Solutions Adjoint Equations Hint 16.7 Hint 16.8 560
16.10 Solutions

Solution 16.1
The second order, linear, homogeneous differential equation is
$$P(x)y'' + Q(x)y' + R(x)y = 0. \tag{16.4}$$
An exact equation can be written in the form:
$$\frac{d}{dx}[a(x)y' + b(x)y] = 0.$$
If Equation 16.4 is exact, then we can write it in the form:
$$\frac{d}{dx}[P(x)y' + f(x)y] = 0$$
for some function $f(x)$. We carry out the differentiation to write the equation in standard form:
$$P(x)y'' + \left(P'(x) + f(x)\right)y' + f'(x)y = 0. \tag{16.5}$$
We equate the coefficients of Equations 16.4 and 16.5 to obtain a set of equations:
$$P'(x) + f(x) = Q(x), \qquad f'(x) = R(x).$$
In order to eliminate $f(x)$, we differentiate the first equation and substitute in the expression for $f'(x)$ from the second equation. This gives us a necessary condition for Equation 16.4 to be exact:
$$P''(x) - Q'(x) + R(x) = 0. \tag{16.6}$$
Now we demonstrate that Equation 16.6 is a sufficient condition for exactness. Suppose that Equation 16.6 holds. Then we can replace $R$ by $Q' - P''$ in the differential equation:
$$Py'' + Qy' + (Q' - P'')y = 0.$$
We recognize the right side as an exact differential:
$$\left(Py' + (Q - P')y\right)' = 0.$$
Thus Equation 16.6 is a sufficient condition for exactness. We can integrate to reduce the problem to a first order differential equation:
$$Py' + (Q - P')y = c.$$
Solution 16.2
Suppose that there is an integrating factor $\mu(x)$ that will make
$$P(x)y'' + Q(x)y' + R(x)y = 0$$
exact. We multiply by this integrating factor:
$$\mu(x)P(x)y'' + \mu(x)Q(x)y' + \mu(x)R(x)y = 0. \tag{16.7}$$
We apply the exactness condition from Exercise 16.1 to obtain a differential equation for the integrating factor:
$$(\mu P)'' - (\mu Q)' + \mu R = 0$$
$$\mu''P + 2\mu'P' + \mu P'' - \mu'Q - \mu Q' + \mu R = 0$$
$$P\mu'' + (2P' - Q)\mu' + (P'' - Q' + R)\mu = 0.$$
  • 582. Solution 16.3 We consider the differential equation, y + xy + y = 0. Since (1) − (x) + 1 = 0 we see that this is an exact equation. We rearrange terms to form exact derivatives and then integrate. (y ) + (xy) = 0 y + xy = c d dx ex2 /2 y = c ex2 /2 y = c e−x2 /2 ex2 /2 dx + d e−x2 /2 Solution 16.4 Consider the initial value problem, y + p(x)y + q(x)y = f(x), y(x0) = y0, y (x0) = y1. If p(x), q(x) and f(x) are continuous on an interval (a . . . b) with x0 ∈ (a . . . b), then the problem has a unique solution on that interval. 1. xy + 3y = x y + 3 x y = 1 Unique solutions exist on the intervals (−∞ . . . 0) and (0 . . . ∞). 2. x(x − 1)y + 3xy + 4y = 2 y + 3 x − 1 y + 4 x(x − 1) y = 2 x(x − 1) Unique solutions exist on the intervals (−∞ . . . 0), (0 . . . 1) and (1 . . . ∞). 3. ex y + x2 y + y = tan x y + x2 e−x y + e−x y = e−x tan x Unique solutions exist on the intervals (2n−1)π 2 . . . (2n+1)π 2 for n ∈ Z. Solution 16.5 We know that the general solution is y = yp + c1y1 + c2y2, where yp is a particular solution and y1 and y2 are linearly independent homogeneous solutions. Since yp can be any particular solution, we choose yp = u1. Now we need to find two homogeneous 562
  • 583. solutions. Since L[ui] = f(x), L[u1 − u2] = L[u2 − u3] = 0. Finally, we note that since the ui’s are linearly independent, y1 = u1 − u2 and y2 = u2 − u3 are linearly independent. Thus the general solution is y = u1 + c1(u1 − u2) + c2(u2 − u3). Solution 16.6 The Wronskian of the solutions is W(x) = ex e−x ex − e−x = −2. Since the Wronskian is nonzero, the solutions are independent. The fundamental set of solutions, {u1, u2}, is a linear combination of ex and e−x . u1 u2 = c11 c12 c21 c22 ex e−x The coefficients are c11 c12 c21 c22 = e0 e0 e−0 − e−0 −1 = 1 1 1 −1 −1 = 1 −2 −1 −1 −1 1 = 1 2 1 1 1 −1 u1 = 1 2 (ex + e−x ), u2 = 1 2 (ex − e−x ). The fundamental set of solutions at x = 0 is {cosh x, sinh x}. Exact Equations Nature of Solutions Transformation to a First Order System The Wronskian Well-Posed Problems The Fundamental Set of Solutions Adjoint Equations Solution 16.7 1. The Bessel equation of order ν is x2 y + xy + (x2 − ν2 )y = 0. The adjoint equation is x2 µ + (4x − x)µ + (2 − 1 + x2 − ν2 )µ = 0 x2 µ + 3xµ + (1 + x2 − ν2 )µ = 0. 563
  • 584. 2. The Legendre equation of order α is (1 − x2 )y − 2xy + α(α + 1)y = 0 The adjoint equation is (1 − x2 )µ + (−4x + 2x)µ + (−2 + 2 + α(α + 1))µ = 0 (1 − x2 )µ − 2xµ + α(α + 1)µ = 0 Solution 16.8 The adjoint of x2 y − xy + 3y = 0 is d2 dx2 (x2 y) + d dx (xy) + 3y = 0 (x2 y + 4xy + 2y) + (xy + y) + 3y = 0 x2 y + 5xy + 6y = 0. 564
  • 585. 16.11 Quiz Problem 16.1 What is the differential equation whose solution is the two parameter family of curves y = c1 sin(2x+ c2)? Solution 565
16.12 Quiz Solutions

Solution 16.1
We take the first and second derivative of $y = c_1\sin(2x + c_2)$:
$$y' = 2c_1\cos(2x + c_2), \qquad y'' = -4c_1\sin(2x + c_2).$$
This gives us three equations involving $x$, $y$, $y'$, $y''$ and the parameters $c_1$ and $c_2$. We eliminate the parameters to obtain the differential equation. Clearly we have
$$y'' + 4y = 0.$$
Chapter 17

Techniques for Linear Differential Equations

My new goal in life is to take the meaningless drivel out of human interaction. -Dave Ozenne

The $n^{\text{th}}$ order linear homogeneous differential equation can be written in the form:
$$y^{(n)} + a_{n-1}(x)y^{(n-1)} + \cdots + a_1(x)y' + a_0(x)y = 0.$$
In general it is not possible to solve second order and higher linear differential equations. In this chapter we will examine equations that have special forms which allow us to either reduce the order of the equation or solve it.

17.1 Constant Coefficient Equations

The $n^{\text{th}}$ order constant coefficient differential equation has the form:
$$y^{(n)} + a_{n-1}y^{(n-1)} + \cdots + a_1 y' + a_0 y = 0.$$
We will find that solving a constant coefficient differential equation is no more difficult than finding the roots of a polynomial. For notational simplicity, we will first consider second order equations. Then we will apply the same techniques to higher order equations.

17.1.1 Second Order Equations

Factoring the Differential Equation. Consider the second order constant coefficient differential equation:
$$y'' + 2ay' + by = 0. \tag{17.1}$$
Just as we can factor a second degree polynomial:
$$\lambda^2 + 2a\lambda + b = (\lambda - \alpha)(\lambda - \beta), \quad \alpha = -a + \sqrt{a^2 - b}\ \text{ and }\ \beta = -a - \sqrt{a^2 - b},$$
we can factor Equation 17.1:
$$\left[\frac{d^2}{dx^2} + 2a\frac{d}{dx} + b\right]y = \left[\frac{d}{dx} - \alpha\right]\left[\frac{d}{dx} - \beta\right]y.$$
Once we have factored the differential equation, we can solve it by solving a series of two first order differential equations. We set
$$u = \left[\frac{d}{dx} - \beta\right]y$$
to obtain a first order equation:
$$\left[\frac{d}{dx} - \alpha\right]u = 0,$$
which has the solution $u = c_1 e^{\alpha x}$. To find the solution of Equation 17.1, we solve
$$\left[\frac{d}{dx} - \beta\right]y = u = c_1 e^{\alpha x}.$$
We multiply by the integrating factor and integrate:
$$\frac{d}{dx}\left(e^{-\beta x}y\right) = c_1 e^{(\alpha - \beta)x}, \qquad y = c_1 e^{\beta x}\int e^{(\alpha - \beta)x}\, dx + c_2 e^{\beta x}.$$
We first consider the case that $\alpha$ and $\beta$ are distinct:
$$y = c_1 e^{\beta x}\frac{1}{\alpha - \beta}e^{(\alpha - \beta)x} + c_2 e^{\beta x}.$$
We choose new constants to write the solution in a simpler form:
$$y = c_1 e^{\alpha x} + c_2 e^{\beta x}.$$
Now we consider the case $\alpha = \beta$:
$$y = c_1 e^{\alpha x}\int 1\, dx + c_2 e^{\alpha x} = c_1 x e^{\alpha x} + c_2 e^{\alpha x}.$$
The solution of Equation 17.1 is
$$y = \begin{cases} c_1 e^{\alpha x} + c_2 e^{\beta x}, & \alpha \neq \beta, \\ c_1 e^{\alpha x} + c_2 x e^{\alpha x}, & \alpha = \beta. \end{cases} \tag{17.2}$$
Example 17.1.1 Consider the differential equation $y'' + y = 0$. To obtain the general solution, we factor the equation and apply the result in Equation 17.2:
$$\left[\frac{d}{dx} - \imath\right]\left[\frac{d}{dx} + \imath\right]y = 0, \qquad y = c_1 e^{\imath x} + c_2 e^{-\imath x}.$$
Example 17.1.2 Next we solve $y'' = 0$:
$$\left[\frac{d}{dx} - 0\right]\left[\frac{d}{dx} - 0\right]y = 0, \qquad y = c_1 e^{0x} + c_2 x e^{0x} = c_1 + c_2 x.$$
Substituting the Form of the Solution into the Differential Equation. Note that if we substitute $y = e^{\lambda x}$ into the differential equation (17.1), we will obtain the quadratic polynomial for $\lambda$:
$$y'' + 2ay' + by = 0$$
$$\lambda^2 e^{\lambda x} + 2a\lambda e^{\lambda x} + b e^{\lambda x} = 0$$
$$\lambda^2 + 2a\lambda + b = 0.$$
This gives us a superficially different method for solving constant coefficient equations. We substitute $y = e^{\lambda x}$ into the differential equation. Let $\alpha$ and $\beta$ be the roots of the quadratic in $\lambda$. If the roots are distinct, then the linearly independent solutions are $y_1 = e^{\alpha x}$ and $y_2 = e^{\beta x}$. If the quadratic has a double root at $\lambda = \alpha$, then the linearly independent solutions are $y_1 = e^{\alpha x}$ and $y_2 = x e^{\alpha x}$.
Example 17.1.3 Consider the equation $y'' - 3y' + 2y = 0$. The substitution $y = e^{\lambda x}$ yields
$$\lambda^2 - 3\lambda + 2 = (\lambda - 1)(\lambda - 2) = 0.$$
Thus the solutions are $e^x$ and $e^{2x}$.
Example 17.1.4 Next consider the equation $y'' - 4y' + 4y = 0$. The substitution $y = e^{\lambda x}$ yields
$$\lambda^2 - 4\lambda + 4 = (\lambda - 2)^2 = 0.$$
Because the polynomial has a double root, the solutions are $e^{2x}$ and $x e^{2x}$.

Result 17.1.1 Consider the second order constant coefficient differential equation
$$y'' + 2ay' + by = 0.$$
We can factor the differential equation into the form
$$\left[\frac{d}{dx} - \alpha\right]\left[\frac{d}{dx} - \beta\right]y = 0,$$
which has the solution
$$y = \begin{cases} c_1 e^{\alpha x} + c_2 e^{\beta x}, & \alpha \neq \beta, \\ c_1 e^{\alpha x} + c_2 x e^{\alpha x}, & \alpha = \beta. \end{cases}$$
We can also determine $\alpha$ and $\beta$ by substituting $y = e^{\lambda x}$ into the differential equation and factoring the polynomial in $\lambda$.

Shift Invariance. Note that if $u(x)$ is a solution of a constant coefficient equation, then $u(x + c)$ is also a solution. This is useful in applying initial or boundary conditions.
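Both examples above can be reproduced with a computer algebra system; a minimal sketch assuming sympy:

    from sympy import Function, dsolve, symbols, Eq

    x = symbols('x')
    y = Function('y')

    # Distinct roots, Example 17.1.3: y'' - 3y' + 2y = 0 -> C1*e^x + C2*e^{2x}
    print(dsolve(Eq(y(x).diff(x, 2) - 3*y(x).diff(x) + 2*y(x), 0), y(x)))

    # Double root, Example 17.1.4: y'' - 4y' + 4y = 0 -> (C1 + C2*x)*e^{2x}
    print(dsolve(Eq(y(x).diff(x, 2) - 4*y(x).diff(x) + 4*y(x), 0), y(x)))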
Example 17.1.5 Consider the problem
$$y'' - 3y' + 2y = 0, \quad y(0) = a, \quad y'(0) = b.$$
We know that the general solution is $y = c_1 e^x + c_2 e^{2x}$. Applying the initial conditions, we obtain the equations
$$c_1 + c_2 = a, \quad c_1 + 2c_2 = b.$$
The solution is
$$y = (2a - b)e^x + (b - a)e^{2x}.$$
Now suppose we wish to solve the same differential equation with the boundary conditions $y(1) = a$ and $y'(1) = b$. All we have to do is shift the solution to the right:
$$y = (2a - b)e^{x-1} + (b - a)e^{2(x-1)}.$$

17.1.2 Real-Valued Solutions

If the coefficients of the differential equation are real, then the solution can be written in terms of real-valued functions (Result 16.2.2). For a real root $\lambda = \alpha$ of the polynomial in $\lambda$, the corresponding solution, $y = e^{\alpha x}$, is real-valued.
Now recall that the complex roots of a polynomial with real coefficients occur in complex conjugate pairs. Assume that $\alpha \pm \imath\beta$ are roots of
$$\lambda^n + a_{n-1}\lambda^{n-1} + \cdots + a_1\lambda + a_0 = 0.$$
The corresponding solutions of the differential equation are $e^{(\alpha + \imath\beta)x}$ and $e^{(\alpha - \imath\beta)x}$. Note that the linear combinations
$$\frac{e^{(\alpha + \imath\beta)x} + e^{(\alpha - \imath\beta)x}}{2} = e^{\alpha x}\cos(\beta x), \qquad \frac{e^{(\alpha + \imath\beta)x} - e^{(\alpha - \imath\beta)x}}{\imath 2} = e^{\alpha x}\sin(\beta x),$$
are real-valued solutions of the differential equation. We could also obtain real-valued solutions by taking the real and imaginary parts of either $e^{(\alpha + \imath\beta)x}$ or $e^{(\alpha - \imath\beta)x}$:
$$\Re\left(e^{(\alpha + \imath\beta)x}\right) = e^{\alpha x}\cos(\beta x), \qquad \Im\left(e^{(\alpha + \imath\beta)x}\right) = e^{\alpha x}\sin(\beta x).$$
Example 17.1.6 Consider the equation
$$y'' - 2y' + 2y = 0.$$
The substitution $y = e^{\lambda x}$ yields
$$\lambda^2 - 2\lambda + 2 = (\lambda - 1 - \imath)(\lambda - 1 + \imath) = 0.$$
The linearly independent solutions are $e^{(1+\imath)x}$ and $e^{(1-\imath)x}$. We can write the general solution in terms of real functions:
$$y = c_1 e^x\cos x + c_2 e^x\sin x.$$
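A quick check of Example 17.1.6 (a sketch assuming sympy): dsolve returns the solution directly in the real-valued form.

    from sympy import Function, dsolve, symbols, Eq

    x = symbols('x')
    y = Function('y')

    # y'' - 2y' + 2y = 0: roots 1 +/- i, so y = e^x (C1 cos x + C2 sin x).
    print(dsolve(Eq(y(x).diff(x, 2) - 2*y(x).diff(x) + 2*y(x), 0), y(x)))
    # Eq(y(x), (C1*sin(x) + C2*cos(x))*exp(x))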
Exercise 17.1
Find the general solution of

    y'' + 2ay' + by = 0

for a, b ∈ R. There are three distinct forms of the solution depending on the sign of a² − b.
Hint, Solution

Exercise 17.2
Find the fundamental set of solutions of

    y'' + 2ay' + by = 0

at the point x = 0, for a, b ∈ R. Use the general solutions obtained in Exercise 17.1.
Hint, Solution

Result 17.1.2 Consider the second order constant coefficient equation

    y'' + 2ay' + by = 0.

The general solution of this differential equation is

    y = e^{−ax} (c1 e^{√(a²−b) x} + c2 e^{−√(a²−b) x})        if a² > b,
    y = e^{−ax} (c1 cos(√(b−a²) x) + c2 sin(√(b−a²) x))       if a² < b,
    y = e^{−ax} (c1 + c2 x)                                    if a² = b.

The fundamental set of solutions at x = 0 is

    { e^{−ax} [cosh(√(a²−b) x) + (a/√(a²−b)) sinh(√(a²−b) x)],  e^{−ax} (1/√(a²−b)) sinh(√(a²−b) x) }   if a² > b,
    { e^{−ax} [cos(√(b−a²) x) + (a/√(b−a²)) sin(√(b−a²) x)],    e^{−ax} (1/√(b−a²)) sin(√(b−a²) x) }    if a² < b,
    { (1 + ax) e^{−ax},  x e^{−ax} }                                                                      if a² = b.

To obtain the fundamental set of solutions at the point x = ξ, substitute (x − ξ) for x in the above solutions.

17.1.3 Higher Order Equations

The constant coefficient equation of order n has the form

    L[y] = y^{(n)} + a_{n−1} y^{(n−1)} + ··· + a1 y' + a0 y = 0.    (17.3)

The substitution y = e^{λx} will transform this differential equation into an algebraic equation.

    L[e^{λx}] = λⁿ e^{λx} + a_{n−1} λ^{n−1} e^{λx} + ··· + a1 λ e^{λx} + a0 e^{λx} = 0
    (λⁿ + a_{n−1} λ^{n−1} + ··· + a1 λ + a0) e^{λx} = 0
    λⁿ + a_{n−1} λ^{n−1} + ··· + a1 λ + a0 = 0

Assume that the roots of this equation, λ1, ..., λn, are distinct. Then the n linearly independent solutions of Equation 17.3 are

    e^{λ1 x}, ..., e^{λn x}.

If the roots of the algebraic equation are not distinct, then we will not obtain all the solutions of the differential equation. Suppose that λ1 = α is a double root. We substitute y = e^{λx} into the differential equation.

    L[e^{λx}] = [(λ − α)² (λ − λ3) ··· (λ − λn)] e^{λx} = 0
Setting λ = α will make the left side of the equation zero. Thus y = e^{αx} is a solution. Now we differentiate both sides of the equation with respect to λ and interchange the order of differentiation.

    (d/dλ) L[e^{λx}] = L[(d/dλ) e^{λx}] = L[x e^{λx}]

Let p(λ) = (λ − λ3) ··· (λ − λn). We calculate L[x e^{λx}] by applying L and then differentiating with respect to λ.

    L[x e^{λx}] = (d/dλ) L[e^{λx}]
                = (d/dλ) [(λ − α)² (λ − λ3) ··· (λ − λn)] e^{λx}
                = (d/dλ) [(λ − α)² p(λ)] e^{λx}
                = [2(λ − α)p(λ) + (λ − α)² p'(λ) + (λ − α)² p(λ) x] e^{λx}
                = (λ − α) [2p(λ) + (λ − α)p'(λ) + (λ − α)p(λ)x] e^{λx}

Since setting λ = α will make this expression zero, L[x e^{αx}] = 0, x e^{αx} is a solution of Equation 17.3. You can verify that e^{αx} and x e^{αx} are linearly independent. Now we have generated all of the solutions for the differential equation.

If λ = α is a root of multiplicity m, then by repeatedly differentiating with respect to λ you can show that the corresponding solutions are

    e^{αx}, x e^{αx}, x² e^{αx}, ..., x^{m−1} e^{αx}.

Example 17.1.7 Consider the equation

    y''' − 3y' + 2y = 0.

The substitution y = e^{λx} yields

    λ³ − 3λ + 2 = (λ − 1)² (λ + 2) = 0.

Thus the general solution is

    y = c1 e^x + c2 x e^x + c3 e^{−2x}.

Result 17.1.3 Consider the nth order constant coefficient equation

    dⁿy/dxⁿ + a_{n−1} d^{n−1}y/dx^{n−1} + ··· + a1 dy/dx + a0 y = 0.

Let the factorization of the algebraic equation obtained with the substitution y = e^{λx} be

    (λ − λ1)^{m1} (λ − λ2)^{m2} ··· (λ − λp)^{mp} = 0.

A set of linearly independent solutions is given by

    {e^{λ1 x}, x e^{λ1 x}, ..., x^{m1−1} e^{λ1 x}, ..., e^{λp x}, x e^{λp x}, ..., x^{mp−1} e^{λp x}}.

If the coefficients of the differential equation are real, then we can find a real-valued set of solutions.
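Result 17.1.3 is easy to exercise by machine. Here is a hedged SymPy sketch (the tooling is our assumption) applied to Example 17.1.7.

    import sympy as sp

    lam, x = sp.symbols('lamda x')
    # Example 17.1.7: the characteristic polynomial has a double root at 1.
    print(sp.roots(lam**3 - 3*lam + 2, lam))   # {1: 2, -2: 1}

    # The double root contributes both exp(x) and x*exp(x).
    for y in (sp.exp(x), x*sp.exp(x), sp.exp(-2*x)):
        print(sp.simplify(y.diff(x, 3) - 3*y.diff(x) + 2*y))   # all 0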
Example 17.1.8 Consider the equation

    d⁴y/dx⁴ + 2 d²y/dx² + y = 0.

The substitution y = e^{λx} yields

    λ⁴ + 2λ² + 1 = (λ − ı)² (λ + ı)² = 0.

Thus the linearly independent solutions are e^{ıx}, x e^{ıx}, e^{−ıx} and x e^{−ıx}. Noting that

    e^{ıx} = cos(x) + ı sin(x),

we can write the general solution in terms of sines and cosines.

    y = c1 cos x + c2 sin x + c3 x cos x + c4 x sin x

17.2 Euler Equations

Consider the equation

    L[y] = x² d²y/dx² + ax dy/dx + by = 0,   x > 0.

Let's say, for example, that y has units of distance and x has units of time. Note that each term in the differential equation has the same dimension.

    (time)² (distance)/(time)² = (time) (distance)/(time) = (distance)

Thus this is a second order Euler, or equidimensional, equation. We know that the first order Euler equation, xy' − ay = 0, has the solution y = c x^a. Thus for the second order equation we will try a solution of the form y = x^λ. The substitution y = x^λ will transform the differential equation into an algebraic equation.

    L[x^λ] = x² d²/dx²[x^λ] + ax d/dx[x^λ] + b x^λ = 0
    λ(λ − 1) x^λ + aλ x^λ + b x^λ = 0
    λ(λ − 1) + aλ + b = 0

Factoring yields

    (λ − λ1)(λ − λ2) = 0.

If the two roots, λ1 and λ2, are distinct, then the general solution is

    y = c1 x^{λ1} + c2 x^{λ2}.

If the roots are not distinct, λ1 = λ2 = λ, then we only have the one solution, y = x^λ. To generate the other solution we use the same approach as for the constant coefficient equation. We substitute y = x^λ into the differential equation and differentiate with respect to λ.

    (d/dλ) L[x^λ] = L[(d/dλ) x^λ] = L[ln x · x^λ]
Note that

    (d/dλ) x^λ = (d/dλ) e^{λ ln x} = ln x e^{λ ln x} = ln x · x^λ.

Now we apply L and then differentiate with respect to λ.

    (d/dλ) L[x^λ] = (d/dλ) (λ − α)² x^λ = 2(λ − α) x^λ + (λ − α)² ln x · x^λ

Equating these two results,

    L[ln x · x^λ] = 2(λ − α) x^λ + (λ − α)² ln x · x^λ.

Setting λ = α will make the right hand side zero. Thus y = ln x · x^α is a solution. If you are in the mood for a little algebra, you can show by repeatedly differentiating with respect to λ that if λ = α is a root of multiplicity m in an nth order Euler equation, then the associated solutions are

    x^α, ln x · x^α, (ln x)² x^α, ..., (ln x)^{m−1} x^α.

Example 17.2.1 Consider the Euler equation

    x y'' − y' + y/x = 0.

The substitution y = x^λ yields the algebraic equation

    λ(λ − 1) − λ + 1 = (λ − 1)² = 0.

Thus the general solution is

    y = c1 x + c2 x ln x.

17.2.1 Real-Valued Solutions

If the coefficients of the Euler equation are real, then the solution can be written in terms of functions that are real-valued when x is real and positive (Result 16.2.2). If α ± ıβ are the roots of

    λ(λ − 1) + aλ + b = 0,

then the corresponding solutions of the Euler equation are

    x^{α+ıβ} and x^{α−ıβ}.

We can rewrite these as

    x^α e^{ıβ ln x} and x^α e^{−ıβ ln x}.

Note that the linear combinations

    (x^α e^{ıβ ln x} + x^α e^{−ıβ ln x})/2 = x^α cos(β ln x),
    (x^α e^{ıβ ln x} − x^α e^{−ıβ ln x})/(ı2) = x^α sin(β ln x),

are real-valued solutions when x is real and positive. Equivalently, we could take the real and imaginary parts of either x^{α+ıβ} or x^{α−ıβ}.

    Re[x^α e^{ıβ ln x}] = x^α cos(β ln x),   Im[x^α e^{ıβ ln x}] = x^α sin(β ln x)
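A quick check of Example 17.2.1 with SymPy (assumed tooling, as before):

    import sympy as sp

    x = sp.symbols('x', positive=True)
    # Example 17.2.1: the indicial equation has a double root at 1, so x and
    # x*log(x) should both satisfy x*y'' - y' + y/x = 0.
    for y in (x, x*sp.log(x)):
        print(sp.simplify(x*y.diff(x, 2) - y.diff(x) + y/x))   # 0 both times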
Result 17.2.1 Consider the second order Euler equation

    x² y'' + (2a + 1) x y' + by = 0.

The general solution of this differential equation is

    y = x^{−a} (c1 x^{√(a²−b)} + c2 x^{−√(a²−b)})                  if a² > b,
    y = x^{−a} (c1 cos(√(b−a²) ln x) + c2 sin(√(b−a²) ln x))       if a² < b,
    y = x^{−a} (c1 + c2 ln x)                                       if a² = b.

The fundamental set of solutions at x = ξ is

    { (x/ξ)^{−a} [cosh(√(a²−b) ln(x/ξ)) + (a/√(a²−b)) sinh(√(a²−b) ln(x/ξ))],
      (x/ξ)^{−a} (ξ/√(a²−b)) sinh(√(a²−b) ln(x/ξ)) }               if a² > b,

    { (x/ξ)^{−a} [cos(√(b−a²) ln(x/ξ)) + (a/√(b−a²)) sin(√(b−a²) ln(x/ξ))],
      (x/ξ)^{−a} (ξ/√(b−a²)) sin(√(b−a²) ln(x/ξ)) }                if a² < b,

    { (x/ξ)^{−a} (1 + a ln(x/ξ)),  (x/ξ)^{−a} ξ ln(x/ξ) }          if a² = b.

Example 17.2.2 Consider the Euler equation

    x² y'' − 3x y' + 13y = 0.

The substitution y = x^λ yields

    λ(λ − 1) − 3λ + 13 = (λ − 2 − ı3)(λ − 2 + ı3) = 0.

The linearly independent solutions are {x^{2+ı3}, x^{2−ı3}}. We can put this in a more understandable form.

    x^{2+ı3} = x² e^{ı3 ln x} = x² cos(3 ln x) + ı x² sin(3 ln x)

We can write the general solution in terms of real-valued functions.

    y = c1 x² cos(3 ln x) + c2 x² sin(3 ln x)
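We can verify the real-valued solutions of Example 17.2.2 symbolically (a SymPy sketch, tooling assumed):

    import sympy as sp

    x = sp.symbols('x', positive=True)
    # Example 17.2.2: the roots 2 +- i3 give x**2*cos(3*log(x)) and
    # x**2*sin(3*log(x)) as real solutions of x^2 y'' - 3x y' + 13y = 0.
    for y in (x**2*sp.cos(3*sp.log(x)), x**2*sp.sin(3*sp.log(x))):
        print(sp.simplify(x**2*y.diff(x, 2) - 3*x*y.diff(x) + 13*y))   # 0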
Result 17.2.2 Consider the nth order Euler equation

    xⁿ dⁿy/dxⁿ + a_{n−1} x^{n−1} d^{n−1}y/dx^{n−1} + ··· + a1 x dy/dx + a0 y = 0.

Let the factorization of the algebraic equation obtained with the substitution y = x^λ be

    (λ − λ1)^{m1} (λ − λ2)^{m2} ··· (λ − λp)^{mp} = 0.

A set of linearly independent solutions is given by

    {x^{λ1}, ln x · x^{λ1}, ..., (ln x)^{m1−1} x^{λ1}, ..., x^{λp}, ln x · x^{λp}, ..., (ln x)^{mp−1} x^{λp}}.

If the coefficients of the differential equation are real, then we can find a set of solutions that are real-valued when x is real and positive.

17.3 Exact Equations

Exact equations have the form

    d/dx F(x, y, y', y'', ...) = f(x).

If you can write an equation in the form of an exact equation, you can integrate to reduce the order by one (or, for a first order equation, solve it outright). We will consider a few examples to illustrate the method.

Example 17.3.1 Consider the equation

    y'' + x² y' + 2xy = 0.

We can rewrite this as

    d/dx (y' + x² y) = 0.

Integrating yields a first order inhomogeneous equation.

    y' + x² y = c1

We multiply by the integrating factor I(x) = exp(∫ x² dx) to make this an exact equation.

    d/dx (e^{x³/3} y) = c1 e^{x³/3}
    e^{x³/3} y = c1 ∫ e^{x³/3} dx + c2
    y = c1 e^{−x³/3} ∫ e^{x³/3} dx + c2 e^{−x³/3}

Result 17.3.1 If you can write a differential equation in the form

    d/dx F(x, y, y', y'', ...) = f(x),

then you can integrate to reduce the order of the equation.

    F(x, y, y', y'', ...) = ∫ f(x) dx + c
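Exactness is straightforward to verify by machine. Here is a SymPy sketch (assumed tooling) for Example 17.3.1:

    import sympy as sp

    x = sp.symbols('x')
    y = sp.Function('y')(x)
    # Example 17.3.1: y'' + x^2 y' + 2x y is exactly d/dx [ y' + x^2 y ],
    # so integrating once reduces the equation to first order.
    F = y.diff(x) + x**2*y
    lhs = y.diff(x, 2) + x**2*y.diff(x) + 2*x*y
    print(sp.simplify(lhs - F.diff(x)))   # 0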
17.4 Equations Without Explicit Dependence on y

Example 17.4.1 Consider the equation

    y'' + √x y' = 0.

This is a second order equation for y, but note that it is a first order equation for y'. We can solve directly for y'.

    d/dx [exp((2/3) x^{3/2}) y'] = 0
    y' = c1 exp(−(2/3) x^{3/2})

Now we just integrate to get the solution for y.

    y = c1 ∫ exp(−(2/3) x^{3/2}) dx + c2

Result 17.4.1 If an nth order equation does not explicitly depend on y, then you can consider it as an equation of order n − 1 for y'.

17.5 Reduction of Order

Consider the second order linear equation

    L[y] ≡ y'' + p(x) y' + q(x) y = f(x).

Suppose that we know one homogeneous solution y1. We make the substitution y = u y1 and use that L[y1] = 0.

    L[u y1] = u'' y1 + 2u' y1' + u y1'' + p(u' y1 + u y1') + q u y1 = 0
    u'' y1 + u' (2y1' + p y1) + u (y1'' + p y1' + q y1) = 0
    u'' y1 + u' (2y1' + p y1) = 0

Thus we have reduced the problem to a first order equation for u'. An analogous result holds for higher order equations.

Result 17.5.1 Consider the nth order linear differential equation

    y^{(n)} + p_{n−1}(x) y^{(n−1)} + ··· + p1(x) y' + p0(x) y = f(x).

Let y1 be a solution of the homogeneous equation. The substitution y = u y1 will transform the problem into an (n − 1)th order equation for u'. For the second order problem

    y'' + p(x) y' + q(x) y = f(x)

this reduced equation is

    u'' y1 + u' (2y1' + p y1) = f(x).
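For the homogeneous second order case, Result 17.5.1 can be packaged as a small quadrature. The sketch below (SymPy, assumed tooling) uses the test equation x²y'' − 3xy' + 3y = 0 with y1 = x, which is our own assumed example rather than one from the text.

    import sympy as sp

    def second_solution(y1, p, x):
        # Integrating u'' y1 + u' (2 y1' + p y1) = 0 once gives
        # u' = exp(-int p dx) / y1**2, so y2 = y1 * int u' dx.
        u1 = sp.exp(-sp.integrate(p, x)) / y1**2
        return sp.simplify(y1 * sp.integrate(u1, x))

    x = sp.symbols('x', positive=True)
    # Assumed test case: y'' - (3/x) y' + (3/x**2) y = 0 with y1 = x.
    print(second_solution(x, -3/x, x))   # x**3/2, i.e. x**3 up to a constant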
Example 17.5.1 Consider the equation

    y'' + x y' − y = 0.

By inspection we see that y1 = x is a solution. We would like to find another linearly independent solution. The substitution y = xu yields

    x u'' + (2 + x²) u' = 0
    u'' + (2/x + x) u' = 0

The integrating factor is I(x) = exp(2 ln x + x²/2) = x² exp(x²/2).

    d/dx [x² e^{x²/2} u'] = 0
    u' = c1 x^{−2} e^{−x²/2}
    u = c1 ∫ x^{−2} e^{−x²/2} dx + c2
    y = c1 x ∫ x^{−2} e^{−x²/2} dx + c2 x

Thus we see that a second solution is

    y2 = x ∫ x^{−2} e^{−x²/2} dx.

17.6 *Reduction of Order and the Adjoint Equation

Let L be the linear differential operator

    L[y] = pn dⁿy/dxⁿ + p_{n−1} d^{n−1}y/dx^{n−1} + ··· + p0 y,

where each pj is a j times continuously differentiable complex valued function. Recall that the adjoint of L is

    L*[y] = (−1)ⁿ dⁿ/dxⁿ (pn y) + (−1)^{n−1} d^{n−1}/dx^{n−1} (p_{n−1} y) + ··· + p0 y.

If u and v are n times continuously differentiable, then Lagrange's identity states

    v L[u] − u L*[v] = d/dx B[u, v],

where

    B[u, v] = Σ_{m=1}^{n} Σ_{j+k=m−1, j≥0, k≥0} (−1)^j u^{(k)} (pm v)^{(j)}.

For second order equations,

    B[u, v] = u p1 v + u' p2 v − u (p2 v)'.

(See Section 16.7.) If we can find a solution to the homogeneous adjoint equation, L*[y] = 0, then we can reduce the order of the equation L[y] = f(x). Let ψ satisfy L*[ψ] = 0. Substituting u = y, v = ψ into Lagrange's identity yields

    ψ L[y] − y L*[ψ] = d/dx B[y, ψ]
    ψ L[y] = d/dx B[y, ψ].
The equation L[y] = f(x) is equivalent to the equation

    d/dx B[y, ψ] = ψ f
    B[y, ψ] = ∫ ψ(x) f(x) dx,

which is a linear equation in y of order n − 1.

Example 17.6.1 Consider the equation

    L[y] = y'' − x² y' − 2x y = 0.

Method 1. Note that this is an exact equation.

    d/dx (y' − x² y) = 0
    y' − x² y = c1
    d/dx (e^{−x³/3} y) = c1 e^{−x³/3}
    y = c1 e^{x³/3} ∫ e^{−x³/3} dx + c2 e^{x³/3}

Method 2. The adjoint equation is

    L*[y] = y'' + x² y' = 0.

By inspection we see that ψ = (constant) is a solution of the adjoint equation. To simplify the algebra we will choose ψ = 1. Thus the equation L[y] = 0 is equivalent to

    B[y, 1] = c1
    y(−x²) + (d/dx)[y] (1) − y (d/dx)[1] = c1
    y' − x² y = c1.

By using the adjoint equation to reduce the order we obtain the same solution as with Method 1.
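A SymPy sketch (assumed tooling) confirming that, with ψ = 1, differentiating B[y, 1] recovers L[y] in Example 17.6.1:

    import sympy as sp

    x = sp.symbols('x')
    y = sp.Function('y')(x)
    # With psi = 1: B[y, 1] = p1*y + p2*y' - y*(p2)' = -x**2*y + y'.
    # Lagrange's identity then says d/dx B[y, 1] reproduces L[y].
    B = -x**2*y + y.diff(x)
    L = y.diff(x, 2) - x**2*y.diff(x) - 2*x*y
    print(sp.simplify(B.diff(x) - L))   # 0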
17.7 Additional Exercises

Constant Coefficient Equations

Exercise 17.3 (mathematica/ode/techniques linear/constant.nb)
Find the solution of each one of the following initial value problems. Sketch the graph of the solution and describe its behavior as t increases.

1. 6y'' − 5y' + y = 0,   y(0) = 4, y'(0) = 0
2. y'' − 2y' + 5y = 0,   y(π/2) = 0, y'(π/2) = 2
3. y'' + 4y' + 4y = 0,   y(−1) = 2, y'(−1) = 1

Hint, Solution

Exercise 17.4 (mathematica/ode/techniques linear/constant.nb)
Substitute y = e^{λx} to find two linearly independent solutions to

    y'' − 4y' + 13y = 0

that are real-valued when x is real-valued.
Hint, Solution

Exercise 17.5 (mathematica/ode/techniques linear/constant.nb)
Find the general solution to

    y''' − y'' + y' − y = 0.

Write the solution in terms of functions that are real-valued when x is real-valued.
Hint, Solution

Exercise 17.6
Substitute y = e^{λx} to find the fundamental set of solutions at x = 0 for each of the equations:

1. y'' + y = 0,
2. y'' − y = 0,
3. y'' = 0.

What are the fundamental sets of solutions at x = 1 for each of these equations?
Hint, Solution

Exercise 17.7
Consider a ball of mass m hanging by an ideal spring of spring constant k. The ball is suspended in a fluid which damps the motion. This resistance has a coefficient of friction, µ. Find the differential equation for the displacement of the mass from its equilibrium position by balancing forces. Denote this displacement by y(t). If the damping force is weak, the mass will have a decaying, oscillatory motion. If the damping force is strong, the mass will not oscillate. The displacement will decay to zero. The value of the damping which separates these two behaviors is called critical damping.

Find the solution which satisfies the initial conditions y(0) = 0, y'(0) = 1. Use the solutions obtained in Exercise 17.2 or refer to Result 17.1.2.

Consider the case m = k = 1. Find the coefficient of friction for which the displacement of the mass decays most rapidly. Plot the displacement for strong, weak and critical damping.
Hint, Solution

Exercise 17.8
Show that y = c cos(x − φ) is the general solution of y'' + y = 0 where c and φ are constants of integration. (It is not sufficient to show that y = c cos(x − φ) satisfies the differential equation. y = 0
satisfies the differential equation, but it is certainly not the general solution.) Find constants c and φ such that y = sin(x).

Is y = c cosh(x − φ) the general solution of y'' − y = 0? Are there constants c and φ such that y = sinh(x)?
Hint, Solution

Exercise 17.9 (mathematica/ode/techniques linear/constant.nb)
Let y(t) be the solution of the initial-value problem

    y'' + 5y' + 6y = 0;   y(0) = 1, y'(0) = V.

For what values of V does y(t) remain nonnegative for all t > 0?
Hint, Solution

Exercise 17.10 (mathematica/ode/techniques linear/constant.nb)
Find two linearly independent solutions of

    y'' + sign(x) y = 0,   −∞ < x < ∞,

where sign(x) = ±1 according as x is positive or negative. (The solution should be continuous and have a continuous first derivative.)
Hint, Solution

Euler Equations

Exercise 17.11
Find the general solution of

    x² y'' + x y' + y = 0,   x > 0.

Hint, Solution

Exercise 17.12
Substitute y = x^λ to find the general solution of

    x² y'' − 2x y' + 2y = 0.

Hint, Solution

Exercise 17.13 (mathematica/ode/techniques linear/constant.nb)
Substitute y = x^λ to find the general solution of

    x y''' + y'' + (1/x) y' = 0.

Write the solution in terms of functions that are real-valued when x is real-valued and positive.
Hint, Solution

Exercise 17.14
Find the general solution of

    x² y'' + (2a + 1) x y' + by = 0.

Hint, Solution

Exercise 17.15
Show that

    y1 = e^{ax},   y2 = lim_{α→a} (e^{αx} − e^{−αx})/α
are linearly independent solutions of

    y'' − a² y = 0

for all values of a. It is common to abuse notation and write the second solution as

    y2 = (e^{ax} − e^{−ax})/a,

where the limit is taken if a = 0. Likewise show that

    y1 = x^a,   y2 = (x^a − x^{−a})/a

are linearly independent solutions of

    x² y'' + x y' − a² y = 0

for all values of a.
Hint, Solution

Exercise 17.16 (mathematica/ode/techniques linear/constant.nb)
Find two linearly independent solutions (i.e., the general solution) of

(a) x² y'' − 2x y' + 2y = 0,
(b) x² y'' − 2y = 0,
(c) x² y'' − x y' + y = 0.

Hint, Solution

Exact Equations

Exercise 17.17
Solve the differential equation

    y'' + y' sin x + y cos x = 0.

Hint, Solution

Equations Without Explicit Dependence on y

Reduction of Order

Exercise 17.18
Consider

    (1 − x²) y'' − 2x y' + 2y = 0,   −1 < x < 1.

Verify that y = x is a solution. Find the general solution.
Hint, Solution

Exercise 17.19
Consider the differential equation

    y'' − ((x + 1)/x) y' + (1/x) y = 0.

Since the coefficients sum to zero (1 − (x + 1)/x + 1/x = 0), y = e^x is a solution. Find another linearly independent solution.
Hint, Solution

Exercise 17.20
One solution of

    (1 − 2x) y'' + 4x y' − 4y = 0

is y = x. Find the general solution.
Hint, Solution
Exercise 17.21
Find the general solution of

    (x − 1) y'' − x y' + y = 0,

given that one solution is y = e^x. (You may assume x > 1.)
Hint, Solution

*Reduction of Order and the Adjoint Equation
17.8 Hints

Hint 17.1
Substitute y = e^{λx} into the differential equation.

Hint 17.2
The fundamental set of solutions is a linear combination of the homogeneous solutions.

Constant Coefficient Equations

Hint 17.3

Hint 17.4

Hint 17.5
It is a constant coefficient equation.

Hint 17.6
Use the fact that if u(x) is a solution of a constant coefficient equation, then u(x + c) is also a solution.

Hint 17.7
The force on the mass due to the spring is −ky(t). The frictional force is −µy'(t). Note that the initial conditions describe the second fundamental solution at t = 0. Note that for large t, t e^{αt} is much smaller than e^{βt} if α < β. (Prove this.)

Hint 17.8
By definition, the general solution of a second order differential equation is a two parameter family of functions that satisfies the differential equation. The trigonometric identities in Appendix M may be useful.

Hint 17.9

Hint 17.10

Euler Equations

Hint 17.11

Hint 17.12

Hint 17.13

Hint 17.14
Substitute y = x^λ into the differential equation. Consider the three cases: a² > b, a² < b and a² = b.

Hint 17.15
Hint 17.16

Exact Equations

Hint 17.17
It is an exact equation.

Equations Without Explicit Dependence on y

Reduction of Order

Hint 17.18

Hint 17.19
Use reduction of order to find the other solution.

Hint 17.20
Use reduction of order to find the other solution.

Hint 17.21

*Reduction of Order and the Adjoint Equation
  • 606. 17.9 Solutions Solution 17.1 We substitute y = eλx into the differential equation. y + 2ay + by = 0 λ2 + 2aλ + b = 0 λ = −a ± a2 − b If a2 > b then the two roots are distinct and real. The general solution is y = c1 e(−a+ √ a2−b)x +c2 e(−a− √ a2−b)x . If a2 < b then the two roots are distinct and complex-valued. We can write them as λ = −a ± ı b − a2. The general solution is y = c1 e(−a+ı √ b−a2 )x +c2 e(−a−ı √ b−a2 )x . By taking the sum and difference of the two linearly independent solutions above, we can write the general solution as y = c1 e−ax cos b − a2 x + c2 e−ax sin b − a2 x . If a2 = b then the only root is λ = −a. The general solution in this case is then y = c1 e−ax +c2x e−ax . In summary, the general solution is y =    e−ax c1 e √ a2−b x +c2 e− √ a2−b x if a2 > b, e−ax c1 cos √ b − a2 x + c2 sin √ b − a2 x if a2 < b, e−ax (c1 + c2x) if a2 = b. Solution 17.2 First we note that the general solution can be written, y =    e−ax c1 cosh √ a2 − b x + c2 sinh √ a2 − b x if a2 > b, e−ax c1 cos √ b − a2 x + c2 sin √ b − a2 x if a2 < b, e−ax (c1 + c2x) if a2 = b. We first consider the case a2 > b. The derivative is y = e−ax −ac1 + a2 − b c2 cosh a2 − b x + −ac2 + a2 − b c1 sinh a2 − b x . The conditions, y1(0) = 1 and y1(0) = 0, for the first solution become, c1 = 1, −ac1 + a2 − b c2 = 0, c1 = 1, c2 = a √ a2 − b . The conditions, y2(0) = 0 and y2(0) = 1, for the second solution become, c1 = 0, −ac1 + a2 − b c2 = 1, c1 = 0, c2 = 1 √ a2 − b . 586
  • 607. The fundamental set of solutions is e−ax cosh a2 − b x + a √ a2 − b sinh a2 − b x , e−ax 1 √ a2 − b sinh a2 − b x . Now consider the case a2 < b. The derivative is y = e−ax −ac1 + b − a2 c2 cos b − a2 x + −ac2 − b − a2 c1 sin b − a2 x . Clearly, the fundamental set of solutions is e−ax cos b − a2 x + a √ b − a2 sin b − a2 x , e−ax 1 √ b − a2 sin b − a2 x . Finally we consider the case a2 = b. The derivative is y = e−ax (−ac1 + c2 + −ac2x). The conditions, y1(0) = 1 and y1(0) = 0, for the first solution become, c1 = 1, −ac1 + c2 = 0, c1 = 1, c2 = a. The conditions, y2(0) = 0 and y2(0) = 1, for the second solution become, c1 = 0, −ac1 + c2 = 1, c1 = 0, c2 = 1. The fundamental set of solutions is (1 + ax) e−ax , x e−ax . In summary, the fundamental set of solutions at x = 0 is    e−ax cosh √ a2 − b x + a√ a2−b sinh √ a2 − b x , e−ax 1√ a2−b sinh √ a2 − b x if a2 > b, e−ax cos √ b − a2 x + a√ b−a2 sin √ b − a2 x , e−ax 1√ b−a2 sin √ b − a2 x if a2 < b, {(1 + ax) e−ax , x e−ax } if a2 = b. Constant Coefficient Equations Solution 17.3 1. We consider the problem 6y − 5y + y = 0, y(0) = 4, y (0) = 0. We make the substitution y = eλx in the differential equation. 6λ2 − 5λ + 1 = 0 (2λ − 1)(3λ − 1) = 0 λ = 1 3 , 1 2 The general solution of the differential equation is y = c1 et/3 +c2 et/2 . 587
Figure 17.1: The solution of 6y'' − 5y' + y = 0, y(0) = 4, y'(0) = 0.

We apply the initial conditions to determine the constants.

    c1 + c2 = 4,   c1/3 + c2/2 = 0
    c1 = 12,   c2 = −8

The solution subject to the initial conditions is

    y = 12 e^{t/3} − 8 e^{t/2}.

The solution is plotted in Figure 17.1. The solution tends to −∞ as t → ∞.

2. We consider the problem

    y'' − 2y' + 5y = 0,   y(π/2) = 0, y'(π/2) = 2.

We make the substitution y = e^{λt} in the differential equation.

    λ² − 2λ + 5 = 0
    λ = 1 ± √(1 − 5)
    λ = {1 + ı2, 1 − ı2}

The general solution of the differential equation is

    y = c1 e^t cos(2t) + c2 e^t sin(2t).

We apply the initial conditions to determine the constants.

    y(π/2) = 0   ⇒   −c1 e^{π/2} = 0   ⇒   c1 = 0
    y'(π/2) = 2  ⇒   −2c2 e^{π/2} = 2  ⇒   c2 = −e^{−π/2}

The solution subject to the initial conditions is

    y = −e^{t−π/2} sin(2t).

The solution is plotted in Figure 17.2. The solution oscillates with an amplitude that tends to ∞ as t → ∞.

3. We consider the problem

    y'' + 4y' + 4y = 0,   y(−1) = 2, y'(−1) = 1.

We make the substitution y = e^{λt} in the differential equation.

    λ² + 4λ + 4 = 0
    (λ + 2)² = 0
    λ = −2
Figure 17.2: The solution of y'' − 2y' + 5y = 0, y(π/2) = 0, y'(π/2) = 2.

Figure 17.3: The solution of y'' + 4y' + 4y = 0, y(−1) = 2, y'(−1) = 1.

The general solution of the differential equation is

    y = c1 e^{−2t} + c2 t e^{−2t}.

We apply the initial conditions to determine the constants.

    c1 e² − c2 e² = 2,   −2c1 e² + 3c2 e² = 1
    c1 = 7 e^{−2},   c2 = 5 e^{−2}

The solution subject to the initial conditions is

    y = (7 + 5t) e^{−2(t+1)}.

The solution is plotted in Figure 17.3. The solution vanishes as t → ∞.

    lim_{t→∞} (7 + 5t) e^{−2(t+1)} = lim_{t→∞} (7 + 5t)/e^{2(t+1)} = lim_{t→∞} 5/(2 e^{2(t+1)}) = 0

Solution 17.4

    y'' − 4y' + 13y = 0

With the substitution y = e^{λx} we obtain

    λ² e^{λx} − 4λ e^{λx} + 13 e^{λx} = 0
    λ² − 4λ + 13 = 0
    λ = 2 ± ı3.

Thus two linearly independent solutions are e^{(2+ı3)x} and e^{(2−ı3)x}.
Noting that

    e^{(2+ı3)x} = e^{2x} [cos(3x) + ı sin(3x)],
    e^{(2−ı3)x} = e^{2x} [cos(3x) − ı sin(3x)],

we can write the two linearly independent solutions

    y1 = e^{2x} cos(3x),   y2 = e^{2x} sin(3x).

Solution 17.5
We note that

    y''' − y'' + y' − y = 0

is a constant coefficient equation. The substitution y = e^{λx} yields

    λ³ − λ² + λ − 1 = 0
    (λ − 1)(λ − ı)(λ + ı) = 0.

The corresponding solutions are e^x, e^{ıx} and e^{−ıx}. We can write the general solution as

    y = c1 e^x + c2 cos x + c3 sin x.

Solution 17.6
We start with the equation y'' + y = 0. We substitute y = e^{λx} into the differential equation to obtain

    λ² + 1 = 0,   λ = ±ı.

A linearly independent set of solutions is {e^{ıx}, e^{−ıx}}. The fundamental set of solutions has the form

    y1 = c1 e^{ıx} + c2 e^{−ıx},
    y2 = c3 e^{ıx} + c4 e^{−ıx}.

By applying the constraints

    y1(0) = 1, y1'(0) = 0,
    y2(0) = 0, y2'(0) = 1,

we obtain

    y1 = (e^{ıx} + e^{−ıx})/2 = cos x,
    y2 = (e^{ıx} − e^{−ıx})/(ı2) = sin x.

Now consider the equation y'' − y = 0. By substituting y = e^{λx} we find that a set of solutions is {e^x, e^{−x}}. By taking linear combinations of these we see that another set of solutions is {cosh x, sinh x}. Note that this is the fundamental set of solutions.
Next consider y'' = 0. We can find the solutions by substituting y = e^{λx} or by integrating the equation twice. The fundamental set of solutions at x = 0 is {1, x}.

Note that if u(x) is a solution of a constant coefficient differential equation, then u(x + c) is also a solution. Also note that if u(x) satisfies y(0) = a, y'(0) = b, then u(x − x0) satisfies y(x0) = a, y'(x0) = b. Thus the fundamental sets of solutions at x = 1 are

1. {cos(x − 1), sin(x − 1)},
2. {cosh(x − 1), sinh(x − 1)},
3. {1, x − 1}.

Solution 17.7
Let y(t) denote the displacement of the mass from equilibrium. The forces on the mass are −ky(t) due to the spring and −µy'(t) due to friction. We equate the external forces to my''(t) to find the differential equation of the motion.

    m y'' = −ky − µy'
    y'' + (µ/m) y' + (k/m) y = 0

The solution which satisfies the initial conditions y(0) = 0, y'(0) = 1 is

    y(t) = e^{−µt/(2m)} (2m/√(µ²−4km)) sinh(√(µ²−4km) t/(2m))   if µ² > 4km,
           e^{−µt/(2m)} (2m/√(4km−µ²)) sin(√(4km−µ²) t/(2m))    if µ² < 4km,
           t e^{−µt/(2m)}                                        if µ² = 4km.

We respectively call these cases: strongly damped, weakly damped and critically damped. In the case that m = k = 1 the solution is

    y(t) = e^{−µt/2} (2/√(µ²−4)) sinh(√(µ²−4) t/2)   if µ > 2,
           e^{−µt/2} (2/√(4−µ²)) sin(√(4−µ²) t/2)    if µ < 2,
           t e^{−t}                                   if µ = 2.

Note that when t is large, t e^{−t} is much smaller than e^{−µt/2} for µ < 2. To prove this we examine the ratio of these functions as t → ∞.

    lim_{t→∞} t e^{−t}/e^{−µt/2} = lim_{t→∞} t/e^{(1−µ/2)t} = lim_{t→∞} 1/((1 − µ/2) e^{(1−µ/2)t}) = 0

Using this result, we see that the critically damped solution decays faster than the weakly damped solution.

We can write the strongly damped solution as

    e^{−µt/2} (1/√(µ²−4)) (e^{√(µ²−4) t/2} − e^{−√(µ²−4) t/2}).
Figure 17.4: Strongly, weakly and critically damped solutions.

For large t, the dominant factor is e^{(√(µ²−4) − µ)t/2}. Note that for µ > 2,

    √(µ²−4) = √((µ + 2)(µ − 2)) > µ − 2.

Therefore we have the bounds

    −2 < √(µ²−4) − µ < 0.

This shows that the critically damped solution decays faster than the strongly damped solution. µ = 2 gives the fastest decaying solution. Figure 17.4 shows the solution for µ = 4, µ = 1 and µ = 2.

Solution 17.8
Clearly y = c cos(x − φ) satisfies the differential equation y'' + y = 0. Since it is a two-parameter family of functions, it must be the general solution. Using a trigonometric identity we can rewrite the solution as

    y = c cos φ cos x + c sin φ sin x.

Setting this equal to sin x gives us the two equations

    c cos φ = 0,   c sin φ = 1,

which has the solutions c = 1, φ = (2n + 1/2)π, and c = −1, φ = (2n − 1/2)π, for n ∈ Z.

Clearly y = c cosh(x − φ) satisfies the differential equation y'' − y = 0. Since it is a two-parameter family of functions, it must be the general solution. Using a trigonometric identity we can rewrite the solution as

    y = c cosh φ cosh x + c sinh φ sinh x.

Setting this equal to sinh x gives us the two equations

    c cosh φ = 0,   c sinh φ = 1,

which has the solutions c = −ı, φ = ı(2n + 1/2)π, and c = ı, φ = ı(2n − 1/2)π, for n ∈ Z.

Solution 17.9
We substitute y = e^{λt} into the differential equation.

    λ² e^{λt} + 5λ e^{λt} + 6 e^{λt} = 0
    λ² + 5λ + 6 = 0
    (λ + 2)(λ + 3) = 0
  • 613. The general solution of the differential equation is y = c1 e−2t +c2 e−3t . The initial conditions give us the constraints: c1 + c2 = 1, −2c1 − 3c2 = V. The solution subject to the initial conditions is y = (3 + V ) e−2t −(2 + V ) e−3t . This solution will be non-negative for t > 0 if V ≥ −3. Solution 17.10 For negative x, the differential equation is y − y = 0. We substitute y = eλx into the differential equation to find the solutions. λ2 − 1 = 0 λ = ±1 y = ex , e−x We can take linear combinations to write the solutions in terms of the hyperbolic sine and cosine. y = {cosh(x), sinh(x)} For positive x, the differential equation is y + y = 0. We substitute y = eλx into the differential equation to find the solutions. λ2 + 1 = 0 λ = ±ı y = eıx , e−ıx We can take linear combinations to write the solutions in terms of the sine and cosine. y = {cos(x), sin(x)} We will find the fundamental set of solutions at x = 0. That is, we will find a set of solutions, {y1, y2} that satisfy the conditions: y1(0) = 1 y1(0) = 0 y2(0) = 0 y2(0) = 1 Clearly, these solutions are y1 = cosh(x) x < 0 cos(x) x ≥ 0 y2 = sinh(x) x < 0 sin(x) x ≥ 0 593
  • 614. Euler Equations Solution 17.11 We consider an Euler equation, x2 y + xy + y = 0, x > 0. We make the change of independent variable ξ = ln x, u(ξ) = y(x) to obtain u + u = 0. We make the substitution u(ξ) = eλξ . λ2 + 1 = 0 λ = ±i A set of linearly independent solutions for u(ξ) is {eıξ , e−ıξ }. Since cos ξ = eıξ + e−ıξ 2 and sin ξ = eıξ − e−ıξ ı2 , another linearly independent set of solutions is {cos ξ, sin ξ}. The general solution for y(x) is y(x) = c1 cos(ln x) + c2 sin(ln x). Solution 17.12 Consider the differential equation x2 y − 2xy + 2y = 0. With the substitution y = xλ this equation becomes λ(λ − 1) − 2λ + 2 = 0 λ2 − 3λ + 2 = 0 λ = 1, 2. The general solution is then y = c1x + c2x2 . Solution 17.13 We note that xy + y + 1 x y = 0 is an Euler equation. The substitution y = xλ yields λ3 − 3λ2 + 2λ + λ2 − λ + λ = 0 λ3 − 2λ2 + 2λ = 0. The three roots of this algebraic equation are λ = 0, λ = 1 + i, λ = 1 − ı 594
  • 615. The corresponding solutions to the differential equation are y = x0 y = x1+ı y = x1−ı y = 1 y = x eı ln x y = x e−ı ln x . We can write the general solution as y = c1 + c2x cos(ln x) + c3 sin(ln x). Solution 17.14 We substitute y = xλ into the differential equation. x2 y + (2a + 1)xy + by = 0 λ(λ − 1) + (2a + 1)λ + b = 0 λ2 + 2aλ + b = 0 λ = −a ± a2 − b For a2 > b then the general solution is y = c1x−a+ √ a2−b + c2x−a− √ a2−b . For a2 < b, then the general solution is y = c1x−a+ı √ b−a2 + c2x−a−ı √ b−a2 . By taking the sum and difference of these solutions, we can write the general solution as y = c1x−a cos b − a2 ln x + c2x−a sin b − a2 ln x . For a2 = b, the quadratic in lambda has a double root at λ = a. The general solution of the differential equation is y = c1x−a + c2x−a ln x. In summary, the general solution is: y =    x−a c1x √ a2−b + c2x− √ a2−b if a2 > b, x−a c1 cos √ b − a2 ln x + c2 sin √ b − a2 ln x if a2 < b, x−a (c1 + c2 ln x) if a2 = b. Solution 17.15 For a = 0, two linearly independent solutions of y − a2 y = 0 are y1 = eax , y2 = e−ax . For a = 0, we have y1 = e0x = 1, y2 = x e0x = x. In this case the solution are defined by y1 = [eax ]a=0 , y2 = d da eax a=0 . 595
  • 616. By the definition of differentiation, f (0) is f (0) = lim a→0 f(a) − f(−a) 2a . Thus the second solution in the case a = 0 is y2 = lim a→0 eax − e−ax a Consider the solutions y1 = eax , y2 = lim α→a eαx − e−αx α . Clearly y1 is a solution for all a. For a = 0, y2 is a linear combination of eax and e−ax and is thus a solution. Since the coefficient of e−ax in this linear combination is non-zero, it is linearly independent to y1. For a = 0, y2 is one half the derivative of eax evaluated at a = 0. Thus it is a solution. For a = 0, two linearly independent solutions of x2 y + xy − a2 y = 0 are y1 = xa , y2 = x−a . For a = 0, we have y1 = [xa ]a=0 = 1, y2 = d da xa a=0 = ln x. Consider the solutions y1 = xa , y2 = xa − x−a a Clearly y1 is a solution for all a. For a = 0, y2 is a linear combination of xa and x−a and is thus a solution. For a = 0, y2 is one half the derivative of xa evaluated at a = 0. Thus it is a solution. Solution 17.16 1. x2 y − 2xy + 2y = 0 We substitute y = xλ into the differential equation. λ(λ − 1) − 2λ + 2 = 0 λ2 − 3λ + 2 = 0 (λ − 1)(λ − 2) = 0 y = c1x + c2x2 2. x2 y − 2y = 0 We substitute y = xλ into the differential equation. λ(λ − 1) − 2 = 0 λ2 − λ − 2 = 0 (λ + 1)(λ − 2) = 0 y = c1 x + c2x2 596
  • 617. 3. x2 y − xy + y = 0 We substitute y = xλ into the differential equation. λ(λ − 1) − λ + 1 = 0 λ2 − 2λ + 1 = 0 (λ − 1)2 = 0 Since there is a double root, the solution is: y = c1x + c2x ln x. Exact Equations Solution 17.17 We note that y + y sin x + y cos x = 0 is an exact equation. d dx [y + y sin x] = 0 y + y sin x = c1 d dx y e− cos x = c1 e− cos x y = c1 ecos x e− cos x dx + c2 ecos x Equations Without Explicit Dependence on y Reduction of Order Solution 17.18 (1 − x2 )y − 2xy + 2y = 0, −1 < x < 1 We substitute y = x into the differential equation to check that it is a solution. (1 − x2 )(0) − 2x(1) + 2x = 0 We look for a second solution of the form y = xu. We substitute this into the differential equation 597
  • 618. and use the fact that x is a solution. (1 − x2 )(xu + 2u ) − 2x(xu + u) + 2xu = 0 (1 − x2 )(xu + 2u ) − 2x(xu ) = 0 (1 − x2 )xu + (2 − 4x2 )u = 0 u u = 2 − 4x2 x(x2 − 1) u u = − 2 x + 1 1 − x − 1 1 + x ln(u ) = −2 ln(x) − ln(1 − x) − ln(1 + x) + const ln(u ) = ln c x2(1 − x)(1 + x) u = c x2(1 − x)(1 + x) u = c 1 x2 + 1 2(1 − x) + 1 2(1 + x) u = c − 1 x − 1 2 ln(1 − x) + 1 2 ln(1 + x) + const u = c − 1 x + 1 2 ln 1 + x 1 − x + const A second linearly independent solution is y = −1 + x 2 ln 1 + x 1 − x . Solution 17.19 We are given that y = ex is a solution of y − x + 1 x y + 1 x y = 0. To find another linearly independent solution, we will use reduction of order. Substituting y = u ex y = (u + u) ex y = (u + 2u + u) ex into the differential equation yields u + 2u + u − x + 1 x (u + u) + 1 x u = 0. u + x − 1 x u = 0 d dx u exp 1 − 1 x dx = 0 u ex−ln x = c1 u = c1x e−x u = c1 x e−x dx + c2 u = c1(x e−x + e−x ) + c2 y = c1(x + 1) + c2 ex 598
  • 619. Thus a second linearly independent solution is y = x + 1. Solution 17.20 We are given that y = x is a solution of (1 − 2x)y + 4xy − 4y = 0. To find another linearly independent solution, we will use reduction of order. Substituting y = xu y = xu + u y = xu + 2u into the differential equation yields (1 − 2x)(xu + 2u ) + 4x(xu + u) − 4xu = 0, (1 − 2x)xu + (4x2 − 4x + 2)u = 0, u u = 4x2 − 4x + 2 x(2x − 1) , u u = 2 − 2 x + 2 2x − 1 , ln(u ) = 2x − 2 ln x + ln(2x − 1) + const, u = c1 2 x − 1 x2 e2x , u = c1 1 x e2x +c2, y = c1 e2x +c2x. Solution 17.21 One solution of (x − 1)y − xy + y = 0, is y1 = ex . We find a second solution with reduction of order. We make the substitution y2 = u ex in the differential equation. We determine u up to an additive constant. (x − 1)(u + 2u + u) ex −x(u + u) ex +u ex = 0 (x − 1)u + (x − 2)u = 0 u u = − x − 2 x − 1 = −1 + 1 x − 1 ln |u | = −x + ln |x − 1| + c u = c(x − 1) e−x u = −cx e−x The second solution of the differential equation is y2 = x. *Reduction of Order and the Adjoint Equation 599
  • 620. 600
Chapter 18

Techniques for Nonlinear Differential Equations

In mathematics you don't understand things. You just get used to them.

- Johann von Neumann

18.1 Bernoulli Equations

Sometimes it is possible to solve a nonlinear equation by making a change of the dependent variable that converts it into a linear equation. One of the most important such equations is the Bernoulli equation

    dy/dt + p(t) y = q(t) y^α,   α ≠ 1.

The change of dependent variable u = y^{1−α} will yield a first order linear equation for u which, when solved, will give us an implicit solution for y. (See Exercise 18.4.)

Result 18.1.1 The Bernoulli equation y' + p(t) y = q(t) y^α, α ≠ 1, can be transformed to the first order linear equation

    du/dt + (1 − α) p(t) u = (1 − α) q(t)

with the change of variables u = y^{1−α}.

Example 18.1.1 Consider the Bernoulli equation

    y' = (2/x) y + y².

First we divide by y².

    y^{−2} y' = (2/x) y^{−1} + 1

We make the change of variable u = y^{−1}.

    −u' = (2/x) u + 1
    u' + (2/x) u = −1
The integrating factor is I(x) = exp(∫ (2/x) dx) = x².

    d/dx (x² u) = −x²
    x² u = −x³/3 + c
    u = −x/3 + c/x²
    y = (−x/3 + c/x²)^{−1}

Thus the solution for y is

    y = 3x²/(c − x³).
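SymPy's dsolve recognizes Bernoulli equations directly; the following sketch (tooling assumed, as elsewhere in these notes) reproduces Example 18.1.1.

    import sympy as sp

    x = sp.symbols('x', positive=True)
    y = sp.Function('y')
    # Example 18.1.1: y' = 2y/x + y^2.  The printed solution should agree
    # with y = 3x^2/(c - x^3) up to how the constant is named and scaled.
    print(sp.dsolve(sp.Eq(y(x).diff(x), 2*y(x)/x + y(x)**2), y(x)))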
18.2 Riccati Equations

Factoring Second Order Operators. Consider the second order linear equation

    L[y] = [d²/dx² + p(x) d/dx + q(x)] y = y'' + p(x) y' + q(x) y = f(x).

If we were able to factor the linear operator L into the form

    L = (d/dx + a(x)) (d/dx + b(x)),    (18.1)

then we would be able to solve the differential equation. Factoring reduces the problem to a system of first order equations. We start with the factored equation

    (d/dx + a(x)) (d/dx + b(x)) y = f(x).

We set u = (d/dx + b(x)) y and solve the problem

    (d/dx + a(x)) u = f(x).

Then to obtain the solution we solve

    (d/dx + b(x)) y = u.

Example 18.2.1 Consider the equation

    y'' + (x − 1/x) y' + (1/x² − 1) y = 0.

Let's say by some insight or just random luck we are able to see that this equation can be factored into

    (d/dx + x) (d/dx − 1/x) y = 0.

We first solve the equation (d/dx + x) u = 0.

    u' + xu = 0
    d/dx (e^{x²/2} u) = 0
    u = c1 e^{−x²/2}

Then we solve for y with the equation

    (d/dx − 1/x) y = u = c1 e^{−x²/2}.

    y' − (1/x) y = c1 e^{−x²/2}
    d/dx (x^{−1} y) = c1 x^{−1} e^{−x²/2}
    y = c1 x ∫ x^{−1} e^{−x²/2} dx + c2 x

If we were able to solve for a and b in Equation 18.1 in terms of p and q, then we would be able to solve any second order differential equation. Equating the two operators,

    d²/dx² + p d/dx + q = (d/dx + a)(d/dx + b) = d²/dx² + (a + b) d/dx + (b' + ab).

Thus we have the two equations

    a + b = p   and   b' + ab = q.

Eliminating a,

    b' + (p − b) b = q
    b' = b² − pb + q.

Now we have a nonlinear equation for b that is no easier to solve than the original second order linear equation.

Riccati Equations. Equations of the form

    y' = a(x) y² + b(x) y + c(x)

are called Riccati equations. From the above derivation we see that for every second order differential equation there is a corresponding Riccati equation. Now we will show that the converse is true. We make the substitution

    y = −u'/(au),   y' = −u''/(au) + (u')²/(au²) + a'u'/(a²u),

in the Riccati equation.

    y' = ay² + by + c
    −u''/(au) + (u')²/(au²) + a'u'/(a²u) = a (u')²/(a²u²) − b u'/(au) + c
    −u''/(au) + a'u'/(a²u) + b u'/(au) − c = 0
    u'' − (a'/a + b) u' + acu = 0
Now we have a second order linear equation for u.

Result 18.2.1 The substitution y = −u'/(au) transforms the Riccati equation

    y' = a(x) y² + b(x) y + c(x)

into the second order linear equation

    u'' − (a'/a + b) u' + acu = 0.

Example 18.2.2 Consider the Riccati equation

    y' = y² + y/x + 1/x².

With the substitution y = −u'/u we obtain

    u'' − (1/x) u' + (1/x²) u = 0.

This is an Euler equation. The substitution u = x^λ yields

    λ(λ − 1) − λ + 1 = (λ − 1)² = 0.

Thus the general solution for u is

    u = c1 x + c2 x log x.

Since y = −u'/u,

    y = −(c1 + c2 (1 + log x))/(c1 x + c2 x log x)
    y = −(1 + c(1 + log x))/(x + cx log x)
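A SymPy sketch (assumed tooling) of the Riccati-to-linear reduction in Example 18.2.2:

    import sympy as sp

    x = sp.symbols('x', positive=True)
    u = sp.Function('u')
    # With a = 1, b = 1/x, c = 1/x^2, the linearized equation is the Euler
    # equation u'' - u'/x + u/x^2 = 0.
    print(sp.dsolve(u(x).diff(x, 2) - u(x).diff(x)/x + u(x)/x**2, u(x)))
    # -> Eq(u(x), x*(C1 + C2*log(x)))

    # Check that y = -u'/u built from u = x*log(x) solves the Riccati equation.
    uu = x*sp.log(x)
    yy = -uu.diff(x)/uu
    print(sp.simplify(yy.diff(x) - (yy**2 + yy/x + 1/x**2)))   # 0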
18.3 Exchanging the Dependent and Independent Variables

Some differential equations can be put in a more elementary form by exchanging the dependent and independent variables. If the new equation can be solved, you will have an implicit solution for the initial equation. We will consider a few examples to illustrate the method.

Example 18.3.1 Consider the equation

    y' = 1/(y³ − xy²).

Instead of considering y to be a function of x, consider x to be a function of y. That is, x = x(y), x' = dx/dy.

    dy/dx = 1/(y³ − xy²)
    dx/dy = y³ − xy²
    x' + y² x = y³

Now we have a first order linear equation for x.

    d/dy (e^{y³/3} x) = y³ e^{y³/3}
    x = e^{−y³/3} ∫ y³ e^{y³/3} dy + c e^{−y³/3}

Example 18.3.2 Consider the equation

    y' = y/(y² + 2x).

Interchanging the dependent and independent variables yields

    1/x' = y/(y² + 2x)
    x' = y + 2x/y
    x' − (2/y) x = y
    d/dy (y^{−2} x) = y^{−1}
    y^{−2} x = log y + c
    x = y² log y + c y²

Result 18.3.1 Some differential equations can be put in a simpler form by exchanging the dependent and independent variables. Thus a differential equation for y(x) can be written as an equation for x(y). Solving the equation for x(y) will give an implicit solution for y(x).
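A sketch of Example 18.3.1 in SymPy (assumed tooling); note that the remaining integral has no elementary form, so it is left unevaluated.

    import sympy as sp

    y = sp.symbols('y', positive=True)
    x = sp.Function('x')
    # Example 18.3.1 read "backwards": x'(y) + y^2 x = y^3 is linear in x(y).
    # The result, with its unevaluated Integral, is an implicit solution for y(x).
    print(sp.dsolve(sp.Eq(x(y).diff(y) + y**2*x(y), y**3), x(y)))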
18.4 Autonomous Equations

Autonomous equations have no explicit dependence on x. The following are examples.

    • y'' + 3y' − 2y = 0
    • y'' = y + (y')²
    • y''' + y'' y = 0

The change of variables u(y) = y' reduces an nth order autonomous equation in y to a non-autonomous equation of order n − 1 in u(y). Writing the derivatives of y in terms of u,

    y' = u(y)
    y'' = d/dx u(y) = (dy/dx)(d/dy) u(y) = y' u' = u' u
    y''' = (u'' u + (u')²) u.

Thus we see that the equation for u(y) will have an order of one less than the original equation.

Result 18.4.1 Consider an autonomous differential equation for y(x) (autonomous equations have no explicit dependence on x). The change of variables u(y) = y' reduces an nth order autonomous equation in y to a non-autonomous equation of order n − 1 in u(y).

Example 18.4.1 Consider the equation

    y'' = y + (y')².

With the substitution u(y) = y', the equation becomes

    u' u = y + u²
    u' = u + y u^{−1}.

We recognize this as a Bernoulli equation. The substitution v = u² yields

    (1/2) v' = v + y
    v' − 2v = 2y
    d/dy (e^{−2y} v) = 2y e^{−2y}
    v(y) = c1 e^{2y} + e^{2y} ∫ 2y e^{−2y} dy
    v(y) = c1 e^{2y} + e^{2y} (−y e^{−2y} + ∫ e^{−2y} dy)
    v(y) = c1 e^{2y} + e^{2y} (−y e^{−2y} − (1/2) e^{−2y})
    v(y) = c1 e^{2y} − y − 1/2.

Now we solve for u.

    u(y) = (c1 e^{2y} − y − 1/2)^{1/2}
    dy/dx = (c1 e^{2y} − y − 1/2)^{1/2}

This equation is separable.

    dx = dy/(c1 e^{2y} − y − 1/2)^{1/2}
    x + c2 = ∫ dy/(c1 e^{2y} − y − 1/2)^{1/2}

Thus we finally have arrived at an implicit solution for y(x).
Example 18.4.2 Consider the equation

    y'' + y³ = 0.

With the change of variables u(y) = y', the equation becomes

    u' u + y³ = 0.

This equation is separable.

    u du = −y³ dy
    (1/2) u² = −(1/4) y⁴ + c1
    u = (2c1 − (1/2) y⁴)^{1/2}
    y' = (2c1 − (1/2) y⁴)^{1/2}
    dy/(2c1 − (1/2) y⁴)^{1/2} = dx

Integrating gives us the implicit solution

    ∫ dy/(2c1 − (1/2) y⁴)^{1/2} = x + c2.
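Equivalently, the first integration above says that the "energy" (y')²/2 + y⁴/4 is conserved. A SymPy sketch (assumed tooling) checking this conservation law:

    import sympy as sp

    t = sp.symbols('t')
    y = sp.Function('y')(t)
    # Example 18.4.2: u u_y = -y^3 integrates to (y')^2/2 + y^4/4 = c1, so
    # this quantity must be constant along solutions of y'' + y^3 = 0.
    E = y.diff(t)**2/2 + y**4/4
    print(sp.simplify(E.diff(t) - y.diff(t)*(y.diff(t, 2) + y**3)))   # 0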
18.5 *Equidimensional-in-x Equations

Differential equations that are invariant under the change of variables x = cξ are said to be equidimensional-in-x. For a familiar example from linear equations, we note that the Euler equation is equidimensional-in-x. Writing the new derivatives under the change of variables x = cξ,

    d/dx = (1/c) d/dξ,   d²/dx² = (1/c²) d²/dξ²,   ....

Example 18.5.1 Consider the Euler equation

    y'' + (2/x) y' + (3/x²) y = 0.

Under the change of variables x = cξ, y(x) = u(ξ), this equation becomes

    (1/c²) u'' + (2/(cξ)) (1/c) u' + (3/(c²ξ²)) u = 0
    u'' + (2/ξ) u' + (3/ξ²) u = 0.

Thus this equation is invariant under the change of variables x = cξ.

Example 18.5.2 For a nonlinear example, consider the equation

    y'' y' + (y''/x) y + y'/x² = 0.

With the change of variables x = cξ, y(x) = u(ξ) the equation becomes

    (u''/c²)(u'/c) + (u''/(c³ξ)) u + u'/(c³ξ²) = 0
    u'' u' + (u''/ξ) u + u'/ξ² = 0.

We see that this equation is also equidimensional-in-x.

You may recall that the change of variables x = e^t reduces an Euler equation to a constant coefficient equation. To generalize this result to nonlinear equations we will see that the same change of variables reduces an equidimensional-in-x equation to an autonomous equation. Writing the derivatives with respect to x in terms of t,

    x = e^t,   d/dx = (dt/dx) d/dt = e^{−t} d/dt
    x d/dx = d/dt
    x² d²/dx² = x d/dx (x d/dx) − x d/dx = d²/dt² − d/dt.

Example 18.5.3 Consider the equation in Example 18.5.2,

    y'' y' + (y''/x) y + y'/x² = 0.

Applying the change of variables x = e^t, y(x) = u(t) yields an autonomous equation for u(t).

    x² y'' x y' + x² y'' y + x y' = 0
    (u'' − u') u' + (u'' − u') u + u' = 0

Result 18.5.1 A differential equation that is invariant under the change of variables x = cξ is equidimensional-in-x. Such an equation can be reduced to an autonomous equation of the same order with the change of variables x = e^t.
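A SymPy sketch (assumed tooling) of the x = e^t reduction applied to the Euler equation of Example 18.5.1:

    import sympy as sp

    t = sp.symbols('t')
    x = sp.symbols('x', positive=True)
    u = sp.Function('u')
    # Under x = exp(t) the Euler equation of Example 18.5.1 (multiplied by
    # x^2), x^2 y'' + 2x y' + 3y = 0, becomes u'' + u' + 3u = 0.
    # Solve for u(t), map back with t = log(x), and check the residual.
    usol = sp.dsolve(u(t).diff(t, 2) + u(t).diff(t) + 3*u(t), u(t)).rhs
    ysol = usol.subs(t, sp.log(x))
    print(sp.simplify(x**2*ysol.diff(x, 2) + 2*x*ysol.diff(x) + 3*ysol))   # 0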
18.6 *Equidimensional-in-y Equations

A differential equation is said to be equidimensional-in-y if it is invariant under the change of variables y(x) = c v(x). Note that all linear homogeneous equations are equidimensional-in-y.

Example 18.6.1 Consider the linear equation

    y'' + p(x) y' + q(x) y = 0.

With the change of variables y(x) = cv(x) the equation becomes

    cv'' + p(x) cv' + q(x) cv = 0
    v'' + p(x) v' + q(x) v = 0.

Thus we see that the equation is invariant under the change of variables.

Example 18.6.2 For a nonlinear example, consider the equation

    y'' y + (y')² − y² = 0.

Under the change of variables y(x) = cv(x) the equation becomes

    cv'' cv + (cv')² − (cv)² = 0
    v'' v + (v')² − v² = 0.

Thus we see that this equation is also equidimensional-in-y.

The change of variables y(x) = e^{u(x)} reduces an nth order equidimensional-in-y equation to an equation of order n − 1 for u'. Writing the derivatives of e^{u(x)},

    d/dx e^u = u' e^u
    d²/dx² e^u = (u'' + (u')²) e^u
    d³/dx³ e^u = (u''' + 3u''u' + (u')³) e^u.

Example 18.6.3 Consider the linear equation in Example 18.6.1,

    y'' + p(x) y' + q(x) y = 0.

Under the change of variables y(x) = e^{u(x)} the equation becomes

    (u'' + (u')²) e^u + p(x) u' e^u + q(x) e^u = 0
    u'' + (u')² + p(x) u' + q(x) = 0.

Thus we have a Riccati equation for u'. This transformation might seem rather useless since linear equations are usually easier to work with than nonlinear equations, but it is often useful in determining the asymptotic behavior of the equation.

Example 18.6.4 From Example 18.6.2 we have the equation

    y'' y + (y')² − y² = 0.

The change of variables y(x) = e^{u(x)} yields

    (u'' + (u')²) e^u e^u + (u' e^u)² − (e^u)² = 0
    u'' + 2(u')² − 1 = 0
    u'' = −2(u')² + 1

Now we have a Riccati equation for u'. We make the substitution u' = v'/(2v).

    v''/(2v) − (v')²/(2v²) = −2 (v')²/(4v²) + 1
    v'' − 2v = 0
    v = c1 e^{√2 x} + c2 e^{−√2 x}
    u' = (1/2) √2 (c1 e^{√2 x} − c2 e^{−√2 x})/(c1 e^{√2 x} + c2 e^{−√2 x})
    u = (1/2) log(c1 e^{√2 x} + c2 e^{−√2 x}) + c3
    y = e^{c3} (c1 e^{√2 x} + c2 e^{−√2 x})^{1/2}

The constants are redundant; the general solution is

    y = (c1 e^{√2 x} + c2 e^{−√2 x})^{1/2}
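A SymPy check (assumed tooling) of the general solution found in Example 18.6.4:

    import sympy as sp

    x, c1, c2 = sp.symbols('x c1 c2', positive=True)
    # Check the general solution directly against y y'' + (y')^2 - y^2 = 0.
    y = sp.sqrt(c1*sp.exp(sp.sqrt(2)*x) + c2*sp.exp(-sp.sqrt(2)*x))
    print(sp.simplify(y*y.diff(x, 2) + y.diff(x)**2 - y**2))   # 0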
Result 18.6.1 A differential equation is equidimensional-in-y if it is invariant under the change of variables y(x) = cv(x). An nth order equidimensional-in-y equation can be reduced to an equation of order n − 1 in u' with the change of variables y(x) = e^{u(x)}.

18.7 *Scale-Invariant Equations

Result 18.7.1 An equation is scale invariant if it is invariant under the change of variables x = cξ, y(x) = c^α v(ξ), for some value of α. A scale-invariant equation can be transformed to an equidimensional-in-x equation with the change of variables y(x) = x^α u(x).

Example 18.7.1 Consider the equation

    y'' + x² y² = 0.

Under the change of variables x = cξ, y(x) = c^α v(ξ) this equation becomes

    (c^α/c²) v''(ξ) + c² ξ² c^{2α} v²(ξ) = 0.

Equating powers of c in the two terms yields α = −4. Introducing the change of variables y(x) = x^{−4} u(x) yields

    d²/dx² [x^{−4} u(x)] + x² (x^{−4} u(x))² = 0
    x^{−4} u'' − 8x^{−5} u' + 20x^{−6} u + x^{−6} u² = 0
    x² u'' − 8x u' + 20u + u² = 0.

We see that the equation for u is equidimensional-in-x.
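A SymPy sketch (assumed tooling) of the reduction in Example 18.7.1:

    import sympy as sp

    x = sp.symbols('x', positive=True)
    u = sp.Function('u')
    # Substituting y = x**-4 * u into y'' + x^2 y^2 = 0 and multiplying by
    # x^6 should give x^2 u'' - 8x u' + 20 u + u^2 = 0.
    y = x**-4*u(x)
    print(sp.expand(x**6*(y.diff(x, 2) + x**2*y**2)))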
  • 631. 18.8 Exercises Exercise 18.1 1. Find the general solution and the singular solution of the Clairaut equation, y = xp + p2 . 2. Show that the singular solution is the envelope of the general solution. Hint, Solution Bernoulli Equations Exercise 18.2 (mathematica/ode/techniques nonlinear/bernoulli.nb) Consider the Bernoulli equation dy dt + p(t)y = q(t)yα . 1. Solve the Bernoulli equation for α = 1. 2. Show that for α = 1 the substitution u = y1−α reduces Bernoulli’s equation to a linear equation. 3. Find the general solution to the following equations. t2 dy dt + 2ty − y3 = 0, t > 0 (a) dy dx + 2xy + y2 = 0 (b) Hint, Solution Exercise 18.3 Consider a population, y. Let the birth rate of the population be proportional to y with constant of proportionality 1. Let the death rate of the population be proportional to y2 with constant of proportionality 1/1000. Assume that the population is large enough so that you can consider y to be continuous. What is the population as a function of time if the initial population is y0? Hint, Solution Exercise 18.4 Show that the transformation u = y1−n reduces the equation to a linear first order equation. Solve the equations 1. t2 dy dt + 2ty − y3 = 0 t > 0 2. dy dt = (Γ cos t + T) y − y3 , Γ and T are real constants. (From a fluid flow stability problem.) Hint, Solution Riccati Equations Exercise 18.5 1. Consider the Ricatti equation, dy dx = a(x)y2 + b(x)y + c(x). 611
Substitute y = yp(x) + 1/u(x) into the Riccati equation, where yp is some particular solution, to obtain a first order linear differential equation for u.

2. Consider a Riccati equation,

    y' = 1 + x² − 2xy + y².

Verify that yp(x) = x is a particular solution. Make the substitution y = yp + 1/u to find the general solution. What would happen if you continued this method, taking the general solution for yp? Would you be able to find a more general solution?

3. The substitution

    y = −u'/(au)

gives us the second order, linear, homogeneous differential equation,

    u'' − (a'/a + b) u' + acu = 0.

The general solution for u has two constants of integration. However, the solution for y should only have one constant of integration as it satisfies a first order equation. Write y in terms of the solution for u and verify that y has only one constant of integration.
Hint, Solution

Exchanging the Dependent and Independent Variables

Exercise 18.6
Solve the differential equation

    y' = √y/(xy + y).

Hint, Solution

Autonomous Equations

*Equidimensional-in-x Equations

*Equidimensional-in-y Equations

*Scale-Invariant Equations
  • 633. 18.9 Hints Hint 18.1 Bernoulli Equations Hint 18.2 Hint 18.3 The differential equation governing the population is dy dt = y − y2 1000 , y(0) = y0. This is a Bernoulli equation. Hint 18.4 Riccati Equations Hint 18.5 Exchanging the Dependent and Independent Variables Hint 18.6 Exchange the dependent and independent variables. Autonomous Equations *Equidimensional-in-x Equations *Equidimensional-in-y Equations *Scale-Invariant Equations 613
Figure 18.1: The Envelope of y = cx + c².

18.10 Solutions

Solution 18.1
We consider the Clairaut equation,

    y = xp + p².    (18.2)

1. We differentiate Equation 18.2 with respect to x to obtain a second order differential equation.

    y' = y' + xy'' + 2y'y''
    y''(2y' + x) = 0

Equating the first or second factor to zero will lead us to two distinct solutions.

    y'' = 0   or   y' = −x/2

If y'' = 0 then y' ≡ p is a constant (say y' = c). From Equation 18.2 we see that the general solution is

    y(x) = cx + c².    (18.3)

Recall that the general solution of a first order differential equation has one constant of integration. If y' = −x/2 then y = −x²/4 + const. We determine the constant by substituting the expression into Equation 18.2.

    −x²/4 + c = x(−x/2) + (−x/2)²

Thus we see that a singular solution of the Clairaut equation is

    y(x) = −(1/4) x².    (18.4)

Recall that a singular solution of a first order nonlinear differential equation has no constant of integration.

2. Equating the general and singular solutions, y(x), and their derivatives, y'(x), gives us the system of equations,

    cx + c² = −(1/4) x²,   c = −(1/2) x.

Since the first equation is satisfied for c = −x/2, we see that the solution y = cx + c² is tangent to the solution y = −x²/4 at the point (−2c, −c²). The solution y = cx + c² is plotted for c = ..., −1/4, 0, 1/4, ... in Figure 18.1.
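Both solutions are easy to check by machine before we turn to the envelope (a SymPy sketch, tooling assumed):

    import sympy as sp

    x, c = sp.symbols('x c')
    # Both the general solution y = c*x + c**2 and the singular solution
    # y = -x**2/4 must satisfy the Clairaut equation y = x y' + (y')^2.
    for y in (c*x + c**2, -x**2/4):
        p = y.diff(x)
        print(sp.simplify(y - (x*p + p**2)))   # 0 in both cases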
  • 635. The envelope of a one-parameter family F(x, y, c) = 0 is given by the system of equations, F(x, y, c) = 0, Fc(x, y, c) = 0. For the family of solutions y = cx + c2 these equations are y = cx + c2 , 0 = x + 2c. Substituting the solution of the second equation, c = −x/2, into the first equation gives the envelope, y = − 1 2 x x + − 1 2 x 2 = − 1 4 x2 . Thus we see that the singular solution is the envelope of the general solution. Bernoulli Equations Solution 18.2 1. dy dt + p(t)y = q(t)y dy y = (q − p) dt ln y = (q − p) dt + c y = c e R (q−p) dt 2. We consider the Bernoulli equation, dy dt + p(t)y = q(t)yα , α = 1. We divide by yα . y−α y + p(t)y1−α = q(t) This suggests the change of dependent variable u = y1−α , u = (1 − α)y−α y . 1 1 − α d dt y1−α + p(t)y1−α = q(t) du dt + (1 − α)p(t)u = (1 − α)q(t) Thus we obtain a linear equation for u which when solved will give us an implicit solution for y. 3. (a) t2 dy dt + 2ty − y3 = 0, t > 0 t2 y y3 + 2t 1 y2 = 1 We make the change of variables u = y−2 . − 1 2 t2 u + 2tu = 1 u − 4 t u = − 2 t2 615
  • 636. The integrating factor is µ = e R (−4/t) dt = e−4 ln t = t−4 . We multiply by the integrating factor and integrate to obtain the solution. d dt t−4 u = −2t−6 u = 2 5 t−1 + ct4 y−2 = 2 5 t−1 + ct4 y = ± 1 2 5 t−1 + ct4 y = ± √ 5t √ 2 + ct5 (b) dy dx + 2xy + y2 = 0 y y2 + 2x y = −1 We make the change of variables u = y−1 . u − 2xu = 1 The integrating factor is µ = e R (−2x) dx = e−x2 . We multiply by the integrating factor and integrate to obtain the solution. d dx e−x2 u = e−x2 u = ex2 e−x2 dx + c ex2 y = e−x2 e−x2 dx + c Solution 18.3 The differential equation governing the population is dy dt = y − y2 1000 , y(0) = y0. We recognize this as a Bernoulli equation. The substitution u(t) = 1/y(t) yields − du dt = u − 1 1000 , u(0) = 1 y0 . u + u = 1 1000 u = 1 y0 e−t + e−t 1000 t 0 eτ dτ u = 1 1000 + 1 y0 − 1 1000 e−t 616
  • 637. Solving for y(t), y(t) = 1 1000 + 1 y0 − 1 1000 e−t −1 . As a check, we see that as t → ∞, y(t) → 1000, which is an equilibrium solution of the differential equation. dy dt = 0 = y − y2 1000 → y = 1000. Solution 18.4 1. t2 dy dt + 2ty − y3 = 0 dy dt + 2t−1 y = t−2 y3 We make the change of variables u(t) = y−2 (t). u − 4t−1 u = −2t−2 This gives us a first order, linear equation. The integrating factor is I(t) = e R −4t−1 dt = e−4 log t = t−4 . We multiply by the integrating factor and integrate. d dt t−4 u = −2t−6 t−4 u = 2 5 t−5 + c u = 2 5 t−1 + ct4 Finally we write the solution in terms of y(t). y(t) = ± 1 2 5 t−1 + ct4 y(t) = ± √ 5t √ 2 + ct5 2. dy dt − (Γ cos t + T) y = −y3 We make the change of variables u(t) = y−2 (t). u + 2 (Γ cos t + T) u = 2 This gives us a first order, linear equation. The integrating factor is I(t) = e R 2(Γ cos t+T ) dt = e2(Γ sin t+T t) 617
  • 638. We multiply by the integrating factor and integrate. d dt e2(Γ sin t+T t) u = 2 e2(Γ sin t+T t) u = 2 e−2(Γ sin t+T t) e2(Γ sin t+T t) dt + c Finally we write the solution in terms of y(t). y = ± eΓ sin t+T t 2 e2(Γ sin t+T t) dt + c Riccati Equations Solution 18.5 We consider the Ricatti equation, dy dx = a(x)y2 + b(x)y + c(x). (18.5) 1. We substitute y = yp(x) + 1 u(x) into the Ricatti equation, where yp is some particular solution. yp − u u2 = +a(x) y2 p + 2 yp u + 1 u2 + b(x) yp + 1 u + c(x) − u u2 = b(x) 1 u + a(x) 2 yp u + 1 u2 u = − (b + 2ayp) u − a We obtain a first order linear differential equation for u whose solution will contain one constant of integration. 2. We consider a Ricatti equation, y = 1 + x2 − 2xy + y2 . (18.6) We verify that yp(x) = x is a solution. 1 = 1 + x2 − 2xx + x2 Substituting y = yp + 1/u into Equation 18.6 yields, u = − (−2x + 2x) u − 1 u = −x + c y = x + 1 c − x What would happen if we continued this method? Since y = x + 1 c−x is a solution of the Ricatti equation we can make the substitution, y = x + 1 c − x + 1 u(x) , (18.7) 618
  • 639. which will lead to a solution for y which has two constants of integration. Then we could repeat the process, substituting the sum of that solution and 1/u(x) into the Ricatti equation to find a solution with three constants of integration. We know that the general solution of a first order, ordinary differential equation has only one constant of integration. Does this method for Ricatti equations violate this theorem? There’s only one way to find out. We substitute Equation 18.7 into the Ricatti equation. u = − −2x + 2 x + 1 c − x u − 1 u = − 2 c − x u − 1 u + 2 c − x u = −1 The integrating factor is I(x) = e2/(c−x) = e−2 log(c−x) = 1 (c − x)2 . Upon multiplying by the integrating factor, the equation becomes exact. d dx 1 (c − x)2 u = − 1 (c − x)2 u = (c − x)2 −1 c − x + b(c − x)2 u = x − c + b(c − x)2 Thus the Ricatti equation has the solution, y = x + 1 c − x + 1 x − c + b(c − x)2 . It appears that we we have found a solution that has two constants of integration, but appear- ances can be deceptive. We do a little algebraic simplification of the solution. y = x + 1 c − x + 1 (b(c − x) − 1)(c − x) y = x + (b(c − x) − 1) + 1 (b(c − x) − 1)(c − x) y = x + b b(c − x) − 1 y = x + 1 (c − 1/b) − x This is actually a solution, (namely the solution we had before), with one constant of inte- gration, (namely c − 1/b). Thus we see that repeated applications of the procedure will not produce more general solutions. 3. The substitution y = − u au gives us the second order, linear, homogeneous differential equation, u − a a + b u + acu = 0. 619
  • 640. The solution to this linear equation is a linear combination of two homogeneous solutions, u1 and u2. u = c1u1(x) + c2u2(x) The solution of the Ricatti equation is then y = − c1u1(x) + c2u2(x) a(x)(c1u1(x) + c2u2(x)) . Since we can divide the numerator and denominator by either c1 or c2, this answer has only one constant of integration, (namely c1/c2 or c2/c1). Exchanging the Dependent and Independent Variables Solution 18.6 Exchanging the dependent and independent variables in the differential equation, y = √ y xy + y , yields x (y) = y1/2 x + y1/2 . This is a first order differential equation for x(y). x − y1/2 x = y1/2 d dy x exp − 2y3/2 3 = y1/2 exp − 2y3/2 3 x exp − 2y3/2 3 = − exp − 2y3/2 3 + c1 x = −1 + c1 exp 2y3/2 3 x + 1 c1 = exp 2y3/2 3 log x + 1 c1 = 2 3 y3/2 y = 3 2 log x + 1 c1 2/3 y = c + 3 2 log(x + 1) 2/3 Autonomous Equations *Equidimensional-in-x Equations *Equidimensional-in-y Equations *Scale-Invariant Equations 620
Chapter 19

Transformations and Canonical Forms

Prize intensity more than extent. Excellence resides in quality not in quantity. The best is always few and rare - abundance lowers value. Even among men, the giants are usually really dwarfs. Some reckon books by the thickness, as if they were written to exercise the brawn more than the brain. Extent alone never rises above mediocrity; it is the misfortune of universal geniuses that in attempting to be at home everywhere are so nowhere. Intensity gives eminence and rises to the heroic in matters sublime.

-Balthasar Gracian

19.1 The Constant Coefficient Equation

The solution of any second order linear homogeneous differential equation can be written in terms of the solutions to either
$$y'' = 0, \quad\text{or}\quad y'' - y = 0.$$
Consider the general equation
$$y'' + ay' + by = 0.$$
We can solve this differential equation by making the substitution $y = e^{\lambda x}$. This yields the algebraic equation
$$\lambda^2 + a\lambda + b = 0.$$
$$\lambda = \frac{1}{2}\left( -a \pm \sqrt{a^2 - 4b} \right)$$
There are two cases to consider. If $a^2 \ne 4b$ then the solutions are
$$y_1 = e^{(-a + \sqrt{a^2 - 4b})x/2}, \quad y_2 = e^{(-a - \sqrt{a^2 - 4b})x/2}.$$
If $a^2 = 4b$ then we have
$$y_1 = e^{-ax/2}, \quad y_2 = x\,e^{-ax/2}.$$
Note that regardless of the values of $a$ and $b$ the solutions are of the form
$$y = e^{-ax/2}u(x).$$
We would like to write the solutions to the general differential equation in terms of the solutions to simpler differential equations. We make the substitution
$$y = e^{\lambda x}u.$$
The derivatives of $y$ are
$$y' = e^{\lambda x}(u' + \lambda u)$$
$$y'' = e^{\lambda x}(u'' + 2\lambda u' + \lambda^2 u)$$
Substituting these into the differential equation yields
$$u'' + (2\lambda + a)u' + (\lambda^2 + a\lambda + b)u = 0.$$
In order to get rid of the $u'$ term we choose
$$\lambda = -\frac{a}{2}.$$
The equation is then
$$u'' + \left( b - \frac{a^2}{4} \right)u = 0.$$
There are now two cases to consider.

Case 1. If $b = a^2/4$ then the differential equation is
$$u'' = 0,$$
which has solutions $1$ and $x$. The general solution for $y$ is then
$$y = e^{-ax/2}(c_1 + c_2 x).$$

Case 2. If $b \ne a^2/4$ then the differential equation is
$$u'' - \left( \frac{a^2}{4} - b \right)u = 0.$$
We make the change of variables
$$u(x) = v(\xi), \quad x = \mu\xi.$$
The derivatives in terms of $\xi$ are
$$\frac{d}{dx} = \frac{d\xi}{dx}\frac{d}{d\xi} = \frac{1}{\mu}\frac{d}{d\xi}, \quad \frac{d^2}{dx^2} = \frac{1}{\mu}\frac{d}{d\xi}\frac{1}{\mu}\frac{d}{d\xi} = \frac{1}{\mu^2}\frac{d^2}{d\xi^2}.$$
The differential equation for $v$ is
$$\frac{1}{\mu^2}v'' - \left( \frac{a^2}{4} - b \right)v = 0$$
$$v'' - \mu^2\left( \frac{a^2}{4} - b \right)v = 0$$
We choose
$$\mu = \left( \frac{a^2}{4} - b \right)^{-1/2}$$
to obtain
$$v'' - v = 0,$$
which has solutions $e^{\pm\xi}$. The solution for $y$ is
$$y = e^{\lambda x}\left( c_1 e^{x/\mu} + c_2 e^{-x/\mu} \right)$$
$$y = e^{-ax/2}\left( c_1 e^{\sqrt{a^2/4 - b}\,x} + c_2 e^{-\sqrt{a^2/4 - b}\,x} \right)$$
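As a quick aside, one can check this reduction with a computer algebra system. The following Python sketch uses SymPy with the sample coefficients $a = 2$, $b = 5$ (our own arbitrary choice, with $a^2 \ne 4b$); it is a verification, not part of the derivation.

```python
# Verify the constant coefficient solution for a sample equation y'' + 2y' + 5y = 0.
import sympy as sp

x = sp.Symbol('x')
y = sp.Function('y')
sol = sp.dsolve(y(x).diff(x, 2) + 2*y(x).diff(x) + 5*y(x), y(x))
print(sol)
# The solution has the form e^{-a x/2} u(x): here a^2/4 - b = -4 < 0,
# so u(x) is built from cos(2x) and sin(2x).
```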
19.2 Normal Form

19.2.1 Second Order Equations

Consider the second order equation
$$y'' + p(x)y' + q(x)y = 0. \tag{19.1}$$
Through a change of dependent variable, this equation can be transformed to
$$u'' + I(x)u = 0.$$
This is known as the normal form of (19.1). The function $I(x)$ is known as the invariant of the equation.

Now to find the change of variables that will accomplish this transformation. We make the substitution $y(x) = a(x)u(x)$ in (19.1).
$$au'' + 2a'u' + a''u + p(au' + a'u) + qau = 0$$
$$u'' + \left( \frac{2a'}{a} + p \right)u' + \left( \frac{a''}{a} + \frac{pa'}{a} + q \right)u = 0$$
To eliminate the $u'$ term, $a(x)$ must satisfy
$$\frac{2a'}{a} + p = 0$$
$$a' + \frac{1}{2}pa = 0$$
$$a = c\,\exp\left( -\frac{1}{2}\int p(x)\,dx \right).$$
For this choice of $a$, our differential equation for $u$ becomes
$$u'' + \left( q - \frac{p^2}{4} - \frac{p'}{2} \right)u = 0.$$
Two differential equations having the same normal form are called equivalent.

Result 19.2.1 The change of variables
$$y(x) = \exp\left( -\frac{1}{2}\int p(x)\,dx \right)u(x)$$
transforms the differential equation
$$y'' + p(x)y' + q(x)y = 0$$
into its normal form
$$u'' + I(x)u = 0$$
where the invariant of the equation, $I(x)$, is
$$I(x) = q - \frac{p^2}{4} - \frac{p'}{2}.$$
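The transformation of Result 19.2.1 can also be checked symbolically. The following Python sketch (the variable names are our own) substitutes $y = \exp(-\frac{1}{2}\int p\,dx)\,u$ into the equation and confirms that what remains is $u'' + I u$ times the exponential factor; it should print 0.

```python
# Symbolic check of Result 19.2.1 for generic p(x), q(x).
import sympy as sp

x = sp.Symbol('x')
p = sp.Function('p')(x)
q = sp.Function('q')(x)
u = sp.Function('u')(x)

a = sp.exp(-sp.Rational(1, 2) * sp.Integral(p, x))   # the change of variables
y = a * u
L_y = y.diff(x, 2) + p*y.diff(x) + q*y               # left side of (19.1)
inv = q - p**2/4 - p.diff(x)/2                        # the invariant I(x)
print(sp.simplify(L_y - a*(u.diff(x, 2) + inv*u)))    # 0
```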
19.2.2 Higher Order Differential Equations

Consider the third order differential equation
$$y''' + p(x)y'' + q(x)y' + r(x)y = 0.$$
We can eliminate the $y''$ term. Making the change of dependent variable
$$y = u\,\exp\left( -\frac{1}{3}\int p(x)\,dx \right)$$
$$y' = \left( u' - \frac{1}{3}pu \right)\exp\left( -\frac{1}{3}\int p(x)\,dx \right)$$
$$y'' = \left( u'' - \frac{2}{3}pu' + \frac{1}{9}(p^2 - 3p')u \right)\exp\left( -\frac{1}{3}\int p(x)\,dx \right)$$
$$y''' = \left( u''' - pu'' + \frac{1}{3}(p^2 - 3p')u' + \frac{1}{27}(9pp' - 9p'' - p^3)u \right)\exp\left( -\frac{1}{3}\int p(x)\,dx \right)$$
yields the differential equation
$$u''' + \frac{1}{3}(3q - 3p' - p^2)u' + \frac{1}{27}(27r - 9pq - 9p'' + 2p^3)u = 0.$$

Result 19.2.2 The change of variables
$$y(x) = \exp\left( -\frac{1}{n}\int p_{n-1}(x)\,dx \right)u(x)$$
transforms the differential equation
$$y^{(n)} + p_{n-1}(x)y^{(n-1)} + p_{n-2}(x)y^{(n-2)} + \cdots + p_0(x)y = 0$$
into the form
$$u^{(n)} + a_{n-2}(x)u^{(n-2)} + a_{n-3}(x)u^{(n-3)} + \cdots + a_0(x)u = 0.$$
19.3 Transformations of the Independent Variable

19.3.1 Transformation to the Form u'' + a(ξ) u = 0

Consider the second order linear differential equation
$$y'' + p(x)y' + q(x)y = 0.$$
We make the change of independent variable
$$\xi = f(x), \quad u(\xi) = y(x).$$
The derivatives in terms of $\xi$ are
$$\frac{d}{dx} = \frac{d\xi}{dx}\frac{d}{d\xi} = f'\frac{d}{d\xi}$$
$$\frac{d^2}{dx^2} = f'\frac{d}{d\xi}f'\frac{d}{d\xi} = (f')^2\frac{d^2}{d\xi^2} + f''\frac{d}{d\xi}$$
The differential equation becomes
$$(f')^2 u'' + f''u' + pf'u' + qu = 0.$$
In order to eliminate the $u'$ term, $f$ must satisfy
$$f'' + pf' = 0$$
$$f' = \exp\left( -\int p(x)\,dx \right)$$
$$f = \int \exp\left( -\int p(x)\,dx \right)dx.$$
The differential equation for $u$ is then
$$u'' + \frac{q}{(f')^2}u = 0$$
$$u''(\xi) + q(x)\exp\left( 2\int p(x)\,dx \right)u(\xi) = 0.$$

Result 19.3.1 The change of variables
$$\xi = \int \exp\left( -\int p(x)\,dx \right)dx, \quad u(\xi) = y(x)$$
transforms the differential equation
$$y'' + p(x)y' + q(x)y = 0$$
into
$$u''(\xi) + q(x)\exp\left( 2\int p(x)\,dx \right)u(\xi) = 0.$$

19.3.2 Transformation to a Constant Coefficient Equation

Consider the second order linear differential equation
$$y'' + p(x)y' + q(x)y = 0.$$
With the change of independent variable
$$\xi = f(x), \quad u(\xi) = y(x),$$
the differential equation becomes
$$(f')^2 u'' + (f'' + pf')u' + qu = 0.$$
For this to be a constant coefficient equation we must have
$$(f')^2 = c_1 q, \quad\text{and}\quad f'' + pf' = c_2 q,$$
for some constants $c_1$ and $c_2$. Solving the first condition,
$$f' = c\sqrt{q},$$
$$f = c\int \sqrt{q(x)}\,dx.$$
The second constraint becomes
$$\frac{f'' + pf'}{q} = \text{const}$$
$$\frac{\frac{1}{2}cq^{-1/2}q' + pcq^{1/2}}{q} = \text{const}$$
$$\frac{q' + 2pq}{q^{3/2}} = \text{const}.$$

Result 19.3.2 Consider the differential equation
$$y'' + p(x)y' + q(x)y = 0.$$
If the expression
$$\frac{q' + 2pq}{q^{3/2}}$$
is a constant then the change of variables
$$\xi = c\int \sqrt{q(x)}\,dx, \quad u(\xi) = y(x),$$
will yield a constant coefficient differential equation. (Here $c$ is an arbitrary constant.)
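As a concrete test of this criterion (anticipating Exercise 19.4), the sketch below evaluates $(q' + 2pq)/q^{3/2}$ for the Euler equation written in standard form, $y'' + (a_1/x)y' + (a_0/x^2)y = 0$. The symbols are ours; the point is that the expression comes out independent of $x$.

```python
# Test the criterion of Result 19.3.2 on the Euler equation.
import sympy as sp

x, a1, a0 = sp.symbols('x a1 a0', positive=True)
p = a1 / x
q = a0 / x**2
criterion = sp.simplify((q.diff(x) + 2*p*q) / q**sp.Rational(3, 2))
print(criterion)  # 2*(a1 - 1)/sqrt(a0): constant, so the Euler equation
# can be mapped to a constant coefficient equation.
```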
19.4 Integral Equations

Volterra's Equations. Volterra's integral equation of the first kind has the form
$$\int_a^x N(x, \xi)y(\xi)\,d\xi = f(x).$$
The Volterra equation of the second kind is
$$y(x) = f(x) + \lambda\int_a^x N(x, \xi)y(\xi)\,d\xi.$$
$N(x, \xi)$ is known as the kernel of the equation.

Fredholm's Equations. Fredholm's integral equations of the first and second kinds are
$$\int_a^b N(x, \xi)y(\xi)\,d\xi = f(x),$$
$$y(x) = f(x) + \lambda\int_a^b N(x, \xi)y(\xi)\,d\xi.$$

19.4.1 Initial Value Problems

Consider the initial value problem
$$y'' + p(x)y' + q(x)y = f(x), \quad y(a) = \alpha, \quad y'(a) = \beta.$$
Integrating this equation twice yields
$$\int_a^x \int_a^\eta \left[ y''(\xi) + p(\xi)y'(\xi) + q(\xi)y(\xi) \right]d\xi\,d\eta = \int_a^x \int_a^\eta f(\xi)\,d\xi\,d\eta$$
$$\int_a^x (x - \xi)\left[ y''(\xi) + p(\xi)y'(\xi) + q(\xi)y(\xi) \right]d\xi = \int_a^x (x - \xi)f(\xi)\,d\xi.$$
Now we use integration by parts.
$$\Big[ (x - \xi)y'(\xi) \Big]_a^x + \int_a^x y'(\xi)\,d\xi + \Big[ (x - \xi)p(\xi)y(\xi) \Big]_a^x - \int_a^x \big[ (x - \xi)p'(\xi) - p(\xi) \big]y(\xi)\,d\xi + \int_a^x (x - \xi)q(\xi)y(\xi)\,d\xi = \int_a^x (x - \xi)f(\xi)\,d\xi$$
$$-(x - a)y'(a) + y(x) - y(a) - (x - a)p(a)y(a) - \int_a^x \big[ (x - \xi)p'(\xi) - p(\xi) \big]y(\xi)\,d\xi + \int_a^x (x - \xi)q(\xi)y(\xi)\,d\xi = \int_a^x (x - \xi)f(\xi)\,d\xi.$$
We obtain a Volterra integral equation of the second kind for $y(x)$.
$$y(x) = \int_a^x (x - \xi)f(\xi)\,d\xi + (x - a)(\alpha p(a) + \beta) + \alpha + \int_a^x \Big\{ (x - \xi)\big[ p'(\xi) - q(\xi) \big] - p(\xi) \Big\}y(\xi)\,d\xi.$$
Note that the initial conditions for the differential equation are "built into" the Volterra equation. Setting $x = a$ in the Volterra equation yields $y(a) = \alpha$. Differentiating the Volterra equation,
$$y'(x) = \int_a^x f(\xi)\,d\xi + (\alpha p(a) + \beta) - p(x)y(x) + \int_a^x \big[ p'(\xi) - q(\xi) \big]y(\xi)\,d\xi,$$
and setting $x = a$ yields
$$y'(a) = \alpha p(a) + \beta - p(a)\alpha = \beta.$$
(Recall from calculus that $\frac{d}{dx}\int^x g(x, \xi)\,d\xi = g(x, x) + \int^x \frac{\partial}{\partial x}g(x, \xi)\,d\xi$.)

Result 19.4.1 The initial value problem
$$y'' + p(x)y' + q(x)y = f(x), \quad y(a) = \alpha, \quad y'(a) = \beta,$$
is equivalent to the Volterra equation of the second kind
$$y(x) = F(x) + \int_a^x N(x, \xi)y(\xi)\,d\xi$$
where
$$F(x) = \int_a^x (x - \xi)f(\xi)\,d\xi + (x - a)(\alpha p(a) + \beta) + \alpha,$$
$$N(x, \xi) = (x - \xi)\big[ p'(\xi) - q(\xi) \big] - p(\xi).$$
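Since Volterra equations of the second kind can be solved by successive substitution, Result 19.4.1 also suggests a numerical scheme. The sketch below (sample problem, grid size, and iteration count are our own choices) recasts $y'' + y = 1$, $y(0) = y'(0) = 0$ as $y(x) = x^2/2 - \int_0^x (x - \xi)y(\xi)\,d\xi$ and iterates, comparing with the exact solution $1 - \cos x$.

```python
# Solve the Volterra form of y'' + y = 1, y(0)=y'(0)=0 by fixed-point iteration.
import numpy as np

n = 401
x = np.linspace(0.0, 2.0, n)
F = x**2 / 2                     # F(x) for p=0, q=1, f=1, a=0, alpha=beta=0
y = np.zeros(n)
for _ in range(60):              # successive substitution converges here
    ynew = np.empty(n)
    for i in range(n):
        kernel = -(x[i] - x[:i+1]) * y[:i+1]
        ynew[i] = F[i] + np.trapz(kernel, x[:i+1])
    y = ynew
print(np.max(np.abs(y - (1 - np.cos(x)))))  # small; limited by the trapezoid rule
```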
19.4.2 Boundary Value Problems

Consider the boundary value problem
$$y'' = f(x), \quad y(a) = \alpha, \quad y(b) = \beta. \tag{19.2}$$
To obtain a problem with homogeneous boundary conditions, we make the change of variable
$$y(x) = u(x) + \alpha + \frac{\beta - \alpha}{b - a}(x - a)$$
to obtain the problem
$$u'' = f(x), \quad u(a) = u(b) = 0.$$
Now we will use Green's functions to write the solution as an integral. First we solve the problem
$$G'' = \delta(x - \xi), \quad G(a|\xi) = G(b|\xi) = 0.$$
The homogeneous solutions of the differential equation that satisfy the left and right boundary conditions are
$$c_1(x - a) \quad\text{and}\quad c_2(x - b).$$
Thus the Green's function has the form
$$G(x|\xi) = \begin{cases} c_1(x - a), & \text{for } x \le \xi \\ c_2(x - b), & \text{for } x \ge \xi \end{cases}$$
Imposing continuity of $G(x|\xi)$ at $x = \xi$ and a unit jump of $G'(x|\xi)$ at $x = \xi$, we obtain
$$G(x|\xi) = \begin{cases} \frac{(x - a)(\xi - b)}{b - a}, & \text{for } x \le \xi \\ \frac{(x - b)(\xi - a)}{b - a}, & \text{for } x \ge \xi \end{cases}$$
Thus the solution of (19.2) is
$$y(x) = \alpha + \frac{\beta - \alpha}{b - a}(x - a) + \int_a^b G(x|\xi)f(\xi)\,d\xi.$$
Now consider the boundary value problem
$$y'' + p(x)y' + q(x)y = f(x), \quad y(a) = \alpha, \quad y(b) = \beta.$$
From the above result we can see that the solution satisfies
$$y(x) = \alpha + \frac{\beta - \alpha}{b - a}(x - a) + \int_a^b G(x|\xi)\big[ f(\xi) - p(\xi)y'(\xi) - q(\xi)y(\xi) \big]d\xi.$$
Using integration by parts, we can write
$$-\int_a^b G(x|\xi)p(\xi)y'(\xi)\,d\xi = -\Big[ G(x|\xi)p(\xi)y(\xi) \Big]_a^b + \int_a^b \left( \frac{\partial G(x|\xi)}{\partial\xi}p(\xi) + G(x|\xi)p'(\xi) \right)y(\xi)\,d\xi$$
$$= \int_a^b \left( \frac{\partial G(x|\xi)}{\partial\xi}p(\xi) + G(x|\xi)p'(\xi) \right)y(\xi)\,d\xi.$$
Substituting this into our expression for $y(x)$,
$$y(x) = \alpha + \frac{\beta - \alpha}{b - a}(x - a) + \int_a^b G(x|\xi)f(\xi)\,d\xi + \int_a^b \left( \frac{\partial G(x|\xi)}{\partial\xi}p(\xi) + G(x|\xi)\big[ p'(\xi) - q(\xi) \big] \right)y(\xi)\,d\xi,$$
we obtain a Fredholm integral equation of the second kind.
Result 19.4.2 The boundary value problem
$$y'' + p(x)y' + q(x)y = f(x), \quad y(a) = \alpha, \quad y(b) = \beta,$$
is equivalent to the Fredholm equation of the second kind
$$y(x) = F(x) + \int_a^b N(x, \xi)y(\xi)\,d\xi$$
where
$$F(x) = \alpha + \frac{\beta - \alpha}{b - a}(x - a) + \int_a^b G(x|\xi)f(\xi)\,d\xi,$$
$$N(x, \xi) = H(x|\xi),$$
$$G(x|\xi) = \begin{cases} \frac{(x - a)(\xi - b)}{b - a}, & \text{for } x \le \xi \\ \frac{(x - b)(\xi - a)}{b - a}, & \text{for } x \ge \xi, \end{cases}$$
$$H(x|\xi) = \begin{cases} \frac{x - a}{b - a}p(\xi) + \frac{(x - a)(\xi - b)}{b - a}\big[ p'(\xi) - q(\xi) \big], & \text{for } x \le \xi \\ \frac{x - b}{b - a}p(\xi) + \frac{(x - b)(\xi - a)}{b - a}\big[ p'(\xi) - q(\xi) \big], & \text{for } x \ge \xi. \end{cases}$$
19.5 Exercises

The Constant Coefficient Equation

Normal Form

Exercise 19.1
Solve the differential equation
$$y'' + \left( 2 + \frac{4}{3}x \right)y' + \frac{1}{9}\left( 24 + 12x + 4x^2 \right)y = 0.$$
Hint, Solution

Transformations of the Independent Variable

Integral Equations

Exercise 19.2
Show that the solution of the differential equation
$$y'' + 2(a + bx)y' + (c + dx + ex^2)y = 0$$
can be written in terms of one of the following canonical forms:
$$v'' + (\xi^2 + A)v = 0, \quad v'' = \xi v, \quad v'' + v = 0, \quad v'' = 0.$$
Hint, Solution

Exercise 19.3
Show that the solution of the differential equation
$$y'' + 2\left( a + \frac{b}{x} \right)y' + \left( c + \frac{d}{x} + \frac{e}{x^2} \right)y = 0$$
can be written in terms of one of the following canonical forms:
$$v'' + \left( 1 + \frac{A}{\xi} + \frac{B}{\xi^2} \right)v = 0, \quad v'' + \left( \frac{1}{\xi} + \frac{A}{\xi^2} \right)v = 0, \quad v'' + \frac{A}{\xi^2}v = 0.$$
Hint, Solution

Exercise 19.4
Show that the second order Euler equation
$$x^2\frac{d^2y}{dx^2} + a_1 x\frac{dy}{dx} + a_0 y = 0$$
can be transformed to a constant coefficient equation.
Hint, Solution
Exercise 19.5
Solve Bessel's equation of order 1/2,
$$y'' + \frac{1}{x}y' + \left( 1 - \frac{1}{4x^2} \right)y = 0.$$
Hint, Solution
19.6 Hints

The Constant Coefficient Equation

Normal Form

Hint 19.1
Transform the equation to normal form.

Transformations of the Independent Variable

Integral Equations

Hint 19.2
Transform the equation to normal form and then apply the scale transformation $x = \lambda\xi + \mu$.

Hint 19.3
Transform the equation to normal form and then apply the scale transformation $x = \lambda\xi$.

Hint 19.4
Make the change of variables $x = e^t$, $y(x) = u(t)$. Write the derivatives with respect to $x$ in terms of $t$.
$$x = e^t, \quad dx = e^t\,dt, \quad \frac{d}{dx} = e^{-t}\frac{d}{dt}, \quad x\frac{d}{dx} = \frac{d}{dt}$$

Hint 19.5
Transform the equation to normal form.
19.7 Solutions

The Constant Coefficient Equation

Normal Form

Solution 19.1
$$y'' + \left( 2 + \frac{4}{3}x \right)y' + \frac{1}{9}\left( 24 + 12x + 4x^2 \right)y = 0$$
To transform the equation to normal form we make the substitution
$$y = \exp\left( -\frac{1}{2}\int \left( 2 + \frac{4}{3}x \right)dx \right)u = e^{-x - x^2/3}u.$$
The invariant of the equation is
$$I(x) = \frac{1}{9}\left( 24 + 12x + 4x^2 \right) - \frac{1}{4}\left( 2 + \frac{4}{3}x \right)^2 - \frac{1}{2}\frac{d}{dx}\left( 2 + \frac{4}{3}x \right) = 1.$$
The normal form of the differential equation is then
$$u'' + u = 0,$$
which has the general solution
$$u = c_1\cos x + c_2\sin x.$$
Thus the equation for $y$ has the general solution
$$y = c_1 e^{-x - x^2/3}\cos x + c_2 e^{-x - x^2/3}\sin x.$$

Transformations of the Independent Variable

Integral Equations

Solution 19.2
The substitution that will transform the equation to normal form is
$$y = \exp\left( -\frac{1}{2}\int 2(a + bx)\,dx \right)u = e^{-ax - bx^2/2}u.$$
The invariant of the equation is
$$I(x) = c + dx + ex^2 - \frac{1}{4}\big( 2(a + bx) \big)^2 - \frac{1}{2}\frac{d}{dx}\big( 2(a + bx) \big)$$
$$= c - b - a^2 + (d - 2ab)x + (e - b^2)x^2$$
$$\equiv \alpha + \beta x + \gamma x^2.$$
The normal form of the differential equation is
$$u'' + (\alpha + \beta x + \gamma x^2)u = 0.$$
We consider the following cases:

γ = 0.
β = 0.
α = 0. We immediately have the equation
$$u'' = 0.$$
α ≠ 0. With the change of variables
$$v(\xi) = u(x), \quad x = \alpha^{-1/2}\xi,$$
we obtain
$$v'' + v = 0.$$
β ≠ 0. We have the equation
$$u'' + (\alpha + \beta x)u = 0.$$
The scale transformation $x = \lambda\xi + \mu$ yields
$$v'' + \lambda^2\big( \alpha + \beta(\lambda\xi + \mu) \big)v = 0$$
$$v'' = -\big[ \beta\lambda^3\xi + \lambda^2(\beta\mu + \alpha) \big]v.$$
Choosing
$$\lambda = (-\beta)^{-1/3}, \quad \mu = -\frac{\alpha}{\beta}$$
yields the differential equation
$$v'' = \xi v.$$

γ ≠ 0. The scale transformation $x = \lambda\xi + \mu$ yields
$$v'' + \lambda^2\big[ \alpha + \beta(\lambda\xi + \mu) + \gamma(\lambda\xi + \mu)^2 \big]v = 0$$
$$v'' + \lambda^2\big[ \alpha + \beta\mu + \gamma\mu^2 + \lambda(\beta + 2\gamma\mu)\xi + \lambda^2\gamma\xi^2 \big]v = 0.$$
Choosing
$$\lambda = \gamma^{-1/4}, \quad \mu = -\frac{\beta}{2\gamma}$$
yields the differential equation
$$v'' + (\xi^2 + A)v = 0$$
where
$$A = \alpha\gamma^{-1/2} - \frac{1}{4}\beta^2\gamma^{-3/2}.$$

Solution 19.3
The substitution that will transform the equation to normal form is
$$y = \exp\left( -\frac{1}{2}\int 2\left( a + \frac{b}{x} \right)dx \right)u = x^{-b}\,e^{-ax}u.$$
The invariant of the equation is
$$I(x) = c + \frac{d}{x} + \frac{e}{x^2} - \frac{1}{4}\left( 2\left( a + \frac{b}{x} \right) \right)^2 - \frac{1}{2}\frac{d}{dx}\left( 2\left( a + \frac{b}{x} \right) \right)$$
$$= c - a^2 + \frac{d - 2ab}{x} + \frac{e + b - b^2}{x^2}$$
$$\equiv \alpha + \frac{\beta}{x} + \frac{\gamma}{x^2}.$$
The invariant form of the differential equation is
$$u'' + \left( \alpha + \frac{\beta}{x} + \frac{\gamma}{x^2} \right)u = 0.$$
We consider the following cases:
α = 0.
β = 0. We immediately have the equation
$$u'' + \frac{\gamma}{x^2}u = 0.$$
β ≠ 0. We have the equation
$$u'' + \left( \frac{\beta}{x} + \frac{\gamma}{x^2} \right)u = 0.$$
The scale transformation $u(x) = v(\xi)$, $x = \lambda\xi$ yields
$$v'' + \left( \frac{\beta\lambda}{\xi} + \frac{\gamma}{\xi^2} \right)v = 0.$$
Choosing $\lambda = \beta^{-1}$, we obtain
$$v'' + \left( \frac{1}{\xi} + \frac{\gamma}{\xi^2} \right)v = 0.$$
α ≠ 0. The scale transformation $x = \lambda\xi$ yields
$$v'' + \left( \alpha\lambda^2 + \frac{\beta\lambda}{\xi} + \frac{\gamma}{\xi^2} \right)v = 0.$$
Choosing $\lambda = \alpha^{-1/2}$, we obtain
$$v'' + \left( 1 + \frac{\alpha^{-1/2}\beta}{\xi} + \frac{\gamma}{\xi^2} \right)v = 0.$$

Solution 19.4
We write the derivatives with respect to $x$ in terms of $t$.
$$x = e^t, \quad dx = e^t\,dt, \quad \frac{d}{dx} = e^{-t}\frac{d}{dt}, \quad x\frac{d}{dx} = \frac{d}{dt}$$
Now we express $x^2\frac{d^2}{dx^2}$ in terms of $t$.
$$x^2\frac{d^2}{dx^2} = x\frac{d}{dx}\left( x\frac{d}{dx} \right) - x\frac{d}{dx} = \frac{d^2}{dt^2} - \frac{d}{dt}$$
Thus under the change of variables $x = e^t$, $y(x) = u(t)$, the Euler equation becomes
$$u'' - u' + a_1 u' + a_0 u = 0$$
$$u'' + (a_1 - 1)u' + a_0 u = 0.$$

Solution 19.5
The transformation
$$y = \exp\left( -\frac{1}{2}\int \frac{1}{x}\,dx \right)u = x^{-1/2}u
will put the equation in normal form. The invariant is
$$I(x) = 1 - \frac{1}{4x^2} - \frac{1}{4}\frac{1}{x^2} - \frac{1}{2}\left( \frac{-1}{x^2} \right) = 1.$$
Thus we have the differential equation
$$u'' + u = 0,$$
with the solution
$$u = c_1\cos x + c_2\sin x.$$
The solution of Bessel's equation of order 1/2 is
$$y = c_1 x^{-1/2}\cos x + c_2 x^{-1/2}\sin x.$$
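One can confirm Solution 19.5 by direct substitution. The following SymPy sketch checks that both claimed solutions make the residual of Bessel's equation of order 1/2 vanish.

```python
# Verify that x^{-1/2} cos x and x^{-1/2} sin x solve Bessel's equation of order 1/2.
import sympy as sp

x = sp.Symbol('x', positive=True)
for y in (sp.cos(x)/sp.sqrt(x), sp.sin(x)/sp.sqrt(x)):
    residual = y.diff(x, 2) + y.diff(x)/x + (1 - 1/(4*x**2))*y
    print(sp.simplify(residual))  # 0 for both
```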
Chapter 20

The Dirac Delta Function

I do not know what I appear to the world; but to myself I seem to have been only like a boy playing on a seashore, and diverting myself now and then by finding a smoother pebble or a prettier shell than ordinary, whilst the great ocean of truth lay all undiscovered before me.

- Sir Isaac Newton

20.1 Derivative of the Heaviside Function

The Heaviside function $H(x)$ is defined
$$H(x) = \begin{cases} 0 & \text{for } x < 0, \\ 1 & \text{for } x > 0. \end{cases}$$
The derivative of the Heaviside function is zero for $x \ne 0$. At $x = 0$ the derivative is undefined. We will represent the derivative of the Heaviside function by the Dirac delta function, $\delta(x)$. The delta function is zero for $x \ne 0$ and infinite at the point $x = 0$. Since the derivative of $H(x)$ is undefined, $\delta(x)$ is not a function in the conventional sense of the word. One can derive the properties of the delta function rigorously, but the treatment in this text will be almost entirely heuristic.

The Dirac delta function is defined by the properties
$$\delta(x) = \begin{cases} 0 & \text{for } x \ne 0, \\ \infty & \text{for } x = 0, \end{cases} \quad\text{and}\quad \int_{-\infty}^{\infty}\delta(x)\,dx = 1.$$
The second property comes from the fact that $\delta(x)$ represents the derivative of $H(x)$. The Dirac delta function is conceptually pictured in Figure 20.1.

Figure 20.1: The Dirac Delta Function.
Let $f(x)$ be a continuous function that vanishes at infinity. Consider the integral
$$\int_{-\infty}^{\infty} f(x)\delta(x)\,dx.$$
We use integration by parts to evaluate the integral.
$$\int_{-\infty}^{\infty} f(x)\delta(x)\,dx = \Big[ f(x)H(x) \Big]_{-\infty}^{\infty} - \int_{-\infty}^{\infty} f'(x)H(x)\,dx = -\int_0^{\infty} f'(x)\,dx = \big[ -f(x) \big]_0^{\infty} = f(0)$$
We assumed that $f(x)$ vanishes at infinity in order to use integration by parts to evaluate the integral. However, since the delta function is zero for $x \ne 0$, the integrand is nonzero only at $x = 0$. Thus the behavior of the function at infinity should not affect the value of the integral. Thus it is reasonable that
$$f(0) = \int_{-\infty}^{\infty} f(x)\delta(x)\,dx$$
holds for all continuous functions. By changing variables and noting that $\delta(x)$ is symmetric we can derive a more general formula.
$$f(0) = \int_{-\infty}^{\infty} f(\xi)\delta(\xi)\,d\xi$$
$$f(x) = \int_{-\infty}^{\infty} f(\xi + x)\delta(\xi)\,d\xi$$
$$f(x) = \int_{-\infty}^{\infty} f(\xi)\delta(\xi - x)\,d\xi$$
$$f(x) = \int_{-\infty}^{\infty} f(\xi)\delta(x - \xi)\,d\xi$$
This formula is very important in solving inhomogeneous differential equations.

20.2 The Delta Function as a Limit

Consider a function $b(x, \epsilon)$ defined by
$$b(x, \epsilon) = \begin{cases} 0 & \text{for } |x| > \epsilon/2 \\ \frac{1}{\epsilon} & \text{for } |x| < \epsilon/2. \end{cases}$$
The graph of $b(x, 1/10)$ is shown in Figure 20.2.

Figure 20.2: Graph of b(x, 1/10).

The Dirac delta function $\delta(x)$ can be thought of as $b(x, \epsilon)$ in the limit as $\epsilon \to 0$. Note that the delta function so defined satisfies the properties,
$$\delta(x) = \begin{cases} 0 & \text{for } x \ne 0 \\ \infty & \text{for } x = 0 \end{cases} \quad\text{and}\quad \int_{-\infty}^{\infty}\delta(x)\,dx = 1.$$
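This box-function picture is easy to test numerically. In the sketch below (the test function $f$ is an arbitrary smooth choice of ours), $\int f(x)\,b(x, \epsilon)\,dx$ is evaluated for decreasing $\epsilon$ and seen to approach $f(0)$.

```python
# Demonstrate that the integral of f against the box b(x, eps) approaches f(0).
import numpy as np

f = lambda x: np.exp(-x**2) * np.cos(x)
for eps in (1e-1, 1e-2, 1e-3):
    x = np.linspace(-eps/2, eps/2, 1001)
    approx = np.trapz(f(x), x) / eps   # b has height 1/eps on (-eps/2, eps/2)
    print(eps, approx)                  # approaches f(0) = 1 as eps -> 0
```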
Delayed Limiting Process. When the Dirac delta function appears inside an integral, we can think of the delta function as a delayed limiting process.
$$\int_{-\infty}^{\infty} f(x)\delta(x)\,dx \equiv \lim_{\epsilon\to 0}\int_{-\infty}^{\infty} f(x)b(x, \epsilon)\,dx.$$
Let $f(x)$ be a continuous function and let $F'(x) = f(x)$. We compute the integral of $f(x)\delta(x)$.
$$\int_{-\infty}^{\infty} f(x)\delta(x)\,dx = \lim_{\epsilon\to 0}\frac{1}{\epsilon}\int_{-\epsilon/2}^{\epsilon/2} f(x)\,dx = \lim_{\epsilon\to 0}\frac{1}{\epsilon}\Big[ F(x) \Big]_{-\epsilon/2}^{\epsilon/2} = \lim_{\epsilon\to 0}\frac{F(\epsilon/2) - F(-\epsilon/2)}{\epsilon} = F'(0) = f(0)$$

20.3 Higher Dimensions

We can define a Dirac delta function in n-dimensional Cartesian space, $\delta_n(\mathbf{x})$, $\mathbf{x} \in \mathbb{R}^n$. It is defined by the following two properties.
$$\delta_n(\mathbf{x}) = 0 \quad\text{for } \mathbf{x} \ne 0$$
$$\int_{\mathbb{R}^n}\delta_n(\mathbf{x})\,d\mathbf{x} = 1$$
It is easy to verify that the n-dimensional Dirac delta function can be written as a product of 1-dimensional Dirac delta functions.
$$\delta_n(\mathbf{x}) = \prod_{k=1}^n \delta(x_k)$$

20.4 Non-Rectangular Coordinate Systems

We can derive Dirac delta functions in non-rectangular coordinate systems by making a change of variables in the relation,
$$\int_{\mathbb{R}^n}\delta_n(\mathbf{x})\,d\mathbf{x} = 1.$$
Where the transformation is non-singular, one merely divides the Dirac delta function by the Jacobian of the transformation to the coordinate system.

Example 20.4.1 Consider the Dirac delta function in cylindrical coordinates, $(r, \theta, z)$. The Jacobian is $J = r$.
$$\int_{-\infty}^{\infty}\int_0^{2\pi}\int_0^{\infty}\delta_3(\mathbf{x} - \mathbf{x}_0)\,r\,dr\,d\theta\,dz = 1$$
For $r_0 \ne 0$, the Dirac delta function is
$$\delta_3(\mathbf{x} - \mathbf{x}_0) = \frac{1}{r}\delta(r - r_0)\delta(\theta - \theta_0)\delta(z - z_0)$$
since it satisfies the two defining properties.
$$\frac{1}{r}\delta(r - r_0)\delta(\theta - \theta_0)\delta(z - z_0) = 0 \quad\text{for } (r, \theta, z) \ne (r_0, \theta_0, z_0)$$
$$\int_{-\infty}^{\infty}\int_0^{2\pi}\int_0^{\infty}\frac{1}{r}\delta(r - r_0)\delta(\theta - \theta_0)\delta(z - z_0)\,r\,dr\,d\theta\,dz = \int_0^{\infty}\delta(r - r_0)\,dr\int_0^{2\pi}\delta(\theta - \theta_0)\,d\theta\int_{-\infty}^{\infty}\delta(z - z_0)\,dz = 1$$
For $r_0 = 0$, we have
$$\delta_3(\mathbf{x} - \mathbf{x}_0) = \frac{1}{2\pi r}\delta(r)\delta(z - z_0)$$
since this again satisfies the two defining properties.
$$\frac{1}{2\pi r}\delta(r)\delta(z - z_0) = 0 \quad\text{for } (r, z) \ne (0, z_0)$$
$$\int_{-\infty}^{\infty}\int_0^{2\pi}\int_0^{\infty}\frac{1}{2\pi r}\delta(r)\delta(z - z_0)\,r\,dr\,d\theta\,dz = \frac{1}{2\pi}\int_0^{\infty}\delta(r)\,dr\int_0^{2\pi}d\theta\int_{-\infty}^{\infty}\delta(z - z_0)\,dz = 1$$
20.5 Exercises

Exercise 20.1
Let $f(x)$ be a function that is continuous except for a jump discontinuity at $x = 0$. Using a delayed limiting process, show that
$$\frac{f(0^-) + f(0^+)}{2} = \int_{-\infty}^{\infty} f(x)\delta(x)\,dx.$$
Hint, Solution

Exercise 20.2
Show that the Dirac delta function is symmetric.
$$\delta(-x) = \delta(x)$$
Hint, Solution

Exercise 20.3
Show that
$$\delta(cx) = \frac{\delta(x)}{|c|}.$$
Hint, Solution

Exercise 20.4
We will consider the Dirac delta function with a function as an argument, $\delta(y(x))$. Assume that $y(x)$ has simple zeros at the points $\{x_n\}$.
$$y(x_n) = 0, \quad y'(x_n) \ne 0$$
Further assume that $y(x)$ has no multiple zeros. (If $y(x)$ has multiple zeros $\delta(y(x))$ is not well-defined in the same sense that $1/0$ is not well-defined.) Prove that
$$\delta(y(x)) = \sum_n \frac{\delta(x - x_n)}{|y'(x_n)|}.$$
Hint, Solution

Exercise 20.5
Justify the identity
$$\int_{-\infty}^{\infty} f(x)\delta^{(n)}(x)\,dx = (-1)^n f^{(n)}(0).$$
From this show that
$$\delta^{(n)}(-x) = (-1)^n\delta^{(n)}(x) \quad\text{and}\quad x\delta^{(n)}(x) = -n\delta^{(n-1)}(x).$$
Hint, Solution

Exercise 20.6
Consider $\mathbf{x} = (x_1, \ldots, x_n) \in \mathbb{R}^n$ and the curvilinear coordinate system $\boldsymbol{\xi} = (\xi_1, \ldots, \xi_n)$. Show that
$$\delta(\mathbf{x} - \mathbf{a}) = \frac{\delta(\boldsymbol{\xi} - \boldsymbol{\alpha})}{|J|}$$
where $\mathbf{a}$ and $\boldsymbol{\alpha}$ are corresponding points in the two coordinate systems and $J$ is the Jacobian of the transformation from $\mathbf{x}$ to $\boldsymbol{\xi}$.
$$J \equiv \frac{\partial\mathbf{x}}{\partial\boldsymbol{\xi}}$$
Hint, Solution
Exercise 20.7
Determine the Dirac delta function in spherical coordinates, $(r, \theta, \phi)$.
$$x = r\cos\theta\sin\phi, \quad y = r\sin\theta\sin\phi, \quad z = r\cos\phi$$
Hint, Solution
20.6 Hints

Hint 20.1

Hint 20.2
Verify that $\delta(-x)$ satisfies the two properties of the Dirac delta function.

Hint 20.3
Evaluate the integral,
$$\int_{-\infty}^{\infty} f(x)\delta(cx)\,dx,$$
by noting that the Dirac delta function is symmetric and making a change of variables.

Hint 20.4
Let the points $\{\xi_m\}$ partition the interval $(-\infty \ldots \infty)$ such that $y'(x)$ is monotone on each interval $(\xi_m \ldots \xi_{m+1})$. Consider some such interval, $(a \ldots b) \equiv (\xi_m \ldots \xi_{m+1})$. Show that
$$\int_a^b \delta(y(x))\,dx = \begin{cases} \int_\alpha^\beta \frac{\delta(y)}{|y'(x_n)|}\,dy & \text{if } y(x_n) = 0 \text{ for } a < x_n < b \\ 0 & \text{otherwise} \end{cases}$$
for $\alpha = \min(y(a), y(b))$ and $\beta = \max(y(a), y(b))$. Now consider the integral on the interval $(-\infty \ldots \infty)$ as the sum of integrals on the intervals $\{(\xi_m \ldots \xi_{m+1})\}$.

Hint 20.5
Justify the identity,
$$\int_{-\infty}^{\infty} f(x)\delta^{(n)}(x)\,dx = (-1)^n f^{(n)}(0),$$
with integration by parts.

Hint 20.6
The Dirac delta function is defined by the following two properties.
$$\delta(\mathbf{x} - \mathbf{a}) = 0 \quad\text{for } \mathbf{x} \ne \mathbf{a}$$
$$\int_{\mathbb{R}^n}\delta(\mathbf{x} - \mathbf{a})\,d\mathbf{x} = 1$$
Verify that $\delta(\boldsymbol{\xi} - \boldsymbol{\alpha})/|J|$ satisfies these properties in the $\boldsymbol{\xi}$ coordinate system.

Hint 20.7
Consider the special cases $\phi_0 = 0, \pi$ and $r_0 = 0$.
20.7 Solutions

Solution 20.1
Let $F'(x) = f(x)$.
$$\int_{-\infty}^{\infty} f(x)\delta(x)\,dx = \lim_{\epsilon\to 0}\int_{-\infty}^{\infty} f(x)b(x, \epsilon)\,dx$$
$$= \lim_{\epsilon\to 0}\frac{1}{\epsilon}\left( \int_{-\epsilon/2}^0 f(x)\,dx + \int_0^{\epsilon/2} f(x)\,dx \right)$$
$$= \lim_{\epsilon\to 0}\frac{1}{\epsilon}\Big( \big( F(0) - F(-\epsilon/2) \big) + \big( F(\epsilon/2) - F(0) \big) \Big)$$
$$= \lim_{\epsilon\to 0}\frac{1}{2}\left( \frac{F(0) - F(-\epsilon/2)}{\epsilon/2} + \frac{F(\epsilon/2) - F(0)}{\epsilon/2} \right)$$
$$= \frac{F'(0^-) + F'(0^+)}{2} = \frac{f(0^-) + f(0^+)}{2}$$

Solution 20.2
$\delta(-x)$ satisfies the two properties of the Dirac delta function.
$$\delta(-x) = 0 \quad\text{for } x \ne 0$$
$$\int_{-\infty}^{\infty}\delta(-x)\,dx = \int_{\infty}^{-\infty}\delta(x)\,(-dx) = \int_{-\infty}^{\infty}\delta(x)\,dx = 1$$
Therefore $\delta(-x) = \delta(x)$.

Solution 20.3
We note that the Dirac delta function is symmetric and we make a change of variables to derive the identity.
$$\int_{-\infty}^{\infty}\delta(cx)\,dx = \int_{-\infty}^{\infty}\delta(|c|x)\,dx = \int_{-\infty}^{\infty}\frac{\delta(x)}{|c|}\,dx$$
$$\delta(cx) = \frac{\delta(x)}{|c|}$$

Solution 20.4
Let the points $\{\xi_m\}$ partition the interval $(-\infty \ldots \infty)$ such that $y'(x)$ is monotone on each interval $(\xi_m \ldots \xi_{m+1})$. Consider some such interval, $(a \ldots b) \equiv (\xi_m \ldots \xi_{m+1})$. Note that $y'(x)$ is either entirely positive or entirely negative in the interval. First consider the case when it is positive. In this case $y(a) < y(b)$.
$$\int_a^b \delta(y(x))\,dx = \int_{y(a)}^{y(b)}\delta(y)\left( \frac{dy}{dx} \right)^{-1}dy = \int_{y(a)}^{y(b)}\frac{\delta(y)}{y'(x)}\,dy = \begin{cases} \int_{y(a)}^{y(b)}\frac{\delta(y)}{y'(x_n)}\,dy & \text{for } y(x_n) = 0 \text{ if } y(a) < 0 < y(b) \\ 0 & \text{otherwise} \end{cases}$$
Now consider the case that $y'(x)$ is negative on the interval so $y(a) > y(b)$.
$$\int_a^b \delta(y(x))\,dx = \int_{y(a)}^{y(b)}\delta(y)\left( \frac{dy}{dx} \right)^{-1}dy = \int_{y(a)}^{y(b)}\frac{\delta(y)}{y'(x)}\,dy = \int_{y(b)}^{y(a)}\frac{\delta(y)}{-y'(x)}\,dy = \begin{cases} \int_{y(b)}^{y(a)}\frac{\delta(y)}{-y'(x_n)}\,dy & \text{for } y(x_n) = 0 \text{ if } y(b) < 0 < y(a) \\ 0 & \text{otherwise} \end{cases}$$
We conclude that
$$\int_a^b \delta(y(x))\,dx = \begin{cases} \int_\alpha^\beta \frac{\delta(y)}{|y'(x_n)|}\,dy & \text{if } y(x_n) = 0 \text{ for } a < x_n < b \\ 0 & \text{otherwise} \end{cases}$$
for $\alpha = \min(y(a), y(b))$ and $\beta = \max(y(a), y(b))$.

Now we turn to the integral of $\delta(y(x))$ on $(-\infty \ldots \infty)$. Let $\alpha_m = \min(y(\xi_m), y(\xi_{m+1}))$ and $\beta_m = \max(y(\xi_m), y(\xi_{m+1}))$.
$$\int_{-\infty}^{\infty}\delta(y(x))\,dx = \sum_m \int_{\xi_m}^{\xi_{m+1}}\delta(y(x))\,dx = \sum_{m:\;x_n\in(\xi_m\ldots\xi_{m+1})}\int_{\alpha_m}^{\beta_m}\frac{\delta(y)}{|y'(x_n)|}\,dy = \sum_n \int_{-\infty}^{\infty}\frac{\delta(y)}{|y'(x_n)|}\,dy = \int_{-\infty}^{\infty}\sum_n \frac{\delta(y)}{|y'(x_n)|}\,dy$$
$$\delta(y(x)) = \sum_n \frac{\delta(x - x_n)}{|y'(x_n)|}$$
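The result of Solution 20.4 is easy to spot-check numerically with the box approximation of Section 20.2. In the sketch below we take $y(x) = x^2 - 1$ (zeros at $\pm 1$ with $|y'| = 2$) and an arbitrary smooth $f$; both choices, the grid, and the width $\epsilon$ are ours.

```python
# Check that the integral of f(x) delta(y(x)) equals sum_n f(x_n)/|y'(x_n)|.
import numpy as np

eps = 1e-4
f = lambda x: 2 + np.sin(x)
y = lambda x: x**2 - 1
x = np.linspace(-3, 3, 2_000_001)
box = (np.abs(y(x)) < eps/2) / eps        # box approximation of delta(y(x))
lhs = np.trapz(f(x) * box, x)
rhs = f(-1)/2 + f(1)/2                    # sum over the zeros, |y'| = 2
print(lhs, rhs)                           # the two values agree to a couple digits
```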
Solution 20.5
To justify the identity,
$$\int_{-\infty}^{\infty} f(x)\delta^{(n)}(x)\,dx = (-1)^n f^{(n)}(0),$$
we will use integration by parts.
$$\int_{-\infty}^{\infty} f(x)\delta^{(n)}(x)\,dx = \Big[ f(x)\delta^{(n-1)}(x) \Big]_{-\infty}^{\infty} - \int_{-\infty}^{\infty} f'(x)\delta^{(n-1)}(x)\,dx = -\int_{-\infty}^{\infty} f'(x)\delta^{(n-1)}(x)\,dx = (-1)^n\int_{-\infty}^{\infty} f^{(n)}(x)\delta(x)\,dx = (-1)^n f^{(n)}(0)$$
Now we use this identity to derive the other two. Making the change of variables $x \to -x$,
$$\int_{-\infty}^{\infty} f(x)\delta^{(n)}(-x)\,dx = \int_{-\infty}^{\infty} f(-x)\delta^{(n)}(x)\,dx = (-1)^n\left[ \frac{d^n}{dx^n}f(-x) \right]_{x=0} = (-1)^n(-1)^n f^{(n)}(0) = \int_{-\infty}^{\infty} f(x)(-1)^n\delta^{(n)}(x)\,dx.$$
Since this holds for all test functions $f$,
$$\delta^{(n)}(-x) = (-1)^n\delta^{(n)}(x).$$
Applying the identity to $xf(x)$,
$$\int_{-\infty}^{\infty} f(x)\,x\delta^{(n)}(x)\,dx = (-1)^n\left[ \frac{d^n}{dx^n}\big( xf(x) \big) \right]_{x=0} = (-1)^n\Big[ xf^{(n)}(x) + nf^{(n-1)}(x) \Big]_{x=0} = -n(-1)^{n-1}f^{(n-1)}(0) = \int_{-\infty}^{\infty} f(x)\left( -n\delta^{(n-1)}(x) \right)dx.$$
Thus
$$x\delta^{(n)}(x) = -n\delta^{(n-1)}(x).$$

Solution 20.6
The Dirac delta function is defined by the following two properties.
$$\delta(\mathbf{x} - \mathbf{a}) = 0 \quad\text{for } \mathbf{x} \ne \mathbf{a}$$
$$\int_{\mathbb{R}^n}\delta(\mathbf{x} - \mathbf{a})\,d\mathbf{x} = 1$$
We verify that $\delta(\boldsymbol{\xi} - \boldsymbol{\alpha})/|J|$ satisfies these properties in the $\boldsymbol{\xi}$ coordinate system.
$$\frac{\delta(\boldsymbol{\xi} - \boldsymbol{\alpha})}{|J|} = \frac{\delta(\xi_1 - \alpha_1)\cdots\delta(\xi_n - \alpha_n)}{|J|} = 0 \quad\text{for } \boldsymbol{\xi} \ne \boldsymbol{\alpha}$$
$$\int \frac{\delta(\boldsymbol{\xi} - \boldsymbol{\alpha})}{|J|}\,|J|\,d\boldsymbol{\xi} = \int \delta(\boldsymbol{\xi} - \boldsymbol{\alpha})\,d\boldsymbol{\xi} = \int \delta(\xi_1 - \alpha_1)\,d\xi_1\cdots\int \delta(\xi_n - \alpha_n)\,d\xi_n = 1$$
We conclude that $\delta(\boldsymbol{\xi} - \boldsymbol{\alpha})/|J|$ is the Dirac delta function in the $\boldsymbol{\xi}$ coordinate system.
$$\delta(\mathbf{x} - \mathbf{a}) = \frac{\delta(\boldsymbol{\xi} - \boldsymbol{\alpha})}{|J|}$$

Solution 20.7
We consider the Dirac delta function in spherical coordinates, $(r, \theta, \phi)$. The Jacobian is $J = r^2\sin\phi$.
$$\int_0^{\pi}\int_0^{2\pi}\int_0^{\infty}\delta_3(\mathbf{x} - \mathbf{x}_0)\,r^2\sin\phi\,dr\,d\theta\,d\phi = 1$$
For $r_0 \ne 0$, and $\phi_0 \ne 0, \pi$, the Dirac delta function is
$$\delta_3(\mathbf{x} - \mathbf{x}_0) = \frac{1}{r^2\sin\phi}\delta(r - r_0)\delta(\theta - \theta_0)\delta(\phi - \phi_0)$$
since it satisfies the two defining properties.
$$\frac{1}{r^2\sin\phi}\delta(r - r_0)\delta(\theta - \theta_0)\delta(\phi - \phi_0) = 0 \quad\text{for } (r, \theta, \phi) \ne (r_0, \theta_0, \phi_0)$$
$$\int_0^{\pi}\int_0^{2\pi}\int_0^{\infty}\frac{1}{r^2\sin\phi}\delta(r - r_0)\delta(\theta - \theta_0)\delta(\phi - \phi_0)\,r^2\sin\phi\,dr\,d\theta\,d\phi = \int_0^{\infty}\delta(r - r_0)\,dr\int_0^{2\pi}\delta(\theta - \theta_0)\,d\theta\int_0^{\pi}\delta(\phi - \phi_0)\,d\phi = 1$$
For $\phi_0 = 0$ or $\phi_0 = \pi$, the Dirac delta function is
$$\delta_3(\mathbf{x} - \mathbf{x}_0) = \frac{1}{2\pi r^2\sin\phi}\delta(r - r_0)\delta(\phi - \phi_0).$$
We check that the value of the integral is unity.
$$\int_0^{\pi}\int_0^{2\pi}\int_0^{\infty}\frac{1}{2\pi r^2\sin\phi}\delta(r - r_0)\delta(\phi - \phi_0)\,r^2\sin\phi\,dr\,d\theta\,d\phi = \frac{1}{2\pi}\int_0^{\infty}\delta(r - r_0)\,dr\int_0^{2\pi}d\theta\int_0^{\pi}\delta(\phi - \phi_0)\,d\phi = 1$$
For $r_0 = 0$ the Dirac delta function is
$$\delta_3(\mathbf{x}) = \frac{1}{4\pi r^2}\delta(r).$$
We verify that the value of the integral is unity.
$$\int_0^{\pi}\int_0^{2\pi}\int_0^{\infty}\frac{1}{4\pi r^2}\delta(r)\,r^2\sin\phi\,dr\,d\theta\,d\phi = \frac{1}{4\pi}\int_0^{\infty}\delta(r)\,dr\int_0^{2\pi}d\theta\int_0^{\pi}\sin\phi\,d\phi = 1$$
Chapter 21

Inhomogeneous Differential Equations

Feelin' stupid? I know I am!

-Homer Simpson

21.1 Particular Solutions

Consider the nth order linear homogeneous equation
$$L[y] \equiv y^{(n)} + p_{n-1}(x)y^{(n-1)} + \cdots + p_1(x)y' + p_0(x)y = 0.$$
Let $\{y_1, y_2, \ldots, y_n\}$ be a set of linearly independent homogeneous solutions, $L[y_k] = 0$. We know that the general solution of the homogeneous equation is a linear combination of the homogeneous solutions.
$$y_h = \sum_{k=1}^n c_k y_k(x)$$
Now consider the nth order linear inhomogeneous equation
$$L[y] \equiv y^{(n)} + p_{n-1}(x)y^{(n-1)} + \cdots + p_1(x)y' + p_0(x)y = f(x).$$
Any function $y_p$ which satisfies this equation is called a particular solution of the differential equation. We want to know the general solution of the inhomogeneous equation. Later in this chapter we will cover methods of constructing this solution; now we consider the form of the solution.

Let $y_p$ be a particular solution. Note that $y_p + h$ is a particular solution if $h$ satisfies the homogeneous equation.
$$L[y_p + h] = L[y_p] + L[h] = f + 0 = f$$
Therefore $y_p + y_h$ satisfies the inhomogeneous equation. We show that this is the general solution of the inhomogeneous equation. Let $y_p$ and $\eta_p$ both be solutions of the inhomogeneous equation $L[y] = f$. The difference of $y_p$ and $\eta_p$ is a homogeneous solution.
$$L[y_p - \eta_p] = L[y_p] - L[\eta_p] = f - f = 0$$
$y_p$ and $\eta_p$ differ by a linear combination of the homogeneous solutions $\{y_k\}$. Therefore the general solution of $L[y] = f$ is the sum of any particular solution $y_p$ and the general homogeneous solution $y_h$.
$$y_p + y_h = y_p(x) + \sum_{k=1}^n c_k y_k(x)$$
Result 21.1.1 The general solution of the nth order linear inhomogeneous equation $L[y] = f(x)$ is
$$y = y_p + c_1 y_1 + c_2 y_2 + \cdots + c_n y_n,$$
where $y_p$ is a particular solution, $\{y_1, \ldots, y_n\}$ is a set of linearly independent homogeneous solutions, and the $c_k$'s are arbitrary constants.

Example 21.1.1 The differential equation
$$y'' + y = \sin(2x)$$
has the two homogeneous solutions
$$y_1 = \cos x, \quad y_2 = \sin x,$$
and a particular solution
$$y_p = -\frac{1}{3}\sin(2x).$$
We can add any combination of the homogeneous solutions to $y_p$ and it will still be a particular solution. For example,
$$\eta_p = -\frac{1}{3}\sin(2x) - \frac{1}{3}\sin x = -\frac{2}{3}\sin\left( \frac{3x}{2} \right)\cos\left( \frac{x}{2} \right)$$
is a particular solution.

21.2 Method of Undetermined Coefficients

The first method we present for computing particular solutions is the method of undetermined coefficients. For some simple differential equations, (primarily constant coefficient equations), and some simple inhomogeneities we are able to guess the form of a particular solution. This form will contain some unknown parameters. We substitute this form into the differential equation to determine the parameters and thus determine a particular solution.

Later in this chapter we will present general methods which work for any linear differential equation and any inhomogeneity. Thus one might wonder why I would present a method that works only for some simple problems. (And why it is called a "method" if it amounts to no more than guessing.) The answer is that guessing an answer is less grungy than computing it with the formulas we will develop later. Also, the process of this guessing is not random, there is rhyme and reason to it.

Consider an nth order constant coefficient, inhomogeneous equation.
$$L[y] \equiv y^{(n)} + a_{n-1}y^{(n-1)} + \cdots + a_1 y' + a_0 y = f(x)$$
If $f(x)$ is one of a few simple forms, then we can guess the form of a particular solution. Below we enumerate some cases.

f = p(x). If $f$ is an mth order polynomial, $f(x) = p_m x^m + \cdots + p_1 x + p_0$, then guess
$$y_p = c_m x^m + \cdots + c_1 x + c_0.$$
f = p(x) e^{ax}. If $f$ is a polynomial times an exponential then guess
$$y_p = (c_m x^m + \cdots + c_1 x + c_0)\,e^{ax}.$$

f = p(x) e^{ax} cos(bx). If $f$ is a cosine or sine times a polynomial and perhaps an exponential, $f(x) = p(x)\,e^{ax}\cos(bx)$ or $f(x) = p(x)\,e^{ax}\sin(bx)$, then guess
$$y_p = (c_m x^m + \cdots + c_1 x + c_0)\,e^{ax}\cos(bx) + (d_m x^m + \cdots + d_1 x + d_0)\,e^{ax}\sin(bx).$$
Likewise for hyperbolic sines and hyperbolic cosines.

Example 21.2.1 Consider
$$y'' - 2y' + y = t^2.$$
The homogeneous solutions are $y_1 = e^t$ and $y_2 = t\,e^t$. We guess a particular solution of the form
$$y_p = at^2 + bt + c.$$
We substitute the expression into the differential equation and equate coefficients of powers of $t$ to determine the parameters.
$$y_p'' - 2y_p' + y_p = t^2$$
$$(2a) - 2(2at + b) + (at^2 + bt + c) = t^2$$
$$(a - 1)t^2 + (b - 4a)t + (2a - 2b + c) = 0$$
$$a - 1 = 0, \quad b - 4a = 0, \quad 2a - 2b + c = 0$$
$$a = 1, \quad b = 4, \quad c = 6$$
A particular solution is
$$y_p = t^2 + 4t + 6.$$
If the inhomogeneity is a sum of terms, $L[y] = f \equiv f_1 + \cdots + f_k$, you can solve the problems $L[y] = f_1$, \ldots, $L[y] = f_k$ independently and then take the sum of the solutions as a particular solution of $L[y] = f$.

Example 21.2.2 Consider
$$L[y] \equiv y'' - 2y' + y = t^2 + e^{2t}. \tag{21.1}$$
The homogeneous solutions are $y_1 = e^t$ and $y_2 = t\,e^t$. We already know a particular solution to $L[y] = t^2$. We seek a particular solution to $L[y] = e^{2t}$. We guess a particular solution of the form
$$y_p = a\,e^{2t}.$$
We substitute the expression into the differential equation to determine the parameter.
$$y_p'' - 2y_p' + y_p = e^{2t}$$
$$4a\,e^{2t} - 4a\,e^{2t} + a\,e^{2t} = e^{2t}$$
$$a = 1$$
A particular solution of $L[y] = e^{2t}$ is $y_p = e^{2t}$. Thus a particular solution of Equation 21.1 is
$$y_p = t^2 + 4t + 6 + e^{2t}.$$
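The coefficient matching in Examples 21.2.1 and 21.2.2 can be automated. The sketch below (a SymPy aside of ours, not part of the text's method) substitutes the combined guess into $L[y] = t^2 + e^{2t}$ and solves for the undetermined coefficients.

```python
# Undetermined coefficients for y'' - 2y' + y = t^2 + e^{2t} via SymPy.
import sympy as sp

t, a, b, c, d, E = sp.symbols('t a b c d E')
yp = a*t**2 + b*t + c + d*sp.exp(2*t)                  # the guessed form
L = yp.diff(t, 2) - 2*yp.diff(t) + yp
residual = sp.expand(L - t**2 - sp.exp(2*t)).subs(sp.exp(2*t), E)
eqs = sp.Poly(residual, t, E).coeffs()                 # match coefficients of t^k, e^{2t}
print(sp.solve(eqs, [a, b, c, d]))                     # {a: 1, b: 4, c: 6, d: 1}
```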
The above guesses will not work if the inhomogeneity is a homogeneous solution. In this case, multiply the guess by the lowest power of $x$ such that the guess does not contain homogeneous solutions.

Example 21.2.3 Consider
$$L[y] \equiv y'' - 2y' + y = e^t.$$
The homogeneous solutions are $y_1 = e^t$ and $y_2 = t\,e^t$. Guessing a particular solution of the form $y_p = a\,e^t$ would not work because $L[e^t] = 0$. We guess a particular solution of the form
$$y_p = at^2\,e^t.$$
We substitute the expression into the differential equation and equate coefficients of like terms to determine the parameters.
$$y_p'' - 2y_p' + y_p = e^t$$
$$(at^2 + 4at + 2a)\,e^t - 2(at^2 + 2at)\,e^t + at^2\,e^t = e^t$$
$$2a\,e^t = e^t$$
$$a = \frac{1}{2}$$
A particular solution is
$$y_p = \frac{t^2}{2}\,e^t.$$

Example 21.2.4 Consider
$$y'' + \frac{1}{x}y' + \frac{1}{x^2}y = x, \quad x > 0.$$
The homogeneous solutions are $y_1 = \cos(\ln x)$ and $y_2 = \sin(\ln x)$. We guess a particular solution of the form
$$y_p = ax^3.$$
We substitute the expression into the differential equation and equate coefficients of like terms to determine the parameter.
$$y_p'' + \frac{1}{x}y_p' + \frac{1}{x^2}y_p = x$$
$$6ax + 3ax + ax = x$$
$$a = \frac{1}{10}$$
A particular solution is
$$y_p = \frac{x^3}{10}.$$

21.3 Variation of Parameters

In this section we present a method for computing a particular solution of an inhomogeneous equation given that we know the homogeneous solutions. We will first consider second order equations and then generalize the result for nth order equations.

21.3.1 Second Order Differential Equations

Consider the second order inhomogeneous equation,
$$L[y] \equiv y'' + p(x)y' + q(x)y = f(x), \quad\text{on } a < x < b.$$
We assume that the coefficient functions in the differential equation are continuous on $[a \ldots b]$. Let $y_1(x)$ and $y_2(x)$ be two linearly independent solutions to the homogeneous equation. Since the Wronskian,
$$W(x) = \exp\left( -\int p(x)\,dx \right),$$
is non-vanishing, we know that these solutions exist. We seek a particular solution of the form,
$$y_p = u_1(x)y_1 + u_2(x)y_2.$$
We compute the derivatives of $y_p$.
$$y_p' = u_1'y_1 + u_1y_1' + u_2'y_2 + u_2y_2'$$
$$y_p'' = u_1''y_1 + 2u_1'y_1' + u_1y_1'' + u_2''y_2 + 2u_2'y_2' + u_2y_2''$$
We substitute the expression for $y_p$ and its derivatives into the inhomogeneous equation and use the fact that $y_1$ and $y_2$ are homogeneous solutions to simplify the equation.
$$u_1''y_1 + 2u_1'y_1' + u_1y_1'' + u_2''y_2 + 2u_2'y_2' + u_2y_2'' + p(u_1'y_1 + u_1y_1' + u_2'y_2 + u_2y_2') + q(u_1y_1 + u_2y_2) = f$$
$$u_1''y_1 + 2u_1'y_1' + u_2''y_2 + 2u_2'y_2' + p(u_1'y_1 + u_2'y_2) = f$$
This is an ugly equation for $u_1$ and $u_2$, however, we have an ace up our sleeve. Since $u_1$ and $u_2$ are undetermined functions of $x$, we are free to impose a constraint. We choose this constraint to simplify the algebra.
$$u_1'y_1 + u_2'y_2 = 0$$
This constraint simplifies the derivatives of $y_p$,
$$y_p' = u_1'y_1 + u_1y_1' + u_2'y_2 + u_2y_2' = u_1y_1' + u_2y_2'$$
$$y_p'' = u_1'y_1' + u_1y_1'' + u_2'y_2' + u_2y_2''.$$
We substitute the new expressions for $y_p$ and its derivatives into the inhomogeneous differential equation to obtain a much simpler equation than before.
$$u_1'y_1' + u_1y_1'' + u_2'y_2' + u_2y_2'' + p(u_1y_1' + u_2y_2') + q(u_1y_1 + u_2y_2) = f(x)$$
$$u_1'y_1' + u_2'y_2' + u_1L[y_1] + u_2L[y_2] = f(x)$$
$$u_1'y_1' + u_2'y_2' = f(x).$$
With the constraint, we have a system of linear equations for $u_1'$ and $u_2'$.
$$u_1'y_1 + u_2'y_2 = 0$$
$$u_1'y_1' + u_2'y_2' = f(x).$$
$$\begin{pmatrix} y_1 & y_2 \\ y_1' & y_2' \end{pmatrix}\begin{pmatrix} u_1' \\ u_2' \end{pmatrix} = \begin{pmatrix} 0 \\ f \end{pmatrix}$$
We solve this system using Cramer's rule. (See Appendix O.)
$$u_1' = -\frac{f(x)y_2}{W(x)}, \quad u_2' = \frac{f(x)y_1}{W(x)}$$
Here $W(x)$ is the Wronskian.
$$W(x) = \begin{vmatrix} y_1 & y_2 \\ y_1' & y_2' \end{vmatrix}$$
We integrate to get $u_1$ and $u_2$. This gives us a particular solution.
$$y_p = -y_1\int \frac{f(x)y_2(x)}{W(x)}\,dx + y_2\int \frac{f(x)y_1(x)}{W(x)}\,dx.$$

Result 21.3.1 Let $y_1$ and $y_2$ be linearly independent homogeneous solutions of
$$L[y] = y'' + p(x)y' + q(x)y = f(x).$$
A particular solution is
$$y_p = -y_1(x)\int \frac{f(x)y_2(x)}{W(x)}\,dx + y_2(x)\int \frac{f(x)y_1(x)}{W(x)}\,dx,$$
where $W(x)$ is the Wronskian of $y_1$ and $y_2$.

Example 21.3.1 Consider the equation,
$$y'' + y = \cos(2x).$$
The homogeneous solutions are $y_1 = \cos x$ and $y_2 = \sin x$. We compute the Wronskian.
$$W(x) = \begin{vmatrix} \cos x & \sin x \\ -\sin x & \cos x \end{vmatrix} = \cos^2 x + \sin^2 x = 1$$
We use variation of parameters to find a particular solution.
$$y_p = -\cos(x)\int \cos(2x)\sin(x)\,dx + \sin(x)\int \cos(2x)\cos(x)\,dx$$
$$= -\frac{1}{2}\cos(x)\int \big( \sin(3x) - \sin(x) \big)dx + \frac{1}{2}\sin(x)\int \big( \cos(3x) + \cos(x) \big)dx$$
$$= -\frac{1}{2}\cos(x)\left( -\frac{1}{3}\cos(3x) + \cos(x) \right) + \frac{1}{2}\sin(x)\left( \frac{1}{3}\sin(3x) + \sin(x) \right)$$
$$= \frac{1}{2}\left( \sin^2(x) - \cos^2(x) \right) + \frac{1}{6}\big( \cos(3x)\cos(x) + \sin(3x)\sin(x) \big)$$
$$= -\frac{1}{2}\cos(2x) + \frac{1}{6}\cos(2x)$$
$$= -\frac{1}{3}\cos(2x)$$
The general solution of the inhomogeneous equation is
$$y = -\frac{1}{3}\cos(2x) + c_1\cos(x) + c_2\sin(x).$$
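As a symbolic aside (our own check, using the formula of Result 21.3.1 directly), the following sketch reproduces the particular solution of Example 21.3.1 and confirms it solves the equation.

```python
# Variation of parameters for y'' + y = cos 2x with y1 = cos x, y2 = sin x, W = 1.
import sympy as sp

x = sp.Symbol('x')
y1, y2, f, W = sp.cos(x), sp.sin(x), sp.cos(2*x), sp.Integer(1)
yp = -y1*sp.integrate(f*y2/W, x) + y2*sp.integrate(f*y1/W, x)
print(sp.simplify(yp))                        # -cos(2x)/3 (up to trig rewriting)
print(sp.simplify(yp.diff(x, 2) + yp - f))    # residual: 0
```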
21.3.2 Higher Order Differential Equations

Consider the nth order inhomogeneous equation,
$$L[y] = y^{(n)} + p_{n-1}(x)y^{(n-1)} + \cdots + p_1(x)y' + p_0(x)y = f(x), \quad\text{on } a < x < b.$$
We assume that the coefficient functions in the differential equation are continuous on $[a \ldots b]$. Let $\{y_1, \ldots, y_n\}$ be a set of linearly independent solutions to the homogeneous equation. Since the Wronskian,
$$W(x) = \exp\left( -\int p_{n-1}(x)\,dx \right),$$
is non-vanishing, we know that these solutions exist. We seek a particular solution of the form
$$y_p = u_1 y_1 + u_2 y_2 + \cdots + u_n y_n.$$
Since $\{u_1, \ldots, u_n\}$ are undetermined functions of $x$, we are free to impose $n - 1$ constraints. We choose these constraints to simplify the algebra.
$$u_1'y_1 + u_2'y_2 + \cdots + u_n'y_n = 0$$
$$u_1'y_1' + u_2'y_2' + \cdots + u_n'y_n' = 0$$
$$\vdots$$
$$u_1'y_1^{(n-2)} + u_2'y_2^{(n-2)} + \cdots + u_n'y_n^{(n-2)} = 0$$
We differentiate the expression for $y_p$, utilizing our constraints.
$$y_p' = u_1y_1' + u_2y_2' + \cdots + u_ny_n'$$
$$y_p'' = u_1y_1'' + u_2y_2'' + \cdots + u_ny_n''$$
$$\vdots$$
$$y_p^{(n)} = u_1y_1^{(n)} + u_2y_2^{(n)} + \cdots + u_ny_n^{(n)} + u_1'y_1^{(n-1)} + u_2'y_2^{(n-1)} + \cdots + u_n'y_n^{(n-1)}$$
We substitute $y_p$ and its derivatives into the inhomogeneous differential equation and use the fact that the $y_k$ are homogeneous solutions.
$$u_1y_1^{(n)} + \cdots + u_ny_n^{(n)} + u_1'y_1^{(n-1)} + \cdots + u_n'y_n^{(n-1)} + p_{n-1}\left( u_1y_1^{(n-1)} + \cdots + u_ny_n^{(n-1)} \right) + \cdots + p_0\left( u_1y_1 + \cdots + u_ny_n \right) = f$$
$$u_1L[y_1] + u_2L[y_2] + \cdots + u_nL[y_n] + u_1'y_1^{(n-1)} + u_2'y_2^{(n-1)} + \cdots + u_n'y_n^{(n-1)} = f$$
$$u_1'y_1^{(n-1)} + u_2'y_2^{(n-1)} + \cdots + u_n'y_n^{(n-1)} = f.$$
With the constraints, we have a system of linear equations for $\{u_1', \ldots, u_n'\}$.
$$\begin{pmatrix} y_1 & y_2 & \cdots & y_n \\ y_1' & y_2' & \cdots & y_n' \\ \vdots & \vdots & \ddots & \vdots \\ y_1^{(n-1)} & y_2^{(n-1)} & \cdots & y_n^{(n-1)} \end{pmatrix}\begin{pmatrix} u_1' \\ u_2' \\ \vdots \\ u_n' \end{pmatrix} = \begin{pmatrix} 0 \\ \vdots \\ 0 \\ f \end{pmatrix}.$$
We solve this system using Cramer's rule. (See Appendix O.)
$$u_k' = (-1)^{n+k+1}\frac{W[y_1, \ldots, y_{k-1}, y_{k+1}, \ldots, y_n]}{W[y_1, y_2, \ldots, y_n]}f, \quad\text{for } k = 1, \ldots, n.$$
Here $W$ is the Wronskian. We integrate to obtain the $u_k$'s.
$$u_k = (-1)^{n+k+1}\int \frac{W[y_1, \ldots, y_{k-1}, y_{k+1}, \ldots, y_n](x)}{W[y_1, y_2, \ldots, y_n](x)}f(x)\,dx, \quad\text{for } k = 1, \ldots, n$$
Result 21.3.2 Let $\{y_1, \ldots, y_n\}$ be linearly independent homogeneous solutions of
$$L[y] = y^{(n)} + p_{n-1}(x)y^{(n-1)} + \cdots + p_1(x)y' + p_0(x)y = f(x), \quad\text{on } a < x < b.$$
A particular solution is
$$y_p = u_1 y_1 + u_2 y_2 + \cdots + u_n y_n,$$
where
$$u_k = (-1)^{n+k+1}\int \frac{W[y_1, \ldots, y_{k-1}, y_{k+1}, \ldots, y_n](x)}{W[y_1, y_2, \ldots, y_n](x)}f(x)\,dx, \quad\text{for } k = 1, \ldots, n,$$
and $W[y_1, y_2, \ldots, y_n](x)$ is the Wronskian of $\{y_1(x), \ldots, y_n(x)\}$.

21.4 Piecewise Continuous Coefficients and Inhomogeneities

Example 21.4.1 Consider the problem
$$y'' - y = e^{-\alpha|x|}, \quad y(\pm\infty) = 0, \quad \alpha > 0, \; \alpha \ne 1.$$
The homogeneous solutions of the differential equation are $e^x$ and $e^{-x}$. We use variation of parameters to find a particular solution for $x > 0$.
$$y_p = -e^x\int^x \frac{e^{-\xi}\,e^{-\alpha\xi}}{-2}\,d\xi + e^{-x}\int^x \frac{e^{\xi}\,e^{-\alpha\xi}}{-2}\,d\xi$$
$$= \frac{1}{2}e^x\int^x e^{-(\alpha + 1)\xi}\,d\xi - \frac{1}{2}e^{-x}\int^x e^{(1 - \alpha)\xi}\,d\xi$$
$$= -\frac{1}{2(\alpha + 1)}e^{-\alpha x} + \frac{1}{2(\alpha - 1)}e^{-\alpha x}$$
$$= \frac{e^{-\alpha x}}{\alpha^2 - 1}, \quad\text{for } x > 0$$
A particular solution for $x < 0$ is
$$y_p = \frac{e^{\alpha x}}{\alpha^2 - 1}, \quad\text{for } x < 0.$$
Thus a particular solution is
$$y_p = \frac{e^{-\alpha|x|}}{\alpha^2 - 1}.$$
The general solution is
$$y = \frac{1}{\alpha^2 - 1}e^{-\alpha|x|} + c_1 e^x + c_2 e^{-x}.$$
Applying the boundary conditions, we see that $c_1 = c_2 = 0$. Apparently the solution is
$$y = \frac{e^{-\alpha|x|}}{\alpha^2 - 1}.$$
This function is plotted in Figure 21.1. This function satisfies the differential equation for positive and negative $x$. It also satisfies the boundary conditions. However, this is NOT a solution to the differential equation. Since the differential equation has no singular points and the inhomogeneous term is continuous, the solution must be twice continuously differentiable.
Figure 21.1: The Incorrect and Correct Solution to the Differential Equation.

Since the derivative of $e^{-\alpha|x|}/(\alpha^2 - 1)$ has a jump discontinuity at $x = 0$, the second derivative does not exist. Thus this function could not possibly be a solution to the differential equation. In the next example we examine the right way to solve this problem.

Example 21.4.2 Again consider
$$y'' - y = e^{-\alpha|x|}, \quad y(\pm\infty) = 0, \quad \alpha > 0, \; \alpha \ne 1.$$
Separating this into two problems for positive and negative $x$,
$$y_-'' - y_- = e^{\alpha x}, \quad y_-(-\infty) = 0, \quad\text{on } -\infty < x \le 0,$$
$$y_+'' - y_+ = e^{-\alpha x}, \quad y_+(\infty) = 0, \quad\text{on } 0 \le x < \infty.$$
In order for the solution over the whole domain to be twice differentiable, the solution and its first derivative must be continuous. Thus we impose the additional boundary conditions
$$y_-(0) = y_+(0), \quad y_-'(0) = y_+'(0).$$
The solutions that satisfy the two differential equations and the boundary conditions at infinity are
$$y_- = \frac{e^{\alpha x}}{\alpha^2 - 1} + c_-\,e^x, \quad y_+ = \frac{e^{-\alpha x}}{\alpha^2 - 1} + c_+\,e^{-x}.$$
The two additional boundary conditions give us the equations
$$y_-(0) = y_+(0) \quad\to\quad c_- = c_+$$
$$y_-'(0) = y_+'(0) \quad\to\quad \frac{\alpha}{\alpha^2 - 1} + c_- = -\frac{\alpha}{\alpha^2 - 1} - c_+.$$
We solve these two equations to determine $c_-$ and $c_+$.
$$c_- = c_+ = -\frac{\alpha}{\alpha^2 - 1}$$
Thus the solution over the whole domain is
$$y = \begin{cases} \dfrac{e^{\alpha x} - \alpha\,e^x}{\alpha^2 - 1} & \text{for } x < 0, \\[2mm] \dfrac{e^{-\alpha x} - \alpha\,e^{-x}}{\alpha^2 - 1} & \text{for } x > 0 \end{cases}$$
$$y = \frac{e^{-\alpha|x|} - \alpha\,e^{-|x|}}{\alpha^2 - 1}.$$
This function is plotted in Figure 21.1. You can verify that this solution is twice continuously differentiable.

21.5 Inhomogeneous Boundary Conditions

21.5.1 Eliminating Inhomogeneous Boundary Conditions

Consider the nth order equation
$$L[y] = f(x), \quad\text{for } a < x < b,$$
subject to the linear inhomogeneous boundary conditions
$$B_j[y] = \gamma_j, \quad\text{for } j = 1, \ldots, n,$$
where the boundary conditions are of the form
$$B[y] \equiv \alpha_0 y(a) + \alpha_1 y'(a) + \cdots + \alpha_{n-1}y^{(n-1)}(a) + \beta_0 y(b) + \beta_1 y'(b) + \cdots + \beta_{n-1}y^{(n-1)}(b).$$
Let $g(x)$ be an n-times continuously differentiable function that satisfies the boundary conditions. Substituting $y = u + g$ into the differential equation and boundary conditions yields
$$L[u] = f(x) - L[g], \quad B_j[u] = \gamma_j - B_j[g] = 0 \quad\text{for } j = 1, \ldots, n.$$
Note that the problem for $u$ has homogeneous boundary conditions. Thus a problem with inhomogeneous boundary conditions can be reduced to one with homogeneous boundary conditions. This technique is of limited usefulness for ordinary differential equations but is important for solving some partial differential equation problems.

Example 21.5.1 Consider the problem
$$y'' + y = \cos 2x, \quad y(0) = 1, \quad y(\pi) = 2.$$
$g(x) = \frac{x}{\pi} + 1$ satisfies the boundary conditions. Substituting $y = u + g$ yields
$$u'' + u = \cos 2x - \frac{x}{\pi} - 1, \quad u(0) = u(\pi) = 0.$$

Example 21.5.2 Consider
$$y'' + y = \cos 2x, \quad y'(0) = y(\pi) = 1.$$
$g(x) = \sin x - \cos x$ satisfies the inhomogeneous boundary conditions. Substituting $y = u + \sin x - \cos x$ yields
$$u'' + u = \cos 2x, \quad u'(0) = u(\pi) = 0.$$
Note that since $g(x)$ satisfies the homogeneous equation, the inhomogeneous term in the equation for $u$ is the same as that in the equation for $y$.

Example 21.5.3 Consider
$$y'' + y = \cos 2x, \quad y(0) = \frac{2}{3}, \quad y(\pi) = -\frac{4}{3}.$$
$g(x) = \cos x - \frac{1}{3}$ satisfies the boundary conditions. Substituting $y = u + \cos x - \frac{1}{3}$ yields
$$u'' + u = \cos 2x + \frac{1}{3}, \quad u(0) = u(\pi) = 0.$$
Result 21.5.1 The nth order differential equation with boundary conditions
$$L[y] = f(x), \quad B_j[y] = \gamma_j, \quad\text{for } j = 1, \ldots, n,$$
has the solution $y = u + g$ where $u$ satisfies
$$L[u] = f(x) - L[g], \quad B_j[u] = 0, \quad\text{for } j = 1, \ldots, n,$$
and $g$ is any n-times continuously differentiable function that satisfies the inhomogeneous boundary conditions.

21.5.2 Separating Inhomogeneous Equations and Inhomogeneous Boundary Conditions

Now consider a problem with inhomogeneous boundary conditions
$$L[y] = f(x), \quad B_1[y] = \gamma_1, \quad B_2[y] = \gamma_2.$$
In order to solve this problem, we solve the two problems
$$L[u] = f(x), \quad B_1[u] = B_2[u] = 0, \quad\text{and}$$
$$L[v] = 0, \quad B_1[v] = \gamma_1, \quad B_2[v] = \gamma_2.$$
The solution for the problem with an inhomogeneous equation and inhomogeneous boundary conditions will be the sum of $u$ and $v$. To verify this,
$$L[u + v] = L[u] + L[v] = f(x) + 0 = f(x),$$
$$B_i[u + v] = B_i[u] + B_i[v] = 0 + \gamma_i = \gamma_i.$$
This will be a useful technique when we develop Green functions.

Result 21.5.2 The solution to
$$L[y] = f(x), \quad B_1[y] = \gamma_1, \quad B_2[y] = \gamma_2,$$
is $y = u + v$ where
$$L[u] = f(x), \quad B_1[u] = 0, \quad B_2[u] = 0, \quad\text{and}$$
$$L[v] = 0, \quad B_1[v] = \gamma_1, \quad B_2[v] = \gamma_2.$$

21.5.3 Existence of Solutions of Problems with Inhomogeneous Boundary Conditions

Consider the nth order inhomogeneous differential equation
$$L[y] = y^{(n)} + p_{n-1}y^{(n-1)} + \cdots + p_1 y' + p_0 y = f(x), \quad\text{for } a < x < b,$$
subject to the n inhomogeneous boundary conditions
$$B_j[y] = \gamma_j, \quad\text{for } j = 1, \ldots, n,$$
where each boundary condition is of the form
$$B[y] \equiv \alpha_0 y(a) + \alpha_1 y'(a) + \cdots + \alpha_{n-1}y^{(n-1)}(a) + \beta_0 y(b) + \beta_1 y'(b) + \cdots + \beta_{n-1}y^{(n-1)}(b).$$
We assume that the coefficients in the differential equation are continuous on $[a, b]$. Since the Wronskian of the solutions of the differential equation,
$$W(x) = \exp\left( -\int p_{n-1}(x)\,dx \right),$$
is non-vanishing on $[a, b]$, there are n linearly independent solutions on that range. Let $\{y_1, \ldots, y_n\}$ be a set of linearly independent solutions of the homogeneous equation. From Result 21.3.2 we know that a particular solution $y_p$ exists. The general solution of the differential equation is
$$y = y_p + c_1 y_1 + c_2 y_2 + \cdots + c_n y_n.$$
The n boundary conditions impose the matrix equation,
$$\begin{pmatrix} B_1[y_1] & B_1[y_2] & \cdots & B_1[y_n] \\ B_2[y_1] & B_2[y_2] & \cdots & B_2[y_n] \\ \vdots & \vdots & \ddots & \vdots \\ B_n[y_1] & B_n[y_2] & \cdots & B_n[y_n] \end{pmatrix}\begin{pmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{pmatrix} = \begin{pmatrix} \gamma_1 - B_1[y_p] \\ \gamma_2 - B_2[y_p] \\ \vdots \\ \gamma_n - B_n[y_p] \end{pmatrix}$$
This equation has a unique solution if and only if the equation
$$\begin{pmatrix} B_1[y_1] & B_1[y_2] & \cdots & B_1[y_n] \\ B_2[y_1] & B_2[y_2] & \cdots & B_2[y_n] \\ \vdots & \vdots & \ddots & \vdots \\ B_n[y_1] & B_n[y_2] & \cdots & B_n[y_n] \end{pmatrix}\begin{pmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{pmatrix}$$
has only the trivial solution. (This is the case if and only if the determinant of the matrix is nonzero.) Thus the problem
$$L[y] = y^{(n)} + p_{n-1}y^{(n-1)} + \cdots + p_1 y' + p_0 y = f(x), \quad\text{for } a < x < b,$$
subject to the n inhomogeneous boundary conditions
$$B_j[y] = \gamma_j, \quad\text{for } j = 1, \ldots, n,$$
has a unique solution if and only if the problem
$$L[y] = y^{(n)} + p_{n-1}y^{(n-1)} + \cdots + p_1 y' + p_0 y = 0, \quad\text{for } a < x < b,$$
subject to the n homogeneous boundary conditions
$$B_j[y] = 0, \quad\text{for } j = 1, \ldots, n,$$
has only the trivial solution.

Result 21.5.3 The problem
$$L[y] = y^{(n)} + p_{n-1}y^{(n-1)} + \cdots + p_1 y' + p_0 y = f(x), \quad\text{for } a < x < b,$$
subject to the n inhomogeneous boundary conditions $B_j[y] = \gamma_j$, for $j = 1, \ldots, n$, has a unique solution if and only if the problem
$$L[y] = y^{(n)} + p_{n-1}y^{(n-1)} + \cdots + p_1 y' + p_0 y = 0, \quad\text{for } a < x < b,$$
subject to $B_j[y] = 0$, for $j = 1, \ldots, n$, has only the trivial solution.
21.6 Green Functions for First Order Equations

Consider the first order inhomogeneous equation
$$L[y] \equiv y' + p(x)y = f(x), \quad\text{for } x > a, \tag{21.2}$$
subject to a homogeneous initial condition, $B[y] \equiv y(a) = 0$.

The Green function $G(x|\xi)$ is defined as the solution to
$$L[G(x|\xi)] = \delta(x - \xi) \quad\text{subject to}\quad G(a|\xi) = 0.$$
We can represent the solution to the inhomogeneous problem in Equation 21.2 as an integral involving the Green function. To show that
$$y(x) = \int_a^{\infty} G(x|\xi)f(\xi)\,d\xi$$
is the solution, we apply the linear operator $L$ to the integral. (Assume that the integral is uniformly convergent.)
$$L\left[ \int_a^{\infty} G(x|\xi)f(\xi)\,d\xi \right] = \int_a^{\infty} L[G(x|\xi)]f(\xi)\,d\xi = \int_a^{\infty} \delta(x - \xi)f(\xi)\,d\xi = f(x)$$
The integral also satisfies the initial condition.
$$B\left[ \int_a^{\infty} G(x|\xi)f(\xi)\,d\xi \right] = \int_a^{\infty} B[G(x|\xi)]f(\xi)\,d\xi = \int_a^{\infty} (0)f(\xi)\,d\xi = 0$$
Now we consider the qualitative behavior of the Green function. For $x \ne \xi$, the Green function is simply a homogeneous solution of the differential equation, however at $x = \xi$ we expect some singular behavior. $G'(x|\xi)$ will have a Dirac delta function type singularity. This means that $G(x|\xi)$ will have a jump discontinuity at $x = \xi$. We integrate the differential equation on the vanishing interval $(\xi^- \ldots \xi^+)$ to determine this jump.
$$G' + p(x)G = \delta(x - \xi)$$
$$G(\xi^+|\xi) - G(\xi^-|\xi) + \int_{\xi^-}^{\xi^+} p(x)G(x|\xi)\,dx = 1$$
$$G(\xi^+|\xi) - G(\xi^-|\xi) = 1 \tag{21.3}$$
The homogeneous solution of the differential equation is
$$y_h = e^{-\int p(x)\,dx}.$$
Since the Green function satisfies the homogeneous equation for $x \ne \xi$, it will be a constant times this homogeneous solution for $x < \xi$ and $x > \xi$.
$$G(x|\xi) = \begin{cases} c_1\,e^{-\int p(x)\,dx} & a < x < \xi \\ c_2\,e^{-\int p(x)\,dx} & \xi < x \end{cases}$$
In order to satisfy the homogeneous initial condition $G(a|\xi) = 0$, the Green function must vanish on the interval $(a \ldots \xi)$.
$$G(x|\xi) = \begin{cases} 0 & a < x < \xi \\ c\,e^{-\int p(x)\,dx} & \xi < x \end{cases}$$
The jump condition, (Equation 21.3), gives us the constraint $G(\xi^+|\xi) = 1$. This determines the constant in the homogeneous solution for $x > \xi$.
$$G(x|\xi) = \begin{cases} 0 & a < x < \xi \\ e^{-\int_\xi^x p(t)\,dt} & \xi < x \end{cases}$$
We can use the Heaviside function to write the Green function without using a case statement.
$$G(x|\xi) = e^{-\int_\xi^x p(t)\,dt}\,H(x - \xi)$$
Clearly the Green function is of little value in solving the inhomogeneous differential equation in Equation 21.2, as we can solve that problem directly. However, we will encounter first order Green function problems in solving some partial differential equations.

Result 21.6.1 The first order inhomogeneous differential equation with homogeneous initial condition
$$L[y] \equiv y' + p(x)y = f(x), \quad\text{for } a < x, \quad y(a) = 0,$$
has the solution
$$y = \int_a^{\infty} G(x|\xi)f(\xi)\,d\xi,$$
where $G(x|\xi)$ satisfies the equation
$$L[G(x|\xi)] = \delta(x - \xi), \quad\text{for } a < x, \quad G(a|\xi) = 0.$$
The Green function is
$$G(x|\xi) = e^{-\int_\xi^x p(t)\,dt}\,H(x - \xi).$$
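Result 21.6.1 can be checked numerically. The sketch below takes the sample problem $y' + y = \sin x$, $y(0) = 0$ (so $p = 1$, $a = 0$, $G(x|\xi) = e^{\xi - x}H(x - \xi)$) and compares the Green function integral with the exact solution; the problem and grid are our own choices.

```python
# Solve y' + y = sin x, y(0) = 0 via the first order Green function.
import numpy as np

xs = np.linspace(0.0, 5.0, 2001)
f = lambda x: np.sin(x)
y = np.empty_like(xs)
for i, xv in enumerate(xs):
    xi = xs[:i+1]
    y[i] = np.trapz(np.exp(xi - xv) * f(xi), xi)     # integral of G(x|xi) f(xi)
exact = (np.sin(xs) - np.cos(xs) + np.exp(-xs)) / 2  # exact solution
print(np.max(np.abs(y - exact)))                     # small discretization error
```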
21.7 Green Functions for Second Order Equations

Consider the second order inhomogeneous equation
$$L[y] = y'' + p(x)y' + q(x)y = f(x), \quad\text{for } a < x < b, \tag{21.4}$$
subject to the homogeneous boundary conditions
$$B_1[y] = B_2[y] = 0.$$
The Green function $G(x|\xi)$ is defined as the solution to
$$L[G(x|\xi)] = \delta(x - \xi) \quad\text{subject to}\quad B_1[G] = B_2[G] = 0.$$
The Green function is useful because you can represent the solution to the inhomogeneous problem in Equation 21.4 as an integral involving the Green function. To show that
$$y(x) = \int_a^b G(x|\xi)f(\xi)\,d\xi$$
is the solution, we apply the linear operator $L$ to the integral. (Assume that the integral is uniformly convergent.)
$$L\left[ \int_a^b G(x|\xi)f(\xi)\,d\xi \right] = \int_a^b L[G(x|\xi)]f(\xi)\,d\xi = \int_a^b \delta(x - \xi)f(\xi)\,d\xi = f(x)$$
The integral also satisfies the boundary conditions.
$$B_i\left[ \int_a^b G(x|\xi)f(\xi)\,d\xi \right] = \int_a^b B_i[G(x|\xi)]f(\xi)\,d\xi = \int_a^b (0)f(\xi)\,d\xi = 0$$
One of the advantages of using Green functions is that once you find the Green function for a linear operator and certain homogeneous boundary conditions,
$$L[G] = \delta(x - \xi), \quad B_1[G] = B_2[G] = 0,$$
you can write the solution for any inhomogeneity, $f(x)$.
$$L[y] = f(x), \quad B_1[y] = B_2[y] = 0$$
You do not need to do any extra work to obtain the solution for a different inhomogeneous term.

Qualitatively, what kind of behavior will the Green function for a second order differential equation have? Will it have a delta function singularity; will it be continuous? To answer these questions we will first look at the behavior of integrals and derivatives of $\delta(x)$. The integral of $\delta(x)$ is the Heaviside function, $H(x)$.
$$H(x) = \int_{-\infty}^x \delta(t)\,dt = \begin{cases} 0 & \text{for } x < 0 \\ 1 & \text{for } x > 0 \end{cases}$$
The integral of the Heaviside function is the ramp function, $r(x)$.
$$r(x) = \int_{-\infty}^x H(t)\,dt = \begin{cases} 0 & \text{for } x < 0 \\ x & \text{for } x > 0 \end{cases}$$
The derivative of the delta function is zero for $x \ne 0$. At $x = 0$ it goes from $0$ up to $+\infty$, down to $-\infty$ and then back up to $0$. In Figure 21.2 we see conceptually the behavior of the ramp function, the Heaviside function, the delta function, and the derivative of the delta function.

We write the differential equation for the Green function.
$$G''(x|\xi) + p(x)G'(x|\xi) + q(x)G(x|\xi) = \delta(x - \xi)$$
We see that only the $G''(x|\xi)$ term can have a delta function type singularity. If one of the other terms had a delta function type singularity then $G''(x|\xi)$ would be more singular than a delta function and there would be nothing in the right hand side of the equation to match this kind of singularity. Analogous to the progression from a delta function to a Heaviside function to a ramp function, we see that $G'(x|\xi)$ will have a jump discontinuity and $G(x|\xi)$ will be continuous.
Figure 21.2: r(x), H(x), δ(x) and d/dx δ(x).

Let $y_1$ and $y_2$ be two linearly independent solutions to the homogeneous equation, $L[y] = 0$. Since the Green function satisfies the homogeneous equation for $x \ne \xi$, it will be a linear combination of the homogeneous solutions.
$$G(x|\xi) = \begin{cases} c_1 y_1 + c_2 y_2 & \text{for } x < \xi \\ d_1 y_1 + d_2 y_2 & \text{for } x > \xi \end{cases}$$
We require that $G(x|\xi)$ be continuous.
$$G(x|\xi)\Big|_{x\to\xi^-} = G(x|\xi)\Big|_{x\to\xi^+}$$
We can write this in terms of the homogeneous solutions.
$$c_1 y_1(\xi) + c_2 y_2(\xi) = d_1 y_1(\xi) + d_2 y_2(\xi)$$
We integrate $L[G(x|\xi)] = \delta(x - \xi)$ from $\xi^-$ to $\xi^+$.
$$\int_{\xi^-}^{\xi^+}\big[ G''(x|\xi) + p(x)G'(x|\xi) + q(x)G(x|\xi) \big]dx = \int_{\xi^-}^{\xi^+}\delta(x - \xi)\,dx.$$
Since $G(x|\xi)$ is continuous and $G'(x|\xi)$ has only a jump discontinuity two of the terms vanish.
$$\int_{\xi^-}^{\xi^+} p(x)G'(x|\xi)\,dx = 0 \quad\text{and}\quad \int_{\xi^-}^{\xi^+} q(x)G(x|\xi)\,dx = 0$$
$$\int_{\xi^-}^{\xi^+} G''(x|\xi)\,dx = \int_{\xi^-}^{\xi^+}\delta(x - \xi)\,dx$$
$$\Big[ G'(x|\xi) \Big]_{\xi^-}^{\xi^+} = \Big[ H(x - \xi) \Big]_{\xi^-}^{\xi^+}$$
$$G'(\xi^+|\xi) - G'(\xi^-|\xi) = 1$$
We write this jump condition in terms of the homogeneous solutions.
$$d_1 y_1'(\xi) + d_2 y_2'(\xi) - c_1 y_1'(\xi) - c_2 y_2'(\xi) = 1$$
Combined with the two boundary conditions, this gives us a total of four equations to determine our four constants, $c_1$, $c_2$, $d_1$, and $d_2$.
Result 21.7.1 The second order inhomogeneous differential equation with homogeneous boundary conditions
$$L[y] = y'' + p(x)y' + q(x)y = f(x), \quad\text{for } a < x < b, \quad B_1[y] = B_2[y] = 0,$$
has the solution
$$y = \int_a^b G(x|\xi)f(\xi)\,d\xi,$$
where $G(x|\xi)$ satisfies the equation
$$L[G(x|\xi)] = \delta(x - \xi), \quad\text{for } a < x < b, \quad B_1[G(x|\xi)] = B_2[G(x|\xi)] = 0.$$
$G(x|\xi)$ is continuous and $G'(x|\xi)$ has a jump discontinuity of height 1 at $x = \xi$.

Example 21.7.1 Solve the boundary value problem
$$y'' = f(x), \quad y(0) = y(1) = 0,$$
using a Green function. A pair of solutions to the homogeneous equation are $y_1 = 1$ and $y_2 = x$. First note that only the trivial solution to the homogeneous equation satisfies the homogeneous boundary conditions. Thus there is a unique solution to this problem.

The Green function satisfies
$$G''(x|\xi) = \delta(x - \xi), \quad G(0|\xi) = G(1|\xi) = 0.$$
The Green function has the form
$$G(x|\xi) = \begin{cases} c_1 + c_2 x & \text{for } x < \xi \\ d_1 + d_2 x & \text{for } x > \xi. \end{cases}$$
Applying the two boundary conditions, we see that $c_1 = 0$ and $d_1 = -d_2$. The Green function now has the form
$$G(x|\xi) = \begin{cases} cx & \text{for } x < \xi \\ d(x - 1) & \text{for } x > \xi. \end{cases}$$
Since the Green function must be continuous,
$$c\xi = d(\xi - 1) \quad\to\quad d = c\frac{\xi}{\xi - 1}.$$
From the jump condition,
$$\frac{d}{dx}\left. c\frac{\xi}{\xi - 1}(x - 1)\right|_{x=\xi} - \frac{d}{dx}\left. cx\right|_{x=\xi} = 1$$
$$c\frac{\xi}{\xi - 1} - c = 1$$
$$c = \xi - 1.$$
Thus the Green function is
$$G(x|\xi) = \begin{cases} (\xi - 1)x & \text{for } x < \xi \\ \xi(x - 1) & \text{for } x > \xi. \end{cases}$$
The Green function is plotted in Figure 21.3 for various values of $\xi$.
Figure 21.3: Plot of G(x|0.05), G(x|0.25), G(x|0.5) and G(x|0.75).

The solution to $y'' = f(x)$ is
$$y(x) = \int_0^1 G(x|\xi)f(\xi)\,d\xi$$
$$y(x) = (x - 1)\int_0^x \xi f(\xi)\,d\xi + x\int_x^1 (\xi - 1)f(\xi)\,d\xi.$$

Example 21.7.2 Solve the boundary value problem
$$y'' = f(x), \quad y(0) = 1, \quad y(1) = 2.$$
In Example 21.7.1 we saw that the solution to
$$u'' = f(x), \quad u(0) = u(1) = 0,$$
is
$$u(x) = (x - 1)\int_0^x \xi f(\xi)\,d\xi + x\int_x^1 (\xi - 1)f(\xi)\,d\xi.$$
Now we have to find the solution to
$$v'' = 0, \quad v(0) = 1, \quad v(1) = 2.$$
The general solution is $v = c_1 + c_2 x$. Applying the boundary conditions yields $v = 1 + x$. Thus the solution for $y$ is
$$y = 1 + x + (x - 1)\int_0^x \xi f(\xi)\,d\xi + x\int_x^1 (\xi - 1)f(\xi)\,d\xi.$$

Example 21.7.3 Consider
$$y'' = x, \quad y(0) = y(1) = 0.$$
Method 1. Integrating the differential equation twice yields
$$y = \frac{1}{6}x^3 + c_1 x + c_2.$$
Applying the boundary conditions, we find that the solution is
$$y = \frac{1}{6}(x^3 - x).$$
Method 2. Using the Green function to find the solution,
$$y = (x - 1)\int_0^x \xi^2\,d\xi + x\int_x^1 (\xi - 1)\xi\,d\xi$$
$$= (x - 1)\frac{1}{3}x^3 + x\left( \frac{1}{3} - \frac{1}{2} - \frac{1}{3}x^3 + \frac{1}{2}x^2 \right)$$
$$y = \frac{1}{6}(x^3 - x).$$

Example 21.7.4 Find the solution to the differential equation
$$y'' - y = \sin x,$$
that is bounded for all $x$. The Green function for this problem satisfies
$$G''(x|\xi) - G(x|\xi) = \delta(x - \xi).$$
The homogeneous solutions are $y_1 = e^x$ and $y_2 = e^{-x}$. The Green function has the form
$$G(x|\xi) = \begin{cases} c_1\,e^x + c_2\,e^{-x} & \text{for } x < \xi \\ d_1\,e^x + d_2\,e^{-x} & \text{for } x > \xi. \end{cases}$$
Since the solution must be bounded for all $x$, the Green function must also be bounded. Thus $c_2 = d_1 = 0$. The Green function now has the form
$$G(x|\xi) = \begin{cases} c\,e^x & \text{for } x < \xi \\ d\,e^{-x} & \text{for } x > \xi. \end{cases}$$
Requiring that $G(x|\xi)$ be continuous gives us the condition
$$c\,e^{\xi} = d\,e^{-\xi} \quad\to\quad d = c\,e^{2\xi}.$$
$G'(x|\xi)$ has a jump discontinuity of height 1 at $x = \xi$.
$$\frac{d}{dx}\left. c\,e^{2\xi}\,e^{-x}\right|_{x=\xi} - \frac{d}{dx}\left. c\,e^x\right|_{x=\xi} = 1$$
$$-c\,e^{2\xi}\,e^{-\xi} - c\,e^{\xi} = 1$$
$$c = -\frac{1}{2}e^{-\xi}$$
The Green function is then
$$G(x|\xi) = \begin{cases} -\frac{1}{2}e^{x - \xi} & \text{for } x < \xi \\ -\frac{1}{2}e^{-x + \xi} & \text{for } x > \xi \end{cases}$$
$$G(x|\xi) = -\frac{1}{2}e^{-|x - \xi|}.$$
A plot of $G(x|0)$ is given in Figure 21.4. The solution to $y'' - y = \sin x$ is
$$y(x) = \int_{-\infty}^{\infty}\left( -\frac{1}{2}e^{-|x - \xi|} \right)\sin\xi\,d\xi$$
$$= -\frac{1}{2}\left( \int_{-\infty}^x \sin\xi\,e^{-(x - \xi)}\,d\xi + \int_x^{\infty}\sin\xi\,e^{-(\xi - x)}\,d\xi \right)$$
$$= -\frac{1}{2}\left( \frac{\sin x - \cos x}{2} + \frac{\sin x + \cos x}{2} \right)$$
$$y = -\frac{1}{2}\sin x.$$
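One can corroborate this result numerically. The sketch below evaluates the Green function integral on a large truncated domain (the truncation length, grid, and sample points are our own choices) and compares with $-\frac{1}{2}\sin x$.

```python
# Numeric check of Example 21.7.4: y(x) = integral of -e^{-|x-xi|}/2 * sin(xi).
import numpy as np

xi = np.linspace(-40.0, 40.0, 200001)   # e^{-40} makes the truncation negligible
for xv in (0.5, 1.0, 2.0):
    y = np.trapz(-0.5 * np.exp(-np.abs(xv - xi)) * np.sin(xi), xi)
    print(xv, y, -np.sin(xv)/2)          # the two columns agree
```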
Figure 21.4: Plot of G(x|0).

21.7.1 Green Functions for Sturm-Liouville Problems

Consider the problem
$$L[y] = (p(x)y')' + q(x)y = f(x),$$
subject to
$$B_1[y] = \alpha_1 y(a) + \alpha_2 y'(a) = 0, \quad B_2[y] = \beta_1 y(b) + \beta_2 y'(b) = 0.$$
This is known as a Sturm-Liouville problem. Equations of this type often occur when solving partial differential equations. The Green function associated with this problem satisfies
$$L[G(x|\xi)] = \delta(x - \xi), \quad B_1[G(x|\xi)] = B_2[G(x|\xi)] = 0.$$
Let $y_1$ and $y_2$ be two non-trivial homogeneous solutions that satisfy the left and right boundary conditions, respectively.
$$L[y_1] = 0, \quad B_1[y_1] = 0, \quad L[y_2] = 0, \quad B_2[y_2] = 0.$$
The Green function satisfies the homogeneous equation for $x \ne \xi$ and satisfies the homogeneous boundary conditions. Thus it must have the following form.
$$G(x|\xi) = \begin{cases} c_1(\xi)y_1(x) & \text{for } a \le x \le \xi, \\ c_2(\xi)y_2(x) & \text{for } \xi \le x \le b. \end{cases}$$
Here $c_1$ and $c_2$ are unknown functions of $\xi$. The first constraint on $c_1$ and $c_2$ comes from the continuity condition.
$$G(\xi^-|\xi) = G(\xi^+|\xi)$$
$$c_1(\xi)y_1(\xi) = c_2(\xi)y_2(\xi)$$
We write the inhomogeneous equation in the standard form.
$$G''(x|\xi) + \frac{p'}{p}G'(x|\xi) + \frac{q}{p}G(x|\xi) = \frac{\delta(x - \xi)}{p}$$
The second constraint on $c_1$ and $c_2$ comes from the jump condition.
$$G'(\xi^+|\xi) - G'(\xi^-|\xi) = \frac{1}{p(\xi)}$$
$$c_2(\xi)y_2'(\xi) - c_1(\xi)y_1'(\xi) = \frac{1}{p(\xi)}$$
Now we have a system of equations to determine $c_1$ and $c_2$.
$$c_1(\xi)y_1(\xi) - c_2(\xi)y_2(\xi) = 0$$
$$c_1(\xi)y_1'(\xi) - c_2(\xi)y_2'(\xi) = -\frac{1}{p(\xi)}$$
We solve this system with Cramer's rule.
$$c_1(\xi) = -\frac{y_2(\xi)}{p(\xi)(-W(\xi))}, \quad c_2(\xi) = -\frac{y_1(\xi)}{p(\xi)(-W(\xi))}$$
Here $W(x)$ is the Wronskian of $y_1(x)$ and $y_2(x)$. The Green function is
$$G(x|\xi) = \begin{cases} \dfrac{y_1(x)y_2(\xi)}{p(\xi)W(\xi)} & \text{for } a \le x \le \xi, \\[2mm] \dfrac{y_2(x)y_1(\xi)}{p(\xi)W(\xi)} & \text{for } \xi \le x \le b. \end{cases}$$
The solution of the Sturm-Liouville problem is
$$y = \int_a^b G(x|\xi)f(\xi)\,d\xi.$$

Result 21.7.2 The problem
$$L[y] = (p(x)y')' + q(x)y = f(x),$$
subject to
$$B_1[y] = \alpha_1 y(a) + \alpha_2 y'(a) = 0, \quad B_2[y] = \beta_1 y(b) + \beta_2 y'(b) = 0,$$
has the Green function
$$G(x|\xi) = \begin{cases} \dfrac{y_1(x)y_2(\xi)}{p(\xi)W(\xi)} & \text{for } a \le x \le \xi, \\[2mm] \dfrac{y_2(x)y_1(\xi)}{p(\xi)W(\xi)} & \text{for } \xi \le x \le b, \end{cases}$$
where $y_1$ and $y_2$ are non-trivial homogeneous solutions that satisfy $B_1[y_1] = B_2[y_2] = 0$, and $W(x)$ is the Wronskian of $y_1$ and $y_2$.

Example 21.7.5 Consider the equation
$$y'' - y = f(x), \quad y(0) = y(1) = 0.$$
A set of solutions to the homogeneous equation is $\{e^x, e^{-x}\}$. Equivalently, one could use the set $\{\cosh x, \sinh x\}$. Note that $\sinh x$ satisfies the left boundary condition and $\sinh(x - 1)$ satisfies the right boundary condition. The Wronskian of these two homogeneous solutions is
$$W(x) = \begin{vmatrix} \sinh x & \sinh(x - 1) \\ \cosh x & \cosh(x - 1) \end{vmatrix} = \sinh x\cosh(x - 1) - \cosh x\sinh(x - 1)$$
$$= \frac{1}{2}\big[ \sinh(2x - 1) + \sinh(1) \big] - \frac{1}{2}\big[ \sinh(2x - 1) - \sinh(1) \big] = \sinh(1).$$
The Green function for the problem is then
$$G(x|\xi) = \begin{cases} \dfrac{\sinh x\sinh(\xi - 1)}{\sinh(1)} & \text{for } 0 \le x \le \xi, \\[2mm] \dfrac{\sinh(x - 1)\sinh\xi}{\sinh(1)} & \text{for } \xi \le x \le 1. \end{cases}$$
The solution to the problem is
$$y = \frac{\sinh(x - 1)}{\sinh(1)}\int_0^x \sinh(\xi)f(\xi)\,d\xi + \frac{\sinh(x)}{\sinh(1)}\int_x^1 \sinh(\xi - 1)f(\xi)\,d\xi.$$
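The following numeric sketch applies this Green function to the sample inhomogeneity $f(x) = 1$ (our choice) and compares with the exact solution of $y'' - y = 1$, $y(0) = y(1) = 0$, which is $\cosh(x - 1/2)/\cosh(1/2) - 1$.

```python
# Numeric check of Example 21.7.5 with f = 1.
import numpy as np

xs = np.linspace(0.0, 1.0, 1001)
f = lambda xi: np.ones_like(xi)
s = np.sinh(1.0)
y = np.empty_like(xs)
for i, xv in enumerate(xs):
    left, right = xs[:i+1], xs[i:]
    y[i] = (np.sinh(xv - 1)/s)*np.trapz(np.sinh(left)*f(left), left) \
         + (np.sinh(xv)/s)*np.trapz(np.sinh(right - 1)*f(right), right)
exact = np.cosh(xs - 0.5)/np.cosh(0.5) - 1
print(np.max(np.abs(y - exact)))   # small; limited by the trapezoid rule
```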
  • 690. The solution to the problem is y = sinh(x − 1) sinh(1) x 0 sinh(ξ)f(ξ) dξ + sinh(x) sinh(1) 1 x sinh(ξ − 1)f(ξ) dξ. 21.7.2 Initial Value Problems Consider L[y] = y + p(x)y + q(x)y = f(x), for a < x < b, subject the the initial conditions y(a) = γ1, y (a) = γ2. The solution is y = u + v where u + p(x)u + q(x)u = f(x), u(a) = 0, u (a) = 0, and v + p(x)v + q(x)v = 0, v(a) = γ1, v (a) = γ2. Since the Wronskian W(x) = c exp − p(x) dx is non-vanishing, the solutions of the differential equation for v are linearly independent. Thus there is a unique solution for v that satisfies the initial conditions. The Green function for u satisfies G (x|ξ) + p(x)G (x|ξ) + q(x)G(x|ξ) = δ(x − ξ), G(a|ξ) = 0, G (a|ξ) = 0. The continuity and jump conditions are G(ξ− |ξ) = G(ξ+ |ξ), G (ξ− |ξ) + 1 = G (ξ+ |ξ). Let u1 and u2 be two linearly independent solutions of the differential equation. For x < ξ, G(x|ξ) is a linear combination of these solutions. Since the Wronskian is non-vanishing, only the trivial solution satisfies the homogeneous initial conditions. The Green function must be G(x|ξ) = 0 for x < ξ uξ(x) for x > ξ, where uξ(x) is the linear combination of u1 and u2 that satisfies uξ(ξ) = 0, uξ(ξ) = 1. Note that the non-vanishing Wronskian ensures a unique solution for uξ. We can write the Green function in the form G(x|ξ) = H(x − ξ)uξ(x). This is known as the causal solution. The solution for u is u = b a G(x|ξ)f(ξ) dξ = b a H(x − ξ)uξ(x)f(ξ) dξ = x a uξ(x)f(ξ) dξ 670
  • 691. Now we have the solution for y, y = v + x a uξ(x)f(ξ) dξ. Result 21.7.3 The solution of the problem y + p(x)y + q(x)y = f(x), y(a) = γ1, y (a) = γ2, is y = yh + x a yξ(x)f(ξ) dξ where yh is the combination of the homogeneous solutions of the equation that satisfy the initial conditions and yξ(x) is the linear combination of homoge- neous solutions that satisfy yξ(ξ) = 0, yξ(ξ) = 1. 21.7.3 Problems with Unmixed Boundary Conditions Consider L[y] = y + p(x)y + q(x)y = f(x), for a < x < b, subject the the unmixed boundary conditions α1y(a) + α2y (a) = γ1, β1y(b) + β2y (b) = γ2. The solution is y = u + v where u + p(x)u + q(x)u = f(x), α1u(a) + α2u (a) = 0, β1u(b) + β2u (b) = 0, and v + p(x)v + q(x)v = 0, α1v(a) + α2v (a) = γ1, β1v(b) + β2v (b) = γ2. The problem for v may have no solution, a unique solution or an infinite number of solutions. We consider only the case that there is a unique solution for v. In this case the homogeneous equation subject to homogeneous boundary conditions has only the trivial solution. The Green function for u satisfies G (x|ξ) + p(x)G (x|ξ) + q(x)G(x|ξ) = δ(x − ξ), α1G(a|ξ) + α2G (a|ξ) = 0, β1G(b|ξ) + β2G (b|ξ) = 0. The continuity and jump conditions are G(ξ− |ξ) = G(ξ+ |ξ), G (ξ− |ξ) + 1 = G (ξ+ |ξ). Let u1 and u2 be two solutions of the homogeneous equation that satisfy the left and right boundary conditions, respectively. The non-vanishing of the Wronskian ensures that these solutions exist. Let W(x) denote the Wronskian of u1 and u2. Since the homogeneous equation with homogeneous boundary conditions has only the trivial solution, W(x) is nonzero on [a, b]. The Green function has the form G(x|ξ) = c1u1 for x < ξ, c2u2 for x > ξ. 671
  • 692. The continuity and jump conditions for Green function gives us the equations c1u1(ξ) − c2u2(ξ) = 0 c1u1(ξ) − c2u2(ξ) = −1. Using Kramer’s rule, the solution is c1 = u2(ξ) W(ξ) , c2 = u1(ξ) W(ξ) . Thus the Green function is G(x|ξ) = u1(x)u2(ξ) W (ξ) for x < ξ, u1(ξ)u2(x) W (ξ) for x > ξ. The solution for u is u = b a G(x|ξ)f(ξ) dξ. Thus if there is a unique solution for v, the solution for y is y = v + b a G(x|ξ)f(ξ) dξ. Result 21.7.4 Consider the problem y + p(x)y + q(x)y = f(x), α1y(a) + α2y (a) = γ1, β1y(b) + β2y (b) = γ2. If the homogeneous differential equation subject to the inhomogeneous bound- ary conditions has the unique solution yh, then the problem has the unique solution y = yh + b a G(x|ξ)f(ξ) dξ where G(x|ξ) = u1(x)u2(ξ) W(ξ) for x < ξ, u1(ξ)u2(x) W(ξ) for x > ξ, u1 and u2 are solutions of the homogeneous differential equation that satisfy the left and right boundary conditions, respectively, and W(x) is the Wronskian of u1 and u2. 21.7.4 Problems with Mixed Boundary Conditions Consider L[y] = y + p(x)y + q(x)y = f(x), for a < x < b, subject the the mixed boundary conditions B1[y] = α11y(a) + α12y (a) + β11y(b) + β12y (b) = γ1, B2[y] = α21y(a) + α22y (a) + β21y(b) + β22y (b) = γ2. The solution is y = u + v where u + p(x)u + q(x)u = f(x), B1[u] = 0, B2[u] = 0, and v + p(x)v + q(x)v = 0, B1[v] = γ1, B2[v] = γ2. 672
  • 693. The problem for v may have no solution, a unique solution or an infinite number of solutions. Again we consider only the case that there is a unique solution for v. In this case the homogeneous equation subject to homogeneous boundary conditions has only the trivial solution. Let y1 and y2 be two solutions of the homogeneous equation that satisfy the boundary conditions B1[y1] = 0 and B2[y2] = 0. Since the completely homogeneous problem has no solutions, we know that B1[y2] and B2[y1] are nonzero. The solution for v has the form v = c1y1 + c2y2. Applying the two boundary conditions yields v = γ2 B2[y1] y1 + γ1 B1[y2] y2. The Green function for u satisfies G (x|ξ) + p(x)G (x|ξ) + q(x)G(x|ξ) = δ(x − ξ), B1[G] = 0, B2[G] = 0. The continuity and jump conditions are G(ξ− |ξ) = G(ξ+ |ξ), G (ξ− |ξ) + 1 = G (ξ+ |ξ). We write the Green function as the sum of the causal solution and the two homogeneous solutions G(x|ξ) = H(x − ξ)yξ(x) + c1y1(x) + c2y2(x) With this form, the continuity and jump conditions are automatically satisfied. Applying the bound- ary conditions yields B1[G] = B1[H(x − ξ)yξ] + c2B1[y2] = 0, B2[G] = B2[H(x − ξ)yξ] + c1B2[y1] = 0, B1[G] = β11yξ(b) + β12yξ(b) + c2B1[y2] = 0, B2[G] = β21yξ(b) + β22yξ(b) + c1B2[y1] = 0, G(x|ξ) = H(x − ξ)yξ(x) − β21yξ(b) + β22yξ(b) B2[y1] y1(x) − β11yξ(b) + β12yξ(b) B1[y2] y2(x). Note that the Green function is well defined since B2[y1] and B1[y2] are nonzero. The solution for u is u = b a G(x|ξ)f(ξ) dξ. Thus if there is a unique solution for v, the solution for y is y = b a G(x|ξ)f(ξ) dξ + γ2 B2[y1] y1 + γ1 B1[y2] y2. 673
  • 694. Result 21.7.5 Consider the problem y + p(x)y + q(x)y = f(x), B1[y] = α11y(a) + α12y (a) + β11y(b) + β12y (b) = γ1, B2[y] = α21y(a) + α22y (a) + β21y(b) + β22y (b) = γ2. If the homogeneous differential equation subject to the homogeneous boundary conditions has no solution, then the problem has the unique solution y = b a G(x|ξ)f(ξ) dξ + γ2 B2[y1] y1 + γ1 B1[y2] y2, where G(x|ξ) = H(x − ξ)yξ(x) − β21yξ(b) + β22yξ(b) B2[y1] y1(x) − β11yξ(b) + β12yξ(b) B1[y2] y2(x), y1 and y2 are solutions of the homogeneous differential equation that satisfy the first and second boundary conditions, respectively, and yξ(x) is the solution of the homogeneous equation that satisfies yξ(ξ) = 0, yξ(ξ) = 1. 21.8 Green Functions for Higher Order Problems Consider the nth order differential equation L[y] = y(n) + pn−1(x)y(n−1) + · · · + p1(x)y + p0y = f(x) on a < x < b, subject to the n independent boundary conditions Bj[y] = γj where the boundary conditions are of the form B[y] ≡ n−1 k=0 αky(k) (a) + n−1 k=0 βky(k) (b). We assume that the coefficient functions in the differential equation are continuous on [a, b]. The solution is y = u + v where u and v satisfy L[u] = f(x), with Bj[u] = 0, and L[v] = 0, with Bj[v] = γj From Result 21.5.3, we know that if the completely homogeneous problem L[w] = 0, with Bj[w] = 0, has only the trivial solution, then the solution for y exists and is unique. We will construct this solution using Green functions. 674
  • 695. First we consider the problem for v. Let {y1, . . . , yn} be a set of linearly independent solutions. The solution for v has the form v = c1y1 + · · · + cnyn where the constants are determined by the matrix equation      B1[y1] B1[y2] · · · B1[yn] B2[y1] B2[y2] · · · B2[yn] ... ... ... ... Bn[y1] Bn[y2] · · · Bn[yn]           c1 c2 ... cn      =      γ1 γ2 ... γn      . To solve the problem for u we consider the Green function satisfying L[G(x|ξ)] = δ(x − ξ), with Bj[G] = 0. Let yξ(x) be the linear combination of the homogeneous solutions that satisfy the conditions yξ(ξ) = 0 yξ(ξ) = 0 ... = ... y (n−2) ξ (ξ) = 0 y (n−1) ξ (ξ) = 1. The causal solution is then yc(x) = H(x − ξ)yξ(x). The Green function has the form G(x|ξ) = H(x − ξ)yξ(x) + d1y1(x) + · · · + dnyn(x) The constants are determined by the matrix equation      B1[y1] B1[y2] · · · B1[yn] B2[y1] B2[y2] · · · B2[yn] ... ... ... ... Bn[y1] Bn[y2] · · · Bn[yn]           d1 d2 ... dn      =      −B1[H(x − ξ)yξ(x)] −B2[H(x − ξ)yξ(x)] ... −Bn[H(x − ξ)yξ(x)]      . The solution for u then is u = b a G(x|ξ)f(ξ) dξ. 675
Result 21.8.1 Consider the nth order differential equation

L[y] = y^(n) + p_{n−1}(x)y^(n−1) + · · · + p1(x)y' + p0 y = f(x) on a < x < b,

subject to the n independent boundary conditions Bj[y] = γj. If the homogeneous differential equation subject to the homogeneous boundary conditions has only the trivial solution, then the problem has the unique solution

y = ∫_a^b G(x|ξ)f(ξ) dξ + c1 y1 + · · · + cn yn,

where

G(x|ξ) = H(x − ξ)yξ(x) + d1 y1(x) + · · · + dn yn(x),

{y1, . . . , yn} is a set of solutions of the homogeneous differential equation, and the constants cj and dj can be determined by solving sets of linear equations.

Example 21.8.1 Consider the problem

y''' − y'' + y' − y = f(x),
y(0) = 1, y'(0) = 2, y(1) = 3.

The completely homogeneous associated problem is

w''' − w'' + w' − w = 0, w(0) = w'(0) = w(1) = 0.

The solution of the differential equation is

w = c1 cos x + c2 sin x + c3 e^x.

The boundary conditions give us the equation

[ 1      0      1 ] [c1]   [0]
[ 0      1      1 ] [c2] = [0]
[ cos 1  sin 1  e ] [c3]   [0].

The determinant of the matrix is e − cos 1 − sin 1 ≠ 0. Thus the homogeneous problem has only the trivial solution and the inhomogeneous problem has a unique solution. We separate the inhomogeneous problem into the two problems

u''' − u'' + u' − u = f(x), u(0) = u'(0) = u(1) = 0,
v''' − v'' + v' − v = 0, v(0) = 1, v'(0) = 2, v(1) = 3.

First we solve the problem for v. The solution of the differential equation is

v = c1 cos x + c2 sin x + c3 e^x.

The boundary conditions yield the equation

[ 1      0      1 ] [c1]   [1]
[ 0      1      1 ] [c2] = [2]
[ cos 1  sin 1  e ] [c3]   [3].
The solution for v is

v = 1/(e − cos 1 − sin 1) [ (e + sin 1 − 3) cos x + (2e − cos 1 − 3) sin x + (3 − cos 1 − 2 sin 1) e^x ].

Now we find the Green function for the problem in u. The causal solution is

H(x − ξ)uξ(x) = H(x − ξ) (1/2) [ (sin ξ − cos ξ) cos x − (sin ξ + cos ξ) sin x + e^{−ξ} e^x ],
H(x − ξ)uξ(x) = (1/2) H(x − ξ) [ e^{x−ξ} − cos(x − ξ) − sin(x − ξ) ].

The Green function has the form

G(x|ξ) = H(x − ξ)uξ(x) + c1 cos x + c2 sin x + c3 e^x.

The constants are determined by the three conditions

[c1 cos x + c2 sin x + c3 e^x]_{x=0} = 0,
[∂/∂x (c1 cos x + c2 sin x + c3 e^x)]_{x=0} = 0,
[uξ(x) + c1 cos x + c2 sin x + c3 e^x]_{x=1} = 0.

The Green function is

G(x|ξ) = (1/2) H(x − ξ) [e^{x−ξ} − cos(x − ξ) − sin(x − ξ)]
         + (cos(1 − ξ) + sin(1 − ξ) − e^{1−ξ})/(2(cos 1 + sin 1 − e)) (cos x + sin x − e^x).

The solution for u is

u = ∫_0^1 G(x|ξ)f(ξ) dξ.

Thus the solution for y is

y = ∫_0^1 G(x|ξ)f(ξ) dξ + 1/(e − cos 1 − sin 1) [ (e + sin 1 − 3) cos x + (2e − cos 1 − 3) sin x + (3 − cos 1 − 2 sin 1) e^x ].

21.9 Fredholm Alternative Theorem

Orthogonality. Two real vectors, u and v, are orthogonal if u · v = 0. Consider two functions, u(x) and v(x), defined on [a, b]. The dot product in vector space is analogous to the integral

∫_a^b u(x)v(x) dx

in function space. Thus two real functions are orthogonal if

∫_a^b u(x)v(x) dx = 0.

Consider the nth order linear inhomogeneous differential equation L[y] = f(x) on [a, b], subject to the linear homogeneous boundary conditions Bj[y] = 0, for j = 1, 2, . . . , n. The Fredholm alternative theorem tells us if the problem has a unique solution, an infinite number of solutions, or no solution. Before presenting the theorem, we will consider a few motivating examples.
  • 698. No Nontrivial Homogeneous Solutions. In the section on Green functions we showed that if the completely homogeneous problem has only the trivial solution then the inhomogeneous problem has a unique solution. Nontrivial Homogeneous Solutions Exist. If there are nonzero solutions to the homogeneous problem L[y] = 0 that satisfy the homogeneous boundary conditions Bj[y] = 0 then the inhomoge- neous problem L[y] = f(x) subject to the same boundary conditions either has no solution or an infinite number of solutions. Suppose there is a particular solution yp that satisfies the boundary conditions. If there is a solution yh to the homogeneous equation that satisfies the boundary conditions then there will be an infinite number of solutions since yp + cyh is also a particular solution. The question now remains: Given that there are homogeneous solutions that satisfy the boundary conditions, how do we know if a particular solution that satisfies the boundary conditions exists? Before we address this question we will consider a few examples. Example 21.9.1 Consider the problem y + y = cos x, y(0) = y(π) = 0. The two homogeneous solutions of the differential equation are y1 = cos x, and y2 = sin x. y2 = sin x satisfies the boundary conditions. Thus we know that there are either no solutions or an infinite number of solutions. A particular solution is yp = − cos x cos x sin x 1 dx + sin x cos2 x 1 dx = − cos x 1 2 sin(2x) dx + sin x 1 2 + 1 2 cos(2x) dx = 1 4 cos x cos(2x) + sin x 1 2 x + 1 4 sin(2x) = 1 2 x sin x + 1 4 cos x cos(2x) + sin x sin(2x) = 1 2 x sin x + 1 4 cos x The general solution is y = 1 2 x sin x + c1 cos x + c2 sin x. Applying the two boundary conditions yields y = 1 2 x sin x + c sin x. Thus there are an infinite number of solutions. Example 21.9.2 Consider the differential equation y + y = sin x, y(0) = y(π) = 0. The general solution is y = − 1 2 x cos x + c1 cos x + c2 sin x. 678
  • 699. Applying the boundary conditions, y(0) = 0 → c1 = 0 y(π) = 0 → − 1 2 π cos(π) + c2 sin(π) = 0 → π 2 = 0. Since this equation has no solution, there are no solutions to the inhomogeneous problem. In both of the above examples there is a homogeneous solution y = sin x that satisfies the bound- ary conditions. In Example 21.9.1, the inhomogeneous term is cos x and there are an infinite number of solutions. In Example 21.9.2, the inhomogeneity is sin x and there are no solutions. In general, if the inhomogeneous term is orthogonal to all the homogeneous solutions that satisfy the bound- ary conditions then there are an infinite number of solutions. If not, there are no inhomogeneous solutions. Result 21.9.1 Fredholm Alternative Theorem. Consider the nth order inhomogeneous problem L[y] = f(x) on [a, b] subject to Bj[y] = 0 for j = 1, 2, . . . , n, and the associated homogeneous problem L[y] = 0 on [a, b] subject to Bj[y] = 0 for j = 1, 2, . . . , n. If the homogeneous problem has only the trivial solution then the inhomo- geneous problem has a unique solution. If the homogeneous problem has m independent solutions, {y1, y2, . . . , ym}, then there are two possibilities: • If f(x) is orthogonal to each of the homogeneous solutions then there are an infinite number of solutions of the form y = yp + m j=1 cjyj. • If f(x) is not orthogonal to each of the homogeneous solutions then there are no inhomogeneous solutions. Example 21.9.3 Consider the problem y + y = cos 2x, y(0) = 1, y(π) = 2. cos x and sin x are two linearly independent solutions to the homogeneous equation. sin x satisfies the homogeneous boundary conditions. Thus there are either an infinite number of solutions, or no solution. To transform this problem to one with homogeneous boundary conditions, we note that g(x) = x π + 1 and make the change of variables y = u + g to obtain u + u = cos 2x − x π − 1, y(0) = 0, y(π) = 0. Since cos 2x − x π − 1 is not orthogonal to sin x, there is no solution to the inhomogeneous problem. 679
  • 700. To check this, the general solution is y = − 1 3 cos 2x + c1 cos x + c2 sin x. Applying the boundary conditions, y(0) = 1 → c1 = 4 3 y(π) = 2 → − 1 3 − 4 3 = 2. Thus we see that the right boundary condition cannot be satisfied. Example 21.9.4 Consider y + y = cos 2x, y (0) = y(π) = 1. There are no solutions to the homogeneous equation that satisfy the homogeneous boundary con- ditions. To check this, note that all solutions of the homogeneous equation have the form uh = c1 cos x + c2 sin x. uh(0) = 0 → c2 = 0 uh(π) = 0 → c1 = 0. From the Fredholm Alternative Theorem we see that the inhomogeneous problem has a unique solution. To find the solution, start with y = − 1 3 cos 2x + c1 cos x + c2 sin x. y (0) = 1 → c2 = 1 y(π) = 1 → − 1 3 − c1 = 1 Thus the solution is y = − 1 3 cos 2x − 4 3 cos x + sin x. Example 21.9.5 Consider y + y = cos 2x, y(0) = 2 3 , y(π) = − 4 3 . cos x and sin x satisfy the homogeneous differential equation. sin x satisfies the homogeneous bound- ary conditions. Since g(x) = cos x−1/3 satisfies the boundary conditions, the substitution y = u+g yields u + u = cos 2x + 1 3 , y(0) = 0, y(π) = 0. Now we check if sin x is orthogonal to cos 2x + 1 3 . π 0 sin x cos 2x + 1 3 dx = π 0 1 2 sin 3x − 1 2 sin x + 1 3 sin x dx = − 1 6 cos 3x + 1 6 cos x π 0 = 0 680
Since sin x is orthogonal to the inhomogeneity, there are an infinite number of solutions to the problem for u (and hence the problem for y).

As a check, the general solution for y is

y = −(1/3) cos 2x + c1 cos x + c2 sin x.

Applying the boundary conditions,

y(0) = 2/3 → c1 = 1
y(π) = −4/3 → −4/3 = −4/3.

Thus we see that c2 is arbitrary. There are an infinite number of solutions of the form

y = −(1/3) cos 2x + cos x + c sin x.
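The orthogonality checks behind these examples are one-line computations. A sketch in Python with SymPy (assumed here; any computer algebra system works):

import sympy as sp

x = sp.symbols('x')
yh = sp.sin(x)   # homogeneous solution satisfying y(0) = y(pi) = 0

# Example 21.9.1: inhomogeneity cos x -> orthogonal, infinitely many solutions
print(sp.integrate(yh * sp.cos(x), (x, 0, sp.pi)))                           # 0
# Example 21.9.2: inhomogeneity sin x -> not orthogonal, no solution
print(sp.integrate(yh * sp.sin(x), (x, 0, sp.pi)))                           # pi/2
# Example 21.9.5: inhomogeneity cos 2x + 1/3 -> orthogonal again
print(sp.integrate(yh * (sp.cos(2*x) + sp.Rational(1, 3)), (x, 0, sp.pi)))   # 0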
  • 702. 21.10 Exercises Undetermined Coefficients Exercise 21.1 (mathematica/ode/inhomogeneous/undetermined.nb) Find the general solution of the following equations. 1. y + 2y + 5y = 3 sin(2t) 2. 2y + 3y + y = t2 + 3 sin(t) Hint, Solution Exercise 21.2 (mathematica/ode/inhomogeneous/undetermined.nb) Find the solution of each one of the following initial value problems. 1. y − 2y + y = t et +4, y(0) = 1, y (0) = 1 2. y + 2y + 5y = 4 e−t cos(2t), y(0) = 1, y (0) = 0 Hint, Solution Variation of Parameters Exercise 21.3 (mathematica/ode/inhomogeneous/variation.nb) Use the method of variation of parameters to find a particular solution of the given differential equation. 1. y − 5y + 6y = 2 et 2. y + y = tan(t), 0 < t < π/2 3. y − 5y + 6y = g(t), for a given function g. Hint, Solution Exercise 21.4 (mathematica/ode/inhomogeneous/variation.nb) Solve y (x) + y(x) = x, y(0) = 1, y (0) = 0. Hint, Solution Exercise 21.5 (mathematica/ode/inhomogeneous/variation.nb) Solve x2 y (x) − xy (x) + y(x) = x. Hint, Solution Exercise 21.6 (mathematica/ode/inhomogeneous/variation.nb) 1. Find the general solution of y + y = ex . 2. Solve y + λ2 y = sin x, y(0) = y (0) = 0. λ is an arbitrary real constant. Is there anything special about λ = 1? Hint, Solution Exercise 21.7 (mathematica/ode/inhomogeneous/variation.nb) Consider the problem of solving the initial value problem y + y = g(t), y(0) = 0, y (0) = 0. 682
  • 703. 1. Show that the general solution of y + y = g(t) is y(t) = c1 − t a g(τ) sin τ dτ cos t + c2 + t b g(τ) cos τ dτ sin t, where c1 and c2 are arbitrary constants and a and b are any conveniently chosen points. 2. Using the result of part (a) show that the solution satisfying the initial conditions y(0) = 0 and y (0) = 0 is given by y(t) = t 0 g(τ) sin(t − τ) dτ. Notice that this equation gives a formula for computing the solution of the original initial value problem for any given inhomogeneous term g(t). The integral is referred to as the convolution of g(t) with sin t. 3. Use the result of part (b) to solve the initial value problem, y + y = sin(λt), y(0) = 0, y (0) = 0, where λ is a real constant. How does the solution for λ = 1 differ from that for λ = 1? The λ = 1 case provides an example of resonant forcing. Plot the solution for resonant and non-resonant forcing. Hint, Solution Exercise 21.8 Find the variation of parameters solution for the third order differential equation y + p2(x)y + p1(x)y + p0(x)y = f(x). Hint, Solution Green Functions Exercise 21.9 Use a Green function to solve y = f(x), y(−∞) = y (−∞) = 0. Verify the the solution satisfies the differential equation. Hint, Solution Exercise 21.10 Solve the initial value problem y + 1 x y − 1 x2 y = x2 , y(0) = 0, y (0) = 1. First use variation of parameters, and then solve the problem with a Green function. Hint, Solution Exercise 21.11 What are the continuity conditions at x = ξ for the Green function for the problem y + p2(x)y + p1(x)y + p0(x)y = f(x). Hint, Solution 683
  • 704. Exercise 21.12 Use variation of parameters and Green functions to solve x2 y − 2xy + 2y = e−x , y(1) = 0, y (1) = 1. Hint, Solution Exercise 21.13 Find the Green function for y − y = f(x), y (0) = y(1) = 0. Hint, Solution Exercise 21.14 Find the Green function for y − y = f(x), y(0) = y(∞) = 0. Hint, Solution Exercise 21.15 Find the Green function for each of the following: a) xu + u = f(x), u(0+ ) bounded, u(1) = 0. b) u − u = f(x), u(−a) = u(a) = 0. c) u − u = f(x), u(x) bounded as |x| → ∞. d) Show that the Green function for (b) approaches that for (c) as a → ∞. Hint, Solution Exercise 21.16 1. For what values of λ does the problem y + λy = f(x), y(0) = y(π) = 0, (21.5) have a unique solution? Find the Green functions for these cases. 2. For what values of α does the problem y + 9y = 1 + αx, y(0) = y(π) = 0, have a solution? Find the solution. 3. For λ = n2 , n ∈ Z+ state in general the conditions on f in Equation 21.5 so that a solution will exist. What is the appropriate modified Green function (in terms of eigenfunctions)? Hint, Solution Exercise 21.17 Show that the inhomogeneous boundary value problem: Lu ≡ (pu ) + qu = f(x), a < x < b, u(a) = α, u(b) = β has the solution: u(x) = b a g(x; ξ)f(ξ) dξ − αp(a)gξ(x; a) + βp(b)gξ(x; b). Hint, Solution 684
  • 705. Exercise 21.18 The Green function for u − k2 u = f(x), −∞ < x < ∞ subject to |u(±∞)| < ∞ is G(x; ξ) = − 1 2k e−k|x−ξ| . (We assume that k > 0.) Use the image method to find the Green function for the same equation on the semi-infinite interval 0 < x < ∞ satisfying the boundary conditions, i) u(0) = 0 |u(∞)| < ∞, ii) u (0) = 0 |u(∞)| < ∞. Express these results in simplified forms without absolute values. Hint, Solution Exercise 21.19 1. Determine the Green function for solving: y − a2 y = f(x), y(0) = y (L) = 0. 2. Take the limit as L → ∞ to find the Green function on (0, ∞) for the boundary conditions: y(0) = 0, y (∞) = 0. We assume here that a > 0. Use the limiting Green function to solve: y − a2 y = e−x , y(0) = 0, y (∞) = 0. Check that your solution satisfies all the conditions of the problem. Hint, Solution 685
  • 706. 21.11 Hints Undetermined Coefficients Hint 21.1 Hint 21.2 Variation of Parameters Hint 21.3 Hint 21.4 Hint 21.5 Hint 21.6 Hint 21.7 Hint 21.8 Look for a particular solution of the form yp = u1y1 + u2y2 + u3y3, where the yj’s are homogeneous solutions. Impose the constraints u1y1 + u2y2 + u3y3 = 0 u1y1 + u2y2 + u3y3 = 0. To avoid some messy algebra when solving for uj, use Kramer’s rule. Green Functions Hint 21.9 Hint 21.10 Hint 21.11 Hint 21.12 Hint 21.13 cosh(x) and sinh(x−1) are homogeneous solutions that satisfy the left and right boundary conditions, respectively. 686
  • 707. Hint 21.14 sinh(x) and e−x are homogeneous solutions that satisfy the left and right boundary conditions, respectively. Hint 21.15 The Green function for the differential equation L[y] ≡ d dx (p(x)y ) + q(x)y = f(x), subject to unmixed, homogeneous boundary conditions is G(x|ξ) = y1(x<)y2(x>) p(ξ)W(ξ) , G(x|ξ) = y1(x)y2(ξ) p(ξ)W (ξ) for a ≤ x ≤ ξ, y1(ξ)y2(x) p(ξ)W (ξ) for ξ ≤ x ≤ b, where y1 and y2 are homogeneous solutions that satisfy the left and right boundary conditions, respectively. Recall that if y(x) is a solution of a homogeneous, constant coefficient differential equation then y(x + c) is also a solution. Hint 21.16 The problem has a Green function if and only if the inhomogeneous problem has a unique solution. The inhomogeneous problem has a unique solution if and only if the homogeneous problem has only the trivial solution. Hint 21.17 Show that gξ(x; a) and gξ(x; b) are solutions of the homogeneous differential equation. Determine the value of these solutions at the boundary. Hint 21.18 Hint 21.19 687
  • 708. 21.12 Solutions Undetermined Coefficients Solution 21.1 1. We consider y + 2y + 5y = 3 sin(2t). We first find the homogeneous solution with the substitition y = eλt . λ2 + 2λ + 5 = 0 λ = −1 ± 2i The homogeneous solution is yh = c1 e−t cos(2t) + c2 e−t sin(2t). We guess a particular solution of the form yp = a cos(2t) + b sin(2t). We substitute this into the differential equation to determine the coefficients. yp + 2yp + 5yp = 3 sin(2t) −4a cos(2t) − 4b sin(2t) − 4a sin(2t) + 4b sin(2t) + 5a cos(2t) + 5b sin(2t) = −3 sin(2t) (a + 4b) cos(2t) + (−3 − 4a + b) sin(2t) = 0 a + 4b = 0, −4a + b = 3 a = − 12 17 , b = 3 17 A particular solution is yp = 3 17 (sin(2t) − 4 cos(2t)). The general solution of the differential equation is y = c1 e−t cos(2t) + c2 e−t sin(2t) + 3 17 (sin(2t) − 4 cos(2t)). 2. We consider 2y + 3y + y = t2 + 3 sin(t) We first find the homogeneous solution with the substitition y = eλt . 2λ2 + 3λ + 1 = 0 λ = {−1, −1/2} The homogeneous solution is yh = c1 e−t +c2 e−t/2 . We guess a particular solution of the form yp = at2 + bt + c + d cos(t) + e sin(t). We substitute this into the differential equation to determine the coefficients. 2yp + 3yp + yp = t2 + 3 sin(t) 688
  • 709. 2(2a − d cos(t) − e sin(t)) + 3(2at + b − d sin(t) + e cos(t)) + at2 + bt + c + d cos(t) + e sin(t) = t2 + 3 sin(t) (a − 1)t2 + (6a + b)t + (4a + 3b + c) + (−d + 3e) cos(t) − (3 + 3d + e) sin(t) = 0 a − 1 = 0, 6a + b = 0, 4a + 3b + c = 0, −d + 3e = 0, 3 + 3d + e = 0 a = 1, b = −6, c = 14, d = − 9 10 , e = − 3 10 A particular solution is yp = t2 − 6t + 14 − 3 10 (3 cos(t) + sin(t)). The general solution of the differential equation is y = c1 e−t +c2 e−t/2 +t2 − 6t + 14 − 3 10 (3 cos(t) + sin(t)). Solution 21.2 1. We consider the problem y − 2y + y = t et +4, y(0) = 1, y (0) = 1. First we solve the homogeneous equation with the substitution y = eλt . λ2 − 2λ + 1 = 0 (λ − 1)2 = 0 λ = 1 The homogeneous solution is yh = c1 et +c2t et . We guess a particular solution of the form yp = at3 et +bt2 et +4. We substitute this into the inhomogeneous differential equation to determine the coefficients. yp − 2yp + yp = t et +4 (a(t3 + 6t2 + 6t) + b(t2 + 4t + 2)) et −2(a(t2 + 3t) + b(t + 2)) et at3 et +bt2 et +4 = t et +4 (6a − 1)t + 2b = 0 6a − 1 = 0, 2b = 0 a = 1 6 , b = 0 A particular solution is yp = t3 6 et +4. The general solution of the differential equation is y = c1 et +c2t et + t3 6 et +4. We use the initial conditions to determine the constants of integration. y(0) = 1, y (0) = 1 c1 + 4 = 1, c1 + c2 = 1 c1 = −3, c2 = 4 689
  • 710. The solution of the initial value problem is y = t3 6 + 4t − 3 et +4. 2. We consider the problem y + 2y + 5y = 4 e−t cos(2t), y(0) = 1, y (0) = 0. First we solve the homogeneous equation with the substitution y = eλt . λ2 + 2λ + 5 = 0 λ = −1 ± √ 1 − 5 λ = −1 ± ı2 The homogeneous solution is yh = c1 e−t cos(2t) + c2 e−t sin(2t). We guess a particular solution of the form yp = t e−t (a cos(2t) + b sin(2t)) We substitute this into the inhomogeneous differential equation to determine the coefficients. yp + 2yp + 5yp = 4 e−t cos(2t) e−t ((−(2 + 3t)a + 4(1 − t)b) cos(2t) + (4(t − 1)a − (2 + 3t)b) sin(2t)) + 2 e−t (((1 − t)a + 2tb) cos(2t) + (−2ta + (1 − t)b) sin(2t)) + 5(e−t (ta cos(2t) + tb sin(2t))) = 4 e−t cos(2t) 4(b − 1) cos(2t) − 4a sin(2t) = 0 a = 0, b = 1 A particular solution is yp = t e−t sin(2t). The general solution of the differential equation is y = c1 e−t cos(2t) + c2 e−t sin(2t) + t e−t sin(2t). We use the initial conditions to determine the constants of integration. y(0) = 1, y (0) = 0 c1 = 1, −c1 + 2c2 = 0 c1 = 1, c2 = 1 2 The solution of the initial value problem is y = 1 2 e−t (2 cos(2t) + (2t + 1) sin(2t)) . Variation of Parameters 690
  • 711. Solution 21.3 1. We consider the equation y − 5y + 6y = 2 et . We find homogeneous solutions with the substitution y = eλt . λ2 − 5λ + 6 = 0 λ = {2, 3} The homogeneous solutions are y1 = e2t , y2 = e3t . We compute the Wronskian of these solutions. W(t) = e2t e3t 2 e2t 3 e3t = e5t We find a particular solution with variation of parameters. yp = − e2t 2 et e3t e5t dt + e3t 2 et e2t e5t dt = −2 e2t e−t dt + 2 e3t e−2t dt = 2 et − et yp = et 2. We consider the equation y + y = tan(t), 0 < t < π 2 . We find homogeneous solutions with the substitution y = eλt . λ2 + 1 = 0 λ = ±i The homogeneous solutions are y1 = cos(t), y2 = sin(t). We compute the Wronskian of these solutions. W(t) = cos(t) sin(t) − sin(t) cos(t) = cos2 (t) + sin2 (t) = 1 We find a particular solution with variation of parameters. yp = − cos(t) tan(t) sin(t) dt + sin(t) tan(t) cos(t) dt = − cos(t) sin2 (t) cos(t) dt + sin(t) sin(t) dt = cos(t) ln cos(t/2) − sin(t/2) cos(t/2) + sin(t/2) + sin(t) − sin(t) cos(t) yp = cos(t) ln cos(t/2) − sin(t/2) cos(t/2) + sin(t/2) 691
  • 712. 3. We consider the equation y − 5y + 6y = g(t). The homogeneous solutions are y1 = e2t , y2 = e3t . The Wronskian of these solutions is W(t) = e5t . We find a particular solution with variation of parameters. yp = − e2t g(t) e3t e5t dt + e3t g(t) e2t e5t dt yp = − e2t g(t) e−2t dt + e3t g(t) e−3t dt Solution 21.4 Solve y (x) + y(x) = x, y(0) = 1, y (0) = 0. The solutions of the homogeneous equation are y1(x) = cos x, y2(x) = sin x. The Wronskian of these solutions is W[cos x, sin x] = cos x sin x − sin x cos x = cos2 x + sin2 x = 1. The variation of parameters solution for the particular solution is yp = − cos x x sin x dx + sin x x cos x dx = − cos x −x cos x + cos x dx + sin x x sin x − sin x dx = − cos x (−x cos x + sin x) + sin x (x sin x + cos x) = x cos2 x − cos x sin x + x sin2 x + cos x sin x = x The general solution of the differential equation is thus y = c1 cos x + c2 sin x + x. Applying the two initial conditions gives us the equations c1 = 1, c2 + 1 = 0. The solution subject to the initial conditions is y = cos x − sin x + x. Solution 21.5 Solve x2 y (x) − xy (x) + y(x) = x. The homogeneous equation is x2 y (x) − xy (x) + y(x) = 0. 692
  • 713. Substituting y = xλ into the homogeneous differential equation yields x2 λ(λ − 1)xλ−2 − xλxλ + xλ = 0 λ2 − 2λ + 1 = 0 (λ − 1)2 = 0 λ = 1. The homogeneous solutions are y1 = x, y2 = x log x. The Wronskian of the homogeneous solutions is W[x, x log x] = x x log x 1 1 + log x = x + x log x − x log x = x. Writing the inhomogeneous equation in the standard form: y (x) − 1 x y (x) + 1 x2 y(x) = 1 x . Using variation of parameters to find the particular solution, yp = −x log x x dx + x log x 1 x dx = −x 1 2 log2 x + x log x log x = 1 2 x log2 x. Thus the general solution of the inhomogeneous differential equation is y = c1x + c2x log x + 1 2 x log2 x. Solution 21.6 1. First we find the homogeneous solutions. We substitute y = eλx into the homogeneous differ- ential equation. y + y = 0 λ2 + 1 = 0 λ = ±ı y = eıx , e−ıx We can also write the solutions in terms of real-valued functions. y = {cos x, sin x} The Wronskian of the homogeneous solutions is W[cos x, sin x] = cos x sin x − sin x cos x = cos2 x + sin2 x = 1. 693
  • 714. We obtain a particular solution with the variation of parameters formula. yp = − cos x ex sin x dx + sin x ex cos x dx yp = − cos x 1 2 ex (sin x − cos x) + sin x 1 2 ex (sin x + cos x) yp = 1 2 ex The general solution is the particular solution plus a linear combination of the homogeneous solutions. y = 1 2 ex + cos x + sin x 2. y + λ2 y = sin x, y(0) = y (0) = 0 Assume that λ is positive. First we find the homogeneous solutions by substituting y = eαx into the homogeneous differential equation. y + λ2 y = 0 α2 + λ2 = 0 α = ±ıλ y = eıλx , e−ıλx y = {cos(λx), sin(λx)} The Wronskian of these homogeneous solution is W[cos(λx), sin(λx)] = cos(λx) sin(λx) −λ sin(λx) λ cos(λx) = λ cos2 (λx) + λ sin2 (λx) = λ. We obtain a particular solution with the variation of parameters formula. yp = − cos(λx) sin(λx) sin x λ dx + sin(λx) cos(λx) sin x λ dx We evaluate the integrals for λ = 1. yp = − cos(λx) cos(x) sin(λx) − λ sin x cos(λx) λ(λ2 − 1) + sin(λx) cos(x) cos(λx) + λ sin x sin(λx) λ(λ2 − 1) yp = sin x λ2 − 1 The general solution for λ = 1 is y = sin x λ2 − 1 + c1 cos(λx) + c2 sin(λx). The initial conditions give us the constraints: c1 = 0, 1 λ2 − 1 + λc2 = 0, For λ = 1, (non-resonant forcing), the solution subject to the initial conditions is y = λ sin(x) − sin(λx) λ(λ2 − 1) . 694
  • 715. Now consider the case λ = 1. We obtain a particular solution with the variation of parameters formula. yp = − cos(x) sin2 (x) dx + sin(x) cos(x) sin x dx yp = − cos(x) 1 2 (x − cos(x) sin(x)) + sin(x) − 1 2 cos2 (x) yp = − 1 2 x cos(x) The general solution for λ = 1 is y = − 1 2 x cos(x) + c1 cos(x) + c2 sin(x). The initial conditions give us the constraints: c1 = 0 − 1 2 + c2 = 0 For λ = 1, (resonant forcing), the solution subject to the initial conditions is y = 1 2 (sin(x) − x cos x). Solution 21.7 1. A set of linearly independent, homogeneous solutions is {cos t, sin t}. The Wronskian of these solutions is W(t) = cos t sin t − sin t cos t = cos2 t + sin2 t = 1. We use variation of parameters to find a particular solution. yp = − cos t g(t) sin t dt + sin t g(t) cos t dt The general solution can be written in the form, y(t) = c1 − t a g(τ) sin τ dτ cos t + c2 + t b g(τ) cos τ dτ sin t. 2. Since the initial conditions are given at t = 0 we choose the lower bounds of integration in the general solution to be that point. y = c1 − t 0 g(τ) sin τ dτ cos t + c2 + t 0 g(τ) cos τ dτ sin t The initial condition y(0) = 0 gives the constraint, c1 = 0. The derivative of y(t) is then, y (t) = −g(t) sin t cos t + t 0 g(τ) sin τ dτ sin t + g(t) cos t sin t + c2 + t 0 g(τ) cos τ dτ cos t, y (t) = t 0 g(τ) sin τ dτ sin t + c2 + t 0 g(τ) cos τ dτ cos t. The initial condition y (0) = 0 gives the constraint c2 = 0. The solution subject to the initial conditions is y = t 0 g(τ)(sin t cos τ − cos t sin τ) dτ y = t 0 g(τ) sin(t − τ) dτ 695
Figure 21.5: Non-resonant Forcing

3. The solution of the initial value problem

y'' + y = sin(λt), y(0) = 0, y'(0) = 0,

is

y = ∫_0^t sin(λτ) sin(t − τ) dτ.

For λ ≠ 1, this is

y = (1/2) ∫_0^t [cos(t − τ − λτ) − cos(t − τ + λτ)] dτ
  = (1/2) [ −sin(t − τ − λτ)/(1 + λ) + sin(t − τ + λτ)/(1 − λ) ]_0^t
  = (1/2) [ (sin t − sin(−λt))/(1 + λ) + (−sin t + sin(λt))/(1 − λ) ]

y = −λ sin t/(1 − λ²) + sin(λt)/(1 − λ²). (21.6)

The solution is the sum of two periodic functions of period 2π and 2π/λ. This solution is plotted in Figure 21.5 on the interval t ∈ [0, 16π] for the values λ = 1/4, 7/8, 5/2.

For λ = 1, we have

y = (1/2) ∫_0^t [cos(t − 2τ) − cos t] dτ
  = (1/2) [ −(1/2) sin(t − 2τ) − τ cos t ]_0^t

y = (1/2)(sin t − t cos t). (21.7)

The solution has both a periodic and a transient term. This solution is plotted in Figure 21.6 on the interval t ∈ [0, 16π]. Note that we can derive (21.7) from (21.6) by taking the limit as λ → 1:

lim_{λ→1} (sin(λt) − λ sin t)/(1 − λ²) = lim_{λ→1} (t cos(λt) − sin t)/(−2λ) = (1/2)(sin t − t cos t).
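The convolution formula of part 2 is easy to exercise numerically for any forcing. A sketch (Python with NumPy/SciPy assumed; the forcing and names are ours) evaluates y(t) = ∫_0^t g(τ) sin(t − τ) dτ by quadrature and compares it with a direct numerical integration of the initial value problem:

import numpy as np
from scipy.integrate import quad, solve_ivp

g = lambda t: np.exp(-t) * np.cos(3 * t)        # any forcing will do

def y_conv(t):
    # convolution of g with sin t
    return quad(lambda tau: g(tau) * np.sin(t - tau), 0, t)[0]

# reference: integrate y'' + y = g(t), y(0) = y'(0) = 0 directly
sol = solve_ivp(lambda t, u: [u[1], g(t) - u[0]], (0, 10), [0, 0],
                dense_output=True, rtol=1e-10, atol=1e-12)
for t in [1.0, 5.0, 10.0]:
    print(t, y_conv(t), sol.sol(t)[0])          # columns should agree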
Figure 21.6: Resonant Forcing

Solution 21.8
Let y1, y2 and y3 be linearly independent homogeneous solutions to the differential equation

L[y] = y''' + p2 y'' + p1 y' + p0 y = f(x).

We will look for a particular solution of the form

yp = u1 y1 + u2 y2 + u3 y3.

Since the uj's are undetermined functions, we are free to impose two constraints. We choose the constraints to simplify the algebra.

u1' y1 + u2' y2 + u3' y3 = 0
u1' y1' + u2' y2' + u3' y3' = 0

Differentiating the expression for yp,

yp' = u1' y1 + u1 y1' + u2' y2 + u2 y2' + u3' y3 + u3 y3'
    = u1 y1' + u2 y2' + u3 y3'

yp'' = u1' y1' + u1 y1'' + u2' y2' + u2 y2'' + u3' y3' + u3 y3''
     = u1 y1'' + u2 y2'' + u3 y3''

yp''' = u1' y1'' + u1 y1''' + u2' y2'' + u2 y2''' + u3' y3'' + u3 y3'''.

Substituting the expressions for yp and its derivatives into the differential equation,

u1' y1'' + u1 y1''' + u2' y2'' + u2 y2''' + u3' y3'' + u3 y3'''
  + p2(u1 y1'' + u2 y2'' + u3 y3'') + p1(u1 y1' + u2 y2' + u3 y3') + p0(u1 y1 + u2 y2 + u3 y3) = f(x)

u1' y1'' + u2' y2'' + u3' y3'' + u1 L[y1] + u2 L[y2] + u3 L[y3] = f(x)

u1' y1'' + u2' y2'' + u3' y3'' = f(x).

With the two constraints, we have the system of equations

u1' y1 + u2' y2 + u3' y3 = 0
u1' y1' + u2' y2' + u3' y3' = 0
u1' y1'' + u2' y2'' + u3' y3'' = f(x).

We solve for the uj' using Cramer's rule.

u1' = (y2 y3' − y2' y3) f(x)/W(x),  u2' = −(y1 y3' − y1' y3) f(x)/W(x),  u3' = (y1 y2' − y1' y2) f(x)/W(x)

Here W(x) is the Wronskian of {y1, y2, y3}. Integrating the expressions for uj', the particular solution is

yp = y1 ∫ (y2 y3' − y2' y3) f(x)/W(x) dx + y2 ∫ (y3 y1' − y3' y1) f(x)/W(x) dx + y3 ∫ (y1 y2' − y1' y2) f(x)/W(x) dx.
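This formula can be exercised symbolically. A sketch with SymPy (assumed; not from the text) builds the constrained system, solves it (LUsolve here plays the role of Cramer's rule), and checks the resulting particular solution on a concrete equation whose homogeneous solutions are e^x, e^{2x}, e^{3x}:

import sympy as sp

x = sp.symbols('x')
y1, y2, y3 = sp.exp(x), sp.exp(2*x), sp.exp(3*x)   # solve y''' - 6y'' + 11y' - 6y = f
f = sp.exp(4*x)

M = sp.Matrix([[y1, y2, y3],
               [y1.diff(x), y2.diff(x), y3.diff(x)],
               [y1.diff(x, 2), y2.diff(x, 2), y3.diff(x, 2)]])
rhs = sp.Matrix([0, 0, f])
u_primes = M.LUsolve(rhs)                # the Wronskian system for u1', u2', u3'
yp = sum(sp.integrate(up, x) * yh for up, yh in zip(u_primes, (y1, y2, y3)))

L = yp.diff(x, 3) - 6*yp.diff(x, 2) + 11*yp.diff(x) - 6*yp
print(sp.simplify(L - f))                # 0
print(sp.simplify(yp))                   # exp(4x)/6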
  • 718. Green Functions Solution 21.9 We consider the Green function problem G = f(x), G(−∞|ξ) = G (−∞|ξ) = 0. The homogeneous solution is y = c1 + c2x. The homogeneous solution that satisfies the boundary conditions is y = 0. Thus the Green function has the form G(x|ξ) = 0 x < ξ, c1 + c2x x > ξ. The continuity and jump conditions are then G(ξ+ |ξ) = 0, G (ξ+ |ξ) = 1. Thus the Green function is G(x|ξ) = 0 x < ξ, x − ξ x > ξ = (x − ξ)H(x − ξ). The solution of the problem y = f(x), y(−∞) = y (−∞) = 0. is y = ∞ −∞ f(ξ)G(x|ξ) dξ y = ∞ −∞ f(ξ)(x − ξ)H(x − ξ) dξ y = x −∞ f(ξ)(x − ξ) dξ We differentiate this solution to verify that it satisfies the differential equation. y = [f(ξ)(x − ξ)]ξ=x + x −∞ ∂ ∂x (f(ξ)(x − ξ)) dξ = x −∞ f(ξ) dξ y = [f(ξ)]ξ=x = f(x) Solution 21.10 Since we are dealing with an Euler equation, we substitute y = xλ to find the homogeneous solutions. λ(λ − 1) + λ − 1 = 0 (λ − 1)(λ + 1) = 0 y1 = x, y2 = 1 x Variation of Parameters. The Wronskian of the homogeneous solutions is W(x) = x 1/x 1 −1/x2 = − 1 x − 1 x = − 2 x . 698
A particular solution is

yp = −x ∫ x²(1/x)/(−2/x) dx + (1/x) ∫ x²·x/(−2/x) dx
   = −x ∫ (−x²/2) dx + (1/x) ∫ (−x⁴/2) dx
   = x⁴/6 − x⁴/10
   = x⁴/15.

The general solution is

y = x⁴/15 + c1 x + c2/x.

Applying the initial conditions,

y(0) = 0 → c2 = 0
y'(0) = 1 → c1 = 1.

Thus we have the solution

y = x⁴/15 + x.

Green Function. Since this problem has both an inhomogeneous term in the differential equation and inhomogeneous boundary conditions, we separate it into the two problems

u'' + (1/x)u' − (1/x²)u = x², u(0) = u'(0) = 0,
v'' + (1/x)v' − (1/x²)v = 0, v(0) = 0, v'(0) = 1.

First we solve the inhomogeneous differential equation with the homogeneous boundary conditions. The Green function for this problem satisfies

L[G(x|ξ)] = δ(x − ξ), G(0|ξ) = G'(0|ξ) = 0.

Since the Green function must satisfy the homogeneous boundary conditions, it has the form

G(x|ξ) = { 0 for x < ξ,
           cx + d/x for x > ξ. }

From the continuity condition,

0 = cξ + d/ξ.

The jump condition yields

c − d/ξ² = 1.

Solving these two equations, we obtain

G(x|ξ) = { 0 for x < ξ,
           x/2 − ξ²/(2x) for x > ξ. }
  • 720. Thus the solution is u(x) = ∞ 0 G(x|ξ)ξ2 dξ = x 0 1 2 x − ξ2 2x ξ2 dξ = 1 6 x4 − 1 10 x4 = x4 15 . Now to solve the homogeneous differential equation with inhomogeneous boundary conditions. The general solution for v is v = cx + d/x. Applying the two boundary conditions gives v = x. Thus the solution for y is y = x + x4 15 . Solution 21.11 The Green function satisfies G (x|ξ) + p2(x)G (x|ξ) + p1(x)G (x|ξ) + p0(x)G(x|ξ) = δ(x − ξ). First note that only the G (x|ξ) term can have a delta function singularity. If a lower derivative had a delta function type singularity, then G (x|ξ) would be more singular than a delta function and there would be no other term in the equation to balance that behavior. Thus we see that G (x|ξ) will have a delta function singularity; G (x|ξ) will have a jump discontinuity; G (x|ξ) will be continuous at x = ξ. Integrating the differential equation from ξ− to ξ+ yields ξ+ ξ− G (x|ξ) dx = ξ+ ξ− δ(x − ξ) dx G (ξ+ |ξ) − G (ξ− |ξ) = 1. Thus we have the three continuity conditions: G (ξ+ |ξ) = G (ξ− |ξ) + 1 G (ξ+ |ξ) = G (ξ− |ξ) G(ξ+ |ξ) = G(ξ− |ξ) Solution 21.12 Variation of Parameters. Consider the problem x2 y − 2xy + 2y = e−x , y(1) = 0, y (1) = 1. Previously we showed that two homogeneous solutions are y1 = x, y2 = x2 . The Wronskian of these solutions is W(x) = x x2 1 2x = 2x2 − x2 = x2 . 700
  • 721. In the variation of parameters formula, we will choose 1 as the lower bound of integration. (This will simplify the algebra in applying the initial conditions.) yp = −x x 1 e−ξ ξ2 ξ4 dξ + x2 x 1 e−ξ ξ ξ4 dξ = −x x 1 e−ξ ξ2 dξ + x2 x 1 e−ξ ξ3 dξ = −x e−1 − e−x x − x 1 e−ξ ξ dξ + x2 e−x 2x − e−x 2x2 + 1 2 x 1 e−ξ ξ dξ = −x e−1 + 1 2 (1 + x) e−x + x + x2 2 x 1 e−ξ ξ dξ If you wanted to, you could write the last integral in terms of exponential integral functions. The general solution is y = c1x + c2x2 − x e−1 + 1 2 (1 + x) e−x + x + x2 2 x 1 e−ξ ξ dξ Applying the boundary conditions, y(1) = 0 → c1 + c2 = 0 y (1) = 1 → c1 + 2c2 = 1, we find that c1 = −1, c2 = 1. Thus the solution subject to the initial conditions is y = −(1 + e−1 )x + x2 + 1 2 (1 + x) e−x + x + x2 2 x 1 e−ξ ξ dξ Green Functions. The solution to the problem is y = u + v where u − 2 x u + 2 x2 u = e−x x2 , u(1) = 0, u (1) = 0, and v − 2 x v + 2 x2 v = 0, v(1) = 0, v (1) = 1. The problem for v has the solution v = −x + x2 . The Green function for u is G(x|ξ) = H(x − ξ)uξ(x) where uξ(ξ) = 0, and uξ(ξ) = 1. Thus the Green function is G(x|ξ) = H(x − ξ) −x + x2 ξ . The solution for u is then u = ∞ 1 G(x|ξ) e−ξ ξ2 dξ = x 1 −x + x2 ξ e−ξ ξ2 dξ = −x e−1 + 1 2 (1 + x) e−x + x + x2 2 x 1 e−ξ ξ dξ. 701
  • 722. Thus we find the solution for y is y = −(1 + e−1 )x + x2 + 1 2 (1 + x) e−x + x + x2 2 x 1 e−ξ ξ dξ Solution 21.13 The differential equation for the Green function is G − G = δ(x − ξ), Gx(0|ξ) = G(1|ξ) = 0. Note that cosh(x) and sinh(x−1) are homogeneous solutions that satisfy the left and right boundary conditions, respectively. The Wronskian of these two solutions is W(x) = cosh(x) sinh(x − 1) sinh(x) cosh(x − 1) = cosh(x) cosh(x − 1) − sinh(x) sinh(x − 1) = 1 4 ex + e−x ex−1 + e−x+1 − ex − e−x ex−1 − e−x+1 = 1 2 e1 + e−1 = cosh(1). The Green function for the problem is then G(x|ξ) = cosh(x<) sinh(x> − 1) cosh(1) , G(x|ξ) = cosh(x) sinh(ξ−1) cosh(1) for 0 ≤ x ≤ ξ, cosh(ξ) sinh(x−1) cosh(1) for ξ ≤ x ≤ 1. Solution 21.14 The differential equation for the Green function is G − G = δ(x − ξ), G(0|ξ) = G(∞|ξ) = 0. Note that sinh(x) and e−x are homogeneous solutions that satisfy the left and right boundary conditions, respectively. The Wronskian of these two solutions is W(x) = sinh(x) e−x cosh(x) − e−x = − sinh(x) e−x − cosh(x) e−x = − 1 2 ex − e−x e−x − 1 2 ex + e−x e−x = −1 The Green function for the problem is then G(x|ξ) = − sinh(x<) e−x> G(x|ξ) = − sinh(x) e−ξ for 0 ≤ x ≤ ξ, − sinh(ξ) e−x for ξ ≤ x ≤ ∞. Solution 21.15 702
  • 723. a) The Green function problem is xG (x|ξ) + G (x|ξ) = δ(x − ξ), G(0|ξ) bounded, G(1|ξ) = 0. First we find the homogeneous solutions of the differential equation. xy + y = 0 This is an exact equation. d dx [xy ] = 0 y = c1 x y = c1 log x + c2 The homogeneous solutions y1 = 1 and y2 = log x satisfy the left and right boundary condi- tions, respectively. The Wronskian of these solutions is W(x) = 1 log x 0 1/x = 1 x . The Green function is G(x|ξ) = 1 · log x> ξ(1/ξ) , G(x|ξ) = log x>. b) The Green function problem is G (x|ξ) − G(x|ξ) = δ(x − ξ), G(−a|ξ) = G(a|ξ) = 0. {ex , e−x } and {cosh x, sinh x} are both linearly independent sets of homogeneous solutions. sinh(x+a) and sinh(x−a) are homogeneous solutions that satisfy the left and right boundary conditions, respectively. The Wronskian of these two solutions is, W(x) = sinh(x + a) sinh(x − a) cosh(x + a) cosh(x − a) = sinh(x + a) cosh(x − a) − sinh(x − a) cosh(x + a) = sinh(2a) The Green function is G(x|ξ) = sinh(x< + a) sinh(x> − a) sinh(2a) . c) The Green function problem is G (x|ξ) − G(x|ξ) = δ(x − ξ), G(x|ξ) bounded as |x| → ∞. ex and e−x are homogeneous solutions that satisfy the left and right boundary conditions, respectively. The Wronskian of these solutions is W(x) = ex e−x ex − e−x = −2. The Green function is G(x|ξ) = ex< e−x> −2 , G(x|ξ) = − 1 2 ex<−x> . 703
  • 724. d) The Green function from part (b) is, G(x|ξ) = sinh(x< + a) sinh(x> − a) sinh(2a) . We take the limit as a → ∞. lim a→∞ sinh(x< + a) sinh(x> − a) sinh(2a) = lim a→∞ (ex<+a − e−x<−a ) (ex>−a − e−x>+a ) 2 (e2a − e−2a) = lim a→∞ − ex<−x> + ex<+x>−2a + e−x<−x>−2a − e−x<+x>−4a 2 − 2 e−4a = − ex<−x> 2 Thus we see that the solution from part (b) approaches the solution from part (c) as a → ∞. Solution 21.16 1. The problem, y + λy = f(x), y(0) = y(π) = 0, has a Green function if and only if it has a unique solution. This inhomogeneous problem has a unique solution if and only if the homogeneous problem has only the trivial solution. First consider the case λ = 0. We find the general solution of the homogeneous differential equation. y = c1 + c2x Only the trivial solution satisfies the boundary conditions. The problem has a unique solution for λ = 0. Now consider non-zero λ. We find the general solution of the homogeneous differential equation. y = c1 cos √ λx + c2 sin √ λx . The solution that satisfies the left boundary condition is y = c sin √ λx . We apply the right boundary condition and find nontrivial solutions. sin √ λπ = 0 λ = n2 , n ∈ Z+ Thus the problem has a unique solution for all complex λ except λ = n2 , n ∈ Z+ . Consider the case λ = 0. We find solutions of the homogeneous equation that satisfy the left and right boundary conditions, respectively. y1 = x, y2 = x − π. We compute the Wronskian of these functions. W(x) = x x − π 1 1 = π. The Green function for this case is G(x|ξ) = x<(x> − π) π . 704
  • 725. We consider the case λ = n2 , λ = 0. We find the solutions of the homogeneous equation that satisfy the left and right boundary conditions, respectively. y1 = sin √ λx , y2 = sin √ λ(x − π) . We compute the Wronskian of these functions. W(x) = sin √ λx sin √ λ(x − π) √ λ cos √ λx √ λ cos √ λ(x − π) = √ λ sin √ λπ The Green function for this case is G(x|ξ) = sin √ λx< sin √ λ(x> − π) √ λ sin √ λπ . 2. Now we consider the problem y + 9y = 1 + αx, y(0) = y(π) = 0. The homogeneous solutions of the problem are constant multiples of sin(3x). Thus for each value of α, the problem either has no solution or an infinite number of solutions. There will be an infinite number of solutions if the inhomogeneity 1 + αx is orthogonal to the homogeneous solution sin(3x) and no solution otherwise. π 0 (1 + αx) sin(3x) dx = πα + 2 3 The problem has a solution only for α = −2/π. For this case the general solution of the inhomogeneous differential equation is y = 1 9 1 − 2x π + c1 cos(3x) + c2 sin(3x). The one-parameter family of solutions that satisfies the boundary conditions is y = 1 9 1 − 2x π − cos(3x) + c sin(3x). 3. For λ = n2 , n ∈ Z+ , y = sin(nx) is a solution of the homogeneous equation that satisfies the boundary conditions. Equation 21.5 has a (non-unique) solution only if f is orthogonal to sin(nx). π 0 f(x) sin(nx) dx = 0 The modified Green function satisfies G + n2 G = δ(x − ξ) − sin(nx) sin(nξ) π/2 . We expand G in a series of the eigenfunctions. G(x|ξ) = ∞ k=1 gk sin(kx) 705
  • 726. We substitute the expansion into the differential equation to determine the coefficients. This will not determine gn. We choose gn = 0, which is one of the choices that will make the modified Green function symmetric in x and ξ. ∞ k=1 gk n2 − k2 sin(kx) = 2 π ∞ k=1 k=n sin(kx) sin(kξ) G(x|ξ) = 2 π ∞ k=1 k=n sin(kx) sin(kξ) n2 − k2 The solution of the inhomogeneous problem is y(x) = π 0 f(ξ)G(x|ξ) dξ. Solution 21.17 We separate the problem for u into the two problems: Lv ≡ (pv ) + qv = f(x), a < x < b, v(a) = 0, v(b) = 0 Lw ≡ (pw ) + qw = 0, a < x < b, w(a) = α, w(b) = β and note that the solution for u is u = v + w. The problem for v has the solution, v = b a g(x; ξ)f(ξ) dξ, with the Green function, g(x; ξ) = v1(x<)v2(x>) p(ξ)W(ξ) ≡ v1(x)v2(ξ) p(ξ)W (ξ) for a ≤ x ≤ ξ, v1(ξ)v2(x) p(ξ)W (ξ) for ξ ≤ x ≤ b. Here v1 and v2 are homogeneous solutions that respectively satisfy the left and right homogeneous boundary conditions. Since g(x; ξ) is a solution of the homogeneous equation for x = ξ, gξ(x; ξ) is a solution of the homogeneous equation for x = ξ. This is because for x = ξ, L ∂ ∂ξ g = ∂ ∂ξ L[g] = ∂ ∂ξ δ(x − ξ) = 0. If ξ is outside of the domain, (a, b), then g(x; ξ) and gξ(x; ξ) are homogeneous solutions on that domain. In particular gξ(x; a) and gξ(x; b) are homogeneous solutions, L [gξ(x; a)] = L [gξ(x; b)] = 0. Now we use the definition of the Green function and v1(a) = v2(b) = 0 to determine simple expres- sions for these homogeneous solutions. gξ(x; a) = v1(a)v2(x) p(a)W(a) − (p (a)W(a) + p(a)W (a))v1(a)v2(x) (p(a)W(a))2 = v1(a)v2(x) p(a)W(a) = v1(a)v2(x) p(a)(v1(a)v2(a) − v1(a)v2(a)) = − v1(a)v2(x) p(a)v1(a)v2(a) = − v2(x) p(a)v2(a) 706
Figure 21.7: G(x; 1) and G(x; −1).

We note that this solution has the boundary values

gξ(a; a) = −v2(a)/(p(a)v2(a)) = −1/p(a),  gξ(b; a) = −v2(b)/(p(a)v2(a)) = 0.

We examine the second solution.

gξ(x; b) = v1(x)v2'(b)/(p(b)W(b)) − (p'(b)W(b) + p(b)W'(b))v1(x)v2(b)/(p(b)W(b))²
         = v1(x)v2'(b)/(p(b)W(b))
         = v1(x)v2'(b)/(p(b)(v1(b)v2'(b) − v1'(b)v2(b)))
         = v1(x)v2'(b)/(p(b)v1(b)v2'(b))
         = v1(x)/(p(b)v1(b))

This solution has the boundary values

gξ(a; b) = v1(a)/(p(b)v1(b)) = 0,  gξ(b; b) = v1(b)/(p(b)v1(b)) = 1/p(b).

Thus we see that the solution of

Lw = (pw')' + qw = 0, a < x < b, w(a) = α, w(b) = β,

is

w = −αp(a)gξ(x; a) + βp(b)gξ(x; b).

Therefore the solution of the problem for u is

u = ∫_a^b g(x; ξ)f(ξ) dξ − αp(a)gξ(x; a) + βp(b)gξ(x; b).

Solution 21.18
Figure 21.7 shows a plot of G(x; 1) and G(x; −1) for k = 1. First we consider the boundary condition u(0) = 0. Note that the solution of

G'' − k²G = δ(x − ξ) − δ(x + ξ), |G(±∞; ξ)| < ∞,

satisfies the condition G(0; ξ) = 0. Thus the Green function which satisfies G(0; ξ) = 0 is

G(x; ξ) = −(1/(2k)) e^{−k|x−ξ|} + (1/(2k)) e^{−k|x+ξ|}.

Since x, ξ > 0 we can write this as

G(x; ξ) = −(1/(2k)) e^{−k|x−ξ|} + (1/(2k)) e^{−k(x+ξ)}
        = { −(1/(2k)) e^{−k(ξ−x)} + (1/(2k)) e^{−k(x+ξ)}, for x < ξ,
            −(1/(2k)) e^{−k(x−ξ)} + (1/(2k)) e^{−k(x+ξ)}, for ξ < x }
        = { −(1/k) e^{−kξ} sinh(kx), for x < ξ,
            −(1/k) e^{−kx} sinh(kξ), for ξ < x }

G(x; ξ) = −(1/k) e^{−kx>} sinh(kx<).
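As a quick numerical sanity check of the image construction, a short sketch (Python with NumPy assumed; the code is ours, not from the text) confirms the boundary condition and that the image term leaves the equation unchanged away from x = ξ:

import numpy as np

k, xi = 1.0, 1.0

def G(x):
    # free-space Green function plus an odd image through x = 0
    return (-np.exp(-k * np.abs(x - xi)) + np.exp(-k * np.abs(x + xi))) / (2 * k)

print(G(0.0))                                   # 0: the boundary condition u(0) = 0 holds
x, h = np.array([0.3, 2.5]), 1e-4               # interior points away from x = xi
Gpp = (G(x + h) - 2 * G(x) + G(x - h)) / h**2   # finite-difference G''
print(Gpp - k**2 * G(x))                        # ~0: G'' - k^2 G = 0 for x != xi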
Figure 21.8: G(x; 1) and G(x; −1).

Now consider the boundary condition u'(0) = 0. Note that the solution of

G'' − k²G = δ(x − ξ) + δ(x + ξ), |G(±∞; ξ)| < ∞,

satisfies the boundary condition G'(0; ξ) = 0. Thus the Green function is

G(x; ξ) = −(1/(2k)) e^{−k|x−ξ|} − (1/(2k)) e^{−k|x+ξ|}.

Since x, ξ > 0 we can write this as

G(x; ξ) = −(1/(2k)) e^{−k|x−ξ|} − (1/(2k)) e^{−k(x+ξ)}
        = { −(1/(2k)) e^{−k(ξ−x)} − (1/(2k)) e^{−k(x+ξ)}, for x < ξ,
            −(1/(2k)) e^{−k(x−ξ)} − (1/(2k)) e^{−k(x+ξ)}, for ξ < x }
        = { −(1/k) e^{−kξ} cosh(kx), for x < ξ,
            −(1/k) e^{−kx} cosh(kξ), for ξ < x }

G(x; ξ) = −(1/k) e^{−kx>} cosh(kx<).

The Green functions which satisfy G(0; ξ) = 0 and G'(0; ξ) = 0 are shown in Figure 21.8.

Solution 21.19
1. The Green function satisfies

g'' − a²g = δ(x − ξ), g(0; ξ) = g'(L; ξ) = 0.

We can write the set of homogeneous solutions as

{e^{ax}, e^{−ax}} or {cosh(ax), sinh(ax)}.
  • 729. The solutions that respectively satisfy the left and right boundary conditions are u1 = sinh(ax), u2 = cosh(a(x − L)). The Wronskian of these solutions is W(x) = sinh(ax) cosh(a(x − L)) a cosh(ax) a sinh(a(x − L)) = −a cosh(aL). Thus the Green function is g(x; ξ) = −sinh(ax) cosh(a(ξ−L)) a cosh(aL) for x ≤ ξ, −sinh(aξ) cosh(a(x−L)) a cosh(aL) for ξ ≤ x. = − sinh(ax<) cosh(a(x> − L)) a cosh(aL) . 2. We take the limit as L → ∞. g(x; ξ) = lim L→∞ − sinh(ax<) cosh(a(x> − L)) a cosh(aL) = lim L→∞ − sinh(ax<) a cosh(ax>) cosh(aL) − sinh(ax>) sinh(aL) cosh(aL) = − sinh(ax<) a (cosh(ax>) − sinh(ax>)) g(x; ξ) = − 1 a sinh(ax<) e−ax> The solution of y − a2 y = e−x , y(0) = y (∞) = 0 is y = ∞ 0 g(x; ξ) e−ξ dξ = − 1 a ∞ 0 sinh(ax<) e−ax> e−ξ dξ = − 1 a x 0 sinh(aξ) e−ax e−ξ dξ + ∞ x sinh(ax) e−aξ e−ξ dξ We first consider the case that a = 1. = − 1 a e−ax a2 − 1 −a + e−x (a cosh(ax) + sinh(ax)) + 1 a + 1 e−(a+1)x sinh(ax) = e−ax − e−x a2 − 1 For a = 1, we have y = − 1 4 e −x −1 + 2x + e−2x + 1 2 e−2x sinh(x) = − 1 2 x e−x . Thus the solution of the problem is y = e−ax − e−x a2−1 for a = 1, −1 2 x e−x for a = 1. We note that this solution satisfies the differential equation and the boundary conditions. 709
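As an independent check of the closed-form solution, a SymPy sketch (assumed; not part of the text) verifies both cases:

import sympy as sp

x = sp.symbols('x')
a = sp.symbols('a', positive=True)

y = (sp.exp(-a*x) - sp.exp(-x)) / (a**2 - 1)             # the a != 1 case
print(sp.simplify(y.diff(x, 2) - a**2*y - sp.exp(-x)))   # 0: satisfies the equation
print(y.subs(x, 0), sp.limit(y.diff(x), x, sp.oo))       # 0, 0: boundary conditions

y1 = -x * sp.exp(-x) / 2                                 # the a = 1 case
print(sp.simplify(y1.diff(x, 2) - y1 - sp.exp(-x)))      # 0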
  • 730. 21.13 Quiz Problem 21.1 Find the general solution of y − y = f(x), where f(x) is a known function. Solution 710
  • 731. 21.14 Quiz Solutions Solution 21.1 y − y = f(x) We substitute y = eλx into the homogeneous differential equation. y − y = 0 λ2 eλx − eλx = 0 λ = ±1 The homogeneous solutions are ex and e−x . The Wronskian of these solutions is ex e−x ex − e−x = −2. We find a particular solution with variation of parameters. yp = − ex e−x f(x) −2 dx + e−x ex f(x) −2 dx The general solution is y = c1 ex +c2 e−x − ex e−x f(x) −2 dx + e−x ex f(x) −2 dx. 711
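For a concrete f this formula reproduces earlier results; with f(x) = sin x it gives the bounded particular solution found in Example 21.7.4. A SymPy sketch (assumed, not from the text):

import sympy as sp

x = sp.symbols('x')
f = sp.sin(x)
# the variation of parameters formula above, with the factor 1/(-2) pulled out
yp = (sp.exp(x) * sp.integrate(sp.exp(-x) * f, x)
      - sp.exp(-x) * sp.integrate(sp.exp(x) * f, x)) / 2
print(sp.simplify(yp))   # -sin(x)/2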
  • 733. Chapter 22 Difference Equations Televisions should have a dial to turn up the intelligence. There is a brightness knob, but it doesn’t work. -? 22.1 Introduction Example 22.1.1 Gambler’s ruin problem. Consider a gambler that initially has n dollars. He plays a game in which he has a probability p of winning a dollar and q of losing a dollar. (Note that p + q = 1.) The gambler has decided that if he attains N dollars he will stop playing the game. In this case we will say that he has succeeded. Of course if he runs out of money before that happens, we will say that he is ruined. What is the probability of the gambler’s ruin? Let us denote this probability by an. We know that if he has no money left, then his ruin is certain, so a0 = 1. If he reaches N dollars he will quit the game, so that aN = 0. If he is somewhere in between ruin and success then the probability of his ruin is equal to p times the probability of his ruin if he had n + 1 dollars plus q times the probability of his ruin if he had n − 1 dollars. Writing this in an equation, an = pan+1 + qan−1 subject to a0 = 1, aN = 0. This is an example of a difference equation. You will learn how to solve this particular problem in the section on constant coefficient equations. Consider the sequence a1, a2, a3, . . . Analogous to a derivative of a continuous function, we can define a discrete derivative on the sequence Dan = an+1 − an. The second discrete derivative is then defined as D2 an = D[an+1 − an] = an+2 − 2an+1 + an. The discrete integral of an is n i=n0 ai. Corresponding to β α df dx dx = f(β) − f(α), in the discrete realm we have β−1 n=α D[an] = β−1 n=α (an+1 − an) = aβ − aα. 713
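The discrete analogue of the fundamental theorem is easy to see in code. A short Python sketch (assumed, not part of the text) checks the telescoping identity on an arbitrary sequence:

import numpy as np

a = np.array([3., 1., 4., 1., 5., 9., 2., 6.])     # any sequence
Da = a[1:] - a[:-1]                                # discrete derivative D a_n = a_{n+1} - a_n

alpha, beta = 1, 6                                 # 0-based indices
print(Da[alpha:beta].sum(), a[beta] - a[alpha])    # telescoping: the two values are equal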
Linear difference equations have the form

D^r a_n + p_{r−1}(n) D^{r−1} a_n + · · · + p_1(n) D a_n + p_0(n) a_n = f(n).

From the definition of the discrete derivative, an equivalent form is

a_{n+r} + q_{r−1}(n) a_{n+r−1} + · · · + q_1(n) a_{n+1} + q_0(n) a_n = f(n).

Besides being important in their own right, we will need to solve difference equations in order to develop series solutions of differential equations. Also, some methods of solving differential equations numerically are based on approximating them with difference equations.

There are many similarities between differential and difference equations. Like differential equations, an rth order homogeneous difference equation has r linearly independent solutions. The general solution to the rth order inhomogeneous equation is the sum of a particular solution and an arbitrary linear combination of the homogeneous solutions.

For an rth order difference equation, the initial condition is given by specifying the values of the first r a_n's.

Example 22.1.2 Consider the difference equation a_{n+2} − a_{n+1} − a_n = 0 subject to the initial condition a_1 = a_2 = 1. Note that although we may not know a closed-form formula for the a_n, we can calculate the a_n in order by substituting into the difference equation. The first few a_n are

1, 1, 2, 3, 5, 8, 13, 21, . . .

We recognize this as the Fibonacci sequence.

22.2 Exact Equations

Consider the sequence a_1, a_2, . . .. Exact difference equations on this sequence have the form

D[F(a_n, a_{n+1}, . . . , n)] = g(n).

We can reduce the order of (or, for first order, solve) this equation by summing from 1 to n − 1.

Σ_{j=1}^{n−1} D[F(a_j, a_{j+1}, . . . , j)] = Σ_{j=1}^{n−1} g(j)
F(a_n, a_{n+1}, . . . , n) − F(a_1, a_2, . . . , 1) = Σ_{j=1}^{n−1} g(j)
F(a_n, a_{n+1}, . . . , n) = Σ_{j=1}^{n−1} g(j) + F(a_1, a_2, . . . , 1)

Result 22.2.1 We can reduce the order of the exact difference equation

D[F(a_n, a_{n+1}, . . . , n)] = g(n), for n ≥ 1,

by summing both sides of the equation to obtain

F(a_n, a_{n+1}, . . . , n) = Σ_{j=1}^{n−1} g(j) + F(a_1, a_2, . . . , 1).
  • 735. Example 22.2.1 Consider the difference equation, D[nan] = 1. Summing both sides of this equa- tion n−1 j=1 D[jaj] = n−1 j=1 1 nan − a1 = n − 1 an = n + a1 − 1 n . 22.3 Homogeneous First Order Consider the homogeneous first order difference equation an+1 = p(n)an, for n ≥ 1. We can directly solve for an. an = an an−1 an−1 an−2 an−2 · · · a1 a1 = a1 an an−1 an−1 an−2 · · · a2 a1 = a1p(n − 1)p(n − 2) · · · p(1) = a1 n−1 j=1 p(j) Alternatively, we could solve this equation by making it exact. Analogous to an integrating factor for differential equations, we multiply the equation by the summing factor S(n) =   n j=1 p(j)   −1 . an+1 − p(n)an = 0 an+1 n j=1 p(j) − an n−1 j=1 p(j) = 0 D an n−1 j=1 p(j) = 0 Now we sum from 1 to n − 1. an n−1 j=1 p(j) − a1 = 0 an = a1 n−1 j=1 p(j) Result 22.3.1 The solution of the homogeneous first order difference equation an+1 = p(n)an, for n ≥ 1, is an = a1 n−1 j=1 p(j). 715
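Result 22.3.1 can be checked directly: iterate the recurrence and compare against the product formula. A Python sketch (assumed, not part of the text):

import math

p = lambda n: 1 + 1/n            # any nonzero p(n)
a1, N = 2.0, 10

# iterate a_{n+1} = p(n) a_n
a = [a1]
for n in range(1, N):
    a.append(p(n) * a[-1])

# closed form: a_n = a_1 * prod_{j=1}^{n-1} p(j)
closed = [a1 * math.prod(p(j) for j in range(1, n)) for n in range(1, N + 1)]
print(max(abs(u - v) for u, v in zip(a, closed)) < 1e-12)   # True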
Example 22.3.1 Consider the equation $a_{n+1} = n a_n$ with the initial condition $a_1 = 1$. Then
\[ a_n = a_1 \prod_{j=1}^{n-1} j = (1)(n-1)! = \Gamma(n). \]
Recall that $\Gamma(z)$ is the generalization of the factorial function. For positive integral values of the argument, $\Gamma(n) = (n-1)!$.
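A quick numerical check of Example 22.3.1, using Python's math.gamma: iterating $a_{n+1} = n a_n$ from $a_1 = 1$ reproduces $\Gamma(n) = (n-1)!$.

    from math import gamma

    a = {1: 1.0}
    for n in range(1, 8):
        a[n + 1] = n * a[n]                       # a_{n+1} = n a_n
    assert all(a[n] == gamma(n) for n in a)       # a_n = Gamma(n) = (n-1)!
    print([a[n] for n in sorted(a)])              # [1, 1, 2, 6, 24, 120, 720, 5040]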
22.4 Inhomogeneous First Order

Consider the equation
\[ a_{n+1} = p(n) a_n + q(n), \quad \text{for } n \ge 1. \]
Multiplying by $S(n) = \left( \prod_{j=1}^{n} p(j) \right)^{-1}$ yields
\[ \frac{a_{n+1}}{\prod_{j=1}^{n} p(j)} - \frac{a_n}{\prod_{j=1}^{n-1} p(j)} = \frac{q(n)}{\prod_{j=1}^{n} p(j)}. \]
The left hand side is a discrete derivative:
\[ D\left[ \frac{a_n}{\prod_{j=1}^{n-1} p(j)} \right] = \frac{q(n)}{\prod_{j=1}^{n} p(j)}. \]
Summing both sides from $1$ to $n - 1$,
\[ \frac{a_n}{\prod_{j=1}^{n-1} p(j)} - a_1 = \sum_{k=1}^{n-1} \frac{q(k)}{\prod_{j=1}^{k} p(j)}, \]
\[ a_n = \prod_{m=1}^{n-1} p(m) \left( \sum_{k=1}^{n-1} \frac{q(k)}{\prod_{j=1}^{k} p(j)} + a_1 \right). \]

Result 22.4.1 The solution of the inhomogeneous first order difference equation
\[ a_{n+1} = p(n) a_n + q(n), \quad \text{for } n \ge 1, \]
is
\[ a_n = \prod_{m=1}^{n-1} p(m) \left( \sum_{k=1}^{n-1} \frac{q(k)}{\prod_{j=1}^{k} p(j)} + a_1 \right). \]

Example 22.4.1 Consider the equation $a_{n+1} = n a_n + 1$ for $n \ge 1$. The summing factor is
\[ S(n) = \left( \prod_{j=1}^{n} j \right)^{-1} = \frac{1}{n!}. \]
Multiplying the difference equation by the summing factor,
\[ \frac{a_{n+1}}{n!} - \frac{a_n}{(n-1)!} = \frac{1}{n!} \]
\[ D\left[ \frac{a_n}{(n-1)!} \right] = \frac{1}{n!} \]
\[ \frac{a_n}{(n-1)!} - a_1 = \sum_{k=1}^{n-1} \frac{1}{k!} \]
\[ a_n = (n-1)! \left( \sum_{k=1}^{n-1} \frac{1}{k!} + a_1 \right). \]

Example 22.4.2 Consider the equation
\[ a_{n+1} = \lambda a_n + \mu, \quad \text{for } n \ge 0, \ \lambda \ne 1. \]
From the above result, (with the products and sums starting at zero instead of one), the solution is
\[ a_n = \prod_{m=0}^{n-1} \lambda \left( \sum_{k=0}^{n-1} \frac{\mu}{\prod_{j=0}^{k} \lambda} + a_0 \right) = \lambda^n \left( \mu \sum_{k=0}^{n-1} \lambda^{-(k+1)} + a_0 \right) = \mu \sum_{k=0}^{n-1} \lambda^{n-k-1} + a_0 \lambda^n \]
\[ a_n = \mu \frac{1 - \lambda^n}{1 - \lambda} + a_0 \lambda^n. \]
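Example 22.4.2's closed form is easy to validate numerically; here is a hedged sketch with arbitrary sample values of $\lambda$, $\mu$ and $a_0$ (the values are ours, chosen only for illustration):

    lam, mu, a0 = 0.5, 2.0, 3.0
    a, n = a0, 10
    for _ in range(n):
        a = lam * a + mu                               # a_{k+1} = lambda a_k + mu
    closed = mu * (1 - lam**n) / (1 - lam) + a0 * lam**n
    print(a, closed)                                   # both equal 4 - 2**(-10)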
22.5 Homogeneous Constant Coefficient Equations

Homogeneous constant coefficient equations have the form
\[ a_{n+N} + p_{N-1} a_{n+N-1} + \cdots + p_1 a_{n+1} + p_0 a_n = 0. \]
The substitution $a_n = r^n$ yields
\[ r^N + p_{N-1} r^{N-1} + \cdots + p_1 r + p_0 = 0, \]
\[ (r - r_1)^{m_1} \cdots (r - r_k)^{m_k} = 0. \]
If $r_1$ is a distinct root then the associated linearly independent solution is $r_1^n$. If $r_1$ is a root of multiplicity $m > 1$ then the associated solutions are $r_1^n, n r_1^n, n^2 r_1^n, \ldots, n^{m-1} r_1^n$.

Result 22.5.1 Consider the homogeneous constant coefficient difference equation
\[ a_{n+N} + p_{N-1} a_{n+N-1} + \cdots + p_1 a_{n+1} + p_0 a_n = 0. \]
The substitution $a_n = r^n$ yields the equation
\[ (r - r_1)^{m_1} \cdots (r - r_k)^{m_k} = 0. \]
A set of linearly independent solutions is
\[ \{ r_1^n, n r_1^n, \ldots, n^{m_1 - 1} r_1^n, \ \ldots, \ r_k^n, n r_k^n, \ldots, n^{m_k - 1} r_k^n \}. \]

Example 22.5.1 Consider the equation $a_{n+2} - 3a_{n+1} + 2a_n = 0$ with the initial conditions $a_1 = 1$ and $a_2 = 3$. The substitution $a_n = r^n$ yields
\[ r^2 - 3r + 2 = (r - 1)(r - 2) = 0. \]
Thus the general solution is
\[ a_n = c_1 1^n + c_2 2^n. \]
The initial conditions give the two equations
\[ a_1 = 1 = c_1 + 2c_2, \qquad a_2 = 3 = c_1 + 4c_2. \]
Since $c_1 = -1$ and $c_2 = 1$, the solution to the difference equation subject to the initial conditions is
\[ a_n = 2^n - 1. \]
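For constant coefficient equations the characteristic roots can also be computed numerically (numpy here is an outside convenience, not anything the text requires), and Example 22.5.1's solution $a_n = 2^n - 1$ can be checked directly against the recurrence:

    import numpy as np

    print(np.roots([1, -3, 2]))            # characteristic roots: [2. 1.]

    a = {1: 1, 2: 3}
    for n in range(1, 9):
        a[n + 2] = 3 * a[n + 1] - 2 * a[n]
    assert all(a[n] == 2**n - 1 for n in a)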
Example 22.5.2 Consider the gambler's ruin problem that was introduced in Example 22.1.1. The equation for the probability of the gambler's ruin at $n$ dollars is
\[ a_n = p a_{n+1} + q a_{n-1} \quad \text{subject to } a_0 = 1, \ a_N = 0. \]
We assume that $0 < p < 1$. With the substitution $a_n = r^n$ we obtain
\[ r = p r^2 + q. \]
The roots of this equation are
\[ r = \frac{1 \pm \sqrt{1 - 4pq}}{2p} = \frac{1 \pm \sqrt{1 - 4p(1-p)}}{2p} = \frac{1 \pm \sqrt{(1 - 2p)^2}}{2p} = \frac{1 \pm |1 - 2p|}{2p}. \]
We will consider the two cases $p \ne 1/2$ and $p = 1/2$.

$p \ne 1/2$. If $p < 1/2$, the roots are
\[ r = \frac{1 \pm (1 - 2p)}{2p}, \qquad r_1 = \frac{1 - p}{p} = \frac{q}{p}, \quad r_2 = 1. \]
If $p > 1/2$ the roots are
\[ r = \frac{1 \pm (2p - 1)}{2p}, \qquad r_1 = 1, \quad r_2 = \frac{-p + 1}{p} = \frac{q}{p}. \]
Thus the general solution for $p \ne 1/2$ is
\[ a_n = c_1 + c_2 \left( \frac{q}{p} \right)^n. \]
The boundary condition $a_0 = 1$ requires that $c_1 + c_2 = 1$. From the boundary condition $a_N = 0$ we have
\[ (1 - c_2) + c_2 \left( \frac{q}{p} \right)^N = 0, \qquad c_2 = \frac{-1}{-1 + (q/p)^N} = \frac{p^N}{p^N - q^N}. \]
Solving for $c_1$,
\[ c_1 = 1 - \frac{p^N}{p^N - q^N} = \frac{-q^N}{p^N - q^N}. \]
Thus we have
\[ a_n = \frac{-q^N}{p^N - q^N} + \frac{p^N}{p^N - q^N} \left( \frac{q}{p} \right)^n. \]

$p = 1/2$. In this case, the two roots of the polynomial are both $1$. The general solution is
\[ a_n = c_1 + c_2 n. \]
The left boundary condition demands that $c_1 = 1$. From the right boundary condition we obtain
\[ 1 + c_2 N = 0, \qquad c_2 = -\frac{1}{N}. \]
Thus the solution for this case is
\[ a_n = 1 - \frac{n}{N}. \]
As a check that this formula makes sense, we see that for $n = N/2$ the probability of ruin is $1 - \frac{N/2}{N} = \frac{1}{2}$.
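The closed forms for both cases can be checked against the original boundary value problem; a small sketch (the helper name is ours):

    def ruin_exact(n, N, p):
        """Closed form from Example 22.5.2."""
        if p == 0.5:
            return 1 - n / N
        q = 1 - p
        return (-q**N + p**N * (q / p)**n) / (p**N - q**N)

    p, N = 0.6, 10
    a = [ruin_exact(n, N, p) for n in range(N + 1)]
    assert abs(a[0] - 1) < 1e-12 and abs(a[N]) < 1e-12        # boundary conditions
    assert all(abs(a[n] - (p * a[n+1] + (1 - p) * a[n-1])) < 1e-12
               for n in range(1, N))                          # a_n = p a_{n+1} + q a_{n-1}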
22.6 Reduction of Order

Consider the difference equation
\[ (n+1)(n+2) a_{n+2} - 3(n+1) a_{n+1} + 2 a_n = 0 \quad \text{for } n \ge 0. \tag{22.1} \]
We see that one solution to this equation is $a_n = 1/n!$. Analogous to the reduction of order for differential equations, the substitution $a_n = b_n/n!$ will reduce the order of the difference equation.
\[ \frac{(n+1)(n+2) b_{n+2}}{(n+2)!} - \frac{3(n+1) b_{n+1}}{(n+1)!} + \frac{2 b_n}{n!} = 0 \]
\[ b_{n+2} - 3 b_{n+1} + 2 b_n = 0 \tag{22.2} \]
At first glance it appears that we have not reduced the order of the equation, but writing it in terms of discrete derivatives,
\[ D^2 b_n - D b_n = 0, \]
shows that we now have a first order difference equation for $D b_n$. The substitution $b_n = r^n$ in Equation 22.2 yields the algebraic equation
\[ r^2 - 3r + 2 = (r - 1)(r - 2) = 0. \]
Thus the solutions are $b_n = 1$ and $b_n = 2^n$. Only the $b_n = 2^n$ solution will give us another linearly independent solution for $a_n$. Thus the second solution for $a_n$ is $a_n = b_n/n! = 2^n/n!$. The general solution to Equation 22.1 is then
\[ a_n = c_1 \frac{1}{n!} + c_2 \frac{2^n}{n!}. \]

Result 22.6.1 Let $a_n = s_n$ be a homogeneous solution of a linear difference equation. The substitution $a_n = s_n b_n$ will yield a difference equation for $b_n$ that is of order one less than the equation for $a_n$.
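The general solution of Equation 22.1 can be verified termwise; a short check in exact rational arithmetic (the constants are chosen arbitrarily):

    from fractions import Fraction
    from math import factorial

    c1, c2 = Fraction(3), Fraction(-5)
    a = [(c1 + c2 * 2**n) / factorial(n) for n in range(12)]  # a_n = c1/n! + c2 2^n/n!
    for n in range(10):
        assert (n+1)*(n+2)*a[n+2] - 3*(n+1)*a[n+1] + 2*a[n] == 0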
22.7 Exercises

Exercise 22.1
Find a formula for the $n$th term in the Fibonacci sequence $1, 1, 2, 3, 5, 8, 13, \ldots$.
Hint, Solution

Exercise 22.2
Solve the difference equation
\[ a_{n+2} = \frac{2}{n} a_n, \qquad a_1 = a_2 = 1. \]
Hint, Solution
22.8 Hints

Hint 22.1
The difference equation corresponding to the Fibonacci sequence is
\[ a_{n+2} - a_{n+1} - a_n = 0, \qquad a_1 = a_2 = 1. \]

Hint 22.2
Consider this exercise as two first order difference equations; one for the even terms, one for the odd terms.
22.9 Solutions

Solution 22.1
We can describe the Fibonacci sequence with the difference equation
\[ a_{n+2} - a_{n+1} - a_n = 0, \qquad a_1 = a_2 = 1. \]
With the substitution $a_n = r^n$ we obtain the equation
\[ r^2 - r - 1 = 0. \]
This equation has the two distinct roots
\[ r_1 = \frac{1 + \sqrt{5}}{2}, \qquad r_2 = \frac{1 - \sqrt{5}}{2}. \]
Thus the general solution is
\[ a_n = c_1 \left( \frac{1 + \sqrt{5}}{2} \right)^n + c_2 \left( \frac{1 - \sqrt{5}}{2} \right)^n. \]
From the initial conditions we have
\[ c_1 r_1 + c_2 r_2 = 1, \qquad c_1 r_1^2 + c_2 r_2^2 = 1. \]
Solving for $c_2$ in the first equation,
\[ c_2 = \frac{1}{r_2}(1 - c_1 r_1). \]
We substitute this into the second equation.
\[ c_1 r_1^2 + \frac{1}{r_2}(1 - c_1 r_1) r_2^2 = 1 \]
\[ c_1 (r_1^2 - r_1 r_2) = 1 - r_2 \]
\[ c_1 = \frac{1 - r_2}{r_1^2 - r_1 r_2} = \frac{1 - \frac{1 - \sqrt{5}}{2}}{\frac{1 + \sqrt{5}}{2} \sqrt{5}} = \frac{\frac{1 + \sqrt{5}}{2}}{\frac{1 + \sqrt{5}}{2} \sqrt{5}} = \frac{1}{\sqrt{5}} \]
(here we used $r_1 - r_2 = \sqrt{5}$, so that $r_1^2 - r_1 r_2 = r_1 \sqrt{5}$). Substitute this result into the equation for $c_2$.
\[ c_2 = \frac{1}{r_2} \left( 1 - \frac{r_1}{\sqrt{5}} \right) = \frac{2}{1 - \sqrt{5}} \left( 1 - \frac{1 + \sqrt{5}}{2\sqrt{5}} \right) = -\frac{2}{1 - \sqrt{5}} \cdot \frac{1 - \sqrt{5}}{2\sqrt{5}} = -\frac{1}{\sqrt{5}} \]
Thus the $n$th term in the Fibonacci sequence has the formula
\[ a_n = \frac{1}{\sqrt{5}} \left( \frac{1 + \sqrt{5}}{2} \right)^n - \frac{1}{\sqrt{5}} \left( \frac{1 - \sqrt{5}}{2} \right)^n. \]
It is interesting to note that although the Fibonacci sequence is defined in terms of integers, one cannot express the formula for the $n$th element in terms of rational numbers.

Solution 22.2
We can consider
\[ a_{n+2} = \frac{2}{n} a_n, \qquad a_1 = a_2 = 1 \]
to be a first order difference equation. First consider the odd terms:
\[ a_1 = 1, \quad a_3 = \frac{2}{1}, \quad a_5 = \frac{2}{3} \frac{2}{1}, \quad \ldots, \quad a_n = \frac{2^{(n-1)/2}}{(n-2)(n-4) \cdots (1)}. \]
For the even terms,
\[ a_2 = 1, \quad a_4 = \frac{2}{2}, \quad a_6 = \frac{2}{4} \frac{2}{2}, \quad \ldots, \quad a_n = \frac{2^{(n-2)/2}}{(n-2)(n-4) \cdots (2)}. \]
Thus
\[ a_n = \begin{cases} \dfrac{2^{(n-1)/2}}{(n-2)(n-4)\cdots(1)} & \text{for odd } n, \\[2ex] \dfrac{2^{(n-2)/2}}{(n-2)(n-4)\cdots(2)} & \text{for even } n. \end{cases} \]
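Solution 22.1's formula can be confirmed in floating point; a minimal sketch (the helper name is ours):

    from math import sqrt, isclose

    def fib_closed(n):
        s5 = sqrt(5)
        return ((1 + s5) / 2)**n / s5 - ((1 - s5) / 2)**n / s5

    a, b = 1, 1
    for n in range(1, 15):
        assert isclose(fib_closed(n), a)
        a, b = b, a + b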
Chapter 23
Series Solutions of Differential Equations

Skill beats honesty any day. -?

23.1 Ordinary Points

Big O and Little o Notation. The notation $O(z^n)$ means "terms no bigger than $z^n$." This gives us a convenient shorthand for manipulating series. For example,
\[ \sin z = z - \frac{z^3}{6} + O(z^5), \qquad \frac{1}{1 - z} = 1 + O(z). \]
The notation $o(z^n)$ means "terms smaller than $z^n$." For example,
\[ \cos z = 1 + o(1), \qquad e^z = 1 + z + o(z). \]

Example 23.1.1 Consider the equation
\[ w''(z) - 3w'(z) + 2w(z) = 0. \]
The general solution to this constant coefficient equation is $w = c_1 e^z + c_2 e^{2z}$. The functions $e^z$ and $e^{2z}$ are analytic in the finite complex plane. Recall that a function is analytic at a point $z_0$ if and only if the function has a Taylor series about $z_0$ with a nonzero radius of convergence. If we substitute the Taylor series expansions about $z = 0$ of $e^z$ and $e^{2z}$ into the general solution, we obtain
\[ w = c_1 \sum_{n=0}^{\infty} \frac{z^n}{n!} + c_2 \sum_{n=0}^{\infty} \frac{2^n z^n}{n!}. \]
Thus we have a series solution of the differential equation.
Alternatively, we could try substituting a Taylor series into the differential equation and solving for the coefficients. Substituting $w = \sum_{n=0}^{\infty} a_n z^n$ into the differential equation yields
\[ \frac{d^2}{dz^2} \sum_{n=0}^{\infty} a_n z^n - 3 \frac{d}{dz} \sum_{n=0}^{\infty} a_n z^n + 2 \sum_{n=0}^{\infty} a_n z^n = 0 \]
\[ \sum_{n=2}^{\infty} n(n-1) a_n z^{n-2} - 3 \sum_{n=1}^{\infty} n a_n z^{n-1} + 2 \sum_{n=0}^{\infty} a_n z^n = 0 \]
\[ \sum_{n=0}^{\infty} \left[ (n+2)(n+1) a_{n+2} - 3(n+1) a_{n+1} + 2 a_n \right] z^n = 0. \]
Equating powers of $z$, we obtain the difference equation
\[ (n+2)(n+1) a_{n+2} - 3(n+1) a_{n+1} + 2 a_n = 0, \quad n \ge 0. \]
We see that $a_n = 1/n!$ is one solution since
\[ \frac{(n+2)(n+1)}{(n+2)!} - 3 \frac{n+1}{(n+1)!} + 2 \frac{1}{n!} = \frac{1 - 3 + 2}{n!} = 0. \]
We use reduction of order for difference equations to find the other solution. Substituting $a_n = b_n/n!$ into the difference equation yields
\[ (n+2)(n+1) \frac{b_{n+2}}{(n+2)!} - 3(n+1) \frac{b_{n+1}}{(n+1)!} + 2 \frac{b_n}{n!} = 0, \]
\[ b_{n+2} - 3 b_{n+1} + 2 b_n = 0. \]
At first glance it appears that we have not reduced the order of the difference equation. However, writing this equation in terms of discrete derivatives,
\[ D^2 b_n - D b_n = 0, \]
we see that this is a first order difference equation for $D b_n$. Since this is a constant coefficient difference equation we substitute $b_n = r^n$ into the equation to obtain an algebraic equation for $r$:
\[ r^2 - 3r + 2 = (r - 1)(r - 2) = 0. \]
Thus the two solutions are $b_n = 1^n b_0$ and $b_n = 2^n b_0$. Only $b_n = 2^n b_0$ will give us a second independent solution for $a_n$. Thus the two solutions for $a_n$ are
\[ a_n = \frac{a_0}{n!} \quad \text{and} \quad a_n = \frac{2^n a_0}{n!}. \]
Thus we can write the general solution to the differential equation as
\[ w = c_1 \sum_{n=0}^{\infty} \frac{z^n}{n!} + c_2 \sum_{n=0}^{\infty} \frac{2^n z^n}{n!}. \]
We recognize these two sums as the Taylor expansions of $e^z$ and $e^{2z}$. Thus we obtain the same result as we did solving the differential equation directly.

Of course it would be pretty silly to go through all the grunge involved in developing a series expansion of the solution in a problem like Example 23.1.1 since we can solve the problem exactly.
However, if we could not solve a differential equation, then having a Taylor series expansion of the solution about a point $z_0$ would be useful in determining the behavior of the solutions near that point.

For this method of substituting a Taylor series into the differential equation to be useful we have to know at what points the solutions are analytic. Let's say we were considering a second order differential equation whose solutions were
\[ w_1 = \frac{1}{z} \quad \text{and} \quad w_2 = \log z. \]
Trying to find a Taylor series expansion of the solutions about the point $z = 0$ would fail because the solutions are not analytic at $z = 0$. This brings us to two important questions.

1. Can we tell if the solutions to a linear differential equation are analytic at a point without knowing the solutions?

2. If there are Taylor series expansions of the solutions to a differential equation, what are the radii of convergence of the series?

In order to answer these questions, we will introduce the concept of an ordinary point. Consider the $n$th order linear homogeneous equation
\[ \frac{d^n w}{dz^n} + p_{n-1}(z) \frac{d^{n-1} w}{dz^{n-1}} + \cdots + p_1(z) \frac{dw}{dz} + p_0(z) w = 0. \]
If each of the coefficient functions $p_i(z)$ is analytic at $z = z_0$ then $z_0$ is an ordinary point of the differential equation.

For reasons of typography we will restrict our attention to second order equations and the point $z_0 = 0$ for a while. The generalization to an $n$th order equation will be apparent. Considering the point $z_0 = 0$ is only trivially more general, as we could introduce the transformation $z - z_0 \to z$ to move the point to the origin.

In the chapter on first order differential equations we showed that the solution is analytic at ordinary points. One would guess that this remains true for higher order equations. Consider the second order equation
\[ y'' + p(z) y' + q(z) y = 0, \]
where $p$ and $q$ are analytic at the origin,
\[ p(z) = \sum_{n=0}^{\infty} p_n z^n, \quad \text{and} \quad q(z) = \sum_{n=0}^{\infty} q_n z^n. \]
Assume that one of the solutions is not analytic at the origin and behaves like $z^\alpha$ at $z = 0$, where $\alpha \ne 0, 1, 2, \ldots$. That is, we can approximate the solution with $w(z) = z^\alpha + o(z^\alpha)$. Let's substitute $w = z^\alpha + o(z^\alpha)$ into the differential equation and look at the lowest power of $z$ in each of the terms.
\[ \alpha(\alpha - 1) z^{\alpha-2} + o(z^{\alpha-2}) + \left( \alpha z^{\alpha-1} + o(z^{\alpha-1}) \right) \sum_{n=0}^{\infty} p_n z^n + \left( z^\alpha + o(z^\alpha) \right) \sum_{n=0}^{\infty} q_n z^n = 0. \]
We see that the solution could not possibly behave like $z^\alpha$, $\alpha \ne 0, 1, 2, \ldots$, because there is no term on the left to cancel out the $z^{\alpha-2}$ term. The terms on the left side could not add to zero.

You could also check that a solution could not possibly behave like $\log z$ at the origin. Though we will not prove it, if $z_0$ is an ordinary point of a homogeneous differential equation, then all the solutions are analytic at the point $z_0$. Since the solution is analytic at $z_0$ we can expand it in a Taylor series.
Now we are prepared to answer our second question. From complex variables, we know that the radius of convergence of the Taylor series expansion of a function is the distance to the nearest singularity of that function. Since the solutions to a differential equation are analytic at ordinary points of the equation, the series expansion about an ordinary point will have a radius of convergence at least as large as the distance to the nearest singularity of the coefficient functions.

Example 23.1.2 Consider the equation
\[ w'' + \frac{1}{\cos z} w' + z^2 w = 0. \]
If we expand the solution to the differential equation in Taylor series about $z = 0$, the radius of convergence will be at least $\pi/2$. This is because the coefficient functions are analytic at the origin, and the nearest singularities of $1/\cos z$ are at $z = \pm\pi/2$.

23.1.1 Taylor Series Expansion for a Second Order Differential Equation

Consider the differential equation
\[ w'' + p(z) w' + q(z) w = 0, \]
where $p(z)$ and $q(z)$ are analytic in some neighborhood of the origin,
\[ p(z) = \sum_{n=0}^{\infty} p_n z^n \quad \text{and} \quad q(z) = \sum_{n=0}^{\infty} q_n z^n. \]
We substitute a Taylor series and its derivatives,
\[ w = \sum_{n=0}^{\infty} a_n z^n, \qquad w' = \sum_{n=1}^{\infty} n a_n z^{n-1} = \sum_{n=0}^{\infty} (n+1) a_{n+1} z^n, \qquad w'' = \sum_{n=2}^{\infty} n(n-1) a_n z^{n-2} = \sum_{n=0}^{\infty} (n+2)(n+1) a_{n+2} z^n, \]
into the differential equation to obtain
\[ \sum_{n=0}^{\infty} (n+2)(n+1) a_{n+2} z^n + \left( \sum_{n=0}^{\infty} p_n z^n \right) \left( \sum_{n=0}^{\infty} (n+1) a_{n+1} z^n \right) + \left( \sum_{n=0}^{\infty} q_n z^n \right) \left( \sum_{n=0}^{\infty} a_n z^n \right) = 0 \]
\[ \sum_{n=0}^{\infty} (n+2)(n+1) a_{n+2} z^n + \sum_{n=0}^{\infty} \left( \sum_{m=0}^{n} (m+1) a_{m+1} p_{n-m} \right) z^n + \sum_{n=0}^{\infty} \left( \sum_{m=0}^{n} a_m q_{n-m} \right) z^n = 0 \]
\[ \sum_{n=0}^{\infty} \left[ (n+2)(n+1) a_{n+2} + \sum_{m=0}^{n} \left( (m+1) a_{m+1} p_{n-m} + a_m q_{n-m} \right) \right] z^n = 0. \]
Equating coefficients of powers of $z$,
\[ (n+2)(n+1) a_{n+2} + \sum_{m=0}^{n} \left( (m+1) a_{m+1} p_{n-m} + a_m q_{n-m} \right) = 0 \quad \text{for } n \ge 0. \]
[Figure 23.1: Plot of the numerical solution and the first three terms in the Taylor series.]

We see that $a_0$ and $a_1$ are arbitrary and the rest of the coefficients are determined by the recurrence relation
\[ a_{n+2} = -\frac{1}{(n+1)(n+2)} \sum_{m=0}^{n} \left( (m+1) a_{m+1} p_{n-m} + a_m q_{n-m} \right) \quad \text{for } n \ge 0. \]

Example 23.1.3 Consider the problem
\[ y'' + \frac{1}{\cos x} y' + e^x y = 0, \qquad y(0) = y'(0) = 1. \]
Let's expand the solution in a Taylor series about the origin,
\[ y(x) = \sum_{n=0}^{\infty} a_n x^n. \]
Since $y(0) = a_0$ and $y'(0) = a_1$, we see that $a_0 = a_1 = 1$. The Taylor expansions of the coefficient functions are
\[ \frac{1}{\cos x} = 1 + O(x), \quad \text{and} \quad e^x = 1 + O(x). \]
Now we can calculate $a_2$ from the recurrence relation.
\[ a_2 = -\frac{1}{1 \cdot 2} \sum_{m=0}^{0} \left( (m+1) a_{m+1} p_{0-m} + a_m q_{0-m} \right) = -\frac{1}{2} (1 \cdot 1 \cdot 1 + 1 \cdot 1) = -1 \]
Thus the solution to the problem is
\[ y(x) = 1 + x - x^2 + O(x^3). \]
In Figure 23.1 the numerical solution is plotted in a solid line and the sum of the first three terms of the Taylor series is plotted in a dashed line.
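The computation in Example 23.1.3 mechanizes nicely. The sketch below uses sympy (an outside tool, not part of the text) to expand the coefficient functions and then drives the recurrence relation; it reproduces $a_2 = -1$.

    import sympy as sp

    x = sp.symbols('x')
    K = 6
    p = sp.series(1/sp.cos(x), x, 0, K).removeO()   # p(x) = 1/cos x
    q = sp.series(sp.exp(x), x, 0, K).removeO()     # q(x) = e^x
    pc = [p.coeff(x, k) for k in range(K)]
    qc = [q.coeff(x, k) for k in range(K)]

    a = [sp.Integer(1), sp.Integer(1)]              # a_0 = a_1 = 1
    for n in range(K - 2):
        s = sum((m + 1)*a[m + 1]*pc[n - m] + a[m]*qc[n - m] for m in range(n + 1))
        a.append(-s / ((n + 1)*(n + 2)))
    print(a[:4])                                    # [1, 1, -1, 0]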
The general recurrence relation for the $a_n$'s is useful if you only want to calculate the first few terms in the Taylor expansion. However, for many problems substituting the Taylor series for the coefficient functions into the differential equation will enable you to find a simpler form of the solution. We consider the following example to illustrate this point.

Example 23.1.4 Develop a series expansion of the solution to the initial value problem
\[ w'' + \frac{1}{z^2 + 1} w = 0, \qquad w(0) = 1, \quad w'(0) = 0. \]

Solution using the General Recurrence Relation. The coefficient function has the Taylor expansion
\[ \frac{1}{1 + z^2} = \sum_{n=0}^{\infty} (-1)^n z^{2n}. \]
From the initial condition we obtain $a_0 = 1$ and $a_1 = 0$. Thus we see that the solution is $w = \sum_{n=0}^{\infty} a_n z^n$, where
\[ a_{n+2} = -\frac{1}{(n+1)(n+2)} \sum_{m=0}^{n} a_m q_{n-m} \quad \text{and} \quad q_n = \begin{cases} 0 & \text{for odd } n, \\ (-1)^{n/2} & \text{for even } n. \end{cases} \]
Although this formula is fine if you only want to calculate the first few $a_n$'s, it is just a tad unwieldy to work with. Let's see if we can get a better expression for the solution.

Substitute the Taylor Series into the Differential Equation. Substituting a Taylor series for $w$ yields
\[ \frac{d^2}{dz^2} \sum_{n=0}^{\infty} a_n z^n + \frac{1}{z^2 + 1} \sum_{n=0}^{\infty} a_n z^n = 0. \]
Note that the algebra will be easier if we multiply by $z^2 + 1$. The polynomial $z^2 + 1$ has only two terms, but the Taylor series for $1/(z^2 + 1)$ has an infinite number of terms.
\[ (z^2 + 1) \frac{d^2}{dz^2} \sum_{n=0}^{\infty} a_n z^n + \sum_{n=0}^{\infty} a_n z^n = 0 \]
\[ \sum_{n=2}^{\infty} n(n-1) a_n z^n + \sum_{n=2}^{\infty} n(n-1) a_n z^{n-2} + \sum_{n=0}^{\infty} a_n z^n = 0 \]
\[ \sum_{n=0}^{\infty} n(n-1) a_n z^n + \sum_{n=0}^{\infty} (n+2)(n+1) a_{n+2} z^n + \sum_{n=0}^{\infty} a_n z^n = 0 \]
\[ \sum_{n=0}^{\infty} \left[ (n+2)(n+1) a_{n+2} + \left( n(n-1) + 1 \right) a_n \right] z^n = 0 \]
Equating powers of $z$ gives us the difference equation
\[ a_{n+2} = -\frac{n^2 - n + 1}{(n+2)(n+1)} a_n, \quad \text{for } n \ge 0. \]
From the initial conditions we see that $a_0 = 1$ and $a_1 = 0$. All of the odd terms in the series will be zero. For the even terms, it is easier to reformulate the problem with the change of variables $b_n = a_{2n}$. In terms of $b_n$ the difference equation is
\[ b_{n+1} = -\frac{(2n)^2 - 2n + 1}{(2n+2)(2n+1)} b_n, \qquad b_0 = 1. \]
[Figure 23.2: Plot of the solution and approximations.]

This is a first order difference equation with the solution
\[ b_n = \prod_{j=0}^{n-1} \left( -\frac{4j^2 - 2j + 1}{(2j+2)(2j+1)} \right). \]
Thus we have that
\[ a_n = \begin{cases} \displaystyle\prod_{j=0}^{n/2-1} \left( -\frac{4j^2 - 2j + 1}{(2j+2)(2j+1)} \right) & \text{for even } n, \\ 0 & \text{for odd } n. \end{cases} \]
Note that the nearest singularities of $1/(z^2 + 1)$ in the complex plane are at $z = \pm i$. Thus the radius of convergence must be at least 1. Applying the ratio test, the series converges for values of $|z|$ such that
\[ \lim_{n \to \infty} \left| \frac{a_{n+2} z^{n+2}}{a_n z^n} \right| < 1, \qquad \lim_{n \to \infty} \left| -\frac{n^2 - n + 1}{(n+2)(n+1)} \right| |z|^2 < 1, \qquad |z|^2 < 1. \]
The radius of convergence is 1. The first few terms in the Taylor expansion are
\[ w = 1 - \frac{1}{2} z^2 + \frac{1}{8} z^4 - \frac{13}{240} z^6 + \cdots. \]
In Figure 23.2 the plot of the first two nonzero terms is shown in a short dashed line, the plot of the first four nonzero terms is shown in a long dashed line, and the numerical solution is shown in a solid line.

In general, if the coefficient functions are rational functions, that is, they are fractions of polynomials, multiplying the equations by the quotient will reduce the algebra involved in finding the series solution.
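Returning to Example 23.1.4 for a moment: the product formula for $b_n$ is easy to evaluate exactly, and it reproduces the series coefficients listed above. A minimal sketch:

    from fractions import Fraction

    b = [Fraction(1)]                          # b_n = a_{2n}, b_0 = 1
    for n in range(3):
        b.append(-Fraction((2*n)**2 - 2*n + 1,
                           (2*n + 2)*(2*n + 1)) * b[n])
    print(b)                                   # [1, -1/2, 1/8, -13/240]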
Example 23.1.5 If we were going to find the Taylor series expansion about $z = 0$ of the solution to
\[ w'' + \frac{z}{1 + z} w' + \frac{1}{1 - z^2} w = 0, \]
we would first want to multiply the equation by $1 - z^2$ to obtain
\[ (1 - z^2) w'' + z(1 - z) w' + w = 0. \]

Example 23.1.6 Find the series expansions about $z = 0$ of the fundamental set of solutions for
\[ w'' + z^2 w = 0. \]
Recall that the fundamental set of solutions $\{w_1, w_2\}$ satisfies
\[ w_1(0) = 1, \quad w_1'(0) = 0, \qquad w_2(0) = 0, \quad w_2'(0) = 1. \]
Thus if
\[ w_1 = \sum_{n=0}^{\infty} a_n z^n \quad \text{and} \quad w_2 = \sum_{n=0}^{\infty} b_n z^n, \]
then the coefficients must satisfy
\[ a_0 = 1, \quad a_1 = 0, \qquad \text{and} \qquad b_0 = 0, \quad b_1 = 1. \]
Substituting the Taylor expansion $w = \sum_{n=0}^{\infty} c_n z^n$ into the differential equation,
\[ \sum_{n=2}^{\infty} n(n-1) c_n z^{n-2} + \sum_{n=0}^{\infty} c_n z^{n+2} = 0 \]
\[ \sum_{n=0}^{\infty} (n+2)(n+1) c_{n+2} z^n + \sum_{n=2}^{\infty} c_{n-2} z^n = 0 \]
\[ 2 c_2 + 6 c_3 z + \sum_{n=2}^{\infty} \left[ (n+2)(n+1) c_{n+2} + c_{n-2} \right] z^n = 0 \]
Equating coefficients of powers of $z$,
\[ z^0: \ c_2 = 0, \qquad z^1: \ c_3 = 0, \]
\[ z^n: \ (n+2)(n+1) c_{n+2} + c_{n-2} = 0, \ \text{for } n \ge 2, \qquad c_{n+4} = -\frac{c_n}{(n+4)(n+3)}. \]
For our first solution we have the difference equation
\[ a_0 = 1, \quad a_1 = 0, \quad a_2 = 0, \quad a_3 = 0, \qquad a_{n+4} = -\frac{a_n}{(n+4)(n+3)}. \]
For our second solution,
\[ b_0 = 0, \quad b_1 = 1, \quad b_2 = 0, \quad b_3 = 0, \qquad b_{n+4} = -\frac{b_n}{(n+4)(n+3)}. \]
The first few terms in the fundamental set of solutions are
\[ w_1 = 1 - \frac{1}{12} z^4 + \frac{1}{672} z^8 - \cdots, \qquad w_2 = z - \frac{1}{20} z^5 + \frac{1}{1440} z^9 - \cdots. \]
In Figure 23.3 the five term approximation is graphed in a coarse dashed line, the ten term approximation is graphed in a fine dashed line, and the numerical solution of $w_1$ is graphed in a solid line. The same is done for $w_2$.
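The recurrence $c_{n+4} = -c_n/((n+4)(n+3))$ generates both fundamental solutions; a short sketch (the helper name is ours):

    from fractions import Fraction

    def series_coeffs(c0, c1, count):
        c = [Fraction(c0), Fraction(c1), Fraction(0), Fraction(0)]
        for n in range(count - 4):
            c.append(-c[n] / ((n + 4) * (n + 3)))
        return c

    print(series_coeffs(1, 0, 9))    # w1: ..., -1/12 at z^4, 1/672 at z^8
    print(series_coeffs(0, 1, 10))   # w2: ..., -1/20 at z^5, 1/1440 at z^9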
[Figure 23.3: The graph of approximations and numerical solution of $w_1$ and $w_2$.]

Result 23.1.1 Consider the $n$th order linear homogeneous equation
\[ \frac{d^n w}{dz^n} + p_{n-1}(z) \frac{d^{n-1} w}{dz^{n-1}} + \cdots + p_1(z) \frac{dw}{dz} + p_0(z) w = 0. \]
If each of the coefficient functions $p_i(z)$ is analytic at $z = z_0$ then $z_0$ is an ordinary point of the differential equation. The solution is analytic in some region containing $z_0$ and can be expanded in a Taylor series. The radius of convergence of the series will be at least the distance to the nearest singularity of the coefficient functions in the complex plane.

23.2 Regular Singular Points of Second Order Equations

Consider the differential equation
\[ w'' + \frac{p(z)}{z - z_0} w' + \frac{q(z)}{(z - z_0)^2} w = 0. \]
If $z = z_0$ is not an ordinary point but both $p(z)$ and $q(z)$ are analytic at $z = z_0$ then $z_0$ is a regular singular point of the differential equation. The following equations have a regular singular point at $z = 0$.

• $w'' + \dfrac{1}{z} w' + z^2 w = 0$

• $w'' + \dfrac{1}{\sin z} w' - w = 0$

• $w'' - z w' + \dfrac{1}{z \sin z} w = 0$

Concerning regular singular points of second order linear equations there is good news and bad news.

The Good News. We will find that with the use of the Frobenius method we can always find series expansions of two linearly independent solutions at a regular singular point. We will illustrate this theory with several examples.
The Bad News. Instead of a tidy little theory like we have for ordinary points, the solutions can be of several different forms. Also, for some of the problems the algebra can get pretty ugly.

Example 23.2.1 Consider the equation
\[ w'' + \frac{3(1 + z)}{16 z^2} w = 0. \]
We wish to find series solutions about the point $z = 0$. First we try a Taylor series $w = \sum_{n=0}^{\infty} a_n z^n$. Substituting this into the differential equation,
\[ z^2 \sum_{n=2}^{\infty} n(n-1) a_n z^{n-2} + \frac{3}{16}(1 + z) \sum_{n=0}^{\infty} a_n z^n = 0 \]
\[ \sum_{n=0}^{\infty} n(n-1) a_n z^n + \frac{3}{16} \sum_{n=0}^{\infty} a_n z^n + \frac{3}{16} \sum_{n=1}^{\infty} a_{n-1} z^n = 0. \]
Equating powers of $z$,
\[ z^0: \ \frac{3}{16} a_0 = 0 \ \Rightarrow \ a_0 = 0, \qquad z^n: \ \left( n(n-1) + \frac{3}{16} \right) a_n + \frac{3}{16} a_{n-1} = 0. \]
Since $a_0 = 0$, this difference equation has the solution $a_n = 0$ for all $n$. Thus we have obtained only the trivial solution to the differential equation. We must try an expansion of a more general form. We recall that for regular singular points of first order equations we can always find a solution in the form of a Frobenius series $w = z^\alpha \sum_{n=0}^{\infty} a_n z^n$, $a_0 \ne 0$. We substitute this series into the differential equation.
\[ \sum_{n=0}^{\infty} \left[ \alpha(\alpha-1) + 2\alpha n + n(n-1) \right] a_n z^n + \frac{3}{16} \sum_{n=0}^{\infty} a_n z^n + \frac{3}{16} \sum_{n=1}^{\infty} a_{n-1} z^n = 0 \]
Equating the $z^0$ term to zero yields the equation
\[ \left( \alpha(\alpha - 1) + \frac{3}{16} \right) a_0 = 0. \]
Since we have assumed that $a_0 \ne 0$, the polynomial in $\alpha$ must be zero. The two roots of the polynomial are
\[ \alpha_1 = \frac{1 + \sqrt{1 - 3/4}}{2} = \frac{3}{4}, \qquad \alpha_2 = \frac{1 - \sqrt{1 - 3/4}}{2} = \frac{1}{4}. \]
Thus our two series solutions will be of the form
\[ w_1 = z^{3/4} \sum_{n=0}^{\infty} a_n z^n, \qquad w_2 = z^{1/4} \sum_{n=0}^{\infty} b_n z^n. \]
Substituting the first series into the differential equation (with $\alpha = 3/4$ the $z^0$ coefficient vanishes),
\[ \sum_{n=1}^{\infty} n \left( n + \frac{1}{2} \right) a_n z^n + \frac{3}{16} \sum_{n=1}^{\infty} a_{n-1} z^n = 0. \]
Equating powers of $z$, we see that $a_0$ is arbitrary and
\[ a_n = -\frac{3}{16\, n(n + 1/2)} a_{n-1} = -\frac{3}{8 n (2n+1)} a_{n-1} \quad \text{for } n \ge 1. \]
This difference equation has the solution
\[ a_n = a_0 \prod_{j=1}^{n} \left( -\frac{3}{8 j (2j+1)} \right) = a_0 \left( -\frac{3}{4} \right)^n \frac{1}{(2n+1)!} \quad \text{for } n \ge 1. \]
Substituting the second series into the differential equation gives, by the same steps with $\alpha = 1/4$,
\[ \sum_{n=1}^{\infty} n \left( n - \frac{1}{2} \right) b_n z^n + \frac{3}{16} \sum_{n=1}^{\infty} b_{n-1} z^n = 0, \]
so that
\[ b_n = -\frac{3}{8 n (2n-1)} b_{n-1}, \qquad b_n = b_0 \left( -\frac{3}{4} \right)^n \frac{1}{(2n)!}. \]
Thus we can write the general solution to the differential equation as
\[ w = c_1 z^{3/4} \sum_{n=0}^{\infty} \left( -\frac{3}{4} \right)^n \frac{z^n}{(2n+1)!} + c_2 z^{1/4} \sum_{n=0}^{\infty} \left( -\frac{3}{4} \right)^n \frac{z^n}{(2n)!}. \]
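The coefficients just derived can be checked against the raw difference equation $16(n + 3/4)(n - 1/4) a_n + 3 a_n + 3 a_{n-1} = 0$; a sketch in exact arithmetic (the helper name is ours):

    from fractions import Fraction
    from math import factorial

    def a(n):  # coefficients of the alpha = 3/4 series
        return Fraction(-3, 4)**n / factorial(2*n + 1)

    for n in range(1, 8):
        lhs = (16*(n + Fraction(3, 4))*(n - Fraction(1, 4)) + 3)*a(n) + 3*a(n - 1)
        assert lhs == 0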
23.2.1 Indicial Equation

Now let's consider the general equation for a regular singular point at $z = 0$,
\[ w'' + \frac{p(z)}{z} w' + \frac{q(z)}{z^2} w = 0. \]
Since $p(z)$ and $q(z)$ are analytic at $z = 0$ we can expand them in Taylor series,
\[ p(z) = \sum_{n=0}^{\infty} p_n z^n, \qquad q(z) = \sum_{n=0}^{\infty} q_n z^n. \]
Substituting a Frobenius series
\[ w = z^\alpha \sum_{n=0}^{\infty} a_n z^n, \qquad a_0 \ne 0, \]
and the Taylor series for $p(z)$ and $q(z)$ into the differential equation yields
\[ \sum_{n=0}^{\infty} (\alpha+n)(\alpha+n-1) a_n z^n + \left( \sum_{n=0}^{\infty} p_n z^n \right) \left( \sum_{n=0}^{\infty} (\alpha+n) a_n z^n \right) + \left( \sum_{n=0}^{\infty} q_n z^n \right) \left( \sum_{n=0}^{\infty} a_n z^n \right) = 0 \]
\[ \sum_{n=0}^{\infty} \left[ (\alpha+n)^2 + (p_0 - 1)(\alpha+n) + q_0 \right] a_n z^n + \sum_{n=1}^{\infty} \left( \sum_{j=0}^{n-1} \left[ (\alpha+j) p_{n-j} + q_{n-j} \right] a_j \right) z^n = 0. \]
Equating powers of $z$,
\[ z^0: \quad \left[ \alpha^2 + (p_0 - 1)\alpha + q_0 \right] a_0 = 0, \]
\[ z^n: \quad \left[ (\alpha+n)^2 + (p_0 - 1)(\alpha+n) + q_0 \right] a_n = -\sum_{j=0}^{n-1} \left[ (\alpha+j) p_{n-j} + q_{n-j} \right] a_j. \]
Let
\[ I(\alpha) = \alpha^2 + (p_0 - 1)\alpha + q_0 = 0. \]
This is known as the indicial equation. The indicial equation gives us the form of the solutions. The equation for $a_0$ is $I(\alpha) a_0 = 0$. Since we assumed that $a_0$ is nonzero, $I(\alpha) = 0$. Let the two roots of $I(\alpha)$ be $\alpha_1$ and $\alpha_2$ where $\Re(\alpha_1) \ge \Re(\alpha_2)$. Rewriting the difference equation for $a_n(\alpha)$,
\[ I(\alpha + n) a_n(\alpha) = -\sum_{j=0}^{n-1} \left[ (\alpha+j) p_{n-j} + q_{n-j} \right] a_j(\alpha) \quad \text{for } n \ge 1. \tag{23.1} \]
If the roots are distinct and do not differ by an integer then we can use Equation 23.1 to solve for $a_n(\alpha_1)$ and $a_n(\alpha_2)$, which will give us the two solutions
\[ w_1 = z^{\alpha_1} \sum_{n=0}^{\infty} a_n(\alpha_1) z^n, \quad \text{and} \quad w_2 = z^{\alpha_2} \sum_{n=0}^{\infty} a_n(\alpha_2) z^n. \]
If the roots are not distinct, $\alpha_1 = \alpha_2$, we will only have one solution and will have to generate another. If the roots differ by an integer, $\alpha_1 - \alpha_2 = N$, there is one solution corresponding to $\alpha_1$, but when we try to solve Equation 23.1 for $a_n(\alpha_2)$, we will encounter the equation
\[ I(\alpha_2 + N) a_N(\alpha_2) = 0 \cdot a_N(\alpha_2) = -\sum_{j=0}^{N-1} \left[ (\alpha_2 + j) p_{N-j} + q_{N-j} \right] a_j(\alpha_2). \]
If the right side of the equation is nonzero, then $a_N(\alpha_2)$ is undefined. On the other hand, if the right side is zero then $a_N(\alpha_2)$ is arbitrary. The rest of this section is devoted to considering the cases $\alpha_1 = \alpha_2$ and $\alpha_1 - \alpha_2 = N$.
23.2.2 The Case: Double Root

Consider a second order equation $L[w] = 0$ with a regular singular point at $z = 0$. Suppose the indicial equation has a double root,
\[ I(\alpha) = (\alpha - \alpha_1)^2 = 0. \]
One solution has the form
\[ w_1 = z^{\alpha_1} \sum_{n=0}^{\infty} a_n z^n. \]
In order to find the second solution, we will differentiate with respect to the parameter $\alpha$. Let $a_n(\alpha)$ satisfy Equation 23.1. Substituting the Frobenius expansion into the differential equation,
\[ L\left[ z^\alpha \sum_{n=0}^{\infty} a_n(\alpha) z^n \right] = 0. \]
Setting $\alpha = \alpha_1$ will make the left hand side of the equation zero. Differentiating this equation with respect to $\alpha$,
\[ \frac{\partial}{\partial \alpha} L\left[ z^\alpha \sum_{n=0}^{\infty} a_n(\alpha) z^n \right] = 0. \]
Interchanging the order of differentiation,
\[ L\left[ \log z \, z^\alpha \sum_{n=0}^{\infty} a_n(\alpha) z^n + z^\alpha \sum_{n=0}^{\infty} \frac{d a_n(\alpha)}{d\alpha} z^n \right] = 0. \]
Since setting $\alpha = \alpha_1$ will make the left hand side of this equation zero, the second linearly independent solution is
\[ w_2 = \log z \, z^{\alpha_1} \sum_{n=0}^{\infty} a_n(\alpha_1) z^n + z^{\alpha_1} \sum_{n=0}^{\infty} \left. \frac{d a_n(\alpha)}{d\alpha} \right|_{\alpha = \alpha_1} z^n, \]
\[ w_2 = w_1 \log z + z^{\alpha_1} \sum_{n=0}^{\infty} a_n'(\alpha_1) z^n. \]

Example 23.2.2 Consider the differential equation
\[ w'' + \frac{1 + z}{4 z^2} w = 0. \]
There is a regular singular point at $z = 0$. The indicial equation is
\[ \alpha(\alpha - 1) + \frac{1}{4} = \left( \alpha - \frac{1}{2} \right)^2 = 0. \]
One solution will have the form
\[ w_1 = z^{1/2} \sum_{n=0}^{\infty} a_n z^n, \qquad a_0 \ne 0. \]
Substituting the Frobenius expansion $z^\alpha \sum_{n=0}^{\infty} a_n(\alpha) z^n$ into the differential equation, written as
\[ 4 z^2 w'' + (1 + z) w = 0, \]
yields
\[ 4 \sum_{n=0}^{\infty} \left[ \alpha(\alpha-1) + 2\alpha n + n(n-1) \right] a_n(\alpha) z^{n+\alpha} + \sum_{n=0}^{\infty} a_n(\alpha) z^{n+\alpha} + \sum_{n=0}^{\infty} a_n(\alpha) z^{n+\alpha+1} = 0. \]
Dividing by $z^\alpha$ and adjusting the summation indices,
\[ \left[ 4\alpha(\alpha-1) + 1 \right] a_0 + \sum_{n=1}^{\infty} \left( \left[ 4(n+\alpha)(n+\alpha-1) + 1 \right] a_n(\alpha) + a_{n-1}(\alpha) \right) z^n = 0. \]
Equating the coefficient of $z^0$ to zero yields $I(\alpha) a_0 = 0$. Equating the coefficients of $z^n$ to zero yields the difference equation
\[ \left[ 4(n+\alpha)(n+\alpha-1) + 1 \right] a_n(\alpha) + a_{n-1}(\alpha) = 0, \qquad a_n(\alpha) = -\frac{a_{n-1}(\alpha)}{(2n + 2\alpha - 1)^2}. \]
Thus
\[ a_n(\alpha) = \frac{(-1)^n a_0}{\prod_{j=1}^{n} (2j + 2\alpha - 1)^2}. \]
Setting $\alpha = 1/2$, the coefficients for the first solution are
\[ a_n(1/2) = \frac{(-1)^n a_0}{4^n (n!)^2}, \]
so that (taking $a_0 = 1$)
\[ w_1 = z^{1/2} \sum_{n=0}^{\infty} \frac{(-1)^n z^n}{4^n (n!)^2}. \]
The second solution has the form
\[ w_2 = w_1 \log z + z^{1/2} \sum_{n=0}^{\infty} a_n'(1/2) z^n. \]
Differentiating the $a_n(\alpha)$ logarithmically,
\[ \frac{a_n'(\alpha)}{a_n(\alpha)} = -\sum_{j=1}^{n} \frac{4}{2j + 2\alpha - 1}, \qquad a_n'(1/2) = -2 H_n \, a_n(1/2), \qquad H_n = \sum_{k=1}^{n} \frac{1}{k}. \]
Thus the second solution is
\[ w_2 = w_1 \log z + z^{1/2} \sum_{n=1}^{\infty} \frac{2 (-1)^{n+1} H_n z^n}{4^n (n!)^2}. \]
The first few terms in the general solution are
\[ (c_1 + c_2 \log z) \left( z^{1/2} - \frac{1}{4} z^{3/2} + \frac{1}{64} z^{5/2} - \cdots \right) + c_2 \left( \frac{1}{2} z^{3/2} - \frac{3}{64} z^{5/2} + \cdots \right). \]

23.2.3 The Case: Roots Differ by an Integer

Consider the case in which the roots of the indicial equation $\alpha_1$ and $\alpha_2$ differ by an integer, $\alpha_1 - \alpha_2 = N$. Recall the equation that determines $a_n(\alpha)$,
\[ I(\alpha + n) a_n = \left[ (\alpha+n)^2 + (p_0 - 1)(\alpha+n) + q_0 \right] a_n = -\sum_{j=0}^{n-1} \left[ (\alpha+j) p_{n-j} + q_{n-j} \right] a_j. \]
When $\alpha = \alpha_2$ the equation for $a_N$ is
\[ I(\alpha_2 + N) a_N(\alpha_2) = 0 \cdot a_N(\alpha_2) = -\sum_{j=0}^{N-1} \left[ (\alpha_2 + j) p_{N-j} + q_{N-j} \right] a_j. \]
If the right hand side of this equation is zero, then $a_N$ is arbitrary. There will be two solutions of the Frobenius form,
\[ w_1 = z^{\alpha_1} \sum_{n=0}^{\infty} a_n(\alpha_1) z^n \quad \text{and} \quad w_2 = z^{\alpha_2} \sum_{n=0}^{\infty} a_n(\alpha_2) z^n. \]
If the right hand side of the equation is nonzero then $a_N(\alpha_2)$ will be undefined. We will have to generate the second solution. Let
\[ w(z, \alpha) = z^\alpha \sum_{n=0}^{\infty} a_n(\alpha) z^n, \]
where $a_n(\alpha)$ satisfies the recurrence formula. Substituting this series into the differential equation yields
\[ L[w(z, \alpha)] = 0. \]
We will multiply by $(\alpha - \alpha_2)$, differentiate this equation with respect to $\alpha$ and then set $\alpha = \alpha_2$. This will generate a linearly independent solution.
\[ \frac{\partial}{\partial \alpha} L[(\alpha - \alpha_2) w(z, \alpha)] = L\left[ \frac{\partial}{\partial \alpha} (\alpha - \alpha_2) w(z, \alpha) \right] = L\left[ \frac{\partial}{\partial \alpha} (\alpha - \alpha_2) z^\alpha \sum_{n=0}^{\infty} a_n(\alpha) z^n \right] \]
\[ = L\left[ \log z \, z^\alpha \sum_{n=0}^{\infty} (\alpha - \alpha_2) a_n(\alpha) z^n + z^\alpha \sum_{n=0}^{\infty} \frac{d}{d\alpha} \left[ (\alpha - \alpha_2) a_n(\alpha) \right] z^n \right] \]
Setting $\alpha = \alpha_2$ will make this expression zero; thus
\[ \log z \, z^{\alpha_2} \sum_{n=0}^{\infty} \lim_{\alpha \to \alpha_2} \left\{ (\alpha - \alpha_2) a_n(\alpha) \right\} z^n + z^{\alpha_2} \sum_{n=0}^{\infty} \lim_{\alpha \to \alpha_2} \left\{ \frac{d}{d\alpha} \left[ (\alpha - \alpha_2) a_n(\alpha) \right] \right\} z^n \]
is a solution. Now let's look at the first term in this solution,
\[ \log z \, z^{\alpha_2} \sum_{n=0}^{\infty} \lim_{\alpha \to \alpha_2} \left\{ (\alpha - \alpha_2) a_n(\alpha) \right\} z^n. \]
The first $N$ terms in the sum will be zero. That is because $a_0, \ldots, a_{N-1}$ are finite, so multiplying by $(\alpha - \alpha_2)$ and taking the limit as $\alpha \to \alpha_2$ will make the coefficients vanish. The equation for $a_N(\alpha)$ is
\[ I(\alpha + N) a_N(\alpha) = -\sum_{j=0}^{N-1} \left[ (\alpha + j) p_{N-j} + q_{N-j} \right] a_j(\alpha). \]
Thus the coefficient of the $N$th term is
\[ \lim_{\alpha \to \alpha_2} (\alpha - \alpha_2) a_N(\alpha) = -\lim_{\alpha \to \alpha_2} \left[ \frac{\alpha - \alpha_2}{(\alpha + N - \alpha_1)(\alpha + N - \alpha_2)} \sum_{j=0}^{N-1} \left[ (\alpha + j) p_{N-j} + q_{N-j} \right] a_j(\alpha) \right]. \]
Since $\alpha_1 = \alpha_2 + N$, $\lim_{\alpha \to \alpha_2} \frac{\alpha - \alpha_2}{\alpha + N - \alpha_1} = 1$, so this equals
\[ -\frac{1}{\alpha_1 - \alpha_2} \sum_{j=0}^{N-1} \left[ (\alpha_2 + j) p_{N-j} + q_{N-j} \right] a_j(\alpha_2). \]
Using this you can show that the first term in the solution can be written
\[ d_{-1} \log z \, w_1, \]
where $d_{-1}$ is a constant. Thus the second linearly independent solution is
\[ w_2 = d_{-1} \log z \, w_1 + z^{\alpha_2} \sum_{n=0}^{\infty} d_n z^n, \]
where
\[ d_{-1} = -\frac{1}{a_0} \frac{1}{\alpha_1 - \alpha_2} \sum_{j=0}^{N-1} \left[ (\alpha_2 + j) p_{N-j} + q_{N-j} \right] a_j(\alpha_2) \]
and
\[ d_n = \lim_{\alpha \to \alpha_2} \frac{d}{d\alpha} \left[ (\alpha - \alpha_2) a_n(\alpha) \right] \quad \text{for } n \ge 0. \]

Example 23.2.3 Consider the differential equation
\[ w'' + \left( 1 - \frac{2}{z} \right) w' + \frac{2}{z^2} w = 0. \]
The point $z = 0$ is a regular singular point. In order to find series expansions of the solutions, we first calculate the indicial equation. We can write the coefficient functions in the form
\[ \frac{p(z)}{z} = \frac{1}{z}(-2 + z), \quad \text{and} \quad \frac{q(z)}{z^2} = \frac{1}{z^2}(2). \]
Thus the indicial equation is
\[ \alpha^2 + (-2 - 1)\alpha + 2 = 0, \qquad (\alpha - 1)(\alpha - 2) = 0. \]

The First Solution. The first solution will have the Frobenius form
\[ w_1 = z^2 \sum_{n=0}^{\infty} a_n(\alpha_1) z^n. \]
Substituting a Frobenius series into the differential equation (multiplied by $z^2$),
\[ z^2 w'' + (z^2 - 2z) w' + 2w = 0 \]
\[ \sum_{n=0}^{\infty} (n+\alpha)(n+\alpha-1) a_n z^{n+\alpha} + (z^2 - 2z) \sum_{n=0}^{\infty} (n+\alpha) a_n z^{n+\alpha-1} + 2 \sum_{n=0}^{\infty} a_n z^{n+\alpha} = 0 \]
\[ \left[ \alpha^2 - 3\alpha + 2 \right] a_0 + \sum_{n=1}^{\infty} \left[ \left( (n+\alpha)(n+\alpha-1) - 2(n+\alpha) + 2 \right) a_n + (n+\alpha-1) a_{n-1} \right] z^n = 0. \]
Equating powers of $z$,
\[ \left[ (n+\alpha)(n+\alpha-1) - 2(n+\alpha) + 2 \right] a_n = -(n+\alpha-1) a_{n-1}, \qquad a_n = -\frac{a_{n-1}}{n + \alpha - 2}. \]
Setting $\alpha = \alpha_1 = 2$, the recurrence relation becomes
\[ a_n(\alpha_1) = -\frac{a_{n-1}(\alpha_1)}{n}, \qquad a_n = a_0 \frac{(-1)^n}{n!}. \]
The first solution is
\[ w_1 = a_0 z^2 \sum_{n=0}^{\infty} \frac{(-1)^n}{n!} z^n = a_0 z^2 e^{-z}. \]
The Second Solution. The equation for $a_1(\alpha_2)$ is
\[ 0 \cdot a_1(\alpha_2) = -a_0. \]
Since the right hand side of this equation is not zero, the second solution will have the form
\[ w_2 = d_{-1} \log z \, w_1 + z^{\alpha_2} \sum_{n=0}^{\infty} \lim_{\alpha \to \alpha_2} \frac{d}{d\alpha} \left[ (\alpha - \alpha_2) a_n(\alpha) \right] z^n. \]
First we will calculate $d_{-1}$ as we defined it previously. Here $N = 1$, $p_1 = 1$ and $q_1 = 0$, so
\[ d_{-1} = -\frac{1}{a_0} \frac{1}{2 - 1} (\alpha_2 p_1 + q_1) a_0 = -1. \]
The expression for $a_n(\alpha)$ is
\[ a_n(\alpha) = \frac{(-1)^n a_0}{(\alpha + n - 2)(\alpha + n - 3) \cdots (\alpha - 1)}. \]
The first few $a_n(\alpha)$ are
\[ a_1(\alpha) = -\frac{a_0}{\alpha - 1}, \qquad a_2(\alpha) = \frac{a_0}{\alpha(\alpha - 1)}, \qquad a_3(\alpha) = -\frac{a_0}{(\alpha + 1)\alpha(\alpha - 1)}. \]
We would like to calculate
\[ d_n = \lim_{\alpha \to 1} \frac{d}{d\alpha} \left[ (\alpha - 1) a_n(\alpha) \right]. \]
The first few $d_n$ are
\[ d_0 = \lim_{\alpha \to 1} \frac{d}{d\alpha} \left[ (\alpha - 1) a_0 \right] = a_0, \]
\[ d_1 = \lim_{\alpha \to 1} \frac{d}{d\alpha} \left[ -a_0 \right] = 0, \]
\[ d_2 = \lim_{\alpha \to 1} \frac{d}{d\alpha} \left[ \frac{a_0}{\alpha} \right] = -a_0, \]
\[ d_3 = \lim_{\alpha \to 1} \frac{d}{d\alpha} \left[ -\frac{a_0}{(\alpha + 1)\alpha} \right] = \frac{3}{4} a_0. \]
It will take a little work to find the general expression for $d_n$. We will need the following relations:
\[ \Gamma(n) = (n-1)!, \qquad \Gamma'(z) = \Gamma(z) \psi(z), \qquad \psi(n) = -\gamma + \sum_{k=1}^{n-1} \frac{1}{k}. \]
See the chapter on the Gamma function for explanations of these equations.
\[ d_n = \lim_{\alpha \to 1} \frac{d}{d\alpha} \left[ (\alpha - 1) \frac{(-1)^n a_0}{(\alpha + n - 2) \cdots (\alpha - 1)} \right] = \lim_{\alpha \to 1} \frac{d}{d\alpha} \left[ \frac{(-1)^n a_0}{(\alpha + n - 2) \cdots \alpha} \right] = \lim_{\alpha \to 1} \frac{d}{d\alpha} \left[ \frac{(-1)^n a_0 \Gamma(\alpha)}{\Gamma(\alpha + n - 1)} \right] \]
\[ = (-1)^n a_0 \lim_{\alpha \to 1} \frac{\Gamma(\alpha) \left[ \psi(\alpha) - \psi(\alpha + n - 1) \right]}{\Gamma(\alpha + n - 1)} = (-1)^n a_0 \frac{\psi(1) - \psi(n)}{(n-1)!} = \frac{(-1)^{n+1} a_0}{(n-1)!} \sum_{k=1}^{n-1} \frac{1}{k} \quad \text{for } n \ge 1. \]
Thus the second solution is
\[ w_2 = -w_1 \log z + a_0 z \left( 1 + \sum_{n=2}^{\infty} \frac{(-1)^{n+1}}{(n-1)!} \left( \sum_{k=1}^{n-1} \frac{1}{k} \right) z^n \right). \]
The general solution is
\[ w = c_1 z^2 e^{-z} - c_2 z^2 e^{-z} \log z + c_2 z \left( 1 + \sum_{n=2}^{\infty} \frac{(-1)^{n+1}}{(n-1)!} \left( \sum_{k=1}^{n-1} \frac{1}{k} \right) z^n \right). \]
We see that even in problems that are chosen for their simplicity, the algebra involved in the Frobenius method can be pretty involved.

Example 23.2.4 Consider a series expansion about the origin of the equation
\[ w'' + \frac{1 - z}{z} w' - \frac{1}{z^2} w = 0. \]
The indicial equation is
\[ \alpha^2 - 1 = 0, \qquad \alpha = \pm 1. \]
Substituting a Frobenius series into the differential equation,
\[ z^2 \sum_{n=0}^{\infty} (n+\alpha)(n+\alpha-1) a_n z^{n+\alpha-2} + (z - z^2) \sum_{n=0}^{\infty} (n+\alpha) a_n z^{n+\alpha-1} - \sum_{n=0}^{\infty} a_n z^{n+\alpha} = 0 \]
\[ \sum_{n=0}^{\infty} (n+\alpha)(n+\alpha-1) a_n z^n + \sum_{n=0}^{\infty} (n+\alpha) a_n z^n - \sum_{n=1}^{\infty} (n+\alpha-1) a_{n-1} z^n - \sum_{n=0}^{\infty} a_n z^n = 0 \]
\[ \left[ \alpha(\alpha-1) + \alpha - 1 \right] a_0 + \sum_{n=1}^{\infty} \left[ \left( (n+\alpha)(n+\alpha-1) + (n+\alpha) - 1 \right) a_n - (n+\alpha-1) a_{n-1} \right] z^n = 0. \]
Equating powers of $z$ to zero,
\[ a_n(\alpha) = \frac{a_{n-1}(\alpha)}{n + \alpha + 1}. \]
We know that the first solution has the form
\[ w_1 = z \sum_{n=0}^{\infty} a_n z^n. \]
Setting $\alpha = 1$ in the recurrence formula,
\[ a_n = \frac{a_{n-1}}{n + 2} = \frac{2 a_0}{(n+2)!}. \]
Thus the first solution is
\[ w_1 = z \sum_{n=0}^{\infty} \frac{2 a_0}{(n+2)!} z^n = \frac{2 a_0}{z} \sum_{n=0}^{\infty} \frac{z^{n+2}}{(n+2)!} = \frac{2 a_0}{z} \left( \sum_{n=0}^{\infty} \frac{z^n}{n!} - 1 - z \right) = \frac{2 a_0}{z} \left( e^z - 1 - z \right). \]
Now to find the second solution. Setting $\alpha = -1$ in the recurrence formula,
\[ a_n = \frac{a_{n-1}}{n} = \frac{a_0}{n!}. \]
We see that in this case there is no trouble in defining $a_2(\alpha_2)$. The second solution is
\[ w_2 = \frac{a_0}{z} \sum_{n=0}^{\infty} \frac{z^n}{n!} = \frac{a_0}{z} e^z. \]
Thus we see that the general solution is
\[ w = \frac{c_1}{z} \left( e^z - 1 - z \right) + \frac{c_2}{z} e^z, \qquad w = d_1 \frac{e^z}{z} + d_2 \left( 1 + \frac{1}{z} \right). \]

23.3 Irregular Singular Points

If a point $z_0$ of a differential equation is not ordinary or regular singular, then it is an irregular singular point. At least one of the solutions at an irregular singular point will not be of the Frobenius form. We will examine how to obtain series expansions about an irregular singular point in the chapter on asymptotic expansions.

23.4 The Point at Infinity

If we want to determine the behavior of a function $f(z)$ at infinity, we can make the transformation $\zeta = 1/z$ and examine the point $\zeta = 0$.
Example 23.4.1 Consider the behavior of $f(z) = \sin z$ at infinity. This is the same as considering the point $\zeta = 0$ of $\sin(1/\zeta)$, which has the series expansion
\[ \sin \frac{1}{\zeta} = \sum_{n=0}^{\infty} \frac{(-1)^n}{(2n+1)! \, \zeta^{2n+1}}. \]
Thus we see that the point $\zeta = 0$ is an essential singularity of $\sin(1/\zeta)$. Hence $\sin z$ has an essential singularity at $z = \infty$.

Example 23.4.2 Consider the behavior at infinity of $z e^{1/z}$. We make the transformation $\zeta = 1/z$:
\[ \frac{1}{\zeta} e^\zeta = \frac{1}{\zeta} \sum_{n=0}^{\infty} \frac{\zeta^n}{n!}. \]
Thus $z e^{1/z}$ has a pole of order 1 at infinity.

In order to classify the point at infinity of a differential equation in $w(z)$, we apply the transformation $\zeta = 1/z$, $u(\zeta) = w(z)$. We write the derivatives with respect to $z$ in terms of $\zeta$:
\[ z = \frac{1}{\zeta}, \qquad dz = -\frac{1}{\zeta^2} d\zeta, \qquad \frac{d}{dz} = -\zeta^2 \frac{d}{d\zeta}, \]
\[ \frac{d^2}{dz^2} = -\zeta^2 \frac{d}{d\zeta} \left( -\zeta^2 \frac{d}{d\zeta} \right) = \zeta^4 \frac{d^2}{d\zeta^2} + 2\zeta^3 \frac{d}{d\zeta}. \]
Now we apply the transformation to the differential equation.
\[ w'' + p(z) w' + q(z) w = 0 \]
\[ \zeta^4 u'' + 2\zeta^3 u' + p(1/\zeta)(-\zeta^2) u' + q(1/\zeta) u = 0 \]
\[ u'' + \left( \frac{2}{\zeta} - \frac{p(1/\zeta)}{\zeta^2} \right) u' + \frac{q(1/\zeta)}{\zeta^4} u = 0 \]

Example 23.4.3 Classify the singular points of the differential equation
\[ w'' + \frac{1}{z} w' + 2w = 0. \]
There is a regular singular point at $z = 0$. To examine the point at infinity we make the transformation $\zeta = 1/z$, $u(\zeta) = w(z)$.
\[ u'' + \left( \frac{2}{\zeta} - \frac{1}{\zeta} \right) u' + \frac{2}{\zeta^4} u = 0, \qquad u'' + \frac{1}{\zeta} u' + \frac{2}{\zeta^4} u = 0 \]
Thus we see that the differential equation for $w(z)$ has an irregular singular point at infinity.
23.5 Exercises

Exercise 23.1 (mathematica/ode/series/series.nb)
$f(x)$ satisfies the Hermite equation
\[ \frac{d^2 f}{dx^2} - 2x \frac{df}{dx} + 2\lambda f = 0. \]
Construct two linearly independent solutions of the equation as Taylor series about $x = 0$. For what values of $x$ do the series converge? Show that for certain values of $\lambda$, called eigenvalues, one of the solutions is a polynomial, called an eigenfunction. Calculate the first four eigenfunctions $H_0(x), H_1(x), H_2(x), H_3(x)$, ordered by degree.
Hint, Solution

Exercise 23.2
Consider the Legendre equation
\[ (1 - x^2) y'' - 2x y' + \alpha(\alpha + 1) y = 0. \]

1. Find two linearly independent solutions in the form of power series about $x = 0$.

2. Compute the radius of convergence of the series. Explain why it is possible to predict the radius of convergence without actually deriving the series.

3. Show that if $\alpha = 2n$, with $n$ an integer and $n \ge 0$, the series for one of the solutions reduces to an even polynomial of degree $2n$.

4. Show that if $\alpha = 2n + 1$, with $n$ an integer and $n \ge 0$, the series for one of the solutions reduces to an odd polynomial of degree $2n + 1$.

5. Show that the first 4 polynomial solutions $P_n(x)$ (known as Legendre polynomials) ordered by their degree and normalized so that $P_n(1) = 1$ are
\[ P_0 = 1, \qquad P_1 = x, \qquad P_2 = \frac{1}{2}(3x^2 - 1), \qquad P_3 = \frac{1}{2}(5x^3 - 3x). \]

6. Show that the Legendre equation can also be written as
\[ \left( (1 - x^2) y' \right)' = -\alpha(\alpha + 1) y. \]
Note that two Legendre polynomials $P_n(x)$ and $P_m(x)$ must satisfy this relation for $\alpha = n$ and $\alpha = m$ respectively. By multiplying the first relation by $P_m(x)$ and the second by $P_n(x)$ and integrating by parts show that Legendre polynomials satisfy the orthogonality relation
\[ \int_{-1}^{1} P_n(x) P_m(x) \, dx = 0 \quad \text{if } n \ne m. \]
If $n = m$, it can be shown that the value of the integral is $2/(2n+1)$. Verify this for the first three polynomials (but you needn't prove it in general).
Hint, Solution

Exercise 23.3
Find the forms of two linearly independent series expansions about the point $z = 0$ for the differential equation
\[ w'' + \frac{1}{\sin z} w' + \frac{1 - z}{z^2} w = 0, \]
such that the series are real-valued on the positive real axis. Do not calculate the coefficients in the expansions.
Hint, Solution
Exercise 23.4
Classify the singular points of the equation
\[ w'' + \frac{w'}{z - 1} + 2w = 0. \]
Hint, Solution

Exercise 23.5
Find the series expansions about $z = 0$ for
\[ w'' + \frac{5}{4z} w' + \frac{z - 1}{8z^2} w = 0. \]
Hint, Solution

Exercise 23.6
Find the series expansions about $z = 0$ of the fundamental solutions of
\[ w'' + z w' + w = 0. \]
Hint, Solution

Exercise 23.7
Find the series expansions about $z = 0$ of the two linearly independent solutions of
\[ w'' + \frac{1}{2z} w' + \frac{1}{z} w = 0. \]
Hint, Solution

Exercise 23.8
Classify the singularity at infinity of the differential equation
\[ w'' + \left( \frac{2}{z} + \frac{3}{z^2} \right) w' + \frac{1}{z^2} w = 0. \]
Find the forms of the series solutions of the differential equation about infinity that are real-valued when $z$ is real-valued and positive. Do not calculate the coefficients in the expansions.
Hint, Solution

Exercise 23.9
Consider the second order differential equation
\[ x \frac{d^2 y}{dx^2} + (b - x) \frac{dy}{dx} - a y = 0, \]
where $a, b$ are real constants.

1. Show that $x = 0$ is a regular singular point. Determine the location of any additional singular points and classify them. Include the point at infinity.

2. Compute the indicial equation for the point $x = 0$.

3. By solving an appropriate recursion relation, show that one solution has the form
\[ y_1(x) = 1 + \frac{a x}{b} + \frac{(a)_2 x^2}{(b)_2 2!} + \cdots + \frac{(a)_n x^n}{(b)_n n!} + \cdots \]
where the notation $(a)_n$ is defined by
\[ (a)_n = a(a+1)(a+2) \cdots (a+n-1), \qquad (a)_0 = 1. \]
Assume throughout this problem that $b$ is not a non-positive integer.
4. Show that when $a = -m$, where $m$ is a non-negative integer, there are polynomial solutions to this equation. Compute the radius of convergence of the series above when $a = -m$. Verify that the result you get is in accord with the Frobenius theory.

5. Show that if $b = n + 1$ where $n = 0, 1, 2, \ldots$, then the second solution of this equation has logarithmic terms. Indicate the form of the second solution in this case. You need not compute any coefficients.
Hint, Solution

Exercise 23.10
Consider the equation
\[ x y'' + 2x y' + 6 e^x y = 0. \]
Find the first three non-zero terms in each of two linearly independent series solutions about $x = 0$.
Hint, Solution
23.6 Hints

Hint 23.1

Hint 23.2

Hint 23.3

Hint 23.4

Hint 23.5

Hint 23.6

Hint 23.7

Hint 23.8

Hint 23.9

Hint 23.10
23.7 Solutions

Solution 23.1
$f(x)$ is a Taylor series about $x = 0$:
\[ f(x) = \sum_{n=0}^{\infty} a_n x^n, \qquad f'(x) = \sum_{n=1}^{\infty} n a_n x^{n-1} = \sum_{n=0}^{\infty} n a_n x^{n-1}, \]
\[ f''(x) = \sum_{n=2}^{\infty} n(n-1) a_n x^{n-2} = \sum_{n=0}^{\infty} (n+2)(n+1) a_{n+2} x^n. \]
We substitute the Taylor series into the differential equation.
\[ f''(x) - 2x f'(x) + 2\lambda f = 0 \]
\[ \sum_{n=0}^{\infty} (n+2)(n+1) a_{n+2} x^n - 2 \sum_{n=0}^{\infty} n a_n x^n + 2\lambda \sum_{n=0}^{\infty} a_n x^n = 0 \]
Equating coefficients gives us a difference equation for $a_n$:
\[ (n+2)(n+1) a_{n+2} - 2n a_n + 2\lambda a_n = 0, \qquad a_{n+2} = 2 \frac{n - \lambda}{(n+1)(n+2)} a_n. \]
The first two coefficients, $a_0$ and $a_1$, are arbitrary. The remaining coefficients are determined by the recurrence relation. We will find the fundamental set of solutions at $x = 0$. That is, for the first solution we choose $a_0 = 1$ and $a_1 = 0$; for the second solution we choose $a_0 = 0$, $a_1 = 1$. The difference equation for $y_1$ is
\[ a_{n+2} = 2 \frac{n - \lambda}{(n+1)(n+2)} a_n, \qquad a_0 = 1, \quad a_1 = 0, \]
which has the solution
\[ a_{2n} = \frac{2^n \prod_{k=0}^{n-1} (2k - \lambda)}{(2n)!}, \qquad a_{2n+1} = 0. \]
The difference equation for $y_2$ is
\[ a_{n+2} = 2 \frac{n - \lambda}{(n+1)(n+2)} a_n, \qquad a_0 = 0, \quad a_1 = 1, \]
which has the solution
\[ a_{2n} = 0, \qquad a_{2n+1} = \frac{2^n \prod_{k=0}^{n-1} (2k + 1 - \lambda)}{(2n+1)!}. \]
A set of linearly independent solutions, (in fact the fundamental set of solutions at $x = 0$), is
\[ y_1(x) = \sum_{n=0}^{\infty} \frac{2^n \prod_{k=0}^{n-1} (2k - \lambda)}{(2n)!} x^{2n}, \qquad y_2(x) = \sum_{n=0}^{\infty} \frac{2^n \prod_{k=0}^{n-1} (2k + 1 - \lambda)}{(2n+1)!} x^{2n+1}. \]
Since the coefficient functions in the differential equation do not have any singularities in the finite complex plane, the radius of convergence of the series is infinite.

If $\lambda = n$ is a positive even integer, then the first solution, $y_1$, is a polynomial of order $n$. If $\lambda = n$ is a positive odd integer, then the second solution, $y_2$, is a polynomial of order $n$. For $\lambda = 0, 1, 2, 3$, we have
\[ H_0(x) = 1, \qquad H_1(x) = x, \qquad H_2(x) = 1 - 2x^2, \qquad H_3(x) = x - \frac{2}{3} x^3. \]
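The recurrence in Solution 23.1 can be used to generate the eigenfunctions directly; here is a sketch using sympy for exact rationals (the function name is ours):

    import sympy as sp

    x = sp.symbols('x')

    def hermite_eigenfunction(lam):
        """Polynomial solution of f'' - 2x f' + 2 lam f = 0 for integer lam >= 0."""
        a = [0] * (lam + 3)
        a[lam % 2] = 1                   # even series for even lam, odd for odd lam
        for n in range(lam + 1):
            a[n + 2] = sp.Rational(2*(n - lam), (n + 1)*(n + 2)) * a[n]
        return sum(a[n] * x**n for n in range(lam + 1))

    for lam in range(4):
        print(hermite_eigenfunction(lam))   # 1, x, 1 - 2*x**2, x - 2*x**3/3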
Solution 23.2

1. First we write the differential equation in the standard form,
\[ (1 - x^2) y'' - 2x y' + \alpha(\alpha + 1) y = 0, \tag{23.2} \]
\[ y'' - \frac{2x}{1 - x^2} y' + \frac{\alpha(\alpha + 1)}{1 - x^2} y = 0. \tag{23.3} \]
Since the coefficients of $y'$ and $y$ are analytic in a neighborhood of $x = 0$, we can find two Taylor series solutions about that point. We find the Taylor series for $y$ and its derivatives,
\[ y = \sum_{n=0}^{\infty} a_n x^n, \qquad y' = \sum_{n=1}^{\infty} n a_n x^{n-1}, \qquad y'' = \sum_{n=2}^{\infty} (n-1) n a_n x^{n-2} = \sum_{n=0}^{\infty} (n+1)(n+2) a_{n+2} x^n. \]
Here we used index shifting to explicitly write the two forms that we will need for $y''$. Note that we can take the lower bound of summation to be $n = 0$ for all of the above sums; the terms added by this operation are zero. We substitute the Taylor series into Equation 23.2.
\[ \sum_{n=0}^{\infty} (n+1)(n+2) a_{n+2} x^n - \sum_{n=0}^{\infty} (n-1) n a_n x^n - 2 \sum_{n=0}^{\infty} n a_n x^n + \alpha(\alpha+1) \sum_{n=0}^{\infty} a_n x^n = 0 \]
\[ \sum_{n=0}^{\infty} \left[ (n+1)(n+2) a_{n+2} - \left( (n-1)n + 2n - \alpha(\alpha+1) \right) a_n \right] x^n = 0 \]
We equate coefficients of $x^n$ to obtain a recurrence relation:
\[ (n+1)(n+2) a_{n+2} = \left( n(n+1) - \alpha(\alpha+1) \right) a_n, \qquad a_{n+2} = \frac{n(n+1) - \alpha(\alpha+1)}{(n+1)(n+2)} a_n, \quad n \ge 0. \]
We can solve this difference equation to determine the $a_n$'s ($a_0$ and $a_1$ are arbitrary):
\[ a_n = \begin{cases} \dfrac{a_0}{n!} \displaystyle\prod_{\substack{k=0 \\ \text{even } k}}^{n-2} \left( k(k+1) - \alpha(\alpha+1) \right) & \text{even } n, \\[3ex] \dfrac{a_1}{n!} \displaystyle\prod_{\substack{k=1 \\ \text{odd } k}}^{n-2} \left( k(k+1) - \alpha(\alpha+1) \right) & \text{odd } n. \end{cases} \]
We will find the fundamental set of solutions at $x = 0$, that is the set $\{y_1, y_2\}$ that satisfies
\[ y_1(0) = 1, \quad y_1'(0) = 0, \qquad y_2(0) = 0, \quad y_2'(0) = 1. \]
For $y_1$ we take $a_0 = 1$ and $a_1 = 0$; for $y_2$ we take $a_0 = 0$ and $a_1 = 1$. The rest of the coefficients are determined from the recurrence relation.
\[ y_1 = \sum_{\substack{n=0 \\ \text{even } n}}^{\infty} \left[ \frac{1}{n!} \prod_{\substack{k=0 \\ \text{even } k}}^{n-2} \left( k(k+1) - \alpha(\alpha+1) \right) \right] x^n, \qquad y_2 = \sum_{\substack{n=1 \\ \text{odd } n}}^{\infty} \left[ \frac{1}{n!} \prod_{\substack{k=1 \\ \text{odd } k}}^{n-2} \left( k(k+1) - \alpha(\alpha+1) \right) \right] x^n \]

2. We determine the radius of convergence of the series solutions with the ratio test:
\[ \lim_{n \to \infty} \left| \frac{a_{n+2} x^{n+2}}{a_n x^n} \right| < 1 \quad \Longleftrightarrow \quad \lim_{n \to \infty} \left| \frac{n(n+1) - \alpha(\alpha+1)}{(n+1)(n+2)} \right| x^2 < 1 \quad \Longleftrightarrow \quad x^2 < 1. \]
Thus we see that the radius of convergence of the series is 1. We knew that the radius of convergence would be at least one, because the nearest singularities of the coefficients of (23.3) occur at $x = \pm 1$, a distance of 1 from the origin. This implies that the solutions of the equation are analytic in the unit circle about $x = 0$. The radius of convergence of the Taylor series expansion of an analytic function is the distance to the nearest singularity.

3. If $\alpha = 2n$ then $a_{2n+2} = 0$ in our first solution. From the recurrence relation, we see that all subsequent even coefficients are also zero. The solution becomes an even polynomial:
\[ y_1 = \sum_{\substack{m=0 \\ \text{even } m}}^{2n} \left[ \frac{1}{m!} \prod_{\substack{k=0 \\ \text{even } k}}^{m-2} \left( k(k+1) - \alpha(\alpha+1) \right) \right] x^m. \]

4. If $\alpha = 2n + 1$ then $a_{2n+3} = 0$ in our second solution. From the recurrence relation, we see that all subsequent odd coefficients are also zero. The solution becomes an odd polynomial:
\[ y_2 = \sum_{\substack{m=1 \\ \text{odd } m}}^{2n+1} \left[ \frac{1}{m!} \prod_{\substack{k=1 \\ \text{odd } k}}^{m-2} \left( k(k+1) - \alpha(\alpha+1) \right) \right] x^m. \]

5. From our solutions above, the first four polynomials are
\[ 1, \qquad x, \qquad 1 - 3x^2, \qquad x - \frac{5}{3} x^3. \]
[Figure 23.4: The first four Legendre polynomials.]

To obtain the Legendre polynomials we normalize these to have value unity at $x = 1$:
\[ P_0 = 1, \qquad P_1 = x, \qquad P_2 = \frac{1}{2} \left( 3x^2 - 1 \right), \qquad P_3 = \frac{1}{2} \left( 5x^3 - 3x \right). \]
These four Legendre polynomials are plotted in Figure 23.4.

6. We note that the first two terms in the Legendre equation form an exact derivative. Thus the Legendre equation can also be written as
\[ \left( (1 - x^2) y' \right)' = -\alpha(\alpha + 1) y. \]
$P_n$ and $P_m$ are solutions of the Legendre equation:
\[ \left( (1 - x^2) P_n' \right)' = -n(n+1) P_n, \qquad \left( (1 - x^2) P_m' \right)' = -m(m+1) P_m. \tag{23.4} \]
We multiply the first relation of Equation 23.4 by $P_m$ and integrate by parts.
\[ \int_{-1}^{1} \left( (1 - x^2) P_n' \right)' P_m \, dx = -n(n+1) \int_{-1}^{1} P_n P_m \, dx \]
\[ \left[ (1 - x^2) P_n' P_m \right]_{-1}^{1} - \int_{-1}^{1} (1 - x^2) P_n' P_m' \, dx = -n(n+1) \int_{-1}^{1} P_n P_m \, dx \]
\[ \int_{-1}^{1} (1 - x^2) P_n' P_m' \, dx = n(n+1) \int_{-1}^{1} P_n P_m \, dx \]
We multiply the second relation of Equation 23.4 by $P_n$ and integrate by parts to obtain a different expression for the same integral:
\[ \int_{-1}^{1} (1 - x^2) P_m' P_n' \, dx = m(m+1) \int_{-1}^{1} P_m P_n \, dx. \]
We equate the two expressions for $\int_{-1}^{1} (1 - x^2) P_n' P_m' \, dx$ to obtain an orthogonality relation:
\[ \left( n(n+1) - m(m+1) \right) \int_{-1}^{1} P_n P_m \, dx = 0, \qquad \int_{-1}^{1} P_n(x) P_m(x) \, dx = 0 \quad \text{if } n \ne m. \]
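Both the orthogonality relation and the normalization $2/(2n+1)$ can be confirmed symbolically; a minimal sympy sketch (the hand computation follows below):

    import sympy as sp

    x = sp.symbols('x')
    P = [sp.Integer(1), x, (3*x**2 - 1)/2, (5*x**3 - 3*x)/2]
    for n in range(4):
        for m in range(4):
            val = sp.integrate(P[n]*P[m], (x, -1, 1))
            expected = sp.Rational(2, 2*n + 1) if n == m else 0
            assert val == expected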
We verify that for the first four polynomials the value of the integral is $2/(2n+1)$ for $n = m$.
\[ \int_{-1}^{1} P_0(x) P_0(x) \, dx = \int_{-1}^{1} 1 \, dx = 2 \]
\[ \int_{-1}^{1} P_1(x) P_1(x) \, dx = \int_{-1}^{1} x^2 \, dx = \left[ \frac{x^3}{3} \right]_{-1}^{1} = \frac{2}{3} \]
\[ \int_{-1}^{1} P_2(x) P_2(x) \, dx = \int_{-1}^{1} \frac{1}{4} \left( 9x^4 - 6x^2 + 1 \right) dx = \frac{1}{4} \left[ \frac{9x^5}{5} - 2x^3 + x \right]_{-1}^{1} = \frac{2}{5} \]
\[ \int_{-1}^{1} P_3(x) P_3(x) \, dx = \int_{-1}^{1} \frac{1}{4} \left( 25x^6 - 30x^4 + 9x^2 \right) dx = \frac{1}{4} \left[ \frac{25x^7}{7} - 6x^5 + 3x^3 \right]_{-1}^{1} = \frac{2}{7} \]

Solution 23.3
The indicial equation for this problem is
\[ \alpha^2 + 1 = 0. \]
Since the two roots $\alpha_1 = i$ and $\alpha_2 = -i$ are distinct and do not differ by an integer, there are two solutions in the Frobenius form,
\[ w_1 = z^{i} \sum_{n=0}^{\infty} a_n z^n, \qquad w_2 = z^{-i} \sum_{n=0}^{\infty} b_n z^n. \]
However, these series are not real-valued on the positive real axis. Recalling that
\[ z^{i} = e^{i \log z} = \cos(\log z) + i \sin(\log z), \quad \text{and} \quad z^{-i} = e^{-i \log z} = \cos(\log z) - i \sin(\log z), \]
we can write a new set of solutions that are real-valued on the positive real axis as linear combinations of $w_1$ and $w_2$:
\[ u_1 = \frac{1}{2}(w_1 + w_2), \qquad u_2 = \frac{1}{2i}(w_1 - w_2), \]
\[ u_1 = \cos(\log z) \sum_{n=0}^{\infty} c_n z^n, \qquad u_2 = \sin(\log z) \sum_{n=0}^{\infty} d_n z^n. \]

Solution 23.4
Consider the equation $w'' + w'/(z - 1) + 2w = 0$. We see that there is a regular singular point at $z = 1$. All other finite values of $z$ are ordinary points of the equation. To examine the point at infinity we introduce the transformation $z = 1/t$, $w(z) = u(t)$. Writing the derivatives with respect to $z$ in terms of $t$ yields
\[ \frac{d}{dz} = -t^2 \frac{d}{dt}, \qquad \frac{d^2}{dz^2} = t^4 \frac{d^2}{dt^2} + 2t^3 \frac{d}{dt}. \]
Substituting into the differential equation gives us
\[ t^4 u'' + 2t^3 u' - \frac{t^2 u'}{1/t - 1} + 2u = 0, \]
\[ u'' + \left( \frac{2}{t} - \frac{1}{t(1 - t)} \right) u' + \frac{2}{t^4} u = 0. \]
Since $t = 0$ is an irregular singular point in the equation for $u(t)$, $z = \infty$ is an irregular singular point in the equation for $w(z)$.
Solution 23.5
Find the series expansions about $z = 0$ for
\[ w'' + \frac{5}{4z} w' + \frac{z - 1}{8z^2} w = 0. \]
We see that $z = 0$ is a regular singular point of the equation. The indicial equation is
\[ \alpha^2 + \frac{1}{4}\alpha - \frac{1}{8} = 0, \qquad \left( \alpha + \frac{1}{2} \right)\left( \alpha - \frac{1}{4} \right) = 0. \]
Since the roots are distinct and do not differ by an integer, there will be two solutions in the Frobenius form,
\[ w_1 = z^{1/4} \sum_{n=0}^{\infty} a_n(\alpha_1) z^n, \qquad w_2 = z^{-1/2} \sum_{n=0}^{\infty} a_n(\alpha_2) z^n. \]
We multiply the differential equation by $8z^2$ to put it in a better form. Substituting a Frobenius series into the differential equation,
\[ 8z^2 \sum_{n=0}^{\infty} (n+\alpha)(n+\alpha-1) a_n z^{n+\alpha-2} + 10z \sum_{n=0}^{\infty} (n+\alpha) a_n z^{n+\alpha-1} + (z - 1) \sum_{n=0}^{\infty} a_n z^{n+\alpha} = 0, \]
\[ 8 \sum_{n=0}^{\infty} (n+\alpha)(n+\alpha-1) a_n z^n + 10 \sum_{n=0}^{\infty} (n+\alpha) a_n z^n + \sum_{n=1}^{\infty} a_{n-1} z^n - \sum_{n=0}^{\infty} a_n z^n = 0. \]
Equating coefficients of powers of $z$,
\[ \left[ 8(n+\alpha)(n+\alpha-1) + 10(n+\alpha) - 1 \right] a_n = -a_{n-1}, \qquad a_n = -\frac{a_{n-1}}{8(n+\alpha)^2 + 2(n+\alpha) - 1}. \]

The First Solution. Setting $\alpha = 1/4$ in the recurrence formula,
\[ a_n(\alpha_1) = -\frac{a_{n-1}}{8(n + 1/4)^2 + 2(n + 1/4) - 1} = -\frac{a_{n-1}}{2n(4n + 3)}. \]
Thus the first solution is
\[ w_1 = z^{1/4} \sum_{n=0}^{\infty} a_n(\alpha_1) z^n = a_0 z^{1/4} \left( 1 - \frac{1}{14} z + \frac{1}{616} z^2 + \cdots \right). \]

The Second Solution. Setting $\alpha = -1/2$ in the recurrence formula,
\[ a_n = -\frac{a_{n-1}}{8(n - 1/2)^2 + 2(n - 1/2) - 1} = -\frac{a_{n-1}}{2n(4n - 3)}. \]
Thus the second linearly independent solution is
\[ w_2 = z^{-1/2} \sum_{n=0}^{\infty} a_n(\alpha_2) z^n = a_0 z^{-1/2} \left( 1 - \frac{1}{2} z + \frac{1}{40} z^2 + \cdots \right). \]
Solution 23.6
We consider the series solutions of
\[ w'' + z w' + w = 0. \]
We would like to find the expansions of the fundamental set of solutions about $z = 0$. Since $z = 0$ is a regular point (the coefficient functions are analytic there), we expand the solutions in Taylor series. Differentiating the series expansions for $w(z)$,
\[ w = \sum_{n=0}^{\infty} a_n z^n, \qquad w' = \sum_{n=1}^{\infty} n a_n z^{n-1}, \qquad w'' = \sum_{n=2}^{\infty} n(n-1) a_n z^{n-2} = \sum_{n=0}^{\infty} (n+2)(n+1) a_{n+2} z^n. \]
We may take the lower limit of summation to be zero without changing the sums. Substituting these expressions into the differential equation,
\[ \sum_{n=0}^{\infty} (n+2)(n+1) a_{n+2} z^n + \sum_{n=0}^{\infty} n a_n z^n + \sum_{n=0}^{\infty} a_n z^n = 0, \]
\[ \sum_{n=0}^{\infty} \left[ (n+2)(n+1) a_{n+2} + (n+1) a_n \right] z^n = 0. \]
Equating the coefficient of the $z^n$ term gives us
\[ (n+2)(n+1) a_{n+2} + (n+1) a_n = 0, \qquad a_{n+2} = -\frac{a_n}{n+2}, \quad n \ge 0. \]
$a_0$ and $a_1$ are arbitrary. We determine the rest of the coefficients from the recurrence relation. We consider the cases for even and odd $n$ separately.
\[ a_{2n} = -\frac{a_{2n-2}}{2n} = \frac{a_{2n-4}}{(2n)(2n-2)} = \cdots = \frac{(-1)^n a_0}{(2n)(2n-2) \cdots 4 \cdot 2} = \frac{(-1)^n a_0}{\prod_{m=1}^{n} 2m}, \quad n \ge 0, \]
\[ a_{2n+1} = -\frac{a_{2n-1}}{2n+1} = \frac{a_{2n-3}}{(2n+1)(2n-1)} = \cdots = \frac{(-1)^n a_1}{(2n+1)(2n-1) \cdots 5 \cdot 3} = \frac{(-1)^n a_1}{\prod_{m=1}^{n} (2m+1)}, \quad n \ge 0. \]
If $\{w_1, w_2\}$ is the fundamental set of solutions, then the initial conditions demand that $w_1 = 1 + 0 \cdot z + \cdots$ and $w_2 = 0 + z + \cdots$. We see that $w_1$ will have only even powers of $z$ and $w_2$ will have only odd powers of $z$:
\[ w_1 = \sum_{n=0}^{\infty} \frac{(-1)^n}{\prod_{m=1}^{n} 2m} z^{2n}, \qquad w_2 = \sum_{n=0}^{\infty} \frac{(-1)^n}{\prod_{m=1}^{n} (2m+1)} z^{2n+1}. \]
Since the coefficient functions in the differential equation are entire (analytic in the finite complex plane), the radius of convergence of these series solutions is infinite.

Solution 23.7
\[ w'' + \frac{1}{2z} w' + \frac{1}{z} w = 0. \]
We can find the indicial equation by substituting $w = z^\alpha + O(z^{\alpha+1})$ into the differential equation:
\[ \alpha(\alpha - 1) z^{\alpha-2} + \frac{1}{2} \alpha z^{\alpha-2} + z^{\alpha-1} = O(z^{\alpha-1}). \]
Equating the coefficient of the $z^{\alpha-2}$ term,
\[ \alpha(\alpha - 1) + \frac{1}{2} \alpha = 0, \qquad \alpha = 0, \ \frac{1}{2}. \]
Since the roots are distinct and do not differ by an integer, the solutions are of the form
\[ w_1 = \sum_{n=0}^{\infty} a_n z^n, \qquad w_2 = z^{1/2} \sum_{n=0}^{\infty} b_n z^n. \]
Differentiating the series for the first solution,
\[ w_1 = \sum_{n=0}^{\infty} a_n z^n, \qquad w_1' = \sum_{n=1}^{\infty} n a_n z^{n-1} = \sum_{n=0}^{\infty} (n+1) a_{n+1} z^n, \qquad w_1'' = \sum_{n=1}^{\infty} n(n+1) a_{n+1} z^{n-1}. \]
Substituting this series into the differential equation,
\[ \sum_{n=1}^{\infty} n(n+1) a_{n+1} z^{n-1} + \frac{1}{2z} \sum_{n=0}^{\infty} (n+1) a_{n+1} z^n + \frac{1}{z} \sum_{n=0}^{\infty} a_n z^n = 0, \]
\[ \sum_{n=1}^{\infty} \left[ n(n+1) a_{n+1} + \frac{1}{2}(n+1) a_{n+1} + a_n \right] z^{n-1} + \frac{1}{2z} a_1 + \frac{1}{z} a_0 = 0. \]
Equating powers of $z$,
\[ z^{-1}: \quad \frac{a_1}{2} + a_0 = 0 \ \Rightarrow \ a_1 = -2 a_0, \]
\[ z^{n-1}: \quad \left( n + \frac{1}{2} \right)(n+1) a_{n+1} + a_n = 0 \ \Rightarrow \ a_{n+1} = -\frac{a_n}{(n + 1/2)(n+1)}. \]
We can combine the above two equations for $a_n$:
\[ a_{n+1} = -\frac{a_n}{(n + 1/2)(n+1)} \quad \text{for } n \ge 0. \]
Solving this difference equation for $a_n$,
\[ a_n = a_0 \prod_{j=0}^{n-1} \frac{-1}{(j + 1/2)(j+1)} = a_0 \frac{(-1)^n}{n!} \prod_{j=0}^{n-1} \frac{1}{j + 1/2}. \]
Now let's find the second solution. Differentiating $w_2$,
\[ w_2' = \sum_{n=0}^{\infty} (n + 1/2) b_n z^{n-1/2}, \qquad w_2'' = \sum_{n=0}^{\infty} (n + 1/2)(n - 1/2) b_n z^{n-3/2}. \]
Substituting these expansions into the differential equation,
\[ \sum_{n=0}^{\infty} (n + 1/2)(n - 1/2) b_n z^{n-3/2} + \frac{1}{2} \sum_{n=0}^{\infty} (n + 1/2) b_n z^{n-3/2} + \sum_{n=1}^{\infty} b_{n-1} z^{n-3/2} = 0. \]
Equating the coefficient of the $z^{-3/2}$ term,
\[ \frac{1}{2} \left( -\frac{1}{2} \right) b_0 + \frac{1}{2} \cdot \frac{1}{2} b_0 = 0, \]
we see that $b_0$ is arbitrary. Equating the other coefficients of powers of $z$,
\[ (n + 1/2)(n - 1/2) b_n + \frac{1}{2}(n + 1/2) b_n + b_{n-1} = 0, \qquad b_n = -\frac{b_{n-1}}{n(n + 1/2)}. \]
Calculating the $b_n$'s,
\[ b_1 = -\frac{b_0}{1 \cdot \frac{3}{2}}, \qquad b_2 = \frac{b_0}{1 \cdot 2 \cdot \frac{3}{2} \cdot \frac{5}{2}}, \qquad b_n = \frac{(-1)^n 2^n b_0}{n! \cdot 3 \cdot 5 \cdots (2n+1)}. \]
Thus the second solution is
\[ w_2 = b_0 z^{1/2} \sum_{n=0}^{\infty} \frac{(-1)^n 2^n z^n}{n! \cdot 3 \cdot 5 \cdots (2n+1)}. \]
  • 778. In order to analyze the behavior at infinity we make the change of variables t = 1/z, u(t) = w(z) and examine the point t = 0. Writing the derivatives with respect to z in terms if t yields z = 1 t dz = − 1 t2 dt d dz = −t2 d dt d2 dz2 = −t2 d dt −t2 d dt = t4 d2 dt2 + 2t3 d dt . The equation for u is then t4 u + 2t3 u + (2t + 3t2 )(−t2 )u + t2 u = 0 u + −3u + 1 t2 u = 0 We see that t = 0 is a regular singular point. To find the indicial equation, we substitute u = tα + O(tα+1 ) into the differential equation. α(α − 1)tα−2 − 3αtα−1 + tα−2 = O(tα−1 ) Equating the coefficients of the tα−2 terms, α(α − 1) + 1 = 0 α = 1 ± i √ 3 2 Since the roots of the indicial equation are distinct and do not differ by an integer, a set of solutions has the form t(1+i √ 3)/2 ∞ n=0 antn , t(1−i √ 3)/2 ∞ n=0 bntn . Noting that t(1+i √ 3)/2 = t1/2 exp i √ 3 2 log t , and t(1−i √ 3)/2 = t1/2 exp − i √ 3 2 log t . We can take the sum and difference of the above solutions to obtain the form u1 = t1/2 cos √ 3 2 log t ∞ n=0 antn , u1 = t1/2 sin √ 3 2 log t ∞ n=0 bntn . Putting the answer in terms of z, we have the form of the two Frobenius expansions about infinity. w1 = z−1/2 cos √ 3 2 log z ∞ n=0 an zn , w1 = z−1/2 sin √ 3 2 log z ∞ n=0 bn zn . Solution 23.9 1. We write the equation in the standard form. y + b − x x y − a x y = 0 758
Since $\frac{b - x}{x}$ has no worse than a first order pole and $\frac{a}{x}$ has no worse than a second order pole at $x = 0$, that is a regular singular point. Since the coefficient functions have no other singularities in the finite complex plane, all the other points in the finite complex plane are regular points.

Now to examine the point at infinity. We make the change of variables $u(\xi) = y(x)$, $\xi = 1/x$.
\[ y' = \frac{d\xi}{dx} \frac{d}{d\xi} u = -\frac{1}{x^2} u' = -\xi^2 u', \qquad y'' = -\xi^2 \frac{d}{d\xi} \left( -\xi^2 \frac{d}{d\xi} \right) u = \xi^4 u'' + 2\xi^3 u'. \]
The differential equation becomes
\[ x y'' + (b - x) y' - a y = 0 \]
\[ \frac{1}{\xi} \left( \xi^4 u'' + 2\xi^3 u' \right) + \left( b - \frac{1}{\xi} \right) \left( -\xi^2 u' \right) - a u = 0 \]
\[ \xi^3 u'' + \left( (2 - b)\xi^2 + \xi \right) u' - a u = 0 \]
\[ u'' + \left( \frac{2 - b}{\xi} + \frac{1}{\xi^2} \right) u' - \frac{a}{\xi^3} u = 0 \]
Since this equation has an irregular singular point at $\xi = 0$, the equation for $y(x)$ has an irregular singular point at infinity.

2. The coefficient functions are
\[ p(x) \equiv \frac{1}{x} \sum_{n=0}^{\infty} p_n x^n = \frac{1}{x}(b - x), \qquad q(x) \equiv \frac{1}{x^2} \sum_{n=0}^{\infty} q_n x^n = \frac{1}{x^2}(0 - a x). \]
The indicial equation is
\[ \alpha^2 + (p_0 - 1)\alpha + q_0 = 0, \qquad \alpha^2 + (b - 1)\alpha + 0 = 0, \qquad \alpha(\alpha + b - 1) = 0. \]

3. Since one of the roots of the indicial equation is zero, and the other root, $1 - b$, is not a positive integer (by the assumption on $b$), one of the solutions of the differential equation is a Taylor series.
\[ y_1 = \sum_{k=0}^{\infty} c_k x^k, \qquad y_1' = \sum_{k=1}^{\infty} k c_k x^{k-1} = \sum_{k=0}^{\infty} (k+1) c_{k+1} x^k, \qquad y_1'' = \sum_{k=2}^{\infty} k(k-1) c_k x^{k-2} = \sum_{k=0}^{\infty} (k+1) k c_{k+1} x^{k-1}. \]
We substitute the Taylor series into the differential equation.
\[ x y'' + (b - x) y' - a y = 0 \]
\[ \sum_{k=0}^{\infty} (k+1) k c_{k+1} x^k + b \sum_{k=0}^{\infty} (k+1) c_{k+1} x^k - \sum_{k=0}^{\infty} k c_k x^k - a \sum_{k=0}^{\infty} c_k x^k = 0 \]
We equate coefficients to determine a recurrence relation for the coefficients:
\[ (k+1) k c_{k+1} + b(k+1) c_{k+1} - k c_k - a c_k = 0, \qquad c_{k+1} = \frac{k + a}{(k+1)(k+b)} c_k. \]
For $c_0 = 1$, the recurrence relation has the solution
\[ c_k = \frac{(a)_k}{(b)_k k!}. \]
Thus one solution is
\[ y_1(x) = \sum_{k=0}^{\infty} \frac{(a)_k}{(b)_k k!} x^k. \]

4. If $a = -m$, where $m$ is a non-negative integer, then $(a)_k = 0$ for $k > m$. This makes $y_1$ a polynomial:
\[ y_1(x) = \sum_{k=0}^{m} \frac{(a)_k}{(b)_k k!} x^k. \]
Since the series terminates, its radius of convergence is infinite, consistent with the Frobenius theory.

5. If $b = n + 1$, where $n$ is a non-negative integer, the indicial equation is
\[ \alpha(\alpha + n) = 0. \]
For the case $n = 0$, the indicial equation has a double root at zero. Thus the solutions have the form:
\[ y_1(x) = \sum_{k=0}^{\infty} \frac{(a)_k}{(b)_k k!} x^k, \qquad y_2(x) = y_1(x) \log x + \sum_{k=0}^{\infty} d_k x^k \]
For the case $n > 0$ the roots of the indicial equation differ by an integer. The solutions have the form:
\[ y_1(x) = \sum_{k=0}^{\infty} \frac{(a)_k}{(b)_k k!} x^k, \qquad y_2(x) = d_{-1} y_1(x) \log x + x^{-n} \sum_{k=0}^{\infty} d_k x^k \]
The form of the solution for $y_2$ can be substituted into the equation to determine the coefficients $d_k$.

Solution 23.10
We write the equation in the standard form.
\[ x y'' + 2x y' + 6 e^x y = 0, \qquad y'' + 2 y' + \frac{6 e^x}{x} y = 0 \]
We see that $x = 0$ is a regular singular point. The indicial equation is
\[ \alpha^2 - \alpha = 0, \qquad \alpha = 0, \ 1. \]
The first solution has the Frobenius form
\[ y_1 = x + a_2 x^2 + a_3 x^3 + O(x^4). \]
We substitute $y_1$ into the differential equation and equate coefficients of powers of $x$.
\[ x y'' + 2x y' + 6 e^x y = 0 \]
\[ x \left( 2a_2 + 6a_3 x + O(x^2) \right) + 2x \left( 1 + 2a_2 x + 3a_3 x^2 + O(x^3) \right) + 6 \left( 1 + x + \frac{x^2}{2} + O(x^3) \right) \left( x + a_2 x^2 + a_3 x^3 + O(x^4) \right) = 0 \]
\[ (2a_2 x + 6a_3 x^2) + (2x + 4a_2 x^2) + \left( 6x + 6(1 + a_2) x^2 \right) = O(x^3) \]
\[ a_2 = -4, \qquad a_3 = \frac{17}{3} \]
\[ y_1 = x - 4x^2 + \frac{17}{3} x^3 + O(x^4) \]
Now we see if the second solution has the Frobenius form. There is no $a_1 x$ term because $y_2$ is only determined up to an additive constant times $y_1$:
\[ y_2 = 1 + O(x^2). \]
We substitute $y_2$ into the differential equation and equate coefficients of powers of $x$.
\[ x y'' + 2x y' + 6 e^x y = 0 \]
\[ O(x) + O(x) + 6 \left( 1 + O(x) \right) \left( 1 + O(x^2) \right) = 0 \]
\[ 6 = O(x) \]
The substitution $y_2 = 1 + O(x)$ has yielded a contradiction. Since the second solution is not of the Frobenius form, it has the following form:
\[ y_2 = y_1 \ln(x) + a_0 + a_2 x^2 + O(x^3). \]
The first three terms in the solution are
\[ y_2 = a_0 + x \ln x - 4x^2 \ln x + O(x^2). \]
Now we see if the second solution has the Frobenius form. There is no $a_1 x$ term because $y_2$ is only determined up to an additive constant times $y_1$.
$$y_2 = 1 + O(x^2)$$
We substitute $y_2$ into the differential equation and equate coefficients of powers of $x$.
$$O(x) + O(x) + 6(1 + O(x))(1 + O(x^2)) = 0$$
$$6 = O(x)$$
The substitution $y_2 = 1 + O(x)$ has yielded a contradiction. Since the second solution is not of the Frobenius form, it has the form:
$$y_2 = y_1 \ln(x) + a_0 + a_2 x^2 + O(x^3)$$
The first three terms in the solution are
$$y_2 = a_0 + x \ln x - 4 x^2 \ln x + O(x^2).$$
We calculate the derivatives of $y_2$.
$$y_2' = \ln(x) + O(1)$$
$$y_2'' = \frac{1}{x} + O(\ln(x))$$
We substitute $y_2$ into the differential equation and equate coefficients.
$$(1 + O(x \ln x)) + 2\,O(x \ln x) + 6(a_0 + O(x \ln x)) = 0$$
$$1 + 6 a_0 = 0$$
$$y_2 = -\frac{1}{6} + x \ln x - 4 x^2 \ln x + O(x^2)$$
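A similar check (not in the original text) confirms that the constant term of the residual vanishes for $a_0 = -1/6$; the remaining terms are $O(x \ln x)$, so the residual tends to zero:

    import sympy as sp

    x = sp.symbols('x', positive=True)
    y2 = sp.Rational(-1, 6) + x*sp.log(x) - 4*x**2*sp.log(x)
    residual = x*y2.diff(x, 2) + 2*x*y2.diff(x) + 6*sp.exp(x)*y2
    print(sp.limit(residual, x, 0))   # expect 0, since 1 + 6*a_0 = 0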
23.8 Quiz

Problem 23.1
Write the definition of convergence of the series $\sum_{n=1}^{\infty} a_n$.

Problem 23.2
What is the Cauchy convergence criterion for series?

Problem 23.3
Define absolute convergence and uniform convergence. What is the relationship between the two?

Problem 23.4
Write the geometric series and the function to which it converges. For what values of the variable does the series converge?

Problem 23.5
For what real values of $a$ does the series $\sum_{n=1}^{\infty} n^a$ converge?

Problem 23.6
State the ratio and root convergence tests.

Problem 23.7
State the integral convergence test.
23.9 Quiz Solutions

Solution 23.1
The series $\sum_{n=1}^{\infty} a_n$ converges if the sequence of partial sums, $S_N = \sum_{n=1}^{N} a_n$, converges. That is,
$$\lim_{N \to \infty} S_N = \lim_{N \to \infty} \sum_{n=1}^{N} a_n = \text{constant}.$$

Solution 23.2
A series converges if and only if for any $\epsilon > 0$ there exists an $N$ such that $|S_n - S_m| < \epsilon$ for all $n, m > N$.

Solution 23.3
The series $\sum_{n=1}^{\infty} a_n$ converges absolutely if $\sum_{n=1}^{\infty} |a_n|$ converges. If the rate of convergence of $\sum_{n=1}^{\infty} a_n(z)$ is independent of $z$ then the series is uniformly convergent. The series is uniformly convergent in a domain if for any given $\epsilon > 0$ there exists an $N$, independent of $z$, such that
$$|f(z) - S_N(z)| = \left| f(z) - \sum_{n=1}^{N} a_n(z) \right| < \epsilon$$
for all $z$ in the domain. There is no relationship between absolute convergence and uniform convergence.

Solution 23.4
$$\frac{1}{1 - z} = \sum_{n=0}^{\infty} z^n \quad \text{for } |z| < 1.$$

Solution 23.5
The series converges for $a < -1$.

Solution 23.6
The ratio test: the series $\sum_{n=1}^{\infty} a_n$ converges absolutely if
$$\lim_{n \to \infty} \left| \frac{a_{n+1}}{a_n} \right| < 1.$$
If the limit is greater than unity, then the series diverges. If the limit is unity, the test fails.
The root test: the series $\sum_{n=1}^{\infty} a_n$ converges absolutely if
$$\lim_{n \to \infty} |a_n|^{1/n} < 1.$$
If the limit is greater than unity, then the series diverges. If the limit is unity, the test fails.

Solution 23.7
If the coefficients $a_n$ of a series $\sum_{n=1}^{\infty} a_n$ are monotonically decreasing and can be extended to a monotonically decreasing function of the continuous variable $x$,
$$a(x) = a_n \quad \text{for integer } x = n,$$
then the sum converges or diverges with the integral
$$\int_{1}^{\infty} a(x)\, dx.$$
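The convergence claim in Solution 23.5 is easy to see numerically. A small sketch (not part of the original text): partial sums of $\sum n^a$ settle down for $a = -2$ (toward $\pi^2/6 \approx 1.6449$) but keep growing like $\log N$ for $a = -1$.

    def partial_sum(a, N):
        return sum(n**a for n in range(1, N + 1))

    for a in (-2.0, -1.0):
        print(a, [round(partial_sum(a, N), 4) for N in (10**2, 10**4, 10**6)])
    # a = -2: 1.6350, 1.6448, 1.6449   (converging)
    # a = -1: 5.1874, 9.7876, 14.3927  (diverging like log N)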
Chapter 24

Asymptotic Expansions

The more you sweat in practice, the less you bleed in battle.
-Navy Seal Saying

24.1 Asymptotic Relations

The $\ll$ and $\sim$ symbols. First we will introduce two new symbols used in asymptotic relations.
$$f(x) \ll g(x) \quad \text{as } x \to x_0,$$
is read, "$f(x)$ is much smaller than $g(x)$ as $x$ tends to $x_0$". This means
$$\lim_{x \to x_0} \frac{f(x)}{g(x)} = 0.$$
The notation
$$f(x) \sim g(x) \quad \text{as } x \to x_0,$$
is read "$f(x)$ is asymptotic to $g(x)$ as $x$ tends to $x_0$", which means
$$\lim_{x \to x_0} \frac{f(x)}{g(x)} = 1.$$
A few simple examples are
• $e^{-x} \ll x$ as $x \to +\infty$
• $\sin x \sim x$ as $x \to 0$
• $1/x \ll 1$ as $x \to +\infty$
• $e^{-1/x} \ll x^{-n}$ as $x \to 0^+$ for all $n$
An equivalent definition of $f(x) \sim g(x)$ as $x \to x_0$ is
$$f(x) - g(x) \ll g(x) \quad \text{as } x \to x_0.$$
Note that it does not make sense to say that a function $f(x)$ is asymptotic to zero. Using the above definition this would imply
$$f(x) \ll 0 \quad \text{as } x \to x_0.$$
If you encounter an expression like $f(x) + g(x) \sim 0$, take this to mean $f(x) \sim -g(x)$.
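These definitions are easy to probe numerically. A small sketch (not part of the original text) illustrating two of the examples above:

    import math

    # f ~ g means f/g -> 1:  sin x ~ x as x -> 0
    for x in (0.1, 0.01, 0.001):
        print(math.sin(x) / x)        # tends to 1

    # f << g means f/g -> 0:  exp(-x) << x as x -> +infinity
    for x in (10.0, 50.0, 100.0):
        print(math.exp(-x) / x)       # tends to 0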
The Big O and Little o Notation. If $|f(x)| \le m |g(x)|$ for some constant $m$ in some neighborhood of the point $x = x_0$, then we say that
$$f(x) = O(g(x)) \quad \text{as } x \to x_0.$$
We read this as "$f$ is big O of $g$ as $x$ goes to $x_0$". If $g(x)$ does not vanish, an equivalent definition is that $f(x)/g(x)$ is bounded as $x \to x_0$.

If for any given positive $\delta$ there exists a neighborhood of $x = x_0$ in which $|f(x)| \le \delta |g(x)|$, then
$$f(x) = o(g(x)) \quad \text{as } x \to x_0.$$
This is read, "$f$ is little o of $g$ as $x$ goes to $x_0$."

For a few examples of the use of this notation,
• $e^{-x} = o(x^{-n})$ as $x \to \infty$ for any $n$.
• $\sin x = O(x)$ as $x \to 0$.
• $\cos x - 1 = o(1)$ as $x \to 0$.
• $\log x = o(x^{\alpha})$ as $x \to +\infty$ for any positive $\alpha$.

Operations on Asymptotic Relations. You can perform the ordinary arithmetic operations on asymptotic relations. Addition, multiplication, and division are valid. You can always integrate an asymptotic relation; integration is a smoothing operation. However, it is necessary to exercise some care.

Example 24.1.1 Consider
$$f'(x) \sim \frac{1}{x^2} \quad \text{as } x \to \infty.$$
This does not imply that
$$f(x) \sim \frac{-1}{x} \quad \text{as } x \to \infty.$$
We have forgotten the constant of integration. Integrating the asymptotic relation for $f'(x)$ yields
$$f(x) \sim \frac{-1}{x} + c \quad \text{as } x \to \infty.$$
If $c$ is nonzero then
$$f(x) \sim c \quad \text{as } x \to \infty.$$

It is not always valid to differentiate an asymptotic relation.

Example 24.1.2 Consider $f(x) = \frac{1}{x} + \frac{1}{x^2} \sin(x^3)$. Then
$$f(x) \sim \frac{1}{x} \quad \text{as } x \to \infty.$$
Differentiating this relation would yield
$$f'(x) \sim -\frac{1}{x^2} \quad \text{as } x \to \infty.$$
However, this is not true, since
$$f'(x) = -\frac{1}{x^2} - \frac{2}{x^3} \sin(x^3) + 3 \cos(x^3) \not\sim -\frac{1}{x^2} \quad \text{as } x \to \infty.$$
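Example 24.1.2 can be seen numerically as well. In this sketch (not part of the original text), if $f'(x) \sim -1/x^2$ held, then $-x^2 f'(x)$ would tend to 1; instead the $3\cos(x^3)$ term makes the product oscillate with growing amplitude.

    import math

    def fprime(x):
        return -1/x**2 - 2*math.sin(x**3)/x**3 + 3*math.cos(x**3)

    for x in (10.0, 20.0, 40.0):
        print(x, -x**2 * fprime(x))   # oscillates wildly; does not tend to 1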
The Controlling Factor. The controlling factor is the most rapidly varying factor in an asymptotic relation. Consider a function $f(x)$ that is asymptotic to $x^2 e^x$ as $x$ goes to infinity. The controlling factor is $e^x$. For a few examples of this,
• $x \log x$ has the controlling factor $x$ as $x \to \infty$.
• $x^{-2} e^{1/x}$ has the controlling factor $e^{1/x}$ as $x \to 0$.
• $x^{-1} \sin x$ has the controlling factor $\sin x$ as $x \to \infty$.

The Leading Behavior. Consider a function that is asymptotic to a sum of terms,
$$f(x) \sim a_0(x) + a_1(x) + a_2(x) + \cdots, \quad \text{as } x \to x_0,$$
where
$$a_0(x) \gg a_1(x) \gg a_2(x) \gg \cdots, \quad \text{as } x \to x_0.$$
The first term in the sum is the leading order behavior. For a few examples,
• For $\sin x \sim x - x^3/6 + x^5/120 - \cdots$ as $x \to 0$, the leading order behavior is $x$.
• For $f(x) \sim e^x (1 - 1/x + 1/x^2 - \cdots)$ as $x \to \infty$, the leading order behavior is $e^x$.

24.2 Leading Order Behavior of Differential Equations

It is often useful to know the leading order behavior of the solutions to a differential equation. If we are considering a regular point or a regular singular point, the approach is straightforward: we simply use a Taylor expansion or the Frobenius method. However, if we are considering an irregular singular point, we will have to be more creative in order to obtain the leading behavior of the solutions.