MOBK070-FM MOBKXXX-Sample.cls March 22, 2007 13:6
Essentials of Applied Mathematics
for Scientists and Engineers
Copyright © 2007 by Morgan & Claypool
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in
any form or by any means—electronic, mechanical, photocopy, recording, or any other except for brief quotations
in printed reviews, without the prior permission of the publisher.
Essentials of Applied Mathematics for Scientists and Engineers
Robert G. Watts
www.morganclaypool.com
ISBN: 1598291866 paperback
ISBN: 9781598291865 paperback
ISBN: 1598291874 ebook
ISBN: 9781598291872 ebook
DOI 10.2200/S00082ED1V01Y200612ENG003
A Publication in the Morgan & Claypool Publishers series
SYNTHESIS LECTURES ON ENGINEERING
Lecture #3
Series ISSN: 1559-811X print
Series ISSN: 1559-8128 electronic
First Edition
10 9 8 7 6 5 4 3 2 1
Essentials of Applied Mathematics
for Scientists and Engineers
Robert G. Watts
Tulane University
SYNTHESIS LECTURES ON ENGINEERING #3
Morgan & Claypool Publishers
ABSTRACT
This is a book about linear partial differential equations that are common in engineering and
the physical sciences. It will be useful to graduate students and advanced undergraduates in
all engineering fields, as well as to students of physics, chemistry, geophysics, and other
physical sciences, and to professional engineers who wish to learn how advanced mathematics can
be used in their professions. The reader will learn about applications to heat transfer, fluid
flow, and mechanical vibrations. The book is written in such a way that solution methods and
application to physical problems are emphasized. There are many examples presented in detail
and fully explained in their relation to the real world. References to suggested further reading
are included. The topics covered include classical separation of variables and orthogonal
functions, Laplace transforms, complex variables, and Sturm–Liouville transforms.
KEYWORDS
Engineering mathematics, separation of variables, orthogonal functions, Laplace transforms,
complex variables, Sturm–Liouville transforms, differential equations.
Contents
1. Partial Differential Equations in Engineering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Introductory Comments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Fundamental Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Problems. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3
1.3 The Heat Conduction (or Diffusion) Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3.1 Rectangular Cartesian Coordinates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3.2 Cylindrical Coordinates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.3.3 Spherical Coordinates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
The Laplacian Operator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.3.4 Boundary Conditions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7
1.4 The Vibrating String . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.4.1 Boundary Conditions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8
1.5 Vibrating Membrane . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.6 Longitudinal Displacements of an Elastic Bar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Further Reading. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .9
2. The Fourier Method: Separation of Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.1 Heat Conduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.1.1 Scales and Dimensionless Variables. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .12
2.1.2 Separation of Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.1.3 Superposition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.1.4 Orthogonality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.1.5 Lessons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.1.6 Scales and Dimensionless Variables. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .16
2.1.7 Separation of Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.1.8 Choosing the Sign of the Separation Constant. . . . . . . . . . . . . . . . . . . . . .17
2.1.9 Superposition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.1.10 Orthogonality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.1.11 Lessons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.1.12 Scales and Dimensionless Variables. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .20
2.1.13 Getting to One Nonhomogeneous Condition . . . . . . . . . . . . . . . . . . . . . . 20
2.1.14 Separation of Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.1.15 Choosing the Sign of the Separation Constant. . . . . . . . . . . . . . . . . . . . . .21
2.1.16 Superposition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.1.17 Orthogonality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.1.18 Lessons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.1.19 Scales and Dimensionless Variables. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .23
2.1.20 Relocating the Nonhomogeneity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.1.21 Separating Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.1.22 Superposition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.1.23 Orthogonality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.1.24 Lessons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.2 Vibrations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.2.1 Scales and Dimensionless Variables. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .27
2.2.2 Separation of Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
2.2.3 Orthogonality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
2.2.4 Lessons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
Further Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
3. Orthogonal Sets of Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3.1 Vectors. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .31
3.1.1 Orthogonality of Vectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3.1.2 Orthonormal Sets of Vectors. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .32
3.2 Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
3.2.1 Orthonormal Sets of Functions and Fourier Series . . . . . . . . . . . . . . . . . . 32
3.2.2 Best Approximation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .34
3.2.3 Convergence of Fourier Series. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .35
3.2.4 Examples of Fourier Series. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .36
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
3.3 Sturm–Liouville Problems: Orthogonal Functions . . . . . . . . . . . . . . . . . . . . . . . . . . 39
3.3.1 Orthogonality of Eigenfunctions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
Further Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
4. Series Solutions of Ordinary Differential Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
4.1 General Series Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
4.1.1 Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
4.1.2 Ordinary Points and Series Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
4.1.3 Lessons: Finding Series Solutions for Differential Equations
with Ordinary Points. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .48
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
4.1.4 Regular Singular Points and the Method of Frobenius. . . . . . . . . . . . . . .49
4.1.5 Lessons: Finding Series Solution for Differential Equations with
Regular Singular Points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
4.1.6 Logarithms and Second Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
4.2 Bessel Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
4.2.1 Solutions of Bessel’s Equation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .58
Here are the Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
4.2.2 Fourier–Bessel Series. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .64
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
4.3 Legendre Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
4.4 Associated Legendre Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
Further Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
5. Solutions Using Fourier Series and Integrals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
5.1 Conduction (or Diffusion) Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
5.1.1 Time-Dependent Boundary Conditions. . . . . . . . . . . . . . . . . . . . . . . . . . . .80
5.2 Vibrations Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
5.3 Fourier Integrals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
Further Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
6. Integral Transforms: The Laplace Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
6.1 The Laplace Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
6.2 Some Important Transforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
6.2.1 Exponentials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
6.2.2 Shifting in the s -domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
6.2.3 Shifting in the Time Domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
6.2.4 Sine and Cosine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
6.2.5 Hyperbolic Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
6.2.6 Powers of t: t^m . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
6.2.7 Heaviside Step . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
6.2.8 The Dirac Delta Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
6.2.9 Transforms of Derivatives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
6.2.10 Laplace Transforms of Integrals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
6.2.11 Derivatives of Transforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
6.3 Linear Ordinary Differential Equations with Constant Coefficients . . . . . . . . . 102
6.4 Some Important Theorems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
6.4.1 Initial Value Theorem. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .103
6.4.2 Final Value Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
6.4.3 Convolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
6.5 Partial Fractions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
6.5.1 Nonrepeating Roots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
6.5.2 Repeated Roots. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .107
6.5.3 Quadratic Factors: Complex Roots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
Further Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
7. Complex Variables and the Laplace Inversion Integral . . . . . . . . . . . . . . . . . . . . . . . . . 111
7.1 Basic Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
7.1.1 Limits and Differentiation of Complex Variables:
Analytic Functions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .115
Integrals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
7.1.2 The Cauchy Integral Formula. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .118
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
8. Solutions with Laplace Transforms. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .121
8.1 Mechanical Vibrations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .121
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
8.2 Diffusion or Conduction Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
8.3 Duhamel’s Theorem. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .135
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
Further Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
9. Sturm–Liouville Transforms. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .141
9.1 A Preliminary Example: Fourier Sine Transform . . . . . . . . . . . . . . . . . . . . . . . . . . 141
9.2 Generalization: The Sturm–Liouville Transform: Theory . . . . . . . . . . . . . . . . . . 143
9.3 The Inverse Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
Further Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
10. Introduction to Perturbation Methods. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .153
10.1 Examples from Algebra. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .153
10.1.1 Regular Perturbation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
10.1.2 Singular Perturbation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
Appendix A: The Roots of Certain Transcendental Equations. . . . . . . . . . . . . . . . . .159
Appendix B: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
Author Biography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
book Mobk070 March 22, 2007 11:7
CHAPTER 1
Partial Differential Equations
in Engineering
1.1 INTRODUCTORY COMMENTS
This book covers the material presented in a course in applied mathematics that is required
for first-year graduate students in the departments of Chemical and Mechanical Engineering
at Tulane University. A great deal of material is presented, covering boundary value problems,
complex variables, and Fourier transforms. Therefore the depth of coverage is not as extensive
as in many books. Our intent in the course is to introduce students to methods of solving
linear partial differential equations. Subsequent courses such as conduction, solid mechanics,
and fracture mechanics then provide necessary depth.
The reader will note some similarity to the three books, Fourier Series and Boundary
Value Problems, Complex Variables and Applications, and Operational Mathematics, originally by
R. V. Churchill. The first of these has been recently updated by James Ward Brown. The
current author greatly admires these works, and studied them during his own tenure as a
graduate student. The present book is more concise and leaves out some of the proofs in an
attempt to present more material in a way that is still useful and is acceptable for engineering
students.
First we review a few concepts about differential equations in general.
1.2 FUNDAMENTAL CONCEPTS
An ordinary differential equation expresses a dependent variable, say u, as a function of one
independent variable, say x, and its derivatives. The order of the differential equation is given
by the order of the highest derivative of the dependent variable. A boundary value problem
consists of a differential equation that is defined for a given range of the independent variable
(domain) along with conditions on the boundary of the domain. In order for the boundary value
problem to have a unique solution the number of boundary conditions must equal the order of
the differential equation. If the differential equation and the boundary conditions contain only
terms of first degree in u and its derivatives the problem is linear. Otherwise it is nonlinear.
A partial differential equation expresses a dependent variable, say u, as a function of more
than one independent variable, say x, y, and z. Partial derivatives are normally written as ∂u/∂x.
This is the first-order derivative of the dependent variable u with respect to the independent
variable x. Sometimes we will use the notation ux, or when the derivative is an ordinary derivative
we use u′. Higher order derivatives are written as ∂²u/∂x² or uxx. The order of the differential
equation now depends on the orders of the derivatives of the dependent variables in terms of
each of the independent variables. For example, it may be of order m for the x variable and of
order n for the y variable. A boundary value problem consists of a partial differential equation
defined on a domain in the space of the independent variables, for example the x, y, z space,
along with conditions on the boundary. Once again, if the partial differential equation and the
boundary conditions contain only terms of first degree in u and its derivatives the problem is
linear. Otherwise it is nonlinear.
A differential equation or a boundary condition is homogeneous if it contains only terms
involving the dependent variable.
Examples
Consider the ordinary differential equation
a(x)u″ + b(x)u′ = c(x), 0 < x < A. (1.1)
Two boundary conditions are required because the order of the equation is 2. Suppose
u(0) = 0 and u(A) = 1. (1.2)
The problem is linear. If c(x) is not zero the differential equation is nonhomogeneous. The first
boundary condition is homogeneous, but the second boundary condition is nonhomogeneous.
Next consider the ordinary differential equation
a(u)u″ + b(x)u′ = c, 0 < x < A (1.3)
Again two boundary conditions are required. Regardless of the forms of the boundary conditions,
the problem is nonlinear because the first term in the differential equation is not of first
degree in u and u″, since the leading coefficient is a function of u. It is homogeneous only if
c = 0.
Now consider the following three partial differential equations:
ux + uxx + uxy = 1 (1.4)
uxx + uyy + uzz = 0 (1.5)
uux + uyy = 1 (1.6)
The first equation is linear and nonhomogeneous. The third term is a mixed partial derivative.
Since it is of second order in x two boundary conditions are necessary on x. It is first order
in y, so that only one boundary condition is required on y. The second equation is linear and
homogeneous and is of second order in all three variables. The third equation is nonlinear
because the first term is not of first degree in u and ux. It is of order 1 in x and order 2 in y.
In this book we consider only linear equations. We will now derive the partial differential
equations that describe some of the physical phenomena that are common in engineering
science.
Problems
Tell whether the following are linear or nonlinear and tell the order in each of the independent
variables:
u″ + xu′ + u² = 0
tan(y)uy + uyy = 0
tan(u)uy + 3u = 0
uyyy + uyx + u = 0
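These classification rules can be checked mechanically with a computer algebra system. The following SymPy sketch is an illustrative aid, not part of the text; the helper names order_in and is_linear are our own. It tests Eqs. (1.4) and (1.6) for linearity and for their order in each independent variable.

```python
import sympy as sp

x, y = sp.symbols('x y')
u = sp.Function('u')(x, y)

def order_in(expr, var):
    """Highest order of differentiation with respect to var appearing in expr."""
    return max((d.variables.count(var) for d in expr.atoms(sp.Derivative)),
               default=0)

def is_linear(L):
    """An operator L is linear iff L(a*v + b*w) == a*L(v) + b*L(w)."""
    a, b = sp.symbols('a b')
    v = sp.Function('v')(x, y)
    w = sp.Function('w')(x, y)
    resid = L(a*v + b*w) - a*L(v) - b*L(w)
    return sp.simplify(sp.expand(resid)) == 0

L4 = lambda f: f.diff(x) + f.diff(x, 2) + f.diff(x).diff(y)   # Eq. (1.4)
L6 = lambda f: f*f.diff(x) + f.diff(y, 2)                     # Eq. (1.6)
# is_linear(L4) is True; is_linear(L6) is False, since u*u_x is not of
# first degree in u and u_x.
```

The linearity test is just the superposition property applied to two arbitrary functions v and w; any operator that fails it is nonlinear.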
1.3 THE HEAT CONDUCTION (OR DIFFUSION) EQUATION
1.3.1 Rectangular Cartesian Coordinates
The conduction of heat is only one example of the diffusion equation. There are many other
important problems involving the diffusion of one substance in another. One example is the
diffusion of one gas into another if both gases are motionless on the macroscopic level (no
convection). The diffusion of heat in a motionless material is governed by Fourier's law, which
states that the heat conducted per unit area in the (vector) direction n is proportional to the
negative of the temperature gradient ∂u/∂n in that direction, that is,

q^n = −k ∂u/∂n (1.7)

where q^n denotes the heat flux in the n direction (the n is a superscript label, not a power). In
this equation u is the local temperature and k is the thermal conductivity of the material.
Alternatively, u could be the concentration of a diffusing material in a host material and k the
diffusivity of the diffusing material relative to the host material.
Consider the diffusion of heat in two dimensions in rectangular Cartesian coordinates.
Fig. 1.1 shows an element of the material of dimension Δx by Δy by Δz. The material has a
specific heat c and a density ρ. Heat is generated in the material at a rate q per unit volume.

FIGURE 1.1: An element in three-dimensional rectangular Cartesian coordinates

Performing a heat balance on the element, the time (t) rate of change of thermal energy within
the element, ρc Δx Δy Δz ∂u/∂t, is equal to the rate of heat generated within the element,
q Δx Δy Δz, minus the rate at which heat is conducted out of the material. The flux of heat
conducted into the element at the x face is denoted by q^x, while at the y face it is denoted by
q^y. At x + Δx the heat flux (i.e., per unit area) leaving the element in the x direction is
q^x + Δq^x, while at y + Δy the heat flux leaving in the y direction is q^y + Δq^y. Similarly
for q^z. Expanding the latter three terms in Taylor series, we find that

q^x + Δq^x = q^x + (∂q^x/∂x)Δx + (1/2)(∂²q^x/∂x²)(Δx)² + terms of order (Δx)³ or higher.

Similar expressions are obtained for q^y + Δq^y and q^z + Δq^z. Completing the heat balance

ρc Δx Δy Δz ∂u/∂t = q Δx Δy Δz + q^x Δy Δz + q^y Δx Δz + q^z Δx Δy
    − (q^x + (∂q^x/∂x)Δx + (1/2)(∂²q^x/∂x²)(Δx)² + · · ·) Δy Δz
    − (q^y + (∂q^y/∂y)Δy + (1/2)(∂²q^y/∂y²)(Δy)² + · · ·) Δx Δz (1.8)
    − (q^z + (∂q^z/∂z)Δz + (1/2)(∂²q^z/∂z²)(Δz)² + · · ·) Δx Δy

The terms q^x Δy Δz, q^y Δx Δz, and q^z Δx Δy cancel. Taking the limit as Δx, Δy, and Δz
approach zero, noting that the terms multiplied by (Δx)², (Δy)², and (Δz)² may be neglected,
dividing through by Δx Δy Δz, and noting that according to Fourier's law q^x = −k ∂u/∂x,
q^y = −k ∂u/∂y, and q^z = −k ∂u/∂z, we obtain the time-dependent heat conduction equation. In
the two-dimensional case, in which u does not vary with z, the z terms drop out and in
rectangular Cartesian coordinates

ρc ∂u/∂t = k(∂²u/∂x² + ∂²u/∂y²) + q (1.9)
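For readers who want to see Eq. (1.9) in action, a minimal explicit finite-difference sketch follows. It is an illustration only: the unit-square grid, the property values, and the choice q = 0 are our assumptions, not the book's.

```python
import numpy as np

def step_heat_2d(u, alpha, dx, dt):
    """One explicit Euler step of u_t = alpha*(u_xx + u_yy). Boundary rows and
    columns are left untouched, i.e. the temperature there is prescribed."""
    lap = (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:]
           - 4.0*u[1:-1, 1:-1]) / dx**2
    un = u.copy()
    un[1:-1, 1:-1] += dt*alpha*lap
    return un

# Usage: alpha plays the role of k/(rho*c); the explicit step is stable only
# for dt <= dx**2/(4*alpha).
n, alpha = 21, 1.0
dx = 1.0/(n - 1)
dt = 0.2*dx**2/alpha
u = np.zeros((n, n))
u[:, 0] = 1.0                        # left edge held at u = 1
for _ in range(200):                 # march to t = 0.1
    u = step_heat_2d(u, alpha, dx, dt)
# u now decreases with distance from the hot edge.
```

With this time step every updated value is a convex combination of its neighbors, so the computed temperature stays between the boundary values, mirroring the physical behavior of the diffusion equation.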
The equation is first order in t, and second order in both x and y. If the property values
ρ, c and k and the heat generation rate per unit volume q are independent of the dependent
FIGURE 1.2: An element in cylindrical coordinates
variable, temperature, the partial differential equation is linear. If q is zero, the equation is
homogeneous. It is easy to see that if a third dimension, z, were included, the term k∂²u/∂z²
must be added to the right-hand side of the above equation.
1.3.2 Cylindrical Coordinates
A small element of volume (r Δθ)(Δr)(Δz) is shown in Fig. 1.2.
The method of developing the diffusion equation in cylindrical coordinates is much the
same as for rectangular coordinates except that the heat conducted into and out of the element
depends on the area as well as the heat flux as given by Fourier's law, and this area varies in
the r-direction. Hence the heat conducted into the element at r is q^r r Δθ Δz, while the heat
conducted out of the element at r + Δr is q^r r Δθ Δz + [∂(q^r r Δθ Δz)/∂r]Δr when terms
of order (Δr)² are neglected as Δr approaches zero. In the z- and θ-directions the area does
not change. Following the same procedure as in the discussion of rectangular coordinates,
expanding the heat fluxes on the three faces in Taylor series, and neglecting terms of order
(Δθ)² and (Δz)² and higher,

ρc r Δθ Δr Δz ∂u/∂t = −[∂(q^r r Δθ Δz)/∂r]Δr − [∂(q^θ Δr Δz)/∂θ]Δθ
    − [∂(q^z r Δθ Δr)/∂z]Δz + q r Δθ Δr Δz (1.10)
FIGURE 1.3: An element in spherical coordinates
Dividing through by the volume, we find after using Fourier's law for the heat fluxes

ρc ∂u/∂t = k[(1/r)∂(r ∂u/∂r)/∂r + (1/r²)∂²u/∂θ² + ∂²u/∂z²] + q (1.11)
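Equation (1.11) can be spot-checked symbolically: applied to functions whose Cartesian Laplacian is known, the cylindrical form must give the same result. The SymPy sketch below is an illustrative check of the bracketed operator, with q and the material properties divided out.

```python
import sympy as sp

r, theta, z = sp.symbols('r theta z', positive=True)

def lap_cyl(u):
    """The operator of Eq. (1.11): (1/r)d/dr(r du/dr) + (1/r^2)d2u/dtheta2 + d2u/dz2."""
    return (sp.diff(r*sp.diff(u, r), r)/r
            + sp.diff(u, theta, 2)/r**2
            + sp.diff(u, z, 2))

# x^2 + y^2 = r^2 has Cartesian Laplacian 4, while x^2 - y^2 = r^2*cos(2*theta)
# and x = r*cos(theta) are harmonic (Laplacian zero):
assert sp.simplify(lap_cyl(r**2)) == 4
assert sp.simplify(lap_cyl(r**2*sp.cos(2*theta))) == 0
assert sp.simplify(lap_cyl(r*sp.cos(theta))) == 0
```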
1.3.3 Spherical Coordinates
An element in a spherical coordinate system is shown in Fig. 1.3. The volume of the element is
(r sin θ Δφ)(Δr)(r Δθ) = r² sin θ Δr Δθ Δφ. The net heat flows out of the element in the r, θ,
and φ directions are, respectively,

Δq^r r² sin θ Δθ Δφ (1.12)

Δq^θ r sin θ Δr Δφ (1.13)

Δq^φ r Δr Δθ (1.14)

It is left as an exercise for the student to show that

ρc ∂u/∂t = k[(1/r²)∂(r² ∂u/∂r)/∂r + (1/(r² sin²θ))∂²u/∂φ²
    + (1/(r² sin θ))∂(sin θ ∂u/∂θ)/∂θ] + q (1.15)
The Laplacian Operator
The linear operator on the right-hand side of the heat equation is often referred to as the
Laplacian operator and is written as ∇².
1.3.4 Boundary Conditions
Four types of boundary conditions are common in conduction problems.
a) Heat flux prescribed, in which case k∂u/∂n is given.
b) Heat flux is zero (perhaps just a special case of (a)), in which case ∂u/∂n is zero.
c) Temperature u is prescribed.
d) Convection occurs at the boundary, in which case k∂u/∂n = h(U − u).
Here n is a length in the direction normal to the surface, U is the temperature of the fluid
next to the surface that is heating or cooling the surface, and h is the coefficient of convective
heat transfer. Condition (d) is sometimes called Newton’s law of cooling.
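To see how condition (d) enters a calculation, consider a finite-difference grid with a ghost node just outside the left boundary. The discretization below is a sketch under our own conventions (node 1 on the boundary, node 0 the ghost node), not a construction from the book.

```python
import numpy as np

def set_convective_ghost(u, k, h, U, dx):
    """At the left face the outward normal points in -x, so condition (d)
    reads -k*du/dx = h*(U - u). A central difference about the boundary
    node gives k*(u[0] - u[2])/(2*dx) = h*(U - u[1])."""
    u[0] = u[2] + 2.0*dx*(h/k)*(U - u[1])
    return u

# If the surface is already at the fluid temperature U, no heat is exchanged
# and the ghost node reduces to the zero-flux condition u[0] = u[2]:
u = np.array([0.0, 300.0, 310.0])
set_convective_ghost(u, k=1.0, h=5.0, U=300.0, dx=0.1)   # u[0] becomes 310.0
```

A hotter fluid (U above the surface temperature) raises the ghost value above the interior value, producing a temperature gradient that carries heat into the body, as the physics requires.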
1.4 THE VIBRATING STRING
Next we consider a tightly stretched string on some interval of the x-axis. The string is vibrating
about its equilibrium position so that its departure from equilibrium is y(t, x). The string is
assumed to be perfectly flexible with mass per unit length ρ.
Fig. 1.4 shows a portion of such a string that has been displaced upward. We assume
that the tension in the string is constant. However the direction of the tension vector along the
string varies. The tangent of the angle α(t, x) that the string makes with the horizontal is given
by the slope of the string, ∂y/∂x:

V(x)/H = tan α(t, x) = ∂y/∂x (1.16)

where V and H denote the vertical and horizontal components of the tension.
If we assume that the angle α is small then the horizontal tension force is nearly equal to
the magnitude of the tension vector itself. In this case the tangent of the slope of the wire
FIGURE 1.4: An element of a vibrating string
at x + Δx is

V(x + Δx)/H = tan α(x + Δx) = ∂y/∂x(x + Δx). (1.17)
The vertical force V is then given by H∂y/∂x. The net vertical force is the difference between the vertical forces at x and x + Δx, and must be equal to the mass times the acceleration of that portion of the string. The mass is ρΔx and the acceleration is ∂²y/∂t². Thus

ρ Δx ∂²y/∂t² = H[∂y/∂x(x + Δx) − ∂y/∂x(x)] (1.18)

Expanding ∂y/∂x(x + Δx) in a Taylor series about Δx = 0 and neglecting terms of order (Δx)² and smaller, we find that

ρytt = Hyxx (1.19)
which is the wave equation. Usually it is presented as
ytt = a²yxx (1.20)

where a² = H/ρ is a wave speed term.
Had we included the weight of the string there would have been an extra term on the
right-hand side of this equation, the acceleration of gravity (downward). Had we included a
damping force proportional to the velocity of the string, another negative term would result:
ρytt = Hyxx − byt − g (1.21)
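The standing wave y = sin(πx)cos(πat) is a solution of (1.20) with fixed ends, and a finite-difference check makes this concrete. A minimal sketch (the wave speed, evaluation point, and step size are illustrative choices, not from the text):

```python
import math

a = 2.0  # illustrative wave speed, a**2 = H/rho

def y(t, x):
    # standing wave: a solution of y_tt = a**2 * y_xx with fixed ends at x = 0, 1
    return math.sin(math.pi * x) * math.cos(math.pi * a * t)

def second_diff(f, u, h=1e-3):
    # central-difference second derivative
    return (f(u + h) - 2.0 * f(u) + f(u - h)) / (h * h)

t0, x0 = 0.3, 0.4
residual = second_diff(lambda t: y(t, x0), t0) - a * a * second_diff(lambda x: y(t0, x), x0)
print(abs(residual))  # ~0 up to O(h**2) truncation error
```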
1.4.1 Boundary Conditions
The partial differential equation is linear and if the gravity term is included it is nonhomo-
geneous. It is second order in both t and x, and requires two boundary conditions (initial
conditions) on t and two boundary conditions on x. The two conditions on t are normally
specifying the initial displacement and velocity. The conditions on x are normally specifying the
conditions at the ends of the string, i.e., at x = 0 and x = L.
1.5 VIBRATING MEMBRANE
The partial differential equation describing the motion of a vibrating membrane is simply an
extension of the right-hand side of the equation of the vibrating string to two dimensions.
Thus,
ρytt + byt = −g + ∇²y (1.22)

In this equation, ρ is the density per unit area and ∇²y is the Laplacian of y in either rectangular or cylindrical coordinates.
1.6 LONGITUDINAL DISPLACEMENTS OF AN ELASTIC BAR
The longitudinal displacements of an elastic bar are described by Eq. (1.20) except that in this case a² = E/ρ, where ρ is the density and E is Young's modulus.
FURTHER READING
V. Arpaci, Conduction Heat Transfer. Reading, MA: Addison-Wesley, 1966.
J. W. Brown and R. V. Churchill, Fourier Series and Boundary Value Problems. 6th edition. New
York: McGraw-Hill, 2001.
P. V. O'Neil, Advanced Engineering Mathematics. 5th edition. Pacific Grove, CA: Brooks/Cole-Thomson Learning, 2003.
C H A P T E R 2
The Fourier Method: Separation
of Variables
In this chapter we will work through a few example problems in order to introduce the general
idea of separation of variables and the concept of orthogonal functions before moving on to a
more complete discussion of orthogonal function theory. We will also introduce the concepts
of nondimensionalization and normalization.
The goal here is to use the three theorems stated below to walk the student through the
solution of several types of problems using the concept of separation of variables and learn some
early lessons on how to apply the method without getting too much into the details that will
be covered later, especially in Chapter 3.
We state here without proof three fundamental theorems that will be useful in finding
series solutions to partial differential equations.
Theorem 2.1. Linear Superposition: If a group of functions un, n = m through n = M, are all solutions to some linear differential equation then

Σ_{n=m}^M cnun

is also a solution.
Theorem 2.2. Orthogonal Functions: Certain sets of functions Φn defined on the interval (a, b) possess the property that

∫_a^b ΦnΦm dx = constant, n = m
∫_a^b ΦnΦm dx = 0, n ≠ ń wait
These are called orthogonal functions. Examples are the sine and cosine functions. This idea is discussed
fully in Chapter 3, particularly in connection with Sturm–Liouville equations.
Theorem 2.3. Fourier Series: A piecewise continuous function f(x) defined on (a, b) can be represented by a series of orthogonal functions Φn(x) on that interval as

f(x) = Σ_{n=0}^∞ AnΦn(x)

where

An = ∫_{x=a}^b f(x)Φn(x)dx / ∫_{x=a}^b Φn(x)Φn(x)dx
These properties will be used in the following examples to introduce the idea of solution of partial
differential equations using the concept of separation of variables.
2.1 HEAT CONDUCTION
We will first examine how Theorems 1, 2, and 3 are systematically used to obtain solutions
to problems in heat conduction in the forms of infinite series. We set out the methodology
in detail, step-by-step, with comments on lessons learned in each case. We will see that the
mathematics often serves as a guide, telling us when we make a bad assumption about solution
forms.
Example 2.1. A Transient Heat Conduction Problem
Consider a flat plate occupying the space between x = 0 and x = L. The plate stretches out
in the y and z directions far enough that variations in temperature in those directions may be
neglected. Initially the plate is at a uniform temperature u0. At time t = 0⁺ the wall at x = 0 is raised to u1 while the wall at x = L is insulated. The boundary value problem is then

ρc ut = kuxx, 0 < x < L, t > 0 (2.1)

u(t, 0) = u1
ux(t, L) = 0 (2.2)
u(0, x) = u0
2.1.1 Scales and Dimensionless Variables
When it is possible it is always a good idea to write both the independent and dependent
variables in such a way that they range from zero to unity. In the next few problems we shall
show how this can often be done.
We first note that the problem has a fundamental length scale, so that if we define another space variable ξ = x/L, the partial differential equation can be written as

ρc ut = L⁻²kuξξ, 0 < ξ < 1, t > 0 (2.3)

Next we note that if we define a dimensionless time-like variable as τ = αt/L², where α = k/ρc is called the thermal diffusivity, we find

uτ = uξξ (2.4)
We now proceed to nondimensionalize and normalize the dependent variable and the boundary
conditions. We define a new variable
U = (u − u1)/(u0 − u1) (2.5)
Note that this variable is always between 0 and 1 and is dimensionless. Our boundary value
problem is now devoid of constants.
Uτ = Uξξ (2.6)
U(τ, 0) = 0
Uξ (τ, 1) = 0 (2.7)
U(0, ξ) = 1
All but one of the boundary conditions are homogeneous. This will prove necessary in our analysis.
2.1.2 Separation of Variables
Begin by assuming U = P(τ)Q(ξ). Insert this into the differential equation and obtain

Q(ξ)Pτ(τ) = P(τ)Qξξ(ξ). (2.8)

Next divide both sides by U = PQ,

Pτ/P = Qξξ/Q = ±λ² (2.9)

The left-hand side of the above equation is a function of τ only while the right-hand side is a function only of ξ. This can only be true if both are constants since they are equal to each other. λ² is always positive, but we must decide whether to use the plus sign or the minus sign. We have two ordinary differential equations instead of one partial differential equation. Solution for P gives a constant times either exp(−λ²τ) or exp(+λ²τ). Since we know that U is always between 0 and 1, we see immediately that we must choose the minus sign. The second ordinary
differential equation is

Qξξ = −λ²Q (2.10)

and we deduce that the two homogeneous boundary conditions are

Q(0) = 0
Qξ(1) = 0 (2.11)

Solving the differential equation we find

Q = A cos(λξ) + B sin(λξ) (2.12)

where A and B are constants to be determined. The first boundary condition requires that A = 0. The second boundary condition requires that either B = 0 or cos(λ) = 0. Since the former cannot be true (U is not zero!) the latter must be true. Thus λ can take on any of an infinite number of values λn = (2n − 1)π/2, where n is a positive integer (negative values add no new solutions).
Equation (2.10) together with boundary conditions (2.11) is called a Sturm–Liouville problem.
The solutions are called eigenfunctions and the λn are called eigenvalues. A full discussion of
Sturm–Liouville theory will be presented in Chapter 3.
Hence the apparent solution to our partial differential equation is any one of the following:

Un = Bn exp[−(2n − 1)²π²τ/4] sin[(2n − 1)πξ/2]. (2.13)
2.1.3 Superposition
Linear differential equations possess the important property that if each solution Un satisfies the differential equation and the boundary conditions then the linear combination

Σ_{n=1}^∞ Bn exp[−(2n − 1)²π²τ/4] sin[(2n − 1)πξ/2] = Σ_{n=1}^∞ Un (2.14)

also satisfies them, as stated in Theorem 2.1. Can we build this into a solution that satisfies the one remaining boundary condition? The final condition (the nonhomogeneous initial condition) states that

1 = Σ_{n=1}^∞ Bn sin[(2n − 1)πξ/2] (2.15)
This is called a Fourier sine series representation of 1. The topic of Fourier series is further discussed in
Chapter 3.
2.1.4 Orthogonality
It may seem hopeless at this point when we see that we need to find an infinite number of constants Bn. What saves us is a concept called orthogonality (to be discussed in a more general way in Chapter 3). The functions sin[(2n − 1)πξ/2] form an orthogonal set on the interval 0 < ξ < 1, which means that

∫_0^1 sin[(2n − 1)πξ/2] sin[(2m − 1)πξ/2]dξ = 0 when m ≠ n (2.16)
                                             = 1/2 when m = n

Hence if we multiply both sides of the final equation by sin[(2m − 1)πξ/2]dξ and integrate over the interval, we find that all of the terms in which m ≠ n are zero, and we are left with one term, the general term for the nth B, Bn:

Bn = 2∫_0^1 sin[(2n − 1)πξ/2]dξ = 4/[π(2n − 1)] (2.17)
Thus

U = Σ_{n=1}^∞ {4/[π(2n − 1)]} exp[−(2n − 1)²π²τ/4] sin[(2n − 1)πξ/2] (2.18)
satisfies both the partial differential equation and the boundary and initial conditions, and
therefore is a solution to the boundary value problem.
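The series (2.18) can be checked numerically: at very small τ it should reproduce the initial condition U = 1 in the interior, and it must decay toward the steady state U = 0 as τ grows. A minimal sketch (the truncation level N and sample points are illustrative choices):

```python
import math

def U(tau, xi, N=5000):
    """Partial sum of the series solution (2.18)."""
    total = 0.0
    for n in range(1, N + 1):
        k = 2 * n - 1
        total += (4.0 / (math.pi * k)) * math.exp(-k * k * math.pi ** 2 * tau / 4.0) \
                 * math.sin(k * math.pi * xi / 2.0)
    return total

print(U(1e-4, 0.5))  # ~1: the wall change has not yet reached the interior
print(U(0.1, 0.5))   # partially decayed toward the steady state U = 0
```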
2.1.5 Lessons
We began by assuming a solution that was the product of two variables, each a function of only
one of the independent variables. Each of the resulting ordinary differential equations was then
solved. The two homogeneous boundary conditions were used to evaluate one of the constant
coefficients and the separation constant λ. It was found to have an infinite number of values.
These are called eigenvalues and the resulting functions sin(λnξ) are called eigenfunctions. Linear
superposition was then used to build a solution in the form of an infinite series. The infinite
series was then required to satisfy the initial condition, the only nonhomogeneous condition.
The coefficients of the series were determined using the concept of orthogonality stated in
Theorem 2.3, resulting in a Fourier series. Each of these concepts will be discussed further in
Chapter 3. For now we state that many important functions are members of orthogonal sets.
The method would not have worked had the differential equation not been homoge-
neous. (Try it.) It also would not have worked if more than one boundary condition had been
nonhomogeneous. We will see how to get around these problems shortly.
Problems
1. Equation (2.9) could just as easily have been written as

   Pτ/P = Qξξ/Q = +λ²

   Show two reasons why this would reduce to the trivial solution or a solution for which P approaches infinity as τ approaches infinity, and that therefore the minus sign must be chosen.
2. Solve the above problem with boundary conditions
Uξ (τ, 0) = 0 and U(τ, 1) = 0
using the steps given above.
Hint: the functions cos[(2n − 1)πx/2] form an orthogonal set on (0, 1). The result will be a Fourier cosine series representation of 1.
3. Plot U versus ξ for τ = 0.001, 0.01, and 0.1 in Eq. (2.18). Comment.
Example 2.2. A Steady Heat Transfer Problem in Two Dimensions
Heat is conducted in a region of height a and width b. Temperature is a function of two space
dimensions and independent of time. Three sides are at temperature u0 and the fourth side is
at temperature u1. The formulation is as follows:
∂²u/∂x² + ∂²u/∂y² = 0 (2.19)

with boundary conditions

u(0, x) = u(b, x) = u(y, a) = u0
u(y, 0) = u1 (2.20)
2.1.6 Scales and Dimensionless Variables
First note that there are two obvious length scales, a and b. We can choose either one of them
to nondimensionalize x and y. We define
ξ = x/a and η = y/b (2.21)
so that both dimensionless lengths are normalized.
To normalize temperature we choose
U = (u − u0)/(u1 − u0) (2.22)

The problem statement reduces to

Uξξ + (a/b)²Uηη = 0 (2.23)

U(0, ξ) = U(1, ξ) = U(η, 1) = 0
U(η, 0) = 1 (2.24)
2.1.7 Separation of Variables
As before, we assume a solution of the form U(ξ, η) = X(ξ)Y(η). We substitute this into the differential equation and obtain

Y(η)Xξξ(ξ) = −(a/b)²X(ξ)Yηη(η) (2.25)

Next we divide both sides by U(ξ, η) = XY and obtain

Xξξ/X = −(a/b)²Yηη/Y = ±λ² (2.26)
In order for the function only of ξ on the left-hand side of this equation to be equal to the
function only of η on the right-hand side, both must be constant.
2.1.8 Choosing the Sign of the Separation Constant
However in this case it is not as clear as the case of Example 1 what the sign of this constant
must be. Hence we have designated the constant as ±λ² so that for real values of λ the ± sign determines the sign of the constant. Let us proceed by choosing the negative sign and see where this leads.

Thus

Xξξ = −λ²X

with

Y(η)X(0) = 1
Y(η)X(1) = 0 (2.27)

or

X(0) = 1
X(1) = 0 (2.28)
and

Yηη = (b/a)²λ²Y (2.29)

with

X(ξ)Y(0) = X(ξ)Y(1) = 0, or
Y(0) = Y(1) = 0 (2.30)
The solution of the differential equation in the η direction is
Y (η) = A cosh(bλη/a) + B sinh(bλη/a) (2.31)
Applying the first boundary condition (at η = 0) we find that A = 0. When we apply the
boundary condition at η = 1 however, we find that it requires that
0 = B sinh(bλ/a) (2.32)
so that either B = 0 or λ = 0. Neither of these is acceptable since either would require that
Y (η) = 0 for all values of η.
We next try the positive sign. In this case

Xξξ = λ²X (2.33)

Yηη = −(b/a)²λ²Y (2.34)

with the same boundary conditions given above. The solution for Y(η) is now

Y(η) = A cos(bλη/a) + B sin(bλη/a) (2.35)

The boundary condition at η = 0 requires that

0 = A cos(0) + B sin(0) (2.36)

so that again A = 0. The boundary condition at η = 1 requires that

0 = B sin(bλ/a) (2.37)

Since we don't want B to be zero, we can satisfy this condition if

λn = anπ/b, n = 1, 2, 3, . . . (2.38)

Thus

Y(η) = B sin(nπη) (2.39)
Solution for X(ξ) yields hyperbolic functions,

X(ξ) = C cosh(λnξ) + D sinh(λnξ) (2.40)

The boundary condition at ξ = 1 requires that

0 = C cosh(λn) + D sinh(λn) (2.41)

or, solving for C in terms of D,

C = −D tanh(λn) (2.42)

One solution of our problem is therefore

Un(ξ, η) = Kn sin(nπη)[sinh(anπξ/b) − cosh(anπξ/b) tanh(anπ/b)] (2.43)
2.1.9 Superposition
According to the superposition theorem (Theorem 2.1) we can now form a solution as

U(ξ, η) = Σ_{n=1}^∞ Kn sin(nπη)[sinh(anπξ/b) − cosh(anπξ/b) tanh(anπ/b)] (2.44)

The final boundary condition (the nonhomogeneous one) can now be applied,

1 = −Σ_{n=1}^∞ Kn sin(nπη) tanh(anπ/b) (2.45)
2.1.10 Orthogonality
We have already noted that the sine function is an orthogonal function as defined on (0, 1). Thus, we multiply both sides of this equation by sin(mπη)dη and integrate over (0, 1), noting that according to the orthogonality theorem (Theorem 2.2) the integral is zero unless n = m. The result is

∫_{η=0}^1 sin(nπη)dη = −Kn ∫_{η=0}^1 sin²(nπη)dη tanh(anπ/b) (2.46)

(1/nπ)[1 − (−1)ⁿ] = −Kn tanh(anπ/b)(1/2) (2.47)

Kn = −2[1 − (−1)ⁿ]/[nπ tanh(anπ/b)] (2.48)
The solution is represented by the infinite series

U(ξ, η) = Σ_{n=1}^∞ {2[1 − (−1)ⁿ]/[nπ tanh(anπ/b)]} sin(nπη) × [cosh(anπξ/b) tanh(anπ/b) − sinh(anπξ/b)] (2.49)
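The series (2.49) can be checked numerically, with one practical wrinkle: cosh and sinh overflow for large n, so it helps to combine the bracket with the 1/tanh(anπ/b) factor into the equivalent ratio sinh[anπ(1 − ξ)/b]/sinh(anπ/b) and evaluate that with exponentials. A sketch (a = b = 1 and the truncation level are illustrative choices):

```python
import math

def U(xi, eta, a=1.0, b=1.0, N=4000):
    """Partial sum of (2.49). The hyperbolic bracket times 1/tanh(c) equals
    sinh(c*(1 - xi))/sinh(c) with c = a*n*pi/b, computed in overflow-safe form."""
    total = 0.0
    for n in range(1, N + 1):
        coef = 2.0 * (1.0 - (-1.0) ** n) / (n * math.pi)  # vanishes for even n
        if coef == 0.0:
            continue
        c = a * n * math.pi / b
        ratio = math.exp(-c * xi) * (1.0 - math.exp(-2.0 * c * (1.0 - xi))) \
                / (1.0 - math.exp(-2.0 * c))
        total += coef * ratio * math.sin(n * math.pi * eta)
    return total

print(U(0.0, 0.5))  # nonhomogeneous side xi = 0: approaches 1
print(U(1.0, 0.5))  # side held at U = 0: zero term by term
```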
2.1.11 Lessons
The methodology for this problem is the same as in Example 1.
Example 2.3. A Steady Conduction Problem in Two Dimensions: Addition of Solutions
We now illustrate a problem in which two of the boundary conditions are nonhomogeneous.
Since the problem and the boundary conditions are both linear we can simply break the problem
into two problems and add them. Consider steady conduction in a square region L by L in size.
Two sides are at temperature u0 while the other two sides are at temperature u1.
uxx + uyy = 0 (2.50)
We need four boundary conditions since the differential equation is of order 2 in both inde-
pendent variables.
u(0, y) = u(L, y) = u0 (2.51)
u(x, 0) = u(x, L) = u1 (2.52)
2.1.12 Scales and Dimensionless Variables
The length scale is L, so we let ξ = x/L and η = y/L. We can make the first two boundary conditions homogeneous while normalizing the second two by defining a dimensionless temperature as

U = (u − u0)/(u1 − u0) (2.53)

Then

Uξξ + Uηη = 0 (2.54)
U(0, η) = U(1, η) = 0 (2.55)
U(ξ, 0) = U(ξ, 1) = 1 (2.56)
2.1.13 Getting to One Nonhomogeneous Condition
There are two nonhomogeneous boundary conditions, so we must find a way to only have one.
Let U = V + W so that we have two problems, each with one nonhomogeneous boundary
condition.

Wξξ + Wηη = 0 (2.57)
W(0, η) = W(1, η) = W(ξ, 0) = 0 (2.58)
W(ξ, 1) = 1

Vξξ + Vηη = 0 (2.59)
V(0, η) = V(1, η) = V(ξ, 1) = 0 (2.60)
V(ξ, 0) = 1

(It should be clear that these two problems are identical if we put V(ξ, η) = W(ξ, 1 − η). We will therefore only need to solve for W.)
2.1.14 Separation of Variables
Separate variables by letting W(ξ, η) = P(ξ)Q(η).

Pξξ/P = −Qηη/Q = ±λ² (2.61)
2.1.15 Choosing the Sign of the Separation Constant
Once again it is not immediately clear whether to choose the plus sign or the minus sign. Let's see what happens if we choose the plus sign.

Pξξ = λ²P (2.62)

The solution is exponentials or hyperbolic functions.

P = A sinh(λξ) + B cosh(λξ) (2.63)

Applying the boundary condition on ξ = 0, we find that B = 0. The boundary condition on ξ = 1 requires that A sinh(λ) = 0, which can only be satisfied if A = 0 or λ = 0, either of which yields a trivial solution, W = 0, and is unacceptable. The only hope for a solution is thus choosing the minus sign.

If we choose the minus sign in Eq. (2.61) then

Pξξ = −λ²P (2.64)

Qηη = λ²Q (2.65)

with solutions

P = A sin(λξ) + B cos(λξ) (2.66)
and

Q = C sinh(λη) + D cosh(λη) (2.67)

respectively. Remembering to apply the homogeneous boundary conditions first, we find that for W(0, η) = 0, B = 0, and for W(1, η) = 0, sin(λ) = 0. Thus λn = nπ, our eigenvalues corresponding to the eigenfunctions sin(nπξ). The last homogeneous boundary condition is W(ξ, 0) = 0, which requires that D = 0. There are an infinite number of solutions of the form

PnQn = Kn sinh(nπη) sin(nπξ) (2.68)
2.1.16 Superposition
Since our problem is linear we apply superposition.

W = Σ_{n=1}^∞ Kn sinh(nπη) sin(nπξ) (2.69)

Applying the final boundary condition, W(ξ, 1) = 1,

1 = Σ_{n=1}^∞ Kn sinh(nπ) sin(nπξ). (2.70)
2.1.17 Orthogonality
Multiplying both sides of Eq. (2.70) by sin(mπξ) and integrating over the interval (0, 1),

∫_0^1 sin(mπξ)dξ = Σ_{n=1}^∞ Kn sinh(nπ) ∫_0^1 sin(nπξ) sin(mπξ)dξ (2.71)

The orthogonality property of the sine eigenfunction states that

∫_0^1 sin(nπξ) sin(mπξ)dξ = 0, m ≠ n
                           = 1/2, m = n (2.72)

Thus, since ∫_0^1 sin(nπξ)dξ = [1 − (−1)ⁿ]/(nπ),

Kn = 2[1 − (−1)ⁿ]/[nπ sinh(nπ)] (2.73)

and

W = Σ_{n=1}^∞ {2[1 − (−1)ⁿ]/[nπ sinh(nπ)]} sinh(nπη) sin(nπξ) (2.74)
Recall that V = W(ξ, 1 − η) and U = V + W.
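The orthogonality step can also be carried out numerically: solving (2.71)-(2.72) for the product Kn sinh(nπ) by a midpoint-rule integral and summing (2.69) should reproduce the boundary value W(ξ, 1) = 1. A sketch (the hyperbolic ratio is evaluated with exponentials to avoid overflow; grid and truncation sizes are illustrative choices):

```python
import math

M = 5000  # midpoint-rule panels for the coefficient integral

def K_sinh(n):
    """K_n * sinh(n*pi), solved numerically from (2.71)-(2.72):
    equals 2 * integral_0^1 sin(n*pi*xi) dxi."""
    s = sum(math.sin(n * math.pi * (j + 0.5) / M) for j in range(M)) / M
    return 2.0 * s

def W(xi, eta, N=80):
    total = 0.0
    for n in range(1, N + 1):
        c = n * math.pi
        # sinh(c*eta)/sinh(c) in overflow-safe exponential form
        ratio = math.exp(-c * (1.0 - eta)) * (1.0 - math.exp(-2.0 * c * eta)) \
                / (1.0 - math.exp(-2.0 * c))
        total += K_sinh(n) * ratio * math.sin(c * xi)
    return total

print(W(0.5, 1.0))  # the nonhomogeneous side eta = 1: approaches 1
print(W(0.5, 0.0))  # the side held at zero
```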
2.1.18 Lessons
If there are two nonhomogeneous boundary conditions break the problem into two problems
that can be added (since the equations are linear) to give the complete solution. If you are
unsure of the sign of the separation constant just assume a sign and move on. Listen to what the
mathematics is telling you. It will always tell you if you choose wrong.
Example 2.4. A Non-homogeneous Heat Conduction Problem
Consider now the arrangement above, but with a heat source, and with both boundaries held at the initial temperature u0. The heat source is initially zero and is turned on at t = 0⁺. The exercise illustrates the method of solving the problem when the single nonhomogeneous condition is in the partial differential equation rather than one of the boundary conditions.

ρc ut = kuxx + q (2.75)

u(0, x) = u0
u(t, 0) = u0 (2.76)
u(t, L) = u0
2.1.19 Scales and Dimensionless Variables
Observe that the length scale is still L, so we define ξ = x/L. Recall that k/ρc = α is the diffusivity. How shall we nondimensionalize temperature? We want as many ones and zeros in coefficients in the partial differential equation and the boundary conditions as possible. Define U = (u − u0)/S, where S stands for "something with dimensions of temperature" that we must find. Dividing both sides of the partial differential equation by q and substituting ξ for x,

(Sρc/q)Ut = [kS/(qL²)]Uξξ + 1 (2.77)

Letting S = qL²/k leads to one as the coefficient of the first term on the right-hand side. Choosing the same dimensionless time as before, τ = αt/L², results in one as the coefficient of
the time derivative term. We now have

Uτ = Uξξ + 1 (2.78)

U(0, ξ) = 0
U(τ, 0) = 0 (2.79)
U(τ, 1) = 0
2.1.20 Relocating the Nonhomogeneity
We have only one nonhomogeneous condition, but it's in the wrong place. The differential equation won't separate. For example if we let U = P(ξ)G(τ) and insert this into the partial differential equation and divide by PG, we find

Gτ(τ)/G = Pξξ(ξ)/P + 1/(PG) (2.80)
The technique for dealing with this is to relocate the nonhomogeneous condition to the initial condition. Assume a solution in the form U = W(ξ) + V(τ, ξ). We now have

Vτ = Vξξ + Wξξ + 1 (2.81)

If we set Wξξ = −1, the differential equation for V becomes homogeneous. We then set both W and V equal to zero at ξ = 0 and 1, and V(0, ξ) = −W(ξ):

Wξξ = −1 (2.82)
W(0) = W(1) = 0 (2.83)

and

Vτ = Vξξ (2.84)
V(0, ξ) = −W(ξ)
V(τ, 0) = 0 (2.85)
V(τ, 1) = 0
The solution for W is parabolic:

W = (1/2)ξ(1 − ξ) (2.86)
2.1.21 Separating Variables
We now solve for V using separation of variables.

V = P(τ)Q(ξ) (2.87)

Pτ/P = Qξξ/Q = ±λ² (2.88)

We must choose the minus sign once again (see Problem 1 above) to have a negative exponential for P(τ). (We will see later that it's not always so obvious.) Then P = exp(−λ²τ).

The solution for Q is once again sines and cosines.

Q = A cos(λξ) + B sin(λξ) (2.89)

The boundary condition V(τ, 0) = 0 requires that Q(0) = 0. Hence, A = 0. The boundary condition V(τ, 1) = 0 requires that Q(1) = 0. Since B cannot be zero, sin(λ) = 0, so that our eigenvalues are λn = nπ and our eigenfunctions are sin(nπξ).
2.1.22 Superposition
Once again using linear superposition,

V = Σ_{n=1}^∞ Bn exp(−n²π²τ) sin(nπξ) (2.90)

Applying the initial condition,

(1/2)ξ(ξ − 1) = Σ_{n=1}^∞ Bn sin(nπξ) (2.91)
This is a Fourier sine series representation of (1/2)ξ(ξ − 1). We now use the orthogonality of the sine function to obtain the coefficients Bn.
2.1.23 Orthogonality
Using the concept of orthogonality again, we multiply both sides by sin(mπξ)dξ and integrate over the space, noting that the integral is zero if m is not equal to n. Thus, since

∫_0^1 sin²(nπξ)dξ = 1/2 (2.92)

Bn = ∫_0^1 ξ(ξ − 1) sin(nπξ)dξ (2.93)
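A numerical check ties the example together (grid and truncation sizes are illustrative choices): evaluating Bn from (2.93) by midpoint rule and assembling U = W + V shows U starting at zero and relaxing to the steady parabola W as τ grows.

```python
import math

def B(n, M=4000):
    # midpoint rule for B_n = integral_0^1 xi*(xi - 1)*sin(n*pi*xi) dxi, Eq. (2.93)
    s = 0.0
    for j in range(M):
        x = (j + 0.5) / M
        s += x * (x - 1.0) * math.sin(n * math.pi * x)
    return s / M

def U(tau, xi, N=40):
    W = 0.5 * xi * (1.0 - xi)  # steady part, Eq. (2.86)
    V = sum(B(n) * math.exp(-n * n * math.pi ** 2 * tau) * math.sin(n * math.pi * xi)
            for n in range(1, N + 1))
    return W + V

print(U(0.0, 0.3))  # ~0: the series cancels W, matching the initial condition
print(U(2.0, 0.3))  # ~0.105: V has decayed, leaving the steady parabola W
```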
2.1.24 Lessons
When the differential equation is nonhomogeneous, use the linearity of the differential equation to transfer the nonhomogeneity to one of the boundary or initial conditions. Usually this will result in a homogeneous partial differential equation and an ordinary differential equation.
We pause here to note that while the method of separation of variables is straightforward
in principle, a certain amount of intuition or, if you wish, cleverness is often required in order
to put the equation and boundary conditions in an appropriate form. The student working
diligently will soon develop these skills.
Problems
1. Using these ideas obtain a series solution to the boundary value problem
ut = uxx
u(t, 1) = 0
u(t, 0) = 0
u(0, x) = 1
2. Find a series solution to the boundary value problem
ut = uxx + x
ux(t, 0) = 0
u(t, 1) = 0
u(0, x) = 0
2.2 VIBRATIONS
In vibrations problems the dependent variable occurs in the differential equation as a second-order derivative with respect to the independent variable t. The methodology is, however, essentially the same as it is in the diffusion equation. We first apply separation of variables, then use the boundary conditions to obtain eigenfunctions and eigenvalues, and use the linearity and orthogonality principles and the single nonhomogeneous condition to obtain a series solution. Once again, if there is more than one nonhomogeneous condition we use the linear superposition principle to obtain solutions for each nonhomogeneous condition and add the resulting solutions. We
illustrate these ideas with several examples.
Example 2.5. A Vibrating String
Consider a string of length L fixed at the ends. The string is initially held in a fixed position
y(0, x) = f (x), where it is clear that f (x) must be zero at both x = 0 and x = L. The boundary
value problem is as follows:

ytt = a²yxx (2.94)

y(t, 0) = 0
y(t, L) = 0 (2.95)
y(0, x) = f(x)
yt(0, x) = 0
2.2.1 Scales and Dimensionless Variables
The problem has the obvious length scale L. Hence let ξ = x/L. Now let τ = ta/L and the equation becomes

yττ = yξξ (2.96)

One could now nondimensionalize y, for example by defining a new variable as y/fmax, where fmax is the maximum of f(x), but it wouldn't simplify things. The boundary conditions remain the same except t and x are replaced by τ and ξ.
2.2.2 Separation of Variables
You know the dance. Let y = P(τ)Q(ξ). Differentiating and substituting into Eq. (2.96),

PττQ = PQξξ (2.97)

Dividing by PQ and noting that Pττ/P and Qξξ/Q cannot be equal to one another unless they are both constants, we find

Pττ/P = Qξξ/Q = ±λ² (2.98)

It should be physically clear that we want the minus sign. Otherwise both solutions will be hyperbolic functions. However if you choose the plus sign you will immediately find that the boundary conditions on ξ cannot be satisfied. Refer back to (2.63) and the sentences following.

The two ordinary differential equations and homogeneous boundary conditions are

Pττ + λ²P = 0 (2.99)
Pτ(0) = 0
and

Qξξ + λ²Q = 0 (2.100)
Q(0) = 0
Q(1) = 0

The solutions are

P = A sin(λτ) + B cos(λτ) (2.101)
Q = C sin(λξ) + D cos(λξ) (2.102)

The first boundary condition of Eq. (2.100) requires that D = 0. The second requires that C sin(λ) be zero. Our eigenvalues are again λn = nπ. The boundary condition at τ = 0, that Pτ = 0, requires that A = 0. Thus

PnQn = Kn sin(nπξ) cos(nπτ) (2.103)

The final form of the solution is then

y(τ, ξ) = Σ_{n=1}^∞ Kn sin(nπξ) cos(nπτ) (2.104)
2.2.3 Orthogonality
Applying the final (nonhomogeneous) boundary condition (the initial position),

f(ξ) = Σ_{n=1}^∞ Kn sin(nπξ) (2.105)

In particular, if

f(x) = hx, 0 < x < 1/2
     = h(1 − x), 1/2 < x < 1 (2.106)

then

∫_0^1 f(x) sin(nπx)dx = ∫_0^{1/2} hx sin(nπx)dx + ∫_{1/2}^1 h(1 − x) sin(nπx)dx = (2h/n²π²) sin(nπ/2) (2.107)

and

∫_0^1 Kn sin²(nπx)dx = Kn/2 (2.108)
so that

y = (4h/π²) Σ_{n=1}^∞ [sin(nπ/2)/n²] sin(nπξ) cos(nπτ) (2.109)

Note that sin(nπ/2) is zero for even n and alternates in sign for odd n, so only the odd harmonics appear.
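With Kn = (4h/n²π²) sin(nπ/2) from (2.107)-(2.108), a partial sum reproduces the triangular initial shape at τ = 0 and is periodic in τ with period 2, since every cos(nπτ) is. A sketch (h and the truncation level are illustrative choices):

```python
import math

h = 1.0  # peak displacement of the initial triangle (2.106), illustrative

def y(tau, xi, N=2001):
    total = 0.0
    for n in range(1, N + 1):
        Kn = 4.0 * h / (math.pi ** 2 * n * n) * math.sin(n * math.pi / 2.0)
        total += Kn * math.sin(n * math.pi * xi) * math.cos(n * math.pi * tau)
    return total

print(y(0.0, 0.5))                     # ~h/2: the initial peak
print(y(0.0, 0.25))                    # ~h/4: on the straight segment y = h*x
print(abs(y(0.7, 0.3) - y(2.7, 0.3)))  # ~0: period 2 in tau
```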
2.2.4 Lessons
The solutions are in the form of infinite series. The coefficients of the terms of the series
are determined by using the fact that the solutions of at least one of the ordinary differential
equations are orthogonal functions. The orthogonality condition allows us to calculate these
coefficients.
Problem
1. Solve the boundary value problem
utt = uxx
u(t, 0) = u(t, 1) = 0
u(0, x) = 0
ut(0, x) = f (x)
Find the special case when f (x) = sin(πx).
FURTHER READING
V. Arpaci, Conduction Heat Transfer. Reading, MA: Addison-Wesley, 1966.
J. W. Brown and R. V. Churchill, Fourier Series and Boundary Value Problems. 6th edition. New
York: McGraw-Hill, 2001.
C H A P T E R 3
Orthogonal Sets of Functions
In this chapter we elaborate on the concepts of orthogonality and Fourier series. We begin
with the familiar concept of orthogonality of vectors. We then extend the idea to orthogonality
of functions and the use of this idea to represent general functions as Fourier series—series of
orthogonal functions.
Next we show that solutions of a fairly general linear ordinary differential equation—the
Sturm–Liouville equation—are orthogonal functions. Several examples are given.
3.1 VECTORS
We begin our study of orthogonality with the familiar topic of orthogonal vectors. Suppose u(1), u(2), and u(3) are the three rectangular components of a vector u in an ordinary three-dimensional space. The norm of the vector (its length) ||u|| is

||u|| = [u(1)² + u(2)² + u(3)²]^{1/2} (3.1)

If ||u|| = 1, u is said to be normalized. If ||u|| = 0, u(r) = 0 for each r and u is the zero vector. A linear combination of two vectors u1 and u2 is

u = c1u1 + c2u2 (3.2)

The scalar or inner product of the two vectors u1 and u2 is defined as

(u1, u2) = Σ_{r=1}^3 u1(r)u2(r) = ||u1|| ||u2|| cos θ (3.3)
3.1.1 Orthogonality of Vectors
If neither u1 nor u2 is the zero vector and if

(u1, u2) = 0 (3.4)

then θ = π/2 and the vectors are orthogonal. The norm of a vector u is

||u|| = (u, u)^{1/2} (3.5)
3.1.2 Orthonormal Sets of Vectors
The vector Φn = un/||un|| has magnitude unity, and if u1 and u2 are orthogonal then Φ1 and Φ2 are orthonormal and their inner product is

(Φn, Φm) = δnm = 0, m ≠ n
          = 1, m = n (3.6)

where δnm is called the Kronecker delta.

If Φ1, Φ2, and Φ3 are three vectors that are mutually orthogonal to each other then every vector in three-dimensional space can be written as a linear combination of Φ1, Φ2, and Φ3; that is,

f = c1Φ1 + c2Φ2 + c3Φ3 (3.7)

Note that due to the fact that the vectors Φn form an orthonormal set,

(f, Φ1) = c1, (f, Φ2) = c2, (f, Φ3) = c3 (3.8)

Simply put, suppose the vector f is

f = 2Φ1 + 4Φ2 + Φ3. (3.9)

Taking the inner product of f with Φ1 we find that

(f, Φ1) = 2(Φ1, Φ1) + 4(Φ1, Φ2) + (Φ1, Φ3) (3.10)

and according to Eq. (3.8), c1 = 2. Similarly, c2 = 4 and c3 = 1.
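The coefficient-recovery argument of (3.8)-(3.10) can be mirrored with ordinary component arithmetic, the standard basis standing in for the orthonormal set (a toy sketch, not from the text):

```python
def dot(u, v):
    # inner product as in Eq. (3.3)
    return sum(a * b for a, b in zip(u, v))

# an orthonormal basis playing the role of Phi1, Phi2, Phi3
phi = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
f = (2, 4, 1)  # f = 2*Phi1 + 4*Phi2 + Phi3

coeffs = [dot(f, p) for p in phi]
print(coeffs)  # [2, 4, 1]: the inner products recover the coefficients
```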
3.2 FUNCTIONS
3.2.1 Orthonormal Sets of Functions and Fourier Series
Suppose there is a set of orthonormal functions Φn(x) defined on an interval a < x < b (√2 sin(nπx) on the interval 0 < x < 1 is an example). A set of orthonormal functions is defined as one whose inner product, defined as ∫_{x=a}^b Φn(x)Φm(x)dx, is

(Φn, Φm) = ∫_{x=a}^b ΦnΦm dx = δnm (3.11)

Suppose we can express a function as an infinite series of these orthonormal functions,

f(x) = Σ_{n=0}^∞ cnΦn on a < x < b (3.12)

Equation (3.12) is called a Fourier series of f(x) in terms of the orthonormal function set Φn(x).
If we now form the inner product of Φm with both sides of Eq. (3.12) and use the definition of an orthonormal function set as stated in Eq. (3.11), we see that the inner product of f(x) and Φn(x) is cn:

cn ∫_{x=a}^b Φn²(ξ)dξ = cn = ∫_{x=a}^b f(ξ)Φn(ξ)dξ (3.13)
In particular, consider a set of functions φ_n that are orthogonal on the interval (a, b), so that

∫_{x=a}^{b} φ_n(ξ) φ_m(ξ) dξ = 0,        m ≠ n    (3.14)
                             = ||φ_n||^2,  m = n

where ||φ_n||^2 = ∫_{x=a}^{b} φ_n^2(ξ) dξ is called the square of the norm of φ_n. The functions

φ_n/||φ_n|| = Φ_n    (3.15)

then form an orthonormal set. We now show how to form the series representation of the function f(x) as a series expansion in terms of the orthogonal (but not orthonormal) set of functions φ_n(x).
f(x) = Σ_{n=0}^{∞} [φ_n(x)/||φ_n||] ∫_{ξ=a}^{b} f(ξ) [φ_n(ξ)/||φ_n||] dξ = Σ_{n=0}^{∞} φ_n(x) ∫_{ξ=a}^{b} f(ξ) φ_n(ξ)/||φ_n||^2 dξ    (3.16)

This is called a Fourier series representation of the function f(x).
As a concrete example, the square of the norm of the sine function on the interval (0, π) is

||sin(nx)||^2 = ∫_{ξ=0}^{π} sin^2(nξ) dξ = π/2    (3.17)

so that the corresponding orthonormal function is

Φ_n = √(2/π) sin(nx)    (3.18)

A function can be represented by a series of sine functions on the interval (0, π) as

f(x) = Σ_{n=0}^{∞} sin(nx) (2/π) ∫_{ς=0}^{π} sin(nς) f(ς) dς    (3.19)

This is a Fourier sine series.
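The coefficient formula in Eq. (3.19) is easy to check by machine. The sketch below is an illustration added to this edition of the text, not part of the original; the function names are arbitrary. It approximates f(x) = x on (0, π) by its Fourier sine series, computing each coefficient with simple midpoint-rule integration:

```python
import math

def sine_coeff(f, n, steps=2000):
    """c_n = (2/pi) * integral_0^pi f(s) sin(n s) ds, midpoint rule."""
    h = math.pi / steps
    total = sum(f((k + 0.5) * h) * math.sin(n * (k + 0.5) * h) for k in range(steps))
    return (2.0 / math.pi) * total * h

def sine_series(f, x, terms=50):
    """Partial sum of the Fourier sine series of f at x, per Eq. (3.19)."""
    return sum(sine_coeff(f, n) * math.sin(n * x) for n in range(1, terms + 1))

f = lambda s: s
print(sine_coeff(f, 1))          # analytically c_1 = 2 for f(x) = x
print(sine_series(f, 1.0))       # close to f(1.0) = 1.0
```

Fifty terms already reproduce f at interior points to within a few percent; convergence near the endpoint x = π is slower, as discussed below for the analogous series on (0, 1).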
3.2.2 Best Approximation
Since we can never sum to infinity, we next ask whether the values of the constants c_n in Eq. (3.13) give the most accurate approximation of the function when the series is truncated. To illustrate the idea we return to the idea of orthogonal vectors in three-dimensional space. Suppose we want to approximate a three-dimensional vector with a two-dimensional vector. What should the components of the two-dimensional vector be to best approximate the three-dimensional vector?

Let the three-dimensional vector be f = c_1 e_1 + c_2 e_2 + c_3 e_3. Let the two-dimensional vector be k = a_1 e_1 + a_2 e_2. We wish to minimize ||k − f||:

||k − f|| = [(a_1 − c_1)^2 + (a_2 − c_2)^2 + c_3^2]^{1/2}    (3.20)

It is clear from the above equation (and also from Fig. 3.1) that this will be minimized when a_1 = c_1 and a_2 = c_2.
Turning now to the orthogonal function series, we attempt to minimize the difference between the function (represented by an infinite number of terms) and a summation to some finite number of terms m. The square of the error is

E^2 = ∫_{x=a}^{b} (f(x) − K_m(x))^2 dx = ∫_{x=a}^{b} [f^2(x) + K_m^2(x) − 2 f(x)K_m(x)] dx    (3.21)

where

f(x) = Σ_{n=1}^{∞} c_n Φ_n(x)    (3.22)

and

K_m = Σ_{n=1}^{m} a_n Φ_n(x)    (3.23)
FIGURE 3.1: Best approximation of a three-dimensional vector in two dimensions
Noting that

∫_{x=a}^{b} K_m^2(x) dx = Σ_{n=1}^{m} Σ_{j=1}^{m} a_n a_j ∫_{x=a}^{b} Φ_n(x)Φ_j(x) dx = Σ_{n=1}^{m} a_n^2 = a_1^2 + a_2^2 + a_3^2 + ··· + a_m^2    (3.24)

and

∫_{x=a}^{b} f(x)K_m(x) dx = Σ_{n=1}^{∞} Σ_{j=1}^{m} c_n a_j ∫_{x=a}^{b} Φ_n(x)Φ_j(x) dx = Σ_{n=1}^{m} c_n a_n = c_1 a_1 + c_2 a_2 + ··· + c_m a_m    (3.25)

we find

E^2 = ∫_{x=a}^{b} f^2(x) dx + a_1^2 + ··· + a_m^2 − 2a_1 c_1 − ··· − 2a_m c_m    (3.26)

Now add and subtract c_1^2, c_2^2, . . . , c_m^2. Thus Eq. (3.26) becomes

E^2 = ∫_{x=a}^{b} f^2(x) dx − c_1^2 − c_2^2 − ··· − c_m^2 + (a_1 − c_1)^2 + (a_2 − c_2)^2 + ··· + (a_m − c_m)^2    (3.27)

which is clearly minimized when a_n = c_n.
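Equation (3.27) says that any choice a_n ≠ c_n can only increase the error. A small numerical experiment, added here for illustration (it is not part of the original text), confirms this for f(x) = x on (0, π) using the orthonormal functions of Eq. (3.18):

```python
import math

def phi(n, x):
    # Orthonormal sine functions on (0, pi), Eq. (3.18)
    return math.sqrt(2.0 / math.pi) * math.sin(n * x)

def sq_error(f, coeffs, steps=2000):
    """E^2 = integral over (0, pi) of (f - K_m)^2, midpoint rule (Eq. 3.21)."""
    h = math.pi / steps
    total = 0.0
    for k in range(steps):
        x = (k + 0.5) * h
        km = sum(a * phi(n + 1, x) for n, a in enumerate(coeffs))
        total += (f(x) - km) ** 2
    return total * h

f = lambda s: s
h = math.pi / 2000
# Fourier coefficients c_n = (f, phi_n), computed by midpoint rule
c = [sum(f((k + 0.5) * h) * phi(n, (k + 0.5) * h) for k in range(2000)) * h
     for n in range(1, 6)]
best = sq_error(f, c)
perturbed = sq_error(f, [a + 0.1 for a in c])
print(best < perturbed)   # the Fourier coefficients give the smaller error
```

By Eq. (3.27), shifting each of the five coefficients by 0.1 must raise E^2 by exactly 5 × (0.1)^2 = 0.05, which is what the experiment shows.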
3.2.3 Convergence of Fourier Series
We briefly consider the question of whether the Fourier series actually converges to the function
f(x) for all values of x on the interval a ≤ x ≤ b. The series will converge to the function if
the value of E defined in Eq. (3.21) approaches zero as m approaches infinity. Suffice it to say that
this is true for functions that are continuous with piecewise continuous first derivatives, that
is, most physically realistic temperature distributions, displacements of vibrating strings and
bars. In each particular situation, however, one should use the various convergence theorems
that are presented in most elementary calculus books. Uniform convergence of Fourier series
is discussed extensively in the book Fourier Series and Boundary Value Problems by James Ward
Brown and R. V. Churchill. In this chapter we give only a few physically clear examples.
3.2.4 Examples of Fourier Series
Example 3.1. Determine a Fourier sine series representation of f(x) = x on the interval (0, 1). The series will take the form

x = Σ_{j=1}^{∞} c_j sin(jπx)    (3.28)

Since the functions sin(jπx) form an orthogonal set on (0, 1), multiply both sides by sin(kπx) dx and integrate over the interval on which the functions are orthogonal:

∫_{x=0}^{1} x sin(kπx) dx = Σ_{j=1}^{∞} ∫_{x=0}^{1} c_j sin(jπx) sin(kπx) dx    (3.29)

Noting that all of the terms on the right-hand side of (3.29) are zero except the one for which k = j,

∫_{x=0}^{1} x sin(jπx) dx = c_j ∫_{x=0}^{1} sin^2(jπx) dx    (3.30)

After integrating we find

(−1)^{j+1}/(jπ) = c_j/2    (3.31)

Thus,

x = Σ_{j=1}^{∞} [2(−1)^{j+1}/(jπ)] sin(jπx)    (3.32)

This is an alternating sign series in which the coefficients always decrease as j increases, and it therefore converges. The sine function is periodic, and so the series must also be a periodic function beyond the interval (0, 1). The series outside this interval forms the periodic continuation of the series. Note that the sine is an odd function, so that sin(jπx) = −sin(−jπx). Thus the periodic continuation looks like Fig. 3.2. The series converges everywhere, but at x = 1 it converges to zero instead of one; arbitrarily close to x = 1 it converges to values arbitrarily close to 1.
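This endpoint behavior is easy to see numerically. The short sketch below is an illustrative addition (not from the book): it sums Eq. (3.32) and shows convergence inside the interval but to zero at x = 1, where every term of the series vanishes:

```python
import math

def sine_series_x(x, terms):
    """Partial sum of Eq. (3.32): x ~ sum of 2(-1)^(j+1)/(j*pi) * sin(j*pi*x)."""
    return sum(2 * (-1) ** (j + 1) / (j * math.pi) * math.sin(j * math.pi * x)
               for j in range(1, terms + 1))

print(sine_series_x(0.5, 200))   # close to 0.5
print(sine_series_x(1.0, 200))   # essentially 0, not 1: sin(j*pi) = 0 for every j
```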
FIGURE 3.2: The periodic continuation of the function x represented by the sine series
Example 3.2. Find a Fourier cosine series for f(x) = x on the interval (0, 1). In this case

x = Σ_{n=0}^{∞} c_n cos(nπx)    (3.33)

Multiply both sides by cos(mπx) dx and integrate over (0, 1):

∫_{x=0}^{1} x cos(mπx) dx = Σ_{n=0}^{∞} c_n ∫_{x=0}^{1} cos(mπx) cos(nπx) dx    (3.34)

and noting that the functions cos(nπx) form an orthogonal set on (0, 1), all terms in (3.34) are zero except the one for which n = m. Evaluating the integrals,

c_n/2 = [(−1)^n − 1]/(nπ)^2    (3.35)

There is a problem when n = 0: both the numerator and the denominator are zero there. However, we can evaluate c_0 by setting n = m = 0 in Eq. (3.34), which gives

∫_{x=0}^{1} x dx = c_0 = 1/2    (3.36)

and the cosine series is therefore

x = 1/2 + Σ_{n=1}^{∞} [2((−1)^n − 1)/(nπ)^2] cos(nπx)    (3.37)

The series converges to x everywhere. Since cos(nπx) = cos(−nπx), it is an even function and its periodic continuation is shown in Fig. 3.3. Note that the sine series is discontinuous at x = 1, while the cosine series is continuous everywhere. (Which is the better representation?)
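The contrast with the sine series shows up clearly in a numerical partial sum. This sketch is an illustration added to the text: unlike the sine series, the cosine series of Eq. (3.37) converges to 1 at the endpoint x = 1:

```python
import math

def cosine_series_x(x, terms):
    """Partial sum of Eq. (3.37) for f(x) = x on (0, 1)."""
    s = 0.5
    for n in range(1, terms + 1):
        s += 2 * ((-1) ** n - 1) / (n * math.pi) ** 2 * math.cos(n * math.pi * x)
    return s

print(cosine_series_x(0.5, 100))  # close to 0.5
print(cosine_series_x(1.0, 100))  # close to 1.0: continuous at the endpoint
```

The coefficients here decay like 1/n^2 rather than 1/n, which is why the cosine representation converges faster and remains continuous at x = 1.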
FIGURE 3.3: The periodic continuation of the series in Example 3.2
It should be clear from the above examples that in general a Fourier sine/cosine series of a function f(x) defined on 0 ≤ x ≤ 1 can be written as

f(x) = c_0 + Σ_{n=1}^{∞} c_n cos(nπx) + Σ_{n=1}^{∞} b_n sin(nπx)    (3.38)

where

c_n = [∫_{x=0}^{1} f(x) cos(nπx) dx] / [∫_{x=0}^{1} cos^2(nπx) dx],  n = 0, 1, 2, 3, . . .

b_n = [∫_{x=0}^{1} f(x) sin(nπx) dx] / [∫_{x=0}^{1} sin^2(nπx) dx],  n = 1, 2, 3, . . .    (3.39)
Problems
1. Show that

∫_{x=0}^{π} sin(nx) sin(mx) dx = 0

when n ≠ m.
2. Find the Fourier sine series for f (x) = 1 − x on the interval (0, 1). Sketch the periodic
continuation. Sum the series for the first five terms and sketch over two periods. Discuss
convergence of the series, paying special attention to convergence at x = 0 and x = 1.
3. Find the Fourier cosine series for 1 − x on (0, 1). Sketch the periodic continuation.
Sum the first two terms and sketch. Sum the first five terms and sketch over two periods.
Discuss convergence, paying special attention to convergence at x = 0 and x = 1.
3.3 STURM–LIOUVILLE PROBLEMS: ORTHOGONAL
FUNCTIONS
We now proceed to show that solutions of a certain ordinary differential equation with certain
boundary conditions (called a Sturm–Liouville problem) are orthogonal functions with respect to a
weighting function, and that therefore a well-behaved function can be represented by an infinite
series of these orthogonal functions (called eigenfunctions), as in Eqs. (3.12) and (3.16).
Recall that the problem

X_xx + λ^2 X = 0,  X(0) = 0,  X(1) = 0,  0 ≤ x ≤ 1    (3.40)

has solutions only for λ = nπ, and that the solutions sin(nπx) are orthogonal on the interval (0, 1). The sine functions are called eigenfunctions and the values λ = nπ are called eigenvalues.
As another example, consider the problem

X_xx + λ^2 X = 0    (3.41)

with boundary conditions

X(0) = 0
X(1) + H X_x(1) = 0    (3.42)

The solution of the differential equation is

X = A sin(λx) + B cos(λx)    (3.43)

The first boundary condition guarantees that B = 0. The second boundary condition is satisfied by the equation

A[sin(λ) + Hλ cos(λ)] = 0    (3.44)

Since A cannot be zero, this implies that

−tan(λ) = Hλ    (3.45)

The eigenfunctions are sin(λx) and the eigenvalues are solutions of Eq. (3.45). This is illustrated graphically in Fig. 3.4.
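Equation (3.45) is transcendental and must be solved numerically for the eigenvalues. Since Fig. 3.4 shows one root in each interval between branches of the tangent, a simple bracketing method suffices. The sketch below is illustrative (the helper name and the choice H = 1 are mine, not the book's) and uses bisection on the equivalent form sin(λ) + Hλ cos(λ) = 0:

```python
import math

def eigenvalue(H, lo, hi, tol=1e-12):
    """Find a root of f(lam) = sin(lam) + H*lam*cos(lam) = 0 by bisection.

    The interval [lo, hi] must bracket exactly one root of Eq. (3.45)."""
    f = lambda lam: math.sin(lam) + H * lam * math.cos(lam)
    assert f(lo) * f(hi) < 0, "interval must bracket a root"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# For H = 1 the first eigenvalue lies between pi/2 and pi:
lam1 = eigenvalue(1.0, math.pi / 2, math.pi)
print(lam1)   # roughly 2.0288
```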
We will generally be interested in the fairly general linear second-order differential equation and boundary conditions given in Eqs. (3.46) and (3.47):

d/dx [r(x) dX/dx] + [q(x) + λp(x)]X = 0,  a ≤ x ≤ b    (3.46)

a_1 X(a) + a_2 dX(a)/dx = 0
b_1 X(b) + b_2 dX(b)/dx = 0    (3.47)
FIGURE 3.4: Eigenvalues of − tan(λ) = Hλ
Solutions exist only for discrete values λ_n, the eigenvalues. The corresponding solutions X_n(x) are the eigenfunctions.
3.3.1 Orthogonality of Eigenfunctions
Consider two solutions of (3.46) and (3.47), X_n and X_m, corresponding to eigenvalues λ_n and λ_m. The primes denote differentiation with respect to x.

(r X_m′)′ + q X_m = −λ_m p X_m    (3.48)

(r X_n′)′ + q X_n = −λ_n p X_n    (3.49)

Multiply the first by X_n and the second by X_m and subtract, obtaining the following:

[r(X_n X_m′ − X_m X_n′)]′ = (λ_n − λ_m) p X_m X_n    (3.50)

Integrating both sides,

[r(X_n X_m′ − X_m X_n′)]_a^b = (λ_n − λ_m) ∫_{a}^{b} p(x) X_n X_m dx    (3.51)

Inserting the boundary conditions into the left-hand side of (3.51),

r(b)[X_n(b)X_m′(b) − X_m(b)X_n′(b)] − r(a)[X_n(a)X_m′(a) − X_m(a)X_n′(a)]
= −(b_1/b_2) r(b)[X_n(b)X_m(b) − X_m(b)X_n(b)] + (a_1/a_2) r(a)[X_n(a)X_m(a) − X_m(a)X_n(a)] = 0    (3.52)
Thus

(λ_n − λ_m) ∫_{a}^{b} p(x) X_n X_m dx = 0,  m ≠ n    (3.53)

Since λ_n ≠ λ_m when m ≠ n, the integral itself must vanish: X_m and X_n are orthogonal with respect to the weighting function p(x) on the interval (a, b). Obvious examples are the sine and cosine functions.
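Orthogonality can be verified numerically for the eigenfunctions sin(λ_n x) of the problem in Eqs. (3.41) and (3.42), whose eigenvalues satisfy Eq. (3.44). The sketch below is an illustration (H = 1 and p(x) = 1 are my choices, not the book's): it locates the first two eigenvalues by bisection and then computes the inner product of the corresponding eigenfunctions:

```python
import math

def eig(H, lo, hi, tol=1e-12):
    # Bisection for sin(lam) + H*lam*cos(lam) = 0 on a bracketing interval
    f = lambda lam: math.sin(lam) + H * lam * math.cos(lam)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if f(lo) * f(mid) <= 0 else (mid, hi)
    return 0.5 * (lo + hi)

lam1 = eig(1.0, math.pi / 2, math.pi)          # first eigenvalue
lam2 = eig(1.0, 3 * math.pi / 2, 2 * math.pi)  # second eigenvalue

# integral of sin(lam1*x)*sin(lam2*x) on (0, 1), midpoint rule; weight p(x) = 1
steps = 20000
h = 1.0 / steps
integral = sum(math.sin(lam1 * (k + 0.5) * h) * math.sin(lam2 * (k + 0.5) * h)
               for k in range(steps)) * h
print(integral)   # essentially zero, as Eq. (3.53) requires
```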
Example 3.3. Example 2.1 in Chapter 2 is an example in which the eigenfunctions are sin(λnξ)
and the eigenvalues are (2n − 1)π/2.
Example 3.4. If the boundary conditions in Example 2.1 in Chapter 2 are changed to

Φ_ξ(0) = 0,  Φ(1) = 0    (3.54)

we note that the general solution of the differential equation is

Φ(ξ) = A cos(λξ) + B sin(λξ)    (3.55)

The boundary conditions require that B = 0 and cos(λ) = 0. The values of λ can thus take on any of the values π/2, 3π/2, 5π/2, . . . , (2n − 1)π/2. The eigenfunctions are cos(λ_n ξ) and the eigenvalues are λ_n = (2n − 1)π/2.
Example 3.5. Suppose the boundary conditions in the original problem (Example 2.1, Chapter 2) take on the more complicated form

Φ(0) = 0,  Φ(1) + hΦ_ξ(1) = 0    (3.56)

The first boundary condition requires that B = 0. The second boundary condition requires that

sin(λ_n) + hλ_n cos(λ_n) = 0, or    (3.57)

λ_n = −(1/h) tan(λ_n)    (3.58)

which is a transcendental equation that must be solved for the eigenvalues. The eigenfunctions are, of course, sin(λ_n ξ).
Example 3.6. A Physical Example: Heat Conduction in Cylindrical Coordinates
The heat conduction equation in cylindrical coordinates is

∂u/∂t = ∂^2u/∂r^2 + (1/r) ∂u/∂r,  0 < r < 1    (3.59)

with boundary conditions at r = 0 and r = 1 and initial condition u(0, r) = f(r).
Separating variables as u = R(r)T(t),

(1/T) dT/dt = (1/R) d^2R/dr^2 + (1/rR) dR/dr = −λ^2,  0 ≤ r ≤ 1,  0 ≤ t    (3.60)

(Why the minus sign?)
The equation for R(r) is

(r R′)′ + λ^2 r R = 0    (3.61)

which is a Sturm–Liouville equation with weighting function r. It is an eigenvalue problem with an infinite number of eigenfunctions corresponding to the eigenvalues λ_n. There will be two solutions, R_1(λ_n r) and R_2(λ_n r), for each λ_n. The solutions are called Bessel functions, and they will be discussed in Chapter 4.

R_n(λ_n r) = A_n R_1(λ_n r) + B_n R_2(λ_n r)    (3.62)

The boundary conditions on r are used to determine a relation between the constants A_n and B_n. For solutions R(λ_n r) and R(λ_m r),

∫_{0}^{1} r R(λ_n r) R(λ_m r) dr = 0,  n ≠ m    (3.63)

is the orthogonality condition.
The solution for T(t) is the exponential e^{−λ_n^2 t} for all n. Thus, the solution of (3.60), because of superposition, can be written as an infinite series in a form something like

u = Σ_{n=0}^{∞} K_n e^{−λ_n^2 t} R(λ_n r)    (3.64)

and the orthogonality condition is used to find K_n as

K_n = [∫_{r=0}^{1} f(r) R(λ_n r) r dr] / [∫_{r=0}^{1} R^2(λ_n r) r dr]    (3.65)
Problems
1. For Example 2.1 in Chapter 2 with the new boundary conditions described in Example 3.4 above, find K_n and write the infinite series solution to the revised problem.
FURTHER READING
J. W. Brown and R. V. Churchill, Fourier Series and Boundary Value Problems, 6th edition. New
York: McGraw-Hill, 2001.
P. V. O'Neil, Advanced Engineering Mathematics, 5th edition. Pacific Grove, CA: Brooks/Cole Thomson, 2003.
C H A P T E R 4
Series Solutions of Ordinary
Differential Equations
4.1 GENERAL SERIES SOLUTIONS
The purpose of this chapter is to present a method of obtaining solutions of linear second-order
ordinary differential equations in the form of Taylor series. The methodology is then used to
obtain solutions of two special differential equations, Bessel’s equation and Legendre’s equation.
Properties of the solutions—Bessel functions and Legendre functions—which are extensively
used in solving problems in mathematical physics, are discussed briefly. Bessel functions are
used in solving both diffusion and vibrations problems in cylindrical coordinates. The functions
R(λ_n r) in Example 3.6 at the end of Chapter 3 are called Bessel functions. Legendre functions
are useful in solving problems in spherical coordinates. Associated Legendre functions, also
useful in solving problems in spherical coordinates, are briefly discussed.
4.1.1 Definitions
In this chapter we will be concerned with linear second-order equations. A general case is
a(x)u″ + b(x)u′ + c(x)u = f(x)    (4.1)

Division by a(x) gives

u″ + p(x)u′ + q(x)u = r(x)    (4.2)
Recall that if r(x) is zero the equation is homogeneous. The solution can be written as the sum of a homogeneous solution u_h(x) and a particular solution u_p(x). If r(x) is zero, u_p = 0. The nature
of the solution and the solution method depend on the nature of the coefficients p(x) and q(x).
If each of these functions can be expanded in a Taylor series about a point x0 the point is said
to be an ordinary point and the function is analytic at that point. If either of the coefficients is
not analytic at x0, the point is a singular point. If x0 is a singular point and if (x − x0)p(x) and
(x − x0)2
q(x) are analytic, then the singularities are said to be removable and the singular point
is a regular singular point. If this is not the case the singular point is irregular.
4.1.2 Ordinary Points and Series Solutions
If the point x0 is an ordinary point the dependent variable has a solution in the neighborhood
of x0 of the form
u(x) = Σ_{n=0}^{∞} c_n (x − x_0)^n    (4.3)
We now illustrate the solution method with two examples.
Example 4.1. Find a series solution in the form of Eq. (4.3) about the point x = 0 of the differential equation

u″ + x^2 u = 0    (4.4)

The point x = 0 is an ordinary point, so at least near x = 0 there is a solution in the form of the above series. Differentiating (4.3) twice and inserting it into (4.4),

u′ = Σ_{n=0}^{∞} n c_n x^{n−1}

u″ = Σ_{n=0}^{∞} n(n − 1) c_n x^{n−2}

Σ_{n=0}^{∞} n(n − 1) c_n x^{n−2} + Σ_{n=0}^{∞} c_n x^{n+2} = 0    (4.5)

Note that the first term in the u′ series is zero, while the first two terms in the u″ series are zero. We can shift the indices in both summations so that the power of x is the same in both series by setting n − 2 = m in the first series:

Σ_{n=0}^{∞} n(n − 1) c_n x^{n−2} = Σ_{m=−2}^{∞} (m + 2)(m + 1) c_{m+2} x^m = Σ_{m=0}^{∞} (m + 2)(m + 1) c_{m+2} x^m    (4.6)

Noting that m is a "dummy variable" and that the first two terms in the series are zero, the series can be written as

Σ_{n=0}^{∞} (n + 2)(n + 1) c_{n+2} x^n    (4.7)

In a similar way we can write the second term as

Σ_{n=0}^{∞} c_n x^{n+2} = Σ_{n=2}^{∞} c_{n−2} x^n    (4.8)
We now have

Σ_{n=0}^{∞} (n + 2)(n + 1) c_{n+2} x^n + Σ_{n=2}^{∞} c_{n−2} x^n = 0    (4.9)

which can be written as

2c_2 + 6c_3 x + Σ_{n=2}^{∞} [(n + 2)(n + 1) c_{n+2} + c_{n−2}] x^n = 0    (4.10)

Each coefficient of x^n must be zero in order to satisfy Eq. (4.10). Thus c_2 and c_3 must be zero and

c_{n+2} = −c_{n−2}/(n + 2)(n + 1)    (4.11)

while c_0 and c_1 remain arbitrary.
Setting n = 2, we find that c_4 = −c_0/12, and setting n = 3, c_5 = −c_1/20. Since c_2 and c_3 are zero, so are c_6, c_7, c_10, c_11, etc. Also, c_8 = −c_4/(8)(7) = c_0/(4)(3)(8)(7) and c_9 = −c_5/(9)(8) = c_1/(5)(4)(9)(8).
The first few terms of the series are

u(x) = c_0(1 − x^4/12 + x^8/672 + ···) + c_1(x − x^5/20 + x^9/1440 + ···)    (4.12)

Both series alternate in sign with each term smaller than the previous term, at least for x ≤ 1, so the solution converges at least under these conditions.
The constants c_0 and c_1 can be determined from boundary conditions. For example, if u(0) = 0 then c_0 = 0, and if in addition u(1) = 1,

c_1[1 − 1/20 + 1/1440 − ···] = 1

determines c_1.
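The recurrence (4.11) is easily checked by machine. The sketch below is an illustration added to the text; it regenerates the coefficients of Eq. (4.12) in exact rational arithmetic and verifies that the truncated series very nearly satisfies u″ + x^2 u = 0:

```python
from fractions import Fraction

# Coefficients of u = sum of c_n x^n for u'' + x^2 u = 0, from Eq. (4.11):
# c_{n+2} = -c_{n-2} / ((n+2)(n+1)) for n >= 2, with c_2 = c_3 = 0.
def coefficients(c0, c1, N):
    c = [Fraction(c0), Fraction(c1), Fraction(0), Fraction(0)] + [Fraction(0)] * (N - 3)
    for n in range(2, N - 1):
        c[n + 2] = -c[n - 2] / ((n + 2) * (n + 1))
    return c

c = coefficients(1, 0, 12)           # the c0 series of Eq. (4.12)
print(c[4], c[8])                    # -1/12 and 1/672, as in Eq. (4.12)

# Residual check: u'' + x^2 u, summed term by term at x = 1/2
x = Fraction(1, 2)
u = sum(cn * x**n for n, cn in enumerate(c))
upp = sum(n * (n - 1) * cn * x**(n - 2) for n, cn in enumerate(c) if n >= 2)
print(abs(upp + x**2 * u) < Fraction(1, 10**6))   # only truncation error remains
```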
Example 4.2. Find a series solution in the form of Eq. (4.3) of the differential equation

u″ + xu′ + u = x^2    (4.13)

valid near x = 0.
Assuming a solution in the form of (4.3), differentiating, and inserting into (4.13),

Σ_{n=0}^{∞} n(n − 1) c_n x^{n−2} + Σ_{n=0}^{∞} n c_n x^n + Σ_{n=0}^{∞} c_n x^n − x^2 = 0    (4.14)
Shifting the indices as before,

Σ_{n=0}^{∞} (n + 2)(n + 1) c_{n+2} x^n + Σ_{n=0}^{∞} n c_n x^n + Σ_{n=0}^{∞} c_n x^n − x^2 = 0    (4.15)

Once again, each of the coefficients of x^n must be zero:

n = 0:  2c_2 + c_0 = 0,  c_2 = −c_0/2    (4.16)
n = 1:  6c_3 + 2c_1 = 0,  c_3 = −c_1/3
n = 2:  12c_4 + 3c_2 − 1 = 0,  c_4 = (1 + 3c_0/2)/12
n > 2:  c_{n+2} = −c_n/(n + 2)

The last of these is called a recurrence formula.
Thus,

u = c_0(1 − x^2/2 + x^4/8 − x^6/(8)(6) + ···)
  + c_1(x − x^3/3 + x^5/(3)(5) − x^7/(3)(5)(7) + ···)
  + x^4(1/12 − x^2/(12)(6) + ···)    (4.17)

Note that the series on the third line of (4.17) is the particular solution of (4.13). The constants c_0 and c_1 are to be evaluated using the boundary conditions.
4.1.3 Lessons: Finding Series Solutions for Differential Equations
with Ordinary Points
If x0 is an ordinary point assume a solution in the form of Eq. (4.3) and substitute into
the differential equation. Then equate the coefficients of equal powers of x. This will give a
recurrence formula from which two series may be obtained in terms of two arbitrary constants.
These may be evaluated by using the two boundary conditions.
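The lesson above can be made concrete for Example 4.2. The code below is an illustrative sketch (not from the text): it builds the coefficients from the relations in Eq. (4.16) and checks that the truncated series very nearly satisfies Eq. (4.13):

```python
from fractions import Fraction

# Series coefficients for u'' + x u' + u = x^2 (Example 4.2), following the
# lesson: equate coefficients of x^n to obtain a recurrence.
N = 20
c0, c1 = Fraction(1), Fraction(0)          # sample boundary data
c = [c0, c1, -c0 / 2, -c1 / 3, (1 + 3 * c0 / 2) / 12] + [Fraction(0)] * (N - 4)
for n in range(3, N - 1):
    c[n + 2] = -c[n] / (n + 2)             # recurrence for n > 2

x = Fraction(1, 2)
u   = sum(cn * x**n for n, cn in enumerate(c))
up  = sum(n * cn * x**(n - 1) for n, cn in enumerate(c) if n >= 1)
upp = sum(n * (n - 1) * cn * x**(n - 2) for n, cn in enumerate(c) if n >= 2)
residual = upp + x * up + u - x**2
print(abs(residual) < Fraction(1, 10**9))  # only truncation error remains
```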
Problems
1. The differential equation

u″ + xu′ + xu = x

has ordinary points everywhere. Find a series solution near x = 0.
2. Find a series solution of the differential equation

u″ + (1 + x^2)u = x

near x = 0 and identify the particular solution.
3. The differential equation
(1 − x^2)u″ + u = 0
has singular points at x = ±1, but is analytic near x = 0. Find a series solution that is
valid near x = 0 and discuss the radius of convergence.
4.1.4 Regular Singular Points and the Method of Frobenius
If x0 is a singular point in (4.2) there may not be a power series solution of the form of Eq. (4.3).
In such a case we proceed by assuming a solution of the form

u(x) = Σ_{n=0}^{∞} c_n (x − x_0)^{n+r}    (4.18)

in which c_0 ≠ 0 and r is any constant, not necessarily an integer. This is called the method of Frobenius and the series is called a Frobenius series. The Frobenius series need not be a power series because r may be a fraction or even negative. Differentiating once,

u′ = Σ_{n=0}^{∞} (n + r) c_n (x − x_0)^{n+r−1}    (4.19)

and differentiating again,

u″ = Σ_{n=0}^{∞} (n + r − 1)(n + r) c_n (x − x_0)^{n+r−2}    (4.20)

These are then substituted into the differential equation, the indices are shifted where required so that each term contains x raised to the same power, and the coefficients of each power of x are set equal to zero. The coefficient associated with the lowest power of x gives a quadratic equation that can be solved for the index r; it is called an indicial equation. There will therefore be two roots of this equation, corresponding to two series solutions. The values of c_n are determined as above by a recurrence equation for each of the roots. Three possible cases are important: (a) the roots are distinct and do not differ by an integer, (b) the roots differ by an integer, and (c) the roots are coincident, i.e., repeated. We illustrate the method by a series of examples.
Example 4.3 (distinct roots). Solve the equation

x^2 u″ + x(1/2 + 2x)u′ + (x − 1/2)u = 0    (4.21)

The coefficient of the u′ term is

p(x) = (1/2 + 2x)/x    (4.22)
and the coefficient of the u term is

q(x) = (x − 1/2)/x^2    (4.23)

Both have singularities at x = 0. However, multiplying p(x) by x and q(x) by x^2 removes the singularities. Thus x = 0 is a regular singular point. Assume a solution in the form of the Frobenius series u = Σ_{n=0}^{∞} c_n x^{n+r}, differentiate twice, and substitute into (4.21), obtaining

Σ_{n=0}^{∞} (n + r)(n + r − 1) c_n x^{n+r} + Σ_{n=0}^{∞} (1/2)(n + r) c_n x^{n+r} + Σ_{n=0}^{∞} 2(n + r) c_n x^{n+r+1} + Σ_{n=0}^{∞} c_n x^{n+r+1} − Σ_{n=0}^{∞} (1/2) c_n x^{n+r} = 0    (4.24)
The indices of the third and fourth summations are now shifted as in Example 4.1, and we find

[r(r − 1) + (1/2)r − 1/2] c_0 x^r + Σ_{n=1}^{∞} [(n + r)(n + r − 1) + (1/2)(n + r) − 1/2] c_n x^{n+r} + Σ_{n=1}^{∞} [2(n + r − 1) + 1] c_{n−1} x^{n+r} = 0    (4.25)

Each coefficient must be zero for the equation to be true. Thus the coefficient of the c_0 term must be zero, since c_0 itself cannot be zero. This gives a quadratic equation to be solved for r, called an indicial equation (since we are solving for the index, r):

r(r − 1) + (1/2)r − 1/2 = 0    (4.26)

with roots r = 1 and r = −1/2. The coefficients of x^{n+r} must also be zero. Thus

[(n + r)(n + r − 1) + (1/2)(n + r) − 1/2] c_n + [2(n + r − 1) + 1] c_{n−1} = 0    (4.27)

The recurrence equation is therefore

c_n = − {[2(n + r − 1) + 1] / [(n + r)(n + r − 1) + (1/2)(n + r) − 1/2]} c_{n−1}    (4.28)

For the case of r = 1,

c_n = − {(2n + 1) / [n(n + 3/2)]} c_{n−1}    (4.29)
Computing a few of the coefficients,

c_1 = −[3/(5/2)] c_0 = −(6/5) c_0
c_2 = −(5/7) c_1 = (6/7) c_0
c_3 = −[7/(27/2)] c_2 = −(4/9) c_0

etc., and the first Frobenius series is

u_1 = c_0 [x − (6/5)x^2 + (6/7)x^3 − (4/9)x^4 + ···]    (4.30)

Setting r = −1/2 in the recurrence equation (4.28) and using b_n instead of c_n to distinguish it from the first case,

b_n = −{(2n − 2)/[n(n − 3/2)]} b_{n−1}    (4.31)

Noting that in this case b_1 = 0, all the following b_n must be zero and the second Frobenius series has only one term: b_0 x^{−1/2}. The complete solution is

u = c_0 [x − (6/5)x^2 + (6/7)x^3 − (4/9)x^4 + ···] + b_0 x^{−1/2}    (4.32)
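Because the second Frobenius series terminates after a single term, u = x^{−1/2} should satisfy Eq. (4.21) exactly, which is easy to confirm numerically. The check below is an illustration added to the text:

```python
# Check the second solution of Example 4.3: u = x^(-1/2) should satisfy
# x^2 u'' + x(1/2 + 2x) u' + (x - 1/2) u = 0 exactly (not just to series order).
def residual(x):
    u = x ** -0.5
    up = -0.5 * x ** -1.5
    upp = 0.75 * x ** -2.5
    return x**2 * upp + x * (0.5 + 2 * x) * up + (x - 0.5) * u

for x in (0.25, 1.0, 3.0):
    print(abs(residual(x)) < 1e-12)
```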
Example 4.4 (repeated roots). Next consider the differential equation

x^2 u″ − xu′ + (x + 1)u = 0    (4.33)

There is a regular singular point at x = 0, so we attempt a Frobenius series about x = 0. Differentiating (4.18) twice and substituting into (4.33),

Σ_{n=0}^{∞} (n + r − 1)(n + r) c_n x^{n+r} − Σ_{n=0}^{∞} (n + r) c_n x^{n+r} + Σ_{n=0}^{∞} c_n x^{n+r} + Σ_{n=0}^{∞} c_n x^{n+r+1} = 0    (4.34)

or

[r(r − 1) − r + 1] c_0 x^r + Σ_{n=1}^{∞} [(n + r − 1)(n + r) − (n + r) + 1] c_n x^{n+r} + Σ_{n=1}^{∞} c_{n−1} x^{n+r} = 0    (4.35)

where we have shifted the index in the last sum.
The indicial equation is

r(r − 1) − r + 1 = 0    (4.36)
and the roots of this equation are both r = 1. Setting the coefficients in the last two sums to zero, we find the recurrence equation

c_n = − c_{n−1} / [(n + r − 1)(n + r) − (n + r) + 1]    (4.37)

and since r = 1,

c_n = − c_{n−1} / [n(n + 1) − (n + 1) + 1] = −c_{n−1}/n^2    (4.38)

c_1 = −c_0
c_2 = −[1/(6 − 3 + 1)] c_1 = (1/4) c_0
c_3 = −[1/(12 − 4 + 1)] c_2 = −(1/9) c_2 = −(1/36) c_0

etc.
The Frobenius series is

u_1 = c_0 [x − x^2 + (1/4)x^3 − (1/36)x^4 + ···]    (4.39)

In this case there is no second solution in the form of a Frobenius series because of the repeated root. We shall soon see what form the second solution takes.
Example 4.5 (roots differing by an integer 1). Next consider the equation

x^2 u″ − 2xu′ + (x + 2)u = 0    (4.40)

There is a regular singular point at x = 0. We therefore expect a solution in the form of the Frobenius series (4.18). Substituting (4.18), (4.19), and (4.20) into our differential equation, we obtain

Σ_{n=0}^{∞} (n + r)(n + r − 1) c_n x^{n+r} − Σ_{n=0}^{∞} 2(n + r) c_n x^{n+r} + Σ_{n=0}^{∞} 2 c_n x^{n+r} + Σ_{n=0}^{∞} c_n x^{n+r+1} = 0    (4.41)

Taking out the n = 0 term and shifting the last summation,

[r(r − 1) − 2r + 2] c_0 x^r + Σ_{n=1}^{∞} [(n + r)(n + r − 1) − 2(n + r) + 2] c_n x^{n+r} + Σ_{n=1}^{∞} c_{n−1} x^{n+r} = 0    (4.42)
Setting the coefficient of the first term to zero gives the indicial equation:

r(r − 1) − 2r + 2 = 0    (4.43)

There are two distinct roots, r_1 = 2 and r_2 = 1. However, they differ by an integer: r_1 − r_2 = 1.
Substituting r_1 = 2 into (4.42) and noting that each coefficient of x^{n+r} must be zero,

[(n + 2)(n + 1) − 2(n + 2) + 2] c_n + c_{n−1} = 0    (4.44)

The recurrence equation is

c_n = −c_{n−1} / [(n + 2)(n − 1) + 2] = −c_{n−1}/[n(n + 1)]

c_1 = −c_0/2
c_2 = −c_1/6 = c_0/12
c_3 = −c_2/12 = −c_0/144    (4.45)

The first Frobenius series is therefore

u_1 = c_0 [x^2 − (1/2)x^3 + (1/12)x^4 − (1/144)x^5 + ···]    (4.46)

We now attempt to find the Frobenius series corresponding to r_2 = 1. Substituting r_2 = 1 into the coefficient condition in (4.42), we find that

[n(n + 1) − 2(n + 1) + 2] c_n = −c_{n−1}    (4.47)

When n = 1 this forces c_0 = 0. Hence c_n must be zero for all n, and the attempt to find a second Frobenius series has failed. This will not always be the case when roots differ by an integer, as illustrated in the following example.
Example 4.6 (roots differing by an integer 2). Consider the differential equation

x^2 u″ + x^2 u′ − 2u = 0    (4.48)

You may show that the indicial equation is r^2 − r − 2 = 0, with roots r_1 = 2 and r_2 = −1; the roots differ by an integer. When r = 2 the recurrence equation is

c_n = −{(n + 1)/[n(n + 3)]} c_{n−1}    (4.49)
The first Frobenius series is

u_1 = c_0 x^2 [1 − (1/2)x + (3/20)x^2 − (1/30)x^3 + ···]    (4.50)

When r = −1 the recurrence equation is

[(n − 1)(n − 2) − 2] b_n + (n − 2) b_{n−1} = 0    (4.51)

When n = 3 this results in b_2 = 0. Thus b_n = 0 for all n ≥ 2 and the second series terminates:

u_2 = b_0 [1/x − 1/2]    (4.52)
4.1.5 Lessons: Finding Series Solution for Differential Equations with Regular
Singular Points
1. Assume a solution of the form

u = Σ_{n=0}^{∞} c_n x^{n+r},  c_0 ≠ 0    (4.53)

Differentiate term by term and insert into the differential equation. Set the coefficient of the lowest power of x to zero to obtain a quadratic equation for r.
If the indicial equation yields two roots that do not differ by an integer, there will always be two Frobenius series, one for each root of the indicial equation.
2. If the roots are the same (repeated roots) the form of the second solution will be

u_2 = u_1 ln(x) + Σ_{n=1}^{∞} b_n x^{n+r_1}    (4.54)

This equation is substituted into the differential equation to determine the b_n.
3. If the roots differ by an integer, choose the larger root to obtain a Frobenius series for u_1. The second solution may be another Frobenius series. If that attempt fails, assume a solution of the form

u_2 = u_1 ln(x) + Σ_{n=0}^{∞} b_n x^{n+r_2}    (4.55)

This equation is substituted into the differential equation to find the b_n.
This is considered in the next section.
4.1.6 Logarithms and Second Solutions
Example 4.7. Reconsider Example 4.4 and assume a solution in the form of (4.54). Recall that in Example 4.4 the differential equation was

x^2 u″ − xu′ + (1 + x)u = 0    (4.56)

and the indicial equation yielded a double root at r = 1.
A single Frobenius series was

u_1 = x − x^2 + x^3/4 − x^4/36 + ···
Now differentiate Eq. (4.54):

u_2′ = u_1′ ln x + (1/x) u_1 + Σ_{n=1}^{∞} (n + r) b_n x^{n+r−1}

u_2″ = u_1″ ln x + (2/x) u_1′ − (1/x^2) u_1 + Σ_{n=1}^{∞} (n + r − 1)(n + r) b_n x^{n+r−2}    (4.57)
Inserting this into the differential equation gives

ln(x)[x^2 u_1″ − xu_1′ + (1 + x)u_1] + 2(xu_1′ − u_1) + Σ_{n=1}^{∞} [b_n(n + r − 1)(n + r) x^{n+r} − b_n(n + r) x^{n+r} + b_n x^{n+r}] + Σ_{n=1}^{∞} b_n x^{n+r+1} = 0    (4.58)

The first term on the left-hand side of (4.58) is clearly zero because the term in brackets is the original equation. Noting that r = 1 in this case and substituting from the Frobenius series for u_1 (c_0 can be set equal to unity without losing generality), we find

2[−x^2 + x^3/2 − x^4/12 + ···] + Σ_{n=1}^{∞} [n(n + 1) − (n + 1) + 1] b_n x^{n+1} + Σ_{n=2}^{∞} b_{n−1} x^{n+1} = 0    (4.59)

or

−2x^2 + x^3 − x^4/6 + ··· + b_1 x^2 + Σ_{n=2}^{∞} [n^2 b_n + b_{n−1}] x^{n+1} = 0    (4.60)

Equating coefficients of like powers of x, we find that b_1 = 2.
For n ≥ 2:

1 + 4b_2 + b_1 = 0,  b_2 = −3/4
−1/6 + 9b_3 + b_2 = 0,  b_3 = 11/108

etc.

u_2 = u_1 ln x + 2x^2 − (3/4)x^3 + (11/108)x^4 − ···    (4.61)

The complete solution is

u = [C_1 + C_2 ln x] u_1 + C_2 [2x^2 − (3/4)x^3 + (11/108)x^4 − ···]    (4.62)
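The coefficients b_n can also be generated by machine. Matching powers of x in Eq. (4.60), and writing u_1 = Σ c_n x^{n+1} with c_0 = 1 and c_n = −c_{n−1}/n^2 from Eq. (4.38), gives n^2 b_n + b_{n−1} + 2n c_n = 0 for n ≥ 2 (my restatement of that equation, with b_1 fixed by the x^2 terms). The sketch below, an illustration added to the text, reproduces the values in Eq. (4.61) exactly:

```python
from fractions import Fraction

# Example 4.7: u2 = u1 ln(x) + sum of b_n x^(n+1), where u1 = sum of c_n x^(n+1).
N = 6
c = [Fraction(1)]
for n in range(1, N):
    c.append(-c[n - 1] / n**2)     # c_n = -c_(n-1)/n^2, Eq. (4.38)

b = [Fraction(0)] * N              # b_0 is absorbed into u1 for a repeated root
b[1] = -2 * c[1]                   # from the x^2 coefficient of Eq. (4.60)
for n in range(2, N):
    b[n] = (-b[n - 1] - 2 * n * c[n]) / n**2

print(b[1], b[2], b[3])            # 2, -3/4, 11/108 as in Eq. (4.61)
```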
Example 4.8. Reconsider Example 4.5, in which a second Frobenius series could not be found because the roots of the indicial equation differed by an integer. We attempt a second solution in the form of (4.55).
The differential equation in Example 4.5 was

x^2 u″ − 2xu′ + (x + 2)u = 0

and the roots of the indicial equation were r = 2 and r = 1, and are therefore separated by an integer. We found one Frobenius series,

u_1 = x^2 − (1/2)x^3 + (1/12)x^4 − (1/144)x^5 + ···

for the root r = 2, but were unable to find another Frobenius series for the case of r = 1.
Assume a second solution of the form in Eq. (4.55). Differentiating and substituting into (4.40),

[x^2 u_1″ − 2xu_1′ + (x + 2)u_1] ln(x) + 2xu_1′ − 3u_1 + Σ_{n=0}^{∞} b_n[(n + r)(n + r − 1) − 2(n + r) + 2] x^{n+r} + Σ_{n=0}^{∞} b_n x^{n+r+1} = 0    (4.63)

Noting that the first term in brackets is zero, inserting u_1 and u_1′ from (4.46), and noting that r_2 = 1,

x^2 − (3/2)x^3 + (5/12)x^4 − (7/144)x^5 + ··· + b_0 x^2 + Σ_{n=2}^{∞} {[n(n − 1)] b_n + b_{n−1}} x^{n+1} = 0    (4.64)
Equating x^2 terms, we find that b_0 = −1. For the higher order terms,

−3/2 + 2b_2 + b_1 = 0

Taking b_1 = 0,

b_2 = 3/4

5/12 + 6b_3 + b_2 = 0,  b_3 = −7/36

The second solution is

u_2 = u_1 ln(x) − [x − (3/4)x^3 + (7/36)x^4 − ···]    (4.65)

The complete solution is therefore

u = [C_1 + C_2 ln x] u_1 − C_2 [x − (3/4)x^3 + (7/36)x^4 − ···]    (4.66)
Problems
1. Find two Frobenius series solutions of

x^2 u″ + 2xu′ + (x^2 − 2)u = 0

2. Find two Frobenius series solutions of

x^2 u″ + xu′ + (x^2 − 1/4)u = 0
3. Show that the indicial equation for the differential equation

xu″ + u′ + xu = 0

has the double root s = 0, so that the differential equation has only one Frobenius series solution. Find that solution. Then find another solution in the form

u = ln(x) Σ_{n=0}^{∞} c_n x^{n+s} + Σ_{m=0}^{∞} a_m x^{s+m}

where the first summation above is the first Frobenius solution.
4.2 BESSEL FUNCTIONS
A few differential equations are so widely useful in applied mathematics that they have been
named after the mathematician who first explored their theory. Such is the case with Bessel’s
equation. It occurs in problems involving the Laplacian ∇2
u in cylindrical coordinates when
variables are separated. Bessel’s equation is a Sturm–Liouville equation of the form
ρ2 d2
u
dρ2
+ ρ
du
dρ
+ (λ2
ρ2
− ν2
)u = 0 (4.67)
Changing the independent variable x = λρ, the equation becomes
x2
u + xu + (x2
− ν2
)u = 0 (4.68)
4.2.1 Solutions of Bessel’s Equation
Recalling the standard forms (4.1) and (4.2), we see that this is a linear homogeneous equation with variable coefficients and a regular singular point at x = 0. We therefore assume a solution in the form of a Frobenius series (4.18):

u = Σ_{j=0}^{∞} c_j x^{j+r}    (4.69)

Upon differentiating twice and substituting into (4.68) we find

Σ_{j=0}^{∞} [(j + r − 1)(j + r) + (j + r) − ν^2] c_j x^{j+r} + Σ_{j=0}^{∞} c_j x^{j+r+2} = 0    (4.70)
In general ν can be any real number. We will first explore some of the properties of the solution
when ν is a nonnegative integer n = 0, 1, 2, 3, . . . . First note that
(j + r − 1)(j + r) + (j + r) = (j + r)^2   (4.71)
Shifting the exponent in the second summation and writing out the first two terms in the first,
(r − n)(r + n)c_0 + (r + 1 − n)(r + 1 + n)c_1 x + Σ_{j=2}^∞ [(r + j − n)(r + j + n)c_j + c_{j−2}] x^j = 0   (4.72)
In order for the coefficient of the x^0 term to vanish, r = n or r = −n. (This is the indicial
equation.) In order for the coefficient of the x term to vanish, c_1 = 0. For each term in the
summation to vanish,
c_j = −c_{j−2} / [(r + j − n)(r + j + n)] = −c_{j−2} / [j(2n + j)],   r = n,  j = 2, 3, 4, · · ·   (4.73)
This is the recurrence relation. Since c_1 = 0, all c_j = 0 when j is an odd number. It is therefore
convenient to write j = 2k and note that
c_{2k} = −c_{2k−2} / [2^2 k(n + k)]   (4.74)
so that
c_{2k} = (−1)^k c_0 / [k!(n + 1)(n + 2) · · · (n + k) 2^{2k}]   (4.75)
The Frobenius series is
u = c_0 x^n [1 + Σ_{k=1}^∞ (−1)^k / (k!(n + 1)(n + 2) · · · (n + k)) (x/2)^{2k}]   (4.76)
Now c_0 is an arbitrary constant, so we can choose it to be c_0 = 1/(n! 2^n), in which case the above
equation reduces to
J_n(x) = u = Σ_{k=0}^∞ (−1)^k / (k!(n + k)!) (x/2)^{n+2k}   (4.77)
The usual notation is J_n, and the function is called a Bessel function of the first kind of order n.
Note that we can immediately conclude from (4.77) that
J_n(−x) = (−1)^n J_n(x)   (4.78)
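The series (4.77) is straightforward to evaluate numerically. As a quick sketch (pure Python; the helper name `bessel_jn` is ours, not the text's), here is a partial sum that verifies the symmetry relation (4.78):

```python
import math

def bessel_jn(n, x, terms=30):
    # Partial sum of (4.77): J_n(x) = sum_k (-1)^k / (k! (n+k)!) * (x/2)^(n+2k)
    return sum((-1)**k / (math.factorial(k) * math.factorial(n + k))
               * (x / 2) ** (n + 2 * k) for k in range(terms))

# (4.78): J_n(-x) = (-1)^n J_n(x)
for n in range(4):
    assert abs(bessel_jn(n, -1.7) - (-1)**n * bessel_jn(n, 1.7)) < 1e-12
```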
Note that the roots of the indicial equation differ by an integer. When r = −n, (4.72) does not
yield a useful second solution, since the denominator is zero for j = 0 or 2n. In any case it is easy
to show that J_{−n}(x) = (−1)^n J_n(x), so when ν is an integer the two solutions are not independent.
A second solution is determined by the methods detailed above and involves natural
logarithms. The details are very messy and will not be given here. The result is
Y_n(x) = (2/π) {J_n(x) [ln(x/2) + γ] + Σ_{k=0}^∞ (−1)^{k+1} [φ(k) + φ(k + n)] / (2^{2k+n+1} k!(k + n)!) x^{2k+n}} − (2/π) Σ_{k=0}^{n−1} (n − k − 1)! / (2^{2k−n+1} k!) x^{2k−n}   (4.79)
In this equation φ(k) = 1 + 1/2 + 1/3 + · · · + 1/k (with φ(0) = 0) and γ is Euler's constant,
0.5772156649 . . . .
FIGURE 4.1: Bessel functions of the first kind
Bessel functions of the first and second kinds of order zero are particularly useful in
solving practical problems (Fig. 4.1). For these cases
J_0(x) = Σ_{k=0}^∞ (−1)^k / (k!)^2 (x/2)^{2k}   (4.80)
and
Y_0(x) = (2/π) {J_0(x) [ln(x/2) + γ] + Σ_{k=1}^∞ (−1)^{k+1} φ(k) / (2^{2k} (k!)^2) x^{2k}}   (4.81)
The case where ν is not an integer. Recall that in (4.70), when ν is an integer, a part of the denominator in the recurrence is
(1 + ν)(2 + ν)(3 + ν) · · · (n + ν)   (4.82)
We were then able to use the familiar properties of factorials to simplify the expression for
J_n(x). If ν is not an integer we can use the properties of the gamma function to the same end. The gamma
function is defined as
Γ(ν) = ∫_0^∞ t^{ν−1} e^{−t} dt   (4.83)
Note that
Γ(ν + 1) = ∫_0^∞ t^ν e^{−t} dt   (4.84)
and integrating by parts,
Γ(ν + 1) = [−t^ν e^{−t}]_0^∞ + ν ∫_0^∞ t^{ν−1} e^{−t} dt = ν Γ(ν)   (4.85)
and (4.82) can be written as
(1 + ν)(2 + ν)(3 + ν) · · · (n + ν) = Γ(n + ν + 1) / Γ(ν + 1)   (4.86)
so that when ν is not an integer
J_ν(x) = Σ_{n=0}^∞ (−1)^n / (2^{2n+ν} n! Γ(n + ν + 1)) x^{2n+ν}   (4.87)
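Both the recursion (4.85) and the non-integer-order series (4.87) can be spot-checked numerically; `math.gamma` supplies Γ, and the helper name is ours. The closed form J_{1/2}(x) = √(2/(πx)) sin x used below is a standard identity, not derived in the text:

```python
import math

def jv_series(nu, x, terms=30):
    # Series (4.87), with math.gamma supplying the factor Γ(n+ν+1)
    return sum((-1)**k / (2 ** (2 * k + nu) * math.factorial(k) * math.gamma(k + nu + 1))
               * x ** (2 * k + nu) for k in range(terms))

# The recursion (4.85): Γ(ν+1) = ν Γ(ν)
nu = 1.3
assert abs(math.gamma(nu + 1) - nu * math.gamma(nu)) < 1e-12

# Spot check against the standard closed form J_{1/2}(x) = sqrt(2/(πx)) sin x
x = 2.0
assert abs(jv_series(0.5, x) - math.sqrt(2 / (math.pi * x)) * math.sin(x)) < 1e-12
```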
Fig. 4.3 is a graphical representation of the gamma function.
Here are the rules:
1. If 2ν is not an integer, J_ν and J_{−ν} are linearly independent and the general solution of
Bessel's equation of order ν is
u(x) = A J_ν(x) + B J_{−ν}(x)   (4.88)
where A and B are constants to be determined by boundary conditions.
2. If 2ν is an odd positive integer, J_ν and J_{−ν} are still linearly independent and the solution
form (4.88) is still valid.
3. If 2ν is an even integer, J_ν(x) and J_{−ν}(x) are not linearly independent and the solution
takes the form
u(x) = A J_ν(x) + B Y_ν(x)   (4.89)
Bessel functions are tabulated functions, just as are exponentials and trigonometric functions.
Some examples of their shapes are shown in Figs. 4.1 and 4.2.
Note that both J_ν(x) and Y_ν(x) have an infinite number of positive zeros, and we denote them as
λ_j, j = 1, 2, 3, . . .
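The zeros λ_j can be located by scanning the series (4.80) for sign changes and refining each bracket by bisection. A sketch under that approach (the helper names are ours):

```python
import math

def j0(x, terms=40):
    # Series (4.80)
    return sum((-1)**k / math.factorial(k)**2 * (x / 2) ** (2 * k) for k in range(terms))

def bisect(f, a, b, tol=1e-12):
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

# Scan for sign changes of J0 and refine each bracket by bisection
zeros, x0, step = [], 0.1, 0.1
while x0 + step < 16.0:
    if j0(x0) * j0(x0 + step) < 0:
        zeros.append(bisect(j0, x0, x0 + step))
    x0 += step
print([round(z, 4) for z in zeros[:3]])   # → [2.4048, 5.5201, 8.6537]
```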
FIGURE 4.2: Bessel functions of the second kind
FIGURE 4.3: The gamma function
Some important relations involving Bessel functions are shown in Table 4.1. We will
derive only the first, namely
d/dx [x^ν J_ν(x)] = x^ν J_{ν−1}(x)   (4.90)
d/dx [x^ν J_ν(x)] = d/dx Σ_{n=0}^∞ (−1)^n / (2^{2n+ν} n! Γ(n + ν + 1)) x^{2n+2ν}   (4.91)
TABLE 4.1: Some Properties of Bessel Functions
1. [x^ν J_ν(x)]′ = x^ν J_{ν−1}(x)
2. [x^{−ν} J_ν(x)]′ = −x^{−ν} J_{ν+1}(x)
3. J_{ν−1}(x) + J_{ν+1}(x) = (2ν/x) J_ν(x)
4. J_{ν−1}(x) − J_{ν+1}(x) = 2 J_ν′(x)
5. ∫ x^ν J_{ν−1}(x) dx = x^ν J_ν(x) + constant
6. ∫ x^{−ν} J_{ν+1}(x) dx = −x^{−ν} J_ν(x) + constant
= Σ_{n=0}^∞ (−1)^n 2(n + ν) / (2^{2n+ν} n! (n + ν) Γ(n + ν)) x^{2n+2ν−1}   (4.92)
= x^ν Σ_{n=0}^∞ (−1)^n / (2^{2n+ν−1} n! Γ(n + ν)) x^{2n+ν−1} = x^ν J_{ν−1}(x)   (4.93)
These will prove important when we begin solving partial differential equations in cylindrical
coordinates using separation of variables.
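The identities in Table 4.1 are easy to sanity-check with a finite-difference derivative. A sketch, assuming the series form (4.77) (helper names are ours):

```python
import math

def jn(n, x, terms=40):
    # Series (4.77)
    return sum((-1)**k / (math.factorial(k) * math.factorial(n + k))
               * (x / 2) ** (n + 2 * k) for k in range(terms))

def ddx(f, x, h=1e-6):
    # Central-difference derivative, accurate enough for a sanity check
    return (f(x + h) - f(x - h)) / (2 * h)

n, x = 2, 1.7
# Property 1: [x^n J_n(x)]' = x^n J_{n-1}(x)
assert abs(ddx(lambda t: t**n * jn(n, t), x) - x**n * jn(n - 1, x)) < 1e-8
# Property 4: J_{n-1}(x) - J_{n+1}(x) = 2 J_n'(x)
assert abs(jn(n - 1, x) - jn(n + 1, x) - 2 * ddx(lambda t: jn(n, t), x)) < 1e-8
```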
Bessel's equation is of Sturm–Liouville form, and the functions J_n(x) are orthogonal with
respect to a weight function ρ (see Eqs. (3.46) and (3.53), Chapter 3).
Note that Bessel's equation (4.67) with ν = n is
ρ^2 J_n″ + ρ J_n′ + (λ^2 ρ^2 − n^2) J_n = 0   (4.94)
which can be written as
d/dρ [(ρ J_n′)^2] + (λ^2 ρ^2 − n^2) d/dρ [J_n^2] = 0   (4.95)
Integrating, we find that
[(ρ J_n′)^2 + (λ^2 ρ^2 − n^2) J_n^2]_0^1 − 2λ^2 ∫_{ρ=0}^1 ρ J_n^2 dρ = 0   (4.96)
Thus,
2λ^2 ∫_{ρ=0}^1 ρ J_n^2 dρ = λ^2 [J_n′(λ)]^2 + (λ^2 − n^2) [J_n(λ)]^2   (4.97)
Thus, we note that if the eigenvalues λ_j are the roots of J_n(λ_j) = 0, the orthogonality
condition is, according to Eq. (3.53) in Chapter 3,
∫_0^1 ρ J_n(λ_j ρ) J_n(λ_k ρ) dρ = 0,   j ≠ k
= (1/2) [J_{n+1}(λ_j)]^2,   j = k   (4.98)
On the other hand, if the eigenvalues are the roots of the equation
H J_n(λ_j) + λ_j J_n′(λ_j) = 0
∫_0^1 ρ J_n(λ_j ρ) J_n(λ_k ρ) dρ = 0,   j ≠ k
= (λ_j^2 − n^2 + H^2) [J_n(λ_j)]^2 / (2λ_j^2),   j = k   (4.99)
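The orthogonality condition (4.98) can be verified by quadrature. A sketch using composite Simpson integration and the first two tabulated roots of J_0 (all names are ours):

```python
import math

def j0(x, terms=40):
    return sum((-1)**k / math.factorial(k)**2 * (x / 2) ** (2 * k) for k in range(terms))

def j1(x, terms=40):
    return sum((-1)**k / (math.factorial(k) * math.factorial(k + 1))
               * (x / 2) ** (1 + 2 * k) for k in range(terms))

def simpson(f, a, b, n=400):
    # Composite Simpson's rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

lam1, lam2 = 2.404825557695773, 5.520078110286311   # first two roots of J0
# j != k: the weighted integral vanishes
assert abs(simpson(lambda r: r * j0(lam1 * r) * j0(lam2 * r), 0, 1)) < 1e-8
# j == k: it equals [J1(λ_j)]^2 / 2, as in (4.98)
assert abs(simpson(lambda r: r * j0(lam1 * r) ** 2, 0, 1) - 0.5 * j1(lam1) ** 2) < 1e-8
```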
Using the equations in the table above and integrating by parts, it is not difficult to show that
∫_{s=0}^x s^n J_0(s) ds = x^n J_1(x) + (n − 1) x^{n−1} J_0(x) − (n − 1)^2 ∫_{s=0}^x s^{n−2} J_0(s) ds   (4.100)
4.2.2 Fourier–Bessel Series
Owing to the fact that Bessel's equation with appropriate boundary conditions is a Sturm–
Liouville system, it is possible to use the orthogonality property to expand any piecewise
continuous function on the interval 0 < x < 1 as a series of Bessel functions. For example,
let
f(x) = Σ_{n=1}^∞ A_n J_0(λ_n x)   (4.101)
Multiplying both sides by x J_0(λ_k x) dx and integrating from x = 0 to x = 1 (recall that the
weighting function x must be used to ensure orthogonality) and noting the orthogonality
property, we find that
f(x) = Σ_{j=1}^∞ [∫_{x=0}^1 x f(x) J_0(λ_j x) dx / ∫_{x=0}^1 x [J_0(λ_j x)]^2 dx] J_0(λ_j x)   (4.102)
Example 4.9. Derive a Fourier–Bessel series representation of 1 on the interval 0 < x < 1.
We note that with J_0(λ_j) = 0,
∫_{x=0}^1 x [J_0(λ_j x)]^2 dx = (1/2) [J_1(λ_j)]^2   (4.103)
and
∫_{x=0}^1 x J_0(λ_j x) dx = (1/λ_j) J_1(λ_j)   (4.104)
Thus
1 = 2 Σ_{j=1}^∞ J_0(λ_j x) / (λ_j J_1(λ_j))   (4.105)
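Partial sums of (4.105) can be computed directly. A sketch (the first five roots of J_0 are taken as given, and the helper names are ours):

```python
import math

def j0(x, terms=40):
    return sum((-1)**k / math.factorial(k)**2 * (x / 2) ** (2 * k) for k in range(terms))

def j1(x, terms=40):
    return sum((-1)**k / (math.factorial(k) * math.factorial(k + 1))
               * (x / 2) ** (1 + 2 * k) for k in range(terms))

# First five roots λ_j of J0 (computed beforehand, e.g. by bisection)
roots = [2.404825557695773, 5.520078110286311, 8.653727912911013,
         11.791534439014281, 14.930917708487787]

def partial_sum(x, nterms):
    # Partial sums of (4.105): 1 = 2 Σ J0(λ_j x) / (λ_j J1(λ_j))
    return 2 * sum(l_j and j0(l_j * x) / (l_j * j1(l_j)) for l_j in roots[:nterms])

# Convergence toward 1 at an interior point is slow but visible
assert abs(partial_sum(0.5, 5) - 1.0) < 0.15
```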
Example 4.10 (A problem in cylindrical coordinates). A cylinder of radius r_1 is initially at
a temperature u_0 when its surface temperature is increased to u_1. It is sufficiently long that
variation in the z direction may be neglected, and there is no variation in the θ direction. There
is no heat generation. From Chapter 1, Eq. (1.11),
u_t = (α/r)(r u_r)_r   (4.106)
u(0, r) = u_0
u(t, r_1) = u_1
u is bounded   (4.107)
The length scale is r_1 and the time scale is r_1^2/α. A dimensionless dependent variable that
normalizes the problem is U = (u − u_1)/(u_0 − u_1). Setting η = r/r_1 and τ = tα/r_1^2,
U_τ = (1/η)(η U_η)_η   (4.108)
U(0, η) = 1
U(τ, 1) = 0   (4.109)
U is bounded
Separate variables as U = T(τ)R(η). Substitute into the differential equation and divide by TR:
T_τ/T = (1/(ηR))(η R_η)_η = ±λ^2   (4.110)
where the minus sign is chosen so that the solution is bounded in time. The solution for T is
exponential, and we recognize the equation for R as Bessel's equation with ν = 0:
(1/η)(η R_η)_η + λ^2 R = 0   (4.111)
The solution is a linear combination of the two Bessel functions of order 0,
C_1 J_0(λη) + C_2 Y_0(λη)   (4.112)
Since we have seen that Y_0 is unbounded as η approaches zero, C_2 must be zero. Furthermore,
the boundary condition at η = 1 requires that J_0(λ) = 0, so that our eigenfunctions are J_0(λ_n η)
and the corresponding eigenvalues λ_n are the roots of J_0(λ_n) = 0.
U_n = K_n e^{−λ_n^2 τ} J_0(λ_n η),   n = 1, 2, 3, 4, . . .   (4.113)
Summing (linear superposition),
U = Σ_{n=1}^∞ K_n e^{−λ_n^2 τ} J_0(λ_n η)   (4.114)
Using the initial condition,
1 = Σ_{n=1}^∞ K_n J_0(λ_n η)   (4.115)
Bessel functions are orthogonal with respect to the weighting factor η since they are solutions to a
Sturm–Liouville system. Therefore when we multiply both sides of this equation by η J_0(λ_m η) dη
and integrate over (0, 1), all of the terms in the summation are zero except when m = n. Thus,
∫_{η=0}^1 J_0(λ_n η) η dη = K_n ∫_{η=0}^1 J_0^2(λ_n η) η dη   (4.116)
but
∫_{η=0}^1 η J_0^2(λ_n η) dη = J_1^2(λ_n)/2
∫_{η=0}^1 η J_0(λ_n η) dη = (1/λ_n) J_1(λ_n)   (4.117)
Thus
U(τ, η) = Σ_{n=1}^∞ [2/(λ_n J_1(λ_n))] e^{−λ_n^2 τ} J_0(λ_n η)   (4.118)
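A few terms of (4.118) already show the expected physical behavior. A sketch with hypothetical helper names and the first five roots of J_0 taken as given:

```python
import math

def j0(x, terms=40):
    return sum((-1)**k / math.factorial(k)**2 * (x / 2) ** (2 * k) for k in range(terms))

def j1(x, terms=40):
    return sum((-1)**k / (math.factorial(k) * math.factorial(k + 1))
               * (x / 2) ** (1 + 2 * k) for k in range(terms))

roots = [2.404825557695773, 5.520078110286311, 8.653727912911013,
         11.791534439014281, 14.930917708487787]

def U(tau, eta, nterms=5):
    # Partial sum of (4.118)
    return sum(2 / (lam * j1(lam)) * math.exp(-lam * lam * tau) * j0(lam * eta)
               for lam in roots[:nterms])

# The wall stays at the new temperature (U = 0 at η = 1) ...
assert abs(U(0.5, 1.0)) < 1e-6
# ... and the centerline decays monotonically toward it
assert U(0.1, 0.0) > U(0.3, 0.0) > 0.0
```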
Example 4.11 (Heat generation in a cylinder). Reconsider the problem of heat transfer in a
long cylinder, but with heat generation and at a normalized initial temperature of zero:
u_τ = (1/r)(r u_r)_r + q_0   (4.119)
u(τ, 1) = u(0, r) = 0,   u bounded   (4.120)
Our experience with the above example hints that the solution may be of the form
u = Σ_{j=1}^∞ A_j(τ) J_0(λ_j r)   (4.121)
This equation satisfies the boundary condition at r = 1, and A_j(τ) is to be determined. Substituting
into the partial differential equation gives
Σ_{j=1}^∞ A_j′(τ) J_0(λ_j r) = Σ_{j=1}^∞ A_j(τ) (1/r) d/dr [r dJ_0(λ_j r)/dr] + q_0   (4.122)
In view of Bessel's differential equation, the first term on the right can be written as
Σ_{j=1}^∞ −λ_j^2 J_0(λ_j r) A_j(τ)   (4.123)
The second term can be represented as a Fourier–Bessel series as follows:
q_0 = q_0 Σ_{j=1}^∞ 2 J_0(λ_j r) / (λ_j J_1(λ_j))   (4.124)
as shown in Example 4.9 above.
Equating coefficients of J_0(λ_j r), we find that A_j(τ) must satisfy the ordinary differential
equation
A_j′(τ) + λ_j^2 A_j(τ) = 2q_0 / (λ_j J_1(λ_j))   (4.125)
with the initial condition A_j(0) = 0.
Solution of this simple first-order linear differential equation yields
A_j(τ) = 2q_0 / (λ_j^3 J_1(λ_j)) + C exp(−λ_j^2 τ)   (4.126)
After applying the initial condition,
A_j(τ) = [2q_0 / (λ_j^3 J_1(λ_j))] [1 − exp(−λ_j^2 τ)]   (4.127)
The solution is therefore
u(τ, r) = Σ_{j=1}^∞ [2q_0 / (λ_j^3 J_1(λ_j))] [1 − exp(−λ_j^2 τ)] J_0(λ_j r)   (4.128)
Example 4.12 (Time dependent heat generation). Suppose that instead of constant heat
generation, the generation is time dependent, q(τ). The differential equation for A_j(τ) then
becomes
A_j′(τ) + λ_j^2 A_j(τ) = 2q(τ) / (λ_j J_1(λ_j))   (4.129)
An integrating factor for this equation is exp(λ_j^2 τ), so that the equation can be written as
d/dτ [A_j exp(λ_j^2 τ)] = [2q(τ) / (λ_j J_1(λ_j))] exp(λ_j^2 τ)   (4.130)
Integrating, and introducing t as a dummy variable,
A_j(τ) = [2 / (λ_j J_1(λ_j))] ∫_{t=0}^τ q(t) exp(−λ_j^2 (τ − t)) dt   (4.131)
Problems
1. By differentiating the series form of J_0(x) term by term, show that
J_0′(x) = −J_1(x)
2. Show that
∫ x J_0(x) dx = x J_1(x) + constant
3. Using the expression for ∫_{s=0}^x s^n J_0(s) ds, show that
∫_{s=0}^x s^5 J_0(s) ds = x(x^2 − 8)[4x J_0(x) + (x^2 − 8) J_1(x)]
4. Express 1 − x as a Fourier–Bessel series.
4.3 LEGENDRE FUNCTIONS
We now consider another second-order linear differential equation that is common in problems
involving the Laplacian in spherical coordinates. It is called Legendre's equation,
(1 − x^2)u″ − 2xu′ + ku = 0   (4.132)
This is clearly a Sturm–Liouville equation, and we will seek a series solution near the origin,
which is a regular point. We therefore assume a solution in the form of (4.3),
u = Σ_{j=0}^∞ c_j x^j   (4.133)
Differentiating (4.133) and substituting into (4.132), we find
Σ_{j=0}^∞ [j(j − 1) c_j x^{j−2} (1 − x^2) − 2 j c_j x^j + k c_j x^j] = 0   (4.134)
or
Σ_{j=0}^∞ {[k − j(j + 1)] c_j x^j + j(j − 1) c_j x^{j−2}} = 0   (4.135)
On shifting the last term,
Σ_{j=0}^∞ {(j + 2)(j + 1) c_{j+2} + [k − j(j + 1)] c_j} x^j = 0   (4.136)
The recurrence relation is
c_{j+2} = −[k − j(j + 1)] c_j / [(j + 1)(j + 2)]   (4.137)
There are thus two independent series solutions. It can be shown that they both diverge at
x = 1 unless they terminate at some point. It is easy to see from (4.137) that they do in fact
terminate if k = n(n + 1) for some nonnegative integer n.
Since n and j are integers, it follows that c_{n+2} = 0 and consequently c_{n+4}, c_{n+6}, etc. are
all zero. Therefore the solutions, which depend on n (i.e., the eigenfunctions), are polynomials,
series that terminate at j = n. For example, if n = 0, c_2 = 0 and the solution is a constant. If
n = 1, c_3 = 0 (and all higher coefficients vanish) and the polynomial is x. In general
u = P_n(x) = c_n [x^n − n(n − 1)/(2(2n − 1)) x^{n−2} + n(n − 1)(n − 2)(n − 3)/(2 · 4 · (2n − 1)(2n − 3)) x^{n−4} − . . .]
= (1/2^n) Σ_{k=0}^m (−1)^k/k! · (2n − 2k)!/((n − 2k)!(n − k)!) x^{n−2k}   (4.138)
where m = n/2 if n is even and (n − 1)/2 if n is odd.
The coefficient c_n is of course arbitrary. It turns out to be convenient to choose
c_0 = 1
c_n = (2n − 1)(2n − 3) · · · 1 / n!   (4.139)
The first few polynomials are
P_0 = 1,  P_1 = x,  P_2 = (3x^2 − 1)/2,  P_3 = (5x^3 − 3x)/2,  P_4 = (35x^4 − 30x^2 + 3)/8
Successive Legendre polynomials can be generated by the use of Rodrigues' formula
P_n(x) = 1/(2^n n!) d^n/dx^n (x^2 − 1)^n   (4.140)
For example,
P_5 = (63x^5 − 70x^3 + 15x)/8
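Legendre polynomials are conveniently generated by Bonnet's recurrence (k+1)P_{k+1} = (2k+1)xP_k − kP_{k−1}, a standard identity not derived in the text. A sketch checking it against P_5 above:

```python
def legendre(n, x):
    # Bonnet's recurrence (a standard identity, not derived in the text):
    # (k+1) P_{k+1}(x) = (2k+1) x P_k(x) - k P_{k-1}(x),  with P_0 = 1, P_1 = x
    p0, p1 = 1.0, x
    if n == 0:
        return p0
    for k in range(1, n):
        p0, p1 = p1, ((2 * k + 1) * x * p1 - k * p0) / (k + 1)
    return p1

x = 0.37
# Agrees with P5 = (63x^5 - 70x^3 + 15x)/8 quoted above
assert abs(legendre(5, x) - (63 * x**5 - 70 * x**3 + 15 * x) / 8) < 1e-12
# P_n(1) = 1 for every n, a consequence of the normalization (4.139)
assert all(abs(legendre(n, 1.0) - 1.0) < 1e-12 for n in range(8))
```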
Fig. 4.4 shows graphs of several Legendre polynomials.
FIGURE 4.4: Legendre polynomials
The second solution of Legendre's equation can be found by the method of variation of
parameters. The result is
Q_n(x) = P_n(x) ∫ dζ / [P_n^2(ζ)(1 − ζ^2)]   (4.141)
It can be shown that this generally takes on a logarithmic form involving ln[(1 + x)/(1 − x)],
which goes to infinity at x = 1. In fact it can be shown that the first two of these
functions are
Q_0 = (1/2) ln[(1 + x)/(1 − x)]  and  Q_1 = (x/2) ln[(1 + x)/(1 − x)] − 1   (4.142)
Thus the complete solution of the Legendre equation is
u = A P_n(x) + B Q_n(x)   (4.143)
where P_n(x) and Q_n(x) are Legendre functions of the first and second kind. If we require
the solution to be finite at x = 1, B must be zero.
Referring back to Eqs. (3.46) through (3.53) in Chapter 3, we note that the eigenvalues are
λ = n(n + 1) and the eigenfunctions are P_n(x) and Q_n(x). We further note from (3.46) and
(3.47) that the weight function is one and that the orthogonality condition is
∫_{−1}^1 P_n(x) P_m(x) dx = [2/(2n + 1)] δ_{mn}   (4.144)
where δ_{mn} is Kronecker's delta, 1 when n = m and 0 otherwise.
Example 4.13 (Steady heat conduction in a sphere).
Consider heat transfer in a solid sphere whose surface temperature is a function of θ, the angle
measured downward from the z-axis (see Fig. 1.3, Chapter 1). The problem is steady and there
is no heat source.
r ∂^2/∂r^2 (ru) + (1/sin θ) ∂/∂θ [sin θ ∂u/∂θ] = 0
u(r = 1) = f(θ)   (4.145)
u is bounded
Substituting x = cos θ,
r ∂^2/∂r^2 (ru) + ∂/∂x [(1 − x^2) ∂u/∂x] = 0   (4.146)
We separate variables by assuming u = R(r)X(x). Substitute into the equation and divide by
RX to find
(r/R)(rR)″ = −[(1 − x^2)X′]′/X = ±λ^2   (4.147)
or
r(rR)″ ∓ λ^2 R = 0
[(1 − x^2)X′]′ ± λ^2 X = 0   (4.148)
The second of these is Legendre's equation, and we have seen that it has bounded
solutions at x = ±1 when λ^2 = n(n + 1). The first equation is of the Cauchy–Euler type with
solution
R = C_1 r^n + C_2 r^{−n−1}   (4.149)
Noting that the constant C_2 must be zero to obtain a bounded solution at r = 0, and using
superposition,
u = Σ_{n=0}^∞ K_n r^n P_n(x)   (4.150)
and using the condition at r = 1 and the orthogonality of the Legendre polynomials (with
x = cos θ, dx = −sin θ dθ),
∫_{θ=0}^π f(θ) P_n(cos θ) sin θ dθ = ∫_{θ=0}^π K_n P_n^2(cos θ) sin θ dθ = 2K_n/(2n + 1)   (4.151)
4.4 ASSOCIATED LEGENDRE FUNCTIONS
Equation (1.15) in Chapter 1 can be put in the form
(1/α) ∂u/∂t = ∂^2u/∂r^2 + (2/r) ∂u/∂r + (1/r^2) ∂/∂µ [(1 − µ^2) ∂u/∂µ] + 1/(r^2(1 − µ^2)) ∂^2u/∂φ^2   (4.152)
by substituting µ = cos θ.
We shall see later that on separating variables in the case where u is a function of r, θ, φ,
and t, we find the following differential equation in the µ variable:
d/dµ [(1 − µ^2) df/dµ] + [n(n + 1) − m^2/(1 − µ^2)] f = 0   (4.153)
We state without proof that the bounded solution is the associated Legendre function P_n^m(µ). The
associated Legendre polynomial is given by
P_n^m(µ) = (1 − µ^2)^{m/2} d^m/dµ^m P_n(µ)   (4.154)
The orthogonality condition is
∫_{−1}^1 [P_n^m(µ)]^2 dµ = 2(n + m)! / [(2n + 1)(n − m)!]   (4.155)
and
∫_{−1}^1 P_n^m(µ) P_{n′}^m(µ) dµ = 0,   n ≠ n′   (4.156)
The associated Legendre function of the second kind is singular at x = ±1 and may be
computed by the formula
Q_n^m(x) = (1 − x^2)^{m/2} d^m Q_n(x)/dx^m   (4.157)
Problems
1. Find and carefully plot P_6 and P_7.
2. Perform the integral above and show that
Q_0(x) = C P_0(x) ∫_{ξ=0}^x dξ / [(1 − ξ^2) P_0^2(ξ)] = (C/2) ln[(1 + x)/(1 − x)]
and that
Q_1(x) = C x ∫_{ξ=0}^x dξ / [ξ^2(1 − ξ^2)] = C {(x/2) ln[(1 + x)/(1 − x)] − 1}
3. Using the equation above, find Q_0^0(x) and Q_1^1(x).
FURTHER READING
J. W. Brown and R. V. Churchill, Fourier Series and Boundary Value Problems. New York:
McGraw-Hill, 2001.
C. F. Chan Man Fong, D. DeKee, and P. N. Kaloni, Advanced Mathematics for Engineering
and Science. 2nd edition. Singapore: World Scientific, 2004.
P. V. O'Neil, Advanced Engineering Mathematics. 5th edition. Pacific Grove, CA: Brooks/Cole Thomson, 2003.
CHAPTER 5
Solutions Using Fourier Series
and Integrals
We have already demonstrated solution of partial differential equations for some simple cases
in rectangular Cartesian coordinates in Chapter 2. We now consider some slightly more
complicated problems as well as solutions in spherical and cylindrical coordinate systems to
further demonstrate the Fourier method of separation of variables.
5.1 CONDUCTION (OR DIFFUSION) PROBLEMS
Example 5.1 (Double Fourier series in conduction). We now consider transient heat con-
duction in two dimensions. The problem is stated as follows:
u_t = α(u_xx + u_yy)
u(t, 0, y) = u(t, a, y) = u(t, x, 0) = u(t, x, b) = u_0
u(0, x, y) = f(x, y)   (5.1)
That is, the sides of a rectangular area with initial temperature f (x, y) are kept at a constant
temperature u0. We first attempt to scale and nondimensionalize the equation and boundary
conditions. Note that there are two length scales, a and b. We can choose either, but there will
remain an extra parameter, either a/b or b/a in the equation. If we take ξ = x/a and η = y/b
then (5.1) can be written as
(a^2/α) u_t = u_ξξ + (a^2/b^2) u_ηη   (5.2)
The time scale is now chosen as a^2/α, and the dimensionless time is τ = αt/a^2. We also choose
a new dependent variable U(τ, ξ, η) = (u − u_0)/(f_max − u_0). The now nondimensionalized
system is
U_τ = U_ξξ + r^2 U_ηη   (5.3)
U(τ, 0, η) = U(τ, 1, η) = U(τ, ξ, 0) = U(τ, ξ, 1) = 0
U(0, ξ, η) = (f − u_0)/(f_max − u_0) = g(ξ, η)
We now proceed by separating variables. Let
U(τ, ξ, η) = T(τ)X(ξ)Y (η) (5.4)
Differentiating and inserting into (5.3) and dividing by (5.4) we find
T′/T = (X″Y + r^2 Y″X)/(XY)   (5.5)
where the primes indicate differentiation with respect to the variable in question and r = a/b.
Since the left-hand side of (5.5) is a function only of τ and the right-hand side is only a function
of ξ and η both sides must be constant. If the solution is to be finite in time we must choose
the constant to be negative, −λ^2. Replacing T′/T by −λ^2 and rearranging,
−λ^2 − X″/X = r^2 Y″/Y   (5.6)
Once again we see that both sides must be constants. How do we choose the signs? It should be
clear by now that if either of the constants is positive solutions for X or Y will take the form of
hyperbolic functions or exponentials and the boundary conditions on ξ or η cannot be satisfied.
Thus,
T′/T = −λ^2   (5.7)
X″/X = −β^2   (5.8)
r^2 Y″/Y = −γ^2   (5.9)
Note that X and Y are eigenfunctions of (5.8) and (5.9), which are Sturm–Liouville equations
and β and γ are the corresponding eigenvalues.
Solutions of (5.7), (5.8), and (5.9) are
T = A exp(−λ^2 τ)   (5.10)
X = B_1 cos(βξ) + B_2 sin(βξ)   (5.11)
Y = C_1 cos(γη/r) + C_2 sin(γη/r)   (5.12)
Applying the first homogeneous boundary condition, we see that X(0) = 0, so that B1 = 0.
Applying the third homogeneous boundary condition we see that Y (0) = 0, so that C1 = 0.
The second homogeneous boundary condition requires that sin(β) = 0, or β = nπ. The last
homogeneous boundary condition requires sin(γ/r) = 0, or γ = mπr. According to (5.6),
λ^2 = β^2 + γ^2. Combining these solutions and inserting into (5.4), we have one solution in the
form
U_mn(τ, ξ, η) = K_nm e^{−(n^2π^2 + m^2π^2r^2)τ} sin(nπξ) sin(mπη)   (5.13)
for all m, n = 1, 2, 3, 4, 5, . . .
Superposition now tells us that
U = Σ_{n=1}^∞ Σ_{m=1}^∞ K_nm e^{−(n^2π^2 + m^2π^2r^2)τ} sin(nπξ) sin(mπη)   (5.14)
Using the initial condition,
g(ξ, η) = Σ_{n=1}^∞ Σ_{m=1}^∞ K_nm sin(nπξ) sin(mπη)   (5.15)
We have a double Fourier series, and since both sin(nπξ) and sin(mπη) are members of
orthogonal sequences, we can multiply both sides by sin(nπξ) sin(mπη) dξ dη and integrate over
the domains.
∫_{ξ=0}^1 ∫_{η=0}^1 g(ξ, η) sin(nπξ) sin(mπη) dξ dη = K_nm ∫_{ξ=0}^1 sin^2(nπξ) dξ ∫_{η=0}^1 sin^2(mπη) dη = K_nm/4   (5.16)
Our solution is
U(τ, ξ, η) = Σ_{n=1}^∞ Σ_{m=1}^∞ 4 [∫_{ξ=0}^1 ∫_{η=0}^1 g(ξ, η) sin(nπξ) sin(mπη) dξ dη] e^{−(n^2π^2 + m^2π^2r^2)τ} sin(nπξ) sin(mπη)   (5.17)
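The coefficient formula (5.16) can be evaluated by a simple midpoint double sum. A sketch with a hypothetical initial shape g equal to the (1,1) mode, for which K_11 = 1 and all other coefficients vanish:

```python
import math

def Knm(g, n, m, N=200):
    # (5.16): K_nm = 4 ∫∫ g(ξ,η) sin(nπξ) sin(mπη) dξ dη, by a midpoint double sum
    h = 1.0 / N
    s = 0.0
    for i in range(N):
        xi = (i + 0.5) * h
        for j in range(N):
            eta = (j + 0.5) * h
            s += g(xi, eta) * math.sin(n * math.pi * xi) * math.sin(m * math.pi * eta)
    return 4 * s * h * h

# If the initial shape is itself the (1,1) mode, only K_11 survives
g = lambda xi, eta: math.sin(math.pi * xi) * math.sin(math.pi * eta)
assert abs(Knm(g, 1, 1) - 1.0) < 1e-3
assert abs(Knm(g, 2, 1)) < 1e-6
```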
Example 5.2 (A convection boundary condition). Reconsider the problem defined by (2.1)
in Chapter 2, but with different boundary and initial conditions,
u(t, 0) = u_0 = u(0, x)   (5.18)
k u_x(t, L) − h[u_1 − u(t, L)] = 0   (5.19)
The physical problem is a slab with conductivity k, initially at a temperature u_0, suddenly exposed
at x = L to a fluid at temperature u_1 through a heat transfer coefficient h, while the x = 0 face
is maintained at u_0.
The length and time scales are clearly the same as in the problem in Chapter 2. Hence τ =
tα/L^2 and ξ = x/L. If we choose U = (u − u_0)/(u_1 − u_0), we make the boundary condition
at x = 0 homogeneous, but the condition at x = L is not. We have the same situation that we
had in Section 2.3 of Chapter 2. The differential equation, one boundary condition, and the
initial condition are homogeneous. Proceeding, we find
U_τ = U_ξξ
U(τ, 0) = U(0, ξ) = 0
U_ξ(τ, 1) + B[U(τ, 1) − 1] = 0   (5.20)
where B = hL/k. It is useful to relocate the nonhomogeneous condition as the initial condition.
As in the previous problem we assume U(τ, ξ) = V (τ, ξ) + W(ξ).
V_τ = V_ξξ + W_ξξ
W(0) = 0
W_ξ(1) + B[W(1) − 1] = 0
V(τ, 0) = 0
V_ξ(τ, 1) + B V(τ, 1) = 0
V(0, ξ) = −W(ξ)   (5.21)
Set W_ξξ = 0. Integrating twice and using the two boundary conditions on W,
W(ξ) = Bξ/(B + 1)   (5.22)
The initial condition on V becomes
V(0, ξ) = −Bξ/(B + 1)   (5.23)
Assume V(τ, ξ) = P(τ)Q(ξ), substitute into the partial differential equation for V, and divide
by PQ as usual.
P′/P = Q″/Q = ±λ^2   (5.24)
We must choose the minus sign for the solution to be bounded. Hence,
P = A e^{−λ^2 τ}
Q = C_1 sin(λξ) + C_2 cos(λξ)   (5.25)
FIGURE 5.1: The eigenvalues of λn = −B tan(λn)
Applying the boundary condition at ξ = 0, we find that C_2 = 0. Now applying the boundary
condition on V at ξ = 1,
C_1 λ cos(λ) + C_1 B sin(λ) = 0   (5.26)
or
λ = −B tan(λ)   (5.27)
This is the equation for determining the eigenvalues λ_n. It is shown graphically in Fig. 5.1.
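The eigenvalue condition (5.27) is easily solved numerically. Rewriting it as λ cos λ + B sin λ = 0, which is just (5.26), avoids the poles of tan; a sketch for the assumed value B = 1:

```python
import math

B = 1.0
# (5.26) rearranged to avoid the poles of tan: g(λ) = λ cos λ + B sin λ
g = lambda lam: lam * math.cos(lam) + B * math.sin(lam)

def bisect(f, a, b, tol=1e-12):
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

# One eigenvalue lies in each interval ((2k−1)π/2, (2k+1)π/2)
roots = [bisect(g, (2 * k - 1) * math.pi / 2 + 1e-9, (2 * k + 1) * math.pi / 2 - 1e-9)
         for k in range(1, 4)]
for lam in roots:
    assert abs(lam + B * math.tan(lam)) < 1e-6   # satisfies λ = −B tan λ, Eq. (5.27)
```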
Example 5.3 (Superposition of several problems). We’ve seen now that in order to apply
separation of variables the partial differential equation itself must be homogeneous and we have
also seen a technique for transferring the inhomogeneity to one of the boundary conditions or to
the initial condition. But what if several of the boundary conditions are nonhomogeneous? We
demonstrate the technique with the following problem. We have a transient two-dimensional
problem with given conditions on all four faces.
u_t = u_xx + u_yy
u(t, 0, y) = f_1(y)
u(t, a, y) = f_2(y)
u(t, x, 0) = f_3(x)
u(t, x, b) = f_4(x)
u(0, x, y) = g(x, y)   (5.28)
The problem can be broken down into five problems, u = u_1 + u_2 + u_3 + u_4 + u_5:
u_1t = u_1xx + u_1yy
u_1(0, x, y) = g(x, y)   (5.29)
u_1 = 0 on all boundaries
u_2xx + u_2yy = 0
u_2(0, y) = f_1(y)   (5.30)
u_2 = 0 on all other boundaries
u_3xx + u_3yy = 0
u_3(a, y) = f_2(y)   (5.31)
u_3 = 0 on all other boundaries
u_4xx + u_4yy = 0
u_4(x, 0) = f_3(x)   (5.32)
u_4 = 0 on all other boundaries
u_5xx + u_5yy = 0
u_5(x, b) = f_4(x)   (5.33)
u_5 = 0 on all other boundaries
5.1.1 Time-Dependent Boundary Conditions
We will explore this topic when we discuss Laplace transforms.
Example 5.4 (A finite cylinder). Next we consider a cylinder of finite length 2L and radius
r_1. As in the first problem in this chapter, there are two possible length scales, and we choose
r_1. The cylinder has temperature u_0 initially. The ends at z = ±L are suddenly insulated while
the sides are exposed to a fluid at temperature u_1. The differential equation with no variation
in the θ direction and the boundary conditions are
u_t = α[(1/r)(r u_r)_r + u_zz]
u_z(t, r, −L) = u_z(t, r, +L) = 0
k u_r(r_1) + h[u(r_1) − u_1] = 0
u(0, r, z) = u_0
u is bounded   (5.34)
If we choose the length scale as r_1, then we define η = r/r_1, ζ = z/L, and τ = αt/r_1^2. The
normalized temperature can be chosen as U = (u − u_1)/(u_0 − u_1). With these we find that
U_τ = (1/η)(η U_η)_η + (r_1/L)^2 U_ζζ
U_ζ(ζ = ±1) = 0
U_η(η = 1) + B U(η = 1) = 0
U(τ = 0) = 1   (5.35)
where B = hr1/k.
Let U = T(τ)R(η)Z(ζ). Insert into the differential equation and divide by U.
T′/T = (1/(ηR))(η R′)′ + (r_1/L)^2 Z″/Z   (5.36)
Z_ζ(ζ = ±1) = 0
R_η(η = 1) + B R(η = 1) = 0
U(τ = 0) = 1
Again, the dance is the same. The left-hand side of Eq. (5.36) cannot be a function of η or ζ so
each side must be a constant. The constant must be negative for the time term to be bounded.
Experience tells us that Z″/Z must be a negative constant, because otherwise Z would
be an exponential function and we could not simultaneously satisfy the boundary conditions at
ζ = ±1. Thus, we have
T′ = −λ^2 T
η^2 R″ + η R′ + β^2 η^2 R = 0
Z″ = −γ^2 (L/r_1)^2 Z   (5.37)
with solutions
T = A e^{−λ^2 τ}
Z = C_1 cos(γ Lζ/r_1) + C_2 sin(γ Lζ/r_1)
R = C_3 J_0(βη) + C_4 Y_0(βη)   (5.38)
It is clear that C_4 must always be zero when the cylinder is not hollow, because Y_0 is unbounded
at η = 0. The boundary conditions at ζ = ±1 imply that Z is an even function, so that C_2
must be zero. The boundary condition at ζ = 1 is
Z_ζ = −C_1 (γ L/r_1) sin(γ L/r_1) = 0,  or  γ L/r_1 = nπ   (5.39)
The boundary condition at η = 1 requires
C_3[J_0′(β) + B J_0(β)] = 0,  or  B J_0(β) = β J_1(β)   (5.40)
which is the transcendental equation for finding β_m. Also note that
λ^2 = γ_n^2 + β_m^2   (5.41)
By superposition we write the final form of the solution as
U(τ, η, ζ) = Σ_{n=0}^∞ Σ_{m=0}^∞ K_nm e^{−(γ_n^2 + β_m^2)τ} J_0(β_m η) cos(nπζ)   (5.42)
K_nm is found using the orthogonality properties of J_0(β_m η) and cos(nπζ) after applying the
initial condition.
∫_{η=0}^1 η J_0(β_m η) dη ∫_{ζ=−1}^1 cos(nπζ) dζ = K_nm ∫_{η=0}^1 η J_0^2(β_m η) dη ∫_{ζ=−1}^1 cos^2(nπζ) dζ   (5.43)
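The transcendental equation (5.40) can be solved numerically by a scan-and-bisect approach. A sketch for the assumed value B = 1, whose first root agrees with tabulated values near 1.2558:

```python
import math

def j0(x, terms=40):
    return sum((-1)**k / math.factorial(k)**2 * (x / 2) ** (2 * k) for k in range(terms))

def j1(x, terms=40):
    return sum((-1)**k / (math.factorial(k) * math.factorial(k + 1))
               * (x / 2) ** (1 + 2 * k) for k in range(terms))

def bisect(f, a, b, tol=1e-12):
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

B = 1.0
h = lambda b: B * j0(b) - b * j1(b)   # (5.40): B J0(β) = β J1(β)

betas, x, step = [], 0.05, 0.05
while x + step < 12.0 and len(betas) < 3:
    if h(x) * h(x + step) < 0:
        betas.append(bisect(h, x, x + step))
    x += step
# For B = 1 the first root is β1 ≈ 1.2558 (cf. tabulated values for Biot number 1)
assert 1.25 < betas[0] < 1.26
```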
Example 5.5 (Heat transfer in a sphere). Consider heat transfer in a solid sphere whose
surface temperature is a function of θ, the angle measured downward from the z-axis (see Fig.
1.3, Chapter 1). The problem is steady and there is no heat source.
r ∂^2/∂r^2 (ru) + (1/sin θ) ∂/∂θ [sin θ ∂u/∂θ] = 0
u(r = 1) = f(θ)
u is bounded   (5.44)
Substituting x = cos θ,
r ∂^2/∂r^2 (ru) + ∂/∂x [(1 − x^2) ∂u/∂x] = 0   (5.45)
We separate variables by assuming u = R(r)X(x). Substitute into the equation, divide by RX,
and find
(r/R)(rR)″ = −[(1 − x^2)X′]′/X = ±λ^2   (5.46)
or
r(rR)″ ∓ λ^2 R = 0   (5.47)
[(1 − x^2)X′]′ ± λ^2 X = 0
The second of these is Legendre's equation, and we have seen that it has bounded solutions at
x = ±1 when λ^2 = n(n + 1). The first equation is of the Cauchy–Euler type with solution
R = C_1 r^n + C_2 r^{−n−1}   (5.48)
Noting that the constant C_2 must be zero to obtain a bounded solution at r = 0, and using
superposition,
u = Σ_{n=0}^∞ K_n r^n P_n(x)   (5.49)
and using the condition at r = 1 and the orthogonality of the Legendre polynomials (with
x = cos θ, dx = −sin θ dθ),
∫_{θ=0}^π f(θ) P_n(cos θ) sin θ dθ = ∫_{θ=0}^π K_n P_n^2(cos θ) sin θ dθ = 2K_n/(2n + 1)   (5.50)
K_n = [(2n + 1)/2] ∫_{θ=0}^π f(θ) P_n(cos θ) sin θ dθ   (5.51)
5.2 VIBRATIONS PROBLEMS
We now consider some vibrations problems. In Chapter 2 we found a solution for a vibrating
string initially displaced. We now consider the problem of a string forced by a sine function.
Example 5.6 (Resonance in a vibration problem). Equation (1.21) in Chapter 1 is
y_tt = a^2 y_xx + A sin(ηt)   (5.52)
Select the length scale as L, the length of the string, and the time scale as L/a. Defining
ξ = x/L and τ = ta/L,
y_ττ = y_ξξ + C sin(ωτ)   (5.53)
where ω is a dimensionless frequency, ηL/a, and C = AL^2/a^2.
The boundary conditions and initial velocity and displacement are all zero, so the boundary
conditions are all homogeneous, while the differential equation is not. Back in Chapter 2 we
saw one way of dealing with this. Note that it wouldn’t have worked had q been a function of
time. We approach this problem somewhat differently. From experience, we expect a solution
of the form
y(ξ, τ) = Σ_{n=1}^∞ B_n(τ) sin(nπξ)   (5.54)
where the coefficients B_n(τ) are to be determined. Note that the equation above satisfies the
end conditions. Inserting this series into the differential equation and using the Fourier sine
series of C,
C = Σ_{n=1}^∞ 2C[1 − (−1)^n]/(nπ) sin(nπξ)   (5.55)
Σ_{n=1}^∞ B_n″(τ) sin(nπξ) = Σ_{n=1}^∞ [−(nπ)^2 B_n(τ)] sin(nπξ) + C Σ_{n=1}^∞ 2[1 − (−1)^n]/(nπ) sin(nπξ) sin(ωτ)   (5.56)
Thus
B_n″ = −(nπ)^2 B_n + C 2[1 − (−1)^n]/(nπ) sin(ωτ)   (5.57)
subject to the initial conditions y = 0 and y_τ = 0 at τ = 0, that is, B_n(0) = B_n′(0) = 0. When
n is even the solution is zero. That is, since the right-hand side is zero when n is even,
B_n = C_1 cos(nπτ) + C_2 sin(nπτ)   (5.58)
But since both B_n(0) and B_n′(0) are zero, C_1 = C_2 = 0. When n is odd we can write
B_{2n−1}″ + [(2n − 1)π]^2 B_{2n−1} = 4C/((2n − 1)π) sin(ωτ)   (5.59)
Here (2n − 1)π is the natural frequency of the system, ω_n. The homogeneous solution of the
above equation is
B_{2n−1} = D_1 cos(ω_n τ) + D_2 sin(ω_n τ)   (5.60)
To obtain the particular solution we assume a solution in the form of sines and cosines,
B_P = E_1 cos(ωτ) + E_2 sin(ωτ)   (5.61)
Differentiating and inserting into the differential equation, we find
−E_1 ω^2 cos(ωτ) − E_2 ω^2 sin(ωτ) + ω_n^2[E_1 cos(ωτ) + E_2 sin(ωτ)] = (4C/ω_n) sin(ωτ)   (5.62)
Equating coefficients of the sine and cosine terms,
E_1(ω_n^2 − ω^2) cos(ωτ) = 0
E_2(ω_n^2 − ω^2) sin(ωτ) = (4C/ω_n) sin(ωτ)   (5.63)
Thus
E_1 = 0,   E_2 = 4C/[ω_n(ω_n^2 − ω^2)],   ω ≠ ω_n   (5.64)
Combining the homogeneous and particular solutions,
B_{2n−1} = D_1 cos(ω_n τ) + D_2 sin(ω_n τ) + 4C/[ω_n(ω_n^2 − ω^2)] sin(ωτ)   (5.65)
The initial conditions at τ = 0 require that
D_1 = 0
D_2 = −4C(ω/ω_n)/[ω_n(ω_n^2 − ω^2)]   (5.66)
The solution for B_{2n−1} is
B_{2n−1} = 4C/[ω_n(ω^2 − ω_n^2)] [(ω/ω_n) sin(ω_n τ) − sin(ωτ)],   ω ≠ ω_n   (5.67)
The solution is therefore
y(ξ, τ) = 4C Σ_{n=1}^∞ sin(ω_n ξ)/[ω_n(ω^2 − ω_n^2)] [(ω/ω_n) sin(ω_n τ) − sin(ωτ)]   (5.68)
When ω = ω_n the above is not valid. The form of the particular solution should instead be chosen as
B_P = E_1 τ cos(ωτ) + E_2 τ sin(ωτ)   (5.69)
Differentiating and inserting into the differential equation for B_{2n−1},
[E_1 τ ω_n^2 + 2E_2 ω_n − E_1 τ ω_n^2] cos(ω_n τ) + [E_2 τ ω_n^2 − E_2 τ ω_n^2 − 2E_1 ω_n] sin(ω_n τ) = (4C/ω_n) sin(ω_n τ)   (5.70)
Thus
E_2 = 0,   E_1 = −2C/ω_n^2   (5.71)
and the solution when $\omega = \omega_n$ is
$$B_{2n-1} = C_1\cos(\omega_n\tau) + C_2\sin(\omega_n\tau) - \frac{2C}{\omega_n^2}\tau\cos(\omega_n\tau) \tag{5.72}$$
The initial condition on position implies that $C_1 = 0$. The initial condition of zero velocity gives
$$\omega_n C_2 - \frac{2C}{\omega_n^2} = 0 \tag{5.73}$$
The solution for $B_{2n-1}$ is
$$B_{2n-1} = \frac{2C}{\omega_n^3}[\sin(\omega_n\tau) - \omega_n\tau\cos(\omega_n\tau)] \tag{5.74}$$
Superposition now gives
$$y(\xi,\tau) = \sum_{n=1}^{\infty}\frac{2C}{\omega_n^3}\sin(\omega_n\xi)[\sin(\omega_n\tau) - \omega_n\tau\cos(\omega_n\tau)] \tag{5.75}$$
An interesting feature of the solution is that there are an infinite number of natural (dimensional) frequencies,
$$\frac{a}{L}[\pi, 3\pi, 5\pi, \ldots, (2n-1)\pi, \ldots] \tag{5.76}$$
If the system is excited at any of the frequencies, the magnitude of the oscillation will grow
(theoretically) without bound. The smaller natural frequencies will cause the growth to be
fastest.
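The resonant growth of the series (5.75) is easy to see numerically. A minimal sketch, assuming a unit value of $C$ and an arbitrary truncation level $N$ (neither is specified in the text):

```python
import math

def y_resonant(xi, tau, C=1.0, N=50):
    """Truncated series (5.75): the response when the system is
    excited at its natural frequencies; amplitude grows with tau."""
    total = 0.0
    for n in range(1, N + 1):
        wn = (2 * n - 1) * math.pi  # nondimensional natural frequencies
        total += (2 * C / wn**3) * math.sin(wn * xi) * (
            math.sin(wn * tau) - wn * tau * math.cos(wn * tau))
    return total

# The fixed ends y(0, tau) = y(1, tau) = 0 hold term by term.
print(y_resonant(0.0, 2.3), y_resonant(1.0, 2.3))
```

Evaluating at increasing values of $\tau$ shows the unbounded linear-in-$\tau$ growth described above.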
Example 5.7 (Vibration of a circular membrane). Consider now a circular membrane (like a drum). The partial differential equation describing the displacement $y(t, r, \theta)$ was derived in Chapter 1:
$$\frac{1}{a^2}\frac{\partial^2 y}{\partial t^2} = \frac{1}{r}\frac{\partial}{\partial r}\left(r\frac{\partial y}{\partial r}\right) + \frac{1}{r^2}\frac{\partial^2 y}{\partial\theta^2} \tag{5.77}$$
Suppose it has an initial displacement $y(0, r, \theta) = f(r, \theta)$ and initial velocity $y_t = 0$. The displacement at $r = r_1$ is zero and the displacement must be finite for all $r$, $\theta$, and $t$. The length scale is $r_1$ and the time scale is $r_1/a$, so we set $r/r_1 = \eta$ and $ta/r_1 = \tau$. We have
$$\frac{\partial^2 y}{\partial\tau^2} = \frac{1}{\eta}\frac{\partial}{\partial\eta}\left(\eta\frac{\partial y}{\partial\eta}\right) + \frac{1}{\eta^2}\frac{\partial^2 y}{\partial\theta^2} \tag{5.78}$$
Separating variables as $y = T(\tau)R(\eta)S(\theta)$, substituting into the equation, and dividing by $TRS$,
$$\frac{T''}{T} = \frac{1}{\eta R}(\eta R')' + \frac{1}{\eta^2}\frac{S''}{S} = -\lambda^2 \tag{5.79}$$
The negative sign is chosen because we anticipate sine and cosine solutions for $T$. We also note that
$$\lambda^2\eta^2 + \frac{\eta}{R}(\eta R')' = -\frac{S''}{S} = \pm\beta^2 \tag{5.80}$$
To avoid exponential solutions in the $\theta$ direction we must choose the positive sign. Thus we have
$$T'' = -\lambda^2 T, \qquad S'' = -\beta^2 S, \qquad \eta(\eta R')' + (\eta^2\lambda^2 - \beta^2)R = 0 \tag{5.81}$$
The solutions of the first two of these are
$$T = A_1\cos(\lambda\tau) + A_2\sin(\lambda\tau), \qquad S = B_1\cos(\beta\theta) + B_2\sin(\beta\theta) \tag{5.82}$$
The initial condition on the velocity guarantees that $A_2 = 0$. $\beta$ must be an integer $n$ so that the solution comes around to the same place as $\theta$ goes from 0 to $2\pi$. Either $B_1$ or $B_2$ can be chosen zero because it doesn't matter where $\theta$ begins (we can adjust $f(r, \theta)$).
$$T(\tau)S(\theta) = AB\cos(\lambda\tau)\sin(n\theta) \tag{5.83}$$
The differential equation for $R$ should be recognized from our discussion of Bessel functions. The solution with $\beta = n$ is the Bessel function of the first kind of order $n$. The Bessel function of the second kind may be omitted because it is unbounded at $r = 0$. The condition that $R(1) = 0$ means that $\lambda = \lambda_{mn}$, the $m$th root of $J_n(\lambda) = 0$. The solution can now be completed using superposition and the orthogonality properties:
$$y(\tau, \eta, \theta) = \sum_{n=0}^{\infty}\sum_{m=1}^{\infty} K_{nm} J_n(\lambda_{mn}\eta)\cos(\lambda_{mn}\tau)\sin(n\theta) \tag{5.84}$$
Using the initial condition,
$$f(\eta, \theta) = \sum_{n=0}^{\infty}\sum_{m=1}^{\infty} K_{nm} J_n(\lambda_{mn}\eta)\sin(n\theta) \tag{5.85}$$
and the orthogonality of $\sin(n\theta)$ and $J_n(\lambda_{mn}\eta)$,
$$\int_{\theta=0}^{2\pi}\int_{\eta=0}^{1} f(\eta,\theta)\,\eta J_n(\lambda_{mn}\eta)\sin(n\theta)\,d\eta\,d\theta = K_{nm}\int_{\theta=0}^{2\pi}\sin^2(n\theta)\,d\theta\int_{\eta=0}^{1}\eta J_n^2(\lambda_{mn}\eta)\,d\eta = \frac{\pi K_{nm}}{2}J_{n+1}^2(\lambda_{mn}) \tag{5.86}$$
since $\int_0^{2\pi}\sin^2(n\theta)\,d\theta = \pi$ and $\int_0^1 \eta J_n^2(\lambda_{mn}\eta)\,d\eta = \frac{1}{2}J_{n+1}^2(\lambda_{mn})$. Hence
$$K_{nm} = \frac{2}{\pi J_{n+1}^2(\lambda_{mn})}\int_{\theta=0}^{2\pi}\int_{\eta=0}^{1} f(\eta,\theta)\,\eta J_n(\lambda_{mn}\eta)\sin(n\theta)\,d\eta\,d\theta \tag{5.87}$$
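The eigenvalues $\lambda_{mn}$ must in practice be computed numerically. A stdlib-only sketch for the first zero of $J_0$, using its power series and the fact (from Bessel-function tables) that the root lies between 2 and 3:

```python
import math

def j0(x):
    """J0 from its power series: sum_k (-1)^k (x/2)^(2k) / (k!)^2."""
    total, term = 0.0, 1.0
    for k in range(60):
        total += term
        term *= -(x / 2) ** 2 / ((k + 1) ** 2)
    return total

# Bisection for the first root of J0, bracketed in [2, 3].
lo, hi = 2.0, 3.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if j0(lo) * j0(mid) <= 0:
        hi = mid
    else:
        lo = mid
lam_10 = 0.5 * (lo + hi)
print(lam_10)  # ≈ 2.404825...
```

The same bracketing-and-bisection idea extends to the higher roots and to $J_n$ for $n > 0$.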
Problems
1. The conduction equation in one dimension is to be solved subject to an insulated surface
at x = 0 and a convective boundary condition at x = L. Initially the temperature is
u(0, x) = f (x), a function of position. Thus
$$u_t = \alpha u_{xx}, \qquad u_x(t, 0) = 0, \qquad ku_x(t, L) = -h[u(t, L) - u_1], \qquad u(0, x) = f(x)$$
First nondimensionalize and normalize the equations. Then solve by separation of
variables. Find a specific solution when $f(x) = 1 - x^2$.
2. Consider the diffusion problem
$$u_t = \alpha u_{xx} + q(x), \qquad u_x(t, 0) = 0, \qquad u_x(t, L) = -h[u(t, L) - u_1], \qquad u(0, x) = u_1$$
Define time and length scales and define a u scale such that the initial value of the
dependent variable is zero. Solve by separation of variables and find a specific solution
for q(x) = Q, a constant. Refer to Problem 2.1 in Chapter 2.
3. Solve the steady-state conduction problem
$$u_{xx} + u_{yy} = 0, \qquad u_x(0, y) = 0, \qquad u(a, y) = u_0, \qquad u(x, 0) = u_1, \qquad u_y(x, b) = -h[u(x, b) - u_1]$$
Note that one could choose as the length scale either $a$ or $b$. Choose $a$. Note that if you choose
$$U = \frac{u - u_1}{u_0 - u_1}$$
there is only one nonhomogeneous boundary condition and it is normalized. Solve by separation of variables.
5.3 FOURIER INTEGRALS
We consider now problems in which one dimension of the domain is infinite in extent. Recall that a function defined on an interval $(-c, c)$ can be represented as a Fourier series
$$f(x) = \frac{1}{2c}\int_{-c}^{c} f(\varsigma)\,d\varsigma + \frac{1}{c}\sum_{n=1}^{\infty}\left[\int_{-c}^{c} f(\varsigma)\cos\frac{n\pi\varsigma}{c}\,d\varsigma\right]\cos\frac{n\pi x}{c} + \frac{1}{c}\sum_{n=1}^{\infty}\left[\int_{-c}^{c} f(\varsigma)\sin\frac{n\pi\varsigma}{c}\,d\varsigma\right]\sin\frac{n\pi x}{c} \tag{5.88}$$
which can be expressed using trigonometric identities as
$$f(x) = \frac{1}{2c}\int_{-c}^{c} f(\varsigma)\,d\varsigma + \frac{1}{c}\sum_{n=1}^{\infty}\int_{-c}^{c} f(\varsigma)\cos\left[\frac{n\pi}{c}(\varsigma - x)\right]d\varsigma \tag{5.89}$$
We now formally let $c$ approach infinity. If $\int_{-\infty}^{\infty} f(\varsigma)\,d\varsigma$ exists, the first term vanishes. Let $\Delta\alpha = \pi/c$. Then
$$f(x) = \frac{2}{\pi}\sum_{n=1}^{\infty}\int_{\varsigma=0}^{c} f(\varsigma)\cos[n\,\Delta\alpha(\varsigma - x)]\,d\varsigma\,\Delta\alpha \tag{5.90}$$
or, with
$$g_c(n\,\Delta\alpha, x) = \int_{\varsigma=0}^{c} f(\varsigma)\cos[n\,\Delta\alpha(\varsigma - x)]\,d\varsigma \tag{5.91}$$
we have
$$f(x) = \frac{2}{\pi}\sum_{n=1}^{\infty} g_c(n\,\Delta\alpha, x)\,\Delta\alpha \tag{5.92}$$
As $c$ approaches infinity we can imagine that $\Delta\alpha$ approaches $d\alpha$ and $n\,\Delta\alpha$ approaches $\alpha$, whereupon the equation for $f(x)$ becomes an integral expression,
$$f(x) = \frac{2}{\pi}\int_{\alpha=0}^{\infty}\int_{\varsigma=0}^{\infty} f(\varsigma)\cos[\alpha(\varsigma - x)]\,d\varsigma\,d\alpha \tag{5.93}$$
which can alternatively be written as
$$f(x) = \int_{\alpha=0}^{\infty}[A(\alpha)\cos\alpha x + B(\alpha)\sin\alpha x]\,d\alpha \tag{5.94}$$
where
$$A(\alpha) = \frac{2}{\pi}\int_{\varsigma=0}^{\infty} f(\varsigma)\cos(\alpha\varsigma)\,d\varsigma \tag{5.95}$$
and
$$B(\alpha) = \frac{2}{\pi}\int_{\varsigma=0}^{\infty} f(\varsigma)\sin(\alpha\varsigma)\,d\varsigma \tag{5.96}$$
Example 5.8 (Transient conduction in a semi-infinite region). Consider the boundary value problem
$$u_t = u_{xx} \qquad (x \geq 0,\ t \geq 0)$$
$$u(0, t) = 0 \tag{5.97}$$
$$u(x, 0) = f(x)$$
This represents transient heat conduction with an initial temperature $f(x)$ and the boundary at $x = 0$ suddenly reduced to zero. Separation of variables as $T(t)X(x)$ would normally yield a solution of the form
$$B_n\exp(-\lambda^2 t)\sin(\lambda x), \qquad \lambda = \frac{n\pi}{c} \tag{5.98}$$
for $x$ on the interval $(0, c)$. Thus, for $x$ on the interval $0 \leq x \leq \infty$ we have
$$B(\alpha) = \frac{2}{\pi}\int_{\varsigma=0}^{\infty} f(\varsigma)\sin(\alpha\varsigma)\,d\varsigma \tag{5.99}$$
and the solution is
$$u(x,t) = \frac{2}{\pi}\int_{\lambda=0}^{\infty}\exp(-\lambda^2 t)\sin(\lambda x)\int_{s=0}^{\infty} f(s)\sin(\lambda s)\,ds\,d\lambda \tag{5.100}$$
Noting that
$$2\sin(\lambda s)\sin(\lambda x) = \cos[\lambda(s - x)] - \cos[\lambda(s + x)] \tag{5.101}$$
and that
$$\int_0^{\infty}\exp(-\gamma^2\alpha)\cos(\gamma b)\,d\gamma = \frac{1}{2}\sqrt{\frac{\pi}{\alpha}}\exp\left(-\frac{b^2}{4\alpha}\right) \tag{5.102}$$
we have
$$u(x,t) = \frac{1}{2\sqrt{\pi t}}\int_0^{\infty} f(s)\left[\exp\left(-\frac{(s-x)^2}{4t}\right) - \exp\left(-\frac{(s+x)^2}{4t}\right)\right]ds \tag{5.103}$$
Substituting into the first of these integrals
$$\sigma^2 = \frac{(s-x)^2}{4t}$$
and into the second integral
$$\sigma^2 = \frac{(s+x)^2}{4t} \tag{5.104}$$
we obtain
$$u(x,t) = \frac{1}{\sqrt{\pi}}\int_{-x/2\sqrt{t}}^{\infty} f(x + 2\sigma\sqrt{t})\,e^{-\sigma^2}d\sigma - \frac{1}{\sqrt{\pi}}\int_{x/2\sqrt{t}}^{\infty} f(-x + 2\sigma\sqrt{t})\,e^{-\sigma^2}d\sigma \tag{5.105}$$
In the special case where $f(x) = u_0$,
$$u(x,t) = \frac{2u_0}{\sqrt{\pi}}\int_0^{x/2\sqrt{t}}\exp(-\sigma^2)\,d\sigma = u_0\,\mathrm{erf}\left(\frac{x}{2\sqrt{t}}\right) \tag{5.106}$$
where $\mathrm{erf}(p)$ is the Gauss error function defined as
$$\mathrm{erf}(p) = \frac{2}{\sqrt{\pi}}\int_0^{p}\exp(-\sigma^2)\,d\sigma \tag{5.107}$$
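The closed form (5.106) is available directly through the standard library's error function. A small sketch ($u_0 = 1$ and the sample points are arbitrary choices):

```python
import math

def u(x, t, u0=1.0):
    """Eq. (5.106): semi-infinite solid, uniform initial temperature
    u0, with the wall at x = 0 dropped to zero at t = 0."""
    return u0 * math.erf(x / (2.0 * math.sqrt(t)))

# Zero at the wall; the initial value persists far from the wall.
print(u(0.0, 0.1), u(50.0, 0.1))
```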
Example 5.9 (Steady conduction in a quadrant). Next we consider steady conduction in the region $x \geq 0$, $y \geq 0$ in which the face at $x = 0$ is kept at zero temperature and the face at $y = 0$ is a function of $x$: $u = f(x)$. The solution is also assumed to be bounded.
$$u_{xx} + u_{yy} = 0 \tag{5.108}$$
$$u(x, 0) = f(x) \tag{5.109}$$
$$u(0, y) = 0 \tag{5.110}$$
Since $u(0, y) = 0$ the solution should take the form $e^{-\alpha y}\sin(\alpha x)$, which is, according to our experience with separation of variables, a solution of the equation $\nabla^2 u = 0$. We therefore assume a solution of the form
$$u(x, y) = \int_0^{\infty} B(\alpha)\,e^{-\alpha y}\sin(\alpha x)\,d\alpha \tag{5.111}$$
with
$$B(\alpha) = \frac{2}{\pi}\int_0^{\infty} f(\varsigma)\sin(\alpha\varsigma)\,d\varsigma \tag{5.112}$$
The solution can then be written as
$$u(x, y) = \frac{2}{\pi}\int_{\varsigma=0}^{\infty} f(\varsigma)\int_{\alpha=0}^{\infty} e^{-\alpha y}\sin(\alpha x)\sin(\alpha\varsigma)\,d\alpha\,d\varsigma \tag{5.113}$$
Using the trigonometric identity $2\sin(\alpha x)\sin(\alpha\varsigma) = \cos[\alpha(\varsigma - x)] - \cos[\alpha(\varsigma + x)]$ and noting that
$$\int_0^{\infty} e^{-\alpha y}\cos(\alpha\beta)\,d\alpha = \frac{y}{\beta^2 + y^2} \tag{5.114}$$
we find
$$u(x, y) = \frac{y}{\pi}\int_0^{\infty} f(\varsigma)\left[\frac{1}{(\varsigma - x)^2 + y^2} - \frac{1}{(\varsigma + x)^2 + y^2}\right]d\varsigma \tag{5.115}$$
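Equation (5.115) can be checked by quadrature. A sketch with the test profile $f = 1$, chosen because the integral then reduces to the known closed form $u = (2/\pi)\arctan(x/y)$; the grid size and truncation point are arbitrary choices:

```python
import math

def u_quadrant(x, y, n=200000, smax=2000.0):
    """Trapezoid quadrature of (5.115) with f = 1, a test profile
    for which u = (2/pi) * atan(x/y) in closed form."""
    h = smax / n
    total = 0.0
    for i in range(n + 1):
        s = i * h
        g = 1.0 / ((s - x)**2 + y**2) - 1.0 / ((s + x)**2 + y**2)
        total += (0.5 if i in (0, n) else 1.0) * g
    return (y / math.pi) * total * h

print(u_quadrant(1.0, 0.5), (2.0 / math.pi) * math.atan(1.0 / 0.5))
```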
Problem
Consider the transient heat conduction problem
$$u_t = u_{xx} + u_{yy}, \qquad x \geq 0, \quad 0 \leq y \leq 1, \quad t \geq 0$$
with boundary and initial conditions
$$u(t, 0, y) = 0, \qquad u(t, x, 0) = 0, \qquad u(t, x, 1) = 0, \qquad u(0, x, y) = u_0$$
and $u(t, x, y)$ is bounded.
Separate the problem into two problems u(t, x, y) = v(t, x)w(t, y) and give appropriate
boundary conditions. Show that the solution is given by
$$u(t, x, y) = \frac{4}{\pi}\,\mathrm{erf}\left(\frac{x}{2\sqrt{t}}\right)\sum_{n=1}^{\infty}\frac{\sin[(2n-1)\pi y]}{2n-1}\exp[-(2n-1)^2\pi^2 t]$$
FURTHER READING
V. Arpaci, Conduction Heat Transfer. Reading, MA: Addison-Wesley, 1966.
J. W. Brown and R. V. Churchill, Fourier Series and Boundary Value Problems. 6th edition. New
York: McGraw-Hill, 2001.
C H A P T E R 6
Integral Transforms: The Laplace
Transform
Integral transforms are a powerful method of obtaining solutions to both ordinary and partial differential equations. They are used to change ordinary differential equations into algebraic equations and partial differential equations into ordinary differential equations. The general idea is to multiply a function $f(t)$ of some independent variable $t$ (not necessarily time) by a kernel function $K(t, s)$ and integrate over some $t$ space to obtain a function $F(s)$ of $s$, which one hopes is easier to work with. Of course one must then invert the process to recover the desired function $f(t)$. In general,
$$F(s) = \int_{t=a}^{b} K(t, s)\,f(t)\,dt \tag{6.1}$$
6.1 THE LAPLACE TRANSFORM
A useful and widely used integral transform is the Laplace transform, defined as
$$\mathcal{L}[f(t)] = F(s) = \int_{t=0}^{\infty} f(t)\,e^{-st}\,dt \tag{6.2}$$
Obviously, the integral must exist. The function $f(t)$ must be sectionally continuous and of exponential order, which is to say $|f(t)| \leq Me^{kt}$ when $t > 0$ for some constants $M$ and $k$. For example, neither the Laplace transform of $t^{-1}$ nor that of $\exp(t^2)$ exists.
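The definition (6.2) can be approximated by direct quadrature, which is handy for checking table entries. A sketch (the truncation point T and grid size are arbitrary choices):

```python
import math

def laplace(f, s, T=60.0, n=200000):
    """Trapezoid approximation of the Laplace integral (6.2),
    truncated at t = T (fine once the integrand has decayed)."""
    h = T / n
    total = 0.0
    for i in range(n + 1):
        t = i * h
        total += (0.5 if i in (0, n) else 1.0) * f(t) * math.exp(-s * t)
    return total * h

# Compare with the table entry L[e^{-2t}] = 1/(s + 2) at s = 1.
print(laplace(lambda t: math.exp(-2.0 * t), 1.0))  # ≈ 1/3
```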
The inversion formula is
$$\mathcal{L}^{-1}[F(s)] = f(t) = \frac{1}{2\pi i}\lim_{L\to\infty}\int_{\gamma - iL}^{\gamma + iL} F(s)\,e^{ts}\,ds \tag{6.3}$$
in which γ – iL and γ + iL are complex numbers. We will put off using the inversion integral
until we cover complex variables. Meanwhile, there are many tables giving Laplace transforms
and inverses. We will now spend considerable time developing the theory.
6.2 SOME IMPORTANT TRANSFORMS
6.2.1 Exponentials
First consider the exponential function:
$$\mathcal{L}[e^{-at}] = \int_{t=0}^{\infty} e^{-at}e^{-st}\,dt = \int_{t=0}^{\infty} e^{-(s+a)t}\,dt = \frac{1}{s+a} \tag{6.4}$$
If $a = 0$, this reduces to
$$\mathcal{L}[1] = 1/s \tag{6.5}$$
6.2.2 Shifting in the s-domain
$$\mathcal{L}[e^{at}f(t)] = \int_{t=0}^{\infty} e^{-(s-a)t}f(t)\,dt = F(s - a) \tag{6.6}$$
6.2.3 Shifting in the time domain
Consider the shifted function that vanishes before $t = a$:
$$f_a(t) = 0 \quad t < a, \qquad f_a(t) = f(t - a) \quad t > a \tag{6.7}$$
Then
$$\int_{\tau=0}^{\infty} e^{-s\tau} f_a(\tau)\,d\tau = \int_{\tau=0}^{a} 0\,d\tau + \int_{\tau=a}^{\infty} e^{-s\tau} f(\tau - a)\,d\tau \tag{6.8}$$
Let $\tau - a = t$. Then
$$\int_{t=0}^{\infty} e^{-s(t+a)} f(t)\,dt = F(s)\,e^{-as} = \mathcal{L}[f_a(t)] \tag{6.9}$$
the transform of the shifted function described above.
6.2.4 Sine and cosine
Now consider the sine and cosine functions. We shall see in the next chapter (and you should already know) that
$$e^{ikt} = \cos(kt) + i\sin(kt) \tag{6.10}$$
Thus the Laplace transform is
$$\mathcal{L}[e^{ikt}] = \mathcal{L}[\cos(kt)] + i\mathcal{L}[\sin(kt)] = \frac{1}{s - ik} = \frac{s + ik}{(s + ik)(s - ik)} = \frac{s}{s^2 + k^2} + i\,\frac{k}{s^2 + k^2} \tag{6.11}$$
so
$$\mathcal{L}[\sin(kt)] = \frac{k}{s^2 + k^2} \tag{6.12}$$
$$\mathcal{L}[\cos(kt)] = \frac{s}{s^2 + k^2} \tag{6.13}$$
6.2.5 Hyperbolic functions
Similarly for hyperbolic functions,
$$\mathcal{L}[\sinh(kt)] = \mathcal{L}\left[\frac{1}{2}(e^{kt} - e^{-kt})\right] = \frac{1}{2}\left[\frac{1}{s - k} - \frac{1}{s + k}\right] = \frac{k}{s^2 - k^2} \tag{6.14}$$
Similarly,
$$\mathcal{L}[\cosh(kt)] = \frac{s}{s^2 - k^2} \tag{6.15}$$
6.2.6 Powers of t: $t^m$
We shall soon see that the Laplace transform of $t^m$ is
$$\mathcal{L}[t^m] = \frac{\Gamma(m+1)}{s^{m+1}}, \qquad m > -1 \tag{6.16}$$
Using this together with the $s$-domain shifting result,
$$\mathcal{L}[t^m e^{-at}] = \frac{\Gamma(m+1)}{(s+a)^{m+1}} \tag{6.17}$$
Example 6.1. Find the inverse transform of the function
$$F(s) = \frac{1}{(s-1)^3}$$
This is a function that is shifted in the $s$-domain and hence Eq. (6.6) is applicable. Noting that $\mathcal{L}^{-1}[1/s^3] = t^2/\Gamma(3) = t^2/2$ from Eq. (6.16),
$$f(t) = \frac{t^2}{2}\,e^{t}$$
Or we could use Eq. (6.17) directly.
Example 6.2. Find the inverse transform of the function
$$F(s) = \frac{3}{s^2 + 4}\,e^{-s}$$
The inverse transform of
$$\frac{3}{s^2 + 4} = \frac{3}{2}\cdot\frac{2}{s^2 + 4}$$
is, according to Eq. (6.12),
$$f(t) = \frac{3}{2}\sin(2t)$$
The exponential term implies shifting in the time domain by 1. Thus
$$f(t) = 0, \quad t < 1; \qquad f(t) = \frac{3}{2}\sin[2(t-1)], \quad t > 1$$
Example 6.3. Find the inverse transform of
$$F(s) = \frac{s}{(s-2)^2 + 1}$$
The denominator is shifted in the $s$-domain. Thus we shift the numerator term and write $F(s)$ as two terms:
$$F(s) = \frac{s - 2}{(s-2)^2 + 1} + \frac{2}{(s-2)^2 + 1}$$
Equations (6.6), (6.12), and (6.13) are applicable. The inverse transform of the first term is a shifted cosine and that of the second is a shifted sine. Therefore each must be multiplied by $\exp(2t)$. The inverse transform is
$$f(t) = e^{2t}\cos(t) + 2e^{2t}\sin(t)$$
FIGURE 6.1: The Heaviside step
6.2.7 Heaviside step
A frequently useful function is the Heaviside step function, defined as
$$U_k(t) = 0 \quad 0 < t < k; \qquad U_k(t) = 1 \quad k < t \tag{6.18}$$
It is shown in Fig. 6.1. The Laplace transform is
$$\mathcal{L}[U_k(t)] = \int_{t=k}^{\infty} e^{-st}\,dt = \frac{1}{s}\,e^{-ks} \tag{6.19}$$
The Heaviside step (sometimes called the unit step) is useful for finding the Laplace transforms
of periodic functions.
Example 6.4 (Periodic functions). For example, consider the periodic function shown in
Fig. 6.2.
It can be represented by an infinite series of shifted Heaviside functions as follows:
$$f(t) = U_0 - 2U_k + 2U_{2k} - 2U_{3k} + \cdots = U_0 + \sum_{n=1}^{\infty} (-1)^n\,2U_{nk} \tag{6.20}$$
FIGURE 6.2: A periodic square wave
FIGURE 6.3: The Dirac delta function
The Laplace transform is found term by term:
$$\mathcal{L}[f(t)] = \frac{1}{s}\left\{1 - 2e^{-sk}\left[1 - e^{-sk} + e^{-2sk} - e^{-3sk}\cdots\right]\right\} = \frac{1}{s}\left[1 - \frac{2e^{-sk}}{1 + e^{-sk}}\right] = \frac{1}{s}\,\frac{1 - e^{-sk}}{1 + e^{-sk}} \tag{6.21}$$
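The closed form (6.21) can be compared against a brute-force transform of the square wave built from Eq. (6.20). A sketch with arbitrary choices of $s$, $k$, and the quadrature parameters:

```python
import math

def square_wave(t, k=1.0):
    """The wave of Fig. 6.2 via Eq. (6.20): +1 on (0,k), -1 on (k,2k), ..."""
    return 1.0 if int(t // k) % 2 == 0 else -1.0

def laplace_num(f, s, T=80.0, n=400000):
    h = T / n
    total = 0.0
    for i in range(n + 1):
        t = i * h
        total += (0.5 if i in (0, n) else 1.0) * f(t) * math.exp(-s * t)
    return total * h

s, k = 1.0, 1.0
closed = (1.0 / s) * (1.0 - math.exp(-s * k)) / (1.0 + math.exp(-s * k))
print(laplace_num(square_wave, s), closed)
```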
6.2.8 The Dirac delta function
Consider a function defined by
$$\lim_{h\to 0}\frac{U_{t_0}(t) - U_{t_0-h}(t)}{h} = \delta(t - t_0) \tag{6.22}$$
$$\mathcal{L}[\delta(t - t_0)] = e^{-st_0} \tag{6.23}$$
The function, without taking limits, is shown in Fig. 6.3.
6.2.9 Transforms of derivatives
$$\mathcal{L}\left[\frac{df}{dt}\right] = \int_{t=0}^{\infty}\frac{df}{dt}\,e^{-st}\,dt = \int_{t=0}^{\infty} e^{-st}\,df \tag{6.24}$$
and integrating by parts,
$$\mathcal{L}\left[\frac{df}{dt}\right] = \left[f(t)\,e^{-st}\right]_{0}^{\infty} + s\int_{t=0}^{\infty} f(t)\,e^{-st}\,dt = sF(s) - f(0)$$
To find the Laplace transform of the second derivative we let $g(t) = f'(t)$. Taking the Laplace transform,
$$\mathcal{L}[g'(t)] = sG(s) - g(0)$$
and with
$$G(s) = \mathcal{L}[f'(t)] = sF(s) - f(0)$$
we find that
$$\mathcal{L}\left[\frac{d^2 f}{dt^2}\right] = s^2 F(s) - sf(0) - f'(0) \tag{6.25}$$
In general,
$$\mathcal{L}\left[\frac{d^n f}{dt^n}\right] = s^n F(s) - s^{n-1} f(0) - s^{n-2} f'(0) - \cdots - \frac{d^{n-1} f}{dt^{n-1}}(0) \tag{6.26}$$
The Laplace transform of $t^m$ may be found by using the gamma function. Write
$$\mathcal{L}[t^m] = \int_0^{\infty} t^m e^{-st}\,dt \quad \text{and let } x = st \tag{6.27}$$
$$\mathcal{L}[t^m] = \int_{x=0}^{\infty}\left(\frac{x}{s}\right)^m e^{-x}\,\frac{dx}{s} = \frac{1}{s^{m+1}}\int_{x=0}^{\infty} x^m e^{-x}\,dx = \frac{\Gamma(m+1)}{s^{m+1}} \tag{6.28}$$
which is true for all $m > -1$, even nonintegers.
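The nonintegral case is easy to spot-check against the standard library's gamma function. A sketch for $m = 1/2$ (all numerical parameters are arbitrary choices):

```python
import math

def laplace_tm(m, s, T=100.0, n=200000):
    """Trapezoid approximation of the transform of t^m, Eq. (6.28)."""
    h = T / n
    total = 0.0
    for i in range(1, n + 1):  # integrand vanishes at t = 0 for m > 0
        t = i * h
        total += (0.5 if i == n else 1.0) * t**m * math.exp(-s * t)
    return total * h

m, s = 0.5, 2.0
print(laplace_tm(m, s), math.gamma(m + 1) / s**(m + 1))
```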
6.2.10 Laplace Transforms of Integrals
$$\mathcal{L}\left[\int_{\tau=0}^{t} f(\tau)\,d\tau\right] = \mathcal{L}[g(t)] \tag{6.29}$$
where $dg/dt = f(t)$ and $g(0) = 0$. Thus $\mathcal{L}[dg/dt] = s\mathcal{L}[g(t)]$. Hence
$$\mathcal{L}\left[\int_{\tau=0}^{t} f(\tau)\,d\tau\right] = \frac{1}{s}\,F(s) \tag{6.30}$$
6.2.11 Derivatives of Transforms
$$F(s) = \int_{t=0}^{\infty} f(t)\,e^{-st}\,dt \tag{6.31}$$
so
$$\frac{dF}{ds} = -\int_{t=0}^{\infty} t\,f(t)\,e^{-st}\,dt \tag{6.32}$$
and in general
$$\frac{d^n F}{ds^n} = \mathcal{L}[(-t)^n f(t)] \tag{6.33}$$
For example,
$$\mathcal{L}[t\sin(kt)] = -\frac{d}{ds}\left[\frac{k}{s^2 + k^2}\right] = \frac{2sk}{(s^2 + k^2)^2} \tag{6.34}$$
6.3 LINEAR ORDINARY DIFFERENTIAL EQUATIONS WITH
CONSTANT COEFFICIENTS
Example 6.5. A homogeneous linear ordinary differential equation
Consider the differential equation
$$y'' + 4y' + 3y = 0, \qquad y(0) = 0, \qquad y'(0) = 2 \tag{6.35}$$
Taking transforms,
$$\mathcal{L}[y''] = s^2 Y - sy(0) - y'(0) = s^2 Y - 2 \tag{6.36}$$
$$\mathcal{L}[y'] = sY - y(0) = sY \tag{6.37}$$
Therefore
$$(s^2 + 4s + 3)Y = 2 \tag{6.38}$$
$$Y = \frac{2}{(s+1)(s+3)} = \frac{A}{s+1} + \frac{B}{s+3} \tag{6.39}$$
To solve for $A$ and $B$, note that clearing fractions,
$$\frac{A(s+3) + B(s+1)}{(s+1)(s+3)} = \frac{2}{(s+1)(s+3)} \tag{6.40}$$
Equating the numerators,
$$A + B = 0, \quad 3A + B = 2: \qquad A = 1, \quad B = -1 \tag{6.41}$$
and from Eq. (6.4),
$$Y = \frac{1}{s+1} - \frac{1}{s+3}, \qquad y = e^{-t} - e^{-3t} \tag{6.42}$$
6.4 SOME IMPORTANT THEOREMS
6.4.1 Initial Value Theorem
Applying the transform of the derivative and letting $s \to \infty$,
$$\lim_{s\to\infty}\int_{t=0}^{\infty} f'(t)\,e^{-st}\,dt = \lim_{s\to\infty}[sF(s) - f(0)] = 0 \tag{6.43}$$
Thus
$$\lim_{s\to\infty} sF(s) = \lim_{t\to 0} f(t) \tag{6.44}$$
6.4.2 Final Value Theorem
As $s$ approaches zero the above integral approaches the limit of $f(t)$ as $t$ approaches infinity minus $f(0)$. Thus
$$\lim_{s\to 0} sF(s) = \lim_{t\to\infty} f(t) \tag{6.45}$$
6.4.3 Convolution
A very important property of Laplace transforms is the convolution integral. As we shall see
later, it allows us to write down solutions for very general forcing functions and also, in the
case of partial differential equations, to treat both time dependent forcing and time dependent
boundary conditions.
Consider the two functions $f(t)$ and $g(t)$, with $F(s) = \mathcal{L}[f(t)]$ and $G(s) = \mathcal{L}[g(t)]$. Because of the time shifting feature,
$$e^{-s\tau}G(s) = \mathcal{L}[g(t - \tau)] = \int_{t=0}^{\infty} e^{-st}g(t - \tau)\,dt \tag{6.46}$$
$$F(s)G(s) = \int_{\tau=0}^{\infty} f(\tau)\,e^{-s\tau}G(s)\,d\tau \tag{6.47}$$
But
$$e^{-s\tau}G(s) = \int_{t=0}^{\infty} e^{-st}g(t - \tau)\,dt \tag{6.48}$$
so that
$$F(s)G(s) = \int_{t=0}^{\infty} e^{-st}\int_{\tau=0}^{t} f(\tau)g(t - \tau)\,d\tau\,dt \tag{6.49}$$
where we have used the fact that $g(t - \tau) = 0$ when $\tau > t$. The inverse transform of $F(s)G(s)$ is
$$\mathcal{L}^{-1}[F(s)G(s)] = \int_{\tau=0}^{t} f(\tau)g(t - \tau)\,d\tau \tag{6.50}$$
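The convolution theorem (6.50) can be spot-checked numerically. For $f = g = e^{-t}$ we have $F(s)G(s) = 1/(s+1)^2$, whose inverse is $t\,e^{-t}$; a sketch (the evaluation point and grid size are arbitrary):

```python
import math

def convolve(f, g, t, n=20000):
    """Trapezoid approximation of the convolution integral in (6.50)."""
    h = t / n
    total = 0.0
    for i in range(n + 1):
        tau = i * h
        total += (0.5 if i in (0, n) else 1.0) * f(tau) * g(t - tau)
    return total * h

# f = g = e^{-t}: F(s)G(s) = 1/(s+1)^2, whose inverse is t e^{-t}.
f = g = lambda t: math.exp(-t)
print(convolve(f, g, 1.5), 1.5 * math.exp(-1.5))
```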
6.5 PARTIAL FRACTIONS
In the example differential equation above we determined two roots of the polynomial in the
denominator, then separated the two roots so that the two expressions could be inverted in
forms that we already knew. The method of separating out the expressions 1/(s + 1) and
1/(s + 3) is known as the method of partial fractions. We now develop the method into a more user-friendly form.
6.5.1 Nonrepeating Roots
Suppose we wish to invert the transform $F(s) = p(s)/q(s)$, where $p(s)$ and $q(s)$ are polynomials. We first note that the inverse exists if the degree of $p(s)$ is lower than that of $q(s)$. Suppose $q(s)$ can be factored and $a$ is a nonrepeated root, so that
$$F(s) = \frac{\varphi(s)}{s - a} \tag{6.51}$$
According to the theory of partial fractions there exists a constant $C$ such that
$$\frac{\varphi(s)}{s - a} = \frac{C}{s - a} + H(s) \tag{6.52}$$
Multiply both sides by $(s - a)$, take the limit as $s \to a$, and the result is
$$C = \varphi(a) \tag{6.53}$$
Note also that the limit of
$$\frac{p(s)(s - a)}{q(s)} \tag{6.54}$$
as $s$ approaches $a$ is simply $p(a)/q'(a)$. If $q(s)$ has no repeated roots and is of the form
$$q(s) = (s - a_1)(s - a_2)(s - a_3)\cdots(s - a_n) \tag{6.55}$$
then
$$\mathcal{L}^{-1}\left[\frac{p(s)}{q(s)}\right] = \sum_{m=1}^{n}\frac{p(a_m)}{q'(a_m)}\,e^{a_m t} \tag{6.56}$$
Example 6.6. Find the inverse transform of
$$F(s) = \frac{4s + 1}{(s^2 + s)(4s^2 - 1)}$$
First separate out the roots of $q(s)$:
$$q(s) = 4s(s + 1)(s + 1/2)(s - 1/2) = 4s^4 + 4s^3 - s^2 - s, \qquad q'(s) = 16s^3 + 12s^2 - 2s - 1$$
Thus
$$q'(0) = -1, \quad p(0) = 1; \qquad q'(-1) = -3, \quad p(-1) = -3; \qquad q'(-1/2) = 1, \quad p(-1/2) = -1; \qquad q'(1/2) = 3, \quad p(1/2) = 3$$
$$f(t) = e^{-t} - e^{-t/2} + e^{t/2} - 1$$
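The residue formula (6.56) also gives a quick numerical check of a partial-fraction expansion: the sum of $p(a)/q'(a)/(s - a)$ over the roots must reproduce $F(s)$ at any test point. A sketch for Example 6.6 (the test point is an arbitrary choice):

```python
# Residue check of Example 6.6: the sum of p(a)/q'(a)/(s - a) over
# the roots of q must reproduce F(s) = p(s)/q(s) at any test point.
p = lambda s: 4 * s + 1
q = lambda s: (s**2 + s) * (4 * s**2 - 1)
dq = lambda s: 16 * s**3 + 12 * s**2 - 2 * s - 1
roots = [0.0, -1.0, -0.5, 0.5]

s0 = 2.0  # arbitrary point away from the roots
expansion = sum(p(a) / dq(a) / (s0 - a) for a in roots)
print(expansion, p(s0) / q(s0))  # both ≈ 0.1
```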
Example 6.7. Solve the differential equation
$$y'' - y = 1 - e^{3t}$$
subject to the initial conditions
$$y'(0) = y(0) = 0$$
Taking the Laplace transform,
$$(s^2 - 1)Y = \frac{1}{s} - \frac{1}{s - 3}$$
$$Y(s) = \frac{1}{s(s^2 - 1)} - \frac{1}{(s - 3)(s^2 - 1)} = \frac{1}{s(s + 1)(s - 1)} - \frac{1}{(s - 3)(s + 1)(s - 1)}$$
First find the inverse transform of the first term. Here
$$q = s^3 - s, \qquad q' = 3s^2 - 1$$
$$q'(0) = -1, \quad p(0) = 1; \qquad q'(1) = 2, \quad p(1) = 1; \qquad q'(-1) = 2, \quad p(-1) = 1$$
The inverse transform is
$$-1 + \tfrac{1}{2}e^{t} + \tfrac{1}{2}e^{-t}$$
Next consider the second term. Here
$$q = s^3 - 3s^2 - s + 3, \qquad q' = 3s^2 - 6s - 1$$
with roots at $s = 3, 1, -1$:
$$q'(3) = 8, \quad p(3) = 1; \qquad q'(1) = -4, \quad p(1) = 1; \qquad q'(-1) = 8, \quad p(-1) = 1$$
The inverse transform is
$$\frac{1}{8}e^{3t} - \frac{1}{4}e^{t} + \frac{1}{8}e^{-t}$$
Subtracting this from the first result,
$$y(t) = \frac{3}{4}e^{t} + \frac{3}{8}e^{-t} - \frac{1}{8}e^{3t} - 1$$
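Partial-fraction inversions of this kind are easy to cross-check by integrating the differential equation directly. A sketch using a fourth-order Runge-Kutta step (the step size and endpoint are arbitrary choices):

```python
import math

# Direct RK4 integration of y'' - y = 1 - e^{3t}, y(0) = y'(0) = 0.
def rhs(t, y, v):
    return v, y + 1.0 - math.exp(3.0 * t)

t, y, v, h = 0.0, 0.0, 0.0, 1.0e-4
for _ in range(10000):          # integrate to t = 1
    k1y, k1v = rhs(t, y, v)
    k2y, k2v = rhs(t + h/2, y + h/2 * k1y, v + h/2 * k1v)
    k3y, k3v = rhs(t + h/2, y + h/2 * k2y, v + h/2 * k2v)
    k4y, k4v = rhs(t + h, y + h * k3y, v + h * k3v)
    y += h/6 * (k1y + 2*k2y + 2*k3y + k4y)
    v += h/6 * (k1v + 2*k2v + 2*k3v + k4v)
    t += h
print(y)  # compare with the closed-form solution evaluated at t = 1
```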
6.5.2 Repeated Roots
We now consider the case when $q(s)$ has a repeated root $(s - a)^{n+1}$. Then
$$F(s) = \frac{p(s)}{q(s)} = \frac{\varphi(s)}{(s - a)^{n+1}}, \qquad n = 1, 2, 3, \ldots$$
$$= \frac{A_0}{(s - a)} + \frac{A_1}{(s - a)^2} + \cdots + \frac{A_n}{(s - a)^{n+1}} + H(s) \tag{6.57}$$
It follows that
$$\varphi(s) = A_0(s - a)^n + \cdots + A_m(s - a)^{n-m} + \cdots + A_n + (s - a)^{n+1}H(s) \tag{6.58}$$
By letting $s \to a$ we see that $A_n = \varphi(a)$. To find the remaining $A$'s, differentiate $\varphi$ $(n - r)$ times and take the limit as $s \to a$:
$$\varphi^{(n-r)}(a) = (n - r)!\,A_r \tag{6.59}$$
Thus
$$F(s) = \sum_{r=0}^{n}\frac{\varphi^{(n-r)}(a)}{(n - r)!}\,\frac{1}{(s - a)^{r+1}} + H(s) \tag{6.60}$$
If the inverse transform of $H(s)$ (the part containing no repeated roots) is $h(t)$, it follows from the shifting theorem and the inverse transform of $1/s^{r+1}$ that
$$f(t) = \sum_{r=0}^{n}\frac{\varphi^{(n-r)}(a)}{(n - r)!\,r!}\,t^{r}e^{at} + h(t) \tag{6.61}$$
Example 6.8. Inverse transform with repeated roots
$$F(s) = \frac{s}{(s + 2)^3(s + 1)} = \frac{A_0}{(s + 2)} + \frac{A_1}{(s + 2)^2} + \frac{A_2}{(s + 2)^3} + \frac{C}{(s + 1)}$$
Multiply by $(s + 2)^3$:
$$\frac{s}{s + 1} = A_0(s + 2)^2 + A_1(s + 2) + A_2 + \frac{C(s + 2)^3}{s + 1}, \qquad \varphi(s) = \frac{s}{s + 1}$$
Take the limit as $s \to -2$:
$$A_2 = \varphi(-2) = 2$$
Differentiate once:
$$\varphi' = \frac{1}{(s + 1)^2}, \qquad \varphi'(-2) = 1 = A_1$$
$$\varphi'' = \frac{-2}{(s + 1)^3}, \qquad \varphi''(-2) = 2, \qquad A_0 = \frac{\varphi''(-2)}{2!} = 1$$
To find $C$, multiply the original equation by $(s + 1)$ and take $s = -1$: $C = -1$. Thus
$$F(s) = \frac{1}{(s + 2)} + \frac{1}{(s + 2)^2} + \frac{2}{(s + 2)^3} - \frac{1}{(s + 1)}$$
and noting the shifting theorem and the theorem on $t^m$,
$$f(t) = e^{-2t} + te^{-2t} + t^2 e^{-2t} - e^{-t}$$
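As with simple roots, a repeated-root expansion can be verified by comparing it with $F(s)$ at a test point away from the poles. A sketch using the coefficient set $A_0 = 1$, $A_1 = 1$, $A_2 = 2$, $C = -1$, which reproduces $F$ exactly:

```python
# Compare the repeated-root expansion of Example 6.8 with F(s)
# at a test point away from the poles.
F = lambda s: s / ((s + 2)**3 * (s + 1))
expansion = lambda s: (1.0 / (s + 2) + 1.0 / (s + 2)**2
                       + 2.0 / (s + 2)**3 - 1.0 / (s + 1))
s0 = 0.7  # arbitrary test point
print(F(s0), expansion(s0))  # identical
```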
6.5.3 Quadratic Factors: Complex Roots
If $q(s)$ has complex roots and all the coefficients are real, this part of $q(s)$ can always be written in the form
$$(s - a)^2 + b^2 \tag{6.62}$$
This is a shifted form of
$$s^2 + b^2 \tag{6.63}$$
This factor in the denominator leads to sines or cosines.
Example 6.9. Quadratic factors
Find the inverse transform of
$$F(s) = \frac{2(s - 1)}{s^2 + 2s + 5} = \frac{2s}{(s + 1)^2 + 4} - \frac{2}{(s + 1)^2 + 4}$$
Because of the shifted $s$ in the denominator, the numerator of the first term must also be shifted to be consistent. Thus we rewrite as
$$F(s) = \frac{2(s + 1)}{(s + 1)^2 + 4} - \frac{4}{(s + 1)^2 + 4}$$
The inverse transform of
$$\frac{2s}{s^2 + 4}$$
is
$$2\cos(2t)$$
and the inverse of
$$-\frac{4}{s^2 + 4} = -2\,\frac{2}{s^2 + 4}$$
is
$$-2\sin(2t)$$
Thus, applying the shift,
$$f(t) = 2e^{-t}\cos(2t) - 2e^{-t}\sin(2t)$$
Tables of Laplace transforms and inverse transforms can be found in many books such as the
book by Arpaci and in the Schaum’s Outline referenced below. A brief table is given here in
Appendix A.
Problems
1. Solve the problem
$$y''' - 2y'' + 5y' = 0, \qquad y(0) = y'(0) = 0, \quad y''(0) = 1$$
using Laplace transforms.
2. Find the general solution using Laplace transforms:
$$y'' + k^2 y = a$$
3. Use convolution to find the solution to the following problem for general $g(t)$. Then find the solution for $g(t) = t^2$.
$$y'' + 2y' + y = g(t), \qquad y'(0) = y(0) = 0$$
4. Find the inverse transforms.
(a) $F(s) = \dfrac{s + c}{(s + a)(s + b)^2}$
(b) $F(s) = \dfrac{1}{(s^2 + a^2)s^3}$
(c) $F(s) = \dfrac{s^2 - a^2}{(s^2 + a^2)^2}$
5. Find the periodic function whose Laplace transform is
$$F(s) = \frac{1}{s^2}\,\frac{1 - e^{-s}}{1 + e^{-s}}$$
and plot your results for $f(t)$ for several periods.
FURTHER READING
M. Abramowitz and I. A. Stegun, Eds., Handbook of Mathematical Functions with Formulas,
Graphs, and Mathematical Tables. New York: Dover Publications, 1974.
V. S. Arpaci, Conduction Heat Transfer. Reading, MA: Addison-Wesley, 1966.
R. V. Churchill, Operational Mathematics, 3rd edition. New York: McGraw-Hill, 1972.
I. H. Sneddon, The Use of Integral Transforms. New York: McGraw-Hill, 1972.
C H A P T E R 7
Complex Variables and the Laplace
Inversion Integral
7.1 BASIC PROPERTIES
A complex number $z$ can be defined as an ordered pair of real numbers, say $x$ and $y$, where $x$ is the real part of $z$ and $y$ is the imaginary part:
$$z = x + iy \tag{7.1}$$
where $i = \sqrt{-1}$.
I am going to assume that the reader is familiar with the elementary properties of
addition, subtraction, multiplication, etc. In general, complex numbers obey the same rules as
real numbers. For example
$$(x_1 + iy_1)(x_2 + iy_2) = x_1 x_2 - y_1 y_2 + i(x_1 y_2 + x_2 y_1) \tag{7.2}$$
The conjugate of $z$ is
$$\bar{z} = x - iy \tag{7.3}$$
It is often convenient to represent complex numbers on Cartesian coordinates with $x$ and $y$ as the axes. In such a case, we can represent the complex number (or variable) $z$ as
$$z = x + iy = r(\cos\theta + i\sin\theta) \tag{7.4}$$
as shown in Fig. 7.1. We also define the exponential function of a complex number as $\cos\theta + i\sin\theta = e^{i\theta}$, which is suggested by replacing $x$ by $i\theta$ in the series $e^x = \sum_{n=0}^{\infty} x^n/n!$. Accordingly,
$$e^{i\theta} = \cos\theta + i\sin\theta \tag{7.5}$$
and
$$e^{-i\theta} = \cos\theta - i\sin\theta \tag{7.6}$$
FIGURE 7.1: Polar representation of a complex variable $z$
Addition gives
$$\cos\theta = \frac{e^{i\theta} + e^{-i\theta}}{2} = \cosh(i\theta) \tag{7.7}$$
and subtraction gives
$$\sin\theta = \frac{e^{i\theta} - e^{-i\theta}}{2i} = -i\sinh(i\theta) \tag{7.8}$$
Note that
$$\cosh z = \frac{1}{2}\left(e^{x+iy} + e^{-x-iy}\right) = \frac{1}{2}\left\{e^{x}[\cos y + i\sin y] + e^{-x}[\cos y - i\sin y]\right\} = \frac{e^{x} + e^{-x}}{2}\cos y + i\,\frac{e^{x} - e^{-x}}{2}\sin y = \cosh x\cos y + i\sinh x\sin y \tag{7.9}$$
The reader may show that
$$\sinh z = \sinh x\cos y + i\cosh x\sin y \tag{7.10}$$
Trigonometric functions are defined in the usual way:
$$\sin z = \frac{e^{iz} - e^{-iz}}{2i}, \qquad \cos z = \frac{e^{iz} + e^{-iz}}{2}, \qquad \tan z = \frac{\sin z}{\cos z} \tag{7.11}$$
Two complex numbers are equal if and only if their real parts are equal and their imaginary parts are equal.
Noting that
$$z^2 = r^2(\cos^2\theta - \sin^2\theta + 2i\sin\theta\cos\theta) = r^2\left[\frac{1}{2}(1 + \cos 2\theta) - \frac{1}{2}(1 - \cos 2\theta) + i\sin 2\theta\right] = r^2[\cos 2\theta + i\sin 2\theta]$$
we deduce that
$$z^{1/2} = r^{1/2}(\cos\theta/2 + i\sin\theta/2) \tag{7.12}$$
In fact in general,
$$z^{m/n} = r^{m/n}[\cos(m\theta/n) + i\sin(m\theta/n)] \tag{7.13}$$
Example 7.1. Find $i^{1/2}$.
Note that when $z = i$, $r = 1$ and $\theta = \pi/2$, with $m = 1$ and $n = 2$. Thus
$$i^{1/2} = 1^{1/2}[\cos(\pi/4) + i\sin(\pi/4)] = \frac{1}{\sqrt{2}}(1 + i)$$
Note, however, that if
$$w = \cos\left(\frac{\pi}{4} + \pi\right) + i\sin\left(\frac{\pi}{4} + \pi\right)$$
then $w^2 = i$. Hence $\frac{1}{\sqrt{2}}(-1 - i)$ is also a solution. The roots are shown in Fig. 7.2.
FIGURE 7.2: Roots of $i^{1/2}$
FIGURE 7.3: The roots of $1^{1/2}$
In fact in this example $\theta$ is also $\pi/2 + 2k\pi$. Using the fact that
$$z = re^{i(\theta + 2k\pi)}, \qquad k = 0, 1, 2, \ldots$$
it is easy to show that
$$z^{1/n} = \sqrt[n]{r}\left[\cos\frac{\theta + 2\pi k}{n} + i\sin\frac{\theta + 2\pi k}{n}\right] \tag{7.14}$$
This is De Moivre’s theorem. For example when n = 2 there are two solutions and when
n = 3 there are three solutions. These solutions are called branches of z1/n
. A region in which
the function is single valued is indicated by forming a branch cut, which is a line stretching from
the origin outward such that the region between the positive real axis and the line contains
only one solution. In the above example, a branch cut might be a line from the origin out the
negative real axis.
Example 7.2. Find $1^{1/2}$ and represent it on the polar diagram.
$$1^{1/2} = \cos\left(\frac{\theta}{2} + k\pi\right) + i\sin\left(\frac{\theta}{2} + k\pi\right)$$
and since $\theta = 0$ in this case,
$$1^{1/2} = \cos k\pi + i\sin k\pi$$
There are two distinct roots: $z = +1$ for $k = 0$ and $z = -1$ for $k = 1$. The two values are shown in Fig. 7.3. The two solutions are called branches of $\sqrt{1}$, and an appropriate branch cut might be from the origin out the positive imaginary axis, leaving as the single solution $+1$.
Example 7.3. Find the roots of $(1 + i)^{1/4}$.
Making use of Eq. (7.13) with $m = 1$ and $n = 4$, $r = \sqrt{2}$, $\theta = \frac{\pi}{4}$, we find that
$$(1 + i)^{1/4} = (\sqrt{2})^{1/4}\left[\cos\left(\frac{\pi}{16} + \frac{2k\pi}{4}\right) + i\sin\left(\frac{\pi}{16} + \frac{2k\pi}{4}\right)\right], \qquad k = 0, 1, 2, 3$$
FIGURE 7.4: The roots of $(1 + i)^{1/4}$
Hence, the four roots are as follows:
$$(1 + i)^{1/4} = 2^{1/8}\left[\cos\frac{\pi}{16} + i\sin\frac{\pi}{16}\right]$$
$$= 2^{1/8}\left[\cos\left(\frac{\pi}{16} + \frac{\pi}{2}\right) + i\sin\left(\frac{\pi}{16} + \frac{\pi}{2}\right)\right]$$
$$= 2^{1/8}\left[\cos\left(\frac{\pi}{16} + \pi\right) + i\sin\left(\frac{\pi}{16} + \pi\right)\right]$$
$$= 2^{1/8}\left[\cos\left(\frac{\pi}{16} + \frac{3\pi}{2}\right) + i\sin\left(\frac{\pi}{16} + \frac{3\pi}{2}\right)\right]$$
The locations of the roots are shown in Fig. 7.4.
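De Moivre's formula (7.14) translates directly into code. A sketch using the standard library's complex-math module (the hypothetical helper `nth_roots` is not from the text):

```python
import cmath, math

def nth_roots(z, n):
    """All n distinct values of z**(1/n), per Eq. (7.14)."""
    r, theta = abs(z), cmath.phase(z)
    return [r**(1.0 / n) * cmath.exp(1j * (theta + 2 * math.pi * k) / n)
            for k in range(n)]

roots = nth_roots(1 + 1j, 4)
for w in roots:
    print(w, w**4)  # each w**4 reproduces 1 + i
```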
The natural logarithm can be defined by writing $z = re^{i\theta}$ for $-\pi \leq \theta < \pi$ and noting that
$$\ln z = \ln r + i\theta \tag{7.15}$$
and since $z$ is not affected by adding $2n\pi$ to $\theta$, this expression can also be written as
$$\ln z = \ln r + i(\theta + 2n\pi), \qquad n = 0, 1, 2, \ldots \tag{7.16}$$
When $n = 0$ we obtain the principal branch. All of the single-valued branches are analytic for $r > 0$ and $\theta_0 < \theta < \theta_0 + 2\pi$.
7.1.1 Limits and Differentiation of Complex Variables: Analytic Functions
Consider a function of a complex variable f (z). We generally write
f (z) = u(x, y) + iv(x, y)
where $u$ and $v$ are real functions of $x$ and $y$. The derivative of a complex variable is defined as follows:
$$f'(z) = \lim_{\Delta z \to 0}\frac{f(z + \Delta z) - f(z)}{\Delta z} \tag{7.17}$$
or
$$f'(z) = \lim_{\Delta x, \Delta y \to 0}\frac{u(x + \Delta x, y + \Delta y) + iv(x + \Delta x, y + \Delta y) - u(x, y) - iv(x, y)}{\Delta x + i\Delta y} \tag{7.18}$$
Taking the limit on $\Delta x$ first, we find that
$$f'(z) = \lim_{\Delta y \to 0}\frac{u(x, y + \Delta y) + iv(x, y + \Delta y) - u(x, y) - iv(x, y)}{i\Delta y} \tag{7.19}$$
and now taking the limit on $\Delta y$,
$$f'(z) = \frac{1}{i}\frac{\partial u}{\partial y} + \frac{\partial v}{\partial y} = \frac{\partial v}{\partial y} - i\frac{\partial u}{\partial y} \tag{7.20}$$
Conversely, taking the limit on $\Delta y$ first,
$$f'(z) = \lim_{\Delta x \to 0}\frac{u(x + \Delta x, y) + iv(x + \Delta x, y) - u(x, y) - iv(x, y)}{\Delta x} = \frac{\partial u}{\partial x} + i\frac{\partial v}{\partial x} \tag{7.21}$$
The derivative exists only if
$$\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y} \quad \text{and} \quad \frac{\partial u}{\partial y} = -\frac{\partial v}{\partial x} \tag{7.22}$$
These are called the Cauchy–Riemann conditions, and in this case the function is said to be analytic. If a function is analytic for all $x$ and $y$ it is entire. Polynomials are entire, as are trigonometric, hyperbolic, and exponential functions. We note in passing that analytic functions share the property that both real and imaginary parts satisfy the equation $\nabla^2 u = \nabla^2 v = 0$ in two-dimensional space. It should be obvious at this point that this is important in the solution of the steady-state diffusion equation in two dimensions. We mention here that it is also important in the study of incompressible, inviscid fluid mechanics and in other areas of science and engineering. You will undoubtedly meet with it in some of your courses.
FIGURE 7.5: Integration of an analytic function along two paths
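The Cauchy–Riemann conditions can be spot-checked with finite differences. A sketch for $f(z) = z^2$, where $u = x^2 - y^2$ and $v = 2xy$ (the sample point and step size are arbitrary choices):

```python
# Finite-difference check of the Cauchy-Riemann conditions (7.22)
# for f(z) = z**2, where u = x^2 - y^2 and v = 2xy.
def u(x, y): return (complex(x, y)**2).real
def v(x, y): return (complex(x, y)**2).imag

x, y, h = 0.7, -0.3, 1e-6
ux = (u(x + h, y) - u(x - h, y)) / (2 * h)
uy = (u(x, y + h) - u(x, y - h)) / (2 * h)
vx = (v(x + h, y) - v(x - h, y)) / (2 * h)
vy = (v(x, y + h) - v(x, y - h)) / (2 * h)
print(ux, vy)   # both equal 2x
print(uy, -vx)  # both equal -2y
```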
Example 7.4.
$$f = z^2, \quad f' = 2z; \qquad f = \sin z, \quad f' = \cos z; \qquad f = e^{az}, \quad f' = ae^{az}$$
Integrals
Consider the line integral
$$\int_C z^2\,dz$$
along the curve $C$ defined by $x = 2y$ from the origin to the point $x = 2$, $y = 1$ (path OB in Fig. 7.5). On this path we can write
$$z^2 = x^2 - y^2 + 2ixy = 3y^2 + 4y^2 i, \qquad dz = (2 + i)\,dy$$
Thus
$$\int_{y=0}^{1}(3y^2 + 4y^2 i)(2 + i)\,dy = (3 + 4i)(2 + i)\int_{y=0}^{1} y^2\,dy = \frac{2}{3} + \frac{11}{3}\,i$$
On the other hand, if we perform the same integral along the $x$ axis to $x = 2$ and then along the vertical line $x = 2$ to the same point (path OAB in Fig. 7.5), we find that
$$\int_{x=0}^{2} x^2\,dx + \int_{y=0}^{1}(2 + iy)^2\,i\,dy = \frac{8}{3} + i\int_{y=0}^{1}(4 - y^2 + 4iy)\,dy = \frac{2}{3} + \frac{11}{3}\,i$$
This happened because the function $z^2$ is analytic within the region between the two curves. In general, if a function is analytic in the region contained between two such curves, the integral
$$\int_C f(z)\,dz \tag{7.23}$$
is independent of the path $C$. Since any two such integrals are the same, and since if we integrate the first integral along BO only the sign changes, we see that the integral around the closed contour is zero:
$$\oint_C f(z)\,dz = 0 \tag{7.24}$$
This is called the Cauchy–Goursat theorem and is true as long as the region $R$ within the closed curve $C$ is simply connected and the function is analytic everywhere within the region. A simply connected region $R$ is one in which every closed curve within it encloses only points in $R$. The theorem can be extended to allow for multiply connected regions. Fig. 7.6 shows a doubly connected region. The method is to make a cut through part of the region and to integrate counterclockwise around $C_1$, along the path $C_2$ through the region, clockwise around the interior curve $C_3$, and back out along $C_4$. Clearly, the integrals along $C_2$ and $C_4$ cancel, so that
$$\int_{C_1} f(z)\,dz + \int_{C_3} f(z)\,dz = 0 \tag{7.25}$$
where the first integral is counterclockwise and the second clockwise.
7.1.2 The Cauchy Integral Formula
Now consider the following integral:
$$\int_C \frac{f(z)}{z - z_0}\,dz \qquad (7.26)$$
If the function f(z) is analytic, then the integrand is analytic at all points except z = z0. We now form a circle C2 of radius r0 around the point z = z0 that is small enough to fit inside
COMPLEX VARIABLES AND THE LAPLACE INVERSION INTEGRAL
FIGURE 7.6: A doubly connected region
FIGURE 7.7: Derivation of Cauchy’s integral formula
the curve C1 as shown in Fig. 7.7. Thus we can write
$$\int_{C_1} \frac{f(z)}{z - z_0}\,dz - \int_{C_2} \frac{f(z)}{z - z_0}\,dz = 0 \qquad (7.27)$$
where both integrations are counterclockwise. Let r0 now approach zero, so that in the second integral z approaches z0, $z - z_0 = r_0 e^{i\theta}$, and $dz = r_0 i e^{i\theta}\,d\theta$. The second term is then
$$-\int_{C_2} \frac{f(z_0)}{r_0 e^{i\theta}}\,r_0 i e^{i\theta}\,d\theta = -f(z_0)\,i \int_{\theta=0}^{2\pi} d\theta = -2\pi i\,f(z_0)$$
Thus, Cauchy's integral formula is
$$f(z_0) = \frac{1}{2\pi i}\oint_C \frac{f(z)}{z - z_0}\,dz \qquad (7.28)$$
where the integral is taken counterclockwise and f(z) is analytic inside C.
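A quick numerical sketch of Eq. (7.28) (my own illustration, not from the text): recover $f(z_0)$ for $f(z) = e^z$ by integrating around a circle that encloses $z_0$.

```python
import cmath

def cauchy_value(f, z0, radius=1.0, n=4000):
    """Approximate (1/2*pi*i) * closed integral of f(z)/(z - z0) on the circle |z| = radius."""
    total = 0j
    for k in range(n):
        th = 2 * cmath.pi * k / n
        z = radius * cmath.exp(1j * th)
        dz = radius * 1j * cmath.exp(1j * th) * (2 * cmath.pi / n)
        total += f(z) / (z - z0) * dz
    return total / (2j * cmath.pi)

z0 = 0.3 + 0.2j
approx = cauchy_value(cmath.exp, z0)
print(approx, cmath.exp(z0))   # the two values agree closely
```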
We can formally differentiate the above equation n times with respect to z0 and find an extension as
$$f^{(n)}(z_0) = \frac{n!}{2\pi i}\oint_C \frac{f(z)}{(z - z_0)^{n+1}}\,dz \qquad (7.29)$$
Problems
1. Show that
(a) $\sinh z = \sinh x \cos y + i \cosh x \sin y$
(b) $\cos z = \cos x \cosh y - i \sin x \sinh y$
and show that each is entire.
2. Find all of the values of
(a) $(-1 + i\sqrt{3})^{3/2}$
(b) $8^{1/6}$
3. Find all the roots of the equation
$$\sin z = \cosh 4$$
4. Find all the zeros of
(a) $\sinh z$
(b) $\cosh z$
C H A P T E R 8
Solutions with Laplace Transforms
In this chapter, we present detailed solutions of some boundary value problems using the Laplace
transform method. Problems in both mechanical vibrations and diffusion are presented along
with the details of the inversion method.
8.1 MECHANICAL VIBRATIONS
Example 8.1. Consider an elastic bar with one end of the bar fixed and a constant force F per unit area at the other end acting parallel to the bar. The appropriate partial differential equation and boundary and initial conditions for the displacement y(ζ, τ) are as follows:
$$y_{\tau\tau} = y_{\zeta\zeta}, \quad 0 < \zeta < 1, \quad \tau > 0$$
$$y(\zeta, 0) = y_\tau(\zeta, 0) = 0$$
$$y(0, \tau) = 0$$
$$y_\zeta(1, \tau) = F/E = g$$
We obtain the Laplace transform of the equation and boundary conditions as
$$s^2 Y = Y_{\zeta\zeta}$$
$$Y(s, 0) = 0$$
$$Y_\zeta(s, 1) = g/s$$
Solving the differential equation for Y(s, ζ),
$$Y = A \sinh \zeta s + B \cosh \zeta s$$
Applying the boundary conditions we find that B = 0 and
$$\frac{g}{s} = A s \cosh s, \qquad A = \frac{g}{s^2 \cosh s}$$
$$Y = \frac{g \sinh \zeta s}{s^2 \cosh s}$$
Since
$$\frac{1}{s}\sinh \zeta s = \zeta + s^2\frac{\zeta^3}{3!} + s^4\frac{\zeta^5}{5!} + \cdots$$
the function $\frac{1}{s}\sinh \zeta s$ is analytic, and Y(s) can be written as the ratio of two analytic functions,
$$Y(s) = g\,\frac{\frac{1}{s}\sinh \zeta s}{s \cosh s}$$
Y(s) therefore has a simple pole at s = 0, and the residue there is
$$R(s = 0) = \lim_{s \to 0} s\,Y(s)\,e^{s\tau} = \lim_{s \to 0} g\,\frac{\zeta + s^2\frac{\zeta^3}{3!} + \cdots}{\cosh s}\,e^{s\tau} = g\zeta$$
The remaining poles are the singularities of cosh s. But $\cosh s = \cosh x \cos y + i \sinh x \sin y$, so the zeros of this function are at x = 0 and cos y = 0. Hence $s_n = i(2n-1)\pi/2$. The residues at these points are
$$R(s = s_n) = \lim_{s \to s_n} \frac{g \sinh \zeta s}{\frac{d}{ds}\left(s^2 \cosh s\right)}\,e^{s\tau} = \frac{g}{s_n^2}\,\frac{\sinh \zeta s_n}{\sinh s_n}\,e^{s_n\tau} \qquad (n = \pm 1, \pm 2, \pm 3, \ldots)$$
Since
$$\sinh\!\left[i\,\frac{2n-1}{2}\pi\zeta\right] = i \sin\!\left[\frac{2n-1}{2}\pi\zeta\right]$$
we have
$$R(s = s_n) = \frac{g\,i\,\sin\frac{(2n-1)\pi\zeta}{2}}{-\left[\frac{(2n-1)\pi}{2}\right]^2 i\,\sin\frac{(2n-1)\pi}{2}}\,\exp\!\left[i\,\frac{2n-1}{2}\pi\tau\right]$$
and
$$\sin\frac{(2n-1)\pi}{2} = (-1)^{n+1}$$
The exponential function can be written as
$$\exp\!\left[i\,\frac{2n-1}{2}\pi\tau\right] = \cos\frac{2n-1}{2}\pi\tau + i \sin\frac{2n-1}{2}\pi\tau$$
Note that for the poles on the negative imaginary axis (n < 0) this expression can be written as
$$\exp\!\left[i\,\frac{2m-1}{2}\pi\tau\right] = \cos\frac{2m-1}{2}\pi\tau - i \sin\frac{2m-1}{2}\pi\tau$$
where m = -n > 0. This corresponds to the conjugate poles.
Thus for each of the sets of poles we have
$$R(s = s_n) = \frac{4g(-1)^n}{\pi^2(2n-1)^2}\,\sin\frac{(2n-1)\pi\zeta}{2}\,\exp\frac{(2n-1)\pi\tau i}{2}$$
Now adding the residues corresponding to each pole and its conjugate, we find that the final solution is as follows:
$$y(\zeta, \tau) = g\left[\zeta + \frac{8}{\pi^2}\sum_{n=1}^{\infty}\frac{(-1)^n}{(2n-1)^2}\,\sin\frac{(2n-1)\pi\zeta}{2}\,\cos\frac{(2n-1)\pi\tau}{2}\right]$$
Suppose that instead of a constant force at ζ = 1, we allow g to be a function of τ. In this case, the Laplace transform of y(ζ, τ) takes the form
$$Y(\zeta, s) = \frac{G(s)\sinh(\zeta s)}{s \cosh s}$$
The simple pole with residue gζ is not present. However, the other poles are still at the same $s_n$ values. The residues at each pair of conjugate poles of the function
$$F(s) = \frac{\sinh(\zeta s)}{s \cosh s}$$
sum to
$$\frac{4(-1)^{n+1}}{\pi(2n-1)}\,\sin\frac{(2n-1)\pi\zeta}{2}\,\sin\frac{(2n-1)\pi\tau}{2} = f(\zeta, \tau)$$
According to the convolution theorem,
$$y(\zeta, \tau) = \int_{\tau'=0}^{\tau} f(\zeta, \tau - \tau')\,g(\tau')\,d\tau'$$
$$y(\zeta, \tau) = \frac{4}{\pi}\sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{2n-1}\,\sin\frac{(2n-1)\pi\zeta}{2}\int_{\tau'=0}^{\tau} g(\tau - \tau')\,\sin\frac{(2n-1)\pi\tau'}{2}\,d\tau'$$
In the case that g = constant, integration recovers the previous equation.
Example 8.2. An infinitely long string is initially at rest when the end at x = 0 undergoes
a transverse displacement y(0, t) = f (t). The displacement is described by the differential
equation and boundary conditions as follows:
$$\frac{\partial^2 y}{\partial t^2} = \frac{\partial^2 y}{\partial x^2}$$
$$y(x, 0) = y_t(x, 0) = 0$$
$$y(0, t) = f(t)$$
$$y \text{ is bounded}$$
Taking the Laplace transform with respect to time and applying the initial conditions yields
$$s^2 Y(x, s) = \frac{d^2 Y(x, s)}{dx^2}$$
The solution may be written in terms of exponential functions:
$$Y(x, s) = A e^{-sx} + B e^{sx}$$
In order for the solution to be bounded, B = 0. Applying the condition at x = 0 we find
$$A = F(s)$$
where F(s) is the Laplace transform of f(t). Writing the solution in the form
$$Y(x, s) = s F(s)\,\frac{e^{-sx}}{s}$$
and noting that the inverse transform of $e^{-sx}/s$ is the Heaviside step $U_x(t)$, where
$$U_x(t) = 0, \quad t < x$$
$$U_x(t) = 1, \quad t > x$$
and that the inverse transform of sF(s) is f'(t) (since f(0) = 0), we find using convolution that
$$y(x, t) = \int_{\mu=0}^{t} f'(t - \mu)\,U_x(\mu)\,d\mu = f(t - x), \quad x < t$$
$$\phantom{y(x, t)} = 0, \quad x > t$$
For example, if $f(t) = \sin \omega t$,
$$y(x, t) = \sin \omega(t - x), \quad x < t$$
$$\phantom{y(x, t)} = 0, \quad x > t$$
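A small numerical sketch (my own): at a smooth point behind the wave front, central differences confirm that y = sin ω(t − x) satisfies the wave equation y_tt = y_xx.

```python
import math

omega = 2.0
def y(x, t):
    # traveling wave: zero ahead of the front, sin w(t - x) behind it
    return math.sin(omega * (t - x)) if x < t else 0.0

h = 1e-4
x0, t0 = 0.3, 1.0   # a point behind the front, where the solution is smooth
ytt = (y(x0, t0 + h) - 2 * y(x0, t0) + y(x0, t0 - h)) / h**2
yxx = (y(x0 + h, t0) - 2 * y(x0, t0) + y(x0 - h, t0)) / h**2
print(ytt, yxx)   # nearly equal
```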
Problems
1. Solve the above vibration problem when
$$y(0, \tau) = 0$$
$$y(1, \tau) = g(\tau)$$
Hint: To make use of convolution see Example 8.3.
2. Solve the problem
$$\frac{\partial^2 y}{\partial t^2} = \frac{\partial^2 y}{\partial x^2}$$
$$y_x(0, t) = y(x, 0) = y_t(x, 0) = 0$$
$$y(1, t) = h$$
using the Laplace transform method.
8.2 DIFFUSION OR CONDUCTION PROBLEMS
We now consider the conduction problem
Example 8.3.
$$u_\tau = u_{\zeta\zeta}$$
$$u(1, \tau) = f(\tau)$$
$$u(0, \tau) = 0$$
$$u(\zeta, 0) = 0$$
Taking the Laplace transform of the equation and boundary conditions and noting that u(ζ, 0) = 0,
$$sU = U_{\zeta\zeta}$$
$$U(0, s) = 0$$
$$U(1, s) = F(s)$$
whose solution is
$$U = A \sinh \sqrt{s}\,\zeta + B \cosh \sqrt{s}\,\zeta$$
The first condition implies that B = 0, and the second gives
$$F(s) = A \sinh\sqrt{s}$$
and so
$$U = F(s)\,\frac{\sinh \sqrt{s}\,\zeta}{\sinh\sqrt{s}}$$
If f(τ) = 1, F(s) = 1/s, and a particular solution, V, is
$$V = \frac{\sinh \sqrt{s}\,\zeta}{s \sinh\sqrt{s}}$$
where $v = L^{-1}\{V(s)\}$. Now,
$$\frac{\sinh \sqrt{s}\,\zeta}{\sinh\sqrt{s}} = \frac{\zeta\sqrt{s} + \frac{(\zeta\sqrt{s})^3}{3!} + \frac{(\zeta\sqrt{s})^5}{5!} + \cdots}{\sqrt{s} + \frac{(\sqrt{s})^3}{3!} + \frac{(\sqrt{s})^5}{5!} + \cdots}$$
and so there is a simple pole of $V e^{s\tau}$ at s = 0. Also, since $\sinh \sqrt{s}\,\zeta$ is not necessarily zero when $\sinh\sqrt{s} = 0$, there are simple poles at $\sinh\sqrt{s} = 0$, or $s = -n^2\pi^2$. The residue at the pole s = 0 is
$$\lim_{s \to 0} s\,V(s)\,e^{s\tau} = \zeta$$
and since $V(s)e^{s\tau}$ has the form P(s)/Q(s), the residue of the pole at $-n^2\pi^2$ is
$$\frac{P(\zeta, -n^2\pi^2)}{Q'(-n^2\pi^2)}\,e^{-n^2\pi^2\tau} = \left.\frac{\sinh \sqrt{s}\,\zeta\;e^{-n^2\pi^2\tau}}{\frac{\sqrt{s}}{2}\cosh\sqrt{s} + \sinh\sqrt{s}}\right|_{s=-n^2\pi^2} = \frac{2\sin(n\pi\zeta)}{n\pi\cos(n\pi)}\,e^{-n^2\pi^2\tau}$$
The solution for v(ζ, τ) is then
$$v(\zeta, \tau) = \zeta + \sum_{n=1}^{\infty}\frac{2(-1)^n}{n\pi}\,e^{-n^2\pi^2\tau}\sin(n\pi\zeta)$$
The solution for the general case as originally stated, with u(1, τ) = f(τ), is obtained by noting that
$$U(\zeta, s) = s F(s)\,\frac{\sinh \sqrt{s}\,\zeta}{s \sinh\sqrt{s}}$$
and
$$L\{f'(\tau)\} = s F(s) - f(\tau = 0)$$
so that
$$U(\zeta, s) = f(\tau = 0)\,V(\zeta, s) + L\{f'(\tau)\}\,V(\zeta, s)$$
Consequently
$$u(\zeta, \tau) = f(\tau = 0)\,v(\zeta, \tau) + \int_{\tau'=0}^{\tau} f'(\tau - \tau')\,v(\zeta, \tau')\,d\tau'$$
$$= \zeta f(\tau) + \frac{2 f(0)}{\pi}\sum_{n=1}^{\infty}\frac{(-1)^n}{n}\,e^{-n^2\pi^2\tau}\sin(n\pi\zeta) + \frac{2}{\pi}\sum_{n=1}^{\infty}\frac{(-1)^n}{n}\sin(n\pi\zeta)\int_{\tau'=0}^{\tau} f'(\tau - \tau')\,e^{-n^2\pi^2\tau'}\,d\tau'$$
This series converges rapidly for large values of τ. However, for small values of τ it converges slowly. There is another form of solution that converges rapidly for small τ.
The Laplace transform of v(ζ, τ) can be written as
$$\frac{\sinh \sqrt{s}\,\zeta}{s\sinh\sqrt{s}} = \frac{e^{\zeta\sqrt{s}} - e^{-\zeta\sqrt{s}}}{s\left(e^{\sqrt{s}} - e^{-\sqrt{s}}\right)} = \frac{1}{s\,e^{\sqrt{s}}}\,\frac{e^{\zeta\sqrt{s}} - e^{-\zeta\sqrt{s}}}{1 - e^{-2\sqrt{s}}}$$
$$= \frac{1}{s\,e^{\sqrt{s}}}\left(e^{\zeta\sqrt{s}} - e^{-\zeta\sqrt{s}}\right)\left(1 + e^{-2\sqrt{s}} + e^{-4\sqrt{s}} + e^{-6\sqrt{s}} + \cdots\right)$$
$$= \frac{1}{s}\sum_{n=0}^{\infty}\left[e^{-(1+2n-\zeta)\sqrt{s}} - e^{-(1+2n+\zeta)\sqrt{s}}\right]$$
The inverse Laplace transform of $\frac{e^{-k\sqrt{s}}}{s}$ is the complementary error function, defined by
$$\mathrm{erfc}\!\left(\frac{k}{2\sqrt{\tau}}\right) = 1 - \frac{2}{\sqrt{\pi}}\int_{x=0}^{k/2\sqrt{\tau}} e^{-x^2}\,dx$$
Thus we have
$$v(\zeta, \tau) = \sum_{n=0}^{\infty}\left[\mathrm{erfc}\,\frac{1+2n-\zeta}{2\sqrt{\tau}} - \mathrm{erfc}\,\frac{1+2n+\zeta}{2\sqrt{\tau}}\right]$$
and this series converges rapidly for small values of τ.
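The two forms of v(ζ, τ) can be compared numerically (a sketch of my own); at a moderate τ both series are usable and should agree:

```python
import math

def v_eigen(z, t, terms=200):
    """Eigenfunction series (rapid for large tau)."""
    return z + sum(2 * (-1) ** n / (n * math.pi)
                   * math.exp(-n * n * math.pi ** 2 * t) * math.sin(n * math.pi * z)
                   for n in range(1, terms + 1))

def v_erfc(z, t, terms=50):
    """Complementary-error-function series (rapid for small tau)."""
    return sum(math.erfc((1 + 2 * n - z) / (2 * math.sqrt(t)))
               - math.erfc((1 + 2 * n + z) / (2 * math.sqrt(t)))
               for n in range(terms))

print(v_eigen(0.5, 0.1), v_erfc(0.5, 0.1))   # the two forms agree
```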
Example 8.4. Next we consider a conduction problem with a convective boundary condition:
$$u_\tau = u_{\zeta\zeta}$$
$$u(0, \tau) = 0$$
$$u_\zeta(1, \tau) + H u(1, \tau) = 0$$
$$u(\zeta, 0) = \zeta$$
Taking the Laplace transform,
$$sU - \zeta = U_{\zeta\zeta}$$
$$U(0, s) = 0$$
$$U_\zeta(1, s) + H U(1, s) = 0$$
The differential equation has a homogeneous solution
$$U_h = A\cosh(\sqrt{s}\,\zeta) + B\sinh(\sqrt{s}\,\zeta)$$
and a particular solution
$$U_p = \frac{\zeta}{s}$$
so that
$$U = \frac{\zeta}{s} + A\cosh(\sqrt{s}\,\zeta) + B\sinh(\sqrt{s}\,\zeta)$$
Applying the boundary conditions, we find A = 0 and
$$B = -\frac{1 + H}{s\left[\sqrt{s}\cosh(\sqrt{s}) + H\sinh(\sqrt{s})\right]}$$
The Laplace transform of the solution is as follows:
$$U = \frac{\zeta}{s} - \frac{(1+H)\sinh(\sqrt{s}\,\zeta)}{s\left[\sqrt{s}\cosh(\sqrt{s}) + H\sinh(\sqrt{s})\right]}$$
The inverse transform of the first term is simply ζ. For the second term, we must first find the poles. There is an isolated pole at s = 0. To obtain the residue of this pole note that
$$\lim_{s \to 0}\left[-\frac{(1+H)\sinh \sqrt{s}\,\zeta}{\sqrt{s}\cosh\sqrt{s} + H\sinh\sqrt{s}}\,e^{s\tau}\right] = \lim_{s \to 0}\left[-\frac{(1+H)(\zeta\sqrt{s} + \cdots)}{\sqrt{s} + H(\sqrt{s} + \cdots)}\right] = -\zeta$$
canceling the first residue. To find the remaining residues let $\sqrt{s} = x + iy$. Then
$$(x+iy)\left[\cosh x\cos y + i\sinh x\sin y\right] + H\left[\sinh x\cos y + i\cosh x\sin y\right] = 0$$
Setting real and imaginary parts equal to 0 yields
$$x\cosh x\cos y - y\sinh x\sin y + H\sinh x\cos y = 0$$
and
$$y\cosh x\cos y + x\sinh x\sin y + H\cosh x\sin y = 0$$
which yields
$$x = 0, \qquad y\cos y + H\sin y = 0$$
The residues of the second term of U at the poles $s = -y_n^2$ are
$$-\lim_{s\to-y_n^2}\,(s + y_n^2)\,\frac{(1+H)\sinh(\sqrt{s}\,\zeta)\,e^{s\tau}}{s\left[\sqrt{s}\cosh(\sqrt{s}) + H\sinh(\sqrt{s})\right]}$$
or
$$-\left.\frac{P(\zeta, s)\,e^{s\tau}}{Q'(s)}\right|_{s=-y_n^2}$$
where
$$Q = s\sqrt{s}\cosh\sqrt{s} + sH\sinh\sqrt{s}$$
$$Q' = \sqrt{s}\cosh\sqrt{s} + H\sinh\sqrt{s} + \frac{\sqrt{s}}{2}\cosh\sqrt{s} + \frac{s}{2}\sinh\sqrt{s} + \frac{H\sqrt{s}}{2}\cosh\sqrt{s}$$
At the poles, where $\sqrt{s}\cosh\sqrt{s} = -H\sinh\sqrt{s}$, this reduces to
$$Q' = \frac{\sqrt{s}(1+H)}{2}\cosh\sqrt{s} + \frac{s}{2}\sinh\sqrt{s} = \left[\frac{\sqrt{s}(1+H)}{2} - \frac{s\sqrt{s}}{2H}\right]\cosh\sqrt{s}$$
$$Q'(s = -y^2) = \frac{H(H+1) + y^2}{2H}\,iy\cos(y)$$
while
$$P(s = -y^2) = (1+H)\,i\sin(y\zeta)\,e^{-y^2\tau}$$
$$u_n(\zeta, \tau) = \frac{-(1+H)\sin(y_n\zeta)\,e^{-y_n^2\tau}}{\frac{H(H+1)+y_n^2}{2H}\,y_n\cos(y_n)} = \frac{-2H(H+1)\sin(y_n\zeta)\,e^{-y_n^2\tau}}{\left[H(H+1)+y_n^2\right]y_n\cos(y_n)} = \frac{2(H+1)}{H(H+1)+y_n^2}\,\frac{\sin(y_n\zeta)}{\sin(y_n)}\,e^{-y_n^2\tau}$$
The solution is therefore
$$u(\zeta, \tau) = \sum_{n=1}^{\infty}\frac{2(H+1)}{H(H+1)+y_n^2}\,\frac{\sin(y_n\zeta)}{\sin(y_n)}\,e^{-y_n^2\tau}$$
Note that as a partial check on this solution, we can evaluate the result when H → ∞ as
$$u(\zeta, \tau) = \sum_{n=1}^{\infty}\frac{-2}{y_n\cos y_n}\sin(y_n\zeta)\,e^{-y_n^2\tau} = \sum_{n=1}^{\infty}\frac{2(-1)^{n+1}}{n\pi}\sin(n\pi\zeta)\,e^{-n^2\pi^2\tau}$$
in agreement with the separation of variables solution. Also, letting H → 0 we find
$$u(\zeta, \tau) = \sum_{n=1}^{\infty}\frac{2}{y_n^2}\,\frac{\sin(y_n\zeta)}{\sin(y_n)}\,e^{-y_n^2\tau}$$
with $y_n = \frac{2n-1}{2}\pi$, again in agreement with the separation of variables solution.
Example 8.5. Next we consider a conduction (diffusion) problem with a transient source q(τ). (Nondimensionalization and normalization are left as an exercise.)
$$u_\tau = u_{\zeta\zeta} + q(\tau)$$
$$u(\zeta, 0) = 0 = u_\zeta(0, \tau)$$
$$u(1, \tau) = 1$$
Obtaining the Laplace transform of the equation and boundary conditions, we find
$$sU = U_{\zeta\zeta} + Q(s)$$
$$U_\zeta(0, s) = 0$$
$$U(1, s) = \frac{1}{s}$$
A particular solution is
$$U_P = \frac{Q(s)}{s}$$
and the homogeneous solution is
$$U_H = A\sinh(\zeta\sqrt{s}) + B\cosh(\zeta\sqrt{s})$$
Hence the general solution is
$$U = \frac{Q}{s} + A\sinh(\zeta\sqrt{s}) + B\cosh(\zeta\sqrt{s})$$
Using the boundary conditions, $U_\zeta(0, s) = 0$ gives A = 0, and
$$U(1, s) = \frac{1}{s} = \frac{Q}{s} + B\cosh(\sqrt{s}) \quad\text{gives}\quad B = \frac{1 - Q}{s\cosh(\sqrt{s})}$$
$$U = \frac{Q}{s} + \frac{1 - Q}{s}\,\frac{\cosh(\zeta\sqrt{s})}{\cosh(\sqrt{s})}$$
The poles are (with $\sqrt{s} = x + iy$) at
$$\cosh\sqrt{s} = 0 \quad\text{or}\quad \cos y = 0$$
$$\sqrt{s} = \pm\frac{2n-1}{2}\pi\,i, \qquad s = -\left(\frac{2n-1}{2}\right)^2\pi^2 = -\lambda_n^2, \quad n = 1, 2, 3, \ldots$$
or when s = 0.
When s = 0 the residue is
$$\mathrm{Res} = \lim_{s\to 0} sU(s)\,e^{s\tau} = 1$$
The denominator of the second term is $s\cosh\sqrt{s}$, and its derivative with respect to s is
$$\cosh\sqrt{s} + \frac{\sqrt{s}}{2}\sinh\sqrt{s}$$
When $s = -\lambda_n^2$, we have for the residue of the second term
$$\lim_{s\to-\lambda_n^2}\frac{(1-Q)\cosh(\zeta\sqrt{s})}{\cosh\sqrt{s} + \frac{\sqrt{s}}{2}\sinh\sqrt{s}}\,e^{s\tau}$$
and since
$$\sinh\sqrt{s} = i\sin\frac{2n-1}{2}\pi = i(-1)^{n+1}$$
and
$$\cosh(\zeta\sqrt{s}) = \cos\frac{2n-1}{2}\zeta\pi$$
we have
$$L^{-1}\left\{\frac{\cosh(\zeta\sqrt{s})}{s\cosh\sqrt{s}}\right\} = 1 + \sum_{n=1}^{\infty}\frac{\cos\frac{2n-1}{2}\zeta\pi}{\frac{2n-1}{4}\pi\,i^2(-1)^{n+1}}\,e^{-\left(\frac{2n-1}{2}\right)^2\pi^2\tau} = 1 - \sum_{n=1}^{\infty}\frac{4(-1)^{n+1}\cos\frac{2n-1}{2}\zeta\pi}{(2n-1)\pi}\,e^{-\left(\frac{2n-1}{2}\right)^2\pi^2\tau}$$
We now use the convolution principle to evaluate the solution for the general case of q(τ). We are seeking the inverse transform of
$$\frac{1}{s}\,\frac{\cosh(\zeta\sqrt{s})}{\cosh\sqrt{s}} + \frac{Q(s)}{s}\left[1 - \frac{\cosh(\zeta\sqrt{s})}{\cosh\sqrt{s}}\right]$$
The inverse transform of the first term is given above. As for the second term, the inverse transform of Q(s) is simply q(τ), and the inverse transform of the second term, absent Q(s), is
$$\sum_{n=1}^{\infty}\frac{4(-1)^{n+1}\cos\frac{2n-1}{2}\zeta\pi}{(2n-1)\pi}\,e^{-\left(\frac{2n-1}{2}\right)^2\pi^2\tau}$$
According to the convolution principle, and summing over all poles,
$$u(\zeta, \tau) = 1 - \sum_{n=1}^{\infty}\frac{4(-1)^{n+1}\cos\frac{2n-1}{2}\zeta\pi}{(2n-1)\pi}\,e^{-\left(\frac{2n-1}{2}\right)^2\pi^2\tau} + \sum_{n=1}^{\infty}\frac{4(-1)^{n+1}\cos\frac{2n-1}{2}\zeta\pi}{(2n-1)\pi}\int_{\tau'=0}^{\tau}e^{-\left(\frac{2n-1}{2}\right)^2\pi^2\tau'}q(\tau - \tau')\,d\tau'$$
Example 8.6. Next consider heat conduction in a semi-infinite region x > 0, t > 0. The initial temperature is zero, and the wall is subjected to a temperature u(0, t) = f(t) at the x = 0 surface:
$$u_t = u_{xx}$$
$$u(x, 0) = 0$$
$$u(0, t) = f(t)$$
and u is bounded. Taking the Laplace transform and applying the initial condition,
$$sU = U_{xx}$$
Thus
$$U(x, s) = A\sinh x\sqrt{s} + B\cosh x\sqrt{s}$$
Both functions are unbounded for x → ∞. Thus it is more convenient to use the equivalent solution
$$U(x, s) = Ae^{-x\sqrt{s}} + Be^{x\sqrt{s}} = Ae^{-x\sqrt{s}}$$
in order for the function to be bounded. Applying the boundary condition at x = 0,
$$F(s) = A$$
Thus we have
$$U(x, s) = F(s)\,e^{-x\sqrt{s}}$$
Multiplying and dividing by s gives
$$U(x, s) = sF(s)\,\frac{e^{-x\sqrt{s}}}{s}$$
The inverse transform of $e^{-x\sqrt{s}}/s$ is
$$L^{-1}\left\{\frac{e^{-x\sqrt{s}}}{s}\right\} = \mathrm{erfc}\left(\frac{x}{2\sqrt{t}}\right)$$
and we have seen that
$$L\{f'\} = sF(s) - f(0)$$
Thus, making use of convolution, we find
$$u(x, t) = f(0)\,\mathrm{erfc}\left(\frac{x}{2\sqrt{t}}\right) + \int_{\mu=0}^{t} f'(t-\mu)\,\mathrm{erfc}\left(\frac{x}{2\sqrt{\mu}}\right)d\mu$$
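For constant f(t) = 1 the solution is simply u = erfc(x/2√t); as a sketch (my own check), central differences confirm that this function satisfies the heat equation u_t = u_xx:

```python
import math

def u(x, t):
    return math.erfc(x / (2 * math.sqrt(t)))

h = 1e-4
x0, t0 = 0.7, 0.3
ut = (u(x0, t0 + h) - u(x0, t0 - h)) / (2 * h)
uxx = (u(x0 + h, t0) - 2 * u(x0, t0) + u(x0 - h, t0)) / h**2
print(ut, uxx)   # nearly equal: the heat equation is satisfied
```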
Example 8.7. Now consider a problem in cylindrical coordinates. An infinite cylinder is initially at dimensionless temperature u(r, 0) = 1, with dimensionless temperature u(1, t) = 0 at the surface. We have
$$\frac{\partial u}{\partial t} = \frac{1}{r}\frac{\partial}{\partial r}\left(r\frac{\partial u}{\partial r}\right)$$
$$u(1, t) = 0$$
$$u(r, 0) = 1$$
$$u \text{ bounded}$$
The Laplace transform with respect to time yields
$$sU(r, s) - 1 = \frac{1}{r}\frac{d}{dr}\left(r\frac{dU}{dr}\right)$$
with
$$U(1, s) = 0$$
Obtaining the homogeneous and particular solutions yields
$$U(r, s) = \frac{1}{s} + AJ_0(i\sqrt{s}\,r) + BY_0(i\sqrt{s}\,r)$$
The boundedness condition requires that B = 0, while the condition at r = 1 gives
$$A = -\frac{1}{s\,J_0(i\sqrt{s})}$$
Thus
$$U(r, s) = \frac{1}{s} - \frac{J_0(i\sqrt{s}\,r)}{s\,J_0(i\sqrt{s})}$$
The inverse transform is as follows:
$$u(r, t) = 1 - \sum \text{Residues of } \frac{e^{st}J_0(i\sqrt{s}\,r)}{s\,J_0(i\sqrt{s})}$$
Poles of the function occur at s = 0 and at $J_0(i\sqrt{s}) = 0$, or $i\sqrt{s} = \lambda_n$, the roots of the Bessel function of the first kind of order zero. Thus they occur at $s = -\lambda_n^2$. The residues are
$$\lim_{s\to 0}\frac{e^{st}J_0(i\sqrt{s}\,r)}{J_0(i\sqrt{s})} = 1$$
and
$$\lim_{s\to-\lambda_n^2}\frac{e^{st}J_0(i\sqrt{s}\,r)}{\frac{d}{ds}\left[s\,J_0(i\sqrt{s})\right]} = \lim_{s\to-\lambda_n^2}\frac{e^{st}J_0(i\sqrt{s}\,r)}{-J_1(i\sqrt{s})\,\frac{i\sqrt{s}}{2}} = \frac{e^{-\lambda_n^2 t}J_0(\lambda_n r)}{-\frac{1}{2}\lambda_n J_1(\lambda_n)}$$
The two unity residues cancel, and the final solution is as follows:
$$u(r, t) = \sum_{n=1}^{\infty}\frac{2\,e^{-\lambda_n^2 t}J_0(\lambda_n r)}{\lambda_n J_1(\lambda_n)}$$
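As a sketch (my own check), the residue sum $u = \sum 2e^{-\lambda_n^2 t}J_0(\lambda_n r)/(\lambda_n J_1(\lambda_n))$ can be evaluated with the first few roots of $J_0$ (tabulated values are assumptions taken from standard tables) and a power-series evaluation of the Bessel functions; for small t it should return approximately the initial value 1:

```python
import math

def bessel_j(nu, x, terms=60):
    """Power series for J_nu(x) (integer nu), adequate for moderate x."""
    return sum((-1) ** k / (math.factorial(k) * math.factorial(k + nu))
               * (x / 2) ** (2 * k + nu) for k in range(terms))

# First five roots of J0 (standard tabulated values)
roots = [2.404825557695773, 5.520078110286311, 8.653727912911013,
         11.791534439014281, 14.930917708487786]

def u(r, t):
    return sum(2 * math.exp(-lam * lam * t) * bessel_j(0, lam * r)
               / (lam * bessel_j(1, lam)) for lam in roots)

print(u(0.5, 0.02))   # close to 1 for small t (five terms limit the accuracy)
```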
Problems
1. Consider a finite wall with initial temperature zero and the wall at x = 0 insulated.
The wall at x = 1 is subjected to a temperature u(1, t) = f (t) for t > 0. Find u(x, t).
2. Consider a finite wall with initial temperature zero and with the temperature at x = 0 held at u(0, t) = 0. The temperature gradient at x = 1 suddenly becomes ux(1, t) = f (t) for t > 0. Find the temperature when f (t) = 1 and for general f (t).
3. A cylinder is initially at temperature u = 1 and the surface is subject to a convective
boundary condition ur (t, 1) + Hu(t, 1) = 0. Find u(t, r).
8.3 DUHAMEL'S THEOREM
We are now prepared to solve the more general problem
$$\nabla^2 u + g(\mathbf{r}, t) = \frac{\partial u}{\partial t} \qquad (8.1)$$
where r may be considered a vector; that is, the problem is in three dimensions. The general boundary conditions are
$$\frac{\partial u}{\partial n_i} + h_i u = f_i(\mathbf{r}, t) \quad\text{on the boundary } S_i \qquad (8.2)$$
and
$$u(\mathbf{r}, 0) = F(\mathbf{r}) \qquad (8.3)$$
initially. Here $\frac{\partial u}{\partial n_i}$ represents the normal derivative of u at the surface. We present Duhamel's theorem without proof.
Consider the auxiliary problem
$$\nabla^2 P + g(\mathbf{r}, \lambda) = \frac{\partial P}{\partial t} \qquad (8.4)$$
where λ is a timelike constant, with boundary conditions
$$\frac{\partial P}{\partial n_i} + h_i P = f_i(\mathbf{r}, \lambda) \quad\text{on the boundary } S_i \qquad (8.5)$$
and initial condition
$$P(\mathbf{r}, 0) = F(\mathbf{r}) \qquad (8.6)$$
The solution of Eqs. (8.1), (8.2), and (8.3) is as follows:
$$u(x, y, z, t) = \frac{\partial}{\partial t}\int_{\lambda=0}^{t} P(x, y, z, \lambda, t-\lambda)\,d\lambda = F(x, y, z) + \int_{\lambda=0}^{t}\frac{\partial}{\partial t}P(x, y, z, \lambda, t-\lambda)\,d\lambda \qquad (8.7)$$
This is Duhamel’s theorem. For a proof, refer to the book by Arpaci.
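As a sketch of the idea on a scalar analogue (my own illustration, not from the text): for u' = −u + g(t) with u(0) = 0, the auxiliary problem with the source frozen at g(λ) has solution P(λ, t) = g(λ)(1 − e^{−t}), and Eq. (8.7) then reproduces the usual variation-of-parameters solution.

```python
import math

def g(t):
    return math.sin(3 * t)       # an arbitrary source chosen for the test

def u_duhamel(t, n=4000):
    # u(t) = integral over lambda of dP/dt(lambda, t - lambda), with
    # dP/dt(lambda, t - lambda) = g(lambda) * exp(-(t - lambda))
    h = t / n
    return sum(g((k + 0.5) * h) * math.exp(-(t - (k + 0.5) * h)) * h
               for k in range(n))

def u_direct(t, n=40000):
    # forward-Euler integration of u' = -u + g(t) for comparison
    h, u = t / n, 0.0
    for k in range(n):
        u += h * (-u + g(k * h))
    return u

print(u_duhamel(2.0), u_direct(2.0))   # the two agree to a few decimals
```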
Example 8.8. Consider now the following problem with a time-dependent heat source:
$$u_t = u_{xx} + xe^{-t}$$
$$u(0, t) = u(1, t) = 0$$
$$u(x, 0) = 0$$
We first solve the problem
$$P_t = P_{xx} + xe^{-\lambda}$$
$$P(0, t) = P(1, t) = 0$$
$$P(x, 0) = 0$$
while holding λ constant. Recall from Chapter 2 that one technique in this case is to assume a solution of the form
$$P(x, \lambda, t) = X(x, \lambda) + W(x, \lambda, t)$$
so that
$$W_t = W_{xx}$$
$$W(0, \lambda, t) = W(1, \lambda, t) = 0$$
$$W(x, \lambda, 0) = -X(x, \lambda)$$
and
$$X_{xx} + xe^{-\lambda} = 0$$
$$X(0) = X(1) = 0$$
Separating variables in the equation for W, we find that for $W(x, \lambda, t) = S(x)Q(t)$,
$$\frac{Q_t}{Q} = \frac{S_{xx}}{S} = -\beta^2$$
The minus sign has been chosen so that Q remains bounded. The boundary conditions on S(x) are as follows:
$$S(0) = S(1) = 0$$
The solution gives
$$S = A\sin(\beta x) + B\cos(\beta x), \qquad Q = Ce^{-\beta^2 t}$$
Applying the boundary condition at x = 0 requires that B = 0, and applying the boundary condition at x = 1 requires that sin(β) = 0, or β = nπ. Solving for X(x) and applying the boundary conditions gives
$$X = \frac{x}{6}(1 - x^2)e^{-\lambda} = -W(x, \lambda, 0)$$
The solution for W(x, λ, t) is then obtained by superposition:
$$W(x, \lambda, t) = \sum_{n=1}^{\infty} K_n e^{-n^2\pi^2 t}\sin(n\pi x)$$
and using the orthogonality principle,
$$e^{-\lambda}\int_{x=0}^{1}\frac{x}{6}(x^2 - 1)\sin(n\pi x)\,dx = K_n\int_{x=0}^{1}\sin^2(n\pi x)\,dx = \frac{1}{2}K_n$$
so
$$W(x, \lambda, t) = \sum_{n=1}^{\infty}\left[e^{-\lambda}\int_{x=0}^{1}\frac{x}{3}(x^2-1)\sin(n\pi x)\,dx\right]e^{-n^2\pi^2 t}\sin(n\pi x)$$
$$P(x, \lambda, t) = \frac{x}{6}(1 - x^2)e^{-\lambda} + \sum_{n=1}^{\infty}\left[\int_{x=0}^{1}\frac{x}{3}(x^2-1)\sin(n\pi x)\,dx\right]\sin(n\pi x)\,e^{-n^2\pi^2 t}\,e^{-\lambda}$$
and
$$P(x, \lambda, t-\lambda) = \frac{x}{6}(1 - x^2)e^{-\lambda} + \sum_{n=1}^{\infty}\left[\int_{x=0}^{1}\frac{x}{3}(x^2-1)\sin(n\pi x)\,dx\right]\sin(n\pi x)\,e^{-n^2\pi^2 t}\,e^{n^2\pi^2\lambda - \lambda}$$
$$\frac{\partial}{\partial t}P(x, \lambda, t-\lambda) = \sum_{n=1}^{\infty} n^2\pi^2\left[\int_{x=0}^{1}\frac{x}{3}(1-x^2)\sin(n\pi x)\,dx\right]e^{-n^2\pi^2 t}\,e^{(n^2\pi^2-1)\lambda}\sin(n\pi x)$$
According to Duhamel's theorem, the solution for u(x, t) is then
$$u(x, t) = \sum_{n=1}^{\infty} n^2\pi^2\left[\int_{x=0}^{1}\frac{x}{3}(1-x^2)\sin(n\pi x)\,dx\right]\sin(n\pi x)\int_{\lambda=0}^{t} e^{-n^2\pi^2(t-\lambda)}\,e^{-\lambda}\,d\lambda$$
$$= \sum_{n=1}^{\infty}\frac{n^2\pi^2}{n^2\pi^2 - 1}\left[\int_{x=0}^{1}\frac{x}{3}(1-x^2)\sin(n\pi x)\,dx\right]\left[e^{-t} - e^{-n^2\pi^2 t}\right]\sin(n\pi x)$$
Example 8.9. Reconsider Example 8.6 in which ut = uxx on the half space, with
u(x, 0) = 0
u(0, t) = f (t)
To solve this using Duhamel's theorem, we first set f(t) = f(λ) with λ a timelike constant. Following the procedure outlined at the beginning of Example 8.6, we find
$$U(x, s) = f(\lambda)\,\frac{e^{-x\sqrt{s}}}{s}$$
The inverse transform is as follows:
$$u(x, t, \lambda) = f(\lambda)\,\mathrm{erfc}\left(\frac{x}{2\sqrt{t}}\right)$$
Using Duhamel's theorem,
$$u(x, t) = \int_{\lambda=0}^{t}\frac{\partial}{\partial t}\left[f(\lambda)\,\mathrm{erfc}\left(\frac{x}{2\sqrt{t-\lambda}}\right)\right]d\lambda$$
which is a different form of the solution given in Example 8.6.
Problems
1. Show that the solutions given in Examples 8.6 and 8.9 are equivalent.
2. Use Duhamel's theorem along with Laplace transforms to solve the following conduction problem on the half space:
$$u_t = u_{xx}$$
$$u(x, 0) = 0$$
$$u_x(0, t) = f(t)$$
3. Solve the following problem first using separation of variables:
$$\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2} + \sin(\pi x)$$
$$u(t, 0) = 0$$
$$u(t, 1) = 0$$
$$u(0, x) = 0$$
4. Consider now the problem
$$\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2} + \sin(\pi x)\,t e^{-t}$$
with the same boundary conditions as Problem 3. Solve using Duhamel's theorem.
FURTHER READING
V. S. Arpaci, Conduction Heat Transfer, Reading, MA: Addison-Wesley, 1966.
R. V. Churchill, Operational Mathematics, 3rd ed. New York: McGraw-Hill, 1972.
I. H. Sneddon, The Use of Integral Transforms, New York: McGraw-Hill, 1972.
C H A P T E R 9
Sturm–Liouville Transforms
Sturm–Liouville transforms include a variety of examples of choices of the kernel function
K(s, t) that was presented in the general transform equation at the beginning of Chapter 6. We
first illustrate the idea with a simple example of the Fourier sine transform, which is a special
case of a Sturm–Liouville transform. We then move on to the general case and work out some
examples.
9.1 A PRELIMINARY EXAMPLE: FOURIER SINE TRANSFORM
Example 9.1. Consider the boundary value problem
$$u_t = u_{xx}, \quad 0 \le x \le 1$$
with boundary conditions
$$u(0, t) = 0$$
$$u_x(1, t) + Hu(1, t) = 0$$
and initial condition
$$u(x, 0) = 1$$
Multiply both sides of the differential equation by sin(λx)dx and integrate over the interval 0 ≤ x ≤ 1:
$$\int_{x=0}^{1}\sin(\lambda x)\frac{\partial^2 u}{\partial x^2}\,dx = \frac{d}{dt}\int_{x=0}^{1}u(x, t)\sin(\lambda x)\,dx$$
Integration of the left-hand side by parts yields
$$\int_{x=0}^{1}\frac{d^2}{dx^2}\left[\sin(\lambda x)\right]u(x, t)\,dx + \left[\sin(\lambda x)\frac{\partial u}{\partial x} - u\,\frac{d}{dx}\sin(\lambda x)\right]_0^1$$
and applying the boundary conditions and noting that
$$\frac{d^2}{dx^2}\left[\sin(\lambda x)\right] = -\lambda^2\sin(\lambda x)$$
we have
$$-\lambda^2\int_{x=0}^{1}\sin(\lambda x)\,u(x, t)\,dx + \left[u_x\sin(\lambda x) - \lambda u\cos(\lambda x)\right]_0^1 = -\lambda^2 U(\lambda, t) - u(1)\left[\lambda\cos\lambda + H\sin\lambda\right]$$
Defining
$$S_\lambda\{u(x, t)\} = \int_{x=0}^{1}u(x, t)\sin(\lambda x)\,dx = U(\lambda, t)$$
as the Fourier sine transform of u(x, t) and setting
$$\lambda\cos\lambda + H\sin\lambda = 0$$
we find
$$U_t(\lambda, t) = -\lambda^2 U(\lambda, t)$$
whose solution is
$$U(\lambda, t) = Ae^{-\lambda^2 t}$$
The initial condition of the transformed function is
$$U(\lambda, 0) = \int_{x=0}^{1}\sin(\lambda x)\,dx = \frac{1}{\lambda}\left[1 - \cos(\lambda)\right]$$
Applying the initial condition we find
$$U(\lambda, t) = \frac{1}{\lambda}\left[1 - \cos(\lambda)\right]e^{-\lambda^2 t}$$
It now remains to find from this the value of u(x, t).
Recall from the general theory of Fourier series that any odd function of x defined on 0 ≤ x ≤ 1 can be expanded in a Fourier sine series in the form
$$u(x, t) = \sum_{n=1}^{\infty}\frac{\sin(\lambda_n x)}{\|\sin(\lambda_n x)\|^2}\int_{\xi=0}^{1}u(\xi, t)\sin(\lambda_n\xi)\,d\xi$$
and this is simply
$$u(x, t) = \sum_{n=1}^{\infty}\frac{\sin(\lambda_n x)}{\|\sin(\lambda_n x)\|^2}\,U(\lambda_n, t)$$
with λn given by the transcendental equation above. The final solution is therefore
$$u(x, t) = \sum_{n=1}^{\infty}\frac{2(1 - \cos\lambda_n)}{\lambda_n - \frac{1}{2}\sin(2\lambda_n)}\sin(\lambda_n x)\,e^{-\lambda_n^2 t}$$
9.2 GENERALIZATION: THE STURM–LIOUVILLE TRANSFORM: THEORY
Consider the differential operator D,
$$D[f(x)] = A(x)f'' + B(x)f' + C(x)f, \quad a \le x \le b \qquad (9.1)$$
with boundary conditions of the form
$$N_\alpha[f(x)]_{x=a} = f(a)\cos\alpha + f'(a)\sin\alpha$$
$$N_\beta[f(x)]_{x=b} = f(b)\cos\beta + f'(b)\sin\beta \qquad (9.2)$$
where the symbols Nα and Nβ are differential operators that define the boundary conditions. For example, the differential operator might be
$$D[f(x)] = f_{xx}$$
and the boundary conditions might be defined by the operators
$$N_\alpha[f(x)]_{x=a} = f(a) = 0$$
and
$$N_\beta[f(x)]_{x=b} = f'(b) + Hf(b) = 0$$
We define an integral transformation
$$T[f(x)] = \int_a^b f(x)K(x, \lambda)\,dx = F(\lambda) \qquad (9.3)$$
We wish to transform these differential forms into algebraic forms. First we write the differential operator in standard form. Let
$$r(x) = \exp\left[\int_a^x\frac{B(\xi)}{A(\xi)}\,d\xi\right]$$
$$p(x) = \frac{r(x)}{A(x)} \qquad (9.4)$$
$$q(x) = -p(x)C(x)$$
Then
$$D[f(x)] = \frac{1}{p(x)}\left[(rf')' - qf\right] = \frac{1}{p(x)}\,\mathcal{L}[f(x)] \qquad (9.5)$$
where $\mathcal{L}$ is the Sturm–Liouville operator. Let the kernel function K(x, λ) in Eq. (9.3) be
$$K(x, \lambda) = p(x)\,\Phi(x, \lambda) \qquad (9.6)$$
Then
$$T[D[f(x)]] = \int_a^b\Phi(x, \lambda)\,\mathcal{L}[f(x)]\,dx = \int_a^b f(x)\,\mathcal{L}[\Phi(x, \lambda)]\,dx + \left[(f_x\Phi - \Phi_x f)\,r(x)\right]_a^b \qquad (9.7)$$
while
$$N_\alpha[f(a)] = f(a)\cos\alpha + f'(a)\sin\alpha$$
$$N_\alpha'[f(a)] = \frac{d}{d\alpha}\left[f(a)\cos\alpha + f'(a)\sin\alpha\right] = -f(a)\sin\alpha + f'(a)\cos\alpha \qquad (9.8)$$
so that
$$f(a) = N_\alpha[f(a)]\cos\alpha - N_\alpha'[f(a)]\sin\alpha$$
$$f'(a) = N_\alpha[f(a)]\sin\alpha + N_\alpha'[f(a)]\cos\alpha \qquad (9.9)$$
where the prime on Nα indicates differentiation with respect to α.
The lower boundary term at x = a is then
$$\left[\Phi(a, \lambda)f'(a) - \Phi'(a, \lambda)f(a)\right]r(a) \qquad (9.10)$$
But if Φ(x, λ) is chosen to satisfy the Sturm–Liouville equation and the boundary conditions, then
$$N_\alpha[\Phi(x, \lambda)]_{x=a} = \Phi(a, \lambda)\cos\alpha + \Phi'(a, \lambda)\sin\alpha$$
$$N_\beta[\Phi(x, \lambda)]_{x=b} = \Phi(b, \lambda)\cos\beta + \Phi'(b, \lambda)\sin\beta \qquad (9.11)$$
and
$$\Phi(a, \lambda) = N_\alpha[\Phi(a, \lambda)]\cos\alpha - N_\alpha'[\Phi(a, \lambda)]\sin\alpha$$
$$\Phi'(a, \lambda) = N_\alpha[\Phi(a, \lambda)]\sin\alpha + N_\alpha'[\Phi(a, \lambda)]\cos\alpha \qquad (9.12)$$
Substituting these and the expressions (9.9) into (9.10), the products of sines and cosines cancel, leaving
$$\left\{N_\alpha'[f(a)]\,N_\alpha[\Phi(a, \lambda)] - N_\alpha[f(a)]\,N_\alpha'[\Phi(a, \lambda)]\right\}r(a) \qquad (9.13)$$
If the kernel function is chosen so that $N_\alpha[\Phi(a, \lambda)] = 0$, for example, the lower boundary term is
$$-N_\alpha[f(a)]\,N_\alpha'[\Phi(a, \lambda)]\,r(a) \qquad (9.14)$$
Similarly, at x = b,
$$\left[\Phi(b, \lambda)f'(b) - \Phi'(b, \lambda)f(b)\right]r(b) = -N_\beta[f(b)]\,N_\beta'[\Phi(b, \lambda)]\,r(b) \qquad (9.15)$$
Since Φ(x, λ) satisfies the Sturm–Liouville equation, there are n solutions forming a set of orthogonal functions with weight function p(x), and
$$\mathcal{L}[\Phi_n(x, \lambda_n)] = -\lambda_n^2\,p(x)\,\Phi_n(x, \lambda_n) \qquad (9.16)$$
so that
$$T[D[f(x)]] = -\lambda_n^2\int_{x=a}^{b}p(x)f(x)\Phi_n(x, \lambda_n)\,dx + N_\alpha[f(a)]\,N_\alpha'[\Phi_n(a, \lambda_n)]\,r(a) - N_\beta[f(b)]\,N_\beta'[\Phi_n(b, \lambda_n)]\,r(b) \qquad (9.17)$$
where
$$\lambda_n^2\int_a^b p(x)f(x)\Phi_n(x, \lambda_n)\,dx = \lambda_n^2 F(\lambda_n) \qquad (9.18)$$
9.3 THE INVERSE TRANSFORM
The great thing about Sturm–Liouville transforms is that the inversion is so easy. Recall that the generalized Fourier series of a function f(x) is
$$f(x) = \sum_{n=1}^{\infty}\frac{\Phi_n(x, \lambda_n)}{\|\Phi_n\|^2}\int_a^b f(\xi)\,p(\xi)\,\Phi_n(\xi, \lambda_n)\,d\xi = \sum_{n=1}^{\infty}\frac{\Phi_n(x, \lambda_n)}{\|\Phi_n\|^2}\,F(\lambda_n) \qquad (9.19)$$
where the functions Φn(x, λn) form an orthogonal set with respect to the weight function p(x), with $\|\Phi_n\|^2 = \int_a^b p(x)\,\Phi_n^2(x, \lambda_n)\,dx$.
Example 9.2 (The cosine transform). Consider the diffusion equation
$$y_t = y_{xx}, \quad 0 \le x \le 1, \quad t > 0$$
$$y_x(0, t) = y(1, t) = 0$$
$$y(x, 0) = f(x)$$
To find the proper kernel function K(x, λ) we note that according to Eq. (9.16), Φn(x, λn) must satisfy the Sturm–Liouville equation
$$\mathcal{L}[\Phi_n(x, \lambda)] = -\lambda^2\,p(x)\,\Phi_n(x, \lambda)$$
where for the current problem
$$\mathcal{L}[\Phi_n(x, \lambda)] = \frac{d^2}{dx^2}\Phi_n(x, \lambda) \quad\text{and}\quad p(x) = 1$$
along with the boundary conditions (9.11),
$$N_\alpha[\Phi(x, \lambda)]_{x=a} = \Phi_x(0, \lambda) = 0$$
$$N_\beta[\Phi(x, \lambda)]_{x=b} = \Phi(1, \lambda) = 0$$
Solution of this differential equation and application of the boundary conditions yields an infinite number of functions (as in any Sturm–Liouville problem)
$$\Phi(x, \lambda_n) = A\cos(\lambda_n x)$$
with
$$\cos(\lambda_n) = 0, \qquad \lambda_n = \frac{(2n-1)}{2}\pi$$
Thus, the appropriate kernel function is $K(x, \lambda_n) = \cos(\lambda_n x)$ with $\lambda_n = \frac{(2n-1)\pi}{2}$. Using this kernel function in the original partial differential equation, we find
$$\frac{dY}{dt} = -\lambda_n^2 Y$$
where $C_\lambda\{y(x, t)\} = Y(t, \lambda_n)$ is the cosine transform of y(x, t). The solution gives
$$Y(t, \lambda_n) = Be^{-\lambda_n^2 t}$$
and applying the cosine transform of the initial condition,
$$B = \int_{x=0}^{1}f(x)\cos(\lambda_n x)\,dx$$
According to Eq. (9.19) the solution is as follows:
$$y(x, t) = \sum_{n=1}^{\infty}\frac{\cos(\lambda_n x)}{\|\cos(\lambda_n x)\|^2}\int_{x=0}^{1}f(x)\cos(\lambda_n x)\,dx\;e^{-\lambda_n^2 t}$$
Example 9.3 (The Hankel transform). Next consider the diffusion equation in cylindrical coordinates:
$$u_t = \frac{1}{r}\frac{\partial}{\partial r}\left(r\frac{\partial u}{\partial r}\right)$$
Boundary and initial conditions are prescribed as
$$u_r(t, 0) = 0$$
$$u(t, 1) = 0$$
$$u(0, r) = f(r)$$
First we find the proper kernel function:
$$\mathcal{L}[\Phi_n(r, \lambda_n)] = \frac{d}{dr}\left(r\frac{d\Phi_n}{dr}\right) = -\lambda_n^2\,r\,\Phi_n$$
with boundary conditions
$$\Phi_r(0, \lambda_n) = 0$$
$$\Phi(1, \lambda_n) = 0$$
The solution is the Bessel function $J_0(\lambda_n r)$, with λn given by $J_0(\lambda_n) = 0$. Thus the transform of u(t, r) is as follows:
$$H_\lambda\{u(t, r)\} = U(t, \lambda_n) = \int_{r=0}^{1}rJ_0(\lambda_n r)\,u(t, r)\,dr$$
This is called a Hankel transform. The appropriate differential equation for U(t, λn) is
$$\frac{dU_n}{dt} = -\lambda_n^2 U_n$$
so that
$$U_n(t, \lambda_n) = Be^{-\lambda_n^2 t}$$
Applying the initial condition, we find
$$B = \int_{r=0}^{1}rf(r)J_0(\lambda_n r)\,dr$$
and from Eq. (9.19),
$$u(t, r) = \sum_{n=1}^{\infty}\frac{\int_{r=0}^{1}rf(r)J_0(\lambda_n r)\,dr}{\|J_0(\lambda_n r)\|^2}\,J_0(\lambda_n r)\,e^{-\lambda_n^2 t}$$
Example 9.4 (The sine transform with a source). Next consider a one-dimensional transient diffusion with a source term q(x):
$$u_t = u_{xx} + q(x)$$
$$u(0, x) = u(t, 0) = u(t, \pi) = 0$$
First we determine that the sine transform is appropriate. The operator is such that
$$\mathcal{L}[\Phi] = \Phi_{xx} = \lambda\Phi$$
and according to the boundary conditions we must choose Φ = sin(nx) and λ = −n². The sine transform of q(x) is Qn. Thus
$$U_t = -n^2 U + Q_n, \qquad U = U(n, t)$$
The homogeneous and particular solutions give
$$U_n = Ce^{-n^2 t} + \frac{Q_n}{n^2}$$
When t = 0, U = 0, so that
$$C = -\frac{Q_n}{n^2}$$
where Qn is given by
$$Q_n = \int_{x=0}^{\pi}q(x)\sin(nx)\,dx$$
Since $U_n = \frac{Q_n}{n^2}\left[1 - e^{-n^2 t}\right]$, the solution is
$$u(x, t) = \sum_{n=1}^{\infty}\frac{Q_n}{n^2}\left[1 - e^{-n^2 t}\right]\frac{\sin(nx)}{\|\sin(nx)\|^2}$$
Note that Qn is just the nth term of the Fourier sine series of q(x). For example, if q(x) = x,
$$Q_n = \frac{\pi}{n}(-1)^{n+1}$$
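As a sketch (my own check): with q(x) = x and ‖sin(nx)‖² = π/2, the t → ∞ limit of the series should approach the steady state x(π² − x²)/6, which solves u″ + x = 0 with u(0) = u(π) = 0.

```python
import math

def u_steady(x, terms=2000):
    """t -> infinity limit: (2/pi) * sum of (Q_n/n^2) sin(nx), Q_n = pi(-1)^(n+1)/n."""
    return sum(2 * (-1) ** (n + 1) / n ** 3 * math.sin(n * x)
               for n in range(1, terms + 1))

x = 1.0
print(u_steady(x), x * (math.pi ** 2 - x ** 2) / 6)   # the two agree closely
```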
Example 9.5 (A mixed transform). Consider steady temperatures in a half cylinder of infinite length with internal heat generation q(r) that is a function of the radial position. The appropriate differential equation is
$$u_{rr} + \frac{1}{r}u_r + \frac{1}{r^2}u_{\theta\theta} + u_{zz} + q(r) = 0, \quad 0 \le r \le 1, \quad 0 \le z < \infty, \quad 0 \le \theta \le \pi$$
with boundary conditions
$$u(1, \theta, z) = 1$$
$$u(r, 0, z) = u(r, \pi, z) = u(r, \theta, 0) = 0$$
Let the sine transform of u with respect to θ on the interval (0, π) be denoted by $S_n\{u(r, \theta, z)\} = U_n(r, n, z)$. Then
$$\frac{\partial^2 U_n}{\partial r^2} + \frac{1}{r}\frac{\partial U_n}{\partial r} - \frac{n^2}{r^2}U_n + \frac{\partial^2 U_n}{\partial z^2} + q(r)\,S_n(1) = 0$$
where Sn(1) is the sine transform of 1, and the boundary conditions for u(r, θ, z) on θ have been used. Note that the operator on Φ in the r coordinate direction is
$$\mathcal{L}[\Phi(r, \mu_j)] = \frac{1}{r}\frac{d}{dr}\left(r\frac{d\Phi}{dr}\right) - \frac{n^2}{r^2}\Phi = -\mu_j^2\Phi$$
With the boundary condition at r = 1 chosen as $\Phi(1, \mu_j) = 0$, this gives the kernel function as $K = rJ_n(\mu_j r)$, with eigenvalues determined by $J_n(\mu_j) = 0$.
We now apply the finite Hankel transform to the above partial differential equation and denote the Hankel transform of Un by Ujn. After applying the boundary condition on r we find, after noting that
$$N_\beta[U_n(1, n, z)] = S_n(1)$$
$$N_\beta'[\Phi(1, \mu_j)] = -\mu_j J_{n+1}(\mu_j)$$
that
$$-\mu_j^2 U_{jn} + \mu_j J_{n+1}(\mu_j)\,S_n(1) + \frac{d^2 U_{jn}}{dz^2} + Q_j(\mu_j)\,S_n(1) = 0$$
Here Qj(μj) is the Hankel transform of q(r). Solving the resulting ordinary differential equation and applying the boundary condition at z = 0,
$$U_{jn}(\mu_j, n, z) = S_n(1)\,\frac{Q_j(\mu_j) + \mu_j J_{n+1}(\mu_j)}{\mu_j^2}\left[1 - \exp(-\mu_j z)\right]$$
We now invert the sine and Hankel transforms according to Eq. (9.19) and find that
$$u(r, \theta, z) = \frac{4}{\pi}\sum_{n=1}^{\infty}\sum_{j=1}^{\infty}\frac{U_{jn}(\mu_j, n, z)}{\left[J_{n+1}(\mu_j)\right]^2}\,J_n(\mu_j r)\sin(n\theta)$$
Note that
$$S_n(1) = \left[1 - (-1)^n\right]/n$$
Problems
Use an appropriate Sturm–Liouville transform to solve each of the following problems:
1. Chapter 3, Problem 1.
2. Chapter 2, Problem 2.
3. Chapter 3, Problem 3.
4. Solve
$$\frac{\partial u}{\partial t} = \frac{1}{r}\frac{\partial}{\partial r}\left(r\frac{\partial u}{\partial r}\right) + G \quad (G \text{ constant})$$
$$u(r, 0) = 0$$
$$u(1, t) = 0$$
$$u \text{ bounded}$$
5. Solve the following using an appropriate Sturm–Liouville transform:
$$\frac{\partial^2 u}{\partial x^2} = \frac{\partial u}{\partial t}$$
$$u(t, 0) = 0$$
$$u(t, 1) = 0$$
$$u(0, x) = \sin(\pi x)$$
6. Find the solution for general ρ(t):
$$\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2}$$
$$u(t, 0) = 0$$
$$u(t, 1) = \rho(t)$$
$$u(0, x) = 0$$
FURTHER READING
V. S. Arpaci, Conduction Heat Transfer, Reading, MA: Addison-Wesley, 1966.
R. V. Churchill, Operational Mathematics, 3rd ed. New York: McGraw-Hill, 1972.
I. H. Sneddon, The Use of Integral Transforms, New York: McGraw-Hill, 1972.
CHAPTER 10
Introduction to Perturbation Methods
Perturbation theory is an approximate method of solving equations that contain a parameter that is small in some sense. The method should result in an approximate solution that may be termed "precise" in the sense that the error (the difference between the approximate and exact solutions) is understood and controllable, and can be made smaller by some rational technique. Perturbation methods are particularly useful in obtaining solutions to equations that are nonlinear or have variable coefficients. Moreover, if the method yields a simple, accurate approximate solution, that solution may be more useful than a complicated exact one.
10.1 EXAMPLES FROM ALGEBRA
We begin with examples from algebra in order to introduce the ideas of regular perturbations and singular perturbations. We start with the problem of extracting the roots of a quadratic equation that contains a small parameter ε ≪ 1.
10.1.1 Regular Perturbation
Consider, for example, the equation

x² + εx − 1 = 0   (10.1)

The exact solution for the roots is, of course, simply obtained from the quadratic formula:

x = −ε/2 ± √(1 + ε²/4)   (10.2)

which for ε = 0.1 yields the exact roots

x = 0.951249220

and

x = −1.051249220
Equation (10.2) can be expanded for small values of ε in the rapidly convergent series

x = 1 − ε/2 + ε²/8 − ε⁴/128 + · · ·   (10.3)

or

x = −1 − ε/2 − ε²/8 + ε⁴/128 − · · ·   (10.4)
To apply perturbation theory we first note that if ε = 0 the two roots of the equation, which we will call the zeroth-order solutions, are x0 = ±1. We assume a solution of the form

x = x0 + a1ε + a2ε² + a3ε³ + a4ε⁴ + · · ·   (10.5)

Substituting (10.5) into (10.1),

1 + (2a1 + 1)ε + (a1² + 2a2 + a1)ε² + (2a1a2 + 2a3 + a2)ε³ + · · · − 1 = 0   (10.6)
where we have substituted x0 = 1. Each of the coefficients of εⁿ must be zero. Solving for the an we find

a1 = −1/2,  a2 = 1/8,  a3 = 0   (10.7)
so that the approximate solution for the root near x = 1 is

x = 1 − ε/2 + ε²/8 + O(ε⁴)   (10.8)

The symbol O(ε⁴) means that the next term in the series is of order ε⁴.
Performing the same operation with x0 = −1,

1 − (1 + 2a1)ε + (a1² − 2a2 + a1)ε² + (2a1a2 − 2a3 + a2)ε³ + · · · − 1 = 0   (10.9)

Again setting the coefficients of εⁿ equal to zero,

a1 = −1/2,  a2 = −1/8,  a3 = 0   (10.10)
so that the root near x0 = −1 is

x = −1 − ε/2 − ε²/8 + O(ε⁴)   (10.11)

For ε = 0.1 the first three terms in (10.8) give x = 0.951250, which agrees with the exact root 0.951249220 to within about 10⁻⁴ percent, while (10.11) gives the second root as x = −1.051250, with the same accuracy.
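These accuracy claims are easy to verify numerically (a quick sketch, with ε = 0.1 as in the text):

```python
# Compare the exact roots of x^2 + eps*x - 1 = 0 with the three-term
# perturbation series of Eqs. (10.8) and (10.11), for eps = 0.1.
import math

eps = 0.1
root = math.sqrt(1.0 + eps**2 / 4.0)
exact_plus, exact_minus = -eps / 2.0 + root, -eps / 2.0 - root
series_plus = 1.0 - eps / 2.0 + eps**2 / 8.0      # Eq. (10.8)
series_minus = -1.0 - eps / 2.0 - eps**2 / 8.0    # Eq. (10.11)
```

The truncation error is O(ε⁴) ≈ 10⁻⁶, far smaller than the retained terms.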
Next suppose the small parameter multiplies the squared term:

εx² + x − 1 = 0   (10.12)

Using the quadratic formula gives the exact solution

x = −1/(2ε) ± √(1/(4ε²) + 1/ε)   (10.13)

If ε = 0.1, (10.13) gives two solutions:

x = 0.916079783

and

x = −10.916079783
We attempt to follow the same procedure to obtain an approximate solution. If ε = 0 identically, x0 = 1. Using (10.5) with x0 = 1 and substituting into (10.12) we find

(1 + a1)ε + (2a1 + a2)ε² + (a1² + 2a2 + a3)ε³ + · · · = 0   (10.14)

Setting the coefficients of εⁿ to zero, solving for the an, and substituting into (10.5),

x = 1 − ε + 2ε² − 5ε³ + · · ·   (10.15)
gives x = 0.915, close to the exact value. However Eq. (10.12) clearly has two roots, and the
method cannot give an approximation for the second root.
The essential problem is that the second root is not small. In fact (10.13) shows that as ε → 0 the second root behaves like −1/ε, so that the term εx² is never negligible.
10.1.2 Singular Perturbation
Arranging (10.12) in the normal form

x² + (x − 1)/ε = 0   (10.12a)

the equation is said to be singular as ε → 0. If we set u = εx we find an equation for u,

u² + u − ε = 0   (10.16)

With ε identically zero, u = 0 or −1. Assuming that u may be approximated by a series like (10.5), we find for the root near u0 = −1 that
(−a1 − 1)ε + (a1² − a2)ε² + (2a1a2 − a3)ε³ + · · · = 0   (10.17)

a1 = −1,  a2 = 1,  a3 = −2   (10.18)
so that

x = u/ε = −1/ε − 1 + ε − 2ε² + · · ·   (10.19)

For ε = 0.1 this approximation of the negative root gives x = −10.92, within about 0.04% of the exact solution.
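Again a quick numerical check of the singular case (ε = 0.1, as above):

```python
# The large root of eps*x^2 + x - 1 = 0 at eps = 0.1: the truncated series
# of Eq. (10.19) versus the exact quadratic-formula root of Eq. (10.13).
import math

eps = 0.1
exact = -1.0 / (2.0 * eps) - math.sqrt(1.0 / (4.0 * eps**2) + 1.0 / eps)
series = -1.0 / eps - 1.0 + eps - 2.0 * eps**2    # Eq. (10.19): -10.92
```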
As a third algebraic example consider

x² − 2εx − ε = 0   (10.20)

This at first glance seems amenable to a regular perturbation expansion, since the x² term is not lost when ε → 0. We proceed optimistically by taking

x = x0 + a1ε + a2ε² + a3ε³ + · · ·   (10.21)
Substituting into (10.20) we find

x0² + (2x0a1 − 2x0 − 1)ε + (a1² + 2x0a2 − 2a1)ε² + · · · = 0   (10.22)

from which we find

x0 = 0
2x0a1 − 2x0 − 1 = 0
a1² + 2x0a2 − 2a1 = 0   (10.23)

Substituting x0 = 0 into the second of these gives the contradiction 0 = −1, so something is wrong. That is, (10.21) is not an appropriate expansion in this case.
Note that (10.20) tells us that as ε → 0, x → 0. Moreover, in writing (10.21) we have essentially assumed that ε → 0 in such a manner that x/ε → constant. Let us suppose instead that as ε → 0

x(ε)/ε^p → constant   (10.24)

We then define a new variable

x = ε^p v(ε)   (10.25)

such that v(0) ≠ 0. Substitution into (10.20) yields
ε^(2p) v² − 2ε^(p+1) v − ε = Q   (10.26)

where Q must be identically zero. Note that Q divided by any power of ε must then also be zero, no matter how small ε becomes, as long as ε is not identically zero.

Now, if p > 1/2, then 2p − 1 > 0 and in the limit as ε → 0

Q/ε = ε^(2p−1) v² − 2ε^p v − 1 → −1

which cannot be true given that Q = 0 identically. Next suppose p < 1/2. Again, Q/ε^(2p) is identically zero for all ε, including in the limit as ε → 0:

Q/ε^(2p) = v² − 2ε^(1−p) v − ε^(1−2p) → v(0)² = 0

contradicting v(0) ≠ 0. Thus p = 1/2 is the only possibility left, so we attempt a solution with this value.
Hence
x = ε^(1/2) v(ε)   (10.27)

Substitution into (10.20) gives

v² − 2ε^(1/2) v − 1 = 0   (10.28)

and this can now be solved by a regular perturbation assuming β = ε^(1/2) ≪ 1. Hence,

v = v0 + a1β + a2β² + a3β³ + · · ·   (10.29)

Inserting this into (10.28) with β = ε^(1/2),

v0² − 1 + (2v0a1 − 2v0)β + (a1² + 2v0a2 − 2a1)β² + · · · = 0   (10.30)

Thus

v0 = ±1,  a1 = 1,  a2 = +1/2 or −1/2   (10.31)
Thus the two roots x = ε^(1/2) v are

x = ε^(1/2) + ε + (1/2) ε^(3/2) + · · ·

and

x = −ε^(1/2) + ε − (1/2) ε^(3/2) + · · ·
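These expansions can be checked against the exact roots x = ε ± √(ε² + ε) of (10.20); a small sketch with the arbitrary choice ε = 0.01:

```python
# Roots of x^2 - 2*eps*x - eps = 0: exact roots eps +/- sqrt(eps^2 + eps)
# versus the rescaled expansions x = +/- sqrt(eps) + eps +/- (1/2) eps^(3/2).
import math

eps = 0.01
sq = math.sqrt(eps)
x_plus = sq + eps + 0.5 * eps * sq
x_minus = -sq + eps - 0.5 * eps * sq
exact_plus = eps + math.sqrt(eps**2 + eps)
exact_minus = eps - math.sqrt(eps**2 + eps)
```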
Appendix A: The Roots of Certain
Transcendental Equations
TABLE A.1: The first six roots† αn of α tan α = C.
C α1 α2 α3 α4 α5 α6
0 0 3.1416 6.2832 9.4248 12.5664 15.7080
0.001 0.0316 3.1419 6.2833 9.4249 12.5665 15.7080
0.002 0.0447 3.1422 6.2835 9.4250 12.5665 15.7081
0.004 0.0632 3.1429 6.2838 9.4252 12.5667 15.7082
0.006 0.0774 3.1435 6.2841 9.4254 12.5668 15.7083
0.008 0.0893 3.1441 6.2845 9.4256 12.5670 15.7085
0.01 0.0998 3.1448 6.2848 9.4258 12.5672 15.7086
0.02 0.1410 3.1479 6.2864 9.4269 12.5680 15.7092
0.04 0.1987 3.1543 6.2895 9.4290 12.5696 15.7105
0.06 0.2425 3.1606 6.2927 9.4311 12.5711 15.7118
0.08 0.2791 3.1668 6.2959 9.4333 12.5727 15.7131
0.1 0.3111 3.1731 6.2991 9.4354 12.5743 15.7143
0.2 0.4328 3.2039 6.3148 9.4459 12.5823 15.7207
0.3 0.5218 3.2341 6.3305 9.4565 12.5902 15.7270
0.4 0.5932 3.2636 6.3461 9.4670 12.5981 15.7334
0.5 0.6533 3.2923 6.3616 9.4775 12.6060 15.7397
0.6 0.7051 3.3204 6.3770 9.4879 12.6139 15.7460
0.7 0.7506 3.3477 6.3923 9.4983 12.6218 15.7524
0.8 0.7910 3.3744 6.4074 9.5087 12.6296 15.7587
TABLE A.1 (continued): α tan α = C.
C α1 α2 α3 α4 α5 α6
0.9 0.8274 3.4003 6.4224 9.5190 12.6375 15.7650
1.0 0.8603 3.4256 6.4373 9.5293 12.6453 15.7713
1.5 0.9882 3.5422 6.5097 9.5801 12.6841 15.8026
2.0 1.0769 3.6436 6.5783 9.6296 12.7223 15.8336
3.0 1.1925 3.8088 6.7040 9.7240 12.7966 15.8945
4.0 1.2646 3.9352 6.8140 9.8119 12.8678 15.9536
5.0 1.3138 4.0336 6.9096 9.8928 12.9352 16.0107
6.0 1.3496 4.1116 6.9924 9.9667 12.9988 16.0654
7.0 1.3766 4.1746 7.0640 10.0339 13.0584 16.1177
8.0 1.3978 4.2264 7.1263 10.0949 13.1141 16.1675
9.0 1.4149 4.2694 7.1806 10.1502 13.1660 16.2147
10.0 1.4289 4.3058 7.2281 10.2003 13.2142 16.2594
15.0 1.4729 4.4255 7.3959 10.3898 13.4078 16.4474
20.0 1.4961 4.4915 7.4954 10.5117 13.5420 16.5864
30.0 1.5202 4.5615 7.6057 10.6543 13.7085 16.7691
40.0 1.5325 4.5979 7.6647 10.7334 13.8048 16.8794
50.0 1.5400 4.6202 7.7012 10.7832 13.8666 16.9519
60.0 1.5451 4.6353 7.7259 10.8172 13.9094 17.0026
80.0 1.5514 4.6543 7.7573 10.8606 13.9644 17.0686
100.0 1.5552 4.6658 7.7764 10.8871 13.9981 17.1093
∞ 1.5708 4.7124 7.8540 10.9956 14.1372 17.2788
† The roots of this equation are all real if C > 0.
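Rows of Table A.1 can be reproduced with a root finder. Note that the tabulated values satisfy α tan α = C (e.g., for C = 1, α1 = 0.8603 gives α tan α ≈ +1), and the nth positive root lies in (nπ, nπ + π/2); the 1e-9 offsets below are arbitrary bracket guards (a sketch using SciPy):

```python
# Roots of alpha*tan(alpha) = C, bracketed in (n*pi, n*pi + pi/2).
import math
from scipy.optimize import brentq

def tan_roots(C, k=6):
    f = lambda a: a * math.tan(a) - C
    return [brentq(f, n * math.pi + 1e-9, n * math.pi + math.pi / 2 - 1e-9)
            for n in range(k)]
```

For C = 1.0 this reproduces the row 0.8603, 3.4256, ..., 15.7713 to the table's four decimal places.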
TABLE A.2: The first six roots† αn of α cot α + C = 0.
C α1 α2 α3 α4 α5 α6
−1.0 0 4.4934 7.7253 10.9041 14.0662 17.2208
−0.995 0.1224 4.4945 7.7259 10.9046 14.0666 17.2210
−0.99 0.1730 4.4956 7.7265 10.9050 14.0669 17.2213
−0.98 0.2445 4.4979 7.7278 10.9060 14.0676 17.2219
−0.97 0.2991 4.5001 7.7291 10.9069 14.0683 17.2225
−0.96 0.3450 4.5023 7.7304 10.9078 14.0690 17.2231
−0.95 0.3854 4.5045 7.7317 10.9087 14.0697 17.2237
−0.94 0.4217 4.5068 7.7330 10.9096 14.0705 17.2242
−0.93 0.4551 4.5090 7.7343 10.9105 14.0712 17.2248
−0.92 0.4860 4.5112 7.7356 10.9115 14.0719 17.2254
−0.91 0.5150 4.5134 7.7369 10.9124 14.0726 17.2260
−0.90 0.5423 4.5157 7.7382 10.9133 14.0733 17.2266
−0.85 0.6609 4.5268 7.7447 10.9179 14.0769 17.2295
−0.8 0.7593 4.5379 7.7511 10.9225 14.0804 17.2324
−0.7 0.9208 4.5601 7.7641 10.9316 14.0875 17.2382
−0.6 1.0528 4.5822 7.7770 10.9408 14.0946 17.2440
−0.5 1.1656 4.6042 7.7899 10.9499 14.1017 17.2498
−0.4 1.2644 4.6261 7.8028 10.9591 14.1088 17.2556
−0.3 1.3525 4.6479 7.8156 10.9682 14.1159 17.2614
−0.2 1.4320 4.6696 7.8284 10.9774 14.1230 17.2672
−0.1 1.5044 4.6911 7.8412 10.9865 14.1301 17.2730
0 1.5708 4.7124 7.8540 10.9956 14.1372 17.2788
0.1 1.6320 4.7335 7.8667 11.0047 14.1443 17.2845
0.2 1.6887 4.7544 7.8794 11.0137 14.1513 17.2903
0.3 1.7414 4.7751 7.8920 11.0228 14.1584 17.2961
0.4 1.7906 4.7956 7.9046 11.0318 14.1654 17.3019
TABLE A.2 (continued): α cot α + C = 0.
C α1 α2 α3 α4 α5 α6
0.5 1.8366 4.8158 7.9171 11.0409 14.1724 17.3076
0.6 1.8798 4.8358 7.9295 11.0498 14.1795 17.3134
0.7 1.9203 4.8556 7.9419 11.0588 14.1865 17.3192
0.8 1.9586 4.8751 7.9542 11.0677 14.1935 17.3249
0.9 1.9947 4.8943 7.9665 11.0767 14.2005 17.3306
1.0 2.0288 4.9132 7.9787 11.0856 14.2075 17.3364
1.5 2.1746 5.0037 8.0385 11.1296 14.2421 17.3649
2.0 2.2889 5.0870 8.0962 11.1727 14.2764 17.3932
3.0 2.4557 5.2329 8.2045 11.2560 14.3434 17.4490
4.0 2.5704 5.3540 8.3029 11.3349 14.4080 17.5034
5.0 2.6537 5.4544 8.3914 11.4086 14.4699 17.5562
6.0 2.7165 5.5378 8.4703 11.4773 14.5288 17.6072
7.0 2.7654 5.6078 8.5406 11.5408 14.5847 17.6562
8.0 2.8044 5.6669 8.6031 11.5994 14.6374 17.7032
9.0 2.8363 5.7172 8.6587 11.6532 14.6870 17.7481
10.0 2.8628 5.7606 8.7083 11.7027 14.7335 17.7908
15.0 2.9476 5.9080 8.8898 11.8959 14.9251 17.9742
20.0 2.9930 5.9921 9.0019 12.0250 15.0625 18.1136
30.0 3.0406 6.0831 9.1294 12.1807 15.2380 18.3018
40.0 3.0651 6.1311 9.1987 12.2688 15.3417 18.4180
50.0 3.0801 6.1606 9.2420 12.3247 15.4090 18.4953
60.0 3.0901 6.1805 9.2715 12.3632 15.4559 18.5497
80.0 3.1028 6.2058 9.3089 12.4124 15.5164 18.6209
100.0 3.1105 6.2211 9.3317 12.4426 15.5537 18.6650
∞ 3.1416 6.2832 9.4248 12.5664 15.7080 18.8496
† The roots of this equation are all real if C > −1. These negative values of C arise in
connection with the sphere, §9.4.
TABLE A.3: The first six roots αn of αJ1(α) − CJ0(α) = 0.
C α1 α2 α3 α4 α5 α6
0 0 3.8317 7.0156 10.1735 13.3237 16.4706
0.01 0.1412 3.8343 7.0170 10.1745 13.3244 16.4712
0.02 0.1995 3.8369 7.0184 10.1754 13.3252 16.4718
0.04 0.2814 3.8421 7.0213 10.1774 13.3267 16.4731
0.06 0.3438 3.8473 7.0241 10.1794 13.3282 16.4743
0.08 0.3960 3.8525 7.0270 10.1813 13.3297 16.4755
0.1 0.4417 3.8577 7.0298 10.1833 13.3312 16.4767
0.15 0.5376 3.8706 7.0369 10.1882 13.3349 16.4797
0.2 0.6170 3.8835 7.0440 10.1931 13.3387 16.4828
0.3 0.7465 3.9091 7.0582 10.2029 13.3462 16.4888
0.4 0.8516 3.9344 7.0723 10.2127 13.3537 16.4949
0.5 0.9408 3.9594 7.0864 10.2225 13.3611 16.5010
0.6 1.0184 3.9841 7.1004 10.2322 13.3686 16.5070
0.7 1.0873 4.0085 7.1143 10.2419 13.3761 16.5131
0.8 1.1490 4.0325 7.1282 10.2516 13.3835 16.5191
0.9 1.2048 4.0562 7.1421 10.2613 13.3910 16.5251
1.0 1.2558 4.0795 7.1558 10.2710 13.3984 16.5312
1.5 1.4569 4.1902 7.2233 10.3188 13.4353 16.5612
2.0 1.5994 4.2910 7.2884 10.3658 13.4719 16.5910
3.0 1.7887 4.4634 7.4103 10.4566 13.5434 16.6499
4.0 1.9081 4.6018 7.5201 10.5423 13.6125 16.7073
5.0 1.9898 4.7131 7.6177 10.6223 13.6786 16.7630
6.0 2.0490 4.8033 7.7039 10.6964 13.7414 16.8168
7.0 2.0937 4.8772 7.7797 10.7646 13.8008 16.8684
8.0 2.1286 4.9384 7.8464 10.8271 13.8566 16.9179
9.0 2.1566 4.9897 7.9051 10.8842 13.9090 16.9650
10.0 2.1795 5.0332 7.9569 10.9363 13.9580 17.0099
15.0 2.2509 5.1773 8.1422 11.1367 14.1576 17.2008
20.0 2.2880 5.2568 8.2534 11.2677 14.2983 17.3442
30.0 2.3261 5.3410 8.3771 11.4221 14.4748 17.5348
40.0 2.3455 5.3846 8.4432 11.5081 14.5774 17.6508
50.0 2.3572 5.4112 8.4840 11.5621 14.6433 17.7272
60.0 2.3651 5.4291 8.5116 11.5990 14.6889 17.7807
80.0 2.3750 5.4516 8.5466 11.6461 14.7475 17.8502
100.0 2.3809 5.4652 8.5678 11.6747 14.7834 17.8931
∞ 2.4048 5.5201 8.6537 11.7915 14.9309 18.0711
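Rows of Table A.3 can be reproduced the same way: the roots of αJ1(α) − CJ0(α) = 0 interlace the zeros of J0, which provides brackets for a root finder (a sketch using SciPy; the 1e-9 offsets are arbitrary guards):

```python
# Roots of alpha*J1(alpha) - C*J0(alpha) = 0, bracketed by the zeros of J0.
from scipy.optimize import brentq
from scipy.special import j0, j1, jn_zeros

def bessel_table_roots(C, k=6):
    f = lambda a: a * j1(a) - C * j0(a)
    z0 = jn_zeros(0, k)                        # first k zeros of J0
    brackets = [(1e-9, z0[0] - 1e-9)]
    brackets += [(z0[i] + 1e-9, z0[i + 1] - 1e-9) for i in range(k - 1)]
    return [brentq(f, lo, hi) for lo, hi in brackets]
```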
Appendix B
In this table q = (p/a)^(1/2); a and x are positive real; α, β, γ are unrestricted; k is a finite integer; n is a finite integer or zero; v is a fractional number; 1·2·3···n = n!; 1·3·5···(2n − 1) = (2n − 1)!!; nΓ(n) = Γ(n + 1) = n!; Γ(1) = 0! = 1; Γ(v)Γ(1 − v) = π/sin(vπ); Γ(1/2) = π^(1/2). Below, erfc denotes the complementary error function and i^n erfc its n-times repeated integral (i^0 erfc = erfc).

NO. TRANSFORM → FUNCTION

1. 1/p → 1
2. 1/p² → t
3. 1/p^k → t^(k−1)/(k − 1)!
4. 1/p^(1/2) → 1/(πt)^(1/2)
5. 1/p^(3/2) → 2(t/π)^(1/2)
6. 1/p^(k+1/2) → 2^k t^(k−1/2)/[π^(1/2)(2k − 1)!!]
7. 1/p^v → t^(v−1)/Γ(v)
8. p^(1/2) → −1/[2π^(1/2) t^(3/2)]
9. p^(3/2) → 3/[4π^(1/2) t^(5/2)]
10. p^(k−1/2) → (−1)^k (2k − 1)!!/[2^k π^(1/2) t^(k+1/2)]
11. p^(n−v) → t^(v−n−1)/Γ(v − n)
12. 1/(p + α) → e^(−αt)
13. 1/[(p + α)(p + β)] → [e^(−βt) − e^(−αt)]/(α − β)
14. 1/(p + α)² → t e^(−αt)
15. 1/[(p + α)(p + β)(p + γ)] → [(γ − β)e^(−αt) + (α − γ)e^(−βt) + (β − α)e^(−γt)]/[(α − β)(β − γ)(γ − α)]
16. 1/[(p + α)²(p + β)] → {e^(−βt) − e^(−αt)[1 − (β − α)t]}/(β − α)²
17. 1/(p + α)³ → (1/2)t² e^(−αt)
18. 1/(p + α)^k → t^(k−1) e^(−αt)/(k − 1)!
19. p/[(p + α)(p + β)] → [αe^(−αt) − βe^(−βt)]/(α − β)
20. p/(p + α)² → (1 − αt)e^(−αt)
21. p/[(p + α)(p + β)(p + γ)] → [α(β − γ)e^(−αt) + β(γ − α)e^(−βt) + γ(α − β)e^(−γt)]/[(α − β)(β − γ)(γ − α)]
22. p/[(p + α)²(p + β)] → {[β − α(β − α)t]e^(−αt) − βe^(−βt)}/(β − α)²
23. p/(p + α)³ → t[1 − (1/2)αt]e^(−αt)
24. α/(p² + α²) → sin αt
25. p/(p² + α²) → cos αt
26. α/(p² − α²) → sinh αt
27. p/(p² − α²) → cosh αt
28. e^(−qx) → [x/(2(πat³)^(1/2))] e^(−x²/4at)
29. e^(−qx)/q → (a/πt)^(1/2) e^(−x²/4at)
30. e^(−qx)/p → erfc[x/(2(at)^(1/2))]
31. e^(−qx)/(qp) → 2(at/π)^(1/2) e^(−x²/4at) − x erfc[x/(2(at)^(1/2))]
32. e^(−qx)/p² → [t + x²/(2a)] erfc[x/(2(at)^(1/2))] − x(t/aπ)^(1/2) e^(−x²/4at)
33. e^(−qx)/p^(1+n/2) → (4t)^(n/2) i^n erfc[x/(2(at)^(1/2))]
34. e^(−qx)/p^(3/2) → 2(t/π)^(1/2) e^(−x²/4at) − (x/a^(1/2)) erfc[x/(2(at)^(1/2))]
35. e^(−qx)/(q + β) → (a/πt)^(1/2) e^(−x²/4at) − aβ e^(βx+aβ²t) erfc[x/(2(at)^(1/2)) + β(at)^(1/2)]
36. e^(−qx)/[q(q + β)] → a e^(βx+aβ²t) erfc[x/(2(at)^(1/2)) + β(at)^(1/2)]
37. e^(−qx)/[p(q + β)] → (1/β){erfc[x/(2(at)^(1/2))] − e^(βx+aβ²t) erfc[x/(2(at)^(1/2)) + β(at)^(1/2)]}
38. e^(−qx)/[qp(q + β)] → (1/β){2(at/π)^(1/2) e^(−x²/4at) − (x + 1/β) erfc[x/(2(at)^(1/2))] + (1/β) e^(βx+aβ²t) erfc[x/(2(at)^(1/2)) + β(at)^(1/2)]}
39. e^(−qx)/[q^(n+1)(q + β)] → Σ(m=0 to n−1) (−1)^m β^(−m−1) a^((n+1−m)/2) (4t)^((n−1−m)/2) i^(n−1−m) erfc[x/(2(at)^(1/2))] + (−1)^n a β^(−n) e^(βx+aβ²t) erfc[x/(2(at)^(1/2)) + β(at)^(1/2)]
40. e^(−qx)/(q + β)² → a[1 + β(x + 2aβt)] e^(βx+aβ²t) erfc[x/(2(at)^(1/2)) + β(at)^(1/2)] − 2aβ(at/π)^(1/2) e^(−x²/4at)
41. e^(−qx)/[p(q + β)²] → (1/β²) erfc[x/(2(at)^(1/2))] − (1/β²)[1 − β(x + 2aβt)] e^(βx+aβ²t) erfc[x/(2(at)^(1/2)) + β(at)^(1/2)] − (2/β)(at/π)^(1/2) e^(−x²/4at)
42. e^(−qx)/(p − γ) → (1/2) e^(γt) {e^(−x(γ/a)^(1/2)) erfc[x/(2(at)^(1/2)) − (γt)^(1/2)] + e^(x(γ/a)^(1/2)) erfc[x/(2(at)^(1/2)) + (γt)^(1/2)]}
43. e^(−qx)/[q(p − γ)] → (1/2)(a/γ)^(1/2) e^(γt) {e^(−x(γ/a)^(1/2)) erfc[x/(2(at)^(1/2)) − (γt)^(1/2)] − e^(x(γ/a)^(1/2)) erfc[x/(2(at)^(1/2)) + (γt)^(1/2)]}
44. e^(−qx)/(p − γ)² → (1/2) e^(γt) {[t − x/(2(aγ)^(1/2))] e^(−x(γ/a)^(1/2)) erfc[x/(2(at)^(1/2)) − (γt)^(1/2)] + [t + x/(2(aγ)^(1/2))] e^(x(γ/a)^(1/2)) erfc[x/(2(at)^(1/2)) + (γt)^(1/2)]}
45. e^(−qx)/[(p − γ)(q + β)], γ ≠ aβ² → (1/2) e^(γt) {[a^(1/2)/(a^(1/2)β + γ^(1/2))] e^(−x(γ/a)^(1/2)) erfc[x/(2(at)^(1/2)) − (γt)^(1/2)] + [a^(1/2)/(a^(1/2)β − γ^(1/2))] e^(x(γ/a)^(1/2)) erfc[x/(2(at)^(1/2)) + (γt)^(1/2)]} − [aβ/(aβ² − γ)] e^(βx+aβ²t) erfc[x/(2(at)^(1/2)) + β(at)^(1/2)]
46. e^(x/p) − 1 → (x/t)^(1/2) I1[2(xt)^(1/2)]
47. (1/p) e^(x/p) → I0[2(xt)^(1/2)]
48. (1/p^v) e^(x/p) → (t/x)^((v−1)/2) I(v−1)[2(xt)^(1/2)]
49. K0(qx) → (1/2t) e^(−x²/4at)
50. (1/p^(1/2)) K(2v)(qx) → [1/(2(πt)^(1/2))] e^(−x²/8at) Kv(x²/8at)
51. p^(v/2−1) Kv(qx) → x^(−v) a^(v/2) 2^(v−1) ∫(x²/4at to ∞) e^(−u) u^(v−1) du
52. p^(v/2) Kv(qx) → [x^v/(a^(v/2)(2t)^(v+1))] e^(−x²/4at)
53. [p − (p² − x²)^(1/2)]^v → (v x^v/t) Iv(xt)
54. e^(x[(p+α)^(1/2) − (p+β)^(1/2)]²) − 1 → x(α − β) e^(−(α+β)t/2) I1[(1/2)(α − β)(t(t + 4x))^(1/2)] / (t(t + 4x))^(1/2)
55. e^(x[p − (p+α)^(1/2)(p+β)^(1/2)]) / [(p+α)^(1/2)(p+β)^(1/2)] → e^(−(α+β)(t+x)/2) I0[(1/2)(α − β)(t(t + 2x))^(1/2)]
56. e^(x[(p+α)^(1/2) − (p+β)^(1/2)]²) / {(p+α)^(1/2)(p+β)^(1/2)[(p+α)^(1/2) + (p+β)^(1/2)]^(2v)} → t^(v/2) e^(−(α+β)t/2) Iv[(1/2)(α − β)(t(t + 4x))^(1/2)] / [(α − β)^v (t + 4x)^(v/2)]
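Individual entries of the table are easy to spot-check by evaluating the forward Laplace integral ∫(0 to ∞) f(t) e^(−pt) dt numerically. Here entry 13 is verified at a few values of p (a sketch; the choices α = 2, β = 5 and the test values of p are arbitrary):

```python
# Spot-check entry 13: the inverse of 1/((p+alpha)(p+beta)) is
# (e^{-beta t} - e^{-alpha t})/(alpha - beta).
import math
from scipy.integrate import quad

alpha, beta = 2.0, 5.0
f = lambda t: (math.exp(-beta * t) - math.exp(-alpha * t)) / (alpha - beta)
checks = []
for p in (1.0, 3.0, 10.0):
    F_num, _ = quad(lambda t: f(t) * math.exp(-p * t), 0.0, math.inf)
    checks.append((F_num, 1.0 / ((p + alpha) * (p + beta))))
```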
Author Biography
Dr. Robert G. Watts is the Cornelia and Arthur L. Jung Professor of Mechanical Engineering at Tulane University. He holds a BS (1959) in mechanical engineering from Tulane, an MS (1960) in nuclear engineering from the Massachusetts Institute of Technology, and a PhD (1965) in mechanical engineering from Purdue University. He spent a year as a postdoctoral associate studying atmospheric and ocean science at Harvard University. He has taught advanced applied mathematics and thermal science at Tulane for most of his 43 years of service to that university.
Dr. Watts is the author of Keep Your Eye on the Ball: The Science and Folklore of Baseball (W. H. Freeman) and the editor of Engineering Response to Global Climate Change (CRC Press) and Innovative Energy Strategies for CO2 Stabilization (Cambridge University Press), as well as many papers on global warming, paleoclimatology, energy, and the physics of sport. He is a Fellow of the American Society of Mechanical Engineers.
  • 7. MOBK070-FM MOBKXXX-Sample.cls March 22, 2007 13:6 CONTENTS vii 4.1.1 Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45 4.1.2 Ordinary Points and Series Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46 4.1.3 Lessons: Finding Series Solutions for Differential Equations with Ordinary Points. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .48 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48 4.1.4 Regular Singular Points and the Method of Frobenius. . . . . . . . . . . . . . .49 4.1.5 Lessons: Finding Series Solution for Differential Equations with Regular Singular Points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54 4.1.6 Logarithms and Second Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57 4.2 Bessel Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58 4.2.1 Solutions of Bessel’s Equation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .58 Here are the Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61 4.2.2 Fourier–Bessel Series. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .64 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68 4.3 Legendre Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69 4.4 Associated Legendre Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72 Problems . . . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . . 73 Further Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74 5. Solutions Using Fourier Series and Integrals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75 5.1 Conduction (or Diffusion) Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75 5.1.1 Time-Dependent Boundary Conditions. . . . . . . . . . . . . . . . . . . . . . . . . . . .80 5.2 Vibrations Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88 5.3 Fourier Integrals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89 Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93 Further Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93 6. Integral Transforms: The Laplace Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95 6.1 The Laplace Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95 6.2 Some Important Transforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96 6.2.1 Exponentials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96 6.2.2 Shifting in the s -domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96 6.2.3 Shifting in the Time Domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96 6.2.4 Sine and Cosine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . 97 6.2.5 Hyperbolic Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97 6.2.6 Powers of t: tm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
  • 8. MOBK070-FM MOBKXXX-Sample.cls March 22, 2007 13:6 viii ESSENTIALS OF APPLIED MATHEMATICS FOR SCIENTISTS AND ENGINEERS 6.2.7 Heaviside Step . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99 6.2.8 The Dirac Delta Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100 6.2.9 Transforms of Derivatives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100 6.2.10 Laplace Transforms of Integrals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101 6.2.11 Derivatives of Transforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101 6.3 Linear Ordinary Differential Equations with Constant Coefficients . . . . . . . . . 102 6.4 Some Important Theorems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103 6.4.1 Initial Value Theorem. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .103 6.4.2 Final Value Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103 6.4.3 Convolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103 6.5 Partial Fractions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104 6.5.1 Nonrepeating Roots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104 6.5.2 Repeated Roots. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .107 6.5.3 Quadratic Factors: Complex Roots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109 Further Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110 7. 
Complex Variables and the Laplace Inversion Integral . . . . . . . . . . . . . . . . . . . . . . . . . 111 7.1 Basic Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111 7.1.1 Limits and Differentiation of Complex Variables: Analytic Functions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .115 Integrals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117 7.1.2 The Cauchy Integral Formula. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .118 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120 8. Solutions with Laplace Transforms. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .121 8.1 Mechanical Vibrations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .121 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125 8.2 Diffusion or Conduction Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134 8.3 Duhamel’s Theorem. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .135 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138 Further Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139 9. Sturm–Liouville Transforms. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .141 9.1 A Preliminary Example: Fourier Sine Transform . . . . . . . . . . . . . . . . . . . . . . . . . . 
141 9.2 Generalization: The Sturm–Liouville Transform: Theory . . . . . . . . . . . . . . . . . . 143 9.3 The Inverse Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
  • 9. MOBK070-FM MOBKXXX-Sample.cls March 22, 2007 13:6 CONTENTS ix Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151 Further Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151 10. Introduction to Perturbation Methods. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .153 10.1 Examples from Algebra. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .153 10.1.1 Regular Perturbation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153 10.1.2 Singular Perturbation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155 Appendix A: The Roots of Certain Transcendental Equations. . . . . . . . . . . . . . . . . .159 Appendix B: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165 Author Biography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
CHAPTER 1

Partial Differential Equations in Engineering

1.1 INTRODUCTORY COMMENTS

This book covers the material presented in a course in applied mathematics required of first-year graduate students in the departments of Chemical and Mechanical Engineering at Tulane University. A great deal of material is presented, covering boundary value problems, complex variables, and Fourier transforms, so the depth of coverage is not as extensive as in many books. Our intent in the course is to introduce students to methods of solving linear partial differential equations. Subsequent courses such as conduction, solid mechanics, and fracture mechanics then provide the necessary depth.

The reader will note some similarity to three books originally by R. V. Churchill: Fourier Series and Boundary Value Problems, Complex Variables and Applications, and Operational Mathematics. The first of these has recently been updated by James Ward Brown. The current author greatly admires these works and studied them during his own tenure as a graduate student. The present book is more concise and leaves out some of the proofs in an attempt to present more material in a way that is still useful and acceptable to engineering students.

First we review a few concepts about differential equations in general.

1.2 FUNDAMENTAL CONCEPTS

An ordinary differential equation expresses a dependent variable, say u, as a function of one independent variable, say x, and its derivatives. The order of the differential equation is the order of the highest derivative of the dependent variable. A boundary value problem consists of a differential equation defined on a given range of the independent variable (the domain), along with conditions on the boundary of the domain. In order for the boundary value problem to have a unique solution, the number of boundary conditions must equal the order of the differential equation.
If the differential equation and the boundary conditions contain only terms of first degree in u and its derivatives the problem is linear. Otherwise it is nonlinear.
A partial differential equation expresses a dependent variable, say u, as a function of more than one independent variable, say x, y, and z. Partial derivatives are normally written as ∂u/∂x. This is the first-order derivative of the dependent variable u with respect to the independent variable x. Sometimes we will use the subscript notation ux; when the derivative is an ordinary derivative we use u′. Higher order derivatives are written as ∂²u/∂x² or uxx. The order of a partial differential equation depends on the orders of the derivatives of the dependent variable with respect to each of the independent variables. For example, it may be of order m in the x variable and of order n in the y variable.

A boundary value problem consists of a partial differential equation defined on a domain in the space of the independent variables, for example the x, y, z space, along with conditions on the boundary. Once again, if the partial differential equation and the boundary conditions contain only terms of first degree in u and its derivatives, the problem is linear. Otherwise it is nonlinear. A differential equation or a boundary condition is homogeneous if it contains only terms involving the dependent variable.

Examples

Consider the ordinary differential equation

a(x)u″ + b(x)u′ = c(x), 0 < x < A. (1.1)

Two boundary conditions are required because the order of the equation is 2. Suppose

u(0) = 0 and u(A) = 1. (1.2)

The problem is linear. If c(x) is not zero the differential equation is nonhomogeneous. The first boundary condition is homogeneous, but the second boundary condition is nonhomogeneous.

Next consider the ordinary differential equation

a(u)u″ + b(x)u′ = c, 0 < x < A. (1.3)

Again two boundary conditions are required.
Regardless of the forms of the boundary conditions, the problem is nonlinear because the first term in the differential equation is not of first degree in u and u″, since the leading coefficient is a function of u. It is homogeneous only if c = 0.

Now consider the following three partial differential equations:

ux + uxx + uxy = 1 (1.4)

uxx + uyy + uzz = 0 (1.5)

uux + uyy = 1 (1.6)
The first equation is linear and nonhomogeneous. The third term is a mixed partial derivative. Since the equation is of second order in x, two boundary conditions are necessary on x. It is of first order in y, so only one boundary condition is required on y. The second equation is linear and homogeneous and is of second order in all three variables. The third equation is nonlinear because the first term is not of first degree in u and ux. It is of order 1 in x and order 2 in y. In this book we consider only linear equations. We will now derive the partial differential equations that describe some of the physical phenomena that are common in engineering science.

Problems

Tell whether the following are linear or nonlinear and give the order in each of the independent variables:

u″ + xu′ + u² = 0
tan(y)uy + uyy = 0
tan(u)uy + 3u = 0
uyyy + uyx + u = 0

1.3 THE HEAT CONDUCTION (OR DIFFUSION) EQUATION

1.3.1 Rectangular Cartesian Coordinates

The conduction of heat is only one example of the diffusion equation. There are many other important problems involving the diffusion of one substance in another. One example is the diffusion of one gas into another if both gases are motionless on the macroscopic level (no convection). The diffusion of heat in a motionless material is governed by Fourier's law, which states that heat is conducted per unit area in the negative direction of the temperature gradient; in the (vector) direction n the flux is

qn = −k ∂u/∂n (1.7)

where qn denotes the heat flux in the n direction (not the nth power). In this equation u is the local temperature and k is the thermal conductivity of the material. Alternatively u could be the concentration of a diffusing material in a host material and k the diffusivity of the diffusing material relative to the host material.

Consider the diffusion of heat in two dimensions in rectangular Cartesian coordinates. Fig.
1.1 shows an element of the material of dimensions Δx by Δy by Δz. The material has a specific heat c and a density ρ. Heat is generated in the material at a rate q per unit volume. Performing a heat balance on the element, the time (t) rate of change of thermal energy within the element, ρc Δx Δy Δz ∂u/∂t, is equal to the rate of heat generated within the element
FIGURE 1.1: An element in three-dimensional rectangular Cartesian coordinates

q Δx Δy Δz minus the rate at which heat is conducted out of the material. The flux of heat conducted into the element at the x face is denoted by qx, while at the y face it is denoted by qy. At x + Δx the heat flux (i.e., per unit area) leaving the element in the x direction is qx + Δqx, while at y + Δy the heat flux leaving in the y direction is qy + Δqy. Similarly for qz. Expanding the latter three terms in Taylor series, we find that

qx + Δqx = qx + (∂qx/∂x)Δx + (1/2)(∂²qx/∂x²)(Δx)² + terms of order (Δx)³ or higher.

Similar expressions are obtained for qy + Δqy and qz + Δqz. Completing the heat balance,

ρc Δx Δy Δz ∂u/∂t = q Δx Δy Δz + qx Δy Δz + qy Δx Δz + qz Δx Δy
− (qx + (∂qx/∂x)Δx + (1/2)(∂²qx/∂x²)(Δx)² + · · ·) Δy Δz
− (qy + (∂qy/∂y)Δy + (1/2)(∂²qy/∂y²)(Δy)² + · · ·) Δx Δz (1.8)
− (qz + (∂qz/∂z)Δz + (1/2)(∂²qz/∂z²)(Δz)² + · · ·) Δx Δy

The terms qx Δy Δz, qy Δx Δz, and qz Δx Δy cancel. Taking the limit as Δx, Δy, and Δz approach zero, noting that the terms multiplied by (Δx)², (Δy)², and (Δz)² may be neglected, dividing through by Δx Δy Δz, and noting that according to Fourier's law qx = −k ∂u/∂x, qy = −k ∂u/∂y, and qz = −k ∂u/∂z, we obtain the time-dependent heat conduction equation in two-dimensional rectangular Cartesian coordinates:

ρc ∂u/∂t = k(∂²u/∂x² + ∂²u/∂y²) + q (1.9)

The equation is first order in t and second order in both x and y. If the property values ρ, c, and k and the heat generation rate per unit volume q are independent of the dependent
variable (temperature), the partial differential equation is linear. If q is zero, the equation is homogeneous. It is easy to see that if a third dimension, z, were included, the term k ∂²u/∂z² would have to be added to the right-hand side of the above equation.

1.3.2 Cylindrical Coordinates

FIGURE 1.2: An element in cylindrical coordinates

A small element of volume r Δθ Δr Δz is shown in Fig. 1.2. The method of developing the diffusion equation in cylindrical coordinates is much the same as for rectangular coordinates, except that the heat conducted into and out of the element depends on the area as well as the heat flux as given by Fourier's law, and this area varies in the r-direction. Hence the heat conducted into the element at r is qr r Δθ Δz, while the heat conducted out of the element at r + Δr is qr r Δθ Δz + ∂(qr r Δθ Δz)/∂r (Δr) when terms of order (Δr)² are neglected as Δr approaches zero. In the z- and θ-directions the area does not change. Following the same procedure as in the discussion of rectangular coordinates, expanding the heat fluxes on the three faces in Taylor series and neglecting terms of order (Δθ)² and (Δz)² and higher,

ρc r Δθ Δr Δz ∂u/∂t = −∂(qr r Δθ Δz)/∂r Δr − ∂(qθ Δr Δz)/∂θ Δθ − ∂(qz r Δθ Δr)/∂z Δz + q r Δθ Δr Δz (1.10)
Dividing through by the volume, we find after using Fourier's law for the heat fluxes

ρc ∂u/∂t = k[(1/r) ∂(r ∂u/∂r)/∂r + (1/r²) ∂²u/∂θ² + ∂²u/∂z²] + q (1.11)

1.3.3 Spherical Coordinates

FIGURE 1.3: An element in spherical coordinates

An element in a spherical coordinate system is shown in Fig. 1.3. The volume of the element is (r sin θ Δφ)(r Δθ)(Δr) = r² sin θ Δr Δθ Δφ. The net heat flows out of the element in the r, θ, and φ directions are respectively

qr r² sin θ Δθ Δφ (1.12)

qθ r sin θ Δφ Δr (1.13)

qφ r Δθ Δr (1.14)

It is left as an exercise for the student to show that

ρc ∂u/∂t = k[(1/r²) ∂(r² ∂u/∂r)/∂r + (1/(r² sin²θ)) ∂²u/∂φ² + (1/(r² sin θ)) ∂(sin θ ∂u/∂θ)/∂θ] + q (1.15)

The Laplacian Operator

The linear operator on the right-hand side of the heat equation is often referred to as the Laplacian operator and is written as ∇².
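As a quick sanity check on these coordinate forms, the cylindrical expression in Eq. (1.11) (with q = 0 and the factor k removed) must agree with the Cartesian Laplacian for any smooth function. The following minimal numerical sketch verifies this with central finite differences; the test function u = x²y + z³ (whose Laplacian is 2y + 6z), the evaluation point, and the step size h are arbitrary illustrative choices, not from the text:

```python
import math

def u_cart(x, y, z):
    # test function: u = x^2 y + z^3, so the Laplacian is 2y + 6z
    return x * x * y + z ** 3

def u_cyl(r, th, z):
    # the same function expressed in cylindrical coordinates
    return u_cart(r * math.cos(th), r * math.sin(th), z)

h = 1e-4

def d2(f, a):
    # central second difference
    return (f(a + h) - 2.0 * f(a) + f(a - h)) / h ** 2

x, y, z = 1.2, 0.7, -0.4
r, th = math.hypot(x, y), math.atan2(y, x)

lap_cart = (d2(lambda s: u_cart(s, y, z), x)
            + d2(lambda s: u_cart(x, s, z), y)
            + d2(lambda s: u_cart(x, y, s), z))

def r_du_dr(rr):
    # r * du/dr via a central first difference
    return rr * (u_cyl(rr + h, th, z) - u_cyl(rr - h, th, z)) / (2.0 * h)

# cylindrical form: (1/r) d/dr(r du/dr) + (1/r^2) u_thth + u_zz
lap_cyl = ((r_du_dr(r + h) - r_du_dr(r - h)) / (2.0 * h) / r
           + d2(lambda s: u_cyl(r, s, z), th) / r ** 2
           + d2(lambda s: u_cyl(r, th, s), z))

exact = 2.0 * y + 6.0 * z   # = -1.0 at the chosen point
assert abs(lap_cart - exact) < 1e-4
assert abs(lap_cyl - exact) < 1e-4
```

The same kind of check works for the spherical form in Eq. (1.15) with u expressed in terms of r, θ, and φ.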
1.3.4 Boundary Conditions

Four types of boundary conditions are common in conduction problems:

a) Heat flux prescribed, in which case k ∂u/∂n is given.
b) Heat flux zero (perhaps just a special case of (a)), in which case ∂u/∂n is zero.
c) Temperature u prescribed.
d) Convection at the boundary, in which case k ∂u/∂n = h(U − u).

Here n is a length in the direction normal to the surface, U is the temperature of the fluid next to the surface that is heating or cooling the surface, and h is the coefficient of convective heat transfer. Condition (d) is sometimes called Newton's law of cooling.

1.4 THE VIBRATING STRING

FIGURE 1.4: An element of a vibrating string

Next we consider a tightly stretched string on some interval of the x-axis. The string is vibrating about its equilibrium position, so that its departure from equilibrium is y(t, x). The string is assumed to be perfectly flexible with mass per unit length ρ. Fig. 1.4 shows a portion of such a string that has been displaced upward. We assume that the magnitude of the tension in the string is constant; however, the direction of the tension vector along the string varies. The tangent of the angle α(t, x) that the string makes with the horizontal is given by the slope of the string, ∂y/∂x:

V(x)/H = tan α(t, x) = ∂y/∂x (1.16)

where V is the vertical and H the horizontal component of the tension. If we assume that the angle α is small, then the horizontal tension force is nearly equal to the magnitude of the tension vector itself. In this case the tangent of the slope of the wire
at x + Δx is

V(x + Δx)/H = tan α(x + Δx) = ∂y/∂x(x + Δx). (1.17)

The vertical force V is then given by H ∂y/∂x. The net vertical force is the difference between the vertical forces at x and x + Δx, and must be equal to the mass times the acceleration of that portion of the string. The mass is ρΔx and the acceleration is ∂²y/∂t². Thus

ρΔx ∂²y/∂t² = H[∂y/∂x(x + Δx) − ∂y/∂x(x)] (1.18)

Expanding ∂y/∂x(x + Δx) in a Taylor series about Δx = 0 and neglecting terms of order (Δx)² and higher, we find that

ρ ytt = H yxx (1.19)

which is the wave equation. Usually it is presented as

ytt = a² yxx (1.20)

where a² = H/ρ is a wave speed term. Had we included the weight of the string there would have been an extra term on the right-hand side of this equation representing the downward pull of gravity. Had we included a damping force proportional to the velocity of the string, another negative term would result:

ρ ytt = H yxx − b yt − ρg (1.21)

1.4.1 Boundary Conditions

The partial differential equation is linear, and if the gravity term is included it is nonhomogeneous. It is second order in both t and x, and requires two boundary conditions (initial conditions) on t and two boundary conditions on x. The two conditions on t normally specify the initial displacement and the initial velocity. The conditions on x normally specify the conditions at the ends of the string, i.e., at x = 0 and x = L.

1.5 VIBRATING MEMBRANE

The partial differential equation describing the motion of a vibrating membrane is simply the extension of the vibrating-string equation to two space dimensions. Thus,

ρ ytt + b yt = −ρg + H∇²y (1.22)

In this equation, ρ is the density per unit area and ∇²y is the Laplacian operator in either rectangular or cylindrical coordinates.
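Equation (1.20) can be checked numerically: any smooth profile translating at speed a, y = f(x − at), satisfies ytt = a²yxx. A short finite-difference sketch follows; the Gaussian profile, the value of a, and the evaluation point are arbitrary illustrative choices, not from the text:

```python
import math

a = 2.0  # assumed wave speed, a^2 = H/rho

def f(s):
    # an arbitrary smooth profile
    return math.exp(-s * s)

def y(t, x):
    # a right-travelling wave
    return f(x - a * t)

h = 1e-4

def d2(g, p):
    # central second difference
    return (g(p + h) - 2.0 * g(p) + g(p - h)) / h ** 2

t0, x0 = 0.3, 0.8
ytt = d2(lambda t: y(t, x0), t0)
yxx = d2(lambda x: y(t0, x), x0)
assert abs(ytt - a * a * yxx) < 1e-3   # the wave equation holds
```

A left-travelling wave y = g(x + at), or any sum of the two, passes the same check, which is the content of d'Alembert's general solution.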
1.6 LONGITUDINAL DISPLACEMENTS OF AN ELASTIC BAR

The longitudinal displacements of an elastic bar are described by Eq. (1.20), except that in this case a² = E/ρ, where ρ is the density and E is Young's modulus.

FURTHER READING

V. Arpaci, Conduction Heat Transfer. Reading, MA: Addison-Wesley, 1966.

J. W. Brown and R. V. Churchill, Fourier Series and Boundary Value Problems, 6th edition. New York: McGraw-Hill, 2001.

P. V. O'Neil, Advanced Engineering Mathematics, 5th edition. Pacific Grove, CA: Brooks/Cole-Thomson Learning, 2003.
CHAPTER 2

The Fourier Method: Separation of Variables

In this chapter we will work through a few example problems in order to introduce the general idea of separation of variables and the concept of orthogonal functions before moving on to a more complete discussion of orthogonal function theory. We will also introduce the concepts of nondimensionalization and normalization. The goal here is to use the three theorems stated below to walk the student through the solution of several types of problems using separation of variables, and to learn some early lessons on how to apply the method without getting too deeply into the details that will be covered later, especially in Chapter 3.

We state here without proof three fundamental theorems that will be useful in finding series solutions to partial differential equations.

Theorem 2.1. Linear Superposition: If a group of functions un, n = m through n = M, are all solutions to some linear differential equation, then the linear combination Σ cnun (n = m, . . ., M) is also a solution.

Theorem 2.2. Orthogonal Functions: Certain sets of functions Φn defined on the interval (a, b) possess the property that

∫(a to b) Φn Φm dx = constant, n = m
∫(a to b) Φn Φm dx = 0, n ≠ m
These are called orthogonal functions. Examples are the sine and cosine functions. This idea is discussed fully in Chapter 3, particularly in connection with Sturm–Liouville equations.

Theorem 2.3. Fourier Series: A piecewise continuous function f(x) defined on (a, b) can be represented by a series of orthogonal functions Φn(x) on that interval as

f(x) = ∑(n = 0 to ∞) An Φn(x)

where

An = ∫(a to b) f(x) Φn(x) dx / ∫(a to b) Φn(x) Φn(x) dx

These properties will be used in the following examples to introduce the idea of solving partial differential equations by separation of variables.

2.1 HEAT CONDUCTION

We will first examine how Theorems 2.1, 2.2, and 2.3 are systematically used to obtain solutions to problems in heat conduction in the form of infinite series. We set out the methodology in detail, step by step, with comments on lessons learned in each case. We will see that the mathematics often serves as a guide, telling us when we make a bad assumption about solution forms.

Example 2.1. A Transient Heat Conduction Problem

Consider a flat plate occupying the space between x = 0 and x = L. The plate stretches out in the y and z directions far enough that variations in temperature in those directions may be neglected. Initially the plate is at a uniform temperature u0. At time t = 0+ the wall at x = 0 is raised to u1 while the wall at x = L is insulated. The boundary value problem is then

ρc ut = k uxx, 0 < x < L, t > 0 (2.1)

u(t, 0) = u1
ux(t, L) = 0 (2.2)
u(0, x) = u0

2.1.1 Scales and Dimensionless Variables

When it is possible it is always a good idea to write both the independent and dependent variables in such a way that they range from zero to unity. In the next few problems we shall show how this can often be done.
We first note that the problem has a fundamental length scale, so that if we define another space variable ξ = x/L, the partial differential equation can be written as

ρc ut = (k/L²) uξξ, 0 < ξ < 1, t > 0 (2.3)

Next we note that if we define a dimensionless time-like variable as τ = αt/L², where α = k/ρc is called the thermal diffusivity, we find

uτ = uξξ (2.4)

We now proceed to nondimensionalize and normalize the dependent variable and the boundary conditions. We define a new variable

U = (u − u1)/(u0 − u1) (2.5)

Note that this variable is always between 0 and 1 and is dimensionless. Our boundary value problem is now devoid of constants:

Uτ = Uξξ (2.6)

U(τ, 0) = 0
Uξ(τ, 1) = 0 (2.7)
U(0, ξ) = 1

All but one of the boundary conditions are homogeneous. This will prove necessary in our analysis.

2.1.2 Separation of Variables

Begin by assuming U = Ψ(τ)Φ(ξ). Inserting this into the differential equation, we obtain

Φ(ξ)Ψτ(τ) = Ψ(τ)Φξξ(ξ). (2.8)

Next divide both sides by U = ΨΦ:

Ψτ/Ψ = Φξξ/Φ = ±λ² (2.9)

The left-hand side of the above equation is a function of τ only, while the right-hand side is a function only of ξ. This can only be true if both are constants, since they are equal to each other. λ² is always positive, but we must decide whether to use the plus sign or the minus sign. We now have two ordinary differential equations instead of one partial differential equation. Solving for Ψ gives a constant times either exp(−λ²τ) or exp(+λ²τ). Since we know that U is always between 0 and 1, we see immediately that we must choose the minus sign. The second ordinary
differential equation is

Φξξ = −λ²Φ (2.10)

and we deduce that the two homogeneous boundary conditions are

Φ(0) = 0
Φξ(1) = 0 (2.11)

Solving the differential equation, we find

Φ = A cos(λξ) + B sin(λξ) (2.12)

where A and B are constants to be determined. The first boundary condition requires that A = 0. The second boundary condition requires that either B = 0 or cos(λ) = 0. Since the former cannot be true (U is not zero!) the latter must be true. λ can take on any of an infinite number of values λn = (2n − 1)π/2, where n is an integer. Equation (2.10) together with boundary conditions (2.11) is called a Sturm–Liouville problem. The solutions are called eigenfunctions and the λn are called eigenvalues. A full discussion of Sturm–Liouville theory will be presented in Chapter 3. Hence the apparent solution to our partial differential equation is any one of the following:

Un = Bn exp[−(2n − 1)²π²τ/4] sin[π(2n − 1)ξ/2]. (2.13)

2.1.3 Superposition

Linear differential equations possess the important property that if each solution Un satisfies the differential equation and the boundary conditions, then the linear combination

∑(n = 1 to ∞) Bn exp[−(2n − 1)²π²τ/4] sin[π(2n − 1)ξ/2] = ∑(n = 1 to ∞) Un (2.14)

also satisfies them, as stated in Theorem 2.1. Can we build this into a solution that satisfies the one remaining boundary condition? The final condition (the nonhomogeneous initial condition) states that

1 = ∑(n = 1 to ∞) Bn sin[π(2n − 1)ξ/2] (2.15)

This is called a Fourier sine series representation of 1. The topic of Fourier series is further discussed in Chapter 3.
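Each separated solution (2.13) can be verified numerically against the dimensionless problem (2.6)–(2.7): it satisfies Uτ = Uξξ, vanishes at ξ = 0, and has zero slope at ξ = 1. A minimal finite-difference sketch follows; the particular index n, the values of τ and ξ, and the step size are arbitrary illustrative choices, not from the text:

```python
import math

def lam(n):
    # eigenvalues lambda_n = (2n - 1) pi / 2
    return (2 * n - 1) * math.pi / 2

def U_n(tau, xi, n):
    # one separated solution, Eq. (2.13), taken with B_n = 1
    return math.exp(-lam(n) ** 2 * tau) * math.sin(lam(n) * xi)

h = 1e-4

def d1(g, p):
    # central first difference
    return (g(p + h) - g(p - h)) / (2 * h)

def d2(g, p):
    # central second difference
    return (g(p + h) - 2 * g(p) + g(p - h)) / h ** 2

tau0, xi0, n = 0.05, 0.37, 3
# the heat equation in dimensionless form: U_tau = U_xixi
assert abs(d1(lambda t: U_n(t, xi0, n), tau0)
           - d2(lambda x: U_n(tau0, x, n), xi0)) < 1e-3
# boundary conditions: U(tau, 0) = 0 and U_xi(tau, 1) = 0
assert abs(U_n(tau0, 0.0, n)) < 1e-12
assert abs(d1(lambda x: U_n(tau0, x, n), 1.0)) < 1e-6
```

By Theorem 2.1 any finite linear combination of such U_n passes the same checks, which is what makes the series (2.14) possible.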
2.1.4 Orthogonality

It may seem hopeless at this point when we see that we need to find an infinite number of constants Bn. What saves us is a concept called orthogonality (to be discussed in a more general way in Chapter 3). The functions sin[π(2n − 1)ξ/2] form an orthogonal set on the interval 0 < ξ < 1, which means that

∫(0 to 1) sin[π(2n − 1)ξ/2] sin[π(2m − 1)ξ/2] dξ = 0 when m ≠ n
= 1/2 when m = n (2.16)

Hence if we multiply both sides of the final equation by sin[π(2m − 1)ξ/2] dξ and integrate over the interval, we find that all of the terms in which m ≠ n are zero, and we are left with one term, the general term for the nth coefficient Bn:

Bn = 2 ∫(0 to 1) sin[π(2n − 1)ξ/2] dξ = 4/[π(2n − 1)] (2.17)

Thus

U = ∑(n = 1 to ∞) {4/[π(2n − 1)]} exp[−π²(2n − 1)²τ/4] sin[π(2n − 1)ξ/2] (2.18)

satisfies both the partial differential equation and the boundary and initial conditions, and is therefore a solution of the boundary value problem.

2.1.5 Lessons

We began by assuming a solution that was the product of two functions, each a function of only one of the independent variables. Each of the resulting ordinary differential equations was then solved. The two homogeneous boundary conditions were used to evaluate one of the constant coefficients and the separation constant λ, which was found to have an infinite number of values. These are called eigenvalues, and the resulting functions sin(λnξ) are called eigenfunctions. Linear superposition was then used to build a solution in the form of an infinite series. The infinite series was then required to satisfy the initial condition, the only nonhomogeneous condition. The coefficients of the series were determined using orthogonality, as stated in Theorem 2.3, resulting in a Fourier series. Each of these concepts will be discussed further in Chapter 3. For now we state that many important functions are members of orthogonal sets.
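The orthogonality relation (2.16), the coefficient formula (2.17), and the behavior of the series (2.18) at τ = 0 can all be confirmed numerically. A short sketch follows; the use of Simpson's rule, the particular indices, the evaluation point ξ = 0.5, and the truncation at 2000 terms are arbitrary illustrative choices:

```python
import math

def simpson(f, a, b, n=2000):
    # composite Simpson's rule with n (even) subintervals
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def phi(n, xi):
    # eigenfunctions sin(pi (2n - 1) xi / 2)
    return math.sin(math.pi * (2 * n - 1) * xi / 2)

# Eq. (2.16): orthogonality on (0, 1)
assert abs(simpson(lambda x: phi(2, x) * phi(5, x), 0, 1)) < 1e-9
assert abs(simpson(lambda x: phi(3, x) ** 2, 0, 1) - 0.5) < 1e-9

# Eq. (2.17): B_n = 2 * integral of phi_n = 4 / (pi (2n - 1))
for n in (1, 2, 7):
    Bn = 2 * simpson(lambda x: phi(n, x), 0, 1)
    assert abs(Bn - 4 / (math.pi * (2 * n - 1))) < 1e-9

# Eq. (2.18) at tau = 0 reproduces the initial condition U = 1
U0 = sum(4 / (math.pi * (2 * n - 1)) * phi(n, 0.5)
         for n in range(1, 2001))
assert abs(U0 - 1) < 1e-2
```

The last check illustrates the slow (O(1/N)) convergence of the Fourier series toward the discontinuous initial data, which Problem 3 below asks the reader to explore graphically.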
The method would not have worked had the differential equation not been homogeneous. (Try it.) It also would not have worked if more than one boundary condition had been nonhomogeneous. We will see how to get around these problems shortly.

Problems
1. Equation (2.9) could just as easily have been written as

   T_τ/T = Θ_ξξ/Θ = +λ²

   Show two reasons why this would reduce to the trivial solution or a solution for which U approaches infinity as τ approaches infinity, and that therefore the minus sign must be chosen.
2. Solve the above problem with boundary conditions U_ξ(τ, 0) = 0 and U(τ, 1) = 0 using the steps given above. Hint: cos(nπx) is an orthogonal set on (0, 1). The result will be a Fourier cosine series representation of 1.
3. Plot U versus ξ for τ = 0.001, 0.01, and 0.1 in Eq. (2.18). Comment.

Example 2.2. A Steady Heat Transfer Problem in Two Dimensions
Heat is conducted in a region of height a and width b. Temperature is a function of two space dimensions and independent of time. Three sides are at temperature u_0 and the fourth side is at temperature u_1. The formulation is as follows:

∂²u/∂x² + ∂²u/∂y² = 0    (2.19)

with boundary conditions

u(0, x) = u(b, x) = u(y, a) = u_0
u(y, 0) = u_1    (2.20)

2.1.6 Scales and Dimensionless Variables
First note that there are two obvious length scales, a and b. We can choose either one of them to nondimensionalize x and y. We define

ξ = x/a and η = y/b    (2.21)

so that both dimensionless lengths are normalized.
To normalize temperature we choose

U = (u − u_0)/(u_1 − u_0)    (2.22)

The problem statement reduces to

U_ξξ + (a/b)² U_ηη = 0    (2.23)

U(0, ξ) = U(1, ξ) = U(η, 1) = 0
U(η, 0) = 1    (2.24)

2.1.7 Separation of Variables
As before, we assume a solution of the form U(ξ, η) = X(ξ)Y(η). We substitute this into the differential equation and obtain

Y(η) X_ξξ(ξ) = −X(ξ) (a/b)² Y_ηη(η)    (2.25)

Next we divide both sides by U(ξ, η) = X(ξ)Y(η) and obtain

X_ξξ/X = −(a/b)² Y_ηη/Y = ±λ²    (2.26)

In order for the function only of ξ on the left-hand side of this equation to be equal to the function only of η on the right-hand side, both must be constant.

2.1.8 Choosing the Sign of the Separation Constant
However in this case it is not as clear as in Example 2.1 what the sign of this constant must be. Hence we have designated the constant as ±λ², so that for real values of λ the ± sign determines the sign of the constant. Let us proceed by choosing the negative sign and see where this leads. Thus

X_ξξ = −λ²X
Y(η)X(0) = 1
Y(η)X(1) = 0    (2.27)

or

X(0) = 1
X(1) = 0    (2.28)
and

Y_ηη = ∓(b/a)²λ²Y    (2.29)

X(ξ)Y(0) = X(ξ)Y(1) = 0, so that
Y(0) = Y(1) = 0    (2.30)

With the negative sign chosen in (2.26), the solution of the differential equation in the η direction is

Y(η) = A cosh(bλη/a) + B sinh(bλη/a)    (2.31)

Applying the first boundary condition (at η = 0) we find that A = 0. When we apply the boundary condition at η = 1, however, we find that it requires that

0 = B sinh(bλ/a)    (2.32)

so that either B = 0 or λ = 0. Neither of these is acceptable since either would require that Y(η) = 0 for all values of η. We next try the positive sign. In this case

X_ξξ = λ²X    (2.33)

Y_ηη = −(b/a)²λ²Y    (2.34)

with the same boundary conditions given above. The solution for Y(η) is now

Y(η) = A cos(bλη/a) + B sin(bλη/a)    (2.35)

The boundary condition at η = 0 requires that

0 = A cos(0) + B sin(0)    (2.36)

so that again A = 0. The boundary condition at η = 1 requires that

0 = B sin(bλ/a)    (2.37)

Since we don't want B to be zero, we can satisfy this condition if

λ_n = anπ/b, n = 1, 2, 3, . . .    (2.38)

(n = 0 gives only the trivial solution). Thus

Y(η) = B sin(nπη)    (2.39)
Solution for X(ξ) yields hyperbolic functions:

X(ξ) = C cosh(λ_n ξ) + D sinh(λ_n ξ)    (2.40)

The boundary condition at ξ = 1 requires that

0 = C cosh(λ_n) + D sinh(λ_n)    (2.41)

or, solving for C in terms of D,

C = −D tanh(λ_n)    (2.42)

One solution of our problem is therefore

U_n(ξ, η) = K_n sin(nπη)[sinh(anπξ/b) − cosh(anπξ/b) tanh(anπ/b)]    (2.43)

2.1.9 Superposition
According to the superposition theorem (Theorem 2) we can now form a solution as

U(ξ, η) = Σ_{n=1}^∞ K_n sin(nπη)[sinh(anπξ/b) − cosh(anπξ/b) tanh(anπ/b)]    (2.44)

The final boundary condition (the nonhomogeneous one) can now be applied:

1 = −Σ_{n=1}^∞ K_n sin(nπη) tanh(anπ/b)    (2.45)

2.1.10 Orthogonality
We have already noted that the sine function is orthogonal on (0, 1). Thus, we multiply both sides of this equation by sin(mπη) dη and integrate over (0, 1), noting that according to the orthogonality theorem (Theorem 3) the integral is zero unless n = m. The result is

∫₀¹ sin(nπη) dη = −K_n tanh(anπ/b) ∫₀¹ sin²(nπη) dη    (2.46)

(1/nπ)[1 − (−1)ⁿ] = −K_n tanh(anπ/b) (1/2)    (2.47)

K_n = −2[1 − (−1)ⁿ]/[nπ tanh(anπ/b)]    (2.48)
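Combining (2.44) and (2.48), the series can be summed numerically. One caution: for large n the bracket sinh(anπξ/b) − cosh(anπξ/b) tanh(anπ/b) equals −sinh[anπ(1 − ξ)/b]/cosh(anπ/b) identically, and only that ratio form, written with decaying exponentials, avoids floating-point overflow. A Python sketch (not from the text; the truncation count and the square geometry a = b = 1 are arbitrary choices):

```python
import math

def U(xi, eta, a=1.0, b=1.0, terms=499):
    # Truncated series from Eqs. (2.44) and (2.48); even-n coefficients vanish.
    # -K_n * [sinh - cosh*tanh] = (4/(n pi tanh(l))) * sinh(l(1-xi))/cosh(l),
    # with l = a n pi / b; the ratio is computed in exponential form.
    total = 0.0
    for n in range(1, terms + 1, 2):
        lam = a * n * math.pi / b
        ratio = math.exp(-lam * xi) * (1.0 - math.exp(-2.0 * lam * (1.0 - xi))) \
                / (1.0 + math.exp(-2.0 * lam))
        total += 4.0 / (n * math.pi * math.tanh(lam)) * math.sin(n * math.pi * eta) * ratio
    return total

assert abs(U(0.0, 0.5) - 1.0) < 0.01   # nonhomogeneous boundary recovered
assert U(1.0, 0.5) == 0.0              # homogeneous boundary exact
assert 0.0 < U(0.5, 0.5) < 1.0         # interior value bounded by the wall temperatures
```

At ξ = 0 the ratio reduces to tanh(λ_n), so the series collapses to the Fourier sine series of 1, as it should.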
The solution is represented by the infinite series

U(ξ, η) = Σ_{n=1}^∞ {2[1 − (−1)ⁿ]/[nπ tanh(anπ/b)]} sin(nπη) × [cosh(anπξ/b) tanh(anπ/b) − sinh(anπξ/b)]    (2.49)

2.1.11 Lessons
The methodology for this problem is the same as in Example 2.1.

Example 2.3. A Steady Conduction Problem in Two Dimensions: Addition of Solutions
We now illustrate a problem in which two of the boundary conditions are nonhomogeneous. Since the differential equation and the boundary conditions are both linear we can simply break the problem into two problems and add them. Consider steady conduction in a square region L by L in size. Two sides are at temperature u_0 while the other two sides are at temperature u_1.

u_xx + u_yy = 0    (2.50)

We need four boundary conditions since the differential equation is of order 2 in both independent variables:

u(0, y) = u(L, y) = u_0    (2.51)
u(x, 0) = u(x, L) = u_1    (2.52)

2.1.12 Scales and Dimensionless Variables
The length scale is L, so we let ξ = x/L and η = y/L. We can make the first two boundary conditions homogeneous while normalizing the second two by defining a dimensionless temperature as

U = (u − u_0)/(u_1 − u_0)    (2.53)

Then

U_ξξ + U_ηη = 0    (2.54)
U(0, η) = U(1, η) = 0    (2.55)
U(ξ, 0) = U(ξ, 1) = 1    (2.56)

2.1.13 Getting to One Nonhomogeneous Condition
There are two nonhomogeneous boundary conditions, so we must find a way to have only one. Let U = V + W so that we have two problems, each with one nonhomogeneous boundary
condition:

W_ξξ + W_ηη = 0    (2.57)
W(0, η) = W(1, η) = W(ξ, 0) = 0
W(ξ, 1) = 1    (2.58)

V_ξξ + V_ηη = 0    (2.59)
V(0, η) = V(1, η) = V(ξ, 1) = 0
V(ξ, 0) = 1    (2.60)

(It should be clear that these two problems are identical if we put V(ξ, η) = W(ξ, 1 − η). We will therefore only need to solve for W.)

2.1.14 Separation of Variables
Separate variables by letting W(ξ, η) = P(ξ)Q(η):

P_ξξ/P = −Q_ηη/Q = ±λ²    (2.61)

2.1.15 Choosing the Sign of the Separation Constant
Once again it is not immediately clear whether to choose the plus sign or the minus sign. Let's see what happens if we choose the plus sign:

P_ξξ = λ²P    (2.62)

The solution is exponentials or hyperbolic functions:

P = A sinh(λξ) + B cosh(λξ)    (2.63)

Applying the boundary condition on ξ = 0, we find that B = 0. The boundary condition on ξ = 1 requires that A sinh(λ) = 0, which can only be satisfied if A = 0 or λ = 0. Either yields the trivial solution W = 0 and is unacceptable. The only hope for a solution is thus choosing the minus sign. If we choose the minus sign in Eq. (2.61) then

P_ξξ = −λ²P    (2.64)
Q_ηη = λ²Q    (2.65)

with solutions

P = A sin(λξ) + B cos(λξ)    (2.66)
and

Q = C sinh(λη) + D cosh(λη)    (2.67)

respectively. Remembering to apply the homogeneous boundary conditions first, we find that W(0, η) = 0 requires B = 0, and W(1, η) = 0 requires sin(λ) = 0. Thus λ = nπ, our eigenvalues corresponding to the eigenfunctions sin(nπξ). The last homogeneous boundary condition is W(ξ, 0) = 0, which requires that D = 0. There are an infinite number of solutions of the form

P_n Q_n = K_n sinh(nπη) sin(nπξ)    (2.68)

2.1.16 Superposition
Since our problem is linear we apply superposition:

W = Σ_{n=1}^∞ K_n sinh(nπη) sin(nπξ)    (2.69)

Applying the final boundary condition, W(ξ, 1) = 1,

1 = Σ_{n=1}^∞ K_n sinh(nπ) sin(nπξ)    (2.70)

2.1.17 Orthogonality
Multiplying both sides of Eq. (2.70) by sin(mπξ) and integrating over the interval (0, 1),

∫₀¹ sin(mπξ) dξ = Σ_{n=1}^∞ K_n sinh(nπ) ∫₀¹ sin(nπξ) sin(mπξ) dξ    (2.71)

The orthogonality property of the sine eigenfunction states that

∫₀¹ sin(nπξ) sin(mπξ) dξ = 0, m ≠ n
                          = 1/2, m = n    (2.72)

Thus, since ∫₀¹ sin(nπξ) dξ = [1 − (−1)ⁿ]/(nπ),

K_n = 2[1 − (−1)ⁿ]/[nπ sinh(nπ)]    (2.73)

and

W = Σ_{n=1}^∞ {2[1 − (−1)ⁿ]/[nπ sinh(nπ)]} sinh(nπη) sin(nπξ)    (2.74)
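A Python sketch (not from the text) sums the series for W, with K_n sinh(nπ) equal to twice ∫₀¹ sin(nπξ) dξ = [1 − (−1)ⁿ]/(nπ) from the orthogonality step (2.70)–(2.72). The ratio sinh(nπη)/sinh(nπ) is written in exponential form so large n does not overflow; the 99-term truncation is an arbitrary choice:

```python
import math

def W(xi, eta, terms=99):
    # Truncated series (2.69); even-n coefficients vanish
    total = 0.0
    for n in range(1, terms + 1, 2):
        coeff = 4.0 / (n * math.pi)                      # K_n * sinh(n pi) for odd n
        ratio = math.exp(-n * math.pi * (1.0 - eta)) \
                * (1.0 - math.exp(-2.0 * n * math.pi * eta)) \
                / (1.0 - math.exp(-2.0 * n * math.pi))   # sinh(n pi eta)/sinh(n pi)
        total += coeff * ratio * math.sin(n * math.pi * xi)
    return total

def U(xi, eta):
    # Complete solution: U = W + V with V(xi, eta) = W(xi, 1 - eta)
    return W(xi, eta) + W(xi, 1.0 - eta)

assert abs(U(0.5, 1.0) - 1.0) < 0.02   # nonhomogeneous side
assert U(0.0, 0.5) == 0.0              # homogeneous side exact
assert abs(U(0.5, 0.5) - 0.5) < 0.01   # center value fixed by symmetry
```

The center check is a classical result: with two of the four sides held at U = 1, symmetry forces the midpoint temperature to be exactly 1/2.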
Recall that V(ξ, η) = W(ξ, 1 − η) and U = V + W.

2.1.18 Lessons
If there are two nonhomogeneous boundary conditions, break the problem into two problems that can be added (since the equations are linear) to give the complete solution. If you are unsure of the sign of the separation constant, just assume a sign and move on. Listen to what the mathematics is telling you. It will always tell you if you choose wrong.

Example 2.4. A Nonhomogeneous Heat Conduction Problem
Consider now the arrangement above, but with a heat source, and with both boundaries held at the initial temperature u_0. The heat source is initially zero and is turned on at t = 0⁺. The exercise illustrates the method of solving the problem when the single nonhomogeneous condition is in the partial differential equation rather than in one of the boundary conditions.

ρc u_t = k u_xx + q    (2.75)

u(0, x) = u_0
u(t, 0) = u_0
u(t, L) = u_0    (2.76)
2.1.19 Scales and Dimensionless Variables
Observe that the length scale is still L, so we define ξ = x/L. Recall that k/ρc = α is the diffusivity. How shall we nondimensionalize temperature? We want as many ones and zeros as possible among the coefficients in the partial differential equation and the boundary conditions. Define U = (u − u_0)/S, where S stands for "something with dimensions of temperature" that we must find. Substituting for x and dividing both sides of the partial differential equation by q,

(ρcS/q) U_t = [kS/(qL²)] U_ξξ + 1    (2.77)

Letting S = qL²/k leads to one as the coefficient of the first term on the right-hand side. Choosing the same dimensionless time as before, τ = αt/L², results in one as the coefficient of the time derivative term. We now have

U_τ = U_ξξ + 1    (2.78)

U(0, ξ) = 0
U(τ, 0) = 0
U(τ, 1) = 0    (2.79)

2.1.20 Relocating the Nonhomogeneity
We have only one nonhomogeneous condition, but it's in the wrong place. The differential equation won't separate. For example, if we let U(ξ, τ) = P(ξ)G(τ), insert this into the partial differential equation, and divide by PG, we find

G_τ(τ)/G = P_ξξ(ξ)/P + 1/(PG)    (2.80)

The technique for dealing with this is to relocate the nonhomogeneity to the initial condition. Assume a solution in the form U = W(ξ) + V(τ, ξ). We now have

V_τ = V_ξξ + W_ξξ + 1    (2.81)

If we set W_ξξ = −1, the differential equation for V becomes homogeneous. We then set both W and V equal to zero at ξ = 0 and 1, and V(0, ξ) = −W(ξ):

W_ξξ = −1    (2.82)
W(0) = W(1) = 0    (2.83)

and

V_τ = V_ξξ    (2.84)
V(0, ξ) = −W(ξ)
V(τ, 0) = 0
V(τ, 1) = 0    (2.85)

The solution for W is parabolic:

W = (1/2) ξ(1 − ξ)    (2.86)
2.1.21 Separating Variables
We now solve for V using separation of variables:

V = P(τ)Q(ξ)    (2.87)

P_τ/P = Q_ξξ/Q = ±λ²    (2.88)

We must choose the minus sign once again (see Problem 1 above) to have a decaying exponential for P(τ). (We will see later that it's not always so obvious.) Thus P = exp(−λ²τ). The solution for Q is once again sines and cosines:

Q = A cos(λξ) + B sin(λξ)    (2.89)

The boundary condition V(τ, 0) = 0 requires that Q(0) = 0; hence A = 0. The boundary condition V(τ, 1) = 0 requires that Q(1) = 0. Since B cannot be zero, sin(λ) = 0, so that our eigenvalues are λ = nπ and our eigenfunctions are sin(nπξ).

2.1.22 Superposition
Once again using linear superposition,

V = Σ_{n=1}^∞ B_n exp(−n²π²τ) sin(nπξ)    (2.90)

Applying the initial condition,

(1/2) ξ(ξ − 1) = Σ_{n=1}^∞ B_n sin(nπξ)    (2.91)

This is a Fourier sine series representation of (1/2)ξ(ξ − 1). We now use the orthogonality of the sine function to obtain the coefficients B_n.

2.1.23 Orthogonality
Using the concept of orthogonality again, we multiply both sides by sin(mπξ) dξ and integrate over the space, noting that the integral is zero if m is not equal to n. Thus, since

∫₀¹ sin²(nπξ) dξ = 1/2    (2.92)

B_n = ∫₀¹ ξ(ξ − 1) sin(nπξ) dξ    (2.93)
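Evaluating (2.93) by parts gives B_n = 2[(−1)ⁿ − 1]/(nπ)³, i.e., B_n = −4/(nπ)³ for odd n and zero for even n. The Python sketch below (not from the text; the truncation is an arbitrary choice) assembles U = W + V, cross-checks the closed form by quadrature, and confirms the initial and steady states:

```python
import math

def B(n):
    # Closed form of Eq. (2.93), obtained by integrating by parts twice
    return 2.0 * ((-1) ** n - 1.0) / (n * math.pi) ** 3

def U(xi, tau, terms=60):
    # U = W + V: steady part W = xi(1 - xi)/2 plus the decaying series (2.90)
    w = 0.5 * xi * (1.0 - xi)
    v = sum(B(n) * math.exp(-(n * math.pi) ** 2 * tau) * math.sin(n * math.pi * xi)
            for n in range(1, terms + 1))
    return w + v

# cross-check the closed form against midpoint quadrature for n = 1
N = 2000
quad = sum(((i + 0.5) / N) * ((i + 0.5) / N - 1.0) * math.sin(math.pi * (i + 0.5) / N)
           for i in range(N)) / N
assert abs(quad - B(1)) < 1e-6

assert abs(U(0.5, 0.0)) < 1e-4             # initial condition U = 0
assert abs(U(0.5, 1.0) - 0.125) < 1e-4     # steady state W(1/2) = 1/8
```

The rapid 1/n³ decay of B_n means very few terms are needed except at very small τ.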
2.1.24 Lessons
When the differential equation is nonhomogeneous, use the linearity of the differential equation to transfer the nonhomogeneous condition to one of the boundary or initial conditions. Usually this will result in a homogeneous partial differential equation and an ordinary differential equation.

We pause here to note that while the method of separation of variables is straightforward in principle, a certain amount of intuition or, if you wish, cleverness is often required in order to put the equation and boundary conditions in an appropriate form. The student working diligently will soon develop these skills.

Problems
1. Using these ideas obtain a series solution to the boundary value problem
   u_t = u_xx
   u(t, 0) = 0
   u(t, 1) = 0
   u(0, x) = 1
2. Find a series solution to the boundary value problem
   u_t = u_xx + x
   u_x(t, 0) = 0
   u(t, 1) = 0
   u(0, x) = 0

2.2 VIBRATIONS
In vibrations problems the dependent variable occurs in the differential equation as a second-order derivative in the independent variable t. The methodology is, however, essentially the same as it is for the diffusion equation. We first apply separation of variables, then use the boundary conditions to obtain eigenfunctions and eigenvalues, and use the linearity and orthogonality principles and the single nonhomogeneous condition to obtain a series solution. Once again, if there is more than one nonhomogeneous condition we use the linear superposition principle to obtain solutions for each nonhomogeneous condition and add the resulting solutions. We illustrate these ideas with several examples.

Example 2.5. A Vibrating String
Consider a string of length L fixed at the ends. The string is initially held in a fixed position y(0, x) = f(x), where it is clear that f(x) must be zero at both x = 0 and x = L. The boundary
value problem is as follows:

y_tt = a² y_xx    (2.94)

y(t, 0) = 0
y(t, L) = 0    (2.95)
y(0, x) = f(x)
y_t(0, x) = 0

2.2.1 Scales and Dimensionless Variables
The problem has the obvious length scale L. Hence let ξ = x/L. Now let τ = ta/L and the equation becomes

y_ττ = y_ξξ    (2.96)

One could now nondimensionalize y as well, for example by defining a new variable as y/f_max, but it wouldn't simplify things. The boundary conditions remain the same except that t and x are replaced by τ and ξ.

2.2.2 Separation of Variables
You know the dance. Let y = P(τ)Q(ξ). Differentiating and substituting into Eq. (2.96),

P_ττ Q = P Q_ξξ    (2.97)

Dividing by PQ and noting that P_ττ/P and Q_ξξ/Q cannot be equal to one another unless they are both constants, we find

P_ττ/P = Q_ξξ/Q = ±λ²    (2.98)

It should be physically clear that we want the minus sign: otherwise both solutions will be hyperbolic functions. In any case, if you choose the plus sign you will immediately find that the boundary conditions on ξ cannot be satisfied. Refer back to Eq. (2.63) and the sentences following. The two ordinary differential equations and homogeneous boundary conditions are

P_ττ + λ²P = 0    (2.99)
P_τ(0) = 0
and

Q_ξξ + λ²Q = 0    (2.100)
Q(0) = 0
Q(1) = 0

The solutions are

P = A sin(λτ) + B cos(λτ)    (2.101)
Q = C sin(λξ) + D cos(λξ)    (2.102)

The first boundary condition of Eq. (2.100) requires that D = 0. The second requires that C sin(λ) be zero; since C cannot vanish, our eigenvalues are again λ_n = nπ. The boundary condition at τ = 0, that P_τ = 0, requires that A = 0. Thus

P_n Q_n = K_n sin(nπξ) cos(nπτ)    (2.103)

The final form of the solution is then

y(τ, ξ) = Σ_{n=1}^∞ K_n sin(nπξ) cos(nπτ)    (2.104)

2.2.3 Orthogonality
Applying the final (nonhomogeneous) boundary condition (the initial position),

f(ξ) = Σ_{n=1}^∞ K_n sin(nπξ)    (2.105)

In particular, if

f(x) = hx,       0 < x < 1/2
     = h(1 − x), 1/2 < x < 1    (2.106)

∫₀¹ f(x) sin(nπx) dx = ∫₀^{1/2} hx sin(nπx) dx + ∫_{1/2}^1 h(1 − x) sin(nπx) dx = [2h/(n²π²)] sin(nπ/2)    (2.107)

which vanishes for even n and alternates in sign as n runs over the odd integers. Also,

∫₀¹ K_n sin²(nπx) dx = K_n/2    (2.108)
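From (2.107) and (2.108), K_n = [4h/(n²π²)] sin(nπ/2). A Python sketch (not from the text; h = 1 and the truncation count are arbitrary choices) sums the series and checks it against the triangular initial shape, including the well-known fact that half a period later the string shape is inverted:

```python
import math

def f(x, h=1.0):
    # Initial "plucked at the middle" shape, Eq. (2.106)
    return h * x if x < 0.5 else h * (1.0 - x)

def y(xi, tau, h=1.0, terms=2000):
    # Series (2.104) with K_n = (4h / n^2 pi^2) sin(n pi / 2)
    total = 0.0
    for n in range(1, terms + 1):
        Kn = 4.0 * h / (n * math.pi) ** 2 * math.sin(n * math.pi / 2.0)
        total += Kn * math.sin(n * math.pi * xi) * math.cos(n * math.pi * tau)
    return total

for xi in (0.25, 0.5, 0.75):
    assert abs(y(xi, 0.0) - f(xi)) < 1e-2   # series reproduces the initial shape
assert abs(y(0.5, 1.0) + f(0.5)) < 1e-2     # at tau = 1 (half a period) the shape is inverted
```

Because cos(nπτ) has period 2 in τ for every n, the motion repeats with dimensionless period 2, i.e., dimensional period 2L/a.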
so that

y = (4h/π²) Σ_{n=1}^∞ [sin(nπ/2)/n²] sin(nπξ) cos(nπτ)    (2.109)

in which only the odd-n terms survive, with alternating signs.

2.2.4 Lessons
The solutions are in the form of infinite series. The coefficients of the terms of the series are determined by using the fact that the solutions of at least one of the ordinary differential equations are orthogonal functions. The orthogonality condition allows us to calculate these coefficients.

Problem
1. Solve the boundary value problem
   u_tt = u_xx
   u(t, 0) = u(t, 1) = 0
   u(0, x) = 0
   u_t(0, x) = f(x)
   Find the special case when f(x) = sin(πx).

FURTHER READING
V. Arpaci, Conduction Heat Transfer. Reading, MA: Addison-Wesley, 1966.
J. W. Brown and R. V. Churchill, Fourier Series and Boundary Value Problems, 6th edition. New York: McGraw-Hill, 2001.
C H A P T E R 3
Orthogonal Sets of Functions

In this chapter we elaborate on the concepts of orthogonality and Fourier series. We begin with the familiar concept of orthogonality of vectors. We then extend the idea to orthogonality of functions and the use of this idea to represent general functions as Fourier series, that is, series of orthogonal functions. Next we show that solutions of a fairly general linear ordinary differential equation, the Sturm–Liouville equation, are orthogonal functions. Several examples are given.

3.1 VECTORS
We begin our study of orthogonality with the familiar topic of orthogonal vectors. Suppose u(1), u(2), and u(3) are the three rectangular components of a vector u in ordinary three-dimensional space. The norm of the vector (its length) ‖u‖ is

‖u‖ = [u(1)² + u(2)² + u(3)²]^{1/2}    (3.1)

If ‖u‖ = 1, u is said to be normalized. If ‖u‖ = 0, then u(r) = 0 for each r and u is the zero vector. A linear combination of two vectors u₁ and u₂ is

u = c₁u₁ + c₂u₂    (3.2)

The scalar or inner product of the two vectors u₁ and u₂ is defined as

(u₁, u₂) = Σ_{r=1}³ u₁(r)u₂(r) = ‖u₁‖ ‖u₂‖ cos θ    (3.3)

3.1.1 Orthogonality of Vectors
If neither u₁ nor u₂ is the zero vector and if

(u₁, u₂) = 0    (3.4)

then θ = π/2 and the vectors are orthogonal. The norm of a vector u is

‖u‖ = (u, u)^{1/2}    (3.5)
3.1.2 Orthonormal Sets of Vectors
The vector n_r = u_r/‖u_r‖ has magnitude unity, and if u₁ and u₂ are orthogonal then n₁ and n₂ are orthonormal and their inner product is

(n_n, n_m) = δ_nm = 0, m ≠ n
                  = 1, m = n    (3.6)

where δ_nm is called the Kronecker delta. If n₁, n₂, and n₃ are three vectors that are mutually orthogonal, then every vector in three-dimensional space can be written as a linear combination of n₁, n₂, and n₃; that is,

f = c₁n₁ + c₂n₂ + c₃n₃    (3.7)

Note that because the vectors n_r form an orthonormal set,

(f, n₁) = c₁, (f, n₂) = c₂, (f, n₃) = c₃    (3.8)

Simply put, suppose the vector f is

f = 2n₁ + 4n₂ + n₃    (3.9)

Taking the inner product of f with n₁ we find that

(f, n₁) = 2(n₁, n₁) + 4(n₁, n₂) + (n₁, n₃)    (3.10)

and according to Eq. (3.8), c₁ = 2. Similarly, c₂ = 4 and c₃ = 1.

3.2 FUNCTIONS
3.2.1 Orthonormal Sets of Functions and Fourier Series
Suppose there is a set of orthonormal functions Φ_n(x) defined on an interval a < x < b (√2 sin(nπx) on the interval 0 < x < 1 is an example). A set of orthonormal functions is defined as one whose inner product, defined as ∫_a^b Φ_n(x)Φ_m(x) dx, satisfies

(Φ_n, Φ_m) = ∫_a^b Φ_n Φ_m dx = δ_nm    (3.11)

Suppose we can express a function as an infinite series of these orthonormal functions,

f(x) = Σ_n c_n Φ_n on a < x < b    (3.12)

Equation (3.12) is called a Fourier series of f(x) in terms of the orthonormal function set Φ_n(x).
If we now form the inner product of Φ_m with both sides of Eq. (3.12) and use the definition of an orthonormal function set as stated in Eq. (3.11), we see that the inner product of f(x) and Φ_n(x) is c_n:

c_n ∫_a^b Φ_n²(ξ) dξ = c_n = ∫_a^b f(ξ)Φ_n(ξ) dξ    (3.13)

In particular, consider a set of functions φ_n that are orthogonal on the interval (a, b), so that

∫_a^b φ_n(ξ)φ_m(ξ) dξ = 0,      m ≠ n
                      = ‖φ_n‖², m = n    (3.14)

where ‖φ_n‖² = ∫_a^b φ_n²(ξ) dξ is called the square of the norm of φ_n. The functions

Φ_n = φ_n/‖φ_n‖    (3.15)

then form an orthonormal set. We now show how to form the series representation of the function f(x) as a series expansion in terms of the orthogonal (but not orthonormal) set of functions φ_n(x):

f(x) = Σ_n [φ_n(x)/‖φ_n‖] ∫_a^b f(ξ) [φ_n(ξ)/‖φ_n‖] dξ = Σ_n [φ_n(x)/‖φ_n‖²] ∫_a^b f(ξ)φ_n(ξ) dξ    (3.16)

This is called a Fourier series representation of the function f(x). As a concrete example, the square of the norm of the sine function on the interval (0, π) is

‖sin(nx)‖² = ∫₀^π sin²(nξ) dξ = π/2    (3.17)

so that the corresponding orthonormal function is

Φ_n = √(2/π) sin(nx)    (3.18)

A function can be represented by a series of sine functions on the interval (0, π) as

f(x) = Σ_n sin(nx) (2/π) ∫₀^π sin(nς) f(ς) dς    (3.19)

This is a Fourier sine series.
3.2.2 Best Approximation
We next ask whether, since we can never sum to infinity, the values of the constants c_n in Eq. (3.13) give the most accurate approximation of the function. To illustrate the idea we return to the idea of orthogonal vectors in three-dimensional space. Suppose we want to approximate a three-dimensional vector with a two-dimensional vector. What will be the components of the two-dimensional vector that best approximate the three-dimensional vector? Let the three-dimensional vector be f = c₁n₁ + c₂n₂ + c₃n₃. Let the two-dimensional vector be k = a₁n₁ + a₂n₂. We wish to minimize ‖k − f‖:

‖k − f‖ = [(a₁ − c₁)² + (a₂ − c₂)² + c₃²]^{1/2}    (3.20)

It is clear from the above equation (and also from Fig. 3.1) that this will be minimized when a₁ = c₁ and a₂ = c₂.

Turning now to the orthogonal function series, we attempt to minimize the difference between the function, represented by a series with an infinite number of terms, and the summation only to some finite value m. The square of the error is

E² = ∫_a^b [f(x) − K_m(x)]² dx = ∫_a^b [f²(x) + K_m²(x) − 2 f(x)K_m(x)] dx    (3.21)

where

f(x) = Σ_{n=1}^∞ c_n Φ_n(x)    (3.22)

and

K_m = Σ_{n=1}^m a_n Φ_n(x)    (3.23)

FIGURE 3.1: Best approximation of a three-dimensional vector in two dimensions
Noting that

∫_a^b K_m²(x) dx = Σ_{n=1}^m Σ_{j=1}^m a_n a_j ∫_a^b Φ_n(x)Φ_j(x) dx = Σ_{n=1}^m a_n² = a₁² + a₂² + · · · + a_m²    (3.24)

and

∫_a^b f(x)K_m(x) dx = Σ_{n=1}^∞ Σ_{j=1}^m c_n a_j ∫_a^b Φ_n(x)Φ_j(x) dx = Σ_{n=1}^m c_n a_n = c₁a₁ + c₂a₂ + · · · + c_m a_m    (3.25)

E² = ∫_a^b f²(x) dx + a₁² + · · · + a_m² − 2a₁c₁ − · · · − 2a_m c_m    (3.26)

Now add and subtract c₁², c₂², . . . , c_m². Thus Eq. (3.26) becomes

E² = ∫_a^b f²(x) dx − c₁² − c₂² − · · · − c_m² + (a₁ − c₁)² + (a₂ − c₂)² + · · · + (a_m − c_m)²    (3.27)

which is clearly minimized when a_n = c_n.

3.2.3 Convergence of Fourier Series
We briefly consider the question of whether the Fourier series actually converges to the function f(x) for all values, say, on the interval a ≤ x ≤ b. The series converges to the function if the value of E defined in Eq. (3.21) approaches zero as m approaches infinity. Suffice it to say that this is true for functions that are continuous with piecewise continuous first derivatives, that is, most physically realistic temperature distributions and displacements of vibrating strings and bars. In each particular situation, however, one should use the various convergence theorems that are presented in most elementary calculus books. Uniform convergence of Fourier series is discussed extensively in the book Fourier Series and Boundary Value Problems by James Ward Brown and R. V. Churchill. In this chapter we give only a few physically clear examples.
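The best-approximation property is easy to verify numerically. In the Python sketch below (not from the text) we approximate f(x) = x on (0, 1) with the orthonormal basis √2 sin(nπx), whose first Fourier coefficient is c₁ = √2/π, and confirm that perturbing the coefficient in either direction increases the squared error E² of Eq. (3.21). The quadrature resolution and the ±0.1 perturbation are arbitrary choices:

```python
import math

def E2(coeffs):
    # Squared L2 error between f(x) = x and sum(a_n * sqrt(2) sin(n pi x)),
    # Eq. (3.21), by midpoint quadrature on (0, 1)
    N = 4000
    err = 0.0
    for i in range(N):
        x = (i + 0.5) / N
        approx = sum(a * math.sqrt(2.0) * math.sin((n + 1) * math.pi * x)
                     for n, a in enumerate(coeffs))
        err += (x - approx) ** 2 / N
    return err

c1 = math.sqrt(2.0) / math.pi   # Fourier coefficient (f, Phi_1) for f(x) = x
best = E2([c1])
assert best < E2([c1 + 0.1]) and best < E2([c1 - 0.1])
```

This mirrors Eq. (3.27): the error decomposes into a fixed part plus Σ(a_n − c_n)², so any a_n ≠ c_n can only make E larger.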
3.2.4 Examples of Fourier Series
Example 3.1. Determine a Fourier sine series representation of f(x) = x on the interval (0, 1).
The series will take the form

x = Σ_{j=1}^∞ c_j sin(jπx)    (3.28)

Since sin(jπx) forms an orthogonal set on (0, 1), multiply both sides by sin(kπx) dx and integrate over the interval on which the function is orthogonal:

∫₀¹ x sin(kπx) dx = Σ_{j=1}^∞ c_j ∫₀¹ sin(jπx) sin(kπx) dx    (3.29)

Noting that all of the terms on the right-hand side of (3.29) are zero except the one for which k = j,

∫₀¹ x sin(jπx) dx = c_j ∫₀¹ sin²(jπx) dx    (3.30)

After integrating we find

(−1)^{j+1}/(jπ) = c_j/2    (3.31)

Thus,

x = Σ_{j=1}^∞ [2(−1)^{j+1}/(jπ)] sin(jπx)    (3.32)

This is an alternating-sign series in which the coefficients always decrease as j increases, and it therefore converges. The sine function is periodic, and so the series must also be a periodic function beyond the interval (0, 1). The series outside this interval forms the periodic continuation of the series. Note that the sine is an odd function, so that sin(jπx) = −sin(−jπx). Thus the periodic continuation looks like Fig. 3.2. The series converges everywhere, but at x = 1 it is identically zero instead of one. It converges to 1 − ε arbitrarily close to x = 1.
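Partial sums of (3.32) make this behavior concrete. A Python sketch (not from the text; the truncation counts are arbitrary):

```python
import math

def s(x, terms):
    # Partial sum of the sine series (3.32) for f(x) = x
    return sum(2.0 * (-1) ** (j + 1) / (j * math.pi) * math.sin(j * math.pi * x)
               for j in range(1, terms + 1))

assert abs(s(0.5, 4000) - 0.5) < 1e-3   # converges to x in the interior
assert abs(s(1.0, 4000)) < 1e-12        # but to zero at x = 1: every term vanishes there
assert abs(s(1.5, 4000) + 0.5) < 1e-3   # periodic continuation is odd about x = 1
```

The last check illustrates the sawtooth continuation of Fig. 3.2: at x = 1.5 the series returns −0.5, the odd-periodic extension of x.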
FIGURE 3.2: The periodic continuation of the function x represented by the sine series

Example 3.2. Find a Fourier cosine series for f(x) = x on the interval (0, 1).
In this case

x = Σ_{n=0}^∞ c_n cos(nπx)    (3.33)

Multiply both sides by cos(mπx) dx and integrate over (0, 1):

∫₀¹ x cos(mπx) dx = Σ_{n=0}^∞ c_n ∫₀¹ cos(mπx) cos(nπx) dx    (3.34)

Noting that cos(nπx) is an orthogonal set on (0, 1), all terms in (3.34) are zero except when n = m. Evaluating the integrals,

c_n/2 = [(−1)ⁿ − 1]/(nπ)²    (3.35)

There is a problem when n = 0: both the numerator and the denominator are zero there. However we can evaluate c₀ by noting that according to Eq. (3.34)

∫₀¹ x dx = c₀ = 1/2    (3.36)

and the cosine series is therefore

x = 1/2 + Σ_{n=1}^∞ {2[(−1)ⁿ − 1]/(nπ)²} cos(nπx)    (3.37)

The series converges to x everywhere. Since cos(nπx) = cos(−nπx) it is an even function and its periodic continuation is shown in Fig. 3.3. Note that the sine series is discontinuous at x = 1, while the cosine series is continuous everywhere. (Which is the better representation?)
FIGURE 3.3: The periodic continuation of the series in Example 3.2

It should be clear from the above examples that in general a Fourier sine/cosine series of a function f(x) defined on 0 ≤ x ≤ 1 can be written as

f(x) = c₀/2 + Σ_{n=1}^∞ c_n cos(nπx) + Σ_{n=1}^∞ b_n sin(nπx)    (3.38)

where

c_n = ∫₀¹ f(x) cos(nπx) dx / ∫₀¹ cos²(nπx) dx,  n = 0, 1, 2, 3, . . .
b_n = ∫₀¹ f(x) sin(nπx) dx / ∫₀¹ sin²(nπx) dx,  n = 1, 2, 3, . . .    (3.39)

Problems
1. Show that ∫₀^π sin(nx) sin(mx) dx = 0 when n ≠ m.
2. Find the Fourier sine series for f(x) = 1 − x on the interval (0, 1). Sketch the periodic continuation. Sum the series for the first five terms and sketch over two periods. Discuss convergence of the series, paying special attention to convergence at x = 0 and x = 1.
3. Find the Fourier cosine series for 1 − x on (0, 1). Sketch the periodic continuation. Sum the first two terms and sketch. Sum the first five terms and sketch over two periods. Discuss convergence, paying special attention to convergence at x = 0 and x = 1.
3.3 STURM–LIOUVILLE PROBLEMS: ORTHOGONAL FUNCTIONS
We now proceed to show that solutions of a certain ordinary differential equation with certain boundary conditions (called a Sturm–Liouville problem) are orthogonal functions with respect to a weighting function, and that therefore a well-behaved function can be represented by an infinite series of these orthogonal functions (called eigenfunctions), as in Eqs. (3.12) and (3.16).

Recall that the problem

X_xx + λ²X = 0, X(0) = 0, X(1) = 0, 0 ≤ x ≤ 1    (3.40)

has solutions only for λ = nπ and that the solutions sin(nπx) are orthogonal on the interval (0, 1). The sine functions are called eigenfunctions and λ = nπ are called eigenvalues. As another example, consider the problem

X_xx + λ²X = 0    (3.41)

with boundary conditions

X(0) = 0
X(1) + H X_x(1) = 0    (3.42)

The solution of the differential equation is

X = A sin(λx) + B cos(λx)    (3.43)

The first boundary condition guarantees that B = 0. The second boundary condition is satisfied by the equation

A[sin(λ) + Hλ cos(λ)] = 0    (3.44)

Since A cannot be zero, this implies that

−tan(λ) = Hλ    (3.45)

The eigenfunctions are sin(λx) and the eigenvalues are solutions of Eq. (3.45). This is illustrated graphically in Fig. 3.4. We will generally be interested in the fairly general linear second-order differential equation and boundary conditions given in Eqs. (3.46) and (3.47):

d/dx [r(x) dX/dx] + [q(x) + λp(x)]X = 0, a ≤ x ≤ b    (3.46)

a₁X(a) + a₂ dX(a)/dx = 0
b₁X(b) + b₂ dX(b)/dx = 0    (3.47)
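The transcendental equation (3.45) has exactly one root in each interval ((2n − 1)π/2, nπ), since the left-hand side of (3.44) changes sign across it. A Python sketch (not from the text; H = 1 and the bisection depth are arbitrary choices) computes the first few eigenvalues:

```python
import math

def g(lam, H):
    # Left-hand side of Eq. (3.44): sin(lam) + H*lam*cos(lam); its roots are the eigenvalues
    return math.sin(lam) + H * lam * math.cos(lam)

def eigenvalues(H, count=5):
    # g has opposite signs at (2n-1)pi/2 and n*pi, so bisect each bracket
    roots = []
    for n in range(1, count + 1):
        lo, hi = (2 * n - 1) * math.pi / 2, n * math.pi
        for _ in range(80):
            mid = 0.5 * (lo + hi)
            if g(lo, H) * g(mid, H) <= 0:
                hi = mid
            else:
                lo = mid
        roots.append(0.5 * (lo + hi))
    return roots

roots = eigenvalues(1.0)
assert abs(roots[0] - 2.02876) < 1e-4          # classical first root of tan(lam) = -lam
assert all(abs(g(r, 1.0)) < 1e-9 for r in roots)
```

As H grows, the roots migrate toward the odd multiples of π/2, consistent with the graphical picture of Fig. 3.4.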
FIGURE 3.4: Eigenvalues of −tan(λ) = Hλ

Solutions exist only for discrete values λ_n, the eigenvalues. The corresponding solutions X_n(x) are the eigenfunctions.

3.3.1 Orthogonality of Eigenfunctions
Consider two solutions of (3.46) and (3.47), X_n and X_m, corresponding to eigenvalues λ_n and λ_m. The primes denote differentiation with respect to x.

(r X_m′)′ + q X_m = −λ_m p X_m    (3.48)
(r X_n′)′ + q X_n = −λ_n p X_n    (3.49)

Multiply the first by X_n and the second by X_m and subtract, obtaining the following:

[r(X_n X_m′ − X_m X_n′)]′ = (λ_n − λ_m) p X_m X_n    (3.50)

Integrating both sides,

[r(X_n X_m′ − X_m X_n′)]_a^b = (λ_n − λ_m) ∫_a^b p(x) X_n X_m dx    (3.51)

Inserting the boundary conditions (3.47) into the left-hand side of (3.51) (when a₂ and b₂ are nonzero, X′(a) = −(a₁/a₂)X(a) and X′(b) = −(b₁/b₂)X(b)):

r(b)[X_n(b)X_m′(b) − X_m(b)X_n′(b)] − r(a)[X_n(a)X_m′(a) − X_m(a)X_n′(a)]
  = r(b)[−(b₁/b₂)X_n(b)X_m(b) + (b₁/b₂)X_m(b)X_n(b)] − r(a)[−(a₁/a₂)X_n(a)X_m(a) + (a₁/a₂)X_m(a)X_n(a)] = 0    (3.52)
Thus

(λ_n − λ_m) ∫_a^b p(x) X_n X_m dx = 0, so that ∫_a^b p(x) X_n X_m dx = 0 for m ≠ n    (3.53)

Notice that X_m and X_n are orthogonal with respect to the weighting function p(x) on the interval (a, b). Obvious examples are the sine and cosine functions.

Example 3.3. Example 2.1 in Chapter 2 is an example in which the eigenfunctions are sin(λ_n ξ) and the eigenvalues are λ_n = (2n − 1)π/2.

Example 3.4. If the boundary conditions in Example 2.1 in Chapter 2 are changed to

Θ_ξ(0) = 0
Θ(1) = 0    (3.54)

we note that the general solution of the differential equation is

Θ(ξ) = A cos(λξ) + B sin(λξ)    (3.55)

The boundary conditions require that B = 0 and cos(λ) = 0. The values of λ can take on any of the values π/2, 3π/2, 5π/2, . . . , (2n − 1)π/2. The eigenfunctions are cos(λ_n ξ) and the eigenvalues are λ_n = (2n − 1)π/2.

Example 3.5. Suppose the boundary conditions in the original problem (Example 2.1, Chapter 2) take on the more complicated form

Θ(0) = 0
Θ(1) + h Θ_ξ(1) = 0    (3.56)

The first boundary condition requires that B = 0 (the cosine term). The second boundary condition requires that

sin(λ_n) + hλ_n cos(λ_n) = 0, or    (3.57)

λ_n = −(1/h) tan(λ_n)    (3.58)

which is a transcendental equation that must be solved for the eigenvalues. The eigenfunctions are, of course, sin(λ_n x).

Example 3.6. A Physical Example: Heat Conduction in Cylindrical Coordinates
The heat conduction equation in cylindrical coordinates is

∂u/∂t = ∂²u/∂r² + (1/r) ∂u/∂r, 0 < r < 1    (3.59)

with boundary conditions at r = 0 and r = 1 and initial condition u(0, r) = f(r).
Separating variables as u = R(r)T(t),

(1/T) dT/dt = (1/R) d²R/dr² + (1/(rR)) dR/dr = −λ²,   0 ≤ r ≤ 1, 0 ≤ t   (3.60)

(Why the minus sign?) The equation for R(r) is

(r R′)′ + λ² r R = 0   (3.61)

which is a Sturm–Liouville equation with weighting function r. It is an eigenvalue problem with an infinite number of eigenfunctions corresponding to the eigenvalues λ_n. There will be two solutions R_1(λ_n r) and R_2(λ_n r) for each λ_n. The solutions are called Bessel functions, and they will be discussed in Chapter 4.

R_n(λ_n r) = A_n R_1(λ_n r) + B_n R_2(λ_n r)   (3.62)

The boundary conditions on r are used to determine a relation between the constants A_n and B_n. For solutions R(λ_n r) and R(λ_m r),

∫_0^1 r R(λ_n r) R(λ_m r) dr = 0,   n ≠ m   (3.63)

is the orthogonality condition. The solution for T(t) is the exponential e^(−λ_n² t) for each n. Thus, by superposition, the solution of (3.60) can be written as an infinite series of the form

u = Σ_{n=0}^∞ K_n e^(−λ_n² t) R(λ_n r)   (3.64)

and the orthogonality condition is used to find K_n as

K_n = ∫_0^1 f(r) R(λ_n r) r dr / ∫_0^1 R²(λ_n r) r dr   (3.65)

Problems
1. For Example 2.1 in Chapter 2 with the new boundary conditions described in Example 3.4 above, find K_n and write the infinite series solution to the revised problem.
FURTHER READING
J. W. Brown and R. V. Churchill, Fourier Series and Boundary Value Problems, 6th edition. New York: McGraw-Hill, 2001.
P. V. O'Neil, Advanced Engineering Mathematics, 5th edition. Pacific Grove, CA: Brooks/Cole–Thomson, 2003.
C H A P T E R 4
Series Solutions of Ordinary Differential Equations

4.1 GENERAL SERIES SOLUTIONS
The purpose of this chapter is to present a method of obtaining solutions of linear second-order ordinary differential equations in the form of Taylor series. The methodology is then used to obtain solutions of two special differential equations, Bessel's equation and Legendre's equation. Properties of the solutions—Bessel functions and Legendre functions—which are extensively used in solving problems in mathematical physics, are discussed briefly. Bessel functions are used in solving both diffusion and vibration problems in cylindrical coordinates. The functions R(λ_n r) in Example 3.6 at the end of Chapter 3 are Bessel functions. Legendre functions are useful in solving problems in spherical coordinates. Associated Legendre functions, also useful in solving problems in spherical coordinates, are briefly discussed.

4.1.1 Definitions
In this chapter we will be concerned with linear second-order equations. A general case is

a(x)u′′ + b(x)u′ + c(x)u = f(x)   (4.1)

Division by a(x) gives

u′′ + p(x)u′ + q(x)u = r(x)   (4.2)

Recall that if r(x) is zero the equation is homogeneous. The solution can be written as the sum of a homogeneous solution u_h(x) and a particular solution u_p(x). If r(x) is zero, u_p = 0. The nature of the solution and the solution method depend on the nature of the coefficients p(x) and q(x). If each of these functions can be expanded in a Taylor series about a point x_0, the point is said to be an ordinary point and the functions are analytic at that point. If either of the coefficients is not analytic at x_0, the point is a singular point. If x_0 is a singular point and both (x − x_0)p(x) and (x − x_0)²q(x) are analytic there, the singularities are said to be removable and the singular point is a regular singular point. If this is not the case the singular point is irregular.
4.1.2 Ordinary Points and Series Solutions
If the point x_0 is an ordinary point, the dependent variable has a solution in the neighborhood of x_0 of the form

u(x) = Σ_{n=0}^∞ c_n (x − x_0)^n   (4.3)

We now illustrate the solution method with two examples.

Example 4.1. Find a series solution in the form of Eq. (4.3) about the point x = 0 of the differential equation

u′′ + x²u = 0   (4.4)

The point x = 0 is an ordinary point, so at least near x = 0 there is a solution in the form of the above series. Differentiating (4.3) twice and inserting it into (4.4),

u′ = Σ_{n=0}^∞ n c_n x^(n−1)
u′′ = Σ_{n=0}^∞ n(n − 1) c_n x^(n−2)

Σ_{n=0}^∞ n(n − 1) c_n x^(n−2) + Σ_{n=0}^∞ c_n x^(n+2) = 0   (4.5)

Note that the first term in the u′ series is zero while the first two terms in the u′′ series are zero. We can shift the indices in both summations so that the power of x is the same in both series by setting n − 2 = m in the first series:

Σ_{n=0}^∞ n(n − 1) c_n x^(n−2) = Σ_{m=−2}^∞ (m + 2)(m + 1) c_{m+2} x^m = Σ_{m=0}^∞ (m + 2)(m + 1) c_{m+2} x^m   (4.6)

Noting that m is a "dummy variable" and that the first two terms in the series are zero, the series can be written as

Σ_{n=0}^∞ (n + 2)(n + 1) c_{n+2} x^n   (4.7)

In a similar way we can write the second term as

Σ_{n=0}^∞ c_n x^(n+2) = Σ_{n=2}^∞ c_{n−2} x^n   (4.8)
We now have

Σ_{n=0}^∞ (n + 2)(n + 1) c_{n+2} x^n + Σ_{n=2}^∞ c_{n−2} x^n = 0   (4.9)

which can be written as

2c_2 + 6c_3 x + Σ_{n=2}^∞ [(n + 2)(n + 1) c_{n+2} + c_{n−2}] x^n = 0   (4.10)

Each coefficient of x^n must be zero in order to satisfy Eq. (4.10). Thus c_2 and c_3 must be zero and

c_{n+2} = −c_{n−2} / [(n + 2)(n + 1)]   (4.11)

while c_0 and c_1 remain arbitrary. Setting n = 2, we find that c_4 = −c_0/12, and setting n = 3, c_5 = −c_1/20. Since c_2 and c_3 are zero, so are c_6, c_7, c_10, c_11, etc. Also, c_8 = −c_4/(8)(7) = c_0/(4)(3)(8)(7) and c_9 = −c_5/(9)(8) = c_1/(5)(4)(9)(8). The first few terms of the series are

u(x) = c_0(1 − x⁴/12 + x⁸/672 − · · ·) + c_1(x − x⁵/20 + x⁹/1440 − · · ·)   (4.12)

These are both alternating series whose terms decrease in magnitude at least for x ≤ 1, so the series converge at least under these conditions. The constants c_0 and c_1 can be determined from boundary conditions. For example, if u(0) = 0, then c_0 = 0, and if in addition u(1) = 1,

c_1[1 − 1/20 + 1/1440 − · · ·] = 1

Example 4.2. Find a series solution in the form of Eq. (4.3) of the differential equation

u′′ + xu′ + u = x²   (4.13)

valid near x = 0. Assuming a solution in the form of (4.3), differentiating, and inserting into (4.13),

Σ_{n=0}^∞ n(n − 1) c_n x^(n−2) + Σ_{n=0}^∞ n c_n x^n + Σ_{n=0}^∞ c_n x^n − x² = 0   (4.14)
Shifting the indices as before,

Σ_{n=0}^∞ (n + 2)(n + 1) c_{n+2} x^n + Σ_{n=0}^∞ n c_n x^n + Σ_{n=0}^∞ c_n x^n − x² = 0   (4.15)

Once again, each of the coefficients of x^n must be zero:

n = 0 :  2c_2 + c_0 = 0,  c_2 = −c_0/2   (4.16)
n = 1 :  6c_3 + 2c_1 = 0,  c_3 = −c_1/3
n = 2 :  12c_4 + 3c_2 − 1 = 0,  c_4 = (1 + 3c_0/2)/12
n > 2 :  c_{n+2} = −c_n/(n + 2)

The last of these is called a recurrence formula. Thus,

u = c_0(1 − x²/2 + x⁴/8 − x⁶/(8)(6) + · · ·)
  + c_1(x − x³/3 + x⁵/(3)(5) − x⁷/(3)(5)(7) + · · ·)
  + x⁴(1/12 − x²/(12)(6) + · · ·)   (4.17)

Note that the series on the third line of (4.17) is the particular solution of (4.13). The constants c_0 and c_1 are to be evaluated using the boundary conditions.

4.1.3 Lessons: Finding Series Solutions for Differential Equations with Ordinary Points
If x_0 is an ordinary point, assume a solution in the form of Eq. (4.3) and substitute into the differential equation. Then equate the coefficients of equal powers of x. This will give a recurrence formula from which two series may be obtained in terms of two arbitrary constants. These may be evaluated by using the two boundary conditions.

Problems
1. The differential equation u′′ + xu′ + xu = x has ordinary points everywhere. Find a series solution near x = 0.
2. Find a series solution of the differential equation u′′ + (1 + x²)u = x near x = 0 and identify the particular solution.
3. The differential equation (1 − x²)u′′ + u = 0 has singular points at x = ±1, but its coefficients are analytic near x = 0. Find a series solution that is valid near x = 0 and discuss the radius of convergence.

4.1.4 Regular Singular Points and the Method of Frobenius
If x_0 is a singular point of (4.2) there may not be a power series solution of the form of Eq. (4.3). In such a case we proceed by assuming a solution of the form

u(x) = Σ_{n=0}^∞ c_n (x − x_0)^(n+r)   (4.18)

in which c_0 ≠ 0 and r is a constant, not necessarily an integer. This is called the method of Frobenius and the series is called a Frobenius series. The Frobenius series need not be a power series because r may be a fraction or even negative. Differentiating once,

u′ = Σ_{n=0}^∞ (n + r) c_n (x − x_0)^(n+r−1)   (4.19)

and differentiating again,

u′′ = Σ_{n=0}^∞ (n + r − 1)(n + r) c_n (x − x_0)^(n+r−2)   (4.20)

These are then substituted into the differential equation, indices are shifted where required so that each term contains x raised to the same power, and the coefficients of each power of x are set equal to zero. Setting the coefficient associated with the lowest power of x to zero yields a quadratic equation for the index r, called an indicial equation. There will therefore be two roots of this equation, corresponding to two series solutions. The values of c_n are determined, as above, by a recurrence equation for each of the roots. Three possible cases are important: (a) the roots are distinct and do not differ by an integer, (b) the roots differ by an integer, and (c) the roots are coincident, i.e., repeated. We illustrate the method by a series of examples.

Example 4.3 (distinct roots). Solve the equation

x²u′′ + x(1/2 + 2x)u′ + (x − 1/2)u = 0   (4.21)

The coefficient of the u′ term is

p(x) = (1/2 + 2x)/x   (4.22)
and the coefficient of the u term is

q(x) = (x − 1/2)/x²   (4.23)

Both have singularities at x = 0. However, multiplying p(x) by x and q(x) by x² removes the singularities, so x = 0 is a regular singular point. Assume a solution in the form of the Frobenius series u = Σ_{n=0}^∞ c_n x^(n+r), differentiate twice, and substitute into (4.21), obtaining

Σ_{n=0}^∞ (n + r)(n + r − 1) c_n x^(n+r) + Σ_{n=0}^∞ (1/2)(n + r) c_n x^(n+r) + Σ_{n=0}^∞ 2(n + r) c_n x^(n+r+1)
  + Σ_{n=0}^∞ c_n x^(n+r+1) − Σ_{n=0}^∞ (1/2) c_n x^(n+r) = 0   (4.24)

The indices of the third and fourth summations are now shifted as in Example 4.1 and we find

[r(r − 1) + r/2 − 1/2] c_0 x^r + Σ_{n=1}^∞ [(n + r)(n + r − 1) + (n + r)/2 − 1/2] c_n x^(n+r)
  + Σ_{n=1}^∞ [2(n + r − 1) + 1] c_{n−1} x^(n+r) = 0   (4.25)

Each coefficient must be zero for the equation to be true. Thus the coefficient of the c_0 term must be zero, since c_0 itself cannot be zero. This gives a quadratic equation to be solved for r, the indicial equation (since we are solving for the index, r):

r(r − 1) + r/2 − 1/2 = 0   (4.26)

with roots r = 1 and r = −1/2. The coefficients of x^(n+r) must also be zero. Thus

[(n + r)(n + r − 1) + (n + r)/2 − 1/2] c_n + [2(n + r − 1) + 1] c_{n−1} = 0   (4.27)

The recurrence equation is therefore

c_n = −[2(n + r − 1) + 1] / [(n + r)(n + r − 1) + (n + r)/2 − 1/2] c_{n−1}   (4.28)

For the case of r = 1,

c_n = −(2n + 1) / [n(n + 3/2)] c_{n−1}   (4.29)
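The recurrence (4.29) can be iterated mechanically. A minimal sketch in exact rational arithmetic (the helper name `frobenius_coeffs` and the use of Python's `fractions` are assumptions, not from the text):

```python
from fractions import Fraction

def frobenius_coeffs(r, nmax):
    """Iterate the recurrence (4.28) for x^2 u'' + x(1/2 + 2x) u' + (x - 1/2) u = 0:
    c_n = -[2(n+r-1)+1] / [(n+r)(n+r-1) + (n+r)/2 - 1/2] * c_{n-1}, with c_0 = 1.
    Assumes an integer root r so exact Fractions can be used."""
    c = [Fraction(1)]
    for n in range(1, nmax + 1):
        num = Fraction(2 * (n + r - 1) + 1)
        den = Fraction(n + r) * (n + r - 1) + Fraction(n + r, 2) - Fraction(1, 2)
        c.append(-num / den * c[-1])
    return c
```

With r = 1 and c_0 = 1 this yields c_1 = −6/5, c_2 = 6/7, c_3 = −4/9, agreeing with the coefficients computed next.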
Computing a few of the coefficients,

c_1 = −3/(5/2) c_0 = −(6/5) c_0
c_2 = −(5/7) c_1 = (6/7) c_0
c_3 = −7/(27/2) c_2 = −(4/9) c_0
etc.

and the first Frobenius series is

u_1 = c_0 [x − (6/5)x² + (6/7)x³ − (4/9)x⁴ + · · ·]   (4.30)

Setting r = −1/2 in the recurrence equation (4.28) and using b_n instead of c_n to distinguish it from the first case,

b_n = −(2n − 2) / [n(n − 3/2)] b_{n−1}   (4.31)

Noting that in this case b_1 = 0, all the following b_n must be zero and the second Frobenius series has only one term: b_0 x^(−1/2). The complete solution is

u = c_0 [x − (6/5)x² + (6/7)x³ − (4/9)x⁴ + · · ·] + b_0 x^(−1/2)   (4.32)

Example 4.4 (repeated roots). Next consider the differential equation

x²u′′ − xu′ + (x + 1)u = 0   (4.33)

There is a regular singular point at x = 0, so we attempt a Frobenius series about x = 0. Differentiating (4.18) and substituting into (4.33),

Σ_{n=0}^∞ (n + r − 1)(n + r) c_n x^(n+r) − Σ_{n=0}^∞ (n + r) c_n x^(n+r) + Σ_{n=0}^∞ c_n x^(n+r) + Σ_{n=0}^∞ c_n x^(n+r+1) = 0   (4.34)

or

[r(r − 1) − r + 1] c_0 x^r + Σ_{n=1}^∞ [(n + r − 1)(n + r) − (n + r) + 1] c_n x^(n+r) + Σ_{n=1}^∞ c_{n−1} x^(n+r) = 0   (4.35)

where we have shifted the index in the last sum. The indicial equation is

r(r − 1) − r + 1 = 0   (4.36)
and the roots of this equation are both r = 1. Setting the coefficients in the two sums to zero we find the recurrence equation

c_n = −c_{n−1} / [(n + r − 1)(n + r) − (n + r) + 1]   (4.37)

and since r = 1,

c_n = −c_{n−1} / [n(n + 1) − (n + 1) + 1] = −c_{n−1}/n²   (4.38)

c_1 = −c_0
c_2 = −c_1/(6 − 3 + 1) = (1/4) c_0
c_3 = −c_2/(12 − 4 + 1) = −(1/9) c_2 = −(1/36) c_0
etc.

The Frobenius series is

u_1 = c_0 [x − x² + (1/4)x³ − (1/36)x⁴ + · · ·]   (4.39)

In this case there is no second solution in the form of a Frobenius series because of the repeated root. We shall soon see what form the second solution takes.

Example 4.5 (roots differing by an integer, 1). Next consider the equation

x²u′′ − 2xu′ + (x + 2)u = 0   (4.40)

There is a regular singular point at x = 0. We therefore expect a solution in the form of the Frobenius series (4.18). Substituting (4.18), (4.19), and (4.20) into our differential equation, we obtain

Σ_{n=0}^∞ (n + r)(n + r − 1) c_n x^(n+r) − Σ_{n=0}^∞ 2(n + r) c_n x^(n+r) + Σ_{n=0}^∞ 2 c_n x^(n+r) + Σ_{n=0}^∞ c_n x^(n+r+1) = 0   (4.41)

Taking out the n = 0 term and shifting the last summation,

[r(r − 1) − 2r + 2] c_0 x^r + Σ_{n=1}^∞ [(n + r)(n + r − 1) − 2(n + r) + 2] c_n x^(n+r) + Σ_{n=1}^∞ c_{n−1} x^(n+r) = 0   (4.42)
The first term gives the indicial equation

r(r − 1) − 2r + 2 = 0   (4.43)

There are two distinct roots, r_1 = 2 and r_2 = 1. However, they differ by an integer: r_1 − r_2 = 1. Substituting r_1 = 2 into (4.42) and noting that each coefficient of x^(n+r) must be zero,

[(n + 2)(n + 1) − 2(n + 2) + 2] c_n + c_{n−1} = 0   (4.44)

The recurrence equation is

c_n = −c_{n−1} / [(n + 2)(n − 1) + 2]

c_1 = −c_0/2
c_2 = −c_1/6 = c_0/12
c_3 = −c_2/12 = −c_0/144   (4.45)

The first Frobenius series is therefore

u_1 = c_0 [x² − (1/2)x³ + (1/12)x⁴ − (1/144)x⁵ + · · ·]   (4.46)

We now attempt to find the Frobenius series corresponding to r_2 = 1. Substituting r = 1 into (4.42) we find that

[n(n + 1) − 2(n + 1) + 2] c_n = −c_{n−1}   (4.47)

When n = 1 the left-hand side vanishes, so c_0 must be zero. Hence c_n must be zero for all n, and the attempt to find a second Frobenius series has failed. This will not always be the case when roots differ by an integer, as illustrated in the following example.

Example 4.6 (roots differing by an integer, 2). Consider the differential equation

x²u′′ + x²u′ − 2u = 0   (4.48)

You may show that the indicial equation is r² − r − 2 = 0, with roots r_1 = 2 and r_2 = −1; the roots differ by an integer. When r = 2 the recurrence equation is

c_n = −(n + 1) / [n(n + 3)] c_{n−1}   (4.49)
The first Frobenius series is

u_1 = c_0 x² [1 − (1/2)x + (3/20)x² − (1/30)x³ + · · ·]   (4.50)

When r = −1 the recurrence equation is

[(n − 1)(n − 2) − 2] b_n + (n − 2) b_{n−1} = 0   (4.51)

When n = 3 this results in b_2 = 0. Thus b_n = 0 for all n ≥ 2 and the second series terminates:

u_2 = b_0 [1/x − 1/2]   (4.52)

4.1.5 Lessons: Finding Series Solutions for Differential Equations with Regular Singular Points
1. Assume a solution of the form

u = Σ_{n=0}^∞ c_n x^(n+r),   c_0 ≠ 0   (4.53)

Differentiate term by term and insert into the differential equation. Set the coefficient of the lowest power of x to zero to obtain a quadratic equation for r. If the indicial equation yields two roots that do not differ by an integer there will always be two Frobenius series, one for each root of the indicial equation.
2. If the roots are the same (repeated roots) the form of the second solution will be

u_2 = u_1 ln(x) + Σ_{n=1}^∞ b_n x^(n+r_1)   (4.54)

This equation is substituted into the differential equation to determine b_n.
3. If the roots differ by an integer, choose the larger root to obtain a Frobenius series for u_1. The second solution may be another Frobenius series. If the method fails, assume a solution of the form

u_2 = u_1 ln(x) + Σ_{n=0}^∞ b_n x^(n+r_2)   (4.55)

This equation is substituted into the differential equation to find b_n. This is considered in the next section.
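For an equation written as x²u′′ + x p(x)u′ + q(x)u = 0 with p and q analytic at x = 0, the indicial equation reduces to r(r − 1) + p(0)r + q(0) = 0, so the three cases above can be read off from the roots of a quadratic. A sketch (the helper name is an assumption; it also assumes real roots, as in all of the examples in this section):

```python
import math

def indicial_roots(p0, q0):
    """Roots of r(r-1) + p0*r + q0 = 0, the indicial equation for
    x^2 u'' + x p(x) u' + q(x) u = 0 with p(0) = p0, q(0) = q0."""
    b, c = p0 - 1.0, q0
    disc = math.sqrt(b * b - 4.0 * c)   # assumes real roots
    return ((-b + disc) / 2.0, (-b - disc) / 2.0)
```

Example 4.3 (p0 = 1/2, q0 = −1/2) gives (1, −1/2); Example 4.4 (−1, 1) gives the double root 1; Example 4.5 (−2, 2) gives (2, 1); Example 4.6 (0, −2) gives (2, −1).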
4.1.6 Logarithms and Second Solutions
Example 4.7. Reconsider Example 4.4 and assume a solution in the form of (4.54). Recall that in Example 4.4 the differential equation was

x²u′′ − xu′ + (1 + x)u = 0   (4.56)

and the indicial equation yielded a double root at r = 1. A single Frobenius series was

u_1 = x − x² + x³/4 − x⁴/36 + · · ·

Now differentiate Eq. (4.54):

u_2′ = u_1′ ln x + (1/x) u_1 + Σ_{n=1}^∞ (n + r) b_n x^(n+r−1)

u_2′′ = u_1′′ ln x + (2/x) u_1′ − (1/x²) u_1 + Σ_{n=1}^∞ (n + r − 1)(n + r) b_n x^(n+r−2)   (4.57)

Inserting this into the differential equation gives

ln(x)[x²u_1′′ − xu_1′ + (1 + x)u_1] + 2(xu_1′ − u_1)
  + Σ_{n=1}^∞ [b_n(n + r − 1)(n + r) − b_n(n + r) + b_n] x^(n+r) + Σ_{n=1}^∞ b_n x^(n+r+1) = 0   (4.58)

The first term on the left-hand side of (4.58) is clearly zero because the term in brackets is the original equation. Noting that r = 1 in this case and substituting from the Frobenius series for u_1 (c_0 can be set equal to unity without losing generality), we find

2[−x² + x³/2 − x⁴/12 + · · ·] + Σ_{n=1}^∞ [n(n + 1) − (n + 1) + 1] b_n x^(n+1) + Σ_{n=2}^∞ b_{n−1} x^(n+1) = 0   (4.59)

or

−2x² + x³ − x⁴/6 + · · · + b_1 x² + Σ_{n=2}^∞ [n² b_n + b_{n−1}] x^(n+1) = 0   (4.60)

Equating coefficients of like powers of x we find that

b_1 = 2
For n ≥ 2,

1 + 4b_2 + b_1 = 0    b_2 = −3/4
−1/6 + 9b_3 + b_2 = 0    b_3 = 11/108
etc.

u_2 = u_1 ln x + 2x² − (3/4)x³ + (11/108)x⁴ − · · ·   (4.61)

The complete solution is

u = [C_1 + C_2 ln x] u_1 + C_2 [2x² − (3/4)x³ + (11/108)x⁴ − · · ·]   (4.62)

Example 4.8. Reconsider Example 4.5, in which a second Frobenius series could not be found because the roots of the indicial equation differed by an integer. We attempt a second solution in the form of (4.55). The differential equation in Example 4.5 was

x²u′′ − 2xu′ + (x + 2)u = 0

and the roots of the indicial equation were r = 2 and r = 1, separated by an integer. We found one Frobenius series,

u_1 = x² − (1/2)x³ + (1/12)x⁴ − (1/144)x⁵ + · · ·

for the root r = 2, but were unable to find another Frobenius series for the case of r = 1. Assume a second solution of the form in Eq. (4.55). Differentiating and substituting into (4.40),

[x²u_1′′ − 2xu_1′ + (x + 2)u_1] ln(x) + 2xu_1′ − 3u_1
  + Σ_{n=0}^∞ b_n [(n + r)(n + r − 1) − 2(n + r) + 2] x^(n+r) + Σ_{n=0}^∞ b_n x^(n+r+1) = 0   (4.63)

Noting that the first term in the brackets is zero, inserting u_1 and u_1′ from (4.46), and noting that r_2 = 1,

x² − (3/2)x³ + (5/12)x⁴ − (7/144)x⁵ + · · · + b_0 x² + Σ_{n=2}^∞ [n(n − 1) b_n + b_{n−1}] x^(n+1) = 0   (4.64)
Equating x² terms, we find that b_0 = −1. For the higher order terms,

3/2 = 2b_2 + b_1
Taking b_1 = 0,  b_2 = 3/4
−5/12 = 6b_3 + b_2 = 6b_3 + 3/4
b_3 = −7/36

The second solution is

u_2 = u_1 ln(x) − x + (3/4)x³ − (7/36)x⁴ + · · ·   (4.65)

The complete solution is therefore

u = [C_1 + C_2 ln x] u_1 − C_2 [x − (3/4)x³ + (7/36)x⁴ − · · ·]   (4.66)

Problems
1. Find two Frobenius series solutions of
x²u′′ + 2xu′ + (x² − 2)u = 0
2. Find two Frobenius series solutions of
x²u′′ + xu′ + (x² − 1/4)u = 0
3. Show that the indicial equation for the differential equation
xu′′ + u′ + xu = 0
has the double root s = 0, so that the differential equation has only one Frobenius series solution. Find that solution. Then find another solution in the form
u = ln(x) Σ_{n=0}^∞ c_n x^(n+s) + Σ_{m=0}^∞ a_m x^(s+m)
where the first summation above is the first Frobenius solution.
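The bookkeeping in Example 4.7 can also be automated. Matching powers of x in (4.60), with u_1 = Σ c_n x^(n+1) and c_n = (−1)^n/(n!)² (taking c_0 = 1), gives b_1 = −2c_1 and n²b_n + b_{n−1} + 2n c_n = 0 for n ≥ 2. A sketch in exact arithmetic (the function name is an assumption, not from the text):

```python
from fractions import Fraction

def log_coeffs(nmax):
    """b_n for u2 = u1*ln(x) + sum b_n x^(n+1) in Example 4.7
    (x^2 u'' - x u' + (1+x) u = 0), from b_1 = -2 c_1 and
    n^2 b_n + b_{n-1} + 2 n c_n = 0 for n >= 2, where c_n = (-1)^n/(n!)^2."""
    c = [Fraction(1)]
    for n in range(1, nmax + 1):
        c.append(-c[-1] / n ** 2)      # recurrence (4.38) for u1
    b = [None, -2 * c[1]]              # b_0 unused; b_1 = -2 c_1 = 2
    for n in range(2, nmax + 1):
        b.append(-(b[-1] + 2 * n * c[n]) / n ** 2)
    return b
```

This reproduces b_1 = 2, b_2 = −3/4, b_3 = 11/108, the values appearing in (4.61).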
4.2 BESSEL FUNCTIONS
A few differential equations are so widely useful in applied mathematics that they have been named after the mathematician who first explored their theory. Such is the case with Bessel's equation, which occurs in problems involving the Laplacian ∇²u in cylindrical coordinates when variables are separated. Bessel's equation is a Sturm–Liouville equation of the form

ρ² d²u/dρ² + ρ du/dρ + (λ²ρ² − ν²)u = 0   (4.67)

Changing the independent variable to x = λρ, the equation becomes

x²u′′ + xu′ + (x² − ν²)u = 0   (4.68)

4.2.1 Solutions of Bessel's Equation
Recalling the standard forms (4.1) and (4.2), we see that this is a linear homogeneous equation with variable coefficients and with a regular singular point at x = 0. We therefore assume a solution in the form of a Frobenius series (4.18):

u = Σ_{j=0}^∞ c_j x^(j+r)   (4.69)

Upon differentiating twice and substituting into (4.68) we find

Σ_{j=0}^∞ [(j + r − 1)(j + r) + (j + r) − ν²] c_j x^(j+r) + Σ_{j=0}^∞ c_j x^(j+r+2) = 0   (4.70)

In general ν can be any real number. We will first explore some of the properties of the solution when ν is a nonnegative integer n = 0, 1, 2, 3, . . . . First note that

(j + r − 1)(j + r) + (j + r) = (j + r)²   (4.71)

Shifting the exponent in the second summation and writing out the first two terms in the first,

(r − n)(r + n) c_0 x^r + (r + 1 − n)(r + 1 + n) c_1 x^(r+1) + Σ_{j=2}^∞ [(r + j − n)(r + j + n) c_j + c_{j−2}] x^(j+r) = 0   (4.72)

In order for the coefficient of the x^r term to vanish, r = n or r = −n. (This is the indicial equation.) In order for the coefficient of the x^(r+1) term to vanish, c_1 = 0. For each term in the
summation to vanish,

c_j = −c_{j−2} / [(r + j − n)(r + j + n)] = −c_{j−2} / [j(2n + j)],   r = n,  j = 2, 3, 4, · · ·   (4.73)

This is the recurrence relation. Since c_1 = 0, c_j = 0 whenever j is an odd number. It is therefore convenient to write j = 2k and note that

c_{2k} = −c_{2k−2} / [2² k(n + k)]   (4.74)

so that

c_{2k} = (−1)^k c_0 / [k!(n + 1)(n + 2) · · · (n + k) 2^(2k)]   (4.75)

The Frobenius series is

u = c_0 x^n [1 + Σ_{k=1}^∞ (−1)^k / (k!(n + 1)(n + 2) · · · (n + k)) (x/2)^(2k)]   (4.76)

Now c_0 is an arbitrary constant, so we can choose it to be c_0 = 1/(n! 2^n), in which case the above equation reduces to

J_n(x) = Σ_{k=0}^∞ (−1)^k / (k!(n + k)!) (x/2)^(n+2k)   (4.77)

The usual notation is J_n, and the function is called a Bessel function of the first kind of order n. Note that we can immediately conclude from (4.77) that

J_n(−x) = (−1)^n J_n(x)   (4.78)

Note that the roots of the indicial equation differ by an integer. When r = −n, (4.73) does not yield a useful second solution since the denominator is zero when j = 2n. In any case it is easy to show that J_{−n}(x) = (−1)^n J_n(x), so when ν is an integer the two solutions are not linearly independent. A second solution is determined by the methods detailed above and involves natural logarithms. The details are very messy and will not be given here. The result is

Y_n(x) = (2/π) J_n(x) [ln(x/2) + γ] + (1/π) Σ_{k=0}^∞ (−1)^(k+1) [φ(k) + φ(k + n)] / (k!(k + n)!) (x/2)^(2k+n)
  − (1/π) Σ_{k=0}^(n−1) (n − k − 1)! / k! (x/2)^(2k−n)   (4.79)

In this equation φ(0) = 0, φ(k) = 1 + 1/2 + 1/3 + · · · + 1/k, and γ is Euler's constant 0.5772156649 . . .
[FIGURE 4.1: Bessel functions of the first kind]

Bessel functions of the first and second kinds of order zero are particularly useful in solving practical problems (Fig. 4.1). For these cases

J_0(x) = Σ_{k=0}^∞ (−1)^k / (k!)² (x/2)^(2k)   (4.80)

and

Y_0(x) = (2/π) J_0(x) [ln(x/2) + γ] + (2/π) Σ_{k=1}^∞ (−1)^(k+1) φ(k) / (k!)² (x/2)^(2k)   (4.81)

The case of noninteger ν. Recall that if ν is not an integer, the recurrence (4.73) leads to a denominator containing the product

(1 + ν)(2 + ν)(3 + ν) · · · (n + ν)   (4.82)

When ν was an integer we were able to use the familiar properties of factorials to simplify the expression for J_n(x). If ν is not an integer we can use the properties of the gamma function to the same end. The gamma function is defined as

Γ(ν) = ∫_0^∞ t^(ν−1) e^(−t) dt   (4.83)
Note that

Γ(ν + 1) = ∫_0^∞ t^ν e^(−t) dt   (4.84)

and integrating by parts,

Γ(ν + 1) = [−t^ν e^(−t)]_0^∞ + ν ∫_0^∞ t^(ν−1) e^(−t) dt = ν Γ(ν)   (4.85)

and (4.82) can be written as

(1 + ν)(2 + ν)(3 + ν) · · · (n + ν) = Γ(n + ν + 1) / Γ(ν + 1)   (4.86)

so that when ν is not an integer

J_ν(x) = Σ_{n=0}^∞ (−1)^n / [2^(2n+ν) n! Γ(n + ν + 1)] x^(2n+ν)   (4.87)

Fig. 4.3 is a graphical representation of the gamma function. Here are the rules:
1. If 2ν is not an integer, J_ν and J_{−ν} are linearly independent and the general solution of Bessel's equation of order ν is

u(x) = A J_ν(x) + B J_{−ν}(x)   (4.88)

where A and B are constants to be determined by boundary conditions.
2. If 2ν is an odd positive integer, J_ν and J_{−ν} are still linearly independent and the solution form (4.88) is still valid.
3. If 2ν is an even integer, J_ν(x) and J_{−ν}(x) are not linearly independent and the solution takes the form

u(x) = A J_ν(x) + B Y_ν(x)   (4.89)

Bessel functions are tabulated functions, just as are exponentials and trigonometric functions. Some examples of their shapes are shown in Figs. 4.1 and 4.2. Note that both J_ν(x) and Y_ν(x) have an infinite number of zeros, which we denote by λ_j, j = 0, 1, 2, 3, . . .
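The series (4.77) converges for every x, so J_n can be evaluated directly from it. A minimal sketch (the truncation at 30 terms is an arbitrary choice, adequate for moderate x):

```python
import math

def J(n, x, terms=30):
    """Bessel function of the first kind, integer order n >= 0,
    summed directly from the series (4.77). Truncation is an assumption."""
    return sum((-1) ** k / (math.factorial(k) * math.factorial(n + k))
               * (x / 2) ** (n + 2 * k) for k in range(terms))
```

Sanity checks: J_0(0) = 1, J_0 vanishes at its tabulated first zero x ≈ 2.404826, and the standard recurrence J_{ν−1}(x) + J_{ν+1}(x) = (2ν/x) J_ν(x) (property 3 of Table 4.1 below) holds to rounding error.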
[FIGURE 4.2: Bessel functions of the second kind]
[FIGURE 4.3: The gamma function]

Some important relations involving Bessel functions are shown in Table 4.1. We will derive only the first, namely

d/dx [x^ν J_ν(x)] = x^ν J_{ν−1}(x)   (4.90)

d/dx [x^ν J_ν(x)] = d/dx Σ_{n=0}^∞ (−1)^n / [2^(2n+ν) n! Γ(n + ν + 1)] x^(2n+2ν)   (4.91)
TABLE 4.1: Some Properties of Bessel Functions
1. [x^ν J_ν(x)]′ = x^ν J_{ν−1}(x)
2. [x^(−ν) J_ν(x)]′ = −x^(−ν) J_{ν+1}(x)
3. J_{ν−1}(x) + J_{ν+1}(x) = (2ν/x) J_ν(x)
4. J_{ν−1}(x) − J_{ν+1}(x) = 2 J_ν′(x)
5. ∫ x^ν J_{ν−1}(x) dx = x^ν J_ν(x) + constant
6. ∫ x^(−ν) J_{ν+1}(x) dx = −x^(−ν) J_ν(x) + constant

= Σ_{n=0}^∞ (−1)^n 2(n + ν) / [2^(2n+ν) n! Γ(n + ν + 1)] x^(2n+2ν−1)   (4.92)

= x^ν Σ_{n=0}^∞ (−1)^n / [2^(2n+ν−1) n! Γ(n + ν)] x^(2n+ν−1) = x^ν J_{ν−1}(x)   (4.93)

These will prove important when we begin solving partial differential equations in cylindrical coordinates using separation of variables.
Bessel's equation is of Sturm–Liouville form, and the functions J_n are orthogonal with respect to the weight function ρ (see Eqs. (3.46) and (3.53), Chapter 3). Note that Bessel's equation (4.67) with ν = n is

ρ² J_n′′ + ρ J_n′ + (λ²ρ² − n²) J_n = 0   (4.94)

which, after multiplying through by 2J_n′/ρ, can be written as

d/dρ [(ρ J_n′)²] + (λ²ρ² − n²) d/dρ [J_n²] = 0   (4.95)

Integrating (the second term by parts), we find that

[(ρ J_n′)² + (λ²ρ² − n²) J_n²]_0^1 − 2λ² ∫_0^1 ρ J_n² dρ = 0   (4.96)

Thus,

2λ² ∫_0^1 ρ J_n² dρ = λ² [J_n′(λ)]² + (λ² − n²) [J_n(λ)]²   (4.97)
Thus, if the eigenvalues λ_j are the roots of J_n(λ_j) = 0, the orthogonality condition is, according to Eq. (3.53) in Chapter 3,

∫_0^1 ρ J_n(λ_j ρ) J_n(λ_k ρ) dρ = 0,   j ≠ k
  = (1/2)[J_{n+1}(λ_j)]²,   j = k   (4.98)

On the other hand, if the eigenvalues are the roots of the equation H J_n(λ_j) + λ_j J_n′(λ_j) = 0,

∫_0^1 ρ J_n(λ_j ρ) J_n(λ_k ρ) dρ = 0,   j ≠ k
  = (λ_j² − n² + H²)[J_n(λ_j)]² / (2λ_j²),   j = k   (4.99)

Using the equations in the table above and integrating by parts, it is not difficult to show that

∫_0^x s^n J_0(s) ds = x^n J_1(x) + (n − 1) x^(n−1) J_0(x) − (n − 1)² ∫_0^x s^(n−2) J_0(s) ds   (4.100)

4.2.2 Fourier–Bessel Series
Owing to the fact that Bessel's equation with appropriate boundary conditions is a Sturm–Liouville system, it is possible to use the orthogonality property to expand any piecewise continuous function on the interval 0 < x < 1 as a series of Bessel functions. For example, let

f(x) = Σ_{n=1}^∞ A_n J_0(λ_n x)   (4.101)

Multiplying both sides by x J_0(λ_k x) dx and integrating from x = 0 to x = 1 (recall that the weighting function x must be used to ensure orthogonality) and noting the orthogonality property, we find that

f(x) = Σ_{j=1}^∞ [∫_0^1 x f(x) J_0(λ_j x) dx / ∫_0^1 x [J_0(λ_j x)]² dx] J_0(λ_j x)   (4.102)
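The coefficient formula in (4.102) can be checked numerically. The sketch below (the helper names and the quadrature choice are assumptions; the two quoted zeros of J_0 are standard tabulated values) verifies orthogonality with weight x and reproduces the closed form 2/(λ_1 J_1(λ_1)) that (4.103)–(4.105) derive analytically for f(x) = 1:

```python
import math

def J(n, x, terms=30):
    """Series (4.77) for the Bessel function J_n, integer n >= 0."""
    return sum((-1) ** k / (math.factorial(k) * math.factorial(n + k))
               * (x / 2) ** (n + 2 * k) for k in range(terms))

def simpson(f, a, b, m=2000):
    """Composite Simpson rule with m (even) subintervals."""
    h = (b - a) / m
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, m))
    return s * h / 3

# first two roots of J0(x) = 0 (standard tabulated values)
lam1, lam2 = 2.404825557695773, 5.520078110286311

# orthogonality with weight x, and the leading coefficient of (4.102) for f = 1
ortho = simpson(lambda x: x * J(0, lam1 * x) * J(0, lam2 * x), 0.0, 1.0)
A1 = (simpson(lambda x: x * J(0, lam1 * x), 0.0, 1.0)
      / simpson(lambda x: x * J(0, lam1 * x) ** 2, 0.0, 1.0))
```

Numerically `ortho` is zero to quadrature accuracy and `A1` agrees with 2/(λ_1 J_1(λ_1)), as the next example shows analytically.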
Example 4.9. Derive a Fourier–Bessel series representation of 1 on the interval 0 < x < 1. We note that with J_0(λ_j) = 0,

∫_0^1 x [J_0(λ_j x)]² dx = (1/2)[J_1(λ_j)]²   (4.103)

and

∫_0^1 x J_0(λ_j x) dx = J_1(λ_j)/λ_j   (4.104)

Thus

1 = 2 Σ_{j=1}^∞ J_0(λ_j x) / [λ_j J_1(λ_j)]   (4.105)

Example 4.10 (A problem in cylindrical coordinates). A cylinder of radius r_1 is initially at a temperature u_0 when its surface temperature is increased to u_1. It is sufficiently long that variation in the z direction may be neglected, and there is no variation in the θ direction. There is no heat generation. From Chapter 1, Eq. (1.11),

u_t = (α/r)(r u_r)_r   (4.106)

u(0, r) = u_0
u(t, r_1) = u_1   (4.107)
u is bounded

The length scale is r_1 and the time scale is r_1²/α. A dimensionless dependent variable that normalizes the problem is U = (u − u_1)/(u_0 − u_1). Setting η = r/r_1 and τ = tα/r_1²,

U_τ = (1/η)(η U_η)_η   (4.108)

U(0, η) = 1
U(τ, 1) = 0   (4.109)
U is bounded

Separate variables as U = T(τ)R(η). Substitute into the differential equation and divide by TR:

T_τ/T = (1/(Rη))(η R_η)_η = ±λ²   (4.110)
where the minus sign is chosen so that the solution remains bounded. The solution for T is exponential, and we recognize the equation for R as Bessel's equation with ν = 0:

(1/η)(η R_η)_η + λ² R = 0   (4.111)

The solution is a linear combination of the two Bessel functions of order 0:

C_1 J_0(λη) + C_2 Y_0(λη)   (4.112)

Since we have seen that Y_0 is unbounded as η approaches zero, C_2 must be zero. Furthermore, the boundary condition at η = 1 requires that J_0(λ) = 0, so that our eigenfunctions are J_0(λ_n η) and the corresponding eigenvalues are the roots of J_0(λ_n) = 0.

U_n = K_n e^(−λ_n² τ) J_0(λ_n η),   n = 1, 2, 3, 4, . . .   (4.113)

Summing (linear superposition),

U = Σ_{n=1}^∞ K_n e^(−λ_n² τ) J_0(λ_n η)   (4.114)

Using the initial condition,

1 = Σ_{n=1}^∞ K_n J_0(λ_n η)   (4.115)

Bessel functions are orthogonal with respect to the weighting factor η since they are solutions of a Sturm–Liouville system. Therefore when we multiply both sides of this equation by η J_0(λ_m η) dη and integrate over (0, 1), all of the terms in the summation are zero except when m = n. Thus,

∫_0^1 J_0(λ_n η) η dη = K_n ∫_0^1 J_0²(λ_n η) η dη   (4.116)

but

∫_0^1 η J_0²(λ_n η) dη = J_1²(λ_n)/2
∫_0^1 η J_0(λ_n η) dη = J_1(λ_n)/λ_n   (4.117)
Thus

U(τ, η) = Σ_{n=1}^∞ [2/(λ_n J_1(λ_n))] e^(−λ_n² τ) J_0(λ_n η)   (4.118)

Example 4.11 (Heat generation in a cylinder). Reconsider the problem of heat transfer in a long cylinder, but with heat generation and a normalized initial temperature of zero:

u_τ = (1/r)(r u_r)_r + q_0   (4.119)
u(τ, 1) = u(0, r) = 0,   u bounded   (4.120)

Our experience with the above example hints that the solution may be of the form

u = Σ_{j=1}^∞ A_j(τ) J_0(λ_j r)   (4.121)

This equation satisfies the boundary condition at r = 1, and A_j(τ) is to be determined. Substituting into the partial differential equation gives

Σ_{j=1}^∞ A_j′(τ) J_0(λ_j r) = Σ_{j=1}^∞ A_j(τ) (1/r) d/dr [r dJ_0(λ_j r)/dr] + q_0   (4.122)

In view of Bessel's differential equation, the first term on the right can be written as

Σ_{j=1}^∞ −λ_j² J_0(λ_j r) A_j(τ)   (4.123)

The second term can be represented as a Fourier–Bessel series as follows:

q_0 = q_0 Σ_{j=1}^∞ 2 J_0(λ_j r) / [λ_j J_1(λ_j)]   (4.124)

as shown in Example 4.9 above. Equating coefficients of J_0(λ_j r), we find that A_j(τ) must satisfy the ordinary differential equation

A_j′(τ) + λ_j² A_j(τ) = 2q_0 / [λ_j J_1(λ_j)]   (4.125)

with the initial condition A_j(0) = 0. Solution of this simple first-order linear differential equation yields

A_j(τ) = 2q_0 / [λ_j³ J_1(λ_j)] + C exp(−λ_j² τ)   (4.126)
After applying the initial condition,

A_j(τ) = 2q_0 / [λ_j³ J_1(λ_j)] [1 − exp(−λ_j² τ)]   (4.127)

The solution is therefore

u(τ, r) = Σ_{j=1}^∞ 2q_0 / [λ_j³ J_1(λ_j)] [1 − exp(−λ_j² τ)] J_0(λ_j r)   (4.128)

Example 4.12 (Time dependent heat generation). Suppose that instead of constant heat generation, the generation is time dependent, q(τ). The differential equation for A_j(τ) then becomes

A_j′(τ) + λ_j² A_j(τ) = 2q(τ) / [λ_j J_1(λ_j)]   (4.129)

An integrating factor for this equation is exp(λ_j² τ), so that the equation can be written as

d/dτ [A_j exp(λ_j² τ)] = 2q(τ) / [λ_j J_1(λ_j)] exp(λ_j² τ)   (4.130)

Integrating, and introducing t as a dummy variable,

A_j(τ) = 2 / [λ_j J_1(λ_j)] ∫_0^τ q(t) exp(−λ_j² (τ − t)) dt   (4.131)

Problems
1. By differentiating the series form of J_0(x) term by term, show that J_0′(x) = −J_1(x).
2. Show that ∫ x J_0(x) dx = x J_1(x) + constant.
3. Using the expression for ∫_0^x s^n J_0(s) ds, show that
∫_0^x s⁵ J_0(s) ds = x(x² − 8)[4x J_0(x) + (x² − 8) J_1(x)]
4. Express 1 − x as a Fourier–Bessel series.
4.3 LEGENDRE FUNCTIONS
We now consider another second-order linear differential equation that is common in problems involving the Laplacian in spherical coordinates. It is called Legendre's equation,

(1 - x^2) u'' - 2x u' + k u = 0   (4.132)

This is clearly a Sturm–Liouville equation, and we will seek a series solution near the origin, which is a regular point. We therefore assume a solution in the form of (4.3).

u = \sum_{j=0}^{\infty} c_j x^j   (4.133)

Differentiating (4.133) and substituting into (4.132) we find

\sum_{j=0}^{\infty} \left[ j(j-1) c_j x^{j-2} (1 - x^2) - 2 j c_j x^j + k c_j x^j \right] = 0   (4.134)

or

\sum_{j=0}^{\infty} \left\{ [k - j(j+1)] c_j x^j + j(j-1) c_j x^{j-2} \right\} = 0   (4.135)

On shifting the index in the last term,

\sum_{j=0}^{\infty} \left\{ (j+2)(j+1) c_{j+2} + [k - j(j+1)] c_j \right\} x^j = 0   (4.136)

The recurrence relation is

c_{j+2} = \frac{j(j+1) - k}{(j+1)(j+2)} c_j   (4.137)

There are thus two independent Frobenius series. It can be shown that they both diverge at x = 1 unless they terminate at some point. It is easy to see from (4.137) that they do in fact terminate if k = n(n+1) for some nonnegative integer n, for then c_{n+2} = 0 and consequently c_{n+4}, c_{n+6}, etc. are all zero. Therefore the solutions, which depend on n (i.e., the eigenfunctions), are polynomials: series that terminate at j = n. For example, if n = 0, c_2 = 0 and the solution is a constant. If n = 1, the series beginning with c_1 terminates at j = 1 and the polynomial is x. In general,

u = P_n(x) = c_n \left[ x^n - \frac{n(n-1)}{2(2n-1)} x^{n-2} + \frac{n(n-1)(n-2)(n-3)}{2(4)(2n-1)(2n-3)} x^{n-4} - \cdots \right]
  = \frac{1}{2^n} \sum_{k=0}^{m} \frac{(-1)^k}{k!} \frac{(2n-2k)!}{(n-2k)!(n-k)!} x^{n-2k}   (4.138)

where m = n/2 if n is even and (n-1)/2 if n is odd. The coefficient c_n is of course arbitrary. It turns out to be convenient to choose

c_0 = 1,   c_n = \frac{(2n-1)(2n-3) \cdots 1}{n!}  (n \ge 1)   (4.139)

The first few polynomials are P_0 = 1, P_1 = x, P_2 = (3x^2 - 1)/2, P_3 = (5x^3 - 3x)/2, P_4 = (35x^4 - 30x^2 + 3)/8. Successive Legendre polynomials can be generated by the use of Rodrigues' formula

P_n(x) = \frac{1}{2^n n!} \frac{d^n}{dx^n} (x^2 - 1)^n   (4.140)

For example, P_5 = (63x^5 - 70x^3 + 15x)/8. Fig. 4.4 shows graphs of several Legendre polynomials.

FIGURE 4.4: Legendre polynomials
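The polynomials and their orthogonality can be checked numerically with NumPy's Legendre module; a brief sketch (the helper name is ours):

```python
import numpy as np
from numpy.polynomial import legendre as L

def legendre_coeffs(n):
    """Power-series coefficients of P_n, constant term first."""
    return L.leg2poly([0] * n + [1])   # the basis vector [0,...,0,1] selects P_n

# P5 should be (63x^5 - 70x^3 + 15x)/8, as given by Rodrigues' formula (4.140):
print(legendre_coeffs(5) * 8)          # coefficients 0, 15, 0, -70, 0, 63

# Orthogonality: a 20-point Gauss-Legendre rule integrates P_2^2 exactly,
# and the result should be 2/(2n+1) = 2/5.
x, w = L.leggauss(20)
print(w @ L.legval(x, [0, 0, 1])**2)   # ~0.4
```

The same quadrature check with two different polynomials returns zero, which is the δ_mn in the orthogonality relation.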
  • 81. book Mobk070 March 22, 2007 11:7 SERIES SOLUTIONS OF ORDINARY DIFFERENTIAL EQUATIONS 71 The second solution of Legendre’s equation can be found by the method of variation of parameters. The result is Qn(x) = Pn(x) dζ P2 n (ζ)(1 − ζ2) (4.141) It can be shown that this generally takes on a logarithmic form involving ln [(x + 1)/(x − 1)] which goes to infinity at x = 1. In fact it can be shown that the first two of these functions are Q0 = 1 2 ln 1 + x 1 − x and Q1 = x 2 ln 1 + x 1 − x − 1 (4.142) Thus the complete solution of the Legendre equation is u = APn(x) + BQn(x) (4.143) where Pn(x) and Qn(x) are Legendre polynomials of the first and second kind. If we require the solution to be finite at x = 1, B must be zero. Referring back to Eqs. (3.46) through (3.53) in Chapter 3, we note that the eigenvalues λ = n(n + 1) and the eigenfunctions are Pn(x) and Qn(x). We further note from (3.46) and (3.47) that the weight function is one and that the orthogonality condition is 1 −1 Pn(x)Pm(x)dx = 2 2n + 1 δmn (4.144) where δmn is Kronecker’s delta, 1 when n = m and 0 otherwise. Example 4.13. Steady heat conduction in a sphere Consider heat transfer in a solid sphere whose surface temperature is a function of θ, the angle measured downward from the z-axis (see Fig. 1.3 Chapter 1). The problem is steady and there is no heat source. r ∂2 ∂r2 (ru) + 1 sin θ ∂ ∂θ sin θ ∂u ∂θ = 0 u(r = 1) = f (θ) (4.145) u is bounded
  • 82. book Mobk070 March 22, 2007 11:7 72 ESSENTIALS OF APPLIED MATHEMATICS FOR SCIENTISTS AND ENGINEERS Substituting x = cos θ, r ∂2 ∂r2 (ru) + ∂ ∂x (1 − x2 ) ∂u ∂x = 0 (4.146) We separate variables by assuming u = R(r)X(x). Substitute into the equation and divide by RX and find r R (r R) = − [(1 − x2 )X ] X = ±λ2 (4.147) or r(r R) ∓ λ2 R = 0 [(1 − x2 )X ] ± λ2 X = 0 (4.148) The second of these is Legendre’s equation, and we have seen that it has bounded solutions at r = 1 when λ2 = n(n + 1). The first equation is of the Cauchy–Euler type with solution R = C1rn + C2r−n−1 (4.149) Noting that the constant C2 must be zero to obtain a bounded solution at r = 0, and using superposition, u = ∞ n=0 Knrn Pn(x) (4.150) and using the condition at fr = 1 and the orthogonality of the Legendre polynomial π θ=0 f (θ)Pn(cos θ)dθ = π θ=0 Kn P2 n (cos θ)dθ = 2Kn 2n + 1 (4.151) 4.4 ASSOCIATED LEGENDRE FUNCTIONS Equation (1.15) in Chapter 1 can be put in the form 1 α ∂u ∂t = ∂2 u ∂r2 + 2 r ∂u ∂r + 1 r2 ∂ ∂µ (1 − µ2 ) ∂u ∂µ + 1 r2(1 − µ2) ∂2 u ∂ 2 (4.152) by substituting µ = cos θ.
  • 83. book Mobk070 March 22, 2007 11:7 SERIES SOLUTIONS OF ORDINARY DIFFERENTIAL EQUATIONS 73 We shall see later that on separating variables in the case where u is a function of r, θ, , and t, we find the following differential equation in the µ variable: d dµ (1 − µ2 ) d f dµ + n(n + 1) − m2 1 − µ2 f = 0 (4.153) We state without proof that the solution is the associated Legendre function Pm n (µ). The associated Legendre polynomial is given by Pm n = (1 − µ2 )1/2m dm dµm Pn(µ) (4.154) The orthogonality condition is 1 −1 [Pm n (µ)]2 dµ = 2(n + m)! (2n + 1)(n − m)! (4.155) and 1 −1 Pm n Pm n dµ = 0 n = n (4.156) The associated Legendre function of the second kind is singular at x = ±1 and may be computed by the formula Qm n (x) = (1 − x2 )m/2 dm Qn(x) dxm (4.157) Problems 1. Find and carefully plot P6 and P7. 2. Perform the integral above and show that Q0(x) = C P0(x) x ξ=0 dξ (1 − ξ2)P0(ξ) = C 2 ln 1 + x 1 − x and that Q1(x) = Cx x ξ=0 dξ ξ2(1 − ξ2) = Cx 2 ln 1 + x 1 − x − 1 3. Using the equation above find Q0 0(x) and Q1 1(x)
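The normalization integral (4.155) is easy to verify numerically; a sketch assuming SciPy's `lpmv` for the associated Legendre functions (the variable names are ours):

```python
import numpy as np
from math import factorial
from scipy.special import lpmv
from numpy.polynomial.legendre import leggauss

# Check Eq. (4.155) for P_n^m with n = 3, m = 2. The integrand is a
# polynomial, so a 50-point Gauss-Legendre rule integrates it exactly.
n, m = 3, 2
x, w = leggauss(50)
norm = w @ lpmv(m, n, x)**2
exact = 2 * factorial(n + m) / ((2*n + 1) * factorial(n - m))
print(norm, exact)   # both ~34.2857 (= 240/7)
```

Note that for odd m the functions contain a factor sqrt(1 − µ²), but the squared integrand in the norm is still a polynomial, so the quadrature check remains exact.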
FURTHER READING
J. W. Brown and R. V. Churchill, Fourier Series and Boundary Value Problems. 6th edition. New York: McGraw-Hill, 2001.
C. F. Chan Man Fong, D. DeKee, and P. N. Kaloni, Advanced Mathematics for Engineering and Science. 2nd edition. Singapore: World Scientific, 2004.
P. V. O'Neil, Advanced Engineering Mathematics. 5th edition. Pacific Grove, CA: Brooks/Cole Thomson, 2003.
  • 85. book Mobk070 March 22, 2007 11:7 75 C H A P T E R 5 Solutions Using Fourier Series and Integrals We have already demonstrated solution of partial differential equations for some simple cases in rectangular Cartesian coordinates in Chapter 2. We now consider some slightly more complicated problems as well as solutions in spherical and cylindrical coordinate systems to further demonstrate the Fourier method of separation of variables. 5.1 CONDUCTION (OR DIFFUSION) PROBLEMS Example 5.1 (Double Fourier series in conduction). We now consider transient heat con- duction in two dimensions. The problem is stated as follows: ut = α(uxx + uyy ) u(t, 0, y) = u(t, a, y) = u(t, x, 0) = u(t, x, b) = u0 u(0, x, y) = f (x, y) (5.1) That is, the sides of a rectangular area with initial temperature f (x, y) are kept at a constant temperature u0. We first attempt to scale and nondimensionalize the equation and boundary conditions. Note that there are two length scales, a and b. We can choose either, but there will remain an extra parameter, either a/b or b/a in the equation. If we take ξ = x/a and η = y/b then (5.1) can be written as a2 α ut = uξξ + a2 b2 uηη (5.2) The time scale is now chosen as a2 /α and the dimensionless time is τ = αt/a2 . We also choose a new dependent variable U(τ, ξ, η) = (u − u0)/( fmax − u0). The now nondimensionalized system is Uτ = Uξξ + r2 Uηη (5.3) U(τ, 0, η) = U(τ, 1, η) = U(τ, ξ, 0) = U(τ, ξ, 1) = 0 U(0, ξ, η) = ( f − u0)/( fmax − u0) = g(ξ, η)
  • 86. book Mobk070 March 22, 2007 11:7 76 ESSENTIALS OF APPLIED MATHEMATICS FOR SCIENTISTS AND ENGINEERS We now proceed by separating variables. Let U(τ, ξ, η) = T(τ)X(ξ)Y (η) (5.4) Differentiating and inserting into (5.3) and dividing by (5.4) we find T T = X Y + r2 Y X XY (5.5) where the primes indicate differentiation with respect to the variable in question and r = a/b. Since the left-hand side of (5.5) is a function only of τ and the right-hand side is only a function of ξ and η both sides must be constant. If the solution is to be finite in time we must choose the constant to be negative, –λ2 . Replacing T /T by –λ2 and rearranging, −λ2 − X X = r Y Y (5.6) Once again we see that both sides must be constants. How do we choose the signs? It should be clear by now that if either of the constants is positive solutions for X or Y will take the form of hyperbolic functions or exponentials and the boundary conditions on ξ or η cannot be satisfied. Thus, T T = −λ2 (5.7) X X = −β2 (5.8) r2 Y Y = −γ 2 (5.9) Note that X and Y are eigenfunctions of (5.8) and (5.9), which are Sturm–Liouville equations and β and γ are the corresponding eigenvalues. Solutions of (5.7), (5.8), and (5.9) are T = A exp(−λ2 τ) (5.10) X = B1 cos(βξ) + B2 sin(βξ) (5.11) Y = C1 cos(γ η/r) + C2 sin(γ η/r) (5.12) Applying the first homogeneous boundary condition, we see that X(0) = 0, so that B1 = 0. Applying the third homogeneous boundary condition we see that Y (0) = 0, so that C1 = 0. The second homogeneous boundary condition requires that sin(β) = 0, or β = nπ. The last homogeneous boundary condition requires sin(γ/r) = 0, or γ = mπr. According to (5.6), λ2 = β2 + γ 2 . Combining these solutions, inserting into (5.4) we have one solution in the
  • 87. book Mobk070 March 22, 2007 11:7 SOLUTIONS USING FOURIER SERIES AND INTEGRALS 77 form Umn(τ, ξ, η) = Knme−(n2 π2 +m2 π2 r2 )τ sin(nπξ) sin(mπη) (5.13) for all m, n = 1, 2, 3, 4, 5, . . . Superposition now tells us that ∞ n=1 ∞ m=1 Knme−(n2 π2 +m2 π2 r2 )τ sin(nπξ) sin(mπ) (5.14) Using the initial condition g(ξ, η) = ∞ n=1 ∞ m=1 Knm sin(nπξ) sin(mπη) (5.15) We have a double Fourier series, and since both sin(nπξ) and sin(mπη) are members of orthogonal sequences we can multiply both sides by sin(nπξ)sin(mπη)dξdη and integrate over the domains. 1 ξ=0 1 η=0 g(ξ, η) sin(nπξ) sin(mπη)dξdη = Knm 1 ξ=0 1 η=0 sin2 (nπξ)dξ sin2 (mπη)dη = Knm 4 (5.16) Our solution is ∞ n=1 ∞ m=1 4 1 ξ=0 1 η=0 g(ξ, η) sin(nπξ) sin(mπη)dξdη e−(n2 π2 +m2 π2 r2 )τ sin(nπξ) sin(mπη) (5.17) Example 5.2 (A convection boundary condition). Reconsider the problem defined by (2.1) in Chapter 2, but with different boundary and initial conditions, u(t, 0) = u0 = u(0, x) (5.18) kux(t, L) − h[u1 − u(t, L)] = 0 (5.19) The physical problem is a slab with conductivity k initially at a temperature u0 suddenly exposed at x = L to a fluid at temperature u1 through a heat transfer coefficient h while the x = 0 face is maintained at u0.
  • 88. book Mobk070 March 22, 2007 11:7 78 ESSENTIALS OF APPLIED MATHEMATICS FOR SCIENTISTS AND ENGINEERS The length and time scales are clearly the same as the problem in Chapter 2. Hence, τ = tα/L2 and ξ = x/L. If we choose U = (u − u0)/(u1 − u0) we make the boundary condition at x = 0 homogeneous but the condition at x = L is not. We have the same situation that we had in Section 2.3 of Chapter 2. The differential equation, one boundary condition, and the initial condition are homogeneous. Proceeding, we find Uτ = Uξξ U(τ, 0) = U(0, ξ) = 0 Uξ (τ, 1) + B[U(τ, 1) − 1] = 0 (5.20) where B = hL/k. It is useful to relocate the nonhomogeneous condition as the initial condition. As in the previous problem we assume U(τ, ξ) = V (τ, ξ) + W(ξ). Vτ = Vξξ + Wξξ W(0) = 0 Wξ (1) + B[W(1) − 1] = 0 V (τ, 0) = 0 Vξ (τ, 1) + BV (τ, 1) = 0 V (0, ξ) = −W(ξ) (5.21) Set Wξξ = 0. Integrating twice and using the two boundary conditions on W, W(ξ) = Bξ B + 1 (5.22) The initial condition on V becomes V (0, ξ) = −Bξ/(B + 1) . (5.23) Assume V (τ, ξ) = P(τ)Q(ξ), substitute into the partial differential equation for V , and divide by P Q as usual. P P = Q Q = ±λ2 (5.24) We must choose the minus sign for the solution to be bounded. Hence, P = Ae−λ2 τ Q = C1 sin(λξ) + C2 cos(λξ) (5.25)
  • 89. book Mobk070 March 22, 2007 11:7 SOLUTIONS USING FOURIER SERIES AND INTEGRALS 79 FIGURE 5.1: The eigenvalues of λn = −B tan(λn) Applying the boundary condition at ξ = 0, we find that C2 = 0. Now applying the boundary condition on V at ξ = 1, C1λ cos(λ) + C1 B sin(λ) = 0 (5.26) or λ = −B tan(λ) (5.27) This is the equation for determining the eigenvalues, λn. It is shown graphically in Fig. 5.1. Example 5.3 (Superposition of several problems). We’ve seen now that in order to apply separation of variables the partial differential equation itself must be homogeneous and we have also seen a technique for transferring the inhomogeneity to one of the boundary conditions or to the initial condition. But what if several of the boundary conditions are nonhomogeneous? We demonstrate the technique with the following problem. We have a transient two-dimensional problem with given conditions on all four faces. ut = uxx + uyy u(t, 0, y) = f1(y) u(t, a, y) = f2(y) u(t, x, 0) = f3(x) u(t, x, b) = f4(x) u(0, x, y) = g(x, y) (5.28)
  • 90. book Mobk070 March 22, 2007 11:7 80 ESSENTIALS OF APPLIED MATHEMATICS FOR SCIENTISTS AND ENGINEERS The problem can be broken down into five problems. u = u1 + u2 + u3 + u4 + u5. u1t = u1xx + u1yy u1(0, x, y) = g(x, y) (5.29) u1 = 0, all boundaries u2xx + u2yy = 0 u2(0, y) = f1(y) (5.30) u2 = 0 on all other boundaries u3xx + u3yy = 0 u3(a, y) = f2(y) (5.31) u3 = 0 on all other boundaries u4xx + u4yy = 0 u4(x, 0) = f3(x) (5.32) u4 = 0 on all other boundaries u5xx + u5yy = 0 u5(x, b) = f4(x) (5.33) u5 = 0 on all other boundaries 5.1.1 Time-Dependent Boundary Conditions We will explore this topic when we discuss Laplace transforms. Example 5.4 (A finite cylinder). Next we consider a cylinder of finite length 2L and radius r1. As in the first problem in this chapter, there are two possible length scales and we choose r1. The cylinder has temperature u0 initially. The ends at L = ±L are suddenly insulated while the sides are exposed to a fluid at temperature u1. The differential equation with no variation in the θ direction and the boundary conditions are ut = α r (rur )r + uzz uz(t,r, −L) = uz(t,r, +L) = 0 kur (r1) + h[u(r1) − u1(r1)] = 0 u(0,r, z) = u0 u is bounded (5.34)
  • 91. book Mobk070 March 22, 2007 11:7 SOLUTIONS USING FOURIER SERIES AND INTEGRALS 81 If we choose the length scale as r1 then we define η = r/r1, ζ = z/L, and τ = αt/r2 1 . The normalized temperature can be chosen as U = (u − u1)(u0 − u1). With these we find that Uτ = 1 η (ηUη)η + r1 L 2 Uςς Uς (ς = ±1) = 0 Uη(η = 1) + BU(η = 1) = 0 U(τ = 0) = 1 (5.35) where B = hr1/k. Let U = T(τ)R(η)Z(ζ). Insert into the differential equation and divide by U. T T = 1 ηR (ηR ) + r1 L 2 Z Z (5.36) Zς (ς = ±1) = 0 Rη(η = 1) + BR(η = 1) = 0 U(τ = 0) = 1 Again, the dance is the same. The left-hand side of Eq. (5.36) cannot be a function of η or ζ so each side must be a constant. The constant must be negative for the time term to be bounded. Experience tells us that Z /Z must be a negative constant because otherwise Z would be exponential functions and we could not simultaneously satisfy the boundary conditions at ζ = ±1. Thus, we have T = −λ2 T η2 R + ηR + β2 η2 R = 0 Z = −γ 2 L r1 2 Z (5.37) with solutions T = Ae−λ2 t Z = C1 cos(γ Lς/r1) + C2 sin(γ Lς/r1) R = C3 J0(βη) + C4Yo (βη) (5.38) It is clear that C4 must be zero always when the cylinder is not hollow because Y0 is unbounded when η = 0. The boundary conditions at ς = ±1 imply that Z is an even function, so that C2
  • 92. book Mobk070 March 22, 2007 11:7 82 ESSENTIALS OF APPLIED MATHEMATICS FOR SCIENTISTS AND ENGINEERS must be zero. The boundary condition at ζ = 1 is Zζ = −C1(γ L/r1) sin(γ L/r1) = 0, or γ L/r1 = nπ (5.39) The boundary condition at η = 1 requires C3[J0 (β) + BJ0(β)] = 0 or BJ0(β) = β J1(β) (5.40) which is the transcendental equation for finding βm. Also note that λ2 = γ 2 n + β2 m (5.41) By superposition we write the final form of the solution as U(τ, η, ς) = ∞ n=0 ∞ m=0 Knme−(γ 2 n +β2 m)τ J0(βmη) cos(nπς) (5.42) Knm is found using the orthogonality properties of J0(βmη) and cos(nπζ) after using the initial condition. 1 r=0 r J0(βmη)dη 1 ς=−1 cos(nπς)dς = Knm 1 r=0 r J 2 0 (βmη)dη 1 ς=−1 cos2 (nπς)dς (5.43) Example 5.5 (Heat transfer in a sphere). Consider heat transfer in a solid sphere whose surface temperature is a function of θ, the angle measured downward from the z-axis (see Fig. 1.3, Chapter 1). The problem is steady and there is no heat source. r ∂2 ∂r2 (ru) + 1 sin θ ∂ ∂θ sin θ ∂u ∂θ = 0 u(r = 1) = f (θ) u is bounded (5.44) Substituting x = cos θ, r ∂2 ∂r2 (ru) + ∂ ∂x (1 − x2 ) ∂u ∂x = 0 (5.45) We separate variables by assuming u = R(r)X(x). Substitute into the equation, divide by RX and find r R (r) = − [(1 − x2 )X ] X = ±λ2 (5.46)
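The transcendental equation (5.40) for the β_m must be solved numerically. A sketch assuming SciPy (the function name is ours); since f(β) = B J0(β) − β J1(β) changes sign between consecutive zeros of J0, each such interval brackets exactly one root:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import j0, j1, jn_zeros

def radial_eigenvalues(B, n_roots=5):
    """Roots of B*J0(beta) = beta*J1(beta), Eq. (5.40), for Biot number B > 0.
    Brackets are the intervals between consecutive zeros of J0."""
    f = lambda b: B * j0(b) - b * j1(b)
    brackets = np.concatenate(([1e-9], jn_zeros(0, n_roots)))
    return [brentq(f, brackets[k] + 1e-9, brackets[k + 1] - 1e-9)
            for k in range(n_roots)]

print(radial_eigenvalues(1.0))   # first root ~1.2558 for B = 1
```

The eigenvalue condition (5.27) of Example 5.2, λ cos λ + B sin λ = 0, can be solved the same way, bracketing each root between ((k − 1/2)π, kπ).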
  • 93. book Mobk070 March 22, 2007 11:7 SOLUTIONS USING FOURIER SERIES AND INTEGRALS 83 or r(r R) ∓ λ2 R = 0 (5.47) [(1 − x2 )X ] ± λ2 X = 0 The second of these is Legendre’s equation, and we have seen that it has bounded solutions at r = 1 when ±λ2 = n(n + 1). The first equation is of the Cauchy–Euler type with solution R = C1rn + C2r−n−1 (5.48) Noting that the constant C2 must be zero to obtain a bounded solution at r = 0, and using superposition, u = ∞ n=0 Knrn Pn(x) (5.49) and using the condition at f (r = 1) and the orthogonality of the Legendre polynomial π θ=0 f (θ)Pn(cos θ)dθ = π θ=0 Kn P2 n (cos θ)dθ = 2Kn 2n + 1 (5.50) Kn = 2n + 1 2 π θ=0 f (θ)Pn(cos θ)dθ (5.51) 5.2 VIBRATIONS PROBLEMS We now consider some vibrations problems. In Chapter 2 we found a solution for a vibrating string initially displaced. We now consider the problem of a string forced by a sine function. Example 5.6 (Resonance in a vibration problem). Equation (1.21) in Chapter 1 is ytt = a2 yxx + A sin(ηt) (5.52) Select a length scale as L, the length of the string, and a time scale L/a and defining ξ = x/L and τ = ta/L, yττ = yξξ + C sin(ωτ) (5.53) where ω is a dimensionless frequency, ηL/a and C = AL2 a2 . The boundary conditions and initial velocity and displacement are all zero, so the bound- ary conditions are all homogeneous, while the differential equation is not. Back in Chapter 2 we
  • 94. book Mobk070 March 22, 2007 11:7 84 ESSENTIALS OF APPLIED MATHEMATICS FOR SCIENTISTS AND ENGINEERS saw one way of dealing with this. Note that it wouldn’t have worked had q been a function of time. We approach this problem somewhat differently. From experience, we expect a solution of the form y(ξ, τ) = ∞ n=1 Bn(τ) sin(nπξ) (5.54) where the coefficients Bn(τ) are to be determined. Note that the equation above satisfies the end conditions. Inserting this series into the differential equation and using the Fourier sine series of C C = ∞ n=1 2C[1 − (−1)n ] nπ sin(nπξ) (5.55) ∞ n=1 Bn (τ) sin(nπξ) = ∞ n=1 [−(nπ)2 Bn(τ)] sin(nπξ) + C ∞ n=1 2[1 − (−1)n ] nπ sin(nπξ) sin( τ) (5.56) Thus Bn = −(nπ)2 Bn + C 2[1 − (−1)n ] nπ sin( τ) (5.57) subject to initial conditions y = 0 and yτ = 0 at τ = 0. When n is even the solution is zero. That is, since the right-hand side is zero when n is even, Bn = C1 cos(nπτ) + C2 sin(nπτ) (5.58) But since both Bn(0) and Bn(0) are zero, C1 = C2 = 0. When n is odd we can write B2n−1 + [(2n − 1)π]2 B2n−1 = 4C (2n − 1)π sin(ωτ) (5.59) (2n − 1)π is the natural frequency of the system, ωn. The homogeneous solution of the above equation is B2n−1 = D1 cos(ωnτ) + D2 sin(ωnτ) . (5.60) To obtain the particular solution we assume a solution in the form of sines and cosines. BP = E1 cos(ωτ) + E2 sin(ωτ) (5.61) Differentiating and inserting into the differential equation we find −E1ω2 cos(ωτ) − E2ω2 sin(ωτ) + ω2 n[E1 cos(ωτ) + E2 sin(ωτ)] = 4C ωn sin(ωτ) (5.62)
  • 95. book Mobk070 March 22, 2007 11:7 SOLUTIONS USING FOURIER SERIES AND INTEGRALS 85 Equating coefficients of sine and cosine terms E1(ω2 n − ω2 ) cos(ωτ) = 0 ω = ωn E2(ω2 n − ω2 ) sin(ωτ) = 4C ωn sin(ωτ) (5.63) Thus E1 = 0 E2 = 4C ωn(ω2 n − ω2) ω = ωn (5.64) Combining the homogeneous and particular solutions B2n−1 = D1 cos(ωnτ) + D2 sin(ωnτ) + 4C ωn(ω2 n − ω2) sin(ωτ) (5.65) The initial conditions at τ = 0 require that D1 = 0 D2 = − 4C(ω/ωn) ωn(ω2 n − ω2) (5.66) The solution for B2n−1 is B2n−1 = 4C ωn(ω2 − ω2 n) ω ωn sin(ωnτ) − sin(ωτ) , ω = ωn (5.67) The solution is therefore y(ξ, τ) = 4C ∞ n=1 sin(ωnξ) ωn(ω2 − ω2 n) ω ωn sin(ωnτ) − sin(ωτ) (5.68) When ω = ωn the above is not valid. The form of the particular solution should be chosen as BP = E1τ cos(ωτ) + E2τ sin(ωτ) (5.69) Differentiating and inserting into the differential equation for B2n−1 [E1τω2 n + 2E2ωn − E1τω2 n] cos(ωnτ) + [E2τω2 n − E2τω2 n − 2E1ωn] sin(ωnτ) = 4C ωn sin(ωnτ) (5.70) Thus E2 = 0 E1 = − 4C 2ω2 n (5.71)
  • 96. book Mobk070 March 22, 2007 11:7 86 ESSENTIALS OF APPLIED MATHEMATICS FOR SCIENTISTS AND ENGINEERS and the solution when ω = ωn is B2n−1 = C1 cos(ωnτ) + C2 sin(ωnτ) − 2C ω2 n τ cos(ωnτ) (5.72) The initial condition on position implies that C1 = 0. The initial condition that the initial velocity is zero gives ωnC2 − 2C ω2 n = 0 (5.73) The solution for B2n−1 is B2n−1 = 2C ω3 n [sin(ωnτ) − ωnτ cos(ωnτ)] (5.74) Superposition now gives y(ξ, τ) = ∞ n=1 2C ω3 n sin(ωnξ)[sin(ωnτ) − ωnτ cos(ωnτ)] (5.75) An interesting feature of the solution is that there are an infinite number of natural frequencies, η = a L [π, 3π, 5π, . . . , (2n − 1)π, . . .] (5.76) If the system is excited at any of the frequencies, the magnitude of the oscillation will grow (theoretically) without bound. The smaller natural frequencies will cause the growth to be fastest. Example 5.7 (Vibration of a circular membrane). Consider now a circular membrane (like a drum). The partial differential equation describing the displacement y(t,r, θ) was derived in Chapter 1. a−2 ∂2 y ∂t2 = 1 r ∂ ∂r r ∂y ∂r + 1 r2 ∂2 y ∂θ2 (5.77) Suppose it has an initial displacement of y(0,r, θ) = f (r, θ) and the velocity yt = 0. The displacement at r = r1 is also zero and the displacement must be finite for all r, θ, and t. The length scale is r1 and the time scale is r1/a.r/r1 = η and ta/r1 = τ. We have ∂2 y ∂τ2 = 1 η ∂ ∂η η ∂y ∂η + 1 η2 ∂2 y ∂θ2 (5.78)
  • 97. book Mobk070 March 22, 2007 11:7 SOLUTIONS USING FOURIER SERIES AND INTEGRALS 87 Separation of variables as y = T(τ)R(η)S(θ), substituting into the equation and dividing by TRS, T T = 1 ηR (ηR ) + 1 η2 S S = −λ2 (5.79) The negative sign is because we anticipate sine and cosine solutions for T. We also note that λ2 η2 + η R (ηR ) = − S S = ±β2 (5.80) To avoid exponential solutions in the θ direction we must choose the positive sign. Thus we have T = −λ2 T S = −β2 S (5.81) η(ηR ) + (η2 λ2 − β2 )R = 0 The solutions of the first two of these are T = A1 cos(λτ) + A2 sin(λτ) (5.82) S = B1 cos(βθ) + B2 sin(βθ) The boundary condition on the initial velocity guarantees that A2 = 0. β must be an integer so that the solution comes around to the same place after θ goes from 0 to 2π. Either B1 and B2 can be chosen zero because it doesn’t matter where θ begins (we can adjust f (r, θ)). T(τ)S(θ) = AB cos(λτ) sin(nθ) (5.83) The differential equation for R should be recognized from our discussion of Bessel functions. The solution with β = n is the Bessel function of the first kind order n. The Bessel function of the second kind may be omitted because it is unbounded at r = 0. The condition that R(1) = 0 means that λ is the mth root of Jn(λmn) = 0. The solution can now be completed using superposition and the orthogonality properties. y(τ, η, θ) = ∞ n=0 ∞ m=1 Knm Jn(λmnη) cos(λmnτ) sin(nθ) (5.84) Using the initial condition f (η, θ) = ∞ n=0 ∞ m=1 Knm Jn(λmnη) sin(nθ) (5.85)
  • 98. book Mobk070 March 22, 2007 11:7 88 ESSENTIALS OF APPLIED MATHEMATICS FOR SCIENTISTS AND ENGINEERS and the orthogonality of sin(nθ) and Jn(λmnη) 2π θ=0 1 η=0 f (η, θ)ηJn(λmnη) sin(nθ)dθdη = Knm 2π θ=0 sin2 (nθ)dθ 1 r=0 ηJ 2 n (λmnη)dη = Knm 4 J 2 n+1(λmn) (5.86) Knm = 4 J 2 n+1(λnm) 2π θ=0 1 η=0 f (η, θ)ηJn(λnmη) sin(nθ)dθdη (5.87) Problems 1. The conduction equation in one dimension is to be solved subject to an insulated surface at x = 0 and a convective boundary condition at x = L. Initially the temperature is u(0, x) = f (x), a function of position. Thus ut = αuxx ux(t, 0) = 0 kux(t, L) = −h[u(t, L) − u1] u(0, x) = f (x) First nondimensionalize and normalize the equations. Then solve by separation of variables. Find a specific solution when f (x) = 1 − x2 . 2. Consider the diffusion problem ut = αuxx + q(x) ux(t, 0) = 0 ux(t, L) = −h[u(t, L) − u1] u(0, x) = u1 Define time and length scales and define a u scale such that the initial value of the dependent variable is zero. Solve by separation of variables and find a specific solution for q(x) = Q, a constant. Refer to Problem 2.1 in Chapter 2.
3. Solve the steady-state conduction problem
u_xx + u_yy = 0
u_x(0, y) = 0
u(a, y) = u_0
u(x, 0) = u_1
u_y(x, b) = -h[u(x, b) - u_1]
Note that one could choose as a length scale either a or b. Choose a. Note that if you choose
U = \frac{u - u_1}{u_0 - u_1}
there is only one nonhomogeneous boundary condition and it is normalized. Solve by separation of variables.

5.3 FOURIER INTEGRALS
We consider now problems in which one dimension of the domain is infinite in extent. Recall that a function defined on an interval (-c, c) can be represented as a Fourier series

f(x) = \frac{1}{2c} \int_{\varsigma=-c}^{c} f(\varsigma) d\varsigma + \frac{1}{c} \sum_{n=1}^{\infty} \left[ \int_{\varsigma=-c}^{c} f(\varsigma) \cos\frac{n\pi\varsigma}{c} d\varsigma \right] \cos\frac{n\pi x}{c} + \frac{1}{c} \sum_{n=1}^{\infty} \left[ \int_{\varsigma=-c}^{c} f(\varsigma) \sin\frac{n\pi\varsigma}{c} d\varsigma \right] \sin\frac{n\pi x}{c}   (5.88)

which can be expressed using trigonometric identities as

f(x) = \frac{1}{2c} \int_{\varsigma=-c}^{c} f(\varsigma) d\varsigma + \frac{1}{c} \sum_{n=1}^{\infty} \int_{\varsigma=-c}^{c} f(\varsigma) \cos\left[\frac{n\pi}{c}(\varsigma - x)\right] d\varsigma   (5.89)

We now formally let c approach infinity. If \int_{\varsigma=-\infty}^{\infty} f(\varsigma) d\varsigma exists, the first term vanishes. Let \Delta\alpha = \pi/c. Then

f(x) = \frac{2}{\pi} \sum_{n=1}^{\infty} \int_{\varsigma=0}^{c} f(\varsigma) \cos[n\Delta\alpha(\varsigma - x)] d\varsigma \, \Delta\alpha   (5.90)

or, with

g_c(n\Delta\alpha, x) = \int_{\varsigma=0}^{c} f(\varsigma) \cos[n\Delta\alpha(\varsigma - x)] d\varsigma   (5.91)

we have

f(x) = \frac{2}{\pi} \sum_{n=1}^{\infty} g_c(n\Delta\alpha, x) \Delta\alpha   (5.92)

As c approaches infinity we can imagine that \Delta\alpha approaches d\alpha and n\Delta\alpha approaches the continuous variable \alpha, whereupon the equation for f(x) becomes an integral expression

f(x) = \frac{2}{\pi} \int_{\alpha=0}^{\infty} \int_{\varsigma=0}^{\infty} f(\varsigma) \cos[\alpha(\varsigma - x)] d\varsigma \, d\alpha   (5.93)

which can alternatively be written as

f(x) = \int_{\alpha=0}^{\infty} [A(\alpha) \cos \alpha x + B(\alpha) \sin \alpha x] d\alpha   (5.94)

where

A(\alpha) = \frac{2}{\pi} \int_{\varsigma=0}^{\infty} f(\varsigma) \cos \alpha\varsigma \, d\varsigma   (5.95)

and

B(\alpha) = \frac{2}{\pi} \int_{\varsigma=0}^{\infty} f(\varsigma) \sin \alpha\varsigma \, d\varsigma   (5.96)

Example 5.8 (Transient conduction in a semi-infinite region). Consider the boundary value problem

u_t = u_xx (x \ge 0, t \ge 0)
u(0, t) = 0   (5.97)
u(x, 0) = f(x)

This represents transient heat conduction with an initial temperature f(x) and the boundary at x = 0 suddenly reduced to zero. Separation of variables as T(t)X(x) would normally yield a
  • 101. book Mobk070 March 22, 2007 11:7 SOLUTIONS USING FOURIER SERIES AND INTEGRALS 91 solution of the form Bn exp(−λ2 t) sin λx c (5.98) for a region of x on the interval (0, c ). Thus, for x on the interval 0 ≤ x ≤ ∞ we have B(α) = 2 π ∞ ς=0 f (ς) sin ας dς (5.99) and the solution is u(x, t) = 2 π ∞ λ=0 exp(−λ2 t) sin(λx) ∞ s =0 f (s ) sin(λs )ds d α (5.100) Noting that 2 sin α s sin α x = cos α (s − x) − cos α(s + x) (5.101) and that ∞ 0 exp(−γ 2 α) cos(γ b)dγ = 1 2 π α exp − b2 4α (5.102) we have u(x, t) = 1 2 √ πt ∞ 0 f (s ) exp − (s − x)2 4t − exp − (s + x)2 4t ds (5.103) Substituting into the first of these integrals σ2 = (s −x)2 4t and into the second integral σ2 = (s + x)2 4t (5.104) u(x, t) = 1 √ π ∞ −x/2 √ t f (x + 2σ √ t)e−σ2 d σ − 1 √ π ∞ x/2 √ t f (−x + 2σ √ t)e−σ2 d σ (5.105)
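The result (5.105) can be checked by direct numerical quadrature; a sketch (the function name, truncation limit, and panel count are our choices). For a constant initial temperature it reproduces the closed form u0 erf(x/(2√t)) of Eq. (5.106):

```python
import numpy as np
from math import erf

def u_semi_infinite(x, t, f, s_max=30.0, n=100000):
    """Midpoint-rule evaluation of the two integrals in Eq. (5.105)."""
    a = x / (2.0 * np.sqrt(t))
    def quad(lo, g):
        h = (s_max - lo) / n
        s = lo + (np.arange(n) + 0.5) * h
        return np.sum(g(s) * np.exp(-s**2)) * h
    i1 = quad(-a, lambda s: f(x + 2.0 * s * np.sqrt(t)))
    i2 = quad(+a, lambda s: f(-x + 2.0 * s * np.sqrt(t)))
    return (i1 - i2) / np.sqrt(np.pi)

# Constant initial temperature f = 1 recovers erf(x / (2 sqrt(t))):
print(u_semi_infinite(1.0, 1.0, np.ones_like), erf(0.5))   # both ~0.5205
```

The integrand decays like exp(−σ²), so truncating at s_max = 30 introduces no visible error.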
  • 102. book Mobk070 March 22, 2007 11:7 92 ESSENTIALS OF APPLIED MATHEMATICS FOR SCIENTISTS AND ENGINEERS In the special case where f (x) = u0 u(x, t) = 2u0 √ π x/2 √ t 0 exp(−σ2 )dσ = u0 erf x 2 √ t (5.106) where erf(p) is the Gauss error function defined as erf(p) = 2 √ π p 0 exp(−σ2 )dσ (5.107) Example 5.9 (Steady conduction in a quadrant). Next we consider steady conduction in the region x ≥ 0, y ≥ 0 in which the face at x = 0 is kept at zero temperature and the face at y = 0 is a function of x: u = f (x). The solution is also assumed to be bounded. uxx + uyy = 0 (5.108) u(x, 0) = f (x) (5.109) u(0, y) = 0 (5.110) Since u(0, y) = 0 the solution should take the form e−αy sin α x, which is, according to our experience with separation of variables, a solution of the equation ∇2 u = 0. We therefore assume a solution of the form u(x, y) = ∞ 0 B(α)e−α y sin α xdα (5.111) with B(α) = 2 π ∞ 0 f (ς) sinας dς (5.112) The solution can then be written as u(x, y) = 2 π ∞ ς=0 f (ς) ∞ α=0 e−α y sin α x sin α ς d α d ς (5.113) Using the trigonometric identity for 2 sin ax sin aς = cos a(ς − x) − cos a(ς + x) and noting that ∞ 0 e−α y cos aβ d α = y β2 + y2 (5.114)
  • 103. book Mobk070 March 22, 2007 11:7 SOLUTIONS USING FOURIER SERIES AND INTEGRALS 93 we find u(x, y) = y π ∞ 0 f (ς) 1 (ς − x)2 + y2 − 1 (ς + x)2 + y2 d ς (5.115) Problem Consider the transient heat conduction problem ut = uxx + uyy x ≥ 0, 0 ≤ y ≤ 1, t ≥ 0 with boundary and initial conditions u(t, 0, y) = 0 u(t, x, 0) = 0 u(t, x, 1) = 0 u(0, x, y) = u0 and u(t, x, y) is bounded. Separate the problem into two problems u(t, x, y) = v(t, x)w(t, y) and give appropriate boundary conditions. Show that the solution is given by u(t, x, y) = 4 π erf x 2 √ t ∞ n=1 sin(2n − 1)πy 2n − 1 exp[−(2n − 1)2 π2 t] FURTHER READING V. Arpaci, Conduction Heat Transfer. Reading, MA: Addison-Wesley, 1966. J. W. Brown and R. V. Churchill, Fourier Series and Boundary Value Problems. 6th edition. New York: McGraw-Hill, 2001.
C H A P T E R 6

Integral Transforms: The Laplace Transform

Integral transforms are a powerful method of obtaining solutions to both ordinary and partial differential equations. They are used to change ordinary differential equations into algebraic equations and partial differential equations into ordinary differential equations. The general idea is to multiply a function f(t) of some independent variable t (not necessarily time) by a kernel function K(t, s) and integrate over the t space to obtain a function F(s) of s that one hopes is easier to work with. Of course one must then invert the process to recover the desired function f(t). In general,

F(s) = \int_{t=a}^{b} K(t, s) f(t) dt   (6.1)

6.1 THE LAPLACE TRANSFORM
A useful and widely used integral transform is the Laplace transform, defined as

L[f(t)] = F(s) = \int_{t=0}^{\infty} f(t) e^{-st} dt   (6.2)

Obviously, the integral must exist. The function f(t) must be sectionally continuous and of exponential order, which is to say |f(t)| \le M e^{kt} when t > 0 for some constants M and k. For example, neither the Laplace transform of t^{-1} nor that of \exp(t^2) exists. The inversion formula is

L^{-1}[F(s)] = f(t) = \frac{1}{2\pi i} \lim_{L \to \infty} \int_{\gamma - iL}^{\gamma + iL} F(s) e^{ts} ds   (6.3)
in which the path of integration is the vertical line from \gamma - iL to \gamma + iL in the complex plane. We will put off using the inversion integral until we cover complex variables. Meanwhile, there are many tables giving Laplace transforms and inverses. We will now spend considerable time developing the theory.

6.2 SOME IMPORTANT TRANSFORMS
6.2.1 Exponentials
First consider the exponential function:

L[e^{-at}] = \int_{t=0}^{\infty} e^{-at} e^{-st} dt = \int_{t=0}^{\infty} e^{-(s+a)t} dt = \frac{1}{s + a}   (6.4)

If a = 0, this reduces to

L[1] = 1/s   (6.5)

6.2.2 Shifting in the s-domain

L[e^{at} f(t)] = \int_{t=0}^{\infty} e^{-(s-a)t} f(t) dt = F(s - a)   (6.6)

6.2.3 Shifting in the time domain
Consider a function g(t) defined as

g(t) = 0, t < a
g(t) = f(t - a), t > a   (6.7)

Then

\int_{\tau=0}^{\infty} e^{-s\tau} g(\tau) d\tau = \int_{\tau=0}^{a} 0 \, d\tau + \int_{\tau=a}^{\infty} e^{-s\tau} f(\tau - a) d\tau   (6.8)

Let \tau - a = t. Then

\int_{t=0}^{\infty} e^{-s(t+a)} f(t) dt = F(s) e^{-as} = L[g(t)]   (6.9)

the transform of the shifted function described above.
  • 107. book Mobk070 March 22, 2007 11:7 INTEGRAL TRANSFORMS: THE LAPLACE TRANSFORM 97 6.2.4 Sine and cosine Now consider the sine and cosine functions. We shall see in the next chapter (and you should already know) that eikt = cos(kt) + i sin(kt) (6.10) Thus the Laplace transform is L[eikt ] = L[cos(kt)] + iL[sin(kt)] = 1 s − ik = s + ik (s + ik)(s − ik) = s s 2 + k2 + i k s 2 + k2 (6.11) so L[sin(kt)] = k s 2 + k2 (6.12) L[cos(kt)] = s s 2 + k2 (6.13) 6.2.5 Hyperbolic functions Similarly for hyperbolic functions L[sinh(kt)] = L 1 2 (ekt − e−kt ) = 1 2 1 s − k − 1 s + k = k s 2 − k2 (6.14) Similarly, L[cosh(kt)] = s s 2 − k2 (6.15) 6.2.6 Powers of t: tm We shall soon see that the Laplace transform of tm is L[tm ] = (m + 1) s m+1 m > −1 (6.16) Using this together with the s domain shifting results, L[tm e−at ] = (m + 1) (s + a)m+1 (6.17) Example 6.1. Find the inverse transform of the function F(s ) = 1 (s − 1)3
  • 108. book Mobk070 March 22, 2007 11:7 98 ESSENTIALS OF APPLIED MATHEMATICS FOR SCIENTISTS AND ENGINEERS This is a function that is shifted in the s -domain and hence Eq. (6.6) is applicable. Noting that L−1 (1/s 3 ) = t2 / (3) = t2 /2 from Eq. (6.16) f (t) = t2 2 et Or we could use Eq. (6.17) directly. Example 6.2. Find the inverse transform of the function F(s ) = 3 s 2 + 4 e−s The inverse transform of F(s ) = 2 s 2 + 4 is, according to Eq. (6.11) f (t) = 3 2 sin(2t) The exponential term implies shifting in the time domain by 1. Thus f (t) = 0, t < 1 = 3 2 sin[2(t − 1)], t > 1 Example 6.3. Find the inverse transform of F(s ) = s (s − 2)2 + 1 The denominator is shifted in the s -domain. Thus we shift the numerator term and write F(s ) as two terms F(s ) = s − 2 (s − 2)2 + 1 + 2 (s − 2)2 + 1 Equations (6.6), (6.12), and (6.13) are applicable. The inverse transform of the first of these is a shifted cosine and the second is a shifted sine. Therefore each must be multiplied by exp(2t). The inverse transform is f (t) = e2t cos(t) + 2e2t sin(t)
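Inverse-transform examples like these can be verified in the forward direction: transform the claimed f(t) numerically and compare with the given F(s). The Python sketch below (illustrative, not from the text) does this for Example 6.3; any s > 2 keeps the integrand decaying.

```python
import math

def laplace_numeric(f, s, T=40.0, n=400_000):
    """Trapezoidal approximation of the Laplace integral on [0, T]."""
    h = T / n
    total = 0.5 * (f(0.0) + f(T) * math.exp(-s * T))
    for i in range(1, n):
        t = i * h
        total += f(t) * math.exp(-s * t)
    return total * h

# Example 6.3 claims f(t) = e^{2t} cos t + 2 e^{2t} sin t  <->  s/((s-2)^2 + 1)
f = lambda t: math.exp(2.0 * t) * (math.cos(t) + 2.0 * math.sin(t))
s = 4.0                                   # any s > 2 gives a convergent integral
approx = laplace_numeric(f, s)
exact = s / ((s - 2.0) ** 2 + 1.0)
```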
  • 109. book Mobk070 March 22, 2007 11:7 INTEGRAL TRANSFORMS: THE LAPLACE TRANSFORM 99 1 k t FIGURE 6.1: The Heaviside step 6.2.7 Heaviside step A frequently useful function is the Heaviside step function, defined as Uk(t) = 0 0 < t < k (6.18) = 1 k < t It is shown in Fig. 6.1. The Laplace transform is L[Uk(t)] = ∞ t=k e−s t dt = 1 s e−ks (6.19) The Heaviside step (sometimes called the unit step) is useful for finding the Laplace transforms of periodic functions. Example 6.4 (Periodic functions). For example, consider the periodic function shown in Fig. 6.2. It can be represented by an infinite series of shifted Heaviside functions as follows: f (t) = U0 − 2Uk + 2U2k − 2U3k + · · · = U0 + ∞ n=1 (−1)n 2Unk (6.20) 1 -1 FIGURE 6.2: A periodic square wave
FIGURE 6.3: The Dirac delta function

The Laplace transform is found term by term,

L[f(t)] = (1/s){1 − 2e^{-sk}[1 − e^{-sk} + e^{-2sk} − e^{-3sk} + · · ·]} = (1/s)[1 − 2e^{-sk}/(1 + e^{-sk})] = (1/s)(1 − e^{-sk})/(1 + e^{-sk}) (6.21)

6.2.8 The Dirac delta function

Consider a function defined by

δ(t0) = lim_{h→0} (U_{t0} − U_{t0−h})/h (6.22)

L[δ(t0)] = e^{-st0} (6.23)

The function, without taking limits, is shown in Fig. 6.3.

6.2.9 Transforms of derivatives

L[df/dt] = ∫_{t=0}^{∞} (df/dt) e^{-st} dt = ∫_{t=0}^{∞} e^{-st} df (6.24)

and integrating by parts

L[df/dt] = [f(t) e^{-st}]_{0}^{∞} + s ∫_{t=0}^{∞} f(t) e^{-st} dt = s F(s) − f(0)

To find the Laplace transform of the second derivative we let g(t) = f′(t). Taking the Laplace transform,

L[g′(t)] = s G(s) − g(0)
  • 111. book Mobk070 March 22, 2007 11:7 INTEGRAL TRANSFORMS: THE LAPLACE TRANSFORM 101 and with G(s ) = L[ f (t)] = s F(s ) − f (0) we find that L d2 f dt2 = s 2 F(s ) − s f (0) − f (0) (6.25) In general L dn f dtn = s n F(s ) − s n−1 f (0) − s n−2 f (0) − · · · − dn−1 f dtn−1 (0) (6.26) The Laplace transform of tm may be found by using the gamma function, L[tm ] = ∞ 0 tm e−s t dt and let x = s t (6.27) L[tm ] = ∞ x=0 x s m e−x dx s = 1 s m+1 ∞ x=0 xm e−x dx = (m + 1) s m+1 (6.28) which is true for all m > −1 even for nonintegers. 6.2.10 Laplace Transforms of Integrals L   t τ=0 f (τ)dτ   = L[g(t)] (6.29) where dg/dt = f (t). Thus L[dg/dt] = s L[g(t)]. Hence L   t τ=0 f (τ)dτ   = 1 s F(s ) (6.30) 6.2.11 Derivatives of Transforms F(s ) = ∞ t=0 f (t)e−s t dt (6.31)
  • 112. book Mobk070 March 22, 2007 11:7 102 ESSENTIALS OF APPLIED MATHEMATICS FOR SCIENTISTS AND ENGINEERS so d F ds = − ∞ t=0 t f (t)e−s t dt (6.32) and in general dn F ds n = L[(−t)n f (t)] (6.33) For example L[t sin(kt)] = − d ds k s 2 + k2 = 2s k (s 2 + k2)2 (6.34) 6.3 LINEAR ORDINARY DIFFERENTIAL EQUATIONS WITH CONSTANT COEFFICIENTS Example 6.5. A homogeneous linear ordinary differential equation Consider the differential equation y + 4y + 3y = 0 y(0) = 0 y (0) = 2 (6.35) L[y ] = s 2 Y − s y(0) − y (0) = s 2 Y − 2 (6.36) L[y ] = s Y − y(0) = s Y (6.37) Therefore (s 2 + 4s + 3)Y = 2 (6.38) Y = 2 (s + 1)(s + 3) = A s + 1 + B s + 3 (6.39) To solve for A and B, note that clearing fractions, A(s + 3) + B(s + 1) (s + 1)(s + 3) = 2 (s + 1)(s + 3) (6.40) Equating the numerators, or A + B = 0 3A + B = 2 : A = 1 B = −1 (6.41)
  • 113. book Mobk070 March 22, 2007 11:7 INTEGRAL TRANSFORMS: THE LAPLACE TRANSFORM 103 and from Eq. (6.8) Y = 1 s + 1 − 1 s + 3 y = e−t − e−3t (6.42) 6.4 SOME IMPORTANT THEOREMS 6.4.1 Initial Value Theorem lim s →∞ ∞ t=0 f (t)e−s t dt = s F(s ) − f (0) = 0 (6.43) Thus lim s →∞ s F(s ) = lim t→0 f (t) (6.44) 6.4.2 Final Value Theorem As s approaches zero the above integral approaches the limit as t approaches infinity minus f (0). Thus lim s F(s ) = lim f (t) s → 0 t → ∞ (6.45) 6.4.3 Convolution A very important property of Laplace transforms is the convolution integral. As we shall see later, it allows us to write down solutions for very general forcing functions and also, in the case of partial differential equations, to treat both time dependent forcing and time dependent boundary conditions. Consider the two functions f (t) and g(t). F(s ) = L[ f (t)] and G(s ) = L[g(t)]. Because of the time shifting feature, e−s τ G(s ) = L[g(t − τ)] = ∞ t=0 e−s t g(t − τ)dt (6.46) F(s )G(s ) = ∞ τ=0 f (τ)e−s τ G(s )dτ (6.47)
  • 114. book Mobk070 March 22, 2007 11:7 104 ESSENTIALS OF APPLIED MATHEMATICS FOR SCIENTISTS AND ENGINEERS But e−s τ G(s ) = ∞ t=0 e−s t g(t − τ)dt (6.48) so that F(s )G(s ) = ∞ t=0 e−s t t τ=0 f (τ)g(t − τ)dτ dt (6.49) where we have used the fact that g(t − τ) = 0 when τ > t. The inverse transform of F(s )G(s ) is L−1 [F(s )G(s )] = t τ=0 f (τ)g(t − τ)dτ (6.50) 6.5 PARTIAL FRACTIONS In the example differential equation above we determined two roots of the polynomial in the denominator, then separated the two roots so that the two expressions could be inverted in forms that we already knew. The method of separating out the expressions 1/(s + 1) and 1/(s + 3) is known as the method of partial fractions. We now develop the method into a more user friendly form. 6.5.1 Nonrepeating Roots Suppose we wish to invert the transform F(s ) = p(s )/q(s ), where p(s ) and q(s ) are polynomi- als. We first note that the inverse exists if the degree of p(s ) is lower than that of q(s ). Suppose q(s ) can be factored and a nonrepeated root is a. F(s ) = φ(s ) s − a (6.51) According to the theory of partial fractions there exists a constant C such that φ(s ) s − a = C s − a + H(s ) (6.52) Multiply both sides by (s − a) and take the limit as s → a and the result is C = φ(a) (6.53)
  • 115. book Mobk070 March 22, 2007 11:7 INTEGRAL TRANSFORMS: THE LAPLACE TRANSFORM 105 Note also that the limit of p(s ) s − a q(s ) (6.54) as s approaches a is simply p(s )/q (s ). If q(s ) has no repeated roots and is of the form q(s ) = (s − a1)(s − a2)(s − a3) · · · (s − an) (6.55) then L−1 p(s ) q(s ) = n m=1 p(am) q (am) eamt (6.56) Example 6.6. Find the inverse transform of F(s ) = 4s + 1 (s 2 + s )(4s 2 − 1) First separate out the roots of q(s ) q(s ) = 4s (s + 1)(s + 1/2)(s − 1/2) q(s ) = 4s 4 + 4s 3 − s 2 − s q (s ) = 16s 3 + 12s 2 − 2s − 1 Thus q (0) = −1 p(0) = 1 q (−1) = −3 p(−1) = −3 q (−1/2) = 1 p(−1/2) = −1 q (1/2) = 3 p(1/2) = 3 f (t) = e−t − e−t/2 + et/2 − 1 Example 6.7. Solve the differential equation y − y = 1 − e3t subject to initial conditions y (0) = y(0) = 0
Taking the Laplace transform,

(s² − 1)Y = 1/s − 1/(s − 3)

Y(s) = 1/[s(s² − 1)] − 1/[(s − 3)(s² − 1)] = 1/[s(s + 1)(s − 1)] − 1/[(s − 3)(s + 1)(s − 1)]

First find the inverse transform of the first term.

q = s³ − s, q′ = 3s² − 1
q′(0) = −1, p(0) = 1
q′(1) = 2, p(1) = 1
q′(−1) = 2, p(−1) = 1

The inverse transform is −1 + (1/2)e^t + (1/2)e^{-t}.

Next consider the second term, whose roots are at s = 3, 1, and −1.

q = s³ − 3s² − s + 3, q′ = 3s² − 6s − 1
q′(3) = 8, p(3) = 1
q′(1) = −4, p(1) = 1
q′(−1) = 8, p(−1) = 1

The inverse transform is (1/8)e^{3t} − (1/4)e^t + (1/8)e^{-t}. Subtracting this from the inverse of the first term,

y(t) = (3/4)e^t + (3/8)e^{-t} − (1/8)e^{3t} − 1
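The Heaviside expansion of Eq. (6.56) is easy to test by machine. The sketch below (a hypothetical transform chosen for illustration, not taken from the text) builds f(t) from the residues p(a_m)/q′(a_m) and then transforms it numerically to confirm the round trip:

```python
import math

# Hypothetical example: F(s) = (s+2)/((s+1)(s+3)),
# with q(s) = s^2 + 4s + 3 having simple roots at s = -1 and s = -3.
p = lambda s: s + 2.0
q = lambda s: (s + 1.0) * (s + 3.0)
qprime = lambda s: 2.0 * s + 4.0
roots = [-1.0, -3.0]

def f(t):
    """Heaviside expansion, Eq. (6.56): sum of p(a_m)/q'(a_m) e^{a_m t}."""
    return sum(p(a) / qprime(a) * math.exp(a * t) for a in roots)

def laplace_numeric(f, s, T=40.0, n=200_000):
    """Trapezoidal approximation of the Laplace integral on [0, T]."""
    h = T / n
    total = 0.5 * (f(0.0) + f(T) * math.exp(-s * T))
    for i in range(1, n):
        t = i * h
        total += f(t) * math.exp(-s * t)
    return total * h

s = 2.0
approx = laplace_numeric(f, s)   # numerical transform of the expansion
exact = p(s) / q(s)              # should agree with F(s) itself
```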
  • 117. book Mobk070 March 22, 2007 11:7 INTEGRAL TRANSFORMS: THE LAPLACE TRANSFORM 107 6.5.2 Repeated Roots We now consider the case when q(s ) has a repeated root (s + a)n+1 . Then F(s ) = p(s ) q(s ) = φ(s ) (s − a)n+1 n = 1, 2, 3, . . . = Aa (s − a) + A1 (s − a)2 + · · · + An (s − a)n+1 + H(s ) (6.57) It follows that φ(s ) = A0(s − a)n + · · · + Am(s − a)n−m + · · · + An + (s − a)n+1 H(s ) (6.58) By letting s →a we see that An = φ(a). To find the remaining A’s, differentiate φ (n – r) times and take the limit as s → a. φ(n−r) (a) = (n − r)!Ar (6.59) Thus F(s ) = n r=0 φ(n−r) (a) (n − r)! 1 (s − a)r+1 + H(s ) (6.60) If the inverse transform of H(s ) (the part containing no repeated roots) is h(t) it follows from the shifting theorem and the inverse transform of 1/s m that f (t) = n r=0 φ(n−r) (a) (n − r)!r! tr eat + h(t) (6.61) Example 6.8. Inverse transform with repeated roots F(s ) = s (s + 2)3(s + 1) = A0 (s + 2) + A1 (s + 2)2 + A2 (s + 2)3 + C (s + 1) Multiply by (s + 2)3 . s (s + 1) = A0(s + 2)2 + A1(s + 2) + A2 + C(s + 2)3 (s + 1) = φ(s ) Take the limit as s → −2, A2 = 2
Differentiate once:

φ′ = 1/(s + 1)², so φ′(−2) = 1 = A1

φ″ = −2/(s + 1)³, so φ″(−2) = 2 and A0 = φ″(−2)/2! = 1

To find C, multiply by (s + 1) and take s = −1 (in the original equation): C = −1. Thus

F(s) = 1/(s + 2) + 1/(s + 2)² + 2/(s + 2)³ − 1/(s + 1)

and noting the shifting theorem and the theorem on t^m,

f(t) = e^{-2t} + te^{-2t} + t²e^{-2t} − e^{-t}

6.5.3 Quadratic Factors: Complex Roots

If q(s) has complex roots and all the coefficients are real, this part of q(s) can always be written in the form

(s − a)² + b² (6.62)

This is a shifted form of

s² + b² (6.63)

This factor in the denominator leads to sines or cosines.

Example 6.9. Quadratic factors

Find the inverse transform of

F(s) = (2s − 1)/(s² + 2s + 5) = 2s/[(s + 1)² + 4] − 1/[(s + 1)² + 4]

Because of the shifted s in the denominator, the numerator of the first term must also be shifted to be consistent. Thus we rewrite as

F(s) = 2(s + 1)/[(s + 1)² + 4] − 3/[(s + 1)² + 4]

The inverse transform of

2s/(s² + 4)
  • 119. book Mobk070 March 22, 2007 11:7 INTEGRAL TRANSFORMS: THE LAPLACE TRANSFORM 109 is 2 cos(2t) and the inverse of −3 s 2 + 4 = − 3 2 2 (s 2 + 4) is − 3 2 sin(2t) Thus f (t) = 2e−t cos(2t) − 3 2 e−t sin(2t) Tables of Laplace transforms and inverse transforms can be found in many books such as the book by Arpaci and in the Schaum’s Outline referenced below. A brief table is given here in Appendix A. Problems 1. Solve the problem y − 2y + 5y = 0 y(0) = y (0) = 0 y (0) = 1 using Laplace transforms. 2. Find the general solution using Laplace transforms y + k2 y = a 3. Use convolution to find the solution to the following problem for general g(t). Then find the solution for g(t) = t2 . y + 2y + y = g(t) y (0) = y(0) = 0 4. Find the inverse transforms. (a) F(s ) = s + c (s + a)(s + b)2 (b) F(s ) = 1 (s 2 + a2)s 3
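Before leaving the chapter, the convolution property of Section 6.4.3 (which Problem 3 asks you to use) can be spot-checked numerically. The pair f(t) = t, g(t) = e^{-t} below is a hypothetical choice for illustration, not taken from the text:

```python
import math

def convolve(f, g, t, n=20_000):
    """Trapezoidal approximation of (f*g)(t) = integral of f(tau) g(t - tau)."""
    h = t / n
    total = 0.5 * (f(0.0) * g(t) + f(t) * g(0.0))
    for i in range(1, n):
        tau = i * h
        total += f(tau) * g(t - tau)
    return total * h

# With f(t) = t and g(t) = e^{-t}, F(s)G(s) = 1/(s^2 (s+1)).
# Partial fractions: 1/(s^2(s+1)) = 1/s^2 - 1/s + 1/(s+1),
# whose inverse is t - 1 + e^{-t}; the convolution should reproduce it.
t = 1.7
numeric = convolve(lambda u: u, lambda u: math.exp(-u), t)
closed = t - 1.0 + math.exp(-t)
```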
  • 120. book Mobk070 March 22, 2007 11:7 110 ESSENTIALS OF APPLIED MATHEMATICS FOR SCIENTISTS AND ENGINEERS (c) F(s ) = (s 2 − a2 ) (s 2 + a2)2 5. Find the periodic function whose Laplace transform is F(s ) = 1 s 2 1 − e−s 1 + e−s and plot your results for f (t) for several periods. FURTHER READING M. Abramowitz and I. A. Stegun, Eds., Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. New York: Dover Publications, 1974. V. S. Arpaci, Conduction Heat Transfer. Reading, MA: Addison-Wesley, 1966. R. V. Churchill, Operational Mathematics, 3rd edition. New York: McGraw-Hill, 1972. I. H. Sneddon, The Use of Integral Transforms. New York: McGraw-Hill, 1972.
  • 121. book Mobk070 March 22, 2007 11:7 111 C H A P T E R 7 Complex Variables and the Laplace Inversion Integral 7.1 BASIC PROPERTIES A complex number z can be defined as an ordered pair of real numbers, say x and y, where x is the real part of z and y is the real value of the imaginary part: z = x + iy (7.1) where i = √ −1 I am going to assume that the reader is familiar with the elementary properties of addition, subtraction, multiplication, etc. In general, complex numbers obey the same rules as real numbers. For example (x1 + iy1) (x2 + iy2) = x1x2 − y1 y2 + i (x1 y2 + x2 y1) (7.2) The conjugate of z is ¯z = x − iy (7.3) It is often convenient to represent complex numbers on Cartesian coordinates with x and y as the axes. In such a case, we can represent the complex number (or variable) z as z = x + iy = r(cos θ + i sin θ) (7.4) as shown in Fig. 7.1. We also define the exponential function of a complex number as cos θ + i sin θ = eiθ which is suggested by replacing x in series ex = ∞ n=0 xn n! by iθ. Accordingly, eiθ = cos θ + i sin θ (7.5) and e−iθ = cos θ − i sin θ (7.6)
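Identities such as (7.9) and (7.10) can be confirmed directly with complex arithmetic. A short Python check (not from the text; the sample point z is arbitrary):

```python
import cmath
import math

z = complex(0.7, -1.3)
x, y = z.real, z.imag

# Eq. (7.9): cosh z = cosh x cos y + i sinh x sin y
lhs1 = cmath.cosh(z)
rhs1 = complex(math.cosh(x) * math.cos(y), math.sinh(x) * math.sin(y))

# Eq. (7.10): sinh z = sinh x cos y + i cosh x sin y
lhs2 = cmath.sinh(z)
rhs2 = complex(math.sinh(x) * math.cos(y), math.cosh(x) * math.sin(y))
```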
  • 122. book Mobk070 March 22, 2007 11:7 112 ESSENTIALS OF APPLIED MATHEMATICS FOR SCIENTISTS AND ENGINEERS y r x FIGURE 7.1: Polar representation of a complex variable z Addition gives cos θ = eiθ + e−iθ 2 = cosh(iθ) (7.7) and subtraction gives sin θ = eiθ − e−iθ 2i = −i sinh(iθ) (7.8) Note that cosh z = 1 2 ex+iy + e−x−iy = 1 2 ex [cos y + i sin y] + e−x [cos y − i sin y] = ex + e−x 2 cos y + i ex − e−x 2 sin y = cosh x cos y + i sinh x sin y (7.9) The reader may show that sinh z = sinh x cos y + i cosh x sin y. (7.10) Trigonometric functions are defined in the usual way: sin z = eiz − e−iz 2i cos z = eiz + e−iz 2 tan z = sin z cos z (7.11) Two complex numbers are equal if and only if their real parts are equal and their imaginary parts are equal.
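Both square roots of i found in Example 7.1 are easily checked by squaring them, as in this small sketch (an addition for illustration, not part of the text):

```python
import math

# The two branches of i^{1/2} from Example 7.1
w1 = (1.0 + 1.0j) / math.sqrt(2.0)
w2 = (-1.0 - 1.0j) / math.sqrt(2.0)
check1 = w1 * w1   # both products should equal i
check2 = w2 * w2
```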
FIGURE 7.3: The roots of 1^{1/2}

In fact in this example θ is also π/2 + 2kπ. Using the fact that

z = r e^{i(θ+2kπ)}, k = 0, 1, 2, . . .

it is easy to show that

z^{1/n} = r^{1/n} [cos((θ + 2πk)/n) + i sin((θ + 2πk)/n)], k = 0, 1, . . . , n − 1 (7.14)

This is De Moivre's theorem. For example, when n = 2 there are two solutions and when n = 3 there are three solutions. These solutions are called branches of z^{1/n}. A region in which the function is single valued is obtained by forming a branch cut, which is a line stretching from the origin outward such that the region between the positive real axis and the line contains only one solution. In the above example, a branch cut might be a line from the origin out the negative real axis.

Example 7.2. Find 1^{1/2} and represent it on the polar diagram.

1^{1/2} = cos(θ/2 + kπ) + i sin(θ/2 + kπ)

and since θ = 0 in this case

1^{1/2} = cos kπ + i sin kπ

There are two distinct roots, z = +1 for k = 0 and z = −1 for k = 1. The two values are shown in Fig. 7.3. The two solutions are called branches of √1, and an appropriate branch cut might be from the origin out the positive imaginary axis, leaving as the single solution 1.

Example 7.3. Find the roots of (1 + i)^{1/4}. Making use of Eq. (7.14) with n = 4, r = √2, θ = π/4, we find that

(1 + i)^{1/4} = (√2)^{1/4} [cos(π/16 + 2kπ/4) + i sin(π/16 + 2kπ/4)], k = 0, 1, 2, 3
  • 124. book Mobk070 March 22, 2007 11:7 114 ESSENTIALS OF APPLIED MATHEMATICS FOR SCIENTISTS AND ENGINEERS -1 +1 FIGURE 7.3: The roots of 11/2 In fact in this example θ is also π/2 + 2kπ. Using the fact that z = re−i(θ+2kπ) k = 1, 2, 3, . . . it is easy to show that z1/n = n √ r cos θ + 2πk n + i sin θ + 2πk n (7.14) This is De Moivre’s theorem. For example when n = 2 there are two solutions and when n = 3 there are three solutions. These solutions are called branches of z1/n . A region in which the function is single valued is indicated by forming a branch cut, which is a line stretching from the origin outward such that the region between the positive real axis and the line contains only one solution. In the above example, a branch cut might be a line from the origin out the negative real axis. Example 7.2. Find 11/2 and represent it on the polar diagram. 11/2 = 1 cos θ 2 + kπ + i sin θ 2 + kπ and since θ = 0 in this case 11/2 = cos kπ + i sin kπ There are two distinct roots at z = +1 for k = 0 and −1 for k = 1. The two values are shown in Fig. 7.3. The two solutions are called branches of √ 1, and an appropriate branch cut might be from the origin out the positive imaginary axis, leaving as the single solution 1. Example 7.3. Find the roots of (1 + i)1/4 . Making use of Eq. (7.13) with m = 1 and n = 4, r = √ 2, θ = π 4, we find that (1 + i)1/4 = ( √ 2)1/4 cos π 16 + 2kπ 4 + i sin π 16 + 2kπ 4 k = 0, 1, 2, 3
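De Moivre's theorem translates directly into a few lines of code. The sketch below (an illustrative addition) generates all n branches of z^{1/n} and confirms that each one, raised to the nth power, recovers z:

```python
import cmath
import math

def nth_roots(z, n):
    """All n distinct values of z^{1/n} from De Moivre's theorem, Eq. (7.14)."""
    r, theta = abs(z), cmath.phase(z)
    return [r ** (1.0 / n) * cmath.exp(1j * (theta + 2.0 * math.pi * k) / n)
            for k in range(n)]

# The four roots of (1 + i)^{1/4} from Example 7.3
roots = nth_roots(1.0 + 1.0j, 4)
```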
  • 125. book Mobk070 March 22, 2007 11:7 COMPLEX VARIABLES AND THE LAPLACE INVERSION INTEGRAL 115 16 1+ i 21/8 21/2 1 FIGURE 7.4: The roots of (1 + i)1/4 Hence, the four roots are as follows: (1 + i)1/4 = 21/8 cos π 16 + i sin π 16 = 21/8 cos π 16 + π 2 + i sin π 16 + π 2 = 21/8 cos π 16 + π + i sin π 16 + π = 21/8 cos π 16 + 3π 2 + i sin π 16 + 3π 2 The locations of the roots are shown in Fig. 7.4. The natural logarithm can be defined by writing z = reiθ for −π ≤ θ < π and noting that ln z = lnr + iθ (7.15) and since z is not affected by adding 2nπ to θ this expression can also be written as ln z = lnr + i (θ + 2nπ) with n = 0, 1, 2, . . . (7.16) When n = 0 we obtain the principal branch. All of the single valued branches are analytic for r > 0 and θ0 < θ < θ0 + 2π. 7.1.1 Limits and Differentiation of Complex Variables: Analytic Functions Consider a function of a complex variable f (z). We generally write f (z) = u(x, y) + iv(x, y)
where u and v are real functions of x and y. The derivative of a complex variable is defined as follows:

f′(z) = lim_{Δz→0} [f(z + Δz) − f(z)]/Δz (7.17)

or

f′(z) = lim_{Δx,Δy→0} [u(x + Δx, y + Δy) + iv(x + Δx, y + Δy) − u(x, y) − iv(x, y)]/(Δx + iΔy) (7.18)

Letting Δx → 0 first (approaching along the imaginary axis), we find that

f′(z) = lim_{Δy→0} [u(x, y + Δy) + iv(x, y + Δy) − u(x, y) − iv(x, y)]/(iΔy) (7.19)

and now taking the limit on Δy,

f′(z) = (1/i)(∂u/∂y + i ∂v/∂y) = ∂v/∂y − i ∂u/∂y (7.20)

Conversely, taking the limit on Δy first,

f′(z) = lim_{Δx→0} [u(x + Δx, y) + iv(x + Δx, y) − u(x, y) − iv(x, y)]/Δx = ∂u/∂x + i ∂v/∂x (7.21)

The derivative exists only if

∂u/∂x = ∂v/∂y and ∂u/∂y = −∂v/∂x (7.22)

These are called the Cauchy–Riemann conditions, and in this case the function is said to be analytic. If a function is analytic for all x and y it is entire. Polynomials are entire, as are trigonometric, hyperbolic, and exponential functions. We note in passing that for analytic functions both the real and imaginary parts satisfy the equation ∇²u = ∇²v = 0 in two-dimensional space. It should be obvious at this point that this is important in the solution of the steady-state diffusion equation in two dimensions. We mention here that it is also important in the study of incompressible, inviscid fluid mechanics and in other areas of science and engineering. You will undoubtedly meet with it in some of your courses.

Example 7.4.

f(z) = z², f′(z) = 2z
f(z) = sin z, f′(z) = cos z
f(z) = e^{az}, f′(z) = ae^{az}

Integrals

FIGURE 7.5: Integration of an analytic function along two paths

Consider the line integral along a curve C defined as x = 2y from the origin to the point x = 2, y = 1, path OB in Fig. 7.5:

∫_C z² dz

We can write

z² = x² − y² + 2ixy = 3y² + 4y²i and dz = (2 + i)dy

Thus

∫_{y=0}^{1} (3y² + 4y²i)(2 + i) dy = (3 + 4i)(2 + i) ∫_{y=0}^{1} y² dy = 2/3 + (11/3)i
  • 128. book Mobk070 March 22, 2007 11:7 118 ESSENTIALS OF APPLIED MATHEMATICS FOR SCIENTISTS AND ENGINEERS On the other hand, if we perform the same integral along the x axis to x = 2 and then along the vertical line x = 2 to the same point, path OAB in Fig. 7.5, we find that 2 x=0 x2 dx + 1 y=0 (2 + iy)2 idy = 8 3 + i 1 y=0 (4 − y2 + 4iy)dy = 2 3 + 11 3 i This happened because the function z2 is analytic within the region between the two curves. In general, if a function is analytic in the region contained between the curves, the integral C f (z)dz (7.23) is independent of the path of C. Since any two integrals are the same, and since if we integrate the first integral along BO only the sign changes, we see that the integral around the closed contour is zero. C f (z)dz = 0 (7.24) This is called the Cauchy–Goursat theorem and is true as long as the region R within the closed curve C is simply connected and the function is analytic everywhere within the region. A simply connected region R is one in which every closed curve within it encloses only points in R. The theorem can be extended to allow for multiply connected regions. Fig. 7.6 shows a doubly connected region. The method is to make a cut through part of the region and to integrate counterclockwise around C1, along the path C2 through the region, clockwise around the interior curve C3, and back out along C4. Clearly, the integral along C2 and C4 cancels, so that C1 f (z)dz + C3 f (z)dz = 0 (7.25) where the first integral is counterclockwise and second clockwise. 7.1.2 The Cauchy Integral Formula Now consider the following integral: C f (z)dz (z − z0) (7.26) If the function f (z) is analytic then the integrand is also analytic at all points except z = z0. We now form a circle C2 of radius r0 around the point z = z0 that is small enough to fit inside
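The path independence just demonstrated by hand can also be demonstrated numerically. The sketch below (an illustrative addition; parameterizations are my own) integrates z² along both paths of Fig. 7.5 and compares with the value 2/3 + (11/3)i found above:

```python
def integrate(path, n=100_000):
    """Trapezoidal line integral of z^2 along the curve z = path(u), u in [0, 1]."""
    total = 0.0 + 0.0j
    h = 1.0 / n
    for i in range(n):
        a, b = path(i * h), path((i + 1) * h)
        total += 0.5 * (a * a + b * b) * (b - a)
    return total

# Path OB: the straight line x = 2y from 0 to 2 + i
path_OB = lambda u: complex(2.0 * u, u)

# Path OAB: along the real axis to 2, then vertically up to 2 + i
def path_OAB(u):
    return complex(4.0 * u, 0.0) if u < 0.5 else complex(2.0, 2.0 * u - 1.0)

I1, I2 = integrate(path_OB), integrate(path_OAB)
exact = (2.0 / 3.0) + (11.0 / 3.0) * 1j   # value obtained in the text
```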
  • 129. book Mobk070 March 22, 2007 11:7 COMPLEX VARIABLES AND THE LAPLACE INVERSION INTEGRAL 119 FIGURE 7.6: A doubly connected region FIGURE 7.7: Derivation of Cauchy’s integral formula the curve C1 as shown in Fig. 7.7. Thus we can write C1 f (z) z − z0 dz − C2 f (z) z − z0 dz = 0 (7.27) where both integrations are counterclockwise. Let r0 now approach zero so that in the second integral z approaches z0, z − z0 = r0eiθ and dz = r0ieiθ dθ. The second integral is as follows: C2 f (z0) r0eiθ r0ieiθ dθ = − f (z0)i 2π θ=0 dθ = −2πi f (z0) Thus, Cauchy’s integral formula is f (z0) = 1 2πi C f (z) z − z0 dz (7.28) where the integral is taken counterclockwise and f (z) is analytic inside C.
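Cauchy's integral formula (7.28) lends itself to a direct numerical test: discretize the contour integral around a circle centered at z0 and compare with f(z0). The following sketch (an illustrative addition; f = e^z and z0 are arbitrary choices) does exactly that:

```python
import cmath
import math

def cauchy_value(f, z0, R=1.0, n=4096):
    """(1/2*pi*i) times the contour integral of f(z)/(z - z0) over the circle
    |z - z0| = R, using the trapezoidal rule in the angle theta (which is
    spectrally accurate for smooth periodic integrands)."""
    total = 0.0 + 0.0j
    for k in range(n):
        theta = 2.0 * math.pi * k / n
        z = z0 + R * cmath.exp(1j * theta)
        dz = 1j * R * cmath.exp(1j * theta) * (2.0 * math.pi / n)
        total += f(z) / (z - z0) * dz
    return total / (2.0j * math.pi)

z0 = 0.3 + 0.2j
approx = cauchy_value(cmath.exp, z0)   # should reproduce e^{z0}
```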
  • 130. book Mobk070 March 22, 2007 11:7 120 ESSENTIALS OF APPLIED MATHEMATICS FOR SCIENTISTS AND ENGINEERS We can formally differentiate the above equation n times with respect to z0 and find an extension as f (n) (z0) = n! 2πi C f (z) (z − z0)n+1 dz (7.29) Problems 1. Show that (a) sinh z = sinh x cos y + i cosh x sin y (b) cos z = cos x cosh y − i sin x sinh y and show that each is entire. 2. Find all of the values of (a) (−1 + i √ 3) 3 2 (b) 8 1 6 3. Find all the roots of the equation sin z = cosh 4 4. Find all the zeros of (a) sinh z (b) cosh z
  • 131. book Mobk070 March 22, 2007 11:7 121 C H A P T E R 8 Solutions with Laplace Transforms In this chapter, we present detailed solutions of some boundary value problems using the Laplace transform method. Problems in both mechanical vibrations and diffusion are presented along with the details of the inversion method. 8.1 MECHANICAL VIBRATIONS Example 8.1. Consider an elastic bar with one end of the bar fixed and a constant force F per unit area at the other end acting parallel to the bar. The appropriate partial differential equation and boundary and initial conditions for the displacement y(x, t) are as follows: yττ = yζζ , 0 < ζ < 1, t > 0 y(ζ, 0) = yt(ζ, 0) = 0 y(0, τ) = 0 yς (1, τ) = F/E = g We obtain the Laplace transform of the equation and boundary conditions as s 2 Y = Yςς Y (s, 0) = 0 Yς (s, 1) = g/s Solving the differential equation for Y (s , ζ), Y (s ) = (A sinh ςs + B cosh ς s ) Applying the boundary conditions we find that B = 0 and g s = As cosh s A = g s 2 cosh s Y (s ) = g sinh ς s s 2 cosh s
  • 132. book Mobk070 March 22, 2007 11:7 122 ESSENTIALS OF APPLIED MATHEMATICS FOR SCIENTISTS AND ENGINEERS Since the function 1 s sinh ς s = ς + s 2 ς3 3! + s 4 ς5 5! + . . . the function 1 s sinh ς s is analytic and Y (s ) can be written as the ratio of two analytic functions Y (s ) = 1 s sinh ς s s cosh s Y (s ) therefore has a simple pole at s = 0 and the residue there is R(s = 0) = lim s → 0 s Y (s ) = lim s → 0 ς + s 2 ς3 3! + . . . cosh s = gς The remaining poles are the singularities of cosh s . But cosh s = cosh x cos y + i sinh x sin y, so the zeros of this function are at x = 0 and cosy = 0. Hence, sn = i(2n − 1)π/2. The residues at these points are R(s = sn) = lim s → sn g sinh ς s s d ds (s cosh s ) es τ = g s 2 n sinh ς sn sinh sn esnτ (n = ±1, ±2, ±3 . . .) Since sinh i 2n − 1 2 (πς) = i sin 2n − 1 2 (πς) we have R(s = sn) = gi sin 2n−1 2 (πς) − 2n−1 2 π 2 i sin 2n−1 2 π exp i 2n − 1 2 πτ and sin 2n − 1 s π = (−1)n+1 The exponential function can be written as exp i 2n − 1 2 πτ = cos 2n − 1 2 πτ + i sin 2n − 1 2 πτ Note that for the poles on the negative imaginary axis (n < 0) this expression can be written as exp i 2m − 1 2 πτ = cos 2m − 1 2 πτ − i sin 2m − 1 2 πτ where m = −n > 0. This corresponds to the conjugate poles.
  • 133. book Mobk070 March 22, 2007 11:7 SOLUTIONS WITH LAPLACE TRANSFORMS 123 Thus for each of the sets of poles we have R(s = sn) = 4g(−1)n π2(2n − 1)2 sin (2n − 1)πς 2 exp (2n − 1)πτi 2 Now adding the residues corresponding to each pole and its conjugate we find that the final solution is as follows: y(ς, τ) = g ς + 8 π2 ∞ n=1 (−1)n (2n − 1)2 sin (2n − 1)πς 2 cos (2n − 1)πτ 2 Suppose that instead of a constant force at ζ = 1, we allow g to be a function of τ. In this case, the Laplace transform of y(ζ, τ) takes the form Y (ς, s ) = G(s ) sinh(ςs ) s cosh s The simple pole with residue gζ is not present. However, the other poles are still at the same sn values. The residues at each of the conjugate poles of the function F(s ) = sinh(ς s ) s cosh s are 2(−1)n π(2n − 1) sin (2n − 1)πς 2 sin (2n − 1)πτ 2 = f (ς, τ) According to the convolution theorem y(ς, τ) = τ τ =0 y(τ − τ )g(τ )dτ y(ς, τ) = 4 π ∞ n=0 (−1)n (2n − 1) sin (2n − 1)πς 2 τ τ g(τ − τ ) sin (2n − 1)πτ 2 dτ . In the case that g = constant, integration recovers the previous equation. Example 8.2. An infinitely long string is initially at rest when the end at x = 0 undergoes a transverse displacement y(0, t) = f (t). The displacement is described by the differential
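A useful check of the series solution above is that it satisfies the initial condition: at τ = 0 every cosine factor is 1 and the series should cancel the term gς exactly, leaving y(ς, 0) = 0. The Python sketch below (an illustrative addition; the truncation level is arbitrary) evaluates a partial sum at a couple of interior points:

```python
import math

def y0(zeta, g=1.0, terms=20_000):
    """Partial sum of the series solution at tau = 0; since the bar starts
    from rest with zero displacement, this should vanish."""
    total = zeta
    for n in range(1, terms + 1):
        total += ((8.0 / math.pi ** 2) * (-1) ** n / (2 * n - 1) ** 2
                  * math.sin((2 * n - 1) * math.pi * zeta / 2.0))
    return g * total

a = y0(0.5)
b = y0(0.25)
```

The residual shrinks like 1/N for N terms, so the partial sums are only modestly accurate; the point is that they tend to zero.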
  • 134. book Mobk070 March 22, 2007 11:7 124 ESSENTIALS OF APPLIED MATHEMATICS FOR SCIENTISTS AND ENGINEERS equation and boundary conditions as follows: ∂2 y ∂t2 = ∂2 y ∂x2 y(x, 0) = yt(x, 0) = 0 y(0, t) = f (t) y is bounded Taking the Laplace transform with respect to time and applying the initial conditions yields s 2 Y (x, s ) = d2 Y (x, s ) dx2 The solution may be written in terms of exponential functions Y (x, s ) = Ae−s x + Bes x In order for the solution to be bounded B = 0. Applying the condition at x = 0 we find A = F(s ) where F(s ) is the Laplace transform of f (t). Writing the solution in the form Y (x, s ) = s F(s ) e−s x s and noting that the inverse transform of e−s x /s is the Heaviside step Ux(t) where Ux(t) = 0 t < x Ux(t) = 1 t > x and that the inverse transform of s F(s ) is f (t), we find using convolution that y(x, t) = t µ=0 f (t − µ)Ux(µ)dµ = f (t − x) x < t = 0 x > t For example, if f (t) = sin ω t y(x, t) = sin ω(t − x) x < t = 0 x > t
  • 135. book Mobk070 March 22, 2007 11:7 SOLUTIONS WITH LAPLACE TRANSFORMS 125 Problems 1. Solve the above vibration problem when y(0, τ) = 0 y(1, τ) = g(τ) Hint: To make use of convolution see Example 8.3. 2. Solve the problem ∂2 y ∂t2 = ∂2 y ∂x2 yx(0, t) = y(x, 0) = yt(x, 0) = 0 y(1, t) = h using the Laplace transform method. 8.2 DIFFUSION OR CONDUCTION PROBLEMS We now consider the conduction problem Example 8.3. uτ = uςς u(1, τ) = f (τ) u(0, τ) = 0 u(ς, 0) = 0 Taking the Laplace transform of the equation and boundary conditions and noting that u(ς, 0) = 0, sU(s ) = Uςς solution yields U = A sinh √ s ς + B cosh √ s ς U(0, s ) = 0 U(1, s ) = F(s ) The first condition implies that B = 0 and the second gives F(s ) = A sinh √ s and so U = F(s )sinh √ s ς sinh √ s .
  • 136. book Mobk070 March 22, 2007 11:7 126 ESSENTIALS OF APPLIED MATHEMATICS FOR SCIENTISTS AND ENGINEERS If f (τ) = 1, F(s ) = 1/s , a particular solution, V , is V = sinh √ s ς s sinh √ s where v = L−1 V (s ) Now, sinh √ s ς sinh √ s = ς √ s + (ς √ s )3 3! + (ς √ s )5 5! + . . . √ s + ( √ s )3 3! + ( √ s )5 5! + . . . and so there is a simple pole of V es τ at s = 0. Also, since when sinh √ s = 0, sinhς √ s not necessarily zero, there are simple poles at sinh √ s = 0 or s = −n2 π2 . The residue at the pole s = 0 is lim s → 0 s V (s )es τ = ς and since V (s ) es τ has the form P(s )/Q(s ) the residue of the pole at −n2 π2 is P(ς, −n2 π2 ) Q (−n2π2) e−n2 π2 τ = sinh ς √ s e−n2 π2 τ √ s 2 cosh √ s + sinh √ s s =−n2π2 = 2 sin(nπς) nπ cos(nπ) e−n2 π2 τ The solution for v(ζ, τ) is then v(ς, τ) = ς + ∞ n=1 2(−1)n nπ e−n2 π2 τ sin(nπς) The solution for the general case as originally stated with u(1, τ) = f (τ) is obtained by first differentiating the equation for v(ζ, τ) and then noting the following: U(ς, s ) = s F(s ) sinh ς √ s s sinh √ s and L f (τ) = s F(s ) − f (τ = 0) so that U(ς, s ) = f (τ = 0)V (ς, s ) + L f (s ) V (ς, s )
Consequently

u(ς, τ) = f(0) v(ς, τ) + ∫_{τ′=0}^{τ} f′(τ − τ′) v(ς, τ′) dτ′
= ς f(τ) + (2 f(0)/π) Σ_{n=1}^{∞} [(−1)^n/n] e^{-n²π²τ} sin(nπς) + (2/π) Σ_{n=1}^{∞} [(−1)^n/n] sin(nπς) ∫_{τ′=0}^{τ} f′(τ − τ′) e^{-n²π²τ′} dτ′

This series converges rapidly for large values of τ. However, for small values of τ it converges slowly. There is another form of solution that converges rapidly for small τ. The Laplace transform of v(ς, τ) can be written as

sinh(ς√s)/(s sinh √s) = (e^{ς√s} − e^{-ς√s}) / [s(e^{√s} − e^{-√s})]
= (1/s) e^{-√s} (e^{ς√s} − e^{-ς√s}) / (1 − e^{-2√s})
= (1/s) e^{-√s} (e^{ς√s} − e^{-ς√s}) (1 + e^{-2√s} + e^{-4√s} + e^{-6√s} + · · ·)
= (1/s) Σ_{n=0}^{∞} [e^{-(1+2n-ς)√s} − e^{-(1+2n+ς)√s}]

The inverse Laplace transform of e^{-k√s}/s is the complementary error function, defined by

erfc(k/(2√τ)) = 1 − (2/√π) ∫_{x=0}^{k/(2√τ)} e^{-x²} dx

Thus we have

v(ς, τ) = Σ_{n=0}^{∞} [erfc((1 + 2n − ς)/(2√τ)) − erfc((1 + 2n + ς)/(2√τ))]

and this series converges rapidly for small values of τ.
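Since the eigenfunction series and the erfc series represent the same function v(ς, τ), their partial sums should agree wherever both converge. The sketch below (an illustrative addition; the sample point and truncation levels are my own choices) compares the two forms at an intermediate time, where neither series struggles:

```python
import math

def v_eigen(zeta, tau, terms=200):
    """Eigenfunction (large-time) series: zeta + sum of 2(-1)^n/(n*pi)
    e^{-n^2 pi^2 tau} sin(n pi zeta)."""
    total = zeta
    for n in range(1, terms + 1):
        total += (2.0 * (-1) ** n / (n * math.pi)
                  * math.exp(-n * n * math.pi * math.pi * tau)
                  * math.sin(n * math.pi * zeta))
    return total

def v_erfc(zeta, tau, terms=50):
    """Image-source (small-time) series built from complementary error functions."""
    total = 0.0
    root = 2.0 * math.sqrt(tau)
    for n in range(terms):
        total += (math.erfc((1.0 + 2.0 * n - zeta) / root)
                  - math.erfc((1.0 + 2.0 * n + zeta) / root))
    return total

va = v_eigen(0.5, 0.1)
vb = v_erfc(0.5, 0.1)
```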
  • 138. book Mobk070 March 22, 2007 11:7 128 ESSENTIALS OF APPLIED MATHEMATICS FOR SCIENTISTS AND ENGINEERS Taking the Laplace transform sU − ς = Uςς U(s, 0) = 0 Uς (s, 1) + HU(s, 1) = 0 The differential equation has a homogeneous solution Uh = A cosh( √ s ς) + B sinh( √ s ς) and a particular solution Up = ς s so that U = ς s + A cosh( √ s ς) + B sinh( √ s ς) Applying the boundary conditions, we find A = 0 B = − 1 + H s √ s cosh( √ s ) + H sinh( √ s ) The Laplace transform of the solution is as follows: U = ς s − (1 + H) sinh( √ s ς) s √ s cosh( √ s ) + H sinh( √ s ) The inverse transform of the first term is simply ζ. For the second term, we must first find the poles. There is an isolated pole at s = 0. To obtain the residue of this pole note that lim s → 0 − (1 + H) sinh ς √ s √ s cosh √ s + H sinh √ s es τ = lim s → 0 − (1 + H)(ς √ s + · · · ) √ s + H( √ s + · · · ) = −ς canceling the first residue. To find the remaining residues let √ s = x + iy. Then (x + iy) [cosh x cos y + i sinh x sin y] + H [sinh x cos y + i cosh x sin y] = 0 Setting real and imaginary parts equal to 0 yields x cosh x cos y − y sinh x sin y + H sinh x cos y = 0 and y cosh x cos y + x sinh x sin y + H cosh x sin y = 0
  • 139. book Mobk070 March 22, 2007 11:7 SOLUTIONS WITH LAPLACE TRANSFORMS 129 which yields x = 0 y cos y + H sin y = 0 The solution for the second term of U is lim s → iy (s − iy)(1 + H) sinh( √ s ς)es τ s √ s cosh( √ s ) + H sinh √ s ) or P(ς, s )es τ Q (ς, s ) s =−y2 where Q = s √ s cosh √ s + H sinh √ s Q = √ s cosh √ s + H sinh √ s + s 1 2 √ s cosh √ s + 1 2 sinh √ s + H 2 √ s cosh √ s Q = √ s (1 + H) 2 cosh √ s + s 2 sinh √ s Q = √ s (1 + H) 2 − s √ s 2H cosh √ s Q (s = −y2 ) = H(H + 1) + y2 2H iy cos(y) while P(s = −y2 ) = (1 + H)i sin(yς)e−y2 τ un(ς, τ) = −(1 + H) sin(ynς)e−y2 τ H(H+1)+y2 2H yn cos(yn) = −2H(H + 1) sin(ynς)e−y2 τ [H(H + 1) + y2]yn cos(yn) = 2(H + 1) H(H + 1) + y2 sin ς yn sin yn e−y2 n τ The solution is therefore u(ς, τ) = ∞ n=1 2(H + 1) H(H + 1) + y2 sin ς yn sin yn e−y2 n τ
  • 140. book Mobk070 March 22, 2007 11:7 130 ESSENTIALS OF APPLIED MATHEMATICS FOR SCIENTISTS AND ENGINEERS Note that as a partial check on this solution, we can evaluate the result when H → ∞ as u(ς, τ) = ∞ n=1 −2 yn cos yn sin ς yne−y2 n τ = ∞ n=1 2(−1)n+1 nπ sin(nπς)e−n2 π2 τ in agreement with the separation of variables solution. Also, letting H → 0 we find u(ς, τ) = ∞ n=1 2 y2 n sin(ynς) sin(yn) e y2 n τ with yn = 2n−1 2 π again in agreement with the separation of variables solution. Example 8.5. Next we consider a conduction (diffusion) problem with a transient source q(τ). (Nondimensionalization and normalization are left as an exercise.) uτ = uςς + q(τ) u(ς, 0) = 0 = uς (0, τ) u(1, τ) = 1 Obtaining the Laplace transform of the equation and boundary conditions we find sU = Uςς + Q(s ) Uς (0, s ) = 0 U(1, s ) = 1 s A particular solution is UP = Q(s ) s and the homogeneous solution is UH = A sinh(ς √ s ) + B cosh(ς √ s ) Hence the general solution is U = Q s + A sinh(ς √ s ) + B cosh(ς, √ s )
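The eigenvalues y_n of the transcendental equation y cos y + H sin y = 0 must in practice be found numerically. A simple bisection sketch (an illustrative addition; the value H = 2 is hypothetical) exploits the fact that the nth root lies between (2n − 1)π/2 and nπ:

```python
import math

H = 2.0  # hypothetical value of the convection parameter

def g(y):
    """The eigencondition y cos y + H sin y = 0 from the example above."""
    return y * math.cos(y) + H * math.sin(y)

def eigenvalue(n, iters=200):
    """n-th positive root, bracketed in ((2n-1)pi/2, n pi), by bisection."""
    lo, hi = (2 * n - 1) * math.pi / 2.0, n * math.pi
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

ys = [eigenvalue(n) for n in range(1, 4)]
```

With these y_n in hand, the series solution above can be summed to any desired number of terms.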
Using the boundary conditions,
$$U_\varsigma(0,s) = 0 \;\Rightarrow\; A = 0$$
$$U(1,s) = \frac{1}{s} = \frac{Q}{s} + B\cosh\sqrt{s} \;\Rightarrow\; B = \frac{1-Q}{s\cosh\sqrt{s}}$$
$$U = \frac{Q}{s} + \frac{(1-Q)\cosh(\varsigma\sqrt{s})}{s\cosh\sqrt{s}}$$
The poles occur where (with $\sqrt{s} = x + iy$) $\cosh\sqrt{s} = 0$, that is $x = 0$ and $\cos y = 0$:
$$\sqrt{s} = \pm\frac{2n-1}{2}\pi\, i, \qquad s = -\left(\frac{2n-1}{2}\right)^2\pi^2 = -\lambda_n^2, \qquad n = 1,2,3,\ldots$$
or at $s = 0$. When $s = 0$ the residue is
$$\mathrm{Res} = \lim_{s\to 0} sU(s)e^{s\tau} = 1$$
The denominator of the second term is $s\cosh\sqrt{s}$, and its derivative with respect to $s$ is
$$\cosh\sqrt{s} + \frac{\sqrt{s}}{2}\sinh\sqrt{s}$$
When $s = -\lambda_n^2$, the residue of the second term is
$$\lim_{s\to-\lambda_n^2}\frac{(1-Q)\cosh(\varsigma\sqrt{s})}{\cosh\sqrt{s} + \frac{\sqrt{s}}{2}\sinh\sqrt{s}}\,e^{s\tau}$$
and since $\sinh\sqrt{s} = i\sin\left(\frac{2n-1}{2}\pi\right) = i(-1)^{n+1}$ and $\cosh(\varsigma\sqrt{s}) = \cos\left(\frac{2n-1}{2}\varsigma\pi\right)$, the residue of
$$\mathcal{L}^{-1}\left[\frac{\cosh(\varsigma\sqrt{s})}{s\cosh\sqrt{s}}\right]$$
at $s = -\lambda_n^2$ is
$$\frac{\cos(\lambda_n\varsigma)}{\frac{\lambda_n}{2}(-1)^n}\,e^{-\lambda_n^2\tau} = \frac{2(-1)^n}{\lambda_n}\cos(\lambda_n\varsigma)\,e^{-\lambda_n^2\tau}$$
We now use the convolution principle to evaluate the solution for the general case of $q(\tau)$. We are searching for the inverse transform of
$$\frac{1}{s}\frac{\cosh(\varsigma\sqrt{s})}{\cosh\sqrt{s}} + \frac{Q(s)}{s}\left[1 - \frac{\cosh(\varsigma\sqrt{s})}{\cosh\sqrt{s}}\right]$$
Summing the residues found above, the inverse transform of the first term is
$$1 + \sum_{n=1}^{\infty}\frac{2(-1)^n}{\lambda_n}\cos(\lambda_n\varsigma)\,e^{-\lambda_n^2\tau}$$
The inverse transform of $Q(s)$ is simply $q(\tau)$, and the inverse transform of $\frac{1}{s}\left[1 - \cosh(\varsigma\sqrt{s})/\cosh\sqrt{s}\right]$ is the above expression subtracted from 1, i.e.
$$\sum_{n=1}^{\infty}\frac{2(-1)^{n+1}}{\lambda_n}\cos(\lambda_n\varsigma)\,e^{-\lambda_n^2\tau}$$
According to the convolution principle,
$$u(\varsigma,\tau) = 1 + \sum_{n=1}^{\infty}\frac{2(-1)^n}{\lambda_n}\cos(\lambda_n\varsigma)\,e^{-\lambda_n^2\tau}
+ \int_{\tau'=0}^{\tau}\sum_{n=1}^{\infty}\frac{2(-1)^{n+1}}{\lambda_n}\cos(\lambda_n\varsigma)\,e^{-\lambda_n^2\tau'}\,q(\tau-\tau')\,d\tau'$$

Example 8.6. Next consider heat conduction in a semiinfinite region $x > 0$, $t > 0$. The initial temperature is zero and the wall is subjected to a temperature $u(0,t) = f(t)$ at the $x = 0$ surface.
$$u_t = u_{xx}, \qquad u(x,0) = 0, \qquad u(0,t) = f(t)$$
and $u$ is bounded. Taking the Laplace transform and applying the initial condition,
$$sU = U_{xx}$$
Thus
$$U(x,s) = A\sinh(x\sqrt{s}) + B\cosh(x\sqrt{s})$$
Both functions are unbounded as $x\to\infty$, so it is more convenient to use the equivalent solution
$$U(x,s) = Ae^{-x\sqrt{s}} + Be^{x\sqrt{s}} = Ae^{-x\sqrt{s}}$$
in order for the function to be bounded. Applying the boundary condition at $x = 0$,
$$F(s) = A$$
Thus we have
$$U(x,s) = F(s)\,e^{-x\sqrt{s}}$$
Multiplying and dividing by $s$ gives
$$U(x,s) = sF(s)\,\frac{e^{-x\sqrt{s}}}{s}$$
The inverse transform of $e^{-x\sqrt{s}}/s$ is
$$\mathcal{L}^{-1}\left[\frac{e^{-x\sqrt{s}}}{s}\right] = \operatorname{erfc}\frac{x}{2\sqrt{t}}$$
and we have seen that
$$\mathcal{L}\{f'\} = sF(s) - f(0)$$
Thus, making use of convolution, we find
$$u(x,t) = f(0)\operatorname{erfc}\frac{x}{2\sqrt{t}} + \int_{\mu=0}^{t} f'(t-\mu)\operatorname{erfc}\frac{x}{2\sqrt{\mu}}\,d\mu$$

Example 8.7. Now consider a problem in cylindrical coordinates. An infinite cylinder is initially at dimensionless temperature $u(r,0) = 1$ with dimensionless temperature at the surface $u(1,t) = 0$. We have
$$\frac{\partial u}{\partial t} = \frac{1}{r}\frac{\partial}{\partial r}\left(r\frac{\partial u}{\partial r}\right)$$
$$u(1,t) = 0, \qquad u(r,0) = 1, \qquad u \text{ bounded}$$
The Laplace transform with respect to time yields
$$sU(r,s) - 1 = \frac{1}{r}\frac{d}{dr}\left(r\frac{dU}{dr}\right)$$
with $U(1,s) = 0$.
Obtaining the homogeneous and particular solutions yields
$$U(r,s) = \frac{1}{s} + AJ_0(i\sqrt{s}\,r) + BY_0(i\sqrt{s}\,r)$$
The boundedness condition requires that $B = 0$, while the condition at $r = 1$ gives
$$A = -\frac{1}{sJ_0(i\sqrt{s})}$$
Thus
$$U(r,s) = \frac{1}{s} - \frac{J_0(i\sqrt{s}\,r)}{sJ_0(i\sqrt{s})}$$
The inverse transform is as follows:
$$u(r,t) = 1 - \sum \text{Residues of } \frac{e^{st}J_0(i\sqrt{s}\,r)}{sJ_0(i\sqrt{s})}$$
Poles of the function occur at $s = 0$ and at $J_0(i\sqrt{s}) = 0$, that is at $i\sqrt{s} = \lambda_n$, the roots of the Bessel function of the first kind of order zero. Thus they occur at $s = -\lambda_n^2$. The residues are
$$\lim_{s\to 0}\frac{e^{st}J_0(i\sqrt{s}\,r)}{J_0(i\sqrt{s})} = 1$$
and
$$\lim_{s\to-\lambda_n^2}\frac{e^{st}J_0(i\sqrt{s}\,r)}{\frac{d}{ds}\left[sJ_0(i\sqrt{s})\right]}
= \frac{e^{-\lambda_n^2 t}J_0(\lambda_n r)}{-\frac{1}{2}\lambda_n J_1(\lambda_n)}$$
The two unity residues cancel and the final solution is as follows:
$$u(r,t) = \sum_{n=1}^{\infty}\frac{2\,e^{-\lambda_n^2 t}J_0(\lambda_n r)}{\lambda_n J_1(\lambda_n)}$$

Problems

1. Consider a finite wall with initial temperature zero and the wall at $x = 0$ insulated. The wall at $x = 1$ is subjected to a temperature $u(1,t) = f(t)$ for $t > 0$. Find $u(x,t)$.
2. Consider a finite wall with initial temperature zero and with the temperature at $x = 0$ given by $u(0,t) = 0$. The temperature gradient at $x = 1$ suddenly becomes $u_x(1,t) = f(t)$ for $t > 0$. Find the temperature when $f(t) = 1$ and for general $f(t)$.
3. A cylinder is initially at temperature $u = 1$ and the surface is subject to a convective boundary condition $u_r(t,1) + Hu(t,1) = 0$. Find $u(t,r)$.
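As a sketch of how the cylinder solution can be checked numerically with the standard library alone, the code below builds $J_0$ and $J_1$ from their power series (adequate for the moderate arguments needed here), finds the first five roots $\lambda_n$ of $J_0$ by bisection, and verifies that the truncated series satisfies both the surface condition $u(1,t) = 0$ and the heat equation, the latter via finite differences of the series itself. The point $(r,t) = (0.5,\,0.2)$ is an arbitrary test location.

```python
import math

def J0(z, terms=40):
    """Power series for the Bessel function J0; fine for |z| <~ 15."""
    s, term = 0.0, 1.0
    for k in range(terms):
        if k > 0:
            term *= -(z * z / 4.0) / (k * k)
        s += term
    return s

def J1(z, terms=40):
    """Power series for J1."""
    s, term = 0.0, z / 2.0
    for k in range(terms):
        if k > 0:
            term *= -(z * z / 4.0) / (k * (k + 1))
        s += term
    return s

def bisect(f, lo, hi, it=100):
    for _ in range(it):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# First five roots of J0, bracketed between known sign changes
ROOTS = [bisect(J0, a, b) for a, b in [(2, 3), (5, 6), (8, 9), (11, 12), (14, 15)]]

def u(r, t):
    """Truncated series solution for the cooling cylinder."""
    return sum(2.0 * math.exp(-lam * lam * t) * J0(lam * r) / (lam * J1(lam))
               for lam in ROOTS)
```

For $t \gtrsim 0.1$ the first few terms dominate, so five roots are plenty; near $t = 0$ many more roots would be needed to resolve the initial condition $u(r,0) = 1$.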
8.3 DUHAMEL'S THEOREM

We are now prepared to solve the more general problem
$$\nabla^2 u + g(\mathbf{r},t) = \frac{\partial u}{\partial t} \tag{8.1}$$
where $\mathbf{r}$ may be considered a vector, that is, the problem is in three dimensions. The general boundary conditions are
$$\frac{\partial u}{\partial n_i} + h_i u = f_i(\mathbf{r},t) \quad\text{on the boundary } S_i \tag{8.2}$$
and
$$u(\mathbf{r},0) = F(\mathbf{r}) \tag{8.3}$$
initially. Here $\partial u/\partial n_i$ represents the normal derivative of $u$ at the surface. We present Duhamel's theorem without proof. Consider the auxiliary problem
$$\nabla^2 P + g(\mathbf{r},\lambda) = \frac{\partial P}{\partial t} \tag{8.4}$$
where $\lambda$ is a timelike constant, with boundary conditions
$$\frac{\partial P}{\partial n_i} + h_i P = f_i(\mathbf{r},\lambda) \quad\text{on the boundary } S_i \tag{8.5}$$
and initial condition
$$P(\mathbf{r},0) = F(\mathbf{r}) \tag{8.6}$$
The solution of Eqs. (8.1), (8.2), and (8.3) is as follows:
$$u(x,y,z,t) = \frac{\partial}{\partial t}\int_{\lambda=0}^{t} P(x,y,z,\lambda,t-\lambda)\,d\lambda
= F(x,y,z) + \int_{\lambda=0}^{t}\frac{\partial}{\partial t}P(x,y,z,\lambda,t-\lambda)\,d\lambda \tag{8.7}$$
This is Duhamel's theorem. For a proof, refer to the book by Arpaci.

Example 8.8. Consider now the following problem with a time-dependent heat source:
$$u_t = u_{xx} + xe^{-t}$$
$$u(0,t) = u(1,t) = 0, \qquad u(x,0) = 0$$
We first solve the problem
$$P_t = P_{xx} + xe^{-\lambda}$$
$$P(0,t) = P(1,t) = 0, \qquad P(x,0) = 0$$
while holding $\lambda$ constant. Recall from Chapter 2 that one technique in this case is to assume a solution of the form $P(x,\lambda,t) = X(x) + W(x,\lambda,t)$ so that
$$W_t = W_{xx}, \qquad W(0,\lambda,t) = W(1,\lambda,t) = 0, \qquad W(x,\lambda,0) = -X(x,\lambda)$$
and
$$X_{xx} + xe^{-\lambda} = 0, \qquad X(0) = X(1) = 0$$
Separating variables in the equation for $W$, we find that for $W(x,\lambda,t) = S(x)Q(t)$,
$$\frac{Q_t}{Q} = \frac{S_{xx}}{S} = -\beta^2$$
The minus sign has been chosen so that $Q$ remains bounded. The boundary conditions on $S(x)$ are $S(0) = S(1) = 0$. The solution gives
$$S = A\sin(\beta x) + B\cos(\beta x), \qquad Q = Ce^{-\beta^2 t}$$
Applying the boundary condition at $x = 0$ requires that $B = 0$, and applying the boundary condition at $x = 1$ requires that $\sin\beta = 0$, or $\beta = n\pi$. Solving for $X(x)$ and applying the boundary conditions gives
$$X = \frac{x}{6}(1-x^2)e^{-\lambda} = -W(x,\lambda,0)$$
The solution for $W$ is then obtained by superposition:
$$W(x,\lambda,t) = \sum_{n=1}^{\infty} K_n e^{-n^2\pi^2 t}\sin(n\pi x)$$
and using the orthogonality principle,
$$e^{-\lambda}\int_{x=0}^{1}\frac{x}{6}(x^2-1)\sin(n\pi x)\,dx = K_n\int_{x=0}^{1}\sin^2(n\pi x)\,dx = \frac{1}{2}K_n$$
so
$$W(x,\lambda,t) = \sum_{n=1}^{\infty}\left[e^{-\lambda}\int_{x=0}^{1}\frac{x}{3}(x^2-1)\sin(n\pi x)\,dx\right]e^{-n^2\pi^2 t}\sin(n\pi x)$$
$$P(x,\lambda,t) = \frac{x}{6}(1-x^2)e^{-\lambda} + \sum_{n=1}^{\infty}\left[\int_{x=0}^{1}\frac{x}{3}(x^2-1)\sin(n\pi x)\,dx\right]\sin(n\pi x)\,e^{-n^2\pi^2 t}e^{-\lambda}$$
and
$$P(x,\lambda,t-\lambda) = \frac{x}{6}(1-x^2)e^{-\lambda} + \sum_{n=1}^{\infty}\left[\int_{x=0}^{1}\frac{x}{3}(x^2-1)\sin(n\pi x)\,dx\right]\sin(n\pi x)\,e^{-n^2\pi^2 t}e^{(n^2\pi^2-1)\lambda}$$
$$\frac{\partial}{\partial t}P(x,\lambda,t-\lambda) = \sum_{n=1}^{\infty} n^2\pi^2\left[\int_{x=0}^{1}\frac{x}{3}(1-x^2)\sin(n\pi x)\,dx\right]\sin(n\pi x)\,e^{-n^2\pi^2 t}e^{(n^2\pi^2-1)\lambda}$$
According to Duhamel's theorem, the solution for $u(x,t)$ is then
$$u(x,t) = \sum_{n=1}^{\infty} n^2\pi^2\left[\int_{\xi=0}^{1}\frac{\xi}{3}(1-\xi^2)\sin(n\pi\xi)\,d\xi\right]\sin(n\pi x)\int_{\lambda=0}^{t} e^{-n^2\pi^2(t-\lambda)}e^{-\lambda}\,d\lambda$$
$$= \sum_{n=1}^{\infty}\frac{n^2\pi^2}{n^2\pi^2 - 1}\left[\int_{\xi=0}^{1}\frac{\xi}{3}(1-\xi^2)\sin(n\pi\xi)\,d\xi\right]\left[e^{-t} - e^{-n^2\pi^2 t}\right]\sin(n\pi x)$$

Example 8.9. Reconsider Example 8.6, in which $u_t = u_{xx}$ on the half space, with
$$u(x,0) = 0, \qquad u(0,t) = f(t)$$
To solve this using Duhamel's theorem, we first set $f(t) = f(\lambda)$ with $\lambda$ a timelike constant. Following the procedure outlined at the beginning of Example 8.6, we find
$$U(x,s) = f(\lambda)\,\frac{e^{-x\sqrt{s}}}{s}$$
The inverse transform is as follows:
$$u(x,t,\lambda) = f(\lambda)\operatorname{erfc}\frac{x}{2\sqrt{t}}$$
Using Duhamel's theorem,
$$u(x,t) = \int_{\lambda=0}^{t}\frac{\partial}{\partial t}\left[f(\lambda)\operatorname{erfc}\frac{x}{2\sqrt{t-\lambda}}\right]d\lambda$$
which is a different form of the solution given in Example 8.6.

Problems

1. Show that the solutions given in Examples 8.6 and 8.9 are equivalent.
2. Use Duhamel's theorem along with Laplace transforms to solve the following conduction problem on the half space:
$$u_t = u_{xx}, \qquad u(x,0) = 0, \qquad u_x(0,t) = f(t)$$
3. Solve the following problem first using separation of variables:
$$\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2} + \sin(\pi x)$$
$$u(t,0) = 0, \qquad u(t,1) = 0, \qquad u(0,x) = 0$$
4. Consider now the problem
$$\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2} + \sin(\pi x)\,te^{-t}$$
with the same boundary conditions as Problem 3. Solve using Duhamel's theorem.
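The Duhamel-theorem result of Example 8.8 can be cross-checked against a direct numerical solution of the original problem. The sketch below evaluates the final series (computing the integral coefficients by a midpoint rule) and compares it with an explicit finite-difference march of $u_t = u_{xx} + xe^{-t}$; the grid sizes and the comparison time $t = 0.5$ are arbitrary choices, not values from the text.

```python
import math

PI = math.pi

def coeff(n, m=2000):
    """Midpoint-rule value of the integral of (x/3)(1-x^2) sin(n*pi*x) on (0,1)."""
    h = 1.0 / m
    s = 0.0
    for k in range(m):
        x = (k + 0.5) * h
        s += (x / 3.0) * (1.0 - x * x) * math.sin(n * PI * x)
    return s * h

def u_series(x, t, N=25):
    """Truncated Duhamel series of Example 8.8."""
    s = 0.0
    for n in range(1, N + 1):
        w = (n * PI) ** 2
        s += w / (w - 1.0) * coeff(n) * (math.exp(-t) - math.exp(-w * t)) * math.sin(n * PI * x)
    return s

def fd_solution(t_end=0.5, M=50, dt=1e-4):
    """Explicit finite-difference solution of u_t = u_xx + x*exp(-t), zero BCs/IC."""
    dx = 1.0 / M
    u = [0.0] * (M + 1)
    t = 0.0
    for _ in range(int(round(t_end / dt))):
        new = [0.0] * (M + 1)
        for i in range(1, M):
            lap = (u[i + 1] - 2.0 * u[i] + u[i - 1]) / dx**2
            new[i] = u[i] + dt * (lap + (i * dx) * math.exp(-t))
        u = new
        t += dt
    return u
```

Agreement of the two independent computations to a few parts in a thousand is a strong consistency check on the derivation.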
FURTHER READING

V. S. Arpaci, Conduction Heat Transfer, Reading, MA: Addison-Wesley, 1966.
R. V. Churchill, Operational Mathematics, 3rd ed. New York: McGraw-Hill, 1972.
I. H. Sneddon, The Use of Integral Transforms, New York: McGraw-Hill, 1972.
CHAPTER 9

Sturm–Liouville Transforms

Sturm–Liouville transforms include a variety of choices of the kernel function $K(s,t)$ that was presented in the general transform equation at the beginning of Chapter 6. We first illustrate the idea with a simple example of the Fourier sine transform, which is a special case of a Sturm–Liouville transform. We then move on to the general case and work out some examples.

9.1 A PRELIMINARY EXAMPLE: FOURIER SINE TRANSFORM

Example 9.1. Consider the boundary value problem
$$u_t = u_{xx}, \qquad 0 \le x \le 1$$
with boundary conditions
$$u(0,t) = 0, \qquad u_x(1,t) + Hu(1,t) = 0$$
and initial condition
$$u(x,0) = 1$$
Multiply both sides of the differential equation by $\sin(\lambda x)\,dx$ and integrate over the interval $0 \le x \le 1$:
$$\int_{x=0}^{1}\sin(\lambda x)\frac{d^2 u}{dx^2}\,dx = \frac{d}{dt}\int_{x=0}^{1}u(x,t)\sin(\lambda x)\,dx$$
Integration of the left-hand side by parts yields
$$\int_{x=0}^{1}\frac{d^2}{dx^2}\left[\sin(\lambda x)\right]u(x,t)\,dx + \left[\sin(\lambda x)\frac{du}{dx} - u\frac{d}{dx}\sin(\lambda x)\right]_0^1$$
and applying the boundary conditions and noting that
$$\frac{d^2}{dx^2}\left[\sin(\lambda x)\right] = -\lambda^2\sin(\lambda x)$$
we have
$$-\lambda^2\int_{x=0}^{1}\sin(\lambda x)u(x,t)\,dx + \left[u_x\sin(\lambda x) - \lambda u\cos(\lambda x)\right]_0^1
= -\lambda^2 U(\lambda,t) - u(1,t)\left[\lambda\cos\lambda + H\sin\lambda\right]$$
Defining
$$S_\lambda\{u(x,t)\} = \int_{x=0}^{1}u(x,t)\sin(\lambda x)\,dx = U(\lambda,t)$$
as the Fourier sine transform of $u(x,t)$ and setting
$$\lambda\cos\lambda + H\sin\lambda = 0$$
we find
$$U_t(\lambda,t) = -\lambda^2 U(\lambda,t)$$
whose solution is $U(\lambda,t) = Ae^{-\lambda^2 t}$. The initial condition of the transformed function is
$$U(\lambda,0) = \int_{x=0}^{1}\sin(\lambda x)\,dx = \frac{1}{\lambda}\left[1 - \cos\lambda\right]$$
Applying the initial condition we find
$$U(\lambda,t) = \frac{1}{\lambda}\left[1 - \cos\lambda\right]e^{-\lambda^2 t}$$
It now remains to find from this the value of $u(x,t)$. Recall from the general theory of Fourier series that any odd function of $x$ defined on $0 \le x \le 1$ can be expanded in a Fourier sine series in the form
$$u(x,t) = \sum_{n=1}^{\infty}\frac{\sin(\lambda_n x)}{\|\sin(\lambda_n x)\|^2}\int_{\xi=0}^{1}u(\xi,t)\sin(\lambda_n\xi)\,d\xi$$
and this is simply
$$u(x,t) = \sum_{n=1}^{\infty}\frac{\sin(\lambda_n x)}{\|\sin(\lambda_n x)\|^2}\,U(\lambda_n,t)$$
with $\lambda_n$ given by the transcendental equation above. The final solution is therefore
$$u(x,t) = \sum_{n=1}^{\infty}\frac{2(1-\cos\lambda_n)}{\lambda_n - \frac{1}{2}\sin(2\lambda_n)}\sin(\lambda_n x)\,e^{-\lambda_n^2 t}$$

9.2 GENERALIZATION: THE STURM–LIOUVILLE TRANSFORM: THEORY

Consider the differential operator $D$,
$$D[f(x)] = A(x)f'' + B(x)f' + C(x)f, \qquad a \le x \le b \tag{9.1}$$
with boundary conditions of the form
$$N_\alpha[f(x)]_{x=a} = f(a)\cos\alpha + f'(a)\sin\alpha$$
$$N_\beta[f(x)]_{x=b} = f(b)\cos\beta + f'(b)\sin\beta \tag{9.2}$$
where the symbols $N_\alpha$ and $N_\beta$ are differential operators that define the boundary conditions. For example, the differential operator might be $D[f(x)] = f_{xx}$ and the boundary conditions might be defined by the operators
$$N_\alpha[f(x)]_{x=a} = f(a) = 0$$
and
$$N_\beta[f(x)]_{x=b} = f'(b) + Hf(b) = 0$$
We define an integral transformation
$$T[f(x)] = \int_a^b f(x)K(x,\lambda)\,dx = F(\lambda) \tag{9.3}$$
We wish to transform these differential forms into algebraic forms. First we write the differential operator in standard form. Let
$$r(x) = \exp\int_a^x\frac{B(\xi)}{A(\xi)}\,d\xi, \qquad p(x) = \frac{r(x)}{A(x)}, \qquad q(x) = -p(x)C(x) \tag{9.4}$$
Then
$$D[f(x)] = \frac{1}{p(x)}\left[(rf')' - qf\right] = \frac{1}{p(x)}\ell[f(x)] \tag{9.5}$$
where $\ell$ is the Sturm–Liouville operator. Let the kernel function $K(x,\lambda)$ in Eq. (9.3) be
$$K(x,\lambda) = p(x)\Phi(x,\lambda) \tag{9.6}$$
Then
$$T[D[f(x)]] = \int_a^b\Phi(x,\lambda)\,\ell[f(x)]\,dx = \int_a^b f(x)\,\ell[\Phi(x,\lambda)]\,dx + \left[(f_x\Phi - \Phi_x f)\,r(x)\right]_a^b \tag{9.7}$$
while
$$N_\alpha[f(a)] = f(a)\cos\alpha + f'(a)\sin\alpha$$
$$N'_\alpha[f(a)] = \frac{d}{d\alpha}\left[f(a)\cos\alpha + f'(a)\sin\alpha\right] = -f(a)\sin\alpha + f'(a)\cos\alpha \tag{9.8}$$
so that
$$f(a) = N_\alpha[f(a)]\cos\alpha - N'_\alpha[f(a)]\sin\alpha$$
$$f'(a) = N_\alpha[f(a)]\sin\alpha + N'_\alpha[f(a)]\cos\alpha \tag{9.9}$$
where the prime on $N$ indicates differentiation with respect to $\alpha$.
The boundary term at $x = a$ is then
$$\left[\Phi(a,\lambda)f'(a) - \Phi'(a,\lambda)f(a)\right]r(a)
= \begin{Bmatrix}\Phi(a,\lambda)\left(N_\alpha[f(a)]\sin\alpha + N'_\alpha[f(a)]\cos\alpha\right)\\[2pt] -\,\Phi'(a,\lambda)\left(N_\alpha[f(a)]\cos\alpha - N'_\alpha[f(a)]\sin\alpha\right)\end{Bmatrix}r(a) \tag{9.10}$$
But if $\Phi(x,\lambda)$ is chosen to satisfy the Sturm–Liouville equation and the boundary conditions, then
$$N_\alpha[\Phi(x,\lambda)]_{x=a} = \Phi(a,\lambda)\cos\alpha + \Phi'(a,\lambda)\sin\alpha$$
$$N_\beta[\Phi(x,\lambda)]_{x=b} = \Phi(b,\lambda)\cos\beta + \Phi'(b,\lambda)\sin\beta \tag{9.11}$$
and
$$\Phi(a,\lambda) = N_\alpha[\Phi(a,\lambda)]\cos\alpha - N'_\alpha[\Phi(a,\lambda)]\sin\alpha$$
$$\Phi'(a,\lambda) = N_\alpha[\Phi(a,\lambda)]\sin\alpha + N'_\alpha[\Phi(a,\lambda)]\cos\alpha \tag{9.12}$$
Substituting (9.9) and (9.12) into (9.10) and using $\cos^2\alpha + \sin^2\alpha = 1$, the boundary term collapses to
$$\left\{N_\alpha[\Phi(a,\lambda)]N'_\alpha[f(a)] - N'_\alpha[\Phi(a,\lambda)]N_\alpha[f(a)]\right\}r(a) \tag{9.13}$$
If the kernel function is chosen so that $N_\alpha[\Phi(a,\lambda)] = 0$, for example, the lower boundary term is
$$-N_\alpha[f(a)]N'_\alpha[\Phi(a,\lambda)]\,r(a) \tag{9.14}$$
Similarly, at $x = b$,
$$\left[\Phi(b,\lambda)f'(b) - \Phi'(b,\lambda)f(b)\right]r(b) = -N_\beta[f(b)]N'_\beta[\Phi(b,\lambda)]\,r(b) \tag{9.15}$$
Since $\Phi(x,\lambda)$ satisfies the Sturm–Liouville equation, its solutions $\Phi_n$ form a set of orthogonal functions with weight function $p(x)$ and
$$\ell[\Phi_n(x,\lambda_n)] = -\lambda_n^2\,p(x)\,\Phi_n(x,\lambda_n) \tag{9.16}$$
so that
$$T[D[f(x)]] = -\lambda_n^2\int_{x=a}^{b}p(x)f(x)\Phi_n(x,\lambda_n)\,dx + N_\alpha[f(a)]N'_\alpha[\Phi_n(a,\lambda)]\,r(a) - N_\beta[f(b)]N'_\beta[\Phi_n(b,\lambda)]\,r(b) \tag{9.17}$$
where
$$\lambda_n^2\int_a^b p(x)f(x)\Phi_n(x,\lambda_n)\,dx = \lambda_n^2 F(\lambda_n) \tag{9.18}$$

9.3 THE INVERSE TRANSFORM

The great thing about Sturm–Liouville transforms is that the inversion is so easy. Recall that the generalized Fourier series of a function $f(x)$ is
$$f(x) = \sum_{n=1}^{\infty}\frac{\Phi_n(x,\lambda_n)}{\|\Phi_n\|^2}\int_a^b f(\xi)p(\xi)\Phi_n(\xi,\lambda_n)\,d\xi = \sum_{n=1}^{\infty}\frac{\Phi_n(x)}{\|\Phi_n\|^2}\,F(\lambda_n) \tag{9.19}$$
where the functions $\Phi_n(x,\lambda_n)$ form an orthogonal set with respect to the weight function $p(x)$.

Example 9.2 (The cosine transform). Consider the diffusion equation
$$y_t = y_{xx}, \qquad 0 \le x \le 1, \qquad t > 0$$
$$y_x(0,t) = y(1,t) = 0, \qquad y(x,0) = f(x)$$
To find the proper kernel function $K(x,\lambda)$ we note that according to Eq. (9.16), $\Phi_n(x,\lambda_n)$ must satisfy the Sturm–Liouville equation
$$\ell[\Phi_n(x,\lambda)] = -\lambda^2 p(x)\Phi_n(x,\lambda)$$
where for the current problem
$$\ell[\Phi_n(x,\lambda)] = \frac{d^2}{dx^2}\Phi_n(x,\lambda) \quad\text{and}\quad p(x) = 1$$
along with the boundary conditions (9.11),
$$N_\alpha[\Phi(x,\lambda)]_{x=a} = \Phi_x(0,\lambda) = 0, \qquad N_\beta[\Phi(x,\lambda)]_{x=b} = \Phi(1,\lambda) = 0$$
Solution of this differential equation and application of the boundary conditions yields an infinite number of functions (as for any Sturm–Liouville problem)
$$\Phi(x,\lambda_n) = A\cos(\lambda_n x) \quad\text{with}\quad \cos\lambda_n = 0, \qquad \lambda_n = \frac{(2n-1)}{2}\pi$$
Thus, the appropriate kernel function is $K(x,\lambda_n) = \cos(\lambda_n x)$ with $\lambda_n = (2n-1)\pi/2$. Using this kernel function in the original partial differential equation, we find
$$\frac{dY}{dt} = -\lambda_n^2 Y$$
where $C_\lambda\{y(x,t)\} = Y(t,\lambda_n)$ is the cosine transform of $y$. The solution gives $Y(t,\lambda_n) = Be^{-\lambda_n^2 t}$, and applying the cosine transform of the initial condition,
$$B = \int_{x=0}^{1}f(x)\cos(\lambda_n x)\,dx$$
According to Eq. (9.19) the solution is as follows:
$$y(x,t) = \sum_{n=1}^{\infty}\frac{\cos(\lambda_n x)}{\|\cos(\lambda_n x)\|^2}\left[\int_{x=0}^{1}f(x)\cos(\lambda_n x)\,dx\right]e^{-\lambda_n^2 t}$$

Example 9.3 (The Hankel transform). Next consider the diffusion equation in cylindrical coordinates,
$$u_t = \frac{1}{r}\frac{\partial}{\partial r}\left(r\frac{\partial u}{\partial r}\right)$$
Boundary and initial conditions are prescribed as
$$u_r(t,0) = 0, \qquad u(t,1) = 0, \qquad u(0,r) = f(r)$$
First we find the proper kernel function:
$$\ell[\Phi(r,\lambda_n)] = \frac{d}{dr}\left(r\frac{d\Phi_n}{dr}\right) = -\lambda_n^2\,r\,\Phi_n$$
with boundary conditions
$$\Phi_r(0,\lambda_n) = 0, \qquad \Phi(1,\lambda_n) = 0$$
The solution is the Bessel function $J_0(\lambda_n r)$, with $\lambda_n$ given by $J_0(\lambda_n) = 0$. Thus the transform of $u(t,r)$ is as follows:
$$H_\lambda\{u(t,r)\} = U(t,\lambda_n) = \int_{r=0}^{1}rJ_0(\lambda_n r)u(t,r)\,dr$$
This is called a Hankel transform. The appropriate differential equation for $U(t,\lambda_n)$ is
$$\frac{dU_n}{dt} = -\lambda_n^2 U_n$$
so that $U_n(t,\lambda_n) = Be^{-\lambda_n^2 t}$. Applying the initial condition, we find
$$B = \int_{r=0}^{1}rf(r)J_0(\lambda_n r)\,dr$$
and from Eq. (9.19),
$$u(t,r) = \sum_{n=1}^{\infty}\left[\int_{r=0}^{1}rf(r)J_0(\lambda_n r)\,dr\right]\frac{J_0(\lambda_n r)}{\|J_0(\lambda_n r)\|^2}\,e^{-\lambda_n^2 t}$$
where $\|J_0(\lambda_n r)\|^2 = \int_0^1 rJ_0^2(\lambda_n r)\,dr = \tfrac{1}{2}J_1^2(\lambda_n)$.

Example 9.4 (The sine transform with a source). Next consider one-dimensional transient diffusion with a source term $q(x)$:
$$u_t = u_{xx} + q(x)$$
$$u(0,x) = 0, \qquad u(t,0) = u(t,\pi) = 0$$
First we determine that the sine transform is appropriate. The operator is such that
$$\ell[\Phi] = \Phi_{xx} = \lambda\Phi$$
and according to the boundary conditions we must choose $\Phi = \sin(nx)$ and $\lambda = -n^2$. The sine transform of $q(x)$ is $Q_n$. Then
$$U_t = -n^2 U + Q_n, \qquad U = U(\lambda,t)$$
The homogeneous and particular solutions give
$$U_n = Ce^{-n^2 t} + \frac{Q_n}{n^2}$$
When $t = 0$, $U = 0$, so that $C = -Q_n/n^2$, where $Q_n$ is given by
$$Q_n = \int_{x=0}^{\pi}q(x)\sin(nx)\,dx$$
Since
$$U_n = \frac{Q_n}{n^2}\left[1 - e^{-n^2 t}\right]$$
the solution is
$$u(x,t) = \sum_{n=1}^{\infty}\frac{Q_n}{n^2}\left[1 - e^{-n^2 t}\right]\frac{\sin(nx)}{\|\sin(nx)\|^2}$$
where $\|\sin(nx)\|^2 = \pi/2$ on $(0,\pi)$. Note that $Q_n$ is just the $n$th term of the Fourier sine series of $q(x)$. For example, if $q(x) = x$,
$$Q_n = \frac{\pi}{n}(-1)^{n+1}$$

Example 9.5 (A mixed transform). Consider steady temperatures in a half cylinder of infinite length with internal heat generation $q(r)$ that is a function of the radial position. The appropriate differential equation is
$$u_{rr} + \frac{1}{r}u_r + \frac{1}{r^2}u_{\theta\theta} + u_{zz} + q(r) = 0, \qquad 0 \le r \le 1, \quad 0 \le z < \infty, \quad 0 \le \theta \le \pi$$
with boundary conditions
$$u(1,\theta,z) = 1, \qquad u(r,0,z) = u(r,\pi,z) = u(r,\theta,0) = 0$$
Let the sine transform of $u$ with respect to $\theta$ on the interval $(0,\pi)$ be denoted by $S_n\{u(r,\theta,z)\} = U_n(r,n,z)$. Then
$$\frac{\partial^2 U_n}{\partial r^2} + \frac{1}{r}\frac{\partial U_n}{\partial r} - \frac{n^2}{r^2}U_n + \frac{\partial^2 U_n}{\partial z^2} + q(r)S_n(1) = 0$$
where $S_n(1)$ is the sine transform of 1, and the boundary conditions for $u(r,\theta,z)$ on $\theta$ have been used. Note that the operator on $\Phi$ in the $r$ coordinate direction is
$$\ell[\Phi(r,\mu_j)] = \frac{1}{r}\left[\frac{d}{dr}\left(r\frac{d\Phi}{dr}\right) - \frac{n^2}{r}\Phi\right] = -\mu_j^2\,\Phi$$
With the boundary condition at $r = 1$ chosen as $\Phi(1,\mu_j) = 0$, this gives the kernel function as
$$K = rJ_n(\mu_j r)$$
with eigenvalues determined by $J_n(\mu_j) = 0$. We now apply the finite Hankel transform to the above partial differential equation and denote the Hankel transform of $U_n$ by $U_{jn}$. After applying the boundary condition on $r$ we find, after noting that
$$N_\beta[U_n(1,z)] = S_n(1), \qquad N'_\beta[\Phi(1,\mu_j)] = -\mu_j J_{n+1}(\mu_j)$$
that
$$-\mu_j^2 U_{jn} + \mu_j J_{n+1}(\mu_j)S_n(1) + \frac{d^2 U_{jn}}{dz^2} + Q_j(\mu_j)S_n(1) = 0$$
Here $Q_j(\mu_j)$ is the Hankel transform of $q(r)$. Solving the resulting ordinary differential equation and applying the boundary condition at $z = 0$,
$$U_{jn}(\mu_j,n,z) = S_n(1)\frac{Q_j(\mu_j) + \mu_j J_{n+1}(\mu_j)}{\mu_j^2}\left[1 - \exp(-\mu_j z)\right]$$
We now invert both the sine and Hankel transforms according to Eq. (9.19) and find that
$$u(r,\theta,z) = \frac{4}{\pi}\sum_{n=1}^{\infty}\sum_{j=1}^{\infty}\frac{U_{jn}(\mu_j,n,z)}{\left[J_{n+1}(\mu_j)\right]^2}J_n(\mu_j r)\sin(n\theta)$$
Note that $S_n(1) = \left[1 - (-1)^n\right]/n$.
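A quick consistency check on the transformed solution: $U_{jn}(z)$ should satisfy the ordinary differential equation above identically. The sketch below, using only standard-library Python, builds $J_1$ and $J_2$ from their power series, finds the first zero $\mu_1$ of $J_1$ (for the mode $n = 1$, so $J_{n+1} = J_2$) by bisection, and checks the ODE residual with a numerical second derivative. The value $Q = 0.7$ is a hypothetical stand-in for the Hankel transform of some $q(r)$, chosen only for illustration.

```python
import math

def bessel_series(z, order, terms=40):
    """Power series for the integer-order Bessel function J_order(z)."""
    term = (z / 2.0) ** order / math.factorial(order)
    s = 0.0
    for k in range(terms):
        if k > 0:
            term *= -(z * z / 4.0) / (k * (k + order))
        s += term
    return s

J1 = lambda z: bessel_series(z, 1)
J2 = lambda z: bessel_series(z, 2)

# First zero of J1 (bracketed near 3.83) by bisection
lo, hi = 3.5, 4.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if J1(lo) * J1(mid) <= 0 else (mid, hi)
mu = 0.5 * (lo + hi)

n = 1
S = (1 - (-1) ** n) / n          # sine transform of 1
Q = 0.7                          # hypothetical Hankel transform of q(r)
A = S * (Q + mu * J2(mu)) / mu**2

def U(z):
    """Transformed solution U_jn(z) for this mode."""
    return A * (1.0 - math.exp(-mu * z))
```

The residual $-\mu^2 U + \mu J_2(\mu)S + U'' + QS$ should vanish for every $z$, confirming the algebra leading to $U_{jn}$.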
Problems

Use an appropriate Sturm–Liouville transform to solve each of the following problems:

1. Chapter 3, Problem 1.
2. Chapter 2, Problem 2.
3. Chapter 3, Problem 3.
4. $$\frac{\partial u}{\partial t} = \frac{1}{r}\frac{\partial}{\partial r}\left(r\frac{\partial u}{\partial r}\right) + G \quad (G\ \text{constant})$$
$$u(r,0) = 0, \qquad u(1,t) = 0, \qquad u \text{ bounded}$$
5. Solve the following using an appropriate Sturm–Liouville transform:
$$\frac{\partial^2 u}{\partial x^2} = \frac{\partial u}{\partial t}$$
$$u(t,0) = 0, \qquad u(t,1) = 0, \qquad u(0,x) = \sin(\pi x)$$
6. Find the solution for general $\rho(t)$:
$$\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2}$$
$$u(t,0) = 0, \qquad u(t,1) = \rho(t), \qquad u(0,x) = 0$$

FURTHER READING

V. S. Arpaci, Conduction Heat Transfer, Reading, MA: Addison-Wesley, 1966.
R. V. Churchill, Operational Mathematics, 3rd ed. New York: McGraw-Hill, 1972.
I. H. Sneddon, The Use of Integral Transforms, New York: McGraw-Hill, 1972.
CHAPTER 10

Introduction to Perturbation Methods

Perturbation theory is an approximate method of solving equations that contain a parameter that is small in some sense. The method should result in an approximate solution that may be termed "precise" in the sense that the error (the difference between the approximate and exact solutions) is understood and controllable and can be made smaller by some rational technique. Perturbation methods are particularly useful in obtaining solutions to equations that are nonlinear or have variable coefficients. In addition, it is important to note that if the method yields a simple, accurate approximate solution of a problem, it may be more useful than an exact solution that is more complicated.

10.1 EXAMPLES FROM ALGEBRA

We begin with examples from algebra in order to introduce the ideas of regular perturbations and singular perturbations. We start with the problem of extracting the roots of a quadratic equation that contains a small parameter $\varepsilon \ll 1$.
10.1.1 Regular Perturbation

Consider, for example, the equation
$$x^2 + \varepsilon x - 1 = 0 \tag{10.1}$$
The exact solution for the roots is, of course, simply obtained from the quadratic formula:
$$x = -\frac{\varepsilon}{2} \pm \sqrt{1 + \frac{\varepsilon^2}{4}} \tag{10.2}$$
which for $\varepsilon = 0.1$ yields the exact roots
$$x = 0.951249220 \quad\text{and}\quad x = -1.051249220$$
Equation (10.2) can be expanded for small values of $\varepsilon$ in the rapidly convergent series
$$x = 1 - \frac{\varepsilon}{2} + \frac{\varepsilon^2}{8} - \frac{\varepsilon^4}{128} + \cdots \tag{10.3}$$
or
$$x = -1 - \frac{\varepsilon}{2} - \frac{\varepsilon^2}{8} + \frac{\varepsilon^4}{128} - \cdots \tag{10.4}$$
To apply perturbation theory we first note that if $\varepsilon = 0$ the two roots of the equation, which we will call the zeroth-order solutions, are $x_0 = \pm 1$. We assume a solution of the form
$$x = x_0 + a_1\varepsilon + a_2\varepsilon^2 + a_3\varepsilon^3 + a_4\varepsilon^4 + \cdots \tag{10.5}$$
Substituting (10.5) into (10.1) with $x_0 = 1$,
$$1 + (2a_1 + 1)\varepsilon + \left(a_1^2 + 2a_2 + a_1\right)\varepsilon^2 + (2a_1a_2 + 2a_3 + a_2)\varepsilon^3 + \cdots - 1 = 0 \tag{10.6}$$
Each of the coefficients of $\varepsilon^n$ must be zero. Solving for $a_n$ we find
$$a_1 = -\frac{1}{2}, \qquad a_2 = \frac{1}{8}, \qquad a_3 = 0 \tag{10.7}$$
so that the approximate solution for the root near $x = 1$ is
$$x = 1 - \frac{\varepsilon}{2} + \frac{\varepsilon^2}{8} + O(\varepsilon^4) \tag{10.8}$$
The symbol $O(\varepsilon^4)$ means that the next term in the series is of order $\varepsilon^4$. Performing the same operation with $x_0 = -1$,
$$1 - (1 + 2a_1)\varepsilon + \left(a_1^2 - 2a_2 + a_1\right)\varepsilon^2 + (2a_1a_2 - 2a_3 + a_2)\varepsilon^3 + \cdots - 1 = 0 \tag{10.9}$$
Again setting the coefficients of $\varepsilon^n$ equal to zero,
$$a_1 = -\frac{1}{2}, \qquad a_2 = -\frac{1}{8}, \qquad a_3 = 0 \tag{10.10}$$
so that the root near $x_0 = -1$ is
$$x = -1 - \frac{\varepsilon}{2} - \frac{\varepsilon^2}{8} + O(\varepsilon^4) \tag{10.11}$$
With $\varepsilon = 0.1$ the first three terms of (10.8) give $x = 0.95125$, and (10.11) gives $x = -1.05125$; both agree with the exact roots above to within about $10^{-4}$ percent, consistent with the omitted terms being $O(\varepsilon^4)$.

Next suppose the small parameter multiplies the squared term:
$$\varepsilon x^2 + x - 1 = 0 \tag{10.12}$$
Using the quadratic formula gives the exact solution
$$x = -\frac{1}{2\varepsilon} \pm \sqrt{\frac{1}{4\varepsilon^2} + \frac{1}{\varepsilon}} \tag{10.13}$$
If $\varepsilon = 0.1$, (10.13) gives two solutions:
$$x = 0.916079783 \quad\text{and}\quad x = -10.916079783$$
We attempt to follow the same procedure to obtain an approximate solution. If $\varepsilon = 0$ identically, $x_0 = 1$. Using (10.5) with $x_0 = 1$ and substituting into (10.12) we find
$$(1 + a_1)\varepsilon + (2a_1 + a_2)\varepsilon^2 + \left(a_1^2 + 2a_2 + a_3\right)\varepsilon^3 + \cdots = 0 \tag{10.14}$$
Setting the coefficients of $\varepsilon^n$ to zero, solving for $a_n$, and substituting into (10.5),
$$x = 1 - \varepsilon + 2\varepsilon^2 - 5\varepsilon^3 + \cdots \tag{10.15}$$
gives $x = 0.915$, close to the exact value. However, Eq. (10.12) clearly has two roots, and the method cannot give an approximation for the second root. The essential problem is that the second root is not small. In fact (10.13) shows that as $\varepsilon \to 0$ the second root grows like $1/\varepsilon$, so that the term $\varepsilon x^2$ is never negligible.
10.1.2 Singular Perturbation

Dividing (10.12) by $\varepsilon$ puts it in a normal form,
$$x^2 + \frac{x}{\varepsilon} - \frac{1}{\varepsilon} = 0 \tag{10.12a}$$
and the equation is said to be singular as $\varepsilon \to 0$. If we set $u = \varepsilon x$ we find an equation for $u$:
$$u^2 + u - \varepsilon = 0 \tag{10.16}$$
With $\varepsilon$ identically zero, $u = 0$ or $-1$. Assuming that the root near $u_0 = -1$ may be approximated by a series like (10.5), we find that
$$(-a_1 - 1)\varepsilon + \left(a_1^2 - a_2\right)\varepsilon^2 + (2a_1a_2 - a_3)\varepsilon^3 + \cdots = 0 \tag{10.17}$$
$$a_1 = -1, \qquad a_2 = 1, \qquad a_3 = -2 \tag{10.18}$$
so that
$$x = -\frac{1}{\varepsilon} - 1 + \varepsilon - 2\varepsilon^2 + \cdots \tag{10.19}$$
With $\varepsilon = 0.1$, retaining terms through $\varepsilon^2$ gives $x = -10.92$ for the negative root, within 0.04% of the exact solution.

As a third algebraic example consider
$$x^2 - 2\varepsilon x - \varepsilon = 0 \tag{10.20}$$
This at first seems like a harmless problem, apparently amenable to a regular perturbation expansion since the $x^2$ term is not lost when $\varepsilon \to 0$. We proceed optimistically by taking
$$x = x_0 + a_1\varepsilon + a_2\varepsilon^2 + a_3\varepsilon^3 + \cdots \tag{10.21}$$
Substituting into (10.20) we find
$$x_0^2 + (2x_0a_1 - 2x_0 - 1)\varepsilon + \left(a_1^2 + 2x_0a_2 - 2a_1\right)\varepsilon^2 + \cdots = 0 \tag{10.22}$$
from which we find
$$x_0 = 0, \qquad 2x_0a_1 - 2x_0 - 1 = 0, \qquad a_1^2 + 2x_0a_2 - 2a_1 = 0 \tag{10.23}$$
From the second of these we conclude that either $0 = -1$ or that there is something wrong. That is, (10.21) is not an appropriate expansion in this case. Note that (10.20) tells us that as $\varepsilon \to 0$, $x \to 0$. Moreover, in writing (10.21) we have essentially assumed that $\varepsilon \to 0$ in such a manner that $x/\varepsilon \to$ constant. Let us suppose instead
that as $\varepsilon \to 0$,
$$\frac{x(\varepsilon)}{\varepsilon^p} \to \text{constant} \tag{10.24}$$
We then define a new variable
$$x = \varepsilon^p\,v(\varepsilon) \tag{10.25}$$
such that $v(0) \ne 0$. Substitution into (10.20) yields
$$\varepsilon^{2p}v^2 - 2\varepsilon^{p+1}v - \varepsilon = Q \tag{10.26}$$
where $Q$ must be identically zero. Note that $Q/\varepsilon^k$ must then also be zero no matter how small $\varepsilon$ becomes, as long as $\varepsilon$ is not identically zero. Now, if $p > 1/2$, then $2p - 1 > 0$, and dividing by $\varepsilon$ gives $\varepsilon^{2p-1}v^2 - 2\varepsilon^p v - 1$, which tends to $-1$ as $\varepsilon \to 0$; this cannot be true given that $Q \equiv 0$. Next suppose $p < 1/2$. Dividing by $\varepsilon^{2p}$ gives $v^2 - 2\varepsilon^{1-p}v - \varepsilon^{1-2p}$, which in the limit $\varepsilon \to 0$ forces $v(0) = 0$, contradicting $v(0) \ne 0$. Thus $p = 1/2$ is the only possibility left, so we attempt a solution with this value. Hence
$$x = \varepsilon^{1/2}\,v(\varepsilon) \tag{10.27}$$
Substitution into (10.20) gives
$$v^2 - 2\sqrt{\varepsilon}\,v - 1 = 0 \tag{10.28}$$
and this can now be solved by a regular perturbation assuming $\beta = \sqrt{\varepsilon} \ll 1$. Hence
$$v = v_0 + a_1\beta + a_2\beta^2 + a_3\beta^3 + \cdots \tag{10.29}$$
Inserting this into (10.28) with $\beta = \sqrt{\varepsilon}$,
$$v_0^2 - 1 + (2v_0a_1 - 2v_0)\beta + \left(a_1^2 + 2v_0a_2 - 2a_1\right)\beta^2 + \cdots = 0 \tag{10.30}$$
Thus
$$v_0 = \pm 1, \qquad a_1 = 1, \qquad a_2 = +\frac{1}{2} \text{ or } -\frac{1}{2} \tag{10.31}$$
Thus, with $x = \varepsilon^{1/2}v$, the two roots are
$$x = \sqrt{\varepsilon} + \varepsilon + \frac{1}{2}\varepsilon^{3/2} + \cdots \qquad\text{and}\qquad x = -\sqrt{\varepsilon} + \varepsilon - \frac{1}{2}\varepsilon^{3/2} + \cdots$$
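All three algebraic examples can be verified directly, since the exact roots follow from the quadratic formula. The sketch below compares the regular expansion (10.15), the singular expansion (10.19), and the two fractional-power expansions above against the exact roots; the sample values $\varepsilon = 0.1$ and $\varepsilon = 0.01$ are the illustrative choices used here.

```python
import math

def exact_roots(a, b, c):
    """Both roots of a*x^2 + b*x + c = 0 (real discriminant assumed)."""
    d = math.sqrt(b * b - 4.0 * a * c)
    return (-b + d) / (2.0 * a), (-b - d) / (2.0 * a)

# eps*x^2 + x - 1 = 0, eps = 0.1
eps = 0.1
x_reg = 1 - eps + 2 * eps**2 - 5 * eps**3          # regular expansion (10.15)
x_sing = -1 / eps - 1 + eps - 2 * eps**2           # singular expansion (10.19)
r1, r2 = exact_roots(eps, 1.0, -1.0)               # 0.9161..., -10.9161...

# x^2 - 2*eps*x - eps = 0, eps = 0.01
e2 = 0.01
x_plus = math.sqrt(e2) + e2 + 0.5 * e2**1.5        # fractional-power expansions
x_minus = -math.sqrt(e2) + e2 - 0.5 * e2**1.5
r3, r4 = exact_roots(1.0, -2 * e2, -e2)
```

The agreement degrades as $\varepsilon$ grows, as expected of an asymptotic expansion, but for small $\varepsilon$ each truncated series tracks its exact root closely.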
Appendix A: The Roots of Certain Transcendental Equations

TABLE A.1: The first six roots,† $\alpha_n$, of $\alpha\tan\alpha = C$.

| C | α1 | α2 | α3 | α4 | α5 | α6 |
|---|---|---|---|---|---|---|
| 0 | 0 | 3.1416 | 6.2832 | 9.4248 | 12.5664 | 15.7080 |
| 0.001 | 0.0316 | 3.1419 | 6.2833 | 9.4249 | 12.5665 | 15.7080 |
| 0.002 | 0.0447 | 3.1422 | 6.2835 | 9.4250 | 12.5665 | 15.7081 |
| 0.004 | 0.0632 | 3.1429 | 6.2838 | 9.4252 | 12.5667 | 15.7082 |
| 0.006 | 0.0774 | 3.1435 | 6.2841 | 9.4254 | 12.5668 | 15.7083 |
| 0.008 | 0.0893 | 3.1441 | 6.2845 | 9.4256 | 12.5670 | 15.7085 |
| 0.01 | 0.0998 | 3.1448 | 6.2848 | 9.4258 | 12.5672 | 15.7086 |
| 0.02 | 0.1410 | 3.1479 | 6.2864 | 9.4269 | 12.5680 | 15.7092 |
| 0.04 | 0.1987 | 3.1543 | 6.2895 | 9.4290 | 12.5696 | 15.7105 |
| 0.06 | 0.2425 | 3.1606 | 6.2927 | 9.4311 | 12.5711 | 15.7118 |
| 0.08 | 0.2791 | 3.1668 | 6.2959 | 9.4333 | 12.5727 | 15.7131 |
| 0.1 | 0.3111 | 3.1731 | 6.2991 | 9.4354 | 12.5743 | 15.7143 |
| 0.2 | 0.4328 | 3.2039 | 6.3148 | 9.4459 | 12.5823 | 15.7207 |
| 0.3 | 0.5218 | 3.2341 | 6.3305 | 9.4565 | 12.5902 | 15.7270 |
| 0.4 | 0.5932 | 3.2636 | 6.3461 | 9.4670 | 12.5981 | 15.7334 |
| 0.5 | 0.6533 | 3.2923 | 6.3616 | 9.4775 | 12.6060 | 15.7397 |
| 0.6 | 0.7051 | 3.3204 | 6.3770 | 9.4879 | 12.6139 | 15.7460 |
| 0.7 | 0.7506 | 3.3477 | 6.3923 | 9.4983 | 12.6218 | 15.7524 |
| 0.8 | 0.7910 | 3.3744 | 6.4074 | 9.5087 | 12.6296 | 15.7587 |
| 0.9 | 0.8274 | 3.4003 | 6.4224 | 9.5190 | 12.6375 | 15.7650 |
| 1.0 | 0.8603 | 3.4256 | 6.4373 | 9.5293 | 12.6453 | 15.7713 |
| 1.5 | 0.9882 | 3.5422 | 6.5097 | 9.5801 | 12.6841 | 15.8026 |
| 2.0 | 1.0769 | 3.6436 | 6.5783 | 9.6296 | 12.7223 | 15.8336 |
| 3.0 | 1.1925 | 3.8088 | 6.7040 | 9.7240 | 12.7966 | 15.8945 |
| 4.0 | 1.2646 | 3.9352 | 6.8140 | 9.8119 | 12.8678 | 15.9536 |
| 5.0 | 1.3138 | 4.0336 | 6.9096 | 9.8928 | 12.9352 | 16.0107 |
| 6.0 | 1.3496 | 4.1116 | 6.9924 | 9.9667 | 12.9988 | 16.0654 |
| 7.0 | 1.3766 | 4.1746 | 7.0640 | 10.0339 | 13.0584 | 16.1177 |
| 8.0 | 1.3978 | 4.2264 | 7.1263 | 10.0949 | 13.1141 | 16.1675 |
| 9.0 | 1.4149 | 4.2694 | 7.1806 | 10.1502 | 13.1660 | 16.2147 |
| 10.0 | 1.4289 | 4.3058 | 7.2281 | 10.2003 | 13.2142 | 16.2594 |
| 15.0 | 1.4729 | 4.4255 | 7.3959 | 10.3898 | 13.4078 | 16.4474 |
| 20.0 | 1.4961 | 4.4915 | 7.4954 | 10.5117 | 13.5420 | 16.5864 |
| 30.0 | 1.5202 | 4.5615 | 7.6057 | 10.6543 | 13.7085 | 16.7691 |
| 40.0 | 1.5325 | 4.5979 | 7.6647 | 10.7334 | 13.8048 | 16.8794 |
| 50.0 | 1.5400 | 4.6202 | 7.7012 | 10.7832 | 13.8666 | 16.9519 |
| 60.0 | 1.5451 | 4.6353 | 7.7259 | 10.8172 | 13.9094 | 17.0026 |
| 80.0 | 1.5514 | 4.6543 | 7.7573 | 10.8606 | 13.9644 | 17.0686 |
| 100.0 | 1.5552 | 4.6658 | 7.7764 | 10.8871 | 13.9981 | 17.1093 |
| ∞ | 1.5708 | 4.7124 | 7.8540 | 10.9956 | 14.1372 | 17.2788 |

† The roots of this equation are all real if C > 0.
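Entries in Table A.1 are easy to regenerate: the tabulated values satisfy $\alpha\tan\alpha = C$, and for $C > 0$ the $n$th root lies in $\left((n-1)\pi,\,(n-1)\pi + \pi/2\right)$, where $\alpha\tan\alpha$ increases monotonically from 0 to $+\infty$. A minimal bisection sketch:

```python
import math

def tan_roots(C, k=3):
    """First k roots of alpha*tan(alpha) = C for C > 0, by bisection."""
    roots = []
    f = lambda a: a * math.tan(a) - C
    for n in range(1, k + 1):
        lo = (n - 1) * math.pi + 1e-9                 # f(lo) ~ -C < 0
        hi = (2 * n - 1) * math.pi / 2 - 1e-9         # f(hi) -> +infinity
        for _ in range(100):
            mid = 0.5 * (lo + hi)
            if f(lo) * f(mid) <= 0.0:
                hi = mid
            else:
                lo = mid
        roots.append(0.5 * (lo + hi))
    return roots
```

Comparing a few rows against the table (e.g., the C = 1.0 and C = 10.0 rows) confirms the bracketing.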
TABLE A.2: The first six roots,† $\alpha_n$, of $\alpha\cot\alpha + C = 0$.

| C | α1 | α2 | α3 | α4 | α5 | α6 |
|---|---|---|---|---|---|---|
| −1.0 | 0 | 4.4934 | 7.7253 | 10.9041 | 14.0662 | 17.2208 |
| −0.995 | 0.1224 | 4.4945 | 7.7259 | 10.9046 | 14.0666 | 17.2210 |
| −0.99 | 0.1730 | 4.4956 | 7.7265 | 10.9050 | 14.0669 | 17.2213 |
| −0.98 | 0.2445 | 4.4979 | 7.7278 | 10.9060 | 14.0676 | 17.2219 |
| −0.97 | 0.2991 | 4.5001 | 7.7291 | 10.9069 | 14.0683 | 17.2225 |
| −0.96 | 0.3450 | 4.5023 | 7.7304 | 10.9078 | 14.0690 | 17.2231 |
| −0.95 | 0.3854 | 4.5045 | 7.7317 | 10.9087 | 14.0697 | 17.2237 |
| −0.94 | 0.4217 | 4.5068 | 7.7330 | 10.9096 | 14.0705 | 17.2242 |
| −0.93 | 0.4551 | 4.5090 | 7.7343 | 10.9105 | 14.0712 | 17.2248 |
| −0.92 | 0.4860 | 4.5112 | 7.7356 | 10.9115 | 14.0719 | 17.2254 |
| −0.91 | 0.5150 | 4.5134 | 7.7369 | 10.9124 | 14.0726 | 17.2260 |
| −0.90 | 0.5423 | 4.5157 | 7.7382 | 10.9133 | 14.0733 | 17.2266 |
| −0.85 | 0.6609 | 4.5268 | 7.7447 | 10.9179 | 14.0769 | 17.2295 |
| −0.8 | 0.7593 | 4.5379 | 7.7511 | 10.9225 | 14.0804 | 17.2324 |
| −0.7 | 0.9208 | 4.5601 | 7.7641 | 10.9316 | 14.0875 | 17.2382 |
| −0.6 | 1.0528 | 4.5822 | 7.7770 | 10.9408 | 14.0946 | 17.2440 |
| −0.5 | 1.1656 | 4.6042 | 7.7899 | 10.9499 | 14.1017 | 17.2498 |
| −0.4 | 1.2644 | 4.6261 | 7.8028 | 10.9591 | 14.1088 | 17.2556 |
| −0.3 | 1.3525 | 4.6479 | 7.8156 | 10.9682 | 14.1159 | 17.2614 |
| −0.2 | 1.4320 | 4.6696 | 7.8284 | 10.9774 | 14.1230 | 17.2672 |
| −0.1 | 1.5044 | 4.6911 | 7.8412 | 10.9865 | 14.1301 | 17.2730 |
| 0 | 1.5708 | 4.7124 | 7.8540 | 10.9956 | 14.1372 | 17.2788 |
| 0.1 | 1.6320 | 4.7335 | 7.8667 | 11.0047 | 14.1443 | 17.2845 |
| 0.2 | 1.6887 | 4.7544 | 7.8794 | 11.0137 | 14.1513 | 17.2903 |
| 0.3 | 1.7414 | 4.7751 | 7.8920 | 11.0228 | 14.1584 | 17.2961 |
| 0.4 | 1.7906 | 4.7956 | 7.9046 | 11.0318 | 14.1654 | 17.3019 |
TABLE A.2: (continued) $\alpha\cot\alpha + C = 0$.

| C | α1 | α2 | α3 | α4 | α5 | α6 |
|---|---|---|---|---|---|---|
| 0.5 | 1.8366 | 4.8158 | 7.9171 | 11.0409 | 14.1724 | 17.3076 |
| 0.6 | 1.8798 | 4.8358 | 7.9295 | 11.0498 | 14.1795 | 17.3134 |
| 0.7 | 1.9203 | 4.8556 | 7.9419 | 11.0588 | 14.1865 | 17.3192 |
| 0.8 | 1.9586 | 4.8751 | 7.9542 | 11.0677 | 14.1935 | 17.3249 |
| 0.9 | 1.9947 | 4.8943 | 7.9665 | 11.0767 | 14.2005 | 17.3306 |
| 1.0 | 2.0288 | 4.9132 | 7.9787 | 11.0856 | 14.2075 | 17.3364 |
| 1.5 | 2.1746 | 5.0037 | 8.0385 | 11.1296 | 14.2421 | 17.3649 |
| 2.0 | 2.2889 | 5.0870 | 8.0962 | 11.1727 | 14.2764 | 17.3932 |
| 3.0 | 2.4557 | 5.2329 | 8.2045 | 11.2560 | 14.3434 | 17.4490 |
| 4.0 | 2.5704 | 5.3540 | 8.3029 | 11.3349 | 14.4080 | 17.5034 |
| 5.0 | 2.6537 | 5.4544 | 8.3914 | 11.4086 | 14.4699 | 17.5562 |
| 6.0 | 2.7165 | 5.5378 | 8.4703 | 11.4773 | 14.5288 | 17.6072 |
| 7.0 | 2.7654 | 5.6078 | 8.5406 | 11.5408 | 14.5847 | 17.6562 |
| 8.0 | 2.8044 | 5.6669 | 8.6031 | 11.5994 | 14.6374 | 17.7032 |
| 9.0 | 2.8363 | 5.7172 | 8.6587 | 11.6532 | 14.6870 | 17.7481 |
| 10.0 | 2.8628 | 5.7606 | 8.7083 | 11.7027 | 14.7335 | 17.7908 |
| 15.0 | 2.9476 | 5.9080 | 8.8898 | 11.8959 | 14.9251 | 17.9742 |
| 20.0 | 2.9930 | 5.9921 | 9.0019 | 12.0250 | 15.0625 | 18.1136 |
| 30.0 | 3.0406 | 6.0831 | 9.1294 | 12.1807 | 15.2380 | 18.3018 |
| 40.0 | 3.0651 | 6.1311 | 9.1987 | 12.2688 | 15.3417 | 18.4180 |
| 50.0 | 3.0801 | 6.1606 | 9.2420 | 12.3247 | 15.4090 | 18.4953 |
| 60.0 | 3.0901 | 6.1805 | 9.2715 | 12.3632 | 15.4559 | 18.5497 |
| 80.0 | 3.1028 | 6.2058 | 9.3089 | 12.4124 | 15.5164 | 18.6209 |
| 100.0 | 3.1105 | 6.2211 | 9.3317 | 12.4426 | 15.5537 | 18.6650 |
| ∞ | 3.1416 | 6.2832 | 9.4248 | 12.5664 | 15.7080 | 18.8496 |

† The roots of this equation are all real if C > −1. These negative values of C arise in connection with the sphere, §9.4.
TABLE A.3: The first six roots, $\alpha_n$, of $\alpha J_1(\alpha) - CJ_0(\alpha) = 0$.

| C | α1 | α2 | α3 | α4 | α5 | α6 |
|---|---|---|---|---|---|---|
| 0 | 0 | 3.8317 | 7.0156 | 10.1735 | 13.3237 | 16.4706 |
| 0.01 | 0.1412 | 3.8343 | 7.0170 | 10.1745 | 13.3244 | 16.4712 |
| 0.02 | 0.1995 | 3.8369 | 7.0184 | 10.1754 | 13.3252 | 16.4718 |
| 0.04 | 0.2814 | 3.8421 | 7.0213 | 10.1774 | 13.3267 | 16.4731 |
| 0.06 | 0.3438 | 3.8473 | 7.0241 | 10.1794 | 13.3282 | 16.4743 |
| 0.08 | 0.3960 | 3.8525 | 7.0270 | 10.1813 | 13.3297 | 16.4755 |
| 0.1 | 0.4417 | 3.8577 | 7.0298 | 10.1833 | 13.3312 | 16.4767 |
| 0.15 | 0.5376 | 3.8706 | 7.0369 | 10.1882 | 13.3349 | 16.4797 |
| 0.2 | 0.6170 | 3.8835 | 7.0440 | 10.1931 | 13.3387 | 16.4828 |
| 0.3 | 0.7465 | 3.9091 | 7.0582 | 10.2029 | 13.3462 | 16.4888 |
| 0.4 | 0.8516 | 3.9344 | 7.0723 | 10.2127 | 13.3537 | 16.4949 |
| 0.5 | 0.9408 | 3.9594 | 7.0864 | 10.2225 | 13.3611 | 16.5010 |
| 0.6 | 1.0184 | 3.9841 | 7.1004 | 10.2322 | 13.3686 | 16.5070 |
| 0.7 | 1.0873 | 4.0085 | 7.1143 | 10.2419 | 13.3761 | 16.5131 |
| 0.8 | 1.1490 | 4.0325 | 7.1282 | 10.2516 | 13.3835 | 16.5191 |
| 0.9 | 1.2048 | 4.0562 | 7.1421 | 10.2613 | 13.3910 | 16.5251 |
| 1.0 | 1.2558 | 4.0795 | 7.1558 | 10.2710 | 13.3984 | 16.5312 |
| 1.5 | 1.4569 | 4.1902 | 7.2233 | 10.3188 | 13.4353 | 16.5612 |
| 2.0 | 1.5994 | 4.2910 | 7.2884 | 10.3658 | 13.4719 | 16.5910 |
| 3.0 | 1.7887 | 4.4634 | 7.4103 | 10.4566 | 13.5434 | 16.6499 |
| 4.0 | 1.9081 | 4.6018 | 7.5201 | 10.5423 | 13.6125 | 16.7073 |
| 5.0 | 1.9898 | 4.7131 | 7.6177 | 10.6223 | 13.6786 | 16.7630 |
| 6.0 | 2.0490 | 4.8033 | 7.7039 | 10.6964 | 13.7414 | 16.8168 |
| 7.0 | 2.0937 | 4.8772 | 7.7797 | 10.7646 | 13.8008 | 16.8684 |
| 8.0 | 2.1286 | 4.9384 | 7.8464 | 10.8271 | 13.8566 | 16.9179 |
| 9.0 | 2.1566 | 4.9897 | 7.9051 | 10.8842 | 13.9090 | 16.9650 |
| 10.0 | 2.1795 | 5.0332 | 7.9569 | 10.9363 | 13.9580 | 17.0099 |
| 15.0 | 2.2509 | 5.1773 | 8.1422 | 11.1367 | 14.1576 | 17.2008 |
| 20.0 | 2.2880 | 5.2568 | 8.2534 | 11.2677 | 14.2983 | 17.3442 |
| 30.0 | 2.3261 | 5.3410 | 8.3771 | 11.4221 | 14.4748 | 17.5348 |
| 40.0 | 2.3455 | 5.3846 | 8.4432 | 11.5081 | 14.5774 | 17.6508 |
| 50.0 | 2.3572 | 5.4112 | 8.4840 | 11.5621 | 14.6433 | 17.7272 |
| 60.0 | 2.3651 | 5.4291 | 8.5116 | 11.5990 | 14.6889 | 17.7807 |
| 80.0 | 2.3750 | 5.4516 | 8.5466 | 11.6461 | 14.7475 | 17.8502 |
| 100.0 | 2.3809 | 5.4652 | 8.5678 | 11.6747 | 14.7834 | 17.8931 |
| ∞ | 2.4048 | 5.5201 | 8.6537 | 11.7915 | 14.9309 | 18.0711 |
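Table A.3 can be spot-checked the same way, using power-series Bessel functions (adequate for the argument range of the first few roots) and bisecting $f(\alpha) = \alpha J_1(\alpha) - CJ_0(\alpha)$ between consecutive zeros of $J_0$, where $f$ changes sign for $C > 0$. A sketch:

```python
import math

def J0(z, terms=40):
    s, term = 0.0, 1.0
    for k in range(terms):
        if k > 0:
            term *= -(z * z / 4.0) / (k * k)
        s += term
    return s

def J1(z, terms=40):
    s, term = 0.0, z / 2.0
    for k in range(terms):
        if k > 0:
            term *= -(z * z / 4.0) / (k * (k + 1))
        s += term
    return s

def bessel_roots(C, brackets=((0.05, 2.40), (2.41, 5.52), (5.53, 8.65))):
    """Roots of alpha*J1(alpha) - C*J0(alpha) = 0 in the given brackets (C > 0)."""
    f = lambda a: a * J1(a) - C * J0(a)
    out = []
    for lo, hi in brackets:
        for _ in range(100):
            mid = 0.5 * (lo + hi)
            if f(lo) * f(mid) <= 0.0:
                hi = mid
            else:
                lo = mid
        out.append(0.5 * (lo + hi))
    return out
```

The brackets straddle the first zeros of $J_0$ (2.4048, 5.5201, 8.6537), between which exactly one root of the table's equation falls.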
Appendix B

In this table q = (p/α)^(1/2); α and x are positive real; β and γ are unrestricted; k is a finite integer; n is a finite integer or zero; v is a fractional number; 1·2·3···n = n!; 1·3·5···(2n−1) = (2n−1)!!; nΓ(n) = Γ(n+1) = n!; Γ(1) = 0! = 1; Γ(v)Γ(1−v) = π/sin vπ; Γ(1/2) = π^(1/2).

NO.  TRANSFORM                  FUNCTION
 1   1/p                        1
 2   1/p^2                      t
 3   1/p^k                      t^(k−1)/(k−1)!
 4   1/p^(1/2)                  1/(πt)^(1/2)
 5   1/p^(3/2)                  2(t/π)^(1/2)
 6   1/p^(k+1/2)                2^k t^(k−1/2)/[π^(1/2)(2k−1)!!]
 7   1/p^v                      t^(v−1)/Γ(v)
 8   p^(1/2)                    −1/[2π^(1/2) t^(3/2)]
 9   p^(3/2)                    3/[4π^(1/2) t^(5/2)]
10   p^(k−1/2)                  (−1)^k (2k−1)!!/[2^k π^(1/2) t^(k+1/2)]
11   p^(n−v)                    t^(v−n−1)/Γ(v−n)
12   1/(p+α)                    e^(−αt)
13   1/[(p+α)(p+β)]             [e^(−βt) − e^(−αt)]/(α−β)
14   1/(p+α)^2                  t e^(−αt)
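Any pair in this table can be spot-checked by evaluating the defining integral ∫₀^∞ e^(−pt) f(t) dt numerically and comparing it with the tabulated transform. A minimal stdlib sketch using composite Simpson quadrature (the helper name, truncation point, and parameter values are illustrative choices, not the book's), checking entry 13:

```python
import math

def laplace_numeric(f, p, T=40.0, steps=40000):
    """Approximate integral of exp(-p*t)*f(t) over [0, T] by Simpson's rule.

    Valid when exp(-p*t)*f(t) is negligible beyond T; steps must be even.
    """
    h = T / steps
    total = f(0.0) + f(T) * math.exp(-p * T)
    for i in range(1, steps):
        t = i * h
        total += (4 if i % 2 else 2) * math.exp(-p * t) * f(t)
    return total * h / 3.0

# Entry 13: (e^{-beta t} - e^{-alpha t})/(alpha - beta)  <->  1/[(p+alpha)(p+beta)]
alpha, beta, p = 2.0, 0.5, 1.0
f = lambda t: (math.exp(-beta * t) - math.exp(-alpha * t)) / (alpha - beta)
lhs = laplace_numeric(f, p)
rhs = 1.0 / ((p + alpha) * (p + beta))
```

With these values rhs = 1/(3 × 1.5), and the quadrature agrees to many digits.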
15   1/[(p+α)(p+β)(p+γ)]        [(γ−β)e^(−αt) + (α−γ)e^(−βt) + (β−α)e^(−γt)]/[(α−β)(β−γ)(γ−α)]
16   1/[(p+α)^2(p+β)]           {e^(−βt) − e^(−αt)[1 − (β−α)t]}/(β−α)^2
17   1/(p+α)^3                  (1/2)t^2 e^(−αt)
18   1/(p+α)^k                  t^(k−1) e^(−αt)/(k−1)!
19   p/[(p+α)(p+β)]             [αe^(−αt) − βe^(−βt)]/(α−β)
20   p/(p+α)^2                  (1 − αt)e^(−αt)
21   p/[(p+α)(p+β)(p+γ)]        [α(β−γ)e^(−αt) + β(γ−α)e^(−βt) + γ(α−β)e^(−γt)]/[(α−β)(β−γ)(γ−α)]
22   p/[(p+α)^2(p+β)]           {[β − α(β−α)t]e^(−αt) − βe^(−βt)}/(β−α)^2
23   p/(p+α)^3                  t[1 − (1/2)αt]e^(−αt)
24   α/(p^2+α^2)                sin αt
25   p/(p^2+α^2)                cos αt
26   α/(p^2−α^2)                sinh αt
27   p/(p^2−α^2)                cosh αt
28   e^(−qx)                    [x/(2(παt^3)^(1/2))] e^(−x^2/4αt)
29   e^(−qx)/q                  (α/πt)^(1/2) e^(−x^2/4αt)
30   e^(−qx)/p                  erfc[x/(2(αt)^(1/2))]
31   e^(−qx)/(qp)               2(αt/π)^(1/2) e^(−x^2/4αt) − x erfc[x/(2(αt)^(1/2))]
32   e^(−qx)/p^2                [t + x^2/(2α)] erfc[x/(2(αt)^(1/2))] − x(t/απ)^(1/2) e^(−x^2/4αt)
33   e^(−qx)/p^(1+n/2)          (4t)^(n/2) i^n erfc[x/(2(αt)^(1/2))]
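Entry 30 can be verified the same way, since Python's math module supplies erfc directly: L{erfc[x/(2(αt)^(1/2))]} should equal e^(−x(p/α)^(1/2))/p. A stdlib sketch (parameter values and helper name are illustrative):

```python
import math

def laplace_numeric(f, p, T=60.0, steps=60000):
    """Composite Simpson approximation of integral of exp(-p*t)*f(t) on [0, T]."""
    h = T / steps
    total = f(0.0) + f(T) * math.exp(-p * T)
    for i in range(1, steps):
        t = i * h
        total += (4 if i % 2 else 2) * math.exp(-p * t) * f(t)
    return total * h / 3.0

# Entry 30: erfc(x / (2*sqrt(alpha*t)))  <->  exp(-x*sqrt(p/alpha)) / p
x, alpha, p = 1.0, 1.0, 1.0
f = lambda t: math.erfc(x / (2.0 * math.sqrt(alpha * t))) if t > 0 else 0.0
lhs = laplace_numeric(f, p)
rhs = math.exp(-x * math.sqrt(p / alpha)) / p
```

The t = 0 limit of the function is erfc(∞) = 0, which the conditional handles explicitly.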
34   e^(−qx)/p^(3/4)            [function illegible in source]
35   e^(−qx)/(q+β)              (α/πt)^(1/2) e^(−x^2/4αt) − αβ e^(βx+αβ^2 t) erfc[x/(2(αt)^(1/2)) + β(αt)^(1/2)]
36   e^(−qx)/[q(q+β)]           α e^(βx+αβ^2 t) erfc[x/(2(αt)^(1/2)) + β(αt)^(1/2)]
37   e^(−qx)/[p(q+β)]           (1/β){erfc[x/(2(αt)^(1/2))] − e^(βx+αβ^2 t) erfc[x/(2(αt)^(1/2)) + β(αt)^(1/2)]}
38   e^(−qx)/[qp(q+β)]          (2/β)(αt/π)^(1/2) e^(−x^2/4αt) − [(1+βx)/β^2] erfc[x/(2(αt)^(1/2))] + (1/β^2) e^(βx+αβ^2 t) erfc[x/(2(αt)^(1/2)) + β(αt)^(1/2)]
39   e^(−qx)/[q^(n+1)(q+β)]     (−1)^n (α/β^n) {e^(βx+αβ^2 t) erfc[x/(2(αt)^(1/2)) + β(αt)^(1/2)] − Σ_(j=0)^(n−1) [−2β(αt)^(1/2)]^j i^j erfc[x/(2(αt)^(1/2))]}
40   e^(−qx)/(q+β)^2            α(1 + βx + 2αβ^2 t) e^(βx+αβ^2 t) erfc[x/(2(αt)^(1/2)) + β(αt)^(1/2)] − 2αβ(αt/π)^(1/2) e^(−x^2/4αt)
41   e^(−qx)/[p(q+β)^2]         (1/β^2) erfc[x/(2(αt)^(1/2))] + (1/β^2)(βx + 2αβ^2 t − 1) e^(βx+αβ^2 t) erfc[x/(2(αt)^(1/2)) + β(αt)^(1/2)] − (2/β)(αt/π)^(1/2) e^(−x^2/4αt)
42   e^(−qx)/(p−γ)              (1/2) e^(γt) {e^(−x(γ/α)^(1/2)) erfc[x/(2(αt)^(1/2)) − (γt)^(1/2)] + e^(x(γ/α)^(1/2)) erfc[x/(2(αt)^(1/2)) + (γt)^(1/2)]}
43   e^(−qx)/[q(p−γ)]           (1/2)(α/γ)^(1/2) e^(γt) {e^(−x(γ/α)^(1/2)) erfc[x/(2(αt)^(1/2)) − (γt)^(1/2)] − e^(x(γ/α)^(1/2)) erfc[x/(2(αt)^(1/2)) + (γt)^(1/2)]}
44   e^(−qx)/(p−γ)^2            (1/2) e^(γt) {[t − x/(2(αγ)^(1/2))] e^(−x(γ/α)^(1/2)) erfc[x/(2(αt)^(1/2)) − (γt)^(1/2)] + [t + x/(2(αγ)^(1/2))] e^(x(γ/α)^(1/2)) erfc[x/(2(αt)^(1/2)) + (γt)^(1/2)]}
45   e^(−qx)/[(p−γ)(q+β)], γ ≠ αβ^2
                                (1/2) e^(γt) {[α^(1/2)/(α^(1/2)β + γ^(1/2))] e^(−x(γ/α)^(1/2)) erfc[x/(2(αt)^(1/2)) − (γt)^(1/2)] + [α^(1/2)/(α^(1/2)β − γ^(1/2))] e^(x(γ/α)^(1/2)) erfc[x/(2(αt)^(1/2)) + (γt)^(1/2)]} − [αβ/(αβ^2 − γ)] e^(βx+αβ^2 t) erfc[x/(2(αt)^(1/2)) + β(αt)^(1/2)]
46   e^(x/p) − 1                (x/t)^(1/2) I_1[2(xt)^(1/2)]
47   (1/p) e^(x/p)              I_0[2(xt)^(1/2)]
48   (1/p^v) e^(x/p)            (t/x)^((v−1)/2) I_(v−1)[2(xt)^(1/2)]
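Entry 47 is another convenient check; the modified Bessel function I₀ has an all-positive power series, so a short stdlib implementation suffices. The sketch below (function names and the parameter values x = 1, p = 2 are illustrative) verifies L{I₀[2(xt)^(1/2)]} = (1/p)e^(x/p):

```python
import math

def I0(z):
    """Modified Bessel I0 from its (all-positive) power series."""
    term, total = 1.0, 1.0
    for k in range(1, 200):
        term *= (z * z / 4.0) / (k * k)
        total += term
        if term < 1e-17 * total:   # series has converged in double precision
            break
    return total

def laplace_numeric(f, p, T=60.0, steps=60000):
    """Composite Simpson approximation of integral of exp(-p*t)*f(t) on [0, T]."""
    h = T / steps
    total = f(0.0) + f(T) * math.exp(-p * T)
    for i in range(1, steps):
        t = i * h
        total += (4 if i % 2 else 2) * math.exp(-p * t) * f(t)
    return total * h / 3.0

# Entry 47: I0(2*sqrt(x*t))  <->  (1/p) * exp(x/p)
x, p = 1.0, 2.0
lhs = laplace_numeric(lambda t: I0(2.0 * math.sqrt(x * t)), p)
rhs = math.exp(x / p) / p
```

Note the integrand is analytic in t, since I₀[2(xt)^(1/2)] = Σ (xt)^k/(k!)², so Simpson's rule converges quickly.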
49   K_0(qx)                    (1/2t) e^(−x^2/4αt)
50   (1/p^(1/2)) K_(2v)(qx)     [1/(2(πt)^(1/2))] e^(−x^2/8αt) K_v(x^2/8αt)
51   p^(v/2−1) K_v(qx)          x^(−v) α^(v/2) 2^(v−1) ∫_(x^2/4αt)^∞ e^(−u) u^(v−1) du
52   p^(v/2) K_v(qx)            [x^v/(α^(v/2)(2t)^(v+1))] e^(−x^2/4αt)
53   [p − (p^2−x^2)^(1/2)]^v    (v x^v/t) I_v(xt)
54   e^(x[(p+α)^(1/2) − (p+β)^(1/2)]^2) − 1
                                x(α−β) e^(−(α+β)t/2) I_1[(1/2)(α−β)t^(1/2)(t+4x)^(1/2)] / [t^(1/2)(t+4x)^(1/2)]
55   e^(x[p − (p+α)^(1/2)(p+β)^(1/2)]) / [(p+α)^(1/2)(p+β)^(1/2)]
                                e^(−(α+β)(t+x)/2) I_0[(1/2)(α−β)t^(1/2)(t+2x)^(1/2)]
56   e^(x[(p+α)^(1/2) − (p+β)^(1/2)]^2) / {(p+α)^(1/2)(p+β)^(1/2)[(p+α)^(1/2) + (p+β)^(1/2)]^(2v)}
                                t^(v/2) e^(−(α+β)t/2) I_v[(1/2)(α−β)t^(1/2)(t+4x)^(1/2)] / [(α−β)^v (t+4x)^(v/2)]
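Entry 49 can be checked without any special-function library by using the integral representation K₀(z) = ∫₀^∞ e^(−z cosh u) du for the left-hand side and Simpson quadrature for both integrals. A stdlib sketch (helper name, truncation points, and the values x = α = p = 1 are illustrative):

```python
import math

def simpson(f, a, b, steps):
    """Composite Simpson's rule on [a, b]; steps must be even."""
    h = (b - a) / steps
    total = f(a) + f(b)
    for i in range(1, steps):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3.0

# Entry 49: K_0(qx) <-> (1/2t) e^{-x^2/4 alpha t}, with q = (p/alpha)^{1/2}.
x, alpha, p = 1.0, 1.0, 1.0
z = x * math.sqrt(p / alpha)

# Left side: K_0(z) from the integral representation (cosh(6) ~ 202, so the
# tail beyond u = 6 is utterly negligible for z >= 1).
k0 = simpson(lambda u: math.exp(-z * math.cosh(u)), 0.0, 6.0, 6000)

# Right side: the Laplace integral of the tabulated function; the integrand
# vanishes to all orders as t -> 0, handled by the conditional.
g = lambda t: math.exp(-p * t - x * x / (4.0 * alpha * t)) / (2.0 * t) if t > 0 else 0.0
lap = simpson(g, 0.0, 60.0, 60000)
```

Both quadratures should agree with each other (and with K₀(1) ≈ 0.4210) to several digits.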
Author Biography

Dr. Robert G. Watts is the Cornelia and Arthur L. Jung Professor of Mechanical Engineering at Tulane University. He holds a BS (1959) in mechanical engineering from Tulane, an MS (1960) in nuclear engineering from the Massachusetts Institute of Technology, and a PhD (1965) in mechanical engineering from Purdue University. He spent a year as a postdoctoral associate studying atmospheric and ocean science at Harvard University. He has taught advanced applied mathematics and thermal science at Tulane for most of his 43 years of service to that university. Dr. Watts is the author of Keep Your Eye on the Ball: The Science and Folklore of Baseball (W. H. Freeman) and the editor of Engineering Response to Global Climate Change (CRC Press) and Innovative Energy Strategies for CO2 Stabilization (Cambridge University Press), as well as many papers on global warming, paleoclimatology, energy, and the physics of sport. He is a Fellow of the American Society of Mechanical Engineers.