Chapter 4
Systems of Linear Differential Equations

Introduction to Systems
Up to this point the entries in a vector or matrix have been real numbers. In this section, and in the following sections, we will be dealing with vectors and matrices whose entries are functions. A vector whose components are functions is called a vector-valued function or vector function. Similarly, a matrix whose entries are functions is called a matrix function.

The operations of vector and matrix addition, multiplication by a number, and matrix multiplication for vector and matrix functions are exactly as defined in Chapter 5, so there is nothing new in terms of arithmetic. However, there are operations on functions other than the arithmetic operations, e.g., limits, differentiation, and integration, that we have to define for vector and matrix functions. These operations from calculus are defined in a natural way.
Let $\mathbf{v}(t) = (f_1(t), f_2(t), \ldots, f_n(t))$ be a vector function whose components are defined on an interval $I$.

Limit: Let $c \in I$. If $\lim_{t \to c} f_i(t) = \alpha_i$ exists for $i = 1, 2, \ldots, n$, then
\[
\lim_{t \to c} \mathbf{v}(t) = \left( \lim_{t \to c} f_1(t),\ \lim_{t \to c} f_2(t),\ \ldots,\ \lim_{t \to c} f_n(t) \right) = (\alpha_1, \alpha_2, \ldots, \alpha_n).
\]
Limits of vector functions are calculated "component-wise."

Derivative: If $f_1, f_2, \ldots, f_n$ are differentiable on $I$, then $\mathbf{v}$ is differentiable on $I$, and
\[
\mathbf{v}'(t) = \left( f_1'(t), f_2'(t), \ldots, f_n'(t) \right).
\]
That is, $\mathbf{v}'$ is the vector function whose components are the derivatives of the components of $\mathbf{v}$.

Integration: Since differentiation of vector functions is done component-wise, integration must also be component-wise. That is,
\[
\int \mathbf{v}(t)\, dt = \left( \int f_1(t)\, dt,\ \int f_2(t)\, dt,\ \ldots,\ \int f_n(t)\, dt \right).
\]

Limits, differentiation, and integration of matrix functions are done in exactly the same way, component-wise.
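As a concrete illustration of these component-wise operations, here is a minimal sketch using SymPy (the particular vector function $\mathbf{v}(t) = (t^2, \sin t, e^{2t})$ is just an arbitrary choice, not one used elsewhere in the text):

```python
import sympy as sp

t = sp.symbols('t')

# A sample vector function v(t) with three components (arbitrary choice).
v = sp.Matrix([t**2, sp.sin(t), sp.exp(2*t)])

# Differentiation and integration act on each component separately.
v_prime = v.diff(t)                                      # (2t, cos t, 2e^(2t))
v_integral = v.applyfunc(lambda f: sp.integrate(f, t))   # (t^3/3, -cos t, e^(2t)/2)

print(v_prime)
print(v_integral)
```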
4.1. Systems of Linear Differential Equations

Consider the third-order linear differential equation
\[
y''' + p(t)y'' + q(t)y' + r(t)y = f(t)
\]
where $p, q, r, f$ are continuous functions on some interval $I$. Solving the equation for $y'''$, we get
\[
y''' = -r(t)y - q(t)y' - p(t)y'' + f(t).
\]
Introduce new dependent variables $x_1, x_2, x_3$ as follows:
\[
x_1 = y, \qquad x_2 = x_1' \ (= y'), \qquad x_3 = x_2' \ (= y'').
\]
Then
\[
y''' = x_3' = -r(t)x_1 - q(t)x_2 - p(t)x_3 + f(t)
\]
and the third-order equation can be written equivalently as a system of three first-order equations:
\[
\begin{aligned}
x_1' &= x_2 \\
x_2' &= x_3 \\
x_3' &= -r(t)x_1 - q(t)x_2 - p(t)x_3 + f(t)
\end{aligned}
\]
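This reduction is also how a higher-order equation is handed to a numerical solver: the state vector is $(x_1, x_2, x_3) = (y, y', y'')$. Below is a minimal sketch assuming SciPy is available; the coefficient functions are those of the equation in Example 1(a) below, while the time interval and initial values are arbitrary placeholders.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Coefficients of y''' + p(t)y'' + q(t)y' + r(t)y = f(t) for Example 1(a):
# y''' - y'' - 8y' + 12y = 2e^t.
p = lambda t: -1.0
q = lambda t: -8.0
r = lambda t: 12.0
f = lambda t: 2.0 * np.exp(t)

def rhs(t, x):
    # x = (x1, x2, x3) = (y, y', y'')
    x1, x2, x3 = x
    return [x2, x3, -r(t) * x1 - q(t) * x2 - p(t) * x3 + f(t)]

# Placeholder initial values y(0) = 1, y'(0) = 0, y''(0) = 0.
sol = solve_ivp(rhs, (0.0, 2.0), [1.0, 0.0, 0.0], dense_output=True)
print(sol.y[0, -1])   # numerical approximation of y(2)
```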
Example 1. (a) Consider the third-order nonhomogeneous equation
\[
y''' - y'' - 8y' + 12y = 2e^t.
\]
Solving the equation for $y'''$, we have
\[
y''' = -12y + 8y' + y'' + 2e^t.
\]
Let $x_1 = y$, $x_1' = x_2 \ (= y')$, $x_2' = x_3 \ (= y'')$. Then
\[
y''' = x_3' = -12x_1 + 8x_2 + x_3 + 2e^t
\]
and the equation converts to the equivalent system:
\[
\begin{aligned}
x_1' &= x_2 \\
x_2' &= x_3 \\
x_3' &= -12x_1 + 8x_2 + x_3 + 2e^t
\end{aligned}
\]

Note: This system is just a very special case of the "general" system of three first-order differential equations:
\[
\begin{aligned}
x_1' &= a_{11}(t)x_1 + a_{12}(t)x_2 + a_{13}(t)x_3 + b_1(t) \\
x_2' &= a_{21}(t)x_1 + a_{22}(t)x_2 + a_{23}(t)x_3 + b_2(t) \\
x_3' &= a_{31}(t)x_1 + a_{32}(t)x_2 + a_{33}(t)x_3 + b_3(t)
\end{aligned}
\]
(b) Consider the second-order homogeneous equation
\[
t^2 y'' - ty' - 3y = 0.
\]
Solving this equation for $y''$, we get
\[
y'' = \frac{3}{t^2}\, y + \frac{1}{t}\, y'.
\]
To convert this equation to an equivalent system, we let $x_1 = y$, $x_1' = x_2 \ (= y')$. Then we have
\[
\begin{aligned}
x_1' &= x_2 \\
x_2' &= \frac{3}{t^2}\, x_1 + \frac{1}{t}\, x_2
\end{aligned}
\]
which is just a special case of the general system of two first-order differential equations:
\[
\begin{aligned}
x_1' &= a_{11}(t)x_1 + a_{12}(t)x_2 + b_1(t) \\
x_2' &= a_{21}(t)x_1 + a_{22}(t)x_2 + b_2(t)
\end{aligned}
\]
General Theory

Let $a_{11}(t), a_{12}(t), \ldots, a_{1n}(t), a_{21}(t), \ldots, a_{nn}(t), b_1(t), b_2(t), \ldots, b_n(t)$ be continuous functions on some interval $I$. The system of $n$ first-order differential equations
\[
\begin{aligned}
x_1' &= a_{11}(t)x_1 + a_{12}(t)x_2 + \cdots + a_{1n}(t)x_n + b_1(t) \\
x_2' &= a_{21}(t)x_1 + a_{22}(t)x_2 + \cdots + a_{2n}(t)x_n + b_2(t) \\
&\ \ \vdots \\
x_n' &= a_{n1}(t)x_1 + a_{n2}(t)x_2 + \cdots + a_{nn}(t)x_n + b_n(t)
\end{aligned}
\tag{S}
\]
is called a first-order linear differential system.

The system (S) is homogeneous if
\[
b_1(t) \equiv b_2(t) \equiv \cdots \equiv b_n(t) \equiv 0 \quad \text{on } I.
\]
(S) is nonhomogeneous if the functions $b_i(t)$ are not all identically zero on $I$; that is, if there is at least one point $a \in I$ and at least one function $b_i(t)$ such that $b_i(a) \ne 0$.

Let $A(t)$ be the $n \times n$ matrix
\[
A(t) = \begin{pmatrix}
a_{11}(t) & a_{12}(t) & \cdots & a_{1n}(t) \\
a_{21}(t) & a_{22}(t) & \cdots & a_{2n}(t) \\
\vdots & \vdots & & \vdots \\
a_{n1}(t) & a_{n2}(t) & \cdots & a_{nn}(t)
\end{pmatrix}
\]
and let $\mathbf{x}$ and $\mathbf{b}$ be the vectors
\[
\mathbf{x} = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix},
\qquad
\mathbf{b} = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{pmatrix}.
\]
Then (S) can be written in the vector-matrix form
\[
\mathbf{x}' = A(t)\,\mathbf{x} + \mathbf{b}. \tag{S}
\]
The matrix $A(t)$ is called the matrix of coefficients or the coefficient matrix.
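In code, the right-hand side of (S) is simply a matrix-vector product plus a vector. A minimal sketch (assuming NumPy), using the coefficient matrix and vector $\mathbf{b}(t)$ of the system in Example 1(a), which is written out in vector-matrix form in Example 2 below:

```python
import numpy as np

# Coefficient matrix A and vector b(t) for the system of Example 1(a).
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-12.0, 8.0, 1.0]])

def b(t):
    return np.array([0.0, 0.0, 2.0 * np.exp(t)])

def rhs(t, x):
    """Right-hand side of the vector-matrix form x' = A x + b(t)."""
    return A @ x + b(t)

# Evaluate the right-hand side at an arbitrary time and state.
print(rhs(0.0, np.array([1.0, 2.0, 4.0])))   # -> [ 2.  4. 10.]
```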
Example 2. The vector-matrix form of the system in Example 1(a) is:
\[
\mathbf{x}' = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -12 & 8 & 1 \end{pmatrix} \mathbf{x}
+ \begin{pmatrix} 0 \\ 0 \\ 2e^t \end{pmatrix},
\]
a nonhomogeneous system.

The vector-matrix form of the system in Example 1(b) is:
\[
\mathbf{x}' = \begin{pmatrix} 0 & 1 \\ 3/t^2 & 1/t \end{pmatrix} \mathbf{x}
+ \begin{pmatrix} 0 \\ 0 \end{pmatrix}
= \begin{pmatrix} 0 & 1 \\ 3/t^2 & 1/t \end{pmatrix} \mathbf{x},
\qquad \text{where } \mathbf{x} = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix},
\]
a homogeneous system.
A solution of the linear differential system (S) is a differentiable vector function
\[
\mathbf{x}(t) = \begin{pmatrix} x_1(t) \\ x_2(t) \\ \vdots \\ x_n(t) \end{pmatrix}
\]
that satisfies (S) on the interval $I$.
Example 3. Verify that
\[
\mathbf{x}(t) = \begin{pmatrix} e^{2t} \\ 2e^{2t} \\ 4e^{2t} \end{pmatrix}
+ \begin{pmatrix} \tfrac{1}{2}e^t \\ \tfrac{1}{2}e^t \\ \tfrac{1}{2}e^t \end{pmatrix}
\]
is a solution of the nonhomogeneous system
\[
\mathbf{x}' = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -12 & 8 & 1 \end{pmatrix} \mathbf{x}
+ \begin{pmatrix} 0 \\ 0 \\ 2e^t \end{pmatrix}
\]
of Example 2.
SOLUTION
\[
\begin{aligned}
\mathbf{x}' &= \left[ \begin{pmatrix} e^{2t} \\ 2e^{2t} \\ 4e^{2t} \end{pmatrix} + \begin{pmatrix} \tfrac{1}{2}e^t \\ \tfrac{1}{2}e^t \\ \tfrac{1}{2}e^t \end{pmatrix} \right]'
= \begin{pmatrix} 2e^{2t} \\ 4e^{2t} \\ 8e^{2t} \end{pmatrix} + \begin{pmatrix} \tfrac{1}{2}e^t \\ \tfrac{1}{2}e^t \\ \tfrac{1}{2}e^t \end{pmatrix} \\[4pt]
&\stackrel{?}{=} \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -12 & 8 & 1 \end{pmatrix}
\left[ \begin{pmatrix} e^{2t} \\ 2e^{2t} \\ 4e^{2t} \end{pmatrix} + \begin{pmatrix} \tfrac{1}{2}e^t \\ \tfrac{1}{2}e^t \\ \tfrac{1}{2}e^t \end{pmatrix} \right]
+ \begin{pmatrix} 0 \\ 0 \\ 2e^t \end{pmatrix} \\[4pt]
&\stackrel{?}{=} \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -12 & 8 & 1 \end{pmatrix} \begin{pmatrix} e^{2t} \\ 2e^{2t} \\ 4e^{2t} \end{pmatrix}
+ \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -12 & 8 & 1 \end{pmatrix} \begin{pmatrix} \tfrac{1}{2}e^t \\ \tfrac{1}{2}e^t \\ \tfrac{1}{2}e^t \end{pmatrix}
+ \begin{pmatrix} 0 \\ 0 \\ 2e^t \end{pmatrix} \\[4pt]
&= \begin{pmatrix} 2e^{2t} \\ 4e^{2t} \\ 8e^{2t} \end{pmatrix}
+ \begin{pmatrix} \tfrac{1}{2}e^t \\ \tfrac{1}{2}e^t \\ -\tfrac{3}{2}e^t \end{pmatrix}
+ \begin{pmatrix} 0 \\ 0 \\ 2e^t \end{pmatrix}
= \begin{pmatrix} 2e^{2t} \\ 4e^{2t} \\ 8e^{2t} \end{pmatrix}
+ \begin{pmatrix} \tfrac{1}{2}e^t \\ \tfrac{1}{2}e^t \\ \tfrac{1}{2}e^t \end{pmatrix}.
\end{aligned}
\]
The two sides agree, so $\mathbf{x}$ is a solution.
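Checks of this kind are easy to automate. A minimal sketch (assuming SymPy) that verifies the proposed solution of Example 3 by substituting it into $\mathbf{x}' = A\mathbf{x} + \mathbf{b}$:

```python
import sympy as sp

t = sp.symbols('t')

A = sp.Matrix([[0, 1, 0],
               [0, 0, 1],
               [-12, 8, 1]])
b = sp.Matrix([0, 0, 2 * sp.exp(t)])

# Proposed solution from Example 3.
x = sp.Matrix([sp.exp(2*t), 2*sp.exp(2*t), 4*sp.exp(2*t)]) \
    + sp.Rational(1, 2) * sp.exp(t) * sp.Matrix([1, 1, 1])

residual = sp.simplify(x.diff(t) - (A * x + b))
print(residual)   # zero vector => x satisfies the system
```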
THEOREM 1. (Existence and Uniqueness Theorem) Let $a$ be any point on the interval $I$, and let $\alpha_1, \alpha_2, \ldots, \alpha_n$ be any $n$ real numbers. Then the initial-value problem
\[
\mathbf{x}' = A(t)\,\mathbf{x} + \mathbf{b}(t), \qquad
\mathbf{x}(a) = \begin{pmatrix} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_n \end{pmatrix}
\]
has a unique solution.
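For systems that cannot be solved in closed form, the unique solution guaranteed by Theorem 1 can be approximated numerically. A minimal sketch (assuming SciPy); the system and initial value are taken from Example 3 so that the numerical result can be compared with the exact solution:

```python
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-12.0, 8.0, 1.0]])

def rhs(t, x):
    return A @ x + np.array([0.0, 0.0, 2.0 * np.exp(t)])

# Initial value taken from the exact solution of Example 3 at t = 0: (3/2, 5/2, 9/2).
sol = solve_ivp(rhs, (0.0, 1.0), [1.5, 2.5, 4.5], rtol=1e-8, atol=1e-10)

exact = lambda t: np.array([np.exp(2*t), 2*np.exp(2*t), 4*np.exp(2*t)]) + 0.5 * np.exp(t)
print(np.max(np.abs(sol.y[:, -1] - exact(1.0))))   # small: numerical and exact solutions agree
```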
Exercises 4.1

Convert the differential equation into a system of first-order equations.

1. $y'' - ty' + 3y = \sin 2t$.

2. $y'' + y = 2e^{-2t}$.

3. $y''' - y'' + y = e^t$.

4. $my'' + cy' + ky = \cos \lambda t$; $m, c, k, \lambda$ are constants.

Write the system in vector-matrix form.

5. $x_1' = -2x_1 + x_2 + \sin t$
   $x_2' = x_1 - 3x_2 - 2\cos t$

6. $x_1' = e^t x_1 - e^{2t} x_2$
   $x_2' = e^{-t} x_1 - 3e^t x_2$

7. $x_1' = 2x_1 + x_2 + 3x_3 + 3e^{2t}$
   $x_2' = x_1 - 3x_2 - 2\cos t$
   $x_3' = 2x_1 - x_2 + 4x_3 + t$

8. $x_1' = t^2 x_1 + x_2 - tx_3 + 3$
   $x_2' = -3e^t x_2 + 2x_3 - 2e^{-2t}$
   $x_3' = 2x_1 + t^2 x_2 + 4x_3$

9. Verify that $\mathbf{u}(t) = \begin{pmatrix} t^{-1} \\ -t^{-2} \end{pmatrix}$ is a solution of the system in Example 1(b).

10. Verify that $\mathbf{u}(t) = \begin{pmatrix} e^{-3t} \\ -3e^{-3t} \\ 9e^{-3t} \end{pmatrix} + \begin{pmatrix} \tfrac{1}{2}e^t \\ \tfrac{1}{2}e^t \\ \tfrac{1}{2}e^t \end{pmatrix}$ is a solution of the system in Example 1(a).

11. Verify that $\mathbf{w}(t) = \begin{pmatrix} te^{2t} \\ e^{2t} + 2te^{2t} \\ 4e^{2t} + 4te^{2t} \end{pmatrix}$ is a solution of the homogeneous system associated with the system in Example 1(a).

12. Verify that $\mathbf{x}(t) = \begin{pmatrix} -\sin t \\ -\cos t - 2\sin t \end{pmatrix}$ is a solution of the system
\[
\mathbf{x}' = \begin{pmatrix} -2 & 1 \\ -3 & 2 \end{pmatrix}\mathbf{x} + \begin{pmatrix} 0 \\ 2\sin t \end{pmatrix}.
\]

13. Verify that $\mathbf{x}(t) = \begin{pmatrix} -2e^{-2t} \\ 0 \\ 3e^{-2t} \end{pmatrix}$ is a solution of the system
\[
\mathbf{x}' = \begin{pmatrix} 1 & -3 & 2 \\ 0 & -1 & 0 \\ 0 & -1 & -2 \end{pmatrix}\mathbf{x}.
\]
4.2. Homogeneous Systems

In this section we give the basic theory for linear homogeneous systems. This "theory" is simply a repetition of the results given in Sections 3.2 and 3.6, phrased this time in terms of the system
\[
\begin{aligned}
x_1' &= a_{11}(t)x_1 + a_{12}(t)x_2 + \cdots + a_{1n}(t)x_n \\
x_2' &= a_{21}(t)x_1 + a_{22}(t)x_2 + \cdots + a_{2n}(t)x_n \\
&\ \ \vdots \\
x_n' &= a_{n1}(t)x_1 + a_{n2}(t)x_2 + \cdots + a_{nn}(t)x_n
\end{aligned}
\tag{H}
\]
or
\[
\mathbf{x}' = A(t)\,\mathbf{x}. \tag{H}
\]

Note first that the zero vector
\[
\mathbf{z}(t) \equiv \mathbf{0} = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{pmatrix}
\]
is a solution of (H). As before, this solution is called the trivial solution. Of course, we are interested in finding nontrivial solutions.

THEOREM 1. If $\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_k$ are solutions of (H), and if $c_1, c_2, \ldots, c_k$ are real numbers, then
\[
c_1\mathbf{x}_1 + c_2\mathbf{x}_2 + \cdots + c_k\mathbf{x}_k
\]
is a solution of (H); that is, any linear combination of solutions of (H) is also a solution of (H).
DEFINITION 1. Let
\[
\mathbf{x}_1 = \begin{pmatrix} x_{11}(t) \\ x_{21}(t) \\ \vdots \\ x_{n1}(t) \end{pmatrix},
\quad
\mathbf{x}_2 = \begin{pmatrix} x_{12}(t) \\ x_{22}(t) \\ \vdots \\ x_{n2}(t) \end{pmatrix},
\quad \ldots, \quad
\mathbf{x}_k = \begin{pmatrix} x_{1k}(t) \\ x_{2k}(t) \\ \vdots \\ x_{nk}(t) \end{pmatrix}
\]
be $n$-component vector functions defined on some interval $I$. The vectors are linearly dependent on $I$ if there exist $k$ real numbers $c_1, c_2, \ldots, c_k$, not all zero, such that
\[
c_1\mathbf{x}_1(t) + c_2\mathbf{x}_2(t) + \cdots + c_k\mathbf{x}_k(t) \equiv \mathbf{0} \quad \text{on } I.
\]
Otherwise the vectors are linearly independent on $I$.
THEOREM 2. Let $\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n$ be $n$, $n$-component vector functions defined on an interval $I$. If the vectors are linearly dependent, then
\[
W(t) = \begin{vmatrix}
x_{11} & x_{12} & \cdots & x_{1n} \\
x_{21} & x_{22} & \cdots & x_{2n} \\
\vdots & \vdots & & \vdots \\
x_{n1} & x_{n2} & \cdots & x_{nn}
\end{vmatrix} \equiv 0 \quad \text{on } I.
\]
The determinant in Theorem 2 is called the Wronskian of the vector functions $\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n$.

COROLLARY. Let $\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n$ be $n$, $n$-component vector functions defined on an interval $I$. If the Wronskian $W(t) \ne 0$ for at least one $t \in I$, then the vectors are linearly independent on $I$.
Example 1. The vector functions
\[
\mathbf{u} = \begin{pmatrix} t^3 \\ 3t^2 \end{pmatrix}
\quad \text{and} \quad
\mathbf{x} = \begin{pmatrix} t^{-1} \\ -t^{-2} \end{pmatrix}
\]
are solutions of the homogeneous system in Example 1(b), Section 4.1. Their Wronskian is:
\[
W(t) = \begin{vmatrix} t^3 & t^{-1} \\ 3t^2 & -t^{-2} \end{vmatrix} = -4t.
\]
The solutions are linearly independent.

The vector functions
\[
\mathbf{x}_1 = \begin{pmatrix} e^{2t} \\ 2e^{2t} \\ 4e^{2t} \end{pmatrix}, \quad
\mathbf{x}_2 = \begin{pmatrix} e^{-3t} \\ -3e^{-3t} \\ 9e^{-3t} \end{pmatrix}, \quad
\mathbf{x}_3 = \begin{pmatrix} te^{2t} \\ e^{2t} + 2te^{2t} \\ 4e^{2t} + 4te^{2t} \end{pmatrix}
\]
are solutions of the homogeneous system
\[
\mathbf{x}' = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -12 & 8 & 1 \end{pmatrix}\mathbf{x}.
\]
Their Wronskian is:
\[
W(t) = \begin{vmatrix}
e^{2t} & e^{-3t} & te^{2t} \\
2e^{2t} & -3e^{-3t} & e^{2t} + 2te^{2t} \\
4e^{2t} & 9e^{-3t} & 4e^{2t} + 4te^{2t}
\end{vmatrix} = -25e^t.
\]
These solutions are linearly independent.
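The second Wronskian above can be checked symbolically. A minimal sketch, assuming SymPy:

```python
import sympy as sp

t = sp.symbols('t')

# The three solution vectors as columns of a matrix.
X = sp.Matrix([
    [sp.exp(2*t),   sp.exp(-3*t),    t*sp.exp(2*t)],
    [2*sp.exp(2*t), -3*sp.exp(-3*t), sp.exp(2*t) + 2*t*sp.exp(2*t)],
    [4*sp.exp(2*t), 9*sp.exp(-3*t),  4*sp.exp(2*t) + 4*t*sp.exp(2*t)],
])

W = sp.simplify(X.det())
print(W)   # -25*exp(t), nonzero, so the solutions are linearly independent
```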
THEOREM 3. Let $\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n$ be $n$ solutions of (H). Exactly one of the following holds:

1. $W(\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n)(t) \equiv 0$ on $I$ and the solutions are linearly dependent.

2. $W(\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n)(t) \ne 0$ for all $t \in I$ and the solutions are linearly independent.

It is easy to construct sets of $n$ linearly independent solutions of (H). Simply pick any point $a \in I$ and any nonsingular $n \times n$ matrix $A$. Let $\boldsymbol{\alpha}_1$ be the first column of $A$, $\boldsymbol{\alpha}_2$ the second column of $A$, and so on. Then let $\mathbf{x}_1$ be the solution of (H) such that $\mathbf{x}_1(a) = \boldsymbol{\alpha}_1$, let $\mathbf{x}_2$ be the solution of (H) such that $\mathbf{x}_2(a) = \boldsymbol{\alpha}_2$, ..., and let $\mathbf{x}_n$ be the solution of (H) such that $\mathbf{x}_n(a) = \boldsymbol{\alpha}_n$. The existence and uniqueness theorem guarantees the existence of these solutions. Now
\[
W(\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n)(a) = \det A \ne 0.
\]
Therefore, $W(t) \ne 0$ for all $t \in I$ and the solutions are linearly independent.

A particularly nice set of $n$ linearly independent solutions is obtained by choosing $A = I_n$, the identity matrix.

THEOREM 4. Let $\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n$ be $n$ linearly independent solutions of (H). Let $\mathbf{u}$ be any solution of (H). Then there exists a unique set of constants $c_1, c_2, \ldots, c_n$ such that
\[
\mathbf{u} = c_1\mathbf{x}_1 + c_2\mathbf{x}_2 + \cdots + c_n\mathbf{x}_n.
\]
That is, every solution of (H) can be written as a unique linear combination of $\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n$.

DEFINITION 2. A set $\{\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n\}$ of $n$ linearly independent solutions of (H) is called a fundamental set of solutions. A fundamental set of solutions is also called a solution basis for (H). If $\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n$ is a fundamental set of solutions of (H), then the $n \times n$ matrix
\[
X(t) = \begin{pmatrix}
x_{11}(t) & x_{12}(t) & \cdots & x_{1n}(t) \\
x_{21}(t) & x_{22}(t) & \cdots & x_{2n}(t) \\
\vdots & \vdots & & \vdots \\
x_{n1}(t) & x_{n2}(t) & \cdots & x_{nn}(t)
\end{pmatrix}
\]
(the vectors $\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n$ are the columns of $X$) is called a fundamental matrix for (H).

DEFINITION 3. Let $\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n$ be a fundamental set of solutions of (H). Then
\[
\mathbf{x} = c_1\mathbf{x}_1 + c_2\mathbf{x}_2 + \cdots + c_n\mathbf{x}_n,
\]
where $c_1, c_2, \ldots, c_n$ are arbitrary constants, is the general solution of (H).
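The fundamental-matrix idea translates directly into code. The sketch below (assuming SymPy) assembles $X(t)$ from the three solutions of Example 1 above, confirms that $X' = AX$ and that $\det X$ is never zero, and forms the general solution $X(t)\mathbf{c}$:

```python
import sympy as sp

t = sp.symbols('t')

A = sp.Matrix([[0, 1, 0],
               [0, 0, 1],
               [-12, 8, 1]])

# Fundamental matrix built from the three solutions in Example 1 above.
X = sp.Matrix([
    [sp.exp(2*t),   sp.exp(-3*t),    t*sp.exp(2*t)],
    [2*sp.exp(2*t), -3*sp.exp(-3*t), sp.exp(2*t) + 2*t*sp.exp(2*t)],
    [4*sp.exp(2*t), 9*sp.exp(-3*t),  4*sp.exp(2*t) + 4*t*sp.exp(2*t)],
])

print(sp.simplify(X.diff(t) - A * X))   # zero matrix: X' = A X
print(sp.simplify(X.det()))             # -25*exp(t), never zero

# General solution x(t) = X(t) c for arbitrary constants c1, c2, c3.
c1, c2, c3 = sp.symbols('c1 c2 c3')
x_general = X * sp.Matrix([c1, c2, c3])
print(sp.simplify(x_general.diff(t) - A * x_general))   # zero vector for all c1, c2, c3
```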
Exercises 4.2

Determine whether or not the vector functions are linearly dependent.

1. $\mathbf{x}_1 = \begin{pmatrix} 2t - 1 \\ -t \end{pmatrix}$, $\mathbf{x}_2 = \begin{pmatrix} -t + 1 \\ 2t \end{pmatrix}$

2. $\mathbf{x}_1 = \begin{pmatrix} \cos t \\ \sin t \end{pmatrix}$, $\mathbf{x}_2 = \begin{pmatrix} \sin t \\ \cos t \end{pmatrix}$

3. $\mathbf{x}_1 = \begin{pmatrix} 2 - t \\ t \\ -2 \end{pmatrix}$, $\mathbf{x}_2 = \begin{pmatrix} t \\ -1 \\ 2 \end{pmatrix}$, $\mathbf{x}_3 = \begin{pmatrix} 2 + t \\ t - 2 \\ 2 \end{pmatrix}$.

4. $\mathbf{x}_1 = \begin{pmatrix} e^t \\ -e^t \\ e^t \end{pmatrix}$, $\mathbf{x}_2 = \begin{pmatrix} -e^t \\ 2e^t \\ -e^t \end{pmatrix}$, $\mathbf{x}_3 = \begin{pmatrix} 0 \\ e^t \\ 0 \end{pmatrix}$.

5. $\mathbf{x}_1 = \begin{pmatrix} e^t \\ 0 \end{pmatrix}$, $\mathbf{x}_2 = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$, $\mathbf{x}_3 = \begin{pmatrix} 0 \\ e^t \end{pmatrix}$

6. Given the linear differential system
\[
\mathbf{x}' = \begin{pmatrix} 5 & -3 \\ 2 & 0 \end{pmatrix}\mathbf{x}.
\]
Let
\[
\mathbf{x}_1 = \begin{pmatrix} e^{2t} \\ e^{2t} \end{pmatrix}
\quad \text{and} \quad
\mathbf{x}_2 = \begin{pmatrix} 3e^{3t} \\ 2e^{3t} \end{pmatrix}.
\]
(a) Show that $\mathbf{x}_1, \mathbf{x}_2$ are a fundamental set of solutions of the system.
(b) Let $X$ be the corresponding fundamental matrix. Show that $X' = AX$.
(c) Give the general solution of the system.
(d) Find the solution of the system that satisfies $\mathbf{x}(0) = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$.

7. Let $X$ be the matrix function
\[
X(t) = \begin{pmatrix} 0 & 4te^{-t} & e^{-t} \\ 1 & e^{-t} & 0 \\ 1 & 0 & 0 \end{pmatrix}.
\]
(a) Verify that $X$ is a fundamental matrix for the system
\[
\mathbf{x}' = \begin{pmatrix} -1 & 4 & -4 \\ 0 & -1 & 1 \\ 0 & 0 & 0 \end{pmatrix}\mathbf{x}.
\]
(b) Find the solution of the system that satisfies $\mathbf{x}(0) = \begin{pmatrix} 0 \\ 1 \\ 2 \end{pmatrix}$.
4.3. Homogeneous Systems with Constant Coefficients

A homogeneous system with constant coefficients is a linear differential system having the form
\[
\begin{aligned}
x_1' &= a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n \\
x_2' &= a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n \\
&\ \ \vdots \\
x_n' &= a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n
\end{aligned}
\]
where $a_{11}, a_{12}, \ldots, a_{nn}$ are constants. The system in vector-matrix form is
\[
\begin{pmatrix} x_1' \\ x_2' \\ \vdots \\ x_n' \end{pmatrix}
= \begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nn}
\end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}
\qquad \text{or} \qquad
\mathbf{x}' = A\,\mathbf{x}.
\]
Example 1. Consider the 3rd-order linear homogeneous differential equation
\[
y''' - y'' - 8y' + 12y = 0.
\]
The characteristic equation is:
\[
r^3 - r^2 - 8r + 12 = (r - 2)^2(r + 3) = 0
\]
and $\{e^{2t},\ te^{2t},\ e^{-3t}\}$ is a solution basis for the equation.

The corresponding linear homogeneous system is
\[
\mathbf{x}' = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -12 & 8 & 1 \end{pmatrix}\mathbf{x}
\]
and
\[
\mathbf{x}_1(t) = \begin{pmatrix} e^{2t} \\ 2e^{2t} \\ 4e^{2t} \end{pmatrix} = e^{2t}\begin{pmatrix} 1 \\ 2 \\ 4 \end{pmatrix}
\]
is a solution vector. Similarly,
\[
\mathbf{x}_2(t) = e^{-3t}\begin{pmatrix} 1 \\ -3 \\ 9 \end{pmatrix}
\]
is a solution vector (its components are $e^{-3t}$ together with its first and second derivatives).

The example suggests that homogeneous systems with constant coefficients might have solution vectors of the form $\mathbf{x}(t) = e^{\lambda t}\mathbf{v}$, for some number $\lambda$ and some constant vector $\mathbf{v}$.

If $\mathbf{x}(t) = e^{\lambda t}\mathbf{v}$ is a solution vector of (H), then
\[
\mathbf{x}' = A\mathbf{x} \quad \text{implies} \quad \lambda e^{\lambda t}\mathbf{v} = A e^{\lambda t}\mathbf{v} \quad \text{and so} \quad A\mathbf{v} = \lambda\mathbf{v}.
\]
The latter equation is an eigenvalue-eigenvector equation for $A$. Thus, we look for solutions of the form $\mathbf{x}(t) = e^{\lambda t}\mathbf{v}$ where $\lambda$ is an eigenvalue of $A$ and $\mathbf{v}$ is a corresponding eigenvector.
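Numerically, this recipe is a single call to an eigensolver; repeated eigenvalues need the extra care discussed under "Two Difficulties" later in this section. A minimal sketch (assuming NumPy), using the matrix that is solved by hand in Example 2 below:

```python
import numpy as np

A = np.array([[1.0, 5.0],
              [3.0, 3.0]])

# Columns of `vectors` are eigenvectors, in the same order as `values`.
values, vectors = np.linalg.eig(A)
print(values)    # 6. and -2. (possibly in a different order)
print(vectors)   # columns proportional to (1, 1) and (5, -3), up to sign and scaling

# Each eigenpair (lam, v) gives a solution x(t) = e^(lam*t) v of x' = A x.
def make_solution(lam, v):
    return lambda t: np.exp(lam * t) * v

x1 = make_solution(values[0], vectors[:, 0])
print(x1(0.5))
```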
Example 2. Find a fundamental set of solution vectors of
\[
\mathbf{x}' = \begin{pmatrix} 1 & 5 \\ 3 & 3 \end{pmatrix}\mathbf{x}
\]
and give the general solution of the system.

SOLUTION First we find the eigenvalues:
\[
\det(A - \lambda I) = \begin{vmatrix} 1 - \lambda & 5 \\ 3 & 3 - \lambda \end{vmatrix} = (\lambda - 6)(\lambda + 2).
\]
The eigenvalues are $\lambda_1 = 6$ and $\lambda_2 = -2$.

Next, we find corresponding eigenvectors. For $\lambda_1 = 6$ we have:
\[
(A - 6I)\mathbf{x} = \begin{pmatrix} -5 & 5 \\ 3 & -3 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix},
\]
which implies $x_1 = x_2$, $x_2$ arbitrary. Setting $x_2 = 1$, we get the eigenvector $\begin{pmatrix} 1 \\ 1 \end{pmatrix}$.

Repeating the process for $\lambda_2 = -2$, we get the eigenvector $\begin{pmatrix} 5 \\ -3 \end{pmatrix}$.

Thus
\[
\mathbf{x}_1 = e^{6t}\begin{pmatrix} 1 \\ 1 \end{pmatrix}
\quad \text{and} \quad
\mathbf{x}_2 = e^{-2t}\begin{pmatrix} 5 \\ -3 \end{pmatrix}
\]
are solution vectors of the system. The Wronskian of $\mathbf{x}_1$ and $\mathbf{x}_2$ is:
\[
W(t) = \begin{vmatrix} e^{6t} & 5e^{-2t} \\ e^{6t} & -3e^{-2t} \end{vmatrix} = -8e^{4t} \ne 0.
\]
Thus $\mathbf{x}_1$ and $\mathbf{x}_2$ are linearly independent; they form a fundamental set of solutions. The general solution of the system is
\[
\mathbf{x}(t) = c_1\mathbf{x}_1 + c_2\mathbf{x}_2 = c_1 e^{6t}\begin{pmatrix} 1 \\ 1 \end{pmatrix} + c_2 e^{-2t}\begin{pmatrix} 5 \\ -3 \end{pmatrix}.
\]
Example 3. Find a fundamental set of solution vectors of
\[
\mathbf{x}' = \begin{pmatrix} 3 & -1 & -1 \\ -12 & 0 & 5 \\ 4 & -2 & -1 \end{pmatrix}\mathbf{x}
\]
and find the solution that satisfies the initial condition $\mathbf{x}(0) = \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}$.

SOLUTION
\[
\det(A - \lambda I) = \begin{vmatrix} 3 - \lambda & -1 & -1 \\ -12 & -\lambda & 5 \\ 4 & -2 & -1 - \lambda \end{vmatrix} = -\lambda^3 + 2\lambda^2 + \lambda - 2.
\]
Now
\[
\det(A - \lambda I) = 0 \quad \text{implies} \quad \lambda^3 - 2\lambda^2 - \lambda + 2 = (\lambda - 2)(\lambda - 1)(\lambda + 1) = 0.
\]
The eigenvalues are $\lambda_1 = 2$, $\lambda_2 = 1$, $\lambda_3 = -1$. As you can check, corresponding eigenvectors are:
\[
\mathbf{v}_1 = \begin{pmatrix} 1 \\ -1 \\ 2 \end{pmatrix}, \quad
\mathbf{v}_2 = \begin{pmatrix} 3 \\ -1 \\ 7 \end{pmatrix}, \quad
\mathbf{v}_3 = \begin{pmatrix} 1 \\ 2 \\ 2 \end{pmatrix}.
\]
A fundamental set of solution vectors is:
\[
\mathbf{x}_1 = e^{2t}\begin{pmatrix} 1 \\ -1 \\ 2 \end{pmatrix}, \quad
\mathbf{x}_2 = e^{t}\begin{pmatrix} 3 \\ -1 \\ 7 \end{pmatrix}, \quad
\mathbf{x}_3 = e^{-t}\begin{pmatrix} 1 \\ 2 \\ 2 \end{pmatrix},
\]
since exponential vector functions with distinct exponents are linearly independent (calculate the Wronskian to verify).

To find the solution vector satisfying the initial condition, solve
\[
c_1\mathbf{x}_1(0) + c_2\mathbf{x}_2(0) + c_3\mathbf{x}_3(0) = \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix},
\]
which is
\[
c_1\begin{pmatrix} 1 \\ -1 \\ 2 \end{pmatrix} + c_2\begin{pmatrix} 3 \\ -1 \\ 7 \end{pmatrix} + c_3\begin{pmatrix} 1 \\ 2 \\ 2 \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}
\quad \text{or} \quad
\begin{pmatrix} 1 & 3 & 1 \\ -1 & -1 & 2 \\ 2 & 7 & 2 \end{pmatrix}\begin{pmatrix} c_1 \\ c_2 \\ c_3 \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}.
\]
Note: The matrix of coefficients is the fundamental matrix evaluated at $t = 0$.

Using the solution method of your choice (row reduction, inverse, Cramer's rule), the solution is: $c_1 = 3$, $c_2 = -1$, $c_3 = 1$. The solution of the initial-value problem is
\[
\mathbf{x} = 3e^{2t}\begin{pmatrix} 1 \\ -1 \\ 2 \end{pmatrix} - e^{t}\begin{pmatrix} 3 \\ -1 \\ 7 \end{pmatrix} + e^{-t}\begin{pmatrix} 1 \\ 2 \\ 2 \end{pmatrix}.
\]
Two Difficulties

There are two difficulties that can arise:

1. A has complex eigenvalues.

If $\lambda = a + bi$ is a complex eigenvalue with corresponding (complex) eigenvector $\mathbf{u} + i\,\mathbf{v}$, then $\bar{\lambda} = a - bi$ (the complex conjugate of $\lambda$) is also an eigenvalue of $A$, and $\mathbf{u} - i\,\mathbf{v}$ is a corresponding eigenvector. The corresponding linearly independent complex solutions of $\mathbf{x}' = A\mathbf{x}$ are:
\[
\begin{aligned}
\mathbf{w}_1 &= e^{(a+bi)t}(\mathbf{u} + i\,\mathbf{v}) = e^{at}(\cos bt + i \sin bt)(\mathbf{u} + i\,\mathbf{v})
= e^{at}\big[(\cos bt\,\mathbf{u} - \sin bt\,\mathbf{v}) + i(\cos bt\,\mathbf{v} + \sin bt\,\mathbf{u})\big] \\
\mathbf{w}_2 &= e^{(a-bi)t}(\mathbf{u} - i\,\mathbf{v}) = e^{at}(\cos bt - i \sin bt)(\mathbf{u} - i\,\mathbf{v})
= e^{at}\big[(\cos bt\,\mathbf{u} - \sin bt\,\mathbf{v}) - i(\cos bt\,\mathbf{v} + \sin bt\,\mathbf{u})\big]
\end{aligned}
\]
Now
\[
\mathbf{x}_1(t) = \tfrac{1}{2}\big[\mathbf{w}_1(t) + \mathbf{w}_2(t)\big] = e^{at}(\cos bt\,\mathbf{u} - \sin bt\,\mathbf{v})
\]
and
\[
\mathbf{x}_2(t) = \tfrac{1}{2i}\big[\mathbf{w}_1(t) - \mathbf{w}_2(t)\big] = e^{at}(\cos bt\,\mathbf{v} + \sin bt\,\mathbf{u})
\]
are linearly independent solutions of the system, and they are real-valued vector functions. It is worth noting that $\mathbf{x}_1$ and $\mathbf{x}_2$ are simply the real and imaginary parts of $\mathbf{w}_1$ (or of $\mathbf{w}_2$, up to sign).
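In floating-point code the same construction amounts to splitting a complex eigenpair into real and imaginary parts. A minimal sketch (assuming NumPy), using the matrix of Example 4 below; the eigenvector NumPy returns is a complex scalar multiple of the one found by hand, so the resulting real solutions differ from the text's, but they span the same solution set:

```python
import numpy as np

A = np.array([[1.0, -4.0, -1.0],
              [3.0,  2.0,  3.0],
              [1.0,  1.0,  3.0]])

values, vectors = np.linalg.eig(A)

# Pick an eigenvalue with positive imaginary part (here 2 + 3i) and its eigenvector.
k = np.argmax(values.imag)
lam, vec = values[k], vectors[:, k]
a, b = lam.real, lam.imag
u, v = vec.real, vec.imag

# Two real-valued solutions built from the complex pair.
def x1(t):
    return np.exp(a*t) * (np.cos(b*t) * u - np.sin(b*t) * v)

def x2(t):
    return np.exp(a*t) * (np.cos(b*t) * v + np.sin(b*t) * u)

# Quick finite-difference check that x1 satisfies x' = A x at a sample time.
h, t0 = 1e-6, 0.3
print(np.max(np.abs((x1(t0 + h) - x1(t0 - h)) / (2*h) - A @ x1(t0))))   # small
```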
Example 4. Determine a fundamental set of solution vectors of
\[
\mathbf{x}' = \begin{pmatrix} 1 & -4 & -1 \\ 3 & 2 & 3 \\ 1 & 1 & 3 \end{pmatrix}\mathbf{x}.
\]

SOLUTION
\[
\det(A - \lambda I) = \begin{vmatrix} 1 - \lambda & -4 & -1 \\ 3 & 2 - \lambda & 3 \\ 1 & 1 & 3 - \lambda \end{vmatrix}
= -\lambda^3 + 6\lambda^2 - 21\lambda + 26 = -(\lambda - 2)(\lambda^2 - 4\lambda + 13).
\]
The eigenvalues are: $\lambda_1 = 2$, $\lambda_2 = 2 + 3i$, $\lambda_3 = 2 - 3i$. The corresponding eigenvectors are:
\[
\mathbf{v}_1 = \begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix}, \quad
\mathbf{v}_2 = \begin{pmatrix} -5 + 3i \\ 3 + 3i \\ 2 \end{pmatrix} = \begin{pmatrix} -5 \\ 3 \\ 2 \end{pmatrix} + i\begin{pmatrix} 3 \\ 3 \\ 0 \end{pmatrix}, \quad
\mathbf{v}_3 = \begin{pmatrix} -5 - 3i \\ 3 - 3i \\ 2 \end{pmatrix} = \begin{pmatrix} -5 \\ 3 \\ 2 \end{pmatrix} - i\begin{pmatrix} 3 \\ 3 \\ 0 \end{pmatrix}.
\]
Now
\[
e^{(2+3i)t}\left[\begin{pmatrix} -5 \\ 3 \\ 2 \end{pmatrix} + i\begin{pmatrix} 3 \\ 3 \\ 0 \end{pmatrix}\right]
= e^{2t}(\cos 3t + i\sin 3t)\left[\begin{pmatrix} -5 \\ 3 \\ 2 \end{pmatrix} + i\begin{pmatrix} 3 \\ 3 \\ 0 \end{pmatrix}\right]
\]
\[
= e^{2t}\left[\cos 3t\begin{pmatrix} -5 \\ 3 \\ 2 \end{pmatrix} - \sin 3t\begin{pmatrix} 3 \\ 3 \\ 0 \end{pmatrix}\right]
+ i\,e^{2t}\left[\cos 3t\begin{pmatrix} 3 \\ 3 \\ 0 \end{pmatrix} + \sin 3t\begin{pmatrix} -5 \\ 3 \\ 2 \end{pmatrix}\right].
\]
A fundamental set of solution vectors for the system is:
\[
\mathbf{x}_1 = e^{2t}\begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix}, \quad
\mathbf{x}_2 = e^{2t}\left[\cos 3t\begin{pmatrix} -5 \\ 3 \\ 2 \end{pmatrix} - \sin 3t\begin{pmatrix} 3 \\ 3 \\ 0 \end{pmatrix}\right], \quad
\mathbf{x}_3 = e^{2t}\left[\cos 3t\begin{pmatrix} 3 \\ 3 \\ 0 \end{pmatrix} + \sin 3t\begin{pmatrix} -5 \\ 3 \\ 2 \end{pmatrix}\right].
\]
2. A has an eigenvalue of multiplicity greater than 1.

We'll look first at the case where $A$ has an eigenvalue of multiplicity 2.

Example 5. Let $A = \begin{pmatrix} 1 & -3 & 3 \\ 3 & -5 & 3 \\ 6 & -6 & 4 \end{pmatrix}$. Then
\[
\det(A - \lambda I) = \begin{vmatrix} 1 - \lambda & -3 & 3 \\ 3 & -5 - \lambda & 3 \\ 6 & -6 & 4 - \lambda \end{vmatrix}
= -\lambda^3 + 12\lambda + 16 = -(\lambda - 4)(\lambda + 2)^2.
\]
The eigenvalues are: $\lambda_1 = 4$, $\lambda_2 = \lambda_3 = -2$.

As you can check, an eigenvector corresponding to $\lambda_1 = 4$ is $\mathbf{v}_1 = \begin{pmatrix} 1 \\ 1 \\ 2 \end{pmatrix}$.

We'll carry out the details involved in finding an eigenvector corresponding to the "double" eigenvalue $-2$:
\[
[A - (-2)I]\mathbf{v} = \begin{pmatrix} 3 & -3 & 3 \\ 3 & -3 & 3 \\ 6 & -6 & 6 \end{pmatrix}\begin{pmatrix} v_1 \\ v_2 \\ v_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}.
\]
The augmented matrix for this system of equations is
\[
\begin{pmatrix} 3 & -3 & 3 & 0 \\ 3 & -3 & 3 & 0 \\ 6 & -6 & 6 & 0 \end{pmatrix}
\quad \text{which row reduces to} \quad
\begin{pmatrix} 1 & -1 & 1 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}.
\]
The solutions of this system are: $v_1 = v_2 - v_3$, with $v_2, v_3$ arbitrary. We can assign values to $v_2$ and $v_3$ independently and obtain two linearly independent eigenvectors. For example, setting $v_2 = 1$, $v_3 = 0$, we get the eigenvector $\mathbf{v}_2 = \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix}$. Reversing the roles, we set $v_2 = 0$, $v_3 = -1$ to get the eigenvector $\mathbf{v}_3 = \begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix}$. Clearly $\mathbf{v}_2$ and $\mathbf{v}_3$ are linearly independent. You should understand that there is nothing magic about our two choices for $v_2, v_3$; any choice which produces two independent vectors will do.

The important thing to note here is that this eigenvalue of multiplicity 2 produced two independent eigenvectors.

Based on our work above, a fundamental set of solutions for the differential system
\[
\mathbf{x}' = \begin{pmatrix} 1 & -3 & 3 \\ 3 & -5 & 3 \\ 6 & -6 & 4 \end{pmatrix}\mathbf{x}
\]
is
\[
\mathbf{x}_1 = e^{4t}\begin{pmatrix} 1 \\ 1 \\ 2 \end{pmatrix}, \quad
\mathbf{x}_2 = e^{-2t}\begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix}, \quad
\mathbf{x}_3 = e^{-2t}\begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix}.
\]
Example 6. Let $A = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 12 & 8 & -1 \end{pmatrix}$. Then
\[
\det(A - \lambda I) = \begin{vmatrix} -\lambda & 1 & 0 \\ 0 & -\lambda & 1 \\ 12 & 8 & -1 - \lambda \end{vmatrix}
= -\lambda^3 - \lambda^2 + 8\lambda + 12 = -(\lambda - 3)(\lambda + 2)^2.
\]
The eigenvalues are: $\lambda_1 = 3$, $\lambda_2 = \lambda_3 = -2$.

As you can check, an eigenvector corresponding to $\lambda_1 = 3$ is $\mathbf{v}_1 = \begin{pmatrix} 1 \\ 3 \\ 9 \end{pmatrix}$.

We'll carry out the details involved in finding an eigenvector corresponding to the "double" eigenvalue $-2$:
\[
[A - (-2)I]\mathbf{v} = \begin{pmatrix} 2 & 1 & 0 \\ 0 & 2 & 1 \\ 12 & 8 & 1 \end{pmatrix}\begin{pmatrix} v_1 \\ v_2 \\ v_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}.
\]
The augmented matrix for this system of equations is
\[
\begin{pmatrix} 2 & 1 & 0 & 0 \\ 0 & 2 & 1 & 0 \\ 12 & 8 & 1 & 0 \end{pmatrix}
\quad \text{which row reduces to} \quad
\begin{pmatrix} 2 & 1 & 0 & 0 \\ 0 & 2 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}.
\]
The solutions of this system are $v_1 = \tfrac{1}{4}v_3$, $v_2 = -\tfrac{1}{2}v_3$, with $v_3$ arbitrary. Here there is only one parameter, and so we'll get only one eigenvector. Setting $v_3 = 4$, we get the eigenvector $\mathbf{v}_2 = \begin{pmatrix} 1 \\ -2 \\ 4 \end{pmatrix}$.

In contrast to the preceding example, the "double" eigenvalue here has only one (independent) eigenvector.
Suppose that we were asked to find a fundamental set of solutions of the linear differential system
\[
\mathbf{x}' = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 12 & 8 & -1 \end{pmatrix}\mathbf{x}.
\]
By our work above, we have two independent solutions
\[
\mathbf{x}_1 = e^{3t}\begin{pmatrix} 1 \\ 3 \\ 9 \end{pmatrix}
\quad \text{and} \quad
\mathbf{x}_2 = e^{-2t}\begin{pmatrix} 1 \\ -2 \\ 4 \end{pmatrix}.
\]
We need a third solution which is independent of these two.

Our system has a special form; it is equivalent to the third-order equation
\[
y''' + y'' - 8y' - 12y = 0.
\]
The characteristic equation is
\[
r^3 + r^2 - 8r - 12 = (r - 3)(r + 2)^2 = 0
\]
(compare with $\det(A - \lambda I)$). The roots are $r_1 = 3$, $r_2 = r_3 = -2$, and a fundamental set of solutions is $\{y_1 = e^{3t},\ y_2 = e^{-2t},\ y_3 = te^{-2t}\}$. The correspondence between these solutions and the solution vectors we found above should be clear:
\[
e^{3t} \longrightarrow e^{3t}\begin{pmatrix} 1 \\ 3 \\ 9 \end{pmatrix}, \qquad
e^{-2t} \longrightarrow e^{-2t}\begin{pmatrix} 1 \\ -2 \\ 4 \end{pmatrix}.
\]
The solution $y_3 = te^{-2t}$ of the equation corresponds to the solution vector
\[
\mathbf{x}_3 = \begin{pmatrix} y_3 \\ y_3' \\ y_3'' \end{pmatrix}
= \begin{pmatrix} te^{-2t} \\ e^{-2t} - 2te^{-2t} \\ -4e^{-2t} + 4te^{-2t} \end{pmatrix}
= e^{-2t}\begin{pmatrix} 0 \\ 1 \\ -4 \end{pmatrix} + te^{-2t}\begin{pmatrix} 1 \\ -2 \\ 4 \end{pmatrix}.
\]
The appearance of the $te^{-2t}\mathbf{v}_2$ term should not be unexpected, since we know that a characteristic root of multiplicity 2 produces a solution of the form $te^{rt}$.

You can check that $\mathbf{x}_3$ is independent of $\mathbf{x}_1$ and $\mathbf{x}_2$. Therefore, the solution vectors $\mathbf{x}_1, \mathbf{x}_2, \mathbf{x}_3$ are a fundamental set of solutions of the system.

The question is: What is the significance of the vector $\mathbf{w} = \begin{pmatrix} 0 \\ 1 \\ -4 \end{pmatrix}$? How is it related to the eigenvalue $-2$ which generated it, and to the corresponding eigenvector?

Let's look at $[A - (-2)I]\mathbf{w} = [A + 2I]\mathbf{w}$:
\[
[A + 2I]\mathbf{w} = \begin{pmatrix} 2 & 1 & 0 \\ 0 & 2 & 1 \\ 12 & 8 & 1 \end{pmatrix}\begin{pmatrix} 0 \\ 1 \\ -4 \end{pmatrix} = \begin{pmatrix} 1 \\ -2 \\ 4 \end{pmatrix} = \mathbf{v}_2;
\]
$A - (-2)I$ "maps" $\mathbf{w}$ onto the eigenvector $\mathbf{v}_2$. The corresponding solution of the system has the form
\[
\mathbf{x}_3 = e^{-2t}\mathbf{w} + te^{-2t}\mathbf{v}_2
\]
where $\mathbf{v}_2$ is the eigenvector corresponding to $-2$ and $\mathbf{w}$ satisfies
\[
[A - (-2)I]\mathbf{w} = \mathbf{v}_2.
\]
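A generalized eigenvector can be computed by solving the singular but consistent system $(A - \lambda I)\mathbf{w} = \mathbf{v}$, for instance with a least-squares solve. A minimal sketch (assuming NumPy), using the data of Example 6; the $\mathbf{w}$ it returns may differ from the one in the text by a multiple of $\mathbf{v}$, and any such $\mathbf{w}$ works:

```python
import numpy as np

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [12.0, 8.0, -1.0]])

lam = -2.0
v = np.array([1.0, -2.0, 4.0])   # eigenvector for the double eigenvalue -2

# (A - lam*I) is singular, so use least squares; any w with (A - lam*I) w = v
# is a generalized eigenvector (this w may differ from (0, 1, -4) by a multiple of v).
w, *_ = np.linalg.lstsq(A - lam * np.eye(3), v, rcond=None)
print(np.allclose((A - lam * np.eye(3)) @ w, v))   # True: w is a generalized eigenvector

# The corresponding solution of x' = A x is x3(t) = e^(lam t) w + t e^(lam t) v.
```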
General Result

Given the linear differential system $\mathbf{x}' = A\mathbf{x}$. Suppose that $A$ has an eigenvalue $\lambda$ of multiplicity 2. Then exactly one of the following holds:

1. $\lambda$ has two linearly independent eigenvectors, $\mathbf{v}_1$ and $\mathbf{v}_2$. Corresponding linearly independent solution vectors of the differential system are $\mathbf{x}_1(t) = e^{\lambda t}\mathbf{v}_1$ and $\mathbf{x}_2(t) = e^{\lambda t}\mathbf{v}_2$.

2. $\lambda$ has only one (independent) eigenvector $\mathbf{v}$. Then a linearly independent pair of solution vectors corresponding to $\lambda$ is:
\[
\mathbf{x}_1(t) = e^{\lambda t}\mathbf{v}
\quad \text{and} \quad
\mathbf{x}_2(t) = e^{\lambda t}\mathbf{w} + te^{\lambda t}\mathbf{v}
\]
where $\mathbf{w}$ is a vector that satisfies $(A - \lambda I)\mathbf{w} = \mathbf{v}$. The vector $\mathbf{w}$ is called a generalized eigenvector corresponding to the eigenvalue $\lambda$.
Example 7. Find a fundamental set of solution vectors of $\mathbf{x}' = \begin{pmatrix} 1 & -1 \\ 1 & 3 \end{pmatrix}\mathbf{x}$.

SOLUTION
\[
\det(A - \lambda I) = \begin{vmatrix} 1 - \lambda & -1 \\ 1 & 3 - \lambda \end{vmatrix} = \lambda^2 - 4\lambda + 4 = (\lambda - 2)^2.
\]
Characteristic values: $\lambda_1 = \lambda_2 = 2$.

Characteristic vectors:
\[
(A - 2I)\mathbf{v} = \begin{pmatrix} -1 & -1 \\ 1 & 1 \end{pmatrix}\begin{pmatrix} v_1 \\ v_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix};
\qquad
\begin{pmatrix} -1 & -1 & 0 \\ 1 & 1 & 0 \end{pmatrix} \longrightarrow \begin{pmatrix} 1 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix}.
\]
The solutions are: $v_1 = -v_2$, $v_2$ arbitrary; there is only one eigenvector. Setting $v_2 = -1$, we get $\mathbf{v} = \begin{pmatrix} 1 \\ -1 \end{pmatrix}$.

The vector $\mathbf{x}_1 = e^{2t}\begin{pmatrix} 1 \\ -1 \end{pmatrix}$ is a solution of the system.

A second solution, independent of $\mathbf{x}_1$, is $\mathbf{x}_2 = e^{2t}\mathbf{w} + te^{2t}\mathbf{v}$, where $\mathbf{w}$ is a solution of $(A - 2I)\mathbf{z} = \mathbf{v}$:
\[
(A - 2I)\mathbf{z} = \begin{pmatrix} -1 & -1 \\ 1 & 1 \end{pmatrix}\begin{pmatrix} z_1 \\ z_2 \end{pmatrix} = \begin{pmatrix} 1 \\ -1 \end{pmatrix};
\qquad
\begin{pmatrix} -1 & -1 & 1 \\ 1 & 1 & -1 \end{pmatrix} \longrightarrow \begin{pmatrix} 1 & 1 & -1 \\ 0 & 0 & 0 \end{pmatrix}.
\]
The solutions of this system are $z_1 = -1 - z_2$, $z_2$ arbitrary. If we choose $z_2 = 0$ (any choice for $z_2$ will do), we get $z_1 = -1$ and $\mathbf{w} = \begin{pmatrix} -1 \\ 0 \end{pmatrix}$. Thus
\[
\mathbf{x}_2(t) = e^{2t}\begin{pmatrix} -1 \\ 0 \end{pmatrix} + te^{2t}\begin{pmatrix} 1 \\ -1 \end{pmatrix}
\]
is a solution of the system independent of $\mathbf{x}_1$. The solutions
\[
\mathbf{x}_1(t) = e^{2t}\begin{pmatrix} 1 \\ -1 \end{pmatrix}, \qquad
\mathbf{x}_2(t) = e^{2t}\begin{pmatrix} -1 \\ 0 \end{pmatrix} + te^{2t}\begin{pmatrix} 1 \\ -1 \end{pmatrix}
\]
are a fundamental set of solutions of the system.
Eigenvalues of Multiplicity 3.

Given the differential system $\mathbf{x}' = A\mathbf{x}$. Suppose that $\lambda$ is an eigenvalue of $A$ of multiplicity 3. Then exactly one of the following holds:

1. $\lambda$ has three linearly independent eigenvectors $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3$. Then three linearly independent solution vectors of the system corresponding to $\lambda$ are:
\[
\mathbf{x}_1(t) = e^{\lambda t}\mathbf{v}_1, \quad \mathbf{x}_2(t) = e^{\lambda t}\mathbf{v}_2, \quad \mathbf{x}_3(t) = e^{\lambda t}\mathbf{v}_3.
\]

2. $\lambda$ has two linearly independent eigenvectors $\mathbf{v}_1, \mathbf{v}_2$. Then two linearly independent solutions of the system corresponding to $\lambda$ are:
\[
\mathbf{x}_1(t) = e^{\lambda t}\mathbf{v}_1, \quad \mathbf{x}_2(t) = e^{\lambda t}\mathbf{v}_2.
\]
A third solution, independent of $\mathbf{x}_1$ and $\mathbf{x}_2$, has the form
\[
\mathbf{x}_3(t) = e^{\lambda t}\mathbf{w} + te^{\lambda t}\mathbf{v}
\]
where $\mathbf{v}$ is an eigenvector corresponding to $\lambda$ (a suitable linear combination of $\mathbf{v}_1$ and $\mathbf{v}_2$, chosen so that the next equation is solvable) and $(A - \lambda I)\mathbf{w} = \mathbf{v}$.

3. $\lambda$ has only one (independent) eigenvector $\mathbf{v}$. Then three linearly independent solutions of the system have the form:
\[
\mathbf{x}_1 = e^{\lambda t}\mathbf{v}, \quad
\mathbf{x}_2 = e^{\lambda t}\mathbf{w} + te^{\lambda t}\mathbf{v}, \quad
\mathbf{x}_3(t) = e^{\lambda t}\mathbf{z} + te^{\lambda t}\mathbf{w} + \tfrac{t^2}{2}e^{\lambda t}\mathbf{v}
\]
where $(A - \lambda I)\mathbf{w} = \mathbf{v}$ and $(A - \lambda I)\mathbf{z} = \mathbf{w}$.
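As a check on case 3 (including the factor $\tfrac{1}{2}$ on the $t^2$ term), the sketch below (assuming SymPy) verifies the formula on a $3 \times 3$ Jordan block, a hypothetical matrix chosen because its single eigenvalue has exactly one independent eigenvector:

```python
import sympy as sp

t = sp.symbols('t')
lam = 2

# Jordan block: one eigenvalue (2) of multiplicity 3, one independent eigenvector.
A = sp.Matrix([[lam, 1, 0],
               [0, lam, 1],
               [0, 0, lam]])

v = sp.Matrix([1, 0, 0])   # eigenvector
w = sp.Matrix([0, 1, 0])   # satisfies (A - lam*I) w = v
z = sp.Matrix([0, 0, 1])   # satisfies (A - lam*I) z = w

x3 = sp.exp(lam*t) * (z + t*w + sp.Rational(1, 2) * t**2 * v)

print(sp.simplify(x3.diff(t) - A * x3))   # zero vector: x3 solves x' = A x
```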
Exercises 4.3

Find the general solution of the system $\mathbf{x}' = A\mathbf{x}$ where $A$ is the given matrix. If an initial condition is given, also find the solution that satisfies the condition.

1. $\begin{pmatrix} -2 & 4 \\ 1 & 1 \end{pmatrix}$.

2. $\begin{pmatrix} -1 & 1 \\ 4 & 2 \end{pmatrix}$, $\mathbf{x}(0) = \begin{pmatrix} -1 \\ 1 \end{pmatrix}$.

3. $\begin{pmatrix} -2 & 2 & 1 \\ 0 & -1 & 0 \\ 2 & -2 & -1 \end{pmatrix}$. Hint: $-3$ is an eigenvalue.

4. $\begin{pmatrix} 3 & 0 & -1 \\ -2 & 2 & 1 \\ 8 & 0 & -3 \end{pmatrix}$, $\mathbf{x}(0) = \begin{pmatrix} -1 \\ 2 \\ -8 \end{pmatrix}$. Hint: $2$ is an eigenvalue.

5. $\begin{pmatrix} 1 & -2 \\ 2 & 1 \end{pmatrix}$.

6. $\begin{pmatrix} -1 & 2 \\ -1 & -3 \end{pmatrix}$.

7. $\begin{pmatrix} 3 & 2 \\ -8 & -5 \end{pmatrix}$.

8. $\begin{pmatrix} -3 & 0 & -3 \\ 1 & -2 & 3 \\ 1 & 0 & 1 \end{pmatrix}$. Hint: $-2$ is an eigenvalue.

9. $\begin{pmatrix} 2 & -1 & -1 \\ -1 & 2 & -1 \\ 1 & 1 & 4 \end{pmatrix}$. Hint: $3$ is an eigenvalue.

10. $\begin{pmatrix} -2 & 1 & -1 \\ 3 & -3 & 4 \\ 3 & -1 & 2 \end{pmatrix}$. Hint: $1$ is an eigenvalue.

11. $\begin{pmatrix} -3 & 1 & -1 \\ -7 & 5 & -1 \\ -6 & 6 & -2 \end{pmatrix}$.
