Analysis of Algorithms
Minimum Spanning Trees
Andres Mendez-Vazquez
November 8, 2015
Outline
1 Spanning trees
Basic concepts
Growing a Minimum Spanning Tree
The Greedy Choice and Safe Edges
Kruskal’s algorithm
2 Kruskal’s Algorithm
Directly from the previous Corollary
3 Prim’s Algorithm
Implementation
4 More About the MST Problem
Faster Algorithms
Applications
Exercises
Originally
We had a graph without weights
[Figure: an unweighted graph with vertices 1–9]
Then
Now, we have weights
[Figure: the same graph, now with vertex labels a–i and edge weights]
Finally, the optimization problem
We want to find

min_T Σ_{(u,v)∈T} w(u, v)

where T ⊆ E such that T is acyclic and connects all the vertices.
[Figure: the weighted graph from the previous slide]
This problem is called
The minimum spanning tree problem
When do you need minimum spanning trees?
In power distribution
We want to connect points x and y with the minimum amount of cable.
In a wireless network
Given a collection of mobile beacons we want to maintain the minimum
connection overhead between all of them.
Some Applications
Tracking the genetic variance of age- and gender-associated
Staphylococcus aureus
Some Applications
What?
Urban Tapestries is an interactive location-based wireless application
allowing users to access and publish location-specific multimedia content.
Using an MST we can create paths for public multimedia shows that are
not too exhausting.
These models can be seen as
Connected, undirected graphs G = (V, E)
E is the set of possible connections between pairs of beacons.
Each of these edges (u, v) has a weight w(u, v) specifying the cost
of connecting u and v.
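Such a model is easy to write down. A minimal Python sketch (the adjacency-dict layout and the names are illustrative assumptions, not from the slides):

```python
# A connected, undirected graph G = (V, E) with edge weights, stored as
# an adjacency dict: vertex -> list of (neighbor, weight) pairs.
def add_edge(adj, u, v, w):
    """Insert the undirected edge (u, v) with weight w(u, v)."""
    adj.setdefault(u, []).append((v, w))
    adj.setdefault(v, []).append((u, w))  # stored both ways: undirected

adj = {}
add_edge(adj, 'u', 'v', 3.0)  # w(u, v) = 3.0: the cost of connecting u and v
add_edge(adj, 'v', 'x', 1.5)
```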
Growing a Minimum Spanning Tree
There are two classic algorithms: Kruskal’s and Prim’s
Both algorithms use a greedy approach.
Basic greedy idea
Prior to each iteration, A is a subset of some minimum spanning tree.
At each step, we determine an edge (u, v) that can be added to A
such that A ∪ {(u, v)} is also a subset of a minimum spanning tree.
Generic minimum spanning tree algorithm
A Generic Code
Generic-MST(G, w)
1 A = ∅
2 while A does not form a spanning tree
3     do find an edge (u, v) that is safe for A
4         A = A ∪ {(u, v)}
5 return A
This has the following loop invariant
Initialization: After line 1, A trivially satisfies the invariant.
Maintenance: The loop only adds safe edges.
Termination: The final A contains all the edges in a minimum spanning
tree.
Some basic definitions for the Greedy Choice
A cut (S, V − S) is a partition of V
Then (u, v) in E crosses the cut (S, V − S) if one endpoint is in S
and the other is in V − S.
We say that a cut respects A if no edge in A crosses the cut.
A light edge crossing a cut is one of minimum weight among all the
edges crossing the cut.
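The definitions above can be checked mechanically. A minimal Python sketch (the edge-dict format and the function name are illustrative assumptions): given edge weights and one side S of a cut, return a light edge crossing (S, V − S).

```python
def light_edge(weights, S):
    """Return a minimum-weight edge crossing the cut (S, V - S).

    weights: dict mapping edges (u, v) to w(u, v); S: one side of the cut.
    Returns None if no edge crosses the cut.
    """
    crossing = [(w, e) for e, w in weights.items()
                # an edge crosses the cut iff exactly one endpoint is in S
                if (e[0] in S) != (e[1] in S)]
    return min(crossing)[1] if crossing else None

weights = {('a', 'b'): 4, ('b', 'c'): 8, ('a', 'c'): 11}
print(light_edge(weights, {'a'}))  # ('a', 'b'): the light edge for ({a}, {b, c})
```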
The Greedy Choice
Remark
The following algorithms are based on the Greedy Choice.
Which Greedy Choice?
The way we add edges to the set of edges belonging to a minimum
spanning tree.
They are known as
Safe Edges
Recognizing safe edges
Theorem for Recognizing Safe Edges (23.1)
Let G = (V, E) be a connected, undirected graph with weights w defined
on E. Let A ⊆ E be a set that is included in some MST for G, let
(S, V − S) be any cut of G that respects A, and let (u, v) be a light
edge crossing (S, V − S). Then, edge (u, v) is safe for A.
[Figure: a weighted graph partitioned by a cut (S, V − S); the light edge crossing the cut is safe]
Observations
Notice that
At any point in the execution of the algorithm the graph
GA = (V , A) is a forest, and each of the connected components
of GA is a tree.
Thus
Any safe edge (u, v) for A connects distinct components of GA, since
A ∪ {(u, v)} must be acyclic.
The basic corollary
Corollary 23.2
Let G = (V , E) be a connected, undirected graph with real-valued weight
function w defined on E. Let A be a subset of E that is included in some
minimum spanning tree for G, and let C = (Vc, Ec) be a connected
component (tree) in the forest GA = (V , A). If (u, v) is a light edge
connecting C to some other component in GA, then (u, v) is safe for A.
Proof
The cut (Vc, V − Vc) respects A, and (u, v) is a light edge for this cut.
Therefore, (u, v) is safe for A.
Kruskal’s Algorithm
Algorithm
MST-KRUSKAL(G, w)
1 A = ∅
2 for each vertex v ∈ V [G]
3     do MAKE-SET(v)
4 sort the edges of E into non-decreasing order by weight w
5 for each edge (u, v) ∈ E taken in non-decreasing order by weight
6     do if FIND-SET(u) ≠ FIND-SET(v)
7         then A = A ∪ {(u, v)}
8             UNION(u, v)
9 return A
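The pseudocode above translates almost line by line into Python. A minimal sketch, assuming edges are given as (weight, u, v) tuples; the union-find uses path halving and union by rank:

```python
def mst_kruskal(vertices, edges):
    """Kruskal's algorithm; edges is a list of (weight, u, v) tuples."""
    parent = {v: v for v in vertices}    # MAKE-SET for every vertex
    rank = {v: 0 for v in vertices}

    def find_set(x):                     # FIND-SET with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(x, y):                     # UNION by rank
        x, y = find_set(x), find_set(y)
        if rank[x] < rank[y]:
            x, y = y, x
        parent[y] = x
        if rank[x] == rank[y]:
            rank[x] += 1

    A = []
    for w, u, v in sorted(edges):        # non-decreasing order by weight
        if find_set(u) != find_set(v):   # a safe edge joins two components
            A.append((u, v))
            union(u, v)
    return A

edges = [(4, 'a', 'b'), (8, 'b', 'c'), (11, 'a', 'c')]
print(mst_kruskal(['a', 'b', 'c'], edges))  # [('a', 'b'), ('b', 'c')]
```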
Let us run the Algorithm
We have as an input the following graph
[Figure: the weighted graph with vertices a–i]
Let us run the Algorithm
1st step: everybody is a set!!!
[Figure: each vertex a–i forms its own singleton set]
Let us run the Algorithm
[Figure: at each step, the same weighted graph is shown with the edges of A highlighted]
Given (f, g) with weight 1. Question: FIND-SET(f) ≠ FIND-SET(g)?
Then A = A ∪ {(f, g)}; next, FIND-SET(f) ≠ FIND-SET(i)?
Then A = A ∪ {(f, i)}; next, FIND-SET(c) ≠ FIND-SET(f)?
Then A = A ∪ {(c, f)}; next, FIND-SET(a) ≠ FIND-SET(d)?
Then A = A ∪ {(a, d)}; next, FIND-SET(b) ≠ FIND-SET(e)?
Then A = A ∪ {(b, e)}; next, FIND-SET(e) ≠ FIND-SET(i)?
Then A = A ∪ {(e, i)}; next, FIND-SET(b) ≠ FIND-SET(f)?
Then A = A; next, FIND-SET(b) ≠ FIND-SET(c)?
Then A = A; next, FIND-SET(d) ≠ FIND-SET(e)?
Then A = A ∪ {(d, e)}; next, FIND-SET(a) ≠ FIND-SET(b)?
Then A = A; next, FIND-SET(e) ≠ FIND-SET(g)?
Then A = A; next, FIND-SET(g) ≠ FIND-SET(h)?
Then A = A ∪ {(g, h)}.
Kruskal’s Algorithm
Algorithm
MST-KRUSKAL(G, w)
1 A = ∅
2 for each vertex v ∈ V [G]
3     do MAKE-SET(v)
4 sort the edges of E into non-decreasing order by weight w
5 for each edge (u, v) ∈ E taken in non-decreasing order by weight
6     do if FIND-SET(u) ≠ FIND-SET(v)
7         then A = A ∪ {(u, v)}
8             UNION(u, v)
9 return A
Complexity
Explanation
Line 1. Initializing the set A takes O(1) time.
Line 4. Sorting the edges takes O(E log E) time.
Lines 5 to 8. The for loop performs:
O(E) FIND-SET and UNION operations.
Along with the |V| MAKE-SET operations, these take O((V + E)α(V)),
where α is the very slowly growing inverse of Ackermann’s function.
Thus
Given that G is connected, we have |E| ≥ |V| − 1, so the disjoint-set
operations take O(Eα(V)) time, and α(|V|) = O(log V) = O(log E).
The total running time of Kruskal’s algorithm is O(E log E); observing
that |E| < |V|^2 implies log |E| < 2 log |V|, we have
log |E| = O(log V), and so we can restate the running time of the
algorithm as O(E log V).
Prim’s Algorithm
Prim’s algorithm operates much like Dijkstra’s algorithm
The tree starts from an arbitrary root vertex r.
At each step, a light edge is added to the tree A that connects A to
an isolated vertex of GA = (V , A).
When the algorithm terminates, the edges in A form a minimum
spanning tree.
Problem
Important
In order to implement Prim’s algorithm efficiently, we need a fast way to
select a new edge to add to the tree formed by the edges in A.
For this, we use a min-priority queue Q
During execution of the algorithm, all vertices that are not in the tree
reside in a min-priority queue Q based on a key attribute.
There is a field key for every vertex v
It is the minimum weight of any edge connecting v to a vertex in the
minimum spanning tree (THE LIGHT EDGE!!!).
By convention, v.key = ∞ if there is no such edge.
The algorithm
Pseudo-code
MST-PRIM(G, w, r)
1 for each u ∈ V [G]
2     u.key = ∞
3     u.π = NIL
4 r.key = 0
5 Q = V [G]
6 while Q ≠ ∅
7     u = Extract-Min(Q)
8     for each v ∈ Adj[u]
9         if v ∈ Q and w(u, v) < v.key
10            v.π = u
11            v.key = w(u, v) ▷ an implicit decrease-key in Q
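A minimal Python sketch of the pseudocode above, assuming an adjacency dict vertex -> list of (neighbor, weight). Python's heapq has no DECREASE-KEY, so the sketch pushes a fresh entry instead and skips stale ones (lazy deletion):

```python
import heapq

def mst_prim(adj, r):
    """Prim's algorithm; returns the MST edges as a set of (v.pi, v) pairs."""
    key = {v: float('inf') for v in adj}     # line 2: u.key = infinity
    pi = {v: None for v in adj}              # line 3: u.pi = NIL
    key[r] = 0                               # line 4: the root starts the tree
    in_tree = set()                          # V - Q: vertices already extracted
    Q = [(0, r)]                             # min-priority queue keyed on v.key
    while Q:                                 # line 6
        _, u = heapq.heappop(Q)              # line 7: Extract-Min(Q)
        if u in in_tree:
            continue                         # stale entry: lazy deletion
        in_tree.add(u)
        for v, w in adj[u]:                  # line 8
            if v not in in_tree and w < key[v]:   # line 9: v still in Q
                pi[v] = u                    # line 10
                key[v] = w                   # line 11: implicit decrease-key
                heapq.heappush(Q, (w, v))
    return {(pi[v], v) for v in adj if pi[v] is not None}

adj = {'a': [('b', 4), ('c', 11)],
       'b': [('a', 4), ('c', 8)],
       'c': [('a', 11), ('b', 8)]}
print(sorted(mst_prim(adj, 'a')))  # [('a', 'b'), ('b', 'c')]
```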
Explanation
Observations
1 A = {(v, v.π) : v ∈ V − {r} − Q}.
2 The vertices already placed into the minimum spanning tree are those
in V − Q.
3 For all vertices v ∈ Q, if v.π ≠ NIL, then v.key < ∞ and v.key is
the weight of a light edge (v, v.π) connecting v to some vertex
already placed into the minimum spanning tree.
Let us run the Algorithm
We have as an input the following graph
[Figure: the weighted graph with vertices a–i]
Let us run the Algorithm
[Figure: at each step, the same weighted graph is shown; the RED color represents the field v.π]
Select r = b.
Extract b from the priority queue Q.
Update the predecessor of a and its key to 12 from ∞.
Update the predecessor of c and its key to 9 from ∞.
Update the predecessor of e and its key to 5 from ∞.
Update the predecessor of f and its key to 8 from ∞.
Extract e, then update adjacent vertices.
Extract i from the priority queue Q.
Update adjacent vertices.
Extract f and update adjacent vertices.
Extract g and update.
Extract c and no update.
Extract d and update key at 1.
Extract a and no update.
Extract h.
Complexity I
Complexity analysis
The performance of Prim’s algorithm depends on how we implement
the min-priority queue Q.
If Q is a binary min-heap, the BUILD-MIN-HEAP procedure used to
perform the initialization in lines 1 to 5 runs in O(|V|) time.
The body of the while loop is executed |V| times, and since each
EXTRACT-MIN operation takes O(log V) time, the total time for all
calls to EXTRACT-MIN is O(V log V).
The for loop in lines 8 to 11 is executed O(E) times altogether, since
the sum of the lengths of all adjacency lists is 2|E|.
Complexity II
Complexity analysis (continuation)
Within the for loop, the test for membership in Q in line 9 can be
implemented in constant time.
The assignment in line 11 involves an implicit DECREASE-KEY
operation on the min-heap, which can be implemented in a binary
min-heap in O(log V ) time. Thus, the total time for Prim’s algorithm
is:
O(V log V + E log V ) = O(E log V )
If you use Fibonacci Heaps
Complexity analysis
EXTRACT-MIN operation in O(log V ) amortized time.
DECREASE-KEY operation (to implement line 11) in O(1) amortized
time.
If we use a Fibonacci Heap to implement the min-priority queue Q we
get a running time of O(E + V log V ).
Faster Algorithms
Linear Time Algorithms
Karger, Klein & Tarjan (1995) proposed a linear-time randomized
algorithm.
The fastest deterministic one, O(Eα(E, V)) by Bernard Chazelle (2000),
is based on the soft heap, an approximate priority queue.
Chazelle has also written essays about music and politics.
Linear-time algorithms in special cases
If the graph is dense, i.e. E/V ≥ log log log V, then a deterministic
algorithm by Fredman and Tarjan finds the MST in time O(E).
Applications
Minimum spanning trees have direct applications in the design of
networks
Telecommunications networks
Transportation networks
Water supply networks
Electrical grids
As a subroutine in
Machine Learning/Big Data cluster analysis
Network communications, via the Spanning Tree Protocol (STP)
Image registration and segmentation
Circuit design: implementing efficient multiple constant
multiplications, as used in finite impulse response filters.
Etc.
Outline
1 Spanning trees
Basic concepts
Growing a Minimum Spanning Tree
The Greedy Choice and Safe Edges
Kruskal’s algorithm
2 Kruskal’s Algorithm
Directly from the previous Corollary
3 Prim’s Algorithm
Implementation
4 More About the MST Problem
Faster Algorithms
Applications
Exercises
68 / 69
Exercises
From Cormen et al.’s book, solve:
23.1-3
23.1-5
23.1-7
23.1-9
23.2-2
23.2-3
23.2-5
23.2-7
69 / 69

  • 1. Analysis of Algorithms Minimum Spanning Trees Andres Mendez-Vazquez November 8, 2015 1 / 69
  • 2. Outline 1 Spanning trees Basic concepts Growing a Minimum Spanning Tree The Greedy Choice and Safe Edges Kruskal’s algorithm 2 Kruskal’s Algorithm Directly from the previous Corollary 3 Prim’s Algorithm Implementation 4 More About the MST Problem Faster Algorithms Applications Exercises 2 / 69
  • 3. Outline 1 Spanning trees Basic concepts Growing a Minimum Spanning Tree The Greedy Choice and Safe Edges Kruskal’s algorithm 2 Kruskal’s Algorithm Directly from the previous Corollary 3 Prim’s Algorithm Implementation 4 More About the MST Problem Faster Algorithms Applications Exercises 3 / 69
  • 4. Originally We had a Graph without weights 2 3 5 1 4 6 7 9 8 4 / 69
  • 5. Then Now, we have have weights b c f a d e i g h 12 4 5 11 8 9 2 1 17 8 2 21 5 / 69
  • 6. Finally, the optimization problem We want to find min T (u,v)∈T w(u, v) Where T ⊆ E such that T is acyclic and connects all the vertices. 2 3 5 1 4 6 7 9 8 12 4 5 11 8 9 2 1 17 8 2 21 This problem is called The minimum spanning tree problem 6 / 69
  • 7. Finally, the optimization problem We want to find min T (u,v)∈T w(u, v) Where T ⊆ E such that T is acyclic and connects all the vertices. 2 3 5 1 4 6 7 9 8 12 4 5 11 8 9 2 1 17 8 2 21 This problem is called The minimum spanning tree problem 6 / 69
  • 8. When do you need minimum spanning trees? In power distribution We want to connect points x and y with the minimum amount of cable. In a wireless network Given a collection of mobile beacons we want to maintain the minimum connection overhead between all of them. 7 / 69
  • 9. When do you need minimum spanning trees? In power distribution We want to connect points x and y with the minimum amount of cable. In a wireless network Given a collection of mobile beacons we want to maintain the minimum connection overhead between all of them. 7 / 69
  • 10. Some Applications Tracking the Genetic Variance of Age-Gender-Associated Staphylococcus Aureus 8 / 69
  • 11. Some Applications What? Urban Tapestries is an interactive location-based wireless application allowing users to access and publish location-specific multimedia content. Using MST we can create paths for public multimedia shows that are no too exhausting 9 / 69
  • 12. These models can be seen as Connected, undirected graphs G = (V , E) E is the set of possible connections between pairs of beacons. Each of the this edges (u, v) has a weight w(u, v) specifying the cost of connecting u and v. 10 / 69
  • 13. These models can be seen as Connected, undirected graphs G = (V , E) E is the set of possible connections between pairs of beacons. Each of the this edges (u, v) has a weight w(u, v) specifying the cost of connecting u and v. 10 / 69
  • 14. Outline 1 Spanning trees Basic concepts Growing a Minimum Spanning Tree The Greedy Choice and Safe Edges Kruskal’s algorithm 2 Kruskal’s Algorithm Directly from the previous Corollary 3 Prim’s Algorithm Implementation 4 More About the MST Problem Faster Algorithms Applications Exercises 11 / 69
  • 15. Growing a Minimum Spanning Tree There are two classic algorithms, Prim and Kruskal Both algorithms Kruskal and Prim use a greedy approach. Basic greedy idea Prior to each iteration, A is a subset of some minimum spanning tree. At each step, we determine an edge (u, v) that can be added to A such that A ∪ {(u, v)} is also a subset of a minimum spanning tree. 12 / 69
  • 16. Growing a Minimum Spanning Tree There are two classic algorithms, Prim and Kruskal Both algorithms Kruskal and Prim use a greedy approach. Basic greedy idea Prior to each iteration, A is a subset of some minimum spanning tree. At each step, we determine an edge (u, v) that can be added to A such that A ∪ {(u, v)} is also a subset of a minimum spanning tree. 12 / 69
  • 17. Growing a Minimum Spanning Tree There are two classic algorithms, Prim and Kruskal Both algorithms Kruskal and Prim use a greedy approach. Basic greedy idea Prior to each iteration, A is a subset of some minimum spanning tree. At each step, we determine an edge (u, v) that can be added to A such that A ∪ {(u, v)} is also a subset of a minimum spanning tree. 12 / 69
  • 18. Generic minimum spanning tree algorithm A Generic Code Generic-MST(G, w) 1 A = ∅ 2 while A does not form a spanning tree 3 do find an edge (u, v) that is safe for A 4 A = A ∪ {(u, v)} 5 return A This has the following loop invariance Initialization: Line 1 A trivially satisfies. Maintenance: The loop only adds safe edges. Termination: The final A contains all the edges in a minimum spanning tree. 13 / 69
  • 19. Generic minimum spanning tree algorithm A Generic Code Generic-MST(G, w) 1 A = ∅ 2 while A does not form a spanning tree 3 do find an edge (u, v) that is safe for A 4 A = A ∪ {(u, v)} 5 return A This has the following loop invariance Initialization: Line 1 A trivially satisfies. Maintenance: The loop only adds safe edges. Termination: The final A contains all the edges in a minimum spanning tree. 13 / 69
  • 20. Generic minimum spanning tree algorithm A Generic Code Generic-MST(G, w) 1 A = ∅ 2 while A does not form a spanning tree 3 do find an edge (u, v) that is safe for A 4 A = A ∪ {(u, v)} 5 return A This has the following loop invariance Initialization: Line 1 A trivially satisfies. Maintenance: The loop only adds safe edges. Termination: The final A contains all the edges in a minimum spanning tree. 13 / 69
  • 21. Generic minimum spanning tree algorithm A Generic Code Generic-MST(G, w) 1 A = ∅ 2 while A does not form a spanning tree 3 do find an edge (u, v) that is safe for A 4 A = A ∪ {(u, v)} 5 return A This has the following loop invariance Initialization: Line 1 A trivially satisfies. Maintenance: The loop only adds safe edges. Termination: The final A contains all the edges in a minimum spanning tree. 13 / 69
  • 22. Some basic definitions for the Greedy Choice A cut (S, V − S) is a partition of V Then (u, v) in E crosses the cut (S, V − S) if one end point is in S and the other is in V − S. We say that a cut respects A if no edge in A crosses the cut. A light edge is an edge crossing the cut with minimum weight with respect to the other edges crossing the cut. 14 / 69
  • 23. Some basic definitions for the Greedy Choice A cut (S, V − S) is a partition of V Then (u, v) in E crosses the cut (S, V − S) if one end point is in S and the other is in V − S. We say that a cut respects A if no edge in A crosses the cut. A light edge is an edge crossing the cut with minimum weight with respect to the other edges crossing the cut. 14 / 69
  • 24. Some basic definitions for the Greedy Choice A cut (S, V − S) is a partition of V Then (u, v) in E crosses the cut (S, V − S) if one end point is in S and the other is in V − S. We say that a cut respects A if no edge in A crosses the cut. A light edge is an edge crossing the cut with minimum weight with respect to the other edges crossing the cut. 14 / 69
  • 25. Outline 1 Spanning trees Basic concepts Growing a Minimum Spanning Tree The Greedy Choice and Safe Edges Kruskal’s algorithm 2 Kruskal’s Algorithm Directly from the previous Corollary 3 Prim’s Algorithm Implementation 4 More About the MST Problem Faster Algorithms Applications Exercises 15 / 69
  • 26. The Greedy Choice Remark The following algorithms are based in the Greedy Choice. Which Greedy Choice? The way we add edges to the set of edges belonging to the Minimum Spanning Trees. They are known as Safe Edges 16 / 69
  • 27. The Greedy Choice Remark The following algorithms are based in the Greedy Choice. Which Greedy Choice? The way we add edges to the set of edges belonging to the Minimum Spanning Trees. They are known as Safe Edges 16 / 69
  • 28. The Greedy Choice Remark The following algorithms are based in the Greedy Choice. Which Greedy Choice? The way we add edges to the set of edges belonging to the Minimum Spanning Trees. They are known as Safe Edges 16 / 69
  • 29. Recognizing safe edges Theorem for Recognizing Safe Edges (23.1) Let G = (V , E) be a connected, undirected graph with weights w defined on E. Let A ⊆ E that is included in a MST for G, let (S, V − S) be any cut of G that respects A, and let (u, v) be a light edge crossing (S, V − S). Then, edge (u, v) is safe for A. b c d i h g f ea 4 8 8 11 7 2 6 1 2 2 4 9 14 10 S V-S S V-S 17 / 69
  • 30. Observations Notice that At any point in the execution of the algorithm the graph GA = (V , A) is a forest, and each of the connected components of GA is a tree. Thus Any safe edge (u, v) for A connects distinct components of GA, since A ∪ {(u, v)} must be acyclic. 18 / 69
  • 31. Observations Notice that At any point in the execution of the algorithm the graph GA = (V , A) is a forest, and each of the connected components of GA is a tree. Thus Any safe edge (u, v) for A connects distinct components of GA, since A ∪ {(u, v)} must be acyclic. 18 / 69
  • 32. The basic corollary Corollary 23.2 Let G = (V , E) be a connected, undirected graph with real-valued weight function w defined on E. Let A be a subset of E that is included in some minimum spanning tree for G, and let C = (Vc, Ec) be a connected component (tree) in the forest GA = (V , A). If (u, v) is a light edge connecting C to some other component in GA, then (u, v) is safe for A. Proof The cut (Vc, V − Vc) respects A, and (u, v) is a light edge for this cut. Therefore, (u, v) is safe for A. 19 / 69
  • 33. The basic corollary Corollary 23.2 Let G = (V , E) be a connected, undirected graph with real-valued weight function w defined on E. Let A be a subset of E that is included in some minimum spanning tree for G, and let C = (Vc, Ec) be a connected component (tree) in the forest GA = (V , A). If (u, v) is a light edge connecting C to some other component in GA, then (u, v) is safe for A. Proof The cut (Vc, V − Vc) respects A, and (u, v) is a light edge for this cut. Therefore, (u, v) is safe for A. 19 / 69
  • 34. Outline 1 Spanning trees Basic concepts Growing a Minimum Spanning Tree The Greedy Choice and Safe Edges Kruskal’s algorithm 2 Kruskal’s Algorithm Directly from the previous Corollary 3 Prim’s Algorithm Implementation 4 More About the MST Problem Faster Algorithms Applications Exercises 20 / 69
  • 35. Outline 1 Spanning trees Basic concepts Growing a Minimum Spanning Tree The Greedy Choice and Safe Edges Kruskal’s algorithm 2 Kruskal’s Algorithm Directly from the previous Corollary 3 Prim’s Algorithm Implementation 4 More About the MST Problem Faster Algorithms Applications Exercises 21 / 69
  • 36. Kruskal’s Algorithm Algorithm MST-KRUSKAL(G, w) 1 A = ∅ 2 for each vertex v ∈ V [G] 3 do Make-Set 4 sort the edges of E into non-decreasing order by weight w 5 for each edge (u, v) ∈ E taken in non-decreasing order by weight 6 do if FIND − SET(u) = FIND − SET(v) 7 then A = A ∪ {(u, v)} 8 Union(u,v) 9 return A 22 / 69
  • 37. Kruskal’s Algorithm Algorithm MST-KRUSKAL(G, w) 1 A = ∅ 2 for each vertex v ∈ V [G] 3 do Make-Set 4 sort the edges of E into non-decreasing order by weight w 5 for each edge (u, v) ∈ E taken in non-decreasing order by weight 6 do if FIND − SET(u) = FIND − SET(v) 7 then A = A ∪ {(u, v)} 8 Union(u,v) 9 return A 22 / 69
  • 38. Kruskal’s Algorithm Algorithm MST-KRUSKAL(G, w) 1 A = ∅ 2 for each vertex v ∈ V [G] 3 do Make-Set 4 sort the edges of E into non-decreasing order by weight w 5 for each edge (u, v) ∈ E taken in non-decreasing order by weight 6 do if FIND − SET(u) = FIND − SET(v) 7 then A = A ∪ {(u, v)} 8 Union(u,v) 9 return A 22 / 69
  • 39. Kruskal’s Algorithm Algorithm MST-KRUSKAL(G, w) 1 A = ∅ 2 for each vertex v ∈ V [G] 3 do Make-Set 4 sort the edges of E into non-decreasing order by weight w 5 for each edge (u, v) ∈ E taken in non-decreasing order by weight 6 do if FIND − SET(u) = FIND − SET(v) 7 then A = A ∪ {(u, v)} 8 Union(u,v) 9 return A 22 / 69
  • 40. Let us run the Algorithm We have as an input the following graph b c f a d e i g h 12 4 5 11 8 9 2 1 17 8 2 21 23 / 69
  • 41. Let us run the Algorithm 1st step everybody is a set!!! b c f a d e i g h 12 4 5 11 8 9 2 1 17 8 2 21 24 / 69
  • 42. Let us run the Algorithm Given (f , g) with weight 1 Question: FIND − SET(f ) = FIND − SET(g)? b c f a d e i g h 12 4 5 11 8 9 2 1 17 8 2 21 25 / 69
  • 43. Let us run the Algorithm Then A = A ∪ {(f , g)}, next FIND − SET(f ) = FIND − SET(i)? b c f a d e i g h 12 4 5 11 8 9 2 1 17 8 2 21 26 / 69
  • 44. Let us run the Algorithm Then A = A ∪ {(f , i)}, next FIND − SET(c) = FIND − SET(f )? b c f a d e i g h 12 4 5 11 8 9 2 1 17 8 2 21 27 / 69
  • 45. Let us run the Algorithm Then A = A ∪ {(c, f )}, next FIND − SET(a) = FIND − SET(d)? b c f a d e i g h 12 4 5 11 8 9 2 1 17 8 2 21 28 / 69
  • 46. Let us run the Algorithm Then A = A ∪ {(a, d)}, next FIND − SET(b) = FIND − SET(e)? b c f a d e i g h 12 4 5 11 8 9 2 1 17 8 2 21 29 / 69
  • 47. Let us run the Algorithm Then A = A ∪ {(b, e)}, next FIND − SET(e) = FIND − SET(i)? b c f a d e i g h 12 4 5 11 8 9 2 1 17 8 2 21 30 / 69
  • 48. Let us run the Algorithm Then A = A ∪ {(e, i)}, next FIND − SET(b) = FIND − SET(f )? b c f a d e i g h 12 4 5 11 8 9 2 1 17 8 2 21 31 / 69
  • 49. Let us run the Algorithm Then A = A, next FIND − SET(b) = FIND − SET(c)? b c f a d e i g h 12 4 5 11 8 9 2 1 17 8 2 21 32 / 69
  • 50. Let us run the Algorithm Then A = A, next FIND − SET(d) = FIND − SET(e)? b c f a d e i g h 12 4 5 11 8 9 2 1 17 8 2 21 33 / 69
  • 51. Let us run the Algorithm Then A = A ∪ {(d, e)}, next FIND − SET(a) = FIND − SET(b)? b c f a d e i g h 12 4 5 11 8 9 2 1 17 8 2 21 34 / 69
  • 52. Let us run the Algorithm Then A = A, next FIND − SET(e) = FIND − SET(g)? b c f a d e i g h 12 4 5 11 8 9 2 1 17 8 2 21 35 / 69
  • 53. Let us run the Algorithm Then A = A, next FIND − SET(g) = FIND − SET(h)? b c f a d e i g h 12 4 5 11 8 9 2 1 17 8 2 21 36 / 69
  • 54. Let us run the Algorithm Then A = A ∪ {(g, h)} b c f a d e i g h 12 4 5 11 8 9 2 1 17 8 2 21 37 / 69
  • 55. Kruskal’s Algorithm Algorithm MST-KRUSKAL(G, w) 1 A = ∅ 2 for each vertex v ∈ V [G] 3 do Make-Set 4 sort the edges of E into non-decreasing order by weight w 5 for each edge (u, v) ∈ E taken in non-decreasing order by weight 6 do if FIND − SET(u) = FIND − SET(v) 7 then A = A ∪ {(u, v)} 8 Union(u,v) 9 return A 38 / 69
  • 56. Complexity Explanation Line 1. Initializing the set A takes O(1) time. Line 2. Sorting the edges in line 4 takes O(E log E). Lines 5 to 8. The for loop performs: O(E) FIND-SET and UNION operations. Along with the |V | MAKE-SET operations that take O((V + E)α(V )), where α is the pseudoinverse of the Ackermann’s function. Thus Given that G is connected, we have |E| ≥ |V | − 1, and so the disjoint-set operations take O(Eα(V )) time and α(|V |) = O(log V ) = O(log E). The total running time of Kruskal’s algorithm is O(E log E), but observing that |E| < |V |2 −→ log |E| < 2 log |V |, we have that log |E| = O (log V ), and so we can restate the running time of the algorithm as O(E log V ). 39 / 69
  • 57. Complexity Explanation Line 1. Initializing the set A takes O(1) time. Line 2. Sorting the edges in line 4 takes O(E log E). Lines 5 to 8. The for loop performs: O(E) FIND-SET and UNION operations. Along with the |V | MAKE-SET operations that take O((V + E)α(V )), where α is the pseudoinverse of the Ackermann’s function. Thus Given that G is connected, we have |E| ≥ |V | − 1, and so the disjoint-set operations take O(Eα(V )) time and α(|V |) = O(log V ) = O(log E). The total running time of Kruskal’s algorithm is O(E log E), but observing that |E| < |V |2 −→ log |E| < 2 log |V |, we have that log |E| = O (log V ), and so we can restate the running time of the algorithm as O(E log V ). 39 / 69
  • 58. Complexity Explanation Line 1. Initializing the set A takes O(1) time. Line 2. Sorting the edges in line 4 takes O(E log E). Lines 5 to 8. The for loop performs: O(E) FIND-SET and UNION operations. Along with the |V | MAKE-SET operations that take O((V + E)α(V )), where α is the pseudoinverse of the Ackermann’s function. Thus Given that G is connected, we have |E| ≥ |V | − 1, and so the disjoint-set operations take O(Eα(V )) time and α(|V |) = O(log V ) = O(log E). The total running time of Kruskal’s algorithm is O(E log E), but observing that |E| < |V |2 −→ log |E| < 2 log |V |, we have that log |E| = O (log V ), and so we can restate the running time of the algorithm as O(E log V ). 39 / 69
  • 59. Complexity Explanation Line 1. Initializing the set A takes O(1) time. Line 2. Sorting the edges in line 4 takes O(E log E). Lines 5 to 8. The for loop performs: O(E) FIND-SET and UNION operations. Along with the |V | MAKE-SET operations that take O((V + E)α(V )), where α is the pseudoinverse of the Ackermann’s function. Thus Given that G is connected, we have |E| ≥ |V | − 1, and so the disjoint-set operations take O(Eα(V )) time and α(|V |) = O(log V ) = O(log E). The total running time of Kruskal’s algorithm is O(E log E), but observing that |E| < |V |2 −→ log |E| < 2 log |V |, we have that log |E| = O (log V ), and so we can restate the running time of the algorithm as O(E log V ). 39 / 69
  • 60. Complexity Explanation Line 1. Initializing the set A takes O(1) time. Line 2. Sorting the edges in line 4 takes O(E log E). Lines 5 to 8. The for loop performs: O(E) FIND-SET and UNION operations. Along with the |V | MAKE-SET operations that take O((V + E)α(V )), where α is the pseudoinverse of the Ackermann’s function. Thus Given that G is connected, we have |E| ≥ |V | − 1, and so the disjoint-set operations take O(Eα(V )) time and α(|V |) = O(log V ) = O(log E). The total running time of Kruskal’s algorithm is O(E log E), but observing that |E| < |V |2 −→ log |E| < 2 log |V |, we have that log |E| = O (log V ), and so we can restate the running time of the algorithm as O(E log V ). 39 / 69
  • 61. Complexity Explanation Line 1. Initializing the set A takes O(1) time. Line 2. Sorting the edges in line 4 takes O(E log E). Lines 5 to 8. The for loop performs: O(E) FIND-SET and UNION operations. Along with the |V | MAKE-SET operations that take O((V + E)α(V )), where α is the pseudoinverse of the Ackermann’s function. Thus Given that G is connected, we have |E| ≥ |V | − 1, and so the disjoint-set operations take O(Eα(V )) time and α(|V |) = O(log V ) = O(log E). The total running time of Kruskal’s algorithm is O(E log E), but observing that |E| < |V |2 −→ log |E| < 2 log |V |, we have that log |E| = O (log V ), and so we can restate the running time of the algorithm as O(E log V ). 39 / 69
  • 62. Complexity Explanation Line 1. Initializing the set A takes O(1) time. Line 2. Sorting the edges in line 4 takes O(E log E). Lines 5 to 8. The for loop performs: O(E) FIND-SET and UNION operations. Along with the |V | MAKE-SET operations that take O((V + E)α(V )), where α is the pseudoinverse of the Ackermann’s function. Thus Given that G is connected, we have |E| ≥ |V | − 1, and so the disjoint-set operations take O(Eα(V )) time and α(|V |) = O(log V ) = O(log E). The total running time of Kruskal’s algorithm is O(E log E), but observing that |E| < |V |2 −→ log |E| < 2 log |V |, we have that log |E| = O (log V ), and so we can restate the running time of the algorithm as O(E log V ). 39 / 69
  • 63. Outline 1 Spanning trees Basic concepts Growing a Minimum Spanning Tree The Greedy Choice and Safe Edges Kruskal’s algorithm 2 Kruskal’s Algorithm Directly from the previous Corollary 3 Prim’s Algorithm Implementation 4 More About the MST Problem Faster Algorithms Applications Exercises 40 / 69
  • 64. Prim’s Algorithm Prim’s algorithm operates much like Dijkstra’s algorithm The tree starts from an arbitrary root vertex r. At each step, a light edge is added to the tree A that connects A to an isolated vertex of GA = (V , A). When the algorithm terminates, the edges in A form a minimum spanning tree. 41 / 69
  • 65. Prim’s Algorithm Prim’s algorithm operates much like Dijkstra’s algorithm The tree starts from an arbitrary root vertex r. At each step, a light edge is added to the tree A that connects A to an isolated vertex of GA = (V , A). When the algorithm terminates, the edges in A form a minimum spanning tree. 41 / 69
  • 66. Prim’s Algorithm Prim’s algorithm operates much like Dijkstra’s algorithm The tree starts from an arbitrary root vertex r. At each step, a light edge is added to the tree A that connects A to an isolated vertex of GA = (V , A). When the algorithm terminates, the edges in A form a minimum spanning tree. 41 / 69
  • 67. Problem Important In order to implement Prim’s algorithm efficiently, we need a fast way to select a new edge to add to the tree formed by the edges in A. For this, we use a min-priority queue Q During execution of the algorithm, all vertices that are not in the tree reside in a min-priority queue Q based on a key attribute. There is a field key for every vertex v It is the minimum weight of any edge connecting v to a vertex in the minimum spanning tree (THE LIGHT EDGE!!!). By convention, v.key = ∞ if there is no such edge. 42 / 69
The algorithm
Pseudo-code
MST-PRIM(G, w, r)
1  for each u ∈ V[G]
2      u.key = ∞
3      u.π = NIL
4  r.key = 0
5  Q = V[G]
6  while Q ≠ ∅
7      u = Extract-Min(Q)
8      for each v ∈ Adj[u]
9          if v ∈ Q and w(u, v) < v.key
10             v.π = u
11             v.key = w(u, v)    // an implicit Decrease-Key in Q
43 / 69
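The pseudo-code above can be sketched in runnable form (Python, illustrative names and graph representation). Python's heapq has no DECREASE-KEY, so instead of updating a vertex's key in place as line 11 does, this sketch pushes a fresh entry and skips stale ones on extraction, a standard "lazy deletion" substitute:

```python
import heapq

# Sketch of MST-PRIM on an adjacency dict {u: [(v, w), ...]}.
# Line numbers in the comments refer to the pseudo-code on the slide.

def mst_prim(adj, r):
    """Returns the tree edges as (parent, vertex, weight) triples."""
    key = {u: float('inf') for u in adj}   # lines 1-3: all keys start at infinity
    pi = {u: None for u in adj}
    key[r] = 0                             # line 4: the root has key 0
    in_tree = set()                        # vertices already extracted (V - Q)
    heap = [(0, r)]                        # Q as a binary min-heap
    edges = []
    while heap:                            # line 6
        k, u = heapq.heappop(heap)         # line 7: Extract-Min
        if u in in_tree:
            continue                       # stale entry left by a "decrease key"
        in_tree.add(u)
        if pi[u] is not None:
            edges.append((pi[u], u, k))    # (v, pi[v]) joins the tree A
        for v, w in adj[u]:                # lines 8-11
            if v not in in_tree and w < key[v]:
                key[v] = w                 # decrease key by re-pushing
                pi[v] = u
                heapq.heappush(heap, (w, v))
    return edges
```

On the small graph a-b:1, b-c:2, a-c:3, c-d:4 rooted at a, the sketch returns three edges of total weight 7, skipping the heavier edge (a, c).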
Explanation
Observations
1 A = {(v, π[v]) : v ∈ V − {r} − Q}.
2 The vertices already placed into the minimum spanning tree are those in V − Q.
3 For all vertices v ∈ Q, if π[v] ≠ NIL, then key[v] < ∞ and key[v] is the weight of a light edge (v, π[v]) connecting v to some vertex already placed into the minimum spanning tree.
44 / 69
Let us run the Algorithm
(Each step below is illustrated on the example graph: vertices a–i with the edge weights given earlier. The RED color represents the field π[v].)
We have as an input the example graph. 45 / 69
Select r = b. 46 / 69
Extract b from the priority queue Q. 47 / 69
Update the predecessor of a and its key to 12 from ∞. 48 / 69
Update the predecessor of c and its key to 9 from ∞. 49 / 69
Update the predecessor of e and its key to 5 from ∞. 50 / 69
Update the predecessor of f and its key to 8 from ∞. 51 / 69
Extract e, then update adjacent vertices. 52 / 69
Extract i from the priority queue Q. 53 / 69
Update adjacent vertices. 54 / 69
Extract f and update adjacent vertices. 55 / 69
Extract g and update. 56 / 69
Extract c, no update. 57 / 69
Extract d and update a key to 1. 58 / 69
Extract a, no update. 59 / 69
Extract h. 60 / 69
Complexity I
Complexity analysis
The performance of Prim's algorithm depends on how we implement the min-priority queue Q.
If Q is a binary min-heap, the BUILD-MIN-HEAP procedure performs the initialization in lines 1 to 5 in O(V) time.
The body of the while loop is executed |V| times, and since each EXTRACT-MIN operation takes O(log V) time, the total time for all calls to EXTRACT-MIN is O(V log V).
The for loop in lines 8 to 11 is executed O(E) times altogether, since the sum of the lengths of all adjacency lists is 2|E|.
61 / 69
Complexity II
Complexity analysis (continuation)
Within the for loop, the test for membership in Q in line 9 can be implemented in constant time.
The assignment in line 11 involves an implicit DECREASE-KEY operation on the min-heap, which can be implemented in a binary min-heap in O(log V) time.
Thus, the total time for Prim's algorithm is:
O(V log V + E log V) = O(E log V)
62 / 69
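Both costs named in this analysis, the O(1) membership test and the O(log V) DECREASE-KEY, come from keeping each vertex's position inside the heap. A minimal indexed binary min-heap sketch (illustrative Python, not from the slides) makes this explicit:

```python
# Indexed binary min-heap: pos[] answers "is v still in Q?" in O(1),
# and decrease_key sifts the updated entry up in O(log n).

class IndexedMinHeap:
    def __init__(self, keys):
        # keys: dict vertex -> key; the heap stores vertices ordered by key
        self.key = dict(keys)
        self.heap = list(keys)
        self.pos = {v: i for i, v in enumerate(self.heap)}
        for i in range(len(self.heap) // 2 - 1, -1, -1):
            self._sift_down(i)

    def __contains__(self, v):          # line 9's membership test: O(1)
        return v in self.pos

    def _swap(self, i, j):
        h = self.heap
        h[i], h[j] = h[j], h[i]
        self.pos[h[i]], self.pos[h[j]] = i, j

    def _sift_up(self, i):
        while i > 0 and self.key[self.heap[i]] < self.key[self.heap[(i - 1) // 2]]:
            self._swap(i, (i - 1) // 2)
            i = (i - 1) // 2

    def _sift_down(self, i):
        n = len(self.heap)
        while True:
            smallest = i
            for c in (2 * i + 1, 2 * i + 2):
                if c < n and self.key[self.heap[c]] < self.key[self.heap[smallest]]:
                    smallest = c
            if smallest == i:
                return
            self._swap(i, smallest)
            i = smallest

    def extract_min(self):              # O(log n)
        v = self.heap[0]
        self._swap(0, len(self.heap) - 1)
        self.heap.pop()
        del self.pos[v]
        if self.heap:
            self._sift_down(0)
        return v

    def decrease_key(self, v, new_key): # line 11: O(log n)
        assert new_key <= self.key[v]
        self.key[v] = new_key
        self._sift_up(self.pos[v])
```

With keys {a: 5, b: 3, c: 8}, decreasing c's key to 1 makes c the new minimum, so extraction yields c, then b, then a.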
If you use Fibonacci Heaps
Complexity analysis
EXTRACT-MIN operation in O(log V) amortized time.
DECREASE-KEY operation (to implement line 11) in O(1) amortized time.
If we use a Fibonacci Heap to implement the min-priority queue Q, we get a running time of O(E + V log V).
63 / 69
Outline
1 Spanning trees
Basic concepts
Growing a Minimum Spanning Tree
The Greedy Choice and Safe Edges
Kruskal's algorithm
2 Kruskal's Algorithm
Directly from the previous Corollary
3 Prim's Algorithm
Implementation
4 More About the MST Problem
Faster Algorithms
Applications
Exercises
64 / 69
Faster Algorithms
Linear Time Algorithms
Karger, Klein & Tarjan (1995) proposed a linear-time randomized algorithm.
The fastest known deterministic algorithm, with running time O(Eα(E, V)), by Bernard Chazelle (2000), is based on the soft heap, an approximate priority queue.
Chazelle has also written essays about music and politics.
Linear-time algorithms in special cases
If the graph is dense, i.e. E/V ≥ log log log V, then a deterministic algorithm by Fredman and Tarjan finds the MST in time O(E).
65 / 69
Outline
1 Spanning trees
Basic concepts
Growing a Minimum Spanning Tree
The Greedy Choice and Safe Edges
Kruskal's algorithm
2 Kruskal's Algorithm
Directly from the previous Corollary
3 Prim's Algorithm
Implementation
4 More About the MST Problem
Faster Algorithms
Applications
Exercises
66 / 69
Applications
Minimum spanning trees have direct applications in the design of networks
Telecommunications networks
Transportation networks
Water supply networks
Electrical grids
As a subroutine in Machine Learning/Big Data
Cluster Analysis
Network communications use the Spanning Tree Protocol (STP)
Image registration and segmentation
Circuit design: implementing efficient multiple constant multiplications, as used in finite impulse response filters.
Etc.
67 / 69
Outline
1 Spanning trees
Basic concepts
Growing a Minimum Spanning Tree
The Greedy Choice and Safe Edges
Kruskal's algorithm
2 Kruskal's Algorithm
Directly from the previous Corollary
3 Prim's Algorithm
Implementation
4 More About the MST Problem
Faster Algorithms
Applications
Exercises
68 / 69
Exercises
From Cormen's book solve
23.1-3
23.1-5
23.1-7
23.1-9
23.2-2
23.2-3
23.2-5
23.2-7
69 / 69