Solving connectivity problems via basic Linear
Algebra
Samir Datta
Chennai Mathematical Institute
Includes joint work with
Raghav Kulkarni, Anish Mukherjee, Thomas Schwentick, and Thomas Zeume
NMI Workshop on Complexity
IIT Gandhinagar
November 6, 2016
Outline
1 Part I: Disjoint Paths
2 Part II: Dynamic Reachability
3 Conclusion
Two fundamental problems
• Definition (Reachability)
G directed graph and s, t ∈ V (G). Is there a path from s to t?
• Complete for NL
• Optimization version: shortest path also in NL
• Definition (Connectivity)
G undirected graph, s, t ∈ V (G). Is there a path joining s, t?
• Complete for SL
• (Reingold) SL = L
• Optimization version: shortest path in NL
Disjoint Path Problem
• Definition (k-DPP)
G graph/digraph. (si , ti ) for i ∈ [k], si , ti ∈ V (G). Do there exist
vertex-disjoint (si , ti )-paths?
• Distinct from k-connectivity (k-reachability)
• NP-complete for directed graphs for k = 2
• NP-complete for undirected graphs when k part of input.
• (Robertson-Seymour) Cubic time (fixed k) in undirected
graphs.
• Special cases: Planar graphs/DAGs well studied.
One-face Min-sum Disjoint Path Problem
• A very special case:
• k-DPP instance with undirected planar graph, and
• All terminals on single face, and
• Want solution with minimum total length of paths.
[Figure: an undirected planar graph with terminals 1, 2, 3, 4, …, 2k − 1, 2k on a single face]
One-face Min-sum Disjoint Path Problem:
Motivation
• Directed general case remains NP-hard in the plane though
fixed parameter tractable in k.
• NP-hardness in undirected case: open for k > 2.
• (Björklund-Husfeldt ’14) Randomized polynomial-time algorithm in the undirected case for 2-DPP.
One-face Min-sum Disjoint Path Problem:
Motivation
• (de Verdière-Schrijver ’08) Two-face case with sources and sinks on two different faces in time O(kn log n).
• (Kobayashi-Sommer’10) One-face min-sum 3-DPP in
polynomial time
• (Borradaile-Nayyeri-Zafarani’15) One-face min-sum k-DPP in
“serial” configuration in polynomial time (in n as well as k).
One-face min-sum Disjoint Path Problem:
Our Results
Theorem
• The count of one-face min-sum k-DPP solutions can be reduced to O(4^k) determinant computations.
• Finding a witness to the problem can be randomly reduced to O(4^k) determinant computations.
• The above can also be done sequentially in deterministic O(4^k n^ω) time (ω is the matrix multiplication exponent).
Determinant
• Let G be a weighted graph with V (G) = {1, . . . , n}
• Let A be the weighted adjacency matrix of G
• Permutation π ∈ Sn can be written as a product of cycles
• sgn(π) = (−1)^{#transpositions in π} = (−1)^{n+c(π)}
• where c(π) is the number of cycles in π
• w(π) := Π_{1≤i≤n} A_{i,π(i)}
• det(A) = Σ_{π∈Sn} sgn(π)·w(π)
• = signed sum of weights of cycle covers in G
• cycle cover is a covering of vertices of G by disjoint cycles
• (cycles include self loops)
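To make the cycle-cover reading of the determinant concrete, here is a small brute-force sketch (my own illustration, not from the talk): it enumerates all permutations of a toy weighted adjacency matrix, treats every permutation of nonzero weight as a cycle cover, and compares the signed sum against numpy's determinant.

```python
from itertools import permutations
import numpy as np

def cycle_count(perm):
    """Number of cycles (including fixed points, i.e. self-loops) of a permutation."""
    seen, cycles = set(), 0
    for start in range(len(perm)):
        if start in seen:
            continue
        cycles += 1
        v = start
        while v not in seen:
            seen.add(v)
            v = perm[v]
    return cycles

def det_via_cycle_covers(A):
    """Sum over all permutations pi of sgn(pi) * prod_i A[i, pi(i)];
    a permutation of nonzero weight is exactly a cycle cover of the
    weighted digraph with adjacency matrix A."""
    n = len(A)
    total = 0
    for perm in permutations(range(n)):
        w = 1
        for i in range(n):
            w *= A[i][perm[i]]
        sign = (-1) ** (n + cycle_count(perm))   # sgn(pi) = (-1)^{n+c(pi)}, as on the slide
        total += sign * w
    return total

A = [[1, 2, 0],
     [0, 1, 3],
     [4, 0, 1]]
print(det_via_cycle_covers(A), round(np.linalg.det(np.array(A))))   # both should be 25
```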
Solving a special case
[Figure: terminals 1, 2, 3, …, k−1, k, k+1, k+2, …, 2k−2, 2k−1, 2k on the outer face]
• Add an edge from sink t_i to s_i inside the face
• Subdivide this edge at r_i
• Put self-loops on all vertices in the graph except the r_i's
• Put weight x on each original edge and 1 on the self-loops
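A minimal sketch of the construction just described (my own illustration, with hypothetical helper names; sympy supplies the symbolic weight x): weight x on both directions of every original edge, an added path t_i → r_i → s_i for each demand, and self-loops of weight 1 everywhere except at the r_i's. The added edges are given weight 1, which the slides leave implicit.

```python
import sympy as sp

def special_case_matrix(vertices, edges, pairs):
    """Weighted adjacency matrix of the augmented graph for a one-face instance.
    vertices: original vertices; edges: undirected edge list; pairs: [(s_i, t_i)]."""
    x = sp.Symbol('x')
    V = list(vertices) + [('r', i) for i in range(len(pairs))]   # add one r_i per demand
    idx = {v: j for j, v in enumerate(V)}
    A = sp.zeros(len(V), len(V))
    for u, v in edges:                      # original undirected edges: weight x both ways
        A[idx[u], idx[v]] = x
        A[idx[v], idx[u]] = x
    for i, (s, t) in enumerate(pairs):      # added directed path t_i -> r_i -> s_i, weight 1
        A[idx[t], idx[('r', i)]] = 1
        A[idx[('r', i)], idx[s]] = 1
    for v in vertices:                      # self-loops of weight 1, except on the r_i's
        A[idx[v], idx[v]] = 1
    return A, x

# toy instance: the 4-cycle 1-2-3-4 with a single demand (s_1, t_1) = (1, 3)
A, x = special_case_matrix([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4), (4, 1)], [(1, 3)])
print(sp.factor(A.det()))    # should print -2*x**2: two shortest 1-3 paths, each of length 2
```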
Solving a special case
• Bijection between k-DPP’s and cycle covers
• No k-DPP implies determinant = 0
• Preserves weight (= x^{#edges in k-DPP})
• All min-sum k-DPPs are:
• Same weight = x^ℓ (ℓ = length of min-sum k-DPP)
• Same sign = (−1)^{n+(n−ℓ)+k}
• Lighter than any other cycle cover
• coefficient of smallest degree term counts #-min-sum k-DPPs
• As easy as determinant
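A minimal sketch (illustrative; the hardcoded matrix is the toy instance from the previous sketch) of reading the answer off the determinant polynomial: the smallest degree with a nonzero coefficient is the min-sum length, and the absolute value of that coefficient is the number of min-sum solutions.

```python
import sympy as sp

x = sp.Symbol('x')
# toy matrix: 4-cycle 1-2-3-4 with demand (s_1, t_1) = (1, 3), vertex order (1, 2, 3, 4, r_1)
A = sp.Matrix([[1, x, 0, x, 0],
               [x, 1, x, 0, 0],
               [0, x, 1, x, 1],
               [x, 0, x, 1, 0],
               [1, 0, 0, 0, 0]])

p = sp.Poly(A.det(), x)
coeffs = p.all_coeffs()[::-1]                   # coeffs[d] = coefficient of x**d
d = next(i for i, c in enumerate(coeffs) if c != 0)
print(d, abs(coeffs[d]))                        # should print: 2 2  (min length, #min-sum solutions)
```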
Solving a special case
• Search reduces to counting in polynomial time
• Sequentially consider the edges
• Discard an edge if the count remains > 0 after its removal
• Search in RNC
• Use the Isolation Lemma [MVV] to isolate a min-sum k-DPP
• Use the isolating weights w(e) as the exponent of x on e
• Discard an edge if removing it does not change the minimum weight exponent
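A minimal sketch of the sequential search-to-counting reduction described above. Here `count_optimal` is a hypothetical stand-in for the determinant-based routine of the previous slides: it is assumed to return the number of k-DPP solutions of the original minimum total length that survive inside the given edge set.

```python
def search_via_counting(edges, count_optimal):
    """count_optimal(edge_subset) -> number of k-DPP solutions of the original
    minimum total length using only edges from edge_subset (hypothetical
    stand-in for the determinant-based counting routine)."""
    current = list(edges)
    for e in list(edges):
        trial = [f for f in current if f != e]
        if count_optimal(trial) > 0:    # some optimal solution survives without e,
            current = trial             # so e can safely be discarded
    return current                      # the edge set of a single min-sum k-DPP
```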
Issues with general case
[Figure: an instance with terminals 1, 2, …, 8 on the outer face]
• Spurious cycle covers in addition to good ones
• No bijection between required k-DPP paths and cycle covers
• Spurious cycle covers for different terminal pairing/demands
Fixing issues with general case
[Figure: three copies of the 8-terminal instance, each with a different demand pairing]
• Spurious cycle covers canceled by cycle covers with new
demands
• Further spurious cycles - canceled by another demand set -
etc.
• Need to prove that process converges
Fixing issues with general case
• Define a function len mapping demands to N
• Prove: len of spurious demands strictly greater
• Prove: len of parallel demands maximum
• Recursively express the k-DPP count for a demand as
• the sum of a determinant and the k-DPP counts for demands of strictly greater len.
Outline
1 Part I: Disjoint Paths
2 Part II: Dynamic Reachability
3 Conclusion
Dynamic Complexity
• Static complexity:
• Input graph G presented in one shot
• Algorithm outputs whether G satisfies a property.
• Complexity: Time/space/circuit depth/... of algorithm
• Dynamic Complexity:
• Start out with empty graph G on fixed number n of vertices
• Edge Insertion: Edge e = (u, v) is added
• Edge Deletion: Edge e = (u, v) is deleted
• Query: Is property satisfied by current graph?
• At every time step either insertion/deletion/query occurs
• Algorithm input: string output at last step
• Algorithm output: new string
• One (specific) bit must be answer to query
• Complexity: in terms of circuit or descriptive complexity
• DynAC0: update circuit is AC0
• DynFO: updates are First Order computable
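For concreteness, here is a minimal sketch of the insert/delete/query protocol as a data structure (my own illustration; it answers each query by a fresh DFS, so it says nothing about DynFO, where the point is that the auxiliary data is updated by first-order formulas / AC0 circuits).

```python
from collections import defaultdict

class DynamicReach:
    """Toy dynamic reachability structure illustrating the update/query protocol."""
    def __init__(self, n):
        self.n = n
        self.adj = defaultdict(set)

    def insert(self, u, v):              # edge insertion
        self.adj[u].add(v)

    def delete(self, u, v):              # edge deletion
        self.adj[u].discard(v)

    def query(self, s, t):               # is t reachable from s in the current graph?
        seen, stack = {s}, [s]
        while stack:
            u = stack.pop()
            if u == t:
                return True
            for w in self.adj[u] - seen:
                seen.add(w)
                stack.append(w)
        return False

G = DynamicReach(4)
G.insert(0, 1); G.insert(1, 2)
print(G.query(0, 2))     # True
G.delete(1, 2)
print(G.query(0, 2))     # False
```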
Dynamic Reachability
Theorem
Reach is in DynFO.
Proof Outline:
• Rank: What is the rank of a given n × n-matrix?
• Step 1 Reduce Reach to Rank
• Step 2 Rank ∈ uniform DynAC0
• Step 3 DynFO ≈ uniform DynAC0
Reducing Reach to Rank
• Approach: Map a graph G to a matrix B such that
• s-t-Reachability in G corresponds to full rank of B.
• A change of G can be simulated by a constant number of
changes of B.
• Denote the adjacency matrix of G by A.
• Recall: the (s, t)-entry of A^i is the number of walks of length i from s to t
• Observe: I − (1/n)A is invertible and (I − (1/n)A)^{−1} = Σ_{i≥0} ((1/n)A)^i
• Crux: t is not reachable from s
• iff the (s, t)-entry of B^{−1} is zero, where B := I − (1/n)A
• iff x = B^{−1}e_t has x_s = 0
• iff Bx = e_t has a unique solution x with x_s = 0
• This equation system can be phrased as a rank problem
...and one change in A leads to one change in B
• Technical detail: use B := nI − A instead of B := I − (1/n)A
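A minimal sketch (illustrative; exact arithmetic via sympy) of the static core of this reduction: with B := nI − A, vertex t is reachable from s exactly when the (s, t) entry of B^{−1} is nonzero.

```python
import sympy as sp

def reachable(adj, s, t):
    """adj: 0/1 adjacency matrix of a digraph on n vertices (list of lists)."""
    n = len(adj)
    A = sp.Matrix(adj)
    B = n * sp.eye(n) - A          # (nI - A)^{-1} = (1/n) * sum_i (A/n)^i, entrywise nonnegative
    return B.inv()[s, t] != 0      # nonzero iff some walk from s to t exists

adj = [[0, 1, 0],
       [0, 0, 1],
       [0, 0, 0]]                  # path 0 -> 1 -> 2
print(reachable(adj, 0, 2), reachable(adj, 2, 0))   # True False
```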
Reducing Rank to mod-p-Rank
• rk(A) = max_{p ∈ [n²]} rk_p(A)
• rk(A) ≥ k iff
• ∃A′, a k × k submatrix of A, such that rk(A′) = k iff
• det(A′) ≠ 0 iff
• ∃p, a small prime not dividing det(A′), iff
• rk_p(A′) = k ⇒
• rk_p(A) ≥ k
• But rk_p(A) ≤ rk(A), since
• a linear relation over the integers ⇒ a linear relation over Z_p
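A minimal sketch (my own illustration) of rank modulo a prime p via Gaussian elimination over Z_p, together with the max-over-small-primes formula above; `rank_via_mod_p` assumes the slide's bound that some prime below n² already achieves the true rank.

```python
from sympy import primerange

def rank_mod_p(A, p):
    """Rank of an integer matrix over Z_p by Gaussian elimination (p prime)."""
    M = [[x % p for x in row] for row in A]
    n, m, r = len(M), len(M[0]), 0
    for c in range(m):
        piv = next((i for i in range(r, n) if M[i][c]), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        inv = pow(M[r][c], p - 2, p)                       # inverse of the pivot mod p
        M[r] = [(x * inv) % p for x in M[r]]
        for i in range(n):
            if i != r and M[i][c]:
                M[i] = [(a - M[i][c] * b) % p for a, b in zip(M[i], M[r])]
        r += 1
    return r

def rank_via_mod_p(A):
    n = len(A)
    return max(rank_mod_p(A, p) for p in primerange(2, n * n + 1))

A = [[2, 4],
     [1, 2]]                       # rank 1 (second row is half the first)
print(rank_via_mod_p(A))           # 1
```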
Maintaining row-echelon form
• Definition
• Leading (left-most non-zero)
entry in every row is 1,
• in column of leading entry: other
entries zero
• Rows are sorted in “diagonal”
fashion
• Given matrix A
• Maintain B invertible
• Maintain E in row echelon form
• such that E = BA
• Rank(A) is #non-zero rows of E
Example (E in row echelon form):
1 4 0 2 0 2
0 0 1 3 0 4
0 0 0 0 1 7
0 0 0 0 0 0
0 0 0 0 0 0
0 0 0 0 0 0
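A minimal sketch (illustrative, over the rationals with sympy; the talk works over Z_p) of the invariant on this slide: an invertible B and an echelon-form E with E = BA, from which rank(A) is the number of nonzero rows of E. Row-reducing the augmented matrix [A | I] is one simple static way to obtain such a pair.

```python
import sympy as sp

def echelon_certificate(A):
    """Return (E, B) with B invertible, E = B*A, and E in (reduced) row echelon form."""
    n = A.rows
    R, _ = sp.Matrix.hstack(A, sp.eye(n)).rref()   # row-reduce [A | I] to get [E | B]
    E, B = R[:, :A.cols], R[:, A.cols:]
    assert E == B * A and B.det() != 0
    return E, B

A = sp.Matrix([[1, 4, 0],
               [2, 8, 1],
               [0, 0, 1]])
E, B = echelon_certificate(A)
print(E)
print(sum(1 for i in range(E.rows) if any(E.row(i))))   # rank(A) = #non-zero rows of E = 2
```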
Maintaining row-echelon form
• Change A[i, j] to yield A′.
• B·A′ is E with (a multiple of) the i-th column of B added to the j-th column
• If column j has more than one leading entry of B·A′:
• the new leading entry is the one whose row has the maximum number of consecutive 0's after column j
• by row operations:
• set the new leading entry to 1,
• set all other entries of column j to 0
• If the leading entry of row k is lost in column j:
• the new leading entry is the next non-zero entry of row k, in a column > j
• by row operations:
• set the leading entry to 1
• set all other entries of that column to 0
• If needed:
• move the (at most two) rows above with changed leading entries to their correct positions
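A minimal sketch (illustrative, with sympy; a hypothetical random matrix stands in for A) verifying the observation that drives the update: changing A[i, j] by δ changes E = BA only in column j, by δ times the i-th column of B.

```python
import sympy as sp

n = 4
A = sp.randMatrix(n, n, min=0, max=3, seed=1)
Aug, _ = sp.Matrix.hstack(A, sp.eye(n)).rref()
E, B = Aug[:, :n], Aug[:, n:]              # E = B*A with E in row echelon form, B invertible

i, j, delta = 1, 2, 5                      # hypothetical single-entry change of A
A2 = A.copy()
A2[i, j] += delta

# B*A' = E + delta * (i-th column of B) placed in the j-th column
assert B * A2 == E + delta * B[:, i] * sp.eye(n)[j, :]
print("B*A' differs from E only in column", j)
```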
Outline
1 Part I: Disjoint Paths
2 Part II: Dynamic Reachability
3 Conclusion
Conclusion
• Considered two connectivity problems:
• One-face k-DPP
• Dynamic Reachability
• Writing and solving linear equations was key to both solutions
• We have only scratched the surface of Linear Algebra
Open Questions
• Derandomize construction of One-face k-DPP solution?
• “Deparameterize” counting or even decision of above?
• From 4^k · n^{O(1)} to k^{O(1)} · n^{O(1)}?
• Serial/parallel cases can be deparameterized [Borradaile et al.]
• Contrariwise, does there exist a crossover gadget?
• Is directed distance in DynAC0? What about path
construction?
Thanks!

More Related Content

PDF
Lossy Kernelization
PDF
Polylogarithmic approximation algorithm for weighted F-deletion problems
PDF
Biconnectivity
PDF
Fine Grained Complexity of Rainbow Coloring and its Variants
PDF
Kernel for Chordal Vertex Deletion
PDF
Split Contraction: The Untold Story
PDF
Polynomial Kernel for Interval Vertex Deletion
PDF
Guarding Terrains though the Lens of Parameterized Complexity
Lossy Kernelization
Polylogarithmic approximation algorithm for weighted F-deletion problems
Biconnectivity
Fine Grained Complexity of Rainbow Coloring and its Variants
Kernel for Chordal Vertex Deletion
Split Contraction: The Untold Story
Polynomial Kernel for Interval Vertex Deletion
Guarding Terrains though the Lens of Parameterized Complexity

What's hot (20)

PDF
Node Unique Label Cover
PDF
Graph Modification: Beyond the known Boundaries
PDF
Chapter06
PDF
Chapter06
PDF
Guarding Polygons via CSP
PDF
Chapter04
PDF
Archipelagos
PDF
SPECTRAL SYNTHESIS PROBLEM FOR FOURIER ALGEBRAS
PDF
COnflict Free Feedback Vertex Set: A Parameterized Dichotomy
PDF
On Convolution of Graph Signals and Deep Learning on Graph Domains
PDF
Ilya Shkredov – Subsets of Z/pZ with small Wiener norm and arithmetic progres...
PDF
Chapter02b
PDF
On the Parameterized Complexity of Simultaneous Deletion Problems
PDF
A multi-objective optimization framework for a second order traffic flow mode...
PDF
Crystallographic groups
PDF
PDF
Parabolic Restricted Three Body Problem
PPT
Pushdown automata
PPT
Pushdown automata
PDF
Approximation Algorithms for the Directed k-Tour and k-Stroll Problems
Node Unique Label Cover
Graph Modification: Beyond the known Boundaries
Chapter06
Chapter06
Guarding Polygons via CSP
Chapter04
Archipelagos
SPECTRAL SYNTHESIS PROBLEM FOR FOURIER ALGEBRAS
COnflict Free Feedback Vertex Set: A Parameterized Dichotomy
On Convolution of Graph Signals and Deep Learning on Graph Domains
Ilya Shkredov – Subsets of Z/pZ with small Wiener norm and arithmetic progres...
Chapter02b
On the Parameterized Complexity of Simultaneous Deletion Problems
A multi-objective optimization framework for a second order traffic flow mode...
Crystallographic groups
Parabolic Restricted Three Body Problem
Pushdown automata
Pushdown automata
Approximation Algorithms for the Directed k-Tour and k-Stroll Problems
Ad

Viewers also liked (19)

PPTX
Power point
PDF
Práctica 18. 2 do trimestre hidróxido de sodio
PPTX
Transtornos del lenguaje
PDF
Lecture1 pc
DOCX
SAKTHEESWARAN CHANDRASEGARAM (1)
PPT
Dr prasanna karhade
PDF
Dissertation Organisational Performance in Alliance boots UK Sample
PPTX
Fluig Webinar #4 - Webinar mostra como a FTD Educação levou mobilidade à sua ...
PPS
05 hindi parents expecation from child
PPTX
LISP: Data types in lisp
PPT
Introduction of iso9001
PDF
2016 Media Kit for BLR's Environmental, Health & Safety
PDF
Avi Network SDN meetup
DOCX
Energy Meters
PDF
FSc Marksheet
PPTX
Design Thinking & Innovation Games : Presented by Cedric Mainguy
PDF
Competence Book Talent Management
PPTX
LISP: Introduction to lisp
DOCX
38857 juknis uic hasil tm
Power point
Práctica 18. 2 do trimestre hidróxido de sodio
Transtornos del lenguaje
Lecture1 pc
SAKTHEESWARAN CHANDRASEGARAM (1)
Dr prasanna karhade
Dissertation Organisational Performance in Alliance boots UK Sample
Fluig Webinar #4 - Webinar mostra como a FTD Educação levou mobilidade à sua ...
05 hindi parents expecation from child
LISP: Data types in lisp
Introduction of iso9001
2016 Media Kit for BLR's Environmental, Health & Safety
Avi Network SDN meetup
Energy Meters
FSc Marksheet
Design Thinking & Innovation Games : Presented by Cedric Mainguy
Competence Book Talent Management
LISP: Introduction to lisp
38857 juknis uic hasil tm
Ad

Similar to Solving connectivity problems via basic Linear Algebra (20)

PPT
PPTX
NP Complete Problems -- Internship
PDF
Daa chapter11
PDF
Problem Solving with Algorithms and Data Structure - Graphs
PPTX
Randomized algorithms all pairs shortest path
PDF
Class01_Computer_Contest_Level_3_Notes_Sep_07 - Copy.pdf
PDF
P versus NP
PDF
Graph representation
PDF
Graph in Data Structure
PPTX
ICPC 2015, Tsukuba : Unofficial Commentary
PPT
PPT
Approx
PPT
Appendix b 2
PPT
graph.ppt
PPT
graph.ppt
PPT
Graphs in Discrete mathematics for computing
PDF
NP Problems in design and analysis of alogorithm
PPTX
Matrix representation of graph
NP Complete Problems -- Internship
Daa chapter11
Problem Solving with Algorithms and Data Structure - Graphs
Randomized algorithms all pairs shortest path
Class01_Computer_Contest_Level_3_Notes_Sep_07 - Copy.pdf
P versus NP
Graph representation
Graph in Data Structure
ICPC 2015, Tsukuba : Unofficial Commentary
Approx
Appendix b 2
graph.ppt
graph.ppt
Graphs in Discrete mathematics for computing
NP Problems in design and analysis of alogorithm
Matrix representation of graph

More from cseiitgn (13)

PDF
A Quest for Subexponential Time Parameterized Algorithms for Planar-k-Path: F...
PDF
Alternate Parameterizations
PDF
Dynamic Parameterized Problems - Algorithms and Complexity
PDF
Space-efficient Approximation Scheme for Maximum Matching in Sparse Graphs
PDF
Hardness of multiple choice problems
PDF
The Chasm at Depth Four, and Tensor Rank : Old results, new insights
PDF
Isolation Lemma for Directed Reachability and NL vs. L
PDF
Unbounded Error Communication Complexity of XOR Functions
PDF
Color Coding-Related Techniques
PPTX
Topology Matters in Communication
PDF
Space-efficient Approximation Scheme for Maximum Matching in Sparse Graphs
PDF
Efficiently decoding Reed-Muller codes from random errors
PDF
Complexity Classes and the Graph Isomorphism Problem
A Quest for Subexponential Time Parameterized Algorithms for Planar-k-Path: F...
Alternate Parameterizations
Dynamic Parameterized Problems - Algorithms and Complexity
Space-efficient Approximation Scheme for Maximum Matching in Sparse Graphs
Hardness of multiple choice problems
The Chasm at Depth Four, and Tensor Rank : Old results, new insights
Isolation Lemma for Directed Reachability and NL vs. L
Unbounded Error Communication Complexity of XOR Functions
Color Coding-Related Techniques
Topology Matters in Communication
Space-efficient Approximation Scheme for Maximum Matching in Sparse Graphs
Efficiently decoding Reed-Muller codes from random errors
Complexity Classes and the Graph Isomorphism Problem

Recently uploaded (20)

PDF
O5-L3 Freight Transport Ops (International) V1.pdf
PDF
01-Introduction-to-Information-Management.pdf
PPTX
Introduction-to-Literarature-and-Literary-Studies-week-Prelim-coverage.pptx
PDF
STATICS OF THE RIGID BODIES Hibbelers.pdf
PDF
3rd Neelam Sanjeevareddy Memorial Lecture.pdf
PPTX
Pharma ospi slides which help in ospi learning
PPTX
Presentation on HIE in infants and its manifestations
PDF
Chapter 2 Heredity, Prenatal Development, and Birth.pdf
PPTX
IMMUNITY IMMUNITY refers to protection against infection, and the immune syst...
PDF
Chinmaya Tiranga quiz Grand Finale.pdf
PDF
Complications of Minimal Access Surgery at WLH
PDF
Computing-Curriculum for Schools in Ghana
PPTX
Cell Types and Its function , kingdom of life
PDF
Microbial disease of the cardiovascular and lymphatic systems
PDF
Black Hat USA 2025 - Micro ICS Summit - ICS/OT Threat Landscape
PPTX
1st Inaugural Professorial Lecture held on 19th February 2020 (Governance and...
PDF
Classroom Observation Tools for Teachers
PDF
VCE English Exam - Section C Student Revision Booklet
PDF
2.FourierTransform-ShortQuestionswithAnswers.pdf
PDF
RMMM.pdf make it easy to upload and study
O5-L3 Freight Transport Ops (International) V1.pdf
01-Introduction-to-Information-Management.pdf
Introduction-to-Literarature-and-Literary-Studies-week-Prelim-coverage.pptx
STATICS OF THE RIGID BODIES Hibbelers.pdf
3rd Neelam Sanjeevareddy Memorial Lecture.pdf
Pharma ospi slides which help in ospi learning
Presentation on HIE in infants and its manifestations
Chapter 2 Heredity, Prenatal Development, and Birth.pdf
IMMUNITY IMMUNITY refers to protection against infection, and the immune syst...
Chinmaya Tiranga quiz Grand Finale.pdf
Complications of Minimal Access Surgery at WLH
Computing-Curriculum for Schools in Ghana
Cell Types and Its function , kingdom of life
Microbial disease of the cardiovascular and lymphatic systems
Black Hat USA 2025 - Micro ICS Summit - ICS/OT Threat Landscape
1st Inaugural Professorial Lecture held on 19th February 2020 (Governance and...
Classroom Observation Tools for Teachers
VCE English Exam - Section C Student Revision Booklet
2.FourierTransform-ShortQuestionswithAnswers.pdf
RMMM.pdf make it easy to upload and study

Solving connectivity problems via basic Linear Algebra

  • 1. Solving connectivity problems via basic Linear Algebra Samir Datta Chennai Mathematical Institute Includes joint work with Raghav Kulkarni Anish Mukherjee Thomas Schwentick Thomas Zeume NMI Workshop on Complexity IIT Gandhinagar November 6, 2016
  • 2. Outline 1 Part I: Disjoint Paths 2 Part II: Dynamic Reachability 3 Conclusion
  • 4. Two fundamental problems • Definition (Reachability) G directed graph and s, t ∈ V (G). Is there a path from s to t?
  • 5. Two fundamental problems • Definition (Reachability) G directed graph and s, t ∈ V (G). Is there a path from s to t? • Complete for NL
  • 6. Two fundamental problems • Definition (Reachability) G directed graph and s, t ∈ V (G). Is there a path from s to t? • Complete for NL • Optimization version: shortest path also in NL
  • 7. Two fundamental problems • Definition (Reachability) G directed graph and s, t ∈ V (G). Is there a path from s to t? • Complete for NL • Optimization version: shortest path also in NL
  • 8. Two fundamental problems • Definition (Reachability) G directed graph and s, t ∈ V (G). Is there a path from s to t? • Complete for NL • Optimization version: shortest path also in NL • Definition (Connectivity) G undirected graph, s, t ∈ V (G). Is there a path joining s, t?
  • 9. Two fundamental problems • Definition (Reachability) G directed graph and s, t ∈ V (G). Is there a path from s to t? • Complete for NL • Optimization version: shortest path also in NL • Definition (Connectivity) G undirected graph, s, t ∈ V (G). Is there a path joining s, t? • Complete for SL
  • 10. Two fundamental problems • Definition (Reachability) G directed graph and s, t ∈ V (G). Is there a path from s to t? • Complete for NL • Optimization version: shortest path also in NL • Definition (Connectivity) G undirected graph, s, t ∈ V (G). Is there a path joining s, t? • Complete for SL • (Reingold) SL = L
  • 11. Two fundamental problems • Definition (Reachability) G directed graph and s, t ∈ V (G). Is there a path from s to t? • Complete for NL • Optimization version: shortest path also in NL • Definition (Connectivity) G undirected graph, s, t ∈ V (G). Is there a path joining s, t? • Complete for SL • (Reingold) SL = L • Optimization version: shortest path in NL
  • 13. Disjoint Path Problem • Definition (k-DPP) G graph/digraph. (si , ti ) for i ∈ [k], si , ti ∈ V (G). Do there exist vertex-disjoint (si , ti )-paths?
  • 14. Disjoint Path Problem • Definition (k-DPP) G graph/digraph. (si , ti ) for i ∈ [k], si , ti ∈ V (G). Do there exist vertex-disjoint (si , ti )-paths? • Distinct from k-connectivity (k-reachability)
  • 15. Disjoint Path Problem • Definition (k-DPP) G graph/digraph. (si , ti ) for i ∈ [k], si , ti ∈ V (G). Do there exist vertex-disjoint (si , ti )-paths? • Distinct from k-connectivity (k-reachability) • NP-complete for directed graphs for k = 2
  • 16. Disjoint Path Problem • Definition (k-DPP) G graph/digraph. (si , ti ) for i ∈ [k], si , ti ∈ V (G). Do there exist vertex-disjoint (si , ti )-paths? • Distinct from k-connectivity (k-reachability) • NP-complete for directed graphs for k = 2 • NP-complete for undirected graphs when k part of input.
  • 17. Disjoint Path Problem • Definition (k-DPP) G graph/digraph. (si , ti ) for i ∈ [k], si , ti ∈ V (G). Do there exist vertex-disjoint (si , ti )-paths? • Distinct from k-connectivity (k-reachability) • NP-complete for directed graphs for k = 2 • NP-complete for undirected graphs when k part of input. • (Robertson-Seymour) Cubic time (fixed k) in undirected graphs.
  • 18. Disjoint Path Problem • Definition (k-DPP) G graph/digraph. (si , ti ) for i ∈ [k], si , ti ∈ V (G). Do there exist vertex-disjoint (si , ti )-paths? • Distinct from k-connectivity (k-reachability) • NP-complete for directed graphs for k = 2 • NP-complete for undirected graphs when k part of input. • (Robertson-Seymour) Cubic time (fixed k) in undirected graphs. • Special cases: Planar graphs/DAGs well studied.
  • 19. One-face Min-sum Disjoint Path Problem • A very special case: 1 2 4 2k 2k − 1 3
  • 20. One-face Min-sum Disjoint Path Problem • A very special case: • k-DPP instance with undirected planar graph, and 1 2 4 2k 2k − 1 3
  • 21. One-face Min-sum Disjoint Path Problem • A very special case: • k-DPP instance with undirected planar graph, and • All terminals on single face, and 1 2 4 2k 2k − 1 3
  • 22. One-face Min-sum Disjoint Path Problem • A very special case: • k-DPP instance with undirected planar graph, and • All terminals on single face, and • Want solution with minimum total length of paths. 1 2 4 2k 2k − 1 3
  • 23. One-face Min-sum Disjoint Path Problem: Motivation • Directed general case remains NP-hard in the plane though fixed parameter tractable in k.
  • 24. One-face Min-sum Disjoint Path Problem: Motivation • Directed general case remains NP-hard in the plane though fixed parameter tractable in k. • NP-hardness in undirected case: open for k > 2.
  • 25. One-face Min-sum Disjoint Path Problem: Motivation • Directed general case remains NP-hard in the plane though fixed parameter tractable in k. • NP-hardness in undirected case: open for k > 2. • (Bj¨orklund-Husfeldt’14) Randomized polynomial time algorithm in undirected case for 2-DPP.
  • 26. One-face Min-sum Disjoint Path Problem: Motivation • (de Verdi`ere-Schrijver’08) Two-face case with sources and sinks on two different faces in time O(kn log n).
  • 27. One-face Min-sum Disjoint Path Problem: Motivation • (de Verdi`ere-Schrijver’08) Two-face case with sources and sinks on two different faces in time O(kn log n). • (Kobayashi-Sommer’10) One-face min-sum 3-DPP in polynomial time
  • 28. One-face Min-sum Disjoint Path Problem: Motivation • (de Verdi`ere-Schrijver’08) Two-face case with sources and sinks on two different faces in time O(kn log n). • (Kobayashi-Sommer’10) One-face min-sum 3-DPP in polynomial time • (Borradaile-Nayyeri-Zafarani’15) One-face min-sum k-DPP in “serial” configuration in polynomial time (in n as well as k).
  • 29. One-face min-sum Disjoint Path Problem: Our Results Theorem • The count of One-face min-sum k-DPP solutions can be reduced to O(4k) determinant computations.
  • 30. One-face min-sum Disjoint Path Problem: Our Results Theorem • The count of One-face min-sum k-DPP solutions can be reduced to O(4k) determinant computations. • Finding a witness to the problem can be randomly reduced to O(4k) determinant computations.
  • 31. One-face min-sum Disjoint Path Problem: Our Results Theorem • The count of One-face min-sum k-DPP solutions can be reduced to O(4k) determinant computations. • Finding a witness to the problem can be randomly reduced to O(4k) determinant computations. • Can also do the above sequentially in deterministic O(4knω) time (ω matrix multiplication constant).
  • 32. Determinant • Let G be a weighted graph with V (G) = {1, . . . , n}
  • 33. Determinant • Let G be a weighted graph with V (G) = {1, . . . , n} • Let A be the weighted adjacency matrix of G
  • 34. Determinant • Let G be a weighted graph with V (G) = {1, . . . , n} • Let A be the weighted adjacency matrix of G • Permutation π ∈ Sn can be written as a product of cycles
  • 35. Determinant • Let G be a weighted graph with V (G) = {1, . . . , n} • Let A be the weighted adjacency matrix of G • Permutation π ∈ Sn can be written as a product of cycles • sgn(π) = (−1)#transpositions in π = (−1)n+c(π)
  • 36. Determinant • Let G be a weighted graph with V (G) = {1, . . . , n} • Let A be the weighted adjacency matrix of G • Permutation π ∈ Sn can be written as a product of cycles • sgn(π) = (−1)#transpositions in π = (−1)n+c(π) • where, c(π) is the number of cycles in π
  • 37. Determinant • Let G be a weighted graph with V (G) = {1, . . . , n} • Let A be the weighted adjacency matrix of G • Permutation π ∈ Sn can be written as a product of cycles • sgn(π) = (−1)#transpositions in π = (−1)n+c(π) • where, c(π) is the number of cycles in π • w(π) := 1≤i≤n Ai,π(i).
  • 38. Determinant • Let G be a weighted graph with V (G) = {1, . . . , n} • Let A be the weighted adjacency matrix of G • Permutation π ∈ Sn can be written as a product of cycles • sgn(π) = (−1)#transpositions in π = (−1)n+c(π) • where, c(π) is the number of cycles in π • w(π) := 1≤i≤n Ai,π(i). • det(A) = π∈Sn sgn(π)w(π)
  • 39. Determinant • Let G be a weighted graph with V (G) = {1, . . . , n} • Let A be the weighted adjacency matrix of G • Permutation π ∈ Sn can be written as a product of cycles • sgn(π) = (−1)#transpositions in π = (−1)n+c(π) • where, c(π) is the number of cycles in π • w(π) := 1≤i≤n Ai,π(i). • det(A) = π∈Sn sgn(π)w(π) • = signed sum of weights of cycle covers in G
  • 40. Determinant • Let G be a weighted graph with V (G) = {1, . . . , n} • Let A be the weighted adjacency matrix of G • Permutation π ∈ Sn can be written as a product of cycles • sgn(π) = (−1)#transpositions in π = (−1)n+c(π) • where, c(π) is the number of cycles in π • w(π) := 1≤i≤n Ai,π(i). • det(A) = π∈Sn sgn(π)w(π) • = signed sum of weights of cycle covers in G • cycle cover is a covering of vertices of G by disjoint cycles
  • 41. Determinant • Let G be a weighted graph with V (G) = {1, . . . , n} • Let A be the weighted adjacency matrix of G • Permutation π ∈ Sn can be written as a product of cycles • sgn(π) = (−1)#transpositions in π = (−1)n+c(π) • where, c(π) is the number of cycles in π • w(π) := 1≤i≤n Ai,π(i). • det(A) = π∈Sn sgn(π)w(π) • = signed sum of weights of cycle covers in G • cycle cover is a covering of vertices of G by disjoint cycles • (cycles include self loops)
  • 42. Solving a special case 1 2 3 k-1 k k+1 k+22k-2 2k-1 2k • Add edge from sink ti to si inside the face
  • 43. Solving a special case 1 2 3 k-1 k k+1 k+22k-2 2k-1 2k • Add edge from sink ti to si inside the face • Subdivide edge at ri
  • 44. Solving a special case 1 2 3 k-1 k k+1 k+22k-2 2k-1 2k • Add edge from sink ti to si inside the face • Subdivide edge at ri • Put self loops on all vertices in graph except ri ’s
  • 45. Solving a special case 1 2 3 k-1 k k+1 k+22k-2 2k-1 2k • Add edge from sink ti to si inside the face • Subdivide edge at ri • Put self loops on all vertices in graph except ri ’s • Put weight x on each original edge and 1 on self-loops
  • 46–54. Solving a special case • Bijection between k-DPPs and cycle covers • No k-DPP implies determinant = 0 • The bijection preserves weight (= x^{#edges in the k-DPP}) • All min-sum k-DPPs have: • the same weight x^ℓ (ℓ = length of a min-sum k-DPP) • the same sign (−1)^{n+(n−ℓ)+k} • and are lighter than any other cycle cover • So the coefficient of the smallest-degree term counts the min-sum k-DPPs • As easy as computing a determinant • (a toy example follows below)
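The following sympy sketch, added for this write-up, illustrates the counting claim in the smallest possible setting, k = 1 (counting shortest s-t paths). The concrete 3-vertex graph, the subdivision vertex r and the choice of weights are assumptions made for this toy example only, following the construction above.

```python
# k = 1 toy instance: undirected triangle on {1,2,3}, s = 1, t = 3.
# Added arcs t->r and r->s (weight 1), self-loops of weight 1 on 1,2,3,
# weight x on every original (undirected) edge in both directions.
import sympy as sp

x = sp.symbols('x')
n = 4                                      # vertices 1, 2, 3 and r (= 4)
A = sp.zeros(n, n)
for u, v in [(1, 2), (2, 3), (1, 3)]:      # original edges, weight x
    A[u - 1, v - 1] = x
    A[v - 1, u - 1] = x
A[2, 3] = 1                                # arc t -> r
A[3, 0] = 1                                # arc r -> s
for v in (0, 1, 2):                        # self-loops (none on r)
    A[v, v] = 1

det = sp.Poly(A.det(), x)                  # here: x - x**2
(low_deg,), coeff = sorted(det.as_dict().items())[0]
print(f"shortest s-t path length = {low_deg}, count = {abs(coeff)}")
# prints: shortest s-t path length = 1, count = 1
```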
  • 55–61. Solving a special case • Search reduces to counting in polynomial time: • consider the edges sequentially • discard any edge whose removal leaves the count > 0 (sketched below) • Search is in RNC: • use the Isolation Lemma [MVV] to isolate a min-sum k-DPP • use the isolating weight w(e) as the exponent of x on edge e • discard any edge whose removal does not change the minimum-weight exponent
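The polynomial-time search-to-counting step can be rendered as the following Python sketch. The oracle min_dpp_stats is an assumed black box (for instance, the determinant-based counter illustrated earlier); its name and signature are invented for this illustration.

```python
def extract_min_dpp(edges, min_dpp_stats):
    """Greedy self-reduction from search to counting.

    min_dpp_stats(E) -> (min_length, count) is the assumed counting oracle
    for the edge set E, with count == 0 if E admits no k-DPP at all.
    """
    target_len, _ = min_dpp_stats(frozenset(edges))
    kept = set(edges)
    for e in list(edges):
        length, count = min_dpp_stats(frozenset(kept - {e}))
        if count > 0 and length == target_len:   # some min-sum k-DPP avoids e
            kept.discard(e)                      # so e can safely be dropped
    return kept                                  # edges of one min-sum k-DPP
```

The invariant is that kept always contains at least one min-sum k-DPP; once no further edge can be discarded, every remaining edge lies on every surviving min-sum k-DPP, so the kept edges form exactly one such solution.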
  • 62–64. Issues with general case • [figure: one-face instance with terminals 1, . . . , 8] • Spurious cycle covers in addition to good ones • No bijection between the required k-DPP paths and cycle covers • Spurious cycle covers arise from different terminal pairings (demands)
  • 65–67. Fixing issues with general case • [figure: the same instance with three different demand sets] • Spurious cycle covers are canceled by cycle covers for new demands • Further spurious cycle covers are canceled by yet another demand set, and so on • Need to prove that this process converges
  • 68–72. Fixing issues with general case • Define a function len mapping demands to ℕ • Prove: the len of spurious demands is strictly greater • Prove: the len of parallel demands is maximum • Recursively express the k-DPP (count) for a demand as • the sum of a determinant and the k-DPPs (counts) for demands of greater len
  • 73. Outline 1 Part I: Disjoint Paths 2 Part II: Dynamic Reachability 3 Conclusion
  • 74. Two fundamental problems • Definition (Reachability) G directed graph and s, t ∈ V (G). Is there a path from s to t? • Complete for NL • Optimization version: shortest path also in NL • Definition (Connectivity) G undirected graph, s, t ∈ V (G). Is there a path joining s, t? • Complete for SL • (Reingold) SL = L • Optimization version: shortest path in NL
  • 76–90. Dynamic Complexity • Static complexity: • the input graph G is presented in one shot • the algorithm outputs whether G satisfies a property • complexity: time/space/circuit depth/... of the algorithm • Dynamic complexity: • start out with the empty graph G on a fixed number n of vertices • Edge insertion: edge e = (u, v) is added • Edge deletion: edge e = (u, v) is deleted • Query: is the property satisfied by the current graph? • at every time step an insertion, a deletion, or a query occurs • algorithm input: the string output at the last step • algorithm output: a new string • one (specific) bit must be the answer to the query • complexity: in terms of circuit or descriptive complexity • DynAC⁰: the update circuit is AC⁰ • DynFO: updates are first-order computable • (a toy rendering of this interface follows below)
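The update/query interface can be pictured with the toy Python class below, added here as an illustration only: the naive transitive-closure recomputation inside is a stand-in for the actual low-depth updates and says nothing about the AC⁰/FO bound.

```python
# Sketch of the dynamic model: a state handed from step to step,
# modified on insert/delete, read on query.
class DynamicReach:
    def __init__(self, n):
        self.n = n
        self.edges = set()                        # part of the stored state
        self.reach = {(v, v) for v in range(n)}   # current reachability pairs

    def _recompute(self):
        """Naive placeholder update: full transitive closure from scratch."""
        self.reach = {(v, v) for v in range(self.n)} | set(self.edges)
        changed = True
        while changed:
            new = {(u, w) for (u, v) in self.reach
                          for (v2, w) in self.reach if v == v2}
            changed = not new <= self.reach
            self.reach |= new

    def insert(self, u, v):
        self.edges.add((u, v))
        self._recompute()

    def delete(self, u, v):
        self.edges.discard((u, v))
        self._recompute()

    def query(self, s, t):
        return (s, t) in self.reach

D = DynamicReach(4)
D.insert(0, 1); D.insert(1, 2)
print(D.query(0, 2))   # True  (0 -> 1 -> 2)
D.delete(1, 2)
print(D.query(0, 2))   # False
```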
  • 91–94. Dynamic Reachability • Theorem: Reach is in DynFO. • Proof outline: • Rank: what is the rank of a given n × n matrix? • Step 1: reduce Reach to Rank • Step 2: Rank ∈ uniform DynAC⁰ • Step 3: DynFO ≈ uniform DynAC⁰
  • 95–102. Reducing Reach to Rank • Approach: map a graph G to a matrix B such that • s-t-reachability in G corresponds to full rank of B, and • a change of G can be simulated by a constant number of changes of B • Denote the adjacency matrix of G by A • Recall: the (s, t)-entry of A^i is the number of walks of length i from s to t • Observe: I − (1/n)A is invertible and (I − (1/n)A)^{−1} = Σ_{i≥0} ((1/n)A)^i • Crux: t is not reachable from s • iff the (s, t)-entry of B^{−1} is zero, where B := I − (1/n)A • iff the solution x = B^{−1}e_t has x_s = 0 • iff Bx = e_t has a unique solution x with x_s = 0 • This system of equations can be phrased as a rank problem ... and one change in A leads to one change in B • Technical detail: use B := nI − A instead of B := I − (1/n)A, to keep the entries integral (see the check below)
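A quick sanity check of the crux, added for this write-up and using exact rational arithmetic in sympy: with B := nI − A, the (s, t)-entry of B⁻¹ is non-zero exactly when t is reachable from s. The 4-vertex graph is an arbitrary example.

```python
import sympy as sp

def reachable_via_inverse(adj, s, t):
    n = len(adj)
    A = sp.Matrix(adj)
    B = n * sp.eye(n) - A          # integer variant B := nI - A
    # (nI - A)^{-1} = (1/n) * sum_{i >= 0} (A/n)^i, entrywise >= 0,
    # and its (s,t)-entry is positive iff some s-t walk exists.
    return B.inv()[s, t] != 0

# Directed graph 0 -> 1 -> 2, plus an isolated vertex 3.
adj = [[0, 1, 0, 0],
       [0, 0, 1, 0],
       [0, 0, 0, 0],
       [0, 0, 0, 0]]
print(reachable_via_inverse(adj, 0, 2))   # True
print(reachable_via_inverse(adj, 2, 0))   # False
```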
  • 103–111. Reducing Rank to mod-p-Rank • rk(A) = max_{p∈[n²], p prime} rk_p(A) • rk(A) ≥ k • iff ∃A′, a k × k submatrix of A, with rk(A′) = k • iff det(A′) ≠ 0 • iff ∃p, a small prime not dividing det(A′) • iff rk_p(A′) = k, which implies rk_p(A) ≥ k • Conversely, rk_p(A) ≤ rk(A), since • a linear relation over the integers gives a linear relation over Z_p • (a small numerical check follows below)
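The identity rk(A) = max over small primes p of rk_p(A) can be checked on a toy matrix. The Gaussian-elimination helper below is written for this illustration (sympy is used only for the rational rank and for enumerating primes); it needs Python 3.8+ for the modular inverse via pow.

```python
from sympy import Matrix, primerange

def rank_mod_p(rows, p):
    """Rank of an integer matrix over the field Z_p (p prime)."""
    M = [[x % p for x in row] for row in rows]
    rank, col, n_rows, n_cols = 0, 0, len(M), len(M[0])
    while rank < n_rows and col < n_cols:
        pivot = next((r for r in range(rank, n_rows) if M[r][col]), None)
        if pivot is None:                                   # no pivot here
            col += 1
            continue
        M[rank], M[pivot] = M[pivot], M[rank]
        inv = pow(M[rank][col], -1, p)                      # modular inverse
        M[rank] = [(v * inv) % p for v in M[rank]]          # normalize pivot row
        for r in range(n_rows):                             # clear the column
            if r != rank and M[r][col]:
                f = M[r][col]
                M[r] = [(a - f * b) % p for a, b in zip(M[r], M[rank])]
        rank, col = rank + 1, col + 1
    return rank

A = [[1, 2, 3],
     [2, 4, 6],
     [1, 0, 1]]
n = len(A)
print(Matrix(A).rank())                                         # 2 over Q
print(max(rank_mod_p(A, p) for p in primerange(2, n * n + 1)))  # 2 as well
```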
  • 112–120. Maintaining row-echelon form • Definition: • the leading (left-most non-zero) entry in every row is 1, • in the column of a leading entry all other entries are zero, • rows are sorted in “diagonal” fashion • Example:
      1 4 0 2 0 2
      0 0 1 3 0 4
      0 0 0 0 1 7
      0 0 0 0 0 0
      0 0 0 0 0 0
      0 0 0 0 0 0
  • Given a matrix A: • maintain B invertible • and E in row-echelon form • such that E = BA • rank(A) is the number of non-zero rows of E (see the sketch below)
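The invariant itself is easy to compute from scratch; the sympy sketch below (an illustration added for this write-up, not the dynamic algorithm) row-reduces [A | I] to obtain an invertible B with E = BA in reduced row-echelon form and reads off the rank as the number of non-zero rows.

```python
import sympy as sp

A = sp.Matrix([[1, 4, 0, 2],
               [2, 8, 1, 7],
               [1, 4, 1, 5]])

aug, _ = A.row_join(sp.eye(A.rows)).rref()   # row-reduce [A | I]
E, B = aug[:, :A.cols], aug[:, A.cols:]      # result is [E | B] with E = B*A

assert B.det() != 0 and E == B * A           # B invertible, invariant holds
print(E)                                     # row-echelon form of A
print("rank(A) =", sum(1 for i in range(E.rows) if any(E.row(i))))
```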
  • 121–134. Maintaining row-echelon form • Change A[i, j] to yield A′ • BA′ is E with a multiple of the i-th column of B added to its j-th column (checked in the snippet below) • If column j of BA′ contains more than one leading entry: • the new leading entry is the one whose row has the maximum number of consecutive 0's after column j • by row operations: • set the new leading entry to 1, • set all other entries of column j to 0 • If the leading entry of row k is lost in column j: • the new leading entry is the next non-zero entry of row k, in a column > j • by row operations: • set this leading entry to 1, • set all other entries of its column to 0, • If needed: • move the (at most two) rows with changed leading entries to their correct positions
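A tiny self-contained sympy check of the first claim above (the setup repeats the previous sketch; the indices i, j and the value of the change are arbitrary): changing A[i, j] by δ changes BA in column j only, by δ times the i-th column of B.

```python
import sympy as sp

A = sp.Matrix([[1, 4, 0, 2],
               [2, 8, 1, 7],
               [1, 4, 1, 5]])
aug, _ = A.row_join(sp.eye(A.rows)).rref()
E, B = aug[:, :A.cols], aug[:, A.cols:]      # E = B*A as before

i, j, delta = 1, 2, 5                        # change entry A[1, 2] by 5
A2 = A.copy()
A2[i, j] += delta
e_j = sp.zeros(1, A.cols)
e_j[0, j] = 1                                # row vector e_j^T
assert B * A2 == E + delta * B[:, i] * e_j   # only column j is affected
print("single-column update verified")
```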
  • 135. Outline 1 Part I: Disjoint Paths 2 Part II: Dynamic Reachability 3 Conclusion
  • 136–140. Conclusion • Considered two connectivity problems: • one-face k-DPP • dynamic Reachability • Writing and solving linear equations was the key to both solutions • We have only scratched the surface of Linear Algebra
  • 141–146. Open Questions • Derandomize the construction of a one-face k-DPP solution? • “Deparameterize” counting, or even decision, of the above? • i.e., from 4^k · n^{O(1)} to k^{O(1)} · n^{O(1)}? • Serial/parallel cases can be deparameterized [Borradaile et al.] • Contrariwise, does there exist a crossover gadget? • Is directed distance in DynAC⁰? What about path construction?