Lower Bounds on Kernelization
Venkatesh Raman
The Institute of Mathematical Sciences, Chennai

March 6, 2014

Some known kernelization results

Linear: MaxSat – 2k clauses, k variables
Quadratic: k-Vertex Cover – 2k vertices but O(k^2) edges
Cubic: k-Dominating Set in graphs without C4 – O(k^3) vertices
Exponential: k-Path – 2^{O(k)}
No Kernel: k-Dominating Set is W-hard, so it is not expected to have a kernel of any size.
In this lecture, we will see some techniques to rule out polynomial kernels.

OR of a language

Definition
Let L ⊆ {0, 1}* be a language. Then define
Or(L) = {(x1, . . . , xp) | ∃i such that xi ∈ L}.

Definition
Let t : N → N \ {0} be a function. Then define
Or_t(L) = {(x1, . . . , x_{t(|x1|)}) | ∀j, |xj| = |x1|, and ∃i such that xi ∈ L}.
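The definition of Or_t(L) translates directly into a membership test. In the sketch below, L is abstracted as a predicate; the toy language ("even number of 1s") and the constant function t are assumptions made only for this illustration:

```python
def in_or_t(instances, in_L, t):
    """Membership test for Or_t(L): there must be exactly t(|x1|)
    instances, every instance must have the same length as the first,
    and at least one must belong to L (given by the predicate in_L)."""
    n = len(instances[0])
    if len(instances) != t(n):
        return False
    if any(len(x) != n for x in instances):
        return False
    return any(in_L(x) for x in instances)

# Toy language: binary strings with an even number of 1s; t is constant.
even_ones = lambda x: x.count("1") % 2 == 0
t = lambda n: 3

print(in_or_t(["111", "101", "100"], even_ones, t))  # "101" is in L
```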

Distillation

Let L, L′ ⊆ {0, 1}* be a pair of languages and let t : N → N \ {0} be a function. We say that L has a t-bounded distillation algorithm if there exists
a polynomial-time computable function f : {0, 1}* → {0, 1}* such that
f((x1, . . . , x_{t(|x1|)})) ∈ L′ if and only if (x1, . . . , x_{t(|x1|)}) ∈ Or_t(L), and
|f((x1, . . . , x_{t(|x1|)}))| ≤ O(t(|x1|) log t(|x1|)).

Fortnow-Santhanam

Theorem (FS 09)
Suppose for a pair of languages L, L′ ⊆ {0, 1}*, there exists a polynomially bounded function t : N → N \ {0} such that L has a t-bounded distillation algorithm (into L′). Then the complement of L is in NP/poly. In particular, if L is NP-hard, then coNP ⊆ NP/poly.

Outline of proof of Fortnow-Santhanam theorem

Take an NP-complete problem L with A, a t-bounded distillation algorithm.
Use A to design an NDTM that, with a "polynomial advice", can decide the complement of L in P-time.
The complement of L is then in NP/poly ⇒ coNP ⊆ NP/poly, and we get the theorem!

Filling in the details
For the proof, we define the notions needed and the requirements.
Let |xi| = n ∀i ∈ [t(n)].
Let α(n) = O(t(n) log(t(n))) be the bound on the output size of A.
Let Ln = {x ∈ Σ^n : x ∉ L} (the no-instances of L of length n).
Given any (x1, x2, · · · , x_{t(n)}) ∉ Or_t(L) (i.e., xi ∈ Ln ∀i ∈ [t(n)]), A maps it to some y ∉ L′ with |y| ≤ α(n).
We want to obtain an Sn ⊆ Σ^{≤α(n)} \ L′ with |Sn| polynomially bounded in n such that:
If x ∈ Ln: ∃ strings x1, · · · , x_{t(n)} ∈ Σ^n with xi = x for some i such that A(x1, · · · , x_{t(n)}) ∈ Sn.
If x ∉ Ln: ∀ strings x1, · · · , x_{t(n)} ∈ Σ^n with xi = x for some i, A(x1, · · · , x_{t(n)}) ∉ Sn.

How will the nondeterministic algorithm work?

Having Sn as advice gives the desired NDTM which, when given x such that |x| = n, checks whether x ∉ L in the following way:
Guesses t(n) strings x1, · · · , x_{t(n)} ∈ Σ^n.
Checks whether one of them is x.
Computes A(x1, · · · , x_{t(n)}) and accepts iff the output is in Sn.
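Nondeterminism cannot be run directly, but a single branch of this machine is an ordinary polynomial-time check. A minimal sketch, in which A and S_n are abstract stand-ins (the toy choices below are assumptions, not a concrete distillation):

```python
def branch_accepts(x, guess, A, S_n):
    """One computation branch of the NDTM: `guess` is a tuple of t(n)
    strings of length n = |x|.  The branch accepts iff x occurs among
    the guessed strings and the distillation output A(guess) lies in
    the advice set S_n.  The NDTM accepts x iff SOME branch accepts."""
    n = len(x)
    if any(len(g) != n for g in guess):
        return False
    if x not in guess:
        return False
    return A(guess) in S_n

# Toy stand-in: A returns the lexicographically smallest guessed string.
print(branch_accepts("010", ("000", "010", "111"), min, {"000"}))
```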

How to get Sn
A : (Ln)^{t(n)} → Σ^{≤α(n)} \ L′.
A string y ∈ Σ^{≤α(n)} \ L′ covers a string x ∈ Ln if ∃ x1, · · · , x_{t(n)} ∈ Ln with xi = x for some i and A(x1, · · · , x_{t(n)}) = y.
We construct Sn by iteratively picking the string in Σ^{≤α(n)} \ L′ which covers the largest number of uncovered instances in Ln, until there are no instances left to cover.
Let us consider one step of the process. Let F be the set of uncovered instances in Ln at the start of the step.
By the pigeonhole principle, there exists a string y ∈ Σ^{≤α(n)} \ L′ such that A maps at least |F|^{t(n)} / |Σ^{≤α(n)} \ L′| tuples in F^{t(n)} to y.
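This greedy choice is ordinary greedy set cover. A small sketch over a finite toy universe (the function A and the instance set are illustrative assumptions; in the actual proof the instances are exponentially many and the argument is only existential):

```python
from itertools import product

def build_advice(no_instances, A, t):
    """Greedily build S_n: a string y 'covers' x if some tuple in
    no_instances^t containing x satisfies A(tuple) = y.  Repeatedly
    pick the y covering the most still-uncovered instances until
    every instance is covered."""
    uncovered = set(no_instances)
    advice = []
    while uncovered:
        covered_by = {}  # output y -> uncovered instances it covers
        for tup in product(no_instances, repeat=t):
            covered_by.setdefault(A(tup), set()).update(set(tup) & uncovered)
        y = max(covered_by, key=lambda z: len(covered_by[z]))
        advice.append(y)
        uncovered -= covered_by[y]
    return advice

# With A = min, the lexicographically smallest instance covers everything,
# so a single advice string suffices.
print(build_advice(["10", "01", "11"], min, 2))
```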
How to get Sn (Cont.)
At least
(|F|^{t(n)} / |Σ^{≤α(n)} \ L′|)^{1/t(n)} = |F| / |Σ^{≤α(n)} \ L′|^{1/t(n)}
strings in F are covered by y in each step.
We can restate this by saying that at least a ϕ(n) fraction of the remaining set is covered in each iteration, where
ϕ(n) = 1 / 2^{(α(n)+1)/t(n)} ≤ 1 / |Σ^{≤α(n)} \ L′|^{1/t(n)},
using |Σ^{≤α(n)}| < 2^{α(n)+1}.
There were 2^n strings to cover at the start. So, the number of strings left to cover after p steps is at most
(1 − ϕ(n))^p · 2^n ≤ 2^n / e^{ϕ(n)·p},
which is less than one for p = O(n/ϕ(n)).
So, the process ends after O(n/ϕ(n)) = O(n · 2^{(α(n)+1)/t(n)}) steps, which is polynomial in n since α(n) = O(t(n) log(t(n))) and t is polynomially bounded.
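The shrinking rate can be sanity-checked numerically. The concrete values of n and t(n) below are arbitrary illustrative choices, not part of the proof:

```python
import math

n = 20
t = n ** 2                           # a polynomially bounded t(n)
alpha = t * math.ceil(math.log2(t))  # alpha(n) = O(t(n) log t(n))
phi = 2 ** (-(alpha + 1) / t)        # fraction covered per greedy step

steps, remaining = 0, 2.0 ** n       # 2^n strings to cover initially
while remaining >= 1:
    remaining *= 1 - phi             # each step covers a phi fraction
    steps += 1

# The greedy process stops within O(n / phi) steps, polynomial in n.
print(steps, math.ceil(2 * n / phi))
```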
Take away
A few comments about the theorem:

coNP ⊆ NP/poly implies PH = Σ3^p, a collapse of the polynomial hierarchy to its third level.
The theorem gives us the collapse even if the distillation algorithm is allowed to be co-nondeterministic.
The main message is that if you have t(n) instances of size n, you cannot, in polynomial time, produce an instance of size O(t(n) log t(n)) that is equivalent to the OR of them.

How to use the theorem to prove kernel lower bounds
We know that NP-complete problems cannot have a distillation algorithm unless coNP ⊆ NP/poly.
We want to define some analogue of distillation that produces an instance (x, k) of a parameterized problem L′, starting from many instances of an NP-complete language L.
We call such an algorithm a composition algorithm. We will define it formally in the next slide.
The goal is that a composition of an NP-complete language L into L′, combined with a kernel of a certain size for L′, gives us a distillation of L.
So, if we can show that a composition algorithm from L to L′ with the desired properties exists, then L′ cannot have a kernel of a certain size.

Weak d-Composition

(Weak d-composition). Let L̃ ⊆ Σ* be a set and let Q ⊆ Σ* × N be a parameterized problem. We say that L̃ weakly d-composes into Q if there is an algorithm C which, given t strings x1, x2, . . . , xt, takes time polynomial in Σ_{i=1}^{t} |xi| and outputs an instance (y, k) ∈ Σ* × N such that the following hold:
k ≤ t^{1/d} · (max_{i=1}^{t} |xi|)^{O(1)}
The output is a YES-instance of Q if and only if at least one instance xi is a YES-instance of L̃.

Theorem
Let L̃ ⊆ Σ* be a set which is NP-hard. If L̃ weakly d-composes into the parameterized problem Q, then Q has no kernel of size O(k^{d−ε}) for every ε > 0 unless NP ⊆ coNP/poly.

Proof of the theorem

Theorem
Let L̃ ⊆ Σ* be a set which is NP-hard. If L̃ weakly d-composes into the parameterized problem Q, then Q has no kernel of size O(k^{d−ε}) for every ε > 0 unless NP ⊆ coNP/poly.

Proof. Let |xi| = n ∀i ∈ [t(n)] for the input of the composition. After applying the kernelization to the composed instance, the size of the instance we get is
O((t(n)^{1/d} · n^c)^{d−ε}) = O(t(n)^{1−ε/d} · n^{c(d−ε)})
= O(t(n))   (for t(n) a sufficiently large polynomial)
= O(t(n) log t(n)).
Thus the composition followed by the kernelization is a t-bounded distillation of L̃, and the Fortnow-Santhanam theorem gives NP ⊆ coNP/poly.

Some comments about composition
In the composition, we asked for the parameter k to be at most t^{1/d} · n^{O(1)}. That ruled out kernels of size k^{d−ε}.
What if we can output an instance with k = t^{o(1)} · n^{O(1)}? Then we can rule out kernels of size k^{d−ε} for ALL d!
We call such an algorithm just a "composition".
Since the theorem of Fortnow and Santhanam allows co-nondeterminism, we can also use coNP compositions for proving lower bounds.
Sometimes getting a composition from arbitrary instances of a language can be difficult.
Some structure on the input instances helps to get a composition (next slide).

Polynomial Equivalence Relation

(Polynomial Equivalence Relation). An equivalence relation R on Σ* is called a polynomial equivalence relation if the following two conditions hold:
1. There is an algorithm that, given two strings x, y ∈ Σ*, decides whether x and y belong to the same equivalence class in (|x| + |y|)^{O(1)} time.
2. For any finite set S ⊆ Σ*, the equivalence relation R partitions the elements of S into at most (max_{x∈S} |x|)^{O(1)} classes.

What to do with a Polynomial Equivalence Relation

The equivalence relation can partition the input on the basis of different parameters. These equivalence classes can be used to give the input to the composition a nice structure.
Helpful choices are often partitions in which all instances have the same number of vertices, or ask for the same solution size, etc.
Then all we need to do is come up with a composition algorithm for instances belonging to the same equivalence class.
Since there are only polynomially many equivalence classes, in the end we can just output an instance of Or(L′).
The next slide is a nice illustration of this method by Michał Pilipczuk.
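The grouping step itself is easy to sketch. Here, as an illustrative assumption, instances are pairs (graph, k) with the graph given as an adjacency list, and the relation classes instances by (number of vertices, solution size):

```python
from collections import defaultdict

def group_instances(instances):
    """Partition instances (graph, k) into equivalence classes keyed by
    (number of vertices, solution size).  The key is computable in
    polynomial time and takes only polynomially many values on inputs
    of bounded size, so this is a polynomial equivalence relation.
    A composition algorithm is then designed per class."""
    classes = defaultdict(list)
    for graph, k in instances:
        classes[(len(graph), k)].append((graph, k))
    return dict(classes)

toy = [({0: [], 1: []}, 1), ({0: [1], 1: [0]}, 1), ({0: []}, 1)]
print({key: len(group) for key, group in group_instances(toy).items()})
```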

[Figure: schematic by Michał Pilipczuk (from his "No-poly-kernels tutorial", slide 11/31). Many instances of an NP-hard language L̃ are partitioned into equivalence classes; each class is composed (cmp) into one instance of the parameterized problem L′; each composed instance is kernelized (kern) to size poly(k); and the resulting kernels are combined into a single OR-SAT instance, yielding a distillation.]
Take away

We use compositions to rule out polynomial kernels.
A composition from an NP-hard problem L to a parameterized problem L′ gives kernelization hardness for L′.
k = t^{o(1)} · n^c ⇒ no polynomial kernel.
k = t^{1/d} · n^c ⇒ no kernel of size k^{d−ε}.
We can make use of equivalence classes to give structure to the input of the composition.
Examples on the board!

Thank You!

Venkatesh Raman

Lower Bounds on Kernelization

More Related Content

PDF
The Exponential Time Hypothesis
PDF
Kernelization Basics
PDF
Fixed-Parameter Intractability
PDF
Iterative Compression
PDF
Important Cuts and (p,q)-clustering
PDF
Paths and Polynomials
PDF
Biconnectivity
PDF
Bidimensionality
The Exponential Time Hypothesis
Kernelization Basics
Fixed-Parameter Intractability
Iterative Compression
Important Cuts and (p,q)-clustering
Paths and Polynomials
Biconnectivity
Bidimensionality

What's hot (20)

PDF
Lossy Kernelization
PDF
Node Unique Label Cover
PDF
19 Minimum Spanning Trees
PPT
Algorithm Design and Complexity - Course 10
PPT
ADA - Minimum Spanning Tree Prim Kruskal and Dijkstra
PDF
20 Single Source Shorthest Path
PPT
Algorithm Design and Complexity - Course 9
PDF
On Spaces of Entire Functions Having Slow Growth Represented By Dirichlet Series
PDF
Heuristics for counterexamples to the Agrawal Conjecture
PPTX
P, NP and NP-Complete, Theory of NP-Completeness V2
PDF
Scribed lec8
PPT
minimum spanning trees Algorithm
PDF
Nies cuny describing_finite_groups
PPTX
Algorithm Design and Complexity - Course 7
PPTX
Minimum spanning tree algorithms by ibrahim_alfayoumi
PDF
Topological sorting
PPTX
Signals Processing Assignment Help
PDF
Prim algorithm
PDF
05 linear transformations
PDF
Introduction to Fourier transform and signal analysis
Lossy Kernelization
Node Unique Label Cover
19 Minimum Spanning Trees
Algorithm Design and Complexity - Course 10
ADA - Minimum Spanning Tree Prim Kruskal and Dijkstra
20 Single Source Shorthest Path
Algorithm Design and Complexity - Course 9
On Spaces of Entire Functions Having Slow Growth Represented By Dirichlet Series
Heuristics for counterexamples to the Agrawal Conjecture
P, NP and NP-Complete, Theory of NP-Completeness V2
Scribed lec8
minimum spanning trees Algorithm
Nies cuny describing_finite_groups
Algorithm Design and Complexity - Course 7
Minimum spanning tree algorithms by ibrahim_alfayoumi
Topological sorting
Signals Processing Assignment Help
Prim algorithm
05 linear transformations
Introduction to Fourier transform and signal analysis
Ad

Viewers also liked (14)

PDF
Cut and Count
PDF
Treewidth and Applications
PDF
Color Coding
PDF
Matroid Basics
PDF
Representative Sets
PDF
Important Cuts
PDF
Efficient Simplification: The (im)possibilities
PDF
Dynamic Programming Over Graphs of Bounded Treewidth
PDF
Steiner Tree Parameterized by Treewidth
PDF
EKR for Matchings
PDF
A Kernel for Planar F-deletion: The Connected Case
PDF
Kernels for Planar F-Deletion (Restricted Variants)
PDF
From FVS to F-Deletion
PDF
Separators with Non-Hereditary Properties
Cut and Count
Treewidth and Applications
Color Coding
Matroid Basics
Representative Sets
Important Cuts
Efficient Simplification: The (im)possibilities
Dynamic Programming Over Graphs of Bounded Treewidth
Steiner Tree Parameterized by Treewidth
EKR for Matchings
A Kernel for Planar F-deletion: The Connected Case
Kernels for Planar F-Deletion (Restricted Variants)
From FVS to F-Deletion
Separators with Non-Hereditary Properties
Ad

Similar to Kernel Lower Bounds (20)

PDF
lec4_annotated.pdf ml csci 567 vatsal sharan
PDF
MUMS: Bayesian, Fiducial, and Frequentist Conference - Coverage of Credible I...
PDF
Approximation Methods Of Solutions For Equilibrium Problem In Hilbert Spaces
PDF
Uniform Boundedness of Shift Operators
PDF
Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...
PPT
Text classification using Text kernels
PDF
Harmonic Analysis and Deep Learning
PDF
On Twisted Paraproducts and some other Multilinear Singular Integrals
PDF
Tail Probabilities for Randomized Program Runtimes via Martingales for Higher...
PDF
15.sp.dictionary_draft.pdf
PDF
Kernel Bayes Rule
PPTX
Lecture5.pptx
PDF
QMC: Operator Splitting Workshop, Projective Splitting with Forward Steps and...
PDF
Lecture5
PDF
Influence of the sampling on Functional Data Analysis
PDF
Solutions for Problems from Applied Optimization by Ross Baldick
PDF
Solutions for Problems from Applied Optimization by Ross Baldick
PPTX
PRML Chapter 6
lec4_annotated.pdf ml csci 567 vatsal sharan
MUMS: Bayesian, Fiducial, and Frequentist Conference - Coverage of Credible I...
Approximation Methods Of Solutions For Equilibrium Problem In Hilbert Spaces
Uniform Boundedness of Shift Operators
Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...
Text classification using Text kernels
Harmonic Analysis and Deep Learning
On Twisted Paraproducts and some other Multilinear Singular Integrals
Tail Probabilities for Randomized Program Runtimes via Martingales for Higher...
15.sp.dictionary_draft.pdf
Kernel Bayes Rule
Lecture5.pptx
QMC: Operator Splitting Workshop, Projective Splitting with Forward Steps and...
Lecture5
Influence of the sampling on Functional Data Analysis
Solutions for Problems from Applied Optimization by Ross Baldick
Solutions for Problems from Applied Optimization by Ross Baldick
PRML Chapter 6

Recently uploaded (20)

PDF
What if we spent less time fighting change, and more time building what’s rig...
PDF
Anesthesia in Laparoscopic Surgery in India
PPTX
UV-Visible spectroscopy..pptx UV-Visible Spectroscopy – Electronic Transition...
PDF
Paper A Mock Exam 9_ Attempt review.pdf.
PDF
LNK 2025 (2).pdf MWEHEHEHEHEHEHEHEHEHEHE
PDF
Updated Idioms and Phrasal Verbs in English subject
PPTX
202450812 BayCHI UCSC-SV 20250812 v17.pptx
PDF
OBE - B.A.(HON'S) IN INTERIOR ARCHITECTURE -Ar.MOHIUDDIN.pdf
PDF
The Lost Whites of Pakistan by Jahanzaib Mughal.pdf
PDF
2.FourierTransform-ShortQuestionswithAnswers.pdf
PPTX
1st Inaugural Professorial Lecture held on 19th February 2020 (Governance and...
PDF
Microbial disease of the cardiovascular and lymphatic systems
PPTX
PPT- ENG7_QUARTER1_LESSON1_WEEK1. IMAGERY -DESCRIPTIONS pptx.pptx
PPTX
school management -TNTEU- B.Ed., Semester II Unit 1.pptx
PDF
Chapter 2 Heredity, Prenatal Development, and Birth.pdf
PDF
Chinmaya Tiranga quiz Grand Finale.pdf
PDF
Module 4: Burden of Disease Tutorial Slides S2 2025
PDF
ChatGPT for Dummies - Pam Baker Ccesa007.pdf
PPTX
Final Presentation General Medicine 03-08-2024.pptx
PDF
RTP_AR_KS1_Tutor's Guide_English [FOR REPRODUCTION].pdf
What if we spent less time fighting change, and more time building what’s rig...
Anesthesia in Laparoscopic Surgery in India
UV-Visible spectroscopy..pptx UV-Visible Spectroscopy – Electronic Transition...
Paper A Mock Exam 9_ Attempt review.pdf.
LNK 2025 (2).pdf MWEHEHEHEHEHEHEHEHEHEHE
Updated Idioms and Phrasal Verbs in English subject
202450812 BayCHI UCSC-SV 20250812 v17.pptx
OBE - B.A.(HON'S) IN INTERIOR ARCHITECTURE -Ar.MOHIUDDIN.pdf
The Lost Whites of Pakistan by Jahanzaib Mughal.pdf
2.FourierTransform-ShortQuestionswithAnswers.pdf
1st Inaugural Professorial Lecture held on 19th February 2020 (Governance and...
Microbial disease of the cardiovascular and lymphatic systems
PPT- ENG7_QUARTER1_LESSON1_WEEK1. IMAGERY -DESCRIPTIONS pptx.pptx
school management -TNTEU- B.Ed., Semester II Unit 1.pptx
Chapter 2 Heredity, Prenatal Development, and Birth.pdf
Chinmaya Tiranga quiz Grand Finale.pdf
Module 4: Burden of Disease Tutorial Slides S2 2025
ChatGPT for Dummies - Pam Baker Ccesa007.pdf
Final Presentation General Medicine 03-08-2024.pptx
RTP_AR_KS1_Tutor's Guide_English [FOR REPRODUCTION].pdf

Kernel Lower Bounds

  • 1. Lower Bounds on Kernelization Venkatesh Raman Institiue of Mathematical Sciences, Chennai March 6, 2014 Venkatesh Raman Lower Bounds on Kernelization
  • 2. Some known kernelization results Linear: MaxSat – 2k clauses, k variables Venkatesh Raman Lower Bounds on Kernelization
  • 3. Some known kernelization results Linear: MaxSat – 2k clauses, k variables Quadratic: k-Vertex Cover – 2k vertices but O(k 2 ) edges Venkatesh Raman Lower Bounds on Kernelization
  • 4. Some known kernelization results Linear: MaxSat – 2k clauses, k variables Quadratic: k-Vertex Cover – 2k vertices but O(k 2 ) edges Cubic: k-Dominating Set in graphs without C4 – O(k 3 ) vertices Venkatesh Raman Lower Bounds on Kernelization
  • 5. Some known kernelization results Linear: MaxSat – 2k clauses, k variables Quadratic: k-Vertex Cover – 2k vertices but O(k 2 ) edges Cubic: k-Dominating Set in graphs without C4 – O(k 3 ) vertices Exponential: k-Path – 2O(k) Venkatesh Raman Lower Bounds on Kernelization
  • 6. Some known kernelization results Linear: MaxSat – 2k clauses, k variables Quadratic: k-Vertex Cover – 2k vertices but O(k 2 ) edges Cubic: k-Dominating Set in graphs without C4 – O(k 3 ) vertices Exponential: k-Path – 2O(k) No Kernel: k-Dominating Set is W-hard. So is not expected to have kernels of any size. Venkatesh Raman Lower Bounds on Kernelization
  • 7. Some known kernelization results Linear: MaxSat – 2k clauses, k variables Quadratic: k-Vertex Cover – 2k vertices but O(k 2 ) edges Cubic: k-Dominating Set in graphs without C4 – O(k 3 ) vertices Exponential: k-Path – 2O(k) No Kernel: k-Dominating Set is W-hard. So is not expected to have kernels of any size. In this lecture, we will see some techniques to rule out polynomial kernels. Venkatesh Raman Lower Bounds on Kernelization
  • 8. OR of a language Definition Let L ⊆ {0, 1}∗ be a language. Then define Or(L) = {(x1 , . . . , xp ) | ∃i such that xi ∈ L} Definition Let t : N → N {0} be a function. Then define Ort (L) = {(x1 , . . . , xt(|x1 |) ) | ∀j |xj | = |x1 |, and ∃i such that xi ∈ L} Venkatesh Raman Lower Bounds on Kernelization
  • 9. Distillation Let L, L ⊆ {0, 1}∗ be a pair of languages and let t : N → N {0} be a function. We say that L has t-bounded distillation algorithm if there exists a polynomial time computable function f : {0, 1}∗ → {0, 1}∗ such that f ((x1 , . . . , xt(|x1 |) )) ∈ L if and only if (x1 , . . . , xt(|x1 |) ) ∈ Ort (L), and |f ((x1 , . . . , xt(|x1 |) )| ≤ O(t(|x1 |) log t(|x1 |)). Venkatesh Raman Lower Bounds on Kernelization
  • 10. Fortnow-Santhanam Theorem (FS 09) Suppose for a pair of languages L, L ⊆ {0, 1}∗ , there exists a polynomially bounded function t : N → N {0} such that L has a t-bounded distillation algorithm. Then L ∈ NP/poly. In particular, if L is NP-hard, then coNP ⊆ NP/poly. Venkatesh Raman Lower Bounds on Kernelization
  • 11. Outline of proof of Fortnow Santhanam theorem NP-complete problem L with A, a t-bounded distillation algorithm. Venkatesh Raman Lower Bounds on Kernelization
  • 12. Outline of proof of Fortnow Santhanam theorem NP-complete problem L with A, a t-bounded distillation algorithm. Use A to design NDTM that, with a “polynomial advice”, can decide L in P-time. Venkatesh Raman Lower Bounds on Kernelization
  • 13. Outline of proof of Fortnow Santhanam theorem NP-complete problem L with A, a t-bounded distillation algorithm. Use A to design NDTM that, with a “polynomial advice”, can decide L in P-time. L ∈ NP/poly ⇒ coNP ⊆ NP/poly and we get the theorem! Venkatesh Raman Lower Bounds on Kernelization
  • 20. Filling in the details
For the proof, we define the notions needed and the requirements.
Let |x_i| = n for all i ∈ [t(n)]. Let α(n) = O(t(n) log t(n)).
Let L̄_n = {x ∉ L : |x| ≤ n}, the NO-instances of length at most n, and let L̄′_{≤α(n)} = {y ∉ L′ : |y| ≤ α(n)}.
Given any (x_1, x_2, ..., x_{t(n)}) ∉ Or(L) (i.e., x_i ∈ L̄_n for all i ∈ [t(n)]), A maps it to some y ∈ L̄′_{≤α(n)}.
We want to obtain an S_n ⊆ L̄′_{≤α(n)} with |S_n| polynomially bounded in n such that:
If x ∈ L̄_n: ∃ strings x_1, ..., x_{t(n)} ∈ Σ^n with x_i = x for some i such that A(x_1, ..., x_{t(n)}) ∈ S_n.
If x ∉ L̄_n: ∀ strings x_1, ..., x_{t(n)} ∈ Σ^n with x_i = x for some i, A(x_1, ..., x_{t(n)}) ∉ S_n.
  • 23. How will the nondeterministic algorithm work?
Having S_n as advice gives the desired NDTM which, given x with |x| = n, checks whether x ∉ L in the following way:
Guesses t(n) strings x_1, ..., x_{t(n)} ∈ Σ^n.
Checks whether one of them is x.
Computes A(x_1, ..., x_{t(n)}) and accepts iff the output is in S_n.
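The nondeterminism cannot be run directly, but the deterministic verification underneath it can be sketched: the guessed tuple plays the role of a certificate, and with the advice set S_n the check is polynomial time. A minimal sketch (names and the toy distillation map are ours, purely for illustration):

```python
def verify(x, certificate, A, S_n, t):
    """Deterministic check underlying the NDTM: `certificate` is the
    guessed tuple x_1, ..., x_{t(n)}. Accept iff the tuple has the right
    shape, contains x, and the distillation algorithm A maps it into the
    advice set S_n (a set of short NO-instances of L')."""
    n = len(x)
    if len(certificate) != t(n):
        return False
    if any(len(xi) != n for xi in certificate):
        return False
    if x not in certificate:
        return False
    return A(certificate) in S_n
```

Accepting certifies x ∉ L, because S_n contains only strings outside L′ and A outputs such a string only when every component of the tuple is a NO-instance of L.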
  • 28. How to get S_n
A : (L̄_n)^{t(n)} → L̄′_{≤α(n)}.
A string y ∈ L̄′_{≤α(n)} covers a string x ∈ L̄_n if ∃ x_1, ..., x_{t(n)} ∈ Σ^n with x_i = x for some i and A(x_1, ..., x_{t(n)}) = y.
We construct S_n by iteratively picking the string in L̄′_{≤α(n)} that covers the largest number of uncovered instances in L̄_n, until there are no strings left to cover.
Let us consider one step of the process. Let F be the set of uncovered instances in L̄_n at the start of the step.
By the pigeonhole principle there exists a string y ∈ L̄′_{≤α(n)} such that A maps at least |F|^{t(n)} / |L̄′_{≤α(n)}| of the tuples in F^{t(n)} to y.
  • 32. How to get S_n (Cont.)
At least (|F|^{t(n)} / |L̄′_{≤α(n)}|)^{1/t(n)} = |F| / |L̄′_{≤α(n)}|^{1/t(n)} strings in F are covered by y in each step.
We can restate this by saying that at least a φ(n) fraction of the remaining set is covered in each iteration, where
φ(n) = 1 / 2^{(α(n)+1)/t(n)} ≤ 1 / |L̄′_{≤α(n)}|^{1/t(n)},
since there are fewer than 2^{α(n)+1} strings of length at most α(n).
There were at most 2^n strings to cover at the start. So the number of strings left to cover after p steps is at most (1 − φ(n))^p · 2^n ≤ 2^n · e^{−φ(n)·p}, which is less than one for p = O(n/φ(n)).
So the process ends after O(n/φ(n)) = O(n · 2^{(α(n)+1)/t(n)}) steps, which is polynomial in n since α(n) = O(t(n) log t(n)).
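The counting step above can be checked numerically. A small sketch (ours, not from the slides) of the bound (1 − φ)^p ≤ e^{−φ·p}: once p exceeds n·ln(2)/φ, fewer than one of the original 2^n strings can remain uncovered.

```python
import math

def rounds_needed(n, phi):
    """Smallest p guaranteed by the bound (1 - phi)^p <= e^{-phi * p}
    to satisfy (1 - phi)^p * 2**n < 1: any p > n * ln(2) / phi works."""
    return math.ceil(n * math.log(2) / phi) + 1

# Direct check of the inequality for one concrete choice of n and phi.
n, phi = 20, 0.25
p = rounds_needed(n, phi)
assert (1 - phi) ** p * 2 ** n < 1
```

With φ(n) = 2^{−(α(n)+1)/t(n)} and α(n) = O(t(n) log t(n)), φ(n) is inverse-polynomial in t(n), so the round count n/φ(n) is polynomial in n, matching the slide's conclusion about |S_n|.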
  • 33. Take away
A few comments about the theorem:
coNP ⊆ NP/poly implies that the polynomial hierarchy collapses to its third level (PH = Σ_3^p).
The theorem gives us the collapse even if the distillation algorithm is allowed to be co-nondeterministic.
The main message is: if you have t(n) instances of size n, you cannot, in polynomial time, produce an instance of size O(t(n) log t(n)) that is equivalent to the OR of them.
  • 38. How to use the theorem to prove kernel lower bounds
We know that NP-complete problems cannot have a distillation algorithm unless coNP ⊆ NP/poly.
We want to define some analogue of distillation that produces an instance (x, k) of a parameterized problem L′, starting from many instances of an NP-complete language L.
We call such an algorithm a composition algorithm. We will define it formally in the next slide.
The goal is that a composition of an NP-complete language L into L′, combined with a kernel of a certain size for L′, gives us a distillation algorithm for L.
So, if we can show that a composition algorithm with the desired properties exists from L to L′, then L′ cannot have a kernel of that size.
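The shape of this argument is just function composition: chain the composition algorithm with the kernelization and the result maps t instances to one small instance, which is exactly the interface of a distillation algorithm. A schematic sketch (ours, with toy stand-ins for both algorithms):

```python
def distillation_from_composition(compose, kernelize):
    """Schematic of the lower-bound argument: a composition algorithm
    followed by a kernelization yields a map from many instances to one
    small instance -- the shape of a distillation algorithm."""
    def distill(instances):
        y, k = compose(instances)   # one instance, k <= t^{1/d} * poly(n)
        return kernelize(y, k)      # output size bounded in k only
    return distill

# Toy stand-ins just to exercise the plumbing (not real algorithms).
compose = lambda xs: ("#".join(xs), len(xs))
kernelize = lambda y, k: (y[:k], k)
distill = distillation_from_composition(compose, kernelize)
```

The mathematical content of the proof lies entirely in checking that the output size of this chained map is small enough, which the next slide's calculation does.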
  • 39. Weak d-Composition
(Weak d-composition). Let L̃ ⊆ Σ^* be a set and let Q ⊆ Σ^* × N be a parameterized problem. We say that L̃ weak-d-composes into Q if there is an algorithm C which, given t strings x_1, x_2, ..., x_t, takes time polynomial in Σ_{i=1}^t |x_i| and outputs an instance (y, k) ∈ Σ^* × N such that the following hold:
k ≤ t^{1/d} · (max_{i=1}^t |x_i|)^{O(1)};
the output is a YES-instance of Q if and only if at least one instance x_i is a YES-instance of L̃.
Theorem. Let L̃ ⊆ Σ^* be a set which is NP-hard. If L̃ weak-d-composes into the parameterized problem Q, then Q has no kernel of size O(k^{d−ε}) for any ε > 0, unless NP ⊆ coNP/poly.
  • 40. Proof of the theorem
Theorem. Let L̃ ⊆ Σ^* be a set which is NP-hard. If L̃ weak-d-composes into the parameterized problem Q, then Q has no kernel of size O(k^{d−ε}) for any ε > 0, unless NP ⊆ coNP/poly.
Proof. Let |x_i| = n for all i ∈ [t] in the input of the composition, so that k ≤ t^{1/d} · n^c. After applying the kernelization to the composed instance, the size of the instance we get is
O((t^{1/d} · n^c)^{d−ε}) = O(t^{1−ε/d} · n^{c(d−ε)}) = O(t) (for t a sufficiently large polynomial in n) = O(t log t),
so composing and then kernelizing is a t-bounded distillation algorithm for L̃.
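The exponent bookkeeping in this proof is easy to get wrong, so here is a numeric sanity check (ours, not from the slides) of the step O((t^{1/d} n^c)^{d−ε}) = O(t^{1−ε/d} · n^{c(d−ε)}), and of the claim that choosing t a large enough polynomial in n makes the result o(t):

```python
def kernel_size_exponents(d, eps, c):
    """Exponents of (t, n) in the size of the kernelized composed
    instance: (t^{1/d} * n^c)^{d - eps} = t^{1 - eps/d} * n^{c(d - eps)}."""
    return (1 - eps / d, c * (d - eps))

# With t = n^e for e > c*d*(d - eps)/eps, the total exponent of n in the
# output is strictly below e, i.e. the output size is o(t) = O(t log t).
d, eps, c = 2, 0.5, 1
t_exp, n_exp = kernel_size_exponents(d, eps, c)
e = c * d * (d - eps) / eps + 1   # threshold 6, so take e = 7
assert t_exp * e + n_exp < e      # 0.75*7 + 1.5 = 6.75 < 7
```

The inequality t_exp·e + n_exp < e is exactly the condition "composed-then-kernelized size grows slower than t", which is what lets the proof invoke the Fortnow-Santhanam theorem.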
  • 46. Some comments about composition
In a weak d-composition, we asked for the parameter k to be at most t^{1/d} · n^{O(1)}. That ruled out kernels of size k^{d−ε}.
What if we can output an instance with k = t^{o(1)} · n^{O(1)}? Then we can rule out kernels of size k^{d−ε} for ALL d!
We call such an algorithm just a “composition”.
Since the theorem of Fortnow-Santhanam allows co-nondeterminism, coNP compositions can also be used for proving lower bounds.
Sometimes getting a composition from arbitrary instances of a language can be difficult.
Some structure on the input instances helps to get a composition (next slide).
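A classical example of a composition with k = t^0 · n^0 · k, i.e. parameter independent of t, is the disjoint-union composition for k-Path (mentioned earlier as having only an exponential kernel): the disjoint union of t graphs contains a path on k vertices iff some component does. A minimal sketch (our encoding; graphs as edge lists):

```python
def compose_k_path(instances):
    """OR-composition for k-Path by disjoint union (a standard example;
    the encoding here is ours). Each instance is (edges, k) with a common
    k; edges are pairs of hashable vertex names. Vertices are relabeled
    as (i, v) to make the union disjoint. The parameter of the output is
    still k, independent of the number t of instances."""
    k = instances[0][1]
    assert all(ki == k for _, ki in instances), "same k in each instance"
    union_edges = [((i, u), (i, v))
                   for i, (edges, _) in enumerate(instances)
                   for (u, v) in edges]
    return union_edges, k

# Two toy instances with k = 3: a path a-b-c, and a single edge.
g1 = ([("a", "b"), ("b", "c")], 3)
g2 = ([("x", "y")], 3)
```

Since k does not grow with t at all, this composition rules out kernels of size k^{d−ε} for every d, i.e. any polynomial kernel for k-Path (assuming coNP ⊄ NP/poly). Note the precondition that all instances share the same k: that is exactly the kind of structure the next slide's equivalence relations provide.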
  • 47. Polynomial Equivalence Relation
(Polynomial Equivalence Relation). An equivalence relation R on Σ^* is called a polynomial equivalence relation if the following two conditions hold:
1. There is an algorithm that, given two strings x, y ∈ Σ^*, decides whether x and y belong to the same equivalence class in (|x| + |y|)^{O(1)} time.
2. For any finite set S ⊆ Σ^*, the equivalence relation R partitions the elements of S into at most (max_{x∈S} |x|)^{O(1)} classes.
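A typical concrete choice, sketched below (ours, for illustration), groups parameterized instances by the signature (|x|, k): the signature is computable in polynomial time, and the number of classes among instances of size at most N is at most N^2, so both conditions of the definition hold.

```python
from collections import defaultdict

def partition_instances(instances,
                        signature=lambda inst: (len(inst[0]), inst[1])):
    """Group parameterized instances (x, k) into equivalence classes by a
    polynomial-time computable signature -- here (|x|, k), giving at most
    (max instance size)^2 classes, as the definition requires."""
    classes = defaultdict(list)
    for inst in instances:
        classes[signature(inst)].append(inst)
    return classes

insts = [("0011", 2), ("1100", 2), ("101", 2), ("0011", 3)]
classes = partition_instances(insts)
```

A composition algorithm then only ever has to handle one class at a time, e.g. instances sharing the same number of vertices and the same solution size.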
  • 48. What to do with a Polynomial Equivalence Relation
The equivalence relation can partition the input on the basis of different parameters. These equivalence classes can be used to give the input of the composition a nice structure. Helpful choices are often partitions into instances that have the same number of vertices, or the same requested solution size, etc.
Then all we need to do is come up with a composition algorithm for instances belonging to the same equivalence class. Since there are only polynomially many equivalence classes, in the end we can just output an instance of Or(L′).
The next slide is a nice illustration of this method by Michal Pilipczuk.
  • 62. Take away
We use compositions to rule out polynomial kernels.
A composition from an NP-hard problem L to a parameterized problem L′ gives kernelization hardness for L′:
k = t^{o(1)} · n^c ⇒ no polynomial kernel;
k = t^{1/d} · n^c ⇒ no kernel of size k^{d−ε}.
We can make use of equivalence classes to give structure to the input of the composition.
Examples on the board!
  • 63. Thank You!