


LIPIcs, Volume 137

34th Computational Complexity Conference (CCC 2019)




Event

CCC 2019, July 18-20, 2019, New Brunswick, NJ, USA

Editor

Amir Shpilka
  • Tel Aviv University, Tel Aviv, 69978, Israel

Publication Details

  • published at: 2019-07-16
  • Publisher: Schloss Dagstuhl – Leibniz-Zentrum für Informatik
  • ISBN: 978-3-95977-116-0
  • DBLP: db/conf/coco/coco2019

Documents
Document
Complete Volume
LIPIcs, Volume 137, CCC'19, Complete Volume

Authors: Amir Shpilka


Abstract
LIPIcs, Volume 137, CCC'19, Complete Volume

Cite as

34th Computational Complexity Conference (CCC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 137, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@Proceedings{shpilka:LIPIcs.CCC.2019,
  title =	{{LIPIcs, Volume 137, CCC'19, Complete Volume}},
  booktitle =	{34th Computational Complexity Conference (CCC 2019)},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-116-0},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{137},
  editor =	{Shpilka, Amir},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2019},
  URN =		{urn:nbn:de:0030-drops-108988},
  doi =		{10.4230/LIPIcs.CCC.2019},
  annote =	{Keywords: Theory of computation}
}
Document
Front Matter
Front Matter, Table of Contents, Preface, Conference Organization

Authors: Amir Shpilka


Abstract
Front Matter, Table of Contents, Preface, Conference Organization

Cite as

34th Computational Complexity Conference (CCC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 137, pp. 0:i-0:xiv, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{shpilka:LIPIcs.CCC.2019.0,
  author =	{Shpilka, Amir},
  title =	{{Front Matter, Table of Contents, Preface, Conference Organization}},
  booktitle =	{34th Computational Complexity Conference (CCC 2019)},
  pages =	{0:i--0:xiv},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-116-0},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{137},
  editor =	{Shpilka, Amir},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2019.0},
  URN =		{urn:nbn:de:0030-drops-108221},
  doi =		{10.4230/LIPIcs.CCC.2019.0},
  annote =	{Keywords: Front Matter, Table of Contents, Preface, Conference Organization}
}
Document
Criticality of Regular Formulas

Authors: Benjamin Rossman


Abstract
We define the criticality of a boolean function f : {0,1}^n -> {0,1} as the minimum real number lambda >= 1 such that Pr[DT_{depth}(f|R_p) >= t] <= (p lambda)^t for all p in [0,1] and t in N, where R_p is the p-random restriction and DT_{depth} is decision-tree depth. Criticality is a useful parameter: it implies an O(2^{(1-1/(2 lambda))n}) bound on the decision-tree size of f, as well as a 2^{-Omega(k/lambda)} bound on the Fourier weight of f on coefficients of size >= k. In an unpublished manuscript [Rossman, 2018], the author showed that a combination of Håstad’s switching and multi-switching lemmas [Håstad, 1986; Håstad, 2014] implies that AC^0 circuits of depth d+1 and size s have criticality at most O(log s)^d. In the present paper, we establish a stronger O((1/d) log s)^d bound for regular formulas: the class of AC^0 formulas in which all gates at any given depth have the same fan-in. This result is based on (i) a novel switching lemma for bounded-size (unbounded-width) DNF formulas, and (ii) an extension of (i) which analyzes a canonical decision tree associated with an entire depth-d formula. As corollaries of our criticality bound, we obtain an improved #SAT algorithm and a tight Linial-Mansour-Nisan theorem for regular formulas, strengthening previous results for AC^0 circuits due to Impagliazzo, Matthews, and Paturi [Impagliazzo et al., 2012] and Tal [Tal, 2017]. As a further corollary, we increase from o(log n/(log log n)) to o(log n) the number of quantifier alternations for which the QBF-SAT (quantified boolean formula satisfiability) algorithm of Santhanam and Williams [Santhanam and Williams, 2014] beats exhaustive search.
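
For reference, the defining condition above can be written in display form; this is a direct transcription of the definition (writing lambda(f) for the criticality of f):

  \lambda(f) \;=\; \min\Bigl\{\, \lambda \ge 1 \;:\; \Pr\bigl[\mathrm{DT}_{\mathrm{depth}}(f|_{R_p}) \ge t\bigr] \le (p\lambda)^t \ \text{for all } p \in [0,1],\ t \in \mathbb{N} \,\Bigr\}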

Cite as

Benjamin Rossman. Criticality of Regular Formulas. In 34th Computational Complexity Conference (CCC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 137, pp. 1:1-1:28, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{rossman:LIPIcs.CCC.2019.1,
  author =	{Rossman, Benjamin},
  title =	{{Criticality of Regular Formulas}},
  booktitle =	{34th Computational Complexity Conference (CCC 2019)},
  pages =	{1:1--1:28},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-116-0},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{137},
  editor =	{Shpilka, Amir},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2019.1},
  URN =		{urn:nbn:de:0030-drops-108230},
  doi =		{10.4230/LIPIcs.CCC.2019.1},
  annote =	{Keywords: AC^0 circuits, formulas, criticality}
}
Document
Almost Optimal Distribution-Free Junta Testing

Authors: Nader H. Bshouty


Abstract
We consider the problem of testing whether an unknown n-variable Boolean function is a k-junta in the distribution-free property testing model, where the distance between functions is measured with respect to an arbitrary and unknown probability distribution over {0,1}^n. Chen, Liu, Servedio, Sheng and Xie [Zhengyang Liu et al., 2018] showed that distribution-free k-junta testing can be performed, with one-sided error, by an adaptive algorithm that makes O~(k^2)/epsilon queries. In this paper, we give a simple two-sided error adaptive algorithm that makes O~(k/epsilon) queries.

Cite as

Nader H. Bshouty. Almost Optimal Distribution-Free Junta Testing. In 34th Computational Complexity Conference (CCC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 137, pp. 2:1-2:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{bshouty:LIPIcs.CCC.2019.2,
  author =	{Bshouty, Nader H.},
  title =	{{Almost Optimal Distribution-Free Junta Testing}},
  booktitle =	{34th Computational Complexity Conference (CCC 2019)},
  pages =	{2:1--2:13},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-116-0},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{137},
  editor =	{Shpilka, Amir},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2019.2},
  URN =		{urn:nbn:de:0030-drops-108249},
  doi =		{10.4230/LIPIcs.CCC.2019.2},
  annote =	{Keywords: Distribution-free property testing, k-Junta}
}
Document
UG-Hardness to NP-Hardness by Losing Half

Authors: Amey Bhangale and Subhash Khot


Abstract
The 2-to-2 Games Theorem of [Subhash Khot et al., 2017; Dinur et al., 2018] implies that it is NP-hard to distinguish between Unique Games instances with an assignment satisfying at least a (1/2-epsilon) fraction of the constraints vs. no assignment satisfying more than an epsilon fraction of the constraints, for every constant epsilon>0. We show that the reduction can be transformed in a non-trivial way to give a stronger guarantee in the completeness case: for at least a (1/2-epsilon) fraction of the vertices on one side, all the constraints associated with them in the Unique Games instance can be satisfied. We use this guarantee to convert known UG-hardness results to NP-hardness. We show: 1) Tight inapproximability for independent sets in degree-d graphs within a factor of Omega(d/log^2 d), where d is a constant. 2) NP-hardness of approximating the Maximum Acyclic Subgraph problem within a factor of 2/3+epsilon, improving the previous ratio of 14/15+epsilon by Austrin et al. [Austrin et al., 2015]. 3) For any predicate P^{-1}(1) subseteq [q]^k supporting a balanced pairwise independent distribution, given a P-CSP instance with value at least 1/2-epsilon, it is NP-hard to satisfy more than a (|P^{-1}(1)|/q^k)+epsilon fraction of the constraints.
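
In display form, the two cases of the strengthened reduction read as follows (a transcription of the guarantee above; V is our shorthand for the side of the bipartite Unique Games instance referred to as "one side"):

  \text{Completeness: } \exists \text{ assignment } \sigma \text{ and } S \subseteq V \text{ with } |S| \ge (\tfrac{1}{2}-\epsilon)|V| \text{ such that } \sigma \text{ satisfies every constraint incident to } S.
  \text{Soundness: no assignment satisfies more than an } \epsilon \text{ fraction of the constraints.}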

Cite as

Amey Bhangale and Subhash Khot. UG-Hardness to NP-Hardness by Losing Half. In 34th Computational Complexity Conference (CCC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 137, pp. 3:1-3:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{bhangale_et_al:LIPIcs.CCC.2019.3,
  author =	{Bhangale, Amey and Khot, Subhash},
  title =	{{UG-Hardness to NP-Hardness by Losing Half}},
  booktitle =	{34th Computational Complexity Conference (CCC 2019)},
  pages =	{3:1--3:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-116-0},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{137},
  editor =	{Shpilka, Amir},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2019.3},
  URN =		{urn:nbn:de:0030-drops-108258},
  doi =		{10.4230/LIPIcs.CCC.2019.3},
  annote =	{Keywords: NP-hardness, Inapproximability, Unique Games Conjecture}
}
Document
Simple and Efficient Pseudorandom Generators from Gaussian Processes

Authors: Eshan Chattopadhyay, Anindya De, and Rocco A. Servedio


Abstract
We show that a very simple pseudorandom generator fools intersections of k linear threshold functions (LTFs) and arbitrary functions of k LTFs over n-dimensional Gaussian space. The two analyses of our PRG (for intersections versus arbitrary functions of LTFs) are quite different from each other and from previous analyses of PRGs for functions of halfspaces. Our analysis for arbitrary functions of LTFs establishes bounds on the Wasserstein distance between Gaussian random vectors with similar covariance matrices, and combines these bounds with a conversion from Wasserstein distance to "union-of-orthants" distance from [Xi Chen et al., 2014]. Our analysis for intersections of LTFs uses extensions of the classical Sudakov-Fernique-type inequalities, which give bounds on the difference between the expectations of the maxima of two Gaussian random vectors with similar covariance matrices. For all values of k, our generator has seed length O(log n) + poly(k) for arbitrary functions of k LTFs and O(log n) + poly(log k) for intersections of k LTFs. The best previous result, due to [Gopalan et al., 2010], only gave such PRGs for arbitrary functions of k LTFs when k = O(log log n) and for intersections of k LTFs when k = O((log n)/(log log n)). Thus our PRG achieves an O(log n) seed length for values of k that are exponentially larger than previous work could achieve. By combining our PRG over Gaussian space with an invariance principle for arbitrary functions of LTFs and with a regularity lemma, we obtain a deterministic algorithm that approximately counts satisfying assignments of arbitrary functions of k general LTFs over {0,1}^n in time poly(n) * 2^{poly(k,1/epsilon)} for all values of k. This algorithm has a poly(n) runtime for k = (log n)^c for some absolute constant c > 0, while the previous best poly(n)-time algorithms could only handle k = O(log log n). For intersections of LTFs, by combining these tools with a recent PRG due to [R. O'Donnell et al., 2018], we obtain a deterministic algorithm that can approximately count satisfying assignments of intersections of k general LTFs over {0,1}^n in time poly(n) * 2^{poly(log k, 1/epsilon)}. This algorithm has a poly(n) runtime for k = 2^{(log n)^c} for some absolute constant c > 0, while the previous best poly(n)-time algorithms for intersections of k LTFs, due to [Gopalan et al., 2010], could only handle k = O((log n)/(log log n)).
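
The seed lengths achieved can be summarized side by side (transcribed from the abstract):

  \underbrace{O(\log n) + \mathrm{poly}(k)}_{\text{arbitrary functions of } k \text{ LTFs}} \qquad\qquad \underbrace{O(\log n) + \mathrm{poly}(\log k)}_{\text{intersections of } k \text{ LTFs}}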

Cite as

Eshan Chattopadhyay, Anindya De, and Rocco A. Servedio. Simple and Efficient Pseudorandom Generators from Gaussian Processes. In 34th Computational Complexity Conference (CCC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 137, pp. 4:1-4:33, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{chattopadhyay_et_al:LIPIcs.CCC.2019.4,
  author =	{Chattopadhyay, Eshan and De, Anindya and Servedio, Rocco A.},
  title =	{{Simple and Efficient Pseudorandom Generators from Gaussian Processes}},
  booktitle =	{34th Computational Complexity Conference (CCC 2019)},
  pages =	{4:1--4:33},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-116-0},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{137},
  editor =	{Shpilka, Amir},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2019.4},
  URN =		{urn:nbn:de:0030-drops-108262},
  doi =		{10.4230/LIPIcs.CCC.2019.4},
  annote =	{Keywords: Polynomial threshold functions, Gaussian processes, Johnson-Lindenstrauss, pseudorandom generators}
}
Document
From DNF Compression to Sunflower Theorems via Regularity

Authors: Shachar Lovett, Noam Solomon, and Jiapeng Zhang


Abstract
The sunflower conjecture is one of the most well-known open problems in combinatorics. It has several applications in theoretical computer science, one of which is DNF compression, due to Gopalan, Meka and Reingold (Computational Complexity, 2013). In this paper, we show that improved bounds for DNF compression imply improved bounds for the sunflower conjecture, which is the reverse direction of the known implication. Our approach is based on the regularity of set systems and a structure-vs-pseudorandomness view of the sunflower conjecture.

Cite as

Shachar Lovett, Noam Solomon, and Jiapeng Zhang. From DNF Compression to Sunflower Theorems via Regularity. In 34th Computational Complexity Conference (CCC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 137, pp. 5:1-5:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{lovett_et_al:LIPIcs.CCC.2019.5,
  author =	{Lovett, Shachar and Solomon, Noam and Zhang, Jiapeng},
  title =	{{From DNF Compression to Sunflower Theorems via Regularity}},
  booktitle =	{34th Computational Complexity Conference (CCC 2019)},
  pages =	{5:1--5:14},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-116-0},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{137},
  editor =	{Shpilka, Amir},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2019.5},
  URN =		{urn:nbn:de:0030-drops-108277},
  doi =		{10.4230/LIPIcs.CCC.2019.5},
  annote =	{Keywords: DNF sparsification, sunflower conjecture, regular set systems}
}
Document
Resolution and the Binary Encoding of Combinatorial Principles

Authors: Stefan Dantchev, Nicola Galesi, and Barnaby Martin


Abstract
Res(s) is an extension of Resolution working on s-DNFs. We prove tight n^{Omega(k)} lower bounds for the size of refutations of the binary version of the k-Clique Principle in Res(o(log log n)). Our result improves that of Lauria, Pudlák et al. [Massimo Lauria et al., 2017], who proved the lower bound for Res(1), i.e., Resolution. The exact complexity of the (unary) k-Clique Principle in Resolution is unknown. To prove the lower bound we do not use any form of the Switching Lemma [Nathan Segerlind et al., 2004]; instead we apply a recursive argument specific to binary encodings. Since for the k-Clique and other principles lower bounds in Resolution for the unary version follow from lower bounds in Res(log n) for their binary version, we start a systematic study of the complexity of proofs in Resolution-based systems for families of contradictions given in the binary encoding. We go on to consider the binary version of the weak Pigeonhole Principle Bin-PHP^m_n for m>n. Using the same recursive approach we prove the new result that for any delta>0, Bin-PHP^m_n requires proofs of size 2^{n^{1-delta}} in Res(s) for s=o(log^{1/2}n). Our lower bound is almost optimal since for m >= 2^{sqrt{n log n}} there are quasipolynomial size proofs of Bin-PHP^m_n in Res(log n). Finally, we propose a general theory in which to compare the complexity of refuting the binary and unary versions of large classes of combinatorial principles, namely those expressible as first-order formulae in Pi_2-form and with no finite model.

Cite as

Stefan Dantchev, Nicola Galesi, and Barnaby Martin. Resolution and the Binary Encoding of Combinatorial Principles. In 34th Computational Complexity Conference (CCC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 137, pp. 6:1-6:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{dantchev_et_al:LIPIcs.CCC.2019.6,
  author =	{Dantchev, Stefan and Galesi, Nicola and Martin, Barnaby},
  title =	{{Resolution and the Binary Encoding of Combinatorial Principles}},
  booktitle =	{34th Computational Complexity Conference (CCC 2019)},
  pages =	{6:1--6:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-116-0},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{137},
  editor =	{Shpilka, Amir},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2019.6},
  URN =		{urn:nbn:de:0030-drops-108287},
  doi =		{10.4230/LIPIcs.CCC.2019.6},
  annote =	{Keywords: Proof complexity, k-DNF resolution, binary encodings, Clique and Pigeonhole principle}
}
Document
Fourier Bounds and Pseudorandom Generators for Product Tests

Authors: Chin Ho Lee


Abstract
We study the Fourier spectrum of functions f : {0,1}^{mk} -> {-1,0,1} which can be written as a product of k Boolean functions f_i on disjoint m-bit inputs. We prove that for every positive integer d, sum_{S subseteq [mk]: |S|=d} |hat{f_S}| = O(min{m, sqrt{m log(2k)}})^d. Our upper bounds are tight up to a constant factor in the O(*). Our proof uses Schur-convexity, and builds on a new "level-d inequality" that upper bounds sum_{|S|=d} hat{f_S}^2 for any [0,1]-valued function f in terms of its expectation, which may be of independent interest. As a result, we construct pseudorandom generators for such functions with seed length O~(m + log(k/epsilon)), which is optimal up to polynomial factors in log m, log log k and log log(1/epsilon). Our generator in particular works for the well-studied class of combinatorial rectangles, where in addition we allow the bits to be read in any order. Even for this special case, previous generators have an extra O~(log(1/epsilon)) factor in their seed lengths. We also extend our results to functions f_i whose range is [-1,1].
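
The main Fourier bound, in display form (a transcription of the statement above, writing \widehat{f}(S) for the coefficient denoted hat{f_S}):

  \sum_{S \subseteq [mk] \,:\, |S| = d} \bigl|\widehat{f}(S)\bigr| \;=\; O\Bigl(\min\bigl\{m,\ \sqrt{m \log(2k)}\bigr\}\Bigr)^{d}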

Cite as

Chin Ho Lee. Fourier Bounds and Pseudorandom Generators for Product Tests. In 34th Computational Complexity Conference (CCC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 137, pp. 7:1-7:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{lee:LIPIcs.CCC.2019.7,
  author =	{Lee, Chin Ho},
  title =	{{Fourier Bounds and Pseudorandom Generators for Product Tests}},
  booktitle =	{34th Computational Complexity Conference (CCC 2019)},
  pages =	{7:1--7:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-116-0},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{137},
  editor =	{Shpilka, Amir},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2019.7},
  URN =		{urn:nbn:de:0030-drops-108296},
  doi =		{10.4230/LIPIcs.CCC.2019.7},
  annote =	{Keywords: bounded independence plus noise, Fourier spectrum, product test, pseudorandom generators}
}
Document
Sherali-Adams Strikes Back

Authors: Ryan O'Donnell and Tselil Schramm


Abstract
Let G be any n-vertex graph whose random walk matrix has its nontrivial eigenvalues bounded in magnitude by 1/sqrt{Delta} (for example, a random graph G of average degree Theta(Delta) typically has this property). We show that the exp(c (log n)/(log Delta))-round Sherali-Adams linear programming hierarchy certifies that the maximum cut in such a G is at most 50.1% (in fact, at most 1/2 + 2^{-Omega(c)}). For example, in random graphs with n^{1.01} edges, O(1) rounds suffice; in random graphs with n * polylog(n) edges, n^{O(1/log log n)} = n^{o(1)} rounds suffice. Our results stand in contrast to the conventional beliefs that linear programming hierarchies perform poorly for max-cut and other CSPs, and that eigenvalue/SDP methods are needed for effective refutation. Indeed, our results imply that constant-round Sherali-Adams can strongly refute random Boolean k-CSP instances with n^{ceil(k/2) + delta} constraints; previously this had only been done with spectral algorithms or the SOS SDP hierarchy.
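
The certification statement, in display form (transcribed from the abstract; c is the constant appearing there):

  \exp\Bigl(\frac{c \log n}{\log \Delta}\Bigr) \text{ rounds of Sherali-Adams certify } \ \mathrm{maxcut}(G) \;\le\; \frac{1}{2} + 2^{-\Omega(c)}.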

Cite as

Ryan O'Donnell and Tselil Schramm. Sherali-Adams Strikes Back. In 34th Computational Complexity Conference (CCC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 137, pp. 8:1-8:30, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{odonnell_et_al:LIPIcs.CCC.2019.8,
  author =	{O'Donnell, Ryan and Schramm, Tselil},
  title =	{{Sherali-Adams Strikes Back}},
  booktitle =	{34th Computational Complexity Conference (CCC 2019)},
  pages =	{8:1--8:30},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-116-0},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{137},
  editor =	{Shpilka, Amir},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2019.8},
  URN =		{urn:nbn:de:0030-drops-108309},
  doi =		{10.4230/LIPIcs.CCC.2019.8},
  annote =	{Keywords: Linear programming, Sherali-Adams, max-cut, graph eigenvalues, Sum-of-Squares}
}
Document
Typically-Correct Derandomization for Small Time and Space

Authors: William M. Hoza


Abstract
Suppose a language L can be decided by a bounded-error randomized algorithm that runs in space S and time n * poly(S). We give a randomized algorithm for L that still runs in space O(S) and time n * poly(S) but uses only O(S) random bits; our algorithm has a low failure probability on all but a negligible fraction of inputs of each length. As an immediate corollary, there is a deterministic algorithm for L that runs in space O(S) and succeeds on all but a negligible fraction of inputs of each length. We also give several other complexity-theoretic applications of our technique.

Cite as

William M. Hoza. Typically-Correct Derandomization for Small Time and Space. In 34th Computational Complexity Conference (CCC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 137, pp. 9:1-9:39, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{hoza:LIPIcs.CCC.2019.9,
  author =	{Hoza, William M.},
  title =	{{Typically-Correct Derandomization for Small Time and Space}},
  booktitle =	{34th Computational Complexity Conference (CCC 2019)},
  pages =	{9:1--9:39},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-116-0},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{137},
  editor =	{Shpilka, Amir},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2019.9},
  URN =		{urn:nbn:de:0030-drops-108317},
  doi =		{10.4230/LIPIcs.CCC.2019.9},
  annote =	{Keywords: Derandomization, pseudorandomness, space complexity}
}
Document
Optimal Short-Circuit Resilient Formulas

Authors: Mark Braverman, Klim Efremenko, Ran Gelles, and Michael A. Yitayew


Abstract
We consider fault-tolerant boolean formulas in which the output of a faulty gate is short-circuited to one of the gate’s inputs. A recent result by Kalai et al. [FOCS 2012] converts any boolean formula into a resilient formula of polynomial size that works correctly if less than a 1/6 fraction of the gates (on every input-to-output path) are faulty. We improve the result of Kalai et al. and show how to efficiently fortify any boolean formula against a 1/5 fraction of short-circuit gates per path, with only a polynomial blowup in size. We additionally show that it is impossible to obtain formulas with higher resilience and sub-exponential growth in size. Towards our results, we consider interactive coding schemes in the presence of noiseless feedback; these produce resilient boolean formulas via a Karchmer-Wigderson relation. We develop a coding scheme that resists up to a 1/5 fraction of corrupted transmissions in each direction of the interactive channel. We further show that this level of noise is maximal for coding schemes with sub-exponential blowup in communication. Our coding scheme takes a surprising inspiration from Blockchain technology.

Cite as

Mark Braverman, Klim Efremenko, Ran Gelles, and Michael A. Yitayew. Optimal Short-Circuit Resilient Formulas. In 34th Computational Complexity Conference (CCC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 137, pp. 10:1-10:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{braverman_et_al:LIPIcs.CCC.2019.10,
  author =	{Braverman, Mark and Efremenko, Klim and Gelles, Ran and Yitayew, Michael A.},
  title =	{{Optimal Short-Circuit Resilient Formulas}},
  booktitle =	{34th Computational Complexity Conference (CCC 2019)},
  pages =	{10:1--10:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-116-0},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{137},
  editor =	{Shpilka, Amir},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2019.10},
  URN =		{urn:nbn:de:0030-drops-108326},
  doi =		{10.4230/LIPIcs.CCC.2019.10},
  annote =	{Keywords: Circuit Complexity, Noise-Resilient Circuits, Interactive Coding, Coding Theory, Karchmer-Wigderson Games}
}
Document
A Time-Distance Trade-Off for GDD with Preprocessing - Instantiating the DLW Heuristic

Authors: Noah Stephens-Davidowitz


Abstract
For 0 <= alpha <= 1/2, we show an algorithm that does the following. Given appropriate preprocessing P(L) consisting of N_alpha := 2^{O(n^{1-2 alpha} + log n)} vectors in some lattice L subset R^n and a target vector t in R^n, the algorithm finds y in L such that ||y-t|| <= n^{1/2 + alpha} eta(L) in time poly(n) * N_alpha, where eta(L) is the smoothing parameter of the lattice. The algorithm itself is very simple and was originally studied by Doulgerakis, Laarhoven, and de Weger (to appear in PQCrypto, 2019), who proved its correctness under certain reasonable heuristic assumptions on the preprocessing P(L) and target t. Our primary contribution is a choice of preprocessing that allows us to prove correctness without any heuristic assumptions. Our main motivation for studying this is the recent breakthrough algorithm for IdealSVP due to Hanrot, Pellet-Mary, and Stehlé (to appear in Eurocrypt, 2019), which uses the DLW algorithm as a key subprocedure. In particular, our result implies that the HPS IdealSVP algorithm can be made to work with fewer heuristic assumptions. Our only technical tool is the discrete Gaussian distribution over L, and in particular, a lemma showing that the one-dimensional projections of this distribution behave very similarly to the continuous Gaussian. This lemma might be of independent interest.
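
The time-distance trade-off, in display form (a transcription of the guarantee above):

  \text{given } P(L) \text{ with } N_\alpha = 2^{O(n^{1-2\alpha} + \log n)} \text{ vectors and } t \in \mathbb{R}^n, \text{ output } y \in L \text{ with } \|y - t\| \le n^{1/2+\alpha}\,\eta(L), \text{ in time } \mathrm{poly}(n)\cdot N_\alpha.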

Cite as

Noah Stephens-Davidowitz. A Time-Distance Trade-Off for GDD with Preprocessing - Instantiating the DLW Heuristic. In 34th Computational Complexity Conference (CCC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 137, pp. 11:1-11:8, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{stephensdavidowitz:LIPIcs.CCC.2019.11,
  author =	{Stephens-Davidowitz, Noah},
  title =	{{A Time-Distance Trade-Off for GDD with Preprocessing - Instantiating the DLW Heuristic}},
  booktitle =	{34th Computational Complexity Conference (CCC 2019)},
  pages =	{11:1--11:8},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-116-0},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{137},
  editor =	{Shpilka, Amir},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2019.11},
  URN =		{urn:nbn:de:0030-drops-108331},
  doi =		{10.4230/LIPIcs.CCC.2019.11},
  annote =	{Keywords: Lattices, guaranteed distance decoding, GDD, GDDP}
}
Document
Limits on the Universal Method for Matrix Multiplication

Authors: Josh Alman


Abstract
In this work, we prove limitations on the known methods for designing matrix multiplication algorithms. Alman and Vassilevska Williams [Alman and Williams, 2018] recently defined the Universal Method, which substantially generalizes all the known approaches including Strassen’s Laser Method [V. Strassen, 1987] and Cohn and Umans' Group Theoretic Method [Cohn and Umans, 2003]. We prove concrete lower bounds on the algorithms one can design by applying the Universal Method to many different tensors. Our proofs use new tools for upper bounding the asymptotic slice rank of a wide range of tensors. Our main result is that the Universal Method applied to any Coppersmith-Winograd tensor CW_q cannot yield a bound on omega, the exponent of matrix multiplication, better than 2.16805. By comparison, it was previously only known that the weaker "Galactic Method" applied to CW_q could not achieve an exponent of 2. We also study the Laser Method (which is, in principle, a highly special case of the Universal Method) and prove that it is "complete" for matrix multiplication algorithms: when it applies to a tensor T, it achieves omega = 2 if and only if it is possible for the Universal Method applied to T to achieve omega = 2. Hence, the Laser Method, which was originally used as an algorithmic tool, can also be seen as a lower bounding tool. For example, in their landmark paper, Coppersmith and Winograd [Coppersmith and Winograd, 1990] achieved a bound of omega <= 2.376 by applying the Laser Method to CW_q. By our result, the fact that they did not achieve omega = 2 implies a lower bound on the Universal Method applied to CW_q. Indeed, if it were possible for the Universal Method applied to CW_q to achieve omega = 2, then Coppersmith and Winograd’s application of the Laser Method would have achieved omega = 2.

Cite as

Josh Alman. Limits on the Universal Method for Matrix Multiplication. In 34th Computational Complexity Conference (CCC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 137, pp. 12:1-12:24, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{alman:LIPIcs.CCC.2019.12,
  author =	{Alman, Josh},
  title =	{{Limits on the Universal Method for Matrix Multiplication}},
  booktitle =	{34th Computational Complexity Conference (CCC 2019)},
  pages =	{12:1--12:24},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-116-0},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{137},
  editor =	{Shpilka, Amir},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2019.12},
  URN =		{urn:nbn:de:0030-drops-108347},
  doi =		{10.4230/LIPIcs.CCC.2019.12},
  annote =	{Keywords: Matrix Multiplication, Laser Method, Slice Rank}
}
Document
Optimality of Linear Sketching Under Modular Updates

Authors: Kaave Hosseini, Shachar Lovett, and Grigory Yaroslavtsev


Abstract
We study the relation between streaming algorithms and linear sketching algorithms, in the context of binary updates. We show that for inputs in n dimensions, the existence of efficient streaming algorithms which can process Omega(n^2) updates implies efficient linear sketching algorithms with comparable cost. This improves upon the previous work of Li, Nguyen and Woodruff [Yi Li et al., 2014] and Ai, Hu, Li and Woodruff [Yuqing Ai et al., 2016] which required a triple-exponential number of updates to achieve a similar result for updates over integers. We extend our results to updates modulo p for integers p >= 2, and to approximation instead of exact computation.

Cite as

Kaave Hosseini, Shachar Lovett, and Grigory Yaroslavtsev. Optimality of Linear Sketching Under Modular Updates. In 34th Computational Complexity Conference (CCC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 137, pp. 13:1-13:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{hosseini_et_al:LIPIcs.CCC.2019.13,
  author =	{Hosseini, Kaave and Lovett, Shachar and Yaroslavtsev, Grigory},
  title =	{{Optimality of Linear Sketching Under Modular Updates}},
  booktitle =	{34th Computational Complexity Conference (CCC 2019)},
  pages =	{13:1--13:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-116-0},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{137},
  editor =	{Shpilka, Amir},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2019.13},
  URN =		{urn:nbn:de:0030-drops-108355},
  doi =		{10.4230/LIPIcs.CCC.2019.13},
  annote =	{Keywords: communication complexity, linear sketching, streaming algorithm}
}
Document
Equality Alone Does not Simulate Randomness

Authors: Arkadev Chattopadhyay, Shachar Lovett, and Marc Vinyals


Abstract
The canonical problem that gives an exponential separation between deterministic and randomized communication complexity in the classical two-party communication model is "Equality". In this work we show that even allowing access to an "Equality" oracle, deterministic protocols remain exponentially weaker than randomized ones. More precisely, we exhibit a total function on n bits with randomized one-sided error communication complexity O(log n), but such that every deterministic protocol with access to an "Equality" oracle needs Omega(n) cost to compute it. Additionally, we exhibit a natural and strict infinite hierarchy within BPP, starting with the class P^{EQ} at its bottom.

Cite as

Arkadev Chattopadhyay, Shachar Lovett, and Marc Vinyals. Equality Alone Does not Simulate Randomness. In 34th Computational Complexity Conference (CCC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 137, pp. 14:1-14:11, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{chattopadhyay_et_al:LIPIcs.CCC.2019.14,
  author =	{Chattopadhyay, Arkadev and Lovett, Shachar and Vinyals, Marc},
  title =	{{Equality Alone Does not Simulate Randomness}},
  booktitle =	{34th Computational Complexity Conference (CCC 2019)},
  pages =	{14:1--14:11},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-116-0},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{137},
  editor =	{Shpilka, Amir},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2019.14},
  URN =		{urn:nbn:de:0030-drops-108368},
  doi =		{10.4230/LIPIcs.CCC.2019.14},
  annote =	{Keywords: Communication lower bound, derandomization}
}
Document
Counting Basic-Irreducible Factors Mod p^k in Deterministic Poly-Time and p-Adic Applications

Authors: Ashish Dwivedi, Rajat Mittal, and Nitin Saxena


Abstract
Finding an irreducible factor of a polynomial f(x) modulo a prime p is not known to be in deterministic polynomial time, though there is a classical deterministic algorithm that counts the number of irreducible factors of f mod p. We can ask the same question modulo prime-powers p^k. The irreducible factors of f mod p^k blow up exponentially in number, making it hard to describe them. Can we count those irreducible factors mod p^k that remain irreducible mod p? These are called basic-irreducible. A simple example is f = x^2+px mod p^2; it has p many basic-irreducible factors. Note also that x^2+p mod p^2 is irreducible but not basic-irreducible! We give an algorithm to count the number of basic-irreducible factors of f mod p^k in deterministic poly(deg(f), k log p)-time. This solves the open questions posed in Cheng et al. (ANTS'18) and Kopp et al. (Math. Comp.'19). In particular, we are counting roots mod p^k, which gives the first deterministic poly-time algorithm to compute the Igusa zeta function of f. Also, our algorithm efficiently partitions the set of all basic-irreducible factors (possibly exponential in number) into merely deg(f)-many disjoint sets, using a compact tree data structure and split ideals.
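
The worked example above (f = x^2 + px has exactly p basic-irreducible factors mod p^2, one per root) is easy to check by brute force. A minimal sketch for illustration only; the paper's algorithm counts without enumeration:

def count_roots_mod_pk(p, k):
    # Count x in Z/p^k Z with x^2 + p*x == 0 (mod p^k); for k = 2, each
    # root corresponds to a basic-irreducible (degree-1) factor.
    m = p ** k
    return sum(1 for x in range(m) if (x * x + p * x) % m == 0)

for p in [2, 3, 5, 7]:
    print(p, count_roots_mod_pk(p, 2))  # prints exactly p roots per prime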

Cite as

Ashish Dwivedi, Rajat Mittal, and Nitin Saxena. Counting Basic-Irreducible Factors Mod p^k in Deterministic Poly-Time and p-Adic Applications. In 34th Computational Complexity Conference (CCC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 137, pp. 15:1-15:29, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{dwivedi_et_al:LIPIcs.CCC.2019.15,
  author =	{Dwivedi, Ashish and Mittal, Rajat and Saxena, Nitin},
  title =	{{Counting Basic-Irreducible Factors Mod p^k in Deterministic Poly-Time and p-Adic Applications}},
  booktitle =	{34th Computational Complexity Conference (CCC 2019)},
  pages =	{15:1--15:29},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-116-0},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{137},
  editor =	{Shpilka, Amir},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2019.15},
  URN =		{urn:nbn:de:0030-drops-108373},
  doi =		{10.4230/LIPIcs.CCC.2019.15},
  annote =	{Keywords: deterministic, root, counting, modulo, prime-power, tree, basic irreducible, unramified}
}
Document
Near-Optimal Pseudorandom Generators for Constant-Depth Read-Once Formulas

Authors: Dean Doron, Pooya Hatami, and William M. Hoza


Abstract
We give an explicit pseudorandom generator (PRG) for read-once AC^0, i.e., constant-depth read-once formulas over the basis {wedge, vee, neg} with unbounded fan-in. The seed length of our PRG is O~(log(n/epsilon)). Previously, PRGs with near-optimal seed length were known only for the depth-2 case [Gopalan et al., 2012]. For a constant depth d > 2, the best prior PRG is a recent construction by Forbes and Kelley with seed length O~(log^2 n + log n log(1/epsilon)) for the more general model of constant-width read-once branching programs with arbitrary variable order [Michael A. Forbes and Zander Kelley, 2018]. Looking beyond read-once AC^0, we also show that our PRG fools read-once AC^0[oplus] with seed length O~(t + log(n/epsilon)), where t is the number of parity gates in the formula. Our construction follows Ajtai and Wigderson’s approach of iterated pseudorandom restrictions [Ajtai and Wigderson, 1989]. We assume by recursion that we already have a PRG for depth-d AC^0 formulas. To fool depth-(d + 1) AC^0 formulas, we use the given PRG, combined with a small-bias distribution and almost k-wise independence, to sample a pseudorandom restriction. The analysis of Forbes and Kelley [Michael A. Forbes and Zander Kelley, 2018] shows that our restriction approximately preserves the expectation of the formula. The crux of our work is showing that after poly(log log n) independent applications of our pseudorandom restriction, the formula simplifies in the sense that every gate other than the output has only polylog n remaining children. Finally, as the last step, we use a recent PRG by Meka, Reingold, and Tal [Meka et al., 2019] to fool this simpler formula.

Cite as

Dean Doron, Pooya Hatami, and William M. Hoza. Near-Optimal Pseudorandom Generators for Constant-Depth Read-Once Formulas. In 34th Computational Complexity Conference (CCC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 137, pp. 16:1-16:34, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{doron_et_al:LIPIcs.CCC.2019.16,
  author =	{Doron, Dean and Hatami, Pooya and Hoza, William M.},
  title =	{{Near-Optimal Pseudorandom Generators for Constant-Depth Read-Once Formulas}},
  booktitle =	{34th Computational Complexity Conference (CCC 2019)},
  pages =	{16:1--16:34},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-116-0},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{137},
  editor =	{Shpilka, Amir},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2019.16},
  URN =		{urn:nbn:de:0030-drops-108382},
  doi =		{10.4230/LIPIcs.CCC.2019.16},
  annote =	{Keywords: Pseudorandom generators, Constant-depth formulas, Explicit constructions}
}
Document
Fourier and Circulant Matrices Are Not Rigid

Authors: Zeev Dvir and Allen Liu


Abstract
The concept of matrix rigidity was first introduced by Valiant in [Valiant, 1977]. Roughly speaking, a matrix is rigid if its rank cannot be reduced significantly by changing a small number of entries. There has been extensive interest in rigid matrices as Valiant showed in [Valiant, 1977] that rigidity can be used to prove arithmetic circuit lower bounds. In a surprising result, Alman and Williams showed that the (real-valued) Hadamard matrix, which was conjectured to be rigid, is actually not very rigid. This line of work was extended by [Dvir and Edelman, 2017] to a family of matrices related to the Hadamard matrix, but over finite fields. In our work, we take another step in this direction and show that for any abelian group G and function f : G -> C, the matrix given by M_{xy} = f(x - y) for x,y in G is not rigid. In particular, we get that complex-valued Fourier matrices, circulant matrices, and Toeplitz matrices are all not rigid and cannot be used to carry out Valiant’s approach to proving circuit lower bounds. This complements a recent result of Goldreich and Tal [Goldreich and Tal, 2016], who showed that Toeplitz matrices are nontrivially rigid (but not enough for Valiant’s method). Our work differs from previous non-rigidity results in that those works considered matrices whose underlying group of symmetries was of the form F_p^n with p fixed and n tending to infinity, while in the families of matrices we study, the underlying group of symmetries can be any abelian group and, in particular, the cyclic group Z_N, which has very different structure. Our results also suggest natural new candidates for rigidity in the form of matrices whose symmetry groups are highly non-abelian. Our proof has four parts. The first extends the results of [Josh Alman and Ryan Williams, 2016; Dvir and Edelman, 2017] to generalized Hadamard matrices over the complex numbers via a new proof technique. The second part handles the N x N Fourier matrix when N has a particularly nice factorization that allows us to embed smaller copies of (generalized) Hadamard matrices inside of it. The third part uses results from number theory to bootstrap the non-rigidity for these special values of N and extend to all sufficiently large N. The fourth and final part involves using the non-rigidity of the Fourier matrix to show that the group algebra matrix, given by M_{xy} = f(x - y) for x,y in G, is not rigid for any function f and abelian group G.
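
The matrices covered by the main theorem are determined by a single function on the group. A minimal numpy sketch of the construction M_{xy} = f(x - y) over the cyclic group Z_N, illustrating the definition only (not the non-rigidity proof):

import numpy as np

def group_algebra_matrix(f, N):
    # M[x][y] = f((x - y) mod N); for G = Z_N this is a circulant matrix.
    return np.array([[f((x - y) % N) for y in range(N)] for x in range(N)])

# Example: an arbitrary +/-1-valued f on Z_8; the theorem says any such
# matrix (for any f and any abelian group G) is not rigid.
M = group_algebra_matrix(lambda z: (-1) ** bin(z).count("1"), 8)
print(np.linalg.matrix_rank(M))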

Cite as

Zeev Dvir and Allen Liu. Fourier and Circulant Matrices Are Not Rigid. In 34th Computational Complexity Conference (CCC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 137, pp. 17:1-17:23, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{dvir_et_al:LIPIcs.CCC.2019.17,
  author =	{Dvir, Zeev and Liu, Allen},
  title =	{{Fourier and Circulant Matrices Are Not Rigid}},
  booktitle =	{34th Computational Complexity Conference (CCC 2019)},
  pages =	{17:1--17:23},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-116-0},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{137},
  editor =	{Shpilka, Amir},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2019.17},
  URN =		{urn:nbn:de:0030-drops-108390},
  doi =		{10.4230/LIPIcs.CCC.2019.17},
  annote =	{Keywords: Rigidity, Fourier matrix, Circulant matrix}
}
Document
Nullstellensatz Size-Degree Trade-offs from Reversible Pebbling

Authors: Susanna F. de Rezende, Jakob Nordström, Or Meir, and Robert Robere


Abstract
We establish an exactly tight relation between reversible pebblings of graphs and Nullstellensatz refutations of pebbling formulas, showing that a graph G can be reversibly pebbled in time t and space s if and only if there is a Nullstellensatz refutation of the pebbling formula over G in size t+1 and degree s (independently of the field in which the Nullstellensatz refutation is made). We use this correspondence to prove a number of strong size-degree trade-offs for Nullstellensatz, which to the best of our knowledge are the first such results for this proof system.
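
The correspondence can be stated as a single equivalence (a transcription of the result above; Peb_G denotes the pebbling formula over G):

  G \text{ reversibly pebbled in time } t \text{ and space } s \iff \mathrm{Peb}_G \text{ has a Nullstellensatz refutation of size } t+1 \text{ and degree } s \quad (\text{over any field}).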

Cite as

Susanna F. de Rezende, Jakob Nordström, Or Meir, and Robert Robere. Nullstellensatz Size-Degree Trade-offs from Reversible Pebbling. In 34th Computational Complexity Conference (CCC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 137, pp. 18:1-18:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{derezende_et_al:LIPIcs.CCC.2019.18,
  author =	{de Rezende, Susanna F. and Nordstr\"{o}m, Jakob and Meir, Or and Robere, Robert},
  title =	{{Nullstellensatz Size-Degree Trade-offs from Reversible Pebbling}},
  booktitle =	{34th Computational Complexity Conference (CCC 2019)},
  pages =	{18:1--18:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-116-0},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{137},
  editor =	{Shpilka, Amir},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2019.18},
  URN =		{urn:nbn:de:0030-drops-108403},
  doi =		{10.4230/LIPIcs.CCC.2019.18},
  annote =	{Keywords: proof complexity, Nullstellensatz, pebble games, trade-offs, size, degree}
}
Document
Stronger Connections Between Circuit Analysis and Circuit Lower Bounds, via PCPs of Proximity

Authors: Lijie Chen and R. Ryan Williams


Abstract
We considerably sharpen the known connections between circuit-analysis algorithms and circuit lower bounds, show intriguing equivalences between the analysis of weak circuits and (apparently difficult) circuits, and provide strong new lower bounds for approximately computing Boolean functions with depth-two neural networks and related models.
  • We develop approaches to proving THR o THR lower bounds (a notorious open problem), by connecting algorithmic analysis of THR o THR to the provably weaker circuit classes THR o MAJ and MAJ o MAJ, where exponential lower bounds have long been known. More precisely, we show equivalences between algorithmic analysis of THR o THR and these weaker classes. The epsilon-error CAPP problem asks to approximate the acceptance probability of a given circuit to within additive error epsilon; it is the "canonical" derandomization problem. We show:
    - There is a non-trivial (2^n/n^{omega(1)}-time) 1/poly(n)-error CAPP algorithm for poly(n)-size THR o THR circuits if and only if there is such an algorithm for poly(n)-size MAJ o MAJ.
    - There is a delta > 0 and a non-trivial SAT (delta-error CAPP) algorithm for poly(n)-size THR o THR circuits if and only if there is such an algorithm for poly(n)-size THR o MAJ.
  Similar results hold for depth-d linear threshold circuits and depth-d MAJORITY circuits. These equivalences are proved via new simulations of THR circuits by circuits with MAJ gates.
  • We strengthen the connection between non-trivial derandomization (non-trivial CAPP algorithms) for a circuit class C and circuit lower bounds against C. Previously, [Ben-Sasson and Viola, ICALP 2014] (following [Williams, STOC 2010]) showed that for any polynomial-size class C closed under projections, non-trivial (2^n/n^{omega(1)}-time) CAPP for OR_{poly(n)} o AND_3 o C yields "NEXP does not have C circuits". We apply Probabilistically Checkable Proofs of Proximity in a new way to show that it would suffice to have a non-trivial CAPP algorithm for either XOR_2 o C, AND_2 o C, or OR_2 o C.
  • A direct corollary of the first two bullets is that "NEXP does not have THR o THR circuits" would follow from either:
    - a non-trivial delta-error CAPP (or SAT) algorithm for poly(n)-size THR o MAJ circuits, or
    - a non-trivial 1/poly(n)-error CAPP algorithm for poly(n)-size MAJ o MAJ circuits.
  • Applying the above machinery, we extend lower bounds for depth-two neural networks and related models [R. Williams, CCC 2018] to weak approximate computations of Boolean functions. For example, for arbitrarily small epsilon > 0, we prove there are Boolean functions f computable in nondeterministic n^{log n} time such that (for infinitely many n) every polynomial-size depth-two neural network N on n inputs (with sign or ReLU activation) must satisfy max_{x in {0,1}^n} |N(x)-f(x)| > 1/2-epsilon. That is, short linear combinations of ReLU gates fail miserably at computing f to within close precision. Similar results are proved for linear combinations of ACC o THR circuits and linear combinations of low-degree F_p polynomials. These results constitute further progress towards THR o THR lower bounds.
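
For reference, the CAPP problem invoked throughout has the following standard one-line definition, consistent with the informal description above:

  \mathrm{CAPP}_\epsilon: \ \text{given a circuit } C \text{ on } n \text{ inputs, output } v \text{ with } \Bigl| v - \Pr_{x \in \{0,1\}^n}[C(x)=1] \Bigr| \le \epsilon.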

Cite as

Lijie Chen and R. Ryan Williams. Stronger Connections Between Circuit Analysis and Circuit Lower Bounds, via PCPs of Proximity. In 34th Computational Complexity Conference (CCC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 137, pp. 19:1-19:43, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{chen_et_al:LIPIcs.CCC.2019.19,
  author =	{Chen, Lijie and Williams, R. Ryan},
  title =	{{Stronger Connections Between Circuit Analysis and Circuit Lower Bounds, via PCPs of Proximity}},
  booktitle =	{34th Computational Complexity Conference (CCC 2019)},
  pages =	{19:1--19:43},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-116-0},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{137},
  editor =	{Shpilka, Amir},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2019.19},
  URN =		{urn:nbn:de:0030-drops-108419},
  doi =		{10.4230/LIPIcs.CCC.2019.19},
  annote =	{Keywords: PCP of Proximity, Circuit Lower Bounds, Derandomization, Threshold Circuits, ReLU}
}
Document
Universality of EPR Pairs in Entanglement-Assisted Communication Complexity, and the Communication Cost of State Conversion

Authors: Matthew Coudron and Aram W. Harrow


Abstract
In this work we consider the role of entanglement assistance in quantum communication protocols, focusing, in particular, on whether the type of shared entangled state can affect the quantum communication complexity of a function. This question is interesting because in some other settings in quantum information, such as non-local games, or tasks that involve quantum communication between players and referee, or simulating bipartite unitaries or communication channels, maximally entangled states are known to be less useful as a resource than some partially entangled states. By contrast, we prove that the bounded-error entanglement-assisted quantum communication complexity of a partial or total function cannot be improved by more than a constant factor by replacing maximally entangled states with arbitrary entangled states. In particular, we show that every quantum communication protocol using Q qubits of communication and arbitrary shared entanglement can be epsilon-approximated by a protocol using O(Q/epsilon + log(1/epsilon)/epsilon) qubits of communication and only EPR pairs as shared entanglement. This conclusion is the opposite of the common wisdom in the study of non-local games, where it has been shown, for example, that the I3322 inequality has a non-local strategy using a non-maximally entangled state which surpasses the winning probability achievable by any strategy using a maximally entangled state of any dimension [Vidick and Wehner, 2011]. We leave open the question of how much the use of a shared maximally entangled state can reduce the quantum communication complexity of a function. Our second result concerns an old question in quantum information theory: how much quantum communication is required to approximately convert one pure bipartite entangled state into another? We give simple and efficiently computable upper and lower bounds. Given two bipartite states |chi> and |upsilon>, we define a natural quantity, d_{infty}(|chi>, |upsilon>), which we call the l_{infty} Earth Mover’s distance, and we show that the communication cost of converting between |chi> and |upsilon> is upper bounded by a constant multiple of d_{infty}(|chi>, |upsilon>). Here d_{infty}(|chi>, |upsilon>) may be informally described as the minimum, over all transports between the log of the Schmidt coefficients of |chi> and those of |upsilon>, of the maximum distance that any amount of mass must be moved in that transport. A precise definition is given in the introduction. Furthermore, we prove a complementary lower bound on the cost of state conversion by the epsilon-smoothed l_{infty} Earth Mover’s distance, which is a natural smoothing of the l_{infty} Earth Mover’s distance that we will define via a connection with optimal transport theory.

Cite as

Matthew Coudron and Aram W. Harrow. Universality of EPR Pairs in Entanglement-Assisted Communication Complexity, and the Communication Cost of State Conversion. In 34th Computational Complexity Conference (CCC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 137, pp. 20:1-20:25, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@InProceedings{coudron_et_al:LIPIcs.CCC.2019.20,
  author =	{Coudron, Matthew and Harrow, Aram W.},
  title =	{{Universality of EPR Pairs in Entanglement-Assisted Communication Complexity, and the Communication Cost of State Conversion}},
  booktitle =	{34th Computational Complexity Conference (CCC 2019)},
  pages =	{20:1--20:25},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-116-0},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{137},
  editor =	{Shpilka, Amir},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2019.20},
  URN =		{urn:nbn:de:0030-drops-108421},
  doi =		{10.4230/LIPIcs.CCC.2019.20},
  annote =	{Keywords: Entanglement, quantum communication complexity}
}
Document
Average-Case Quantum Advantage with Shallow Circuits

Authors: François Le Gall


Abstract
Recently Bravyi, Gosset and König (Science 2018) proved an unconditional separation between the computational powers of small-depth quantum and classical circuits for a relation. In this paper we show a similar separation in the average-case setting that gives stronger evidence of the superiority of small-depth quantum computation: we construct a computational task that can be solved on all inputs by a quantum circuit of constant depth with bounded-fanin gates (a "shallow" quantum circuit) and show that any classical circuit with bounded-fanin gates solving this problem on a non-negligible fraction of the inputs must have logarithmic depth. Our results are obtained by introducing a technique to create quantum states exhibiting global quantum correlations from any graph, via a construction that we call the extended graph. Similar results have been obtained very recently (and independently) by Coudron, Stark and Vidick (arXiv:1810.04233), and Bene Watts, Kothari, Schaeffer and Tal (STOC 2019).

Cite as

François Le Gall. Average-Case Quantum Advantage with Shallow Circuits. In 34th Computational Complexity Conference (CCC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 137, pp. 21:1-21:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{legall:LIPIcs.CCC.2019.21,
  author =	{Le Gall, Fran\c{c}ois},
  title =	{{Average-Case Quantum Advantage with Shallow Circuits}},
  booktitle =	{34th Computational Complexity Conference (CCC 2019)},
  pages =	{21:1--21:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-116-0},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{137},
  editor =	{Shpilka, Amir},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2019.21},
  URN =		{urn:nbn:de:0030-drops-108432},
  doi =		{10.4230/LIPIcs.CCC.2019.21},
  annote =	{Keywords: Quantum computing, circuit complexity, constant-depth circuits}
}
Document
Time-Space Lower Bounds for Two-Pass Learning

Authors: Sumegha Garg, Ran Raz, and Avishay Tal


Abstract
A line of recent works showed that for a large class of learning problems, any learning algorithm requires either super-linear memory size or a super-polynomial number of samples [Raz, 2016; Kol et al., 2017; Raz, 2017; Moshkovitz and Moshkovitz, 2018; Beame et al., 2018; Garg et al., 2018]. For example, any algorithm for learning parities of size n requires either a memory of size Omega(n^{2}) or an exponential number of samples [Raz, 2016]. All these works modeled the learner as a one-pass branching program, allowing only one pass over the stream of samples. In this work, we prove the first memory-samples lower bounds (with a super-linear lower bound on the memory size and a super-polynomial lower bound on the number of samples) when the learner is allowed two passes over the stream of samples. For example, we prove that any two-pass algorithm for learning parities of size n requires either a memory of size Omega(n^{1.5}) or at least 2^{Omega(sqrt{n})} samples. More generally, a matrix M: A x X -> {-1,1} corresponds to the following learning problem: an unknown element x in X is chosen uniformly at random, and a learner tries to learn x from a stream of samples (a_1, b_1), (a_2, b_2), ..., where for every i, a_i in A is chosen uniformly at random and b_i = M(a_i, x). Assume that k, l, r are such that any submatrix of M with at least 2^{-k} * |A| rows and at least 2^{-l} * |X| columns has a bias of at most 2^{-r}. We show that any two-pass learning algorithm for the learning problem corresponding to M requires either a memory of size at least Omega(k * min{k, sqrt{l}}), or at least 2^{Omega(min{k, sqrt{l}, r})} samples.
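
As a concrete instance of this streaming model, here is a small Python sketch of the sample stream for the parity learning problem (a hypothetical toy generator, not code from the paper): here A = X = {0,1}^n and M(a, x) = (-1)^{<a,x> mod 2}, so each sample reveals one random parity of the hidden x.

import random

def parity_stream(n, num_samples, seed=0):
    # Sample stream for learning parities: a hidden x in {0,1}^n is fixed,
    # each a_i is uniform in {0,1}^n, and b_i = M(a_i, x) = (-1)^(<a_i,x> mod 2).
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    for _ in range(num_samples):
        a = [rng.randint(0, 1) for _ in range(n)]
        b = (-1) ** (sum(ai * xi for ai, xi in zip(a, x)) % 2)
        yield a, b

# A memory-unbounded learner recovers x from about n independent samples by
# Gaussian elimination over GF(2); the theorem above says any two-pass learner
# with o(n^{1.5}) memory instead needs 2^{Omega(sqrt(n))} samples.
for a, b in parity_stream(n=8, num_samples=3):
    print(a, b)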

Cite as

Sumegha Garg, Ran Raz, and Avishay Tal. Time-Space Lower Bounds for Two-Pass Learning. In 34th Computational Complexity Conference (CCC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 137, pp. 22:1-22:39, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{garg_et_al:LIPIcs.CCC.2019.22,
  author =	{Garg, Sumegha and Raz, Ran and Tal, Avishay},
  title =	{{Time-Space Lower Bounds for Two-Pass Learning}},
  booktitle =	{34th Computational Complexity Conference (CCC 2019)},
  pages =	{22:1--22:39},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-116-0},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{137},
  editor =	{Shpilka, Amir},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2019.22},
  URN =		{urn:nbn:de:0030-drops-108446},
  doi =		{10.4230/LIPIcs.CCC.2019.22},
  annote =	{Keywords: branching program, time-space tradeoffs, two-pass streaming, PAC learning, lower bounds}
}
Document
Parity Helps to Compute Majority

Authors: Igor Carboni Oliveira, Rahul Santhanam, and Srikanth Srinivasan


Abstract
We study the complexity of computing symmetric and threshold functions by constant-depth circuits with Parity gates, also known as AC^0[oplus] circuits. Razborov [Alexander A. Razborov, 1987] and Smolensky [Roman Smolensky, 1987; Roman Smolensky, 1993] showed that Majority requires depth-d AC^0[oplus] circuits of size 2^{Omega(n^{1/(2(d-1))})}. By using a divide-and-conquer approach, it is easy to show that Majority can be computed with depth-d AC^0[oplus] circuits of size 2^{O~(n^{1/(d-1)})}. This gap between upper and lower bounds has stood for nearly three decades. Somewhat surprisingly, we show that neither the upper bound nor the lower bound above is tight for large d. We show for d >= 5 that any symmetric function can be computed with depth-d AC^0[oplus] circuits of size exp(O~(n^{2/3 * 1/(d-4)})). Our upper bound extends to threshold functions (with a constant additive loss in the denominator of the double exponent). We improve the Razborov-Smolensky lower bound to show that for d >= 3, Majority requires depth-d AC^0[oplus] circuits of size 2^{Omega(n^{1/(2d-4)})}. For depths d <= 4, we are able to refine our techniques to get almost-optimal bounds: the depth-3 AC^0[oplus] circuit size of Majority is 2^{Theta~(n^{1/2})}, while its depth-4 AC^0[oplus] circuit size is 2^{Theta~(n^{1/4})}.
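
For orientation, all of the bounds above have the form 2^{n^e} up to O~/Theta~ factors in the exponent; restating the exponents e from the abstract:

  previous lower bound (Razborov-Smolensky), depth d:   e = 1/(2(d-1))
  previous divide-and-conquer upper bound, depth d:     e = 1/(d-1)
  new upper bound for symmetric functions, d >= 5:      e = (2/3) * 1/(d-4)
  new lower bound for Majority, d >= 3:                 e = 1/(2d-4)
  tight small-depth bounds:                             e = 1/2 (depth 3), e = 1/4 (depth 4)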

Cite as

Igor Carboni Oliveira, Rahul Santhanam, and Srikanth Srinivasan. Parity Helps to Compute Majority. In 34th Computational Complexity Conference (CCC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 137, pp. 23:1-23:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{oliveira_et_al:LIPIcs.CCC.2019.23,
  author =	{Oliveira, Igor Carboni and Santhanam, Rahul and Srinivasan, Srikanth},
  title =	{{Parity Helps to Compute Majority}},
  booktitle =	{34th Computational Complexity Conference (CCC 2019)},
  pages =	{23:1--23:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-116-0},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{137},
  editor =	{Shpilka, Amir},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2019.23},
  URN =		{urn:nbn:de:0030-drops-108453},
  doi =		{10.4230/LIPIcs.CCC.2019.23},
  annote =	{Keywords: Computational Complexity, Boolean Circuits, Lower Bounds, Parity, Majority}
}
Document
Size-Degree Trade-Offs for Sums-of-Squares and Positivstellensatz Proofs

Authors: Albert Atserias and Tuomas Hakoniemi


Abstract
We show that if a system of degree-k polynomial constraints on n Boolean variables has a Sums-of-Squares (SOS) proof of unsatisfiability with at most s many monomials, then it also has one whose degree is of the order of the square root of n log s plus k. A similar statement holds for the more general Positivstellensatz (PS) proofs. This establishes size-degree trade-offs for SOS and PS that match their analogues for weaker proof systems such as Resolution, Polynomial Calculus, and the proof systems for the LP and SDP hierarchies of Lovász and Schrijver. As a corollary to this, and to the known degree lower bounds, we get optimal integrality gaps for exponential size SOS proofs for sparse random instances of the standard NP-hard constraint optimization problems. We also get exponential size SOS lower bounds for Tseitin and Knapsack formulas. The proof of our main result relies on a zero-gap duality theorem for pre-ordered vector spaces that admit an order unit, whose specialization to PS and SOS may be of independent interest.

Cite as

Albert Atserias and Tuomas Hakoniemi. Size-Degree Trade-Offs for Sums-of-Squares and Positivstellensatz Proofs. In 34th Computational Complexity Conference (CCC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 137, pp. 24:1-24:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{atserias_et_al:LIPIcs.CCC.2019.24,
  author =	{Atserias, Albert and Hakoniemi, Tuomas},
  title =	{{Size-Degree Trade-Offs for Sums-of-Squares and Positivstellensatz Proofs}},
  booktitle =	{34th Computational Complexity Conference (CCC 2019)},
  pages =	{24:1--24:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-116-0},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{137},
  editor =	{Shpilka, Amir},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2019.24},
  URN =		{urn:nbn:de:0030-drops-108464},
  doi =		{10.4230/LIPIcs.CCC.2019.24},
  annote =	{Keywords: Proof complexity, semialgebraic proof systems, Sums-of-Squares, Positivstellensatz, trade-offs, lower bounds, monomial size, degree}
}
Document
Complexity Lower Bounds for Computing the Approximately-Commuting Operator Value of Non-Local Games to High Precision

Authors: Matthew Coudron and William Slofstra


Abstract
We study the problem of approximating the commuting-operator value of a two-player non-local game. It is well known that it is NP-complete to decide whether the classical value of a non-local game is 1 or 1 - epsilon, promised that one of the two is the case. Furthermore, as long as epsilon is small enough, this result does not depend on the gap epsilon. In contrast, a recent result of Fitzsimons, Ji, Vidick, and Yuen shows that the complexity of computing the quantum value grows without bound as the gap epsilon decreases. In this paper, we show that this also holds for the commuting-operator value of a game. Specifically, in the language of multi-prover interactive proofs, we show that the power of MIP^{co}(2,1,1,s) (proofs with two provers, one round, completeness probability 1, soundness probability s, and commuting-operator strategies) can increase without bound as the gap 1-s gets arbitrarily small. Our results also extend naturally in two ways: to perfect zero-knowledge protocols, and to lower bounds on the complexity of computing the approximately-commuting value of a game. Thus we get lower bounds on the complexity class PZK-MIP^{co}_{delta}(2,1,1,s) of perfect zero-knowledge multi-prover proofs with approximately-commuting operator strategies, as the gap 1-s gets arbitrarily small. While we do not know any computable upper bound on the time complexity of the class MIP^{co}, a result of the first author and Vidick shows that for s = 1 - 1/poly(f(n)) and delta = 1/poly(f(n)), the class MIP^{co}_{delta}(2,1,1,s), with constant communication from the provers, is contained in TIME(exp(poly(f(n)))). We give a lower bound of coNTIME(f(n)) (ignoring constants inside the function) for this class, which is tight up to polynomial factors assuming the exponential time hypothesis.
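
Restating the last two sentences in symbols (ignoring constants inside the function, with constant communication from the provers, s = 1 - 1/poly(f(n)), and delta = 1/poly(f(n))):

  coNTIME(f(n)) subseteq MIP^{co}_{delta}(2,1,1,s) subseteq TIME(exp(poly(f(n)))),

so the new lower bound is tight up to polynomial factors assuming the exponential time hypothesis.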

Cite as

Matthew Coudron and William Slofstra. Complexity Lower Bounds for Computing the Approximately-Commuting Operator Value of Non-Local Games to High Precision. In 34th Computational Complexity Conference (CCC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 137, pp. 25:1-25:20, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{coudron_et_al:LIPIcs.CCC.2019.25,
  author =	{Coudron, Matthew and Slofstra, William},
  title =	{{Complexity Lower Bounds for Computing the Approximately-Commuting Operator Value of Non-Local Games to High Precision}},
  booktitle =	{34th Computational Complexity Conference (CCC 2019)},
  pages =	{25:1--25:20},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-116-0},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{137},
  editor =	{Shpilka, Amir},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2019.25},
  URN =		{urn:nbn:de:0030-drops-108478},
  doi =		{10.4230/LIPIcs.CCC.2019.25},
  annote =	{Keywords: Quantum complexity theory, Non-local game, Multi-prover interactive proof, Entanglement}
}
Document
Barriers for Fast Matrix Multiplication from Irreversibility

Authors: Matthias Christandl, Péter Vrana, and Jeroen Zuiddam


Abstract
Determining the asymptotic algebraic complexity of matrix multiplication, succinctly represented by the matrix multiplication exponent omega, is a central problem in algebraic complexity theory. The best upper bounds on omega, leading to the state-of-the-art omega <= 2.37..., have been obtained via the laser method of Strassen and its generalization by Coppersmith and Winograd. Recent barrier results show limitations for these and related approaches to improve the upper bound on omega. We introduce a new and more general barrier, providing stronger limitations than in previous work. Concretely, we introduce the notion of "irreversibility" of a tensor and we prove (in some precise sense) that any approach that uses an irreversible tensor in an intermediate step (e.g., as a starting tensor in the laser method) cannot give omega = 2. In quantitative terms, we prove that the best upper bound achievable is lower bounded by two times the irreversibility of the intermediate tensor. The quantum functionals and Strassen support functionals give (so far, the best) lower bounds on irreversibility. We provide lower bounds on the irreversibility of key intermediate tensors, including the small and big Coppersmith-Winograd tensors, that improve limitations shown in previous work. Finally, we discuss barriers on the group-theoretic approach in terms of "monomial" irreversibility.
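
In symbols (our shorthand, with i(T) denoting the irreversibility of a tensor T): if an upper bound omega <= c is obtained via an intermediate tensor T, then c >= 2 * i(T). Hence any approach routed through a tensor with i(T) > 1 cannot show omega = 2; only reversible intermediate tensors (i(T) = 1) escape the barrier.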

Cite as

Matthias Christandl, Péter Vrana, and Jeroen Zuiddam. Barriers for Fast Matrix Multiplication from Irreversibility. In 34th Computational Complexity Conference (CCC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 137, pp. 26:1-26:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{christandl_et_al:LIPIcs.CCC.2019.26,
  author =	{Christandl, Matthias and Vrana, P\'{e}ter and Zuiddam, Jeroen},
  title =	{{Barriers for Fast Matrix Multiplication from Irreversibility}},
  booktitle =	{34th Computational Complexity Conference (CCC 2019)},
  pages =	{26:1--26:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-116-0},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{137},
  editor =	{Shpilka, Amir},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2019.26},
  URN =		{urn:nbn:de:0030-drops-108487},
  doi =		{10.4230/LIPIcs.CCC.2019.26},
  annote =	{Keywords: Matrix multiplication exponent, barriers, laser method}
}
Document
Hardness Magnification near State-Of-The-Art Lower Bounds

Authors: Igor Carboni Oliveira, Ján Pich, and Rahul Santhanam


Abstract
This work continues the development of hardness magnification. The latter proposes a new strategy for showing strong complexity lower bounds by reducing them to a refined analysis of weaker models, where combinatorial techniques might be successful. We consider gap versions of the meta-computational problems MKtP and MCSP, where one needs to distinguish instances (strings or truth-tables) of complexity <= s_1(N) from instances of complexity >= s_2(N), and N = 2^n denotes the input length. In MCSP, complexity is measured by circuit size, while in MKtP one considers Levin's notion of time-bounded Kolmogorov complexity. (In our results, the parameters s_1(N) and s_2(N) are asymptotically quite close, and the problems almost coincide with their standard formulations without a gap.) We establish that for Gap-MKtP[s_1,s_2] and Gap-MCSP[s_1,s_2], a marginal improvement over the state of the art in unconditional lower bounds in a variety of computational models would imply explicit super-polynomial lower bounds.

Theorem. There exists a universal constant c >= 1 such that, if there is an epsilon > 0 for which the stated lower bound holds for every small enough beta > 0, then:
(1) Gap-MCSP[2^{beta n}/cn, 2^{beta n}] !in Circuit[N^{1 + epsilon}] implies NP !subseteq Circuit[poly];
(2) Gap-MKtP[2^{beta n}, 2^{beta n} + cn] !in TC^0[N^{1 + epsilon}] implies EXP !subseteq TC^0[poly];
(3) Gap-MKtP[2^{beta n}, 2^{beta n} + cn] !in B_2-Formula[N^{2 + epsilon}] implies EXP !subseteq Formula[poly];
(4) Gap-MKtP[2^{beta n}, 2^{beta n} + cn] !in U_2-Formula[N^{3 + epsilon}] implies EXP !subseteq Formula[poly];
(5) Gap-MKtP[2^{beta n}, 2^{beta n} + cn] !in BP[N^{2 + epsilon}] implies EXP !subseteq BP[poly];
(6) Gap-MKtP[2^{beta n}, 2^{beta n} + cn] !in (AC^0[6])[N^{1 + epsilon}] implies EXP !subseteq AC^0[6].

These results are complemented by lower bounds for Gap-MCSP and Gap-MKtP against different models. For instance, the lower bound assumed in (1) holds for U_2-formulas of near-quadratic size, and lower bounds similar to (3)-(5) hold for various regimes of parameters. We also identify a natural computational model under which the hardness magnification threshold for Gap-MKtP lies below existing lower bounds: U_2-formulas that can compute parity functions at the leaves (instead of just literals). As a consequence, if one managed to adapt the existing lower bound techniques against such formulas to work with Gap-MKtP, then EXP !subseteq NC^1 would follow via hardness magnification.

Cite as

Igor Carboni Oliveira, Ján Pich, and Rahul Santhanam. Hardness Magnification near State-Of-The-Art Lower Bounds. In 34th Computational Complexity Conference (CCC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 137, pp. 27:1-27:29, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{oliveira_et_al:LIPIcs.CCC.2019.27,
  author =	{Oliveira, Igor Carboni and Pich, J\'{a}n and Santhanam, Rahul},
  title =	{{Hardness Magnification near State-Of-The-Art Lower Bounds}},
  booktitle =	{34th Computational Complexity Conference (CCC 2019)},
  pages =	{27:1--27:29},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-116-0},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{137},
  editor =	{Shpilka, Amir},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2019.27},
  URN =		{urn:nbn:de:0030-drops-108494},
  doi =		{10.4230/LIPIcs.CCC.2019.27},
  annote =	{Keywords: Circuit Complexity, Minimum Circuit Size Problem, Kolmogorov Complexity}
}
Document
Non-Malleable Extractors and Non-Malleable Codes: Partially Optimal Constructions

Authors: Xin Li


Abstract
The recent line of study on randomness extractors has been a great success, resulting in exciting new techniques, new connections, and breakthroughs on long-standing open problems in several seemingly different topics. These include seeded non-malleable extractors, privacy amplification protocols with an active adversary, independent source extractors (and explicit Ramsey graphs), and non-malleable codes in the split-state model. Previously, the best constructions were given in [Xin Li, 2017]: seeded non-malleable extractors with seed length and entropy requirement O(log n + log(1/epsilon) log log(1/epsilon)) for error epsilon; two-round privacy amplification protocols with optimal entropy loss for security parameter up to Omega(k/log k), where k is the entropy of the shared weak source; two-source extractors for entropy O(log n log log n); and non-malleable codes in the 2-split-state model with rate Omega(1/log n). However, in all cases there is still a gap to the optimum, and the motivation to close this gap remains strong. In this paper, we introduce a set of new techniques to further push the frontier on the above questions. Our techniques lead to improvements in all of the above questions, and in several cases partially optimal constructions. This is in contrast to all previous work, which only obtains close-to-optimal constructions. Specifically, we obtain: 1) A seeded non-malleable extractor with seed length O(log n) + log^{1+o(1)}(1/epsilon) and entropy requirement O(log log n + log(1/epsilon)), where the entropy requirement is asymptotically optimal by a recent result of Gur and Shinkar [Tom Gur and Igor Shinkar, 2018]; 2) A two-round privacy amplification protocol with optimal entropy loss for security parameter up to Omega(k), which solves the privacy amplification problem completely; 3) A two-source extractor for entropy O((log n log log n)/(log log log n)), which also gives an explicit Ramsey graph on N vertices with no clique or independent set of size (log N)^{O((log log log N)/(log log log log N))}; and 4) The first explicit non-malleable code in the 2-split-state model with constant rate, which has been a major goal in the study of non-malleable codes for quite some time. One small caveat is that the error of this code is only (an arbitrarily small) constant, but we can also achieve negligible error with rate Omega(log log log n/log log n), which already improves the rate in [Xin Li, 2017] exponentially. We believe our new techniques can help to eventually obtain completely optimal constructions in the above questions, and may have applications in other settings.

Cite as

Xin Li. Non-Malleable Extractors and Non-Malleable Codes: Partially Optimal Constructions. In 34th Computational Complexity Conference (CCC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 137, pp. 28:1-28:49, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{li:LIPIcs.CCC.2019.28,
  author =	{Li, Xin},
  title =	{{Non-Malleable Extractors and Non-Malleable Codes: Partially Optimal Constructions}},
  booktitle =	{34th Computational Complexity Conference (CCC 2019)},
  pages =	{28:1--28:49},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-116-0},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{137},
  editor =	{Shpilka, Amir},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2019.28},
  URN =		{urn:nbn:de:0030-drops-108507},
  doi =		{10.4230/LIPIcs.CCC.2019.28},
  annote =	{Keywords: extractor, non-malleable, privacy, codes}
}
Document
Optimal Separation and Strong Direct Sum for Randomized Query Complexity

Authors: Eric Blais and Joshua Brody


Abstract
We establish two results regarding the query complexity of bounded-error randomized algorithms. Bounded-error separation theorem: there exists a total function f : {0,1}^n -> {0,1} whose epsilon-error randomized query complexity satisfies overline{R}_epsilon(f) = Omega(R(f) * log(1/epsilon)). Strong direct sum theorem: for every function f and every k >= 2, the randomized query complexity of computing k instances of f simultaneously satisfies overline{R}_epsilon(f^k) = Theta(k * overline{R}_{epsilon/k}(f)). As a consequence of our two main results, we obtain an optimal superlinear direct-sum-type theorem for randomized query complexity: there exists a function f for which R(f^k) = Theta(k log k * R(f)). This answers an open question of Drucker (2012). Combining this result with the query-to-communication complexity lifting theorem of Göös, Pitassi, and Watson (2017), this also shows that there is a total function whose public-coin randomized communication complexity satisfies R^{cc}(f^k) = Theta(k log k * R^{cc}(f)), answering a question of Feder, Kushilevitz, Naor, and Nisan (1995).
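
Restating how the two main results combine (a sketch of the lower-bound direction, with epsilon fixed to a constant, f the separating function from the first theorem, and worst-case and expected query complexity identified up to constant factors):

  overline{R}_epsilon(f^k) = Theta(k * overline{R}_{epsilon/k}(f))                  (strong direct sum)
  overline{R}_{epsilon/k}(f) = Omega(R(f) * log(k/epsilon)) = Omega(R(f) * log k)   (separation, at error epsilon/k)
  => R(f^k) = Omega(k log k * R(f)).

The matching O(k log k * R(f)) upper bound follows from standard error reduction: run an R(f)-query algorithm O(log k) times per instance and take majorities.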

Cite as

Eric Blais and Joshua Brody. Optimal Separation and Strong Direct Sum for Randomized Query Complexity. In 34th Computational Complexity Conference (CCC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 137, pp. 29:1-29:17, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{blais_et_al:LIPIcs.CCC.2019.29,
  author =	{Blais, Eric and Brody, Joshua},
  title =	{{Optimal Separation and Strong Direct Sum for Randomized Query Complexity}},
  booktitle =	{34th Computational Complexity Conference (CCC 2019)},
  pages =	{29:1--29:17},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-116-0},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{137},
  editor =	{Shpilka, Amir},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2019.29},
  URN =		{urn:nbn:de:0030-drops-108511},
  doi =		{10.4230/LIPIcs.CCC.2019.29},
  annote =	{Keywords: Decision trees, query complexity, communication complexity}
}
Document
Relations and Equivalences Between Circuit Lower Bounds and Karp-Lipton Theorems

Authors: Lijie Chen, Dylan M. McKay, Cody D. Murray, and R. Ryan Williams


Abstract
A frontier open problem in circuit complexity is to prove P^{NP} is not in SIZE[n^k] for all k; this is a necessary intermediate step towards showing NP is not in P_{/poly}. Previously, for several classes containing P^{NP}, including NP^{NP}, ZPP^{NP}, and S_2 P, such lower bounds have been proved via Karp-Lipton-style theorems: to prove C is not in SIZE[n^k] for all k, we show that C subset P_{/poly} implies a "collapse" D = C for some larger class D, where we already know D is not in SIZE[n^k] for all k. It seems obvious that one could take a different approach to prove circuit lower bounds for P^{NP} that does not require proving any Karp-Lipton-style theorems along the way. We show this intuition is wrong: (weak) Karp-Lipton-style theorems for P^{NP} are equivalent to fixed-polynomial size circuit lower bounds for P^{NP}. That is, P^{NP} is not in SIZE[n^k] for all k if and only if (NP subset P_{/poly} implies PH subset i.o.-P^{NP}_{/n}). Next, we present new consequences of the assumption NP subset P_{/poly}, towards proving similar results for NP circuit lower bounds. We show that under the assumption, fixed-polynomial circuit lower bounds for NP, nondeterministic polynomial-time derandomizations, and various fixed-polynomial time simulations of NP are all equivalent. Applying this equivalence, we show that circuit lower bounds for NP imply better Karp-Lipton collapses. That is, if NP is not in SIZE[n^k] for all k, then for all C in {Parity-P, PP, PSPACE, EXP}, C subset P_{/poly} implies C subset i.o.-NP_{/n^epsilon} for all epsilon > 0. Note that unconditionally, the collapses are only to MA and not NP. We also explore consequences of circuit lower bounds for a sparse language in NP. Among other results, we show that if a polynomially sparse NP language does not have n^{1+epsilon}-size circuits, then MA subset i.o.-NP_{/O(log n)}, MA subset i.o.-P^{NP[O(log n)]}, and NEXP is not in SIZE[2^{o(m)}]. Finally, we observe connections between these results and the "hardness magnification" phenomena described in recent works.

Cite as

Lijie Chen, Dylan M. McKay, Cody D. Murray, and R. Ryan Williams. Relations and Equivalences Between Circuit Lower Bounds and Karp-Lipton Theorems. In 34th Computational Complexity Conference (CCC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 137, pp. 30:1-30:21, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{chen_et_al:LIPIcs.CCC.2019.30,
  author =	{Chen, Lijie and McKay, Dylan M. and Murray, Cody D. and Williams, R. Ryan},
  title =	{{Relations and Equivalences Between Circuit Lower Bounds and Karp-Lipton Theorems}},
  booktitle =	{34th Computational Complexity Conference (CCC 2019)},
  pages =	{30:1--30:21},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-116-0},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{137},
  editor =	{Shpilka, Amir},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2019.30},
  URN =		{urn:nbn:de:0030-drops-108525},
  doi =		{10.4230/LIPIcs.CCC.2019.30},
  annote =	{Keywords: Karp-Lipton Theorems, Circuit Lower Bounds, Derandomization, Hardness Magnification}
}
Document
A Fine-Grained Analogue of Schaefer’s Theorem in P: Dichotomy of Exists^k-Forall-Quantified First-Order Graph Properties

Authors: Karl Bringmann, Nick Fischer, and Marvin Künnemann


Abstract
An important class of problems in logics and database theory is given by fixing a first-order property psi over a relational structure, and considering the model-checking problem for psi. Recently, Gao, Impagliazzo, Kolokolova, and Williams (SODA 2017) identified this class as fundamental for the theory of fine-grained complexity in P, by showing that the (Sparse) Orthogonal Vectors problem is complete for this class under fine-grained reductions. This raises the question whether fine-grained complexity can yield a precise understanding of all first-order model-checking problems. Specifically, can we determine, for any fixed first-order property psi, the exponent of the optimal running time O(m^{c_psi}), where m denotes the number of tuples in the relational structure? Towards answering this question, in this work we give a dichotomy for the class of exists^k-forall-quantified graph properties. For every such property psi, we either give a polynomial-factor improvement over the baseline O(m^k)-time algorithm or show that it requires time m^{k-o(1)} under the hypothesis that MAX-3-SAT has no O((2-epsilon)^n)-time algorithm. More precisely, we define a hardness parameter h = H(psi) such that psi can be decided in time O(m^{k-epsilon}) if h <= 2, and requires time m^{k-o(1)} for h >= 3 unless the h-uniform HyperClique hypothesis fails. This unveils a natural hardness hierarchy within first-order properties: for every h >= 3, we show that there exists an exists^k-forall-quantified graph property psi with hardness H(psi) = h that is solvable in time O(m^{k-epsilon}) if and only if the h-uniform HyperClique hypothesis fails. Finally, we give more precise upper and lower bounds for an exemplary class of formulas with k = 3 and extend our classification to a counting dichotomy.
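
To fix intuition about the problem class, here is a naive Python baseline for one concrete exists^2-forall graph property, the "dominating pair" formula exists x1 exists x2 forall y (E(x1,y) or E(x2,y)). This is a hypothetical toy example: it enumerates vertex pairs, running in O(n^3) time, rather than implementing the edge-parameterized O(m^k) baseline discussed above.

def has_dominating_pair(n, edges):
    # Naive model checker for exists x1 exists x2 forall y: E(x1,y) or E(x2,y)
    # on a directed graph with vertices 0..n-1. Brute force over the
    # existential tuple, then a linear scan over the universal variable.
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
    for x1 in range(n):
        for x2 in range(n):
            if all(y in adj[x1] or y in adj[x2] for y in range(n)):
                return True
    return False

# Example: vertices 0 and 1 jointly reach every vertex (including themselves).
edges = [(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (1, 3)]
print(has_dominating_pair(4, edges))  # True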

Cite as

Karl Bringmann, Nick Fischer, and Marvin Künnemann. A Fine-Grained Analogue of Schaefer’s Theorem in P: Dichotomy of Exists^k-Forall-Quantified First-Order Graph Properties. In 34th Computational Complexity Conference (CCC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 137, pp. 31:1-31:27, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{bringmann_et_al:LIPIcs.CCC.2019.31,
  author =	{Bringmann, Karl and Fischer, Nick and K\"{u}nnemann, Marvin},
  title =	{{A Fine-Grained Analogue of Schaefer’s Theorem in P: Dichotomy of Exists^k-Forall-Quantified First-Order Graph Properties}},
  booktitle =	{34th Computational Complexity Conference (CCC 2019)},
  pages =	{31:1--31:27},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-116-0},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{137},
  editor =	{Shpilka, Amir},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2019.31},
  URN =		{urn:nbn:de:0030-drops-108533},
  doi =		{10.4230/LIPIcs.CCC.2019.31},
  annote =	{Keywords: Fine-grained Complexity, Hardness in P, Hyperclique Conjecture, Constrained Triangle Detection}
}
Document
Imperfect Gaps in Gap-ETH and PCPs

Authors: Mitali Bafna and Nikhil Vyas


Abstract
We study the role of perfect completeness in probabilistically checkable proof systems (PCPs) and give a way to transform a PCP with imperfect completeness into one with perfect completeness, when the initial gap is a constant. We show that PCP_{c,s}[r,q] subseteq PCP_{1,s'}[r+O(1), q+O(r)] for c - s = Omega(1), which in turn implies that one can convert imperfect completeness to perfect in linear-sized PCPs for NP with an O(log n) additive loss in the query complexity q. We show our result by constructing a "robust circuit" using threshold gates. These results give a gap amplification procedure for PCPs (when completeness is not 1), analogous to questions studied in parallel repetition [Anup Rao, 2011] and pseudorandomness [David Gillman, 1998], and might be of independent interest. We also investigate the time complexity of approximating perfectly satisfiable instances of 3SAT versus those with imperfect completeness. We show that the Gap-ETH conjecture without perfect completeness is equivalent to Gap-ETH with perfect completeness, i.e., MAX 3SAT(1-epsilon, 1-delta), for delta > epsilon, has 2^{o(n)} algorithms if and only if MAX 3SAT(1, 1-delta) has 2^{o(n)} algorithms. We also relate the time complexities of these two problems in a more fine-grained way, showing that T_2(n) <= T_1(n (log log n)^{O(1)}), where T_1(n) and T_2(n) denote the randomized time complexity of approximating MAX 3SAT with perfect and imperfect completeness, respectively.

Cite as

Mitali Bafna and Nikhil Vyas. Imperfect Gaps in Gap-ETH and PCPs. In 34th Computational Complexity Conference (CCC 2019). Leibniz International Proceedings in Informatics (LIPIcs), Volume 137, pp. 32:1-32:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


Copy BibTex To Clipboard

@InProceedings{bafna_et_al:LIPIcs.CCC.2019.32,
  author =	{Bafna, Mitali and Vyas, Nikhil},
  title =	{{Imperfect Gaps in Gap-ETH and PCPs}},
  booktitle =	{34th Computational Complexity Conference (CCC 2019)},
  pages =	{32:1--32:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-116-0},
  ISSN =	{1868-8969},
  year =	{2019},
  volume =	{137},
  editor =	{Shpilka, Amir},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CCC.2019.32},
  URN =		{urn:nbn:de:0030-drops-108545},
  doi =		{10.4230/LIPIcs.CCC.2019.32},
  annote =	{Keywords: PCP, Gap-ETH, Hardness of Approximation}
}
