Computer Science > Data Structures and Algorithms
[Submitted on 25 Aug 2024]
Title: Revisit the Partial Coloring Method: Prefix Spencer and Sampling
Abstract: As the most powerful tool in discrepancy theory, the partial coloring method has wide applications in many problems, including the Beck-Fiala problem and Spencer's celebrated result. Currently, there are two major algorithmic approaches to the partial coloring method: the first uses linear algebraic tools, and the second is the Gaussian measure algorithm. We explore the advantages of these two methods and show the following results for them separately.
1. Spencer conjectured that the prefix discrepancy of any $\mathbf{A} \in \{0,1\}^{m \times n}$ is $O(\sqrt{m})$. We show how to efficiently find a partial coloring with prefix discrepancy $O(\sqrt{m})$ and $\Omega(n)$ entries in $\{ \pm 1\}$. To the best of our knowledge, this provides the first partial coloring whose prefix discrepancy is almost optimal. However, unlike the classical discrepancy problem, there is no reduction in the number of variables $n$ for the prefix problem. By recursively applying partial coloring, we obtain a full coloring with prefix discrepancy $O(\sqrt{m} \cdot \log \frac{O(n)}{m})$; a schematic sketch of this recursion follows the abstract. Prior to this work, the best known bounds for the prefix Spencer conjecture for arbitrarily large $n$ were $2m$ and $O(\sqrt{m \log n})$.
2. Our second result extends the linear algebraic approach to a sampling algorithm in Spencer's classical setting. On the one hand, Spencer proved that there are $1.99^m$ good colorings with discrepancy $O(\sqrt{m})$. Hence a natural question is to design efficient random sampling algorithms in Spencer's setting. On the other hand, some applications of discrepancy theory prefer a random solution instead of a fixed one. Our second result is an efficient sampling algorithm whose random output has min-entropy $\Omega(n)$ and discrepancy $O(\sqrt{m})$. Moreover, our technique extends the linear algebraic framework by incorporating leverage scores from randomized matrix algorithms; an illustrative leverage-score computation is sketched below.
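A schematic sketch of the first result's objective and recursion, in Python with NumPy. The prefix discrepancy computation follows the standard definition $\max_k \| \sum_{j \le k} x_j a_j \|_\infty$; the step `naive_partial_coloring` is a hypothetical stand-in that only fixes the interface of a partial-coloring step; it does not implement the paper's algorithm and does not achieve the $O(\sqrt{m})$ guarantee.

import numpy as np

def prefix_discrepancy(A, x):
    """Standard prefix discrepancy: max over prefixes k of ||A[:, :k] @ x[:k]||_inf."""
    partial_sums = np.cumsum(A * x, axis=1)  # column k holds A[:, :k+1] @ x[:k+1]
    return np.abs(partial_sums).max()

def naive_partial_coloring(A, x, free, rng):
    """Placeholder step: rounds half of the free coordinates to random signs.
    This is NOT the paper's partial coloring and gives no discrepancy guarantee."""
    idx = np.flatnonzero(free)
    chosen = rng.choice(idx, size=max(1, len(idx) // 2), replace=False)
    x = x.copy()
    x[chosen] = rng.choice([-1.0, 1.0], size=len(chosen))
    return x

def full_coloring(A, partial_coloring_step, rng):
    """Recursion from the abstract: apply a partial-coloring step until every
    coordinate is in {+1, -1}; a step that fixes a constant fraction of the
    free coordinates terminates after O(log n) rounds."""
    x = np.zeros(A.shape[1])
    free = np.ones(A.shape[1], dtype=bool)
    while free.any():
        x = partial_coloring_step(A, x, free, rng)
        free = np.abs(x) < 1.0 - 1e-9  # coordinates not yet fixed to +/-1
    return np.sign(x)

rng = np.random.default_rng(0)
A = rng.integers(0, 2, size=(8, 64)).astype(float)  # A in {0,1}^{m x n}
x = full_coloring(A, naive_partial_coloring, rng)
print("prefix discrepancy:", prefix_discrepancy(A, x))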
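The abstract does not spell out how leverage scores enter the sampling algorithm; as background only (an assumed illustration, not the paper's procedure), the sketch below computes the standard column leverage scores $\tau_i = a_i^\top (\mathbf{A}\mathbf{A}^\top)^{+} a_i$, which lie in $[0,1]$ and sum to $\mathrm{rank}(\mathbf{A})$, the quantity typically used to bias sampling in randomized matrix algorithms.

import numpy as np

def column_leverage_scores(A):
    """Leverage score of column a_i of A: tau_i = a_i^T (A A^T)^+ a_i.
    The scores lie in [0, 1] and sum to rank(A)."""
    G_pinv = np.linalg.pinv(A @ A.T)  # pseudo-inverse handles rank deficiency
    return np.einsum('ij,jk,ik->i', A.T, G_pinv, A.T)

rng = np.random.default_rng(1)
A = rng.integers(0, 2, size=(8, 64)).astype(float)
tau = column_leverage_scores(A)
print("sum of leverage scores:", tau.sum(), "rank(A):", np.linalg.matrix_rank(A))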