32nd COLT 2019: Phoenix, AZ, USA
- Alina Beygelzimer, Daniel Hsu:
Conference on Learning Theory, COLT 2019, 25-28 June 2019, Phoenix, AZ, USA. Proceedings of Machine Learning Research 99, PMLR 2019
Preface
- Preface. 1-2
Contributed Papers
- Jayadev Acharya, Clément L. Canonne, Himanshu Tyagi:
Inference under Information Constraints: Lower Bounds from Chi-Square Contraction. 3-17
- Naman Agarwal, Alon Gonen, Elad Hazan:
Learning in Non-convex Games with an Optimization Oracle. 18-29
- Daniel Alabi, Adam Tauman Kalai, Katrina Ligett, Cameron Musco, Christos Tzamos, Ellen Vitercik:
Learning to Prune: Speeding up Repeated Computations. 30-33
- Maryam Aliakbarpour, Themis Gouleakis, John Peebles, Ronitt Rubinfeld, Anak Yodpinyanee:
Towards Testing Monotonicity of Distributions Over General Posets. 34-82
- Maryam Aliakbarpour, Ravi Kumar, Ronitt Rubinfeld:
Testing Mixtures of Discrete Distributions. 83-114
- Andreas Anastasiou, Krishnakumar Balasubramanian, Murat A. Erdogdu:
Normal Approximation for Stochastic Gradient Descent via Non-Asymptotic Rates of Martingale CLT. 115-137
- Peter Auer, Pratik Gajane, Ronald Ortner:
Adaptively Tracking the Best Bandit Arm with an Unknown Number of Distribution Changes. 138-158
- Peter Auer, Yifang Chen, Pratik Gajane, Chung-Wei Lee, Haipeng Luo, Ronald Ortner, Chen-Yu Wei:
Achieving Optimal Dynamic Regret for Non-stationary Bandits without Prior Information. 159-163
- Francis R. Bach, Kfir Y. Levy:
A Universal Algorithm for Variational Inequalities Adaptive to Smoothness and Noise. 164-194
- Ainesh Bakshi, Rajesh Jayaram, David P. Woodruff:
Learning Two Layer Rectified Neural Networks in Polynomial Time. 195-268
- Amos Beimel, Shay Moran, Kobbi Nissim, Uri Stemmer:
Private Center Points and Learning of Halfspaces. 269-282
- Ivona Bezáková, Antonio Blanca, Zongchen Chen, Daniel Stefankovic, Eric Vigoda:
Lower bounds for testing graphical models: colorings and antiferromagnetic Ising models. 283-298
- Aditya Bhaskara, Wai Ming Tai:
Approximate Guarantees for Dictionary Learning. 299-317
- Olivier Bousquet, Daniel Kane, Shay Moran:
The Optimal Approximation Factor in Density Estimation. 318-341
- Mark Braverman, Jieming Mao, Yuval Peres:
Sorted Top-k in Rounds. 342-382
- Mark Braverman, Jieming Mao, Jon Schneider, S. Matthew Weinberg:
Multi-armed Bandit Problems with Strategic Arms. 383-416
- Matthew S. Brennan, Guy Bresler, Wasim Huleihel:
Universality of Computational Lower Bounds for Submatrix Detection. 417-468
- Matthew S. Brennan, Guy Bresler:
Optimal Average-Case Reductions to Sparse PCA: From Weak Assumptions to Strong Hardness. 469-470
- Victor-Emmanuel Brunel:
Learning rates for Gaussian mixtures under group action. 471-491
- Sébastien Bubeck, Qijia Jiang, Yin Tat Lee, Yuanzhi Li, Aaron Sidford:
Near-optimal method for highly smooth convex optimization. 492-507
- Sébastien Bubeck, Yuanzhi Li, Haipeng Luo, Chen-Yu Wei:
Improved Path-length Regret Bounds for Bandits. 508-528
- Róbert Busa-Fekete, Dimitris Fotakis, Balázs Szörényi, Manolis Zampetakis:
Optimal Learning of Mallows Block Model. 529-532
- Daniele Calandriello, Luigi Carratino, Alessandro Lazaric, Michal Valko, Lorenzo Rosasco:
Gaussian Process Optimization with Adaptive Sketching: Scalable and No Regret. 533-557
- Tongyi Cao, Akshay Krishnamurthy:
Disagreement-Based Combinatorial Pure Exploration: Sample Complexity Bounds and an Efficient Algorithm. 558-588
- Yair Carmon, John C. Duchi, Aaron Sidford, Kevin Tian:
A Rank-1 Sketch for Matrix Multiplicative Weights. 589-623
- Vaggos Chatziafratis, Tim Roughgarden, Joshua R. Wang:
On the Computational Power of Online Gradient Descent. 624-662
- Xue Chen, Eric Price:
Active Regression via Linear-Sample Sparsification. 663-695
- Yifang Chen, Chung-Wei Lee, Haipeng Luo, Chen-Yu Wei:
A New Algorithm for Non-stationary Contextual Bandits: Efficient, Optimal and Parameter-free. 696-726
- Yu Cheng, Ilias Diakonikolas, Rong Ge, David P. Woodruff:
Faster Algorithms for High-Dimensional Robust Covariance Estimation. 727-757
- Yeshwanth Cherapanamjeri, Peter L. Bartlett:
Testing Symmetric Markov Chains Without Hitting. 758-785
- Yeshwanth Cherapanamjeri, Nicolas Flammarion, Peter L. Bartlett:
Fast Mean Estimation with Sub-Gaussian Rates. 786-806
- Yun Kuen Cheung, Georgios Piliouras:
Vortices Instead of Equilibria in MinMax Optimization: Chaos and Butterfly Effects of Online Learning in Zero-Sum Games. 807-834
- Christian Coester, James R. Lee:
Pure entropic regularization for metrical task systems. 835-848
- Michael B. Cohen, Ben Cousins, Yin Tat Lee, Xin Yang:
A near-optimal algorithm for approximating the John Ellipsoid. 849-873
- Ashok Cutkosky:
Artificial Constraints and Hints for Unbounded Online Learning. 874-894
- Ashok Cutkosky:
Combining Online Learning Guarantees. 895-913
- Yuval Dagan, Constantinos Daskalakis, Nishanth Dikkala, Siddhartha Jayanti:
Learning from Weakly Dependent Data under Dobrushin's Condition. 914-928
- Yuval Dagan, Gil Kur, Ohad Shamir:
Space lower bounds for linear prediction in the streaming model. 929-954
- Constantinos Daskalakis, Themis Gouleakis, Christos Tzamos, Manolis Zampetakis:
Computationally and Statistically Efficient Truncated Regression. 955-960
- Sami Davies, Miklós Z. Rácz, Cyrus Rashtchian:
Reconstructing Trees from Traces. 961-978
- Anindya De, Elchanan Mossel, Joe Neeman:
Is your function low dimensional? 979-993
- Akshay Degwekar, Preetum Nakkiran, Vinod Vaikuntanathan:
Computational Limitations in Robust Classification and Win-Win Results. 994-1028
- Michal Derezinski:
Fast determinantal point processes via distortion-free intermediate sampling. 1029-1049
- Michal Derezinski, Kenneth L. Clarkson, Michael W. Mahoney, Manfred K. Warmuth:
Minimax experimental design: Bridging the gap between statistical and worst-case approaches to least squares regression. 1050-1069
- Ilias Diakonikolas, Themis Gouleakis, Daniel M. Kane, Sankeerth Rao:
Communication and Memory Efficient Testing of Discrete Distributions. 1070-1106
- Ilias Diakonikolas, Daniel M. Kane, John Peebles:
Testing Identity of Multidimensional Histograms. 1107-1131
- Jelena Diakonikolas, Cristóbal Guzmán:
Lower Bounds for Parallel and Randomized Convex Optimization. 1132-1157
- Shi Dong, Tengyu Ma, Benjamin Van Roy:
On the Performance of Thompson Sampling on Logistic Bandits. 1158-1160
- John C. Duchi, Ryan Rogers:
Lower Bounds for Locally Private Estimation via Communication Complexity. 1161-1191
- Cong Fang, Zhouchen Lin, Tong Zhang:
Sharp Analysis for Nonconvex SGD Escaping from Saddle Points. 1192-1234
- Yingjie Fei, Yudong Chen:
Achieving the Bayes Error Rate in Stochastic Block Model by SDP, Robustly. 1235-1269
- Vitaly Feldman, Jan Vondrák:
High probability generalization bounds for uniformly stable algorithms with nearly optimal rate. 1270-1279
- Dylan J. Foster, Andrej Risteski:
Sum-of-squares meets square loss: Fast rates for agnostic tensor completion. 1280-1318
- Dylan J. Foster, Ayush Sekhari, Ohad Shamir, Nathan Srebro, Karthik Sridharan, Blake E. Woodworth:
The Complexity of Making the Gradient Small in Stochastic Convex Optimization. 1319-1345
- Dylan J. Foster, Vasilis Syrgkanis:
Statistical Learning with a Nuisance Component. 1346-1348
- Dan Garber:
On the Regret Minimization of Nonconvex Online Gradient Ascent for Online PCA. 1349-1373
- Alexander V. Gasnikov, Pavel E. Dvurechensky, Eduard Gorbunov, Evgeniya A. Vorontsova, Daniil Selikhanovych, César A. Uribe:
Optimal Tensor Methods in Smooth Convex and Uniformly Convex Optimization. 1374-1391
- Alexander V. Gasnikov, Pavel E. Dvurechensky, Eduard Gorbunov, Evgeniya A. Vorontsova, Daniil Selikhanovych, César A. Uribe, Bo Jiang, Haoyue Wang, Shuzhong Zhang, Sébastien Bubeck, Qijia Jiang, Yin Tat Lee, Yuanzhi Li, Aaron Sidford:
Near Optimal Methods for Minimizing Convex Functions with Lipschitz $p$-th Derivatives. 1392-1393
- Rong Ge, Zhize Li, Weiyao Wang, Xiang Wang:
Stabilized SVRG: Simple Variance Reduction for Nonconvex Optimization. 1394-1448
- Surbhi Goel, Daniel M. Kane, Adam R. Klivans:
Learning Ising Models with Independent Failures. 1449-1469
- Surbhi Goel, Adam R. Klivans:
Learning Neural Networks with Two Nonlinear Layers in Polynomial Time. 1470-1499
- Christina Göpfert, Shai Ben-David, Olivier Bousquet, Sylvain Gelly, Ilya O. Tolstikhin, Ruth Urner:
When can unlabeled data improve the learning rate? 1500-1518
- Navin Goyal, Abhishek Shetty:
Sampling and Optimization on Convex Sets in Riemannian Manifolds of Non-Negative Curvature. 1519-1561
- Anupam Gupta, Tomer Koren, Kunal Talwar:
Better Algorithms for Stochastic Bandits with Adversarial Corruptions. 1562-1578
- Nicholas J. A. Harvey, Christopher Liaw, Yaniv Plan, Sikander Randhawa:
Tight analyses for non-smooth stochastic gradient descent. 1579-1613
- Jan Hazla, Ali Jadbabaie, Elchanan Mossel, M. Amin Rahimian:
Reasoning in Bayesian Opinion Exchange Networks Is PSPACE-Hard. 1614-1648
- Samuel B. Hopkins, Jerry Li:
How Hard is Robust Mean Estimation? 1649-1682
- Samuel B. Hopkins, Tselil Schramm, Jonathan Shi:
A Robust Spectral Algorithm for Overcomplete Tensor Decomposition. 1683-1722
- Piotr Indyk, Ali Vakilian, Tal Wagner, David P. Woodruff:
Sample-Optimal Low-Rank Approximation of Distance Matrices. 1723-1751
- Prateek Jain, Dheeraj Nagaraj, Praneeth Netrapalli:
Making the Last Iterate of SGD Information Theoretically Optimal. 1752-1755
- Vishesh Jain, Frederic Koehler, Jingbo Liu, Elchanan Mossel:
Accuracy-Memory Tradeoffs and Phase Transitions in Belief Propagation. 1756-1771
- Ziwei Ji, Matus Telgarsky:
The implicit bias of gradient descent on nonseparable data. 1772-1798
- Bo Jiang, Haoyue Wang, Shuzhong Zhang:
An Optimal High-Order Tensor Method for Convex Optimization. 1799-1801
- Kwang-Sung Jun, Francesco Orabona:
Parameter-Free Online Convex Optimization with Sub-Exponential Noise. 1802-1823
- Sandeep Juneja, Subhashini Krishnasamy:
Sample complexity of partition identification using multi-armed bandits. 1824-1852
- Gautam Kamath, Jerry Li, Vikrant Singhal, Jonathan R. Ullman:
Privately Learning High-Dimensional Distributions. 1853-1902
- Daniel Kane, Roi Livni, Shay Moran, Amir Yehudayoff:
On Communication Complexity of Classification Problems. 1903-1943
- Belhal Karimi, Blazej Miasojedow, Eric Moulines, Hoi-To Wai:
Non-asymptotic Analysis of Biased Stochastic Approximation Scheme. 1944-1974
- Zohar S. Karnin, Edo Liberty:
Discrepancy, Coresets, and Sketches in Machine Learning. 1975-1993
- Wojciech Kotlowski, Gergely Neu:
Bandit Principal Component Analysis. 1994-2024
- Akshay Krishnamurthy, John Langford, Aleksandrs Slivkins, Chicheng Zhang:
Contextual bandits with continuous actions: Smoothing, zooming, and adapting. 2025-2027
- Ilja Kuzborskij, Nicolò Cesa-Bianchi, Csaba Szepesvári:
Distribution-Dependent Analysis of Gibbs-ERM Principle. 2028-2054
- Jeongyeol Kwon, Wei Qian, Constantine Caramanis, Yudong Chen, Damek Davis:
Global Convergence of the EM Algorithm for Mixtures of Two Component Linear Regression. 2055-2110
- Tor Lattimore, Csaba Szepesvári:
An Information-Theoretic Approach to Minimax Regret in Partial Monitoring. 2111-2139
- Yin Tat Lee, Zhao Song, Qiuyi Zhang:
Solving Empirical Risk Minimization in the Current Matrix Multiplication Time. 2140-2157
- Jerry Li, Aleksandar Nikolov, Ilya P. Razenshteyn, Erik Waingarten:
On Mean Estimation for General Norms with Statistical Queries. 2158-2172
- Yingkai Li, Yining Wang, Yuan Zhou:
Nearly Minimax-Optimal Regret for Linearly Parameterized Bandits. 2173-2174
- Meimei Liu, Zuofeng Shang, Guang Cheng:
Sharp Theoretical Analysis for Nonparametric Testing under Random Projection. 2175-2209
- Vivek Madan, Mohit Singh, Uthaipon Tantipongpipat, Weijun Xie:
Combinatorial Algorithms for Optimal Design. 2210-2258
- Oren Mangoubi, Nisheeth K. Vishnoi:
Nonconvex sampling with the Metropolis-adjusted Langevin algorithm. 2259-2293
- Ulysse Marteau-Ferey, Dmitrii Ostrovskii, Francis R. Bach, Alessandro Rudi:
Beyond Least-Squares: Fast Rates for Regularized Empirical Risk Minimization through Self-Concordance. 2294-2340
- Laurent Massoulié, Ludovic Stephan, Don Towsley:
Planting trees in graphs, and finding them back. 2341-2371
- Andreas Maurer, Massimiliano Pontil:
Uniform concentration and symmetrization for weak interactions. 2372-2387
- Song Mei, Theodor Misiakiewicz, Andrea Montanari:
Mean-field theory of two-layers neural networks: dimension-free bounds and kernel limit. 2388-2464
- Nadav Merlis, Shie Mannor:
Batch-Size Independent Regret Bounds for the Combinatorial Multi-Armed Bandit Problem. 2465-2489
- Zakaria Mhammedi, Wouter M. Koolen, Tim van Erven:
Lipschitz Adaptivity with Multiple Learning Rates in Online Learning. 2490-2511
- Omar Montasser, Steve Hanneke, Nathan Srebro:
VC Classes are Adversarially Robustly Learnable, but Only Improperly. 2512-2530
- Dmitrii M. Ostrovskii, Alessandro Rudi:
Affine Invariant Covariance Estimation for Heavy-Tailed Distributions. 2531-2550
- Samet Oymak:
Stochastic Gradient Descent Learns State Equations with Nonlinear Activations. 2551-2579
- Mingda Qiao, Gregory Valiant:
A Theory of Selective Prediction. 2580-2594
- Alexander Rakhlin, Xiyu Zhai:
Consistency of Interpolation with Laplace Kernels is a High-Dimensional Phenomenon. 2595-2623
- Henry W. J. Reeve, Ata Kabán:
Classification with unknown class-conditional label noise on non-compact feature spaces. 2624-2651
- Galen Reeves, Jiaming Xu, Ilias Zadik:
The All-or-Nothing Phenomenon in Sparse Linear Regression. 2652-2663
- Itay Safran, Ronen Eldan, Ohad Shamir:
Depth Separations in Neural Networks: What is Actually Being Separated? 2664-2666
- Pedro Savarese, Itay Evron, Daniel Soudry, Nathan Srebro:
How do infinite width bounded norm networks look in function space? 2667-2690
- Ohad Shamir:
Exponential Convergence Time of Gradient Descent for One-Dimensional Deep Linear Neural Networks. 2691-2713
- Max Simchowitz, Ross Boczar, Benjamin Recht:
Learning Linear Dynamical Systems with Semi-Parametric Least Squares. 2714-2802
- R. Srikant, Lei Ying:
Finite-Time Error Bounds For Linear Stochastic Approximation and TD Learning. 2803-2830
- Ludovic Stephan, Laurent Massoulié:
Robustness of Spectral Methods for Community Detection. 2831-2860
- Damian Straszak, Nisheeth K. Vishnoi:
Maximum Entropy Distributions: Bit Complexity and Stability. 2861-2891
- Arun Sai Suggala, Kush Bhatia, Pradeep Ravikumar, Prateek Jain:
Adaptive Hard Thresholding for Near-optimal Consistent Robust Regression. 2892-2897
- Wen Sun, Nan Jiang, Akshay Krishnamurthy, Alekh Agarwal, John Langford:
Model-based RL in Contextual Decision Processes: PAC bounds and Exponential Improvements over Model-free Approaches. 2898-2933
- Adrien B. Taylor, Francis R. Bach:
Stochastic first-order methods: non-asymptotic and computer-aided analyses via potential functions. 2934-2992
- Christopher Tosh, Sanjoy Dasgupta:
The Relative Complexity of Maximum Likelihood Estimation, MAP Estimation, and Sampling. 2993-3035
- Stephen Tu, Benjamin Recht:
The Gap Between Model-Based and Model-Free Methods on the Linear Quadratic Regulator: An Asymptotic Viewpoint. 3036-3083
- Belinda Tzen, Maxim Raginsky:
Theoretical guarantees for sampling and inference in generative models with latent diffusions. 3084-3114
- Santosh S. Vempala, John Wilmes:
Gradient Descent for One-Hidden-Layer Neural Networks: Polynomial Convergence and SQ Lower Bounds. 3115-3117
- Jonathan Weed, Quentin Berthet:
Estimation of smooth densities in Wasserstein distance. 3118-3119
- Geoffrey Wolfer, Aryeh Kontorovich:
Estimating the Mixing Time of Ergodic Markov Chains. 3120-3159
- Lijun Zhang, Zhi-Hua Zhou:
Stochastic Approximation of Smooth and Strongly Convex Functions: Beyond the $O(1/T)$ Convergence Rate. 3160-3179
Open Problems
- Amit Daniely, Vitaly Feldman:
Open Problem: Is Margin Sufficient for Non-Interactive Private Distributed Learning? 3180-3184
- Vitaly Feldman, Roy Frostig, Moritz Hardt:
Open Problem: How fast can a multiclass test set be overfit? 3185-3189
- Rong Ge, Prateek Jain, Sham M. Kakade, Rahul Kidambi, Dheeraj M. Nagaraj, Praneeth Netrapalli:
Open Problem: Do Good Algorithms Necessarily Query Bad Points? 3190-3193
- Filipo Studzinski Perotto, Mathieu Bourgais, Bruno C. Silva, Laurent Vercouter:
Open Problem: Risk of Ruin in Multiarmed Bandits. 3194-3197
- Tom J. Viering, Alexander Mey, Marco Loog:
Open Problem: Monotonicity of Learning. 3198-3201
- Blake E. Woodworth, Nathan Srebro:
Open Problem: The Oracle Complexity of Convex Optimization with Limited Memory. 3202-3210