34th COLT 2021: Boulder, Colorado, USA
- Mikhail Belkin, Samory Kpotufe: Conference on Learning Theory, COLT 2021, 15-19 August 2021, Boulder, Colorado, USA. Proceedings of Machine Learning Research 134, PMLR 2021
- Emmanuel Abbe, Elisabetta Cornacchia, Yuzhou Gu, Yury Polyanskiy: Stochastic block model entropy and broadcasting on trees with survey. 1-25
- Shubhada Agrawal, Sandeep Juneja, Wouter M. Koolen: Regret Minimization in Heavy-Tailed Bandits. 26-62
- Idan Amir, Tomer Koren, Roi Livni: SGD Generalizes Better Than GD (And Regularization Doesn't Help). 63-92
- Nima Anari, Moses Charikar, Kirankumar Shiragur, Aaron Sidford: The Bethe and Sinkhorn Permanents of Low Rank Matrices and Implications for Profile Maximum Likelihood. 93-158
- Gabriel P. Andrade, Rafael M. Frongillo, Georgios Piliouras: Learning in Matrix Games can be Arbitrarily Complex. 159-185
- Yair Ashlagi, Lee-Ad Gottlieb, Aryeh Kontorovich: Functions with average smoothness: structure, algorithms, and learning. 186-236
- Pranjal Awasthi, Vaggos Chatziafratis, Xue Chen, Aravindan Vijayaraghavan: Adversarially Robust Low Dimensional Representations. 237-325
- Waïss Azizian, Franck Iutzeler, Jérôme Malick, Panayotis Mertikopoulos: The Last-Iterate Convergence Rate of Optimistic Mirror Descent in Stochastic Variational Inequalities. 326-358
- Dheeraj Baby, Yu-Xiang Wang: Optimal Dynamic Regret in Exp-Concave Online Learning. 359-409
- Afonso S. Bandeira, Jess Banks, Dmitriy Kunisky, Cristopher Moore, Alexander S. Wein: Spectral Planting and the Hardness of Refuting Cuts, Colorability, and Communities in Random Graphs. 410-473
- Raef Bassily, Cristóbal Guzmán, Anupama Nandi: Non-Euclidean Differentially Private Stochastic Convex Optimization. 474-499
- Huck Bennett, Anindya De, Rocco A. Servedio, Emmanouil-Vasileios Vlatakis-Gkaragkounis: Reconstructing weighted voting schemes from partial information about their power indices. 500-565
- Tomer Berg, Or Ordentlich, Ofer Shayevitz: Deterministic Finite-Memory Bias Estimation. 566-585
- Omar Besbes, Yuri Fonseca, Ilan Lobel: Online Learning from Optimal Actions. 586
- Adam Block, Yuval Dagan, Alexander Rakhlin: Majorizing Measures, Sequential Complexities, and Online Learning. 587-590
- Avrim Blum, Steve Hanneke, Jian Qian, Han Shao: Robust learning under clean-label attack. 591-634
- Antoine Bodin, Nicolas Macris: Rank-one matrix estimation: analytic time evolution of gradient descent dynamics. 635-678
- Simina Brânzei, Yuval Peres: Multiplayer Bandit Learning, from Competition to Cooperation. 679-723
- Mark Braverman, Gillat Kol, Shay Moran, Raghuvansh R. Saxena: Near Optimal Distributed Learning of Halfspaces with Two Parties. 724-758
- Vladimir Braverman, Robert Krauthgamer, Aditya Krishnan, Shay Sapir: Near-Optimal Entrywise Sampling of Numerically Sparse Matrices. 759-773
- Matthew S. Brennan, Guy Bresler, Samuel B. Hopkins, Jerry Li, Tselil Schramm: Statistical Query Algorithms and Low Degree Tests Are Almost Equivalent. 774
- Marco Bressan, Nicolò Cesa-Bianchi, Silvio Lattanzi, Andrea Paudice: Exact Recovery of Clusters in Finite Metric Spaces Using Oracle Queries. 775-803
- Sébastien Bubeck, Yuanzhi Li, Dheeraj M. Nagaraj: A Law of Robustness for Two-Layers Neural Networks. 804-820
- Sébastien Bubeck, Thomas Budzinski, Mark Sellke: Cooperative and Stochastic Multi-Player Multi-Armed Bandit: Optimal Regret With Neither Communication Nor Collisions. 821-822
- Vivien A. Cabannes, Francis R. Bach, Alessandro Rudi: Fast Rates for Structured Prediction. 823-865
- Yair Carmon, Arun Jambulapati, Yujia Jin, Aaron Sidford: Thinking Inside the Ball: Near-Optimal Minimization of the Maximal Loss. 866-882
- Philippe Casgrain, Anastasis Kratsios: Optimizing Optimizers: Regret-optimal gradient descent algorithms. 883-926
- Niladri S. Chatterji, Philip M. Long, Peter L. Bartlett: When does gradient descent with logistic loss interpolate using deep networks with smoothed ReLU activations? 927-1027
- Wei-Ning Chen, Peter Kairouz, Ayfer Özgür: Breaking The Dimension Dependence in Sparse Distribution Estimation under Communication Constraints. 1028-1059
- Xi Chen, Rajesh Jayaram, Amit Levi, Erik Waingarten: Learning and testing junta distributions with subcube conditioning. 1060-1113
- Xinyi Chen, Elad Hazan: Black-Box Control for Linear Dynamical Systems. 1114-1143
- Xue Chen, Michal Derezinski: Query complexity of least absolute deviation regression via robust uniform convergence. 1144-1179
- Liyu Chen, Haipeng Luo, Chen-Yu Wei: Minimax Regret for Stochastic Shortest Path with Adversarial Costs and Known Transition. 1180-1215
- Liyu Chen, Haipeng Luo, Chen-Yu Wei: Impossible Tuning Made Possible: A New Expert Algorithm and Its Applications. 1216-1259
- Sinho Chewi, Chen Lu, Kwangjun Ahn, Xiang Cheng, Thibaut Le Gouic, Philippe Rigollet: Optimal dimension dependence of the Metropolis-Adjusted Langevin Algorithm. 1260-1300
- Alon Cohen, Haim Kaplan, Tomer Koren, Yishay Mansour: Online Markov Decision Processes with Aggregate Bandit Feedback. 1301-1329
- Romain Cosson, Devavrat Shah: Quantifying Variational Approximation for Log-Partition Function. 1330-1357
- Amit Daniely, Gal Vardi: From Local Pseudorandom Generators to Hardness of Learning. 1358-1394
- Constantinos Daskalakis, Vasilis Kontonis, Christos Tzamos, Emmanouil Zampetakis: A Statistical Taylor Theorem and Extrapolation of Truncated Densities. 1395-1398
- Anindya De, Rocco A. Servedio: Weak learning convex sets under normal distributions. 1399-1428
- Anindya De, Ryan O'Donnell, Rocco A. Servedio: Learning sparse mixtures of permutations from noisy information. 1429-1466
- Michal Derezinski, Zhenyu Liao, Edgar Dobriban, Michael W. Mahoney: Sparse sketches with small inversion bias. 1467-1510
- Ilias Diakonikolas, Daniel M. Kane: The Sample Complexity of Robust Covariance Testing. 1511-1521
- Ilias Diakonikolas, Daniel M. Kane, Vasilis Kontonis, Christos Tzamos, Nikos Zarifis: Agnostic Proper Learning of Halfspaces under Gaussian Marginals. 1522-1551
- Ilias Diakonikolas, Daniel M. Kane, Thanasis Pittas, Nikos Zarifis: The Optimality of Polynomial Regression for Agnostic Learning under Gaussian Marginals in the SQ Model. 1552-1584
- Ilias Diakonikolas, Russell Impagliazzo, Daniel M. Kane, Rex Lei, Jessica Sorrell, Christos Tzamos: Boosting in the Presence of Massart Noise. 1585-1644
- Ilias Diakonikolas, Daniel M. Kane, Alistair Stewart, Yuxin Sun: Outlier-Robust Learning of Ising Models Under Dobrushin's Condition. 1645-1682
- Zhiyan Ding, Qin Li, Jianfeng Lu, Stephen J. Wright: Random Coordinate Langevin Monte Carlo. 1683-1710
- Alain Durmus, Eric Moulines, Alexey Naumov, Sergey Samsonov, Hoi-To Wai: On the Stability of Random Matrix Product with Markovian Noise: Application to Linear Stochastic Approximation and TD Learning. 1711-1752
- Raaz Dwivedi, Lester Mackey: Kernel Thinning. 1753
- Ronen Eldan, Dan Mikulincer, Tselil Schramm: Non-asymptotic approximations of neural networks by Gaussian processes. 1754-1775
- Murat A. Erdogdu, Rasa Hosseinzadeh: On the Convergence of Langevin Monte Carlo: The Interplay between Tail Growth and Smoothness. 1776-1822
- Hossein Esfandiari, Amin Karbasi, Vahab S. Mirrokni: Adaptivity in Adaptive Submodularity. 1823-1846
- Mathieu Even, Laurent Massoulié: Concentration of Non-Isotropic Random Tensors with Applications to Learning and Empirical Risk Minimization. 1847-1886
- Cong Fang, Jason D. Lee, Pengkun Yang, Tong Zhang: Modeling from Features: a Mean-field Framework for Over-parameterized Deep Neural Networks. 1887-1936
- Meir Feder, Yury Polyanskiy: Sequential prediction under log-loss and misspecification. 1937-1964
- Xavier Fontaine, Valentin De Bortoli, Alain Durmus: Convergence rates and approximation results for SGD and its continuous-time counterpart. 1965-2058
- Dylan J. Foster, Alexander Rakhlin, David Simchi-Levi, Yunzong Xu: Instance-Dependent Complexity of Contextual Bandits and Reinforcement Learning: A Disagreement-Based Perspective. 2059
- Dimitris Fotakis, Alkis Kalavasis, Vasilis Kontonis, Christos Tzamos: Efficient Algorithms for Learning from Coarse Labels. 2060-2079
- Luca Ganassali, Laurent Massoulié, Marc Lelarge: Impossibility of Partial Recovery in the Graph Alignment Problem. 2080-2102
- Dan Garber, Noam Wolf: Frank-Wolfe with a Nearest Extreme Point Oracle. 2103-2132
- Badih Ghazi, Ravi Kumar, Pasin Manurangsi: On Avoiding the Union Bound When Answering Multiple Differentially Private Queries. 2133-2146
- Angeliki Giannou, Emmanouil-Vasileios Vlatakis-Gkaragkounis, Panayotis Mertikopoulos: Survival of the strictest: Stable and unstable equilibria under regularized learning with partial information. 2147-2148
- Noah Golowich: Differentially Private Nonparametric Regression Under a Growth Condition. 2149-2192
- Spencer Gordon, Bijan H. Mazaheri, Yuval Rabani, Leonard J. Schulman: Source Identification for Mixtures of Product Distributions. 2193-2216
- Peter Grünwald, Thomas Steinke, Lydia Zakynthinou: PAC-Bayes, MAC-Bayes and Conditional Mutual Information: Fast rate bounds that handle general VC classes. 2217-2247
- Chenghao Guo, Zhiyi Huang, Zhihao Gavin Tang, Xinzhi Zhang: Generalizing Complex Hypotheses on Product Distributions: Auctions, Prophet Inequalities, and Pandora's Problem. 2248-2288
- Steve Hanneke, Roi Livni, Shay Moran: Online Learning with Simple Predictors and a Combinatorial Characterization of Minimax in 0/1 Games. 2289-2314
- Jeff Z. HaoChen, Colin Wei, Jason D. Lee, Tengyu Ma: Shape Matters: Understanding the Implicit Bias of the Noise Covariance. 2315-2357
- Max Hopkins, Daniel Kane, Shachar Lovett, Michal Moshkovitz: Bounded Memory Active Learning through Enriched Queries. 2358-2387
- Yu-Guan Hsieh, Kimon Antonakopoulos, Panayotis Mertikopoulos: Adaptive Learning in Continuous Games: Optimal Regret Bounds and Convergence to Nash Equilibrium. 2388-2422
- Daniel Hsu, Clayton Sanford, Rocco A. Servedio, Emmanouil V. Vlatakis-Gkaragkounis: On the Approximation Power of Two-Layer Networks of Random ReLUs. 2423-2461
- Yichun Hu, Nathan Kallus, Masatoshi Uehara: Fast Rates for the Regret of Offline Reinforcement Learning. 2462
- De Huang, Jonathan Niles-Weed, Rachel A. Ward: Streaming k-PCA: Efficient guarantees for Oja's algorithm, beyond rank-one updates. 2463-2498
- Fotis Iliopoulos, Ilias Zadik: Group testing and local search: is there a computational-statistical gap? 2499-2551
- Shinji Ito: Parameter-Free Multi-Armed Bandit Algorithms with Hybrid Data-Dependent Regret Bounds. 2552-2583
- Tianyuan Jin, Pan Xu, Xiaokui Xiao, Quanquan Gu: Double Explore-then-Commit: Asymptotic Optimality and Beyond. 2584-2633
- Christopher Jung, Changhwa Lee, Mallesh M. Pai, Aaron Roth, Rakesh Vohra: Moment Multicalibration for Uncertainty Estimation. 2634-2678
- Praneeth Kacham, David P. Woodruff: Reduced-Rank Regression with Operator Norm Error. 2679-2716
- Peter Kairouz, Mónica Ribero Diaz, Keith Rush, Abhradeep Thakurta: (Nearly) Dimension Independent Private ERM with AdaGrad Rates via Publicly Estimated Subspaces. 2717-2746
- Haim Kaplan, Yishay Mansour, Uri Stemmer: The Sparse Vector Technique, Revisited. 2747-2776
- Johannes Kirschner, Tor Lattimore, Claire Vernade, Csaba Szepesvári: Asymptotically Optimal Information-Directed Sampling. 2777-2821
- Dmitriy Kunisky: Hypothesis testing with low-degree polynomials in the Morris class of exponential families. 2822-2848
- Gil Kur, Alexander Rakhlin: On the Minimal Error of Empirical Risk Minimization. 2849-2852
- Ilja Kuzborskij, Csaba Szepesvári: Nonparametric Regression with Shallow Overparameterized Neural Networks Trained by GD with Early Stopping. 2853-2890
- Andrew G. Lamperski: Projected Stochastic Gradient Langevin Algorithms for Constrained Sampling and Non-Convex Learning. 2891-2937
- Tor Lattimore, András György: Improved Regret for Zeroth-Order Stochastic Convex Bandits. 2938-2964
- Tor Lattimore, András György: Mirror Descent and the Information Ratio. 2965-2992
- Yin Tat Lee, Ruoqi Shen, Kevin Tian: Structured Logconcave Sampling with a Restricted Gaussian Oracle. 2993-3050
- Chris Junchi Li, Michael I. Jordan: Stochastic Approximation for Online Tensorial Independent Component Analysis. 3051-3106
- Gen Li, Yuting Wei, Yuejie Chi, Yuantao Gu, Yuxin Chen: Softmax Policy Gradient Methods Can Take Exponential Time to Converge. 3107-3110
- Yi Li, David P. Woodruff, Taisuke Yasuda: Exponentially Improved Dimensionality Reduction for l1: Subspace Embeddings and Independence Testing. 3111-3195
- Yulong Lu, Jianfeng Lu, Min Wang: A Priori Generalization Analysis of the Deep Ritz Method for Solving High Dimensional Elliptic Partial Differential Equations. 3196-3241
- Thodoris Lykouris, Max Simchowitz, Alex Slivkins, Wen Sun: Corruption-robust exploration in episodic reinforcement learning. 3242-3245
- Yury Makarychev, Ali Vakilian: Approximation Algorithms for Socially Fair Clustering. 3246-3264
- Eran Malach, Gilad Yehudai, Shai Shalev-Shwartz, Ohad Shamir: The Connection Between Approximation, Depth Separation and Learnability in Neural Networks. 3265-3295
- Cheng Mao, Mark Rudelson, Konstantin E. Tikhomirov: Random Graph Matching with Improved Noise Robustness. 3296-3329
- Saeed Masoudian, Yevgeny Seldin: Improved Analysis of the Tsallis-INF Algorithm in Stochastically Constrained Adversarial Bandits and Stochastic Bandits with Adversarial Corruptions. 3330-3350
- Song Mei, Theodor Misiakiewicz, Andrea Montanari: Learning with invariances in random features and kernel models. 3351-3418
- Ankur Moitra, Elchanan Mossel, Colin Sandon: Learning to Sample from Censored Markov Random Fields. 3419-3451
- Omar Montasser, Steve Hanneke, Nathan Srebro: Adversarially Robust Learning with Unknown Perturbation Sets. 3452-3482
- Mikito Nanashima: A Theory of Heuristic Learnability. 3483-3525
- Gergely Neu: Information-Theoretic Generalization Bounds for Stochastic Gradient Descent. 3526-3545
- Jonathan Niles-Weed, Ilias Zadik: It was "all" for "nothing": sharp phase transitions for noiseless discrete channels. 3546-3547
- Courtney Paquette, Kiwon Lee, Fabian Pedregosa, Elliot Paquette: SGD in the Large: Average-case Analysis, Asymptotics, and Stepsize Criticality. 3548-3626
- Sejun Park, Jaeho Lee, Chulhee Yun, Jinwoo Shin: Provable Memorization via Deep Neural Networks using Sub-linear Parameters. 3627-3661
- Pan Peng, Jiapeng Zhang: Towards a Query-Optimal and Time-Efficient Algorithm for Clustering with a Faulty Oracle. 3662-3680
- Juan C. Perdomo, Max Simchowitz, Alekh Agarwal, Peter L. Bartlett: Towards a Dimension-Free Understanding of Adaptive Linear Control. 3681-3770
- Orestis Plevrakis: Learning from Censored and Dependent Data: The case of Linear Dynamics. 3771-3787
- Chara Podimata, Alex Slivkins: Adaptive Discretization for Adversarial Lipschitz Bandits. 3788-3805
- Nikita Puchkin, Nikita Zhivotovskiy: Exponential savings in agnostic active learning through abstention. 3806-3832
- Mingda Qiao, Gregory Valiant: Exponential Weights Algorithms for Selective Learning. 3833-3858
- Cyrus Rashtchian, David P. Woodruff, Peng Ye, Hanlin Zhu: Average-Case Communication Complexity of Statistical Problems. 3859-3886
- Daniel Russo, Assaf Zeevi, Tianyi Zhang: Learning to Stop with Surprisingly Few Samples. 3887-3888
- Itay Safran, Gilad Yehudai, Ohad Shamir: The Effects of Mild Over-parameterization on the Optimization Landscape of Shallow ReLU Neural Networks. 3889-3934
- Othmane Sebbouh, Robert M. Gower, Aaron Defazio: Almost sure convergence rates for Stochastic Gradient Descent and Stochastic Heavy Ball. 3935-3971
- Uri Sherman, Tomer Koren: Lazy OCO: Online Convex Optimization on a Switching Budget. 3972-3988
- Maciej Skorski: Johnson-Lindenstrauss Transforms with Best Confidence. 3989-4007
- Arun Sai Suggala, Pradeep Ravikumar, Praneeth Netrapalli: Efficient Bandit Convex Optimization: Beyond Linear Losses. 4008-4067
- Rong Tang, Yun Yang: On Empirical Bayes Variational Autoencoder: An Excess Risk Bound. 4068-4125
- Enayat Ullah, Tung Mai, Anup Rao, Ryan A. Rossi, Raman Arora: Machine Unlearning via Algorithmic Stability. 4126-4142
- Adrien Vacher, Boris Muzellec, Alessandro Rudi, Francis R. Bach, François-Xavier Vialard: A Dimension-free Computational Upper-bound for Smooth Optimal Transport Estimation. 4143-4173
- Tim van Erven, Sarah Sachs, Wouter M. Koolen, Wojciech Kotlowski: Robust Online Convex Optimization in the Presence of Outliers. 4174-4194
- Gal Vardi, Daniel Reichman, Toniann Pitassi, Ohad Shamir: Size and Depth Separation in Approximating Benign Functions with Neural Networks. 4195-4223
- Gal Vardi, Ohad Shamir: Implicit Regularization in ReLU Networks with the Square Loss. 4224-4258
- Chen-Yu Wei, Chung-Wei Lee, Mengxiao Zhang, Haipeng Luo: Last-iterate Convergence of Decentralized Optimistic Gradient Descent/Ascent in Infinite-horizon Competitive Markov Games. 4259-4299
- Chen-Yu Wei, Haipeng Luo: Non-stationary Reinforcement Learning without Prior Knowledge: an Optimal Black-box Approach. 4300-4354
- Gellért Weisz, Philip Amortila, Barnabás Janzer, Yasin Abbasi-Yadkori, Nan Jiang, Csaba Szepesvári: On Query-efficient Planning in MDPs under Linear Realizability of the Optimal State-value Function. 4355-4385
- Blake E. Woodworth, Brian Bullins, Ohad Shamir, Nathan Srebro: The Min-Max Complexity of Distributed Stochastic Convex Optimization with Intermittent Communication. 4386-4437
- Haike Xu, Tengyu Ma, Simon S. Du: Fine-Grained Gap-Dependent Bounds for Tabular MDPs via Adaptive Multi-Step Bootstrap. 4438-4472
- Andrea Zanette, Ching-An Cheng, Alekh Agarwal: Cautiously Optimistic Policy Optimization and Exploration with Linear Function Approximation. 4473-4525
- Chicheng Zhang, Yinan Li: Improved Algorithms for Efficient Active Learning Halfspaces with Massart and Tsybakov Noise. 4526-4527
- Zihan Zhang, Xiangyang Ji, Simon S. Du: Is Reinforcement Learning More Difficult Than Bandits? A Near-optimal Algorithm Escaping the Curse of Horizon. 4528-4531
- Dongruo Zhou, Quanquan Gu, Csaba Szepesvári: Nearly Minimax Optimal Reinforcement Learning for Linear Mixture Markov Decision Processes. 4532-4576
- Mo Zhou, Rong Ge, Chi Jin: A Local Convergence Theory for Mildly Over-Parameterized Two-Layer Neural Network. 4577-4632
- Difan Zou, Jingfeng Wu, Vladimir Braverman, Quanquan Gu, Sham M. Kakade: Benign Overfitting of Constant-Stepsize SGD for Linear Regression. 4633-4635
- Sushant Agarwal, Nivasini Ananthakrishnan, Shai Ben-David, Tosca Lechner, Ruth Urner: Open Problem: Are all VC-classes CPAC learnable? 4636-4641
- Steve Hanneke: Open Problem: Is There an Online Learning Algorithm That Learns Whenever Online Learning Is Possible? 4642-4646
- Sattar Vakili, Jonathan Scarlett, Tara Javidi: Open Problem: Tight Online Confidence Intervals for RKHS Elements. 4647-4652
- Chulhee Yun, Suvrit Sra, Ali Jadbabaie: Open Problem: Can Single-Shuffle SGD be Better than Reshuffling SGD and GD? 4653-4658