MSML 2020: Virtual Conference / Princeton, NJ, USA
- Jianfeng Lu, Rachel A. Ward: Proceedings of Mathematical and Scientific Machine Learning, MSML 2020, 20-24 July 2020, Virtual Conference / Princeton, NJ, USA. Proceedings of Machine Learning Research 107, PMLR 2020
- Roozbeh Yousefzadeh, Dianne P. O'Leary: Deep learning interpretation: Flip points and homotopy methods. 1-26
- Alia Abbara, Benjamin Aubin, Florent Krzakala, Lenka Zdeborová: Rademacher complexity and spin glasses: A link between the replica and statistical theories of learning. 27-54
- Benjamin Aubin, Bruno Loureiro, Antoine Baker, Florent Krzakala, Lenka Zdeborová: Exact asymptotics for phase retrieval and compressed sensing with random generative priors. 55-73
- Beñat Mencia Uranga, Austen Lamacraft: SchrödingerRNN: Generative modeling of raw audio as a continuously observed quantum state. 74-106
- François Malgouyres: On the stable recovery of deep structured linear networks under sparsity constraints. 107-127
- Armenak Petrosyan, Anton Dereventsov, Clayton G. Webster: Neural network integral representations with the ReLU activation function. 128-143
- Yaoyu Zhang, Zhi-Qin John Xu, Tao Luo, Zheng Ma: A type of generalization error induced by initialization in deep neural networks. 144-164
- Sho Yaida: Non-Gaussian processes and neural networks at finite widths. 165-192
- Yunru Liu, Tingran Gao, Haizhao Yang: SelectNet: Learning to Sample from the Wild for Imbalanced Data Training. 193-206
- Kailai Xu, Eric Darve: Calibrating Multivariate Lévy Processes with Neural Networks. 207-220
- Jiequn Han, Ruimeng Hu: Deep Fictitious Play for Finding Markovian Nash Equilibrium in Multi-Agent Games. 221-245
- Yuhua Zhu, Lexing Ying: Borrowing From the Future: An Attempt to Address Double Sampling. 246-268
- Wuyang Li, Xueshuang Xiang, Yingxiang Xu: Deep Domain Decomposition Method: Elliptic Problems. 269-286
- Antoine Maillard, Gérard Ben Arous, Giulio Biroli: Landscape Complexity for the Empirical Risk of Generalized Linear Models. 287-327
- Bao Wang, Quanquan Gu, March Boedihardjo, Lingxiao Wang, Farzin Barekat, Stanley J. Osher: DP-LSSGD: A Stochastic Optimization Method to Lift the Utility in Privacy-Preserving ERM. 328-351
- Yifan Sun, Linan Zhang, Hayden Schaeffer: NeuPDE: Neural Network Based Ordinary and Partial Differential Equations for Modeling Time-Dependent Data. 352-372
- Chao Ma, Lei Wu, Weinan E: The Slow Deterioration of the Generalization Error of the Random Feature Model. 373-389
- Hugo Cui, Luca Saglietti, Lenka Zdeborová: Large deviations for the perceptron model and consequences for active learning. 390-430
- Zhongshu Xu, Yingzhou Li, Xiuyuan Cheng: Butterfly-Net2: Simplified Butterfly-Net and Fourier Transform Initialization. 431-450
- Andreas Mardt, Luca Pasquali, Frank Noé, Hao Wu: Deep learning Markov and Koopman models with physical constraints. 451-475
- Tankut Can, Kamesh Krishnamurthy, David J. Schwab: Gating creates slow modes and controls phase-space complexity in GRUs and LSTMs. 476-511
- Eric C. Cyr, Mamikon A. Gulian, Ravi G. Patel, Mauro Perego, Nathaniel A. Trask: Robust Training and Initialization of Deep Neural Networks: An Adaptive Basis Viewpoint. 512-536
- Vladimir A. Kobzar, Robert V. Kohn, Zhilei Wang: New Potential-Based Bounds for the Geometric-Stopping Version of Prediction with Expert Advice. 537-554
- Karthik V. Aadithya, Paul Kuberry, Biliana S. Paskaleva, Pavel B. Bochev, K. Leeson, A. Mar, Ting Mei, Eric R. Keiter: Data-driven Compact Models for Circuit Design and Analysis. 555-569
- Michael Perlmutter, Feng Gao, Guy Wolf, Matthew J. Hirn: Geometric Wavelet Scattering Networks on Compact Riemannian Manifolds. 570-604
- Jiahao Yao, Marin Bukov, Lin Lin: Policy Gradient based Quantum Approximate Optimization Algorithm. 605-634
- Ariel Barr, Willem Gispen, Austen Lamacraft: Quantum Ground States from Reinforcement Learning. 635-653