ICLR 2016: San Juan, Puerto Rico
- Yoshua Bengio, Yann LeCun: 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings, 2016.
Oral Presentations
- Scott E. Reed, Nando de Freitas: Neural Programmer-Interpreters.
- David Krueger, Roland Memisevic: Regularizing RNNs by Stabilizing Activations.
- Shihao Ji, S. V. N. Vishwanathan, Nadathur Satish, Michael J. Anderson, Pradeep Dubey: BlackOut: Speeding up Recurrent Neural Network Language Models With Very Large Vocabularies.
- Felix Hill, Antoine Bordes, Sumit Chopra, Jason Weston: The Goldilocks Principle: Reading Children's Books with Explicit Memory Representations.
- John Wieting, Mohit Bansal, Kevin Gimpel, Karen Livescu: Towards Universal Paraphrastic Sentence Embeddings.
- Yixuan Li, Jason Yosinski, Jeff Clune, Hod Lipson, John E. Hopcroft: Convergent Learning: Do different neural networks learn the same representations?
- Tianqi Chen, Ian J. Goodfellow, Jonathon Shlens: Net2Net: Accelerating Learning via Knowledge Transfer.
- Dustin Tran, Rajesh Ranganath, David M. Blei: The Variational Gaussian Process.
- Christos Louizos, Kevin Swersky, Yujia Li, Max Welling, Richard S. Zemel: The Variational Fair Autoencoder.
- Lucas Theis, Aäron van den Oord, Matthias Bethge: A note on the evaluation of generative models.
- Song Han, Huizi Mao, William J. Dally: Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding.
- Zhouhan Lin, Matthieu Courbariaux, Roland Memisevic, Yoshua Bengio: Neural Networks with Few Multiplications.
- Ivan Vendrov, Ryan Kiros, Sanja Fidler, Raquel Urtasun: Order-Embeddings of Images and Language.
- Elman Mansimov, Emilio Parisotto, Lei Jimmy Ba, Ruslan Salakhutdinov: Generating Images from Captions with Attention.
- Johannes Ballé, Valero Laparra, Eero P. Simoncelli: Density Modeling of Images using a Generalized Normalization Transformation.
Poster Presentations
- Fisher Yu, Vladlen Koltun: Multi-Scale Context Aggregation by Dilated Convolutions.
- Zachary Chase Lipton, David C. Kale, Charles Elkan, Randall C. Wetzel: Learning to Diagnose with LSTM Recurrent Neural Networks.
- Tom Schaul, John Quan, Ioannis Antonoglou, David Silver: Prioritized Experience Replay.
- Yuri Burda, Roger B. Grosse, Ruslan Salakhutdinov: Importance Weighted Autoencoders.
- Zhenwen Dai, Andreas C. Damianou, Javier González, Neil D. Lawrence: Variational Auto-encoded Deep Gaussian Processes.
- Yani Ioannou, Duncan P. Robertson, Jamie Shotton, Roberto Cipolla, Antonio Criminisi: Training CNNs with Low-Rank Filters for Efficient Image Classification.
- Michael Cogswell, Faruk Ahmed, Ross B. Girshick, Larry Zitnick, Dhruv Batra: Reducing Overfitting in Deep Networks by Decorrelating Representations.
- Iasonas Kokkinos: Surpassing Humans in Boundary Detection using Deep Learning.
- Tim Rocktäschel, Edward Grefenstette, Karl Moritz Hermann, Tomás Kociský, Phil Blunsom: Reasoning about Entailment with Neural Attention.
- Cheng Tai, Tong Xiao, Xiaogang Wang, Weinan E: Convolutional neural networks with low-rank regularization.
- David Lopez-Paz, Léon Bottou, Bernhard Schölkopf, Vladimir Vapnik: Unifying distillation and privileged information.
- Giorgos Tolias, Ronan Sicre, Hervé Jégou: Particular object retrieval with integral max-pooling of CNN activations.
- Dmytro Mishkin, Jiri Matas: All you need is a good init.
- Theofanis Karaletsos, Serge J. Belongie, Gunnar Rätsch: When crowds hold privileges: Bayesian unsupervised representation learning with oracle constraints.
- Arvind Neelakantan, Quoc V. Le, Ilya Sutskever: Neural Programmer: Inducing Latent Programs with Gradient Descent.
- Philipp Moritz, Robert Nishihara, Ion Stoica, Michael I. Jordan: SparkNet: Training Deep Networks in Spark.
- Jost Tobias Springenberg: Unsupervised and Semi-supervised Learning with Categorical Generative Adversarial Networks.
- Shixiang Gu, Sergey Levine, Ilya Sutskever, Andriy Mnih: MuProp: Unbiased Backpropagation for Stochastic Neural Networks.
- Hristo S. Paskov, John C. Mitchell, Trevor J. Hastie: Data Representation and Compression Using Linear-Programming Approximations.
- Zelda Mariet, Suvrit Sra: Diversity Networks.
- Matthew J. Hausknecht, Peter Stone: Deep Reinforcement Learning in Parameterized Action Space.
- Katerina Fragkiadaki, Pulkit Agrawal, Sergey Levine, Jitendra Malik: Learning Visual Predictive Models of Physics for Playing Billiards.
- Jason Weston, Antoine Bordes, Sumit Chopra, Tomás Mikolov: Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks.
- Jesse Dodge, Andreea Gane, Xiang Zhang, Antoine Bordes, Sumit Chopra, Alexander H. Miller, Arthur Szlam, Jason Weston: Evaluating Prerequisite Qualities for Learning End-to-End Dialog Systems.
- Yuandong Tian, Yan Zhu: Better Computer Go Player with Neural Network and Long-term Prediction.
- Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, Ken Nakae, Shin Ishii: Distributional Smoothing by Virtual Adversarial Examples.
- Minh-Thang Luong, Quoc V. Le, Ilya Sutskever, Oriol Vinyals, Lukasz Kaiser: Multi-task Sequence to Sequence Learning.
- Wacha Bounliphone, Eugene Belilovsky, Matthew B. Blaschko, Ioannis Antonoglou, Arthur Gretton: A Test of Relative Similarity For Model Selection in Generative Models.
- Yong-Deok Kim, Eunhyeok Park, Sungjoo Yoo, Taelim Choi, Lu Yang, Dongjun Shin: Compression of Deep Convolutional Neural Networks for Fast and Low Power Mobile Applications.
- Balázs Hidasi, Alexandros Karatzoglou, Linas Baltrunas, Domonkos Tikk: Session-based Recommendations with Recurrent Neural Networks.
- Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, Daan Wierstra: Continuous control with deep reinforcement learning.
- César Lincoln C. Mattos, Zhenwen Dai, Andreas C. Damianou, Jeremy Forth, Guilherme A. Barreto, Neil D. Lawrence: Recurrent Gaussian Processes.
- Stefano Soatto, Alessandro Chiuso: Modeling Visual Representations: Defining Properties and Deep Approximations.
- Samaneh Azadi, Jiashi Feng, Stefanie Jegelka, Trevor Darrell: Auxiliary Image Regularization for Deep CNNs with Noisy Labels.
- Andrei A. Rusu, Sergio Gomez Colmenarejo, Çaglar Gülçehre, Guillaume Desjardins, James Kirkpatrick, Razvan Pascanu, Volodymyr Mnih, Koray Kavukcuoglu, Raia Hadsell: Policy Distillation.
- Karol Kurach, Marcin Andrychowicz, Ilya Sutskever: Neural Random-Access Machines.
- Yujia Li, Daniel Tarlow, Marc Brockschmidt, Richard S. Zemel: Gated Graph Sequence Neural Networks.
- Oren Rippel, Manohar Paluri, Piotr Dollár, Lubomir D. Bourdev: Metric Learning with Adaptive Density Discrimination.
- Harrison Edwards, Amos J. Storkey: Censoring Representations with an Adversary.
- George Toderici, Sean M. O'Malley, Sung Jin Hwang, Damien Vincent, David Minnen, Shumeet Baluja, Michele Covell, Rahul Sukthankar: Variable Rate Image Compression with Recurrent Neural Networks.
- Nicolas Ballas, Li Yao, Chris Pal, Aaron C. Courville: Delving Deeper into Convolutional Networks for Learning Video Representations.
- Tim Dettmers: 8-Bit Approximations for Parallelism in Deep Learning.
- Philipp Krähenbühl, Carl Doersch, Jeff Donahue, Trevor Darrell: Data-dependent Initializations of Convolutional Neural Networks.
- Oriol Vinyals, Samy Bengio, Manjunath Kudlur: Order Matters: Sequence to sequence for sets.
- John Schulman, Philipp Moritz, Sergey Levine, Michael I. Jordan, Pieter Abbeel: High-Dimensional Continuous Control Using Generalized Advantage Estimation.
- Michaël Mathieu, Camille Couprie, Yann LeCun: Deep multi-scale video prediction beyond mean square error.
- Nal Kalchbrenner, Ivo Danihelka, Alex Graves: Grid Long Short-Term Memory.
- Yann N. Dauphin, David Grangier: Predicting distributions with Linearizing Belief Networks.
- Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter: Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs).
- Emilio Parisotto, Lei Jimmy Ba, Ruslan Salakhutdinov: Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning.
- Lingpeng Kong, Chris Dyer, Noah A. Smith: Segmental Recurrent Neural Networks.
- Matthias Dorfer, Rainer Kelz, Gerhard Widmer: Deep Linear Discriminant Analysis.
- Weiran Wang, Karen Livescu: Large-Scale Approximate Kernel Canonical Correlation Analysis.
- Alec Radford, Luke Metz, Soumith Chintala: Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks.
- Pouya Bashivan, Irina Rish, Mohammed Yeasin, Noel Codella: Learning Representations from EEG with Deep Recurrent-Convolutional Neural Networks.
- Amr Bakry, Mohamed Elhoseiny, Tarek El-Gaaly, Ahmed M. Elgammal: Digging Deep into the Layers of CNNs: In Search of How CNNs Achieve View Invariance.
- Alexandre de Brébisson, Pascal Vincent: An Exploration of Softmax Alternatives Belonging to the Spherical Loss Family.
- Behnam Neyshabur, Ryota Tomioka, Ruslan Salakhutdinov, Nathan Srebro: Data-Dependent Path Normalization in Neural Networks.
- Moontae Lee, Xiaodong He, Wen-tau Yih, Jianfeng Gao, Li Deng, Paul Smolensky: Reasoning in Vector Space: An Exploratory Study of Question Answering.
- Lukasz Kaiser, Ilya Sutskever: Neural GPUs Learn Algorithms.
- Marcin Moczulski, Misha Denil, Jeremy Appleyard, Nando de Freitas: ACDC: A Structured Efficient Linear Layer.
- Sara Sabour, Yanshuai Cao, Fartash Faghri, David J. Fleet: Adversarial Manipulation of Deep Representations.
- Olivier J. Hénaff, Eero P. Simoncelli: Geodesics of learned representations.
- Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, Wojciech Zaremba: Sequence Level Training with Recurrent Neural Networks.
- Joan Bruna, Pablo Sprechmann, Yann LeCun: Super-Resolution with Deep Convolutional Sufficient Statistics.