5th ICLR 2017: Toulon, France: Workshop Track
- 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Workshop Track Proceedings. OpenReview.net 2017
- Beilun Wang, Ji Gao, Yanjun Qi: A Theoretical Framework for Robustness of (Deep) Classifiers against Adversarial Samples.
- Elad Hoffer, Nir Ailon: Semi-supervised deep learning by metric embedding.
- Yunchen Pu, Martin Renqiang Min, Zhe Gan, Lawrence Carin: Adaptive Feature Abstraction for Translating Video to Language.
- Matko Bosnjak, Tim Rocktäschel, Jason Naradowsky, Sebastian Riedel: Programming With a Differentiable Forth Interpreter.
- Georg Martius, Christoph H. Lampert: Extrapolation and learning equations.
- Ryan Lowe, Michael Noseworthy, Iulian Vlad Serban, Nicolas Angelard-Gontier, Yoshua Bengio, Joelle Pineau: Towards an automatic Turing test: Learning to evaluate dialogue responses.
- Hervé Glotin, Julien Ricard, Randall Balestriero: Fast Chirplet Transform Injects Priors in Deep Learning of Animal Calls and Speech.
- Amit Daniely, Nevena Lazic, Yoram Singer, Kunal Talwar: Short and Deep: Sketching and Neural Networks.
- Yuandong Tian: Symmetry-Breaking Convergence Analysis of Certain Two-layered Neural Networks with ReLU nonlinearity.
- Pierre Sermanet, Kelvin Xu, Sergey Levine: Unsupervised Perceptual Rewards for Imitation Learning.
- Terrance DeVries, Graham W. Taylor: Dataset Augmentation in Feature Space.
- John K. Feser, Marc Brockschmidt, Alexander L. Gaunt, Daniel Tarlow: Neural Functional Programming.
- Alexander L. Gaunt, Marc Brockschmidt, Nate Kushman, Daniel Tarlow: Lifelong Perceptual Programming By Example.
- Amit Deshpande, Sushrut Karmalkar: On Robust Concepts and Small Neural Nets.
- Alexander Novikov, Mikhail Trofimov, Ivan V. Oseledets: Exponential Machines.
- Jorge Albericio, Patrick Judd, Alberto Delmas, Sayeh Sharify, Andreas Moshovos: Bit-Pragmatic Deep Neural Network Computing.
- Wilson Hsu, Agastya Kalra, Pascal Poupart: Online Structure Learning for Sum-Product Networks with Gaussian Leaves.
- Siamak Ravanbakhsh, Jeff G. Schneider, Barnabás Póczos: Deep Learning with Sets and Point Clouds.
- David Raposo, Adam Santoro, David G. T. Barrett, Razvan Pascanu, Tim Lillicrap, Peter W. Battaglia: Discovering objects and their relations from entangled scene representations.
- Ben Krause, Iain Murray, Steve Renals, Liang Lu: Multiplicative LSTM for sequence modelling.
- Eugene Belilovsky, Kyle Kastner, Gaël Varoquaux, Matthew B. Blaschko: Learning to Discover Sparse Graphical Models.
- Jonas Degrave, Michiel Hermans, Joni Dambre, Francis Wyffels: A Differentiable Physics Engine for Deep Learning in Robotics.
- Philip Blair, Yuval Merhav, Joel Barry: Automated Generation of Multilingual Clusters for the Evaluation of Distributed Representations.
- César Laurent, Nicolas Ballas, Pascal Vincent: Recurrent Normalization Propagation.
- Leon Sixt, Benjamin Wild, Tim Landgraf: RenderGAN: Generating Realistic Labeled Data.
- Natasha Jaques, Shixiang Gu, Richard E. Turner, Douglas Eck: Tuning Recurrent Neural Networks with Reinforcement Learning.
- Mehdi Mirza, Aaron C. Courville, Yoshua Bengio: Generalizable Features From Unsupervised Learning.
- Eder Santana, José C. Príncipe: Perception Updating Networks: On architectural constraints for interpretable video generative models.
- Alexey Kurakin, Ian J. Goodfellow, Samy Bengio: Adversarial examples in the physical world.
- Masatoshi Hidaka, Ken Miura, Tatsuya Harada: Development of JavaScript-based deep learning platform and application to distributed training.
- Hang Chu, Raquel Urtasun, Sanja Fidler: Song From PI: A Musically Plausible Network for Pop Music Generation.
- John Edison Arevalo Ovalle, Thamar Solorio, Manuel Montes-y-Gómez, Fabio A. González: Gated Multimodal Units for Information Fusion.
- Armen Aghajanyan: Charged Point Normalization: An Efficient Solution to the Saddle Point Problem.
- Robert Gens, Pedro M. Domingos: Compositional Kernel Machines.
- Sébastien Dubois, Nathanael Romano, Kenneth Jung, Nigam Shah, David C. Kale: The Effectiveness of Transfer Learning in Electronic Health Records Data.
- Alexey Romanov, Anna Rumshisky: Forced to Learn: Discovering Disentangled Representations Without Exhaustive Labels.
- David Krueger, Nicolas Ballas, Stanislaw Jastrzebski, Devansh Arpit, Maxinder S. Kanwal, Tegan Maharaj, Emmanuel Bengio, Asja Fischer, Aaron C. Courville: Deep Nets Don't Learn via Memorization.
- Ben Poole, Friedemann Zenke, Surya Ganguli: Intelligent synapses for multi-task and transfer learning.
- Philip Bachman, Alessandro Sordoni, Adam Trischler: Learning Algorithms for Active Learning.
- Chris Cremer, Quaid Morris, David Duvenaud: Reinterpreting Importance-Weighted Autoencoders.
- Tiago Pimentel, Adriano Veloso, Nivio Ziviani: Unsupervised and Scalable Algorithm for Learning Node Representations.
- Sébastien M. R. Arnold, Chunming Wang: Accelerating SGD for Distributed Deep-Learning Using an Approximated Hessian Matrix.
- Jonathan Tompson, Kristofer Schlachter, Pablo Sprechmann, Ken Perlin: Accelerating Eulerian Fluid Simulation With Convolutional Networks.
- Mahdieh Abbasi, Christian Gagné: Robustness to Adversarial Examples through an Ensemble of Specialists.
- Irwan Bello, Hieu Pham, Quoc V. Le, Mohammad Norouzi, Samy Bengio: Neural Combinatorial Optimization with Reinforcement Learning.
- Klaus Greff, Sjoerd van Steenkiste, Jürgen Schmidhuber: Neural Expectation Maximization.
- Luca Franceschi, Michele Donini, Paolo Frasconi, Massimiliano Pontil: On Hyperparameter Optimization in Learning Systems.
- Sandy H. Huang, Nicolas Papernot, Ian J. Goodfellow, Yan Duan, Pieter Abbeel: Adversarial Attacks on Neural Network Policies.
- Volodymyr Kuleshov, S. Zayd Enam, Stefano Ermon: Audio Super-Resolution using Neural Networks.
- Antti Tarvainen, Harri Valpola: Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results.
- Alexander Chistyakov, Ekaterina Lobacheva, Arseny Kuznetsov, Alexey Romanenko: Semantic embeddings for program behaviour patterns.
- Eugene Belilovsky, Matthew B. Blaschko, Jamie Ryan Kiros, Raquel Urtasun, Richard S. Zemel: Joint Embeddings of Scene Graphs and Images.
- Yen-Chen Lin, Zhang-Wei Hong, Yuan-Hong Liao, Meng-Li Shih, Ming-Yu Liu, Min Sun: Tactics of Adversarial Attack on Deep Reinforcement Learning Agents.
- Chao-Yuan Wu, Amr Ahmed, Alex Beutel, Alexander J. Smola: Joint Training of Ratings and Reviews with Recurrent Recommender Networks.
- Chris J. Maddison, Dieterich Lawson, George Tucker, Nicolas Heess, Arnaud Doucet, Andriy Mnih, Yee Whye Teh: Particle Value Functions.
- Shikhar Sharma, Jing He, Kaheer Suleman, Hannes Schulz, Philip Bachman: Natural Language Generation in Dialogue using Lexicalized and Delexicalized Data.
- Evan Shelhamer, Parsa Mahmoudieh, Max Argus, Trevor Darrell: Loss is its own Reward: Self-Supervision for Reinforcement Learning.
- Mehdi Cherti, Balázs Kégl, Akin Kazakçi: De novo drug design with deep generative models: an empirical study.
- Mehdi Cherti, Balázs Kégl, Akin Kazakçi: Out-of-class novelty generation: an experimental foundation.
- Jack Lanchantin, Ritambhara Singh, Yanjun Qi: Memory Matching Networks for Genomic Sequence Classification.
- Keiji Yanai: Unseen Style Transfer Based on a Conditional Fast Style Transfer Network.
- Eric Tzeng, Judy Hoffman, Kate Saenko, Trevor Darrell: Adversarial Discriminative Domain Adaptation (workshop extended abstract).
- Augustus Odena, Dieterich Lawson, Christopher Olah: Changing Model Behavior at Test-time Using Reinforcement Learning.
- Xingyu Liu, Song Han, Huizi Mao, William J. Dally: Efficient Sparse-Winograd Convolutional Neural Networks.
- Jose Sotelo, Soroush Mehri, Kundan Kumar, João Felipe Santos, Kyle Kastner, Aaron C. Courville, Yoshua Bengio: Char2Wav: End-to-End Speech Synthesis.
- Daniel McNamara, Maria-Florina Balcan: Performance guarantees for transferring representations.
- Jiaming Song, Shengjia Zhao, Stefano Ermon: Generative Adversarial Learning of Markov Chains.
- Guillaume Alain, Yoshua Bengio: Understanding intermediate layers using linear classifier probes.
- Ji Gao, Beilun Wang, Zeming Lin, Weilin Xu, Yanjun Qi: DeepCloak: Masking Deep Neural Network Models for Robustness Against Adversarial Samples.
- Justin Gilmer, Colin Raffel, Samuel S. Schoenholz, Maithra Raghu, Jascha Sohl-Dickstein: Explaining the Learning Dynamics of Direct Feedback Alignment.
- Colin Raffel, Dieterich Lawson: Training a Subsampling Mechanism in Expectation.
- Jovana Mitrovic, Dino Sejdinovic, Yee Whye Teh: Deep Kernel Machines via the Kernel Reparametrization Trick.
- Antonio Vergari, Robert Peharz, Nicola Di Mauro, Floriana Esposito: Encoding and Decoding Representations with Sum- and Max-Product Networks.
- Christophe Gardella, Olivier Marre, Thierry Mora: Restricted Boltzmann Machines provide an accurate metric for retinal responses to visual stimuli.
- Ondrej Bajgar, Rudolf Kadlec, Jan Kleindienst: Embracing Data Abundance.
- Nadav Bhonker, Shai Rozenberg, Itay Hubara: Playing SNES in the Retro Learning Environment.
- Karol Gregor, Danilo Jimenez Rezende, Daan Wierstra: Variational Intrinsic Control.
- Matthias Meyer, Jan Beutel, Lothar Thiele: Unsupervised Feature Learning for Audio Analysis.
- Sergey Bartunov, Dmitry P. Vetrov: Fast Adaptation in Generative Models with Generative Matching Networks.
- Nick Pawlowski, Miguel Jaques, Ben Glocker: Efficient variational Bayesian neural network ensembles for outlier detection.
- Serhii Havrylov, Ivan Titov: Emergence of Language with Multi-agent Games: Learning to Communicate with Sequences of Symbols.
- Hao Shen: A Smooth Optimisation Perspective on Training Feedforward Neural Networks.
- Takeru Miyato, Daisuke Okanohara, Shin-ichi Maeda, Masanori Koyama: Synthetic Gradient Methods with Virtual Forward-Backward Networks.
- Stefan Carlsson, Hossein Azizpour, Ali Sharif Razavian, Josephine Sullivan, Kevin Smith: The Preimage of Rectifier Network Activities.
- Masahiro Suzuki, Kotaro Nakayama, Yutaka Matsuo: Joint Multimodal Learning with Deep Generative Models.
- Maciej Zieba, Lei Wang: Training Triplet Networks with GAN.
- Karthik R, Aman Achpal, Vinayshekhar BK, Anantharaman Palacode Narayana Iyer, Channa Bankapur: Neu0.
- Chris Donahue, Zachary C. Lipton, Julian J. McAuley: Dance Dance Convolution.
- Antoine Affouard, Hervé Goëau, Pierre Bonnet, Jean-Christophe Lombardo, Alexis Joly: Pl@ntNet app in the era of deep learning.
- Sahil Sharma, Balaraman Ravindran: Online Multi-Task Learning Using Active Sampling.
- Kevin Vincent, Kevin Stephano, Michael A. Frumkin, Boris Ginsburg, Julien Demouth: On Improving the Numerical Stability of Winograd Convolutions.
- Prajit Ramachandran, Tom Le Paine, Pooya Khorrami, Mohammad Babaeizadeh, Shiyu Chang, Yang Zhang, Mark A. Hasegawa-Johnson, Roy H. Campbell, Thomas S. Huang: Fast Generation for Convolutional Autoregressive Models.
- Oleksii Kuchaiev, Boris Ginsburg: Factorization tricks for LSTM networks.
- Xavier Gastaldi: Shake-Shake regularization of 3-branch residual networks.
- Yongxin Yang, Timothy M. Hospedales: Trace Norm Regularised Deep Multi-Task Learning.
- Sahil Garg, Irina Rish, Guillermo A. Cecchi, Aurélie C. Lozano: Neurogenesis-Inspired Dictionary Learning: Online Model Adaption in a Changing World.
- Jernej Kos, Dawn Song: Delving into adversarial attacks on deep policies.
- Alexander G. Anderson, Cory P. Berg: The High-Dimensional Geometry of Binary Neural Networks.
- Yanghao Li, Naiyan Wang, Jianping Shi, Jiaying Liu, Xiaodi Hou: Revisiting Batch Normalization For Practical Domain Adaptation.
- Lili Mou, Zhengdong Lu, Hang Li, Zhi Jin: Coupling Distributed and Symbolic Execution for Natural Language Queries.
- Seungwook Kim, Hyo-Eun Kim: Transferring Knowledge to Smaller Network with Class-Distance Loss.
- Gabriel Pereyra, George Tucker, Jan Chorowski, Lukasz Kaiser, Geoffrey E. Hinton: Regularizing Neural Networks by Penalizing Confident Output Distributions.
- Zachary C. Lipton, Subarna Tripathi: Precise Recovery of Latent Vectors from Generative Adversarial Networks.
- Xun Huang, Serge J. Belongie: Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization.
- Volker Fischer, Mummadi Chaithanya Kumar, Jan Hendrik Metzen, Thomas Brox: Adversarial Examples for Semantic Image Segmentation.
- Joan Serrà, Alexandros Karatzoglou: Compact Embedding of Binary-coded Inputs and Outputs using Bloom Filters.
- George Tucker, Andriy Mnih, Chris J. Maddison, Jascha Sohl-Dickstein: REBAR: Low-variance, unbiased gradient estimates for discrete latent variable models.
- Eric T. Nalisnick, Padhraic Smyth: Variational Reference Priors.
- Dan Hendrycks, Kevin Gimpel: Early Methods for Detecting Adversarial Images.
- Marwin H. S. Segler, Mike Preuss, Mark P. Waller: Towards "AlphaChem": Chemical Synthesis Planning with Tree Search and Deep Neural Network Policies.
- Marco Baroni, Armand Joulin, Allan Jabri, Germán Kruszewski, Angeliki Lazaridou, Klemen Simonic, Tomás Mikolov: CommAI: Evaluating the first steps towards a useful general AI.