30th ICANN 2021: Bratislava, Slovakia - Part IV
- Igor Farkas, Paolo Masulli, Sebastian Otte, Stefan Wermter:
Artificial Neural Networks and Machine Learning - ICANN 2021 - 30th International Conference on Artificial Neural Networks, Bratislava, Slovakia, September 14-17, 2021, Proceedings, Part IV. Lecture Notes in Computer Science 12894, Springer 2021, ISBN 978-3-030-86379-1
Model Compression
- Wei He, Zhongzhan Huang, Mingfu Liang, Senwei Liang, Haizhao Yang: Blending Pruning Criteria for Convolutional Neural Networks. 3-15
- Shoumeng Qiu, Yuzhang Gu, Xiaolin Zhang: BFRIFP: Brain Functional Reorganization Inspired Filter Pruning. 16-28
- Raoul Heese, Lukas Morand, Dirk Helm, Michael Bortz: CupNet - Pruning a Network for Geometric Data. 29-33
- Jiacheng Zhang, Pingyu Wang, Zhicheng Zhao, Fei Su: Pruned-YOLO: Learning Efficient Object Detector Using Model Pruning. 34-45
- Eli Passov, Eli David, Nathan S. Netanyahu: Gator: Customizable Channel Pruning of Neural Networks with Gating. 46-58
Multi-task and Multi-label Learning
- Xiaoni Li, Yucan Zhou, Yu Zhou, Weiping Wang: MMF: Multi-task Multi-structure Fusion for Hierarchical Image Classification. 61-73
- Ning Wang, Hongyan Quan: GLUNet: Global-Local Fusion U-Net for 2D Medical Image Segmentation. 74-85
- Jianwei He, Xianghua Fu, Zi Long, Shuxin Wang, Chaojie Liang, Hongbin Lin: Textbook Question Answering with Multi-type Question Learning and Contextualized Diagram Representation. 86-98
- Haoda Qian, Qiudan Li, Zaichuan Tang: A Multi-Task MRC Framework for Chinese Emotion Cause and Experiencer Extraction. 99-110
- Qingquan Zhang, Jialin Liu, Zeqi Zhang, Junyi Wen, Bifei Mao, Xin Yao: Fairer Machine Learning Through Multi-objective Evolutionary Learning. 111-123
Neural Network Theory
- Joshua Arnold, Peter Stratton, Janet Wiles: Single Neurons with Delay-Based Learning Can Generalise Between Time-Warped Patterns. 127-138
- Nicolas Posocco, Antoine Bonnefoy: Estimating Expected Calibration Errors. 139-150
- Aritra Bhowmick, Meenakshi D'Souza, G. Srinivasa Raghavan: LipBaB: Computing Exact Lipschitz Constant of ReLU Networks. 151-162
- Roseli Suzi Wedemann, Angel Ricardo Plastino: Nonlinear Lagrangean Neural Networks. 163-173
Normalization and Regularization Methods
- Shu Eguchi, Takafumi Amaba: Energy Conservation in Infinitely Wide Neural-Networks. 177-189
- Chihuang Liu, Joseph F. JáJá: Class-Similarity Based Label Smoothing for Confidence Calibration. 190-201
- Kenneth T. Co, David Martínez-Rego, Emil C. Lupu: Jacobian Regularization for Mitigating Universal Adversarial Perturbations. 202-213
- Daniel Lehmann, Marc Ebner: Layer-Wise Activation Cluster Analysis of CNNs to Detect Out-of-Distribution Samples. 214-226
- Wolfgang Fuhl, Enkelejda Kasneci: Weight and Gradient Centralization in Deep Neural Networks. 227-239
- Bojian Yin, H. Steven Scholte, Sander M. Bohté: LocalNorm: Robust Image Classification Through Dynamically Regularized Normalization. 240-252
- Gen Ye, Tong Lin: Channel Capacity of Neural Networks. 253-265
- Zhenzhen Li, Kin-Wang Poon, Xuan Yang: RIAP: A Method for Effective Receptive Field Rectification. 266-278
- Izabela Krysinska, Mikolaj Morzy, Tomasz Kajdanowicz: Curriculum Learning Revisited: Incremental Batch Learning with Instance Typicality Ranking. 279-291
Person Re-identification
- Qiwei Meng, Te Li, Shanshan Ji, Shiqiang Zhu, Jianjun Gu: Interesting Receptive Region and Feature Excitation for Partial Person Re-identification. 295-307
- Jing Yang, Canlong Zhang, Zhixin Li, Yanping Tang: Improved Occluded Person Re-Identification with Multi-feature Fusion. 308-319
- Di Su, Cheng Zhang, Shaobo Wang: Joint Weights-Averaged and Feature-Separated Learning for Person Re-identification. 320-332
- Takeshi Yoshida, Takuya Kitamura: Semi-Hard Margin Support Vector Machines for Personal Authentication with an Aerial Signature Motion. 333-344
Recurrent Neural Networks
- Flora J. Ferreira, Weronika Wojtak, Carlos Fernandes, Pedro M. F. Guimarães, Sérgio Monteiro, Estela Bicho, Wolfram Erlhagen: Dynamic Identification of Stop Locations from GPS Trajectories Based on Their Temporal and Spatial Characteristics. 347-359
- Christian Oliva, Luis Fernando Lago-Fernández: Separation of Memory and Processing in Dual Recurrent Neural Networks. 360-371
- Sandeep Kumar, Koushik Biswas, Ashish Kumar Pandey: Predicting Landfall's Location and Time of a Tropical Cyclone Using Reanalysis Data. 372-383
- Matthias Karlbauer, Tobias Menge, Sebastian Otte, Hendrik P. A. Lensch, Thomas Scholten, Volker Wulfmeyer, Martin V. Butz: Latent State Inference in a Spatiotemporal Generative Model. 384-395
- Gábor Korösi, Richárd Farkas: Deep Learning Models and Interpretations for Multivariate Discrete-Valued Event Sequence Prediction. 396-406
- Dominic Spata, Arne Grumpe, Anton Kummert: End-to-End On-Line Multi-object Tracking on Sparse Point Clouds Using Recurrent Convolutional Networks. 407-419
- Vandana M. Ladwani, V. Ramasubramanian: M-ary Hopfield Neural Network Based Associative Memory Formulation: Limit-Cycle Based Sequence Storage and Retrieval. 420-432
- Peilun Dai, Sang Chin: Training Many-to-Many Recurrent Neural Networks with Target Propagation. 433-443
- Jana Lang, Martin A. Giese, Matthis Synofzik, Winfried Ilg, Sebastian Otte: Early Recognition of Ball Catching Success in Clinical Trials with RNN-Based Predictive Classification. 444-456
- Christian Oliva, Vinicio Changoluisa, Francisco de Borja Rodríguez, Luis Fernando Lago-Fernández: Precise Temporal P300 Detection in Brain Computer Interface EEG Signals Using a Long-Short Term Memory. 457-468
- Emmett Redd, Tayo Obafemi-Ajayi: Noise Quality and Super-Turing Computation in Recurrent Neural Networks. 469-478
Reinforcement Learning I
- Stefan Wagner, Michael Janschek, Tobias Uelwer, Stefan Harmeling: Learning to Plan via a Multi-step Policy Regression Method. 481-492
- Antti Keurulainen, Isak Westerlund, Ariel Kwiatkowski, Samuel Kaski, Alexander Ilin: Behaviour-Conditioned Policies for Cooperative Reinforcement Learning Tasks. 493-504
- Jiaohao Zheng, Mehmet Necip Kurt, Xiaodong Wang: Integrated Actor-Critic for Deep Reinforcement Learning. 505-518
- Antti Keurulainen, Isak Westerlund, Samuel Kaski, Alexander Ilin: Learning to Assist Agents by Observing Them. 519-530
- Yaqing Dai, Pengfei Wang, Lei Zhang: Reinforcement Syntactic Dependency Tree Reasoning for Target-Oriented Opinion Word Extraction. 531-543
- Kejia Wan, Xinhai Xu, Yuan Li: Learning Distinct Strategies for Heterogeneous Cooperative Multi-agent Reinforcement Learning. 544-555
- Yoshinari Motokawa, Toshiharu Sugawara: MAT-DQN: Toward Interpretable Multi-agent Deep Reinforcement Learning for Coordinated Activities. 556-567
- Alexandre Chenu, Nicolas Perrin, Stéphane Doncieux, Olivier Sigaud: Selection-Expansion: A Unifying Framework for Motion-Planning and Diversity Search Algorithms. 568-579
- Juan Pablo Vásconez, Lorena Isabel Barona López, Ángel Leonardo Valdivieso Caraguay, Patricio J. Cruz Davalos, Robin Álvarez, Marco E. Benalcázar: A Hand Gesture Recognition System Using EMG and Reinforcement Learning: A Q-Learning Approach. 580-591
Reinforcement Learning II
- Wolfgang Fuhl, Efe Bozkir, Enkelejda Kasneci: Reinforcement Learning for the Privacy Preservation and Manipulation of Eye Tracking Data. 595-607
- Chloé Mercier, Frédéric Alexandre, Thierry Viéville: Reinforcement Symbolic Learning. 608-612
- Zhenjie Yao, Lan Chen, He Zhang: Deep Reinforcement Learning for Job Scheduling on Cluster. 613-624
- Shiyang Zhou, Weiya Ren, Xiaoguang Ren, Yanzhen Wang, Xiaodong Yi: Independent Deep Deterministic Policy Gradient Reinforcement Learning in Cooperative Multiagent Pursuit Games. 625-637
- Malte Schilling: Avoid Overfitting in Deep Reinforcement Learning: Increasing Robustness Through Decentralized Control. 638-649
- Juraj Holas, Igor Farkas: Advances in Adaptive Skill Acquisition. 650-661
- Ming-Fan Li, Kaijie Zhou, Hongze Wang, Long Ma, Xuan Li: Aspect-Based Sentiment Classification with Reinforcement Learning and Local Understanding. 662-674
- Vihanga Gamage, Cathy Ennis, Robert J. Ross: Latent Dynamics for Artefact-Free Character Animation via Data-Driven Reinforcement Learning. 675-687
- Matej Pechác, Igor Farkas: Intrinsic Motivation Model Based on Reward Gating. 688-699