5th CoRL 2021: London, UK
- Aleksandra Faust, David Hsu, Gerhard Neumann:
Conference on Robot Learning, 8-11 November 2021, London, UK. Proceedings of Machine Learning Research 164, PMLR 2021 - Bohan Wu, Suraj Nair, Li Fei-Fei, Chelsea Finn:
Example-Driven Model-Based Reinforcement Learning for Solving Long-Horizon Visuomotor Tasks. 1-13 - Anupam K. Gupta, Laurence Aitchison, Nathan F. Lepora:
Tactile Image-to-Image Disentanglement of Contact Geometry from Motion-Induced Shear. 14-23 - Huy Ha, Shuran Song:
FlingBot: The Unreasonable Effectiveness of Dynamic Manipulation for Cloth Unfolding. 24-33 - Lukas Koestler, Nan Yang, Niclas Zeller, Daniel Cremers:
TANDEM: Tracking and Dense Mapping in Real-time using Deep Multi-view Stereo. 34-45 - Christopher Agia, Krishna Murthy Jatavallabhula, Mohamed Khodeir, Ondrej Miksik, Vibhav Vineet, Mustafa Mukadam, Liam Paull, Florian Shkurti:
Taskography: Evaluating robot task planning over large 3D scene graphs. 46-58 - Brian Ichter, Pierre Sermanet, Corey Lynch:
Broadly-Exploring, Local-Policy Trees for Long-Horizon Task Planning. 59-69 - Lirui Wang, Yu Xiang, Wei Yang, Arsalan Mousavian, Dieter Fox:
Goal-Auxiliary Actor-Critic for 6D Robotic Grasping with Point Clouds. 70-80 - Tin Lai, Weiming Zhi, Tucker Hermans, Fabio Ramos:
Parallelised Diffeomorphic Sampling-based Motion Planning. 81-90 - Nikita Rudin, David Hoeller, Philipp Reist, Marco Hutter:
Learning to Walk in Minutes Using Massively Parallel Deep Reinforcement Learning. 91-100 - Lucas Rath, Andreas René Geist, Sebastian Trimpe:
Using Physics Knowledge for Learning Rigid-body Forward Dynamics with Gaussian Process Force Priors. 101-111 - Yunzhu Li, Shuang Li, Vincent Sitzmann, Pulkit Agrawal, Antonio Torralba:
3D Neural Scene Representations for Visuomotor Control. 112-123 - Yue Meng, Dawei Sun, Zeng Qiu, Md Tawhid Bin Waez, Chuchu Fan:
Learning Density Distribution of Reachable States for Autonomous Systems. 124-136 - Bernardo Aceituno-Cabezas, Alberto Rodriguez, Shubham Tulsiani, Abhinav Gupta, Mustafa Mukadam:
A Differentiable Recipe for Learning Visual Non-Prehensile Planar Manipulation. 137-147 - Wentao Yuan, Chris Paxton, Karthik Desingh, Dieter Fox:
SORNet: Spatial Object-Centric Representations for Sequential Manipulation. 148-157 - Pete Florence, Corey Lynch, Andy Zeng, Oscar A. Ramirez, Ayzaan Wahid, Laura Downs, Adrian Wong, Johnny Lee, Igor Mordatch, Jonathan Tompson:
Implicit Behavioral Cloning. 158-168 - Nick Walker, Christoforos I. Mavrogiannis, Siddhartha S. Srinivasa, Maya Cakmak:
Influencing Behavioral Attributions to Robot Motion During Task Execution. 169-179 - Yue Wang, Vitor Guizilini, Tianyuan Zhang, Yilun Wang, Hang Zhao, Justin Solomon:
DETR3D: 3D Object Detection from Multi-view Images via 3D-to-2D Queries. 180-191 - Thomas Weng, Sujay Man Bajracharya, Yufei Wang, Khush Agrawal, David Held:
FabricFlowNet: Bimanual Cloth Manipulation with a Flow-based Policy. 192-202 - Nachiket Deo, Eric M. Wolff, Oscar Beijbom:
Multimodal Trajectory Prediction Conditioned on Lane-Graph Traversals. 203-212 - Joaquim Ortiz de Haro, Jung-Su Ha, Danny Driess, Marc Toussaint:
Structured deep generative models for sampling on constraint manifolds in sequential manipulation. 213-223 - Sean J. Wang, Samuel Triest, Wenshan Wang, Sebastian A. Scherer, Aaron M. Johnson:
Rough Terrain Navigation Using Divergence Constrained Model-Based Reinforcement Learning. 224-233 - Paloma Sodhi, Eric Dexheimer, Mustafa Mukadam, Stuart Anderson, Michael Kaess:
LEO: Learning Energy-based Models in Factor Graph Optimization. 234-244 - Danny Driess, Jung-Su Ha, Marc Toussaint, Russ Tedrake:
Learning Models as Functionals of Signed-Distance Fields for Manipulation Planning. 245-255 - Xingyu Lin, Yufei Wang, Zixuan Huang, David Held:
Learning Visible Connectivity Dynamics for Cloth Smoothing. 256-266 - Lin Shao, Yifan You, Mengyuan Yan, Shenli Yuan, Qingyun Sun, Jeannette Bohg:
GRAC: Self-Guided and Self-Regularized Actor-Critic. 267-276 - Shuo Cheng, Kaichun Mo, Lin Shao:
Learning to Regrasp by Learning to Place. 277-286 - Xingyu Liu, Kris M. Kitani:
V-MAO: Generative Modeling for Multi-Arm Manipulation of Articulated Objects. 287-296 - Tao Chen, Jie Xu, Pulkit Agrawal:
A System for General In-Hand Object Re-Orientation. 297-307 - Charles Sun, Jedrzej Orbik, Coline Manon Devin, Brian H. Yang, Abhishek Gupta, Glen Berseth, Sergey Levine:
Fully Autonomous Real-World Reinforcement Learning with Applications to Mobile Manipulation. 308-319 - Jianren Wang, Ziwen Zhuang, Yuyang Wang, Hang Zhao:
Adversarially Robust Imitation Learning. 320-331 - Yunkun Xu, Zhenyu Liu, Guifang Duan, Jiangcheng Zhu, Xiaolong Bai, Jianrong Tan:
Look Before You Leap: Safe Model-Based Reinforcement Learning with Human Intervention. 332-341 - Vivek Myers, Erdem Biyik, Nima Anari, Dorsa Sadigh:
Learning Multimodal Rewards from Rankings. 342-352 - Nils Wilde, Erdem Biyik, Dorsa Sadigh, Stephen L. Smith:
Learning Reward Functions from Scale Feedback. 353-362 - Zhangjie Cao, Yilun Hao, Mengxi Li, Dorsa Sadigh:
Learning Feasibility to Imitate Demonstrators with Different Dynamics. 363-372 - Leonel Rozo, Vedant Dave:
Orientation Probabilistic Movement Primitives on Riemannian Manifolds. 373-383 - Andrii Zadaianchuk, Georg Martius, Fanny Yang:
Self-supervised Reinforcement Learning with Independently Controllable Subgoals. 384-394 - Sai Rajeswar, Cyril Ibrahim, Nitin Surya, Florian Golemo, David Vázquez, Aaron C. Courville, Pedro O. Pinheiro:
Haptics-based Curiosity for Sparse-reward Tasks. 395-405 - Youngwoon Lee, Joseph J. Lim, Anima Anandkumar, Yuke Zhu:
Adversarial Skill Chaining for Long-Horizon Robot Manipulation via Terminal State Regularization. 406-416 - Aviral Kumar, Anikait Singh, Stephen Tian, Chelsea Finn, Sergey Levine:
A Workflow for Offline Model-Free Robotic Reinforcement Learning. 417-428 - Sourav Garg, Madhu Babu Vankadari, Michael Milford:
SeqMatchNet: Contrastive Learning with Sequence Matching for Place Recognition & Relocalization. 429-443 - Marin Vlastelica P., Sebastian Blaes, Cristina Pinneri, Georg Martius:
Risk-Averse Zero-Order Trajectory Optimization. 444-454 - Chengshu Li, Fei Xia, Roberto Martín-Martín, Michael Lingelbach, Sanjana Srivastava, Bokui Shen, Kent Elliott Vainio, Cem Gokmen, Gokul Dharan, Tanish Jain, Andrey Kurenkov, C. Karen Liu, Hyowon Gweon, Jiajun Wu, Li Fei-Fei, Silvio Savarese:
iGibson 2.0: Object-Centric Simulation for Robot Learning of Everyday Household Tasks. 455-465 - Ruohan Gao, Yen-Yu Chang, Shivani Mall, Li Fei-Fei, Jiajun Wu:
ObjectFolder: A Dataset of Objects with Implicit Visual, Auditory, and Tactile Representations. 466-476 - Sanjana Srivastava, Chengshu Li, Michael Lingelbach, Roberto Martín-Martín, Fei Xia, Kent Elliott Vainio, Zheng Lian, Cem Gokmen, Shyamal Buch, C. Karen Liu, Silvio Savarese, Hyowon Gweon, Jiajun Wu, Li Fei-Fei:
BEHAVIOR: Benchmark for Everyday Household Activities in Virtual, Interactive, and Ecological Environments. 477-490 - Alex Licari LaGrassa, Oliver Kroemer:
Learning Model Preconditions for Planning with Multiple Models. 491-500 - Sergey Zakharov, Rares Andrei Ambrus, Vitor Guizilini, Dennis Park, Wadim Kehl, Frédo Durand, Joshua B. Tenenbaum, Vincent Sitzmann, Jiajun Wu, Adrien Gaidon:
Single-Shot Scene Reconstruction. 501-512 - Michael Murray, Nick Walker, Amal Nanavati, Patrícia Alves-Oliveira, Nikita Filippov, Allison Sauppé, Bilge Mutlu, Maya Cakmak:
Learning Backchanneling Behaviors for a Social Robot via Data Augmentation from Human-Human Conversations. 513-525 - Jeffrey Ichnowski, Yahav Avigal, Justin Kerr, Ken Goldberg:
Dex-NeRF: Using a Neural Radiance Field to Grasp Transparent Objects. 526-536 - Kevin Zakka, Andy Zeng, Pete Florence, Jonathan Tompson, Jeannette Bohg, Debidatta Dwibedi:
XIRL: Cross-embodiment Inverse Reinforcement Learning. 537-546 - Fan Yang, Chao Yang, Huaping Liu, Fuchun Sun:
Evaluations of the Gap between Supervised and Reinforcement Lifelong Learning on Robotic Manipulation Tasks. 547-556 - Dmitry Kalashnikov, Jake Varley, Yevgen Chebotar, Benjamin Swanson, Rico Jonschkowski, Chelsea Finn, Sergey Levine, Karol Hausman:
Scaling Up Multi-Task Robotic Reinforcement Learning. 557-575 - Sumeet Batra, Zhehui Huang, Aleksei Petrenko, Tushar Kumar, Artem Molchanov, Gaurav S. Sukhatme:
Decentralized Control of Quadrotor Swarms with End-to-end Deep Reinforcement Learning. 576-586 - Raunaq M. Bhirangi, Tess Lee Hellebrekers, Carmel Majidi, Abhinav Gupta:
ReSkin: versatile, replaceable, lasting tactile skins. 587-597 - Ryan Hoque, Ashwin Balakrishna, Ellen R. Novoseller, Albert Wilcox, Daniel S. Brown, Ken Goldberg:
ThriftyDAgger: Budget-Aware Novelty and Risk Gating for Interactive Imitation Learning. 598-608 - Yuedong Yang, Zihui Xue, Radu Marculescu:
Anytime Depth Estimation with Limited Sensing and Computation Capabilities on Mobile Devices. 609-618 - Amirreza Shaban, Xiangyun Meng, Joonho Lee, Byron Boots, Dieter Fox:
Semantic Terrain Classification for Off-Road Autonomous Driving. 619-629 - Michael James McDonald, Dylan Hadfield-Menell:
Guided Imitation of Task and Motion Planning. 630-640 - I-Chun Arthur Liu, Shagun Uppal, Gaurav S. Sukhatme, Joseph J. Lim, Peter Englert, Youngwoon Lee:
Distilling Motion Planner Augmented Policies into Visual Control Policies for Robot Manipulation. 641-650 - Priyanka Mandikal, Kristen Grauman:
DexVIP: Learning Dexterous Grasping with Human Hand Pose Priors from Video. 651-661 - Samuel Clarke, Negin Heravi, Mark Rau, Ruohan Gao, Jiajun Wu, Doug L. James, Jeannette Bohg:
DiffImpact: Differentiable Rendering and Identification of Impact Sounds. 662-673 - Dhruv Shah, Benjamin Eysenbach, Nicholas Rhinehart, Sergey Levine:
Rapid Exploration for Open-World Navigation with Latent Goal Models. 674-684 - Ziyue Feng, Longlong Jing, Peng Yin, Yingli Tian, Bing Li:
Advancing Self-supervised Monocular Depth Learning with Sparse LiDAR. 685-694 - Jingyun Yang, Hsiao-Yu Tung, Yunchu Zhang, Gaurav Pathak, Ashwini Pokle, Christopher G. Atkeson, Katerina Fragkiadaki:
Visually-Grounded Library of Behaviors for Manipulating Diverse Objects across Diverse Configurations and Views. 695-705 - Valts Blukis, Chris Paxton, Dieter Fox, Animesh Garg, Yoav Artzi:
A Persistent Spatial Semantic Representation for High-level Natural Language Instruction Execution. 706-717 - Oliver Scheel, Luca Bergamini, Maciej Wolczyk, Blazej Osinski, Peter Ondruska:
Urban Driver: Learning to Drive from Real-world Demonstrations Using Policy Gradients. 718-728 - Karl Pertsch, Youngwoon Lee, Yue Wu, Joseph J. Lim:
Demonstration-Guided Reinforcement Learning with Learned Skills. 729-739 - Ivan Kapelyukh, Edward Johns:
My House, My Rules: Learning Tidying Preferences with Graph Neural Networks. 740-749 - Mohak Bhardwaj, Balakumar Sundaralingam, Arsalan Mousavian, Nathan D. Ratliff, Dieter Fox, Fabio Ramos, Byron Boots:
STORM: An Integrated Framework for Fast Joint-Space Model-Predictive Control for Reactive Manipulation. 750-759 - Ziyang Chen, Xixi Hu, Andrew Owens:
Structure from Silence: Learning Scene Structure from Ambient Sound. 760-772 - Yuxiang Yang, Tingnan Zhang, Erwin Coumans, Jie Tan, Byron Boots:
Fast and Efficient Locomotion via Learned Gait Transitions. 773-783 - Weiye Zhao, Tairan He, Changliu Liu:
Model-free Safe Control for Zero-Violation Reinforcement Learning. 784-793 - Noémie Jaquier, Viacheslav Borovitskiy, Andrei Smolensky, Alexander Terenin, Tamim Asfour, Leonel Dario Rozo:
Geometry-aware Bayesian Optimization in Robotics using Riemannian Matérn Kernels. 794-805 - Chris Paxton, Chris Xie, Tucker Hermans, Dieter Fox:
Predicting Stable Configurations for Semantic Placement of Novel Objects. 806-815 - Sean Segal, Nishanth Kumar, Sergio Casas, Wenyuan Zeng, Mengye Ren, Jingkang Wang, Raquel Urtasun:
Just Label What You Need: Fine-Grained Active Selection for P&P through Partially Labeled Scenes. 816-826 - Haoping Xu, Yi Ru Wang, Sagi Eppel, Alán Aspuru-Guzik, Florian Shkurti, Animesh Garg:
Seeing Glass: Joint Point-Cloud and Depth Completion for Transparent Objects. 827-838 - Boling Yang, Golnaz Habibi, Patrick Lancaster, Byron Boots, Joshua R. Smith:
Motivating Physical Activity via Competitive Human-Robot Interaction. 839-849 - Yilun Zhou, Serena Booth, Nadia Figueroa, Julie Shah:
RoCUS: Robot Controller Understanding via Sampling. 850-860 - Jiawei Mo, Md Jahidul Islam, Junaed Sattar:
IMU-Assisted Learning of Single-View Rolling Shutter Correction. 861-870 - Allan Wang, Christoforos I. Mavrogiannis, Aaron Steinfeld:
Group-based Motion Prediction for Navigation in Crowded Environments. 871-882 - Sandy H. Huang, Abbas Abdolmaleki, Giulia Vezzani, Philemon Brakel, Daniel J. Mankowitz, Michael Neunert, Steven Bohez, Yuval Tassa, Nicolas Heess, Martin A. Riedmiller, Raia Hadsell:
A Constrained Multi-Objective Reinforcement Learning Framework. 883-893 - Mohit Shridhar, Lucas Manuelli, Dieter Fox:
CLIPort: What and Where Pathways for Robotic Manipulation. 894-906 - Samarth Sinha, Ajay Mandlekar, Animesh Garg:
S4RL: Surprisingly Simple Self-Supervision for Offline Reinforcement Learning in Robotics. 907-917 - Stepan Makarenko, Dmitry Igorevich Sorokin, Alexander E. Ulanov, Alexander I. Lvovsky:
Aligning an optical interferometer with beam divergence control and continuous action space. 918-927 - Zipeng Fu, Ashish Kumar, Jitendra Malik, Deepak Pathak:
Minimizing Energy Consumption Leads to the Emergence of Gaits in Legged Robots. 928-937 - Thomas Kollar, Michael Laskey, Kevin Stone, Brijen Thananjeyan, Mark Tjersland:
SimNet: Enabling Robust Unknown Object Manipulation from Pure Synthetic Data via Stereo. 938-948 - Ravi Tejwani, Yen-Ling Kuo, Tianmin Shu, Boris Katz, Andrei Barbu:
Social Interactions as Recursive MDPs. 949-958 - Albert Wilcox, Ashwin Balakrishna, Brijen Thananjeyan, Joseph E. Gonzalez, Ken Goldberg:
LS3: Latent Space Safe Sets for Long-Horizon Visuomotor Control of Sparse Reward Iterative Tasks. 959-969 - Alec Farid, Sushant Veer, Anirudha Majumdar:
Task-Driven Out-of-Distribution Detection with Statistical Guarantees for Robot Learning. 970-980 - Firas Al-Hafez, Jochen J. Steil:
Redundancy Resolution as Action Bias in Policy Search for Robotic Manipulation. 981-990 - Eric Jang, Alex Irpan, Mohi Khansari, Daniel Kappler, Frederik Ebert, Corey Lynch, Sergey Levine, Chelsea Finn:
BC-Z: Zero-Shot Task Generalization with Robotic Imitation Learning. 991-1002 - Deyao Zhu, Mohamed Zahran, Li Erran Li, Mohamed Elhoseiny:
Motion Forecasting with Unlikelihood Training in Continuous Space. 1003-1012 - James Tu, Huichen Li, Xinchen Yan, Mengye Ren, Yun Chen, Ming Liang, Eilyan Bitar, Ersin Yumer, Raquel Urtasun:
Exploring Adversarial Robustness of Multi-sensor Perception Systems in Self Driving. 1013-1024 - Gabriel B. Margolis, Tao Chen, Kartik Paigwar, Xiang Fu, Donghyun Kim, Sangbae Kim, Pulkit Agrawal:
Learning to Jump from Pixels. 1025-1034 - Haoran Song, Di Luan, Wenchao Ding, Michael Yu Wang, Qifeng Chen:
Learning to Predict Vehicle Trajectories with Model-based Planning. 1035-1045 - Junha Roh, Karthik Desingh, Ali Farhadi, Dieter Fox:
LanguageRefer: Spatial-Language Model for 3D Visual Grounding. 1046-1056 - Tzu-Yuan Lin, Ray Zhang, Justin Yu, Maani Ghaffari:
Legged Robot State Estimation using Invariant Kalman Filtering and Learned Contact Events. 1057-1066 - Boyuan Chen, Mia Chiquier, Hod Lipson, Carl Vondrick:
The Boombox: Visual Reconstruction from Acoustic Vibrations. 1067-1077 - Yao Lu, Karol Hausman, Yevgen Chebotar, Mengyuan Yan, Eric Jang, Alexander Herzog, Ted Xiao, Alex Irpan, Mohi Khansari, Dmitry Kalashnikov, Sergey Levine:
AW-Opt: Learning Robotic Skills with Imitation and Reinforcement at Scale. 1078-1088 - Alex X. Lee, Coline Manon Devin, Yuxiang Zhou, Thomas Lampe, Konstantinos Bousmalis, Jost Tobias Springenberg, Arunkumar Byravan, Abbas Abdolmaleki, Nimrod Gileadi, David Khosid, Claudio Fantacci, José Enrique Chen, Akhil Raju, Rae Jeong, Michael Neunert, Antoine Laurens, Stefano Saliceti, Federico Casarini, Martin A. Riedmiller, Raia Hadsell, Francesco Nori:
Beyond Pick-and-Place: Tackling Robotic Stacking of Diverse Shapes. 1089-1131 - Woodrow Zhouyuan Wang, Andy Shih, Annie Xie, Dorsa Sadigh:
Influencing Towards Stable Multi-Agent Interactions. 1132-1143 - Tim Seyde, Wilko Schwarting, Igor Gilitschenski, Markus Wulfmeier, Daniela Rus:
Strength Through Diversity: Robust Behavior Learning via Mixture Policies. 1144-1155 - Tim Seyde, Wilko Schwarting, Sertac Karaman, Daniela Rus:
Learning to Plan Optimistically: Uncertainty-Guided Deep Exploration via Latent Model Ensembles. 1156-1167 - Jongseok Lee, Jianxiang Feng, Matthias Humt, Marcus Gerhard Müller, Rudolph Triebel:
Trust Your Robots! Predictive Uncertainty Estimation of Neural Networks with Sparse Gaussian Processes. 1168-1179 - Norman Di Palo, Edward Johns:
Learning Multi-Stage Tasks with One Demonstration via Self-Replay. 1180-1189 - Antoine Richard, Stéphanie Aravecchia, Matthieu Geist, Cédric Pradalier:
Learning Behaviors through Physics-driven Latent Imagination. 1190-1199 - Sriram Siva, Maggie B. Wigness, John G. Rogers, Hao Zhang:
Enhancing Consistent Ground Maneuverability by Robot Adaptation to Complex Off-Road Terrains. 1200-1210 - Hermann Blum, Francesco Milano, René Zurbrügg, Roland Siegwart, Cesar Cadena, Abel Gawel:
Self-Improving Semantic Perception for Indoor Localisation. 1211-1222 - Julian Wiederer, Arij Bouazizi, Marco Troina, Ulrich Kressel, Vasileios Belagiannis:
Anomaly Detection in Multi-Agent Trajectories for Automated Driving. 1223-1233 - Jerry Zhi-Yang He, Anca D. Dragan:
Assisted Robust Reward Design. 1234-1246 - Maria Priisalu, Aleksis Pirinen, Ciprian Paduraru, Cristian Sminchisescu:
Generating Scenarios with Diverse Pedestrian Behaviors for Autonomous Vehicle Testing. 1247-1258 - Xiaofei Wang, Kimin Lee, Kourosh Hakhamaneshi, Pieter Abbeel, Michael Laskin:
Skill Preferences: Learning to Extract and Execute Robotic Skills from Human Feedback. 1259-1268 - Fang Wan, Xiaobo Liu, Ning Guo, Xudong Han, Feng Tian, Chaoyang Song:
Visual Learning Towards Soft Robot Force Control using a 3D Metamaterial with Differential Stiffness. 1269-1278 - Chen Wang, Claudia Pérez-D'Arpino, Danfei Xu, Li Fei-Fei, C. Karen Liu, Silvio Savarese:
Co-GAIL: Learning Diverse Strategies for Human-Robot Collaboration. 1279-1290 - Wenhao Yu, Deepali Jain, Alejandro Escontrela, Atil Iscen, Peng Xu, Erwin Coumans, Sehoon Ha, Jie Tan, Tingnan Zhang:
Visual-Locomotion: Learning to Walk on Complex Terrains with Vision. 1291-1302 - Suraj Nair, Eric Mitchell, Kevin Chen, Brian Ichter, Silvio Savarese, Chelsea Finn:
Learning Language-Conditioned Robot Behavior from Offline Data and Crowd-Sourced Annotation. 1303-1315 - Katie Kang, Gregory Kahn, Sergey Levine:
Hierarchically Integrated Models: Learning to Navigate from Heterogeneous Robots. 1316-1325 - Xiao Li, Jonathan A. DeCastro, Cristian Ioan Vasile, Sertac Karaman, Daniela Rus:
Learning A Risk-Aware Trajectory Planner From Demonstrations Using Logic Monitor. 1326-1335 - Eugene Valassakis, Kamil Dreczkowski, Edward Johns:
Learning Eye-in-Hand Camera Calibration from a Single Image. 1336-1346 - Arthur Moreau, Nathan Piasco, Dzmitry Tsishkou, Bogdan Stanciulescu, Arnaud de La Fortelle:
LENS: Localization enhanced by NeRF synthesis. 1347-1356 - Puze Liu, Davide Tateo, Haitham Bou-Ammar, Jan Peters:
Robot Reinforcement Learning on the Constraint Manifold. 1357-1366 - Josiah Wong, Albert Tung, Andrey Kurenkov, Ajay Mandlekar, Li Fei-Fei, Silvio Savarese, Roberto Martín-Martín:
Error-Aware Imitation Learning from Teleoperation Data for Mobile Manipulation. 1367-1378 - Siddharth Karamcheti, Megha Srivastava, Percy Liang, Dorsa Sadigh:
LILA: Language-Informed Latent Actions. 1379-1390 - Brad Saund, Dmitry Berenson:
CLASP: Constrained Latent Shape Projection for Refining Object Shape from Robot Contact. 1391-1400 - Niklas Funk, Georgia Chalvatzaki, Boris Belousov, Jan Peters:
Learn2Assemble with Structured Representations and Search for Robotic Architectural Construction. 1401-1411 - Minghan Zhu, Maani Ghaffari, Huei Peng:
Correspondence-Free Point Cloud Registration with SO(3)-Equivariant Implicit Shape Representations. 1412-1422 - Onur Celik, Dongzhuoran Zhou, Ge Li, Philipp Becker, Gerhard Neumann:
Specializing Versatile Skill Libraries using Local Mixture of Experts. 1423-1433 - Xiaosong Jia, Liting Sun, Hang Zhao, Masayoshi Tomizuka, Wei Zhan:
Multi-Agent Trajectory Prediction by Combining Egocentric and Allocentric Views. 1434-1443 - Benedikt Mersch, Xieyuanli Chen, Jens Behley, Cyrill Stachniss:
Self-supervised Point Cloud Prediction Using 3D Spatio-temporal Convolutional Networks. 1444-1454 - Jinning Li, Chen Tang, Masayoshi Tomizuka, Wei Zhan:
Dealing with the Unknown: Pessimistic Offline Reinforcement Learning. 1455-1464 - Seonghyun Kim, Ingook Jang, Samyeul Noh, Hyunseok Kim:
Stochastic Policy Optimization with Heuristic Information for Robot Learning. 1465-1474 - Tai Wang, Xinge Zhu, Jiangmiao Pang, Dahua Lin:
Probabilistic and Geometric Depth: Detecting Objects in Perspective. 1475-1485 - Elias Stengel-Eskin, Andrew Hundt, Zhuohong He, Aditya Murali, Nakul Gopalan, Matthew C. Gombolay, Gregory D. Hager:
Guiding Multi-Step Rearrangement Tasks with Natural Language Instructions. 1486-1501 - Michael Bloesch, Jan Humplik, Viorica Patraucean, Roland Hafner, Tuomas Haarnoja, Arunkumar Byravan, Noah Yamamoto Siegel, Saran Tunyasuvunakool, Federico Casarini, Nathan Batchelor, Francesco Romano, Stefano Saliceti, Martin A. Riedmiller, S. M. Ali Eslami, Nicolas Heess:
Towards Real Robot Learning in the Wild: A Case Study in Bipedal Locomotion. 1502-1511 - Mincheol Kim, Scott Niekum, Ashish D. Deshpande:
SCAPE: Learning Stiffness Control from Augmented Position Control Experiences. 1512-1521 - Denis Hadjivelichkov, Dimitrios Kanoulas:
Fully Self-Supervised Class Awareness in Dense Object Descriptors. 1522-1531 - Fabio Muratore, Theo Gruner, Florian Wiese, Boris Belousov, Michael Gienger, Jan Peters:
Neural Posterior Domain Randomization. 1532-1542 - Wonjoon Goo, Scott Niekum:
You Only Evaluate Once: a Simple Baseline Algorithm for Offline RL. 1543-1553 - Zhenghao Peng, Quanyi Li, Chunxiao Liu, Bolei Zhou:
Safe Driving via Expert Guided Policy Optimization. 1554-1563 - Andrew Hundt, Aditya Murali, Priyanka Hubli, Ran Liu, Nakul Gopalan, Matthew C. Gombolay, Gregory D. Hager:
"Good Robot! Now Watch This!": Repurposing Reinforcement Learning for Task-to-Task Transfer. 1564-1574 - Russell Buchanan, Marco Camurri, Frank Dellaert, Maurice F. Fallon:
Learning Inertial Odometry for Dynamic Legged Robot State Estimation. 1575-1584 - Xinghang Li, Di Guo, Huaping Liu, Fuchun Sun:
Embodied Semantic Scene Graph Generation. 1585-1594 - Jun Hao Alvin Ng, Ronald P. A. Petrick:
Generalised Task Planning with First-Order Function Approximation. 1595-1610 - Ajinkya Jain, Stephen Giguere, Rudolf Lioutikov, Scott Niekum:
Distributional Depth-Based Estimation of Object Articulation Models. 1611-1621 - Harshit Sikchi, Wenxuan Zhou, David Held:
Learning Off-Policy with Online Planning. 1622-1633 - Antonin Raffin, Jens Kober, Freek Stulp:
Smooth Exploration for Robotic Reinforcement Learning. 1634-1644 - Alex Church, John Lloyd, Raia Hadsell, Nathan F. Lepora:
Tactile Sim-to-Real Policy Transfer via Real-to-Sim Image Translation. 1645-1654 - Chris Xie, Arsalan Mousavian, Yu Xiang, Dieter Fox:
RICE: Refining Instance Masks in Cluttered Environments with Graph Neural Networks. 1655-1665 - Kaichun Mo, Yuzhe Qin, Fanbo Xiang, Hao Su, Leonidas J. Guibas:
O2O-Afford: Annotation-Free Large-Scale Object-Object Affordance Learning. 1666-1677 - Ajay Mandlekar, Danfei Xu, Josiah Wong, Soroush Nasiriany, Chen Wang, Rohun Kulkarni, Li Fei-Fei, Silvio Savarese, Yuke Zhu, Roberto Martín-Martín:
What Matters in Learning from Offline Human Demonstrations for Robot Manipulation. 1678-1690 - Jesse Thomason, Mohit Shridhar, Yonatan Bisk, Chris Paxton, Luke Zettlemoyer:
Language Grounding with 3D Objects. 1691-1701 - Seunghyun Lee, Younggyo Seo, Kimin Lee, Pieter Abbeel, Jinwoo Shin:
Offline-to-Online Reinforcement Learning via Balanced Replay and Pessimistic Q-Ensemble. 1702-1712 - Dian Wang, Robin Walters, Xupeng Zhu, Robert Platt Jr.:
Equivariant Q Learning in Spatial Action Spaces. 1713-1723 - Charles Dawson, Zengyi Qin, Sicun Gao, Chuchu Fan:
Safe Nonlinear Control Using Robust Neural Lyapunov-Barrier Functions. 1724-1735 - Martin A. Riedmiller, Jost Tobias Springenberg, Roland Hafner, Nicolas Heess:
Collect & Infer - a fresh look at data-efficient Reinforcement Learning. 1736-1744 - Pulkit Agrawal:
The Task Specification Problem. 1745-1751 - Sergey Levine:
Understanding the World Through Action. 1752-1757 - Chris Eliasmith, P. Michael Furlong:
Continuous then discrete: A recommendation for building robotic brains. 1758-1763 - Edward Johns:
Back to Reality for Imitation Learning. 1764-1768 - Victoria Dean, Yonadav G. Shavit, Abhinav Gupta:
Robots on Demand: A Democratized Robotics Research Cloud. 1769-1775 - Kaylene Caswell Stocking, Alison Gopnik, Claire J. Tomlin:
From Robot Learning To Robot Understanding: Leveraging Causal Graphical Models For Robotics. 1776-1781 - Rika Antonova, Jeannette Bohg:
Learning to be Multimodal: Co-evolving Sensory Modalities and Sensor Properties. 1782-1788 - Qinjie Lin, Guo Ye, Jiayi Wang, Han Liu:
RoboFlow: a Data-centric Workflow Management System for Developing AI-enhanced Robots. 1789-1794 - Yuchong Geng, Dongyue Zhang, Po-han Li, Oguzhan Akcin, Ao Tang, Sandeep P. Chinchali:
Decentralized Sharing and Valuation of Fleet Robotic Data. 1795-1800 - Homanga Bharadhwaj:
Auditing Robot Learning for Safety and Compliance during Deployment. 1801-1806 - Chad DeChant, Daniel Bauer:
Toward robots that learn to summarize their actions in natural language: a set of tasks. 1807-1813 - Elie Aljalbout:
Dual-Arm Adversarial Robot Learning. 1814-1819