Workshop
Chen Tang · Karen Leung · Leilani Gilpin · Jiachen Li · Changliu Liu
Room 357
Fri 2 Dec, 5:50 a.m. PST
Recent advances in deep learning and artificial intelligence have equipped autonomous agents with increasing intelligence, enabling human-level performance on challenging tasks. In particular, agents with advanced intelligence have shown great potential for interacting and collaborating with humans (e.g., self-driving cars, industrial robot co-workers, smart homes, and domestic robots). However, the opaque nature of deep learning models makes it difficult to decipher an agent's decision-making process, preventing stakeholders from readily trusting autonomous agents, especially in safety-critical tasks requiring physical interaction with humans. In this workshop, we bring together experts with diverse and interdisciplinary backgrounds to build a roadmap for developing and deploying trustworthy interactive autonomous systems at scale. Specifically, we aim to address the following questions:
1) What properties are required to build trust between humans and interactive autonomous systems? How can we assess and ensure these properties without compromising the expressiveness of the models and the performance of the overall system?
2) How can we develop and deploy trustworthy autonomous agents under an efficient and trustful workflow? How should we transition from development to deployment?
3) How do we define standard metrics that quantify trustworthiness from regulatory, theoretical, and experimental perspectives? How do we know that these trustworthiness metrics scale to the broader population?
4) What are the most pressing open questions in developing trustworthy autonomous agents that interact with humans? Which research directions are prime for academia, and which are better suited to industry?
Schedule
Fri 5:50 a.m. - 6:00 a.m. | Opening Remarks (Introduction)
Fri 6:00 a.m. - 6:25 a.m. | Trustworthy Robots for Human-Robot Interaction (Keynote Talk) | Harold Soh
Fri 6:25 a.m. - 6:30 a.m. | Q & A
Fri 6:30 a.m. - 6:55 a.m. | Towards Safe Model-based Reinforcement Learning (Keynote Talk) | Felix Berkenkamp
Fri 6:55 a.m. - 7:00 a.m. | Q & A
Fri 7:00 a.m. - 7:25 a.m. | Scenario Generation via Quality Diversity for Trustworthy AI (Keynote Talk) | Stefanos Nikolaidis
Fri 7:25 a.m. - 7:30 a.m. | Q & A
Fri 7:30 a.m. - 7:36 a.m. | Take 5: Interpretable Image Classification with a Handful of Features (Spotlight) | Thomas Norrenbrock · Marco Rudolph · Bodo Rosenhahn
Fri 7:36 a.m. - 7:42 a.m. | Characterising the Robustness of Reinforcement Learning for Continuous Control using Disturbance Injection (Spotlight) | Catherine Glossop · Jacopo Panerati · Amrit Krishnan · Zhaocong Yuan · Angela Schoellig
Fri 7:42 a.m. - 7:48 a.m. | MAFEA: Multimodal Attribution Framework for Embodied AI (Spotlight) | Vidhi Jain · Jayant Sravan Tamarapalli · Sahiti Yerramilli · Yonatan Bisk
Fri 7:48 a.m. - 7:54 a.m. | Sim-to-Lab-to-Real: Safe Reinforcement Learning with Shielding and Generalization Guarantees (Spotlight) | Kai-Chieh Hsu · Allen Z. Ren · Duy Nguyen · Anirudha Majumdar · Jaime Fisac
Fri 7:54 a.m. - 8:00 a.m. | Addressing Mistake Severity in Neural Networks with Semantic Knowledge (Spotlight) | Victoria Helus · Nathan Vaska · Natalie Abreu
Fri 8:00 a.m. - 9:00 a.m. | Coffee Break & Poster Session 1
Fri 9:00 a.m. - 9:25 a.m. | Failure Identification for Semi- and Unstructured Robot Environments (Keynote Talk) | Katherine Driggs-Campbell
Fri 9:25 a.m. - 9:30 a.m. | Q & A
Fri 9:30 a.m. - 9:55 a.m. | Progress and Challenges in Learning Control Certificates in Large-scale Autonomy (Keynote Talk) | Chuchu Fan
Fri 9:55 a.m. - 10:00 a.m. | Q & A
Fri 10:00 a.m. - 10:25 a.m. | Explainable Interactive Learning for Human-Robot Teaming (Keynote Talk) | Matthew Gombolay
Fri 10:25 a.m. - 10:30 a.m. | Q & A
Fri 10:30 a.m. - 11:20 a.m. | Lunch Break
Fri 11:20 a.m. - 11:25 a.m. | Paper Award Ceremony (Award Ceremony)
Fri 11:25 a.m. - 11:45 a.m. | To Explain or Not to Explain: A Study on the Necessity of Explanations for Autonomous Vehicles (Awarded Paper Presentation) | Yuan Shen · Shanduojiao Jiang · Yanlin Chen · Katherine Driggs-Campbell
Fri 11:40 a.m. - 11:45 a.m. | Q & A
Fri 11:45 a.m. - 12:05 p.m. | Post-Hoc Attribute-Based Explanations for Recommender Systems (Awarded Paper Presentation) | Sahil Verma · Anurag Beniwal · Narayanan Sadagopan · Arjun Seshadri
Fri 12:00 p.m. - 12:05 p.m. | Q & A
Fri 12:05 p.m. - 12:30 p.m. | Providing Intelligible Explanations in Autonomous Driving (Keynote Talk) | Daniel Omeiza
Fri 12:30 p.m. - 12:35 p.m. | Q & A
Fri 12:35 p.m. - 12:41 p.m. | Dynamic Efficient Adversarial Training Guided by Gradient Magnitude (Spotlight) | Fu Wang · Yanghao Zhang · Wenjie Ruan · Yanbin Zheng
Fri 12:41 p.m. - 12:47 p.m. | A Theory of Learning with Competing Objectives and User Feedback (Spotlight) | Pranjal Awasthi · Corinna Cortes · Yishay Mansour · Mehryar Mohri
Fri 12:47 p.m. - 12:53 p.m. | A Framework for Generating Dangerous Scenes for Testing Robustness (Spotlight) | Shengjie Xu · Lan Mi · Leilani Gilpin
Fri 12:53 p.m. - 12:59 p.m. | What Makes a Good Explanation?: A Unified View of Properties of Interpretable ML (Spotlight) | Varshini Subhash · Zixi Chen · Marton Havasi · Weiwei Pan · Finale Doshi-Velez
Fri 1:00 p.m. - 2:00 p.m. | Coffee Break & Poster Session
Fri 2:00 p.m. - 2:55 p.m. | Panel Discussion (Panel) | Chuchu Fan · Stefanos Nikolaidis · Katherine Driggs-Campbell · Matthew Gombolay · Daniel Omeiza
Fri 2:55 p.m. - 3:00 p.m. | Closing Remarks (Closing)
Posters
Post-Hoc Attribute-Based Explanations for Recommender Systems | Sahil Verma · Anurag Beniwal · Narayanan Sadagopan · Arjun Seshadri
Characterising the Robustness of Reinforcement Learning for Continuous Control using Disturbance Injection | Catherine Glossop · Jacopo Panerati · Amrit Krishnan · Zhaocong Yuan · Angela Schoellig
What Makes a Good Explanation?: A Unified View of Properties of Interpretable ML | Varshini Subhash · Zixi Chen · Marton Havasi · Weiwei Pan · Finale Doshi-Velez
Dynamic Efficient Adversarial Training Guided by Gradient Magnitude | Fu Wang · Yanghao Zhang · Wenjie Ruan · Yanbin Zheng
A Theory of Learning with Competing Objectives and User Feedback | Pranjal Awasthi · Corinna Cortes · Yishay Mansour · Mehryar Mohri
Sim-to-Lab-to-Real: Safe Reinforcement Learning with Shielding and Generalization Guarantees | Kai-Chieh Hsu · Allen Z. Ren · Duy Nguyen · Anirudha Majumdar · Jaime Fisac
Addressing Mistake Severity in Neural Networks with Semantic Knowledge | Victoria Helus · Nathan Vaska · Natalie Abreu
A Framework for Generating Dangerous Scenes for Testing Robustness | Shengjie Xu · Lan Mi · Leilani Gilpin
MAFEA: Multimodal Attribution Framework for Embodied AI | Vidhi Jain · Jayant Sravan Tamarapalli · Sahiti Yerramilli · Yonatan Bisk
Take 5: Interpretable Image Classification with a Handful of Features | Thomas Norrenbrock · Marco Rudolph · Bodo Rosenhahn
To Explain or Not to Explain: A Study on the Necessity of Explanations for Autonomous Vehicles | Yuan Shen · Shanduojiao Jiang · Yanlin Chen · Katherine Driggs-Campbell