AI Habitat


📢 Habitat 3.0 is out. Find more information on the project page

📢 Habitat Navigation Challenge 2023 is live now! You can find it here

What is Embodied AI?

“AI is … the science and engineering of making intelligent machines.” - John McCarthy

Embodied AI is the science and engineering of intelligent machines with a physical or virtual embodiment (typically, robots and egocentric personal assistants). The embodiment hypothesis is the idea that “intelligence emerges in the interaction of an agent with an environment and as a result of sensorimotor activity”.

What is AI Habitat?

Habitat is a simulation platform for research in Embodied AI.

Our goal is to advance the science and engineering of Embodied AI. Imagine walking up to a home robot and asking “Hey robot – can you go check if my laptop is on my desk? And if so, bring it to me”. Or asking an egocentric AI assistant (sitting on your smart glasses): “Hey – where did I last see my keys?”.

AI Habitat enables training of such embodied AI agents (virtual robots and egocentric assistants) in a highly photorealistic & efficient 3D simulator, before transferring the learned skills to reality.

This empowers a paradigm shift from ‘internet AI’ based on static datasets (e.g. ImageNet, COCO, VQA) to embodied AI where agents act within realistic environments, bringing to the fore active perception, long-term planning, learning from interaction, and holding a dialog grounded in an environment.

Why Simulation?

Training/testing embodied AI agents in the real world is

  • Slow: the real world runs no faster than real time and cannot be parallelized,
  • Dangerous: poorly-trained agents can unwittingly injure themselves, the human wearing the egocentric device, the environment, or others,
  • Expensive: both the agents and the environments in which they operate are expensive,
  • Difficult to control/reproduce: replicating conditions (particularly corner-cases) or experiments is often difficult.

Simulations can run orders of magnitude faster than real-time and can be parallelized over a cluster; training/testing in simulation is safe, cheap, and enables fair systematic benchmarking of progress. Once a promising approach has been developed and tested in simulation, it can be transferred to physical platforms.

Why the name Habitat? Because that’s where AI agents live and learn 🙂.

Overall, Habitat consists of Habitat-Sim, Habitat-Lab, and Habitat Challenge.

Habitat-Sim

A high-performance, physics-enabled 3D simulator with support for:

  • 3D scans of indoor spaces (e.g., Matterport3D),
  • CAD models of spaces and rigid objects (e.g., ReplicaCAD),
  • configurable sensors (e.g., RGBD cameras),
  • robot embodiments (e.g., Fetch), and
  • rigid-body physics.

The design philosophy of Habitat is to prioritize simulation speed over the breadth of simulation capabilities. When rendering a scene from the Matterport3D dataset, Habitat-Sim achieves several thousand frames per second (FPS) running single-threaded and reaches over 10,000 FPS multi-process on a single GPU. Habitat-Sim simulates a Fetch robot interacting in ReplicaCAD scenes at over 8,000 steps per second (SPS), where each ‘step’ involves rendering one RGBD observation (128×128 pixels) and simulating rigid-body dynamics for 1/30 sec.
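To make these numbers concrete, here is a minimal sketch of driving Habitat-Sim from Python. The scene path is a placeholder, and sensor/agent configuration details vary across Habitat-Sim releases, so treat this as an illustration rather than a canonical recipe:

import habitat_sim

# Simulator backend: which 3D scene to load (path is a placeholder).
sim_cfg = habitat_sim.SimulatorConfiguration()
sim_cfg.scene_id = "path/to/scene.glb"

# One RGB camera at 128x128, matching the resolution quoted above.
rgb_spec = habitat_sim.CameraSensorSpec()
rgb_spec.uuid = "rgb"
rgb_spec.sensor_type = habitat_sim.SensorType.COLOR
rgb_spec.resolution = [128, 128]

# An agent carrying that sensor, with the default discrete action space.
agent_cfg = habitat_sim.agent.AgentConfiguration()
agent_cfg.sensor_specifications = [rgb_spec]

sim = habitat_sim.Simulator(habitat_sim.Configuration(sim_cfg, [agent_cfg]))

# Each step applies an action and returns the new sensor observations.
for _ in range(10):
    observations = sim.step("move_forward")  # also "turn_left", "turn_right"

sim.close()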

Habitat-Lab

Habitat-Lab is a modular high-level library for end-to-end development in embodied AI — defining embodied AI tasks (e.g. navigation, interaction, instruction following, question answering), configuring embodied agents (physical form, sensors, capabilities), training these agents (via imitation or reinforcement learning, or no learning at all as in classical SensePlanAct pipelines), and benchmarking their performance on the defined tasks using standard metrics.
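As a flavor of the API, here is a minimal sketch of running one task episode with Habitat-Lab, in the spirit of its example scripts. The config path is illustrative (paths and sensor names differ between Habitat-Lab releases), and the random agent stands in for a real policy:

import habitat

# Load a task configuration; the exact path is illustrative and
# version-dependent (a PointGoal navigation benchmark here).
config = habitat.get_config("benchmark/nav/pointnav/pointnav_habitat_test.yaml")

with habitat.Env(config=config) as env:
    observations = env.reset()  # dict of sensor readings
    while not env.episode_over:
        # A real agent would run its policy on `observations`;
        # here we sample a random action for illustration.
        observations = env.step(env.action_space.sample())
    print(env.get_metrics())  # task metrics, e.g. success and SPL for navigation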

Habitat Challenge

An annual autonomous navigation challenge (hosted on the EvalAI platform) that aims to benchmark and accelerate progress in embodied AI. Unlike classical ‘internet AI’ challenges based on static image datasets (e.g., ImageNet LSVRC, COCO, VQA), participants upload code, not predictions. The uploaded agents are evaluated in novel (unseen) environments to test for generalization.
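Concretely, a submission implements a small agent interface (reset/act) that the evaluation harness drives inside EvalAI. Below is a hedged sketch following the pattern of past challenge starter code; class and action names vary by challenge year, and the real starter code reads the action space from the task config:

import random

import habitat

class RandomAgent(habitat.Agent):
    """Trivial baseline: picks a random action at every step."""

    def reset(self):
        # Called at the start of each evaluation episode.
        pass

    def act(self, observations):
        # `observations` is a dict of sensor readings; a real submission
        # would run its trained policy here instead of sampling at random.
        return {"action": random.choice(
            ["STOP", "MOVE_FORWARD", "TURN_LEFT", "TURN_RIGHT"])}

agent = RandomAgent()
challenge = habitat.Challenge()  # eval_remote=True inside the EvalAI harness
challenge.submit(agent)          # runs the agent over the evaluation episodes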

The first iteration of this challenge was held in conjunction with the Habitat: Embodied Agents Challenge and Workshop at CVPR 2019. Since 2020, the Habitat Challenge has been held in conjunction with the Embodied AI Workshop at CVPR.

Approximate stats over the years:

  • 2021: Total teams: 45. Total submissions: 400.
  • 2020: Total teams: 27. Total submissions: 563.
  • 2019: Total teams: 16. Total submissions: 150.

Team: Current Contributors

Team: Current Students, Interns, and Residents

Team: Past Contributors

Team: Past Students, Interns, and Residents

  • 1 - AI at Meta
  • 2 - Georgia Institute of Technology
  • 3 - Simon Fraser University
  • 4 - Berkeley
  • 5 - University of Washington
  • 6 - Stanford University
  • 7 - Carnegie Mellon University

Citing Habitat

If you use the Habitat platform in your research, please cite the Habitat, Habitat 2.0, and Habitat 3.0 papers:

@misc{puig2023habitat3,
  title         = {Habitat 3.0: A Co-Habitat for Humans, Avatars and Robots},
  author        = {Xavi Puig and Eric Undersander and Andrew Szot and Mikael Dallaire Cote and Ruslan Partsey and Jimmy Yang and Ruta Desai and Alexander William Clegg and Michal Hlavac and Tiffany Min and Theo Gervet and Vladimír Vondruš and Vincent-Pierre Berges and John Turner and Oleksandr Maksymets and Zsolt Kira and Mrinal Kalakrishnan and Jitendra Malik and Devendra Singh Chaplot and Unnat Jain and Dhruv Batra and Akshara Rai and Roozbeh Mottaghi},
  year          = {2023},
  archivePrefix = {arXiv},
}

@inproceedings{szot2021habitat,
  title     =     {Habitat 2.0: Training Home Assistants to Rearrange their Habitat},
  author    =     {Andrew Szot and Alex Clegg and Eric Undersander and Erik Wijmans and Yili Zhao and John Turner and Noah Maestre and Mustafa Mukadam and Devendra Chaplot and Oleksandr Maksymets and Aaron Gokaslan and Vladimir Vondrus and Sameer Dharur and Franziska Meier and Wojciech Galuba and Angel Chang and Zsolt Kira and Vladlen Koltun and Jitendra Malik and Manolis Savva and Dhruv Batra},
  booktitle =     {Advances in Neural Information Processing Systems (NeurIPS)},
  year      =     {2021}
}

@inproceedings{habitat19iccv,
  title     =     {Habitat: {A} {P}latform for {E}mbodied {AI} {R}esearch},
  author    =     {{Manolis Savva*} and {Abhishek Kadian*} and {Oleksandr Maksymets*} and Yili Zhao and Erik Wijmans and Bhavana Jain and Julian Straub and Jia Liu and Vladlen Koltun and Jitendra Malik and Devi Parikh and Dhruv Batra},
  booktitle =     {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  year      =     {2019}
}

Contact

Please read the documentation here.

To connect with Habitat developers and the community, please reach out on our Discussions forum with questions, suggestions, and feedback or to share your own AI Habitat applications and extensions.

Acknowledgments

The Habitat project would not have been possible without the support and contributions of many individuals. We are grateful to Xinlei Chen, Georgia Gkioxari, Daniel Gordon, Leonidas Guibas, Saurabh Gupta, Jerry (Zhi-Yang) He, Rishabh Jain, Or Litany, Joel Marcey, Dmytro Mishkin, Marcus Rohrbach, Amanpreet Singh, Yuandong Tian, Yuxin Wu, Fei Xia, Deshraj Yadav, Amir Zamir, and Jiazhi Zhang for their help.