
Housekeep: Tidying Virtual Households using Commonsense Reasoning

¹University of Toronto   ²Georgia Tech   ³Meta AI
*Equal Contribution


Abstract

We introduce Housekeep, a benchmark to evaluate commonsense reasoning in the home for embodied AI. In Housekeep, an embodied agent must tidy a house by rearranging misplaced objects without explicit instructions specifying which objects need to be rearranged. Instead, the agent must learn from and is evaluated against human preferences of which objects belong where in a tidy house. Specifically, we collect a dataset of where humans typically place objects in tidy and untidy houses constituting 1799 objects, 268 object categories, 585 placements, and 105 rooms.
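Since the benchmark scores an agent against human placement preferences rather than explicit instructions, the core evaluation boils down to checking each object's final receptacle against the set of receptacles annotators marked as correct. The following is a minimal sketch of that idea; the function and the data layout (category-to-receptacle mappings) are illustrative assumptions, not the benchmark's actual API.

```python
# Hypothetical sketch: scoring an agent's final arrangement against
# human placement preferences. Names and data layout are illustrative.

from typing import Dict, Set

def placement_success(
    final_placements: Dict[str, str],          # object id -> receptacle it ends on
    object_category: Dict[str, str],           # object id -> category (e.g. "mug")
    correct_receptacles: Dict[str, Set[str]],  # category -> receptacles humans deem correct
) -> float:
    """Fraction of objects resting on a receptacle humans consider correct."""
    if not final_placements:
        return 0.0
    correct = sum(
        1
        for obj, rec in final_placements.items()
        if rec in correct_receptacles.get(object_category[obj], set())
    )
    return correct / len(final_placements)

# Toy usage: the mug ends up in an acceptable spot, the shoe does not.
print(placement_success(
    {"mug_0": "kitchen_cabinet", "shoe_1": "dining_table"},
    {"mug_0": "mug", "shoe_1": "shoe"},
    {"mug": {"kitchen_cabinet", "kitchen_counter"}, "shoe": {"shoe_rack"}},
))  # -> 0.5
```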

Next, we propose a modular baseline approach for Housekeep that integrates planning, exploration, and navigation. It leverages a fine-tuned large language model (LLM) trained on an internet text corpus for effective planning. We show that our baseline agent generalizes to rearranging unseen objects in unknown environments.
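To give a flavor of how an LLM can drive the planning module, the sketch below ranks candidate receptacles for a misplaced object by a language-model plausibility score. This is an assumption-laden illustration of the idea, not the paper's implementation: `llm_log_likelihood` is a placeholder for whatever scoring interface the underlying model exposes, and the prompt template is invented here.

```python
# Hypothetical sketch of LLM-based placement ranking for planning.
# `llm_log_likelihood` is a placeholder scoring function, not part of
# the Housekeep codebase.

from typing import Callable, List, Tuple

def rank_receptacles(
    obj_category: str,
    room: str,
    candidate_receptacles: List[str],
    llm_log_likelihood: Callable[[str], float],
) -> List[Tuple[str, float]]:
    """Return candidate receptacles sorted from most to least plausible."""
    scored = []
    for rec in candidate_receptacles:
        # Phrase each candidate placement as a sentence and score it.
        prompt = f"In a tidy house, the {obj_category} belongs in the {room} on the {rec}."
        scored.append((rec, llm_log_likelihood(prompt)))
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```

The highest-ranked receptacle can then be handed to the exploration and navigation modules as the rearrangement target.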

Dataset

The Housekeep dataset includes 1799 object models and 395 receptacle models from five popular asset repositories: Amazon Berkeley Objects (ABO), Google Scanned Objects (GSO), ReplicaCAD (R-CAD), iGibson, and YCB Objects.

Table: number of object and receptacle models obtained from each source.

BibTeX

@misc{kant2022housekeep,
      title={Housekeep: Tidying Virtual Households using Commonsense Reasoning},
      author={Yash Kant and Arun Ramachandran and Sriram Yenamandra and Igor Gilitschenski and Dhruv Batra and Andrew Szot and Harsh Agrawal},
      year={2022},
      eprint={2205.10712},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}