Local Policies Enable Zero-shot Long-horizon Manipulation

M Dalal, M Liu, W Talbott, C Chen, D Pathak… - arXiv preprint arXiv:2410.22332, 2024 - arxiv.org
Sim2real for robotic manipulation is difficult due to the challenges of simulating complex contacts and generating realistic task distributions. To tackle the latter problem, we introduce ManipGen, which leverages a new class of policies for sim2real transfer: local policies. Locality enables a variety of appealing properties, including invariance to absolute robot and object pose, skill ordering, and global scene configuration. We combine these policies with foundation models for vision, language, and motion planning, and demonstrate SOTA zero-shot performance on Robosuite benchmark tasks in simulation (97%). We transfer our local policies from simulation to reality and observe that they can solve unseen long-horizon manipulation tasks with up to 8 stages, under significant variation in pose, object, and scene configuration. ManipGen outperforms SOTA approaches such as SayCan, OpenVLA, LLMTrajGen, and VoxPoser across 50 real-world manipulation tasks by 36%, 76%, 62%, and 60%, respectively. Video results at https://mihdalal.github.io/manipgen/
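The abstract describes a three-part pipeline: a vision-language model decomposes the task, a motion planner handles free-space motion, and learned local policies handle the contact-rich portion from observations in a frame local to the end-effector (which is what yields the invariance to absolute pose and scene layout). The following is a minimal sketch of that structure, not the authors' code; every class and method name here (`vlm.plan`, `motion_planner.plan`, `robot.get_local_observation`, etc.) is a hypothetical placeholder.

```python
# Hypothetical sketch of a ManipGen-style control loop; all interfaces
# below are assumed placeholders, not the paper's actual API.

from dataclasses import dataclass

@dataclass
class Subtask:
    skill: str          # e.g. "pick" or "place"
    target_pose: list   # 6-DoF pose of the region of interest

def run_task(instruction, vlm, motion_planner, local_policies, robot):
    """Execute a long-horizon task as a sequence of local-policy rollouts."""
    # 1. Task decomposition: a vision-language model grounds the language
    #    instruction into an ordered list of subtasks with target poses.
    subtasks = vlm.plan(instruction, robot.get_rgbd())

    for subtask in subtasks:
        # 2. Global motion: a collision-aware motion planner moves the arm
        #    to a pre-contact pose near the subtask's region of interest.
        trajectory = motion_planner.plan(robot.get_state(), subtask.target_pose)
        robot.execute(trajectory)

        # 3. Local control: the skill policy acts only on observations in
        #    a frame local to the end-effector, so its behavior does not
        #    depend on absolute robot/object pose or global scene layout.
        policy = local_policies[subtask.skill]
        for _ in range(policy.max_steps):
            local_obs = robot.get_local_observation()  # e.g. wrist camera + proprioception
            robot.step(policy.act(local_obs))
            if policy.is_done(local_obs):
                break
```

Under this reading, skill-ordering invariance falls out of the structure: each local policy starts from whatever pre-contact pose the planner delivers, so subtasks can be resequenced without retraining the policies.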