Infants are experts at playing, with an amazing ability to generate novel structured behaviors in unstructured environments that lack clear extrinsic reward signals. We seek to replicate some of these abilities with a neural network that implements curiosity-driven intrinsic motivation. Using a simple but ecologically naturalistic simulated environment in which the agent can move and interact with objects it sees, the agent learns a world model predicting the dynamic consequences of its actions. Simultaneously, the agent learns to take actions that adversarially challenge the developing world model, pushing the agent to explore novel and informative interactions with its environment. We demonstrate that this policy leads to the self-supervised emergence of a spectrum of complex behaviors, including ego-motion prediction, object attention, and object gathering. Moreover, the world model that the agent learns supports improved performance on object dynamics prediction and localization tasks. Our results are a proof of principle that computational models of intrinsic motivation might account for key features of developmental visuomotor learning in infants.
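To make the loop concrete, here is a minimal Python sketch of the two coupled learners the abstract describes: a world model trained to predict state transitions, and a curiosity policy rewarded by the world model's prediction error. Everything here (ToyEnv, WorldModel, CuriosityPolicy, linear models, the candidate-sampling action selection) is an illustrative assumption, not the paper's architecture or environment.

```python
# Toy sketch of curiosity-driven intrinsic motivation: a world model learns
# transition dynamics while a policy seeks actions the model predicts badly.
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, ACTION_DIM, LR = 4, 2, 1e-2

class ToyEnv:
    """Stand-in environment (assumption): linear dynamics plus small noise."""
    def __init__(self):
        self.A = rng.normal(size=(STATE_DIM, STATE_DIM)) * 0.1
        self.B = rng.normal(size=(STATE_DIM, ACTION_DIM)) * 0.1
        self.state = rng.normal(size=STATE_DIM)

    def step(self, action):
        self.state = (self.A @ self.state + self.B @ action
                      + rng.normal(scale=0.01, size=STATE_DIM))
        return self.state

class WorldModel:
    """Linear predictor of the next state from (state, action)."""
    def __init__(self):
        self.W = np.zeros((STATE_DIM, STATE_DIM + ACTION_DIM))

    def predict(self, state, action):
        return self.W @ np.concatenate([state, action])

    def update(self, state, action, next_state):
        x = np.concatenate([state, action])
        err = self.predict(state, action) - next_state
        self.W -= LR * np.outer(err, x)   # gradient step on squared error
        return float(err @ err)           # prediction loss = curiosity reward

class CuriosityPolicy:
    """Predicts the world model's loss and picks actions it expects the
    model to get wrong -- a cheap stand-in for a learned adversarial policy."""
    def __init__(self):
        self.w = np.zeros(STATE_DIM + ACTION_DIM)

    def score(self, state, action):
        return self.w @ np.concatenate([state, action])

    def act(self, state, n_candidates=8):
        cands = rng.normal(size=(n_candidates, ACTION_DIM))
        scores = [self.score(state, a) for a in cands]
        return cands[int(np.argmax(scores))]

    def update(self, state, action, observed_loss):
        x = np.concatenate([state, action])
        err = self.score(state, action) - observed_loss
        self.w -= LR * err * x            # regress toward the observed loss

env, model, policy = ToyEnv(), WorldModel(), CuriosityPolicy()
state = env.state.copy()
for t in range(500):
    action = policy.act(state)                      # challenge the world model
    next_state = env.step(action)
    loss = model.update(state, action, next_state)  # world model improves
    policy.update(state, action, loss)              # policy chases high loss
    state = next_state.copy()
    if t % 100 == 0:
        print(f"step {t:4d}  world-model loss {loss:.4f}")
```

Because the policy's reward is the world model's error, actions that have become predictable lose their appeal, pushing the agent toward the novel, informative interactions the abstract describes.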