DOI: 10.1145/1028523.1028535

Precomputing avatar behavior from human motion data

Published: 27 August 2004

Abstract

Creating controllable, responsive avatars is an important problem in computer games and virtual environments. Recently, large collections of motion capture data have been exploited for increased realism in avatar animation and control. Large motion sets have the advantage of accommodating a broad variety of natural human motion. However, when a motion set is large, the time required to identify an appropriate sequence of motions is the bottleneck for achieving interactive avatar control. In this paper, we present a novel method of precomputing avatar behavior from unlabelled motion data in order to animate and control avatars at minimal runtime cost. Based on dynamic programming, our method finds a control policy that indicates how the avatar should act in any given situation. We demonstrate the effectiveness of our approach through examples that include avatars interacting with each other and with the user.
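The core idea in the abstract, precomputing a control policy with dynamic programming so that runtime control is a cheap lookup, can be illustrated with a toy example. The sketch below runs tabular value iteration over a small invented "motion graph" (states, actions, transitions, and rewards are all hypothetical, and the paper's actual state space and reward terms are not reproduced here); it is a minimal instance of the general technique, not the paper's implementation.

```python
# Toy motion states and the actions (next clips) available from each.
# All names here are invented for illustration.
transitions = {
    "idle":    {"walk": "walking", "wave": "waving"},
    "walking": {"stop": "idle", "run": "running"},
    "running": {"slow": "walking"},
    "waving":  {"stop": "idle"},
}
# Per-state reward under a hypothetical user goal ("move fast").
reward = {"idle": 0.0, "walking": 1.0, "running": 2.0, "waving": 0.0}
gamma = 0.9  # discount factor

# Value iteration: repeatedly back up the best one-step choice.
V = {s: 0.0 for s in transitions}
for _ in range(100):
    V = {s: reward[s] + gamma * max(V[nxt] for nxt in transitions[s].values())
         for s in transitions}

# The precomputed policy: in every state, pick the action whose successor
# state has the highest value. At runtime this is a constant-time lookup.
policy = {s: max(transitions[s], key=lambda a: V[transitions[s][a]])
          for s in transitions}
```

Under this toy reward, the resulting policy steers the avatar toward the running state from every other state, which is the flavor of behavior the paper precomputes offline so that no search is needed at interaction time.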

Supplementary Material

Supplemental video: p79-lee.mov (MOV file); preview image: p79-lee.jpg (JPG file)



Information

Published In

SCA '04: Proceedings of the 2004 ACM SIGGRAPH/Eurographics Symposium on Computer Animation
August 2004, 388 pages
ISBN: 3-905673-14-2

Publisher

Eurographics Association, Goslar, Germany


Conference

SCA04: Symposium on Computer Animation 2004
August 27-29, 2004
Grenoble, France

Acceptance Rates

Overall Acceptance Rate 183 of 487 submissions, 38%

