DOI: 10.1145/3658852.3659078

DanceCraft: A Music-Reactive Real-time Dance Improv System

Published: 27 June 2024

Abstract

Automatic generation of 3D dance motion in response to live music is a challenging task. Prior research has assumed that either the entire music track, or a significant chunk of it, is available before dance generation begins. In this paper, we present a novel production-ready system that generates highly realistic dances in reaction to live music. Since predicting future music, or dance choreographed to future music, is a hard problem, we trade off perfect choreography for spontaneous dance-motion improvisation. Given a small slice of the most recently received audio, we first determine whether the audio contains music and, if so, extract high-level descriptors of the music such as tempo and energy. Based on these descriptors, we generate the dance motion. The generated dance combines previously captured dance sequences with randomly triggered generative transitions between sequences. Because of these randomized transitions, two generated dances, even for the same music, tend to appear very different. Furthermore, our system offers a high level of interactivity and personalization, allowing users to import their personal 3D avatars and have them dance to any music played in the environment. User studies show that our system provides an engaging and immersive experience that is appreciated by users.
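
To make the described pipeline concrete, here is a minimal, hypothetical sketch of the loop the abstract outlines: detect music in the most recent audio slice, extract tempo and energy, then pick a captured dance clip with occasional randomized transitions. This is not the authors' implementation; librosa is assumed for descriptor extraction, is_music is a placeholder threshold standing in for a proper audio-event classifier, and the clip names and transition probability are invented for illustration.

```python
# Hypothetical sketch of the music-reactive loop described in the abstract;
# NOT the authors' implementation. Assumes librosa for tempo/energy extraction.
import random
import numpy as np
import librosa

def analyze_slice(audio: np.ndarray, sr: int) -> tuple[float, float]:
    """Extract high-level descriptors (tempo in BPM, mean RMS energy) from a short slice."""
    tempo_est, _ = librosa.beat.beat_track(y=audio, sr=sr)
    tempo = float(np.atleast_1d(tempo_est)[0])
    energy = float(np.mean(librosa.feature.rms(y=audio)))
    return tempo, energy

def is_music(energy: float, threshold: float = 0.01) -> bool:
    """Placeholder music detector; a real system would use an audio-event classifier."""
    return energy > threshold

# Hypothetical buckets of previously captured dance clips.
DANCE_CLIPS = {"low": ["sway", "step_touch"], "high": ["spin", "jump_groove"]}

def pick_motion(tempo: float, energy: float, transition_prob: float = 0.2) -> str:
    """Map descriptors to a captured clip; occasionally trigger a randomized
    transition so two runs on the same music diverge."""
    bucket = "high" if tempo > 120 or energy > 0.05 else "low"
    clip = random.choice(DANCE_CLIPS[bucket])
    return f"transition_into:{clip}" if random.random() < transition_prob else clip

# Example: react to a ~2 s slice of the most recently received audio.
sr = 22050
audio_slice = 0.02 * np.random.randn(2 * sr).astype(np.float32)  # stand-in for live capture
tempo, energy = analyze_slice(audio_slice, sr)
if is_music(energy):
    print(pick_motion(tempo, energy))
```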



      Published In

      MOCO '24: Proceedings of the 9th International Conference on Movement and Computing
      May 2024
      245 pages
      ISBN: 9798400709944
      DOI: 10.1145/3658852

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      Published: 27 June 2024


      Author Tags

      1. 3D dance generation
      2. deep learning
      3. motion in-betweening
      4. music-reactive
      5. real-time

      Qualifiers

      • Research-article
      • Research
      • Refereed limited

      Conference

      MOCO '24

      Acceptance Rates

      MOCO '24 Paper Acceptance Rate 35 of 75 submissions, 47%;
      Overall Acceptance Rate 85 of 185 submissions, 46%

