
A Corpus-Based and Complex Computing Digital Media System for 3D Animation

Published: 01 January 2021

Abstract

In this paper, we design a corpus-based 3D animation digital media system that improves the accuracy of 3D animation generation and enables cross-platform animation display. The corpus module extracts high-precision data through web crawling, web cleaning, Chinese word segmentation, and text classification; the character animation generation module uses a semantic description method to expand the frame-information description of the extracted data, calculates the spatial 3D coordinates of objects, and generates 3D character animation with a built-in animation execution script; the digital media player module uses an improved digital media player to display the 3D character animations across platforms. By constructing multidimensional character relationships and combining multiple visualization methods, the system presents complex, multifaceted social relationship networks to users in an intuitive and readily understandable form. A large number of user surveys show that the proposed visual analysis method, which combines real and virtual social relationships, provides a more adequate and reliable basis for friend recommendation and social network analysis, and that combining multiple character relationships with geographical information and visualizing multidimensional historical character relationships offers a new research perspective for the study of humanistic neighborhoods. The experimental results demonstrate that the designed system can read known content, extract keywords, and generate 3D animation from keyword features with high accuracy, fast response time, a low frame-loss rate, and cross-platform animation display.
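The corpus-to-animation pipeline described above can be sketched in miniature. This is a minimal illustration, not the authors' implementation: the function names (`clean_page`, `extract_keywords`, `keyframes_from_keywords`) are hypothetical, the tag-stripping and frequency-based keyword ranking stand in for the paper's web-cleaning and word-segmentation/classification stages (a real system would use a proper Chinese segmenter), and the placeholder 3D coordinates stand in for the spatial-coordinate calculation that feeds the animation execution script.

```python
import re
from collections import Counter

def clean_page(html: str) -> str:
    """Strip markup and collapse whitespace (the 'web cleaning' step)."""
    text = re.sub(r"<[^>]+>", " ", html)
    return re.sub(r"\s+", " ", text).strip()

def extract_keywords(text: str, top_k: int = 3) -> list[str]:
    """Rank tokens by frequency as a crude stand-in for the corpus
    module's segmentation + classification stages."""
    tokens = re.findall(r"[A-Za-z\u4e00-\u9fff]+", text.lower())
    return [word for word, _ in Counter(tokens).most_common(top_k)]

def keyframes_from_keywords(keywords: list[str]) -> list[dict]:
    """Map each keyword to a placeholder 3D position, mimicking the
    'object spatial 3D coordinates' step before animation scripting."""
    return [{"keyword": kw, "position": (float(i), 0.0, 0.0)}
            for i, kw in enumerate(keywords)]

page = "<html><body><p>The running horse runs fast. The horse jumps.</p></body></html>"
keywords = extract_keywords(clean_page(page))
frames = keyframes_from_keywords(keywords)
print(keywords)
print(frames[0])
```

In a full system, each keyframe dictionary would be consumed by the built-in animation execution script to pose the character, and the resulting animation handed to the player module for cross-platform display.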



Published In

Wireless Communications & Mobile Computing, Volume 2021, 14355 pages

This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

          Publisher

          John Wiley and Sons Ltd.

          United Kingdom


          Qualifiers

          • Research-article


