Motion synthesis and editing for the generation of new sign language content

Building new signs with phonological recombination

Published in Machine Translation 35, 405–430 (2021)

Abstract

Existing work on the animation of signing avatars often relies on purely procedural techniques or on the playback of Motion Capture (MoCap) data. While the first solution results in robotic and unnatural motions, the second is very limited in the number of signs it can produce. In this paper, we propose data-driven motion synthesis techniques to increase the variety of Sign Language (SL) motions that can be generated from a limited database. To generate new signs and inflection mechanisms from an annotated French Sign Language MoCap corpus, we rely on phonological recombination, i.e. on the motion retrieval and modular reconstruction of SL content at a phonological level, with a particular focus on three phonological components of SL: hand placement, hand configuration and hand movement. We modify the values taken by those components in different signs to create inflected versions or completely new signs by (i) applying motion retrieval at a phonological level to exchange the value of one component without any modification, (ii) editing the retrieved data with different operators, or (iii) using conventional motion generation techniques such as interpolation or inverse kinematics, parameterized to comply with the kinematic properties of real motion observed in the data set. The quality of the synthesized motions is perceptually assessed through two distinct evaluations involving 75 and 53 participants, respectively.
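
To make case (i) of the abstract concrete, the following Python sketch swaps one phonological channel between two captured signs. It is a minimal illustration under stated assumptions, not the authors' implementation: the channel-to-joint mapping, the joint indices, the array layout and the uniform time warp are all invented for the example; the paper's retrieval and alignment steps are more elaborate.

import numpy as np

# Hypothetical channel-to-joint mapping ("channel" in the sense of
# footnote 1 below). The joint indices are illustrative, not the paper's rig.
CHANNELS = {
    "hand_configuration": list(range(20, 40)),  # right-hand finger joints
    "hand_placement": [15, 16, 17],             # shoulder, elbow, wrist
}

def recombine(sign_a, sign_b, channel):
    """Replace one phonological channel of sign_a with the corresponding
    channel retrieved from sign_b.

    sign_a, sign_b: (frames, joints, dofs) arrays of joint angles.
    The two clips are aligned here by a naive uniform time warp.
    """
    t_a = np.linspace(0.0, 1.0, sign_a.shape[0])
    t_b = np.linspace(0.0, 1.0, sign_b.shape[0])
    new_sign = sign_a.copy()
    for j in CHANNELS[channel]:
        for d in range(sign_a.shape[2]):
            new_sign[:, j, d] = np.interp(t_a, t_b, sign_b[:, j, d])
    return new_sign

For instance, recombine(sign_glass, sign_drink, "hand_configuration") would graft the handshape of one sign onto the placement and movement of another; the sign names here are invented for illustration.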

Notes

  1. We use channel to denote the set of joints corresponding to a phonological component.

  2. Adding noise can also achieve a similar effect; see the sketch after these notes.

  3. Not to mention the overall attitude and facial expressions, which are outside the scope of this work.

  4. One of the 4 remaining ground truth videos was removed from the questionnaire beforehand and was not shown to the participants, as it contained an artefact.

  5. Differences were considered significant for \(p < 0.01\).

  6. A video showing our synthesis results is available at this address: http://sltat.cs.depaul.edu/2019/naert.mp4.
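
To illustrate footnote 2, the sketch below adds smooth, low-frequency noise to one motion channel so that repeated synthesized signs do not all look identical. It is a hedged example: the amplitude, the box-filter window and the array layout are illustrative assumptions, not values from the paper.

import numpy as np

def add_smooth_noise(channel_data, amplitude=0.02, smoothing=15, rng=None):
    """Perturb a motion channel with low-frequency noise.

    channel_data: (frames, dims) array of joint angles for one channel.
    amplitude (radians) and the smoothing window are illustrative values.
    """
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.normal(0.0, amplitude, size=channel_data.shape)
    # Box-filter the white noise along time so the perturbation stays
    # smooth and does not read as jitter on the avatar.
    kernel = np.ones(smoothing) / smoothing
    smooth = np.apply_along_axis(
        lambda s: np.convolve(s, kernel, mode="same"), 0, noise)
    return channel_data + smooth

The box filter is the simplest choice here; any low-pass filter would do. The point is that raw white noise would read as jitter rather than as natural variation.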

Author information

Corresponding author

Correspondence to Lucie Naert.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Naert, L., Larboulette, C. & Gibet, S. Motion synthesis and editing for the generation of new sign language content. Machine Translation 35, 405–430 (2021). https://doi.org/10.1007/s10590-021-09268-y
