Abstract
This paper describes a support system for analyzing and notating, frame by frame, the three-dimensional (3D) motions of sign language obtained through optical motion capture. The acquired 3D motion data comprise two basic parts: Manual Signals (MS) and Non-Manual Markers (NMM). The system enables users to analyze and describe both MS and NMM, the components of sign, while playing back the motions as a sign animation. In the analysis part, users can step through a motion frame by frame, forward and backward, which is extremely useful; the motions can also be observed from any direction and at any magnification. In the description part, the NVSG model, the sign notation system we propose, is used. Because the description results are stored in SQLite format, they serve as the database for a morpheme dictionary. This dictionary, which allows sign language to be looked up from motions and motions to be observed from morphemes, is the first of its type ever created, and its usability and practicability are extremely high.
1 Introduction
Sign language is a visual language that uses a system of MS and NMM as its means of communication. MS consist of four components: handshape, movement, location, and orientation. NMM consist of facial expressions, head tilt, mouth shapes, and other signals apart from the hands. Morphemes are composed of these multiple elements, and the right and left hands can also independently express morphemes by combining them, which makes the morphological structure more complicated; this is one reason why analyzing the structure of sign language is difficult. ELAN is a tool for creating complex annotations on video resources [1]; however, ELAN cannot support analysis and description at the morpheme level of sign language. In addition, spoken languages such as Japanese and English have their own writing systems, so many excellent multilingual dictionaries exist for spoken languages. Sign language, on the other hand, has no standard writing system. Existing sign language dictionaries have been compiled using pictures, photos, and videos, and no dictionary allows users to view sign motions in units of phonemes, morphemes, and words as 3D animation from any viewpoint. Considering this situation, we have constructed a support system for analysis and notation whose notation results automatically become a morpheme dictionary that uses 3D animation.
2 NVSG Element Model [2]
Among the elements, MS are described in the N and V elements, while NMM are described in the S and G elements. Items related to handshape, palm direction, and hand position are described in the N element, while movement items are described in the V element. Among the non-manual markers, sightline is the most important, so items regarding sightline are described in the S element; other non-manual signals are described in the G element. Table 1 shows the description items of each element of the NVSG element model.
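The four-element structure above can be pictured as a simple record type. The sketch below is only illustrative, assuming one possible shaping of the data; the field contents and names are hypothetical, not taken from the NVSG rule sheet itself.

```python
from dataclasses import dataclass, field

@dataclass
class NVSGEntry:
    """One morpheme record in the NVSG element model (illustrative sketch)."""
    # N element: hand configuration (handshape, palm direction, hand position)
    n: dict = field(default_factory=dict)
    # V element: movement items
    v: dict = field(default_factory=dict)
    # S element: sightline (the most important non-manual marker)
    s: str = ""
    # G element: other non-manual signals (facial expression, mouth shape, head)
    g: str = ""

# A hypothetical entry: handshape and palm direction in N, gaze direction in S.
entry = NVSGEntry(n={"handshape": "B", "palm": "down"}, s="forward")
```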
3 Systems that Support the Analysis and Notation of the 3D Motions of Sign
Software that analyzes motions using 3D motion data obtained through optical motion capture exists in various fields, including rehabilitation and medicine, and in the field of dialogue analysis ELAN is a tool for creating complex annotations on video resources. However, no system has been capable of supporting the analysis and notation of the 3D motions of sign language in the linguistic sense. This paper examines a support system for analyzing and notating, frame by frame, the 3D motions of sign language obtained through optical motion capture. The 3D motion data are in BVH format. The proposed system is composed of a sign language processing part that supports analysis and NVSG notation, and a morpheme dictionary part that displays and searches the results of morpheme notation.
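A BVH file stores a skeleton hierarchy followed by a MOTION section of per-frame channel values, which is what makes the frame-by-frame analysis described here possible. As a minimal sketch of how such data can be read (the paper does not specify its loader, and this ignores channel ordering and End Sites):

```python
def parse_bvh(text):
    """Minimal BVH reader: joint names, channel count, frame time, frame rows."""
    lines = text.splitlines()
    joints, channels = [], 0
    i = 0
    # HIERARCHY section: collect joint names and total channel count
    while i < len(lines) and not lines[i].strip().startswith("MOTION"):
        tok = lines[i].split()
        if tok and tok[0] in ("ROOT", "JOINT"):
            joints.append(tok[1])
        elif tok and tok[0] == "CHANNELS":
            channels += int(tok[1])
        i += 1
    # MOTION section: "Frames: N", "Frame Time: t", then one line per frame
    num_frames = int(lines[i + 1].split()[1])
    frame_time = float(lines[i + 2].split()[2])
    frames = [list(map(float, l.split()))
              for l in lines[i + 3 : i + 3 + num_frames]]
    return joints, channels, frame_time, frames
```

Each row of `frames` holds one captured frame, so stepping forward or backward in the analysis view amounts to moving an index through this list.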
3.1 Sign Language Processing Part
The analysis of sign language is carried out by monitoring the 3D animation frame by frame using data in BVH format. TVML, which is under development by NHK STRL, is used as the tool for rendering the animation [3]. The animation processing part is shown in Fig. 1.
(a) Processing control: The A area in the frame on the bottom left of Fig. 1 controls playback. The → and → keys on the keyboard step the playback forward and backward one frame at a time. Clicking on the top right of Fig. 1 allows the position of the camera to be controlled: using the functions in the B area in the frame on the top right of the figure, analyzers can view the sign language animation from any viewpoint and at any magnification.
(b) Information notation of BVH data: Clicking on the top right of Fig. 2 allows the details of the BVH data under analysis to be notated. The information notated here comprises the name of the word under analysis, the morpheme starting and ending frame numbers, and the in-transition and out-transition frame numbers. Moreover, as information on the word under analysis, its structure, such as simple word, compound word, or collocation, is also notated. The structure of a word can be entered either by typing terms directly or by selecting them from a pull-down menu.
(c) Morpheme information notation: Clicking on the top right of Fig. 3 allows the morpheme structure of the BVH data under analysis to be notated. The NVSG element model is used for notating the morpheme structure. When extracting data in BVH format, the extraction section is specified in the C area of Fig. 3. This BVH extraction function is used when complex words are divided into morpheme units and when new words that have not yet been registered in the dictionary are generated through morpheme composition.
Using the NVSG element model, the details of the morpheme structure are notated phoneme by phoneme in the D area of Fig. 3. The notation of the N and V elements is carried out in accordance with the NVSG rule sheet [2]. In this support system, for items whose permissible values are fixed, elements are selected from a pull-down menu, which minimizes notation inconsistencies among individual analyzers. The S and G elements are notated using sIGNDEX [4]; all notation items of these elements are selected from a pull-down menu based on the sIGNDEX rule sheet. Table 2 lists the items that must be notated in the D area. Moreover, when the data of one word are composed of more than one morpheme, multiple morphemes can be notated independently in the D area.
3.2 Morpheme Dictionary Part
The results of the description in Sect. 3.1 are stored in SQLite format and therefore serve as the database for a morpheme dictionary. The morpheme dictionary part provides a list display of the dictionary and carries out searches.
For each data record, the list display provides 44 items, including the BVH file name, the Japanese word name, the time structure of the word, the values of the NVSG elements, the notation results of the NVSG elements, supplementary information, and the update date. Moreover, if one record consists of more than one morpheme, the spatial and temporal structure of the composition can be easily understood. In searching, an AND/OR search can be performed over any of the 44 items, including each NVSG element.
As a result, because searches can be made on the NVSG elements, it is possible to look up meanings in Japanese from a sign language expression. The search key allows users to set up more than one condition.
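The reverse-lookup idea can be sketched directly in SQLite. The schema below is a simplified, hypothetical stand-in for the real 44-item record, and the element values are invented for illustration; only the AND/OR query pattern is the point.

```python
import sqlite3

# In-memory stand-in for the morpheme dictionary database.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE morpheme (
        bvh_file  TEXT,   -- linked 3D motion data
        word_ja   TEXT,   -- Japanese word name
        n_element TEXT,   -- handshape / palm / position notation
        v_element TEXT,   -- movement notation
        s_element TEXT,   -- sightline notation
        g_element TEXT    -- other non-manual markers
    )
""")
conn.execute(
    "INSERT INTO morpheme VALUES (?, ?, ?, ?, ?, ?)",
    ("hospital.bvh", "byouin", "flat-B", "contact", "forward", "neutral"),
)
conn.execute(
    "INSERT INTO morpheme VALUES (?, ?, ?, ?, ?, ?)",
    ("school.bvh", "gakkou", "flat-B", "tap", "forward", "neutral"),
)

# AND search across element columns: recover the Japanese meaning
# from the shape and motion of the hands (the reverse-dictionary use).
rows = conn.execute(
    "SELECT word_ja FROM morpheme WHERE n_element = ? AND v_element = ?",
    ("flat-B", "contact"),
).fetchall()
```

An OR condition over the same columns (`... WHERE n_element = ? OR v_element = ?`) widens the search the same way, matching the system's AND/OR search over the 44 items.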
4 Conclusion and Future Issues
This paper has described the construction of a system that supports the analysis and notation of sign language words using 3D motion data in BVH format. The NVSG element model is used for notation. Because the notation results are stored in SQLite format, a morpheme dictionary database can be constructed. In this dictionary, the N element notates the shape of the hands and the V element notates the details of the motions, so the dictionary can be used as a reverse dictionary in which the meanings of signs are searched from the shapes and motions of the hands. Moreover, because the 3D motion data and the morpheme time structure are linked one to one in the database, sign language can be viewed morpheme by morpheme. No sign language dictionary with these functions has existed before, and the dictionary's usability and practicality are extremely high. Approximately 2,000 words from the general and medical fields have been described in the morpheme dictionary thus far.
We are proceeding with a further study on synthesizing new words that are not found in the current dictionary by taking advantage of this morpheme dictionary described with NVSG elements.
References
Hellwig, B.: ELAN - Linguistic Annotator, version 4.9.2 (2015). https://tla.mpi.nl/tools/tla-tools/elan/
Watanabe, K., Nagashima, Y., et al.: Study into methods of describing Japanese sign language. In: CCIS, vol. 435, pp. 270–275. Springer (2014)
Hiruma, N., Shimizu, T., et al.: Automatic generation system of CG sign language animation. J. Inst. Image Electr. Eng. Jpn. 41(4), 406–410 (2012) (in Japanese)
Kanda, K., Ichikawa, A., Nagashima, Y., Kato, Y., Terauchi, M., Hara, D., Sato, M.: Notation system and statistical analysis of NMS in JSL. In: Wachsmuth, I., Sowa, T. (eds.) GW 2001. LNCS (LNAI), vol. 2298, pp. 181–192. Springer, Heidelberg (2002)
Acknowledgment
Part of this study was subsidized by a Grant-in-Aid for Scientific Research (A) 26244021, a scientific research grant by the Ministry of Education, Culture, Sports, Science and Technology.
Cite this paper
Nagashima, Y. et al. (2016). A Support Tool for Analyzing the 3D Motions of Sign Language and the Construction of a Morpheme Dictionary. In: Stephanidis, C. (eds) HCI International 2016 – Posters' Extended Abstracts. HCI 2016. Communications in Computer and Information Science, vol 618. Springer, Cham. https://doi.org/10.1007/978-3-319-40542-1_20