
WO2001067278A1 - Apparatus and method for displaying lip shapes according to text data - Google Patents


Info

Publication number
WO2001067278A1
WO2001067278A1 (application PCT/KR2001/000332)
Authority
WO
WIPO (PCT)
Prior art keywords
shape data
lip
lip shape
text data
pronunciations
Prior art date
Application number
PCT/KR2001/000332
Other languages
English (en)
Inventor
Seunghun Baek
Original Assignee
Seunghun Baek
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Seunghun Baek
Priority to AU2001241219A1
Publication of WO2001067278A1

Classifications

    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 19/00 Teaching not covered by other main groups of this subclass
    • G09B 19/06 Foreign languages
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 19/00 Teaching not covered by other main groups of this subclass
    • G09B 19/04 Speaking

Definitions

  • the present invention relates to apparatus and method for outputting lip shapes corresponding to text data inputs, and more particularly to apparatus and method for outputting lip shapes corresponding to text data inputs, which outputs on a screen lip shape data corresponding to plural text data inputted by a user.
  • an apparatus comprises a lip shape data storage unit for storing pronunciations for consonants and vowels of a predetermined foreign language and lip shape data corresponding to the pronunciations; a sentence input unit for inputting text data formed in the foreign language according to the user's manipulations; a control unit for inputting the text data through the sentence input unit, dividing the inputted text data into syllables and analyzing the pronunciations, and reading in from the lip shape data storage unit and outputting the lip shape data corresponding to the analyzed pronunciations; and a display unit for outputting on a screen the lip shapes outputted according to a control of the control unit.
  • a method according to the present invention comprises steps of (1) inputting text data inputted by a user; (2) dividing the text data into syllables and analyzing pronunciations; (3) searching lip shape data based on the analyzed pronunciations; and (4) outputting the searched lip shape data on a screen.
  • step (4), when outputting the lip shape data on the screen, produces and inserts intermediate lip shape data through a morphing technique.
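The four steps above can be sketched as a minimal pipeline. This is an illustrative sketch only: the syllable/word split, the pronunciation table, and the lip shape image names are hypothetical placeholders, not the patent's actual data.

```python
# Sketch of steps (1)-(4): input text -> units -> pronunciations -> lip shape data.
# The lookup tables below are hypothetical placeholders for the storage unit.

LIP_SHAPE_DATA = {            # hypothetical storage unit: pronunciation -> image id
    "ai": "lip_open_wide.png",
    "m": "lip_closed.png",
    "e": "lip_spread.png",
    "b": "lip_closed.png",
    "oi": "lip_rounded.png",
}

PRONUNCIATIONS = {            # hypothetical analysis table: unit -> phoneme sequence
    "i'm": ["ai", "m"],
    "a": ["e"],
    "boy": ["b", "oi"],
}

def lip_shapes_for(text: str) -> list[str]:
    """Steps (1)-(4): divide the text, analyze pronunciations,
    search lip shape data, and return the frames to display in order."""
    frames = []
    for word in text.lower().split():                  # step (2): split into units
        for phoneme in PRONUNCIATIONS.get(word, []):   # pronunciation analysis
            frames.append(LIP_SHAPE_DATA[phoneme])     # step (3): search storage
    return frames                                      # step (4): display in order

print(lip_shapes_for("I'm a boy"))
# → ['lip_open_wide.png', 'lip_closed.png', 'lip_spread.png', 'lip_closed.png', 'lip_rounded.png']
```

In a real embodiment the per-word split would be replaced by the syllable division and pronunciation analysis the control unit performs.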
  • FIG. 1 is a schematic block diagram for explaining a structure of a unit for outputting lip shapes corresponding to text data inputs according to an embodiment of the present invention.
  • FIG. 2a to FIG. 2i are views for showing lip shapes corresponding to text data according to an embodiment of the present invention.
  • FIG. 1 is a schematic block diagram for explaining a structure of a unit for outputting lip shapes corresponding to text data inputs according to an embodiment of the present invention.
  • a lip shape data storage unit 10 stores pronunciations for respective consonants and vowels of a certain language, and lip shape data corresponding to the pronunciations.
  • a sentence input unit 20 is an input unit such as a keyboard, and a user inputs predetermined text data by manipulating the sentence input unit 20.
  • a control unit 30 receives text data through the sentence input unit 20, divides the inputted text data into syllables and analyzes their pronunciations, then reads from the lip shape data storage unit 10 and outputs to a display unit 40 the lip shape data corresponding to the analyzed pronunciations.
  • when outputting on a screen of the display unit the lip shape data read in from the lip shape data storage unit 10, the control unit 30 produces and inserts intermediate lip shape data using the morphing technique, so that the user can view lip shapes moving naturally according to the text data he or she inputs.
  • the morphing technique is an image processing method that produces images of intermediate shapes from two images; it is currently used in broadcasting and advertising.
  • in the calculating process of a morphing image, a user first designates characteristic points that correspond to each other on the two images, calculates correspondence relations among all the points of the two images through a warping function, and then interpolates the positions and colors of each pair of corresponding points according to the calculated correspondence relations to obtain the morphing image.
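The interpolation step of the morphing process described above can be sketched as follows. This is a simplified linear (cross-dissolve) version: corresponding characteristic points and pixel colors are mixed at a parameter t in [0, 1]. A production morph would compute a proper warping function (e.g. thin-plate splines) between the point sets rather than this straight linear mix.

```python
# Simplified sketch of the morphing interpolation: positions of corresponding
# characteristic points and colors of corresponding pixels are linearly mixed.

def interpolate_points(pts_a, pts_b, t):
    """Interpolate positions of corresponding characteristic points at parameter t."""
    return [((1 - t) * xa + t * xb, (1 - t) * ya + t * yb)
            for (xa, ya), (xb, yb) in zip(pts_a, pts_b)]

def interpolate_colors(img_a, img_b, t):
    """Interpolate colors of corresponding pixels (images as 2-D lists of values)."""
    return [[(1 - t) * a + t * b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(img_a, img_b)]

# Halfway (t = 0.5) between a closed-lip and an open-lip keyframe:
closed_pts = [(0, 0), (4, 0)]    # characteristic points of image A (hypothetical)
open_pts = [(0, 2), (4, 2)]      # corresponding points of image B (hypothetical)
print(interpolate_points(closed_pts, open_pts, 0.5))  # → [(0.0, 1.0), (4.0, 1.0)]
```

At t = 0 the result equals the first image's data and at t = 1 the second's, so sweeping t produces the sequence of intermediate shapes.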
  • the display unit 40 outputs to a screen the lip shapes supplied under the control of the control unit 30.
  • FIG. 2a to FIG. 2i are views for showing lip shapes corresponding to text data according to an embodiment of the present invention.
  • the control unit 30 receives the above sentence, separates it into 'I'm', 'a', and 'boy', for example, divides the separated words into syllables again, and analyzes the pronunciations.
  • the lip shape data storage unit 10 applied to the embodiment of the present invention should have information on characteristic points designated between respective lip shape data in addition to the above data.
  • the control unit 30, just like the above embodiment of the present invention, analyzes the pronunciations of the respective syllables of the sentence and reads in lip shape data from the lip shape data storage unit 10 according to the analysis result.
  • the control unit 30 reads in lip shape data of syllables and information on characteristic points to be outputted on an initial screen from the lip shape data storage unit 10, and then reads in lip shape data of syllables and information on characteristic points to be outputted on a next screen.
  • the control unit 30 calculates a warping function using the characteristic points of the two read-in lip shape data, calculates the correspondence relations between the two lip shape data through the warping function, and interpolates the positions and colors of each pair of corresponding points according to the calculated correspondence relations, thereby producing intermediate lip shape data between the two lip shape data and inserting the produced lip shape data.
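The frame-insertion behaviour described above can be sketched as follows. The "frames" here are reduced to characteristic-point lists and the warp is a simple linear interpolation; the number of intermediate frames (`n_inter`) is a hypothetical parameter, not a value given in the patent.

```python
# Sketch of inserting intermediate lip shape data between consecutive keyframes.
# Each keyframe is a list of characteristic points; n_inter morphed frames are
# produced by sampling the interpolation at evenly spaced t values.

def lerp_points(pts_a, pts_b, t):
    """Linearly interpolate corresponding characteristic points at parameter t."""
    return [((1 - t) * xa + t * xb, (1 - t) * ya + t * yb)
            for (xa, ya), (xb, yb) in zip(pts_a, pts_b)]

def insert_intermediates(keyframes, n_inter=2):
    """Return the full frame sequence with n_inter intermediate shapes
    inserted between every pair of consecutive keyframes."""
    out = []
    for cur, nxt in zip(keyframes, keyframes[1:]):
        out.append(cur)
        for i in range(1, n_inter + 1):
            out.append(lerp_points(cur, nxt, i / (n_inter + 1)))
    out.append(keyframes[-1])
    return out

# Two keyframes (one point each) with two intermediates inserted between them:
seq = insert_intermediates([[(0, 0)], [(0, 3)]], n_inter=2)
print(len(seq))  # → 4 frames: keyframe, two intermediates, keyframe
```

With more intermediate frames per pair, the displayed lip movement between syllables becomes correspondingly smoother.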
  • lip shapes of characters in movies or animations can be output on screens in exactly differentiated shapes according to the pronunciations of text data inputted as dialogue.
  • As stated above, by showing on screens lip shapes corresponding to text data inputted by a user, the unit for outputting lip shapes corresponding to text data produces the effect that the user can exactly recognize the lip shapes for certain pronunciations. Further, by making the lip shapes of characters in movies or animations move exactly according to the pronunciations of text data inputted as dialogue, an effect is expected in which a user can easily recognize lip shapes corresponding to certain pronunciations while viewing the movies or animations.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Educational Technology (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Educational Administration (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Electrically Operated Instructional Devices (AREA)
  • Machine Translation (AREA)

Abstract

The invention relates to an apparatus and method for displaying on a screen lip shapes corresponding to text elements. It comprises: a storage unit for lip shape data corresponding to the pronunciation of the various consonants and vowels of a foreign language; a unit through which the user inputs text data in the foreign language; a control unit that receives the text data, divides it into syllables, analyzes the pronunciation, reads from the storage unit the lip shape data corresponding to the analyzed pronunciation, and outputs it; and a display unit that receives the lip shape data and shows it on a screen.
PCT/KR2001/000332 2000-03-10 2001-03-05 Apparatus and method for displaying lip shapes according to text data WO2001067278A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2001241219A AU2001241219A1 (en) 2000-03-10 2001-03-05 Apparatus and method for displaying lips shape according to text data

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020000012169A KR20010088139A (ko) Apparatus and method for outputting lip shapes corresponding to text data input
KR2000/12169 2000-03-10

Publications (1)

Publication Number Publication Date
WO2001067278A1 (fr) 2001-09-13

Family

Family ID: 19654163

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2001/000332 WO2001067278A1 (fr) 2000-03-10 2001-03-05 Apparatus and method for displaying lip shapes according to text data

Country Status (3)

Country Link
KR (1) KR20010088139A (fr)
AU (1) AU2001241219A1 (fr)
WO (1) WO2001067278A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2005091252A1 (fr) * 2004-03-19 2005-09-29 Lanstar Corporation Pty Ltd A method for teaching a language
US10172853B2 (en) 2007-02-11 2019-01-08 Map Pharmaceuticals, Inc. Method of therapeutic administration of DHE to enable rapid relief of migraine while minimizing side effect profile

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100693658B1 (ko) * 2004-10-05 2007-03-14 LG Electronics Inc. Portable language learning apparatus and method
KR100897149B1 (ko) * 2007-10-19 2009-05-14 SK Telecom Co., Ltd. Apparatus and method for lip-shape synchronization based on text analysis
KR101017340B1 (ko) * 2009-03-27 2011-02-28 Lee Ji-yeon Lip shape transformation device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6011949A (en) * 1997-07-01 2000-01-04 Shimomukai; Satoru Study support system
US6014615A (en) * 1994-08-16 2000-01-11 International Business Machines Corporation System and method for processing morphological and syntactical analyses of inputted Chinese language phrases
US6026361A (en) * 1998-12-03 2000-02-15 Lucent Technologies, Inc. Speech intelligibility testing system
KR20000009490A (ko) * 1998-07-24 2000-02-15 Yun Jong-yong Lip-sync method and apparatus for speech synthesis
US6029124A (en) * 1997-02-21 2000-02-22 Dragon Systems, Inc. Sequential, nonparametric speech recognition and speaker identification



Also Published As

Publication number Publication date
AU2001241219A1 (en) 2001-09-17
KR20010088139A (ko) 2001-09-26

Similar Documents

Publication Publication Date Title
US7890330B2 (en) Voice recording tool for creating database used in text to speech synthesis system
Benoı̂t et al. Audio-visual speech synthesis from French text: Eight years of models, designs and evaluation at the ICP
CN111739556B (zh) A speech analysis system and method
AU7362698A (en) Method and system for making an audio-visual work with a series of visual word symbols coordinated with oral word utterances and such audio-visual work
KR20010040749A (ko) Text processor
KR20180000990A (ko) English phonics learning apparatus and method
Haryanto et al. A Realistic Visual Speech Synthesis for Indonesian Using a Combination of Morphing Viseme and Syllable Concatenation Approach to Support Pronunciation Learning.
WO2001067278A1 (fr) Appareil et procede de presentation d"expressions labiales en fonction d"elements de textes
Solina et al. Multimedia dictionary and synthesis of sign language
JP2003162291A (ja) Language learning apparatus
KR20140087956A (ko) Phonics learning apparatus and method using words, sentences, image data, and native speakers' pronunciation data
KR20140078810A (ko) Rhythm pattern learning apparatus and method using language data and native speakers' pronunciation data
EP0982684A1 (en) Moving picture generating device and image control network learning device
KR20140079677A (ko) Liaison learning apparatus and method using language data and native speakers' pronunciation data
RU2195708C2 (ru) Audio-visual presentation with captions, method of linking expressive oral utterances to sequential entries on the presentation, and device for linear and interactive presentation
Jardine et al. Banyaduq prestopped nasals: Synchrony and diachrony
KR20140082127A (ko) Word learning apparatus and method using word etymology and native speakers' pronunciation data
Steinfeld The benefit to the deaf of real-time captions in a mainstream classroom environment
KR20180013475A (ko) Foreign language learning system using video materials with Korean-only subtitles and no foreign-language text
KR20140087957A (ko) Language pattern education and learning apparatus and method using sentence data
KR20140078082A (ko) Language pattern education and learning apparatus and method using sentence data
JP2924784B2 (ja) Pronunciation practice apparatus
Lim Kinetic typography visual approaches as a learning aid for English intonation and word stress/Lim Chee Hooi
Hooi Kinetic Typography Visual Approaches as a Learning Aid for English Intonation and Word Stress
KR20140087955A (ko) English preposition learning apparatus and method using image data and native speakers' pronunciation data

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP