
WO2001067278A1 - Apparatus and method for displaying lips shape according to text data - Google Patents

Apparatus and method for displaying lips shape according to text data Download PDF

Info

Publication number
WO2001067278A1
Authority
WO
WIPO (PCT)
Prior art keywords
shape data
lip
lip shape
text data
pronunciations
Prior art date
Application number
PCT/KR2001/000332
Other languages
French (fr)
Inventor
Seunghun Baek
Original Assignee
Seunghun Baek
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Seunghun Baek filed Critical Seunghun Baek
Priority to AU2001241219A priority Critical patent/AU2001241219A1/en
Publication of WO2001067278A1 publication Critical patent/WO2001067278A1/en

Classifications

    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B19/00 Teaching not covered by other main groups of this subclass
    • G09B19/06 Foreign languages
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B19/00 Teaching not covered by other main groups of this subclass
    • G09B19/04 Speaking


Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Educational Technology (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Educational Administration (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Electrically Operated Instructional Devices (AREA)
  • Machine Translation (AREA)

Abstract

The present invention relates to an apparatus and method for outputting lip shapes corresponding to text data inputs, which output on a screen lip shape data corresponding to plural text data inputted by a user. The present invention includes a lip shape data storage unit for storing pronunciations for consonants and vowels of a predetermined foreign language and lip shape data corresponding to the pronunciations; a sentence input unit for inputting text data formed in the foreign language according to the user's manipulations; a control unit for receiving the text data through the sentence input unit, dividing the inputted text data into syllables and analyzing the pronunciations, and reading in from the lip shape data storage unit and outputting the lip shape data corresponding to the analyzed pronunciations; and a display unit for outputting on a screen the lip shapes outputted according to the control of the control unit.

Description

APPARATUS AND METHOD FOR DISPLAYING LIPS SHAPE ACCORDING TO TEXT DATA
TECHNICAL FIELD
The present invention relates to an apparatus and method for outputting lip shapes corresponding to text data inputs, and more particularly to an apparatus and method which output on a screen lip shape data corresponding to plural text data inputted by a user.
BACKGROUND ART
In general, foreign language education is a field in which everybody facing the age of globalization shows continuous interest, with foreign languages being unceasingly taught and learned.
In particular, conversation practice aimed at speaking with foreigners is the most active area of foreign language education.
As stated above, in conversation it is most important to produce speech that others can understand, and the same speech can be heard differently depending on the speaker's lip shapes.
For example, b and v, r and l, and p and f are pronounced similarly. Therefore, if exact lip shapes are not made during conversation with foreigners, a problem occurs in that the foreigners do not correctly understand the words of the speaker.
Therefore, before regular study of conversation aimed at the ability to converse with foreigners, ordinary learners practice exact pronunciations over the entire alphabet while making unfamiliar lip shapes according to a teacher's instructions.
However, when practicing pronunciations while making lip shapes as stated above, there exists a problem in that a teacher must directly observe and correct the lip shapes of the learner studying the foreign language.
Further, even though the learner can practice while looking at his own lip shapes in a mirror, there exists a problem in that good efficiency cannot be expected in practicing pronunciations with lip shapes, since he cannot compare his lip shapes with the exact lip shapes in real time.
DISCLOSURE OF THE INVENTION
In order to solve the above problems, it is an object of the present invention to provide an apparatus and method for outputting lip shapes corresponding to text data inputs, capable of reading lip shape data corresponding to plural text data inputted by a user, syllable by syllable, and displaying the lip shape data on a screen.
It is another object of the present invention to provide an apparatus and method for outputting lip shapes corresponding to text data inputs, which output more naturally moving lip shape images by producing and inserting, through a morphing technique, intermediate lip shape data between the lip shape data outputted corresponding to the plural text data inputted by a user.
In order to achieve the above objects, an apparatus according to the present invention comprises a lip shape data storage unit for storing pronunciations for consonants and vowels of a predetermined foreign language and lip shape data corresponding to the pronunciations; a sentence input unit for inputting text data formed in the foreign language according to the user's manipulations; a control unit for receiving the text data through the sentence input unit, dividing the inputted text data into syllables and analyzing the pronunciations, and reading in from the lip shape data storage unit and outputting the lip shape data corresponding to the analyzed pronunciations; and a display unit for outputting on a screen the lip shapes outputted according to the control of the control unit.
Further, the control unit, when outputting the lip shape data on the screen, produces and inserts intermediate lip shape data in use of a morphing technique.
In order to achieve the above object, a method according to the present invention comprises steps of (1) inputting text data inputted by a user; (2) dividing the text data into syllables and analyzing pronunciations; (3) searching lip shape data based on the analyzed pronunciations; and (4) outputting the searched lip shape data on a screen.
Further, step (4), when outputting the lip shape data on the screen, produces and inserts intermediate lip shape data through a morphing technique.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a schematic block diagram for explaining a structure of a unit for outputting lip shapes corresponding to text data inputs according to an embodiment of the present invention; and
FIG. 2a to FIG. 2i are views for showing lip shapes corresponding to text data according to an embodiment of the present invention.
BEST MODES FOR CARRYING OUT THE INVENTION
Hereinafter, a unit for outputting lip shapes corresponding to text data inputs according to an embodiment of the present invention will be described in detail with reference to the accompanying drawings.
FIG. 1 is a schematic block diagram for explaining a structure of a unit for outputting lip shapes corresponding to text data inputs according to an embodiment of the present invention.
As shown in FIG. 1, a lip shape data storage unit 10 stores pronunciations for respective consonants and vowels of a certain language, and lip shape data corresponding to the pronunciations.
For example, if pronunciations such as [a], [e], [i], and so on are involved in the vowel 'a', the lip shape data storage unit 10 has all the lip shape data corresponding to the respective pronunciations.
A sentence input unit 20 is an input unit such as a keyboard, and a user inputs predetermined text data by manipulating the sentence input unit 20.
A control unit 30 receives text data through the sentence input unit 20, divides the inputted text data into syllables, analyzes their pronunciations, and reads from the lip shape data storage unit 10 and displays on a display unit 40 the lip shape data corresponding to the analyzed pronunciations.
At this time, the control unit 30, when outputting on a screen of the display unit the lip shape data read in from the lip shape data storage unit 10, produces and inserts intermediate lip shape data in use of the morphing technique, so that a user can view lip shapes naturally moving according to the text data the user himself inputs.
Here is a brief description of the morphing technique: morphing is an image processing method which produces images of intermediate shapes from two images, and it is currently used in broadcasting and advertisements. In the calculation of a morphing image, a user first designates characteristic points that correspond to each other on the two images; correspondence relations among all the points of the two images are then calculated through a warping function, and the positions and colors of each pair of corresponding points are interpolated according to the calculated correspondence relations to produce the morphing image.
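A minimal illustration of the interpolation step of that calculation might look like the following. This sketch only blends the matched characteristic points linearly; the warping function that maps all remaining image points is omitted, and the point and color values are invented for illustration:

```python
# Simplified morphing step: linearly interpolate the positions and colors of
# corresponding characteristic points of two images.  A real implementation
# would additionally warp the whole images; this sketch blends matched points.

def morph_points(points_a, points_b, t):
    """Each point is ((x, y), (r, g, b)); t=0 gives image A, t=1 gives image B."""
    assert len(points_a) == len(points_b)
    result = []
    for (pa, ca), (pb, cb) in zip(points_a, points_b):
        pos = tuple((1 - t) * a + t * b for a, b in zip(pa, pb))
        col = tuple(round((1 - t) * a + t * b) for a, b in zip(ca, cb))
        result.append((pos, col))
    return result

closed = [((10.0, 20.0), (200, 120, 120))]   # characteristic point, lips closed
opened = [((10.0, 40.0), (180, 100, 100))]   # same point, lips open
print(morph_points(closed, opened, 0.5))
# -> [((10.0, 30.0), (190, 110, 110))]
```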
The display unit 40 outputs to a screen lip shapes outputted according to the control of the control unit 30.
Next, operations of the unit for outputting lip shapes corresponding to text data according to an embodiment of the present invention having the above structure will be concretely described.
FIG. 2a to FIG. 2i are views for showing lip shapes corresponding to text data according to an embodiment of the present invention.
First, if a user inputs the sentence "I'm a boy" through the sentence input unit 20, the control unit 30 receives the sentence, separates it into 'I'm', 'a', and 'boy', for example, divides the separated words into syllables again, and analyzes the pronunciations.
That is, if a user inputs the sentence 'I'm a boy', the control unit 30, as shown in FIG. 2a to FIG. 2i, reads in from the lip shape data storage unit 10 and outputs to the display unit 40 the lip shape data corresponding to the pronunciations analyzed for the respective syllables, so that the user can view the lip shapes reading the sentence that the user himself has inputted.
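The lookup flow just described (divide the inputted sentence into words and syllables, analyze the pronunciations, and fetch the stored lip shape data for each sound) can be sketched roughly as follows. The phonetic table, frame names, and the `analyze_pronunciations` helper are hypothetical placeholders, not the patent's actual analysis:

```python
# Hypothetical sketch of the control unit's lookup step.  The phonetic table
# and frame names are illustrative stand-ins for the lip shape data storage
# unit 10 and the pronunciation analysis.

# Stand-in for the lip shape data storage unit: pronunciation -> frame.
LIP_SHAPE_STORE = {
    "ai": "frame_ai.png", "m": "frame_m.png", "@": "frame_schwa.png",
    "b": "frame_b.png", "oi": "frame_oi.png",
}

def analyze_pronunciations(word):
    """Placeholder for syllable division and pronunciation analysis."""
    table = {"i'm": ["ai", "m"], "a": ["@"], "boy": ["b", "oi"]}
    return table.get(word.lower(), [])

def lookup_lip_shapes(sentence):
    """Divide a sentence into words, then fetch one stored frame per sound."""
    return [LIP_SHAPE_STORE[key]
            for word in sentence.split()
            for key in analyze_pronunciations(word)]

print(lookup_lip_shapes("I'm a boy"))
# -> ['frame_ai.png', 'frame_m.png', 'frame_schwa.png', 'frame_b.png', 'frame_oi.png']
```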
In the meantime, the same object and effect can be achieved by further providing a morphing technique to the embodiment of the present invention, which is another embodiment of the present invention.
As stated above, in order to further apply the morphing technique to the embodiment of the present invention, the lip shape data storage unit 10 should store, in addition to the above data, information on the characteristic points designated between the respective lip shape data.
For a brief description of this other embodiment of the present invention: first, if a user inputs a certain sentence through the sentence input unit 20, the control unit 30, just as in the above embodiment, analyzes the pronunciations for the respective syllables of the sentence and reads in lip shape data from the lip shape data storage unit 10 according to the analysis result.
At this time, the control unit 30 reads in from the lip shape data storage unit 10 the lip shape data of the syllables and the information on the characteristic points to be outputted on an initial screen, and then reads in the lip shape data of the syllables and the information on the characteristic points to be outputted on the next screen.
Next, the control unit 30 calculates a warping function using the characteristic points of the two read-in lip shape data, calculates correspondence relations between the two lip shape data through the warping function, interpolates the positions and colors of each pair of corresponding points according to the calculated correspondence relations to produce intermediate lip shape data between the two lip shape data, and inserts the produced lip shape data.
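The keyframe-insertion loop described here might be sketched as below, with `interpolate` standing in for the full warping-and-interpolation computation (scalar "frames" are used purely for illustration):

```python
# Sketch of inserting intermediate frames between consecutive lip-shape
# keyframes.  `interpolate` is a placeholder for the warping-function-based
# morphing; here it simply blends scalar "frames" for illustration.

def interpolate(frame_a, frame_b, t):
    """Placeholder morph: linear blend of two (scalar) frames at fraction t."""
    return (1 - t) * frame_a + t * frame_b

def insert_intermediate_frames(keyframes, n_between):
    """Return keyframes with n_between morphed frames between each pair."""
    out = []
    for a, b in zip(keyframes, keyframes[1:]):
        out.append(a)
        for i in range(1, n_between + 1):
            out.append(interpolate(a, b, i / (n_between + 1)))
    out.append(keyframes[-1])
    return out

print(insert_intermediate_frames([0.0, 1.0], 3))
# -> [0.0, 0.25, 0.5, 0.75, 1.0]
```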
Since all of the above processes are performed in real time, a user can view lip shape images moving more naturally on the screen. As described above, even though the description of the present invention is confined to foreign language learning, the present invention can be easily applied to genres such as movies, animations, and the like.
That is, the lip shapes of characters in movies or animations can be output on screens with exactly differing shapes according to the pronunciations of text data inputted as dialogue.
With the unit for outputting lip shapes corresponding to text data according to the present invention, as stated above, by showing on a screen the lip shapes corresponding to text data inputted by a user, there exists an effect that the user can exactly recognize the lip shapes for certain pronunciations. Further, by making the lip shapes of characters in movies or animations move exactly according to the pronunciations of text data inputted as dialogue, an effect is expected in which a user can easily recognize lip shapes corresponding to certain pronunciations while viewing the movies or animations.
Although the preferred embodiments of the present invention have been described, it will be understood by those skilled in the art that the present invention should not be limited to the described preferred embodiments, and that various changes and modifications can be made within the spirit and scope of the present invention as defined by the appended claims.

Claims

CLAIMS
What is claimed is:
1. An apparatus for outputting lip shapes corresponding to text data inputs, comprising:
a lip shape data storage unit for storing pronunciations for consonants and vowels of a predetermined foreign language and lip shape data corresponding to the pronunciations;
a sentence input unit for inputting text data formed in the foreign language according to user's manipulations;
a control unit for inputting the text data through the sentence input unit, dividing the inputted text data into syllables and analyzing the pronunciations, and reading in from the lip shape data storage unit and outputting the lip shape data corresponding to the analyzed pronunciations; and
a display unit for outputting on a screen lip shapes outputted according to a control of the control unit.
2. The apparatus as claimed in claim 1, wherein the control unit, when outputting the lip shape data on the screen, produces and inserts intermediate lip shape data in use of a morphing technique.
3. A method for outputting lip shapes corresponding to text data inputs, comprising steps of:
(1) inputting text data inputted by a user;
(2) dividing the text data into syllables and analyzing pronunciations;
(3) searching lip shape data based on the analyzed pronunciations; and
(4) outputting the searched lip shape data on a screen.
4. The method as claimed in claim 3, wherein step (4), when outputting the lip shape data on the screen, produces and inserts intermediate lip shape data through a morphing technique.
PCT/KR2001/000332 2000-03-10 2001-03-05 Apparatus and method for displaying lips shape according to text data WO2001067278A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2001241219A AU2001241219A1 (en) 2000-03-10 2001-03-05 Apparatus and method for displaying lips shape according to text data

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020000012169A KR20010088139A (en) 2000-03-10 2000-03-10 Apparatus and method for displaying lips shape according to taxt data
KR2000/12169 2000-03-10

Publications (1)

Publication Number Publication Date
WO2001067278A1 true WO2001067278A1 (en) 2001-09-13

Family

ID=19654163

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2001/000332 WO2001067278A1 (en) 2000-03-10 2001-03-05 Apparatus and method for displaying lips shape according to text data

Country Status (3)

Country Link
KR (1) KR20010088139A (en)
AU (1) AU2001241219A1 (en)
WO (1) WO2001067278A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2005091252A1 (en) * 2004-03-19 2005-09-29 Lanstar Corporation Pty Ltd A method for teaching a language
US10172853B2 (en) 2007-02-11 2019-01-08 Map Pharmaceuticals, Inc. Method of therapeutic administration of DHE to enable rapid relief of migraine while minimizing side effect profile

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100693658B1 (en) * 2004-10-05 2007-03-14 엘지전자 주식회사 Poratable language study apparatus and method
KR100897149B1 (en) * 2007-10-19 2009-05-14 에스케이 텔레콤주식회사 Apparatus and method for synchronizing text analysis-based lip shape
KR101017340B1 (en) * 2009-03-27 2011-02-28 이지연 Apparatus for transformation lips shape

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6011949A (en) * 1997-07-01 2000-01-04 Shimomukai; Satoru Study support system
US6014615A (en) * 1994-08-16 2000-01-11 International Business Machines Corporation System and method for processing morphological and syntactical analyses of inputted Chinese language phrases
US6026361A (en) * 1998-12-03 2000-02-15 Lucent Technologies, Inc. Speech intelligibility testing system
KR20000009490A (en) * 1998-07-24 2000-02-15 윤종용 Method and apparatus of lip-synchronization for voice composition
US6029124A (en) * 1997-02-21 2000-02-22 Dragon Systems, Inc. Sequential, nonparametric speech recognition and speaker identification



Also Published As

Publication number Publication date
AU2001241219A1 (en) 2001-09-17
KR20010088139A (en) 2001-09-26

Similar Documents

Publication Publication Date Title
US7890330B2 (en) Voice recording tool for creating database used in text to speech synthesis system
Benoît et al. Audio-visual speech synthesis from French text: Eight years of models, designs and evaluation at the ICP
CN111739556B (en) Voice analysis system and method
AU7362698A (en) Method and system for making an audio-visual work with a series of visual word symbols coordinated with oral word utterances and such audio-visual work
KR20010040749A (en) Text processor
KR20180000990A (en) Apparatus and method for learning english phonics
Haryanto et al. A Realistic Visual Speech Synthesis for Indonesian Using a Combination of Morphing Viseme and Syllable Concatenation Approach to Support Pronunciation Learning.
WO2001067278A1 (en) Apparatus and method for displaying lips shape according to text data
Solina et al. Multimedia dictionary and synthesis of sign language
JP2003162291A (en) Language learning device
KR20140087956A (en) Apparatus and method for learning phonics by using native speaker's pronunciation data and word and sentence and image data
KR20140078810A (en) Apparatus and method for learning rhythm pattern by using native speaker's pronunciation data and language data.
EP0982684A1 (en) Moving picture generating device and image control network learning device
KR20140079677A (en) Apparatus and method for learning sound connection by using native speaker's pronunciation data and language data.
RU2195708C2 (en) Inscription-bearing audio/video presentation structure, method for ordered linking of oral utterances on audio/video presentation structure, and device for linear and interactive presentation
Jardine et al. Banyaduq prestopped nasals: Synchrony and diachrony
KR20140082127A (en) Apparatus and method for learning word by using native speaker's pronunciation data and origin of a word
Steinfeld The benefit to the deaf of real-time captions in a mainstream classroom environment
KR20180013475A (en) A Foreign Language Learning System Utilizing Image Materials With Pronunciation Subtitles Written In Korean Without Any Foreign Letters
KR20140087957A (en) Apparatus and method for Language Pattern Education by using sentence data.
KR20140078082A (en) Apparatus and method for Language Pattern Education by using sentence data.
JP2924784B2 (en) Pronunciation practice equipment
Lim Kinetic typography visual approaches as a learning aid for English intonation and word stress/Lim Chee Hooi
Hooi Kinetic Typography Visual Approaches as a Learning Aid for English Intonation and Word Stress
KR20140087955A (en) Apparatus and method for learning english preposition by using native speaker's pronunciation data and image data.

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP