
KR20110065276A - Method and apparatus for pronunciation exercise using comparison video - Google Patents

Method and apparatus for pronunciation exercise using comparison video

Info

Publication number
KR20110065276A
Authority
KR
South Korea
Prior art keywords
pronunciation
image
learner
learning
display unit
Prior art date
Application number
KR1020100060615A
Other languages
Korean (ko)
Inventor
서동혁
Original Assignee
서동혁
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 서동혁 filed Critical 서동혁
Priority to KR1020100060615A priority Critical patent/KR20110065276A/en
Publication of KR20110065276A publication Critical patent/KR20110065276A/en
Priority to PCT/KR2011/004433 priority patent/WO2011162508A2/en

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10 Services
    • G06Q50/20 Education

Landscapes

  • Business, Economics & Management (AREA)
  • Tourism & Hospitality (AREA)
  • Engineering & Computer Science (AREA)
  • Human Resources & Organizations (AREA)
  • Primary Health Care (AREA)
  • Health & Medical Sciences (AREA)
  • Economics (AREA)
  • General Health & Medical Sciences (AREA)
  • Educational Administration (AREA)
  • Marketing (AREA)
  • Educational Technology (AREA)
  • Strategic Management (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Electrically Operated Instructional Devices (AREA)

Abstract

The present invention provides a pronunciation learning apparatus using a comparison image. The apparatus includes a display unit, a camera for photographing the learner, and a control unit that simultaneously displays, through the display unit, a lecturer's pronunciation image according to predetermined pronunciation learning content and a learner pronunciation image containing the shape of the learner's mouth captured by the camera, so that the two images can be compared.

Figure P1020100060615

Description

Pronunciation learning method and apparatus using a comparison image {METHOD AND APPARATUS FOR PRONUNCIATION EXERCISE USING COMPARISON VIDEO}

The present invention relates to pronunciation learning, and more particularly, to a method and apparatus for learning pronunciation using a comparison image.

As user-interface technology and portable terminal devices have developed, language learning has continued to evolve beyond books to cassettes, TVs, computers, dedicated language-learning devices, mobile phones, and smartphones. As a result, a large amount of foreign-language learning content has been developed in the form of applications that combine text, photographs, voice, video, and other media, and such content is increasingly used for pronunciation learning on personal computers and smartphones.

Generally, when a learner practices foreign-language pronunciation using such content, only the foreign-language voice and the corresponding text, such as sentences and words, are output through the speaker and the display unit, so learning relies on hearing and reading alone. However, it is difficult for a learner to correct his or her pronunciation simply by listening to and repeating a speaker's pronunciation of a foreign language, because each language is articulated with different mouth shapes and the learner cannot know exactly how each sound should be formed. As a result, even though the pronunciation the learner hears for practice is correct, the learner's own pronunciation is likely to remain incorrect.

Therefore, for accurate pronunciation, the learner needs an easy way to observe both the native speaker's voice and mouth shape, and to practice and correct his or her own mouth shape accordingly.

The present invention provides a pronunciation learning method and apparatus that simultaneously display the lecturer's pronunciation video and the learner's real-time pronunciation video captured through a camera, so that the lecturer's mouth shape and the learner's mouth shape can be easily compared and the learner's pronunciation corrected.

According to one aspect of the present invention, a pronunciation learning apparatus using a comparison image includes a display unit, a camera for photographing a learner, and a control unit that simultaneously displays, through the display unit, a lecturer's pronunciation image according to predetermined pronunciation learning content and a learner pronunciation image including the learner's mouth captured by the camera, so that the two images can be compared with each other.

The controller may display the pronunciation image of the lecturer on one side of the display unit, and simultaneously display the pronunciation image of the learner on the other side of the display unit.

The controller may combine a part of the lecturer's pronunciation image and a part of the learner's pronunciation image and display them simultaneously through the display unit so that they form a single mouth shape.

The controller may overlap the pronunciation image of the lecturer and the pronunciation image of the learner and display them simultaneously through the display unit.

The controller is configured to repeatedly reproduce the pronunciation image of the lecturer and to display the image of the learner photographed in real time through the camera as the pronunciation video of the learner.

According to another aspect of the present invention, a pronunciation learning method using a comparison image includes executing a pronunciation learning function, photographing the learner through a camera, and simultaneously displaying, through the display unit, the lecturer's pronunciation image according to the pronunciation learning content and the learner pronunciation image including the learner's mouth captured by the camera, so that the two images can be compared with each other.

The displaying may include displaying the lecturer's pronunciation image on one side of the display unit and simultaneously displaying the learner's pronunciation image on the other side of the display unit.

The displaying may include combining a part of the lecturer's pronunciation image and a part of the learner's pronunciation image and displaying them simultaneously so that they form a single mouth shape.

The displaying may include overlapping the lecturer's pronunciation image and the learner's pronunciation image as a whole and displaying them simultaneously through the display unit.

According to still another aspect of the present invention, a method for providing pronunciation learning content from a server for pronunciation learning using a comparison image includes checking the connection of a pronunciation learning apparatus, providing the pronunciation learning apparatus with a list of content provided by the server, determining whether a selection to download pronunciation learning content is input from the pronunciation learning apparatus, and, when such a selection is input, transmitting the pronunciation learning content to the pronunciation learning apparatus. The pronunciation learning content includes a pronunciation learning application that, when the pronunciation learning function is executed, photographs the learner through a camera and performs pronunciation learning by simultaneously displaying, through the display unit, the lecturer's pronunciation image according to the pronunciation learning content and the learner pronunciation image including the learner's mouth captured by the camera.

When the pronunciation learning application executes the pronunciation learning function, it may display the lecturer's pronunciation image on one side of the display unit of the pronunciation learning apparatus and simultaneously display the learner's pronunciation image on the other side of the display unit.

When the pronunciation learning application executes the pronunciation learning function, it may simultaneously display a part of the lecturer's pronunciation image and a part of the learner's pronunciation image through the display unit of the pronunciation learning apparatus so that they form a single mouth shape.

When the pronunciation learning application executes the pronunciation learning function, it may overlap the lecturer's pronunciation image and the learner's pronunciation image and display them simultaneously through the display unit of the pronunciation learning apparatus.

The present invention allows the learner to easily compare the lecturer's mouth shape with his or her own mouth shape through three pronunciation image display methods that simultaneously display the lecturer's pronunciation video and the learner's pronunciation video captured through the camera, so that pronunciation correction and practice can be performed efficiently.

In addition, because the learner can practice while matching his or her mouth shape to the lecturer's mouth shape displayed on the screen, more engaging, conversation-style pronunciation learning can be performed.

In addition, existing speaking and listening learning methods are mostly voice-based, so learners could not see how their mouths moved when pronouncing a foreign language and therefore could not produce accurate pronunciations. By comparing the lecturer's and the learner's mouth shapes, the present invention lets the learner quickly see which parts are pronounced incorrectly and provides a learning method for improving them.

In addition, the present invention can be used not only for foreign-language learning but also for speech practice by hearing-impaired persons and for practice in reading meaning from mouth shapes.

1 is a block diagram of a pronunciation learning apparatus according to an exemplary embodiment.
2 is a diagram illustrating a configuration of a pronunciation learning system including a pronunciation learning apparatus and a data server according to an exemplary embodiment.
3 is a diagram illustrating an operation flow of a server when installing a pronunciation learning application according to an embodiment of the present invention.
4A and 4B are diagrams illustrating a flow of a pronunciation learning operation according to an embodiment of the present invention.
5 is a view showing an example of a pronunciation learning apparatus according to an embodiment of the present invention.
6 illustrates another example of a pronunciation learning apparatus according to an exemplary embodiment, and illustrates an example of a pronunciation image display method of simultaneously displaying a lecturer's pronunciation image and a learner's pronunciation image.
FIG. 7 is a diagram illustrating an example of a pronunciation image display method in which a part of the lecturer's pronunciation image and a part of the learner's pronunciation image are displayed to form a single mouth shape during a pronunciation learning operation according to an embodiment of the present invention.
FIG. 8 is a diagram illustrating an example of a pronunciation image display method in which the lecturer's pronunciation image and the learner's pronunciation image are overlapped and displayed simultaneously during a pronunciation learning operation according to an embodiment of the present invention.
FIG. 9 is a diagram schematically illustrating a process of generating a superimposed pronunciation image of FIG. 8.
FIG. 10 is a diagram schematically illustrating an operation of extracting a learner's pronunciation image from a captured image according to an exemplary embodiment.
11 is a diagram illustrating a display example of a screen during a pronunciation learning operation in a portrait view mode in a terminal according to an embodiment of the present invention.
12 is a diagram illustrating an example of a pronunciation learning apparatus having two display units according to an exemplary embodiment.
13 is a diagram illustrating an example of a form in which a learner performs pronunciation learning according to an embodiment of the present invention.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS. Hereinafter, an apparatus and an operation method of the present invention will be described in detail with reference to the accompanying drawings. In the following description, specific details such as particular elements are set forth to help provide a thorough understanding of the present invention; it will be apparent to those of ordinary skill in the art that the invention may be practiced with modifications to these details. Well-known functions and constructions are not described in detail, since doing so would obscure the invention in unnecessary detail.

In general, infants naturally learn to speak by observing not only their parents' words but also the shape of their parents' mouths, and by repeatedly imitating them. The same applies to foreign-language learning: speaking a foreign language is learned most accurately by mimicking not only the lecturer's sound but also the shape of the talking mouth. In other words, the learner can find the closest sound only by watching what the lecturer says and observing the shape of the mouth and the movement of the tongue.

Based on this, the present invention proposes a pronunciation learning method and apparatus that display the lecturer's pronunciation video and the learner's pronunciation video at the same time through the display unit, allowing the learner to easily compare and correct his or her pronunciation and mouth shape against those of the lecturer.

The present invention proposes three methods of displaying the pronunciation image. The first method displays the lecturer's pronunciation video and the learner's pronunciation video side by side on the display at the same time. The second method combines part of the lecturer's pronunciation video and part of the learner's pronunciation video symmetrically so that they form a single mouth shape. The third method overlaps the lecturer's pronunciation video and the learner's pronunciation video as a whole, as sketched below.
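The following is a minimal sketch, not taken from the patent, of how the three display modes could be composed from two equally sized video frames. OpenCV and NumPy are assumed, and the function and mode names are illustrative.

```python
# Minimal sketch of the three pronunciation-image display modes,
# assuming `lecturer` and `learner` are BGR numpy arrays of identical size.
import cv2
import numpy as np

def compose_frame(lecturer, learner, mode="symmetric"):
    """Combine one lecturer frame and one learner frame for display."""
    if mode == "symmetric":
        # Mode 1: show the two mouth images side by side.
        return np.hstack((lecturer, learner))
    if mode == "split_symmetric":
        # Mode 2: left half from the lecturer, right half from the learner,
        # so that together they form a single mouth shape.
        w = lecturer.shape[1] // 2
        return np.hstack((lecturer[:, :w], learner[:, w:]))
    if mode == "overlay":
        # Mode 3: overlap the whole frames, with the learner frame
        # at roughly 50% transparency over the lecturer frame.
        return cv2.addWeighted(lecturer, 0.5, learner, 0.5, 0)
    raise ValueError(f"unknown display mode: {mode}")
```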

Hereinafter, embodiments of the present invention will be described in more detail with reference to the accompanying drawings.

FIG. 1 is a block diagram of a pronunciation learning apparatus according to an exemplary embodiment. Referring to FIG. 1, the pronunciation learning apparatus includes a camera unit 170, an image processor 180, an input unit 110, a display unit 120, a storage unit 130, a communication unit 160, an audio processor 150, and a controller 140.

The camera unit 170 includes a lens (not shown) for forming an optical image of the subject on an image sensor and an image sensor (not shown) for converting the image information of the subject into electrical data, and obtains video data of the subject. The camera unit 170 may be built into the pronunciation learning apparatus, or an external camera module, such as a general webcam or a portable device equipped with a camera module, may be connected to the pronunciation learning apparatus by wire or wirelessly. In an embodiment of the present invention, the camera unit 170 captures the learner's pronunciation image during pronunciation learning.

The image processor 180 processes the image data captured by the camera unit 170. According to an embodiment of the present invention, the image processor 180 recognizes and extracts the face from an image in which a person (e.g., the learner) is the subject, and in particular extracts the portion of the image containing the mouth shape being pronounced. It also edits and generates the pronunciation images so that the lecturer's pronunciation video and the learner's pronunciation video can be displayed simultaneously on the display unit 120 according to the selected pronunciation image display method. For example, when the pronunciation image display method is the perspective (overlay) mode, the image processor 180 increases the transparency of the learner's pronunciation video and overlaps it with the lecturer's pronunciation video.

The input unit 110 receives a user input and transmits it to the controller 140. When the display unit 120 is implemented as a touch screen, the display unit 120 may operate as the input unit 110 at the same time. In an embodiment of the present disclosure, the input unit 110 may include function keys used when performing pronunciation learning.

The display unit 120 displays the display contents under the control of the controller 140. In one embodiment of the present invention, the display unit 120 simultaneously displays the lecturer's pronunciation video and the learner's pronunciation video according to the pronunciation video display method. When the display unit 120 is implemented as a touch screen, the display unit 120 may operate as the input unit 110 at the same time.

The storage unit 130 stores the data needed for the pronunciation learning apparatus to operate. In an embodiment of the present invention, the storage unit 130 stores the program data needed to perform the pronunciation learning function, data generated while the function runs (for example, the learner's pronunciation video captured by the camera, learner profiles, and recent learning sections), and the lecturer's pronunciation video used during pronunciation learning. The lecturer's pronunciation video downloaded from a separate server can also be stored.

The audio processor 150 provides a speaker (SPK) and an audio output terminal for outputting voice or audio during pronunciation learning, and includes a microphone (MIC) for recording the learner's pronunciation.

The communication unit 160 performs wired or wireless communication with a server or other devices through a wired or wireless communication network. In one embodiment of the present invention, the communication unit 160 communicates with a data server through a communication network.

The controller 140 controls the other components of the pronunciation learning apparatus. In an embodiment of the present invention, the controller 140 executes the foreign-language pronunciation learning function, checks the pronunciation video display method set according to the user input, and displays the lecturer's pronunciation video and the learner's pronunciation video at the same time according to the set display method.

In this case, when the set pronunciation video display method is the first method (symmetric mode), the controller 140 displays the lecturer's pronunciation video on one side of the screen and simultaneously displays the learner's pronunciation video on the other side. When the set method is the second method (split symmetric mode), a part of the lecturer's pronunciation video and a part of the learner's pronunciation video are combined and displayed simultaneously so as to form a single mouth shape. When the set method is the third method (perspective mode), the lecturer's pronunciation video and the learner's pronunciation video are overlaid and displayed simultaneously.

When the lecturer's pronunciation video and the learner's pronunciation video are overlapped as a whole, the image processor 180 increases the transparency of the learner's pronunciation video, overlaps it with the lecturer's pronunciation video to generate a combined video, and the combined video is displayed through the display unit 120.

In addition, the controller 140 repeatedly plays the pronunciation video of the lecturer when learning to speak a foreign language, and displays the learner's video recorded in real time through a camera as the pronunciation video of the learner.

The pronunciation learning apparatus according to an embodiment of the present invention is a device that performs the pronunciation learning function (or application) of the present invention. It may be a dedicated pronunciation learning device, but it may also be implemented in various electronic devices such as a general desktop computer, laptop, mobile phone, smartphone, PMP, MP3 player, UMPC, or TV. For example, a general computer, notebook, or smartphone may have a pronunciation learning application installed that provides the pronunciation learning function of the present invention, and a home appliance such as a TV or refrigerator with the pronunciation learning function built in can also operate as a pronunciation learning apparatus.

2 is a diagram illustrating a configuration of a pronunciation learning system including a pronunciation learning apparatus and a data server according to an exemplary embodiment.

Referring to FIG. 2, the pronunciation learning system according to an exemplary embodiment of the present invention includes a pronunciation learning apparatus 100, a server 200 that provides the pronunciation learning function or pronunciation learning data to the pronunciation learning apparatus 100, and a communication network 300 connecting the pronunciation learning apparatus 100 and the server 200.

The communication network 300 represents a wired/wireless network, including mobile communication networks such as Universal Mobile Telecommunications System (UMTS), Wideband Code Division Multiple Access (W-CDMA), Global System for Mobile Communications (GSM), and WiBro, as well as general Internet communication networks.

The server 200 includes a communication unit 210 for communicating with the pronunciation learning apparatus 100 through the communication network 300, a control unit 220 for controlling each component of the server 200, and a DB 230 for storing the pronunciation learning application and pronunciation learning data. In an embodiment of the present invention, the pronunciation learning data includes the lecturer's pronunciation video, and may also include moving-image data such as part of a movie or drama. The DB 230 may also be configured as a separate, independent DB server.

3 is a diagram illustrating an operation flow of a server when installing a pronunciation learning application according to an embodiment of the present invention.

In the operation of installing the pronunciation learning application according to an embodiment of the present disclosure, when the pronunciation learning apparatus accesses the server providing the pronunciation learning application, the server checks the access of the pronunciation learning apparatus in step 310. The server may then provide a menu for selecting among the various applications it offers. If the user of the pronunciation learning apparatus selects the pronunciation learning application download in step 320, the server confirms the selection and transmits the pronunciation learning application to the pronunciation learning apparatus in step 330.
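A minimal server-side sketch of this providing flow (connection, content list, selection, transmission), assuming an HTTP interface; Flask, the route paths, and the content names are illustrative and not from the patent.

```python
# Minimal sketch of the server-side flow of FIG. 3 and the content-providing
# method: list available items, accept a selection, transmit the package.
from flask import Flask, jsonify, send_from_directory

app = Flask(__name__)
CONTENT_DIR = "content"                       # where lesson packages are stored
CONTENT_LIST = ["basic_vowels", "consonants", "daily_conversation"]

@app.route("/content")
def list_content():
    # The connected pronunciation learning apparatus asks what is available.
    return jsonify(CONTENT_LIST)

@app.route("/content/<name>")
def download_content(name):
    # The apparatus selects an item; the server transmits the package.
    if name not in CONTENT_LIST:
        return ("unknown content", 404)
    return send_from_directory(CONTENT_DIR, f"{name}.zip")

if __name__ == "__main__":
    app.run(port=8000)
```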

4A and 4B are diagrams illustrating a flow of a pronunciation learning operation according to an embodiment of the present invention.

FIGS. 4A and 4B illustrate an example of the operation of performing the pronunciation learning function of a language-learning application. Referring to FIGS. 4A and 4B, when the pronunciation learning apparatus 100 executes the language learning function in step 405, the learning menu is displayed through the display unit 120 in step 410. The menu may include speaking learning, listening learning, grammar learning, reading learning, and word/idiom memorization. In step 415, it is determined whether speaking learning (pronunciation practice) has been selected. If not, the selected learning menu is performed (learning operations other than speaking learning are omitted here). If speaking learning is selected, the learning sections available for speaking practice are displayed in step 420. When the user selects a learning section, the data for that section, including the lecturer's pronunciation video, is loaded from the storage unit 130 in step 425. In step 425, the learning data may also be read directly from the server 200 through the communication unit rather than from the storage unit 130 of the pronunciation learning apparatus 100. In that case, the necessary data may be fetched all at once after the learning section is selected, or read continuously from the server 200 through the communication network 300, periodically or by streaming, whenever it is needed, as sketched below.
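A minimal sketch of the "load from local storage, otherwise fetch from the server" behavior described above; the file layout and server URL scheme are illustrative assumptions, not part of the patent.

```python
# Minimal sketch: prefer the lecturer video already stored on the device,
# otherwise download it from the content server.
import os
import urllib.request

def load_lecturer_video(section_id, cache_dir="lessons", server_url=None):
    """Return a local path to the lecturer pronunciation video for a section."""
    local_path = os.path.join(cache_dir, f"{section_id}.mp4")
    if os.path.exists(local_path):
        return local_path                      # already in the storage unit
    if server_url is None:
        raise FileNotFoundError(f"no local copy and no server configured: {local_path}")
    os.makedirs(cache_dir, exist_ok=True)
    # Fetch the section data from the server over the communication network.
    urllib.request.urlretrieve(f"{server_url}/lessons/{section_id}.mp4", local_path)
    return local_path
```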

In step 430, the speaking learning mode is performed. Here, the learner may select a pronunciation image display method; the user may also set in advance a default mode to be used when no method is selected. In step 435, the pronunciation image display method is checked. If it is the symmetric mode, the process proceeds to step 440: the lecturer's pronunciation image is displayed on one side of the screen and the learner's pronunciation image on the other side. In this case, the display unit may appear as shown in FIGS. 5 and 6 below.

FIG. 5 shows an example of a pronunciation learning apparatus according to an embodiment of the present invention: a screen in which a personal computer such as a notebook or desktop operates as the pronunciation learning apparatus and performs the pronunciation learning function. As shown in FIG. 5, in the portion of the display that shows the pronunciation images, the lecturer's pronunciation image 502 is displayed on the left and the learner's pronunciation image 503 on the right.

Preferably, the lecturer's pronunciation image 502 is played repeatedly, and the learner's pronunciation image 503 shows the learner's face (lip) image captured in real time through the camera 501, so that the learner can practice while matching his or her mouth shape to the lecturer's pronunciation image 502.

The learner can also record and store his or her own pronunciation image and play it back at any desired time for comparison with the lecturer's pronunciation image 502. To reproduce the stored learner's pronunciation image and the lecturer's pronunciation image simultaneously, the two videos can be played back with their starting points aligned.

In addition, the playback speed of the shorter video may be adjusted (played more slowly) so that both videos have the same playback time and therefore share the same start and end points.
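As an illustration only (the patent does not specify an algorithm), the playback rates could be chosen as follows so that both clips start and end together; the function name is hypothetical.

```python
# Minimal sketch of equalizing the playback duration of the stored learner
# video and the lecturer video by slowing down the shorter clip.
def playback_rates(lecturer_duration_s, learner_duration_s):
    """Return (lecturer_rate, learner_rate) so that both clips finish together.

    The shorter clip gets a rate below 1.0 (slower playback); the longer clip
    stays at normal speed, so the start and end points coincide."""
    if lecturer_duration_s <= 0 or learner_duration_s <= 0:
        raise ValueError("durations must be positive")
    target = max(lecturer_duration_s, learner_duration_s)
    return lecturer_duration_s / target, learner_duration_s / target

# Example: a 1.8 s lecturer clip and a 2.4 s learner recording.
# playback_rates(1.8, 2.4) -> (0.75, 1.0): the lecturer clip plays at 75%
# speed so that both clips end after 2.4 s.
```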

The menu 512 provides selection means for additional functions during the pronunciation learning operation, for example selecting the pronunciation image display method, moving to the initial screen, selecting a learning section, selecting or registering a learner, settings, and exit. Each of these functions may also be assigned to a separate button and displayed or provided as an indication.

Referring to FIG. 5, the word or sentence currently being learned (APPLE in FIG. 5) is displayed above the portions 502 and 503 that show the pronunciation images. To the left of portions 502 and 503 are function buttons 506, 507, and 508, labeled A, B, and C, a REC button 510 for recording the learner's pronunciation image, and a SLOW button 511 for slow playback of the lecturer's pronunciation video. The function buttons 506, 507, and 508 may be set, for example, to play the lecturer's pronunciation image, to play the stored learner's pronunciation image alone, and to play the lecturer's and learner's pronunciation images simultaneously.

Below the portions 502 and 503 displaying the pronunciation images, a go-to-previous-word/sentence button 512, a stop button 513, and a go-to-next-word/sentence button 514 are displayed, together with an oral structure table 509 indicating the articulatory position corresponding to each phonetic symbol.

To the right of portions 502 and 503 displaying the pronunciation images, a play button 504 and a pause button 505 for the pronunciation images are displayed.

Meanwhile, the learning data for pronunciation learning may include moving-picture data depicting a particular situation, such as a movie or drama. In that case, the video may be reproduced through part of the display unit during speaking learning. For example, the pronunciation learning screen shown in FIG. 5 may be displayed on one part of the display unit while the video data is displayed on another part, or the movie or drama video may fill the entire display unit with the portions 502 and 503 displaying the pronunciation images overlaid on it. In this case, the lecturer pronunciation image 502 shows a pronunciation image related to the actor's dialogue in the video displayed on the full screen.

6 illustrates another example of a pronunciation learning apparatus according to an exemplary embodiment, and illustrates an example of a pronunciation image display method of simultaneously displaying a lecturer's pronunciation image and a learner's pronunciation image. 6 illustrates an example of display of a display unit when performing the pronunciation learning function of the present invention through a personal mobile terminal such as a mobile phone or a smartphone. The function buttons 604 to 614 of FIG. 6 perform the same operations as the function buttons 504 to 514 described in FIG. 5.

Referring back to FIG. 4, if the pronunciation image display method is not the symmetric mode in step 435, the process proceeds to step 445 to determine whether it is the split symmetric mode. If so, the process proceeds to step 450, in which a part of the lecturer's pronunciation image and a part of the learner's pronunciation image are displayed as left and right halves so that together they form a single mouth shape, and then proceeds to step 470. In step 450, the display appears as illustrated in FIG. 7.

FIG. 7 is a diagram illustrating an example of a pronunciation image display method in which part of the lecturer's pronunciation image and part of the learner's pronunciation image are simultaneously displayed to form a single mouth shape during a pronunciation learning operation according to an embodiment of the present invention.

As shown in FIG. 7, when the pronunciation image display method is the split symmetric mode, the screen is divided vertically into two: the left half shows the left part of the lecturer's pronunciation image and the right half shows the right part of the learner's pronunciation image, so that together they form a single mouth. In this way the user can compare the differences between the two mouth shapes more precisely. The function buttons 704 to 714 of FIG. 7 perform the same operations as the function buttons 504 to 514 described in FIG. 5.

If it is determined in step 445 that the pronunciation image display method is not the split symmetric mode, the process proceeds to step 455 to determine whether it is set to the perspective mode. If so, the process proceeds to step 460, in which the lecturer's pronunciation image and the learner's pronunciation image are overlapped as a whole and displayed, and then proceeds to step 470. In step 460, the display appears as illustrated in FIG. 8.

FIG. 8 is a diagram illustrating an example of a pronunciation image display method in which the lecturer's pronunciation image and the learner's pronunciation image are overlapped and displayed simultaneously during a pronunciation learning operation according to an embodiment of the present invention. The function buttons 804 to 814 of FIG. 8 perform the same operations as the function buttons 504 to 514 described in FIG. 5.

FIG. 9 is a diagram schematically illustrating the process of generating the superimposed pronunciation image of FIG. 8. As illustrated in FIG. 9, when the lecturer's pronunciation video and the learner's pronunciation video are overlapped and displayed, the transparency of the learner's pronunciation video 902 is first increased (preferably set to 50%), and this image is then superimposed on the lecturer's pronunciation image 901, which has a transparency of 0%.

In this case, the transparency of the image is a numerical value representing the degree of transparency of the image. If the transparency is 0%, the original image is displayed, and if the transparency is 100%, the image is completely transparent.

Alternatively, the transparency of the lecturer's pronunciation image 901 may be increased and the result combined with the learner's pronunciation image 902 at 0% transparency. When combining the two images, they should be combined so that the mouths of both images lie roughly at the center of the combined image. To this end, each image may be enlarged, reduced, or shifted so that the mouth corners of the two images coincide and the midpoint between the corners lies at the center of the image, as sketched below.
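A minimal sketch of this alignment and blending step, assuming the mouth-corner coordinates are already known (for example from the mouth-extraction step described later) and that the lecturer clip is already framed with the mouth near the center; the function and parameter names are illustrative.

```python
# Minimal sketch: scale and shift the learner frame so its mouth corners
# line up with the lecturer's, then blend the frames as in FIG. 9.
import cv2
import numpy as np

def align_and_blend(lecturer, learner, lect_corners, learn_corners, alpha=0.5):
    """Align the learner frame to the lecturer's mouth corners and blend.

    `lect_corners` and `learn_corners` are the two (x, y) mouth-corner points
    of each frame; `alpha` is the learner opacity (0.5 means about 50%
    transparency over the opaque lecturer frame)."""
    h, w = lecturer.shape[:2]
    (lx1, ly1), (lx2, ly2) = lect_corners
    (sx1, sy1), (sx2, sy2) = learn_corners
    # Enlarge/reduce so the mouth widths match.
    scale = np.hypot(lx2 - lx1, ly2 - ly1) / np.hypot(sx2 - sx1, sy2 - sy1)
    # Move so the midpoints between the mouth corners coincide.
    lect_mid = np.array([(lx1 + lx2) / 2, (ly1 + ly2) / 2])
    learn_mid = np.array([(sx1 + sx2) / 2, (sy1 + sy2) / 2])
    shift = lect_mid - scale * learn_mid
    m = np.float32([[scale, 0, shift[0]], [0, scale, shift[1]]])
    learner_aligned = cv2.warpAffine(learner, m, (w, h))
    return cv2.addWeighted(lecturer, 1.0 - alpha, learner_aligned, alpha, 0)
```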

Referring back to FIG. 4, if it is determined in operation 455 that the pronunciation image display method is not the perspective mode, the operation proceeds to operation 475 and the pronunciation image is not displayed. Thereafter, in step 470, it is determined whether the learning mode is terminated. If it is not terminated, the process proceeds to step 420 to continue the learning mode.

Through the operation process described above, the present invention provides a pronunciation learning method in which the lecturer's pronunciation video and the learner's pronunciation video can be easily compared. In addition, because the learner can learn pronunciation (mouth shape) following the lecturer's words and sentences, the method can also be used for speech practice by hearing-impaired persons and for practice in communicating by reading mouth shapes.

Meanwhile, when capturing the learner's pronunciation image, the learner may aim the camera directly at his or her mouth. However, it can be difficult for the learner to adjust the camera during a lesson and to zoom in accurately on the area around the mouth. Accordingly, the learner may instead capture the whole face, and the apparatus may perform face recognition by extracting facial feature points from the captured image data and then extract the mouth region to use as the pronunciation image.

FIG. 10 is a diagram schematically illustrating the operation of extracting the learner's pronunciation image from a captured image according to an exemplary embodiment. As shown in FIG. 10, face recognition and mouth extraction are performed on the image 1001 originally captured by the learner, and the pronunciation learning apparatus 100 can then display the learner's pronunciation image 1002, that is, an image in which the learner's mouth is located at the center.
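A minimal sketch of such an extraction step, using the stock Haar face detector bundled with OpenCV rather than whatever feature-point method the patent intends; the crop ratio and output size are illustrative assumptions.

```python
# Minimal sketch of extracting a mouth-centred pronunciation image from a
# full-face capture, as in FIG. 10.
import cv2

_face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def extract_mouth_region(frame):
    """Detect the largest face and return a crop around the mouth area,
    or None if no face is found."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = _face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])   # largest detected face
    # The mouth lies roughly in the lower third of the face bounding box.
    mouth = frame[y + int(0.6 * h): y + h, x: x + w]
    # Resize so the mouth fills the learner pronunciation image area.
    return cv2.resize(mouth, (320, 240))
```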

Meanwhile, when the display unit 120 is implemented as a touch screen, the apparatus can be set so that, when the captured image is displayed, the user edits the video directly through touch input, zooming in, zooming out, or moving the image so that the mouth is positioned at the center of the image.

FIG. 11 illustrates an example of the screen during a pronunciation learning operation in portrait view mode on a terminal according to an embodiment of the present invention. The pronunciation learning operation may be displayed in portrait view mode as shown in FIG. 11, with the lecturer pronunciation image 1102 and the learner pronunciation image 1103 arranged above and below on the display unit. Also, since the camera 1101 of a typical mobile terminal is located at the top of the device, the image may be set to display upside down so that the mouth shape is easy to photograph while the pronunciation learning function is performed. The function buttons 1104 to 1114 of FIG. 11 perform the same operations as the function buttons 504 to 514 described with reference to FIG. 5.

FIG. 12 illustrates an example of a pronunciation learning apparatus having two display units according to an exemplary embodiment. As shown in FIG. 12, the pronunciation learning method according to an exemplary embodiment of the present disclosure may also be performed through a display unit comprising two display windows. In this case, the lecturer pronunciation image 1202 is displayed on the first display unit and the learner pronunciation image 1203 on the second display unit to perform speaking learning. The function buttons 1204 to 1214 of FIG. 12 perform the same operations as the function buttons 504 to 514 described with reference to FIG. 5.

FIG. 13 illustrates an example of a learner performing pronunciation learning through a home appliance such as a TV at home. Because many home appliances do not have a built-in camera, pronunciation learning can be performed using an external camera. Referring to FIG. 13, the learner may use a wired/wireless headset 1303 with a microphone and speaker, photograph an image that includes his or her own mouth using a photographing apparatus 1302 such as a webcam, a portable camera, or a portable device equipped with a camera module, and perform pronunciation learning through the TV 1301. In this case, the photographing apparatus 1302 and the headset 1303 may be connected to the TV 1301 by wire, or wirelessly through short-range communication such as Bluetooth.

As described above, the pronunciation learning method and apparatus according to an embodiment of the present invention can be configured and operated. Although specific embodiments have been described above, various modifications can be made without departing from the scope of the present invention.

Claims (13)

A pronunciation learning apparatus using a comparison image, comprising:
a display unit;
a camera for photographing a learner; and
a control unit for simultaneously displaying, through the display unit, a pronunciation image of a lecturer according to predetermined pronunciation learning content and a learner pronunciation image including the learner's mouth captured by the camera, so that the two images can be compared with each other.
The apparatus of claim 1, wherein the control unit displays the pronunciation image of the lecturer on one side of the display unit and simultaneously displays the pronunciation image of the learner on the other side of the display unit.
The apparatus of claim 1, wherein the control unit combines a part of the lecturer's pronunciation image and a part of the learner's pronunciation image and displays them simultaneously through the display unit so that they form a single mouth shape.
The apparatus of claim 1, wherein the control unit overlaps the pronunciation image of the lecturer and the pronunciation image of the learner and displays them simultaneously through the display unit.
The apparatus of claim 1, wherein the control unit repeatedly reproduces the pronunciation image of the lecturer and displays the learner's image captured in real time through the camera as the learner's pronunciation image.
A pronunciation learning method using a comparison image, comprising:
executing a pronunciation learning function;
photographing a learner through a camera; and
simultaneously displaying, through a display unit, a pronunciation image of a lecturer according to predetermined pronunciation learning content and a learner pronunciation image including the learner's mouth captured by the camera, so that the two images can be compared with each other.
The method of claim 6, wherein the displaying comprises displaying the pronunciation image of the lecturer on one side of the display unit and simultaneously displaying the pronunciation image of the learner on the other side of the display unit.
The method of claim 6, wherein the displaying comprises combining a part of the lecturer's pronunciation image and a part of the learner's pronunciation image and displaying them simultaneously through the display unit so that they form a single mouth shape.
The method of claim 6, wherein the displaying comprises overlapping the pronunciation image of the lecturer and the pronunciation image of the learner and displaying them simultaneously through the display unit.
A method for providing pronunciation learning content from a server for pronunciation learning using a comparison image, the method comprising:
checking a connection of a pronunciation learning apparatus;
providing the pronunciation learning apparatus with a list of content provided by the server;
determining whether a selection to download pronunciation learning content is input from the pronunciation learning apparatus; and
when the selection to download the pronunciation learning content is input, transmitting the pronunciation learning content to the pronunciation learning apparatus,
wherein the pronunciation learning content comprises a pronunciation learning application which, when the pronunciation learning function is executed, photographs the learner through a camera and performs pronunciation learning by simultaneously displaying, through a display unit, the pronunciation image of the lecturer according to the pronunciation learning content and the learner pronunciation image including the learner's mouth captured by the camera, so that the two images can be compared with each other.
The method of claim 10, wherein, when the pronunciation learning application executes the pronunciation learning function, it displays the pronunciation image of the lecturer on one side of a display unit of the pronunciation learning apparatus and simultaneously displays the pronunciation image of the learner on the other side of the display unit.
The method of claim 10, wherein, when the pronunciation learning application executes the pronunciation learning function, it combines a part of the lecturer's pronunciation image and a part of the learner's pronunciation image and displays them simultaneously through the display unit of the pronunciation learning apparatus so that they form a single mouth shape.
The method of claim 10, wherein, when the pronunciation learning application executes the pronunciation learning function, it overlaps the pronunciation image of the lecturer and the pronunciation image of the learner as a whole and displays them simultaneously through the display unit of the pronunciation learning apparatus.
KR1020100060615A 2010-06-25 2010-06-25 Method and apparatus for pronunciation exercise using comparison video KR20110065276A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
KR1020100060615A KR20110065276A (en) 2010-06-25 2010-06-25 Method and apparatus for pronunciation exercise using comparison video
PCT/KR2011/004433 WO2011162508A2 (en) 2010-06-25 2011-06-16 Method and device for learning pronunciation by using comparison images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020100060615A KR20110065276A (en) 2010-06-25 2010-06-25 Method and apparatus for pronunciation exercise using comparison video

Publications (1)

Publication Number Publication Date
KR20110065276A true KR20110065276A (en) 2011-06-15

Family

ID=44398577

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020100060615A KR20110065276A (en) 2010-06-25 2010-06-25 Method and apparatus for pronunciation exercise using comparison video

Country Status (2)

Country Link
KR (1) KR20110065276A (en)
WO (1) WO2011162508A2 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20180093438A (en) 2017-02-13 2018-08-22 이예빈 Pronunciation correction devices and applications that help the deaf accent
KR20220115409A (en) 2021-02-10 2022-08-17 임정택 Speech Training System For Hearing Impaired Person

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110430465B (en) * 2019-07-15 2021-06-01 深圳创维-Rgb电子有限公司 Learning method based on intelligent voice recognition, terminal and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100405061B1 (en) * 2000-03-10 2003-11-10 문창호 Apparatus for training language and Method for analyzing language thereof
KR100593837B1 (en) * 2001-10-17 2006-07-03 박남교 Active studying data offer method to add interface function on internet moving image
KR20050001149A (en) * 2003-06-27 2005-01-06 주식회사 팬택 Mobile station with image creating function
KR20090081046A (en) * 2008-01-23 2009-07-28 최윤정 Language learning system using internet network

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20180093438A (en) 2017-02-13 2018-08-22 이예빈 Pronunciation correction devices and applications that help the deaf accent
KR20220115409A (en) 2021-02-10 2022-08-17 임정택 Speech Training System For Hearing Impaired Person

Also Published As

Publication number Publication date
WO2011162508A2 (en) 2011-12-29
WO2011162508A3 (en) 2012-04-12

Similar Documents

Publication Publication Date Title
JP6541934B2 (en) Mobile terminal having voice interaction function and voice interaction method therefor
WO2014160316A2 (en) Device, method, and graphical user interface for a group reading environment
WO2014151884A2 (en) Device, method, and graphical user interface for a group reading environment
KR101427528B1 (en) Method of interactive language learning using foreign Video contents and Apparatus for it
KR101789221B1 (en) Device and method for providing moving picture, and computer program for executing the method
KR20190083532A (en) System for learning languages using the video selected by the learners and learning contents production method thereof
JP2016114673A (en) Electronic equipment and program
KR20110065276A (en) Method and apparatus for pronunciation exercise using comparison video
KR101539972B1 (en) Robot study system using stereo image block and method thereof
US20120154514A1 (en) Conference support apparatus and conference support method
JP3569278B1 (en) Pronunciation learning support method, learner terminal, processing program, and recording medium storing the program
KR20140078810A (en) Apparatus and method for learning rhythm pattern by using native speaker's pronunciation data and language data.
KR100393122B1 (en) System and Method for learning languages by using bone-path hearing function of a deaf person and storage media
KR20030079497A (en) service method of language study
KR20140079677A (en) Apparatus and method for learning sound connection by using native speaker's pronunciation data and language data.
KR101920653B1 (en) Method and program for edcating language by making comparison sound
JP2017146402A (en) Learning support device and program
KR20120031373A (en) Learning service system and method thereof
KR101832464B1 (en) Device and method for providing moving picture, and computer program for executing the method
KR20140087951A (en) Apparatus and method for learning english grammar by using native speaker's pronunciation data and image data.
Millett Improving accessibility with captioning: An overview of the current state of technology
JP2006163269A (en) Language learning apparatus
KR20140087950A (en) Apparatus and method for learning rhythm pattern by using native speaker's pronunciation data and language data.
KR20140082127A (en) Apparatus and method for learning word by using native speaker's pronunciation data and origin of a word
KR20160121217A (en) Language learning system using an image-based pop-up image

Legal Events

Date Code Title Description
A201 Request for examination
G15R Request for early opening
E601 Decision to refuse application
A107 Divisional application of patent
J201 Request for trial against refusal decision
J801 Dismissal of trial

Free format text: REJECTION OF TRIAL FOR APPEAL AGAINST DECISION TO DECLINE REFUSAL REQUESTED 20120418

Effective date: 20120723