WO2013038734A1 - Gesture recognition device, electronic device, method for controlling gesture recognition device, control program, and recording medium - Google Patents
- Publication number
- WO2013038734A1 (PCT/JP2012/056518; application JP2012056518W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- gesture recognition
- person
- gesture
- target part
- image
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/22—Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/24—Aligning, centring, orientation detection or correction of the image
- G06V10/245—Aligning, centring, orientation detection or correction of the image by locating a pattern; Special marks for positioning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/56—Extraction of image or video features relating to colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V40/28—Recognition of hand or arm movements, e.g. recognition of deaf sign language
Definitions
- the present invention relates to a gesture recognition device, an electronic device, a gesture recognition device control method, a control program, and a recording medium that capture a user's motion with a camera and recognize the user's gesture from the captured image.
- there are information input devices that capture a user's action with a camera and recognize the user's gesture from the captured image.
- an information input device that recognizes a user's gesture from an image is referred to as a gesture recognition device.
- Such a gesture recognition device is expected to serve as a next-generation interface, since the user can input operation instructions to the device intuitively without having to handle an input device.
- with a conventional gesture recognition device, however, the user cannot know whether the device can recognize his or her gesture. In other words, the user cannot accurately determine whether the gesture is within the range of the camera's angle of view. As a result, even when the gesture recognition device could not recognize the gesture, the user sometimes expected it to be recognized and repeated the same motion over and over.
- Patent Documents 3 and 4 describe techniques for determining whether a person's whole body or face is within the angle of view when shooting with a camera, and for issuing a warning when it is not.
- however, the techniques described in Patent Documents 3 and 4 relate to general cameras, not to gesture recognition devices. Moreover, gesture recognition devices have conventionally given no notification as to whether the gesture is within the angle of view.
- the present invention has been made in view of the above problems, and its purpose is to realize a gesture recognition device, an electronic device, a control method for the gesture recognition device, a control program, and a recording medium that notify the user when a gesture is not within the angle of view.
- in order to solve the above problems, a gesture recognition device according to the present invention recognizes a gesture, which is a movement and/or shape of a person's gesture recognition target part, from an image captured by a camera, and outputs information for controlling an electronic device based on the recognized gesture to the electronic device. The device includes recognition target part determination means for determining whether the gesture recognition target part is included in the image, and output means for outputting target part non-view angle notification instruction information, which instructs a notification that the gesture recognition target part is not being imaged, when the recognition target part determination means determines that the gesture recognition target part is not included in the image.
- similarly, a control method according to the present invention for a gesture recognition device that recognizes a gesture, which is a movement and/or shape of a person's gesture recognition target part, from an image captured by a camera and outputs information for controlling an electronic device based on the recognized gesture to the electronic device includes: a recognition target part determination step of determining whether the gesture recognition target part is included in the image; and an output step of outputting target part non-view angle notification instruction information, which instructs a notification that the gesture recognition target part is not being imaged, when it is determined in the recognition target part determination step that the gesture recognition target part is not included in the image.
- according to the above configuration, the output means instructs a notification that the gesture recognition target part is not being imaged.
- the electronic device notifies the person operating the electronic device that the gesture recognition target region is not imaged based on the target region non-view angle notification instruction information.
- therefore, based on the notification from the electronic device, the person operating it can determine whether the gesture recognition target part is included in the image captured by the camera, that is, whether the gesture is outside the camera's angle of view. This prevents the wasted effort of the person repeating the same gesture many times in an attempt to have it recognized even though the gesture recognition target part is not being imaged.
- as described above, the gesture recognition device according to the present invention includes recognition target part determination means for determining whether the gesture recognition target part is included in the image, and output means for outputting target part non-view angle notification instruction information instructing a notification that the gesture recognition target part is not being imaged when the recognition target part determination means determines that it is not included. Likewise, the control method of the gesture recognition device includes a recognition target part determination step of determining whether the gesture recognition target part is included in the image, and an output step of outputting the target part non-view angle notification instruction information when it is determined in the recognition target part determination step that the gesture recognition target part is not included in the image.
- FIG. 1, showing an embodiment of the present invention, is a block diagram of the main configuration of a gesture recognition device and of a television equipped with the gesture recognition device. FIG. 2 shows an example of how a television equipped with the gesture recognition device is used. FIG. 3 shows an example of the television processing history information stored in the television storage unit of the television. FIG. 4 shows an example of the person specifying information stored in the gesture recognition device storage unit of the gesture recognition device.
- FIG. 2 is a diagram illustrating an example of how an electronic device equipped with the gesture recognition device according to the present invention is used.
- FIG. 2 shows a case where the electronic device is a television 2 and the television 2 has a built-in gesture recognition device.
- for example, when the user 4 makes a gesture of waving the right hand (right palm) left and right, if the right hand of the user 4 is outside the angle of view θ of the camera 3, the television 2 displays a message such as “Your hand is not visible” to notify the user 4 that the hand (gesture recognition target part) is not being imaged.
- in this way, the user can know whether his or her gesture is being imaged. This prevents the wasted effort of the user making the same gesture many times to have it recognized even though the gesture is not being captured.
- in this embodiment, a television is used as an example of the electronic device, but the electronic device is not limited thereto.
- the electronic device may be a mobile phone, a game machine, a digital camera, a security gate (door), or the like.
- a gesture is defined as a movement and / or shape of a predetermined part (face (head), hand, foot, etc.) of a user.
- the predetermined part is referred to as a gesture recognition target part.
- the television 2 equipped with the gesture recognition device is installed in the home, and three persons, “Father”, “Mother”, and “Child”, exist as users.
- in this embodiment, specific users use the television in this way, but the present invention is not limited to such use.
- the electronic device according to the present invention may be used by an unspecified number of users. For example, in a hospital or the like, there are a large number of unspecified users, but it may not be preferable to operate by directly touching the device. Therefore, in an environment such as a hospital, the gesture recognition device can be suitably used as an operation unit for an electronic device.
- the user means a person who operates the electronic device (a person who tries to operate the electronic device).
- FIG. 1 is a block diagram illustrating an example of a main configuration of a television 2 and a gesture recognition device 1 mounted on the television 2. First, the configuration of the television 2 will be described.
- the television 2 includes a television control unit 51, a television storage unit 52, a television communication unit 53, an operation unit 54, a display unit (notification unit) 55, an audio output unit (notification unit) 56, and the gesture recognition device 1.
- the television 2 may also include members such as an audio input unit, but such members are not shown because they are unrelated to the features of the invention.
- the TV communication unit 53 communicates with other devices by wireless communication means or wired communication means, and exchanges data in accordance with instructions from the TV control unit 51.
- the television communication unit 53 also includes an antenna for receiving broadcast waves, and acquires video data, audio data, program data, and the like from the received broadcast waves.
- the operation unit 54 is for a user to input an operation signal to the television 2 and operate the television 2.
- the operation unit 54 may be configured by an input device such as an operation button. Further, a touch panel in which the operation unit 54 and the display unit 55 are integrated may be used.
- the operation unit 54 may be a remote control device such as a remote controller separate from the television 2. Note that since the television 2 includes the gesture recognition device 1, the television 2 may not include the operation unit 54.
- the display unit 55 displays an image in accordance with an instruction from the television control unit 51.
- the display unit 55 only needs to be able to display images in accordance with instructions from the television control unit 51; for example, an LCD (liquid crystal display), an organic EL display, or a plasma display can be used.
- the audio output unit 56 outputs audio according to instructions from the television control unit 51, and is, for example, a speaker.
- the television control unit 51 performs various calculations by executing a program read from the television storage unit 52 into a temporary storage unit (not shown), and comprehensively controls each unit of the television 2.
- the television control unit 51 displays video and program information on the display unit 55 and outputs audio from the audio output unit 56 based on the video data, audio data, and program data acquired via the television communication unit 53. It also executes predetermined processing such as power ON/OFF, channel switching, volume change, and program guide display based on operation signals from the operation unit 54 or the gesture recognition device 1. When such processing is executed, the television control unit 51 associates the processing content with the processing date and time and stores them in the television storage unit 52 as television processing history information 61.
- when receiving an instruction signal from the gesture recognition device 1, the television control unit 51 notifies the user of the information indicated by the instruction signal by displaying it on the display unit 55 and/or outputting audio from the audio output unit 56.
- the television control unit 51 may notify a user of information indicated by the instruction signal by causing a notification lamp (not shown) such as an LED to emit light.
- for example, the television control unit 51 receives from the gesture recognition device 1 an instruction to notify that the gesture recognition target part is not being imaged, or that the person is not the operation target person. Based on such an instruction, the television control unit 51 may display the content as text on the display unit 55 or output it as audio from the audio output unit 56. To show that the gesture recognition target part is not being captured, the television control unit 51 may also display the image captured by the camera 3 on all or part of the screen of the display unit 55.
- the TV storage unit 52 stores programs, data, and the like referred to by the TV control unit 51, and stores, for example, the above-described TV processing history information 61 and the like.
- FIG. 3 is a diagram illustrating an example of the television processing history information 61 stored in the television storage unit 52.
- the television processing history information 61 is information in which processing content of processing executed by the television control unit 51 is associated with processing date and time when the television control unit 51 executed processing.
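- to make the structure of FIG. 3 concrete, the television processing history information 61 could be held as a simple list of records associating processing content with processing date and time. The sketch below is only an illustration; the field names are assumptions, since FIG. 3 itself is not reproduced here.

```python
from datetime import datetime

# Hypothetical in-memory form of the television processing history information 61.
# FIG. 3 associates processing content with processing date and time; the key
# names used here are illustrative assumptions.
television_processing_history = [
    {"processing_content": "power ON",       "processed_at": datetime(2012, 3, 9, 19, 0)},
    {"processing_content": "channel change", "processed_at": datetime(2012, 3, 9, 19, 5)},
    {"processing_content": "volume change",  "processed_at": datetime(2012, 3, 9, 19, 6)},
]
```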
- the gesture recognition device 1 recognizes a user's gesture from an image captured by the camera 3, and outputs a signal (information) for controlling the electronic device based on the recognized gesture to the electronic device.
- the gesture recognition device 1 is mounted on the television 2, but the present invention is not limited to this, and the television 2 and the gesture recognition device 1 may be separate. The specific configuration and function of the gesture recognition device 1 will be described next.
- the gesture recognition device 1 includes a gesture recognition device control unit 11, a gesture recognition device storage unit 12, and a gesture recognition device communication unit 13.
- the gesture recognition device communication unit 13 communicates with other devices such as the camera 3 by wireless communication means or wired communication means, and exchanges data in accordance with instructions from the gesture recognition device control unit 11. Specifically, the gesture recognition device communication unit 13 acquires an image captured by the camera 3 from the camera 3 in accordance with an instruction from the gesture recognition device control unit 11.
- the gesture recognition device control unit 11 performs various calculations by executing a program read from the gesture recognition device storage unit 12 to a temporary storage unit (not shown), and supervises each unit included in the gesture recognition device 1. Control.
- the gesture recognition device control unit 11 includes, as functional blocks, an image acquisition unit 21, a person determination unit (person determination means) 22, a person specifying unit 23, a user determination unit (operation target person determination means, operation prohibited person determination means) 24, a recognition target part determination unit (recognition target part determination means) 25, a timing determination unit (timing determination means) 26, an output unit (output means) 27, a gesture recognition execution unit 28, an information setting unit 29, an entry/exit determination unit (entry/exit determination means) 30, a positional relationship specifying unit (part specifying means, position specifying means) 31, and a movement vector calculation unit (movement vector specifying means) 32.
- each of the functional blocks (21 to 32) of the gesture recognition device control unit 11 can be realized by a CPU (central processing unit) reading a program stored in a storage device realized by a ROM (read only memory) or the like into a temporary storage unit realized by a RAM (random access memory) and executing it.
- the image acquisition unit 21 acquires an image captured by the camera 3 from the camera 3 via the gesture recognition device communication unit 13.
- the person determination unit 22 determines whether a person is captured in the image acquired by the image acquisition unit 21. For example, the person determination unit 22 performs human body detection or face detection on the image and, if it detects the whole body or a part of the human body such as a face (head), hand, or foot, determines that a person is captured. The person determination unit 22 may also specify the detection reliability of the whole body or the part as the detection result, and determine that a person is captured when the detection reliability is equal to or greater than a predetermined value.
- the person determination unit 22 determines that a person is imaged when the whole body of the person or a part of the person is imaged in the image acquired by the image acquisition unit 21.
- the person determination unit 22 may detect one or a plurality of people from one image.
- the person determination unit 22 may use any method as long as it can determine whether a person is captured in the acquired image.
- to detect the whole body or a part of the human body such as a face (head), hand, or foot, the person determination unit 22 may use the techniques disclosed in JP-A-2005-115932 and JP-A-2008-158790, the technique described in JP-A-2008-15641, a learning-type human body detection technique, or the like.
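- as a minimal sketch of this person determination step, the check below uses OpenCV's bundled Haar cascade face detector in place of the patent-cited techniques, which are not reproduced here; treating "at least one face found" as "a person is captured" is an assumption.

```python
import cv2

# Face detector standing in for the human body / face detection techniques
# cited above (this is an illustrative substitute, not the patent's method).
FACE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def person_is_imaged(image_bgr):
    """Return True if a face (a part of a human body) is detected in the image."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = FACE_CASCADE.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces) > 0
```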
- the person specifying unit 23 reads the person specifying information 41 from the gesture recognition device storage unit 12 and specifies the person detected by the person determining unit 22 based on the person specifying information 41.
- for example, when the person specifying information 41 includes information for individually identifying “Father”, “Mother”, and “Child”, the person specifying unit 23 may authenticate the whole-body image, face image, or the like detected by the person determination unit 22 and thereby identify the detected person. For example, the person specifying unit 23 may calculate the feature amount of the face image detected by the person determination unit 22, compare it with the feature amounts of the face images of “Father”, “Mother”, and “Child”, and identify the person whose feature amount is closest as the person in the detected face image.
- the person specifying unit 23 may also specify gender, age, race, and the like from the whole-body image, face image, or hand image detected by the person determination unit 22, and identify who the detected person is based on the specified results.
- similarly, the person specifying unit 23 may detect ornaments and the like from the whole-body image, face image, or hand image detected by the person determination unit 22, and identify the detected person based on the ornaments he or she is wearing.
- when part of the person is not shown in the image, the person may be identified after the missing portion is complemented. The person may also be identified after the image is mirrored.
- the person specifying unit 23 may not only identify the person detected by the person determination unit 22 as an individual but also specify attributes of the person such as gender, age, and race. Any method, including known techniques other than the above, may be used to identify individuals and attributes.
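- the feature-amount comparison described above can be sketched as a nearest-neighbour search over the registered users' feature amounts. The extractor that produces the vectors is left abstract; the three-dimensional vectors below are made-up stand-ins for real feature amounts.

```python
import numpy as np

def identify_person(face_features, person_info):
    """Return the registered user whose stored feature amount is closest."""
    best_user, best_dist = None, float("inf")
    for user, stored in person_info.items():
        dist = np.linalg.norm(face_features - stored)
        if dist < best_dist:
            best_user, best_dist = user, dist
    return best_user

# Made-up feature amounts standing in for the person specifying information 41:
person_info = {"Father": np.array([0.9, 0.1, 0.3]),
               "Mother": np.array([0.2, 0.8, 0.5]),
               "Child":  np.array([0.4, 0.4, 0.9])}
print(identify_person(np.array([0.85, 0.15, 0.35]), person_info))  # -> Father
```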
- the user determination unit 24 reads the user information 42 from the gesture recognition device storage unit 12 and determines whether or not the person specified by the person specifying unit 23 is an operation target person based on the user information 42. The user determination unit 24 determines whether or not the person specified by the person specifying unit 23 is an operation prohibited person based on the user information 42.
- the operation target person is a person who can execute the operation of the television 2 by a gesture. In other words, only the user set as the operation target person can perform the operation of the television 2 by the gesture.
- the operation-prohibited person is a person who is prohibited from operating the television 2 by a gesture.
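- the determination by the user determination unit 24 can be pictured as a simple lookup against the user information 42; holding the settings as two sets, as below, is an assumption about the data layout (the patent stores the information in the gesture recognition device storage unit 12).

```python
# Assumed in-memory form of the user information 42 from the example of FIG. 5.
OPERATION_TARGETS = {"Father"}
OPERATION_PROHIBITED = {"Child"}

def classify_user(name):
    """Classify a person as operation target, operation prohibited, or neither."""
    if name in OPERATION_PROHIBITED:
        return "prohibited"
    if name in OPERATION_TARGETS:
        return "target"
    return "neither"  # e.g. "Mother" in the example of FIG. 5

print(classify_user("Mother"))  # -> neither
```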
- after the user determination unit 24 determines that the person is an operation target person, the recognition target part determination unit 25 reads the gesture information 44 from the gesture recognition device storage unit 12 and determines whether the image in which the operation target person is captured includes the recognition target part of the gesture indicated by the gesture information 44.
- the recognition target part determination unit 25 may use any method as long as it can determine whether the gesture recognition target part is included in the acquired image.
- for example, the recognition target part determination unit 25 may fit a model to the imaged human body using the technique described in Japanese Patent Application Laid-Open No. 2004-326693 and determine whether the model contains the gesture recognition target part, thereby determining whether the acquired image includes the gesture recognition target part.
- alternatively, the recognition target part determination unit 25 may directly detect a gesture recognition target part such as a hand using the technique described in Japanese Patent Application Laid-Open No. 2007-52609 or the technique of Japanese Patent No. 43722051, and thereby determine whether the acquired image includes the gesture recognition target part.
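- as a rough stand-in for these detection techniques (which are cited, not reproduced, in this document), the sketch below checks for a sufficiently large skin-coloured region as a crude sign that a hand is in frame; the HSV thresholds and area ratio are illustrative assumptions.

```python
import cv2

def hand_region_present(image_bgr, min_area_ratio=0.01):
    """Return True if a sufficiently large skin-coloured region is in frame."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    skin = cv2.inRange(hsv, (0, 30, 60), (20, 150, 255))  # rough skin range
    area = cv2.countNonZero(skin)
    return area >= min_area_ratio * image_bgr.shape[0] * image_bgr.shape[1]
```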
- the timing determination unit 26 reads the operation timing information 43 from the gesture recognition device storage unit 12 and determines whether it is an operation timing at which the user desires an operation by gesture, based on whether the processing state of the television 2 or the gesture recognition device 1 satisfies a processing condition indicated by the operation timing information 43, or whether the part captured in the image satisfies an imaged part condition indicated by the operation timing information 43.
- specifically, the timing determination unit 26 determines whether it is the operation timing based on the current or past processing of the television 2 or the gesture recognition device 1. More specifically, the timing determination unit 26 reads the operation timing information 43 and the gesture recognition device processing history information 45 from the gesture recognition device storage unit 12, acquires from the television control unit 51 information indicating the current processing content of the television 2 and the television processing history information 61 indicating past processing content, and determines whether the current time corresponds to a processing condition indicated by the operation timing information 43.
- the timing determination unit 26 also reads the operation timing information 43 and the gesture information 44 from the gesture recognition device storage unit 12, and determines that it is the operation timing if a part near the gesture recognition target part indicated by the gesture information 44 (target vicinity part) is captured in the image. Likewise, it determines that it is the operation timing if a part that moves in conjunction with the gesture recognition target part during the gesture indicated by the gesture information 44 (target interlocking part) is detected from the image performing a predetermined motion.
- when the user determination unit 24 determines that the person being imaged is neither an operation target person nor an operation prohibited person, the timing determination unit 26 may likewise read the operation timing information 43 and the gesture information 44 from the gesture recognition device storage unit 12 and determine whether it is the operation timing.
- the timing determination unit 26 may use any technique as long as it can determine whether the target vicinity part is captured in the image; for example, it may use the same technique as the recognition target part determination unit 25.
- similarly, the timing determination unit 26 may use any technique as long as it can detect from the image that the target interlocking part is performing a predetermined motion. For example, it may obtain edge points of the target interlocking part using an edge operator or SIFT, and determine that the target interlocking part is performing the predetermined motion when the obtained edge points change with time.
- alternatively, the timing determination unit 26 may determine whether the target interlocking part is performing a predetermined motion using the technique described in Japanese Patent Application Laid-Open No. 2004-280148.
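- the edge-point test mentioned above might look like the following sketch, which compares edge maps of the target interlocking part's region across two frames; Canny stands in for the unspecified edge operator, and the change threshold is an assumption.

```python
import cv2

def interlocking_part_moving(prev_roi_gray, curr_roi_gray, change_thresh=0.02):
    """Return True if the edge map of the region changed enough between frames."""
    edges_prev = cv2.Canny(prev_roi_gray, 100, 200)
    edges_curr = cv2.Canny(curr_roi_gray, 100, 200)
    changed = cv2.countNonZero(cv2.absdiff(edges_prev, edges_curr))
    return changed >= change_thresh * edges_prev.size
```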
- the output unit 27 outputs to the television control unit 51 instruction signals that instruct notification of predetermined information or erasure of such a notification.
- specifically, when the person determination unit 22 determines that a person is captured, the user determination unit 24 determines that the person is an operation target person, the recognition target part determination unit 25 determines that the gesture recognition target part is not captured, and the timing determination unit 26 determines that it is the operation timing, the output unit 27 outputs to the television control unit 51 a target part non-view angle notification instruction signal (target part non-view angle notification instruction information) instructing a notification that the gesture recognition target part is not being imaged.
- also, when the person determination unit 22 determines that a person is captured, the user determination unit 24 determines that the person is neither an operation target person nor an operation prohibited person, and the timing determination unit 26 determines that it is the operation timing, the output unit 27 outputs to the television control unit 51 a non-operation target person notification instruction signal (non-operation target person notification instruction information) instructing a notification that the person is not an operation target person.
- further, when the person determination unit 22 determines that a person is captured, the user determination unit 24 determines that the person is an operation target person, the recognition target part determination unit 25 determines that the gesture recognition target part is captured, and the television 2 is displaying a notification that the gesture was not imaged or that the person is not an operation target person, the output unit 27 outputs to the television control unit 51 a notification erasure instruction signal instructing erasure of the notification.
- the output unit 27 also outputs to the television control unit 51 the operation signal generated by the gesture recognition execution unit 28, as well as a gesture recognition error notification instruction signal instructing notification of the cause of an error identified by the gesture recognition execution unit 28. In particular, when the cause of the error is that the gesture recognition target part could not be detected, the output unit 27 outputs to the television control unit 51 a target part non-view angle notification instruction signal instructing a notification that the gesture recognition target part is outside the angle of view. Furthermore, when failure to detect the gesture recognition target part is the cause of the error and the entry/exit determination unit 30 determines that the gesture recognition target part is moving in and out of the angle of view, the output unit 27 outputs to the television control unit 51 a target part in/out notification instruction signal (target part in/out notification instruction information) instructing a notification that the gesture recognition target part is moving in and out of the angle of view.
- instead of the target part non-view angle notification instruction signal or the target part in/out notification instruction signal, the output unit 27 may output to the television control unit 51 a movement vector notification instruction signal instructing a notification that urges the user to move the gesture recognition target part in the direction specified by the movement vector calculation unit 32 by the distance calculated by the movement vector calculation unit 32, or a direction notification instruction signal (direction notification instruction information) instructing a notification that urges movement of the gesture recognition target part in the direction specified by the movement vector calculation unit 32.
- likewise, instead of the target part non-view angle notification instruction signal or the target part in/out notification instruction signal, the output unit 27 may output to the television control unit 51 a gesture position notification instruction signal instructing a notification that prompts the user to perform the gesture in front of the reference part specified by the positional relationship specifying unit 31. For example, when the reference part is the user's face, the output unit 27 outputs a signal instructing a notification that prompts the user to perform the gesture in front of the face.
- the gesture recognition execution unit 28 executes gesture recognition processing when the person determination unit 22 determines that a person is captured, the user determination unit 24 determines that the person is an operation target person, and the recognition target part determination unit 25 determines that the gesture recognition target part is captured. Specifically, when starting the gesture recognition processing, the gesture recognition execution unit 28 acquires a plurality of images arranged in time series from the image acquisition unit 21, reads the gesture information 44 from the gesture recognition device storage unit 12, and detects the gesture indicated by the gesture information 44 from the acquired images.
- when the gesture recognition execution unit 28 detects a predetermined gesture, it generates the operation signal associated with the detected gesture in the gesture information 44. When the gesture cannot be detected, it may instead identify the cause of the error.
- for example, whether failure to detect the gesture recognition target part is the cause of the error may be determined based on the determination results of the recognition target part determination unit 25 for the plurality of images. In this case, the recognition target part determination unit 25 determines, for each of the time-series images used by the gesture recognition execution unit 28 for gesture recognition, whether the gesture recognition target part is captured in that image.
- the gesture recognition execution unit 28 may also identify other error causes when the gesture cannot be detected for reasons other than failure to detect the gesture recognition target part.
- the gesture recognition execution unit 28 generates the gesture recognition device processing history information 45 by associating the date and time when the gesture recognition processing was executed, the generated operation signal or gesture recognition error signal, and the operation target person, and stores it in the gesture recognition device storage unit 12.
- the information setting unit 29 sets the person specifying information 41, the user information 42, the operation timing information 43, and the gesture information 44. Specifically, the information setting unit 29 acquires an operation signal input to the operation unit 54 from the television control unit 51 or an operation signal from the gesture recognition execution unit 28 based on an instruction from the user, Each information is generated or updated based on the acquired operation signal.
- for example, the information setting unit 29 may refer to the gesture recognition device processing history information 45, set a user who has performed operations by gesture a predetermined number of times as an operation target person, and update the user information 42 accordingly. The information setting unit 29 may also refer to the latest processing history and set only the person who most recently performed an operation by gesture as the operation target person for a predetermined period, updating the user information 42.
- the information setting unit 29 may also refer to the gesture recognition device processing history information 45 and set one or more persons who performed an operation by gesture within a predetermined period as operation target persons for a predetermined period, updating the user information 42. For example, suppose that when user X performs an operation by gesture, an operation waiting time of 5 minutes is set, and user Y also performs an operation by gesture within that waiting time. In this case, the information setting unit 29 sets user X and user Y as operation target persons for a predetermined time (for example, 15 minutes) after the end of the operation waiting time.
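- the waiting-time rule in this example can be sketched as below; whether a second gesture extends the waiting window is not stated in the text, so treating each operation as restarting the wait is an assumption, as are the data shapes.

```python
from datetime import timedelta

WAITING_TIME = timedelta(minutes=5)    # operation waiting time from the example
TARGET_PERIOD = timedelta(minutes=15)  # how long the users remain targets

def current_operation_targets(history, now):
    """history: list of (user, operated_at) tuples, oldest first."""
    targets, window_end = set(), None
    for user, t in history:
        if window_end is not None and t > window_end:
            targets = set()            # previous waiting window expired
        targets.add(user)
        window_end = t + WAITING_TIME  # assumption: each operation restarts the wait
    if window_end is None or now > window_end + TARGET_PERIOD:
        return set()
    return targets
```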
- the information setting unit 29 may cancel the setting of an operation target person or operation prohibited person after a predetermined period has elapsed since the setting was made. That is, the information setting unit 29 may determine a period during which the setting is valid, treat the predetermined user as an operation target person or operation prohibited person within that period, and cancel the setting when the period ends. The information setting unit 29 may also cancel the setting based on an operation signal input to the operation unit 54 and acquired from the television control unit 51, or an operation signal from the gesture recognition execution unit 28 based on the user's instruction.
- further, the information setting unit 29 may acquire the television processing history information 61 from the television control unit 51 and read the gesture recognition device processing history information 45 from the gesture recognition device storage unit 12, and update the operation timing information 43 by learning, from these histories, the operation timings at which the user desires an operation by gesture.
- the entry/exit determination unit 30 determines whether the gesture recognition target part is moving in and out of the angle of view, based on the determination results executed by the recognition target part determination unit 25 on a plurality of images arranged in time series. In this case, the recognition target part determination unit 25 determines, for each of the time-series images, whether the gesture recognition target part is captured.
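- a minimal sketch of this in/out determination, assuming the per-image results of the recognition target part determination unit 25 are available as booleans; the transition-count threshold is an assumption.

```python
def target_part_in_and_out(per_frame_detected, min_transitions=2):
    """True if the target part repeatedly crossed the edge of the view."""
    transitions = sum(a != b for a, b in
                      zip(per_frame_detected, per_frame_detected[1:]))
    return transitions >= min_transitions

print(target_part_in_and_out([True, False, True, False, True]))  # -> True
```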
- when the timing determination unit 26 determines that it is the operation timing, the positional relationship specifying unit 31 specifies the position (reference position) of an arbitrary part (reference part) captured in the image. The positional relationship specifying unit 31 then specifies the position (target part position) of the gesture recognition target part outside the angle of view relative to the reference part.
- the reference part may be any part as long as it is captured in the image; it may be the target vicinity part, the target interlocking part, or another part.
- the movement vector calculation unit 32 specifies a direction from the target part position to the reference position and calculates a distance from the target part position to the reference position.
- the movement vector calculation unit 32 may specify only the direction from the target part position toward the reference position.
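- in plain 2-D image coordinates (an assumption; the patent does not fix a coordinate system), the calculation reduces to a direction vector and a Euclidean distance:

```python
import numpy as np

def movement_vector(target_part_pos, reference_pos):
    """Return (unit_direction, distance) from the target part to the reference."""
    v = np.asarray(reference_pos, float) - np.asarray(target_part_pos, float)
    dist = float(np.linalg.norm(v))
    direction = v / dist if dist > 0 else v
    return direction, dist

# Example: hand position estimated below the frame, face used as the reference part.
direction, dist = movement_vector((640, 900), (640, 400))
print(direction, dist)  # -> [ 0. -1.] 500.0
```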
- the gesture recognition device storage unit 12 stores programs, data, and the like referred to by the gesture recognition device control unit 11; for example, it stores the person specifying information 41, the user information 42, the operation timing information 43, the gesture information 44, and the gesture recognition device processing history information 45.
- FIG. 4 is a diagram illustrating an example of the person identification information 41 stored in the gesture recognition device storage unit 12.
- FIG. 5 is a diagram illustrating an example of the user information 42 stored in the gesture recognition device storage unit 12.
- FIG. 6 is a diagram illustrating an example of the operation timing information 43 stored in the gesture recognition device storage unit 12.
- FIG. 7 is a diagram illustrating an example of the gesture information 44 stored in the gesture recognition device storage unit 12.
- FIG. 8 is a diagram illustrating an example of the gesture recognition device processing history information 45 stored in the gesture recognition device storage unit 12.
- the person specifying information 41 is information in which a user name is associated with individual specifying information for individually specifying the user and personal attribute information indicating the attribute of the user.
- the “feature amount” is extracted from the human body image and / or face image of the user, for example.
- the person specifying unit 23 extracts a feature amount from the human body image or face image detected by the person determination unit 22, finds the matching or closest “feature amount” in the person specifying information 41, and thereby identifies the user individually.
- the personal attribute information includes “sex”, “height”, “age”, “race”, and “decoration”.
- “Gender” indicates the gender of the user. From the human body image or face image detected by the person determination unit 22, the person specifying unit 23 specifies the user's gender based on features such as lip color, skin color, eyebrow shape, the presence or absence of makeup, nail polish, or wrinkles, hairstyle, clothing, and footwear (for example, long hair or polished nails suggest a female). The person specifying unit 23 then determines which user corresponds to the specified gender. In the illustrated example, if the user can be determined to be female, that user can be identified as “Mother”, since the only female user is “Mother”.
- the person specifying unit 23 specifies the height of a person shown in the human body image or the face image detected by the person determining unit 22, and determines which user corresponds to the specified height.
- age indicates the age of the user.
- the person specifying unit 23 specifies the age of the person shown in the human body image or the face image detected by the person determining unit 22, and determines which user corresponds to the specified age.
- race indicates the race of the user.
- the person specifying unit 23 identifies the person's race based on eye color, skin color, hair color, and the like from the human body image or face image detected by the person determination unit 22, and determines which user corresponds to the identified race.
- the “decoration” indicates the ornament worn by the user.
- the person specifying unit 23 detects ornaments such as glasses, rings, and watches from the human body image or face image detected by the person determination unit 22, and determines which user the person wearing the detected ornaments corresponds to.
- information such as “body shape” and “hairstyle” may be included in the person specifying information 41 as personal attribute information.
- the person specifying information 41 includes six types of information. However, the person specifying information 41 only needs to include at least one type of individual specifying information or personal attribute information.
- the user information 42 is information indicating which user is set as an operation target person or an operation prohibited person.
- “Father” is set as the operation target person
- “Child” is set as the operation prohibited person.
- here, the operation target person and the operation prohibited person are set as individual users, but the setting is not limited to this. For example, they may be set by user attribute: “female” may be set as the operation target, or “underage” as operation prohibited.
- in this example, only a user set as an operation target person can perform operations by gesture, but the present invention is not limited to this. The television may be set so that anyone can freely operate it by gesture; in that case, the operation target person may be set to “all users”.
- the operation target person and the operation prohibited person may be set in advance by a user instruction or default.
- the operation timing information 43 is information indicating a condition (condition to be regarded as an operation timing) for determining whether or not the operation timing at which the user desires an operation by a gesture is satisfied. As shown in FIG. 6, the operation timing information 43 includes a processing condition and an imaging part condition.
- immediately after the electronic device performs a predetermined process, and again after a predetermined period has elapsed since the electronic device performed the predetermined process, there is considered to be a high possibility that the user desires an operation by gesture. Therefore, the processing conditions shown in FIG. 6 are set as conditions for determining whether it is the operation timing.
- specifically, the timing determination unit 26 reads the gesture recognition device processing history information 45 from the gesture recognition device storage unit 12 and determines whether the current time is within 5 minutes of an operation by gesture, or within 5 minutes after 20 minutes have elapsed since an operation by gesture. The timing determination unit 26 also acquires the television processing history information 61 from the television control unit 51 and determines whether the current time is within 5 minutes of the power-on operation, within 1 minute of a volume change operation, or within 5 minutes after 1 hour has elapsed since a channel change operation; if any of these applies, it determines that it is the operation timing. The timing determination unit 26 further acquires information indicating the current processing content of the television 2 from the television control unit 51, and determines that it is the operation timing if the television 2 is currently displaying a commercial.
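- a hedged sketch of these processing-condition checks is given below, reusing the record format assumed in the earlier history sketch; only the television-side conditions are shown, and the rule table is an illustrative reading of FIG. 6.

```python
from datetime import datetime, timedelta

def is_operation_timing(tv_history, now, showing_commercial=False):
    """Processing-condition part of the operation timing determination."""
    if showing_commercial:
        return True
    rules = {  # processing content -> (delay after the event, window length)
        "power ON":       (timedelta(0),       timedelta(minutes=5)),
        "volume change":  (timedelta(0),       timedelta(minutes=1)),
        "channel change": (timedelta(hours=1), timedelta(minutes=5)),
    }
    for record in tv_history:
        rule = rules.get(record["processing_content"])
        if rule:
            start = record["processed_at"] + rule[0]
            if start <= now <= start + rule[1]:
                return True
    return False
```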
- the imaging region condition shown in FIG. 6 is set as a condition for determining whether or not the operation timing is met.
- the timing determination unit 26 determines that it is the operation timing if any of the conditions included in the operation timing information 43 is satisfied, but the present invention is not limited to this.
- the time determination unit 26 may determine that it is the operation timing if a predetermined number of conditions or more included in the operation timing information 43 are met.
- the time determination unit 26 may set the reliability for each condition, and may determine that it is the operation timing when the total value of the reliability is a predetermined value or more.
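- the reliability-based variant might be sketched as follows; the condition names, weights, and threshold are all made-up illustrations.

```python
def is_operation_timing_weighted(satisfied, reliabilities, threshold=1.0):
    """satisfied: condition -> bool; reliabilities: condition -> weight."""
    total = sum(reliabilities[c] for c, ok in satisfied.items() if ok)
    return total >= threshold

print(is_operation_timing_weighted(
    {"recent_power_on": True, "commercial": False, "near_part_visible": True},
    {"recent_power_on": 0.6, "commercial": 0.9, "near_part_visible": 0.5}))  # -> True
```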
- each condition included in the operation timing information 43 may also be associated with a particular user, so that the timing determination unit 26 can determine the operation timing according to each user's usage pattern.
- further conditions, such as a specific time zone or day of the week, may be added to each condition of the operation timing information 43.
- a periodic condition may be set as the operation timing.
- the operation timing varies depending on the characteristics (attributes) of the electronic device and the usage mode of the operator. Therefore, it is desirable that the operation timing is appropriately set or learned according to the characteristics (attributes) of the electronic device and the usage mode of the operator.
- the gesture information 44 is information in which a gesture and an operation signal are associated with each other.
- in the illustrated example, the gesture recognition execution unit 28 generates the operation signal “S01” when it detects, from the acquired images, the operation target person making the gesture of raising only the index finger of the right hand.
- the operation signal “S01” is a signal for controlling power ON/OFF of the television 2, the operation signal “S02” is a signal for controlling channel change of the television 2, and the operation signal “S03” is a signal for controlling volume change of the television 2.
- the gesture information 44 also associates each gesture with its recognition target part, a target vicinity part that is a part near the recognition target part, a target interlocking part that moves in conjunction with the gesture, and the predetermined motion of that part.
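- the gesture information 44 can be pictured as a lookup table from gestures to operation signals. Only “S01” has its gesture spelled out in the text; the other two gesture names below are placeholders.

```python
# Sketch of gesture information 44 as a lookup table. The signal meanings
# follow the text above; the gestures for S02 and S03 are placeholders.
GESTURE_INFO = {
    "raise only the right index finger": "S01",  # power ON/OFF
    "placeholder gesture A":             "S02",  # channel change
    "placeholder gesture B":             "S03",  # volume change
}

def operation_signal_for(detected_gesture):
    return GESTURE_INFO.get(detected_gesture)  # None if the gesture is unknown
```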
- the gesture recognition device processing history information 45 is information associating the operation signal generated by the gesture recognition execution unit 28, the processing date and time when the gesture recognition execution unit 28 executed the gesture recognition processing, and the operation target person who performed the recognized gesture.
- in this embodiment, the television 2 includes the gesture recognition device 1, but the present invention is not limited to this, and the television 2 itself may have all the functions of the gesture recognition device 1. In that case, the television control unit 51 may have the functions of the gesture recognition device control unit 11, the television storage unit 52 may store the information held in the gesture recognition device storage unit 12, and the television communication unit 53 may have the functions of the gesture recognition device communication unit 13.
- FIG. 9 is a flowchart illustrating an example of processing executed by the gesture recognition device 1.
- FIGS. 10, 13, 14, and 16 are diagrams illustrating images captured by the camera 3.
- it is assumed that the information shown in FIG. 5 is stored in the gesture recognition device storage unit 12 as the user information 42; that is, the operation target person is “Father”, the operation prohibited person is “Child”, and “Mother” is neither an operation target person nor an operation prohibited person.
- the image acquisition unit 21 acquires an image captured by the camera 3 from the camera 3 via the gesture recognition device communication unit 13 (S1).
- the person determination unit 22 determines whether a person is captured in the image acquired by the image acquisition unit 21 (S2).
- the person determination unit 22 detects a person from the image and determines that the person is captured.
- next, the person specifying unit 23 reads the person specifying information 41 from the gesture recognition device storage unit 12 and, based on the person specifying information 41, identifies the person detected by the person determination unit 22 (S3).
- the person specifying unit 23 specifies the person detected by the person determining unit 22 as “father”.
- the user determination unit 24 reads the user information 42 from the gesture recognition device storage unit 12, and determines whether the person specified by the person specifying unit 23 is an operation target person based on the user information 42 (S4). .
- the user determination unit 24 determines that it is the operation target person (YES in S4).
- since “Father” is an operation target person, the recognition target part determination unit 25 then determines whether the gesture recognition target part is captured in the image in which the operation target person (here, “Father”) is captured (S5).
- the recognition target part determination unit 25 determines that the gesture recognition target part is not imaged.
- here, the gesture recognition target part is not imaged, so the timing determination unit 26 executes operation timing determination process A (S6) and determines whether the current time is the gesture operation timing (S7).
- if it is the operation timing (YES in S7), the output unit 27 outputs to the television control unit 51 a target part non-view angle notification instruction signal instructing a notification that the gesture recognition target part is not being imaged (S8).
- if it is not the operation timing (NO in S7), the output unit 27 outputs nothing and the process returns to S1.
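- tying S1 to S8 together, a single pass over one captured image could look like the sketch below. It reuses the illustrative helpers from the earlier sketches (person_is_imaged, identify_person, classify_user, hand_region_present, is_operation_timing) and stubs out the remaining units; none of this is an API defined by the patent.

```python
from datetime import datetime

def extract_face_features(image):    # stub for a real feature extractor (S3)
    ...

def run_gesture_recognition(image):  # stub for the gesture recognition process (S11)
    ...

def notify(message):                 # stub for the notification via the television (S8)
    print(message)

def process_frame(image):
    """One pass of the S1-S8 flow for a single captured image."""
    if not person_is_imaged(image):                                    # S2
        return
    name = identify_person(extract_face_features(image), person_info)  # S3
    if classify_user(name) != "target":                                # S4
        return
    if hand_region_present(image):                                     # S5: part in frame
        run_gesture_recognition(image)                                 # proceed to S11
    elif is_operation_timing(television_processing_history, datetime.now()):  # S6-S7
        notify("Your hand is not visible")                             # S8
```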
- FIG. 11 is a flowchart illustrating an example of the operation timing determination process A.
- the timing determination unit 26 reads the operation timing information 43 and the gesture information 44 from the gesture recognition device storage unit 12 and determines whether or not the region near the target indicated by the gesture information 44 is captured in the image. (S21). Further, the timing determination unit 26 reads the operation timing information 43 and the gesture information 44 from the gesture recognition device storage unit 12, and determines whether or not the target interlocking part indicated by the gesture information 44 is performing a predetermined operation (S22). . Further, the timing determination unit 26 reads the operation timing information 43 from the gesture recognition device storage unit 12, and the current or past processing state of the television 2 or the gesture recognition device 1 corresponds to one of the processing conditions indicated by the operation timing information 43. It is determined whether or not to perform (S23).
- if the timing determination unit 26 determines in S21 to S23 that the conditions regarded as the operation timing are met (YES in S21, YES in S22, and YES in S23), it determines that the current time is the gesture operation timing (S24). On the other hand, if any of the determinations in S21 to S23 fails (NO in any of S21 to S23), it determines that the current time is not the gesture operation timing (S24).
- the timing determination unit 26 may determine whether it is the operation timing by performing at least one of the three determination processes S21 to S23, and may determine that it is the operation timing when at least one of the performed determinations is affirmative.
- in S8, instead of outputting the target part non-view angle notification instruction signal, the output unit 27 may perform a movement vector notification process that notifies the user of the movement direction and movement amount required for the gesture recognition target part to enter the angle of view, and output a movement vector notification instruction signal.
- the movement vector notification process executed in place of S8 in the case of YES in S7 will be described based on FIG.
- FIG. 12 is a flowchart illustrating an example of the movement vector notification process.
- first, the positional relationship specifying unit 31 specifies the position (reference position) of an arbitrary part (reference part) captured in the image (S81). Then, the positional relationship specifying unit 31 specifies the position (target part position) of the gesture recognition target part outside the angle of view relative to the reference part (S82).
- the movement vector calculation unit 32 specifies a direction (movement direction) from the target part position toward the reference position, and calculates a distance (movement distance) from the target part position to the reference position (S83).
- the output unit 27 then outputs to the television control unit 51 a vector notification instruction signal instructing a notification that urges the user to move the gesture recognition target part in the direction specified by the movement vector calculation unit 32 by the distance it calculated. For example, when the output unit 27 outputs the vector notification instruction signal to the television control unit 51, the display unit 55 of the television 2 may display a message such as “Please wave your right hand 30 cm lower”.
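- continuing the movement-vector sketch, the computed direction and distance could be turned into a message like the one quoted above; the pixel-to-centimetre factor and the wording are made-up calibration and phrasing.

```python
PX_PER_CM = 10.0  # made-up calibration from image pixels to centimetres

def movement_message(direction, dist_px, part="right hand"):
    """Compose a notification such as 'Please wave your right hand 30 cm lower'."""
    dx, dy = direction
    horizontal = "to the right" if dx > 0 else "to the left"
    vertical = "lower" if dy > 0 else "higher"  # image y axis points downward
    word = vertical if abs(dy) >= abs(dx) else horizontal
    return f"Please wave your {part} {dist_px / PX_PER_CM:.0f} cm {word}"

print(movement_message((0.0, 1.0), 300))  # -> Please wave your right hand 30 cm lower
```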
- the output unit 27 may also output to the television control unit 51 a gesture position notification instruction signal instructing a notification that prompts the user to perform the gesture in front of the reference part specified by the positional relationship specifying unit 31, instead of the target part non-view angle notification instruction signal.
- for example, when the positional relationship specifying unit 31 uses the user's “face” captured in the image as the reference part, the output unit 27 may output a signal instructing a notification that prompts the user to perform the gesture in front of the face.
- specifically, the positional relationship specifying unit 31 specifies an arbitrary part (reference part) captured in the image, and the output unit 27 outputs to the television control unit 51 a gesture position notification instruction signal instructing a notification that prompts the user to perform the gesture in front of the specified reference part.
- the image acquisition unit 21 acquires an image captured by the camera 3 from the camera 3 via the gesture recognition device communication unit 13 (S1).
- the person determination unit 22 determines whether a person is captured in the image acquired by the image acquisition unit 21 (S2).
- here, since "father" is captured in the image, the person determination unit 22 detects the person from the image and determines that a person is captured (YES in S2).
- next, the person specifying unit 23 reads the person specifying information 41 from the gesture recognition device storage unit 12 and, based on the person specifying information 41, identifies the person detected by the person determination unit 22 (S3). Here, the person specifying unit 23 identifies the detected person as "father".
- the user determination unit 24 then reads the user information 42 from the gesture recognition device storage unit 12 and determines, based on the user information 42, whether the person identified by the person specifying unit 23 is an operation target person (S4). Since "father" is the operation target person, the user determination unit 24 determines that the person is an operation target person (YES in S4).
- next, the recognition target part determination unit 25 determines whether the gesture recognition target part is captured in the image in which the operation target person (here, "father") is captured (S5). In this case, the recognition target part determination unit 25 determines that the gesture recognition target part is captured (YES in S5).
- if a notification is currently being displayed, the output unit 27 outputs to the television control unit 51 a notification erasure instruction signal instructing erasure of the notification (S10), and the gesture recognition execution unit 28 executes the gesture recognition process (S11). If no notification is displayed, the output unit 27 outputs nothing and the gesture recognition execution unit 28 executes the gesture recognition process (S11).
- the image acquisition unit 21 acquires an image captured by the camera 3 from the camera 3 via the gesture recognition device communication unit 13 (S1).
- the person determination unit 22 determines whether a person is captured in the image acquired by the image acquisition unit 21 (S2).
- here, since "mother" is captured in the image, the person determination unit 22 detects the person from the image and determines that a person is captured (YES in S2).
- next, the person specifying unit 23 reads the person specifying information 41 from the gesture recognition device storage unit 12 and, based on the person specifying information 41, identifies the person detected by the person determination unit 22 (S3). Here, the person specifying unit 23 identifies the detected person as "mother".
- the user determination unit 24 then reads the user information 42 from the gesture recognition device storage unit 12 and determines, based on the user information 42, whether the person identified by the person specifying unit 23 is an operation target person (S4). Since "mother" is not the operation target person, the user determination unit 24 determines that she is not an operation target person (NO in S4).
- the user determination unit 24 next determines, based on the user information 42, whether the person identified by the person specifying unit 23 is an operation-prohibited person (S12). Since "mother" is not an operation-prohibited person either, the user determination unit 24 determines that she is not an operation-prohibited person (NO in S12).
- the timing determination unit 26 then executes the operation timing determination process B (S13) and determines whether the current time is the operation timing of the gesture (S14).
- if it is determined to be the operation timing (YES in S14), the output unit 27 outputs to the television control unit 51 a non-operation-target-person notification instruction signal instructing a notification that the person is not an operation target person (S15). Otherwise (NO in S14), the output unit 27 outputs nothing and the process returns to S1.
- FIG. 15 is a flowchart illustrating an example of the operation timing determination process B.
- first, the timing determination unit 26 reads the operation timing information 43 and the gesture information 44 from the gesture recognition device storage unit 12 and executes, on the image, a detection process for the gesture indicated by the gesture information 44 (S31).
- when the gesture is detected (YES in S31), the timing determination unit 26 reads the operation timing information 43 from the gesture recognition device storage unit 12 and determines whether the current or past processing state of the television 2 or the gesture recognition device 1 satisfies any of the processing conditions indicated by the operation timing information 43 (S35). If it does (YES in S35), the timing determination unit 26 determines that the current time is the operation timing of the gesture (S36); otherwise (NO in S35), it determines that the current time is not the operation timing of the gesture (S37).
- when the gesture is not detected (NO in S31), the timing determination unit 26 reads the operation timing information 43 and the gesture information 44 from the gesture recognition device storage unit 12 and determines whether the gesture recognition target part indicated by the gesture information 44 is captured in the image (S32). When the gesture recognition target part is captured (YES in S32), the timing determination unit 26 again performs the determination of S35 described above and, depending on the result, proceeds to S36 or S37.
- when the gesture recognition target part is not captured (NO in S32), the timing determination unit 26 reads the operation timing information 43 and the gesture information 44 from the gesture recognition device storage unit 12 and determines whether the target vicinity part indicated by the gesture information 44 is captured in the image (S33), whether the target interlocking part indicated by the gesture information 44 is performing a predetermined motion (S34), and whether the current or past processing state of the television 2 or the gesture recognition device 1 satisfies any of the processing conditions indicated by the operation timing information 43 (S35). If all of S33 to S35 are satisfied (YES in S33, S34, and S35), the timing determination unit 26 determines that the current time is the operation timing of the gesture (S36); otherwise (NO in any of S33 to S35), it determines that the current time is not the operation timing of the gesture (S37).
- in the above description, the determination process of S35 is performed after YES in S31 or YES in S32 to decide whether it is the operation timing, but this is not restrictive; the timing determination unit 26 may unconditionally determine that the current time is the operation timing when the result of S31 or S32 is YES.
- also, the timing determination unit 26 may decide whether it is the operation timing by performing only some of the three determination processes of S33 to S35. In the example shown in FIG. 15, it is determined to be the operation timing when all three determination processes of S33 to S35 are satisfied, but this is not restrictive; for example, the timing determination unit 26 may determine that it is the operation timing when at least one of the determination processes is satisfied.
- the image acquisition unit 21 acquires an image captured by the camera 3 from the camera 3 via the gesture recognition device communication unit 13 (S1).
- the person determination unit 22 determines whether a person is captured in the image acquired by the image acquisition unit 21 (S2).
- here, since "child" is captured in the image, the person determination unit 22 detects the person from the image and determines that a person is captured (YES in S2).
- next, the person specifying unit 23 reads the person specifying information 41 from the gesture recognition device storage unit 12 and, based on the person specifying information 41, identifies the person detected by the person determination unit 22 (S3). Here, the person specifying unit 23 identifies the detected person as "child".
- the user determination unit 24 then reads the user information 42 from the gesture recognition device storage unit 12 and determines, based on the user information 42, whether the person identified by the person specifying unit 23 is an operation target person (S4). Since "child" is not the operation target person, the user determination unit 24 determines that the person is not an operation target person (NO in S4).
- the user determination unit 24 next determines, based on the user information 42, whether the person identified by the person specifying unit 23 is an operation-prohibited person (S12). Since "child" is set as an operation-prohibited person, the user determination unit 24 determines that the person is an operation-prohibited person (YES in S12). In this case, the output unit 27 outputs nothing and the process returns to S1.
- in the above description, the gesture recognition process is executed after determining whether the person is an operation target person and whether the gesture recognition target part is captured, but the order is not limited to this. For example, the gesture recognition execution unit 28 may first execute the gesture recognition process on the image acquired by the image acquisition unit 21, and only when gesture recognition fails may the processes of S4, S6 to S8, and S12 to S15 described above be performed.
- FIG. 17 is a flowchart illustrating another example of processing executed by the gesture recognition device 1.
- when the gesture recognition execution unit 28 starts the gesture recognition process (S41), it acquires a plurality of images arranged in time series from the image acquisition unit 21 (S42). After acquiring the images, the gesture recognition execution unit 28 reads the gesture information 44 from the gesture recognition device storage unit 12 and attempts to detect the gesture indicated by the gesture information 44 from the acquired images (S43).
- when a predetermined gesture is detected (YES in S44), the gesture recognition execution unit 28 generates the operation signal associated with the detected gesture in the gesture information 44 (S45). The output unit 27 then outputs the operation signal generated by the gesture recognition execution unit 28 to the television control unit 51.
- when the gesture cannot be detected (NO in S44), the gesture recognition execution unit 28 determines, based on the determination results of the recognition target part determination unit 25 for the plurality of images, whether the cause of the failure (error cause) is that the gesture recognition target part was not detected (S47).
- when the cause is that the gesture recognition target part was not detected (YES in S47), the entry/exit determination unit 30 determines, based on the determination results of the recognition target part determination unit 25 for the plurality of images, whether the gesture recognition target part is moving in and out of the angle of view (S48).
- when the entry/exit determination unit 30 determines that the gesture recognition target part is moving in and out of the angle of view (YES in S48), the output unit 27 outputs to the television control unit 51 a target part entry/exit notification instruction signal instructing a notification that the gesture recognition target part is moving in and out of the angle of view (S49).
- otherwise (NO in S48), the output unit 27 outputs to the television control unit 51 a target part non-view-angle notification instruction signal instructing a notification that the gesture recognition target part is out of the angle of view (S50).
- if it is determined in S47 that the cause is not a failure to detect the gesture recognition target part, that is, if the gesture could not be detected for some other reason (NO in S47), the gesture recognition execution unit 28 identifies the error cause. The output unit 27 then outputs to the television control unit 51 a gesture recognition error notification instruction signal instructing notification of the error cause identified by the gesture recognition execution unit 28 (S51).
- note that the timing determination unit 26 may determine whether it is the operation timing, and the output unit 27 may output each of the above signals only when the timing determination unit 26 determines that it is the operation timing.
- the output unit 27 may also output a vector notification instruction signal together with, or instead of, the target part entry/exit notification instruction signal or the target part non-view-angle notification instruction signal.
- in this modification, when a woman who is a member makes a gesture but the gesture recognition target part is not captured, the security gate notifies her that her gesture is not being captured.
- when a woman who is not a member is captured, the security gate notifies her that she is not an operation target person, that is, that she is not a member.
- at this time, the security gate may also notify her of the information necessary to become a member.
- when a man is captured, the security gate notifies nothing. In a security gate or the like, it is undesirable from a security standpoint to inform anyone other than authorized persons (operation target persons) and persons who can become authorized (here, women) that an error has occurred in a gesture operation.
- however, even when the captured person is determined to be a man, the security gate may notify him once that he is not a member.
- as described above, a gesture recognition device according to the present invention recognizes a gesture, which is a movement and/or shape of a person's gesture recognition target part, from an image captured by a camera, and outputs information for controlling an electronic device to the electronic device based on the recognized gesture. The gesture recognition device includes a recognition target part determination unit that determines whether the gesture recognition target part is included in the image, and an output unit that outputs target part non-view-angle notification instruction information instructing a notification that the gesture recognition target part is not captured when the recognition target part determination unit determines that the gesture recognition target part is not included in the image.
- similarly, a method for controlling a gesture recognition device according to the present invention recognizes a gesture, which is a movement and/or shape of a person's gesture recognition target part, from an image captured by a camera, and outputs information for controlling an electronic device to the electronic device based on the recognized gesture. The control method includes a recognition target part determination step of determining whether the gesture recognition target part is included in the image, and an output step of outputting target part non-view-angle notification instruction information instructing a notification that the gesture recognition target part is not captured when it is determined in the recognition target part determination step that the gesture recognition target part is not included in the image.
- according to the above configuration, when the gesture recognition target part is not captured, the output unit instructs a notification to that effect, and the electronic device notifies the person operating it, based on the target part non-view-angle notification instruction information, that the gesture recognition target part is not captured. The person operating the electronic device can therefore determine from the notification whether the gesture recognition target part is included in the image captured by the camera, that is, whether the gesture is out of the camera's angle of view. This prevents the wasted effort of repeating the same gesture many times in the hope that it will be recognized even though the gesture recognition target part is not being captured.
- the gesture recognition device preferably further includes timing determination means for determining, based on the person's part captured in the image, whether it is an operation timing at which the person desires an operation by gesture, and the output means preferably outputs the target part non-view-angle notification instruction information when the recognition target part determination means determines that the gesture recognition target part is not included in the image and the timing determination means determines that it is the operation timing.
- according to the above configuration, the output means outputs the target part non-view-angle notification instruction information to the electronic device only at the operation timing. A person operating the electronic device may desire a gesture operation while the gesture recognition target part is not within the angle of view, but the part may also be out of the angle of view simply because the person does not wish to perform a gesture operation. By notifying the person that the gesture recognition target part is out of the angle of view only when the person desires a gesture operation, wasted actions can be prevented without disturbing the person's use of the electronic device.
- the timing determination means preferably determines that it is the operation timing when the image includes a target vicinity part near the gesture recognition target part. When the target vicinity part is captured while the gesture recognition target part is not, it is likely that the person desires a gesture operation but the target part is out of the angle of view; that is, it is likely to be the operation timing. By determining that this is the operation timing, the timing determination means can accurately identify the operation timing from the person's parts captured in the image.
- the timing determination means preferably determines that it is the operation timing when it detects from the image that a target interlocking part, which moves together with the gesture recognition target part during a gesture, has performed a predetermined motion. In this case as well, it is likely that the person desires a gesture operation while the gesture recognition target part is not within the angle of view; that is, it is likely to be the operation timing. By making this determination, the timing determination means can accurately identify the operation timing from the person's parts captured in the image.
- the gesture recognition device preferably further includes operation target person determination means for determining whether the person whose part is captured in the image is an operation target person who can operate the electronic device by gesture, and the output means preferably outputs the target part non-view-angle notification instruction information when the operation target person determination means determines that the person whose part is captured in the image is an operation target person.
- according to the above configuration, the output means outputs the target part non-view-angle notification instruction information to the electronic device only when the person operating it is an operation target person, that is, only when that person can operate the electronic device by gesture. The electronic device therefore notifies the person that the gesture recognition target part is not captured only when that person can actually perform gesture operations.
- the gesture recognition device preferably further includes operation target person determination means for determining whether the person whose part is captured in the image is an operation target person who can operate the electronic device by gesture, and when the operation target person determination means determines that the person is not an operation target person, the output means preferably outputs, instead of the target part non-view-angle notification instruction information, non-operation-target-person notification instruction information instructing a notification that the person is not an operation target person. According to this configuration, when a person who tries to operate the electronic device cannot do so by gesture, the output means outputs the non-operation-target-person notification instruction information, and the person can learn from the resulting notification whether he or she is an operation target person. This prevents the wasted effort of repeating the same gesture many times even though gesture operation is not available to that person.
- the gesture recognition device preferably further includes operation-prohibited-person determination means for determining whether the person whose part is captured in the image is an operation-prohibited person who is prohibited from operating the electronic device by gesture, and the output means preferably does not output the non-operation-target-person notification instruction information when the operation-prohibited-person determination means determines that the person is an operation-prohibited person. Since an operation-prohibited person cannot operate the electronic device by gesture, there is no need to notify such a person of information that would help a gesture be recognized; by not notifying, interference with the use of the electronic device by persons other than the operation-prohibited person can be reduced, and misuse can be prevented.
- the operation target person determination means preferably determines that a person set in advance as a person who can operate the electronic device by gesture is an operation target person.
- the operation target person determination means may determine that a person who has operated the electronic device by gesture a predetermined number of times is an operation target person.
- the operation target person determination means may determine that a person who has operated the electronic device by gesture within a predetermined period is an operation target person.
- the gesture recognition device preferably further includes part specifying means for specifying a part captured in the image, position specifying means for specifying the position of the gesture recognition target part relative to the part specified by the part specifying means, and movement vector specifying means for specifying, based on the position specified by the position specifying means, the movement direction necessary for the gesture recognition target part to enter the angle of view. The output means preferably outputs, instead of the target part non-view-angle notification instruction information, direction notification instruction information instructing a notification that prompts movement of the gesture recognition target part in the direction specified by the movement vector specifying means.
- according to the above configuration, the output means outputs, instead of the target part non-view-angle notification instruction information, direction notification instruction information prompting movement of the gesture recognition target part in the direction necessary for it to enter the angle of view. Based on this information, the electronic device notifies the person to move the gesture recognition target part in that direction, so a person whose gesture recognition target part is out of the angle of view can learn from the notification in which direction to move it to bring it within the angle of view.
- the movement vector specifying means preferably specifies the movement direction based on the position specified by the position specifying means and also calculates the movement amount necessary for the gesture recognition target part to enter the angle of view, and the output means preferably outputs, instead of the direction notification instruction information, vector notification instruction information instructing a notification that prompts movement of the gesture recognition target part by the calculated movement amount in the specified direction.
- according to the above configuration, the output means outputs, instead of the target part non-view-angle notification instruction information, vector notification instruction information prompting movement of the gesture recognition target part in the direction, and by the amount, necessary for it to enter the angle of view. Based on this information, the electronic device notifies the person accordingly, so a person whose gesture recognition target part is out of the angle of view can learn from the notification in which direction, and by how much, to move it to bring it within the angle of view.
- the gesture recognition device preferably further includes part specifying means for specifying a part captured in the image, and the output means preferably outputs, instead of the target part non-view-angle notification instruction information, gesture position notification instruction information instructing a notification that prompts the person to perform the gesture in front of the part specified by the part specifying means. According to this configuration, the electronic device notifies the person to move the gesture recognition target part in front of the captured part and perform the gesture there, so a person whose gesture recognition target part is out of the angle of view can learn from the notification where to move it to bring it within the angle of view.
- the recognition target part determination means preferably performs the above determination on a plurality of images arranged in time series, and the gesture recognition device preferably further includes entry/exit determination means for determining, based on the determination results of the recognition target part determination means, whether the gesture recognition target part is moving in and out of the camera's angle of view. When the entry/exit determination means determines that the gesture recognition target part is moving in and out of the angle of view, the output means outputs to the electronic device target part entry/exit notification instruction information instructing a notification to that effect. Based on this information, the electronic device notifies the person that the gesture recognition target part is moving in and out of the angle of view, so the person can learn this from the notification.
- an electronic device according to the present invention includes the gesture recognition device and notification means for notifying a person in accordance with the information output by the output means. The electronic device therefore achieves the same effects as the gesture recognition device.
- the gesture recognition device may be realized by a computer. In that case, a control program that causes a computer to operate as each means of the gesture recognition device, and a computer-readable recording medium on which that program is recorded, also fall within the scope of the present invention.
- each block of the gesture recognition device may be configured by hardware logic, or may be realized by software using a CPU as follows.
- that is, the gesture recognition device 1 includes a CPU that executes the instructions of a control program realizing each function, a ROM that stores the program, a RAM into which the program is expanded, and a storage device (recording medium) such as a memory that stores the program and various data.
- the object of the present invention can also be achieved by supplying to the gesture recognition device 1 a recording medium on which the program code (an executable program, an intermediate code program, or a source program) of the control program of the gesture recognition device 1, which is software realizing the functions described above, is recorded in a computer-readable manner, and by having the computer (or a CPU or MPU) read and execute the program code recorded on the recording medium.
- examples of the recording medium include tape systems such as magnetic tapes and cassette tapes; disk systems including magnetic disks such as floppy (registered trademark) disks and hard disks, and optical disks such as CD-ROM/MO/MD/DVD/CD-R; card systems such as IC cards (including memory cards) and optical cards; and semiconductor memory systems such as mask ROM/EPROM/EEPROM/flash ROM.
- the gesture recognition device 1 may also be configured to be connectable to a communication network, and the program code may be supplied via the communication network. The communication network is not particularly limited; for example, the Internet, an intranet, an extranet, a LAN, an ISDN, a VAN, a CATV communication network, a virtual private network, a telephone network, a mobile communication network, or a satellite communication network can be used.
- the transmission medium constituting the communication network is likewise not particularly limited; wired media such as IEEE 1394, USB, power line carrier, cable TV lines, telephone lines, and ADSL lines can be used, as can wireless media such as infrared (IrDA, remote control), Bluetooth (registered trademark), IEEE 802.11 wireless, HDR, mobile phone networks, satellite links, and terrestrial digital networks.
- the present invention can also be realized in the form of a computer data signal embedded in a carrier wave, in which the program code is embodied by electronic transmission.
- the present invention can be used in a gesture recognition device that recognizes a user's gesture from an image captured by a camera and controls an electronic device based on the recognized gesture.
Abstract
Description
When a user (operator) operates an electronic device by gesture, the present invention notifies the user if the gesture recognition target part is out of the camera's angle of view. FIG. 2 shows an example of how an electronic device equipped with the gesture recognition device according to the present invention is used; it is a diagram illustrating one usage mode of such an electronic device.
First, the specific configuration and functions of the television 2 and of the gesture recognition device 1 mounted on it will be described. FIG. 1 is a block diagram showing an example of the main configuration of the television 2 and the gesture recognition device 1 mounted on the television 2. The configuration of the television 2 is described first.
As shown in FIG. 1, the television 2 includes a television control unit 51, a television storage unit 52, a television communication unit 53, an operation unit 54, a display unit (notification means) 55, an audio output unit (notification means) 56, and the gesture recognition device 1. The television 2 may also include members such as an audio input unit, but such members are not illustrated because they are unrelated to the features of the invention.
As shown in FIG. 1, the gesture recognition device 1 includes a gesture recognition device control unit 11, a gesture recognition device storage unit 12, and a gesture recognition device communication unit 13.
As shown in FIG. 4, the person specifying information 41 associates a user name with individual specifying information for identifying that user individually and with personal attribute information indicating the user's attributes. In the illustrated example, the individual specifying information is a feature quantity, extracted, for example, from an image of the user's body and/or face. Referring to these feature quantities, the person specifying unit 23 extracts a feature quantity from the body image or face image detected by the person determination unit 22, determines which feature quantity in the person specifying information 41 it matches or approximates, and thereby identifies the user individually.
As shown in FIG. 5, the user information 42 indicates which users are set as operation target persons or as operation-prohibited persons. In the example of FIG. 5, "father" is set as an operation target person and "child" is set as an operation-prohibited person. In this example the operation target person and the operation-prohibited person are set as individual users, but this is not restrictive; for example, "women" may be set as operation target persons, "minors" may be set as operation-prohibited persons, and so on, setting operation target persons or operation-prohibited persons by user attribute.
As shown in FIG. 6, the operation timing information 43 indicates the conditions for judging whether it is an operation timing at which the user desires an operation by gesture (the conditions regarded as operation timing). As shown in FIG. 6, the operation timing information 43 includes processing conditions and captured-part conditions.
As shown in FIG. 7, the gesture information 44 associates gestures with operation signals. For example, when the gesture recognition execution unit 28 detects from the acquired image the gesture in which the operation target person is "raising only the index finger of the right hand", it generates the operation signal "S01". Here, the operation signal "S01" controls power ON/OFF of the television 2, the operation signal "S02" controls channel changes of the television 2, and the operation signal "S03" controls volume changes of the television 2.
As shown in FIG. 8, the gesture recognition device processing history information 45 associates an operation signal generated by the gesture recognition execution unit 28, the date and time at which the gesture recognition execution unit 28 executed the gesture recognition process, and the operation target person who performed the recognized gesture.
Next, the processing executed by the gesture recognition device 1 will be described with reference to FIGS. 9 to 16. FIG. 9 is a flowchart showing an example of the processing executed by the gesture recognition device 1, and FIGS. 10, 13, 14, and 16 show images captured by the camera 3. Here, it is assumed that the information shown in FIG. 5 is stored in the gesture recognition device storage unit 12 as the user information 42; that is, the operation target person is "father", the operation-prohibited person is "child", and "mother" is neither an operation target person nor an operation-prohibited person.
First, as shown in FIG. 10, the processing in the case where "father" makes a gesture of waving his right hand from side to side but the right hand, which is the gesture recognition target part, does not appear in the image captured by the camera 3 will be described.
Here, the operation timing determination process A in S6 of FIG. 9 will be described in detail with reference to FIG. 11. FIG. 11 is a flowchart showing an example of the operation timing determination process A.
Note that, in S8, instead of outputting the person out-of-view-angle notification instruction signal, the output unit 27 may perform a movement vector notification process of notifying the movement direction and the movement amount necessary for the gesture recognition target part to enter the angle of view, and output a movement vector notification instruction signal. The movement vector notification process executed in place of S8 when the determination in S7 is YES is described with reference to FIG. 12, which is a flowchart showing an example of the movement vector notification process.
Next, as shown in FIG. 13, the processing in the case where "father" makes a gesture of waving his right hand from side to side and the right hand, which is the gesture recognition target part, appears in the image captured by the camera 3 will be described.
Next, as shown in FIG. 14, the processing in the case where "mother" appears in the image captured by the camera 3 will be described.
Next, the operation timing determination process B in S13 of FIG. 9 will be described with reference to FIG. 15. FIG. 15 is a flowchart showing an example of the operation timing determination process B.
Next, as shown in FIG. 16, the processing in the case where "child" appears in the image captured by the camera 3 will be described.
Next, the processing in which, as described above, the gesture recognition process is executed first and the determination processes and the like are executed only when gesture recognition fails will be described with reference to FIG. 17. FIG. 17 is a flowchart showing another example of the processing executed by the gesture recognition device 1.
As a modification, a case where the electronic device is a security gate will be described. This security gate is installed at the entrance of a room that only female members may enter. Here, women registered in advance are set as operation target persons, and men are set as operation-prohibited persons.
A gesture recognition device according to the present invention recognizes a gesture, which is a movement and/or shape of a person's gesture recognition target part, from an image captured by a camera and outputs information for controlling an electronic device to the electronic device based on the recognized gesture, the device comprising recognition target part determination means for determining whether the gesture recognition target part is included in the image, and output means for outputting target part non-view-angle notification instruction information instructing a notification that the gesture recognition target part is not captured when the recognition target part determination means determines that the gesture recognition target part is not included in the image.
In the gesture recognition device according to the present invention, it is preferable that the movement vector specifying means specifies the movement direction based on the position specified by the position specifying means and also calculates the movement amount necessary for the gesture recognition target part to enter the angle of view, and that the output means outputs, instead of the direction notification instruction information, vector notification instruction information instructing a notification that prompts movement of the gesture recognition target part by the specified movement amount in the specified direction.
It is also preferable that the gesture recognition device according to the present invention further includes part specifying means for specifying a part captured in the image, and that the output means outputs, instead of the target part non-view-angle notification instruction information, gesture position notification instruction information instructing a notification that prompts the user to perform the gesture in front of the part specified by the part specifying means.
The present invention is not limited to the embodiments described above, and various modifications are possible within the scope of the claims. Embodiments obtained by appropriately combining technical means disclosed within the scope of the claims are also included in the technical scope of the present invention.
2 Television (electronic device)
3 Camera
4 User
22 Person determination unit (person determination means)
24 User determination unit (operation target person determination means, operation-prohibited-person determination means)
25 Recognition target part determination unit (recognition target part determination means)
26 Timing determination unit (timing determination means)
27 Output unit (output means)
30 Entry/exit determination unit (entry/exit determination means)
31 Positional relationship specifying unit (part specifying means, position specifying means)
32 Movement vector calculation unit (movement vector specifying means)
55 Display unit (notification means)
56 Audio output unit (notification means)
Claims (18)
- 1. A gesture recognition device that recognizes a gesture, which is a movement and/or shape of a person's gesture recognition target part, from an image captured by a camera and outputs information for controlling an electronic device to the electronic device based on the recognized gesture, the gesture recognition device comprising: recognition target part determination means for determining whether the gesture recognition target part is included in the image; and output means for outputting target part non-view-angle notification instruction information instructing a notification that the gesture recognition target part is not captured, when the recognition target part determination means determines that the gesture recognition target part is not included in the image.
- 2. The gesture recognition device according to claim 1, further comprising timing determination means for determining, based on a part of the person captured in the image, whether it is an operation timing at which the person desires an operation by gesture, wherein the output means outputs the target part non-view-angle notification instruction information when the recognition target part determination means determines that the gesture recognition target part is not included in the image and the timing determination means determines that it is the operation timing.
- 3. The gesture recognition device according to claim 2, wherein the timing determination means determines that it is the operation timing when the image includes a target vicinity part near the gesture recognition target part.
- 4. The gesture recognition device according to claim 2, wherein the timing determination means determines that it is the operation timing when it detects from the image that a target interlocking part, which moves together with the gesture recognition target part during a gesture, has performed a predetermined motion.
- 5. The gesture recognition device according to any one of claims 1 to 4, further comprising operation target person determination means for determining whether the person whose part is captured in the image is an operation target person who can operate the electronic device by gesture, wherein the output means outputs the target part non-view-angle notification instruction information when the operation target person determination means determines that the person whose part is captured in the image is an operation target person.
- 6. The gesture recognition device according to any one of claims 1 to 5, further comprising operation target person determination means for determining whether the person whose part is captured in the image is an operation target person who can operate the electronic device by gesture, wherein the output means outputs, instead of the target part non-view-angle notification instruction information, non-operation-target-person notification instruction information instructing a notification that the person is not an operation target person, when the operation target person determination means determines that the person whose part is captured in the image is not an operation target person.
- 7. The gesture recognition device according to claim 6, further comprising operation-prohibited-person determination means for determining whether the person whose part is captured in the image is an operation-prohibited person who is prohibited from operating the electronic device by gesture, wherein the output means does not output the non-operation-target-person notification instruction information when the operation-prohibited-person determination means determines that the person whose part is captured in the image is an operation-prohibited person.
- 8. The gesture recognition device according to claim 6, wherein the operation target person determination means determines that a person set in advance as a person who can operate the electronic device by gesture is an operation target person.
- 9. The gesture recognition device according to claim 6, wherein the operation target person determination means determines that a person who has operated the electronic device by gesture a predetermined number of times is an operation target person.
- 10. The gesture recognition device according to claim 6, wherein the operation target person determination means determines that a person who has operated the electronic device by gesture within a predetermined period is an operation target person.
- 11. The gesture recognition device according to any one of claims 1 to 10, further comprising: part specifying means for specifying a part captured in the image; position specifying means for specifying the position of the gesture recognition target part relative to the part specified by the part specifying means; and movement vector specifying means for specifying, based on the position specified by the position specifying means, a movement direction necessary for the gesture recognition target part to enter the angle of view, wherein the output means outputs, instead of the target part non-view-angle notification instruction information, direction notification instruction information instructing a notification that prompts movement of the gesture recognition target part in the direction specified by the movement vector specifying means.
- 12. The gesture recognition device according to claim 11, wherein the movement vector specifying means specifies the movement direction based on the position specified by the position specifying means and calculates a movement amount necessary for the gesture recognition target part to enter the angle of view, and the output means outputs, instead of the direction notification instruction information, vector notification instruction information instructing a notification that prompts movement of the gesture recognition target part by the movement amount specified by the movement vector specifying means in the direction specified by the movement vector specifying means.
- 13. The gesture recognition device according to any one of claims 1 to 10, further comprising part specifying means for specifying a part captured in the image, wherein the output means outputs, instead of the target part non-view-angle notification instruction information, gesture position notification instruction information instructing a notification that prompts the user to perform the gesture in front of the part specified by the part specifying means.
- 14. The gesture recognition device according to any one of claims 1 to 13, wherein the recognition target part determination means performs the determination on a plurality of the images arranged in time series, the gesture recognition device further comprising entry/exit determination means for determining, based on the determination results of the recognition target part determination means, whether the gesture recognition target part is moving in and out of the angle of view of the camera, wherein the output means outputs target part entry/exit notification instruction information instructing a notification that the gesture recognition target part is moving in and out of the angle of view of the camera, when the entry/exit determination means determines that the gesture recognition target part is moving in and out of the angle of view of the camera.
- 15. An electronic device comprising: the gesture recognition device according to any one of claims 1 to 14; and notification means for notifying a person in accordance with information output by the output means.
- 16. A method for controlling a gesture recognition device that recognizes a gesture, which is a movement and/or shape of a person's gesture recognition target part, from an image captured by a camera and outputs information for controlling an electronic device to the electronic device based on the recognized gesture, the method comprising: a recognition target part determination step of determining whether the gesture recognition target part is included in the image; and an output step of outputting target part non-view-angle notification instruction information instructing a notification that the gesture recognition target part is not captured, when it is determined in the recognition target part determination step that the gesture recognition target part is not included in the image.
- 17. A control program for operating the gesture recognition device according to any one of claims 1 to 14, the control program causing a computer to function as each of the above means.
- 18. A computer-readable recording medium on which the control program according to claim 17 is recorded.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/234,597 US20140152557A1 (en) | 2011-09-15 | 2012-03-14 | Gesture recognition device, electronic apparatus, gesture recognition device control method, control program, and recording medium |
CN201280035763.9A CN103688233A (zh) | 2011-09-15 | 2012-03-14 | Gesture recognition device and control method therefor, program, electronic device, and recording medium |
KR1020147001071A KR20140028108A (ko) | 2011-09-15 | 2012-03-14 | Gesture recognition device, electronic apparatus, gesture recognition device control method, control program, and recording medium |
EP12832493.6A EP2746899A4 (en) | 2011-09-15 | 2012-03-14 | Gesture recognition device, electronic device, gesture recognition device control method, control program, and recording medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2011-202434 | 2011-09-15 | ||
JP2011202434A JP2013065112A (ja) | 2011-09-15 | 2011-09-15 | Gesture recognition device, electronic apparatus, gesture recognition device control method, control program, and recording medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2013038734A1 true WO2013038734A1 (ja) | 2013-03-21 |
Family
ID=47882980
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2012/056518 WO2013038734A1 (ja) | 2011-09-15 | 2012-03-14 | ジェスチャ認識装置、電子機器、ジェスチャ認識装置の制御方法、制御プログラムおよび記録媒体 |
Country Status (6)
Country | Link |
---|---|
US (1) | US20140152557A1 (ja) |
EP (1) | EP2746899A4 (ja) |
JP (1) | JP2013065112A (ja) |
KR (1) | KR20140028108A (ja) |
CN (1) | CN103688233A (ja) |
WO (1) | WO2013038734A1 (ja) |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- JP2017083916A (ja) * | 2014-03-12 | 2017-05-18 | コニカミノルタ株式会社 | Gesture recognition device, head-mounted display, and portable terminal |
- JP6268526B2 (ja) | 2014-03-17 | 2018-01-31 | オムロン株式会社 | Multimedia device, multimedia device control method, and multimedia device control program |
US20160140933A1 (en) * | 2014-04-04 | 2016-05-19 | Empire Technology Development Llc | Relative positioning of devices |
US10474886B2 (en) | 2014-08-13 | 2019-11-12 | Rakuten, Inc. | Motion input system, motion input method and program |
KR20160149603A (ko) * | 2015-06-18 | 2016-12-28 | 삼성전자주식회사 | 전자 장치 및 전자 장치에서의 노티피케이션 처리 방법 |
- CN109643165A (zh) * | 2016-09-01 | 2019-04-16 | 三菱电机株式会社 | Gesture determination device, gesture operation device, and gesture determination method |
- JP7131552B2 (ja) * | 2017-05-26 | 2022-09-06 | 凸版印刷株式会社 | Identification device, identification method, and identification program |
- CN111164603A (zh) * | 2017-10-03 | 2020-05-15 | 富士通株式会社 | Gesture recognition system, image correction program, and image correction method |
- JP6943294B2 (ja) * | 2017-12-14 | 2021-09-29 | 富士通株式会社 | Technique recognition program, technique recognition method, and technique recognition system |
- JP7184835B2 (ja) * | 2020-03-04 | 2022-12-06 | グリー株式会社 | Computer program, method, and server device |
- CN113171472B (zh) * | 2020-05-26 | 2023-05-02 | 中科王府(北京)科技有限公司 | Disinfection robot |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2394235A2 (en) * | 2009-02-06 | 2011-12-14 | Oculis Labs, Inc. | Video-based privacy supporting system |
JP5187280B2 (ja) * | 2009-06-22 | 2013-04-24 | ソニー株式会社 | 操作制御装置および操作制御方法 |
US8305188B2 (en) * | 2009-10-07 | 2012-11-06 | Samsung Electronics Co., Ltd. | System and method for logging in multiple users to a consumer electronics device by detecting gestures with a sensory device |
KR20110076458A (ko) * | 2009-12-29 | 2011-07-06 | 엘지전자 주식회사 | 디스플레이 장치 및 그 제어방법 |
US8933884B2 (en) * | 2010-01-15 | 2015-01-13 | Microsoft Corporation | Tracking groups of users in motion capture system |
US8334842B2 (en) * | 2010-01-15 | 2012-12-18 | Microsoft Corporation | Recognizing user intent in motion capture system |
US8971572B1 (en) * | 2011-08-12 | 2015-03-03 | The Research Foundation For The State University Of New York | Hand pointing estimation for human computer interaction |
Application events
- 2011-09-15: JP JP2011202434A patent/JP2013065112A/ja, active, Pending
- 2012-03-14: EP EP12832493.6A patent/EP2746899A4/en, not active, Withdrawn
- 2012-03-14: KR KR1020147001071A patent/KR20140028108A/ko, not active, Application Discontinuation
- 2012-03-14: WO PCT/JP2012/056518 patent/WO2013038734A1/ja, active, Application Filing
- 2012-03-14: US US14/234,597 patent/US20140152557A1/en, not active, Abandoned
- 2012-03-14: CN CN201280035763.9A patent/CN103688233A/zh, active, Pending
Patent Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- JP2002149302A (ja) * | 2000-11-09 | 2002-05-24 | Sharp Corp | Interface device and recording medium recording interface processing program |
- JP2004272598A (ja) * | 2003-03-07 | 2004-09-30 | Nippon Telegr & Teleph Corp <Ntt> | Pointing method, device, and program |
- JP2004280148A (ja) | 2003-03-12 | 2004-10-07 | Nippon Telegr & Teleph Corp <Ntt> | Color model construction method, device, and program |
- JP2004303207A (ja) | 2003-03-28 | 2004-10-28 | Microsoft Corp | Dynamic feedback for gestures |
- JP2004326693A (ja) | 2003-04-28 | 2004-11-18 | Sony Corp | Image recognition device and method, and robot device |
- JP2005115932A (ja) | 2003-09-16 | 2005-04-28 | Matsushita Electric Works Ltd | Human body detection device using images |
- JP4372051B2 (ja) | 2005-06-13 | 2009-11-25 | 株式会社東芝 | Hand shape recognition device and method |
- JP2007052609A (ja) | 2005-08-17 | 2007-03-01 | Sony Corp | Hand region detection device, hand region detection method, and program |
- JP2007166187A (ja) | 2005-12-13 | 2007-06-28 | Fujifilm Corp | Imaging device and imaging method |
- JP2008015641A (ja) | 2006-07-04 | 2008-01-24 | Fujifilm Corp | Human body region extraction method, device, and program |
- JP2008158790A (ja) | 2006-12-22 | 2008-07-10 | Matsushita Electric Works Ltd | Human body detection device |
- JP2008244804A (ja) | 2007-03-27 | 2008-10-09 | Fujifilm Corp | Imaging device, imaging method, and control program |
- JP2010541398A (ja) * | 2007-09-24 | 2010-12-24 | ジェスチャー テック,インコーポレイテッド | Enhanced interface for voice and video communication |
- JP2009301301A (ja) | 2008-06-12 | 2009-12-24 | Tokai Rika Co Ltd | Input device and input method |
Non-Patent Citations (1)
Title |
---|
See also references of EP2746899A4 |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- CN111158458A (zh) * | 2018-11-07 | 2020-05-15 | 天梭股份有限公司 | Method for broadcasting an alarm describing a notification message |
- CN111158458B (zh) * | 2018-11-07 | 2023-04-21 | 天梭股份有限公司 | Method for broadcasting an alarm describing a notification message |
- CN112328090A (zh) * | 2020-11-27 | 2021-02-05 | 北京市商汤科技开发有限公司 | Gesture recognition method and device, electronic equipment, and storage medium |
- CN112328090B (zh) * | 2020-11-27 | 2023-01-31 | 北京市商汤科技开发有限公司 | Gesture recognition method and device, electronic equipment, and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN103688233A (zh) | 2014-03-26 |
EP2746899A1 (en) | 2014-06-25 |
KR20140028108A (ko) | 2014-03-07 |
US20140152557A1 (en) | 2014-06-05 |
JP2013065112A (ja) | 2013-04-11 |
EP2746899A4 (en) | 2015-02-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
- WO2013038734A1 (ja) | 2013-03-21 | Gesture recognition device, electronic apparatus, gesture recognition device control method, control program, and recording medium |
- JP5862143B2 (ja) | 2016-02-16 | Gesture recognition device, electronic apparatus, gesture recognition device control method, control program, and recording medium |
US11609607B2 (en) | Evolving docking based on detected keyboard positions | |
- KR102420100B1 (ko) | 2022-07-13 | Electronic apparatus providing health status information, control method therefor, and computer-readable storage medium |
- JP6011165B2 (ja) | 2016-10-19 | Gesture recognition device, control method therefor, display device, and control program |
US10206573B2 (en) | Method of obtaining biometric information in electronic device and electronic device for the same | |
- KR20160051384A (ko) | 2016-05-11 | Wearable device and control method thereof |
US8942510B2 (en) | Apparatus and method for switching a display mode | |
- CN106104650A (zh) | 2016-11-09 | Remote device control via gaze detection |
- JP2014048936A (ja) | 2014-03-17 | Gesture recognition device, control method therefor, display device, and control program |
- KR20110051677A (ko) | 2011-05-18 | Display apparatus and control method thereof |
- JP2014023127A (ja) | 2014-02-03 | Information display device, information display method, control program, and recording medium |
- KR20150116897A (ko) | 2015-10-16 | Detection of NUI engagement |
- WO2020250082A1 (ja) | 2020-12-17 | Information processing device that performs an operation according to a user's emotion |
- CN106510164B (zh) | 2019-05-21 | Suitcase control method and device |
US20180150722A1 (en) | Photo synthesizing method, device, and medium | |
- KR20190099847A (ko) | 2019-08-28 | Electronic device and posture correction method therefor |
- KR102386299B1 (ko) | 2022-04-15 | Method and apparatus for providing a help guide |
- JP7271909B2 (ja) | 2023-05-12 | Display device and display device control method |
- CN110022948A (zh) | 2019-07-16 | Mobile device for providing exercise content and wearable device connected thereto |
- KR102476619B1 (ko) | 2022-12-12 | Electronic device and control method thereof |
US20220284738A1 (en) | Target user locking method and electronic device | |
- KR20170007120A (ko) | 2017-01-18 | Electronic device and method for providing information related to a skin type of an object |
US11463616B1 (en) | Automatically activating a mirror mode of a computing device | |
US11551539B2 (en) | Hygiene management device and hygiene management method |
Legal Events
- 121 (Ep: the epo has been informed by wipo that ep was designated in this application): Ref document number: 12832493; Country of ref document: EP; Kind code of ref document: A1
- WWE (Wipo information: entry into national phase): Ref document number: 2012832493; Country of ref document: EP
- ENP (Entry into the national phase): Ref document number: 20147001071; Country of ref document: KR; Kind code of ref document: A
- WWE (Wipo information: entry into national phase): Ref document number: 14234597; Country of ref document: US
- NENP (Non-entry into the national phase): Ref country code: DE