WO2018076523A1 - Gesture recognition method, device, and in-vehicle system - Google Patents
Gesture recognition method, device, and in-vehicle system
- Publication number
- WO2018076523A1 (PCT/CN2016/112336, CN2016112336W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- gesture
- touch
- current
- dimensional
- area
- Prior art date: 2016-10-25
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
- G06F3/04883—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
Definitions
- the present application relates to the field of human-computer interaction and automobile manufacturing, and in particular to gesture recognition.
- the application potential of human-computer interaction technology has begun to show: motion recognition applied to wearable computers, invisibility technology, and immersive games; haptic interaction applied to virtual reality, teleoperated robots, and telemedicine;
- and voice recognition applied to call routing, home automation, and voice dialing.
- Human-computer interaction solution providers are constantly introducing innovative technologies, such as fingerprint recognition, side-swipe fingerprint recognition, and pressure-sensitive touch.
- developing applications of these hotspot technologies is both an opportunity and a challenge;
- vision-based gesture recognition is currently the hotspot of human-computer interaction.
- Gesture recognition technology is a technique that achieves control through movements of various parts of the human body, generally the face and hands.
- Gesture recognition is mainly divided into two-dimensional gesture recognition and three-dimensional gesture recognition.
- gesture control is, after voice control, another technology sought after by many car manufacturers.
- Gesture control allows users to control in-vehicle functions with a simple wave or a few motions while driving; compared with voice control, it is more accurate and more convenient.
- the implementation principles of existing gesture recognition are based on infrared sensors, camera detection, resistive sensing, or capacitive sensing;
- infrared-sensor-based and camera-based schemes perform three-dimensional gesture recognition,
- while resistive and capacitive schemes perform two-dimensional gesture recognition.
- the touch panel of the touch LCD in an existing car's central control panel is usually resistive or capacitive;
- the user can only use a finger or a stylus to perform click operations on the touch screen and cannot operate away from the touch screen, i.e., only two-dimensional gestures are recognized;
- when an infrared sensor is used, simple three-dimensional gesture recognition — detecting a wave, an approach, or a retreat — can be realized, but two-dimensional gestures cannot be recognized.
- Embodiments of the present invention provide a gesture recognition method, device, and in-vehicle system that integrate three-dimensional and two-dimensional gesture recognition, are more convenient for users, and improve the comfort of the user's human-computer interaction.
- the embodiment of the present invention provides the following technical solutions:
- An embodiment of the present invention provides a method for gesture recognition, including:
- when the operation area where the current gesture is located is within the preset touch plane area, determining that the current gesture is a touch gesture, and recognizing the operation gesture according to the touch position information of the touch gesture;
- when the operation area where the current gesture is located is not within the preset touch plane area, determining that the current gesture is a three-dimensional gesture, and calling a pre-established three-dimensional gesture model library to recognize the operation gesture of the three-dimensional gesture;
- the step of determining the operation area where the current gesture is located includes:
- the current gesture is any one or any combination of the following: gestures of touching or clicking with the fingertip pad, the side of a finger, or the fingernail;
- calling the pre-established three-dimensional gesture model library is: continuously acquiring, in a time series, multiple frames of image information containing the current gesture; if the current gesture is the same across the frames and its position does not change, the current gesture is a static gesture and the three-dimensional static gesture model library is retrieved; otherwise, the three-dimensional dynamic gesture model library is called.
- Another aspect of the embodiments of the present invention provides a device for gesture recognition, including:
- An image acquisition module configured to acquire image information collected by the image collection device
- An image detecting module configured to detect a current gesture in the image information
- An area determining module configured to determine an operation area where the current gesture is located
- An information recognition module configured to identify the current gesture:
- when the operation area where the current gesture is located is within the preset touch plane area, determining that the gesture is a touch gesture, and recognizing the operation gesture according to the touch position of the touch gesture;
- when the operation area where the current gesture is located is not within the preset touch plane area, determining that the gesture is a three-dimensional gesture, and calling a pre-established three-dimensional gesture model library to recognize the operation gesture of the three-dimensional gesture;
- the instruction generating module is configured to generate a corresponding operation instruction according to the recognized operation gesture.
- the area determining module includes:
- a first determining unit, configured to detect whether a button on the preset touch plane area used to indicate the presence of a touch signal is turned on; if yes, to determine that the operation area where the current gesture is located is within the preset touch plane area; if no, to determine that it is not; and/or
- a second determining unit, configured to calculate the depth value between the image capture device and each pixel in the preset touch plane area and determine the minimum depth value Lmin; to calculate the distance L between the touch point of the current gesture and the image capture device; and, if L − Lmin ≤ 15 mm, to determine that the operation area where the current gesture is located is not within the preset touch plane area, and otherwise that it is.
- An embodiment of the present invention further provides an in-vehicle system for gesture recognition, including an image acquisition device, a touch panel, and any of the above gesture recognition devices.
- the touch panel comprises a touch pad and/or a touch switch; when the image capture device is a 2D camera, the touch pad comprises:
- a panel unit, a circuit board unit, a button unit, and a support unit;
- and the touch switch comprises: a panel unit with a beacon, a circuit board unit, a light guide plate unit, a button unit, a lamp unit, and a support unit.
- the touch panel comprises a touch pad and/or a touch switch; when the image capture device is a 3D camera, the touch pad comprises: a panel unit and a support unit;
- and the touch switch comprises: a panel unit with a beacon, a light guide plate unit, a circuit board unit, a lamp unit, and a support unit.
- An embodiment of the present invention provides a gesture determination method: by judging whether the area where the current gesture operation is located in the captured image information falls within a preset touch plane area, the current gesture is determined to be either a touch gesture or a three-dimensional gesture. If it is a touch gesture, the operation gesture is recognized according to the touch position information of the touch gesture; if it is a three-dimensional gesture, the pre-established three-dimensional gesture model library is called and the operation gesture of the three-dimensional gesture is recognized, thereby integrating three-dimensional and two-dimensional gesture recognition.
- existing gesture recognition is single-purpose, i.e., it targets only two-dimensional or only three-dimensional gestures; two-dimensional-only recognition can operate solely on a touch screen with sensing capability and cannot work away from the screen;
- with three-dimensional-only recognition, the limited set of recognizable motions greatly restricts the application of gesture control.
- the present application integrates three-dimensional and two-dimensional gesture recognition, so that three-dimensional gestures in the air and two-dimensional gestures in the touch plane area can both be recognized, which is more convenient for the user and improves the comfort of the user's human-computer interaction.
- the embodiments of the present invention further provide a corresponding implementation device and an in-vehicle system for the gesture recognition method, making the method more practical; the device and system have corresponding advantages.
- FIG. 1 is a structural block diagram of an in-vehicle system for gesture recognition according to an embodiment of the present invention
- FIG. 2 is a schematic flowchart of a gesture recognition method according to an embodiment of the present invention.
- FIG. 3 shows some three-dimensional gestures of a three-dimensional dynamic model library according to an embodiment of the present invention.
- FIG. 4 is a schematic diagram of image processing and gesture recognition according to an embodiment of the present invention.
- FIG. 5 is a schematic flowchart of another gesture recognition method according to an embodiment of the present invention.
- FIG. 6 is a schematic diagram of a gesture operation mode and a location according to an embodiment of the present invention.
- FIG. 7 is a schematic diagram of positioning of a current gesture according to an embodiment of the present invention.
- FIG. 8 is a schematic diagram of determining current gesture position information according to an embodiment of the present invention.
- FIG. 9 is a schematic flowchart of another gesture recognition method according to an embodiment of the present invention.
- FIG. 10 is a structural diagram of a gesture recognition apparatus according to an embodiment of the present invention.
- FIG. 11 is a structural block diagram of an in-vehicle system for gesture recognition according to an embodiment of the present invention.
- FIG. 12 is a schematic structural diagram of a touch panel based on a 2D camera according to an embodiment of the present invention.
- FIG. 13 is a schematic structural diagram of a touch switch based on a 2D camera according to an embodiment of the present invention.
- FIG. 14 is a schematic structural diagram of a touch panel based on a 3D camera according to an embodiment of the present invention.
- FIG. 15 is a schematic structural diagram of a touch switch based on a 3D camera according to an embodiment of the present invention.
- FIG. 16 is a structural block diagram of an in-vehicle system for gesture recognition control according to an embodiment of the present invention.
- gesture recognition in the prior art is single-purpose, i.e., it targets only two-dimensional or only three-dimensional gesture recognition.
- two-dimensional-only recognition can operate solely on a touch screen with sensing capability and cannot work away from the screen, while three-dimensional-only recognition still needs a sensing touch screen to assist in recognizing two-dimensional gestures.
- the present application integrates three-dimensional and two-dimensional gesture recognition, so that three-dimensional gestures in the air and two-dimensional gestures in the touch area can both be recognized, which is more convenient for the user, improves the comfort of the user's human-computer interaction, and to some extent reduces the user's cost of use.
- FIG. 1 is a structural block diagram of an in-vehicle system for gesture recognition according to an embodiment of the present invention.
- 101 is a camera installed in a car ceiling light module
- 102 is a sensing area of the camera
- 103 is a gesture
- 104 is a central control panel
- the central control panel includes a rectangular touch pad ABCD and a touch switch EF.
- in the prior art, 101 is generally an infrared camera serving as the three-dimensional gesture sensor,
- and 104 is a touch panel with a sensing unit: air gestures are detected and recognized by the infrared camera, and two-dimensional touch gestures on the central control panel are recognized by the touch unit.
- in the solution of the present application, the camera can be a 2D or a 3D camera serving as the gesture sensor, and the central control panel needs no sensing touch unit; all gestures (two-dimensional and three-dimensional) are recognized by the camera.
- Embodiment 1:
- FIG. 2 is a schematic flowchart of a gesture recognition method according to an embodiment of the present invention.
- the embodiment of the present invention may include the following content:
- S201: Acquire image information collected by the image acquisition device.
- S202: Detect the current gesture in the image information.
- S203: Determine the operation area where the current gesture is located.
- S204: When the operation area where the current gesture is located is within the preset touch plane area, determine that the current gesture is a touch gesture, and recognize the operation gesture according to the touch position information of the touch gesture.
- S205: When the operation area where the current gesture is located is not within the preset touch plane area, determine that the current gesture is a three-dimensional gesture, and call the pre-established three-dimensional gesture model library to recognize the operation gesture.
- S206: Generate a corresponding operation instruction according to the recognized operation gesture.
- S201 is specifically:
- the image acquisition device is a 2D or a 3D camera, and the corresponding image information differs:
- the image information captured by the 2D camera is two-dimensional grayscale or two-dimensional color plane image information; the image information captured by the 3D camera includes two-dimensional grayscale plane image information and one-dimensional depth image information.
- S202 is specifically:
- when detecting the image information collected by the image acquisition device, it is necessary to determine whether the image contains a hand. If no hand is present, the acquired image is discarded and the next image is examined; if the detected image contains an incomplete hand, a reminder can be issued — the user can be prompted by voice, the angle of the image acquisition device can be adjusted automatically, or other reminder methods can be used — so that the sensing area contains the whole of the current hand; when the detected image contains complete gesture information, the method proceeds to S203.
- the current gesture may be a static gesture or a dynamic gesture.
- for example, in the touch area, the static gesture may be a button-press action,
- and the dynamic gesture may be a swipe action.
- image information containing the current gesture can be acquired continuously in a time series over multiple frames; if the current gesture is the same in the multiple frames and its position does not change, the current gesture is a static gesture; otherwise, it is a dynamic gesture. A minimal decision sketch follows.
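- As a concrete illustration of the multi-frame test above, the following Python sketch classifies a frame sequence as static or dynamic; the helper callables `same_gesture` and `position_of` are assumptions, not patent-defined APIs, and `tol_px` is a hypothetical pixel tolerance:

    import numpy as np

    def classify_motion(frames, same_gesture, position_of, tol_px=5):
        # Static gesture: identical hand shape and (nearly) unchanged
        # position across all frames; otherwise dynamic.
        p0 = np.asarray(position_of(frames[0]), dtype=float)
        for f in frames[1:]:
            p = np.asarray(position_of(f), dtype=float)
            if not same_gesture(frames[0], f) or np.linalg.norm(p - p0) > tol_px:
                return "dynamic"   # -> call the 3D dynamic gesture model library
        return "static"            # -> call the 3D static gesture model library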
- S203 is specifically:
- the image capture device captures image information within a certain region, which may be an inverted-pyramid region comprising the preset touch plane area and an air sensing area; the air sensing area is the aerial part of the inverted-pyramid region.
- the preset touch plane area is the plane area of two-dimensional gesture operation, generally a touch pad; a touch gesture (two-dimensional gesture) may be a touch on the touch pad or a touch on the touch switch.
- the touch gesture may be a touch or click with the fingertip pad, the side of a finger, the fingernail, or any combination thereof; of course, any part of the body can also be used.
- conventional capacitive touch screens often demand high precision when operated with the side of a finger or a fingernail, especially the fingernail.
- a resistive touch screen can recognize fingernail operation, but the fingernail damages the touch screen too much, causing inaccurate touch-position recognition and a shortened screen life — the loss outweighs the gain.
- the present application can instead use a touch screen without sensing capability, which can be a plastic panel, a metal panel, a liquid crystal display, or a panel of any other material; in this way fingernail operation imposes no special precision requirement, and the screen-life damage caused by an unsuitable material choice is avoided.
- S204 is specifically:
- the preset touch plane area may be a touch pad, a touch switch, or a combination of the two; the number of touch pads and touch switches is not limited — there may be one pad or several spliced together, and one or more switches.
- the material of the touch pad and the touch switch can be a plastic panel, a metal panel, or a liquid crystal display; of course, other materials, or combinations of the above, can be used when necessary.
- the touch position information may be coordinate information or the position of a specific touch point, depending on the camera used and the specific touch area.
- for an operation gesture on a touch switch in the touch area, the system may feed back the coordinates of the touch point in the established coordinate system,
- or it may directly report the occluded position according to which touch point is blocked: for example, if button a of the touch switch is detected as occluded, the position of button a is reported, indicating that at that moment the user pressed button a. Different positions correspond to different operation instructions, which may be preset by the manufacturer or customized by the user according to need.
- suppose the operation instruction represented by button a is "open the sunroof": when the user presses button a, the sunroof is opened according to the detected position or coordinate information of button a. The operation gesture can therefore be recognized from the position information, as in the sketch below.
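- A minimal sketch of the position-to-instruction lookup described above; the beacon names and command identifiers are hypothetical except for "button a opens the sunroof", which comes from the text:

    # Hypothetical mapping from occluded switch positions/beacons to commands.
    BUTTON_COMMANDS = {
        "a": "OPEN_SUNROOF",      # example given in the text
        "b": "CLOSE_SUNROOF",     # assumed additional bindings
        "e": "TOGGLE_RADIO",
    }

    def instruction_for_occluded_button(beacon: str) -> str:
        # Unknown positions produce no operation.
        return BUTTON_COMMANDS.get(beacon, "NO_OP")

    print(instruction_for_occluded_button("a"))  # -> OPEN_SUNROOF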
- S205 is specifically:
- the three-dimensional gesture in the air can be a static gesture or a dynamic gesture.
- a gesture that directly makes an 'OK' is a static gesture
- a gesture that changes from a palm to a fist is a dynamic gesture.
- a static gesture corresponds to the three-dimensional static gesture model library, and a dynamic gesture calls the three-dimensional dynamic gesture model library.
- to decide which library to call, multiple frames of image information containing the current gesture can be acquired continuously in a time series: if the current gesture is the same across the frames and its position does not change, it is a static gesture and the three-dimensional static gesture model library is retrieved; otherwise, the three-dimensional dynamic gesture model library is called.
- the model library is a preset series of gestures, and the operation instruction corresponding to each preset gesture may be a factory default or customized according to the user's needs; when using gestures, the user can operate according to the model library to achieve human-computer interaction. For example, FIG. 3 shows some gestures of the model library:
- 301 is a gesture from the extended state of the index finger to the state in which the index finger is recovered into a fist state
- 302 is a gesture of rotating counterclockwise by the index finger in an extended state
- 303 is a gesture of clockwise rotation of the index finger in the extended state
- 304 is a gesture in which the hand changes from palm to fist
- 305 is a waving gesture from left to right of the palm
- 306 is a waving gesture from right to left of the palm.
- FIG. 3 shows only some of the gestures in the model library.
- there are many more gestures in practice, and different gestures correspond to different operation instructions. For example: the driver moving the palm forward and backward indicates answering a call, selection, or confirmation;
- the palm opening to the right indicates hanging up or canceling an operation; a V-shaped gesture indicates pause, play, or a custom operation; rotating one finger clockwise increases the volume or zooms out the navigation map; rotating a finger counterclockwise decreases the volume or zooms in the navigation map; moving the five fingers gathered into a circle to the right indicates the next song or returning to the main menu; moving the five fingers gathered into a circle
- to the left indicates the previous song or returning to the main menu.
- for window control, the motor vehicle can determine the range of the motion and then set the degree of window opening according to the gesture's range.
- the wiper or indicator light can be activated by clicking a finger near the steering wheel.
- the air conditioner or radio can be turned on by twisting the wrist in front of the dashboard; the sunroof opens automatically when a finger points at it; pressing the index finger to the lips lowers the phone volume; and shaping the hand into a "phone" indicates making a call.
- a "thinking" chin movement means "I want to retrieve information", and a "like" (thumbs-up) action represents "agree".
- the gestures and corresponding operations are not limited to those described above; in actual implementation they may be added or modified according to the user's needs.
- the information in the model library may be the three-dimensional gesture information, the corresponding execution command information of the three-dimensional gesture, or both.
- for example, for the gesture of opening the palm and swinging it to the right, the library may store the gesture itself, the command to be executed for that gesture (hanging up a call or canceling an operation), or both; a possible entry layout is sketched below.
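- One plausible in-memory layout for such a model library entry — a sketch under assumed names, since the patent does not fix a data structure — pairs a trained gesture template with the command(s) it maps to:

    from dataclasses import dataclass
    from typing import Any, Tuple

    @dataclass
    class GestureModelEntry:
        # One record of the 3D (static or dynamic) gesture model library.
        name: str                  # e.g. "palm_open_swing_right"
        template: Any              # trained sample model (e.g. SVM/HMM output)
        commands: Tuple[str, ...]  # instruction(s) bound to the gesture

    # Example mirroring the text: open palm swung to the right.
    entry = GestureModelEntry(
        name="palm_open_swing_right",
        template=None,             # placeholder for a trained model
        commands=("HANG_UP_CALL", "CANCEL_OPERATION"),
    )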
- S206 is specifically:
- the operation instruction may be position information — sent directly through the interface unit to the electronic function module, which then performs the required action according to a preset correspondence between instructions and positions — or it may be
- the action instruction identified from the position information, sent directly through the interface unit to the electronic function module, which executes the action command directly.
- the preset correspondence between position information and operations, and the identification based on it, may reside either in the electronic function module or in the information recognition module.
- the operation instruction may thus be the coordinate or specific position information of button a, or the instruction represented by button a, such as opening the sunroof.
- FIG. 4 is a schematic diagram of image processing and gesture recognition provided by the present application.
- 401 is the real-time captured image information: for a 2D camera, a two-dimensional grayscale or color plane image signal; for a 3D camera, a two-dimensional grayscale plane image signal plus a one-dimensional depth image signal.
- 402 is image preprocessing, in which gesture features such as the contour, convexity, angles, or skeleton of the hand are extracted from the image; other gesture features can of course also be used.
- 403 is image processing, i.e., further processing of the gesture features acquired in real time by preprocessing, such as compression and optimization, with different handling for different gesture types and features: for a static gesture, the extracted hand contour is optimized; for a dynamic gesture, the contours of the hand over consecutive time frames are extracted and must first be compressed.
- 404 is the gesture sample model: the static gesture model can apply the same image processing and feature extraction as steps 402 and 403 to collected samples, and a sample model of static gestures is obtained through sample training; similarly, the dynamic gesture model uses the static-model method,
- continuously extracting gesture contour samples over a time series, and the sample model of dynamic gestures is obtained through sample training.
- 405 is gesture recognition: the gesture feature signal acquired in real time (403) is processed together with the gesture sample model (404) by a specific algorithm, which may be any one or any combination of the following: a boosting algorithm, the random forest algorithm, a support vector machine (SVM), a neural network, deep learning, or a hidden Markov model (HMM); other algorithms can of course be used when necessary. A minimal classifier sketch follows.
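- The patent names several interchangeable classifiers; the sketch below trains one of them (an SVM, via scikit-learn) on hand-contour features. The radial-distance feature extractor is an assumption standing in for the contour/convexity/skeleton features mentioned above:

    import numpy as np
    from sklearn.svm import SVC

    def contour_features(mask: np.ndarray, n: int = 64) -> np.ndarray:
        # Toy features: maximum radial distance from the hand centroid in
        # each of n angular bins, normalized — a stand-in for real descriptors.
        ys, xs = np.nonzero(mask)
        cy, cx = ys.mean(), xs.mean()
        ang = np.arctan2(ys - cy, xs - cx)
        r = np.hypot(ys - cy, xs - cx)
        bins = ((ang + np.pi) / (2 * np.pi) * n).astype(int) % n
        feat = np.zeros(n)
        np.maximum.at(feat, bins, r)
        return feat / (feat.max() + 1e-9)

    def train_recognizer(X: np.ndarray, y: np.ndarray) -> SVC:
        # X: stacked feature vectors from training masks; y: gesture labels
        # such as 301, 302, ... from the model library of FIG. 3.
        clf = SVC(kernel="rbf")   # one of the algorithms listed above
        clf.fit(X, y)
        return clf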
- 406 is sending the gesture instruction: according to the generated operation instruction, it is sent to the execution unit — for example, the currently recognized gesture signal can be sent in real time through the interface unit to a specific electronic function module, which executes the corresponding gesture instruction, realizing human-computer interaction.
- An embodiment of the present invention provides a gesture determination method: by judging whether the area of the current gesture operation in the captured image information falls within a preset touch plane area, the current gesture is determined to be a touch gesture or a three-dimensional gesture. If it is a touch gesture, the operation gesture is recognized according to the touch position information; if it is a three-dimensional gesture, the pre-established three-dimensional gesture model library is called and the operation gesture of the three-dimensional gesture is recognized.
- integrating three-dimensional and two-dimensional gesture recognition in this way allows both three-dimensional gestures in the air and two-dimensional gestures in the touch area to be recognized, which is more convenient for the user and improves the freedom and comfort of the user's human-computer interaction.
- Embodiment 1 is further refined below for more convenient use.
- Embodiment 2:
- FIG. 5 is a schematic flowchart diagram of another gesture recognition method according to an embodiment of the present invention, which may specifically include the following content:
- S501 Acquire image information acquired by a 2D camera.
- S502 Detect a current gesture in the image information.
- S503 Determine whether the operation area of the current gesture is a preset touch plane area. If yes, go to S504, otherwise, go to S505.
- S504 Determine that the current gesture is a touch gesture, and identify the operation gesture according to the touch location information of the touch gesture.
- S505 Determine that the current gesture is a three-dimensional gesture, and invoke a pre-established three-dimensional gesture model library to identify an operation gesture of the three-dimensional gesture.
- S506 Generate a corresponding operation instruction according to the identified operation gesture; the whole S501-S506 flow is sketched below.
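- A compact sketch of this S501-S506 flow; every callable is an assumed helper passed in by the caller, not a patent-defined API:

    def recognize(frame, detect_hand, in_touch_area, locate_touch, match_3d):
        # S502: detect the current gesture; discard the frame if no hand.
        hand = detect_hand(frame)
        if hand is None:
            return None
        # S503: which operation area is the gesture in?
        if in_touch_area(hand):
            # S504: touch gesture -> recognize by touch position.
            return ("touch", locate_touch(hand))
        # S505: 3D gesture -> match against the 3D gesture model library.
        return ("three_d", match_3d(hand))

    # S506 would then map the returned tuple to an operation instruction.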
- the current gesture may be one of the three types shown in FIG. 6: 601 is the 2D camera, with S the center of the camera's sensing surface; 602 is a three-dimensional gesture operation, with Q the operating position of the finger in the air; 603 is a touch operation such as a click or slide with the fingernail, the back of the hand facing the touch area, with P the touch position of the finger on the touch pad ABCD; 604 is a touch operation such as a finger click or slide on the touch switch EF, the palm facing the touch area, with M the touch position of the finger on the touch switch EF.
- the specific method for determining which operation area the current gesture is located in is: detect whether the button on the preset touch plane area used to indicate the presence of a touch signal is turned on; if yes, the operation area of the current gesture is within the preset touch plane area, otherwise it is not.
- the button may be a switch on the circuit board of the preset touch plane area that conducts when the preset touch plane area is touched; it may be a micro-travel, non-self-locking
- elastic switch, or of course any other device that can emit a corresponding signal when the preset touch plane area is touched.
- the preset touch plane area may be the touch pad ABCD or the touch switch EF.
- recognition of the operation gesture can be achieved by determining its touch position information.
- to identify the specific position information, the current gesture is first located, i.e., the touch point or position point is determined, as follows:
- FIG. 7 is a schematic diagram of positioning of a current gesture.
- the current gesture may be a three-dimensional gesture operation above the preset touch plane area or a touch operation on it.
- 701 is a single-finger gesture using the index finger, such as the 301, 302, or 303 gesture in the model library of FIG. 3; 702 is a gesture other than an index-finger gesture, such as the 304, 305, or 306 gesture in the model library of FIG. 3.
- for gesture 701, T is the vertex (the highest pixel) of the finger in the whole image;
- moving down from T by a distance δ, L and R are the pixels at the left and right edges of the finger in that row,
- and G, the center of pixels L and R in the same row, is the position point or touch point of the current gesture.
- the value of δ is set in the range 5 mm-15 mm, because finger widths differ between people and between countries and regions.
- for gesture 702, H is the vertex (the highest pixel) of the gesture in the entire image and is taken as the position point or touch point of the current gesture. A localization sketch follows.
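- A numpy sketch of this localization on a binary hand mask; converting the 5-15 mm finger-width offset δ into pixels is left as an assumed constant:

    import numpy as np

    def locate_touch_point(mask: np.ndarray, delta_px: int = 20):
        # T is the highest hand pixel; delta_px below T, take the midpoint G
        # of the finger's left (L) and right (R) edge pixels in that row.
        ys, xs = np.nonzero(mask)
        if ys.size == 0:
            return None
        top = int(ys.min())                      # row of vertex T (or H)
        row = min(top + delta_px, mask.shape[0] - 1)
        cols = np.nonzero(mask[row])[0]
        if cols.size == 0:                       # fall back to the vertex H
            return (int(xs[ys == top].min()), top)
        left, right = int(cols.min()), int(cols.max())
        return ((left + right) // 2, row)        # midpoint G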
- the position information of the current gesture can be determined by establishing two-dimensional Cartesian coordinate systems in both the image and the preset touch plane area, and converting the two-dimensional coordinates of the touch point in the image into the corresponding actual coordinates on the preset touch plane area; the actual coordinates are the touch position information of the current gesture.
- when the touch gesture is a touch on a touch switch in the preset touch plane area,
- the touch position information can also be obtained by detecting which position of the touch switch is occluded; the occluded position is the touch position information of the touch gesture.
- FIG. 8 is a schematic diagram of current gesture position information determination.
- 801 is the image of the preset touch plane area captured by the camera;
- 802 is the actual preset touch plane area, which may be divided into the touch pad ABCD and the touch switch EF.
- the rectangular area A'B'C'D' in 801 corresponds to the touch pad ABCD of the actual preset touch plane area, and E'F' corresponds to the touch switch EF; the length
- and width of the touch pad in the image are W1' and H' respectively, and the actual touch pad length and width are W1 and H; the touch switch length in the image is W2', and the actual touch switch length is W2.
- two coordinate systems O'X'Y' and O1'X1'Y1' are set in the image: O'X'Y' takes the top-left vertex A' of touch pad A'B'C'D' as the coordinate origin O', the pixel row through O' as the X' axis, and the pixel column through O' as the Y' axis;
- O1'X1'Y1' takes the left-edge center point E' of the touch switch area E'F' in the image as the coordinate origin O1', with the corresponding row as the X1' axis and the corresponding column as the Y1' axis. Correspondingly, two coordinate systems OXY and O1X1Y1 are set on the actual preset touch plane area, with OXY taking the top-left vertex A of touch pad ABCD as origin O, and O1X1Y1 taking the left-edge center point E of touch switch EF as origin O1.
- (x', y') in image 801 is the image coordinate of the current gesture's touch point in the O'X'Y' coordinate system, and the corresponding coordinate in the actual preset touch plane area 802 is (x, y).
- (x1', 0) in image 801 is the image coordinate of the current gesture's touch point in the O1'X1'Y1' coordinate system, and the corresponding coordinate in the actual preset touch plane area 802 is (x1, 0).
- the specific calculation method is as follows:
- for the touch pad, first calculate the coordinates (x', y') of the current gesture's touch point in the O'X'Y' coordinate system and the length W1' and width H' of touch pad A'B'C'D' in image 801; because the camera image of the touch pad differs from the actual touch pad only by a small distortion, the coordinates of the touch point on the actual touch pad ABCD relative to the OXY coordinate system are calculated approximately by proportional scaling: x = x' × W1 / W1', y = y' × H / H'.
- the touch switch can be handled with the same coordinate method: first calculate the coordinates (x1', 0) of the touch point in the O1'X1'Y1' coordinate system and the touch switch length W2' in image 801; the coordinates of the touch point on the actual touch switch relative to the O1X1Y1 coordinate system are then: x1 = x1' × W2 / W2', y1 = 0.
- another method: when the finger occludes the beacon of a specific switch in the touch switch area E'F' of image 801, the information of that beacon is fed back directly as the position information of the current gesture; for example, if the switch with beacon "e'" in the image is occluded, the actual switch touched by the current gesture is determined to be the one with beacon "e". A mapping sketch follows.
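- The proportional image-to-panel mapping derived above, as a small function (the sample numbers are made up):

    def image_to_panel(xp: float, yp: float,
                       w1_img: float, h_img: float,
                       w1: float, h: float):
        # Map (x', y') in the image system O'X'Y' to (x, y) in the actual
        # panel system OXY by proportional scaling (distortion neglected):
        # x = x' * W1 / W1', y = y' * H / H'.
        return (xp * w1 / w1_img, yp * h / h_img)

    # e.g. a 200x100 px panel image of a 300x150 mm panel:
    print(image_to_panel(50, 40, 200, 100, 300, 150))  # -> (75.0, 60.0)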
- whether the three-dimensional static or the three-dimensional dynamic gesture model library needs to be called is determined by first judging whether the current gesture is a static or a dynamic gesture.
- An operation gesture of the three-dimensional gesture is identified according to a model library. For details, see S205, and details are not described here.
- the embodiment of the present invention provides a gesture determination method: by judging whether the area of the current gesture operation in the image information captured by the 2D camera falls within the preset touch plane area, the current gesture is determined to be a touch gesture or a three-dimensional gesture. If it is a touch gesture, the operation gesture is recognized according to the touch position information; if it is a three-dimensional gesture, the pre-established three-dimensional gesture model library is called and the operation gesture of the three-dimensional gesture is recognized.
- three-dimensional and two-dimensional gesture recognition are thus integrated, which is convenient for the user and improves the freedom and comfort of the user's human-computer interaction.
- Embodiment 2 is further refined below.
- Embodiment 3:
- FIG. 9 is a schematic flowchart diagram of another gesture recognition method according to an embodiment of the present invention, which may specifically include the following contents:
- S901 Acquire image information acquired by a 3D camera.
- S902 Detect a current gesture in the image information.
- S903 Determine whether the operation area of the current gesture is a preset touch plane area. If yes, go to S904, otherwise, go to S905.
- S904 When the operation area where the current gesture is located is within the preset touch plane area, determine that the current gesture is a touch gesture, and recognize the operation gesture according to the touch position information of the touch gesture.
- S905 When the operation area where the current gesture is located is not within the preset touch plane area, determine that the current gesture is a three-dimensional gesture, and call the pre-established three-dimensional gesture model library to recognize the operation gesture of the three-dimensional gesture.
- S906 Generate a corresponding operation instruction according to the recognized operation gesture.
- in this embodiment the 2D camera is replaced by a 3D camera, i.e., the acquired image information carries one-dimensional depth information, so the method of determining the operation area of the current gesture in S903 differs; the other steps are the same as in Embodiment 2, so only S903 is elaborated here and the rest is not repeated.
- the preset touch plane area may be a touch pad and/or a touch switch.
- the touch point of the current gesture is determined by the positioning method of Embodiment 2, and the distance L between the touch point and the image capture device is calculated; for example, when the touch point is Q, the distance LsQ between point Q and the camera is calculated.
- the depth value between the image capture device and each pixel of the preset touch plane area is calculated and the minimum depth value Lmin is determined; if L − Lmin ≤ 15 mm, it is determined that the operation area of the current gesture is not within the preset touch plane area, and otherwise that it is. For example, if LsQ ≤ Lmin + 15 mm, the current gesture is determined to be a three-dimensional gesture, and otherwise a touch gesture. A sketch of this depth test follows.
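- A sketch of this depth test under the rule as stated in the claims; `depth_map` and `panel_mask` are assumed inputs (per-pixel camera depths in mm and a boolean mask marking the preset touch plane area):

    import numpy as np

    def is_three_d_gesture(L_mm: float, depth_map: np.ndarray,
                           panel_mask: np.ndarray,
                           margin_mm: float = 15.0) -> bool:
        # Lmin: minimum camera depth over the preset touch plane area.
        Lmin = float(depth_map[panel_mask.astype(bool)].min())
        # L - Lmin <= 15 mm -> not in the touch plane area (3D gesture);
        # otherwise -> touch gesture.
        return (L_mm - Lmin) <= margin_mm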
- the embodiment of the present invention provides a gesture determination method: by judging whether the area of the current gesture operation in the image information captured by the 3D camera falls within the preset touch plane area, the current gesture is determined to be a touch gesture or a three-dimensional gesture. If it is a touch gesture, the operation gesture is recognized according to the touch position information; if it is a three-dimensional gesture, the pre-established three-dimensional gesture model library is called and the operation gesture of the three-dimensional gesture is recognized.
- three-dimensional and two-dimensional gesture recognition are thus integrated, which is convenient for the user and improves the user's comfort in human-computer interaction control.
- the embodiment of the invention further provides a corresponding implementation device for the method for gesture recognition, which further makes the method more practical.
- the gesture recognition apparatus provided by the embodiments of the present invention is described below;
- the gesture recognition apparatus described below and the gesture recognition method described above may be cross-referenced.
- Embodiment 4:
- FIG. 10 is a structural diagram of a gesture recognition apparatus according to an embodiment of the present invention, where the apparatus may include:
- the image acquisition module 1001 is configured to acquire image information collected by the image collection device.
- the image detecting module 1002 is configured to detect a current gesture in the image information.
- the area determining module 1003 is configured to determine an operation area where the current gesture is located.
- the information identification module 1004 is configured to identify the current gesture:
- when the operation area where the current gesture is located is within the preset touch plane area, determining that the gesture is a touch gesture and recognizing the operation gesture according to the touch position of the touch gesture;
- when the operation area where the current gesture is located is not within the preset touch plane area, determining that the gesture is a three-dimensional gesture, and calling the pre-established three-dimensional gesture model library to recognize the operation gesture of the three-dimensional gesture.
- the instruction generation module 1005 is configured to generate a corresponding operation instruction according to the recognized operation gesture.
- the area determining module 1003 may include:
- a first determining unit, configured to detect whether a button on the preset touch plane area used to indicate the presence of a touch signal is turned on; if yes, to determine that the operation area where the current gesture is located is within the preset touch plane area; if no, to determine that it is not; and/or
- a second determining unit, configured to calculate the depth value between the image capture device and each pixel in the preset touch plane area and determine the minimum depth value Lmin; to calculate the distance L between the touch point of the current gesture and the image capture device; and, if L − Lmin ≤ 15 mm, to determine that the operation area where the current gesture is located is not within the preset touch plane area, and otherwise that it is.
- the device may further include:
- an infrared fill-light module, used to emit specifically modulated infrared light and work with the image acquisition device to obtain clear image signals; in this way, a clear image can still be obtained when the ambient lighting is insufficient, which helps recognize gestures better.
- the embodiment of the present invention provides a gesture recognition device: by judging whether the area of the current gesture operation in the image information collected by the image acquisition device falls within a preset touch plane area, the device determines whether the current gesture is a touch gesture or a three-dimensional gesture, integrating three-dimensional and two-dimensional gesture recognition, which is more convenient for users and improves the comfort of human-computer interaction.
- Embodiment 5:
- an embodiment of the present invention further provides an in-vehicle system, which may include:
- the image capturing device 1101 is configured to collect image information.
- the touch panel 1102 is configured to implement an operation of a two-dimensional gesture
- the gesture recognition device 1103 is configured to identify the current gesture, and is the gesture recognition device described in Embodiment 4 above.
- the image capture device can be disposed in the roof light module: placing it on the roof of the vehicle enlarges its sensing area, and mounting it in the existing roof light avoids the cost of a separate mounting; of course, it need not be placed in the roof
- light module and can be positioned according to the user's preference.
- the image capture device could also be made rotatable, i.e., capture 360-degree images without blind angles, but the user would then have to track the device's rotation angle when making a gesture, which is somewhat inconvenient; preferably,
- the image acquisition device is fixed, and the user only needs to perform gesture operations within the sensing area.
- the image acquisition device can use a 3D camera or a 2D camera. Of course, other image acquisition devices can be used when necessary.
- the touch panel 1102 may include a touch pad and/or a touch switch, and the touch pad and the touch switch may be integrated on the touch panel as a whole, i.e., with no gap between them, forming a single board; this avoids the trouble of cleaning the touch pad and is also aesthetically pleasing.
- when the image capture device is a 2D camera and the touch panel includes, for example, a touch pad and a touch switch, refer to FIG. 12 and FIG. 13: FIG. 12 is a schematic structural diagram of a 2D-camera-based touch pad provided by an embodiment of the present invention, and FIG. 13 is a schematic structural diagram of a 2D-camera-based touch switch provided by an embodiment of the present invention; the touch pad and the touch switch may include the following units.
- the touch pad includes:
- the panel unit 1201 can be a liquid crystal display or a plastic panel or a metal panel or any other material panel;
- the circuit board unit 1202, on which buttons are disposed;
- the button unit 1203, which can be a micro-travel, non-self-locking elastic switch;
- the support unit 1204 includes a support member and a casing, and the material thereof is preferably plastic, and of course other materials.
- reference numeral 1205 indicates a touch operation such as a finger click or slide with the palm facing the touch pad ABCD; reference numeral 1206 indicates a touch operation such as a click or slide with the fingernail, with the back of the hand facing the touch pad ABCD.
- the touch switch includes:
- the panel unit 1301 with a beacon may be a liquid crystal display or a plastic panel or a metal panel or any other material panel;
- the light guide plate unit 1302 is preferably made of a transparent plastic material, and may of course be other materials, but preferably should be transparent;
- circuit board unit 1303 wherein the circuit board is provided with a button and a lamp;
- the button unit 1304 can be a micro-stroke non-self-locking elastic switch
- the lamp unit 1305 is provided, and the lamp is generally a small-sized lamp with a small power consumption, such as an LED lamp;
- the support unit 1306 includes a support member and a casing, and the material thereof is preferably plastic, and of course other materials.
- 1307 indicates a touch operation such as a finger click or slide with the palm facing the touch switch EF; reference numeral 1308 indicates a touch operation such as a click or slide with the fingernail, with the back of the hand facing the touch switch EF.
- when the image capture device is a 3D camera and the touch panel includes, for example, a touch pad and a touch switch, refer to FIG. 14 and FIG. 15: FIG. 14 is a schematic structural diagram of a 3D-camera-based touch pad provided by an embodiment of the present invention, and FIG. 15 is a schematic structural diagram of a 3D-camera-based touch switch according to an embodiment of the present invention; the touch pad and the touch switch may include the following units.
- the touch pad includes:
- the panel unit 1401 can be a liquid crystal display or a plastic panel or a metal panel or any other material panel;
- the support unit 1402 includes a support member and a casing, and the material thereof is preferably plastic, and of course other materials.
- when the 3D camera is used as the recognition sensor, the structure of the touch pad is simpler: no electronic components or circuit boards are needed, i.e., a touch pad without a sensing unit can be used, which saves user cost.
- the touch switch includes:
- a panel unit with a beacon, a light guide plate unit, a circuit board unit, a lamp unit, and a support unit.
- the panel unit 1501 with a beacon may be a liquid crystal display or a plastic panel or a metal panel or any other material panel;
- the light guide plate unit 1502 is preferably made of a transparent plastic material, and may of course be other materials, but preferably should be transparent;
- circuit board unit 1503 wherein the circuit board is provided with a lamp
- the support unit 1504 includes a support member and a casing, and its material is preferably plastic, though other materials are of course possible;
- the lamp unit 1505 is provided, and the lamp is generally a small-sized lamp that consumes less power, such as an LED lamp.
- compared with the structural diagram of the 2D-camera-based touch switch shown in FIG. 13, it can be seen that when the 3D camera is used as the recognition sensor the touch switch lacks the button unit, which makes its structure and function simpler and can save user cost to some extent.
- an in-vehicle system includes many more units than those described above;
- this embodiment describes in detail only the units improved over the prior art.
- the gesture recognition device is the gesture recognition device described in the foregoing Embodiment 4.
- the embodiment of the present invention provides an in-vehicle system for gesture recognition: by judging whether the area of the current gesture operation in the image information collected by the image acquisition device falls within a preset touch plane area, the system determines whether the current gesture is a touch
- gesture or a three-dimensional gesture, integrating three-dimensional and two-dimensional gesture recognition, which is more convenient for the user and improves the control comfort of the user's human-computer interaction.
- moreover, the structure and function of the touch pad of the present application are simpler, which to some extent saves the user's cost of use.
- FIG. 16 is a structural block diagram of an in-vehicle system that can adopt gesture recognition control according to an embodiment of the present invention.
- the control unit 1601 includes the gesture recognition device and is the control core of the in-vehicle system; it acquires image information from the camera 1602, detects the operator's gesture in real time from the acquired information, and identifies the detected gesture with the gesture recognition unit;
- the control unit 1601 is also configured to detect the button signals 1607 and 1608 inside the touch pad and touch switch of the central control panel, and to determine from the button signals the area in which the current finger touch operation is located.
- the control unit 1601 controls the infrared fill light module 1603 to emit specific modulated infrared light, and acquires a clear image signal in conjunction with the image capture device. In this way, when the current ambient lighting is insufficient, a clear image can still be obtained, which helps to better recognize gestures.
- the control unit 1601 can also be configured to detect the car headlight switch signal 1606: when it detects that the car headlight is turned on, it correspondingly turns on the central control panel touch-switch backlight 1604; when it detects that the car headlight is turned off, it correspondingly turns the backlight 1604 off.
- the control unit 1601 can also control the interface unit 1605, and is configured to send related information of the three-dimensional gesture and the touch gesture detected in real time, such as location information, to a specific electronic function module, and execute corresponding gesture instructions to implement human-computer interaction.
- the interface unit is an interface matched to a specific electronic function module and communicates with it, usually a CAN bus, a LIN bus, or an analog level signal interface; a dispatch sketch follows.
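- A hedged dispatch sketch using the python-can package over a CAN bus, one of the interface types named above; the channel, arbitration ID, and payload layout are all assumptions, since the patent fixes none of them:

    import can  # python-can

    def send_gesture_instruction(bus: can.Bus, command_id: int,
                                 payload: bytes) -> None:
        # Forward a recognized gesture instruction to an electronic
        # function module as a single CAN frame.
        msg = can.Message(arbitration_id=command_id,
                          data=list(payload),
                          is_extended_id=False)
        bus.send(msg)

    # usage (hypothetical channel and ID):
    # bus = can.Bus(interface="socketcan", channel="can0")
    # send_gesture_instruction(bus, 0x321, b"\x01")  # e.g. OPEN_SUNROOF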
- the gesture recognition device is embedded in the control unit of the in-vehicle system and is the gesture recognition device described in the foregoing method, device, and system embodiments; the user's three-dimensional or two-dimensional gestures can be recognized while driving, greatly improving the comfort of the user's human-computer interaction.
- the steps of a method or algorithm described in connection with the embodiments disclosed herein can be implemented directly in hardware, a software module executed by a processor, or a combination of both.
- the software module can reside in random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the technical field.
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- User Interface Of Digital Computer (AREA)
- Position Input By Displaying (AREA)
Abstract
A gesture recognition method: by judging whether the area where the current gesture operation is located in the collected image information falls within a preset touch plane area, the current gesture is determined to be a touch gesture or a three-dimensional gesture. If it is a touch gesture, the operation gesture is recognized according to the touch position information of the touch gesture; if the current gesture is a three-dimensional gesture, a pre-established three-dimensional gesture model library is called, the operation gesture of the three-dimensional gesture is recognized, and a corresponding operation instruction is generated according to the operation gesture. The method integrates three-dimensional and two-dimensional gesture recognition, is more convenient for users, and improves the comfort of the user's human-computer interaction. In addition, a corresponding implementation device and in-vehicle system are provided for the gesture recognition method, making the method more practical; the device and system have corresponding advantages.
Description
This application claims priority to Chinese Patent Application No. 201610947837.6, entitled "Gesture recognition method, device, and in-vehicle system" and filed with the Chinese Patent Office on October 25, 2016, the entire contents of which are incorporated herein by reference.

The present application relates to the fields of human-computer interaction and automobile manufacturing, and in particular to gesture recognition.

With the development of computer technology, human-computer interaction technology has also developed rapidly: from pattern recognition — such as speech recognition and Chinese character recognition, which made it possible for operators and computers to interact at a level close to natural language or restricted natural language — to human-computer interaction through graphics, more and more fields apply human-computer interaction technology.

The application potential of human-computer interaction technology has begun to show: motion recognition applied to wearable computers, invisibility technology, and immersive games; haptic interaction applied to virtual reality, teleoperated robots, and telemedicine; and speech recognition applied to call routing, home automation, voice dialing, and similar scenarios. Human-computer interaction solution providers keep introducing innovative technologies, such as fingerprint recognition, side-swipe fingerprint recognition, and pressure-sensitive touch. Developing applications of these hotspot technologies is both an opportunity and a challenge, and vision-based gesture recognition is currently a hotspot of human-computer interaction.

Gesture recognition technology achieves control through movements of various parts of the human body, generally the face and hands. Gesture recognition is mainly divided into two-dimensional and three-dimensional gesture recognition. At present, gesture control is, after voice control, another technology sought after by many car manufacturers: it lets users control in-vehicle functions while driving with a simple wave of the hand or a few motions, and compared with voice control it is more accurate and more convenient.

Existing gesture recognition is implemented based on infrared sensors, camera detection, resistive sensing, or capacitive sensing. Generally, infrared-sensor-based and camera-based schemes perform three-dimensional gesture recognition, while resistive and capacitive schemes perform two-dimensional gesture recognition. For example, the touch panel of the touch LCD in an existing car's central control panel is usually resistive or capacitive: the user can only click on the touch screen with a finger or a stylus and cannot operate away from the touch screen, i.e., only two-dimensional gestures are recognized. As another example, when an infrared sensor is used in a car, simple three-dimensional gesture recognition — detecting a wave, an approach, or a retreat — can be achieved, but two-dimensional gestures cannot be recognized.

It can be seen that in the prior art, the control and recognition of three-dimensional and two-dimensional gestures are implemented by two separate systems. With two-dimensional-only recognition, the user cannot operate away from the screen, which causes great inconvenience — in particular, if the touch screen breaks, gesture recognition becomes entirely unusable. With three-dimensional-only recognition, the limited set of recognizable motions greatly restricts the application of gesture control and likewise inconveniences the user.
Summary of the Invention

Embodiments of the present invention provide a gesture recognition method, device, and in-vehicle system that integrate three-dimensional and two-dimensional gesture recognition, are more convenient for users, and improve the comfort of the user's human-computer interaction.

To solve the above technical problem, embodiments of the present invention provide the following technical solutions.

In one aspect, an embodiment of the present invention provides a gesture recognition method, including:

acquiring image information collected by an image acquisition device;

detecting a current gesture in the image information;

determining the operation area where the current gesture is located;

when the operation area where the current gesture is located is within a preset touch plane area, determining that the current gesture is a touch gesture, and recognizing the operation gesture according to the touch position information of the touch gesture;

when the operation area where the current gesture is located is not within the preset touch plane area, determining that the current gesture is a three-dimensional gesture, and calling a pre-established three-dimensional gesture model library to recognize the operation gesture of the three-dimensional gesture;

generating a corresponding operation instruction according to the recognized operation gesture.
Preferably, when the image information is two-dimensional plane information, the step of determining the operation area where the current gesture is located includes:

detecting whether a button on the preset touch plane area used to indicate the presence of a touch signal is turned on; if yes, determining that the operation area where the current gesture is located is within the preset touch plane area; if no, determining that it is not within the preset touch plane area.

Preferably, when the image information includes two-dimensional plane information and one-dimensional depth image information, the step of determining the operation area where the current gesture is located includes:

calculating the depth value between the image acquisition device and each pixel in the preset touch plane area, and determining the minimum depth value Lmin;

calculating the distance L between the touch point of the current gesture and the image acquisition device;

if L − Lmin ≤ 15 mm, determining that the operation area where the current gesture is located is not within the preset touch plane area; otherwise, determining that it is within the preset touch plane area.

Preferably, the current gesture is any one or any combination of the following:

gestures of touching or clicking with the fingertip pad, the side of a finger, or the fingernail.

Preferably, calling the pre-established three-dimensional gesture model library is:

continuously acquiring, in a time series, multiple frames of image information containing the current gesture; if the current gesture is the same in the multiple frames and the position of the current gesture does not change, the current gesture is a static gesture and the three-dimensional static gesture model library is retrieved; otherwise, the three-dimensional dynamic gesture model library is called.
In another aspect, an embodiment of the present invention provides a gesture recognition device, including:

an image acquisition module, configured to acquire image information collected by an image acquisition device;

an image detection module, configured to detect a current gesture in the image information;

an area determining module, configured to determine the operation area where the current gesture is located;

an information recognition module, configured to recognize the current gesture:

when the operation area where the current gesture is located is within a preset touch plane area, determining that the gesture is a touch gesture and recognizing the operation gesture according to the touch position of the touch gesture;

when the operation area where the current gesture is located is not within the preset touch plane area, determining that the gesture is a three-dimensional gesture, and calling a pre-established three-dimensional gesture model library to recognize the operation gesture of the three-dimensional gesture;

and an instruction generation module, configured to generate a corresponding operation instruction according to the recognized operation gesture.

Preferably, the area determining module includes:

a first determining unit, which detects whether a button on the preset touch plane area used to indicate the presence of a touch signal is turned on; if yes, determines that the operation area where the current gesture is located is within the preset touch plane area; if no, determines that it is not; and/or

a second determining unit, which calculates the depth value between the image acquisition device and each pixel in the preset touch plane area and determines the minimum depth value Lmin;

calculates the distance L between the touch point of the current gesture and the image acquisition device;

and, if L − Lmin ≤ 15 mm, determines that the operation area where the current gesture is located is not within the preset touch plane area, and otherwise that it is within the preset touch plane area.
本发明实施例还提供了一种手势识别的车载系统,包括图像采集设备、触摸面板以及上述任一种手势识别装置。
Preferably, the touch panel includes a touch pad and/or a touch switch; when the image capture device is a 2D camera, the touch pad includes:
a panel unit, a circuit board unit, a key unit and a support unit;
the touch switch includes:
a panel unit with beacons, a circuit board unit, a light guide plate unit, a key unit, a lamp unit and a support unit.
Preferably, the touch panel includes a touch pad and/or a touch switch; when the image capture device is a 3D camera, the touch pad includes:
a panel unit and a support unit;
the touch switch includes:
a panel unit with beacons, a light guide plate unit, a circuit board unit, a lamp unit and a support unit.
An embodiment of the present invention provides a gesture recognition method: by judging whether the region in which the current gesture operation is located in the captured image information lies on a preset touch plane area, the current gesture is determined to be either a touch gesture or a three-dimensional gesture. If it is a touch gesture, the operating gesture is recognized according to the touch position information of the touch gesture; if the current gesture is a three-dimensional gesture, a pre-established three-dimensional gesture model library is invoked to recognize the operating gesture of the three-dimensional gesture, thereby integrating three-dimensional and two-dimensional gesture recognition.
If gesture recognition is one-sided, i.e., directed only at two-dimensional gestures or only at three-dimensional gestures, then in the two-dimensional case operation is possible only on a touch screen with a sensing function and cannot be performed away from the screen, while in the three-dimensional case the limited set of recognizable movements greatly restricts the application of gesture control. The present application integrates three-dimensional and two-dimensional gesture recognition, so that in-air three-dimensional gestures and two-dimensional gestures on the touch plane area can both be recognized, which is more convenient for users and improves the control comfort of human-computer interaction.
In addition, the embodiments of the present invention provide a corresponding implementation apparatus and an in-vehicle system for the gesture recognition method, which further make the method more practical; the apparatus and the system have corresponding advantages.
To explain the technical solutions of the embodiments of the present invention or of the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and a person of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a structural block diagram of a gesture recognition in-vehicle system provided by an embodiment of the present invention;
FIG. 2 is a schematic flowchart of a gesture recognition method provided by an embodiment of the present invention;
FIG. 3 shows some three-dimensional gestures of the three-dimensional dynamic model library provided by an embodiment of the present invention;
FIG. 4 is a schematic diagram of image processing and gesture recognition provided by an embodiment of the present invention;
FIG. 5 is a schematic flowchart of another gesture recognition method provided by an embodiment of the present invention;
FIG. 6 is a schematic diagram of gesture operation modes and positions provided by an embodiment of the present invention;
FIG. 7 is a schematic diagram of locating a current gesture provided by an embodiment of the present invention;
FIG. 8 is a schematic diagram of determining the position information of a current gesture provided by an embodiment of the present invention;
FIG. 9 is a schematic flowchart of another gesture recognition method provided by an embodiment of the present invention;
FIG. 10 is a structural diagram of a gesture recognition apparatus provided by an embodiment of the present invention;
FIG. 11 is a structural block diagram of a gesture recognition in-vehicle system provided by an embodiment of the present invention;
FIG. 12 is a schematic structural diagram of a touch pad based on a 2D camera provided by an embodiment of the present invention;
FIG. 13 is a schematic structural diagram of a touch switch based on a 2D camera provided by an embodiment of the present invention;
FIG. 14 is a schematic structural diagram of a touch pad based on a 3D camera provided by an embodiment of the present invention;
FIG. 15 is a schematic structural diagram of a touch switch based on a 3D camera provided by an embodiment of the present invention;
FIG. 16 is a structural block diagram of an in-vehicle system with gesture recognition control provided by an embodiment of the present invention.
To enable those skilled in the art to better understand the solutions of the present invention, the present invention is further described in detail below with reference to the drawings and specific implementations. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
The terms "first", "second", "third", "fourth" and the like in the specification, the claims and the above drawings of the present application are used to distinguish different objects, not to describe a specific order. Furthermore, the terms "including" and "having" and any variations thereof are intended to cover non-exclusive inclusion. For example, a process, method, system, product or device that includes a series of steps or units is not limited to the listed steps or units, but may include steps or units that are not listed.
Gesture recognition in the prior art is one-sided, i.e., directed only at two-dimensional gestures or only at three-dimensional gestures. Systems that recognize only two-dimensional gestures permit operation only on a touch screen with a sensing function, not away from the screen; systems that recognize only three-dimensional gestures additionally need a touch screen with a sensing function to assist in recognizing two-dimensional gestures. In view of this, the present application integrates three-dimensional and two-dimensional gesture recognition, so that in-air three-dimensional gestures and two-dimensional gestures on the touch area can both be recognized, which is more convenient for users, improves the control comfort of human-computer interaction, and saves users' usage cost to a certain extent.
Based on the above technical solutions of the embodiments of the present invention, some possible application scenarios involved in these technical solutions are first introduced by example with reference to FIG. 1, a structural block diagram of a gesture recognition in-vehicle system provided by an embodiment of the present invention.
As shown in FIG. 1, 101 is a camera installed in the dome light module of the automobile; 102 is the sensing region of the camera; 103 is a gesture; 104 is the center-control panel, which includes a rectangular touch pad ABCD and a touch switch EF.
In the prior art, 101 is generally an infrared camera serving as a three-dimensional gesture recognition sensor, and 104 is a touch panel with a sensing unit. In-air gestures are thus detected and recognized by the infrared camera, while two-dimensional touch gestures on the center-control panel are recognized by the touch sensing unit.
In the solution provided by the present application, the camera, serving as the gesture recognition sensor, can be a 2D camera or a 3D camera, and the center-control panel does not need to carry a touch sensing unit. All gestures (two-dimensional and three-dimensional) are recognized by the camera.
This not only greatly improves the user's operating comfort, but also simplifies the structure of the control panel, which can save a certain amount of manufacturing cost.
It should be noted that the above application scenario is shown only to facilitate understanding of the idea and principle of the present application, and the implementations of the present application are not limited in this respect; on the contrary, they can be applied to any applicable scenario.
Having introduced the technical solutions of the embodiments of the present invention, the various non-limiting implementations of the present application are described in detail below.
Embodiment 1:
First referring to FIG. 2, a schematic flowchart of a gesture recognition method provided by an embodiment of the present invention, this embodiment may include the following.
S201: Acquire image information captured by an image capture device.
S202: Detect the current gesture in the image information.
S203: Judge the operation region in which the current gesture is located.
S204: When the operation region in which the current gesture is located lies on a preset touch plane area, determine that the current gesture is a touch gesture, and recognize the operating gesture according to the touch position information of the touch gesture.
S205: When the operation region in which the current gesture is located does not lie on the preset touch plane area, determine that the current gesture is a three-dimensional gesture, and invoke a pre-established three-dimensional gesture model library to recognize the operating gesture of the three-dimensional gesture.
S206: Generate a corresponding operation instruction according to the recognized operating gesture.
Specifically, S201 is as follows:
The image capture device is a 2D camera or a 3D camera, and the image information differs accordingly. The image information captured by the 2D camera is two-dimensional grayscale plane image information or two-dimensional color plane image information; the image information captured by the 3D camera includes two-dimensional grayscale plane image information and one-dimensional depth image information.
Specifically, S202 is as follows:
When detecting the image information captured by the image capture device, it is necessary to judge whether the image information contains hand information. When there is no hand, the acquired image information is discarded and the next image is judged. When the detected image information contains incomplete hand information, a reminder can be given, for example by voice, or the angle of the image capture device can be adjusted automatically (other reminder mechanisms are also possible), so that its sensing region contains the current hand information. When the detected image contains complete gesture information, proceed to S203.
It should be noted that the current gesture can be a static gesture or a dynamic gesture. For example, in the touch area, the static gesture can be a key-press action and the dynamic gesture can be a swipe action. Multiple frames of image information containing the current gesture can be acquired continuously in time sequence; if the current gesture is the same in the multiple frames and its position has not changed, the current gesture is a static gesture; otherwise, it is a dynamic gesture.
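For illustration, the frame-comparison rule above can be prototyped as follows. This is a minimal sketch, assuming each frame has already been reduced to a gesture label and a hand position (for example, a centroid in pixels); the function name and the position tolerance are assumptions of this example, since the text only says the position "has not changed".

```python
def classify_gesture_motion(frames, pos_tol=5.0):
    """Decide whether the current gesture is static or dynamic.

    frames: time-ordered list of (gesture_label, (x, y)) tuples, one per
    captured frame containing the current gesture.
    pos_tol: maximum position drift (pixels) still treated as "unchanged";
    this tolerance is an assumption of the sketch.
    """
    labels = [label for label, _ in frames]
    xs = [pos[0] for _, pos in frames]
    ys = [pos[1] for _, pos in frames]
    same_gesture = all(label == labels[0] for label in labels)
    unmoved = (max(xs) - min(xs) <= pos_tol) and (max(ys) - min(ys) <= pos_tol)
    # Same gesture in every frame with no position change -> static gesture
    # (static model library); otherwise dynamic (dynamic model library).
    return "static" if (same_gesture and unmoved) else "dynamic"
```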
Specifically, S203 is as follows:
The image capture device captures image information within a certain region, which can be an inverted-pyramid region; this region consists of the preset touch plane area and the in-air sensing region, the latter being the in-air portion of the inverted pyramid. The preset touch plane area is the plane area for two-dimensional gesture operation, generally a touch pad; a touch gesture (two-dimensional gesture) can be a touch gesture on the touch pad or a touch gesture on the touch switch.
The touch gesture can touch or tap with the finger pad, the side of a finger, a fingernail, or any combination thereof; of course, any part of the body can also be used. If a traditional capacitive touch screen is to be operated with the side of a finger or a fingernail, high precision is usually required, especially for fingernail operation. A resistive touch screen can recognize fingernail operation, but the fingernail damages the screen so much that touch-position recognition becomes inaccurate and the screen's service life is shortened, which is not worthwhile. The present application can instead use a touch panel without a sensing function; the panel can be a plastic panel, a metal panel, a liquid crystal display, or a panel of any other material. In this way, fingernail operation imposes no special precision requirement, although an unsuitable choice of material could still shorten the panel's service life.
The specific method for judging the operation region in which the current gesture is located depends on the image capture device; see Embodiment 2 and Embodiment 3 for details, which are not repeated here.
Specifically, S204 is as follows:
The preset touch plane area can be a touch pad, a touch switch, or a combination of the two; the number of touch pads and touch switches is not limited: there can be one pad or several spliced together, and one or more switches. The touch pad and the touch switch can be made of a plastic panel, a metal panel, or a liquid crystal display; other materials, or combinations of the above, can also be used where necessary.
The touch position information can be coordinate information or the position of a specific touch point, depending on the camera used and the specific touch area. For example, when the operating gesture is at the touch switch of the touch area, the coordinate information of the touch point can be fed back according to the established coordinate system, or the occluded position can be sent directly: if key a of the touch switch is detected as occluded, the position of key a is sent, indicating that the user is pressing key a. Different positions correspond to different operation instructions, which can be preset by the manufacturer or by the user according to need. For example, if key a represents the instruction "open the sunroof", then when the user presses key a, the sunroof is opened according to the detected position or coordinate information of key a. The operating gesture can therefore be recognized from the position information.
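As a rough sketch of the position-to-instruction mapping described above, a preset lookup table can be consulted once the occluded key is known. The key identifiers and instructions below are hypothetical, following the key-a/sunroof example in the text; in practice the table would be preset by the manufacturer or the user.

```python
# Hypothetical preset mapping from key positions to operation
# instructions; "a" -> open the sunroof follows the example above.
KEY_TO_INSTRUCTION = {
    "a": "open_sunroof",
    "b": "close_sunroof",   # illustrative entry, not from the text
}

def instruction_for_key(key_id):
    # The recognizer reports which key position (or beacon) the touch
    # point occludes; the preset instruction is simply looked up.
    return KEY_TO_INSTRUCTION.get(key_id)
```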
Specifically, S205 is as follows:
An in-air three-dimensional gesture can be static or dynamic: directly making an 'OK' sign is a static gesture, while changing from an open palm to a fist is a dynamic gesture. Correspondingly, static gestures correspond to the three-dimensional static gesture model library and dynamic gestures to the three-dimensional dynamic gesture model library. It is therefore necessary to first identify whether the gesture is static or dynamic before invoking a library. Multiple frames of image information containing the current gesture can be acquired continuously in time sequence; if the current gesture is the same in the multiple frames and its position has not changed, the current gesture is static and the three-dimensional static gesture model library is invoked; otherwise, the three-dimensional dynamic gesture model library is invoked.
The model library is a preset series of gestures; the operation instruction corresponding to each preset gesture can be a common default or customized to the user's requirements, and when operating with gestures the user can follow the gestures in the model library, thereby achieving human-computer interaction. For example, FIG. 3 lists some common gesture movements in the three-dimensional dynamic model library: 301 is a gesture in which an extended index finger retracts into a fist; 302 is a gesture in which an extended index finger rotates counterclockwise; 303 is a gesture in which an extended index finger rotates clockwise; 304 is a gesture in which the hand changes from an open palm to a fist; 305 is a palm wave from left to right; 306 is a palm wave from right to left. Of course, FIG. 3 lists only some of the gestures in the model library; there are many more in practical applications, and different gestures correspond to different operation instructions. For example, the driver moving a single finger back and forth means answering a call, selecting or confirming; waving an open palm to the right means hanging up or canceling; a V-shaped gesture means pause, play or a custom operation; rotating one finger clockwise means increasing the volume or zooming out the navigation map; rotating one finger counterclockwise means decreasing the volume or zooming in the navigation map; closing the five fingers into a circle and moving right means next track or return to the main menu, while moving left means previous track or return to the main menu. If the user makes a sweeping motion in the region near a window, the vehicle determines the amplitude of the motion and opens the window to a degree decided by the gesture amplitude. Snapping the fingers near the steering wheel can start the wipers or the indicator lights; twisting the wrist in front of the instrument panel can turn on the air conditioner or the radio; pointing a finger at the sunroof opens the sunroof automatically; pressing the index finger to the lips means lowering the phone's speaker volume; shaping the hand like a handset means making a phone call; a thinker-like chin-grasping motion means "I want to retrieve information"; and a thumbs-up means "agree". Of course, the gestures and their corresponding operations are not limited to those described above and can be added or modified according to users' needs in actual implementations.
It should be noted that the information in the model library can be the three-dimensional gesture information, the execution command information corresponding to the three-dimensional gesture, or both. For example, the library can store only the gesture of waving an open palm to the right, or only the corresponding commands to be executed (hang up, cancel), or both.
Specifically, S206 is as follows:
The operation instruction can be position information, i.e., the position information is sent directly to an electronic function module through the interface unit, and the electronic function module executes the action according to a preset correspondence between instructions and positions; or it can be the action instruction identified from the position information, sent directly through the interface unit to the electronic function module, which executes it. Correspondingly, the preset correspondence between position information and operations can be preset either in the electronic function module or in the information recognition module. For example, when the operating gesture is at the touch switch of the preset touch plane area, the operation instruction can be the coordinate or specific position information of key a, or the instruction that key a represents, such as opening the sunroof.
Specifically, for the logic of image processing and gesture recognition, see FIG. 4, a schematic diagram of image processing and gesture recognition provided by the present application. Here, 401 is the system capturing real-time image information: for a 2D camera this can be a two-dimensional grayscale plane image signal or a two-dimensional color plane image signal; for a 3D camera it can be a two-dimensional grayscale plane image signal together with a one-dimensional depth image signal.
402 is image preprocessing, usually performed on the plane image signal, i.e., extracting the hand's contour, convexity, corners or skeleton from the image; other gesture features are also possible.
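A contour-and-convexity extraction of the kind referred to at 402 could be sketched with OpenCV as follows; the fixed threshold and the assumption that the hand is the largest bright region are simplifications of this example, not details given in the text.

```python
import cv2

def extract_hand_features(gray_frame, thresh=60):
    """Sketch of preprocessing step 402: segment the hand from a
    grayscale frame and return its contour and convex hull as features.

    Assumes the hand is the largest connected bright region after a
    fixed binary threshold, which is a simplification for illustration.
    """
    _, mask = cv2.threshold(gray_frame, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None, None            # no hand information in this frame
    hand = max(contours, key=cv2.contourArea)
    hull = cv2.convexHull(hand)      # the "convexity" feature of step 402
    return hand, hull
```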
403 is image processing, i.e., further processing, such as compression and optimization, of the gesture features acquired in real time by preprocessing. The specific processing depends on the gesture type and gesture features: for a static gesture, when the hand contour is extracted, the gesture features are optimized; for a dynamic gesture, multiple hand contours over consecutive time frames must be extracted, so compression is performed first.
404 is acquiring the sample model, i.e., obtaining a sample model for the current gesture, according to the gesture type, after image processing. For example, a static gesture model can be obtained from collected samples through sample training, using the same image processing and feature extraction methods as in steps 402 and 403; similarly, a dynamic gesture model can be obtained by continuously extracting gesture contour samples over the time sequence with the static gesture model's method, followed by sample training.
405 is gesture recognition and tracking. A static gesture only needs recognition, whereas a dynamic gesture also needs tracking, since multiple frames over a continuous period must be collected. The gesture feature signal acquired in real time at 403 is processed and then, together with the gesture sample model of 404, recognized in real time by a specific algorithm. The recognition algorithm can be any one or any combination of the following: the Boosting algorithm, the Random Forest algorithm, the Support Vector Machine (SVM) algorithm, the Neural Network algorithm, the Deep Learning algorithm, and the Hidden Markov Model (HMM) algorithm; other algorithms can also be used where necessary.
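The text leaves the algorithm choice open, so as one hedged example, the SVM option can be sketched with scikit-learn. The fixed-length feature vectors (for example, contour points resampled and flattened) are an assumption of this sketch; any of the other listed algorithms could be substituted.

```python
import numpy as np
from sklearn.svm import SVC

def train_gesture_svm(sample_features, sample_labels):
    """Train the sample model of step 404 as an SVM classifier.

    sample_features: (n_samples, n_features) array of gesture feature
    vectors obtained by the processing of steps 402-403.
    sample_labels: the gesture class of each training sample.
    """
    model = SVC(kernel="rbf", gamma="scale")
    model.fit(np.asarray(sample_features), np.asarray(sample_labels))
    return model

def recognize_gesture(model, live_feature_vector):
    # Step 405: classify the real-time feature signal against the
    # trained sample model to recognize the operating gesture.
    return model.predict(np.asarray([live_feature_vector]))[0]
```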
406 is sending the gesture instruction, i.e., sending the generated operation instruction to the execution unit. For example, the currently recognized gesture signal can be sent in real time through the interface unit to a specific electronic function module, which executes the corresponding gesture instruction, thereby achieving human-computer interaction.
This embodiment of the present invention provides a gesture recognition method: by judging whether the region in which the current gesture operation is located in the captured image information lies on a preset touch plane area, the current gesture is determined to be a touch gesture or a three-dimensional gesture. If it is a touch gesture, the operating gesture is recognized according to the touch position information of the touch gesture; if it is a three-dimensional gesture, a pre-established three-dimensional gesture model library is invoked to recognize the operating gesture. This integrates three-dimensional and two-dimensional gesture recognition: in-air three-dimensional gestures and two-dimensional gestures in the touch area can both be recognized, which is more convenient for users and improves the flexibility and comfort of human-computer interaction.
In actual operation, if a 2D camera is used as the recognition sensor, Embodiment 1 is further improved as follows for greater convenience.
Embodiment 2:
Referring to FIG. 5, a schematic flowchart of another gesture recognition method provided by an embodiment of the present invention, this embodiment may specifically include the following.
S501: Acquire image information captured by a 2D camera.
S502: Detect the current gesture in the image information.
S503: Judge whether the operation region of the current gesture is the preset touch plane area; if so, proceed to S504; otherwise, proceed to S505.
S504: Determine that the current gesture is a touch gesture, and recognize the operating gesture according to the touch position information of the touch gesture.
S505: Determine that the current gesture is a three-dimensional gesture, and invoke a pre-established three-dimensional gesture model library to recognize the operating gesture of the three-dimensional gesture.
S506: Generate a corresponding operation instruction according to the recognized operating gesture.
When a complete gesture is detected in the image information captured by the 2D camera, and the preset touch area is divided into a touch pad and a touch switch, the current gesture is one of the three types shown in FIG. 6: 601 is the 2D camera, with S denoting the center of the camera's sensing surface; 602 is a three-dimensional gesture operation, with Q being the finger's operating position in the air; 603 is a touch operation such as tapping or sliding with the fingernail, the back of the hand facing the touch pad ABCD of the touch area, with P being the finger's touch position on the touch pad ABCD; 604 is a touch operation such as tapping or sliding with the palm facing the touch switch EF of the touch area, with M being the finger's touch position on the touch switch EF.
The specific method for judging which operation region the current gesture is located in is as follows:
Judge whether the region in which the current gesture operation is located in the image information overlaps the preset touch plane area. If not, determine that the current gesture is in the in-air region of three-dimensional gestures; if so, further detect whether a key on the circuit board of the preset touch plane area is conducting. If it is, determine that the current gesture is in the preset touch plane area; otherwise, determine that it is in the in-air region of three-dimensional gestures. The key can be a switch located on the circuit board of the preset touch plane area that conducts when there is a touch action on the preset touch plane area; it can be a micro-travel, non-latching elastic switch, or any other device capable of sending a corresponding signal when the preset touch plane area is touched. The preset touch plane area can be the touch pad ABCD or the touch switch EF.
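Under a 2D camera, the two-stage decision just described reduces to an image-overlap test followed by a key-conduction test. A minimal sketch, assuming the overlap result and the key signal are already available as inputs:

```python
def locate_gesture_region_2d(overlaps_touch_plane, key_conducting):
    """Two-stage operation-region decision for a 2D camera.

    overlaps_touch_plane: whether the gesture region in the image
    overlaps the preset touch plane area.
    key_conducting: whether a key on the touch plane's circuit board
    (e.g., a micro-travel non-latching elastic switch) is conducting.
    """
    if not overlaps_touch_plane:
        return "in_air"               # in-air region of 3D gestures
    # In a 2D image a hand hovering above the panel also overlaps it,
    # so the key signal is what disambiguates touch from hover.
    return "touch_plane" if key_conducting else "in_air"
```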
Once it is determined whether the current gesture is located on the preset touch plane area, it can be decided whether the current gesture is a touch gesture or a three-dimensional gesture. When the current gesture is determined to be a touch gesture, the operating gesture can be recognized by determining its touch position information. Specifically, the current gesture is first located, i.e., the touch point or position point is determined, as follows:
For example, please refer to FIG. 7, a schematic diagram of locating the current gesture; the current gesture can be a three-dimensional gesture operation above the preset touch plane area or a touch operation. 701 is a single-finger operating gesture using the index finger, such as gesture 301, 302 or 303 in the model library of FIG. 3. 702 is any gesture other than an index-finger operation, such as gesture 304, 305 or 306 in the model library of FIG. 3.
For operation with a single finger, i.e., the gesture represented by 701, the touch point is located as follows: T in the figure is the top point (highest point) of the finger of gesture 701 in the whole image; L and R are the edge pixels on the left and right sides of the finger on the row Δ below pixel T; and G is the center of pixels L and R on that same row, i.e., the position point or touch point of the current gesture. The range of Δ is set in this apparatus to 5mm-15mm, because finger widths differ between individuals and between countries and regions. For the gesture represented by 702, H is the top point (highest pixel) of the gesture in the whole image, i.e., the position point or touch point of the current gesture.
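The locating scheme of FIG. 7 can be sketched directly on a binary hand mask. The conversion of Δ from millimetres to pixels is assumed to be done by the caller, and the tie-breaking choice of the top point is an assumption of this example:

```python
import numpy as np

def locate_touch_point(hand_mask, delta_px, single_finger=True):
    """Locate the position/touch point per the scheme of FIG. 7.

    hand_mask: binary 2D array (row 0 at the top), nonzero on the hand.
    delta_px: the offset delta (5mm-15mm in the text) already converted
    to pixels by the caller.
    """
    rows, cols = np.nonzero(hand_mask)
    if rows.size == 0:
        return None
    top_row = rows.min()                       # highest hand pixel
    top_col = cols[rows == top_row].min()      # a top point T (tie: leftmost)
    if not single_finger:
        # Gesture 702: the topmost pixel H is itself the position point.
        return (top_row, top_col)
    # Gesture 701: on the row delta below T, take the finger's left and
    # right edge pixels L and R and return their center G.
    row = min(top_row + delta_px, hand_mask.shape[0] - 1)
    row_cols = np.nonzero(hand_mask[row])[0]
    if row_cols.size == 0:
        return (top_row, top_col)
    left, right = row_cols.min(), row_cols.max()
    return (row, (left + right) // 2)          # center point G
```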
The position information of the current gesture can be determined by establishing two-dimensional Cartesian coordinate systems on the image information and on the preset touch plane area, and calculating, from the two-dimensional coordinate value of the touch point of the current gesture in the image information, the corresponding actual coordinate value on the preset touch plane area; this actual coordinate value is the touch position information of the current gesture. When the touch gesture is a touch gesture on the touch switch of the preset touch plane area, the touch position information can also be calculated by detecting the occluded position of the touch switch; the occluded position is the touch position information of the touch gesture. The details are as follows:
Please refer to FIG. 8, a schematic diagram of determining the position information of the current gesture. 801 is the image of the preset touch plane area captured by the camera; 802 is the actual preset touch plane area, which can be divided into a touch pad ABCD and a touch switch EF. The rectangular region A'B'C'D' in 801 corresponds to the touch pad ABCD of the actual preset touch plane area, and E'F' corresponds to the touch switch EF. The length and width of the touch pad in the image are W1' and H', while those of the actual touch pad are W1 and H; the length of the touch switch in the image is W2', while its actual length is W2. To facilitate determining the touch position information, two coordinate systems, O'X'Y' and O1'X1'Y1', are established in image 801: O'X'Y' takes the upper-left vertex A' of touch pad A'B'C'D' as origin O', the pixel row of the origin as the X' axis, and the pixel column of the origin as the Y' axis; O1'X1'Y1' takes the center point E' of the left edge of the touch switch region E'F' in the image as origin O1', the pixel row of the origin as the X1' axis, and the pixel column of the origin as the Y1' axis. Correspondingly, two coordinate systems, OXY and O1X1Y1, are set on the actual preset touch plane area: OXY takes the upper-left vertex A of touch pad ABCD as origin O, the upper horizontal edge as the X axis, and the left edge as the Y axis; O1X1Y1 takes the center point of the left edge of touch switch EF as origin O1, the horizontal edge as the X1 axis, and the left edge as the Y1 axis. In image 801, (x', y') is the image coordinate of the current gesture's touch point in the O'X'Y' system, and the corresponding coordinate on the actual preset touch plane area 802 is denoted (x, y); (x1', 0) is the image coordinate of the current gesture's touch point in the O1'X1'Y1' system, and the corresponding coordinate on the actual area 802 is denoted (x1, 0). The specific calculation is as follows:
For the touch pad, first calculate in image 801 the coordinates (x', y') of the current gesture's touch point in the O'X'Y' system and the length W1' and width H' of touch pad A'B'C'D'. Since the camera's image of the touch pad differs from the actual touch pad of the preset touch plane area only by a very small distortion, the coordinates of the touch point within the actual touch pad ABCD relative to the OXY system are calculated approximately by proportional scaling:
x = x' × W1 / W1', y = y' × H / H'
For the touch switch, one method uses the same coordinate representation: first calculate in image 801 the coordinates (x1', 0) of the current gesture's touch point in the O1'X1'Y1' system and the touch switch length W2'; the coordinates of the touch point on the actual touch switch relative to the O1X1Y1 system are then:
x1 = x1' × W2 / W2', y1 = 0
Another method is to detect which switch's beacon within the touch switch region E'F' of image 801 is occluded by the finger, and feed back the information of that beacon directly as the position information of the current gesture. For example, if the switch whose beacon is 'e'' in the figure is occluded, the actual switch at the touch point of the current gesture is determined to be the one whose beacon is 'e'.
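The proportional mapping derived above is direct to write out. This sketch assumes, as the text does, that the distortion between the image of the pad and the physical pad is negligible:

```python
def image_to_pad_coords(x_img, y_img, w_img, h_img, w_real, h_real):
    """Map a touch point from image coordinates (O'X'Y') to physical
    touch-pad coordinates (OXY) by proportional scaling.

    (x_img, y_img): touch point in the image; (w_img, h_img): pad
    length and width in the image (W1', H'); (w_real, h_real): the
    physical pad's length and width (W1, H).
    """
    return x_img * w_real / w_img, y_img * h_real / h_img

def image_to_switch_coord(x1_img, w2_img, w2_real):
    # Touch switch, first method: the same scaling applied along the
    # X1 axis only, since the touch point lies on that axis (y1 = 0).
    return x1_img * w2_real / w2_img
```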
When the current gesture is determined to be a three-dimensional gesture, it can first be judged whether the current gesture is static or dynamic, to further determine whether the three-dimensional static or dynamic gesture model library needs to be invoked; the operating gesture of the three-dimensional gesture is then recognized according to the model library. See S205 for details, which are not repeated here.
As can be seen from the above, this embodiment of the present invention provides a gesture recognition method: by judging whether the region in which the current gesture operation is located in the image information captured by the 2D camera lies on a preset touch plane area, the current gesture is determined to be a touch gesture or a three-dimensional gesture. If it is a touch gesture, the operating gesture is recognized according to the touch position information of the touch gesture; if it is a three-dimensional gesture, a pre-established three-dimensional gesture model library is invoked to recognize the operating gesture, integrating three-dimensional and two-dimensional gesture recognition, which is convenient for users and improves the flexibility and comfort of human-computer interaction.
In actual operation, if a 3D camera is used as the recognition sensor, Embodiment 2 is further improved as follows for greater convenience.
Embodiment 3:
Referring to FIG. 9, a schematic flowchart of another gesture recognition method provided by an embodiment of the present invention, this embodiment may specifically include the following.
S901: Acquire image information captured by a 3D camera.
S902: Detect the current gesture in the image information.
S903: Judge whether the operation region of the current gesture is the preset touch plane area; if so, proceed to S904; otherwise, proceed to S905.
S904: When the operation region in which the current gesture is located lies on the preset touch plane area, determine that the current gesture is a touch gesture, and recognize the operating gesture according to the touch position information of the touch gesture.
S905: When the operation region in which the current gesture is located does not lie on the preset touch plane area, determine that the current gesture is a three-dimensional gesture, and invoke a pre-established three-dimensional gesture model library to recognize the operating gesture of the three-dimensional gesture.
S906: Generate a corresponding operation instruction according to the recognized operating gesture.
Compared with Embodiment 2, this embodiment replaces the 2D camera with a 3D camera, i.e., the captured image information additionally contains one-dimensional depth information. Correspondingly, the method of judging the operation region of the current gesture in S903 differs, while the other steps are the same as in Embodiment 2; therefore only S903 is elaborated here, and the rest is not repeated.
Judge whether the region in which the current gesture operation is located in the image information overlaps the preset touch plane area. If not, determine that the gesture is in the in-air region of three-dimensional gestures; if so, continue with the following method for further judgment.
Calculate the depth values from the image capture device to the pixels within the preset touch plane area, and determine the minimum depth value Lmin. The preset touch plane area can be the touch pad and/or the touch switch.
For example, first calculate the depth values of the pixels within the rectangular touch pad ABCD and the depth values La, Lb, Lc, Ld, Le, Lf, Lg, Lh, Li of the center points of the switches within the touch switch region EF, and from this depth information determine the minimum depth value Lmin from the camera to the preset touch plane area; for example, if La is the smallest of these depth values, then Lmin = La.
Determine the touch point of the current gesture by the gesture locating method of Embodiment 2, and calculate the distance L between the touch point of the current gesture and the image capture device; for example, when the touch point is Q, calculate the distance LsQ between point Q and the camera.
Suppose the allowable error of L is Δ, with Δ set in the range 5mm-15mm; since a condition satisfied with Δ at its maximum value is satisfied for any value in the range, we have:
If L - Lmin ≤ 15mm, determine that the operation region in which the current gesture is located does not lie on the preset touch plane area; otherwise, determine that it does. For example, if LsQ ≤ Lmin + 15mm, the current gesture is determined to be a three-dimensional gesture; otherwise, it is a touch gesture.
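A sketch of the depth rule, kept exactly as stated (L - Lmin ≤ 15mm is judged as not on the touch plane, i.e., a three-dimensional gesture); the input format is an assumption of this example:

```python
def locate_gesture_region_3d(plane_depths_mm, touch_point_depth_mm,
                             tol_mm=15.0):
    """Depth-based operation-region decision for a 3D camera.

    plane_depths_mm: depth values from the camera to the pixels of the
    preset touch plane area (touch pad and/or switch center points).
    touch_point_depth_mm: the distance L from the camera to the touch
    point of the current gesture.
    tol_mm: the maximum of the 5mm-15mm allowable error range.
    """
    l_min = min(plane_depths_mm)        # minimum depth value Lmin
    if touch_point_depth_mm - l_min <= tol_mm:
        # Per the rule above, judged as a three-dimensional gesture in
        # the in-air region, not on the preset touch plane area.
        return "in_air"
    return "touch_plane"
```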
As can be seen from the above, this embodiment of the present invention provides a gesture recognition method: by judging whether the region in which the current gesture operation is located in the image information captured by the 3D camera lies on a preset touch plane area, the current gesture is determined to be a touch gesture or a three-dimensional gesture. If it is a touch gesture, the operating gesture is recognized according to the touch position information of the touch gesture; if it is a three-dimensional gesture, a pre-established three-dimensional gesture model library is invoked to recognize the operating gesture, integrating three-dimensional and two-dimensional gesture recognition, which is convenient for users and improves the control comfort of human-computer interaction.
The embodiments of the present invention also provide a corresponding implementation apparatus for the gesture recognition method, further making the method more practical. The gesture recognition apparatus provided by the embodiments of the present invention is introduced below; the apparatus described below and the method described above can be referred to in correspondence with each other.
Embodiment 4:
Referring to FIG. 10, a structural diagram of a gesture recognition apparatus provided by an embodiment of the present invention, the apparatus may include:
an image acquisition module 1001, configured to acquire image information captured by an image capture device;
an image detection module 1002, configured to detect the current gesture in the image information;
a region judgment module 1003, configured to judge the operation region in which the current gesture is located;
an information recognition module 1004, configured to recognize the current gesture:
when the operation region in which the current gesture is located lies on a preset touch plane area, determining that the gesture is a touch gesture and recognizing the operating gesture according to the touch position of the touch gesture;
when the operation region in which the current gesture is located does not lie on the preset touch plane area, determining that the gesture is a three-dimensional gesture and invoking a pre-established three-dimensional gesture model library to recognize the operating gesture of the three-dimensional gesture;
an instruction generation module 1005, configured to generate a corresponding operation instruction according to the recognized operating gesture.
Preferably, the region judgment module 1003 may include:
a first judgment unit, configured to detect whether a key on the preset touch plane area used to indicate the presence of a touch signal is conducting; if so, determine that the operation region in which the current gesture is located lies on the preset touch plane area; if not, determine that it does not; and/or
a second judgment unit, configured to calculate the depth values from the image capture device to the pixels within the preset touch plane area and determine the minimum depth value Lmin;
calculate the distance L between the touch point of the current gesture and the image capture device;
and, if L - Lmin ≤ 15mm, determine that the operation region in which the current gesture is located does not lie on the preset touch plane area; otherwise, determine that it does.
Optionally, in other implementations of this embodiment, the apparatus may further include, for example:
an infrared fill-light module, configured to emit specifically modulated infrared light and cooperate with the image capture device to obtain a clear image signal. In this way, a clear image can still be obtained when the ambient light is insufficient, which helps recognize gestures better.
The functions of the functional modules of the gesture recognition apparatus of this embodiment can be implemented specifically according to the methods in the above method embodiments; for the specific implementation process, reference may be made to the relevant descriptions of the above method embodiments, which are not repeated here.
As can be seen from the above, this embodiment of the present invention provides a gesture recognition apparatus: by judging whether the region in which the current gesture operation is located in the image information captured by the image capture device lies on a preset touch plane area, the current gesture is determined to be a touch gesture or a three-dimensional gesture, integrating three-dimensional and two-dimensional gesture recognition, which is more convenient for users and improves the control comfort of human-computer interaction.
Embodiment 5:
Referring to FIG. 11, an embodiment of the present invention further provides an in-vehicle system, which may include:
an image capture device 1101, configured to capture image information;
a touch panel 1102, configured to implement two-dimensional gesture operations;
a gesture recognition apparatus 1103, configured to recognize the current gesture, which is the gesture recognition apparatus described in Embodiment 4 above.
The image capture device can be arranged in the dome light module: placing it on the vehicle roof enlarges its sensing region, and mounting it in the dome light avoids the cost of arranging a separate place for it; of course, it can also be placed elsewhere according to the user's preference. It should be noted that the image capture device can also be made rotatable, capturing images through 360 degrees without blind spots, but the resulting problem is that the user must track the rotation angle of the device when making gestures, which is somewhat inconvenient; preferably, the image capture device is fixed, and the user only needs to make gestures within its sensing region. The image capture device can be a 3D camera or a 2D camera; other image capture devices can be used where necessary.
The touch panel 1102 may include a touch pad and/or a touch switch, and the touch pad and the touch switch can be arranged seamlessly as a whole on the touch panel, i.e., with no gap between them, forming a single board; this avoids the trouble of cleaning the touch pad and is also aesthetically pleasing.
Optionally, in some implementations of this embodiment, when the image capture device is a 2D camera, the touch panel includes, for example, a touch pad and a touch switch. Please refer to FIG. 12 and FIG. 13: FIG. 12 is a schematic structural diagram of a touch pad based on a 2D camera provided by an embodiment of the present invention, and FIG. 13 is a schematic structural diagram of a touch switch based on a 2D camera provided by an embodiment of the present invention. The touch pad and the touch switch may include the following.
The touch pad includes:
a panel unit 1201, which can be a liquid crystal display, a plastic panel, a metal panel, or a panel of any other material;
a circuit board unit 1202;
a key unit 1203;
a support unit 1204, including a support member and a housing, preferably made of plastic, although other materials are possible.
Here, 1205 denotes touch operations such as tapping and sliding with the palm facing the touch pad ABCD; 1206 denotes touch operations such as tapping and sliding with the fingernail, the back of the hand facing the touch pad ABCD.
The touch switch includes:
a panel unit 1301 with beacons, which can be a liquid crystal display, a plastic panel, a metal panel, or a panel of any other material;
a light guide plate unit 1302, preferably of transparent plastic, although other materials are possible, transparency being preferred;
a circuit board unit 1303, on which keys and lamps are arranged;
a key unit 1304, which can be a micro-travel, non-latching elastic switch;
a lamp unit 1305, the lamps generally being small, low-power lamps such as LEDs;
a support unit 1306, including a support member and a housing, preferably made of plastic, although other materials are possible.
Here, 1307 denotes touch operations such as tapping and sliding with the palm facing the touch switch EF; 1308 denotes touch operations such as tapping and sliding with the fingernail, the back of the hand facing the touch switch EF.
Optionally, in further implementations of this embodiment, when the image capture device is a 3D camera, the touch panel includes, for example, a touch pad and a touch switch. Please refer to FIG. 14 and FIG. 15: FIG. 14 is a schematic structural diagram of a touch pad based on a 3D camera provided by an embodiment of the present invention, and FIG. 15 is a schematic structural diagram of a touch switch based on a 3D camera provided by an embodiment of the present invention. The touch pad and the touch switch may include the following units.
The touch pad includes:
a panel unit 1401, which can be a liquid crystal display, a plastic panel, a metal panel, or a panel of any other material;
a support unit 1402, including a support member and a housing, preferably made of plastic, although other materials are possible.
Compared with the structure of the touch pad in FIG. 12, where a 2D camera serves as the recognition sensor, the touch pad with a 3D camera as the recognition sensor is simpler: it needs no electronic components or circuit board, i.e., it uses a touch panel that needs no sensing unit, which saves usage cost for the user.
The touch switch includes:
a panel unit 1501 with beacons, which can be a liquid crystal display, a plastic panel, a metal panel, or a panel of any other material;
a light guide plate unit 1502, preferably of transparent plastic, although other materials are possible, transparency being preferred;
a circuit board unit 1503, on which lamps are arranged;
a support unit 1504, including a support member and a housing, preferably made of plastic, although other materials are possible;
a lamp unit 1505, the lamps generally being small, low-power lamps such as LEDs.
Compared with the structure of the touch switch in FIG. 13, where a 2D camera serves as the recognition sensor, the touch switch with a 3D camera as the recognition sensor lacks the key unit, which simplifies the structure and function and saves usage cost for the user to a certain extent.
It should be noted that an in-vehicle system includes many more units than those described above; this embodiment gives a detailed description only of the units improved over the prior art. The gesture recognition apparatus is the one described in Embodiment 4 above; for the specific implementation process, reference may be made to the relevant descriptions of the above method and apparatus embodiments, which are not repeated here.
As can be seen from the above, this embodiment of the present invention provides a gesture recognition in-vehicle system: by judging whether the region in which the current gesture operation is located in the image information captured by the image capture device lies on a preset touch plane area, the current gesture is determined to be a touch gesture or a three-dimensional gesture, integrating three-dimensional and two-dimensional gesture recognition, which is more convenient for users and improves the control comfort of human-computer interaction. In addition, the touch pad of the present application is simpler in structure and function, saving users' usage cost to a certain extent.
To facilitate better understanding and implementation of the above solutions of Embodiments 4 and 5, a specific application scenario is described below by example. Please refer to FIG. 16, a structural block diagram of an in-vehicle system that can be controlled by gesture recognition, provided by an embodiment of the present invention.
A control unit 1601, containing the gesture recognition apparatus, is the control core of the in-vehicle system. The control unit 1601 can be used to acquire the image information of the camera 1602 and, based on the acquired information, detect the operator's gesture in real time and recognize the detected gesture through the gesture recognition unit.
When the camera is a 2D camera, the control unit 1601 can detect in real time the key signal 1608 inside the touch pad of the center-control panel and the key signal 1607 inside the touch switch of the center-control panel, and judge the region of the current finger touch operation from the key signals.
The control unit 1601 controls the infrared fill-light module 1603 to emit specifically modulated infrared light, cooperating with the image capture device to obtain a clear image signal; thus a clear image can still be obtained when the ambient light is insufficient, which helps recognize gestures better.
The control unit 1601 can also detect the headlight switch signal 1606: when it detects that the headlights are turned on, it correspondingly turns on the backlight 1604 of the center-control-panel touch switch; when it detects that the headlights are turned off, it correspondingly turns off the backlight 1604.
The control unit 1601 can also control the interface unit 1605 and send relevant information of the three-dimensional and touch gestures detected in real time, such as position information, to a specific electronic function module to execute the corresponding gesture instruction, achieving human-computer interaction. The interface unit is used to match the interface of a specific electronic function module and communicate with it, usually via a CAN bus, a LIN bus, or an analog level signal interface.
By embedding the gesture recognition apparatus, as described in the above method, apparatus and system embodiments, in the control unit of the in-vehicle system, the user's three-dimensional or two-dimensional gestures can be recognized while driving, greatly improving the comfort of human-computer interaction.
The embodiments in this specification are described progressively; each embodiment focuses on its differences from the others, and the same or similar parts of the embodiments can be referred to among one another. As the apparatus disclosed in an embodiment corresponds to the method disclosed in an embodiment, its description is relatively brief; for relevant points, see the description of the method part.
Professionals may further appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of the examples have been described above generally in terms of function. Whether these functions are executed in hardware or software depends on the specific application and design constraints of the technical solution. Professionals may use different methods to implement the described functions for each specific application, but such implementation should not be considered beyond the scope of the present invention.
The steps of the methods or algorithms described in connection with the embodiments disclosed herein can be implemented directly in hardware, in a software module executed by a processor, or in a combination of the two. The software module can be placed in random access memory (RAM), internal memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the technical field.
The gesture recognition method, apparatus and in-vehicle system provided by the present invention have been introduced in detail above. Specific examples have been used herein to explain the principles and implementations of the present invention; the description of the above embodiments is only intended to help understand the method of the present invention and its core idea. It should be pointed out that, for a person of ordinary skill in the art, several improvements and modifications can be made to the present invention without departing from its principles, and these improvements and modifications also fall within the scope of protection of the claims of the present invention.
Claims (10)
- A gesture recognition method, characterized by including: acquiring image information captured by an image capture device; detecting a current gesture in the image information; judging the operation region in which the current gesture is located; when the operation region in which the current gesture is located lies on a preset touch plane area, determining that the current gesture is a touch gesture, and recognizing the operating gesture according to the touch position information of the touch gesture; when the operation region in which the current gesture is located does not lie on the preset touch plane area, determining that the current gesture is a three-dimensional gesture, and invoking a pre-established three-dimensional gesture model library to recognize the operating gesture of the three-dimensional gesture; and generating a corresponding operation instruction according to the recognized operating gesture.
- The method according to claim 1, characterized in that, when the image information is two-dimensional plane information, the step of judging the operation region in which the current gesture is located includes: detecting whether a key on the preset touch plane area used to indicate the presence of a touch signal is conducting; if so, determining that the operation region in which the current gesture is located lies on the preset touch plane area; if not, determining that it does not lie on the preset touch plane area.
- The method according to claim 1, characterized in that, when the image information includes two-dimensional plane information and one-dimensional depth image information, the step of judging the operation region in which the current gesture is located includes: determining the minimum depth value Lmin from the depth values between the image capture device and the pixels within the preset touch plane area; calculating the distance L between the touch point of the current gesture and the image capture device; and, if L - Lmin ≤ 15mm, determining that the operation region in which the current gesture is located does not lie on the preset touch plane area; otherwise, determining that it lies on the preset touch plane area.
- The method according to any one of claims 1 to 3, characterized in that the current gesture is any one or any combination of the following: a gesture of touching or tapping with the finger pad, the side of a finger, or a fingernail.
- The method according to claim 4, characterized in that invoking the pre-established three-dimensional gesture model library is: continuously acquiring, in time sequence, multiple frames of image information containing the current gesture; if the current gesture is the same in the multiple frames and the position of the current gesture has not changed, the current gesture is a static gesture and the three-dimensional static gesture model library is invoked; otherwise, the three-dimensional dynamic gesture model library is invoked.
- A gesture recognition apparatus, characterized by including: an image acquisition module, configured to acquire image information captured by an image capture device; an image detection module, configured to detect a current gesture in the image information; a region judgment module, configured to judge the operation region in which the current gesture is located; an information recognition module, configured to recognize the current gesture: when the operation region in which the current gesture is located lies on a preset touch plane area, determining that the gesture is a touch gesture and recognizing the operating gesture according to the touch position of the touch gesture; when the operation region in which the current gesture is located does not lie on the preset touch plane area, determining that the gesture is a three-dimensional gesture and invoking a pre-established three-dimensional gesture model library to recognize the operating gesture of the three-dimensional gesture; and an instruction generation module, configured to generate a corresponding operation instruction according to the recognized operating gesture.
- The apparatus according to claim 6, characterized in that the region judgment module includes: a first judgment unit, configured to detect whether a key on the preset touch plane area used to indicate the presence of a touch signal is conducting; if so, determine that the operation region in which the current gesture is located lies on the preset touch plane area; if not, determine that it does not lie on the preset touch plane area; and/or a second judgment unit, configured to calculate the depth values from the image capture device to the pixels within the preset touch plane area and determine the minimum depth value Lmin; calculate the distance L between the touch point of the current gesture and the image capture device; and, if L - Lmin ≤ 15mm, determine that the operation region in which the current gesture is located does not lie on the preset touch plane area; otherwise, determine that it lies on the preset touch plane area.
- A gesture recognition in-vehicle system, characterized by including an image capture device, a touch panel, and the gesture recognition apparatus according to claim 6 or 7.
- The in-vehicle system according to claim 8, characterized in that the touch panel includes a touch pad and/or a touch switch; when the image capture device is a 2D camera, the touch pad includes: a panel unit, a circuit board unit, a key unit and a support unit; and the touch switch includes: a panel unit with beacons, a circuit board unit, a light guide plate unit, a key unit, a lamp unit and a support unit.
- The in-vehicle system according to claim 8, characterized in that the touch panel includes a touch pad and/or a touch switch; when the image capture device is a 3D camera, the touch pad includes: a panel unit and a support unit; and the touch switch includes: a panel unit with beacons, a circuit board unit, a light guide plate unit, a lamp unit and a support unit.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610947837.6A CN106502570B (zh) | 2016-10-25 | 2016-10-25 | 一种手势识别的方法、装置及车载系统 |
CN201610947837.6 | 2016-10-25 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018076523A1 true WO2018076523A1 (zh) | 2018-05-03 |
Family
ID=58322933
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2016/112336 WO2018076523A1 (zh) | 2016-10-25 | 2016-12-27 | 手势识别的方法、装置及车载系统 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN106502570B (zh) |
WO (1) | WO2018076523A1 (zh) |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108710857A (zh) * | 2018-05-22 | 2018-10-26 | 深圳前海华夏智信数据科技有限公司 | 基于红外补光的人车识别方法及装置 |
CN110096213A (zh) * | 2019-04-30 | 2019-08-06 | 努比亚技术有限公司 | 基于手势的终端操作方法、移动终端及可读存储介质 |
CN111143217A (zh) * | 2019-12-27 | 2020-05-12 | 上海昶枫科技有限公司 | 汽车电子控制单元仿真系统 |
CN112000217A (zh) * | 2019-05-27 | 2020-11-27 | 青岛海尔智慧厨房电器有限公司 | 一种手势识别装置、油烟机及油烟机的控制方法 |
CN112069960A (zh) * | 2020-08-28 | 2020-12-11 | 哈尔滨拓博科技有限公司 | 一种用于摇杆式娃娃机的单目手势控制后装系统、控制方法及改造方法 |
CN112257634A (zh) * | 2019-12-26 | 2021-01-22 | 神盾股份有限公司 | 手势识别系统以及手势识别方法 |
CN113126753A (zh) * | 2021-03-05 | 2021-07-16 | 深圳点猫科技有限公司 | 一种基于手势关闭设备的实现方法、装置及设备 |
CN113657226A (zh) * | 2021-08-06 | 2021-11-16 | 上海有个机器人有限公司 | 一种客户交互方法、装置、介质和移动设备 |
CN114978333A (zh) * | 2022-05-25 | 2022-08-30 | 深圳玩智商科技有限公司 | 一种识别设备、系统及方法 |
CN115798054A (zh) * | 2023-02-10 | 2023-03-14 | 国网山东省电力公司泰安供电公司 | 一种基于ar/mr技术的手势识别方法及电子设备 |
Families Citing this family (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6940969B2 (ja) * | 2017-03-29 | 2021-09-29 | パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカPanasonic Intellectual Property Corporation of America | 車両制御装置、車両制御方法及びプログラム |
CN107102731A (zh) * | 2017-03-31 | 2017-08-29 | 斑马信息科技有限公司 | 用于车辆的手势控制方法及其系统 |
CN108108211B (zh) * | 2017-11-20 | 2021-06-25 | 福建天泉教育科技有限公司 | 一种在虚拟现实场景中进行远程交互的方法及终端 |
CN108415561A (zh) * | 2018-02-11 | 2018-08-17 | 北京光年无限科技有限公司 | 基于虚拟人的手势交互方法及系统 |
CN108537147B (zh) * | 2018-03-22 | 2021-12-10 | 东华大学 | 一种基于深度学习的手势识别方法 |
CN110389800A (zh) * | 2018-04-23 | 2019-10-29 | 广州小鹏汽车科技有限公司 | 一种车载大屏上显示内容处理方法、装置、介质和设备 |
CN110597446A (zh) * | 2018-06-13 | 2019-12-20 | 北京小鸟听听科技有限公司 | 手势识别方法和电子设备 |
CN109143875B (zh) * | 2018-06-29 | 2021-06-15 | 广州市得腾技术服务有限责任公司 | 一种手势控制智能家居方法及其系统 |
CN109710116B (zh) * | 2018-08-23 | 2021-12-07 | 华东师范大学 | 一种非接触式手势状态识别系统及识别方法 |
CN109885240A (zh) * | 2019-01-04 | 2019-06-14 | 四川虹美智能科技有限公司 | 一种应用程序的展示方法和智能冰箱 |
CN109656375A (zh) * | 2019-02-28 | 2019-04-19 | 哈尔滨拓博科技有限公司 | 一种多模式动态手势识别系统、装置及方法 |
CN111176443B (zh) * | 2019-12-12 | 2023-10-13 | 青岛小鸟看看科技有限公司 | 一种车载智能系统及其控制方法 |
CN111258430A (zh) * | 2020-01-21 | 2020-06-09 | 哈尔滨拓博科技有限公司 | 一种基于单目手势控制的桌面交互系统 |
CN113076836B (zh) * | 2021-03-25 | 2022-04-01 | 东风汽车集团股份有限公司 | 一种汽车手势交互方法 |
EP4361771A4 (en) * | 2021-07-17 | 2024-07-03 | Huawei Tech Co Ltd | GESTURE RECOGNITION METHOD AND APPARATUS, SYSTEM AND VEHICLE |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100104134A1 (en) * | 2008-10-29 | 2010-04-29 | Nokia Corporation | Interaction Using Touch and Non-Touch Gestures |
CN102741781A (zh) * | 2009-12-04 | 2012-10-17 | 奈克斯特控股公司 | 用于位置探测的传感器方法和系统 |
CN103547989A (zh) * | 2011-04-13 | 2014-01-29 | 诺基亚公司 | 用于装置状态的用户控制的方法、装置和计算机程序 |
CN104808790A (zh) * | 2015-04-08 | 2015-07-29 | 冯仕昌 | 一种基于非接触式交互获取无形透明界面的方法 |
CN105589553A (zh) * | 2014-09-23 | 2016-05-18 | 上海影创信息科技有限公司 | 一种智能设备的手势控制方法和系统 |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9551590B2 (en) * | 2009-08-28 | 2017-01-24 | Robert Bosch Gmbh | Gesture-based information and command entry for motor vehicle |
US9159221B1 (en) * | 2012-05-25 | 2015-10-13 | George Stantchev | Steering wheel with remote control capabilities |
EP2829947B1 (en) * | 2013-07-23 | 2019-05-08 | BlackBerry Limited | Apparatus and method pertaining to the use of a plurality of 3D gesture sensors to detect 3D gestures |
DE102014202834A1 (de) * | 2014-02-17 | 2015-09-03 | Volkswagen Aktiengesellschaft | Anwenderschnittstelle und Verfahren zum berührungslosen Bedienen eines in Hardware ausgestalteten Bedienelementes in einem 3D-Gestenmodus |
CN105334960A (zh) * | 2015-10-22 | 2016-02-17 | 四川膨旭科技有限公司 | 车载智能手势识别系统 |
- 2016-10-25 CN CN201610947837.6A patent/CN106502570B/zh active Active
- 2016-12-27 WO PCT/CN2016/112336 patent/WO2018076523A1/zh active Application Filing
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100104134A1 (en) * | 2008-10-29 | 2010-04-29 | Nokia Corporation | Interaction Using Touch and Non-Touch Gestures |
CN102741781A (zh) * | 2009-12-04 | 2012-10-17 | 奈克斯特控股公司 | 用于位置探测的传感器方法和系统 |
CN103547989A (zh) * | 2011-04-13 | 2014-01-29 | 诺基亚公司 | 用于装置状态的用户控制的方法、装置和计算机程序 |
CN105589553A (zh) * | 2014-09-23 | 2016-05-18 | 上海影创信息科技有限公司 | 一种智能设备的手势控制方法和系统 |
CN104808790A (zh) * | 2015-04-08 | 2015-07-29 | 冯仕昌 | 一种基于非接触式交互获取无形透明界面的方法 |
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108710857B (zh) * | 2018-05-22 | 2022-05-17 | 深圳前海华夏智信数据科技有限公司 | 基于红外补光的人车识别方法及装置 |
CN108710857A (zh) * | 2018-05-22 | 2018-10-26 | 深圳前海华夏智信数据科技有限公司 | 基于红外补光的人车识别方法及装置 |
CN110096213A (zh) * | 2019-04-30 | 2019-08-06 | 努比亚技术有限公司 | 基于手势的终端操作方法、移动终端及可读存储介质 |
CN110096213B (zh) * | 2019-04-30 | 2023-12-08 | 努比亚技术有限公司 | 基于手势的终端操作方法、移动终端及可读存储介质 |
CN112000217A (zh) * | 2019-05-27 | 2020-11-27 | 青岛海尔智慧厨房电器有限公司 | 一种手势识别装置、油烟机及油烟机的控制方法 |
CN112257634A (zh) * | 2019-12-26 | 2021-01-22 | 神盾股份有限公司 | 手势识别系统以及手势识别方法 |
CN112257634B (zh) * | 2019-12-26 | 2024-03-29 | 神盾股份有限公司 | 手势识别系统以及手势识别方法 |
CN111143217A (zh) * | 2019-12-27 | 2020-05-12 | 上海昶枫科技有限公司 | 汽车电子控制单元仿真系统 |
CN112069960A (zh) * | 2020-08-28 | 2020-12-11 | 哈尔滨拓博科技有限公司 | 一种用于摇杆式娃娃机的单目手势控制后装系统、控制方法及改造方法 |
CN113126753B (zh) * | 2021-03-05 | 2023-04-07 | 深圳点猫科技有限公司 | 一种基于手势关闭设备的实现方法、装置及设备 |
CN113126753A (zh) * | 2021-03-05 | 2021-07-16 | 深圳点猫科技有限公司 | 一种基于手势关闭设备的实现方法、装置及设备 |
CN113657226A (zh) * | 2021-08-06 | 2021-11-16 | 上海有个机器人有限公司 | 一种客户交互方法、装置、介质和移动设备 |
CN114978333A (zh) * | 2022-05-25 | 2022-08-30 | 深圳玩智商科技有限公司 | 一种识别设备、系统及方法 |
CN114978333B (zh) * | 2022-05-25 | 2024-01-23 | 深圳玩智商科技有限公司 | 一种识别设备、系统及方法 |
CN115798054A (zh) * | 2023-02-10 | 2023-03-14 | 国网山东省电力公司泰安供电公司 | 一种基于ar/mr技术的手势识别方法及电子设备 |
CN115798054B (zh) * | 2023-02-10 | 2023-11-10 | 国网山东省电力公司泰安供电公司 | 一种基于ar/mr技术的手势识别方法及电子设备 |
Also Published As
Publication number | Publication date |
---|---|
CN106502570A (zh) | 2017-03-15 |
CN106502570B (zh) | 2020-07-31 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2018076523A1 (zh) | 手势识别的方法、装置及车载系统 | |
US11048333B2 (en) | System and method for close-range movement tracking | |
JP6195939B2 (ja) | 複合的な知覚感知入力の対話 | |
US9910498B2 (en) | System and method for close-range movement tracking | |
EP2717120B1 (en) | Apparatus, methods and computer program products providing finger-based and hand-based gesture commands for portable electronic device applications | |
EP2790089A1 (en) | Portable device and method for providing non-contact interface | |
US20090284469A1 (en) | Video based apparatus and method for controlling the cursor | |
US10366281B2 (en) | Gesture identification with natural images | |
WO2006036069A1 (en) | Information processing system and method | |
JP2010277198A (ja) | 情報処理装置、情報処理方法およびプログラム | |
CN104049738A (zh) | 用于操作用户装置的传感器的方法和设备 | |
MX2009000305A (es) | Controlador virtual para presentaciones visuales. | |
WO2006091753A2 (en) | Method and apparatus for data entry input | |
CN101869484A (zh) | 具有触摸屏的医疗诊断装置及其操控方法 | |
US11886643B2 (en) | Information processing apparatus and information processing method | |
CN105242776A (zh) | 一种智能眼镜的控制方法及智能眼镜 | |
US20160034027A1 (en) | Optical tracking of a user-guided object for mobile platform user input | |
CN113253908B (zh) | 按键功能执行方法、装置、设备及存储介质 | |
CN106909256A (zh) | 屏幕控制方法及装置 | |
CN116198435B (zh) | 车辆的控制方法、装置、车辆以及存储介质 | |
CN106569716B (zh) | 单手操控方法及操控系统 | |
US11755124B1 (en) | System for improving user input recognition on touch surfaces | |
Sasaki et al. | Hit-wear: A menu system superimposing on a human hand for wearable computers | |
KR101068281B1 (ko) | 후면부 손가락 움직임 및 제스처 인식을 이용한 휴대형 정보 단말기 및 콘텐츠 제어 방법 | |
KR101184742B1 (ko) | 비접촉식 손동작에 의한 방향 인식 방법 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 16919713; Country of ref document: EP; Kind code of ref document: A1
| NENP | Non-entry into the national phase | Ref country code: DE
| 122 | Ep: pct application non-entry in european phase | Ref document number: 16919713; Country of ref document: EP; Kind code of ref document: A1