CN108008811A - Method and terminal for operating a terminal in a non-touch screen mode - Google Patents
Method and terminal for operating a terminal in a non-touch screen mode
- Publication number
- CN108008811A CN201610959809.6A CN201610959809A
- Authority
- CN
- China
- Prior art keywords
- point
- reference object
- mapping
- image
- mapping point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- User Interface Of Digital Computer (AREA)
- Image Analysis (AREA)
Abstract
An embodiment of the present invention provides a method for operating a terminal in a non-touch screen mode. The method includes: acquiring an image of a reference object in real time, where the distance between the reference object and the terminal display screen exceeds a set value; analyzing the image of the reference object to obtain a mapping point on the terminal display screen that forms a preset mapping relationship with the reference object; and generating a functional instruction at the mapping point, and operating the terminal display screen based on the functional instruction. An embodiment of the present invention further provides a terminal.
Description
Technical Field
The invention relates to the field of human-computer interaction, in particular to a method for operating a terminal in a non-touch screen mode and the terminal.
Background
At present, terminals with touch screens are used in an increasingly wide range of applications; the touch screen on a terminal serves as a device for human-computer interaction. For example, a resistive touch screen is essentially a sensor whose structure is basically a film superposed on glass, where the surface of the film adjacent to the glass is coated with a nano indium tin oxide (ITO) coating; ITO has good conductivity and transparency. When a touch operation occurs, the ITO on the lower layer of the film contacts the ITO on the upper layer of the glass, the corresponding electrical signals are transmitted through the sensor and sent to the processor via a conversion circuit, the signals are converted by computation into coordinate values (X and Y values) on the screen, and the click action is completed and displayed on the screen.
In the prior art, a touch response method and apparatus for a wearable device, and a wearable device, are disclosed, so that the wearable device can feed back the effect of a touch operation to the user in real time and the touch accuracy of the wearable device is improved. The specific technical solution is as follows: acquiring position information of a target fingertip, collected by a binocular recognition device, within a set touch-action occurrence area; determining, according to the position information of the target fingertip, position information of a mapping point to which the target fingertip is mapped on the wearable device screen; and displaying a cursor at the mapping point on the wearable device screen.
In the above prior art, the mapping point of the fingertip can be determined, and the mobile phone can be operated according to the fingertip trajectory, but the mobile phone screen cannot be directly operated by using the mapping point.
Disclosure of Invention
In order to solve the existing technical problem, embodiments of the present invention provide a method and a terminal for operating a terminal in a non-touch screen manner, so that the terminal can be operated without touching a screen with a finger.
In order to achieve the above purpose, the technical solution of the embodiment of the present invention is realized as follows:
the embodiment of the invention provides a method for operating a terminal in a non-touch screen mode, which comprises the following steps:
acquiring an image of a reference object in real time, wherein the distance between the reference object and a terminal display screen exceeds a set value;
analyzing the image of the reference object to obtain a mapping point which forms a preset mapping relation with the reference object on a terminal display screen;
and generating a functional instruction at the mapping point, and realizing the operation of the terminal display screen based on the functional instruction.
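For illustration only, the three steps above can be sketched as follows; the function names, the placeholder mapping logic and the numeric values are assumptions and are not part of the claimed solution:

```python
from dataclasses import dataclass

@dataclass
class Frame:
    reference_xyz: tuple  # estimated 3D position of the reference object (cm, screen coordinates)

SET_VALUE_CM = 1.0        # assumed minimum distance between the reference object and the display screen

def capture_frame() -> Frame:
    # Placeholder for real image acquisition and analysis.
    return Frame(reference_xyz=(3.0, 4.0, 12.0))

def mapping_point(frame: Frame):
    # Simplest preset mapping relation: perpendicular projection onto the screen plane z = 0.
    x, y, z = frame.reference_xyz
    return (x, y) if z > SET_VALUE_CM else None

def functional_instruction(dwell_s: float) -> str:
    # Dwell-time rule from the description: short dwell = slide, medium = click, long = long press.
    if dwell_s < 5:
        return "slide"
    return "click" if dwell_s < 10 else "long_press"

frame = capture_frame()
point = mapping_point(frame)
if point is not None:
    print(point, functional_instruction(dwell_s=6.0))  # -> (3.0, 4.0) click
```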
In the above scheme, the reference object comprises a pupil of a human;
correspondingly, the analyzing the image of the reference object to obtain a mapping point on a terminal display screen, which forms a preset mapping relationship with the reference object, includes:
acquiring the spatial position of the fovea of the eye in real time, wherein the pupil and the fovea of the eye are positioned in the same eye; obtaining the spatial position of the pupil center point based on the image of the pupil;
and determining, based on the spatial positions of the fovea and the pupil center point, the intersection point between the terminal display screen and the straight line passing through the fovea and the pupil center point as the mapping point on the terminal display screen that forms the preset mapping relationship with the reference object.
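A minimal geometric sketch of this intersection, assuming the terminal display screen lies in the plane z = 0 of a common coordinate system and that the spatial positions of the fovea and pupil center point are already known in that system (the numeric values are illustrative only):

```python
import numpy as np

def gaze_mapping_point(fovea, pupil_center):
    """Intersect the line from the fovea through the pupil center with the screen plane z = 0."""
    fovea = np.asarray(fovea, dtype=float)
    pupil_center = np.asarray(pupil_center, dtype=float)
    direction = pupil_center - fovea
    if abs(direction[2]) < 1e-9:
        return None                      # line of sight parallel to the screen: no intersection
    t = -fovea[2] / direction[2]         # parameter value at which the z-coordinate becomes 0
    if t <= 0:
        return None                      # intersection would lie behind the eye
    point = fovea + t * direction
    return tuple(point[:2])              # (x, y) mapping point on the screen

# Example: fovea about 31 cm and pupil center about 30 cm in front of the screen.
print(gaze_mapping_point((5.0, 8.0, 31.0), (5.3, 7.8, 30.0)))  # ≈ (14.3, 1.8)
```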
In the above scheme, the acquiring an image of a reference object in real time includes: respectively acquiring images of the pupils by using two cameras;
the obtaining the spatial position of the pupil center point based on the image of the pupil comprises: and obtaining the spatial position of the pupil center point based on the spatial positions of the two cameras and the images collected by the two cameras.
In the above scheme, the acquiring an image of a reference object in real time includes: acquiring images of the reference object in real time by using at least one camera;
correspondingly, the analyzing the image of the reference object to obtain a mapping point on a terminal display screen, which forms a preset mapping relationship with the reference object, includes:
selecting at least one point in the image of the reference object as a reference point;
determining the spatial position of each reference point based on the image of each reference point and the spatial position of each camera;
determining a projection point of each reference point on the terminal display screen based on the spatial position of each reference point and the spatial position of the terminal display screen;
and determining a mapping point forming a preset mapping relation with the reference object based on the projection point.
In the foregoing solution, the determining, based on the projection point, a mapping point forming a preset mapping relationship with the reference object includes: and taking the projection point as a mapping point which forms a preset mapping relation with the reference object.
In the above scheme, the number of the reference points is 2;
the determining of the mapping point forming a preset mapping relation with the reference object based on the projection point comprises: and determining projection points of the two reference points on the terminal display screen, and taking the middle point of the connecting line of the two determined projection points as a mapping point which forms a preset mapping relation with the reference object.
In the above scheme, the reference object comprises two eyes of a human;
the selecting at least one point in the image of the reference object as a reference point includes: and respectively taking the pupil center points of the two eyes of the person as reference points.
In the above aspect, the selecting at least one point in the image of the reference object as a reference point includes:
determining the spatial position of the reference object based on the image of the reference object and the spatial position of each camera;
and taking one point with the minimum vertical distance with the terminal display screen in the reference object as a reference point based on the spatial position of the reference object.
In the above scheme, the acquiring an image of a reference object in real time includes: acquiring an image of the reference object in real time by using a camera;
before analyzing the image of the reference, the method further comprises: acquiring the distance between the camera and the reference object in real time;
correspondingly, the determining the spatial position of the reference point based on the image of the reference point and the spatial position of each camera comprises:
and determining the spatial position of the reference point based on the image of the reference point, the spatial position of the camera and the distance between the camera and the reference object.
In the above scheme, the reference object is an object with the smallest vertical distance to the terminal display screen, or the reference object is located on a human body.
In the above scheme, the reference object comprises one eye of a human; the real-time acquisition of images of a reference object includes: acquiring images in human eyes in real time;
the analyzing the image of the reference object to obtain a mapping point on a terminal display screen, which forms a preset mapping relation with the reference object, comprises: determining the area matched with the acquired image in the human eyes in the current display content of the terminal display screen as follows: a screen matching area; and selecting one point in the screen matching area as a mapping point forming a preset mapping relation with the reference object.
In the above scheme, before analyzing the image of the reference object, the method further includes: determining the distance between a terminal display screen and the reference object;
the analyzing the image of the reference object to obtain a mapping point on a terminal display screen, which forms a preset mapping relation with the reference object, comprises: and when the determined distance is in a set interval, analyzing the image of the reference object to obtain a mapping point which forms a preset mapping relation with the reference object on a terminal display screen.
In the foregoing solution, the generating the functional instruction at the mapping point includes: determining the time for which the mapping point stays within the mapping point region on the terminal display screen; and generating the functional instruction at the mapping point based on the length of the determined time; the mapping point region includes the initial position of the mapping point on the terminal display screen.
In the foregoing solution, the generating a functional instruction at a mapping point based on the determined time includes:
when the determined time is in a first set range, generating an instruction for indicating to click the current mapping point; when the determined time is in a second set range, generating an instruction for indicating the long press at the current mapping point; when the determined time is in a third set range, generating an instruction for indicating screen sliding operation;
the first setting range, the second setting range and the third setting range do not overlap with each other.
In the above solution, the first setting range is a range from a first time threshold to a second time threshold; the second set range is greater than a second time threshold; the third set range is smaller than a first time threshold; the first time threshold is less than a second time threshold.
In the foregoing solution, the generating an instruction for instructing to perform a screen sliding operation includes: and acquiring the moving direction and the moving speed of the mapping point, and generating an instruction for instructing screen sliding operation based on the moving direction and the moving speed of the mapping point.
In the foregoing aspect, the generating an instruction to perform a screen sliding operation based on the moving direction and the moving speed of the mapping point includes:
taking the moving speed of the mapping point in the transverse direction of the mobile terminal display screen as the transverse moving speed of the mapping point, and taking the moving speed of the mapping point in the longitudinal direction of the mobile terminal display screen as the longitudinal moving speed of the mapping point;
when the transverse moving speed of the mapping point is greater than the longitudinal moving speed of the mapping point, generating an instruction for indicating transverse screen sliding operation; or when the transverse moving speed of the mapping point is greater than the longitudinal moving speed of the mapping point and meets a first set condition, generating an instruction for instructing transverse screen sliding operation;
when the longitudinal moving speed of the mapping point is greater than the transverse moving speed of the mapping point, generating an instruction for indicating to perform longitudinal screen sliding operation; or when the longitudinal movement rate of the mapping point is greater than the transverse movement rate of the mapping point and the longitudinal movement rate of the mapping point meets a second set condition, generating an instruction for instructing to perform a longitudinal screen sliding operation.
In the foregoing solution, the first setting condition is: the transverse moving speed of the mapping point is in a fourth set range; the second setting condition is as follows: the longitudinal movement rate of the mapping point is within a fifth set range.
In the above scheme, the mapping point area includes an initial position of a mapping point on the terminal display screen; the area of the mapping point area is less than or equal to a set threshold.
In the above scheme, the mapping point area is a circular area with the initial position of the mapping point as the center of the circle.
In the above solution, before generating the functional instruction at the mapping point, the method further includes: continuously acquiring images of the actions of the user of the terminal to obtain action images of the user; carrying out image recognition on the action image of the user to obtain a recognition result;
accordingly, the generating the functional instruction at the mapping point comprises: generating a functional instruction at the mapping point based on the recognition result.
In the above solution, the generating a functional instruction at the mapping point based on the recognition result includes: when the recognition result is a blinking action, a mouth opening action or a mouth closing action, generating an instruction for indicating clicking a mapping point; when the recognition result is a nodding action, generating an instruction for indicating to perform downward screen sliding operation; when the recognition result is a head-up action, generating an instruction for indicating to perform screen up-sliding operation; and when the recognition result is a left-right shaking motion, generating an instruction for indicating to perform transverse screen sliding operation.
In the above solution, the functional instruction at the mapping point is: and indicating to click the function instruction at the current mapping point, indicating to press the function instruction at the current mapping point or indicating to perform screen sliding operation.
The embodiment of the invention also provides a terminal, which comprises an image acquisition device and a processor; wherein,
the image acquisition device is used for acquiring an image of a reference object in real time, and the distance between the reference object and the terminal display screen exceeds a set value;
the processor is used for analyzing the image of the reference object to obtain a mapping point which forms a preset mapping relation with the reference object on a terminal display screen; and generating a functional instruction at the mapping point, and realizing the operation of the terminal display screen based on the functional instruction.
In the above scheme, the reference object comprises a pupil of a human;
the processor is also used for acquiring the spatial position of the fovea of the eye in real time; the pupil and the eye fovea are located in the same eye;
correspondingly, the processor is specifically configured to derive a spatial position of a pupil center point based on the image of the pupil; based on the space positions of the central fovea of the eye and the central point of the pupil, the intersection point of a straight line passing through the central fovea of the eye and the central point of the pupil and the terminal display screen is determined as follows: and a mapping point which forms a preset mapping relation with the reference object on the terminal display screen.
In the above scheme, the image acquisition device comprises at least one camera;
the processor is specifically configured to select at least one point in the image of the reference object as a reference point; determining the spatial position of each reference point based on the image of each reference point and the spatial position of each camera; determining a projection point of each reference point on the terminal display screen based on the spatial position of each reference point and the spatial position of the terminal display screen; and determining a mapping point forming a preset mapping relation with the reference object based on the projection point.
In the above scheme, the reference object comprises one eye of a human;
the image acquisition device is specifically used for acquiring images in human eyes in real time;
the processor is specifically configured to determine an area, in the current display content of the terminal display screen, that matches the acquired image in the human eye as: a screen matching area; and selecting one point in the screen matching area as a mapping point forming a preset mapping relation with the reference object.
In the above scheme, the processor is further configured to determine a distance between a terminal display screen and the reference object before analyzing the image of the reference object;
the processor is specifically configured to analyze the image of the reference object when the determined distance is within a set interval, and obtain a mapping point on a display screen of the terminal, where the mapping point forms a preset mapping relationship with the reference object.
In the foregoing solution, the processor is specifically configured to determine the time for which the mapping point stays within a set region, and to generate the functional instruction at the mapping point based on the length of the determined time.
In the above scheme, the image acquisition device is further configured to continuously perform image acquisition on the motion of the user of the terminal before generating the function instruction at the mapping point, so as to obtain a motion image of the user;
the processor is further used for carrying out image recognition on the action image of the user to obtain a recognition result; generating a functional instruction at the mapping point based on the recognition result.
According to the method and the terminal for operating the terminal in the non-touch screen mode, provided by the embodiment of the invention, the image of the reference object which is not in contact with the display screen of the terminal is collected in real time; analyzing the image of the reference object to obtain a mapping point which forms a preset mapping relation with the reference object on a terminal display screen; generating a functional instruction at the mapping point, and realizing the operation of a terminal display screen based on the functional instruction; therefore, the mapping point on the display screen of the terminal can be obtained by analyzing the image of the reference object, and the operations such as clicking, long pressing, screen sliding and the like can be completed based on the functional instruction at the mapping point, so that the terminal can be operated in a non-touch screen mode; the terminal can be operated only according to the reference object image without touching the screen by fingers; for the technical problem that the terminal is inconvenient to operate by fingers due to the fact that the size of a display screen of the terminal is increased at present, the man-machine interaction efficiency can be effectively improved, the operability of the terminal is improved, and the user experience is improved.
Drawings
FIG. 1 is a flowchart illustrating a method for operating a terminal in a non-touch screen manner according to an embodiment of the present invention;
FIG. 2 is a first diagram of a reference point projected onto a display screen of a terminal according to an embodiment of the present invention;
FIG. 3 is a second diagram illustrating a projection point of a reference point on a display screen of a terminal according to an embodiment of the present invention;
FIG. 4 is a flowchart of an embodiment of a method for operating a terminal in a non-touch screen manner according to the present invention;
FIG. 5 is a schematic diagram of the fovea and pupil of an eye in an embodiment of the invention;
FIG. 6 is a schematic diagram illustrating a principle of determining a position of a point in space by using two cameras according to an embodiment of the present invention;
FIG. 7 is a schematic diagram illustrating a position relationship between a human eye line of sight and a display screen of a terminal according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail below with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The embodiment of the invention describes a method for operating a terminal in a non-touch screen mode, which can be applied to a terminal; the terminal can be a fixed terminal or a mobile terminal with a display screen. For example, the display screen may be a screen without a touch response function, or a touch screen with a touch response function. The mobile terminal can be a smartphone, a tablet computer or a wearable device (such as smart glasses, a smart watch and the like), and can also be a smart car or a smart household appliance (such as a smart refrigerator, a smart battery, a set-top box and the like). The operating system of the smartphone may be the Android operating system, the iOS operating system, or any other operating system developed by a third party and capable of running on a microcomputer structure (including at least a processor and a memory), e.g., a mobile Linux system, the BlackBerry QNX operating system, etc.
The terminal described above includes an image capturing device for capturing an image of a reference object, where the reference object may be an object located at the terminal, for example, the reference object may be an object such as a human eye or a nose; the image acquisition device may comprise at least one camera.
The terminal described above further includes an image analysis device for analyzing the acquired image of the reference object, and in practical implementation, the image analysis device may be a processor on the terminal.
Based on the display screen, the image acquisition device and the image analysis device provided on the terminal described above, the following specific embodiments are proposed.
The first embodiment is as follows:
fig. 1 is a flowchart of a method for operating a terminal in a non-touch screen manner according to an embodiment of the present invention, where as shown in fig. 1, the flowchart includes:
step 101: and acquiring an image of a reference object in real time, wherein the distance between the reference object and a terminal display screen exceeds a set value.
Here, the set value is greater than 0, and the set value can be set according to an actual application scene; that is, the reference object is not in a contact relationship with the terminal display.
Here, the kind of the reference object is not limited; for example, the reference object may include the eyes or nose of a person, or the reference object may be the object with the smallest vertical distance to the terminal display screen, and so on.
Optionally, at least one camera may be used to acquire images of the reference object; for example, the number of cameras may be 1 or 2. The camera can be arranged on the side of the terminal display screen, that is, a front camera is arranged on the terminal; the camera can also be arranged on the back of the terminal, that is, a rear camera is arranged on the terminal.
Step 102: and analyzing the image of the reference object to obtain a mapping point which forms a preset mapping relation with the reference object on a terminal display screen.
The implementation of this step is described below in three cases.
First case
The reference object comprises a pupil of a person; acquiring the spatial position of the fovea of the eye in real time before analyzing the image of the reference object; the pupil and the eye fovea are located in the same eye; in step 101, images of the pupils are respectively acquired by two cameras.
Correspondingly, the method specifically comprises the following steps: obtaining the spatial position of the pupil center point based on the image of the pupil; and, based on the spatial positions of the fovea and the pupil center point, determining the intersection point between the terminal display screen and the straight line passing through the fovea and the pupil center point as the mapping point on the terminal display screen that forms the preset mapping relationship with the reference object.
In practical implementation, the two cameras can be used for respectively acquiring images of the fovea of the eyes, and can also be used for respectively acquiring images of the pupil center point; here, the image of the eye may be collected, and then the image of the pupil of the eye is obtained by using an image recognition or image matching technique, and finally the image of the pupil center point is determined according to the image of the pupil of the eye.
Here, the deriving the spatial position of the pupil center point based on the image of the pupil includes: and obtaining the spatial position of the pupil center point based on the spatial positions of the two cameras and the images collected by the two cameras.
It can be understood that the spatial position of each camera can be represented by three-dimensional space coordinates, and in practical implementation, coordinates of a point on the terminal can be preset, and then the three-dimensional space coordinates of each camera can be determined according to the positional relationship between the point and each camera.
After the spatial positions of the two cameras and the images collected by the two cameras are obtained, the spatial position of the pupil center point can be determined based on a binocular stereo vision technology.
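As an illustrative sketch of such a binocular computation, assuming two identical, parallel front cameras with a known baseline and a simple pinhole model (a real implementation would use calibrated camera matrices; all values here are assumptions):

```python
def triangulate(u_left, v_left, u_right, baseline_cm=6.0, focal_px=800.0):
    """Recover the 3D position (in cm) of a point, e.g. the pupil center, seen by both cameras.

    u and v are pixel coordinates measured relative to each camera's principal point.
    """
    disparity = u_left - u_right
    if disparity <= 0:
        raise ValueError("the point must have positive disparity")
    z = focal_px * baseline_cm / disparity   # depth from disparity
    x = u_left * z / focal_px                # back-project the left-image coordinates
    y = v_left * z / focal_px
    return x, y, z

print(triangulate(u_left=200.0, v_left=-40.0, u_right=40.0))  # -> (7.5, -1.5, 30.0)
```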
It can be seen that the straight line passing through the fovea and the pupil center point actually represents the main line of sight of the human eye, and therefore, the determined mapping point is the intersection point of one line of sight of the human eye and the terminal display screen.
Second case
In step 101, images of a reference object are acquired in real time by using at least one camera, where the type of the reference object is not limited, for example, the reference object is an object with a minimum vertical distance from the terminal display screen, or the reference object is located on a human body.
Accordingly, this step may include: selecting at least one point in the image of the reference object as a reference point; determining the spatial position of each reference point based on the image of each reference point and the spatial position of each camera; determining a projection point of each reference point on the terminal display screen based on the spatial position of each reference point and the spatial position of the terminal display screen; and determining a mapping point forming a preset mapping relation with the reference object based on the projection point.
Illustratively, the selecting a point in the image of the reference object as a reference point includes: determining the spatial position of the reference object based on the image of the reference object and the spatial position of each camera; and taking one point with the minimum vertical distance with the terminal display screen in the reference object as a reference point based on the spatial position of the reference object. Further, the reference point is a point of the reference object with the smallest vertical distance from the terminal display screen, for example, when the reference object is a nose of a person, the reference point is a nose tip; and when the reference object is the object with the minimum vertical distance to the terminal display screen, if the finger of the person is the object with the minimum vertical distance to the terminal display screen, the reference point is the fingertip of the finger.
Here, when the number of cameras is 1, the distance between the camera and the reference object may be acquired in real time before analyzing the image of the reference object; and determining the spatial position of the reference point based on the image of the reference point, the spatial position of the camera and the distance between the camera and the reference object.
When the number of the cameras is 2, one point can be selected from the images of the reference object as a reference point; determining the spatial position of the reference point based on the images of the reference point and the spatial positions of the two cameras; determining a projection point of the reference point on the terminal display screen based on the spatial position of the reference point and the spatial position of the terminal display screen; and taking the projection point as a mapping point which forms a preset mapping relation with the reference object on a terminal display screen.
In practical implementation, a distance sensing device or a distance detecting device may be arranged on the terminal at a position whose distance from the camera does not exceed a first distance threshold; the distance sensing device or distance detecting device is used for detecting the distance between itself and the reference object, and may be a displacement sensor or a proximity sensor arranged on the terminal. Here, the detected distance may be used as the distance between the camera and the reference object.
In practical implementation, the position of the terminal display screen can be determined according to the spatial position of each camera on the terminal and the relative position relationship between each camera and the terminal display screen; the projection point of each reference point on the terminal display screen is visually explained with reference to the attached drawings.
Fig. 2 is a first schematic diagram of a projection point of a reference point on a terminal display screen according to an embodiment of the present invention. As shown in fig. 2, cameras a and b are two cameras disposed on the terminal, the plane in which cameras a and b are located coincides with the plane in which the terminal display screen is located, the coordinates of the reference point may be represented as (X, Y, Z), and the projection point of the reference point on the terminal display screen is the intersection point between the terminal display screen and the straight line that passes through the reference point and is perpendicular to the plane in which the terminal display screen lies.
Fig. 3 is a second schematic diagram of a projection point of a reference point on a terminal display screen according to an embodiment of the present invention. As shown in fig. 3, sensing point a and sensing point b may both be cameras disposed on the terminal, the plane in which the terminal display screen is located passes through sensing point a and sensing point b, the sensing layer represents the terminal display screen, the coordinates of the reference point may be represented as (X', Y', Z'), and the projection point of the reference point on the sensing layer is the intersection point between the sensing layer and the straight line that passes through the reference point and is perpendicular to the plane in which the sensing layer lies.
After determining the projection point, the projection point may be used as a mapping point forming a preset mapping relationship with the reference object; or when the number of the reference points is 2, determining projection points of the two reference points on the terminal display screen, and taking the middle point of the connecting line of the two determined projection points as a mapping point which forms a preset mapping relation with the reference object.
In a particular implementation, the reference object includes two eyes of a person; respectively taking pupil center points of the two eyes of the person as reference points; and then, determining the projection points of the pupil center points of the two eyes of the person on the terminal display screen, and taking the middle point of the connecting line of the two determined projection points as a mapping point which forms a preset mapping relation with the reference object.
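A sketch of this projection and midpoint computation, assuming the terminal display screen lies in the plane z = 0 so that the perpendicular projection of a spatial point simply drops its z-coordinate; the pupil positions are illustrative values only:

```python
def project_to_screen(point_xyz):
    """Perpendicular projection of a spatial point onto the screen plane z = 0."""
    x, y, _ = point_xyz
    return (x, y)

def mapping_point_from_two_eyes(left_pupil_xyz, right_pupil_xyz):
    """Midpoint of the two pupils' projection points, as described above."""
    (lx, ly) = project_to_screen(left_pupil_xyz)
    (rx, ry) = project_to_screen(right_pupil_xyz)
    return ((lx + rx) / 2.0, (ly + ry) / 2.0)

print(mapping_point_from_two_eyes((4.0, 9.0, 30.0), (10.0, 9.2, 30.0)))  # -> (7.0, 9.1)
```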
Third case
The reference object is one eye of a person; in step 101, images in human eyes are acquired in real time.
Accordingly, this step may include: determining the area in the current display content of the terminal display screen that matches the acquired image in the human eye as the screen matching area; and selecting one point in the screen matching area as the mapping point forming the preset mapping relationship with the reference object.
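One way such matching could be sketched is with template matching, for example using OpenCV; the use of OpenCV and the confidence threshold are assumptions, since the disclosure does not name any particular matching technique:

```python
import cv2
import numpy as np

def screen_matching_point(screen_capture: np.ndarray, eye_image: np.ndarray):
    """Return the center of the screen region that best matches the image acquired from the eye.

    In practice the eye image would first need to be rectified and resized so that it is
    smaller than, and comparable in scale to, the captured display content.
    """
    result = cv2.matchTemplate(screen_capture, eye_image, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    if max_val < 0.5:                     # assumed confidence threshold: no reliable match
        return None
    h, w = eye_image.shape[:2]
    return (max_loc[0] + w // 2, max_loc[1] + h // 2)   # one point inside the screen matching area
```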
Further, before the image of the reference object is analyzed, the distance between a terminal display screen and the reference object can be acquired; and when the acquired distance is in the set interval, executing the step 102, otherwise, if the acquired distance is not in the set interval, directly ending the process.
Here, the set interval is an interval indicating a distance; for example, the set interval is [0 cm, 5 cm], or [30 cm, 50 cm], or the like.
Furthermore, after the mapping point is obtained, a cursor can be displayed at the mapping point of the terminal display screen, so that the user can observe the cursor conveniently.
Step 103: and generating a functional instruction at the mapping point, and realizing the operation of the terminal display screen based on the functional instruction.
Here, the functional instruction at the mapping point generated in this step may be: a function instruction indicating clicking the current mapping point, a function instruction indicating long pressing at the current mapping point, or an instruction indicating a screen sliding operation, and the like. The instruction for instructing to perform the screen sliding operation may be an instruction for instructing to perform the vertical screen sliding operation or an instruction for instructing to perform the horizontal screen sliding operation; the instruction for instructing to perform the vertical screen sliding operation may be an instruction for instructing to perform the screen sliding operation upwards or an instruction for instructing to perform the screen sliding operation downwards, and the instruction for instructing to perform the horizontal screen sliding operation may be an instruction for instructing to perform the screen sliding operation leftwards or an instruction for instructing to perform the screen sliding operation rightwards.
In this step, the function instruction at the mapping point may be generated in the following two ways.
The first mode is as follows: determining the time for which the mapping point stays within the mapping point region on the terminal display screen; generating the functional instruction at the mapping point based on the length of the determined time; the mapping point region includes the initial position of the mapping point on the terminal display screen.
In actual implementation, the processor of the terminal may determine the time for which the mapping point stays within the mapping point region; thereafter, the processor of the terminal may generate the functional instruction at the mapping point based on the length of the determined time.
Here, the mapping point region may be a region that includes the initial position of the mapping point on the terminal display screen; the area of the mapping point region is less than or equal to a set threshold. For example, if the initial position of the mapping point is point A, the mapping point region may be a region that includes point A and whose area is less than the set threshold, the set threshold being, for example, 0.2 cm² or 0.3 cm²; the shape of the boundary of the region may be a circle, an ellipse, a polygon, or the like.
Preferably, the mapping point region may be a circular region centered on the initial position of the mapping point, for example, the mapping point region may be a circular region centered on the initial position of the mapping point and having a set length as a radius.
Illustratively, generating the functional instruction at the mapping point based on the length of the determined time includes: when the determined time is within a first set range, generating an instruction indicating a click at the current mapping point; when the determined time is within a second set range, generating an instruction indicating a long press at the current mapping point; when the determined time is within a third set range, generating an instruction indicating a screen sliding operation; the first set range, the second set range and the third set range do not overlap with each other, that is, there is no intersection between any two of them.
For example, the first setting range is represented as a section 1, the second setting range is represented as a section 2, the third setting range is represented as a section 3, and the section 1, the section 2, and the section 3 are all used to represent the value range of the time, and each section may be an open section, a closed section, or a half-open and half-closed section, but there is no intersection between every two of the section 1, the section 2, and the section 3.
It can be seen that, since the first setting range, the second setting range and the third setting range do not overlap with each other, the time for the mapping point to be in the setting region can only be within one of the three setting ranges at most, and thus, it is ensured that at most one function command is generated.
In a preferred embodiment, the first setting range is [a1, a2], where a1 denotes a first time threshold, a2 denotes a second time threshold, and a1 is less than a2; the second setting range is (a2, ∞), and the third setting range is (0, a1).
For example, the first time threshold a1 is 5 seconds and the second time threshold a2 is 10 seconds; the mapping point region is a circular region with the initial position of the mapping point as the center and 0.3 cm as the radius; the mapping point region is the equivalent range of the initial position of the mapping point. Since the image of the reference object may change, the position of the mapping point may also change accordingly, so the processor of the terminal may record the time for which the mapping point stays within the mapping point region, and generate a functional instruction indicating a click at the current mapping point when the mapping point stays in the mapping point region continuously for 5 seconds or more but less than 10 seconds; and generate a functional instruction indicating a long press at the current mapping point when the mapping point stays in the mapping point region for more than 10 seconds and less than 50 seconds.
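A sketch of this dwell-time rule, using the 5-second and 10-second thresholds and the 0.3 cm radius from the example above; the data structures are assumptions for illustration only:

```python
import math

FIRST_TIME_THRESHOLD_S = 5.0    # a1
SECOND_TIME_THRESHOLD_S = 10.0  # a2
REGION_RADIUS_CM = 0.3          # radius of the mapping point region around the initial position

def instruction_for_dwell(initial_point, samples):
    """samples: list of (timestamp_s, (x_cm, y_cm)) mapping-point observations over time."""
    dwell = 0.0
    start = samples[0][0]
    for t, (x, y) in samples:
        if math.hypot(x - initial_point[0], y - initial_point[1]) <= REGION_RADIUS_CM:
            dwell = t - start
        else:
            break                              # the mapping point has left the mapping point region
    if dwell < FIRST_TIME_THRESHOLD_S:
        return "slide"                         # third set range: the point moved away quickly
    if dwell < SECOND_TIME_THRESHOLD_S:
        return "click"                         # first set range: click at the current mapping point
    return "long_press"                        # second set range: long press at the mapping point

samples = [(0.0, (5.0, 5.0)), (3.0, (5.1, 5.0)), (6.0, (5.0, 5.2))]
print(instruction_for_dwell(initial_point=(5.0, 5.0), samples=samples))  # -> click (dwell of 6 s)
```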
Further, if the continuous staying time of the mapping point in the mapping point region is within the third setting range, for example less than 5 seconds, it can be determined that the mapping point has moved over a large range, and in this case an instruction instructing a screen sliding operation is generated. In actual implementation, when the line of sight moves, the terminal continuously collects the coordinates of the mapping point, and the moving direction and moving speed of the mapping point are obtained by analyzing the change of the coordinate position and the time consumed by the change.
In practical implementation, a two-dimensional rectangular coordinate system can be established on a plane where a terminal display screen is located, and the two-dimensional rectangular coordinate system is marked as a screen coordinate system; setting the X axis of the screen coordinate system as the transverse direction of the terminal display screen, and setting the Y axis of the screen coordinate system as the longitudinal direction of the terminal display screen; the spatial position of the mapping point may be expressed as the coordinates of the screen coordinate system. The positive direction of the X axis of the screen coordinate system is the horizontal right direction, and the positive direction of the Y axis of the screen coordinate system is the vertical upward direction.
It can be understood that, according to the change of the X value of the coordinate of the mapping point in the screen coordinate system, the lateral movement distance of the mapping point is calculated, and then according to the time taken for the change of the X value of the coordinate of the mapping point in the screen coordinate system, the lateral movement speed of the mapping point is calculated; here, the lateral movement direction is leftward or rightward; for example, the X-axis forward direction of the screen coordinate system is the horizontal right direction; if the X value of the coordinate of the mapping point in the screen coordinate system is increased from 0 to 30 within 1.5 seconds, the transverse moving speed of the mapping point is 20 per second, and the transverse moving direction is towards the right; if the X value of the coordinates of the mapping point in the screen coordinate system is decreased from 15 to 0 within 1.5 seconds, the lateral movement rate of the mapping point is 10 per second, and the lateral movement direction is to the left.
Similarly, the longitudinal movement distance of the mapping point can be calculated according to the change of the Y value of the coordinate of the mapping point in the screen coordinate system, and the longitudinal movement speed of the mapping point can be calculated according to the time consumed by the change of the Y value of the coordinate of the mapping point in the screen coordinate system; here, the longitudinal moving direction is upward or downward; for example, the Y-axis forward direction of the screen coordinate system is the vertically upward direction; if the Y value of the coordinate of the mapping point in the screen coordinate system is increased from 0 to 18 within 1 second, the longitudinal moving speed of the mapping point is 18 per second, and the longitudinal moving direction is upward; if the Y value of the coordinates of the mapping point in the screen coordinate system is decreased from 0 to-10 within 1 second, the longitudinal movement rate of the mapping point is 10 per second and the longitudinal movement direction is downward.
In a specific implementation, if the longitudinal movement rate of the mapping point is greater than the transverse movement rate of the mapping point, an instruction instructing a longitudinal screen sliding operation is generated, namely an instruction instructing an up-down screen sliding operation; the direction of the longitudinal screen sliding operation is upward or downward. For example, when the longitudinal movement rate of the mapping point is b1 and the transverse movement rate of the mapping point is b2, and b1 is greater than b2, an instruction instructing an upward screen sliding operation or an instruction instructing a downward screen sliding operation is generated.
If the transverse moving speed of the mapping point is greater than the longitudinal moving speed of the mapping point, generating an instruction for instructing to perform transverse screen sliding operation, namely generating an instruction for instructing to perform left and right screen sliding operation; the direction of the transverse screen sliding operation is leftward or rightward; for example, when the lateral movement rate of the mapping point is b3 and the longitudinal movement rate of the mapping point is b4 and b3 is greater than b4, a command instructing to perform a leftward screen sliding operation or a command instructing to perform a rightward screen sliding operation is generated.
Specifically, when the transverse movement rate of the mapping point is equal to the longitudinal movement rate of the mapping point, no functional instruction may be generated; alternatively, an instruction instructing a longitudinal screen sliding operation or an instruction instructing a transverse screen sliding operation may be generated.
In practical implementation, a processor of the terminal can be used for recording the coordinates of the starting point and the ending point of the change of the mapping point in a screen coordinate system; the direction of the longitudinal change from the starting point to the ending point of the change of the mapping point can be set as the direction for the sliding operation, and the direction of the longitudinal change from the ending point to the starting point of the change of the mapping point can also be set as the direction for the sliding operation; for example, when the Y value of the coordinates of the start point of the mapping point change (initial position of the mapping point) in the screen coordinate system is c1, the Y value of the coordinates of the end point of the mapping point change in the screen coordinate system is c2, and c1 is smaller than c2, the longitudinal change direction from the start point to the end point of the mapping point change is: in the upward direction, the longitudinal direction of the change from the ending point to the starting point of the change of the mapping point is: a downward direction; at this time, the upward direction or the downward direction may be set as the direction in which the screen sliding operation is performed.
Similarly, the direction of lateral change from the start point to the end point of the change in the mapping point may be set as the direction in which the sliding operation is performed, or the direction of lateral change from the end point to the start point of the change in the mapping point may be set as the direction in which the sliding operation is performed; for example, if the X value of the coordinates of the start point of the mapping point change in the screen coordinate system is d1, the X value of the coordinates of the end point of the mapping point change in the screen coordinate system is d2, and d1 is smaller than d2, the lateral direction of change from the start point to the end point of the mapping point change is: in the rightward direction, the lateral direction of the change from the ending point to the starting point of the change of the mapping point is: a leftward direction; at this time, the rightward direction or the leftward direction may be set as the direction in which the screen sliding operation is performed.
In another specific implementation, when the lateral movement rate of the mapping point is greater than the longitudinal movement rate of the mapping point, the instruction instructing the lateral sliding operation is not directly generated, but whether the lateral movement rate of the mapping point meets the first setting condition is continuously judged; if the transverse moving speed of the mapping point meets a first set condition, generating an instruction for indicating transverse screen sliding; if the lateral movement rate of the mapping point does not satisfy the first setting condition, no instruction is generated.
Here, the first setting condition may be: the lateral moving speed of the mapping point is in a fourth setting range, the fourth setting range may be greater than v1, or less than v2, or between v3 and v4, v3 is not equal to v4, v1, v2, v3 and v4 may all be set by the user of the terminal, that is, v1, v2, v3 and v4 may all be set speed values.
Similarly, when the longitudinal movement rate of the mapping point is greater than the transverse movement rate of the mapping point, the instruction instructing the longitudinal screen sliding operation is not generated directly; instead, it is further judged whether the longitudinal movement rate of the mapping point satisfies a second set condition, and if the longitudinal movement rate of the mapping point satisfies the second set condition, the instruction instructing the longitudinal screen sliding operation is generated; if the longitudinal movement rate of the mapping point does not satisfy the second set condition, no instruction is generated.
Here, the second setting condition may be: the longitudinal moving speed of the mapping point is in a fifth setting range, the fifth setting range can be larger than v5, or smaller than v6, or between v7 and v8, v7 is not equal to v8, v5, v6, v7 and v8 can be set by the user of the terminal, that is, v5, v6, v7 and v8 can all be set speed values.
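A sketch combining the rate comparison and the optional first/second set conditions described above; the rate thresholds and the coordinate convention (x rightwards, y upwards, in screen-coordinate units per second) are illustrative assumptions:

```python
def slide_instruction(start, end, elapsed_s,
                      lateral_range=(2.0, float("inf")),        # assumed fourth set range
                      longitudinal_range=(2.0, float("inf"))):  # assumed fifth set range
    """Derive a slide instruction from the mapping point's start and end screen coordinates."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    lateral_rate = abs(dx) / elapsed_s            # movement rate along the X axis
    longitudinal_rate = abs(dy) / elapsed_s       # movement rate along the Y axis
    if lateral_rate > longitudinal_rate:
        if lateral_range[0] <= lateral_rate <= lateral_range[1]:                 # first set condition
            return "slide_right" if dx > 0 else "slide_left"
    elif longitudinal_rate > lateral_rate:
        if longitudinal_range[0] <= longitudinal_rate <= longitudinal_range[1]:  # second set condition
            return "slide_up" if dy > 0 else "slide_down"
    return None   # equal rates, or the set condition is not met: no instruction is generated

print(slide_instruction(start=(0, 0), end=(30, 5), elapsed_s=1.5))  # -> slide_right (rate 20 per second)
```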
The second mode is as follows: before generating the function instruction at the mapping point, continuously acquiring images of the action of the user of the terminal to obtain an action image of the user; carrying out image recognition on the action image of the user to obtain a recognition result; generating a functional instruction at the mapping point based on the recognition result.
Here, the image capturing device of the terminal may capture a motion image of a user, and then, the processor of the terminal may identify the motion image of the user, and generate a function instruction at the mapping point based on the identification result; for example, the front camera can be used to capture the image change of the head of the user, and then capture the motion image of the user.
Illustratively, when the recognition result is a blinking motion, a mouth opening motion or a mouth closing motion, generating an instruction indicating clicking a mapping point; when the recognition result is a nodding action, generating an instruction for indicating to perform downward screen sliding operation; when the recognition result is a head-up action, generating an instruction for indicating to perform screen up-sliding operation; and when the identification result is a left-right shaking motion, generating an instruction for indicating to perform left screen sliding operation or right screen sliding operation.
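A sketch of this action-to-instruction mapping; the recognition step itself (detecting blinks, nods and so on from the captured frames) is outside the snippet, and the label names are assumptions:

```python
ACTION_TO_INSTRUCTION = {
    "blink": "click_mapping_point",
    "mouth_open": "click_mapping_point",
    "mouth_close": "click_mapping_point",
    "nod": "slide_down",
    "head_up": "slide_up",
    "shake_head_left_right": "slide_left_or_right",
}

def instruction_from_action(recognized_action):
    # Returns None when the recognized action has no associated functional instruction.
    return ACTION_TO_INSTRUCTION.get(recognized_action)

print(instruction_from_action("nod"))  # -> slide_down
```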
In practical implementation, a processor of the terminal may be used to generate a function instruction of the mapping point, and then the terminal may automatically implement an operation on the display screen of the terminal based on the function instruction, that is, the terminal may automatically implement an operation on the display screen of the terminal based on the function instruction without touching the display screen by the user.
Here, the operation on the terminal display screen corresponds to a functional instruction; illustratively, when the functional instruction at the mapping point is an instruction indicating to click the current mapping point, a click operation on the mapping point can be realized based on the instruction; when the functional instruction at the mapping point is the instruction at the long-pressing current mapping point, the long-pressing operation on the mapping point can be realized based on the instruction; when the functional instruction at the mapping point is an instruction indicating to perform a screen sliding operation, the screen sliding operation may be implemented based on the instruction.
After the operation on the terminal display screen is performed, its effect can be shown on the display screen; for example, a click operation on the mapping point may open or exit a menu, a long-press operation on the mapping point may trigger the long-press menu, and a screen sliding operation may turn the page.
In this embodiment of the method for operating the terminal in a non-touch screen manner, the image of the reference object is analyzed to obtain the mapping point on the display screen of the terminal, and operations such as clicking, long pressing and screen sliding are then completed based on the functional instruction at the mapping point, so that the terminal is operated without touching the screen; the terminal can be operated from the reference object image alone, with no finger contact with the screen. This addresses the problem that terminals with ever larger display screens are inconvenient to operate with the fingers, effectively improving human-computer interaction efficiency, terminal operability and the user experience.
Example two:
To further illustrate the object of the present invention, the following description is given on the basis of the first embodiment of the present invention.
Fig. 4 is a flowchart of a specific implementation of a method for operating a terminal in a non-touch screen manner according to an embodiment of the present invention, where as shown in fig. 4, the flowchart includes:
Step 401: detect whether the terminal has the positioning function turned on; if not, the process ends directly, the terminal gives no response and no function instruction is generated; if the positioning function is turned on, jump to step 402.
Step 402: detect whether an object is present above the display screen of the terminal; if not, the process ends directly, the terminal gives no response and no function instruction is generated; if an object is above the display screen of the terminal, jump to step 403.
Here, a distance detecting means or a distance sensing means may be employed to determine whether an object is present above the display screen of the terminal within its detection or sensing range.
Step 403: detect whether a sensing space range is set on the terminal; if not, the terminal calculates the coordinate position of the mapping point of the reference object on the terminal screen and completes command operations such as point selection, long press and screen sliding according to the coordinate position, moving direction and moving speed of the mapping point; if a sensing space range is set on the terminal, go to step 404.
In this step, the sensing space range corresponds to the set distance interval described in the first embodiment.
Step 404: when the object is in the sensing space range, the terminal calculates the coordinate position of the mapping point of the reference object on the terminal screen, and the process goes to step 405.
It should be noted that when the object is not in the sensing space range, the terminal does not respond.
Step 405: the terminal completes command operations such as point selection, long press and screen sliding according to the coordinate position, moving direction and moving speed of the mapping point.
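The flow of steps 401-405 can be summarized in a short sketch. The helper methods used here (positioning_enabled, object_above_screen, sensing_space_range, object_in_range, mapping_point_of_reference, command_from_mapping_point) are hypothetical names introduced only to make the control flow concrete; they are not part of the original disclosure.

```python
def handle_frame(terminal):
    if not terminal.positioning_enabled():           # step 401
        return None                                  # no response, no instruction
    if not terminal.object_above_screen():           # step 402
        return None
    sensing_range = terminal.sensing_space_range()   # step 403
    if sensing_range is not None and not terminal.object_in_range(sensing_range):
        return None                                  # object outside the sensing space
    point = terminal.mapping_point_of_reference()    # steps 403/404
    # step 405: point selection, long press or slide based on the mapping point's
    # coordinate position, moving direction and moving speed
    return terminal.command_from_mapping_point(point)
```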
Example three:
To further illustrate the object of the present invention, the following description is given on the basis of the first embodiment of the present invention.
In the third embodiment of the present invention, the display screen of the terminal is provided with two cameras, denoted camera A3 and camera B3. The spatial positions of camera A3 and camera B3 can be represented by three-dimensional space coordinates, where the coordinates of camera A3 are (xa3, ya3, za3) and the coordinates of camera B3 are (xb3, yb3, zb3). In practical implementation, the coordinates of a point on the terminal can be preset, and the three-dimensional space coordinates of each camera can then be determined according to the positional relationship between that point and each camera.
Here, a reference object may be provided, which may be one eye of a person. FIG. 5 is a schematic diagram of the fovea and pupil of an eye according to an embodiment of the present invention; as shown in FIG. 5, the fovea is labeled E1 and the pupil center is labeled E2, the fovea being the location where the human eye's imaging is sharpest. The line through the fovea E1 and the pupil center E2 can be taken as the primary line of sight of the human eye, referred to as line of sight L.
Here, the spatial positions of the fovea of the eye and of the pupil center point may be acquired; the three-dimensional coordinates of the fovea E1 are denoted (x1, y1, z1), and the three-dimensional coordinates of the pupil center point E2 are denoted (x2, y2, z2).
In practical implementation, the two cameras can be used for respectively acquiring images of the fovea of the eyes, and can also be used for respectively acquiring images of the pupil center point; here, the image of the eye may be collected, and then the image of the pupil of the eye is obtained by using an image recognition or image matching technique, and finally the image of the pupil center point is determined according to the image of the pupil of the eye.
After the image acquisition is finished, determining the spatial position of the fovea of the eyes based on the images of the fovea of the eyes acquired by the two cameras and the spatial positions of the two cameras; the spatial position of the pupil center point can be determined based on the images of the pupil center point collected by the two cameras and the spatial positions of the two cameras.
Here, when determining the spatial position of the fovea or pupil center point of the eye, it may be implemented using binocular stereo vision techniques; this will be specifically explained below with reference to fig. 6.
FIG. 6 is a schematic diagram illustrating the principle of determining the position of a point in space with two cameras according to an embodiment of the present invention. As shown in FIG. 6, the two cameras are denoted Ol and Or, both have focal length f, and the distance between them is T. An XYZ three-dimensional rectangular coordinate system is established with the position of one camera as the origin, where the X-axis direction is along the line connecting the two cameras, the Y-axis is perpendicular to the X-axis, and the Z-axis direction is parallel to the main optical axis (principal ray) of each camera. A left imaging plane is drawn perpendicular to the main optical axis of camera Ol, at a perpendicular distance equal to the focal length f from the camera's optical center; a left imaging plane coordinate system is established on this plane, with two coordinate axes xl and yl, the xl-axis being parallel to the X-axis and the yl-axis parallel to the Y-axis. Similarly, a right imaging plane is drawn perpendicular to the main optical axis of camera Or, at a perpendicular distance equal to the focal length f from the camera's optical center; a right imaging plane coordinate system is established on this plane, with two coordinate axes xr and yr, the xr-axis being parallel to the X-axis and the yr-axis parallel to the Y-axis.
Referring to FIG. 6, the principal point of camera Ol (the projection of its optical center onto the left imaging plane) is represented in the left imaging plane coordinate system as (cx1, cy1), and the principal point of camera Or is represented in the right imaging plane coordinate system as (cx2, cy2). For a point P in space, the intersection of the line connecting P with the optical center of camera Ol and the left imaging plane is denoted Pl, and the intersection of the line connecting P with the optical center of camera Or and the right imaging plane is denoted Pr. In practical implementation, the coordinates of point P in the XYZ three-dimensional rectangular coordinate system can be obtained, based on the binocular stereo vision principle, from the focal length f of the two cameras, the distance T between the two cameras, the coordinates of Pl in the left imaging plane coordinate system, and the coordinates of Pr in the right imaging plane coordinate system.
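For a rectified setup as in FIG. 6 (parallel principal axes, equal focal length f in pixels, baseline T), the depth of P follows from the disparity between Pl and Pr. The sketch below is a minimal triangulation under those assumptions; the image coordinates are taken relative to the respective principal points, and the result is expressed in the left camera's coordinate system.

```python
def triangulate(xl, yl, xr, yr, f, T):
    """Return (X, Y, Z) of point P from its left/right image coordinates."""
    disparity = xl - xr
    if disparity == 0:
        raise ValueError("zero disparity: point at infinity or mismatched pixels")
    Z = f * T / disparity   # depth along the Z (principal axis) direction
    X = xl * Z / f          # lateral position
    Y = yl * Z / f          # vertical position
    return X, Y, Z
```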
After the spatial positions of the central fovea of the eye and the central point of the pupil are determined, the intersection point of a straight line passing through the central fovea of the eye and the central point of the pupil and the terminal display screen can be determined based on the spatial positions of the central fovea of the eye and the central point of the pupil; this is illustrated in fig. 7.
Fig. 7 is a schematic diagram of the positional relationship between the human eye's line of sight and the terminal display screen according to an embodiment of the present invention. Referring to FIG. 5 and FIG. 7, the line of sight L passing through the eye fovea E1 and the pupil center E2 is determined based on the positions of the fovea E1 and the pupil center E2, and the position of the intersection O of the line of sight L with the terminal display screen is then determined from the known position of the terminal display screen.
Here, the position of the display screen of the terminal may be determined according to the spatial position of each camera on the terminal and the relative positional relationship between each camera and the display screen of the terminal.
Here, the intersection O of the sight line L and the terminal display screen is a mapping point on the terminal display screen that forms a preset mapping relationship with the reference object; furthermore, after the position of the intersection point O of the sight line L and the terminal display screen is determined, an indication point can be displayed at the position of the intersection point O of the terminal display screen, so that a mapping point which forms a preset mapping relation with the reference object on the terminal display screen can be displayed visually.
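A minimal sketch of this intersection computation is given below. It assumes the fovea and pupil center coordinates have already been expressed in a coordinate system in which the display screen lies in the z = 0 plane; a real terminal would first apply the transform derived from the camera positions and the known position of the display screen.

```python
def gaze_screen_intersection(e1, e2):
    """e1, e2: (x, y, z) of the fovea and the pupil center. Returns (x, y) of O."""
    (x1, y1, z1), (x2, y2, z2) = e1, e2
    dz = z2 - z1
    if dz == 0:
        return None                 # line of sight parallel to the screen plane
    t = -z1 / dz                    # parameter at which the line reaches z = 0
    x = x1 + t * (x2 - x1)
    y = y1 + t * (y2 - y1)
    return x, y                     # mapping point O in screen-plane coordinates
```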
After a mapping point forming a preset mapping relation with the reference object on a terminal display screen is determined, generating a function instruction at the mapping point based on the mapping point; the generated functional instructions at the mapping points may be used to: indicating to click the mapping point, indicating to press the mapping point for a long time or indicating to perform screen sliding operation; the direction of the sliding screen operation may be up, down, left or right.
In the third embodiment of the present invention, an implementation manner of generating the functional instruction at the mapping point and an implementation manner of implementing an operation on the display screen of the terminal based on the functional instruction are both described in the first embodiment of the present invention, and are not described herein again.
Example four
To further illustrate the object of the present invention, the following description is given on the basis of the first embodiment of the present invention.
In the fourth embodiment of the present invention, the reference object may be one eye of a person. A camera (front camera) is arranged on the surface of the terminal display screen and is used for capturing the image in the human eye, that is, the image of the object reflected in the human eye; further, the clearest point of the captured in-eye image may also be determined, where the center point of the in-eye image may be taken as the clearest point.
The area, in the current display content of the terminal display screen, that matches the captured in-eye image or a reference image is recorded as the screen matching area; the reference image is an image of a region of the captured in-eye image that contains the clearest point, for example, a region centered at the clearest point with a radius of 1 cm.
A point in the screen matching area is then determined as the mapping point forming the preset mapping relation with the reference object on the terminal display screen; for example, a point within a circular area centered at the center point of the screen matching area and having a radius of 0.5 cm may be taken as the mapping point.
Furthermore, the coordinates of the mapping points in the screen coordinate system can be determined.
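One way to locate the screen matching area, sketched below, is template matching between the current display content and the reference image; the use of OpenCV's normalized cross-correlation, the 0.6 confidence threshold, and taking the center of the matched area as the mapping point are illustrative assumptions rather than requirements of the disclosure.

```python
import cv2

def mapping_point_from_eye_image(screen_capture, reference_image):
    """screen_capture: current display content; reference_image: patch around the
    clearest point of the in-eye image. Returns (x, y) in screen pixels, or None."""
    result = cv2.matchTemplate(screen_capture, reference_image, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    if max_val < 0.6:                                  # assumed confidence threshold
        return None
    h, w = reference_image.shape[:2]
    return max_loc[0] + w // 2, max_loc[1] + h // 2    # center of the matched area
```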
In the fourth embodiment of the present invention, an implementation manner of generating the functional instruction at the mapping point and an implementation manner of implementing an operation on the display screen of the terminal based on the functional instruction are already described in the first embodiment of the present invention, and are not described herein again.
EXAMPLE five
To further illustrate the object of the present invention, the following description is given on the basis of the first embodiment of the present invention.
In the fifth embodiment of the present invention, the reference object may be the two eyes of a person. Two cameras are arranged on the surface of the terminal display screen, denoted camera A5 and camera B5; the coordinates of camera A5 in the three-dimensional coordinate system are (xa5, ya5, za5), and the coordinates of camera B5 are (xb5, yb5, zb5). Further, camera A5 and camera B5 may be arranged in the same plane, and the plane formed by the two cameras may be parallel to or coincident with the terminal screen.
Each camera can respectively collect images of two eyes, then an image of the pupil of each eye is obtained by utilizing an image recognition or image matching technology, and finally the image of the pupil center point of each eye is determined according to the image of the pupil of each eye; here, the pupil center of each eye is a reference point.
After camera A5 and camera B5 have both collected images of the pupil center points of the two eyes, the spatial positions of the pupil center points can be determined based on the images collected by each camera and the spatial positions of camera A5 and camera B5; here, the pupil center points of the two eyes are denoted point C5 and point D5. Obviously, once the spatial positions of the pupil center points of both eyes are determined, the distance from each camera to point C5 and to point D5 can be obtained.
Here, the principle of determining the spatial positions of the pupil center points of the two eyes has been described in the third embodiment of the present invention, and is not described here again.
After determining the spatial positions of the pupil center points of the two eyes, the processor of the terminal may determine the projection points of the pupil center points on the terminal display screen according to their spatial positions and the position of the display screen, where the projection point of point C5 on the terminal display screen is denoted point E5 and the projection point of point D5 is denoted point F5; the coordinates of point E5 in the screen coordinate system are (XA5, YA5), and the coordinates of point F5 in the screen coordinate system are (XB5, YB5).
Here, the position of the display screen of the terminal may be determined according to the spatial positions of the camera a5 and the camera B5 on the terminal and the relative positional relationship of the camera a5 and the camera B5 to the display screen of the terminal.
After determining the projection points of the pupil center points of the two eyes on the terminal display screen, the middle point of the line connecting the two projection points is determined, and this middle point is taken as the mapping point on the terminal display screen that forms the preset mapping relation with the reference object.
In practical implementation, the midpoint O5 of the line connecting point E5 and point F5 may be determined according to the coordinates of point E5 and point F5 in the screen coordinate system; point O5 is the mapping point on the terminal display screen that forms the preset mapping relation with the reference object, and its coordinates in the screen coordinate system are expressed as (XO5, YO5).
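A minimal sketch of this computation is given below. It assumes the screen coordinate system is aligned so that the projection of a pupil center onto the screen plane simply drops the depth coordinate; in a real terminal the projection would be derived from the camera positions and the position of the display screen as described above.

```python
def mapping_point_from_two_eyes(c5, d5):
    """c5, d5: (x, y, z) spatial positions of the two pupil centers.
    Returns (XO5, YO5), the midpoint of their projections on the screen."""
    e5 = (c5[0], c5[1])              # projection E5 of point C5 onto the screen plane
    f5 = (d5[0], d5[1])              # projection F5 of point D5 onto the screen plane
    x_o5 = (e5[0] + f5[0]) / 2.0     # midpoint of the line E5-F5
    y_o5 = (e5[1] + f5[1]) / 2.0
    return x_o5, y_o5                # mapping point O5 in screen coordinates
```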
In the fifth embodiment of the present invention, an implementation manner of generating the functional instruction at the mapping point and an implementation manner of implementing an operation on the display screen of the terminal based on the functional instruction are already described in the first embodiment of the present invention, and are not described herein again.
EXAMPLE six
To further illustrate the object of the present invention, the following description is given on the basis of the first embodiment of the present invention.
In the sixth embodiment of the present invention, a front camera and a rear camera are installed on the terminal; the front camera is used for acquiring an image of a reference object A in real time, and the rear camera is used for acquiring an image of a reference object B in real time.
based on the first embodiment of the invention, before analyzing the image of the reference object A, the distance between the front camera and the reference object A needs to be acquired in real time; before analyzing the image of the reference object B, the distance between the rear camera and the reference object B needs to be acquired in real time.
After selecting a reference point from the reference object a, the spatial position of the reference point may be determined based on the image of the reference point, the spatial position of the front camera, and the distance between the front camera and the reference object a; similarly, after the reference point is selected from the reference object B, the spatial position of the reference point may be determined based on the image of the reference point, the spatial position of the rear camera, and the distance between the rear camera and the reference object B.
For example, the reference point selected from reference object A is denoted point A6, and the reference point selected from reference object B is denoted point B6; the coordinates of the spatial position of point A6 can be written as (XA6, YA6, ZA6), and the coordinates of the spatial position of point B6 as (XB6, YB6, ZB6).
After the reference points are determined, an implementation manner of determining a projection point of each reference point on the terminal display screen, and an implementation manner of determining a mapping point forming a preset mapping relationship with the reference object based on the projection point are already described in the first embodiment of the present invention, and are not described again here.
It should be noted that a switch between the front camera and the rear camera may be provided on the terminal, so that the user can choose to enable the front camera or the rear camera according to the relative position between the selected reference object and the terminal.
In the sixth embodiment of the present invention, an implementation manner of generating the functional instruction at the mapping point and an implementation manner of implementing an operation on the display screen of the terminal based on the functional instruction are already described in the first embodiment of the present invention, and are not described herein again.
EXAMPLE seven
On the basis of the first embodiment of the invention, the embodiment of the invention also provides a terminal.
Fig. 8 is a schematic structural diagram of a terminal according to an embodiment of the present invention, and as shown in fig. 8, the terminal 800 includes: an image acquisition device 801 and a processor 802; wherein,
the image acquisition device 801 is used for acquiring images of a reference object in real time, wherein the distance between the reference object and a terminal display screen exceeds a set value;
the processor 802 is configured to analyze the image of the reference object to obtain a mapping point on a display screen of the terminal, where the mapping point forms a preset mapping relationship with the reference object; and generating a functional instruction at the mapping point, and realizing the operation of the terminal display screen based on the functional instruction.
In particular, the reference object comprises a human pupil;
the processor 802 is further configured to obtain a spatial position of a fovea of the eye in real time; the pupil and the eye fovea are located in the same eye;
accordingly, the processor 802 is specifically configured to derive a spatial position of a pupil center point based on the image of the pupil; and to determine, based on the spatial positions of the fovea of the eye and the pupil center point, the intersection point of a straight line passing through the fovea and the pupil center point with the terminal display screen as the mapping point on the terminal display screen that forms the preset mapping relation with the reference object.
Specifically, the image acquisition device 801 includes at least one camera;
the processor 802 is specifically configured to select at least one point in the image of the reference object as a reference point; determining the spatial position of each reference point based on the image of each reference point and the spatial position of each camera; determining a projection point of each reference point on the terminal display screen based on the spatial position of each reference point and the spatial position of the terminal display screen; and determining a mapping point forming a preset mapping relation with the reference object based on the projection point.
In particular, the reference object comprises one eye of a person;
the image acquisition device 801 is specifically used for acquiring images in human eyes in real time;
the processor 802 is specifically configured to determine an area, in the current display content of the terminal display screen, matching the acquired image in the human eye as: a screen matching area; and selecting one point in the screen matching area as a mapping point forming a preset mapping relation with the reference object.
Further, the processor 802 is further configured to determine a distance between a display screen of the terminal and the reference object before analyzing the image of the reference object;
the processor 802 is specifically configured to analyze the image of the reference object when the determined distance is within a set interval, so as to obtain a mapping point on a display screen of the terminal, where the mapping point forms a preset mapping relationship with the reference object; and when the acquired distance is determined not to be in the set interval, directly ending the process.
Specifically, the processor 802 is configured to determine the length of time during which the mapping point is in a set area, and to generate a functional instruction at the mapping point based on the determined length of time.
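As a concrete illustration of mapping the dwell time to an instruction (consistent with the ranges later recited in claims 14 and 15: below the first time threshold a slide, between the two thresholds a click, above the second threshold a long press), a minimal sketch follows; the numeric thresholds are assumptions chosen only for illustration.

```python
T1, T2 = 0.3, 1.0   # assumed time thresholds in seconds

def instruction_from_dwell(dwell_time):
    if dwell_time < T1:
        return "SLIDE"        # third set range: below the first time threshold
    if dwell_time <= T2:
        return "CLICK"        # first set range: between the two time thresholds
    return "LONG_PRESS"       # second set range: above the second time threshold
```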
Further, the image capturing device 801 is further configured to, before generating the function instruction at the mapping point, continuously perform image capturing on the motion of the user of the terminal to obtain a motion image of the user;
the processor 802 is further configured to perform image recognition on the motion image of the user to obtain a recognition result; generating a functional instruction at the mapping point based on the recognition result.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention.
Claims (30)
1. A method for operating a terminal in a non-touch screen manner, the method comprising:
acquiring an image of a reference object in real time, wherein the distance between the reference object and a terminal display screen exceeds a set value;
analyzing the image of the reference object to obtain a mapping point which forms a preset mapping relation with the reference object on a terminal display screen;
and generating a functional instruction at the mapping point, and realizing the operation of the terminal display screen based on the functional instruction.
2. The method of claim 1, wherein the reference object comprises a pupil of a human;
correspondingly, the analyzing the image of the reference object to obtain a mapping point on a terminal display screen, which forms a preset mapping relationship with the reference object, includes:
acquiring the spatial position of the fovea of the eye in real time, wherein the pupil and the fovea of the eye are positioned in the same eye; obtaining the spatial position of the pupil center point based on the image of the pupil;
and determining, based on the spatial positions of the fovea of the eye and the pupil center point, the intersection point of a straight line passing through the fovea of the eye and the pupil center point with the terminal display screen as the mapping point on the terminal display screen that forms the preset mapping relation with the reference object.
3. The method of claim 2, wherein the acquiring images of the reference object in real time comprises: respectively acquiring images of the pupils by using two cameras;
the obtaining the spatial position of the pupil center point based on the image of the pupil comprises: and obtaining the spatial position of the pupil center point based on the spatial positions of the two cameras and the images collected by the two cameras.
4. The method of claim 1, wherein the acquiring images of the reference object in real-time comprises: acquiring images of the reference object in real time by using at least one camera;
correspondingly, the analyzing the image of the reference object to obtain a mapping point on a terminal display screen, which forms a preset mapping relationship with the reference object, includes:
selecting at least one point in the image of the reference object as a reference point;
determining the spatial position of each reference point based on the image of each reference point and the spatial position of each camera;
determining a projection point of each reference point on the terminal display screen based on the spatial position of each reference point and the spatial position of the terminal display screen;
and determining a mapping point forming a preset mapping relation with the reference object based on the projection point.
5. The method of claim 4, wherein determining a mapping point forming a predetermined mapping relationship with the reference object based on the projection point comprises: and taking the projection point as a mapping point which forms a preset mapping relation with the reference object.
6. The method of claim 4, wherein the number of reference points is 2;
the determining of the mapping point forming a preset mapping relation with the reference object based on the projection point comprises: and determining projection points of the two reference points on the terminal display screen, and taking the middle point of the connecting line of the two determined projection points as a mapping point which forms a preset mapping relation with the reference object.
7. The method of claim 6, wherein the reference object comprises two eyes of a person;
the selecting at least one point in the image of the reference object as a reference point includes: and respectively taking the pupil center points of the two eyes of the person as reference points.
8. The method according to claim 4, wherein the selecting at least one point in the image of the reference object as a reference point comprises:
determining the spatial position of the reference object based on the image of the reference object and the spatial position of each camera;
and taking one point with the minimum vertical distance with the terminal display screen in the reference object as a reference point based on the spatial position of the reference object.
9. The method of claim 4, wherein the acquiring images of the reference object in real time comprises: acquiring an image of the reference object in real time by using a camera;
before analyzing the image of the reference, the method further comprises: acquiring the distance between the camera and the reference object in real time;
correspondingly, the determining the spatial position of the reference point based on the image of the reference point and the spatial position of each camera comprises:
and determining the spatial position of the reference point based on the image of the reference point, the spatial position of the camera and the distance between the camera and the reference object.
10. The method according to claim 4, wherein the reference object is an object with a minimum vertical distance from the terminal display screen, or the reference object is located on a human body.
11. The method of claim 1, wherein the reference object comprises one eye of a human; the real-time acquisition of images of a reference object includes: acquiring images in human eyes in real time;
the analyzing the image of the reference object to obtain a mapping point on a terminal display screen, which forms a preset mapping relation with the reference object, comprises: determining the area matched with the acquired image in the human eyes in the current display content of the terminal display screen as follows: a screen matching area; and selecting one point in the screen matching area as a mapping point forming a preset mapping relation with the reference object.
12. The method of claim 1, wherein prior to analyzing the image of the reference, the method further comprises: determining the distance between a terminal display screen and the reference object;
the analyzing the image of the reference object to obtain a mapping point on a terminal display screen, which forms a preset mapping relation with the reference object, comprises: and when the determined distance is in a set interval, analyzing the image of the reference object to obtain a mapping point which forms a preset mapping relation with the reference object on a terminal display screen.
13. The method of claim 1, wherein the generating the functional instruction at the mapping point comprises: determining the length of time during which the mapping point is in a mapping point region on the display screen of the terminal; and generating a functional instruction at the mapping point based on the determined length of time; the mapping point region includes an initial position of the mapping point on the terminal display screen.
14. The method of claim 13, wherein generating the functional instruction at the mapping point based on the determined length of time comprises:
when the determined time is in a first set range, generating an instruction for indicating to click the current mapping point; when the determined time is in a second set range, generating an instruction for indicating the long press at the current mapping point; when the determined time is in a third set range, generating an instruction for indicating screen sliding operation;
the first setting range, the second setting range and the third setting range do not overlap with each other.
15. The method according to claim 14, wherein the first set range is a range from a first time threshold to a second time threshold; the second set range is greater than a second time threshold; the third set range is smaller than a first time threshold; the first time threshold is less than a second time threshold.
16. The method of claim 14, wherein generating the instruction indicating to perform the slide operation comprises: and acquiring the moving direction and the moving speed of the mapping point, and generating an instruction for instructing screen sliding operation based on the moving direction and the moving speed of the mapping point.
17. The method of claim 16, wherein generating the instruction indicating to perform a screen sliding operation based on the moving direction and the moving speed of the mapping point comprises:
taking the moving speed of the mapping point in the transverse direction of the mobile terminal display screen as the transverse moving speed of the mapping point, and taking the moving speed of the mapping point in the longitudinal direction of the mobile terminal display screen as the longitudinal moving speed of the mapping point;
when the transverse moving speed of the mapping point is greater than the longitudinal moving speed of the mapping point, generating an instruction for indicating transverse screen sliding operation; or when the transverse moving speed of the mapping point is greater than the longitudinal moving speed of the mapping point and meets a first set condition, generating an instruction for instructing transverse screen sliding operation;
when the longitudinal moving speed of the mapping point is greater than the transverse moving speed of the mapping point, generating an instruction for indicating to perform longitudinal screen sliding operation; or when the longitudinal movement rate of the mapping point is greater than the transverse movement rate of the mapping point and the longitudinal movement rate of the mapping point meets a second set condition, generating an instruction for instructing to perform a longitudinal screen sliding operation.
18. The method according to claim 17, wherein the first setting condition is: the transverse moving speed of the mapping point is in a fourth set range; the second setting condition is as follows: the longitudinal movement rate of the mapping point is within a fifth set range.
19. The method of claim 13, wherein the mapping point region includes an initial position of a mapping point on the terminal display screen; the area of the mapping point area is less than or equal to a set threshold.
20. The method as claimed in claim 19, wherein the mapping point region is a circular region centered on the initial position of the mapping point.
21. The method of claim 1, wherein prior to generating the functional instructions at the mapping points, the method further comprises: continuously acquiring images of the actions of the user of the terminal to obtain action images of the user; carrying out image recognition on the action image of the user to obtain a recognition result;
accordingly, the generating the functional instruction at the mapping point comprises: generating a functional instruction at the mapping point based on the recognition result.
22. The method of claim 21, wherein generating the functional instruction at the mapping point based on the recognition result comprises: when the recognition result is a blinking action, a mouth opening action or a mouth closing action, generating an instruction for indicating clicking a mapping point; when the recognition result is a nodding action, generating an instruction for indicating to perform downward screen sliding operation; when the recognition result is a head-up action, generating an instruction for indicating to perform screen up-sliding operation; and when the recognition result is a left-right shaking motion, generating an instruction for indicating to perform transverse screen sliding operation.
23. The method of claim 1, wherein the functional instructions at the mapping points are: and indicating to click the function instruction at the current mapping point, indicating to press the function instruction at the current mapping point or indicating to perform screen sliding operation.
24. A terminal is characterized by comprising an image acquisition device and a processor; wherein,
the image acquisition device is used for acquiring an image of a reference object in real time, and the distance between the reference object and the terminal display screen exceeds a set value;
the processor is used for analyzing the image of the reference object to obtain a mapping point which forms a preset mapping relation with the reference object on a terminal display screen; and generating a functional instruction at the mapping point, and realizing the operation of the terminal display screen based on the functional instruction.
25. A terminal as claimed in claim 24, wherein the reference object comprises a human pupil;
the processor is also used for acquiring the spatial position of the fovea of the eye in real time; the pupil and the eye fovea are located in the same eye;
correspondingly, the processor is specifically configured to derive a spatial position of a pupil center point based on the image of the pupil; and to determine, based on the spatial positions of the fovea of the eye and the pupil center point, the intersection point of a straight line passing through the fovea and the pupil center point with the terminal display screen as the mapping point on the terminal display screen that forms the preset mapping relation with the reference object.
26. The terminal of claim 24, wherein the image capture device comprises at least one camera;
the processor is specifically configured to select at least one point in the image of the reference object as a reference point; determining the spatial position of each reference point based on the image of each reference point and the spatial position of each camera; determining a projection point of each reference point on the terminal display screen based on the spatial position of each reference point and the spatial position of the terminal display screen; and determining a mapping point forming a preset mapping relation with the reference object based on the projection point.
27. The terminal of claim 24, wherein the reference object comprises an eye of a person;
the image acquisition device is specifically used for acquiring images in human eyes in real time;
the processor is specifically configured to determine an area, in the current display content of the terminal display screen, that matches the acquired image in the human eye as: a screen matching area; and selecting one point in the screen matching area as a mapping point forming a preset mapping relation with the reference object.
28. The terminal of claim 24, wherein the processor is further configured to determine a distance between a display of the terminal and the reference object prior to analyzing the image of the reference object;
the processor is specifically configured to analyze the image of the reference object when the determined distance is within a set interval, and obtain a mapping point on a display screen of the terminal, where the mapping point forms a preset mapping relationship with the reference object.
29. The terminal of claim 24, wherein the processor is specifically configured to determine the length of time during which the mapping point is in a set area, and to generate a functional instruction at the mapping point based on the determined length of time.
30. The terminal according to claim 24, wherein the image capturing device is further configured to capture images of the actions of the user of the terminal continuously before generating the function instructions at the mapping points, so as to obtain an image of the actions of the user;
the processor is further used for carrying out image recognition on the action image of the user to obtain a recognition result; generating a functional instruction at the mapping point based on the recognition result.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610959809.6A CN108008811A (en) | 2016-10-27 | 2016-10-27 | A kind of method and terminal using non-touch screen mode operating terminal |
PCT/CN2017/078581 WO2018076609A1 (en) | 2016-10-27 | 2017-03-29 | Terminal and method for operating terminal |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610959809.6A CN108008811A (en) | 2016-10-27 | 2016-10-27 | A kind of method and terminal using non-touch screen mode operating terminal |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108008811A true CN108008811A (en) | 2018-05-08 |
Family
ID=62024412
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610959809.6A Pending CN108008811A (en) | 2016-10-27 | 2016-10-27 | A kind of method and terminal using non-touch screen mode operating terminal |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN108008811A (en) |
WO (1) | WO2018076609A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111752381A (en) * | 2019-05-23 | 2020-10-09 | 北京京东尚科信息技术有限公司 | Man-machine interaction method and device |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111625210B (en) * | 2019-02-27 | 2023-08-04 | 杭州海康威视系统技术有限公司 | Large screen control method, device and equipment |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101441513A (en) * | 2008-11-26 | 2009-05-27 | 北京科技大学 | System for performing non-contact type human-machine interaction by vision |
CN101901485A (en) * | 2010-08-11 | 2010-12-01 | 华中科技大学 | 3D free head moving type gaze tracking system |
CN102662473A (en) * | 2012-04-16 | 2012-09-12 | 广东步步高电子工业有限公司 | Device and method for implementation of man-machine information interaction based on eye motion recognition |
CN104471511A (en) * | 2012-03-13 | 2015-03-25 | 视力移动技术有限公司 | Touch free user interface |
Also Published As
Publication number | Publication date |
---|---|
WO2018076609A1 (en) | 2018-05-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110647237B (en) | Gesture-based content sharing in an artificial reality environment | |
CN102662577B (en) | A kind of cursor operating method based on three dimensional display and mobile terminal | |
CN106873778B (en) | Application operation control method and device and virtual reality equipment | |
US9651782B2 (en) | Wearable tracking device | |
CN106846403B (en) | Method and device for positioning hand in three-dimensional space and intelligent equipment | |
KR102160236B1 (en) | A virtual reality interface implementation method based on a single camera-based 3D image analysis, a virtual reality interface implementation device based on a single camera-based 3D image analysis | |
KR102517425B1 (en) | Systems and methods of direct pointing detection for interaction with a digital device | |
CN102473041B (en) | Image recognition device, operation determination method, and program | |
CN107390863B (en) | Device control method and device, electronic device and storage medium | |
CN107357428A (en) | Man-machine interaction method and device based on gesture identification, system | |
CN106708270B (en) | Virtual reality equipment display method and device and virtual reality equipment | |
KR20120068253A (en) | Method and apparatus for providing response of user interface | |
CN104364733A (en) | Position-of-interest detection device, position-of-interest detection method, and position-of-interest detection program | |
CN104813258A (en) | Data input device | |
WO2020080107A1 (en) | Information processing device, information processing method, and program | |
JPWO2014141504A1 (en) | 3D user interface device and 3D operation processing method | |
WO2013149475A1 (en) | User interface control method and device | |
US20160232708A1 (en) | Intuitive interaction apparatus and method | |
CN104298340A (en) | Control method and electronic equipment | |
WO2012142869A1 (en) | Method and apparatus for automatically adjusting terminal interface display | |
US11520409B2 (en) | Head mounted display device and operating method thereof | |
WO2023227072A1 (en) | Virtual cursor determination method and apparatus in virtual reality scene, device, and medium | |
WO2010142455A2 (en) | Method for determining the position of an object in an image, for determining an attitude of a persons face and method for controlling an input device based on the detection of attitude or eye gaze | |
JP5964603B2 (en) | Data input device and display device | |
WO2019085519A1 (en) | Method and device for facial tracking |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20180508 |