CN112528713B - Gaze point estimation method, gaze point estimation system, gaze point estimation processor and gaze point estimation equipment
- Publication number: CN112528713B
- Application number: CN201910887940.XA
- Authority: CN (China)
- Legal status: Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
- G06V40/193—Preprocessing; Feature extraction
Abstract
The invention discloses a gaze point estimation method and a gaze point estimation system. The method is applied to a gaze tracking device with a single camera and a single coaxial light source, and comprises the following steps: acquiring an original image captured by the single camera; collecting human eye feature information from the original image, and calculating pupil spot center data of the two eyes based on the human eye feature information; calculating an initial PCR vector based on the pupil spot center data, wherein the PCR vector represents a vector pointing from the spot center to the pupil center; normalizing the initial PCR vector with a preset distance factor to obtain a target PCR vector; and calculating gaze point information from the target PCR vector. Because only pupil spot center data are used in gaze point estimation, the requirement for two groups of light sources is removed, so the number of light sources of existing gaze tracking devices can be reduced and miniaturization and light weight of the gaze tracking device can be realized.
Description
Technical Field
The present invention relates to the field of information processing technologies, and in particular, to a gaze point estimation method, a gaze point estimation system, a gaze point estimation processor, and a gaze point estimation device.
Background
With the development of human-computer interaction technology, eyeball tracking technology has become widely used. Eye tracking, also known as gaze tracking, is a technique for estimating the line of sight and/or gaze point of the eye by measuring eye movement.
Existing gaze tracking techniques are generally based on multiple-light-source single-camera or multiple-light-source multiple-camera setups. The light sources generally fall into two types: a light source placed away from the camera position, which yields an image with a normal pupil (also called a dark pupil), called a dark pupil light source; and a light source coaxial with the camera, whose light is reflected back out of the eye so that the pupil appears bright in the captured face image (also called a bright pupil), called a bright pupil light source. In the prior art, the light source combination used for gaze tracking is generally several dark pupil sources, several dark pupil sources combined with a bright pupil source, or one dark pupil source combined with one bright pupil source. The distance between two light sources must be set relatively large, so gaze tracking devices produced with existing methods are bulky and cannot meet users' requirements for miniaturization and light weight.
Disclosure of Invention
In view of the above problems, the present invention provides a gaze point estimation method and system which perform gaze point estimation with a single light source, so that the number of light sources of existing gaze tracking devices can be reduced and miniaturization and light weight of the gaze tracking device can be achieved.
In order to achieve the above object, the present invention provides the following technical solutions:
A gaze point estimation method applied to a gaze tracking device having a single camera and a single on-axis light source, comprising:
Acquiring an original image captured from the single camera;
Collecting human eye characteristic information of the original image, and calculating pupil spot center data of two eyes based on the human eye characteristic information;
based on the pupil spot center data, calculating to obtain an initial PCR vector, wherein the PCR vector represents a vector of the spot center pointing to the pupil center;
Normalizing the initial PCR vector by using a preset distance factor to obtain a target PCR vector, wherein the preset distance factor characterizes normalization parameters for normalizing the initial PCR vector;
And calculating and obtaining the gaze point information according to the target PCR vector.
Optionally, the pupil spot center data of the two eyes includes pupil image coordinates and spot image coordinates, the collecting the human eye feature information of the original image, and calculating the pupil spot center data of the two eyes based on the human eye feature information includes:
Collecting human eye characteristic information of the original image;
according to the human eye characteristic information, obtaining pupil image characteristics and spot image characteristics of the two eyes;
calculating to obtain pupil image coordinates according to the pupil image characteristics;
and calculating to obtain the spot image coordinates according to the spot image characteristics.
Optionally, the preset distance factor characterizes a function of a distance parameter, wherein the distance parameter includes the distance between the pupils of the two eyes, the distance between the light spots of the two eyes, or the distance between specified feature points of the two eyes.
Optionally, the acquiring the original image captured from the single camera comprises acquiring the original image captured from the single camera according to a set exposure gain, the method further comprising:
Calculating an average gray value of the pupil area according to the original image;
and determining, according to the average gray value, whether to adjust the set exposure gain, so that the captured original image satisfies the light spot search condition.
Optionally, the determining whether to adjust the set exposure gain according to the average gray value includes:
judging whether the average gray value exceeds a preset gray threshold value, and if so, adjusting the set exposure gain.
Optionally, the method further comprises:
Adjusting the set exposure gain to obtain a target exposure gain;
And controlling the camera to acquire images according to the target exposure gain, so that the acquired original images meet the target exposure gain.
Optionally, the calculating to obtain the gaze point information according to the target PCR vector includes:
And calculating and obtaining the gaze point information according to a preset mapping relation and the target PCR vector, wherein the preset mapping relation represents the mapping relation between the PCR vector and the gaze point and/or the gaze direction.
A gaze point estimation system for use with a gaze tracking device having a single camera and a single on-axis light source, comprising:
an acquisition unit configured to acquire an original image captured from the single camera;
The first calculation unit is used for collecting human eye characteristic information of the original image and calculating pupil spot center data of two eyes based on the human eye characteristic information;
the second calculation unit is used for calculating and obtaining an initial PCR vector based on the pupil spot center data, wherein the PCR vector represents a vector of the spot center pointing to the pupil center;
The normalization unit is used for carrying out normalization processing on the initial PCR vector by utilizing a preset distance factor to obtain a target PCR vector, wherein the preset distance factor characterizes normalization parameters for carrying out normalization processing on the initial PCR vector;
and a third calculation unit, configured to calculate and obtain gaze point information according to the target PCR vector.
Optionally, the first computing unit includes:
the collection subunit is used for collecting the human eye characteristic information of the original image;
The acquisition subunit is used for acquiring pupil image characteristics and spot image characteristics of the two eyes according to the human eye characteristic information;
the first calculating subunit is used for calculating and obtaining pupil image coordinates according to the pupil image characteristics;
The second calculating subunit is used for calculating and obtaining the spot image coordinates according to the spot image characteristics; the pupil spot center data of the two eyes comprise pupil image coordinates and spot image coordinates.
Optionally, the acquiring unit is specifically configured to acquire an original image captured from the single camera according to a set exposure gain, and the system further includes:
A gray value calculation unit, configured to calculate an average gray value of a pupil area according to the original image;
the judging unit is used for judging whether to adjust the set exposure gain according to the average gray value, so that the obtained original image meets the light spot search condition;
wherein, the judging unit is specifically configured to:
judging whether the average gray value exceeds a preset gray threshold value, and if so, adjusting the set exposure gain;
the system further comprises:
an adjusting unit for adjusting the set exposure gain to obtain a target exposure gain;
And the re-acquisition unit is used for controlling the single camera to acquire images according to the target exposure gain so that the acquired original images meet the target exposure gain.
The third computing unit is specifically configured to:
And calculating and obtaining the gaze point information according to a preset mapping relation and the target PCR vector, wherein the preset mapping relation represents the mapping relation between the PCR vector and the gaze point and/or the gaze direction.
A processor for running a program, wherein the program runs to perform the gaze point estimation method as described above.
An apparatus comprising a processor, a memory, and a program stored on the memory and executable on the processor, wherein the processor executes the program to implement at least the following:
Acquiring an original image captured from the single camera;
Collecting human eye characteristic information of the original image, and calculating pupil spot center data of two eyes based on the human eye characteristic information;
based on the pupil spot center data, calculating to obtain an initial PCR vector, wherein the PCR vector represents a vector of the spot center pointing to the pupil center;
Normalizing the initial PCR vector by using a preset distance factor to obtain a target PCR vector, wherein the preset distance factor characterizes normalization parameters for normalizing the initial PCR vector;
And calculating and obtaining the gaze point information according to the target PCR vector.
Compared with the prior art, the present invention provides a gaze point estimation method, system, processor and device. In the gaze point estimation process, pupil spot center data are obtained, a target PCR vector is obtained after calculating and normalizing the pupil spot center data, and gaze point information is calculated from the target PCR vector. Because only pupil spot center data are used in gaze point estimation, that is, the distance information between the human eye and the camera can be estimated from a single spot together with pupil position information, two spots per eye are not needed. This removes the requirement that gaze point estimation can only be performed with two groups of light sources, so the number of light sources of existing gaze tracking devices can be reduced and miniaturization and light weight of the gaze tracking device can be realized.
Noun interpretation:
PCR (Pupil Corneal Reflection): pupil-corneal reflection, a type of optical recording method.
The method comprises the following steps:
First, an eye image containing a light spot (also called a Purkinje image) is acquired; the spot is the reflection of the light source on the cornea. As the eyeball rotates, the relative position of the pupil center and the spot changes, and the successively acquired eye images with spots reflect this positional change; the line of sight / gaze point is then estimated from this positional relationship.
IPD (Inter Pupillary Distance): the distance between the pupils of the left and right eyes.
IGD (Inter Glint Distance): the distance between two light spots in the eye image.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only embodiments of the present invention, and that other drawings can be obtained according to the provided drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic view of a light source assembly in the prior art;
fig. 2 is a schematic structural diagram of a module of a gaze tracking device according to an embodiment of the present application;
Fig. 3 is a flow chart of a gaze point estimation method according to a first embodiment of the present application;
FIG. 4 is a schematic diagram of a PCR vector provided in an embodiment of the present application;
Fig. 5 is a flowchart of a method for calculating pupil spot center data of two eyes according to a second embodiment of the present application;
FIG. 6 is a flowchart of an exposure gain adjustment method according to an embodiment of the present application;
Fig. 7 is a schematic structural diagram of a gaze point estimation system according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The terms first and second and the like in the description and in the claims and in the above-described figures are used for distinguishing between different objects and not necessarily for describing a sequential or chronological order. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to the listed steps or elements but may include steps or elements not expressly listed.
In an embodiment of the invention, a gaze point estimation method is provided, which can be applied to the field of eye tracking. Eye tracking, which may also be called gaze tracking, is a technique for estimating the line of sight and/or gaze point of the eye by measuring eye movement, and it requires special equipment, such as an eye tracking device.
The gaze point may be understood as the two-dimensional coordinates of the three-dimensional gaze vector projected onto a certain plane. At present the optical recording method is widely used: a camera or video camera records the eye condition of the subject, that is, eye images reflecting eye movement are obtained, and eye features are extracted from these images to build a model for line of sight / gaze point estimation. The eye features may include: pupil position, pupil shape, iris position, eyelid position, eye corner position, spot (also known as Purkinje image) position, and the like.
Eye tracking methods can be broadly divided into two types, intrusive and non-intrusive. Most current gaze tracking systems adopt non-intrusive eye tracking, among which the pupil-corneal reflection method is the most widely applied. Based on the physiological characteristics of the human eye and the principle of visual imaging, image processing techniques are used to process the acquired eye images and obtain the human eye feature parameters used for line of sight estimation. Taking the obtained feature parameters as reference points, a corresponding mapping model yields the coordinates of the gaze point, thereby realizing line of sight tracking. The method has high precision, does not interfere with the user, and allows the user's head to rotate freely. The hardware used includes a light source and an image acquisition device. The light source is generally infrared, because infrared light does not affect the eye's vision, and may consist of several infrared light sources arranged in a preset pattern, such as a triangular ('品'-shaped) arrangement or a straight line; the image acquisition device may be an infrared camera device, an infrared image sensor, a camera, a video camera, or the like. In the corneal reflection method, to overcome the error caused by asymmetric light spots, a multi-light-source single-camera or multi-light-source multi-camera setup is generally adopted to realize gaze point estimation under free head movement.
For example, the light sources are generally of the following two types: first, a light source separated from the camera position, which forms a face image with a normal pupil (also called a dark pupil); second, a light source coaxial with the camera, whose light reflected back out of the eye lets the camera obtain a face image with a bright pupil (also called a bright pupil). Referring to fig. 1, which shows a schematic view of light source combinations in the prior art, it can be seen that the combination is typically multiple dark pupil sources, multiple dark pupil sources combined with a bright pupil source, or one dark pupil source combined with one bright pupil source. To realize a multiple light source solution, the distance between two light sources is typically greater than 150 mm, making existing gaze tracking devices bulky.
Accordingly, an embodiment of the present application provides a gaze point estimation method applied to a gaze tracking device provided with a single camera and a single coaxial light source; since only one light source needs to be provided, the gaze tracking device can be small in volume.
Example 1
Referring to fig. 2, a schematic structural diagram of a module of a gaze tracking device according to an embodiment of the present application is shown. Only one infrared light source and one infrared camera are needed in the module, where the infrared light source serves as a bright pupil light source. It should be noted that, for the bright pupil to appear, the light source must be located near, or on the same line as, the optical axis of the camera, so that light entering the pupil is reflected back toward the camera and the pupil images as a very bright region rather than a black one. Such a light source and camera module can greatly reduce the module size in the gaze tracking device, thereby achieving miniaturization and light weight of the device. The infrared light source in fig. 2 represents a single light source in the embodiment of the present application; that is, the single light source in the present application may be one light source or a group of light sources as shown in fig. 2, in which the spacing between the individual light sources is small so that together they behave like a single light source, unlike the prior art scenario where the distance between two light sources is greater than 150 mm.
In order to realize line of sight / gaze point estimation for a gaze tracking device provided with a single camera and a single coaxial light source, a first embodiment of the present application further provides a gaze point estimation method; referring to fig. 3, the method may comprise the following steps.
S101, acquiring an original image captured from the single camera.
S102, acquiring human eye characteristic information of an original image, and calculating pupil spot center data of two eyes based on the human eye characteristic information.
The scene is illuminated with the single light source in the gaze tracking device, and image acquisition is performed with the single camera, where the acquired image contains eye feature information, such as a human eye image or a human face image. Specifically, the human eye feature information may include pupil-related information such as the pupil position and pupil shape, iris-related information such as the iris position and iris shape, and the spot information formed by the light source illuminating the eye.
After the eyes are detected in the original image, the pupil spot center data of the two eyes are calculated, namely the center coordinate data of the pupil images and the center coordinate data of the spot images of the two eyes. Specifically, a two-dimensional coordinate system may be set up on the eye image, and the pupil spot center data determined according to the origin and the coordinate scale of this coordinate system.
S103, calculating and obtaining an initial PCR vector based on pupil spot center data;
S104, normalizing the initial PCR vector according to a preset distance factor to obtain a target PCR vector.
It should be noted that the PCR (Pupil Corneal Reflection) vector represents a vector pointing from the center of the light spot to the center of the pupil, that is, a vector formed by connecting the spot center to the pupil center. Referring to fig. 4, which shows a schematic diagram of a PCR vector, the arrow represents the PCR vector from the spot center to the pupil center. Assuming the pupil center is (x1, y1) and the spot center is (x2, y2), the PCR vector is (x1 - x2, y1 - y2). The vector calculated at this point is the initial PCR vector.
In order to reduce the sensitivity of the PCR vector to the distance between the human eye and the lens, the initial PCR vector needs to be normalized by a distance factor; specifically, each coordinate value of the initial PCR vector is divided by the distance factor. It should be noted that the distance factor is not itself a distance value but a function of a relevant distance parameter. For example, taking the inter-pupillary distance IPD (Inter Pupillary Distance) as the distance parameter, IPD² may be used as the function of the distance parameter, i.e., as the distance factor.
The distance parameter may include the inter-pupillary distance of the two eyes, the spot distance between the two eyes, or the distance between any two eye feature points, for example: the inner canthus distance of the two eyes, the outer canthus distance of the two eyes, the distance between the upper and lower eyelids, the face size, a face key point scale, and the like. Correspondingly, in one possible implementation, the distance factor represents a function of the square of the spot distance D between the two eyes, i.e., D² is used as the function of the distance parameter, i.e., the distance factor.
Other functions, such as the cube or the square root, may be adopted in different implementations and need to be determined in combination with the eye characteristics of the specific user; the embodiments of the present invention therefore do not limit the specific form of the function of the distance parameter that characterizes the normalization factor.
For example, in another possible implementation, the distance factor characterizes a function of the distance between specified feature points of the two eyes; if that distance is d, then the distance factor = d³.
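As a minimal sketch of steps S103 and S104 (assuming pixel image coordinates, and taking IPD² as the distance factor, which is only one of the candidate functions named above), the PCR vector computation and normalization could look like this:

```python
import numpy as np

def initial_pcr_vector(pupil_center, spot_center):
    """Initial PCR vector: from the spot center to the pupil center (S103)."""
    return np.asarray(pupil_center, dtype=float) - np.asarray(spot_center, dtype=float)

def normalize_pcr(pcr, left_pupil, right_pupil):
    """Target PCR vector (S104): divide each coordinate by a distance factor.

    Here the factor is IPD**2 (squared inter-pupillary distance in image
    pixels), following the example above; a cube or square root of another
    distance parameter could be substituted.
    """
    ipd = np.linalg.norm(np.asarray(left_pupil, dtype=float) -
                         np.asarray(right_pupil, dtype=float))
    return pcr / ipd**2

# Example: pupil center (x1, y1) = (310, 242), spot center (x2, y2) = (305, 248)
pcr = initial_pcr_vector((310.0, 242.0), (305.0, 248.0))      # -> [5., -6.]
target = normalize_pcr(pcr, (310.0, 242.0), (410.0, 244.0))   # IPD ~ 100 px
```

The pixel coordinates above are invented for illustration; in the method they come from the feature extraction of step S102.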
S105, calculating and obtaining the gaze point information according to the target PCR vector.
Gaze point information estimation can be performed with the corneal reflection method: the target PCR vector is input into a preset regression model to calculate the gaze point information; alternatively, the target PCR vector and human eye parameters can be input into a line of sight calculation model to calculate the user's line of sight coordinates, thereby obtaining gaze point information matched to the user. The preset regression model and the line of sight calculation model can both be obtained by training a model with training samples comprising PCR vectors and gaze point information.
The invention provides a gaze point estimation method in which pupil spot center data are obtained during gaze point estimation, a target PCR vector is obtained after calculating and normalizing the pupil spot center data, and gaze point information is calculated from the target PCR vector. Because only pupil spot center data are used in gaze point estimation, that is, the distance information between the human eye and the camera can be estimated from a single spot together with pupil position information, two spots per eye are not needed. This removes the requirement that gaze point estimation can only be performed with two groups of light sources, so the number of light sources of existing gaze tracking devices can be reduced and miniaturization and light weight of the gaze tracking device can be realized.
Example 2
In a second embodiment of the present application, a method for calculating pupil spot center data of two eyes is provided, referring to fig. 5, the method includes:
S201, collecting human eye characteristic information of the original image;
S202, acquiring pupil image characteristics and spot image characteristics of the two eyes according to the human eye characteristic information;
S203, calculating and obtaining pupil image coordinates according to the pupil image characteristics;
S204, calculating to obtain the spot image coordinates according to the spot image characteristics.
Pupil spot center data for both eyes in this embodiment includes pupil image coordinates and spot image coordinates.
After the human eye characteristic information is obtained from the original image, the original image can be searched with the spot image characteristics as the search condition to obtain the spot image. For example, the search may be performed according to the gray values of the spot image: the original image is converted to grayscale to obtain the gray value of each pixel, and the spot image in the original image is then determined according to the gray value range characteristic of spots.
After the spot image is obtained, the coordinates of the center point of the spot image may be taken as the spot image coordinates. Correspondingly, when the pupil image coordinates are determined, the pupil image can be obtained by searching in the original image according to the pupil image characteristics, and the center coordinates of the pupil image are determined as the pupil image coordinates.
In a specific embodiment, the pupil area is extracted from the original image first, which facilitates the extraction of line of sight feature parameters. Since the pupil area is darker than the rest of the human eye image and has a very low gray value, a low gray threshold can be set, and candidate pupil areas are determined by comparing the gray values in the image with this threshold. The pupil area is the darkest part of the human eye image and is approximately circular; according to the gray differences among the skin, sclera, iris and pupil in the human eye image, the image is converted into binary image information, after which the pupil area can be extracted. The pupil area image comprises the corneal reflection spot area, the complete pupil area and part of the iris area. Compared with other areas, the corneal reflection spot area has the highest gray value, a small area and a bright color, and the corneal reflection spots contained in each pupil area are distributed horizontally across the circle. Therefore, according to these characteristics of the corneal reflection spot, the pupil area is first binarized with a gray value threshold, and areas above the threshold are taken as candidate areas, i.e., the bright spot areas of the pupil area are extracted; noise bright spots are then removed according to the area and shape of each bright spot, yielding the spot image corresponding to the spot area.
In another possible implementation, a pupil search range may be set centered on the spot center, a pupil gray threshold is calculated from the gray histogram, and the pupil center coordinates are then obtained.
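An illustrative OpenCV sketch of the thresholding approach just described follows; it treats the pupil as the darkest region and the corneal reflection spot as the small brightest region, per the preceding paragraphs, while the threshold values themselves are assumptions, not values from the patent:

```python
import cv2

def largest_blob_center(mask):
    """Centroid of the largest connected component in a binary mask."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    c = max(contours, key=cv2.contourArea)
    m = cv2.moments(c)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])

def pupil_and_spot_centers(gray, pupil_thresh=40, spot_thresh=220):
    """Pupil center from a low gray threshold, spot center from a high one."""
    _, pupil_mask = cv2.threshold(gray, pupil_thresh, 255, cv2.THRESH_BINARY_INV)
    _, spot_mask = cv2.threshold(gray, spot_thresh, 255, cv2.THRESH_BINARY)
    return largest_blob_center(pupil_mask), largest_blob_center(spot_mask)
```

In practice the noise removal by spot area and shape described above would run before the centroid is taken; this sketch keeps only the thresholding and center calculation.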
Assuming the pupil image coordinates are (Px, Py), the spot image coordinates are (Gx, Gy), and the PCR vector coordinates are (x, y), then:

x = norm(Px - Gx)
y = norm(Py - Gy)
Here norm denotes the normalization calculation; that is, to reduce the sensitivity of the PCR vector to the eye-to-lens distance, the PCR vector must be normalized with a distance factor, which can be a function of the inter-pupillary distance, the spot distance, or the distance between any two eye feature points. It should be noted that in the embodiments of the present application the spot distance refers to the spot distance between the two eyes. The normalization factor adopted in the prior art is the distance between two spots within one eye image, generally denoted IGD (Inter Glint Distance); in the embodiments of the present application, however, no IGD exists because fewer light sources are used, so a new normalization factor should be considered that depends on fewer light sources yet achieves the desired normalization effect in gaze point estimation scenarios. Scale information elsewhere in the image is therefore used as the normalization factor to resist the influence of distance variation. That is, the normalization factor may be a function of the inter-pupillary distance, the inner canthus distance of the two eyes, the outer canthus distance of the two eyes, the distance between the upper and lower eyelids, the face size, the spot distance between the two eyes, a face key point scale, or similar information.
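A minimal sketch of the norm(...) calculation above, in which the scale measurement and the power applied to it are per-deployment assumptions rather than values fixed by the patent:

```python
def norm(value, scale_distance, power=2.0):
    """Normalize one PCR component by a distance factor.

    scale_distance may be the inter-pupillary distance, the binocular spot
    distance, a canthus distance, or another face-scale measurement in
    image pixels; power=2 follows the IPD**2 example given earlier.
    """
    return value / scale_distance**power

# x = norm(Px - Gx), y = norm(Py - Gy), as in the formulas above,
# with illustrative pixel values:
Px, Py, Gx, Gy, ipd = 310.0, 242.0, 305.0, 248.0, 100.0
x, y = norm(Px - Gx, ipd), norm(Py - Gy, ipd)
```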
Example 3
Because conventional gaze tracking devices contain many light sources, their brightness control logic is complex if efficient gaze point estimation is to be achieved. The third embodiment of the present application provides exposure gain control logic in which control is implemented only according to the gray value of the pupil region.
Referring to fig. 6, a flow chart of an exposure gain adjustment method is shown, the method includes:
S301, calculating an average gray value of the pupil area according to the original image;
S302, determining, according to the average gray value, whether to adjust the set exposure gain, so that the captured original image satisfies the spot search condition.
According to the image characteristics of the pupil area, the pupil area is located in the original image, and the corresponding image is converted into a grayscale image so that the gray value of each pixel can be obtained; the average gray value of the pupil area is then calculated. Whether the average gray value of the pupil area exceeds the preset gray threshold is judged; if it does, the judgment result is fed back to the camera in the gaze tracking device so that the camera adjusts the exposure gain; if not, the camera may capture images with the current exposure gain. The specific adjustment strategy is to adjust the exposure-related parameter first; if the exposure-related parameter has been adjusted to its limit value (i.e., its maximum) and the spot search condition still cannot be met, the gain-related parameter is adjusted, until the gain also reaches its limit value. The limit values here each denote the maximum value of the corresponding parameter.
Specifically, if the camera is capturing images continuously and the gray value computed for the pupil area of the current frame indicates that the exposure gain needs to be adjusted, the camera starts adjusting from the current frame. When the adjustment takes effect depends on the actual camera hardware: it may take effect in the next frame, or only after five frames. Therefore, in actual operation, the threshold statistics may continuously average over five to ten frames, so that five or ten frames are checked and situations such as insufficient adjustment are prevented. The criterion for finishing the adjustment is generally determined from the specific condition of the image, mainly that the edges of the pupil and the spot are clear and their contours are easy to extract with an image algorithm; specifically, the gray difference between the pupil and the iris area, and between the spot and the pupil, must be sufficiently large.
Of course, the brightness of the light source can also be adjusted, for example by adjusting the brightness of the infrared fill light, to prevent spot search failure caused by excessive brightness.
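A sketch of the control loop described in this embodiment follows. The camera interface (set_exposure/set_gain), the parameter ranges, the step size, the gray threshold, and the choice to step downward when the pupil region is over-bright are all illustrative assumptions; the patent fixes only the order, namely that exposure is adjusted first and gain only once exposure reaches its limit.

```python
from collections import deque

class ExposureGainController:
    """Adjust exposure first, then gain, from the mean pupil gray value."""

    def __init__(self, camera, gray_threshold=180, window=5, step=5):
        self.camera = camera                 # assumed to expose set_exposure()/set_gain()
        self.gray_threshold = gray_threshold
        self.history = deque(maxlen=window)  # average several frames, as described above
        self.step = step
        self.exposure, self.gain = 50, 50    # illustrative parameter scale 0..100

    def update(self, pupil_mean_gray):
        self.history.append(pupil_mean_gray)
        if len(self.history) < self.history.maxlen:
            return  # give a previous adjustment time to take effect
        avg = sum(self.history) / len(self.history)
        if avg <= self.gray_threshold:
            return  # spot search condition met; keep current settings
        # Pupil region over-bright: step exposure toward its limit first,
        # then gain, mirroring the exposure-then-gain order described above.
        if self.exposure > 0:
            self.exposure = max(0, self.exposure - self.step)
            self.camera.set_exposure(self.exposure)
        elif self.gain > 0:
            self.gain = max(0, self.gain - self.step)
            self.camera.set_gain(self.gain)
```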
The embodiment of the application also provides a method for calculating the gaze point information, which specifically comprises the following steps:
And calculating and obtaining the gaze point information according to a preset mapping relation and the target PCR vector, wherein the preset mapping relation represents the mapping relation between the PCR vector and the gaze point and/or the gaze direction.
The PCR vectors of the two eyes are input, and the gaze point / gaze direction is estimated and output according to the established mapping relation between the PCR vector and the gaze point / gaze direction. The mapping is as follows:
X = a0 + a1·x + a2·x² + a3·y + a4·y² + a5·xy
Y = b0 + b1·x + b2·x² + b3·y + b4·y² + b5·xy
where x and y are the coordinates of the PCR vector in a two-dimensional coordinate system, and X and Y are the coordinates of the gaze point in a two-dimensional coordinate system; the parameters a0, a1, a2, a3, a4, a5 and b0, b1, b2, b3, b4, b5 can be fitted during calibration.
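The calibration fit can be done with ordinary least squares, as in the sketch below; the use of numpy.linalg.lstsq and the shape of the calibration data are implementation assumptions, not requirements of the method.

```python
import numpy as np

def design_matrix(pcr):
    """One row [1, x, x**2, y, y**2, x*y] per PCR vector (x, y)."""
    x, y = pcr[:, 0], pcr[:, 1]
    return np.column_stack([np.ones_like(x), x, x**2, y, y**2, x * y])

def fit_mapping(pcr_samples, gaze_samples):
    """Fit a0..a5 and b0..b5 from calibration pairs by least squares."""
    A = design_matrix(np.asarray(pcr_samples, dtype=float))
    G = np.asarray(gaze_samples, dtype=float)
    a = np.linalg.lstsq(A, G[:, 0], rcond=None)[0]
    b = np.linalg.lstsq(A, G[:, 1], rcond=None)[0]
    return a, b

def estimate_gaze(pcr_xy, a, b):
    """Map one target PCR vector (x, y) to a gaze point (X, Y)."""
    row = design_matrix(np.asarray([pcr_xy], dtype=float))[0]
    return float(row @ a), float(row @ b)
```

With at least six calibration points the six coefficients per coordinate are determined; in practice more points are collected and the least squares solution averages out measurement noise.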
After the gaze point information of the user is obtained by tracking, the gaze point of the corresponding user may be displayed on a display or display module.
In the embodiments of the present application, only the pupil and spot information of the left and right eyes is used in gaze point estimation. Unlike the traditional scheme, which estimates the eye-to-camera distance from the spots of two light sources, this scheme uses the inter-pupillary distance for distance estimation, which removes the requirement that line of sight estimation can only be realized with two groups of light sources.
The single-camera, single-coaxial-light-source module provided by the embodiments of the present application can be placed below or above a computer display to realize eye tracking on the display, or placed inside a mobile phone module to realize eye tracking on the phone. Devices other than displays and mobile phones can be equipped in a similar way to meet their eye tracking requirements; the embodiments of the present application do not describe them one by one.
Example 4
In a fourth embodiment of the present application, there is also provided a gaze point estimation system applied to a gaze tracking device provided with a single camera and a single coaxial light source, see fig. 7, including:
an acquisition unit 10 for acquiring an original image captured from the single camera;
A first calculating unit 20, configured to collect human eye feature information of the original image, and calculate pupil spot center data of two eyes based on the human eye feature information;
A second calculation unit 30, configured to calculate and obtain an initial PCR vector based on the pupil spot center data, where the PCR vector characterizes a vector with a spot center pointing to a pupil center;
A normalization unit 40, configured to normalize the initial PCR vector by using a preset distance factor, to obtain a target PCR vector, where the preset distance factor characterizes a normalization parameter for normalizing the initial PCR vector;
A third calculation unit 50 for calculating and obtaining gaze point information according to the target PCR vector.
On the basis of the above embodiment, the first calculation unit includes:
the collection subunit is used for collecting the human eye characteristic information of the original image;
The acquisition subunit is used for acquiring pupil image characteristics and spot image characteristics of the two eyes according to the human eye characteristic information;
the first calculating subunit is used for calculating and obtaining pupil image coordinates according to the pupil image characteristics;
The second calculating subunit is used for calculating and obtaining the spot image coordinates according to the spot image characteristics; the pupil spot center data of the two eyes comprise pupil image coordinates and spot image coordinates.
On the basis of the above embodiment, the preset distance factor characterizes a function of a distance parameter, wherein the distance parameter comprises the inter-pupillary distance of the two eyes, the spot distance between the two eyes, or the distance between specified feature points of the two eyes.
On the basis of the above embodiment, the acquiring unit is specifically configured to acquire an original image captured from the single camera according to a set exposure gain, and the system further includes:
A gray value calculation unit, configured to calculate an average gray value of a pupil area according to the original image;
The judging unit is used for judging whether the set exposure gain is adjusted according to the average gray value so that the obtained original image meets the light spot searching condition;
wherein, the judging unit is specifically configured to:
judging whether the average gray value exceeds a preset gray threshold value, and if so, adjusting the set exposure gain;
the system further comprises:
an adjusting unit for adjusting the set exposure gain to obtain a target exposure gain;
And the re-acquisition unit is used for controlling the single camera to acquire images according to the target exposure gain so that the acquired original images meet the target exposure gain.
On the basis of the above embodiment, the third computing unit is specifically configured to:
And calculating and obtaining the gaze point information according to a preset mapping relation and the target PCR vector, wherein the preset mapping relation represents the mapping relation between the PCR vector and the gaze point and/or the gaze direction.
The invention provides a gaze point estimation system which obtains pupil spot center data, calculates and normalizes the pupil spot center data to obtain a target PCR vector, and calculates gaze point information from the target PCR vector. Because only pupil spot center data are used in gaze point estimation, that is, the distance information between the human eye and the camera can be estimated from a single spot together with pupil position information, two spots per eye are not needed. This removes the requirement that gaze point estimation can only be performed with two groups of light sources, so the number of light sources of existing gaze tracking devices can be reduced and miniaturization and light weight of the gaze tracking device can be realized.
Example 5
An embodiment five of the present invention provides a processor, where the processor is configured to execute a program, and the program executes the gaze point estimation method according to any one of the embodiments one to three.
Example 6
A sixth embodiment of the present invention provides an apparatus comprising a processor, a memory, and a program stored on the memory and executable on the processor, wherein the processor executes the program to implement the following steps:
Acquiring an original image captured from the single camera;
Collecting human eye characteristic information of the original image, and calculating pupil spot center data of two eyes based on the human eye characteristic information;
based on the pupil spot center data, calculating to obtain an initial PCR vector, wherein the PCR vector represents a vector of the spot center pointing to the pupil center;
Normalizing the initial PCR vector by using a preset distance factor to obtain a target PCR vector, wherein the preset distance factor characterizes normalization parameters for normalizing the initial PCR vector;
And calculating and obtaining the gaze point information according to the target PCR vector.
Further, the pupil spot center data of the two eyes includes pupil image coordinates and spot image coordinates, the collecting the human eye feature information of the original image, and calculating the pupil spot center data of the two eyes based on the human eye feature information includes:
Collecting human eye characteristic information of the original image;
according to the human eye characteristic information, obtaining pupil image characteristics and spot image characteristics of the two eyes;
calculating to obtain pupil image coordinates according to the pupil image characteristics;
and calculating to obtain the spot image coordinates according to the spot image characteristics.
Further, the preset distance factor characterizes a function of a distance parameter, wherein the distance parameter comprises the distance between the pupils of the two eyes, the distance between the light spots of the two eyes, or the distance between specified feature points of the two eyes.
Further, the capturing of the raw image from the camera includes capturing the raw image captured from the camera according to a set exposure gain, the method further including:
Calculating an average gray value of the pupil area according to the original image;
and determining, according to the average gray value, whether to adjust the set exposure gain, so that the captured original image satisfies the light spot search condition.
Further, the determining whether to adjust the set exposure gain according to the average gray value includes:
judging whether the average gray value exceeds a preset gray threshold value, and if so, adjusting the set exposure gain.
Further, the method further comprises:
Adjusting the set exposure gain to obtain a target exposure gain;
And controlling the camera to acquire images according to the target exposure gain, so that the acquired original images meet the target exposure gain.
Further, the calculating to obtain the gaze point information according to the target PCR vector includes:
And calculating and obtaining the gaze point information according to a preset mapping relation and the target PCR vector, wherein the preset mapping relation represents the mapping relation between the PCR vector and the gaze point and/or the gaze direction.
The device herein may be a server, a PC, a tablet (PAD), a mobile phone, or the like.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, Random Access Memory (RAM) and/or nonvolatile memory, etc., such as Read Only Memory (ROM) or flash RAM. Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and variations of the present application will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. which come within the spirit and principles of the application are to be included in the scope of the claims of the present application.
Claims (12)
1. A gaze point estimation method, characterized in that the method is applied to a gaze tracking device having a single camera and a single on-axis light source, comprising:
Acquiring an original image captured from the single camera;
Collecting human eye characteristic information of the original image, and calculating pupil spot center data of two eyes based on the human eye characteristic information;
based on the pupil spot center data, calculating to obtain an initial PCR vector, wherein the PCR vector represents a vector of the spot center pointing to the pupil center;
Normalizing the initial PCR vector by using a preset distance factor to obtain a target PCR vector, wherein the preset distance factor represents a normalization parameter for normalizing the initial PCR vector, and the preset distance factor represents a function of a distance parameter, wherein the distance parameter comprises the distance between the pupils of the two eyes, the distance between the light spots of the two eyes, or the distance between specified feature points of the two eyes;
And calculating and obtaining the gaze point information according to the target PCR vector.
2. The method of claim 1, wherein the pupil spot center data of both eyes includes pupil image coordinates and spot image coordinates, wherein the acquiring human eye feature information of the original image and calculating pupil spot center data of both eyes based on the human eye feature information includes:
Collecting human eye characteristic information of the original image;
according to the human eye characteristic information, obtaining pupil image characteristics and spot image characteristics of the two eyes;
calculating to obtain pupil image coordinates according to the pupil image characteristics;
and calculating to obtain the spot image coordinates according to the spot image characteristics.
3. The method of claim 1, wherein the acquiring the raw image captured from the camera comprises acquiring the raw image captured from the camera according to a set exposure gain, the method further comprising:
Calculating an average gray value of the pupil area according to the original image;
and determining, according to the average gray value, whether to adjust the set exposure gain, so that the obtained original image meets the light spot search condition.
4. A method according to claim 3, wherein said determining whether to adjust said set exposure gain based on said average gray value comprises:
judging whether the average gray value exceeds a preset gray threshold value, and if so, adjusting the set exposure gain.
5. The method of claim 4, further comprising:
adjusting the set exposure gain to obtain a target exposure gain; and
controlling the single camera to capture images according to the target exposure gain, so that the acquired original image conforms to the target exposure gain.
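Claims 3-5 together describe a simple exposure feedback loop driven by the average gray value of the pupil area. A minimal sketch follows; the threshold, the adjustment step, and the assumption that an over-bright image calls for lowering the gain are all illustrative, since the claims fix only the comparison against a preset gray threshold.

```python
def adjust_exposure_gain(avg_gray, current_gain,
                         gray_threshold=200.0, step=0.8):
    """Gray-value feedback sketch of claims 3-5; all constants are
    illustrative assumptions.

    avg_gray: average gray value of the pupil area in the current
    original image; current_gain: the set exposure gain.
    """
    if avg_gray > gray_threshold:
        # Image too bright to separate the light spot: lower the gain.
        return current_gain * step  # target exposure gain
    return current_gain             # spot search condition already met
```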
6. The method of claim 1, wherein the calculating gaze point information according to the target PCR vector comprises:
calculating the gaze point information according to a preset mapping relation and the target PCR vector, wherein the preset mapping relation represents a mapping relation between the PCR vector and the gaze point and/or gaze direction.
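The preset mapping relation of claim 6 is typically obtained during a calibration phase. The sketch below fits a second-order polynomial by least squares while the user fixates known targets; this functional form is an assumption (the claim leaves the mapping open), and the coefficients it returns are compatible with the pipeline sketch after claim 1.

```python
import numpy as np

def fit_mapping(pcr_vectors, gaze_points):
    """Least-squares calibration sketch for the preset mapping relation
    of claim 6, using an assumed second-order polynomial form.

    pcr_vectors: (N, 2) target PCR vectors recorded while the user
    fixates N >= 6 known targets; gaze_points: (N, 2) target positions.
    Returns (2, 6) coefficients usable by the sketch after claim 1.
    """
    x, y = pcr_vectors[:, 0], pcr_vectors[:, 1]
    design = np.column_stack([np.ones_like(x), x, y, x * y, x * x, y * y])
    coeffs, *_ = np.linalg.lstsq(design, gaze_points, rcond=None)
    return coeffs.T
```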
7. A gaze point estimation system, characterized in that the system is applied to a gaze tracking device having a single camera and a single on-axis light source, the system comprising:
an acquisition unit, configured to acquire an original image captured by the single camera;
a first calculation unit, configured to collect human eye feature information from the original image and calculate pupil spot center data of both eyes based on the human eye feature information;
a second calculation unit, configured to calculate an initial PCR vector based on the pupil spot center data, wherein the PCR vector represents a vector pointing from the spot center to the pupil center;
a normalization unit, configured to normalize the initial PCR vector by using a preset distance factor to obtain a target PCR vector, wherein the preset distance factor represents a normalization parameter for normalizing the initial PCR vector and is a function of a distance parameter, the distance parameter comprising the distance between the pupils of both eyes, the distance between the light spots of both eyes, or the distance between designated feature points of both eyes; and
a third calculation unit, configured to calculate gaze point information according to the target PCR vector.
8. The system of claim 7, wherein the first calculation unit comprises:
a collecting subunit, configured to collect the human eye feature information of the original image;
an obtaining subunit, configured to obtain pupil image features and spot image features of both eyes according to the human eye feature information;
a first calculating subunit, configured to calculate the pupil image coordinates according to the pupil image features; and
a second calculating subunit, configured to calculate the spot image coordinates according to the spot image features, wherein the pupil spot center data of both eyes comprises the pupil image coordinates and the spot image coordinates.
9. The system according to claim 7, wherein the acquisition unit is specifically configured to acquire the original image captured by the single camera according to a set exposure gain, the system further comprising:
a gray value calculation unit, configured to calculate an average gray value of the pupil area according to the original image; and
a judging unit, configured to determine, according to the average gray value, whether to adjust the set exposure gain, so that the obtained original image satisfies the light spot search condition;
wherein the judging unit is specifically configured to:
determine whether the average gray value exceeds a preset gray threshold, and if so, adjust the set exposure gain;
and wherein the system further comprises:
an adjusting unit, configured to adjust the set exposure gain to obtain a target exposure gain; and
a re-acquisition unit, configured to control the single camera to capture images according to the target exposure gain, so that the acquired original image conforms to the target exposure gain.
10. The system according to claim 7, wherein the third calculation unit is specifically configured to:
calculate the gaze point information according to a preset mapping relation and the target PCR vector, wherein the preset mapping relation represents a mapping relation between the PCR vector and the gaze point and/or gaze direction.
11. A processor configured to run a program, wherein the program, when run, performs the gaze point estimation method of any one of claims 1-6.
12. An electronic device, comprising a processor, a memory, and a program stored on the memory and executable on the processor, wherein the processor implements the gaze point estimation method of any one of claims 1-6 when executing the program.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---
CN201910887940.XA | 2019-09-19 | 2019-09-19 | Gaze point estimation method, gaze point estimation system, gaze point estimation processor and gaze point estimation equipment
Publications (2)
Publication Number | Publication Date |
---|---
CN112528713A (en) | 2021-03-19
CN112528713B (en) | 2024-10-29
Family
ID=74974247
Family Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN201910887940.XA (Active) | 2019-09-19 | 2019-09-19 | Gaze point estimation method, gaze point estimation system, gaze point estimation processor and gaze point estimation equipment
Country Status (1)
Country | Link |
---|---
CN (1) | CN112528713B (en)
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---
CN103679180A (en) * | 2012-09-19 | 2014-03-26 | 武汉元宝创意科技有限公司 | Sight tracking method based on single light source of single camera |
CN105979162A (en) * | 2016-07-21 | 2016-09-28 | 凌云光技术集团有限责任公司 | Automatic exposure adjustment method and device for extensible dynamic range images |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---
WO2002064031A2 (en) * | 2001-02-09 | 2002-08-22 | Sensomotoric Instruments Gmbh | Multidimensional eye tracking and position measurement system |
CN101803906B (en) * | 2010-03-10 | 2011-12-14 | 中国科学院光电技术研究所 | Automatic defocusing compensation human eye aberration Hartmann measuring instrument |
CN104199544B (en) * | 2014-08-28 | 2018-06-22 | 华南理工大学 | Advertisement orientation put-on method based on eye tracking |
CN106056092B (en) * | 2016-06-08 | 2019-08-20 | 华南理工大学 | The gaze estimation method for headset equipment based on iris and pupil |
JP6800091B2 (en) * | 2017-06-09 | 2020-12-16 | 株式会社豊田中央研究所 | Line-of-sight measuring device and program |
CN109034108B (en) * | 2018-08-16 | 2020-09-22 | 北京七鑫易维信息技术有限公司 | Sight estimation method, device and system |
CN110062168B (en) * | 2019-05-05 | 2021-04-27 | 北京七鑫易维信息技术有限公司 | Shooting parameter adjusting method, device, equipment and medium for eye movement tracking equipment |
Similar Documents
Publication | Title
---|---
US11442539B2 (en) | Event camera-based gaze tracking using neural networks
US10534982B2 (en) | Neural network training for three dimensional (3D) gaze prediction with calibration parameters
US10564446B2 (en) | Method, apparatus, and computer program for establishing a representation of a spectacle lens edge
CN111902070B (en) | Reliability of left and right eye gaze tracking data
JP6577454B2 (en) | On-axis gaze tracking system and method
US10558895B2 (en) | Deep learning for three dimensional (3D) gaze prediction
US10416725B2 (en) | Wearable device having a display, lens, illuminator, and image sensor
CN109472189B (en) | Pupil radius compensation
US20220301218A1 (en) | Head pose estimation from local eye region
CN108985210A (en) | A kind of Eye-controlling focus method and system based on human eye geometrical characteristic
US10671890B2 (en) | Training of a neural network for three dimensional (3D) gaze prediction
US11947717B2 (en) | Gaze estimation systems and methods using relative points of regard
CN114391117A (en) | Eye tracking delay enhancement
WO2020157746A1 (en) | Eye tracking device and a method thereof
JP6870474B2 (en) | Gaze detection computer program, gaze detection device and gaze detection method
US10867252B2 (en) | Continuous calibration based on pupil characteristics
US20200193131A1 (en) | Method and device for determining iris recognition image, terminal apparatus, and storage medium
US11308321B2 (en) | Method and system for 3D cornea position estimation
CN111539984A (en) | Continuous calibration based on pupil characteristics
CN112528713B (en) | Gaze point estimation method, gaze point estimation system, gaze point estimation processor and gaze point estimation equipment
WO2019190561A1 (en) | Deep learning for three dimensional (3d) gaze prediction
CN112528714B (en) | Single-light-source-based gaze point estimation method, system, processor and equipment
JP2021179815A (en) | Sight line measurement calibration method and apparatus using change of pupil diameter, and sight line measuring apparatus and camera apparatus
US20210350554A1 (en) | Eye-tracking system
Legal Events
Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant