CN103677270A - Human-computer interaction method based on eye movement tracking - Google Patents
Abstract
The invention provides a human-computer interaction method based on eye-movement tracking. Five groups of infrared light sources face the computer operator, together with a camera fitted with a built-in infrared filter: four groups are mounted at the four corners of the computer display screen, and the fifth group surrounds the camera lens. The camera is mounted below the display and connected to the computer, with its lens facing the operator's face. The method comprises the steps of calibration, image capture and detection, blink judgment, fixation-point judgment, and interaction-instruction output. Its advantages are that calibration is simple and portability is good; detecting the dark pupil extends the method to users whose bright-pupil response is weak; smoothing suppresses the interference caused by physiological eye tremor; accuracy remains high even when the operator's head moves over a wide range; only one camera is used, so cost is low; and processing is fast enough for real-time human-computer interaction.
Description
Technical field
The present invention relates to the field of computer-vision control technology, and specifically to a human-computer interaction method based on eye tracking.
Background art
Eye tracking plays an important role in human-computer interaction: it can serve as an interface between people and computing devices, and compared with mouse and keyboard it offers a more natural mode of interaction. Many methods exist for estimating gaze direction — corneal-reflection methods, skin-electrode potential measurement (electro-oculography), contact-lens methods, and so on — and they are classified as intrusive or non-intrusive. Non-intrusive methods have the advantage of being more comfortable to use. Nevertheless, open problems remain in eye-tracking research, most notably accuracy, restrictions on head movement, robustness, and ease of calibration.
Summary of the invention
The object of the invention is to provide a human-computer interaction method based on eye tracking that maintains high accuracy even when the operator's head moves over a wide range.
The technical scheme that realizes this object is as follows. A human-computer interaction method based on eye tracking uses five groups of infrared light sources facing the computer operator and a camera with a built-in infrared filter: four groups are mounted at the four corners of the computer display, and the fifth group surrounds the camera lens. The camera is mounted below the display and connected to the computer, with its lens facing the operator's face. The method comprises:
The calibration step, comprising:
101: the computer captures through the camera a face image while the operator gazes at the infrared source in any one corner of the display screen; eye detection, spot detection and pupil detection are performed on the image to obtain the pupil center, the spot center corresponding to the infrared source group around the camera lens, and the spot center corresponding to the gazed source group; the corresponding calibration coefficient is computed;
102: following the same operation, the calibration coefficients are computed when the operator gazes at the infrared sources in the other corners of the display screen in turn.
The capture-and-detection step, comprising:
201: the computer continuously captures, at frame rate F, face images of the operator watching the display screen;
202: eye detection, spot detection, pupil detection and fixation-point estimation are performed on each frame.
The blink-judgment step, comprising:
301: consider the L consecutive frames captured in a time period T_L starting at any time t, where L = T_L × F. If within these L frames there is one and only one sub-period T_K (T_K < T_L) whose K consecutive frames contain no detected spot, the operator is deemed to have blinked once. If there are two sub-periods T_K1 and T_K2, whose K1 and K2 consecutive frames respectively contain no detected spot, with T_K1 < T_L, T_K2 < T_L and T_K1 + T_K2 < T_L, and spots are detected in the frames between the two sub-periods, the operator is deemed to have blinked twice.
The fixation-point-change-judgment step:
401: let the fixation point in the frame captured at any time t have on-screen abscissa g_x and ordinate g_y;
402: consider the R consecutive frames in a time period T_R starting at t, where R = T_R × F; if the fixation points of all R frames remain within the circle of radius r centered at (g_x, g_y), the fixation point is deemed to dwell;
403: consider the D consecutive frames in a time period T_D starting at t, where D = T_D × F; if the abscissa and ordinate of the fixation point decrease monotonically over the D frames, and the total decreases exceed the abscissa variation X and the ordinate variation Y respectively, the fixation point is deemed to move toward the upper left;
404: consider the U consecutive frames in a time period T_U starting at t, where U = T_U × F; if the abscissa and ordinate of the fixation point increase monotonically over the U frames, and the total increases exceed X and Y respectively, the fixation point is deemed to move toward the lower right.
The interaction-instruction-output step, comprising:
501: if the operator blinks once, output a left-mouse-button click instruction;
502: if the operator blinks twice, output a left-mouse-button double-click instruction;
503: if the fixation point dwells, output a right-mouse-button click instruction;
504: if the fixation point moves toward the upper left, output a mouse-wheel scroll-up instruction;
505: if the fixation point moves toward the lower right, output a mouse-wheel scroll-down instruction.
Further:
The eye detection comprises: binarizing the captured face image to obtain a black-and-white image; performing a contour search on the black-and-white image; and taking the minimal rectangle bounding the found contour as the eye rectangle.
The spot detection comprises: cropping the face image by the eye rectangle to obtain an eye-rectangle image; binarizing it to obtain a binary image; removing noise from the binary image; taking the five largest white regions of the denoised image as the spots of the five infrared source groups; taking the centroid of each of the five spots as its spot center; and determining the one-to-one correspondence between the five spots and the five source groups.
The pupil detection comprises: taking the means of the abscissas and ordinates of the five spot centers as the initial abscissa and ordinate of the pupil center; setting a pupil rectangle centered on that initial position; applying vertical and horizontal projection integrals to the pupil rectangle to obtain a small rectangle containing only the pupil; and searching the small rectangle for the pupil boundary and its center.
The fixation-point estimation comprises: from the five spot centers and the calibration coefficients, computing the four virtual projection points on the cornea corresponding to the spots of the four corner sources; and estimating the fixation point from the pupil center, the four virtual projection points, and the length and width of the display screen.
Further, the time periods T_L, T_R, T_D and T_U are the same time period.
Further, the frame rate F is 15 frames per second; the durations of T_L, T_R, T_D and T_U are all 2 seconds; the time periods T_K, T_K1 and T_K2 are all greater than 0.5 second; the radius r is w/20; and the abscissa variation X and the ordinate variation Y are set as fixed fractions of w and h respectively (the exact fractions appear only in the original equation images), where w is the display-screen width and h is the display-screen height.
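For concreteness, the parameter choices above can be collected in one configuration record. A minimal C++ sketch, assuming pixel units; the 1/4 fractions used for X and Y are placeholders, since the text does not preserve their exact values:

```cpp
// Parameters of the eye-tracking interaction method (values from claim 5).
struct TrackerParams {
    double frameRateF   = 15.0;   // frames per second
    double windowSecT   = 2.0;    // duration of T_L = T_R = T_D = T_U
    double minBlinkSecK = 0.5;    // lower bound on T_K, T_K1, T_K2
    double screenW = 0, screenH = 0;  // display width w and height h

    int    framesPerWindow() const { return static_cast<int>(windowSecT * frameRateF); } // L = R = D = U
    double dwellRadiusR()    const { return screenW / 20.0; }  // r = w/20
    // X and Y are fractions of w and h; the exact fractions are not
    // recoverable from the text, so 1/4 is a placeholder assumption.
    double deltaX() const { return screenW / 4.0; }
    double deltaY() const { return screenH / 4.0; }
};
```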
The beneficial effects of the invention are: (1) calibration is simple and portability is good; (2) detecting the dark pupil avoids dependence on the bright-pupil effect, which varies between eyes and is weak for some users, so more users can operate the system; (3) smoothing suppresses the interference caused by physiological eye tremor; (4) accuracy remains high even when the operator's head moves over a wide range; (5) a single camera is used, so the cost is lower than that of multi-camera solutions; (6) the processing speed is high and the system responds promptly, meeting the requirements of real-time human-computer interaction.
Brief description of the drawings
Fig. 1 is a schematic diagram of the infrared sources, the camera, the captured face image, the eye rectangle, and the eye-rectangle image;
Fig. 2 is a schematic diagram of the spot distribution in the eye-rectangle image;
Figs. 3, 4 and 5 are schematic diagrams of the fixation-point estimation principle;
Fig. 6 is a schematic diagram of fixation-point dwell;
Fig. 7 is a schematic diagram of the fixation point moving toward the upper left;
Fig. 8 is a schematic diagram of the fixation point moving toward the lower right.
Embodiment
As shown in Fig. 1, a human-computer interaction method based on eye tracking uses five groups of infrared light sources facing the computer operator and a camera with a built-in infrared filter. Four groups, LED1, LED2, LED3 and LED4, are mounted at the four corners of the computer display, and the fifth group, LED5, surrounds the camera lens 3. The camera is mounted below the display and connected to the computer, with lens 3 facing the operator's face.
The eye-detection method is as follows. The infrared sources reflect off the eye and produce Purkinje spots. In the gray face image 1 captured by the filtered camera, the Purkinje spots on the cornea are the brightest pixels, so locating the eye reduces to locating the brightest Purkinje spots, and locating the left or right eye reduces to locating the Purkinje spot of minimal or maximal abscissa. Binarization yields a black-and-white image in which the Purkinje-spot regions are white and everything else is black; a contour search (a search for the boundaries of the white pixels) is performed on this image, and the minimal rectangle bounding the found contour is taken as the eye rectangle 2, i.e. the region containing the eye.
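This step maps directly onto a few OpenCV calls. A minimal C++ sketch, assuming an 8-bit grayscale input; the threshold value 250 is an illustrative assumption, not a value fixed by the patent:

```cpp
#include <opencv2/opencv.hpp>

// Locate the eye as the bounding box of the brightest (Purkinje-spot) blobs.
cv::Rect detectEyeRect(const cv::Mat& grayFace) {
    cv::Mat bw;
    // The glints are near-saturated under the IR filter; 250 is an assumed threshold.
    cv::threshold(grayFace, bw, 250, 255, cv::THRESH_BINARY);

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(bw, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    if (contours.empty()) return cv::Rect();

    // Minimal rectangle wrapping all bright contours = eye rectangle.
    cv::Rect eye = cv::boundingRect(contours[0]);
    for (size_t i = 1; i < contours.size(); ++i)
        eye |= cv::boundingRect(contours[i]);
    return eye;
}
```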
The spot-detection method is as follows:
Step 1: crop the face image with the eye rectangle 2 to obtain the eye-rectangle image 2';
Step 2: segment the eye-rectangle image with a set threshold to obtain a binary image;
Step 3: remove noise from the binary image with morphological operations.
In the binary image the infrared reflections produce five spots of value 1 (the Purkinje spots), each forming a white region. However, noise and external illumination of varying brightness may produce additional white regions, and an ill-chosen threshold may split a single spot into several small white pieces. An erosion followed by a dilation (the morphological opening operation; see Learning OpenCV, Chinese edition) removes the interference of the smaller noise regions.
Step 4: group the regions.
After the morphological operations only the relatively large white regions remain in the binary image, and the noise regions are smaller than the spot regions, so it suffices to find the five largest white regions. Using the open-source computer-vision library OpenCV, the cvFindContours function finds the contour of each white region and the cvContourArea function computes the area (pixel sum) each region encloses. The areas are sorted in descending order, and the five largest regions are taken as the spot regions.
Step 5: compute the center of each spot region.
All five spots have now been detected; the centroid of each spot region is taken as its spot center.
Step 6: determine the one-to-one correspondence between the five spots and the five source groups.
From the relative positions of the five infrared source groups: LED5 has the largest ordinate; LED1 and LED3 have the smallest abscissas; LED2 and LED4 have the largest abscissas; the ordinate of LED1 is smaller than that of LED3, and the ordinate of LED2 is smaller than that of LED4. Comparing the coordinates of the five spot centers against these relations determines the one-to-one correspondence between the five spots and the five source groups. As in Fig. 2, the spots LED1', LED2', LED3', LED4' and LED5' in the eye-rectangle image correspond one-to-one to the sources LED1, LED2, LED3, LED4 and LED5.
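Steps 2 through 5 can be sketched with the modern C++ API (cv::findContours and cv::contourArea are the counterparts of the cvFindContours and cvContourArea functions named above; the threshold and kernel size are assumptions):

```cpp
#include <opencv2/opencv.hpp>
#include <algorithm>

// Return the centers of the five largest white regions (the five glints).
std::vector<cv::Point2d> detectSpotCenters(const cv::Mat& eyeGray) {
    cv::Mat bw;
    cv::threshold(eyeGray, bw, 200, 255, cv::THRESH_BINARY);  // assumed threshold

    // Opening (erode then dilate) removes the small noise regions.
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, {3, 3});
    cv::morphologyEx(bw, bw, cv::MORPH_OPEN, kernel);

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(bw, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    // Sort contours by enclosed area, descending; keep the five largest.
    std::sort(contours.begin(), contours.end(),
              [](const auto& a, const auto& b) {
                  return cv::contourArea(a) > cv::contourArea(b);
              });

    std::vector<cv::Point2d> centers;
    for (size_t i = 0; i < contours.size() && i < 5; ++i) {
        cv::Moments m = cv::moments(contours[i]);       // centroid = spot center
        centers.emplace_back(m.m10 / m.m00, m.m01 / m.m00);
    }
    return centers;
}
```

The LED correspondence of Step 6 then follows by sorting these centers by abscissa and ordinate as described above.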
The pupil-detection method is as follows:
Step 1: initialize the pupil-region center.
The centers of the five spots were obtained above. The five spots in fact lie very close to the pupil, so the means of the abscissas and ordinates of the five spot centers are used as the initial coordinates of the pupil center, and a pupil rectangle is centered there; its size can be set according to circumstances (typically 160×120 pixels).
Step 2: iterative projection integrals.
Vertical and horizontal projection integrals are applied to the pupil rectangle. Because the pupil is darker than the surrounding regions of the eye, the column and row of minimal pixel sum locate it, and iterating the projection integrals yields a small rectangle (80×60 pixels) containing only the pupil within the pupil rectangle. The OpenCV binarization function cvThreshold, the contour-search function cvFindContours, and the Hough-transform circle-search function cvHoughCircles then find the pupil boundary and its center.
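A sketch of this localization, assuming a single projection pass rather than the full iterative refinement, with the 160×120 and 80×60 box sizes from the text and illustrative Hough parameters:

```cpp
#include <opencv2/opencv.hpp>

// Locate the dark pupil inside the pupil rectangle by projection integrals,
// then refine the boundary with a Hough circle search.
cv::Vec3f detectPupil(const cv::Mat& eyeGray, cv::Point2d initCenter) {
    cv::Rect box(static_cast<int>(initCenter.x) - 80,
                 static_cast<int>(initCenter.y) - 60, 160, 120);
    box &= cv::Rect(0, 0, eyeGray.cols, eyeGray.rows);   // clamp to image
    cv::Mat roi = eyeGray(box);

    // Projection integrals: the darkest column and row cross at the pupil.
    cv::Mat colSum, rowSum;
    cv::reduce(roi, colSum, 0, cv::REDUCE_SUM, CV_32S);  // sum each column
    cv::reduce(roi, rowSum, 1, cv::REDUCE_SUM, CV_32S);  // sum each row
    cv::Point cMin, rMin;
    cv::minMaxLoc(colSum, nullptr, nullptr, &cMin, nullptr);
    cv::minMaxLoc(rowSum, nullptr, nullptr, &rMin, nullptr);

    // Small rectangle (80x60) around the projection minimum, clamped to the ROI.
    cv::Rect small(cMin.x - 40, rMin.y - 30, 80, 60);
    small &= cv::Rect(0, 0, roi.cols, roi.rows);

    // Hough circle search for the pupil boundary (parameters are assumptions).
    std::vector<cv::Vec3f> circles;
    cv::HoughCircles(roi(small), circles, cv::HOUGH_GRADIENT, 1, 20,
                     100, 15, 5, 40);
    if (circles.empty()) return cv::Vec3f();
    // Report the circle in eye-image coordinates.
    return cv::Vec3f(circles[0][0] + small.x + box.x,
                     circles[0][1] + small.y + box.y, circles[0][2]);
}
```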
The calibration method is as follows:
The computer captures through the camera a face image while the operator gazes at the infrared source in any one corner of the display screen; eye detection, spot detection and pupil detection are performed on the image to obtain the pupil center, the spot center of the source LED5 around the camera lens, and the spot center of the gazed source group; the corresponding calibration coefficient is then computed.
Suppose the operator gazes at LED1 and the computer captures the face image through the camera. Eye detection, spot detection and pupil detection yield the pupil center u_p, the spot center u_c of the source LED5 around the camera lens, and the spot center u_r1 of source LED1. Because the operator is gazing at LED1, the virtual projection point of the spot produced by LED1 should coincide with the pupil center. The original equation image is not reproduced in this text; reconstructed from that constraint and the virtual-projection relation u_v1 = u_c + α_1(u_r1 − u_c) used below, the coefficient is
α_1 = d(u_p, u_c) / d(u_r1, u_c),
where α_1 is the calibration coefficient when the operator gazes at LED1, and d(x_1, x_2) is the Euclidean distance between points x_1 and x_2. When the user gazes at LED2, LED3 and LED4 in turn, the corresponding calibration coefficients α_2, α_3 and α_4 are computed in the same way.
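The coefficient computation itself is one line; a sketch (the function name is illustrative):

```cpp
#include <opencv2/opencv.hpp>

// Calibration coefficient when gazing at corner source i:
// alpha_i = d(u_p, u_c) / d(u_ri, u_c), chosen so that the virtual projection
// point u_vi = u_c + alpha_i * (u_ri - u_c) coincides with the pupil center.
double calibrationCoefficient(const cv::Point2d& pupilCenter,   // u_p
                              const cv::Point2d& lensSpot,      // u_c (LED5)
                              const cv::Point2d& cornerSpot) {  // u_ri
    return cv::norm(pupilCenter - lensSpot) / cv::norm(cornerSpot - lensSpot);
}
```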
The fixation-point estimation method is as follows:
The position of the gaze point is estimated through the invariance of the cross ratio under projection. In Fig. 3, suppose a virtual plane X is tangent to the corneal surface. The points v_1, v_2, v_3, v_4 on this virtual plane are the projections of the four corner source groups LED1, LED2, LED3 and LED4; we call these four points the virtual projection points, and their center of projection is the corneal ball center J. If the virtual projection points are assumed roughly coplanar, the quadrilateral they form is a projection of the display screen. The virtual projection points and the pupil center P project onto the camera image plane T as the five points u_v1, u_v2, u_v3, u_v4 and u_p, where Z is the camera optical center. There are thus two projective transformations from the display screen to the image plane, and if the virtual projection points are roughly coplanar, the coordinates of the fixation point g can be estimated through projective (cross-ratio) invariance.
Specifically, as shown in Fig. 4, u_v1, u_v2, u_v3, u_v4 are the images of the virtual projection points on the cornea, u_p is the pupil center in the image, and c is the intersection of the diagonals of the quadrilateral formed by u_v1, u_v2, u_v3, u_v4. Here
u_v1 = u_c + α_1(u_r1 − u_c), u_v2 = u_c + α_2(u_r2 − u_c), u_v3 = u_c + α_3(u_r3 − u_c), u_v4 = u_c + α_4(u_r4 − u_c),
where u_c is the spot center of the source LED5 around the camera lens, and u_r1, u_r2, u_r3, u_r4 are the spot centers of sources LED1, LED2, LED3 and LED4 respectively.
u_m1 is the intersection of the line through vanishing point a and c with the line through u_v1 and u_v2; u_m2 is the intersection of the line through a and u_p with the line through u_v1 and u_v2; u_m3 is the intersection of the line through vanishing point b and u_p with the line through u_v2 and u_v4; u_m4 is the intersection of the line through b and c with the line through u_v2 and u_v4.
Likewise, the cross ratio on the display screen, as shown in Fig. 5, is computed with the corresponding equation. The original equation images are not reproduced in this text; with the standard cross-ratio convention they can be reconstructed as follows. On the image line through u_v1 and u_v2, the four collinear points u_v1, u_m2, u_m1, u_v2 are the images of the screen points with abscissas 0, g_x, w/2 and w, so
CR_image = [d(u_v1, u_m1) · d(u_m2, u_v2)] / [d(u_m2, u_m1) · d(u_v1, u_v2)],
CR_screen = [(w/2) · (w − g_x)] / [(w/2 − g_x) · w].
By the cross-ratio invariance of projective space these two cross ratios are equal, and solving CR_image = CR_screen for g_x gives the abscissa of the fixation point g. The y coordinate of g is estimated in the same way on the line through u_v2 and u_v4, using u_m3, u_m4 and the screen ordinates 0, g_y, h/2 and h.
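Under the reconstruction above, the whole estimation reduces to line intersections and two cross ratios. A C++ sketch, assuming the cross-ratio convention CR(A,B;C,D) = (AC·BD)/(BC·AD), a generic perspective view (finite vanishing points: a is inferred as the intersection of the vertical edges, b of the horizontal edges), and the corner ordering of Fig. 2 (u_v1 top-left, u_v2 top-right, u_v3 bottom-left, u_v4 bottom-right):

```cpp
#include <opencv2/opencv.hpp>

using cv::Point2d;

// Intersection of line p1-p2 with line p3-p4 (homogeneous cross products).
static Point2d lineIntersect(Point2d p1, Point2d p2, Point2d p3, Point2d p4) {
    cv::Vec3d l1 = cv::Vec3d(p1.x, p1.y, 1).cross(cv::Vec3d(p2.x, p2.y, 1));
    cv::Vec3d l2 = cv::Vec3d(p3.x, p3.y, 1).cross(cv::Vec3d(p4.x, p4.y, 1));
    cv::Vec3d x  = l1.cross(l2);
    return {x[0] / x[2], x[1] / x[2]};   // assumes x[2] != 0 (finite point)
}

// Cross ratio CR(A,B;C,D) = (AC * BD) / (BC * AD) of four collinear points.
static double crossRatio(Point2d a, Point2d b, Point2d c, Point2d d) {
    return (cv::norm(c - a) * cv::norm(d - b)) /
           (cv::norm(c - b) * cv::norm(d - a));
}

// Estimate the on-screen fixation point from the four virtual projection
// points (uv1=top-left ... uv4=bottom-right) and the pupil center up.
Point2d estimateGaze(Point2d uv1, Point2d uv2, Point2d uv3, Point2d uv4,
                     Point2d up, double w, double h) {
    Point2d c = lineIntersect(uv1, uv4, uv2, uv3);  // diagonal intersection
    Point2d a = lineIntersect(uv1, uv3, uv2, uv4);  // vertical vanishing point
    Point2d b = lineIntersect(uv1, uv2, uv3, uv4);  // horizontal vanishing point

    Point2d um1 = lineIntersect(a, c,  uv1, uv2);   // image of (w/2, 0)
    Point2d um2 = lineIntersect(a, up, uv1, uv2);   // image of (g_x, 0)
    Point2d um3 = lineIntersect(b, up, uv2, uv4);   // image of (w, g_y)
    Point2d um4 = lineIntersect(b, c,  uv2, uv4);   // image of (w, h/2)

    // Screen-side cross ratio of (0, g, w/2, w) is (w-g)/(w-2g); equating it
    // to the image-side value CR and solving gives g = w(CR-1)/(2CR-1).
    double crx = crossRatio(uv1, um2, um1, uv2);
    double cry = crossRatio(uv2, um3, um4, uv4);
    return {w * (crx - 1.0) / (2.0 * crx - 1.0),
            h * (cry - 1.0) / (2.0 * cry - 1.0)};
}
```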
The implementation flow of the eye-tracking interaction method is: first perform the calibration step — through eye detection, spot detection and pupil detection, with the user gazing at the four corners of the screen in turn, obtain the calibration coefficients α_1, α_2, α_3, α_4. After calibration, perform capture and detection: the computer continuously captures, at frame rate F, face images of the operator watching the display screen, and performs eye detection, spot detection, pupil detection and fixation-point estimation on each frame, using the methods described above. From the detection and fixation-point-estimation results, the blink judgment and the fixation-point-change judgment are made.
The blink-judgment method is: consider the L consecutive frames captured in a time period T_L starting at any time t, where L = T_L × F. If within these L frames there is one and only one sub-period T_K (T_K < T_L) whose K consecutive frames contain no detected spot, the operator is deemed to have blinked once. If there are two sub-periods T_K1 and T_K2, whose K1 and K2 consecutive frames respectively contain no detected spot, with T_K1 < T_L, T_K2 < T_L and T_K1 + T_K2 < T_L, and spots are detected in the frames between the two sub-periods, the operator is deemed to have blinked twice.
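A sketch of this window test over a per-frame spot-detected flag; the event names are illustrative:

```cpp
#include <vector>

enum class BlinkEvent { None, Single, Double };

// Count the runs of consecutive frames without a detected spot inside one
// window of L frames; a run counts only if it lasts at least minBlinkFrames
// (> 0.5 s per claim 5) and is shorter than the whole window (T_K < T_L).
BlinkEvent judgeBlink(const std::vector<bool>& spotDetected, int minBlinkFrames) {
    int runs = 0, runLen = 0;
    const int L = static_cast<int>(spotDetected.size());
    for (int i = 0; i <= L; ++i) {
        if (i < L && !spotDetected[i]) {
            ++runLen;                         // still inside an eyes-closed run
        } else {
            if (runLen >= minBlinkFrames && runLen < L) ++runs;
            runLen = 0;
        }
    }
    if (runs == 1) return BlinkEvent::Single; // one and only one closed run
    if (runs == 2) return BlinkEvent::Double; // two runs separated by open frames
    return BlinkEvent::None;
}
```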
The fixation-point-change-judgment method is: as shown in Fig. 6, let the fixation point in the frame captured at any time t have on-screen abscissa g_x and ordinate g_y. Consider the R consecutive frames in a time period T_R starting at t, where R = T_R × F; if the fixation points of all R frames remain within the circle of radius r centered at (g_x, g_y), the fixation point is deemed to dwell. As shown in Fig. 7, consider the D consecutive frames in a time period T_D starting at t, where D = T_D × F; if the abscissa and ordinate of the fixation point decrease monotonically and the total decreases exceed the abscissa variation X and the ordinate variation Y respectively, the fixation point is deemed to move toward the upper left. As shown in Fig. 8, consider the U consecutive frames in a time period T_U starting at t, where U = T_U × F; if the abscissa and ordinate increase monotonically and the total increases exceed X and Y respectively, the fixation point is deemed to move toward the lower right.
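A sketch of steps 402–404 over one window of estimated fixation points; strict monotonicity is assumed, and the type names are illustrative:

```cpp
#include <cmath>
#include <vector>

struct GazePoint { double x, y; };
enum class GazeEvent { None, Dwell, UpperLeft, LowerRight };

// Classify one window of fixation points per steps 402-404.
GazeEvent judgeGaze(const std::vector<GazePoint>& g, double r,
                    double X, double Y) {
    if (g.size() < 2) return GazeEvent::None;

    bool dwell = true, decreasing = true, increasing = true;
    for (size_t i = 1; i < g.size(); ++i) {
        if (std::hypot(g[i].x - g[0].x, g[i].y - g[0].y) > r) dwell = false;
        if (g[i].x >= g[i-1].x || g[i].y >= g[i-1].y) decreasing = false;
        if (g[i].x <= g[i-1].x || g[i].y <= g[i-1].y) increasing = false;
    }
    const double dx = g.back().x - g.front().x;
    const double dy = g.back().y - g.front().y;

    if (dwell) return GazeEvent::Dwell;                                 // 402
    if (decreasing && -dx > X && -dy > Y) return GazeEvent::UpperLeft;  // 403
    if (increasing &&  dx > X &&  dy > Y) return GazeEvent::LowerRight; // 404
    return GazeEvent::None;
}
```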
The interaction-instruction-output method is: according to the judgment results, if the operator blinks once, output a left-mouse-button click instruction; if the operator blinks twice, output a left-mouse-button double-click instruction; if the fixation point dwells, output a right-mouse-button click instruction; if the fixation point moves toward the upper left, output a mouse-wheel scroll-up instruction; if it moves toward the lower right, output a mouse-wheel scroll-down instruction.
In actual judgment, to simplify operation, one group of consecutive frames can be taken starting from any time and the blink judgment and the fixation-point-change judgment applied to it in turn. That is, the two judgments use the same number of consecutive frames, L = R = D = U, captured over the same time period: T_L, T_R, T_D and T_U are the same period. If any one of the five judged situations (blink once, blink twice, dwell, move upper-left, move lower-right) is met within the group, the corresponding operation instruction is output immediately, and the frame after the current frame becomes the start frame of the next group of consecutive frames; if none is met, no instruction is output, a new group is taken for the next judgment, and the second frame of the previous group becomes the start frame of that group.
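A sketch of this sliding-window dispatch, reusing the types from the earlier sketches (the command names are illustrative):

```cpp
#include <vector>

enum class Command { None, LeftClick, LeftDoubleClick, RightClick,
                     ScrollUp, ScrollDown };

// One pass over a window of L frames: blink test first, then gaze test.
// Returns the command to emit, or Command::None (then slide the window by 1).
Command judgeWindow(const std::vector<bool>& spotDetected,
                    const std::vector<GazePoint>& gaze,
                    const TrackerParams& p) {
    const int minBlink = static_cast<int>(p.minBlinkSecK * p.frameRateF);
    switch (judgeBlink(spotDetected, minBlink)) {
        case BlinkEvent::Single: return Command::LeftClick;
        case BlinkEvent::Double: return Command::LeftDoubleClick;
        default: break;
    }
    switch (judgeGaze(gaze, p.dwellRadiusR(), p.deltaX(), p.deltaY())) {
        case GazeEvent::Dwell:      return Command::RightClick;
        case GazeEvent::UpperLeft:  return Command::ScrollUp;
        case GazeEvent::LowerRight: return Command::ScrollDown;
        default: break;
    }
    return Command::None;
}
```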
For example, the computer continuously captures the operator's face image through the camera at 15 frames per second, performs eye detection, spot detection, pupil detection and fixation-point estimation, and then takes a group of 30 consecutive frames starting from any frame; the duration of these 30 frames is 2 seconds. Suppose that in these 30 frames no spot is detected in frames 3 through 10, so the operator's eyes are closed for more than 0.5 second, while spots are detected in all other frames: the operator is deemed to have blinked once, and a left-button click instruction is output accordingly. Suppose instead that no spot is detected in frames 3 through 10 and in frames 20 through 27, while spots are detected in all other frames: the operator is deemed to have blinked twice, and a left-button double-click instruction is output. Suppose the fixation point of the first frame is (g_x, g_y) and the fixation points of all the other frames lie within the circle of radius r centered at (g_x, g_y): the fixation point is deemed to dwell, and a right-button click instruction is output; the radius r can be set to one twentieth of the display-screen width w. Suppose the abscissa and ordinate of the fixation point decrease monotonically, with total decreases exceeding the abscissa variation X and the ordinate variation Y (fractions of the screen width w and height h) respectively: the fixation point is deemed to move toward the upper left, and a wheel scroll-up instruction is output. Suppose the abscissa and ordinate increase monotonically, with total increases exceeding X and Y respectively: the fixation point is deemed to move toward the lower right, and a wheel scroll-down instruction is output. With the output instructions, simple operations such as controlling a PPT presentation or scrolling a web page can be carried out, achieving the goal of human-computer interaction.
Claims (5)
1. A human-computer interaction method based on eye tracking, using five groups of infrared light sources facing the computer operator and a camera with a built-in infrared filter, wherein four groups of infrared sources (LED1, LED2, LED3, LED4) are mounted at the four corners of the computer display and the fifth group (LED5) surrounds the camera lens; the camera is mounted below the display and connected to the computer, with its lens facing the operator's face; characterized in that the method comprises:
the calibration step, comprising:
101: the computer captures through the camera a face image while the operator gazes at the infrared source in any one corner of the display screen; eye detection, spot detection and pupil detection are performed on the image to obtain the pupil center, the spot center corresponding to the source (LED5) around the camera lens, and the spot center corresponding to the gazed source group; the corresponding calibration coefficient is computed;
102: following the same operation, the calibration coefficients are computed when the operator gazes at the infrared sources in the other corners of the display screen in turn;
the capture-and-detection step, comprising:
201: the computer continuously captures, at frame rate F, face images of the operator watching the display screen;
202: eye detection, spot detection, pupil detection and fixation-point estimation are performed on each frame;
the blink-judgment step, comprising:
301: consider the L consecutive frames captured in a time period T_L starting at any time t, where L = T_L × F; if within these L frames there is one and only one sub-period T_K (T_K < T_L) whose K consecutive frames contain no detected spot, the operator is deemed to have blinked once; if there are two sub-periods T_K1 and T_K2 whose K1 and K2 consecutive frames respectively contain no detected spot, with T_K1 < T_L, T_K2 < T_L and T_K1 + T_K2 < T_L, and spots are detected in the frames between the two sub-periods, the operator is deemed to have blinked twice;
the fixation-point-change-judgment step:
401: let the fixation point in the frame captured at any time t have on-screen abscissa g_x and ordinate g_y;
402: consider the R consecutive frames in a time period T_R starting at t, where R = T_R × F; if the fixation points of all R frames remain within the circle of radius r centered at (g_x, g_y), the fixation point is deemed to dwell;
403: consider the D consecutive frames in a time period T_D starting at t, where D = T_D × F; if the abscissa and ordinate of the fixation point decrease monotonically and the total decreases exceed the abscissa variation X and the ordinate variation Y respectively, the fixation point is deemed to move toward the upper left;
404: consider the U consecutive frames in a time period T_U starting at t, where U = T_U × F; if the abscissa and ordinate increase monotonically and the total increases exceed X and Y respectively, the fixation point is deemed to move toward the lower right;
the interaction-instruction-output step, comprising:
501: if the operator blinks once, output a left-mouse-button click instruction;
502: if the operator blinks twice, output a left-mouse-button double-click instruction;
503: if the fixation point dwells, output a right-mouse-button click instruction;
504: if the fixation point moves toward the upper left, output a mouse-wheel scroll-up instruction;
505: if the fixation point moves toward the lower right, output a mouse-wheel scroll-down instruction.
2. The human-computer interaction method of claim 1, characterized in that:
the eye detection comprises: binarizing the captured face image to obtain a black-and-white image; performing a contour search on the black-and-white image; and taking the minimal rectangle bounding the found contour as the eye rectangle;
the spot detection comprises: cropping the face image by the eye rectangle to obtain an eye-rectangle image; binarizing it to obtain a binary image; removing noise from the binary image; taking the five largest white regions of the denoised binary image as the spots of the five infrared source groups; taking the centroid of each of the five spots as its spot center; and determining the one-to-one correspondence between the five spots and the five source groups;
the pupil detection comprises: taking the means of the abscissas and ordinates of the five spot centers as the initial abscissa and ordinate of the pupil center; setting a pupil rectangle centered on that initial position; applying vertical and horizontal projection integrals to the pupil rectangle to obtain a small rectangle containing only the pupil; and searching the small rectangle for the pupil boundary and its center;
the fixation-point estimation comprises: from the five spot centers and the calibration coefficients, computing the four virtual projection points on the cornea corresponding to the spots of the four corner sources; and estimating the fixation point from the pupil center, the four virtual projection points, and the length and width of the display screen.
3. The human-computer interaction method of claim 1, characterized in that the time periods T_L, T_R, T_D and T_U are the same time period.
4. The human-computer interaction method of claim 2, characterized in that the time periods T_L, T_R, T_D and T_U are the same time period.
5. The human-computer interaction method of any one of claims 1 to 4, characterized in that the frame rate F is 15 frames per second; the durations of T_L, T_R, T_D and T_U are all 2 seconds; the time periods T_K, T_K1 and T_K2 are all greater than 0.5 second; the radius r is w/20; and the abscissa variation X and the ordinate variation Y are fixed fractions of w and h respectively (the exact fractions appear only in the original equation images), where w is the display-screen width and h is the display-screen height.
Publication history
Application CN201310684342.5A was filed on 2013-12-13. It was published as CN103677270A on 2014-03-26 and granted as CN103677270B on 2016-08-17; the patent status is Active.