
WO2017134918A1 - Line-of-sight detection device - Google Patents

Line-of-sight detection device

Info

Publication number
WO2017134918A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
exposure condition
lighting
light source
exposure
Prior art date
Application number
PCT/JP2016/085919
Other languages
French (fr)
Japanese (ja)
Inventor
Masayuki Nakanishi (正行 中西)
Tatsumaro Yamashita (山下 龍麿)
Original Assignee
Alps Electric Co., Ltd. (アルプス電気株式会社)
Priority date
Filing date
Publication date
Application filed by Alps Electric Co., Ltd. (アルプス電気株式会社)
Publication of WO2017134918A1

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00: Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/10: Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B 3/113: Objective types for determining or recording eye movement
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 1/00: General purpose image data processing
    • G06T 7/00: Image analysis
    • G06T 7/60: Analysis of geometric attributes

Definitions

  • The present invention relates to a line-of-sight detection device capable of detecting the line-of-sight direction of a subject such as the driver of a car.
  • In the pupil detection method described in Patent Document 1, a bright pupil image, a dark pupil image, and a non-illuminated image are first acquired; a differential bright pupil image is then obtained by subtracting the non-illuminated image from the bright pupil image, and a differential dark pupil image is obtained by subtracting the non-illuminated image from the dark pupil image.
  • The non-illuminated image is an image captured with the emission of inspection light from the pupil detection device's light source stopped.
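The subtraction step described above can be sketched as follows. This is a minimal illustration with toy 8-bit pixel grids; the function name and data are mine, not the patent's, and real implementations would operate on full camera frames:

```python
def subtract_image(img, noillum):
    # Pixel-wise difference, clamped at 0, so ambient light that is
    # common to both frames cancels while source-lit detail remains.
    return [[max(p - q, 0) for p, q in zip(r1, r2)]
            for r1, r2 in zip(img, noillum)]

bright = [[120, 200], [90, 80]]   # bright pupil frame (light source on)
ambient = [[20, 30], [10, 80]]    # non-illuminated frame (light source off)
diff_bright = subtract_image(bright, ambient)
# diff_bright == [[100, 170], [80, 0]]
```

The same function applied to the dark pupil frame yields the differential dark pupil image.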
  • In the wire-frame method, feature points such as the corners of the eyes, the tip of the nose, and the left and right edges of the mouth are captured, and a three-dimensional structure of the face is acquired using a stereo camera. The structure is tracked so that it always fits the face, and the direction and position of the face are detected from the position and direction of the structure. Further, the pupil portion is extracted from the re-difference image obtained by subtracting the differential dark pupil image from the differential bright pupil image, and the corneal reflection portion is extracted from the sum image of the differential bright pupil image and the differential dark pupil image.
  • By applying the wire-frame method to the images obtained by subtracting the non-illuminated image from the bright pupil image and the dark pupil image, and detecting the pupil portion and the corneal reflection portion from them, high-precision gaze direction detection that was difficult with the conventional wire-frame method alone is realized.
  • The bright pupil image is captured under a bright exposure condition with a large amount of light incident on the camera so that the corneal reflection image appears clearly, while the dark pupil image is captured under a relatively dark exposure condition, with less incident light than the bright exposure condition, so that the pupil appears clearly.
  • The amount of light incident on the camera's image sensor is adjusted, for example, by controlling the camera's aperture and shutter speed.
  • Because a common non-illuminated image is used for the difference with the bright pupil image and for the difference with the dark pupil image, subtracting that common non-illuminated image from images captured under different exposure conditions cannot sufficiently remove ambient-light noise.
  • When the corneal reflection image or the pupil is extracted from a difference image in which external-light noise remains in this way, the extraction accuracy tends to be low due to the influence of that noise, and it is therefore difficult to detect the gaze with high accuracy from the extracted corneal reflection image and pupil.
  • The exposure condition is specified by the target value of AE (Auto Exposure): the AE target value is adjusted so that the corneal reflection image appears clearly for one image, while the other image is captured under a relatively dark exposure condition.
  • an object of the present invention is to provide a gaze detection apparatus that can sufficiently remove external light noise and thereby can perform gaze detection with high accuracy.
  • Another object of the present invention is to provide a line-of-sight detection apparatus capable of performing high-precision line-of-sight detection while suppressing a decrease in processing speed.
  • The visual line detection device includes a light source that irradiates light onto at least a region including the eye; a camera that, with the light source turned on, acquires a first image of the region under a first exposure condition and a second image under a second exposure condition darker than the first exposure condition; an exposure control unit that controls the first and second exposure conditions; and an image extraction unit that extracts a corneal reflection image from the first image and a dark pupil image from the second image.
  • With this configuration, the exposure conditions of the first image and the second image can be made close to each other, without the large difference that arises when acquiring a conventional bright pupil image and dark pupil image. Therefore, when removing external-light noise using a non-illuminated image, if a non-illuminated image based on a non-lighting image captured under a condition between the exposure condition of the first image and that of the second image is used, a certain noise-removal performance can be obtained for both the first image and the second image.
  • the exposure condition means an AE target.
  • Since line-of-sight detection is performed based on the corneal reflection image and the dark pupil image, high detection accuracy can be obtained.
  • Since the corneal reflection image and the dark pupil image are extracted instead of the conventional combination of a bright pupil image and a dark pupil image, the configuration of the light source is kept from becoming complicated and its cost is kept from increasing. That is, to acquire a bright pupil image and a dark pupil image, either a plurality of light sources emitting light of different wavelengths suited to each image, or a light source coaxial with the camera and a non-coaxial light source, must each be prepared and used to irradiate inspection light.
  • In contrast, the line-of-sight detection device of the present invention can be configured with only a single-wavelength light source, reducing cost, and it does not require the complicated control described above.
  • Preferably, the camera acquires a non-lighting image without the light source turned on, the image extraction unit extracts a non-illuminated image from the non-lighting image, and the detection apparatus includes a difference image calculation unit that obtains a first difference image by a difference calculation between the corneal reflection image and the non-illuminated image, and a second difference image by a difference calculation between the dark pupil image and the non-illuminated image. This makes it possible to perform line-of-sight detection with higher accuracy based on difference images from which the influence of external light has been removed.
  • the wavelength of the light emitted from the light source is preferably a wavelength that has a high light absorption rate in the eyeball and is not easily reflected by the retina of the eyeball.
  • the non-illuminated image is preferably a common image for the difference calculation of the first image and the difference calculation of the second image.
  • Thereby, the number of times the non-lighting image must be captured is reduced, preventing the imaging control from becoming complicated and speeding up eye-gaze detection.
  • Since the exposure conditions for the corneal reflection image and the dark pupil image are closer to each other than those of the conventional bright pupil image and dark pupil image, external-light noise can be sufficiently removed from each image even when a non-illuminated image with a common exposure condition is used.
  • Preferably, the non-lighting image is acquired under an exposure condition closer to the second exposure condition than to the first, or under the same exposure condition as the second. This allows external-light noise to be removed more thoroughly from the second image, which is more susceptible to it than the first image, and contributes to more accurate line-of-sight detection.
  • Alternatively, the camera may acquire a first non-lighting image and a second non-lighting image with the light source off; a first non-illuminated image is extracted from the first non-lighting image, and a second non-illuminated image from the second non-lighting image. In this case the line-of-sight detection device preferably includes a difference image calculation unit that obtains the first difference image by a difference calculation between the corneal reflection image and the first non-illuminated image, and the second difference image by a difference calculation between the dark pupil image and the second non-illuminated image.
  • Since the first non-lighting image and the second non-lighting image are captured under conditions corresponding to the exposure conditions of the first image and the second image respectively, external-light noise can be removed more effectively from the corneal reflection image and the dark pupil image.
  • the first non-lighting image is acquired under the first exposure condition
  • the second non-lighting image is acquired under the second exposure condition.
  • FIG. 1 is a block diagram illustrating a configuration of the visual line detection device according to the first embodiment
  • FIG. 2 is a chart showing the lighting timing of the first light source 11 and the second light source 12 and the imaging timing of the first camera 21 and the second camera 22.
  • FIG. 3 is a flowchart showing a flow of gaze detection processing in the first embodiment.
  • In the first embodiment, a light source emitting light that has a high absorptance in the eyeball and is not easily reflected by the retina is used, and a non-illuminated image is used to obtain a difference image from each of the corneal reflection image and the dark pupil image.
  • The line-of-sight detection device includes two light sources 11 and 12, two cameras 21 and 22, and an arithmetic control unit 40, and is installed on the instrument panel or windshield so as to face the driver's face.
  • the first light source 11 and the second light source 12 are each composed of a plurality of LED (light emitting diode) light sources.
  • The plurality of LED light sources of the first light source 11 are each disposed a certain distance from the optical axis of the first camera 21, and the plurality of LED light sources of the second light source 12 are likewise disposed a certain distance from the optical axis of the second camera 22.
  • the number of LEDs constituting each of the two light sources 11 and 12 can be arbitrarily set.
  • The light emitted from the two light sources 11 and 12 has a high absorptance in the human eyeball and a wavelength that is not easily reflected by the retina. A wavelength above 900 nm and below 1000 nm is preferable; for example, infrared light of 940 nm is emitted.
  • the timing of lighting (light emission) of the first light source 11 and the second light source 12 is controlled by the light source control unit 33.
  • Information on the lighting timing is output to the exposure control unit 34, which causes the first camera 21 and the second camera 22 to perform imaging in synchronization with the lighting of the first light source 11 and the second light source 12.
  • The first camera 21 and the second camera 22 are spaced apart so that the distance between their optical axes substantially coincides with the distance between a person's eyes, and both are directed toward the driver's face.
  • the first camera 21 and the second camera 22 have an image sensor such as a CMOS (Complementary Metal Oxide Semiconductor) or a CCD (Charge Coupled Device), and acquire an image of the face including the driver's eyes.
  • Each LED light source of the first light source 11 can be regarded as having an optical axis substantially coaxial with the first camera 21, and each LED light source of the second light source 12 as being arranged substantially coaxially with the second camera 22.
  • Light from the first light source 11 is irradiated onto a region including one eye, and the first camera 21 acquires a face image including this eye; light from the second light source 12 is irradiated onto a region including the other eye, and the second camera 22 acquires a face image including that eye.
  • The calculation control unit 40 is configured by a computer's CPU and memory, and the function of each block is realized by executing pre-installed software.
  • The calculation control unit 40 includes image acquisition units 31 and 32, a light source control unit 33, an exposure control unit 34, an image extraction unit 35, a difference image calculation unit 36, a corneal reflection light center detection unit 37, a pupil center calculation unit 38, and a line-of-sight direction calculation unit 39.
  • The image acquisition units 31 and 32 acquire, frame by frame, the images captured by the first camera 21 and the second camera 22 under the predetermined exposure conditions. Each of the first camera 21 and the second camera 22 captures a first image for corneal reflection image extraction and a second image for dark pupil image extraction, and additionally captures a non-lighting image for non-illuminated image extraction following these two imaging operations. The non-lighting image may instead be captured between the first image and the second image.
  • FIG. 2(A) shows the lighting timing of the first light source 11, FIG. 2(B) the lighting timing of the second light source 12, FIG. 2(C) the imaging timing of the first camera 21, and FIG. 2(D) the imaging timing of the second camera 22.
  • In synchronization with the first lighting L11 of the first light source 11 for capturing the first image, the first exposure E11 (imaging) is performed in the first camera 21.
  • the second lighting L12 of the first light source 11 for capturing the second image is performed, and the second exposure E12 is performed in the first camera 21 in synchronization therewith.
  • The third exposure E13 in the first camera 21 is performed in synchronization with the timing of the non-lighting N1 of the first light source 11.
  • the first exposure E21 in the second camera 22 is performed in synchronization with the first lighting L21 of the second light source 12 for capturing the first image of the other eye
  • In synchronization with the second lighting L22 of the second light source 12 for capturing the second image, the second exposure E22 is performed in the second camera 22.
  • the third exposure E23 is performed in synchronization with the timing of the non-lighting N2 of the second light source 12.
  • Thereafter, the first light source 11 repeats lighting and non-lighting in the same manner as the lightings L11 and L12 and the non-lighting N1, and the first camera 21 performs exposures corresponding to E11, E12, and E13 in synchronization with it.
  • The exposures (imaging) E11, E12, E13, E21, E22, E23, E14, ... are executed at constant time intervals and with the same time width.
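The alternating capture sequence of FIG. 2 can be sketched as a simple scheduler. Function and label names follow the figure, but the tuple representation is my own illustration, not part of the patent:

```python
def frame_schedule(cycles):
    """Return (camera, exposure_label, light_state) tuples in the
    order of FIG. 2: camera 1 exposes E11 (first condition, source
    lit), E12 (second condition, lit), then E13 (source off); camera
    2 then does the same with E21-E23. Labels repeat across cycles
    for brevity."""
    seq = []
    for _ in range(cycles):
        for cam, labels in ((1, ("E11", "E12", "E13")),
                            (2, ("E21", "E22", "E23"))):
            seq.append((cam, labels[0], "lit"))   # first image
            seq.append((cam, labels[1], "lit"))   # second image
            seq.append((cam, labels[2], "off"))   # non-lighting image
    return seq
```

For one cycle this yields six exposures, with each camera's non-lighting capture immediately after its two lit captures.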
  • the exposure conditions controlled by the exposure controller 34 are the following (1) to (3).
  • the exposure condition means an AE target and is changed by adjusting settings such as an image acquisition time (exposure time), an aperture stop, and a sensor gain in the cameras 21 and 22.
  • Information on the exposure conditions is output from the exposure control unit 34 to the image extraction unit 35. Note that external light is not blocked under any exposure condition.
  • (1) First exposure condition: the exposure condition for capturing the first image for corneal reflection image extraction. It is preferably a bright exposure condition in which more light is incident on the imaging elements of the cameras 21 and 22 than under proper exposure, so that the corneal reflection image is captured brighter than its surroundings and can be extracted with high accuracy.
  • Proper exposure here is an exposure condition in which colors and brightness are adjusted to those of an ordinary image, based on the image sensor's sensitivity setting and the luminance balance and average luminance within the imaging target range.
  • (2) Second exposure condition: the exposure condition for capturing the second image for dark pupil image extraction, darker than the first exposure condition; less light is incident on the camera's image sensor than under the first exposure condition. This increases the difference in brightness between the pupil image and its surroundings, raising the accuracy of pupil image extraction.
  • (3) Third exposure condition: the exposure condition for capturing the non-lighting image for non-illuminated image extraction, with the first light source 11 and the second light source 12 turned off. This exposure condition is closer to the second exposure condition than to the first, and may be identical to the second exposure condition; that is, the AE target for the non-lighting capture is closer to the AE target of the second exposure condition than to that of the first, or the same as the AE target of the second exposure condition.
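The choice of AE target for the non-lighting capture can be sketched as a simple interpolation. The function name, the `bias` parameter, and the numeric AE targets are illustrative assumptions, not values from the patent:

```python
def third_ae_target(ae_first, ae_second, bias=1.0):
    """Pick the AE target for the non-lighting capture as a point
    between the two conditions. bias=1.0 makes it equal to the
    darker second condition; any bias > 0.5 keeps it closer to the
    second condition than to the first, per the preferred setup."""
    return ae_second + (1.0 - bias) * (ae_first - ae_second)

# With illustrative targets: first condition 100, second 40.
# bias=1.0 -> 40 (same as second); bias=0.8 -> 52 (closer to second).
```

The clamp to bias > 0.5 mirrors the stated preference: the non-lighting image's exposure should resemble the second (darker) condition, which benefits the noisier dark pupil difference most.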
  • the images acquired by the image acquisition units 31 and 32 are read into the image extraction unit 35 for each frame.
  • The image extraction unit 35 extracts a region including the subject's eye from each image: a corneal reflection image from the first image captured under the first exposure condition, a dark pupil image from the second image captured under the second exposure condition, and a non-illuminated image from the non-lighting image captured under the third exposure condition, i.e., with the first light source 11 and the second light source 12 turned off.
  • the cornea reflection image, dark pupil image, and non-illuminated image extracted by the image extraction unit 35 are output to the difference image calculation unit 36.
  • In the difference image calculation unit 36, a first difference image is calculated by a difference calculation between the corneal reflection image and the non-illuminated image, and a second difference image is calculated by a difference calculation between the dark pupil image and the non-illuminated image.
  • The non-illuminated image used for the two difference calculations is a common image, based on a non-lighting image acquired immediately before, immediately after, or between the captures of the first and second images underlying the corneal reflection image and the dark pupil image. In the case shown in FIG. 2, for example, a non-lighting image is captured at timing N1 immediately after the lighting L12 for the second image, and the non-illuminated image extracted from it is used in the difference calculations for the corneal reflection image and dark pupil image extracted from exposures E11 and E12, respectively.
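The common-reference difference calculation described above can be sketched as follows, again with toy pixel grids and illustrative names of my own choosing:

```python
def diff(img, ref):
    # Pixel-wise subtraction, clamped at zero.
    return [[max(a - b, 0) for a, b in zip(ra, rb)]
            for ra, rb in zip(img, ref)]

def difference_images(cornea_img, dark_img, noillum):
    """First embodiment: one common non-illuminated frame is
    subtracted from both the corneal-reflection capture (E11) and
    the dark-pupil capture (E12)."""
    return diff(cornea_img, noillum), diff(dark_img, noillum)
```

Because the first and second exposure conditions are close, the single ambient reference removes a comparable share of external light from both results, which is the point the patent makes against the bright/dark pupil scheme.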
  • The first difference image calculated by the difference image calculation unit 36 is output to the corneal reflection light center detection unit 37. Since the first difference image is based on the first image captured under the first exposure condition, the light reflected from the corneal reflection point appears bright and is detected as a spot image in the corneal reflection light center detection unit 37. This reflected light forms a Purkinje image, and the corneal reflection light center detection unit 37 performs image processing on the spot image to obtain the center of the reflected light from the corneal reflection point.
  • The second difference image calculated by the difference image calculation unit 36 is output to the pupil center calculation unit 38. Since the second difference image is based on the second image captured under the second exposure condition, the pupil center calculation unit 38 can accurately determine the shape of the pupil from the difference in brightness between the pupil image and its surroundings. The pupil image signal is image-processed and binarized, and an area image corresponding to the shape and area of the pupil is calculated. An ellipse containing this area image is then extracted, and the intersection of its major and minor axes is calculated as the center position of the pupil; alternatively, the pupil center is obtained from the luminance distribution of the pupil image.
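A simplified stand-in for the binarization step can be sketched as follows. The patent fits an ellipse and intersects its axes; here, as an assumption-laden shortcut, the centroid of the binarized region is used, which coincides with the axis intersection for a symmetric pupil blob. All names and pixel values are illustrative:

```python
def pupil_center(img, threshold):
    """Binarize the frame (pupil pixels are darker than the
    threshold) and return the centroid (x, y) of the dark region
    as a simplified pupil-center estimate."""
    xs, ys, n = 0.0, 0.0, 0
    for y, row in enumerate(img):
        for x, p in enumerate(row):
            if p < threshold:        # pupil is darker than surroundings
                xs += x
                ys += y
                n += 1
    return (xs / n, ys / n)

frame = [[90, 90, 90, 90],
         [90, 10, 10, 90],
         [90, 10, 10, 90],
         [90, 90, 90, 90]]
# pupil_center(frame, 50) -> (1.5, 1.5)
```

A production implementation would instead fit an ellipse to the binarized contour, as the patent describes, to stay robust against partial occlusion by the eyelid.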
  • the corneal reflection light center calculation value calculated by the corneal reflection light center detection unit 37 and the pupil center calculation value calculated by the pupil center calculation unit 38 are given to the gaze direction calculation unit 39.
  • the line-of-sight direction calculation unit 39 detects the direction of the line of sight from the pupil center calculated value and the corneal reflection light center calculated value.
  • The line-of-sight direction calculation unit 39 calculates the linear distance α between the center of the pupil and the center of the corneal reflection point, sets XY coordinates with the pupil center as the origin, and calculates the inclination angle β between the X axis and the line connecting the pupil center and the reflection point center.
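The distance and inclination angle between the pupil center and the corneal reflection center can be sketched directly from those definitions. The function name and the degree units are my assumptions; the patent does not specify units:

```python
import math

def gaze_parameters(pupil_center, reflection_center):
    """Return (distance, angle_deg): the straight-line distance
    between pupil center and corneal-reflection center, and the
    inclination of the connecting line relative to the X axis,
    with the pupil center taken as the origin."""
    dx = reflection_center[0] - pupil_center[0]
    dy = reflection_center[1] - pupil_center[1]
    distance = math.hypot(dx, dy)
    angle_deg = math.degrees(math.atan2(dy, dx))
    return distance, angle_deg

# gaze_parameters((0, 0), (3, 4)) -> (5.0, 53.130...)
```

Mapping these two parameters to an actual gaze direction requires a per-user calibration, which the patent leaves to the gaze direction calculation unit 39.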
  • a first image (step S11), a second image (step S12), and a non-lighting image (step S13) are captured.
  • the first image is captured under the first exposure condition
  • the second image is captured under the second exposure condition
  • the non-lighting image is captured under the third exposure condition.
  • A corneal reflection image is extracted from the first image (step S21), a dark pupil image from the second image (step S22), and a non-illuminated image from the non-lighting image (step S23).
  • The difference image calculation unit 36 calculates a first difference image as the difference between the corneal reflection image and the non-illuminated image (step S31), and a second difference image as the difference between the dark pupil image and the non-illuminated image (step S32).
  • The light reflected from the corneal reflection point is detected as a spot image by the corneal reflection light center detection unit 37, the spot image is image-processed, and the center of the reflected light from the corneal reflection point is obtained (step S41).
  • In the pupil center calculation unit 38, the shape of the pupil is detected from the second difference image; the pupil image signal is image-processed and binarized, and an area image corresponding to the shape and area of the pupil is calculated. The pupil center calculation unit 38 then extracts an ellipse containing the area image and calculates the intersection of its major and minor axes as the position of the pupil center (step S42).
  • The corneal reflection light center calculation value calculated by the corneal reflection light center detection unit 37 and the pupil center calculation value calculated by the pupil center calculation unit 38 are output to the gaze direction calculation unit 39, which detects the direction of the line of sight from them (step S43).
  • a first image is acquired under a first exposure condition
  • a second image is acquired under a second exposure condition that is darker than the first exposure condition.
  • the cornea reflection image is extracted from the first image
  • the dark pupil image is extracted from the second image.
  • Since the corneal reflection image and the dark pupil image are extracted instead of the conventional combination of a bright pupil image and a dark pupil image, the cost of the light source can be kept from increasing. That is, to acquire a bright pupil image and a dark pupil image, either a plurality of light sources emitting light of different wavelengths suited to each image, or a light source coaxial with the camera and a non-coaxial light source, must each be prepared and used to irradiate inspection light.
  • the line-of-sight detection apparatus of the first embodiment can be configured with only a light source having a single wavelength, and does not require complicated control as described above.
  • In the first embodiment, a non-illuminated image is extracted from a non-lighting image, a first difference image is acquired by subtracting the non-illuminated image from the corneal reflection image, and a second difference image is acquired by subtracting the non-illuminated image from the dark pupil image, so more accurate line-of-sight detection can be performed based on difference images from which the influence of external light has been removed.
  • a common unilluminated image is used for the difference calculation of the first image and the difference calculation of the second image.
  • Thereby, the number of times the non-lighting image must be captured is reduced, preventing the imaging control from becoming complicated and speeding up eye-gaze detection.
  • Since the exposure conditions for the corneal reflection image and the dark pupil image are closer to each other than those of the conventional bright pupil image and dark pupil image, external-light noise can be sufficiently removed from each image even when a non-illuminated image with a common exposure condition is used.
  • The non-lighting image is acquired under an exposure condition closer to the second exposure condition than to the first, or under the same exposure condition as the second. External-light noise can therefore be removed more thoroughly from the second image, which, being captured under a darker exposure condition than the first image, is more susceptible to external-light noise; this contributes to more accurate line-of-sight detection.
  • The second embodiment differs from the first embodiment in that separate non-illuminated images are used for the difference calculation of the first image and the difference calculation of the second image.
  • Description of the configuration shared with the first embodiment is omitted.
  • FIG. 4 is a chart showing the lighting timing of the first light source 11 and the second light source 12 and the imaging timing of the first camera 21 and the second camera 22.
  • FIG. 5 is a flowchart showing the flow of gaze detection processing in the second embodiment. FIG. 4(A) shows the lighting timing of the first light source 11, FIG. 4(B) the lighting timing of the second light source 12, FIG. 4(C) the imaging timing of the first camera 21, and FIG. 4(D) the imaging timing of the second camera 22.
  • the line-of-sight detection apparatus includes two light sources 11 and 12, two cameras 21 and 22, and an arithmetic control unit 40, as in the first embodiment.
  • The calculation control unit 40 includes, as in the first embodiment, the image acquisition units 31 and 32, the light source control unit 33, the exposure control unit 34, the image extraction unit 35, the difference image calculation unit 36, the corneal reflection light center detection unit 37, the pupil center calculation unit 38, and the gaze direction calculation unit 39.
  • the configuration and arrangement of the first light source 11 and the second light source 12 and the wavelength of the emitted light are the same as in the first embodiment.
  • the positional relationship between the two light sources 11 and 12 and the two cameras 21 and 22 is the same.
  • the exposure timing and exposure conditions by the cameras 21 and 22 are different from those of the first embodiment as described below.
  • In synchronization with the first lighting L111 of the first light source 11 for capturing the first image, the first exposure E111 (imaging) is performed in the first camera 21. Thereafter, the second lighting L112 for capturing the second image is performed; in the first camera 21, the second exposure E112 is performed in synchronization with the timing of the non-lighting N11 after the first lighting L111 and before the second lighting L112, and the third exposure E113 is performed in synchronization with the second lighting L112 of the first light source 11. Further, the fourth exposure E114 is performed in the first camera 21 in synchronization with the timing of the non-lighting N12, after the second lighting L112 and before the first lighting L121 of the second light source 12.
  • The first exposure E121 in the second camera 22 is performed in synchronization with the first lighting L121 of the second light source 12 for capturing the first image of the other eye. Thereafter, the second lighting L122 for capturing the second image is performed, and in the second camera 22, the second exposure E122 is performed in synchronization with the timing of the non-lighting N13 after the first lighting L121 and before the second lighting L122.
  • The third exposure E123 is performed in synchronization with the second lighting L122 of the second light source 12, and after the second lighting L122 and before the third lighting L113 of the first light source 11, the second camera 22 performs the fourth exposure E124 in synchronization with the timing of the non-lighting N14. Thereafter, the first light source 11 alternately repeats lighting and non-lighting in the same manner as the lighting L111, non-lighting N11, lighting L112, and non-lighting N12, with the first camera 21 performing the corresponding exposures E111 to E114 in synchronization; the second camera 22 likewise performs exposure (imaging) in synchronization with the alternate lighting and non-lighting of the second light source 12.
  • exposure (imaging) E111, E112, E113, E114, E121, E122, E123, E124, E115,... Is executed at a constant time interval with the same time width.
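The alternating lighting and synchronized exposure described above can be summarized as a repeating eight-slot cycle, four frames per camera. The sketch below is an illustrative model of that cycle only; the dictionary keys and role strings are chosen for this example and are not taken from the patent.

```python
def exposure_schedule():
    """Model of the FIG. 4 timing: for each camera, four equal-width
    exposures at a constant interval, alternating between a frame with
    the paired light source lit and a non-lighting frame."""
    frame_roles = [
        ("on",  "first image (1st exposure condition)"),
        ("off", "first non-lighting image (3rd exposure condition)"),
        ("on",  "second image (2nd exposure condition)"),
        ("off", "second non-lighting image (4th exposure condition)"),
    ]
    schedule = []
    for camera, light in (("camera 21", "light source 11"),
                          ("camera 22", "light source 12")):
        for light_state, role in frame_roles:
            schedule.append({
                "camera": camera,
                "light": light if light_state == "on" else "off",
                "role": role,
            })
    return schedule

for slot in exposure_schedule():
    print(slot)
```

In a real controller each slot would also carry the exposure settings for its condition, and the cycle would repeat continuously as described above.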
The exposure conditions controlled by the exposure control unit 34 are the following (1) to (4). The exposure conditions are changed by adjusting settings such as the image acquisition time (exposure time), aperture stop, and sensor gain in the cameras 21 and 22. Information on the exposure conditions is output from the exposure control unit 34 to the image extraction unit 35. Note that external light is not blocked under any of the exposure conditions.
(1) First exposure condition: an exposure condition for capturing the first image, from which the cornea reflection image is extracted. This is preferably a bright exposure condition in which the amount of light incident on the imaging elements of the cameras 21 and 22 is larger than in the case of proper exposure, whereby the corneal reflection image is captured brighter than its surroundings and can therefore be extracted with high accuracy.
(2) Second exposure condition: an exposure condition for capturing the second image, from which the dark pupil image is extracted. It is darker than the first exposure condition, so that the amount of light incident on the image sensor of the camera is smaller than under the first exposure condition. As a result, the difference in brightness between the pupil image and the surrounding image can be increased, so the accuracy of pupil image extraction can be increased.
(3) Third exposure condition: an exposure condition for capturing the first non-lighting image, from which the first unilluminated image is extracted, with the first light source 11 and the second light source 12 not lit. This exposure condition is the same as the first exposure condition.
(4) Fourth exposure condition: an exposure condition for capturing the second non-lighting image, from which the second unilluminated image is extracted, likewise with the first light source 11 and the second light source 12 not lit. This exposure condition is the same as the second exposure condition.
Note that the third exposure condition need not be the same as the first exposure condition, and the fourth exposure condition need not be the same as the second exposure condition. However, the third exposure condition is preferably closer to the first exposure condition than to the second exposure condition, and the fourth exposure condition is preferably closer to the second exposure condition than to the first exposure condition.
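Conditions (1) to (4) can be represented as a small settings table of the parameters the exposure control unit 34 adjusts. The concrete numbers below are placeholders chosen only to satisfy the ordering constraints stated above (first/third brighter than second/fourth); they are not values from the patent.

```python
# Hypothetical exposure settings: larger exposure_ms / gain => brighter image.
EXPOSURE_CONDITIONS = {
    "first":  {"exposure_ms": 8.0, "gain": 4.0},  # bright: corneal reflection image
    "second": {"exposure_ms": 2.0, "gain": 1.0},  # dark: dark pupil image
    "third":  {"exposure_ms": 8.0, "gain": 4.0},  # same as first (non-lighting pair)
    "fourth": {"exposure_ms": 2.0, "gain": 1.0},  # same as second (non-lighting pair)
}

def brightness(cond):
    """Crude relative brightness score used only for ordering checks."""
    c = EXPOSURE_CONDITIONS[cond]
    return c["exposure_ms"] * c["gain"]

# Constraints stated in the text:
assert EXPOSURE_CONDITIONS["third"] == EXPOSURE_CONDITIONS["first"]
assert EXPOSURE_CONDITIONS["fourth"] == EXPOSURE_CONDITIONS["second"]
assert brightness("first") > brightness("second")
```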
Next, the flow of image processing and line-of-sight direction detection, from imaging to the determination of the line-of-sight direction, will be described. First, the first image (step S111 in FIG. 5), the second image (step S112), the first non-lighting image (step S113), and the second non-lighting image (step S114) are respectively captured. These images are acquired by the image acquisition units 31 and 32, and read into the image extraction unit 35 for each frame.
In the image extraction unit 35, a cornea reflection image (step S121) is extracted from the first image, and a dark pupil image (step S122) is extracted from the second image. Further, a first unilluminated image (step S123) of the region including the subject's eye is extracted from the first non-lighting image captured under the third exposure condition, and a second unilluminated image (step S124) of the region including the subject's eye is extracted from the second non-lighting image captured under the fourth exposure condition. The cornea reflection image, dark pupil image, first unilluminated image, and second unilluminated image extracted by the image extraction unit 35 are output to the difference image calculation unit 36.
In the difference image calculation unit 36, a difference calculation between the cornea reflection image and the first unilluminated image is executed to calculate a first difference image (step S131), and a difference calculation between the dark pupil image and the second unilluminated image is executed to calculate a second difference image (step S132). Here, the first unilluminated image used for the difference is based on the first non-lighting image acquired by imaging performed immediately before or after the imaging of the first image from which the corneal reflection image was extracted, and the second unilluminated image used for the difference is based on the second non-lighting image acquired by imaging performed immediately before or after the imaging of the second image from which the dark pupil image was extracted. For example, in the case illustrated in FIG. 4, the first non-lighting image is captured at the timing N11 immediately after the lighting L111 for the first image, and the first unilluminated image extracted from this image is used for the difference calculation with the corneal reflection image extracted on the basis of the imaging E111. Likewise, the second non-lighting image is captured at the timing N12 immediately after the lighting L112 for the second image, and the second unilluminated image extracted from this image is used for the difference calculation with the dark pupil image extracted on the basis of the imaging E113.
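The difference calculations of steps S131 and S132 amount to a per-pixel subtraction of the temporally adjacent non-lighting frame, clamped at zero. A minimal sketch using plain nested lists (a real implementation would operate on camera frames, for example with NumPy arrays):

```python
def difference_image(lit, unlit):
    """Per-pixel difference (lit - unlit), clamped at 0, so that ambient
    (external) light recorded in both frames cancels out and only the
    contribution of the device's own light source remains."""
    return [[max(a - b, 0) for a, b in zip(row_l, row_u)]
            for row_l, row_u in zip(lit, unlit)]

# Toy 1x4 "frames": ambient light of 30 everywhere; the light source
# adds 50 at one pixel (e.g. a corneal reflection).
lit   = [[30, 80, 30, 30]]
unlit = [[30, 30, 30, 30]]
print(difference_image(lit, unlit))  # → [[0, 50, 0, 0]]
```

The ambient term cancels exactly here because both toy frames share the same exposure, which is the point of pairing each lit image with a non-lighting image captured under the corresponding condition.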
The first difference image calculated by the difference image calculation unit 36 is output to the corneal reflection light center detection unit 37. Since the first difference image is based on the first image captured under the first exposure condition, the reflected light from the corneal reflection point appears bright and is detected as a spot image in the corneal reflection light center detection unit 37. The reflected light from the corneal reflection point forms a Purkinje image, and the corneal reflection light center detection unit 37 performs image processing on the spot image to obtain the center of the reflected light from the corneal reflection point (step S141).
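One common way to locate the center of such a bright spot, shown here as an illustrative sketch rather than the patent's specific processing, is to threshold the difference image and take the intensity-weighted centroid of the bright pixels. The threshold value is a placeholder.

```python
def corneal_reflection_center(img, threshold=200):
    """Return the intensity-weighted centroid (x, y) of pixels at or
    above threshold, or None if no bright spot is found."""
    sx = sy = sw = 0.0
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            if v >= threshold:
                sx += x * v
                sy += y * v
                sw += v
    if sw == 0:
        return None  # no spot found
    return (sx / sw, sy / sw)

img = [[0,   0,   0,   0],
       [0, 250, 250,   0],
       [0, 250, 250,   0],
       [0,   0,   0,   0]]
print(corneal_reflection_center(img))  # → (1.5, 1.5)
```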
The second difference image calculated by the difference image calculation unit 36 is output to the pupil center calculation unit 38. Since the second difference image is based on the second image captured under the second exposure condition, the pupil center calculation unit 38 can accurately determine the shape of the pupil from the difference in brightness between the pupil image and its surroundings. The pupil image signal is subjected to image processing and binarized, and an area image corresponding to the shape and area of the pupil is calculated. Furthermore, an ellipse containing this area image is extracted, and the intersection of the major axis and the minor axis of the ellipse is calculated as the center position of the pupil. Alternatively, the center of the pupil is obtained from the luminance distribution of the pupil image (step S142).
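Step S142 can be illustrated with a deliberately simplified sketch: binarize the dark region and take the center of its bounding box, which coincides with the intersection of the major and minor axes for an axis-aligned elliptical pupil region. This is an approximation for illustration, not the patent's exact ellipse-fitting procedure, and the threshold is a placeholder.

```python
def pupil_center(img, threshold=50):
    """Binarize: pupil pixels are those darker than threshold. Return the
    center of the bounding box of the dark region (the ellipse center for
    an axis-aligned elliptical pupil region), or None if none found."""
    xs = [x for row in img for x, v in enumerate(row) if v < threshold]
    ys = [y for y, row in enumerate(img) for v in row if v < threshold]
    if not xs:
        return None
    return ((min(xs) + max(xs)) / 2, (min(ys) + max(ys)) / 2)

img = [[90, 90, 90, 90, 90],
       [90, 10, 10, 10, 90],
       [90, 10, 10, 10, 90],
       [90, 90, 90, 90, 90]]
print(pupil_center(img))  # → (2.0, 1.5)
```

A production implementation would instead fit an ellipse to the binarized contour (for example with an OpenCV-style ellipse fit) and use its center, as the text describes.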
The corneal reflection light center calculated value obtained by the corneal reflection light center detection unit 37 and the pupil center calculated value obtained by the pupil center calculation unit 38 are given to the gaze direction calculation unit 39. The gaze direction calculation unit 39 detects the direction of the line of sight from the pupil center calculated value and the corneal reflection light center calculated value (step S143).
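In the widely used pupil-center/corneal-reflection approach, the gaze angle is, to first order, proportional to the vector from the corneal reflection center to the pupil center. The sketch below illustrates that idea only; the per-axis calibration gains are hypothetical placeholders, and the patent does not specify this particular formula.

```python
def gaze_direction(pupil_center, glint_center, gain_x=1.0, gain_y=1.0):
    """First-order gaze estimate: the offset of the pupil center from the
    corneal reflection (glint) center, scaled by per-axis calibration gains."""
    px, py = pupil_center
    gx, gy = glint_center
    return ((px - gx) * gain_x, (py - gy) * gain_y)

# Pupil center shifted to the right of the glint => gaze to the right.
print(gaze_direction((12.0, 9.0), (10.0, 9.0)))  # → (2.0, 0.0)
```

In practice the gains (and any offsets) would be determined by a per-subject calibration procedure.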
As described above, in the second embodiment, the first difference image is acquired by taking the difference between the cornea reflection image and the first unilluminated image, and the second difference image is acquired by taking the difference between the dark pupil image and the second unilluminated image, so external light noise can be removed more effectively from the corneal reflection image and the dark pupil image. In addition, since the first non-lighting image is acquired under the first exposure condition and the second non-lighting image is acquired under the second exposure condition, the exposure conditions at the time of capturing the first image and the first non-lighting image can be made the same, and the exposure conditions at the time of capturing the second image and the second non-lighting image can be made the same. The effect of removing external light noise can thereby be further enhanced, which contributes to improvement in the accuracy of line-of-sight detection.
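The benefit of matching the exposure conditions can be illustrated numerically: ambient light recorded under the same exposure cancels exactly in the difference, while a non-lighting frame captured under a different exposure leaves a residual. The toy model below assumes recorded pixel values scale linearly with an exposure gain; all numbers are placeholders.

```python
AMBIENT = 40  # ambient (external) light reaching the sensor
SOURCE  = 25  # extra light contributed by the device's own light source

def record(light_on, exposure_gain):
    """Toy sensor model: recorded value scales linearly with exposure."""
    return (AMBIENT + (SOURCE if light_on else 0)) * exposure_gain

second_cond = 1.0  # exposure gain of the second (dark) condition
first_cond  = 2.0  # exposure gain of the first (bright) condition

lit = record(True, second_cond)

# Matched exposure: the ambient term cancels exactly; only the source remains.
print(lit - record(False, second_cond))  # → 25.0

# Mismatched exposure: a residual ambient term corrupts the difference.
print(lit - record(False, first_cond))   # → -15.0
```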
Other operations, effects, and modifications are the same as those in the first embodiment.
As described above, the line-of-sight detection device is capable of performing line-of-sight detection with high accuracy even in an environment, such as an automobile, where there is a large amount of external light noise or where the external light noise fluctuates greatly, and is therefore useful.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Ophthalmology & Optometry (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Biomedical Technology (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Geometry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biophysics (AREA)
  • Human Computer Interaction (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

[Problem] To provide a line-of-sight detection device which can sufficiently remove external light noise and can thereby perform line-of-sight detection with high accuracy. [Solution] This line-of-sight detection device is provided with: a light source which irradiates light onto a region that includes at least an eye; a camera which, in a state in which the light source is turned on, acquires a first image of the aforementioned region under a first exposure condition and acquires a second image thereof under a second exposure condition, which is a darker exposure condition than the first exposure condition; an exposure control unit which controls the first exposure condition and the second exposure condition; and an image extraction unit which extracts a cornea reflection image from the first image and extracts a dark pupil image from the second image.

Description

Gaze detection device
The present invention relates to a line-of-sight detection device capable of detecting the line-of-sight direction of a driver of an automobile or another subject.
In the pupil detection method described in Patent Document 1, first, a bright pupil image, a dark pupil image, and a non-illuminated image are acquired; then the non-illuminated image is subtracted from the bright pupil image to obtain a differential bright pupil image, and the non-illuminated image is subtracted from the dark pupil image to obtain a differential dark pupil image. Here, the non-illuminated image is an image captured in a state where the emission of the inspection light from the light source included in the pupil detection device is stopped. In this pupil detection method, feature points such as the outer and inner corners of the eyes, the tip of the nose, and the left and right edges of the mouth are captured from the differential bright pupil image and the differential dark pupil image by the wire frame method; further, a three-dimensional structure of the face is acquired using a stereo camera, the structure is tracked so that it always fits the face, and the direction and position of the face are detected from the position and direction of the structure. In addition, the pupil portion is extracted from a re-difference image obtained by subtracting the differential dark pupil image from the differential bright pupil image, and the corneal reflection portion is extracted from an integrated image of the differential bright pupil image and the differential dark pupil image. As described above, by applying the wire frame method to the images obtained by subtracting the non-illuminated image from the bright pupil image and from the dark pupil image, respectively, and thereby detecting the pupil portion and the corneal reflection portion, high-precision gaze direction detection, which was difficult with the conventional wire frame method, is realized.
In the pupil detection method described in Patent Document 1, the bright pupil image is captured under a bright exposure condition with a large amount of light incident on the camera so that the corneal reflection image appears clearly, while the dark pupil image is captured under a relatively dark exposure condition with a smaller amount of incident light than the bright exposure condition so that the pupil appears clearly. The amount of light incident on the image sensor of the camera is adjusted, for example, by controlling the aperture and shutter speed of the camera.
Patent Document 1: JP 2008-246004 A
However, in the pupil detection method described in Patent Document 1, a common image is used as the non-illuminated image for the difference with the bright pupil image and as the non-illuminated image for the difference with the dark pupil image. Therefore, even if this common non-illuminated image is subtracted from the bright pupil image and from the dark pupil image, which were captured under different exposure conditions, the external light noise cannot be sufficiently removed. When the corneal reflection image or the pupil is extracted based on a difference image in which external light noise remains in this way, the extraction accuracy tends to be lowered by the influence of the external light noise, and it has therefore been difficult to perform gaze detection with high accuracy from the corneal reflection image and the pupil extracted in this way.
On the other hand, a method of separately preparing a non-illuminated image to be subtracted from the bright pupil image and a non-illuminated image to be subtracted from the dark pupil image is also conceivable. In this case, the non-illuminated image for the difference with the bright pupil image is captured under a relatively bright exposure condition with the AE (Auto Exposure) target value adjusted so that the pupil appears clearly, and the non-illuminated image for the difference with the dark pupil image is captured under a relatively dark exposure condition with the AE target value adjusted so that the corneal reflection image appears clearly. However, in this method, since it is necessary to capture two types of non-illuminated images, namely the non-illuminated image for the difference with the bright pupil image and the non-illuminated image for the difference with the dark pupil image, there is a problem that the processing speed of gaze detection becomes slow.
Therefore, an object of the present invention is to provide a gaze detection device that can sufficiently remove external light noise and can thereby perform gaze detection with high accuracy. Another object of the present invention is to provide a gaze detection device capable of performing high-precision gaze detection while suppressing a decrease in processing speed.
In order to solve the above problem, the gaze detection device of the present invention includes: a light source that irradiates light onto a region including at least an eye; a camera that, with the light source turned on, acquires a first image of the region under a first exposure condition and acquires a second image of the region under a second exposure condition, which is a darker exposure condition than the first exposure condition; an exposure control unit that controls the first exposure condition and the second exposure condition; and an image extraction unit that extracts a cornea reflection image from the first image and extracts a dark pupil image from the second image.
Since the capture of the first image for extracting the cornea reflection image does not require a bright exposure condition such as that used when acquiring a bright pupil image, the exposure conditions of the first image and the second image can be made relatively close to each other, without the large difference that exists when acquiring a conventional bright pupil image and dark pupil image. Therefore, when removing external light noise using a non-illuminated image, a certain noise removal performance can be obtained for both the first image and the second image by using a non-illuminated image based on a non-lighting image captured under a condition between the exposure condition for capturing the first image and the exposure condition for capturing the second image. Here, the exposure condition means the AE target. In addition, since gaze detection is performed based on the cornea reflection image and the dark pupil image, high detection accuracy can be obtained.
Furthermore, since the cornea reflection image and the dark pupil image are extracted, instead of the conventional combination of a bright pupil image and a dark pupil image, the configuration of the light source is prevented from becoming complicated and the cost of the light source is prevented from increasing. That is, to acquire a bright pupil image and a dark pupil image, either a plurality of light sources that emit light of different wavelengths suited to the acquisition of each image are required, or a light source coaxial with the camera and a non-coaxial light source must each be prepared and inspection light irradiated from each. Preparing a plurality of light sources with different emission wavelengths is costly and complicates the control of the light sources, and the control of the light sources is also complicated in a configuration in which a coaxial light source and a non-coaxial light source are each arranged. In contrast, the gaze detection device of the present invention can be configured with only a light source of a single wavelength, so the cost can be suppressed and the complicated control described above is not required.
In the gaze detection device according to the first aspect of the present invention, it is preferable that the camera acquires a non-lighting image in a state where the light source is not lit, that the image extraction unit extracts a non-illuminated image from the non-lighting image, and that the gaze detection device includes a difference image calculation unit that obtains a first difference image by taking the difference between the cornea reflection image and the non-illuminated image, and obtains a second difference image by taking the difference between the dark pupil image and the non-illuminated image. This makes it possible to perform gaze detection with higher accuracy based on difference images from which the influence of external light has been removed.
In the gaze detection device according to the first aspect of the present invention, the wavelength of the light emitted from the light source is preferably a wavelength that has a high light absorptance in the eyeball and is not easily reflected by the retina of the eyeball. This makes it possible to obtain both the cornea reflection image and the dark pupil image with a light source of a single wavelength.
In the gaze detection device according to the first aspect of the present invention, the non-illuminated image is preferably an image common to the difference calculation for the first image and the difference calculation for the second image. This reduces the number of times the non-lighting image must be captured, which prevents the imaging control from becoming complicated and increases the speed of gaze detection. In addition, since the exposure conditions for the cornea reflection image and the dark pupil image are close to each other compared with the exposure conditions when using a conventional bright pupil image and dark pupil image, external light noise can be sufficiently removed from each image even when a non-illuminated image captured under a common exposure condition is used.
In the gaze detection device according to the first aspect of the present invention, the non-lighting image is preferably acquired under an exposure condition closer to the second exposure condition than to the first exposure condition, or under the same exposure condition as the second exposure condition. As a result, external light noise can be removed more sufficiently from the second image, which is more susceptible to external light noise than the first image, contributing to more accurate gaze detection.
In the gaze detection device according to the second aspect of the present invention, it is preferable that the camera acquires a first non-lighting image and a second non-lighting image in a state where the light source is not lit, that the image extraction unit extracts a first non-illuminated image based on the first non-lighting image and a second non-illuminated image based on the second non-lighting image, and that the gaze detection device includes a difference image calculation unit that obtains a first difference image by taking the difference between the cornea reflection image and the first non-illuminated image, and obtains a second difference image by taking the difference between the dark pupil image and the second non-illuminated image. Accordingly, since the first non-lighting image and the second non-lighting image are each captured under conditions corresponding to the exposure conditions of the first image and the second image, external light noise can be removed more effectively from the cornea reflection image and the dark pupil image.
In the gaze detection device according to the second aspect of the present invention, it is preferable that the first non-lighting image is acquired under the first exposure condition and the second non-lighting image is acquired under the second exposure condition. This makes the exposure conditions for capturing the first image and the first non-lighting image the same, and the exposure conditions for capturing the second image and the second non-lighting image the same, so the effect of removing external light noise can be further enhanced, contributing to improvement in gaze detection accuracy.
According to the present invention, external light noise can be sufficiently removed, and gaze detection can thereby be performed with high accuracy.
FIG. 1 is a block diagram showing the configuration of the gaze detection device according to the first embodiment of the present invention.
FIG. 2 is a chart showing the lighting timing of the first light source and the second light source and the imaging timing of the first camera and the second camera in the first embodiment.
FIG. 3 is a flowchart showing the flow of gaze detection processing in the first embodiment.
FIG. 4 is a chart showing the lighting timing of the first light source and the second light source and the imaging timing of the first camera and the second camera in the second embodiment.
FIG. 5 is a flowchart showing the flow of gaze detection processing in the second embodiment.
 <第1実施形態>
 図1~図3を参照しつつ、第1実施形態に係る視線検出装置について説明する。図1は、第1実施形態に係る視線検出装置の構成を示すブロック図、図2は、第1光源11および第2光源12の点灯のタイミングと、第1カメラ21および第2カメラ22の撮像のタイミングとを示すチャート図である。図3は、第1実施形態における視線検出の処理の流れを示すフローチャートである。第1実施形態に係る視線検出装置においては、(1)眼球内での光吸収率が高く、かつ、眼球の網膜で反射されにくい波長の光を出射する光源を用い、(2)角膜反射像と暗瞳孔画像のそれぞれから差分画像を取得するための無照明画像を共通の画像としている。
<First Embodiment>
A line-of-sight detection device according to the first embodiment will be described with reference to FIGS. FIG. 1 is a block diagram illustrating a configuration of the visual line detection device according to the first embodiment, and FIG. 2 is a timing of lighting of the first light source 11 and the second light source 12 and imaging of the first camera 21 and the second camera 22. It is a chart figure which shows the timing. FIG. 3 is a flowchart showing a flow of gaze detection processing in the first embodiment. In the line-of-sight detection apparatus according to the first embodiment, (1) a light source that emits light having a high light absorptance in the eyeball and is difficult to be reflected by the retina of the eyeball is used. And a non-illuminated image for acquiring a difference image from each of the dark pupil image and the dark pupil image.
As shown in FIG. 1, the gaze detection device of the first embodiment includes two light sources 11 and 12, two cameras 21 and 22, and an arithmetic control unit 40, and is installed in the cabin of an automobile, for example on the instrument panel or the upper part of the windshield, so as to face the face of a driver as the subject.
The first light source 11 and the second light source 12 each consist of a plurality of LED (light emitting diode) light sources. The plurality of LED light sources of the first light source 11 are arranged so as to be separated from the optical axis of the first camera 21 by a certain distance, and the plurality of LED light sources of the second light source 12 are arranged so as to be separated from the optical axis of the second camera 22 by a certain distance. The number of LEDs constituting each of the two light sources 11 and 12 can be set arbitrarily. The light emitted from the two light sources 11 and 12 has a wavelength that has a high light absorptance in the eyeball of the human eye and is not easily reflected by the retina of the eyeball. Such a wavelength is preferably more than 900 nm and less than 1000 nm; for example, infrared light of 940 nm is emitted.
The timing of lighting (light emission) of the first light source 11 and the second light source 12 is controlled by the light source control unit 33. Information on this lighting timing is output to the exposure control unit 34, and the exposure control unit 34 causes the first camera 21 and the second camera 22 to perform imaging under the exposure conditions described later, in synchronization with the lighting of the first light source 11 and the second light source 12.
The first camera 21 and the second camera 22 are arranged apart from each other so that the distance between their optical axes substantially matches the distance between a person's eyes, and their optical axes are directed toward the respective eyes of the driver or other subject. The first camera 21 and the second camera 22 have an image sensor such as a CMOS (complementary metal oxide semiconductor) or CCD (charge coupled device) sensor, and acquire an image of the face including the driver's eyes. In the image sensor, light is detected by a plurality of two-dimensionally arranged pixels.
The distance between the optical axes of the first camera 21 and each LED light source of the first light source 11, and the distance between the optical axes of the second camera 22 and each LED light source of the second light source 12, are made sufficiently short relative to the distance between the optical axes of the first camera 21 and the second camera 22, taking into account the distance between the gaze detection device and the driver as the subject. For this reason, each LED light source of the first light source 11 can be regarded as being arranged substantially coaxially with the optical axis of the first camera 21, and similarly each LED light source of the second light source 12 can be regarded as being arranged substantially coaxially with the optical axis of the second camera 22. Thus, at least, the light emitted from the first light source 11 irradiates a region including one eye and the first camera 21 acquires an image of the face including this eye, while the light emitted from the second light source 12 irradiates a region including the other eye and the second camera 22 acquires an image of the face including that eye.
The arithmetic control unit 40 is configured by the CPU and memory of a computer, and the function of each block is performed by executing preinstalled software. The arithmetic control unit 40 includes image acquisition units 31 and 32, a light source control unit 33, an exposure control unit 34, an image extraction unit 35, a difference image calculation unit 36, a corneal reflection light center detection unit 37, a pupil center calculation unit 38, and a line-of-sight direction calculation unit 39.
The image acquisition units 31 and 32 acquire, frame by frame, the images captured by the first camera 21 and the second camera 22 under predetermined exposure conditions. Each of the first camera 21 and the second camera 22 captures a first image for corneal reflection image extraction and a second image for dark pupil image extraction; in addition, between the two captures by the first camera 21 and the two captures by the second camera 22, a non-lighting image for unilluminated image extraction is captured. The non-lighting image may instead be captured between the capture of the first image and the capture of the second image.
An example of the lighting timing of the two light sources 11 and 12 and the imaging timing of the two cameras 21 and 22 will be described with reference to FIG. 2. FIG. 2(A) shows the lighting timing of the first light source 11, (B) shows the lighting timing of the second light source 12, (C) shows the imaging timing of the first camera 21, and (D) shows the imaging timing of the second camera 22.
As shown in FIG. 2, first, the first lighting L11 of the first light source 11 for capturing the first image of one eye, for example the left eye, is performed, and in synchronization with it the first exposure E11 (imaging) of the first camera 21 is performed. Next, the second lighting L12 of the first light source 11 for capturing the second image is performed, and in synchronization with it the second exposure E12 of the first camera 21 is performed. Further, after the second lighting L12 and before the first lighting L21 of the second light source 12, the third exposure E13 of the first camera 21 is performed in synchronization with the non-lighting timing N1.
Next, the first exposure E21 of the second camera 22 is performed in synchronization with the first lighting L21 of the second light source 12 for capturing the first image of the other eye, and then the second exposure E22 of the second camera 22 is performed in synchronization with the second lighting L22 of the second light source 12 for capturing the second image. Further, after the second lighting L22 and before the third lighting L13 of the first light source 11, the third exposure E23 is performed in synchronization with the non-lighting timing N2 of the second light source 12.
Thereafter, in the same manner as the lightings L11 and L12, the non-lighting N1, and the corresponding exposures E11, E12, and E13, the process of exposing and imaging with the first camera 21 in synchronization with the lighting and non-lighting of the first light source 11, followed by exposing and imaging in synchronization with the lighting and non-lighting of the second light source 12, is repeated in order.
Here, the exposures (imagings) E11, E12, E13, E21, E22, E23, E14, ... are executed at constant time intervals and with the same time width.
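The alternating lighting/exposure sequence described above can be sketched as a simple event list. This is only an illustrative model of the FIG. 2 timing, not the device's actual control code; the function name `build_schedule` and the tuple representation are assumptions introduced here.

```python
# Illustrative sketch of the FIG. 2 frame sequence (not the patent's firmware).
# Each event is (light source lit during the exposure, exposure label);
# None means both light sources are off (the non-lighting capture).

def build_schedule(cycles=1):
    """Return the ordered (source, exposure) events for the first embodiment.

    One cycle: the first light source is lit for E11 and E12, followed by the
    non-lighting exposure E13; the same pattern then repeats for the second
    light source (E21, E22, E23). Exposure labels are reused across cycles
    for simplicity.
    """
    one_cycle = [
        ("L1", "E11"), ("L1", "E12"), (None, "E13"),  # first camera 21
        ("L2", "E21"), ("L2", "E22"), (None, "E23"),  # second camera 22
    ]
    return one_cycle * cycles

if __name__ == "__main__":
    for source, exposure in build_schedule():
        state = "both off" if source is None else f"{source} on"
        print(f"{exposure}: {state}")
```

Note how every third exposure in each half-cycle occurs with both sources off, matching the non-lighting timings N1 and N2 in the text.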
Here, the exposure conditions controlled by the exposure control unit 34 are the following (1) to (3). An exposure condition means an AE (auto exposure) target, and is changed by adjusting settings such as the image acquisition time (exposure time), the aperture stop, and the sensor gain in the cameras 21 and 22. Information on the exposure conditions is output from the exposure control unit 34 to the image extraction unit 35. Note that external light is not blocked under any of the exposure conditions.
(1) First exposure condition: an exposure condition for capturing the first image for corneal reflection image extraction. This is preferably a bright exposure condition in which the amount of light incident on the image sensors of the cameras 21 and 22 is larger than in the case of proper exposure; this allows the corneal reflection image to be captured brighter than its surroundings, so the corneal reflection image can be extracted with high accuracy. Here, proper exposure means an exposure condition in which, based on the sensitivity setting of the image sensor and the luminance balance and average luminance within the imaging target range, the colors and tones are balanced as in a typical image.
(2) Second exposure condition: an exposure condition for capturing the second image for dark pupil image extraction, which is darker than the first exposure condition. Under this condition, the amount of light incident on the image sensor of the camera is smaller than under the first exposure condition. This increases the difference in brightness between the pupil image and its surroundings, so the accuracy of pupil image extraction can be improved.
(3) Third exposure condition: an exposure condition for capturing the non-lighting image for unilluminated image extraction, under which the first light source 11 and the second light source 12 are not lit. This condition is closer to the second exposure condition than to the first exposure condition, but may be the same as the second exposure condition. In other words, the AE target when capturing the non-lighting image is closer to the AE target of the second exposure condition than to that of the first exposure condition, or is the same as the AE target of the second exposure condition.
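The three conditions above can be represented as AE-target settings. The following sketch fixes only the ordering the text specifies (condition 1 brighter than condition 2, condition 3 at or near condition 2 with the sources off); the field names and numeric values are illustrative assumptions, not values disclosed in the patent.

```python
# Sketch of the three exposure conditions as AE-target settings.
# Numeric values are placeholders; only the relative ordering matters here.
from dataclasses import dataclass

@dataclass(frozen=True)
class ExposureCondition:
    exposure_time_ms: float   # image acquisition time
    gain: float               # sensor gain
    lights_on: bool           # whether light sources 11/12 are lit

COND1 = ExposureCondition(8.0, 2.0, True)    # bright: corneal reflection image
COND2 = ExposureCondition(4.0, 1.0, True)    # darker: dark pupil image
COND3 = ExposureCondition(4.0, 1.0, False)   # non-lighting; AE target equals COND2 here

def ae_target(cond: ExposureCondition) -> float:
    """A crude scalar brightness proxy for comparing conditions."""
    return cond.exposure_time_ms * cond.gain
```

With these placeholder values, `ae_target(COND3)` coincides with `ae_target(COND2)`, the case the text explicitly allows.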
The images acquired by the image acquisition units 31 and 32 are read into the image extraction unit 35 frame by frame. The image extraction unit 35 extracts a region including the subject's eye: a corneal reflection image is extracted from the first image captured under the first exposure condition, and a dark pupil image is extracted from the second image captured under the second exposure condition. In addition, an unilluminated image of the region including the subject's eye is extracted from the non-lighting image captured under the third exposure condition, that is, with the first light source 11 and the second light source 12 not lit.
The corneal reflection image, the dark pupil image, and the unilluminated image extracted by the image extraction unit 35 are output to the difference image calculation unit 36. The difference image calculation unit 36 calculates a first difference image by a difference operation between the corneal reflection image and the unilluminated image, and a second difference image by a difference operation between the dark pupil image and the unilluminated image. Here, the unilluminated image used in the two difference operations is a common image, based on a non-lighting image captured immediately before, immediately after, or between the captures of the first image and the second image from which the corneal reflection image and the dark pupil image were obtained. For example, in the case shown in FIG. 2, a non-lighting image is captured at the timing N1 immediately after the lighting L12 for the second image, and the unilluminated image extracted from it is used in the difference operations against the corneal reflection image and the dark pupil image extracted from the captures E11 and E12, respectively.
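The two difference operations amount to pixel-wise subtraction of the common unilluminated image, which cancels the external-light contribution present in both frames. The toy frames below, with their tiny sizes and pixel values, are illustrative assumptions; a real implementation would operate on full camera frames.

```python
# Sketch of the two difference operations, using nested lists as grayscale
# images. Negative results are clamped to 0, as with unsigned pixel data.

def subtract(image, unilluminated):
    """Pixel-wise (image - unilluminated), clamped at zero."""
    return [
        [max(p - q, 0) for p, q in zip(row_a, row_b)]
        for row_a, row_b in zip(image, unilluminated)
    ]

# Toy 2x3 frames: external light contributes a value of 10 everywhere.
corneal_reflection = [[10, 250, 10], [10, 10, 10]]   # bright spot at (0, 1)
dark_pupil         = [[40, 40, 40], [40,  5, 40]]    # dark pupil at (1, 1)
unilluminated      = [[10, 10, 10], [10, 10, 10]]    # external light only

first_difference  = subtract(corneal_reflection, unilluminated)
second_difference = subtract(dark_pupil, unilluminated)
```

After subtraction, the background of the first difference image goes to zero while the reflection spot remains bright, and the pupil in the second difference image stays the darkest region.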
The first difference image calculated by the difference image calculation unit 36 is output to the corneal reflection light center detection unit 37. Since the first difference image is based on the first image captured under the first exposure condition, the corneal reflection light center detection unit 37 detects the light reflected from the reflection point on the cornea as a bright spot image. The reflected light from this corneal reflection point forms a Purkinje image, and the corneal reflection light center detection unit 37 processes the spot image to obtain the center of the reflected light from the corneal reflection point.
The second difference image calculated by the difference image calculation unit 36 is output to the pupil center calculation unit 38. Since the second difference image is based on the second image captured under the second exposure condition, the pupil center calculation unit 38 accurately detects the shape of the pupil from the difference in brightness between the pupil image and its surroundings; the pupil image signal is then image-processed and binarized, and an area image corresponding to the shape and area of the pupil is calculated. Further, an ellipse including this area image is extracted, and the intersection of the major axis and the minor axis of the ellipse is calculated as the center position of the pupil. Alternatively, the center of the pupil may be obtained from the luminance distribution of the pupil image.
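The binarize-then-locate-center step can be sketched as follows. For simplicity this sketch treats the pupil region as axis-aligned, so the intersection of the bounding ellipse's axes reduces to the midpoint of the dark region's bounding box; the threshold value and this simplification are assumptions made for illustration, not the patent's actual algorithm.

```python
# Sketch of the pupil-center step: binarize the second difference image and
# take the point where the bounding ellipse's major and minor axes cross.

def pupil_center(image, threshold):
    """Return the (x, y) center of pixels darker than `threshold`, or None."""
    xs, ys = [], []
    for y, row in enumerate(image):
        for x, value in enumerate(row):
            if value < threshold:   # binarization: pupil pixels are dark
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    # Midpoints of the horizontal and vertical extents, i.e. where the axes
    # of an axis-aligned bounding ellipse intersect.
    return ((min(xs) + max(xs)) / 2.0, (min(ys) + max(ys)) / 2.0)

# Toy frame: a 3x3 dark pupil centered at (2, 2) on a bright background.
frame = [
    [90, 90, 90, 90, 90],
    [90, 10, 10, 10, 90],
    [90, 10, 10, 10, 90],
    [90, 10, 10, 10, 90],
    [90, 90, 90, 90, 90],
]
```

A production implementation would typically fit a rotated ellipse to the binarized contour instead, which handles the foreshortened (elliptical) pupil seen at oblique gaze angles.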
The corneal reflection light center value calculated by the corneal reflection light center detection unit 37 and the pupil center value calculated by the pupil center calculation unit 38 are given to the line-of-sight direction calculation unit 39. The line-of-sight direction calculation unit 39 detects the direction of the line of sight from the pupil center value and the corneal reflection light center value. Specifically, it calculates the linear distance α between the center of the pupil and the center of the reflection point on the cornea. It also sets X-Y coordinates with the center of the pupil as the origin and calculates the inclination angle β between the X axis and the line connecting the center of the pupil and the center of the reflection point.
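The distance α and angle β defined above follow directly from the two center coordinates. The sketch below shows this final geometric step; the function name and the choice of degrees for β are assumptions for illustration.

```python
# Sketch of the gaze-geometry step: alpha is the distance between the pupil
# center and the corneal reflection center, and beta is the angle of the
# connecting line against the X axis, with the pupil center as origin.
import math

def gaze_parameters(pupil_center, reflection_center):
    """Return (alpha, beta_degrees) as defined in the text."""
    dx = reflection_center[0] - pupil_center[0]
    dy = reflection_center[1] - pupil_center[1]
    alpha = math.hypot(dx, dy)            # linear distance
    beta = math.degrees(math.atan2(dy, dx))  # inclination against the X axis
    return alpha, beta
```

For example, a reflection center at (3, 4) relative to the pupil center gives α = 5 and β ≈ 53.1°.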
Here, the flow of image processing from imaging to line-of-sight direction detection will be described with reference to FIG. 3.
First, each of the first camera 21 and the second camera 22 captures a first image (step S11), a second image (step S12), and a non-lighting image (step S13). As described above, the first image is captured under the first exposure condition, the second image under the second exposure condition, and the non-lighting image under the third exposure condition.
Next, in the image extraction unit 35, a corneal reflection image is extracted from the first image (step S21), a dark pupil image is extracted from the second image (step S22), and an unilluminated image is extracted from the non-lighting image (step S23). Subsequently, in the difference image calculation unit 36, a first difference image is calculated as the difference image between the corneal reflection image and the unilluminated image (step S31), and a second difference image is calculated as the difference image between the dark pupil image and the unilluminated image (step S32).
For the first difference image, the corneal reflection light center detection unit 37 detects the light reflected from the corneal reflection point as a spot image, processes the spot image, and obtains the center of the reflected light from the corneal reflection point (step S41).
For the second difference image, the pupil center calculation unit 38 detects the shape of the pupil; the pupil image signal is image-processed and binarized, and an area image corresponding to the shape and area of the pupil is calculated. Further, the pupil center calculation unit 38 extracts an ellipse including this area image and calculates the intersection of the major axis and the minor axis of the ellipse as the position of the pupil center (step S42).
The corneal reflection light center value calculated by the corneal reflection light center detection unit 37 and the pupil center value calculated by the pupil center calculation unit 38 are output to the line-of-sight direction calculation unit 39, which detects the direction of the line of sight from the corneal reflection light center value and the pupil center value (step S43).
With the configuration described above, the line-of-sight detection device of the first embodiment provides the following effects.
(1) The line-of-sight detection device of the first embodiment acquires the first image under the first exposure condition and the second image under the second exposure condition, which is darker than the first exposure condition, and further extracts the corneal reflection image from the first image and the dark pupil image from the second image.
Because line-of-sight detection is thus performed based on the corneal reflection image and the dark pupil image, high detection accuracy can be obtained.
In addition, capturing the first image for corneal reflection image extraction does not require a bright exposure condition such as is needed when acquiring a bright pupil image in a conventional device. Consequently, there is no large difference in exposure conditions as in the conventional case of acquiring a bright pupil image and a dark pupil image, and the first image and the second image can be acquired under relatively close exposure conditions. Therefore, when removing external light noise using an unilluminated image, if an unilluminated image based on a non-lighting image captured under a condition between the exposure condition for the first image and that for the second image is used, consistent noise removal performance can be obtained for both the first image and the second image.
Furthermore, since the line-of-sight detection device of the first embodiment extracts a corneal reflection image and a dark pupil image rather than the conventional combination of a bright pupil image and a dark pupil image, it can avoid a complicated light source configuration and increased light source cost. That is, to acquire a bright pupil image and a dark pupil image, either multiple light sources emitting light of different wavelengths suited to each image are required, or a light source coaxial with the camera and a non-coaxial light source must each be provided and used to irradiate detection light. Providing multiple light sources with different emission wavelengths is costly and makes light source control complicated, and light source control is also complicated when a coaxial light source and a non-coaxial light source are each arranged. In contrast, the line-of-sight detection device of the first embodiment can be configured with light sources of only a single wavelength and does not require such complicated control.
(2) The line-of-sight detection device of the first embodiment extracts an unilluminated image from the non-lighting image, obtains the first difference image by subtracting the unilluminated image from the corneal reflection image, and obtains the second difference image by subtracting the unilluminated image from the dark pupil image; therefore, more accurate line-of-sight detection can be performed based on difference images from which the influence of external light has been removed.
(3) In the line-of-sight detection device of the first embodiment, light sources that emit light of a wavelength that is strongly absorbed within the eyeball and is not easily reflected by the retina are used as the first light source 11 and the second light source 12; therefore, both the corneal reflection image and the dark pupil image can be obtained with a single type of light source.
(4) The line-of-sight detection device of the first embodiment uses a common unilluminated image for the difference calculation of the first image and the difference calculation of the second image. This reduces the number of non-lighting image captures, which prevents the imaging control from becoming complicated and speeds up line-of-sight detection. Moreover, since the exposure conditions for the corneal reflection image and the dark pupil image are closer to each other than the exposure conditions in the conventional case of using a bright pupil image and a dark pupil image, external light noise can be sufficiently removed from each image even when an unilluminated image of a common exposure condition is used.
(5) In the line-of-sight detection device of the first embodiment, the non-lighting image is acquired under an exposure condition closer to the second exposure condition than to the first exposure condition, or under the same exposure condition as the second exposure condition. As a result, external light noise can be removed more thoroughly from the second image, which is captured under a darker exposure condition than the first image and is therefore more susceptible to external light noise, contributing to even more accurate line-of-sight detection.
<Second Embodiment>
Next, a second embodiment of the present invention will be described. The second embodiment differs from the first embodiment in that the unilluminated images used for the difference calculation of the first image and the difference calculation of the second image are separate images. In the following description, detailed description of configurations that are the same as in the first embodiment is omitted.
FIG. 4 is a chart showing the lighting timing of the first light source 11 and the second light source 12 and the imaging timing of the first camera 21 and the second camera 22. FIG. 5 is a flowchart showing the flow of line-of-sight detection processing in the second embodiment. FIG. 4(A) shows the lighting timing of the first light source 11, (B) shows the lighting timing of the second light source 12, (C) shows the imaging timing of the first camera 21, and (D) shows the imaging timing of the second camera 22.
Like the first embodiment, the line-of-sight detection device of the second embodiment includes two light sources 11 and 12, two cameras 21 and 22, and an arithmetic control unit 40. The arithmetic control unit 40 likewise includes image acquisition units 31 and 32, a light source control unit 33, an exposure control unit 34, an image extraction unit 35, a difference image calculation unit 36, a corneal reflection light center detection unit 37, a pupil center calculation unit 38, and a line-of-sight direction calculation unit 39.
The configuration and arrangement of the first light source 11 and the second light source 12 and the wavelength of their emitted light are the same as in the first embodiment, as is the positional relationship between the two light sources 11 and 12 and the two cameras 21 and 22. On the other hand, the exposure timing and exposure conditions of the cameras 21 and 22 differ from those of the first embodiment, as described below.
An example of the lighting timing of the two light sources 11 and 12 and the imaging timing of the two cameras 21 and 22 will be described with reference to FIG. 4.
As shown in FIG. 4, first, the first lighting L111 of the first light source 11 for capturing the first image of one eye, for example the left eye, is performed, and in synchronization with it the first exposure E111 (imaging) of the first camera 21 is performed. Next, the second lighting L112 for capturing the second image is performed; before that, however, the second exposure E112 of the first camera 21 is performed in synchronization with the non-lighting timing N11 after the first lighting L111 and before the second lighting L112.
Subsequently, the third exposure E113 is performed in synchronization with the second lighting L112 of the first light source 11. Further, the fourth exposure E114 of the first camera 21 is performed in synchronization with the non-lighting timing N12 after the second lighting L112 and before the first lighting L121 of the second light source 12.
Next, the first exposure E121 of the second camera 22 is performed in synchronization with the first lighting L121 of the second light source 12 for capturing the first image of the other eye. Thereafter, the second lighting L122 for capturing the second image is performed; before that, however, the second exposure E122 of the second camera 22 is performed in synchronization with the non-lighting timing N13 after the first lighting L121 and before the second lighting L122.
Subsequently, the third exposure E123 is performed in synchronization with the second lighting L122 of the second light source 12. Further, the fourth exposure E124 of the second camera 22 is performed in synchronization with the non-lighting timing N14 after the second lighting L122 and before the third lighting L113 of the first light source 11.
Thereafter, in the same manner as the lighting L111, non-lighting N11, lighting L112, and non-lighting N12 and the corresponding exposures E111, E112, E113, and E114, the process of exposing and imaging with the first camera 21 in synchronization with the alternating lighting and non-lighting of the first light source 11, followed by exposing and imaging in synchronization with the alternating lighting and non-lighting of the second light source 12, is repeated in order.
Here, the exposures (imagings) E111, E112, E113, E114, E121, E122, E123, E124, E115, ... are executed at constant time intervals and with the same time width.
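Compared with the first embodiment, each camera now takes an unlit exposure immediately after each lit exposure, yielding two separate non-lighting frames per camera. The sketch below models the FIG. 4 sequence in the same illustrative style as before; the function name and tuple representation are assumptions.

```python
# Illustrative sketch of the FIG. 4 frame sequence of the second embodiment.
# Each event is (light source lit during the exposure, exposure label);
# None means both light sources are off.

def build_schedule_2nd():
    """Ordered (source, exposure) events: lit and unlit exposures alternate."""
    return [
        ("L1", "E111"), (None, "E112"),  # first image, then first non-lighting image
        ("L1", "E113"), (None, "E114"),  # second image, then second non-lighting image
        ("L2", "E121"), (None, "E122"),  # same pattern for the second camera
        ("L2", "E123"), (None, "E124"),
    ]
```

Half of the exposures are now non-lighting captures, which is the cost of pairing each of the first and second images with its own unilluminated image.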
The exposure conditions controlled by the exposure control unit 34 are the following (1) to (4). The exposure conditions are changed by adjusting settings such as the image acquisition time (exposure time), the aperture stop, and the sensor gain in the cameras 21 and 22. Information on the exposure conditions is output from the exposure control unit 34 to the image extraction unit 35. Note that external light is not blocked under any of the exposure conditions.
(1) First exposure condition: an exposure condition for capturing the first image for corneal reflection image extraction. This is preferably a bright exposure condition in which the amount of light incident on the image sensors of the cameras 21 and 22 is larger than in the case of proper exposure; this allows the corneal reflection image to be captured brighter than its surroundings, so the corneal reflection image can be extracted with high accuracy.
(2) Second exposure condition: an exposure condition for capturing the second image for dark pupil image extraction, which is darker than the first exposure condition. Under this condition, the amount of light incident on the image sensor of the camera is smaller than under the first exposure condition. This increases the difference in brightness between the pupil image and its surroundings, so the accuracy of pupil image extraction can be improved.
(3) Third exposure condition: an exposure condition for capturing the first non-lighting image for extracting the first unilluminated image, under which the first light source 11 and the second light source 12 are not lit. This exposure condition is the same as the first exposure condition.
(4) Fourth exposure condition: an exposure condition for capturing the second non-lighting image for extracting the second unilluminated image, under which the first light source 11 and the second light source 12 are not lit. This exposure condition is the same as the second exposure condition.
Note that the third exposure condition need not be identical to the first exposure condition, and the fourth exposure condition need not be identical to the second exposure condition. However, the third exposure condition is preferably closer to the first exposure condition than to the second exposure condition, and the fourth exposure condition is preferably closer to the second exposure condition than to the first exposure condition.
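The four exposure conditions above differ only in camera settings and lighting state; they can be modeled as a small value type. A minimal sketch follows — the field names and the numeric settings are illustrative assumptions, not values taken from the specification:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ExposureCondition:
    """Camera settings adjusted by the exposure control unit; the
    field names and example values below are illustrative only."""
    exposure_time_ms: float  # image acquisition time
    gain: float              # sensor gain
    lights_on: bool          # whether light sources 11 and 12 are lit

FIRST = ExposureCondition(8.0, 2.0, True)      # (1) bright, for the corneal reflection image
SECOND = ExposureCondition(2.0, 1.0, True)     # (2) darker, for the dark pupil image
THIRD = ExposureCondition(8.0, 2.0, False)     # (3) same as (1), light sources not lit
FOURTH = ExposureCondition(2.0, 1.0, False)    # (4) same as (2), light sources not lit

# Conditions (3) and (4) differ from (1) and (2) only in lighting state.
print(THIRD.exposure_time_ms == FIRST.exposure_time_ms)  # True
```

This mirrors the point of the paragraph above: the non-lighting captures reuse the lit captures' camera settings, so only the lighting flag changes.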
With reference to FIG. 5, the flow of image processing from imaging to the detection of the line-of-sight direction will be described.
In accordance with the exposure conditions described above, the cameras 21 and 22 capture the first image (S111 in FIG. 5), the second image (step S112), the first non-lighting image (step S113), and the second non-lighting image (step S114). These images are acquired by the image acquisition units 31 and 32, respectively, and read into the image extraction unit 35 frame by frame.
In the image extraction unit 35, as in the first embodiment, the corneal reflection image (step S121) is extracted from the first image, and the dark pupil image (step S122) is extracted from the second image. In addition, the first unilluminated image (step S123) of the region including the eyes of the subject is extracted from the first non-lighting image captured under the third exposure condition, and the second unilluminated image (step S124) of the region including the eyes of the subject is extracted from the second non-lighting image captured under the fourth exposure condition.
The corneal reflection image, the dark pupil image, the first unilluminated image, and the second unilluminated image extracted by the image extraction unit 35 are output to the difference image calculation unit 36. In the difference image calculation unit 36, a difference operation between the corneal reflection image and the first unilluminated image is executed to calculate the first difference image (step S131), and a difference operation between the dark pupil image and the second unilluminated image is executed to calculate the second difference image (step S132). Here, the first unilluminated image is based on the first non-lighting image captured immediately before or after the capture of the first image from which the corneal reflection image to be differenced was obtained, and the second unilluminated image is based on the second non-lighting image captured immediately before or after the capture of the second image from which the dark pupil image to be differenced was obtained. For example, in the case shown in FIG. 4, the first non-lighting image is captured at timing N11 immediately after lighting L111 for the first image, and the first unilluminated image extracted from it is used in the difference operation with the corneal reflection image extracted based on imaging E111; likewise, the second non-lighting image is captured at timing N12 immediately after lighting L112 for the second image, and the second unilluminated image extracted from it is used in the difference operation with the dark pupil image extracted based on imaging E113.
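The difference operations of steps S131 and S132 amount to a pixel-wise subtraction of the unilluminated image from the lit image, which cancels the contribution of external light. A minimal sketch, with illustrative toy pixel values rather than real data:

```python
def difference_image(lit, unlit):
    """Pixel-wise difference of a lit image and its unilluminated
    counterpart, clamped at zero so that external-light pixels cancel.
    Both are assumed to be same-size 2-D lists of 8-bit intensities
    captured under the same exposure condition."""
    return [[max(p - q, 0) for p, q in zip(row_l, row_u)]
            for row_l, row_u in zip(lit, unlit)]

# Toy 1x4 "images": external light contributes 50 everywhere; the
# light source adds 200 at the corneal-reflection pixel only.
first_image = [[50, 250, 50, 50]]          # lit (first exposure condition)
first_unilluminated = [[50, 50, 50, 50]]   # not lit (third exposure condition)
print(difference_image(first_image, first_unilluminated))  # [[0, 200, 0, 0]]
```

Because the two frames are captured close together in time and under the same exposure settings, the external-light term is nearly identical in both, which is what makes the subtraction effective.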
The first difference image calculated by the difference image calculation unit 36 is output to the corneal reflection light center detection unit 37. Since the first difference image is based on the first image captured under the first exposure condition, the light reflected from the reflection point on the cornea appears in the corneal reflection light center detection unit 37 as a bright spot image. This reflected light from the corneal reflection point forms a Purkinje image; the corneal reflection light center detection unit 37 processes the spot image to obtain the center of the reflected light from the corneal reflection point (step S141).
The second difference image calculated by the difference image calculation unit 36 is output to the pupil center calculation unit 38. Since the second difference image is based on the second image captured under the second exposure condition, the pupil center calculation unit 38 can detect the shape of the pupil accurately from the difference in brightness between the pupil image and its surroundings. The pupil image signal is then processed and binarized, and an area image corresponding to the shape and area of the pupil is calculated. Furthermore, an ellipse containing this area image is extracted, and the intersection of the major axis and the minor axis of the ellipse is calculated as the center position of the pupil. Alternatively, the center of the pupil is obtained from the luminance distribution of the pupil image (step S142).
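Step S142 offers two routes to the pupil center: an ellipse fit to the binarized area image, or a computation from the luminance distribution. The sketch below illustrates the simpler luminance-based alternative as a centroid of below-threshold pixels; the threshold and toy image are illustrative assumptions, and a production implementation would typically use the ellipse fit:

```python
def pupil_center(image, threshold):
    """Binarize a 2-D intensity image and return the centroid of the
    below-threshold (dark pupil) pixels as (row, col), or None if no
    pixel qualifies. A stand-in for the ellipse fit of step S142."""
    rows = cols = count = 0
    for r, row in enumerate(image):
        for c, v in enumerate(row):
            if v < threshold:  # dark pixel -> assumed to belong to the pupil
                rows += r
                cols += c
                count += 1
    if count == 0:
        return None
    return (rows / count, cols / count)

# 5x5 toy difference image: a dark 3x3 pupil centered at (2, 2).
img = [[200] * 5 for _ in range(5)]
for r in range(1, 4):
    for c in range(1, 4):
        img[r][c] = 30
print(pupil_center(img, 100))  # (2.0, 2.0)
```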
The corneal reflection light center value calculated by the corneal reflection light center detection unit 37 and the pupil center value calculated by the pupil center calculation unit 38 are supplied to the line-of-sight direction calculation unit 39. The line-of-sight direction calculation unit 39 detects the direction of the line of sight from the calculated pupil center and the calculated corneal reflection light center (step S143).
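In corneal-reflection gaze tracking, step S143 is commonly implemented by mapping the vector from the corneal reflection (glint) center to the pupil center onto gaze angles. The linear per-axis gains below stand in for a per-user calibration and are an illustrative assumption, not a mapping taken from the specification:

```python
import math

def gaze_direction(pupil_center, glint_center, gain=(1.0, 1.0)):
    """Estimate gaze angles in degrees from the pupil-center-to-glint
    vector (step S143). The per-axis linear gain is a placeholder for
    a calibration obtained per user."""
    dx = pupil_center[0] - glint_center[0]
    dy = pupil_center[1] - glint_center[1]
    return (math.degrees(math.atan(gain[0] * dx)),
            math.degrees(math.atan(gain[1] * dy)))

# Pupil center coincident with the glint -> gaze straight at the camera.
print(gaze_direction((2.0, 2.0), (2.0, 2.0)))  # (0.0, 0.0)
```

The accuracy of this step is what the difference-image preprocessing protects: noise in either center estimate propagates directly into the gaze angle.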
According to the line-of-sight detection device of the second embodiment, the following effects are achieved in addition to those of the line-of-sight detection device of the first embodiment.
(1) By differencing the corneal reflection image with the first unilluminated image to obtain the first difference image, and differencing the dark pupil image with the second unilluminated image to obtain the second difference image, the first and second non-lighting images are each captured under conditions corresponding to the exposure conditions of the first and second images, respectively, so that external light noise can be removed from the corneal reflection image and the dark pupil image more effectively.
(2) By acquiring the first non-lighting image under the first exposure condition and the second non-lighting image under the second exposure condition, the exposure conditions for capturing the first image and the first non-lighting image can be made identical, and likewise for the second image and the second non-lighting image. This further enhances the removal of external light noise and, in turn, contributes to improved line-of-sight detection accuracy.
Other operations, effects, and modifications are the same as in the first embodiment.
Although the present invention has been described with reference to the above embodiments, the present invention is not limited thereto, and improvements or modifications are possible within the purpose of the improvement or the spirit of the present invention.
As described above, the line-of-sight detection device according to the present invention is useful in that it can detect the line of sight with high accuracy even in an environment with a large amount of external light noise or large fluctuations in external light noise, such as an automobile.
11 First light source
12 Second light source
21 First camera
22 Second camera
31, 32 Image acquisition unit
33 Light source control unit
34 Exposure control unit
35 Image extraction unit
36 Difference image calculation unit
37 Corneal reflection light center detection unit
38 Pupil center calculation unit
39 Line-of-sight direction calculation unit
40 Arithmetic control unit
E11, E12, E13, E14, E21, E22, E23 Exposure (imaging)
E111, E112, E113, E114, E115, E121, E122, E123, E124 Exposure (imaging)
L11, L12, L13, L21, L22 Lighting (light emission)
L111, L112, L113, L121, L122 Lighting (light emission)
N1, N2, N11, N12, N13, N14 Not lit

Claims (7)

  1.  A line-of-sight detection device comprising:
     a light source that irradiates light onto a region including at least an eye;
     a camera that, in a state where the light source is lit, acquires a first image of the region under a first exposure condition and acquires a second image under a second exposure condition that is darker than the first exposure condition;
     an exposure control unit that controls the first exposure condition and the second exposure condition; and
     an image extraction unit that extracts a corneal reflection image from the first image and extracts a dark pupil image from the second image.
  2.  The line-of-sight detection device according to claim 1, wherein the camera acquires a non-lighting image in a state where the light source is not lit,
     the image extraction unit extracts an unilluminated image from the non-lighting image, and
     the line-of-sight detection device comprises a difference image calculation unit that obtains a first difference image by differencing the corneal reflection image and the unilluminated image, and obtains a second difference image by differencing the dark pupil image and the unilluminated image.
  3.  The line-of-sight detection device according to claim 2, wherein the wavelength of the light emitted from the light source is a wavelength that is highly absorbed within the eyeball and is not easily reflected by the retina of the eyeball.
  4.  The line-of-sight detection device according to claim 3, wherein the unilluminated image is common to the difference calculation for the first image and the difference calculation for the second image.
  5.  The line-of-sight detection device according to claim 4, wherein the non-lighting image is acquired under an exposure condition closer to the second exposure condition than to the first exposure condition, or under the same exposure condition as the second exposure condition.
  6.  The line-of-sight detection device according to claim 1, wherein the camera acquires a first non-lighting image and a second non-lighting image in a state where the light source is not lit,
     the image extraction unit extracts a first unilluminated image based on the first non-lighting image and extracts a second unilluminated image based on the second non-lighting image, and
     the line-of-sight detection device comprises a difference image calculation unit that obtains a first difference image by differencing the corneal reflection image and the first unilluminated image, and obtains a second difference image by differencing the dark pupil image and the second unilluminated image.
  7.  The line-of-sight detection device according to claim 6, wherein the first non-lighting image is acquired under the first exposure condition, and the second non-lighting image is acquired under the second exposure condition.
PCT/JP2016/085919 2016-02-01 2016-12-02 Line-of-sight detection device WO2017134918A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2016017004 2016-02-01
JP2016-017004 2016-02-01

Publications (1)

Publication Number Publication Date
WO2017134918A1 true WO2017134918A1 (en) 2017-08-10

Family

ID=59499524

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2016/085919 WO2017134918A1 (en) 2016-02-01 2016-12-02 Line-of-sight detection device

Country Status (1)

Country Link
WO (1) WO2017134918A1 (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005185431A (en) * 2003-12-25 2005-07-14 National Univ Corp Shizuoka Univ Line-of-sight detection method and line-of-sight detector
JP2012115505A (en) * 2010-12-01 2012-06-21 Fujitsu Ltd Visual line detection device and visual line detection method


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
NAKASHIMA, AYA ET AL.: "Pupil Detection Using Light Sources of Different Wavelengths", THE JOURNAL OF THE INSTITUTE OF IMAGE INFORMATION AND TELEVISION ENGINEERS, vol. 60, no. 12, 699, 1 December 2006 (2006-12-01), pages 2019 - 2025 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023046406A1 (en) * 2021-09-23 2023-03-30 Continental Automotive Technologies GmbH An image processing system and method thereof
GB2611289A (en) * 2021-09-23 2023-04-05 Continental Automotive Tech Gmbh An image processing system and method thereof


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16889401

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16889401

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP