WO2016002656A1 - Gaze Measurement System, Gaze Measurement Method, and Program - Google Patents
Gaze Measurement System, Gaze Measurement Method, and Program
- Publication number
- WO2016002656A1 (application PCT/JP2015/068522)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- line
- observer
- sight
- gaze
- display space
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/033—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
- G06F3/0346—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/60—Editing figures and text; Combining figures or text
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/243—Image signal generators using stereoscopic image cameras using three or more 2D image sensors
Definitions
- the present invention relates to a gaze measurement system, a gaze measurement method, and a program for measuring a gaze direction of an observer.
- This application claims priority based on Japanese Patent Application No. 2014-133919, filed in Japan on June 30, 2014, the contents of which are incorporated herein by reference.
- a line-of-sight measuring device is known as a device for detecting the direction of an observer's line of sight while the observer moves through a display space, such as a store or an exhibition hall, in which various observation objects are displayed on shelves or stands.
- this line-of-sight measurement device combines a line-of-sight measurement function that detects the direction of the observer's line of sight with a field-of-view video camera that captures objects in front of the observer, and has been put into practical use as a head-mounted line-of-sight measurement device (see, for example, Patent Document 1 and Patent Document 2).
- line-of-sight measurement data obtained from the line-of-sight measurement function indicates the direction of the line of sight in a viewpoint coordinate system fixed with respect to the observer's head.
- simultaneously with the line-of-sight measurement, an object in front of the observer's viewpoint is photographed by a field-of-view video camera fixed with respect to the viewpoint coordinate system, and a field-of-view video image captured by this photographing is obtained.
- the line-of-sight direction indicated by the line-of-sight measurement data uniquely corresponds to the pixel coordinates of the visual field video image.
- the position of the observation target indicated by the line-of-sight measurement data in the viewpoint coordinate system can be displayed on the visual field video image.
- the measurer can specify the gaze location where the observer gazes based on the position of the observation target displayed on the visual field video image.
- as such a line-of-sight measurement device and analysis software, for example, the eye mark recorder EMR-9 (registered trademark) and the eye mark application software EMR-dTarget (registered trademark) of NAC Image Technology Co., Ltd. are commercially available.
- aggregated results of the gaze points observed by the observer are shown on an image. For example, the frequency with which the observer gazes at particular locations and the movement path of the observer are plotted on a front photograph of the display shelf on which the observation objects are displayed in a store, on a plan view showing the arrangement of the observation objects in an exhibition hall, or the like.
- however, in the above-described scheme, the process of identifying the gaze point that the observer watches in the display space from the line-of-sight measurement data and the field-of-view video image is performed at the discretion of the measurer.
- for this reason, the measurer must manually identify the gaze points indicated by a large amount of line-of-sight measurement data (the points at which the observer gazes), plot them on the front photograph image, and create numerical data for the gaze points that can be aggregated, which increases the load on the measurer.
- in the method of Patent Document 1, when the observer's line of sight is measured using a three-dimensional magnetic sensor in a wide place such as a store or an exhibition hall, a single magnetic field source cannot cover the entire movement range of the observer because of limitations in magnetic field strength. In order to apply the method of Patent Document 1 in such a wide place, a plurality of magnetic field sources must be arranged at positions that can encompass the movement range of the observer, depending on the size of the space.
- with these magnetic field sources, not only do the apparatus cost and work load increase, but in a display space with a large number of display objects the magnetic field sources may not be able to be placed at appropriate positions because of physical restrictions.
- the display space is represented by a single two-dimensional image captured by a camera, and the video of the field-of-view video camera is mapped onto the display space by a two-dimensional feature point matching technique.
- however, when the observer moves in a display space having a complicated three-dimensional structure, such as a store space in which observation objects (for example, products) are arranged three-dimensionally on a plurality of display shelves, the following problems arise. Depending on the observer's position and direction of movement in the display space, the two-dimensional positional relationship of the feature points seen by the field-of-view video camera changes, the positions at which occlusions occur change, or the observer's field of view cannot be reproduced by a single two-dimensional image. For this reason, feature point matching between the single two-dimensional image captured by the camera and the video of the field-of-view video camera cannot measure the gaze point at which the observer gazes with high accuracy.
- the present invention has been made in view of such a situation, and an object thereof is to provide a line-of-sight measurement system, a line-of-sight measurement method, and a program capable of efficiently aggregating a large amount of line-of-sight measurement data without burdening the measurer, without adding new installations to the display space for obtaining gaze points, even when the display space in which the observation targets are arranged has a complicated three-dimensional shape.
- the line-of-sight measurement system according to a first aspect of the present invention includes: a user imaging unit that is attachable to an observer moving in a display space in which an observation target is displayed and that captures a field-of-view image in front of the observer; a user line-of-sight measurement unit that is attachable to the observer and that acquires line-of-sight measurement data indicating the direction of the observer's line of sight in the coordinate system of the field-of-view image; and a line-of-sight measurement device that obtains a gaze point at which the observer gazes in the display space, based on the coordinate position at which the three-dimensional shape data of the display space including the observation target intersects the observer's line-of-sight direction vector in the display space obtained from the line-of-sight measurement data.
- in the line-of-sight measurement system according to the first aspect of the present invention, the line-of-sight measurement device may include a three-dimensional space reconstruction unit that reconstructs the three-dimensional shape data from a plurality of multi-viewpoint captured images obtained by imaging the display space from different directions with an imaging device different from the user imaging unit.
- in the line-of-sight measurement system according to the first aspect of the present invention, the line-of-sight measurement device may include a three-dimensional space reconstruction unit that reconstructs the three-dimensional shape data based on design data of structures included in the display space or on measured data obtained by actually measuring the structures included in the display space.
- in the line-of-sight measurement system according to the first aspect of the present invention, the line-of-sight measurement device may reconstruct the three-dimensional shape data of the display space from both the field-of-view image captured by the user imaging unit and multi-viewpoint captured images obtained by imaging the display space from different directions with an imaging device different from the user imaging unit.
- in the line-of-sight measurement system according to the first aspect of the present invention, the line-of-sight measurement device may plot the gaze point obtained by the line-of-sight measurement device on a rendered image generated from the three-dimensional shape data.
- in the line-of-sight measurement system according to the first aspect of the present invention, when the line-of-sight direction of the observer indicated by the line-of-sight measurement data remains in the same direction for a preset time, the line-of-sight measurement device may set the corresponding intersection position on the three-dimensional shape data as the gaze point.
- the line-of-sight measurement method according to a second aspect of the present invention includes: a step of capturing a field-of-view image in front of an observer with a user imaging unit attached to the observer moving in a display space in which an observation target is displayed; a step of acquiring line-of-sight measurement data indicating the direction of the observer's line of sight in the coordinate system of the field-of-view image; and a step of obtaining a gaze point at which the observer gazes in the display space, based on the coordinate position at which the three-dimensional shape data of the display space including the observation target intersects the observer's line-of-sight direction vector in the display space obtained from the line-of-sight measurement data.
- the program according to the third aspect of the present invention causes a computer to function as a line-of-sight measurement device that obtains a gaze point at which the observer gazes in the display space, based on the coordinate position at which the three-dimensional shape data of the observation target in the display space intersects the observer's line-of-sight direction vector in the display space obtained from line-of-sight measurement data indicating the direction of the observer's line of sight in the coordinate system of a field-of-view image in front of the observer captured by a user imaging unit attached to the observer moving in the display space in which the observation target is displayed.
- according to the aspects of the present invention described above, the gaze point of the observer in the display space can be identified automatically, without adding new installations to the display space for obtaining gaze points, even when the display space in which the observation targets are arranged has a complicated shape. For this reason, a large amount of line-of-sight measurement data can be aggregated efficiently without burdening the measurer.
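To make the overall flow of these aspects concrete, the following is a minimal Python sketch of the processing pipeline described above, not the patented implementation itself; every function name used here (reconstruct_display_space, gaze_ray_in_global, intersect_first, project_to_image) is a hypothetical placeholder for processing that is detailed later in this description.

```python
# Hypothetical pipeline sketch of the gaze measurement flow described above.
# All helper functions are placeholders; later sections describe how each step
# (3D reconstruction, coordinate conversion, intersection, projection) works.

def measure_gaze_points(multi_view_images, field_of_view_frames, gaze_samples):
    # 1. Reconstruct the 3D shape data of the display space from the
    #    multi-viewpoint captured images (three-dimensional space reconstruction).
    shape_data = reconstruct_display_space(multi_view_images)

    gaze_points = []
    for frame, gaze in zip(field_of_view_frames, gaze_samples):
        # 2. Locate the user imaging unit in the global coordinate system and
        #    convert the gaze measurement into a global line-of-sight direction vector.
        origin, direction = gaze_ray_in_global(frame, gaze, shape_data)

        # 3. The first intersection of that half line with the shape data is the gaze point.
        hit = intersect_first(shape_data, origin, direction)
        if hit is not None:
            gaze_points.append(hit)

    # 4. Project the gaze points onto a selected multi-viewpoint image for reporting.
    return [project_to_image(p, multi_view_images[0]) for p in gaze_points]
```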
- FIG. 4 is a diagram illustrating a configuration example of an extraction table stored in an imaging data storage unit 16.
- FIGS. 5A to 5E are diagrams illustrating examples of multi-viewpoint captured images used for reconstructing the display space by the imaging device 2.
- FIG. 1 is a block diagram illustrating a configuration example of a line-of-sight measurement system according to the present embodiment.
- the line-of-sight measurement system includes a line-of-sight measurement device 1, an imaging device 2, and a user observation device 3.
- the line-of-sight measurement device 1 includes a three-dimensional space reconstruction unit 11, a stop time detection unit 12, a line-of-sight direction vector conversion unit 13, an intersection coordinate detection unit 14, an intersection coordinate projection unit 15, an imaging data storage unit 16, a global coordinate system data storage unit 17, and a projection image data storage unit 18.
- the imaging device 2 is a camera using an image sensor such as a CCD (Charge Coupled Device) or a CMOS (Complementary Metal Oxide Semiconductor).
- the imaging device 2 is used to capture an image including an observation target in a display space as a multi-viewpoint captured image from a plurality of different viewpoints.
- the three-dimensional space reconstruction unit 11 reconstructs the display space in the global coordinate system from multi-viewpoint captured images, which are two-dimensional images obtained by capturing the display space from a plurality of different viewpoint directions. In addition, the three-dimensional space reconstruction unit 11 adds multi-viewpoint captured image identification information to each multi-viewpoint captured image obtained by the imaging device 2, and writes the multi-viewpoint captured image identification information and the multi-viewpoint captured image in association with each other into the multi-viewpoint captured image table of the imaging data storage unit 16 for storage.
- when reconstructing the three-dimensional space, the three-dimensional space reconstruction unit 11 also writes, for each multi-viewpoint captured image, the feature points, camera imaging coordinates, camera imaging direction vector (indicating the posture angle), and image projection transformation matrix obtained for that image into the multi-viewpoint captured image table of the imaging data storage unit 16, in association with the multi-viewpoint captured image identification information of the corresponding multi-viewpoint captured image.
- from each multi-viewpoint captured image, the three-dimensional space reconstruction unit 11 extracts feature point coordinates, which are the coordinates of feature points (for example, locations where the brightness changes).
- the three-dimensional space reconstruction unit 11 then associates the feature points between the multi-viewpoint captured images captured from the different viewpoints (matching processing).
- as a function for reproducing a captured three-dimensional space from a plurality of two-dimensional images of that space, for example, PhotoScan (registered trademark) manufactured by Agisoft is used. In this matching, the positional relationship between the extracted feature points is used.
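The patent relies on commercial software (PhotoScan) for this step; purely as an illustration of feature point extraction and matching between two multi-viewpoint captured images, a generic sketch using OpenCV could look like the following (OpenCV is not mentioned in the patent, and the file names are hypothetical).

```python
import cv2

# Illustrative only: detect brightness-based feature points in two multi-viewpoint
# captured images and associate them (matching processing).
img1 = cv2.imread("view_a.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical file names
img2 = cv2.imread("view_b.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

# Corresponding feature point coordinates, usable for multi-view stereo measurement.
pts1 = [kp1[m.queryIdx].pt for m in matches]
pts2 = [kp2[m.trainIdx].pt for m in matches]
```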
- the three-dimensional space reconstruction unit 11 obtains the three-dimensional coordinates in the global coordinate system of the three-dimensional space of each feature point based on the principle of multi-viewpoint stereo measurement from the correspondence between the feature points.
- the three-dimensional space reconstruction unit 11 forms a polyhedron (an assembly of polygons) by assigning surfaces between the coordinate points arranged in the global coordinate system, thereby reconstructing three-dimensional shape data that forms the outer surfaces of the objects in the display space.
- the data structure of the three-dimensional shape data is not limited to a polyhedron as an aggregate of polygons; any data format that can represent a three-dimensional shape, such as point cloud data or volume data, may be used.
- the three-dimensional shape data indicates geometric data of a three-dimensional shape object including a display shelf in the display space and a display object (observation target) displayed on the display shelf.
- the three-dimensional space reconstruction unit 11 writes the three-dimensional shape data of the display space reconstructed in the global coordinate system into the global coordinate system data storage unit 17 and stores it there.
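As an illustration of the multi-view stereo principle mentioned above, the following sketch (an assumption, not the patent's implementation) triangulates matched feature points into global three-dimensional coordinates, given hypothetical 3x4 projection matrices P1 and P2 for two calibrated views; surfaces can then be assigned between the resulting points to form the polyhedral shape data.

```python
import numpy as np
import cv2

def triangulate_feature_points(P1, P2, pts1, pts2):
    """Recover global 3D coordinates of matched feature points from two views.
    P1, P2: hypothetical 3x4 projection matrices; pts1, pts2: lists of (u, v) pairs."""
    a = np.asarray(pts1, dtype=float).T            # 2 x N pixel coordinates, view 1
    b = np.asarray(pts2, dtype=float).T            # 2 x N pixel coordinates, view 2
    hom = cv2.triangulatePoints(P1, P2, a, b)      # 4 x N homogeneous coordinates
    return (hom[:3] / hom[3]).T                    # N x 3 points in the global frame
```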
- the stop time detection unit 12 detects, from the line-of-sight measurement data measured by the user observation device 3, a stop time during which the observer's line-of-sight direction remains in the same direction. In addition, the stop time detection unit 12 extracts, as a frame image, a still image from the video for which the stop time exceeds a predetermined threshold time, and adds captured image identification information to it. The stop time detection unit 12 then writes the frame image with the captured image identification information added into the extraction table of the imaging data storage unit 16. This extraction table associates the captured image identification information of the frame image, the captured image address at which the frame image indicated by the captured image identification information is stored, the line-of-sight direction vector in the camera coordinate system, and the imaging time indicating the time at which the frame image was captured.
- the line-of-sight direction vector is a vector indicating the direction of the observer's line of sight from the camera imaging coordinates of the user imaging unit 31 (described later).
- the line-of-sight direction vector conversion unit 13 generates a line-of-sight direction vector in the camera coordinate system from the line-of-sight measurement data supplied from the user line-of-sight measurement unit 32.
- the line-of-sight direction vector conversion unit 13 writes the time information added by the user line-of-sight measurement unit 32 and the line-of-sight direction vector in the camera coordinate system sampled at that time into the imaging data storage unit 16 for storage.
- the line-of-sight direction vector conversion unit 13 also converts the line-of-sight direction vector detected in the three-dimensional space in the camera coordinate system from the camera coordinate system to the global coordinate system using the camera coordinate transformation matrix.
- in the present embodiment, the user line-of-sight measurement unit 32 expresses the line-of-sight measurement data as relative coordinates in the camera coordinates of the user imaging unit 31, but any format may be used, such as a direction vector or data in the internal format of the measurement device.
- the three-dimensional space reconstruction unit 11 performs matching processing of the frame image against the display space in the global coordinate system, extracts the feature point coordinates of the frame image, and calculates the camera imaging coordinates and the camera imaging direction vector.
- the line-of-sight direction vector conversion unit 13 generates a camera coordinate conversion matrix for converting the line-of-sight direction vector from the camera coordinate system to the global coordinate system, from the camera imaging coordinates and the camera imaging direction vector. Further, the line-of-sight direction vector conversion unit 13 performs coordinate conversion from the camera coordinate system to the global coordinate system for the line-of-sight direction vector corresponding to the extracted frame image, using the generated camera coordinate conversion matrix.
- the intersection coordinate detection unit 14 reads the shape data of the display space composed of the three-dimensional shape data in the global coordinate system from the global coordinate system data storage unit 17. Then, the intersection coordinate detection unit 14 obtains coordinates at which the read three-dimensional shape data and the line-of-sight direction vector in the global coordinate system obtained by the line-of-sight direction vector conversion unit 13 intersect. Then, the intersection coordinate detection unit 14 sets a coordinate at which the three-dimensional shape data and the line-of-sight direction vector intersect as a gaze location where the observer gazes.
- the intersection coordinate projection unit 15 reads, from the multi-viewpoint captured image table of the imaging data storage unit 16, the image projection transformation matrix corresponding to the multi-viewpoint captured image selected by the measurer. The intersection coordinate projection unit 15 then plots the gaze point at which the observer gazes onto the two-dimensional multi-viewpoint captured image using the read image projection transformation matrix. In addition, the intersection coordinate projection unit 15 writes the multi-viewpoint captured image on which the gaze point is plotted, together with the multi-viewpoint captured image identification information of that image, into the projection image data storage unit 18 for storage. In the present embodiment, the intersection coordinate projection unit 15 plots the gaze point on the two-dimensional multi-viewpoint captured image, but the present invention is not limited to this embodiment.
- the gaze point may be plotted on a CG image (drawing image) generated from the three-dimensional shape data.
- the gaze point may be plotted on a free viewpoint image generated from the multi-viewpoint captured image.
- although the method of plotting the gaze point on the two-dimensional multi-viewpoint captured image has the merit of excellent image quality, the present invention does not limit the kind of image.
- the user observation device 3 can be attached to an observer.
- the user observation device 3 is attached to the observer.
- the place where the user observation device 3 is attached to the observer is not limited.
- the user observation device 3 may be mounted and fixed on the observer's head. In this case, in an experiment in which a portion to be watched by the observer is detected, the user observation device 3 is fixedly attached to a helmet or headphones worn on the head of the observer.
- the user observation apparatus 3 may be mounted at a position near the observer's eyes. In this case, for example, the user observation device 3 is provided in glasses that cover the eyes of the observer. Further, the user observation device 3 may be hung from the observer's neck.
- the user observation device 3 is not necessarily fixed to the observer.
- when the user observation device 3 is used hanging from the observer's neck in this way, the user observation device 3 is provided in a necklace or the like. Since the user observation device 3 is attachable to the observer, it may also be provided in a so-called wearable device. Furthermore, since the user observation device 3 has the function of capturing a field-of-view image in front of the observer, it is preferably mounted and used at a position close to the observer's head and eyes.
- the user observation device 3 includes a user imaging unit 31 and a user line-of-sight measurement unit 32.
- the user imaging unit 31 is, for example, a video camera using a CCD or CMOS image sensor.
- the user imaging unit 31 captures a moving image (video) in the direction in which the face of the observer moving through the display space is oriented (in front of the observer), that is, image data of the observer's field of view in the display space (a field-of-view image).
- the user imaging unit 31 is not limited to a video camera, as long as it is a device that can obtain images of the observer's field of view; a still camera that captures image data of the field of view corresponding to the stop points obtained from the line-of-sight measurement data may also be used.
- the user line-of-sight measurement unit 32 includes, for example, an eye camera fixed to the above-described helmet, glasses, or the like, and acquires line-of-sight measurement data indicating the observer's line-of-sight direction in the camera coordinate system of the field-of-view image of the user imaging unit 31.
- the line-of-sight direction vector conversion unit 13 detects, from the line-of-sight measurement data, a line-of-sight direction vector indicating the observer's line-of-sight direction.
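Assuming, as stated above, that the line-of-sight measurement data is given as relative coordinates (pixel coordinates) in the field-of-view image of the user imaging unit 31, a minimal sketch of deriving the camera-coordinate line-of-sight direction vector with a simple pinhole model could look like the following; the intrinsic parameters fx, fy, cx, cy are hypothetical, and the z-forward convention used here differs from FIG. 2, where the imaging direction is taken as the y axis.

```python
import numpy as np

def gaze_pixel_to_direction(u, v, fx, fy, cx, cy):
    """Convert a gaze point given as pixel coordinates (u, v) of the field-of-view
    image into a unit line-of-sight direction vector in the camera coordinate system,
    assuming a pinhole camera with focal lengths fx, fy and principal point cx, cy."""
    d = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])  # ray through the pixel
    return d / np.linalg.norm(d)
```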
- FIG. 2 is a diagram for explaining processing for obtaining coordinates at which the line-of-sight direction vector indicating the gaze location observed by the observer and the three-dimensional shape data intersect.
- FIG. 2 shows a global coordinate system including x′-axis, y′-axis, and z′-axis, and a camera coordinate system including x-axis, y-axis, and z-axis.
- the camera imaging direction of the user imaging unit 31 is the y axis of the camera coordinate system.
- the camera imaging coordinates 1060 are the position of the user imaging unit 31 that captured the frame image, and are the origin of the camera coordinate system of the user imaging unit 31.
- the line-of-sight direction vector 1020 is the observer's line-of-sight direction vector obtained from the line-of-sight measurement data that the user line-of-sight measurement unit 32, provided together with the user imaging unit 31, measured in correspondence with the frame image captured by the user imaging unit 31.
- the line-of-sight direction vector conversion unit 13 performs coordinate conversion of the line-of-sight direction vector 1020 from the camera coordinate system to the global coordinate system according to the following equation (1).
- the matrix in Expression (1) is a camera coordinate transformation matrix that rotates coordinate values (x, y, z) in the camera coordinate system so as to align them with the global coordinate system (x', y', z').
- the posture angle 1040 indicates the posture angle of the user imaging unit 31 and consists of the Euler angles (α, β, γ) of the camera coordinate system with respect to the global coordinate system.
- the angle α indicates the angle formed by the x' axis of the global coordinate system and the x axis of the camera coordinate system.
- the angle β is the angle formed by the y' axis of the global coordinate system and the y axis of the camera coordinate system.
- the angle γ indicates the angle formed by the z' axis of the global coordinate system and the z axis of the camera coordinate system.
- the line-of-sight direction vector conversion unit 13 uses the camera coordinate transformation matrix expressed by Expression (1) above to convert the line-of-sight direction vector (x1, y1, z1) in the camera imaging coordinates into the line-of-sight direction vector (x1', y1', z1') in the global coordinate system (line-of-sight direction vector 1020).
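Since Expression (1) itself is not reproduced in this text, the following sketch assumes a plain x-y-z Euler rotation purely for illustration; the actual axis convention of the camera coordinate transformation matrix in the patent may differ.

```python
import numpy as np

def rotation_from_euler(alpha, beta, gamma):
    """Camera-to-global rotation built from Euler angles (alpha, beta, gamma).
    An x-y-z rotation order is assumed here; the convention of Expression (1)
    is not reproduced in this text."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rz = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def gaze_to_global(gaze_cam, alpha, beta, gamma):
    """Rotate a camera-coordinate line-of-sight vector (x1, y1, z1) into the
    global coordinate system, yielding (x1', y1', z1')."""
    return rotation_from_euler(alpha, beta, gamma) @ np.asarray(gaze_cam, dtype=float)
```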
- the three-dimensional shape data 1070 is three-dimensional shape data in the display space reconstructed by the three-dimensional space reconstruction unit 11.
- the intersection coordinate detection unit 14 obtains, as the gaze point at which the observer gazes, the coordinate point 1050 at which the line of sight indicated by the line-of-sight direction vector 1020 from the camera imaging coordinates in the global coordinate system intersects the three-dimensional shape data 1070.
- the three-dimensional shape data 1070 is the three-dimensional shape data that the line of sight indicated by the line-of-sight direction vector 1020 from the camera imaging coordinates first intersects in the display space of the global coordinate system. That is, the intersection coordinate detection unit 14 detects the coordinate point at which a half line extending from the camera imaging coordinates 1060 in the direction of the observer's line-of-sight direction vector 1020 first intersects the three-dimensional shape data in the display space of the global coordinate system, and obtains the coordinates of this detected intersection as the gaze point at which the observer gazes.
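Assuming the three-dimensional shape data is stored as a triangle mesh (an assembly of polygons), a minimal sketch of finding the first intersection of the half line from the camera imaging coordinates along the line-of-sight direction vector, i.e. the gaze point, could look like this (Möller-Trumbore ray/triangle test):

```python
import numpy as np

def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Möller-Trumbore ray/triangle test; returns the distance t along the half
    line, or None if there is no intersection in front of the origin."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = e1.dot(p)
    if abs(det) < eps:
        return None                    # ray is parallel to the triangle plane
    inv = 1.0 / det
    s = origin - v0
    u = s.dot(p) * inv
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = direction.dot(q) * inv
    if v < 0.0 or u + v > 1.0:
        return None
    t = e2.dot(q) * inv
    return t if t > eps else None

def first_intersection(origin, direction, triangles):
    """Return the gaze point: the coordinates where the half line from the camera
    imaging coordinates along the gaze direction first meets the shape data."""
    best = None
    for v0, v1, v2 in triangles:       # triangles: iterable of 3 vertices each
        t = ray_triangle(origin, direction, v0, v1, v2)
        if t is not None and (best is None or t < best):
            best = t
    return None if best is None else origin + best * direction
```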
- FIGS. 3A and 3B are diagrams illustrating configuration examples of the tables of captured images stored in the imaging data storage unit 16.
- FIG. 3A shows the multi-viewpoint captured image table. For each multi-viewpoint captured image, the multi-viewpoint captured image identification information, captured image address, feature point data, camera imaging coordinates, camera imaging direction vector, camera posture angle, and image projection transformation matrix are written and stored.
- the multi-viewpoint captured image identification information is identification information for identifying the multi-viewpoint captured image.
- the captured image address indicates an address where the multi-viewpoint captured image is stored.
- the feature point data is data indicating, for example, feature points such as RGB (Red Green Blue) gradation in the multi-viewpoint captured image and its coordinate points.
- the camera imaging coordinates indicate the position, in the global coordinate system, of the imaging device 2 when the multi-viewpoint captured image was captured.
- the camera imaging direction vector is a vector indicating the imaging direction of the imaging device 2 when a multi-viewpoint captured image is captured.
- the camera posture angle is a posture angle of the imaging device 2 when a multi-viewpoint captured image is captured.
- the image projection transformation matrix is a matrix used for coordinate transformation in which coordinates in the global coordinate system are projected onto a multi-viewpoint captured image and plotted.
- FIG. 3B shows the frame image table. For each frame image, the captured image identification information, captured image address, feature point data, camera imaging coordinates, camera imaging direction vector, camera posture angle, imaging time, and camera coordinate transformation matrix are written and stored.
- the captured image identification information is identification information for identifying a frame image.
- the captured image address indicates an address where the frame image is stored.
- the feature point data is data indicating feature points such as RGB gradations and coordinate points thereof in the frame image.
- the camera imaging coordinates indicate the position, in the global coordinate system, of the user observation device 3 when the extracted frame image was captured.
- the camera imaging direction vector is a vector indicating the imaging direction of the user observation device 3 when the extracted frame image is captured.
- the camera attitude angle is an attitude angle of the user observation device 3 when the extracted frame image is captured.
- the imaging time is the time when imaging of image data that has become a frame image is started.
- the camera coordinate transformation matrix is a matrix that performs coordinate transformation from a line-of-sight direction vector in the camera coordinate system to a line-of-sight direction vector in the global coordinate system.
- FIG. 4 is a diagram illustrating a configuration example of the extraction table stored in the imaging data storage unit 16.
- the extraction table associates captured image identification information, captured image addresses, line-of-sight direction vectors, and imaging time.
- the captured image identification information is identification information, added to each frame image extracted by the stop time detection unit 12 from the image data of the video frames, for identifying that frame image.
- the captured image address indicates an address in the captured data storage unit 16 in which image data of a frame in a video, which is a frame image, is stored.
- the line-of-sight direction vector is a line-of-sight direction vector in the camera coordinate system corresponding to time information (imaging time) of the frame image.
- the imaging time indicates the time when the image data of the frame that is the extracted frame image is captured.
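Purely as an illustration of how the records described above might be held in memory, the following sketch defines extraction table and frame image table rows with hypothetical field names that mirror the description; the patent does not prescribe any particular data structure.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class ExtractionRecord:
    """One row of the extraction table (FIG. 4); field names mirror the description."""
    captured_image_id: str          # captured image identification information
    captured_image_address: str     # where the frame image is stored
    gaze_direction_cam: np.ndarray  # line-of-sight direction vector, camera coordinates
    imaging_time: float             # time at which the frame image was captured

@dataclass
class FrameImageRecord:
    """One row of the frame image table (FIG. 3B)."""
    captured_image_id: str
    captured_image_address: str
    feature_points: np.ndarray      # feature point data
    camera_coords: np.ndarray       # camera imaging coordinates (global)
    camera_direction: np.ndarray    # camera imaging direction vector
    camera_posture: tuple           # camera posture angle (Euler angles)
    imaging_time: float
    cam_to_global: np.ndarray       # camera coordinate transformation matrix
```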
- FIGS. 5A to 5E are diagrams illustrating examples of multi-viewpoint captured images used by the imaging device 2 for reconstructing the display space. Each of FIGS. 5A to 5E is one element of the set of multi-viewpoint captured images.
- the three-dimensional space reconstruction unit 11 obtains feature point data from each of the multi-viewpoint captured images and reconstructs the three-dimensional shape data of the display space in the global coordinate system based on the obtained feature point data. That is, the three-dimensional space reconstruction unit 11 obtains the three-dimensional coordinates of each feature point in the global coordinate system using the principle of multi-view stereo measurement, from the correspondences of the feature points between the multi-viewpoint captured images, and reconstructs the three-dimensional shape data of the display space.
- FIG. 6 is a flowchart showing the operation of the stop point extraction process performed by the stop time detection unit 12.
- the following processing is performed on the line-of-sight direction vectors in the camera coordinate system sampled at a predetermined cycle time interval.
- the stop time detection unit 12 receives a line-of-sight direction vector from the line-of-sight direction vector conversion unit 13 at each sampling time.
- the user imaging unit 31 temporarily writes and stores the image data of the captured video in the imaging data storage unit 16 in correspondence with the time of the time information.
- the line-of-sight direction vector conversion unit 13 temporarily writes and stores the line-of-sight measurement data for each sampling time in the imaging data storage unit 16 corresponding to the time of the time information.
- the image data of the video and the line-of-sight direction vector are stored corresponding to the time of the time information.
- Step S1 The stop time detection unit 12 reads the line-of-sight measurement data sampled at the earliest time in the order of time passage from the imaging data storage unit 16. Then, the stop time detection unit 12 outputs the read gaze measurement data to the gaze direction vector conversion unit 13 and outputs a control signal instructing the extraction of the gaze direction vector in the camera coordinate system.
- the line-of-sight direction vector conversion unit 13 extracts a line-of-sight direction vector in the camera coordinate system from the supplied line-of-sight measurement data, and outputs it to the stop time detection unit 12.
- Step S2 The stop time detection unit 12 resets a stop time measurement counter provided therein and initializes it to “0”.
- Step S3 The stop time detection unit 12 determines whether or not the line-of-sight measurement data to be read next exists in the imaging data storage unit 16. At this time, when the line-of-sight measurement data to be read next exists in the imaging data storage unit 16, the stop time detection unit 12 advances the process to step S4. On the other hand, when the line-of-sight measurement data to be read next does not exist in the imaging data storage unit 16, the stop time detection unit 12 ends the process.
- Step S4 The stop time detection unit 12 reads, from the imaging data storage unit 16, the line-of-sight measurement data sampled at the next time, in chronological order, after the currently read line-of-sight direction vector. Then, the stop time detection unit 12 outputs the read line-of-sight measurement data to the line-of-sight direction vector conversion unit 13 and requests extraction of the line-of-sight direction vector in the camera coordinate system.
- the line-of-sight direction vector conversion unit 13 extracts a line-of-sight direction vector in the camera coordinate system from the supplied line-of-sight measurement data, and outputs it to the stop time detection unit 12.
- Step S5 The stop time detection unit 12 obtains the amount of change between the current line-of-sight direction vector and the newly calculated line-of-sight direction vector. For example, the stop time detection unit 12 obtains the angle change from the inner product of the two line-of-sight direction vectors and uses it as the change amount (Δv). The stop time detection unit 12 then determines whether Δv is equal to or greater than a preset threshold. If Δv is less than the preset threshold, the stop time detection unit 12 advances the process to step S6; on the other hand, if Δv is equal to or greater than the preset threshold, the stop time detection unit 12 advances the process to step S2.
- Step S6 The stop time detection unit 12 increments the stop time measurement counter (adds “1”). By multiplying one count of this stop time measurement counter by one cycle time of sampling, a stop time, which is a time during which the line-of-sight direction vector points in the same direction, is obtained. The stop time detection unit 12 multiplies the count value of the stop time measurement counter by one cycle time, and obtains a stop time as a result of the multiplication.
- Step S7 The stop time detection unit 12 determines whether or not the obtained stop time is equal to or greater than a preset time threshold. At this time, if the determined stop time is equal to or greater than the preset time threshold, the stop time detection unit 12 advances the process to step S8. On the other hand, if the determined stop time is less than the preset time threshold, the stop time detection unit 12 advances the process to step S3.
- the stop time is obtained by multiplying the count value of the stop time measurement counter by one cycle time of sampling.
- alternatively, a count threshold may be set for the count value of the stop time measurement counter itself; if the count value is equal to or greater than the count threshold, the process may proceed to step S8, and if the count value is less than the count threshold, the process may proceed to step S3.
- Step S8 The stop time detection unit 12 extracts, as a frame image, the image data of the frame in the video corresponding to the first line-of-sight direction vector from which counting started after the stop time measurement counter was reset. However, the present invention is not limited to the method described in this step S8; image data corresponding to any line-of-sight direction vector between the first line-of-sight direction vector from which counting started after the stop time measurement counter was reset and the line-of-sight direction vector at the time the stop time reached the threshold may be extracted as the frame image.
- Step S9 Next, the stop time detection unit 12 adds captured image identification information to the extracted frame image, associates the captured image identification information, the captured image address of the frame image, the line-of-sight direction vector, and the imaging time with one another, and writes and stores them in the extraction table of the imaging data storage unit 16.
- here, the imaging time is the time at which the image data corresponding to the first line-of-sight direction vector, from which the stop time measurement counter started counting after being reset, was captured.
- the stop time detection unit 12 then advances the process to step S2.
- when image data corresponding to any line-of-sight direction vector between the first line-of-sight direction vector, from which counting started after the stop time measurement counter was reset, and the line-of-sight direction vector at the time the stop time reached the threshold is extracted as the frame image, the imaging time is the time at which that extracted image data was captured.
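The stop point extraction of FIG. 6 (steps S1 to S9) can be summarized by the following sketch, which assumes the line-of-sight direction vectors are already available as a chronologically ordered array sampled at a fixed period; thresholds and variable names are hypothetical.

```python
import numpy as np

def extract_fixation_frames(gaze_vectors, sample_period, angle_threshold, time_threshold):
    """Sketch of the stop point extraction in FIG. 6. gaze_vectors: chronologically
    ordered camera-coordinate line-of-sight direction vectors sampled every
    sample_period seconds. Returns indices of samples to extract as frame images."""
    extracted = []
    i = 0
    while i < len(gaze_vectors) - 1:
        start = i          # first vector after the counter reset (steps S1/S2)
        count = 0          # stop time measurement counter
        while i < len(gaze_vectors) - 1:
            v0 = gaze_vectors[i] / np.linalg.norm(gaze_vectors[i])
            v1 = gaze_vectors[i + 1] / np.linalg.norm(gaze_vectors[i + 1])
            # step S5: angle change obtained from the inner product of the two vectors
            delta = np.arccos(np.clip(np.dot(v0, v1), -1.0, 1.0))
            i += 1
            if delta >= angle_threshold:
                break                            # direction changed: back to step S2
            count += 1                           # step S6: increment the counter
            if count * sample_period >= time_threshold:   # step S7: dwell long enough
                extracted.append(start)          # step S8: extract the frame at 'start'
                break                            # step S9 then returns to step S2
    return extracted
```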
- FIG. 7 is a flowchart for explaining an example of an operation for obtaining a gaze point to be watched by an observer in the display space of the global coordinate system.
- Step S11 The three-dimensional space reconstruction unit 11 sequentially reads the multi-viewpoint captured images, obtains the feature point data of each, and writes and stores the feature point data in the multi-viewpoint captured image table of the imaging data storage unit 16 in association with the corresponding multi-viewpoint captured image.
- the three-dimensional space reconstruction unit 11 obtains feature point data in all the multi-viewpoint captured images in the multi-viewpoint captured image table.
- Step S12 The three-dimensional space reconstruction unit 11 reproduces (reconstructs) the three-dimensional shape data of the display space in the global coordinate system using the feature point data of each of the multi-viewpoint captured images in the multi-viewpoint captured image table of the imaging data storage unit 16. The three-dimensional space reconstruction unit 11 then writes and stores the reconstructed three-dimensional shape data of the display space in the global coordinate system into the global coordinate system data storage unit 17. At this time, the three-dimensional space reconstruction unit 11 writes and stores the camera imaging coordinates, camera imaging direction vector, posture angle, and image projection transformation matrix of each multi-viewpoint captured image obtained in the reconstruction process into the multi-viewpoint captured image table of the imaging data storage unit 16, in correspondence with that multi-viewpoint captured image.
- Step S13 The three-dimensional space reconstruction unit 11 sequentially reads the captured image addresses of the frame images from the extraction table stored in the imaging data storage unit 16, in order from the earliest imaging time. Then, the three-dimensional space reconstruction unit 11 reads the image data of the video at the captured image address (the frame image) and obtains the feature point data of this frame image.
- Step S14 The three-dimensional space reconstruction unit 11 performs matching processing between the three-dimensional shape data of the display space stored in the global coordinate system data storage unit 17 and the feature point data of the frame image. Through this matching processing, the three-dimensional space reconstruction unit 11 obtains the camera imaging coordinates, the camera imaging direction vector, and the posture angle of the user imaging unit 31 in the global coordinate system at the time the frame image was captured. At this time, the three-dimensional space reconstruction unit 11 associates the obtained camera imaging coordinates, camera imaging direction vector, and posture angle with the currently processed frame image, and writes and stores them in the frame image table of the imaging data storage unit 16.
- Step S15 Next, the line-of-sight direction vector conversion unit 13 generates, from the camera imaging coordinates and the camera imaging direction vector, the camera coordinate transformation matrix shown in Expression (1), which converts the line-of-sight direction vector from the camera coordinate system to the global coordinate system.
- the line-of-sight direction vector conversion unit 13 then stores the generated camera coordinate conversion matrix in the frame image table of the imaging data storage unit 16 in association with the frame image.
- Step S16 The line-of-sight direction vector conversion unit 13 reads the line-of-sight direction vector in the camera coordinate system corresponding to the currently processed frame image from the extraction table of the imaging data storage unit 16.
- the line-of-sight direction vector conversion unit 13 reads the camera coordinate conversion matrix from the frame image table, and coordinates the line-of-sight direction vector corresponding to the read frame image from the camera coordinate system to the global coordinate system using the read camera coordinate conversion matrix. Convert.
- the line-of-sight direction vector conversion unit 13 associates the line-of-sight direction vector, which has been coordinate-converted into a vector in the global coordinate system, with the frame image, and writes and stores it in the frame image table of the imaging data storage unit 16.
- Step S17 The intersection coordinate detection unit 14 reads the three-dimensional shape data of the display space in the global coordinate system from the global coordinate system data storage unit 17. Then, the intersection coordinate detection unit 14 extends the line-of-sight direction vector in the global coordinate system and detects the intersection coordinates in the three-dimensional shape data that are intersected first. The intersection coordinate detection unit 14 sets the detected intersection coordinates as the gaze point at which the observer gazes in the display space. Further, the intersection coordinate detection unit 14 writes and stores the obtained coordinates of the gaze point in a gaze point table (not shown).
- Step S18 The three-dimensional space reconstruction unit 11 determines the presence / absence of a frame image in the extraction table of the imaging data storage unit 16 that has not been subjected to processing for obtaining the camera imaging direction vector and the attitude angle. At this time, if there is a frame image for which the process for obtaining the camera imaging direction vector and the attitude angle is not performed, the three-dimensional space reconstruction unit 11 advances the process to step S13. On the other hand, if there is no frame image for which the process for obtaining the camera imaging direction vector and the attitude angle is not performed, the three-dimensional space reconstruction unit 11 advances the process to step S19.
- Step S19 The intersection coordinate projection unit 15 reads, from the multi-viewpoint captured image table of the imaging data storage unit 16, the multi-viewpoint captured image selected by the measurer and the image projection transformation matrix corresponding to that image. Then, the intersection coordinate projection unit 15 sequentially reads the gaze points from the gaze point table of the imaging data storage unit 16 and projects each gaze point onto the corresponding multi-viewpoint captured image using the image projection transformation matrix. The intersection coordinate projection unit 15 writes and stores the multi-viewpoint captured image onto which the gaze points are projected in the projection image data storage unit 18.
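A sketch of the projection in step S19, assuming the image projection transformation matrix is a 3x4 matrix that maps homogeneous global coordinates to homogeneous pixel coordinates (an assumption; the matrix itself is not given explicitly in this text):

```python
import numpy as np

def project_gaze_point(P, gaze_point):
    """Project a gaze point given in global coordinates onto a multi-viewpoint
    captured image. P: assumed 3x4 image projection transformation matrix for
    that image. Returns pixel coordinates (u, v) at which to draw the mark."""
    hom = P @ np.append(np.asarray(gaze_point, dtype=float), 1.0)
    return hom[0] / hom[2], hom[1] / hom[2]
```

A mark indicating the gaze point can then be drawn at the returned (u, v) position of the selected multi-viewpoint captured image.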
- in the above description, the display space in the global coordinate system is reconstructed from the multi-viewpoint captured images, and feature point matching processing is then performed on each frame image against the reconstructed display space in order to obtain the data of the frame image table (feature point data, camera imaging coordinates, camera imaging direction vector, and camera posture angle).
- the present invention does not limit the processing method described above.
- the display space may be reconstructed by performing feature point matching processing using both the multi-viewpoint captured image and the frame image. In this case, processing for obtaining each data of the frame image table in the frame image is performed in the feature point matching processing.
- FIG. 8A to FIG. 8C are conceptual diagrams illustrating processing for obtaining a gaze point by intersecting the line-of-sight direction vector and the three-dimensional shape data.
- FIG. 8A shows that the three-dimensional shape data 501 and the line-of-sight direction vector 601 in the display space of the global coordinate system intersect, and the intersection coordinates (xc1, yc1, zc1) as the gaze point 701 are obtained.
- FIG. 8B shows that the three-dimensional shape data 502 and the line-of-sight direction vector 602 intersect in the display space of the global coordinate system, and the intersection coordinates (xc2, yc2, zc2) as the gaze point 702 are obtained.
- FIG. 8C shows that the three-dimensional shape data 503 and the line-of-sight direction vector 603 in the display space of the global coordinate system intersect, and the intersection coordinates (xc3, yc3, zc3) are obtained as the gaze point 703.
- FIGS. 8A to 8C are images obtained by imaging the same display shelf from different directions.
- FIG. 9 is a conceptual diagram showing a multi-viewpoint captured image on which a gaze point that is observed by an observer in the display space is projected.
- FIG. 9 illustrates a multi-viewpoint captured image onto which the intersection coordinates (xc1, yc1, zc1), the intersection coordinates (xc2, yc2, zc2), and the intersection coordinates (xc3, yc3, zc3) in the three-dimensional shape data of FIGS. 8A to 8C, in the three-dimensional global coordinate system, are projected.
- in FIG. 9, there are an image area 901 onto which the display shelf 1000 and the three-dimensional shape data 501 are projected, an image area 902 onto which the three-dimensional shape data 502 is projected, and an image area 903 onto which the three-dimensional shape data 503 is projected.
- a mark 801 indicating the gaze point, onto which the intersection coordinates (xc1, yc1, zc1) are projected, is added to the image area 901.
- a mark 802 indicating the gaze point, onto which the intersection coordinates (xc2, yc2, zc2) are projected, is added to the image area 902.
- a mark 803 indicating the gaze point, onto which the intersection coordinates (xc3, yc3, zc3) are projected, is added to the image area 903.
- FIG. 10 is a diagram illustrating an example of a report image obtained by projecting a gaze point that a viewer gazes in the display space onto a multi-viewpoint captured image.
- in FIG. 10, the gaze point at which the observer gazes is projected onto an image area 910 of a beverage container, which is an area onto which the three-dimensional shape data of items arranged on the display shelf 1010 in the display space is projected, and a mark 810 indicating the gaze point is drawn there.
- in this way, the gaze point for each image area 910 of the beverage containers arranged on the display shelf 1010, which has a complicated three-dimensional shape, can be detected easily and accurately from the mark 810 indicating the gaze point.
- as described above, according to the present embodiment, the three-dimensional shape data of the display space is reconstructed from the multi-viewpoint captured images, the coordinates at which the three-dimensional shape data intersects the line-of-sight direction vector indicating the direction of the observer's line of sight are obtained, and these coordinates are set as the gaze point. For this reason, it is not necessary to add new installations to the display space, and the natural gaze point of the observer can be detected easily.
- in addition, according to the present embodiment, even if the observation objects arranged in the display space have complicated shapes, the gaze point at which the observer gazes is detected as described above from the intersection of the three-dimensional shape data and the line-of-sight direction vector indicating the observer's line of sight, so the load on the measurer can be reduced compared with the conventional case, and the gaze point can be detected with high accuracy.
- FIG. 11 is a diagram in which the display space of the global coordinate system, on which the gaze points watched by the observer are plotted, is projected onto two-dimensional coordinates on the floor plane.
- in FIG. 11, a gaze point plot 1110 and a gaze point plot 1150, at which the observer gazes at the display objects on the display shelves 1120 and 1130, are shown.
- by providing a line-of-sight history management unit (not shown), the movement path 1100 of the user observation device 3 can easily be formed as line segments connecting the positions of the camera imaging coordinates of the user observation device 3.
- in addition, the line-of-sight history management unit adds an arrow 1140 and an arrow 1160 indicating the line-of-sight direction vector to the gaze point plot 1110 and the gaze point plot 1150, respectively, so that it is clear from which camera imaging position each gaze point was observed. In particular, when the display shelves are low and the observer views a display on another shelf beyond a nearer display shelf, or when the observer looks back at a display shelf, it can be clearly distinguished which display the observer is observing.
- in this way, a movement route along which the observer moves is created, and it can be clearly detected which place on which display shelf is being observed from which point on the movement route. By using the information on the gaze points along the movement route together with the information on the gaze points on the display shelves shown in FIGS. 9 and 10, the behavior of the observer when observing the display items on the display shelves in the display space can be clearly detected, and the arrangement of products and exhibits can be examined easily.
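A minimal sketch of the floor-plane projection used for FIG. 11, assuming the floor corresponds to the plane spanned by the x' and y' axes of the global coordinate system (the text does not state which axis is vertical):

```python
import numpy as np

def to_floor_plane(points_3d):
    """Project global-coordinate points (gaze points or camera imaging coordinates)
    onto two-dimensional floor coordinates by dropping the assumed height axis."""
    return np.asarray(points_3d, dtype=float)[:, :2]

# Connecting consecutive projected camera imaging coordinates with line segments
# yields a movement path such as 1100; projected gaze points give plots such as
# 1110 and 1150.
```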
- a program for realizing the functions of the line-of-sight measurement device 1 according to the present invention may be recorded on a computer-readable recording medium, and the program recorded on the recording medium may be read into a computer system and executed to perform the gaze point detection processing in the line-of-sight measurement.
- the “computer system” includes an OS and hardware such as peripheral devices.
- the “computer system” includes a WWW system having a homepage providing environment (or display environment).
- the “computer-readable recording medium” refers to a portable medium such as a flexible disk, a magneto-optical disk, a ROM, or a CD-ROM, or a storage device such as a hard disk incorporated in a computer system.
- the “computer-readable recording medium” refers to a volatile memory (RAM) in a computer system that becomes a server or a client when a program is transmitted via a network such as the Internet or a communication line such as a telephone line. In addition, those holding programs for a certain period of time are also included.
- the program may be transmitted from a computer system storing the program in a storage device or the like to another computer system via a transmission medium or by a transmission wave in the transmission medium.
- the “transmission medium” for transmitting the program refers to a medium having a function of transmitting information, such as a network (communication network) such as the Internet or a communication line (communication line) such as a telephone line.
- the program may be one that realizes only a part of the functions described above. Furthermore, the program may be one that realizes the above-described functions in combination with a program already recorded in the computer system.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Geometry (AREA)
- Signal Processing (AREA)
- Position Input By Displaying (AREA)
- User Interface Of Digital Computer (AREA)
- Length Measuring Devices By Optical Means (AREA)
- Image Analysis (AREA)
- Eye Examination Apparatus (AREA)
- Image Processing (AREA)
- Studio Devices (AREA)
Abstract
Description
This application claims priority based on Japanese Patent Application No. 2014-133919, filed in Japan on June 30, 2014, the contents of which are incorporated herein by reference.
In this case, simultaneously with the line-of-sight measurement, an object in front of the observer's viewpoint is photographed by a field-of-view video camera fixed with respect to the viewpoint coordinate system, and a field-of-view video image captured by this photographing is obtained. At this time, the line-of-sight direction indicated by the line-of-sight measurement data uniquely corresponds to the pixel coordinates of the field-of-view video image.
As such a line-of-sight measurement device and analysis software, for example, the eye mark recorder EMR-9 (registered trademark) and the eye mark application software EMR-dTarget (registered trademark) of NAC Image Technology Co., Ltd. are commercially available.
In addition, a method has also been devised in which a line-of-sight measurement function is combined with a three-dimensional sensor that measures the position and orientation of the observer's head in the measurement space, so as to identify the gaze point at which the observer gazes in the measurement space indicated by the line-of-sight measurement data.
However, in the above-described scheme combining a field-of-view video camera with a line-of-sight measurement function, the process of identifying the gaze point at which the observer gazes in the display space from the line-of-sight measurement data and the field-of-view video image is performed at the discretion of the measurer. For this reason, the measurer must manually identify the gaze points indicated by a large amount of line-of-sight measurement data (the points at which the observer gazes), plot them on the above-mentioned front photograph image, and create numerical data for the gaze points that can be aggregated, which increases the load on the measurer.
In addition, when devices that should not originally exist in the display space, that is, magnetic field sources, are installed at a plurality of positions, visual or physical influences may act on the observer, and there is a risk that the observer's judgment and observation behavior will change.
しかしながら、複数の陳列棚に立体的に観察対象(例えば、商品)が配置された店舗空間のように、複雑な3次元的構造を持つ陳列空間において観察者が移動する場合、以下の問題がある。
陳列空間における観察者の位置や移動方向によっては、視野ビデオカメラ内の特徴点の二次元的な位置関係、あるいは、オクルージョンとなる位置が変化したり、単一の二次元画像では観察者の視野を再現できなかったり等の状況が生じる。このため、カメラで撮像して得られた単一の二次元画像と、視野ビデオカメラの映像との二次元画像間における特徴点マッチングでは、観察者が注視する注視箇所を高い精度によって計測することができない。
The line-of-sight measurement system according to the first aspect of the present invention may include a three-dimensional space reconstruction unit that reconstructs the three-dimensional shape data based on design data relating to structures included in the display space or on measured data obtained by actually measuring the structures included in the display space.
FIG. 1 is a block diagram showing a configuration example of the line-of-sight measurement system according to the present embodiment. In FIG. 1, the line-of-sight measurement system includes a line-of-sight measurement device 1, an imaging device 2, and a user observation device 3. The line-of-sight measurement device 1 includes a three-dimensional space reconstruction unit 11, a dwell time detection unit 12, a gaze direction vector conversion unit 13, an intersection coordinate detection unit 14, an intersection coordinate projection unit 15, an imaging data storage unit 16, a global coordinate system data storage unit 17, and a projection image data storage unit 18.
The imaging device 2 is, for example, a camera using an image sensor such as a CCD (Charge Coupled Device) or CMOS (Complementary Metal Oxide Semiconductor) sensor. The imaging device 2 is used to capture images including the observation targets in the display space from a plurality of different viewpoints, as multi-viewpoint captured images.
The dwell time detection unit 12 also writes the frame image to which this captured image identification information has been added into the extraction table of the imaging data storage unit 16. This extraction table associates the captured image identification information of a frame image, the captured image address at which the frame image indicated by that identification information is stored, the gaze direction vector in the camera coordinate system, and the capture time indicating when the frame image was captured. The gaze direction vector is a vector indicating the direction of the observer's line of sight from the camera imaging coordinates of the user imaging unit 31 (described later), and represents the observer's gaze direction.
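Purely as an illustration of the extraction table's structure (the patent does not specify a concrete data layout; the field names below are assumptions), one record of the table could be modeled as follows:

```python
from dataclasses import dataclass

@dataclass
class ExtractionRecord:
    """One row of the extraction table kept by the dwell time detection unit."""
    captured_image_id: str        # captured image identification information
    captured_image_address: int   # storage address of the frame image
    gaze_vector_camera: tuple     # gaze direction vector in camera coordinates
    capture_time: float           # time at which the frame image was captured
```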
The gaze direction vector conversion unit 13 also converts the gaze direction vector detected in the three-dimensional space of the camera coordinate system from the camera coordinate system to the global coordinate system, using the camera coordinate transformation matrix.
In the present embodiment, the user gaze measurement unit 32 expresses the line-of-sight measurement data as relative coordinates in the camera coordinates of the user imaging unit 31, but any format may be used, such as a direction vector or data in the internal format of the measurement device.
The gaze direction vector conversion unit 13 generates, from the camera imaging coordinates and the camera imaging direction vector, a camera coordinate transformation matrix that converts the gaze direction vector from the camera coordinate system to the global coordinate system. The gaze direction vector conversion unit 13 also uses the generated camera coordinate transformation matrix to convert the gaze direction vector corresponding to the extracted frame image from the camera coordinate system to the global coordinate system.
In the present embodiment, the intersection coordinate projection unit 15 plots gaze locations on the two-dimensional multi-viewpoint captured images, but the present invention is not limited to this embodiment.
For example, gaze locations may be plotted on a CG image (rendered image) generated from the three-dimensional shape data. Gaze locations may also be plotted on a free-viewpoint image generated from the multi-viewpoint captured images.
Plotting gaze locations on two-dimensional multi-viewpoint captured images has the advantage of excellent image quality, but the present invention does not limit the type of image.
For example, the user observation device 3 may be mounted and fixed on the observer's head. In this case, in an experiment for detecting the locations at which the observer gazes, the user observation device 3 is attached and fixed to a helmet, headphones, or the like worn on the observer's head.
The user observation device 3 may also be worn at a position close to the observer's eyes. In this case, for example, the user observation device 3 is provided on eyeglasses or the like covering the observer's eyes.
The user observation device 3 may also be hung from the observer's neck. That is, the user observation device 3 does not necessarily have to be fixed to the observer. When the user observation device 3 is used hanging from the observer's neck in this way, it is provided on a necklace or the like.
Since the user observation device 3 can be worn by the observer, it may also be provided in a so-called wearable device.
Since the user observation device 3 has the function of capturing a field-of-view image in front of the observer, the user observation device 3 is preferably worn and used at a position close to the observer's head or eyes.
The user imaging unit 31 is, for example, a video camera using a CCD or CMOS image sensor. The user imaging unit 31 captures a moving image (video) in the direction in which the face of the observer moving through the display space is facing (in front of the observer), that is, image data of the observer's field of view in the display space (a field-of-view image). The user imaging unit 31 is not limited to a video camera as long as it is a device capable of obtaining video of the observer's field of view; a still camera that captures image data of the field of view corresponding to a dwell point obtained from the line-of-sight measurement data may also be used.
The gaze direction vector 1020 is the observer's gaze direction vector obtained from the line-of-sight measurement data measured by the user gaze measurement unit 32, which is provided together with the user imaging unit 31, in correspondence with a frame image captured by the user imaging unit 31. The gaze direction vector conversion unit 13 performs the coordinate transformation of this gaze direction vector 1020 from the camera coordinate system to the global coordinate system according to equation (1) below.
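Equation (1) itself is not reproduced in this text; purely as a minimal sketch (assuming a rigid camera pose, with all function and variable names being illustrative rather than the patent's), a camera-to-global conversion of a gaze direction vector could look like this:

```python
import numpy as np

def camera_to_global(gaze_dir_cam, R_cam_to_global, cam_position):
    """Transform a gaze direction vector from camera to global coordinates.

    gaze_dir_cam: (3,) direction vector in the camera coordinate system.
    R_cam_to_global: (3, 3) rotation of the camera expressed in the global frame.
    cam_position: (3,) camera imaging coordinates in the global frame.
    Returns the ray origin (camera position) and the rotated direction.
    """
    d = np.asarray(gaze_dir_cam, dtype=float)
    d = d / np.linalg.norm(d)           # normalize the gaze direction
    d_global = R_cam_to_global @ d      # a pure direction is rotated only
    return np.asarray(cam_position, dtype=float), d_global
```

The ray origin returned here corresponds to the camera imaging coordinates, and the rotated direction is the gaze direction vector expressed in the global coordinate system.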
The three-dimensional shape data 1070 is the three-dimensional shape data of the display space reconstructed by the three-dimensional space reconstruction unit 11. The intersection coordinate detection unit 14 obtains the coordinate point 1050 at which the line of sight indicated by the gaze direction vector 1020 from the camera imaging coordinates in the global coordinate system intersects the three-dimensional shape data 1070, as the gaze location at which the observer gazes.
That is, in the display space of the global coordinate system, the intersection coordinate detection unit 14 detects the coordinate point in the three-dimensional shape data at which the half-line extending from the camera imaging coordinates 1060 in the direction of the observer's gaze direction vector 1020 first intersects, and takes the coordinates of this detected intersection as the gaze location at which the observer gazes.
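The patent does not prescribe how this first intersection is computed. As one minimal sketch, assuming the three-dimensional shape data is available as a triangle mesh, the nearest hit along the gaze ray could be found with a standard ray/triangle test (Möller–Trumbore); the function names are assumptions:

```python
import numpy as np

def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Möller-Trumbore ray/triangle intersection; returns distance t or None."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = e1 @ p
    if abs(det) < eps:
        return None                        # ray parallel to the triangle
    inv = 1.0 / det
    s = origin - v0
    u = (s @ p) * inv
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = (direction @ q) * inv
    if v < 0.0 or u + v > 1.0:
        return None
    t = (e2 @ q) * inv
    return t if t > eps else None          # only hits in front of the camera

def first_intersection(origin, direction, triangles):
    """Return the nearest intersection of the gaze ray with the mesh, if any.

    triangles: iterable of (v0, v1, v2) numpy vertex triples in global coordinates.
    """
    hits = [t for tri in triangles
            if (t := ray_triangle(origin, direction, *tri)) is not None]
    return origin + min(hits) * direction if hits else None
```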
The dwell time detection unit 12 reads, from the imaging data storage unit 16, the line-of-sight measurement data sampled at the earliest time in chronological order.
The dwell time detection unit 12 then outputs the read line-of-sight measurement data to the gaze direction vector conversion unit 13 and outputs a control signal instructing extraction of the gaze direction vector in the camera coordinate system.
The gaze direction vector conversion unit 13 extracts the gaze direction vector in the camera coordinate system from the supplied line-of-sight measurement data and outputs it to the dwell time detection unit 12.
The dwell time detection unit 12 resets its internal dwell time measurement counter, initializing it to "0".
The dwell time detection unit 12 determines whether line-of-sight measurement data to be read next exists in the imaging data storage unit 16.
At this time, if line-of-sight measurement data to be read next exists in the imaging data storage unit 16, the dwell time detection unit 12 advances the process to step S4.
On the other hand, if no line-of-sight measurement data to be read next exists in the imaging data storage unit 16, the dwell time detection unit 12 ends the process.
The dwell time detection unit 12 reads, from the imaging data storage unit 16, the line-of-sight measurement data sampled at the next time after the gaze direction vector currently being read, in chronological order.
The dwell time detection unit 12 then outputs the read line-of-sight measurement data to the gaze direction vector conversion unit 13 and requests extraction of the gaze direction vector in the camera coordinate system.
The gaze direction vector conversion unit 13 extracts the gaze direction vector in the camera coordinate system from the supplied line-of-sight measurement data and outputs it to the dwell time detection unit 12.
The dwell time detection unit 12 obtains the amount of change between the current gaze direction vector and the newly calculated gaze direction vector. For example, the dwell time detection unit 12 obtains the angular change from the inner product of the two gaze direction vectors and uses it as the amount of change (Δv).
The dwell time detection unit 12 then determines whether the amount of change Δv is greater than or equal to a preset threshold.
At this time, if the amount of change Δv is less than the preset threshold, the dwell time detection unit 12 advances the process to step S6.
On the other hand, if the amount of change Δv is greater than or equal to the preset threshold, the dwell time detection unit 12 advances the process to step S2.
The dwell time detection unit 12 increments the dwell time measurement counter (adds "1"). The dwell time, which is the length of time during which the gaze direction vector points in the same direction, is obtained by multiplying one count of this dwell time measurement counter by one sampling period.
The dwell time detection unit 12 multiplies the count value of the dwell time measurement counter by one sampling period and obtains the dwell time as the result of this multiplication.
The dwell time detection unit 12 determines whether the obtained dwell time is greater than or equal to a preset time threshold.
At this time, if the obtained dwell time is greater than or equal to the preset time threshold, the dwell time detection unit 12 advances the process to step S8.
On the other hand, if the obtained dwell time is less than the preset time threshold, the dwell time detection unit 12 advances the process to step S3.
In the present embodiment, the dwell time is obtained by multiplying the count value of the dwell time measurement counter by one sampling period. The present invention is not limited to the method described in step S7 above. A count threshold may instead be set using the count value of the dwell time measurement counter itself, the process advancing to step S8 when the count value is greater than or equal to the count threshold and to step S3 when the count value is less than the count threshold.
The dwell time detection unit 12 extracts, as a frame image, the image data of the frame in the video corresponding to the first gaze direction vector for which the dwell time measurement counter started counting after being reset. In the present embodiment, the image data of the video corresponding to the first gaze direction vector counted after the reset of the dwell time measurement counter is extracted as the frame image. The present invention is not limited to the method described in step S8 above. Image data corresponding to any gaze direction vector between the first gaze direction vector counted after the reset of the dwell time measurement counter and the gaze direction vector at which the dwell time reached the threshold may be extracted as the frame image.
Next, the dwell time detection unit 12 adds captured image identification information to the extracted frame image, associates this captured image identification information, the captured image address of the frame image, the gaze direction vector, and the capture time with one another, and writes them into the extraction table of the imaging data storage unit 16. Here, the capture time is the time at which the image data corresponding to the first gaze direction vector for which the dwell time measurement counter started counting after each reset was captured.
The dwell time detection unit 12 then advances the process to step S2.
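As a minimal sketch of the dwell-time detection loop described above (not the patent's implementation; the thresholds, sampling period, and names are assumptions), the angular change between successive gaze vectors can be accumulated into a dwell time, and a frame is extracted once that time exceeds a threshold:

```python
import numpy as np

def detect_dwell_frames(gaze_vectors, timestamps, angle_thresh_rad=0.05,
                        dwell_thresh_s=0.3, sample_period_s=1 / 60):
    """Return (index, capture_time) pairs where the gaze dwelt in one direction.

    gaze_vectors: sequence of (3,) gaze direction vectors in camera coordinates,
                  ordered by sampling time and aligned with timestamps.
    """
    extracted = []
    i = 0
    while i < len(gaze_vectors) - 1:
        start = i                       # first vector after a counter reset
        count = 0                       # dwell time measurement counter
        while i + 1 < len(gaze_vectors):
            a = gaze_vectors[i] / np.linalg.norm(gaze_vectors[i])
            b = gaze_vectors[i + 1] / np.linalg.norm(gaze_vectors[i + 1])
            delta = np.arccos(np.clip(a @ b, -1.0, 1.0))   # angular change Δv
            if delta >= angle_thresh_rad:
                break                   # gaze moved: reset and start over
            count += 1
            if count * sample_period_s >= dwell_thresh_s:
                extracted.append((start, timestamps[start]))
                break                   # dwell long enough: extract the frame
            i += 1
        i += 1
    return extracted
```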
Step S11:
The three-dimensional space reconstruction unit 11 sequentially reads the multi-viewpoint captured images, obtains feature point data for each, associates the data with each multi-viewpoint captured image, and writes and stores them in the multi-viewpoint captured image table of the imaging data storage unit 16. Here, the three-dimensional space reconstruction unit 11 obtains feature point data for all multi-viewpoint captured images in the multi-viewpoint captured image table.
The three-dimensional space reconstruction unit 11 uses the feature point data of each multi-viewpoint captured image in the multi-viewpoint captured image table of the imaging data storage unit 16 to carry out the process of reproducing (reconstructing) the three-dimensional shape data of the display space in the global coordinate system.
The three-dimensional space reconstruction unit 11 then writes and stores the reconstructed three-dimensional shape data of the display space in the global coordinate system into the global coordinate system data storage unit 17.
At this time, the three-dimensional space reconstruction unit 11 writes and stores the camera imaging coordinates, camera imaging direction vector, attitude angle, and image projection transformation matrix of each multi-viewpoint captured image, obtained in the reconstruction process, into the multi-viewpoint captured image table of the imaging data storage unit 16, in association with each multi-viewpoint captured image.
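The patent does not name a specific feature detector or reconstruction algorithm. As an illustrative sketch only (an off-the-shelf substitution, not necessarily what the patent uses), the per-image feature point data could be obtained with a detector such as SIFT in OpenCV, with matched features then feeding a structure-from-motion pipeline that recovers camera poses and scene geometry:

```python
import cv2

def extract_feature_points(image_paths):
    """Detect SIFT keypoints/descriptors for each multi-viewpoint image."""
    sift = cv2.SIFT_create()
    features = {}
    for path in image_paths:
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        keypoints, descriptors = sift.detectAndCompute(img, None)
        features[path] = (keypoints, descriptors)
    return features

def match_pair(desc_a, desc_b, ratio=0.75):
    """Lowe ratio-test matching between two images' descriptors."""
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(desc_a, desc_b, k=2)
    return [m for m, n in matches if m.distance < ratio * n.distance]
```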
The three-dimensional space reconstruction unit 11 sequentially reads the captured image addresses of the frame images from the extraction table of the imaging data storage unit 16, in order of earliest capture time.
The three-dimensional space reconstruction unit 11 then reads the image data of the video at the captured image address (the frame image) and obtains the feature point data of this frame image.
The three-dimensional space reconstruction unit 11 performs a matching process between the three-dimensional shape data of the display space stored in the global coordinate system data storage unit 17 and the feature point data of the frame image.
Through this matching process, the three-dimensional space reconstruction unit 11 obtains the camera imaging coordinates, camera imaging direction vector, and attitude angle of the user imaging unit 31, in the global coordinate system, at the time the frame image was captured.
At this time, the three-dimensional space reconstruction unit 11 writes and stores the obtained camera imaging coordinates, camera imaging direction vector, and attitude angle into the frame image table of the imaging data storage unit 16, in association with the frame image currently being processed.
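One common way to realize this 2D-3D matching step, offered here only as an assumption about how it could be implemented rather than as the patent's method, is to solve a Perspective-n-Point problem once correspondences between frame-image feature points and reconstructed 3D points are known:

```python
import cv2
import numpy as np

def estimate_frame_pose(points_3d, points_2d, camera_matrix, dist_coeffs=None):
    """Recover camera position and orientation from matched 2D-3D points.

    points_3d: (N, 3) global-coordinate points matched to the frame image.
    points_2d: (N, 2) corresponding pixel coordinates in the frame image.
    camera_matrix: (3, 3) intrinsic matrix of the user imaging unit.
    Returns the camera-to-global rotation and the camera imaging coordinates.
    """
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(points_3d, dtype=np.float64),
        np.asarray(points_2d, dtype=np.float64),
        camera_matrix,
        dist_coeffs if dist_coeffs is not None else np.zeros(5),
        flags=cv2.SOLVEPNP_ITERATIVE,
    )
    if not ok:
        raise RuntimeError("pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)            # rotation: global -> camera
    cam_position = (-R.T @ tvec).ravel()  # camera imaging coordinates in global frame
    return R.T, cam_position              # camera -> global rotation, position
```

The returned rotation and position are exactly the quantities needed by the gaze direction vector conversion sketched earlier.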
Next, the gaze direction vector conversion unit 13 generates, from the camera imaging coordinates and the camera imaging direction vector, the camera coordinate transformation matrix shown in equation (1) for converting the gaze direction vector from the camera coordinate system to the global coordinate system.
The gaze direction vector conversion unit 13 then writes and stores the generated camera coordinate transformation matrix into the frame image table of the imaging data storage unit 16, in association with the frame image.
The gaze direction vector conversion unit 13 reads, from the extraction table of the imaging data storage unit 16, the gaze direction vector in the camera coordinate system corresponding to the frame image currently being processed.
The gaze direction vector conversion unit 13 also reads the camera coordinate transformation matrix from the frame image table and, using the read camera coordinate transformation matrix, converts the gaze direction vector corresponding to the read frame image from the camera coordinate system to the global coordinate system.
The gaze direction vector conversion unit 13 then writes and stores the gaze direction vector converted into the global coordinate system into the frame image table of the imaging data storage unit 16, in association with the frame image.
The intersection coordinate detection unit 14 reads the three-dimensional shape data of the display space in the global coordinate system from the global coordinate system data storage unit 17.
The intersection coordinate detection unit 14 then extends the gaze direction vector in the global coordinate system and detects the intersection coordinates in the three-dimensional shape data where it first intersects.
The intersection coordinate detection unit 14 takes the detected intersection coordinates as the gaze location at which the observer gazes in the display space. The intersection coordinate detection unit 14 also writes and stores the coordinates of the obtained gaze location into a gaze location table (not shown).
The three-dimensional space reconstruction unit 11 determines whether there are any frame images in the extraction table of the imaging data storage unit 16 for which the process of obtaining the camera imaging direction vector and attitude angle has not yet been performed.
At this time, if there is a frame image for which the process of obtaining the camera imaging direction vector and attitude angle has not been performed, the three-dimensional space reconstruction unit 11 advances the process to step S13.
On the other hand, if there is no frame image for which the process of obtaining the camera imaging direction vector and attitude angle has not been performed, the three-dimensional space reconstruction unit 11 advances the process to step S19.
The intersection coordinate projection unit 15 reads the multi-viewpoint captured image selected by the measurer and the image projection transformation matrix corresponding to this multi-viewpoint captured image from the multi-viewpoint captured image table of the imaging data storage unit 16.
The intersection coordinate projection unit 15 then sequentially reads gaze locations from the gaze location table of the imaging data storage unit 16 and projects them onto the corresponding multi-viewpoint captured image using the image projection transformation matrix.
The intersection coordinate projection unit 15 writes and stores the multi-viewpoint captured image onto which the gaze locations have been projected into the projection image data storage unit 18.
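As a small sketch of this projection step (assuming, as an illustration, that the image projection transformation matrix is a standard 3x4 perspective projection; names are illustrative), a global-coordinate gaze location maps to pixel coordinates as follows:

```python
import numpy as np

def project_gaze_point(point_3d, projection_matrix):
    """Project a global-coordinate gaze location onto a multi-viewpoint image.

    point_3d: (3,) gaze location in the global coordinate system.
    projection_matrix: (3, 4) image projection transformation matrix
                       (intrinsics combined with the camera pose).
    Returns pixel coordinates (u, v).
    """
    homogeneous = np.append(np.asarray(point_3d, dtype=float), 1.0)
    u, v, w = projection_matrix @ homogeneous
    return u / w, v / w                  # perspective division
```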
FIG. 8A shows that the three-dimensional shape data 501 and the gaze direction vector 601 intersect in the display space of the global coordinate system, and the intersection coordinates (xc1, yc1, zc1) are obtained as the gaze location 701.
FIG. 8B shows that the three-dimensional shape data 502 and the gaze direction vector 602 intersect in the display space of the global coordinate system, and the intersection coordinates (xc2, yc2, zc2) are obtained as the gaze location 702.
FIG. 8C shows that the three-dimensional shape data 503 and the gaze direction vector 603 intersect in the display space of the global coordinate system, and the intersection coordinates (xc3, yc3, zc3) are obtained as the gaze location 703.
Each of FIG. 8A to FIG. 8C is an image obtained by imaging the same display shelf from a different direction.
FIG. 9 shows the result of projecting the intersection coordinates (xc1, yc1, zc1), (xc2, yc2, zc2), and (xc3, yc3, zc3) of the three-dimensional shape data of FIG. 8A to FIG. 8C in the three-dimensional global coordinate system onto the multi-viewpoint captured image shown in FIG. 5D.
FIG. 9 includes the display shelf 1000, an image region 901 onto which the three-dimensional shape data 501 is projected, an image region 902 onto which the three-dimensional shape data 502 is projected, and an image region 903 onto which the three-dimensional shape data 503 is projected. A mark 801 indicating the gaze location onto which the intersection coordinates (xc1, yc1, zc1) are projected is added to the image region 901. A mark 802 indicating the gaze location onto which the intersection coordinates (xc2, yc2, zc2) are projected is added to the image region 902. A mark 803 indicating the gaze location onto which the intersection coordinates (xc3, yc3, zc3) are projected is added to the image region 903.
In FIG. 10, a mark 810, onto which the gaze location at which the observer gazes is projected, is drawn on an image region 910 of a beverage container, which is an image onto which the three-dimensional shape data of the items arranged on a display shelf 1010 placed in the display space is projected.
In this way, the gaze location for each image region 910 of the beverage containers arranged on the display shelf 1010, which has a complex three-dimensional shape, can be detected easily and accurately by means of the mark 810 indicating the gaze location.
Furthermore, according to the present embodiment, even when an observation target arranged in the display space has a complex shape as three-dimensional shape data, the location at which the observer gazes is not detected in the manner described above; instead, it is obtained from the coordinates at which the three-dimensional shape data intersects the gaze direction vector indicating the direction of the observer's line of sight. The load on the measurer can therefore be reduced compared with conventional methods, and the gaze location can be detected with high accuracy.
FIG. 11 is a diagram in which the display space, with the gaze locations at which the observer gazes plotted in the global coordinate system, is projected onto the two-dimensional coordinates of the floor plane. A plot 1110 of gaze locations and a plot 1150 of gaze locations at which the observer gazes at the displayed items on display shelves 1120 and 1130 are shown.
Furthermore, by having the line-of-sight history management unit add arrows 1140 and 1160 indicating the gaze direction vectors to the gaze location plots 1110 and 1150, respectively, it can be clearly seen from which camera imaging position each gaze location was observed.
In particular, showing the gaze direction vectors makes it possible to clearly distinguish which displayed item is being observed, for example when the display shelves are low and the observer looks at items on another shelf over a nearer shelf, or when the observer turns around and looks at items on a shelf that has already been passed.
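As a very small sketch of the floor-plane visualization of FIG. 11 (assuming, purely for illustration, that the global Z axis is vertical; nothing here is specified by the patent), the plot amounts to discarding the height component of each gaze location and drawing an arrow along each gaze direction:

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_floor_plane(gaze_points_3d, gaze_dirs_3d):
    """Plot gaze locations and their gaze-direction arrows on the floor plane."""
    pts = np.asarray(gaze_points_3d, dtype=float)[:, :2]   # drop the vertical axis
    dirs = np.asarray(gaze_dirs_3d, dtype=float)[:, :2]
    tails = pts - dirs                    # arrow tails one step back along the gaze
    plt.scatter(pts[:, 0], pts[:, 1], marker="x")
    plt.quiver(tails[:, 0], tails[:, 1], dirs[:, 0], dirs[:, 1],
               angles="xy", scale_units="xy", scale=1.0)
    plt.gca().set_aspect("equal")
    plt.show()
```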
From the information on gaze locations along this movement route and the information on gaze locations on the display shelves shown in FIG. 9 and FIG. 10, the observer's behavior when observing the displayed items on the display shelves in the display space can be clearly detected, and the arrangement of products, exhibits, and the like can easily be examined.
The "computer system" also includes a WWW system having a homepage providing environment (or display environment). The "computer-readable recording medium" refers to a portable medium such as a flexible disk, a magneto-optical disk, a ROM, or a CD-ROM, or a storage device such as a hard disk built into a computer system. Furthermore, the "computer-readable recording medium" also includes a medium that holds the program for a certain period of time, such as a volatile memory (RAM) inside a computer system serving as a server or a client when the program is transmitted via a network such as the Internet or a communication line such as a telephone line.
2…Imaging device
3…User observation device
11…Three-dimensional space reconstruction unit
12…Dwell time detection unit
13…Gaze direction vector conversion unit
14…Intersection coordinate detection unit
15…Intersection coordinate projection unit
16…Imaging data storage unit
17…Global coordinate system data storage unit
18…Projection image data storage unit
31…User imaging unit
32…User gaze measurement unit
Claims (8)
- A line-of-sight measurement system comprising: a user imaging unit that can be worn by an observer moving through a display space in which observation targets are displayed and that captures a field-of-view image in front of the observer; a user gaze measurement unit that can be worn by the observer and that acquires line-of-sight measurement data indicating the direction of the observer's line of sight in the coordinate system of the field-of-view image; and a line-of-sight measurement device that obtains the gaze location at which the observer gazes in the display space from the coordinate position at which the three-dimensional shape data of the display space including the observation targets intersects the observer's gaze direction vector in the display space obtained from the line-of-sight measurement data.
- The line-of-sight measurement system according to claim 1, wherein the line-of-sight measurement device has a three-dimensional space reconstruction unit that reconstructs the three-dimensional shape data from a plurality of multi-viewpoint captured images obtained by imaging the display space from different directions with an imaging device different from the user imaging unit.
- The line-of-sight measurement system according to claim 1, wherein the line-of-sight measurement device has a three-dimensional space reconstruction unit that reconstructs the three-dimensional shape data from the field-of-view image captured by the user imaging unit and a plurality of multi-viewpoint captured images obtained by imaging the display space from different directions with an imaging device different from the user imaging unit.
- The line-of-sight measurement system according to any one of claims 1 to 3, wherein the line-of-sight measurement device places the gaze location obtained by the line-of-sight measurement device on an arbitrary multi-viewpoint captured image of the display space.
- The line-of-sight measurement system according to any one of claims 1 to 4, wherein the line-of-sight measurement device places the gaze location obtained by the line-of-sight measurement device on a rendered image generated from the three-dimensional shape data.
- The line-of-sight measurement system according to any one of claims 1 to 5, wherein, when the observer's gaze direction indicated by the line-of-sight measurement data points in the same direction for a preset length of time, the line-of-sight measurement device takes the position on the three-dimensional shape data intersected by that gaze direction as the gaze location.
- A line-of-sight measurement method comprising: a step of capturing a field-of-view image in front of an observer moving through a display space in which observation targets are displayed, using a user imaging unit worn by the observer; a step of acquiring line-of-sight measurement data indicating the direction of the observer's line of sight in the coordinate system of the field-of-view image; and a step of obtaining the gaze location at which the observer gazes in the display space from the coordinate position at which the three-dimensional shape data of the display space including the observation targets intersects the observer's gaze direction vector in the display space obtained from the line-of-sight measurement data.
- A program for causing a computer to function as a line-of-sight measurement device that, from a field-of-view image in front of an observer captured by a user imaging unit worn by the observer moving through a display space in which observation targets are displayed, and line-of-sight measurement data indicating the direction of the observer's line of sight in the coordinate system of the field-of-view image, obtains the gaze location at which the observer gazes in the display space from the coordinate position at which the three-dimensional shape data of the observation targets in the display space intersects the observer's gaze direction vector in the display space obtained from the line-of-sight measurement data.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2016531333A JP6680207B2 (ja) | 2014-06-30 | 2015-06-26 | 視線計測システム、視線計測方法、及びプログラム |
EP15814153.1A EP3163410A4 (en) | 2014-06-30 | 2015-06-26 | Line-of-sight measurement system, line-of-sight measurement method, and program |
US15/393,325 US10460466B2 (en) | 2014-06-30 | 2016-12-29 | Line-of-sight measurement system, line-of-sight measurement method and program thereof |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2014133919 | 2014-06-30 | ||
JP2014-133919 | 2014-06-30 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/393,325 Continuation US10460466B2 (en) | 2014-06-30 | 2016-12-29 | Line-of-sight measurement system, line-of-sight measurement method and program thereof |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2016002656A1 true WO2016002656A1 (ja) | 2016-01-07 |
Family
ID=55019193
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2015/068522 WO2016002656A1 (ja) | 2014-06-30 | 2015-06-26 | 視線計測システム、視線計測方法、及びプログラム |
Country Status (4)
Country | Link |
---|---|
US (1) | US10460466B2 (ja) |
EP (1) | EP3163410A4 (ja) |
JP (1) | JP6680207B2 (ja) |
WO (1) | WO2016002656A1 (ja) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107544732A (zh) * | 2016-06-23 | 2018-01-05 | 富士施乐株式会社 | 信息处理装置、信息处理系统和图像形成装置 |
JP2019075018A (ja) * | 2017-10-18 | 2019-05-16 | Kddi株式会社 | 撮影画像からカメラの位置姿勢の類似度を算出する装置、プログラム及び方法 |
JP2020135737A (ja) * | 2019-02-25 | 2020-08-31 | 株式会社パスコ | 注視行動調査システム、及び制御プログラム |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10380440B1 (en) | 2018-10-23 | 2019-08-13 | Capital One Services, Llc | Method for determining correct scanning distance using augmented reality and machine learning models |
US10928904B1 (en) | 2019-12-31 | 2021-02-23 | Logitech Europe S.A. | User recognition and gaze tracking in a video system |
US11163995B2 (en) | 2019-12-31 | 2021-11-02 | Logitech Europe S.A. | User recognition and gaze tracking in a video system |
CN116027910B (zh) * | 2023-03-29 | 2023-07-04 | 广州视景医疗软件有限公司 | 一种基于vr眼动追踪技术的眼位图生成方法及系统 |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH04288122A (ja) * | 1991-03-18 | 1992-10-13 | A T R Shichiyoukaku Kiko Kenkyusho:Kk | 視線表示装置 |
JPH11276438A (ja) * | 1998-03-30 | 1999-10-12 | Isuzu Motors Ltd | 視線計測装置 |
JP2009003701A (ja) * | 2007-06-21 | 2009-01-08 | Denso Corp | 情報システム及び情報処理装置 |
WO2011158511A1 (ja) * | 2010-06-17 | 2011-12-22 | パナソニック株式会社 | 指示入力装置、指示入力方法、プログラム、記録媒体および集積回路 |
Family Cites Families (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH06189906A (ja) | 1992-12-26 | 1994-07-12 | Nissan Motor Co Ltd | 視線方向計測装置 |
US5636334A (en) * | 1994-01-28 | 1997-06-03 | Casio Computer Co., Ltd. | Three-dimensional image creation devices |
JPH11211414A (ja) * | 1998-01-30 | 1999-08-06 | Osaka Gas Co Ltd | 位置検出システム |
US7434931B2 (en) * | 2001-10-25 | 2008-10-14 | Ophthonix | Custom eyeglass manufacturing method |
US7286246B2 (en) * | 2003-03-31 | 2007-10-23 | Mitutoyo Corporation | Method and apparatus for non-contact three-dimensional surface measurement |
JP4434890B2 (ja) * | 2004-09-06 | 2010-03-17 | キヤノン株式会社 | 画像合成方法及び装置 |
DE102005003699B4 (de) * | 2005-01-26 | 2018-07-05 | Rodenstock Gmbh | Vorrichtung und Verfahren zum Bestimmen von optischen Parametern eines Benutzers; Computerprogrammprodukt |
DE102008003906B4 (de) * | 2008-01-10 | 2009-11-26 | Rodenstock Gmbh | Verwendung eines Fixationstargets und Vorrichtung |
WO2009116663A1 (ja) * | 2008-03-21 | 2009-09-24 | Takahashi Atsushi | 三次元デジタル拡大鏡手術支援システム |
US7736000B2 (en) * | 2008-08-27 | 2010-06-15 | Locarna Systems, Inc. | Method and apparatus for tracking eye movement |
US20140240313A1 (en) * | 2009-03-19 | 2014-08-28 | Real Time Companies | Computer-aided system for 360° heads up display of safety/mission critical data |
US9728006B2 (en) * | 2009-07-20 | 2017-08-08 | Real Time Companies, LLC | Computer-aided system for 360° heads up display of safety/mission critical data |
JP2011138064A (ja) * | 2009-12-29 | 2011-07-14 | Seiko Epson Corp | 視線移動量測定方法および視線移動量測定治具 |
JP5725646B2 (ja) * | 2010-03-10 | 2015-05-27 | ホーヤ レンズ マニュファクチャリング フィリピン インク | 累進屈折力レンズの設計方法、累進屈折力レンズ設計システム、および累進屈折力レンズの製造方法 |
US8371693B2 (en) * | 2010-03-30 | 2013-02-12 | National University Corporation Shizuoka University | Autism diagnosis support apparatus |
JP5549605B2 (ja) | 2011-01-13 | 2014-07-16 | 新日鐵住金株式会社 | 視線位置検出装置、視線位置検出方法、及びコンピュータプログラム |
EP2899585A4 (en) * | 2012-09-19 | 2016-05-18 | Nikon Corp | SIGHT LINE DETECTION DEVICE, DISPLAY METHOD, SIGHT LINE DETECTION DEVICE CALIBRATION METHOD, GLASS GLASS DESIGN METHOD, GLASS SELECTION GLASS SELECTION METHOD, GLASS GLASS MANUFACTURING METHOD, PRINTED MATERIAL, SUNGLASS GLASS SELLING METHOD, OPTICAL DEVICE, SIGHT LINE INFORMATION DETECTING METHOD, OPTICAL INSTRUMENT, DESIGN METHOD, OPTICAL INSTRUMENT, OPTICAL INSTRUMENT SELECTION METHOD, AND OPTICAL INSTRUMENT PRODUCTION METHOD |
JP6057396B2 (ja) * | 2013-03-11 | 2017-01-11 | Necソリューションイノベータ株式会社 | 3次元ユーザインタフェース装置及び3次元操作処理方法 |
US20160217578A1 (en) * | 2013-04-16 | 2016-07-28 | Red Lotus Technologies, Inc. | Systems and methods for mapping sensor feedback onto virtual representations of detection surfaces |
CN105378598B (zh) * | 2013-07-19 | 2018-12-25 | 索尼公司 | 检测装置和方法 |
US20160202947A1 (en) * | 2015-01-09 | 2016-07-14 | Sony Corporation | Method and system for remote viewing via wearable electronic devices |
US20170200316A1 (en) * | 2015-09-10 | 2017-07-13 | Sphere Optics Company, Llc | Advertising system for virtual reality environments |
JP2017129898A (ja) * | 2016-01-18 | 2017-07-27 | ソニー株式会社 | 情報処理装置、情報処理方法、およびプログラム |
-
2015
- 2015-06-26 WO PCT/JP2015/068522 patent/WO2016002656A1/ja active Application Filing
- 2015-06-26 JP JP2016531333A patent/JP6680207B2/ja active Active
- 2015-06-26 EP EP15814153.1A patent/EP3163410A4/en not_active Ceased
-
2016
- 2016-12-29 US US15/393,325 patent/US10460466B2/en not_active Expired - Fee Related
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH04288122A (ja) * | 1991-03-18 | 1992-10-13 | A T R Shichiyoukaku Kiko Kenkyusho:Kk | 視線表示装置 |
JPH11276438A (ja) * | 1998-03-30 | 1999-10-12 | Isuzu Motors Ltd | 視線計測装置 |
JP2009003701A (ja) * | 2007-06-21 | 2009-01-08 | Denso Corp | 情報システム及び情報処理装置 |
WO2011158511A1 (ja) * | 2010-06-17 | 2011-12-22 | パナソニック株式会社 | 指示入力装置、指示入力方法、プログラム、記録媒体および集積回路 |
Non-Patent Citations (2)
Title |
---|
See also references of EP3163410A4 * |
YUJI KOHASHI ET AL.: "Estimating 3D Point-of- regard Using Interest Points for Head-mounted Eye Tracker", IEICE TECHNICAL REPORT, vol. 109, no. 261, 22 October 2009 (2009-10-22), pages 5 - 10, XP058123557 * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107544732A (zh) * | 2016-06-23 | 2018-01-05 | 富士施乐株式会社 | 信息处理装置、信息处理系统和图像形成装置 |
JP2019075018A (ja) * | 2017-10-18 | 2019-05-16 | Kddi株式会社 | 撮影画像からカメラの位置姿勢の類似度を算出する装置、プログラム及び方法 |
JP2020135737A (ja) * | 2019-02-25 | 2020-08-31 | 株式会社パスコ | 注視行動調査システム、及び制御プログラム |
JP7266422B2 (ja) | 2019-02-25 | 2023-04-28 | 株式会社パスコ | 注視行動調査システム、及び制御プログラム |
Also Published As
Publication number | Publication date |
---|---|
JPWO2016002656A1 (ja) | 2017-05-25 |
JP6680207B2 (ja) | 2020-04-15 |
US20170109897A1 (en) | 2017-04-20 |
EP3163410A4 (en) | 2017-12-13 |
EP3163410A1 (en) | 2017-05-03 |
US10460466B2 (en) | 2019-10-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6680207B2 (ja) | 視線計測システム、視線計測方法、及びプログラム | |
CN108830894B (zh) | 基于增强现实的远程指导方法、装置、终端和存储介质 | |
JP6425780B1 (ja) | 画像処理システム、画像処理装置、画像処理方法及びプログラム | |
CN109615703B (zh) | 增强现实的图像展示方法、装置及设备 | |
JP5260705B2 (ja) | 3次元拡張現実提供装置 | |
RU2683262C2 (ru) | Устройство обработки информации, способ обработки информации и программа | |
US9740282B1 (en) | Gaze direction tracking | |
TWI496108B (zh) | AR image processing apparatus and method | |
WO2016029939A1 (en) | Method and system for determining at least one image feature in at least one image | |
US20180316877A1 (en) | Video Display System for Video Surveillance | |
JP2021520577A (ja) | 画像処理方法及び装置、電子機器並びに記憶媒体 | |
IL275047B1 (en) | Install a complex display in mind | |
US10996751B2 (en) | Training of a gaze tracking model | |
CN114241168A (zh) | 显示方法、显示设备及计算机可读存储介质 | |
JP2018010599A (ja) | 情報処理装置、パノラマ画像表示方法、パノラマ画像表示プログラム | |
EP3038061A1 (en) | Apparatus and method to display augmented reality data | |
CN112073640B (zh) | 全景信息采集位姿获取方法及装置、系统 | |
US20200211275A1 (en) | Information processing device, information processing method, and recording medium | |
CN112950711B (zh) | 一种对象的控制方法、装置、电子设备及存储介质 | |
JP6515022B2 (ja) | 観察領域推定方法、観察領域推定装置、及びプログラム | |
JP2020027390A (ja) | 注意対象推定装置及び注意対象推定方法 | |
JP2014112057A (ja) | 期待結合精度のリアルタイム表示方法および形状計測システム | |
JP2019144958A (ja) | 画像処理装置、画像処理方法およびプログラム | |
JP2014112056A (ja) | 被写体形状の3次元計測方法およびシステム | |
JP7175715B2 (ja) | 情報処理装置、情報処理方法及びプログラム |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 15814153 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2016531333 Country of ref document: JP Kind code of ref document: A |
|
REEP | Request for entry into the european phase |
Ref document number: 2015814153 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2015814153 Country of ref document: EP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |