
WO2013175816A1 - Distance measuring apparatus - Google Patents

Distance measuring apparatus

Info

Publication number
WO2013175816A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
pixel
color
optical system
interpolation
Prior art date
Application number
PCT/JP2013/054021
Other languages
English (en)
Japanese (ja)
Inventor
秀彰 高橋 (Hideaki Takahashi)
Original Assignee
オリンパス株式会社 (Olympus Corporation)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by オリンパス株式会社 (Olympus Corporation)
Publication of WO2013175816A1

Classifications

    • G - PHYSICS
    • G02 - OPTICS
    • G02B - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B7/00 - Mountings, adjusting means, or light-tight connections, for optical elements
    • G02B7/28 - Systems for automatic generation of focusing signals
    • G02B7/34 - Systems for automatic generation of focusing signals using different areas in a pupil plane
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 - Camera processing pipelines; Components thereof
    • H04N23/84 - Camera processing pipelines; Components thereof for processing colour signals
    • H04N23/843 - Demosaicing, e.g. interpolating colour pixel values
    • G - PHYSICS
    • G03 - PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03B - APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B13/00 - Viewfinders; Focusing aids for cameras; Means for focusing for cameras; Autofocus systems for cameras
    • G03B13/32 - Means for focusing
    • G03B13/34 - Power focusing
    • G03B13/36 - Autofocus systems

Definitions

  • the present invention relates to a distance measuring device for acquiring distance information of an object.
  • Japanese Patent Application Laid-Open No. 2001-174696 describes a color imaging apparatus that performs pupil division by color by interposing, in the photographing optical system, a pupil color division filter having different spectral characteristics for each partial pupil.
  • a subject image formed on a color imaging device through the pupil color division filter is captured by the color imaging device and output as an image signal.
  • This image signal is color separated, and distance information is generated by detecting a relative shift amount between pupil color divided color signals.
  • This distance information may be the distance to the subject itself, or it may be focusing information, i.e., the direction and amount of focus deviation, for example when the aim is autofocus (AF).
  • A distance measuring device capable of acquiring such distance information can be used in various devices such as microscopes and digital cameras.
  • When used in a microscope, it enables height measurement of a subject (also referred to as a sample).
  • In the following, assume the colors used for pupil color division are red (R) and blue (B) (see FIGS. 3 to 7, etc.).
  • Suppose the height of the subject 6 shown in FIG. 8 is measured by a microscope using a pupil color division filter as shown in FIG. 3.
  • The upper surface of the subject 6 is as shown in FIG. 9.
  • FIG. 21 is a view showing an example of the RB double image formed on the Bayer array sensor.
  • FIG. 22 is a diagram showing the pixel value of the R pixel and the pixel value of the B pixel acquired from the RB double image of FIG. 21.
  • a black circle connected by a solid line indicates a pixel value of the R pixel
  • a black square connected by a dotted line indicates a pixel value of the B pixel.
  • The data shown in FIG. 22 is subjected to, for example, ZNCC (Zero-mean Normalized Cross-Correlation; see Equation 1 in the embodiments below) to obtain correlation values, from which the pixel shift amount is calculated.
  • Since the amount of displacement calculated from this ZNCC is in units of pixel coordinates, sub-pixel interpolation is performed using, for example, sub-pixel estimation methods such as equiangular straight-line fitting or parabola fitting.
  • the shift amount calculated in this way can be proportionally converted based on the configuration of the imaging optical system and the pupil color division filter, so the distance in the height direction from the in-focus position can be obtained.
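  • As a rough illustration of this correlation step, the following minimal Python sketch (the function names, and the assumption that the R and B pixel values are given as 1-D arrays, are hypothetical and not from the patent) computes ZNCC over a range of integer shifts:

```python
import numpy as np

def zncc(f: np.ndarray, g: np.ndarray) -> float:
    """Zero-mean Normalized Cross-Correlation of two equal-length 1-D signals."""
    f0, g0 = f - f.mean(), g - g.mean()
    denom = np.sqrt((f0 ** 2).sum() * (g0 ** 2).sum())
    return float((f0 * g0).sum() / denom) if denom > 0 else 0.0

def correlation_curve(r_line: np.ndarray, b_line: np.ndarray, max_shift: int):
    """ZNCC between the R line and the B line shifted by 0..max_shift pixels."""
    n = len(r_line) - max_shift          # keep a constant overlap length
    return [zncc(r_line[:n], b_line[s:s + n]) for s in range(max_shift + 1)]

# Example: the integer shift with the highest correlation
# curve = correlation_curve(r_vals, b_vals, max_shift=8)
# best_int_shift = int(np.argmax(curve))
```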
  • There is also described a correlation calculation method, correlation calculation device, focus detection device, and imaging device in which the DC or AC component is extracted and the correlation is accurately detected by aligning the signal levels of the extracted components.
  • According to a proportional conversion equation obtained from the configuration of the imaging optical system and the pupil color division filter, the pixel shift amount needs, for example, to be detected with an accuracy of 0.33 pixels or less.
  • However, the waveform shapes of the individual color images may differ, and in that case the detection accuracy of the shift amount decreases. There is therefore a need for a technique that further improves the detection accuracy of the shift amount, and hence the distance measurement accuracy.
  • The present invention has been made in view of the above circumstances, and aims to provide a distance measuring device with higher distance measurement accuracy when acquiring information on the subject distance based on a plurality of color images obtained by imaging pupil-color-divided light.
  • A distance measurement device according to the present invention includes: an imaging optical system for forming an image of a subject; a pupil color division optical system, disposed on the optical path of the imaging optical system, that divides the pupil of the imaging optical system by color by giving plural partial pupils of the imaging optical system different spectral characteristics; a color imaging element, in which a plurality of pixels are arrayed, that photoelectrically converts the subject image formed by the imaging optical system via the pupil color division optical system and outputs the image; a pixel interpolation unit that generates interpolation pixels for the plural color images relating to different partial pupils among the color images obtained by color separation of the image output from the imaging element; and a distance information generation unit that generates a plurality of interpolated color images by combining the color images relating to the different partial pupils with the interpolation pixels generated by the pixel interpolation unit, detects the relative displacement of the subject image in the plurality of interpolated color images, and generates information related to the subject distance based on that displacement.
  • FIG. 6 is a diagram for explaining a pixel array of the imaging device in the first embodiment.
  • FIG. 6 is a view for explaining an example of the arrangement of a pupil color division filter according to the first embodiment;
  • FIG. 7 is a plan view showing the state of subject light flux focusing when an object at a longer distance side than the in-focus position is imaged in the first embodiment.
  • FIG. 7 is a view showing, for each color component, the shape of blur formed by light from one point on the subject that is on the far side of the in-focus position in the first embodiment.
  • FIG. 7 is a plan view showing the state of subject light flux focusing when the subject on the near distance side of the in-focus position is imaged in the first embodiment.
  • FIG. 7 is a view showing, for each color component, the shape of blur formed by light from one point on an object on the near distance side of the in-focus position in the first embodiment.
  • FIG. 2 is a perspective view showing a subject in the first embodiment.
  • FIG. 2 is a view showing the upper surface of a black box-like object in the subject of the first embodiment.
  • FIG. 7 is a view showing a state in which the optical subject image of the R component and the optical subject image of the B component of the white printed matter formed on the image pickup element in the first embodiment are shifted.
  • FIG. 7 is a diagram showing pixel values of R and B images of white printed matter “1” for pixel arrangement on line A, and interpolated pixel values and composite values in the first embodiment.
  • FIG. 16 is a diagram showing pixel values of R and B images of white printed matter “1” and a shifted pixel value and a combined value in the pixel arrangement on the line A in the second embodiment.
  • FIG. 14 is a diagram for explaining optical conditions of the imaging optical system in the third embodiment.
  • FIG. 16 is a view showing a state in which the user designates the measurement position of the subject on the screen of the monitor in the fourth embodiment.
  • A block diagram showing the structure of the distance measuring device in Embodiment 5 of the present invention.
  • FIG. 17 shows a state of a subject image of an R component and a B component in an original image formed on an imaging surface of an imaging element and a partially enlarged view of a line A in the fifth embodiment.
  • FIG. 23 is a diagram showing the pixel values of R pixels and B pixels conventionally acquired from the RB double image of FIG. 21.
  • FIGS. 1 to 11 show Embodiment 1 of the present invention
  • FIG. 1 is a block diagram showing the configuration of the distance measuring device 1.
  • the distance measurement device 1 generates information on the distance to the subject based on an image obtained by capturing an image of the subject.
  • Examples of applications of the distance measuring device 1 include industrial microscopes, digital cameras, digital video cameras, camera-equipped mobile phones, camera-equipped PDAs, camera-equipped personal computers, surveillance cameras, endoscopes, and the like, but it is of course not limited to these.
  • the subject 6 is an object whose distance is to be measured, and is also referred to as a sample, for example, in the field of a microscope.
  • the distance measuring device 1 measures the distance to the subject 6 and includes, for example, a lens barrel 2, a controller 3, a personal computer (PC) 4, and a monitor 5.
  • The lens barrel 2 includes an imaging optical system 10, an imaging device 11, a pupil color division filter 14, a ring illumination 16, an imaging device control unit 21, a zoom control unit 22, a focus control unit 23, and a focus drive mechanism 24.
  • the imaging optical system 10 is for forming an optical subject image on the imaging element 11 and includes, for example, a zoom lens 12, an aperture 13, and an objective lens 15.
  • the zoom lens 12 is for changing the focal length of the imaging optical system 10 to perform zooming (change of imaging magnification).
  • The diaphragm 13 changes the passing range of the light beam through the imaging optical system 10 to adjust the brightness of the subject image formed on the imaging element 11.
  • the pupil diameter of the imaging optical system 10 is also changed by changing the aperture diameter of the diaphragm 13.
  • the objective lens 15 is an optical element mainly responsible for the power in the imaging optical system 10.
  • The imaging device 11 is configured by arraying a plurality of pixels, photoelectrically converts the subject image formed by the imaging optical system 10 through the pupil color division filter 14, and outputs an image in which the converted pixel values are arrayed.
  • The color image pickup device 11 is an element that receives the subject image as light in each of a plurality of wavelength bands (for example, but not limited to, RGB), photoelectrically converts it, and outputs an electric signal.
  • The color imaging device may be a single-plate device with an on-chip color filter, a three-plate system using dichroic prisms to separate the light into RGB color components, or a device that acquires RGB information according to position in the depth direction of the semiconductor at each pixel position; any method by which imaging information of a plurality of wavelength bands can be acquired in pixel units may be used.
  • the imaging device 11 is a single-plate imaging device provided with an element color filter of primary color Bayer arrangement as shown in FIG. 2 on a chip, for example.
  • FIG. 2 is a diagram for explaining a pixel array of the image pickup device 11.
  • In the basic 2×2 array, a B filter (forming a B pixel) and an R filter (forming an R pixel) are arranged along one diagonal, and two G filters (forming G pixels) are arranged along the other diagonal; this 2×2 block is repeated as the basic array.
  • Although a wide range of devices such as CMOS sensors or CCD sensors can be applied as the imaging device 11, it is not necessary to read out all pixels in order to measure the distance to the subject, so it is preferable to employ a CMOS sensor that can read out only desired pixels.
  • The pupil color division filter 14 is disposed on the light path of the imaging optical system 10 and is a pupil color division optical system that divides the pupil of the imaging optical system 10 by color by giving plural partial pupils within the pupil different spectral characteristics. Accordingly, the pupil color division filter 14 can also be called a band-limiting filter, since it band-limits the transmitted light for each partial pupil.
  • the pupil color division filter 14 is configured as shown in FIG.
  • FIG. 3 is a diagram for explaining an example of the configuration of the pupil color division filter 14.
  • The pupil of the imaging optical system 10 is divided into a first partial pupil and a second partial pupil.
  • The left half is an RG filter 14r that passes the G (green) and R (red) components and blocks the B (blue) component; the right half is a GB filter 14b that passes the G and B components and blocks the R component.
  • Therefore, the pupil color division filter 14 passes all of the G component contained in the light passing through the aperture of the diaphragm 13 (and hence the pupil) of the imaging optical system 10, passes the R component only through the partial pupil in the left half of the aperture, and passes the B component only through the partial pupil in the right half of the aperture.
  • The RGB spectral transmission characteristics of the pupil color division filter 14 and the RGB spectral characteristics of the element color filter of the image sensor 11 are preferably identical, or as close as possible.
  • the ring illumination 16 is an illumination device that illuminates the subject 6 with illumination light.
  • The ring illumination 16 has a plurality of light sources, such as LEDs, arranged in a ring around the optical path of the imaging optical system 10 so as not to shield it, so that illumination shadows hardly occur on the subject.
  • The ring illumination 16 is provided because an industrial microscope is assumed here, as described above; however, another illumination device may be used, or natural light may be used without providing any illumination device.
  • the imaging element control unit 21 controls the imaging element 11 and performs drive control of the imaging element 11 and readout control from the imaging element 11. Further, when the imaging device 11 is an analog imaging device, the imaging device control unit 21 also performs A / D conversion of a signal read from the imaging device 11.
  • the zoom control unit 22 performs control to move the zoom lens 12 in the optical axis direction and to change the focal length of the imaging optical system 10.
  • The focus control unit 23 controls the focus drive mechanism 24 so that the optical image of the subject 6 formed by the imaging optical system 10 is positioned on the imaging surface of the imaging device 11 (that is, brought into focus).
  • the focus drive mechanism 24 performs drive to adjust the focus position of the imaging optical system 10 based on the control signal from the focus control unit 23.
  • the focus drive mechanism 24 moves the lens barrel 2 itself in the direction of the optical axis, that is, moves the lens barrel 2 itself in a direction away from the subject 6 or in a direction approaching the subject 6.
  • Alternatively, the focus drive mechanism 24 may be configured as a mechanism that moves a focus lens included in the objective lens 15 in the optical axis direction.
  • the controller 3 is connected to the lens barrel 2 described above, and controls the entire system of the distance measuring device 1.
  • the system controller 31 controls the whole of the distance measuring device 1 in an integrated manner, including each operation unit and the like in the controller 3 and each control unit and the like in the lens barrel 2.
  • the system controller 31 also performs processing such as color separation of an image output from the imaging device 11 into a plurality of color images.
  • the memory 32 is for temporarily buffering the image signal received from the lens barrel 2, and includes, for example, an SDRAM or the like.
  • The pixel interpolation operation unit 35 is a pixel interpolation unit that, for the color images relating to different partial pupils among the plurality of color images obtained by color separation of the image output from the imaging device 11, generates interpolation pixels at least along the shift detection direction, that is, the direction in which the relative shift amount is to be detected.
  • The pixel interpolation operation unit 35 of the present embodiment performs interpolation calculation on the R color image and the B color image, the targets of shift detection among the R, G, and B color images obtained when the system controller 31 color-separates the image read out from the image sensor 11 and A/D converted, and thereby generates R interpolation pixels and B interpolation pixels.
  • The pixel shift detection unit 33 generates a plurality of interpolated color images by combining the color images relating to the different partial pupils with the interpolation pixels generated by the pixel interpolation operation unit 35, and detects the relative displacement amount of the subject image in these interpolated color images; it forms part of the distance information generation unit.
  • The pixel shift detection unit 33 of the present embodiment generates an R interpolated color image by combining the R color image color-separated by the system controller 31 with the R interpolation pixels generated by the pixel interpolation operation unit 35, and similarly generates a B interpolated color image by combining the B color image with the B interpolation pixels. It then detects the shift amount of the subject image between the generated R interpolated color image and B interpolated color image.
  • The distance calculation unit 34 generates information related to the subject distance (which may be the distance to the subject itself, or the direction and amount of focus displacement, etc.) and also forms part of the distance information generation unit.
  • The former is suitable, for example, for measuring the shape (height) of the subject, and the latter is suitable, for example, for achieving focus on the subject.
  • the PC 4 is connected to the controller 3 described above, and has a control application 41 which is software that doubles as a user interface of the distance measuring device 1.
  • the monitor 5 displays an image signal of the subject 6 transmitted from the controller 3 via the PC 4 and application information on the control application 41.
  • the user operates the distance measuring device 1 by using an input device (a keyboard, a mouse or the like) provided in the PC 4 while observing the display on the monitor 5.
  • FIG. 4 is a plan view showing how the subject light flux is focused when an object on the far side of the in-focus position is imaged, and FIG. 5 shows, for each color component, the shape of the blur formed by light from one point on such an object.
  • FIG. 6 is a plan view showing how the subject light flux is focused when an object on the near side of the in-focus position is imaged, and FIG. 7 shows, for each color component, the shape of the blur formed by light from one point on such an object.
  • When the subject is at the in-focus position, the light emitted from one point on it is collected at one point on the imaging device 11 regardless of color component, forming a point image. Therefore, no positional displacement occurs between colors, and a subject image without color blur is formed.
  • When the object OBJf on the far side of the in-focus position is imaged, a circularly blurred subject image IMGg is formed for the G component, a right-half semicircular blurred subject image IMGr for the R component, and a left-half semicircular blurred subject image IMGb for the B component. A blurred image therefore results in which the R component subject image IMGr is shifted to the right and the B component subject image IMGb is shifted to the left.
  • The left and right positions of the R and B components here are opposite to the left and right positions of the R component transmission region (RG filter 14r) and the B component transmission region (GB filter 14b) of the pupil color division filter 14 as viewed from the imaging device 11. Further, as shown in FIG. 5, the G component subject image IMGg is a blurred image spanning the R component subject image IMGr and the B component subject image IMGb.
  • The farther the subject OBJf moves from the in-focus position toward the far side, the larger the blur becomes, and the greater the distances become between the center of gravity Cr of the R component subject image IMGr and the center of gravity Cb of the B component subject image IMGb, between Cr and the center of gravity Cg of the G component subject image IMGg, and between Cg and Cb.
  • When the object OBJn is closer than the in-focus position, the light emitted from one point on it forms, as shown in FIGS. 6 and 7, a circularly blurred subject image IMGg for the G component, a left-half semicircular blurred subject image IMGr for the R component, and a right-half semicircular blurred subject image IMGb for the B component. When imaging the object OBJn on the near side of the in-focus position, a blurred image therefore results in which the R component subject image IMGr is shifted to the left and the B component subject image IMGb is shifted to the right.
  • In this case the left and right positions of the R and B components are the same as the left and right positions of the R component transmission region (RG filter 14r) and the B component transmission region (GB filter 14b) of the pupil color division filter 14 as viewed from the imaging device 11.
  • the object image IMGg of the G component is a blurred image that straddles the object image IMGr of the R component and the object image IMGb of the B component (see FIG. 7) as well at this short distance side.
  • Also on the near side, the farther the object moves from the in-focus position, the greater the distances become between the center of gravity Cr of the R component subject image IMGr and the center of gravity Cg of the G component subject image IMGg, between Cg and the center of gravity Cb of the B component subject image IMGb, and between Cr and Cb.
  • Consequently, the object distance can be calculated by computing the amount of deviation (amount of phase difference) between the color images based on their correlation.
  • In the present embodiment, the amount of deviation is detected between the R component subject image IMGr and the B component subject image IMGb, which have the largest separation between barycentric positions; this is expected to allow detection with higher accuracy than detecting the displacement between the R component subject image IMGr and the G component subject image IMGg, or between the G component subject image IMGg and the B component subject image IMGb.
  • FIG. 8 is a perspective view showing the subject 6.
  • FIG. 9 is a view showing the upper surface of the black box-like object 6 b in the subject 6.
  • FIG. 10 is a view showing a state in which the optical subject image IMGr of the R component and the optical subject image IMGb of the B component of the white printed matter formed on the imaging device 11 are deviated.
  • The R signal image (R image) and the B signal image (B image) likewise become a similar double image.
  • The R image and the B image are transmitted to the PC 4 through the controller 3 and displayed on the monitor 5 by the control application 41. (The image displayed on the monitor 5 may be one in which the deviation between the R image and the B image has been corrected, or one captured without passing through the pupil color division filter 14; when simply observing the subject, an image without such deviation is preferable.) Furthermore, an operation screen related to the control application 41 is also displayed on the monitor 5.
  • Gr is a G pixel disposed between R pixels in the horizontal line
  • Gb is a G pixel disposed between B pixels in the horizontal line.
  • FIG. 11 is a diagram showing the pixel values of the R image and the B image of the white printed matter “1” for the pixel array on the line A, and the interpolation pixel value and the composite value.
  • the aperture ratio of each pixel is assumed to be 100%.
  • As for the pixel values, assume for example that the value is 100 when the R image covers the entire aperture (100%) of an R pixel, 0 when the R image does not cover the R pixel at all, and, for intermediate cases, proportional to the percentage of the R pixel's aperture covered by the R image.
  • the shift amount in the X direction of the R image and the B image of the white printed matter “1” is 3.4 pixels with the pixel pitch in the X direction as a unit. Therefore, it is most desirable that the amount of displacement obtained as a detection result be equal to the actual displacement amount of 3.4 pixels.
  • Since the imaging element 11 has a Bayer array, G pixels (Gr or Gb pixels) are disposed between the R pixels and between the B pixels. Therefore, the acquired R pixel values and B pixel values are not only sampled every other pixel in the horizontal direction, but the R and B sampling positions are also offset from each other by one pixel in both the horizontal and vertical directions.
  • [Equation 1] ZNCC (Zero-mean Normalized Cross-Correlation):
    $$\mathrm{ZNCC} = \frac{\sum_{i}\,(R_i - \bar{R})(B_i - \bar{B})}{\sqrt{\sum_{i}(R_i - \bar{R})^2}\;\sqrt{\sum_{i}(B_i - \bar{B})^2}}$$
    where $R_i$ and $B_i$ are the pixel values of the R and B images at position $i$, and $\bar{R}$, $\bar{B}$ are their means over the correlation window.
  • The image shift amount in the left column of Table 1 is in units of 2 pixels (more exactly, 2 pixel pitches), corresponding to the R pixels and B pixels being arranged every other pixel (every 2 pixels) in the horizontal direction.
  • ZNCC: normalized cross-correlation operation
  • SSD: sum of squared differences
  • SAD: sum of absolute differences
  • The highest correlation value is 0.92, at an image shift amount of 1, and the second highest is 0.80, at an image shift amount of 2. The true shift amount is therefore estimated to lie between image shift amounts 1 and 2.
  • To calculate the shift amount with higher accuracy, sub-pixel interpolation is performed using the obtained correlation values. Methods such as equiangular straight-line fitting and parabola fitting are known; here, for example, equiangular straight-line fitting is performed using the ratio of the obtained correlation values.
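  • A minimal sketch of equiangular straight-line fitting around a correlation peak (the function name and calling convention are assumptions for illustration, and a strict peak is assumed):

```python
def equiangular_subpixel(corr, peak: int) -> float:
    """Refine an integer correlation peak to sub-pixel precision by
    equiangular (isogonal) straight-line fitting.

    corr : correlation values indexed by integer shift
    peak : index of the maximum value; must have a neighbour on each side
    """
    cm, c0, cp = corr[peak - 1], corr[peak], corr[peak + 1]
    # Fit two lines of equal and opposite slope through the three points;
    # their intersection gives a sub-pixel offset in (-0.5, +0.5).
    if cp > cm:
        delta = (cp - cm) / (2.0 * (c0 - cm))
    else:
        delta = (cp - cm) / (2.0 * (c0 - cp))
    return peak + delta
```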
  • the pixel interpolation operation unit 35 interpolates and estimates an R pixel value corresponding to the Gr pixel position in the R-Gr row and a B pixel value corresponding to the Gb pixel position in the Gb-B row.
  • an average value of R pixel values on both sides is calculated as an R interpolation pixel value
  • an average value of B pixel values on both sides is calculated as a B interpolation pixel value.
  • the pixel shift detection unit 33 combines the original R pixel value and the R interpolation pixel value calculated by the pixel interpolation operation unit 35 to generate an R interpolation color image (see R pixel composite value in FIG. 11). Similarly, the original B pixel value and the calculated B interpolation pixel value are combined to generate a B interpolation color image (see the B pixel composite value in FIG. 11).
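  • A minimal sketch of this averaging interpolation and composition step, assuming the R (or B) samples of one Bayer row are given as a 1-D array holding one value per 2-pixel period:

```python
import numpy as np

def interpolate_and_compose(samples: np.ndarray) -> np.ndarray:
    """From R (or B) samples taken at every other pixel of a row, build the
    interpolated color image for that row: original values at their own
    positions, and the average of the two neighbours at the skipped
    (G pixel) positions."""
    out = np.empty(2 * len(samples) - 1)
    out[0::2] = samples                               # original pixel values
    out[1::2] = 0.5 * (samples[:-1] + samples[1:])    # interpolated values
    return out

# Example: interpolate_and_compose(np.array([0., 25., 100., 75.]))
# -> [0., 12.5, 25., 62.5, 100., 87.5, 75.]
```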
  • The image shift amount in the left column of Table 2 is in units of one pixel (more precisely, one pixel pitch), corresponding to the interpolated color image having a value at every pixel in the horizontal direction.
  • The highest correlation value is 0.97, at an image shift amount of 4 pixels, and the second highest is 0.94, at 3 pixels. The true shift amount is therefore estimated to lie between 3 and 4 pixels.
  • the pixel shift detection unit 33 performs sub-pixel interpolation using the obtained correlation value in order to calculate the shift amount with higher accuracy.
  • Here, equiangular straight-line fitting is performed using the ratio of the obtained correlation values.
  • The image shift amount of 3.64 pixels calculated using the interpolated color image has a detection error of 0.24 pixels with respect to the actual shift amount of 3.4 pixels. That is, the deviation can be calculated with higher accuracy than the 0.4-pixel detection error obtained without the above interpolation (when the acquired data is used as-is).
  • The detection error of 0.24 pixels also satisfies the accuracy requirement of 0.33 pixels or less described in the background art above.
  • The interpolation in the pixel interpolation operation unit 35 uses the average of the two pixels on either side in the X direction, but a method that estimates pixel values from more adjacent pixels in the X direction may be used, as may estimation using pixel values in the vertical (Y) direction or an oblique direction. Furthermore, an estimation method based on a plurality of images, such as a super-resolution technique, may be used.
  • The distance calculation unit 34 uses the proportional conversion formula obtained from the configuration of the imaging optical system 10 and the pupil color division filter 14 to convert the relative shift amount calculated by the pixel shift detection unit 33 into distance information in the height direction of the subject.
  • [Equation 2: the proportional conversion formula]
  • the measurement height error of the distance measuring device 1 of the present embodiment is 0.036 mm.
  • This numerical value indicates the amount of deviation from the in-focus position; when the lower surface of the black box-like object 6b (that is, the surface of the white plate 6a shown in FIG. 8) is at the current in-focus position, it indicates the height of the black box-like object 6b.
  • The control application 41 can set a plurality of measurement points and calculates and displays the differences between them, so the user can obtain height information for desired points on the observed subject.
  • The various arithmetic units and the like are described above as hardware within the controller 3, but the present invention is not limited to this; the PC 4 can also take over the functions of the controller 3 (in which case the controller 3 may be omitted).
  • The configuration in which the distance measuring device 1 includes the lens barrel 2, the controller 3, the PC 4, and the monitor 5 has been described, but the distance measuring device 1 may also be configured, for example, as a digital camera, in which the lens barrel 2 corresponds to the camera's lens barrel and lens drive mechanism, the controller 3 and the PC 4 correspond to the camera's CPU, and the monitor 5 corresponds to its liquid crystal monitor, etc.
  • According to Embodiment 1, the number of samplings can be increased in the direction in which the relative shift amount is calculated; specifically, the R pixels and B pixels used for shift detection can be increased in the horizontal direction, the shift calculation direction. This improves the detection accuracy of the shift amount and, as a result, the distance measurement accuracy for the subject.
  • With the distance measuring device of the present embodiment, when information on the subject distance is acquired based on color images of multiple colors obtained by imaging pupil-color-divided light with a color imaging device, the distance measurement accuracy can thus be made higher.
  • In Embodiment 2 of the present invention, FIG. 12 is a block diagram showing the configuration of the distance measuring device 1, and FIG. 13 is a diagram showing the pixel values of the R and B images of white printed matter "1" for the pixel arrangement on line A, together with the shifted pixel values and composite values.
  • the interpolation pixels are obtained by calculation, but in the present embodiment, the interpolation pixels are obtained by shifting the image sensor 11 and performing imaging a plurality of times.
  • Compared with the configuration shown in FIG. 1, the element shift unit 17 and the element shift control unit 25 are added to the lens barrel 2, and the pixel interpolation operation unit 35 is removed.
  • The element shift unit 17 and the element shift control unit 25 serve as the pixel interpolation unit, translating the image pickup device 11 along the shift detection direction within a plane perpendicular to the optical axis of the imaging optical system 10.
  • The element shift unit 17 minutely moves the imaging element 11 at least in the direction in which the shift amount is detected (for example, the horizontal pixel array direction).
  • If a camera-shake correction mechanism is provided, it may be used as the element shift unit 17.
  • the element shift control unit 25 controls driving of the element shift unit 17.
  • It is assumed that the subject conditions are the same as in Embodiment 1: the subject is shaped as shown in FIGS. 8 and 9, and the actual displacement amount of the obtained RB double image is 3.4 pixels.
  • the system controller 31 When the user inputs an instruction to start measurement via the control application 41, the system controller 31 generally controls each control system related to image capturing of the distance measuring device 1, and first captures an original image with the imaging element 11.
  • the original image thus taken is temporarily stored in the memory 32.
  • the system controller 31 transmits, to the element shift control unit 25, an instruction to shift the imaging element 11.
  • the element shift control unit 25 When receiving this command, the element shift control unit 25 generates a drive signal and transmits the drive signal to the element shift unit 17.
  • When the element shift unit 17 receives this drive signal, it shifts the image sensor 11 in the shift detection direction; here, it is shifted rightward by one pixel (more exactly, by one horizontal pixel pitch) in the X direction of the pixel array.
  • the system controller 31 generally controls each control system related to image capturing of the distance measuring device 1 to cause the imaging element 11 to capture a shift image.
  • the shift image thus captured is also temporarily stored in the memory 32.
  • the pixel displacement detection unit 33 combines the R pixel value of the original image stored in the memory 32 and the R pixel value of the shift image to generate an R interpolation color image (see the R pixel composite value in FIG. 13). Similarly, the B pixel value of the original image and the B pixel value of the shifted image are combined to generate a B-interpolated color image (see the B pixel combined value in FIG. 13).
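  • A minimal sketch of this composition, assuming the sensor was shifted rightward by exactly one pixel pitch so that the R (or B) pixels of the shift image sample the positions between the original samples (the interleaving order depends on the shift direction convention):

```python
import numpy as np

def compose_from_two_exposures(orig: np.ndarray, shifted: np.ndarray) -> np.ndarray:
    """Interleave R (or B) samples from the original exposure with those from
    the exposure taken after a one-pixel sensor shift, doubling the sampling
    density along the shift-detection direction.

    orig, shifted : samples of the same row, one value per 2-pixel period."""
    assert len(orig) == len(shifted)
    out = np.empty(2 * len(orig))
    out[0::2] = orig       # positions sampled by the original exposure
    out[1::2] = shifted    # positions sampled after the one-pixel shift
    return out
```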
  • The image shift amount in the left column of Table 3 is in units of one pixel (more precisely, one pixel pitch), corresponding to the interpolated color image having a value at every pixel in the horizontal direction.
  • The highest correlation value is 0.97, at an image shift amount of 3 pixels, and the second highest is 0.92, at 4 pixels. The true shift amount is therefore estimated to lie between 3 and 4 pixels.
  • the pixel shift detection unit 33 performs sub-pixel interpolation using the obtained correlation value in order to calculate the shift amount with higher accuracy.
  • Here, equiangular straight-line fitting is performed using the ratio of the obtained correlation values.
  • The image shift amount of 3.39 pixels calculated using the interpolated color image obtained by shifting the imaging device 11 has a detection error of only 0.01 pixels with respect to the actual shift amount of 3.4 pixels. That is, the deviation can be calculated with far higher accuracy than the 0.4-pixel detection error when the acquired data is used as-is (the case shown in Table 1), and also more accurately than the 0.24-pixel detection error when the interpolated color image obtained by the interpolation of Embodiment 1 is used (the case shown in Table 2). The 0.01-pixel detection error also satisfies the accuracy requirement of 0.33 pixels or less described in the background art above.
  • The distance calculation unit 34 uses the proportional conversion formula obtained from the configurations of the imaging optical system 10 and the pupil color division filter 14 to convert the shift amount calculated by the pixel shift detection unit 33 into distance information in the height direction of the subject.
  • The shift amount is not limited to one pixel; it may be selected appropriately, in view of the pixel pitch and its multiples, as long as the condition that the shifted image does not completely match the original image is satisfied.
  • Instead of shifting the imaging element 11, the imaging optical system 10 may be shifted to perform imaging a plurality of times; in this case, the element shift unit 17 and the element shift control unit 25 shift the imaging optical system 10.
  • Alternatively, both the imaging optical system 10 and the imaging element 11 may be shifted to perform imaging a plurality of times; in this case, the element shift unit 17 and the element shift control unit 25 shift both.
  • In this way, the image sensor 11 is translated by the element shift unit 17 and the element shift control unit 25, which serve as element moving units; a plurality of images are acquired from the imaging device 11 at different positions along the detection direction, the acquired images are color-separated, and the pixel shift detection unit 33 generates the interpolated color images by combining the color images relating to the same color.
  • In Embodiment 3 of the present invention, FIG. 14 is a block diagram showing the configuration of the distance measuring device 1, and FIG. 15 is a diagram for explaining the optical conditions of the imaging optical system 10.
  • As in Embodiment 2, the interpolation pixels are obtained by shifting the image sensor 11 and imaging a plurality of times, but here the shift amount of the image sensor 11 is set based on the optical conditions.
  • the distance measuring device 1 of this embodiment has a configuration in which an element shift amount calculation unit 36 is added to the controller 3 in addition to the configuration shown in FIG. 12 of the second embodiment described above.
  • The element shift amount calculation unit 36 is a movement amount calculation unit that calculates the parallel movement amount (shift amount) of the imaging element 11 between the acquisition of two images, based on the optical conditions of the imaging optical system 10.
  • The element shift unit 17 and the element shift control unit 25, which are element movement units, translate the imaging element 11 by the parallel movement amount calculated by the element shift amount calculation unit 36.
  • the imaging optical system 10 has optical conditions as schematically shown in FIG.
  • the optical axis of the imaging optical system 10 is shown as O.
  • D indicates the aperture diameter of the diaphragm 13 in the imaging optical system 10 (therefore, D/2 is the distance from the optical axis O to the edge of the aperture).
  • LG indicates the distance (barycentric distance) from the optical axis O to the center of gravity of the RG filter 14r in the pupil color division filter 14; the distance from the optical axis O to the center of gravity of the GB filter 14b is likewise LG.
  • the focal length of the imaging optical system 10 is f.
  • the focal length f is an amount that changes as the zoom lens 12 moves.
  • The numerical aperture NA is, more precisely, the object-side NA, and depends on the aperture diameter D of the diaphragm 13 (the smaller D is, the smaller the NA; as D increases, the NA also increases).
  • Z denotes the distance between the focal position and the imaging surface 11a of the imaging device 11, converted by the optical magnification.
  • The shift amount between the R and B subject images on the imaging surface 11a (located at optical distance Z from the focal position) is the distance between the image of the barycentric position of the RG filter 14r and the image of the barycentric position of the GB filter 14b on the imaging surface 11a, denoted X in the figure (therefore, X/2 is the distance from the optical axis O to the image of either center of gravity).
  • Since the imaging optical system 10 is generally configured by combining a plurality of lenses to obtain desired characteristics, it is difficult to express the displacement amount X between the R and B subject images at an arbitrary zoom magnification by a simple mathematical expression.
  • This resolution is the resolution in the optical axis direction, and a certain resolution (for example, 0.05 mm or better) is required.
  • The element shift amount calculation unit 36 obtains the required resolution (for example, 1/2 DOF, or more generally k × DOF with k a predetermined coefficient) based on the depth of focus (DOF) determined from the optical conditions of the set observation magnification.
  • It then acquires the shift amount of the image sensor 11 needed to obtain the required resolution in the height direction of the subject 6, following steps such as (1) and (2) below.
  • The respective parameters may be obtained by calculation, or look-up tables (LUTs) for them may be prepared in advance and stored in the element shift amount calculation unit 36, the LUT being referred to at the time of distance measurement.
  • (1) One of various optical conditions, such as the zoom magnification, the numerical aperture NA, or the aperture diameter D, is determined.
  • (2) The depth of focus (DOF) is determined based on this optical condition, and the required resolution (k × DOF) is determined from the DOF.
  • (3) The shift detection accuracy required to achieve that resolution is determined, and the shift amount is set to a value equal to or less than it (for example, by multiplying by a predetermined value of 1 or less; if the predetermined value is too small, the accuracy becomes excessively higher than required, so a value close to 1 is preferable).
  • Such a procedure is repeated while changing the optical conditions of the imaging optical system 10, for example, the zoom magnification.
  • When the actual shift amount of the imaging device 11 is determined, limitations are imposed not only by the optical conditions of the imaging optical system 10 but also by the pixel pitch of the imaging device 11 (the imaging device 11 is shifted with an accuracy finer than the pixel pitch) and by various conditions of the entire imaging system (including the imaging optical system 10, the pupil color division filter 14, and the imaging element 11).
  • When the optical magnification is 0.7, the sensor is shifted by one pixel (for example, to the right) after the original image is captured, so that the R and B pixels occupy positions that were held by G pixels during the original capture; the shift image is photographed in this state, the original image and the shift image are combined to generate the interpolated color image, and the relative shift amount is calculated based on it.
  • When the optical magnification is 1.0, the sensor is shifted by 2/3 pixel (for example, to the right) after the original image is captured and a shift image is photographed; it is then shifted by a further 2/3 pixel (4/3 pixels from the original position) and another shift image is photographed; the original image and the two shift images are combined to generate the interpolated color image, and the shift amount is calculated based on it.
  • When the optical magnification is 2.0, the sensor is shifted by 1/2 pixel (for example, to the right) after the original image is captured and the first shift image is photographed; it is shifted by a further 1/2 pixel (one pixel from the original position) and the second shift image is photographed; it is then shifted once more and the third shift image is photographed; the four images, i.e., the original image and the first to third shift images, are combined to generate the interpolated color image, and the relative shift amount is calculated based on it.
  • The element shift amount calculation unit 36 refers to, for example, the LUT shown in Table 4: when the current optical magnification of the imaging optical system 10 is less than 0.85, the shift amount is determined by referring to the 0.7× optical magnification column; when it is 0.85 or more and less than 1.5, by referring to the 1.0× column; and when it is 1.5 or more, by referring to the 2.0× column, and so on.
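  • A minimal sketch of such a lookup, assuming a Table 4-style LUT with the thresholds and schedules from the example above (the function name is hypothetical):

```python
def shift_schedule(optical_magnification: float):
    """Return the cumulative sensor-shift positions (in pixel pitches) at
    which additional shift images are captured; the original image is at 0."""
    if optical_magnification < 0.85:     # 0.7x column: one 1-pixel shift
        step, n = 1.0, 1
    elif optical_magnification < 1.5:    # 1.0x column: two 2/3-pixel shifts
        step, n = 2.0 / 3.0, 2
    else:                                # 2.0x column: three 1/2-pixel shifts
        step, n = 0.5, 3
    return [step * (i + 1) for i in range(n)]

# shift_schedule(1.0) -> [0.666..., 1.333...] (shift images at 2/3 and 4/3 pixels)
```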
  • Since the amount by which the imaging element 11 is shifted is determined based on the optical conditions of the imaging optical system 10, shift images for generating an interpolated color image of appropriate accuracy can be obtained efficiently, with neither insufficient nor excessive accuracy.
  • Since the shift amount of the imaging device 11 within the plane perpendicular to the optical axis is calculated based on the depth of focus, an amount along the optical axis, multiplied by a predetermined value, the configuration is suitable, for example, for industrial microscopes.
  • In Embodiment 4 of the present invention, FIG. 16 is a block diagram showing the configuration of the distance measuring device 1, and FIG. 17 is a diagram showing the user designating the measurement position of the subject on the screen of the monitor 5.
  • The present embodiment has basically the same configuration as Embodiment 2, but the user can specify which part of the subject is to be subjected to distance measurement, and only the part of the imaging device 11 corresponding to the measurement is read out.
  • The imaging device 11 in this embodiment adopts a configuration that can read out arbitrary pixels, a specific example being a CMOS sensor (a device that cannot do this is not adopted in this embodiment).
  • the distance measuring device 1 of this embodiment has a configuration in which a reading area calculation unit 37 is added to the controller 3 in addition to the configuration shown in FIG. 12 of the second embodiment described above.
  • the read area calculation unit 37 is a pixel area setting unit that calculates and sets a pixel area for detecting a shift amount by the pixel shift detection unit 33 when capturing a shift image and reading out pixel information from the imaging device 11 .
  • the imaging element control unit 21 drives the imaging element 11 so as to read out only the pixel values of the pixel area calculated by the reading area calculation unit 37 for the shift image.
  • the system controller 31 When the user inputs an instruction to start measurement via the control application 41, the system controller 31 generally controls each control system related to image capturing of the distance measuring device 1, and first captures an original image with the imaging element 11.
  • the original image thus taken is temporarily stored in the memory 32.
  • The photographed original image is transmitted to the PC 4 via the controller 3 and displayed on the screen 5a of the monitor 5 by the control application 41.
  • At this time, a message such as "Please specify a measurement part" may be displayed on the screen 5a of the monitor 5.
  • The user operates an input device such as a mouse to move the pointer 5p on the screen 5a and designates the portion to be measured (in the illustrated example, the white printed matter "1" on the black box 6b).
  • the designated measurement point is transmitted from the control application 41 to the read area calculation unit 37.
  • the reading area calculation unit 37 calculates a pixel area to be read from the image pickup device 11 at the time of shift imaging in the subsequent stage.
  • The pixel area is calculated from the line containing the measurement point and a line adjacent to it (chosen so that one line contains R pixels and the other contains B pixels); for example, the R pixels and B pixels within a fixed range to the left and right of the measurement point in those lines can be taken as the region.
  • Suppose the measurement point designated by the user is (2000, 1500) and is a B pixel in a Gb-B row.
  • The read area operation unit 37 selects, from the 1499th and 1501st lines adjacent to the line containing the measurement point, one line containing R pixels; here, line 1499 is selected.
  • The reading area calculation unit 37 then selects, for example, the range of 32 pixels centered on the measurement point in the 1500th line (specifically, the X-coordinate range 1985 to 2016), and further selects the pixel range with the same X coordinates in line 1499.
  • the pixel range selected here corresponds to the line A described in the second embodiment, that is, the line A is set so as to include the measurement points designated by the user.
  • the R pixel and the B pixel included in the pixel range selected by the read area operation unit 37 are as follows.
  • the reading area calculation unit 37 sets the R pixel and the B pixel included in the selected pixel range as the pixel area, and transmits the information of the set pixel area to the imaging element control unit 21.
  • The imaging element control unit 21 generates read addresses so that only the received pixel area, that is, a total of 32 pixels (16 R pixels and 16 B pixels), is read out, and performs readout control of the imaging element 11.
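  • A minimal sketch of this region selection, following the (2000, 1500) worked example (the Bayer parity handling and function name are assumptions for illustration):

```python
def readout_region(meas_x: int, meas_y: int, width: int = 32):
    """Compute the R- and B-pixel addresses to read around a designated
    measurement point, assumed here to be a B pixel on a Gb-B row, with the
    adjacent R-Gr row chosen one line above (as in the worked example)."""
    half = width // 2
    xs = range(meas_x - half + 1, meas_x + half + 1)   # e.g. 1985..2016
    b_line, r_line = meas_y, meas_y - 1                # e.g. lines 1500 and 1499
    b_pixels = [(x, b_line) for x in xs if (x - meas_x) % 2 == 0]  # B parity
    r_pixels = [(x, r_line) for x in xs if (x - meas_x) % 2 != 0]  # offset by one
    return r_pixels, b_pixels

# r, b = readout_region(2000, 1500)   # 16 R pixels and 16 B pixels, 32 in total
```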
  • In the above, the R pixels and B pixels within a fixed 32-pixel range in the X direction centered on the designated measurement point are read out; alternatively, after the original image has been photographed, only a region more suitable for the correlation calculation may be set as the readout pixel area, based on the state of the subject in the original image.
  • For example, edge detection may be performed on the R and B pixels in the vicinity of the measurement point, and the R and B pixels in a region including the detected edge may be set as the pixel area.
  • Alternatively, when two points are specified, the R and B pixels on the line segment connecting them may be set as the pixel region.
  • If the entire selected pixel range is read out, that is, a range that also includes the Gr and Gb pixels not used for the shift amount calculation, the readout control of the imaging device 11 becomes somewhat easier.
  • FIGS. 18 to 20 show Embodiment 5 of the present invention: FIG. 18 is a block diagram showing the configuration of the distance measuring device 1.
  • FIG. 19 shows the state of the R component subject image IMGr and the B component subject image IMGb in the subject image 6i of the original image formed on the imaging surface 11a of the imaging device 11, with a partially enlarged view of line A.
  • FIG. 20 shows the state of the R component subject image IMGr and the B component subject image IMGb when the subject image 6i of the shift image formed on the imaging surface 11a is inclined and slightly larger than in the original image, also with a partially enlarged view of line A.
  • This embodiment has basically the same configuration as Embodiment 2, but it determines whether relative movement of the subject 6 with respect to the imaging device 11 (other than the shift itself) occurred between the exposures before and after shifting the imaging device 11; when such movement has occurred, accurate distance measurement is not possible, so the unnecessary arithmetic processing is not performed.
  • the distance measurement device 1 of this embodiment has a configuration in which a shift image comparison operation unit 38 is added to the controller 3 in addition to the configuration shown in FIG. 12 of the second embodiment described above.
  • the shift image comparison operation unit 38 acquires an original image acquired before shifting the imaging element 11 and a shift amount after moving the imaging element 11 at least in a pixel area where the shift amount is detected by the pixel shift detection unit 33. It is an image comparison unit that compares the match between a pixel shift image that is an image.
FIG. 19 shows the subject image 6i related to the original image formed on the imaging surface 11a of the imaging element 11; on line A, a shift amount as shown in the drawing occurs between the image portion AR of the R-component subject image IMGr and the image portion AB of the B-component subject image IMGb. FIG. 20 shows the subject image 6i related to the shift image (an image captured after shifting the imaging element 11 by, for example, one pixel in the X direction following the capture of the original image) formed on the imaging surface 11a of the imaging element 11. Here, the subject image 6i is somewhat rotated clockwise in the XY plane and is slightly larger in size than in the original image of FIG. 19. The reason the subject image 6i has become slightly larger is presumably that the subject 6 moved somewhat in the direction of approaching the lens barrel 2 (upward in the height direction).
If an interpolated color image is generated by combining an original image and a shift image that were captured before and after such relative movement of the subject 6 with respect to the imaging device 11 (movement other than that caused by the shift itself), an accurate shift amount cannot be obtained even when the shift amount is calculated from that interpolated color image; in other words, accurate distance measurement cannot be performed. Therefore, when the shift image comparison operation unit 38, by comparing the original image and the shift image, detects an image deviation that makes generation of the interpolated color image inappropriate, the generation of the interpolated color image and the generation of information on the subject distance by the pixel shift detection unit 33 and the distance calculation unit 34 based on the interpolated color image are stopped.
First, the pixel shift detection unit 33 calculates the shift amount on line A based on the R and B color images of the original image stored in the memory 32. Next, the pixel shift detection unit 33 similarly calculates the shift amount on line A based on the R and B color images of the shift image. The shift image comparison operation unit 38 then compares the shift amount obtained from the original image with the shift amount obtained from the shift image, and determines whether the difference between the two lies within the range of -1 to +1.
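The following is a minimal Python sketch of this determination, under the assumption that the shift amount on line A is found by minimising the mean absolute difference between the R and B profiles; the document does not prescribe a particular correlation measure, so this choice and all names here are illustrative.

    import numpy as np

    def shift_amount(r_line, b_line, max_shift=8):
        """Integer displacement of b_line relative to r_line that
        minimises the mean absolute difference over the overlap."""
        n = len(r_line)
        best_s, best_err = 0, float("inf")
        for s in range(-max_shift, max_shift + 1):
            # Overlapping samples when b_line is displaced by s pixels.
            r_seg = r_line[max(0, s): n + min(0, s)]
            b_seg = b_line[max(0, -s): n + min(0, -s)]
            err = np.abs(r_seg.astype(float) - b_seg.astype(float)).mean()
            if err < best_err:
                best_s, best_err = s, err
        return best_s

    def no_inadvertent_movement(orig_r, orig_b, shift_r, shift_b):
        """Determination condition described above: the shift amounts from
        the two exposures must not differ by more than one pixel."""
        diff = shift_amount(orig_r, orig_b) - shift_amount(shift_r, shift_b)
        return -1 <= diff <= 1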
If the difference lies within this range, the system controller 31 determines that no inadvertent movement has occurred, and causes the pixel shift detection unit 33 to perform the process of generating an interpolated color image by combining the original image and the shift image, and the high-accuracy shift amount detection based on that interpolated color image. Further, the system controller 31 causes the distance calculation unit 34 to perform the process of generating information on the subject distance based on the detected shift amount.
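As a rough illustration of how the interpolated color line can be assembled in this case, the sketch below interleaves same-color samples from the two exposures, under the assumption stated above that the imaging element was shifted by exactly one pixel in the X direction, so that the shift image samples the positions between the original samples; the function name is illustrative and the exact combination performed by the pixel shift detection unit 33 may differ.

    import numpy as np

    def interleave_line(orig_samples, shift_samples):
        """orig_samples, shift_samples: numpy arrays of same-color pixel
        values on line A from the two exposures; the result has twice
        the sampling density of either exposure alone."""
        out = np.empty(2 * len(orig_samples), dtype=orig_samples.dtype)
        out[0::2] = orig_samples    # positions sampled by the original image
        out[1::2] = shift_samples   # gap positions filled by the 1-pixel shift
        return out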
Conversely, if the difference falls outside this range, the system controller 31 determines that an inadvertent movement has occurred and performs control to notify the user of this via the application 41. In this case, the system controller 31 cancels the generation of the interpolated color image by the pixel shift detection unit 33 and the detection of the shift amount based on the interpolated color image; consequently, the generation of information on the subject distance by the distance calculation unit 34 based on the shift amount obtained from the interpolated color image is also automatically stopped. Instead of the high-accuracy processing based on the interpolated color image, the system controller 31 then has the pixel shift detection unit 33 and the distance calculation unit 34 perform their processing based on the original image alone and generates information on the subject distance.
When notifying the user that an inadvertent movement has occurred, the system controller 31 may also present, as reference values, the information on the subject distance obtained from the original image alone together with the information on the subject distance obtained from the shift image alone. In that case, the system controller 31 additionally causes the pixel shift detection unit 33 and the distance calculation unit 34 to perform their processing based on the shift image alone to generate information on the subject distance.
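A compact sketch of this fallback control flow follows, continuing the functions defined in the earlier sketch; estimate, combine, and notify stand in for the processing of the pixel shift detection unit 33, the distance calculation unit 34, and the application 41, and are placeholders rather than names taken from this document.

    def measure(orig_pair, shift_pair, estimate, combine, notify):
        """orig_pair / shift_pair: (R line, B line) from the two exposures.
        estimate, combine, notify: callables supplied by the system."""
        if no_inadvertent_movement(orig_pair[0], orig_pair[1],
                                   shift_pair[0], shift_pair[1]):
            # High-accuracy path: interpolated color image from both exposures.
            return estimate(combine(orig_pair, shift_pair))
        notify("inadvertent movement detected; reporting reference values only")
        # Reference values from each exposure alone instead of aborting outright.
        return {"original_only": estimate(orig_pair),
                "shift_only": estimate(shift_pair)}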
The determination condition is not limited to the above; other determination conditions that take the subject conditions and the like into account may be adopted, and the determination condition may be variably controlled according to the subject conditions.
According to this embodiment, substantially the same effects as those of the second embodiment described above can be obtained. In addition, it is determined whether an inadvertent relative movement of the subject 6 with respect to the imaging device 11 has occurred, and when it has, the synthesis of the corresponding images is stopped and the subsequent processing based on the interpolated color image is also stopped, so that the increase in load caused by unnecessary arithmetic processing is suppressed and the user is prevented from being given erroneous information on the subject distance. Furthermore, since the user is notified of the movement, the user can be given an opportunity to perform the measurement again precisely.
The present invention may also be a control method for controlling the distance measuring device as described above, a control program for causing a computer to control the distance measuring device as described above, or a non-transitory computer-readable recording medium that records such a control program.
The present invention is not limited to the above-described embodiments as they are; in the implementation stage, the constituent elements can be modified and embodied without departing from the scope of the invention. In addition, various aspects of the invention can be formed by appropriate combinations of the plurality of constituent elements disclosed in the above-described embodiments; for example, some constituent elements may be deleted from all the constituent elements shown in an embodiment, and constituent elements of different embodiments may be combined as appropriate. As a matter of course, various modifications and applications are possible without departing from the scope of the invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Color Television Image Signal Generators (AREA)
  • Studio Devices (AREA)
  • Measurement Of Optical Distance (AREA)
  • Focusing (AREA)
  • Automatic Focus Adjustment (AREA)

Abstract

The present invention relates to a distance measuring apparatus comprising: an imaging optical system (10) for forming a subject image; a pupil color division filter (14) for color-dividing the pupil of the imaging optical system (10); a color imaging element (11) that photoelectrically converts the formed subject image and outputs an image; a pixel interpolation calculation unit (35) for generating interpolation pixels for the pixels arrayed in a plurality of color images obtained by color-separating the image output by the imaging element (11); a pixel shift detector (33) that combines the color-separated color images with the interpolation pixels generated by the pixel interpolation calculation unit (35) to generate a plurality of interpolated color images, and detects the relative shift amount of the subject image among the plurality of interpolated color images; and a distance calculation unit (34) for generating information related to a subject distance based on the detected shift amount.
PCT/JP2013/054021 2012-05-25 2013-02-19 Distance measuring apparatus WO2013175816A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2012120044A JP2013246052A (ja) Distance measuring device
JP2012-120044 2012-05-25

Publications (1)

Publication Number Publication Date
WO2013175816A1 true WO2013175816A1 (fr) 2013-11-28

Family

ID=49623521

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2013/054021 WO2013175816A1 (fr) Distance measuring apparatus

Country Status (2)

Country Link
JP (1) JP2013246052A (fr)
WO (1) WO2013175816A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10914960B2 (en) 2016-11-11 2021-02-09 Kabushiki Kaisha Toshiba Imaging apparatus and automatic control system

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101566619B1 (ko) * 2014-06-03 2015-11-09 Chung-Ang University Industry-Academic Cooperation Foundation Apparatus and method for distance estimation using a dual off-axis color filter aperture
JP2016102733A (ja) 2014-11-28 2016-06-02 Kabushiki Kaisha Toshiba Lens and imaging device
WO2018193544A1 (fr) * 2017-04-19 2018-10-25 Olympus Corporation Image capture device and endoscope device
JP6818702B2 (ja) 2018-01-15 2021-01-20 Kabushiki Kaisha Toshiba Optical inspection device and optical inspection method

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001174696A (ja) * 1999-12-15 2001-06-29 Color imaging device
JP2006285094A (ja) * 2005-04-04 2006-10-19 Nikon Corp Autofocus camera and autofocus device
JP2010139665A (ja) * 2008-12-10 2010-06-24 Canon Inc Focus detection device and control method therefor
JP2010210810A (ja) * 2009-03-09 2010-09-24 Olympus Imaging Corp Focus detection device
JP2012054867A (ja) * 2010-09-03 2012-03-15 Olympus Imaging Corp Imaging device
JP2012063456A (ja) * 2010-09-14 2012-03-29 Olympus Corp Imaging device
JP2012068761A (ja) * 2010-09-21 2012-04-05 Toshiba Digital Media Engineering Corp Image processing device


Also Published As

Publication number Publication date
JP2013246052A (ja) 2013-12-09

Similar Documents

Publication Publication Date Title
US11099459B2 (en) Focus adjustment device and method capable of executing automatic focus detection, and imaging optical system storing information on aberrations thereof
US9247227B2 (en) Correction of the stereoscopic effect of multiple images for stereoscope view
JP4699995B2 (ja) Compound-eye imaging device and imaging method
US9742982B2 Image capturing apparatus and method for controlling image capturing apparatus
JP6173156B2 (ja) Image processing device, imaging device, and image processing method
US10122911B2 Image pickup apparatus, control method, and non-transitory computer-readable storage medium with aberration and object information acquisition for correcting automatic focus detection
JP5947601B2 (ja) Focus detection device, control method therefor, and imaging device
WO2013027504A1 (fr) Imaging device
JP2010271670A (ja) Imaging device
WO2013175816A1 (fr) Distance measuring device
JP5882789B2 (ja) Image processing device, image processing method, and program
JP2016061609A (ja) Distance measurement device, imaging device, and distance measurement method
JP5784395B2 (ja) Imaging device
WO2013005489A1 (fr) Image capture device and image processing device
JP6357646B2 (ja) Imaging device
JP5378283B2 (ja) Imaging device and control method therefor
JP2013097154A (ja) Distance measuring device, imaging device, and distance measuring method
JP6326631B2 (ja) Imaging device
JP5786355B2 (ja) Defocus amount detection device and electronic camera
JP2014215436A (ja) Imaging device, control method therefor, and control program
JP6012396B2 (ja) Image processing device, image processing method, and program
JP6370004B2 (ja) Imaging device and imaging method
KR20170015158A (ko) Control device, imaging device, and control method
WO2013133115A1 (fr) Defocus amount detection device and photographic device
JP2016184776A (ja) Image processing device, image processing method, program, and imaging device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 13793759
    Country of ref document: EP
    Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 13793759
    Country of ref document: EP
    Kind code of ref document: A1