
WO2015159791A1 - Distance measuring device and distance measuring method - Google Patents

Distance measuring device and distance measuring method

Info

Publication number
WO2015159791A1
WO2015159791A1 (application PCT/JP2015/061099; JP2015061099W)
Authority
WO
WIPO (PCT)
Prior art keywords
imaging optical
optical systems
imaging
image
lens
Prior art date
Application number
PCT/JP2015/061099
Other languages
English (en)
Japanese (ja)
Inventor
片桐 哲也
基広 浅野
Original Assignee
コニカミノルタ株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by コニカミノルタ株式会社
Publication of WO2015159791A1

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C3/00Measuring distances in line of sight; Optical rangefinders
    • G01C3/02Details
    • G01C3/06Use of electric means to obtain final indication

Definitions

  • the present invention relates to searching for corresponding points between images.
  • the parallax (parallax angle) between stereo images is measured, a depth (distance) is calculated from the parallax, and camera settings are corrected.
  • corresponding points are searched for between the two images.
  • a window containing a point of interest is set around that point on the standard image, which is one of two images obtained by photographing the same object from different viewpoints, as with a stereo camera.
  • a plurality of windows of the same size are set on the reference image, which is the other image.
  • a correlation value is calculated between the window on the standard image and each window on the reference image; the window on the reference image with the highest correlation value is found, and the barycentric position of that window is taken as the corresponding point of the point of interest.
  • SAD (Sum of Absolute Differences)
  • an imaging unit is used that includes a lens array composed of two or more lens pairs, where the two lenses within each pair have the same curvature radius and different pairs have different curvature radii. From the single-eye images captured through the lenses, the best-focused single-eye image pair is extracted from the pairs corresponding to each lens pair, parallax detection is performed using that pair, and a distance image is acquired.
  • although the two images used for the corresponding point search are images of the same object photographed from different viewpoints, a region can exist that appears in only one of the images. This is because objects have depth and a front-to-back order, so an object in the foreground can hide (occlude) an object behind it. Therefore, in the technique disclosed in Patent Document 2, the same object is photographed by a plurality of cameras, the cameras are grouped so that the epipolar line is common within each group, the sum of the correlation values between the standard image and the reference image is calculated for each group, and the corresponding points of any group whose correlation is determined to be low are not adopted.
  • JP 2011-149931 A; Japanese Patent Laid-Open No. 10-289315
  • in the technique of Patent Document 2, the range in which occlusion occurs can be reduced, but a plurality of cameras must be arranged so as to share a common epipolar line, which makes it difficult to perform distance measurement easily.
  • the present invention has been made in view of the above circumstances, and an object of the present invention is to provide a distance measuring device and a distance measuring method that can easily and accurately measure a distance while suppressing occlusion.
  • images are captured using at least three first imaging optical systems in a compound eye camera having a plurality of first and second imaging optical systems having different focal lengths.
  • a distance to the measurement object is obtained based on the image and an image captured using at least two second imaging optical systems.
  • the two second imaging optical systems are arranged so that their base line length is longer than the base line length of two of the three first imaging optical systems. Therefore, the distance measuring device and the distance measuring method according to the present invention can suppress occlusion and can measure a distance easily and accurately.
  • FIG. 1 is a block diagram illustrating a configuration of a distance measuring apparatus according to the embodiment.
  • FIG. 2 is a diagram illustrating a configuration of an array camera in the imaging unit of the distance measuring device.
  • FIG. 2A is a perspective view of the array lens unit and the array imaging unit, and FIG. 2B is a perspective view of the array imaging unit.
  • the distance measuring device 100 performs a corresponding point search on a plurality of images obtained by a stereo (stereoscopic) method and obtains the parallax between those images to measure a distance (the distance to a predetermined measurement object, or the coordinates of the measurement object in a predetermined coordinate system); it obtains the distance to the measurement object based on a plurality of images acquired by an array camera.
  • the distance measuring device 100 realizes distance measurement that is highly accurate regardless of the distance to the measurement object and less susceptible to occlusion, by combining the lens configuration of the array camera with the choice of images used for the corresponding point search.
  • the distance measuring device 100 includes an imaging unit 1 (image acquisition unit), a corresponding point search unit 2, an occlusion determination unit 3, a distance calculation unit 4, a coordinate conversion unit 5, a storage unit 6, and an overall control unit 7.
  • white arrows indicate the flow of data, and broken arrows indicate the flow of control data.
  • the overall control unit 7 controls each functional unit from the imaging unit 1 to the coordinate conversion unit 5 to measure the distance to the object, and stores the measured distance data (the three-dimensional coordinates of the corresponding points in a predetermined coordinate system) in the storage unit 6.
  • the imaging unit 1 acquires a plurality of images obtained by imaging the same measurement target (object, subject).
  • the imaging unit 1 includes, for example, the array camera 10 illustrated in FIG. 2. It is assumed that each camera of the imaging unit 1 has known camera parameters (image center, focal length, and orientation with respect to the reference camera (rotation, translation)).
  • the array camera 10 includes an array lens unit 12 having a plurality of single-eye lenses and an array imaging unit 11 having a plurality of single-eye imaging units that respectively capture the optical images of objects formed by the plurality of single-eye lenses.
  • the array lens unit 12 of the array camera 10 included in the imaging unit 1 of the distance measuring device 100 will be described as an example including nine lenses arranged in a two-dimensional matrix of 3 rows and 3 columns. As will be described later with reference to FIG. 3, the arrangement and the number of lenses are not limited to this example.
  • the array lens unit 12 includes a plurality of individual lenses 121.
  • One single lens 121 is configured to include one or a plurality of optical lenses along its optical axis.
  • the plurality of single-eye lenses 121 are arranged so that their optical axes are substantially parallel to each other, in a two-dimensional matrix along two linearly independent directions, more specifically the mutually orthogonal X and Y directions.
  • the plurality of individual lenses 121 are shown as nine individual lenses 121-11 to 121-33 arranged in a two-dimensional matrix in three rows and three columns.
  • the single lens 121-11 and the single lens 121-33 are lenses having a long focusing distance compared to the focusing distance of the other single lenses 121.
  • the array imaging unit 11 includes a plurality of single-eye imaging units 111.
  • one single-eye imaging unit 111 includes a plurality of photoelectric conversion elements (a plurality of pixels) arranged in a two-dimensional matrix; each photoelectric conversion element converts received light into an electrical signal according to the amount of light and outputs the signal as the data of one pixel in the image.
  • the plurality of single-eye imaging units 111 correspond to the plurality of single-eye lenses 121 in the array lens unit 12 and are arranged so that their respective imaging surfaces lie on the same plane, as in the example illustrated in FIG. 2.
  • the plurality of single-eye imaging units 111 are arranged in a two-dimensional matrix in two linearly independent directions, more specifically, in two directions of X and Y directions orthogonal to each other.
  • the plurality of single-eye imaging units 111 are shown as nine single-eye imaging units 111-11 to 111-33 arranged in a two-dimensional matrix in three rows and three columns.
  • the plurality of single-eye imaging units 111 may be configured to include a plurality of solid-state imaging elements arranged in a two-dimensional matrix on the same substrate; in the illustrated example, however, a single solid-state imaging element (not shown) is used.
  • the effective pixel area of this single solid-state imaging element is divided into a plurality of areas arranged in a two-dimensional matrix so as to correspond to the single-eye imaging units 111, each of these areas serving as one single-eye imaging unit 111.
  • Each photoelectric conversion element of the single-eye imaging unit 111 photoelectrically converts the received light into an electrical signal corresponding to the amount of light, and outputs the electrical signal as data of each pixel in the image.
  • the plurality of single-eye imaging units 111 output image data showing the same subject, mutually shifted by approximately the parallax.
  • the imaging unit 1 performs ordinary image processing, such as white balance processing, filter processing, gradation conversion processing, and color space conversion processing, on the plurality of image data output from the plurality of single-eye imaging units 111, generates the final image data, and outputs it to the corresponding point search unit 2.
  • FIG. 3 is a diagram illustrating a configuration example of an array lens unit in the imaging unit.
  • one rectangle represents one individual lens 121.
  • FIG. 3A shows an example of the array lens unit 12 composed of six individual lenses 121 arranged in a 2 x 3 two-dimensional matrix, FIG. 3B an example composed of nine individual lenses 121 arranged in a 3 x 3 two-dimensional matrix, and FIG. 3C an example composed of eight individual lenses 121 arranged in a two-dimensional matrix of two rows and four columns.
  • an "N" in a rectangle indicates a lens with the shorter focal length, and an "F" a lens with the longer focal length; that is, the examples of FIG. 3 include lenses of two focal lengths. Hereinafter, a lens having the short focal length is called an "N lens" and a lens having the long focal length an "F lens". The number of N lenses constituting the array lens unit 12 is larger than the number of F lenses.
  • the array lens unit 12 shown in FIG. 2 has a configuration indicated by an arrow 30 in FIG. 3B.
  • a circle "○" in the rectangle of an N lens indicates the N lens that captures the standard image for the corresponding point search.
  • the corresponding point search unit 2 performs the corresponding point search using the image captured by this N lens as the standard image and the images of the other N lenses as reference images.
  • the F lens rectangles are not marked with a circle "○".
  • the array lens unit 12 need only include two F lenses, although there may also be three or more.
  • hereinafter, an image captured by an N lens is referred to as an "N lens image", and an image captured by an F lens as an "F lens image".
  • the first condition is that the images to be paired are of the same type: N lens images are paired with N lens images, and F lens images with F lens images. This is so that corresponding points are searched for between images that are in focus to the same degree, enabling high-precision distance measurement.
  • the second condition is that the base line length between the N lenses capturing a paired N lens image is shorter than the base line length between the F lenses capturing a paired F lens image; in other words, the F lens images are paired so that their base line is longer than that of the N lens pairs.
  • lengthening the base line raises ranging accuracy, compensating for the fact that distance measurement with the F lens pair is otherwise less accurate than with the N lens pair.
  • the base line length between certain N lenses may happen to be longer than the base line length between the F lenses, but in the embodiment such an N lens pair with a long base line is not used for ranging.
  • the third condition is that the number of N lens image pairs is larger than the number of F lens image pairs. Occlusion rarely occurs in an image pair captured with long-focal-length lenses, but occurs easily in a pair captured with short-focal-length lenses, so more N lens pairs are provided so that occlusion-free pairs can be selected (a configuration-check sketch follows this item).
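  • As an illustration of these three conditions, the following minimal Python sketch checks them for a hypothetical 3 x 3 layout; the grid positions, lens pitch, F-lens placement, and the set of pairs used for ranging are assumptions for illustration, not values taken from the patent.

```python
import math

PITCH_MM = 5.0                      # assumed lens pitch on the array
lenses = {                          # label -> (type, grid position); layout assumed
    "121-11": ("F", (0, 0)), "121-12": ("N", (1, 0)), "121-13": ("N", (2, 0)),
    "121-21": ("N", (0, 1)), "121-22": ("N", (1, 1)), "121-23": ("N", (2, 1)),
    "121-31": ("N", (0, 2)), "121-32": ("N", (1, 2)), "121-33": ("F", (2, 2)),
}

def baseline(a, b):
    """Base line length (mm) between the principal points of lenses a and b."""
    (xa, ya), (xb, yb) = lenses[a][1], lenses[b][1]
    return PITCH_MM * math.hypot(xb - xa, yb - ya)

# Pairs used for ranging: the central N lens 121-22 against each surrounding
# N lens, plus the single diagonal F pair (an assumption mirroring FIG. 3B).
n_pairs = [("121-22", other) for other, (kind, _) in lenses.items()
           if kind == "N" and other != "121-22"]
f_pairs = [("121-11", "121-33")]

# Condition 1: each pair combines lenses of a single focal-length type.
assert all(lenses[a][0] == lenses[b][0] for a, b in n_pairs + f_pairs)
# Condition 2: every used N-pair base line is shorter than the F-pair base line.
assert all(baseline(a, b) < baseline(*f_pairs[0]) for a, b in n_pairs)
# Condition 3: there are more N pairs than F pairs.
assert len(n_pairs) > len(f_pairs)
print(len(n_pairs), "N pairs and", len(f_pairs), "F pair: conditions 1-3 hold")
```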
  • FIG. 8 is a diagram for explaining occlusion when photographing a measurement object at a long distance.
  • FIG. 9 is a diagram for explaining occlusion in the case of photographing a measurement object at a short distance.
  • FIG. 8 shows an image P11 captured by the single lens 121-11 and an image P33 captured by the single lens 121-33.
  • the single lens 121-11 and the single lens 121-33 are F lenses.
  • the occlusion portion is limited, and the possibility that a corresponding point is found by template matching is quite high.
  • FIG. 9 shows a captured image P21, a captured image P22, and a captured image P23 obtained by the individual lens 121-21, the individual lens 121-22, and the individual lens 121-23, respectively.
  • the occlusion portion is large due to the head portion H of the person to be imaged.
  • the distance measuring apparatus 100 can calculate a more accurate distance even when occlusion occurs by searching for corresponding points in more pairs using the N lens image.
  • the lens that captures the standard image is preferably at the center of the other lenses. This is because, when different directions are viewed relative to the center of the captured images, there is a region free of occlusion in one direction or another.
  • the arrangement of the single-lens 121 (N lens and F lens) of the array lens unit 12 as shown in FIG. 3 is an example of a desirable arrangement.
  • this arrangement satisfies the first to third conditions described above. It is most desirable to use as the standard image the image of the N lens indicated by the circle "○", that is, the image of the N lens located at the center of the whole set of single-eye lenses, or of the N lens closest to that center.
  • the following fourth condition may further be imposed on the lens combination.
  • at least one of the plurality of N lens image pairs is a pair of images from N lenses positioned at different optical axis heights. That is, it is desirable that the straight line connecting the intersections of the optical axes of one pair's lenses with the imaging surfaces of their corresponding single-eye imaging units 111 intersects the straight line connecting the corresponding intersections for some other pair. In other words, it is desirable that the straight line connecting the principal points of the lenses of one pair intersects the straight line connecting the principal points of the lenses of some other pair (a line-intersection sketch follows this item).
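  • A minimal sketch of this intersection test; the 2-D principal-point coordinates are chosen purely for illustration.

```python
def direction_cross(p1, p2, p3, p4):
    """Cross product of the direction vectors of lines p1-p2 and p3-p4;
    it is zero exactly when the two lines are parallel (never intersect)."""
    d1x, d1y = p2[0] - p1[0], p2[1] - p1[1]
    d2x, d2y = p4[0] - p3[0], p4[1] - p3[1]
    return d1x * d2y - d1y * d2x

# Illustrative principal points: standard lens B1 with a horizontal partner R1
# and a vertical partner R3 (coordinates are assumptions, in lens-pitch units).
B1, R1, R3 = (1.0, 1.0), (2.0, 1.0), (1.0, 0.0)
print(direction_cross(B1, R1, B1, R3) != 0)   # True: the two lines intersect
```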
  • FIG. 4 is a diagram for explaining a combination of a standard image and a reference image in the distance measuring apparatus.
  • FIG. 4A shows an example of the array lens unit 12 including nine individual lenses 121 arranged in a two-dimensional matrix of 3 rows and 3 columns; the lens that captures the standard image is lens B1.
  • the fourth condition is satisfied by selecting as reference images not only the images of lens R1 and lens R2 but also the images of lens R3 and lens R4, whose optical axis positions differ from those of R1 and R2. The image of the upper-left or lower-right lens may also be selected.
  • FIGS. 4B, 4C, and 4D show examples of the array lens unit 12 including eight individual lenses 121 arranged in a two-dimensional matrix of two rows and four columns; the lens that captures the standard image is lens B2.
  • the images of lenses R5 and R6 are selected as reference images in FIG. 4B, the images of lenses R5 and R7 in FIG. 4C, and the lenses shown there in FIG. 4D; in each case the fourth condition is satisfied.
  • when the pair of lens B1 and lens R1 is selected, it is more desirable to select the pair of lens B1 and lens R3 as the other pair; that is, to select lens pairs such that the straight line connecting the principal points of lens B1 and lens R1 and the straight line connecting the principal points of lens B1 and lens R3 are orthogonal. This is because image pairs formed by lenses whose principal-point lines are orthogonal are more likely to be robust against occlusion than image pairs formed by lenses whose lines are not orthogonal.
  • the image pair of the lens B2 and the lens R5 and the image pair of the lens B2 and the lens R6 can be said to be an image pair resistant to occlusion.
  • the arrangement and the number of lenses of the array lens unit 12 of the distance measuring device 100 are not limited to the examples in FIG. 3 and may be any arrangement and number; it is sufficient that there are two or more focal lengths. One such example is shown in FIG. 14.
  • FIG. 14 is a diagram showing a modification of the configuration of the array lens unit in the imaging unit: an array lens unit 12 composed of 25 individual lenses 121 arranged in a two-dimensional matrix of 5 rows and 5 columns.
  • hereinafter, the lens numbered "0" is called "lens 0", the lens numbered "1" is called "lens 1", and so on.
  • the array lens unit 12 shown in FIG. 14 is composed of lenses having three focal lengths.
  • One rectangle indicates one individual lens 121, and lenses having the same hatching indicate lenses having the same focal length. That is, the lenses 0 to 5, 9, 10, 13, 14, and 18 to 23 indicate the lenses having the longest focal length.
  • Lenses 6, 8, 15, and 17 represent lenses with the second longest focal length, and lenses 7, 11, N, 12, and 16 represent lenses with the shortest focal length.
  • the base line length between the pair of lenses is proportional to the focal length.
  • the base line length decreases stepwise, in order, from that between the lenses with the longest focal length, to that between the lenses with the second longest focal length, to that between the lenses with the shortest focal length.
  • the base line length in the case where the corresponding point search is performed with the pair of the lens 0 and the lens 23 is the longest.
  • the base line length when the corresponding point search is performed with the pair of the lens 6 and the lens 17 is the second longest.
  • when the image of lens N is used as the standard image and the images of lens 7, lens 11, lens 12, and lens 16 are used as reference images, the base line length is the shortest. As the number of focal-length types increases, as in this modification, it becomes possible to perform distance measurement with higher accuracy.
  • the corresponding point search unit 2 performs the corresponding point search process using the image pairs described above and, as the result for each pair, outputs graph data as shown in FIG. 6C to the occlusion determination unit 3.
  • when the array lens unit 12 has the configuration indicated by the arrow 30 in FIG. 3B, corresponding points are searched for in the six pairs formed with each of the six surrounding N lens images, and seven coincidence graphs (including the one for the F lens pair) are output to the occlusion determination unit 3.
  • FIG. 5 is a diagram for explaining the relationship between the epipolar line and the reference image.
  • FIG. 6 is a diagram for explaining epipolar lines: FIG. 6A shows an example of a standard image, FIG. 6B an example of a reference image, and FIG. 6C an example of a coincidence (matching degree) graph.
  • FIG. 7 is a diagram for explaining the distance measuring method.
  • the epipolar line is a line obtained by projecting a straight line connecting the point OL and the target point X (the line of sight of the camera on the base image side) onto the reference image.
  • the points O_L and O_R are the projection centers of the cameras that capture the standard image and the reference image, respectively; the points e_L and e_R are the projections of the other camera (the epipoles); and the point X_L on the standard image is the image of the object point X. Since the target point X lies on the line of sight of the standard-image camera, its image in the reference image appears on the epipolar line, which is the projection of that line of sight onto the reference image; this image is the point X_R.
  • the point X_R is obtained as the corresponding point.
  • assume that the point AP on the standard image in FIG. 6A is the point of interest, and the corresponding point CP is searched for in the reference image in FIG. 6B.
  • as described with reference to FIG. 5, the corresponding point in the reference image lies on the epipolar line of the reference image. Therefore, in the embodiment, the correlation value is calculated between the template T, a window whose center of gravity is the point of interest AP in the standard image, and a window W whose center of gravity is a point on the epipolar line EP in the reference image.
  • the window W is then shifted by a predetermined number of pixels along the epipolar line EP and the correlation value with the template T is calculated again; the same processing is repeated thereafter (hereinafter, this processing is called "scanning the epipolar line EP").
  • after scanning the epipolar line EP, the center of gravity of the window with the highest calculated correlation value is determined to be the corresponding point CP.
  • in other words, correlation values are calculated between the template T and the windows W whose centers of gravity are each of a plurality of points on the epipolar line EP (hereinafter, "scanning points"), and the center of gravity of the window with the highest correlation value is determined to be the corresponding point CP.
  • the window W is indicated by a broken line and a solid rectangle, and the solid rectangle indicates the window W that includes the corresponding point CP.
  • a graph representing the relationship between the coordinates (pixel positions) of the scanning points on the epipolar line EP and the correlation value between the template T and the window W having each scanning point as its center of gravity is shown in FIG. 6C.
  • in the coincidence graph, the horizontal axis represents the coordinates (pixel positions of the scanning points) and the vertical axis the degree of coincidence (correlation value).
  • the coincidence graph is created over a predetermined, preset coordinate range; the coordinates (pixel position) of the point with the highest degree of coincidence become the coordinates (pixel position) of the corresponding point CP.
  • the corresponding point search unit 2 scans (searches) along the epipolar line EP in the reference image for the point of interest AP in the standard image. Specifically, it calculates the degree of coincidence between the window W whose center of gravity is each scanning point on the epipolar line EP and the template T whose center of gravity is the point of interest AP, and outputs the coordinates (pixel positions) of the scanning points and their degrees of coincidence to the occlusion determination unit 3 (a scanning sketch follows this item).
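  • The scanning loop can be sketched as follows, assuming rectified images so that the epipolar line EP is the horizontal row through the point of interest; the window size, search range, and the NCC metric are illustrative choices rather than the patent's fixed values.

```python
import numpy as np

def scan_epipolar_line(std_img, ref_img, ap, half=4, max_disp=64):
    """Scan the epipolar line EP for the point of interest ap = (row, col).

    Returns the coincidence (NCC) value and pixel position of every scanning
    point. Assumes ap lies far enough from the image borders for the window.
    """
    ay, ax = ap
    T = std_img[ay - half:ay + half + 1, ax - half:ax + half + 1].astype(np.float64)
    scores, positions = [], []
    for d in range(max_disp + 1):            # scanning points along EP
        wx = ax - d
        if wx - half < 0:
            break
        W = ref_img[ay - half:ay + half + 1, wx - half:wx + half + 1].astype(np.float64)
        ncc = (T * W).sum() / np.sqrt((T ** 2).sum() * (W ** 2).sum())
        scores.append(ncc)                   # degree of coincidence
        positions.append(wx)
    return np.array(scores), np.array(positions)

# The corresponding point CP is the scanning point with the highest coincidence:
# scores, positions = scan_epipolar_line(std, ref, (120, 200))
# cp_x = positions[scores.argmax()]
```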
  • NCC (Normalized Cross-Correlation), for example, can be used as the degree of coincidence (correlation value).
  • the similarity R_NCC is calculated using the following equation, given here in the standard NCC form consistent with the definitions below; the closer the calculated similarity R_NCC is to 1, the more similar the windows are:

    R_NCC = ( Σ_{j=0}^{N-1} Σ_{i=0}^{M-1} I(i,j) · T(i,j) ) / √( ( Σ_{j=0}^{N-1} Σ_{i=0}^{M-1} I(i,j)² ) · ( Σ_{j=0}^{N-1} Σ_{i=0}^{M-1} T(i,j)² ) )
  • T (i, j) is the luminance value of the pixel of the template T
  • I (i, j) is the luminance value of the pixel of the window W of the reference image.
  • the coordinates (i, j) run from (0, 0) at the upper-left pixel of the template to (M-1, N-1) at the lower-right pixel, where the width of the template T is M pixels and its height is N pixels.
  • SAD (Sum of Absolute Differences)
  • SSD (Sum of Squared Differences)
  • SAD is the sum of the absolute differences between the luminance values of pixels at the same position while raster-scanning the template; the smaller the value, the more similar.
  • SSD is likewise the sum of the squared differences between the luminance values of pixels at the same position while raster-scanning the template; again, the smaller the value, the more similar (a sketch of these metrics follows this item).
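  • A minimal sketch of the three coincidence metrics named above (NCC, SAD, SSD), for an equally sized template T and window W given as numpy arrays.

```python
import numpy as np

def ncc(T, W):
    """Normalized cross-correlation: values near 1 mean similar windows."""
    T, W = T.astype(np.float64), W.astype(np.float64)
    return (T * W).sum() / np.sqrt((T ** 2).sum() * (W ** 2).sum())

def sad(T, W):
    """Sum of absolute differences: smaller means more similar."""
    return np.abs(T.astype(np.int64) - W.astype(np.int64)).sum()

def ssd(T, W):
    """Sum of squared differences: smaller means more similar."""
    d = T.astype(np.int64) - W.astype(np.int64)
    return (d * d).sum()
```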
  • the occlusion determination unit 3 determines whether occlusion has occurred based on the corresponding point search result of each pair acquired from the corresponding point search unit 2, that is, on the coincidence graph (see FIG. 6C and the like) representing the relationship between each scanning point (pixel position) on the epipolar line EP and the degree of coincidence (correlation value). The occlusion determination unit 3 then calculates the corresponding points (pixel positions) from the coincidence graphs for which it determined that no occlusion occurred, and outputs them to the distance calculation unit 4.
  • FIG. 10 is a diagram for explaining a method of determining occlusion in the distance measuring apparatus.
  • FIG. 10 shows an example of a coincidence graph created from three different reference images (reference image N1, reference image N2, and reference image N3).
  • the coincidence graph in FIG. 10A shows the result of the corresponding point search with reference image N1, FIG. 10B the result with reference image N2, and FIG. 10C the result with reference image N3.
  • in each graph, the horizontal axis indicates the coordinates of the pixel positions of the scanning points and the vertical axis the degree of coincidence.
  • the occlusion determination unit 3 determines that a pair whose coincidence graph never exceeds a predetermined threshold Th is a pair in which occlusion has occurred. For example, it determines that no occlusion has occurred for the coincidence graphs of FIG. 10A and FIG. 10B, whose peaks exceed the threshold Th, and that occlusion has occurred for the coincidence graph of FIG. 10C, which has no portion exceeding the threshold Th (a threshold-test sketch follows this item).
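  • A minimal sketch of this occlusion test; the threshold value is an illustrative assumption.

```python
import numpy as np

TH = 0.8   # assumed threshold for an NCC-style coincidence graph

def occlusion_occurred(scores: np.ndarray, th: float = TH) -> bool:
    """A pair is judged occluded when its coincidence graph never exceeds th."""
    return scores.max() <= th

# graphs = {"N1": scores_n1, "N2": scores_n2, "N3": scores_n3}
# usable = {k: s for k, s in graphs.items() if not occlusion_occurred(s)}
```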
  • an image pair in which occlusion has occurred is, for example, a pair in which the point of interest in the standard image does not appear in the reference image.
  • when the captured image P22 of FIG. 9 is the standard image, the captured image P21 is the reference image, and the point of interest lies on the trunk of the tree on the left side of the captured image P22, the left tree trunk (the point of interest) is hidden behind the person's head H in the captured image P21 and does not appear; therefore, occlusion occurs in the pair of the captured images P22 and P21.
  • when the captured image P22 is the standard image, the captured image P23 is the reference image, and the point of interest lies on the left tree trunk of the captured image P22, the left tree trunk (the point of interest) does appear in the captured image P23; therefore, no occlusion occurs in the pair of the captured images P22 and P23.
  • for an image pair in which occlusion has occurred, no window W has a high degree of coincidence with the template T, so the coincidence graph stays low throughout.
  • the occlusion determination unit 3 obtains the pixel position corresponding to the vertex of the coincidence degree graph determined that no occlusion has occurred, and uses it as the coordinate (pixel position) of the corresponding point.
  • FIG. 11 is a diagram for explaining the calculation of corresponding points at the sub-pixel level in the distance measuring apparatus. As shown in FIG. 11, the coordinates (pixel position) of a corresponding point are obtained by quadratic-function approximation using three points near the vertex (indicated by black circles): a quadratic function is fitted to three or more points near the vertex, and the vertex of the resulting curve (indicated by a white circle) is obtained.
  • the pixel position corresponding to the obtained vertex gives the sub-pixel coordinates of the corresponding point. Since the degree of coincidence is obtained only at discrete scanning points on the epipolar line EP, the coincidence at a scanning point is not necessarily the coincidence at the true peak parallax; approximating with a quadratic function therefore locates the peak of the coincidence graph with higher accuracy. Alternatively, the pixel position of the scanning point with the highest degree of coincidence may simply be used as the coordinates (pixel position) of the corresponding point (a parabola-fit sketch follows this item).
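  • A minimal sketch of this sub-pixel refinement: a parabola is fitted through the peak sample and its two neighbours, and the parabola's vertex is taken as the sub-pixel position.

```python
import numpy as np

def subpixel_peak(scores: np.ndarray, positions: np.ndarray) -> float:
    """Sub-pixel corresponding-point position from a quadratic fit at the peak."""
    k = int(scores.argmax())
    if k == 0 or k == len(scores) - 1:
        return float(positions[k])            # no neighbour on one side
    y0, y1, y2 = scores[k - 1], scores[k], scores[k + 1]
    # vertex offset of the parabola through (-1, y0), (0, y1), (+1, y2)
    delta = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)
    step = positions[1] - positions[0]        # uniform scanning-point spacing
    return float(positions[k] + delta * step)
```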
  • the distance calculation unit 4 calculates, from the corresponding points passed from the occlusion determination unit 3, the distance to the measurement object Ob.
  • the distance calculation unit 4 calculates the distance to the measurement object Ob for each pair and outputs it to the coordinate conversion unit 5 as three-dimensional coordinates in the coordinate system of the standard image or reference image.
  • the distance to the measurement object Ob is determined by the amount of shift (parallax) between the standard image and the reference image obtained by the corresponding point search, that is, the shift between the point of interest in the standard image and the corresponding point in the reference image.
  • the calculation of the distance will be described with reference to FIG. 7.
  • images of the subject (measurement object Ob) are obtained by a pair of cameras separated by a predetermined interval (base line length), and the corresponding point search is executed on the images in pixel units. The parallax between the pair of cameras in the separation direction is then obtained in pixel units from these images, and the distance to the subject is obtained from the obtained parallax based on the so-called triangulation principle.
  • more specifically, consider first and second cameras in which at least the focal length f, the number of pixels on the imaging surface, and the size μ of one pixel are equal, arranged with a predetermined base line length B so that their optical axes are parallel. For a parallax of d pixels, the distance Z to the subject follows the standard triangulation relation Z = f·B / (μ·d): the larger the parallax, the nearer the subject (a computation sketch follows this item).
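  • A minimal sketch of this triangulation step for a rectified camera pair; the focal length, base line, and pixel pitch are illustrative values, not the patent's.

```python
def distance_from_disparity(d_px: float,
                            f_mm: float = 5.0,       # focal length f (assumed)
                            base_mm: float = 14.0,   # base line length B (assumed)
                            pitch_mm: float = 0.003  # pixel size mu (assumed)
                            ) -> float:
    """Z = f*B / (mu*d): the larger the disparity, the nearer the object."""
    if d_px <= 0:
        raise ValueError("disparity must be positive")
    return (f_mm * base_mm) / (pitch_mm * d_px)

# e.g. a disparity of 12.5 pixels gives about 1867 mm:
# distance_from_disparity(12.5)
```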
  • the coordinate conversion unit 5 converts the three-dimensional coordinates of the corresponding points of each pair passed from the distance calculation unit 4 into coordinates (three-dimensional) in a single common coordinate system. That is, the three-dimensional coordinates of the corresponding points obtained from the pairs of N lens images and the pair of F lens images are expressed in one coordinate system. In the embodiment, they are matched to the coordinate system of the standard image of the N lens images, but the coordinate system of another image, or any predetermined coordinate system, may be used.
  • Each camera of the array camera 10 of the imaging unit 1 has known camera parameters (image center, focal length, orientation with respect to the reference camera (rotation, translation)).
  • the camera parameter matrix is obtained by calculating in advance the product of three matrices: an internal parameter matrix, a perspective transformation matrix, and an external parameter matrix.
  • (x, y) is a coordinate position in the image coordinate system, and (X, Y, Z) is a coordinate position in the world coordinate system; the matrix in the center of the product is the perspective transformation matrix.
  • the origin of the world coordinate system is the projection center (the principal point of the lens), and the Y axis is parallel to the optical axis of the lens.
  • the center of the image is the foot of the perpendicular drawn from the projection center, and the aspect ratio is 1.0.
  • f denotes the focal length of the lens.
  • the matrix on the left is the internal parameter matrix A, in which a denotes the aspect ratio, s the skew ratio, and (t_x, t_y) the center of the image.
  • the matrix on the right is the external parameter matrix, the translation/rotation homogeneous coordinate transformation matrix [R T].
  • the camera parameters are obtained from a known set of correspondences between (X, Y, Z) and (x, y). For this calculation, see, for example: Z. Zhang, "A flexible new technique for camera calibration", IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(11): 1330-1334, 2000.
  • the coordinates are converted by the following expression, where m denotes the coordinates (Xa, Ya, Za) in the coordinate system of the standard image after conversion, and M denotes the coordinates (Xb, Yb, Zb) in the coordinate system of the reference image before conversion:
  • m = R · M + T (a conversion sketch follows this item)
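  • A minimal sketch of this rigid conversion m = R·M + T, mapping a 3-D point from a reference-image coordinate system into the standard-image coordinate system; R and T here are illustrative values.

```python
import numpy as np

R = np.eye(3)                    # rotation between the two camera frames (assumed)
T = np.array([14.0, 0.0, 0.0])   # translation, e.g. the base line in mm (assumed)

def to_standard_frame(M: np.ndarray) -> np.ndarray:
    """Map M = (Xb, Yb, Zb) in the reference frame to m = (Xa, Ya, Za)."""
    return R @ M + T

# m = to_standard_frame(np.array([100.0, 50.0, 1800.0]))
```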
  • the storage unit 6 stores the three-dimensional coordinates of the corresponding points calculated by the coordinate conversion unit 5 for each pair.
  • the stored three-dimensional coordinates are used for processing such as calculating the distance to the measurement object Ob, obtaining the surface shape of the measurement object Ob, and creating a distance image composed of the distance component of each pixel.
  • FIG. 13 is a block diagram showing the configuration of a distance image creating apparatus using the distance measuring device.
  • the distance image creating apparatus 200 includes the distance measuring device 100, a distance image generation unit 110, and a display unit 120.
  • the distance measuring device 100 calculates the three-dimensional coordinates of the object for each image pair obtained by photographing the measurement object Ob as described above.
  • the distance image generation unit 110 determines the final (three-dimensional) coordinates of each corresponding point from the per-pair three-dimensional coordinates calculated by the distance measuring device 100. For example, it takes the average of the corresponding-point coordinates of all pairs as the corresponding-point coordinates (three-dimensional). Alternatively, the three-dimensional coordinates from the N lens pair with the longest base line may be adopted, because the N lens pairs can calculate the distance more accurately than the F lens pair and, among the N lens pairs, the pair with the longer base line is the more accurate. Further, based on the coincidence graphs shown in FIGS. 6 and 10, the corresponding point of the pair whose coincidence graph has the highest reliability may be adopted (a fusion sketch follows this item).
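  • A minimal sketch of the fusion strategies named above (averaging, longest base line, highest reliability); the pair records and base line values are illustrative assumptions.

```python
import numpy as np

pairs = [  # (pair name, base line mm, 3-D corresponding point, peak coincidence)
    ("N 22-21", 5.0, np.array([102.0, 51.0, 1795.0]), 0.95),
    ("N 22-23", 5.0, np.array([ 99.0, 49.0, 1810.0]), 0.91),
    ("N 22-12", 7.1, np.array([101.0, 50.0, 1802.0]), 0.93),
]

mean_point     = np.mean([p for _, _, p, _ in pairs], axis=0)   # averaging rule
widest_point   = max(pairs, key=lambda r: r[1])[2]              # longest base line
reliable_point = max(pairs, key=lambda r: r[3])[2]              # highest peak
```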
  • the display unit 120 is a so-called display, and displays the distance image generated by the distance image generation unit 110, for example, as an image composed of pixels of a color corresponding to the distance.
  • the corresponding point search unit 2 through the overall control unit 7 are constituted by, for example, a microcomputer including a microprocessor such as a so-called CPU (Central Processing Unit), a memory, and their peripheral circuits. The memory stores various programs, such as a program for searching for corresponding points and a control program for controlling the entire distance measuring device 100, and various data, such as the data necessary for executing those programs; the microprocessor realizes all or part of each functional unit by executing the programs stored in the memory.
  • FIG. 12 is a flowchart showing the distance measuring process of the distance measuring device 100.
  • when the overall control unit 7 receives an instruction to start processing via an interface unit (not shown), it instructs the imaging unit 1 to output captured images of the measurement object Ob.
  • upon receiving the instruction, the imaging unit 1 causes the array camera 10 to photograph the measurement object Ob and outputs the captured images based on the signals from the array imaging unit 11 to the corresponding point search unit 2 (step S10).
  • the corresponding point search unit 2 stores each captured image input from the imaging unit 1 in an internal working memory.
  • the corresponding point search unit 2 first reads a pair of F lens images from the working memory (step S11); for example, the captured image P11 obtained by the single lens 121-11 and the captured image P33 obtained by the single lens 121-33.
  • the corresponding point search unit 2 searches for corresponding points using one of the pair of F lens images as the standard image and the other as the reference image, creates a coincidence graph as shown in FIG. 6C, and outputs it to the occlusion determination unit 3 (step S12).
  • the occlusion determination unit 3, having received the coincidence graph of the F lens image pair from the corresponding point search unit 2, calculates the coordinates (pixel position) of the corresponding point corresponding to the vertex of the graph and outputs them to the distance calculation unit 4 (step S13).
  • in the embodiment, occlusion determination is not performed for the F lens image pair because occlusion hardly occurs there, but it may be performed.
  • after passing the coincidence graph of the F lens image pair to the occlusion determination unit 3, the corresponding point search unit 2 reads an N lens image pair from the working memory (step S14).
  • which N lens image is used as the standard image and which N lens images are used as reference images is determined in advance.
  • the corresponding point search unit 2 searches for a corresponding point of one pair of N lens images, creates a coincidence graph, and outputs it to the occlusion determination unit 3 (step S15).
  • the occlusion determination unit 3, having received the coincidence graph of a pair of N lens images from the corresponding point search unit 2, performs the occlusion determination described above. When it determines that no occlusion has occurred (step S17: No), it calculates the coordinates (pixel position) of the corresponding point corresponding to the vertex of the coincidence graph and outputs them to the distance calculation unit 4 (step S18). When it determines in step S17 that occlusion has occurred (step S17: Yes), it does not calculate a corresponding point.
  • the corresponding point search unit 2 repeats the corresponding point search until all pairs of N lens images have been processed (step S19: Yes).
  • the distance calculation unit 4, having received the coordinates (pixel positions) of the corresponding points of the F lens image pair and the N lens image pairs from the occlusion determination unit 3, calculates the distance to the measurement object Ob as described above. It then calculates the three-dimensional coordinates of the corresponding points in the coordinate system of the standard image and outputs them to the coordinate conversion unit 5 (step S20).
  • the coordinate conversion unit 5, having received the three-dimensional coordinates of the corresponding points of each pair from the distance calculation unit 4, converts the corresponding-point coordinates of all pairs into three-dimensional coordinates in the coordinate system of the standard image of the N lens images (step S21) and stores them in the storage unit 6 (step S22).
  • in the embodiment, the corresponding points are calculated using the N lens image pairs after they are calculated using the F lens image pair, but the reverse order may be used (an end-to-end sketch of steps S10 to S22 follows this item).
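  • A minimal end-to-end sketch of the flow of FIG. 12 (steps S10 to S22), reusing the helper functions sketched earlier (scan_epipolar_line, occlusion_occurred, subpixel_peak, distance_from_disparity); the images, pair lists, and camera intrinsics are illustrative assumptions.

```python
import numpy as np

def backproject(x_px, y_px, z_mm, f_mm=5.0, pitch_mm=0.003, cx=320.0, cy=240.0):
    """Pinhole back-projection into the standard-image frame (assumed intrinsics)."""
    return np.array([(x_px - cx) * pitch_mm * z_mm / f_mm,
                     (y_px - cy) * pitch_mm * z_mm / f_mm,
                     z_mm])

def measure(images, f_pairs, n_pairs, points_of_interest):
    results = []
    for ap in points_of_interest:                       # ap = (row, col)
        for a, b in f_pairs + n_pairs:                  # S11 / S14
            scores, pos = scan_epipolar_line(images[a], images[b], ap)  # S12 / S15
            if (a, b) in n_pairs and occlusion_occurred(scores):        # S16-S17
                continue                                # occluded pair: skip
            cp_x = subpixel_peak(scores, pos)           # S13 / S18
            d_px = ap[1] - cp_x                         # disparity in pixels
            z_mm = distance_from_disparity(d_px)        # S20 (triangulation)
            results.append(backproject(ap[1], ap[0], z_mm))  # toward S21
    return results                                      # stored in S22
```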
  • the coincidence degree graph is a curved graph, but may be a line graph, a bar graph, or the like.
  • the distance measuring device according to one aspect includes a compound eye camera comprising a plurality of first imaging optical systems; a plurality of second imaging optical systems having a focal length longer than that of the first imaging optical systems; a plurality of first imaging units that capture the optical images of the measurement object formed by each of the plurality of first imaging optical systems; and a plurality of second imaging units that capture the optical images of the measurement object formed by each of the plurality of second imaging optical systems. It further includes a distance calculation unit that obtains the distance to the measurement object based on images captured by the first imaging units corresponding to at least three of the first imaging optical systems and images captured by the second imaging units corresponding to at least two of the second imaging optical systems. The two second imaging optical systems are arranged so that their base line length is longer than the base line length of two of the three first imaging optical systems.
  • the three first imaging optical systems are arranged so that a first straight line connecting the principal point of the first first imaging optical system and the principal point of the second first imaging optical system intersects a second straight line connecting the principal point of the first first imaging optical system and the principal point of the third first imaging optical system.
  • another aspect is a distance measuring method used in a distance measuring device having a compound eye camera in which a plurality of first imaging optical systems and a plurality of second imaging optical systems having a focal length longer than that of the first imaging optical systems are integrally arranged, together with a plurality of first imaging units and a plurality of second imaging units that capture the optical images of the measurement object formed by each of the first and second imaging optical systems. The method includes a distance calculating step of obtaining the distance to the measurement object based on images captured by the first imaging units corresponding to at least three of the plurality of first imaging optical systems and images captured by the second imaging units corresponding to at least two of the plurality of second imaging optical systems, the two second imaging optical systems being arranged so that their base line length is longer than the base line length of two of the three first imaging optical systems.
  • Such a distance measuring device and distance measuring method perform distance measurement using images captured by imaging optical systems having different focal lengths, and therefore perform highly accurate distance measurement regardless of the distance to the measurement object.
  • in addition, the number of images captured by the first imaging optical systems with the short focal length is larger than the number captured by the second imaging optical systems with the long focal length, so there is a high possibility that images free of occlusion can be selected, enabling highly accurate distance measurement. Since images captured by first imaging optical systems whose principal-point lines intersect are images with different viewpoints (optical axes), the possibility that some image pair is free of occlusion is higher; as a result, the distance to the measurement object is more likely to be calculated accurately.
  • preferably, the three first imaging optical systems are arranged so that the first straight line and the second straight line are orthogonal to each other.
  • arranging the first imaging optical systems so that the straight lines connecting their principal points are orthogonal further increases the effect of suppressing occlusion; as a result, the distance to the measurement object is more likely to be calculated accurately.
  • preferably, the first first imaging optical system is arranged at or near the center of the plurality of first imaging optical systems and the plurality of second imaging optical systems as a whole.
  • preferably, the compound eye camera further includes a plurality of third imaging optical systems having a focal length longer than that of the second imaging optical systems, and a plurality of third imaging units that capture the optical images of the measurement object formed by each of the plurality of third imaging optical systems; the distance calculation unit further obtains the distance to the measurement object based on images captured by the third imaging units corresponding to at least two of the third imaging optical systems, and the two third imaging optical systems are arranged so that their base line length is longer than the base line length of the two second imaging optical systems.
  • preferably, the device further includes a corresponding point search unit that, using one of the three images captured by the first imaging units corresponding to the three first imaging optical systems as the standard image, calculates for each of the other images a degree of coincidence indicating the degree of correlation between a window containing each scanning point and a template containing the point of interest, and an occlusion determination unit that determines the presence or absence of occlusion based on a coincidence graph whose horizontal axis is the coordinates of the scanning points and whose vertical axis is the degree of coincidence corresponding to each scanning point; the distance calculation unit obtains the distance to the measurement object using the corresponding points of the images for which the occlusion determination unit determined that no occlusion occurred.
  • as described above, a distance measuring device and a distance measuring method that can measure a distance easily and accurately while suppressing occlusion can be provided.

Landscapes

  • Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Measurement Of Optical Distance (AREA)

Abstract

The present invention concerns a distance measuring device, and method therefor, for finding the distance to a measurement object based on an image captured using at least three first imaging optical systems and an image captured using at least two second imaging optical systems in a compound eye camera having a plurality of first and second imaging optical systems with different focal lengths. The two second imaging optical systems are arranged so that their base line lengths are greater than the base line length of two of the three first imaging optical systems.
PCT/JP2015/061099 2014-04-16 2015-04-09 Distance measuring device and distance measuring method WO2015159791A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2014084580 2014-04-16
JP2014-084580 2014-04-16

Publications (1)

Publication Number Publication Date
WO2015159791A1 true WO2015159791A1 (fr) 2015-10-22

Family

ID=54324000

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2015/061099 WO2015159791A1 (fr) 2014-04-16 2015-04-09 Distance measuring device and distance measuring method

Country Status (1)

Country Link
WO (1) WO2015159791A1 (fr)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003346130A * 2002-05-22 2003-12-05 Saibuaasu:Kk Three-dimensional information processing device and three-dimensional information processing method
JP2011117787A * 2009-12-02 2011-06-16 Ricoh Co Ltd Distance image input device and vehicle exterior monitoring device
JP2011118235A * 2009-12-04 2011-06-16 Ricoh Co Ltd Imaging device
JP2011203238A * 2010-03-01 2011-10-13 Ricoh Co Ltd Imaging device and distance measuring device

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2018032986A * 2016-08-24 2018-03-01 ソニー株式会社 Information processing device and method, vehicle, and information processing system
US11195292B2 2016-08-24 2021-12-07 Sony Corporation Information processing apparatus and method, vehicle, and information processing system
JP2018197674A * 2017-05-23 2018-12-13 オリンパス株式会社 Method of operating measurement device, measurement device, measurement system, three-dimensional shape restoration device, and program
US11321861B2 2017-05-23 2022-05-03 Olympus Corporation Method of operating measurement device, measurement device, measurement system, three-dimensional shape restoration device, and recording medium
CN112351271A * 2020-09-22 2021-02-09 北京迈格威科技有限公司 Camera occlusion detection method and apparatus, storage medium, and electronic device
WO2024239107A1 * 2023-05-19 2024-11-28 Focusbug Technologies Inc. Multiple-focal-distance rangefinder

Similar Documents

Publication Publication Date Title
US20230362344A1 (en) System and Methods for Calibration of an Array Camera
EP3531066B1 Three-dimensional scanning method using a plurality of lasers with different wavelengths, and scanning device
US20110249117A1 (en) Imaging device, distance measuring method, and non-transitory computer-readable recording medium storing a program
JP2017112602A Image calibration, stitching, and depth reconstruction method for a panoramic fish-eye camera, and system therefor
CN107808398B Camera parameter calculation device, calculation method, program, and recording medium
US10027947B2 (en) Image processing apparatus and image processing method
JP2009529824A CMOS stereo camera for acquiring three-dimensional images
JP6702796B2 Image processing device, imaging device, image processing method, and image processing program
JP2004340840A Distance measuring device, distance measuring method, and distance measuring program
JP7168077B2 Three-dimensional measurement system and three-dimensional measurement method
WO2015159791A1 Distance measuring device and distance measuring method
JP6009206B2 Three-dimensional measuring device
JP2015049200A Measuring device, method, and program
CA3233222A1 Photogrammetry method, apparatus, and device, and storage method
CN110580718A (zh) 图像装置的校正方法及其相关图像装置和运算装置
US11175568B2 (en) Information processing apparatus, information processing method, and program as well as in interchangeable lens
JP7195801B2 Image processing device and control method therefor, distance detection device, imaging device, and program
TWI571099B Depth estimation device and method
US11348271B2 (en) Image processing device and three-dimensional measuring system
US10356394B2 (en) Apparatus and method for measuring position of stereo camera
JP2016099318A Stereo matching device, stereo matching program, and stereo matching method
JP5727969B2 Position estimation device, method, and program
JP2013120435A Image processing device, image processing method, and program
WO2020017377A1 Ranging camera
JP6241083B2 Imaging device and parallax detection method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15779464

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15779464

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP