WO2013073167A1 - Image processing device, imaging device, and image processing method - Google Patents
Image processing device, imaging device, and image processing method
- Publication number
- WO2013073167A1 (PCT application PCT/JP2012/007270)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- segment
- pixel
- image processing
- depth data
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/08—Volume rendering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
- G06T7/593—Depth or shape recovery from multiple images from stereo images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
- G06T2207/10012—Stereo images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
Definitions
- the present invention relates to an image processing device, an imaging device, and an image processing method that generate depth data using a first image and a second image taken from different viewpoints.
- 3D displays for displaying 3D images have begun to spread widely.
- 3D cameras that capture 3D images to be displayed on such 3D displays are also becoming widespread.
- In such a 3D camera, a stereo image is taken using two sets of lenses and sensors.
- depth data can be generated by detecting corresponding points for each pixel in a stereo image and calculating a parallax value between the corresponding points. Then, various processes can be performed on the stereo image using the depth data generated in this way.
- the present invention reduces the processing load while suppressing a decrease in accuracy of depth data when generating depth data using the first image and the second image taken from different viewpoints.
- An image processing apparatus according to one aspect of the present invention is an image processing apparatus that generates depth data using a first image and a second image taken from different viewpoints, and includes:
- a parallax value calculation unit that calculates, for each of a plurality of representative pixels that are a part of the pixels in the first image, a parallax value of the representative pixel based on a positional relationship between the representative pixel and a pixel in the second image corresponding to the representative pixel;
- a segmentation unit that divides the first image into a plurality of segments based on similarity of pixel values; and
- a depth data generation unit that generates depth data indicating the depth corresponding to each segment by specifying, for each segment, a parallax value of the segment based on a parallax value of a representative pixel included in the segment.
- According to the image processing device of this aspect of the present invention, when depth data is generated using the first image and the second image taken from different viewpoints, the processing load can be reduced while suppressing a decrease in the accuracy of the depth data.
- FIG. 1 is a block diagram illustrating a functional configuration of the image processing apparatus according to the first embodiment.
- FIG. 2 is a flowchart showing the processing operation of the image processing apparatus according to the first embodiment.
- FIG. 3 is a diagram for explaining the processing operation of the image processing apparatus according to the first embodiment.
- FIG. 4 is a block diagram illustrating a functional configuration of the image processing apparatus according to the second embodiment.
- FIG. 5 is a flowchart showing the processing operation of the image processing apparatus according to the second embodiment.
- FIG. 6 is a diagram showing an outline of the alignment processing according to the second embodiment.
- FIG. 7 is a diagram for explaining an example of alignment processing according to the second embodiment.
- FIG. 8 is a flowchart showing details of segmentation according to the second embodiment.
- FIG. 9 is a diagram for explaining segmentation according to the second embodiment.
- FIG. 10 is a diagram for explaining segmentation according to the second embodiment.
- FIG. 11 is a diagram illustrating an example of a segmentation result according to the second embodiment.
- FIG. 12 is a flowchart showing details of the segment combining process according to the second embodiment.
- FIG. 13 is a diagram for explaining the segment combining process according to the second embodiment.
- FIG. 14 is a flowchart showing details of the depth data generation processing according to the modification of the second embodiment.
- FIG. 15 is a block diagram illustrating a configuration of an imaging apparatus according to an embodiment.
- An image processing apparatus according to one aspect of the present invention is an image processing apparatus that generates depth data using a first image and a second image taken from different viewpoints, and includes:
- a parallax value calculation unit that calculates, for each of a plurality of representative pixels that are a part of the pixels in the first image, a parallax value of the representative pixel based on a positional relationship between the representative pixel and a pixel in the second image corresponding to the representative pixel;
- a segmentation unit that divides the first image into a plurality of segments based on similarity of pixel values; and
- a depth data generation unit that generates depth data indicating the depth corresponding to each segment by specifying, for each segment, a parallax value of the segment based on a parallax value of a representative pixel included in the segment.
- depth data indicating the depth corresponding to each segment can be generated based on the parallax value of the representative pixel included in each segment. That is, in order to generate depth data, it is only necessary to detect pixels in the second image corresponding to each representative pixel, and it is not necessary to detect pixels in the second image corresponding to all of the pixels. Therefore, it is possible to reduce the processing load for generating depth data.
- Since the first image is divided into a plurality of segments based on the similarity of pixel values, the possibility that a plurality of different subjects are included in one segment is reduced. That is, there is a high possibility that a region having a similar depth is divided out as one segment.
- By specifying the parallax value for each segment divided in this way, it is possible to prevent the accuracy of the depth data indicating the depth corresponding to each segment from being lowered.
- For example, it is preferable that the image processing apparatus further includes a segment combining unit that, when the plurality of segments include an empty segment that does not include a representative pixel, combines the empty segment and a segment adjacent to the empty segment into one, and that the depth data generating unit generates the depth data based on the segments combined by the segment combining unit.
- the empty segment and the adjacent segment can be combined into one. Therefore, when the first image is divided into a plurality of segments by the segmentation unit, it is not always necessary to divide the first image so as to include the representative pixel. That is, segmentation can be performed without considering the correspondence with the representative pixel. As a result, the segmentation and the parallax value calculation of the representative pixel can be processed in parallel, and the speed of the depth data generation process can be increased.
- For example, it is preferable that, when the empty segment is adjacent to a plurality of segments, the segment combining unit selects at least one segment from the plurality of adjacent segments based on color similarity and combines the selected at least one segment and the empty segment into one.
- segments having similar colors can be combined into one. That is, since a region having a similar color is treated as one segment, there is a high possibility that a region having a similar depth becomes one segment.
- depth data indicating the depth corresponding to each segment can be generated more accurately.
- For example, it is preferable that, when a segment includes two or more representative pixels, the depth data generation unit specifies the median value or the average value of the parallax values of the two or more representative pixels as the parallax value of the segment.
- the median value or average value of the parallax values of the two or more representative pixels can be specified as the parallax value of the segment. Accordingly, the disparity value of the segment can be easily specified, and the processing load for generating the depth data can be reduced. In addition, an error between the disparity value of the segment and the disparity value of each pixel included in the segment can be made relatively small, and depth data can be generated more accurately.
- For example, it is preferable that the depth data generation unit calculates, for each segment, the parallax value of each pixel included in the segment by interpolating the parallax values of the other pixels included in the segment using the parallax value of at least one representative pixel included in the segment, and generates, as the depth data, a depth map indicating the depth of each pixel based on the calculated parallax value of each pixel.
- the parallax value of other pixels included in the segment can be interpolated using the parallax value of at least one representative pixel included in the segment. Therefore, the parallax value of each pixel can be obtained by interpolation, and depth data can be generated more accurately.
- the segmentation unit divides the first image into a plurality of segments by clustering based on similarity defined using pixel values and pixel positions.
- the first image can be divided into a plurality of segments by clustering based on similarity defined using pixel values and pixel positions. Therefore, the first image can be divided into a plurality of segments with high accuracy so that a plurality of different subjects are not included in one segment. As a result, depth data can be generated more accurately.
- the clustering is preferably a k-means clustering method.
- According to this configuration, the first image can be divided into a plurality of segments by the k-means method. Therefore, the first image can be divided into a plurality of segments with higher accuracy so that a plurality of different subjects are not included in one segment. Furthermore, since segmentation is possible with relatively simple processing, it is possible to reduce the processing load for generating depth data.
- the image processing apparatus further includes a feature point calculation unit that calculates a feature point of the first image as the representative pixel.
- a feature point can be calculated as a representative pixel. Therefore, it becomes easy to detect the pixel in the second image corresponding to the representative pixel, and the processing load can be reduced.
- For example, it is preferable that the image processing apparatus further includes an alignment processing unit that performs an alignment process for parallelizing the first image and the second image using the feature points, and that the parallax value calculation unit calculates the parallax value of the representative pixel using the first image and the second image on which the alignment process has been performed.
- According to this configuration, alignment processing for parallelizing the first image and the second image can be performed.
- In general, when a multi-viewpoint image such as a stereo image is captured, alignment processing of the multi-viewpoint image is performed; in that alignment processing, feature points are calculated and corresponding points are detected. Therefore, the results of such alignment processing can also be used for calculating the parallax values of the representative pixels.
- the image processing apparatus further includes an image processing unit that separates the first image into a foreground area and a background area based on the depth data and performs a blurring process on the background area.
- the first image can be separated into the foreground area and the background area based on the depth data, and the background area can be blurred.
- the depth data for separating the foreground region and the background region is not necessarily high-definition depth data in units of pixels. Therefore, the depth data based on the disparity value of each segment can be used effectively.
- For example, it is preferable that the image processing apparatus further includes an image processing unit that separates the first image into a foreground region and a background region based on the depth data and combines the foreground region with a third image different from the first image and the second image.
- the first image can be separated into the foreground area and the background area based on the depth data, and the foreground area and another image corresponding to the background area can be combined.
- the depth data for separating the foreground region and the background region is not necessarily high-definition depth data in units of pixels. Therefore, the depth data based on the disparity value of each segment can be used effectively.
- the image processing device may be configured as an integrated circuit.
- an imaging apparatus includes the image processing apparatus and an imaging unit that captures the first image and the second image.
- FIG. 1 is a block diagram illustrating a functional configuration of an image processing apparatus 10 according to the first embodiment.
- the image processing apparatus 10 generates depth data of the first image using a first image and a second image (for example, a stereo image) taken from different viewpoints.
- the first image and the second image are, for example, stereo images (left eye image and right eye image).
- the image processing apparatus 10 includes a parallax value calculation unit 11, a segmentation unit 12, and a depth data generation unit 13.
- the parallax value calculation unit 11 calculates the parallax value between the representative pixel and the corresponding pixel by detecting the corresponding pixel in the second image for each representative pixel in the first image. That is, the parallax value calculation unit 11 calculates the parallax value for some pixels in the first image.
- the representative pixel is a part of the pixels included in the first image.
- the representative pixel is a pixel that exists at a predetermined position in the image.
- the corresponding pixel is a pixel corresponding to the representative pixel. That is, the corresponding pixel is a pixel in the second image that is similar to the representative pixel in the first image.
- the two pixels of the representative pixel and the corresponding pixel are also called corresponding points. This corresponding pixel can be detected by, for example, a block matching method.
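- As an illustration only (not the specific implementation of this publication), block matching for one representative pixel can be sketched as follows in Python/NumPy; the window size, search range, and SAD cost are assumptions, and the images are assumed to be grayscale and already rectified so that the search is purely horizontal:

```python
import numpy as np

def match_representative_pixel(img1, img2, x, y, half_win=4, max_disp=64):
    """Find the pixel in img2 corresponding to representative pixel (x, y) of img1
    by block matching (sum of absolute differences) along the horizontal direction.
    Assumes (x, y) is at least half_win pixels away from the image borders.
    Returns the parallax (disparity) value of the representative pixel."""
    template = img1[y - half_win:y + half_win + 1, x - half_win:x + half_win + 1]
    best_disp, best_cost = 0, np.inf
    for d in range(0, max_disp + 1):
        xc = x - d  # candidate position of the corresponding pixel in the second image
        if xc - half_win < 0:
            break
        candidate = img2[y - half_win:y + half_win + 1, xc - half_win:xc + half_win + 1]
        cost = np.abs(template.astype(np.int32) - candidate.astype(np.int32)).sum()
        if cost < best_cost:
            best_cost, best_disp = cost, d
    return best_disp
```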
- the parallax value between the representative pixel and the corresponding pixel is a value representing a deviation between the position of the representative pixel and the position of the corresponding pixel.
- the distance (depth) from the imaging device to the subject can be calculated based on the principle of triangulation. Note that the parallax value between the representative pixel and the corresponding pixel is simply referred to as the parallax value of the representative pixel.
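- Under the usual parallel-stereo assumption, this triangulation takes the form below, where f is the focal length, B is the baseline between the two viewpoints, and d is the parallax value; these symbols are introduced here only for illustration and are not defined in this publication:

```latex
Z = \frac{f \cdot B}{d}
```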
- the segmentation unit 12 divides the first image into a plurality of segments based on the similarity of the pixel values. That is, the segmentation unit 12 divides the first image into a plurality of segments so that pixels having similar pixel values are included in one segment. In the present embodiment, the segmentation unit 12 divides the first image into a plurality of segments so that each segment includes at least one representative pixel.
- A segment corresponds to a partial area in the first image.
- The process of dividing an image into a plurality of segments is also referred to as segmentation below.
- the pixel value is a value that a pixel constituting the image has.
- the pixel value is, for example, a value indicating the luminance, color, brightness, hue or saturation of the pixel, or a combination thereof.
- the depth data generation unit 13 generates depth data for each segment by specifying the disparity value of the segment based on the disparity value of the representative pixel included in the segment. That is, the depth data generation unit 13 generates depth data based on the parallax value specified for each segment.
- the depth data generated here indicates the depth corresponding to each segment.
- the depth data may be data in which the segment depth value is associated with segment information indicating the position and size of the segment.
- the depth data may be a depth map (depth image) having a depth value as a pixel value.
- the depth data does not necessarily include the depth value, and may include data indicating the depth.
- the depth data may include a parallax value as data indicating the depth.
- FIG. 2 is a flowchart showing the processing operation of the image processing apparatus 10 according to the first embodiment.
- FIG. 3 is a diagram for explaining an example of the processing operation of the image processing apparatus 10 according to the first embodiment.
- the parallax value calculation unit 11 calculates the parallax value of each representative pixel (S101). For example, as illustrated in FIG. 3A, the parallax value calculation unit 11 detects a corresponding pixel in the second image 102 for each representative pixel at a predetermined position in the first image 101. Then, the parallax value calculation unit 11 calculates the parallax value of the representative pixel based on the positional relationship between the representative pixel and the corresponding pixel.
- the segmentation unit 12 divides the first image 101 into a plurality of segments (S102). For example, as shown in FIG. 3B, the segmentation unit 12 divides the first image 101 into a plurality of segments each having a rectangular shape having a predetermined size. Here, in FIG. 3B, the first image 101 is divided so that one representative pixel is included in each segment.
- Finally, the depth data generation unit 13 generates depth data based on the parallax value of each segment (S103). At this time, the depth data generation unit 13 specifies the parallax value of each segment based on the parallax value of the representative pixel included in that segment, as illustrated in (c) of FIG. 3.
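- A minimal sketch of step S103 is shown below (Python/NumPy); the use of the median as the per-segment parallax value is only one of the options described in this publication, and the array layout is an assumption made for illustration:

```python
import numpy as np

def specify_segment_disparities(rep_disp, rep_coords, labels):
    """rep_disp: parallax value of each representative pixel (length-N array, from S101)
    rep_coords: (y, x) position of each representative pixel (N x 2 array)
    labels: segment label of every pixel of the first image (H x W array, from S102)
    Returns a dict mapping each segment label to its parallax value (S103)."""
    rep_labels = labels[rep_coords[:, 0], rep_coords[:, 1]]   # segment each representative pixel falls in
    segment_disparity = {}
    for seg in np.unique(labels):
        d = rep_disp[rep_labels == seg]
        if d.size > 0:
            segment_disparity[int(seg)] = float(np.median(d))  # or d.mean()
    return segment_disparity
```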
- depth data indicating the depth corresponding to each segment can be generated based on the parallax value of the representative pixel included in each segment. That is, in order to generate depth data, it is only necessary to detect pixels in the second image corresponding to each representative pixel, and it is not necessary to detect pixels in the second image corresponding to all of the pixels. Therefore, it is possible to reduce the processing load for generating depth data.
- As described above, in the image processing apparatus 10, since the first image is divided into a plurality of segments based on the similarity of pixel values, the possibility that a plurality of different subjects are included in one segment is low. That is, there is a high possibility that a region having a similar depth is divided out as one segment. By specifying the parallax value for each segment divided in this way, it is possible to prevent the accuracy of the depth data indicating the depth corresponding to each segment from being lowered.
- the segmentation (S102) is performed after the parallax value calculation (S101), but it is not necessarily performed in this order. That is, parallax value calculation (S101) may be performed after segmentation (S102). In this case, for example, the parallax value calculation unit 11 may treat the pixel at the center of gravity of each segment divided by the segmentation unit 12 as a representative pixel.
- parallax value calculation (S101) and the segmentation (S102) may be performed in parallel. As a result, the processing speed can be increased.
- FIG. 4 is a block diagram illustrating a functional configuration of the image processing apparatus 20 according to the second embodiment.
- The image processing apparatus 20 according to the present embodiment includes a feature point calculation unit 21, an alignment processing unit 22, a parallax value calculation unit 23, a segmentation unit 24, a segment combination unit 25, a depth data generation unit 26, and an image processing unit 27.
- the feature point calculation unit 21 calculates the feature points of the first image as representative pixels. Specifically, the feature point calculation unit 21 calculates a feature point using the feature amount extracted by the feature amount extraction method.
- As the feature amount extraction method, for example, SIFT (Scale-Invariant Feature Transform), disclosed in Reference 1 (David G. Lowe, "Distinctive image features from scale-invariant keypoints", International Journal of Computer Vision, vol. 60, no. 2, pp. 91-110, 2004), can be used.
- Alternatively, SURF (Speeded-Up Robust Features), disclosed in Reference 2 (Herbert Bay, Andreas Ess, Tinne Tuytelaars, Luc Van Gool, "Speeded-Up Robust Features (SURF)", Computer Vision and Image Understanding, vol. 110, no. 3, pp. 346-359, 2008), may be used.
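- For illustration, feature points such as SIFT keypoints could be obtained with OpenCV roughly as below; this is a sketch under the assumption that an OpenCV build providing SIFT_create is available, not the implementation of the feature point calculation unit 21 itself:

```python
import cv2

def compute_representative_pixels(gray_image, max_points=500):
    """Detect SIFT keypoints in a grayscale image and return their pixel
    positions (as representative pixels) together with their descriptors."""
    sift = cv2.SIFT_create(nfeatures=max_points)
    keypoints, descriptors = sift.detectAndCompute(gray_image, None)
    points = [(int(kp.pt[0]), int(kp.pt[1])) for kp in keypoints]
    return points, descriptors
```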
- The alignment processing unit 22 performs an alignment process that rectifies (parallelizes) the first image and the second image using the calculated feature points. Specifically, the alignment processing unit 22 detects a point in the second image corresponding to each feature point based on the feature amount. The alignment processing unit 22 then performs the alignment process using the positional relationship between the two corresponding points, that is, the feature point and the detected point in the second image.
- the parallax value calculation unit 23 calculates the parallax value of the representative pixel using the first image and the second image on which the alignment processing has been performed. That is, the parallax value calculation unit 23 calculates the parallax value for each feature point using the positional relationship between the corresponding points.
- the segmentation unit 24 divides the first image into a plurality of segments by clustering based on similarity defined using pixel values and pixel positions. Details of this clustering will be described later.
- the segment combination unit 25 combines the empty segment and the segment adjacent to the empty segment into one when a plurality of segments includes an empty segment.
- the empty segment is a segment that does not include a representative pixel.
- The segment combination unit 25 repeats this combining until no empty segment remains among the plurality of segments.
- the depth data generation unit 26 generates depth data based on the segments combined by the segment combination unit 25. Specifically, the depth data generation unit 26 specifies, for example, the parallax value of the representative pixel included in the segment as the parallax value of the segment. In addition, when the segment includes two or more representative pixels, the depth data generation unit 26 specifies, for example, the median value or average value of the parallax values of the two or more representative pixels as the segment parallax value. To do.
- the image processing unit 27 performs image processing on at least one of the first image and the second image based on the generated depth data. For example, the image processing unit 27 separates the first image into a foreground area and a background area based on the depth data. Then, the image processing unit 27 performs a blurring process on the background area. For example, the image processing unit 27 may combine the foreground region with a third image different from the first image and the second image.
- the depth data for separating the first image into the foreground area and the background area does not necessarily have to be high-definition depth data in pixel units. That is, the depth data based on the parallax value of each segment can be used effectively.
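- As an illustrative sketch of this kind of use of the segment-level depth data, the first image could be split by thresholding the depth map and only the background blurred; the threshold and the Gaussian kernel size below are assumptions:

```python
import cv2
import numpy as np

def blur_background(image, depth_map, depth_threshold):
    """Separate the foreground (near) and background (far) regions using the
    depth map and apply a blurring process only to the background region."""
    foreground_mask = depth_map < depth_threshold              # near pixels form the foreground
    blurred = cv2.GaussianBlur(image, (21, 21), 0)              # blurred copy of the whole image
    mask3 = np.repeat(foreground_mask[:, :, None], 3, axis=2)   # broadcast the mask to 3 channels
    return np.where(mask3, image, blurred)                      # keep foreground, use blur elsewhere
```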
- FIG. 5 is a flowchart showing the processing operation of the image processing apparatus 20 according to the second embodiment.
- the feature point calculation unit 21 calculates a feature point of the first image as a representative pixel (S201).
- the alignment processing unit 22 performs alignment processing for parallelizing the first image and the second image using the calculated feature points (S202).
- the parallax value calculation unit 23 calculates the parallax value of the representative pixel using the first image and the second image on which the alignment processing has been performed (S203).
- the segmentation unit 24 divides the first image into a plurality of segments by clustering based on the similarity defined using the pixel value and the pixel position (S204).
- the segment combination unit 25 combines a plurality of segments so that each segment includes at least one representative pixel (S205).
- the depth data generation unit 26 generates depth data based on the segments combined by the segment combination unit 25 (S206).
- the image processing unit 27 performs image processing on at least one of the first image and the second image based on the generated depth data (S207).
- FIG. 6 is a diagram showing an outline of the alignment processing according to the second embodiment.
- stereo images taken with a stereo camera are often not parallel to each other. That is, the epipolar line is often not horizontal in each of the first image 101 and the second image 102.
- Therefore, the alignment processing unit 22 parallelizes the first image 101 and the second image 102 so that the epipolar line becomes horizontal in each of the first image 101 and the second image 102.
- FIG. 7 is a diagram for explaining an example of the alignment process according to the second embodiment. Specifically, FIG. 7 is a diagram for explaining the alignment process based on the method disclosed in Reference 3 ("New Image Analysis Handbook", supervised by Mikio Takagi and Yoko Shimoda, University of Tokyo Press, September 2004, pages 1333 to 1337).
- the image L and the image R are stereo images obtained by shooting the object P.
- A point P′R on the image R corresponding to a point P′L on the image L exists on the straight line where the image R intersects the plane containing the projection centers OL and OR of the two images and the point P′L on the image L.
- This line is called epipolar line.
- a plane including the object P and the projection centers OL and OR of the two images is called an epipolar plane.
- the line of intersection between the epipolar plane and the image projection planes of the two images L and R is an epipolar line.
- In general, the epipolar line is not parallel to the image scanning direction (here, the horizontal direction). Therefore, a two-dimensional search is required for matching corresponding points, and the amount of calculation increases. Therefore, in order to simplify the search, the two stereo images are parallelized (rectified) by the following method.
- The number of unknowns in these coordinate conversion formulas is five: (by′, bz′) and (ω′, φ′, κ′), all related to the image R. These five unknowns are determined so as to satisfy the coplanarity condition of Expression 3.
- The image L′ (u′L, v′L) and the image R′ (u′R, v′R) after this coordinate conversion are rearranged along the epipolar lines.
- In the two converted images (image L′ and image R′), the v′ coordinates of corresponding points are equal to each other. That is, in the converted images the corresponding points need only be searched for in the horizontal direction, so the corresponding points can be found easily. In this way, the image L is converted into a new image L′, and the image R into a new image R′.
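- One common way to approximate such a parallelization from feature correspondences is OpenCV's uncalibrated rectification; the sketch below is a rough analogue of the processing described above, not the method of Reference 3 itself, and the function and parameter choices are assumptions:

```python
import cv2
import numpy as np

def rectify_pair(img1, img2, pts1, pts2):
    """pts1, pts2: corresponding feature points (N x 2 float arrays) in the two images.
    Warps both images so that corresponding epipolar lines become horizontal."""
    h, w = img1.shape[:2]
    F, inliers = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)
    ok, H1, H2 = cv2.stereoRectifyUncalibrated(pts1, pts2, F, (w, h))
    rect1 = cv2.warpPerspective(img1, H1, (w, h))
    rect2 = cv2.warpPerspective(img2, H2, (w, h))
    return rect1, rect2
```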
- the parallax value calculation unit 23 searches for a pixel in the second image 102 corresponding to the representative pixel in the first image 101 using the first image 101 and the second image 102 that have been parallelized in this way. Thereby, the parallax value of each representative pixel can be easily calculated.
- FIG. 8 is a flowchart showing details of segmentation according to the second embodiment.
- FIG. 9 is a diagram for explaining segmentation according to the second embodiment.
- FIG. 10 is a diagram for explaining segmentation according to the second embodiment.
- FIG. 11 is a diagram illustrating an example of a segmentation result according to the second embodiment.
- the segmentation unit 24 first converts the color space of the first image and the second image (S301). Specifically, the segmentation unit 24 converts the first image and the second image from the RGB color space to the Lab color space.
- This Lab color space is a perceptually uniform color space. That is, in the Lab color space, when the color value changes by the same amount, the change that humans feel when they see it is also equal. Therefore, the segmentation unit 24 can divide the first image along the boundary of the subject perceived by a human by segmenting the first image in the Lab color space.
- the segmentation unit 24 sets the center of gravity of k (k: an integer of 2 or more) initial clusters (S302).
- the centroids of these k initial clusters are set so as to be evenly arranged on the first image, for example.
- the centroids of the k initial clusters are set so that the interval between adjacent centroids is S (pixel).
- the processes in steps S303 and S304 are performed on each pixel in the first image.
- the segmentation unit 24 calculates a distance Ds with respect to the center of gravity of each cluster (S303).
- This distance Ds corresponds to a value indicating similarity defined using the pixel value and the pixel position.
- the smaller the distance Ds the higher the similarity of the pixel to the center of gravity of the cluster.
- the segmentation unit 24 calculates the distance Ds of the target pixel i only with respect to the center of gravity Ck located within the distance calculation target range.
- For example, a range within the initial centroid interval S from the position of the target pixel i is set as the distance calculation target range. That is, the segmentation unit 24 calculates the distance to each of the centroids C2, C3, C6, and C7 for the target pixel i.
- By limiting the calculation to this distance calculation target range, it is possible to reduce the calculation load compared with calculating the distance to every centroid.
- The distance Ds of the target pixel i (pixel position (xi, yi), pixel value (li, ai, bi)) with respect to the center of gravity Ck (pixel position (xk, yk), pixel value (lk, ak, bk)) is calculated by Equation (6) below.
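- Equation (6) itself is not reproduced in this text; consistent with the definitions of dlab, dxy, m, and S given here (and with SLIC-style clustering), it presumably has a form such as:

```latex
d_{lab} = \sqrt{(l_k - l_i)^2 + (a_k - a_i)^2 + (b_k - b_i)^2}, \qquad
d_{xy}  = \sqrt{(x_k - x_i)^2 + (y_k - y_i)^2}, \qquad
D_s = d_{lab} + \frac{m}{S}\, d_{xy}
```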
- m is a coefficient for balancing the influence of the distance dlab based on the pixel value and the distance dxy based on the pixel position on the distance Ds.
- This coefficient m may be predetermined experimentally or empirically.
- the segmentation unit 24 determines the cluster to which the target pixel i belongs using the distance Ds with respect to each centroid of the target pixel i in this way (S304). Specifically, the segmentation unit 24 determines the cluster having the center of gravity with the smallest distance Ds as the cluster to which the target pixel i belongs.
- the cluster to which each pixel belongs is determined by repeating the processes in steps S303 and S304 for each pixel included in the first image.
- the segmentation unit 24 updates the center of gravity of each cluster (S305). For example, as a result of determining the cluster to which each pixel belongs in step S304, as shown in FIG. 10, when the rectangular cluster changes to a hexagonal cluster, the pixel value and pixel position of the centroid C6 are updated.
- the segmentation unit 24 calculates the pixel value (lk_new, ak_new, bk_new) and pixel position (xk_new, yk_new) of the new center of gravity according to the following Expression 7.
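- Expression 7 is likewise not reproduced here; consistent with the surrounding text, the new center of gravity is presumably the mean pixel value and mean position of the pixels belonging to cluster k (denoted Ck below):

```latex
(l_{k\_new}, a_{k\_new}, b_{k\_new}) = \frac{1}{|C_k|} \sum_{i \in C_k} (l_i, a_i, b_i), \qquad
(x_{k\_new}, y_{k\_new}) = \frac{1}{|C_k|} \sum_{i \in C_k} (x_i, y_i)
```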
- When the centers of gravity of the clusters have converged (Yes in S306), the segmentation unit 24 ends the process. That is, when there is no change in the center of gravity of each cluster before and after the update in step S305, the segmentation unit 24 ends the segmentation. On the other hand, when the centroids of the respective clusters have not converged (No in S306), the segmentation unit 24 repeats the processes in steps S303 to S305.
- In this way, the segmentation unit 24 can divide the first image into a plurality of segments by clustering (here, the k-means method) based on the similarity defined using the pixel value and the pixel position. Therefore, as shown in FIG. 11, the segmentation unit 24 can divide the first image into a plurality of segments according to the characteristics of the subject areas included in the first image.
- the segmentation unit 24 can divide the first image into a plurality of segments so that the same subject is included in one segment.
- the accuracy of the disparity value specified for each segment can be improved. That is, the depth data can be generated more accurately.
- Furthermore, since the k-means method is a relatively simple clustering method, the processing load for generating the depth data can be reduced.
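- A compact sketch of this k-means-style segmentation (steps S301 to S306) is given below in Python/NumPy; it assumes a Lab image as input, uses the distance form shown above, and fixes the number of iterations instead of testing convergence, so it is an approximation of the described processing rather than a faithful reimplementation:

```python
import numpy as np

def slic_like_segmentation(lab, S=20, m=10.0, n_iter=10):
    """lab: H x W x 3 image in the Lab color space.
    S: initial centroid spacing in pixels, m: color/position balance coefficient.
    Returns an H x W array of segment labels."""
    h, w = lab.shape[:2]
    ys, xs = np.mgrid[S // 2:h:S, S // 2:w:S]                  # S302: evenly spaced initial centroids
    cy, cx = ys.ravel().astype(float), xs.ravel().astype(float)
    clab = lab[cy.astype(int), cx.astype(int)].astype(float)   # centroid colors
    yy, xx = np.mgrid[0:h, 0:w]
    for _ in range(n_iter):
        best = np.full((h, w), np.inf)
        labels = np.zeros((h, w), dtype=int)
        for k in range(len(cy)):                               # S303/S304: assign pixels near centroid k
            y0, y1 = int(max(cy[k] - S, 0)), int(min(cy[k] + S + 1, h))
            x0, x1 = int(max(cx[k] - S, 0)), int(min(cx[k] + S + 1, w))
            dlab = np.linalg.norm(lab[y0:y1, x0:x1] - clab[k], axis=2)
            dxy = np.sqrt((yy[y0:y1, x0:x1] - cy[k]) ** 2 + (xx[y0:y1, x0:x1] - cx[k]) ** 2)
            ds = dlab + (m / S) * dxy
            better = ds < best[y0:y1, x0:x1]
            best[y0:y1, x0:x1][better] = ds[better]
            labels[y0:y1, x0:x1][better] = k
        for k in range(len(cy)):                               # S305: update centroids
            mask = labels == k
            if mask.any():
                cy[k], cx[k] = yy[mask].mean(), xx[mask].mean()
                clab[k] = lab[mask].reshape(-1, 3).mean(axis=0)
    return labels
```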
- Next, the segment combining process (S205), which combines segments based on color similarity, will be described in detail.
- FIG. 12 is a flowchart showing details of the segment combining process according to the second embodiment.
- FIG. 13 is a diagram for explaining the segment combining process according to the second embodiment.
- First, the segment combination unit 25 selects an empty segment from the plurality of segments obtained by the division performed by the segmentation unit 24 (S401).
- An empty segment is a segment that does not include any representative pixels.
- Next, the segment combination unit 25 selects a segment adjacent to the selected empty segment (hereinafter also referred to as an adjacent segment) (S402).
- the segment combination unit 25 selects at least one segment from the plurality of adjacent segments based on the similarity of colors. That is, the segment combination unit 25 selects an adjacent segment that is most similar in color to the empty segment as a segment to be combined.
- this color similarity evaluation is preferably performed in the YUV color space or the RGB color space.
- Note that the segment combination unit 25 does not necessarily need to select only one adjacent segment.
- the segment combination unit 25 may select a plurality of adjacent segments whose color similarity values are greater than or equal to a threshold value.
- the segment combination unit 25 combines the empty segment selected in step S401 and the adjacent segment selected in step S402 into one (S403). In other words, the segment combination unit 25 combines the selected empty segment and the selected adjacent segment to set one new segment.
- For example, when there is an empty segment S2 and adjacent segments S1, S3, and S4 adjacent to the empty segment S2, the segment combination unit 25 selects, from the plurality of adjacent segments S1, S3, and S4, the adjacent segment S1 whose color (for example, average color) is most similar to the color of the empty segment S2. Then, as shown in (b) of FIG. 13, the segment combination unit 25 combines the empty segment S2 and the selected adjacent segment S1, and sets a new segment SN.
- the segment combination unit 25 determines whether there is an empty segment (S404). Here, if there is no empty segment (No in S404), the segment combining unit 25 ends the process. On the other hand, if there is an empty segment (Yes in S404), the segment combination unit 25 returns to step S401 and executes the process.
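- A possible sketch of this combining loop (S401 to S404) is shown below; the mean-color criterion follows the description above, while the 4-neighborhood adjacency test and the handling of isolated segments are implementation assumptions:

```python
import numpy as np

def merge_empty_segments(labels, image, rep_coords):
    """labels: H x W array of segment labels, image: H x W x 3 color image,
    rep_coords: iterable of (y, x) positions of the representative pixels.
    Merges every segment without a representative pixel ("empty segment")
    into its most similarly colored adjacent segment (S401 to S404)."""
    rep_set = set(int(labels[y, x]) for y, x in rep_coords)
    mean_color = {int(s): image[labels == s].reshape(-1, 3).mean(axis=0)
                  for s in np.unique(labels)}
    while True:
        empty = [int(s) for s in np.unique(labels) if int(s) not in rep_set]
        if not empty:                      # S404: no empty segment remains
            break
        s = empty[0]                       # S401: pick an empty segment
        mask = labels == s
        border = np.zeros_like(mask)       # pixels adjacent to the empty segment (4-neighborhood)
        border[:-1, :] |= mask[1:, :]
        border[1:, :] |= mask[:-1, :]
        border[:, :-1] |= mask[:, 1:]
        border[:, 1:] |= mask[:, :-1]
        neighbors = set(int(n) for n in np.unique(labels[border & ~mask]))
        if not neighbors:                  # isolated segment (assumption): leave it to avoid looping forever
            rep_set.add(s)
            continue
        # S402: adjacent segment whose mean color is most similar to the empty segment
        target = min(neighbors, key=lambda n: np.linalg.norm(mean_color[n] - mean_color[s]))
        labels[mask] = target              # S403: combine the two segments into one
        mean_color[target] = image[labels == target].reshape(-1, 3).mean(axis=0)
    return labels
```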
- the depth data generation unit 26 generates depth data based on the segments combined in this way.
- As described above, according to the image processing device 20, when an empty segment is included in the plurality of segments, the empty segment and an adjacent segment can be combined until no empty segment remains. Therefore, when the first image is divided into a plurality of segments by the segmentation unit, it is not always necessary to divide the first image so that each segment includes a representative pixel. That is, segmentation can be performed without considering the correspondence with the representative pixels. As a result, the segmentation and the parallax value calculation for the representative pixels can be processed in parallel, and the depth data generation process can be sped up.
- segments having similar colors can be combined into one. That is, since a region having a similar color is treated as one segment, there is a high possibility that a region having a similar depth becomes one segment.
- depth data indicating the depth corresponding to each segment can be generated more accurately.
- Furthermore, according to the image processing device 20, when two or more representative pixels are included in a segment, the median value or the average value of the parallax values of the two or more representative pixels can be specified as the parallax value of the segment. Accordingly, the parallax value of the segment can be specified easily, and the processing load for generating the depth data can be reduced. In addition, the error between the parallax value of the segment and the parallax value of each pixel included in the segment can be kept relatively small, so the depth data can be generated more accurately.
- Also, according to the image processing apparatus 20, feature points can be calculated as the representative pixels. Therefore, it becomes easy to detect the pixels in the second image corresponding to the representative pixels, and the processing load can be reduced.
- In addition, according to the image processing apparatus 20, alignment processing for parallelizing the first image and the second image can be performed.
- In general, when a multi-viewpoint image such as a stereo image is captured, alignment processing of the multi-viewpoint image is performed; in that alignment processing, feature points are calculated and corresponding points are detected. Therefore, the results of such alignment processing can also be used for calculating the parallax values of the representative pixels.
- the depth data generation unit 26 interpolates the parallax value of other pixels included in the segment using the parallax value of at least one representative pixel included in the segment. By doing so, the parallax value of each pixel included in the segment is calculated.
- the depth data generation unit 26 generates a depth map indicating the depth of each pixel as depth data based on the calculated parallax value of each pixel.
- FIG. 14 is a flowchart showing details of the depth data generation processing according to the modification of the second embodiment.
- the depth data generation unit 26 selects one segment from a plurality of segments in the first image (S501). The depth data generation unit 26 determines whether or not the selected segment includes a plurality of representative pixels (S502).
- When the selected segment includes a plurality of representative pixels (Yes in S502), the depth data generation unit 26 interpolates the parallax values of the other pixels included in the segment using the parallax values of the plurality of representative pixels, thereby calculating the parallax value of each pixel included in the segment (S503). For example, the depth data generation unit 26 calculates the parallax values of the other pixels by spline interpolation.
- On the other hand, when the selected segment does not include a plurality of representative pixels (No in S502), the depth data generation unit 26 determines the parallax values of the other pixels included in the segment using the parallax value of the representative pixel (S504). For example, the depth data generation unit 26 sets the parallax values of all the pixels included in the segment to the parallax value of the representative pixel.
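- A sketch of this per-pixel parallax calculation (S502 to S504) using SciPy's griddata is shown below; linear interpolation with a nearest-neighbor fallback is used in place of the spline interpolation mentioned above, which is an assumption made to keep the example short:

```python
import numpy as np
from scipy.interpolate import griddata

def interpolate_segment_disparities(labels, rep_coords, rep_disp):
    """labels: H x W segment labels, rep_coords: N x 2 array of (y, x) representative
    pixel positions, rep_disp: length-N array of their parallax values.
    Returns an H x W parallax map built segment by segment (S501 to S505)."""
    disp_map = np.zeros(labels.shape, dtype=float)
    rep_labels = labels[rep_coords[:, 0], rep_coords[:, 1]]
    for seg in np.unique(labels):
        mask = labels == seg
        idx = np.where(rep_labels == seg)[0]
        if idx.size == 0:
            continue                         # empty segments are assumed to have been merged already
        ys, xs = np.nonzero(mask)
        if idx.size == 1:                    # S504: only one representative pixel in the segment
            disp_map[mask] = float(rep_disp[idx[0]])
            continue
        pts = rep_coords[idx].astype(float)  # S503: interpolate from several representative pixels
        vals = rep_disp[idx].astype(float)
        try:
            interp = griddata(pts, vals, (ys, xs), method='linear')
        except Exception:                    # degenerate point sets (e.g. collinear) fall back to nearest
            interp = np.full(ys.shape, np.nan)
        nearest = griddata(pts, vals, (ys, xs), method='nearest')
        disp_map[ys, xs] = np.where(np.isnan(interp), nearest, interp)
    return disp_map
```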
- the depth data generation unit 26 determines whether all segments have been selected (S505). If any segment is not selected (No in S505), the process returns to step S501.
- the depth data generation unit 26 when all the segments are selected (Yes in S505), the depth data generation unit 26 generates a depth map (depth data) by converting the parallax value of each pixel into a depth value (S506). Note that the conversion from the parallax value to the depth value is performed based on, for example, the principle of triangulation.
- As described above, according to the present modification, the parallax values of the other pixels included in a segment can be interpolated using the parallax value of at least one representative pixel included in the segment. Therefore, the parallax value of each pixel can be obtained by interpolation, and the depth data can be generated more accurately.
- the feature point calculation unit 21 may calculate the feature points as representative pixels so as not to exceed a predetermined number. For example, the feature point calculation unit 21 may calculate the feature points so that a plurality of feature points are not included in the segment. For example, the feature point calculation unit 21 may calculate the feature points so that the distance between the feature points does not become less than a predetermined distance. By calculating the feature points in this way, it is possible to prevent an increase in the load of detection processing for pixels corresponding to the representative pixels.
- the segmentation unit 24 performs segmentation based on the k-means method.
- the segmentation unit 24 may perform segmentation based on another clustering method.
- For example, the segmentation unit 24 may perform segmentation based on the mean-shift method (mean-shift clustering).
- the segment combination unit 25 combines segments based on the similarity of colors.
- the segments may be combined based on similarities other than colors.
- the segment combination unit 25 may combine segments based on luminance similarity.
- the components included in the image processing apparatuses 10 and 20 in the first or second embodiment may be configured by one system LSI (Large Scale Integration).
- the image processing apparatus 10 may be configured by a system LSI having a parallax value calculation unit 11, a segmentation unit 12, and a depth data generation unit 13.
- The system LSI is an ultra-multifunctional LSI manufactured by integrating a plurality of components on one chip. Specifically, it is a computer system including a microprocessor, a ROM (Read Only Memory), a RAM (Random Access Memory), and so on. A computer program is stored in the ROM, and the system LSI achieves its functions by the microprocessor operating according to the computer program.
- system LSI may be called IC, LSI, super LSI, or ultra LSI depending on the degree of integration.
- The method of circuit integration is not limited to LSI, and implementation using a dedicated circuit or a general-purpose processor is also possible. An FPGA (Field Programmable Gate Array) that can be programmed after LSI manufacture, or a reconfigurable processor in which the connections and settings of the circuit cells inside the LSI can be reconfigured, may also be used.
- FIG. 15 is a block diagram illustrating a functional configuration of the imaging apparatus 30 according to an embodiment.
- the imaging device 30 is, for example, a digital still camera or a digital video camera. As illustrated in FIG. 15, the imaging device 30 includes an imaging unit 31 that captures the first image and the second image from different viewpoints, and the image processing device 10 or 20 according to the first or second embodiment.
- each component may be configured by dedicated hardware or may be realized by executing a software program suitable for each component.
- Each component may be realized by a program execution unit such as a CPU or a processor reading and executing a software program recorded on a recording medium such as a hard disk or a semiconductor memory.
- Note that the software that realizes the image processing apparatus of each of the above embodiments is the following program.
- That is, this program causes a computer to execute an image processing method for generating depth data using a first image and a second image taken from different viewpoints, the method including: a parallax value calculation step of calculating, for each of a plurality of representative pixels that are a part of the pixels in the first image, a parallax value of the representative pixel based on a positional relationship between the representative pixel and a pixel in the second image corresponding to the representative pixel; a segmentation step of dividing the first image into a plurality of segments based on similarity of pixel values; and a depth data generation step of generating depth data indicating the depth corresponding to each segment by specifying, for each segment, a parallax value of the segment based on a parallax value of a representative pixel included in the segment.
- The present invention can be used as an image processing apparatus capable of generating depth data using a first image and a second image taken from different viewpoints, and as an imaging device, such as a digital still camera or a digital video camera, provided with the image processing apparatus.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computer Graphics (AREA)
- Image Processing (AREA)
- Length Measuring Devices By Optical Means (AREA)
- Measurement Of Optical Distance (AREA)
- Image Analysis (AREA)
Abstract
Description
FIG. 1 is a block diagram showing the functional configuration of the image processing apparatus 10 according to Embodiment 1. The image processing apparatus 10 generates depth data of the first image using a first image and a second image (for example, a stereo image) taken from different viewpoints. The first image and the second image are, for example, stereo images (a left-eye image and a right-eye image).
Next, Embodiment 2 will be described with reference to the drawings.
Next, a modification of Embodiment 2 will be described. In this modification, the processing operation of the depth data generation unit 26 differs from that of Embodiment 2.
11, 23 Parallax value calculation unit
12, 24 Segmentation unit
13, 26 Depth data generation unit
21 Feature point calculation unit
22 Alignment processing unit
25 Segment combination unit
27 Image processing unit
30 Imaging device
31 Imaging unit
101 First image
102 Second image
Claims (15)
- 1. An image processing device that generates depth data using a first image and a second image taken from different viewpoints, the image processing device comprising: a parallax value calculation unit that calculates, for each of a plurality of representative pixels that are a part of the pixels in the first image, a parallax value of the representative pixel based on a positional relationship between the representative pixel and a pixel in the second image corresponding to the representative pixel; a segmentation unit that divides the first image into a plurality of segments based on similarity of pixel values; and a depth data generation unit that generates depth data indicating a depth corresponding to each segment by specifying, for each segment, a parallax value of the segment based on a parallax value of a representative pixel included in the segment.
- 2. The image processing device according to claim 1, further comprising a segment combining unit that, when the plurality of segments include an empty segment that does not include a representative pixel, combines the empty segment and a segment adjacent to the empty segment into one, wherein the depth data generation unit generates the depth data based on the segments combined by the segment combining unit.
- 3. The image processing device according to claim 2, wherein, when the empty segment is adjacent to a plurality of segments, the segment combining unit selects at least one segment from the plurality of segments based on color similarity and combines the selected at least one segment and the empty segment into one.
- 4. The image processing device according to any one of claims 1 to 3, wherein, when a segment includes two or more representative pixels, the depth data generation unit specifies a median value or an average value of the parallax values of the two or more representative pixels as the parallax value of the segment.
- 5. The image processing device according to any one of claims 1 to 3, wherein the depth data generation unit calculates, for each segment, a parallax value of each pixel included in the segment by interpolating parallax values of other pixels included in the segment using a parallax value of at least one representative pixel included in the segment, and generates, as the depth data, a depth map indicating the depth of each pixel based on the calculated parallax value of each pixel.
- 6. The image processing device according to any one of claims 1 to 5, wherein the segmentation unit divides the first image into a plurality of segments by clustering based on similarity defined using pixel values and pixel positions.
- 7. The image processing device according to claim 6, wherein the clustering is k-means clustering.
- 8. The image processing device according to any one of claims 1 to 7, further comprising a feature point calculation unit that calculates feature points of the first image as the representative pixels.
- 9. The image processing device according to claim 8, further comprising an alignment processing unit that performs an alignment process for parallelizing the first image and the second image using the feature points, wherein the parallax value calculation unit calculates the parallax value of the representative pixel using the first image and the second image on which the alignment process has been performed.
- 10. The image processing device according to any one of claims 1 to 9, further comprising an image processing unit that separates the first image into a foreground region and a background region based on the depth data and performs a blurring process on the background region.
- 11. The image processing device according to any one of claims 1 to 9, further comprising an image processing unit that separates the first image into a foreground region and a background region based on the depth data and combines the foreground region with a third image different from the first image and the second image.
- 12. The image processing device according to any one of claims 1 to 11, wherein the image processing device is configured as an integrated circuit.
- 13. An imaging device comprising: the image processing device according to any one of claims 1 to 12; and an imaging unit that captures the first image and the second image.
- 14. An image processing method for generating depth data using a first image and a second image taken from different viewpoints, the method comprising: a parallax value calculation step of calculating, for each of a plurality of representative pixels that are a part of the pixels in the first image, a parallax value of the representative pixel based on a positional relationship between the representative pixel and a pixel in the second image corresponding to the representative pixel; a segmentation step of dividing the first image into a plurality of segments based on similarity of pixel values; and a depth data generation step of generating depth data indicating a depth corresponding to each segment by specifying, for each segment, a parallax value of the segment based on a parallax value of a representative pixel included in the segment.
- 15. A program for causing a computer to execute the image processing method according to claim 14.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201280019440.0A CN103493093B (zh) | 2011-11-17 | 2012-11-13 | 图像处理装置、摄像装置及图像处理方法 |
US14/113,114 US9153066B2 (en) | 2011-11-17 | 2012-11-13 | Image processing device, imaging device, and image processing method |
JP2013544128A JP5923713B2 (ja) | 2011-11-17 | 2012-11-13 | 画像処理装置、撮像装置および画像処理方法 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2011-251944 | 2011-11-17 | ||
JP2011251944 | 2011-11-17 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2013073167A1 true WO2013073167A1 (ja) | 2013-05-23 |
Family
ID=48429263
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2012/007270 WO2013073167A1 (ja) | 2011-11-17 | 2012-11-13 | 画像処理装置、撮像装置および画像処理方法 |
Country Status (4)
Country | Link |
---|---|
US (1) | US9153066B2 (ja) |
JP (1) | JP5923713B2 (ja) |
CN (1) | CN103493093B (ja) |
WO (1) | WO2013073167A1 (ja) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2015173441A (ja) * | 2014-03-03 | 2015-10-01 | ノキア コーポレイション | 立体画像の視差マップ推定のための方法,装置及びコンピュータプログラム製品 |
JP2018081672A (ja) * | 2016-11-14 | 2018-05-24 | 株式会社リコー | 深層畳み込みニューラルネットワークを用いる新ビュー合成 |
KR20200031169A (ko) * | 2017-11-01 | 2020-03-23 | 광동 오포 모바일 텔레커뮤니케이션즈 코포레이션 리미티드 | 이미지 처리 방법 및 장치 |
KR20200031689A (ko) * | 2017-11-01 | 2020-03-24 | 광동 오포 모바일 텔레커뮤니케이션즈 코포레이션 리미티드 | 이미지 처리 방법, 장치 및 기기 |
US11119201B2 (en) | 2017-10-24 | 2021-09-14 | Canon Kabushiki Kaisha | Distance detecting apparatus, image capturing apparatus, distance detecting method, and storage medium |
JP7566358B2 (ja) | 2020-06-10 | 2024-10-15 | ユーヴイアイ リミテッド | 深度推定システム及び深度推定方法 |
Families Citing this family (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TWI498526B (zh) * | 2013-06-05 | 2015-09-01 | Nat Univ Chung Cheng | Environment depth measurement method and its image acquisition device |
IN2013CH05313A (ja) * | 2013-11-18 | 2015-05-29 | Nokia Corp | |
US9466118B2 (en) * | 2014-01-17 | 2016-10-11 | Htc Corporation | Image segmentation device, image segmentation method, and depth map generating method |
US9292926B1 (en) * | 2014-11-24 | 2016-03-22 | Adobe Systems Incorporated | Depth map generation |
US10158840B2 (en) * | 2015-06-19 | 2018-12-18 | Amazon Technologies, Inc. | Steganographic depth images |
US20190012789A1 (en) * | 2015-07-21 | 2019-01-10 | Heptagon Micro Optics Pte. Ltd. | Generating a disparity map based on stereo images of a scene |
US10122996B2 (en) * | 2016-03-09 | 2018-11-06 | Sony Corporation | Method for 3D multiview reconstruction by feature tracking and model registration |
US10212306B1 (en) | 2016-03-23 | 2019-02-19 | Amazon Technologies, Inc. | Steganographic camera communication |
US10523918B2 (en) | 2017-03-24 | 2019-12-31 | Samsung Electronics Co., Ltd. | System and method for depth map |
JP6986683B2 (ja) * | 2018-01-05 | 2021-12-22 | パナソニックIpマネジメント株式会社 | 視差値算出装置、視差値算出方法及びプログラム |
EP3702985A1 (en) * | 2019-02-28 | 2020-09-02 | Accenture Global Solutions Limited | Augmented reality enabled cargo loading optimization |
US11800056B2 (en) | 2021-02-11 | 2023-10-24 | Logitech Europe S.A. | Smart webcam system |
US11800048B2 (en) | 2021-02-24 | 2023-10-24 | Logitech Europe S.A. | Image generating system with background replacement or modification capabilities |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2005165969A (ja) * | 2003-12-05 | 2005-06-23 | Canon Inc | 画像処理装置、及び方法 |
JP2008084076A (ja) * | 2006-09-28 | 2008-04-10 | Toshiba Corp | 画像処理装置、方法およびプログラム |
JP2011203953A (ja) * | 2010-03-25 | 2011-10-13 | Nec System Technologies Ltd | ステレオマッチング処理装置、ステレオマッチング処理方法、及び、プログラム |
WO2011132404A1 (ja) * | 2010-04-20 | 2011-10-27 | パナソニック株式会社 | 3d映像記録装置及び3d映像信号処理装置 |
Family Cites Families (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6301382B1 (en) * | 1996-06-07 | 2001-10-09 | Microsoft Corporation | Extracting a matte of a foreground object from multiple backgrounds by triangulation |
US20080260288A1 (en) | 2004-02-03 | 2008-10-23 | Koninklijke Philips Electronic, N.V. | Creating a Depth Map |
US7292257B2 (en) * | 2004-06-28 | 2007-11-06 | Microsoft Corporation | Interactive viewpoint video system and process |
JP4341564B2 (ja) * | 2005-02-25 | 2009-10-07 | 株式会社豊田中央研究所 | 対象物判定装置 |
US8351646B2 (en) * | 2006-12-21 | 2013-01-08 | Honda Motor Co., Ltd. | Human pose estimation and tracking using label assignment |
US7911513B2 (en) * | 2007-04-20 | 2011-03-22 | General Instrument Corporation | Simulating short depth of field to maximize privacy in videotelephony |
CN101587586B (zh) * | 2008-05-20 | 2013-07-24 | 株式会社理光 | 一种图像处理装置及图像处理方法 |
US8774512B2 (en) * | 2009-02-11 | 2014-07-08 | Thomson Licensing | Filling holes in depth maps |
KR101633620B1 (ko) * | 2010-01-04 | 2016-06-27 | 삼성전자 주식회사 | 영상 기반의 위치 인식을 위한 특징점 등록 장치 및 그 방법 |
JP5772446B2 (ja) * | 2010-09-29 | 2015-09-02 | 株式会社ニコン | 画像処理装置及び画像処理プログラム |
US8494285B2 (en) * | 2010-12-09 | 2013-07-23 | The Hong Kong University Of Science And Technology | Joint semantic segmentation of images and scan data |
US8884949B1 (en) * | 2011-06-06 | 2014-11-11 | Thibault Lambert | Method and system for real time rendering of objects from a low resolution depth camera |
-
2012
- 2012-11-13 WO PCT/JP2012/007270 patent/WO2013073167A1/ja active Application Filing
- 2012-11-13 JP JP2013544128A patent/JP5923713B2/ja not_active Expired - Fee Related
- 2012-11-13 US US14/113,114 patent/US9153066B2/en not_active Expired - Fee Related
- 2012-11-13 CN CN201280019440.0A patent/CN103493093B/zh not_active Expired - Fee Related
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2005165969A (ja) * | 2003-12-05 | 2005-06-23 | Canon Inc | 画像処理装置、及び方法 |
JP2008084076A (ja) * | 2006-09-28 | 2008-04-10 | Toshiba Corp | 画像処理装置、方法およびプログラム |
JP2011203953A (ja) * | 2010-03-25 | 2011-10-13 | Nec System Technologies Ltd | ステレオマッチング処理装置、ステレオマッチング処理方法、及び、プログラム |
WO2011132404A1 (ja) * | 2010-04-20 | 2011-10-27 | パナソニック株式会社 | 3d映像記録装置及び3d映像信号処理装置 |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2015173441A (ja) * | 2014-03-03 | 2015-10-01 | ノキア コーポレイション | 立体画像の視差マップ推定のための方法,装置及びコンピュータプログラム製品 |
JP2018081672A (ja) * | 2016-11-14 | 2018-05-24 | 株式会社リコー | 深層畳み込みニューラルネットワークを用いる新ビュー合成 |
US11119201B2 (en) | 2017-10-24 | 2021-09-14 | Canon Kabushiki Kaisha | Distance detecting apparatus, image capturing apparatus, distance detecting method, and storage medium |
KR20200031169A (ko) * | 2017-11-01 | 2020-03-23 | 광동 오포 모바일 텔레커뮤니케이션즈 코포레이션 리미티드 | 이미지 처리 방법 및 장치 |
KR20200031689A (ko) * | 2017-11-01 | 2020-03-24 | 광동 오포 모바일 텔레커뮤니케이션즈 코포레이션 리미티드 | 이미지 처리 방법, 장치 및 기기 |
KR102266649B1 (ko) * | 2017-11-01 | 2021-06-18 | 광동 오포 모바일 텔레커뮤니케이션즈 코포레이션 리미티드 | 이미지 처리 방법 및 장치 |
KR102279436B1 (ko) * | 2017-11-01 | 2021-07-21 | 광동 오포 모바일 텔레커뮤니케이션즈 코포레이션 리미티드 | 이미지 처리 방법, 장치 및 기기 |
JP7566358B2 (ja) | 2020-06-10 | 2024-10-15 | ユーヴイアイ リミテッド | 深度推定システム及び深度推定方法 |
Also Published As
Publication number | Publication date |
---|---|
JPWO2013073167A1 (ja) | 2015-04-02 |
US20140072205A1 (en) | 2014-03-13 |
JP5923713B2 (ja) | 2016-05-25 |
CN103493093B (zh) | 2017-07-18 |
US9153066B2 (en) | 2015-10-06 |
CN103493093A (zh) | 2014-01-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5923713B2 (ja) | 画像処理装置、撮像装置および画像処理方法 | |
CN111066065B (zh) | 用于混合深度正则化的系统和方法 | |
KR101802146B1 (ko) | 화상처리장치 및 화상처리방법 | |
EP3300022B1 (en) | Image processing apparatus, image processing method, and program | |
JP6371553B2 (ja) | 映像表示装置および映像表示システム | |
JP6043293B2 (ja) | 画像処理装置、撮像装置および画像処理方法 | |
US20120121166A1 (en) | Method and apparatus for three dimensional parallel object segmentation | |
CN106488215B (zh) | 图像处理方法和设备 | |
JP5878924B2 (ja) | 画像処理装置、撮像装置および画像処理方法 | |
US9235879B2 (en) | Apparatus, system, and method for temporal domain hole filling based on background modeling for view synthesis | |
US10818018B2 (en) | Image processing apparatus, image processing method, and non-transitory computer-readable storage medium | |
JP7159384B2 (ja) | 画像処理装置、画像処理方法、及びプログラム | |
JP7032871B2 (ja) | 画像処理装置及び画像処理方法、プログラム、記憶媒体 | |
JP2019114103A (ja) | 物体認識処理装置、物体認識処理方法及びプログラム | |
JP2018151830A (ja) | 画像処理装置、画像処理方法及びプログラム | |
US11189053B2 (en) | Information processing apparatus, method of controlling information processing apparatus, and non-transitory computer-readable storage medium | |
Li et al. | RGBD relocalisation using pairwise geometry and concise key point sets | |
US20100220893A1 (en) | Method and System of Mono-View Depth Estimation | |
JP2013185905A (ja) | 情報処理装置及び方法、並びにプログラム | |
US11256949B2 (en) | Guided sparse feature matching via coarsely defined dense matches | |
San et al. | Stereo matching algorithm by hill-climbing segmentation | |
JP5891751B2 (ja) | 画像間差分装置および画像間差分方法 | |
JP6818485B2 (ja) | 画像処理装置、画像処理方法、及びプログラム | |
JP2018156544A (ja) | 情報処理装置及びプログラム | |
van der Merwe et al. | An examination of weighted cost aggregation methods for local stereo matching |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 12849949 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2013544128 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 14113114 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 12849949 Country of ref document: EP Kind code of ref document: A1 |