
WO2020037566A1 - Image processing and matching method, device, and storage medium - Google Patents

Image processing and matching method, device, and storage medium

Info

Publication number
WO2020037566A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
edge
resolution
matching
reference pixel
Prior art date
Application number
PCT/CN2018/101798
Other languages
English (en)
French (fr)
Inventor
阳光
Original Assignee
深圳配天智能技术研究院有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳配天智能技术研究院有限公司 filed Critical 深圳配天智能技术研究院有限公司
Priority to PCT/CN2018/101798 priority Critical patent/WO2020037566A1/zh
Priority to CN201880087343.2A priority patent/CN111630558B/zh
Publication of WO2020037566A1 publication Critical patent/WO2020037566A1/zh

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting

Definitions

  • the present invention relates to the field of image processing technologies, and in particular, to an image processing, matching method, device, and storage medium.
  • Reduced resolution processing is a commonly used technique in the field of image processing technology.
  • the resolution reduction process can be understood as reducing the number of sampling points of the image.
  • if the scaling factor of the resolution reduction process is k, the process is equivalent to taking one point every k points from the original N×M image and forming a new image from the extracted points.
  • the resulting image is the image obtained by reducing the resolution of the N×M image by the scaling factor k.
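  • The sampling described above can be sketched as plain array slicing. This is an illustrative sketch only, not the patent's implementation; `downsample` is a hypothetical helper name.

```python
import numpy as np

def downsample(img, k):
    """Keep one pixel every k pixels in each dimension (scaling factor k)."""
    return img[::k, ::k]

img = np.arange(64).reshape(8, 8)  # a toy 8x8 "N x M" image
small = downsample(img, 2)
print(small.shape)  # (4, 4): the 8x8 image reduced by scaling factor 2
```

  • Note that slicing discards three quarters of the pixels for k = 2, which is exactly the loss of edge information the patent sets out to mitigate.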
  • the reduction of the sampling points affects the clarity of the reduced-resolution image to a certain extent, causing edges that were sharp in the original image to become blurred after the resolution reduction.
  • as a result, the edges of objects in the processed image cannot be clearly distinguished, and two adjacent edges in the original image may even merge into one after the resolution reduction.
  • An object of the present invention is to provide an image processing and matching method, device, and storage medium, which can retain the pixel information at the edges of an image in the reduced-resolution image.
  • the present invention provides an image processing method, which includes:
  • the other regions of the first image except the reference pixel region are subjected to resolution reduction processing, and pixel information of the reference pixel region is retained at a corresponding edge position of the processed image to obtain a second image.
  • the present invention proposes an image matching method, which includes: performing the down-resolution processing on the left first image and the right first image using the above-mentioned image processing method to obtain a second left image and a second right image;
  • the left first image and the right first image are images of shooting the same target at different angles.
  • the present invention provides an image processing apparatus, which includes:
  • the memory is used to store operation instructions executed by the processor, as well as images and data;
  • the processor is configured to run the operation instruction to implement the foregoing image processing method.
  • the present invention provides a pattern matching device, which includes:
  • the memory is configured to store an operation instruction executed by the processor, and a left first image and a right first image that need to be matched;
  • the processor is configured to run the operation instruction to implement the foregoing image matching method.
  • the present invention provides a storage medium that stores program data that can be executed to implement the image processing method described above, or executed to implement the image matching method described above.
  • the image processing method of the present invention performs edge extraction on the first image and obtains the pixel data on both sides of the extracted edges; the range of the reference pixel area is determined according to the degree of difference between the pixel data on both sides of each edge, where the reference pixel area is formed by the edges, or by the edges together with the pixels on both sides; the regions of the first image other than the reference pixel area are subjected to resolution reduction, the pixel information of the reference pixel area is retained, and a second image is obtained.
  • the edge or pixel information on both sides of the edge is retained in the reduced-resolution image, thereby avoiding the occurrence of blurred edges in the image after the reduced-resolution processing.
  • FIG. 1 is a schematic flowchart of a first embodiment of an image processing method according to the present invention
  • FIG. 2 is a schematic flowchart of step S102 in FIG. 1;
  • FIG. 3 is a schematic flowchart of step S103 in FIG. 1;
  • FIG. 4 is a schematic flowchart of a first embodiment of an image matching method according to the present invention.
  • FIG. 5 is a schematic flowchart of step S204 in FIG. 4;
  • FIG. 6 is a schematic flowchart of a second embodiment of an image matching method according to the present invention.
  • FIG. 7 is a schematic flowchart of step S205 in FIG. 6;
  • FIG. 8 is a schematic flowchart of an embodiment of step S208 in FIG. 6;
  • FIG. 9 is a schematic flowchart of another embodiment of step S208 in FIG. 6;
  • FIG. 10 is a schematic structural diagram of an embodiment of an image processing apparatus according to the present invention.
  • FIG. 11 is a schematic structural diagram of an embodiment of an image matching device according to the present invention.
  • FIG. 12 is a schematic structural diagram of another embodiment of an image matching device according to the present invention.
  • FIG. 13 is a schematic structural diagram of an embodiment of a storage medium according to the present invention.
  • FIG. 1 is a schematic flowchart of a first embodiment of an image processing method according to the present invention. As shown in FIG. 1, the image processing method in this embodiment may include the following steps:
  • step S101 edge extraction is performed on the first image, and the extracted edge and pixel data on both sides of the edge are obtained.
  • edges in an image characterize the objects or graphic elements in the image and are very important data in image recognition technology. An edge is a place where the gray value of the image changes sharply compared with non-edge areas, and can therefore be used to identify the corresponding object.
  • the method for edge extraction of a first image is not limited, and any method of edge extraction in the prior art may be used.
  • pixel data is then acquired using the extracted edges as a reference.
  • the pixel selection object may be a pixel on an edge, a pixel on both sides of the edge, or a combination of the two.
  • the pixel data is the gray value of the pixel selection object.
  • step S102 the range of the reference pixel region is determined according to the degree of difference between the edge and pixel data on both sides of the edge.
  • the degree of difference in pixel data, such as gray value and brightness information, on both sides of an edge can indicate the sharpness of that edge.
  • if the difference between the gray value of the pixels on the edge and the gray values of the pixels on both sides of the edge is larger, the edge is clearer; if that difference is smaller, the edge is more blurred.
  • the sharpness of the edge can thus be determined, and the range of the reference pixel area that needs to be retained in the reduced-resolution image can be further determined based on that sharpness.
  • a reference pixel area with a relatively small range is set at a high-definition edge, and a reference pixel area with a relatively large range is set at a low-definition edge. Correspondingly, where the degree of difference between the edge and the pixel data on both sides is large, a relatively small reference pixel area is set; where the degree of difference is small, a relatively large reference pixel area is set. In this way, the pixel data on the edges and on both sides of the edges can be preserved relatively completely, while the impact on the resolution reduction of the overall image is kept small.
  • the pixel information at the edges is to be retained in the reduced-resolution image; that is, the reference pixel area includes at least the edges, and may also include the pixels on both sides of the edges.
  • alternatively, the reference pixel area may include only the pixels on the edge.
  • step S102 may include the following steps:
  • step S1021 the sharpness level of the edge is determined according to the degree of difference between the pixel data on both sides of the edge.
  • the degree of difference between pixel data may form a relatively continuous data set, while the sharpness level only represents a discrete grade. The difference data can therefore be divided into intervals, each interval corresponding to one sharpness level.
  • degrees of difference falling within the same interval correspond to the same sharpness level, and degrees of difference falling in different intervals correspond to different sharpness levels.
  • for example, two thresholds for the degree of difference are set, a first threshold and a second threshold, where the first threshold is greater than the second threshold. The sharpness level of edges with a degree of difference greater than or equal to the first threshold is set to level one; the sharpness level of edges with a degree of difference between the first threshold and the second threshold is set to level two; and the sharpness level of edges with a degree of difference less than or equal to the second threshold is set to level three. The sharpness of the edges thus decreases gradually from level one to level three.
  • the threshold of the degree of difference may be set according to actual needs.
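  • The two-threshold classification above can be sketched as follows; this is an illustrative sketch with hypothetical names (`sharpness_level`, `t1`, `t2`), and the concrete threshold values are placeholders, not values from the patent.

```python
def sharpness_level(diff, t1, t2):
    """Map a degree-of-difference value to a sharpness level, with t1 > t2.

    A larger difference means a sharper edge, so it receives level one.
    """
    if diff >= t1:
        return 1          # clearest edges
    if diff > t2:
        return 2          # between the two thresholds
    return 3              # most blurred edges

# example placeholder thresholds: t1 = 80, t2 = 30
print(sharpness_level(90, 80, 30))  # 1
print(sharpness_level(50, 80, 30))  # 2
print(sharpness_level(10, 80, 30))  # 3
```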
  • step S1022 the level width corresponding to the sharpness level is set, and the range of the reference pixel area is determined.
  • different level widths are preset for different sharpness levels; the level width is a characteristic value reflecting the total number of pixels included in the reference pixel area.
  • the number of pixels included in the reference pixel area is the product of the level width and the scaling factor of the resolution reduction process.
  • the specific value of the grade width is not specifically limited in this embodiment, and can be set according to actual requirements.
  • an edge with higher definition needs less of its own and its neighboring pixel information to be retained, so the corresponding preset level width is smaller; an edge with lower definition needs more pixel information on both sides to be retained, so the corresponding preset level width is larger.
  • in the sharpness levels set above, the sharpness of the edges decreases gradually from level one to level three; correspondingly, the level widths from level one to level three increase gradually.
  • the range of the reference pixel area corresponding to the edge can be determined according to the sharpness level of the edge.
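  • The level-width rule can be sketched as below. The specific widths in `LEVEL_WIDTHS` are assumed placeholder values (the patent leaves them to actual requirements); only the product rule — pixel count = level width × scaling factor — comes from the text above.

```python
# Assumed level widths: sharper edges (level 1) keep fewer pixels.
LEVEL_WIDTHS = {1: 1, 2: 2, 3: 3}

def reference_region_pixels(level, k):
    """Pixels in the reference pixel area: level width times scaling factor k."""
    return LEVEL_WIDTHS[level] * k

print(reference_region_pixels(1, 2))  # 2: a sharp edge, scaling factor 2
print(reference_region_pixels(3, 2))  # 6: a blurred edge keeps a wider band
```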
  • step S103 resolution reduction processing is performed on other regions of the first image except the reference pixel region, and pixel information of the reference pixel region is retained at a corresponding edge position of the processed image to obtain a second image.
  • Step S102 determines the reference pixel area that needs to be retained.
  • the image is subjected to resolution reduction processing, and the pixel information in the reference pixel area determined in step S102 is retained to the corresponding edge position of the image after the resolution reduction processing.
  • a second image can be obtained that retains the original pixel information within a certain range of the edges and both sides, and the resolution of other regions is reduced.
  • step S103 may include the following steps:
  • step S1031 the original position of the edge in the first image is obtained.
  • the edges in the first image will naturally shift during the resolution reduction processing, that is, the positions of the edges before and after the resolution reduction are different.
  • in order to retain the pixel information of the reference pixel area at the corresponding edge position of the image, the position where the edge should be located in the reduced-resolution image needs to be found first.
  • step S1032 the mapping position of the original position of the edge in the image after the resolution reduction process is calculated.
  • a reduced-resolution image is obtained, the original position of the edge in the first image is obtained according to step S1031, and the mapping position of that original position in the reduced-resolution image is calculated.
  • for example, if the scaling factor of the resolution reduction is 2, the reduced image contains 1/4 of the original pixels.
  • if the original position of an edge is (1, 7), the mapping position of the edge in the reduced-resolution image is calculated as (1, 3.5).
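  • The mapping can be sketched as dividing coordinates by the scaling factor. This is a simplified illustration (the worked example above scales only the column coordinate; here both coordinates are divided, and `mapped_position` is a hypothetical name); the key point is that the result may be fractional, i.e. a sub-pixel position.

```python
def mapped_position(pos, k):
    """Map an original-image coordinate to the reduced-resolution image (factor k)."""
    return tuple(p / k for p in pos)

print(mapped_position((1, 7), 2))  # (0.5, 3.5): fractional, sub-pixel position
```

  • These fractional positions are what later enables the decimal-level edge matching described in the matching embodiments.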
  • step S1033 in the image after the resolution reduction process, the edge mapping position is used as a reference position, and the pixel information of the reference pixel region is retained in the processed image.
  • the mapping position of the edge calculated in step S1032 determines where the edge should be located in the reduced-resolution image. Therefore, using the mapped position of the edge as a reference, the pixels in the reference pixel area obtained in step S102 are retained in the reduced-resolution image; that is, the pixel information of the reference pixel area is retained at the corresponding edge position of the reduced-resolution image.
  • in the image processing method of this embodiment, before the image is subjected to resolution reduction, the corresponding pixel information is obtained for edges of different degrees of difference according to the degree of difference in pixel data on both sides of each edge. After the resolution reduction, the obtained pixel information is retained at the position of the corresponding edge in the reduced-resolution image. The reduced-resolution image thus retains the edges, or the edges and the pixel information on both sides, so that the edges of the image are not blurred by the resolution reduction.
  • FIG. 4 is a schematic flowchart of a first embodiment of an image matching method according to the present invention. As shown in FIG. 4, the image matching method in this embodiment may include the following steps:
  • step S201 edge extraction is performed on the left first image and the right first image, and pixel data on both sides of the edge extracted on the left first image and the right first image are obtained.
  • the left first image and the right first image are images obtained by shooting the same target at different angles, and thus, the edges extracted from the left first image and the right first image are the contour lines of the same object. Further, it can be understood that in the binocular vision system, the images obtained by the left camera and the right camera on the target can be used as the left first image and the right first image, respectively.
  • step S202 the range of the reference pixel region is determined for the left first image and the right first image according to the degree of difference between the pixel data on both sides of the edge.
  • step S203 the left first image and the right first image are subjected to resolution reduction processing except for the reference pixel area, and the pixel information of the reference pixel area is retained at the corresponding edge position of the processed image.
  • a second left image and a second right image are obtained.
  • the processing of the first left image and the first right image in steps S201 to S203 is the same as steps S101 to S103 of the image processing method shown in FIGS. 1 to 3, and details are not described herein again.
  • the obtained left second image and right second image are reduced-resolution images that retain the pixel information contained in the reference pixel area, where the reference pixel area includes the edge pixels, or the edge pixels together with the pixels on both sides.
  • step S204 matching is performed on the obtained left second image and the right second image.
  • matching is performed on the second left image and the second right image after the resolution reduction process.
  • the matching method used when matching the left second image and the right second image is not limited, and an existing matching method such as a small window matching method may be adopted.
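  • As an illustration of the "small window matching" the patent mentions as one existing option, here is a minimal sum-of-absolute-differences (SAD) window search along a scanline. This is a generic stereo-matching sketch, not the patent's method; `match_edge_pixel`, the window size, and the disparity range are all assumptions.

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equally sized patches."""
    return int(np.abs(a.astype(int) - b.astype(int)).sum())

def match_edge_pixel(left, right, row, col, w=1, max_disp=4):
    """Return the disparity whose (2w+1)x(2w+1) window in the right image
    best matches the window around (row, col) in the left image."""
    patch = left[row - w:row + w + 1, col - w:col + w + 1]
    best_d, best_cost = 0, None
    for d in range(max_disp + 1):
        c = col - d                      # candidate column in the right image
        if c - w < 0:
            break
        cost = sad(patch, right[row - w:row + w + 1, c - w:c + w + 1])
        if best_cost is None or cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

left = np.arange(100).reshape(10, 10)
right = np.zeros_like(left)
right[:, :8] = left[:, 2:]              # every feature shifted left by 2 columns
print(match_edge_pixel(left, right, 5, 5))  # 2
```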
  • step S204 may include the following steps:
  • step S2041 using the left first mapping position and the pixel information of the left reference pixel area retained in the left second image, and the right first mapping position and the pixel information of the right reference pixel area retained in the right second image, the edges in the left second image are matched with the edges in the right second image to obtain a first edge matching result.
  • the reference pixel region obtained in the left first image is the left reference pixel region
  • the reference pixel region of the right first image is the right reference pixel region.
  • the original position of the edge in the left first image is obtained before the left first image is subjected to resolution reduction, and the mapped position of that edge in the reduced-resolution image is calculated after the resolution reduction; this is denoted the left first mapping position. Similarly, the right first mapping position is obtained for the right first image.
  • the left first mapping position is the position of the edge in the image after the left first image is subjected to resolution reduction.
  • the left reference pixel area is retained in the left second image using the left first mapping position as the reference position; in other words, the left first mapping position is the position of the edge in the left second image.
  • the right first mapping position is the position of the edge in the image after the right first image is subjected to resolution reduction.
  • the right reference pixel area is retained in the right second image using the right first mapping position as the reference position; in other words, the right first mapping position is the position of the edge in the right second image.
  • the left first mapping position and the right first mapping position can be used to determine the edges to be matched in the second left image and the second right image, respectively.
  • the edges to be matched are first matched to obtain the first edge matching result. It can be understood that matching of pixels in the reference pixel region is also included at this time.
  • the edges to be matched are determined according to the left first mapping position and the right first mapping position.
  • the left first mapping position and the right first mapping position can be decimal numbers; in other words, the edge matching between the left second image and the right second image can be performed at a decimal (sub-pixel) level, thereby improving the matching accuracy of the edge matching.
  • matching of the pixels and edges in the reference pixel area can thus be completed quickly, and the matching accuracy of the first edge matching result can be guaranteed.
  • step S2042 preliminary matching is performed on the other regions of the left second image and the right second image according to the first edge matching result.
  • the first edge matching result is the matching result of the edges and the reference pixel areas of the left and right second images.
  • using the edges and the reference pixel areas as a reference, preliminary matching is performed on the remaining pixels of the left second image and the right second image.
  • because the left second image and the right second image are reduced-resolution images, they contain a smaller number of pixels.
  • the preliminary matching of the left second image and the right second image is therefore completed faster; and because the matching accuracy of the pixels in the reference pixel area containing the edges is higher, matching the remaining pixels with the reference pixel area as a reference also yields higher matching accuracy than existing reduced-resolution matching.
  • because the left second image and the right second image after resolution reduction retain the original pixel information of the edges and the pixels on both sides, and the positions of the edges are represented by decimals, the original pixel information can be used in the matching operations on the edges, and the matching can be performed at the decimal level, which improves the accuracy of the edge matching.
  • when the edges are then used as a reference to match the pixels in the remaining regions, the matching accuracy is correspondingly improved.
  • FIG. 6 is a schematic flowchart of a second embodiment of an image matching method according to the present invention. This embodiment is an improvement on the first embodiment of the matching method shown in FIG. 4. As shown in FIG. 6, the matching method of this embodiment may include the following steps after step S204:
  • step S205 the left second image and the right second image are subjected to resolution increasing processing, respectively.
  • the image matching method shown in FIG. 4 only matches the left second image and the right second image after resolution reduction. Because the portions of the left second image and the right second image outside the reference pixel area have been reduced in resolution, they contain less pixel information than the left first image and the right first image, so the matching accuracy in areas other than the reference pixel area is relatively low.
  • therefore, after step S204, the left second image and the right second image are subjected to resolution increasing processing. It is worth noting that the pixel information in the reference pixel area is still retained in this embodiment.
  • step S205 may include the following steps:
  • step S2051 a scaling factor at the time of the resolution reduction process is acquired.
  • the scale factor of the up-resolution processing should be the same as the value of the scale factor of the down-resolution processing. Therefore, it is necessary to first obtain the scaling factor of the down-resolution processing when performing the up-resolution processing on the image.
  • step S2052 the value of the scaling coefficient is used as the scaling coefficient of the resolution increasing operation, and the left second image and the right second image are separately subjected to resolution increasing processing.
  • the up-resolution processing is performed on the left second image and the right second image.
  • when the left first image and the right first image are subjected to resolution reduction, multiple resolution reduction passes may be used, and the values of the scaling coefficients of the passes may be the same or different.
  • the resolution increasing processing on the left second image and the right second image therefore needs to correspond, pass by pass, to the resolution reduction processing.
  • the resolution reduction processing on the first left image and the first right image is divided into three times, and the values of the scaling coefficients of the reduction resolution processing are 2, 4, and 6, respectively.
  • correspondingly, when the second images are subjected to resolution increasing processing, it also needs to be divided into three passes, and the values of the scaling coefficients of the resolution increasing passes are 6, 4, and 2, respectively.
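  • The reversed-factor bookkeeping can be sketched on image shapes alone; `staged_resize` is a hypothetical helper, and the 480×480 starting size is an assumed example chosen to divide evenly by 2, 4, and 6.

```python
def staged_resize(shape, factors, up=False):
    """Apply a sequence of scaling factors to an (height, width) image shape."""
    h, w = shape
    for k in factors:
        h, w = (h * k, w * k) if up else (h // k, w // k)
    return h, w

down_factors = [2, 4, 6]                       # three resolution-reduction passes
up_factors = list(reversed(down_factors))      # must be applied as 6, 4, 2

small = staged_resize((480, 480), down_factors)
print(small)                                    # (10, 10)
print(staged_resize(small, up_factors, up=True))  # (480, 480): original size restored
```

  • Applying the up factors in the original order (2, 4, 6) would restore the same total size here, but pass-by-pass the intermediate shapes would not mirror the reduction stages, which is why the order is reversed.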
  • step S206 according to the left first mapping position and the right first mapping position, the left second mapping position and the right second mapping position of the edges in the left second image and the right second image after resolution increasing processing are calculated.
  • the position of an edge also changes during resolution increasing processing. Therefore, after the left second image and the right second image are subjected to resolution increasing processing, the left second mapping position and the right second mapping position of the edge in the resolution-increased images are calculated according to the left first mapping position and the right first mapping position of the edge in the left second image and the right second image.
  • the left second mapping position and the right second mapping position of the edge are the positions where the edge should be located in the left second image and the right second image after the resolution increase processing.
  • step S207 using the left second mapping position and the right second mapping position as reference positions, the pixel information of the reference pixel areas is retained at the corresponding edge positions of the left second image and the right second image after resolution increasing processing, obtaining a left third image and a right third image.
  • the calculated left second mapping position and right second mapping position are used as the edge reference positions, and the pixel information in the reference pixel areas is retained at the corresponding edge positions of the left second image and the right second image after resolution increasing processing; a left third image and a right third image are thereby obtained.
  • the left second mapping position and the right second mapping position are positions where the edges are located in the left second image and the right second image after the resolution is increased.
  • step S208 matching is performed on the left third image and the right third image.
  • after the resolution increasing processing, the left third image and the right third image that retain the pixel information in the reference pixel areas are obtained, that is, images that retain the pixel information of the edges, or of the edges and the pixels on both sides. The left third image and the right third image are then further matched.
  • if the process of increasing the resolution of the left second image and the right second image includes multiple resolution increasing passes, matching is performed on the left third image and the right third image after each resolution increasing pass.
  • even if the resolution reduction processing is likewise divided into multiple passes, the matching of the left second image and the right second image is performed only once, after the resolution reduction processing is completely finished.
  • for example, if the resolution reduction process is divided into three passes, the left second image and the right second image are obtained after the three passes are completed, and the left second image and the right second image are then matched.
  • the corresponding resolution increasing process is also divided into three passes, but in this case matching of the left third image and the right third image is performed after each resolution increasing pass.
  • step S208 may include the following steps:
  • step S2081 using the left second mapping position and the pixel information of the left reference pixel area retained in the left third image, and the right second mapping position and the pixel information of the right reference pixel area retained in the right third image, the edges in the left third image are matched with the edges in the right third image to obtain a second edge matching result.
  • the left second image and the right second image are subjected to resolution increasing processing, and the pixel information in the reference pixel areas is retained to obtain the left third image and the right third image, where the left second mapping position and the right second mapping position are the positions of the edges in the left third image and the right third image.
  • the left reference pixel region is retained in the third left image with the left second mapping position as the reference position; the right reference pixel region is retained in the third right image with the second right mapping position as the reference position.
  • matching of the pixels and edges in the reference pixel areas can thus be completed quickly, and the matching accuracy of the second edge matching result can be guaranteed.
  • step S2082 the other regions of the left third image and the right third image are finely matched according to the second edge matching result.
  • the second edge matching result is the matching result of the edges and the reference pixel areas of the left and right third images.
  • using the edges as a reference, the remaining pixels of the left third image and the right third image outside the reference pixel areas are matched. At this point, the left third image and the right third image contain more pixels than the left second image and the right second image, and the pixel information is richer.
  • the pixels of the regions other than the reference pixel areas have already been preliminarily matched.
  • the matching range for the fine matching of the pixels of the left third image and the right third image outside the reference pixel areas can therefore be determined according to the result of the preliminary matching, and fine matching is performed on each pixel within that range.
  • Further, before step S2081, the method may include the following step:
  • In step S2083, the left position offset of the edge of the left first image and the right position offset of the edge of the right first image are used as matching constraints to determine, in the left third image and the right third image, the matching range within which edge matching is performed.
  • When the left first image and the right first image are down-resolution processed, the left position offset undergone by the edge of the left first image from the left first image to the left second image is recorded, as is the right position offset undergone by the edge of the right first image from the right first image to the right second image.
  • A matching range containing the edge match can then be determined with the left position offset of the edge of the left first image and the right position offset of the edge of the right first image as matching constraints, and edge matching is performed within that range. Matching within a determined range improves the matching accuracy.
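  • As an illustration of this constraint, the following sketch narrows the column search window for an edge around the difference of the two recorded offsets. The function name, the margin parameter, and the one-dimensional form are illustrative assumptions, not details specified by the patent:

```python
def matching_range(left_offset, right_offset, mapped_col, margin=1.0):
    """Estimate a column search window for edge matching.

    Matching points should undergo corresponding position shifts during
    down-resolution processing, so the search for the matching edge can
    be confined to a narrow window around the expected shift.
    """
    expected_shift = right_offset - left_offset
    center = mapped_col + expected_shift
    # Search only within +/- margin columns of the predicted position.
    return (center - margin, center + margin)
```

  • For example, with recorded offsets of 0.5 and 1.5 and an edge mapped to column 10.0, the search window is confined to columns 10.0 to 12.0 instead of the whole row.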
  • FIG. 10 is a schematic structural diagram of an embodiment of an image processing apparatus according to the present invention.
  • As shown in FIG. 10, the image processing apparatus 100 of this embodiment may include a memory 12 and a processor 11, where the memory 12 and the processor 11 are connected through a bus.
  • The memory 12 is configured to store the operation instructions executed by the processor 11, as well as images and data.
  • The processor 11 is configured to execute the operation instructions to implement the first embodiment of the image processing method shown in FIG. 1 to FIG. 3.
  • The image processing apparatus may be a terminal device such as a computer or a mobile phone, or a vision-related terminal device such as a camera, a video camera, or a binocular vision system.
  • FIG. 11 is a schematic structural diagram of an embodiment of an image matching device according to the present invention.
  • As shown in FIG. 11, the image matching device 200 of this embodiment may include a memory 22 and a processor 21, where the memory 22 and the processor 21 are connected through a bus.
  • The memory 22 is configured to store the operation instructions executed by the processor 21, as well as the left first image and the right first image to be matched.
  • The processor 21 is configured to execute the operation instructions to implement the first and second embodiments of the image matching method shown in FIG. 4 to FIG. 9; for the detailed processing steps, refer to the descriptions of those embodiments, which are not repeated here.
  • FIG. 12 is a schematic structural diagram of another embodiment of an image matching device according to the present invention; the image matching device of this embodiment is a binocular vision system.
  • As shown in FIG. 12, the binocular vision system 300 of this embodiment includes a processor 31 and a memory 32 connected through a bus, and the processor 31 is further connected to a first camera 33, a second camera 34, and a structured light source 35.
  • The memory 32 is configured to store the operation instructions executed by the processor 31.
  • The processor 31 is configured to, according to the operation instructions, control the structured light source 35 to project structured light onto a target object 36, control the first camera 33 and the second camera 34 to respectively photograph the target object 36 to obtain the left first image and the right first image, and store the obtained left first image and right first image in the memory 32.
  • The processor 31 is further configured to execute the operation instructions to implement the first and second embodiments of the image matching method shown in FIG. 4 to FIG. 9, thereby matching the left first image and the right first image.
  • FIG. 13 is a schematic structural diagram of an embodiment of a storage medium of the present invention.
  • As shown in FIG. 13, the storage medium 400 of this embodiment stores executable program data 41; when executed, the program data 41 implements the first embodiment of the image processing method shown in FIG. 1 to FIG. 3, or the first and second embodiments of the image matching method shown in FIG. 4 to FIG. 9.
  • The storage medium may be any medium with a storage function, such as a storage module of a smart terminal, a mobile storage device (e.g., a mobile hard disk or a USB flash drive), a network cloud disk, or an application storage platform.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The present invention discloses an image processing and matching method, device, and storage medium. The image processing method includes: performing edge extraction on a first image and obtaining pixel data on both sides of the extracted edge; determining the range of a reference pixel region according to the degree of difference between the pixel data on the two sides of the edge, where the reference pixel region is formed by the edge, or by the edge and the pixels on both sides of it; and performing down-resolution processing on the regions of the first image other than the reference pixel region, while retaining the pixel information of the reference pixel region at the corresponding edge position of the processed image, to obtain a second image. With this method, the original pixel information at the edges of an image can be effectively retained during down-resolution processing, so that the edges do not become blurred.

Description

Image processing and matching method, device, and storage medium
[Technical Field]
The present invention relates to the field of image processing technology, and in particular to an image processing and matching method, device, and storage medium.
[Background]
Down-resolution processing is a common technique in the field of image processing. It can be understood as reducing the number of points sampled from an image: for an original N×M image, if the scaling factor of the down-resolution processing is k, the processing is equivalent to taking one point every k points in each row and each column of the original N×M image; the extracted points form a new image, which is the result of down-resolution processing of the N×M image with scaling factor k.
According to this principle, the reduction in sampling points affects the sharpness of the down-resolution-processed image, so the original image becomes blurred at its edges after the processing; that is, the edges of objects in the image can no longer be clearly distinguished, and two edges that were adjacent in the original image may even end up almost on top of each other after the processing, making them indistinguishable.
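The sampling described above, taking one point every k points in each row and column, can be sketched as follows; `downsample` is an illustrative helper written for this document, not code from the patent:

```python
def downsample(image, k):
    """Down-resolution by scaling factor k: keep one sample every
    k points in each row and each column of a 2-D image (list of rows)."""
    return [row[::k] for row in image[::k]]
```

Applied to a 4×4 image with k = 2, this yields a 2×2 image, which is why edge detail between the retained samples is lost.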
[Summary of the Invention]
An object of the present invention is to provide an image processing and matching method, device, and storage medium capable of retaining the pixel information at the edges of an image in the down-resolution-processed image.
To achieve the above object, the present invention provides an image processing method, including:
performing edge extraction on a first image, and obtaining pixel data on both sides of the extracted edge;
determining the range of a reference pixel region according to the degree of difference between the pixel data on the two sides of the edge, where the reference pixel region is formed by the edge, or by the edge and the pixels on both sides of it;
performing down-resolution processing on the regions of the first image other than the reference pixel region, and retaining the pixel information of the reference pixel region at the corresponding edge position of the processed image to obtain a second image.
In another aspect, the present invention provides an image matching method, including: performing down-resolution processing on a left first image and a right first image respectively using the above image processing method to obtain a left second image and a right second image;
matching the left second image and the right second image;
where the left first image and the right first image are images of the same target photographed from different angles.
In another aspect, the present invention provides an image processing apparatus, including:
a memory and a processor connected through a bus;
the memory is configured to store the operation instructions executed by the processor, as well as images and data;
the processor is configured to run the operation instructions to implement the above image processing method.
In another aspect, the present invention provides an image matching device, including:
a memory and a processor connected through a bus;
the memory is configured to store the operation instructions executed by the processor, as well as the left first image and the right first image to be matched;
the processor is configured to run the operation instructions to implement the above image matching method.
In another aspect, the present invention provides a storage medium storing program data, the program data being executable to implement the above image processing method, or executable to implement the above image matching method.
Beneficial effects: in contrast to the prior art, the image processing method of the present invention performs edge extraction on the first image and obtains pixel data on both sides of the extracted edge; determines the range of a reference pixel region according to the degree of difference between the pixel data on the two sides of the edge, where the reference pixel region is formed by the edge, or by the edge and the pixels on both sides of it; and performs down-resolution processing on the regions of the first image other than the reference pixel region, while retaining the pixel information of the reference pixel region at the corresponding edge position of the processed image, to obtain the second image. The down-resolution-processed image thus retains the edge, or the pixel information of the edge and of both its sides, so that the edges do not become blurred by the down-resolution processing.
[Brief Description of the Drawings]
FIG. 1 is a schematic flowchart of a first embodiment of the image processing method of the present invention;
FIG. 2 is a schematic flowchart of step S102 in FIG. 1;
FIG. 3 is a schematic flowchart of step S103 in FIG. 1;
FIG. 4 is a schematic flowchart of a first embodiment of the image matching method of the present invention;
FIG. 5 is a schematic flowchart of step S204 in FIG. 4;
FIG. 6 is a schematic flowchart of a second embodiment of the image matching method of the present invention;
FIG. 7 is a schematic flowchart of step S205 in FIG. 6;
FIG. 8 is a schematic flowchart of one implementation of step S208 in FIG. 6;
FIG. 9 is a schematic flowchart of another implementation of step S208 in FIG. 6;
FIG. 10 is a schematic structural diagram of an embodiment of the image processing apparatus of the present invention;
FIG. 11 is a schematic structural diagram of an embodiment of the image matching device of the present invention;
FIG. 12 is a schematic structural diagram of another embodiment of the image matching device of the present invention;
FIG. 13 is a schematic structural diagram of an embodiment of the storage medium of the present invention.
[Detailed Description]
To enable those skilled in the art to better understand the technical solutions of the present invention, the present invention is described in further detail below with reference to the accompanying drawings and specific embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
Referring to FIG. 1, FIG. 1 is a schematic flowchart of the first embodiment of the image processing method of the present invention. As shown in FIG. 1, the image processing method of this embodiment may include the following steps:
In step S101, edge extraction is performed on the first image, and the extracted edge and the pixel data on both sides of the edge are obtained.
Edges, as important information in an image, characterize the objects and graphic elements in the image and are crucial data in image recognition technology. An edge is a place where the gray value of the image changes more sharply than in other regions, which makes it usable for identifying the corresponding object.
There are many methods for extracting edges from an image; this embodiment does not limit the method used for edge extraction on the first image, and any existing edge extraction method may be adopted.
Further, in this embodiment, after the edge is extracted, the pixel data of the edge and of both its sides are obtained with the extracted edge as reference. The pixel selection objects may be the pixels on the edge, the pixels on both sides of the edge, or a combination of the two; the pixel data are the gray values of the selected pixels.
In step S102, the range of the reference pixel region is determined according to the degree of difference between the pixel data of the edge and of both its sides.
Since edges of different sharpness require different numbers of pixels on both sides to participate in retaining their pixel information, the sharpness of the edge needs to be determined from the pixel data on both sides of the edge obtained in step S101.
The degree of difference of pixel data such as gray values or brightness on the two sides of an edge characterizes the sharpness of the edge. For example, if the gray values of the edge pixels, of the pixels on both sides of the edge, or of the edge and both its sides differ greatly, the edge is sharp; if they differ little, the edge is blurred. The sharpness of the edge can thus be determined, and the range of the reference pixel region to be retained in the down-resolution-processed image is determined from that sharpness. In this way, even though the image is down-resolution processed, the edge information of the objects and graphic elements in the image is preserved rather than down-sampled, so that these objects and graphics can still be distinguished in the processed image.
It will be appreciated that for a sharp edge, only the edge and a small number of pixels on both its sides are needed to compute the center pixel of the edge, that is, to retain the original pixel information of the edge; for a blurred edge, more pixels on both sides need to participate in the computation of the edge's center pixel. Therefore, a reference pixel region of a relatively small range is set at sharp edges, and one of a relatively large range at blurred edges. Correspondingly, where the difference of the pixel data of the edge and of both its sides is large, a smaller reference pixel region is set; where the difference is small, a larger reference pixel region is set. This both retains the edges and the pixel data on both their sides relatively completely and reduces the impact on the down-resolution processing of the image as a whole.
It will be appreciated that in this embodiment the pixel information at the edges is to be retained in the down-resolution-processed image; that is, the reference pixel region contains at least the edge, and in addition may contain the pixel information of the edge and of both its sides. Of course, for a very sharp edge, the reference pixel region may also contain only the pixels at the edge.
Further, referring to FIG. 2, as shown in FIG. 2, in other embodiments step S102 may include the following steps:
In step S1021, the sharpness level of the edge is determined according to the degree of difference between the pixel data on the two sides of the edge.
In order to specify how the range of the reference pixel region is determined from the degree of difference between the pixel data on the two sides of the edge, this embodiment sets different sharpness levels for edges of different sharpness, with different degrees of difference corresponding to different sharpness levels. It will be appreciated that the degrees of difference between pixel data may form a relatively continuous set of values, while sharpness levels only express level distinctions; the difference values can therefore be divided into intervals, each interval corresponding to one sharpness level.
Further, several difference thresholds of different values may be set, dividing the differences into several ranges, each range being assigned a corresponding sharpness level: differences falling in the same range have the same sharpness level, and differences falling in different ranges have different sharpness levels. For example, set a first threshold and a second threshold, with the first threshold greater than the second; the sharpness level of edges whose difference is greater than or equal to the first threshold is set to level one, that of edges whose difference lies between the first and second thresholds to level two, and that of edges whose difference is less than or equal to the second threshold to level three. From level one to level three, the sharpness of the edge decreases.
In other implementations, other ways of assigning sharpness levels to edges with different degrees of difference may also be used, and the difference thresholds may be set according to actual needs.
In step S1022, the level width set for the sharpness level is obtained, and the range of the reference pixel region is determined.
In this embodiment, different level widths are preset for different sharpness levels. The level width is a representative value reflecting the total number of pixels contained in the reference pixel region; in this embodiment, the number of pixels contained in the reference pixel region is the product of the level width and the scaling factor of the down-resolution processing. The specific value of the level width is not limited in this embodiment and may be set according to actual needs.
As explained above, the sharper the edge, the less pixel information of the edge and its two sides needs to be retained, so the corresponding preset level width is smaller; the more blurred the edge, the more pixel information of the edge and its two sides needs to be retained, so the corresponding preset level width is larger. With the sharpness levels set above, the sharpness of the edge decreases from level one to level three, and accordingly the corresponding level width increases from level one to level three.
In this step, the range of the reference pixel region corresponding to an edge can thus be determined from the sharpness level of the edge.
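The two steps above can be sketched as follows. The threshold values and level widths are illustrative assumptions; the patent leaves them to be set according to actual needs:

```python
def clarity_level(diff, t1=80, t2=30):
    """Map the gray-value difference across an edge to a sharpness level:
    level 1 (sharpest) through level 3 (most blurred).
    Thresholds t1 > t2 are example values, not prescribed by the patent."""
    if diff >= t1:
        return 1
    if diff > t2:
        return 2
    return 3

def reference_region_size(level, k, level_width=None):
    """Total pixels kept in the reference pixel region: the preset level
    width times the down-resolution scaling factor k, per the description.
    The width-per-level mapping here is an illustrative choice."""
    if level_width is None:
        level_width = {1: 1, 2: 2, 3: 3}  # width grows as sharpness drops
    return level_width[level] * k
```

For instance, a level-two edge with scaling factor k = 2 would keep a reference region of 4 pixels under these example settings.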
In step S103, down-resolution processing is performed on the regions of the first image other than the reference pixel region, and the pixel information of the reference pixel region is retained at the corresponding edge position of the processed image, obtaining the second image.
Step S102 has determined the reference pixel region to be retained. The image is now down-resolution processed, and the pixel information within the reference pixel region determined in step S102 is retained at the corresponding edge position of the down-resolution-processed image. In this way a second image is obtained in which the original pixel information of the edge and of a certain range on both its sides is retained, while the resolution of the other regions is reduced.
Further, referring to FIG. 3, as shown in FIG. 3, step S103 may include the following steps:
In step S1031, the original position of the edge in the first image is obtained.
Since down-resolution processing displaces the pixels of an image, the edges of the first image are also displaced during the processing; that is, the position of an edge before and after down-resolution processing differs. In order to retain the pixel information of the reference pixel region at the corresponding edge position of the processed image, the position where the edge should lie in the processed image must be found.
Therefore, before the first image is down-resolution processed, the original position in the first image of the edge extracted from it is obtained.
In step S1032, the mapping position of the original position of the edge in the down-resolution-processed image is calculated.
After the image is down-resolution processed, the mapping position, in the processed image, of the original position obtained in step S1031 is calculated.
For example, suppose the first image is a 1×8 image in which the original position of the edge is (1, 7), and the scaling factor of the down-resolution processing is 2; the processed image is then 1×4. From the original position (1, 7), the mapping position of the edge in the down-resolution-processed image is calculated to be (1, 3.5).
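The numeric example above can be reproduced with a small sketch. Only the column is scaled here, matching the single-row example; the exact mapping convention for general images is an assumption of this illustration, not spelled out by the patent:

```python
def mapped_position(pos, k):
    """Sub-pixel position of an edge after down-resolution by factor k.

    For the 1x8 example: an edge at (1, 7) with k = 2 maps to (1, 3.5).
    The row index is left unscaled because the example image has one row.
    """
    row, col = pos
    return (row, col / k)
```

Note that the mapping position may be fractional, which is what later enables sub-pixel-level edge matching.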
In step S1033, in the down-resolution-processed image, the pixel information of the reference pixel region is retained into the processed image with the mapping position of the edge as the reference position.
From the mapping position calculated in step S1032, the position where the edge should lie in the down-resolution-processed image can be determined. Therefore, taking the mapping position of the edge as reference, the pixels within the reference pixel region obtained in step S102 are retained into the down-resolution-processed image; that is, the pixel information of the reference pixel region is retained at the corresponding edge position of the processed image.
According to the image processing method of this embodiment, before the image is down-resolution processed, corresponding pixel information is obtained for edges according to the degree of difference of the pixel data on both their sides; after the processing, the obtained pixel information is retained at the corresponding edge positions in the processed image. The down-resolution-processed image thus retains the edges, or the pixel information of the edges and of both their sides, so that the edges of the image are not blurred by the down-resolution processing.
Further, referring to FIG. 4, FIG. 4 is a schematic flowchart of the first embodiment of the image matching method of the present invention. As shown in FIG. 4, the image matching method of this embodiment may include the following steps:
In step S201, edge extraction is performed on the left first image and the right first image respectively, and the pixel data on both sides of the edges extracted from the left first image and the right first image are obtained.
In this embodiment, the left first image and the right first image are images of the same target photographed from different angles; the edges extracted from them are therefore contour lines of the same objects. Further, it will be appreciated that in a binocular vision system, the images of the target captured by the left camera and the right camera can serve as the left first image and the right first image, respectively.
In step S202, for the left first image and the right first image respectively, the range of the reference pixel region is determined according to the degree of difference between the pixel data on the two sides of the edge.
In step S203, down-resolution processing is performed on the regions of the left first image and the right first image other than the reference pixel regions, and the pixel information of the reference pixel regions is retained at the corresponding edge positions of the processed images, obtaining a left second image and a right second image.
In this embodiment, the processing of the left first image and the right first image in steps S201 to S203 is the same as steps S101 to S103 of the image processing method shown in FIG. 1 to FIG. 3, and is not repeated here. The left second image and the right second image obtained are down-resolution-processed images in which the pixel information contained in the reference pixel regions is retained, the reference pixel regions containing the edge pixels, or the pixel information of the edge pixels and of the pixels on both their sides.
In step S204, the obtained left second image and right second image are matched.
In this embodiment, the down-resolution-processed left second image and right second image are matched. The matching method used is not limited; existing matching methods such as small-window matching may be adopted.
Further, referring to FIG. 5, as shown in FIG. 5, step S204 may include the following steps:
In step S2041, the left first mapping position and the pixel information of the left reference pixel region retained in the left second image, and the right first mapping position and the pixel information of the right reference pixel region retained in the right second image, are used to match the edges in the left second image against the edges in the right second image, obtaining a first edge matching result.
In this embodiment, the reference pixel region obtained from the left first image is called the left reference pixel region, and that obtained from the right first image the right reference pixel region. In addition, before the left first image is down-resolution processed, the original position of the edge in the left first image is obtained, and after the processing the mapping position of the edge in the processed image is calculated and recorded as the left first mapping position; likewise, for the right first image, the right first mapping position is obtained.
From the above steps, the left first mapping position is the position of the edge in the image obtained by down-resolution processing the left first image, and the left reference pixel region is retained into the left second image with the left first mapping position as reference position; in other words, the left first mapping position is the position of the edge in the left second image. Similarly, the right first mapping position is the position of the edge in the image obtained by down-resolution processing the right first image, and the right reference pixel region is retained into the right second image with the right first mapping position as reference position; in other words, the right first mapping position is the position of the edge in the right second image. Therefore, when matching the left second image and the right second image, the edges to be matched can be located in the two images using the left first mapping position and the right first mapping position, and these edges are matched first, yielding the first edge matching result. It will be appreciated that this also includes the matching of the pixels within the reference pixel regions.
It is worth noting that in this embodiment the edges to be matched are determined from the left first mapping position and the right first mapping position. From the explanation of the mapping-position calculation above, the left first mapping position and the right first mapping position may be fractional; in other words, the matching of the edges in the left second image and the right second image may be at fractional (sub-pixel) level, which improves the matching accuracy of the edges.
Since the reference pixel regions retain the original pixel information of the edges, or of the edges and the pixels on both their sides, and the number of pixels in the reference pixel regions is not large, the matching of the pixels and edges in the reference pixel regions can be completed quickly, and the matching accuracy of the first edge matching result can be guaranteed.
In step S2042, preliminary matching is performed on the other regions of the left second image and the right second image according to the first edge matching result.
The first edge matching result is the matching result for the edges and the reference pixel regions of the left second image and the right second image. Based on it, and taking the edges and reference pixel regions as reference, the pixels of the left second image and the right second image outside the reference pixel regions are matched. Since the left second image and the right second image are down-resolution-processed images, they contain fewer pixels, so the preliminary matching of the pixels outside the reference pixel regions can be completed quickly; and since the matching accuracy of the pixels within the reference pixel regions, which contain the edges, is high, the accuracy obtained when matching the remaining pixels with the reference-region pixels as reference is also higher than the matching accuracy of existing down-resolution processing.
The image matching method of this embodiment retains the original pixel information of the edges and of the pixels on both their sides in both the down-resolution-processed left second image and right second image, and represents the edge positions in these images with fractional values. The original pixel information can therefore be used when matching the edges, and the matching can be at fractional level, which improves the edge matching accuracy; further, when the remaining regions are matched with the edges as reference, the matching accuracy is also appropriately improved.
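The small-window matching mentioned above can be sketched in one dimension as follows. The window size, the sum-of-squared-differences cost, and the 1-D simplification are illustrative choices for this document; the patent does not prescribe a particular matching method:

```python
def ssd(a, b):
    """Sum of squared differences between two equal-length windows."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def coarse_match(left_row, right_row, col, w=1):
    """For the pixel at `col` in the left row, find the column in the
    right row whose surrounding window minimises the SSD cost."""
    win = left_row[col - w: col + w + 1]
    best, best_cost = None, float("inf")
    for c in range(w, len(right_row) - w):
        cost = ssd(win, right_row[c - w: c + w + 1])
        if cost < best_cost:
            best, best_cost = c, cost
    return best
```

Because the second images are down-resolution processed, each row is short, so this exhaustive search over candidate columns completes quickly, which is the point made above about preliminary matching.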
Further, referring to FIG. 6, FIG. 6 is a schematic flowchart of the second embodiment of the image matching method of the present invention. This embodiment is obtained by improving on the first embodiment of the matching method shown in FIG. 5. As shown in FIG. 6, after step S204 of FIG. 5, the matching method of this embodiment may further include the following steps:
In step S205, up-resolution processing is performed on the left second image and the right second image respectively.
The image matching method shown in FIG. 5 only matches the down-resolution-processed left second image and right second image. Since the parts of these images outside the reference pixel regions have been down-resolution processed, they contain less pixel information than the left first image and the right first image, so the matching accuracy for the regions outside the reference pixel regions is relatively low.
Therefore, after step S204, up-resolution processing is performed on the left second image and the right second image; it is worth noting that in this embodiment the pixel information within the reference pixel regions is still retained.
Further, referring to FIG. 7, step S205 may include the following steps:
In step S2051, the scaling factor of the down-resolution processing is obtained.
Since the up-resolution processing is performed relative to the down-resolution processing, the scaling factor of the up-resolution processing should have the same value as that of the down-resolution processing. The scaling factor of the down-resolution processing must therefore be obtained first when up-resolution processing the image.
In step S2052, with the value of that scaling factor as the value of the scaling factor of the up-resolution operation, up-resolution processing is performed on the left second image and the right second image respectively.
The value of the down-resolution scaling factor obtained in step S2051 is used as the value of the up-resolution scaling factor, and the left second image and the right second image are up-resolution processed.
It will be appreciated that the down-resolution processing of the left first image and the right first image may be performed in several passes, with the scaling factors of the passes being equal or different. Correspondingly, the up-resolution processing of the left second image and the right second image must correspond to the down-resolution process. For example, if the down-resolution processing of the left first image and the right first image was performed in three passes with scaling factors 2, 4, and 6, then the up-resolution processing of the left second image and the right second image must also be performed in three passes, with scaling factors 6, 4, and 2 respectively.
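The pairing of passes in this example (down with factors 2, 4, 6; up with factors 6, 4, 2) amounts to replaying the down-resolution schedule in reverse, which can be sketched as:

```python
def up_resolution_schedule(down_factors):
    """The up-resolution passes mirror the down-resolution passes in
    reverse order, so each pass undoes the scale of its counterpart."""
    return list(reversed(down_factors))
```

This keeps the cumulative scale of the up-resolution processing equal to that of the down-resolution processing regardless of how many passes were used.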
In step S206, the left second mapping position and the right second mapping position of the edge in the up-resolution-processed left second image and right second image are calculated from the left first mapping position and the right first mapping position.
Since corresponding interpolation calculations are performed during up-resolution processing, the position of the edge also changes in the process. Therefore, after the left second image and the right second image are up-resolution processed, the left second mapping position and the right second mapping position of the edge in the up-resolution-processed images are calculated from the left first mapping position and the right first mapping position of the edge in the left second image and the right second image.
The left second mapping position and the right second mapping position of the edge are the positions where the edge should lie in the up-resolution-processed left second image and right second image.
In step S207, with the left second mapping position and the right second mapping position as reference positions, the pixel information of the reference pixel regions is retained at the corresponding edge positions of the up-resolution-processed left second image and right second image, obtaining a left third image and a right third image.
In the up-resolution-processed left second image and right second image, taking the calculated left second mapping position and right second mapping position as the reference positions of the edge, the pixel information within the reference pixel regions is retained at the corresponding edge positions, yielding the left third image and the right third image. It will be appreciated that the left second mapping position and the right second mapping position are the positions of the edges in the up-resolution-processed left second image and right second image.
In step S208, the left third image and the right third image are matched.
After the up-resolution processing, the left third image and the right third image, which retain the pixel information within the reference pixel regions (that is, the pixel information of the edges, or of the edges and the pixels on both their sides), are obtained, and they are then matched.
Further, in this embodiment, if the up-resolution processing of the left second image and the right second image comprises several up-resolution passes, a matching of the left third image and the right third image is performed after each pass. By contrast, even if the down-resolution processing was performed in several passes, the matching of the down-resolution-processed left second image and right second image is performed only after the down-resolution processing has been fully completed.
For example, suppose the down-resolution processing is divided into three passes; after the three passes are completed and the left second image and the right second image obtained, the two images are matched. The up-resolution processing, corresponding to the down-resolution processing, is likewise divided into three passes, but here a matching of the up-resolution-processed left third image and right third image is performed after each pass.
Further, referring to FIG. 8, as shown in FIG. 8, step S208 may include the following steps:
In step S2081, the left second mapping position and the pixel information of the left reference pixel region retained in the left third image, and the right second mapping position and the pixel information of the right reference pixel region retained in the right third image, are used to match the edges in the left third image against the edges in the right third image, obtaining a second edge matching result.
After the left second image and the right second image are up-resolution processed with the pixel information of the reference pixel regions retained, the left third image and the right third image are obtained; the left second mapping position and the right second mapping position in them are the positions of the edges in the left third image and the right third image. Further, the left reference pixel region is retained into the left third image with the left second mapping position as reference position, and the right reference pixel region is retained into the right third image with the right second mapping position as reference position.
Therefore, when matching the left third image and the right third image, the edges to be matched are first located in the two images using the left second mapping position and the right second mapping position, and these edges are matched first, yielding the second edge matching result. It will be appreciated that this also includes the matching of the pixels within the reference pixel regions.
Since the reference pixel regions retain the original pixel information of the edges, or of the edges and the pixels on both their sides, and the number of pixels in the reference pixel regions is not large, the matching of the pixels and edges in the reference pixel regions can be completed quickly, and the matching accuracy of the second edge matching result can be guaranteed.
In step S2082, fine matching is performed on the other regions of the left third image and the right third image according to the second edge matching result.
The second edge matching result is the matching result for the edges and the reference pixel regions of the left third image and the right third image. Based on it, and taking the edges as reference, the pixels of the left third image and the right third image outside the reference pixel regions are matched. At this stage the left third image and the right third image contain more pixels than the left second image and the right second image, and the pixel information is richer. Since the pixels outside the reference pixel regions have already been preliminarily matched on the left second image and the right second image, the approximate range in which those pixels lie can be judged from the preliminary result; that is, the matching range for finely matching the pixels of the left third image and the right third image outside the reference pixel regions can be determined from the preliminary matching result, and each pixel is finely matched within that range.
Further, as shown in FIG. 9, before step S2081 the method may further include the following step:
In step S2083, the left position offset of the edge of the left first image and the right position offset of the edge of the right first image are used as matching constraints to determine, in the left third image and the right third image, the matching range within which edge matching is performed.
In this embodiment, when the left first image and the right first image are down-resolution processed, the left position offset undergone by the edge of the left first image from the left first image to the left second image is recorded, as is the right position offset undergone by the edge of the right first image from the right first image to the right second image.
It will be appreciated that when the left first image and the right first image are down-resolution processed, the position offsets undergone by points that can be matched should correspond; and taking an edge as reference, the offsets undergone by the pixels near that edge should correspond to the offset undergone by the edge itself. A matching range containing the edge match can therefore be determined with the left position offset of the edge of the left first image and the right position offset of the edge of the right first image as matching constraints, and edge matching is performed within that range. In this way edge matching is performed within a determined range, which improves the matching accuracy.
Referring to FIG. 10, FIG. 10 is a schematic structural diagram of an embodiment of the image processing apparatus of the present invention. As shown in FIG. 10, the image processing apparatus 100 of this embodiment may include a memory 12 and a processor 11 connected through a bus. The memory 12 is configured to store the operation instructions executed by the processor 11, as well as images and data. The processor 11 is configured to execute the operation instructions to implement the first embodiment of the image processing method shown in FIG. 1 to FIG. 3; for the detailed processing steps, refer to the description of that embodiment, which is not repeated here.
The image processing apparatus may be a terminal device such as a computer or a mobile phone, or a vision-related terminal device such as a camera, a video camera, or a binocular vision system.
Referring to FIG. 11, FIG. 11 is a schematic structural diagram of an embodiment of the image matching device of the present invention. As shown in FIG. 11, the image matching device 200 of this embodiment may include a memory 22 and a processor 21 connected through a bus. The memory 22 is configured to store the operation instructions executed by the processor 21, as well as the left first image and the right first image to be matched. The processor 21 is configured to execute the operation instructions to implement the first and second embodiments of the image matching method shown in FIG. 4 to FIG. 9; for the detailed processing steps, refer to the descriptions of those embodiments, which are not repeated here.
Further, referring to FIG. 12, FIG. 12 is a schematic structural diagram of another embodiment of the image matching device of the present invention; the image matching device of this embodiment is a binocular vision system. As shown in FIG. 12, the binocular vision system 300 of this embodiment includes a processor 31 and a memory 32 connected through a bus, and the processor 31 is further connected to a first camera 33, a second camera 34, and a structured light source 35.
The memory 32 is configured to store the operation instructions executed by the processor 31. The processor 31 is configured to, according to the operation instructions, control the structured light source 35 to project structured light onto a target object 36, control the first camera 33 and the second camera 34 to respectively photograph the target object 36 to obtain the left first image and the right first image, and store the obtained left first image and right first image in the memory 32. The processor 31 is further configured to execute the operation instructions to implement the first and second embodiments of the image matching method shown in FIG. 4 to FIG. 9, thereby matching the left first image and the right first image.
Referring to FIG. 13, FIG. 13 is a schematic structural diagram of an embodiment of the storage medium of the present invention. As shown in FIG. 13, the storage medium 400 of this embodiment stores executable program data 41; when executed, the program data 41 implements the first embodiment of the image processing method shown in FIG. 1 to FIG. 3, or the first and second embodiments of the image matching method shown in FIG. 4 to FIG. 9. In this embodiment, the storage medium may be any medium with a storage function, such as a storage module of a smart terminal, a mobile storage device (e.g., a mobile hard disk or a USB flash drive), a network cloud disk, or an application storage platform.
The above are only embodiments of the present invention and do not thereby limit the patent scope of the present invention. Any equivalent structural or equivalent process transformation made using the contents of the specification and drawings of the present invention, or any direct or indirect application in other related technical fields, is likewise included within the patent protection scope of the present invention.

Claims (14)

  1. An image processing method, comprising:
    performing edge extraction on a first image, and obtaining the extracted edge and pixel data on both sides of the edge;
    determining the range of a reference pixel region according to the degree of difference between the pixel data of the edge and of both its sides, wherein the reference pixel region comprises the edge, or the edge and the pixel information of both its sides;
    performing down-resolution processing on the regions of the first image other than the reference pixel region, and retaining the pixel information of the reference pixel region at the corresponding edge position of the processed image to obtain a down-resolution-processed second image.
  2. The method according to claim 1, wherein
    the determining the range of the reference pixel region according to the degree of difference between the pixel data of the edge and of both its sides comprises:
    determining a sharpness level of the edge according to the degree of difference between the pixel data of the edge and of both its sides;
    obtaining the level width set for the sharpness level, and determining the range of the reference pixel region according to the level width;
    wherein the level width is a representative value, corresponding to the sharpness level, reflecting the total number of pixels contained in the reference pixel region; and the total number of pixels contained in the reference pixel region is the product of the level width and the scaling factor of the down-resolution processing.
  3. The method according to claim 2, wherein the pixel data of the edge and of both its sides are the gray values of the edge pixels and of the pixels on both sides of the edge.
  4. The method according to claim 1, wherein
    the retaining the pixel information of the reference pixel region at the corresponding edge position of the processed image comprises:
    obtaining the original position of the edge in the first image;
    calculating the mapping position of the original position of the edge in the down-resolution-processed image;
    in the down-resolution-processed image, taking the mapping position of the edge as a reference position, and retaining the pixel information of the reference pixel region into the down-resolution-processed image based on the reference position.
  5. An image matching method, comprising:
    performing down-resolution processing on a left first image and a right first image respectively using the method according to any one of claims 1 to 4, obtaining a left second image and a right second image;
    matching the left second image and the right second image;
    wherein the left first image and the right first image are images of the same target photographed from different angles.
  6. The method according to claim 5, wherein
    the reference pixel region of the left first image in the down-resolution processing is a left reference pixel region, and the reference pixel region of the right first image in the down-resolution processing is a right reference pixel region;
    the left second image retains a left first mapping position of the edge of the left first image, and the right second image retains a right first mapping position of the edge of the right first image.
  7. The method according to claim 6, wherein the matching the left second image and the right second image comprises:
    using the left first mapping position and the pixel information of the left reference pixel region retained in the left second image, and the right first mapping position and the pixel information of the right reference pixel region retained in the right second image, to match the edges in the left second image against the edges in the right second image, obtaining a first edge matching result;
    performing preliminary matching on the other regions of the left second image and the right second image according to the first edge matching result.
  8. The method according to claim 6, wherein
    after the matching the left second image and the right second image, the method further comprises:
    performing up-resolution processing on the left second image and the right second image respectively;
    calculating, from the left first mapping position and the right first mapping position, a left second mapping position and a right second mapping position of the edge in the up-resolution-processed left second image and right second image;
    taking the left second mapping position and the right second mapping position as reference positions, retaining the pixel information of the reference pixel regions at the corresponding edge positions of the up-resolution-processed left second image and right second image, obtaining a left third image and a right third image;
    matching the left third image and the right third image.
  9. The method according to claim 6, wherein
    the performing up-resolution processing on the left second image and the right second image respectively comprises:
    obtaining the scaling factor of the down-resolution processing;
    taking the value of the scaling factor as the value of the scaling factor of the up-resolution operation, and performing up-resolution processing on the left second image and the right second image respectively.
  10. The method according to claim 6, wherein
    the matching the left third image and the right third image comprises:
    using the left second mapping position and the pixel information of the left reference pixel region retained in the left third image, and the right second mapping position and the pixel information of the right reference pixel region retained in the right third image, to match the edges in the left third image against the edges in the right third image, obtaining a second edge matching result;
    performing fine matching on the other regions of the left third image and the right third image according to the second edge matching result.
  11. The method according to claim 10, wherein
    when the down-resolution processing is performed on the left first image and the right first image respectively, the method further comprises:
    recording the left position offset undergone by the edge of the left first image from the left first image to the left second image, and recording the right position offset undergone by the edge of the right first image from the right first image to the right second image;
    the matching the left third image and the right third image further comprises:
    taking the left position offset and the right position offset as matching constraints, and determining in the left third image and the right third image the matching range within which edge matching is performed.
  12. An image processing apparatus, comprising:
    a memory and a processor connected through a bus;
    the memory being configured to store the operation instructions executed by the processor, as well as images and data;
    the processor being configured to run the operation instructions to implement the image processing method according to any one of claims 1 to 4.
  13. An image matching device, comprising:
    a memory and a processor connected through a bus;
    the memory being configured to store the operation instructions executed by the processor, as well as the left first image and the right first image to be matched;
    the processor being configured to run the operation instructions to implement the image matching method according to any one of claims 5 to 11.
  14. A storage medium, storing program data, the program data being executable to implement the image processing method according to any one of claims 1 to 4, or executable to implement the image matching method according to any one of claims 5 to 11.
PCT/CN2018/101798 2018-08-22 2018-08-22 一种图像处理、匹配方法、装置及存储介质 WO2020037566A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2018/101798 WO2020037566A1 (zh) 2018-08-22 2018-08-22 Image processing and matching method, device, and storage medium
CN201880087343.2A CN111630558B (zh) 2018-08-22 2018-08-22 Image processing and matching method, device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/101798 WO2020037566A1 (zh) 2018-08-22 2018-08-22 Image processing and matching method, device, and storage medium

Publications (1)

Publication Number Publication Date
WO2020037566A1 true WO2020037566A1 (zh) 2020-02-27

Family

ID=69592148

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/101798 WO2020037566A1 (zh) 2018-08-22 2018-08-22 Image processing and matching method, device, and storage medium

Country Status (2)

Country Link
CN (1) CN111630558B (zh)
WO (1) WO2020037566A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114972333B (zh) * 2022-07-19 2022-10-25 淄博市淄川区市政环卫服务中心 Road crack detection method and system based on artificial intelligence

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100166334A1 (en) * 2008-12-29 2010-07-01 Arcsoft Hangzhou Co., Ltd. Method for magnifying images and videos
CN102217314A (zh) * 2008-09-18 2011-10-12 汤姆森特许公司 Method and device for video image pruning
CN104094284A (zh) * 2013-02-05 2014-10-08 Lsi公司 Image processor with edge-preserving noise suppression functionality
CN105354843A (zh) * 2015-10-30 2016-02-24 北京奇艺世纪科技有限公司 Image boundary extraction method and system
CN106296578A (zh) * 2015-05-29 2017-01-04 阿里巴巴集团控股有限公司 Image processing method and apparatus
US20170084004A1 (en) * 2015-09-17 2017-03-23 Samsung Electronics Co., Ltd. Image processing apparatus, image processing method, and computer-readable recording medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103390263A (zh) * 2012-05-11 2013-11-13 郭琳 SAR image denoising method
CN103914857A (zh) * 2012-12-28 2014-07-09 中国科学院沈阳自动化研究所 Edge-feature-preserving image compression method
CN104217431B (zh) * 2014-08-29 2017-02-08 天津大学 Compressed sensing compensation method based on edge extraction and image fusion


Also Published As

Publication number Publication date
CN111630558B (zh) 2024-03-01
CN111630558A (zh) 2020-09-04

Similar Documents

Publication Publication Date Title
WO2019164232A1 (ko) 전자 장치, 이의 영상 처리 방법 및 컴퓨터 판독가능 기록 매체
WO2019050360A1 (en) ELECTRONIC DEVICE AND METHOD FOR AUTOMATICALLY SEGMENTING TO BE HUMAN IN AN IMAGE
TWI323434B (en) Method of object segmentation for video
WO2021201422A1 (ko) Ar에 적용 가능한 의미적인 분할 방법 및 시스템
WO2019104616A1 (zh) 一种图像处理方法及装置、计算机可读存储介质
EP3201881A1 (en) 3-dimensional model generation using edges
WO2020231226A1 (en) Method of performing, by electronic device, convolution operation at certain layer in neural network, and electronic device therefor
WO2020060019A1 (ko) 글자 검출 장치, 방법 및 시스템
WO2020116768A1 (ko) 영상 처리 장치 및 그 동작방법
WO2019066373A1 (ko) 이미지에 포함된 오브젝트의 카테고리 및 인식률에 기반하여 이미지를 보정하는 방법 및 이를 구현한 전자 장치
WO2022045495A1 (en) Methods for depth map reconstruction and electronic computing device for implementing the same
EP3472801A1 (en) Image processing apparatus and recording medium
WO2020055181A1 (ko) 영상 처리 장치 및 그 동작방법
WO2019127049A1 (zh) 一种图像匹配方法、装置及存储介质
WO2020067725A1 (ko) 사진 측량을 이용한 모델 재구성 장치 및 방법
WO2019017720A1 (ko) 사생활 보호를 위한 카메라 시스템 및 그 방법
WO2020037566A1 (zh) 一种图像处理、匹配方法、装置及存储介质
WO2022060001A1 (ko) 영상 처리 장치 및 그 동작방법
WO2015196878A1 (zh) 一种电视虚拟触控方法及系统
WO2020080734A1 (ko) 얼굴 인식 방법 및 얼굴 인식 장치
WO2020235854A1 (ko) 불량 이미지 생성 장치 및 방법
WO2020101300A1 (ko) 영상 처리 장치 및 그 동작방법
WO2020050550A1 (en) Methods and systems for performing editing operations on media
WO2020138630A1 (en) Display apparatus and image processing method thereof
WO2023090510A1 (ko) 데이터 보완 조건에 기반한 데이터 선별을 수행하는 전자장치 및 그 수행 방법

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18931208

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18931208

Country of ref document: EP

Kind code of ref document: A1