CN108109150B - Image segmentation method and terminal - Google Patents
- Publication number: CN108109150B (application CN201711350777.0A)
- Authority
- CN
- China
- Prior art keywords
- pixel points
- space
- bilateral
- grid
- domain
- Legal status: Active (status assumed by the database, not a legal conclusion)
Classifications
- G06T7/11 — Region-based segmentation (under G Physics › G06 Computing › G06T Image data processing › G06T7/00 Image analysis › G06T7/10 Segmentation; edge detection)
- G06T7/194 — Segmentation; edge detection involving foreground-background segmentation
Abstract
The invention discloses an image segmentation method and a terminal. The image segmentation method comprises: establishing a bilateral space comprising a spatial domain and a feature domain; sampling all pixel points of an input image in the spatial domain and mapping them to the feature domain according to their feature values, so as to divide the pixel points into a plurality of grids in the bilateral space; finding, for each grid in the bilateral space, its adjacent grids in the spatial domain and grouping the grid with its adjacent grids into a block; and mapping all the grouped pixel points back to the input image to complete the segmentation. With this scheme, under-segmentation is avoided even when the color of the segmented subject transitions gradually into the background color. Further, post-processing in pixel space regroups pixel points that were originally scattered across the image, so that the final result keeps the subject complete and independent while preventing a complicated background from being segmented into overly scattered pieces.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to an image segmentation method and a terminal.
Background
Image segmentation is the technique and process of dividing an image into a number of regions with distinct properties according to certain rules. It is a key step in image recognition and computer vision, and is widely applied in fields such as machine vision, face recognition, fingerprint recognition, intelligent vehicles and medical imaging. Research on image segmentation therefore has a long history, and many segmentation methods have been developed. Commonly used techniques include threshold-based, edge-based, region-based and graph-theory-based segmentation methods.
The threshold-based segmentation method generally calculates one or more thresholds from a grayscale (or color) characteristic of the image, and assigns each pixel to an appropriate region by comparing its grayscale value with the thresholds.
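As a minimal sketch of this idea (illustrative only, not code from the patent), a multi-threshold rule can assign each pixel a region label by counting how many thresholds its gray value exceeds:

```python
import numpy as np

def threshold_segment(gray, thresholds):
    """Assign each pixel a region label by comparing its gray value
    against a sorted list of thresholds. The function name and the
    labeling scheme are hypothetical, for illustration only."""
    gray = np.asarray(gray)
    labels = np.zeros(gray.shape, dtype=np.int32)
    for t in sorted(thresholds):
        # each threshold crossed moves the pixel one region "up"
        labels += (gray > t).astype(np.int32)
    return labels

img = np.array([[10, 30, 200],
                [15, 120, 210],
                [20, 130, 220]])
regions = threshold_segment(img, [50, 180])  # three regions: dark, mid, bright
```

With two thresholds the pixels fall into three regions (labels 0, 1, 2) purely by gray level, regardless of spatial position.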
The edge-based segmentation method looks for the set of pixel points lying on the boundary between two different regions of an image; its core idea is to exploit local discontinuities in the image. Based on this property, edges are usually detected with differential operators, i.e., located at the extrema of the first derivative and the zero crossings of the second derivative, and edge detection is generally carried out by convolving the image with a template.
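A minimal sketch of template-based edge detection (illustrative, not from the patent): convolving a step image with the Sobel x kernel, a standard first-derivative template, produces a strong response exactly at the vertical edge.

```python
import numpy as np

def convolve2d(img, kernel):
    """Naive valid-mode 2-D convolution (kernel flipped), enough to
    illustrate detecting edges by convolving an image with a template."""
    img = np.asarray(img, dtype=float)
    k = np.flipud(np.fliplr(np.asarray(kernel, dtype=float)))
    kh, kw = k.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i+kh, j:j+kw] * k).sum()
    return out

sobel_x = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # first-derivative template in x
step = np.tile([0, 0, 255, 255], (4, 1))          # image with one vertical step edge
gx = convolve2d(step, sobel_x)                    # large magnitude along the step
```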
The graph-theory-based segmentation method is a more recent research hotspot in image segmentation. The idea is to convert the image into a weighted undirected graph: pixels are treated as the vertices of the graph, the dissimilarity between pixels as the weight between vertices, and different strategies are then applied to partition the graph. The optimality principle is to maximize the similarity within each resulting subgraph while minimizing the similarity between subgraphs.
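The graph construction described above can be sketched as follows; using the absolute gray difference between 4-neighbors as the dissimilarity weight is one common choice assumed here, and the function name is hypothetical.

```python
def build_4_neighbor_graph(img):
    """Build the weighted undirected graph described above: each pixel is
    a vertex (flat index y*w + x), each 4-neighbor pair is an edge
    weighted by the absolute gray difference."""
    h, w = len(img), len(img[0])
    edges = []  # (weight, vertex_a, vertex_b)
    for y in range(h):
        for x in range(w):
            if x + 1 < w:  # horizontal neighbor
                edges.append((abs(img[y][x] - img[y][x+1]), y*w + x, y*w + x + 1))
            if y + 1 < h:  # vertical neighbor
                edges.append((abs(img[y][x] - img[y+1][x]), y*w + x, (y+1)*w + x))
    return sorted(edges)  # many graph-based methods then process edges by weight

edges = build_4_neighbor_graph([[10, 12],
                                [11, 90]])
```

Partition strategies (minimum cut, region merging, etc.) then operate on this edge list.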
None of the existing image segmentation methods can handle all scenes. Of these, graph-theory-based segmentation currently performs best overall, but by its nature it often under-segments images whose colors change gradually. In a graph-theory-based method, each pixel point searches for neighboring pixel points whose dissimilarity lies within a certain range; in an image with a gradual color change the local dissimilarity stays small throughout, so the method merges the whole gradient into one region even when the absolute color difference across that region is very large.
In practical applications, when there is a gradual color transition between foreground and background, graph-theory-based segmentation merges the background and foreground into one region, causing under-segmentation.
Disclosure of Invention
The present invention provides an image segmentation method aiming at the above problems, comprising the following steps:
establishing a bilateral space, wherein the bilateral space comprises a space domain and a characteristic domain;
sampling all pixel points of an input image on a spatial domain, and mapping the pixel points to the characteristic domain based on different characteristic values so as to divide the pixel points into a plurality of grids in the bilateral space;
searching each grid in the bilateral space for the adjacent grid in the space domain, and grouping the grid and the adjacent grid into a block;
and mapping all the grouped pixel points back to the input image to finish image segmentation.
Optionally, after mapping all the grouped pixel points back to the input image to complete image segmentation, the method further includes: and regrouping all pixel points which finish image segmentation in the bilateral space so as to regroup the pixel points which are grouped into a block but distributed dispersedly in the bilateral space.
Optionally, the regrouping all the pixel points that have completed image segmentation in the bilateral space to regroup the pixel points that are grouped into a block but distributed scattered in the bilateral space includes: grouping pixels that are directly adjacent in a spatial domain and grouped into a block in the bilateral space into a group; counting the number of pixel points in each group; according to the statistical result, two groups which are directly adjacent and have the number of pixel points less than the threshold value are combined into a group, and the group which has the number of pixel points less than the threshold value is fused into the group which is directly adjacent and has the largest number of direct contact times between the pixel points.
Optionally, the fusing the group with the number of the pixels less than the threshold to the group directly adjacent to the pixel and with the largest number of direct contacts between the pixels includes: counting the times of direct contact with the pixel points in other groups in the upper, lower, left and right directions of each pixel point respectively for all the pixel points in the group of which the number of the pixel points is less than the threshold value; according to the statistical result, the group with the most direct contact times with all the pixel points in the group with the number of the pixel points less than the threshold value in other groups is used as a fusion parent; and fusing the groups with the number of the pixel points less than the threshold value into the fusion parent.
Optionally, the grid in the bilateral space has labels, and an order of the labels is determined according to an order of sampling all pixel points of the input image in the spatial domain.
Optionally, searching each grid in the bilateral space for its neighboring grid in the spatial domain includes: and searching each grid in the bilateral space for an adjacent grid in the positive direction of the space domain.
Optionally, the feature value includes any one or more of a gray value, an RGB value, or a YUV value.
An embodiment of the present invention further provides a terminal, including:
an image processor and a memory; wherein,
the memory stores program instructions that, when executed by the image processor, perform the following operations:
establishing a bilateral space, wherein the bilateral space comprises a space domain and a characteristic domain;
sampling all pixel points of an input image on a spatial domain, and mapping the pixel points to the characteristic domain based on different characteristic values so as to divide the pixel points into a plurality of grids in the bilateral space;
searching each grid in the bilateral space for the adjacent grid in the space domain, and grouping the grid and the adjacent grid into a block;
and mapping all the grouped pixel points back to the input image to finish image segmentation.
Compared with the prior art, the technical scheme of the invention at least has the following beneficial effects:
the embodiment of the invention introduces the concept of bilateral space into image segmentation processing, firstly carries out sampling grouping of space domain and characteristic domain on pixel points in an input image, separates the pixel points with different characteristic values on the characteristic domain to form grids, then searches adjacent grids of each grid on the space domain, groups the grids and the adjacent grids into a block, and then maps all the grouped pixel points back to the input image, thereby effectively segmenting the gradual change part in the image, and particularly avoiding the phenomenon of under-segmentation under the condition that the color of a segmentation main body and the background color have gradual change transition.
Furthermore, after image segmentation, through post-processing in a pixel space, originally distributed scattered pixel points are divided into different groups again, so that the integrity and the independence of a main body are kept in a final segmentation result, and the situation that complicated backgrounds are too scattered in segmentation can be avoided.
Drawings
FIG. 1 is a schematic flow chart of an image segmentation method of the present invention;
FIG. 2 is a schematic diagram of the present invention for creating a three-dimensional bilateral space;
FIG. 3 is a schematic diagram of the bilateral space of the present invention for performing spatial domain and feature domain sampling on an input image into a plurality of grids;
FIG. 4A is a schematic diagram of the segmentation result of the input image mapped back into the input image after image segmentation in the bilateral space in accordance with the present invention;
FIG. 4B is a diagram of the pixel points regrouped based on FIG. 4A;
fig. 5 is a diagram of a specific example of a pixel space regrouping pixel points in the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail below.
Fig. 1 is a schematic flow chart of an image segmentation method according to the present invention. Referring to fig. 1, the image segmentation method includes the steps of:
step S1: establishing a bilateral space, wherein the bilateral space comprises a space domain and a characteristic domain;
step S2: sampling all pixel points of an input image on a spatial domain, and mapping the pixel points to the characteristic domain based on different characteristic values so as to divide the pixel points into a plurality of grids in the bilateral space;
step S3: searching each grid in the bilateral space for the adjacent grid in the space domain, and grouping the grid and the adjacent grid into a block;
step S4: and mapping all the grouped pixel points back to the input image to finish image segmentation.
Before describing embodiments of the present invention, the following concepts regarding bilateral space are set forth.
"Bilateral" refers to the spatial and feature parameters of image pixels, a concept proposed by Chen, Paris and Durand in the paper Real-time edge-aware image processing with the bilateral grid (ACM SIGGRAPH, 2007). The approach groups the pixel points of an image by sampling them in a spatial domain and a feature domain respectively. Here "spatial" refers to the two-dimensional coordinates of a pixel point in the image plane, i.e., its coordinates along the length and width of the image, while the "feature" may be the pixel's gray scale, RGB value, YUV value, or the like.
The concept of bilateral space is introduced into the image segmentation method in the embodiment of the invention.
Specifically, as stated in step S1, a bilateral space is established, where the bilateral space includes a spatial domain and a feature domain.
In the present embodiment, the spatial domain is a two-dimensional coordinate of a plane of the input image, and the feature domain is determined according to the features of the input image. For example, if the input image is a gray scale image, the feature value in the feature domain is a gray scale value; for example, if the input image is an image in RGB format or YUV format and sampling is performed in all three channels of the feature domain, the feature value in the feature domain is R, G, B three values or Y, U, V three values.
Therefore, the dimension of the bilateral space established in the step is determined according to the number of the characteristic values on the characteristic domain, and if the characteristic values on the characteristic domain are only one-dimensional in gray value, the bilateral space is a three-dimensional space; if the feature values on the feature domain include R, G, B three values or Y, U, V three values, then the bilateral space is a five-dimensional space. Fig. 2 is a schematic diagram of a three-dimensional bilateral space, in which an XY plane is a space domain of an input image, and a Feature axis is a Feature domain, which may be a gray scale of the image, or any channel of RGB or YUV.
As described in step S2, all pixels of the input image are sampled in the spatial domain, and the pixels are mapped to the feature domain based on different feature values, so as to divide the pixels into a plurality of grids in the bilateral space.
Specifically, first, all pixel points of the input image are sampled in two dimensions, i.e., length and width, in a spatial domain, for example, the pixel points may be sequentially sampled in rows from the top left corner of the input image, and then, according to different feature values of each pixel point, the pixel points are sequentially mapped to a feature domain.
All pixel points of the input image are traversed according to the sampling mode, the pixel points are grouped into a plurality of grids of the bilateral space, and the pixel points contained in each grid are adjacent and close in a spatial domain and have similar characteristic values (such as gray values, RGB values or YUV values) in a characteristic domain. Fig. 3 is a schematic diagram illustrating the division of the input image into a plurality of grids in the bilateral space by performing spatial domain and feature domain sampling.
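Step S2 can be sketched as follows, assuming a gray image (one feature channel, so a three-dimensional bilateral space) and illustrative sampling steps `s_spatial` and `s_feature` — the patent does not fix these values. Each pixel point lands in a grid cell keyed by its downsampled spatial coordinates and quantized gray value.

```python
import numpy as np

def map_to_bilateral_grid(gray, s_spatial=4, s_feature=32):
    """Map every pixel of a gray image into a 3-D bilateral grid cell
    (x_cell, y_cell, feature_cell). Sampling steps are illustrative
    assumptions. Returns {cell: [(x, y), ...]}."""
    gray = np.asarray(gray)
    cells = {}
    for y in range(gray.shape[0]):          # row-by-row traversal, as in the text
        for x in range(gray.shape[1]):
            key = (x // s_spatial, y // s_spatial, int(gray[y, x]) // s_feature)
            cells.setdefault(key, []).append((x, y))
    return cells

img = np.array([[10, 12, 200, 210],
                [11, 13, 205, 215]])
cells = map_to_bilateral_grid(img, s_spatial=2, s_feature=64)
```

The dark left half and bright right half end up in two different grid cells: spatially adjacent but on different feature layers, exactly the separation fig. 3 illustrates.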
As described in step S3, for each grid in the bilateral space, its neighboring grid is found in the spatial domain, and the grid and its neighboring grid are grouped into a tile.
Specifically, in the process of creating the meshes of the bilateral space, each mesh has a label, and the order of the labels is determined according to the order of sampling all pixel points of the input image on the spatial domain. In this step, the adjacent grids are sequentially searched for each grid according to the sequence of the labels, the search is only performed in the positive direction of the spatial domain, and the grid and the adjacent grids are grouped into a block. After all the grids are traversed, the adjacent grids are connected together, and the pixels with different feature values appear in different feature domain layers, as shown in fig. 3. In practice, the process of image segmentation is to find an adjacent grid in the spatial domain, and after the finding is completed, the adjacent grid is recorded into a block.
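The neighbor search of step S3 can be sketched with a union-find structure (an implementation choice assumed here, not prescribed by the patent): each grid cell looks for neighbors only in the positive x and y directions of the spatial domain, and only within its own feature layer, so grids on different feature layers are never connected.

```python
def group_grids(cells):
    """Group bilateral-grid cells that touch in the spatial domain into
    blocks. Cells are (x_cell, y_cell, feature_cell) tuples; only cells on
    the same feature layer are ever joined, and the scan looks only in the
    positive spatial directions, as in step S3."""
    parent = {c: c for c in cells}

    def find(c):
        while parent[c] != c:              # path-halving find
            parent[c] = parent[parent[c]]
            c = parent[c]
        return c

    for (cx, cy, f) in cells:
        for nb in ((cx + 1, cy, f), (cx, cy + 1, f)):  # positive direction only
            if nb in cells:
                parent[find((cx, cy, f))] = find(nb)   # merge the two blocks
    return {c: find(c) for c in cells}

# three mutually adjacent cells on feature layer 0, one isolated cell on layer 2
cells = {(0, 0, 0), (1, 0, 0), (0, 1, 0), (3, 3, 2)}
labels = group_grids(cells)
```

After the scan the three layer-0 cells share one block label while the layer-2 cell forms a block of its own.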
In step S4, all the grouped pixel points are mapped back to the input image to complete the image segmentation. Because adjacent grids are grouped only within the spatial domain, and a grid in one feature layer is never connected to grids in other feature layers, the under-segmentation that occurs when segmenting images with gradual color changes is avoided. Fig. 4A shows the segmentation result after the input image is segmented in the bilateral space and mapped back: adjacent pixel points with similar features (gray level or color) are segmented into one block (in the figure, 1, 2 and 3 are block labels, and pixel points in different blocks have different features).
Further, the inventor found that, because the grouping is performed in the spatial domain, pixel points that are not directly adjacent can still end up in one block. For example, take an input image of 9 × 9 pixels (i.e., 9 pixels in both the length and width directions); with the above method, these 81 pixel points are mapped into different grids in the bilateral space. Pixel points with similar features (e.g., similar gray levels) are mapped into the same grid, but they are not necessarily directly adjacent (e.g., the 1st and the 81st pixel may land in one grid). Because they share a grid, they belong to the same region after segmentation, and since they are not directly adjacent, the pixel points of one region may be disconnected when mapped back to the input image. If the gray or color distribution of the image is scattered, i.e., pixel points of similar color or gray level are interleaved across the input image, this phenomenon becomes pronounced.
Noise is generally understood as a small number of pixel points whose color or gray level differs markedly from the surrounding pixels. For example, consider a white wall with a single black dot: the white wall can be regarded as a large low-texture area and the black dot as noise. When the bilateral space is created, the white pixel points fall into one grid while the black dot falls into another; the two grids lie on different levels of the feature domain, and the grid containing the black dot finds no adjacent grid in the spatial domain, so it is treated as a block of its own. After mapping back to the input image, a small black block appears inside the white block, which degrades the smoothness of the segmentation result.
Therefore, in order to give the segmentation result better smoothness, post-processing in pixel space is performed on top of the bilateral-space segmentation, fusing and merging noise and scattered pixel points.
In this embodiment, the post-processing in pixel space includes the following steps:
and regrouping all pixel points which finish image segmentation in the bilateral space so as to regroup the pixel points which are grouped into a block but distributed dispersedly in the bilateral space. Reference is now made in combination to fig. 4A and 4B, wherein fig. 4A is a segmentation result of an input image after image segmentation in bilateral space and then mapping back into the input image. After regrouping, as shown in fig. 4B, the number of blocks of the input image after processing increases, the original closely-connected larger blocks do not change (as the blocks labeled 1, 2, and 3 in fig. 4B are the same as those in fig. 4A), and the pixels in the same block but distributed scattered are regrouped into different blocks (as the blocks labeled 4, 5, and 6 in fig. 4B are regrouped labels for the block labeled 1 in fig. 4A, because the pixels in the blocks 4, 5, and 6 are the pixels grouped into block 1 in the bilateral space but distributed scattered).
Specifically, regrouping pixel points includes:
(1) grouping pixels that are directly adjacent in a spatial domain and grouped into a block in the bilateral space into a group;
(2) counting the number of pixel points in each group;
(3) according to the statistical result, merging two directly adjacent groups whose pixel counts are both less than the threshold into one group, and fusing a group whose pixel count is less than the threshold into the directly adjacent group with which its pixel points have the largest number of direct contacts. The threshold may be chosen according to the degree of smoothness desired for the image and is not limited here.
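Steps (1) and (2) above can be sketched as a 4-connected flood fill over the block-label map followed by a size count; the function name and argument layout are hypothetical, for illustration.

```python
from collections import deque

def regroup(block_labels):
    """Steps (1)-(2): split each segmented block into 4-connected groups of
    directly adjacent pixel points, then count the pixels per group.
    `block_labels` is a 2-D list of block ids, one per pixel.
    Returns (group id map, list of group sizes)."""
    h, w = len(block_labels), len(block_labels[0])
    group = [[-1] * w for _ in range(h)]
    sizes = []
    for y in range(h):
        for x in range(w):
            if group[y][x] != -1:
                continue
            gid = len(sizes)            # start a new group at this seed pixel
            sizes.append(0)
            group[y][x] = gid
            q = deque([(x, y)])
            while q:                    # BFS over same-block 4-neighbors
                cx, cy = q.popleft()
                sizes[gid] += 1
                for nx, ny in ((cx+1, cy), (cx-1, cy), (cx, cy+1), (cx, cy-1)):
                    if (0 <= nx < w and 0 <= ny < h and group[ny][nx] == -1
                            and block_labels[ny][nx] == block_labels[cy][cx]):
                        group[ny][nx] = gid
                        q.append((nx, ny))
    return group, sizes

blocks = [[1, 1, 2],
          [1, 2, 2],
          [2, 2, 1]]   # the lone "1" at the bottom-right is a scattered pixel
group, sizes = regroup(blocks)
```

The scattered bottom-right pixel of block 1 becomes its own small group, which step (3) would then fuse into an adjacent group.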
In the step (3), the fusing the group with the number of the pixels less than the threshold value into the group which is directly adjacent to the group and has the largest number of direct contacts between the pixels includes:
counting the times of direct contact with the pixel points in other groups in the upper, lower, left and right directions of each pixel point respectively for all the pixel points in the group of which the number of the pixel points is less than the threshold value;
according to the statistical result, the group with the most direct contact times with all the pixel points in the group with the number of the pixel points less than the threshold value in other groups is used as a fusion parent;
and fusing the groups with the number of the pixel points less than the threshold value into the fusion parent.
The specific process of fusing the grouping with the number of the pixels less than the threshold to the grouping directly adjacent to the grouping with the maximum number of direct contacts between the pixels is described with reference to the schematic diagram of the pixel space for regrouping the pixels shown in fig. 5.
It should be noted that, in this embodiment, "direct contact" with a pixel point means a pixel point directly connected in one of the four directions above, below, left or right of that pixel point; the four pixel points connected diagonally (at 45 degrees) are not included.
Referring to fig. 5, the pixel points labeled 0 form a noise group to be merged into another block. Statistics are taken in the four directions up, down, left and right: the directly adjacent pixel points labeled 1 are in direct contact with the pixel points labeled 0 a total of 13 times; those labeled 2, 2 times; and those labeled 3, 9 times. The directly adjacent group with the largest number of contacts is therefore the block labeled 1, so block 1 serves as the fusion parent. Because the noise pixels have the most contact with the fusion parent, the boundaries between blocks are smoother after fusion. This pixel-space post-processing greatly improves both the smoothness and the reasonableness of the segmentation.
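The fusion-parent selection can be sketched as follows. The small label map below is a made-up example (not the layout of fig. 5), with group 0 as the noise group to be fused; the function name is hypothetical.

```python
def fusion_parent(group, target_gid):
    """For every pixel in the target group, count direct 4-neighbor
    contacts with pixels of other groups (diagonals excluded, as the text
    specifies) and return the group id with the highest contact count."""
    h, w = len(group), len(group[0])
    contacts = {}
    for y in range(h):
        for x in range(w):
            if group[y][x] != target_gid:
                continue
            for nx, ny in ((x+1, y), (x-1, y), (x, y+1), (x, y-1)):
                if 0 <= nx < w and 0 <= ny < h and group[ny][nx] != target_gid:
                    contacts[group[ny][nx]] = contacts.get(group[ny][nx], 0) + 1
    return max(contacts, key=contacts.get)

labels = [[1, 1, 1],
          [3, 0, 1],
          [3, 0, 1]]
parent = fusion_parent(labels, 0)   # group 1 touches group 0 more than group 3
```

Group 0's pixels touch group 1 three times and group 3 twice, so group 1 is chosen as the fusion parent, mirroring the selection rule illustrated by fig. 5.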
The embodiment of the invention also provides a terminal, which comprises: an image processor and a memory; the memory stores program instructions, and when the image processor executes the program instructions, the step operations in the above method embodiments are executed, which is specifically referred to the above embodiments and will not be described herein again.
In summary, the invention introduces the concept of bilateral space into image segmentation: the pixel points of the input image are first sampled and grouped in the spatial domain and the feature domain, separating pixel points with different feature values into individual grids in the feature domain; the adjacent grids of each grid are then found in the spatial domain and grouped with it into a block; finally, all the grouped pixel points are mapped back to the input image. The gradual-change parts of the image are thereby segmented effectively, and under-segmentation is avoided in particular when the color of the segmented subject transitions gradually into the background color.
Furthermore, after image segmentation, through post-processing in a pixel space, originally distributed scattered pixel points are divided into different groups again, so that the integrity and the independence of a main body are kept in a final segmentation result, and the situation that complicated backgrounds are too scattered in segmentation can be avoided.
Although the present invention has been described with reference to the preferred embodiments, it is not intended to limit the present invention, and those skilled in the art can make variations and modifications of the present invention without departing from the spirit and scope of the present invention by using the methods and technical contents disclosed above.
Claims (10)
1. An image segmentation method, characterized by comprising the steps of:
establishing a bilateral space, wherein the bilateral space comprises a space domain and a characteristic domain;
sampling all pixel points of an input image on a spatial domain, and mapping the pixel points to the characteristic domain based on different characteristic values so as to divide the pixel points into a plurality of grids in the bilateral space;
searching each grid in the bilateral space for the adjacent grid in the space domain, and grouping the grid and the adjacent grid into a block;
mapping all the grouped pixel points back to the input image to finish image segmentation; after mapping all the grouped pixel points back to the input image to complete image segmentation, the method further comprises the following steps:
regrouping all pixel points which finish image segmentation in the bilateral space so as to regroup the pixel points which are grouped into a block but distributed dispersedly in the bilateral space; the regrouping of all pixel points which finish image segmentation in the bilateral space so as to regroup pixel points which are grouped into a block but distributed dispersedly in the bilateral space comprises:
grouping pixels that are directly adjacent in a spatial domain and grouped into a block in the bilateral space into a group;
counting the number of pixel points in each group;
according to the statistical result, two groups which are directly adjacent and have the number of pixel points less than the threshold value are combined into a group, and the group which has the number of pixel points less than the threshold value is fused into the group which is directly adjacent and has the largest number of direct contact times between the pixel points.
2. The image segmentation method of claim 1, wherein the fusing of the groups with the number of pixels less than the threshold value into the group with the largest number of direct contacts between pixels and the immediately adjacent groups comprises:
counting the times of direct contact with the pixel points in other groups in the upper, lower, left and right directions of each pixel point respectively for all the pixel points in the group of which the number of the pixel points is less than the threshold value;
according to the statistical result, the group with the most direct contact times with all the pixel points in the group with the number of the pixel points less than the threshold value in other groups is used as a fusion parent;
and fusing the groups with the number of the pixel points less than the threshold value into the fusion parent.
3. The image segmentation method according to claim 1, wherein the meshes in the bilateral space have labels, and an order of the labels is determined according to an order of sampling all the pixel points of the input image in a spatial domain.
4. The image segmentation method of claim 1 wherein finding its neighboring meshes in the spatial domain for each mesh in the bilateral space comprises: and searching each grid in the bilateral space for an adjacent grid in the positive direction of the space domain.
5. The image segmentation method of claim 1, wherein the feature values comprise any one or more of gray scale values, RGB values, or YUV values.
6. A terminal, comprising:
an image processor and a memory; wherein,
the memory stores program instructions that, when executed by the image processor, perform the following operations:
establishing a bilateral space, wherein the bilateral space comprises a spatial domain and a feature domain;
sampling all pixels of an input image in the spatial domain and mapping them to the feature domain based on their different feature values, so as to divide the pixels into a plurality of grids in the bilateral space;
searching, for each grid in the bilateral space, its adjacent grids in the spatial domain, and grouping the grid and its adjacent grids into a block;
mapping all the grouped pixels back to the input image to complete the image segmentation; wherein the program instructions, when executed by the image processor, further perform, after mapping all the grouped pixels back to the input image:
regrouping, in the bilateral space, all the pixels of the segmented image, so as to regroup pixels that were grouped into one block but are scattered in the bilateral space; wherein the regrouping comprises:
grouping pixels that are directly adjacent in the spatial domain and grouped into the same block in the bilateral space into one group;
counting the number of pixels in each group; and
according to the counting result, merging two directly adjacent groups whose pixel counts are both below the threshold into one group, and fusing any group whose pixel count is below the threshold into the directly adjacent group with which its pixels have the most direct contacts.
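The regrouping recited in claims 1 and 6 amounts to a 4-connected component labelling within each bilateral block: pixels that share a block label but are spatially scattered end up in different groups. A minimal flood-fill sketch (all names illustrative, not the patented implementation):

```python
from collections import deque
import numpy as np

def regroup(block_labels):
    """Split each bilateral block into 4-connected groups, so pixels
    that share a block but are spatially scattered get distinct ids."""
    h, w = block_labels.shape
    groups = np.full((h, w), -1, dtype=np.int64)
    next_id = 0
    for sy in range(h):
        for sx in range(w):
            if groups[sy, sx] != -1:
                continue
            # Flood-fill one 4-connected region of equal block label.
            q = deque([(sy, sx)])
            groups[sy, sx] = next_id
            while q:
                y, x = q.popleft()
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < h and 0 <= nx < w
                            and groups[ny, nx] == -1
                            and block_labels[ny, nx] == block_labels[y, x]):
                        groups[ny, nx] = next_id
                        q.append((ny, nx))
            next_id += 1
    return groups

blocks = np.array([[5, 5, 7],
                   [7, 7, 7],
                   [5, 7, 5]])
g = regroup(blocks)
# The three disconnected patches of block 5 receive three different
# group ids; the connected block-7 region stays a single group.
print(g)
```

The per-group pixel counting and small-group fusion described in the following steps then operate on these group ids.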
7. The terminal of claim 6, wherein the image processor, when executing the program instructions, performs the following: fusing a group whose pixel count is below the threshold into the directly adjacent group with which its pixels have the most direct contacts comprises:
for all pixels in the group whose pixel count is below the threshold, counting, in the up, down, left, and right directions of each pixel, the number of direct contacts with pixels in other groups;
according to the counting result, taking as the fusion parent the other group having the most direct contacts with all the pixels of the group whose pixel count is below the threshold; and
fusing the group whose pixel count is below the threshold into the fusion parent.
8. The terminal of claim 6, wherein the grids in the bilateral space carry labels, and the order of the labels is determined by the order in which all pixels of the input image are sampled in the spatial domain.
9. The terminal of claim 6, wherein the image processor, when executing the program instructions, performs the following: searching, for each grid in the bilateral space, its adjacent grids in the spatial domain comprises: searching, for each grid in the bilateral space, its adjacent grids in the positive directions of the spatial domain.
10. The terminal of claim 6, wherein the feature values comprise any one or more of grayscale values, RGB values, or YUV values.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711350777.0A CN108109150B (en) | 2017-12-15 | 2017-12-15 | Image segmentation method and terminal |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108109150A CN108109150A (en) | 2018-06-01 |
CN108109150B true CN108109150B (en) | 2021-02-05 |
Family
ID=62217198
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711350777.0A Active CN108109150B (en) | 2017-12-15 | 2017-12-15 | Image segmentation method and terminal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108109150B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101286228A (en) * | 2008-05-15 | 2008-10-15 | Zhejiang University | Feature-based real-time video and image abstraction method
CN101425182A (en) * | 2008-11-28 | 2009-05-06 | Huazhong University of Science and Technology | Image object segmentation method
CN104331699A (en) * | 2014-11-19 | 2015-02-04 | Chongqing University | Planar fast search and comparison method of three-dimensional point cloud
EP2840551A1 (en) * | 2013-08-23 | 2015-02-25 | Vistaprint Schweiz GmbH | Methods and systems for automated selection of regions of an image for secondary finishing and generation of mask image of same
CN106447676A (en) * | 2016-10-12 | 2017-02-22 | Zhejiang University of Technology | Image segmentation method based on rapid density clustering algorithm
CN106815850A (en) * | 2017-01-22 | 2017-06-09 | Wuhan Dipu 3D Technology Co., Ltd. | Method for obtaining the stock volume of high-canopy-density forests based on lidar technology
Non-Patent Citations (1)
Title |
---|
"Research on 3D Reconstruction Technology for Medical Images"; He Xiaoqian; China Excellent Master's and Doctoral Theses Full-text Database (Master's) (Information Science and Technology), 2006-12-15 (No. 12); I138-1127 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Wei et al. | Toward automatic building footprint delineation from aerial images using CNN and regularization | |
CN108537239B (en) | Method for detecting image saliency target | |
US12131452B1 (en) | Method and apparatus for enhancing stereo vision | |
CN110309824B (en) | Character detection method and device and terminal | |
WO2018107939A1 (en) | Edge completeness-based optimal identification method for image segmentation | |
CN108121991B (en) | Deep learning ship target detection method based on edge candidate region extraction | |
CN110197153B (en) | Automatic wall identification method in house type graph | |
CN108629783B (en) | Image segmentation method, system and medium based on image feature density peak search | |
CN110334762B (en) | Feature matching method based on quad tree combined with ORB and SIFT | |
CN110458172A (en) | A kind of Weakly supervised image, semantic dividing method based on region contrast detection | |
CN102930561B (en) | Delaunay-triangulation-based grid map vectorizing method | |
CN107909079B (en) | Cooperative significance detection method | |
CN109712143B (en) | Rapid image segmentation method based on superpixel multi-feature fusion | |
CN107871321B (en) | Image segmentation method and device | |
JP2001043376A (en) | Image extraction method and device and storage medium | |
CN111583279A (en) | Super-pixel image segmentation method based on PCBA | |
CN109345549A (en) | A kind of natural scene image dividing method based on adaptive compound neighbour's figure | |
CN109448093B (en) | Method and device for generating style image | |
Mukherjee et al. | A hybrid algorithm for disparity calculation from sparse disparity estimates based on stereo vision | |
CN107085725B (en) | Method for clustering image areas through LLC based on self-adaptive codebook | |
CN108109150B (en) | Image segmentation method and terminal | |
Mahmoudpour et al. | Superpixel-based depth map estimation using defocus blur | |
Lindblad et al. | Exact linear time euclidean distance transforms of grid line sampled shapes | |
CN115187744A (en) | Cabinet identification method based on laser point cloud | |
CN110009654B (en) | Three-dimensional volume data segmentation method based on maximum flow strategy |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||