CN110188763B - Image significance detection method based on improved graph model - Google Patents
Image significance detection method based on improved graph model
- Publication number
- CN110188763B (application CN201910450367.6A)
- Authority
- CN
- China
- Prior art keywords
- nodes
- superpixel
- salient
- super
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
- G06F18/232—Non-hierarchical techniques
- G06F18/2321—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
- G06F18/23213—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/267—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Bioinformatics & Computational Biology (AREA)
- General Engineering & Computer Science (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Multimedia (AREA)
- Probability & Statistics with Applications (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses an image saliency detection method based on an improved graph model, and belongs to the technical field of computer vision and image detection. The method segments an image into superpixels by simple linear iterative clustering and constructs an undirected graph with the superpixels as vertices. On the basis of the improved graph model, high-level features are extracted from the bottom-level image features and prior knowledge, and a saliency map based on the bottom-level features is obtained. Foreground and background seed nodes are then selected using the high-level features and the compactness of the salient object, and saliency maps based on the foreground and background seeds are computed separately and fused. Finally, the saliency maps obtained in the two stages are fused into the final saliency map. The method can completely detect and uniformly highlight the salient objects in an image, improves the detection accuracy of salient objects in complex environments, meets the design requirements of practical engineering systems, and solves the problem of low detection accuracy of salient objects in complex environments.
Description
Technical Field
The invention relates to an image saliency detection method based on an improved graph model, and belongs to the technical field of computer vision and image detection.
Background
Saliency detection aims to give computers a visual attention mechanism similar to that of human beings, so that the most interesting and valuable information can be found in complex scenes. Early saliency detection algorithms targeted visual attention, with the goal of predicting human eye fixation points in an image. Later, many salient region detection methods emerged that aim to segment out entire salient objects. Compared with fixation prediction, salient region detection has higher application value. Saliency detection models can be divided into two categories, bottom-up and top-down. Bottom-up models are data driven and compute saliency from image color, contrast, and the like; top-down models are task driven and usually require training on a large number of samples to extract features of the task object. The invention mainly focuses on bottom-up detection methods.
Many researchers have explored bottom-up detection methods in depth. Cheng et al. compute saliency from the global and local contrast of image color histograms. Tong et al. propose a coding-based saliency measure built on color contrast between image regions. Methods that only use local or global contrast cannot correctly detect a salient object that is similar to the background. Researchers have therefore introduced background priors, center priors and other prior knowledge into the contrast computation, and calculate the saliency of each image region by comparison with an assumed background or foreground part.
In recent years, graph-model detection methods have been proposed, in which the input image is represented as a graph with superpixels as nodes and a series of computations is carried out on this graph. Different graph constructions have been adopted. Qin et al. continuously update the saliency values of superpixels with the propagation mechanism of a cellular automaton. Yang et al. propose a manifold-ranking detection method that uses foreground or background regions as labels and ranks the remaining image parts by their relevance to the labeled items, thereby obtaining superpixel saliency. Sun et al. propose a saliency detection model based on an absorbing Markov chain, treating boundary nodes as absorbing nodes and the other nodes as transient nodes, and obtain superpixel saliency from the absorption time. Zhou et al. represent the image as a two-layer sparse graph and extract foreground and background seed nodes using the color compactness of the salient object, computing and fusing saliency maps based on the foreground and background seeds. In such methods, the graph construction and the assignment of edge weights strongly influence the detection result. The graph constructions used by these methods often cannot fully reflect the adjacency relations between superpixels, and most models compute the weight matrix of the graph from color features alone, which degrades saliency detection, in particular leaving salient objects incompletely detected or their interiors not uniformly highlighted in complex environments.
Disclosure of Invention
In view of these problems, the invention provides an image saliency detection method based on an improved graph model that addresses the incomplete detection and non-uniform highlighting of salient objects in complex environments. The method can completely detect and uniformly segment the whole salient object, improves the performance of image saliency detection algorithms, and supports further research and development in image-related fields.
The key idea of the invention is as follows: the input image is segmented into superpixels, represented as a graph with the superpixels as nodes, and a series of computations is carried out on this representation.
First, the weight matrix of the graph is computed from bottom-level features such as color and texture, and high-level features are extracted by combining several kinds of prior knowledge, yielding a saliency map based on the bottom-level features. Then the weight matrix is recomputed from the high-level features, foreground and background seed nodes are extracted using the compactness of the salient object, saliency maps based on the foreground seeds and the background seeds are computed separately, and they are fused into a saliency map based on the high-level features. Finally, the saliency maps obtained in the two stages are fused into the final saliency map.
In order to achieve the above object, the specific implementation steps are as follows:
(1) segmenting the input image into N superpixels, where the i-th superpixel is denoted by v_i and the j-th superpixel by v_j, i, j ∈ {1, 2, ..., N};
(2) computing the similarity between superpixels from the bottom-level features of the input image to form a similarity matrix A = [a_ij]_{N×N}, where a_ij represents the similarity between superpixels v_i and v_j;
(3) constructing an undirected graph with the superpixels as nodes: each node is connected to its adjacent nodes and to the most similar node among the nodes sharing a common edge with its adjacent nodes, and finally the nodes around the border of the input image that are most likely to belong to the background are connected to one another;
(4) computing the weight matrix W_1 = [ω_ij]_{N×N} and the degree matrix D_1 of the undirected graph, and propagating the similarity matrix A by manifold ranking to obtain a new similarity matrix H = [h_ij]_{N×N}, where h_ij represents the similarity between superpixels v_i and v_j, and
ω_ij = exp(−Du_ij / σ²) if e_ij ∈ E, and ω_ij = 0 otherwise,
D_1 = diag{d_11, d_22, ..., d_NN}, d_ii = Σ_j ω_ij; here Du_ij is the bottom-level feature distance between superpixels v_i and v_j, σ² is a parameter controlling the weight, and e_ij denotes the edge connecting superpixels v_i and v_j in the graph;
(5) extracting high-level features and simultaneously obtaining a saliency map based on the bottom-level features;
(6) recomputing the weight matrix from the high-level features, extracting foreground and background seed nodes of the input image using the compactness of the salient object, computing saliency maps based on the foreground seed nodes and the background seed nodes separately, and fusing them into a saliency map based on the high-level features;
(7) fusing the saliency map based on the bottom-level features and the saliency map based on the high-level features from the two stages to obtain the final saliency map.
Optionally, step (5) comprises:
(5.1) calculating the spatial variance sv(i) of superpixel v_i and the spatial distance sd(i) between v_i and the image center:
where q_j = [q_j^x, q_j^y] denotes the coordinates of the center of v_j, n_j is the number of pixels contained in superpixel v_j, and p denotes the coordinates of the image center;
μ_i = [μ_i^x, μ_i^y] is the spatial mean of superpixel v_i; μ_i^x and μ_i^y are calculated as follows:
(5.2) fusing sv(i) and sd(i) according to the formula S_com(i) = 1 − (sv(i) + sd(i)) to obtain the coarse saliency value S_com(i) of superpixel v_i; the saliency value of each superpixel is assigned to all the pixels it contains to obtain a coarse saliency map I_0;
(5.3) calculating the bottom-level feature distance ld(i) between superpixel v_i and its adjacent superpixels, and calculating the center-distribution measure dm(i) of v_i with respect to I_0:
where N_i denotes the set of superpixels adjacent to v_i, Du_ij is the bottom-level feature distance between superpixels v_i and v_j, μ_s is the center of I_0, and m_ij is the similarity between superpixels v_i and v_j computed from their spatial distance, as follows:
(5.4) fusing ld(i) and dm(i) to obtain S_lc(i); after propagation by manifold ranking, S_lc(i) is denoted S_con(i);
(5.5) fusing S_com(i) and S_con(i) according to S_1(i) = S_com(i) + S_con(i) to obtain the saliency value of each superpixel, denoted S_1(i), which also serves as the high-level feature of each superpixel for subsequent calculations.
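Because the patent's formulas for sv(i), dm(i) and S_lc(i) appear only as figures and are not reproduced in this text, a minimal NumPy sketch of this bottom-level stage is given below to illustrate the data flow. Every concrete formula that is not quoted above (the similarity-weighted spatial variance, the min-max normalisation, the product fusion of ld and dm, and skipping the manifold-ranking propagation of S_lc) is an assumption made for illustration only; the function and argument names are hypothetical.

```python
# Hedged sketch of step (5): coarse compactness saliency S_com, local-contrast
# saliency S_con, and their sum S_1. Only S_com(i) = 1 - (sv(i)+sd(i)) and
# S_1(i) = S_com(i) + S_con(i) are taken from the text; the rest is assumed.
import numpy as np

def bottom_level_saliency(centers, n_pixels, H, Du, adjacency, image_center):
    """centers: (N,2) superpixel centres; n_pixels: (N,) pixel counts;
    H: (N,N) propagated similarity; Du: (N,N) feature distances;
    adjacency: (N,N) boolean adjacency; image_center: (2,)."""
    N = len(centers)
    weights = H * n_pixels[None, :]                       # similarity weighted by size
    weights = weights / weights.sum(axis=1, keepdims=True)
    mu = weights @ centers                                # spatial mean of each superpixel
    dist = np.linalg.norm(centers[None, :, :] - mu[:, None, :], axis=2)
    sv = (weights * dist).sum(axis=1)                     # assumed spatial variance sv(i)
    sd = np.linalg.norm(mu - image_center, axis=1)        # distance to image centre sd(i)

    def norm01(x):
        return (x - x.min()) / (x.max() - x.min() + 1e-12)

    S_com = 1.0 - norm01(norm01(sv) + norm01(sd))         # coarse saliency, step (5.2)

    ld = np.array([Du[i, adjacency[i]].mean() for i in range(N)])  # local contrast ld(i)
    mu_s = (S_com[:, None] * centers).sum(axis=0) / (S_com.sum() + 1e-12)  # centre of I_0
    dm = 1.0 - norm01(np.linalg.norm(centers - mu_s, axis=1))      # assumed dm(i)
    S_con = norm01(norm01(ld) * dm)                       # assumed fusion; MR step omitted
    return norm01(S_com + S_con)                          # S_1(i), step (5.5)

# toy usage with random data
rng = np.random.default_rng(0)
S1 = bottom_level_saliency(rng.uniform(0, 100, (20, 2)), np.full(20, 50.0),
                           rng.random((20, 20)), rng.random((20, 20)),
                           np.ones((20, 20), bool), np.array([50.0, 50.0]))
```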
Optionally, step (6) comprises:
(6.1) dividing the N superpixels into K classes with the k-means++ algorithm according to the high-level features; let C = [c_1, c_2, ..., c_K] denote the K cluster centers, and compute the similarity between each superpixel and each class to form a similarity matrix B = [b_ir]_{N×K}, where
(6.2) recalculating the weight matrix W_2 and the degree matrix D_2 of the undirected graph; let X = [x_ij]_{N×K} denote the matrix obtained by propagating the similarity matrix B through manifold ranking, where x_ij represents the similarity between superpixel v_i and class j;
(6.3) calculating the looseness sc(j) of each class and from it the probability po(j) = 1 − sc(j) that the class belongs to the foreground; the mean of po is used as a threshold to divide the classes into foreground seeds and background seeds, the set of foreground seeds being denoted FG and the set of background seeds BG; sc(j) is calculated as follows:
where μ_j denotes the spatial mean of class j;
(6.4) calculating the saliency value S_f(i) of each superpixel based on the foreground seeds and the saliency value S_b(i) based on the background seeds; after propagation by manifold ranking, the two are fused by the formula S_2(i) = λ_2 · S_b(i) + (1 − λ_2) · S_f(i), where 0.3 < λ_2 < 0.5.
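A hedged Python sketch of this high-level stage follows. The k-means++ clustering and the fusion S_2(i) = λ_2·S_b(i) + (1−λ_2)·S_f(i) are taken from the text (the patent's simulation uses K = 36; a smaller K is used here only for the toy example); the similarity kernel B, the looseness measure sc(j) and the seed-based saliency values are assumptions, since the corresponding formulas are shown only as figures.

```python
# Sketch of step (6): cluster high-level features with k-means++, rate each
# cluster's foreground probability by spatial compactness, then fuse the
# foreground- and background-seeded saliency maps.
import numpy as np
from sklearn.cluster import KMeans

def high_level_saliency(S1, centers, K=6, lam2=0.4, sigma2=0.1):
    """S1: (N,) high-level feature (step-5 saliency); centers: (N,2) superpixel centres."""
    km = KMeans(n_clusters=K, init='k-means++', n_init=10, random_state=0).fit(S1.reshape(-1, 1))
    labels, cc = km.labels_, km.cluster_centers_.ravel()
    B = np.exp(-(S1[:, None] - cc[None, :]) ** 2 / sigma2)   # assumed similarity b_ir

    sc = np.empty(K)                                         # assumed looseness sc(j)
    for j in range(K):
        pts = centers[labels == j]
        sc[j] = np.linalg.norm(pts - pts.mean(axis=0), axis=1).mean()
    sc = (sc - sc.min()) / (sc.max() - sc.min() + 1e-12)
    po = 1.0 - sc                                            # foreground probability po(j)
    fg = np.where(po >= po.mean())[0]                        # foreground seed classes FG
    bg = np.where(po < po.mean())[0]                         # background seed classes BG

    Sf = B[:, fg].mean(axis=1)                               # assumed foreground-seed saliency
    Sb = 1.0 - B[:, bg].mean(axis=1)                         # assumed background-seed saliency
    Sf = (Sf - Sf.min()) / (Sf.max() - Sf.min() + 1e-12)
    Sb = (Sb - Sb.min()) / (Sb.max() - Sb.min() + 1e-12)
    return lam2 * Sb + (1 - lam2) * Sf                       # S_2(i), step (6.4)

# toy usage
rng = np.random.default_rng(1)
S2 = high_level_saliency(rng.random(50), rng.uniform(0, 100, (50, 2)))
```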
Optionally, step (7) comprises:
fusing S_1(i) and S_2(i) according to the formula S(i) = λ_3 · S_1(i) + (1 − λ_3) · S_2(i) to obtain the final saliency value S(i) of each superpixel, where 0.3 < λ_3 < 0.5; the saliency value of each superpixel is then assigned to all the pixels it contains to obtain the output saliency map I.
Optionally, step (2) comprises:
(2.1) using the CIELAB color mean c = [l, a, b]^T of the pixels contained in a superpixel to represent its color feature, and a 59-dimensional vector t formed from the uniform ('equivalent') patterns of the local binary pattern to represent its texture feature; the color feature distance Dc_ij and texture feature distance Dt_ij between superpixels v_i and v_j are computed with the Euclidean distance and the chi-square distance, respectively, as follows:
Dc_ij = ||c_i − c_j||
where c_i, c_j denote the color features of superpixels v_i, v_j, and t_i, t_j denote their texture features, i, j ∈ {1, 2, ..., N};
(2.2) linearly fusing Dc_ij and Dt_ij according to the formula Du_ij = λ_1 · Dc_ij + (1 − λ_1) · Dt_ij to obtain Du_ij, where 0.6 < λ_1 < 0.8; and computing the similarity matrix A = [a_ij]_{N×N} formed by the similarities between superpixels, the similarity between superpixels v_i and v_j being calculated as follows:
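A short sketch of step (2) is given below. The Euclidean color distance, the chi-square texture distance and the linear fusion into Du_ij follow the text; the final similarity a_ij, whose formula is not reproduced here, is assumed to be a Gaussian kernel on Du_ij, and all function and parameter names are illustrative.

```python
# Sketch of step (2): per-superpixel CIELAB colour means, 59-bin LBP texture
# histograms, Euclidean / chi-square distances, their linear fusion into Du_ij,
# and an assumed Gaussian-kernel similarity a_ij.
import numpy as np

def pairwise_distances(color, texture, lam1=0.7, sigma2=0.1):
    """color: (N,3) CIELAB means; texture: (N,59) normalised LBP histograms."""
    Dc = np.linalg.norm(color[:, None, :] - color[None, :, :], axis=2)   # Euclidean, (2.1)
    num = (texture[:, None, :] - texture[None, :, :]) ** 2
    den = texture[:, None, :] + texture[None, :, :] + 1e-12
    Dt = 0.5 * (num / den).sum(axis=2)                                   # chi-square, (2.1)
    Dc = Dc / (Dc.max() + 1e-12)
    Dt = Dt / (Dt.max() + 1e-12)
    Du = lam1 * Dc + (1 - lam1) * Dt                                     # fusion, (2.2)
    A = np.exp(-Du / sigma2)                                             # assumed a_ij
    return Du, A

# toy usage
rng = np.random.default_rng(2)
hist = rng.random((10, 59)); hist /= hist.sum(axis=1, keepdims=True)
Du, A = pairwise_distances(rng.uniform(0, 100, (10, 3)), hist)
```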
Optionally, the nodes around the input image that are most likely to belong to the background in step (3) are obtained as follows:
(a) detecting the outline of an object in an input image by using a canny edge detection operator;
(b) for each superpixel v_i located on one of the four image borders, calculating the probability PB_i = Q_i / B_i that it belongs to a salient object, where Q_i denotes the number of edge pixels of v_i that lie on the object contour and B_i denotes the total number of pixels on the edge of v_i;
(c) an adaptive threshold is obtained for PB with Otsu's method, and the superpixels whose PB is smaller than the adaptive threshold are regarded as background and connected to one another.
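A hedged scikit-image sketch of this border-background selection follows: Canny contours, the ratio PB_i of contour pixels on each border superpixel's edge, and an Otsu threshold on PB. Treating a superpixel's "edge" simply as its pixels lying on the image border is a simplifying assumption, as are the helper name and parameter values.

```python
# Sketch of selecting the likely-background border superpixels of step (3).
import numpy as np
from skimage.feature import canny
from skimage.filters import threshold_otsu

def background_border_superpixels(gray, labels):
    """gray: (H,W) float image in [0,1]; labels: (H,W) integer superpixel map."""
    contour = canny(gray, sigma=2.0)                   # (a) object contours
    border = np.zeros_like(labels, dtype=bool)
    border[0, :] = border[-1, :] = border[:, 0] = border[:, -1] = True
    pb = {}
    for lab in np.unique(labels[border]):              # superpixels touching the border
        mask = (labels == lab) & border
        pb[lab] = contour[mask].sum() / mask.sum()     # (b) PB_i = Q_i / B_i
    th = threshold_otsu(np.array(list(pb.values())))   # (c) adaptive threshold
    return [lab for lab, p in pb.items() if p < th]    # likely-background superpixels

# toy usage: 4x4 grid of superpixels on a random image
rng = np.random.default_rng(3)
gray = rng.random((64, 64))
labels = (np.arange(64)[:, None] // 16) * 4 + np.arange(64)[None, :] // 16
bg_nodes = background_border_superpixels(gray, labels)
```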
Optionally, in the step (1), the input image is segmented into N superpixels by using simple linear iterative clustering.
Optionally, the method is implemented on an Intel Core i5-7300HQ processor with a clock frequency of 2.50 GHz and 8 GB of memory, in a MATLAB 2017a environment.
The invention also provides the application of the method in the fields of monitoring equipment, satellite images and medical images.
The invention has the beneficial effects that:
(1) when extracting the bottom-level features of the image, the method considers not only color features but also texture features, which is effective for the complex-texture images that occur in nature and addresses the problem that existing detection methods detect salient objects incompletely or fail to highlight their interiors uniformly in complex environments; in addition, extracting high-level features from multiple image characteristics and prior knowledge for the subsequent saliency computation improves the accuracy of image saliency detection.
(2) when the graph is constructed with the superpixels as nodes, each node is connected to its adjacent nodes, to the most similar node among the nodes sharing a common edge with its adjacent nodes, and finally to the image-border nodes most likely to belong to the background, which are connected to one another. This superpixel connection scheme better reflects the relations between image regions, so that the whole salient object can be completely detected and uniformly segmented.
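A minimal sketch of this connection scheme is shown below, assuming a precomputed boolean adjacency matrix, a similarity matrix A and a list of background border node indices (e.g. from the Otsu step above); the function name and input layout are illustrative, not part of the patent.

```python
# Improved graph construction: neighbours, the single most similar
# neighbour-of-a-neighbour, and mutually connected background border nodes.
import numpy as np

def build_edges(adjacency, A, background_nodes):
    N = adjacency.shape[0]
    edges = adjacency.copy()
    np.fill_diagonal(edges, False)
    for i in range(N):
        # nodes sharing a common edge with i's neighbours (two-hop neighbourhood)
        two_hop = adjacency[adjacency[i]].any(axis=0) & ~adjacency[i]
        two_hop[i] = False
        if two_hop.any():
            sim = np.where(two_hop, A[i], -np.inf)
            j = int(np.argmax(sim))                  # most similar two-hop node
            edges[i, j] = edges[j, i] = True
    for a in background_nodes:                       # connect background border nodes
        for b in background_nodes:
            if a != b:
                edges[a, b] = True
    return edges

# toy usage: a 3x3 grid of superpixels
grid = np.arange(9).reshape(3, 3)
adj = np.zeros((9, 9), bool)
for r in range(3):
    for c in range(3):
        for dr, dc in ((0, 1), (1, 0)):
            if r + dr < 3 and c + dc < 3:
                a, b = grid[r, c], grid[r + dr, c + dc]
                adj[a, b] = adj[b, a] = True
rng = np.random.default_rng(4)
A = rng.random((9, 9)); A = (A + A.T) / 2
edges = build_edges(adj, A, background_nodes=[0, 2, 6, 8])
```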
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is an overall flow diagram of the method of the present invention.
Fig. 2 is a schematic diagram of boundary superpixel background extraction.
Fig. 3 is a graph comparing the visual effect of the present invention and the comparison algorithm.
Fig. 4 is a comparison of evaluation indexes of the comparison algorithms and the invention on the ASD data set; Figs. 4(a), (b) and (c) are, in turn, the PR curves, the F-measure curves, and the P, R, F bar graphs under an adaptive threshold.
Fig. 5 is a comparison of evaluation indexes of the comparison algorithms and the invention on the ECSSD data set; Figs. 5(a), (b) and (c) are, in turn, the PR curves, the F-measure curves, and the P, R, F bar graphs under an adaptive threshold.
Fig. 6 is a comparison of evaluation indexes of the comparison algorithms and the invention on the PASCAL-S data set; Figs. 6(a), (b) and (c) are, in turn, the PR curves, the F-measure curves, and the P, R, F bar graphs under an adaptive threshold.
Fig. 7 is a comparison of evaluation indexes of the comparison algorithms and the invention on the MSRA-B data set; Figs. 7(a), (b) and (c) are, in turn, the PR curves, the F-measure curves, and the P, R, F bar graphs under an adaptive threshold.
Fig. 8 is a comparison of evaluation indexes of the comparison algorithms and the invention on the MSRA-10K data set; Figs. 8(a), (b) and (c) are, in turn, the PR curves, the F-measure curves, and the P, R, F bar graphs under an adaptive threshold.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
Introduction of basic theory
1. SLIC (simple linear iterative clustering)
The SLIC algorithm generates compact and approximately uniform superpixels, performs well overall in terms of running speed, preservation of object contours and superpixel shape, and matches the segmentation effect people expect. The algorithm converts the input color image into 5-dimensional feature vectors (l, a, b, x, y) combining the CIELAB color space and XY coordinates, constructs a distance measure on these 5-dimensional vectors, and performs local clustering of the image pixels. The specific steps are as follows:
(1) seed points are distributed uniformly in the image according to the set number of superpixels: assuming the picture has N pixels in total and is pre-divided into K superpixels of equal size, each superpixel has size N/K and the distance (step length) between adjacent seed points is approximately S = sqrt(N/K).
(2) gradient values of all pixels in the 3 × 3 neighborhood of each seed point are computed, and the seed point is moved to the position with the smallest gradient in that neighborhood, to prevent it from falling on a contour boundary with a large gradient and degrading the subsequent clustering.
(3) a class label (i.e., which cluster center it belongs to) is assigned to each pixel within the 2S × 2S neighborhood of each seed point.
(4) for each searched pixel, its distance to the seed point is computed as
D = sqrt( d_c² + (d_s / S)² · m² ),
where d_c is the color distance, d_s is the spatial distance, and m and S balance the proportion of the two distances, with m = 10. Since each pixel may be searched by several seed points, it has a distance to each of the surrounding seed points, and the seed point giving the minimum distance is taken as its cluster center.
(5) The above steps are iterated 10 times. Partially isolated superpixels may be obtained, which are assigned to the nearest superpixel to enhance connectivity.
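A minimal example of obtaining SLIC superpixels with scikit-image is given below; the library call is real, but the choice of n_segments = 300 (matching the simulation settings later in this text), the compactness value and the sample image are illustrative assumptions.

```python
# SLIC superpixel segmentation with scikit-image.
from skimage.segmentation import slic
from skimage.data import astronaut

image = astronaut()                       # any RGB image
labels = slic(image, n_segments=300, compactness=10, start_label=0)
print(labels.shape, labels.max() + 1)     # label map and actual number of superpixels
```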
2. Local Binary Pattern (LBP) texture features
The original LBP operator is defined in a 3 × 3 window: the center pixel of the window is used as a threshold and compared with the gray values of its 8 neighboring pixels; if a neighboring pixel value is larger than the center value it is marked as 1, otherwise 0. This yields an 8-bit binary number which, converted to decimal, lies between 0 and 255, giving 256 possible values; this value is used as the LBP value of the window's center pixel and reflects the texture information of the 3 × 3 region. Because the LBP operator has too many binary patterns, which is inconvenient for computation, the invention adopts the "uniform pattern" ("equivalent pattern") proposed by Ojala to reduce the number of pattern types of the LBP operator.
Ojala observed that in real images most LBP patterns contain at most two transitions from 1 to 0 or from 0 to 1. The "uniform pattern" is therefore defined as follows: when the circular binary number corresponding to an LBP contains at most two 0-to-1 or 1-to-0 transitions, that binary number is called a uniform (equivalent) pattern class. For example, 00000000 (0 transitions) and 10001111 (two transitions, first from 1 to 0 and then from 0 to 1) are uniform pattern classes. Patterns other than the uniform classes fall into a single additional class, called the mixed (non-uniform) class, e.g. 10010111 (four transitions in total). With this improvement the number of binary pattern types is greatly reduced without losing information. For 8 sampling points in the 3 × 3 neighborhood, the binary patterns comprise 58 uniform classes plus one mixed class, so the histogram is reduced from the original 256 dimensions to 59 dimensions. This lowers the dimensionality of the feature vector and reduces the influence of high-frequency noise.
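A short example of computing the 59-bin uniform-pattern LBP histogram described above is sketched below using scikit-image's rotation-variant uniform LBP ("nri_uniform"), which maps the 58 uniform patterns plus the mixed class to the values 0..58; the sample image and bin handling are illustrative.

```python
# 59-dimensional uniform-pattern LBP texture descriptor.
import numpy as np
from skimage.feature import local_binary_pattern
from skimage.data import camera

gray = camera()
lbp = local_binary_pattern(gray, P=8, R=1, method='nri_uniform')
hist, _ = np.histogram(lbp, bins=59, range=(0, 59), density=True)
print(hist.shape)   # (59,) texture vector t for a region (here the whole image)
```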
3. Manifold ranking algorithm
In image saliency detection, the manifold ranking algorithm determines the saliency value of each superpixel by label propagation over the image graph, and can also be used to propagate superpixel saliency. The algorithm is as follows: given a data set X = {x_1, ..., x_l, x_{l+1}, ..., x_n} ∈ R^{m×n}, some of the data are marked as query nodes and the remaining nodes are ranked according to their relevance to the query nodes. Let f: X → R^n define a ranking function that assigns to each node x_i a ranking value f_i, so that f can be regarded as a vector f = [f_1, f_2, ..., f_n]^T. Let y = [y_1, y_2, ..., y_n]^T be the indication vector, with y_i = 1 when x_i is a query node and y_i = 0 otherwise. A graph G = (V, E) is then constructed, the weight of each edge is computed to obtain the weight matrix W = [ω_ij]_{n×n}, and the degree matrix is D = diag{d_11, d_22, ..., d_nn}, where d_ii = Σ_j ω_ij. The optimal ranking result is obtained by minimizing the energy function of the following formula:
f* = argmin_f (1/2) ( Σ_{i,j=1}^{n} ω_ij ‖ f_i/√(d_ii) − f_j/√(d_jj) ‖² + μ Σ_{i=1}^{n} ‖ f_i − y_i ‖² ),
where μ balances the smoothness term (first term on the right) and the fitting term (second term on the right). The smoothness term means that neighboring nodes should not differ too much, and the fitting term means that the ranking result should not deviate too far from the initial values.
Solving the above equation to obtain:
f* = (I − αS)^{−1} y
where I is the identity matrix, α = 1/(1 + μ), and S = D^{−1/2} W D^{−1/2} is the normalized Laplacian matrix. Using the unnormalized Laplacian matrix gives another form of the solution, which often performs better:
f* = (D − αW)^{−1} y
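A small NumPy sketch of this closed-form ranking is given below: given a weight matrix W and an indicator vector y, it returns f* = (D − αW)^{-1} y. The function name and the toy graph are illustrative only.

```python
# Closed-form manifold ranking with the unnormalized Laplacian form.
import numpy as np

def manifold_ranking(W, y, alpha=0.99):
    """W: (n,n) symmetric non-negative weight matrix; y: (n,) indicator vector."""
    D = np.diag(W.sum(axis=1))
    return np.linalg.solve(D - alpha * W, y)   # f* = (D - alpha*W)^{-1} y

# toy usage: two query nodes on a small random graph
rng = np.random.default_rng(5)
W = rng.random((6, 6)); W = (W + W.T) / 2; np.fill_diagonal(W, 0)
y = np.array([1, 0, 0, 1, 0, 0], dtype=float)
f = manifold_ranking(W, y)
```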
the first embodiment is as follows:
the embodiment provides an image saliency detection method based on an improved graph model, and referring to fig. 1, the method includes the following steps:
Step 1: the input image is segmented into N superpixels by simple linear iterative clustering, the i-th superpixel being denoted v_i and the j-th v_j, i, j ∈ {1, 2, ..., N}.
Step 2: the similarity between superpixels is computed from the bottom-level features of the image to form a similarity matrix A = [a_ij]_{N×N}, where a_ij represents the similarity between superpixels v_i and v_j.
(2.1) the CIELAB color mean c = [l, a, b]^T of the pixels contained in a superpixel is used to represent its color feature, and a 59-dimensional vector t formed from the uniform ('equivalent') patterns of the local binary pattern (LBP) represents its texture feature; the color feature distance Dc_ij and texture feature distance Dt_ij between superpixels v_i and v_j are computed with the Euclidean distance and the chi-square distance, respectively, as follows:
Dc_ij = ||c_i − c_j||
where c_i, c_j denote the color features of superpixels v_i, v_j, and t_i, t_j denote their texture features, i, j ∈ {1, 2, ..., N};
(2.2) Dc_ij and Dt_ij are linearly fused according to the formula Du_ij = λ_1 · Dc_ij + (1 − λ_1) · Dt_ij to obtain Du_ij, where 0.6 < λ_1 < 0.8; the similarity matrix A = [a_ij]_{N×N} formed by the similarities between superpixels is then computed, the similarity between superpixels v_i and v_j being calculated as follows:
Step 3: an undirected graph G = (V, E) is constructed with the superpixels as nodes:
(3.1) each node is connected to its neighboring nodes and to the most similar node among the nodes sharing a common edge with its neighboring nodes.
(3.2) referring to Fig. 2, the boundary superpixels most likely to belong to the background are extracted and connected to one another:
(a) detecting the outline of an object in the image by using a canny edge detection operator;
(b) for each superpixel v_i located on one of the four image borders, the probability PB_i = Q_i / B_i that it belongs to a salient object is calculated, where Q_i denotes the number of edge pixels of v_i that lie on the object contour and B_i denotes the total number of pixels on the edge of v_i;
(c) an adaptive threshold is obtained for PB with Otsu's method (OTSU), and the superpixels whose PB is smaller than the adaptive threshold are regarded as background and connected to one another.
Step 4: the weight matrix W_1 = [ω_ij]_{N×N} and the degree matrix D_1 of the undirected graph are computed, and the similarity matrix A is propagated by manifold ranking to obtain a new similarity matrix H = [h_ij]_{N×N}, where
ω_ij = exp(−Du_ij / σ²) if e_ij ∈ E, and ω_ij = 0 otherwise,
D_1 = diag{d_11, d_22, ..., d_NN}, d_ii = Σ_j ω_ij; here Du_ij is the bottom-level feature distance between superpixels v_i and v_j, σ² is a parameter controlling the weight, and e_ij denotes the edge connecting superpixels v_i and v_j in the graph.
Step 5: high-level features are extracted using prior knowledge and the characteristics of salient regions, and a saliency map based on the bottom-level features is obtained at the same time.
(5.1) the spatial variance sv(i) of superpixel v_i and the spatial distance sd(i) between v_i and the image center are calculated:
where q_j = [q_j^x, q_j^y] denotes the coordinates of the center of v_j, n_j is the number of pixels contained in superpixel v_j, and p denotes the coordinates of the image center.
μ_i = [μ_i^x, μ_i^y] is the spatial mean of superpixel v_i; μ_i^x and μ_i^y are calculated as follows:
(5.2) sv(i) and sd(i) are fused according to the formula S_com(i) = 1 − (sv(i) + sd(i)) to obtain the coarse saliency value S_com(i) of superpixel v_i; the saliency value of each superpixel is assigned to all the pixels it contains to obtain a coarse saliency map I_0;
(5.3) the bottom-level feature distance ld(i) between superpixel v_i and its adjacent superpixels is calculated, together with the center-distribution measure dm(i) of v_i with respect to I_0:
where N_i denotes the set of superpixels adjacent to v_i, Du_ij is the bottom-level feature distance between superpixels v_i and v_j, μ_s is the center of I_0, and m_ij is the similarity between superpixels v_i and v_j computed from their spatial distance, as follows:
(5.4) ld(i) and dm(i) are fused to obtain S_lc(i); after propagation by manifold ranking, S_lc(i) is denoted S_con(i);
(5.5) S_com(i) and S_con(i) are fused according to S_1(i) = S_com(i) + S_con(i) to obtain the saliency value of each superpixel, denoted S_1(i), which also serves as the high-level feature of each superpixel for subsequent calculation;
and 6, calculating the image significance by utilizing the high-level features:
(6.1) the N superpixels are divided into K classes with the k-means++ algorithm according to the high-level features; C = [c_1, c_2, ..., c_K] denotes the K cluster centers, and the similarity between each superpixel and each class is computed to form a similarity matrix B = [b_ir]_{N×K}, where
(6.2) the weight matrix W_2 and the degree matrix D_2 of the graph are recalculated; X = [x_ij]_{N×K} denotes the matrix obtained by propagating the similarity matrix B through manifold ranking, where x_ij represents the similarity between superpixel v_i and class j;
(6.3) the looseness sc(j) of each class is calculated and from it the probability po(j) = 1 − sc(j) that the class belongs to the foreground; the mean of po is used as a threshold to divide the classes into foreground seeds and background seeds, the set of foreground seeds being denoted FG and the set of background seeds BG; sc(j) is calculated as follows:
where μ_j denotes the spatial mean of class j;
(6.4) the saliency value S_f(i) of each superpixel based on the foreground seeds and the saliency value S_b(i) based on the background seeds are calculated; after propagation by manifold ranking, they are fused by the formula S_2(i) = λ_2 · S_b(i) + (1 − λ_2) · S_f(i), where 0.3 < λ_2 < 0.5.
Step 7: S_1(i) and S_2(i) are fused according to S(i) = λ_3 · S_1(i) + (1 − λ_3) · S_2(i) to obtain the final saliency value S(i) of each superpixel, where 0.3 < λ_3 < 0.5, and the saliency value of each superpixel is assigned to all the pixels it contains to obtain the output saliency map I.
The effects of the present invention can be further illustrated by the following simulation experiments.
1. Simulation conditions and parameters
The experiments use an Intel Core i5-7300HQ processor with a clock frequency of 2.50 GHz and 8 GB of memory, in a MATLAB 2017a environment.
During the experiments, the number N of superpixels obtained by segmenting the input image is set to 300; the parameter σ² involved in computing the weight matrices is set to 0.1; α in the manifold ranking formula is set to 0.99; the coefficient λ_1 in the bottom-level feature distance fusion formula is set to 0.7; the coefficient λ_2 in the fusion formula for the foreground- and background-seed saliency values is set to 0.4; the coefficient λ_3 in the fusion formula for the final saliency value is set to 0.4; the number of classes K into which the N superpixels are divided by k-means++ clustering is set to 36; and the adaptive threshold used in the evaluation index computation is set to 2 times the mean saliency value of the image pixels.
2. Simulation content and result analysis
To verify the effectiveness of the invention (GMF), comparative experiments were carried out on 5 public datasets: ASD, ECSSD, PASCAL-S, MSRA-B and MSRA10K (see Borji A, Cheng M M, Jiang H, et al. Salient object detection: A benchmark [J]. IEEE Transactions on Image Processing, 2015, 24(12): 5706-).
Qualitative analysis and quantitative analysis are both adopted in the comparison. Qualitative analysis directly compares the visual quality of the saliency maps, while quantitative analysis judges the performance of the algorithms by computing evaluation indexes.
The indexes adopted in the experiments are the precision-recall (PR) curve, the F-measure, and the mean absolute error (MAE).
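A hedged sketch of these three indexes for a single image is given below: precision/recall over a sweep of thresholds (the PR curve), the F-measure at the adaptive threshold of 2 times the mean saliency (as stated above), and the MAE. The choice β² = 0.3 is the convention commonly used in the saliency literature and is an assumption here, not a value quoted by the patent.

```python
# PR curve, adaptive-threshold F-measure, and MAE for one saliency map.
import numpy as np

def evaluate(saliency, gt, beta2=0.3):
    """saliency: (H,W) map in [0,1]; gt: (H,W) binary ground truth."""
    gt = gt.astype(bool)
    precisions, recalls = [], []
    for th in np.linspace(0, 1, 21):                    # PR curve samples
        pred = saliency >= th
        tp = (pred & gt).sum()
        precisions.append(tp / (pred.sum() + 1e-12))
        recalls.append(tp / (gt.sum() + 1e-12))
    pred = saliency >= min(2 * saliency.mean(), 1.0)    # adaptive threshold
    tp = (pred & gt).sum()
    p = tp / (pred.sum() + 1e-12)
    r = tp / (gt.sum() + 1e-12)
    f = (1 + beta2) * p * r / (beta2 * p + r + 1e-12)   # F-measure, assumed beta2
    mae = np.abs(saliency - gt).mean()                  # mean absolute error
    return np.array(precisions), np.array(recalls), f, mae

# toy usage
rng = np.random.default_rng(6)
P, R, F, MAE = evaluate(rng.random((32, 32)), rng.random((32, 32)) > 0.7)
```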
Fig. 3 shows the visual comparison between the invention and the selected comparison algorithms. It can be seen that GMF detects salient objects located at the image edges better than the comparison algorithms and can uniformly highlight the interiors of the salient objects.
Fig. 4 shows the comparison of evaluation indexes on the ASD data set. The PR curves of the algorithms almost completely coincide at the highest point and the precision exceeds 95%, while the F-measure curve of GMF is slightly higher than those of the comparison algorithms. This is mainly because the composition of the ASD dataset is very simple, so the PR curve does not differentiate the algorithms much on this dataset.
Fig. 5 shows the comparison of evaluation indexes on the ECSSD data set. The PR curve of GMF is higher than those of most comparison algorithms and coincides with only a few of them at the highest point, while the F-measure curve is clearly higher than those of the comparison algorithms, which shows that the overall performance of the invention improves markedly on this dataset.
Fig. 6 shows the comparison of evaluation indexes on the PASCAL-S data set. Both the PR curve and the F-measure curve of the invention are higher than those of the comparison algorithms, but they are generally lower than the curves of the same algorithms on other datasets, and the highest precision of each algorithm is only about 80%. This indicates that the invention also improves on this dataset, and reflects that the dataset is harder and more challenging than the others.
Figs. 7 and 8 compare the evaluation indexes on the MSRA-B and MSRA-10K datasets; the GRCC algorithm is not applicable to some images in the MSRA10K dataset and is therefore not evaluated on it. The F-measure curves show that the overall performance of the method improves on both datasets; the PR curves and F values of the algorithms on these two datasets are slightly lower than on the ASD dataset, because the two datasets have similarly simple composition but contain many more images.
Table 1 gives the mean absolute error (MAE) of all algorithms on the 5 datasets; on every dataset the MAE of GMF is the smallest among all algorithms. Comparing across datasets, all algorithms perform best on the ASD dataset, where the MAE of the invention is only 0.05 and the saliency maps are very close to the ground-truth maps. The image composition of MSRA-B and MSRA10K is similar to ASD, but the datasets are much larger, so the MAE is slightly higher than on ASD. The PR curves and F values on the ECSSD and PASCAL-S datasets are clearly lower than on the other 3 datasets, the highest precision of each algorithm on ECSSD does not exceed 90%, and the MAE of the comparison algorithms on these two datasets, especially PASCAL-S, is clearly higher than on the other datasets.
Table 1 mean absolute error MAE of different algorithms over 5 data sets
The analysis shows that the method can completely detect salient objects near the image edges and uniformly highlight their interiors. Its saliency detection accuracy is superior to that of all comparison algorithms, the detection error is clearly reduced, and the overall performance is markedly improved.
Some steps in the embodiments of the present invention may be implemented by software, and the corresponding software program may be stored in a readable storage medium, such as an optical disc or a hard disk.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.
Claims (10)
1. An image saliency detection method based on an improved graph model, characterized in that the method performs superpixel segmentation on an input image, represents the input image as an undirected graph with the superpixels as nodes, and computes the weight matrix of the graph from bottom-level features to obtain a saliency map based on the bottom-level features; high-level features are extracted at the same time, the weight matrix is recalculated from the high-level features, foreground seed nodes and background seed nodes of the input image are extracted using the compactness of the salient object, and saliency maps based on the foreground seed nodes and the background seed nodes are computed separately and fused into a saliency map based on the high-level features; finally, the saliency map based on the bottom-level features and the saliency map based on the high-level features are fused to obtain the final saliency map, the bottom-level features comprising color features and texture features; representing the input image as an undirected graph with the superpixels as nodes comprises: connecting each node with its adjacent nodes, connecting each node with the most similar node among the nodes sharing a common edge with its adjacent nodes, and finally connecting to one another the nodes around the border of the input image that are most likely to belong to the background, so as to form the undirected graph;
the method comprises the following steps:
(1) segmenting the input image into N superpixels, where the i-th superpixel is denoted by v_i and the j-th superpixel by v_j, i, j ∈ {1, 2, ..., N};
(2) computing the similarity between superpixels from the bottom-level features of the input image to form a similarity matrix A = [a_ij]_{N×N}, where a_ij represents the similarity between superpixels v_i and v_j;
(3) constructing an undirected graph with the superpixels as nodes: each node is connected to its adjacent nodes and to the most similar node among the nodes sharing a common edge with its adjacent nodes, and finally the nodes around the border of the input image that are most likely to belong to the background are connected to one another;
(4) computing the weight matrix W_1 = [ω_ij]_{N×N} and the degree matrix D_1 of the undirected graph, and propagating the similarity matrix A by manifold ranking to obtain a new similarity matrix H = [h_ij]_{N×N}, where h_ij represents the similarity between superpixels v_i and v_j, and
ω_ij = exp(−Du_ij / σ²) if e_ij ∈ E, and ω_ij = 0 otherwise,
D_1 = diag{d_11, d_22, ..., d_NN}, d_ii = Σ_j ω_ij; here Du_ij is the bottom-level feature distance between superpixels v_i and v_j, σ² is a parameter controlling the weight, and e_ij denotes the edge connecting superpixels v_i and v_j in the graph;
(5) extracting high-level features and simultaneously obtaining a saliency map based on the bottom-level features;
(6) recalculating the weight matrix according to the high-level features, extracting foreground and background seed nodes of the input image using the compactness of the salient object, computing saliency maps based on the foreground seed nodes and the background seed nodes separately, and fusing them into a saliency map based on the high-level features;
(7) fusing the saliency map based on the bottom-level features and the saliency map based on the high-level features to obtain the final saliency map.
2. The method of claim 1, wherein step (5) comprises:
(5.1) calculating the spatial variance sv(i) of superpixel v_i and the spatial distance sd(i) between v_i and the image center:
where q_j = [q_j^x, q_j^y] denotes the coordinates of the center of v_j, n_j is the number of pixels contained in superpixel v_j, and p denotes the coordinates of the image center;
μ_i = [μ_i^x, μ_i^y] is the spatial mean of superpixel v_i; μ_i^x and μ_i^y are calculated as follows:
(5.2) fusing sv(i) and sd(i) according to the formula S_com(i) = 1 − (sv(i) + sd(i)) to obtain the coarse saliency value S_com(i) of superpixel v_i; the saliency value of each superpixel is assigned to all the pixels it contains to obtain a coarse saliency map I_0;
(5.3) calculating the bottom-level feature distance ld(i) between superpixel v_i and its adjacent superpixels, and calculating the center-distribution measure dm(i) of v_i with respect to I_0:
where N_i denotes the set of superpixels adjacent to v_i, Du_ij is the bottom-level feature distance between superpixels v_i and v_j, μ_s is the center of I_0, and m_ij is the similarity between superpixels v_i and v_j computed from their spatial distance, as follows:
(5.4) fusing ld(i) and dm(i) to obtain S_lc(i); after propagation by manifold ranking, S_lc(i) is denoted S_con(i);
(5.5) fusing S_com(i) and S_con(i) according to S_1(i) = S_com(i) + S_con(i) to obtain the saliency value of each superpixel, denoted S_1(i), which also serves as the high-level feature of each superpixel for subsequent calculations.
3. The method of claim 2, wherein step (6) comprises:
(6.1) dividing the N superpixels into K classes with the k-means++ algorithm according to the high-level features; letting C = [c_1, c_2, ..., c_K] denote the K cluster centers, and computing the similarity between each superpixel and each class to form a similarity matrix B = [b_ir]_{N×K}, where
(6.2) recalculating the weight matrix W_2 and the degree matrix D_2 of the undirected graph; X = [x_ij]_{N×K} denotes the matrix obtained by propagating the similarity matrix B through manifold ranking, where x_ij represents the similarity between superpixel v_i and class j;
(6.3) calculating the looseness sc(j) of each class and from it the probability po(j) = 1 − sc(j) that the class belongs to the foreground; the mean of po is used as a threshold to divide the classes into foreground seeds and background seeds, the set of foreground seeds being denoted FG and the set of background seeds BG; sc(j) is calculated as follows:
where μ_j denotes the spatial mean of class j;
4. The method of claim 3, wherein step (7) comprises:
fusing S_1(i) and S_2(i) according to the formula S(i) = λ_3 · S_1(i) + (1 − λ_3) · S_2(i) to obtain the final saliency value S(i) of each superpixel, where 0.3 < λ_3 < 0.5, and assigning the saliency value of each superpixel to all the pixels it contains to obtain the output saliency map I.
5. The method of claim 4, wherein step (2) comprises:
(2.1) using the CIELAB color mean c = [l, a, b]^T of the pixels contained in a superpixel to represent its color feature, and a 59-dimensional vector t formed from the uniform ('equivalent') patterns of the local binary pattern to represent its texture feature; the color feature distance Dc_ij and texture feature distance Dt_ij between superpixels v_i and v_j are computed with the Euclidean distance and the chi-square distance, respectively, as follows:
Dc_ij = ‖c_i − c_j‖
where c_i, c_j denote the color features of superpixels v_i, v_j, and t_i, t_j denote their texture features, i, j ∈ {1, 2, ..., N};
(2.2) linearly fusing Dc_ij and Dt_ij according to the formula Du_ij = λ_1 · Dc_ij + (1 − λ_1) · Dt_ij to obtain Du_ij, where 0.6 < λ_1 < 0.8; and computing the similarity matrix A = [a_ij]_{N×N} formed by the similarities between superpixels, the similarity between superpixels v_i and v_j being calculated as follows:
6. the method according to claim 5, wherein the nodes with the highest probability of belonging to the background around the input image in the step (3) are obtained by adopting the following steps:
(a) detecting the outline of an object in an input image by using a canny edge detection operator;
(b) for each superpixel v_i located on one of the four image borders, calculating the probability PB_i = Q_i / B_i that it belongs to a salient object, where Q_i denotes the number of edge pixels of v_i that lie on the object contour and B_i denotes the total number of pixels on the edge of v_i;
(c) an adaptive threshold is obtained for PB with Otsu's method, and the superpixels whose PB is smaller than the adaptive threshold are regarded as background and connected to one another.
7. The method of claim 6, wherein in (1), the input image is segmented into N superpixels using simple linear iterative clustering.
8. The method according to any one of claims 1 to 7, wherein the method is implemented on an Intel Core i5-7300HQ processor with a clock frequency of 2.50 GHz and 8 GB of memory, in a MATLAB 2017a environment.
9. Use of the method according to any one of claims 1 to 7 in the field of monitoring devices, satellite images and medical imaging.
10. Use of the method of claim 8 in the field of monitoring equipment, satellite images and medical imaging.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910450367.6A CN110188763B (en) | 2019-05-28 | 2019-05-28 | Image significance detection method based on improved graph model |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910450367.6A CN110188763B (en) | 2019-05-28 | 2019-05-28 | Image significance detection method based on improved graph model |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110188763A CN110188763A (en) | 2019-08-30 |
CN110188763B true CN110188763B (en) | 2021-04-30 |
Family
ID=67718183
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910450367.6A Active CN110188763B (en) | 2019-05-28 | 2019-05-28 | Image significance detection method based on improved graph model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110188763B (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111539420B (en) * | 2020-03-12 | 2022-07-12 | 上海交通大学 | Panoramic image saliency prediction method and system based on attention perception features |
CN111583290A (en) * | 2020-06-06 | 2020-08-25 | 大连民族大学 | Cultural relic salient region extraction method based on visual saliency |
CN112907595B (en) * | 2021-05-06 | 2021-07-16 | 武汉科技大学 | Surface defect detection method and device |
CN113469976A (en) * | 2021-07-06 | 2021-10-01 | 浙江大华技术股份有限公司 | Object detection method and device and electronic equipment |
CN118212635A (en) * | 2021-07-28 | 2024-06-18 | Innovation Academy for Microsatellites of the Chinese Academy of Sciences (中国科学院微小卫星创新研究院) | Star sensor
CN117974651B (en) * | 2024-03-29 | 2024-05-28 | Shaanxi Tongshan Biotechnology Co., Ltd. (陕西彤山生物科技有限公司) | Method and device for detecting uniformity of crushed particles based on image recognition
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107016680A (en) * | 2017-02-24 | 2017-08-04 | 中国科学院合肥物质科学研究院 | A kind of insect image background minimizing technology detected based on conspicuousness |
CN107392968A (en) * | 2017-07-17 | 2017-11-24 | 杭州电子科技大学 | The image significance detection method of Fusion of Color comparison diagram and Color-spatial distribution figure |
CN108846404A (en) * | 2018-06-25 | 2018-11-20 | 安徽大学 | A kind of image significance detection method and device based on the sequence of related constraint figure |
CN109583455A (en) * | 2018-11-20 | 2019-04-05 | 黄山学院 | A kind of image significance detection method merging progressive figure sequence |
- 2019-05-28 CN CN201910450367.6A patent/CN110188763B/en active Active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107016680A (en) * | 2017-02-24 | 2017-08-04 | 中国科学院合肥物质科学研究院 | A kind of insect image background minimizing technology detected based on conspicuousness |
CN107392968A (en) * | 2017-07-17 | 2017-11-24 | 杭州电子科技大学 | The image significance detection method of Fusion of Color comparison diagram and Color-spatial distribution figure |
CN108846404A (en) * | 2018-06-25 | 2018-11-20 | 安徽大学 | A kind of image significance detection method and device based on the sequence of related constraint figure |
CN109583455A (en) * | 2018-11-20 | 2019-04-05 | 黄山学院 | A kind of image significance detection method merging progressive figure sequence |
Non-Patent Citations (1)
Title |
---|
"结合显著性检测和图割的RGBD 图像共分割算法";李晓阳;《系统仿真学报》;20180731;第30卷(第7期);第1-10 * |
Also Published As
Publication number | Publication date |
---|---|
CN110188763A (en) | 2019-08-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110188763B (en) | Image significance detection method based on improved graph model | |
CN108549891B (en) | Multi-scale diffusion well-marked target detection method based on background Yu target priori | |
CN108537239B (en) | Method for detecting image saliency target | |
Lalitha et al. | A survey on image segmentation through clustering algorithm | |
Shafiei et al. | Detection of Lung Cancer Tumor in CT Scan Images Using Novel Combination of Super Pixel and Active Contour Algorithms. | |
Makrogiannis et al. | Segmentation of color images using multiscale clustering and graph theoretic region synthesis | |
CN105528575B (en) | Sky detection method based on Context Reasoning | |
CN113592894B (en) | Image segmentation method based on boundary box and co-occurrence feature prediction | |
CN110211127B (en) | Image partition method based on bicoherence network | |
CN108629783A (en) | Image partition method, system and medium based on the search of characteristics of image density peaks | |
Feng et al. | A color image segmentation method based on region salient color and fuzzy c-means algorithm | |
CN111524140B (en) | Medical image semantic segmentation method based on CNN and random forest method | |
CN107305691A (en) | Foreground segmentation method and device based on images match | |
Wang et al. | Adaptive nonlocal random walks for image superpixel segmentation | |
CN111091129A (en) | Image salient region extraction method based on multi-color characteristic manifold sorting | |
Gaur et al. | Superpixel embedding network | |
Xiang et al. | Interactive natural image segmentation via spline regression | |
Wang et al. | Salient object detection by robust foreground and background seed selection | |
Zhang et al. | Adaptive fusion affinity graph with noise-free online low-rank representation for natural image segmentation | |
Xu et al. | The image segmentation algorithm of colorimetric sensor array based on fuzzy C-means clustering | |
CN117523271A (en) | Large-scale home textile image retrieval method, device, equipment and medium based on metric learning | |
Khotanzad et al. | Color image retrieval using multispectral random field texture model and color content features | |
CN107085725A (en) | A kind of method that image-region is clustered by the LLC based on adaptive codebook | |
Santamaria-Pang et al. | Cell segmentation and classification via unsupervised shape ranking | |
Hassan et al. | Salient object detection based on CNN fusion of two types of saliency models |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |
TR01 | Transfer of patent right | |
Effective date of registration: 2023-06-12
Address after: No. 999 Gaolang East Road, Wuxi Economic Development Zone, Jiangsu Province, 214000-8-D1-501-510
Patentee after: Wuxi shangheda Intelligent Technology Co.,Ltd.
Address before: 214000 1800 Lihu Avenue, Binhu District, Wuxi, Jiangsu
Patentee before: Jiangnan University