CN111507931B - Data processing method and device - Google Patents
- Publication number: CN111507931B (application CN201910032941A)
- Authority
- CN
- China
- Prior art keywords
- target image
- window
- window scale
- gray level
- occurrence matrix
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
- G06F18/232—Non-hierarchical techniques
- G06F18/2321—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
- G06F18/23213—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/40—Analysis of texture
- G06T7/41—Analysis of texture based on statistical description of texture
- G06T7/45—Analysis of texture based on statistical description of texture using co-occurrence matrix computation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Physics & Mathematics (AREA)
- Probability & Statistics with Applications (AREA)
- Mathematical Physics (AREA)
- Computing Systems (AREA)
- Quality & Reliability (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Image Processing (AREA)
Abstract
The present disclosure relates to a data processing method and apparatus. The method comprises the following steps: sequentially traversing a target image through a plurality of sliding windows with different window scales, and determining an N-order derivative gray level co-occurrence matrix of the target image under any window scale, wherein the N-order derivative gray level co-occurrence matrix is a gray level co-occurrence matrix determined according to the N-order derivative of the target image; determining a detection probability map of the target image under any window scale according to an N-order derivative gray level co-occurrence matrix of the target image under any window scale, wherein the detection probability map comprises the probability that any pixel point in the target image is modified; and sequentially fusing the detection probability maps of the target image under different window scales according to the window scales from large to small to determine a modified area in the target image. The present disclosure can accurately determine a modified region in a target image.
Description
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a data processing method and apparatus.
Background
With the rapid development of digital image processing technology, tampering with a digital image convincingly enough to deceive the human eye has become quite easy. In addition, professional image processing software (e.g., Photoshop) is increasingly widespread, so tampering with pictures is no longer limited to professionals, and networks are flooded with large numbers of tampered images. Image tampering spreads false information, which can have a seriously harmful social impact.
At present, image tampering techniques include a deblurring tampering mode: a blurred region in an original image is extracted, a deblurring tampering operation is performed on the extracted region, and the deblurred region is then spliced back into the original image, thereby achieving deblurring tampering of the original image. Since the deblurred tampered region and the non-tampered region come from the same image, there is currently no effective method for determining whether a deblurred tampered region exists in a target image; that is, whether the target image has been tampered with by deblurring cannot be determined. Therefore, an efficient data processing method is needed to determine modified regions (deblurred tampered regions) in a target image.
Disclosure of Invention
In view of the above, the present disclosure provides a data processing method and apparatus, so that a modified region in a target image can be accurately determined.
According to a first aspect of the present disclosure, there is provided a data processing method, including: sequentially traversing a target image through a plurality of sliding windows with different window scales, and determining an N-order derivative gray level co-occurrence matrix of the target image under any window scale, wherein the N-order derivative gray level co-occurrence matrix is a gray level co-occurrence matrix determined according to the N-order derivative of the target image; determining a detection probability map of the target image under any window scale according to an N-order derivative gray level co-occurrence matrix of the target image under any window scale, wherein the detection probability map comprises the probability that any pixel point in the target image is modified; and sequentially fusing the detection probability maps of the target image under different window scales according to the window scales from large to small to determine a modified area in the target image.
In a possible implementation manner, determining a detection probability map of the target image at any window scale according to an N-order derivative gray level co-occurrence matrix of the target image at any window scale includes: and aiming at any window scale, obtaining a decision model under the window scale obtained by utilizing image sample training, wherein the decision model is used for determining a detection probability map of the target image under the window scale according to an N-order derivative gray level co-occurrence matrix of the target image under the window scale.
In a possible implementation manner, for any window scale, obtaining a decision model under the window scale obtained by training an image sample includes: carrying out fuzzy processing on the image sample to obtain a fuzzy image; deblurring processing is carried out on the blurred image to obtain a deblurred image; sequentially traversing the deblurred image through a plurality of sliding windows with different window scales to determine an N-order derivative gray level co-occurrence matrix of the deblurred image under any window scale; aiming at any window scale, training an N-order derivative gray level co-occurrence matrix of the deblurred image under the window scale by using an LIBSVM and a radial basis function to obtain a decision model under the window scale.
In a possible implementation manner, sequentially fusing the detection probability maps of the target image under different window scales according to the window scales from large to small, and determining a modified region in the target image includes: for any window scale, clustering the detection probability graph of the target image under the window scale through a clustering algorithm, and determining a first detection result graph corresponding to the target image under the window scale, wherein the first detection result graph comprises a modified area and an unmodified area; according to a first detection result graph corresponding to the target image under the window scale, adjusting a detection probability graph of the target image under the window scale through connectivity detection and filtering operation; clustering the detection probability graph of the target image under the adjusted window scale again through the clustering algorithm, and determining a second detection result graph corresponding to the target image under the window scale, wherein the second detection result graph comprises a modified area and an unmodified area; and sequentially fusing second detection result graphs corresponding to the target image under different window scales according to the window scales from large to small, and determining a modified area in the target image.
In one possible implementation, the clustering algorithm is two-centroid K-means clustering.
In one possible implementation, the filtering operation is gaussian weighted filtering.
In one possible implementation, the plurality of sliding windows with different window sizes includes: 4 × 4 sliding windows, 8 × 8 sliding windows, 16 × 16 sliding windows, 32 × 32 sliding windows, 64 × 64 sliding windows, and 128 × 128 sliding windows.
According to a second aspect of the present disclosure, there is provided a data processing apparatus comprising: the first determining module is used for sequentially traversing a target image through a plurality of sliding windows with different window scales and determining an N-order derivative gray level co-occurrence matrix of the target image under any window scale, wherein the N-order derivative gray level co-occurrence matrix is a gray level co-occurrence matrix determined according to the N-order derivative of the target image; a second determining module, configured to determine a detection probability map of the target image in any window scale according to an N-order derivative gray level co-occurrence matrix of the target image in any window scale, where the detection probability map includes a probability that any pixel in the target image is modified; and the third determining module is used for sequentially fusing the detection probability maps of the target image under different window scales according to the window scales from large to small so as to determine the modified area in the target image.
In one possible implementation manner, the method further includes: the acquisition module is used for acquiring a decision model under the window scale, which is obtained by training an image sample, aiming at any window scale, wherein the decision model is used for determining a detection probability map of the target image under the window scale according to an N-order derivative gray level co-occurrence matrix of the target image under the window scale.
In one possible implementation manner, the obtaining module includes: the fuzzy processing submodule is used for carrying out fuzzy processing on the image sample to obtain a fuzzy image; the deblurring processing submodule is used for deblurring the blurred image to obtain a deblurred image; the first determining submodule is used for sequentially traversing the deblurred image through a plurality of sliding windows with different window scales and determining an N-order derivative gray level co-occurrence matrix of the deblurred image under any window scale; and the model training submodule is used for training the N-order derivative gray level co-occurrence matrix of the deblurred image under the window scale by using an LIBSVM and a radial basis function to obtain a decision model under the window scale.
In one possible implementation manner, the third determining module includes: the clustering submodule is used for clustering the detection probability graph of the target image under any window scale through a clustering algorithm and determining a first detection result graph corresponding to the target image under the window scale, wherein the first detection result graph comprises a modified area and an unmodified area; the adjusting submodule is used for adjusting a detection probability graph of the target image under the window scale through connectivity detection and filtering operation according to a first detection result graph corresponding to the target image under the window scale; the clustering submodule is further configured to perform clustering on the adjusted detection probability map of the target image under the window scale again through the clustering algorithm, and determine a second detection result map corresponding to the target image under the window scale, where the second detection result map includes a modified region and an unmodified region; and the second determining submodule is used for sequentially fusing second detection result graphs corresponding to the target image under different window scales according to the window scales from large to small so as to determine a modified area in the target image.
In one possible implementation, the clustering algorithm is two-centroid K-means clustering.
In one possible implementation, the filtering operation is gaussian weighted filtering.
In one possible implementation, the N-order derivative gray level co-occurrence matrix at least includes: a first derivative gray level co-occurrence matrix and a second derivative gray level co-occurrence matrix.
In one possible implementation, the plurality of sliding windows with different window sizes includes: a 4 × 4 sliding window, an 8 × 8 sliding window, a 16 × 16 sliding window, a 32 × 32 sliding window, a 64 × 64 sliding window, and a 128 × 128 sliding window.
According to a third aspect of the present disclosure, there is provided a data processing apparatus comprising: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to perform the data processing method of the first aspect.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer program instructions, wherein the computer program instructions, when executed by a processor, implement the data processing method of the first aspect described above.
In the present disclosure, a target image is sequentially traversed through a plurality of sliding windows with different window scales, and an N-order derivative gray level co-occurrence matrix of the target image is determined at each window scale, the matrix being a gray level co-occurrence matrix determined according to the N-order derivative of the target image. A detection probability map of the target image at each window scale is then determined according to the N-order derivative gray level co-occurrence matrix at that scale, the map including the probability that each pixel in the target image has been modified. Finally, the detection probability maps at different window scales are fused in order from the largest window scale to the smallest, so that the modified region in the target image can be accurately determined.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments, features, and aspects of the disclosure and, together with the description, serve to explain the principles of the disclosure.
FIG. 1 shows a flow diagram of a data processing method of an embodiment of the present disclosure;
FIG. 2 is a schematic diagram illustrating a method for determining a deblurred tampered region according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram illustrating a second detection result graph corresponding to a target image at different window scales according to an embodiment of the disclosure;
FIG. 4 is a block diagram of a data processing apparatus according to an embodiment of the present disclosure;
FIG. 5 shows a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments. As will be appreciated by those skilled in the art, the term "and/or" denotes at least one of the connected objects.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
In practical applications, an image captured by a camera, a monitor, or another imaging device may contain a blurred region. When a tamperer wants to convey false information using such an image, the tamperer performs deblurring tampering on the blurred region so that the image remains visually consistent.
For example, in an image of a moving automobile captured by a camera, the automobile may be blurred while the surrounding environment is clear, and a tamperer may want to falsify the image so that the license plate number becomes clearly visible. If the tamperer deblurs only the license plate portion, its sharpness will be inconsistent with that of the surrounding area, making the tampered picture easy to identify. In practice, therefore, a tamperer usually performs deblurring tampering on the entire blurred region of the image first and then further tampers with the license plate portion, so that the sharpness of the tampered image is consistent overall, which increases the difficulty of identifying the tampering.
With the development of image processing technology, deblurring tampering appears in more and more scenarios and is not limited to the license plate example above: a tamperer may deblur any image with a blurred region in order to convey false information. It therefore becomes increasingly important to determine the deblurred tampered regions in an image so that the image can be authenticated. The data processing method provided by the present disclosure can accurately determine the modified region in a target image, that is, the deblurred tampered region.
Fig. 1 shows a schematic flow chart of a data processing method according to an embodiment of the present disclosure. As shown in fig. 1, the method may include:
Step S11: sequentially traversing the target image through a plurality of sliding windows with different window scales, and determining an N-order derivative gray level co-occurrence matrix of the target image at each window scale, where the N-order derivative gray level co-occurrence matrix is a gray level co-occurrence matrix determined according to the N-order derivative of the target image.
Step S12: determining a detection probability map of the target image at each window scale according to the N-order derivative gray level co-occurrence matrix of the target image at that window scale, where the detection probability map includes the probability that each pixel in the target image has been modified.
Step S13: sequentially fusing the detection probability maps of the target image at different window scales, from the largest window scale to the smallest, to determine the modified region in the target image.
In one possible implementation, the target image may be modified in a manner of deblurring tampering, and the modified region in the target image may be determined as a deblurring tampered region in the target image.
In practical applications, if the target image includes a deblurred tampered region, that region is a blurred region that a tamperer extracted from the target image, subjected to a deblurring tampering operation, and spliced back into the image. Because the deblurred tampered region and the non-tampered region come from the same image, the splicing boundary of the tampered region is inconspicuous and the anomaly is difficult to perceive visually. To prevent the tampered target image from conveying false information, the deblurred tampered region in the target image needs to be determined effectively.
In one possible implementation, the plurality of sliding windows with different window sizes includes: 4 × 4 sliding windows, 8 × 8 sliding windows, 16 × 16 sliding windows, 32 × 32 sliding windows, 64 × 64 sliding windows, and 128 × 128 sliding windows.
The plurality of sliding windows with different window dimensions may include sliding windows with other window dimensions in addition to the six different window dimensions, which is not specifically limited in this disclosure.
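As an illustrative sketch only, the multi-scale traversal can be expressed as below. The disclosure does not fix a traversal stride, so a non-overlapping stride equal to the window scale is assumed here; all function and variable names are invented for the example.

```python
import numpy as np

# The six window scales listed above, ordered from large to small,
# matching the large-to-small fusion order of the method.
WINDOW_SCALES = [128, 64, 32, 16, 8, 4]

def iter_windows(image, scale, stride=None):
    """Yield (top, left, patch) for every window of the given scale.

    stride defaults to the window scale (non-overlapping tiling);
    the disclosure does not specify a stride, so this is an assumption.
    """
    stride = stride or scale
    h, w = image.shape[:2]
    for top in range(0, h - scale + 1, stride):
        for left in range(0, w - scale + 1, stride):
            yield top, left, image[top:top + scale, left:left + scale]

# Example: count windows of each scale over a 256 x 256 grayscale image.
img = np.zeros((256, 256), dtype=np.uint8)
counts = {s: sum(1 for _ in iter_windows(img, s)) for s in WINDOW_SCALES}
```

Each window would then be analyzed independently, producing one co-occurrence-matrix feature (and later one probability) per window position and scale.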
The target image is converted into a grayscale image and traversed sequentially by a plurality of sliding windows with different window scales, and the N-order derivative gray level co-occurrence matrix of the target image at each window scale is determined according to the N-order derivative of the target image.
In one possible implementation, the nth derivative gray co-occurrence matrix includes at least: a first derivative gray level co-occurrence matrix and a second derivative gray level co-occurrence matrix.
Fig. 2 is a schematic diagram illustrating a method for determining a deblurred tampered region according to an embodiment of the present disclosure. As shown in fig. 2, the target image is traversed through a 4 × 4 sliding window, an 8 × 8 sliding window, a 16 × 16 sliding window, a 32 × 32 sliding window, a 64 × 64 sliding window, and a 128 × 128 sliding window, respectively, to obtain a first derivative gray level co-occurrence matrix and a second derivative gray level co-occurrence matrix of the target image at six different window scales.
At least a first derivative gray level co-occurrence matrix and a second derivative gray level co-occurrence matrix of the target image are determined at each window scale, and the detection probability map of the target image at that window scale is determined from them; that is, the probability that each pixel in the target image has been tampered with by deblurring is determined at that window scale.
For any window scale, before determining the first derivative gray level co-occurrence matrix and the second derivative gray level co-occurrence matrix of the target image at the window scale, the first derivative and the second derivative of the target image at the window scale may be determined.
In one possible implementation, the first derivative D′(u, v) and the second derivative D″(u, v) of the target image f(x, y) are determined by convolution, according to the formulas D′(u, v) = f(x, y) × q and D″(u, v) = f(x, y) × q′, where q is a first-derivative convolution kernel and q′ is a second-derivative convolution kernel (the kernel matrices are given as figures in the original filing and are not reproduced here).
The first and second derivatives of the target image may be determined in other ways than those described above, and this disclosure is not limited thereto.
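For illustration, the sketch below computes first and second derivatives by convolution. Because the actual kernels q and q′ appear only as figures in the original filing, standard one-dimensional difference kernels stand in for them here; the kernels and function names are assumptions, not the patent's own.

```python
import numpy as np

# Placeholder kernels (assumed): under true convolution these act as a
# forward first difference and a central second difference, respectively.
Q1 = np.array([[1, -1]])      # stand-in for the first-derivative kernel q
Q2 = np.array([[1, -2, 1]])   # stand-in for the second-derivative kernel q'

def conv2d_valid(f, k):
    """Plain 2-D 'valid' convolution (kernel flipped), no external deps."""
    kf = np.flip(k)
    kh, kw = kf.shape
    h, w = f.shape
    out = np.zeros((h - kh + 1, w - kw + 1), dtype=float)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(f[i:i + kh, j:j + kw] * kf)
    return out

# f samples x^2 at x = 0, 1, 2, 3: first differences 1, 3, 5; second = 2.
f = np.array([[0., 1., 4., 9.],
              [0., 1., 4., 9.]])
d1 = conv2d_valid(f, Q1)   # first derivative D'(u, v)
d2 = conv2d_valid(f, Q2)   # second derivative D''(u, v)
```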
In one possible implementation, determining the N-order derivative gray level co-occurrence matrix of the target image at any window scale includes the following. From the first derivative D′(u, v) of the target image, the first derivative gray level co-occurrence matrix M1(m, n) is determined by

M1(m, n) = Σ(u,v) δ[D′(u, v) = m, D′(u + du, v + dv) = n],

where δ[D′(u, v) = m, D′(u + du, v + dv) = n] = 1 when D′(u, v) = m and D′(u + du, v + dv) = n, and 0 otherwise. From the second derivative D″(u, v) of the target image, the second derivative gray level co-occurrence matrix M2(m, n) is determined by

M2(m, n) = Σ(u,v) δ[D″(u, v) = m, D″(u + du, v + dv) = n],

where δ[D″(u, v) = m, D″(u + du, v + dv) = n] = 1 when D″(u, v) = m and D″(u + du, v + dv) = n, and 0 otherwise. Here du ∈ {−1, 0, 1} and dv ∈ {−1, 0, 1}.
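The co-occurrence counting defined above can be sketched directly. A single offset (du, dv) is shown, and derivative values are assumed to be already quantized to non-negative integers; the function name is an illustration, not the patent's.

```python
import numpy as np

def derivative_glcm(d, du=1, dv=1, levels=None):
    """Derivative gray level co-occurrence matrix for one offset (du, dv):
    M(m, n) = sum over (u, v) of delta[d(u, v) = m and d(u+du, v+dv) = n],
    mirroring the formula in the text. Values must be non-negative ints.
    """
    d = np.asarray(d, dtype=int)
    if levels is None:
        levels = int(d.max()) + 1
    M = np.zeros((levels, levels), dtype=int)
    h, w = d.shape
    for u in range(h):
        for v in range(w):
            uu, vv = u + du, v + dv
            if 0 <= uu < h and 0 <= vv < w:   # delta is 0 off the image
                M[d[u, v], d[uu, vv]] += 1
    return M

# Tiny example: the only in-bounds pair for (du, dv) = (1, 1) is
# (0, 0) -> (1, 1), with values 0 -> 0, so M[0, 0] = 1.
d = np.array([[0, 1],
              [1, 0]])
M = derivative_glcm(d, du=1, dv=1)
```

In the method, one such matrix would be accumulated per sliding window for each of the first and second derivative maps.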
In a possible implementation manner, determining a detection probability map of a target image under any window scale according to an N-order derivative gray level co-occurrence matrix of the target image under the window scale includes: and aiming at any window scale, obtaining a decision model under the window scale obtained by utilizing image sample training, wherein the decision model is used for determining a detection probability chart of the target image under the window scale according to an N-order derivative gray level co-occurrence matrix of the target image under the window scale.
Before determining a deblurring tampered region in a target image, performing model training on an image sample to determine a decision model under any window scale as prior knowledge, and further after determining an N-order derivative gray level co-occurrence matrix of the target image under any window scale, determining the deblurring tampered probability of any pixel point in the target image under the window scale according to the decision model under the window scale to obtain a detection probability graph of the target image under the window scale.
In a possible implementation manner, for any window scale, obtaining a decision model at that window scale by training on image samples includes: blurring the image sample to obtain a blurred image; deblurring the blurred image to obtain a deblurred image; sequentially traversing the deblurred image through a plurality of sliding windows with different window scales and determining an N-order derivative gray level co-occurrence matrix of the deblurred image at each window scale; and, for each window scale, training on the N-order derivative gray level co-occurrence matrix of the deblurred image at that window scale using a Library for Support Vector Machines (LIBSVM) with a radial basis function kernel, to obtain the decision model at that window scale.
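The decision model itself is an SVM trained with LIBSVM and an RBF kernel; reproducing that training is out of scope here. As a toy stand-in only, the sketch below shows how an RBF-kernel score over a flattened co-occurrence-matrix feature can be squashed into a per-window tampering probability. The prototypes, gamma, and the mean-score rule are invented for the example and do not represent the patent's trained model.

```python
import numpy as np

def rbf_kernel(x, y, gamma=0.5):
    """Radial basis function kernel: exp(-gamma * ||x - y||^2)."""
    return np.exp(-gamma * np.sum((x - y) ** 2))

def decision_probability(feature, pos_protos, neg_protos, gamma=0.5):
    """Toy stand-in for an RBF-SVM decision model: score a flattened
    co-occurrence-matrix feature against 'tampered' and 'untampered'
    prototypes and squash the margin into a probability (logistic).
    """
    s_pos = np.mean([rbf_kernel(feature, p, gamma) for p in pos_protos])
    s_neg = np.mean([rbf_kernel(feature, p, gamma) for p in neg_protos])
    return 1.0 / (1.0 + np.exp(-(s_pos - s_neg)))

# Invented prototypes; a feature near the 'tampered' prototype should
# receive a probability above 0.5.
pos = [np.array([1.0, 0.0])]
neg = [np.array([0.0, 1.0])]
p_tampered = decision_probability(np.array([0.9, 0.1]), pos, neg)
```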
The image samples may be derived from a Dresden image database, or may be derived from other image databases, which are not specifically limited by this disclosure. The sample volume of the image sample may be determined according to actual conditions (e.g., 1400), which is not specifically limited by the present disclosure.
The image sample is converted into a grayscale image and blurred to obtain a blurred image; the blurred image is then deblurred to obtain a deblurred image that fits the deblurring-tampering scenario and maximizes the Peak Signal-to-Noise Ratio (PSNR).
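PSNR, the criterion used to select the deblurred image, can be computed as follows; MAX = 255 is assumed here for 8-bit grayscale images.

```python
import numpy as np

def psnr(original, restored, max_val=255.0):
    """Peak signal-to-noise ratio: PSNR = 10 * log10(MAX^2 / MSE).
    Higher is better; identical images give infinity."""
    mse = np.mean((original.astype(float) - restored.astype(float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

# Example: one pixel off by 10 in an 8 x 8 patch (MSE = 100/64 = 1.5625).
a = np.full((8, 8), 100, dtype=np.uint8)
b = a.copy()
b[0, 0] = 110
score = psnr(a, b)   # roughly 46.2 dB
```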
The ways of deblurring the blurred image include, but are not limited to, the following three methods: a method based on efficient marginal likelihood optimization in blind deconvolution; a deblurring method using L0-regularized intensity and gradient priors; and a blind image motion deblurring method with L0-regularized priors.
The method for deblurring the blurred image may include other deblurring methods besides the above three deblurring methods, and the disclosure does not specifically limit this method.
Sequentially traversing the deblurred image through a plurality of sliding windows with different window scales, determining an N-order derivative gray level co-occurrence matrix of the deblurred image under any window scale, and performing model training on the N-order derivative gray level co-occurrence matrix of the deblurred image under the window scale by using LIBSVM and a radial basis function to obtain a decision model under the window scale.
In the process of determining the N-order derivative gray level co-occurrence matrix of the deblurred image at each window scale and training the model on it, a truncation threshold T and a dimension-reduction parameter n are set to balance computational performance and complexity; for example, T = 10 and n = 50. The truncation threshold T and the dimension-reduction parameter n may also be set to other values, which are not specifically limited in this disclosure.
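A minimal sketch of the truncation step, assuming that the threshold clips signed derivative values to [−T, T] and shifts them to non-negative indices so the co-occurrence matrix is (2T + 1) × (2T + 1); the disclosure does not spell out the exact quantization, and the dimension-reduction step for n = 50 is omitted here.

```python
import numpy as np

def truncate_quantize(deriv, T=10):
    """Clip derivative values to [-T, T] and shift to [0, 2T] so the
    co-occurrence matrix has a fixed (2T+1) x (2T+1) size instead of an
    unbounded one. T = 10 matches the example value in the text; the
    clip-and-shift scheme itself is an assumption."""
    d = np.clip(np.asarray(deriv, dtype=int), -T, T)
    return d + T   # non-negative indices for the co-occurrence matrix

q = truncate_quantize(np.array([-25, -10, 0, 3, 99]), T=10)
```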
After a decision model under any window scale is obtained through LIBSVM and radial basis function kernel training, a detection probability graph under the window scale can be determined according to the decision model under any window scale and an N-order derivative gray level co-occurrence matrix of a target image under the window scale.
Taking fig. 2 as an example again: a detection probability map C1 of the target image under the 4 × 4 sliding window is determined from the decision model under the 4 × 4 sliding window together with the first derivative and second derivative gray level co-occurrence matrices of the target image under the 4 × 4 sliding window. In the same way, detection probability maps C2, C3, C4, C5, and C6 of the target image are determined under the 8 × 8, 16 × 16, 32 × 32, 64 × 64, and 128 × 128 sliding windows, respectively.
The detection probability map of the target image under a large window scale is highly accurate, while the detection probability map under a small window scale locates the boundary of the deblurred tampered region accurately. Therefore, in order to determine the deblurred tampered region in the target image more precisely, the detection probability maps of the target image under different window scales can be fused in sequence from the largest window scale to the smallest.
In a possible implementation manner, sequentially fusing the detection probability maps of the target image under different window scales, from the largest window scale to the smallest, to determine the modified region in the target image includes: clustering the detection probability map of the target image under any window scale through a clustering algorithm to determine a first detection result map corresponding to the target image under that window scale, wherein the first detection result map comprises a modified region and an unmodified region; adjusting the detection probability map of the target image under that window scale through connectivity detection and a filtering operation, according to the first detection result map corresponding to the target image under that window scale; clustering the adjusted detection probability map of the target image under that window scale again through the clustering algorithm to determine a second detection result map corresponding to the target image under that window scale, wherein the second detection result map comprises a modified region and an unmodified region; and sequentially fusing the second detection result maps corresponding to the target image under the different window scales, from the largest window scale to the smallest, to determine the modified region in the target image.
In one possible implementation, the clustering algorithm is two-centroid K-means clustering.
For any window scale, the detection probability map of the target image under that window scale is clustered through two-centroid K-means clustering to obtain the first detection result map (a binary image) corresponding to the target image under that window scale. The first detection result map comprises a deblurred tampered region (the modified region) and an untampered region (the unmodified region); that is, the initial boundary of the deblurred tampered region in the target image under that window scale is determined.
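The two-centroid K-means clustering step described above can be sketched as follows. The patent does not fix the initialisation or the stopping rule, so seeding the two centroids with the minimum and maximum probabilities and the iteration cap are illustrative assumptions:

```python
import numpy as np

def two_centroid_kmeans(prob_map, n_iter=50):
    """Cluster a detection probability map into tampered / untampered pixels
    with two-centroid K-means. Initialisation from the min and max
    probabilities is an illustrative choice, not fixed by the patent."""
    x = prob_map.ravel().astype(float)
    c = np.array([x.min(), x.max()])          # two initial centroids
    for _ in range(n_iter):
        # assign each pixel to its nearest centroid (0 or 1)
        labels = (np.abs(x - c[0]) > np.abs(x - c[1])).astype(int)
        new_c = np.array([x[labels == k].mean() if np.any(labels == k) else c[k]
                          for k in (0, 1)])
        if np.allclose(new_c, c):
            break
        c = new_c
    # the cluster with the higher centroid is the modified (tampered) region
    tampered = int(np.argmax(c))
    return (labels == tampered).reshape(prob_map.shape)
```

The returned boolean mask plays the role of the first detection result map (binary image): True marks the modified region, False the unmodified region.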
In one possible implementation, the filtering operation is gaussian weighted filtering.
Still taking the foregoing fig. 2 as an example, as shown in fig. 2: to reduce the influence of noise on the determination of the deblurred tampered region, for any window scale, the detection probability map of the target image under that window scale is adjusted through connectivity detection and a Gaussian filtering operation, according to the first detection result map corresponding to the target image under that window scale. The adjusted detection probability map of the target image under that window scale is then clustered again through two-centroid K-means clustering to determine the second detection result map (a binary image) corresponding to the target image under that window scale. The second detection result map comprises the deblurred tampered region and an untampered region; that is, it determines the detected boundary of the deblurred tampered region in the target image under that window scale.
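The adjustment step above — connectivity detection followed by Gaussian-weighted filtering — can be sketched as below. The minimum component area and the Gaussian sigma are illustrative parameters; the patent does not specify their values:

```python
import numpy as np
from scipy import ndimage

def adjust_probability_map(prob_map, first_result, min_area=4, sigma=1.0):
    """Suppress isolated detections via connected-component analysis of the
    first detection result map, then apply Gaussian-weighted filtering to
    the detection probability map. min_area and sigma are assumptions."""
    labeled, n = ndimage.label(first_result)          # connectivity detection
    cleaned = np.zeros_like(first_result, dtype=bool)
    for region in range(1, n + 1):
        mask = labeled == region
        if mask.sum() >= min_area:                    # keep large components only
            cleaned |= mask
    # zero the probabilities of noise-like components, then smooth
    adjusted = np.where(cleaned, prob_map, 0.0)
    return ndimage.gaussian_filter(adjusted, sigma=sigma)
```

The output is the adjusted detection probability map, which is then re-clustered to obtain the second detection result map.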
Fig. 3 is a schematic diagram of the second detection result maps corresponding to the target image under different window scales according to an embodiment of the disclosure. Fig. 3 includes the second detection result maps (binary images) corresponding to the target image under six different window scales: C1′ under the 4 × 4 window scale, C2′ under the 8 × 8 window scale, C3′ under the 16 × 16 window scale, C4′ under the 32 × 32 window scale, C5′ under the 64 × 64 window scale, and C6′ under the 128 × 128 window scale. White pixel points in a second detection result map are the pixel points that have been deblurred and tampered with in the target image; that is, a white area in a second detection result map is a deblurred tampered region in the target image.
In order to more accurately determine the deblurred tampered region in the target image, the second detection result graphs of the target image under different window scales can be sequentially fused from large to small according to the window scales.
The six second detection result maps C1′–C6′ in fig. 3 are fused as follows. First, taking the second detection result map C6′ as the reference, C6′ is fused with the second detection result map C5′ to obtain a third detection result map C1″. Next, taking C1″ as the reference, C1″ is fused with the second detection result map C4′ to obtain a third detection result map C2″. Then, taking C2″ as the reference, C2″ is fused with the second detection result map C3′ to obtain a third detection result map C3″. Then, taking C3″ as the reference, C3″ is fused with the second detection result map C2′ to obtain a third detection result map C4″. Finally, taking C4″ as the reference, C4″ is fused with the second detection result map C1′ to obtain a third detection result map C5″. According to the third detection result map C5″ obtained after fusion, the deblurred tampered region in the target image can be accurately determined.
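The cascading fusion from the largest window scale down to the smallest can be sketched as below. The patent does not spell out the per-step fusion operator; a pixel-wise logical AND is used here purely as an illustrative choice (the large scales confirm the region, and the smaller scales tighten its boundary):

```python
import numpy as np

def cascade_fuse(result_maps):
    """Fuse second detection result maps given in order C1'..C6' (smallest
    window scale first), proceeding from the largest scale downward. The
    AND operator is an assumption, not specified by the patent."""
    ordered = list(result_maps)[::-1]          # largest window scale first
    fused = ordered[0]                         # reference map (e.g. C6')
    for next_map in ordered[1:]:
        fused = fused & next_map               # one fusion step (a C_k'')
    return fused
```

The final fused map corresponds to the third detection result map obtained after the last fusion step, from which the deblurred tampered region is read off.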
The method thus traverses the target image in sequence through a plurality of sliding windows with different window scales and determines an N-th order derivative gray level co-occurrence matrix of the target image under each window scale, the N-th order derivative gray level co-occurrence matrix being determined according to the N-th order derivative of the target image. A detection probability map of the target image under each window scale is determined according to the N-th order derivative gray level co-occurrence matrix under that window scale, the detection probability map comprising the probability that each pixel point in the target image has been modified. By sequentially fusing the detection probability maps of the target image under the different window scales, from the largest window scale to the smallest, the modified region in the target image can be accurately determined.
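The per-window N-th order derivative gray level co-occurrence matrix described above can be sketched for a single sliding-window patch as follows. The horizontal difference direction, the co-occurrence offset, and the number of quantisation levels are illustrative assumptions not fixed by the patent:

```python
import numpy as np

def derivative_glcm(window, order=1, levels=8):
    """Build one N-th order derivative gray level co-occurrence matrix for
    a sliding-window patch: take the N-th horizontal difference, quantise
    it to `levels` bins, then count horizontally adjacent level pairs."""
    d = window.astype(float)
    for _ in range(order):
        d = np.diff(d, axis=1)                 # N-th order derivative
    # quantise derivative values to `levels` bins
    lo, hi = d.min(), d.max()
    q = np.zeros_like(d, dtype=int) if hi == lo else \
        np.minimum(((d - lo) / (hi - lo) * levels).astype(int), levels - 1)
    glcm = np.zeros((levels, levels), dtype=int)
    for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[i, j] += 1                        # co-occurrence count
    return glcm
```

Calling this with order=1 and order=2 on every patch visited by the 4 × 4 to 128 × 128 sliding windows yields the first and second derivative gray level co-occurrence matrices used above.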
Fig. 4 shows a schematic structural diagram of a data processing apparatus according to an embodiment of the present disclosure. The apparatus 40 shown in fig. 4 may be used to implement the steps of the method embodiment shown in fig. 1 described above, and the apparatus 40 includes:
the first determining module 41 is configured to sequentially traverse the target image through a plurality of sliding windows with different window scales, and determine an N-order derivative gray level co-occurrence matrix of the target image at any window scale, where the N-order derivative gray level co-occurrence matrix is a gray level co-occurrence matrix determined according to an N-order derivative of the target image;
a second determining module 42, configured to determine a detection probability map of the target image in any window scale according to an N-order derivative gray level co-occurrence matrix of the target image in any window scale, where the detection probability map includes a probability that any pixel in the target image is modified;
and a third determining module 43, configured to sequentially fuse the detection probability maps of the target images at different window scales according to the window scales from large to small, and determine a modified region in the target image.
In one possible implementation, the apparatus 40 further includes:
the acquisition module is used for acquiring a decision model under the window scale, which is obtained by training an image sample, aiming at any window scale, wherein the decision model is used for determining a detection probability map of a target image under the window scale according to an N-order derivative gray level co-occurrence matrix of the target image under the window scale.
In one possible implementation manner, the obtaining module includes:
the fuzzy processing submodule is used for carrying out fuzzy processing on the image sample to obtain a fuzzy image;
the deblurring processing submodule is used for deblurring the blurred image to obtain a deblurred image;
the first determining submodule is used for sequentially traversing the deblurred image through a plurality of sliding windows with different window scales and determining an N-order derivative gray level co-occurrence matrix of the deblurred image under any window scale;
and the model training submodule is used for training, for any window scale, the N-th order derivative gray level co-occurrence matrix of the deblurred image under that window scale by using LIBSVM and a radial basis function kernel, to obtain the decision model under that window scale.
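The decision-model training performed by this submodule can be sketched as below. The patent names LIBSVM with a radial basis function kernel; scikit-learn's `SVC` (itself built on libsvm) with `kernel="rbf"` is used here as an equivalent stand-in, and the feature/label layout is an assumption:

```python
import numpy as np
from sklearn.svm import SVC

def train_window_scale_model(features, labels):
    """Train one per-window-scale decision model. `features` are assumed
    to be flattened derivative GLCM descriptors, one row per window;
    `labels` mark windows from deblurred (1) vs. original (0) samples."""
    model = SVC(kernel="rbf", probability=True)   # RBF-kernel SVM, libsvm-backed
    model.fit(features, labels)
    return model
```

With `probability=True`, the trained model's `predict_proba` supplies the per-window probabilities from which the detection probability map under that window scale is assembled.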
In one possible implementation, the third determining module 43 includes:
the clustering submodule is used for clustering the detection probability graph of the target image under any window scale through a clustering algorithm and determining a first detection result graph corresponding to the target image under the window scale, wherein the first detection result graph comprises a modified area and an unmodified area;
the adjusting submodule is used for adjusting the detection probability chart of the target image under the window scale through connectivity detection and filtering operation according to the first detection result chart corresponding to the target image under the window scale;
the clustering submodule is further used for clustering the adjusted detection probability graph of the target image under the window scale again through a clustering algorithm, and determining a second detection result graph corresponding to the target image under the window scale, wherein the second detection result graph comprises a modified area and an unmodified area;
and the second determining submodule is used for sequentially fusing second detection result images corresponding to the target image under different window scales according to the window scales from large to small so as to determine a modified area in the target image.
In one possible implementation, the clustering algorithm is two-centroid K-means clustering.
In one possible implementation, the filtering operation is gaussian weighted filtering.
In one possible implementation, the nth derivative gray level co-occurrence matrix includes at least: a first derivative gray level co-occurrence matrix and a second derivative gray level co-occurrence matrix.
In one possible implementation, the plurality of sliding windows with different window sizes includes:
4 × 4 sliding windows, 8 × 8 sliding windows, 16 × 16 sliding windows, 32 × 32 sliding windows, 64 × 64 sliding windows, and 128 × 128 sliding windows.
The apparatus 40 provided in the present disclosure can implement each step in the method embodiment shown in fig. 1, and implement the same technical effect, and is not described herein again to avoid repetition.
Fig. 5 shows a schematic structural diagram of an electronic device according to an embodiment of the disclosure. As shown in fig. 5, at the hardware level, the electronic device includes a processor, and optionally further includes an internal bus, a network interface, and a memory. The memory may include volatile memory, such as a random-access memory (RAM), and may further include non-volatile memory, such as at least one disk storage. Of course, the electronic device may also include hardware required for other services.
The processor, the network interface, and the memory may be connected to each other via an internal bus, which may be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one double-headed arrow is shown in FIG. 5, but this does not indicate only one bus or one type of bus.
And a memory for storing the program. In particular, the program may include program code comprising computer operating instructions. The memory may include both memory and non-volatile storage and provides instructions and data to the processor.
The processor reads a corresponding computer program from the non-volatile memory into the memory and then runs the computer program, thereby forming the data processing device on a logic level. And a processor executing the program stored in the memory and specifically executing the steps of the embodiment of the method shown in fig. 1.
The method described above with reference to fig. 1 may be applied to or implemented by a processor. The processor may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in the processor or by instructions in the form of software. The processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, which may implement or perform the various methods, steps, and logic blocks disclosed in the embodiments of the present specification. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of a method disclosed in connection with the embodiments of the present specification may be embodied directly in a hardware decoding processor, or in a combination of hardware and software modules in a decoding processor. The software modules may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM, EPROM, or registers. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.
The electronic device may execute the method performed in the embodiment of the method shown in fig. 1, and implement the functions of the embodiment of the method shown in fig. 1, which are not described herein again.
Embodiments of the present specification also propose a computer-readable storage medium storing one or more programs, the one or more programs including instructions, which when executed by an electronic device including a plurality of application programs, enable the electronic device to perform the data processing method in the embodiment shown in fig. 1, and specifically perform the steps of the embodiment of the method shown in fig. 1.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives the computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction set architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of the disclosure are implemented by personalizing an electronic circuit, such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA), with state information of the computer-readable program instructions, which electronic circuit can execute the computer-readable program instructions.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen in order to best explain the principles of the embodiments, the practical application, or technical improvements to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
Claims (18)
1. A data processing method, comprising:
sequentially traversing a target image through a plurality of sliding windows with different window scales, and determining an N-order derivative gray level co-occurrence matrix of the target image under any window scale, wherein the N-order derivative gray level co-occurrence matrix is a gray level co-occurrence matrix determined according to the N-order derivative of the target image;
determining a detection probability map of the target image under any window scale according to an N-order derivative gray level co-occurrence matrix of the target image under any window scale, wherein the detection probability map comprises the probability that any pixel point in the target image is modified;
and sequentially fusing the detection probability maps of the target image under different window scales according to the window scales from large to small to determine a modified area in the target image.
2. The method according to claim 1, wherein determining a detection probability map of the target image at any window scale according to an N-order derivative gray level co-occurrence matrix of the target image at the window scale comprises:
and aiming at any window scale, obtaining a decision model under the window scale obtained by utilizing image sample training, wherein the decision model is used for determining a detection probability map of the target image under the window scale according to an N-order derivative gray level co-occurrence matrix of the target image under the window scale.
3. The method of claim 2, wherein obtaining a decision model at any window scale trained using image samples for the window scale comprises:
carrying out fuzzy processing on the image sample to obtain a fuzzy image;
deblurring processing is carried out on the blurred image to obtain a deblurred image;
sequentially traversing the deblurred image through a plurality of sliding windows with different window scales, and determining an N-order derivative gray level co-occurrence matrix of the deblurred image under any window scale;
and for any window scale, training an N-order derivative gray level co-occurrence matrix of the deblurred image under the window scale by adopting a support vector machine library (LIBSVM) and a radial basis function kernel to obtain a decision model under the window scale.
4. The method according to claim 1, wherein the step of sequentially fusing the detection probability maps of the target image at different window scales according to the window scales from large to small to determine the modified region in the target image comprises:
for any window scale, clustering the detection probability graph of the target image under the window scale through a clustering algorithm, and determining a first detection result graph corresponding to the target image under the window scale, wherein the first detection result graph comprises a modified area and an unmodified area;
according to a first detection result graph corresponding to the target image under the window scale, adjusting a detection probability graph of the target image under the window scale through connectivity detection and filtering operation;
clustering the detection probability graph of the target image under the adjusted window scale again through the clustering algorithm, and determining a second detection result graph corresponding to the target image under the window scale, wherein the second detection result graph comprises a modified area and an unmodified area;
and sequentially fusing second detection result graphs corresponding to the target image under different window scales according to the window scales from large to small, and determining a modified area in the target image.
5. The method of claim 4, wherein the clustering algorithm is two-centroid K-means clustering.
6. The method of claim 4, wherein the filtering operation is Gaussian weighted filtering.
7. The method according to any of claims 1-6, wherein the Nth derivative gray level co-occurrence matrix comprises at least: a first derivative gray level co-occurrence matrix and a second derivative gray level co-occurrence matrix.
8. The method of claim 1, wherein the plurality of sliding windows having different window dimensions comprises:
a 4 × 4 sliding window, an 8 × 8 sliding window, a 16 × 16 sliding window, a 32 × 32 sliding window, a 64 × 64 sliding window, and a 128 × 128 sliding window.
9. A data processing apparatus, comprising:
the first determining module is used for sequentially traversing a target image through a plurality of sliding windows with different window scales and determining an N-order derivative gray level co-occurrence matrix of the target image under any window scale, wherein the N-order derivative gray level co-occurrence matrix is a gray level co-occurrence matrix determined according to the N-order derivative of the target image;
a second determining module, configured to determine a detection probability map of the target image in any window scale according to an nth derivative gray co-occurrence matrix of the target image in any window scale, where the detection probability map includes a probability that any pixel in the target image is modified;
and the third determining module is used for sequentially fusing the detection probability maps of the target image under different window scales according to the window scales from large to small so as to determine the modified area in the target image.
10. The apparatus of claim 9, further comprising:
the acquisition module is used for acquiring a decision model under the window scale, which is obtained by training an image sample, aiming at any window scale, wherein the decision model is used for determining a detection probability map of the target image under the window scale according to an N-order derivative gray level co-occurrence matrix of the target image under the window scale.
11. The apparatus of claim 10, wherein the obtaining module comprises:
the fuzzy processing submodule is used for carrying out fuzzy processing on the image sample to obtain a fuzzy image;
the deblurring processing submodule is used for deblurring the blurred image to obtain a deblurred image;
the first determining submodule is used for sequentially traversing the deblurred image through a plurality of sliding windows with different window scales and determining an N-order derivative gray level co-occurrence matrix of the deblurred image under any window scale;
and the model training submodule is used for training the N-order derivative gray level co-occurrence matrix of the deblurred image under the window scale by using LIBSVM and a radial basis function kernel to obtain a decision model under the window scale.
12. The apparatus of claim 9, wherein the third determining module comprises:
the clustering submodule is used for clustering the detection probability graph of the target image under any window scale through a clustering algorithm and determining a first detection result graph corresponding to the target image under the window scale, wherein the first detection result graph comprises a modified area and an unmodified area;
the adjusting submodule is used for adjusting a detection probability graph of the target image under the window scale through connectivity detection and filtering operation according to a first detection result graph corresponding to the target image under the window scale;
the clustering submodule is further configured to perform clustering again on the adjusted detection probability map of the target image under the window scale through the clustering algorithm, and determine a second detection result map corresponding to the target image under the window scale, where the second detection result map includes a modified region and an unmodified region;
and the second determining submodule is used for sequentially fusing second detection result graphs corresponding to the target image under different window scales according to the window scales from large to small so as to determine a modified area in the target image.
13. The apparatus of claim 12, wherein the clustering algorithm is two-centroid K-means clustering.
14. The apparatus of claim 12, wherein the filtering operation is gaussian weighted filtering.
15. The apparatus according to any of claims 9-14, wherein the nth derivative gray level co-occurrence matrix comprises at least: a first derivative gray level co-occurrence matrix and a second derivative gray level co-occurrence matrix.
16. The apparatus of claim 9, wherein the plurality of sliding windows having different window scales comprises:
a 4×4 sliding window, an 8×8 sliding window, a 16×16 sliding window, a 32×32 sliding window, a 64×64 sliding window, and a 128×128 sliding window.
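Extracting windows at the six scales of claim 16 can be sketched as below. For simplicity this tiles the image with non-overlapping windows; the patent's sliding windows may instead move with a smaller stride, and the 128×128 image size here is a hypothetical choice that every scale divides evenly:

```python
import numpy as np

# Hypothetical image; extract windows at each scale from claim 16.
rng = np.random.default_rng(3)
img = rng.random((128, 128))

scales = [4, 8, 16, 32, 64, 128]
windows = {}
for s in scales:
    # Reshape-based tiling: (H//s, s, W//s, s) -> stack of s-by-s windows.
    tiles = img.reshape(128 // s, s, 128 // s, s).swapaxes(1, 2)
    windows[s] = tiles.reshape(-1, s, s)
```

Each entry of `windows` then holds all windows of one scale, from 1024 windows at 4×4 down to a single 128×128 window.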
17. A data processing apparatus, characterized by comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the data processing method of any one of claims 1-8.
18. A non-transitory computer readable storage medium having computer program instructions stored thereon, wherein the computer program instructions, when executed by a processor, implement the data processing method of any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910032941.6A CN111507931B (en) | 2019-01-14 | 2019-01-14 | Data processing method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111507931A (en) | 2020-08-07 |
CN111507931B (en) | 2023-04-18 |
Family
ID=71863772
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910032941.6A Active CN111507931B (en) | 2019-01-14 | 2019-01-14 | Data processing method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111507931B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112749668A (en) * | 2021-01-18 | 2021-05-04 | 上海明略人工智能(集团)有限公司 | Target image clustering method and device, electronic equipment and computer readable medium |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2011049565A1 (en) * | 2009-10-21 | 2011-04-28 | Hewlett-Packard Development Company, L.P. | Real-time video deblurring |
CN104766095A (en) * | 2015-04-16 | 2015-07-08 | 成都汇智远景科技有限公司 | Mobile terminal image identification method |
CN105046265A (en) * | 2015-03-03 | 2015-11-11 | 沈阳工业大学 | Iris image intestinal loop area detection method based on texture difference |
CN108269221A (en) * | 2018-01-23 | 2018-07-10 | Sun Yat-sen University | JPEG recompressed image tampering localization method |
WO2018152643A1 (en) * | 2017-02-24 | 2018-08-30 | Sunnybrook Research Institute | Systems and methods for noise reduction in imaging |
CN108629743A (en) * | 2018-04-04 | 2018-10-09 | 腾讯科技(深圳)有限公司 | Processing method, device, storage medium and the electronic device of image |
CN109190456A (en) * | 2018-07-19 | 2019-01-11 | PLA Strategic Support Force Information Engineering University | Overhead-view pedestrian detection method based on multi-feature fusion of aggregated channel features and gray level co-occurrence matrices |
2019
- 2019-01-14 CN CN201910032941.6A patent/CN111507931B/en active Active
Non-Patent Citations (5)
Title |
---|
Median Filtering Forensics Based on Convolutional Neural Networks; Jiansheng Chen et al; IEEE Signal Processing Letters; full text *
Image copy-paste forgery detection algorithm based on GLCM and GGM; Gao Hao; China Masters' Theses Full-text Database; full text *
Image region copy-move tampering detection based on the gray level co-occurrence matrix; Ou Jiajia et al.; Journal of Computer Applications (Issue 06); full text *
Adaptive image edge detection based on the gray level co-occurrence matrix; Zhao Haitao et al.; Microcomputer Information (Issue 17); full text *
Image copy tampering detection algorithm fusing LWT texture features; He Ping et al.; Computer Engineering (Issue 10); full text *
Also Published As
Publication number | Publication date |
---|---|
CN111507931A (en) | 2020-08-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11481574B2 (en) | Image processing method and device, and storage medium | |
CN109753971B (en) | Correction method and device for distorted text lines, character recognition method and device | |
CN108337505B (en) | Information acquisition method and device | |
CN110796649B (en) | Target detection method and device, electronic equipment and storage medium | |
CN110135301B (en) | Traffic sign recognition method, device, equipment and computer readable medium | |
CN109300151B (en) | Image processing method and device and electronic equipment | |
CN111126108A (en) | Training method and device of image detection model and image detection method and device | |
CN112330576A (en) | Distortion correction method, device and equipment for vehicle-mounted fisheye camera and storage medium | |
CN115393815A (en) | Road information generation method and device, electronic equipment and computer readable medium | |
CN112597788B (en) | Target measuring method, target measuring device, electronic apparatus, and computer-readable medium | |
CN115690765B (en) | License plate recognition method, device, electronic equipment, readable medium and program product | |
CN111507931B (en) | Data processing method and device | |
CN111415371B (en) | Sparse optical flow determination method and device | |
CN114202457A (en) | Method for processing low-resolution image, electronic device and computer program product | |
CN112287905A (en) | Vehicle damage identification method, device, equipment and storage medium | |
CN114845055B (en) | Shooting parameter determining method and device of image acquisition equipment and electronic equipment | |
CN111860623A (en) | Method and system for counting tree number based on improved SSD neural network | |
CN110852250A (en) | Vehicle weight removing method and device based on maximum area method and storage medium | |
CN112528970A (en) | Guideboard detection method, device, equipment and computer readable medium | |
CN115393763A (en) | Pedestrian intrusion identification method, system, medium and device based on image frequency domain | |
CN115205553A (en) | Image data cleaning method and device, electronic equipment and storage medium | |
CN114862686A (en) | Image processing method and device and electronic equipment | |
JP6623419B2 (en) | Display control device, imaging device, smartphone, display control method, and program | |
CN112991313A (en) | Image quality evaluation method and device, electronic device and storage medium | |
CN114486186B (en) | Device and method for detecting effective focal length of lens |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||