CN113111945B - Adversarial sample defense method based on a transformation autoencoder - Google Patents
- Publication number
- CN113111945B (application number CN202110404528.5A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06F18/24133: Classification techniques based on distances to training or reference patterns; distances to prototypes
- G06F18/24137: Distances to cluster centroïds
- G06F18/2414: Smoothing the distance, e.g. radial basis function networks [RBFN]
- G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06N3/045: Neural network architectures; combinations of networks
- G06N3/08: Neural network learning methods
- Y02T10/40: Engine management systems
Abstract
The invention discloses an adversarial sample defense method based on a transformation autoencoder, comprising two defense layers. The first layer is an autoencoder structure consisting of an encoding network and a decoding network; it measures the difference between a sample before and after passing through the autoencoder, detects adversarial samples, and screens out the remaining samples. The second defense layer applies a total-variance-minimizing image transformation to the remaining samples, removing the perturbation information in adversarial samples before passing them to the target classifier. The invention can detect adversarial samples and remove the perturbation they contain, and improves the classification accuracy of the target classifier by establishing a dual-defense model. It requires neither adversarial samples for training nor retraining of the target classifier, and has good transferability and universality.
Description
Technical Field
The invention relates to the field of information security, in particular to an adversarial sample defense method based on a transformation autoencoder.
Background
As deep learning classification methods are gradually applied in various fields, including face recognition, autonomous driving, and text analysis, the problem of adversarial sample attacks has also been exposed. The adversarial sample problem was raised in 2013 by Szegedy et al., who showed that a neural network can be made to misclassify by adding small perturbations, imperceptible to the human eye, to its input data, and that this misclassification phenomenon is common across classifiers.
Facing the many methods of adversarial sample attack, researchers have proposed many defense methods. Goodfellow et al., in a paper published at the ICLR international conference in 2015, proposed adversarial training as a way to deal with adversarial samples: adversarial samples are added to the training data set of the original classification model, and the adversarially trained model becomes more robust to adversarial samples. However, this defense is tied to a single attack model and cannot resist new attacks.
Image denoising has become increasingly powerful, and the noise in adversarial samples can be treated as a denoising problem. A conventional method is Non-Local Means (NLM), which removes noise using the redundant information present in the whole image: the image is divided into many blocks, and pixels are averaged over the regions with high similarity within each block, smoothing the boundaries between image neighborhoods and thereby removing image noise. However, the noise added to adversarial samples is mostly confined to the neighborhood of individual pixels and is not visually distinct from the original image, so NLM averaging cannot remove it effectively. In deep learning, convolutional neural networks (CNNs) have also been used for image denoising, but the effect on adversarial noise is likewise not satisfactory.
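As an illustration of the NLM principle just described, the following minimal single-channel NumPy sketch (written for this explanation, not code from the patent; the patch size, search window, and filtering parameter h are illustrative assumptions) averages each pixel with nearby pixels whose surrounding patches look similar:

```python
import numpy as np

def nlm_denoise(img, patch=3, search=5, h=0.1):
    """Minimal non-local means sketch: each pixel is replaced by a
    similarity-weighted average of pixels in a small search window,
    where the weight depends on how alike the surrounding patches are."""
    pad = patch // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            ref = padded[i:i + patch, j:j + patch]   # patch around (i, j)
            weights, vals = [], []
            for di in range(-search // 2 + 1, search // 2 + 1):
                for dj in range(-search // 2 + 1, search // 2 + 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < rows and 0 <= jj < cols:
                        cand = padded[ii:ii + patch, jj:jj + patch]
                        d2 = np.mean((ref - cand) ** 2)   # patch distance
                        weights.append(np.exp(-d2 / h ** 2))
                        vals.append(img[ii, jj])
            out[i, j] = np.average(vals, weights=weights)
    return out
```

Because the weights depend on patch similarity, broad image noise is smoothed away, while the pixel-level adversarial perturbations described above tend to survive this kind of averaging.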
In 2017, Papernot et al. proposed a distillation defense model. Its basic principle is to train a teacher network and a student network and to modify the classification probabilities by changing the softmax at the last layer of the neural network, so that adversarial samples and clean samples are ultimately classified into the same class. However, this defense has been broken by the C&W attack proposed by Carlini and Wagner.
Therefore, constructing a universal, high-performance defense method against adversarial samples, whose defensive effect is independent of the attack mode, is of great significance.
Disclosure of Invention
In view of the above, the present invention proposes an adversarial sample defense method based on a transformation autoencoder that can effectively defend against adversarial sample attacks. Because an adversarial sample is an attack created by adding a perturbation to a clean sample, the present invention detects and removes the perturbation in the adversarial sample.
In order to achieve the above purpose, the present invention provides the following technical solution. The invention comprises two defense layers. The first layer is an autoencoder network whose main criterion is the reconstruction error; it is divided into an encoding part and a decoding part, and by performing encoding-decoding reconstruction of clean samples and adversarial samples and comparing the difference between a sample before and after reconstruction, adversarial samples can be detected. The second layer takes the images that cannot be identified by the first layer as input and removes the perturbation information in adversarial samples by means of an image transformation: the main information of the picture is compressed and the image is recombined under total variance minimization, where the total variance is the small difference between the recombined image and the original image; by minimizing it, the two images are made similar and the small perturbations in the original image are removed. Finally, the perturbation-free images are input into the deep learning classification model to obtain an accurate classification result.
Further, the implementation method comprises the following steps:
S1, acquiring a clean sample data set: the obtained normal images are taken as a clean sample data set and randomly divided into a training set and a test set for the subsequent training of the target classifier;
S2, training a target classifier: a deep learning classification network is established, trained, and tested with the clean sample data set from S1 to obtain the target classifier, and the classification accuracy on the test set is recorded;
S3, generating an adversarial sample data set and classifying it: an existing common adversarial sample attack algorithm and a certain perturbation coefficient are selected, and an adversarial sample data set is built from the clean sample data set of S1. The clean samples from S1 are mixed with the adversarial samples generated in this step as a test set, this data set is classified with the target classifier from S2, and the classification accuracy at this point is recorded;
S4, constructing the autoencoder network structure: an autoencoder is built according to the size and dimensions of the image data in S1 and trained with the clean sample data set from S1 to obtain the first-layer autoencoder defense model; the clean sample data set from S1 is then reconstructed by the autoencoder, and the difference before and after reconstruction is calculated as a threshold;
S5, using the network to detect adversarial samples: the adversarial sample test set from S3 is reused and reconstructed by the autoencoder from S4; the difference between samples before and after reconstruction is calculated and compared against the threshold from S4 as a critical value to separate adversarial samples from uncertain samples, and the uncertain sample set is carried forward as the test set of the second-layer defense model;
S6, performing image transformation on the uncertain samples from S5: the test set from S5 undergoes an image transformation that minimizes the total variance, so that the newly generated data set remains similar to the original uncertain sample set after transformation. The principle of the transformation is as follows:
(1) For K-channel image data described as m rows and n columns of an image X, the pixel in the i-th row and j-th column of channel k is denoted (i, j, k). Each pixel position is randomly sampled through a Bernoulli random variable b(i, j, k), whose value is 0 or 1, to obtain a random image Z, which can be described as Z = X ⊙ b, where ⊙ denotes element-wise multiplication.
(2) Using the principle of total variance minimization, compute from the random image Z the image Z′ closest to X, i.e. the Z′ that minimizes the total-variance objective:

Z′ = argmin_Z ‖(1 − b) ⊙ (Z − X)‖₂ + λ_TV · TV_p(Z)

where ⊙ denotes element-wise multiplication and TV_p(Z) denotes the total variance with the ℓ_p norm, calculated as follows:

TV_p(Z) = Σ_{k=1}^{K} [ I_p(Z, k) + J_p(Z, k) ]

where K is the total number of channels of the image and I_p, J_p are the variances calculated along the rows and columns of each pixel channel:

I_p(Z, k) = Σ_i ‖Z(i, :, k) − Z(i − 1, :, k)‖_p,  J_p(Z, k) = Σ_j ‖Z(:, j, k) − Z(:, j − 1, k)‖_p

where N is the total number of pixels in a row or column (the sums run over i, j = 2, …, N), and I_p and J_p enter the calculation of TV_p(Z).
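The two-step principle above can be sketched numerically as follows (a single-channel NumPy sketch written for this explanation, with p = 2; the λ_TV value, step counts, and the crude finite-difference descent are illustrative assumptions standing in for a dedicated total-variation solver):

```python
import numpy as np

def tv_p(z, p=2):
    # TV_p: l_p norms of adjacent row and column differences (one channel).
    i_p = np.sum(np.abs(np.diff(z, axis=0)) ** p) ** (1.0 / p)
    j_p = np.sum(np.abs(np.diff(z, axis=1)) ** p) ** (1.0 / p)
    return i_p + j_p

def tvm_objective(z, x, b, lam=0.05):
    # ||(1 - b) * (z - x)||_2 anchors z to the input image x,
    # lam * TV_p(z) smooths out small pixel-level perturbations.
    data = np.linalg.norm(((1 - b) * (z - x)).ravel())
    return data + lam * tv_p(z)

def tvm_reconstruct(x, b, lam=0.05, steps=25, lr=0.1, eps=1e-4):
    # Crude minimization by numerical-gradient descent with backtracking;
    # only improving steps are accepted, so the objective never increases.
    z = (x * b).astype(float)          # start from the sampled image Z = X (.) b
    for _ in range(steps):
        base = tvm_objective(z, x, b, lam)
        g = np.zeros_like(z)
        for idx in np.ndindex(z.shape):            # finite-difference gradient
            z[idx] += eps
            g[idx] = (tvm_objective(z, x, b, lam) - base) / eps
            z[idx] -= eps
        cand = z - lr * g
        if tvm_objective(cand, x, b, lam) < base:
            z = cand
        else:
            lr *= 0.5                              # backtrack on overshoot
    return z
```

On MNIST-sized images a dedicated TV solver would be used instead; the point of the sketch is only that minimizing the masked data term plus λ_TV · TV_p(Z) yields a smoothed image close to the input.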
S7, classifying again with the target classifier to demonstrate the effectiveness of the defense: the test set generated in S6 is classified by the original target classifier and the classification accuracy is recorded again; compared with the accuracies from S2 and S3, the classification accuracy is improved, proving the effectiveness of the invention in defending against adversarial samples.
The invention has the following beneficial effects:
1. The designed transformation autoencoder adversarial sample defense model significantly improves the classification accuracy of the target classifier on adversarial samples and, working from the two directions of detecting adversarial samples and removing perturbations, is well interpretable in principle;
2. The principles and methods of the two-layer defense model (the autoencoder network structure and the image transformation) are independent of the way the adversarial samples are generated, and require neither retraining nor enhancement of the target classifier; being completely independent of the classification model, the method has clear transferability and universality;
3. At the cost of minimizing the total variance, the difference between the generated image and the sample to be examined is small, and the samples generated by the method shed the perturbation information in adversarial samples without affecting classification accuracy, so the perturbation is effectively eliminated.
Drawings
FIG. 1 is a flow chart of the transformation autoencoder defense model method provided by an embodiment of the invention;
FIG. 2 is a schematic diagram of the adversarial sample defense model based on a transformation autoencoder according to an embodiment of the invention;
FIG. 3 is a schematic diagram of the autoencoder network according to an embodiment of the invention;
FIG. 4 shows clean samples and the corresponding images after processing by the invention.
Detailed Description
In order to make the technical scheme and the line of thought of the application clearer, the application is further explained below using the MNIST data set and the C&W adversarial sample attack method as an embodiment. The embodiments used here are intended to explain the application, and various equivalent modifications falling within its scope are contemplated as falling within the scope of the appended claims.
The MNIST data set is a handwritten digit data set commonly used in the field of deep learning, comprising 60000 training image samples and 10000 test samples; each sample is a fixed 28×28 pixels with values in [0, 1]. In this embodiment it is used to train the classification model and the autoencoder network. The C&W attack is among the most aggressive adversarial attack methods and has a complex underlying principle; the method provided by the invention is not limited to defending against the C&W attack and remains effective against other common attack methods. The C&W adversarial sample generation method, proposed by Carlini and Wagner, demonstrated the ineffectiveness of distillation defenses; its principle is briefly described as follows:
The attack minimizes ‖x′ − x‖₂² + c · f(x′, t), with the function f defined as

f(x′, t) = max( max{ Z(x′)_i : i ≠ t } − Z(x′)_t , −κ )

where x is a clean sample, x′ is the adversarial sample, c is a linear trade-off coefficient, and κ ≥ 0 is a confidence margin (κ = 0 in the simplest case). The loss function f(x′, t) drives a label originally classified as class i to be predicted as class t; Z(x′)_i denotes the classifier's output for the i-th class on x′, and t is the target label;
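The targeted loss above can be written out directly (a NumPy illustration written for this explanation; `kappa` is the confidence margin from the original C&W formulation, taken as 0 by default to match the simplest case):

```python
import numpy as np

def cw_loss(logits, target, kappa=0.0):
    # f(x', t) = max( max{ Z(x')_i : i != t } - Z(x')_t , -kappa ):
    # positive while some non-target logit still beats the target logit,
    # clamped at -kappa once the target class wins by the desired margin.
    logits = np.asarray(logits, dtype=float)
    other_best = np.delete(logits, target).max()
    return max(other_best - logits[target], -kappa)
```

While cw_loss is positive, the attack keeps perturbing x′; driving it down to −κ (here 0) means the classifier predicts the target class t.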
As shown in FIG. 1, an embodiment of the present invention provides a transformation-autoencoder-based adversarial sample defense method, whose specific implementation comprises the following steps:
S1, acquiring a clean sample data set: the obtained normal images are taken as a clean sample data set and randomly divided into a training set and a test set for the subsequent training of the target classifier.
Further, 10000 digit images are randomly extracted from the MNIST handwritten digit data set as the training set x, and 5000 digit pictures are extracted as the test set; each digit picture is a black-and-white image of 28×28 pixels, and the label y corresponding to an image is a digit from 0 to 9.
S2, training the target classifier: a deep learning classification network is established, trained, and tested with the clean sample data set from S1 to obtain the target classifier, and the classification accuracy on the test set is recorded.
In this embodiment, the target classifier is constructed with 28×28×1 as the input data format and 10 classes as the output format; the classifier is defined as f(x, y), where (x, y) are the clean sample images and labels from the previous step.
Furthermore, the target classifier in this step is not specifically restricted: any model built on a convolutional neural network and capable of classification may be used. In this embodiment, training uses stochastic gradient descent (SGD) with a learning rate of 0.01, and the network consists of four convolutional layers, two pooling layers, two denoising layers, and a final softmax. The target classifier obtained by training is tested with the test set from S1, and the classifier's classification accuracy p_x on clean samples is recorded.
S3, generating an adversarial sample data set and classifying it: an existing common adversarial sample attack algorithm and a certain perturbation coefficient are selected, and an adversarial sample data set is built from the clean sample data set of S1. The clean samples from S1 are mixed with the adversarial samples generated in this step as a test set, this data set is classified with the target classifier from S2, and the classification accuracy at this point is recorded.
In this embodiment, an adversarial sample data set x_adv is generated from the clean samples x using the C&W attack algorithm; the attack is based on the ℓ₂ norm, with the perturbation coefficient ε set to 0.05. 10000 adversarial sample images are generated, whose labels satisfy y_adv ≠ y. x_adv is mixed with an equal proportion of x to form the test set x_test of the method, and the target classifier executes f(x_test, y), giving the classification accuracy p_x_test in the undefended case.
Further, the adversarial sample attack algorithm is not limited to C&W or to any particular type; the perturbation coefficient is not a fixed value and is adjusted according to the chosen attack algorithm and attack strength.
S4, constructing the autoencoder network structure: an autoencoder is built according to the size and dimensions of the image data in S1 and trained with the clean sample data set from S1 to obtain the first-layer autoencoder defense model; the clean sample data set from S1 is then reconstructed by the autoencoder, and the difference before and after reconstruction is calculated as a threshold.
In this embodiment, the autoencoder network structure is designed according to the 28×28×1 pixel characteristics of MNIST images and is divided mainly into an encoding network and a decoding network; FIG. 3 shows the specific network structure, described as follows:
(1) Encoding network
The first layer of the encoding network is a convolutional layer computed with 3×3 convolution kernels and using ReLU as the activation function; the second layer is a max-pooling layer computed over 2×2 units. After this layer, the above structure is repeated once to deepen the network, and finally a 3×3 convolutional layer serves as the last layer of the encoding side.
(2) Decoding network
The decoding network is attached after the encoding network. Its first layer is a convolutional layer with 3×3 kernels; the second layer performs up-sampling over 2×2 units, keeping the decoder symmetric to the encoder's pooling structure, and this pattern is repeated. Decoding ends with a 3×3 convolutional layer. A sigmoid cross-entropy loss, minimized during encoding and decoding, serves as the training objective and improves the robustness of the autoencoder network; finally, a softmax classifies the network output into 10 classes.
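One detail worth checking in the structure above is that the decoder output returns to the 28×28 input size. The small trace below (written for this explanation; it assumes 'same' padding, so the 3×3 convolutions preserve spatial size) follows one side length through the described layers:

```python
def autoencoder_side_trace(side=28):
    """Trace a spatial side length through the described structure:
    encoder = [conv3x3, maxpool2x2] x 2 + conv3x3,
    decoder = [conv3x3, upsample2x2] x 2 + conv3x3 (mirrored)."""
    trace = [side]
    for _ in range(2):               # encoder blocks
        trace.append(trace[-1])      # conv 3x3, 'same' padding keeps size
        trace.append(trace[-1] // 2) # maxpool 2x2 halves size
    trace.append(trace[-1])          # last encoder conv 3x3
    for _ in range(2):               # decoder blocks
        trace.append(trace[-1])      # conv 3x3
        trace.append(trace[-1] * 2)  # upsample 2x2 doubles size
    trace.append(trace[-1])          # last decoder conv 3x3
    return trace
```

The trace runs 28 → 14 → 7 at the bottleneck and back to 28, confirming that the symmetric stack reconstructs images at the original MNIST resolution.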
Further, the autoencoder network is trained with the clean sample data set x and then used to reconstruct it, giving a new data set x_new; the difference between x and x_new is calculated as the threshold η for detecting adversarial samples.
S5, using the network to detect adversarial samples: the adversarial sample test set from S3 is reused and reconstructed by the autoencoder from S4; the difference between samples before and after reconstruction is calculated and compared against the threshold from S4 as a critical value to separate adversarial samples from uncertain samples, and the uncertain sample set is used as the test set of the second-layer defense model.
As shown in FIG. 2, the mixed set x_test of clean samples and adversarial samples is passed through the autoencoder network for detection, and the error before and after reconstruction is compared with the threshold η. If the reconstruction error is large, the sample is an adversarial sample; if the error is within the acceptable range, the sample is considered either a clean sample or a well-concealed adversarial sample, and this portion of the samples, x_test_2, is handed to the second defense layer for processing.
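The threshold-and-detect logic of this layer can be sketched as follows (a NumPy sketch written for this explanation; choosing η as a high quantile of the clean-sample reconstruction errors is an illustrative assumption, since the text only states that the clean-sample difference before and after reconstruction serves as the threshold):

```python
import numpy as np

def reconstruction_error(x, x_rec):
    # Per-sample l2 distance between each input and its autoencoder
    # reconstruction, flattened over all pixels.
    return np.linalg.norm((x - x_rec).reshape(len(x), -1), axis=1)

def fit_threshold(clean, clean_rec, quantile=0.95):
    # eta: chosen so that most clean samples fall at or below it
    # (the quantile level is an assumption, not specified by the patent).
    return np.quantile(reconstruction_error(clean, clean_rec), quantile)

def detect(x, x_rec, eta):
    # True = flagged as adversarial (large reconstruction error);
    # False = uncertain/clean, passed on to the second defense layer.
    return reconstruction_error(x, x_rec) > eta
```

Samples flagged True are rejected as adversarial; the remaining samples form x_test_2, the input of the image-transformation layer.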
S6, performing image transformation on the uncertain samples from S5: the test set from S5 undergoes an image transformation that minimizes the total variance, so that the newly generated data set remains similar to the original uncertain sample set after transformation.
In this embodiment, the x_test_2 samples undergo the total-variance-minimizing image transformation, which effectively reduces the perturbation noise in the images; the transformed image data set is x_test_2new. FIG. 4 shows an original image set and the example image set obtained after the processing of S6.
S7, classifying again with the target classifier to demonstrate the effectiveness of the defense: the test set generated in S6 is classified by the original target classifier and the classification accuracy is recorded again; compared with the accuracies from S2 and S3, the classification accuracy is improved, proving the effectiveness of the invention in defending against adversarial samples.
In this embodiment, the target classifier performs the classification f(x_test_2new, y) on x_test_2new, and the classification accuracy p_x_test_new on these samples is calculated. The classification accuracies p_x, p_x_test, and p_x_test_new of the target classifier on clean samples, on undefended samples, and on samples defended by the method are compared. The final result is p_x ≈ p_x_test_new and p_x_test << p_x_test_new.
In conclusion, the classification accuracy on clean samples and on defended samples is very close, within an acceptable range, while the classification accuracy in the undefended case is far lower than after defense, which demonstrates that the method is effective in defending against adversarial samples.
Claims (2)
1. A method of adversarial sample defense based on a transformation autoencoder, characterized by comprising two defense layers, wherein
the first defense layer is an autoencoder network whose main criterion is the reconstruction error, divided into an encoding part and a decoding part; by performing encoding-decoding reconstruction of clean samples and adversarial samples and distinguishing the difference between a sample before and after reconstruction, adversarial samples are detected;
the second defense layer takes the images that the first layer cannot identify as input and removes the perturbation information in adversarial samples by means of an image transformation: the main information of the picture is compressed and the image is recombined under total variance minimization, where the total variance is the small difference between the recombined image and the original image; by minimizing it, the two images are made similar and the small perturbations in the original image are removed; finally, the perturbation-free images are input into the deep learning classification model to obtain an accurate classification result; the method comprises the following specific steps:
S1, acquiring a clean sample data set: the obtained normal images are taken as a clean sample data set and randomly divided into a training set and a test set for the subsequent training of the target classifier;
S2, training a target classifier: a deep learning classification network is established, trained, and tested with the clean sample data set from S1 to obtain the target classifier, and the classification accuracy on the test set is recorded;
S3, generating an adversarial sample data set and classifying it: an adversarial sample attack algorithm and a certain perturbation coefficient are selected, an adversarial sample data set is built from the clean sample data set of S1, the clean samples from S1 are mixed with the adversarial samples generated in this step as a test set, this data set is classified with the target classifier from S2, and the classification accuracy at this point is recorded;
S4, constructing the autoencoder network structure: an autoencoder is built according to the size and dimensions of the image data in S1 and trained with the clean sample data set from S1 to obtain the first-layer autoencoder network defense model; the clean sample data set from S1 is then reconstructed by the autoencoder, and the difference before and after reconstruction is calculated as a threshold;
S5, using the network to detect adversarial samples: the adversarial sample test set from S3 is reused and reconstructed by the autoencoder from S4; the difference between samples before and after reconstruction is calculated and compared against the threshold from S4 as a critical value to separate adversarial samples from uncertain samples, and the uncertain sample set is carried forward as the test set of the second-layer image transformation defense model;
S6, performing image transformation on the uncertain samples from S5: the test set from S5 undergoes an image transformation that minimizes the total variance, so that the newly generated data set remains similar to the original uncertain sample set after transformation;
S7, classifying again with the target classifier to demonstrate the effectiveness of the defense: the test set generated in S6 is classified by the original target classifier and the classification accuracy is recorded again; compared with the accuracies from S2 and S3, the classification accuracy is improved, proving the effectiveness of the invention in defending against adversarial samples.
2. The method of claim 1, wherein the image transformation principle in step S6 is as follows:
(1) For K-channel image data described as m rows and n columns of an image X, the pixel in the i-th row and j-th column of channel k is denoted (i, j, k); each pixel position is randomly sampled through a Bernoulli random variable b(i, j, k), whose value is 0 or 1, to obtain a random image Z, described as Z = X ⊙ b, where ⊙ denotes element-wise multiplication;
(2) An image Z′ closest to X is computed from the random image Z by minimizing the total variance, using the following formula:

Z′ = argmin_Z { ||(1 − B) ⊙ (Z − X)||₂ + λ_TV · TV_p(Z) }

where ⊙ denotes element-wise multiplication and TV_p(Z) is the total variance under the l_p norm, calculated as follows:

TV_p(Z) = Σ_{k=1}^{K} [ I_p(k) + J_p(k) ]

where K is the total number of channels of the image, and I_p(k), J_p(k) are the variances calculated along the rows and columns of channel k, with the formulas:

I_p(k) = Σ_{i=2}^{N} ||Z(i, ·, k) − Z(i−1, ·, k)||_p
J_p(k) = Σ_{j=2}^{N} ||Z(·, j, k) − Z(·, j−1, k)||_p

where N is the total number of pixels in a row or column.
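The Bernoulli sampling and total-variance objective above can be sketched as follows for a single-channel image. This is a hedged illustration: the function names and the use of NumPy are our assumptions, and the actual minimization over Z would be carried out by an iterative solver, which is omitted here.

```python
import numpy as np

def bernoulli_mask(x, keep_prob, rng):
    """Z = X (element-wise) B, with B(i,j) ~ Bernoulli(keep_prob)."""
    b = rng.binomial(1, keep_prob, size=x.shape)
    return x * b, b

def total_variation(z, p=2):
    """TV_p for one channel: l_p norms of adjacent row and column differences."""
    d_rows = np.diff(z, axis=0)                          # Z(i,·) − Z(i−1,·)
    i_p = np.linalg.norm(d_rows, ord=p, axis=1).sum()    # sum over rows
    d_cols = np.diff(z, axis=1)                          # Z(·,j) − Z(·,j−1)
    j_p = np.linalg.norm(d_cols, ord=p, axis=0).sum()    # sum over columns
    return i_p + j_p

def objective(z, x, b, lam, p=2):
    """||(1 − B) (Z − X)||_2 + lambda_TV * TV_p(Z), evaluated at candidate Z."""
    data_term = np.linalg.norm(((1 - b) * (z - x)).ravel(), 2)
    return data_term + lam * total_variation(z, p)
```

A solver (e.g. gradient-based) would search over candidate images Z for the minimizer Z′ of `objective`; for a K-channel image the total variance is summed over channels.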
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110404528.5A CN113111945B (en) | 2021-04-15 | 2021-04-15 | Antagonistic sample defense method based on transformation self-encoder |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113111945A CN113111945A (en) | 2021-07-13 |
CN113111945B true CN113111945B (en) | 2024-07-09 |
Family
ID=76717077
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115188384A (en) * | 2022-06-09 | 2022-10-14 | 浙江工业大学 | Voiceprint recognition countermeasure sample defense method based on cosine similarity and voice denoising |
CN115860112B (en) * | 2023-01-17 | 2023-06-30 | 武汉大学 | Model inversion method-based countermeasure sample defense method and equipment |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108537271A (*) | 2018-04-04 | 2018-09-14 | 重庆大学 | A method for defending against adversarial-sample attacks based on a convolutional denoising autoencoder |
CN111600851A (en) * | 2020-04-27 | 2020-08-28 | 浙江工业大学 | Feature filtering defense method for deep reinforcement learning model |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111626367A (en) * | 2020-05-28 | 2020-09-04 | 深圳前海微众银行股份有限公司 | Countermeasure sample detection method, apparatus, device and computer readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||