CN112465019A - Perturbation-based adversarial sample generation and adversarial defense method - Google Patents
Perturbation-based adversarial sample generation and adversarial defense method
- Publication number
- CN112465019A (application number CN202011351688.XA)
- Authority
- CN
- China
- Prior art keywords
- disturbance
- sample
- neural network
- deep neural network model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/10—Complex mathematical operations
- G06F17/16—Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/213—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Abstract
The invention discloses a perturbation-based method for adversarial sample generation and adversarial defense, belonging to the technical field of deep learning. Although many methods exist for generating adversarial samples, an important open problem is which kinds of perturbation most readily form them. In this method, adversarial samples are generated by adding perturbation to image data, high-impact perturbations of the test samples are mined based on a deep neural network model, and the influence of perturbation on adversarial sample formation is studied. The dimensionality-reduction capability of a generalized non-negative matrix factorization algorithm without dimension-matching constraints is then applied to defend against perturbation-based adversarial samples, yielding a perturbation-reduction defense that lowers the recognition error rate of the deep neural network model.
Description
Technical Field
The invention relates to a perturbation-based method for generating adversarial samples and defending against them, and belongs to the field of deep neural networks.
Background
At present, Deep Neural Networks (DNNs) are widely used in many artificial intelligence applications, including computer vision, speech recognition and robotics. A neural network is formed by arranging neurons in connected layers; a deep neural network is a neural network with multiple layers, where "depth" refers to the number of layers, and computation proceeds by traversing each layer in turn. Owing to the outstanding advances made by deep neural networks in the speech and image domains, and the breakthrough application of DNNs to speech recognition and image recognition, the number of applications using DNNs has grown explosively.
An adversarial sample is an input formed by deliberately adding a subtle perturbation to data in a dataset, such that the perturbed input causes the model to give an erroneous output with high confidence. Methods for generating adversarial samples are now diverse, but there is little research on which kinds of perturbation are more likely to form adversarial samples. Deep learning models are extremely vulnerable to adversarial samples, and for deep neural network models the mechanism by which perturbation forms adversarial samples has not been studied systematically. Data compression and PCA (principal component analysis), which project high-dimensional data into a low-dimensional space, have shown some effect in defending against adversarial attacks, which suggests defending against adversarial samples by reducing data dimensionality with a generalized non-negative matrix factorization algorithm. Therefore, to improve the safety of deep neural network models, systematic research on perturbation-based adversarial sample generation and its defense theory is required.
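The PCA-based defense mentioned above can be sketched as follows (an illustrative NumPy sketch; the function name `pca_project` and the rank parameter `k` are assumptions, not part of the patent): projecting inputs onto their top principal components and mapping back discards low-variance directions in which small perturbations often concentrate.

```python
import numpy as np

def pca_project(X, k):
    """Project the rows of X onto their top-k principal components and map
    back to the original space, discarding low-variance directions where
    small adversarial perturbations tend to live."""
    mu = X.mean(axis=0)
    Xc = X - mu
    # Rows of Vt are the principal axes (right singular vectors).
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:k]                      # top-k axes, shape (k, n_features)
    return Xc @ P.T @ P + mu        # reconstruction from the k-dim projection
```

For data that actually lies near a k-dimensional subspace, the reconstruction preserves the signal while suppressing components orthogonal to it.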
Disclosure of Invention
The invention aims to provide a perturbation-based adversarial sample generation and adversarial defense method that can reduce the recognition error rate of a deep neural network model.
The technical scheme adopted by the invention is a perturbation-based adversarial sample generation and adversarial defense method comprising the following steps:
S101, for an image sample, adding perturbation to the original data, analyzing the perturbation characteristics applicable to the image sample, and mining high-impact perturbation based on a deep neural network model;
S102, according to the high-impact perturbation, finding the lowest threshold of the perturbation intensity for the deep neural network model from the output of the model as the perturbation intensity is incrementally increased;
S103, according to adversarial samples constructed from the high-impact perturbation, analyzing the detection results of a deep neural network model with a detection function as the perturbation intensity is incrementally increased, so as to find the highest threshold of the perturbation intensity;
S104, introducing a generalized non-negative matrix factorization algorithm before the deep neural network model to reduce the data dimensionality, thereby mitigating adversarial sample attacks and defending deep neural network learning.
Further, in step S101, attack samples may be generated by adding suitable perturbations to the image sample, such as 1) Gaussian noise, 2) Rayleigh noise, 3) exponential noise, or 4) salt-and-pepper noise. The original sample and the attack sample are input into the deep neural network model to obtain their respective recognition results, and the difference between the results is analyzed and compared: if the difference is large, a high-impact perturbation has been obtained; otherwise, a perturbation is added again and the recognition results are compared anew. For the deep neural network model, the mined high-impact perturbation can be used to generate adversarial samples, revealing which kinds of perturbation more readily form adversarial samples and providing a basis for later research.
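The four noise families listed above can be sketched in NumPy as follows (a minimal illustration; the function name `add_noise` and the `intensity` parameterisation are assumptions, not taken from the patent):

```python
import numpy as np

def add_noise(img, kind="gaussian", intensity=0.1, rng=None):
    """Return a perturbed copy of a float image with values in [0, 1].

    kind: one of 'gaussian', 'rayleigh', 'exponential', 'salt_pepper';
    intensity controls the noise scale (illustrative parameterisation)."""
    rng = np.random.default_rng() if rng is None else rng
    x = img.astype(float).copy()
    if kind == "gaussian":
        x += rng.normal(0.0, intensity, x.shape)
    elif kind == "rayleigh":
        x += rng.rayleigh(intensity, x.shape)
    elif kind == "exponential":
        x += rng.exponential(intensity, x.shape)
    elif kind == "salt_pepper":
        mask = rng.random(x.shape)
        x[mask < intensity / 2] = 0.0          # pepper: force pixels to black
        x[mask > 1 - intensity / 2] = 1.0      # salt: force pixels to white
    else:
        raise ValueError(f"unknown noise kind: {kind}")
    return np.clip(x, 0.0, 1.0)
```

Each call yields a candidate attack sample to be compared against the clean sample in the S101 loop.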
Further, in step S102, given a sample set, the perturbation intensity is varied and the samples are input into the deep neural network model; the recognition results are analyzed as the intensity increases until a recognition error occurs, at which point the perturbation intensity is the lowest threshold. When attacking the deep neural network model, this lowest threshold of perturbation intensity can be used to generate adversarial samples, causing recognition errors with the smallest possible perturbation.
Further, in step S103, a deep neural network model with a detection (screening) function can detect (screen out) attack samples, and appropriately changing the perturbation intensity can evade this detection (screening) while still influencing the recognition result. Similarly, for the deep neural network with a detection function, the recognition results are analyzed as the intensity increases; the largest intensity that still evades detection while causing a recognition error is the highest threshold. Exploring this highest threshold of perturbation intensity for a deep neural network model with a detection (screening) function makes it possible to generate adversarial samples within the effective perturbation range and attack such a model so that recognition fails, achieving a successful attack.
Further, in step S104, a generalized non-negative matrix factorization algorithm is introduced to reduce dimensionality, reduce the perturbation in the sample, and study the relationship between the reduced dimension and perturbation reduction, so as to achieve a defense function. The high-impact perturbed sample is reduced in dimensionality by the generalized non-negative matrix factorization algorithm, achieving the goal of reducing the perturbation. Introducing the algorithm before the deep neural network model reduces the data dimension, thereby weakening the perturbation, mitigating adversarial sample attacks, and realizing an adversarial defense for the deep neural network.
The beneficial effects of the invention are as follows: the perturbation-based adversarial sample generation and adversarial defense method systematically studies the mechanism by which perturbation forms adversarial samples, namely the high-impact perturbation together with its lowest and highest intensity thresholds, which is of research significance for perturbation-based adversarial sample generation. For a deep neural network, the studied mechanism can be used to generate adversarial samples within the effective perturbation range, causing recognition errors and making the attack efficient. In the process of step S104, a generalized non-negative matrix factorization algorithm can be introduced, even multiple times, for data dimensionality reduction to block adversarial sample attacks, reduce the recognition error rate of the deep neural network model, and improve its robustness.
Drawings
FIG. 1 is an overall diagram of the perturbation-based adversarial sample generation and defense technique of the present invention;
FIG. 2 is a flow chart of the perturbation-based adversarial sample generation and defense method of the present invention;
FIG. 3 is a schematic diagram of the high-impact perturbation exploration process of the present invention;
FIG. 4 is a flow chart of mining the lowest threshold according to the present invention;
FIG. 5 is a schematic diagram of the process for mining the highest threshold according to the present invention;
FIG. 6 is a diagram of the dimensionality-reduction defense process using the generalized non-negative matrix factorization algorithm of the present invention.
Detailed Description
A perturbation-based adversarial sample generation and adversarial defense method mainly comprises exploring high-impact perturbations of image samples to generate adversarial samples, mining perturbation intensity thresholds based on a deep neural network model, and introducing a generalized non-negative matrix factorization algorithm to realize a defense function. In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention are described in detail and completely below with reference to the accompanying drawings.
FIG. 1 is an overall diagram of the perturbation-based adversarial sample generation and defense technique of the present invention. In this method, adversarial samples are generated by adding perturbation to image samples, and high-impact perturbations are mined based on a deep neural network model; for adversarial samples constructed from high-impact perturbation, the intensity thresholds of the added perturbation are mined with respect to the functions of the deep neural network model; and for the perturbation-based construction of adversarial samples, the relationship between dimensionality reduction by a generalized non-negative matrix factorization algorithm and the reduction of sample perturbation is mined, exploring an adversarial defense method based on sample dimensionality reduction.
FIG. 2 is a flow chart of the perturbation-based adversarial sample generation and defense method of the present invention. The perturbation shown in FIG. 2 is mainly noise affecting the image sample, including Gaussian noise, Rayleigh noise, exponential noise, salt-and-pepper noise, and the like; the adversarial samples are generated according to the perturbation characteristics applicable to the image samples. The test samples are adversarial samples generated by adding high-impact perturbation. The generalized non-negative matrix factorization algorithm is introduced before the deep neural network model and can be introduced multiple times. The overall flow shown in FIG. 2 specifically comprises:
and S101, adding disturbance in the image sample, and researching high-influence disturbance.
Specifically, perturbation can be added to an image sample to perturb the original data, the perturbation characteristics applicable to the image data are analyzed, and the high-impact perturbation of the test sample is explored based on a deep neural network model.
FIG. 3 is a schematic diagram of the high-impact perturbation exploration process of the present invention. High-impact perturbation exploration adds suitable perturbations to the original data of an image sample, analyzes the perturbation characteristics applicable to the sample to generate adversarial samples, compares the training/testing outputs of the original sample and the adversarial sample on the deep neural network model, and analyzes the difference: if the difference is large, a high-impact perturbation has been obtained; otherwise, a perturbation is added again and the comparison is repeated. As shown in FIG. 3, noise such as 1) Gaussian noise, 2) Rayleigh noise, 3) exponential noise, and 4) salt-and-pepper noise is added to an image sample to generate adversarial samples. Denote the original image sample by x and the set of adversarial samples by (x′₁, x′₂, …, x′ₙ). First, x′₁ is input into the deep neural network model and its output is compared with that of the original sample, using the Structural Similarity Index (SSIM) to measure the difference between the two model outputs. If the difference is large, i.e. the SSIM value is small, the high-impact perturbation Δx of the image sample has been obtained; otherwise image noise is added again to generate the adversarial sample x′₂, and the previous steps are repeated until the high-impact perturbation Δx of the image sample is obtained.
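The comparison loop of FIG. 3 can be sketched as below. The single-window SSIM here is a simplification of the usual sliding-window SSIM, and the function names, the 0.5 cutoff, and the `model` callable interface are illustrative assumptions rather than details from the patent:

```python
import numpy as np

def ssim_global(a, b, c1=1e-4, c2=9e-4):
    """Single-window SSIM over flattened arrays (simplified: no sliding window)."""
    a, b = a.ravel().astype(float), b.ravel().astype(float)
    mu_a, mu_b = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a**2 + mu_b**2 + c1) * (va + vb + c2))

def find_high_impact(model, x, candidate_perturbations, threshold=0.5):
    """Return the first perturbation whose model output differs strongly
    (low SSIM) from the clean output, mirroring the S101 loop; None otherwise."""
    f_clean = model(x)
    for dx in candidate_perturbations:
        f_adv = model(x + dx)
        if ssim_global(f_clean, f_adv) < threshold:  # large difference => high impact
            return dx
    return None
```

Candidates that barely change the model output are skipped; the first one producing a low SSIM between clean and perturbed outputs is reported as the high-impact perturbation Δx.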
S102: mine the lowest threshold of the test sample.
Specifically, the lowest threshold of the perturbation intensity can be mined from the output of the deep neural network model as the perturbation intensity increases incrementally.
Given an original image sample set x, the correct class y, and high-impact perturbations of gradually increasing intensity Δx ∈ {Δx₁, Δx₂, …, Δxₙ}, let x′ be the corresponding adversarial sample, x′ = x + Δx. If x is the input of the neural network, y the corresponding target output, and f the classifier function, then correct classification means f(x) = y. Let C(x, y) be the function indicating whether image x is classified as class y: C(f(x), y) = 1 if f(x) = y, and 0 otherwise.
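The indicator C can be written directly from this definition (assuming, purely for illustration, that the classifier f returns a logit vector and the predicted class is its argmax):

```python
import numpy as np

def C(logits, y):
    """C(f(x), y): 1 if the classifier output assigns x to class y, else 0."""
    return int(np.argmax(logits) == y)
```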
FIG. 4 is a flow chart of mining the lowest threshold according to the present invention. The adversarial sample x′ is generated by adding a high-impact perturbation Δx to the original image sample set x, and is input into the deep neural network model to obtain the output f(x′). As shown in FIG. 4, the output f(x′) is compared with the correct class y by the function C. If C(f(x′), y) = 1, i.e. f(x′) does not differ from y and x′ is still classified as y, the added perturbation intensity is insufficient and the neural network model has not been successfully influenced; if C(f(x′), y) = 0, then f(x′) is a recognition error and the sample is misclassified. While C(f(x′), y) = 1, Δx is increased step by step, recorded as Δx₀, Δx₁, Δx₂, …, and the correspondingly generated adversarial samples x′₀, x′₁, x′₂, … are input into the neural network model, the outputs f(x′) being compared with y, until C(f(x′), y) = 0: the input adversarial sample then successfully passes through the network, f(x′) differs from y, and a recognition error (misclassification) occurs. The Δxₙ added at this point is the perturbation-intensity value to be mined by the present invention, i.e. the lowest threshold of high-impact perturbation that affects the neural network.
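A minimal sketch of the FIG. 4 scan (the function name, the logit-vector `model` interface, and the scale grid are assumptions for illustration, not the patent's implementation):

```python
import numpy as np

def lowest_threshold(model, x, y, base_dx, scales):
    """Increase the perturbation intensity step by step and return the first
    scale at which C(f(x'), y) = 0, i.e. the model leaves the correct class y;
    None if every tested intensity is still classified correctly."""
    for s in scales:
        x_adv = x + s * base_dx
        if np.argmax(model(x_adv)) != y:   # C(f(x'), y) == 0: misclassified
            return s
    return None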
S103: mine the highest threshold of the test sample.
Specifically, the highest threshold of the perturbation intensity can be mined by analyzing the adversarial sample detection (screening) results of a deep neural network model with a detection (screening) function as the perturbation intensity increases incrementally.
FIG. 5 is a flow chart of mining the highest threshold according to the present invention. A deep neural network model with a detection (screening) function can detect (screen out) attack samples, and appropriately varying the perturbation intensity can evade detection (screening) while still influencing the recognition result. The procedure is similar to mining the lowest threshold: an adversarial sample x′ is generated by adding a high-impact perturbation Δx to the original image sample set x, and is input into the deep neural network model with the detection (screening) function to obtain the output f(x′). As shown in FIG. 5, the output f(x′) is compared with the correct class y by the function C. If C(f(x′), y) = 0, i.e. f(x′) is a recognition error, the misclassification indicates that the perturbation intensity can be further increased and that the current adversarial sample successfully evades detection (screening) while influencing the neural network model. Δx is increased step by step, recorded as Δx₀, Δx₁, Δx₂, …, and the correspondingly generated adversarial samples x′₀, x′₁, x′₂, … are input in turn; as long as a sample passes detection (screening) and C(f(x′), y) = 0, the perturbation intensity continues to increase. When the input adversarial sample x′ₙ with perturbation intensity Δxₙ is finally detected (screened out) by the detection (screening) function and no f(x′) is output, the intensity Δxₙ₋₁ added in the previous step is the value to be mined by the present invention, i.e. the highest threshold of high-impact perturbation affecting a neural network with the detection (screening) function.
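The FIG. 5 scan can be sketched analogously (the `detector` callable, function names, and scale grid are illustrative assumptions; the patent does not specify the detector's form):

```python
import numpy as np

def highest_threshold(model, detector, x, y, base_dx, scales):
    """Increase the intensity step by step and return the largest scale that
    both fools the classifier (C(f(x'), y) == 0) and still evades the detector.
    `detector(x_adv)` returns True when the sample is flagged as an attack."""
    best = None
    for s in scales:
        x_adv = x + s * base_dx
        if detector(x_adv):        # detected: previous scale was the upper bound
            break
        if np.argmax(model(x_adv)) != y:
            best = s
    return best
```

The value returned corresponds to Δxₙ₋₁, the last intensity that misleads the model without being screened out.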
S104: introduce a generalized non-negative matrix factorization algorithm to reduce data dimensionality and thereby reduce the perturbation.
Specifically, a generalized non-negative matrix factorization algorithm can be introduced before the deep neural network model to perform one pass of data dimensionality reduction, mitigating adversarial sample attacks and achieving a deep learning defense effect.
FIG. 6 is a diagram of the dimensionality-reduction defense process using the generalized non-negative matrix factorization algorithm of the present invention. For defending against high-impact perturbation attack samples, note that datasets such as text, video, sound and images are generally stored and used as non-negative matrices, and non-negative matrix factorization decomposes a non-negative matrix X into two low-rank non-negative matrices W and H with X ≈ WH. Dimensionality reduction by the generalized non-negative matrix factorization algorithm is therefore introduced, and the relationship between the reduced dimension and the reduction of perturbation in perturbation-based adversarial samples is studied and analyzed, so as to lower the error rate of the deep neural network model and achieve a defense function. As shown in FIG. 6, a high-impact perturbation Δx is added to the original image sample set x to generate an adversarial sample x′; the generalized non-negative matrix factorization algorithm reduces its dimensionality before it is input into the deep neural network model to obtain the output f(x′). SSIM is used to compare f(x′) with the output f(x) of the original sample through the deep neural network: if the difference is large, i.e. the SSIM value is small, the reduced dimension is changed and the generalized non-negative matrix factorization algorithm is applied to the generated adversarial sample again; if the difference is small, the dimensionality reduction effectively reduces the perturbation, lowers the recognition error rate of the deep neural network model, and achieves the defense effect.
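The factorization step X ≈ WH can be illustrated with plain multiplicative-update NMF (Lee–Seung style). This stands in for the patent's "generalized" variant without dimension-matching constraints, whose exact update rules the text does not give; the function name and iteration count are assumptions:

```python
import numpy as np

def nmf_reduce(X, rank, iters=200, rng=None, eps=1e-9):
    """Factor the non-negative matrix X ~= W @ H with inner dimension `rank`
    via multiplicative updates, and return the low-rank reconstruction W @ H,
    used here to suppress perturbation components outside the low-rank structure."""
    rng = np.random.default_rng(0) if rng is None else rng
    n, m = X.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(iters):
        # Multiplicative updates keep W and H non-negative by construction.
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W @ H
```

In the FIG. 6 loop, `rank` would be varied and the reconstruction fed to the model until the SSIM between f(x′) and f(x) is acceptably high.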
Claims (7)
1. A perturbation-based adversarial sample generation and adversarial defense method, characterized by comprising the following steps:
S101, for an image sample, adding perturbation to the original data, analyzing the perturbation characteristics applicable to the image sample, and mining high-impact perturbation based on a deep neural network model;
S102, according to the high-impact perturbation, finding the lowest threshold of the perturbation intensity for the deep neural network model from the output of the model as the perturbation intensity is incrementally increased;
S103, according to adversarial samples constructed from the high-impact perturbation, analyzing the detection results of a deep neural network model with a detection function as the perturbation intensity is incrementally increased, so as to find the highest threshold of the perturbation intensity;
S104, introducing a generalized non-negative matrix factorization algorithm before the deep neural network model to reduce the data dimensionality, thereby mitigating adversarial sample attacks and defending deep neural network learning.
2. The method of claim 1, characterized in that: in step S101, the high-impact perturbation is obtained by adding suitable perturbations to the original image sample data to generate adversarial samples, inputting the original image sample and the adversarial sample into the deep neural network model respectively, comparing the output results, and analyzing the difference: if the difference is large, the high-impact perturbation has been obtained; otherwise, a perturbation is added again and the comparison is repeated.
3. The method of claim 2, characterized in that: the added perturbation comprises any one or any combination of Gaussian noise, Rayleigh noise, exponential noise and salt-and-pepper noise.
4. The method of claim 1, characterized in that: step S102 specifically comprises adding a high-impact perturbation Δx to an original image sample set x to generate an adversarial sample x′; inputting the generated adversarial sample x′ into the deep neural network model to obtain an output f(x′), inputting the original image sample set x into the deep neural network model to obtain an output y, and comparing them with a function C; if C(f(x′), y) = 1, Δx is increased step by step, recorded as Δx₀, Δx₁, Δx₂, …, and the correspondingly generated adversarial samples x′₀, x′₁, x′₂, … are input into the neural network model, the outputs f(x′) being compared with y, until C(f(x′), y) = 0; the Δxₙ added at this point is the lowest threshold of the perturbation intensity, where C(f(x′), y) denotes whether the image x′ is classified as class y.
5. The method of claim 1, characterized in that: step S103 specifically comprises adding a high-impact perturbation Δx to the original image sample set x to generate an adversarial sample x′; inputting the generated adversarial sample x′ into a deep neural network model with a detection function to obtain an output f(x′); if C(f(x′), y) = 0, Δx is increased step by step, recorded as Δx₀, Δx₁, Δx₂, …, and the correspondingly generated adversarial samples x′₀, x′₁, x′₂, … are input into the neural network model; as long as a sample passes detection and C(f(x′), y) = 0, the perturbation intensity continues to increase, until the input adversarial sample x′ₙ with perturbation intensity Δxₙ is detected by the model's detection function and no f(x′) is output; the perturbation intensity Δxₙ₋₁ added in the previous step is then the highest threshold of the perturbation intensity to be found.
6. The method of claim 1, characterized in that: step S104 comprises reducing the dimensionality of the adversarial sample with a generalized non-negative matrix factorization algorithm, inputting the result into the deep neural network model to obtain an output f(x′), and comparing and analyzing the difference between f(x′) and the output f(x) of the original sample through the deep neural network; if the difference is large, the reduced dimension is changed and the generalized non-negative matrix factorization algorithm is applied to the generated adversarial sample again for data dimensionality reduction.
7. A computer-readable storage medium, characterized in that: the computer-readable storage medium stores computer instructions for causing a computer to perform the method of any one of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011351688.XA CN112465019B (en) | 2020-11-26 | 2020-11-26 | Perturbation-based adversarial sample generation and adversarial defense method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011351688.XA CN112465019B (en) | 2020-11-26 | 2020-11-26 | Perturbation-based adversarial sample generation and adversarial defense method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112465019A (publication) | 2021-03-09 |
CN112465019B CN112465019B (en) | 2022-12-27 |
Family
ID=74807984
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011351688.XA Active CN112465019B (en) | 2020-11-26 | 2020-11-26 | Countermeasure sample generation and countermeasure defense method based on disturbance |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112465019B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- CN113505886A * | 2021-07-08 | 2021-10-15 | 深圳市网联安瑞网络科技有限公司 | Adversarial sample generation method, system, terminal and medium based on fuzz testing |
WO2023019970A1 (en) * | 2021-08-20 | 2023-02-23 | 华为技术有限公司 | Attack detection method and apparatus |
WO2023087759A1 (en) * | 2021-11-18 | 2023-05-25 | 华为技术有限公司 | Method and apparatus for testing deep learning model |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170168740A1 (en) * | 2015-12-10 | 2017-06-15 | SK Hynix Inc. | Reducing read disturb in data storage |
CN108135003A (en) * | 2017-12-25 | 2018-06-08 | 广东海格怡创科技有限公司 | Method and system for constructing an interference-type identification model |
CN109740346A (en) * | 2018-12-29 | 2019-05-10 | 南方电网科学研究院有限责任公司 | Privacy protection method and system based on edge computing in power systems |
CN110941794A (en) * | 2019-11-27 | 2020-03-31 | 浙江工业大学 | Adversarial attack defense method based on a universal inverse-perturbation defense matrix |
WO2020143227A1 (en) * | 2019-01-07 | 2020-07-16 | 浙江大学 | Method for generating malicious samples for industrial control systems based on adversarial learning |
CN111600835A (en) * | 2020-03-18 | 2020-08-28 | 宁波送变电建设有限公司永耀科技分公司 | Detection and defense method based on the FGSM adversarial attack algorithm |
CN111627044A (en) * | 2020-04-26 | 2020-09-04 | 上海交通大学 | Target tracking attack and defense method based on deep networks |
CN111652290A (en) * | 2020-05-15 | 2020-09-11 | 深圳前海微众银行股份有限公司 | Method and device for detecting adversarial examples |
CN111667049A (en) * | 2019-03-08 | 2020-09-15 | 国际商业机器公司 | Quantifying vulnerability of deep learning computing systems to adversarial perturbations |
CN111950635A (en) * | 2020-08-12 | 2020-11-17 | 温州大学 | Robust feature learning method based on hierarchical feature alignment |
Non-Patent Citations (9)
Title |
---|
AI Technology Review (AI科技评论): "Rethinking adversarial examples: merely setting a smaller perturbation threshold ε may not be enough", HTTPS://WWW.163.COM/DY/ARTICLE/EP2CHUF40511DPVD.HTML *
DIEGO GRAGNANIELLO et al.: "Perceptual Quality-preserving Black-Box Attack against Deep Learning Image Classifiers", arXiv:1902.07776v1 [cs.CV] *
MENON A. K. et al.: "Link prediction via matrix factorization", Machine Learning and Knowledge Discovery *
XIAOYONG YUAN et al.: "Adversarial Examples: Attacks and Defenses for Deep Learning", arXiv:1712.07107v3 [cs.LG] *
XIA Bin et al.: "System log-level anomaly detection algorithm based on generative adversarial networks", Journal of Computer Applications *
LI Yifei: "Lightweight CNN human activity recognition and attack methods", China Master's Theses Full-text Database (Information Science and Technology) *
LI Xiangkun et al.: "A universal perturbation generation algorithm for neural networks in image recognition", Journal of Systems Science and Mathematical Sciences *
YANG Yiyun et al.: "A survey of adversarial example attack and defense methods for visual perception in intelligent driving", Journal of Nanjing University of Information Science and Technology (Natural Science Edition) *
HE Linsheng: "Research on a peep-proof password keyboard using adaptive hybrid images", China Master's Theses Full-text Database (Information Science and Technology) *
Also Published As
Publication number | Publication date |
---|---|
CN112465019B (en) | 2022-12-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Halbouni et al. | CNN-LSTM: hybrid deep neural network for network intrusion detection system | |
CN112465019B (en) | Perturbation-based adversarial example generation and adversarial defense method | |
Abou Khamis et al. | Investigating resistance of deep learning-based IDS against adversaries using min-max optimization | |
CN109214460B (en) | Power transformer fault diagnosis method based on relative transformation and kernel entropy component analysis | |
CN111325324A (en) | Deep learning adversarial example generation method based on a second-order method | |
Balduzzi et al. | Neural Taylor approximations: Convergence and exploration in rectifier networks | |
CN117134969A (en) | Intrusion detection algorithm based on a diffusion generative adversarial network and improved beluga whale optimization | |
Disha et al. | A Comparative study of machine learning models for Network Intrusion Detection System using UNSW-NB 15 dataset | |
Zuo et al. | Filter pruning without damaging networks capacity | |
Li et al. | Unbalanced network attack traffic detection based on feature extraction and GFDA-WGAN | |
Li et al. | Sa-es: Subspace activation evolution strategy for black-box adversarial attacks | |
CN114118268B (en) | Adversarial attack method and system for generating uniformly distributed perturbations using spikes as probabilities | |
CN115131549B (en) | Saliency target detection training method based on self-boosting learning | |
Gajjar et al. | Generating Targeted Adversarial Attacks and Assessing their Effectiveness in Fooling Deep Neural Networks | |
Akan et al. | Just noticeable difference for machine perception and generation of regularized adversarial images with minimal perturbation | |
CN114580462A (en) | Acoustic adversarial example generation method based on partitioned perturbation | |
Xiong et al. | Bi-LSTM: Finding Network Anomaly Based on Feature Grouping Clustering | |
Wang et al. | Enhancing robustness of classifiers based on pca | |
Zhang et al. | A novel noise injection-based training scheme for better model robustness | |
Rakesh et al. | Evaluation of Network Intrusion Detection with Machine Learning and Deep Learning Using Ensemble Methods on CICIDS-2017 Dataset | |
CN115314254B (en) | Semi-supervised malicious traffic detection method based on improved WGAN-GP | |
LU505793B1 (en) | Defensive method against interpretability camouflage samples in deep recognition neural networks | |
Vardhan | An ensemble approach for explanation-based adversarial detection | |
CN115271067B (en) | Android adversarial example attack method based on feature relation evaluation | |
Shakir et al. | Use of Singular Value Decomposition for a Deep Learning-Based Fast Intrusion Detection System |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||