CN117765349A - Method for generating challenge sample, related device and storage medium - Google Patents
- Publication number: CN117765349A (application number CN202311773600.7A)
- Authority: CN (China)
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Abstract
The embodiments of the present application relate to the field of artificial intelligence and provide a method for generating an adversarial sample, a related device, and a storage medium. The method comprises: acquiring an original image containing a target to be attacked; acquiring at least two first adversarial regions on the original image, wherein the at least two first adversarial regions do not overlap; perturbing the original image to obtain a first perturbed image; generating a target image based on the portions of the first perturbed image located within the at least two first adversarial regions; and, when the target image successfully attacks a target detection model, outputting the target image as an adversarial sample. By generating a target image whose adversarial features are distributed across several disjoint regions and using it for the model attack, the method overcomes the technical drawback that a single contiguous adversarial sample is easily detected and identified by defense algorithms, thereby improving the success rate of targeted attacks performed with the generated adversarial samples.
Description
Technical Field
The embodiments of the present application relate to the field of artificial intelligence, and in particular to a method for generating an adversarial sample, a related device, and a storage medium.
Background
In autonomous driving scenarios, recognition and detection of objects such as vehicles and pedestrians is one of the key tasks. At the same time, research on adversarial attacks against target objects in the physical world is particularly necessary for the practical deployment of autonomous driving algorithm systems. Physical-world adversarial attacks are mainly carried out with adversarial samples: an attack algorithm generates an aggressive adversarial sample, which is pasted or attached to the surface of a target object so that a detection model misses or misclassifies the object, thereby achieving the purpose of the attack. However, the adversarial sample generated by existing algorithms is generally a single complete, contiguous area and can be detected by some defense algorithms in one pass. Moreover, because such a sample is a unified whole, it is difficult, under a limited sample area, to make the attacked detection model produce a false detection box of a size similar to that of a real target from one adversarial sample. The size of the detection box is particularly important in adversarial tasks other than vanishing attacks (such as targeted attack recognition), so the success rate of targeted attacks performed with the generated adversarial samples is low.
Disclosure of Invention
The embodiments of the present application provide a method for generating an adversarial sample, a related device, and a storage medium, which can improve the success rate of targeted attacks performed with the generated adversarial samples.
In a first aspect, an embodiment of the present application provides a method for generating an adversarial sample, the method comprising:
acquiring an original image containing a target to be attacked;
acquiring at least two first adversarial regions on the original image, wherein the at least two first adversarial regions do not overlap;
perturbing the original image to obtain a first perturbed image;
generating a target image based on the portions of the first perturbed image located within the at least two first adversarial regions; and
outputting the target image as an adversarial sample when the target image successfully attacks a target detection model.
In one embodiment, the method for generating an adversarial sample further comprises:
when the target image does not successfully attack the target detection model, perturbing the first perturbed image again to obtain a second perturbed image;
generating a new target image based on the portions of the second perturbed image located within the at least two first adversarial regions; and
outputting the new target image as an adversarial sample when the new target image successfully attacks the target detection model.
In one embodiment, the method for generating an adversarial sample further comprises:
when the target image does not successfully attack the target detection model, acquiring at least two second adversarial regions on the original image, wherein the at least two second adversarial regions differ from the at least two first adversarial regions;
generating a new target image based on the portions of the first perturbed image located within the at least two second adversarial regions; and
outputting the new target image as an adversarial sample when the new target image successfully attacks the target detection model.
In one embodiment, the method for generating an adversarial sample further comprises:
when the target detection model is successfully attacked, recording the number of successful attacks in the current round;
judging whether the number of successful attacks in the current round reaches a preset number; and
if the number of successful attacks in the current round reaches the preset number, outputting the target image of the current attack as an adversarial sample.
In one embodiment, acquiring the at least two first adversarial regions on the original image comprises:
obtaining a target type of the target to be attacked in the original image;
dividing the original image into at least two sub-regions based on the target type; and
randomly generating one first adversarial region in each sub-region to obtain the at least two first adversarial regions, wherein the area of each first adversarial region is smaller than the area of the sub-region in which it is located.
In a second aspect, an embodiment of the present application provides a device for generating an adversarial sample, which has the function of implementing the method for generating an adversarial sample corresponding to the first aspect. The function may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the function above, and the modules may be software and/or hardware.
In one embodiment, the device for generating an adversarial sample comprises:
a first acquisition module configured to acquire an original image containing a target to be attacked;
a second acquisition module configured to acquire at least two first adversarial regions on the original image, wherein the at least two first adversarial regions do not overlap;
a perturbation module configured to perturb the original image to obtain a first perturbed image;
a generation module configured to generate a target image based on the portions of the first perturbed image located within the at least two first adversarial regions; and
an output module configured to output the target image as an adversarial sample when the target image successfully attacks a target detection model.
In one embodiment, the output module is configured to:
when the target image does not successfully attack the target detection model, perturb the first perturbed image again to obtain a second perturbed image;
generate a new target image based on the portions of the second perturbed image located within the at least two first adversarial regions; and
output the new target image as an adversarial sample when the new target image successfully attacks the target detection model.
In one embodiment, the output module is configured to:
when the target image does not successfully attack the target detection model, acquire at least two second adversarial regions on the original image, wherein the at least two second adversarial regions differ from the at least two first adversarial regions;
generate a new target image based on the portions of the first perturbed image located within the at least two second adversarial regions; and
output the new target image as an adversarial sample when the new target image successfully attacks the target detection model.
In one embodiment, the output module is configured to:
when the target detection model is successfully attacked, record the number of successful attacks in the current round;
judge whether the number of successful attacks in the current round reaches a preset number; and
if the number of successful attacks in the current round reaches the preset number, output the target image of the current attack as an adversarial sample.
In one embodiment, the second acquisition module is configured to:
obtain a target type of the target to be attacked in the original image;
divide the original image into at least two sub-regions based on the target type; and
randomly generate one first adversarial region in each sub-region to obtain the at least two first adversarial regions, wherein the area of each first adversarial region is smaller than the area of the sub-region in which it is located.
In a third aspect, embodiments of the present application provide a computer-readable storage medium comprising instructions which, when run on a computer, cause the computer to perform the method for generating an adversarial sample described in the first aspect.
In a fourth aspect, an embodiment of the present application provides a computing device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the method for generating an adversarial sample described in the first aspect when executing the computer program.
In a fifth aspect, an embodiment of the present application provides a chip, the chip comprising a processor coupled to a transceiver of a terminal device and configured to perform the technical solution provided in the first aspect of the embodiments of the present application.
In a sixth aspect, an embodiment of the present application provides a chip system, the chip system comprising a processor configured to support a terminal device in implementing the functions involved in the first aspect, for example, generating or processing the information involved in the method provided in the first aspect.
In one possible design, the chip system further comprises a memory for holding the program instructions and data necessary for the terminal device. The chip system may consist of a chip, or may comprise a chip and other discrete devices.
In a seventh aspect, embodiments of the present application provide a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method for generating an adversarial sample provided in the first aspect.
Compared with the prior art, in the embodiments of the present application, an original image containing a target to be attacked is acquired; at least two non-overlapping first adversarial regions are acquired on the original image; the original image is perturbed to obtain a first perturbed image; a target image is generated based on the portions of the first perturbed image located within the at least two first adversarial regions; and, when the target image successfully attacks a target detection model, the target image is output as an adversarial sample. By acquiring a plurality of distributed first adversarial regions on the original image, collecting the features of the perturbed image within these distributed regions, and then generating a target image whose adversarial features follow a distributed pattern for the model attack, the embodiments overcome the technical drawback that a single contiguous adversarial sample is easily detected and identified by defense algorithms, thereby improving the success rate of targeted attacks performed with the generated adversarial samples.
Drawings
The objects, features, and advantages of the embodiments of the present application will become apparent from the following detailed description read with reference to the accompanying drawings, in which:
FIG. 1 is a schematic diagram of an adversarial sample generation system to which the method for generating adversarial samples in the embodiments of the present application applies;
FIG. 2 is a flow chart of a method for generating an adversarial sample according to an embodiment of the present application;
FIG. 3 is a diagram showing a specific example of at least two first adversarial regions in the method for generating an adversarial sample according to an embodiment of the present application;
FIG. 4 is a diagram showing a specific example of a target image in the method for generating an adversarial sample according to an embodiment of the present application;
FIG. 5 is a flow chart of another method for generating an adversarial sample according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of a device for generating adversarial samples according to an embodiment of the present application;
FIG. 7 is a schematic structural diagram of a computing device according to an embodiment of the present application;
FIG. 8 is a schematic structural diagram of a mobile phone according to an embodiment of the present application;
FIG. 9 is a schematic structural diagram of a server according to an embodiment of the present application.
In the drawings, the same or corresponding reference numerals indicate the same or corresponding parts.
Detailed Description
The terms "first", "second", and the like in the description, the claims, and the above figures are used to distinguish between similar objects and do not necessarily describe a particular sequence or chronological order. It is to be understood that data so used may be interchanged where appropriate, so that the embodiments described herein can be implemented in orders other than those illustrated or described. Furthermore, the terms "comprise", "include", and any variations thereof are intended to cover a non-exclusive inclusion: a process, method, system, article, or apparatus comprising a list of steps or modules is not necessarily limited to the steps or modules expressly listed, and may include other steps or modules that are not listed or that are inherent to such a process, method, article, or apparatus. The division of modules in the embodiments of the present application is only a logical division; in practical implementation there may be other divisions, for example, a plurality of modules may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the coupling, direct coupling, or communication connection shown or discussed may be indirect coupling between modules via interfaces, and the communication connection may be electrical or take other similar forms, which are not limited in this application. Modules or sub-modules described as separate components may or may not be physically separate, may or may not be physical modules, and may be distributed over a plurality of circuit modules; some or all of them may be selected according to actual needs to achieve the purposes of the embodiments of the present application.
Generative artificial intelligence methods and software have been widely used, creating potential risks and hazards. Besides personal information leakage, fake news and fabricated statements by key figures produced with generative AI tools are hard to distinguish and spread rapidly; they can seriously mislead public opinion, affect national security and social stability, and may become a new means of cybercrime that infringes the legal rights and interests of citizens.
General-purpose large models are widely applied in scenarios such as chat dialogue, text editing, artistic creation, code writing, mathematical reasoning, and bioinformatics. Although they have created many new business models and are highly capable, once a general-purpose large model is made available to users, its main applications in translation, chat, and collaboration face algorithm risks, data risks, and application risks.
The embodiments of the present application also provide a method for generating an adversarial sample, a related device, and a storage medium, which can be applied to an adversarial sample generation system. The system may comprise a device for generating adversarial samples, and the device may be deployed in an integrated or a distributed manner. The device is used at least for: acquiring an original image containing a target to be attacked; acquiring at least two non-overlapping first adversarial regions on the original image; perturbing the original image to obtain a first perturbed image; generating a target image based on the portions of the first perturbed image located within the at least two first adversarial regions; and outputting the target image as an adversarial sample when the target image successfully attacks a target detection model.
The solution provided in the embodiments of the present application relates to artificial intelligence (AI), computer vision (CV), machine learning (ML), and the like, as described in the following embodiments.
AI is the theory, method, technology, and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain optimal results. In other words, artificial intelligence is a comprehensive branch of computer science that attempts to understand the essence of intelligence and produce a new kind of intelligent machine that can react in a manner similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that machines have the functions of perception, reasoning, and decision-making.
AI technology is a comprehensive discipline involving a wide range of fields, covering both hardware and software. Basic artificial intelligence technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, operation/interaction systems, mechatronics, and the like. Artificial intelligence software technology mainly includes computer vision, speech processing, natural language processing, and machine learning/deep learning.
CV is the science of how to make machines "see"; more specifically, it uses cameras and computers instead of human eyes to recognize, track, and measure targets, and further performs graphic processing so that the processed images are more suitable for human observation or for transmission to instruments for detection. As a scientific discipline, computer vision studies related theories and technologies in an attempt to build artificial intelligence systems that can acquire information from images or multi-dimensional data. Computer vision technologies typically include adversarial-perturbation generation, image recognition, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D technology, virtual reality, augmented reality, and simultaneous localization and mapping, as well as common biometric technologies such as face recognition and fingerprint recognition.
The adversarial sample generated in the prior art is generally a single complete, contiguous area and can be detected by some defense algorithms in one pass. Moreover, because the adversarial sample generated by an existing attack algorithm is a unified whole, it is difficult, under a limited sample area, to make the attacked detection model produce a false detection box of a size similar to that of a real target from one adversarial sample. The size of the detection box is particularly important in adversarial tasks other than vanishing attacks (such as targeted attack recognition), so the success rate of targeted attacks performed with the generated adversarial samples is low.
Compared with the prior art, in the embodiments of the present application, a plurality of distributed first adversarial regions are acquired on an original image containing a target to be attacked; the features of the perturbed image, obtained by perturbing the original image, are collected within these distributed regions; and a target image whose adversarial features follow a distributed pattern is then generated for the model attack. Generating distributed adversarial samples overcomes the technical drawback that a single contiguous adversarial sample is easily detected and identified by defense algorithms, thereby improving the success rate of targeted attacks performed with the generated adversarial samples.
In some embodiments, referring to FIG. 1, the method for generating adversarial samples provided in the embodiments of the present application may be implemented based on the adversarial sample generation system shown in FIG. 1. The system may include an electronic device 100 and a memory 200. The electronic device 100 may be a server or a terminal device.
It should be noted that the server in the embodiments of the present application may be an independent physical server, a server cluster or distributed system composed of a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN, big data, and artificial intelligence platforms.
The terminal device in the embodiments of the present application may be a device that provides voice and/or data connectivity to a user, a handheld device with a wireless connection function, or another processing device connected to a wireless modem: for example, a mobile telephone (or "cellular" telephone) or a computer with a mobile terminal, including portable, pocket-sized, handheld, computer-built-in, or vehicle-mounted mobile devices that exchange voice and/or data with a radio access network, such as personal communication service (PCS) telephones, cordless telephones, Session Initiation Protocol (SIP) phones, wireless local loop (WLL) stations, and personal digital assistants (PDAs).
Referring to FIG. 2, FIG. 2 is a flow chart of a method for generating an adversarial sample according to an embodiment of the present application. The method may be performed by a device for generating adversarial samples and comprises steps 101 to 105.
step 101, an original image containing an object to be attacked is acquired.
The target to be attacked may be a human body, a vehicle, or the like, and may be set according to the specific situation. The original image may be any image uploaded by a user. For example, a user photographs a vehicle and uploads the photograph, thereby providing an original image containing the target to be attacked.
Step 102, at least two first adversarial regions on the original image are acquired.
The at least two first adversarial regions do not overlap.
In this embodiment of the present application, the number of first adversarial regions may be 4, or of course 2, 3, or another number, which may be set according to the specific situation; this application is not limited in this respect.
As shown in FIG. 3, 4 first adversarial regions are acquired on the original image: adversarial region A, adversarial region B, adversarial region C, and adversarial region D.
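As a minimal sketch (not part of the patent text; the array shapes and region coordinates are illustrative assumptions), the later step of composing a target image from the portions of the perturbed image inside such non-overlapping regions can be written as:

```python
import numpy as np

def compose_target_image(original, perturbed, regions):
    """Keep the perturbation only inside the given non-overlapping
    rectangular regions; elsewhere the original pixels are retained.

    regions: list of (top, left, height, width) tuples.
    """
    target = original.copy()
    for (top, left, h, w) in regions:
        target[top:top + h, left:left + w] = perturbed[top:top + h, left:left + w]
    return target

# Four non-overlapping corner regions (A-D) on a toy 8x8 grayscale image.
original = np.zeros((8, 8))
perturbed = np.ones((8, 8))
regions = [(0, 0, 2, 2), (0, 6, 2, 2), (6, 0, 2, 2), (6, 6, 2, 2)]
target = compose_target_image(original, perturbed, regions)
```

The resulting image is unchanged outside the regions, so the adversarial features are distributed across the four patches rather than forming one contiguous block.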
In a specific embodiment, acquiring at least two first adversarial regions on the original image comprises:
(1) Obtaining the target type of the target to be attacked in the original image.
In the embodiments of the present application, the target type may be a vehicle type, a human body type, an animal type, or the like. It may be specified by the user when uploading the original image, obtained from the user's upload information, or obtained by detecting the original image with a target detection model.
(2) Dividing the original image into at least two sub-regions based on the target type.
A correspondence between target types and sub-region division schemes can be preset. For example, if the target type is a vehicle type, the original image is equally divided into 4 sub-regions distributed in a grid; if the target type is a human body type, the original image is equally divided into 3 sub-regions distributed horizontally or vertically.
(3) Randomly generating one first adversarial region in each sub-region to obtain at least two first adversarial regions.
The area of each first adversarial region is smaller than the area of the sub-region in which it is located.
In this embodiment of the present application, the first adversarial region is a rectangular area. In other embodiments, it may be circular or another shape, which may be set according to the specific situation; this application is not limited in this respect.
In order to make the first adversarial regions contain as many features of the target to be attacked as possible, in another specific embodiment, dividing the original image into at least two sub-regions based on the target type comprises: obtaining the target type of the target to be attacked in the original image; performing target detection on the target to be attacked in the original image to obtain an attack detection box, wherein the attack detection box is the minimum bounding rectangle of the target to be attacked in the original image; and dividing the portion of the original image located within the attack detection box into at least two sub-regions based on the target type. A correspondence between target types and sub-region division schemes can be preset. For example, if the target type is a vehicle type, the portion of the original image within the attack detection box is equally divided into 4 sub-regions distributed in a grid; if the target type is a human body type, that portion is equally divided into 3 sub-regions distributed horizontally or vertically. One first adversarial region is then randomly generated in each sub-region to obtain at least two first adversarial regions.
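The division-and-random-placement procedure above can be sketched as follows. This is an illustrative reading of the embodiment, not the patent's own code; the detection-box coordinates and the `scale` parameter are assumptions, and placing each region strictly inside its own sub-region guarantees the regions never overlap:

```python
import random

def split_into_grid(box, rows, cols):
    """Split a (top, left, height, width) box into rows*cols equal sub-regions."""
    top, left, h, w = box
    sh, sw = h // rows, w // cols
    return [(top + r * sh, left + c * sw, sh, sw)
            for r in range(rows) for c in range(cols)]

def random_region_in(sub, scale=0.5, rng=random):
    """Randomly place one adversarial region inside a sub-region; its area
    is strictly smaller than the sub-region's area."""
    top, left, h, w = sub
    rh, rw = max(1, int(h * scale)), max(1, int(w * scale))
    return (top + rng.randrange(h - rh + 1),
            left + rng.randrange(w - rw + 1), rh, rw)

# Vehicle-type target: 2x2 grid over a hypothetical attack detection box.
detection_box = (10, 20, 100, 200)   # top, left, height, width
subs = split_into_grid(detection_box, 2, 2)
regions = [random_region_in(s) for s in subs]
```

Because each sub-region yields exactly one region, a vehicle-type target produces 4 distributed first adversarial regions, matching the grid division described above.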
And 103, disturbing the original image to obtain a first disturbance image.
In the embodiment of the application, the original image is perturbed based on a preset countermeasure sample generation strategy to obtain a first disturbance image. The preset countermeasure sample generation strategy may be the FGSM (Fast Gradient Sign Method) algorithm, the BIM (Basic Iterative Method) algorithm, the JSMA (Jacobian-based Saliency Map Attack) algorithm, or the DeepFool algorithm.
The core idea of the FGSM algorithm is to use gradient information of the input data to construct the perturbation, thereby generating a countermeasure sample. Specifically, the FGSM algorithm calculates the gradient of the loss function with respect to the input data, and then adds a small disturbance whose direction is the sign of that gradient.
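A minimal FGSM step can be sketched as follows, here against a hypothetical logistic-regression model so that the input gradient has a simple closed form; the function names and the model itself are illustrative assumptions, not part of the embodiments.

```python
import numpy as np

def fgsm_perturb(x, grad, eps=0.03):
    """FGSM step: move each pixel by eps in the sign direction of the loss
    gradient w.r.t. the input, then clip to the valid pixel range [0, 1]."""
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

def logistic_input_gradient(x, w, b, y):
    """dL/dx of the cross-entropy loss of sigmoid(w.x + b) against label y:
    dL/dx = (sigmoid(w.x + b) - y) * w."""
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    return (p - y) * w
```

One step in the sign direction of the gradient increases the loss of the true label, which is exactly the FGSM objective.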
The BIM algorithm was proposed by Kurakin et al. in "Adversarial examples in the physical world" (2017). Its principle is to first find the class with the lowest classification confidence, perform gradient calculation along the direction of that class, and thereby obtain the corresponding countermeasure sample.
The inspiration for the JSMA algorithm comes from saliency maps in the field of computer vision. In short, different input features have different degrees of influence on different outputs of the classifier. If certain features are found to correspond to a particular output of the classifier, the classifier can be made to produce a specified type of output by enhancing those features in the input samples. The JSMA algorithm mainly comprises three steps: calculating the forward derivative, calculating the adversarial saliency map, and adding the disturbance.
The DeepFool algorithm assumes that the neural network is perfectly linear; its proposers consider that the neural network divides the space in which the training data resides into different regions by hyperplanes, each region being assigned to one class. Based on this assumption, the core idea of DeepFool is to find, by constant iteration, the minimal countermeasure disturbance that can push the sample across the classification boundary.
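Under the linearity assumption above, the minimal disturbance for an affine binary classifier f(x) = w·x + b has the closed form r = -f(x)/||w||^2 · w; one DeepFool-style step can be sketched as follows (the `overshoot` factor and the function name are assumptions).

```python
import numpy as np

def deepfool_linear_step(x, w, b, overshoot=0.02):
    """Minimal perturbation pushing x across the hyperplane w.x + b = 0,
    i.e. the closed-form DeepFool step for an affine binary classifier.
    A small overshoot ensures the sample actually crosses the boundary."""
    f = float(x @ w + b)
    r = -f / float(w @ w) * w          # projection of x onto the hyperplane
    return x + (1.0 + overshoot) * r
```

For a deep network, DeepFool applies this step iteratively to the local linearization of the model until the predicted class changes.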
In the embodiment of the application, preset noise can be acquired and added to the original image to perturb it, so as to obtain the first disturbance image. The magnitude of the countermeasure disturbance noise may be specified by the user.
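A sketch of this noise-based perturbation, assuming pixel values in [0, 1] and uniform noise of a user-specified magnitude (both assumptions for illustration):

```python
import numpy as np

def perturb_with_preset_noise(image, magnitude=0.05, rng=None):
    """Add user-specified uniform noise of the given magnitude to the
    original image to obtain the first disturbance image."""
    rng = rng or np.random.default_rng(0)
    noise = rng.uniform(-magnitude, magnitude, size=image.shape)
    return np.clip(image + noise, 0.0, 1.0)
```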
Step 104, generating a target image based on the images of the first disturbance image located within the at least two first countermeasure areas.
In the embodiment of the application, the images of the first disturbance image located in the at least two first countermeasure areas are embedded into a target template to obtain a new target image. The size of the target template is the same as that of the first disturbance image, the image characteristics of the target template differ from those of the first disturbance image, and the positions of the at least two first countermeasure areas on the new target image are the same as their positions on the first disturbance image. Specifically, the target template may be a solid white image, a solid black image, an image of another color, or an image with another texture.
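The embedding of the countermeasure-area contents into a solid-colored template can be sketched as follows; the function name and the (x, y, w, h) area convention are assumptions.

```python
import numpy as np

def compose_target_image(perturbed, areas, template=None):
    """Embed the pieces of the first disturbance image that fall inside the
    countermeasure areas into a template of the same size (solid white by
    default), keeping each area at its original position."""
    target = np.ones_like(perturbed) if template is None else template.copy()
    for x, y, w, h in areas:
        target[y:y + h, x:x + w] = perturbed[y:y + h, x:x + w]
    return target
```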
For example, as shown in fig. 4, the target template is a solid white image, and the target image includes images of the first disturbance image in the countermeasure area a, the countermeasure area B, the countermeasure area C, and the countermeasure area D.
In the embodiment of the application, target detection is performed on the target to be attacked in the original image to obtain an attack detection frame; the color information within the attack detection frame is calculated; the color information of each preset template is obtained; the degree of color information difference between each preset template and the attack detection frame is calculated; and the preset template with the largest color information difference from the attack detection frame is determined as the target template.
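A sketch of selecting the target template by largest color difference, here measured (as one possible choice, an assumption) by the Euclidean distance between mean colors:

```python
import numpy as np

def select_target_template(image, box, templates):
    """Pick the preset template whose mean color differs most from the
    mean color inside the attack detection frame (x, y, w, h)."""
    x, y, w, h = box
    region_color = image[y:y + h, x:x + w].reshape(-1, image.shape[-1]).mean(axis=0)
    diffs = [np.linalg.norm(t.reshape(-1, t.shape[-1]).mean(axis=0) - region_color)
             for t in templates]
    return templates[int(np.argmax(diffs))]
```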
Step 105, outputting the target image as a countermeasure sample in case the target image successfully attacks the target detection model.
In the embodiment of the application, the target detection model is attacked with the target image, whether the target image successfully attacks the target detection model is judged, and the target image is output as a countermeasure sample in case the attack is successful.
In a specific embodiment, the target image is input into the target detection model, and if the target detection model detects on the target image a target detection frame of the same type as the target to be attacked, it is determined that the target image successfully attacks the target detection model. If the target detection model detects that no target detection frame of the same type as the target to be attacked exists on the target image, it is determined that the target image does not successfully attack the target detection model.
In another specific embodiment, the original image is input into the target detection model to obtain an original detection frame of the target to be attacked, and the target image is input into the target detection model. If the target detection model detects on the target image a target detection frame of the same type as the target to be attacked, whether the intersection-over-union (IoU) of the target detection frame and the original detection frame is higher than a preset IoU is judged; if so, it is determined that the target image successfully attacks the target detection model. If the IoU of the target detection frame and the original detection frame is not higher than the preset IoU, it is determined that the target image does not successfully attack the target detection model.
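The IoU-based success judgment of this embodiment can be sketched as follows; the (x1, y1, x2, y2) box convention and the 0.5 default threshold are illustrative assumptions.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter)

def attack_succeeded(detected_box, original_box, iou_threshold=0.5):
    """Per this embodiment: the attack is judged successful when a detection
    of the attacked class still overlaps the original box above threshold."""
    return detected_box is not None and iou(detected_box, original_box) > iou_threshold
```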
Further, in a specific embodiment, the method for generating the challenge sample further includes:
(1) And under the condition that the target image does not attack the target detection model successfully, carrying out disturbance again on the first disturbance image to obtain a second disturbance image.
In the embodiment of the application, the first disturbance image is perturbed again based on the preset countermeasure sample generation strategy to obtain a second disturbance image.
(2) A new target image is generated based on the images of the second disturbance image located within the at least two first countermeasure areas.
In the embodiment of the application, the images of the second disturbance image located in the at least two first countermeasure areas are embedded into the target template to obtain a new target image. The size of the target template is the same as that of the second disturbance image, the image characteristics of the target template differ from those of the second disturbance image, and the positions of the at least two first countermeasure areas on the new target image are the same as their positions on the second disturbance image. Specifically, the target template may be a solid white image, a solid black image, or an image of another color.
(3) In case the new target image successfully attacks the target detection model, the new target image is output as a challenge sample.
In the embodiment of the application, the target detection model is attacked with the new target image; the new target image is output as a countermeasure sample in case the attack is successful, and in case the attack is unsuccessful, the disturbance image is perturbed again and a further new target image is generated to attack the target detection model.
Further, in another specific embodiment, the method for generating the challenge sample further includes:
(1) In case the new target image does not successfully attack the target detection model, at least two second challenge areas on the original image are acquired.
Wherein the at least two second countermeasure areas are different from the at least two first countermeasure areas.
In this embodiment of the present application, acquiring at least two second countermeasure areas on the original image includes: randomly generating a candidate countermeasure area in each sub-area to obtain at least two candidate countermeasure areas, and, if the at least two candidate countermeasure areas differ from the at least two first countermeasure areas, determining them as the at least two second countermeasure areas. The area of each second countermeasure area is smaller than the area of the sub-area in which it is located.
Wherein the second countermeasure area is a rectangular area. In other embodiments, the second countermeasure area may be circular, set according to the specific situation; this application is not limited thereto.
(2) A new target image is generated based on images of the first disturbance image located within at least two second countermeasure areas.
In the embodiment of the application, the images of the first disturbance image located in the at least two second countermeasure areas are embedded into the target template to obtain a new target image. The size of the target template is the same as that of the first disturbance image, the image characteristics of the target template differ from those of the first disturbance image, and the positions of the at least two second countermeasure areas on the new target image are the same as their positions on the first disturbance image. Specifically, the target template may be a solid white image, a solid black image, or an image of another color.
(3) In case the new target image successfully attacks the target detection model, the new target image is output as a challenge sample.
Further, in yet another specific embodiment, the method for generating the challenge sample further includes:
(1) And under the condition that the new target image does not attack the target detection model successfully, carrying out disturbance again on the first disturbance image to obtain a second disturbance image.
In the embodiment of the application, the first disturbance image is perturbed again based on the preset countermeasure sample generation strategy to obtain a second disturbance image.
(2) At least two second challenge areas on the original image are acquired.
Wherein the at least two second countermeasure areas are different from the at least two first countermeasure areas.
In this embodiment of the present application, acquiring at least two second countermeasure areas on the original image includes: randomly generating a candidate countermeasure area in each sub-area to obtain at least two candidate countermeasure areas, and, if the at least two candidate countermeasure areas differ from the at least two first countermeasure areas, determining them as the at least two second countermeasure areas. The area of each second countermeasure area is smaller than the area of the sub-area in which it is located.
Wherein the second countermeasure area is a rectangular area. In other embodiments, the second countermeasure area may be circular, set according to the specific situation; this application is not limited thereto.
(3) A new target image is generated based on images of the second disturbance image located within at least two second countermeasure areas.
In the embodiment of the application, the images of the second disturbance image located in the at least two second countermeasure areas are embedded into the target template to obtain a new target image. The size of the target template is the same as that of the second disturbance image, the image characteristics of the target template differ from those of the second disturbance image, and the positions of the at least two second countermeasure areas on the new target image are the same as their positions on the second disturbance image. Specifically, the target template may be a solid white image, a solid black image, or an image of another color.
(4) In case the new target image successfully attacks the target detection model, the new target image is output as a challenge sample.
In the embodiment of the application, in case the new target image successfully attacks the target detection model, the new target image is output as a countermeasure sample; in case it does not, the disturbance image is perturbed again and a further new target image is generated to attack the target detection model.
Further, the method for generating the challenge sample further comprises: in case the target detection model is not successfully attacked, recording the number of attack attempts in the current round; judging whether the number of attack attempts in the current round reaches a preset number; and if so, outputting the target image currently attacking the target detection model as a countermeasure sample. If the number of attack attempts in the current round does not reach the preset number, a new target image is regenerated to attack the target detection model.
Referring to fig. 5, fig. 5 is another flow chart of a method for generating an challenge sample according to an embodiment of the present application. The method may be performed by a generating device of the challenge sample. The method comprises the steps of 201-209:
Step 201, an original image containing an object to be attacked is acquired.
Step 202, at least two first countermeasure areas on an original image are acquired.
And 203, disturbing the original image to obtain a first disturbance image.
Step 204, generating a target image based on the images of the first disturbance image located within the at least two first countermeasure areas.
Step 205, it is determined whether the target image successfully attacks the target detection model.
Step 206, outputting the target image as a countermeasure sample.
In the embodiment of the application, in case the target image successfully attacks the target detection model, the target image is output as a countermeasure sample.
Step 207, determining whether the number of attack attempts in the present round reaches a preset number.
In the embodiment of the application, in case the target image does not successfully attack the target detection model, whether the number of attack attempts in the present round reaches the preset number is judged.
Step 208, obtaining at least two second countermeasure areas on the original image, and perturbing the first perturbation image again to obtain a second perturbation image.
Wherein the at least two second countermeasure areas are different from the at least two first countermeasure areas.
In this embodiment of the present application, acquiring at least two second countermeasure areas on the original image includes: randomly generating a candidate countermeasure area in each sub-area to obtain at least two candidate countermeasure areas, and, if the at least two candidate countermeasure areas differ from the at least two first countermeasure areas, determining them as the at least two second countermeasure areas. The area of each second countermeasure area is smaller than the area of the sub-area in which it is located.
In the embodiment of the application, if the number of attack attempts in the current round reaches the preset number, the target image currently attacking the target detection model is output as a countermeasure sample. If it does not, at least two second countermeasure areas on the original image are obtained, the first disturbance image is perturbed again to obtain a second disturbance image, and a new target image is regenerated to attack the target detection model.
Step 209, re-using the second disturbance image as the first disturbance image, and re-using at least two second countermeasure areas as at least two first countermeasure areas.
In this embodiment of the present application, if the number of attack attempts in the present round does not reach the preset number, at least two second countermeasure areas on the original image are obtained, and the first disturbance image is perturbed again to obtain a second disturbance image. The second disturbance image is re-used as the new first disturbance image, the at least two second countermeasure areas are re-used as the new at least two first countermeasure areas, and the process returns to step 204 to generate a new target image and attack the model again.
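The iteration of steps 204 to 209 can be sketched as a driver loop; the callables `perturb`, `compose`, `attack_succeeds`, and `sample_new_areas` are hypothetical stand-ins for the corresponding steps described above.

```python
def generate_challenge_sample(original, areas, perturb, compose, attack_succeeds,
                              sample_new_areas, max_attempts=10):
    """Driver loop for steps 204-209: compose a target image, test the attack,
    and on failure re-perturb and re-sample the countermeasure areas until the
    attack succeeds or the per-round attempt budget is exhausted."""
    disturbed = perturb(original)              # step 203: first disturbance image
    for _ in range(max_attempts):
        target = compose(disturbed, areas)     # step 204: embed areas into template
        if attack_succeeds(target):
            return target                      # step 206: output challenge sample
        disturbed = perturb(disturbed)         # step 208: perturb again
        areas = sample_new_areas(areas)        # step 208: new countermeasure areas
    return target                              # budget reached: output last attempt
```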
Referring to fig. 6, fig. 6 is a schematic structural diagram of a device for generating a challenge sample. The challenge sample generating device in the embodiment of the present application can implement the steps corresponding to the challenge sample generating method performed in the embodiment corresponding to fig. 2 described above. The functions realized by the device for generating the challenge sample may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the functions described above, and the modules may be software and/or hardware. The generating device 60 for the challenge sample may include a first acquiring module 601, a second acquiring module 602, a perturbation module 603, a generating module 604, and an output module 605; for the functional implementation, reference may be made to the operations performed in the embodiment corresponding to fig. 2, which are not described herein again.
The device for generating a challenge sample includes:
a first acquisition module 601 configured to acquire an original image containing an object to be attacked;
a second acquisition module 602 configured to acquire at least two first countermeasure areas on the original image, wherein at least two of the first countermeasure areas do not overlap;
the perturbation module 603 is configured to perturb the original image to obtain a first perturbed image;
a generation module 604 configured to generate a target image based on images of the first disturbance image that are located within at least two of the first countermeasure areas;
an output module 605 configured to output the target image as a challenge sample in case the target image successfully attacks the target detection model.
In one embodiment, the output module is configured to:
under the condition that the target image does not attack the target detection model successfully, carrying out disturbance again on the first disturbance image to obtain a second disturbance image;
generating a new target image based on images of the second disturbance image located within at least two of the first countermeasure areas;
in case the new target image successfully attacks the target detection model, the new target image is output as a challenge sample.
In one embodiment, the output module is configured to:
acquiring at least two second countermeasure areas on the original image under the condition that the target image does not attack the target detection model successfully, wherein the at least two second countermeasure areas are different from the at least two first countermeasure areas;
generating a new target image based on images of the first disturbance image located within at least two of the second countermeasure areas;
in case the new target image successfully attacks the target detection model, the new target image is output as a challenge sample.
In one embodiment, the output module is configured to:
in case the target detection model is not successfully attacked, recording the number of attack attempts in the current round;
judging whether the number of attack attempts in the current round reaches a preset number;
and if the number of attack attempts in the current round reaches the preset number, outputting the target image currently attacking the target detection model as a countermeasure sample.
In one embodiment, the first acquisition module is configured to:
obtaining a target type of a target to be attacked in the original image;
dividing the original image into at least two sub-areas based on the target type;
randomly generating one first countermeasure area in each sub-area to obtain at least two first countermeasure areas, wherein the area of the first countermeasure area is smaller than that of the sub-area where the first countermeasure area is located.
The challenge sample generating device 60 in the embodiment of the present application is described above from the point of view of the modularized functional entity, and the challenge sample generating device in the embodiment of the present application is described below from the point of view of hardware processing, respectively.
The apparatus shown in fig. 6 may have the structure shown in fig. 7. When the challenge sample generating device 60 shown in fig. 6 has the structure shown in fig. 7, the processor and the transceiver in fig. 7 can implement functions the same as or similar to those of the first acquiring module 601, the second acquiring module 602, the perturbation module 603, the generating module 604, and the output module 605 provided in the foregoing apparatus embodiments, and the memory in fig. 7 stores a computer program that the processor needs to invoke when executing the foregoing method for generating a challenge sample.
The embodiment of the present application further provides a terminal device, as shown in fig. 8, for convenience of explanation, only the portion relevant to the embodiment of the present application is shown, and specific technical details are not disclosed, please refer to the method portion of the embodiment of the present application. The terminal device may be any terminal device including a mobile phone, a tablet computer, a personal digital assistant (Personal Digital Assistant, PDA), a Point of Sales (POS), a vehicle-mounted computer, and the like, taking the terminal device as an example of the mobile phone:
Fig. 8 is a block diagram showing a part of the structure of a mobile phone related to a terminal device provided in an embodiment of the present application. Referring to fig. 8, the mobile phone includes: radio Frequency (RF) circuitry 1010, memory 1020, input unit 1030, display unit 1040, sensor 1050, audio circuitry 1060, wireless fidelity (wireless fidelity, wiFi) module 1070, processor 1080, and power source 1090. Those skilled in the art will appreciate that the handset configuration shown in fig. 8 is not limiting of the handset and may include more or fewer components than shown, or may combine certain components, or may be arranged in a different arrangement of components.
The following describes the components of the mobile phone in detail with reference to fig. 8:
The RF circuit 1010 may be used for receiving and transmitting signals during a message or a call; in particular, after receiving downlink information of a base station, it passes the information to the processor 1080 for processing, and sends uplink data to the base station. Generally, the RF circuitry 1010 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (Low Noise Amplifier, LNA), a duplexer, and the like. In addition, the RF circuitry 1010 may also communicate with networks and other devices via wireless communications. The wireless communications may use any communication standard or protocol, including, but not limited to, Global System for Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, Short Message Service (SMS), and the like.
The memory 1020 may be used to store software programs and modules; the processor 1080 performs various functional applications and data processing of the mobile phone by running the software programs and modules stored in the memory 1020. The memory 1020 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the mobile phone. In addition, the memory 1020 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state memory device.
The input unit 1030 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the handset. In particular, the input unit 1030 may include a touch panel 1031 and other input devices 1032. The touch panel 1031, also referred to as a touch screen, may collect touch operations thereon or thereabout by a user (e.g., operations of the user on the touch panel 1031 or thereabout using any suitable object or accessory such as a finger, stylus, etc.), and drive the corresponding connection device according to a predetermined program. Alternatively, the touch panel 1031 may include two parts, a touch detection device and a touch controller. The touch detection device detects the touch azimuth of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device and converts it into touch point coordinates, which are then sent to the processor 1080 and can receive commands from the processor 1080 and execute them. Further, the touch panel 1031 may be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. The input unit 1030 may include other input devices 1032 in addition to the touch panel 1031. In particular, other input devices 1032 may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a track ball, a mouse, a joystick, etc.
The display unit 1040 may be used to display information input by a user or information provided to the user and various menus of the mobile phone. The display unit 1040 may include a display panel 1041; optionally, the display panel 1041 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like. Further, the touch panel 1031 may overlay the display panel 1041; when the touch panel 1031 detects a touch operation on or near it, the operation is passed to the processor 1080 to determine the type of touch event, and the processor 1080 then provides a corresponding visual output on the display panel 1041 according to the type of touch event. Although in fig. 8 the touch panel 1031 and the display panel 1041 are two independent components implementing the input and output functions of the mobile phone, in some embodiments the touch panel 1031 and the display panel 1041 may be integrated to implement the input and output functions of the mobile phone.
The handset may also include at least one sensor 1050, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor, wherein the ambient light sensor may adjust the brightness of the display panel 1041 according to the brightness of ambient light, and the proximity sensor may turn off the display panel 1041 and/or the backlight when the mobile phone moves to the ear. As one of the motion sensors, the accelerometer sensor can detect the acceleration in all directions (generally three axes), and can detect the gravity and direction when stationary, and can be used for applications of recognizing the gesture of a mobile phone (such as horizontal and vertical screen switching, related games, magnetometer gesture calibration), vibration recognition related functions (such as pedometer and knocking), and the like; other sensors such as gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc. that may also be configured with the handset are not described in detail herein.
The audio circuitry 1060, speaker 1061, and microphone 1062 may provide an audio interface between a user and the mobile phone. The audio circuit 1060 may transmit the electrical signal converted from received audio data to the speaker 1061, which converts it into an audio signal for output; on the other hand, the microphone 1062 converts collected sound signals into electrical signals, which are received by the audio circuit 1060 and converted into audio data. The audio data are then output to the processor 1080 for processing and sent, for example, to another mobile phone via the RF circuit 1010, or output to the memory 1020 for further processing.
Wi-Fi belongs to short-distance wireless transmission technology; through the Wi-Fi module 1070, a mobile phone can help a user send and receive e-mails, browse web pages, access streaming media, and the like, providing the user with wireless broadband Internet access. Although fig. 8 shows the Wi-Fi module 1070, it is to be understood that it is not an essential component of the mobile phone and can be omitted as desired without changing the essence of the invention.
Processor 1080 is the control center of the handset, connects the various parts of the entire handset using various interfaces and lines, and performs various functions and processes of the handset by running or executing software programs and/or modules stored in memory 1020, and invoking data stored in memory 1020, thereby performing overall monitoring of the handset. Optionally, processor 1080 may include one or more processing units; alternatively, processor 1080 may integrate an application processor primarily handling operating systems, user interfaces, applications, etc., with a modem processor primarily handling wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 1080.
The handset further includes a power source 1090 (e.g., a battery) for powering the various components, optionally in logical communication with the processor 1080 via a power management system, such as for managing charge, discharge, and power consumption by the power management system.
Although not shown, the mobile phone may further include a camera, a Bluetooth module, and the like, which are not described here.
In the embodiment of the present application, the processor 1080 included in the mobile phone also has the function of controlling the challenge sample generating apparatus to perform the method for generating a challenge sample described above.
The embodiment of the present application further provides a server. Referring to fig. 9, fig. 9 is a schematic diagram of a server structure provided in the embodiment of the present application. The server 1100 may vary considerably in configuration or performance, and may include one or more central processing units (CPUs) 1122 (for example, one or more processors), a memory 1132, and one or more storage media 1130 (for example, one or more mass storage devices) storing application programs 1142 or data 1144. The memory 1132 and the storage medium 1130 may be transitory or persistent storage. The program stored on the storage medium 1130 may include one or more modules (not shown), each of which may include a series of instruction operations on the server. Still further, the central processor 1122 may be configured to communicate with the storage medium 1130 and to execute, on the server 1100, the series of instruction operations in the storage medium 1130.
The server 1100 may also include one or more power supplies 1126, one or more wired or wireless network interfaces 1150, one or more input/output interfaces 1158, and/or one or more operating systems 1141, such as Windows Server, Mac OS X, Unix, Linux, FreeBSD, and the like.
The steps performed by the server in the above embodiments may be based on the structure of the server 1100 shown in fig. 9. For example, the steps performed by the challenge sample generating apparatus 60 in the above-described embodiments may be based on the server structure shown in fig. 9. For example, the central processor 1122 performs the following operations by calling the instructions in the memory 1132:
acquiring an original image containing a target to be attacked;
acquiring at least two first countermeasure areas on the original image, wherein the at least two first countermeasure areas do not overlap;
disturbing the original image to obtain a first disturbance image;
generating a target image based on the portions of the first disturbance image located within the at least two first countermeasure areas;
and outputting the target image as a challenge sample in the case that the target image successfully attacks a target detection model.
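The steps above can be sketched in a few lines. The following Python fragment is illustrative only and is not part of the patent: the function and parameter names (`generate_adversarial_sample`, `attack_succeeds`, `eps`) are assumptions, and the uniform-noise perturbation merely stands in for whatever disturbance method an implementation would actually use.

```python
import numpy as np

def generate_adversarial_sample(image, regions, attack_succeeds, eps=0.1, seed=0):
    """Sketch of the claimed flow: perturb the original image, keep the
    perturbation only inside the non-overlapping countermeasure areas,
    and output the result if it fools the detector.

    `attack_succeeds` is a stand-in for querying the target detection model.
    `image` is an HxWxC float array in [0, 1]; `regions` is a list of
    (y0, x0, y1, x1) boxes."""
    rng = np.random.default_rng(seed)
    # Step 1: disturb the whole original image (the "first disturbance image").
    disturbed = np.clip(image + rng.uniform(-eps, eps, image.shape), 0.0, 1.0)
    # Step 2: build a mask that is True only inside the countermeasure areas.
    mask = np.zeros(image.shape[:2], dtype=bool)
    for (y0, x0, y1, x1) in regions:
        mask[y0:y1, x0:x1] = True
    # Step 3: the target image takes perturbed pixels inside the areas
    # and original pixels everywhere else.
    target = np.where(mask[..., None], disturbed, image)
    # Step 4: output the target image only if the detector is fooled.
    return target if attack_succeeds(target) else None
```

Because the perturbation is confined to the masked areas, the rest of the image is guaranteed to be pixel-identical to the original.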
In one embodiment, the method for generating a challenge sample further comprises:
in the case that the target image does not successfully attack the target detection model, disturbing the first disturbance image again to obtain a second disturbance image;
generating a new target image based on the portions of the second disturbance image located within the at least two first countermeasure areas;
and outputting the new target image as a challenge sample in the case that the new target image successfully attacks the target detection model.
In one embodiment, the method for generating a challenge sample further comprises:
acquiring at least two second countermeasure areas on the original image in the case that the target image does not successfully attack the target detection model, wherein the at least two second countermeasure areas are different from the at least two first countermeasure areas;
generating a new target image based on the portions of the first disturbance image located within the at least two second countermeasure areas;
and outputting the new target image as a challenge sample in the case that the new target image successfully attacks the target detection model.
In one embodiment, the method for generating a challenge sample further comprises:
in the case that the target detection model is not successfully attacked, recording the number of attacks in the current round;
determining whether the number of attacks in the current round reaches a preset number;
and if the number of attacks in the current round reaches the preset number, outputting the target image of the current attack on the target detection model as a challenge sample.
In one embodiment, acquiring the at least two first countermeasure areas on the original image includes:
acquiring a target type of the target to be attacked in the original image;
dividing the original image into at least two sub-regions based on the target type;
and randomly generating one first countermeasure area in each sub-region to obtain the at least two first countermeasure areas, wherein the area of each first countermeasure area is smaller than the area of the sub-region in which it is located.
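One plausible reading of this region-sampling step is sketched below. The grid-based division is an assumption of this sketch — the patent ties the division to the target type, which this toy version ignores — and all names are hypothetical. Because each first countermeasure area is drawn strictly inside its own sub-region, the non-overlap requirement is satisfied by construction.

```python
import random

def sample_countermeasure_areas(h, w, grid=(1, 2), patch_frac=0.5, rng=None):
    """Divide an h x w image into a grid of sub-regions and randomly place
    one smaller countermeasure area (y0, x0, y1, x1) inside each.
    patch_frac < 1 keeps each area's size below its sub-region's size."""
    rng = rng or random.Random(0)
    rows, cols = grid
    areas = []
    for r in range(rows):
        for c in range(cols):
            # Bounds of this sub-region.
            y0, y1 = r * h // rows, (r + 1) * h // rows
            x0, x1 = c * w // cols, (c + 1) * w // cols
            # Size of the countermeasure area within it.
            ph = max(1, int((y1 - y0) * patch_frac))
            pw = max(1, int((x1 - x0) * patch_frac))
            # Random placement; randint bounds are inclusive, so the
            # area always stays inside the sub-region.
            py = rng.randint(y0, y1 - ph)
            px = rng.randint(x0, x1 - pw)
            areas.append((py, px, py + ph, px + pw))
    return areas
```

Since the sub-regions tile the image without overlapping and each area is contained in its sub-region, any two sampled areas are disjoint.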
The embodiments in the foregoing description each have their own emphasis; for parts of one embodiment that are not described in detail, reference may be made to the related descriptions of other embodiments.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the systems, apparatuses and modules described above may refer to the corresponding processes in the foregoing method embodiments, which are not repeated herein.
In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division into modules is only a division by logical function, and there may be other ways of dividing them in actual implementation; for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices, or modules, and may be electrical, mechanical, or in other forms.
The modules illustrated as separate components may or may not be physically separate, and components shown as modules may or may not be physical modules, i.e., may be located in one place, or may be distributed over a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional module in each embodiment of the present application may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium.
According to one aspect of the present application, there is provided a computer program product or computer program comprising computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform the methods provided in the various alternative implementations described above.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product.
The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another by wire (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device, such as a server or data center, integrating one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid state disk (SSD)), among others.
The technical solutions provided by the embodiments of the present application have been described in detail above. Specific examples are used herein to illustrate the principles and implementations of the embodiments of the present application, and the description of the above embodiments is only intended to help understand the methods and core ideas of the embodiments of the present application. Meanwhile, a person skilled in the art may make changes to the specific implementations and the application scope according to the ideas of the embodiments of the present application. In view of the above, the content of this specification should not be construed as limiting the embodiments of the present application.
Claims (10)
1. A method of generating a challenge sample, the method comprising:
acquiring an original image containing a target to be attacked;
acquiring at least two first countermeasure areas on the original image, wherein the at least two first countermeasure areas do not overlap;
disturbing the original image to obtain a first disturbance image;
generating a target image based on the portions of the first disturbance image located within the at least two first countermeasure areas;
and outputting the target image as a challenge sample in the case that the target image successfully attacks a target detection model.
2. The method of generating a challenge sample of claim 1, further comprising:
in the case that the target image does not successfully attack the target detection model, disturbing the first disturbance image again to obtain a second disturbance image;
generating a new target image based on the portions of the second disturbance image located within the at least two first countermeasure areas;
and outputting the new target image as a challenge sample in the case that the new target image successfully attacks the target detection model.
3. The method of generating a challenge sample of claim 1, further comprising:
acquiring at least two second countermeasure areas on the original image in the case that the target image does not successfully attack the target detection model, wherein the at least two second countermeasure areas are different from the at least two first countermeasure areas;
generating a new target image based on the portions of the first disturbance image located within the at least two second countermeasure areas;
and outputting the new target image as a challenge sample in the case that the new target image successfully attacks the target detection model.
4. The method of generating a challenge sample according to any one of claims 1 to 3, further comprising:
in the case that the target detection model is not successfully attacked, recording the number of attacks in the current round;
determining whether the number of attacks in the current round reaches a preset number;
and if the number of attacks in the current round reaches the preset number, outputting the target image of the current attack on the target detection model as a challenge sample.
5. The method of generating a challenge sample of claim 4, wherein acquiring the at least two first countermeasure areas on the original image comprises:
acquiring a target type of the target to be attacked in the original image;
dividing the original image into at least two sub-regions based on the target type;
and randomly generating one first countermeasure area in each sub-region to obtain the at least two first countermeasure areas, wherein the area of each first countermeasure area is smaller than the area of the sub-region in which it is located.
6. An apparatus for generating a challenge sample, comprising:
a first acquisition module configured to acquire an original image containing a target to be attacked;
a second acquisition module configured to acquire at least two first countermeasure areas on the original image, wherein the at least two first countermeasure areas do not overlap;
a disturbance module configured to disturb the original image to obtain a first disturbance image;
a generation module configured to generate a target image based on the portions of the first disturbance image located within the at least two first countermeasure areas;
and an output module configured to output the target image as a challenge sample in the case that the target image successfully attacks a target detection model.
7. A computing device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the method of any of claims 1-5 when executing the computer program.
8. A computer readable storage medium comprising instructions which, when run on a computer, cause the computer to perform the method of any of claims 1-5.
9. A computer program product comprising instructions which, when run on a computer or processor, cause the computer or processor to perform the method of any of claims 1-5.
10. A chip system, the chip system comprising:
a communication interface for inputting and/or outputting information;
a processor for executing a computer executable program to cause a device on which the chip system is installed to perform the method of any one of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311773600.7A CN117765349A (en) | 2023-12-21 | 2023-12-21 | Method for generating challenge sample, related device and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117765349A true CN117765349A (en) | 2024-03-26 |
Family
ID=90319473
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311773600.7A Pending CN117765349A (en) | 2023-12-21 | 2023-12-21 | Method for generating challenge sample, related device and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117765349A (en) |
-
2023
- 2023-12-21 CN CN202311773600.7A patent/CN117765349A/en active Pending
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109857297B (en) | Information processing method and terminal equipment | |
CN116310745B (en) | Image processing method, data processing method, related device and storage medium | |
CN115588131B (en) | Model robustness detection method, related device and storage medium | |
CN113535055B (en) | Method, equipment and storage medium for playing point-to-read based on virtual reality | |
CN115239941B (en) | Countermeasure image generation method, related device and storage medium | |
CN113204302B (en) | Virtual robot-based operation method, device, equipment and storage medium | |
CN113706446A (en) | Lens detection method and related device | |
CN116486463B (en) | Image processing method, related device and storage medium | |
CN117743170A (en) | Test case generation method and device, storage medium and terminal equipment | |
CN115526055B (en) | Model robustness detection method, related device and storage medium | |
CN115471495B (en) | Model robustness detection method, related device and storage medium | |
CN117115590A (en) | Content auditing model training method, device and medium based on self-supervision learning | |
CN113780291B (en) | Image processing method and device, electronic equipment and storage medium | |
CN116758362A (en) | Image processing method, device, computer equipment and storage medium | |
CN116071614A (en) | Sample data processing method, related device and storage medium | |
CN117765349A (en) | Method for generating challenge sample, related device and storage medium | |
CN115984643A (en) | Model training method, related device and storage medium | |
CN110750193B (en) | Scene topology determination method and device based on artificial intelligence | |
CN117831089A (en) | Face image processing method, related device and storage medium | |
CN115061939A (en) | Data set security test method and device and storage medium | |
CN114140655A (en) | Image classification method and device, storage medium and electronic equipment | |
CN114140864B (en) | Trajectory tracking method and device, storage medium and electronic equipment | |
CN117853859B (en) | Image processing method, related device and storage medium | |
CN116167274A (en) | Simulation combat attack and defense training method, related device and storage medium | |
CN111488899B (en) | Feature extraction method, device, equipment and readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||