CN111522987A - Image auditing method and device and computer readable storage medium - Google Patents
- Publication number
- CN111522987A (application number CN202010330417.XA)
- Authority
- CN
- China
- Prior art keywords
- image
- audited
- violation
- feature library
- result
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/58—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/583—Retrieval characterised by using metadata automatically derived from the content
- G06F16/55—Clustering; Classification
Abstract
The invention provides an image auditing method, device, and computer-readable storage medium. The method comprises: receiving an image to be audited, the image to be audited being any image uploaded through a terminal; performing feature library matching on the image to be audited to obtain a matching result, performing object detection on the image to be audited to obtain an object detection result, and performing image classification on the image to be audited to obtain a classification result; and when any of the matching result, the object detection result, and the classification result represents that the image to be audited contains violating content, generating an audit result that the image to be audited belongs to a violation image, the audit result representing whether the image to be audited belongs to a violation image. The method and device can improve the accuracy of auditing violation images.
Description
Technical Field
The present invention relates to artificial intelligence technology, and in particular, to an image auditing method and apparatus based on artificial intelligence, and a computer-readable storage medium.
Background
The internet has made information transmission highly convenient: users can upload, download, or forward pictures, so pictures of all kinds circulate online. However, not every picture should be allowed to circulate; for example, the spread of violating pictures related to violence and terrorism needs to be restricted in order to maintain a healthy internet environment.
Image auditing is an important branch of artificial intelligence technology and an important means of finding violating pictures and limiting their spread. In the related art, an image auditing device typically extracts features from a picture with a trained model and then compares the extracted features with the violation features of known violation images to judge whether the picture is a violation image.
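The feature-comparison step described above can be sketched as follows. This is a minimal illustration only: it assumes features are plain vectors compared by cosine similarity, and the vectors and threshold are hypothetical stand-ins rather than the patent's actual values.

```python
# Sketch of feature-library matching: compare an image's feature vector
# against every known violation feature (hypothetical values throughout).
from math import sqrt

def cosine_similarity(a, b):
    # Dot product divided by the product of vector norms.
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def match_feature_library(image_feature, violation_library, threshold=0.9):
    """Return True if the feature matches any known violation feature."""
    return any(cosine_similarity(image_feature, f) >= threshold
               for f in violation_library)

library = [[0.9, 0.1, 0.0], [0.0, 1.0, 0.0]]   # known violation features
print(match_feature_library([0.88, 0.12, 0.01], library))  # True: near first entry
```

In a real system the vectors would be embeddings produced by a trained network, but the matching logic itself reduces to this nearest-feature comparison.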
In practical applications, violating content appears in violation images in many forms. Some violating content is presented directly, such as gory pictures or pictures of violent behavior contained in the image. Some appears at small scale, for example a violation image nested into a region of an otherwise normal image, or a special flag or special article shown in the image after being scaled down. Some is presented through special techniques such as metaphor or hinting, for example ironic caricatures. The trained model in the related art is usually aimed at directly presented violating content and covers small-scale violating content and violating content presented through special techniques poorly, so the accuracy of auditing violation images is low.
Disclosure of Invention
The embodiment of the invention provides an image auditing method, image auditing equipment and a computer readable storage medium, which can improve the accuracy of auditing illegal images.
The technical scheme of the embodiment of the invention is realized as follows:
the embodiment of the invention provides an image auditing method, which comprises the following steps:
receiving an image to be audited; the image to be audited is any image uploaded through the terminal;
performing feature library matching on the image to be audited to obtain a matching result, performing object detection on the image to be audited to obtain an object detection result, and performing image classification on the image to be audited to obtain a classification result;
when any of the matching result, the object detection result, and the classification result represents that the image to be audited contains violating content, generating an auditing result that the image to be audited belongs to a violation image; the auditing result represents whether the image to be audited belongs to a violation image;
the matching result represents whether the image to be audited has the characteristics corresponding to the violation content, the object detection result represents whether the image to be audited has the violation object with the size smaller than a preset size threshold, and the classification result represents whether the event described by the image to be audited is violation.
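The decision logic of the method claim above (any one of the three branches flagging the image is sufficient) can be sketched as follows. The function name and string labels are illustrative, not taken from the patent.

```python
# OR-aggregation of the three audit branches: the image is judged a
# violation if ANY branch reports violating content.
def generate_audit_result(matching_result, detection_result, classification_result):
    """Each argument is True if that branch found violating content."""
    if matching_result or detection_result or classification_result:
        return "violation"
    return "normal"

# e.g. only the object-detection branch found a small-sized violating object:
print(generate_audit_result(False, True, False))  # prints "violation"
```

The OR combination is what gives the method its multi-dimensional coverage: each branch targets a different presentation form of violating content.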
An embodiment of the present invention provides an image auditing apparatus, including:
a memory to store executable image review instructions;
and the processor is used for realizing the image auditing method provided by the embodiment of the invention when executing the executable image auditing instruction stored in the memory.
The embodiment of the invention provides a computer-readable storage medium storing executable image auditing instructions which, when executed by a processor, cause the processor to implement the image auditing method provided by the embodiment of the invention.
The embodiment of the invention has the following beneficial effects:
in the embodiment of the invention, the image auditing device can receive an image to be audited, perform feature library matching on it to obtain a matching result, perform object detection on it to obtain an object detection result, perform image classification on it to obtain a classification result, and generate an auditing result that the image belongs to a violation image when any of the matching result, the object detection result, and the classification result represents that the image contains violating content. The image auditing device thus inspects the image with three different detection modes (feature library matching, object detection, and image classification), auditing the image in multiple dimensions so that violating content presented in various forms can be found, which improves the accuracy of auditing violation images.
Drawings
Fig. 1 is a schematic diagram of an alternative architecture of an image review system 100 according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of an image auditing apparatus 200 according to an embodiment of the present invention;
fig. 3 is a first schematic flow chart of an alternative image auditing method according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a process for generating an audit result according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of an image review process provided by an embodiment of the invention;
fig. 6 is a second schematic flow chart of an alternative image review method according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a process for responding to a new violation image provided by an embodiment of the present invention;
fig. 8 is a third schematic flow chart of an alternative image review method provided in the embodiment of the present invention;
FIG. 9 is a diagram illustrating an example of an audit result that is not an illegal image according to an embodiment of the present invention;
FIG. 10 is a schematic diagram of an image review process in a practical application scenario provided by an embodiment of the present invention;
fig. 11 is a schematic process diagram of integrating audit results according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings. The described embodiments should not be construed as limiting the present invention, and all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used herein is for the purpose of describing embodiments of the invention only and is not intended to be limiting of the invention.
Before the embodiments of the present invention are described in further detail, the terms and expressions used in the embodiments are explained; the following explanations apply to those terms and expressions as used herein.
1) Artificial Intelligence (AI) is a theory, method, technique, and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain optimal results. In other words, artificial intelligence is a comprehensive branch of computer science that attempts to understand the essence of intelligence and produce a new kind of intelligent machine that can react in a manner similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that machines have the abilities of perception, reasoning, and decision making.
Artificial intelligence is a comprehensive discipline covering a wide range of fields, including both hardware-level and software-level technologies. Basic artificial intelligence technologies generally include sensors, dedicated AI chips, cloud computing, distributed storage, big data processing, operation/interaction systems, and mechatronics. AI software technologies mainly include computer vision, speech processing, natural language processing, and machine learning/deep learning.
2) Computer Vision (CV) is the science of how to make machines "see": using cameras and computers in place of human eyes to identify, track, and measure targets, and further processing the resulting images so that they become more suitable for human observation or for transmission to instruments for detection. As a scientific discipline, computer vision studies theories and techniques for building artificial intelligence systems that can extract information from images or multidimensional data. Computer vision technologies generally include image processing, image recognition, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D technologies, virtual reality, augmented reality, and simultaneous localization and mapping, and also include common biometric technologies such as face recognition and fingerprint recognition.
3) Machine Learning (ML) is a multi-domain interdisciplinary field involving probability theory, statistics, approximation theory, convex analysis, algorithmic complexity theory, and other disciplines. It specializes in studying how computers can simulate or realize human learning behavior in order to acquire new knowledge or skills and reorganize existing knowledge structures to continuously improve their performance. Machine learning is the core of artificial intelligence and the fundamental way to make computers intelligent; it is applied across all fields of artificial intelligence. Machine learning and deep learning generally include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning, and learning from instruction.
4) Image auditing is an important branch of computer vision technology. Its aim is to audit and monitor the content of images so that violation images do not circulate on the internet and harm the internet environment. For example, images containing gory or violent content would cause panic among the general public if they circulated online; such images therefore need to be found through image auditing so that they can be filtered, and the malicious users who uploaded them can even be located.
5) Deep learning refers to machine learning techniques that use deep neural networks. A characteristic of deep learning is that the machine extracts features autonomously, avoiding the tedious work of manual feature selection; the machine can select features with better representational power on its own, improving the learning effect.
6) A labeled sample generally refers to a data sample that carries a label, such as a manually labeled image sample or an automatically labeled text sample. In object detection, labeling means selecting an object in an image with a rectangular box and indicating the category label of the selected object; that is, a label carries both position information and category information. For example, when a foreground object such as a strawberry-flavored drink is in the image, the annotation indicates where the foreground object is in the image and also indicates that it belongs to the category of strawberry-flavored drinks.
7) A detection model is a mathematical model obtained by learning from labeled samples through machine learning or deep learning. The learning process on the labeled samples is the training process, and the mathematical model is obtained when training finishes. The trained model contains learned parameters; when identifying and predicting on an unlabeled sample, these parameters are loaded and used to calculate prediction boxes for foreground objects in the sample and the probability that each foreground object belongs to each class label within a specified range.
8) A recognition model is likewise a mathematical model obtained by learning from labeled samples through machine learning or deep learning. Unlike the detection model, the recognition model does not predict where a foreground object is in the image; instead, it classifies the entire image, using the trained parameters to calculate the probability that the image belongs to each class label within a specified range.
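The contrast between terms 7) and 8) can be illustrated by the shape of each model's output. The stubs below return fixed values in place of trained networks (all names, labels, and numbers are hypothetical); the point is that detection yields per-box predictions while recognition yields per-image class probabilities.

```python
# Stub detection model: predicts bounding boxes, each with a class label
# and probability (hypothetical values).
def detection_model(image):
    return [{"box": (10, 10, 42, 42), "label": "special_flag", "prob": 0.97}]

# Stub recognition model: classifies the whole image, returning one
# probability per class label (hypothetical values).
def recognition_model(image):
    return {"violent_event": 0.91, "normal": 0.09}

image = object()  # placeholder for a decoded image
boxes = detection_model(image)
probs = recognition_model(image)
print(len(boxes), max(probs, key=probs.get))  # prints: 1 violent_event
```

In the patent's scheme the detection model targets small-sized violating objects (localized boxes), while the recognition model judges whether the event depicted by the whole image is a violation.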
The internet has made information transmission highly convenient: users can upload, download, or forward pictures, so pictures of all kinds circulate online. However, not every picture should be allowed to circulate. For example, the circulation of violation images related to violence and terror spreads panic among ordinary internet users and harms their psychological health. Filtering bad and violating pictures involving violence, gore, pornography, or prohibited political content, and restricting their circulation, is therefore an important goal in maintaining social security and the internet environment. For application products involving image transmission, finding and filtering bad violating pictures in time is an important condition for ensuring that the products meet national and social requirements on the internet and operate normally.
With the research and progress of artificial intelligence technology, AI has been researched and applied in many fields, such as smart homes, smart wearable devices, virtual assistants, smart speakers, smart marketing, unmanned driving, autonomous driving, drones, robots, smart medical care, and smart customer service. Image auditing is an important branch of artificial intelligence: AI-based image auditing finds violation images so as to limit their spread on the internet, reduces the workload of manual image review, and reduces the mental burden that violation images place on human reviewers. It is therefore an important means of finding violation images and limiting their spread on the internet.
The content of violation images is often manifold. For example, violent or terror-related violation images often involve weapons, crowd-gathering scenes, gory pictures, fires and explosions, or special uniforms; pornographic violation images typically involve suggestive poses or expressions, bare skin, certain items, and special behaviors; and politically violating images typically involve special flags, satire, currency, and the like. In the related art, an image recognition model can be used to audit image content for these different types of violation images: the image auditing device extracts features from the image with a trained model and then classifies the extracted features to determine whether the image is a violation image. In practice, however, violating content appears in many forms. Some is presented directly in the image, such as weapons, crowd-gathering scenes, gory pictures, fires and explosions, or special uniforms. Some appears at small scale, for example a special flag or special article shown after being scaled down, or a violation image nested into a small region of a normal image. Some is presented through special techniques such as hints and metaphors, for example ironic caricatures. A trained model in the related art can detect directly presented violating content but may miss violating content presented through special techniques. The related art therefore does not comprehensively cover violation images in their different presentation forms, and the accuracy of auditing violation images is low.
Violation images are also updated over time; for example, a regulatory body may newly designate violation images for a specific recent event. After such an update, the newly published violation images must be added to the training database and the model retrained before they can be detected. The full sequence of preprocessing and labeling the new violation images, adding them to the training database, and retraining the model on the updated database takes a long time to complete, during which the newly designated violation images can still spread on the internet; the real-time responsiveness to newly published violation images is therefore poor.
Furthermore, violation images can be time-limited: some images are violations only before a certain special time point and are no longer violations afterwards. If the originally trained model continues to be used for image auditing, those images are still filtered and blocked; but if they are removed from the training database and the model is retrained on the reduced database, a long time is again required, so responsiveness to such images is low.
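One plausible reading of how a feature-library design addresses both update problems above is that newly designated (or newly expired) violation images are handled by inserting or deleting a single feature entry, with no model retraining. The sketch below illustrates that idea under stated assumptions; the names and vectors are hypothetical, and this is not presented as the patent's literal implementation.

```python
# Hypothetical feature library keyed by violation-image name; updating it
# takes effect immediately, unlike retraining a model.
violation_library = {"flag_v1": [0.9, 0.1, 0.0]}

def add_violation(name, feature):
    """Register a newly designated violation image's feature vector."""
    violation_library[name] = feature

def remove_violation(name):
    """Drop an expired violation image so it is no longer blocked."""
    violation_library.pop(name, None)

add_violation("event_2020_04", [0.2, 0.7, 0.1])  # new violation, live at once
remove_violation("flag_v1")                       # time-limited violation expired
print(sorted(violation_library))                  # prints: ['event_2020_04']
```

Either operation is constant-time bookkeeping, which is why a matching branch can respond to regulatory updates far faster than a retraining pipeline.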
The embodiment of the invention provides an image auditing method, an image auditing device, and a computer-readable storage medium, which can improve the accuracy of auditing violation images. The image auditing device may be implemented as a server or a terminal. The server may be an independent physical server, a server cluster or distributed system formed by multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN, and big data and artificial intelligence platforms. The user terminal may be, but is not limited to, a smartphone, tablet computer, laptop computer, desktop computer, smart speaker, or smart watch. The terminal and the server may be connected directly or indirectly through wired or wireless communication, which is not limited in the embodiment of the present invention. An exemplary application of the image auditing device is described next.
Referring to fig. 1, fig. 1 is an optional architecture diagram of an image review system 100 according to an embodiment of the present invention, in order to support an image review application, a terminal 400 is connected to an image review device 200 through a network 300, where the network 300 may be a wide area network or a local area network, or a combination of the two.
The image auditing device 200 receives an image to be audited sent by the terminal 400, the image being any image uploaded by the user through the terminal 400. The image auditing device 200 then performs feature library matching on the image to obtain a matching result, performs object detection on it to obtain an object detection result, and performs image classification on it to obtain a classification result. Next, the image auditing device 200 performs a logical judgment on the matching result, the object detection result, and the classification result; when any of these results represents that the image contains violating content, the image auditing device 200 generates an auditing result that the image belongs to a violation image, the auditing result representing whether the image to be audited belongs to a violation image. The matching result represents whether features corresponding to violating content exist in the image, the object detection result represents whether a violating object smaller than a preset size threshold exists in the image, and the classification result represents whether the event described by the image is a violation.
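The end-to-end flow just described can be sketched as follows. The three check functions are hypothetical stubs standing in for the feature library and the trained models; only the control flow reflects the description above.

```python
# Stub audit branches (each returns True if it finds violating content).
def feature_library_match(image):
    return False                      # no match in the violation feature library

def object_detection(image):
    return True                       # e.g. a scaled-down special flag detected

def image_classification(image):
    return False                      # whole-image event judged non-violating

def audit(image):
    """Run all three branches and apply the OR judgment."""
    results = (feature_library_match(image),
               object_detection(image),
               image_classification(image))
    return "violation image" if any(results) else "not a violation image"

print(audit("uploaded.png"))          # prints: violation image
```

Because the branches are independent, a real deployment could run them in parallel and short-circuit as soon as one flags the image.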
Referring to fig. 2, fig. 2 is a schematic structural diagram of an image auditing apparatus 200 according to an embodiment of the present invention, where the image auditing apparatus 200 shown in fig. 2 includes: at least one processor 210, memory 250, at least one network interface 220, and a user interface 230. The various components in the image review device 200 are coupled together by a bus system 240. It is understood that the bus system 240 is used to enable communications among the components. The bus system 240 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled as bus system 240 in fig. 2.
The processor 210 may be an integrated circuit chip with signal processing capability, such as a general-purpose processor, a digital signal processor (DSP), another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components; the general-purpose processor may be a microprocessor or any conventional processor.
The user interface 230 includes one or more output devices 231, including one or more speakers and/or one or more visual display screens, that enable the presentation of media content. The user interface 230 also includes one or more input devices 232, including user interface components that facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
The memory 250 includes volatile memory or nonvolatile memory, and may include both volatile and nonvolatile memory. The non-volatile Memory may be a Read Only Memory (ROM), and the volatile Memory may be a Random Access Memory (RAM). The memory 250 described in embodiments of the invention is intended to comprise any suitable type of memory. Memory 250 optionally includes one or more storage devices physically located remotely from processor 210.
In some embodiments, memory 250 is capable of storing data, examples of which include programs, modules, and data structures, or a subset or superset thereof, to support various operations, as exemplified below.
An operating system 251 including system programs for processing various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, a driver layer, etc., for implementing various basic services and processing hardware-based tasks;
a network communication module 252 for communicating to other computing devices via one or more (wired or wireless) network interfaces 220, exemplary network interfaces 220 including: bluetooth, wireless-compatibility authentication (Wi-Fi), and Universal Serial Bus (USB), etc.;
a display module 253 to enable presentation of information (e.g., a user interface for operating peripherals and displaying content and information) via one or more output devices 231 (e.g., a display screen, speakers, etc.) associated with the user interface 230;
an input processing module 254 for detecting one or more user inputs or interactions from one of the one or more input devices 232 and translating the detected inputs or interactions.
In some embodiments, the image auditing device provided by the embodiments of the present invention may be implemented in software, and fig. 2 shows an image auditing device 255 stored in a memory 250, which may be software in the form of programs and plug-ins, and the like, and includes the following software modules: a receiving module 2551, an auditing module 2552, a result generating module 2553 and a feature library generating module 2554, the functions of which will be described below.
In other embodiments, the image auditing Device provided by the embodiments of the present invention may be implemented in hardware, and for example, the image auditing Device provided by the embodiments of the present invention may be a processor in the form of a hardware decoding processor, which is programmed to execute the image auditing method provided by the embodiments of the present invention, for example, the processor in the form of the hardware decoding processor may be implemented by one or more Application Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), Field Programmable Gate Arrays (FPGAs), or other electronic components.
Illustratively, an embodiment of the present invention provides an image review apparatus, including:
a memory to store executable image review instructions;
and the processor is used for realizing the image auditing method provided by the embodiment of the invention when executing the executable image auditing instruction stored in the memory.
In the following, an image auditing method provided by the embodiment of the present invention will be described in conjunction with exemplary applications and implementations of the image auditing apparatus provided by the embodiment of the present invention.
Referring to fig. 3, fig. 3 is a first schematic flow chart of an alternative image auditing method according to an embodiment of the present invention, which will be described with reference to the steps shown in fig. 3.
S101, receiving an image to be audited; the image to be audited is any image uploaded through the terminal.
Embodiments of the present invention are applied in scenarios where pictures uploaded by users are audited; for example, when a user sends a picture to friends, the picture uploaded at the front end is audited, or when a user uploads a picture to a social networking site, the uploaded picture is audited. The image auditing device receives, through the network, an image sent by any user via a terminal covered by the image auditing device, and takes the image as the image to be audited, so that all images sent by users are audited by the image auditing device and no image is overlooked. The image auditing process of the image auditing device is triggered as soon as the image auditing device receives an image to be audited.
It can be understood that the image to be audited is any image uploaded by a user; in other words, all images uploaded by users need to be audited by the image auditing device, so that every image that users covered by the image auditing device propagate on the Internet has been audited.
It should be noted that the image auditing device may be understood as a background server that processes and forwards messages, and the image auditing device may cover terminals of multiple users. For example, the image auditing device may be a background server of the social software, and in this case, the users covered by the image auditing device are all users registered on the background server of the social software.
S102, performing feature library matching on the image to be checked to obtain a matching result, performing object detection on the image to be checked to obtain an object detection result, and performing image classification on the image to be checked to obtain a classification result.
In the embodiment of the invention, in order to detect illegal content presented in various forms, the image auditing device can audit the image to be audited in multiple image processing dimensions after obtaining it, so that illegal content in various forms can be caught and the accuracy of image auditing is enhanced. Specifically, in order to directly detect features describing illegal content, the image auditing device can match the image to be audited against a feature library prepared in advance; in order to detect an illegal object, particularly a small-sized illegal object, the image auditing device can perform object detection on the image to be audited; and in order to judge as a whole whether the event described by the image to be audited violates the rules, the image auditing device can classify the image to be audited. For each image processing dimension, the image auditing device obtains a result representing whether the image to be audited contains illegal content, so that three results are available after this step is finished: a matching result obtained by feature library matching, an object detection result obtained by object detection, and a classification result obtained by image classification.
Further, in the embodiment of the present invention, the matching result indicates whether features corresponding to the illegal content exist in the image to be audited, the object detection result indicates whether an illegal object with a size smaller than a preset size threshold exists in the image to be audited, and the classification result indicates whether an event described by the image to be audited is illegal.
It should be noted that the feature library may include only the violation feature library, or may include both the violation feature library and the failure feature library. The violation feature library comprises violation features extracted from violation images and representing violation content; these include features extracted from existing historical violation images as well as new violation features extracted from the latest published violation images obtained from supervision departments or the Internet, so that during feature matching both historical violation images and their variants, and the latest published violation images and their variants, can be detected. The failure feature library holds violation features that are no longer in force, that is, the content represented by these features no longer counts as violation content; for example, the features of violation images whose ban has been lifted are such invalidated features. This ensures that formerly banned images and their variants can still be recognized during image review, so that they can subsequently be released.
It can be understood that, in some embodiments of the present invention, when the feature library includes a violation feature library, if the matching result indicates that the image to be reviewed hits the violation feature library, it indicates that the image to be reviewed includes violation content in the feature library dimension, and when the matching result indicates that the image to be reviewed does not hit the violation feature library, it indicates that the image to be reviewed does not include the violation content in the feature library dimension. In other embodiments of the present invention, the feature library further includes a failure feature library, and when the matching result indicates that the image to be reviewed hits the failure feature library, it indicates that the image to be reviewed does not include the illegal content in the feature library dimension. Further, when the matching result indicates that the image to be audited does not hit the failure feature library, at this time, if the image to be audited hits the violation feature library, it indicates that the image to be audited contains the violation content under the feature library dimension, and if the image to be audited does not hit the violation feature library, it indicates that the image to be audited does not contain the violation content.
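The precedence described above, with the failure feature library overriding the violation feature library, can be sketched as a minimal decision function; the names `hits_violation` and `hits_failure` are illustrative assumptions, not part of the embodiment:

```python
def feature_library_verdict(hits_violation: bool, hits_failure: bool) -> bool:
    """Return True if the image counts as violating in the feature-library dimension.

    A hit on the failure feature library takes precedence: the matched content
    is no longer considered a violation, so the image passes this dimension.
    """
    if hits_failure:
        return False
    return hits_violation
```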
In some embodiments of the present invention, in some violation images, violation contents may only occupy a small area of the violation image, and these violation contents in the small area are easily ignored, and are difficult to detect by using a general object detection method, so that in image review, a violation image including the violation contents in the small area is misjudged. For the situation, the image auditing device may firstly train an object detection model capable of specifically detecting the small-size illegal content by using the small-size illegal content, such as a small flag image, and detect the image to be audited by using the object detection model when the image detection mode is illegal object detection, so as to find the illegal object with a smaller size in the image to be audited. Of course, in other embodiments of the present invention, the object detection model may also be trained by using the large-size illegal content, or the object detection model may also be trained by using the large-size illegal content and the small-size illegal content together, so as to search for the large-size illegal content and the small-size illegal content in the image to be reviewed at the same time.
In practical application, there may be violation images that contain no obvious violation content, but in which the event described by the whole image, that is, what the image content is meant to express, is a violation; for example, a picture that shows no bloody content at all, yet uses special techniques such as hints and metaphors so that the overall image content describes a violation event. In this case, the image auditing device can train a classification recognition model for recognizing such violation images, using images that contain no violation content but whose overall expression is a violation, so that when the image auditing device selects image classification as the image detection mode, violation images that present violation content through such special techniques can be detected.
It can be understood that, in the embodiment of the present invention, the violation image may be a set of rejected bad images each time an image review is performed manually, or may be a bad image crawled from a terminal of a management department. The violation images may be various types of bad images, such as a riot-terrorist type bad image, a pornographic type bad image, a bad image relating to a violation political problem, and the like.
S103, when the matching result, the object detection result and the classification result have a result representing that the image to be audited contains the illegal content, generating an audit result that the image to be audited belongs to the illegal image; and the auditing result represents whether the image to be audited belongs to the illegal image.
After the image auditing device obtains the matching result, the object detection result and the classification result, the image auditing device logically integrates the obtained results, and then knows whether the image to be audited contains illegal contents or not from the results. Since the results obtained by the image auditing device are generated by image detection modes with different dimensions, it is highly possible that some of the results represent that illegal contents exist in the image to be audited, and other results represent that illegal contents do not exist in the image to be audited. In order to enhance the review of illegal contents in different forms and improve the accuracy of the review of illegal images, the image review device will classify the image to be reviewed as the illegal image as long as any one of the matching result, the object detection result and the classification result indicates that the image to be reviewed contains the illegal contents, thereby generating the review result.
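The logical integration described above amounts to a disjunction of the three per-dimension results: any single dimension flagging the image is enough. A minimal sketch, with function and flag names as assumptions:

```python
def integrate_results(matching_violates: bool,
                      detection_violates: bool,
                      classification_violates: bool) -> str:
    """Classify the image as a violation if any one dimension flags it."""
    if matching_violates or detection_violates or classification_violates:
        return "violation"
    return "pass"
```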
It should be noted that, when the image review device logically integrates the matching result, the object detection result and the classification result, and determines whether the image to be reviewed includes the illegal content, the illegal content included in the image to be reviewed can be identified, the category of the illegal content can be obtained, the illegal level of the illegal content can be determined, the dimension of the image detection mode of the illegal content can be detected, and the like, and the category of the illegal content, the illegal level and the dimension of the image detection mode can be added to the review result, so that the illegal details of the image to be reviewed can be known by reading the review result.
It can be understood that when the matching result represents that the image to be audited has features corresponding to illegal content, this indicates that the image to be audited contains illegal content in the feature library dimension; when the object detection result represents that an illegal object with a size smaller than the preset size threshold exists in the image to be audited, this indicates that the image to be audited contains illegal content in the object detection dimension; and when the classification result represents that the event described by the image to be audited is a violation event, for example a political event, this indicates that the image to be audited contains illegal content in the image classification dimension.
For example, an exemplary process schematic diagram for generating an audit result is provided in the embodiment of the present invention, referring to fig. 4, after an image to be audited 4-1 is received by an image auditing device, the image to be audited may be processed by respectively using feature library matching 4-2, object detection 4-3, and image classification 4-4. Then, the image auditing device analyzes the matching result obtained by matching the feature library 4-2, the object detection result obtained by detecting the object 4-3 and the classification result obtained by classifying the image 4-4, so as to know whether the image to be audited hits the violation feature library 4-5, whether the image to be audited has a small-sized violation object 4-6 and whether an event described by the image to be audited violates 4-7, and further know whether the image to be audited has violation content respectively under the feature library dimension, the object identification dimension and the image classification dimension. As shown in fig. 4, when the image to be reviewed hits the violation feature library, has a small-sized violation object, and the described event is a violation, then the image review result will consider that the image to be reviewed has violation content in the feature library dimension, the object identification dimension, and the image classification dimension, and generate the review result 4-8.
Illustratively, referring to fig. 5, an embodiment of the present invention provides a schematic diagram of an image review process. As shown in fig. 5, after an image to be reviewed is input 5-1, the image review device performs feature library matching 5-2 on the image to be reviewed, and in the feature library matching 5-2, the image review device may perform violation feature library matching 5-21 by using a violation feature library and perform failure feature library matching 5-22 by using a failure feature library, respectively. Meanwhile, the image auditing device also performs object detection 5-3 on the image to be audited by using the object detection model, and performs image classification 5-4 on the image to be audited by using the classification identification model. Finally, the image auditing equipment logically integrates 5-5 the detection result obtained by matching 5-21 the violation feature library, the detection result obtained by matching 5-22 the failure feature library, the object detection result obtained by detecting 5-3 the object and the classification result obtained by classifying 5-4 the image, thereby obtaining the final auditing result 5-6. Therefore, the image auditing equipment can respond to various illegal contents presented in various forms, and the accuracy of illegal image auditing is improved.
In the embodiment of the invention, the image auditing equipment can detect the image to be audited by utilizing three different image detection modes of feature library matching, object detection and image classification dimensionality, so that the image auditing equipment can audit the image to be audited in multiple dimensionalities, illegal contents presented in various forms can be audited, and the accuracy of auditing the illegal images is improved.
Referring to fig. 6, fig. 6 is a second schematic flow chart of an alternative image auditing method according to an embodiment of the present invention. In some embodiments of the present invention, performing feature library matching on an image to be audited to obtain a matching result, that is, a specific implementation process of S102, may include S201-S203, as follows:
S201, acquiring a feature library; the feature library includes at least one feature.
When the image auditing device audits the violation condition of the image to be audited in the feature library dimension, it performs the feature library matching operation on the image to be audited. At this time, the image auditing device needs to acquire the feature library first, so as to match the image to be audited against it. There is at least one feature in the feature library acquired by the image auditing device. Since the feature library may include the violation feature library alone, or both the violation feature library and the failure feature library, a feature in the feature library may be a feature of the violation feature library or a feature of the failure feature library.
It is to be understood that each feature in the feature library may be an intuitive feature, such as a color feature, a shape feature, or a feature obtained by abstracting the image content, such as a vector feature generated for the image content, or another type of feature, and the embodiment of the present invention is not limited herein.
S202, extracting the features of the image to be checked to obtain the features to be checked.
The image auditing equipment firstly obtains a trained feature extraction model from a storage space of the image auditing equipment, then inputs an image to be audited into the trained feature extraction model, and performs feature extraction on the image to be audited through the trained feature extraction model, so as to obtain the feature to be audited.
It can be understood that the feature to be audited is description of the image to be audited, and in order to facilitate comparison between the feature to be audited and each feature in the feature library, the form of the feature to be audited should be the same as that of the feature in the feature library, that is, when the feature in the feature library is a visual feature, the feature to be audited is also a visual feature, and when the feature in the feature library is a vector feature, the feature to be audited is also a vector feature. Of course, in other embodiments of the present invention, the forms of the features to be reviewed and the features in the feature library may also be different, and the embodiments of the present invention are not limited herein.
It should be noted that, in some embodiments of the present invention, the trained feature extraction model may be a convolutional neural network model trained using a large number of images, for example a model trained using crawled historical violation images. Further, in order to obtain features that abstract and summarize the image content, such as vector features, the image auditing device may modify the convolutional neural network model, for example by removing its final classification module, that is, the Softmax layer, and use the resulting model as the feature extraction model; the extracted feature to be audited is then the one-dimensional vector output by the last fully-connected layer of the convolutional neural network.
Further, in the embodiment of the present invention, the one-dimensional vector output by the last fully-connected layer of the convolutional neural network may be normalized, and the normalized result is used as the extracted feature to be audited. In other embodiments of the present invention, other processing may be performed on the one-dimensional vector, for example, the one-dimensional vector is transformed, and the transformation result is used as the extracted feature to be audited. The specific feature extraction manner may be set according to actual requirements, and the embodiment of the present invention is not limited herein.
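As one example of the normalization mentioned above, the one-dimensional vector from the last fully-connected layer could be L2-normalized. A plain-Python sketch, assuming the vector is a list of floats:

```python
import math

def l2_normalize(vec):
    """Scale a feature vector to unit length; zero vectors are returned unchanged."""
    norm = math.sqrt(sum(v * v for v in vec))
    if norm == 0.0:
        return list(vec)
    return [v / norm for v in vec]
```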
S203, matching the feature to be audited with the features in the feature library to obtain a matching result.
After the image auditing device obtains the features to be audited, the features to be audited are respectively compared with each feature in the feature library. When the features to be checked are the same as or similar to any features in the feature library, the image checking device considers that the features matched with the features to be checked exist in the feature library, that is, the features to be checked hit the feature library, and at this time, the image checking device obtains a matching result of the features matched with the features to be checked existing in the feature library.
Certainly, for the situation that the feature library has both the violation feature library and the failure feature library, the image auditing device compares the features to be audited with the features in the violation feature library one by one, and then compares the features to be audited with the features in the failure feature library one by one, so as to know whether the features to be audited hit the violation feature library or the failure feature library, and thus, the feature library hit by the features to be audited is also written into the matching result. Therefore, the image auditing device can know whether the image to be audited has the characteristics corresponding to the illegal contents or not according to the matching result.
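The two-library matching order described above can be sketched as follows; `matches` is a hypothetical helper standing in for the similarity comparison of S203, and the result records which library (if either) was hit:

```python
def match_feature(feature, violation_lib, failure_lib, matches):
    """Compare the feature against both libraries and record which one was hit."""
    return {
        "hits_violation": any(matches(feature, f) for f in violation_lib),
        "hits_failure": any(matches(feature, f) for f in failure_lib),
    }
```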
In the embodiment of the invention, when the image auditing equipment matches the feature to be audited with each feature in the feature library, similarity calculation is carried out on the feature to be audited and each feature to obtain feature similarity, and when the similarity calculation with all the features in the feature library is completed, the feature similarity corresponding to each feature in the feature library one to one is obtained, so that at least one feature similarity is obtained.
It can be understood that, in the embodiment of the present invention, when performing similarity calculation, the image checking device may calculate cosine similarities between the features to be checked and each feature, and may also calculate euclidean distances between the features to be checked and each feature, and further measure similarities between the features to be checked and the violation features through the euclidean distances. The content of specifically calculating the similarity between the features to be checked and the violation features may be set according to an actual situation, and the embodiment of the present invention is not limited herein.
Further, when at least one of the feature similarities is larger than a preset similarity threshold, the image auditing device generates a matching result representing that a feature matching the feature to be audited exists in the feature library.
It should be noted that, in the embodiment of the present invention, the preset similarity threshold may be set according to an actual situation, for example, may be set to 0.5, and may also be set to 0.7, which is not limited herein.
In some embodiments of the present invention, when the image auditing device calculates the feature similarity, it may perform an inner product operation on the feature to be audited and a feature in the feature library to obtain a feature inner product, perform a length operation on the feature to be audited to obtain the length of the feature to be audited, and perform a length operation on the feature in the feature library to obtain the length of that feature. Finally, it constructs the feature similarity from the feature inner product and the two lengths: the image auditing device multiplies the length of the feature to be audited by the length of the feature in the feature library to obtain a product result, and then divides the feature inner product by the product result; the resulting ratio is the feature similarity.
For example, the embodiment of the present invention provides a formula for calculating the feature similarity, see formula (1):

    sim(x, y) = (x · y) / (||x|| ||y||)    (1)

where sim(x, y) is the feature similarity between the feature to be audited x and a feature y in the feature library, x · y is the feature inner product, and ||x|| and ||y|| are the lengths of the feature to be audited and of the feature in the feature library, respectively. The image auditing device only needs to substitute the specific values of these parameters into formula (1) to calculate the feature similarity.
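Formula (1) is ordinary cosine similarity and can be sketched in a few lines of plain Python:

```python
import math

def cosine_similarity(x, y):
    """Feature similarity per formula (1): inner product over the product of lengths."""
    inner = sum(a * b for a, b in zip(x, y))
    len_x = math.sqrt(sum(a * a for a in x))
    len_y = math.sqrt(sum(b * b for b in y))
    if len_x == 0.0 or len_y == 0.0:
        return 0.0  # degenerate case: a zero vector has no direction
    return inner / (len_x * len_y)
```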
In the embodiment of the invention, the image auditing equipment can match the features in the feature library with the features to be audited, so that whether the features to be audited are the features corresponding to the illegal contents is judged, and finally, a matching result which can indicate whether the images to be audited contain the features corresponding to the illegal contents is obtained, therefore, the image auditing equipment can complete the process of matching the feature library of the images to be audited under the dimension of the feature library, and the accuracy of auditing the illegal images is improved.
In some embodiments of the present invention, performing object detection on an image to be examined to obtain an object detection result, that is, the specific implementation process of S102 may include: S204-S206, as follows:
S204, partitioning the image to be audited to obtain a plurality of image blocks to be audited; the size of each image block to be audited is smaller than a preset size threshold.
In some illegal images, illegal contents may only occupy a small area of the illegal image, and for the illegal contents with the small area, the illegal contents are difficult to detect by using a general object detection method, so that the illegal images containing the illegal contents with the small area are misjudged in image review. For such a situation, the image auditing device may first train an object detection model capable of specifically detecting the small-size illegal content by using the small-size illegal content, for example, the small-size illegal image, and then perform object detection on the image to be audited by using the object detection model after obtaining the image to be audited, so as to determine whether the image to be audited has the small-size illegal content. At this time, the image checking device divides the image to be checked into a plurality of image blocks to be checked, of which the size is smaller than a preset size threshold, by using the object detection model, so that subsequent feature extraction and feature identification are performed based on the small-size image blocks.
It is understood that, in the embodiment of the present invention, the preset size threshold may be set to 20 × 20, or may be set to 10 × 10, and may also be set according to actual requirements, and the embodiment of the present invention is not specifically limited herein.
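The partitioning step can be sketched as tiling the image into blocks no larger than the preset size threshold (20 × 20 pixels is assumed here, as one of the example values above); tile coordinates rather than pixel data are returned for brevity:

```python
def partition_image(width, height, block_size=20):
    """Return (left, top, w, h) tiles covering the image, each at most block_size square."""
    tiles = []
    for top in range(0, height, block_size):
        for left in range(0, width, block_size):
            tiles.append((left, top,
                          min(block_size, width - left),   # clip at the right edge
                          min(block_size, height - top)))  # clip at the bottom edge
    return tiles
```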
S205, performing feature extraction on the plurality of image blocks to be audited to obtain a plurality of image block features corresponding to the plurality of image blocks to be audited.
S206, performing object detection on the plurality of image block features to obtain an object detection result.
The image auditing equipment continues to perform feature extraction on each image block to be audited by using the object detection model so as to obtain image block features corresponding to each image block to be audited, and then identifies the image block features by using the object detection model so as to obtain an identification result corresponding to each image block to be audited. When the identification result of one image block to be audited exists and indicates that the object in the image block to be audited is an illegal object, the image auditing device generates an object detection result that the image to be audited contains the illegal object with the size smaller than the preset size threshold, otherwise, when the identification results of all the image blocks to be audited indicate that the object in the image block to be audited is not the illegal object but a normal object, the object detection result generated by the image auditing device represents that the image to be audited does not contain the illegal object with the size smaller than the preset size threshold.
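The aggregation rule in the paragraph above, where a single violating block is enough to flag the whole image, can be sketched as follows; the per-block labels are assumed to be the recognition results produced by the object detection model:

```python
def aggregate_block_results(block_labels):
    """Flag the image if any image block is recognized as a violating object."""
    return any(label == "violation" for label in block_labels)
```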
In the embodiment of the invention, the image auditing equipment can utilize the trained object detection model for the small-size illegal object to detect the object of the image to be audited, so as to judge whether the image to be audited has the small-size illegal content, therefore, the image auditing equipment can detect the small-size illegal content in the image to be audited, and the accuracy of auditing the illegal image is improved.
In some embodiments of the present invention, classifying the image to be reviewed to obtain a classification result, that is, a specific implementation process of S102 may include: S207-S208, as follows:
and S207, performing feature extraction on the image to be audited to obtain the feature to be audited.
And S208, classifying the features to be examined by adopting the trained classification recognition model to obtain a classification result.
For the case where the whole image content describes a specific violation event by special methods such as hints and metaphors, the image auditing device can train a classification recognition model for recognizing such violation images, using images that contain no violation content but whose overall expression describes a specific event, and store the classification recognition model in a storage space. Then, when an image to be audited is obtained, the image to be audited is input into the classification recognition model, the feature to be audited is extracted by the feature extraction layer of the classification recognition model, and the feature to be audited is classified by the feature classification layer of the classification recognition model, so that the category of the image to be audited is judged, that is, whether the event described by the image to be audited is a violation, and a classification result is obtained.
It is understood that the classification recognition model can be trained by using labeled historical violation images, and the labeled information includes the category to which the historical violation images belong.
In the embodiment of the invention, the image auditing device can also classify the image to be audited as a whole, thereby completing the audit under the image classification dimension. In this way, violation content presented through indirect means can be detected, which improves the accuracy of auditing violation images.
In some embodiments of the present invention, before obtaining the feature library, i.e., before S201, the method may further include: S209-S211, as follows:
S209, acquiring a historical feature library when a new violation image is received; the historical feature library is extracted from violation images received at historical time points, and the new violation image represents a newly issued violation image.
In this embodiment of the invention, the feature library comprises only the violation feature library. To ensure that the image auditing device can respond immediately to a newly issued violation image during the image auditing process, the image auditing device can acquire newly issued violation images in real time from a terminal of the supervision department through a specific image acquisition interface. When the image auditing device acquires a newly issued violation image, i.e. a new violation image, a violation image different from the existing violation images exists; at this time, the image auditing device first needs to acquire the pre-constructed historical feature library so that the historical feature library can subsequently be updated with the new violation image.
It is understood that the historical feature library is constructed from features extracted from the existing violation images, i.e. the violation images received at historical time points; that is, the historical feature library is a description of the existing violation images in the feature library dimension. The image auditing device can extract the features of the violation images received at historical time points in advance, form the historical feature library from the extracted features and store it, and then directly acquire and use the historical feature library when it is needed, thereby saving processing time. In other embodiments of the present invention, the image auditing device may instead perform feature extraction on the violation images received at historical time points in real time, so as to construct the historical feature library on demand.
Further, in an actual situation, there is also a case that no violation image is received at a historical time point, that is, the violation image has not been issued by the supervision department yet, at this time, the historical feature library may be empty, and when a new violation image is acquired, a new violation feature is extracted from the new violation image, and then the feature library is constructed by directly using the new violation feature. When the regulatory authority has a newly published violation image in the future, the feature library becomes a historical feature library.
And S210, performing feature extraction on the new violation image to obtain a new violation feature.
And S211, adding the new violation features into the historical feature library to obtain a feature library.
After obtaining the new violation image, the image auditing device extracts the new violation feature from it and adds the new violation feature to the historical feature library, so that the historical feature library is updated with the new violation feature and the feature library is obtained. Therefore, when the image auditing device audits the image to be audited under the feature library dimension, it can respond in time to the new violation image and its variants, which improves the accuracy of auditing violation images.
For example, the features in the historical feature library may be respectively represented as Vblack1, Vblack2, …, Vblackn, and the new violation feature extracted from the new violation image by the image auditing device as Vnew. In this case, the feature library can be represented as Vblack = {Vblack1, Vblack2, …, Vblackn, Vnew}.
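The update step S211 amounts to appending the new feature to the existing library. A minimal sketch, with toy two-dimensional vectors standing in for the real model features:

```python
def update_feature_library(historical_features, new_feature):
    """S211: append the feature extracted from the newly issued violation
    image to the historical feature library (Vblack gains Vnew).
    Returns a new list; the historical library is left unmodified."""
    return historical_features + [new_feature]

history = [[1.0, 0.0], [0.0, 1.0]]                      # toy Vblack1, Vblack2
library = update_feature_library(history, [0.6, 0.8])   # toy Vnew
```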
For example, referring to fig. 7, the image auditing device collects the existing violation images, that is, collects the historical violation images 7-1, then performs feature extraction on the historical violation images 7-1 and uses the extracted features to form the historical feature library 7-2. Then, when the image auditing device obtains a new violation image 7-3, it performs feature extraction on the new violation image 7-3 to obtain a new violation feature 7-4, and adds the new violation feature 7-4 to the historical feature library 7-2 to obtain the feature library 7-5. Subsequently, when the image auditing device receives the image to be audited 7-6, it can perform feature extraction 7-7 on the image to be audited 7-6 to obtain the feature to be audited 7-8, then perform feature matching 7-9 between the feature to be audited 7-8 and the feature library 7-5 to obtain a matching result, and thereby obtain the detection result 7-10 under the feature library dimension.
In the embodiment of the invention, the image auditing device can extract the new violation features at the first time when the newly issued violation images are acquired, and update the historical feature library by using the new violation features, so that the image auditing device can find the newly issued violation images in time from the images uploaded by a user, respond to the newly issued violation images, and improve the accuracy of auditing the violation images.
In some embodiments of the present invention, after performing feature extraction on the new violation image to obtain the new violation feature, and before adding the new violation feature to the historical feature library to obtain the feature library, that is, after S210 and before S211, the method may further include S212-S213, as follows:
S212, when a failure violation image is obtained, performing feature extraction on the failure violation image to obtain a failure violation feature.
In practical applications, some images no longer belong to violation images after a preset time point; such an image can be regarded as a failure violation image. In this case, the historical feature library previously established by the image auditing device may still contain features corresponding to the violation content of these failure violation images, so that when the image auditing device receives an image to be audited containing such content, it would determine the image to be a violation image even though the image is, in essence, no longer in violation and should be released during the audit. To handle this, the image auditing device can acquire failure violation images from terminals of the supervision department and then perform feature extraction on them with the trained model; the extracted features are the failure violation features.
It is to be understood that the preset time point in the embodiment of the present invention may be set according to actual requirements, for example, set to a certain date, and the embodiment of the present invention is not limited herein.
And S213, integrating the failure violation features to obtain a failure feature library.
The image auditing device integrates the failure violation features together to obtain the failure feature library. In other words, the failure feature library is a collection of failure violation features. After obtaining the failure feature library, the image auditing device may use it to form part of the feature library; at this time, the specific implementation of adding the new violation feature to the historical feature library to obtain the feature library, i.e. S211, may be changed to: adding the new violation feature to the historical feature library to obtain the violation feature library, and forming the feature library from the violation feature library and the failure feature library.
Illustratively, when the violation feature library after adding the new violation feature is denoted Vblack = {Vblack1, Vblack2, …, Vblackn, Vnew} and the failure feature library is denoted Vwhite = {Vwhite1, …, Vwhitem}, the feature library is V = {Vblack, Vwhite}.
It should be noted that, in the embodiment of the present invention, in order to simplify the process of judging from the matching result whether the image to be audited contains features corresponding to violation content, the image auditing device may remove the failure violation features from the historical feature library, so that the failure feature library and the violation feature library do not overlap and the feature to be audited can hit at most one of the two libraries. Of course, in other embodiments of the present invention, the image auditing device may leave the failure violation features in the historical feature library, in which case the feature to be audited may possibly hit both the failure feature library and the violation feature library.
In the embodiment of the invention, the image auditing device can extract the failure violation features from the failure violation images to obtain the failure feature library, so that images to be audited that match failure violation images can be released during detection under the feature library dimension, which further improves the responsiveness of image auditing to failure violation images.
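The optional de-overlap described above — removing expired features from the violation feature library — can be sketched as follows. For brevity this sketch compares features by exact identity, whereas the real system matches features by similarity:

```python
def remove_expired(violation_features, failure_features):
    """Drop expired (failure) features from the violation feature library
    so a feature to be audited can hit at most one of the two libraries."""
    expired = set(failure_features)
    return [f for f in violation_features if f not in expired]

black = ["f1", "f2", "f3"]   # toy violation features
white = ["f2"]               # toy failure feature (f2 has expired)
deduped = remove_expired(black, white)
```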
In some embodiments of the present invention, when the feature library includes both the failure feature library and the violation feature library and there is no overlap between them, matching the feature to be audited against each feature in the feature library to obtain the matching result, that is, the specific implementation process of S203, may include S2031-S2033, as follows:
S2031, judging whether the feature to be audited hits the violation feature library, and judging whether the feature to be audited hits the failure feature library, to obtain a judgment result.
Whether the feature to be audited hits the violation feature library or the failure feature library directly determines whether the image to be audited contains features corresponding to violation content: when the feature to be audited hits the violation feature library, the image to be audited contains features corresponding to violation content, and when the feature to be audited hits the failure feature library, it does not. Therefore, after obtaining the feature to be audited, the image auditing device first needs to judge whether the feature to be audited hits the violation feature library and whether it hits the failure feature library.
S2032, when the judgment result represents that the feature to be audited hits the violation feature library, generating a matching result representing that features corresponding to violation content exist in the image to be audited.
S2033, when the judgment result represents that the feature to be audited hits the failure feature library, generating a matching result representing that features corresponding to violation content do not exist in the image to be audited.
When the judgment result represents that the feature to be audited hits the violation feature library, the image auditing device generates a matching result representing that features corresponding to violation content exist in the image to be audited; when the feature to be audited hits the failure feature library, the matching result generated by the image auditing device represents that features corresponding to violation content do not exist in the image to be audited.
It should be noted that, in the embodiment of the present invention, the execution sequence of S2032 and S2033 does not affect the final matching result, so in some embodiments of the present invention, S2033 may be executed first, then S2032 may be executed, or S2032 and S2033 may be executed at the same time, which is not limited herein.
In the embodiment of the invention, the image auditing equipment can judge the hit conditions of the features to be audited on the violation feature library and the failure feature library so as to generate the matching result, so that the image auditing equipment completes the matching of the features to be audited and each feature in the feature library.
In some embodiments of the present invention, when the feature library includes both the failure feature library and the violation feature library, and a portion where the failure feature library and the violation feature library overlap exists, it is necessary to determine whether the image to be reviewed includes the feature corresponding to the violation content according to different situations.
Further, when the features to be checked hit the violation feature library and do not hit the failure feature library, the features corresponding to the violation content exist in the image to be checked; when the features to be checked hit the violation feature library and hit the failure feature library, the features corresponding to the violation contents do not exist in the image to be checked; when the features to be audited do not hit the violation feature library but hit the failure feature library, the features corresponding to the violation contents do not exist in the images to be audited; and when the features to be audited do not hit the violation feature library and do not hit the failure feature library, the features corresponding to the violation contents do not exist in the images to be audited.
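The four cases above reduce to a single boolean condition, which can be sketched directly:

```python
def contains_violation_feature(hits_violation_lib, hits_failure_lib):
    """Features corresponding to violation content exist only when the
    violation feature library is hit and the failure feature library
    is not; all other combinations mean no violation feature."""
    return hits_violation_lib and not hits_failure_lib
```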
Based on fig. 3, referring to fig. 8, fig. 8 is a schematic view illustrating an optional flow of an image review method according to an embodiment of the present invention. In some embodiments of the present invention, after performing feature library matching on an image to be checked to obtain a matching result, performing object detection on the image to be checked to obtain an object detection result, and performing image classification on the image to be checked to obtain a classification result, that is, after S102, the method may further include: s104, the following steps are carried out:
and S104, when no result representing that the image to be audited contains the illegal content exists in the matching result, the object detection result and the classification result, generating an auditing result that the image to be audited does not belong to the illegal image.
The image auditing device can not only find violation images but also identify which images do not belong to violation images, i.e. which images are normal. To do so, the image auditing device reads the matching result, the object detection result and the classification result one by one; when all of the results represent that the image to be audited does not contain violation content, the image auditing device generates an audit result that the image to be audited does not belong to a violation image. In this way, the image auditing device can distinguish normal images.
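The aggregation of S103 and S104 — any of the three dimensions reporting violation content makes the image a violation image, otherwise it is normal — can be sketched as follows (the function name and result strings are illustrative):

```python
def audit_verdict(results):
    """The image belongs to a violation image if any of the matching,
    object detection and classification results represents that it
    contains violation content; otherwise it is normal."""
    return "violation" if any(results.values()) else "normal"

# All three dimensions report no violation content, as in the banana example.
verdict = audit_verdict({"matching": False, "object": False, "classification": False})
```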
For example, the embodiment of the present invention provides an exemplary schematic diagram in which the audit result is not a violation image. Referring to fig. 9, the content of the image to be audited 9-1 is a banana, and the audit result 9-3 is obtained after the image auditing device performs the audit 9-2 on the image to be audited 9-1 along the three dimensions of feature library matching, object detection and image classification. The audit result 9-3 is obtained by the image auditing device logically analyzing the matching result, the object detection result and the classification result. The classification results are: the probability that the image to be audited describes a pornographic event 9-31 is 0.2, the probability of a terrorism event 9-32 is 0, and the probability of a political event 9-33 is 0; the object detection result is the item category 9-34: banana; the matching results are the violation feature library matching situation 9-35: none, and the failure feature library matching situation 9-36: none. By analyzing and integrating these results, the image auditing device obtains the final audit result 9-3 for the image to be audited 9-1: normal.
In the embodiment of the invention, the image auditing device can also integrate and analyze the matching result, the object detection result and the classification result to obtain an audit result that the image to be audited does not belong to a violation image, so that the image auditing device can distinguish violation images from normal images.

In the following, an exemplary application of the embodiments of the present invention in a practical application scenario will be described.
The embodiment of the invention is realized in a scenario where the back end (the image auditing device) audits images uploaded by a user at the front end and displays the audit result on a display interface, so that an auditor can further process the images uploaded by the user. Referring to fig. 10, fig. 10 is a schematic diagram of an image audit process in a practical application scenario provided by the embodiment of the present invention. When a user inputs an image 10-1 (an image to be audited) to the back end 10-b through the front end 10-a, the back end 10-b performs the image audit 10-2 on the image input by the user using black library retrieval (violation feature library matching), white library retrieval (failure feature library matching), article detection (object detection) and sensitive identification (image classification) respectively, to obtain the audit result. The audit result is then sent to the terminal used by the auditor, i.e. the front end 10-c, and presented on the front end 10-c so that the auditor can conveniently take the next step.
Before black library retrieval, a black library seed library is initialized. If historical black library pictures (violation images received at historical time points) exist, they are added to the black library seed library; otherwise, the black library seed library is left empty. Next, the black library features (the historical feature library) are initialized: when the black library seed library is not empty, the back end 10-b performs feature extraction on the black library pictures in the seed library and adds the extracted features to the black library features. The feature extraction is performed by a pre-specified method; for example, a deep learning pre-training model is used to extract the one-dimensional vector of a fully connected layer, which is normalized to obtain Vblack. When a newly added sensitive graph (a new violation image) published in real time exists, it is added to the black library seed library, feature extraction is performed on it (extracting a new violation feature), and its feature is added to Vblack (obtaining the violation feature library). During black library retrieval, the back end 10-b performs feature extraction on the image input by the user to obtain Vinput (the feature to be audited), and then calculates, using formula (1), the cosine similarity between Vinput and each feature in Vblack (the feature similarity). When some feature in Vblack has a cosine similarity with Vinput greater than a specified threshold (the preset similarity threshold), the pairing succeeds and the image input by the user hits the black library (the matching result).
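The cosine-similarity retrieval described above can be sketched as follows; the 0.9 threshold and the toy two-dimensional vectors are illustrative, as the real Vblack features are normalized fully-connected-layer vectors:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two vectors, as in formula (1)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def hits_black_library(v_input, v_black, threshold=0.9):
    """The user's image hits the black library when some feature in
    Vblack has cosine similarity with Vinput above the threshold."""
    return any(cosine_similarity(v_input, v) > threshold for v in v_black)

v_black = [[1.0, 0.0], [0.0, 1.0]]   # toy black library features
```

White library retrieval uses the same computation against the expired features.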
White library retrieval is similar to black library retrieval, except that the white library pictures are black library pictures that have expired (failure violation images). The back end 10-b performs feature extraction on the white library pictures to obtain their features (failure violation features), and then calculates the cosine similarity between each white library picture feature and Vinput. When the cosine similarity between a white library picture feature and Vinput is greater than the specified threshold, the image input by the user hits the white library (the matching result).
In the article detection, the image input by the user is detected with a pre-trained target detection model (the object detection model); the detection objects are small-area objects (violation objects with size smaller than the preset size threshold) such as a reflection flag, a black robe and a badge, and a detection result (the object detection result) is returned.
The sensitive identification uses a recognition model (the classification recognition model) trained in advance to recognize the labels of images input by users, such as pornography, terrorism and politically sensitive labels, and returns the classification result. Here, the recognition model may be trained with images that contain no obvious sensitive objects but describe sensitive events (violation events) as a whole.
After completing black library retrieval, white library retrieval, article detection and sensitive identification, the back end 10-b integrates the output results of the four dimensions and outputs the final audit result according to the judgment logic. Fig. 11 is a schematic process diagram of integrating audit results according to an embodiment of the present invention. As shown in fig. 11, after the image 11-1 input by the user has been subjected to black library retrieval 11-2, white library retrieval 11-3, article detection 11-4 and sensitive identification 11-5, the back end 10-b can judge whether the image input by the user hits the black library 11-6 (the matching result); when it does, the back end obtains the result that the image input by the user is abnormal (the image to be audited contains violation content) together with the label of the hit image (the category of the violation content). The back end 10-b judges whether the image input by the user hits the white library 11-7; when it does, the image input by the user is normal (the image to be audited does not contain violation content). The back end 10-b also judges whether the object detection result contains sensitive objects 11-8; when sensitive objects exist (the image to be audited contains violation content), the back end considers the image input by the user to be abnormal and outputs the label of the sensitive object, or the label with the highest priority among multiple sensitive objects.
Further, when the image input by the user hits the white library, the back end 10-b can judge whether the image is normal by combining the article detection result: when the image input by the user hits the white library and the article detection finds no sensitive object, the image input by the user is normal (does not contain violation content). The back end 10-b can also judge whether the category label of the image input by the user is sensitive 11-9; when the category label is sensitive, the back end 10-b obtains the result that the image input by the user is abnormal. In this way, the back end 10-b obtains the final audit result 11-10.
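The judgment logic of fig. 11 can be sketched as follows. The ordering of the checks is an assumption where the description leaves it open; what is fixed by the text is that a black library hit or a detected sensitive object marks the image abnormal, a white library hit combined with a clean article detection releases it, and otherwise the category label decides:

```python
def integrate_results(hits_black, hits_white, sensitive_object, sensitive_label):
    """Combine the four dimensions into the final audit result."""
    if hits_black or sensitive_object:
        return "abnormal"          # violation content found
    if hits_white:
        return "normal"            # expired violation image, released
    return "abnormal" if sensitive_label else "normal"
```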
With the above method, the back end can respond in time to new sensitive graphs published in real time, so that a new sensitive graph cannot continue to spread once published; the back end can also release expired sensitive graphs in time; meanwhile, the back end can detect small-size sensitive articles and recognize images that do not obviously contain sensitive articles but describe sensitive events, thereby improving the recognition accuracy for different types of sensitive images.
Continuing with the exemplary structure of the image review device 255 provided by the embodiments of the present invention implemented as software modules, in some embodiments, as shown in fig. 2, the software modules stored in the image review device 255 of the memory 250 may include:
a receiving module 2551, configured to receive an image to be audited; the image to be audited is any image uploaded through the terminal;
an auditing module 2552, configured to perform feature library matching on the image to be audited to obtain a matching result, perform object detection on the image to be audited to obtain an object detection result, and perform image classification on the image to be audited to obtain a classification result;
a result generating module 2553, configured to generate an audit result that the image to be audited belongs to the illegal image when a result that represents that the image to be audited includes the illegal content exists in the matching result, the object detection result, and the classification result; the auditing result represents whether the image to be audited belongs to an illegal image or not;
the matching result represents whether the image to be audited has the characteristics corresponding to the violation content, the object detection result represents whether the image to be audited has the violation object with the size smaller than a preset size threshold, and the classification result represents whether the event described by the image to be audited is violation.
In some embodiments of the present invention, the auditing module 2552 is specifically configured to obtain the feature library; the feature library comprises at least one feature; extracting the features of the image to be audited to obtain the features to be audited; and matching the features to be checked with the features in the feature library to obtain the matching result.
In some embodiments of the present invention, the auditing module 2552 is specifically configured to block the image to be audited to obtain a plurality of image blocks to be audited; the size of the image block to be audited is smaller than a preset size threshold; performing feature extraction on the plurality of image blocks to be audited to obtain a plurality of image block features corresponding to the plurality of image blocks to be audited; and carrying out object detection on the characteristics of the image blocks to obtain an object detection result.
In some embodiments of the present invention, the auditing module 2552 is specifically configured to perform feature extraction on the image to be audited to obtain a feature to be audited; and classifying the features to be examined by adopting a trained classification recognition model to obtain the classification result.
In some embodiments of the present invention, the image auditing device 255 further includes: a feature library generation module 2554;
the feature library generating module 2554 is configured to, when a new violation image is received, obtain a historical feature library; the historical feature library is extracted from violation images received at historical time points, and the new violation image represents a newly issued violation image; extracting the characteristics of the new violation image to obtain new violation characteristics; and adding the new violation features to the historical feature library to obtain the feature library.
In some embodiments of the present invention, the feature library generating module 2554 is further configured to, when a violation failure image is obtained, perform feature extraction on the violation failure image to obtain a violation failure feature; integrating the failure violation features to obtain a failure feature library; correspondingly, the adding the new violation feature to the historical feature library to obtain the feature library includes: and adding the new violation features into the historical feature library to obtain a violation feature library, and forming the feature library by using the violation feature library and the failure feature library.
In some embodiments of the present invention, the auditing module 2552 is specifically configured to judge whether the feature to be audited hits the violation feature library and whether the feature to be audited hits the failure feature library, to obtain a judgment result; when the judgment result represents that the feature to be audited hits the violation feature library, generate the matching result representing that features corresponding to violation content exist in the image to be audited; and when the judgment result represents that the feature to be audited hits the failure feature library, generate the matching result representing that features corresponding to violation content do not exist in the image to be audited.
In some embodiments of the present invention, the result generating module 2553 is further configured to generate the review result that the image to be reviewed does not belong to the violation image when no result that represents that the violation content is included in the image to be reviewed exists in the matching result, the object detection result, and the classification result.
Embodiments of the present invention provide a computer-readable storage medium having stored therein executable instructions that, when executed by a processor, cause the processor to perform an image review method provided by embodiments of the present invention, for example, the methods shown in fig. 3, 6 and 8.
In some embodiments, the computer-readable storage medium may be memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, optical disk, or CD-ROM; or may be various devices including one or any combination of the above memories.
In some embodiments, the executable image review instructions may be written in any form of programming language (including compiled or interpreted languages), in the form of a program, software module, script, or code, and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
By way of example, the executable image review instructions may, but need not, correspond to files in a file system, may be stored in a portion of a file that holds other programs or data, such as in one or more scripts in a HyperText Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code).
By way of example, the executable image review instructions may be deployed to be executed on one computing device, or on multiple computing devices located at one site, or distributed across multiple sites and interconnected by a communication network.
The above description is only an example of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, and improvement made within the spirit and scope of the present invention are included in the protection scope of the present invention.
Claims (10)
1. An image review method, comprising:
receiving an image to be audited; wherein the image to be audited is any image uploaded through a terminal;
performing feature library matching on the image to be audited to obtain a matching result, performing object detection on the image to be audited to obtain an object detection result, and performing image classification on the image to be audited to obtain a classification result;
when a result representing that the image to be audited contains violation content exists among the matching result, the object detection result, and the classification result, generating an auditing result that the image to be audited belongs to a violation image; wherein the auditing result represents whether the image to be audited belongs to a violation image;
wherein the matching result represents whether the image to be audited contains a feature corresponding to violation content, the object detection result represents whether the image to be audited contains a violation object whose size is smaller than a preset size threshold, and the classification result represents whether the event described by the image to be audited is a violation.
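By way of illustration only (this sketch is not part of the claims), the decision of claim 1 can be expressed as a logical OR over the three branch results; all function and variable names here are hypothetical:

```python
def audit_image(matching_hit, detection_hit, classification_hit):
    """Combine the three branch results of claim 1.

    Each argument is a hypothetical boolean: True means that branch
    found violation content in the image to be audited. The image is
    flagged as a violation image if ANY branch hits.
    """
    is_violation = matching_hit or detection_hit or classification_hit
    return "violation" if is_violation else "pass"

# Example: only the object detection branch hits -> violation image.
print(audit_image(False, True, False))  # violation
```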
2. The method according to claim 1, wherein the performing feature library matching on the image to be audited to obtain a matching result comprises:
acquiring the feature library; the feature library comprises at least one feature;
extracting the features of the image to be audited to obtain the features to be audited;
and matching the features to be audited with the features in the feature library to obtain the matching result.
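By way of illustration only, the feature library matching of claim 2 can be sketched as a nearest-neighbor test against stored feature vectors; the cosine-similarity criterion and the threshold value are assumptions, not taken from the patent:

```python
def cosine_similarity(a, b):
    """Cosine similarity of two feature vectors (plain Python, no deps)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(x * x for x in b) ** 0.5
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def match_feature_library(feature, library, threshold=0.9):
    """Return True (a 'hit') when any stored feature is similar enough.

    `library` is a list of feature vectors previously extracted from
    violation images; `threshold` is an illustrative cut-off.
    """
    return any(cosine_similarity(feature, f) >= threshold for f in library)
```

A feature extracted from the image to be audited hits the library when it nearly coincides with a stored violation feature:

```python
library = [(1.0, 0.0), (0.0, 1.0)]
match_feature_library((1.0, 0.01), library)  # near-duplicate -> hit
match_feature_library((0.7, 0.7), library)   # dissimilar -> no hit
```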
3. The method according to claim 1 or 2, wherein the performing object detection on the image to be audited to obtain an object detection result comprises:
partitioning the image to be audited to obtain a plurality of image blocks to be audited; wherein the size of each image block to be audited is smaller than the preset size threshold;
performing feature extraction on the plurality of image blocks to be audited to obtain a plurality of image block features corresponding to the plurality of image blocks to be audited;
and carrying out object detection on the characteristics of the image blocks to obtain an object detection result.
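By way of illustration only, the partitioning step of claim 3 can be sketched as tiling the image plane into blocks no larger than the size threshold, so that small violation objects occupy a large fraction of some block; the tiling scheme and names are assumptions:

```python
def partition_image(width, height, block_size):
    """Tile a width x height image into blocks of at most block_size per side.

    Returns (x, y, w, h) rectangles covering the whole image; edge blocks
    are clipped. Each block would then go through feature extraction and
    object detection independently.
    """
    blocks = []
    for y in range(0, height, block_size):
        for x in range(0, width, block_size):
            w = min(block_size, width - x)
            h = min(block_size, height - y)
            blocks.append((x, y, w, h))
    return blocks

# A 100x60 image with a 50-pixel block size yields a 2x2 grid of tiles,
# the bottom row clipped to 10 pixels tall.
print(partition_image(100, 60, 50))
```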
4. The method according to claim 1 or 2, wherein the performing image classification on the image to be audited to obtain a classification result comprises:
extracting the features of the image to be audited to obtain the features to be audited;
and classifying the features to be audited by using a trained classification recognition model to obtain the classification result.
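By way of illustration only, the classification step of claim 4 can be sketched with a minimal linear scorer standing in for the trained classification recognition model; the sigmoid squashing and the 0.5 threshold are assumptions for the sketch, not details from the patent:

```python
import math

def classify_features(feature, weights, threshold=0.5):
    """Score a feature vector with illustrative learned weights.

    The dot product is squashed to a probability; the event described
    by the image is labelled a violation above the threshold.
    """
    score = sum(w * x for w, x in zip(weights, feature))
    probability = 1.0 / (1.0 + math.exp(-score))
    return {"is_violation": probability >= threshold,
            "probability": probability}

# A feature aligned with the (hypothetical) violation direction scores high.
print(classify_features((1.0, 2.0), weights=(2.0, 1.0)))
```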
5. The method of claim 2, wherein prior to said obtaining the feature library, the method further comprises:
when a new violation image is received, acquiring a historical feature library; the historical feature library is extracted from violation images received at historical time points, and the new violation image represents a newly issued violation image;
extracting the characteristics of the new violation image to obtain new violation characteristics;
and adding the new violation features to the historical feature library to obtain the feature library.
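By way of illustration only, the library update of claim 5 is an incremental append of features extracted from newly issued violation images; the duplicate check is an assumption added for the sketch:

```python
def update_feature_library(historical_library, new_violation_features):
    """Merge newly extracted violation features into the historical library.

    Returns a new list so the historical library (features extracted from
    violation images received at historical time points) is left untouched;
    exact duplicates are skipped.
    """
    library = list(historical_library)
    for feature in new_violation_features:
        if feature not in library:
            library.append(feature)
    return library

# One genuinely new feature is appended; the repeat is skipped.
print(update_feature_library([(1, 0)], [(0, 1), (1, 0)]))
```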
6. The method according to claim 5, wherein after the performing feature extraction on the new violation image to obtain the new violation feature, and before the adding the new violation feature to the historical feature library to obtain the feature library, the method further comprises:
when a failure violation image is obtained, extracting the characteristics of the failure violation image to obtain failure violation characteristics;
integrating the failure violation features to obtain a failure feature library;
correspondingly, the adding the new violation feature to the historical feature library to obtain the feature library includes:
and adding the new violation features into the historical feature library to obtain a violation feature library, and forming the feature library by using the violation feature library and the failure feature library.
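By way of illustration only, claim 6 yields a two-part library: an up-to-date violation feature library plus a failure feature library of features that no longer count as violations; the dict layout and names are assumptions:

```python
def build_feature_library(historical_library, new_violation_features,
                          failure_features):
    """Compose the feature library of claim 6.

    The violation part is the historical library extended with features
    from newly issued violation images; the failure part holds features
    extracted from expired (failure) violation images.
    """
    violation_library = list(historical_library) + list(new_violation_features)
    return {"violation": violation_library,
            "failure": list(failure_features)}

library = build_feature_library([(1, 0)], [(0, 1)], [(2, 2)])
print(library)
```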
7. The method according to claim 6, wherein the matching the features to be checked and the features in the feature library to obtain the matching result comprises:
judging whether the features to be audited hit the violation feature library and whether the features to be audited hit the failure feature library, to obtain a judgment result;
when the judgment result indicates that the features to be audited hit the violation feature library, generating the matching result representing that the image to be audited contains a feature corresponding to violation content;
and when the judgment result indicates that the features to be audited hit the failure feature library, generating the matching result representing that the image to be audited does not contain a feature corresponding to violation content.
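By way of illustration only, the judgment of claim 7 can be sketched as a lookup against both parts of the library; checking the failure library first is an assumption here, since the claim does not state which hit takes precedence:

```python
def judge_feature(feature, library):
    """Judge one feature against the two-part library of claim 6/7.

    A hit in the failure library means the feature corresponds to an
    expired rule, so the image is explicitly cleared; a hit in the
    violation library flags it; otherwise there is no match at all.
    """
    if feature in library["failure"]:
        return "no_violation"
    if feature in library["violation"]:
        return "violation"
    return "no_match"

library = {"violation": [(1, 0)], "failure": [(0, 1)]}
print(judge_feature((1, 0), library))  # violation
```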
8. The method according to any one of claims 1, 2, and 5 to 7, wherein after performing feature library matching on the image to be audited to obtain a matching result, performing object detection on the image to be audited to obtain an object detection result, and performing image classification on the image to be audited to obtain a classification result, the method further comprises:
and when no result representing that the image to be audited contains violation content exists among the matching result, the object detection result, and the classification result, generating the auditing result that the image to be audited does not belong to a violation image.
9. An image review apparatus characterized by comprising:
a memory to store executable image review instructions;
a processor for implementing the method of any one of claims 1 to 8 when executing executable image review instructions stored in the memory.
10. A computer-readable storage medium having stored thereon executable image review instructions for causing a processor to, when executed, implement the method of any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010330417.XA CN111522987A (en) | 2020-04-24 | 2020-04-24 | Image auditing method and device and computer readable storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010330417.XA CN111522987A (en) | 2020-04-24 | 2020-04-24 | Image auditing method and device and computer readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111522987A true CN111522987A (en) | 2020-08-11 |
Family
ID=71903887
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010330417.XA Pending CN111522987A (en) | 2020-04-24 | 2020-04-24 | Image auditing method and device and computer readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111522987A (en) |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112149511A (en) * | 2020-08-27 | 2020-12-29 | 深圳市点创科技有限公司 | Method, terminal and device for detecting violation of driver based on neural network |
CN113744014A (en) * | 2020-09-29 | 2021-12-03 | 北京沃东天骏信息技术有限公司 | Article information monitoring method, device, equipment and computer readable storage medium |
CN112633313A (en) * | 2020-10-13 | 2021-04-09 | 北京匠数科技有限公司 | Bad information identification method of network terminal and local area network terminal equipment |
CN112633313B (en) * | 2020-10-13 | 2021-12-03 | 北京匠数科技有限公司 | Bad information identification method of network terminal and local area network terminal equipment |
CN112691380A (en) * | 2020-12-31 | 2021-04-23 | 完美世界(北京)软件科技发展有限公司 | Game resource material auditing method and device, storage medium and computer equipment |
CN112396571A (en) * | 2021-01-20 | 2021-02-23 | 浙江鹏信信息科技股份有限公司 | Attention mechanism-based EfficientNet sensitive image detection method and system |
CN115114469A (en) * | 2021-03-17 | 2022-09-27 | 腾讯科技(深圳)有限公司 | Picture identification method, device and equipment and storage medium |
CN115114469B (en) * | 2021-03-17 | 2024-09-10 | 腾讯科技(深圳)有限公司 | Picture identification method, device, equipment and storage medium |
CN113239224A (en) * | 2021-05-14 | 2021-08-10 | 百度在线网络技术(北京)有限公司 | Abnormal document identification method, device, equipment and storage medium |
CN114221956A (en) * | 2021-11-08 | 2022-03-22 | 北京中合谷投资有限公司 | Content examination method of distributed network |
WO2023136775A3 (en) * | 2021-12-17 | 2023-09-07 | Grabtaxi Holdings Pte. Ltd. | Method for filtering images and image hosting server |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111522987A (en) | Image auditing method and device and computer readable storage medium | |
US20200004777A1 (en) | Image Retrieval with Deep Local Feature Descriptors and Attention-Based Keypoint Descriptors | |
CN111222500B (en) | Label extraction method and device | |
CN109325148A (en) | The method and apparatus for generating information | |
CN110010156A (en) | The sound event of modeling based on the sequence to event subdivision detects | |
JP2018524678A (en) | Business discovery from images | |
CN106973244A (en) | Using it is Weakly supervised for image match somebody with somebody captions | |
CN111078940B (en) | Image processing method, device, computer storage medium and electronic equipment | |
CN110276068A (en) | Law merit analysis method and device | |
CN111652087A (en) | Car checking method and device, electronic equipment and storage medium | |
CN110197389A (en) | A kind of user identification method and device | |
CN114692593B (en) | Network information safety monitoring and early warning method | |
CN115758282A (en) | Cross-modal sensitive information identification method, system and terminal | |
Zeng et al. | JRL‐YOLO: A Novel Jump‐Join Repetitious Learning Structure for Real‐Time Dangerous Object Detection | |
Sedik et al. | An efficient cybersecurity framework for facial video forensics detection based on multimodal deep learning | |
Sethi et al. | Large-scale multimedia content analysis using scientific workflows | |
Dong et al. | Scene-oriented hierarchical classification of blurry and noisy images | |
CN114373098B (en) | Image classification method, device, computer equipment and storage medium | |
CN111797856A (en) | Modeling method, modeling device, storage medium and electronic equipment | |
CN117011577A (en) | Image classification method, apparatus, computer device and storage medium | |
CN110209880A (en) | Video content retrieval method, Video content retrieval device and storage medium | |
CN111984852B (en) | Generating image acquisition | |
Lee et al. | A mobile picture tagging system using tree-structured layered Bayesian networks | |
Malmurugan et al. | Hybrid Encryption Method for Health Monitoring Systems Based on Machine Learning | |
Al-Qazzaz et al. | Robust DeepFake Face Detection Leveraging Xception Model and Novel Snake Optimization Technique |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | | |
WD01 | Invention patent application deemed withdrawn after publication | | Application publication date: 20200811 |