
CN111240975B - Artificial intelligence system risk detection method, device, computer equipment and medium - Google Patents

Artificial intelligence system risk detection method, device, computer equipment and medium

Info

Publication number
CN111240975B
CN111240975B (application CN202010014256.3A)
Authority
CN
China
Prior art keywords
risk
detected
model
artificial intelligence
weight
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010014256.3A
Other languages
Chinese (zh)
Other versions
CN111240975A (en)
Inventor
李洋 (Li Yang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN202010014256.3A priority Critical patent/CN111240975B/en
Priority to PCT/CN2020/093555 priority patent/WO2021139078A1/en
Publication of CN111240975A publication Critical patent/CN111240975A/en
Application granted granted Critical
Publication of CN111240975B publication Critical patent/CN111240975B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/36Preventing errors by testing or debugging software
    • G06F11/3604Software analysis for verifying properties of programs

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • Quality & Reliability (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application relates to model deployment in the field of artificial intelligence, and in particular to an artificial intelligence system risk detection method and device, computer equipment, and a storage medium. According to the method, the artificial intelligence system to be detected is run and index parameters for each of its risk evaluation indexes during operation are obtained, enabling multi-angle quantitative detection of the system from those index parameters. Weight parameters corresponding to each index are then determined from the system's source code and application field, and the index parameters are combined with their corresponding weight parameters to obtain the operation risk parameter of the system to be detected. Detecting the operation risk of the artificial intelligence system through this parameter ensures that the deployed system is safe and controllable, allows risks to be grasped and avoided during the system's application, and reduces the occurrence of security problems or events.

Description

Artificial intelligence system risk detection method, device, computer equipment and medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and apparatus for risk detection of an artificial intelligence system, a computer device, and a storage medium.
Background
With the development of computer science, artificial intelligence technology is continually advancing. Artificial intelligence is a branch of computer science that attempts to understand the essence of intelligence and to produce new intelligent machines that can react in a manner similar to human intelligence; research in this field includes robotics, speech recognition, image recognition, natural language processing, and expert systems. Since the field's birth, its theory and technology have steadily matured and its application areas have expanded, and it is expected that future technological products of artificial intelligence will be "containers" of human intelligence. Artificial intelligence can simulate the information processes of human consciousness and thinking. It is not human intelligence, but it can think like a human and may eventually exceed human intelligence.
While artificial intelligence makes people's lives more convenient and comfortable, it also brings enormous security risks. These risks can cause not only property loss but even harm to human life. At present, the industry lacks sufficient understanding of, and scientific detection methods for, risks in artificial intelligence algorithms/models and data, and therefore cannot detect these risks.
Disclosure of Invention
Based on the above, there is a need for an artificial intelligence system risk detection method, apparatus, computer device, and storage medium that can effectively detect artificial intelligence risks, addressing the current lack of understanding and of scientific detection methods for risks such as those in artificial intelligence algorithms/models and data.
A method of risk detection for an artificial intelligence system, the method comprising:
starting an artificial intelligence system to be detected;
Operating the artificial intelligence system to be detected, and acquiring index parameters of each risk evaluation index of the artificial intelligence system to be detected in operation;
identifying the source code and the application field of the artificial intelligence system to be detected, and acquiring detection weight parameters corresponding to each risk evaluation index of the artificial intelligence system to be detected according to the source code and the application field of the artificial intelligence system to be detected;
acquiring the running risk parameters of the current artificial intelligent system to be detected according to the index parameters of each risk evaluation index and the detection weight parameters corresponding to each risk evaluation index;
And when the operation risk parameter is larger than a preset risk threshold, judging that the artificial intelligent system to be detected has safety risk.
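The steps above reduce to a weighted aggregation of per-index scores followed by a threshold comparison. A minimal sketch in Python (the index names, values, 0-1 scale, and threshold here are illustrative assumptions, not taken from the patent):

```python
def detect_risk(index_params, weights, threshold):
    """Aggregate each risk evaluation index score with its detection
    weight and compare against the preset risk threshold (sketch)."""
    risk = sum(index_params[k] * weights[k] for k in index_params)
    return risk, risk > threshold

# Hypothetical index parameters and detection weight parameters.
params = {"code_risk": 0.3, "accuracy": 0.1, "interpretability": 0.2,
          "sensitivity": 0.4, "data_attack": 0.25}
w = {"code_risk": 0.3, "accuracy": 0.2, "interpretability": 0.1,
     "sensitivity": 0.25, "data_attack": 0.15}
risk, flagged = detect_risk(params, w, threshold=0.5)  # risk = 0.2675
```

With these made-up numbers the aggregated risk (0.2675) stays below the threshold, so no security risk would be flagged.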
In one embodiment, the risk evaluation indexes include a code security risk parameter, model accuracy, model interpretability, model sensitivity, and model input data aggressiveness, and running the artificial intelligence system to be detected and obtaining each risk evaluation index corresponding to each preset detection time during the test run includes:
acquiring a corresponding source code of the artificial intelligent system to be detected, and acquiring a code security risk parameter according to the source code;
Acquiring preset test sample data with marks, inputting the test sample data into the artificial intelligent system to be detected, acquiring corresponding output data of the preset test sample data with the marks, and acquiring model accuracy according to the output data and the marks carried by the corresponding predicted test sample data;
obtaining error data in the output data according to marks carried by the output data and corresponding predicted test sample data, obtaining error interpretation information corresponding to the error data, and obtaining model interpretability according to the error interpretation information;
acquiring the model sensitivity according to the error data of the artificial intelligence system to be detected and the application field of the artificial intelligence system to be detected;
And acquiring a corresponding dependent data set of the artificial intelligent system to be detected, and acquiring the aggressiveness of the model input data according to the dependent data set.
In one embodiment, the identifying the source code and the application field of the artificial intelligence system to be detected, and obtaining the detection weight parameter corresponding to each risk evaluation index of the artificial intelligence system to be detected according to the source code and the application field of the artificial intelligence system to be detected includes:
identifying a source code and an application field of the artificial intelligence system to be detected;
Determining a risk parameter category corresponding to the source code, searching a preset risk parameter category weight parameter table according to the risk parameter category to obtain a weight parameter corresponding to the risk parameter category, and taking the weight parameter corresponding to the risk parameter category as a first detection weight parameter corresponding to the code security risk parameter;
Searching a preset model accuracy class weight distribution table according to the application field to obtain a first weight parameter corresponding to the application field, and taking the first weight parameter corresponding to the application field as a second detection weight parameter corresponding to the model accuracy;
Determining a construction algorithm corresponding to the source code, searching a preset model interpretability weight distribution table according to the construction algorithm to obtain weight parameters corresponding to the construction algorithm, and taking the weight parameters corresponding to the construction algorithm as third detection weight parameters corresponding to the model interpretability;
Determining a business influence level according to the application field, searching a preset model sensitivity weight distribution table according to the business influence level to obtain a weight parameter corresponding to the business influence level, and taking the weight parameter corresponding to the business influence level as a fourth detection weight parameter corresponding to the model sensitivity;
Searching a preset model input data aggressiveness weight distribution table according to the application field to obtain a second weight parameter corresponding to the application field, and taking the second weight parameter corresponding to the application field as a fifth detection weight parameter corresponding to the model input data aggressiveness.
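The table lookups described above can be sketched with plain dictionaries; the field names, categories, and weight values below are invented for illustration and do not come from the patent:

```python
# Hypothetical weight distribution tables, keyed by application field
# and by business influence level respectively.
ACCURACY_WEIGHTS_BY_FIELD = {"medical": 0.35, "finance": 0.30, "retail": 0.20}
SENSITIVITY_WEIGHTS_BY_IMPACT = {"high": 0.30, "medium": 0.20, "low": 0.10}

def lookup_weight(table, key, default=0.2):
    """Return the tabulated weight for a category, falling back to a
    default weight when the category is not listed."""
    return table.get(key, default)

w2 = lookup_weight(ACCURACY_WEIGHTS_BY_FIELD, "medical")    # second detection weight
w4 = lookup_weight(SENSITIVITY_WEIGHTS_BY_IMPACT, "high")   # fourth detection weight
```

The same pattern covers the first, third, and fifth detection weight parameters, with tables keyed by risk parameter category, construction algorithm, and application field.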
In one embodiment, before determining the business influence level according to the application field, searching the preset model sensitivity weight distribution table according to the business influence level to obtain the corresponding weight parameter, and using that weight parameter as the fourth detection weight parameter corresponding to the model sensitivity, the method further includes:
performing a business influence evaluation on the artificial intelligence system to be detected according to its application field, and obtaining the business influence level corresponding to the artificial intelligence system to be detected.
In one embodiment, when the operation risk parameter is greater than a preset risk threshold, before determining that the artificial intelligence system to be detected has a security risk, the method further includes:
acquiring historical operation risk parameters corresponding to an artificial intelligent system with safety risks in a historical record;
And updating the preset risk threshold according to the historical operation risk parameters.
In one embodiment, after obtaining the running risk parameter of the current artificial intelligence system to be detected according to each risk evaluation index and the detection weight parameter corresponding to each index, the method further includes:
and generating a model evaluation report according to the risk evaluation index and the operation risk parameter.
An artificial intelligence system risk detection device, the device comprising:
the model starting module is used for starting the artificial intelligence system to be detected;
The index acquisition module is used for operating the artificial intelligence system to be detected and acquiring index parameters of each risk evaluation index of the artificial intelligence system to be detected in operation;
the weight acquisition module is used for identifying the source code and the application field of the artificial intelligence system to be detected, and acquiring detection weight parameters corresponding to each risk evaluation index of the artificial intelligence system to be detected according to the source code and the application field of the artificial intelligence system to be detected;
The risk prediction module is used for acquiring the running risk parameters of the current artificial intelligent system to be detected according to the index parameters of each risk evaluation index and the detection weight parameters corresponding to each risk evaluation index;
And the risk judging module is used for judging that the artificial intelligent system to be detected has safety risk when the operation risk parameter is larger than a preset risk threshold value.
In one embodiment, the risk evaluation indexes include a code security risk parameter, model accuracy, model interpretability, model sensitivity, and model input data aggressiveness, and the index acquisition module is configured to:
acquiring a corresponding source code of the artificial intelligent system to be detected, and acquiring a code security risk parameter according to the source code;
Acquiring preset test sample data with marks, inputting the test sample data into the artificial intelligent system to be detected, acquiring corresponding output data of the preset test sample data with the marks, and acquiring model accuracy according to the output data and the marks carried by the corresponding predicted test sample data;
obtaining error data in the output data according to marks carried by the output data and corresponding predicted test sample data, obtaining error interpretation information corresponding to the error data, and obtaining model interpretability according to the error interpretation information;
acquiring the model sensitivity according to the error data of the artificial intelligence system to be detected and the application field of the artificial intelligence system to be detected;
And acquiring a corresponding dependent data set of the artificial intelligent system to be detected, and acquiring the aggressiveness of the model input data according to the dependent data set.
A computer device comprising a memory storing a computer program and a processor which when executing the computer program performs the steps of:
starting an artificial intelligence system to be detected;
Operating the artificial intelligence system to be detected, and acquiring index parameters of each risk evaluation index of the artificial intelligence system to be detected in operation;
identifying the source code and the application field of the artificial intelligence system to be detected, and acquiring detection weight parameters corresponding to each risk evaluation index of the artificial intelligence system to be detected according to the source code and the application field of the artificial intelligence system to be detected;
acquiring the running risk parameters of the current artificial intelligent system to be detected according to the index parameters of each risk evaluation index and the detection weight parameters corresponding to each risk evaluation index;
And when the operation risk parameter is larger than a preset risk threshold, judging that the artificial intelligent system to be detected has safety risk.
A computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of:
starting an artificial intelligence system to be detected;
Operating the artificial intelligence system to be detected, and acquiring index parameters of each risk evaluation index of the artificial intelligence system to be detected in operation;
identifying the source code and the application field of the artificial intelligence system to be detected, and acquiring detection weight parameters corresponding to each risk evaluation index of the artificial intelligence system to be detected according to the source code and the application field of the artificial intelligence system to be detected;
acquiring the running risk parameters of the current artificial intelligent system to be detected according to the index parameters of each risk evaluation index and the detection weight parameters corresponding to each risk evaluation index;
And when the operation risk parameter is larger than a preset risk threshold, judging that the artificial intelligent system to be detected has safety risk.
According to the above artificial intelligence system risk detection method and device, computer equipment, and storage medium, the artificial intelligence system to be detected is run and index parameters for each of its risk evaluation indexes during operation are obtained, enabling multi-angle quantitative detection of the system. Weight parameters corresponding to each index are then determined from the system's source code and application field, and the index parameters are combined with their corresponding weight parameters to obtain the operation risk parameter of the system to be detected. Detecting the operation risk of the artificial intelligence system through this parameter ensures that the deployed system is safe and controllable, allows risks to be grasped and avoided during the system's application, and reduces the occurrence of security problems or events.
Drawings
FIG. 1 is an application environment diagram of an artificial intelligence system risk detection method in one embodiment;
FIG. 2 is a flow chart of a risk detection method of an artificial intelligence system in one embodiment;
FIG. 3 is a flowchart of a risk detection method of an artificial intelligence system according to another embodiment;
FIG. 4 is a block diagram of an artificial intelligence system risk detection device in one embodiment;
Fig. 5 is an internal structural diagram of a computer device in one embodiment.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
The artificial intelligence system risk detection method provided by the application can be applied in the environment shown in FIG. 1, in which the artificial intelligence server 102 communicates with the detection server 104 over a network. The artificial intelligence server 102 hosts the artificial intelligence system to be detected. The detection server 104 first connects to the artificial intelligence server 102 and starts the artificial intelligence system to be detected; runs the system and obtains index parameters for each of its risk evaluation indexes during operation; identifies the system's source code and application field and, from these, obtains the detection weight parameter corresponding to each risk evaluation index; obtains the current operation risk parameter of the system from the index parameters and their corresponding detection weight parameters; and, when the operation risk parameter is greater than a preset risk threshold, determines that the artificial intelligence system to be detected has a security risk. The artificial intelligence server 102 and the detection server 104 may each be implemented as a stand-alone server or as a server cluster composed of a plurality of servers.
In one embodiment, as shown in fig. 2, an artificial intelligence system risk detection method is provided, and the method is applied to the detection server 104 in fig. 1 for illustration, and includes the following steps:
s100, starting the artificial intelligence system to be detected by the detection server.
The detection server is a server that detects the endogenous security of an artificial intelligence system, where endogenous security refers to safety problems arising from the system's own mechanisms. Endogenous security issues are typically caused by aspects such as code security, data integrity, model confidentiality, and model robustness. The detection server can assess the risk of the artificial intelligence system to be detected from these aspects to determine whether it carries a security risk. Corresponding to endogenous security is exogenous security, which refers to security events caused by factors outside the system's own mechanisms, such as the environment the system depends on during application, the security of data it requires or generates during application, and whether its application complies with the corresponding laws and regulations. Specifically, the detection server may connect over a network to the artificial intelligence server hosting the system to be detected, start that system, and begin the detection process.
S300, operating the artificial intelligence system to be detected, and acquiring index parameters of each risk evaluation index of the artificial intelligence system to be detected in operation.
The risk detection of the artificial intelligence system involves five risk evaluation indexes. Code security risk refers to security issues caused by vulnerabilities in the artificial intelligence system's code. Model accuracy refers to how accurately the model judges the current input under the current environment. Model interpretability refers to the degree to which the model's operating mechanism and its handling of input data can be explained in a way people can understand, so that when the model gives incorrect feedback the specific cause can be located. Model sensitivity refers to the degree of impact on the actual business when the model misjudges or deviates from the design expectations of the artificial intelligence system during its application. Input data aggressiveness refers to the degree to which input data causes the model to err or deviate from design expectations. While running the artificial intelligence system to be detected, the detection server can obtain the index parameter of each risk evaluation index by testing the system along each of these directions, with each index parameter expressed in a specific numerical form.
S500, identifying the source code and the application field of the artificial intelligence system to be detected, and acquiring detection weight parameters corresponding to each risk evaluation index of the artificial intelligence system to be detected according to the source code and the application field of the artificial intelligence system to be detected.
The detection weight parameters reflect how much reference value each risk evaluation index carries for the current artificial intelligence system to be detected. Because artificial intelligence systems differ in emphasis, purpose, and composition, their detection weights differ as well. While obtaining each risk evaluation index, the server must therefore identify the source code and application field of the system to be detected and, from these attributes of the system itself, obtain the detection weight parameter corresponding to each risk evaluation index.
S700, acquiring the running risk parameters of the current artificial intelligent system to be detected according to the index parameters of each risk evaluation index and the detection weight parameters corresponding to each risk evaluation index.
The operation risk parameter of the artificial intelligence system to be detected, denoted Risk, refers to the probability that the system affects the business by misrepresenting or misjudging things during operation. The operation risk parameter may specifically be calculated as:
Risk = W1 × code_risk + W2 × acc + W3 × inter + W4 × sen + W5 × data_attack
where W1 to W5 are the weights of the respective indexes; for simplicity, the weights and the detection scores of the indexes may all be expressed as percentages, with the products normalized and converted back into percentages. Here code_risk is the code security risk parameter, acc is the model accuracy, inter is the model interpretability, sen is the model sensitivity, and data_attack is the model input data aggressiveness. The operation risk parameter in this application can be detected along the endogenous security dimension from the directions of code security risk, model accuracy, model interpretability, model sensitivity, and input data aggressiveness, so as to ensure that artificial intelligence is secure and the artificial intelligence system is safe and controllable, while also disclosing the system's security risk to users, enterprises, and others, making it easier to grasp and avoid risks during application and reducing the occurrence of security problems or events.
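A sketch of this calculation using percentage-scale scores and weights, normalizing the weighted products back to a percentage as the passage describes (the concrete numbers are illustrative assumptions):

```python
def operation_risk(scores, weights):
    """scores and weights are the five index values as percentages
    (0-100); the weighted sum is normalized by the total weight so the
    resulting risk parameter is again on a 0-100 scale."""
    assert len(scores) == len(weights) == 5
    total = sum(s * w for s, w in zip(scores, weights))
    return total / sum(weights)

# Order: code_risk, acc, inter, sen, data_attack -- hypothetical values.
risk_pct = operation_risk([40, 20, 30, 50, 25], [30, 20, 10, 25, 15])
```

With weights summing to 100, this is a straight weighted average, yielding 35.25 here; that value would then be compared against the preset risk threshold.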
And S900, when the operation risk parameter is larger than a preset risk threshold, judging that the artificial intelligent system to be detected has safety risk.
When the calculated operation risk parameter is greater than the preset risk threshold, the current artificial intelligence system to be detected carries a certain security risk. It can then be determined that the system has an operational defect, so security risks may arise during actual use.
As shown in fig. 3, in one embodiment, step S900 further includes:
s810, acquiring historical operation risk parameters corresponding to the artificial intelligent system with the safety risk in the historical record.
S830, updating a preset risk threshold according to the historical operation risk parameters.
Initially, the detection server determines an initial preset risk threshold. As the method runs, the current preset risk threshold is continually updated according to the operation risk parameters of artificial intelligence systems in the history record that exhibited operating errors during testing. Specifically, the update may obtain the operation risk parameters of the systems that showed operating errors during detection, take the minimum of all those operation risk parameters, and use that minimum as the new preset risk threshold. Continually updating the preset risk threshold effectively improves the accuracy of the artificial intelligence system risk detection.
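Steps S810-S830 can be sketched as taking the minimum historical operation risk parameter as the new threshold; the history values below are made up for illustration:

```python
def update_threshold(history, current_threshold):
    """history: operation risk parameters of systems that showed
    operating errors during past detections. The minimum such value
    becomes the new preset risk threshold; with no history yet, the
    current (initial) threshold is kept."""
    return min(history) if history else current_threshold

new_threshold = update_threshold([0.62, 0.48, 0.71], 0.5)  # -> 0.48
```

Using the minimum makes the threshold conservative: any system at least as risky as the least risky system that previously misbehaved is flagged.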
According to the above artificial intelligence system risk detection method, the artificial intelligence system to be detected is run and index parameters for each of its risk evaluation indexes during operation are obtained, enabling multi-angle quantitative detection of the system. Weight parameters corresponding to each index are then determined from the system's source code and application field, and the index parameters are combined with their corresponding weight parameters to obtain the operation risk parameter of the system to be detected. Detecting the operation risk of the artificial intelligence system through this parameter ensures that the deployed system is safe and controllable, allows risks to be grasped and avoided during the system's application, and reduces the occurrence of security problems or events.
In one embodiment, the risk evaluation indexes include a code security risk parameter, model accuracy, model interpretability, model sensitivity, and model input data aggressiveness, and obtaining each risk evaluation index corresponding to each preset detection time of the artificial intelligence system to be detected during the test run includes:
and acquiring a source code corresponding to the artificial intelligent system to be detected, and acquiring a code security risk parameter according to the source code.
And acquiring preset test sample data with marks, inputting the test sample data into an artificial intelligent system to be detected, acquiring corresponding output data of the preset test sample data with the marks, and acquiring model accuracy according to the output data and the marks carried by the corresponding predicted test sample data.
And obtaining error data in the output data according to the marks carried by the output data and the corresponding predicted test sample data, obtaining error interpretation information corresponding to the error data, and obtaining the interpretation degree of the model according to the error interpretation information.
And acquiring the model sensitivity according to the error data of the artificial intelligence system to be detected and the application field of the artificial intelligence system to be detected.
And acquiring a corresponding dependent data set of the artificial intelligent system to be detected, and acquiring the aggressiveness of the input data of the model according to the dependent data set.
Some risk evaluation indexes of the model can be obtained based on the composition of the model, but other indexes can be obtained by testing the model, and in the application, the specific process of obtaining each risk evaluation index comprises the following steps: the code frame risk is identified through methods such as source code audit and Fuzzing by acquiring codes (such as open source or commercial) and frames (such as Tensorflow, pytorch) adopted by the current artificial intelligent system to be detected, and the code security risk corresponding to the artificial intelligent system to be detected, namely code security risk parameters, are obtained. The accuracy of the model can be detected by using a certain amount of test samples, and the auditing score corresponding to the accuracy of the model can be obtained. And acquiring data examples of the prediction or recognition errors of the test data by the artificial intelligence system, judging whether the current to-be-detected artificial intelligence system can make corresponding misinterpretations (such as explaining a deep learning framework for classifying errors, indicating that a convolution layer of the deep learning framework possibly affects the recognition effect, and the like) for each data example of the prediction or recognition errors, and acquiring corresponding model interpretable degree scores. And acquiring a data instance of the artificial intelligence system for predicting or identifying errors of the test data, judging the influence degree of the data instance on the actual service of the artificial intelligence system to be detected currently, and acquiring a model sensitivity score according to the influence degree. 
The degree to which input data from the data set on which the artificial intelligence system depends causes the model to err or deviate from the design expectation is detected, a risk value of this risk index is obtained, and a model input data aggressiveness score of the artificial intelligence system to be detected is obtained according to the risk. Multi-angle evaluation of the model is thus carried out by different methods, thereby obtaining risk evaluation indexes of the artificial intelligence system to be detected from different angles.
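As an illustrative sketch of how two of these indexes might be scored (the sample predictions, marks, and impact factor below are assumptions for demonstration, not values prescribed by this application), model accuracy and model sensitivity can be computed from labeled test samples as follows:

```python
def model_accuracy(predictions, labels):
    """Fraction of labeled test samples whose output matches the carried mark."""
    correct = sum(1 for p, y in zip(predictions, labels) if p == y)
    return correct / len(labels)

def model_sensitivity(error_count, total, impact_factor):
    """Error rate scaled by a factor reflecting the application field's impact."""
    return (error_count / total) * impact_factor

# Hypothetical test run: six labeled samples, one recognition error.
preds  = [1, 0, 1, 1, 0, 1]
labels = [1, 0, 0, 1, 0, 1]
acc = model_accuracy(preds, labels)
errors = sum(p != y for p, y in zip(preds, labels))
sens = model_sensitivity(errors, len(labels), impact_factor=0.9)
print(round(acc, 3), round(sens, 3))
```

The remaining indexes (code security risk, model interpretability, model input data aggressiveness) would be scored by the audit, interpretation, and data-set tests described above.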
In one embodiment, step S500 includes:
The source code and the application field of the artificial intelligence system to be detected are identified.
Determining a risk parameter class corresponding to the source code, searching a preset risk parameter class weight parameter table according to the risk parameter class, obtaining a weight parameter corresponding to the risk parameter class, and taking the weight parameter corresponding to the risk parameter class as a first detection weight parameter corresponding to the code security risk parameter.
Searching a preset model accuracy class weight distribution table according to the application field to obtain a first weight parameter corresponding to the application field, and taking the first weight parameter corresponding to the application field as a second detection weight parameter corresponding to the model accuracy.
Determining a construction algorithm corresponding to the source code, searching a preset model interpretability weight distribution table according to the construction algorithm to obtain weight parameters corresponding to the construction algorithm, and taking the weight parameters corresponding to the construction algorithm as third detection weight parameters corresponding to the model interpretability.
Determining a business influence level according to the application field, searching a preset model sensitivity weight distribution table according to the business influence level to obtain a weight parameter corresponding to the business influence level, and taking the weight parameter corresponding to the business influence level as a fourth detection weight parameter corresponding to the model sensitivity. Specifically, the service influence evaluation can be performed on the artificial intelligence system to be detected according to the application field of the artificial intelligence system to be detected, and the service influence level corresponding to the artificial intelligence system to be detected is obtained.
Searching a preset model input data aggressiveness weight distribution table according to the application field to obtain a second weight parameter corresponding to the application field, and taking the second weight parameter corresponding to the application field as a fifth detection weight parameter corresponding to the model input data aggressiveness.
In one embodiment, determining a service impact level according to an application field, searching a preset model sensitivity weight distribution table according to the service impact level to obtain a weight parameter corresponding to the service impact level, and before using the weight parameter corresponding to the service impact level as a fourth detection weight parameter corresponding to the model sensitivity, further including:
The server can identify the source code and the application field of the artificial intelligence system to be detected, and then acquire the detection weight parameters corresponding to the risk evaluation indexes according to the source code and the application field. The preset weight distribution tables may be obtained by summarizing historical experience data or by expert experience. Generally, for each class of model, the corresponding weight distribution follows certain rules. For the detection weight parameter corresponding to the code security risk parameter, risks that have a significant impact on code execution, such as buffer overflow, information leakage, unauthorized viewing, and SQL (Structured Query Language) injection, are allocated larger detection weight parameters, while risks with only a general impact on code execution, such as code logic errors and code inefficiency, are assigned lower detection weight parameters. The selection of the detection weight parameter for model accuracy mainly depends on the application corresponding to the model: more critical applications are assigned higher weight intervals, and less critical applications lower intervals. For example, artificial intelligence systems for applications such as image recognition and speech recognition may be assigned a lower weight, while artificial intelligence systems for key applications such as unmanned vehicles and airplanes may be assigned a higher weight. The selection of the detection weight parameter for model interpretability mainly depends on the degree of interpretability of the corresponding algorithm/model and is related to the corresponding artificial intelligence classification. Algorithms/models with better interpretability are assigned lower risk weight intervals, and conversely higher intervals.
For example, an artificial intelligence system applying rule-based machine learning, such as the ID3 or C4.5/C5.0 decision tree algorithms, would be assigned a higher weight, while an artificial intelligence system applying a deep learning model built with frameworks such as TensorFlow, PyTorch, or Caffe would be assigned a lower weight. The selection of the detection weight parameter for model sensitivity mainly depends on the importance of the corresponding application level. Specifically, service impact evaluation can be performed on the artificial intelligence system to be detected according to its application field to obtain the corresponding service impact level, and a table look-up is then performed according to the service impact level to obtain the fourth detection weight parameter corresponding to model sensitivity. This weight is determined according to the different service impact levels. For example, a high-impact artificial intelligence system for unmanned cars, airplanes, medical procedures, and the like may be assigned a higher weight, while a low-impact system using AI for attendance or card punching may be assigned a lower weight. The selection of the detection weight parameter for model input data aggressiveness mainly depends on detecting risks such as data pollution/bait injection that may exist in the data set on which the AI algorithm/model/application depends. The more easily the data set is corrupted or misled, the higher its weight, and vice versa. Artificial intelligence systems for image recognition and speech are assigned a higher weight, while artificial intelligence systems for human-machine question answering and unmanned driving are assigned a lower weight.
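The table look-ups described above can be sketched as preset dictionaries keyed by service impact level and application field; every weight value below is an illustrative assumption, not a figure prescribed by this application:

```python
# Illustrative preset weight-distribution tables; all values are assumptions.
SENSITIVITY_WEIGHTS = {            # keyed by service impact level
    "high":   0.9,                 # e.g. unmanned cars, airplanes, medical
    "medium": 0.5,
    "low":    0.2,                 # e.g. attendance / card-punching systems
}
AGGRESSIVENESS_WEIGHTS = {         # keyed by application field
    "image_recognition": 0.8,      # data sets more easily corrupted or misled
    "speech":            0.8,
    "qa":                0.3,      # human-machine question answering
    "unmanned_driving":  0.3,
}

def lookup_weight(table, key, default=0.5):
    """Table look-up with a fallback weight for categories not listed."""
    return table.get(key, default)

print(lookup_weight(SENSITIVITY_WEIGHTS, "high"))        # 0.9
print(lookup_weight(AGGRESSIVENESS_WEIGHTS, "finance"))  # falls back to 0.5
```

The remaining tables (risk parameter category, model accuracy class, model interpretability) would follow the same look-up pattern with their own keys.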
In one embodiment, step S700 further includes:
And generating a model evaluation report according to the risk evaluation index and the operation risk parameter.
After the risk evaluation indexes are acquired and the corresponding operation risk parameter is obtained from them, all the risk evaluation indexes and the operation risk parameter can be collated, and a corresponding model evaluation report is then generated from the acquired indexes and the operation risk parameter. Through the model evaluation report, a user can determine in which aspect the artificial intelligence system to be detected carries operation risk, so that the risk can be conveniently grasped and avoided during application and the occurrence of security problems or events is reduced. By generating the model evaluation report, the detection result of the artificial intelligence system to be detected can be displayed to the user more intuitively.
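A minimal sketch of collating the indexes and the operation risk parameter into such a report (the field names and sample values are illustrative assumptions, not a format prescribed by this application):

```python
import json

def build_report(index_params, operation_risk, threshold):
    """Collate the risk evaluation indexes and the operation risk
    parameter into a single model evaluation report."""
    return {
        "risk_evaluation_indexes": index_params,
        "operation_risk_parameter": operation_risk,
        "preset_risk_threshold": threshold,
        "security_risk_detected": operation_risk > threshold,
    }

report = build_report(
    {"code_security": 0.6, "model_accuracy": 0.9, "model_sensitivity": 0.7},
    operation_risk=0.72,
    threshold=0.65,
)
print(json.dumps(report, indent=2))
```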
It should be understood that, although the steps in the flowcharts of FIGS. 2-3 are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the execution of these steps is not strictly limited to this order, and the steps may be executed in other orders. Moreover, at least some of the steps in FIGS. 2-3 may include multiple sub-steps or stages which are not necessarily performed at the same moment but may be performed at different moments; these sub-steps or stages are likewise not necessarily executed sequentially, and may be performed in turn or alternately with at least a portion of the sub-steps or stages of other steps.
In one embodiment, as shown in FIG. 4, there is provided an artificial intelligence system risk detection apparatus comprising:
The model starting module 100 is used for starting the artificial intelligence system to be detected;
the index obtaining module 300 is configured to operate the artificial intelligence system to be detected, and obtain index parameters of each risk evaluation index of the artificial intelligence system to be detected in operation;
The weight acquisition module 500 is configured to identify a source code and an application field of an artificial intelligence system to be detected, and acquire detection weight parameters corresponding to each risk evaluation index of the artificial intelligence system to be detected according to the source code and the application field of the artificial intelligence system to be detected;
The risk prediction module 700 is configured to obtain an operational risk parameter of the current artificial intelligence system to be detected according to the index parameter of each risk evaluation index and the detection weight parameter corresponding to each risk evaluation index;
The risk determination module 900 is configured to determine that the artificial intelligence system to be detected has a security risk when the operation risk parameter is greater than a preset risk threshold.
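The cooperation of the modules above can be sketched as a weighted sum followed by a threshold comparison; the index parameters, detection weight parameters, and threshold below are illustrative assumptions, not values prescribed by this application:

```python
def operation_risk(index_params, detection_weights):
    """Weighted sum of the five risk evaluation index parameters,
    each scaled by its corresponding detection weight parameter."""
    return sum(index_params[k] * detection_weights[k] for k in index_params)

# Hypothetical index parameters and detection weight parameters.
params  = {"code_security": 0.4, "model_accuracy": 0.2,
           "model_interpretability": 0.5, "model_sensitivity": 0.8,
           "input_data_aggressiveness": 0.6}
weights = {"code_security": 0.3, "model_accuracy": 0.2,
           "model_interpretability": 0.1, "model_sensitivity": 0.25,
           "input_data_aggressiveness": 0.15}

risk = operation_risk(params, weights)
print(round(risk, 2), risk > 0.4)   # security risk when above the threshold
```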
In one embodiment, the risk evaluation index includes a code security risk parameter, a model accuracy, a model interpretability, a model sensitivity, and a model input data aggressiveness, and the index acquisition module 300 is configured to: acquire a corresponding source code of the artificial intelligence system to be detected, and acquire a code security risk parameter according to the source code; acquire preset test sample data with marks, input the test sample data into the artificial intelligence system to be detected, acquire corresponding output data of the preset test sample data with the marks, and acquire model accuracy according to the output data and the marks carried by the corresponding predicted test sample data; obtain error data in the output data according to the marks carried by the output data and the corresponding predicted test sample data, obtain error interpretation information corresponding to the error data, and obtain model interpretability according to the error interpretation information; acquire model sensitivity according to the error data of the artificial intelligence system to be detected and the application field of the artificial intelligence system to be detected; and acquire a corresponding dependent data set of the artificial intelligence system to be detected, and acquire the model input data aggressiveness according to the dependent data set.
In one embodiment, the weight acquisition module 500 is configured to identify a source code and an application domain of an artificial intelligence system to be detected; determining a risk parameter class corresponding to the source code, searching a preset risk parameter class weight parameter table according to the risk parameter class to obtain a weight parameter corresponding to the risk parameter class, and taking the weight parameter corresponding to the risk parameter class as a first detection weight parameter corresponding to the code security risk parameter; searching a preset model accuracy class weight distribution table according to the application field to obtain a first weight parameter corresponding to the application field, and taking the first weight parameter corresponding to the application field as a second detection weight parameter corresponding to the model accuracy; determining a construction algorithm corresponding to the source code, searching a preset model interpretability weight distribution table according to the construction algorithm to obtain weight parameters corresponding to the construction algorithm, and taking the weight parameters corresponding to the construction algorithm as third detection weight parameters corresponding to the model interpretability; determining a business influence level according to the application field, searching a preset model sensitivity weight distribution table according to the business influence level to obtain a weight parameter corresponding to the business influence level, and taking the weight parameter corresponding to the business influence level as a fourth detection weight parameter corresponding to the model sensitivity; searching a preset model input data aggressiveness weight distribution table according to the application field to obtain a second weight parameter corresponding to the application field, and taking the second weight parameter corresponding to the application field as 
a fifth detection weight parameter corresponding to the model input data aggressiveness.
In one embodiment, the weight obtaining module 500 is further configured to perform service impact assessment on the artificial intelligence system to be detected according to an application field of the artificial intelligence system to be detected, so as to obtain a service impact level corresponding to the artificial intelligence system to be detected.
In one embodiment, the apparatus further includes a threshold updating module configured to acquire historical operation risk parameters corresponding to artificial intelligence systems determined to have security risks in the history records, and to update the preset risk threshold according to the historical operation risk parameters.
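One possible form of this update is sketched below; the text only states that the threshold is updated from historical operation risk parameters, so the specific margin-based rule here is an assumption:

```python
def update_threshold(current, historical_risks, margin=0.05):
    """Lower the preset risk threshold toward the smallest historical
    operation risk parameter observed for systems later confirmed to
    carry a security risk, keeping a small safety margin below it."""
    if not historical_risks:
        return current
    return min(current, min(historical_risks) - margin)

# Hypothetical history: three systems later confirmed as risky.
new_threshold = update_threshold(0.7, [0.66, 0.81, 0.73])
print(round(new_threshold, 2))
```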
In one embodiment, the system further comprises a report generation module for generating a model evaluation report according to the risk evaluation index and the operation risk parameter.
For specific limitations of the risk detection device of the artificial intelligence system, reference may be made to the above limitation of the risk detection method of the artificial intelligence system, and no further description is given here. The modules in the artificial intelligence system risk detection device can be all or partially implemented by software, hardware and a combination thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a server, the internal structure of which may be as shown in fig. 5. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The database of the computer device is used for storing data related to operational risk parameters of the historical artificial intelligence system. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program when executed by a processor implements an artificial intelligence system risk detection method.
It will be appreciated by those skilled in the art that the structure shown in FIG. 5 is merely a block diagram of some of the structures associated with the present inventive arrangements and is not limiting of the computer device to which the present inventive arrangements may be applied, and that a particular computer device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
In one embodiment, a computer device is provided comprising a memory and a processor, the memory having stored therein a computer program, the processor when executing the computer program performing the steps of:
starting an artificial intelligence system to be detected;
operating the artificial intelligence system to be detected, and acquiring index parameters of each risk evaluation index of the artificial intelligence system to be detected in operation;
identifying a source code and an application field of an artificial intelligence system to be detected, and acquiring detection weight parameters corresponding to each risk evaluation index of the artificial intelligence system to be detected according to the source code and the application field of the artificial intelligence system to be detected;
acquiring an operation risk parameter of the current artificial intelligence system to be detected according to the index parameters of each risk evaluation index and the detection weight parameters corresponding to each risk evaluation index;
and when the operation risk parameter is greater than a preset risk threshold, determining that the artificial intelligence system to be detected has a security risk.
In one embodiment, the processor when executing the computer program further performs the steps of: acquiring a corresponding source code of an artificial intelligent system to be detected, and acquiring a code security risk parameter according to the source code; acquiring preset test sample data with marks, inputting the test sample data into an artificial intelligent system to be detected, acquiring corresponding output data of the preset test sample data with the marks, and acquiring model accuracy according to the output data and the marks carried by the corresponding predicted test sample data; obtaining error data in the output data according to the marks carried by the output data and the corresponding predicted test sample data, obtaining error interpretation information corresponding to the error data, and obtaining model interpretability according to the error interpretation information; acquiring model sensitivity according to error data of the artificial intelligence system to be detected and the application field of the artificial intelligence system to be detected; and acquiring a corresponding dependent data set of the artificial intelligent system to be detected, and acquiring the aggressiveness of the input data of the model according to the dependent data set.
In one embodiment, the processor when executing the computer program further performs the steps of: identifying a source code and an application field of an artificial intelligence system to be detected; determining a risk parameter class corresponding to the source code, searching a preset risk parameter class weight parameter table according to the risk parameter class to obtain a weight parameter corresponding to the risk parameter class, and taking the weight parameter corresponding to the risk parameter class as a first detection weight parameter corresponding to the code security risk parameter; searching a preset model accuracy class weight distribution table according to the application field to obtain a first weight parameter corresponding to the application field, and taking the first weight parameter corresponding to the application field as a second detection weight parameter corresponding to the model accuracy; determining a construction algorithm corresponding to the source code, searching a preset model interpretability weight distribution table according to the construction algorithm to obtain weight parameters corresponding to the construction algorithm, and taking the weight parameters corresponding to the construction algorithm as third detection weight parameters corresponding to the model interpretability; determining a business influence level according to the application field, searching a preset model sensitivity weight distribution table according to the business influence level to obtain a weight parameter corresponding to the business influence level, and taking the weight parameter corresponding to the business influence level as a fourth detection weight parameter corresponding to the model sensitivity; searching a preset model input data aggressiveness weight distribution table according to the application field to obtain a second weight parameter corresponding to the application field, and taking the second weight parameter 
corresponding to the application field as a fifth detection weight parameter corresponding to the model input data aggressiveness.
In one embodiment, the processor when executing the computer program further performs the steps of: and according to the application field of the artificial intelligence system to be detected, carrying out service influence evaluation on the artificial intelligence system to be detected, and obtaining the service influence level corresponding to the artificial intelligence system to be detected.
In one embodiment, the processor when executing the computer program further performs the steps of: acquiring historical operation risk parameters corresponding to artificial intelligence systems determined to have security risks in the history records; and updating the preset risk threshold according to the historical operation risk parameters.
In one embodiment, the processor when executing the computer program further performs the steps of: and generating a model evaluation report according to the risk evaluation index and the operation risk parameter.
In one embodiment, a computer readable storage medium is provided having a computer program stored thereon, which when executed by a processor, performs the steps of:
starting an artificial intelligence system to be detected;
operating the artificial intelligence system to be detected, and acquiring index parameters of each risk evaluation index of the artificial intelligence system to be detected in operation;
identifying a source code and an application field of an artificial intelligence system to be detected, and acquiring detection weight parameters corresponding to each risk evaluation index of the artificial intelligence system to be detected according to the source code and the application field of the artificial intelligence system to be detected;
acquiring an operation risk parameter of the current artificial intelligence system to be detected according to the index parameters of each risk evaluation index and the detection weight parameters corresponding to each risk evaluation index;
and when the operation risk parameter is greater than a preset risk threshold, determining that the artificial intelligence system to be detected has a security risk.
In one embodiment, the computer program when executed by the processor further performs the steps of: acquiring a corresponding source code of an artificial intelligent system to be detected, and acquiring a code security risk parameter according to the source code; acquiring preset test sample data with marks, inputting the test sample data into an artificial intelligent system to be detected, acquiring corresponding output data of the preset test sample data with the marks, and acquiring model accuracy according to the output data and the marks carried by the corresponding predicted test sample data; obtaining error data in the output data according to the marks carried by the output data and the corresponding predicted test sample data, obtaining error interpretation information corresponding to the error data, and obtaining model interpretability according to the error interpretation information; acquiring model sensitivity according to error data of the artificial intelligence system to be detected and the application field of the artificial intelligence system to be detected; and acquiring a corresponding dependent data set of the artificial intelligent system to be detected, and acquiring the aggressiveness of the input data of the model according to the dependent data set.
In one embodiment, the computer program when executed by the processor further performs the steps of: identifying a source code and an application field of an artificial intelligence system to be detected; determining a risk parameter class corresponding to the source code, searching a preset risk parameter class weight parameter table according to the risk parameter class to obtain a weight parameter corresponding to the risk parameter class, and taking the weight parameter corresponding to the risk parameter class as a first detection weight parameter corresponding to the code security risk parameter; searching a preset model accuracy class weight distribution table according to the application field to obtain a first weight parameter corresponding to the application field, and taking the first weight parameter corresponding to the application field as a second detection weight parameter corresponding to the model accuracy; determining a construction algorithm corresponding to the source code, searching a preset model interpretability weight distribution table according to the construction algorithm to obtain weight parameters corresponding to the construction algorithm, and taking the weight parameters corresponding to the construction algorithm as third detection weight parameters corresponding to the model interpretability; determining a business influence level according to the application field, searching a preset model sensitivity weight distribution table according to the business influence level to obtain a weight parameter corresponding to the business influence level, and taking the weight parameter corresponding to the business influence level as a fourth detection weight parameter corresponding to the model sensitivity; searching a preset model input data aggressiveness weight distribution table according to the application field to obtain a second weight parameter corresponding to the application field, and taking the second weight parameter 
corresponding to the application field as a fifth detection weight parameter corresponding to the model input data aggressiveness.
In one embodiment, the computer program when executed by the processor further performs the steps of: and according to the application field of the artificial intelligence system to be detected, carrying out service influence evaluation on the artificial intelligence system to be detected, and obtaining the service influence level corresponding to the artificial intelligence system to be detected.
In one embodiment, the computer program when executed by the processor further performs the steps of: acquiring historical operation risk parameters corresponding to artificial intelligence systems determined to have security risks in the history records; and updating the preset risk threshold according to the historical operation risk parameters.
In one embodiment, the computer program when executed by the processor further performs the steps of: and generating a model evaluation report according to the risk evaluation index and the operation risk parameter.
Those skilled in the art will appreciate that implementing all or part of the methods in the above embodiments may be accomplished by a computer program stored on a non-transitory computer readable storage medium which, when executed, may comprise the flows of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. The non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), among others.
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The foregoing examples illustrate only a few embodiments of the application, which are described in detail and are not to be construed as limiting the scope of the application. It should be noted that it will be apparent to those skilled in the art that several variations and modifications can be made without departing from the spirit of the application, which are all within the scope of the application. Accordingly, the scope of protection of the present application is to be determined by the appended claims.

Claims (10)

1. A method of risk detection for an artificial intelligence system, the method comprising:
starting an artificial intelligence system to be detected;
operating the artificial intelligence system to be detected, and acquiring index parameters of various risk evaluation indexes of the artificial intelligence system to be detected in operation, wherein the risk evaluation indexes comprise a code security risk parameter, model accuracy, model interpretability, model sensitivity, and model input data aggressiveness;
identifying a source code and an application field of the artificial intelligence system to be detected;
Determining a risk parameter category corresponding to the source code, searching a preset risk parameter category weight parameter table according to the risk parameter category to obtain a weight parameter corresponding to the risk parameter category, and taking the weight parameter corresponding to the risk parameter category as a first detection weight parameter corresponding to the code security risk parameter, wherein the preset risk parameter category weight parameter table is constructed based on the degree of influence of code execution risks;
Searching a preset model accuracy class weight distribution table according to the application field to obtain a first weight parameter corresponding to the application field, and taking the first weight parameter corresponding to the application field as a second detection weight parameter corresponding to the model accuracy;
Determining a construction algorithm corresponding to the source code, searching a preset model interpretability weight distribution table according to the construction algorithm to obtain weight parameters corresponding to the construction algorithm, and taking the weight parameters corresponding to the construction algorithm as third detection weight parameters corresponding to the model interpretability;
Determining a business influence level according to the application field, searching a preset model sensitivity weight distribution table according to the business influence level to obtain a weight parameter corresponding to the business influence level, and taking the weight parameter corresponding to the business influence level as a fourth detection weight parameter corresponding to the model sensitivity;
searching a preset model input data aggressiveness weight distribution table according to the application field to obtain a second weight parameter corresponding to the application field, and taking the second weight parameter corresponding to the application field as a fifth detection weight parameter corresponding to the model input data aggressiveness, wherein the preset model input data aggressiveness weight distribution table is constructed based on the difficulty of corrupting a data set corresponding to the application field;
acquiring an operation risk parameter of the artificial intelligence system to be detected according to the index parameter of each risk evaluation index and the detection weight parameter corresponding to each risk evaluation index;
and when the operation risk parameter is greater than a preset risk threshold, determining that the artificial intelligence system to be detected has a security risk.
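The weighted scoring that claim 1 describes can be sketched as follows. All table contents, weight values, index values and the threshold below are illustrative assumptions for the sketch, not values disclosed in the patent.

```python
# Illustrative sketch of the weighted risk scoring in claim 1.
# Every table entry and numeric value here is an assumed example.

# Preset weight lookup tables, keyed by the attribute identified
# from the source code or the application field.
RISK_CATEGORY_WEIGHTS = {"injection": 0.9, "overflow": 0.7, "none": 0.1}
ACCURACY_FIELD_WEIGHTS = {"medical": 0.9, "retail": 0.5}
INTERPRETABILITY_ALGO_WEIGHTS = {"deep_network": 0.8, "decision_tree": 0.3}
SENSITIVITY_LEVEL_WEIGHTS = {"high": 0.9, "medium": 0.6, "low": 0.2}
AGGRESSIVENESS_FIELD_WEIGHTS = {"medical": 0.8, "retail": 0.4}

def operation_risk(indexes, weights):
    """Weighted sum of the five risk evaluation index parameters."""
    return sum(indexes[name] * weights[name] for name in indexes)

indexes = {  # index parameters measured while the system runs
    "code_security": 0.4, "accuracy": 0.1, "interpretability": 0.3,
    "sensitivity": 0.5, "input_aggressiveness": 0.2,
}
weights = {  # detection weight parameters looked up from the tables
    "code_security": RISK_CATEGORY_WEIGHTS["injection"],
    "accuracy": ACCURACY_FIELD_WEIGHTS["medical"],
    "interpretability": INTERPRETABILITY_ALGO_WEIGHTS["deep_network"],
    "sensitivity": SENSITIVITY_LEVEL_WEIGHTS["high"],
    "input_aggressiveness": AGGRESSIVENESS_FIELD_WEIGHTS["medical"],
}

RISK_THRESHOLD = 0.8  # preset risk threshold (assumed)
score = operation_risk(indexes, weights)
at_risk = score > RISK_THRESHOLD  # security risk is flagged above the threshold
```

Separating the measured index parameters from the looked-up weights keeps the per-domain tables independent of how each index is measured, which is the division of labor the claim describes.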
2. The method of claim 1, wherein operating the artificial intelligence system to be detected and acquiring the index parameters of the risk evaluation indexes of the artificial intelligence system to be detected in operation comprises:
acquiring a source code corresponding to the artificial intelligence system to be detected, and acquiring the code security risk parameters according to the source code;
acquiring preset test sample data carrying marks, inputting the test sample data into the artificial intelligence system to be detected, acquiring output data corresponding to the preset test sample data, and acquiring the model accuracy according to the output data and the marks carried by the corresponding preset test sample data;
obtaining error data in the output data according to the output data and the marks carried by the corresponding preset test sample data, obtaining error interpretation information corresponding to the error data, and obtaining the model interpretability according to the error interpretation information;
acquiring the model sensitivity according to the error data of the artificial intelligence system to be detected and the application field of the artificial intelligence system to be detected;
and acquiring a dependent data set corresponding to the artificial intelligence system to be detected, and acquiring the model input data aggressiveness according to the dependent data set.
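The accuracy step of claim 2 can be sketched as below: preset labelled test samples are run through the system and the outputs are compared with their marks. The `predict` function stands in for the artificial intelligence system under test and is an assumed toy classifier, not part of the patent.

```python
# Sketch of the model accuracy step in claim 2. `predict` is an
# assumed stand-in for the artificial intelligence system to be
# detected; the samples and marks are illustrative.

def predict(sample):
    """Toy stand-in for the system under test."""
    return "positive" if sample >= 0 else "negative"

# Preset test sample data carrying marks (value, expected mark).
test_samples = [(1.5, "positive"), (-0.2, "negative"), (0.3, "negative")]

# Feed samples through the system and pair each output with its mark.
outputs = [(predict(x), mark) for x, mark in test_samples]

# Error data: outputs that disagree with the carried marks.
errors = [(out, mark) for out, mark in outputs if out != mark]

# Model accuracy derived from the output data and the marks.
accuracy = 1 - len(errors) / len(test_samples)
```

The `errors` list is also the starting point for the interpretability step in the claim: each mismatched pair is what the error interpretation information would be attached to.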
3. The method according to claim 1, wherein before determining the business influence level according to the application field, searching the preset model sensitivity weight distribution table according to the business influence level to obtain the weight parameter corresponding to the business influence level, and taking the weight parameter corresponding to the business influence level as the fourth detection weight parameter corresponding to the model sensitivity, the method further comprises:
performing business influence evaluation on the artificial intelligence system to be detected according to the application field of the artificial intelligence system to be detected, to obtain the business influence level corresponding to the artificial intelligence system to be detected.
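The two-stage lookup of claim 3 (application field to business influence level, then level to sensitivity weight) can be sketched as follows; both mappings are assumed examples, as the patent does not disclose concrete table contents.

```python
# Sketch of claim 3: evaluate the business influence level from the
# application field, then look up the model sensitivity weight from
# that level. Both tables below are assumed examples.

FIELD_TO_IMPACT = {"medical": "high", "finance": "high", "retail": "low"}
SENSITIVITY_WEIGHTS = {"high": 0.9, "medium": 0.6, "low": 0.2}

def sensitivity_weight(application_field):
    """Fourth detection weight parameter for the model sensitivity."""
    # Unknown fields fall back to a medium influence level (assumed policy).
    level = FIELD_TO_IMPACT.get(application_field, "medium")
    return SENSITIVITY_WEIGHTS[level]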
4. The method of claim 1, wherein before determining that the artificial intelligence system to be detected has a security risk when the operation risk parameter is greater than the preset risk threshold, the method further comprises:
acquiring historical operation risk parameters corresponding to artificial intelligence systems that exhibited security risks in a history record;
and updating the preset risk threshold according to the historical operation risk parameters.
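One way to realize the threshold update in claim 4 is sketched below. Taking the minimum historical risky score minus a safety margin as the new threshold is an assumed policy; the patent does not fix a particular update formula.

```python
# Sketch of claim 4: revise the preset risk threshold from the
# operation risk parameters of systems that previously exhibited
# security risks. The min-minus-margin rule is an assumed policy.

def update_threshold(current, historical_risky_scores, margin=0.05):
    """Return an updated risk threshold, or the current one if no history."""
    if not historical_risky_scores:
        return current
    # Set the threshold just below the lowest score that was ever risky,
    # so comparable future runs are also flagged.
    return min(historical_risky_scores) - margin

threshold = update_threshold(0.8, [0.92, 0.88, 1.10])
```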
5. The method according to claim 1, wherein after acquiring the operation risk parameter of the artificial intelligence system to be detected according to the index parameter of each risk evaluation index and the detection weight parameter corresponding to each risk evaluation index, the method further comprises:
generating a model evaluation report according to the risk evaluation indexes and the operation risk parameter.
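The report generation of claim 5 amounts to assembling the measured indexes and the final score into one record; a minimal sketch follows, with field names assumed for illustration.

```python
# Minimal sketch of claim 5: build a model evaluation report from the
# risk evaluation index parameters and the operation risk parameter.
# All field names are assumed for illustration.

def build_report(indexes, risk_score, threshold):
    """Assemble the evaluation report as a plain dictionary."""
    return {
        "risk_evaluation_indexes": dict(indexes),
        "operation_risk_parameter": risk_score,
        "security_risk": risk_score > threshold,
    }

report = build_report({"accuracy": 0.1, "sensitivity": 0.5}, 1.3, 0.8)
```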
6. An artificial intelligence system risk detection apparatus, the apparatus comprising:
a starting module, configured to start the artificial intelligence system to be detected;
an index acquisition module, configured to operate the artificial intelligence system to be detected and acquire index parameters of risk evaluation indexes of the artificial intelligence system to be detected in operation, wherein the risk evaluation indexes comprise code security risk parameters, model accuracy, model interpretability, model sensitivity and model input data aggressiveness;
a weight acquisition module, configured to identify the source code and the application field of the artificial intelligence system to be detected; determine a risk parameter category corresponding to the source code, search a preset risk parameter category weight parameter table according to the risk parameter category to obtain a weight parameter corresponding to the risk parameter category, and take the weight parameter corresponding to the risk parameter category as a first detection weight parameter corresponding to the code security risk parameters, wherein the preset risk parameter category weight parameter table is constructed based on the influence degree of code execution risks; search a preset model accuracy class weight distribution table according to the application field to obtain a first weight parameter corresponding to the application field, and take the first weight parameter corresponding to the application field as a second detection weight parameter corresponding to the model accuracy; determine a construction algorithm corresponding to the source code, search a preset model interpretability weight distribution table according to the construction algorithm to obtain a weight parameter corresponding to the construction algorithm, and take the weight parameter corresponding to the construction algorithm as a third detection weight parameter corresponding to the model interpretability; determine a business influence level according to the application field, search a preset model sensitivity weight distribution table according to the business influence level to obtain a weight parameter corresponding to the business influence level, and take the weight parameter corresponding to the business influence level as a fourth detection weight parameter corresponding to the model sensitivity; and search a preset model input data aggressiveness weight distribution table according to the application field to obtain a second weight parameter corresponding to the application field, and take the second weight parameter corresponding to the application field as a fifth detection weight parameter corresponding to the model input data aggressiveness, wherein the preset model input data aggressiveness weight distribution table is constructed based on the difficulty of corrupting a data set corresponding to the application field;
a risk prediction module, configured to acquire an operation risk parameter of the artificial intelligence system to be detected according to the index parameter of each risk evaluation index and the detection weight parameter corresponding to each risk evaluation index;
and a risk determination module, configured to determine that the artificial intelligence system to be detected has a security risk when the operation risk parameter is greater than a preset risk threshold.
7. The apparatus of claim 6, wherein the index acquisition module is further configured to:
acquire a source code corresponding to the artificial intelligence system to be detected, and acquire the code security risk parameters according to the source code; acquire preset test sample data carrying marks, input the test sample data into the artificial intelligence system to be detected, acquire output data corresponding to the preset test sample data, and acquire the model accuracy according to the output data and the marks carried by the corresponding preset test sample data; obtain error data in the output data according to the output data and the marks carried by the corresponding preset test sample data, obtain error interpretation information corresponding to the error data, and obtain the model interpretability according to the error interpretation information; acquire the model sensitivity according to the error data of the artificial intelligence system to be detected and the application field of the artificial intelligence system to be detected; and acquire a dependent data set corresponding to the artificial intelligence system to be detected, and acquire the model input data aggressiveness according to the dependent data set.
8. The device according to claim 6, further comprising a threshold updating module, configured to acquire historical operation risk parameters corresponding to artificial intelligence systems that exhibited security risks in a history record, and update the preset risk threshold according to the historical operation risk parameters.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any one of claims 1 to 5 when the computer program is executed.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 5.
CN202010014256.3A 2020-01-07 2020-01-07 Artificial intelligence system risk detection method, device, computer equipment and medium Active CN111240975B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010014256.3A CN111240975B (en) 2020-01-07 2020-01-07 Artificial intelligence system risk detection method, device, computer equipment and medium
PCT/CN2020/093555 WO2021139078A1 (en) 2020-01-07 2020-05-30 Artificial intelligence system risk detection method and apparatus, and computer device and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010014256.3A CN111240975B (en) 2020-01-07 2020-01-07 Artificial intelligence system risk detection method, device, computer equipment and medium

Publications (2)

Publication Number Publication Date
CN111240975A CN111240975A (en) 2020-06-05
CN111240975B true CN111240975B (en) 2024-06-28

Family

ID=70874289

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010014256.3A Active CN111240975B (en) 2020-01-07 2020-01-07 Artificial intelligence system risk detection method, device, computer equipment and medium

Country Status (2)

Country Link
CN (1) CN111240975B (en)
WO (1) WO2021139078A1 (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111823227B (en) * 2020-06-08 2024-02-02 华南师范大学 Artificial intelligent ethical risk detection and prevention method, deep learning system and robot
CN114091644A (en) * 2020-08-24 2022-02-25 中国科学院软件研究所 Technical risk assessment method and system for artificial intelligence product
CN116235526A (en) * 2020-09-24 2023-06-06 华为技术有限公司 Data analysis method and device
CN112527674B (en) * 2020-12-22 2022-11-04 苏州三六零智能安全科技有限公司 AI frame safety evaluation method, device, equipment and storage medium
CN112905494B (en) * 2021-05-07 2022-04-01 北京银联金卡科技有限公司 Artificial intelligence evaluation method and system fusing multidimensional information
CN113887942A (en) * 2021-09-30 2022-01-04 绿盟科技集团股份有限公司 Data processing method and device, electronic equipment and storage medium
CN114971164B (en) * 2022-04-13 2023-09-22 江苏禹润水务研究院有限公司 Artificial intelligence-based method and system for detecting abnormality of sludge treatment equipment
CN118094282B (en) * 2023-11-21 2024-09-03 深圳市威尔泰电子科技有限公司 Method and system for detecting retention force of contact of electric connector
CN118014392A (en) * 2024-02-01 2024-05-10 中国铁塔股份有限公司 Algorithm evaluation method, device, electronic equipment and storage medium
CN118350627A (en) * 2024-04-16 2024-07-16 义乌中国小商品城大数据有限公司 Big data risk management and analysis system based on big model
CN118396601B (en) * 2024-06-26 2024-09-20 杭州海康威视系统技术有限公司 Intelligent area operation and maintenance system optimization method and device and computer equipment

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109684848A (en) * 2018-09-07 2019-04-26 平安科技(深圳)有限公司 Methods of risk assessment, device, equipment and readable storage medium storing program for executing
CN110647412A (en) * 2019-09-17 2020-01-03 华东师范大学 Software credibility evaluation system of spacecraft control system

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006079553A (en) * 2004-09-13 2006-03-23 Ricoh Co Ltd Project management device and system
CN108449313B (en) * 2018-02-01 2021-02-19 平安科技(深圳)有限公司 Electronic device, Internet service system risk early warning method and storage medium
CN108959934B (en) * 2018-06-11 2023-08-22 平安科技(深圳)有限公司 Security risk assessment method, security risk assessment device, computer equipment and storage medium
CN109345374B (en) * 2018-09-17 2023-04-18 平安科技(深圳)有限公司 Risk control method and device, computer equipment and storage medium
CN109447048B (en) * 2018-12-25 2020-12-25 苏州闪驰数控系统集成有限公司 Artificial intelligence early warning system
CN110009225B (en) * 2019-04-03 2023-10-31 平安科技(深圳)有限公司 Risk assessment system construction method, risk assessment system construction device, computer equipment and storage medium
CN110135691A (en) * 2019-04-12 2019-08-16 深圳壹账通智能科技有限公司 Finance product methods of risk assessment, device, computer equipment and storage medium

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109684848A (en) * 2018-09-07 2019-04-26 平安科技(深圳)有限公司 Methods of risk assessment, device, equipment and readable storage medium storing program for executing
CN110647412A (en) * 2019-09-17 2020-01-03 华东师范大学 Software credibility evaluation system of spacecraft control system

Also Published As

Publication number Publication date
WO2021139078A1 (en) 2021-07-15
CN111240975A (en) 2020-06-05

Similar Documents

Publication Publication Date Title
CN111240975B (en) Artificial intelligence system risk detection method, device, computer equipment and medium
US10990901B2 (en) Training, validating, and monitoring artificial intelligence and machine learning models
TWI723528B (en) Computer-executed event risk assessment method and device, computer-readable storage medium and computing equipment
CN109359669B (en) Medical insurance reimbursement abnormality detection method, medical insurance reimbursement abnormality detection device, computer equipment and storage medium
CN111177714A (en) Abnormal behavior detection method and device, computer equipment and storage medium
CN108734402B (en) Virtual case-based emergency management decision method and system for irregular emergency
WO2020034801A1 (en) Medical feature screening method and apparatus, computer device, and storage medium
Behore et al. Enhancing reliability in large language models: Self-detection of hallucinations with spontaneous self-checks
CN112820367A (en) Medical record information verification method and device, computer equipment and storage medium
CN112149909A (en) Ship oil consumption prediction method and device, computer equipment and storage medium
CN113656808A (en) Data security evaluation method, device, equipment and storage medium
CN113760670A (en) Cable joint abnormity early warning method and device, electronic equipment and storage medium
EP3832485A1 (en) Question answering systems
CN112100377A (en) Text classification method and device, computer equipment and storage medium
EP3624017A1 (en) Time series data analysis apparatus, time series data analysis method and time series data analysis program
CA3169288A1 (en) Knowledge graph based reasoning recommendation system and method
CN117038074B (en) User management method, device, equipment and storage medium based on big data
CN110865939B (en) Application program quality monitoring method, device, computer equipment and storage medium
CN117251777A (en) Data processing method, device, computer equipment and storage medium
CN116739764A (en) Transaction risk detection method, device, equipment and medium based on machine learning
US11803815B1 (en) System for the computer matching of targets using machine learning
KR102395550B1 (en) Method and apparatus for analyzing confidential information
CN115831339B (en) Medical system risk management and control pre-prediction method and system based on deep learning
CN113723524B (en) Data processing method based on prediction model, related equipment and medium
Vonderhaar et al. Towards Robust Training Datasets for Machine Learning with Ontologies: A Case Study for Emergency Road Vehicle Detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant