
WO2021139078A1 - Artificial intelligence system risk detection method and apparatus, and computer device and medium - Google Patents


Info

Publication number
WO2021139078A1
Authority
WO
WIPO (PCT)
Prior art keywords
artificial intelligence
intelligence system
risk
tested
model
Prior art date
Application number
PCT/CN2020/093555
Other languages
French (fr)
Chinese (zh)
Inventor
李洋
Original Assignee
平安科技(深圳)有限公司
Priority date
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司 filed Critical 平安科技(深圳)有限公司
Publication of WO2021139078A1 publication Critical patent/WO2021139078A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/36Preventing errors by testing or debugging software
    • G06F11/3604Software analysis for verifying properties of programs

Definitions

  • This application relates to the field of computer technology, in particular to an artificial intelligence system risk detection method, device, computer equipment and storage medium.
  • Artificial intelligence is a branch of computer science that attempts to understand the essence of intelligence and to produce a new kind of intelligent machine that can respond in a manner similar to human intelligence. Research in this field includes robotics, speech recognition, image recognition, natural language processing, and expert systems. Since the birth of artificial intelligence, its theory and technology have become increasingly mature and its fields of application have continued to expand; it is conceivable that the technological products brought by artificial intelligence in the future will be "containers" of human wisdom. Artificial intelligence can simulate the information processes of human consciousness and thinking. Artificial intelligence is not human intelligence, but it can think like humans and may exceed human intelligence.
  • An artificial intelligence system risk detection method includes: starting an artificial intelligence system to be tested; running the artificial intelligence system to be tested and acquiring the index parameters of each risk evaluation index of the system in operation; identifying the source code and application field of the system and, according to them, obtaining the detection weight parameter corresponding to each risk evaluation index; obtaining the current operating risk parameter of the system according to the index parameters of each risk evaluation index and the corresponding detection weight parameters; and, when the operating risk parameter is greater than a preset risk threshold, determining that the artificial intelligence system to be detected has a security risk.
  • An artificial intelligence system risk detection device comprising:
  • the model start module is used to start the artificial intelligence system to be tested;
  • An index acquisition module configured to run the artificial intelligence system to be tested, and acquire the index parameters of each risk evaluation index of the artificial intelligence system to be tested in operation;
  • the weight acquisition module is used to identify the source code and application field of the artificial intelligence system to be tested, and to obtain, according to the source code and application field, the detection weight parameter corresponding to each risk evaluation index of the artificial intelligence system to be tested;
  • the risk prediction module is used to obtain the current operational risk parameters of the artificial intelligence system to be tested according to the index parameters of each risk evaluation index and the detection weight parameters corresponding to each risk evaluation index;
  • the risk determination module is configured to determine that the artificial intelligence system to be tested has a security risk when the operating risk parameter is greater than a preset risk threshold.
  • a computer device includes a memory and a processor, the memory stores a computer program, and when the processor executes the computer program it implements the steps of the above method, including: when the operating risk parameter is greater than a preset risk threshold, determining that the artificial intelligence system to be detected has a security risk.
  • a computer-readable storage medium has a computer program stored thereon; when the computer program is executed by a processor, the steps of the above method are implemented, including: when the operating risk parameter is greater than a preset risk threshold, determining that the artificial intelligence system to be detected has a security risk.
  • the above-mentioned artificial intelligence system risk detection method, device, computer equipment, and storage medium run the artificial intelligence system to be tested, acquire the index parameters of each risk evaluation index during operation, and perform multi-angle quantitative detection of the system according to those index parameters; the corresponding weight parameters are then determined from the system's source code and application field, and the index parameters and weight parameters are considered together to obtain the operating risk parameter of the artificial intelligence system to be tested. This operating risk parameter is used to detect the operating risk of the artificial intelligence system, ensuring that a running artificial intelligence system is safe and controllable, allowing risks to be grasped and avoided during the application of the system, and reducing the occurrence of security problems or incidents.
  • Figure 1 is an application environment diagram of an artificial intelligence system risk detection method in an embodiment
  • FIG. 2 is a schematic flowchart of a risk detection method for an artificial intelligence system in an embodiment
  • FIG. 3 is a schematic flowchart of a risk detection method for an artificial intelligence system in another embodiment
  • Figure 4 is a structural block diagram of an artificial intelligence system risk detection device in an embodiment
  • Fig. 5 is an internal structure diagram of a computer device in an embodiment.
  • the artificial intelligence system risk detection method provided in this application can be applied to the application environment as shown in FIG. 1.
  • the artificial intelligence server 102 communicates with the detection server 104 via the network.
  • the artificial intelligence server 102 is equipped with an artificial intelligence system to be tested.
  • the detection server 104 first connects to the artificial intelligence server 102 and starts the artificial intelligence system to be tested; runs the system and acquires the index parameters of each risk evaluation index during operation; identifies the source code and application field of the system and, according to them, obtains the detection weight parameter corresponding to each risk evaluation index; obtains the current operating risk parameter of the system according to the index parameters of each risk evaluation index and the corresponding detection weight parameters; and, when the operating risk parameter is greater than the preset risk threshold, determines that the artificial intelligence system to be tested has a security risk.
  • a risk detection method for an artificial intelligence system is provided. Taking the method applied to the detection server 104 in FIG. 1 as an example for description, the method includes the following steps:
  • S100: The detection server starts the artificial intelligence system to be detected.
  • the detection server specifically refers to the server used to perform endogenous security detection of the artificial intelligence system
  • the endogenous security specifically refers to the security problems caused by the mechanism of the artificial intelligence system itself.
  • the endogenous security of native artificial intelligence systems is often caused by artificial intelligence system code security, data integrity, model confidentiality, and model robustness.
  • the detection server can determine the risk of the artificial intelligence system to be detected from the above-mentioned aspects to detect whether it has a security risk.
  • corresponding to endogenous security is exogenous security, which refers to security incidents caused by factors outside the mechanism of the artificial intelligence system itself, such as the environment the system depends on during application, the security of the data it needs or produces during application, and whether its application complies with the corresponding laws and regulations.
  • the detection server may be connected to the artificial intelligence server equipped with the artificial intelligence system to be tested through a network, and then the artificial intelligence system to be tested is activated to start the detection process.
  • S300: Run the artificial intelligence system to be tested, and obtain the index parameters of each risk evaluation index of the artificial intelligence system to be tested in operation.
  • the risk detection of the artificial intelligence system specifically covers five risk evaluation indicators: code security risk, model accuracy, model interpretability, model sensitivity, and the aggressiveness of the input data. Code security risk refers to the security risk caused by vulnerabilities in the artificial intelligence system.
  • the accuracy of the model refers to the correctness of the model's judgment on the current input in the current environment.
  • Model interpretability refers to whether the model's operating mechanism and its feedback on input data can be explained convincingly; when the model gives wrong feedback, the specific cause can be located.
  • Model sensitivity refers to the degree of impact on the actual business when, during application, the model makes a misjudgment or deviates from the design expectations of the artificial intelligence system.
  • The aggressiveness of the input data refers to the degree to which the input data can make the model err or deviate from its design expectations.
  • while running the artificial intelligence system to be tested, the detection server can obtain the index parameters of the above risk evaluation indicators by examining each of these aspects of the system and expressing them in concrete numerical form.
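For illustration only, the five indicator parameters gathered in step S300 could be held in a small record such as the following Python sketch; the field names and the 0-100 scale are assumptions made for the example, not part of the published application.

```python
from dataclasses import dataclass

@dataclass
class RiskIndicators:
    """Index parameters of the five risk evaluation indicators, each assumed to be on a 0-100 scale."""
    code_risk: float         # code security risk parameter
    accuracy: float          # model accuracy
    interpretability: float  # model interpretability (Inter)
    sensitivity: float       # model sensitivity (Sen)
    data_attack: float       # aggressiveness of the model input data
```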
  • S500: Identify the source code and application field of the artificial intelligence system to be tested, and obtain the detection weight parameters corresponding to each risk evaluation index of the artificial intelligence system to be tested according to the source code and application field.
  • the detection weight parameter indicates how much weight a risk evaluation index should carry for the current artificial intelligence system to be tested. Because artificial intelligence systems differ in focus, purpose, and composition, the detection weights also differ between systems to be tested. While obtaining each risk evaluation index, the server therefore also identifies the source code and application field of the artificial intelligence system to be tested and, based on these attributes of the system, obtains the detection weight parameter corresponding to each risk evaluation index.
  • S700: Obtain the current operating risk parameter of the artificial intelligence system to be tested according to the index parameters of each risk evaluation index and the detection weight parameter corresponding to each risk evaluation index.
  • the operating risk parameter of the artificial intelligence system to be tested refers to the probability that, during operation, the system's expression of or judgment about things is wrong in a way that affects the business. It is calculated as a weighted combination of the five indicator parameters, where W1-W5 are the weights of the respective indicators; for simplicity, each indicator and its weight can be expressed on a percentile scale, and the weighted result is normalized and converted back to a percentile. Here code risk is the code security risk parameter, the accuracy term is the model accuracy, Inter is the model interpretability, Sen is the model sensitivity, and data attack is the aggressiveness of the model input data.
  • the operating risk parameter in this application covers the endogenous security dimension through the code security risk parameter, model accuracy, model interpretability, model sensitivity, and input-data aggressiveness, ensuring the safety of the artificial intelligence system itself and that the system is safe and controllable. At the same time, it can reveal the security risks of the artificial intelligence system to users, enterprises, and others, so that risks can be grasped and avoided during application and the occurrence of security problems or incidents is reduced.
  • S900: When the calculated operating risk parameter is greater than the preset risk threshold, it can be judged that the current artificial intelligence system to be tested carries a certain security risk; at this point it can be determined that the system has operational defects and may pose security risks in actual use.
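As a minimal sketch of steps S700 and S900: the published formula is reproduced only as an image in the source text, so the normalized weighted sum below is an assumed form rather than the patented formula; the function and variable names are hypothetical, and whether accuracy and interpretability enter directly or through their complements cannot be recovered from the text.

```python
def operating_risk(indicators: dict[str, float], weights: dict[str, float],
                   threshold: float) -> tuple[float, bool]:
    """Combine the five indicator parameters (each 0-100) with weights W1-W5 (each 0-100)
    into an operating risk parameter, then compare it with the preset risk threshold."""
    keys = ["code_risk", "accuracy", "interpretability", "sensitivity", "data_attack"]
    weighted = sum(weights[k] * indicators[k] for k in keys)
    # normalize the weighted sum back to a 0-100 (percentile) scale
    risk = weighted / max(sum(weights[k] for k in keys), 1e-9)
    return risk, risk > threshold  # True means the system is judged to have a security risk

# usage sketch with hypothetical numbers
risk, has_risk = operating_risk(
    indicators={"code_risk": 70, "accuracy": 92, "interpretability": 40,
                "sensitivity": 80, "data_attack": 65},
    weights={"code_risk": 30, "accuracy": 20, "interpretability": 10,
             "sensitivity": 25, "data_attack": 15},
    threshold=60.0,
)
```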
  • before step S900, the method further includes:
  • S810: Obtain the historical operating risk parameters corresponding to the artificial intelligence systems recorded in the historical records as having security risks.
  • the detection server can determine an initial preset risk threshold and then, as the method runs, continuously update the current preset risk threshold according to the operating risk parameters of previously detected artificial intelligence systems that exhibited operating errors during testing.
  • specifically, the update may obtain the operating risk parameters of the artificial intelligence systems whose detection process showed operating errors, calculate the lowest value among all of those operating risk parameters, and use that lowest value as the new preset risk threshold.
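A sketch of the threshold-update embodiment above, assuming the historical operating risk parameters are available as a plain list; per the text, the lowest value observed among systems that showed operating errors becomes the new preset risk threshold (keeping the current threshold when no history exists is an added safeguard, not stated in the application).

```python
def update_risk_threshold(current_threshold: float, historical_risks: list[float]) -> float:
    """Update the preset risk threshold from the operating risk parameters of previously
    detected artificial intelligence systems that exhibited operating errors (from step S810)."""
    if not historical_risks:
        return current_threshold  # no history yet: keep the initial preset threshold
    return min(historical_risks)  # lowest historical operating risk parameter becomes the new threshold
```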
  • the above artificial intelligence system risk detection method runs the artificial intelligence system to be tested, acquires the index parameters of each risk evaluation index during operation, and performs multi-angle quantitative detection of the system according to those index parameters; the corresponding weight parameters are then determined from the system's source code and application field, and the index parameters and weight parameters are considered together to obtain the operating risk parameter of the artificial intelligence system to be tested. Detecting the operating risk of the artificial intelligence system through this operating risk parameter ensures that a running artificial intelligence system is safe and controllable, allows risks to be grasped and avoided during the application of the system, and reduces the occurrence of security problems or incidents.
  • in one embodiment, the risk evaluation indicators include the code security risk parameter, model accuracy, model interpretability, model sensitivity, and the aggressiveness of the model input data, and obtaining the index parameters of each risk evaluation index of the artificial intelligence system to be tested in operation includes:
  • Obtain the labeled preset test sample data, input the test sample data into the artificial intelligence system to be tested, obtain the output data corresponding to the labeled preset test sample data, and obtain the model accuracy from the output data and the labels of the corresponding test sample data.
  • according to the output data and the labels of the corresponding test sample data, the error data in the output data is obtained, the error explanation information corresponding to the error data is obtained, and the model interpretability is obtained according to the error explanation information.
  • the model sensitivity is obtained according to the error data of the artificial intelligence system to be tested and the application field of the artificial intelligence system to be tested; the dependent data set corresponding to the system is obtained, and the aggressiveness of the model input data is obtained according to the dependent data set.
  • Some risk evaluation indicators of the model can be obtained from the composition of the model itself, while other indicators can only be obtained by running tests on the model.
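For example, the model-accuracy indicator could be derived from the labeled preset test sample data as the fraction of outputs that match their labels; the application does not fix a particular metric, so the sketch below is only one plausible choice, and the helper for collecting error data is a hypothetical name.

```python
def model_accuracy(outputs: list[str], labels: list[str]) -> float:
    """Fraction of model outputs that match the labels of the preset test samples, scaled to 0-100."""
    if not labels:
        return 0.0
    correct = sum(1 for out, lab in zip(outputs, labels) if out == lab)
    return 100.0 * correct / len(labels)

def collect_error_data(outputs: list[str], labels: list[str]) -> list[int]:
    """Indices of erroneous outputs; these feed the interpretability and sensitivity evaluations."""
    return [i for i, (out, lab) in enumerate(zip(outputs, labels)) if out != lab]
```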
  • specifically, the process of obtaining each risk evaluation indicator includes: obtaining the code (for example, open source or commercial) and framework (for example, Tensorflow, Pytorch, etc.) of the current artificial intelligence system to be tested, and identifying the risk of the code and framework through methods such as source code auditing and fuzzing to obtain the code security risk parameter.
  • the accuracy of the model can be tested by using a certain amount of test samples, and the audit score corresponding to the accuracy of the model can be obtained.
  • the model is evaluated from multiple perspectives through different methods, so as to obtain risk evaluation indicators from different perspectives of the artificial intelligence system to be tested.
  • step S500 includes:
  • According to the application field, the preset model accuracy category weight distribution table is searched to obtain the weight parameter corresponding to the application field, which is used as the second detection weight parameter corresponding to the model accuracy.
  • According to the application field, the preset model input-data aggressiveness weight distribution table is searched to obtain the weight parameter corresponding to the application field, which is used as the fifth detection weight parameter corresponding to the aggressiveness of the model input data.
  • The business impact level is determined according to the application field, and the preset model sensitivity weight distribution table is searched according to the business impact level to obtain the weight parameter corresponding to the business impact level.
  • Before the weight parameter corresponding to the business impact level is taken as the fourth detection weight parameter corresponding to the model sensitivity, the method further includes: performing a business impact assessment on the artificial intelligence system to be tested according to its application field and obtaining the business impact level corresponding to the system.
  • the server can identify the source code and application field of the artificial intelligence system to be tested, and then obtain the detection weight parameter corresponding to each risk evaluation index according to the source code and application field.
  • the preset weight distribution table can be obtained by summarizing historical empirical data or through expert experience.
  • the corresponding weight distribution follows rules such as the following: for the detection weight parameter corresponding to the code security risk parameter, risks that significantly affect code execution, such as buffer overflow, information leakage, unauthorized access, and SQL (Structured Query Language) injection, are assigned larger detection weight parameters, while risks that have only a general impact on code execution, such as code logic errors and inefficient code, are assigned a lower detection weight parameter.
  • the selection of the detection weight parameters for the accuracy of the model mainly depends on the application corresponding to the accuracy of the model. For the more critical applications, a higher weight interval is assigned, and vice versa, a lower interval is assigned. For example, artificial intelligence systems used in image recognition and speech recognition will be assigned a lower weight, while artificial intelligence systems in key aspects such as unmanned vehicles and airplanes will be assigned a higher weight.
  • the selection of the detection weight parameter of the model interpretability mainly depends on the interpretability of the corresponding algorithm/model, and is related to the corresponding artificial intelligence classification. Algorithms/models with better interpretability are assigned a lower risk weight interval, and vice versa, a higher interval is assigned.
  • for example, artificial intelligence systems that apply rule-based machine learning, such as the ID3 and C4.5/C5.0 decision tree algorithms, will be assigned a lower weight, while artificial intelligence systems that use deep learning models built with frameworks such as Tensorflow, Pytorch, and Caffe will be assigned a higher weight.
  • the selection of the detection weight parameter for model sensitivity mainly depends on the importance of the corresponding application. Specifically, according to the application field of the artificial intelligence system to be tested, a business impact assessment of the system can be performed to obtain its corresponding business impact level; the table is then looked up according to the business impact level to obtain the fourth detection weight parameter corresponding to the model sensitivity. This weight is determined according to the different business impact levels.
  • artificial intelligence systems such as unmanned vehicles, airplanes, and medical operations will be assigned a higher weight
  • artificial intelligence systems used for attendance check-in and other low-impact applications will be assigned a lower weight.
  • the selection of the detection weight parameter for the aggressiveness of the input data mainly depends on detecting risks such as possible pollution or bait injection in the data set on which the AI algorithm/model/supported application depends. The more easily the data set can be damaged or misled, the higher the weight; otherwise, the lower.
  • For example, artificial intelligence systems such as image recognition and voice recognition will be assigned a higher weight,
  • artificial intelligence systems such as human-machine question answering and unmanned driving will be assigned a lower weight.
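The preset weight distribution tables can be thought of as simple lookup tables keyed by application field, risk parameter category, composition algorithm, or business impact level. The sketch below is illustrative only; the table contents and numeric weights are assumptions, not values taken from the application.

```python
# Hypothetical preset weight distribution tables (categories and values are assumptions for illustration).
MODEL_ACCURACY_WEIGHTS = {          # second detection weight, keyed by application field
    "image_recognition": 10, "speech_recognition": 10,
    "unmanned_vehicle": 30, "aviation": 30,
}
MODEL_SENSITIVITY_WEIGHTS = {       # fourth detection weight, keyed by business impact level
    "high_impact": 30, "medium_impact": 20, "low_impact": 10,
}

def detection_weight(table: dict[str, float], key: str, default: float = 15.0) -> float:
    """Look up a detection weight parameter in a preset weight distribution table (step S500)."""
    return table.get(key, default)

# usage sketch
w2 = detection_weight(MODEL_ACCURACY_WEIGHTS, "unmanned_vehicle")  # model accuracy weight
w4 = detection_weight(MODEL_SENSITIVITY_WEIGHTS, "high_impact")    # model sensitivity weight
```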
  • in one embodiment, the method further includes: generating a model evaluation report according to the risk evaluation indexes and the operating risk parameter.
  • the model evaluation report can be used to determine in which respects the artificial intelligence system to be tested carries operational risks; it also makes it easier to grasp and avoid risks during application and to reduce the occurrence of security problems or incidents. By generating a model evaluation report, the test results of the artificial intelligence system to be tested can be displayed to users more intuitively.
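A model evaluation report could simply bundle the risk evaluation indexes, the detection weights, and the operating risk parameter; the application does not prescribe a report format, so the JSON layout below is an assumption for illustration.

```python
import json

def model_evaluation_report(indicators: dict[str, float], weights: dict[str, float],
                            operating_risk: float, threshold: float) -> str:
    """Assemble a model evaluation report from the risk evaluation indexes and the operating risk parameter."""
    report = {
        "risk_evaluation_indexes": indicators,
        "detection_weights": weights,
        "operating_risk_parameter": operating_risk,
        "preset_risk_threshold": threshold,
        "security_risk": operating_risk > threshold,
    }
    return json.dumps(report, indent=2)
```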
  • an artificial intelligence system risk detection device including:
  • the model starting module 100 is used to start the artificial intelligence system to be tested
  • the index acquisition module 300 is used to run the artificial intelligence system to be tested, and acquire the index parameters of each risk evaluation index of the artificial intelligence system to be tested in operation;
  • the weight acquisition module 500 is used to identify the source code and application field of the artificial intelligence system to be tested, and obtain the detection weight parameters corresponding to each risk evaluation index of the artificial intelligence system to be tested according to the source code and application field of the artificial intelligence system to be tested;
  • the risk prediction module 700 is used to obtain the current operating risk parameters of the artificial intelligence system to be tested according to the index parameters of each risk evaluation index and the detection weight parameter corresponding to each risk evaluation index;
  • the risk determination module 900 is configured to determine that the artificial intelligence system to be tested has a security risk when the operating risk parameter is greater than the preset risk threshold.
  • the risk evaluation indicators include code security risk parameters, model accuracy, model interpretability, model sensitivity, and model input data aggressiveness.
  • the indicator acquisition module 300 is used to: acquire the source code corresponding to the artificial intelligence system to be detected and obtain the code security risk parameter according to the source code; obtain the labeled preset test sample data, input the test sample data into the artificial intelligence system to be tested, obtain the output data corresponding to the labeled preset test sample data, and obtain the model accuracy according to the output data and the labels of the corresponding test sample data; obtain the error data in the output data according to the output data and the labels of the corresponding test sample data, obtain the error explanation information corresponding to the error data, and obtain the model interpretability according to the error explanation information; obtain the model sensitivity according to the error data of the artificial intelligence system to be tested and its application field; and obtain the dependent data set corresponding to the system, and obtain the aggressiveness of the model input data according to the dependent data set.
  • the weight acquisition module 500 is used to: identify the source code and application field of the artificial intelligence system to be detected; determine the risk parameter category corresponding to the source code, search the preset risk parameter category weight parameter table according to the risk parameter category to obtain the weight parameter corresponding to the risk parameter category, and use it as the first detection weight parameter corresponding to the code security risk parameter; search the preset model accuracy category weight distribution table according to the application field to obtain the weight parameter corresponding to the application field, and use it as the second detection weight parameter corresponding to the model accuracy; determine the composition algorithm corresponding to the source code, search the preset model interpretability weight distribution table according to the composition algorithm to obtain the weight parameter corresponding to the composition algorithm, and use it as the third detection weight parameter corresponding to the model interpretability; determine the business impact level according to the application field, search the preset model sensitivity weight distribution table according to the business impact level to obtain the weight parameter corresponding to the business impact level, and use it as the fourth detection weight parameter corresponding to the model sensitivity; and search the preset model input-data aggressiveness weight distribution table according to the application field to obtain the weight parameter corresponding to the application field, and use it as the fifth detection weight parameter corresponding to the aggressiveness of the model input data.
  • the weight obtaining module 500 is further configured to perform business impact assessment on the artificial intelligence system to be tested according to the application field of the artificial intelligence system to be tested, and obtain the business impact level corresponding to the artificial intelligence system to be tested.
  • a threshold update module is also included, which is used to obtain the historical operating risk parameters corresponding to the artificial intelligence systems with security risks in the historical records, and to update the preset risk threshold according to the historical operating risk parameters.
  • a report generation module is further included, which is used to generate a model evaluation report according to the risk evaluation index and the operational risk parameters.
  • each module in the above-mentioned artificial intelligence system risk detection device can be implemented in whole or in part by software, hardware and a combination thereof.
  • the above-mentioned modules may be embedded in the form of hardware or independent of the processor in the computer equipment, or may be stored in the memory of the computer equipment in the form of software, so that the processor can call and execute the operations corresponding to the above-mentioned modules.
  • a computer device is provided.
  • the computer device may be a server, and its internal structure diagram may be as shown in FIG. 5.
  • the computer equipment includes a processor, a memory, a network interface, and a database connected through a system bus. Among them, the processor of the computer device is used to provide calculation and control capabilities.
  • the memory of the computer device includes a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium stores an operating system, a computer program, and a database.
  • the internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage medium.
  • the database of the computer equipment is used to store data related to the operational risk parameters of the historical artificial intelligence system.
  • the network interface of the computer device is used to communicate with an external terminal through a network connection.
  • the computer program is executed by the processor to realize an artificial intelligence system risk detection method.
  • FIG. 5 is only a block diagram of a part of the structure related to the solution of the present application, and does not constitute a limitation on the computer device to which the solution of the present application is applied.
  • the specific computer device may include more or fewer components than shown in the figure, or combine certain components, or have a different arrangement of components.
  • a computer device is provided, including a memory and a processor; a computer program is stored in the memory, and when the processor executes the computer program, the steps of the above method are implemented, including: when the operating risk parameter is greater than the preset risk threshold, determining that the artificial intelligence system to be tested has a security risk.
  • the processor further implements the following steps when executing the computer program: obtain the source code corresponding to the artificial intelligence system to be tested, and obtain the code security risk parameter according to the source code; obtain the labeled preset test sample data, input the test sample data into the artificial intelligence system to be tested, obtain the output data corresponding to the labeled preset test sample data, and obtain the model accuracy according to the output data and the labels of the corresponding test sample data; according to the output data and the labels of the corresponding test sample data, obtain the error data in the output data, obtain the error explanation information corresponding to the error data, and obtain the model interpretability according to the error explanation information; obtain the model sensitivity according to the error data of the artificial intelligence system to be tested and the application field of the artificial intelligence system to be tested; and obtain the dependent data set corresponding to the artificial intelligence system to be detected, and obtain the aggressiveness of the model input data according to the dependent data set.
  • the processor further implements the following steps when executing the computer program: identify the source code and application field of the artificial intelligence system to be tested; determine the risk parameter category corresponding to the source code, search the preset risk parameter category weight parameter table according to the risk parameter category to obtain the weight parameter corresponding to the risk parameter category, and use it as the first detection weight parameter corresponding to the code security risk parameter; search the preset model accuracy category weight distribution table according to the application field to obtain the weight parameter corresponding to the application field, and use it as the second detection weight parameter corresponding to the model accuracy; determine the composition algorithm corresponding to the source code, search the preset model interpretability weight distribution table according to the composition algorithm to obtain the weight parameter corresponding to the composition algorithm, and use it as the third detection weight parameter corresponding to the model interpretability; and determine the business impact level according to the application field, search the preset model sensitivity weight distribution table according to the business impact level to obtain the weight parameter corresponding to the business impact level, and use it as the fourth detection weight parameter corresponding to the model sensitivity.
  • the processor further implements the following steps when executing the computer program: performing a business impact assessment on the artificial intelligence system to be tested according to its application field, and obtaining the business impact level corresponding to the artificial intelligence system to be tested.
  • the processor further implements the following steps when executing the computer program: acquiring historical operating risk parameters corresponding to the artificial intelligence system with security risks in the historical record; updating the preset risk threshold according to the historical operating risk parameters.
  • the processor further implements the following steps when executing the computer program: generating a model evaluation report according to the risk evaluation index and the operating risk parameter.
  • a computer-readable storage medium is provided.
  • the computer-readable storage medium may be non-volatile or volatile.
  • the computer-readable storage medium stores a computer program. When executed by the processor, the following steps are implemented:
  • when the operating risk parameter is greater than the preset risk threshold, it is determined that the artificial intelligence system to be tested has a security risk.
  • the following steps are also implemented: obtain the source code corresponding to the artificial intelligence system to be tested, and obtain the code security risk parameter according to the source code; obtain the labeled preset test sample data, input the test sample data into the artificial intelligence system to be tested, obtain the output data corresponding to the labeled preset test sample data, and obtain the model accuracy according to the output data and the labels of the corresponding test sample data; according to the output data and the labels of the corresponding test sample data, obtain the error data in the output data, obtain the error explanation information corresponding to the error data, and obtain the model interpretability according to the error explanation information; obtain the model sensitivity according to the error data of the artificial intelligence system to be tested and the application field of the artificial intelligence system to be tested; and obtain the dependent data set corresponding to the artificial intelligence system to be detected, and obtain the aggressiveness of the model input data according to the dependent data set.
  • the following steps are also implemented: identify the source code and application field of the artificial intelligence system to be tested; determine the risk parameter category corresponding to the source code, search the preset risk parameter category weight parameter table according to the risk parameter category to obtain the weight parameter corresponding to the risk parameter category, and use it as the first detection weight parameter corresponding to the code security risk parameter; search the preset model accuracy category weight distribution table according to the application field to obtain the weight parameter corresponding to the application field, and use it as the second detection weight parameter corresponding to the model accuracy; determine the composition algorithm corresponding to the source code, search the preset model interpretability weight distribution table according to the composition algorithm to obtain the weight parameter corresponding to the composition algorithm, and use it as the third detection weight parameter corresponding to the model interpretability; and determine the business impact level according to the application field, search the preset model sensitivity weight distribution table according to the business impact level to obtain the weight parameter corresponding to the business impact level, and use it as the fourth detection weight parameter corresponding to the model sensitivity.
  • the following steps are further implemented: according to the application field of the artificial intelligence system to be tested, the business impact assessment of the artificial intelligence system to be tested is performed, and the business impact level corresponding to the artificial intelligence system to be tested is obtained.
  • the following steps are further implemented: obtaining historical operating risk parameters corresponding to the artificial intelligence system with security risks in the historical records; updating the preset risk threshold according to the historical operating risk parameters.
  • the following steps are further implemented: generating a model evaluation report according to the risk evaluation index and the operating risk parameters.
  • Non-volatile memory may include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory may include random access memory (RAM) or external cache memory.
  • RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link (Synchlink) DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • Quality & Reliability (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present application relates to the field of artificial intelligence model deployment, and in particular to an artificial intelligence system risk detection method and apparatus, and a computer device and a storage medium. The method comprises: running an artificial intelligence system to be subjected to detection; acquiring index parameters of various risk evaluation indexes of the artificial intelligence system to be subjected to detection during running; performing multi-dimensional quantitative detection on the artificial intelligence system according to the index parameters of the risk evaluation indexes; by means of a source code of the system and the use thereof, determining a weight parameter corresponding to the system; comprehensively taking the index parameters and corresponding parameters into consideration to obtain a running risk parameter of the artificial intelligence system to be subjected to detection; and detecting a running risk of the artificial intelligence system by means of the running risk parameter, such that the running artificial intelligence system is ensured to be secure and controllable, risks are mastered and avoided in an application process of the artificial intelligence system, and the occurrence of security problems or events is reduced.

Description

Artificial intelligence system risk detection method, device, computer equipment and medium
This application claims priority to the Chinese patent application filed with the Chinese Patent Office on January 7, 2020, with application number CN 202010014256.3 and invention title "Artificial intelligence system risk detection method, device, computer equipment and medium", the entire contents of which are incorporated into this application by reference.
Technical Field
This application relates to the field of computer technology, and in particular to an artificial intelligence system risk detection method, device, computer equipment, and storage medium.
Background
With the development of computer science, artificial intelligence technology is constantly being updated. Artificial intelligence is a branch of computer science that attempts to understand the essence of intelligence and to produce a new kind of intelligent machine that can respond in a manner similar to human intelligence. Research in this field includes robotics, speech recognition, image recognition, natural language processing, and expert systems. Since the birth of artificial intelligence, its theory and technology have become increasingly mature and its fields of application have continued to expand; it is conceivable that the technological products brought by artificial intelligence in the future will be "containers" of human wisdom. Artificial intelligence can simulate the information processes of human consciousness and thinking. Artificial intelligence is not human intelligence, but it can think like humans and may exceed human intelligence.
While artificial intelligence makes people's lives more convenient and comfortable, it also brings huge security risks, and these security risks may not only cause property losses but even endanger human life. The inventor realized that the industry does not yet have sufficient understanding of, or scientific detection methods for, the risks of artificial intelligence algorithms/models, data, and the like, and therefore cannot detect those risks.
Summary of the Invention
Based on this, in view of the technical problem that the risks of artificial intelligence algorithms/models, data, and the like are not yet sufficiently understood and no scientific detection methods exist to detect them, it is necessary to provide an artificial intelligence system risk detection method, device, computer equipment, and storage medium that can effectively detect artificial intelligence risks.
An artificial intelligence system risk detection method, the method including:
starting an artificial intelligence system to be tested;
running the artificial intelligence system to be tested, and acquiring the index parameters of each risk evaluation index of the artificial intelligence system to be tested in operation;
identifying the source code and application field of the artificial intelligence system to be tested, and obtaining, according to the source code and application field of the artificial intelligence system to be tested, the detection weight parameter corresponding to each risk evaluation index of the artificial intelligence system to be tested;
obtaining the current operating risk parameter of the artificial intelligence system to be tested according to the index parameters of each risk evaluation index and the detection weight parameter corresponding to each risk evaluation index;
when the operating risk parameter is greater than a preset risk threshold, determining that the artificial intelligence system to be tested has a security risk.
An artificial intelligence system risk detection device, the device including:
a model start module, used to start the artificial intelligence system to be tested;
an index acquisition module, used to run the artificial intelligence system to be tested and acquire the index parameters of each risk evaluation index of the artificial intelligence system to be tested in operation;
a weight acquisition module, used to identify the source code and application field of the artificial intelligence system to be tested and to obtain, according to the source code and application field, the detection weight parameter corresponding to each risk evaluation index of the artificial intelligence system to be tested;
a risk prediction module, used to obtain the current operating risk parameter of the artificial intelligence system to be tested according to the index parameters of each risk evaluation index and the detection weight parameter corresponding to each risk evaluation index;
a risk determination module, used to determine that the artificial intelligence system to be tested has a security risk when the operating risk parameter is greater than a preset risk threshold.
A computer device, including a memory and a processor, the memory storing a computer program, and the processor implementing the following steps when executing the computer program:
starting an artificial intelligence system to be tested;
running the artificial intelligence system to be tested, and acquiring the index parameters of each risk evaluation index of the artificial intelligence system to be tested in operation;
identifying the source code and application field of the artificial intelligence system to be tested, and obtaining, according to the source code and application field, the detection weight parameter corresponding to each risk evaluation index of the artificial intelligence system to be tested;
obtaining the current operating risk parameter of the artificial intelligence system to be tested according to the index parameters of each risk evaluation index and the detection weight parameter corresponding to each risk evaluation index;
when the operating risk parameter is greater than a preset risk threshold, determining that the artificial intelligence system to be tested has a security risk.
A computer-readable storage medium, having a computer program stored thereon, the computer program implementing the following steps when executed by a processor:
starting an artificial intelligence system to be tested;
running the artificial intelligence system to be tested, and acquiring the index parameters of each risk evaluation index of the artificial intelligence system to be tested in operation;
identifying the source code and application field of the artificial intelligence system to be tested, and obtaining, according to the source code and application field, the detection weight parameter corresponding to each risk evaluation index of the artificial intelligence system to be tested;
obtaining the current operating risk parameter of the artificial intelligence system to be tested according to the index parameters of each risk evaluation index and the detection weight parameter corresponding to each risk evaluation index;
when the operating risk parameter is greater than a preset risk threshold, determining that the artificial intelligence system to be tested has a security risk.
The above artificial intelligence system risk detection method, device, computer equipment, and storage medium run the artificial intelligence system to be tested, acquire the index parameters of each risk evaluation index during operation, and perform multi-angle quantitative detection of the system according to those index parameters; the corresponding weight parameters are then determined from the system's source code and application field, and the index parameters and weight parameters are considered together to obtain the operating risk parameter of the artificial intelligence system to be tested. Detecting the operating risk of the artificial intelligence system through this operating risk parameter ensures that a running artificial intelligence system is safe and controllable, allows risks to be grasped and avoided during the application of the system, and reduces the occurrence of security problems or incidents.
Description of the Drawings
Figure 1 is an application environment diagram of an artificial intelligence system risk detection method in an embodiment;
Figure 2 is a schematic flowchart of an artificial intelligence system risk detection method in an embodiment;
Figure 3 is a schematic flowchart of an artificial intelligence system risk detection method in another embodiment;
Figure 4 is a structural block diagram of an artificial intelligence system risk detection device in an embodiment;
Figure 5 is an internal structure diagram of a computer device in an embodiment.
Detailed Description of the Embodiments
In order to make the purpose, technical solutions, and advantages of this application clearer, the application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain this application and are not intended to limit it.
The artificial intelligence system risk detection method provided in this application can be applied to the application environment shown in Figure 1, in which the artificial intelligence server 102 communicates with the detection server 104 via a network. The artificial intelligence server 102 carries the artificial intelligence system to be tested. The detection server 104 first connects to the artificial intelligence server 102 and starts the artificial intelligence system to be tested; runs the system and acquires the index parameters of each risk evaluation index during operation; identifies the source code and application field of the system and, according to them, obtains the detection weight parameter corresponding to each risk evaluation index; obtains the current operating risk parameter of the system according to the index parameters of each risk evaluation index and the corresponding detection weight parameters; and, when the operating risk parameter is greater than the preset risk threshold, determines that the artificial intelligence system to be tested has a security risk. The artificial intelligence server 102 and the detection server 104 can each be implemented as an independent server or as a server cluster composed of multiple servers.
In one embodiment, as shown in Figure 2, an artificial intelligence system risk detection method is provided. Taking the application of the method to the detection server 104 in Figure 1 as an example, the method includes the following steps:
S100: The detection server starts the artificial intelligence system to be detected.
The detection server specifically refers to the server used to perform endogenous security detection of the artificial intelligence system; endogenous security specifically refers to security problems caused by the mechanism of the artificial intelligence system itself. The endogenous security of a native artificial intelligence system is often determined by aspects such as the code security, data integrity, model confidentiality, and model robustness of the artificial intelligence system. The detection server can judge the risk of the artificial intelligence system to be detected from each of these aspects to detect whether it carries a security risk. Corresponding to endogenous security is exogenous security, which refers to security incidents caused by factors outside the mechanism of the artificial intelligence system itself, such as the environment the system depends on during application, the security of the data it needs or produces during application, and whether its application complies with the corresponding laws and regulations. Specifically, the detection server can connect over the network to the artificial intelligence server that carries the artificial intelligence system to be tested, and then start that system to begin the detection process.
S300: Run the artificial intelligence system to be tested, and obtain the index parameters of each risk evaluation index of the artificial intelligence system to be tested in operation.
The risk detection of the artificial intelligence system specifically covers five risk evaluation indicators. Code security risk refers to the security risk caused by vulnerabilities in the artificial intelligence system. Model accuracy refers to the correctness of the model's judgment on the current input in the current environment. Model interpretability refers to whether the model's operating mechanism and its feedback on input data can be explained convincingly; when the model gives wrong feedback, the specific cause can be located. Model sensitivity refers to the degree of impact on the actual business when, during application, the model makes a misjudgment or deviates from the design expectations of the artificial intelligence system. The aggressiveness of the input data refers to the degree to which the input data can make the model err or deviate from its design expectations. While running the artificial intelligence system to be tested, the detection server can obtain the index parameters of the above risk evaluation indicators by examining each of these aspects of the system and expressing them in concrete numerical form.
S500: Identify the source code and application field of the artificial intelligence system to be tested, and obtain, according to the source code and application field, the detection weight parameter corresponding to each risk evaluation index of the artificial intelligence system to be tested.
The detection weight parameter refers to how much reference value each risk evaluation index has for the current artificial intelligence system to be tested. Because artificial intelligence systems differ in emphasis, purpose and composition, the detection weights also differ between systems to be tested. While obtaining the corresponding risk evaluation indexes, the server therefore also needs to identify the source code and application field of the artificial intelligence system to be tested, and obtain the detection weight parameter corresponding to each risk evaluation index based on the attributes of the system itself.
S700: Obtain the current operating risk parameter of the artificial intelligence system to be tested according to the index parameter of each risk evaluation index and the detection weight parameter corresponding to each risk evaluation index.
The operating risk parameter of the artificial intelligence system to be tested (denoted by a symbol that appears only as an image, PCTCN2020093555-appb-000001, in the original publication) refers to the probability that, during operation, the system's expression of or judgment about things goes wrong and affects the business. The calculation formula itself is likewise given only as an image (PCTCN2020093555-appb-000002); from the surrounding description it combines the five weighted indicator scores, where W1 to W5 are the weights of the respective indicators. For simplicity, both the weight of each indicator and the detection score of each indicator can use a percentile scale, and the weighted products are then normalized and converted back to a percentile scale. Here code risk is the code security risk parameter, the symbol shown as image PCTCN2020093555-appb-000003 denotes the model accuracy, Inter is the model interpretability, Sen is the model sensitivity, and data attack is the aggressiveness of the model input data. The operating risk parameter in this application can be detected in the endogenous security dimension from the directions of the code security risk parameter, model accuracy, model interpretability, model sensitivity and input data aggressiveness, so as to ensure the security of the artificial intelligence itself and guarantee that the artificial intelligence system is safe and controllable, while also revealing the security risks of the artificial intelligence system to users, enterprises and others, so that risks can be grasped and avoided during application and the occurrence of security problems or incidents can be reduced.
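To make the weighted combination concrete, the sketch below shows one plausible way to compute such an operating risk parameter on a percentile scale. The exact formula is given only as an image in the original publication, so the aggregation used here (weights and indicator scores on a 0-100 scale, weighted products summed and normalized by the weight total, with accuracy and interpretability entering through their complements) is an assumption for illustration, as are all function and variable names.

```python
# A minimal sketch, assuming each weight and indicator score is on a 0-100 scale
# and that the operating risk parameter is a normalized weighted combination.
# The exact formula in the publication is only available as an image, so this
# aggregation is illustrative, not the authoritative patented formula.

def operating_risk(code_risk: float, acc: float, inter: float,
                   sen: float, data_attack: float,
                   weights: tuple[float, float, float, float, float]) -> float:
    """Return a percentile-scale operating risk parameter."""
    w1, w2, w3, w4, w5 = weights
    # Higher accuracy and interpretability should lower risk, so their complements
    # are used here (an assumption consistent with, but not stated by, the text).
    contributions = (
        w1 * code_risk,
        w2 * (100.0 - acc),
        w3 * (100.0 - inter),
        w4 * sen,
        w5 * data_attack,
    )
    total_weight = w1 + w2 + w3 + w4 + w5
    # Normalize the weighted products back to a 0-100 percentile scale.
    return sum(contributions) / total_weight


if __name__ == "__main__":
    risk = operating_risk(code_risk=35, acc=92, inter=60, sen=70, data_attack=40,
                          weights=(20, 25, 15, 25, 15))
    print(f"operating risk = {risk:.1f}")  # compared against a preset threshold in S900
```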
S900: When the operating risk parameter is greater than a preset risk threshold, determine that the artificial intelligence system to be tested has a security risk.
When the calculated operating risk parameter is greater than the preset risk threshold, it can be judged that the current artificial intelligence system to be tested carries a certain security risk; at this point it can be determined that the system has certain operational defects and may expose security risks in actual use.
As shown in FIG. 3, in one of the embodiments, before step S900 the method further includes:
S810: Obtain historical operating risk parameters corresponding to artificial intelligence systems that exhibited security risks in the historical records.
S830: Update the preset risk threshold according to the historical operating risk parameters.
At the initial moment, the detection server can determine an initial preset risk threshold, and then, as the method runs, continuously update the current preset risk threshold according to the operating risk parameters measured during testing for artificial intelligence systems that have already been detected in the historical records and that exhibited operating errors. Specifically, the update may obtain the operating risk parameters measured during detection for the systems that exhibited operating errors, calculate the minimum of all these operating risk parameters, and use that minimum as the new preset risk threshold. Continuously updating the preset risk threshold effectively improves the accuracy of artificial intelligence system risk detection.
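The update rule described above (taking the minimum historical operating risk parameter among systems that later exhibited errors as the new threshold) can be sketched as follows. The record structure, field names and function name are assumptions made for illustration.

```python
# A minimal sketch of S810/S830, assuming historical records are simple dicts.
# Field names such as "had_security_risk" and "operating_risk" are illustrative.

def update_preset_threshold(history: list[dict], current_threshold: float) -> float:
    """Return a new preset risk threshold based on systems that showed security risks."""
    risky_params = [
        record["operating_risk"]
        for record in history
        if record.get("had_security_risk")
    ]
    if not risky_params:
        return current_threshold  # nothing to learn from yet, keep the current threshold
    # Per the description, the minimum risk parameter among systems that exhibited
    # errors becomes the new threshold, so future borderline systems are also flagged.
    return min(risky_params)


history = [
    {"system": "ai-ocr", "operating_risk": 72.4, "had_security_risk": True},
    {"system": "ai-chat", "operating_risk": 55.0, "had_security_risk": False},
    {"system": "ai-credit", "operating_risk": 63.1, "had_security_risk": True},
]
print(update_preset_threshold(history, current_threshold=80.0))  # -> 63.1
```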
In the above artificial intelligence system risk detection method, the artificial intelligence system to be tested is run to obtain the index parameters of each risk evaluation index during operation, and the system is quantitatively detected from multiple angles according to these index parameters. The corresponding weight parameters are then determined from the system's source code and its application field, and the index parameters and the corresponding weight parameters are considered together to obtain the operating risk parameter of the artificial intelligence system to be tested. This operating risk parameter is used to detect the operating risk of the artificial intelligence system, so as to ensure that a deployed artificial intelligence system is safe and controllable, to grasp and avoid risks during the application of the system, and to reduce the occurrence of security problems or incidents.
In one of the embodiments, the risk evaluation indexes include a code security risk parameter, model accuracy, model interpretability, model sensitivity and model input data aggressiveness, and obtaining the risk evaluation indexes corresponding to each preset detection moment while the artificial intelligence system to be tested is running includes:
obtaining the source code corresponding to the artificial intelligence system to be tested, and obtaining the code security risk parameter according to the source code;
obtaining labeled preset test sample data, inputting the test sample data into the artificial intelligence system to be tested, obtaining the output data corresponding to the labeled preset test sample data, and obtaining the model accuracy according to the output data and the labels carried by the corresponding test sample data;
obtaining the error data in the output data according to the output data and the labels carried by the corresponding test sample data, obtaining the error explanation information corresponding to the error data, and obtaining the model interpretability according to the error explanation information;
obtaining the model sensitivity according to the error data of the artificial intelligence system to be tested and the application field of the artificial intelligence system to be tested;
obtaining the dependent data set corresponding to the artificial intelligence system to be tested, and obtaining the model input data aggressiveness according to the dependent data set.
Some risk evaluation indexes of the model can be obtained from the composition of the model itself, but others can only be obtained by running the model under test. In this application, the specific process of obtaining each risk evaluation index includes the following. The code (for example open source or commercial) and framework (for example Tensorflow or Pytorch) adopted by the current artificial intelligence system to be tested are obtained, and the risks of the code and framework are identified through methods such as source code auditing and fuzzing, so as to obtain the code security risk corresponding to the system, that is, the code security risk parameter. The accuracy of the model can be detected by using a certain amount of test samples, so as to obtain an audit score corresponding to the model accuracy. Data instances on which the artificial intelligence system makes prediction or recognition errors on the test data are obtained, and it is judged whether the current system can give a corresponding error explanation for each such instance (for example, for a deep learning framework with classification errors, pointing out that its convolutional layers may affect the recognition effect), so as to obtain the corresponding model interpretability score. The data instances with prediction or recognition errors are also used to judge the degree of influence of such instances on the actual business of the system, and the model sensitivity score is obtained according to that degree of influence. For the data set that the artificial intelligence system depends on, the degree to which its input data can make the model err or deviate from design expectations is detected, the risk value of this indicator is obtained, and the input data aggressiveness score of the system is obtained according to that risk. The model is thus evaluated from multiple angles through different methods, so as to obtain risk evaluation indexes of the artificial intelligence system to be tested from different perspectives.
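As one concrete illustration of the accuracy and error-data steps above, the sketch below computes a percentile accuracy score from labeled test samples and collects the mispredicted instances that would then feed the interpretability and sensitivity checks. The model interface (a `predict()` callable) and the field names are assumptions, not the interface of any particular framework.

```python
# A minimal sketch of the accuracy / error-data steps, assuming the system under
# test exposes a predict() callable and that labels are plain values. The 0-100
# scoring scale matches the percentile convention used elsewhere in the description.

from typing import Any, Callable, Sequence

def evaluate_accuracy(predict: Callable[[Any], Any],
                      samples: Sequence[Any],
                      labels: Sequence[Any]) -> tuple[float, list[dict]]:
    """Return (accuracy score on a 0-100 scale, list of error instances)."""
    errors = []
    correct = 0
    for sample, label in zip(samples, labels):
        output = predict(sample)
        if output == label:
            correct += 1
        else:
            # Error instances are kept for the later interpretability and
            # sensitivity assessments described in the text.
            errors.append({"input": sample, "expected": label, "output": output})
    accuracy = 100.0 * correct / max(len(samples), 1)
    return accuracy, errors
```

An interpretability check could then ask whether, for each entry in `errors`, the system can point to a plausible cause (for example a specific layer), and a sensitivity check could grade the business impact of each error instance.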
In one of the embodiments, step S500 includes:
identifying the source code and application field of the artificial intelligence system to be tested;
determining the risk parameter category corresponding to the source code, searching a preset risk parameter category weight parameter table according to the risk parameter category to obtain the weight parameter corresponding to the risk parameter category, and using the weight parameter corresponding to the risk parameter category as the first detection weight parameter corresponding to the code security risk parameter;
searching a preset model accuracy category weight distribution table according to the application field to obtain a first weight parameter corresponding to the application field, and using the first weight parameter corresponding to the application field as the second detection weight parameter corresponding to the model accuracy;
determining the composition algorithm corresponding to the source code, searching a preset model interpretability weight distribution table according to the composition algorithm to obtain the weight parameter corresponding to the composition algorithm, and using the weight parameter corresponding to the composition algorithm as the third detection weight parameter corresponding to the model interpretability;
determining the business impact level according to the application field, searching a preset model sensitivity weight distribution table according to the business impact level to obtain the weight parameter corresponding to the business impact level, and using the weight parameter corresponding to the business impact level as the fourth detection weight parameter corresponding to the model sensitivity; specifically, a business impact assessment can be performed on the artificial intelligence system to be tested according to its application field, so as to obtain the business impact level corresponding to the system;
searching a preset model input data aggressiveness weight distribution table according to the application field to obtain a second weight parameter corresponding to the application field, and using the second weight parameter corresponding to the application field as the fifth detection weight parameter corresponding to the model input data aggressiveness.
In one of the embodiments, before determining the business impact level according to the application field, searching the preset model sensitivity weight distribution table according to the business impact level to obtain the weight parameter corresponding to the business impact level, and using the weight parameter corresponding to the business impact level as the fourth detection weight parameter corresponding to the model sensitivity, the method further includes:
The server can identify the source code and application field of the artificial intelligence system to be tested, and then obtain the detection weight parameter corresponding to each risk evaluation index according to the source code and application field. The preset weight distribution tables can be obtained by summarizing historical empirical data or from expert experience. In general, for each category of model the corresponding weight distribution follows rules such as the following. For the detection weight parameter corresponding to the code security risk parameter, risks that have a major impact on code execution, such as buffer overflow, information leakage, unauthorized viewing and SQL (Structured Query Language) injection, are assigned a larger detection weight parameter, while risks that have only a general impact on code execution, such as code logic errors and inefficient code, are assigned a lower detection weight parameter. The selection of the detection weight parameter for model accuracy mainly depends on the application the model serves: the more critical the application, the higher the assigned weight interval, and vice versa. For example, artificial intelligence systems applied to image recognition or speech recognition are assigned a lower weight, while systems in critical areas such as driverless cars or aircraft are assigned a higher weight. The selection of the detection weight parameter for model interpretability mainly depends on how interpretable the corresponding algorithm or model is, which is related to the artificial intelligence category it belongs to: algorithms or models with better interpretability are assigned a lower risk weight interval, and vice versa. For example, systems that apply rule-based machine learning such as ID3 or C4.5/C5.0 decision tree algorithms are assigned a higher weight, while systems built on deep learning frameworks such as Tensorflow, Pytorch or Caffe are assigned a lower weight. The selection of the detection weight parameter for model sensitivity mainly depends on the importance of the corresponding application level. Specifically, a business impact assessment can be performed on the artificial intelligence system to be tested according to its application field to obtain the corresponding business impact level, and the fourth detection weight parameter corresponding to model sensitivity is then looked up in a table according to that business impact level; this weight is determined by the different business impact levels. For example, artificial intelligence systems for driverless cars, aircraft or medical surgery are assigned a higher weight, while low-impact systems such as AI used for attendance or clocking in are assigned a lower weight. The selection of the detection weight parameter for input data aggressiveness mainly depends on detecting risks such as possible pollution or bait injection in the data set that the AI algorithm, model or hosted application depends on: the more easily the data set can be corrupted or misled, the higher the weight, and vice versa. Artificial intelligence systems for image recognition and speech are assigned a higher weight, while systems for human-machine question answering and unmanned driving are assigned a lower weight.
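A lookup-table style weight resolution like the one described above could be sketched as follows. Every table entry, key, field name and default value here is an illustrative assumption rather than the patented weight distribution.

```python
# A minimal sketch of resolving the five detection weight parameters from preset
# lookup tables keyed by risk category, application field criticality, algorithm
# family, business impact level and data set exposure. All values are assumptions.

CODE_RISK_WEIGHTS = {"major": 30, "general": 10}          # e.g. buffer overflow vs. logic error
ACCURACY_WEIGHTS = {"safety_critical": 30, "ordinary": 15}
INTERPRETABILITY_WEIGHTS = {"rule_based": 25, "deep_learning": 10}
SENSITIVITY_WEIGHTS = {"high_impact": 30, "low_impact": 10}
DATA_ATTACK_WEIGHTS = {"easily_polluted": 25, "hard_to_pollute": 10}

def resolve_weights(risk_category: str, field_criticality: str,
                    algorithm_family: str, business_impact: str,
                    dataset_exposure: str) -> tuple[float, float, float, float, float]:
    """Return (W1, W2, W3, W4, W5) for the five risk evaluation indexes."""
    return (
        CODE_RISK_WEIGHTS.get(risk_category, 10),
        ACCURACY_WEIGHTS.get(field_criticality, 15),
        INTERPRETABILITY_WEIGHTS.get(algorithm_family, 10),
        SENSITIVITY_WEIGHTS.get(business_impact, 10),
        DATA_ATTACK_WEIGHTS.get(dataset_exposure, 10),
    )

print(resolve_weights("major", "safety_critical", "deep_learning",
                      "high_impact", "easily_polluted"))
```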
In one of the embodiments, after step S700 the method further includes:
generating a model evaluation report according to the risk evaluation indexes and the operating risk parameter.
After the risk evaluation indexes are obtained and the corresponding operating risk parameter is obtained from them, all the risk evaluation indexes and the operating risk parameter can be collated, and a corresponding model evaluation report can then be generated from these indexes and the operating risk parameter. Through this model evaluation report the user can determine in which aspects the artificial intelligence system to be tested carries operating risks, which also makes it easier to grasp and avoid risks during application and to reduce the occurrence of security problems or incidents. Generating a model evaluation report presents the detection results of the artificial intelligence system to be tested to the user more intuitively.
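A simple way to collate the indexes and the overall score into such a report might look like the sketch below; the report fields, the flagging rule and the highlighted-indicator heuristic are assumptions made for illustration.

```python
# A minimal sketch of the report generation step, combining the five indicator
# scores and the overall operating risk parameter into a plain dictionary report.
# Field names and the pass/fail wording are illustrative assumptions.

def build_evaluation_report(indicators: dict[str, float],
                            operating_risk: float,
                            threshold: float) -> dict:
    """Collate indicator scores and the operating risk parameter into a report."""
    return {
        "indicators": dict(indicators),
        "operating_risk": operating_risk,
        "threshold": threshold,
        "has_security_risk": operating_risk > threshold,
        # Highlight the weakest aspects so the user can see where the risk comes from.
        "highest_risk_indicators": sorted(indicators, key=indicators.get, reverse=True)[:2],
    }

report = build_evaluation_report(
    {"code_risk": 35, "accuracy_gap": 8, "interpretability_gap": 40,
     "sensitivity": 70, "data_attack": 40},
    operating_risk=46.3, threshold=63.1)
print(report["has_security_risk"], report["highest_risk_indicators"])
```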
It should be understood that although the steps in the flowcharts of FIG. 2 and FIG. 3 are displayed in sequence as indicated by the arrows, these steps are not necessarily executed in the order indicated by the arrows. Unless explicitly stated herein, the execution of these steps is not strictly limited in order, and they may be executed in other orders. Moreover, at least some of the steps in FIG. 2 and FIG. 3 may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different moments, and whose execution order is not necessarily sequential; they may be executed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
In one embodiment, as shown in FIG. 4, an artificial intelligence system risk detection apparatus is provided, including:
a model starting module 100, configured to start the artificial intelligence system to be tested;
an index acquisition module 300, configured to run the artificial intelligence system to be tested and obtain the index parameters of each risk evaluation index of the artificial intelligence system to be tested during operation;
a weight acquisition module 500, configured to identify the source code and application field of the artificial intelligence system to be tested, and obtain, according to the source code and application field, the detection weight parameter corresponding to each risk evaluation index of the artificial intelligence system to be tested;
a risk prediction module 700, configured to obtain the current operating risk parameter of the artificial intelligence system to be tested according to the index parameter of each risk evaluation index and the detection weight parameter corresponding to each risk evaluation index;
a risk determination module 900, configured to determine that the artificial intelligence system to be tested has a security risk when the operating risk parameter is greater than a preset risk threshold.
In one of the embodiments, the risk evaluation indexes include a code security risk parameter, model accuracy, model interpretability, model sensitivity and model input data aggressiveness, and the index acquisition module 300 is configured to: obtain the source code corresponding to the artificial intelligence system to be tested, and obtain the code security risk parameter according to the source code; obtain labeled preset test sample data, input the test sample data into the artificial intelligence system to be tested, obtain the output data corresponding to the labeled preset test sample data, and obtain the model accuracy according to the output data and the labels carried by the corresponding test sample data; obtain the error data in the output data according to the output data and the labels carried by the corresponding test sample data, obtain the error explanation information corresponding to the error data, and obtain the model interpretability according to the error explanation information; obtain the model sensitivity according to the error data of the artificial intelligence system to be tested and the application field of the artificial intelligence system to be tested; and obtain the dependent data set corresponding to the artificial intelligence system to be tested, and obtain the model input data aggressiveness according to the dependent data set.
In one of the embodiments, the weight acquisition module 500 is configured to: identify the source code and application field of the artificial intelligence system to be tested; determine the risk parameter category corresponding to the source code, search a preset risk parameter category weight parameter table according to the risk parameter category to obtain the weight parameter corresponding to the risk parameter category, and use it as the first detection weight parameter corresponding to the code security risk parameter; search a preset model accuracy category weight distribution table according to the application field to obtain a first weight parameter corresponding to the application field, and use it as the second detection weight parameter corresponding to the model accuracy; determine the composition algorithm corresponding to the source code, search a preset model interpretability weight distribution table according to the composition algorithm to obtain the weight parameter corresponding to the composition algorithm, and use it as the third detection weight parameter corresponding to the model interpretability; determine the business impact level according to the application field, search a preset model sensitivity weight distribution table according to the business impact level to obtain the weight parameter corresponding to the business impact level, and use it as the fourth detection weight parameter corresponding to the model sensitivity; and search a preset model input data aggressiveness weight distribution table according to the application field to obtain a second weight parameter corresponding to the application field, and use it as the fifth detection weight parameter corresponding to the model input data aggressiveness.
In one of the embodiments, the weight acquisition module 500 is further configured to perform a business impact assessment on the artificial intelligence system to be tested according to its application field, and obtain the business impact level corresponding to the artificial intelligence system to be tested.
In one of the embodiments, the apparatus further includes a threshold update module, configured to obtain historical operating risk parameters corresponding to artificial intelligence systems that exhibited security risks in the historical records, and to update the preset risk threshold according to the historical operating risk parameters.
In one of the embodiments, the apparatus further includes a report generation module, configured to generate a model evaluation report according to the risk evaluation indexes and the operating risk parameter.
For the specific limitations of the artificial intelligence system risk detection apparatus, reference may be made to the limitations of the artificial intelligence system risk detection method above, which are not repeated here. Each module in the above artificial intelligence system risk detection apparatus may be implemented in whole or in part by software, hardware or a combination thereof. The above modules may be embedded in or independent of the processor of a computer device in hardware form, or stored in the memory of the computer device in software form, so that the processor can invoke and execute the operations corresponding to each module.
In one embodiment, a computer device is provided. The computer device may be a server, and its internal structure may be as shown in FIG. 5. The computer device includes a processor, a memory, a network interface and a database connected through a system bus. The processor of the computer device is used to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program and a database. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The database of the computer device is used to store data related to the operating risk parameters of historical artificial intelligence systems. The network interface of the computer device is used to communicate with an external terminal through a network connection. The computer program, when executed by the processor, implements an artificial intelligence system risk detection method.
Those skilled in the art can understand that the structure shown in FIG. 5 is only a block diagram of part of the structure related to the solution of the present application and does not constitute a limitation on the computer device to which the solution of the present application is applied. A specific computer device may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, including a memory and a processor, the memory storing a computer program, and the processor implementing the following steps when executing the computer program:
starting the artificial intelligence system to be tested;
running the artificial intelligence system to be tested, and obtaining the index parameters of each risk evaluation index of the artificial intelligence system to be tested during operation;
identifying the source code and application field of the artificial intelligence system to be tested, and obtaining, according to the source code and application field, the detection weight parameter corresponding to each risk evaluation index of the artificial intelligence system to be tested;
obtaining the current operating risk parameter of the artificial intelligence system to be tested according to the index parameter of each risk evaluation index and the detection weight parameter corresponding to each risk evaluation index;
when the operating risk parameter is greater than a preset risk threshold, determining that the artificial intelligence system to be tested has a security risk.
In one embodiment, the processor further implements the following steps when executing the computer program: obtaining the source code corresponding to the artificial intelligence system to be tested, and obtaining the code security risk parameter according to the source code; obtaining labeled preset test sample data, inputting the test sample data into the artificial intelligence system to be tested, obtaining the output data corresponding to the labeled preset test sample data, and obtaining the model accuracy according to the output data and the labels carried by the corresponding test sample data; obtaining the error data in the output data according to the output data and the labels carried by the corresponding test sample data, obtaining the error explanation information corresponding to the error data, and obtaining the model interpretability according to the error explanation information; obtaining the model sensitivity according to the error data of the artificial intelligence system to be tested and the application field of the artificial intelligence system to be tested; and obtaining the dependent data set corresponding to the artificial intelligence system to be tested, and obtaining the model input data aggressiveness according to the dependent data set.
In one embodiment, the processor further implements the following steps when executing the computer program: identifying the source code and application field of the artificial intelligence system to be tested; determining the risk parameter category corresponding to the source code, searching a preset risk parameter category weight parameter table according to the risk parameter category to obtain the weight parameter corresponding to the risk parameter category, and using it as the first detection weight parameter corresponding to the code security risk parameter; searching a preset model accuracy category weight distribution table according to the application field to obtain a first weight parameter corresponding to the application field, and using it as the second detection weight parameter corresponding to the model accuracy; determining the composition algorithm corresponding to the source code, searching a preset model interpretability weight distribution table according to the composition algorithm to obtain the weight parameter corresponding to the composition algorithm, and using it as the third detection weight parameter corresponding to the model interpretability; determining the business impact level according to the application field, searching a preset model sensitivity weight distribution table according to the business impact level to obtain the weight parameter corresponding to the business impact level, and using it as the fourth detection weight parameter corresponding to the model sensitivity; and searching a preset model input data aggressiveness weight distribution table according to the application field to obtain a second weight parameter corresponding to the application field, and using it as the fifth detection weight parameter corresponding to the model input data aggressiveness.
In one embodiment, the processor further implements the following step when executing the computer program: performing a business impact assessment on the artificial intelligence system to be tested according to its application field, and obtaining the business impact level corresponding to the artificial intelligence system to be tested.
In one embodiment, the processor further implements the following steps when executing the computer program: obtaining historical operating risk parameters corresponding to artificial intelligence systems that exhibited security risks in the historical records; and updating the preset risk threshold according to the historical operating risk parameters.
In one embodiment, the processor further implements the following step when executing the computer program: generating a model evaluation report according to the risk evaluation indexes and the operating risk parameter.
In one embodiment, a computer-readable storage medium is provided. The computer-readable storage medium may be non-volatile or volatile, and stores a computer program that, when executed by a processor, implements the following steps:
starting the artificial intelligence system to be tested;
running the artificial intelligence system to be tested, and obtaining the index parameters of each risk evaluation index of the artificial intelligence system to be tested during operation;
identifying the source code and application field of the artificial intelligence system to be tested, and obtaining, according to the source code and application field, the detection weight parameter corresponding to each risk evaluation index of the artificial intelligence system to be tested;
obtaining the current operating risk parameter of the artificial intelligence system to be tested according to the index parameter of each risk evaluation index and the detection weight parameter corresponding to each risk evaluation index;
when the operating risk parameter is greater than a preset risk threshold, determining that the artificial intelligence system to be tested has a security risk.
In one embodiment, when the computer program is executed by the processor, the following steps are further implemented: obtaining the source code corresponding to the artificial intelligence system to be tested, and obtaining the code security risk parameter according to the source code; obtaining labeled preset test sample data, inputting the test sample data into the artificial intelligence system to be tested, obtaining the output data corresponding to the labeled preset test sample data, and obtaining the model accuracy according to the output data and the labels carried by the corresponding test sample data; obtaining the error data in the output data according to the output data and the labels carried by the corresponding test sample data, obtaining the error explanation information corresponding to the error data, and obtaining the model interpretability according to the error explanation information; obtaining the model sensitivity according to the error data of the artificial intelligence system to be tested and the application field of the artificial intelligence system to be tested; and obtaining the dependent data set corresponding to the artificial intelligence system to be tested, and obtaining the model input data aggressiveness according to the dependent data set.
In one embodiment, when the computer program is executed by the processor, the following steps are further implemented: identifying the source code and application field of the artificial intelligence system to be tested; determining the risk parameter category corresponding to the source code, searching a preset risk parameter category weight parameter table according to the risk parameter category to obtain the weight parameter corresponding to the risk parameter category, and using it as the first detection weight parameter corresponding to the code security risk parameter; searching a preset model accuracy category weight distribution table according to the application field to obtain a first weight parameter corresponding to the application field, and using it as the second detection weight parameter corresponding to the model accuracy; determining the composition algorithm corresponding to the source code, searching a preset model interpretability weight distribution table according to the composition algorithm to obtain the weight parameter corresponding to the composition algorithm, and using it as the third detection weight parameter corresponding to the model interpretability; determining the business impact level according to the application field, searching a preset model sensitivity weight distribution table according to the business impact level to obtain the weight parameter corresponding to the business impact level, and using it as the fourth detection weight parameter corresponding to the model sensitivity; and searching a preset model input data aggressiveness weight distribution table according to the application field to obtain a second weight parameter corresponding to the application field, and using it as the fifth detection weight parameter corresponding to the model input data aggressiveness.
In one embodiment, when the computer program is executed by the processor, the following step is further implemented: performing a business impact assessment on the artificial intelligence system to be tested according to its application field, and obtaining the business impact level corresponding to the artificial intelligence system to be tested.
In one embodiment, when the computer program is executed by the processor, the following steps are further implemented: obtaining historical operating risk parameters corresponding to artificial intelligence systems that exhibited security risks in the historical records; and updating the preset risk threshold according to the historical operating risk parameters.
In one embodiment, when the computer program is executed by the processor, the following step is further implemented: generating a model evaluation report according to the risk evaluation indexes and the operating risk parameter.
A person of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments can be completed by instructing relevant hardware through a computer program; the computer program can be stored in a non-volatile computer-readable storage medium, and when executed may include the processes of the above method embodiments. Any reference to memory, storage, database or other media used in the embodiments provided in this application may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM) and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be combined arbitrarily. For conciseness of description, not all possible combinations of the technical features in the above embodiments have been described; however, as long as there is no contradiction in the combination of these technical features, they should be regarded as falling within the scope of this specification.
The above embodiments only express several implementations of the present application, and their description is relatively specific and detailed, but they should not therefore be understood as limiting the scope of the invention patent. It should be pointed out that, for those of ordinary skill in the art, several modifications and improvements can be made without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of the patent of the present application shall be subject to the appended claims.

Claims (20)

  1. An artificial intelligence system risk detection method, wherein the method includes:
    starting an artificial intelligence system to be tested;
    running the artificial intelligence system to be tested, and obtaining the index parameters of each risk evaluation index of the artificial intelligence system to be tested during operation;
    identifying the source code and application field of the artificial intelligence system to be tested, and obtaining, according to the source code and application field of the artificial intelligence system to be tested, the detection weight parameter corresponding to each risk evaluation index of the artificial intelligence system to be tested;
    obtaining the current operating risk parameter of the artificial intelligence system to be tested according to the index parameter of each risk evaluation index and the detection weight parameter corresponding to each risk evaluation index;
    when the operating risk parameter is greater than a preset risk threshold, determining that the artificial intelligence system to be tested has a security risk.
  2. The method according to claim 1, wherein the risk evaluation indexes include a code security risk parameter, model accuracy, model interpretability, model sensitivity and model input data aggressiveness, and running the artificial intelligence system to be tested and obtaining each risk evaluation index corresponding to each preset detection moment during the running of the artificial intelligence system to be tested includes:
    obtaining the source code corresponding to the artificial intelligence system to be tested, and obtaining the code security risk parameter according to the source code;
    obtaining labeled preset test sample data, inputting the test sample data into the artificial intelligence system to be tested, obtaining the output data corresponding to the labeled preset test sample data, and obtaining the model accuracy according to the output data and the labels carried by the corresponding test sample data;
    obtaining the error data in the output data according to the output data and the labels carried by the corresponding test sample data, obtaining the error explanation information corresponding to the error data, and obtaining the model interpretability according to the error explanation information;
    obtaining the model sensitivity according to the error data of the artificial intelligence system to be tested and the application field of the artificial intelligence system to be tested;
    obtaining the dependent data set corresponding to the artificial intelligence system to be tested, and obtaining the model input data aggressiveness according to the dependent data set.
  3. The method according to claim 2, wherein identifying the source code and application field of the artificial intelligence system to be tested and obtaining, according to the source code and application field of the artificial intelligence system to be tested, the detection weight parameter corresponding to each risk evaluation index of the artificial intelligence system to be tested includes:
    identifying the source code and application field of the artificial intelligence system to be tested;
    determining the risk parameter category corresponding to the source code, searching a preset risk parameter category weight parameter table according to the risk parameter category to obtain the weight parameter corresponding to the risk parameter category, and using the weight parameter corresponding to the risk parameter category as the first detection weight parameter corresponding to the code security risk parameter;
    searching a preset model accuracy category weight distribution table according to the application field to obtain a first weight parameter corresponding to the application field, and using the first weight parameter corresponding to the application field as the second detection weight parameter corresponding to the model accuracy;
    determining the composition algorithm corresponding to the source code, searching a preset model interpretability weight distribution table according to the composition algorithm to obtain the weight parameter corresponding to the composition algorithm, and using the weight parameter corresponding to the composition algorithm as the third detection weight parameter corresponding to the model interpretability;
    determining the business impact level according to the application field, searching a preset model sensitivity weight distribution table according to the business impact level to obtain the weight parameter corresponding to the business impact level, and using the weight parameter corresponding to the business impact level as the fourth detection weight parameter corresponding to the model sensitivity;
    searching a preset model input data aggressiveness weight distribution table according to the application field to obtain a second weight parameter corresponding to the application field, and using the second weight parameter corresponding to the application field as the fifth detection weight parameter corresponding to the model input data aggressiveness.
  4. The method according to claim 3, wherein before determining the business impact level according to the application field, searching the preset model sensitivity weight distribution table according to the business impact level to obtain the weight parameter corresponding to the business impact level, and using the weight parameter corresponding to the business impact level as the fourth detection weight parameter corresponding to the model sensitivity, the method further includes:
    performing a business impact assessment on the artificial intelligence system to be tested according to the application field of the artificial intelligence system to be tested, and obtaining the business impact level corresponding to the artificial intelligence system to be tested.
5. The method according to claim 1, wherein before the determining that the artificial intelligence system to be tested has a security risk when the operating risk parameter is greater than a preset risk threshold, the method further comprises:
    obtaining historical operating risk parameters corresponding to artificial intelligence systems that exhibited security risks in historical records; and
    updating the preset risk threshold according to the historical operating risk parameters.
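A minimal sketch of the threshold update described in claim 5, assuming the historical operating risk parameters of previously risky systems are available as a list of numbers; taking a low quantile of that history is one plausible update rule, not one fixed by this application.

    def update_risk_threshold(historical_risk_params, current_threshold, quantile=0.1):
        """Move the preset risk threshold toward the low end of operating risk
        parameters that previously accompanied security incidents, so comparable
        systems are flagged earlier. The quantile choice is an assumption."""
        if not historical_risk_params:
            return current_threshold  # no history to learn from; keep the preset value
        ranked = sorted(historical_risk_params)
        idx = int(quantile * (len(ranked) - 1))
        return min(current_threshold, ranked[idx])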
6. The method according to claim 1, wherein after the obtaining an operating risk parameter of the artificial intelligence system to be tested according to each risk evaluation index and the detection weight parameter corresponding to each index, the method further comprises:
    generating a model evaluation report according to the risk evaluation indexes and the operating risk parameter.
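Claim 6 only requires that an evaluation report be produced from the risk evaluation indexes and the operating risk parameter; serializing them into a JSON document, as sketched below, is one hypothetical rendering.

    import json

    def generate_model_evaluation_report(index_params, detection_weights, operating_risk):
        """Assemble a report from the per-index parameters, their detection
        weights, and the aggregate operating risk parameter."""
        report = {
            "risk_evaluation_indexes": index_params,
            "detection_weights": detection_weights,
            "operating_risk_parameter": operating_risk,
        }
        return json.dumps(report, ensure_ascii=False, indent=2)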
7. An artificial intelligence system risk detection apparatus, wherein the apparatus comprises:
    a startup module, configured to start an artificial intelligence system to be tested;
    an index acquisition module, configured to run the artificial intelligence system to be tested and obtain index parameters of each risk evaluation index of the artificial intelligence system to be tested during operation;
    a weight acquisition module, configured to identify the source code and application field of the artificial intelligence system to be tested, and obtain, according to the source code and application field of the artificial intelligence system to be tested, a detection weight parameter corresponding to each risk evaluation index of the artificial intelligence system to be tested;
    a risk prediction module, configured to obtain an operating risk parameter of the artificial intelligence system to be tested according to the index parameter of each risk evaluation index and the detection weight parameter corresponding to each risk evaluation index; and
    a risk determination module, configured to determine that the artificial intelligence system to be tested has a security risk when the operating risk parameter is greater than a preset risk threshold.
8. A computer device, comprising a memory and a processor, the memory storing a computer program, wherein the processor, when executing the computer program, implements the following steps:
    starting an artificial intelligence system to be tested;
    running the artificial intelligence system to be tested, and obtaining index parameters of each risk evaluation index of the artificial intelligence system to be tested during operation;
    identifying the source code and application field of the artificial intelligence system to be tested, and obtaining, according to the source code and application field of the artificial intelligence system to be tested, a detection weight parameter corresponding to each risk evaluation index of the artificial intelligence system to be tested;
    obtaining an operating risk parameter of the artificial intelligence system to be tested according to the index parameter of each risk evaluation index and the detection weight parameter corresponding to each risk evaluation index; and
    determining that the artificial intelligence system to be tested has a security risk when the operating risk parameter is greater than a preset risk threshold.
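The aggregation step recited in claims 1, 7, and 8 only states that the index parameters are combined with their detection weights; a weighted sum is one natural reading and is assumed in the sketch below (the application does not fix the exact formula).

    def operating_risk_parameter(index_params, detection_weights):
        """Combine each risk evaluation index with its detection weight.
        A weighted sum is assumed; the exact formula is not fixed by the claims."""
        return sum(index_params[name] * detection_weights[name] for name in index_params)

    def has_security_risk(index_params, detection_weights, preset_threshold):
        """Flag a security risk when the operating risk parameter exceeds the preset threshold."""
        return operating_risk_parameter(index_params, detection_weights) > preset_threshold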
9. The computer device according to claim 8, wherein the risk evaluation indexes comprise a code security risk parameter, model accuracy, model interpretability, model sensitivity, and model input data aggressiveness.
10. The computer device according to claim 9, wherein the running the artificial intelligence system to be tested and obtaining each risk evaluation index corresponding to each preset detection moment during operation of the artificial intelligence system to be tested comprises:
    obtaining the source code corresponding to the artificial intelligence system to be tested, and obtaining the code security risk parameter according to the source code;
    obtaining labeled preset test sample data, inputting the test sample data into the artificial intelligence system to be tested, obtaining output data corresponding to the labeled preset test sample data, and obtaining the model accuracy according to the output data and the labels carried by the corresponding test sample data;
    obtaining erroneous data in the output data according to the output data and the labels carried by the corresponding test sample data, obtaining error explanation information corresponding to the erroneous data, and obtaining the model interpretability according to the error explanation information;
    obtaining the model sensitivity according to the erroneous data of the artificial intelligence system to be tested and the application field of the artificial intelligence system to be tested; and
    obtaining a dependent data set corresponding to the artificial intelligence system to be tested, and obtaining the model input data aggressiveness according to the dependent data set.
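For the model accuracy and erroneous-data steps of claim 10, the following sketch assumes the labeled test samples, the system outputs, and the labels are aligned lists; the function and variable names are illustrative only.

    def model_accuracy(outputs, labels):
        """Fraction of outputs that agree with the labels carried by the test samples."""
        if not labels:
            return 0.0
        correct = sum(1 for out, lab in zip(outputs, labels) if out == lab)
        return correct / len(labels)

    def error_data(samples, outputs, labels):
        """Test samples whose output disagrees with the label; these feed the
        interpretability and sensitivity evaluations in the subsequent steps."""
        return [s for s, out, lab in zip(samples, outputs, labels) if out != lab]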
11. The computer device according to claim 10, wherein the identifying the source code and application field of the artificial intelligence system to be tested, and obtaining, according to the source code and application field of the artificial intelligence system to be tested, the detection weight parameter corresponding to each risk evaluation index of the artificial intelligence system to be tested comprises:
    identifying the source code and application field of the artificial intelligence system to be tested;
    determining a risk parameter category corresponding to the source code, searching a preset risk parameter category weight parameter table according to the risk parameter category to obtain a weight parameter corresponding to the risk parameter category, and using the weight parameter corresponding to the risk parameter category as a first detection weight parameter corresponding to the code security risk parameter;
    searching a preset model accuracy category weight distribution table according to the application field to obtain a first weight parameter corresponding to the application field, and using the first weight parameter corresponding to the application field as a second detection weight parameter corresponding to the model accuracy;
    determining a composition algorithm corresponding to the source code, searching a preset model interpretability weight distribution table according to the composition algorithm to obtain a weight parameter corresponding to the composition algorithm, and using the weight parameter corresponding to the composition algorithm as a third detection weight parameter corresponding to the model interpretability;
    determining a business impact level according to the application field, searching a preset model sensitivity weight distribution table according to the business impact level to obtain a weight parameter corresponding to the business impact level, and using the weight parameter corresponding to the business impact level as a fourth detection weight parameter corresponding to the model sensitivity; and
    searching a preset model input data aggressiveness weight distribution table according to the application field to obtain a second weight parameter corresponding to the application field, and using the second weight parameter corresponding to the application field as a fifth detection weight parameter corresponding to the model input data aggressiveness.
12. The computer device according to claim 11, wherein before the determining a business impact level according to the application field, searching a preset model sensitivity weight distribution table according to the business impact level to obtain a weight parameter corresponding to the business impact level, and using the weight parameter corresponding to the business impact level as the fourth detection weight parameter corresponding to the model sensitivity, the steps further comprise:
    performing a business impact assessment on the artificial intelligence system to be tested according to the application field of the artificial intelligence system to be tested, to obtain the business impact level corresponding to the artificial intelligence system to be tested.
13. The computer device according to claim 8, wherein before the determining that the artificial intelligence system to be tested has a security risk when the operating risk parameter is greater than a preset risk threshold, the steps further comprise:
    obtaining historical operating risk parameters corresponding to artificial intelligence systems that exhibited security risks in historical records; and
    updating the preset risk threshold according to the historical operating risk parameters.
14. The computer device according to claim 8, wherein after the obtaining an operating risk parameter of the artificial intelligence system to be tested according to each risk evaluation index and the detection weight parameter corresponding to each index, the steps further comprise:
    generating a model evaluation report according to the risk evaluation indexes and the operating risk parameter.
15. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the following steps:
    starting an artificial intelligence system to be tested;
    running the artificial intelligence system to be tested, and obtaining index parameters of each risk evaluation index of the artificial intelligence system to be tested during operation;
    identifying the source code and application field of the artificial intelligence system to be tested, and obtaining, according to the source code and application field of the artificial intelligence system to be tested, a detection weight parameter corresponding to each risk evaluation index of the artificial intelligence system to be tested;
    obtaining an operating risk parameter of the artificial intelligence system to be tested according to the index parameter of each risk evaluation index and the detection weight parameter corresponding to each risk evaluation index; and
    determining that the artificial intelligence system to be tested has a security risk when the operating risk parameter is greater than a preset risk threshold.
16. The computer-readable storage medium according to claim 15, wherein the risk evaluation indexes comprise a code security risk parameter, model accuracy, model interpretability, model sensitivity, and model input data aggressiveness, and the running the artificial intelligence system to be tested and obtaining each risk evaluation index corresponding to each preset detection moment during operation of the artificial intelligence system to be tested comprises:
    obtaining the source code corresponding to the artificial intelligence system to be tested, and obtaining the code security risk parameter according to the source code;
    obtaining labeled preset test sample data, inputting the test sample data into the artificial intelligence system to be tested, obtaining output data corresponding to the labeled preset test sample data, and obtaining the model accuracy according to the output data and the labels carried by the corresponding test sample data;
    obtaining erroneous data in the output data according to the output data and the labels carried by the corresponding test sample data, obtaining error explanation information corresponding to the erroneous data, and obtaining the model interpretability according to the error explanation information;
    obtaining the model sensitivity according to the erroneous data of the artificial intelligence system to be tested and the application field of the artificial intelligence system to be tested; and
    obtaining a dependent data set corresponding to the artificial intelligence system to be tested, and obtaining the model input data aggressiveness according to the dependent data set.
17. The computer-readable storage medium according to claim 16, wherein the identifying the source code and application field of the artificial intelligence system to be tested, and obtaining, according to the source code and application field of the artificial intelligence system to be tested, the detection weight parameter corresponding to each risk evaluation index of the artificial intelligence system to be tested comprises:
    identifying the source code and application field of the artificial intelligence system to be tested;
    determining a risk parameter category corresponding to the source code, searching a preset risk parameter category weight parameter table according to the risk parameter category to obtain a weight parameter corresponding to the risk parameter category, and using the weight parameter corresponding to the risk parameter category as a first detection weight parameter corresponding to the code security risk parameter;
    searching a preset model accuracy category weight distribution table according to the application field to obtain a first weight parameter corresponding to the application field, and using the first weight parameter corresponding to the application field as a second detection weight parameter corresponding to the model accuracy;
    determining a composition algorithm corresponding to the source code, searching a preset model interpretability weight distribution table according to the composition algorithm to obtain a weight parameter corresponding to the composition algorithm, and using the weight parameter corresponding to the composition algorithm as a third detection weight parameter corresponding to the model interpretability;
    determining a business impact level according to the application field, searching a preset model sensitivity weight distribution table according to the business impact level to obtain a weight parameter corresponding to the business impact level, and using the weight parameter corresponding to the business impact level as a fourth detection weight parameter corresponding to the model sensitivity; and
    searching a preset model input data aggressiveness weight distribution table according to the application field to obtain a second weight parameter corresponding to the application field, and using the second weight parameter corresponding to the application field as a fifth detection weight parameter corresponding to the model input data aggressiveness.
18. The computer-readable storage medium according to claim 17, wherein before the determining a business impact level according to the application field, searching a preset model sensitivity weight distribution table according to the business impact level to obtain a weight parameter corresponding to the business impact level, and using the weight parameter corresponding to the business impact level as the fourth detection weight parameter corresponding to the model sensitivity, the steps further comprise:
    performing a business impact assessment on the artificial intelligence system to be tested according to the application field of the artificial intelligence system to be tested, to obtain the business impact level corresponding to the artificial intelligence system to be tested.
19. The computer-readable storage medium according to claim 15, wherein before the determining that the artificial intelligence system to be tested has a security risk when the operating risk parameter is greater than a preset risk threshold, the steps further comprise:
    obtaining historical operating risk parameters corresponding to artificial intelligence systems that exhibited security risks in historical records; and
    updating the preset risk threshold according to the historical operating risk parameters.
20. The computer-readable storage medium according to claim 15, wherein after the obtaining an operating risk parameter of the artificial intelligence system to be tested according to each risk evaluation index and the detection weight parameter corresponding to each index, the steps further comprise:
    generating a model evaluation report according to the risk evaluation indexes and the operating risk parameter.
PCT/CN2020/093555 2020-01-07 2020-05-30 Artificial intelligence system risk detection method and apparatus, and computer device and medium WO2021139078A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010014256.3 2020-01-07
CN202010014256.3A CN111240975B (en) 2020-01-07 2020-01-07 Artificial intelligence system risk detection method, device, computer equipment and medium

Publications (1)

Publication Number Publication Date
WO2021139078A1 true WO2021139078A1 (en) 2021-07-15

Family

ID=70874289

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/093555 WO2021139078A1 (en) 2020-01-07 2020-05-30 Artificial intelligence system risk detection method and apparatus, and computer device and medium

Country Status (2)

Country Link
CN (1) CN111240975B (en)
WO (1) WO2021139078A1 (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111823227B (en) * 2020-06-08 2024-02-02 华南师范大学 Artificial intelligent ethical risk detection and prevention method, deep learning system and robot
CN114091644A (en) * 2020-08-24 2022-02-25 中国科学院软件研究所 Technical risk assessment method and system for artificial intelligence product
WO2022061675A1 (en) * 2020-09-24 2022-03-31 华为技术有限公司 Data analysis method and apparatus
CN112527674B (en) * 2020-12-22 2022-11-04 苏州三六零智能安全科技有限公司 AI frame safety evaluation method, device, equipment and storage medium
CN112905494B (en) * 2021-05-07 2022-04-01 北京银联金卡科技有限公司 Artificial intelligence evaluation method and system fusing multidimensional information
CN113887942A (en) * 2021-09-30 2022-01-04 绿盟科技集团股份有限公司 Data processing method and device, electronic equipment and storage medium
CN118350627A (en) * 2024-04-16 2024-07-16 义乌中国小商品城大数据有限公司 Big data risk management and analysis system based on big model

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006079553A (en) * 2004-09-13 2006-03-23 Ricoh Co Ltd Project management device and system
CN108449313A (en) * 2018-02-01 2018-08-24 平安科技(深圳)有限公司 Electronic device, Internet service system method for prewarning risk and storage medium
CN109684848A (en) * 2018-09-07 2019-04-26 平安科技(深圳)有限公司 Methods of risk assessment, device, equipment and readable storage medium storing program for executing
CN110647412A (en) * 2019-09-17 2020-01-03 华东师范大学 Software credibility evaluation system of spacecraft control system

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108959934B (en) * 2018-06-11 2023-08-22 平安科技(深圳)有限公司 Security risk assessment method, security risk assessment device, computer equipment and storage medium
CN109345374B (en) * 2018-09-17 2023-04-18 平安科技(深圳)有限公司 Risk control method and device, computer equipment and storage medium
CN109447048B (en) * 2018-12-25 2020-12-25 苏州闪驰数控系统集成有限公司 Artificial intelligence early warning system
CN110009225B (en) * 2019-04-03 2023-10-31 平安科技(深圳)有限公司 Risk assessment system construction method, risk assessment system construction device, computer equipment and storage medium
CN110135691A (en) * 2019-04-12 2019-08-16 深圳壹账通智能科技有限公司 Finance product methods of risk assessment, device, computer equipment and storage medium

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114971164A (en) * 2022-04-13 2022-08-30 江苏禹润水务研究院有限公司 Sludge treatment equipment abnormity detection method and system based on artificial intelligence
CN114971164B (en) * 2022-04-13 2023-09-22 江苏禹润水务研究院有限公司 Artificial intelligence-based method and system for detecting abnormality of sludge treatment equipment
CN118094282A (en) * 2023-11-21 2024-05-28 深圳市威尔泰电子科技有限公司 Method and system for detecting retention force of contact of electric connector
CN118014392A (en) * 2024-02-01 2024-05-10 中国铁塔股份有限公司 Algorithm evaluation method, device, electronic equipment and storage medium
CN118396601A (en) * 2024-06-26 2024-07-26 杭州海康威视系统技术有限公司 Intelligent area operation and maintenance system optimization method and device and computer equipment

Also Published As

Publication number Publication date
CN111240975B (en) 2024-06-28
CN111240975A (en) 2020-06-05

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20912522

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20912522

Country of ref document: EP

Kind code of ref document: A1