
WO2023123943A1 - Interface automation testing method and apparatus, and medium, device and program - Google Patents


Info

Publication number: WO2023123943A1
Authority: WIPO (PCT)
Prior art keywords: use case; result; test; interface; preset
Application number: PCT/CN2022/102161
Priority date: 2021-12-27 (CN 202111609529.X)
Other languages: French (fr); Chinese (zh)
Inventors: 杨璟斐; 曾凌子; 江旻; 杨杨
Original assignee: 深圳前海微众银行股份有限公司
Application filed by 深圳前海微众银行股份有限公司
Publication of WO2023123943A1


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00: Error detection; Error correction; Monitoring
    • G06F 11/36: Preventing errors by testing or debugging software
    • G06F 11/3668: Software testing
    • G06F 11/3672: Test management
    • G06F 11/3684: Test management for test design, e.g. generating new test cases
    • G06F 11/3688: Test management for test execution, e.g. scheduling of test suites

Definitions

  • The present application relates to the technical field of interface testing, and in particular to an interface automation testing method, apparatus, medium, device, and program.
  • In the prior art, the checkpoints generated from static code analysis of interfaces are incomplete and cannot automatically cover the interface testing requirements of the full link.
  • The embodiments of the present application provide an interface automation testing method, apparatus, medium, device, and program, to solve the technical problem that the prior art cannot automatically cover the interface testing requirements of the full link.
  • In a first aspect, an embodiment of the present application provides an interface automation testing method, including:
  • acquiring a test case set, where the test case set includes multiple types of test case subsets, each type of test case subset includes multiple test scenario use cases, and each test scenario use case is configured with a corresponding use case business serial number;
  • executing each test scenario use case of the test case set on the system link, and acquiring use case running results according to a preset safe-running mechanism, where the use case running results include database operation results and the return message result of each interface.
  • In one possible design, determining the test baseline corresponding to each interface of the system link according to the running results of each test scenario use case in the production version code environment and the use case scenario rules includes:
  • directly removing the first type of noise results from the use case running results according to preset denoising rules, where the first type of noise results are running results unrelated to the test logic of the system link;
  • performing horizontal analysis on the use case running results to determine the running result attribute corresponding to each interface, where the horizontal analysis includes cluster analysis of the running results generated by running different test scenario use cases in the same test case subset;
  • determining the test baseline corresponding to each interface of the system link according to the use case running results and the corresponding running result attributes.
  • In one possible design, the horizontal analysis is performed on the use case running results to determine the running result attribute corresponding to each interface, and the corresponding running result attribute is determined according to the result equivalence rate of each interface and a preset equivalence rate condition, which includes:
  • if the result equivalence rate is 100%, determining that the corresponding running result attribute is a fixed-value class, where the test baseline corresponding to the fixed-value class is a fixed value;
  • if the result equivalence rate is greater than or equal to the preset equivalence rate, determining that the corresponding running result attribute is an enumeration class, where the test baseline corresponding to the enumeration class is an enumeration list;
  • if the result equivalence rate is less than the preset equivalence rate, determining that the corresponding use case running result is noise.
  • In one possible design, the test results corresponding to the test scenario use cases whose basic assertion conforms to the preset assertion result continue to be compared with the test baseline corresponding to each interface, to generate the assertion.
  • In one possible design, the basic assertion is generated according to whether the return message result in the use case running result corresponding to each test scenario use case belongs to the preset result enumeration set, which includes:
  • if the test scenario use case is a pass business use case: if the return message result in the corresponding use case running result does not belong to the preset result enumeration set, the test scenario use case is a failed use case; if it belongs to the set, the test scenario use case is a successful use case; here the preset result enumeration set is the business success set;
  • if the test scenario use case is a failure business use case: if the return message result in the corresponding use case running result does not belong to the preset result enumeration set, the test scenario use case is a successful use case; if it belongs to the set, the test scenario use case is a failed use case;
  • if the test scenario use case is a use case that does not conform to the protocol specification: if the return message result in the corresponding use case running result does not belong to the preset result enumeration set, the test scenario use case is a failed use case; if it belongs to the set, the test scenario use case is a successful use case; here the preset result enumeration set is the exception set of messages that do not conform to the protocol specification;
  • if the test scenario use case is a use case that conforms to the protocol specification: if the return message result in the corresponding use case running result does not belong to the preset result enumeration set, the test scenario use case is a successful use case; if it belongs to the set, the test scenario use case is a failed use case.
  • In one possible design, the interface part where the test baseline indicated by the assertion differs from the test result is determined to be a system change point or a system vulnerability.
  • In one possible design, if the use case running result includes a structured large field, the structured large field is split for comparison against the preset denoising rules, so as to remove the first type of noise results from each split field.
  • In a second aspect, an embodiment of the present application further provides an interface automation testing device, including:
  • an obtaining module, configured to obtain a test case set, where the test case set includes multiple types of test case subsets, each type of test case subset includes multiple test scenario use cases, and each test scenario use case is configured with a corresponding use case business serial number;
  • a processing module, configured to execute each test scenario use case of the test case set on the system link and acquire use case running results according to the preset safe-running mechanism, where the use case running results include database operation results and the return message result of each interface;
  • the processing module is further configured to determine the test baseline corresponding to each interface of the system link according to the running results of each test scenario use case in the production version code environment and the use case scenario rules;
  • the obtaining module is further configured to execute the program of the version to be tested on the system link and acquire the test result of each interface;
  • the processing module is further configured to generate an assertion according to the test baseline and the test result corresponding to each interface.
  • Optionally, the processing module is specifically configured to:
  • perform horizontal analysis on the use case running results, where the horizontal analysis includes cluster analysis of the running results generated by running different test scenario use cases in the same test case subset; and
  • determine the test baseline corresponding to each interface of the system link according to the use case running results and the corresponding running result attributes.
  • Optionally, the processing module is specifically configured to:
  • if the result equivalence rate is 100%, determine that the corresponding running result attribute is a fixed-value class, where the test baseline corresponding to the fixed-value class is a fixed value; and
  • if the result equivalence rate is greater than or equal to the preset equivalence rate, determine that the corresponding running result attribute is an enumeration class, where the test baseline corresponding to the enumeration class is an enumeration list.
  • Optionally, the processing module is further configured to:
  • compare the test results corresponding to the test scenario use cases whose basic assertion conforms to the preset assertion result with the test baseline corresponding to each interface, to generate the assertion.
  • if the test scenario use case is a pass business use case: if the return message result in the corresponding use case running result does not belong to the preset result enumeration set, the test scenario use case is a failed use case; if it belongs to the set, the test scenario use case is a successful use case; here the preset result enumeration set is the business success set;
  • if the test scenario use case is a failure business use case: if the return message result in the corresponding use case running result does not belong to the preset result enumeration set, the test scenario use case is a successful use case; if it belongs to the set, the test scenario use case is a failed use case;
  • if the test scenario use case is a use case that does not conform to the protocol specification: if the return message result in the corresponding use case running result does not belong to the preset result enumeration set, the test scenario use case is a failed use case; if it belongs to the set, the test scenario use case is a successful use case; here the preset result enumeration set is the exception set of messages that do not conform to the protocol specification;
  • if the test scenario use case is a use case that conforms to the protocol specification: if the return message result in the corresponding use case running result does not belong to the preset result enumeration set, the test scenario use case is a successful use case; if it belongs to the set, the test scenario use case is a failed use case.
  • The processing module is further configured to determine that the interface part where the test baseline indicated by the assertion differs from the test result is a system change point or a system vulnerability.
  • Optionally, if the use case running result includes a structured large field, the structured large field is split for comparison against the preset denoising rules, so as to remove the first type of noise results from each split field.
  • In a third aspect, an embodiment of the present application further provides an electronic device, including:
  • a processor; and
  • a memory for storing executable instructions of the processor;
  • where the processor is configured to execute any interface automation testing method of the first aspect by executing the executable instructions.
  • In a fourth aspect, an embodiment of the present application further provides a storage medium on which a computer program is stored; when the program is executed by a processor, any one of the interface automation testing methods of the first aspect is implemented.
  • In a fifth aspect, an embodiment of the present application further provides a computer program product, including a computer program; when the computer program is executed by a processor, any interface automation testing method of the first aspect is implemented.
  • The interface automation testing method, apparatus, medium, device, and program provided by the embodiments of the present application execute each test scenario use case of the test case set on the system link and acquire the use case running results according to the preset safe-running mechanism; then determine the test baseline corresponding to each interface of the system link according to the use case running results and the use case scenario rules; then execute the program of the version to be tested on the system link and acquire the test result of each interface; and finally generate assertions according to the test baseline and test result corresponding to each interface. Full-link assertions of the system link are thus generated automatically and accurately without manual maintenance, meeting the requirement of automatically covering the interface tests of the entire link.
  • Fig. 1 is a schematic diagram of the test flow of an interface automation testing method according to an example embodiment of the present application;
  • Fig. 2 is a schematic flowchart of an interface automation testing method according to an example embodiment of the present application;
  • Fig. 3 is a schematic flowchart of an assertion self-learning module according to an example embodiment of the present application;
  • Fig. 4 is another schematic flowchart of an interface automation testing method according to an example embodiment of the present application;
  • Fig. 5 is a schematic structural diagram of an interface automation testing device according to an example embodiment of the present application;
  • Fig. 6 is a schematic structural diagram of an electronic device according to an example embodiment of the present application.
  • The interface automation testing method, apparatus, medium, device, and program provided in the embodiments of the present application do not require manual entry of assertions.
  • In contrast, the solutions in the prior art enter assertions manually: the labor cost is high, the completeness of the assertions cannot be guaranteed, and checkpoints are often missed, so the full link cannot be covered.
  • In the embodiments of the present application, each test scenario use case of the test case set is executed on the system link, and the use case running results are acquired according to the preset safe-running mechanism; the test baseline corresponding to each interface of the system link is then determined according to the use case running results and the use case scenario rules; the program of the version to be tested is then executed on the system link and the test result of each interface is acquired; finally, an assertion is generated according to the test baseline and test result corresponding to each interface. The full-link assertions of the system link are thus generated automatically and accurately without manual maintenance, meeting the requirement of automatically covering the interface tests of the entire link.
  • Assertion: expressed as a Boolean expression, used to indicate that the value of the expression can be assumed to be true at a given point in the program.
  • Interface testing: a type of testing for the interfaces between system components. Interface testing is mainly used to check the interaction points between external systems and between internal subsystems; the focus is on checking data exchange, transfer and control management processes, and the logical dependencies between systems.
  • Interface automation testing: based on interfaces and protocols, systems and applications are run under preset conditions to evaluate the test results, where the preset conditions should include both normal and abnormal conditions.
  • Sandbox: a technology for running programs in isolation; here it refers to an open-source platform based on the Java Virtual Machine (JVM). The solution is essentially a form of aspect-oriented programming (AOP) implementation.
  • JVM: Java Virtual Machine.
  • Fig. 1 is a schematic diagram of the test flow of an interface automation testing method according to an example embodiment of the present application.
  • As shown in Fig. 1, test scenario use cases can be generated in batches through the interface automation platform; that is, the test cases for each scenario can be automatically generated in batches, and each test scenario use case is configured with a corresponding use case business serial number. It is worth noting that precisely because the embodiment of this application is based on an interface automation platform that automatically generates a large number of test scenario use cases, the resulting assertions are difficult to maintain manually; hence the need to generate and maintain assertions through the interface automation testing method provided by the embodiments of the present application.
  • Assertion self-learning is adopted to automatically generate assertions for interface return messages and the various database operation results.
  • The main approach is to generate assertion expected values for the interface return messages and the various database operation results through assertion self-learning, take the expected values as the baseline, and then generate assertions by comparing and verifying the use case execution results through Diff.
  • Fig. 2 is a schematic flowchart of an interface automation testing method according to an example embodiment of the present application. As shown in Fig. 2, the interface automation testing method provided in this embodiment includes:
  • Step 101: acquire a test case set.
  • The test case set includes multiple types of test case subsets; each type of test case subset includes multiple test scenario use cases; and each test scenario use case is configured with a corresponding use case business serial number.
  • Step 102: execute each test scenario use case of the test case set on the system link, and acquire the use case running results according to the preset safe-running mechanism.
  • Specifically, each test scenario use case in the test case set can be executed on the system link, and the use case running results can be acquired according to the preset safe-running mechanism (for example, a sandbox), where the use case running results include the database operation results and the return message result of each interface.
  • Fig. 3 is a schematic flowchart of the assertion self-learning module according to an example embodiment of the present application.
  • As shown in Fig. 3, the database operation results of the full link and the message results returned by each interface in the full link can be obtained automatically through the assertion self-learning module and used as the basic data for assertion result analysis.
  • Specifically, the test plan can be launched periodically to execute the test scenario use cases, and the business serial number bizNo of each test scenario use case can be passed as an input parameter to the assertion self-learning module, so as to automatically analyze the full-link interface return message results and database operation results of the test cases.
  • The assertion self-learning module takes the use case business serial number as an input parameter, and obtains the link system list, the SQL list of each interface operation, and the return message of each interface through the sandbox.
  • The system list is combined with the database to obtain the set of databases, tables, and fields of the link systems.
  • The SQL list obtained by the sandbox is associated with the database-table-field set to remove information such as aliases in the SQL, so as to obtain the database, table, and field information operated on by each interface under the link of the use case scenario.
  • The database operation results of the link and the message results returned by each interface are stored as the use case running results.
  • The information contained in the test case running result backup may include: the test case CASE_ID, the business serial number BIZ_NO, and the return message RSP_MSG, as illustrated in the sketch below.
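  • As an illustration only, the following is a minimal sketch of such a running-result backup record. Only CASE_ID, BIZ_NO, and RSP_MSG are named above; the dbResults map and its key format are hypothetical additions for the example, not prescribed by this application:

```java
import java.util.List;
import java.util.Map;

/**
 * Minimal sketch of a use case running-result backup record.
 * caseId, bizNo and rspMsg follow the fields named above; dbResults is a
 * hypothetical holder for the database operation results on the link.
 */
public class CaseRunResult {
    private final String caseId;   // test case CASE_ID
    private final String bizNo;    // use case business serial number BIZ_NO
    private final String rspMsg;   // return message RSP_MSG
    // database operation results, keyed e.g. by "db.table.field" (assumption)
    private final Map<String, List<String>> dbResults;

    public CaseRunResult(String caseId, String bizNo, String rspMsg,
                         Map<String, List<String>> dbResults) {
        this.caseId = caseId;
        this.bizNo = bizNo;
        this.rspMsg = rspMsg;
        this.dbResults = dbResults;
    }

    public String getCaseId() { return caseId; }
    public String getBizNo() { return bizNo; }
    public String getRspMsg() { return rspMsg; }
    public Map<String, List<String>> getDbResults() { return dbResults; }
}
```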
  • Step 103: determine the test baseline corresponding to each interface of the system link according to the use case running results and the use case scenario rules.
  • Step 104: execute the program of the version to be tested on the system link, and acquire the test result of each interface.
  • Step 105: generate assertions according to the test baseline and the test result corresponding to each interface.
  • Specifically, the test baseline corresponding to each interface of the system link can first be determined according to the use case running results and the use case scenario rules.
  • To do so, the interface return messages and the various database operation results can be processed: first, automatic denoising can be performed to remove fields that do not involve logic; then, through horizontal analysis and vertical analysis, the running result attribute corresponding to each interface is determined and the assertion expected value of each interface is derived, so that the assertion expected value is taken as the test baseline corresponding to each interface of the system link and entered into the baseline. Then, the program of the version to be tested is executed on the system link, the test result of each interface is acquired, and an assertion is generated according to the test baseline and test result corresponding to each interface.
  • In this embodiment, each test scenario use case of the test case set is executed on the system link, and the use case running results are acquired according to the preset safe-running mechanism; the test baseline corresponding to each interface of the system link is then determined according to the use case running results and the use case scenario rules; the program of the version to be tested is then executed on the system link and the test result of each interface is acquired; finally, an assertion is generated according to the test baseline and test result corresponding to each interface. The full-link assertions of the system link are thus generated automatically and accurately without manual maintenance, meeting the requirement of automatically covering the interface tests of the entire link.
  • Fig. 4 is another schematic flowchart of an interface automation testing method according to an example embodiment of the present application.
  • As shown in Fig. 4, the interface automation testing method provided in this embodiment includes:
  • Step 201: acquire a test case set.
  • The test case set includes multiple types of test case subsets; each type of test case subset includes multiple test scenario use cases; and each test scenario use case is configured with a corresponding use case business serial number.
  • Step 202: execute each test scenario use case of the test case set on the system link, and acquire the use case running results according to the preset safe-running mechanism.
  • Specifically, each test scenario use case in the test case set can be executed on the system link, and the use case running results can be acquired according to the preset safe-running mechanism (for example, a sandbox), where the use case running results include the database operation results and the return message result of each interface.
  • The database operation results of the full link and the message results returned by each interface in the full link can be obtained automatically through the assertion self-learning module and used as the basic data for assertion result analysis.
  • Specifically, the test plan can be launched periodically to execute the test scenario use cases, and the business serial number bizNo of each test scenario use case can be passed as an input parameter to the assertion self-learning module, so as to automatically analyze the full-link interface return message results and database operation results of the test cases.
  • The assertion self-learning module takes the use case business serial number as an input parameter, and obtains the link system list, the SQL list of each interface operation, and the return message of each interface through the sandbox.
  • The system list is combined with the database to obtain the set of databases, tables, and fields of the link systems.
  • The SQL list obtained by the sandbox is associated with the database-table-field set to remove information such as aliases in the SQL, so as to obtain the database, table, and field information operated on by each interface under the link of the use case scenario.
  • The database operation results of the link and the message results returned by each interface are stored as the use case running results.
  • The information contained in the test case running result backup may include: the test case CASE_ID, the business serial number BIZ_NO, and the return message RSP_MSG.
  • Step 203: directly remove the first type of noise results from the use case running results according to the preset denoising rules.
  • Specifically, the first type of noise results may be removed from the use case running results directly according to the preset denoising rules, where the first type of noise results are running results unrelated to the test logic of the system link.
  • For example: if a value is longer than 20 characters and consists of letters or digits, it can be identified as a serial number; if its first two digits are equal to DCN_NO, it can be identified as an IOU number; and if its first 6 digits are equal to the product number, it can be identified as a logical card number. A minimal sketch of such rules follows.
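  • The sketch below implements the three example rules above. The concrete values of DCN_NO and PRODUCT_NO, and the class and method names, are hypothetical placeholders; real values would come from environment configuration:

```java
import java.util.regex.Pattern;

/** Sketch of first-type noise detection based on the example rules above. */
public class NoiseFilter {
    // Hypothetical environment constants; real values come from configuration.
    private static final String DCN_NO = "10";
    private static final String PRODUCT_NO = "600001";
    private static final Pattern ALNUM = Pattern.compile("[A-Za-z0-9]+");

    /** Returns true if the field value matches a first-type noise rule. */
    public static boolean isNoise(String value) {
        if (value == null) {
            return false;
        }
        // Longer than 20 characters and made of letters/digits: a serial number.
        if (value.length() > 20 && ALNUM.matcher(value).matches()) {
            return true;
        }
        // First two digits equal to DCN_NO: an IOU number.
        if (value.startsWith(DCN_NO)) {
            return true;
        }
        // First 6 digits equal to the product number: a logical card number.
        return value.startsWith(PRODUCT_NO);
    }
}
```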
  • Step 204: perform horizontal analysis on the use case running results to determine the running result attribute corresponding to each interface.
  • Specifically, all use cases under an interface can be launched in batches n times according to the execution plan, where n can be configured individually for each interface and defaults to 2 if not configured.
  • Here, n is the number of use case launches for each interface, and N is the minimum threshold for the number of use case launches across all interfaces of the full link.
  • The number of use case runs can be dynamically adjusted to ensure a sufficient amount of sample data and the correctness of the training results.
  • The use case running results can then be analyzed horizontally to determine the running result attribute corresponding to each interface.
  • The horizontal analysis includes cluster analysis of the running results generated by running different test scenario use cases in the same test case subset.
  • If the result equivalence rate is 100%, the corresponding running result attribute is determined to be a fixed-value class, and the test baseline corresponding to the fixed-value class is a fixed value; if the result equivalence rate is greater than or equal to the preset equivalence rate, the corresponding running result attribute is determined to be an enumeration class, and the test baseline corresponding to the enumeration class is an enumeration list; if the result equivalence rate is less than the preset equivalence rate, the corresponding use case running result is determined to be noise, which can be removed.
  • For example, horizontal cluster analysis can be performed on all the obtained use case running results: if the equivalence rate of a return message field or DB field is 100%, it is a fixed-value class and is backed up directly to the baseline; if the equivalence rate is greater than 20%, it is an enumeration class and an enumeration list is maintained; if the equivalence rate is less than 20%, it is an irregular type, which is directly identified as noise and can be removed (see the sketch below).
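  • The following is a minimal sketch of this classification, assuming the equivalence rate is computed as the share of the most frequent value (an interpretation, since the application does not fix the formula) and using the 20% preset equivalence rate from the example above; the enum and method names are illustrative:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Sketch of horizontal analysis: classify one field's values across use cases. */
public class HorizontalAnalysis {
    enum ResultAttr { FIXED_VALUE, ENUMERATION, NOISE }

    /**
     * Classifies a field by its result equivalence rate, taken here as the
     * frequency of the most common value divided by the number of values.
     */
    static ResultAttr classify(List<String> values, double presetRate) {
        if (values.isEmpty()) {
            return ResultAttr.NOISE;
        }
        Map<String, Integer> freq = new HashMap<>();
        for (String v : values) {
            freq.merge(v, 1, Integer::sum);
        }
        double maxFreq = Collections.max(freq.values()).doubleValue();
        double equivalenceRate = maxFreq / values.size();
        if (equivalenceRate == 1.0) {
            return ResultAttr.FIXED_VALUE;   // 100%: fixed-value class
        } else if (equivalenceRate >= presetRate) {
            return ResultAttr.ENUMERATION;   // >= preset rate: enumeration class
        }
        return ResultAttr.NOISE;             // < preset rate: noise, removable
    }

    public static void main(String[] args) {
        List<String> codes = List.of("00", "00", "01", "00");
        // 20% preset equivalence rate as in the example above.
        System.out.println(classify(codes, 0.20)); // ENUMERATION
    }
}
```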
  • Step 205: perform vertical analysis on the use case running results belonging to the enumeration class.
  • The vertical analysis includes cluster analysis of the running results generated by running the same test scenario use case multiple times.
  • If the running results of all runs belonging to the enumeration class are the same, the use case running results are backed up to the enumeration list. If the running results differ, it is judged whether each use case running result was obtained while the preset backup condition was satisfied, where the preset backup condition is used to determine that the use case run is in an end state. If the judgment result is yes, each use case running result is backed up to the enumeration list; if the judgment result is no, only the use case running results obtained when the preset backup condition is satisfied are backed up to the enumeration list.
  • Specifically, the same test scenario use case can be run multiple times and the use case running result of each run obtained. If the running results are consistent every time, the fixed use case running result can be backed up directly to the baseline. If the running results are inconsistent, the run may be in an intermediate state or may have randomly hit an enumeration value; at this point, the user can be notified to intervene and perform batch denoising. If the run is in an intermediate state, the user can set the preset backup condition according to the enumeration results, and the system performs the backup only after judging that the preset condition is met, thereby eliminating the intermediate state.
  • The enumeration list is backed up as the expected value of the assertion, i.e. the baseline, and a subsequent test case running result is considered successful as long as it hits one of the values in the enumeration list (a sketch of this backup decision follows).
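  • As an illustration, a minimal sketch of the vertical backup decision is shown below; the Predicate stands in for the user-set preset backup condition, whose concrete form the application leaves open, and the example end-state check is a hypothetical one:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.function.Predicate;

/** Sketch of vertical analysis: back up enumeration values from repeated runs. */
public class VerticalAnalysis {
    /**
     * @param runResults results of running the same test scenario use case n times
     * @param isEndState preset backup condition: true when a result is an end state
     * @return the enumeration list to back up as the assertion expected value
     */
    static List<String> backupEnumeration(List<String> runResults,
                                          Predicate<String> isEndState) {
        Set<String> distinct = new HashSet<>(runResults);
        if (distinct.size() == 1) {
            // All runs agree: back up the single running result directly.
            return new ArrayList<>(distinct);
        }
        // Results differ: keep only results satisfying the preset backup
        // condition (end state), eliminating intermediate states.
        List<String> enumList = new ArrayList<>();
        for (String r : distinct) {
            if (isEndState.test(r)) {
                enumList.add(r);
            }
        }
        return enumList;
    }

    public static void main(String[] args) {
        List<String> runs = List.of("SUCCESS", "PROCESSING", "SUCCESS", "FAILED");
        // Hypothetical end-state condition: any status other than PROCESSING.
        System.out.println(backupEnumeration(runs, r -> !r.equals("PROCESSING")));
    }
}
```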
  • Step 206: generate a basic assertion according to whether the return message result in the use case running results corresponding to each test scenario use case belongs to the preset result enumeration set.
  • Specifically, the basic assertion may be generated according to whether the return message result in the use case running result corresponding to each test scenario use case belongs to the preset result enumeration set, and the test scenario use cases whose basic assertion conforms to the preset assertion result are determined.
  • The corresponding test results then continue to be compared with the test baselines corresponding to each interface to generate the assertions.
  • First, the message return codes of all test scenario use cases are clustered.
  • The return code enumeration is obtained through denoising analysis and horizontal analysis, combined with self-learning of the use case alias, and falls into four categories: business success, business failure, message-does-not-conform-to-protocol-specification exception, and system failure return codes.
  • The use cases can be classified according to identification keywords such as "extra long", "outside the enumeration range", and "does not conform to the data type", where the return code of each type of use case is theoretically unique.
  • The return codes within each group are theoretically equal. In practice, the differences of the return codes under each set can be analyzed; a return code enumeration value with a consistency greater than 90% means the corresponding type of use case can reasonably take it as the return enumeration set, and subsequent use case assertions are judged automatically accordingly: use cases whose return code equals this code are successful use cases, and use cases whose return codes differ are failed use cases (see the sketch below).
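  • A minimal sketch of this consistency check follows, using the 90% threshold mentioned above; the data shapes and names are illustrative, not a prescribed implementation:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Optional;

/** Sketch: derive the expected return code of a use case category. */
public class CodeEnumLearning {
    /**
     * Returns the dominant return code of a category if its consistency
     * (share of the most frequent code) is greater than 90%, empty otherwise.
     */
    static Optional<String> learnExpectedCode(List<String> returnCodes) {
        Map<String, Long> freq = new HashMap<>();
        for (String c : returnCodes) {
            freq.merge(c, 1L, Long::sum);
        }
        return freq.entrySet().stream()
                .max(Map.Entry.comparingByValue())
                .filter(e -> e.getValue() > 0.9 * returnCodes.size())
                .map(Map.Entry::getKey);
    }

    public static void main(String[] args) {
        // e.g. return codes of use cases classified as "outside the enumeration range"
        List<String> codes = new ArrayList<>(Collections.nCopies(19, "E102"));
        codes.add("E999"); // E102 now has 19/20 = 95% consistency, above 90%
        System.out.println(learnExpectedCode(codes)); // Optional[E102]
    }
}
```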
  • If the test scenario use case is a pass business use case: if the return message result in the corresponding use case running result does not belong to the preset result enumeration set, the test scenario use case is a failed use case; if it belongs to the set, the test scenario use case is a successful use case. Here the preset result enumeration set is the business success set.
  • If the test scenario use case is a failed business use case: if the return message result in the corresponding use case running result does not belong to the preset result enumeration set, the test scenario use case is a successful use case; if it belongs to the set, the test scenario use case is a failed use case. Here the preset result enumeration set is the business failure set.
  • If the test scenario use case is a use case that does not conform to the protocol specification: if the return message result in the corresponding use case running result does not belong to the preset result enumeration set, the test scenario use case is a failed use case; if it belongs to the set, the test scenario use case is a successful use case. Here the preset result enumeration set is the exception set of messages that do not conform to the protocol specification.
  • If the test scenario use case is a use case that conforms to the protocol specification: if the return message result in the corresponding use case running result does not belong to the preset result enumeration set, the test scenario use case is a successful use case; if it belongs to the set, the test scenario use case is a failed use case.
  • In more detail: if the use case type is a pass business use case and the use case result message code is not in the business success set, the use case fails; if the use case result message code is in the business success set, the result is success when the success set enumeration is unique, and suggested success when the enumeration is not unique.
  • If the use case type is a failure business use case and the use case result message code is not in the business failure set, the use case fails; if the use case result message code is in the business failure set, the result is success when the enumeration is unique, and suggested success when the enumeration is not unique.
  • If the use case type is a use case that does not conform to the protocol specification and the use case result message code is not in the exception set of messages that do not conform to the protocol specification, the result is failure; if the use case result message code is in that exception set, the result is success when the enumeration is unique, and suggested success when the enumeration is not unique.
  • If the use case type is a use case whose fields conform to the protocol specification and the use case result message code is in the exception set of messages that do not conform to the protocol specification, the result is suggested failure; if the use case result message code is not in that exception set, the result is suggested success. The sketch after this list renders these rules in code.
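  • A minimal sketch of this basic-assertion judgment is shown below; the enum names and the uniqueness check are illustrative renderings of the rules above, not a prescribed implementation:

```java
import java.util.Set;

/** Sketch of basic assertion generation from the rules above. */
public class BasicAssertion {
    enum UseCaseType { PASS_BUSINESS, FAIL_BUSINESS, NON_CONFORMING, CONFORMING }
    enum Verdict { SUCCESS, SUGGESTED_SUCCESS, SUGGESTED_FAILURE, FAILURE }

    /**
     * @param type    classified use case type
     * @param code    return message code of the use case result
     * @param enumSet preset result enumeration set for this type (business
     *                success set, business failure set, or the exception set
     *                of messages that do not conform to the protocol spec)
     */
    static Verdict judge(UseCaseType type, String code, Set<String> enumSet) {
        boolean inSet = enumSet.contains(code);
        boolean unique = enumSet.size() == 1;
        switch (type) {
            case PASS_BUSINESS:
            case FAIL_BUSINESS:
            case NON_CONFORMING:
                // Not hitting the expected set means the use case fails;
                // hitting it is success, or suggested success if ambiguous.
                if (!inSet) return Verdict.FAILURE;
                return unique ? Verdict.SUCCESS : Verdict.SUGGESTED_SUCCESS;
            case CONFORMING:
                // For conforming use cases the set is the non-conforming
                // exception set: hitting it suggests failure.
                return inSet ? Verdict.SUGGESTED_FAILURE : Verdict.SUGGESTED_SUCCESS;
            default:
                throw new IllegalArgumentException("unknown use case type");
        }
    }

    public static void main(String[] args) {
        System.out.println(judge(UseCaseType.PASS_BUSINESS, "000000",
                Set.of("000000")));        // SUCCESS (unique enumeration)
        System.out.println(judge(UseCaseType.CONFORMING, "E400",
                Set.of("E400", "E401")));  // SUGGESTED_FAILURE
    }
}
```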
  • In addition to regression testing, the above basic assertion generation method can also be applied to unit testing, smoke testing, or SIT testing.
  • In regression testing it mainly plays an accelerating role, since the subsequent Diff judgment takes a long time.
  • Basic assertions can be used to quickly determine whether an accurate Diff is necessary, thereby reducing the resources consumed on invalid Diffs.
  • Step 207: execute the program of the version to be tested on the system link, and acquire the test result of each interface.
  • Step 208: generate assertions according to the test baseline and the test result corresponding to each interface.
  • Specifically, the test baseline corresponding to each interface of the system link can first be determined according to the use case running results and the use case scenario rules.
  • To do so, the interface return messages and the various database operation results can be processed: first, automatic denoising can be performed to remove fields that do not involve logic; then, through horizontal analysis and vertical analysis, the running result attribute corresponding to each interface is determined and the assertion expected value of each interface is derived, so that the assertion expected value is taken as the test baseline corresponding to each interface of the system link and entered into the baseline. Then, the program of the version to be tested is executed on the system link, the test result of each interface is acquired, and an assertion is generated according to the test baseline and test result corresponding to each interface.
  • Further, the interface part where the test baseline indicated by the assertion differs from the test result can be determined to be a system change point or a system vulnerability.
  • Specifically, after the new assertion of the full link is obtained, the assertion expectation is automatically compared with the baseline; any difference is a system change point or a system bug.
  • For example, if interface A does not have the "b" field, the comparison is inconsistent and the user is prompted to locate the difference. If the user judges that this is new content of this version, the result is directly updated to the baseline, i.e., the baseline will be used as the assertion result in the future (see the sketch below).
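  • As an illustrative sketch of this comparison, the field-level diff below flags missing or changed fields against the baseline; the flat field-to-value map and the manual-confirmation flag are assumptions for the example:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

/** Sketch of diffing a new full-link result against the assertion baseline. */
public class BaselineDiff {
    /** Returns fields whose values differ from, or are missing in, the result. */
    static List<String> diff(Map<String, String> baseline, Map<String, String> result) {
        List<String> changePoints = new ArrayList<>();
        for (Map.Entry<String, String> e : baseline.entrySet()) {
            String actual = result.get(e.getKey());
            if (actual == null || !actual.equals(e.getValue())) {
                // Inconsistent comparison: a system change point or bug;
                // the user is prompted to locate it.
                changePoints.add(e.getKey());
            }
        }
        return changePoints;
    }

    /** If the user confirms the difference as new version content, update the baseline. */
    static void confirm(Map<String, String> baseline, Map<String, String> result,
                        String field, boolean userConfirmedNewContent) {
        if (userConfirmedNewContent) {
            // The baseline becomes the assertion result used in the future.
            baseline.put(field, result.get(field));
        }
    }
}
```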
  • In addition, before the denoising comparison, if the use case running results include a structured large field, the structured large field is split for comparison against the preset denoising rules, so as to remove the first type of noise results from each split field.
  • Specifically, the large field may be split based on a separator or an escape character, and the resulting String converted into a dictionary; if the conversion succeeds, denoised assertions can be obtained in the same manner as in this step (as sketched below).
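  • For illustration, the sketch below splits a large field on an assumed '|' pair separator and '=' key-value separator (the actual separator and escape characters are not fixed by this application) and converts it into a dictionary before per-field denoising:

```java
import java.util.HashMap;
import java.util.Map;

/** Sketch: split a structured large field into a dictionary for per-field denoising. */
public class LargeFieldSplitter {
    /**
     * Splits e.g. "amt=100|serial=AB12CD34EF56GH78IJ90KL|status=OK" into a
     * field map. The '|' and '=' separators are assumptions for this example.
     */
    static Map<String, String> toDictionary(String largeField) {
        Map<String, String> dict = new HashMap<>();
        for (String pair : largeField.split("\\|")) {
            String[] kv = pair.split("=", 2);
            if (kv.length == 2) {
                dict.put(kv[0], kv[1]);
            }
        }
        return dict;
    }

    public static void main(String[] args) {
        Map<String, String> dict =
                toDictionary("amt=100|serial=AB12CD34EF56GH78IJ90KL|status=OK");
        // Each split field can now be checked individually against the preset
        // denoising rules (e.g. the serial-number rule described in step 203).
        System.out.println(dict);
    }
}
```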
  • In this embodiment, each test scenario use case of the test case set is executed on the system link, and the use case running results are acquired according to the preset safe-running mechanism; the test baseline corresponding to each interface of the system link is then determined according to the use case running results and the use case scenario rules; the program of the version to be tested is then executed on the system link and the test result of each interface is acquired; finally, an assertion is generated according to the test baseline and test result corresponding to each interface. The full-link assertions of the system link are thus generated automatically and accurately without manual maintenance, meeting the requirement of automatically covering the interface tests of the entire link.
  • In this embodiment, the baseline of the full-link interface assertions can be generated automatically, and the subsequent comparison covers both the full-link database operation results and the interface return message results.
  • Noise in the use case running results is automatically analyzed and removed, and preliminary judgments are made through the basic-assertion self-learning method, thereby reducing invalid Diff comparisons.
  • If the use case running results include structured large fields, the structured large fields can also be split and compared, supporting denoised Diff comparison of large fields.
  • The interface automation testing device provided by the embodiment of the present application can be implemented by software, hardware, or a combination of both.
  • Fig. 5 is a schematic structural diagram of an interface automation testing device according to an example embodiment of the present application. As shown in Fig. 5, the interface automation testing device provided in this embodiment includes:
  • an obtaining module 301, configured to obtain a test case set, where the test case set includes multiple types of test case subsets, each type of test case subset includes multiple test scenario use cases, and each test scenario use case is configured with a corresponding use case business serial number;
  • a processing module 302, configured to execute each test scenario use case of the test case set on the system link and acquire the use case running results according to the preset safe-running mechanism, where the use case running results include the database operation results and the return message result of each interface;
  • the processing module 302 is further configured to determine the test baseline corresponding to each interface of the system link according to the use case running results and the use case scenario rules;
  • the obtaining module 301 is further configured to execute the program of the version to be tested on the system link and acquire the test result of each interface;
  • the processing module 302 is further configured to generate an assertion according to the test baseline and the test result corresponding to each interface.
  • Optionally, the processing module 302 is specifically configured to:
  • perform horizontal analysis on the use case running results, where the horizontal analysis includes cluster analysis of the running results generated by running different test scenario use cases in the same test case subset; and
  • determine the test baseline corresponding to each interface of the system link according to the use case running results and the corresponding running result attributes.
  • Optionally, the processing module 302 is specifically configured to:
  • if the result equivalence rate is 100%, determine that the corresponding running result attribute is a fixed-value class, where the test baseline corresponding to the fixed-value class is a fixed value; and
  • if the result equivalence rate is greater than or equal to the preset equivalence rate, determine that the corresponding running result attribute is an enumeration class, where the test baseline corresponding to the enumeration class is an enumeration list.
  • Optionally, the processing module 302 is further configured to:
  • compare the test results corresponding to the test scenario use cases whose basic assertion conforms to the preset assertion result with the test baseline corresponding to each interface, to generate the assertion.
  • if the test scenario use case is a pass business use case: if the return message result in the corresponding use case running result does not belong to the preset result enumeration set, the test scenario use case is a failed use case; if it belongs to the set, the test scenario use case is a successful use case; here the preset result enumeration set is the business success set;
  • if the test scenario use case is a failure business use case: if the return message result in the corresponding use case running result does not belong to the preset result enumeration set, the test scenario use case is a successful use case; if it belongs to the set, the test scenario use case is a failed use case;
  • if the test scenario use case is a use case that does not conform to the protocol specification: if the return message result in the corresponding use case running result does not belong to the preset result enumeration set, the test scenario use case is a failed use case; if it belongs to the set, the test scenario use case is a successful use case; here the preset result enumeration set is the exception set of messages that do not conform to the protocol specification;
  • if the test scenario use case is a use case that conforms to the protocol specification: if the return message result in the corresponding use case running result does not belong to the preset result enumeration set, the test scenario use case is a successful use case; if it belongs to the set, the test scenario use case is a failed use case.
  • The processing module 302 is further configured to determine that the interface part where the test baseline indicated by the assertion differs from the test result is a system change point or a system vulnerability.
  • Optionally, if the use case running result includes a structured large field, the structured large field is split for comparison against the preset denoising rules, so as to remove the first type of noise results from each split field.
  • This embodiment provides an interface automation testing device, which can be used to execute the steps in the foregoing method embodiments.
  • Fig. 6 is a schematic structural diagram of an electronic device according to an example embodiment of the present application.
  • As shown in Fig. 6, the electronic device 400 provided in this embodiment includes:
  • a processor 401; and
  • a memory 402, used to store executable instructions of the processor; the memory may also be a flash memory;
  • where the processor 401 is configured to execute each step of the above methods by executing the executable instructions.
  • Optionally, the memory 402 can be independent or integrated with the processor 401.
  • Optionally, the electronic device 400 may further include:
  • a bus 403, used to connect the processor 401 and the memory 402.
  • This embodiment also provides a readable storage medium in which a computer program is stored; when at least one processor of the electronic device executes the computer program, the electronic device executes each step of the above methods.
  • This embodiment also provides a program product, where the program product includes a computer program stored in a readable storage medium. At least one processor of the electronic device can read the computer program from the readable storage medium, and the at least one processor executes the computer program so that the electronic device implements each step of the above methods.
  • This embodiment also provides a computer program, including program code; when the computer program is run, the program code executes each step of the above methods.
  • The aforementioned program can be stored in a computer-readable storage medium. When executed, the program performs the steps of the above method embodiments; the aforementioned storage medium includes various media that can store program code, such as ROM, RAM, magnetic disks, or optical disks.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Quality & Reliability (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Debugging And Monitoring (AREA)
  • Test And Diagnosis Of Digital Computers (AREA)

Abstract

Provided in the present application are an interface automation testing method and apparatus, and a medium, a device and a program. The interface automation testing method provided in the embodiments of the present application comprises: executing, on a system link, each test scenario use case in a test case set, and acquiring use case running results according to a preset safe program-running mechanism; determining a test baseline corresponding to each interface of the system link according to the running results of each test scenario use case in a production version code environment and use case scenario rules; executing, on the system link, the program of a version to be tested, and acquiring a test result of each interface; and finally, generating an assertion according to the test baseline and test result corresponding to each interface, so that full-link assertions of the system link can be generated automatically and accurately without manual maintenance, thereby meeting the requirement of automatically covering the interface tests corresponding to the full link.

Description

Interface automation testing method, apparatus, medium, device and program

This application claims priority to the Chinese patent application with application number 202111609529.X, entitled "Interface automation testing method, apparatus, medium, device and program", filed with the China Patent Office on December 27, 2021, the entire contents of which are incorporated herein by reference.
Technical Field

The present application relates to the technical field of interface testing, and in particular to an interface automation testing method, apparatus, medium, device, and program.
Background

With the development of computer technology, more and more technologies are applied in the financial field, and the traditional financial industry is gradually transforming toward financial technology (Fintech); interface testing technology is no exception. However, the security and real-time requirements of the financial industry also place higher demands on the technology.

At present, in the interface automation testing of financial technology, a database (DB) table structure template is usually generated based on static code analysis and then associated with the full set of dynamic DB tables for verification. The systems involved in the business process corresponding to an interface are configured manually, and tools are used to back up and compare the dynamic table data of all databases in the involved systems, so as to complete the testing of each system.

However, for DB table structure templates generated by static code analysis of interfaces, the generated checkpoints are incomplete and cannot automatically cover the interface testing requirements of the full link.
Technical Solution

The embodiments of the present application provide an interface automation testing method, apparatus, medium, device, and program, to solve the technical problem that the prior art cannot automatically cover the interface testing requirements of the full link.

In a first aspect, an embodiment of the present application provides an interface automation testing method, including:

acquiring a test case set, where the test case set includes multiple types of test case subsets, each type of test case subset includes multiple test scenario use cases, and each test scenario use case is configured with a corresponding use case business serial number;

executing each test scenario use case of the test case set on the system link, and acquiring use case running results according to a preset safe program-running mechanism, where the use case running results include database operation results and the return message result of each interface;

determining the test baseline corresponding to each interface of the system link according to the running results of each test scenario use case in the production version code environment and the use case scenario rules;

executing the program of the version to be tested on the system link, and acquiring the test result of each interface;

generating an assertion according to the test baseline and the test result corresponding to each interface.
In a possible design, determining the test baseline corresponding to each interface of the system link according to the running results of each test scenario use case in the production version code environment and the use case scenario rules includes:

directly removing the first type of noise results from the use case running results according to preset denoising rules, where the first type of noise results are running results unrelated to the test logic of the system link;

performing horizontal analysis on the use case running results to determine the running result attribute corresponding to each interface, where the horizontal analysis includes cluster analysis of the running results generated by running different test scenario use cases in the same test case subset;

determining the test baseline corresponding to each interface of the system link according to the use case running results and the corresponding running result attributes.
In a possible design, performing horizontal analysis on the use case running results to determine the running result attribute corresponding to each interface includes:

after running a preset first number of test scenario use cases in the same test case subset, acquiring the use case running results of each interface;

determining the corresponding result equivalence rate according to the result distribution of the use case running results of each interface;

determining the corresponding running result attribute according to the result equivalence rate of each interface and a preset equivalence rate condition.
In a possible design, determining the corresponding running result attribute according to the result equivalence rate of each interface and the preset equivalence rate condition includes:

if the result equivalence rate is 100%, determining that the corresponding running result attribute is a fixed-value class, where the test baseline corresponding to the fixed-value class is a fixed value;

if the result equivalence rate is greater than or equal to the preset equivalence rate, determining that the corresponding running result attribute is an enumeration class, where the test baseline corresponding to the enumeration class is an enumeration list;

if the result equivalence rate is less than the preset equivalence rate, determining that the corresponding use case running result is noise.
In a possible design, after performing horizontal analysis on the use case running results to determine the running result attribute corresponding to each interface, the method further includes:

performing vertical analysis on the use case running results belonging to the enumeration class, where the vertical analysis includes cluster analysis of the running results generated by running the same test scenario use case;

if the running results of all use cases belonging to the enumeration class are the same, backing up the use case running results to the enumeration list;

if the running results of the use cases belonging to the enumeration class differ, judging whether each use case running result was obtained while a preset backup condition was satisfied, where the preset backup condition is used to determine that the use case run is in an end state;

if the judgment result is yes, backing up each use case running result to the enumeration list;

if the judgment result is no, backing up the use case running results obtained when the preset backup condition is satisfied to the enumeration list.
In a possible design, before executing the program of the version to be tested on the system link and acquiring the test result of each interface, the method further includes:

generating a basic assertion according to whether the return message result in the use case running result corresponding to each test scenario use case belongs to a preset result enumeration set;

determining the test results corresponding to the test scenario use cases whose basic assertion conforms to the preset assertion result, and continuing to compare them with the test baseline corresponding to each interface, to generate the assertion.
In a possible design, generating the basic assertion according to whether the return message result in the use case running result corresponding to each test scenario use case belongs to the preset result enumeration set includes:

if the test scenario use case is a pass business use case: if the return message result in the corresponding use case running result does not belong to the preset result enumeration set, the test scenario use case is a failed use case; if the return message result in the corresponding use case running result belongs to the preset result enumeration set, the test scenario use case is a successful use case; the preset result enumeration set is the business success set;

if the test scenario use case is a failure business use case: if the return message result in the corresponding use case running result does not belong to the preset result enumeration set, the test scenario use case is a successful use case; if the return message result in the corresponding use case running result belongs to the preset result enumeration set, the test scenario use case is a failed use case;

if the test scenario use case is a use case that does not conform to the protocol specification: if the return message result in the corresponding use case running result does not belong to the preset result enumeration set, the test scenario use case is a failed use case; if the return message result in the corresponding use case running result belongs to the preset result enumeration set, the test scenario use case is a successful use case; the preset result enumeration set is the exception set of messages that do not conform to the protocol specification;

if the test scenario use case is a use case that conforms to the protocol specification: if the return message result in the corresponding use case running result does not belong to the preset result enumeration set, the test scenario use case is a successful use case; if the return message result in the corresponding use case running result belongs to the preset result enumeration set, the test scenario use case is a failed use case.
In a possible design, after the assertion is generated according to the test baseline corresponding to each interface and the test results, the method further includes:
determining that the interface part where the test baseline indicated by the assertion differs from the test result is a system change point or a system defect.
In a possible design, before the first-type noise results are directly removed from the use case running results according to the preset denoising rules, the method further includes:
if a use case running result includes a large structured field, splitting the large structured field for comparison with the preset denoising rules, so as to remove the first-type noise results from each field obtained by the splitting.
In a second aspect, an embodiment of the present application further provides an interface automation testing apparatus, including:
an acquisition module, configured to acquire a test case set, where the test case set includes multiple types of test case subsets, each type of test case subset includes multiple test scenario use cases, and each test scenario use case is configured with a corresponding use case business serial number;
a processing module, configured to execute each test scenario use case in the test case set in the system link, and to obtain the use case running results according to a preset mechanism for safely running programs, where the use case running results include database operation results and the message results returned by each interface;
the processing module is further configured to determine the test baseline corresponding to each interface of the system link according to the running results of each test scenario use case in the production version code environment and the use case scenario rules;
the acquisition module is further configured to execute the version program to be tested in the system link, and to obtain the test results of each interface;
the processing module is further configured to generate an assertion according to the test baseline corresponding to each interface and the test results.
In a possible design, the processing module is specifically configured to:
directly remove the first-type noise results from the use case running results according to preset denoising rules, where the first-type noise results are running results unrelated to the test logic of the system link;
perform horizontal analysis on the use case running results to determine the running result attribute corresponding to each interface, where the horizontal analysis includes cluster analysis of the running results generated by running different test scenario use cases in the same test case subset;
determine the test baseline corresponding to each interface of the system link according to the use case running results and the corresponding result attributes.
In a possible design, the processing module is specifically configured to:
after running a preset first number of test scenario use cases in the same test case subset, obtain the use case running results of each interface;
determine the corresponding result equivalence rate according to the distribution of the use case running results of each interface;
determine the corresponding running result attribute according to the result equivalence rate of each interface and a preset equivalence-rate condition.
In a possible design, the processing module is specifically configured to:
if the result equivalence rate is 100%, determine that the corresponding running result attribute is the fixed value class, where the test baseline corresponding to the fixed value class is a fixed value;
if the result equivalence rate is greater than or equal to the preset equivalence rate, determine that the corresponding running result attribute is the enumeration class, where the test baseline corresponding to the enumeration class is an enumeration list;
if the result equivalence rate is less than the preset equivalence rate, determine that the corresponding use case running result is noise.
In a possible design, the processing module is further configured to:
perform longitudinal analysis on the use case running results belonging to the enumeration class, where the longitudinal analysis includes cluster analysis of the running results generated by running the same test scenario use case;
if the running results of all use cases belonging to the enumeration class are identical, back up the use case running results to the enumeration list;
if the running results of the use cases belonging to the enumeration class differ, judge whether each use case running result was obtained under a preset backup condition, where the preset backup condition is used to determine that the use case run is in an end state;
if the judgment result is yes, back up each use case running result to the enumeration list;
if the judgment result is no, back up the use case running results obtained when the preset backup condition is satisfied to the enumeration list.
In a possible design, the processing module is further configured to:
generate a basic assertion according to the membership relationship between the returned message result in the use case running result corresponding to each test scenario use case and a preset result enumeration set;
determine the test results corresponding to the test scenario use cases whose basic assertions conform to a preset assertion result, and further compare these test results with the test baseline corresponding to each interface, so as to generate the assertion.
In a possible design, if the test scenario use case is a passing business use case: if the returned message result in the corresponding use case running result does not belong to the preset result enumeration set, the test scenario use case is a failed use case; if the returned message result belongs to the preset result enumeration set, the test scenario use case is a successful use case; here the preset result enumeration set is the business success set;
if the test scenario use case is a failing business use case: if the returned message result in the corresponding use case running result does not belong to the preset result enumeration set, the test scenario use case is a successful use case; if the returned message result belongs to the preset result enumeration set, the test scenario use case is a failed use case;
if the test scenario use case is a use case that does not conform to the protocol specification: if the returned message result in the corresponding use case running result does not belong to the preset result enumeration set, the test scenario use case is a failed use case; if the returned message result belongs to the preset result enumeration set, the test scenario use case is a successful use case; here the preset result enumeration set is the exception set for messages that do not conform to the protocol specification;
if the test scenario use case is a use case that conforms to the protocol specification: if the returned message result in the corresponding use case running result does not belong to the preset result enumeration set, the test scenario use case is a successful use case; if the returned message result belongs to the preset result enumeration set, the test scenario use case is a failed use case.
In a possible design, the processing module is further configured to determine that the interface part where the test baseline indicated by the assertion differs from the test result is a system change point or a system defect.
In a possible design, if a use case running result includes a large structured field, the large structured field is split for comparison with the preset denoising rules, so as to remove the first-type noise results from each field obtained by the splitting.
In a third aspect, an embodiment of the present application further provides an electronic device, including:
a processor; and
a memory for storing executable instructions of the processor;
where the processor is configured to execute any one of the interface automation testing methods of the first aspect by executing the executable instructions.
In a fourth aspect, an embodiment of the present application further provides a storage medium on which a computer program is stored, where the program, when executed by a processor, implements any one of the interface automation testing methods of the first aspect.
In a fifth aspect, an embodiment of the present application further provides a computer program product, including a computer program, where the computer program, when executed by a processor, implements any one of the interface automation testing methods of the first aspect.
The interface automation testing method, apparatus, medium, device, and program provided by the embodiments of the present application execute each test scenario use case in the test case set in the system link and obtain the use case running results according to a preset mechanism for safely running programs; then determine the test baseline corresponding to each interface of the system link according to the use case running results and the use case scenario rules; then execute the version program to be tested in the system link and obtain the test results of each interface; and finally generate assertions according to the test baseline corresponding to each interface and the test results. Full-link assertions of the system link are thus generated automatically and accurately without manual maintenance, thereby satisfying the requirement of automatically covering interface testing over the entire link.
Description of drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application or in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show some embodiments of the present application, and those of ordinary skill in the art can obtain other drawings based on these drawings without creative effort.
Fig. 1 is a schematic diagram of the testing process of an interface automation testing method according to an example embodiment of the present application;
Fig. 2 is a schematic flowchart of an interface automation testing method according to an example embodiment of the present application;
Fig. 3 is a schematic flowchart of an assertion self-learning module according to an example embodiment of the present application;
Fig. 4 is another schematic flowchart of an interface automation testing method according to an example embodiment of the present application;
Fig. 5 is a schematic structural diagram of an interface automation testing apparatus according to an example embodiment of the present application;
Fig. 6 is a schematic structural diagram of an electronic device according to an example embodiment of the present application.
Embodiments of the present invention
In order to make the objectives, technical solutions, and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application are described below clearly and completely with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only some rather than all of the embodiments of the present application. Based on the embodiments of the present application, all other embodiments obtained by a person of ordinary skill in the art without creative effort, including but not limited to combinations of multiple embodiments, fall within the protection scope of the present application.
The terms "first", "second", "third", "fourth", and so on (if any) in the specification, claims, and drawings of the present application are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments of the present application described herein can, for example, be implemented in orders other than those illustrated or described herein. Furthermore, the terms "including" and "having", and any variations thereof, are intended to cover non-exclusive inclusion: for example, a process, method, system, product, or device comprising a series of steps or units is not necessarily limited to those steps or units expressly listed, but may include other steps or units that are not expressly listed or that are inherent to the process, method, product, or device.
With the development of computer technology, more and more technologies are applied in the financial field, and the traditional financial industry is gradually transforming toward financial technology (Fintech); interface testing technology is no exception. However, the security and real-time requirements of the financial industry also place higher demands on the technology. At present, interface automation testing in Fintech is usually based on static code analysis: a database (DB) table structure template is generated and then associated with all dynamic DB tables for verification, so that the systems involved in the business process corresponding to an interface are configured manually, and a tool is then used to back up and compare the dynamic table data of all databases under the involved systems, thereby completing the testing of each system; for example, these operations are performed through SOFAACTS, the open-source automated testing framework of Ant Financial.
However, when a DB table structure template is generated through static code analysis of an interface, the generated checkpoints are imperfect and cannot automatically cover the interface testing requirements of the whole link. In addition, the above prior-art solutions suffer from high maintenance costs, since the expected values of the checkpoints must be maintained manually. When the protocol version changes, SOFAACTS requires manual triggering to generate a new template. Furthermore, verification against all associated dynamic DB tables requires backing up all dynamic tables under the configured systems, which easily wastes storage space.
In view of the above technical problems, the interface automation testing method, apparatus, medium, device, and program provided by the embodiments of the present application, in contrast to the prior-art testing process described above, require no manual entry of assertions. Prior-art solutions enter assertions manually, which is labor-intensive, cannot guarantee complete assertions, and frequently misses checkpoints; for example, assertions generated through static code by industry tools such as SOFAACTS can only guarantee assertion completeness for the local system, not for the whole link. In the embodiments provided by the present application, each test scenario use case in the test case set is executed in the system link, and the use case running results are obtained according to a preset mechanism for safely running programs; the test baseline corresponding to each interface of the system link is then determined according to the use case running results and the use case scenario rules; the version program to be tested is then executed in the system link and the test results of each interface are obtained; finally, assertions are generated according to the test baseline corresponding to each interface and the test results. Full-link assertions of the system link are thus generated automatically and accurately without manual maintenance, satisfying the requirement of automatically covering interface testing over the entire link.
In order to understand the present application more accurately, the following terms are defined and explained as follows:
(1) Assertion (Assert): expressed as Boolean expressions, used to state that the expression can be believed to evaluate to true at a particular point in a program.
(2) Interface testing: a type of testing that tests the interfaces between system components. Interface testing is mainly used to check the interaction points between external systems and the system, and among internal subsystems; the focus of the testing is to check data exchange, transfer, and control management processes, as well as the mutual logical dependencies between systems.
(3) Interface automation testing: a testing method that, based on interfaces and protocols, runs systems and applications under preset conditions to evaluate the running results, where the preset conditions should include normal conditions and abnormal conditions.
(4) Sandbox (SANDBOX): sandbox here refers to a technology, namely an open-source, non-invasive runtime aspect-oriented programming (AOP) solution for the Java Virtual Machine (JVM) platform released by Alibaba; it is essentially a form of AOP implementation.
Fig. 1 is a schematic diagram of the testing process of the interface automation testing method according to an example embodiment of the present application. As shown in Fig. 1, in the interface automation testing method provided by this embodiment, test scenario use cases can be generated in batches through an interface automation platform; that is, test cases for each scenario can be generated automatically in batches through the interface automation platform, and each test scenario use case is configured with a corresponding use case business serial number. It is worth noting that, precisely because the embodiments of the present application are based on an interface automation platform that automatically generates a large number of test scenario use cases, the volume of generated assertions makes manual maintenance impractical; it is therefore urgently necessary to generate and maintain the assertions through the interface automation testing method provided by the embodiments of the present application.
Specifically, in the embodiments of the present application, assertion self-learning is adopted to realize the automatic generation of assertions for the interface returned messages and the various database operation results. The main idea is to generate assertion expected values for the interface returned messages and the various database operation results through assertion self-learning, take the expected values as the baseline, and then generate assertions by Diff comparison against the use case execution results.
Fig. 2 is a schematic flowchart of the interface automation testing method according to an example embodiment of the present application. As shown in Fig. 2, the interface automation testing method provided by this embodiment includes:
Step 101: acquire a test case set.
In this step, a test case set is acquired, where the test case set includes multiple types of test case subsets, each type of test case subset includes multiple test scenario use cases, and each test scenario use case is configured with a corresponding use case business serial number.
Step 102: execute each test scenario use case in the test case set in the system link, and obtain the use case running results according to a preset mechanism for safely running programs.
Specifically, each test scenario use case in the test case set can be executed in the system link, and the use case running results can be obtained according to a preset mechanism for safely running programs (for example, sandbox), where the use case running results include the database operation results and the message results returned by each interface.
Fig. 3 is a schematic flowchart of the assertion self-learning module according to an example embodiment of the present application. As shown in Fig. 3, the database operation results of the whole link and the message results returned by each interface in the whole link can be obtained automatically through the assertion self-learning module and used as the basic data for assertion result analysis. Specifically, the test plan can be launched periodically to execute the test scenario use cases, and the business serial number bizNo of each test scenario use case is passed as an input parameter to the assertion self-learning module, which then automatically derives the full-link interface returned-message results and database operation results of the test case. The assertion self-learning module takes the use case business serial number as an input parameter and obtains, through sandbox, the link system list, the list of SQL statements operated by each interface, and the message returned by each interface. The system list is combined with the databases to obtain the set of databases, tables, and fields of the link systems. The SQL list obtained by sandbox is then associated with this set of database, table, and field names to strip aliases and similar information from the SQL, yielding the database, table, and field information operated by each interface under the use case scenario link. The database operation results of the link and the message results returned by each interface are then stored as the use case running results. Specifically, the backed-up use case running result may include: the test case CASE_ID, the business serial number BIZ_NO, and the returned message RSP_MSG.
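As a non-limiting illustration, the backup record and the alias-stripping step can be sketched in Python as follows. The names CaseRunResult and resolve_sql_targets, and the token-matching heuristic against a known (database, table, field) catalogue, are assumptions introduced purely for illustration and are not part of the original disclosure.

```python
import re
from dataclasses import dataclass, field

@dataclass
class CaseRunResult:
    case_id: str                                  # test case CASE_ID
    biz_no: str                                   # business serial number BIZ_NO
    rsp_msg: dict                                 # returned message RSP_MSG
    db_ops: list = field(default_factory=list)    # (db, table, column) tuples

def resolve_sql_targets(sql: str, schema: set) -> list:
    """Match identifiers in a captured SQL statement against the known
    (db, table, column) catalogue; identifiers absent from the catalogue,
    such as aliases, are implicitly discarded."""
    tokens = set(re.findall(r"[A-Za-z_][A-Za-z0-9_]*", sql.lower()))
    return [t for t in schema if t[1] in tokens and t[2] in tokens]
```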
Step 103: determine the test baseline corresponding to each interface of the system link according to the use case running results and the use case scenario rules.
Step 104: execute the version program to be tested in the system link, and obtain the test results of each interface.
Step 105: generate assertions according to the test baseline corresponding to each interface and the test results.
In steps 103 to 105, the test baseline corresponding to each interface of the system link may first be determined according to the use case running results and the use case scenario rules. For the use case scenario rules, the interface returned messages and the various database operation results may be processed by assertion self-learning: for example, automatic denoising may be performed first to remove fields that involve no logic; then, through horizontal analysis and longitudinal analysis, the running result attribute corresponding to each interface is determined and an assertion expected value is determined for each interface, so that the assertion expected value is taken as the test baseline corresponding to each interface of the system link and written into the baseline. Then, the version program to be tested is executed in the system link and the test results of each interface are obtained, so that assertions are generated according to the test baseline corresponding to each interface and the test results.
In this embodiment, each test scenario use case in the test case set is executed in the system link, and the use case running results are obtained according to a preset mechanism for safely running programs; the test baseline corresponding to each interface of the system link is then determined according to the use case running results and the use case scenario rules; next, the version program to be tested is executed in the system link and the test results of each interface are obtained; finally, assertions are generated according to the test baseline corresponding to each interface and the test results. Full-link assertions of the system link are thus generated automatically and accurately without manual maintenance, satisfying the requirement of automatically covering interface testing over the entire link.
Fig. 4 is another schematic flowchart of the interface automation testing method according to an example embodiment of the present application. As shown in Fig. 4, the interface automation testing method provided by this embodiment includes:
Step 201: acquire a test case set.
In this step, a test case set is acquired, where the test case set includes multiple types of test case subsets, each type of test case subset includes multiple test scenario use cases, and each test scenario use case is configured with a corresponding use case business serial number.
Step 202: execute each test scenario use case in the test case set in the system link, and obtain the use case running results according to a preset mechanism for safely running programs.
Specifically, each test scenario use case in the test case set can be executed in the system link, and the use case running results can be obtained according to a preset mechanism for safely running programs (for example, sandbox), where the use case running results include the database operation results and the message results returned by each interface.
As described above, the database operation results of the whole link and the message results returned by each interface in the whole link can be obtained automatically through the assertion self-learning module and used as the basic data for assertion result analysis. Specifically, the test plan can be launched periodically to execute the test scenario use cases, and the business serial number bizNo of each test scenario use case is passed as an input parameter to the assertion self-learning module, which then automatically derives the full-link interface returned-message results and database operation results of the test case. The assertion self-learning module takes the use case business serial number as an input parameter and obtains, through sandbox, the link system list, the list of SQL statements operated by each interface, and the message returned by each interface. The system list is combined with the databases to obtain the set of databases, tables, and fields of the link systems. The SQL list obtained by sandbox is associated with this set of database, table, and field names to strip aliases and similar information from the SQL, yielding the database, table, and field information operated by each interface under the use case scenario link. The database operation results of the link and the message results returned by each interface are then stored as the use case running results. Specifically, the backed-up use case running result may include: the test case CASE_ID, the business serial number BIZ_NO, and the returned message RSP_MSG.
Step 203: directly remove the first-type noise results from the use case running results according to preset denoising rules.
In this step, the first-type noise results may be removed directly from the use case running results according to preset denoising rules, where the first-type noise results are running results unrelated to the test logic of the system link.
Specifically, fields such as serial numbers, timestamps, IOU numbers, and logical card numbers in the use case running results can be identified automatically and, once recognized according to the rules, denoised directly. For example (a minimal sketch of these heuristics follows this list):
1. A value longer than 20 characters consisting of letters or digits can be identified as a serial number;
2. A value of 8, 10, 13, 14, or 17 digits that the string-to-time conversion function can process normally, or a value of 16 or 19 characters consisting of digits, spaces, "-" and ":", can be identified as a time;
3. A value of exactly 20 characters whose first two characters equal DCN_NO can be identified as an IOU number;
4. A value of 16 digits whose first six digits equal the product number can be identified as a logical card number.
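As a hedged illustration of the four heuristics above, the following Python sketch is one possible reading; the DCN_NO prefixes, the PRODUCT_NOS set, and the exact timestamp formats are hypothetical configuration values, not part of the original disclosure.

```python
import re
from datetime import datetime

DCN_PREFIXES = ("11", "12")   # assumed DCN_NO values; deployment-specific
PRODUCT_NOS = {"622588"}      # assumed product-number prefixes

def is_serial_no(v: str) -> bool:
    # Rule 1: longer than 20 characters, letters and digits only.
    return len(v) > 20 and v.isalnum()

def is_timestamp(v: str) -> bool:
    # Rule 2: 8/10/13/14/17 pure digits parseable as a time, or a 16/19
    # character string built from digits, spaces, '-' and ':'.
    if v.isdigit() and len(v) in (8, 10, 13, 14, 17):
        try:
            if len(v) in (10, 13):                 # epoch seconds / milliseconds
                datetime.fromtimestamp(int(v[:10]))
            else:                                  # yyyymmdd[HHMMSS...]
                fmt = "%Y%m%d%H%M%S" if len(v) >= 14 else "%Y%m%d"
                datetime.strptime(v[:14], fmt)
            return True
        except (ValueError, OverflowError, OSError):
            return False
    return len(v) in (16, 19) and bool(re.fullmatch(r"[0-9\- :]+", v))

def is_iou_no(v: str) -> bool:
    # Rule 3: exactly 20 characters, first two equal to the DCN number.
    return len(v) == 20 and v[:2] in DCN_PREFIXES

def is_logical_card_no(v: str) -> bool:
    # Rule 4: 16 digits whose first six equal a product number.
    return len(v) == 16 and v.isdigit() and v[:6] in PRODUCT_NOS

def is_first_type_noise(v: str) -> bool:
    return any(f(v) for f in (is_serial_no, is_timestamp,
                              is_iou_no, is_logical_card_no))
```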
Step 204: perform horizontal analysis on the use case running results to determine the running result attribute corresponding to each interface.
Specifically, all use cases under an interface can be launched in batches n times each according to the execution plan, where n can be configured individually for each interface, with an initial value of 2 if not configured. For ease of explanation, the default value n=2 is taken as an example; the system automatically adjusts n (n>=2) according to the number of use cases of the interface, so as to guarantee that the total number of interface use case runs is greater than N. N can also be configured, with a default initial value of 100 if not configured. It should be understood that n is the number of times the use cases of an individual interface are launched, while N is the minimum threshold for the number of use case launches across all interfaces of the whole link. In addition, the number of use case runs can be adjusted dynamically to guarantee the sample size and thus the correctness of the training results.
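A minimal sketch of this adjustment, assuming n is derived directly from the per-interface use case count when not explicitly configured:

```python
import math

def runs_per_case(case_count: int, configured_n=None, min_total_runs: int = 100) -> int:
    """Choose n, the launches per use case of an interface, so that
    n * case_count >= min_total_runs (the threshold N, default 100),
    with the default n = 2 and any configured value taking precedence."""
    if configured_n is not None:
        return configured_n
    return max(2, math.ceil(min_total_runs / max(case_count, 1)))
```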
After the execution plan has finished running and the use case running results have been obtained, horizontal analysis can be performed on the use case running results to determine the running result attribute corresponding to each interface, where the horizontal analysis includes cluster analysis of the running results generated by running different test scenario use cases in the same test case subset.
Optionally, after a preset first number of test scenario use cases in the same test case subset have been run, the use case running results of each interface are obtained; then the corresponding result equivalence rate is determined according to the distribution of the use case running results of each interface, and the corresponding running result attribute is determined according to the result equivalence rate of each interface and a preset equivalence-rate condition.
Specifically, if the result equivalence rate is 100%, the corresponding running result attribute is determined to be the fixed value class, and the test baseline corresponding to the fixed value class is a fixed value; if the result equivalence rate is greater than or equal to the preset equivalence rate, the corresponding running result attribute is determined to be the enumeration class, and the test baseline corresponding to the enumeration class is an enumeration list; if the result equivalence rate is less than the preset equivalence rate, the corresponding use case running result is determined to be noise and can be removed.
In a possible design, horizontal cluster analysis can be performed on all the use case running results obtained above: if the equivalence rate of a returned-message or DB field is 100%, it belongs to the fixed value class and is backed up directly to the baseline; if the equivalence rate is greater than 20%, it belongs to the enumeration class and an enumeration list is maintained; if the equivalence rate is less than 20%, it is irregular and is directly identified as noise, which can be removed.
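A minimal sketch of this classification, assuming the equivalence rate is taken as the share of the most frequent value of a field across the subset's use cases (an interpretation, not stated verbatim in the original):

```python
from collections import Counter

FIXED, ENUM, NOISE = "fixed", "enum", "noise"

def classify_field(values: list, enum_threshold: float = 0.20):
    """Horizontal analysis of one returned-message or DB field across all
    use cases of a subset: 100% equivalence -> fixed value class,
    >= threshold -> enumeration class, otherwise noise (the 20% threshold
    follows the design above)."""
    counts = Counter(values)
    rate = max(counts.values()) / len(values)
    if rate == 1.0:
        return FIXED, values[0]               # back the fixed value up to the baseline
    if rate >= enum_threshold:
        return ENUM, sorted(counts, key=str)  # maintain the enumeration list
    return NOISE, None
```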
Step 205: perform longitudinal analysis on the use case running results belonging to the enumeration class.
In this step, after the horizontal analysis has been performed on the use case running results to determine the running result attribute corresponding to each interface, longitudinal analysis needs to be performed on the use case running results belonging to the enumeration class, where the longitudinal analysis includes cluster analysis of the running results generated by running the same test scenario use case. If the running results of all use cases belonging to the enumeration class are identical, the use case running results are backed up to the enumeration list; if they differ, it is judged whether each use case running result was obtained under the preset backup condition, which is used to determine that the use case run is in an end state; if so, each use case running result is backed up to the enumeration list; if not, the use case running results obtained when the preset backup condition is satisfied are backed up to the enumeration list.
Specifically, the same test scenario use case can be run multiple times and the use case running result obtained after each run. If the results are identical every time, the fixed use case running result can be backed up directly to the baseline. If the results are inconsistent, they may reflect an intermediate running state or random hits on an enumeration. In this case the user can be notified to intervene with batch denoising. If it is an intermediate state, the user can set preset backup conditions according to the enumeration-class results, and the system performs the backup only after judging that the preset conditions are met, thereby eliminating the intermediate state. If the user determines that the enumeration results are randomly hit, the enumeration list is backed up as the assertion expected value in the baseline, and a subsequent test case running result is regarded as successful as long as it hits any entry of the enumeration list.
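A minimal sketch of this longitudinal decision, assuming the caller supplies an end-state flag per run (the preset backup condition) and treats a None return as the signal to notify the user:

```python
def longitudinal_backup(run_results: list, in_end_state: list):
    """Longitudinal analysis of repeated runs of one test scenario use case.
    Identical results are backed up directly; for differing results, only
    those captured in the end state are kept, eliminating intermediate
    states. Returning None signals that user intervention is needed, e.g.
    to confirm a random-hit enumeration and back up the whole enumeration
    list as the assertion expected value."""
    if len({repr(r) for r in run_results}) == 1:
        return [run_results[0]]                    # consistent: back up to baseline
    kept = [r for r, done in zip(run_results, in_end_state) if done]
    return kept or None
```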
Step 206: generate basic assertions according to the membership relationship between the returned message result in the use case running result corresponding to each test scenario use case and the preset result enumeration set.
In this step, basic assertions may be generated according to the membership relationship between the returned message result in the use case running result corresponding to each test scenario use case and the preset result enumeration set; the test results corresponding to the test scenario use cases whose basic assertions conform to the preset assertion result are then further compared with the test baseline corresponding to each interface to generate the assertion.
Specifically, after the test cases are executed in batches, the message return values of all test scenario use cases, that is, the fields containing the code string, are clustered. The code enumerations are obtained through denoising analysis and horizontal analysis, and, combined with self-learning on use case aliases, four categories of return code enumerations are derived: business success, business failure, message-violates-protocol-specification exception, and system failure. Taking the "message violates protocol specification" exception as an example, use cases can be classified according to identifying keywords such as "over-length", "outside enumeration range", and "wrong data type", where the return code of each class of use case is theoretically unique. After the codes obtained through denoising analysis and horizontal analysis are grouped by use case accordingly, the codes within each group are theoretically equal. In practice, a difference analysis can be performed on the codes within each group: code enumeration values with a consistency greater than 90% represent the legitimate return enumeration set for the corresponding use case type. Subsequent use case assertions are judged automatically: use cases of this type whose return code equals such a code are successful use cases, while use cases whose return codes differ are failed use cases.
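A minimal sketch of this code-enumeration learning, assuming each executed case has already been tagged with one of the four categories (for example, via alias keywords) and carries its return code; the dictionary shape is an illustrative assumption:

```python
from collections import Counter, defaultdict

def learn_code_enums(cases: list, consistency: float = 0.90) -> dict:
    """Group return codes by use-case category (business success / business
    failure / protocol violation / system failure) and keep a code as a
    legitimate enumeration value only when it covers at least 90% of the
    group, per the consistency rule above."""
    by_category = defaultdict(list)
    for case in cases:                      # case: {"category": ..., "code": ...}
        by_category[case["category"]].append(case["code"])
    enums = {}
    for category, codes in by_category.items():
        counts = Counter(codes)
        enums[category] = {c for c, k in counts.items()
                           if k / len(codes) >= consistency}
    return enums
```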
In a possible design, if the test scenario use case is a passing business use case: if the returned message result in the corresponding use case running result does not belong to the preset result enumeration set, the test scenario use case is a failed use case; if the returned message result belongs to the preset result enumeration set, the test scenario use case is a successful use case; here the preset result enumeration set is the business success set.
In a possible design, if the test scenario use case is a failing business use case: if the returned message result in the corresponding use case running result does not belong to the preset result enumeration set, the test scenario use case is a successful use case; if the returned message result belongs to the preset result enumeration set, the test scenario use case is a failed use case.
In a possible design, if the test scenario use case is a use case that does not conform to the protocol specification: if the returned message result in the corresponding use case running result does not belong to the preset result enumeration set, the test scenario use case is a failed use case; if the returned message result belongs to the preset result enumeration set, the test scenario use case is a successful use case; here the preset result enumeration set is the exception set for messages that do not conform to the protocol specification.
In a possible design, if the test scenario use case is a use case that conforms to the protocol specification: if the returned message result in the corresponding use case running result does not belong to the preset result enumeration set, the test scenario use case is a successful use case; if the returned message result belongs to the preset result enumeration set, the test scenario use case is a failed use case.
In a possible specific implementation, for business and mock use cases: if the use case type is a passing business use case and the result message code is not in the business success set, the use case fails; if the code is in the business success set, the result is success when the success-set enumeration is unique, and "suggested success" when it is not. If the use case type is a failing business use case and the result message code is not in the business failure set, the use case fails; if the code is in the business failure set, the result is success when the enumeration is unique, and "suggested success" when it is not.
For field-class use cases: if the use case type is a field-violates-protocol-specification use case and the result message code is not in the exception set for messages violating the protocol specification, the result is failure; if the code is in that set, the result is success when the enumeration is unique and "suggested success" when it is not. If the use case type is a field-conforms-to-protocol-specification use case and the result message code is in that exception set, the result is "suggested failure"; if the code is not in the exception set, the result is "suggested success".
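The decision table above can be sketched as follows; the set names (business_success, business_failure, protocol_violation) are hypothetical keys, and the "suggested" outcomes for non-unique enumerations are folded into plain success/failure for brevity:

```python
PASS_BIZ, FAIL_BIZ = "pass_business", "fail_business"
BAD_PROTO, OK_PROTO = "violates_protocol", "conforms_protocol"

def basic_assertion(case_type: str, code: str, enums: dict) -> str:
    """Basic assertion judgment from the learned code enumerations; `enums`
    maps set names to the code sets produced by learn_code_enums above."""
    if case_type == PASS_BIZ:
        return "success" if code in enums["business_success"] else "failure"
    if case_type == FAIL_BIZ:
        return "success" if code in enums["business_failure"] else "failure"
    if case_type == BAD_PROTO:
        return "success" if code in enums["protocol_violation"] else "failure"
    if case_type == OK_PROTO:
        return "failure" if code in enums["protocol_violation"] else "success"
    raise ValueError(f"unknown use case type: {case_type}")
```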
In addition, it is worth noting that, besides regression testing, the above basic assertion generation approach can also be applied to unit, smoke, or SIT testing. In regression testing it mainly serves as an accelerator: the subsequent Diff judgment is time-consuming, and the basic assertion can quickly decide whether a precise Diff is necessary, thereby reducing the resource consumption of ineffective Diffs.
Step 207: execute the version program to be tested in the system link, and obtain the test results of each interface.
Step 208: generate assertions according to the test baseline corresponding to each interface and the test results.
In steps 207 to 208, the test baseline corresponding to each interface of the system link may first be determined according to the use case running results and the use case scenario rules. For the use case scenario rules, the interface returned messages and the various database operation results may be processed by assertion self-learning: for example, automatic denoising may be performed first to remove fields that involve no logic; then, through horizontal analysis and longitudinal analysis, the running result attribute corresponding to each interface is determined and an assertion expected value is determined for each interface, so that the assertion expected value is taken as the test baseline corresponding to each interface of the system link and written into the baseline. Then, the version program to be tested is executed in the system link and the test results of each interface are obtained, so that assertions are generated according to the test baseline corresponding to each interface and the test results.
Specifically, after the assertions are generated according to the test baseline corresponding to each interface and the test results, the interface parts where the test baseline indicated by the assertion differs from the test result can be determined to be system change points or system defects. During regression, after the use cases are executed, the new full-link assertions are obtained, and the assertion expectations are automatically Diff-compared with the baseline, where the differing parts are system change points or system bugs.
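A minimal sketch of such a Diff comparison over nested result records follows; it covers both of the scenarios described below, namely a changed value (a=1 vs a=2) and an added or removed key (a new field "b" in a returned message). For enumeration-class baseline entries, the equality check in the last branch would instead test membership in the enumeration list; that refinement is omitted here for brevity.

```python
def diff(baseline: dict, result: dict, path: str = "") -> list:
    """Recursively compare a baseline record with a newly backed-up test
    result; every reported difference is a candidate system change point
    or defect for the user to triage."""
    issues = []
    for key in baseline.keys() | result.keys():
        p = f"{path}.{key}" if path else key
        if key not in result:
            issues.append(f"missing in result: {p}")
        elif key not in baseline:
            issues.append(f"new in result: {p}")
        elif isinstance(baseline[key], dict) and isinstance(result[key], dict):
            issues.extend(diff(baseline[key], result[key], p))
        elif baseline[key] != result[key]:
            issues.append(f"value changed at {p}: {baseline[key]!r} -> {result[key]!r}")
    return issues
```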
In a possible scenario, the field value of a use case running result is inconsistent: the database operation result in the use case baseline is field "a=1" of table A. After the version to be tested runs, the newly backed-up use case running result is "a=2" for that field of table A, so the system judges the use case run as failed. The user can then be guided to locate the cause, for example through a report. The user may judge it to be a system bug, report the defect, and re-run the comparison after the defect is fixed; once the comparison is consistent, the run is considered to have passed. Alternatively, the user may judge that the current requirement change legitimately causes field "a=2" in table A; in that case the user directly sets the result to success, and the system updates the use case running result into the baseline, so that field a of table A in the baseline becomes "a=2".
In another scenario, the field key of a use case running result is inconsistent: interface B on the use case link adds a returned-message field "b" in the version to be tested. After the use case runs, the newly backed-up result contains an extra "b=1" in the message returned by interface B. The baseline for interface B has no field "b", so the comparison is inconsistent and the user is prompted to investigate. If the user determines that this is new content of the current version, the result is updated directly into the baseline, which is then used as the assertion basis for subsequent judgments.
Optionally, if a use case running result includes a large structured field, the large structured field is split for comparison with the preset denoising rules, so as to remove the first-type noise results from each field obtained by the splitting. Specifically, for a large structured field, the presence of "{" or the escape character "\" can indicate that the field may be a large field; the String is then converted into a dictionary, and if the conversion succeeds, a denoised assertion can likewise be obtained in the manner described in this step.
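A minimal sketch of this large-field splitting, assuming the structured payload is JSON-encoded (an assumption about the message format; the original text only requires a string-to-dictionary conversion):

```python
import json

def split_large_field(value: str):
    """Suspect a large structured field when the value contains '{' or the
    escape character '\\'; if the string converts into a dictionary, return
    it so each sub-field can be checked against the denoising rules
    individually, otherwise return None."""
    if "{" not in value and "\\" not in value:
        return None
    for candidate in (value, value.replace('\\"', '"')):
        try:
            parsed = json.loads(candidate)
        except json.JSONDecodeError:
            continue
        if isinstance(parsed, dict):
            return parsed
    return None
```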
In this embodiment, each test scenario use case in the test case set is executed in the system link, and the use case running results are obtained according to a preset mechanism for safely running programs; the test baseline corresponding to each interface of the system link is then determined according to the use case running results and the use case scenario rules; next, the version program to be tested is executed in the system link and the test results of each interface are obtained; finally, assertions are generated according to the test baseline corresponding to each interface and the test results, so that full-link assertions of the system link are generated automatically and accurately without manual maintenance, satisfying the requirement of automatically covering interface testing over the entire link. Moreover, in this embodiment, the baseline of the full-link interface assertions can be generated automatically, and the comparison adopted is likewise an assertion comparison against the full-link database operation results and interface returned-message results. In addition, noise in the use case running results is analyzed and removed automatically, and a preliminary judgment is made through basic-assertion self-learning, reducing ineffective Diff comparisons. Furthermore, if a use case running result includes a large structured field, the large structured field can be split and compared, thereby supporting denoised Diff comparison of large fields.
Fig. 5 is a schematic structural diagram of an interface automation testing apparatus according to an example embodiment of the present application. As shown in Fig. 5, the interface automation testing apparatus provided by this embodiment includes:
an obtaining module 301, configured to obtain a test case set, where the test case set includes multiple types of test case subsets, each type of test case subset includes multiple test scenario use cases, and each test scenario use case is configured with a corresponding use case business serial number;
a processing module 302, configured to execute each test scenario use case of the test case set on the system link and obtain use case running results under the preset safe-program-running mechanism, where the use case running results include database operation results and the message results returned by each interface;
the processing module 302 is further configured to determine the test baseline corresponding to each interface of the system link according to the use case running results and the use case scenario rules;
the obtaining module 301 is further configured to execute the program of the version under test on the system link and obtain the test results of each interface;
the processing module 302 is further configured to generate assertions according to the test baseline and the test results corresponding to each interface.
In one possible design, the processing module 302 is specifically configured to:
directly remove first-class noise results from the use case running results according to the preset denoising rules, where a first-class noise result is a running result unrelated to the test logic of the system link (see the sketch after this list);
perform horizontal analysis on the use case running results to determine the running result attribute corresponding to each interface, where the horizontal analysis includes cluster analysis of the running results generated by running different test scenario use cases of the same test case subset;
determine the test baseline corresponding to each interface of the system link according to the use case running results and the corresponding running result attributes.
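A minimal sketch of the first-class noise removal, assuming the preset denoising rules are regular expressions over field names (for example timestamps and per-run identifiers, which are unrelated to the test logic of the system link); the concrete rule list is an assumption for illustration.

```python
import re

# Illustrative preset denoising rules: fields whose values vary for
# reasons unrelated to the test logic of the system link.
PRESET_DENOISE_RULES = [
    re.compile(r"(?i)timestamp|update_time|create_time"),  # time fields
    re.compile(r"(?i)trace_id|request_id|uuid"),           # per-run identifiers
]

def remove_first_class_noise(run_result: dict) -> dict:
    """Drop every field matched by any preset denoising rule."""
    return {
        key: value
        for key, value in run_result.items()
        if not any(rule.search(key) for rule in PRESET_DENOISE_RULES)
    }
```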
In one possible design, the processing module 302 is specifically configured to:
after running a preset first number of test scenario use cases of the same test case subset, obtain the use case running results of each interface;
determine the corresponding result equivalence rate according to the result distribution of the use case running results of each interface (see the sketch after this list);
determine the corresponding running result attribute according to the result equivalence rate of each interface and the preset equivalence rate condition.
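One way to read "result equivalence rate" is the share of runs whose value equals the most common value observed for a field across the subset; this reading is an assumption, sketched below.

```python
from collections import Counter

def result_equivalence_rate(values: list) -> float:
    """Share of runs whose value equals the most common observed value.

    `values` holds one observed value per test scenario use case of the
    same subset; a rate of 1.0 means every run agreed on the value.
    """
    if not values:
        return 0.0
    most_common_count = Counter(values).most_common(1)[0][1]
    return most_common_count / len(values)
```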
In one possible design, the processing module 302 is specifically configured to:
if the result equivalence rate is 100%, determine the corresponding running result attribute to be a fixed-value class, where the test baseline corresponding to the fixed-value class is a fixed value;
if the result equivalence rate is greater than or equal to the preset equivalence rate, determine the corresponding running result attribute to be an enumeration class, where the test baseline corresponding to the enumeration class is an enumeration list;
if the result equivalence rate is less than the preset equivalence rate, determine the corresponding use case running result to be noise. These three rules are sketched below.
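A direct transcription of the three rules; the preset equivalence rate of 0.8 is an illustrative default, since the method only requires that some threshold below 100% be configured.

```python
def classify_running_result_attribute(rate: float, preset_rate: float = 0.8) -> str:
    """Map a result equivalence rate to a running result attribute."""
    if rate == 1.0:
        return "fixed_value"   # baseline is the fixed value itself
    if rate >= preset_rate:
        return "enumeration"   # baseline is an enumeration list
    return "noise"             # excluded from the baseline
```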
In one possible design, the processing module 302 is further configured to:
perform longitudinal analysis on the use case running results belonging to the enumeration class, where the longitudinal analysis includes cluster analysis of the running results generated by running the same test scenario use case;
if all the use case running results belonging to the enumeration class are the same, back up the use case running result into the enumeration list;
if the use case running results belonging to the enumeration class differ, judge whether each use case running result was obtained while the preset backup condition was satisfied, where the preset backup condition is used to determine that the use case run is in an end state;
if so, back up each use case running result into the enumeration list;
if not, back up into the enumeration list only the use case running results obtained while the preset backup condition was satisfied. This decision flow is sketched below.
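A minimal sketch of the longitudinal backup decision, assuming each run is recorded as a (value, in_end_state) pair whose boolean stands in for the preset backup condition; that representation is an assumption.

```python
def values_to_back_up(runs: list) -> list:
    """Longitudinal analysis over repeated runs of one test scenario use case.

    `runs` is a list of (value, in_end_state) tuples; returns the values
    to back up into the enumeration list.
    """
    values = [value for value, _ in runs]
    if len(set(values)) == 1:
        return values  # all runs agree: back up directly
    if all(in_end_state for _, in_end_state in runs):
        return values  # every result was captured in the end state
    # Otherwise keep only the results obtained while the preset backup
    # condition (use case run in its end state) was satisfied.
    return [value for value, in_end_state in runs if in_end_state]
```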
In one possible design, the processing module 302 is further configured to:
generate basic assertions according to the membership relation between the return message results in the use case running results corresponding to each test scenario use case and the preset result enumeration set;
further compare the test results of the test scenario use cases whose basic assertions conform to the preset assertion result against the test baseline corresponding to each interface, so as to generate the assertions.
In one possible design, if a test scenario use case is a passing business use case, then the test scenario use case is a failed use case if the return message result in the corresponding use case running result does not belong to the preset result enumeration set, and a successful use case if it does; here the preset result enumeration set is the business success set.
If a test scenario use case is a failing business use case, then the test scenario use case is a successful use case if the return message result in the corresponding use case running result does not belong to the preset result enumeration set, and a failed use case if it does.
If a test scenario use case is a protocol-nonconforming use case, then the test scenario use case is a failed use case if the return message result in the corresponding use case running result does not belong to the preset result enumeration set, and a successful use case if it does; here the preset result enumeration set is the exception set for messages that do not conform to the protocol specification.
If a test scenario use case is a protocol-conforming use case, then the test scenario use case is a successful use case if the return message result in the corresponding use case running result does not belong to the preset result enumeration set, and a failed use case if it does. The four rules are sketched below.
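The four rules collapse to a single membership test; the type names and the encoding of the use case types as strings are assumptions for illustration.

```python
# Use case types for which membership of the return message result in the
# preset result enumeration set means success; for the other two types,
# membership means failure.
MEMBERSHIP_MEANS_SUCCESS = {"passing_business", "protocol_nonconforming"}

def basic_assertion(case_type: str, return_result: str, enumeration_set: set) -> bool:
    """Return True if the test scenario use case counts as successful."""
    in_set = return_result in enumeration_set
    if case_type in MEMBERSHIP_MEANS_SUCCESS:
        return in_set      # e.g. passing business: result must be in the success set
    return not in_set      # e.g. failing business: result must NOT be in the success set
```

Only use cases whose basic assertion conforms to the preset assertion result proceed to the full baseline comparison, which keeps invalid Diff comparisons out of the later stage.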
In one possible design, the processing module 302 is further configured to determine that an interface part where the test baseline indicated by the assertion differs from the test result is a system change point or a system vulnerability.
In one possible design, if the use case running result contains a large structured field, the large structured field is split so that it can be compared against the preset denoising rules, removing first-class noise results from each of the split fields.
This embodiment provides an interface automation testing apparatus, which can be used to perform the steps of the above method embodiments. For details not disclosed in the apparatus embodiments of the present application, reference is made to the above method embodiments of the present application.
Fig. 6 is a schematic structural diagram of an electronic device according to an example embodiment of the present application. As shown in Fig. 6, the electronic device 400 provided by this embodiment includes:
a processor 401; and
a memory 402 for storing executable instructions of the processor, where the memory may also be a flash memory;
where the processor 401 is configured to perform the steps of the above method by executing the executable instructions.
Optionally, the memory 402 may be independent of, or integrated with, the processor 401.
When the memory 402 is a device independent of the processor 401, the electronic device 400 may further include:
a bus 403 for connecting the processor 401 and the memory 402.
This embodiment further provides a readable storage medium storing a computer program; when at least one processor of an electronic device executes the computer program, the electronic device performs the steps of the above method.
This embodiment further provides a program product including a computer program, the computer program being stored in a readable storage medium. At least one processor of an electronic device can read the computer program from the readable storage medium, and execution of the computer program by the at least one processor causes the electronic device to implement the steps of the above method.
This embodiment further provides a computer program including program code; when a computer runs the computer program, the program code performs the steps of the above method.
Those of ordinary skill in the art will understand that all or part of the steps of the above method embodiments may be implemented by hardware related to program instructions. The aforementioned program may be stored in a computer-readable storage medium; when executed, the program performs the steps of the above method embodiments, and the aforementioned storage medium includes ROM, RAM, magnetic disks, optical discs, and other media capable of storing program code.
Finally, it should be noted that the above embodiments merely illustrate, rather than limit, the technical solutions of the present application. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or make equivalent replacements of some or all of the technical features, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present application.

Claims (14)

  1. An interface automation testing method, characterized in that it comprises:
    obtaining a test case set, wherein the test case set comprises multiple types of test case subsets, each type of test case subset comprises multiple test scenario use cases, and each test scenario use case is configured with a corresponding use case business serial number;
    executing each test scenario use case of the test case set on a system link, and obtaining use case running results under a preset safe-program-running mechanism, wherein the use case running results comprise database operation results and message results returned by each interface;
    determining a test baseline corresponding to each interface of the system link according to running results of each test scenario use case in a production version code environment and use case scenario rules;
    executing a program of a version under test on the system link, and obtaining test results of each interface; and
    generating assertions according to the test baseline and the test results corresponding to each interface.
  2. The interface automation testing method according to claim 1, wherein determining the test baseline corresponding to each interface of the system link according to the running results of each test scenario use case in the production version code environment and the use case scenario rules comprises:
    directly removing first-class noise results from the use case running results according to preset denoising rules, wherein a first-class noise result is a running result unrelated to test logic of the system link;
    performing horizontal analysis on the use case running results to determine a running result attribute corresponding to each interface, wherein the horizontal analysis comprises cluster analysis of running results generated by running different test scenario use cases of a same test case subset; and
    determining the test baseline corresponding to each interface of the system link according to the use case running results and the corresponding running result attributes.
  3. The interface automation testing method according to claim 2, wherein performing the horizontal analysis on the use case running results to determine the running result attribute corresponding to each interface comprises:
    after running a preset first number of test scenario use cases of a same test case subset, obtaining use case running results of each interface;
    determining a corresponding result equivalence rate according to a result distribution of the use case running results of each interface; and
    determining the corresponding running result attribute according to the result equivalence rate of each interface and a preset equivalence rate condition.
  4. The interface automation testing method according to claim 3, wherein determining the corresponding running result attribute according to the result equivalence rate of each interface and the preset equivalence rate condition comprises:
    if the result equivalence rate is 100%, determining the corresponding running result attribute to be a fixed-value class, wherein the test baseline corresponding to the fixed-value class is a fixed value;
    if the result equivalence rate is greater than or equal to a preset equivalence rate, determining the corresponding running result attribute to be an enumeration class, wherein the test baseline corresponding to the enumeration class is an enumeration list; and
    if the result equivalence rate is less than the preset equivalence rate, determining the corresponding use case running result to be noise.
  5. The interface automation testing method according to claim 4, further comprising, after performing the horizontal analysis on the use case running results to determine the running result attribute corresponding to each interface:
    performing longitudinal analysis on use case running results belonging to the enumeration class, wherein the longitudinal analysis comprises cluster analysis of running results generated by running a same test scenario use case;
    if all the use case running results belonging to the enumeration class are the same, backing up the use case running result into the enumeration list;
    if the use case running results belonging to the enumeration class differ, judging whether each use case running result was obtained while a preset backup condition was satisfied, wherein the preset backup condition is used to determine that the use case run is in an end state;
    if so, backing up each use case running result into the enumeration list; and
    if not, backing up into the enumeration list the use case running results obtained while the preset backup condition was satisfied.
  6. The interface automation testing method according to any one of claims 2 to 5, further comprising, before executing the program of the version under test on the system link and obtaining the test results of each interface:
    generating basic assertions according to a membership relation between return message results in the use case running results corresponding to each test scenario use case and a preset result enumeration set; and
    further comparing the test results of the test scenario use cases whose basic assertions conform to a preset assertion result against the test baseline corresponding to each interface, so as to generate the assertions.
  7. The interface automation testing method according to claim 6, wherein generating the basic assertions according to the membership relation between the return message results in the use case running results corresponding to each test scenario use case and the preset result enumeration set comprises:
    if a test scenario use case is a passing business use case, the test scenario use case is a failed use case if the return message result in the corresponding use case running result does not belong to the preset result enumeration set, and a successful use case if the return message result belongs to the preset result enumeration set, the preset result enumeration set being a business success set;
    if a test scenario use case is a failing business use case, the test scenario use case is a successful use case if the return message result in the corresponding use case running result does not belong to the preset result enumeration set, and a failed use case if the return message result belongs to the preset result enumeration set;
    if a test scenario use case is a protocol-nonconforming use case, the test scenario use case is a failed use case if the return message result in the corresponding use case running result does not belong to the preset result enumeration set, and a successful use case if the return message result belongs to the preset result enumeration set, the preset result enumeration set being an exception set for messages not conforming to the protocol specification; and
    if a test scenario use case is a protocol-conforming use case, the test scenario use case is a successful use case if the return message result in the corresponding use case running result does not belong to the preset result enumeration set, and a failed use case if the return message result belongs to the preset result enumeration set.
  8. The interface automation testing method according to claim 7, further comprising, after generating the assertions according to the test baseline and the test results corresponding to each interface:
    determining that an interface part where the test baseline indicated by the assertion differs from the test result is a system change point or a system vulnerability.
  9. The interface automation testing method according to any one of claims 2 to 5, further comprising, before directly removing the first-class noise results from the use case running results according to the preset denoising rules:
    if the use case running result contains a large structured field, splitting the large structured field for comparison against the preset denoising rules, so as to remove first-class noise results from each of the split fields.
  10. An interface automation testing apparatus, characterized in that it comprises:
    an obtaining module, configured to obtain a test case set, wherein the test case set comprises multiple types of test case subsets, each type of test case subset comprises multiple test scenario use cases, and each test scenario use case is configured with a corresponding use case business serial number;
    a processing module, configured to execute each test scenario use case of the test case set on a system link and obtain use case running results under a preset safe-program-running mechanism, wherein the use case running results comprise database operation results and message results returned by each interface;
    the processing module being further configured to determine a test baseline corresponding to each interface of the system link according to running results of each test scenario use case in a production version code environment and use case scenario rules;
    the obtaining module being further configured to execute a program of a version under test on the system link and obtain test results of each interface; and
    the processing module being further configured to generate assertions according to the test baseline and the test results corresponding to each interface.
  11. An electronic device, characterized in that it comprises:
    a processor; and
    a memory for storing a computer program of the processor;
    wherein the processor is configured to implement the interface automation testing method according to any one of claims 1 to 9 by executing the computer program.
  12. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the interface automation testing method according to any one of claims 1 to 9.
  13. A computer program product comprising a computer program, characterized in that the computer program, when executed by a processor, implements the interface automation testing method according to any one of claims 1 to 9.
  14. A computer program, characterized by comprising program code, wherein when a computer runs the computer program, the program code performs the interface automation testing method according to any one of claims 1 to 9.