WO2023123943A1 - Interface automation testing method, apparatus, medium, device, and program - Google Patents
Interface automation testing method, apparatus, medium, device, and program
- Publication number
- WO2023123943A1 (PCT application No. PCT/CN2022/102161)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- use case
- result
- test
- interface
- preset
- Prior art date: 2021-12-27
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/36—Preventing errors by testing or debugging software
- G06F11/3668—Software testing
- G06F11/3672—Test management
- G06F11/3684—Test management for test design, e.g. generating new test cases
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/36—Preventing errors by testing or debugging software
- G06F11/3668—Software testing
- G06F11/3672—Test management
- G06F11/3688—Test management for test execution, e.g. scheduling of test suites
Definitions
- The present application relates to the technical field of interface testing, and in particular to an interface automation testing method, apparatus, medium, device, and program.
- When database (DB) table structure templates are generated through static code analysis of interfaces, the generated checkpoints are incomplete and cannot automatically cover the interface testing requirements of the full link.
- The embodiments of the present application provide an interface automation testing method, apparatus, medium, device, and program to solve the technical problem that the prior art cannot automatically cover the interface testing requirements of the full link.
- An interface automation testing method, including:
- acquiring a test case set, where the test case set includes multiple types of test case subsets, each type of test case subset includes multiple test scenario use cases, and each test scenario use case is configured with a corresponding use case business serial number;
- executing each test scenario use case in the test case set on the system link, and obtaining use case running results according to a preset mechanism for safely running the program, where the use case running results include database operation results and the message results returned by each interface;
- determining the test baseline corresponding to each interface of the system link according to the running results of each test scenario use case in the production version code environment and the use case scenario rules, which includes:
- performing horizontal analysis on the use case running results to determine the running result attributes corresponding to each interface, where the horizontal analysis includes cluster analysis of the running results generated by running different test scenario use cases in the same test case subset;
- determining the test baseline corresponding to each interface of the system link according to the use case running results and the corresponding running result attributes.
- Performing horizontal analysis on the use case running results to determine the running result attributes corresponding to each interface includes:
- determining the corresponding running result attributes according to the result equivalence rate of each interface and a preset equivalence rate condition, where:
- if the result equivalence rate is 100%, the corresponding running result attribute is determined to be the fixed value class, and the test baseline corresponding to the fixed value class is a fixed value;
- if the result equivalence rate is greater than or equal to the preset equivalence rate, the corresponding running result attribute is determined to be the enumeration class, and the test baseline corresponding to the enumeration class is an enumeration list;
- if the result equivalence rate is less than the preset equivalence rate, the corresponding use case running result is determined to be noise.
- The test results corresponding to the test scenario use cases whose basic assertions conform to the preset assertion result are further compared with the test baseline corresponding to each interface to generate the assertions.
- Generating the basic assertion according to the affiliation between the returned message result in the use case running result corresponding to each test scenario use case and a preset result enumeration set includes:
- if the test scenario use case is a pass business use case, the test scenario use case is a failed use case when the returned message result in the corresponding use case running result does not belong to the preset result enumeration set, and a successful use case when the returned message result belongs to the preset result enumeration set; here the preset result enumeration set is the business success set;
- if the test scenario use case is a failed business use case, the test scenario use case is a successful use case when the returned message result in the corresponding use case running result does not belong to the preset result enumeration set, and a failed use case when the returned message result belongs to the preset result enumeration set;
- if the test scenario use case is a use case that does not conform to the protocol specification, the test scenario use case is a failed use case when the returned message result in the corresponding use case running result does not belong to the preset result enumeration set, and a successful use case when the returned message result belongs to the preset result enumeration set; here the preset result enumeration set is the exception set of messages that do not conform to the protocol specification;
- if the test scenario use case is a use case conforming to the protocol specification, the test scenario use case is a successful use case when the returned message result in the corresponding use case running result does not belong to the preset result enumeration set, and a failed use case when the returned message result belongs to the preset result enumeration set.
- The interface parts where the test baseline indicated by the assertion differs from the test result are determined to be system change points or system vulnerabilities.
- If the use case running result includes a structured large field, the structured large field is split for comparison against the preset denoising rules, so as to remove the first-type noise results from each split field.
- The embodiment of the present application also provides an interface automation testing apparatus, including:
- an acquisition module, used to acquire a test case set, where the test case set includes multiple types of test case subsets, each type of test case subset includes multiple test scenario use cases, and each test scenario use case is configured with a corresponding use case business serial number;
- a processing module, used to execute each test scenario use case in the test case set on the system link and obtain use case running results according to the preset mechanism for safely running the program, where the use case running results include database operation results and the message results returned by each interface;
- the processing module is also used to determine the test baseline corresponding to each interface of the system link according to the running results of each test scenario use case in the production version code environment and the use case scenario rules;
- the acquisition module is also used to execute and run the program of the version to be tested on the system link and obtain the test results of each interface;
- the processing module is further configured to generate assertions according to the test baseline and the test results corresponding to each interface.
- The processing module is specifically used for the horizontal analysis, where the horizontal analysis includes cluster analysis of the running results generated by running different test scenario use cases in the same test case subset;
- the test baseline corresponding to each interface of the system link is determined according to the use case running results and the corresponding result attributes.
- The processing module is specifically used to determine the running result attributes, where:
- if the result equivalence rate is 100%, the corresponding running result attribute is determined to be the fixed value class, and the test baseline corresponding to the fixed value class is a fixed value;
- if the result equivalence rate is greater than or equal to the preset equivalence rate, the corresponding running result attribute is determined to be the enumeration class, and the test baseline corresponding to the enumeration class is an enumeration list.
- The processing module is also used to generate the basic assertions, where the test results corresponding to the test scenario use cases whose basic assertions conform to the preset assertion result are further compared with the test baseline corresponding to each interface to generate the assertions.
- If the test scenario use case is a pass business use case, the test scenario use case is a failed use case when the returned message result does not belong to the preset result enumeration set, and a successful use case when it does; here the preset result enumeration set is the business success set.
- If the test scenario use case is a failed business use case, the test scenario use case is a successful use case when the returned message result in the corresponding use case running result does not belong to the preset result enumeration set, and a failed use case when the returned message result belongs to the preset result enumeration set.
- If the test scenario use case is a use case that does not conform to the protocol specification, the test scenario use case is a failed use case when the returned message result does not belong to the preset result enumeration set, and a successful use case when it does; here the preset result enumeration set is the exception set of messages that do not conform to the protocol specification.
- If the test scenario use case is a use case conforming to the protocol specification, the test scenario use case is a successful use case when the returned message result does not belong to the preset result enumeration set, and a failed use case when it does.
- The processing module is further configured to determine that the interface parts where the test baseline indicated by the assertion differs from the test result are system change points or system vulnerabilities.
- If the use case running result includes a structured large field, the structured large field is split for comparison against the preset denoising rules, so as to remove the first-type noise results from each split field.
- the embodiment of the present application further provides an electronic device, including:
- a memory for storing executable instructions of the processor
- the processor is configured to execute any interface automation testing method in the first aspect by executing the executable instructions.
- the embodiment of the present application further provides a storage medium on which a computer program is stored, and when the program is executed by a processor, any one of the interface automation testing methods in the first aspect is implemented.
- the embodiment of the present application further provides a computer program product, including a computer program, and when the computer program is executed by a processor, any interface automation testing method in the first aspect is implemented.
- The interface automation testing method, apparatus, medium, device, and program provided by the embodiments of the present application execute each test scenario use case in the test case set on the system link and obtain the use case running results according to a preset mechanism for safely running the program; determine the test baseline corresponding to each interface of the system link according to the use case running results and the use case scenario rules; execute the program of the version to be tested on the system link and obtain the test results of each interface; and finally generate assertions according to the test baseline and the test results corresponding to each interface, so that the full-link assertions of the system link are generated automatically and accurately without manual maintenance, thereby meeting the requirement of automatically covering the interface tests corresponding to the full link.
- Fig. 1 is a schematic diagram of the test flow of an interface automation testing method according to an example embodiment of the present application;
- Fig. 2 is a schematic flowchart of an interface automation testing method according to an example embodiment of the present application;
- Fig. 3 is a schematic flowchart of an assertion self-learning module according to an example embodiment of the present application;
- Fig. 4 is another schematic flowchart of an interface automation testing method according to an example embodiment of the present application;
- Fig. 5 is a schematic structural diagram of an interface automation testing apparatus according to an example embodiment of the present application;
- Fig. 6 is a schematic structural diagram of an electronic device according to an example embodiment of the present application.
- The interface automation testing method, apparatus, medium, device, and program provided in the embodiments of the present application do not require manual entry of assertions.
- In prior-art solutions, assertions are entered manually, which is costly in labor and cannot guarantee that the assertions are complete, so checkpoints are often missed; for example, assertions generated from static code by industry tools such as SOFAACTS can only guarantee the completeness of assertions within the local system, not across the full link.
- In the embodiments provided by the present application, each test scenario use case in the test case set is executed on the system link, and the use case running results are obtained according to a preset mechanism for safely running the program; the test baseline corresponding to each interface of the system link is then determined according to the use case running results and the use case scenario rules; the program of the version to be tested is then executed on the system link and the test results of each interface are obtained; and finally, assertions are generated according to the test baseline and test results corresponding to each interface, so that the full-link assertions of the system link are generated automatically and accurately without manual maintenance, thereby meeting the requirement of automatically covering the interface tests corresponding to the full link.
- Assertion (Assert): expressed as a Boolean expression, used to indicate that the value of the expression can be believed to be true at a particular point in the program.
- Interface testing (Interface Testing): a type of testing for the interfaces between system components. Interface testing is mainly used to detect the interaction points between external systems and the system, as well as between internal subsystems; the focus is on checking data exchange, transfer and control management processes, and the logical dependencies between systems.
- Interface automation testing (Interface Auto Testing): a testing method that, based on interfaces and protocols, runs systems and applications under preset conditions to evaluate the running results, where the preset conditions should include both normal and abnormal conditions.
- Sandbox (SANDBOX): a sandbox here refers to a non-intrusive runtime aspect-oriented programming (AOP) solution for the Java Virtual Machine (JVM) platform that is open-sourced by Alibaba; the solution is essentially a form of AOP implementation.
- Fig. 1 is a schematic diagram of a testing process illustrating an interface automation testing method according to an exemplary embodiment of the present application.
- In the interface automation testing method provided in this embodiment, test scenario use cases can be generated in batches through an interface automation platform; that is, the test cases in each scenario can be automatically generated in batches through the platform, and each test scenario use case is configured with a corresponding use case business serial number. It is worth noting that, precisely because the embodiments of this application are based on an interface automation platform that automatically generates a large number of test scenario use cases, the generated assertions are difficult to maintain manually; there is therefore a pressing need to generate and maintain assertions through the interface automation testing method provided by the embodiments of the present application.
- Specifically, in the embodiments of the present application, assertion self-learning is adopted to automatically generate assertions for interface return messages and the various database operation results.
- The main approach is to generate assertion expected values for the interface return messages and the various database operation results through assertion self-learning, use the expected values as the baseline, and then generate assertions by Diff-comparing and verifying the use case execution results against that baseline.
- Fig. 2 is a schematic flowchart of an interface automation testing method according to an example embodiment of the present application. As shown in Fig. 2, the interface automation testing method provided in this embodiment includes:
- Step 101: acquire a test case set.
- In this step, a test case set is acquired. The test case set includes multiple types of test case subsets; each type of test case subset includes multiple test scenario use cases, and each test scenario use case is configured with a corresponding use case business serial number.
- Step 102: execute each test scenario use case in the test case set on the system link, and obtain the use case running results according to the preset mechanism for safely running the program.
- Specifically, each test scenario use case in the test case set can be executed on the system link, and the use case running results can be obtained according to the preset safe-program-running mechanism (for example, a sandbox), where the use case running results include the database operation results and the message results returned by each interface.
- Fig. 3 is a schematic flowchart of the assertion self-learning module according to an example embodiment of the present application.
- As shown in Fig. 3, the database operation results of the full link and the message results returned by each interface in the full link can be obtained automatically through the assertion self-learning module and used as the basic data for assertion result analysis.
- Specifically, the test plan can be pulled up periodically to execute the test scenario use cases, and the business serial number bizNo of each test scenario use case can be passed as an input parameter to the assertion self-learning module, so as to automatically analyze the full-link interface return message results and database operation results of the test cases.
- In addition, the assertion self-learning module takes the use case business serial number as an input parameter and obtains, through the sandbox, the link system list, the SQL list of each interface operation, and the return message of each interface.
- The system list is combined with the databases to obtain the set of databases, tables, and fields of the link systems.
- The SQL list obtained by the sandbox is associated with this database-table-field set to remove information such as aliases from the SQL, so as to obtain the database, table, and field information of each interface operation under the link of the use case scenario.
- The database operation results of the link and the message results returned by each interface are then stored as the use case running results.
- Specifically, the backed-up use case running result may include: the test case CASE_ID, the business serial number BIZ_NO, and the return message RSP_MSG.
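- By way of illustration only, the backed-up running result described above can be modeled as a small record; the CASE_ID, BIZ_NO, and RSP_MSG fields come from the text, while the field types and the db_results layout are assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class CaseRunResult:
    case_id: str    # test case CASE_ID
    biz_no: str     # use case business serial number BIZ_NO (the sandbox input parameter)
    rsp_msg: dict   # return message RSP_MSG per interface, e.g. {"interfaceA": {...}}
    # Database operation results keyed by (database, table, field), collected via
    # the sandbox SQL list after aliases are stripped -- this layout is an assumption.
    db_results: dict = field(default_factory=dict)
```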
- Step 103: determine the test baseline corresponding to each interface of the system link according to the use case running results and the use case scenario rules.
- Step 104: execute and run the program of the version to be tested on the system link, and obtain the test results of each interface.
- Step 105: generate assertions according to the test baseline and test results corresponding to each interface.
- In steps 103 to 105, the test baseline corresponding to each interface of the system link can first be determined according to the use case running results and the use case scenario rules. For the use case scenario rules, the interface return messages and the various database operation results can be processed through assertion self-learning: for example, automatic denoising can be performed first to remove fields that do not involve logic, and then horizontal analysis and vertical analysis are used to determine the running result attributes corresponding to each interface and the assertion expected value for each interface, so that the assertion expected value is taken as the test baseline corresponding to each interface of the system link and entered into the baseline. Then, by executing the program of the version to be tested on the system link and obtaining the test results of each interface, assertions are generated according to the test baseline and test results corresponding to each interface.
- In this embodiment, each test scenario use case in the test case set is executed on the system link, and the use case running results are obtained according to the preset mechanism for safely running the program; the test baseline corresponding to each interface of the system link is then determined according to the use case running results and the use case scenario rules; the program of the version to be tested is then executed on the system link and the test results of each interface are obtained; and finally, assertions are generated according to the test baseline and test results corresponding to each interface, so that the full-link assertions of the system link are generated automatically and accurately without manual maintenance, thereby meeting the requirement of automatically covering the interface tests corresponding to the full link.
- Fig. 4 is another schematic flowchart of an interface automation testing method according to an example embodiment of the present application.
- As shown in Fig. 4, the interface automation testing method provided in this embodiment includes:
- Step 201: acquire a test case set.
- In this step, a test case set is acquired. The test case set includes multiple types of test case subsets; each type of test case subset includes multiple test scenario use cases, and each test scenario use case is configured with a corresponding use case business serial number.
- Step 202: execute each test scenario use case in the test case set on the system link, and obtain the use case running results according to the preset mechanism for safely running the program.
- Specifically, each test scenario use case in the test case set can be executed on the system link, and the use case running results can be obtained according to the preset safe-program-running mechanism (for example, a sandbox), where the use case running results include the database operation results and the message results returned by each interface.
- As in the previous embodiment, the database operation results of the full link and the message results returned by each interface in the full link can be obtained automatically through the assertion self-learning module and used as the basic data for assertion result analysis.
- Specifically, the test plan can be pulled up periodically to execute the test scenario use cases, and the business serial number bizNo of each test scenario use case can be passed as an input parameter to the assertion self-learning module, so as to automatically analyze the full-link interface return message results and database operation results of the test cases.
- In addition, the assertion self-learning module takes the use case business serial number as an input parameter and obtains, through the sandbox, the link system list, the SQL list of each interface operation, and the return message of each interface.
- The system list is combined with the databases to obtain the set of databases, tables, and fields of the link systems.
- The SQL list obtained by the sandbox is associated with this database-table-field set to remove information such as aliases from the SQL, so as to obtain the database, table, and field information of each interface operation under the link of the use case scenario.
- The database operation results of the link and the message results returned by each interface are then stored as the use case running results.
- Specifically, the backed-up use case running result may include: the test case CASE_ID, the business serial number BIZ_NO, and the return message RSP_MSG.
- Step 203: directly remove the first-type noise results from the use case running results according to preset denoising rules.
- In this step, the first-type noise results can be removed directly from the use case running results according to the preset denoising rules, where the first-type noise results are running results unrelated to the test logic of the system link. Specifically, fields such as serial numbers, timestamps, IOU numbers, and logical card numbers in the use case running results can be recognized automatically and denoised directly according to rules such as the following:
- 1. A value longer than 20 characters consisting of letters or digits can be identified as a serial number;
- 2. A value of 8, 10, 13, 14, or 17 digits that a string-to-time conversion function can process normally, or a value of length 16 or 19 consisting of digits, spaces, "-" and ":", can be identified as a time;
- 3. A value of length 20 whose first two digits are the DCN_NO can be identified as an IOU number;
- 4. A value of length 16 whose first 6 digits equal the product number can be identified as a logical card number.
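- As a minimal illustrative sketch of these rules (the DCN_NO prefixes and product numbers are configuration values assumed here, not taken from the patent):

```python
import re
from datetime import datetime

DCN_NO_PREFIXES = {"01", "02"}   # assumption: the known data-center (DCN) numbers
PRODUCT_NOS = {"622588"}         # assumption: the known 6-digit product numbers

def is_first_type_noise(value: str) -> bool:
    """Return True when a field value matches a first-type noise rule."""
    # Rule 1: serial number -- longer than 20 characters, letters/digits only.
    if len(value) > 20 and value.isalnum():
        return True
    # Rule 2a: time -- pure digits of the typical timestamp lengths.
    if value.isdigit() and len(value) in (8, 10, 13, 14, 17):
        return True
    # Rule 2b: time -- length 16 or 19 built from digits, spaces, '-' and ':'.
    if len(value) in (16, 19) and re.fullmatch(r"[\d\- :]+", value):
        fmt = "%Y-%m-%d %H:%M:%S" if len(value) == 19 else "%Y-%m-%d %H:%M"
        try:
            datetime.strptime(value, fmt)
            return True
        except ValueError:
            pass
    # Rule 3: IOU number -- length 20, first two digits are a DCN number.
    if len(value) == 20 and value[:2] in DCN_NO_PREFIXES:
        return True
    # Rule 4: logical card number -- length 16, first 6 digits equal a product number.
    if len(value) == 16 and value[:6] in PRODUCT_NOS:
        return True
    return False
```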
- Step 204: perform horizontal analysis on the use case running results to determine the running result attributes corresponding to each interface.
- Specifically, all use cases under an interface can be launched in batches n times each according to the execution plan, where n can be configured individually for each interface and defaults to 2 if not configured. The system automatically adjusts n (n >= 2) according to the number of use cases of the interface, so as to ensure that the total number of use case runs for the interface is greater than N, where N can likewise be configured and defaults to 100 if not configured.
- Here, n is the number of times the use cases of each interface are launched, and N is the minimum threshold for the number of use case launches across all interfaces of the full link.
- In addition, the number of use case runs can be adjusted dynamically to ensure the amount of sample data and thus the correctness of the training results.
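- A minimal sketch of this run-count adjustment, assuming the defaults named above (n = 2, N = 100); the function name and signature are illustrative only:

```python
def runs_per_case(case_count: int, n_configured: int | None = None, big_n: int = 100) -> int:
    """Choose n so that case_count * n exceeds the full-link threshold N."""
    n = n_configured if n_configured is not None else 2
    n = max(n, 2)                      # the system keeps n >= 2 when auto-adjusting
    while case_count > 0 and case_count * n <= big_n:
        n += 1                         # raise n until the total number of runs exceeds N
    return n
```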
- After the execution plan has run and the use case running results have been obtained, the use case running results can be analyzed horizontally to determine the running result attributes corresponding to each interface.
- The horizontal analysis includes cluster analysis of the running results generated by running different test scenario use cases in the same test case subset.
- Optionally, after a preset first number of test scenario use cases in the same test case subset have been run, the use case running results of each interface are obtained; the corresponding result equivalence rate is determined according to the result distribution of the use case running results of each interface, and the corresponding running result attribute is determined according to the result equivalence rate of each interface and a preset equivalence rate condition. Specifically, if the result equivalence rate is 100%, the corresponding running result attribute is determined to be the fixed value class, whose test baseline is a fixed value; if the result equivalence rate is greater than or equal to the preset equivalence rate, the corresponding running result attribute is determined to be the enumeration class, whose test baseline is an enumeration list; and if the result equivalence rate is less than the preset equivalence rate, the corresponding use case running result is determined to be noise and can be removed.
- In one possible design, horizontal cluster analysis can be performed on all the use case running results obtained above: if the equivalence rate of a returned message field or DB field is 100%, it is a fixed-value class and is backed up directly to the baseline; if the equivalence rate is greater than 20%, it is an enumeration class and an enumeration list is maintained; if the equivalence rate is less than 20%, it is an irregular class and is identified directly as noise, which can be removed.
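- The following sketch illustrates this classification for a single field; treating the equivalence rate as the share of the most frequent value is an interpretation, and the 20% threshold follows the text:

```python
from collections import Counter

def classify_field(values: list[str], enum_threshold: float = 0.20):
    """Classify one interface field from the values seen across different use cases."""
    counts = Counter(values)
    top_value, top_count = counts.most_common(1)[0]
    rate = top_count / len(values)            # result equivalence rate (interpretation)
    if rate == 1.0:
        return "fixed", top_value             # fixed-value class: baseline is the value
    if rate >= enum_threshold:
        return "enum", sorted(counts)         # enumeration class: baseline is the enum list
    return "noise", None                      # irregular class: removed as noise
```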
- Step 205: perform vertical analysis on the use case running results belonging to the enumeration class.
- In this step, after the horizontal analysis has determined the running result attributes corresponding to each interface, vertical analysis is performed on the use case running results belonging to the enumeration class, where the vertical analysis includes cluster analysis of the running results generated by running the same test scenario use case.
- If all the use case running results belonging to the enumeration class are identical, the use case running results are backed up to the enumeration list; if the use case running results belonging to the enumeration class differ, it is judged whether each use case running result was obtained when a preset backup condition was satisfied,
- where the preset backup condition is used to determine that the use case run is in an end state. If the judgment result is yes, each use case running result is backed up to the enumeration list; if the judgment result is no,
- only the use case running results obtained when the preset backup condition was satisfied are backed up to the enumeration list.
- Specifically, the same test scenario use case can be run multiple times, and the use case running result after each run can be obtained. If the running results are consistent every time, the fixed use case running result can be backed up directly to the baseline. If the running results are inconsistent, the run may be in an intermediate state or may be randomly hitting an enumeration; at this point the user can be notified to intervene and perform batch denoising. If the run is in an intermediate state, the user can set the preset backup condition according to the enumeration results, and the system performs the backup only after judging that the preset condition is met, thereby eliminating the intermediate state.
- If the user determines that the results randomly hit an enumeration, the enumeration list is backed up as the assertion expected value, i.e., the baseline; a subsequent test case running result is considered successful as long as it hits one of the values in the enumeration list.
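- A minimal sketch of this vertical pass for one enumeration-class field; the end-state flags stand in for the preset backup condition and are an assumption:

```python
def vertical_backup(run_values: list[str], end_state_flags: list[bool]) -> list[str]:
    """Collect the enumeration-list backup from repeated runs of the same use case."""
    if len(set(run_values)) == 1:
        return [run_values[0]]           # every run agreed: back up the single value
    # Runs disagreed: keep only values observed in an end (terminal) state, which
    # filters out intermediate-state results before backing them up.
    terminal = {v for v, done in zip(run_values, end_state_flags) if done}
    return sorted(terminal)
```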
- Step 206: generate a basic assertion according to the affiliation between the returned message result in the use case running results corresponding to each test scenario use case and a preset result enumeration set.
- In this step, the basic assertion may be generated according to the affiliation between the returned message result in the use case running result corresponding to each test scenario use case and the preset result enumeration set, and the test scenario use cases whose basic assertions conform to the preset assertion result are determined;
- the corresponding test results are then further compared with the test baseline corresponding to each interface to generate the assertions.
- Specifically, after the test cases are executed in batches, the message return values of all the test scenario use cases, i.e., the fields containing the code string, are clustered.
- The code enumerations are obtained through denoising analysis and horizontal analysis and, combined with self-learning on the use case aliases, yield return code enumerations in four categories: business success, business failure, message-does-not-conform-to-protocol-specification exception, and system failure.
- Taking the "message does not conform to the protocol specification" exception as an example, the use cases can be classified according to identification keywords in the use case aliases such as "extra long", "outside the enumeration range", and "does not conform to the data type", where the return code of each class of use case is theoretically unique.
- After the codes obtained through denoising analysis and horizontal analysis are grouped by use case class, the codes within each group are theoretically equal. In practice, the codes under each group can be analyzed for differences, and a code enumeration value with a consistency greater than 90% represents the reasonable return enumeration set of the corresponding class of use cases. Subsequent use case assertions are then judged automatically: use cases of that class whose return code equals this code are successful use cases, and other use cases whose return code differs are failed use cases.
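- A minimal sketch of this consistency check; the group names in the usage comment are hypothetical, and the 90% bar follows the text:

```python
from collections import Counter

def learn_group_codes(groups: dict[str, list[str]], consistency: float = 0.90) -> dict[str, str]:
    """For each use-case group, learn the return code whose share exceeds the bar."""
    learned = {}
    for group, codes in groups.items():
        code, count = Counter(codes).most_common(1)[0]
        if count / len(codes) > consistency:
            learned[group] = code   # later runs of this class pass iff they return this code
    return learned

# Usage with hypothetical data, grouped by alias keyword:
# learn_group_codes({"extra long": ["E001", "E001", "E001"],
#                    "outside the enumeration range": ["E002", "E002"]})
```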
- In one possible design, if the test scenario use case is a pass business use case, the test scenario use case is a failed use case when the returned message result in the corresponding use case running result does not belong to the preset result enumeration set, and a successful use case when it does; here the preset result enumeration set is the business success set.
- If the test scenario use case is a failed business use case, the test scenario use case is a successful use case when the returned message result does not belong to the preset result enumeration set, and a failed use case when it does.
- If the test scenario use case is a use case that does not conform to the protocol specification, the test scenario use case is a failed use case when the returned message result does not belong to the preset result enumeration set, and a successful use case when it does; here the preset result enumeration set is the exception set of messages that do not conform to the protocol specification.
- If the test scenario use case is a use case conforming to the protocol specification, the test scenario use case is a successful use case when the returned message result does not belong to the preset result enumeration set, and a failed use case when it does.
- In one possible concrete implementation, for business and mock use cases: if the use case type is a pass business use case and the use case result message code is not in the business success set, the use case fails; if the code is in the business success set, the result is success when the success-set enumeration is unique, and "suggested success" when the enumeration is not unique.
- If the use case type is a failed business use case and the use case result message code is not in the business failure set, the use case fails; if the code is in the business failure set, the result is success when the enumeration is unique, and "suggested success" when the enumeration is not unique.
- For field-class use cases: if the use case type is a field-does-not-conform-to-protocol-specification use case and the use case result message code is not in the protocol-specification exception set, the result is failure; if the code is in the exception set, the result is success when the enumeration is unique, and "suggested success" when the enumeration is not unique.
- If the use case type is a field-conforms-to-protocol-specification use case and the use case result message code is in the protocol-specification exception set, the result is "suggested failure"; if the code is not in the exception set, the result is "suggested success".
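- A minimal sketch of this decision table; the type labels and set names are illustrative:

```python
def basic_assert(case_type: str, code: str, success_set: set[str],
                 failure_set: set[str], protocol_exc_set: set[str]) -> str:
    """Quick basic-assertion verdict from the learned return-code enumeration sets."""
    def verdict(target: set[str]) -> str:
        if code not in target:
            return "fail"
        return "success" if len(target) == 1 else "suggested_success"

    if case_type == "pass_business":
        return verdict(success_set)
    if case_type == "fail_business":
        return verdict(failure_set)
    if case_type == "field_nonconforming":
        return verdict(protocol_exc_set)
    if case_type == "field_conforming":
        # A conforming field should NOT hit the protocol-exception set.
        return "suggested_fail" if code in protocol_exc_set else "suggested_success"
    raise ValueError(f"unknown use case type: {case_type}")
```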
- It is also worth noting that, besides regression testing, the above basic-assertion generation method can be applied to unit, smoke, or SIT testing.
- In regression testing it mainly serves as an accelerator, since the subsequent Diff judgment is time-consuming.
- Basic assertions can be used to quickly determine whether an accurate Diff is necessary, thereby reducing the consumption of resources on invalid Diffs.
- Step 207: execute and run the program of the version to be tested on the system link, and obtain the test results of each interface.
- Step 208: generate assertions according to the test baseline and test results corresponding to each interface.
- In steps 207 to 208, the test baseline corresponding to each interface of the system link can first be determined according to the use case running results and the use case scenario rules. For the use case scenario rules, the interface return messages and the various database operation results can be processed through assertion self-learning: automatic denoising is performed first to remove fields that do not involve logic, then horizontal and vertical analysis determine the running result attributes corresponding to each interface and the assertion expected value for each interface, and the assertion expected value is taken as the test baseline corresponding to each interface of the system link and entered into the baseline. Then, by executing the program of the version to be tested on the system link and obtaining the test results of each interface, assertions are generated according to the test baseline and test results corresponding to each interface.
- Specifically, after the assertions are generated according to the test baseline and test results corresponding to each interface, the interface parts where the test baseline indicated by the assertion differs from the test result can be determined to be system change points or system vulnerabilities. During regression, after the use cases are executed, the new full-link assertions are obtained and the assertion expectations are automatically Diff-compared with the baseline; the differing parts are system change points or system bugs.
- In one possible scenario, the field keys differ: on the use case link, interface B adds a new return field "b" in the version to be tested, so the newly backed-up result contains an extra "b=1" in the return message of interface B, while in the baseline the interface has no "b" field; the comparison is therefore inconsistent, and the user is prompted to locate the cause. If the user judges this to be new content of the current version, the result is updated directly into the baseline, and the baseline is subsequently used as the assertion judgment basis.
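- A minimal sketch of this baseline Diff and the user-confirmed baseline update; the flat key-value view of a result is a simplification:

```python
def diff_against_baseline(baseline: dict[str, str], result: dict[str, str]) -> list[str]:
    """List value and key differences between the baseline and a new run result."""
    findings = []
    for key in sorted(baseline.keys() | result.keys()):
        if key not in baseline:
            findings.append(f"new field {key}={result[key]} (possible version change)")
        elif key not in result:
            findings.append(f"missing field {key} (baseline value {baseline[key]})")
        elif baseline[key] != result[key]:
            findings.append(f"{key}: baseline {baseline[key]} != result {result[key]}")
    return findings          # an empty list means the assertion passes

def accept_as_change(baseline: dict[str, str], result: dict[str, str], key: str) -> None:
    baseline[key] = result[key]   # user confirmed a requirement change: update the baseline
```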
- Optionally, if the use case running results include a structured large field, the structured large field is split for comparison against the preset denoising rules, so as to remove the first-type noise results from each split field.
- Specifically, for a structured large field, the character "{" or the escape character "\" can be used to recognize that the field may be a large field; the String is then converted into a dictionary, and if the conversion succeeds, denoised assertions can be obtained for the split fields in the same way as in this step.
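- A minimal sketch of this split-and-denoise step, assuming the large field carries a JSON-like payload (`is_first_type_noise` is the rule sketch shown earlier):

```python
import json

def split_and_denoise(value: str) -> dict | None:
    """Split a structured large field and drop first-type noise from its sub-fields."""
    if "{" not in value and "\\" not in value:
        return None                                   # not a structured large field
    try:
        parsed = json.loads(value.replace("\\", ""))  # unescape, then convert to a dict
    except (json.JSONDecodeError, TypeError):
        return None                                   # conversion failed: keep the field as-is
    return {k: v for k, v in parsed.items()
            if not is_first_type_noise(str(v))}       # same denoising rules as above
```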
- In this embodiment, each test scenario use case in the test case set is executed on the system link, and the use case running results are obtained according to the preset mechanism for safely running the program; the test baseline corresponding to each interface of the system link is then determined according to the use case running results and the use case scenario rules; the program of the version to be tested is then executed on the system link and the test results of each interface are obtained; and finally, assertions are generated according to the test baseline and test results corresponding to each interface, so that the full-link assertions of the system link are generated automatically and accurately without manual maintenance, thereby meeting the requirement of automatically covering the interface tests corresponding to the full link.
- Furthermore, the baseline of the full-link interface assertions can be generated automatically, and the comparison then covers the full-link database operation results and interface return message results.
- Noise in the use case running results is analyzed and removed automatically, and preliminary judgments are made through the self-learning of basic assertions, thereby reducing invalid Diff comparisons.
- If the use case running results include structured large fields, the structured large fields can also be split and compared, supporting denoised Diff comparison of large fields.
- The interface automation testing device provided by the embodiment of the present application can be implemented by software, hardware, or a combination of the two.
- Fig. 5 is a schematic structural diagram of an interface automation testing device according to an example embodiment of the present application. As shown in Fig. 5, the interface automation testing device provided in this embodiment includes:
- the obtaining module 301 is used to obtain a test case set, where the test case set includes multiple types of test case subsets, each type of test case subset includes multiple test scenario use cases, and each test scenario use case is configured with a corresponding use case business serial number;
- the processing module 302 is configured to execute each test scenario use case in the test case set on the system link and obtain the use case running results according to the preset mechanism for safely running the program, where the use case running results include the database operation results and the message results returned by each interface;
- the processing module 302 is further configured to determine a test baseline corresponding to each interface of the system link according to the use case running result and the use case scenario rules;
- the obtaining module 301 is also used to execute and run the version program to be tested in the system link, and obtain the test results of each interface;
- the processing module 302 is further configured to generate an assertion according to the test baseline and the test result corresponding to each interface.
- processing module 302 is specifically configured to:
- the horizontal analysis includes cluster analysis of the running results generated by running different test scenario use cases in the same test case subset;
- the test baseline corresponding to each interface of the system link is determined according to the use case running results and the corresponding result attributes.
- processing module 302 is specifically configured to:
- processing module 302 is specifically configured to:
- if the result equivalence rate is 100%, the corresponding running result attribute is determined to be the fixed value class, and the test baseline corresponding to the fixed value class is a fixed value;
- if the result equivalence rate is greater than or equal to the preset equivalence rate, the corresponding running result attribute is determined to be the enumeration class, and the test baseline corresponding to the enumeration class is an enumeration list;
- processing module 302 is further configured to:
- processing module 302 is further configured to:
- the test results corresponding to the test scenario use cases whose basic assertions conform to the preset assertion result are further compared with the test baseline corresponding to each interface to generate the assertions.
- If the test scenario use case is a pass business use case, the test scenario use case is a failed use case when the returned message result does not belong to the preset result enumeration set, and a successful use case when it does; here the preset result enumeration set is the business success set.
- If the test scenario use case is a failed business use case, the test scenario use case is a successful use case when the returned message result does not belong to the preset result enumeration set, and a failed use case when it does.
- If the test scenario use case is a use case that does not conform to the protocol specification, the test scenario use case is a failed use case when the returned message result does not belong to the preset result enumeration set, and a successful use case when it does; here the preset result enumeration set is the exception set of messages that do not conform to the protocol specification.
- If the test scenario use case is a use case conforming to the protocol specification, the test scenario use case is a successful use case when the returned message result does not belong to the preset result enumeration set, and a failed use case when it does.
- the processing module 302 is further configured to determine that the interface parts where the test baseline indicated by the assertion differs from the test result are system change points or system vulnerabilities.
- if the use case running result includes a structured large field, the structured large field is split for comparison against the preset denoising rules, so as to remove the first-type noise results from each split field.
- This embodiment provides an interface automation testing device, which can be used to execute the steps in the foregoing method embodiments.
- Fig. 6 is a schematic structural diagram of an electronic device according to an example embodiment of the present application.
- an electronic device 400 provided in this embodiment includes:
- the memory 402 is used to store executable instructions of the processor; the memory may be a flash (flash memory);
- the processor 401 is configured to execute each step in the above method by executing the executable instructions.
- the memory 402 can be independent or integrated with the processor 401.
- the electronic device 400 may further include:
- the bus 403 is used to connect the processor 401 and the memory 402 .
- This embodiment also provides a readable storage medium, in which a computer program is stored, and when at least one processor of the electronic device executes the computer program, the electronic device executes each step in the above method.
- This embodiment also provides a program product, where the program product includes a computer program, and the computer program is stored in a readable storage medium. At least one processor of the electronic device can read the computer program from the readable storage medium, and the at least one processor executes the computer program so that the electronic device implements each step in the above method.
- This embodiment also provides a computer program, including program code; when the computer program runs, the program code executes each step in the above method.
- The aforementioned program can be stored in a computer-readable storage medium; when executed, the program performs the steps of the above method embodiments, and the aforementioned storage medium includes a ROM, a RAM, a magnetic disk, an optical disk, or various other media that can store program code.
Abstract
The present application provides an interface automation testing method, apparatus, medium, device, and program. In the interface automation testing method provided by the embodiments of the present application, each test scenario use case in a test case set is executed on a system link, and use case running results are obtained according to a preset mechanism for safely running the program; the test baseline corresponding to each interface of the system link is then determined according to the running results of each test scenario use case in the production version code environment and the use case scenario rules; the program of the version to be tested is then executed on the system link and the test results of each interface are obtained; and finally, assertions are generated according to the test baseline and test results corresponding to each interface, so that the full-link assertions of the system link are generated automatically and accurately without manual maintenance, thereby meeting the requirement of automatically covering the interface tests corresponding to the full link.
Description
The present application claims priority to the Chinese patent application No. 202111609529.X, entitled "Interface automation testing method, apparatus, medium, device, and program", filed with the Chinese Patent Office on December 27, 2021, the entire contents of which are incorporated herein by reference.
The present application relates to the technical field of interface testing, and in particular to an interface automation testing method, apparatus, medium, device, and program.
With the development of computer technology, more and more technologies are applied in the financial field, and the traditional financial industry is gradually transforming into financial technology (Fintech). Interface testing technology is no exception; however, the security and real-time requirements of the financial industry also place higher demands on the technology.
At present, in interface automation testing for financial technology, the usual approach is to generate a database (DB) table structure template based on static code analysis and then verify it against all dynamic DB tables by association; the systems involved in the business process corresponding to an interface are configured manually, and tools are used to back up and compare the dynamic table data in all databases under the involved systems, thereby completing the testing of each system.
However, when DB table structure templates are generated through static code analysis of interfaces, the generated checkpoints are incomplete and cannot automatically cover the interface testing requirements of the full link.
The embodiments of the present application provide an interface automation testing method, apparatus, medium, device, and program to solve the technical problem that the prior art cannot automatically cover the interface testing requirements of the full link.
In a first aspect, an embodiment of the present application provides an interface automation testing method, including:
acquiring a test case set, where the test case set includes multiple types of test case subsets, each type of test case subset includes multiple test scenario use cases, and each test scenario use case is configured with a corresponding use case business serial number;
executing each test scenario use case in the test case set on the system link, and obtaining use case running results according to a preset mechanism for safely running the program, where the use case running results include database operation results and the message results returned by each interface;
determining the test baseline corresponding to each interface of the system link according to the running results of each test scenario use case in the production version code environment and the use case scenario rules;
executing and running the program of the version to be tested on the system link, and obtaining the test results of each interface;
generating assertions according to the test baseline and the test results corresponding to each interface.
In one possible design, determining the test baseline corresponding to each interface of the system link according to the running results of each test scenario use case in the production version code environment and the use case scenario rules includes:
directly removing first-type noise results from the use case running results according to preset denoising rules, where the first-type noise results are running results unrelated to the test logic of the system link;
performing horizontal analysis on the use case running results to determine the running result attributes corresponding to each interface, where the horizontal analysis includes cluster analysis of the running results generated by running different test scenario use cases in the same test case subset;
determining the test baseline corresponding to each interface of the system link according to the use case running results and the corresponding running result attributes.
In one possible design, performing horizontal analysis on the use case running results to determine the running result attributes corresponding to each interface includes:
after running a preset first number of test scenario use cases in the same test case subset, obtaining the use case running results of each interface;
determining the corresponding result equivalence rate according to the result distribution of the use case running results of each interface;
determining the corresponding running result attribute according to the result equivalence rate of each interface and a preset equivalence rate condition.
In one possible design, determining the corresponding running result attribute according to the result equivalence rate of each interface and the preset equivalence rate condition includes:
if the result equivalence rate is 100%, determining that the corresponding running result attribute is the fixed value class, where the test baseline corresponding to the fixed value class is a fixed value;
if the result equivalence rate is greater than or equal to the preset equivalence rate, determining that the corresponding running result attribute is the enumeration class, where the test baseline corresponding to the enumeration class is an enumeration list;
if the result equivalence rate is less than the preset equivalence rate, determining that the corresponding use case running result is noise.
In one possible design, after performing horizontal analysis on the use case running results to determine the running result attributes corresponding to each interface, the method further includes:
performing vertical analysis on the use case running results belonging to the enumeration class, where the vertical analysis includes cluster analysis of the running results generated by running the same test scenario use case;
if all the use case running results belonging to the enumeration class are identical, backing up the use case running results to the enumeration list;
if the use case running results belonging to the enumeration class differ, judging whether each use case running result was obtained when a preset backup condition was satisfied, where the preset backup condition is used to determine that the use case run is in an end state;
if the judgment result is yes, backing up each use case running result to the enumeration list;
if the judgment result is no, backing up to the enumeration list only the use case running results obtained when the preset backup condition was satisfied.
In one possible design, before executing and running the program of the version to be tested on the system link and obtaining the test results of each interface, the method further includes:
generating a basic assertion according to the affiliation between the returned message result in the use case running result corresponding to each test scenario use case and a preset result enumeration set;
determining the test results corresponding to the test scenario use cases whose basic assertions conform to the preset assertion result, and further comparing them with the test baseline corresponding to each interface to generate the assertions.
In one possible design, generating a basic assertion according to the affiliation between the returned message result in the use case running result corresponding to each test scenario use case and the preset result enumeration set includes:
if the test scenario use case is a pass business use case, the test scenario use case is a failed use case when the returned message result in the corresponding use case running result does not belong to the preset result enumeration set, and a successful use case when the returned message result belongs to the preset result enumeration set, where the preset result enumeration set is the business success set;
if the test scenario use case is a failed business use case, the test scenario use case is a successful use case when the returned message result in the corresponding use case running result does not belong to the preset result enumeration set, and a failed use case when the returned message result belongs to the preset result enumeration set;
if the test scenario use case is a use case that does not conform to the protocol specification, the test scenario use case is a failed use case when the returned message result in the corresponding use case running result does not belong to the preset result enumeration set, and a successful use case when the returned message result belongs to the preset result enumeration set, where the preset result enumeration set is the exception set of messages that do not conform to the protocol specification;
if the test scenario use case is a use case conforming to the protocol specification, the test scenario use case is a successful use case when the returned message result in the corresponding use case running result does not belong to the preset result enumeration set, and a failed use case when the returned message result belongs to the preset result enumeration set.
In one possible design, after generating the assertions according to the test baseline and the test results corresponding to each interface, the method further includes:
determining that the interface parts where the test baseline indicated by the assertion differs from the test result are system change points or system vulnerabilities.
In one possible design, before directly removing the first-type noise results from the use case running results according to the preset denoising rules, the method further includes:
if the use case running results include a structured large field, splitting the structured large field for comparison against the preset denoising rules, so as to remove the first-type noise results from each split field.
In a second aspect, an embodiment of the present application further provides an interface automation testing apparatus, including:
an acquisition module, configured to acquire a test case set, where the test case set includes multiple types of test case subsets, each type of test case subset includes multiple test scenario use cases, and each test scenario use case is configured with a corresponding use case business serial number;
a processing module, configured to execute each test scenario use case in the test case set on the system link and obtain use case running results according to a preset mechanism for safely running the program, where the use case running results include database operation results and the message results returned by each interface;
the processing module is further configured to determine the test baseline corresponding to each interface of the system link according to the running results of each test scenario use case in the production version code environment and the use case scenario rules;
the acquisition module is further configured to execute and run the program of the version to be tested on the system link and obtain the test results of each interface;
the processing module is further configured to generate assertions according to the test baseline and the test results corresponding to each interface.
In one possible design, the processing module is specifically configured to:
directly remove first-type noise results from the use case running results according to the preset denoising rules, where the first-type noise results are running results unrelated to the test logic of the system link;
perform horizontal analysis on the use case running results to determine the running result attributes corresponding to each interface, where the horizontal analysis includes cluster analysis of the running results generated by running different test scenario use cases in the same test case subset;
determine the test baseline corresponding to each interface of the system link according to the use case running results and the corresponding result attributes.
In one possible design, the processing module is specifically configured to:
after running a preset first number of test scenario use cases in the same test case subset, obtain the use case running results of each interface;
determine the corresponding result equivalence rate according to the result distribution of the use case running results of each interface;
determine the corresponding running result attribute according to the result equivalence rate of each interface and a preset equivalence rate condition.
In one possible design, the processing module is specifically configured to:
if the result equivalence rate is 100%, determine that the corresponding running result attribute is the fixed value class, where the test baseline corresponding to the fixed value class is a fixed value;
if the result equivalence rate is greater than or equal to the preset equivalence rate, determine that the corresponding running result attribute is the enumeration class, where the test baseline corresponding to the enumeration class is an enumeration list;
if the result equivalence rate is less than the preset equivalence rate, determine that the corresponding use case running result is noise.
In one possible design, the processing module is further configured to:
perform vertical analysis on the use case running results belonging to the enumeration class, where the vertical analysis includes cluster analysis of the running results generated by running the same test scenario use case;
if all the use case running results belonging to the enumeration class are identical, back up the use case running results to the enumeration list;
if the use case running results belonging to the enumeration class differ, judge whether each use case running result was obtained when a preset backup condition was satisfied, where the preset backup condition is used to determine that the use case run is in an end state;
if the judgment result is yes, back up each use case running result to the enumeration list;
if the judgment result is no, back up to the enumeration list only the use case running results obtained when the preset backup condition was satisfied.
In one possible design, the processing module is further configured to:
generate a basic assertion according to the affiliation between the returned message result in the use case running result corresponding to each test scenario use case and a preset result enumeration set;
determine the test results corresponding to the test scenario use cases whose basic assertions conform to the preset assertion result, and further compare them with the test baseline corresponding to each interface to generate the assertions.
In one possible design, if the test scenario use case is a pass business use case, the test scenario use case is a failed use case when the returned message result in the corresponding use case running result does not belong to the preset result enumeration set, and a successful use case when the returned message result belongs to the preset result enumeration set, where the preset result enumeration set is the business success set;
if the test scenario use case is a failed business use case, the test scenario use case is a successful use case when the returned message result in the corresponding use case running result does not belong to the preset result enumeration set, and a failed use case when the returned message result belongs to the preset result enumeration set;
if the test scenario use case is a use case that does not conform to the protocol specification, the test scenario use case is a failed use case when the returned message result in the corresponding use case running result does not belong to the preset result enumeration set, and a successful use case when the returned message result belongs to the preset result enumeration set, where the preset result enumeration set is the exception set of messages that do not conform to the protocol specification;
if the test scenario use case is a use case conforming to the protocol specification, the test scenario use case is a successful use case when the returned message result in the corresponding use case running result does not belong to the preset result enumeration set, and a failed use case when the returned message result belongs to the preset result enumeration set.
In one possible design, the processing module is further configured to determine that the interface parts where the test baseline indicated by the assertion differs from the test result are system change points or system vulnerabilities.
In one possible design, if the use case running results include a structured large field, the structured large field is split for comparison against the preset denoising rules, so as to remove the first-type noise results from each split field.
In a third aspect, an embodiment of the present application further provides an electronic device, including:
a processor; and,
a memory for storing executable instructions of the processor;
where the processor is configured to execute any one of the interface automation testing methods of the first aspect by executing the executable instructions.
In a fourth aspect, an embodiment of the present application further provides a storage medium on which a computer program is stored; when the program is executed by a processor, any one of the interface automation testing methods of the first aspect is implemented.
In a fifth aspect, an embodiment of the present application further provides a computer program product, including a computer program; when the computer program is executed by a processor, any one of the interface automation testing methods of the first aspect is implemented.
In the interface automation testing method, apparatus, medium, device, and program provided by the embodiments of the present application, each test scenario use case in the test case set is executed on the system link, and the use case running results are obtained according to a preset mechanism for safely running the program; the test baseline corresponding to each interface of the system link is then determined according to the use case running results and the use case scenario rules; the program of the version to be tested is then executed on the system link and the test results of each interface are obtained; and finally, assertions are generated according to the test baseline and test results corresponding to each interface, so that the full-link assertions of the system link are generated automatically and accurately without manual maintenance, thereby meeting the requirement of automatically covering the interface tests corresponding to the full link.
In order to explain the technical solutions in the embodiments of the present application or the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are introduced briefly below. Obviously, the drawings described below are only some embodiments of the present application, and those of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a schematic diagram of the test flow of an interface automation testing method according to an example embodiment of the present application;
Fig. 2 is a schematic flowchart of an interface automation testing method according to an example embodiment of the present application;
Fig. 3 is a schematic flowchart of an assertion self-learning module according to an example embodiment of the present application;
Fig. 4 is another schematic flowchart of an interface automation testing method according to an example embodiment of the present application;
Fig. 5 is a schematic structural diagram of an interface automation testing apparatus according to an example embodiment of the present application;
Fig. 6 is a schematic structural diagram of an electronic device according to an example embodiment of the present application.
To make the objectives, technical solutions, and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some rather than all of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application without creative effort, including but not limited to combinations of multiple embodiments, fall within the protection scope of the present application.
The terms "first", "second", "third", "fourth", and the like (if present) in the specification, the claims, and the above drawings of the present application are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that the data so used are interchangeable where appropriate, so that the embodiments of the present application described herein can, for example, be implemented in orders other than those illustrated or described herein. Furthermore, the terms "including" and "having" and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device that includes a series of steps or units is not necessarily limited to those steps or units expressly listed, but may include other steps or units that are not expressly listed or that are inherent to such process, method, product, or device.
With the development of computer technology, more and more technologies are applied in the financial field, and the traditional financial industry is gradually transforming into financial technology (Fintech). Interface testing technology is no exception; however, the security and real-time requirements of the financial industry also place higher demands on the technology. At present, in interface automation testing for financial technology, the usual approach is to generate a database (DB) table structure template based on static code analysis and then verify it against all dynamic DB tables by association; the systems involved in the business process corresponding to an interface are configured manually, and tools are then used to back up and compare the dynamic table data in all databases under the involved systems, thereby completing the testing of each system. For example, the above operations can be performed through SOFAACTS, the open-source automation testing framework of Ant Financial.
However, when DB table structure templates are generated through static code analysis of interfaces, the generated checkpoints are incomplete and cannot automatically cover the interface testing requirements of the full link. In addition, the above prior-art approach has the problem of high maintenance cost, since the expected values of the checkpoints must be maintained manually; when the protocol version changes, SOFAACTS requires a manual trigger to generate a new template. Moreover, verification against all dynamic DB tables requires backing up all dynamic tables under the configured systems, which easily wastes storage space.
In view of the above technical problems, the interface automation testing method, apparatus, medium, device, and program provided in the embodiments of the present application, in contrast to the testing flow in the above prior art, do not require manual entry of assertions. In prior-art solutions, assertions are entered manually, which is costly in labor and cannot guarantee that the assertions are complete, so checkpoints are often missed; for example, assertions generated from static code by industry tools such as SOFAACTS can only guarantee the completeness of assertions within the local system, not across the full link. In the embodiments provided by the present application, each test scenario use case in the test case set is executed on the system link, and the use case running results are obtained according to a preset mechanism for safely running the program; the test baseline corresponding to each interface of the system link is then determined according to the use case running results and the use case scenario rules; the program of the version to be tested is then executed on the system link and the test results of each interface are obtained; and finally, assertions are generated according to the test baseline and test results corresponding to each interface, so that the full-link assertions of the system link are generated automatically and accurately without manual maintenance, thereby meeting the requirement of automatically covering the interface tests corresponding to the full link.
To understand the present application more accurately, the following terms are defined and explained as follows:
(1) Assertion (Assert): expressed as a Boolean expression, used to indicate that the value of the expression can be believed to be true at a particular point in the program.
(2) Interface testing (Interface Testing): a type of testing for the interfaces between system components. Interface testing is mainly used to detect the interaction points between external systems and the system, as well as between internal subsystems; the focus is on checking data exchange, transfer and control management processes, and the logical dependencies between systems.
(3) Interface automation testing (Interface Auto Testing): a testing method that, based on interfaces and protocols, runs systems and applications under preset conditions to evaluate the running results, where the preset conditions should include both normal and abnormal conditions.
(4) Sandbox (SANDBOX): a sandbox refers to a technology; it is a non-intrusive runtime aspect-oriented programming (AOP) solution for the Java Virtual Machine (JVM) platform open-sourced by Alibaba, and is essentially a form of AOP implementation.
Fig. 1 is a schematic diagram of the test flow of an interface automation testing method according to an example embodiment of the present application. As shown in Fig. 1, in the interface automation testing method provided in this embodiment, test scenario use cases can be generated in batches through an interface automation platform; that is, test cases in each scenario can be automatically generated in batches through the platform, and each test scenario use case is configured with a corresponding use case business serial number. It is worth noting that, precisely because the embodiments of this application are based on an interface automation platform that automatically generates a large number of test scenario use cases, the generated assertions are difficult to maintain manually; there is therefore a pressing need to generate and maintain assertions through the interface automation testing method provided by the embodiments of the present application.
Specifically, in the embodiments of the present application, assertion self-learning is adopted to automatically generate assertions for the interface return messages and the various database operation results. The main approach is to generate assertion expected values for the interface return messages and the various database operation results through assertion self-learning, use the expected values as the baseline, and then generate assertions by Diff-comparing and verifying the use case execution results against that baseline.
Fig. 2 is a schematic flowchart of an interface automation testing method according to an example embodiment of the present application. As shown in Fig. 2, the interface automation testing method provided in this embodiment includes:
Step 101: acquire a test case set.
In this step, a test case set is acquired. The test case set includes multiple types of test case subsets; each type of test case subset includes multiple test scenario use cases, and each test scenario use case is configured with a corresponding use case business serial number.
Step 102: execute each test scenario use case in the test case set on the system link, and obtain the use case running results according to the preset mechanism for safely running the program.
Specifically, each test scenario use case in the test case set can be executed on the system link, and the use case running results can be obtained according to the preset safe-program-running mechanism (for example, a sandbox), where the use case running results include the database operation results and the message results returned by each interface.
Fig. 3 is a schematic flowchart of the assertion self-learning module according to an example embodiment of the present application. As shown in Fig. 3, the database operation results of the full link and the message results returned by each interface in the full link can be obtained automatically through the assertion self-learning module and used as the basic data for assertion result analysis. Specifically, the test plan can be pulled up periodically to execute the test scenario use cases, and the business serial number bizNo of each test scenario use case can be passed as an input parameter to the assertion self-learning module, so as to automatically analyze the full-link interface return message results and database operation results of the test cases. In addition, the assertion self-learning module takes the use case business serial number as an input parameter and obtains, through the sandbox, the link system list, the SQL list of each interface operation, and the return message of each interface. The system list is combined with the databases to obtain the set of databases, tables, and fields of the link systems. The SQL list obtained by the sandbox is associated with this database-table-field set to remove information such as aliases from the SQL, so as to obtain the database, table, and field information of each interface operation under the link of the use case scenario. Then, the database operation results of the link and the message results returned by each interface are stored as the use case running results. Specifically, the backed-up use case running result may include: the test case CASE_ID, the business serial number BIZ_NO, and the return message RSP_MSG.
Step 103: determine the test baseline corresponding to each interface of the system link according to the use case running results and the use case scenario rules.
Step 104: execute and run the program of the version to be tested on the system link, and obtain the test results of each interface.
Step 105: generate assertions according to the test baseline and test results corresponding to each interface.
In steps 103 to 105, the test baseline corresponding to each interface of the system link can first be determined according to the use case running results and the use case scenario rules. For the use case scenario rules, the interface return messages and the various database operation results can be processed through assertion self-learning: for example, automatic denoising can be performed first to remove fields that do not involve logic, and then horizontal analysis and vertical analysis are used to determine the running result attributes corresponding to each interface and the assertion expected value for each interface, so that the assertion expected value is taken as the test baseline corresponding to each interface of the system link and entered into the baseline. Then, by executing the program of the version to be tested on the system link and obtaining the test results of each interface, assertions are generated according to the test baseline and test results corresponding to each interface.
In this embodiment, each test scenario use case in the test case set is executed on the system link, and the use case running results are obtained according to the preset mechanism for safely running the program; the test baseline corresponding to each interface of the system link is then determined according to the use case running results and the use case scenario rules; the program of the version to be tested is then executed on the system link and the test results of each interface are obtained; and finally, assertions are generated according to the test baseline and test results corresponding to each interface, so that the full-link assertions of the system link are generated automatically and accurately without manual maintenance, thereby meeting the requirement of automatically covering the interface tests corresponding to the full link.
Fig. 4 is another schematic flowchart of an interface automation testing method according to an example embodiment of the present application. As shown in Fig. 4, the interface automation testing method provided in this embodiment includes:
Step 201: acquire a test case set.
In this step, a test case set is acquired. The test case set includes multiple types of test case subsets; each type of test case subset includes multiple test scenario use cases, and each test scenario use case is configured with a corresponding use case business serial number.
Step 202: execute each test scenario use case in the test case set on the system link, and obtain the use case running results according to the preset mechanism for safely running the program.
Specifically, each test scenario use case in the test case set can be executed on the system link, and the use case running results can be obtained according to the preset safe-program-running mechanism (for example, a sandbox), where the use case running results include the database operation results and the message results returned by each interface.
The database operation results of the full link and the message results returned by each interface in the full link can be obtained automatically through the assertion self-learning module and used as the basic data for assertion result analysis. Specifically, the test plan can be pulled up periodically to execute the test scenario use cases, and the business serial number bizNo of each test scenario use case can be passed as an input parameter to the assertion self-learning module, so as to automatically analyze the full-link interface return message results and database operation results of the test cases. In addition, the assertion self-learning module takes the use case business serial number as an input parameter and obtains, through the sandbox, the link system list, the SQL list of each interface operation, and the return message of each interface. The system list is combined with the databases to obtain the set of databases, tables, and fields of the link systems. The SQL list obtained by the sandbox is associated with this database-table-field set to remove information such as aliases from the SQL, so as to obtain the database, table, and field information of each interface operation under the link of the use case scenario. Then, the database operation results of the link and the message results returned by each interface are stored as the use case running results. Specifically, the backed-up use case running result may include: the test case CASE_ID, the business serial number BIZ_NO, and the return message RSP_MSG.
Step 203: directly remove the first-type noise results from the use case running results according to the preset denoising rules.
In this step, the first-type noise results can be removed directly from the use case running results according to the preset denoising rules, where the first-type noise results are running results unrelated to the test logic of the system link.
Specifically, fields such as serial numbers, timestamps, IOU numbers, and logical card numbers in the use case running results can be recognized automatically and denoised directly after rule-based identification. For example:
1. A value longer than 20 characters consisting of letters or digits can be identified as a serial number;
2. A value of 8, 10, 13, 14, or 17 digits that a string-to-time conversion function can process normally, or a value of length 16 or 19 consisting of digits, spaces, "-" and ":", can be identified as a time;
3. A value of length 20 whose first two digits are the DCN_NO can be identified as an IOU number;
4. A value of length 16 whose first 6 digits equal the product number can be identified as a logical card number.
步骤204、对用例运行结果进行横向分析,以确定各个接口对应的运行结果属性。
具体的,可以按照执行计划批量拉起接口下全部用例各n次,其中,n可以根据各个接口进行单独配置,如未配置初始值为2。为了方便说明,可以以默认值n=2进行举例,系统根据接口用例量,n>=2自动调节,以此保证接口用例总运行次数大于N,N同样可以进行配置,如未配置默认初始值为100。值得理解的,n为各个接口对应的拉起用例的次数,而N为全链路中所有接口拉起用例的次数最低阈值。此外,还可以动态调节用例运行次数,以此保证样本数据量,从而保证训练结果正确性。
After the execution plan has run and the case run results have been obtained, horizontal analysis may be performed on the case run results to determine the run-result attribute of each interface; horizontal analysis includes cluster analysis of the run results generated by running different test scenario cases within the same test case subset.
Optionally, after a preset first number of test scenario cases in the same test case subset have been run, the case run results of each interface are obtained; the corresponding result equal-value rate is then determined from the result distribution of each interface's case run results, and each interface's run-result attribute is determined from its result equal-value rate and a preset equal-value-rate condition.
Specifically, if the result equal-value rate is 100%, the corresponding run-result attribute is determined to be the fixed-value class, whose test baseline is the fixed value; if the rate is greater than or equal to the preset equal-value rate, the attribute is the enumeration class, whose test baseline is an enumeration list; if the rate is below the preset equal-value rate, the corresponding case run result is determined to be noise and may be removed.
In one possible design, horizontal cluster analysis may be performed on all the case run results obtained above: if a returned-message field or DB field has an equal-value rate of 100%, it belongs to the fixed-value class and is backed up directly to the baseline; if the rate exceeds 20%, it belongs to the enumeration class and an enumeration list is maintained; if the rate is below 20%, it is irregular, deemed noise outright, and may be removed.
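A sketch of this horizontal classification, assuming the run results for one field are collected into a list; the 100% and 20% thresholds come from the text, the function shape is assumed:

```python
from collections import Counter

def classify_result_attribute(values: list, enum_threshold: float = 0.20):
    """Compute the equal-value rate of one field across runs and classify
    it: 100% equal -> fixed value; rate >= threshold -> enumeration class;
    otherwise -> noise. Returns (attribute, baseline)."""
    if not values:
        return "noise", None
    top_value, top_count = Counter(map(str, values)).most_common(1)[0]
    rate = top_count / len(values)
    if rate == 1.0:
        return "fixed", top_value                     # baseline is the fixed value
    if rate >= enum_threshold:
        return "enum", sorted(set(map(str, values)))  # baseline is an enum list
    return "noise", None
```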
Step 205: perform longitudinal analysis on the case run results belonging to the enumeration class.
In this step, after the horizontal analysis has determined the run-result attribute of each interface, longitudinal analysis must be performed on the case run results belonging to the enumeration class, where longitudinal analysis includes cluster analysis of the run results generated by running the same test scenario case. If all run results of an enumeration-class case are identical, the run result is backed up to the enumeration list. If they differ, it is determined whether each run result was obtained while a preset backup condition was satisfied, the preset backup condition being used to confirm that the case run has reached its end state: if so, each run result is backed up to the enumeration list; if not, only the run results obtained while the preset backup condition was satisfied are backed up to the enumeration list.
Specifically, the same test scenario case may be run multiple times and the run result collected after each run. If every run yields the same result, that fixed run result can be backed up to the baseline directly. If the results differ, the case may be in an intermediate run state or may be randomly hitting different enumeration values; the user can then be notified to intervene with batch denoising. If it is an intermediate state, the user can set a preset backup condition based on the enumeration-class results, and the system backs up only after judging the condition satisfied, thereby eliminating the intermediate state. If the user determines that the case randomly hits enumeration values, the enumeration list is backed up to the baseline as the expected assertion value, and a subsequent test case run is deemed successful as long as its result hits any member of the enumeration list.
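A sketch of this longitudinal handling, assuming hashable run results and a user-supplied end-state predicate standing in for the preset backup condition:

```python
def longitudinal_merge(results: list, enum_list: list, is_final_state=None) -> list:
    """Longitudinal analysis for one scenario case run repeatedly. If
    every run agrees, back up that single value; if runs differ, only
    results satisfying the preset backup condition (checked by the
    is_final_state predicate) are backed up to the enumeration list."""
    if len(set(results)) == 1:
        merged = enum_list + [results[0]]
    elif is_final_state is None:
        # differing results and no backup condition yet: the user must
        # decide between intermediate state and randomly hit enumeration
        raise ValueError("inconsistent results: user review required")
    else:
        merged = enum_list + [r for r in results if is_final_state(r)]
    return sorted(set(merged))
```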
Step 206: generate basic assertions according to the membership relation between the return-message result in each test scenario case's run result and a preset result enumeration set.
In this step, basic assertions may be generated from the membership relation between the return-message result in each test scenario case's run result and the preset result enumeration set; the test results of the test scenario cases whose basic assertions match the preset assertion result are then further compared against each interface's test baseline to generate the assertions.
Specifically, after the test cases have been executed in batch, the return-message values of the full set of test scenario cases, i.e. the fields containing the code string, are clustered. Denoising analysis and horizontal analysis yield the code enumerations, which, combined with self-learning on the case aliases, produce four major categories of returned code enumerations: business success, business failure, message-violates-protocol exception, and system failure. Taking "message-violates-protocol exception" as an example, the cases can be classified by flag keywords such as "over-length", "outside enumeration range", and "type mismatch", where the return code of each class of case is in theory unique. After the codes obtained from denoising analysis and horizontal analysis are grouped by case class, the codes within each group should in theory all be equal; in practice, difference analysis is performed within each group, and the code enumeration values with consistency above 90% represent the reasonable return enumeration set for that case type. In subsequent automatic assertion judgment, a case of this type whose return code equals such a code is a successful case, while a case whose return code differs is a failed case.
In one possible design, if a test scenario case is a passing business-type case, it is a failed case if the return-message result in its run result does not belong to the preset result enumeration set, and a successful case if it does; here the preset result enumeration set is the business success set.
In one possible design, if a test scenario case is a failing business-type case, it is a successful case if the return-message result in its run result does not belong to the preset result enumeration set, and a failed case if it does.
In one possible design, if a test scenario case is a protocol-violating case, it is a failed case if the return-message result in its run result does not belong to the preset result enumeration set, and a successful case if it does; here the preset result enumeration set is the message-violates-protocol exception set.
In one possible design, if a test scenario case is a protocol-conforming case, it is a successful case if the return-message result in its run result does not belong to the preset result enumeration set, and a failed case if it does.
In one possible concrete implementation, for business and mock cases: if the case type is a passing business case, the case fails when its result-message code is not in the business success set; when the code is in the business success set, the result is success if the set contains a single unique enumeration value, and "suggested success" otherwise. If the case type is a failing business case, the case fails when its result-message code is not in the business failure set; when the code is in the business failure set, the result is success if the enumeration is unique, and "suggested success" otherwise.
For field-type cases: if the case type is a field-violates-protocol case, the result is failure when the result-message code is not in the message-violates-protocol exception set; when it is in the set, the result is success if the enumeration is unique, and "suggested success" otherwise. If the case type is a field-conforms-to-protocol case, the result is "suggested failure" when the result-message code is in the message-violates-protocol exception set, and "suggested success" when it is not.
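The case-type rules above, including the "suggested" verdicts for non-unique enumerations, might be condensed into a sketch such as the following; the case-type labels and set names are paraphrases for illustration, not the platform's actual identifiers:

```python
def basic_assertion(case_type: str, code: str, success_codes: set,
                    fail_codes: set, protocol_error_codes: set) -> str:
    """Basic-assertion verdict for the four case types above. Returns
    "success", "failure", "suggested success" or "suggested failure";
    a "suggested" verdict arises when the matched enumeration set does
    not contain a single unique code."""
    def verdict(hit: bool, enum: set) -> str:
        if not hit:
            return "failure"
        return "success" if len(enum) == 1 else "suggested success"

    if case_type == "passing_business":        # expected to succeed
        return verdict(code in success_codes, success_codes)
    if case_type == "failing_business":        # expected to fail
        return verdict(code in fail_codes, fail_codes)
    if case_type == "protocol_violation":      # field violates the protocol
        return verdict(code in protocol_error_codes, protocol_error_codes)
    if case_type == "protocol_conforming":     # field conforms to the protocol
        return ("suggested failure" if code in protocol_error_codes
                else "suggested success")
    raise ValueError(f"unknown case type: {case_type}")
```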
It is further worth noting that, besides regression testing, the basic-assertion generation above can also be applied to unit, smoke, or SIT testing. In regression testing it mainly serves as an accelerator: since the subsequent Diff judgment is time-consuming, basic assertions can quickly determine whether a precise Diff is actually necessary, reducing the resources consumed by unnecessary Diffs.
Step 207: run the version under test on the system link, and obtain the test result of each interface.
Step 208: generate assertions according to each interface's test baseline and test result.
In steps 207 to 208, the test baseline of each interface on the system link may first be determined from the case run results and the case scenario rules, with the case scenario rules applied through assertion self-learning as described above: automatic denoising removes fields that carry no logic; horizontal and longitudinal analysis determine the run-result attribute of each interface; and the expected assertion value determined per interface is written into the baseline as that interface's test baseline. The version under test is then run on the system link, each interface's test result is obtained, and assertions are generated from each interface's test baseline and test result.
Specifically, after the assertions have been generated from each interface's test baseline and test result, the interface portions where the assertion indicates a difference between the test baseline and the test result may be identified as system change points or system bugs. During regression, after the cases are executed, the new full-link assertions, the expected assertions, and the baseline are Diff-compared automatically, and the differing portions are system change points or system bugs.
In one possible scenario, the field value of the case run result is inconsistent: the case baseline's database operation result is field "a = 1" in table A, but the newly backed-up run result after running the version under test is "a = 2", so the system judges the case run to have failed. The user can then be guided to locate the cause, for example by way of a generated report. The user may judge it a system bug, report the defect, rerun the comparison after the defect is fixed, and treat the case as passed once the comparison matches. Alternatively, the user may judge that the current requirement change legitimately causes "a = 2" in table A, in which case the user sets the result to success directly and the system updates the baseline with this run result, so that field a of table A in the baseline becomes "a = 2".
In another scenario, the field key of the run result is inconsistent: interface B on the case's link adds a return-message field "b" in the version under test, so the newly backed-up result contains an extra "b = 1" in interface B's return message, while the baseline for interface B has no field "b". The comparison mismatch prompts the user to investigate; if the user judges this to be new content of the current version, the result is written to the baseline directly, and that baseline is used for subsequent assertion judgment.
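Both scenarios reduce to a field-level Diff between the baseline and the new run result; a minimal sketch, assuming both are flat dictionaries:

```python
def diff_against_baseline(baseline: dict, new_result: dict) -> dict:
    """Compare a new run result against the baseline field by field.
    Key differences (added/removed fields) and value differences are
    reported separately, so the user can decide whether each is a
    requirement change (update the baseline) or a system bug."""
    added = {k: new_result[k] for k in new_result.keys() - baseline.keys()}
    removed = {k: baseline[k] for k in baseline.keys() - new_result.keys()}
    changed = {k: (baseline[k], new_result[k])
               for k in baseline.keys() & new_result.keys()
               if baseline[k] != new_result[k]}
    return {"added": added, "removed": removed, "changed": changed}

# e.g. baseline {"a": "1"} vs. new result {"a": "2"} reports a changed
# value, while {"b": "1"} appearing only in the new result reports an
# added key, matching the two scenarios above.
```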
Optionally, if a case run result contains a structured big field, the structured big field is split so that it can be matched against the preset denoising rules and the first-class noise results removed from the split fields. Specifically, a field containing "{" or the escape character "\" can be recognized as a possible big field; the String is then converted into a dictionary, and if the conversion succeeds, the denoised assertion can be obtained in the same way as described in this step.
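A sketch of this big-field handling, assuming the big fields are JSON-serialized; the "{" and escape-character heuristics come from the text, everything else is illustrative:

```python
import json
from typing import Optional

def try_split_big_field(value: str) -> Optional[dict]:
    """'{' or an escape character suggests the string may itself be a
    serialized structure; if it parses as a JSON object, return the
    flattened fields so each can be denoised and diffed individually,
    otherwise return None and keep the raw string."""
    if "{" not in value and "\\" not in value:
        return None
    try:
        parsed = json.loads(value)
    except (json.JSONDecodeError, TypeError):
        return None
    if not isinstance(parsed, dict):
        return None
    flat = {}
    def walk(prefix: str, node):
        # recursively flatten nested objects into dotted field paths
        if isinstance(node, dict):
            for k, v in node.items():
                walk(f"{prefix}.{k}" if prefix else k, v)
        else:
            flat[prefix] = node
    walk("", parsed)
    return flat
```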
In this embodiment, each test scenario case in the test case set is executed on the system link and the case run results are obtained through a preset mechanism for running programs safely; the test baseline of each interface on the system link is determined from the case run results and the case scenario rules; the version under test is run on the system link and each interface's test result is obtained; and assertions are generated from each interface's test baseline and test result, so that full-link assertions for the system link are generated automatically and precisely without manual maintenance, satisfying the need for automatic test coverage of every interface on the full link. Moreover, in this embodiment the baseline of full-link interface assertions can be generated automatically, and the comparison adopted likewise asserts against the full link's database operation results and interface return-message results. In addition, noise in the case run results is analyzed and removed automatically, and basic-assertion self-learning provides a preliminary judgment that reduces unnecessary Diff comparisons; and when a case run result contains a structured big field, the big field can be split for comparison, supporting denoised Diff comparison of big fields.
Figure 5 is a schematic structural diagram of the interface automation testing apparatus according to an example embodiment of this application. As shown in Figure 5, the interface automation testing apparatus provided by this embodiment includes:
an obtaining module 301, configured to obtain a test case set, where the test case set includes multiple types of test case subsets, each type of test case subset includes multiple test scenario cases, and each test scenario case is configured with a corresponding case business serial number;
a processing module 302, configured to execute each test scenario case in the test case set on the system link and obtain the case run results through a preset mechanism for running programs safely, the case run results including database operation results and the return-message results of each interface;
the processing module 302 being further configured to determine the test baseline of each interface on the system link according to the case run results and the case scenario rules;
the obtaining module 301 being further configured to run the version under test on the system link and obtain the test result of each interface;
the processing module 302 being further configured to generate assertions according to each interface's test baseline and test result.
In one possible design, the processing module 302 is specifically configured to:
directly remove the first-class noise results from the case run results according to the preset denoising rules, where a first-class noise result is a run result unrelated to the test logic of the system link;
perform horizontal analysis on the case run results to determine the run-result attribute of each interface, the horizontal analysis including cluster analysis of run results generated by running different test scenario cases within the same test case subset;
determine the test baseline of each interface on the system link according to the case run results and the corresponding run-result attributes.
In one possible design, the processing module 302 is specifically configured to:
after a preset first number of test scenario cases in the same test case subset have been run, obtain the case run results of each interface;
determine the corresponding result equal-value rate from the result distribution of each interface's case run results;
determine the corresponding run-result attribute from each interface's result equal-value rate and a preset equal-value-rate condition.
In one possible design, the processing module 302 is specifically configured to:
if the result equal-value rate is 100%, determine the corresponding run-result attribute to be the fixed-value class, whose test baseline is a fixed value;
if the result equal-value rate is greater than or equal to the preset equal-value rate, determine the corresponding run-result attribute to be the enumeration class, whose test baseline is an enumeration list;
if the result equal-value rate is below the preset equal-value rate, determine the corresponding case run result to be noise.
In one possible design, the processing module 302 is further configured to:
perform longitudinal analysis on the case run results belonging to the enumeration class, the longitudinal analysis including cluster analysis of run results generated by running the same test scenario case;
if all case run results belonging to the enumeration class are identical, back up the case run result to the enumeration list;
if the case run results belonging to the enumeration class differ, determine whether each case run result was obtained while a preset backup condition was satisfied, the preset backup condition being used to confirm that the case run has reached its end state;
if so, back up each case run result to the enumeration list;
if not, back up to the enumeration list only the case run results obtained while the preset backup condition was satisfied.
In one possible design, the processing module 302 is further configured to:
generate basic assertions according to the membership relation between the return-message result in each test scenario case's run result and the preset result enumeration set;
further compare the test results of the test scenario cases whose basic assertions match the preset assertion result against each interface's test baseline to generate the assertions.
In one possible design, if a test scenario case is a passing business-type case, the test scenario case is a failed case if the return-message result in its run result does not belong to the preset result enumeration set, and a successful case if it does, the preset result enumeration set being the business success set;
if a test scenario case is a failing business-type case, the test scenario case is a successful case if the return-message result in its run result does not belong to the preset result enumeration set, and a failed case if it does;
if a test scenario case is a protocol-violating case, the test scenario case is a failed case if the return-message result in its run result does not belong to the preset result enumeration set, and a successful case if it does, the preset result enumeration set being the message-violates-protocol exception set;
if a test scenario case is a protocol-conforming case, the test scenario case is a successful case if the return-message result in its run result does not belong to the preset result enumeration set, and a failed case if it does.
In one possible design, the processing module 302 is further configured to identify the interface portions where the assertion indicates a difference between the test baseline and the test result as system change points or system bugs.
In one possible design, if a case run result contains a structured big field, the structured big field is split for matching against the preset denoising rules, so as to remove the first-class noise results from the split fields.
The interface automation testing apparatus provided by this embodiment can be used to perform the steps in the method embodiments above; for details not disclosed in this apparatus embodiment, refer to the method embodiments of this application described above.
Figure 6 is a schematic structural diagram of an electronic device according to an example embodiment of this application. As shown in Figure 6, the electronic device 400 provided by this embodiment includes:
a processor 401; and
a memory 402 for storing executable instructions of the processor, where the memory may also be flash memory;
the processor 401 being configured to perform the steps of the methods above by executing the executable instructions.
Optionally, the memory 402 may either be standalone or integrated with the processor 401.
When the memory 402 is a device independent of the processor 401, the electronic device 400 may further include:
a bus 403 connecting the processor 401 and the memory 402.
This embodiment further provides a readable storage medium storing a computer program; when at least one processor of an electronic device executes the computer program, the electronic device performs the steps of the methods above.
This embodiment further provides a program product including a computer program, the computer program being stored in a readable storage medium. At least one processor of an electronic device can read the computer program from the readable storage medium, and executing it causes the electronic device to carry out the steps of the methods above.
This embodiment further provides a computer program including program code; when a computer runs the computer program, the program code performs the steps of the methods above.
Those of ordinary skill in the art will understand that all or part of the steps of the method embodiments above may be completed by hardware directed by program instructions. The program may be stored in a computer-readable storage medium; when executed, it performs the steps of the method embodiments above. The storage medium includes ROM, RAM, magnetic disks, optical discs, and other media capable of storing program code.
Finally, it should be noted that the embodiments above merely illustrate the technical solutions of this application and do not limit it. Although this application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments may still be modified, or some or all of their technical features replaced by equivalents, without such modifications or replacements removing the essence of the corresponding technical solutions from the scope of the technical solutions of the embodiments of this application.
Claims (14)
- An interface automation testing method, characterized by comprising: obtaining a test case set, the test case set comprising multiple types of test case subsets, each type of test case subset comprising multiple test scenario cases, each test scenario case being configured with a corresponding case business serial number; executing each test scenario case in the test case set on a system link, and obtaining case run results through a preset mechanism for running programs safely, the case run results comprising database operation results and return-message results of each interface; determining a test baseline corresponding to each interface on the system link according to the run results of each test scenario case in a production-version code environment and case scenario rules; running a version under test on the system link and obtaining a test result of each interface; and generating assertions according to the test baseline and the test result corresponding to each interface.
- The interface automation testing method according to claim 1, wherein determining the test baseline corresponding to each interface on the system link according to the run results of each test scenario case in the production-version code environment and the case scenario rules comprises: directly removing first-class noise results from the case run results according to preset denoising rules, a first-class noise result being a run result unrelated to the test logic of the system link; performing horizontal analysis on the case run results to determine a run-result attribute corresponding to each interface, the horizontal analysis comprising cluster analysis of run results generated by running different test scenario cases within the same test case subset; and determining the test baseline corresponding to each interface on the system link according to the case run results and the corresponding run-result attributes.
- The interface automation testing method according to claim 2, wherein performing horizontal analysis on the case run results to determine the run-result attribute corresponding to each interface comprises: after running a preset first number of test scenario cases in the same test case subset, obtaining the case run results of each interface; determining a corresponding result equal-value rate from the result distribution of each interface's case run results; and determining the corresponding run-result attribute from each interface's result equal-value rate and a preset equal-value-rate condition.
- The interface automation testing method according to claim 3, wherein determining the corresponding run-result attribute from each interface's result equal-value rate and the preset equal-value-rate condition comprises: if the result equal-value rate is 100%, determining the corresponding run-result attribute to be a fixed-value class, the test baseline corresponding to the fixed-value class being a fixed value; if the result equal-value rate is greater than or equal to a preset equal-value rate, determining the corresponding run-result attribute to be an enumeration class, the test baseline corresponding to the enumeration class being an enumeration list; and if the result equal-value rate is below the preset equal-value rate, determining the corresponding case run result to be noise.
- The interface automation testing method according to claim 4, further comprising, after performing horizontal analysis on the case run results to determine the run-result attribute corresponding to each interface: performing longitudinal analysis on the case run results belonging to the enumeration class, the longitudinal analysis comprising cluster analysis of run results generated by running the same test scenario case; if all case run results belonging to the enumeration class are identical, backing up the case run result to the enumeration list; if the case run results belonging to the enumeration class differ, determining whether each case run result was obtained while a preset backup condition was satisfied, the preset backup condition being used to confirm that the case run has reached an end state; if so, backing up each case run result to the enumeration list; and if not, backing up to the enumeration list only the case run results obtained while the preset backup condition was satisfied.
- The interface automation testing method according to any one of claims 2 to 5, further comprising, before running the version under test on the system link and obtaining the test result of each interface: generating basic assertions according to the membership relation between the return-message result in each test scenario case's run result and a preset result enumeration set; and further comparing the test results of the test scenario cases whose basic assertions match a preset assertion result against the test baseline corresponding to each interface to generate the assertions.
- The interface automation testing method according to claim 6, wherein generating basic assertions according to the membership relation between the return-message result in each test scenario case's run result and the preset result enumeration set comprises: if a test scenario case is a passing business-type case, the test scenario case being a failed case if the return-message result in its run result does not belong to the preset result enumeration set, and a successful case if it does, the preset result enumeration set being a business success set; if a test scenario case is a failing business-type case, the test scenario case being a successful case if the return-message result in its run result does not belong to the preset result enumeration set, and a failed case if it does; if a test scenario case is a protocol-violating case, the test scenario case being a failed case if the return-message result in its run result does not belong to the preset result enumeration set, and a successful case if it does, the preset result enumeration set being a message-violates-protocol exception set; and if a test scenario case is a protocol-conforming case, the test scenario case being a successful case if the return-message result in its run result does not belong to the preset result enumeration set, and a failed case if it does.
- The interface automation testing method according to claim 7, further comprising, after generating assertions according to the test baseline and the test result corresponding to each interface: identifying the interface portions where the assertion indicates a difference between the test baseline and the test result as system change points or system bugs.
- The interface automation testing method according to any one of claims 2 to 5, further comprising, before directly removing the first-class noise results from the case run results according to the preset denoising rules: if a case run result contains a structured big field, splitting the structured big field for matching against the preset denoising rules, so as to remove the first-class noise results from the split fields.
- An interface automation testing apparatus, characterized by comprising: an obtaining module, configured to obtain a test case set, the test case set comprising multiple types of test case subsets, each type of test case subset comprising multiple test scenario cases, each test scenario case being configured with a corresponding case business serial number; and a processing module, configured to execute each test scenario case in the test case set on a system link and obtain case run results through a preset mechanism for running programs safely, the case run results comprising database operation results and return-message results of each interface; the processing module being further configured to determine a test baseline corresponding to each interface on the system link according to the run results of each test scenario case in a production-version code environment and case scenario rules; the obtaining module being further configured to run a version under test on the system link and obtain a test result of each interface; and the processing module being further configured to generate assertions according to the test baseline and the test result corresponding to each interface.
- An electronic device, characterized by comprising: a processor; and a memory for storing a computer program of the processor; wherein the processor is configured to implement the interface automation testing method of any one of claims 1 to 9 by executing the computer program.
- A computer-readable storage medium having a computer program stored thereon, characterized in that the computer program, when executed by a processor, implements the interface automation testing method of any one of claims 1 to 9.
- A computer program product comprising a computer program, characterized in that the computer program, when executed by a processor, implements the interface automation testing method of any one of claims 1 to 9.
- A computer program, characterized by comprising program code which, when a computer runs the computer program, performs the interface automation testing method of any one of claims 1 to 9.