CN114020601A - Automatic test method and tool for data verification after data return - Google Patents
Automatic test method and tool for data verification after data return
- Publication number: CN114020601A (application number CN202111122271.0A)
- Authority: CN (China)
- Prior art keywords: data, test, project, code, database
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06F11/3664 (under G06F11/36: Prevention of errors by analysis, debugging or testing of software; G06F11/3668: Testing of software; G06F11/3672: Test management)
- G06F11/3684: Test management for test design, e.g. generating new test cases
- G06F11/3688: Test management for test execution, e.g. scheduling of test suites
- G06F11/3692: Test management for test results analysis
- G06F16/2433: Query languages (under G06F16/24: Querying; G06F16/242: Query formulation)
Abstract
The invention discloses an automated test method and tool for data verification after data return. For data in a company system that is transmitted back to a client system, the method tests the data accuracy, and the realized automated test tool completes verification tests over large volumes of data more efficiently. The method specifically comprises the following steps: (1) first, complete the data return action; (2) configure parameters in the tool according to the specifics of the project; (3) execute the automated test script, screening part or all of the returned data via SQL statements for test verification; (4) inspect the execution result or the log, and compare the test result with the expected result for consistency. By merely modifying the configuration file, and by retaining the configuration files of multiple projects, the method and tool support automated regression testing and the management of automated test cases, and can greatly improve the efficiency of data-testing work.
Description
Technical Field
The invention belongs to the field of information technology, and particularly relates to an automated test method and tool for data verification after data return.
Background
Since the beginning of this century, technology in China has advanced continuously, and the rise of the Internet has generated enormous amounts of data: mankind has entered the era of "big data". Against this background, algorithms such as data mining and machine learning, which are built on big data, are applied across all industries, so testing data software products to ensure the accuracy of the data has become important.
In the traditional software test system, test methods for function, performance, stability and the like are described relatively completely, but test methods for big data are few. The invention therefore addresses big-data testing: it works out a class of tests that verify data accuracy after data return, and provides an automated test tool, offering strong support for completing the big-data test system while taking a solid step toward improving data-testing efficiency.
Disclosure of Invention
The invention aims to provide and develop an automated test method and tool for data verification after data return, addressing the shortcomings of the existing software test system with respect to big-data test methods.
The technical scheme adopted by the invention to solve this technical problem is as follows.
The test scenarios supported by the invention include:
Scenario 1: the data of system A is transmitted back to system B through an interface provided at the client's request, with the messages of the request interface personalized per project;
Scenario 2: the background database of system B is accessed directly and the returned data is written directly into an intermediate table of system B, with the data dictionaries of the intermediate tables personalized per project.
Based on these supported data-push scenarios, the specific test implementation of the invention is as follows:
step 1: and finishing the data returning action:
firstly, data needing to be transmitted back is selected and submitted in a system A;
then, executing a data return task on an Airflow scheduling platform, and pushing the data to a system B (client system);
step 2: configuring parameters according to the individuation of the project;
The configuration involved mainly consists of 5 modules: the connection information of the A-system PG library [db_conf], the project information [project_msg], the connection information of the B-system database [log_db_conf], the pushed-data fields to be checked for the project, and the personalized query SQL for the project [sql]. The modules are specified as follows (an illustrative configuration example follows the module list):
module 1: connection information [ db _ conf ] of the a-system PG library:
db _ database: DataBase DataBase name of the system PG library A;
db _ user: a, logging in a user name of a system PG database;
db _ password: a, a login password of a system PG database;
db _ host: IP address of PG database of A system;
db _ port: a, a port corresponding to the IP address of a system PG database;
and (3) module 2: project information [ project _ msg ]
project _ id: tenant number of customer corresponding project in A system
scene _ code: business scene coding of customer corresponding project participating in A system
brand _ code: list of brand codes for the presence of customer goods
And a module 3: connection information of B System database [ log _ db _ conf ]
db _ database: name of DataBase of B System
db _ user: login user name of B system database
db _ password: b login password of system database
db _ host: IP address of B system database
db _ port: port corresponding to IP address of B system database
db _ type: database type of B system
And (4) module: field of numerator needed to check on project
test _ field: fields needing to be checked in intermediate tables in B system database
comp _ field: fields needing to be checked in PG database in A system
And a module 5: personalized query SQL on items [ SQL ]
target _ table _ name: the table name of the intermediate table in the database of the system B must be configured;
schema: if the schema where the log table in the database of the system B is located does not exist, the configuration is not needed;
test _ where _ pg: a, screening conditions of query data in a system PG library can be tested, and single or multiple screening conditions can be tested and correspond to a configuration item test _ where _ cus;
test _ where _ cus: b, screening conditions of query data in the system database can be tested, and single or multiple items can be tested and correspond to the configuration item test _ where _ pg;
test _ with _ pg: a, a temporary table which needs to be additionally added for querying data in a system PG library is developed in a personalized manner, and configuration is not needed if the project is not developed;
test _ select _ pg: fields needing to be additionally returned are added to query data in the PG library of the system A, the project is developed in a personalized manner, and configuration is not needed if the fields do not exist;
test _ join _ pg: a table needing additional association is searched in the PG library of the A system, the project is developed in a personalized mode, and configuration is not needed if the table does not exist.
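By way of illustration only, the following is a minimal sketch of what such a configuration file might look like in INI form and how it could be read with Python's standard configparser. All concrete values (hosts, credentials, table and field names) are hypothetical placeholders, and the section name [test_field] for module 4 is an assumption; only the module and key names described above come from the text.

```python
# Hypothetical sketch: a project configuration file following the five
# modules above, parsed with Python's standard configparser.
import configparser

SAMPLE_INI = """
[db_conf]         ; module 1: A-system PG library connection
db_database = rst_db
db_user     = tester
db_password = secret
db_host     = 10.0.0.1
db_port     = 5432

[project_msg]     ; module 2: project information
project_id = 1000000659
scene_code = 'RA'
brand_code = 'B01','B02'

[log_db_conf]     ; module 3: B-system database connection
db_database = client_db
db_user     = tester
db_password = secret
db_host     = 10.0.0.2
db_port     = 3306
db_type     = mysql

[test_field]      ; module 4: fields to check (section name assumed)
test_field = uuid,sku_code,send_qty
comp_field = uuid,sku_code,send_qty

[sql]             ; module 5: personalized query SQL
target_table_name = ra_result_log
test_where_pg  = and sku.day_date >= '2021-09-01'
test_where_cus = and day_date >= '2021-09-01'
"""

config = configparser.ConfigParser(inline_comment_prefixes=(";",))
config.read_string(SAMPLE_INI)
print(config["db_conf"]["db_host"])   # -> 10.0.0.1
```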
Step 3: read the module information of the A-system configuration in step 2:
read the parameters under the connection information of the A-system PG library [db_conf] to obtain the connection information of the A-system PG library and create the database connection;
read the project information [project_msg] and the parameters under the personalized query SQL [sql] for the project to construct the query SQL;
execute the SQL statement to obtain the data pushed from system A to system B in step 1.
Further, the query SQL statement template is as follows ("--" introduces the annotation of each line):

    WITH dt AS (  -- dt temporary table
        SELECT brand_code, MAX(day_date) AS day_date  -- latest decision date for each brand of the project
        FROM tenant_{project_id}_rst.rst_ra_sku_org_detail  -- table storing the returned data in system A
        WHERE scene_code IN ({scene_code})  -- filter: specified project business scenario (scene_code: business scenario field; {scene_code}: business scenario from the configuration parameters)
          AND commit_status = 2  -- filter: data status is committed for return
          AND human_ra_qty > 0  -- filter: submitted replenishment-order quantity greater than 0
        GROUP BY brand_code  -- group by the project brand information
    )
    {test_with}  -- configuration parameter: optional additional temporary tables
    SELECT sku.order_id AS uuid,  -- commodity information of the project: replenishment-order number
        sku.brand_code,  -- project information: brand
        sku.day_date,  -- decision date of the project
        to_char(sku.update_time, 'yyyy-mm-dd hh24:mi:ss') AS commit_time,  -- update time of the data
        sku.commit_user_name AS commit_user,  -- user who updated the data
        sku.scene_code AS type_code,  -- business scenario of the project
        sku.sku_code,  -- commodity information: SKU code
        sku.product_code,  -- commodity information: style code
        sku.color_code,  -- commodity information: color code
        sku.size_code,  -- commodity information: size code
        dim_in_org.stockorg_code AS receive_org_code,  -- organization information: code of the receiving store or warehouse
        dim_out_org.stockorg_code AS send_org_code,  -- organization information: code of the sending store or warehouse
        dim_in_org.store_type AS receive_store_type,  -- organization information: type of the receiving store or warehouse
        dim_out_org.store_type AS send_store_type,  -- organization information: type of the sending store or warehouse
        dim_in_org.org_type AS receive_org_type,  -- organization information: type of the receiving organization
        dim_out_org.org_type AS send_org_type,  -- organization information: type of the sending organization
        sku.human_ra_qty AS send_qty  -- quantity to be sent
        {test_select}  -- configuration parameter: optional additional data fields
    FROM tenant_{project_id}_rst.rst_ra_sku_org_detail AS sku  -- table storing the returned data in system A
    INNER JOIN tenant_{project_id}_rst.dim_stockorg AS dim_in_org  -- table storing the receiving organization in system A
        ON sku.human_allow_in_org_sk = dim_in_org.stockorg_sk  -- join condition: the organization surrogate keys match
    INNER JOIN tenant_{project_id}_rst.dim_stockorg AS dim_out_org  -- table storing the sending organization in system A
        ON sku.human_allow_out_org_sk = dim_out_org.stockorg_sk  -- join condition: the organization surrogate keys match
    INNER JOIN dt ON sku.brand_code = dt.brand_code AND sku.day_date = dt.day_date  -- join condition: brand matched with its latest decision date
    {test_join}  -- configuration parameter: optional additional table joins
    WHERE sku.scene_code IN ({scene_code})  -- filter: specified project business scenario
      AND sku.is_effective = 1  -- filter: the commodity status is effective
      AND sku.brand_code IN ({brand_code})  -- filter: specified project brands (brand_code: brand field; {brand_code}: brand information from the configuration parameters)
      AND sku.commit_status = 2  -- filter: data status is committed for return
      AND sku.human_ra_qty > 0  -- filter: submitted replenishment-order quantity greater than 0
    {test_where}  -- configuration parameter: optional additional filter conditions
The parameters {project_id}, {scene_code}, {brand_code}, {test_with}, {test_select}, {test_join} and {test_where} are obtained by reading the configuration file. When a project is customized and the template SQL statement does not meet its needs: when further field data must be returned, the parameter {test_select} can be configured; when further tables must be joined, the parameter {test_join} can be configured; when further filter conditions must be added, the parameter {test_where} can be configured; and when more complex logic requires creating another temporary table, the parameter {test_with} can be configured.
In the template SQL statement, the rst_ra_sku_org_detail table is the main data table of system A, from which the effective data committed under the corresponding business scenario is acquired, and the dim_stockorg table is the organization dimension table of system A. An illustrative example of substituting these parameters and executing the query is given below.
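To illustrate step 3, the following is a minimal sketch, assuming the psycopg2 driver for the A-system PG library; the abridged QUERY_TEMPLATE stands in for the full template above, and the helper name fetch_system_a_rows is hypothetical. Optional placeholders default to empty strings when a project does not configure them.

```python
# Sketch only: substitute configuration parameters into the template SQL
# and execute it against the A-system PG library (psycopg2 assumed).
import psycopg2

# Abridged stand-in for the full query template above; the real template
# also uses {brand_code}, {test_with}, {test_select} and {test_join}.
QUERY_TEMPLATE = (
    "SELECT sku.order_id AS uuid, sku.sku_code, sku.human_ra_qty AS send_qty "
    "FROM tenant_{project_id}_rst.rst_ra_sku_org_detail AS sku "
    "WHERE sku.scene_code IN ({scene_code}) AND sku.commit_status = 2 "
    "{test_where}"
)

OPTIONAL_KEYS = ("test_with", "test_select", "test_join", "test_where")

def fetch_system_a_rows(db_conf: dict, params: dict) -> list:
    for key in OPTIONAL_KEYS:          # unconfigured parts collapse to ""
        params.setdefault(key, "")
    sql = QUERY_TEMPLATE.format(**params)   # needs project_id, scene_code
    conn = psycopg2.connect(
        dbname=db_conf["db_database"], user=db_conf["db_user"],
        password=db_conf["db_password"], host=db_conf["db_host"],
        port=db_conf["db_port"],
    )
    try:
        with conn.cursor() as cur:
            cur.execute(sql)
            return cur.fetchall()      # data pushed from system A to system B
    finally:
        conn.close()
```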
Step 4: read the module information of the B-system configuration in step 2:
read the parameters under the connection information of the B-system database [log_db_conf] to obtain the connection information of the B-system database and create the database connection;
read the parameters under the personalized query SQL [sql] for the project to construct the query SQL;
execute the SQL statement to obtain the data received by system B in step 1.
Further, the SQL statement template is:

    select * from {log_table_name} where 1=1 {test_where};

where the {log_table_name} and {test_where} parameters are obtained by reading the configuration file: {log_table_name} is the intermediate table of the B-system database and {test_where} is the filter condition of the query. An illustrative example of building and running this query is given below.
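A corresponding sketch for step 4; the choice of driver by db_type is an assumption (pymysql for MySQL and psycopg2 for PostgreSQL are shown; other database types would need their own drivers), and fetch_system_b_rows is a hypothetical helper name.

```python
# Sketch only: build the B-system verification query and execute it.
import psycopg2
import pymysql

def fetch_system_b_rows(log_db_conf: dict, sql_conf: dict) -> list:
    sql = "select * from {log_table_name} where 1=1 {test_where}".format(
        log_table_name=sql_conf["target_table_name"],
        test_where=sql_conf.get("test_where_cus", ""),
    )
    if log_db_conf["db_type"].lower() == "mysql":
        conn = pymysql.connect(
            host=log_db_conf["db_host"], port=int(log_db_conf["db_port"]),
            user=log_db_conf["db_user"], password=log_db_conf["db_password"],
            database=log_db_conf["db_database"],
        )
    else:                                   # assume PostgreSQL otherwise
        conn = psycopg2.connect(
            host=log_db_conf["db_host"], port=log_db_conf["db_port"],
            user=log_db_conf["db_user"], password=log_db_conf["db_password"],
            dbname=log_db_conf["db_database"],
        )
    try:
        cur = conn.cursor()
        cur.execute(sql)
        return cur.fetchall()               # data received by system B
    finally:
        conn.close()
```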
Step 5: check the data obtained in step 3 against the data obtained in step 4.
The data obtained in steps 3 and 4 should be identical in content; even so, a check may come out False for the following reasons: 1. the ordering of the fields within each record is inconsistent; 2. with multiple records, the ordering of the records is inconsistent; 3. the data types differ. The data types are therefore converted before verification, and the values are sorted according to the parameters under the module of pushed-data fields to be checked for the project.
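A minimal sketch of this normalization, assuming each record is projected onto the configured check fields and all values are compared as trimmed strings (the exact type-conversion rules are project-specific and therefore assumptions here):

```python
# Sketch only: normalize a result set before comparison, addressing the
# three causes above - field order, row order, and data types.
def normalize(rows: list, field_names: list, check_fields: list) -> list:
    normalized = []
    for row in rows:
        record = dict(zip(field_names, row))
        # cause 1 (field order) and cause 3 (types): project each record
        # onto the configured check fields, comparing values as strings
        normalized.append(tuple(str(record[f]).strip() for f in check_fields))
    # cause 2 (row order): sort the records into one canonical order
    return sorted(normalized)
```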
After the data are checked, the following test results may be obtained:
Result 1: the comparison result is True, i.e. the test passes; the committed data and the pushed data agree in count and in pushed content;
Result 2: the comparison result is False, i.e. the test fails; the committed data and the pushed data agree in count but the pushed content differs;
Result 3: no committed data exists, i.e. the test fails; no data was committed and the test is invalid;
Result 4: the committed data and the pushed data agree in count but the pushed content differs, i.e. the test fails; the log outputs the difference sets between the committed commodity data and the data in the B-system log table;
Result 5: the counts of the committed data and the pushed data differ, i.e. the test fails; the log outputs the counts of the committed commodity data and of the data in the B-system log table, together with their intersection and difference sets.
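For illustration, the five results above could be derived from the two normalized result sets roughly as follows; this is a sketch, and the helper name and log wording are placeholders.

```python
# Sketch only: classify the check into the five results described above.
import logging

log = logging.getLogger("data_check")

def classify(committed: list, pushed: list) -> bool:
    if not committed:
        log.error("result 3: no committed data, the test is invalid")
        return False
    if committed == pushed:
        log.info("result 1: counts and contents match, the test passes")
        return True
    only_a = sorted(set(committed) - set(pushed))
    only_b = sorted(set(pushed) - set(committed))
    if len(committed) == len(pushed):       # results 2 and 4
        log.error("results 2/4: same counts but different contents")
    else:                                   # result 5
        common = len(set(committed) & set(pushed))
        log.error("result 5: counts differ (A=%d, B=%d, intersection=%d)",
                  len(committed), len(pushed), common)
    log.error("only in system A: %s", only_a)
    log.error("only in system B: %s", only_b)
    return False
```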
Step 6: execute the test script according to the project tenant number project_id, printing a log at each key step;
the key steps comprise:
(1) checking the input parameters at execution time;
(2) acquiring the test SQL;
(3) acquiring the data to be compared;
(4) the test result of the execution.
Step 7: the tester views the test results in the corresponding log file or on the console.
The technical architecture of the automated test tool for the supported data-push scenarios comprises 5 modules: a configuration file reading module, a module that acquires the data returned by system A, a module that acquires the data received by system B, a data checking module, and a log module.
The configuration file reading module obtains the configuration file name from the command-line parameter and then reads the parameters of each module of the specified configuration file;
the module that acquires the data returned by system A fills the parameters into the template SQL statement, creates the connection to the A-system PG database, and executes the corresponding SQL statement to obtain the data returned by system A;
the module that acquires the data received by system B fills the parameters into the template SQL statement, creates the connection to the B-system database, and executes the corresponding SQL statement to obtain the data received by system B;
the data checking module compares the acquired data returned by system A with the acquired data received by system B; if they are consistent the test passes, otherwise the test fails;
the log module writes the console log into a log file for the corresponding date under the corresponding project tenant number.
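A sketch of the log module as described; the logs/&lt;tenant number&gt;/&lt;date&gt;.log path layout follows the description above, while the handler and format details are assumptions.

```python
# Sketch only: write console output into a log file for the current date
# under the directory of the corresponding project tenant number.
import logging
import os
from datetime import date

def setup_logging(project_id: str) -> logging.Logger:
    log_dir = os.path.join("logs", project_id)
    os.makedirs(log_dir, exist_ok=True)
    logger = logging.getLogger("data_check")
    logger.setLevel(logging.INFO)
    fmt = logging.Formatter("%(asctime)s %(levelname)s %(message)s")
    console = logging.StreamHandler()            # console output
    logfile = logging.FileHandler(               # per-date log file
        os.path.join(log_dir, f"{date.today():%Y-%m-%d}.log"),
        encoding="utf-8")
    for handler in (console, logfile):
        handler.setFormatter(fmt)
        logger.addHandler(handler)
    return logger
```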
For the supported data-push scenarios, the test is executed and the logs are stored as follows:
Execute the execute_case.py test script under the line_qa-_data_push_test_1 root directory. The execution command is python execute_case.py [-cf <tenant number>]; for example: python execute_case.py -cf 1000000659
Parameters: -cf (--config_file) gives the file name of the configuration file. When the parameter is passed, the tool loads the configuration information according to that file name and reports an error if the corresponding folder does not exist. When the parameter is not passed, the configuration file config.ini is read by default. An illustrative example of this parameter handling follows.
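The command-line behavior just described might be implemented roughly as follows; only the -cf/--config_file parameter and the config.ini default are taken from the text, and how a tenant number maps to a configuration file path is left as an assumption.

```python
# Sketch only: parse -cf/--config_file, defaulting to config.ini, and
# report an error when the named configuration does not exist.
import argparse
import os
import sys

def parse_args(argv=None) -> str:
    parser = argparse.ArgumentParser(
        description="automated data-return verification test")
    parser.add_argument("-cf", "--config_file", default="config.ini",
                        help="configuration file name (default: config.ini)")
    args = parser.parse_args(argv)
    if not os.path.exists(args.config_file):
        sys.exit(f"error: configuration {args.config_file} does not exist")
    return args.config_file

if __name__ == "__main__":
    print("using configuration:", parse_args())
```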
After execution finishes, the log is stored in the log file; the user can inspect it directly, and it shows execution information such as the executed SQL, the acquired data returned by system A, the acquired data received by system B, and the data check result.
The beneficial effects of the invention are as follows:
By working through and summarizing big-data test methods, the invention provides an automated test method for data verification after data return between two systems, tests the data accuracy, and realizes an automated test tool, thereby rounding out the big-data test system and helping improve data-testing efficiency.
The advantages of the automated test method and tool for data verification after data return are: at the test-configuration level, more than 90% of test scenarios can be accommodated by modifying the configuration file according to the needs of the individual project, and one tool can host several test scenarios side by side, so parameter configuration is flexible, convenient and easy to operate; at the test-implementation level, only Python (or the PyCharm development environment) needs to be installed to run the test tool, so the method is highly practicable; at the test-time level, running the tool in PyCharm starts the automated test, and the whole test process, from configuring parameters and executing the command to judging the test result, can be kept under 10 minutes, whereas the original manual test took several hours, so the tool greatly increases test speed; at the test-accuracy level, the program can traverse and check massive data, which far exceeds purely visual checking in both accuracy and the magnitude of data checked.
Drawings
FIG. 1 is a flowchart of an embodiment of the automated test method and tool for data verification after data return;
FIG. 2 is an example configuration file of the automated test method and tool for data verification after data return according to an embodiment of the invention;
FIG. 3 is a further example configuration file of the automated test method and tool for data verification after data return according to an embodiment of the invention.
Detailed Description
The objects and results of the invention will become more apparent from the following detailed description taken in conjunction with the accompanying drawings. The example is one embodiment of the invention; the specific operation flow is shown in FIG. 1.
Step 1: select and submit the data to be returned in system A; then execute the data return task on the Airflow scheduling platform, pushing the data to system B;
Step 2: add a configuration file for the corresponding project, completing the following 5 configuration modules: the connection information of the A-system PG library [db_conf], the project information [project_msg], the connection information of the B-system database [log_db_conf], the pushed-data fields to be checked for the project, and the personalized query SQL for the project [sql]; examples are shown in FIG. 2 and FIG. 3;
Step 3: then execute the automated test script execute_case.py;
Step 4: print a log at each key step;
Step 5: the tester views the test results in the corresponding log file or on the console.
The invention is not limited to the above embodiments; based on the disclosure of the invention, those skilled in the art can implement it in various other embodiments. Therefore, simple changes or modifications based on the design structure and concept of the invention fall within its scope of protection.
Claims (8)
1. An automated test method for data verification after data return, supporting scenario 1: data of system A is transmitted back to system B through an interface provided at the client's request, with the messages of the request interface personalized per project; and scenario 2: the background database of system B is accessed directly and the returned data is written directly into an intermediate table of system B, with the data dictionaries of the intermediate tables personalized per project; characterized in that the test specifically comprises:
step 1: and finishing the data returning action:
firstly, data needing to be transmitted back is selected and submitted in a system A;
then, executing a data return task on an Airflow scheduling platform, and pushing the data to a system B;
step 2: configuring parameters according to the individuation of the project;
and step 3: reading module information in the system configuration file A in the step 2;
reading parameters under the connection information [ db _ conf ] of the PG library of the A system to obtain the connection information of the PG library of the A system and create database connection; reading project information [ project _ msg ] and parameters under personalized query SQL [ SQL ] on the project to construct query SQL; executing the SQL sentence to obtain data pushed to the system B by the system A in the step 1;
and 4, step 4: reading module information in the step 2 about the B system configuration file:
reading parameters under the connection information [ log _ db _ conf ] of the B system database so as to obtain the connection information of the B system database and create database connection; reading parameters under personalized query SQL [ SQL ] on the system item B, and constructing the query SQL; executing the SQL sentence to obtain the data received by the system B in the step 1;
and 5: checking the data obtained in the step 3 and the step 4;
since the contents of the data obtained in step 3 and step 4 are consistent, when the result of the verification is false, the reasons for the result include: 1. the ordering of the fields in each record is inconsistent; 2. when a plurality of records are recorded, the data are not in consistent sequencing; 3. the types of data are different, so the types of data are converted before data verification, and the types of data are converted according to the following steps: sorting the values according to the parameter requirements under the deduction number field needing to be checked on the project;
step 6: executing a test script according to the project tenant number project _ id, and printing a log in a key step;
and 7: the tester can view the test results in the corresponding log file or console.
2. The method according to claim 1, characterized in that the configuration in step 2 mainly consists of 5 modules: the connection information of the A-system PG library [db_conf], the project information [project_msg], the connection information of the B-system database [log_db_conf], the pushed-data fields to be checked for the project, and the personalized query SQL for the project [sql].
3. The method according to claim 2, characterized in that:
Module 1: connection information of the A-system PG library [db_conf]:
db_database: database name of the A-system PG library;
db_user: login user name of the A-system PG database;
db_password: login password of the A-system PG database;
db_host: IP address of the A-system PG database;
db_port: port corresponding to the IP address of the A-system PG database;
Module 2: project information [project_msg]:
project_id: tenant number of the customer's project in the A system;
scene_code: business scenario code of the customer's project in the A system;
brand_code: list of brand codes of the customer's goods;
Module 3: connection information of the B-system database [log_db_conf]:
db_database: database name of the B system;
db_user: login user name of the B-system database;
db_password: login password of the B-system database;
db_host: IP address of the B-system database;
db_port: port corresponding to the IP address of the B-system database;
db_type: database type of the B system;
Module 4: pushed-data fields to be checked for the project:
test_field: fields to be checked in the intermediate table of the B-system database;
comp_field: fields to be checked in the A-system PG database;
Module 5: personalized query SQL for the project [sql]:
target_table_name: table name of the intermediate table in the B-system database; must be configured;
schema: schema of the log table in the B-system database; need not be configured if absent;
test_where_pg: filter conditions for querying data in the A-system PG library; one or more may be configured, corresponding to the configuration item test_where_cus;
test_where_cus: filter conditions for querying data in the B-system database; one or more may be configured, corresponding to the configuration item test_where_pg;
test_with_pg: temporary tables additionally needed for querying data in the A-system PG library (project-specific customization); need not be configured if absent;
test_select_pg: fields additionally returned when querying data in the A-system PG library (project-specific customization); need not be configured if absent;
test_join_pg: tables additionally joined when querying data in the A-system PG library (project-specific customization); need not be configured if absent.
4. The method according to claim 3, characterized in that step 3 is specifically implemented as follows:
the query SQL statement template is ("--" introduces the annotation of each line):

    WITH dt AS (  -- dt temporary table
        SELECT brand_code, MAX(day_date) AS day_date  -- latest decision date for each brand of the project
        FROM tenant_{project_id}_rst.rst_ra_sku_org_detail  -- table storing the returned data in system A
        WHERE scene_code IN ({scene_code})  -- filter: specified project business scenario (scene_code: business scenario field; {scene_code}: business scenario from the configuration parameters)
          AND commit_status = 2  -- filter: data status is committed for return
          AND human_ra_qty > 0  -- filter: submitted replenishment-order quantity greater than 0
        GROUP BY brand_code  -- group by the project brand information
    )
    {test_with}  -- configuration parameter: optional additional temporary tables
    SELECT sku.order_id AS uuid,  -- commodity information of the project: replenishment-order number
        sku.brand_code,  -- project information: brand
        sku.day_date,  -- decision date of the project
        to_char(sku.update_time, 'yyyy-mm-dd hh24:mi:ss') AS commit_time,  -- update time of the data
        sku.commit_user_name AS commit_user,  -- user who updated the data
        sku.scene_code AS type_code,  -- business scenario of the project
        sku.sku_code,  -- commodity information: SKU code
        sku.product_code,  -- commodity information: style code
        sku.color_code,  -- commodity information: color code
        sku.size_code,  -- commodity information: size code
        dim_in_org.stockorg_code AS receive_org_code,  -- organization information: code of the receiving store or warehouse
        dim_out_org.stockorg_code AS send_org_code,  -- organization information: code of the sending store or warehouse
        dim_in_org.store_type AS receive_store_type,  -- organization information: type of the receiving store or warehouse
        dim_out_org.store_type AS send_store_type,  -- organization information: type of the sending store or warehouse
        dim_in_org.org_type AS receive_org_type,  -- organization information: type of the receiving organization
        dim_out_org.org_type AS send_org_type,  -- organization information: type of the sending organization
        sku.human_ra_qty AS send_qty  -- quantity to be sent
        {test_select}  -- configuration parameter: optional additional data fields
    FROM tenant_{project_id}_rst.rst_ra_sku_org_detail AS sku  -- table storing the returned data in system A
    INNER JOIN tenant_{project_id}_rst.dim_stockorg AS dim_in_org  -- table storing the receiving organization in system A
        ON sku.human_allow_in_org_sk = dim_in_org.stockorg_sk  -- join condition: the organization surrogate keys match
    INNER JOIN tenant_{project_id}_rst.dim_stockorg AS dim_out_org  -- table storing the sending organization in system A
        ON sku.human_allow_out_org_sk = dim_out_org.stockorg_sk  -- join condition: the organization surrogate keys match
    INNER JOIN dt ON sku.brand_code = dt.brand_code AND sku.day_date = dt.day_date  -- join condition: brand matched with its latest decision date
    {test_join}  -- configuration parameter: optional additional table joins
    WHERE sku.scene_code IN ({scene_code})  -- filter: specified project business scenario
      AND sku.is_effective = 1  -- filter: the commodity status is effective
      AND sku.brand_code IN ({brand_code})  -- filter: specified project brands (brand_code: brand field; {brand_code}: brand information from the configuration parameters)
      AND sku.commit_status = 2  -- filter: data status is committed for return
      AND sku.human_ra_qty > 0  -- filter: submitted replenishment-order quantity greater than 0
    {test_where}  -- configuration parameter: optional additional filter conditions
wherein the parameters {project_id}, {scene_code}, {brand_code}, {test_with}, {test_select}, {test_join} and {test_where} are obtained by reading the configuration file; when a project is customized and the template SQL statement does not meet its needs: when further field data must be returned, the parameter {test_select} can be configured; when further tables must be joined, the parameter {test_join} can be configured; when further filter conditions must be added, the parameter {test_where} can be configured; and when more complex logic requires creating another temporary table, the parameter {test_with} can be configured;
in the template SQL statement, the rst_ra_sku_org_detail table is the main data table of system A, from which the effective data committed under the corresponding business scenario is acquired, and the dim_stockorg table is the organization dimension table of system A.
5. The method for automated data verification testing after data return according to claim 3 or 4, characterized in that the SQL statement template in step 4 is:

    select * from {log_table_name} where 1=1 {test_where};

where the {log_table_name} and {test_where} parameters are obtained by reading the configuration file: {log_table_name} is the intermediate table of the B-system database and {test_where} is the filter condition of the query.
6. The method according to claim 5, characterized in that checking the data in step 5 yields the following test results:
Result 1: the comparison result is True, i.e. the test passes; the committed data and the pushed data agree in count and in pushed content;
Result 2: the comparison result is False, i.e. the test fails; the committed data and the pushed data agree in count but the pushed content differs;
Result 3: no committed data exists, i.e. the test fails; no data was committed and the test is invalid;
Result 4: the committed data and the pushed data agree in count but the pushed content differs, i.e. the test fails; the log outputs the difference sets between the committed commodity data and the data in the B-system log table;
Result 5: the counts of the committed data and the pushed data differ, i.e. the test fails; the log outputs the counts of the committed commodity data and of the data in the B-system log table, together with their intersection and difference sets.
7. The method according to claim 5, characterized in that the key steps in step 6 comprise:
(1) checking the input parameters at execution time;
(2) acquiring the test SQL;
(3) acquiring the data to be compared;
(4) the test result of the execution.
8. The method according to claim 5, characterized in that the technical architecture of the automated test tool for data verification after data return comprises 5 modules: a configuration file reading module, a module that acquires the data returned by system A, a module that acquires the data received by system B, a data checking module, and a log module;
the configuration file reading module obtains the configuration file name from the command-line parameter and then reads the parameters of each module of the specified configuration file;
the module that acquires the data returned by system A fills the parameters into the template SQL statement, creates the connection to the A-system PG database, and executes the corresponding SQL statement to obtain the data returned by system A;
the module that acquires the data received by system B fills the parameters into the template SQL statement, creates the connection to the B-system database, and executes the corresponding SQL statement to obtain the data received by system B;
the data checking module compares the acquired data returned by system A with the acquired data received by system B; if they are consistent the test passes, otherwise the test fails;
the log module writes the console log into a log file for the corresponding date under the corresponding project tenant number;
for the supported data-push scenarios, the test is executed and the logs are stored as follows:
execute the execute_case.py test script under the line_qa-_data_push_test_1 root directory, the execution command being python execute_case.py [-cf <tenant number>], for example: python execute_case.py -cf 1000000659;
parameters: -cf (--config_file) gives the file name of the configuration file; when the parameter is passed, the tool loads the configuration information according to that file name and reports an error if the corresponding folder does not exist; when the parameter is not passed, the configuration file config.ini is read by default;
after execution finishes, the log is stored in the log file; the user can inspect it directly, and it shows execution information such as the executed SQL, the acquired data returned by system A, the acquired data received by system B, and the data check result.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111122271.0A CN114020601A (en) | 2021-09-24 | 2021-09-24 | Automatic test method and tool for data verification after data return |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111122271.0A CN114020601A (en) | 2021-09-24 | 2021-09-24 | Automatic test method and tool for data verification after data return |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114020601A true CN114020601A (en) | 2022-02-08 |
Family
ID=80054744
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111122271.0A Pending CN114020601A (en) | 2021-09-24 | 2021-09-24 | Automatic test method and tool for data verification after data return |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114020601A (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101196926A (en) * | 2007-12-29 | 2008-06-11 | 中国建设银行股份有限公司 | Database access platform and access method thereof |
CN102567517A (en) * | 2011-12-28 | 2012-07-11 | 用友软件股份有限公司 | Device and method for issuing data of database |
US20180218030A1 (en) * | 2017-01-31 | 2018-08-02 | Salesforce.Com, Inc. | Systems, methods, and apparatuses for implementing a by orgid command term within a multi-tenant aware structured query language |
CN110399304A (en) * | 2019-07-30 | 2019-11-01 | 江苏恒宝智能系统技术有限公司 | A kind of test method and system of script |
CN110704475A (en) * | 2019-09-29 | 2020-01-17 | 中国银行股份有限公司 | Method and system for comparing ETL loading table structures |
CN110764998A (en) * | 2019-09-06 | 2020-02-07 | 平安健康保险股份有限公司 | Data comparison method, device and equipment based on Django framework and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8386419B2 (en) | Data extraction and testing method and system | |
Zawodny et al. | High Performance MySQL: Optimization, Backups, Replication, Load Balancing & More | |
US9792351B2 (en) | Tolerant and extensible discovery of relationships in data using structural information and data analysis | |
US7933869B2 (en) | Method and system for cloning a tenant database in a multi-tenant system | |
US7421442B2 (en) | System and method for data capture and reporting | |
CA2639549C (en) | Data mapping design tool | |
AU2013202010B2 (en) | Data selection and identification | |
US20080162509A1 (en) | Methods for updating a tenant space in a mega-tenancy environment | |
US20110145210A1 (en) | System and Method for Managing One or More Databases | |
US10445675B2 (en) | Confirming enforcement of business rules specified in a data access tier of a multi-tier application | |
JP2011503681A (en) | Method and system for analysis of systems for matching data records | |
CN110704475A (en) | Method and system for comparing ETL loading table structures | |
EP4006740A1 (en) | Method for indexing data in storage engines, and related device | |
CN109582286B (en) | Freemarker technology-based data normalization verification method and device | |
Pettit et al. | The MySQL Workshop: A practical guide to working with data and managing databases with MySQL | |
US9286335B1 (en) | Performing abstraction and/or integration of information | |
CN114020601A (en) | Automatic test method and tool for data verification after data return | |
CN109670728B (en) | Ship design quality information management system based on database | |
Patil et al. | Data integration problem of structural and semantic heterogeneity: data warehousing framework models for the optimization of the ETL processes | |
ElGamal et al. | An Architecture-Oriented Data Warehouse Testing Approach. | |
Fiorillo | Oracle Database 11gR2 Performance Tuning Cookbook: Over 80 Recipes to Help Beginners Achieve Better Performance from Oracle Database Applications | |
Chaffin et al. | Professional SQL Server 2000 DTS (Data Transformation Services) | |
JP2001265580A (en) | Review supporting system and review supporting method used for it | |
Paygude et al. | Automation of data validation testing for QA in the project of DB migration | |
AU2004202620B2 (en) | Database interactions and applications |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||