CN105989044A - Database verification method and system - Google Patents
- Publication number: CN105989044A
- Application number: CN201510059589.7A
- Authority: CN (China)
- Legal status: Pending
Abstract
The invention discloses a database verification method and system. The method comprises: performing the following data slice verification on each corresponding table of each corresponding database in a primary library and a standby library: acquiring the data slice checksum of the current data slice of the current table to be verified from a source end, acquiring the data slice checksum of the corresponding data slice from a destination end, and judging whether the current data slices of the source end and the destination end are consistent according to whether the acquired checksums are the same; if they are inconsistent, performing row verification on the current data slice; otherwise, taking another unverified data slice of the current table to be verified as the current data slice and repeating the data slice verification. The row verification performed on an inconsistent data slice comprises: acquiring the primary key value and row checksum of each row of the current data slice from the source end, acquiring the primary key value and row checksum of each row of the current data slice from the destination end, judging whether corresponding rows of the source end and the destination end are consistent according to whether the row checksums corresponding to the same primary key value are identical, and recording the primary key values of inconsistent rows.
Description
Technical Field
The present application relates to the field of database technologies, and in particular, to a database verification method and system.
Background
Databases are core components of enterprise IT (Information Technology) infrastructure, and their role in persistent storage is irreplaceable. As services grow and data volume and access traffic keep expanding, a common way to address performance and capacity problems is to deploy a standby database cluster for the primary database cluster, either in the same city or in a remote location; the two are collectively called the primary/standby clusters. The primary and standby clusters synchronize data through the database's native synchronization tool or a third-party synchronization tool, so that the standby cluster can, on the one hand, carry read traffic and improve access performance and, on the other hand, take over data access when the primary cluster fails, achieving disaster tolerance. Data consistency between the primary and standby clusters is therefore particularly important: it is the precondition for the standby cluster to fulfill its role.
An existing tool for checking data consistency between primary and standby clusters, such as mk-table-checksum, relies on the special statement-level replication mode: it executes a data checksum statement in the primary library and inserts the result into an auxiliary table of the primary library; the checksum statement is replicated to the standby library, executed there, and its result is inserted into an auxiliary table of the standby library; the tool then compares the auxiliary tables of the primary and standby libraries to decide whether the data are consistent.
The existing verification tool mk-table-checksum has the following defects:
first, the tool depends on the special statement-level replication mode of MySQL (a database product name), and cannot be used at all if the database does not support statement-level replication;
second, the tool cannot accurately locate the inconsistent data, so the inconsistencies cannot be repaired from the verification result.
Disclosure of Invention
The present application aims to provide a database verification method and a database verification system.
To achieve the above object, the present application discloses a database verification method, which includes:
performing the following data slice verification on each corresponding table of each corresponding database in the primary library and the standby library: acquiring the data slice checksum of the current data slice of the current table to be verified from a source end, acquiring the data slice checksum of the corresponding data slice from a destination end, and judging whether the current data slices of the source end and the destination end are consistent according to whether the acquired checksums are the same; if they are inconsistent, performing row verification on the current data slice; otherwise, taking another unverified data slice of the current table to be verified as the current data slice and continuing the data slice verification;
performing the following row verification on each inconsistent data slice: acquiring the primary key value and row checksum of each row of the current data slice from the source end, acquiring the primary key value and row checksum of each row of the current data slice from the destination end, judging whether corresponding rows of the source end and the destination end are consistent according to whether the row checksums corresponding to the same primary key value are identical, and recording the primary key values of inconsistent rows;
wherein the source end is one of the primary library and the standby library, and the destination end is the other of the primary library and the standby library.
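For illustration, the following minimal Python sketch outlines this two-level flow; the two checksum-fetching callables, their signatures, and the end names "source"/"destination" are hypothetical placeholders, not part of the disclosure.

```python
from typing import Callable, Dict, Iterable, List, Tuple

def verify_table(
    slice_checksum: Callable[[str, Tuple], str],            # (end, slice) -> slice checksum
    row_checksums: Callable[[str, Tuple], Dict[str, str]],  # (end, slice) -> {pk: row checksum}
    slices: Iterable[Tuple],
) -> List[str]:
    """Slice-level check first; row-level check only for inconsistent slices."""
    inconsistent_keys: List[str] = []
    for sl in slices:
        # Data slice verification: one checksum per slice from each end.
        if slice_checksum("source", sl) == slice_checksum("destination", sl):
            continue  # consistent; take the next unverified slice as current
        # Row verification on the inconsistent slice.
        src = row_checksums("source", sl)
        dst = row_checksums("destination", sl)
        for pk in src.keys() | dst.keys():
            if src.get(pk) != dst.get(pk):
                inconsistent_keys.append(pk)  # record the inconsistent row's primary key
    return inconsistent_keys
```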
In addition, after the row check is executed, the method further includes:
acquiring field values of all fields of the inconsistent row from a source end and a destination end according to the primary key value of the inconsistent row;
comparing the field values of all the obtained fields of the inconsistent rows one by one, and finding out the inconsistent fields of the inconsistent rows;
the field names of the inconsistent fields of the inconsistent rows are recorded.
In addition, during the data slice verification, the row count of the current data slice is also acquired from the source end and the destination end, and whether the current data slices of the source end and the destination end are consistent is judged according to whether both the acquired row counts and the data slice checksums are the same.
In addition, before the data slice verification, the method further comprises:
dividing a database verification task for a database cluster into a plurality of verification subtasks, and distributing the verification subtasks to a plurality of data verification modules for execution;
wherein one of the verification subtasks is used for verifying the data of one database in the database cluster, or for verifying the data of one table in a database of the database cluster.
In addition, after the row check is executed, the method further includes:
acquiring database operation information corresponding to the inconsistent row according to the primary key value of the inconsistent row;
judging, according to the database operation information, whether there is a delay between the primary library and the standby library relative to the data verification; if such a delay exists, after waiting a predetermined time interval, re-verifying the corresponding rows in the primary library and the standby library according to the primary key values of the inconsistent rows;
wherein the database operation information comprises: the last update time of the primary library, the last update time of the standby library, and the data verification time.
In order to achieve the above object, the present application also discloses a database verification system, including: a data verification module; wherein:
the data verification module comprises: a data slice verifying unit and a row verifying unit;
the data slice verification unit is used for performing the following data slice verification on each corresponding table of each corresponding database in the primary library and the standby library: acquiring the data slice checksum of the current data slice of the current table to be verified from a source end, acquiring the data slice checksum of the corresponding data slice from a destination end, and judging whether the current data slices of the source end and the destination end are consistent according to whether the acquired checksums are the same; if they are inconsistent, performing row verification on the current data slice; otherwise, taking another unverified data slice of the current table to be verified as the current data slice and continuing the data slice verification;
the row verification unit is used for performing the following row verification on each inconsistent data slice: acquiring the primary key value and row checksum of each row of the current data slice from the source end, acquiring the primary key value and row checksum of each row of the current data slice from the destination end, judging whether corresponding rows of the source end and the destination end are consistent according to whether the row checksums corresponding to the same primary key value are identical, and recording the primary key values of inconsistent rows;
the source end is one of the primary library and the standby library, and the destination end is the other of the primary library and the standby library.
In addition, the data verification module further comprises:
a field verification unit, configured to acquire the field values of all fields of an inconsistent row from the source end and the destination end according to the primary key value of the inconsistent row, compare the acquired field values of the fields one by one, find the inconsistent fields of the inconsistent row, and record the field names of the inconsistent fields.
In addition, during the data slice verification, the data slice verification unit further acquires the row count of the current data slice from the source end and the destination end, and judges whether the current data slices of the source end and the destination end are consistent according to whether both the acquired row counts and the data slice checksums are the same.
In addition, the system also comprises:
a scheduler, configured to divide a database verification task for the database cluster into a plurality of verification subtasks and to distribute the verification subtasks to a plurality of data verification modules for execution;
wherein one of the verification subtasks is used for verifying the data of one database in the database cluster, or for verifying the data of one table in a database of the database cluster.
In addition, the system also comprises:
the recheck module is used for acquiring database operation information corresponding to the inconsistent rows according to the primary key values of the inconsistent rows; judging, according to the database operation information, whether there is a delay between the primary library and the standby library relative to the data verification; and, if such a delay exists, after waiting a predetermined time interval, re-verifying the corresponding rows in the primary library and the standby library according to the primary key values of the inconsistent rows;
wherein the database operation information comprises: the last update time of the primary library, the last update time of the standby library, and the data verification time.
Compared with the prior art, the technical effects that can be obtained by the application include:
(1) by adopting a multi-level verification strategy comprising data slice verification, row verification, and field verification, inconsistent data can be found quickly and located accurately;
(2) data slice verification and row verification acquire and compare checksums, so the tables need not be pulled and matched row by row and field by field, which saves network bandwidth and reduces the cost of verification;
(3) data verification is distributed: a verification task covering a whole database cluster is decomposed into multiple subtasks executed concurrently, which improves verification efficiency and makes full use of computing resources;
(4) verification results are rechecked against the synchronization delay information between the primary library and the standby library, which minimizes false positives and improves the accuracy of data verification.
With the database verification method of the present application, single-table verification of a 15 GB table containing 60 million rows completes in 11 minutes; for a database cluster comprising 64 shard databases with 491 tables per shard and 16 TB of data in total, with 3 concurrent verification processes per shard, verification of the entire data set completes in only 3 hours.
Of course, it is not necessary for any product to achieve all of the above-described technical effects simultaneously.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1 is a schematic diagram of one configuration of a database verification system of the present application;
FIG. 2 is a flow chart of a method of database verification according to an embodiment of the present application;
FIG. 3 is a flowchart of another database verification method according to an embodiment of the present application;
FIG. 4 is a flowchart of another database verification method according to an embodiment of the present application;
FIG. 5 is a flowchart of another database verification method according to an embodiment of the present application;
FIG. 6 is a flowchart of another database verification method according to an embodiment of the present application;
FIG. 7 is a system structure diagram of a database verification system according to an embodiment of the present application.
Detailed Description
Embodiments of the present application will be described in detail with reference to the drawings and examples, so that how to implement technical means to solve technical problems and achieve technical effects of the present application can be fully understood and implemented.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
Before describing the embodiments, the database verification system of the present application is briefly introduced. Fig. 1 is a schematic structural diagram of the database verification system of the present application. The system may comprise: a scheduler, one or more checkers, and a central node; wherein:
the scheduler and the checkers, and the checkers and the central node, may be connected by data lines or by a network (a local area network or a wide area network).
The scheduler comprises: a metadata analysis module and a task scheduling module.
The metadata analysis module is used for acquiring metadata information of the database cluster and dividing the data verification task into a plurality of verification subtasks according to the metadata information;
the metadata information may be stored on the scheduler or on another computer.
The task scheduling module is used for distributing the verification subtasks to the data verification modules of the checkers.
Each data verification module is used for verifying the tables contained in its assigned databases according to the distributed verification subtasks and outputting the data verification results to the central node.
In the present application, to increase verification speed, a plurality of data verification modules may be arranged in the system, and the modules may execute different verification subtasks concurrently.
The verification modules may all be arranged in one checker, as different processes or threads that run simultaneously, or may be arranged in different checkers.
The central node is used for aggregating the data verification results sent by the data verification modules and generating the database cluster verification result.
Description of the embodiments
The following is a further illustration of the implementation of the method of the present application in one embodiment. Fig. 2 is a flowchart illustrating a method of a database verification method according to an embodiment of the present application; in this embodiment, distributed data verification is performed for a master library cluster and a standby library cluster; the method comprises the following steps:
step S200: the metadata analysis module of the scheduler acquires the metadata information of the database cluster according to the name of the database cluster to be verified;
the name of the database cluster may be an identifier (ID) of the database cluster.
Database clusters are divided into primary database clusters and their corresponding standby database clusters; one database cluster may contain a plurality of databases, and different databases may be deployed on different computers.
The metadata information includes: database name, database address.
The database name is a name of a database included in the database cluster. The name of the database may be an Identifier (ID) of the database.
The database address may include an IP (internet protocol) address and a port number of the database corresponding to the database name.
Step S202: the metadata analysis module of the scheduler divides the data verification task into a plurality of verification subtasks according to the metadata information of the database cluster, and generates a subtask list containing the verification subtasks;
in this embodiment, the verification subtasks may be divided in units of databases, that is, each subtask verifies one database of the primary and standby clusters (i.e., verifies all tables in that database); the subtasks may also be divided in units of database tables (tables for short), that is, each subtask verifies one table in one database of the primary and standby clusters. A hypothetical sketch of this division follows.
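As a purely illustrative sketch of this division step, the Python snippet below builds a subtask list from cluster metadata; the metadata shape and field names are assumptions, not part of the disclosure.

```python
def build_subtasks(metadata, per_table=False):
    """Divide a cluster-level verification task into per-database or per-table subtasks.

    metadata: e.g. [{"db": "db_00", "addr": "10.0.0.1:3306", "tables": ["t1", "t2"]}]
    (an assumed shape, for illustration only)."""
    subtasks = []
    for db in metadata:
        if per_table:
            # One verification subtask per table of the database.
            subtasks += [{"db": db["db"], "addr": db["addr"], "table": t}
                         for t in db["tables"]]
        else:
            # One verification subtask per database (all of its tables).
            subtasks.append({"db": db["db"], "addr": db["addr"], "table": None})
    return subtasks
```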
Step S204: the task scheduling module of the scheduler traverses the subtask list and schedules the verification tasks, that is, distributes each verification subtask to a data verification module for execution;
in this embodiment, the scheduler and the checkers are different computers connected via a network, so the scheduler schedules the verification tasks remotely, that is, distributes the verification subtasks to the data verification modules over the network.
Step S206: each data verification module executes its verification subtask, performs data verification on the corresponding databases in the primary and standby clusters, and sends the data verification results to the central node;
each data verification module verifies the database tables (tables for short) contained in its corresponding database according to the distributed verification subtask; the specific steps are described in detail below.
Step S208: the central node stores the data verification results sent by the data verification modules and aggregates them to generate the database cluster verification result;
the database cluster verification result includes, for the verified database cluster: the names of the inconsistent tables, the names of the databases to which the inconsistent tables belong, and the inconsistent information list of each inconsistent table.
The inconsistent information list includes: the primary key values of the inconsistent rows (also called records); or: the primary key values of the inconsistent rows together with the field names of the inconsistent fields.
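For concreteness, one possible shape of such a cluster verification result is sketched below; every identifier and value is invented for illustration and not prescribed by the method.

```python
# Hypothetical shape of a database cluster verification result (step S208).
cluster_result = {
    "inconsistent_tables": [
        {
            "database": "db_03",          # database to which the table belongs
            "table": "t_order",           # name of the inconsistent table
            "inconsistent_info": [
                {"pk": 10023},                                  # row-level record
                {"pk": 10157, "fields": ["status", "mtime"]},   # with field names
            ],
        },
    ],
}
```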
The implementation of the method of the present application is further illustrated in a second embodiment. As shown in fig. 3, is a flowchart of another method for verifying a database according to an embodiment of the present application, where the method includes:
step S300: performing the following data slice verification on each corresponding table of each corresponding database in the primary library and the standby library: acquiring the data slice checksum of the current data slice of the current table to be verified from a source end, acquiring the data slice checksum of the corresponding data slice from a destination end, and judging whether the current data slices of the source end and the destination end are consistent according to whether the acquired checksums are the same; if they are inconsistent, performing row verification on the current data slice; otherwise, taking another unverified data slice of the current table to be verified as the current data slice and continuing the data slice verification.
Step S302: performing the following row verification on each inconsistent data slice: acquiring the primary key value and row checksum of each row of the current data slice from the source end, acquiring the primary key value and row checksum of each row of the current data slice from the destination end, judging whether corresponding rows of the source end and the destination end are consistent according to whether the row checksums corresponding to the same primary key value are identical, and recording the primary key values of inconsistent rows;
the source end is one of the primary library and the standby library, and the destination end is the other of the primary library and the standby library.
The method of the present application is further illustrated in a third embodiment. Fig. 4 is a flowchart illustrating another method for verifying a database according to an embodiment of the present disclosure; in the embodiment, data verification is performed on the tables in the main library and the standby library, and inconsistent data are accurately positioned to the inconsistent field of each inconsistent row in the tables; the method comprises the following steps:
step S400: dividing the records (rows) contained in the table to be verified at the source end (e.g., the primary library) into a plurality of data slices;
in this step, a fixed-length fragmentation mode may be adopted for slicing, for example: denote the total number of records of the table to be verified as N and the preset data slice length as Ns; the table to be verified may then be sorted by primary key and divided into ⌊N/Ns⌋ data slices, or into ⌈N/Ns⌉ data slices, where ⌊·⌋ denotes rounding down and ⌈·⌉ denotes rounding up.
Primary keys fall into two categories: single primary keys and composite primary keys. A single primary key contains only one field; a composite primary key contains multiple fields.
Besides the fixed-length fragmentation mode, slicing may also be performed in a field value range fragmentation mode, that is, according to the value range of one or more fields (usually primary key fields) of the table to be verified (a sketch of both slicing modes follows the example below).
For example, suppose the table to be verified contains a user identifier field whose values fall in the range [ID1, IDn+1]. The table can then be divided into data slices by the value range of that field, for example:
records whose user identifier value lies in the interval [ID1, ID2) form the 1st data slice;
records whose user identifier value lies in the interval [ID2, ID3) form the 2nd data slice;
…
records whose user identifier value lies in the interval [IDn, IDn+1) form the nth data slice;
where "[" denotes a closed endpoint and ")" denotes an open endpoint.
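The sketch below illustrates both slicing modes under the notation above; the helper names are assumptions introduced only for illustration.

```python
import math

def fixed_length_slice_count(n_rows: int, slice_len: int) -> int:
    """Fixed-length mode: N rows with slice length Ns yield ceil(N / Ns)
    slices when the shorter remainder is kept as its own slice."""
    return math.ceil(n_rows / slice_len)

def range_slices(boundaries: list) -> list:
    """Field value range mode: half-open intervals [ID1, ID2), ..., [IDn, IDn+1)
    built from the sorted boundary values of the chosen field."""
    return [(boundaries[i], boundaries[i + 1]) for i in range(len(boundaries) - 1)]

# e.g. range_slices([0, 1000, 2000, 3000]) -> [(0, 1000), (1000, 2000), (2000, 3000)]
```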
Step S402: send a query request to the source end so as to acquire the row count and data slice checksum of the current data slice to be verified from the source end, or to acquire only the data slice checksum of the current data slice to be verified from the source end;
in this step, the data slice checksum may be calculated at the source end using the aggregation function and the checksum function provided by the database system.
An aggregation function computes a single result over a group of rows; aggregation functions include AVG (average of a group), COUNT (number of items in a specified group), SUM (sum of specified data), and the like.
The checksum function may be a HASH (HASH) function, such as an MD5(Message Digest 5) function.
MD5 is a hash function widely used in the field of computer security to provide integrity protection for messages. In the field of databases, hash values may be generated for data by the MD5 function.
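As one concrete possibility — an assumption using MySQL-flavored SQL with an invented table t and columns id, col_a, col_b — the slice checksum and the per-row checksums might be computed inside the database like this:

```python
# Slice checksum: MD5 hashes each row; CRC32 + BIT_XOR fold the row hashes
# into one order-independent aggregate, returned together with COUNT(*).
# Table and column names are invented; the statements are MySQL-flavored.
SLICE_CHECKSUM_SQL = """
SELECT COUNT(*) AS row_count,
       BIT_XOR(CRC32(MD5(CONCAT_WS('#', id, col_a, col_b)))) AS slice_checksum
FROM t
WHERE id BETWEEN %s AND %s
"""

# Per-row checksums, fetched only for slices whose checksums differ.
ROW_CHECKSUM_SQL = """
SELECT id AS pk,
       MD5(CONCAT_WS('#', id, col_a, col_b)) AS row_checksum
FROM t
WHERE id BETWEEN %s AND %s
"""
```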
If the fixed-length fragmentation mode is adopted, in this step, the source end may first query the minimum primary key of the current data fragment to be verified, and send the query request to the source end according to the minimum primary key and the length of the data fragment.
If the primary key is a composite primary key, the minimum primary key may be obtained by sequentially comparing the primary key fields.
In addition, if the fixed-length fragmentation mode is adopted, in this step, the maximum primary key of the current data piece to be verified needs to be queried from the source end, so that the corresponding data piece of the destination end is determined in the subsequent steps by using the minimum primary key and the maximum primary key of the current data piece to be verified.
If the field value range fragmentation mode is adopted, the query request can be sent according to the value range of the field in the step.
Step S404: send a query request to the destination end (for example, the standby library) so as to acquire the row count and data slice checksum of the current data slice to be verified from the destination end, or to acquire only the data slice checksum of the current data slice to be verified from the destination end;
in this step, the data slice checksum may be calculated at the destination using an aggregation function and a checksum function provided by the database system.
If the fixed-length fragmentation mode is adopted, in this step the query request may be sent to the destination end using the minimum primary key and the maximum primary key obtained in step S402, so as to acquire from the destination end the row count and data slice checksum of the data slice corresponding to the source end's data slice to be verified, or only its data slice checksum.
If the field value range fragmentation mode is adopted, in this step the query request may be sent according to the value range of the field, so as to acquire from the destination end the row count and data slice checksum of the data slice corresponding to the source end's data slice to be verified, or only its data slice checksum.
Step S406: compare the row counts and data slice checksums of the current data slice to be verified that were acquired from the source end and the destination end:
if both the row counts and the data slice checksums are the same, the data of the current data slice are consistent; jump to step S422;
if the row counts and/or the data slice checksums acquired from the two ends differ, the data of the current data slice are inconsistent; jump to step S408 and perform row verification on the inconsistent data slice.
In addition, by the nature of hash algorithms such as MD5, when the data slice checksums acquired from the source end and the destination end are the same, the data of the current data slice are consistent with very high probability; when the checksums differ, the data of the current data slice are certainly inconsistent.
Therefore, in this step, consistency may also be judged by comparing the data slice checksums alone: if the checksums acquired from the two ends are the same, the current data slice is consistent and the flow jumps to step S422; if they differ, the flow jumps to step S408 for row verification of the inconsistent data slice.
Of course, comparing both the row count and the slice checksum improves the accuracy of the verification. Moreover, since the checksum is usually much longer than the row count, comparing row counts is faster than comparing checksums.
Step S408: sending a query request to a source end so as to obtain a primary key value and a row checksum of each row of a current data slice to be checked from the source end;
in this step, the row checksum may be calculated at the source end using an aggregation function and a checksum function provided by the database system.
If the fixed-length fragmentation mode is adopted, the query request can be sent to the source end according to the minimum primary key and the maximum primary key of the current data fragment to be verified in the step.
If the field value range fragmentation mode is adopted, the query request can be sent according to the value range of the field in the step.
Step S410: sending a query request to a destination end so as to obtain a primary key value and a row checksum of each row in a current data piece to be verified from the destination end;
in this step, the row checksum may be calculated at the destination using an aggregation function and a checksum function provided by the database system.
If the fixed-length fragmentation mode is adopted, the query request can be sent to the destination end according to the minimum primary key and the maximum primary key of the current data slice to be verified.
If the field value range fragmentation mode is adopted, the query request can be sent to the destination terminal according to the value range of the field in the step.
Step S412: compare the row checksums acquired from the source end and the destination end row by row according to primary key value, find and record the primary key values of the inconsistent rows, then take the first inconsistent row of the current data slice as the current row to be verified and execute the next step (a comparison sketch follows the list below);
the inconsistent row may include:
the source end and the destination end both comprise rows with the same primary key value, but the corresponding row checksums are different;
the source end comprises a row corresponding to a certain primary key value, but the destination end does not have the row corresponding to the primary key value;
the destination end contains a row corresponding to a certain primary key value, but the source end does not have the row corresponding to the primary key value.
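The following sketch — a hypothetical helper — covers all three cases in the list above by taking the union of the primary key sets acquired from both ends:

```python
def inconsistent_pks(src: dict, dst: dict) -> list:
    """Compare {pk: row checksum} maps from the source end and the destination
    end; the union of key sets covers differing checksums, source-only rows,
    and destination-only rows alike."""
    return sorted(pk for pk in src.keys() | dst.keys()
                  if src.get(pk) != dst.get(pk))

# e.g. inconsistent_pks({1: "a", 2: "b"}, {2: "x", 3: "c"}) -> [1, 2, 3]
```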
Step S414: and sending a query request to the source end according to the primary key value of the current row to be checked so as to obtain field values of all fields (columns) of the current row to be checked from the source end.
Step S416: and sending a query request to a destination terminal according to the primary key value of the current row to be checked so as to obtain field values of all fields (columns) of the current row to be checked from the destination terminal.
Step S418: comparing field values of all fields of the current row to be verified acquired from the source end and the destination end one by one, finding out inconsistent fields in the current row to be verified, and writing inconsistent information into an inconsistent information list;
the inconsistent information list includes: primary key value of inconsistent row, field name of inconsistent field.
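A minimal sketch of this field-by-field comparison, with rows represented as field-name-to-value dicts (an assumed representation):

```python
def inconsistent_fields(src_row: dict, dst_row: dict) -> list:
    """Steps S414-S418: compare the field values of one row fetched from the
    two ends and return the names of the fields whose values differ."""
    return [name for name in src_row if src_row[name] != dst_row.get(name)]

# e.g. inconsistent_fields({"id": 1, "status": "PAID"},
#                          {"id": 1, "status": "NEW"}) -> ["status"]
```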
Step S420: judge whether the current row to be verified is the last inconsistent row of the current data slice: if yes, jump to step S422; otherwise, take the next inconsistent row of the current data slice as the current row to be verified and jump to step S414.
Step S422: judge whether the current data slice is the last data slice of the current table to be verified: if yes, the process ends; otherwise, take the next data slice of the current table as the current data slice to be verified and jump to step S402.
The method of the present application is further illustrated by a fourth embodiment. FIG. 5 is a flow chart of another database verification method according to an embodiment of the present disclosure; in the embodiment, data verification is performed on the tables in the main library and the standby library, and inconsistent data are accurately positioned to each inconsistent row in the tables; the method comprises the following steps:
step S500: dividing the records (rows) contained in the table to be verified at the source end (e.g., the primary library) into a plurality of data slices;
similar to step S400, in this step, a fixed-length fragmentation mode may be adopted for fragmentation, or a field value range fragmentation mode may be adopted for fragmentation.
Step S502: send a query request to the source end so as to acquire the row count and data slice checksum of the current data slice to be verified from the source end, or to acquire only the data slice checksum of the current data slice to be verified from the source end;
in this step, the data slice checksum may be calculated at the source end using the aggregation function and the checksum function provided by the database system.
If the fixed-length fragmentation mode is adopted, in this step, the source end may first query the minimum primary key of the current data fragment to be verified, and send the query request to the source end according to the minimum primary key and the length of the data fragment.
In addition, if the fixed-length fragmentation mode is adopted, the maximum primary key of the current data fragment to be verified needs to be queried from the source end in this step.
If the field value range fragmentation mode is adopted, the query request can be sent according to the value range of the field in the step.
Step S504: send a query request to the destination end (for example, the standby library) so as to acquire the row count and data slice checksum of the current data slice to be verified from the destination end, or to acquire only the data slice checksum of the current data slice to be verified from the destination end;
in this step, the data slice checksum may be calculated at the destination using an aggregation function and a checksum function provided by the database system.
If the fixed-length fragmentation mode is adopted, in this step the query request may be sent to the destination end using the minimum primary key and the maximum primary key obtained in step S502, so as to acquire from the destination end the row count and data slice checksum of the data slice corresponding to the source end's data slice to be verified, or only its data slice checksum.
If the field value range fragmentation mode is adopted, in this step the query request may be sent according to the value range of the field, so as to acquire from the destination end the row count and data slice checksum of the data slice corresponding to the source end's data slice to be verified, or only its data slice checksum.
Step S506: compare the row counts and data slice checksums of the current data slice to be verified that were acquired from the source end and the destination end:
if both the row counts and the data slice checksums are the same, the data of the current data slice are consistent; jump to step S514;
if the row counts and/or the data slice checksums acquired from the two ends differ, the data of the current data slice are inconsistent; jump to step S508 and perform row verification on the inconsistent data slice.
Similarly, in this step, consistency may also be judged by comparing the data slice checksums alone: if the checksums acquired from the two ends are the same, the current data slice is consistent and the flow jumps to step S514; if they differ, the flow jumps to step S508 for row verification of the inconsistent data slice.
Step S508: sending a query request to a source end so as to obtain a primary key value and a row checksum of each row of a current data slice to be checked from the source end;
in this step, the row checksum may be calculated at the source end using an aggregation function and a checksum function provided by the database system.
If the fixed-length fragmentation mode is adopted, the query request can be sent to the source end according to the minimum primary key and the maximum primary key of the current data fragment to be verified in the step.
If the field value range fragmentation mode is adopted, the query request can be sent according to the value range of the field in the step.
Step S510: sending a query request to a destination end so as to obtain a primary key value and a row checksum of each row in a current data piece to be verified from the destination end;
in this step, the row checksum may be calculated at the destination using an aggregation function and a checksum function provided by the database system.
If the fixed-length fragmentation mode is adopted, the query request can be sent to the destination end according to the minimum primary key and the maximum primary key of the current data slice to be verified.
If the field value range fragmentation mode is adopted, the query request can be sent to the destination terminal according to the value range of the field in the step.
Step S512: comparing the row check sums obtained from the source end and the destination end line by line according to the primary key values, searching the primary key values of inconsistent rows, and writing inconsistent information into an inconsistent information list;
the inconsistent row may include:
the source end and the destination end both comprise rows with the same primary key value, but the corresponding row checksums are different;
the source end comprises a row corresponding to a certain primary key value, but the destination end does not have the row corresponding to the primary key value;
the destination end contains a row corresponding to a certain primary key value, but the source end does not have the row corresponding to the primary key value.
The inconsistent information list includes: primary key values of inconsistent rows.
Step S514: judging whether the current data piece to be checked is the last data piece to be checked in the current table to be checked: if yes, the process is ended; otherwise, taking the next data piece to be verified in the current table to be verified as the current data piece to be verified, and jumping to step S502.
The method of the present application is further illustrated in the following fifth embodiment. Fig. 6 is a flowchart illustrating another method for verifying a database according to an embodiment of the present disclosure; in this embodiment, rechecking is performed on inconsistent data between the main library and the standby library, and the rechecking can be started immediately after the table is checked; the method comprises the following steps:
step S600: acquiring the inconsistent information list generated by data verification of the primary library and the standby library;
the above-mentioned inconsistent information list may be generated by the database verification method shown in fig. 4 or fig. 5.
Step S602: acquiring database operation information for judging whether the synchronization of the corresponding database has delay relative to data verification according to the inconsistent information list;
the synchronization of the databases is usually to synchronize the updated data to the backup database after the data in the primary database is updated, and there is a time difference between the data update in the primary database and the corresponding data update in the backup database. If the time for performing data verification on the primary library and the backup library occurs after the data in the primary library is updated and before the corresponding data in the backup library is updated, the synchronization of the databases is delayed relative to the data verification.
The database operation information may include: the last update time of the primary library, the last update time of the standby library, and the data verification time.
The last update times of the primary library and the standby library may be obtained from the database log files of the primary and standby libraries respectively, or by calling the corresponding query interfaces of the database synchronization software.
The data verification time may be obtained from the database log files of the primary and standby libraries, or recorded during data verification and recheck.
The last update times of the primary library and the standby library may refer to the update time of the database as a whole, or to the update time of a single table in the database.
Step S604: judge, according to the database operation information, whether there is a delay between the primary library and the standby library relative to the data verification; if there is a delay, execute step S606; if there is no delay, end the recheck process;
denote the last update time of the primary library as Ta, the last update time of the standby library as Tb, and the data verification time as Tc. Synchronization between the primary library and the standby library is delayed relative to the data verification if either of the following holds:
Ta > Tb: the standby library has not yet synchronized the latest data update of the primary library, so the synchronization necessarily lags the data verification;
Ta < Tc < Tb: the data verification ran after the primary library was last updated but before the standby library applied that update, which likewise indicates a delay relative to the data verification.
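Expressed as code — a sketch under the notation above:

```python
def sync_delayed(t_a: float, t_b: float, t_c: float) -> bool:
    """Delay test of step S604: t_a = primary last update time, t_b = standby
    last update time, t_c = data verification time (comparable timestamps)."""
    return t_a > t_b or t_a < t_c < t_b
```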
Step S606: wait for a predetermined time interval, for example, 5 seconds.
Step S608: acquire the corresponding row data from the primary library and the standby library respectively according to the primary key values of the inconsistent rows contained in the inconsistent information list, compare the row data field by field, and judge whether the corresponding rows are consistent; if they are consistent, update the inconsistent information list, that is, delete the corresponding records from the list.
Step S610: checking whether the inconsistent information list also contains inconsistent row information, if not, ending the rechecking process; otherwise, executing the next step.
Step S612: judge whether the last recheck round has finished; if so, end the recheck process; otherwise, jump to step S602;
to avoid data verification errors caused by the delay between the primary and standby libraries as far as possible, multiple recheck rounds, for example 2 rounds, may be performed (a sketch of the recheck loop follows).
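A sketch of the recheck loop of steps S602-S612; the row-fetching callable and the end names are assumptions:

```python
import time
from typing import Callable, Dict, List

def recheck(suspect_pks: List,
            fetch_row: Callable[[str, object], Dict],
            rounds: int = 2,
            interval: float = 5.0) -> List:
    """Wait out possible primary/standby synchronization delay, then
    re-compare the suspect rows field by field, for a bounded number of rounds."""
    for _ in range(rounds):
        if not suspect_pks:
            break  # inconsistent information list is empty; recheck ends early
        time.sleep(interval)  # wait a predetermined interval, e.g. 5 seconds
        suspect_pks = [pk for pk in suspect_pks
                       if fetch_row("source", pk) != fetch_row("destination", pk)]
    return suspect_pks  # rows still inconsistent after all recheck rounds
```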
The system of the present application is further described below through another embodiment. Fig. 7 is a system structure diagram of a database verification system according to an embodiment of the present application. The database verification system includes a data verification module 710, and the data verification module 710 includes: a data slice verification unit 711 and a row verification unit 712; wherein:
the data slice verification unit is used for performing the following data slice verification on each corresponding table of each corresponding database in the primary library and the standby library: acquiring the data slice checksum of the current data slice of the current table to be verified from a source end, acquiring the data slice checksum of the corresponding data slice from a destination end, and judging whether the current data slices of the source end and the destination end are consistent according to whether the acquired checksums are the same; if they are inconsistent, performing row verification on the current data slice; otherwise, taking another unverified data slice of the current table to be verified as the current data slice and continuing the data slice verification;
during the data slice verification, the data slice verification unit may further acquire the row count of the current data slice from the source end and the destination end, and judge whether the current data slices of the two ends are consistent according to whether both the acquired row counts and the data slice checksums are the same.
The row verification unit is used for performing the following row verification on each inconsistent data slice: acquiring the primary key value and row checksum of each row of the current data slice from the source end, acquiring the primary key value and row checksum of each row of the current data slice from the destination end, judging whether corresponding rows of the source end and the destination end are consistent according to whether the row checksums corresponding to the same primary key value are identical, and recording the primary key values of inconsistent rows;
the source end is one of the primary library and the standby library, and the destination end is the other of the primary library and the standby library.
In addition, the data verification module further comprises:
a field verification unit, configured to acquire the field values of all fields of an inconsistent row from the source end and the destination end according to the primary key value of the inconsistent row, compare the acquired field values of the fields one by one, find the inconsistent fields of the inconsistent row, and record the field names of the inconsistent fields.
In addition, the system also comprises:
a scheduler, configured to divide a database verification task for the database cluster into a plurality of verification subtasks and to distribute the verification subtasks to a plurality of data verification modules for execution;
wherein one of the verification subtasks is used for verifying the data of one database in the database cluster, or for verifying the data of one table in a database of the database cluster.
In addition, the system also comprises:
the recheck module is used for acquiring database operation information corresponding to the inconsistent rows according to the primary key values of the inconsistent rows; judging, according to the database operation information, whether there is a delay between the primary library and the standby library relative to the data verification; and, if such a delay exists, after waiting a predetermined time interval, re-verifying the corresponding rows in the primary library and the standby library according to the primary key values of the inconsistent rows;
wherein the database operation information comprises: the last update time of the primary library, the last update time of the standby library, and the data verification time.
The system corresponds to the description of the method flow, and for a more detailed description, reference is made to the description of the method flow, which is not repeated.
The foregoing description shows and describes several preferred embodiments of the present application. As noted above, it is to be understood that the application is not limited to the forms disclosed herein, is not to be construed as excluding other embodiments, and is capable of use in various other combinations, modifications, and environments, and of changes within the scope of the inventive concept expressed herein, commensurate with the above teachings or with the skill or knowledge of the relevant art. Modifications and variations made by those skilled in the art without departing from the spirit and scope of the application are intended to fall within the protection of the appended claims.
Claims (10)
1. A database verification method, the method comprising:
performing the following data slice verification on each corresponding table of each corresponding database in a primary library and a standby library: acquiring the data slice checksum of the current data slice of the current table to be verified from a source end, acquiring the data slice checksum of the corresponding data slice from a destination end, and judging whether the current data slices of the source end and the destination end are consistent according to whether the acquired checksums are the same; if they are inconsistent, performing row verification on the current data slice; otherwise, taking another unverified data slice of the current table to be verified as the current data slice and continuing the data slice verification;
performing the following row verification on each inconsistent data slice: acquiring the primary key value and row checksum of each row of the current data slice from the source end, acquiring the primary key value and row checksum of each row of the current data slice from the destination end, judging whether corresponding rows of the source end and the destination end are consistent according to whether the row checksums corresponding to the same primary key value are identical, and recording the primary key values of inconsistent rows;
wherein the source end is one of the primary library and the standby library, and the destination end is the other of the primary library and the standby library.
2. The method of claim 1,
after the row check is performed, the method further comprises:
acquiring field values of all fields of the inconsistent row from a source end and a destination end according to the primary key value of the inconsistent row;
comparing the acquired field values of the inconsistent row one by one to find the inconsistent fields of the row; and
recording the field names of the inconsistent fields of the inconsistent row.
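A short sketch of this field-by-field comparison, assuming both versions of the row have already been fetched by primary key as dicts:

```python
def inconsistent_fields(source_row, dest_row):
    """Compare field values one by one and return the differing field names."""
    fields = source_row.keys() | dest_row.keys()
    return sorted(f for f in fields if source_row.get(f) != dest_row.get(f))

src = {"id": 7, "status": "PAID", "amount": 100}
dst = {"id": 7, "status": "REFUNDED", "amount": 100}
print(inconsistent_fields(src, dst))    # -> ['status']
```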
3. The method of claim 1,
wherein, during the data slice verification, the row count of the current data slice is further acquired from the source end and the destination end, and whether the current data slices of the source end and the destination end are consistent is judged according to whether both the acquired row counts and the data slice checksums are the same.
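One way to obtain both values in a single round trip is a query in the style below. This is a sketch only: the table and column names are hypothetical, and the COUNT-plus-BIT_XOR-of-CRC32 pattern is a common MySQL checksum idiom rather than the patent's prescribed method.

```python
# Fetch row count and slice checksum together; the slice is consistent only
# when both values match between the source end and the destination end.
SLICE_CHECK_SQL = """
    SELECT COUNT(*) AS row_count,
           COALESCE(BIT_XOR(CRC32(CONCAT_WS('#', id, status, amount))), 0) AS cksum
    FROM orders
    WHERE id BETWEEN %s AND %s
"""

def slices_consistent(source_cursor, dest_cursor, lo, hi):
    source_cursor.execute(SLICE_CHECK_SQL, (lo, hi))
    dest_cursor.execute(SLICE_CHECK_SQL, (lo, hi))
    return source_cursor.fetchone() == dest_cursor.fetchone()
```

Checking the row count alongside the checksum guards against the rare case where two slices with different rows happen to produce the same aggregate checksum.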
4. The method of claim 1,
wherein, before the data slice verification, the method further comprises:
dividing a database verification task for a database cluster into a plurality of verification subtasks, and distributing the verification subtasks among a plurality of data verification modules for execution;
wherein each verification subtask verifies the data of one database in the database cluster, or the data of one table in a database of the cluster.
5. The method of claim 1,
wherein, after the row verification is performed, the method further comprises:
acquiring database operation information corresponding to an inconsistent row according to the primary key value of the inconsistent row; and
judging, according to the database operation information, whether a delay relative to the data verification exists between the primary library and the standby library; and, if such a delay exists, re-verifying the corresponding rows in the primary library and the standby library according to the primary key values of the inconsistent rows after waiting for a preset time interval;
wherein the database operation information comprises the last update time of the primary library, the last update time of the standby library, and the data verification time.
6. A database verification system, comprising: a data verification module; wherein:
the data verification module comprises a data slice verification unit and a row verification unit;
the data slice verification unit is configured to perform the following data slice verification on each corresponding table of each corresponding database in the primary library and the standby library: acquiring the data slice checksum of a current data slice of a current table to be verified from a source end, acquiring the data slice checksum of the corresponding data slice from a destination end, and judging whether the current data slices of the source end and the destination end are consistent according to whether the acquired data slice checksums are the same; if the current data slices are inconsistent, performing row verification on the current data slice; otherwise, taking another unverified data slice of the current table to be verified as the current data slice and continuing the data slice verification; and
the row verification unit is configured to perform the following row verification on each inconsistent data slice: acquiring the primary key value and row checksum of each row of the current data slice from the source end, acquiring the primary key value and row checksum of each row of the current data slice from the destination end, judging whether corresponding rows of the source end and the destination end are consistent according to whether the row checksums corresponding to the acquired primary key values are the same, and recording the primary key values of inconsistent rows;
wherein the source end is one of the primary library and the standby library, and the destination end is the other.
7. The system of claim 6,
the data verification module further comprises:
a field verification unit configured to acquire the field values of all fields of an inconsistent row from the source end and the destination end according to the primary key value of the inconsistent row, compare the acquired field values one by one, find the inconsistent fields of the inconsistent row, and record the field names of the inconsistent fields.
8. The system of claim 6,
wherein the data slice verification unit is further configured to acquire, during the data slice verification, the row count of the current data slice from the source end and the destination end, and to judge whether the current data slices of the source end and the destination end are consistent according to whether both the acquired row counts and the data slice checksums are the same.
9. The system of claim 6,
the system also comprises:
a scheduling machine configured to divide a database verification task for the database cluster into a plurality of verification subtasks and to distribute the verification subtasks among a plurality of data verification modules for execution;
wherein each verification subtask verifies the data of one database in the database cluster, or the data of one table in a database of the cluster.
10. The system of claim 6,
the system also comprises:
a rechecking module configured to acquire database operation information corresponding to an inconsistent row according to the primary key value of the inconsistent row; to judge, according to the database operation information, whether a delay relative to the data verification exists between the primary library and the standby library; and, if such a delay exists, to re-verify the corresponding rows in the primary library and the standby library according to the primary key values of the inconsistent rows after waiting for a preset time interval;
wherein the database operation information comprises the last update time of the primary library, the last update time of the standby library, and the data verification time.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510059589.7A CN105989044A (en) | 2015-02-04 | 2015-02-04 | Database verification method and system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510059589.7A CN105989044A (en) | 2015-02-04 | 2015-02-04 | Database verification method and system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN105989044A true CN105989044A (en) | 2016-10-05 |
Family
ID=57037124
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510059589.7A Pending CN105989044A (en) | 2015-02-04 | 2015-02-04 | Database verification method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105989044A (en) |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101452410A (en) * | 2007-12-06 | 2009-06-10 | 中兴通讯股份有限公司 | Data backup system for embedded database, and data backup and recovery method |
CN102354292A (en) * | 2011-09-21 | 2012-02-15 | 国家计算机网络与信息安全管理中心 | Method and system for checking consistency of records in master and backup databases |
Cited By (30)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106709069A (en) * | 2017-01-25 | 2017-05-24 | 焦点科技股份有限公司 | High-reliability big data logging collection and transmission method |
CN106709069B (en) * | 2017-01-25 | 2018-06-15 | 焦点科技股份有限公司 | The big data log collection and transmission method of high reliability |
CN108509328B (en) * | 2017-02-23 | 2021-03-19 | 腾讯科技(深圳)有限公司 | Database checking method and device |
CN108509328A (en) * | 2017-02-23 | 2018-09-07 | 腾讯科技(深圳)有限公司 | Database method of calibration and device |
CN107103077A (en) * | 2017-04-25 | 2017-08-29 | 广东浪潮大数据研究有限公司 | Integrality determines method and system before and after a kind of Data Migration |
CN107103077B (en) * | 2017-04-25 | 2021-05-18 | 广东浪潮大数据研究有限公司 | Method and system for determining integrity before and after data migration |
CN107402970A (en) * | 2017-06-29 | 2017-11-28 | 北京小度信息科技有限公司 | Information generating method and device |
CN107402970B (en) * | 2017-06-29 | 2020-09-08 | 北京星选科技有限公司 | Information generation method and device |
CN109213431B (en) * | 2017-07-04 | 2022-05-13 | 阿里巴巴集团控股有限公司 | Consistency detection method and device for multi-copy data and electronic equipment |
CN109213431A (en) * | 2017-07-04 | 2019-01-15 | 阿里巴巴集团控股有限公司 | The consistency detecting method and device and electronic equipment of more copy datas |
CN107483227A (en) * | 2017-07-11 | 2017-12-15 | 上海精数信息科技有限公司 | Across the public network data transmission system and transmission method of a kind of efficient stable |
CN110209521A (en) * | 2019-02-22 | 2019-09-06 | 腾讯科技(深圳)有限公司 | Data verification method, device, computer readable storage medium and computer equipment |
CN110209521B (en) * | 2019-02-22 | 2022-03-18 | 腾讯科技(深圳)有限公司 | Data verification method and device, computer readable storage medium and computer equipment |
CN109960613A (en) * | 2019-03-11 | 2019-07-02 | 中国银联股份有限公司 | A kind of method and device of data batch processing |
CN112579591B (en) * | 2019-09-30 | 2023-06-16 | 重庆小雨点小额贷款有限公司 | Data verification method, device, electronic equipment and computer readable storage medium |
CN112579591A (en) * | 2019-09-30 | 2021-03-30 | 重庆小雨点小额贷款有限公司 | Data verification method and device, electronic equipment and computer readable storage medium |
CN111125063B (en) * | 2019-12-20 | 2023-09-26 | 无线生活(杭州)信息科技有限公司 | Method and device for rapidly checking data migration among clusters |
CN111125063A (en) * | 2019-12-20 | 2020-05-08 | 无线生活(杭州)信息科技有限公司 | Method and device for rapidly verifying data migration among clusters |
CN111563088A (en) * | 2020-04-20 | 2020-08-21 | 成都库珀区块链科技有限公司 | Data consistency detection method and device |
CN111966882A (en) * | 2020-09-14 | 2020-11-20 | 量子数聚(北京)科技有限公司 | Data import method, device, system and computer readable storage medium |
CN112347189A (en) * | 2020-11-05 | 2021-02-09 | 江苏电力信息技术有限公司 | Cloud computing-based financial data consistency failure discovery and recovery method |
CN112101926B (en) * | 2020-11-19 | 2021-02-26 | 广州博士信息技术研究院有限公司 | Intelligent payment method and system for patent annual fee |
CN112101926A (en) * | 2020-11-19 | 2020-12-18 | 广州博士信息技术研究院有限公司 | Intelligent payment method and system for patent annual fee |
CN113010609A (en) * | 2020-12-23 | 2021-06-22 | 上海海鼎信息工程股份有限公司 | Differentiated synchronization method and system applied to store operation |
CN113010609B (en) * | 2020-12-23 | 2023-05-16 | 上海海鼎信息工程股份有限公司 | Differentiated synchronization method and system applied to store operation |
CN113064909B (en) * | 2021-06-03 | 2021-10-22 | 广州宸祺出行科技有限公司 | Data synchronization verification method and device |
CN113064909A (en) * | 2021-06-03 | 2021-07-02 | 广州宸祺出行科技有限公司 | Data synchronization verification method and device |
CN113672604A (en) * | 2021-08-16 | 2021-11-19 | 浙江大华技术股份有限公司 | User data synchronization method, device and system and electronic equipment |
CN114282268A (en) * | 2021-12-10 | 2022-04-05 | 南京国电南自电网自动化有限公司 | Database integrity checking method and device based on SM3 algorithm |
CN116150175A (en) * | 2023-04-18 | 2023-05-23 | 云账户技术(天津)有限公司 | Heterogeneous data source-oriented data consistency verification method and device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105989044A (en) | Database verification method and system | |
US11288144B2 (en) | Query optimized distributed ledger system | |
CN109739929B (en) | Data synchronization method, device and system | |
CN106815218B (en) | Database access method and device and database system | |
US20150032759A1 (en) | System and method for analyzing result of clustering massive data | |
CN113111129B (en) | Data synchronization method, device, equipment and storage medium | |
CN110515927B (en) | Data processing method and system, electronic device and medium | |
WO2019001017A1 (en) | Inter-cluster data migration method and system, server, and computer storage medium | |
CN105205154B (en) | Data migration method and device | |
US10949401B2 (en) | Data replication in site recovery environment | |
CN106899654B (en) | Sequence value generation method, device and system | |
CN105550229A (en) | Method and device for repairing data of distributed storage system | |
CN104598459A (en) | Database processing method and system and data access method and system | |
WO2020199713A1 (en) | Data verification method, system, apparatus, and device | |
CN112579692B (en) | Data synchronization method, device, system, equipment and storage medium | |
CN107832446B (en) | Configuration item information searching method and computing device | |
CN110928891B (en) | Data consistency detection method, device, computing equipment and medium | |
WO2017113694A1 (en) | File synchronizing method, device and system | |
CN111625396A (en) | Backup data verification method, server and storage medium | |
CN108399175A (en) | A kind of storage of data, querying method and its device | |
CN112559857A (en) | Redis-based crowd pack application method and system, electronic device and storage medium | |
CN112579591B (en) | Data verification method, device, electronic equipment and computer readable storage medium | |
WO2019001021A1 (en) | Data processing method, apparatus and system, server, and computer storage medium | |
CN111857981A (en) | Data processing method and device | |
CN113468143A (en) | Data migration method, system, computing device and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | | Application publication date: 20161005 |