US20220229727A1 - Encoding and storage node repairing method for minimum storage regenerating codes for distributed storage systems - Google Patents
- Publication number
- US20220229727A1 (application US 17/532,904)
- Authority
- US
- United States
- Prior art keywords
- chunks
- output
- data
- storage nodes
- segment
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- G06F11/1076—Parity data used in redundant arrays of independent storages, e.g. in RAID systems
- G06F11/1044—Adding special bits or symbols to the coded information in individual solid state devices with specific ECC/EDC distribution
- G06F3/0608—Saving storage space on storage systems
- G06F3/0619—Improving the reliability of storage systems in relation to data integrity, e.g. data losses, bit errors
- G06F3/0641—De-duplication techniques
- G06F3/0647—Migration mechanisms
- G06F3/0689—Disk arrays, e.g. RAID, JBOD
- G06F7/588—Random number generators, i.e. based on natural stochastic processes
- H03M13/1148—Structural properties of the code parity-check or generator matrix
- H03M13/1515—Reed-Solomon codes
- H03M13/154—Error and erasure correction, e.g. by using the error and erasure locator or Forney polynomial
- H03M13/373—Decoding with erasure correction and erasure determination, e.g. for packet loss recovery or setting of erasures for the decoding of Reed-Solomon codes
- H03M13/6312—Error control coding in combination with data compression
- H04L9/0643—Hash functions, e.g. MD5, SHA, HMAC or f9 MAC
Definitions
- a distributed storage system may require many hardware devices, which often results in component failures that will require recovery operations. Moreover, components in a distributed storage system may become unavailable, such as due to poor network connectivity or performance, without necessarily completely failing. Data loss can occur during standard IT procedures such as migration, or through malicious attacks via ransomware or other malware.
- Ransomware is a type of malicious software from cryptovirology that threatens to publish the victim's data or perpetually block access to it unless a ransom is paid.
- Advanced malware uses a technique called cryptoviral extortion, in which it encrypts the victim's files, making them inaccessible, and demands a ransom payment to decrypt them.
- redundancy measures are often introduced to protect data against storage node failures and outages, or other impediments. Such measures can include distributing data with redundancy over a set of independent storage nodes.
- Replication is often used in distributed storage systems to provide fast access to data.
- Triple replication can suffer from very low storage efficiency which, as used herein, generally refers to a ratio of an amount of original data to an amount of actually stored data, i.e., data with redundancy.
- Error-correcting coding provides an opportunity to store data with relatively high storage efficiency, while simultaneously maintaining an acceptable level of tolerance against storage node failure.
- relatively high storage efficiency can be achieved by maximum distance separable (MDS) codes, such as, but not limited to, Reed-Solomon codes. Long MDS codes, however, can incur prohibitively high repair costs.
- a class of regenerating codes was particularly proposed to provide efficient repair of failed storage nodes in distributed storage systems.
- Two sub-classes of regenerating codes are minimum-storage regenerating (MSR) codes and minimum-bandwidth regenerating (MBR) codes.
- security may be provided by data encryption; however, the computational complexity of data encryption is high, and maintaining keys continues to be an operational issue.
- Alternative approaches can include mixing the original data and dispersing it among different locations so that any amount of original data can be reconstructed only by accessing at least a pre-defined number of storage nodes. This pre-defined number of storage nodes is chosen so that the probability that a malicious adversary is able to access all of these nodes is negligible.
- a system and method for distributing data of a plurality of files over a plurality of respective remote storage nodes, the method comprising:
- step of data splitting provides data within a respective segment that comprises a part of one individual file or several different files.
- step of segment preprocessing comprises one or several of the following transformations: deduplication, compression, encryption and fragmentation.
- the step of segment preprocessing includes encryption, wherein one or several parts of a segment are encrypted individually, or a segment is encrypted entirely.
- the step of segment preprocessing includes fragmentation consisting of data partitioning and encoding, wherein fragmentation encoding is a function of one or several of the following: random (pseudo-random) values, values derived from original data (e.g. derived using deterministic cryptographic hash) and predetermined values.
- step of encoding employs supplementary inputs given by random data, values derived from original data (e.g. derived using deterministic hash) or predetermined values.
- the step of encoding comprises applying erasure coding to k input chunks to produce n output chunks, where erasure coding is performed using a linear block error correction code in such a way that t highly sensitive input chunks may be reconstructed only as a function of at least k output chunks (any k output chunks are suitable), while (v − t) frequently demanded input chunks may be reconstructed as a copy of a related output chunk, as well as a function of any other k output chunks.
- method for erasure coding utilizes a maximum distance separable (MDS) error-correction code, and encoding is performed using a k × n generator matrix G comprising (k − p) columns of the k × k identity matrix, where 0 ≤ t ≤ p ≤ k and v − t ≤ k − p, while the other columns form a k × (n + p − k) matrix such that any of its square submatrices is nonsingular.
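A minimal sketch of such a generator matrix, assuming arithmetic over the prime field GF(257) for readability (the patent's codes would typically operate over GF(2^8)) and using a Cauchy block for the non-identity columns — one standard construction in which every square submatrix is nonsingular. All function names are illustrative:

```python
P = 257  # prime field GF(257); a sketch, production codes typically use GF(2^8)

def inv(a: int) -> int:
    return pow(a, P - 2, P)  # Fermat inverse in GF(P)

def cauchy(rows: int, cols: int) -> list:
    # C[i][j] = 1 / (x_i - y_j) with all x_i, y_j distinct:
    # every square submatrix of a Cauchy matrix is nonsingular.
    xs = list(range(rows))
    ys = list(range(rows, rows + cols))
    return [[inv((xs[i] - ys[j]) % P) for j in range(cols)] for i in range(rows)]

def generator(k: int, n: int, p: int) -> list:
    # First k-p columns are identity columns, so the first k-p output chunks
    # are verbatim copies of the corresponding (frequently demanded) input
    # chunks; the remaining n-(k-p) columns form the Cauchy block.
    C = cauchy(k, n - (k - p))
    return [[1 if i == j else 0 for j in range(k - p)] + C[i] for i in range(k)]

def encode(G: list, chunks: list) -> list:
    # chunks: k input chunks, each a list of symbols in 0..P-1;
    # output chunk j is the j-th column of G applied to the inputs.
    k, n, m = len(G), len(G[0]), len(chunks[0])
    return [[sum(G[i][j] * chunks[i][s] for i in range(k)) % P
             for s in range(m)] for j in range(n)]
```

For example, with k = 4, n = 6, p = 2, output chunks 0 and 1 are copies of input chunks 0 and 1, while the remaining four outputs are dense combinations of all inputs.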
- employed MDS error-correction code is a Reed-Solomon code.
- step of assigning output chunks to storage nodes comprises selection of trusted storage nodes (e.g. in private storage) and mapping frequently demanded output chunks to these trusted storage nodes.
- step of assigning output chunks to storage nodes comprises selection of highly available storage nodes, mapping frequently demanded output chunks to these storage nodes, and encrypting frequently demanded output chunks individually prior to transmission, where highly available storage nodes demonstrate high average data transfer speed and low latency.
- step of retrieving data (at least a part of at least one of the plurality of files) comprises
- requested data is contained only in frequently demanded input chunks.
- requested data may be retrieved by downloading only corresponding frequently demanded output chunks.
- a system and method for distributing data of a plurality of files over a plurality of respective remote storage nodes, the method comprising:
- step of data retrieval comprising transferring from storage nodes any k out of k+r output multi-chunks produced from the same segment and reconstruction of the corresponding data segment as a function of these output multi-chunks.
- step of encoding comprises
- encoder for the stage 1 of erasure coding is such that
- encoder for the stage 2 of erasure coding is such that
- step of optional data mixing is such that erasure coding integrated with data mixing ensures that any piece of data segment may be reconstructed only from pieces of at least k output multi-chunks produced from the same segment.
- step of data mixing comprises
- FIG. 1 is a schematic block diagram illustrating a distributed storage system interacting with client applications.
- FIG. 2 is a schematic block diagram illustrating encoding of files into output chunks transferred to storage nodes.
- FIG. 3 illustrates flexibility of the present invention depending on structure of data being encoded.
- FIG. 4 illustrates design of the employed erasure coding scheme.
- FIG. 5 illustrates an example of reconstruction of a part of a data segment from parts of output chunks received from storage nodes.
- FIG. 6 is a schematic block diagram illustrating a distributed storage system interacting with client applications, in accordance with the present application.
- FIG. 7 is a block-diagram illustrating general design of the erasure coding scheme.
- FIG. 8 shows an example of data splitting and combining steps in accordance with the erasure coding scheme.
- FIG. 9 is a block-diagram illustrating design of encoder for the first stage of erasure coding.
- FIG. 10 is a block-diagram illustrating design of encoder for the second stage of erasure coding.
- FIG. 11 shows an example of data mixing scheme, which may be optionally applied prior to erasure coding.
- FIG. 12 is a block-diagram illustrating operation of failed storage node repair.
- FIG. 13 is a block-diagram illustrating recovering of a parity multi-chunk in case of a single storage node failure.
- FIG. 14 is a block-diagram illustrating recovering of a systematic multi-chunk in case of a single storage node failure.
- the present disclosure is intended to provide reliability, security and integrity for a distributed storage system.
- the present disclosure is based on erasure coding, information dispersal, secret sharing and ramp schemes to assure reliability and security. More precisely, the present disclosure combines ramp threshold secret sharing and systematic erasure coding.
- Reliability (the number of tolerated storage node failures) depends on parameters of erasure coding scheme.
- Security is achieved by means of information dispersal among different storage nodes, which can be public and/or private. Higher security levels are achieved by introducing supplementary inputs into the erasure coding scheme, which results in a ramp threshold scheme. Increasing the amount of supplementary inputs increases the security level.
- Computational security may be further improved by applying optional encryption and/or fragmentation. There is no need to trust either cloud service providers or network data transfers. As for data integrity, two types of hash-based signatures are incorporated in order to verify the honesty of cloud service providers and the correctness of stored data chunks.
- Secret sharing is a particularly interesting cryptographic technique. Its most advanced variants indeed simultaneously enforce data privacy, availability and integrity, while allowing computation on encrypted data.
- a secret sharing scheme transforms sensitive data, called secret, into individually meaningless data pieces, called shares, and a dealer distributes shares to parties such that only authorized subsets of parties can reconstruct the secret.
- In classical secret sharing (e.g. Shamir's scheme), the size of each share is equal to the size of the secret.
- a storage node is typically a datacenter, a physical server with one or more hard-disk drives (HDDs) or solid-state drives (SSDs), an individual HDD or SSD.
- a storage node may be a part of a private storage system or belong to a cloud service provider (CSP).
- Storage nodes are individually unreliable. Applying erasure coding, information dispersal, secret sharing or a ramp scheme enables data reconstruction in case of storage node failures or outages.
- data (secret) may be represented by original client's data or generated metadata, e.g. encryption keys.
- In a (k,n)-threshold secret sharing scheme, any k out of n shares can reconstruct the secret, while any k − 1 or fewer shares leak no information about the secret. Thus, it is possible to reconstruct the secret even if up to n − k storage nodes are unavailable.
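The (k,n)-threshold property can be sketched with Shamir's scheme over a prime field (the field size and function names are our choices for illustration):

```python
import secrets

P = 2**61 - 1  # a Mersenne prime; any prime larger than the secret works

def split(secret: int, k: int, n: int) -> list:
    # Random polynomial of degree k-1 with f(0) = secret; shares are (x, f(x)).
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):        # Horner evaluation mod P
            acc = (acc * x + c) % P
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares: list) -> int:
    # Lagrange interpolation at x = 0 recovers f(0) = secret
    # from any k distinct shares.
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

Any k of the n shares reconstruct the secret; the order of the shares does not matter.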
- the present disclosure combines secret sharing, more precisely, ramp threshold secret sharing and systematic erasure coding.
- ramp secret sharing schemes may be employed, which have a tradeoff between security and storage efficiency.
- the price for increase in storage efficiency is partial leakage of information about relations between parts of secret, e.g. value for a linear combination of several parts of the secret.
- Storage efficiency is computed as amount of original data (secret) divided by the total amount of stored data.
- Storage efficiency of classical secret sharing scheme is equal to 1/n.
- Storage efficiency of ramp threshold scheme varies between 1/n and k/n depending on the amount of introduced supplementary inputs. The highest storage efficiency k/n is achieved when ramp threshold scheme reduces to information dispersal algorithm.
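As a numeric illustration (the parameter name s for the number of supplementary random input chunks is ours, not the patent's):

```python
def ramp_efficiency(k: int, n: int, s: int) -> float:
    # A (k,n) ramp scheme with s supplementary (random) input chunks carries
    # k - s data chunks per n stored chunks: s = 0 is pure information
    # dispersal (efficiency k/n), s = k - 1 is classical secret sharing (1/n).
    assert 0 <= s <= k - 1
    return (k - s) / n
```

For k = 4 and n = 6 this gives 4/6 with no supplementary chunks (pure information dispersal) and 1/6 with three of them (classical secret sharing).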
- Considered security techniques are based on error-correction codes, more precisely, linear block error-correction codes.
- the present disclosure makes use of maximum distance separable (MDS) codes.
- FIG. 1 is a schematic block diagram illustrating a distributed storage system interacting with client applications, in accordance with the present application.
- Original data 103 e.g., files, produced by client applications 102
- Original data 103 is distributed over a set of storage nodes 106
- original data 103 is available to client applications 102 upon request. Any system producing and receiving data on the client side can be considered as an instance of a client application 102 .
- data processing and transmission control are arranged by processing system 101 , located on the client side or in the cloud.
- Processing system 101 transforms original data 103 into output chunks 104 , and vice-versa.
- Output chunks 104 may include none, one or several frequently demanded output chunks 105 in case of original data containing frequently accessed data.
- Storage nodes 106 can operate independently from each other, and can be physically located in different areas. According to the present disclosure, storage nodes 106 may include none, one or several highly available and/or trusted storage nodes 107 , where the number of these nodes 107 is at least equal to the number of frequently demanded output chunks 105 .
- trusted storage node ensures data privacy, while probability of data leakage from untrusted storage node may be significant; highly available storage nodes demonstrate high average data transmission speed and low latency.
- trusted storage nodes may be represented by storage nodes at client's datacenter, any other private storage and/or storage nodes with self-encrypted drives. Reliabilities of storage nodes are supposed to be comparable, i.e. probabilities of storage node failures are supposed to be similar.
- Processing system 101 ensures data integrity, security, protection against data loss, compression and deduplication.
- FIG. 2 is a schematic block diagram illustrating encoding of files into output chunks transferred to storage nodes.
- Encoding of input file 201 consists of two stages: precoding 202 and erasure coding 210 .
- Precoding includes none, one or several of the following optional steps: deduplication, compression, encryption and fragmentation.
- Optional deduplication may be performed at file-level at step 203 , as well as at segment-level at step 206 . With file-level deduplication, space reduction is achieved only when copies of whole files are present. With segment-level deduplication, copies of parts of files are eliminated, so the space reduction is more significant than with file-level deduplication. Segment-level deduplication may be implemented for fixed-size segments or for content-defined flexible-size segments.
- segment boundaries depend on the content of file, which provide opportunity to detect shifted copies of file fragments.
- deduplication and file partitioning are performed simultaneously.
- Optional compression at step 204 or 207 may be performed after deduplication, depending on the client's workload and application requirements. Compression may be either total or selective: in case of total compression, the compression transformation is applied to a whole file or to each data segment of the file independently of file content.
- In case of selective compression, the compression transformation is applied first to a piece of a data segment, and if a reasonable degree of compression is achieved for that piece, then the compression transformation is applied to the whole data segment or the whole file. Compression is performed prior to encryption 208 in order to obtain a fair degree of compression.
- optional encryption is performed at step 208 depending on client preferences, file content and storage node configuration. Encryption is computationally demanding and introduces the problem of key management. In most cases, the present disclosure ensures a sufficient security level without encryption; however, files with an especially high secrecy degree are encrypted. In most cases, encryption is applied to files, segments or parts of segments prior to erasure coding. However, in case of presence of one or more highly available untrusted storage nodes, encryption may be applied to highly demanded output chunks assigned to these storage nodes. An appropriate encryption option is selected depending on the file access pattern. If a partial file read/write operation is needed, then encryption is applied to segments or parts of segments, where the size of the encrypted parts depends on the size of typically requested file fragments.
- fragmentation is applied to a data segment at step 209 prior to erasure coding.
- Fragmentation is employed as a low-complexity operation, which is able to transform data into meaningless form.
- Erasure coding integrated with fragmentation ensures a high level of uniformity, i.e. high entropy, and independence of output chunks. Here independence means that correlation between original data and output chunks is minimized, so that no detectable correlation exists between original data and output chunks, nor between the output chunks themselves.
- it is also ensured that a one-bit change in a seed leads to significant changes in the output chunks.
- Fragmentation may include data partitioning and encoding, wherein fragmentation encoding is a function of one or several of the following: random (pseudo-random) values, values derived from original data (e.g. derived using deterministic cryptographic hash) and predetermined values.
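A minimal sketch of such fragmentation, assuming (our choice, not the patent's) that the pseudo-random values are drawn from a counter-mode SHA-256 keystream. Since masking is XOR with a seed-determined keystream, applying it twice restores the original fragment:

```python
import hashlib

def keystream(seed: bytes, length: int) -> bytes:
    # Pseudo-random bytes from counter-mode SHA-256 of the seed
    # (a sketch, not a vetted cipher).
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def mask(fragment: bytes, seed: bytes) -> bytes:
    # XOR with the keystream; mask() is its own inverse for a fixed seed.
    ks = keystream(seed, len(fragment))
    return bytes(a ^ b for a, b in zip(fragment, ks))

def fragment_segment(segment: bytes, parts: int, seed: bytes) -> list:
    # Partition the segment into nearly equal pieces and mask each
    # piece with a per-fragment seed.
    size = -(-len(segment) // parts)  # ceiling division
    return [mask(segment[i * size:(i + 1) * size], seed + bytes([i]))
            for i in range(parts)]
```

The keystream may equally be seeded from values derived from the original data (e.g. a deterministic cryptographic hash) or from predetermined values, matching the options listed above.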
- fragmentation improves security level almost without sacrificing computational effectiveness.
- each segment is transformed into a number of output chunks, which are further transferred to storage nodes. Data segments are processed individually.
- a data segment may be a part of a single file or it may be a container for several small files. Design of employed erasure coding scheme is described below in more details.
- FIG. 3 illustrates flexibility of the present disclosure depending on structure of data being encoded.
- a data segment 301 produced from one or several files is divided into v chunks and accompanied by k − v chunks containing supplementary inputs 305 , where k ≥ v; supplementary inputs may be random values, values derived from the data segment (e.g. derived using a deterministic cryptographic hash) or predetermined values.
- These k chunks are referred to as input chunks 302 and their encoding result is referred to as output chunks 307.
- the number of output chunks n is not less than the number of input chunks k. As in the case of ramp schemes, by increasing the number of supplementary input chunks 305 one can achieve a higher security level.
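As a toy illustration of the ramp-style input layout described above, the following sketch splits a segment into v data chunks and pads it with k−v random supplementary chunks (the function and parameter names are ours, not the disclosure's):

```python
import os

def build_input_chunks(segment: bytes, k: int, v: int) -> list:
    """Split a segment into v data chunks and append k - v random
    supplementary chunks, as in a ramp scheme layout."""
    assert 0 < v <= k and len(segment) % v == 0
    size = len(segment) // v
    chunks = [segment[i * size:(i + 1) * size] for i in range(v)]
    # supplementary inputs: random here; could also be hash-derived or fixed
    chunks += [os.urandom(size) for _ in range(k - v)]
    return chunks

chunks = build_input_chunks(b"abcdefgh", k=6, v=4)
assert len(chunks) == 6 and chunks[0] == b"ab"
```

The k chunks returned here play the role of the input chunks 302 that are fed to the erasure encoder.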
- a data segment 301 may be represented by a fragment of one file, by the whole file, as well as by several individual files. In the latter case, independent access to files may be needed.
- input chunks are classified as highly sensitive chunks 303 and frequently demanded chunks 304 .
- highly sensitive input chunks 303 contain data which should be stored in unrecognizable manner.
- Highly sensitive input chunks 303 are encoded in such a way that each of them may be reconstructed only as a function of k output chunks (any k output chunks are suitable).
- Frequently demanded input chunks 304 are encoded in such a way that each of these chunks may be reconstructed as a copy of a related output chunk, as well as a function of any other k output chunks.
- Output chunks 307, except frequently demanded output chunks 308, contain only meaningless (unrecognizable) data, which means that these chunks do not contain any copy of the data segment produced from the client's data.
- Access to at least k output chunks is required to reconstruct any highly sensitive chunk, so these chunks are protected even in case of data leakage from any k−1 storage nodes.
- The probability of simultaneous data loss or data leakage from several storage nodes belonging to the same datacenter or cloud service provider is higher than in the case of geographically remote storage nodes maintained by different storage service providers.
- the number of storage nodes allowed to be located near each other or managed by the same owner is limited to be not higher than k−1. For example, this eliminates the possibility of a storage service provider being able to reconstruct a client's data.
- reconstruction of original data imposes an upper limit on the number of simultaneously unavailable storage nodes equal to n−k.
- the number of storage nodes allowed to be located near each other or managed by the same owner is limited to be not higher than n−k. For example, this ensures data reconstruction in case of a ransomware attack on a storage service provider (cloud service provider).
- FIG. 4 illustrates design of the employed erasure coding scheme.
- the design process results in a generator matrix G 405 of a maximum distance separable (MDS) linear block error-correction code C of length n and dimension k, where dimension k is the number of input chunks being encoded and length n>k is the number of output chunks produced from k input chunks.
- The erasure coding scheme is specified by a k×n generator matrix G (of an MDS code) comprising (k−p) columns of the k×k identity matrix, where 0≤p≤k, while the other columns form a k×(n+p−k) matrix such that any square submatrix of it is nonsingular.
- Such a matrix G is further referred to as a selectively mixing matrix.
- This matrix specifies not only underlying error-correction code, but also a particular encoder.
- Parameter p is not lower than the number of highly sensitive input chunks, while (k−p) is not lower than the number of frequently demanded input chunks.
- the input parameters 401 for the erasure coding scheme design are length n and dimension k of the error-correction code and parameter p.
- an MDS linear block code C(parent) of length (n+p) and dimension k is selected at step 402. Any MDS code with the specified parameters is suitable.
- Let G(parent) be a k×(n+p) generator matrix of the code C(parent).
- a generator matrix in systematic form G(parent,syst) is obtained from matrix G(parent) at step 403, where a k×(n+p) matrix in systematic form is a matrix that includes the k×k identity matrix as a submatrix. Indices of identity matrix columns within generator matrix G(parent,syst) are referred to as systematic positions.
- any k positions may be selected as systematic ones.
- p among the k systematic positions are selected and the corresponding p columns are excluded from the k×(n+p) matrix G(parent,syst); as a result, the k×n selectively mixing generator matrix G is obtained.
- the code C generated by matrix G is a punctured code C(parent); since puncturing an MDS code yields an MDS code, code C is also an MDS code.
- a Reed-Solomon code is used as the MDS code C(parent).
- Reed-Solomon codes are widely used to correct errors in many systems including storage devices (e.g. tape, Compact Disk, DVD, barcodes, etc.), wireless or mobile communications (including cellular telephones, microwave links, etc.), satellite communications, digital television/DVB, high-speed modems such as ADSL, xDSL, etc. It is possible to construct a Reed-Solomon code for any given length and dimension.
- There are several ways to perform encoding with a Reed-Solomon code; e.g., a polynomial representation or a vector-matrix representation may be employed.
- a Reed-Solomon code may be generated by a Cauchy matrix concatenated with an identity matrix, or by a Vandermonde matrix.
- a k×n generator matrix G for erasure coding is derived from a k×(n+p) Vandermonde matrix.
- a k×n generator matrix G for erasure coding is derived from a k×(n+p−k) Cauchy matrix concatenated with the k×k identity matrix.
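The identity-plus-Cauchy layout can be sketched as follows over GF(2^8). This is our own minimal field implementation, not code from the disclosure; the key property it relies on is that every square submatrix of a Cauchy matrix is nonsingular:

```python
# GF(2^8) arithmetic via exp/log tables (primitive polynomial 0x11d)
EXP, LOG = [0] * 512, [0] * 256
x = 1
for i in range(255):
    EXP[i], LOG[x] = x, i
    x <<= 1
    if x & 0x100:
        x ^= 0x11d
for i in range(255, 512):
    EXP[i] = EXP[i - 255]

def gf_inv(a):
    return EXP[255 - LOG[a]]

def selectively_mixing_matrix(k, n, p):
    """k x n generator: (k - p) identity columns, then a k x (n + p - k)
    Cauchy block, all of whose square submatrices are nonsingular."""
    cols = [[int(r == c) for r in range(k)] for c in range(k - p)]
    # Cauchy entry 1/(x_r + y_c) with disjoint node sets x_r = r, y_c = k + c
    cols += [[gf_inv(r ^ (k + c)) for r in range(k)] for c in range(n + p - k)]
    return [[cols[c][r] for c in range(n)] for r in range(k)]

G = selectively_mixing_matrix(k=4, n=6, p=2)
assert len(G) == 4 and all(len(row) == 6 for row in G)
assert [G[r][0] for r in range(4)] == [1, 0, 0, 0]  # an identity column
```

The (k−p) identity columns produce the frequently demanded output chunks (verbatim copies of inputs), while the Cauchy columns produce the mixed chunks.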
- FIG. 3 shows a flow diagram of steps executed for erasure encoding of a data segment.
- a data segment 301 is already preprocessed, i.e. deduplication, compression, encryption and/or fragmentation are already applied, if necessary.
- The preprocessed data segment 301 is divided into v≤k input chunks 302, comprising t highly sensitive chunks 303 and v−t frequently demanded chunks 304, 0≤t≤v. The value of t is selected depending on the segment structure and the number of untrusted storage nodes.
- k−v supplementary input chunks 305 are generated to accompany the input chunks produced from the data segment; supplementary inputs may be random values, values derived from original data (e.g. derived using a deterministic cryptographic hash) or predetermined values.
- element size is defined by the error-correction code parameters.
- Generated output chunks are assigned to storage nodes and then transferred to them. Frequently demanded output chunks are assigned to highly available and/or trusted storage nodes, while other output chunks are assigned to untrusted storage nodes. Output chunks produced from the same segment, except frequently demanded output chunks, are considered to be equally important and treated evenly. So, these chunks may be mapped to untrusted storage nodes using any approach, e.g. depending on their index or randomly.
- frequently demanded output chunks are also treated evenly and mapped to highly available and/or trusted storage nodes depending on their index or randomly.
- knowledge of the content/structure of frequently demanded output chunks may be employed to optimize storage node assignment. For example, an output chunk comprising a number of frequently accessed small files may be assigned to the most available trusted storage node, i.e. the storage node with the highest average data transfer speed and low cost.
- a client is allowed to request a file, a data segment or its part.
- a whole segment may be reconstructed from any k output chunks received from storage nodes.
- Requested part of a data segment may be reconstructed from corresponding parts of any k output chunks, where these corresponding parts have the same boundaries (i.e. range of indices) within output chunks. If the requested part of a data segment is contained in one or more frequently demanded input chunks, then it is sufficient to download only these corresponding output chunks from storage nodes (i.e. download the same amount as requested). Thus, low traffic is demonstrated in case of processing requests for frequently demanded input chunks.
- Output chunks stored at untrusted storage nodes are of the same significance for data reconstruction. Chunks to download are selected depending on available network bandwidth, more precisely, on the predicted latency for transferring data from the corresponding storage node to the client's side. In case of output chunks of large size, the present disclosure provides the opportunity to achieve lower latency by downloading parts of more than k output chunks and reconstructing the data segment from them. The total size of these downloaded parts is at least the same as the size of an output chunk multiplied by k, and the number of downloaded bytes with the same index within output chunks is at least k.
- FIG. 5 illustrates an example of reconstruction of a part of a data segment from parts of output chunks received from storage nodes.
- Reconstruction of requested data (at least a part of a data segment 505) from parts of output chunks 502 is performed as follows. First, the range of indices within each input chunk corresponding to the requested data is identified, where boundaries define the range of indices. Second, such parts of output chunks 502 are downloaded from storage nodes 501 that the total size of these parts is equal to the size of the widest range multiplied by k, and the number of parts with the same range of indices within output chunks is equal to k.
- the processing system 503 combines parts with the same range of indices into a vector cS for each set S of k source storage nodes, and then decoder 504 multiplies this vector cS by the inverse of the matrix G(S), where G(S) is a matrix consisting of k columns of the selectively mixing matrix G with indices from the set S.
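The decode step can be sketched as follows, using one symbol per chunk and, for readability, the prime field GF(257) in place of the binary extension field a real deployment would use (the matrix choice and variable names are ours):

```python
P = 257  # prime field for illustration; practical coders use GF(2^8)

def mat_inv(M):
    """Gauss-Jordan inverse of a square matrix over GF(P)."""
    n = len(M)
    A = [M[r][:] + [int(r == c) for c in range(n)] for r in range(n)]
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col] % P)
        A[col], A[piv] = A[piv], A[col]
        scale = pow(A[col][col], -1, P)
        A[col] = [v * scale % P for v in A[col]]
        for r in range(n):
            if r != col and A[r][col]:
                f = A[r][col]
                A[r] = [(A[r][c] - f * A[col][c]) % P for c in range(2 * n)]
    return [row[n:] for row in A]

k, n = 3, 5
G = [[pow(c + 1, r, P) for c in range(n)] for r in range(k)]  # Vandermonde-style
data = [10, 20, 30]                                           # k input symbols
out = [sum(data[r] * G[r][c] for r in range(k)) % P for c in range(n)]

S = [1, 3, 4]                                   # any k output indices suffice
GS = [[G[r][c] for c in S] for r in range(k)]   # k columns of G with indices in S
inv = mat_inv(GS)
cS = [out[c] for c in S]
recovered = [sum(cS[i] * inv[i][r] for i in range(k)) % P for r in range(k)]
assert recovered == data
```

Applying the same inversion to each index position within the chunk parts recovers the requested byte range, which is why only corresponding parts of k output chunks need to be downloaded.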
- Reconstruction of a frequently demanded input chunk may be performed by downloading only related output chunk, i.e. output chunk containing a copy of this input chunk.
- a frequently demanded input chunk may also be reconstructed from any k output chunks. The latter approach is used when the storage node containing the related output chunk is unavailable or the data transmission speed is too low.
- an output chunk received from a storage node may differ from an output chunk initially transferred to this storage node.
- Undetected changes in output chunk lead to errors during segment reconstruction process.
- hash-generated signatures are exploited in order to check integrity of each output chunk in a timely manner.
- Each output chunk is supplied with two signatures, a visible and a hidden one, where the visible signature may be checked prior to downloading a data chunk from a storage node and the hidden signature is checked after data segment reconstruction from a number of output chunks.
- Visible signature helps to detect incorrect or erroneous data (e.g. lost or damaged data) on the cloud service provider side.
- Visible signature is generated individually for a particular output chunk, and it depends only on content of this chunk.
- Rigorous monitoring of stored data is performed in order to reveal any inconsistency in the first place.
- The hidden signature is generated based on a whole segment, and it matches the reconstructed segment only if all output chunks are correct. So, the hidden signature enables one to detect a skillfully replaced output chunk even when the check on the visible signature was successfully passed, e.g. in case of data replacement by a malicious cloud service provider or as a result of an intruder attack.
- homomorphic hash functions are used to compute signatures. A homomorphic hash function allows one to express the hash (signature) of a data block given by a linear combination of several data blocks via the hashes of the data blocks participating in this combination.
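One classical construction with this property is the discrete-log hash H(x) = g^x mod p; the sketch below uses toy, insecure parameters of our own choosing purely to demonstrate the homomorphism:

```python
# Toy homomorphic hash: H(x) = g^x mod p with g of prime order q.
# Then H(a + b) = H(a) * H(b) mod p, so the hash of a linear combination
# of blocks is computable from the hashes of the blocks alone.
p, q, g = 23, 11, 4   # demo-sized group; real schemes use large primes

def H(x):
    return pow(g, x % q, p)

a, b = 7, 9
assert H(a + b) == H(a) * H(b) % p
# the hash of the combination 3a + 2b follows from H(a) and H(b):
assert H(3 * a + 2 * b) == pow(H(a), 3, p) * pow(H(b), 2, p) % p
```

This is why signatures of erasure-coded output chunks, which are linear combinations of input chunks, can be verified against signatures computed over the original data.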
- the present disclosure includes an erasure coding method for distributed storage systems, which ensures high storage efficiency and low repair bandwidth in case of storage node failure.
- the proposed erasure coding scheme may be considered as an instance of a minimum-storage regenerating (MSR) code.
- storage efficiency is the same as in the case of any maximum distance separable (MDS) code, e.g. a Reed-Solomon code.
- For Reed-Solomon codes, the amount of encoded data needed to repair a single storage node failure is equal to the total size of the original data, that is, network traffic is high during a repair operation.
- the erasure coding scheme of the present disclosure is optimized to achieve low repair bandwidth in case of a single storage node failure, where repair bandwidth is measured as the amount of data transferred to repair data contained within failed storage nodes divided by amount of encoded data stored within these failed storage nodes. Low repair bandwidth is provided for both systematic and parity encoded chunks of data.
- the present disclosure on average provides a two-fold reduction of repair bandwidth compared to Reed-Solomon codes.
- the present disclosure demonstrates the same erasure recovering capability as MDS codes, e.g. Reed-Solomon codes.
- FIG. 6 is a schematic block diagram illustrating a distributed storage system interacting with client applications, in accordance with the present application.
- Original data 1103, e.g. files, is produced by client applications 1102.
- Original data 1103 is distributed over a set of storage nodes 1105.
- original data 1103 is available to client applications 1102 upon request. Any system producing and receiving data on the client side can be considered as an instance of a client application 1102.
- processing system 1101 is located on the client side or in the cloud. Processing system 1101 transforms original data 1103 into output multi-chunks 1104, and vice versa.
- Storage nodes 1105 can operate independently from each other, and can be physically located in different areas. Reliabilities of storage nodes are supposed to be comparable, i.e. probabilities of storage node failures are supposed to be similar.
- Processing system 1101 ensures data integrity, protection against data loss, and optionally security, compression and deduplication. Protection against data loss, caused by storage node failures (e.g., commodity hardware failures), is provided by erasure coding. Moreover, erasure coding helps to tolerate storage node outages, while high storage efficiency is provided by selected construction of error-correction code, such as shown and described in greater detail herein.
- Data security is optionally arranged by means of data mixing and dispersing among different locations.
- Storage efficiency may be enhanced by deduplication.
- deduplication can be performed not just for files, but also for small pieces of files, providing an appropriate tradeoff between deduplication complexity and storage efficiency, which can be selected by a client.
- optional compression can be applied to data, depending on respective client preferences.
- the present disclosure includes an erasure coding method minimizing network traffic induced by storage node repair operation, i.e. recovering of data stored at a single failed storage node. Minimization of network traffic leads to the smallest latency and the fastest data recovery.
- FIG. 7 is a block-diagram illustrating general design of the erasure coding scheme.
- a data segment is divided into k information multi-chunks, where k is the number of storage nodes which should be accessed in order to reconstruct original data, wherein the total amount of data transferred from these k storage nodes is equal to the segment size.
- Data security may be optionally arranged by means of data mixing and subsequent dispersal among different locations. Processing of information multi-chunks using data mixing is described below ( FIG. 11 ).
- Input multi-chunks for erasure coding are referred to as systematic multi-chunks, where systematic multi-chunks may be the same as information multi-chunks or be a function of information multi-chunks, as in the case of data mixing.
- consider k systematic multi-chunks as k columns of a table, wherein α rows are distinguished in the table and each row is further divided into r sub-rows, where r is such that the total number of storage nodes is equal to k+r and α is a parameter defined by stage 1 of erasure coding 1203.
- An element at a row/column intersection is referred to as a chunk and an element at a sub-row/column intersection is referred to as a sub-chunk.
- the following notation is used: c g,i,j for an element (sub-chunk) located in the g-th column and j-th sub-row of the i-th row, and c a . . . b,c . . . d,e . . . f for the sub-table with column indices a . . . b, row indices c . . . d and sub-row indices e . . . f.
- The output of the encoding scheme of FIG. 7 is represented by an αr×(k+r) table c 1 . . . k+r,1 . . . α,1 . . . r, which comprises the input data table c 1 . . . k,1 . . . α,1 . . . r and the computed parity data table c k+1 . . . k+r,1 . . . α,1 . . . r, i.e. the output is given by k systematic multi-chunks and r parity multi-chunks.
- Output (systematic and parity) multi-chunks may be assigned to storage nodes using an arbitrary method.
- a storage node may contain systematic and parity multi-chunks produced from different data segments.
- Storage efficiency is equal to k/(k+r), i.e. the same as in the case of maximum distance separable codes, e.g. Reed-Solomon codes.
- the present disclosure provides the best possible storage efficiency for given values of parameters k and r.
- the erasure coding scheme satisfies the following requirements:
- the first requirement means that the employed error-correcting code demonstrates the same property as maximum distance separable (MDS) codes.
- the second requirement means the smallest possible repair bandwidth, i.e. the amount of data transferred during recovery of data stored on a failed storage node is minimized. Observe that the repair bandwidth in the case of a Reed-Solomon code is equal to k.
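A quick numeric comparison of the two repair costs, under parameter values we have assumed for illustration (k = 10, r = 4):

```python
# Normalized repair bandwidth for a single failed node:
k, r = 10, 4
rs_repair = k                  # Reed-Solomon: k nodes' worth of data is read
msr_repair = (k + r - 1) / r   # optimum targeted by the described scheme
assert rs_repair == 10 and msr_repair == 3.25
```

For these parameters the described scheme transfers roughly a third of the traffic a Reed-Solomon repair would, consistent with the claimed reduction.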
- Erasure coding is performed in two stages.
- data is split into r sub-tables c 1 . . . k,1 . . . α,j, 1≤j≤r.
- the stage 1 encoder independently computes a parity sub-table p 1 . . . r,1 . . . α,j for each data sub-table c 1 . . . k,1 . . . α,j, 1≤j≤r.
- Elements (sub-chunks) of p 1 . . . r,1 . . . α,j are further referred to as intermediate parity elements (sub-chunks).
- Encoder for stage 1 is such that
- each r×r sub-table p 1 . . . r,i,1 . . . r is independently transformed by encoders of stage 2 into an r×r sub-table f 1 . . . r,i,1 . . . r.
- Elements (sub-chunks) of f 1 . . . r,i,1 . . . r are further referred to as parity elements (sub-chunks).
- parity sub-chunks are combined into a sub-table f 1 . . . r,1 . . . α,1 . . . r containing r parity multi-chunks.
- these r parity multi-chunks f 1 . . . r,1 . . . α,1 . . . r are combined with k systematic multi-chunks c 1 . . . k,1 . . . α,1 . . . r to obtain the output table c 1 . . . k+r,1 . . . α,1 . . . r.
- Multi-chunks c g,1 . . . α,1 . . . r, 1≤g≤k+r, are further transferred to k+r independent storage nodes.
- Encoder for the stage 2 is such that
- the first requirement means that the MDS property ensured by the stage 1 encoders for k systematic multi-chunks and r intermediate parity multi-chunks still holds, after applying the stage 2 encoders, for k systematic multi-chunks and r parity multi-chunks.
- FIG. 8 shows an example of data splitting and combining steps in accordance with the erasure coding scheme.
- Each multi-chunk c i,1 . . . 4,1 . . . 2 is represented as a rectangle, which consists of squares representing sub-chunks.
- The output of stage 1 of erasure coding 1303, i.e. intermediate parity sub-chunks, is represented by shaded squares.
- FIG. 9 is a block-diagram illustrating design of encoder for the first stage of erasure coding.
- Encoder of the stage 1 operates over data represented as tables.
- the encoder takes as input a table consisting of α rows and k columns and computes a table consisting of α rows and r columns.
- α is divisible by r.
- Elements of the input α×k table are referred to as systematic elements, while elements of the output α×r table are referred to as intermediate parity elements.
- Systematic elements contain original data, while each intermediate parity element is a function of systematic elements.
- each element is given by one or several symbols from Galois field.
- an element may be represented by chunk or sub-chunk consisting of symbols from Galois field.
- the encoder is specified by αr expressions, where each expression is intended for computation of a particular intermediate parity element as a linear combination of systematic elements. Presence of a systematic element with a non-zero coefficient in the expression for an intermediate parity element is denoted as a reference between these elements. The design process comprises two steps (1401 and 1402) related to generation of references between elements and step 1403 related to generation of appropriate coefficients for these references.
- each intermediate parity element has references to all k systematic elements from the same row. That is, each intermediate parity element is a function of at least the k systematic elements from the same row.
- additional inter-row references are generated for r−1 columns of intermediate parity elements such that the following conditions are satisfied:
- each intermediate parity element is a function of k, k+⌊k/r⌋ or k+⌈k/r⌉ systematic elements.
- the specified requirements for references enable recovery of elements of the i-th systematic column from rows belonging to the repair set W(i) and δ i elements from other rows, 1≤i≤k.
- the following objective function is minimized in the 5th requirement:
- repair bandwidth is equal to (k+r−1)/r.
- the repair bandwidth for the i-th systematic column is (k+r−1)/r + δ i /α.
- the average repair bandwidth for a systematic column is (k+r−1)/r + f(δ)/α.
- coefficients for references are generated such that the MDS condition is satisfied. That is, elements of any column may be recovered from elements of any k other columns, i.e. any element in a specified column may be expressed via elements of any k pre-selected columns.
- FIG. 10 is a block-diagram illustrating design of encoder for the second stage of erasure coding.
- The stage 2 encoder also operates over data represented as tables. In view of the two-level encoding, these tables are referred to as sub-tables consisting of columns and sub-rows.
- the encoder takes as input a sub-table consisting of r sub-rows and r columns and computes a sub-table also consisting of r sub-rows and r columns.
- Elements of the input r×r sub-table are referred to as intermediate parity elements (e.g. sub-chunks), while elements of the output r×r sub-table are referred to as parity elements.
- Intermediate parity elements contain the output of the stage 1 encoder, while each parity element is a function of intermediate parity elements.
- a set of r sub-rows is mapped onto a set of r parity columns, thus for each parity index (column) there is a related sub-row.
- references between elements of each sub-row/column pair are generated such that each element in the sub-row is connected to exactly one element in the column and these elements are different, while the element at the sub-row/column intersection stays single.
- expressions for r(r−1) parity elements are given by linear combinations of two intermediate parity elements, while expressions for r parity elements are given by single intermediate parity elements.
- coefficients are generated for each parity element expression such that there exists an inverse transformation for the stage 2 encoding.
- Each sub-row contains one parity element equal to intermediate parity element from the same sub-row and column;
- Each parity column contains one parity element equal to the intermediate parity element from the same sub-row and column;
- Intermediate parity elements may be recovered from parity elements.
- The data mixing scheme is designed for the erasure coding scheme described above. Data is mixed in such a way that erasure coded data satisfies the following condition:
- any piece of original input data may be reconstructed only from pieces of at least k erasure coded multi-chunks.
- FIG. 11 shows an example of data mixing scheme, which may be optionally applied prior to erasure coding.
- data mixing is performed as follows.
- a segment 1601 is divided into k parts referred to as information multi-chunks.
- a segment 1601 is further treated as a data stream of vectors, where each vector consists of k symbols belonging to k different information multi-chunks.
- each vector consisting of k information symbols is multiplied by a k×k non-singular matrix M such that its inverse matrix does not contain any zeros.
- Such a matrix M is referred to as a mixing matrix.
- the same mixing matrix M may be used for all vectors, or different matrices may be used for different vectors. Multiplication results in an output stream of vectors of the same length k.
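A minimal sketch of a matrix meeting the stated requirement (nonsingular, with a zero-free inverse) is M = I + J, identity plus all-ones; the choice of matrix and of the prime field GF(257) is ours, for illustration only:

```python
P, k = 257, 4
# M = identity + all-ones: nonsingular whenever k + 1 != 0 (mod P),
# with closed-form inverse I - J/(k+1), every entry of which is nonzero.
M = [[int(i == j) + 1 for j in range(k)] for i in range(k)]
inv_k1 = pow(k + 1, -1, P)
Minv = [[(int(i == j) - inv_k1) % P for j in range(k)] for i in range(k)]

prod = [[sum(M[i][t] * Minv[t][j] for t in range(k)) % P for j in range(k)]
        for i in range(k)]
assert prod == [[int(i == j) for j in range(k)] for i in range(k)]
assert all(v != 0 for row in Minv for v in row)  # mixing requirement holds
```

Because every entry of the inverse is nonzero, recovering any single information symbol requires all k mixed symbols, which is what ties each piece of original data to pieces of at least k erasure coded multi-chunks.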
- symbols of output data stream are mapped to k systematic multi-chunks 1608 .
- the output stream of vectors is divided into parts 1604, where each part 1604 consists of wr vectors.
- Sequences of wr symbols with the same index within vectors are mapped to k different temporary multi-chunks 1605 in such a way that positions of sequences produced from the same part do not intersect.
- Symbols of the i-th temporary multi-chunk 1605 are mapped to symbols of the i-th systematic multi-chunk 1607; more precisely, the wr symbols of the j-th sequence of the i-th temporary multi-chunk are mapped to the j-th symbols of wr sub-chunks 1606 of the i-th systematic multi-chunk, where 1≤i≤k.
- Produced systematic multi-chunks 1608 are further employed for erasure coding.
- FIG. 12 is a block-diagram illustrating operation of failed storage node repair.
- Repair process includes reconstruction of data stored on failed storage nodes (SNs) and transferring of reconstructed data to new storage nodes.
- Failed SNs are supposed to be detected by monitoring process.
- Identifiers of failed SNs 1709 are input arguments for repair process.
- Failed SN identifiers are employed to retrieve metadata on lost multi-chunks, i.e. multi-chunks erased due to SN failure.
- List of identifiers of these multi-chunks is formed at step 1701 , where identifiers of erased multi-chunks are further employed to retrieve data about parameters of the erasure coding scheme, systematic/parity index of erased multi-chunk and references to other multi-chunks produced from the same segment.
- the process of recovering erased data includes decoding in the employed error-correction code. A decoding schedule is formed depending on the systematic/parity index of erased multi-chunks and parameters of the erasure coding scheme.
- the present disclosure includes a low bandwidth repair method for the case of a single SN failure.
- This method is applied at step 1704 for recovering of each multi-chunk and it comprises two algorithms.
- An appropriate algorithm is selected depending on whether the erased multi-chunk is systematic one or parity (step 1705 ).
- Recovering of an erased parity multi-chunk is performed at step 1706, which is further described in detail by FIG. 13.
- Recovering of an erased systematic multi-chunk is performed at step 1707, which is further described in detail by FIG. 14.
- the number of failed SNs is checked at step 1703 . If more than one storage node has failed, then multiple SN repair is performed at step 1708 . Upon repair completion, acknowledgements 1710 are issued.
- FIG. 13 is a block-diagram illustrating recovering of a parity multi-chunk in case of a single storage node failure.
- The parity index C of the erased multi-chunk 1807 is employed at step 1801 to identify the row R related to the column C within stage 2 of the erasure coding scheme. Recall that each multi-chunk consists of α chunks and each chunk consists of r sub-chunks.
- sub-chunks corresponding to the row R of the α chunks of the k systematic multi-chunks are transferred from SNs.
- the total number of transferred systematic sub-chunks is equal to αk, while the total number of stored systematic sub-chunks is rαk.
- encoding corresponding to stage 1 of erasure coding is performed for the αk sub-chunks, which results in rα intermediate parity sub-chunks.
- These rα intermediate parity sub-chunks are further divided into α groups, where the i-th group consists of the r sub-chunks corresponding to the row R of the i-th chunk, 1≤i≤α.
- Execution of steps 1804 and 1805 results in reconstruction of the i-th chunk. These steps may be performed independently in parallel for different i, 1≤i≤α.
- r−1 parity sub-chunks located in the same positions as the already reconstructed intermediate parity sub-chunks for the i-th chunk are transferred from surviving storage nodes.
- decoding corresponding to stage 2 of erasure coding is performed in order to recover the full i-th chunk of the erased multi-chunk from r intermediate parity sub-chunks and r−1 parity sub-chunks corresponding to the row R.
- reconstructed parity multi-chunk is transferred to the corresponding SN at step 1806 .
- the multi-chunk may be transferred to the SN by chunks or sub-chunks as soon as these chunks or sub-chunks are recovered.
- an acknowledgement 1808 is sent.
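The traffic accounting for this parity repair can be checked numerically; the parameter values below are assumed for illustration:

```python
# Repairing one parity multi-chunk (alpha chunks of r sub-chunks each):
k, r, alpha = 10, 4, 16
# row-R systematic sub-chunks plus r-1 parity sub-chunks per chunk:
transferred = alpha * k + alpha * (r - 1)
lost = alpha * r              # sub-chunks stored on the failed node
assert transferred / lost == (k + r - 1) / r   # repair bandwidth (k+r-1)/r
```

The ratio of transferred to lost data reproduces the (k+r−1)/r repair bandwidth stated earlier, rather than the factor k a Reed-Solomon repair would require.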
- FIG. 14 is a block-diagram illustrating recovering of a systematic multi-chunk in case of a single storage node failure.
- the corresponding repair set W(i) consisting of ⌈α/r⌉ rows is identified according to stage 1 of erasure coding.
- The repair process for the i-th erased systematic multi-chunk comprises two stages. At the first stage (steps 1902 and 1903) the transformation inverse to stage 2 of erasure coding is performed. At the second stage (steps 1904-1906) repair according to stage 1 of the erasure coding scheme is performed.
- r parity chunks related to each row from the repair set W(i) are transferred from SNs.
- r intermediate parity chunks are reconstructed for each row from the repair set W(i).
- Reconstruction is performed by applying the inverse transformation for stage 2 of erasure coding, where a system of r·r linear equations is solved for unknown variables represented by the r·r intermediate parity sub-chunks.
- the system comprises r(r−1) equations with 2 unknown variables each.
- In the second stage of the repair process, operations over ⌈α/r⌉·(k−1)·r systematic sub-chunks and ⌈α/r⌉·r·r reconstructed intermediate parity sub-chunks are performed.
- k−1 systematic chunks for each row from the repair set W(i) are transferred from k−1 surviving SNs.
- Chunks of the erased multi-chunk are further recovered in r steps, where at the j-th step g j chunks are recovered, ⌊α/r⌋ ≤ g j ≤ ⌈α/r⌉, and g 1 is equal to the cardinality of the repair set W(i).
- At the first step, g 1 chunks of the erased multi-chunk are recovered as a result of intra-row decoding performed at step 1905, where decoding in the error-correction code of stage 1 of erasure coding is performed.
- intra-row decoding means that decoding is independently performed for each row from the repair set W(i), where decoding for a row consists of recovering a chunk of the erased multi-chunk from k−1 systematic chunks and one intermediate parity chunk by solving a linear equation. Recall that each chunk consists of r sub-chunks, so operations over chunks may be represented as operations over sub-chunks performed independently in parallel.
- At step 1907, g j intermediate parity chunks of a multi-chunk containing references to g j chunks of the erased multi-chunk are identified, where these intermediate parity chunks are from rows of the repair set W(i).
- chunks of the erased multi-chunk are recovered by solving a system of g j linear equations, where the equations are obtained from expressions employed for stage 1 erasure coding. Chunks of the erased multi-chunk are expressed via other chunks in these expressions. Recall that by design these expressions are such that each of them contains exactly one chunk of the erased multi-chunk and these chunks are not repeated. Finally, the reconstructed systematic multi-chunk is transferred to the corresponding SN at step 1908. Alternatively, the multi-chunk may be transferred to the SN by chunks or sub-chunks as soon as these chunks or sub-chunks are recovered. Upon recovering and transferring the whole reconstructed multi-chunk, an acknowledgement 1909 is sent.
- Original data may be retrieved as follows. First, a list of data segments comprising the original data is identified. For each data segment from the list, k output multi-chunks are transferred from storage nodes, where output multi-chunks are given by systematic and parity multi-chunks. Recall that, according to the present disclosure, the employed erasure coding scheme is such that a data segment may be reconstructed from any k out of (k+r) output multi-chunks. So, any k out of (k+r) output multi-chunks may be selected for each segment. In most cases, output multi-chunks are selected to minimize average latency. Then reconstruction of each data segment is performed as a function of the corresponding k output multi-chunks.
Abstract
The present disclosure is based on erasure coding, information dispersal, secret sharing and ramp schemes to assure reliability and security. More precisely, the present disclosure combines ramp threshold secret sharing and systematic erasure coding.
Description
- This application is based on and claims priority to U.S. Provisional Patent Application No. 62/798,265, filed Jan. 29, 2019, and U.S. Provisional Patent Application No. 62/798,256, filed Jan. 29, 2019, both of which are incorporated by reference as if expressly set forth in their respective entireties herein.
- Distributed storage systems play an important role in management of big data, particularly for data generated at tremendous speed. A distributed storage system may require many hardware devices, which often results in component failures that will require recovery operations. Moreover, components in a distributed storage system may become unavailable, such as due to poor network connectivity or performance, without necessarily completely failing. Data loss can occur during standard IT procedures such as migration, or through malicious attacks via ransomware or other malware. Ransomware is a type of malicious software from cryptovirology that threatens to publish the victim's data or perpetually block access to it unless a ransom is paid. Advanced malware uses a technique called cryptoviral extortion, in which it encrypts the victim's files, making them inaccessible, and demands a ransom payment to decrypt them. Therefore, given that any individual storage node may become unreliable, redundancy measures are often introduced to protect data against storage node failures and outages, or other impediments. Such measures can include distributing data with redundancy over a set of independent storage nodes.
- One relatively simple data protection technique is replication. Replication, particularly triple replication, is often used in distributed storage systems to provide fast access to data. Triple replication, however, can suffer from very low storage efficiency which, as used herein, generally refers to the ratio of the amount of original data to the amount of actually stored data, i.e., data with redundancy. Error-correcting coding, and more particularly erasure coding, provides an opportunity to store data with relatively high storage efficiency, while simultaneously maintaining an acceptable level of tolerance against storage node failure. Thus, relatively high storage efficiency can be achieved by maximum distance separable (MDS) codes, such as, but not limited to, Reed-Solomon codes. Long MDS codes, however, can incur prohibitively high repair costs. In the case of locally decodable codes (LDC), any single storage node failure can be recovered by accessing a pre-defined number of storage nodes and performing corresponding computations; LDC are designed to minimize I/O overhead. In the case of cloud storage systems, minimization of I/O overhead is especially desirable because data transmission can consume many resources, while computational complexity is less significant. In spite of promising theoretical results, few practical constructions of LDC codes exist.
- Another important requirement is related to bandwidth optimization, which leads to reduced latency. The class of regenerating codes was proposed specifically to provide efficient repair of failed storage nodes in distributed storage systems. There are two special sub-classes of regenerating codes: minimum-storage regenerating (MSR) and minimum-bandwidth regenerating (MBR). In the case of MSR codes, storage efficiency is the same as in the case of Reed-Solomon codes, but repair bandwidth achieves its lower bound. In the case of MBR codes, storage efficiency is sacrificed to enable further reduction of repair bandwidth.
- Another consideration of cloud storage systems is a security requirement. In such systems, security may be provided by data encryption; however, the computational complexity of data encryption is high, and maintaining keys remains an operational issue. Alternative approaches mix original data and disperse it among different locations in such a way that any amount of original data can be reconstructed only by accessing not less than a pre-defined number of storage nodes. This pre-defined number of storage nodes is chosen so that the probability that a malicious adversary is able to access all these nodes is negligible.
- In one or more implementations, a system and method are provided for distributing data of a plurality of files over a plurality of respective remote storage nodes, the method comprising:
-
- a. splitting into segments, by at least one processor configured to execute code stored in non-transitory processor readable media, the data of the plurality of files;
- b. preprocessing each segment and then splitting it into v input chunks: t highly sensitive chunks and v−t frequently demanded chunks, where highly sensitive chunks contain data which ought to be stored securely and frequently demanded chunks contain data which ought to be stored in a highly available manner;
- c. encoding, by the at least one processor, the v input chunks (produced from the same segment) together with k−v supplementary input chunks into n output chunks, where none of the n output chunks contains a copy of any fragment of the highly sensitive chunks, while v−t output chunks are given by copies of the v−t frequently demanded input chunks (these output chunks are further referred to as frequently demanded output chunks), n≥k;
- d. assigning, by the at least one processor, output chunks to remote storage nodes, wherein n output chunks produced from the same segment are assigned to n different storage nodes;
- e. transmitting, by the at least one processor, each of the output chunks to at least one respective storage node; and
- f. retrieving, by the at least one processor, at least a part of at least one of the plurality of files by downloading parts of output chunks from storage nodes, where the amount of data transferred from each storage node is optimized to minimize average latency for data reconstruction.
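- Steps (a) and (d) above can be sketched as follows. This is a hypothetical skeleton under our own assumptions (fixed-size segments, round-robin placement); the encoding of step (c) is elided. The point illustrated is only that each segment's n output chunks land on n distinct storage nodes.

```python
def split_into_segments(data: bytes, seg_size: int):
    """Step (a): split data into fixed-size segments (last one may be short)."""
    return [data[i:i + seg_size] for i in range(0, len(data), seg_size)]

def assign_to_nodes(num_segments: int, n: int, num_nodes: int):
    """Step (d): assign each segment's n output chunks to n different
    storage nodes, rotating the starting node for load balance."""
    assert num_nodes >= n                    # n distinct nodes must exist
    plan = []
    for s in range(num_segments):
        start = (s * n) % num_nodes
        plan.append([(start + j) % num_nodes for j in range(n)])
    return plan

segments = split_into_segments(b"0123456789", 4)     # 3 segments
plan = assign_to_nodes(len(segments), n=5, num_nodes=7)
assert all(len(set(nodes)) == 5 for nodes in plan)   # n different nodes each
```

Consecutive indices modulo the node count are distinct whenever n does not exceed the number of nodes, which is exactly the constraint step (d) imposes.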
- In a further aspect of the system and method, wherein the step of data splitting is such that the data within a respective segment comprises a part of one individual file or parts of several different files.
- In a further aspect of the system and method, wherein the step of segment preprocessing comprises one or several of the following transformations: deduplication, compression, encryption and fragmentation.
- In a further aspect of the system and method, wherein the step of segment preprocessing includes encryption, wherein one or several parts of a segment are encrypted in individual manner or a segment is encrypted entirely.
- In a further aspect of the system and method, wherein the step of segment preprocessing includes fragmentation consisting of data partitioning and encoding, wherein fragmentation encoding is a function of one or several of the following: random (pseudo-random) values, values derived from original data (e.g. derived using deterministic cryptographic hash) and predetermined values.
- In a further aspect of the system and method, wherein the step of encoding employs supplementary inputs given by random data, values derived from original data (e.g. derived using deterministic hash) or predetermined values.
- In a further aspect of the system and method, wherein the step of encoding comprises applying erasure coding to k input chunks to produce n output chunks, where erasure coding is performed using a linear block error correction code in such a way that the t highly sensitive input chunks may be reconstructed only as a function of at least k output chunks (any k output chunks are suitable), while the (v−t) frequently demanded input chunks may be reconstructed as a copy of a related output chunk, as well as a function of any other k output chunks.
- In a further aspect of the system and method, wherein the method for erasure coding utilizes a maximum distance separable (MDS) error-correction code and encoding is performed using a k×n generator matrix G comprising (k−p) columns of the k×k identity matrix, where 0≤t≤p≤k and v−t≤k−p, while the other columns form a k×(n+p−k) matrix such that any of its square submatrices is nonsingular.
- In a further aspect of the system and method, wherein a k×n MDS code generator matrix G is obtained as follows
-
- a. Selecting an arbitrary MDS code of length (n+p) and dimension k;
- b. Constructing a k×(n+p) generator matrix in systematic form (i.e. generator matrix, which includes k×k identity matrix as its submatrix);
- c. Excluding p columns of k×k identity matrix from k×(n+p) generator matrix in systematic form to obtain k×n matrix G.
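- Steps (a)-(c) above can be sketched as follows. This is a hedged illustration under our own assumptions: it works over the prime field GF(257) for readability (deployed systems typically use GF(2^m)), builds a Vandermonde-based MDS generator, brings it to systematic form by Gaussian elimination, and then drops p identity columns.

```python
P = 257                                   # prime field (assumption)

def inv(a):
    return pow(a, P - 2, P)               # Fermat inverse in GF(P)

def vandermonde(k, cols):
    """k x cols Vandermonde matrix with distinct evaluation points 1..cols."""
    return [[pow(x, i, P) for x in range(1, cols + 1)] for i in range(k)]

def to_systematic(G):
    """Gauss-Jordan: reduce the first k columns to the identity (step b)."""
    k = len(G)
    G = [row[:] for row in G]
    for c in range(k):
        piv = next(r for r in range(c, k) if G[r][c])
        G[c], G[piv] = G[piv], G[c]
        f = inv(G[c][c])
        G[c] = [x * f % P for x in G[c]]
        for r in range(k):
            if r != c and G[r][c]:
                m = G[r][c]
                G[r] = [(a - m * b) % P for a, b in zip(G[r], G[c])]
    return G

k, n, p_excl = 3, 5, 1
G_big = to_systematic(vandermonde(k, n + p_excl))   # k x (n+p), systematic
G = [row[p_excl:] for row in G_big]                 # step (c): drop p columns
assert len(G[0]) == n
# the remaining k-p identity columns survive in G
assert all(G[i][j] == (1 if i == j + p_excl else 0)
           for i in range(k) for j in range(k - p_excl))
```

Row operations preserve the row space, so the resulting code stays MDS; excluding identity columns shortens the code while keeping every square submatrix of the non-identity part nonsingular.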
- In a further aspect of the system and method, wherein t=v, that is, output chunks do not contain any copy of a fragment of the input chunks produced from a segment, and any fragment of these input chunks may be reconstructed only as a function of at least k output chunks.
- In a further aspect of the system and method, wherein employed MDS error-correction code is a Reed-Solomon code.
- In a further aspect of the system and method, wherein for encoding with Reed-Solomon code employed generator matrix is based on Vandermonde matrix.
- In a further aspect of the system and method, wherein for encoding with Reed-Solomon code employed generator matrix is based on Cauchy matrix concatenated with identity matrix.
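- A generator of the form described above, the identity concatenated with a Cauchy block, can be sketched as follows. The field GF(257) and the evaluation points are our own assumptions for readability; practice uses GF(2^m). Every square submatrix of a Cauchy matrix is nonsingular, which is what makes [I | C] an MDS generator.

```python
P = 257                                  # prime field (assumption)

def inv(a):
    return pow(a, P - 2, P)

def cauchy_generator(k, r):
    """k x (k+r) generator [I | C] with C[i][j] = 1/(x_i + y_j), where
    all x_i are distinct, all y_j are distinct, and x_i + y_j != 0."""
    xs = list(range(k))
    ys = list(range(k, k + r))
    C = [[inv((x + y) % P) for y in ys] for x in xs]
    I = [[int(i == j) for j in range(k)] for i in range(k)]
    return [I[i] + C[i] for i in range(k)]

G = cauchy_generator(4, 2)
assert len(G) == 4 and len(G[0]) == 6
```

With these points every sum x_i + y_j lies strictly between 0 and P, so each Cauchy entry is well defined and nonzero.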
- In a further aspect of the system and method, wherein the step of assigning of output chunks to storage nodes comprises selection of trusted storage nodes (e.g. in private storage) and mapping frequently demanded output chunks to these trusted storage nodes.
- In a further aspect of the system and method, wherein the step of assigning of output chunks to storage nodes comprises selection of highly available storage nodes, mapping frequently demanded output chunks to these storage nodes and encrypting frequently demanded output chunks in individual manner prior to transmission, where highly available storage nodes demonstrate high average data transferring speed and low latency.
- In a further aspect of the system and method, wherein the step of retrieving data (at least a part of at least one of the plurality of files) comprises
-
- a. identifying range of indices within each information chunk corresponding to requested data;
- b. downloading, by the at least one processor, such parts of output chunks from storage nodes that
- i. the total size of these parts is equal to the size of the widest range multiplied by k, and
- ii. the number of parts with the same range of indices within output chunks is equal to k;
- c. reconstructing, by the at least one processor, requested data by performing the following steps:
- for each set S of k source storage nodes
- i. combining parts with the same range of indices into a vector cS, and
- ii. multiplying the vector cS by the inverse of the matrix G(S), where G(S) is a matrix consisting of k columns of the selectively mixing matrix G with indices from the set S.
- In a further aspect of the system and method, wherein requested data is contained only in frequently demanded input chunks. In this case, requested data may be retrieved by downloading only corresponding frequently demanded output chunks. Thus, traffic reduction is achieved compared to general case of data retrieval.
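- The reconstruction via the inverse of G(S) can be sketched end to end. This is a hedged illustration under our own assumptions: the prime field GF(257) instead of a practical GF(2^m), and a Vandermonde generator G, for which any k columns form an invertible matrix.

```python
P = 257                                  # prime field (assumption)

def inv(a):
    return pow(a, P - 2, P)

def mat_vec(M, v):
    return [sum(a * b for a, b in zip(row, v)) % P for row in M]

def mat_inv(M):
    """Gauss-Jordan inversion over GF(P)."""
    k = len(M)
    A = [row[:] + [int(i == j) for j in range(k)] for i, row in enumerate(M)]
    for c in range(k):
        piv = next(r for r in range(c, k) if A[r][c])
        A[c], A[piv] = A[piv], A[c]
        f = inv(A[c][c])
        A[c] = [x * f % P for x in A[c]]
        for r in range(k):
            if r != c and A[r][c]:
                m = A[r][c]
                A[r] = [(a - m * b) % P for a, b in zip(A[r], A[c])]
    return [row[k:] for row in A]

k, n = 3, 5
G = [[pow(x, i, P) for x in range(1, n + 1)] for i in range(k)]  # Vandermonde
data = [10, 20, 30]
# encoding: output chunk j is column j of G dotted with the data vector
out = [sum(G[i][j] * data[i] for i in range(k)) % P for j in range(n)]

S = [1, 3, 4]                                    # any k chunk indices
G_S = [[G[i][j] for i in range(k)] for j in S]   # selected columns, as rows
c_S = [out[j] for j in S]
recovered = mat_vec(mat_inv(G_S), c_S)
assert recovered == data
```

Because the selected columns are stored as rows, c_S equals G_S applied to the data vector, and one inversion recovers the data exactly, for any choice of k indices.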
- In one or more implementations, a system and method are provided for distributing data of a plurality of files over a plurality of respective remote storage nodes, the method comprising:
-
- a. splitting into segments, by at least one processor configured to execute code stored in non-transitory processor readable media, the data of the plurality of files;
- b. optionally applying deduplication, compression and/or encryption to each segment;
- c. splitting each segment into k information multi-chunks and optionally applying data mixing to these information multi-chunks to produce k systematic multi-chunks;
- d. encoding, by the at least one processor, k systematic multi-chunks (produced from the same segment) into r parity multi-chunks, wherein employed erasure coding scheme maximizes storage efficiency, enables reconstruction of the k systematic multi-chunks from any k output multi-chunks and enables recovering of a single output multi-chunk with minimized network traffic, where the set of k+r output multi-chunks comprises k systematic multi-chunks and r parity multi-chunks;
- e. assigning, by the at least one processor, k+r output multi-chunks to remote storage nodes, wherein k+r output multi-chunks produced from the same segment are assigned to k+r different storage nodes;
- f. transmitting, by the at least one processor, each of the output multi-chunks to at least one respective storage node;
- g. storage node repairing, by the at least one processor, wherein at least one output multi-chunk is recovered as a function of parts of other output multi-chunks produced from the same segment, wherein network traffic is minimized; and
- h. retrieving, by the at least one processor, at least a part of at least one of the plurality of files as a function of parts of output multi-chunks.
- In a further aspect of the system and method, wherein the storage node repairing step is such that
-
- a. Recovering of the i-th parity multi-chunk requires accessing a 1/r portion of each of the other k+r−1 output multi-chunks produced from the same segment. The repair bandwidth for the i-th parity multi-chunk is equal to (k+r−1)/r.
- b. Recovering of the i-th systematic multi-chunk requires accessing a 1/r portion of each of the other k+r−1 output multi-chunks produced from the same segment, provided the value of parameter α is sufficiently high; otherwise, τi supplementary sub-chunks are also accessed. The repair bandwidth for the i-th systematic multi-chunk is equal to (k+r−1)/r+τi/α.
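- The bandwidth formulas above can be checked with concrete numbers (the parameter values are illustrative, and τi=0 models the simple case). Bandwidth is measured in units of one multi-chunk; naive Reed-Solomon repair of one multi-chunk would transfer k of them.

```python
def msr_repair_bandwidth(k, r, tau_i=0, alpha=None):
    """Repair bandwidth per the formulas above: (k+r-1)/r for a parity
    multi-chunk, plus tau_i/alpha extra for a systematic multi-chunk."""
    bw = (k + r - 1) / r
    if tau_i:
        bw += tau_i / alpha
    return bw

k, r = 10, 4
naive = k                                # classic MDS repair: k multi-chunks
msr = msr_repair_bandwidth(k, r)         # (10 + 4 - 1) / 4 = 3.25
assert msr < naive                       # far less traffic than naive repair
```

For k=10, r=4 the repair traffic drops from 10 multi-chunks to 3.25, and a systematic repair with τi=2, α=8 adds only 0.25 more.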
- In a further aspect of the system and method, wherein the step of data retrieval comprises transferring from storage nodes any k out of k+r output multi-chunks produced from the same segment and reconstructing the corresponding data segment as a function of these output multi-chunks.
- In a further aspect of the system and method, wherein the step of encoding comprises
-
- a. Representing k systematic multi-chunks as k columns of a table, distinguishing α rows in the table and further distinguishing r sub-rows in each row, where an element at a row/column intersection is referred to as a chunk and an element at a sub-row/column intersection is referred to as a sub-chunk;
- b. Stage 1 of erasure coding, wherein k multi-chunks are encoded into r intermediate parity multi-chunks and the following conditions are satisfied:
- i. k systematic multi-chunks may be reconstructed from any k out of k+r systematic and intermediate parity multi-chunks (MDS requirement);
- ii. Any systematic multi-chunk may be reconstructed with repair bandwidth (k+r−1)/r+τi/α, wherein a 1/r portion of each of the other k+r−1 multi-chunks is transferred from storage nodes together with τi supplementary sub-chunks, and wherein either all or none of the intermediate parity sub-chunks in a sub-row are required.
- c. Stage 2 of erasure coding, wherein r intermediate parity multi-chunks are encoded into r parity multi-chunks and the following conditions are satisfied:
- i. k systematic multi-chunks may be reconstructed from any k out of (k+r) systematic and parity multi-chunks, that is, replacement of the r intermediate parity multi-chunks by the r parity multi-chunks does not affect compliance with the MDS requirement;
- ii. Any parity multi-chunk may be recovered with repair bandwidth (k+r−1)/r, wherein a 1/r portion of each of the other k+r−1 multi-chunks is transferred from storage nodes.
- In a further aspect of the system and method, wherein the encoder for stage 1 of erasure coding is such that
-
- a. Encoding is individually performed for each input α×k sub-table consisting of α rows and k systematic columns, wherein an element of the sub-table may be represented by a single symbol or a sequence of symbols, e.g. a chunk or sub-chunk;
- b. The encoder is specified by α×r expressions, where each expression is intended for computation of a particular intermediate parity element as a linear combination of systematic elements, wherein presence of a systematic element with non-zero coefficient in expression for an intermediate parity element is denoted as reference between these elements;
- c. Each systematic element has at most one inter-row reference, wherein systematic elements with inter-row references are referred as highly-connected and systematic elements without inter-row references are referred as low-connected;
- d. Each intermediate parity element has none, └k/r┘ or ┌k/r┐ references;
- e. Each systematic column has α/r low-connected elements. The set of row indices of the α/r low-connected elements from the i-th systematic column is referred to as the repair set W(i), 1≤i≤k;
- f. Each systematic column has α-α/r highly connected elements;
- g. References of the α−α/r highly connected elements of the i-th systematic column point to α−α/r different intermediate parity elements belonging to rows from the repair set W(i) and the smallest possible number τi of elements from other rows, 1≤i≤k;
- h. Each row has └k/r┘ or ┌k/r┐ low-connected elements.
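- The counting constraints (e) and (h) can be met by a simple round-robin construction. The sketch below is our own illustration, not the patent's encoder: it only shows that assigning each column's repair set to one of r row blocks satisfies both properties.

```python
from math import ceil, floor

def repair_sets(k, r, alpha):
    """Round-robin W(i): column i's repair set is block (i-1) mod r of
    alpha/r consecutive rows (our own construction, for illustration)."""
    assert alpha % r == 0
    block = alpha // r
    return {i: set(range(((i - 1) % r) * block, ((i - 1) % r + 1) * block))
            for i in range(1, k + 1)}

k, r, alpha = 6, 3, 9
W = repair_sets(k, r, alpha)
# property (e): each systematic column has alpha/r low-connected rows
assert all(len(W[i]) == alpha // r for i in W)
# property (h): each row is low-connected in floor(k/r) or ceil(k/r) columns
counts = [sum(row in W[i] for i in W) for row in range(alpha)]
assert all(c in (floor(k / r), ceil(k / r)) for c in counts)
```

Which intermediate parity elements the highly connected elements reference (properties (b)-(d) and (g)) is a separate design choice not modeled here.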
- In a further aspect of the system and method, wherein the encoder for stage 2 of erasure coding is such that
-
- a. Encoding is individually performed over each r×r sub-table consisting of r sub-rows of a fixed row and r parity columns, wherein an element of the sub-table may be represented by a single symbol or a sequence of symbols, e.g. sub-chunk;
- b. Each parity element located in sub-row i and column C is given by
- i. an intermediate parity element located in sub-row i and column C, or
- ii. a linear combination of two intermediate parity elements, one of which is located in sub-row i and column C, and the other of which belongs to sub-row R and column j, where sub-row R is related to column C and the j-th element of sub-row R is connected with the i-th element of column C.
- c. Each sub-row contains one parity element equal to intermediate parity element from the same sub-row and column;
- d. Each parity column contains one parity element equal to the intermediate parity element from the same sub-row and column;
- e. Intermediate parity elements may be recovered from parity elements.
- In a further aspect of the system and method, wherein the step of optional data mixing is such that erasure coding integrated with data mixing ensures that any piece of data segment may be reconstructed only from pieces of at least k output multi-chunks produced from the same segment.
- In a further aspect of the system and method, wherein the step of data mixing comprises
-
- a. multiplication of a data segment by a non-singular k×k matrix M (so that its inverse exists), where the multiplication results in a number of output vectors of length k; and
- b. mapping the symbols of output vectors to k systematic multi-chunks in such a way that k symbols of each output vector are assigned to k different systematic multi-chunks and indices of these symbols within sub-chunks of multi-chunks are different.
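- Steps (a) and (b) above can be sketched as follows. The field GF(257) and the particular invertible matrix M are our own assumptions; the additional constraint that symbol indices within sub-chunks differ is omitted for brevity.

```python
P = 257                                  # prime field (assumption)

def mat_vec(M, v):
    return [sum(a * b for a, b in zip(row, v)) % P for row in M]

k = 3
M = [[1, 1, 1],
     [1, 2, 3],
     [1, 4, 9]]                          # Vandermonde: invertible over GF(257)

segment = [5, 7, 11, 13, 17, 19]         # read as two length-k vectors
vectors = [segment[i:i + k] for i in range(0, len(segment), k)]
multi_chunks = [[] for _ in range(k)]
for v in vectors:
    y = mat_vec(M, v)                    # step (a): mix one vector
    for j in range(k):
        multi_chunks[j].append(y[j])     # step (b): symbol j -> multi-chunk j
assert all(len(mc) == len(vectors) for mc in multi_chunks)
```

Since M is invertible, the mixing is lossless: multiplying each mixed vector by the inverse of M restores the original segment symbols.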
- These and other aspects, features, and advantages can be appreciated from the accompanying description of certain embodiments of the disclosure and the accompanying drawing figures and claims.
-
- FIG. 1 is a schematic block diagram illustrating a distributed storage system interacting with client applications.
- FIG. 2 is a schematic block diagram illustrating encoding of files into output chunks transferred to storage nodes.
- FIG. 3 illustrates flexibility of the present invention depending on the structure of data being encoded.
- FIG. 4 illustrates the design of the employed erasure coding scheme.
- FIG. 5 illustrates an example of reconstruction of a part of a data segment from parts of output chunks received from storage nodes.
- FIG. 6 is a schematic block diagram illustrating a distributed storage system interacting with client applications, in accordance with the present application.
- FIG. 7 is a block diagram illustrating the general design of the erasure coding scheme.
- FIG. 8 shows an example of the data splitting and combining steps in accordance with the erasure coding scheme.
- FIG. 9 is a block diagram illustrating the design of the encoder for the first stage of erasure coding.
- FIG. 10 is a block diagram illustrating the design of the encoder for the second stage of erasure coding.
- FIG. 11 shows an example of the data mixing scheme, which may be optionally applied prior to erasure coding.
- FIG. 12 is a block diagram illustrating the operation of failed storage node repair.
- FIG. 13 is a block diagram illustrating recovering of a parity multi-chunk in case of a single storage node failure.
- FIG. 14 is a block diagram illustrating recovering of a systematic multi-chunk in case of a single storage node failure.
- The present disclosure is intended to provide reliability, security and integrity for a distributed storage system. The present disclosure is based on erasure coding, information dispersal, secret sharing and ramp schemes to assure reliability and security. More precisely, the present disclosure combines ramp threshold secret sharing and systematic erasure coding. Reliability (the number of tolerated storage node failures) depends on the parameters of the erasure coding scheme. Security is achieved by means of information dispersal among different storage nodes. Storage nodes can be public and/or private. Higher security levels are achieved by introducing supplementary inputs into the erasure coding scheme, which results in a ramp threshold scheme. Increasing the amount of supplementary inputs increases the security level. Computational security may be further improved by applying optional encryption and/or fragmentation. There is no need to trust either cloud service providers or network data transfers. As for data integrity, two types of hash-based signatures are incorporated in order to verify the honesty of cloud service providers and the correctness of stored data chunks.
- Secret sharing is a particularly interesting cryptographic technique. Its most advanced variants indeed simultaneously enforce data privacy, availability and integrity, while allowing computation on encrypted data. A secret sharing scheme transforms sensitive data, called secret, into individually meaningless data pieces, called shares, and a dealer distributes shares to parties such that only authorized subsets of parties can reconstruct the secret. In case of classical secret sharing, e.g. Shamir's scheme, the size of each share is equal to the size of secret. Thus, applying secret sharing leads to n-times increase in data volume, where n is the number of participants.
- In case of a distributed storage system, participants are represented by storage nodes. A storage node is typically a datacenter, a physical server with one or more hard-disk drives (HDDs) or solid-state drives (SSDs), an individual HDD or SSD. A storage node may be a part of a private storage system or belong to a cloud service provider (CSP).
- Storage nodes are individually unreliable. Applying erasure coding, information dispersal, secret sharing or a ramp scheme enables data reconstruction in case of storage node failures or outages. Here data (secret) may be represented by original client's data or generated metadata, e.g. encryption keys. In case of a (k,n)-threshold secret sharing scheme, any k out of n shares can reconstruct the secret, although any k−1 or fewer shares do not leak any information about the secret. Thus, it is possible to reconstruct the secret even if up to n−k storage nodes are unavailable. The present disclosure combines secret sharing, more precisely, ramp threshold secret sharing and systematic erasure coding. In order to improve the storage efficiency of secret sharing schemes, ramp secret sharing schemes (ramp schemes) may be employed, which have a tradeoff between security and storage efficiency. The price for the increase in storage efficiency is partial leakage of information about relations between parts of the secret, e.g. the value of a linear combination of several parts of the secret.
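- A minimal (k,n)-threshold scheme of the kind described above can be sketched with Shamir's construction. The field GF(257) and the parameters are illustrative; any k shares recover the secret by Lagrange interpolation at zero, while fewer than k reveal nothing.

```python
import random

P = 257                                  # prime field (assumption)

def make_shares(secret, k, n):
    """Shamir: evaluate a random degree-(k-1) polynomial with f(0)=secret."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, e, P) for e, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 from any k shares."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

shares = make_shares(42, k=3, n=5)
assert reconstruct(shares[:3]) == 42      # any 3 of 5 shares suffice
assert reconstruct(shares[1:4]) == 42
```

Each share is as large as the secret, which is exactly the 1/n storage efficiency the next paragraph contrasts with ramp schemes.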
- Storage efficiency is computed as the amount of original data (secret) divided by the total amount of stored data. Storage efficiency of a classical secret sharing scheme is equal to 1/n. Storage efficiency of a ramp threshold scheme varies between 1/n and k/n depending on the amount of introduced supplementary inputs. The highest storage efficiency k/n is achieved when the ramp threshold scheme reduces to an information dispersal algorithm. The security techniques considered are based on error-correction codes, more precisely, linear block error-correction codes. The present disclosure makes use of maximum distance separable (MDS) codes.
-
FIG. 1 is a schematic block diagram illustrating a distributed storage system interacting with client applications, in accordance with the present application. Original data 103, e.g., files, produced by client applications 102, are distributed over a set of storage nodes 106, and original data 103 is available to client applications 102 upon request. Any system producing and receiving data on the client side can be considered as an instance of a client application 102. Further, data processing and transmission control are arranged by processing system 101, located on the client side or in the cloud. Processing system 101 transforms original data 103 into output chunks 104, and vice-versa. Output chunks 104 may include none, one or several frequently demanded output chunks 105 in case of original data containing frequently accessed data.
- Client applications 102, processing system 101 and storage nodes 106 communicate via a data communication network, such as the Internet. Storage nodes 106 can operate independently from each other, and can be physically located in different areas. According to the present disclosure, storage nodes 106 may include none, one or several highly available and/or trusted storage nodes 107, where the number of these nodes 107 is at least equal to the number of frequently demanded output chunks 105. Here a trusted storage node ensures data privacy, while the probability of data leakage from an untrusted storage node may be significant; highly available storage nodes demonstrate high average data transmission speed and low latency. For example, trusted storage nodes may be represented by storage nodes at a client's datacenter, any other private storage and/or storage nodes with self-encrypted drives. Reliabilities of storage nodes are supposed to be comparable, i.e. probabilities of storage node failures are supposed to be similar. Processing system 101 ensures data integrity, security, protection against data loss, compression and deduplication.
-
FIG. 2 is a schematic block diagram illustrating encoding of files into output chunks transferred to storage nodes. Encoding ofinput file 201 consists of two stages:precoding 202 anderasure coding 210. Precoding includes none, one or several of the following optional steps: deduplication, compression, encryption and fragmentation. Optional deduplication may be performed at file-level atstep 203, as well as at segment-level atstep 206. In case of file-level deduplication space reduction is achieved only in case of presence of copies of whole files. In case of segment-level deduplication copies of parts of files are eliminated, so space reduction is more significant, than in case of file-level deduplication. Segment-level deduplication may be implemented for fixed-size segments or for content defined flexible-size segments. In the former case, all segments are of the same size or of several fixed sizes. This approach is easier to implement than deduplication for content defined flexible-size segments, however, small changes in files may lead to shifts of beginnings of segments, which lead to inability to detect shifted copies of file fragments. In the latter case, segments boundaries depend on the content of file, which provide opportunity to detect shifted copies of file fragments. In particular, that means that deduplication and file partition are performed simultaneously. Optional compression atstep encryption 208 in order to obtain a fair degree of compression. - In one or more implementations, optional encryption is performed at
step 208 depending on client preferences, file content and storage node configuration. Encryption is computationally demanding and introduces the problem of key management. In most cases, the present disclosure ensures sufficient security level without encryption. However, files with especially high secrecy degree are encrypted. In most cases, encryption is applied to files or segments or parts of segments prior to erasure coding. However, in case of presence of one or more highly available untrusted storage nodes, encryption may be applied to highly demanded output chunks assigned to these storage nodes. An appropriate encryption option is selected depending on file access pattern. In partial file read/write operation is needed, then encryption is applied to segments or parts of segments, where size of encrypted parts depends on size of typically requested file fragments. - In one or more implementations, fragmentation is applied to a data segment at
step 209 prior to erasure coding. Fragmentation is employed as a low-complexity operation, which is able to transform data into meaningless form. Erasure coding integrated with fragmentation ensures high level of uniformity, i.e. high entropy, and independence of output chunks. Here independence means that correlation between original data and output chunks is minimized, so no detectable correlation exists between them, neither between output chunks. In one or more implementations, it is also ensured that one bit change in a seed leads to significant changes in output chunks. In contrast to information dispersal algorithm (IDA), the proposed solution prevents appearance of any pattern in output data. Fragmentation may include data partitioning and encoding, wherein fragmentation encoding is a function of one or several of the following: random (pseudo-random) values, values derived from original data (e.g. derived using deterministic cryptographic hash) and predetermined values. In case of absence of encryption, fragmentation improves security level almost without sacrificing computational effectiveness. - At
erasure coding step 210 each segment is transformed into a number of output chunks, which are further transferred to storage nodes. Data segments are processed individually. A data segment may be a part of a single file or it may be a container for several small files. The design of the employed erasure coding scheme is described below in more detail. -
FIG. 3 illustrates flexibility of the present disclosure depending on the structure of data being encoded. Prior to actual encoding at step 306, a data segment 301 produced from one or several files is divided into v chunks and accompanied by k−v chunks containing supplementary inputs 305, where k≥v; supplementary inputs may be random, values derived from the data segment (e.g. derived using a deterministic cryptographic hash) or predetermined values. These k chunks are referred to as input chunks 302 and their encoding result is referred to as output chunks 307. The number of output chunks n is not less than the number of input chunks k. As in the case of ramp schemes, by increasing the number of supplementary input chunks 305 one can achieve a higher security level. In the absence of supplementary input chunks 305, the proposed encoding scheme reduces to erasure coding or information dispersal. A data segment 301 may be represented by a fragment of one file, by a whole file, as well as by several individual files. In the latter case, independent access to files may be needed. According to the present disclosure, input chunks are classified as highly sensitive chunks 303 and frequently demanded chunks 304. Here highly sensitive input chunks 303 contain data which should be stored in an unrecognizable manner. Highly sensitive input chunks 303 are encoded in such a way that each of them may be reconstructed only as a function of k output chunks (any k output chunks are suitable). Frequently demanded input chunks 304 are encoded in such a way that each of these chunks may be reconstructed as a copy of a related output chunk, as well as a function of any other k output chunks. Output chunks 307, except frequently demanded output chunks 308, contain only meaningless data (unrecognizable data), which means that these chunks do not contain any copy of the data segment produced from the client's data.
- Access to at least k output chunks is required to reconstruct any highly sensitive chunk, so these chunks are protected even in case of data leakage from any k−1 storage nodes. The probability of simultaneous data loss or data leakage from several storage nodes belonging to the same datacenter or cloud service provider is higher than in the case of geographically remote storage nodes maintained by different storage service providers. In one or more implementations, the number of storage nodes allowed to be located near each other or managed by the same owner is limited to be not higher than k−1. For example, this eliminates the possibility of a storage service provider being able to reconstruct a client's data. On the other hand, reconstruction of original data imposes an upper limit on the number of simultaneously unavailable storage nodes equal to n−k. So, in one or more implementations, the number of storage nodes allowed to be located near each other or managed by the same owner is limited to be not higher than n−k. For example, this ensures data reconstruction in case of a ransomware attack on a storage service provider (cloud service provider).
-
FIG. 4 illustrates the design of the employed erasure coding scheme. The design process results in a generator matrix G 405 of a maximum distance separable (MDS) linear block error-correction code C of length n and dimension k, where dimension k is the number of input chunks being encoded and length n>k is the number of output chunks produced from k input chunks. Here the sizes of input chunks and output chunks are the same. The erasure coding scheme is specified by a k×n generator matrix G (of an MDS code) comprising (k−p) columns of the k×k identity matrix, where 0≤p≤k, while the other columns form a k×(n+p−k) matrix such that any of its square submatrices is nonsingular. Such a matrix G is further referred to as a selectively mixing matrix. This matrix specifies not only the underlying error-correction code, but also a particular encoder. Parameter p is not lower than the number of highly sensitive input chunks, while (k−p) is not lower than the number of frequently demanded input chunks. The input parameters 401 for the erasure coding scheme design are length n and dimension k of the error-correction code and parameter p. - The process of obtaining the k×n selectively mixing generator matrix G for given p is further described in more detail. At first, an MDS linear block code C(parent) of length (n+p) and dimension k is selected at
step 402. Any MDS code with the specified parameters is suitable. Let G(parent) be a k×(n+p) generator matrix of the code C(parent). Second, a generator matrix in systematic form G(parent,syst) is obtained from matrix G(parent) at step 403, where a k×(n+p) matrix in systematic form is a matrix that includes the k×k identity matrix as its submatrix. Indices of identity matrix columns within generator matrix G(parent,syst) are referred to as systematic positions. At step 403 any k positions may be selected as systematic ones. At step 404 p among the k systematic positions are selected and the corresponding p columns are excluded from the k×(n+p) matrix G(parent,syst); as a result, the k×n selectively mixing generator matrix G is obtained. Observe that the code C generated by matrix G is a punctured code C(parent); consequently, code C is also an MDS code. - In one or more implementations, a Reed-Solomon code is used as the MDS code C(parent). Reed-Solomon codes are widely used to correct errors in many systems including storage devices (e.g. tape, Compact Disc, DVD, barcodes, etc.), wireless or mobile communications (including cellular telephones, microwave links, etc.), satellite communications, digital television/DVB, and high-speed modems such as ADSL, xDSL, etc. It is possible to construct a Reed-Solomon code for any given length and dimension. There are several ways to perform encoding with a Reed-Solomon code, e.g. a polynomial representation or a vector-matrix representation may be employed. In the latter case a Reed-Solomon code may be generated by a Cauchy matrix concatenated with an identity matrix or by a Vandermonde matrix. In one or more implementations, the k×n generator matrix G for erasure coding is derived from a k×(n+p) Vandermonde matrix. In one or more implementations, the k×n generator matrix G for erasure coding is derived from a k×(n+p−k) Cauchy matrix concatenated with a k×k identity matrix.
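The matrix design of steps 402-404 can be sketched in a few lines of code. The following is a minimal illustration only: the prime field GF(257), the Vandermonde parent matrix and the small parameters k=4, n=6, p=2 are assumptions made here for readability; the disclosure itself only requires some MDS parent code over a Galois field.

```python
# Sketch of steps 402-404: build a parent generator matrix, bring it to
# systematic form, then puncture p systematic columns.  GF(257) arithmetic
# via Python's built-in modular pow (Python 3.8+ supports pow(a, -1, q)).
q = 257  # illustrative prime field instead of the usual GF(2^m)

def vandermonde(rows, cols):
    # k x (n+p) Vandermonde matrix over GF(q) with distinct points 1..cols
    return [[pow(x, i, q) for x in range(1, cols + 1)] for i in range(rows)]

def to_systematic(G, k):
    """Gauss-Jordan elimination making the first k columns an identity (step 403)."""
    G = [row[:] for row in G]
    for col in range(k):
        piv = next(r for r in range(col, k) if G[r][col])  # pivot row
        G[col], G[piv] = G[piv], G[col]
        inv = pow(G[col][col], -1, q)
        G[col] = [v * inv % q for v in G[col]]
        for r in range(k):
            if r != col and G[r][col]:
                f = G[r][col]
                G[r] = [(a - f * b) % q for a, b in zip(G[r], G[col])]
    return G

def selectively_mixing_matrix(k, n, p):
    Gp = to_systematic(vandermonde(k, n + p), k)
    # step 404: exclude p of the k systematic columns (here simply the first p)
    return [row[p:] for row in Gp]

G = selectively_mixing_matrix(k=4, n=6, p=2)
# k - p = 2 identity columns remain, so 2 inputs map to verbatim output copies
```

Dropping columns of a Reed-Solomon generator corresponds to puncturing the parent code, which preserves the MDS property, matching the observation in the text.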
-
FIG. 3 shows a flow diagram of steps executed for erasure encoding of a data segment. Here a data segment 301 is already preprocessed, i.e. deduplication, compression, encryption and/or fragmentation are already applied, if necessary. The preprocessed data segment 301 is divided into v≤k input chunks 302, comprising t highly sensitive chunks 303 and v−t frequently demanded chunks 304, 0≤t≤v. The value of t is selected depending on the segment structure and the number of untrusted storage nodes. k−v supplementary input chunks 305 are generated to accompany input chunks produced from the data segment; supplementary inputs may be random, values derived from original data (e.g. derived using a deterministic hash) or predetermined values. Input chunks are ordered in such a way that their encoding 306 results in k−p output chunks on systematic positions being equal to v−t frequently demanded input chunks 304 and k−p−(v−t) supplementary input chunks 305. In order to reduce the computational complexity of encoding, p=t is selected, which maximizes the number of systematic positions. Encoding with the k×n generator matrix G results in n output chunks 307. Input chunks and output chunks have the same size. During encoding 306, each chunk is represented as a sequence of elements, and vector x(i) consisting of i'th elements of k input chunks is encoded into vector c(i) consisting of i'th elements of n output chunks, that is c(i)=x(i)G. Here element size is defined by the error-correction code parameters. Thus, computations for elements with different indices may be performed in parallel, e.g., using vectorization. - Generated output chunks are assigned storage nodes and then transferred to them. Frequently demanded output chunks are assigned to highly available and/or trusted storage nodes, while other output chunks are assigned to untrusted storage nodes. Output chunks produced from the same segment, except frequently demanded output chunks, are considered to be equally important and treated evenly.
So, these chunks may be mapped to untrusted storage nodes using any approach, e.g. depending on their index or randomly.
- In one or more implementations, frequently demanded output chunks are also treated evenly and mapped to highly available and/or trusted storage nodes depending on their index or randomly. However, knowledge of the content/structure of frequently demanded output chunks may be employed to optimize storage node assignment. For example, an output chunk comprising a number of frequently accessed small files may be assigned to the most available trusted storage node, i.e. the storage node with the highest average data transfer speed and low cost.
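The element-wise encoding at step 306, c(i)=x(i)G, can be illustrated with a toy instance. The 2×4 matrix, the GF(257) arithmetic and the sample data below are assumptions chosen for readability, not the disclosure's parameters; any selectively mixing matrix is applied the same way.

```python
# Toy encoding sketch for c(i) = x(i) G: i-th elements of the k input chunks
# form vector x(i), whose product with G gives the i-th elements of the n
# output chunks.  GF(257) and this hand-picked matrix are assumed examples.
q = 257
k, n = 2, 4
# One identity column (a systematic / frequently demanded position) plus
# mixing columns; any 2 columns of this matrix are linearly independent.
G = [[1, 1, 1, 2],
     [0, 1, 2, 3]]

def encode(chunks):            # chunks: k equal-length lists of field elements
    size = len(chunks[0])
    out = [[0] * size for _ in range(n)]
    for i in range(size):      # independent per element index -> parallelizable
        x = [chunks[g][i] for g in range(k)]            # vector x(i)
        for j in range(n):                              # c(i) = x(i) G
            out[j][i] = sum(x[g] * G[g][j] for g in range(k)) % q
    return out

out = encode([[10, 20], [30, 40]])
# output chunk 0 equals input chunk 0 because column 0 of G is an identity
# column, i.e. that position behaves as a frequently demanded output chunk
```

Because each element index i is processed independently, a real implementation can vectorize this inner loop exactly as the text notes.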
- A client is allowed to request a file, a data segment or a part thereof. Several data reconstruction scenarios are possible. A whole segment may be reconstructed from any k output chunks received from storage nodes. A requested part of a data segment may be reconstructed from the corresponding parts of any k output chunks, where these corresponding parts have the same boundaries (i.e. range of indices) within the output chunks. If the requested part of a data segment is contained in one or more frequently demanded input chunks, then it is sufficient to download only the corresponding output chunks from storage nodes (i.e. download the same amount as requested). Thus, low traffic is achieved when processing requests for frequently demanded input chunks.
- Output chunks stored at untrusted storage nodes are of the same significance for data reconstruction. Chunks to download are selected depending on available network bandwidth, more precisely, the predicted latency for transferring data from the corresponding storage node to the client's side. In case of output chunks of large size, the present disclosure provides the opportunity to achieve lower latency by downloading parts of more than k output chunks and reconstructing the data segment from them. The total size of these downloaded parts is at least k times the size of an output chunk, and the number of downloaded bytes with the same index within output chunks is at least k.
-
FIG. 5 illustrates an example of reconstruction of a part of a data segment from parts of output chunks received from storage nodes. Reconstruction of requested data (at least a part of a data segment 505) from parts of output chunks 502 is performed as follows. First, the range of indices within each input chunk corresponding to the requested data is identified, where boundaries define the range of indices. Second, such parts of output chunks 502 are downloaded from storage nodes 501 that the total size of these parts is equal to the size of the widest range multiplied by k, and the number of parts with the same range of indices within output chunks is equal to k. Third, processing system 503 combines parts with the same range of indices into a vector cS for each set S of k source storage nodes, and then decoder 504 multiplies this vector cS by the inverse of the matrix G(S), where G(S) is a matrix consisting of k columns of the selectively mixing matrix G with indices from the set S. Thus, requested parts of information chunks are reconstructed. - Reconstruction of a frequently demanded input chunk may be performed by downloading only the related output chunk, i.e. the output chunk containing a copy of this input chunk. Alternatively, a frequently demanded input chunk may be reconstructed from any k output chunks. The latter approach is used when the storage node containing the related output chunk is unavailable or its data transmission speed is too low.
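The decoding path just described, combining element vectors cS and multiplying by the inverse of G(S), can be sketched for a toy case. The 2×4 matrix G, the GF(257) field and the hand-computed output chunks below are illustrative assumptions; a full implementation would invert a general k×k submatrix.

```python
# Reconstruction sketch: pick any k = 2 output chunks (set S of column
# indices), invert G(S) over GF(257), and recover the input chunks.
q = 257
k, n = 2, 4
G = [[1, 1, 1, 2],
     [0, 1, 2, 3]]

def inv2(m):
    # inverse of a 2x2 matrix over GF(q); sized for this example only
    det = (m[0][0] * m[1][1] - m[0][1] * m[1][0]) % q
    di = pow(det, -1, q)
    return [[ m[1][1] * di % q, -m[0][1] * di % q],
            [-m[1][0] * di % q,  m[0][0] * di % q]]

def reconstruct(S, parts):
    GS = [[G[r][c] for c in S] for r in range(k)]   # columns of G indexed by S
    Ginv = inv2(GS)
    size = len(parts[0])
    x = [[0] * size for _ in range(k)]
    for i in range(size):
        cS = [parts[t][i] for t in range(k)]        # vector cS for index i
        for g in range(k):                          # x(i) = cS * G(S)^-1
            x[g][i] = sum(cS[t] * Ginv[t][g] for t in range(k)) % q
    return x

# Output chunks 1 and 3 as produced from input chunks [10, 20] and [30, 40]
# by c(i) = x(i) G (precomputed by hand for this toy G).
parts = [[40, 60], [110, 160]]
recovered = reconstruct([1, 3], parts)
```

The same routine applies unchanged to parts of chunks: only the element range shrinks, which is why partial-segment reads need only the matching byte ranges of k chunks.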
- Observe that typical data encoding methods for distributed storage systems with untrusted storage nodes support only whole-segment reconstruction because of mandatory encryption of segments. In contrast, the present disclosure enables reconstruction of any part of a segment, since the employed encoding scheme ensures security without encryption. This is also made practical by cloud service providers now offering partial object retrieval.
- Intentional or accidental data corruption at the storage service provider side is possible. Thus, an output chunk received from a storage node may differ from the output chunk initially transferred to this storage node. Undetected changes in an output chunk lead to errors during the segment reconstruction process. According to the present disclosure, hash-generated signatures are exploited in order to check the integrity of each output chunk in a timely manner. Each output chunk is supplied with two signatures: a visible and a hidden signature, where the visible signature may be checked prior to downloading the data chunk from a storage node and the hidden signature is checked after data segment reconstruction from a number of output chunks. The visible signature helps to detect incorrect or erroneous data (e.g. lost or damaged data) on the cloud service provider side. The visible signature is generated individually for a particular output chunk, and it depends only on the content of this chunk. Rigorous monitoring of stored data is performed in order to reveal any inconsistency as early as possible. The hidden signature is generated based on a whole segment, and it matches the reconstructed segment only if all output chunks are correct. So, the hidden signature enables one to detect a skillfully replaced output chunk even when the check on the visible signature was successfully passed, e.g. in case of data replacement by a malicious cloud service provider or as a result of an intruder attack. In one or more implementations, homomorphic hash functions are used to compute signatures. A homomorphic hash function allows one to express the hash (signature) of a data block given by a linear combination of several data blocks via the hashes of the data blocks participating in this combination.
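One classical construction with the stated homomorphic property is exponentiation in a group: h(x) = g^x mod P, for which the hash of a sum of blocks equals the product of their hashes. This particular hash, the modulus and the base are assumptions made purely for illustration; the disclosure does not fix a specific homomorphic hash function.

```python
# Minimal homomorphic-hash sketch (assumed construction, not the disclosure's
# signature scheme): h(x) = g^x mod P, so h(a + b) = h(a) * h(b) mod P.
# This lets a verifier check the hash of a linear combination of data blocks
# using only the hashes of the blocks participating in the combination.
P = 2**61 - 1          # example prime modulus
g = 3                  # example base

def h(block):          # block: list of integers, hashed element-wise
    return [pow(g, v, P) for v in block]

a, b = [5, 7], [11, 13]
combined = [x + y for x, y in zip(a, b)]
lhs = h(combined)                                   # hash of the combination
rhs = [(ha * hb) % P for ha, hb in zip(h(a), h(b))] # product of block hashes
assert lhs == rhs
```

In the erasure-coding setting this is what makes the hidden signature checkable: output chunks are linear combinations of input chunks, so their hashes are predictable from the input hashes.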
- In addition to the features shown and described herein, the present disclosure includes an erasure coding method for distributed storage systems, which ensures high storage efficiency and low repair bandwidth in case of storage node failure. The proposed erasure coding scheme may be considered an instance of a minimum-storage regenerating (MSR) code. Thus, storage efficiency is the same as in case of any maximum distance separable (MDS) code, e.g. a Reed-Solomon code. Observe that in case of Reed-Solomon codes, the amount of encoded data needed to repair a single storage node failure is equal to the total size of the original data; that is, network traffic is high during the repair operation. The erasure coding scheme of the present disclosure is optimized to achieve low repair bandwidth in case of a single storage node failure, where repair bandwidth is measured as the amount of data transferred to repair data contained within failed storage nodes divided by the amount of encoded data stored within these failed storage nodes. Low repair bandwidth is provided for both systematic and parity encoded chunks of data. The present disclosure on average provides a 2-times reduction of repair bandwidth compared to Reed-Solomon codes. At the same time, the present disclosure demonstrates the same erasure recovering capability as MDS codes, e.g. Reed-Solomon codes.
-
FIG. 6 is a schematic block diagram illustrating a distributed storage system interacting with client applications, in accordance with the present application. Original data 1103, e.g., files, produced by client applications 1102, are distributed over a set of storage nodes 1105, and original data 1103 is available to client applications 1102 upon request. Any system producing and receiving data on the client side can be considered an instance of a client application 1102. Further, data processing and transmission control are arranged by processing system 1101, located on the client side or in the cloud. Processing system 1101 transforms original data 1103 into output multi-chunks 1104, and vice-versa. -
Client applications 1102, processing system 1101 and storage nodes 1105 communicate via a data communication network, such as the Internet. Storage nodes 1105 can operate independently from each other, and can be physically located in different areas. Reliabilities of storage nodes are supposed to be comparable, i.e. probabilities of storage node failures are supposed to be similar. Processing system 1101 ensures data integrity, protection against data loss, and optionally security, compression and deduplication. Protection against data loss, caused by storage node failures (e.g., commodity hardware failures), is provided by erasure coding. Moreover, erasure coding helps to tolerate storage node outages, while high storage efficiency is provided by the selected construction of the error-correction code, such as shown and described in greater detail herein. Data security is optionally arranged by means of data mixing and dispersal among different locations. Storage efficiency may be enhanced by deduplication. Furthermore, deduplication can be performed not just for files, but also for small pieces of files, providing an appropriate tradeoff between deduplication complexity and storage efficiency, which can be selectable by a client. Further, optional compression can be applied to data, depending on respective client preferences. The present disclosure includes an erasure coding method minimizing network traffic induced by the storage node repair operation, i.e. recovering of data stored at a single failed storage node. Minimization of network traffic leads to the smallest latency and the fastest data recovery. -
FIG. 7 is a block-diagram illustrating the general design of the erasure coding scheme. At first, an input data segment is divided into k parts of equal size, referred to as information multi-chunks, where k is the number of storage nodes which should be accessed in order to reconstruct original data, wherein the total amount of data transferred from these k storage nodes is equal to the segment size. Data security may be optionally arranged by means of data mixing and subsequent dispersal among different locations. Processing of information multi-chunks using data mixing is described below (FIG. 11 ). Input multi-chunks for erasure coding are referred to as systematic multi-chunks, where systematic multi-chunks may be the same as information multi-chunks or be a function of information multi-chunks as in the case of data mixing. - Let us represent k systematic multi-chunks as k columns of a table, wherein α rows are distinguished in the table and each row is further divided into r sub-rows, where r is such that the total number of storage nodes is equal to r+k and α is a parameter defined by the
stage 1 of erasure coding 1203. An element at a row/column intersection is referred to as a chunk and an element at a sub-row/column intersection is referred to as a sub-chunk. The following notation is used: cg,i,j for an element (sub-chunk) located in g-th column and j-th sub-row of i-th row, and ca . . . b,c . . . d,e . . . f for a sub-table consisting of columns {a, . . . , b} and sub-rows {e, . . . , f} of rows {c, . . . , d}. - Output of the encoding scheme in
FIG. 7 is represented by an αr×(k+r) table c1 . . . k+r,1 . . . α,1 . . . r, which comprises the input data table c1 . . . k,1 . . . α,1 . . . r and the computed parity data table ck+1 . . . k+r,1 . . . α,1 . . . r, i.e. the output is given by k systematic multi-chunks and r parity multi-chunks. Output (systematic and parity) multi-chunks may be assigned to storage nodes using an arbitrary method. In particular, a storage node may contain systematic and parity multi-chunks produced from different data segments. Storage efficiency is equal to k/(k+r), i.e. the same as in case of maximum distance separable codes, e.g. Reed-Solomon codes. In other words, the present disclosure provides the best possible storage efficiency for given values of parameters k and r. - According to the present disclosure, the erasure coding scheme satisfies the following requirements:
-
- 1. Original data may be reconstructed as a function of encoded data stored at any k out of (k+r) storage nodes, where the size of the original data is equal to the total size of encoded data stored at k storage nodes.
- 2. Repair of the i-th parity multi-chunk requires accessing a 1/r portion of each of the other k+r−1 output multi-chunks produced from the same segment. Repair bandwidth for the i-th parity multi-chunk is given by (k+r−1)/r.
- 3. Repair of the i-th systematic multi-chunk requires accessing a 1/r portion of each of the other k+r−1 output multi-chunks produced from the same segment in case of a sufficiently high value of parameter α; otherwise τi supplementary sub-chunks are also accessed. Repair bandwidth for the i-th systematic multi-chunk is given by (k+r−1)/r+τi/α.
- From a coding theory perspective, the first requirement means that the employed error-correcting code demonstrates the same property as maximum distance separable (MDS) codes. Thus, the first requirement is further referred to as the MDS requirement. The second requirement means the smallest possible repair bandwidth, i.e. the amount of data transferred during recovery of data stored in a failed storage node is minimized. Observe that repair bandwidth in case of a Reed-Solomon code is equal to k.
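The storage-efficiency and repair-bandwidth figures above are easy to verify numerically. The parameters k=6 and r=2 below are borrowed from the FIG. 8 example; everything else is plain arithmetic following the formulas in the text.

```python
# Back-of-envelope check of the requirements above for k=6, r=2
# (example parameters from the FIG. 8 example elsewhere in the text).
k, r = 6, 2
storage_efficiency = k / (k + r)   # 0.75, same as any MDS code
msr_repair = (k + r - 1) / r       # 3.5: a 1/r portion of each of k+r-1 nodes
rs_repair = float(k)               # 6.0: Reed-Solomon moves a full segment
print(storage_efficiency, msr_repair, rs_repair)
```

For these parameters a Reed-Solomon repair moves 6 units of data against 3.5 here; the ratio approaches the 2-times reduction cited earlier as k grows relative to r.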
- These requirements for erasure coding scheme are satisfied as follows. Erasure coding is performed in two stages. At
step 1202 data is split into r sub-tables c1 . . . k,1 . . . α,j, 1≤j≤r. At step 1203 each of these sub-tables is independently encoded: the encoder of stage 1 independently computes a parity sub-table p1 . . . r,1 . . . α,j for each data sub-table c1 . . . k,1 . . . α,j, 1≤j≤r. Elements (sub-chunks) of p1 . . . r,1 . . . α,j are further referred to as intermediate parity elements (sub-chunks). The encoder for stage 1 is such that
- 1. k input systematic multi-chunks may be reconstructed from any k out of k+r systematic and intermediate parity multi-chunks, i.e. MDS requirement is satisfied after
stage 1 of erasure coding; - 2. Any systematic multi-chunk may be reconstructed with repair bandwidth (k+r−1)/r+τi/α, wherein 1/r portion of each of other k+r−1 multi-chunk is transferred from storage nodes together with τi supplementary sub-chunks, and wherein either all or none intermediate parity sub-chunks in a sub-row are usable.
- At
step 1204 the obtained intermediate parity sub-chunks are combined and then split into r×r sub-tables p1 . . . r,i,1 . . . r. Atstep 1205 each r×r sub-table p1 . . . r,i,1 . . . r is independently transformed by encoders of thestage 2 into r×r sub-table f1 . . . r,i,1 . . . r. Elements (sub-chunks) of f1 . . . r,i,1 . . . r are further referred as parity elements (sub-chunks). Atstep 1206 obtained parity sub-chunks are combined into a sub table f1 . . . r,1 . . . α,1 . . . r containing r parity multi-chunks. Then atstep 1207 these r parity multi-chunks f1 . . . r,1 . . . α,1 . . . r are combined with k systematic multi-chunks c1 . . . k,1 . . . α,1 . . . r to obtain usable c1 . . . k+r,1 . . . α,1 . . . r. Multi-chunks cg,1 . . . α,1 . . . r, 1≤g≤k+r, are further transferred to k+r independent storage nodes. - Encoder for the
stage 2 is such that -
- 1. k systematic multi-chunks may be reconstructed from any k out of (k+r) systematic and parity multi-chunks, that is replacement of r intermediate parity multi-chunks by r parity multi-chunks does not affect compliance with the MDS requirement;
- 2. Any parity multi-chunk may be recovered with repair bandwidth (k+r−1)/r, wherein 1/r portion of each of other k+r−1 multi-chunk is transferred from storage nodes.
- The first requirement means that MDS property ensured by the encoders of
stage 1 for k systematic multi-chunks and r intermediate parity chunks is hold after applying encoders ofstage 2 for k systematic multi-chunks and r parity multi-chunks. - Observe that i-th chunks of intermediate parity multi-chunk are transformed into i-th chunks of parity multi-chunks independently for different i, 1≤i≤α. This ensures that repair of a systematic storage node may be performed for systematic multi-chunks combined with parity multi-chunks in the same way as for systematic multi-chunks combined with intermediate parity multi-chunks.
-
FIG. 8 shows an example of the data splitting and combining steps in accordance with the erasure coding scheme. Encoding of a data segment for a particular set of parameters, k=6, α=4 and r=2, is considered. So, an input segment of data is represented as a 6×4×2 array c1 . . . 6,1 . . . 4,1 . . . 2. Each multi-chunk ci,1 . . . 4,1 . . . 2 is represented as a rectangle, which consists of squares representing sub-chunks. Thus, at step 1302 each multi-chunk is split into r=2 parts, where each part consists of α=4 sub-chunks. Output of stage 1 of erasure coding 1303, i.e. intermediate parity sub-chunks, is represented by shaded squares. At step 1304 these sub-chunks are combined into intermediate parity multi-chunks, and then split into chunks for stage 2 of erasure coding 1305, where each chunk consists of r=2 sub-chunks. At steps
FIG. 9 is a block-diagram illustrating the design of the encoder for the first stage of erasure coding. The encoder of stage 1 operates over data represented as tables. The encoder takes as input a table consisting of α rows and k columns and computes a table consisting of α rows and r columns. Here it is assumed that α is divisible by r. Elements of the input α×k table are referred to as systematic elements, while elements of the output α×r table are referred to as intermediate parity elements. Systematic elements contain original data, while each intermediate parity element is a function of systematic elements. Here each element is given by one or several symbols from a Galois field. For example, an element may be represented by a chunk or sub-chunk consisting of symbols from a Galois field. The encoder is specified by α×r expressions, where each expression is intended for computation of a particular intermediate parity element as a linear combination of systematic elements. Presence of a systematic element with a non-zero coefficient in the expression for an intermediate parity element is denoted as a reference between these elements. The design process comprises two steps (1401 and 1402) related to generation of references between elements and step 1403 related to generation of appropriate coefficients for these references. - At
step 1401 intra-row references are generated such that each intermediate parity element has references to all k systematic elements from the same row. That is, each intermediate parity element is a function of at least k systematic elements from the same row. At step 1402 such additional inter-row references are generated for r−1 columns of intermediate parity elements that the following conditions are satisfied:
- 1. Each systematic element has at most one inter-row reference, wherein systematic elements with inter-row references are referred to as highly-connected and systematic elements without inter-row references are referred to as low-connected;
- 2. Each intermediate parity element has zero, └k/r┘ or ┌k/r┐ inter-row references;
- 3. Each systematic column has α/r low-connected elements. The set of row-indices of the α/r low-connected elements from the i-th systematic column is referred to as the repair set W(i), 1≤i≤k;
- 4. Each systematic column has α−α/r highly-connected elements;
- 5. References of the α−α/r highly-connected elements of the i-th systematic column point to α−α/r different intermediate parity elements belonging to rows from the repair set W(i) and the smallest possible number τi of elements from other rows, 1≤i≤k;
- 6. Each row has └k/r┘ or ┌k/r┐ low-connected elements.
- Thus, the expression for each intermediate parity element includes k, k+└k/r┘ or k+┌k/r┐ systematic elements. The specified requirements for references enable recovery of elements of the i-th systematic column from rows belonging to the repair set W(i) and τi elements from other rows, 1≤i≤k. According to one implementation, the following objective function is minimized in the 5th requirement:
- f(τ)=(τ1+τ2+ . . . +τk)/k
- For sufficiently high α it is possible to achieve f(τ)=0; in this case it is sufficient to transfer a 1/r portion of each of the other k+r−1 columns for repair of any systematic column stored on a failed SN, i.e. repair bandwidth is equal to (k+r−1)/r. In case of f(τ)>0, repair bandwidth for the i-th systematic column is (k+r−1)/r+τi/α, and the average repair bandwidth for a systematic column is (k+r−1)/r+f(τ)/α. So, minimization of the average number of supplementary elements f(τ) leads to minimization of repair bandwidth.
- At
step 1403 coefficients for the references are generated such that the MDS condition is satisfied. That is, elements of any column may be recovered from elements of any k other columns, i.e. any element in a specified column may be expressed via elements of any k pre-selected columns. -
FIG. 10 is a block-diagram illustrating the design of the encoder for the second stage of erasure coding. The encoder of stage 2 also operates over data represented as tables. In view of the two-level encoding, these tables are referred to as sub-tables consisting of columns and sub-rows. The encoder takes as input a sub-table consisting of r sub-rows and r columns and computes a sub-table also consisting of r sub-rows and r columns. Elements of the input r×r sub-table are referred to as intermediate parity elements (e.g. sub-chunks), while elements of the output r×r sub-table are referred to as parity elements. Intermediate parity elements contain output from the stage 1 encoder, while each parity element is a function of intermediate parity elements. At step 1501 a set of r sub-rows is mapped onto a set of r parity columns, thus for each parity index (column) there is a related sub-row. At step 1502 references between elements of each row-column pair are generated such that each element in the sub-row is connected to exactly one element in the column and these elements are different, while the element at the row-column intersection stays single. According to the generated references, expressions for r(r−1) parity elements are given by linear combinations of two intermediate parity elements, while expressions for r parity elements are given by single intermediate parity elements. At step 1503 such coefficients are generated for each parity element expression that there exists an inverse transformation for the stage 2 encoding. - After the second encoding stage, parity elements satisfy the following conditions:
- Each parity element located in sub-row i and column C is given by
-
- an intermediate parity element located in sub-row i and column C, or
- a linear combination of two intermediate parity elements, one of which is located in sub-row i and column C and the other belongs to sub-row R and column j, where sub-row R is related to column C and the j-th element of sub-row R is connected with the i-th element of column C.
- Each sub-row contains one parity element equal to the intermediate parity element from the same sub-row and column;
- Each parity column contains one parity element equal to the intermediate parity element from the same sub-row and column;
- Intermediate parity elements may be recovered from parity elements.
- Erasure Coding Integrated with Data Mixing
- A data mixing scheme is designed for the erasure coding scheme described above. Data is mixed in such a way that the erasure coded data satisfies the following condition:
- any piece of original input data may be reconstructed only from pieces of at least k erasure coded multi-chunks.
- Observe that multi-chunks are stored in different storage nodes, so the above condition ensures that data is protected unless a malicious adversary gains access to at least k storage nodes.
-
FIG. 11 shows an example of the data mixing scheme, which may optionally be applied prior to erasure coding. According to one implementation, data mixing is performed as follows. A segment 1601 is divided into k parts referred to as information multi-chunks. The segment 1601 is further treated as a data stream of vectors, where each vector consists of k symbols belonging to k different information multi-chunks. At step 1602 each vector consisting of k information symbols is multiplied by a k×k non-singular matrix M such that its inverse matrix does not contain any zeros. Such a matrix M is referred to as a mixing matrix. The same mixing matrix M may be used for all vectors, or different matrices may be used. Multiplication results in an output stream of vectors of the same length k. At step 1603 symbols of the output data stream are mapped to k systematic multi-chunks 1608. For that, the output stream of vectors is divided into parts 1604, where each part 1604 consists of α·r vectors. Sequences of α·r symbols with the same index within vectors are mapped to k different temporary multi-chunks 1605 in such a way that positions of sequences produced from the same part do not intersect. Symbols of the i-th temporary multi-chunk 1605 are mapped to symbols of the i-th systematic multi-chunk 1607; more precisely, α·r symbols of the j-th sequence of the i-th temporary multi-chunk are mapped to the j-th symbols of α·r sub-chunks 1606 of the i-th systematic multi-chunk, where 1≤i≤k. The produced systematic multi-chunks 1608 are further employed for erasure coding. -
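A minimal sketch of step 1602, assuming an illustrative 3×3 mixing matrix over the rationals (the disclosure works over a finite field), shows the required property: the inverse of M contains no zeros, so every original information symbol depends on all k mixed symbols:

```python
from fractions import Fraction as F

# Illustrative non-singular 3x3 mixing matrix (an assumption for this sketch).
M = [[F(1), F(1), F(2)],
     [F(1), F(2), F(1)],
     [F(2), F(1), F(1)]]

# Its inverse, adj(M)/det(M) with det(M) = -4, has no zero entries, so no
# information symbol can be recovered without all k mixed symbols.
M_INV = [[c / F(-4) for c in row]
         for row in [[1, 1, -3], [1, -3, 1], [-3, 1, 1]]]

def matvec(A, x):
    """Multiply matrix A by column vector x."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def mix(vector):
    """Step 1602: multiply a k-symbol vector by the mixing matrix M."""
    return matvec(M, vector)

def unmix(vector):
    """Inverse mixing used during reconstruction."""
    return matvec(M_INV, vector)
```

The no-zeros condition on the inverse is exactly what prevents any single information symbol from being read out of fewer than k mixed symbols.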
FIG. 12 is a block-diagram illustrating the operation of failed storage node repair. The repair process includes reconstruction of data stored on failed storage nodes (SNs) and transferring of the reconstructed data to new storage nodes. Failed SNs are assumed to be detected by a monitoring process. Identifiers of failed SNs 1709 are input arguments for the repair process. Failed SN identifiers are employed to retrieve metadata on lost multi-chunks, i.e. multi-chunks erased due to SN failure. A list of identifiers of these multi-chunks is formed at step 1701, where identifiers of erased multi-chunks are further employed to retrieve data about the parameters of the erasure coding scheme, the systematic/parity index of each erased multi-chunk and references to other multi-chunks produced from the same segment. The process of recovering erased data includes decoding in the employed error-correction code. A decoding schedule is formed depending on the systematic/parity indices of the erased multi-chunks and the parameters of the erasure coding scheme. - The present disclosure includes a low bandwidth repair method for the case of a single SN failure. This method is applied at
step 1704 for recovering each multi-chunk and comprises two algorithms. An appropriate algorithm is selected depending on whether the erased multi-chunk is a systematic or a parity one (step 1705). Recovering of an erased parity multi-chunk is performed at step 1706, which is further described in detail by FIG. 13. Recovering of an erased systematic multi-chunk is performed at step 1707, which is further described in detail by FIG. 14. The number of failed SNs is checked at step 1703. If more than one storage node has failed, then multiple SN repair is performed at step 1708. Upon repair completion, acknowledgements 1710 are issued. -
FIG. 13 is a block-diagram illustrating recovery of a parity multi-chunk in the case of a single storage node failure. The parity index C of the erased multi-chunk 1807 is employed at step 1801 to identify the row R related to column C within the stage 2 of the erasure coding scheme. Recall that each multi-chunk consists of α chunks and each chunk consists of r sub-chunks. At step 1802 sub-chunks corresponding to the row R of the α chunks of the k systematic multi-chunks are transferred from SNs. Thus, the total number of transferred systematic sub-chunks is equal to α·k, while the total number of stored systematic sub-chunks is r·α·k. At step 1803 encoding corresponding to the stage 1 of erasure coding is performed for the α·k sub-chunks, which results in r·α intermediate parity sub-chunks. These r·α intermediate parity sub-chunks are further divided into α groups, where the i-th group consists of r sub-chunks corresponding to the row R of chunks, 1≤i≤α. Steps 1804 and 1805 are executed for each of the α groups. At step 1805 decoding corresponding to the stage 2 of erasure coding is performed in order to recover the full i-th chunk of the erased multi-chunk from r intermediate parity sub-chunks and r−1 parity sub-chunks corresponding to the row R. Finally, the reconstructed parity multi-chunk is transferred to the corresponding SN at step 1806. Alternatively, the multi-chunk may be transferred to the SN by chunks or sub-chunks as soon as these chunks or sub-chunks are recovered. Upon recovering and transferring the whole reconstructed multi-chunk, an acknowledgement 1808 is sent. -
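The traffic counts above can be tallied in a short bookkeeping sketch (the function names are illustrative, not from the disclosure):

```python
def parity_repair_traffic(k, r, alpha):
    """Sub-chunks transferred by the FIG. 13 single-failure repair:
    alpha*k systematic sub-chunks (row R of each of the alpha chunks of the
    k systematic multi-chunks) plus r-1 parity sub-chunks per recovered
    chunk, alpha chunks in total."""
    return alpha * k + alpha * (r - 1)

def naive_repair_traffic(k, r, alpha):
    """Trivial MDS repair for comparison: download k whole multi-chunks of
    alpha*r sub-chunks each, then re-encode the lost one."""
    return k * alpha * r
```

For example, with k=10, r=4 and α=8 the scheme transfers 104 sub-chunks versus 320 for trivial MDS repair, roughly a threefold traffic reduction.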
FIG. 14 is a block-diagram illustrating recovery of a systematic multi-chunk in the case of a single storage node failure. Given the systematic index of the erased multi-chunk 1907, at step 1901 the corresponding repair set W(i) consisting of ┌α/r┐ rows is identified according to the stage 1 of erasure coding. - The repair process for the i-th erased systematic multi-chunk comprises two stages. At the first stage (steps 1902 and 1903) a transformation inverse to the
stage 2 of erasure coding is performed. At the second stage (steps 1904-1906) repair according to the stage 1 of the erasure coding scheme is performed. - At step 1902 r parity chunks related to each row from the repair set W(i) are transferred from SNs. Then, at step 1903 r intermediate parity chunks are reconstructed for each row from the repair set W(i). Reconstruction is performed by applying the inverse transformation for the
stage 2 of erasure coding, where a system of r·r linear equations is solved for unknown variables represented by r·r intermediate parity sub-chunks. The system comprises r(r−1) equations with 2 unknown variables each. In the second stage of the repair process, operations over ┌α/r┐·(k−1)·r systematic sub-chunks and ┌α/r┐·r·r reconstructed intermediate parity sub-chunks are performed. At step 1904 k−1 systematic chunks for each row from the repair set W(i) are transferred from the k−1 surviving SNs. Chunks of the erased multi-chunk are further recovered in r steps, where at the j-th step gj chunks are recovered, └α/r┘≤gj≤┌α/r┐ and g1 is equal to the cardinality of the repair set W(i). The first step differs from the other steps j=2, . . . , r. At the first step g1 chunks of the erased multi-chunk are recovered as a result of intra-row decoding performed at step 1905, where decoding in the error-correction code of the stage 1 of erasure coding is performed. Here intra-row decoding means that decoding is performed independently for each row from the repair set W(i), where decoding for a row consists in recovering a chunk of the erased multi-chunk from k−1 systematic chunks and one intermediate parity chunk by solving a linear equation. Recall that each chunk consists of r sub-chunks, so operations over chunks may be represented as operations over sub-chunks performed independently in parallel. - Further steps for j=2, . . . , r employ inter-row decoding. In some cases, inter-row decoding requires τi supplementary systematic chunks to repair the i-th multi-chunk; transferring of these chunks may be performed at
step 1906 prior to decoding. Recall that for sufficiently large α the number of supplementary systematic chunks τi=0. The other chunks of the erased multi-chunk are recovered by performing inter-row decoding at step 1907 for each of j=2, . . . , r. At step 1907 gj intermediate parity chunks containing references to gj chunks of the erased multi-chunk are identified, where these intermediate parity chunks are from rows of the repair set W(i). At the j-th step chunks of the erased multi-chunk are recovered by solving a system of gj linear equations, where the equations are obtained from expressions employed for the stage 1 erasure coding. Chunks of the erased multi-chunk are expressed via other chunks in these expressions. Recall that by design these expressions are such that each of them contains exactly one chunk of the erased multi-chunk and these chunks are not repeated. Finally, the reconstructed systematic multi-chunk is transferred to the corresponding SN at step 1908. Alternatively, the multi-chunk may be transferred to the SN by chunks or sub-chunks as soon as these chunks or sub-chunks are recovered. Upon recovering and transferring the whole reconstructed multi-chunk, an acknowledgement 1909 is sent. - Original data may be retrieved as follows. First, a list of data segments comprising the original data is identified. Then k output multi-chunks are transferred from storage nodes for each data segment from the list, where output multi-chunks are given by systematic and parity multi-chunks. Recall that, according to the present disclosure, the employed erasure coding scheme is such that a data segment may be reconstructed from any k out of (k+r) output multi-chunks. So, any k out of (k+r) output multi-chunks may be selected for each segment. In most cases, output multi-chunks are selected to minimize average latency. Then reconstruction of each data segment is performed as a function of the corresponding k output multi-chunks.
- Observe that reconstruction of original data in case of Reed-Solomon codes (or any other maximum distance separable code) is performed in the same way.
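The any-k-of-(k+r) property can be illustrated with a toy MDS code over the rationals, using a Vandermonde-style generator matrix (an assumption for illustration, not the disclosure's actual two-stage code):

```python
from fractions import Fraction as F

def vandermonde(rows, cols):
    # row x is (x^0, x^1, ..., x^(cols-1)); any `cols` rows with distinct
    # evaluation points form an invertible Vandermonde matrix
    return [[F(x) ** j for j in range(cols)] for x in range(rows)]

def solve(A, b):
    """Gauss-Jordan elimination over exact rationals."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = next(i for i in range(col, n) if M[i][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        pivot = M[col][col]
        M[col] = [v / pivot for v in M[col]]
        for i in range(n):
            factor = M[i][col]
            if i != col and factor != 0:
                M[i] = [a - factor * c for a, c in zip(M[i], M[col])]
    return [row[-1] for row in M]

k, r = 3, 2
G = vandermonde(k + r, k)                     # (k+r) x k generator matrix
segment = [F(7), F(1), F(4)]                  # k information symbols
coded = [sum(g * d for g, d in zip(row, segment)) for row in G]

# Any k of the k+r coded symbols suffice for reconstruction.
survivors = [0, 3, 4]
recovered = solve([G[i] for i in survivors], [coded[i] for i in survivors])
assert recovered == segment
```

The same inversion works for any choice of k survivors, which is the maximum distance separable property the retrieval procedure relies on.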
- Particular embodiments of the subject matter described in this specification have been described. Other embodiments are within the scope of the following claims. For example, the actions recited in the claims can be performed in a different order and still achieve desirable results. As one example, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In certain implementations, multitasking and parallel processing may be advantageous.
Claims (19)
1. A method for distributing data of a plurality of files over a plurality of respective remote storage nodes, the method comprising:
a. splitting into segments, by at least one processor configured to execute code stored in non-transitory processor readable media, the data of the plurality of files;
b. preprocessing each segment and then splitting it into v input chunks: t highly sensitive chunks and v−t frequently demanded chunks, where highly sensitive chunks contain data which ought to be stored securely and frequently demanded chunks contain data which ought to be stored in a highly available manner;
c. encoding, by the at least one processor, the v input chunks (produced from the same segment) together with k−v supplementary input chunks into n output chunks, where none of the n output chunks contains a copy of any fragment of the highly sensitive chunks, while v−t output chunks are given by copies of the v−t frequently demanded input chunks (these output chunks are further referred to as frequently demanded output chunks), n≥k;
d. assigning, by the at least one processor, output chunks to remote storage nodes, wherein the n output chunks produced from the same segment are assigned to n different storage nodes;
e. transmitting, by the at least one processor, each of the output chunks to at least one respective storage node; and
f. retrieving, by the at least one processor, at least a part of at least one of the plurality of files by downloading parts of output chunks from storage nodes, where the amount of data transferred from each storage node is optimized to minimize average latency for data reconstruction.
2. The method of claim 1 , wherein the step of data splitting provides data within a respective segment that comprises a part of one individual file or several different files.
3. The method of claim 1 , wherein the step of segment preprocessing comprises one or several of the following transformations: deduplication, compression, encryption and fragmentation.
4. The method of claim 3 , wherein the step of segment preprocessing includes encryption, wherein one or several parts of a segment are encrypted in individual manner or a segment is encrypted entirely.
5. The method of claim 3 , wherein the step of segment preprocessing includes fragmentation consisting of data partitioning and encoding, wherein fragmentation encoding is a function of one or several of the following: random (pseudo-random) values, values derived from original data (e.g. derived using deterministic cryptographic hash) and predetermined values.
6. The method of claim 1 , wherein the step of encoding employs supplementary inputs given by random data, values derived from original data (e.g. derived using deterministic hash) or predetermined values.
7. The method of claim 1, wherein the step of encoding comprises applying erasure coding to k input chunks to produce n output chunks, where erasure coding is performed using a linear block error correction code in such a way that the t highly sensitive input chunks may be reconstructed only as a function of at least k output chunks (any k output chunks are suitable), while the (v−t) frequently demanded input chunks may be reconstructed as a copy of the related output chunks, as well as a function of any k output chunks.
8. The method of claim 7 , wherein method for erasure coding utilizes a maximum distance separable (MDS) error-correction code and encoding is performed using k×n generator matrix G comprising (k−p) columns of k×k identity matrix, where 0≤t≤p≤k and v−t≤k−p, while other columns form k×(n+p−k) matrix such that any its square submatrix is nonsingular.
9. The method of claim 8, wherein the k×n MDS code generator matrix G is obtained as follows:
a. Selecting an arbitrary MDS code of length (n+p) and dimension k;
b. Constructing a k×(n+p) generator matrix in systematic form (i.e. a generator matrix which includes the k×k identity matrix as its submatrix);
c. Excluding p columns of the k×k identity matrix from the k×(n+p) generator matrix in systematic form to obtain the k×n matrix G.
10. The method of claim 7, wherein t=v, that is, output chunks do not contain any copy of a fragment of the input chunks produced from a segment and any fragment of these input chunks may be reconstructed only as a function of at least k output chunks.
11. The method of claim 8, wherein the employed MDS error-correction code is a Reed-Solomon code.
12. The method of claim 11, wherein for encoding with the Reed-Solomon code the employed generator matrix is based on a Vandermonde matrix.
13. The method of claim 11, wherein for encoding with the Reed-Solomon code the employed generator matrix is based on a Cauchy matrix concatenated with an identity matrix.
14. The method of claim 1 , wherein the step of assigning of output chunks to storage nodes comprises selection of trusted storage nodes (e.g. in private storage) and mapping frequently demanded output chunks to these trusted storage nodes.
15. The method of claim 1, wherein the step of assigning output chunks to storage nodes comprises selecting highly available storage nodes, mapping frequently demanded output chunks to these storage nodes and encrypting frequently demanded output chunks in an individual manner prior to transmission, where highly available storage nodes demonstrate high average data transfer speed and low latency.
16. The method of claim 1, wherein the step of retrieving data (at least a part of at least one of the plurality of files) comprises:
a. identifying a range of indices within each information chunk corresponding to the requested data;
b. downloading, by the at least one processor, such parts of output chunks from storage nodes that
i. the total size of these parts is equal to the size of the widest range multiplied by k, and
ii. the number of parts with the same range of indices within output chunks is equal to k;
c. reconstructing, by the at least one processor, requested data by performing the following steps:
for each set S of k source storage nodes:
i. combining parts with the same range of indices into a vector cS, and
ii. multiplying the vector cS by the inverse of matrix G(S), where G(S) is a matrix consisting of k columns of the selectively mixing matrix G with indices from the set S.
17. The method of claim 1, wherein requested data is contained only in frequently demanded input chunks, and the requested data is retrieved by downloading only the corresponding frequently demanded output chunks, thereby achieving traffic reduction compared to the general case of data retrieval described in claim 16.
18. A method for distributing data of a plurality of files over a plurality of respective remote storage nodes, the method comprising:
a. splitting into segments, by at least one processor configured to execute code stored in non-transitory processor readable media, the data of the plurality of files;
b. optionally applying deduplication, compression and/or encryption to each segment;
c. splitting each segment into k information multi-chunks and optionally applying data mixing to these information multi-chunks to produce k systematic multi-chunks;
d. encoding, by the at least one processor, k systematic multi-chunks (produced from the same segment) into r parity multi-chunks, wherein employed erasure coding scheme maximizes storage efficiency, enables reconstruction of the k systematic multi-chunks from any k output multi-chunks and enables recovering of a single output multi-chunk with minimized network traffic, where the set of k+r output multi-chunks comprises k systematic multi-chunks and r parity multi-chunks;
e. assigning, by the at least one processor, k+r output multi-chunks to remote storage nodes, wherein k+r output multi-chunks produced from the same segment are assigned to k+r different storage nodes;
f. transmitting, by the at least one processor, each of the output multi-chunks to at least one respective storage node;
g. storage node repairing, by the at least one processor, wherein at least one output multi-chunk is recovered as a function of parts of other output multi-chunks produced from the same segment, wherein network traffic is minimized; and
h. retrieving, by the at least one processor, at least a part of at least one of the plurality of files as a function of parts of output multi-chunks.
19. A system for distributing data of a plurality of files over a plurality of respective remote storage nodes, the system comprising:
at least one processor configured by executing instructions from non-transitory processor readable media, the at least one processor configured for:
a. splitting into segments, by at least one processor configured to execute code stored in non-transitory processor readable media, the data of the plurality of files;
b. optionally applying deduplication, compression and/or encryption to each segment;
c. splitting each segment into k information multi-chunks and optionally applying data mixing to these information multi-chunks to produce k systematic multi-chunks;
d. encoding k systematic multi-chunks (produced from the same segment) into r parity multi-chunks, wherein employed erasure coding scheme maximizes storage efficiency, enables reconstruction of the k systematic multi-chunks from any k output multi-chunks and enables recovering of a single output multi-chunk with minimized network traffic, where the set of k+r output multi-chunks comprises k systematic multi-chunks and r parity multi-chunks;
e. assigning k+r output multi-chunks to remote storage nodes, wherein k+r output multi-chunks produced from the same segment are assigned to k+r different storage nodes;
f. transmitting each of the output multi-chunks to at least one respective storage node;
g. storage node repairing wherein at least one output multi-chunk is recovered as a function of parts of other output multi-chunks produced from the same segment, wherein network traffic is minimized; and
h. retrieving at least a part of at least one of the plurality of files as a function of parts of output multi-chunks.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/532,904 US20220229727A1 (en) | 2019-01-29 | 2021-11-22 | Encoding and storage node repairing method for minimum storage regenerating codes for distributed storage systems |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201962798256P | 2019-01-29 | 2019-01-29 | |
US201962798265P | 2019-01-29 | 2019-01-29 | |
US16/776,070 US11182247B2 (en) | 2019-01-29 | 2020-01-29 | Encoding and storage node repairing method for minimum storage regenerating codes for distributed storage systems |
US17/532,904 US20220229727A1 (en) | 2019-01-29 | 2021-11-22 | Encoding and storage node repairing method for minimum storage regenerating codes for distributed storage systems |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/776,070 Division US11182247B2 (en) | 2019-01-29 | 2020-01-29 | Encoding and storage node repairing method for minimum storage regenerating codes for distributed storage systems |
Publications (1)
Publication Number | Publication Date |
---|---|
US20220229727A1 true US20220229727A1 (en) | 2022-07-21 |
Family
ID=71731259
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/776,070 Active US11182247B2 (en) | 2019-01-29 | 2020-01-29 | Encoding and storage node repairing method for minimum storage regenerating codes for distributed storage systems |
US17/532,904 Abandoned US20220229727A1 (en) | 2019-01-29 | 2021-11-22 | Encoding and storage node repairing method for minimum storage regenerating codes for distributed storage systems |
Country Status (4)
Country | Link |
---|---|
US (2) | US11182247B2 (en) |
EP (1) | EP3918484A4 (en) |
MX (1) | MX2021009011A (en) |
WO (1) | WO2020160142A1 (en) |
Family Cites Families (60)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6307868B1 (en) | 1995-08-25 | 2001-10-23 | Terayon Communication Systems, Inc. | Apparatus and method for SCDMA digital data transmission using orthogonal codes and a head end modem with no tracking loops |
US6665308B1 (en) | 1995-08-25 | 2003-12-16 | Terayon Communication Systems, Inc. | Apparatus and method for equalization in distributed digital data transmission systems |
US7010532B1 (en) | 1997-12-31 | 2006-03-07 | International Business Machines Corporation | Low overhead methods and apparatus for shared access storage devices |
EP1228453A4 (en) | 1999-10-22 | 2007-12-19 | Activesky Inc | An object oriented video system |
US6952737B1 (en) * | 2000-03-03 | 2005-10-04 | Intel Corporation | Method and apparatus for accessing remote storage in a distributed storage cluster architecture |
AU2002214659A1 (en) | 2000-10-26 | 2002-05-06 | James C. Flood Jr. | Method and system for managing distributed content and related metadata |
CA2331474A1 (en) | 2001-01-19 | 2002-07-19 | Stergios V. Anastasiadis | Stride-based disk space allocation scheme |
US7240236B2 (en) | 2004-03-23 | 2007-07-03 | Archivas, Inc. | Fixed content distributed data storage using permutation ring encoding |
JP2007018563A (en) | 2005-07-05 | 2007-01-25 | Toshiba Corp | Information storage medium, method and device for recording information, method and device for reproducing information |
US8694668B2 (en) | 2005-09-30 | 2014-04-08 | Cleversafe, Inc. | Streaming media software interface to a dispersed data storage network |
US7574579B2 (en) | 2005-09-30 | 2009-08-11 | Cleversafe, Inc. | Metadata management system for an information dispersed storage system |
US8285878B2 (en) | 2007-10-09 | 2012-10-09 | Cleversafe, Inc. | Block based access to a dispersed data storage network |
JP2009543409A (en) | 2006-06-29 | 2009-12-03 | コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ | Data encoding and decoding method and apparatus by error correction |
US7698242B2 (en) | 2006-08-16 | 2010-04-13 | Fisher-Rosemount Systems, Inc. | Systems and methods to maintain process control systems using information retrieved from a database storing general-type information and specific-type information |
US8296812B1 (en) | 2006-09-01 | 2012-10-23 | Vudu, Inc. | Streaming video using erasure encoding |
US8442989B2 (en) | 2006-09-05 | 2013-05-14 | Thomson Licensing | Method for assigning multimedia data to distributed storage devices |
US9411976B2 (en) * | 2006-12-01 | 2016-08-09 | Maidsafe Foundation | Communication system and method |
US8655939B2 (en) | 2007-01-05 | 2014-02-18 | Digital Doors, Inc. | Electromagnetic pulse (EMP) hardened information infrastructure with extractor, cloud dispersal, secure storage, content analysis and classification and method therefor |
AU2008216698B2 (en) | 2007-02-12 | 2011-06-23 | Mushroom Networks Inc. | Access line bonding and splitting methods and apparatus |
US8717885B2 (en) | 2007-04-26 | 2014-05-06 | Mushroom Networks, Inc. | Link aggregation methods and devices |
US8315999B2 (en) | 2007-08-29 | 2012-11-20 | Nirvanix, Inc. | Policy-based file management for a storage delivery network |
US9106630B2 (en) | 2008-02-01 | 2015-08-11 | Mandiant, Llc | Method and system for collaboration during an event |
EP2294739B1 (en) | 2008-06-24 | 2019-08-07 | Mushroom Networks Inc. | Inter-office communication method |
WO2010033644A1 (en) | 2008-09-16 | 2010-03-25 | File System Labs Llc | Matrix-based error correction and erasure code methods and apparatus and applications thereof |
US7818430B2 (en) | 2008-10-15 | 2010-10-19 | Patentvc Ltd. | Methods and systems for fast segment reconstruction |
US8504847B2 (en) | 2009-04-20 | 2013-08-06 | Cleversafe, Inc. | Securing data in a dispersed storage network using shared secret slices |
US8572282B2 (en) | 2009-10-30 | 2013-10-29 | Cleversafe, Inc. | Router assisted dispersed storage network method and apparatus |
US9462316B2 (en) | 2009-12-29 | 2016-10-04 | International Business Machines Corporation | Digital content retrieval utilizing dispersed storage |
US10216647B2 (en) | 2010-02-27 | 2019-02-26 | International Business Machines Corporation | Compacting dispersed storage space |
US8892598B2 (en) | 2010-06-22 | 2014-11-18 | Cleversafe, Inc. | Coordinated retrieval of data from a dispersed storage network |
US8473778B2 (en) | 2010-09-08 | 2013-06-25 | Microsoft Corporation | Erasure coding immutable data |
US8924359B1 (en) * | 2011-04-07 | 2014-12-30 | Symantec Corporation | Cooperative tiering |
US8560801B1 (en) * | 2011-04-07 | 2013-10-15 | Symantec Corporation | Tiering aware data defragmentation |
US8935493B1 (en) | 2011-06-30 | 2015-01-13 | Emc Corporation | Performing data storage optimizations across multiple data storage systems |
US8683286B2 (en) | 2011-11-01 | 2014-03-25 | Cleversafe, Inc. | Storing data in a dispersed storage network |
US8627066B2 (en) | 2011-11-03 | 2014-01-07 | Cleversafe, Inc. | Processing a dispersed storage network access request utilizing certificate chain validation information |
US8656257B1 (en) | 2012-01-11 | 2014-02-18 | Pmc-Sierra Us, Inc. | Nonvolatile memory controller with concatenated error correction codes |
US8799746B2 (en) | 2012-06-13 | 2014-08-05 | Caringo, Inc. | Erasure coding and replication in storage clusters |
WO2014005279A1 (en) | 2012-07-03 | 2014-01-09 | 北京大学深圳研究生院 | Method and device for constructing distributed storage code capable of accurate regeneration |
US9741005B1 (en) | 2012-08-16 | 2017-08-22 | Amazon Technologies, Inc. | Computing resource availability risk assessment using graph comparison |
US20140136571A1 (en) | 2012-11-12 | 2014-05-15 | Ecole Polytechnique Federale De Lausanne (Epfl) | System and Method for Optimizing Data Storage in a Distributed Data Storage Environment |
US9304859B2 (en) | 2012-12-29 | 2016-04-05 | Emc Corporation | Polar codes for efficient encoding and decoding in redundant disk arrays |
US20140278807A1 (en) | 2013-03-15 | 2014-09-18 | Cloudamize, Inc. | Cloud service optimization for cost, performance and configuration |
RU2013128346A (en) | 2013-06-20 | 2014-12-27 | ИЭмСи КОРПОРЕЙШН | DATA CODING FOR A DATA STORAGE SYSTEM BASED ON GENERALIZED CASCADE CODES |
US9241044B2 (en) | 2013-08-28 | 2016-01-19 | Hola Networks, Ltd. | System and method for improving internet communication by using intermediate nodes |
EP2953026B1 (en) * | 2013-10-18 | 2016-12-14 | Hitachi, Ltd. | Target-driven independent data integrity and redundancy recovery in a shared-nothing distributed storage system |
US9648100B2 (en) | 2014-03-05 | 2017-05-09 | Commvault Systems, Inc. | Cross-system storage management for transferring data across autonomous information management systems |
CN106462605A (en) | 2014-05-13 | 2017-02-22 | 云聚公司 | Distributed secure data storage and transmission of streaming media content |
US9684594B2 (en) | 2014-07-16 | 2017-06-20 | ClearSky Data | Write back coordination node for cache latency correction |
CN105335150B (en) | 2014-08-13 | 2019-03-19 | 苏宁易购集团股份有限公司 | Fast decoding method and system for erasure-coded data |
US10043211B2 (en) | 2014-09-08 | 2018-08-07 | Leeo, Inc. | Identifying fault conditions in combinations of components |
KR101618269B1 (en) | 2015-05-29 | 2016-05-04 | 연세대학교 산학협력단 | Method and Apparatus of Encoding for Data Recovery in Distributed Storage System |
CA2989334A1 (en) | 2015-07-08 | 2017-01-12 | Cloud Crowding Corp. | System and method for secure transmission of signals from a camera |
US20170060700A1 (en) | 2015-08-28 | 2017-03-02 | Qualcomm Incorporated | Systems and methods for verification of code resiliency for data storage |
US10931402B2 (en) | 2016-03-15 | 2021-02-23 | Cloud Storage, Inc. | Distributed storage system data management and security |
ES2899933T3 (en) | 2016-03-15 | 2022-03-15 | Datomia Res Labs Ou | Distributed storage system data management and security |
US10387248B2 (en) | 2016-03-29 | 2019-08-20 | International Business Machines Corporation | Allocating data for storage by utilizing a location-based hierarchy in a dispersed storage network |
US10216740B2 (en) | 2016-03-31 | 2019-02-26 | Acronis International Gmbh | System and method for fast parallel data processing in distributed storage systems |
US11461485B2 (en) * | 2016-08-12 | 2022-10-04 | ALTR Solutions, Inc. | Immutable bootloader and firmware validator |
US20190182115A1 (en) | 2017-11-02 | 2019-06-13 | Datomia Research Labs Ou | Optimization of distributed data storage systems |
2020
- 2020-01-29 EP EP20749736.3A patent/EP3918484A4/en active Pending
- 2020-01-29 US US16/776,070 patent/US11182247B2/en active Active
- 2020-01-29 WO PCT/US2020/015672 patent/WO2020160142A1/en unknown
- 2020-01-29 MX MX2021009011A patent/MX2021009011A/en unknown

2021
- 2021-11-22 US US17/532,904 patent/US20220229727A1/en not_active Abandoned
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190036648A1 (en) * | 2014-05-13 | 2019-01-31 | Datomia Research Labs Ou | Distributed secure data storage and transmission of streaming media content |
US20190377637A1 (en) * | 2018-06-08 | 2019-12-12 | Samsung Electronics Co., Ltd. | System, device and method for storage device assisted low-bandwidth data repair |
Also Published As
Publication number | Publication date |
---|---|
EP3918484A4 (en) | 2023-03-29 |
US11182247B2 (en) | 2021-11-23 |
US20200241960A1 (en) | 2020-07-30 |
WO2020160142A1 (en) | 2020-08-06 |
MX2021009011A (en) | 2021-11-12 |
EP3918484A1 (en) | 2021-12-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11182247B2 (en) | 2021-11-23 | Encoding and storage node repairing method for minimum storage regenerating codes for distributed storage systems |
Curtmola et al. | Robust remote data checking | |
Ateniese et al. | Remote data checking using provable data possession | |
Bowers et al. | HAIL: A high-availability and integrity layer for cloud storage | |
JP5905068B2 (en) | Decentralized storage and communication | |
US5530757A (en) | Distributed fingerprints for information integrity verification | |
Hendricks et al. | Verifying distributed erasure-coded data | |
Chen et al. | Robust dynamic provable data possession | |
EP3669488A1 (en) | Secure hardware signature and related methods and applications | |
US10902138B2 (en) | Distributed cloud storage | |
Rashmi et al. | Information-theoretically secure erasure codes for distributed storage | |
US10437525B2 (en) | Communication efficient secret sharing | |
US20140047239A1 (en) | Authenticator, authenticatee and authentication method | |
Chen et al. | Robust dynamic remote data checking for public clouds | |
He et al. | Public integrity auditing for dynamic regenerating code based cloud storage | |
Rasina Begum et al. | SEEDDUP: a three-tier SEcurE data DedUPlication architecture-based storage and retrieval for cross-domains over cloud | |
US20130073901A1 (en) | Distributed storage and communication | |
CN112764677B (en) | Method for enhancing data migration security in cloud storage | |
VS et al. | A secure regenerating code‐based cloud storage with efficient integrity verification | |
Juels et al. | Falcon codes: fast, authenticated lt codes (or: making rapid tornadoes unstoppable) | |
US11580091B2 (en) | Method of ensuring confidentiality and integrity of stored data and metadata in an untrusted environment | |
Chan et al. | Fault-tolerant and secure networked storage | |
Oggier et al. | Homomorphic self-repairing codes for agile maintenance of distributed storage systems | |
Sengupta et al. | An efficient secure distributed cloud storage for append-only data | |
Omote et al. | ND-POR: A POR based on network coding and dispersal coding |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
| STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
| STPP | Information on status: patent application and granting procedure in general | Free format text: FINAL REJECTION MAILED |
| STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |