
US20200081780A1 - Data storage device and parity code processing method thereof - Google Patents

Data storage device and parity code processing method thereof Download PDF

Info

Publication number
US20200081780A1
Authority
US
United States
Prior art keywords
user data
ecc engine
volatile memory
data
selector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US16/533,818
Inventor
An-Pang Li
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Silicon Motion Inc
Original Assignee
Silicon Motion Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Silicon Motion Inc
Priority to US16/533,818
Assigned to SILICON MOTION, INC. Assignors: LI, AN-PANG (assignment of assignors interest; see document for details)
Publication of US20200081780A1
Legal status: Abandoned

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 Addressing or allocation; Relocation
    • G06F12/0223 User address space allocation, e.g. contiguous or non contiguous base addressing
    • G06F12/023 Free address space management
    • G06F12/0253 Garbage collection, i.e. reclamation of unreferenced memory
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0602 Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/0614 Improving the reliability of storage systems
    • G06F3/0619 Improving the reliability of storage systems in relation to data integrity, e.g. data losses, bit errors
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/08 Error detection or correction by redundancy in data representation, e.g. by using checking codes
    • G06F11/10 Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's
    • G06F11/1076 Parity data used in redundant arrays of independent storages, e.g. in RAID systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 Addressing or allocation; Relocation
    • G06F12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/10 Address translation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 Addressing or allocation; Relocation
    • G06F12/0223 User address space allocation, e.g. contiguous or non contiguous base addressing
    • G06F12/023 Free address space management
    • G06F12/0238 Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory
    • G06F12/0246 Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory in block erasable memory, e.g. flash memory
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 Addressing or allocation; Relocation
    • G06F12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0602 Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/0608 Saving storage space on storage systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0602 Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/0625 Power saving in storage systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0638 Organizing or formatting or addressing of data
    • G06F3/064 Management of blocks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0655 Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F3/0659 Command handling arrangements, e.g. command buffers, queues, command scheduling
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/0671 In-line storage system
    • G06F3/0673 Single storage device
    • G06F3/0679 Non-volatile semiconductor memory device, e.g. flash memory, one time programmable memory [OTP]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/0671 In-line storage system
    • G06F3/0683 Plurality of storage devices
    • G06F3/0689 Disk arrays, e.g. RAID, JBOD
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11C STATIC STORES
    • G11C11/00 Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor
    • G11C11/21 Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements
    • G11C11/34 Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements using semiconductor devices
    • G11C11/40 Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements using semiconductor devices using transistors
    • G11C11/401 Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements using semiconductor devices using transistors forming cells needing refreshing or charge regeneration, i.e. dynamic cells
    • G11C11/4063 Auxiliary circuits, e.g. for addressing, decoding, driving, writing, sensing or timing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00 Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/72 Details relating to flash memory management
    • G06F2212/7201 Logical to physical mapping or translation of blocks or pages
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00 Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/72 Details relating to flash memory management
    • G06F2212/7205 Cleaning, compaction, garbage collection, erase control
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • the present invention relates to a data storage device, and more particularly to a data storage device and a parity code processing method thereof.
  • a data storage device is composed of a controller and a non-volatile memory such as a flash memory
  • the controller may include a redundant array of independent disks (RAID) error correcting code (ECC) engine.
  • RAID ECC engine is mainly used to perform error correction procedures.
  • One of the operating principles is that when a host wants to write user data to the non-volatile memory and the written user data uses the page as its management unit, the controller simultaneously sends the user data to the RAID ECC engine for an encoding operation until the user data of a page group, for example the user data of page 0 to page N-1, has been encoded.
  • the RAID ECC engine then generates a parity code corresponding to the user data of page 0 to page N-1 and writes the parity code into the non-volatile memory; that is, the parity code can be used as the data of page N, wherein N is a positive integer greater than 1. Therefore, after the user data of every N pages is encoded, the RAID ECC engine has to switch its state to output the encoded parity code to the non-volatile memory for each encoding operation. This makes the parity codes written to the non-volatile memory discontinuous and also lowers the write efficiency.
  • an object of the invention is to provide a data storage device and a parity code processing method thereof.
  • a data storage device including a non-volatile memory and a controller electrically coupled to the non-volatile memory.
  • the controller includes an access interface, a central processing unit (CPU) and a RAID ECC engine.
  • the RAID ECC engine has a memory, wherein after completing the encoding operation on each N pages of the user data to generate a corresponding parity code, the RAID ECC engine compresses the parity code and stores the compressed parity code in the memory of the RAID ECC engine, wherein after all K parity codes of the K×N pages of the user data are compressed and stored in the memory, the RAID ECC engine writes the compressed K parity codes to the non-volatile memory, wherein K and N are both positive integers greater than one.
  • an embodiment of the invention further provides a parity code processing method, which is implemented in the controller of the foregoing embodiment, and includes the following steps. First, configuring the CPU to issue at least one control signal to the RAID ECC engine and transmitting the user data of a plurality of pages to the RAID ECC engine. Second, configuring the RAID ECC engine to perform an encoding operation on the user data of N pages based on the control signal to generate a corresponding parity code, compress the parity code, and store the compressed parity code in a memory of the RAID ECC engine.
  • FIG. 1 is a schematic functional block diagram of a data storage device according to an embodiment of the invention.
  • FIG. 2 is a schematic functional block diagram of a RAID ECC engine in the data storage device of FIG. 1;
  • FIG. 3A to FIG. 3D are schematic diagrams of the RAID ECC engine performing a parity code processing method according to FIG. 2;
  • FIG. 3E is a schematic diagram of the written data stored in the non-volatile memory in FIG. 1 under the embodiments of FIG. 3A to FIG. 3D;
  • FIG. 3F is a schematic timing diagram of the operational circuits in FIG. 3A to FIG. 3D performing the parity code processing method.
  • FIG. 4 is a schematic flow diagram of a parity code processing method according to an embodiment of the invention.
  • FIG. 1 is a schematic functional block diagram of a data storage device according to an embodiment of the invention.
  • the data storage device 1 includes a non-volatile memory 110 and a controller 120.
  • the non-volatile memory 110 includes a plurality of blocks (not shown). Each block further includes a plurality of pages, and the page is the smallest unit of program. That is, the page is the smallest unit for data writing or reading, and a word line can control more than one page.
  • the block is the smallest unit for data erasing. Therefore, the blocks can be classified into spare blocks, active blocks and data blocks according to their functions.
  • the spare blocks are blocks that can be selected and have data written to them
  • the active blocks are blocks that have already been selected and are being written with data
  • the data blocks are blocks in which the data writing is finished and which can no longer be written. It should be noted that the invention does not limit the specific implementation manner of the blocks and the pages, and those skilled in the art should be able to perform related designs according to actual needs or applications.
  • the non-volatile memory 110 is preferably implemented by a flash memory, but the invention is not limited thereto.
  • the controller 120 is electrically coupled to the non-volatile memory 110 and is used to control data access in the non-volatile memory 110.
  • the data storage device 1 is usually used together with a host 2 to write user data into the non-volatile memory 110 or read the user data from the non-volatile memory 110 according to the write/read command issued by the host 2.
  • the controller 120 preferably is a flash memory controller and may mainly include a CPU 124, an access interface 121 and a RAID ECC engine 123.
  • the CPU 124 preferably has its own memory, such as a CPU memory 1241 for storing temporary data.
  • the CPU 124 and the CPU memory 1241 are drawn separately. However, it should be understood by those of ordinary skill in the art that the CPU memory 1241 is actually encompassed within the CPU 124. Incidentally, since the CPU memory 1241 usually has a fixed size, the CPU 124 will store the temporary data in an external memory outside the controller 120 whenever the temporary data to be stored is larger than the available space of the CPU memory 1241. However, the speed at which the CPU 124 accesses the external memory is much slower than the speed at which the CPU accesses the internal memory, so the overall system performance is likely to decline.
  • the access interface 121 is coupled to the host 2 and the non-volatile memory 110.
  • the CPU 124 is used to interpret write/read commands issued by the host 2 to generate operation commands and control the access interface 121 to access the user data of the non-volatile memory 110 based on the operation commands.
  • the controller 120 can further include a data buffer 122 coupled to the access interface 121, the RAID ECC engine 123, and the non-volatile memory 110, and is used to temporarily store the user data from the host 2 or the non-volatile memory 110.
  • the data buffer 122 can also preferably be used to temporarily store in-system programming (ISP) or logical-to-physical address mapping table required for the operation of the CPU 124, but the invention is not limited thereto.
  • the data buffer 122 is preferably implemented by a static random access memory (SRAM) or a random access memory (RAM) capable of fast access, but the invention is not limited thereto.
  • the RAID ECC engine 123 is coupled to the data buffer 122 and the non-volatile memory 110 and is used to perform an error correction procedure on the user data.
  • the operation principle of the error correction procedure refers to exclusive OR (XOR) logical operations and can be divided into an encoding operation or a decoding operation according to the functions. Therefore, in FIG. 1, different operation paths will be represented by different chain lines, and it should be understood that the decoding operation provides functions such as error detection and correction, or data recovery.
  • the RAID ECC engine 123 preferably also has its own memory, such as the RAM 1231.
  • the RAID ECC engine 123 compresses the parity code and stores the compressed parity code in the RAM 1231. After all K parity codes of the K×N pages of the user data are compressed and stored in the RAM 1231, the RAID ECC engine 123 writes the compressed K parity codes to the non-volatile memory 110, wherein K and N are both positive integers greater than one.
  • the RAID ECC engine 123 may also write the compressed K parity codes into the data buffer 122, and then write the compressed K parity codes into the non-volatile memory 110 through the data buffer 122.
  • the invention does not limit the specific implementation manner in which the RAID ECC engine 123 writes the compressed K parity codes to the non-volatile memory 110, and those of ordinary skill in the art should be able to make relevant designs based on actual needs or applications.
  • the user data of the K×N pages refers to the user data of the K page groups, or it is simply referred to as the user data of a super page group in the embodiment.
  • the management unit of the user data is a sector instead of a page, for example, a page may include a plurality of sectors
  • the RAID ECC engine 123 also compresses the parity code and stores the compressed parity code in the RAM 1231 after completing the encoding operation on the user data of each sector group to generate the corresponding parity code.
  • After the K parity codes of the user data of the K sector groups are compressed and stored in the RAM 1231, the RAID ECC engine 123 writes the compressed K parity codes to the non-volatile memory 110. Since the data management methods of the sector and page are similar, only the example of the page will be described below, but the invention is not limited thereto.
  • when the controller 120 needs to read the user data from the non-volatile memory 110, the controller 120 reads the user data of the page according to the preset read parameters and performs the error correction by using other error correction codes (e.g., a low density parity check (LDPC) code) on the read user data of the page.
  • When the controller 120 reads the user data of a certain page of the super page group (e.g., the user data of page 1) but the LDPC code cannot correct the error, the controller 120 may read the user data of page 0 and page 2 to page N-1 and the compressed parity codes corresponding to the user data of page 0 to page N-1 from the non-volatile memory 110, and then send them to the RAID ECC engine 123 to perform the decoding operation, wherein the data obtained by the decoding operation is the correct user data of page 1. Since the encoding and decoding operations of the user data by the RAID ECC engine 123 are well known to those of ordinary skill in the art, no redundant detail is given herein.
  • the RAID ECC engine 123 of the embodiment reduces the amount of data by compressing the parity code and stores the compressed parity code in the RAM 1231 first, instead of immediately writing a parity code to the non-volatile memory 110 for each encoding operation. After a plurality of parity codes of the user data of an entire super page group are compressed and stored in the RAM 1231, the RAID ECC engine 123 writes the compressed plurality of parity codes to the non-volatile memory 110 at one time.
  • the embodiment can achieve the effect of reading and writing acceleration.
  • the RAID ECC engine 123 also uses the RAM 1231 for temporary storage of the operation values when the RAID ECC engine 123 performs the encoding operation or the decoding operation.
  • the size of the RAM 1231 can be, for example, 64 KB, but the RAID ECC engine 123 may need only 16 KB or 32 KB of memory space to operate according to actual needs, such as page size, parity code size, or numbers of paths used. Therefore, when the memory space of the RAM 1231 is not fully used, the data storage device 1 of the embodiment can also map the unused memory space of the RAM 1231 to the CPU memory 1241.
  • the unused memory space of the RAM 1231 is shared, and the unused memory space address of the RAM 1231 is mapped to the memory space address of the CPU memory 1241 to be virtualized as part of the CPU memory 1241.
  • the memory space of the CPU memory 1241 is substantially extended (expanded).
  • the CPU 124 can also utilize the unused memory space of the RAM 1231 to store temporary data.
  • the temporary data can be stored in the RAM 1231 of the RAID ECC engine 123 in addition to the CPU memory 1241 and does not need to be stored in the external memory outside the controller 120.
  • the frequency of accessing the external memory by the CPU 124 is reduced and the overall system performance is improved.
  • FIG. 2 is a schematic functional block diagram of the RAID ECC engine in the data storage device of FIG. 1.
  • the components in FIG. 2 that are the same as those in FIG. 1 are labeled with the same numbers, so no redundant detail is given herein.
  • the RAID ECC engine 123 mainly includes a state machine 220, a selector 230, a control register 240, and M+1 operational circuits 210_0 to 210_M, wherein M is a positive integer greater than 1.
  • Each of the operational circuits 210_0 to 210_M includes an XOR logical operation unit, a page buffer, and a first selector.
  • the operational circuit 210_0 includes an XOR logical operation unit 211_0, a page buffer 212_0, and a first selector 213_0, and so on
  • the operational circuit 210_M includes an XOR logical operation unit 211_M, a page buffer 212_M, and a first selector 213_M.
  • the state machine 220, the selector 230, the control register 240, the XOR logical operation units 211_0 to 211_M, and the first selectors 213_0 to 213_M may be implemented by a pure hardware circuit or by a hardware circuit with firmware or software, but the invention is not limited thereto.
  • the control register 240 is coupled to the CPU 124 and is used to receive at least one control signal and the temporary data from the CPU 124.
  • the embodiment will be first described with only one of the operational circuits, for example, the operational circuit 210_0.
  • the first selector 213_0 has two input ends 0 and 1 and an output end. The two input ends 0 and 1 of the first selector 213_0 are respectively coupled to an output end of the XOR logical operation unit 211_0 and the control register 240.
  • the output end of the first selector 213_0 is coupled to the page buffer 212_0.
  • the CPU 124 can control (set) a selection end (sel) of the first selector 213_0 by the control register 240 to select the input end 0 as the input source of the page buffer 212_0 to input data, so the input data at this time is the operation result (not shown) of the XOR logical operation unit 211_0, and the operation result can be stored in the page buffer 212_0.
  • the CPU 124 can also control (set) the selection end (sel) of the first selector 213_0 by the control register 240 to select the input end 1 as the input source of the page buffer 212_0 to input data. Therefore, the input data at this time is changed to the temporary data (not shown) from the CPU 124 provided by the control register 240, and the temporary data can also be stored in the page buffer 212_0. That is, the RAM 1231 in FIG. 1 can be composed, for example, by the page buffers 212_0 to 212_M in FIG. 2.
  • the CPU 124 can control the first selector 213_0 so that the input end 1 of the first selector 213_0 serves as an input source of the page buffer 212_0.
  • the unused page buffer 212_0 can be used to store temporary data from the CPU 124, thereby achieving another object of the present invention, which is to have the CPU 124 share the RAM 1231 of the RAID ECC engine 123.
  • the state machine 220 is coupled to the control register 240 and is used to control whether the RAID ECC engine 123 performs encoding or decoding operations or enters an idle or done state.
  • the state machine 220 can further assist the control register 240 to control (set) the selection ends of the first selectors 213_0 to 213_M to determine the input sources of the page buffers 212_0 to 212_M, and assist the control register 240 to control the selection end (sel) of the selector 230 to determine its output interface.
  • the selector 230 has M+1 output ends and an input end.
  • the M+1 output ends of the selector 230 are respectively coupled to the input ends of the operation circuits 210_0 to 210_M, and the input end of the selector 230 is coupled to the data buffer 122 or the non-volatile memory 110. Therefore, in the embodiment, the CPU 124 can also control (set) the selection end (sel) of the selector 230 by the control register 240 so that the user data received by the input end of the selector 230 can be outputted to the designated operational circuits 210_0 to 210_M to perform subsequent encoding or decoding operations.
  • the control register 240 is not only used to control (set) the selection ends of the first selectors 213_0 to 213_M, but also to control (set) the selection end (sel) of the selector 230, and is also used to control the operation of the state machine 220.
  • the first selectors 213_0 to 213_M of the operation circuits 210_0 to 210_M can preferably be respectively implemented by multiplexers (MUX), and the selector 230 is preferably implemented by a demultiplexer (DeMUX), but the invention is not limited thereto.
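  • For orientation, the datapath of FIG. 2 can be modeled in software roughly as follows: the selector 230 (DeMUX) routes each incoming page of user data to one operational circuit, whose XOR logical operation unit folds the page into that circuit's page buffer, while the first selector decides whether the page buffer is fed by the XOR result (input end 0) or by temporary data from the control register (input end 1). This is only a behavioral sketch of the description above, not the actual circuit; the number of circuits and all names are illustrative.
```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define PAGE_SIZE 4096u    /* illustrative page size */
#define NUM_OPS   8u       /* stand-in for the M+1 operational circuits */

/* Behavioral model of one operational circuit 210_x. */
struct op_circuit {
    uint8_t page_buffer[PAGE_SIZE]; /* page buffer 212_x (part of RAM 1231) */
    int     first_sel;              /* first selector 213_x: 0 = XOR result, */
                                    /*                       1 = data from CPU */
};

struct raid_ecc_engine {
    struct op_circuit op[NUM_OPS];
    unsigned          selector_230; /* which circuit the DeMUX currently feeds */
};

/* Selector position 0: the XOR unit 211_x folds an incoming page into the
 * page buffer (the buffer is assumed to start zeroed for a new page group). */
static void engine_feed_page(struct raid_ecc_engine *e,
                             const uint8_t page[PAGE_SIZE])
{
    struct op_circuit *c = &e->op[e->selector_230];
    if (c->first_sel == 0)
        for (size_t i = 0; i < PAGE_SIZE; i++)
            c->page_buffer[i] ^= page[i];
}

/* Selector position 1: the CPU parks temporary data in an unused page buffer,
 * which is how the RAM 1231 can be shared with the CPU memory 1241. */
static void engine_store_cpu_data(struct raid_ecc_engine *e, unsigned idx,
                                  const uint8_t data[PAGE_SIZE])
{
    e->op[idx].first_sel = 1;
    memcpy(e->op[idx].page_buffer, data, PAGE_SIZE);
}
```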
  • FIG. 3A to FIG. 3E are used to explain in detail the operation principle of the parity code processing method performed by the RAID ECC engine 123.
  • FIG. 3E is a schematic diagram of the written data stored in the non-volatile memory 110 of FIG. 1 under the embodiment of FIG. 3A to FIG. 3D. It should be noted that in the embodiment of FIG. 3A to FIG. 3D, it is first assumed that both K and N are 3 for the following description, but it is not intended to limit the invention.
  • As shown in FIG. 3A, when the host 2 is to write the user data to the non-volatile memory 110, the CPU 124 temporarily stores the user data from the host 2 in the data buffer 122, and then the user data is transmitted to the RAID ECC engine 123 via the data buffer 122. Then, the CPU 124 sets the control register 240 to trigger the state machine 220 to instruct the RAID ECC engine 123 to perform the encoding operation on the user data. Then, the control register 240 controls the selector 230 so that the user data received by the input end of the selector 230 from the data buffer 122 (e.g., the user data of page 0 to page 2) is outputted to the operational circuit 210_0 via the output end 0 of the selector 230.
  • the control register 240 sets the selection end (sel) of the first selector 213_0 of the operational circuit 210_0 to “0”, so that the XOR logical operation unit 211_0 of the operation circuit 210_0 can perform an encoding operation (XOR logical operation) on the user data outputted by the selector 230 and the encoded data temporarily stored in the page buffer 212_0. Then, the operation result (new encoded data) is outputted to the page buffer 212_0 to replace the originally stored encoded data (old encoded data).
  • the operation circuit 210_0 can sequentially receive the user data of page 0 to page 2 transmitted by the data buffer 122, and then perform the encoding operation on the user data of page 0 to page 2 by the XOR logical operation unit 211_0 to obtain the parity code P0 of a page size. As such, the encoding operation on the user data of the first page group is completed.
  • the RAID ECC engine 123 can write the parity code P0 via the page buffer 212_0 to the page buffer (e.g., the page buffer 212_M) used by the compression/decompression circuit 250 and start the compression function, so that the page buffer 212_M stores the compressed parity code P0.
  • the invention does not limit the specific implementation of the compression/decompression circuit 250 and the way it compresses the parity code P0.
  • the use of the operational circuit 210_0 to perform the encoding operation on the user data of the first page group and the use of the page buffer 212_M by the compression/decompression circuit 250 are merely an example, and are not used to limit the invention.
  • the CPU 124 can determine which operational circuit is to perform the encoding operation on the user data of the first page group according to actual needs or applications, and determine which page buffers are used by the compression/decompression circuit 250.
  • the RAID ECC engine 123 further includes compression and decompression functions.
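  • Since the text above leaves the compression method of circuit 250 open, the following stand-in uses a trivial zero-run-length scheme purely to make the compress-then-store step concrete; the format and the function name are invented for the sketch and do not reflect the actual circuit. The output buffer must be at least twice the input size, since the scheme can expand incompressible data (a real design would presumably fall back to storing such a parity code uncompressed).
```c
#include <stddef.h>
#include <stdint.h>

/* Toy compressor standing in for circuit 250: zero runs become the pair
 * (0x00, run_length), any other byte becomes (0x01, literal).  Returns the
 * number of bytes written to out. */
static size_t rle_compress(const uint8_t *in, size_t len, uint8_t *out)
{
    size_t o = 0;
    for (size_t i = 0; i < len; ) {
        if (in[i] == 0) {
            uint8_t run = 0;
            while (i < len && in[i] == 0 && run < 255) { i++; run++; }
            out[o++] = 0x00;
            out[o++] = run;
        } else {
            out[o++] = 0x01;
            out[o++] = in[i++];
        }
    }
    return o;
}
```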
  • the CPU 124 can switch to output the user data received by the input end of the selector 230 to other unused operational circuits, such as the operational circuit 210_1, thereby performing the encoding operation on the user data of the second page group.
  • the control register 240 controls the selector 230 so that the user data of page 3 to page 5 received by the input end of the selector 230 from the data buffer 122 is outputted to the operational circuit 210_1 via the output end 1 of the selector 230.
  • the control register 240 sets the selection end (sel) of the first selector 213_1 of the operational circuit 210_1 to “0”, so that the operational circuit 210_1 can sequentially receive the user data of page 3 to page 5 transmitted by the data buffer 122, and then perform the encoding operation on the user data of page 3 to page 5 by the XOR logical operation unit 211_1 to obtain the parity code P1 of a page size. As such, the encoding operation on the user data of the second page group is completed.
  • the RAID ECC engine 123 can further write the parity code P1 via the page buffer 212_1 to the page buffer 212_M used by the compression/decompression circuit 250 and start the compression function, so that the page buffer 212_M also stores the compressed parity code P1.
  • the CPU 124 can again switch to output the user data received by the input end of the selector 230 to other unused operational circuits, such as the operational circuit 210_2, thereby performing the encoding operation on the user data of the third page group.
  • the control register 240 controls the selector 230 so that the user data of page 6 to page 8 received by the input end of the selector 230 from the data buffer 122 is outputted to the operational circuit 210_2 via the output end 2 of the selector 230.
  • the control register 240 sets the selection end (sel) of the first selector 213_2 of the operational circuit 210_2 to “0”, so that the operational circuit 210_2 can sequentially receive the user data of page 6 to page 8 transmitted by the data buffer 122, and then perform the encoding operation on the user data of page 6 to page 8 by the XOR logical operation unit 211_2 to obtain the parity code P2 of a page size. As such, the encoding operation on the user data of the third page group is completed.
  • the RAID ECC engine 123 can further write the parity code P2 via the page buffer 212_2 to the page buffer 212_M used by the compression/decompression circuit 250 and start the compression function, so that the page buffer 212_M also stores the compressed parity code P2.
  • with the super page group as the configuration unit, the compressed parity codes only need to share the same page. Therefore, after the parity codes P0 to P2 of the user data of the entire super page group have been compressed and stored in the page buffer 212_M, the RAID ECC engine 123 can write the compressed parity codes P0 to P2 to the non-volatile memory 110 via the page buffer 212_M at one time. As such, the RAID ECC engine 123 does not need to switch the state in the intermediate process to individually output each parity code P0 to P2.
  • the RAID ECC engine 123 can also write the compressed parity codes P0 to P2 into the data buffer 122 first, and then write the compressed parity codes P0 to P2 to the non-volatile memory 110 via the data buffer 122, but the invention is not limited thereto.
  • the non-volatile memory 110 stores the written data by using a blank page of one of the blocks B0 to B3, and each of the blocks B0 to B3 is placed in a channel, for example, block B0 is placed in channel CH#0, block B1 is placed in channel CH#1, and so on, and block B3 is placed in channel CH#3.
  • the data may be sequentially written to the blank pages of the blocks B0 to B3, or may be written in parallel to the blank pages of the blocks B0 to B3, and the invention is not limited thereto.
  • when the controller 120 is to write the user data of page 0 to page 8 to the non-volatile memory 110, the user data of page 0 can be stored in the first blank page of the block B0, the user data of page 1 can be stored in the first blank page of the block B1, and so on, the user data of page 7 can be stored in the second blank page of the block B3, and the user data of page 8 can be stored in the third blank page of the block B0.
  • the compressed parity codes P0 to P2 are then written to the third blank page of the block B1 via the RAID ECC engine 123, as shown in FIG. 3E.
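  • The placement of FIG. 3E can be summarized by a small index calculation: with four channels, user page i goes to channel i mod 4 at page offset i div 4 of the open block on that channel, and the compressed parity page simply takes the next position in the rotation (the third blank page of block B1 once the nine user pages of this example have been written). The sketch below only restates that arithmetic and assumes this simple round-robin policy.
```c
/* Round-robin placement across 4 channels (blocks B0..B3 of FIG. 3E). */
#define NUM_CHANNELS 4u

struct nvm_location {
    unsigned channel;      /* CH#0 .. CH#3, i.e. block B0 .. B3 */
    unsigned page_offset;  /* 0 = first blank page of that block */
};

static struct nvm_location place_user_page(unsigned page_index)
{
    struct nvm_location loc = {
        .channel     = page_index % NUM_CHANNELS,
        .page_offset = page_index / NUM_CHANNELS,
    };
    return loc;   /* page 8 -> block B0, third blank page, as in FIG. 3E */
}

/* The compressed parity codes P0-P2 occupy the next write slot: with 9 user
 * pages they land in channel 1 (block B1), page offset 2. */
static struct nvm_location place_parity_page(unsigned num_user_pages)
{
    return place_user_page(num_user_pages);
}
```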
  • the controller 120 can directly write the user data to the blank page without waiting for the generation of the parity code P0 or P1, and thereby the user data can be written to the non-volatile memory 110 at the fastest speed.
  • the invention does not limit the compressed parity codes P0 to P2 to being written only to the third blank page of the block B1.
  • the RAID ECC engine 123 (or the controller 120) can decide which blank page of which block the compressed parity codes P0 to P2 are written to according to actual needs or applications.
  • the blocks B0 to B3 can be further divided into two areas (not shown), that is, a data area and a parity code area. Therefore, the controller 120 may first write the user data to the blank page of the data area, and the RAID ECC engine 123 then writes the compressed parity code to the blank page of the parity code area. Alternatively, after the controller 120 first fills the data area with the user data, the RAID ECC engine 123 writes the compressed parity code to the blank page of the parity code area, but the invention is not limited thereto.
  • FIG. 3F is a schematic timing diagram of the operational circuits in FIG. 3A to FIG. 3D performing the parity code processing method.
  • the CPU 124 can switch to output the user data received by the input end of the selector 230 to other unused operational circuits to perform the encoding operation on the user data of the next page group. Therefore, the compression processing of the previous parity code and the encoding operation on the user data of the next page group can be processed in parallel; as such, the overall system performance is not degraded.
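  • The overlap of FIG. 3F can be sketched as a small pipelined loop in which the compression of the parity of group g is started as a non-blocking operation and the encoding of group g+1 proceeds while it runs. The start_compress()/wait_compress_done() hooks, the double buffering and the xor_parity_of_group() helper (standing for the plain byte-wise XOR of the N pages of a group) are assumptions made for the sketch, not part of the patent.
```c
#include <stddef.h>
#include <stdint.h>

#define PAGE_SIZE 4096u

/* Hypothetical hooks for the sketch: */
void xor_parity_of_group(const uint8_t pages[][PAGE_SIZE], size_t n,
                         uint8_t parity[PAGE_SIZE]);    /* XOR of N pages      */
void start_compress(const uint8_t *parity, size_t len); /* kicks circuit 250,  */
                                                        /* returns immediately */
void wait_compress_done(void);

void encode_groups_pipelined(const uint8_t pages[][PAGE_SIZE], int k, int n)
{
    uint8_t parity[2][PAGE_SIZE];                 /* double buffer             */
    for (int g = 0; g < k; g++) {
        uint8_t *cur = parity[g & 1];
        /* encoding of group g overlaps with the compression of group g-1 */
        xor_parity_of_group(&pages[g * n], (size_t)n, cur);
        if (g > 0)
            wait_compress_done();                 /* circuit 250 is single     */
        start_compress(cur, PAGE_SIZE);
    }
    wait_compress_done();                         /* last parity code          */
}
```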
  • FIG. 4 is a schematic flow diagram of a parity code processing method according to an embodiment of the invention. It should be noted that the parity code processing method of FIG. 4 can be performed by the controller 120 of the data storage device 1, especially by the CPU 124 and the RAID ECC engine 123 of the controller 120, but the invention does not limit the parity code processing method of FIG. 4 to be executed only by the controller 120 in FIG. 1.
  • As shown in FIG. 4, in step S410, the CPU 124 issues at least one control signal to the RAID ECC engine 123.
  • In step S420, the CPU 124 transmits the user data of a plurality of pages, for example, the user data of N pages, to the RAID ECC engine 123.
  • the CPU 124 temporarily stores the user data from the host 2 in the data buffer 122, and transmits the user data to the RAID ECC engine 123 via the data buffer 122.
  • In step S430, the RAID ECC engine 123 performs the encoding operation on the user data of the N pages to generate a corresponding parity code according to the control signal.
  • In step S440, the RAID ECC engine 123 compresses the parity code and stores the compressed parity code in the RAM 1231.
  • In step S450, the CPU 124 determines whether the user data of a super page group has been transmitted to the RAID ECC engine 123. If yes, step S460 is performed. If no, step S420 is performed.
  • the user data of a super page group refers to the user data of K×N pages.
  • the CPU 124 can determine whether the K×N pages of user data have been transmitted to the RAID ECC engine 123 according to the count of pages.
  • the RAID ECC engine 123 may generate K parity codes according to the user data of the K×N pages, compress the K parity codes to form the compressed K parity codes of the user data of the super page group according to the control signal, and store the compressed K parity codes of the user data of the super page group in the RAM 1231.
  • step S450 may be changed to: the RAID ECC engine 123 determines whether the K parity codes of the K×N pages of the user data are all compressed and stored in the RAM 1231. If yes, step S460 is performed. If no, step S420 is performed. In summary, this does not affect the implementation of the invention.
  • the CPU 124 transmits the user data of the N sectors to the RAID ECC engine 123 in step S420, and the RAID ECC engine 123 performs the encoding operation on the N sectors of the user data according to the control signal to generate a corresponding parity code in step S430, and so on, and the CPU 124 determines whether the user data of a super sector group has been transferred to the RAID ECC engine 123 in step S450. Therefore, in other embodiments, the CPU 124 can also determine whether the user data of the K×N sectors have been transmitted to the RAID ECC engine 123 according to the count of sectors.
  • the RAID ECC engine 123 may generate K parity codes according to the user data of the K×N sectors, compress the K parity codes to form the compressed K parity codes of the user data of the super sector group according to the control signal, and store the compressed K parity codes of the user data of the super sector group in the RAM 1231. In general, this does not affect the implementation of the invention.
  • In step S460, the CPU 124 controls the RAID ECC engine 123 to write the compressed K parity codes of the user data of the super page group to the non-volatile memory 110.
  • the CPU 124 may first write the user data of the super page group to the non-volatile memory 110 via the data buffer 122.
  • the CPU 124 can control the RAID ECC engine 123 to write the compressed K parity codes to the non-volatile memory 110 via the page buffer 212_M (and the data buffer 122).
  • the CPU 124 may temporarily store the user data of the super page group in the data buffer 122. After the K parity codes of the user data of the super page group are compressed and stored in the RAM 1231, the CPU 124 can control the RAID ECC engine 123 to temporarily store the compressed K parity codes in the data buffer 122. Then, the CPU 124 writes the user data of the super page group and the compressed K parity codes of the user data of the super page group to the non-volatile memory 110 via the data buffer 122 at one time, but the invention is not limited thereto.
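  • Read as firmware-level pseudocode in C, the flow of FIG. 4 (steps S410 to S460) looks roughly as follows; the raid_* calls are hypothetical driver hooks invented for the sketch, not a real API.
```c
#include <stdint.h>

#define PAGE_SIZE 4096u

/* Hypothetical driver hooks: */
void raid_issue_control_signal(void);                  /* S410                */
void raid_send_page(const uint8_t page[PAGE_SIZE]);    /* S420/S430: the      */
                                                       /* engine XORs pages   */
void raid_compress_and_store_parity(void);             /* S440: into RAM 1231 */
void raid_flush_compressed_parities(void);             /* S460: write to NVM  */

void parity_code_processing(const uint8_t pages[][PAGE_SIZE], int k, int n)
{
    raid_issue_control_signal();                       /* S410 */
    int pages_sent = 0;
    while (pages_sent < k * n) {                       /* S450: super page    */
        for (int i = 0; i < n; i++)                    /*        group done?  */
            raid_send_page(pages[pages_sent + i]);     /* S420/S430           */
        raid_compress_and_store_parity();              /* S440                */
        pages_sent += n;
    }
    raid_flush_compressed_parities();                  /* S460                */
}
```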
  • the data storage device and the parity code processing method provided by the embodiments of the invention may reduce the amount of data by compressing the parity code, and store the compressed parity code in the memory of the RAID ECC engine first instead of immediately writing the parity code generated by each encoding operation to the non-volatile memory of the data storage device.
  • After the plurality of parity codes of the user data of an entire super page group are compressed and stored in the memory of the RAID ECC engine, the RAID ECC engine writes the compressed plurality of parity codes to the non-volatile memory at one time.
  • the frequency of switching the state of the RAID ECC engine and the number and time of writing data to the non-volatile memory are reduced, thereby relatively increasing the service life of the non-volatile memory.
  • the compression operation of the previous parity code and the encoding operation of the user data of the next page group may be processed in parallel to prevent the overall system performance from degrading.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Security & Cryptography (AREA)
  • Microelectronics & Electronic Packaging (AREA)
  • Computer Hardware Design (AREA)
  • Quality & Reliability (AREA)
  • Techniques For Improving Reliability Of Storages (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Detection And Correction Of Errors (AREA)

Abstract

A data storage device and a parity code processing method thereof are provided. The data storage device includes a non-volatile memory and a controller. The controller includes a RAID ECC engine. The RAID ECC engine has a memory, wherein after completing an encoding operation on each N pages of user data to generate a corresponding parity code, the RAID ECC engine compresses the parity code and stores the compressed parity code in the memory, wherein after all K parity codes of the K×N pages of the user data are compressed and stored in the memory, the RAID ECC engine writes the compressed K parity codes to the non-volatile memory. As such, the frequency of switching the state of the RAID ECC engine is reduced, and the number and time of writing data to the non-volatile memory are reduced.

Description

    FIELD OF THE INVENTION
  • The present invention relates to a data storage device, and more particularly to a data storage device and a parity code processing method thereof.
  • BACKGROUND OF THE INVENTION
  • Generally, a data storage device is composed of a controller and a non-volatile memory such as a flash memory, and the controller may include a redundant array of independent disks (RAID) error correcting code (ECC) engine. The RAID ECC engine is mainly used to perform error correction procedures. One of the operating principles is that when a host wants to write user data to the non-volatile memory and the written user data uses the page as its management unit, the controller simultaneously sends the user data to the RAID ECC engine for an encoding operation until the user data of a page group, for example the user data of page 0 to page N-1, has been encoded. After that, the RAID ECC engine generates a parity code corresponding to the user data of page 0 to page N-1, and then writes the parity code into the non-volatile memory; that is, the parity code can be used as the data of page N, wherein N is a positive integer greater than 1. Therefore, after the user data of every N pages is encoded, the RAID ECC engine has to switch its state to output the encoded parity code to the non-volatile memory for each encoding operation. This makes the parity codes written to the non-volatile memory discontinuous and also lowers the write efficiency.
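  • As a point of reference for the discussion that follows, the parity code of a page group is simply the byte-wise XOR of the user data of page 0 to page N-1, as in the minimal sketch below; the page size and the function name are illustrative only.
```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define PAGE_SIZE 4096u   /* illustrative page size; real devices vary */

/* Byte-wise XOR of the N pages of one page group.  Any single page of the
 * group can later be rebuilt by XOR-ing this parity with the other N-1
 * pages. */
void xor_parity_of_group(const uint8_t pages[][PAGE_SIZE], size_t n_pages,
                         uint8_t parity[PAGE_SIZE])
{
    memset(parity, 0, PAGE_SIZE);
    for (size_t p = 0; p < n_pages; p++)
        for (size_t i = 0; i < PAGE_SIZE; i++)
            parity[i] ^= pages[p][i];
}
```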
  • SUMMARY OF THE INVENTION
  • In view of the above, an object of the invention is to provide a data storage device and a parity code processing method thereof. To achieve the above object, an embodiment of the invention provides a data storage device, including a non-volatile memory and a controller electrically coupled to the non-volatile memory. The controller includes an access interface, a central processing unit (CPU) and a RAID ECC engine. The RAID ECC engine has a memory, wherein after completing the encoding operation on each N pages of the user data to generate a corresponding parity code, the RAID ECC engine compresses the parity code and stores the compressed parity code in the memory of the RAID ECC engine, wherein after all K parity codes of the K×N pages of the user data are compressed and stored in the memory, the RAID ECC engine writes the compressed K parity codes to the non-volatile memory, wherein K and N are both positive integers greater than one.
  • In addition, an embodiment of the invention further provides a parity code processing method, which is implemented in the controller of the foregoing embodiment, and includes the following steps. First, configuring the CPU to issue at least one control signal to the RAID ECC engine and transmitting the user data of a plurality of pages to the RAID ECC engine. Second, configuring the RAID ECC engine to perform an encoding operation on the user data of N pages based on the control signal to generate a corresponding parity code, compress the parity code, and store the compressed parity code in a memory of the RAID ECC engine. Then, configuring the CPU to determine whether the user data of a super page group has been transmitted to the RAID ECC engine, wherein the user data of the super page group is referred to as the user data of K×N pages. When it is determined that the user data of the super page group has been transmitted to the RAID ECC engine, configuring the CPU to control the RAID ECC engine to write the compressed K parity codes of the user data of the super page group to the non-volatile memory.
  • In order to further understand the features and technical contents of the present invention, please refer to the following detailed description and the accompanying drawings of the invention. However, the description and the drawings are merely illustrative of the invention and are not intended to limit the scope of the invention.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The present invention will become more readily apparent to those ordinarily skilled in the art after reviewing the following detailed description and accompanying drawings, in which:
  • FIG. 1 is a schematic functional block diagram of a data storage device according to an embodiment of the invention;
  • FIG. 2 is a schematic functional block diagram of a RAID ECC engine in the data storage device of FIG. 1;
  • FIG. 3A to FIG. 3D are schematic diagrams of the RAID ECC engine performing a parity code processing method according to FIG. 2;
  • FIG. 3E is a schematic diagram of the written data stored in the non-volatile memory in FIG. 1 under the embodiments of FIG. 3A to FIG. 3D;
  • FIG. 3F is a schematic timing diagram of the operational circuits in FIG. 3A to FIG. 3D performing the parity code processing method; and
  • FIG. 4 is a schematic flow diagram of a parity code processing method according to an embodiment of the invention.
  • DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
  • In the following, the invention will be described in detail by illustration of various embodiments of the invention. However, the concept of the invention may be embodied in many different forms and should not be construed as being limited to the illustrative embodiments set forth herein. In addition, the same reference numerals used in the drawings may represent similar elements.
  • First, please refer to FIG. 1. FIG. 1 is a schematic functional block diagram of a data storage device according to an embodiment of the invention. The data storage device 1 includes a non-volatile memory 110 and a controller 120. In the embodiment, the non-volatile memory 110 includes a plurality of blocks (not shown). Each block further includes a plurality of pages, and the page is the smallest unit of program. That is, the page is the smallest unit for data writing or reading, and a word line can control more than one page. In addition, the block is the smallest unit for data erasing. Therefore, the blocks can be classified into spare blocks, active blocks and data blocks according to their functions. The spare blocks are blocks that can be selected and have data written to them, the active blocks are blocks that have already been selected and are being written with data, and the data blocks are blocks in which the data writing is finished and which can no longer be written. It should be noted that the invention does not limit the specific implementation manner of the blocks and the pages, and those skilled in the art should be able to perform related designs according to actual needs or applications. In addition, in the embodiment, the non-volatile memory 110 is preferably implemented by a flash memory, but the invention is not limited thereto.
  • The controller 120 is electrically coupled to the non-volatile memory 110 and is used to control data access in the non-volatile memory 110. It must be understood that the data storage device 1 is usually used together with a host 2 to write user data into the non-volatile memory 110 or read the user data from the non-volatile memory 110 according to the write/read command issued by the host 2. Therefore, in the embodiment, the controller 120 preferably is a flash memory controller and may mainly include a CPU 124, an access interface 121 and a RAID ECC engine 123. In addition, the CPU 124 preferably has its own memory, such as a CPU memory 1241 for storing temporary data. In order to facilitate the following description, in the embodiment, the CPU 124 and the CPU memory 1241 are drawn separately. However, it should be understood by those of ordinary skill in the art that the CPU memory 1241 is actually encompassed within the CPU 124. Incidentally, since the CPU memory 1241 usually has a fixed size, the CPU 124 will store the temporary data in an external memory outside the controller 120 whenever the temporary data to be stored is larger than the available space of the CPU memory 1241. However, the speed at which the CPU 124 accesses the external memory is much slower than the speed at which the CPU accesses the internal memory, so the overall system performance is likely to decline.
  • The access interface 121 is coupled to the host 2 and the non-volatile memory 110. The CPU 124 is used to interpret write/read commands issued by the host 2 to generate operation commands and control the access interface 121 to access the user data of the non-volatile memory 110 based on the operation commands. In addition, in the embodiment, the controller 120 can further include a data buffer 122 coupled to the access interface 121, the RAID ECC engine 123, and the non-volatile memory 110, and is used to temporarily store the user data from the host 2 or the non-volatile memory 110. However, in addition to the user data, the data buffer 122 can also preferably be used to temporarily store in-system programming (ISP) or logical-to-physical address mapping table required for the operation of the CPU 124, but the invention is not limited thereto. In addition, in the embodiment, the data buffer 122 is preferably implemented by a static random access memory (SRAM) or a random access memory (RAM) capable of fast access, but the invention is not limited thereto.
  • The RAID ECC engine 123 is coupled to the data buffer 122 and the non-volatile memory 110 and is used to perform an error correction procedure on the user data. In this embodiment, the operation principle of the error correction procedure refers to exclusive OR (XOR) logical operations and can be divided into an encoding operation or a decoding operation according to the functions. Therefore, in FIG. 1, different operation paths will be represented by different chain lines, and it should be understood that the decoding operation provides functions such as error detection and correction, or data recovery. In addition, in the embodiment, the RAID ECC engine 123 preferably also has its own memory, such as the RAM 1231. Specifically, after completing the encoding operation on each N pages of the user data (or the so-called user data of each page group) to generate a corresponding parity code, the RAID ECC engine 123 compresses the parity code and stores the compressed parity code in the RAM 1231. After all K parity codes of the K×N pages of the user data are compressed and stored in the RAM 1231, the RAID ECC engine 123 writes the compressed K parity codes to the non-volatile memory 110, wherein K and N are both positive integers greater than one. It should be noted that, in the embodiment, the RAID ECC engine 123 may also write the compressed K parity codes into the data buffer 122, and then write the compressed K parity codes into the non-volatile memory 110 through the data buffer 122. In summary, the invention does not limit the specific implementation manner in which the RAID ECC engine 123 writes the compressed K parity codes to the non-volatile memory 110, and those of ordinary skill in the art should be able to make relevant designs based on actual needs or applications.
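  • The following sketch restates the buffering policy just described: each page group's parity is compressed and parked in a stand-in for the RAM 1231, and only after all K groups of a super page group have been encoded are the K compressed parity codes written out in a single operation. The helpers compress_parity() and nvm_write_page() are hypothetical, xor_parity_of_group() is the simple XOR helper sketched in the Background section, and the K compressed codes are assumed to fit in one page, as in the embodiment.
```c
#include <stddef.h>
#include <stdint.h>

#define PAGE_SIZE         4096u
#define N_PAGES_PER_GROUP 3      /* N, as in the example of FIG. 3A-3E */
#define K_GROUPS          3      /* K, as in the example of FIG. 3A-3E */

/* Hypothetical helpers: */
size_t compress_parity(const uint8_t *in, size_t len, uint8_t *out);
void   nvm_write_page(unsigned block, unsigned page, const uint8_t *buf, size_t len);
void   xor_parity_of_group(const uint8_t pages[][PAGE_SIZE], size_t n,
                           uint8_t parity[PAGE_SIZE]);

/* Encode K*N pages, keep the K compressed parity codes in the engine's RAM,
 * and write them to the non-volatile memory at one time. */
void encode_super_page_group(const uint8_t pages[K_GROUPS * N_PAGES_PER_GROUP][PAGE_SIZE],
                             unsigned parity_block, unsigned parity_page)
{
    uint8_t engine_ram[PAGE_SIZE];    /* stand-in for RAM 1231 / buffer 212_M */
    size_t  used = 0;

    for (int g = 0; g < K_GROUPS; g++) {
        uint8_t parity[PAGE_SIZE];
        xor_parity_of_group(&pages[g * N_PAGES_PER_GROUP], N_PAGES_PER_GROUP, parity);
        /* park the compressed parity instead of writing it out immediately;
         * the compressed codes are assumed to fit in one page altogether */
        used += compress_parity(parity, PAGE_SIZE, engine_ram + used);
    }
    nvm_write_page(parity_block, parity_page, engine_ram, used);
}
```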
• According to the above teachings, those of ordinary skill in the art should understand that the user data of the K×N pages refers to the user data of K page groups, or is simply referred to as the user data of a super page group in the embodiment. In addition, if the management unit of the user data is a sector instead of a page (for example, a page may include a plurality of sectors), the RAID ECC engine 123 likewise compresses the parity code and stores the compressed parity code in the RAM 1231 after completing the encoding operation on the user data of each sector group to generate the corresponding parity code. After the K parity codes of the user data of the K sector groups are compressed and stored in the RAM 1231, the RAID ECC engine 123 writes the compressed K parity codes to the non-volatile memory 110. Since the data management methods for sectors and pages are similar, only the page example is described below, but the invention is not limited thereto.
• In contrast, when the controller 120 needs to read the user data from the non-volatile memory 110, the controller 120 reads the user data of a page according to preset read parameters and performs error correction on the read user data of the page by using other error correction codes (e.g., a low density parity check (LDPC) code). When the controller 120 reads the user data of a certain page of the super page group (e.g., the user data of page 1) but the LDPC code cannot correct the error, the controller 120 may read the user data of page 0 and page 2 to page N-1 and the compressed parity code corresponding to the user data of page 0 to page N-1 from the non-volatile memory 110, and then send them to the RAID ECC engine 123 to perform the decoding operation, wherein the data obtained by the decoding operation is the correct user data of page 1. Since the encoding and decoding operations performed on the user data by the RAID ECC engine 123 are well known to those of ordinary skill in the art, no redundant detail is given herein.
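As a hedged illustration of this recovery path (a Python model under the same assumptions as the earlier sketch, with zlib again standing in for the engine's compressor), the lost page can be reproduced by XORing the surviving pages of the group with the decompressed parity code:

```python
import zlib

PAGE_SIZE = 4096  # assumed page size
N = 3             # pages per page group (assumed)

def xor_blocks(blocks):
    """XOR a list of equally sized byte blocks together."""
    out = bytearray(PAGE_SIZE)
    for blk in blocks:
        for i, byte in enumerate(blk):
            out[i] ^= byte
    return bytes(out)

# build one page group and its compressed parity code, as the write path would have
group = [bytes([p + 1]) * PAGE_SIZE for p in range(N)]
compressed_parity = zlib.compress(xor_blocks(group))

# assume page 1 fails LDPC correction: recover it from the other pages plus the parity
surviving = [group[0], group[2]]
recovered_page1 = xor_blocks(surviving + [zlib.decompress(compressed_parity)])
assert recovered_page1 == group[1]
```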
• In summary, compared with the prior art, the RAID ECC engine 123 of the embodiment reduces the amount of data by compressing each parity code and first stores the compressed parity code in the RAM 1231, instead of immediately writing a parity code to the non-volatile memory 110 after each encoding operation. After the plurality of parity codes of the user data of an entire super page group are compressed and stored in the RAM 1231, the RAID ECC engine 123 writes the compressed plurality of parity codes to the non-volatile memory 110 at one time. Thereby, the frequency of switching the state of the RAID ECC engine 123 is reduced, and the number and duration of parity code writes to the non-volatile memory 110 are reduced, so as to relatively increase the service life of the non-volatile memory 110. In addition, since the compression of the parity codes reduces the amount of data actually written to the non-volatile memory dies, the embodiment can also achieve the effect of read and write acceleration.
• On the other hand, in addition to using the RAM 1231 to store the compressed parity codes, the RAID ECC engine 123 also uses the RAM 1231 to temporarily store operation values when it performs the encoding operation or the decoding operation. The size of the RAM 1231 can be, for example, 64 KB, but the RAID ECC engine 123 may need only 16 KB or 32 KB of memory space to operate, depending on actual needs such as the page size, the parity code size, or the number of paths used. Therefore, when the memory space of the RAM 1231 is not fully used, the data storage device 1 of the embodiment can also map the unused memory space of the RAM 1231 to the CPU memory 1241. That is, the unused memory space of the RAM 1231 is shared, and the unused memory space addresses of the RAM 1231 are mapped to the memory space addresses of the CPU memory 1241 so as to be virtualized as part of the CPU memory 1241. As shown by the slanted line block in FIG. 1, the memory space of the CPU memory 1241 is substantially extended (expanded). As such, the CPU 124 can also utilize the unused memory space of the RAM 1231 to store temporary data. In other words, the temporary data can be stored in the RAM 1231 of the RAID ECC engine 123 in addition to the CPU memory 1241 and need not be stored in the external memory outside the controller 120. As such, the frequency of accessing the external memory by the CPU 124 is reduced and the overall system performance is improved.
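The address-window sharing described above can be pictured with the following Python sketch, a behavioural model only; the sizes, the addresses, and the SharedScratchMemory class are illustrative assumptions rather than the controller's actual interface. CPU writes that fall above the CPU memory are redirected into the unused region of the engine RAM.

```python
class SharedScratchMemory:
    def __init__(self, cpu_mem_size, engine_ram_size, engine_ram_used):
        self.cpu_mem = bytearray(cpu_mem_size)
        self.engine_ram = bytearray(engine_ram_size)
        self.window_base = cpu_mem_size              # mapped just above the CPU memory
        self.window_off = engine_ram_used            # first unused byte of the engine RAM
        self.window_len = engine_ram_size - engine_ram_used

    def write(self, addr, data):
        """Route a CPU write either to the CPU memory or to the shared engine-RAM window."""
        if addr + len(data) <= len(self.cpu_mem):
            self.cpu_mem[addr:addr + len(data)] = data
        elif addr >= self.window_base and addr + len(data) <= self.window_base + self.window_len:
            off = self.window_off + (addr - self.window_base)
            self.engine_ram[off:off + len(data)] = data
        else:
            raise MemoryError("address falls outside CPU memory and the shared window")

# a 64 KB engine RAM of which only 32 KB is needed by the engine: the unused upper half
# becomes an extension of a 16 KB CPU memory (all sizes assumed for illustration)
mem = SharedScratchMemory(cpu_mem_size=16 * 1024, engine_ram_size=64 * 1024,
                          engine_ram_used=32 * 1024)
mem.write(16 * 1024, b"temporary data spilled into the engine RAM")
```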
• Next, the implementation of the RAID ECC engine 123 of the embodiment is further described below. Please also refer to FIG. 2. FIG. 2 is a schematic functional block diagram of the RAID ECC engine in the data storage device of FIG. 1. Components in FIG. 2 that are the same as those in FIG. 1 are labeled with the same numbers, so no redundant detail is given herein. In the embodiment, the RAID ECC engine 123 mainly includes a state machine 220, a selector 230, a control register 240, and M+1 operational circuits 210_0 to 210_M, wherein M is a positive integer greater than 1. Each of the operational circuits 210_0 to 210_M includes an XOR logical operation unit, a page buffer, and a first selector. For example, the operational circuit 210_0 includes an XOR logical operation unit 211_0, a page buffer 212_0, and a first selector 213_0, and so on, and the operational circuit 210_M includes an XOR logical operation unit 211_M, a page buffer 212_M, and a first selector 213_M. It can be understood that the state machine 220, the selector 230, the control register 240, the XOR logical operation units 211_0 to 211_M, and the first selectors 213_0 to 213_M may be implemented by pure hardware circuits or by hardware circuits with firmware or software, but the invention is not limited thereto.
• In the embodiment, the control register 240 is coupled to the CPU 124 and used to receive at least one control signal and the temporary data from the CPU 124. In addition, to facilitate the following description, the embodiment is first described with only one of the operational circuits, for example, the operational circuit 210_0; those skilled in the art should be able to infer the operation of the other operational circuits 210_1 to 210_M. As shown in FIG. 2, the first selector 213_0 has two input ends 0 and 1 and an output end. The two input ends 0 and 1 of the first selector 213_0 are respectively coupled to an output end of the XOR logical operation unit 211_0 and the control register 240. The output end of the first selector 213_0 is coupled to the page buffer 212_0. In the embodiment, the CPU 124 can control (set) a selection end (sel) of the first selector 213_0 by the control register 240 to select the input end 0 as the input source of the page buffer 212_0, so that the input data at this time is the operation result (not shown) of the XOR logical operation unit 211_0, and the operation result can be stored in the page buffer 212_0.
• In contrast, the CPU 124 can also control (set) the selection end (sel) of the first selector 213_0 by the control register 240 to select the input end 1 as the input source of the page buffer 212_0. In this case, the input data is the temporary data (not shown) from the CPU 124 provided by the control register 240, and the temporary data can likewise be stored in the page buffer 212_0. That is, the RAM 1231 in FIG. 1 can be composed of, for example, the page buffers 212_0 to 212_M in FIG. 2. When the operational circuit 210_0 performs neither the encoding nor the decoding operation and is not used to store the compressed parity code (i.e., the page buffer 212_0 of the operational circuit 210_0 is unused), the CPU 124 can control the first selector 213_0 so that the input end 1 of the first selector 213_0 serves as the input source of the page buffer 212_0. As such, the unused page buffer 212_0 can be used to store temporary data from the CPU 124, thereby achieving another object of the present invention, which is to have the CPU 124 share the RAM 1231 of the RAID ECC engine 123.
• In addition, the state machine 220 is coupled to the control register 240 and is used to control whether the RAID ECC engine 123 performs the encoding or decoding operation or enters an idle or done state. The state machine 220 can further assist the control register 240 in controlling (setting) the selection ends of the first selectors 213_0 to 213_M to determine the input sources of the page buffers 212_0 to 212_M, and in controlling the selection end (sel) of the selector 230 to determine its output interface. In the embodiment, the selector 230 has M+1 output ends and an input end. The M+1 output ends of the selector 230 are respectively coupled to the input ends of the operational circuits 210_0 to 210_M, and the input end of the selector 230 is coupled to the data buffer 122 or the non-volatile memory 110. Therefore, in the embodiment, the CPU 124 can also control (set) the selection end (sel) of the selector 230 by the control register 240 so that the user data received by the input end of the selector 230 is outputted to the designated one of the operational circuits 210_0 to 210_M for subsequent encoding or decoding operations. In summary, the control register 240 is used not only to control (set) the selection ends of the first selectors 213_0 to 213_M, but also to control (set) the selection end (sel) of the selector 230 and the operation of the state machine 220. In practice, the first selectors 213_0 to 213_M of the operational circuits 210_0 to 210_M can preferably be implemented by multiplexers (MUX), and the selector 230 is preferably implemented by a demultiplexer (DeMUX), but the invention is not limited thereto.
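The MUX/DeMUX routing just described can be summarised in a short behavioural model (Python is assumed here purely for illustration; this is not RTL, and the class names are hypothetical): the second selector picks which operational circuit receives the incoming user data, and each circuit's first selector decides whether its page buffer accumulates the XOR result (sel = 0) or stores CPU temporary data delivered through the control register (sel = 1).

```python
PAGE_SIZE = 16   # tiny page size to keep the example readable (assumption)

class OperationalCircuit:
    def __init__(self):
        self.page_buffer = bytearray(PAGE_SIZE)
        self.sel = 0                        # first selector: 0 = XOR path, 1 = CPU data

    def feed(self, data):
        if self.sel == 0:                   # accumulate the XOR of incoming pages
            for i, byte in enumerate(data):
                self.page_buffer[i] ^= byte
        else:                               # store CPU temporary data directly
            self.page_buffer[:len(data)] = data

class RaidEccEngineModel:
    def __init__(self, num_circuits):
        self.circuits = [OperationalCircuit() for _ in range(num_circuits)]
        self.route = 0                      # second selector: which circuit gets the input

    def input_page(self, page):
        self.circuits[self.route].feed(page)

engine = RaidEccEngineModel(num_circuits=3)
engine.route = 0                            # encode one page group on circuit 0
for page in (b"A" * PAGE_SIZE, b"B" * PAGE_SIZE, b"C" * PAGE_SIZE):
    engine.input_page(page)
engine.circuits[1].sel = 1                  # circuit 1 is idle: lend its buffer to the CPU
engine.circuits[1].feed(b"cpu scratch data")
```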
• Next, please refer to FIG. 3A to FIG. 3E. FIG. 3A to FIG. 3D are used to explain in detail the operation principle of the parity code processing method performed by the RAID ECC engine 123. FIG. 3E is a schematic diagram of the written data stored in the non-volatile memory 110 of FIG. 1 under the embodiment of FIG. 3A to FIG. 3D. It should be noted that in the embodiment of FIG. 3A to FIG. 3D, it is assumed that both K and N are 3 for the following description, but this is not intended to limit the invention. As shown in FIG. 3A, when the host 2 is to write the user data to the non-volatile memory 110, the CPU 124 temporarily stores the user data from the host 2 in the data buffer 122, and the user data is then transmitted to the RAID ECC engine 123 via the data buffer 122. The CPU 124 then sets the control register 240 to trigger the state machine 220 to instruct the RAID ECC engine 123 to perform the encoding operation on the user data. The control register 240 then controls the selector 230 so that the user data received by the input end of the selector 230 from the data buffer 122 (e.g., the user data of page 0 to page 2) is outputted to the operational circuit 210_0 via the output end 0 of the selector 230. At the same time, the control register 240 sets the selection end (sel) of the first selector 213_0 of the operational circuit 210_0 to "0", so that the XOR logical operation unit 211_0 of the operational circuit 210_0 can perform the encoding operation (XOR logical operation) on the user data outputted by the selector 230 and the encoded data temporarily stored in the page buffer 212_0. The operation result (new encoded data) is then outputted to the page buffer 212_0 to replace the originally stored encoded data (old encoded data). According to the above procedure, the operational circuit 210_0 sequentially receives the user data of page 0 to page 2 transmitted by the data buffer 122 and performs the encoding operation on the user data of page 0 to page 2 by the XOR logical operation unit 211_0 to obtain the parity code P0 of one page size. As such, the encoding operation on the user data of the first page group is completed.
• Then, the RAID ECC engine 123 can write the parity code P0, via the page buffer 212_0, to the page buffer used by the compression/decompression circuit 250 (e.g., the page buffer 212_M) and start the compression function, so that the page buffer 212_M stores the compressed parity code P0. In short, the invention does not limit the specific implementation of the compression/decompression circuit 250 or the way it compresses the parity code P0. Moreover, the above description, in which the operational circuit 210_0 performs the encoding operation on the user data of the first page group and the page buffer 212_M is used by the compression/decompression circuit 250, is merely an example and is not intended to limit the invention. The CPU 124 can determine, according to actual needs or applications, which operational circuit performs the encoding operation on the user data of the first page group and which page buffers are used by the compression/decompression circuit 250. It can be understood that, in the embodiment, the RAID ECC engine 123 further includes compression and decompression functions. Moreover, after the parity code P0 is written to the page buffer 212_M used by the compression/decompression circuit 250 and the compression is started, the CPU 124 can switch the user data received by the input end of the selector 230 to another unused operational circuit, such as the operational circuit 210_1, thereby performing the encoding operation on the user data of the second page group.
• Therefore, as shown in FIG. 3B, the control register 240 controls the selector 230 so that the user data of page 3 to page 5 received by the input end of the selector 230 from the data buffer 122 is outputted to the operational circuit 210_1 via the output end 1 of the selector 230. At the same time, the control register 240 sets the selection end (sel) of the first selector 213_1 of the operational circuit 210_1 to "0", so that the operational circuit 210_1 can sequentially receive the user data of page 3 to page 5 transmitted by the data buffer 122 and perform the encoding operation on the user data of page 3 to page 5 by the XOR logical operation unit 211_1 to obtain the parity code P1 of one page size. As such, the encoding operation on the user data of the second page group is completed. Then, the RAID ECC engine 123 can further write the parity code P1, via the page buffer 212_1, to the page buffer 212_M used by the compression/decompression circuit 250 and start the compression function, so that the page buffer 212_M also stores the compressed parity code P1. Similarly, after the parity code P1 is written to the page buffer 212_M used by the compression/decompression circuit 250 and the compression is started, the CPU 124 can again switch the user data received by the input end of the selector 230 to another unused operational circuit, such as the operational circuit 210_2, thereby performing the encoding operation on the user data of the third page group.
• As shown in FIG. 3C, the control register 240 controls the selector 230 so that the user data of page 6 to page 8 received by the input end of the selector 230 from the data buffer 122 is outputted to the operational circuit 210_2 via the output end 2 of the selector 230. At the same time, the control register 240 sets the selection end (sel) of the first selector 213_2 of the operational circuit 210_2 to "0", so that the operational circuit 210_2 can sequentially receive the user data of page 6 to page 8 transmitted by the data buffer 122 and perform the encoding operation on the user data of page 6 to page 8 by the XOR logical operation unit 211_2 to obtain the parity code P2 of one page size. As such, the encoding operation on the user data of the third page group is completed. Then, the RAID ECC engine 123 can further write the parity code P2, via the page buffer 212_2, to the page buffer 212_M used by the compression/decompression circuit 250 and start the compression function, so that the page buffer 212_M also stores the compressed parity code P2.
• Finally, as shown in FIG. 3D, when organized by super page group, the compressed parity codes only need to share a single page. Therefore, after the parity codes P0 to P2 of the user data of the entire super page group have been compressed and stored in the page buffer 212_M, the RAID ECC engine 123 can write the compressed parity codes P0 to P2 to the non-volatile memory 110 via the page buffer 212_M at one time. As such, the RAID ECC engine 123 does not need to switch its state in between to individually output each of the parity codes P0 to P2. As described above, the RAID ECC engine 123 can also write the compressed parity codes P0 to P2 into the data buffer 122 first and then write them to the non-volatile memory 110 via the data buffer 122, but the invention is not limited thereto.
• It is noted that, in the embodiment of FIG. 3E, it is assumed that the non-volatile memory 110 stores the written data by using blank pages of the blocks B0 to B3, and each of the blocks B0 to B3 is placed in a channel; for example, block B0 is placed in channel CH#0, block B1 is placed in channel CH#1, and so on, and block B3 is placed in channel CH#3. In addition, the data may be written sequentially to the blank pages of the blocks B0 to B3 or written in parallel to the blank pages of the blocks B0 to B3, and the invention is not limited thereto. Therefore, when the controller 120 is to write the user data of page 0 to page 8 to the non-volatile memory 110, the user data of page 0 can be stored in the first blank page of block B0, the user data of page 1 can be stored in the first blank page of block B1, and so on, the user data of page 7 can be stored in the second blank page of block B3, and the user data of page 8 can be stored in the third blank page of block B0. After the parity codes P0 to P2 of the user data of page 0 to page 8 have been compressed and stored in the page buffer 212_M, the compressed parity codes P0 to P2 are then written to the third blank page of block B1 via the RAID ECC engine 123, as shown in FIG. 3E.
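Under this striping, the landing spot of each page can be computed directly. The following Python fragment is an illustrative model only, assuming a simple round-robin placement over the four blocks; it reproduces the layout of FIG. 3E, including the slot taken by the compressed parity codes P0 to P2.

```python
NUM_BLOCKS = 4   # blocks B0 to B3, one per channel CH#0 to CH#3 (assumed layout)

def place_user_page(i):
    """Return (block index, blank-page index) for user page i under round-robin striping."""
    return i % NUM_BLOCKS, i // NUM_BLOCKS

layout = {f"page {i}": f"block B{place_user_page(i)[0]}, blank page {place_user_page(i)[1]}"
          for i in range(9)}
parity_slot = place_user_page(9)   # the slot right after page 8 holds the compressed P0-P2
print(layout)
print("compressed P0-P2 ->", f"block B{parity_slot[0]}, blank page {parity_slot[1]}")
```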
• According to the above content, writing the user data of page 0 to page 3 to the first blank pages and writing the user data of page 4 to page 7 to the second blank pages both belong to writing user data to blank pages, rather than writing the parity code P0 or P1 to a blank page. Therefore, the controller 120 can directly write the user data to the blank pages without waiting for the generation of the parity code P0 or P1, and thereby the user data can be written to the non-volatile memory 110 at the fastest speed. Of course, the invention does not limit the compressed parity codes P0 to P2 to being written only to the third blank page of block B1. The RAID ECC engine 123 (or the controller 120) can decide, according to actual needs or applications, to which blank page of which block the compressed parity codes P0 to P2 are written. In other words, in other embodiments, the blocks B0 to B3 can be further divided into two areas (not shown), that is, a data area and a parity code area. Therefore, the controller 120 may first write the user data to the blank pages of the data area, and the RAID ECC engine 123 then writes the compressed parity codes to the blank pages of the parity code area. Alternatively, after the controller 120 fills the data area with the user data, the RAID ECC engine 123 writes the compressed parity codes to the blank pages of the parity code area, but the invention is not limited thereto.
• In addition, please refer to FIG. 3F. FIG. 3F is a schematic timing diagram of the operational circuits in FIG. 3A to FIG. 3D performing the parity code processing method. As shown in FIG. 3F, after the parity code of the user data of the current page group is written to the page buffer 212_M used by the compression/decompression circuit 250 and the compression is started, the CPU 124 can switch the user data received by the input end of the selector 230 to another unused operational circuit to perform the encoding operation on the user data of the next page group. Therefore, the compression of the previous parity code and the encoding operation on the user data of the next page group can be processed in parallel, so that the overall system performance is not degraded.
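The overlap shown in FIG. 3F can also be modelled in software. The sketch below (Python with a single worker thread and zlib as the stand-in compressor; page size and sample data are assumptions of this illustration) compresses the parity code of the previous page group while the XOR encoding of the next group proceeds.

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

PAGE_SIZE = 4096   # assumed page size
N, K = 3, 3        # pages per page group / page groups per super page group (assumed)

def xor_pages(pages):
    """XOR-encode the N pages of one group into a page-sized parity code."""
    parity = bytearray(PAGE_SIZE)
    for page in pages:
        for i, byte in enumerate(page):
            parity[i] ^= byte
    return bytes(parity)

user_pages = [bytes([i]) * PAGE_SIZE for i in range(K * N)]
compressed = []
with ThreadPoolExecutor(max_workers=1) as pool:
    pending = None
    for g in range(K):
        group = user_pages[g * N:(g + 1) * N]
        parity = xor_pages(group)              # encode the current group
        if pending is not None:                # the previous group's compression ran in parallel
            compressed.append(pending.result())
        pending = pool.submit(zlib.compress, parity)
    compressed.append(pending.result())
# the K compressed parity codes would now be written to the non-volatile memory at one time
print([len(c) for c in compressed])
```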
• Furthermore, in order to further explain the operation flow of the data storage device 1 in processing the parity codes P0 to P2, the invention further provides an embodiment of the parity code processing method. Please also refer to FIG. 4, which is a schematic flow diagram of a parity code processing method according to an embodiment of the invention. It should be noted that the parity code processing method of FIG. 4 can be performed by the controller 120 of the data storage device 1, especially by the CPU 124 and the RAID ECC engine 123 of the controller 120, but the invention does not limit the parity code processing method of FIG. 4 to being executed only by the controller 120 of FIG. 1. As shown in FIG. 4, in step S410, the CPU 124 issues at least one control signal to the RAID ECC engine 123. Thereafter, in step S420, the CPU 124 transmits the user data of a plurality of pages, for example, the user data of N pages, to the RAID ECC engine 123. As described above, in the embodiment, the CPU 124 temporarily stores the user data from the host 2 in the data buffer 122 and transmits the user data to the RAID ECC engine 123 via the data buffer 122.
• Thereafter, in step S430, the RAID ECC engine 123 performs the encoding operation on the user data of the N pages according to the control signal to generate a corresponding parity code. Thereafter, in step S440, the RAID ECC engine 123 compresses the parity code and stores the compressed parity code in the RAM 1231. Thereafter, in step S450, the CPU 124 determines whether the user data of a super page group has been transmitted to the RAID ECC engine 123. If yes, step S460 is performed; if no, step S420 is performed. As described above, in the embodiment, the user data of a super page group refers to the user data of K×N pages. Therefore, the CPU 124 can determine, according to the count of pages, whether the user data of the K×N pages has been transmitted to the RAID ECC engine 123. When it is determined that the user data of the K×N pages has been transmitted to the RAID ECC engine 123, the RAID ECC engine 123 may generate K parity codes according to the user data of the K×N pages, compress the K parity codes according to the control signal to form the compressed K parity codes of the user data of the super page group, and store the compressed K parity codes of the user data of the super page group in the RAM 1231.
• Alternatively, in other embodiments, step S450 may be changed to: the RAID ECC engine 123 determines whether the K parity codes of the K×N pages of the user data have all been compressed and stored in the RAM 1231. If yes, step S460 is performed; if no, step S420 is performed. This does not affect the implementation of the invention. In addition, in other embodiments, if the management unit of the user data is a sector instead of a page, the CPU 124 transmits the user data of N sectors to the RAID ECC engine 123 in step S420, the RAID ECC engine 123 performs the encoding operation on the user data of the N sectors according to the control signal to generate a corresponding parity code in step S430, and so on, and the CPU 124 determines whether the user data of a super sector group has been transmitted to the RAID ECC engine 123 in step S450. Therefore, in other embodiments, the CPU 124 can also determine, according to the count of sectors, whether the user data of the K×N sectors has been transmitted to the RAID ECC engine 123. When it is determined that the user data of the K×N sectors has been transmitted to the RAID ECC engine 123, the RAID ECC engine 123 may generate K parity codes according to the user data of the K×N sectors, compress the K parity codes according to the control signal to form the compressed K parity codes of the user data of the super sector group, and store the compressed K parity codes of the user data of the super sector group in the RAM 1231. In general, this does not affect the implementation of the invention.
• Finally, in step S460, the CPU 124 controls the RAID ECC engine 123 to write the compressed K parity codes of the user data of the super page group to the non-volatile memory 110. As described above, in the embodiment, the CPU 124 may first write the user data of the super page group to the non-volatile memory 110 via the data buffer 122. After the K parity codes of the user data of the super page group are compressed and stored in the RAM 1231, the CPU 124 can control the RAID ECC engine 123 to write the compressed K parity codes to the non-volatile memory 110 via the page buffer 212_M (and the data buffer 122). Alternatively, in other embodiments, the CPU 124 may temporarily store the user data of the super page group in the data buffer 122. After the K parity codes of the user data of the super page group are compressed and stored in the RAM 1231, the CPU 124 can control the RAID ECC engine 123 to temporarily store the compressed K parity codes in the data buffer 122. Then, the CPU 124 writes the user data of the super page group and the compressed K parity codes of the user data of the super page group to the non-volatile memory 110 via the data buffer 122 at one time, but the invention is not limited thereto.
• In summary, the data storage device and the parity code processing method provided by the embodiments of the invention may reduce the amount of data by compressing the parity codes and first store the compressed parity codes in the memory of the RAID ECC engine, instead of immediately writing the parity code generated by each encoding operation to the non-volatile memory of the data storage device. After the plurality of parity codes of the user data of an entire super page group are compressed and stored in the memory of the RAID ECC engine, the RAID ECC engine writes the compressed plurality of parity codes to the non-volatile memory at one time. As such, the frequency of switching the state of the RAID ECC engine and the number and duration of writes to the non-volatile memory are reduced, thereby relatively increasing the service life of the non-volatile memory. In addition, in the embodiments of the invention, the compression of the previous parity code and the encoding operation on the user data of the next page group may be processed in parallel to prevent the overall system performance from degrading.
• The above description presents only embodiments of the invention and is not intended to limit the scope of the invention.

Claims (8)

What is claimed is:
1. A data storage device, comprising:
a non-volatile memory; and
a controller, electrically coupled to the non-volatile memory and comprising:
an access interface, coupled to a host and the non-volatile memory;
a central processing unit (CPU), used to interpret write/read commands issued by the host and control the access interface to access user data of the non-volatile memory; and
a redundant array of independent disks (RAID) error correcting code (ECC) engine, coupled to the non-volatile memory and used to perform an error correction procedure on the user data, wherein the error correction procedure is divided into an encoding operation or a decoding operation, the RAID ECC engine has a memory, wherein after completing the encoding operation on each N pages of the user data to generate a corresponding parity code, the RAID ECC engine compresses the parity code and stores the compressed parity code in the memory, wherein after all K parity codes of the K×N pages of the user data are compressed and stored in the memory, the RAID ECC engine writes the compressed K parity codes to the non-volatile memory, wherein K and N are both positive integers greater than one.
2. The data storage device according to claim 1, wherein the controller further comprises:
a data buffer, coupled to the access interface, the RAID ECC engine and the non-volatile memory, wherein the data buffer is used to temporarily store the user data from the host or the non-volatile memory.
3. The data storage device according to claim 2, wherein the RAID ECC engine further comprises:
a control register, coupled to the CPU and used to receive at least one control signal and temporary data from the CPU; and
a plurality of operational circuits, wherein each of the plurality of operational circuits comprises:
an exclusive OR (XOR) logical operation unit;
a page buffer; and
a first selector, having two input ends and an output end, wherein the two input ends of the first selector are respectively coupled to an output end of the XOR logical operation unit and the control register, the output end of the first selector is coupled to the page buffer, wherein the CPU controls, by the control register, the first selector to determine an input source of the page buffer, so that the page buffer is used to store an operation result from the XOR logical operation unit or used to store the temporary data from the CPU.
4. The data storage device according to claim 3, wherein the RAID ECC engine further comprises:
a state machine, coupled to the control register and used to control the RAID ECC engine to perform the encoding operation or the decoding operation; and
a second selector, having a plurality of output ends and an input end, wherein the plurality of output ends of the second selector are respectively coupled to input ends of the plurality of operational circuits, the input end of the second selector is coupled to the data buffer or the non-volatile memory, wherein the CPU controls the second selector by the control register, so that the user data received by the input end of the second selector from the data buffer or the non-volatile memory is outputted to at least one of the specified plurality of operational circuits.
5. A parity code processing method executed by a controller of a data storage device, wherein the data storage device further comprises a non-volatile memory electrically coupled to the controller, the controller comprises an access interface, a CPU and a RAID ECC engine, the access interface is coupled to a host and the non-volatile memory, the CPU is used to interpret write/read commands issued by the host and control the access interface to access user data of the non-volatile memory, and the parity code processing method comprises:
configuring the CPU to issue at least one control signal to the RAID ECC engine and transmitting the user data of a plurality of pages to the RAID ECC engine;
configuring the RAID ECC engine to perform an encoding operation on the user data of N pages based on the control signal to generate a corresponding parity code, compress the parity code, and store the compressed parity code in a memory of the RAID ECC engine;
configuring the CPU to determine whether the user data of a super page group has been transmitted to the RAID ECC engine, wherein the user data of the super page group is referred to as the user data of K×N pages; and
when it is determined that the user data of the super page group has been transmitted to the RAID ECC engine, configuring the CPU to control the RAID ECC engine to write compressed K parity codes of the user data of the super page group to the non-volatile memory, wherein K and N are both positive integers greater than one.
6. The parity code processing method according to claim 5, wherein the controller further comprises:
a data buffer, coupled to the access interface, the RAID ECC engine and the non-volatile memory, wherein the data buffer is used to temporarily store the user data from the host or the non-volatile memory.
7. The parity code processing method according to claim 6, wherein the RAID ECC engine comprises:
a control register, coupled to the CPU and used to receive at least one control signal and temporary data from the CPU; and
a plurality of operational circuits, wherein each of the plurality of operational circuits comprises:
an XOR logical operation unit;
a page buffer; and
a first selector, having two input ends and an output end, wherein the two input ends of the first selector are respectively coupled to an output end of the XOR logical operation unit and the control register, the output end of the first selector is coupled to the page buffer, wherein the CPU controls, by the control register, the first selector to determine an input source of the page buffer, so that the page buffer is used to store an operation result from the XOR logical operation unit, or used to store the temporary data from the CPU.
8. The parity code processing method according to claim 7, wherein the RAID ECC engine further comprises:
a state machine, coupled to the control register and used to control the RAID ECC engine to perform the encoding operation or a decoding operation; and
a second selector, having a plurality of output ends and an input end, wherein the plurality of output ends of the second selector are respectively coupled to input ends of the plurality of operational circuits, the input end of the second selector is coupled to the data buffer or the non-volatile memory, wherein the CPU controls the second selector by the control register, so that the user data received by the input end of the second selector from the data buffer or the non-volatile memory is outputted to at least one of the specified plurality of operational circuits.
US16/533,818 2018-09-11 2019-08-07 Data storage device and parity code processing method thereof Abandoned US20200081780A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/533,818 US20200081780A1 (en) 2018-09-11 2019-08-07 Data storage device and parity code processing method thereof

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201862729556P 2018-09-11 2018-09-11
US16/533,818 US20200081780A1 (en) 2018-09-11 2019-08-07 Data storage device and parity code processing method thereof

Publications (1)

Publication Number Publication Date
US20200081780A1 true US20200081780A1 (en) 2020-03-12

Family

ID=69719181

Family Applications (2)

Application Number Title Priority Date Filing Date
US16/533,818 Abandoned US20200081780A1 (en) 2018-09-11 2019-08-07 Data storage device and parity code processing method thereof
US16/542,311 Active 2039-10-08 US11068391B2 (en) 2018-09-11 2019-08-16 Mapping table updating method for data storage device

Family Applications After (1)

Application Number Title Priority Date Filing Date
US16/542,311 Active 2039-10-08 US11068391B2 (en) 2018-09-11 2019-08-16 Mapping table updating method for data storage device

Country Status (3)

Country Link
US (2) US20200081780A1 (en)
CN (2) CN110895514A (en)
TW (2) TWI703438B (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11137921B2 (en) * 2019-03-05 2021-10-05 Samsung Electronics Co., Ltd. Data storage device and system
US11216207B2 (en) * 2019-12-16 2022-01-04 Silicon Motion, Inc. Apparatus and method for programming user data on the pages and the parity of the page group into flash modules
CN114371814A (en) * 2021-12-08 2022-04-19 浙江大华存储科技有限公司 Data processing method and device and solid state disk
US11561722B2 (en) * 2020-08-25 2023-01-24 Micron Technology, Inc. Multi-page parity data storage in a memory device
US11775386B2 (en) 2022-02-18 2023-10-03 Silicon Motion, Inc. Data storage device and control method for non-volatile memory
US11860775B2 (en) 2021-09-29 2024-01-02 Silicon Motion, Inc. Method and apparatus for programming data into flash memory incorporating with dedicated acceleration hardware
US11922044B2 (en) 2022-02-18 2024-03-05 Silicon Motion, Inc. Data storage device and control method for non-volatile memory
US11966604B2 (en) 2021-09-29 2024-04-23 Silicon Motion, Inc. Method and apparatus for programming data arranged to undergo specific stages into flash memory based on virtual carriers
US11972150B2 (en) 2021-09-29 2024-04-30 Silicon Motion, Inc. Method and non-transitory computer-readable storage medium and apparatus for programming data into flash memory through dedicated acceleration hardware
US12008258B2 (en) 2022-02-18 2024-06-11 Silicon Motion, Inc. Data storage device and control method for non-volatile memory

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110895513B (en) * 2018-09-12 2024-09-17 华为技术有限公司 System garbage recycling method and garbage recycling method in solid state disk
KR102637478B1 (en) * 2018-12-05 2024-02-15 삼성전자주식회사 open channel solid state drive, nonvolatile memory system including the same and Method of power loss protection of open channel solid state drive
KR20210044564A (en) * 2019-10-15 2021-04-23 삼성전자주식회사 Storage device and garbage collection method thereof
CN113806133A (en) * 2020-06-12 2021-12-17 华为技术有限公司 Data writing method and device
TWI799718B (en) * 2020-06-22 2023-04-21 群聯電子股份有限公司 Memory control method, memory storage device and memory control circuit unit
CN111737165B (en) * 2020-07-02 2023-09-12 群联电子股份有限公司 Memory control method, memory storage device and memory control circuit unit
US11513891B2 (en) * 2020-07-24 2022-11-29 Kioxia Corporation Systems and methods for parity-based failure protection for storage devices
US11087858B1 (en) * 2020-07-24 2021-08-10 Macronix International Co., Ltd. In-place refresh operation in flash memory
CN112364273A (en) * 2020-09-18 2021-02-12 上海泛微软件有限公司 Method, device and equipment for generating portal page and computer readable storage medium
US11500782B2 (en) * 2020-12-18 2022-11-15 Micron Technology, Inc. Recovery of logical-to-physical table information for a memory device
CN112799765B (en) * 2021-01-30 2022-10-11 交通银行股份有限公司 Intelligent skip method and equipment based on page coding and storage medium
CN112799601B (en) * 2021-02-24 2023-06-13 群联电子股份有限公司 Effective data merging method, memory storage device and control circuit unit
CN118331512B (en) * 2024-06-14 2024-09-13 山东云海国创云计算装备产业创新中心有限公司 Processing method and device based on memory control card

Family Cites Families (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2912299B2 (en) * 1997-06-10 1999-06-28 四国日本電気ソフトウェア株式会社 Disk array controller
US8341332B2 (en) * 2003-12-02 2012-12-25 Super Talent Electronics, Inc. Multi-level controller with smart storage transfer manager for interleaving multiple single-chip flash memory devices
US7877539B2 (en) * 2005-02-16 2011-01-25 Sandisk Corporation Direct data file storage in flash memories
US7409489B2 (en) * 2005-08-03 2008-08-05 Sandisk Corporation Scheduling of reclaim operations in non-volatile memory
US8533564B2 (en) * 2009-12-23 2013-09-10 Sandisk Technologies Inc. System and method of error correction of control data at a memory device
US8407449B1 (en) * 2010-02-26 2013-03-26 Western Digital Technologies, Inc. Non-volatile semiconductor memory storing an inverse map for rebuilding a translation table
US8775868B2 (en) * 2010-09-28 2014-07-08 Pure Storage, Inc. Adaptive RAID for an SSD environment
KR101774496B1 (en) * 2010-12-08 2017-09-05 삼성전자주식회사 Non-volatile memory device, devices having the same, method of operating the same
CN102591737B (en) * 2011-01-13 2015-04-22 群联电子股份有限公司 Data writing and reading method, memory controller and memory storage device
US8996951B2 (en) * 2012-11-15 2015-03-31 Elwha, Llc Error correction with non-volatile memory on an integrated circuit
US10013203B2 (en) * 2013-01-04 2018-07-03 International Business Machines Corporation Achieving storage compliance in a dispersed storage network
US9535774B2 (en) * 2013-09-09 2017-01-03 International Business Machines Corporation Methods, apparatus and system for notification of predictable memory failure
US20150349805A1 (en) * 2014-05-28 2015-12-03 Skymedi Corporation Method of Handling Error Correcting Code in Non-volatile Memory and Non-volatile Storage Device Using the Same
CN104166634A (en) * 2014-08-12 2014-11-26 华中科技大学 Management method of mapping table caches in solid-state disk system
CN106326136A (en) * 2015-07-02 2017-01-11 广明光电股份有限公司 Method for collecting garage block in solid state disk
TWI569139B (en) * 2015-08-07 2017-02-01 群聯電子股份有限公司 Valid data merging method, memory controller and memory storage apparatus
TWI591482B (en) * 2016-01-30 2017-07-11 群聯電子股份有限公司 Data protecting method, memory control circuit unit and memory storage device
CN107391026B (en) * 2016-04-27 2020-06-02 慧荣科技股份有限公司 Flash memory device and flash memory management method
CN106802777A (en) * 2017-01-20 2017-06-06 杭州电子科技大学 A kind of flash translation layer (FTL) control method for solid storage device
CN108038026B (en) * 2017-11-17 2021-11-30 中国科学院信息工程研究所 Flash memory-based data rapid recovery method and system
CN107967125A (en) * 2017-12-20 2018-04-27 北京京存技术有限公司 Management method, device and the computer-readable recording medium of flash translation layer (FTL)
CN108089822A (en) * 2017-12-20 2018-05-29 北京京存技术有限公司 Management method, system, equipment and the storage medium of storage chip
TWI670594B (en) * 2018-01-18 2019-09-01 慧榮科技股份有限公司 Data storage device

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090198887A1 (en) * 2008-02-04 2009-08-06 Yasuo Watanabe Storage system
US20130304970A1 (en) * 2012-04-20 2013-11-14 Stec, Inc. Systems and methods for providing high performance redundant array of independent disks in a solid-state device
WO2015173925A1 (en) * 2014-05-15 2015-11-19 株式会社日立製作所 Storage device
US20160321184A1 (en) * 2015-04-30 2016-11-03 Marvell Israel (M.I.S.L) Ltd. Multiple Read and Write Port Memory
US20190243578A1 (en) * 2018-02-08 2019-08-08 Toshiba Memory Corporation Memory buffer management for solid state drives
US20210064468A1 (en) * 2019-08-29 2021-03-04 Micron Technology, Inc. Shared parity protection
US11455122B2 (en) * 2019-12-20 2022-09-27 Hitachi, Ltd. Storage system and data compression method for storage system

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11137921B2 (en) * 2019-03-05 2021-10-05 Samsung Electronics Co., Ltd. Data storage device and system
US11768618B2 (en) 2019-03-05 2023-09-26 Samsung Electronics Co., Ltd. Distributed processing data storage device and system
US11216207B2 (en) * 2019-12-16 2022-01-04 Silicon Motion, Inc. Apparatus and method for programming user data on the pages and the parity of the page group into flash modules
US11928353B2 (en) 2020-08-25 2024-03-12 Micron Technology, Inc. Multi-page parity data storage in a memory device
US11561722B2 (en) * 2020-08-25 2023-01-24 Micron Technology, Inc. Multi-page parity data storage in a memory device
US11860775B2 (en) 2021-09-29 2024-01-02 Silicon Motion, Inc. Method and apparatus for programming data into flash memory incorporating with dedicated acceleration hardware
US11966604B2 (en) 2021-09-29 2024-04-23 Silicon Motion, Inc. Method and apparatus for programming data arranged to undergo specific stages into flash memory based on virtual carriers
US11972150B2 (en) 2021-09-29 2024-04-30 Silicon Motion, Inc. Method and non-transitory computer-readable storage medium and apparatus for programming data into flash memory through dedicated acceleration hardware
CN114371814A (en) * 2021-12-08 2022-04-19 浙江大华存储科技有限公司 Data processing method and device and solid state disk
US11775386B2 (en) 2022-02-18 2023-10-03 Silicon Motion, Inc. Data storage device and control method for non-volatile memory
US11922044B2 (en) 2022-02-18 2024-03-05 Silicon Motion, Inc. Data storage device and control method for non-volatile memory
US12008258B2 (en) 2022-02-18 2024-06-11 Silicon Motion, Inc. Data storage device and control method for non-volatile memory
TWI845896B (en) * 2022-02-18 2024-06-21 慧榮科技股份有限公司 Data storage device and control method for non-volatile memory

Also Published As

Publication number Publication date
CN110888594B (en) 2023-04-14
TWI773890B (en) 2022-08-11
TWI703438B (en) 2020-09-01
TW202011195A (en) 2020-03-16
US20200081832A1 (en) 2020-03-12
CN110895514A (en) 2020-03-20
CN110888594A (en) 2020-03-17
US11068391B2 (en) 2021-07-20
TW202011187A (en) 2020-03-16

Similar Documents

Publication Publication Date Title
US20200081780A1 (en) Data storage device and parity code processing method thereof
CN110858128B (en) Data storage device and method for sharing memory in controller
TWI534618B (en) Mapping table updating method, memory control circuit unit and memory storage device
US8234541B2 (en) Method and controller for data access in a flash memory
US20180276114A1 (en) Memory controller
KR20080023191A (en) Device and method for accessing binary data in fusion memory
TWI540582B (en) Data management method, memory control circuit unit and memory storage apparatus
US10013187B2 (en) Mapping table accessing method, memory control circuit unit and memory storage device
US10733094B2 (en) Memory system, controller, method of operating a controller, and method of operating a memory system for processing big data by using compression and decompression
US9430327B2 (en) Data access method, memory control circuit unit and memory storage apparatus
TW201913382A (en) Decoding method, memory storage device and memory control circuit unit
TW201606503A (en) Data management method, memory control circuit unit and memory storage apparatus
CN109491828B (en) Decoding method, memory storage device and memory control circuit unit
JP2020149195A (en) Memory system
TWI629590B (en) Memory management method, memory control circuit unit and memory storage device
TWI707234B (en) A data storage device and a data processing method
US11003531B2 (en) Memory system and operating method thereof
CN102591737B (en) Data writing and reading method, memory controller and memory storage device
CN113504880B (en) Memory buffer management method, memory control circuit unit and memory device
CN108664350B (en) Data protection method, memory storage device and memory control circuit unit
US9600363B2 (en) Data accessing method, memory controlling circuit unit and memory storage apparatus
US12038811B2 (en) Memory controller and data processing method
CN113360429B (en) Data reconstruction method, memory storage device and memory control circuit unit
TWI695264B (en) A data storage device and a data processing method
CN117992282A (en) Encoding method after abnormal power failure and memory storage device

Legal Events

Date Code Title Description
AS Assignment

Owner name: SILICON MOTION, INC., TAIWAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:LI, AN-PANG;REEL/FRAME:049982/0007

Effective date: 20190418

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION