CN118202336A - Memory controller and method for storing local memory mapping tables in client nodes - Google Patents
Memory controller and method for storing local memory mapping tables in client nodes
- Publication number
- CN118202336A (application number CN202280071365.6A)
- Authority
- CN
- China
- Prior art keywords
- memory
- request
- map
- node
- logical address
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/10—Address translation
- G06F12/1081—Address translation for peripheral access to main memory, e.g. direct memory access [DMA]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0806—Multiuser, multiprocessor or multiprocessing cache systems
- G06F12/0815—Cache consistency protocols
- G06F12/0831—Cache consistency protocols using a bus scheme, e.g. with bus monitoring or watching means
- G06F12/0835—Cache consistency protocols using a bus scheme, e.g. with bus monitoring or watching means for main memory peripheral accesses (e.g. I/O or DMA)
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0866—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
- G06F12/0871—Allocation or management of cache space
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/62—Details of cache specific to multiprocessor cache arrangements
- G06F2212/621—Coherency control relating to peripheral accessing, e.g. from DMA or I/O device
Abstract
A memory controller is provided for storing a local memory mapping table in a client node. The memory controller is coupled to one or more storage nodes, each comprising physical memory, and to a control node comprising a central memory mapping table that includes the local memory mapping tables of a plurality of client nodes. The memory controller is configured to: receive a memory request for a logical address from an application; determine whether the logical address is in the local memory mapping table; if the logical address is in the local memory mapping table, execute the memory request according to the local memory mapping table; and if the logical address is not in the local memory mapping table, transmit a corresponding memory request for the logical address to the control node. The memory controller then receives a response from the control node and updates the local memory mapping table accordingly. The memory controller provides an efficient and effective shared persistent memory experience across the various computing nodes of a storage system.
Description
Technical Field
The present invention relates generally to the field of distributed systems, and more particularly, to a memory controller and a method for storing a local memory map in a memory controller of a client node.
Background
In general, memory latency is among the most critical factors in evaluating the overall performance of any storage technology. Across the various types of storage technologies, such as cloud storage, quad-level cell (QLC) drive-based storage, and storage class memory, there is an increasing demand for low latency. Storage class memory is widely regarded as a building block for low-latency data transfer for different applications in conventional storage systems, and it introduces new byte-addressable hardware media and devices alongside conventional dynamic random access memory and persistent storage (e.g., solid state drives). However, conventional storage techniques remain inefficient and ineffective because of various practical challenges, such as sharing and allocating storage class memory and balancing load among multiple storage nodes backing a virtual memory. As a result, conventional storage techniques cannot provide virtual memory with both low latency and thin provisioning. There is therefore a technical problem: how to provide an efficient and effective shared persistent memory experience among various computing nodes.
Accordingly, in view of the above discussion, there is a need to overcome the above-described shortcomings associated with conventional memory systems and memory allocators.
Disclosure of Invention
The invention provides a memory controller and a method for storing a local memory mapping table in a memory controller of a client node. The present invention addresses the existing problem of how to provide an efficient and effective shared persistent memory experience on various computing nodes equipped with new storage technologies. It is an object of the present invention to provide a solution that at least partly overcomes the problems encountered in the prior art and to provide an improved memory controller and an improved method for storing a local memory mapping table in a memory controller of a client node. The invention further provides a remote, distributed, random-access memory pool with thin-provisioned allocation and domain-isolated access.
One or more of the objects of the invention are achieved by the solution provided in the attached independent claims. Advantageous embodiments of the invention are further defined in the dependent claims.
In one aspect, the present invention provides a memory controller to be used in a client node that includes local memory for storing a local memory mapping table. The client node is configured to execute an application program. The memory controller is configured to be operably coupled to one or more storage nodes, wherein each storage node comprises physical memory, and to a control node, the control node including a central memory mapping table comprising the local memory mapping tables of a plurality of client nodes. The memory controller is configured to: receive a memory request for a logical address from the application program; determine whether the logical address is in the local memory mapping table; if the logical address is in the local memory mapping table, execute the memory request according to the local memory mapping table; and if the logical address is not in the local memory mapping table, transmit a corresponding memory request for the logical address to the control node. In addition, the memory controller is configured to receive a response from the controller and update the local memory mapping table accordingly.
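For illustration only, the following Python sketch mirrors the request-dispatch behaviour described above: the local memory mapping table is consulted first, a miss is resolved through the control node, and the returned mapping is cached locally. All class and method names (e.g., ControlNodeStub, MemoryController.handle) are invented for this sketch and are not part of the claimed implementation.

```python
class ControlNodeStub:
    """Stands in for the control node that holds the central memory mapping table."""
    def __init__(self):
        self.central_map = {}   # logical address -> (storage node id, physical address)
        self.next_phys = 0

    def resolve(self, logical_addr):
        # Allocate on first use, otherwise return the existing central mapping.
        if logical_addr not in self.central_map:
            self.central_map[logical_addr] = ("node-0", self.next_phys)
            self.next_phys += 4096
        return self.central_map[logical_addr]


class MemoryController:
    """Client-side controller keeping the local memory mapping table."""
    def __init__(self, control_node):
        self.local_map = {}             # the local memory mapping table
        self.control_node = control_node

    def handle(self, op, logical_addr, data=None):
        if logical_addr not in self.local_map:
            # Miss: transmit a corresponding request to the control node and
            # update the local memory mapping table with its response.
            self.local_map[logical_addr] = self.control_node.resolve(logical_addr)
        node_id, phys = self.local_map[logical_addr]
        # In the described system the request would now go to the storage node
        # (e.g., over RDMA or CXL.mem); here we only report where it would go.
        return (op, node_id, phys)


controller = MemoryController(ControlNodeStub())
print(controller.handle("write", 0x1000, b"payload"))   # miss: resolved via the control node
print(controller.handle("read", 0x1000))                # hit: served from the local table
```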
The memory controller is used in the client node, which includes a local memory for storing a local memory mapping table to ensure low latency. Advantageously, the memory controller is configured to receive the memory request for the logical address from the application program and determine whether the logical address is in the local memory mapping table. In addition, the memory controller is configured to receive a response from the controller and update the local memory mapping table accordingly. By storing the local memory mapping table, the memory controller allows the resources of the one or more storage nodes (i.e., the same memory space) to be shared between multiple client (or computing) nodes and the control node. In other words, the memory controller provides an effective and efficient shared persistent memory experience while reducing latency. The low latency further enables direct memory reads from the client node that bypass the remote CPU for address translation, while preserving multi-tenant guarantees (i.e., domain-isolated access). The memory controller also stores memory mapping details and enforces memory protection so that the remote distributed memory pool can be accessed with thin provisioning and domain isolation.
In one implementation, the local memory map is a subset of the central memory map.
The local memory mapping table allows the logical address to be resolved locally, thereby further reducing latency.
In another implementation, if the memory request is a write operation, the memory controller is configured to transmit an allocation request for the logical address as the corresponding memory request to the control node, such that the control node allocates physical memory in one of the one or more storage nodes, and to receive the physical address of the allocated memory from the controller as the response. In addition, the memory controller is configured to update the local memory mapping table accordingly by mapping the logical address to the physical address, and to transmit a write request to the storage node holding the physical address.
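As a rough, non-authoritative illustration of this write path, the sketch below allocates through a stand-in control node on a miss, records the logical-to-physical mapping locally, and then writes to the storage node holding the physical address; the names ControlNode, StorageNode, and write_op are assumptions.

```python
class ControlNode:
    """Stand-in for the control node; allocates physical memory on a storage node."""
    def __init__(self):
        self.central_map = {}
        self.next_phys = 0

    def allocate(self, logical_addr):
        if logical_addr not in self.central_map:
            self.central_map[logical_addr] = ("node-0", self.next_phys)
            self.next_phys += 4096
        return self.central_map[logical_addr]     # (storage node id, physical address)


class StorageNode:
    """Stand-in for a storage node exposing its physical memory."""
    def __init__(self):
        self.memory = {}

    def write(self, phys_addr, data):
        self.memory[phys_addr] = data
        return "ok"


def write_op(local_map, control_node, storage_nodes, logical_addr, data):
    if logical_addr not in local_map:
        # Miss: transmit an allocation request and cache the returned mapping.
        local_map[logical_addr] = control_node.allocate(logical_addr)
    node_id, phys_addr = local_map[logical_addr]
    # Transmit the write request to the storage node holding the physical address.
    return storage_nodes[node_id].write(phys_addr, data)


local_map, nodes = {}, {"node-0": StorageNode()}
print(write_op(local_map, ControlNode(), nodes, 0x1000, b"payload"))   # "ok"
print(local_map)    # the local memory mapping table now holds the new mapping
```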
In this implementation, the memory controller performs the write operation upon receiving the memory request and updates the local memory mapping table after each operation, so that the stored mapping remains current.
In another implementation, if the memory request is a read operation, the memory controller is configured to: transmit a memory map request for the logical address as the corresponding memory request to the control node, such that the control node retrieves the mapping of the logical address from the central memory mapping table; and receive, as the response, the memory mapping of the requested logical address from the controller, the mapping including a physical address. In addition, the memory controller is configured to update the local memory mapping table accordingly by mapping the logical address to the remote physical address, and to transmit the memory request to the storage node holding the physical address.
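A corresponding sketch of the read path is shown below, again with invented names (ControlNode.get_mapping, StorageNode.read): on a miss the mapping is fetched from the central table rather than allocated, cached locally, and the read is then issued to the storage node.

```python
class ControlNode:
    """Stand-in for the control node; serves mappings from the central table."""
    def __init__(self, central_map):
        self.central_map = central_map           # logical address -> (node id, physical address)

    def get_mapping(self, logical_addr):
        return self.central_map.get(logical_addr)


class StorageNode:
    """Stand-in for a storage node serving reads from its physical memory."""
    def __init__(self, memory):
        self.memory = memory

    def read(self, phys_addr):
        return self.memory.get(phys_addr, b"\x00" * 8)


def read_op(local_map, control_node, storage_nodes, logical_addr):
    if logical_addr not in local_map:
        mapping = control_node.get_mapping(logical_addr)
        if mapping is None:
            return b"\x00" * 8                   # unallocated logical address: zeros
        local_map[logical_addr] = mapping        # cache the mapping locally
    node_id, phys_addr = local_map[logical_addr]
    # Transmit the read to the storage node holding the physical address.
    return storage_nodes[node_id].read(phys_addr)


nodes = {"node-0": StorageNode({0: b"hello!!!"})}
ctrl = ControlNode({0x1000: ("node-0", 0)})
print(read_op({}, ctrl, nodes, 0x1000))          # b"hello!!!"
print(read_op({}, ctrl, nodes, 0x2000))          # zeros: address not yet allocated
```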
In this implementation, the memory controller performs the read operation upon receiving the memory request and updates the local memory mapping table whenever a mapping is retrieved and stored.
In another implementation, the memory controller is configured to execute the memory request according to the local memory mapping table by transmitting a corresponding memory request to the storage node indicated by the local mapping, the corresponding memory request including an authentication key, such that the storage node authenticates the corresponding memory request according to the authentication key, and by receiving a response from the storage node. If the response is an indication of an invalid authentication key, the memory controller transmits a memory mapping request for the logical address to the control node, receives a memory mapping from the central memory mapping table of the control node, the mapping including an updated authentication key, updates the local memory mapping table, and transmits the updated corresponding memory request, which includes the updated authentication key, to the storage node.
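The retry behaviour can be sketched as follows; the key check, the "invalid key" indicator, and the helper names are assumptions used only to illustrate the described sequence (reject on a stale key, refresh the mapping and key from the control node, retry).

```python
class StorageNode:
    """Stand-in storage node that rejects requests carrying a stale key."""
    def __init__(self, valid_key):
        self.valid_key = valid_key
        self.memory = {}

    def access(self, op, phys_addr, key, data=None):
        if key != self.valid_key:
            return ("invalid_key", None)                 # authentication failed
        if op == "write":
            self.memory[phys_addr] = data
            return ("ok", None)
        return ("ok", self.memory.get(phys_addr))


class ControlNode:
    """Stand-in control node serving mappings (with fresh keys) from the central table."""
    def __init__(self, central_map):
        self.central_map = central_map                    # logical -> (node id, phys addr, key)

    def get_mapping(self, logical_addr):
        return self.central_map[logical_addr]


def execute(local_map, control_node, storage_nodes, op, logical_addr, data=None):
    node_id, phys_addr, key = local_map[logical_addr]
    status, value = storage_nodes[node_id].access(op, phys_addr, key, data)
    if status == "invalid_key":
        # Stale credentials: fetch an updated mapping and key, update the local
        # memory mapping table, and retry the request with the new key.
        local_map[logical_addr] = control_node.get_mapping(logical_addr)
        node_id, phys_addr, key = local_map[logical_addr]
        status, value = storage_nodes[node_id].access(op, phys_addr, key, data)
    return status, value


nodes = {"node-0": StorageNode(valid_key=0x2A)}
ctrl = ControlNode({0x1000: ("node-0", 0, 0x2A)})
local_map = {0x1000: ("node-0", 0, 0x11)}                 # locally cached key is stale
print(execute(local_map, ctrl, nodes, "write", 0x1000, b"x"))   # ('ok', None) after retry
```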
In this implementation, each memory request is authenticated against the authentication key, so that memory requests are transmitted and served securely.
In another implementation, the received memory map includes a memory map of another storage node. The memory controller is configured to update the local memory mapping table and transmit an updated corresponding memory request to the other storage node.
In this implementation, the memory controller handles the case where the mapping now resides on another storage node by updating the local memory mapping table and redirecting the request, thereby maintaining low latency.
In another implementation, the memory request transmitted to the storage node is a remote memory access command.
Transmitting the memory request directly to the storage node as a remote memory access command reduces access time and therefore latency.
In another implementation, the logical memory address is associated with virtual persistent memory.
By utilizing virtual persistent memory, low latency can be achieved.
In another aspect, the present invention provides a method to be used in a memory controller in a client node comprising a local memory for storing a local memory mapping table. The client node is configured to execute an application program. The memory controller is coupled to one or more storage nodes, each comprising physical memory, and to a control node comprising a central memory mapping table that includes the local memory mapping tables of a plurality of client nodes. The method includes receiving a memory request for a logical address from the application; determining whether the logical address is in the local memory mapping table; if the logical address is in the local memory mapping table, executing the memory request according to the local memory mapping table; and if the logical address is not in the local memory mapping table, transmitting a corresponding memory request for the logical address to the control node and receiving a response from the controller. The method further includes updating the local memory mapping table accordingly.
The method achieves all the advantages and technical features of the memory controller of the invention.
It should be understood that all of the above implementations may be combined together.
It should be noted that all devices, elements, circuits, units and modules described in the present application may be implemented in software elements or hardware elements or any type of combination thereof. The steps performed by the various entities described in the present application and the functions to be performed by the various entities described are all intended to mean that the various entities are adapted to perform the various steps and functions. Even though in the description of specific embodiments below the specific functions or steps to be performed by external entities are not reflected in the description of specific detailed elements of the entity performing the specific steps or functions, it should be clear to a person skilled in the art that these methods and functions may be implemented in respective software or hardware elements, or any combination of such elements. It will be appreciated that features of the application are susceptible to being combined in various combinations without departing from the scope of the application as defined by the accompanying claims.
Other aspects, advantages, features and objects of the invention will become apparent from the accompanying drawings and the detailed description of illustrative implementations explained in conjunction with the following appended claims.
Drawings
The foregoing summary, as well as the following detailed description of illustrative embodiments, is better understood when read in conjunction with the appended drawings. For the purpose of illustrating the invention, there is shown in the drawings exemplary constructions of the invention. However, the invention is not limited to the specific methods and instrumentalities disclosed herein. Moreover, those skilled in the art will appreciate that the drawings are not drawn to scale. Identical elements are denoted by the same numerals, where possible.
Embodiments of the present invention will now be described, by way of example only, with reference to the following drawings.
FIG. 1A is a block diagram of an operative connection provided by an embodiment of the present invention to be used with a memory controller in a client node;
FIG. 1B is a block diagram of a client node provided by another embodiment of the present invention;
FIG. 2 is a diagram of a storage class memory-based map provided by an embodiment of the present invention;
FIG. 3 is a diagram of various exemplary components provided by an embodiment of the present invention;
FIG. 4 is a diagram of logical and physical views of a logical virtual persistent memory (virtual persistent memory, VPM) provided by an embodiment of the present invention;
FIG. 5 is a diagram depicting steps of a create virtual persistent memory (VPM) operation provided by an embodiment of the present invention;
FIG. 6 is a diagram of a virtual persistent memory (virtual persistent memory, VPM) cache according to one embodiment of the present invention;
FIG. 7 is a diagram of a client cache implementation provided by an embodiment of the present invention;
FIG. 8 is a diagram depicting a write operation from an application to a logical address provided by an embodiment of the present invention;
FIG. 9 is a diagram depicting a read operation from an application to a logical address provided by an embodiment of the invention;
FIG. 10 is a diagram provided by an embodiment of the present invention depicting a read operation involving a local cache hit and an authentication key;
FIG. 11 is a diagram depicting a read operation involving a local cache miss provided by an embodiment of the present invention;
FIG. 12 is a diagram provided by an embodiment of the present invention depicting a read operation involving a local cache hit;
fig. 13 is a flowchart of a method for storing a local memory mapping table in a memory controller according to an embodiment of the present invention.
In the drawings, the underlined numbers are used to denote items where the underlined numbers are located or items adjacent to the underlined numbers. The non-underlined number is associated with the item identified by the line linking the non-underlined number to the item. When a number is not underlined but with an associated arrow, the number without the underline is used to identify the general item to which the arrow refers.
Detailed Description
The following detailed description illustrates embodiments of the invention and the manner in which the embodiments may be implemented. While some embodiments of the invention have been disclosed, those skilled in the art will recognize that other embodiments for practicing or practicing the invention can be implemented as well.
Fig. 1A is a block diagram of an operative connection of a memory controller to be used in a client node provided by an embodiment of the present invention. Referring to FIG. 1A, a block diagram 100A is shown that includes a client node 102, a memory controller 104, one or more storage nodes 106, such as storage nodes 106A-106N, and a control node 108. Also shown are a plurality of physical memories 110A-110N and a local memory 112.
The client node 102 may comprise suitable logic, circuitry, interfaces and/or code that may be operable to execute an application on at least one application host node. Examples of client node 102 may include, but are not limited to, a computer, a personal digital assistant, a portable computing device, or an electronic device.
The memory controller 104 may comprise suitable logic, circuitry, interfaces and/or code that may be operable to perform memory allocation. Examples of implementations of the memory controller 104 may include, but are not limited to, a central data processing device, a microprocessor, a microcontroller, a complex instruction set computing (complex instruction set computing, CISC) processor, an application-specific integrated circuit (ASIC) processor, a reduced instruction set computing (reduced instruction set computing, RISC) processor, a very long instruction word (very long instruction word, VLIW) processor, a state machine, and other processors or control circuits.
One or more storage nodes 106 include storage nodes 106A through 106N. Each of the one or more storage nodes 106 may be operable to provide status information to the memory controller 104 and also to allocate memory via various selection criteria. Examples of one or more storage nodes 106 include, but are not limited to, a block storage system, a file storage system, an object storage system, or a combination thereof.
The control node 108 may comprise suitable logic, circuitry, interfaces and/or code that may be operable to execute an application program. Examples of control node 108 may include, but are not limited to, a computer, a personal digital assistant, a portable computing device, or an electronic device.
Each of the plurality of physical memories 110A-110N may comprise suitable logic, circuitry, and/or interfaces that may enable storage of memory allocation information received by the client node 102. Examples of implementations of the plurality of physical memories 110A-110N may include, but are not limited to, electrically erasable programmable read-only memory (EEPROM), dynamic random-access memory (DRAM), random-access memory (RAM), read-only memory (ROM), hard disk drive (HDD), flash memory, Secure Digital (SD) card, solid-state drive (SSD), Non-Volatile Memory Express (NVMe), storage class memory (SCM), persistent memory (PM), and/or CPU cache. In one example, each of the plurality of physical memories 110A-110N may correspond to physical memory included in each corresponding storage node. For example, physical memory 110A is included in storage node 106A and physical memory 110B is included in storage node 106B, and similarly for subsequent storage nodes.
The local memory 112 may comprise suitable logic, circuitry, and/or interfaces that may be operable to store instructions that may be executed by the memory controller 104. Examples of implementations of the local memory 112 may include, but are not limited to, electrically erasable programmable read-only memory (EEPROM), dynamic random-access memory (DRAM), random-access memory (RAM), read-only memory (ROM), hard disk drive (HDD), flash memory, Secure Digital (SD) card, solid-state drive (SSD), Non-Volatile Memory Express (NVMe), storage class memory (SCM), persistent memory (PM), and/or CPU cache.
A memory controller 104 is provided to be used in the client node 102. The memory controller 104 is operably connected to one or more storage nodes 106, where each storage node includes physical memory. The memory controller 104 is also operatively coupled to a control node 108, the control node 108 including a central memory map table including local memory maps of a plurality of client nodes. In other words, the memory controller 104 is operatively coupled to one or more storage nodes 106 and control nodes 108. The control node 108 also includes a central memory map for storing local memory maps of a plurality of client nodes (e.g., client node 102). Each storage node, such as storage nodes 106A-106N, includes physical memory. For example, storage node 106A includes physical memory 110A. Similarly, storage node 106B includes physical memory 110B and is similar for all subsequent storage nodes up to storage node 106N.
Further provided, the client node 102 includes a local memory 112 for storing a local memory map. The client node 102 is configured to execute an application program to send a memory request to the memory controller 104. The memory requests may correspond to different virtual persistent memory (virtual persistent memory, VPM) operations, which may include write operations, read operations, create operations, and any other such operations, without limiting the scope of the invention.
In operation, the memory controller 104 is configured to receive a memory request for a logical address from an application program and determine whether the logical address is in the local memory mapping table. If the logical address is in the local memory mapping table, the memory controller 104 executes the memory request according to the local memory mapping table. If the logical address is not found in the local memory mapping table, the memory controller 104 transmits a corresponding memory request for the logical address to the control node 108. The memory controller 104 then receives a response from the control node 108 and updates the local memory mapping table accordingly, based on the memory request. In one example, if the logical address is in the local memory mapping table and the memory request corresponds to a write operation, the local memory mapping table is updated according to the write operation. In another example, if the logical address is in the local memory mapping table and the memory request corresponds to a read operation, the local memory mapping table is updated according to the read operation. In one implementation, the VPM logical capacity may be thin-provisioned and distributed across client nodes such as the client node 102. Thin provisioning means that the logical capacity of a VPM can be expanded or contracted through an application program interface (API) exposed by the client library. Further, physical memory (e.g., physical memory 110A of storage node 106A) is used for resource allocation, so the client node 102 allocates the required physical space based solely on the write input/output pattern. Thin provisioning of distributed and shared persistent resource allocation is thereby achieved. In addition, the client node 102 supports access to a non-contiguous logical space. A read of a logical address that has not yet been allocated returns zeros; this convention for unallocated logical addresses simplifies execution of the application.
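The thin-provisioning behaviour described here can be illustrated with the following sketch, in which physical units are allocated lazily on writes and reads of unallocated addresses return zeros; the unit size and class name are assumptions (the description suggests units on the order of 1 GB).

```python
CGAU_SIZE = 1 << 20      # allocation unit size for this sketch (about 1 GB in the description)

class ThinProvisionedVPM:
    """Sketch of a thin-provisioned VPM: units are materialised only when written."""
    def __init__(self):
        self.units = {}  # unit index -> bytearray, allocated lazily

    def write(self, offset, data):
        idx, off = divmod(offset, CGAU_SIZE)
        if idx not in self.units:
            self.units[idx] = bytearray(CGAU_SIZE)   # allocate physical space on demand
        self.units[idx][off:off + len(data)] = data

    def read(self, offset, length):
        idx, off = divmod(offset, CGAU_SIZE)
        if idx not in self.units:
            return bytes(length)                     # unallocated address: zeros
        return bytes(self.units[idx][off:off + length])

# Accesses in this sketch are assumed not to cross a unit boundary.
vpm = ThinProvisionedVPM()
print(vpm.read(5 * CGAU_SIZE, 4))                    # zeros, nothing allocated yet
vpm.write(CGAU_SIZE + 5 * 1024, b"data")
print(vpm.read(CGAU_SIZE + 5 * 1024, 4), len(vpm.units))   # b'data' 1
```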
According to one embodiment, the local memory mapping table is a subset of the central memory mapping table. In other words, the central memory mapping table includes the local memory mapping tables of a plurality of client nodes (e.g., the client node 102). For example, a local memory mapping table is stored in the client node 102 so that logical addresses can be resolved at the client node 102. In one implementation, if a logical address is not found in the local memory mapping table, the mapping is obtained from the central memory mapping table and stored in the local memory mapping table.
According to another embodiment, the memory request transmitted to the storage node is a remote memory access command. The memory request may correspond to a memory operation of write, read, compare-and-swap (CAS), etc. Memory requests require access to memory to perform memory operations in a corresponding storage node (e.g., storage node 106A).
According to a further embodiment, the remote memory access command is a remote direct memory access (remote direct memory access, RDMA) command. The memory request is transmitted to the storage node to perform the desired operation. In one example, if the memory request is a write operation, the write operation is performed in the corresponding storage node. The transfer of memory requests from the memory controller 104 to the storage nodes corresponds to RDMA commands that provide an interconnect between the client node 102 and the storage nodes with low latency.
According to one embodiment, the remote memory access command is a Compute Express Link (CXL) memory protocol CXL.mem command. In one implementation, the transmission of memory requests from the memory controller 104 to the storage nodes is performed by RDMA commands that provide an interconnection between the client node 102 and the storage nodes. Similarly, the memory access command may be a CXL.mem command that transmits a memory request to a storage node to perform a memory operation. Advantageously, such remote memory access commands support both volatile and persistent memory architectures. The command may also be a CXL.cache protocol command or a CXL.io protocol command.
According to one embodiment, the logical memory address is associated with virtual persistent memory. In one implementation, the logical memory address provides a remote address of the virtual persistent memory that may be accessed through the logical memory address stored in the local memory map.
According to one embodiment, the memory controller 104 is for use in virtual persistent memory. Virtual persistent memory is a byte-addressable memory that provides low latency and improved efficiency for memory controllers.
The memory controller 104 is used in the client node 102, where the client node 102 includes a local memory for storing a local memory mapping table to ensure low latency. Advantageously, the memory controller 104 is configured to receive the memory request for the logical address from the application program and determine whether the logical address is in the local memory mapping table. In addition, the memory controller 104 is configured to receive a response from the controller and update the local memory mapping table accordingly. By storing a local memory mapping table, the memory controller 104 enables the one or more storage nodes 106 and the plurality of physical memories 110A through 110N to be shared (i.e., the same memory space to be accessed) between the client node 102 and the control node 108. In other words, the memory controller 104 provides an effective and efficient shared persistent memory experience while reducing latency. The low latency further enables direct memory reads from the client node 102 that bypass the remote CPU for address translation, while preserving multi-tenant guarantees (i.e., domain-isolated access). The memory controller 104 also stores memory mapping details and enforces memory protection so that the remote distributed memory pool can be accessed with thin provisioning and domain isolation.
Fig. 1B is a block diagram of a client node provided by another embodiment of the present invention. Fig. 1B is described in conjunction with the elements of fig. 1A. Referring to fig. 1B, a block diagram 100B of the client node 102 is shown. In one implementation, the client node 102 further includes a memory 116, a communication interface 114, and the memory controller 104.
In one implementation, the operations performed by the client node 102 may be performed and controlled by the memory controller 104. Examples of memory controller 104 may include, but are not limited to, a microprocessor, a microcontroller, a complex instruction set computing (complex instruction set computing, CISC) processor, an application-specific integrated circuit (ASIC) processor, a reduced instruction set computing (reduced instruction set computing, RISC) processor, a very long instruction word (very long instruction word, VLIW) processor, a central processor (central processing unit, CPU), a state machine, a data processing unit, and other processors or circuits.
The communication interface 114 may also be referred to as a network interface, which may comprise suitable logic, circuitry, and/or interfaces that may be operable to communicate with one or more of the storage node 106, memory controller 104, and components of the control node 108 shown in fig. 1A.
The memory 116 may comprise suitable logic, circuitry, and/or interfaces that may enable storage of memory allocation information received by the client node 102. Examples of implementations of the memory 116 may include, but are not limited to, electrically erasable programmable read-only memory (EEPROM), dynamic random-access memory (DRAM), random-access memory (RAM), read-only memory (ROM), hard disk drive (HDD), flash memory, Secure Digital (SD) card, solid-state drive (SSD), Non-Volatile Memory Express (NVMe), storage class memory (SCM), persistent memory (PM), and/or CPU cache.
FIG. 2 is a diagram of a storage class memory-based map provided by an embodiment of the present invention. Fig. 2 is described in conjunction with the elements of fig. 1A and 1B. Referring to fig. 2, an illustration 200 is shown. Also shown are a first client node 202A, a second client node 202B, a first library 204A, a second library 204B, a first remote direct memory access (remote direct memory access, RDMA) Network Interface Card (RNIC) 206A, a second RNIC 206B, a first central processor (central processing unit, CPU) 208A, a second CPU 208B, and a first persistent memory (PERSISTENT MEMORY, PM) 210A and a second PM 210B.
The first client node 202A and the second client node 202B each correspond to the client node 102 in fig. 1A. The first library 204A and the second library 204B correspond to persistent memory address libraries. In one implementation, each client node includes an RNIC; the RNIC on each client node side is used internally by the first library 204A and the second library 204B (or persistent memory address libraries) to connect to the first RNIC 206A and the second RNIC 206B. To enable low-latency memory operations, the first client node 202A is coupled to the first RNIC 206A and the second RNIC 206B, and similarly the second client node 202B is coupled to the first RNIC 206A and the second RNIC 206B. Alternatively, each client node, for example the first client node 202A and the second client node 202B, may support software emulation of an RDMA network interface.
The first RNIC 206A and the second RNIC 206B refer to RDMA network interface cards. In one example, the first RNIC 206A and the second RNIC 206B belong to one network fabric. In one implementation, memory requests (i.e., memory access requests) pass through the first RNIC 206A and the second RNIC 206B, thereby bypassing the first CPU 208A and the second CPU 208B. For example, as shown in fig. 2, the request path passes through the first RNIC 206A and the second RNIC 206B while the first CPU 208A and the second CPU 208B are not involved.
The first CPU 208A and the second CPU 208B refer to central processors of corresponding storage nodes that are bypassed by the client node 102. Also, the first persistent memory 210A and the second persistent memory 210B refer to memories that can be accessed by the client node 102.
In one implementation, the first client node 202A includes a first library 204A, which first library 204A provides the logical address of the storage node that the second client node 202B needs to access. The first library 204A provides a logical address and the second client node 202B may directly access the first persistent memory 210A via the second RNIC 206B. In other words, the first client node 202A includes a first library 204A, which first library 204A is capable of communicating with a first persistent memory 210A and a second persistent memory 210B by bypassing the first CPU 208A and the second CPU 208B by the first RNIC 206A and the second RNIC 206B. Similarly, the second client node 202B includes a second library 204B, which second library 204B is capable of communicating with the first persistent memory 210A and the second persistent memory 210B by bypassing the first CPU 208A and the second CPU 208B by the first RNIC 206A and the second RNIC 206B. Thus, the first library 204A and the second library 204B provide zero copy and random access to any storage node. In contrast to conventional approaches, it is advantageous to provide remote persistent memory that is shared among multiple clients, such as first persistent memory 210A and second persistent memory 210B that are shared between first client node 202A and second client node 202B. In other words, the first client node 202A and the second client node 202B are accessing the same memory space. Further, first persistent memory 210A and second persistent memory 210B are distributed between first client node 202A and second client node 202B. Thus, there is remote access, shared and distributed memory, zero copy, random access, multi-tenant domain isolated access to the first client node 202A and the second client node 202B.
FIG. 3 is a diagram of various exemplary components for memory allocation provided by an embodiment of the present invention. Fig. 3 is described in conjunction with the elements of fig. 1A, 1B, and 2. Referring to FIG. 3, a diagram 300 is shown that includes client nodes 302A-302N, a plurality of client libraries 304A-304N, and a plurality of virtual persistent memory (VPM) caches 306A-306N. Also shown are a network fabric 308, a coarse grain allocation unit (CGAU) mapping database (DB) 310, and a control plane 312. Also shown are a plurality of data planes 314A-314N, a plurality of persistent memories (PM) 316A-316N, a plurality of storage node caches 318A-318N, and the one or more storage nodes 106A-106N.
Each of the client nodes 302A through 302N corresponds to the client node 102 of fig. 1. Each of the plurality of client libraries 304A-304N is operable to manage the local cache using access details of memory allocated at the remote storage node. The client library allows direct communication with one or more storage nodes 106 and, if direct communication cannot be established, with the control plane.
Each VPM cache of the plurality of VPM caches 306A through 306N refers to a potentially stale logical mapped cache that holds information regarding how remote memory is reached in terms of addressing, network access, and memory protection attributes.
Network fabric 308 includes hardware or software for establishing communication between the plurality of client nodes 302A through 302N, the plurality of data planes 314A through 314N, and the control plane 312. Examples of network fabric 308 may include, but are not limited to, computer ports, network sockets, network interface controllers (NICs), and any other network interface devices. In one example, network fabric 308 may correspond to remote direct memory access (RDMA) or Compute Express Link (CXL).
The coarse grain allocation unit (CGAU) mapping database (DB) 310 is a map that holds information about how to reach remote memory in terms of addressing, network access, and memory protection attributes, and also provides usage hints.
The control plane 312 is used to manage the CGAU mapping database 310 in a load-balanced manner across the one or more storage nodes 106. The control plane 312 provides a layer with multiple endpoints that exposes core allocation services, management, and accounting. The control plane 312 may asynchronously service multiple allocation requests and provides logical-to-physical mapping and logical tenant separation.
Each of the plurality of data planes 314A-314N corresponds to a plane that comprises a corresponding storage node, a corresponding persistent memory, and a corresponding storage node cache. In one implementation, the data plane is used to validate RDMA keys and handle data replication. For example, data plane 314A comprises the storage node 106A, persistent memory 316A, and storage node cache 318A.
Each persistent memory of the plurality of persistent memories 316A-316N may refer to a non-volatile, solid-state, high-performance byte-addressable memory that resides on a memory bus. In general, persistent memory may be defined as ordinary memory that persists when powered down or shut down. Examples of the plurality of persistent memories 316A-316N may include, but are not limited to, byte-addressable three-dimensional cross-point memory (e.g., phase change memory), resistive Random Access Memory (RRAM), spin-transfer torque RAM (STTRAM).
The plurality of storage node caches 318A-318N refer to mapped caches that hold information about reaching remote memory in terms of addressing, network access, and memory protection attributes.
In one implementation, the plurality of VPM caches 306A through 306N correspond to potentially stale mapping caches. Each of the plurality of client libraries 304A-304N is configured to manage its corresponding VPM cache (or local cache) using the access details recorded in the CGAU mapping database 310 (i.e., details of memory allocated at a remote storage node). In one example, the logical mapping caches, including the plurality of storage node caches 318A-318N, maintain information regarding how remote memory is reached in terms of addressing, network access, and memory protection attributes. In addition, an end-to-end system protocol runs from the client nodes 302A through 302N to the plurality of data planes 314A through 314N. In one example, the end-to-end system protocol takes into account that the plurality of VPM caches 306A through 306N may miss a CGAU mapping. The control plane 312 manages information about the available storage nodes. The control plane 312 allows dynamic storage extension and handles service interruptions due to network or power failures. In addition, the control plane 312 continually monitors storage resources and proactively responds to reduce service interruptions caused by partial storage node outages, for example by removing unresponsive storage nodes from the map, reducing the replication level of some logical CGAUs in the mapping database 310, or moving backup allocation units from one of the one or more storage nodes 106 to a different storage node.
In addition, the end-to-end system protocol also accounts for stale cached mappings that were once valid but have since been modified by the control plane 312 while still cached at the client nodes 302A-302N (e.g., for load balancing, tiering, or any other purpose). The plurality of client libraries 304A-304N handle client-server communication directly, so redundant hops can be eliminated. The plurality of client libraries 304A-304N also handle thin provisioning and the access details of the CGAU mapping database 310. To simplify the system, the control plane 312 does not initiate invalidation of the plurality of VPM caches 306A through 306N; instead, each client node 302A-302N contacts the control plane 312 to update or invalidate its own cache, such as the corresponding VPM cache 306A-306N, and the control plane 312 is not required to actively update the client nodes 302A-302N. Advantageously, the interconnection of the client nodes 302A-302N, the plurality of data planes 314A-314N, and the control plane 312 supports a potentially stale mapping cache for distributed and shared memory allocation. In addition, memory requests stream directly to the data planes 314A-314N (i.e., to the one or more storage nodes 106) without CPU involvement, which contributes to low latency.
FIG. 4 is an illustration of logical and physical views of a logical virtual persistent memory provided by an embodiment of the present invention. Fig. 4 is described in conjunction with the elements of fig. 1A, 1B, 2 and 3. Referring to fig. 4, a diagram 400 is shown that includes a first logical virtual persistent memory (virtual persistent memory, VPM) 402, a second logical VPM 404, and storage nodes 406A-406E.
The first logical VPM 402 refers to virtual persistent memory comprising logical units. Each logical unit is represented by an offset, as well as two corresponding physical allocation units.
The second logical VPM 404 corresponds to a logical VPM and includes logical units supported by physically allocated units. Each logic cell is also represented by an offset.
The two logical virtual persistent memories (i.e., the first logical VPM 402 and the second logical VPM 404) illustrate logical virtual persistent memories belonging to two different client nodes (e.g., client node 102A and client node 102B). Each logical VPM is backed by the five corresponding storage nodes (e.g., storage nodes 406A-406E). In one example, the first logical VPM 402 includes different offsets backed by the storage nodes 406A-406E, such as offset 1 gigabyte (GB), offset 2GB, and offset 4GB. For example, storage nodes 406B and 406E hold the unit at offset 1GB, storage nodes 406A and 406B hold the unit at offset 2GB, and storage nodes 406D and 406E hold the unit at offset 4GB. Similarly, the second logical VPM 404 includes offset 1GB and offset 3GB backed by storage nodes 406A through 406E. For example, storage node 406A and storage node 406C hold the unit at offset 3GB, and storage node 406B and storage node 406E hold the unit at offset 1GB.
In one implementation, the physical and logical space of the client node 102 (or of the control node 108 or of the one or more storage nodes 106) is divided into coarse grain allocation units (CGAUs). CGAUs provide functions such as load balancing, security isolation, and thin provisioning. In one example, a CGAU is aligned across the storage nodes 406A-406E and sized as an integer multiple of the operating system allocation granularity (e.g., 1 gigabyte). CGAUs can be allocated on request (or on demand) and are mapped into logical domains that act as independent address spaces. Further, the first logical VPM 402 and the second logical VPM 404 (or their logical units) are backed by one or more physical units across the cluster according to the replication requirements. In one implementation, the first logical VPM 402 and the second logical VPM 404 comprise a number of logical units having access isolation and near-infinite scale-out in terms of size and backing physical devices. In one example, access isolation is achieved by addressing and memory access protection that are independent of the device. In addition, the client node 102 has read/write access rights to shared logical VPMs, such as the first logical VPM 402 and the second logical VPM 404. In other words, an end-user application may read, write, or CAS shared logical VPM addresses (or offsets) in a manner similar to how processes in a single server access local shared memory. In one example, a pool service layer is used to perform virtual-address-to-remote-physical-address translation, as well as local access caching. Advantageously, the capacity of the first logical VPM 402 and the second logical VPM 404 is thin-provisioned, such that each of the first logical VPM 402 and the second logical VPM 404 (or VPM domain) can grow nearly without limit.
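A minimal sketch of this offset-to-CGAU-to-storage-node resolution is given below; the 1 GB unit size follows the example above, while the allocation table contents and names are invented for illustration.

```python
CGAU = 1 << 30                       # 1 GiB coarse grain allocation unit (example value)

# Example allocation table for one logical VPM: unit index -> storage nodes
# holding that unit (primary replica first). Contents are illustrative only.
vpm_allocation = {
    1: ["DS2", "DS5"],               # unit at offset 1 GiB
    2: ["DS1", "DS2"],               # unit at offset 2 GiB
    4: ["DS4", "DS5"],               # unit at offset 4 GiB
}

def locate(offset):
    idx, within = divmod(offset, CGAU)
    replicas = vpm_allocation.get(idx)
    if replicas is None:
        return None                  # thin-provisioned: no physical unit allocated yet
    return replicas[0], idx, within  # primary storage node, unit index, offset inside the unit

print(locate(2 * CGAU + 4096))       # ('DS1', 2, 4096)
print(locate(3 * CGAU))              # None
```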
Fig. 5 is a diagram providing an illustration depicting steps of creating a virtual persistent memory (virtual persistent memory, VPM) operation according to an embodiment of the invention. Fig. 5 is described in conjunction with the elements of fig. 1A, 1B, 2,3 and 4. Referring to fig. 5, a diagram 500 is shown that includes an application 502, a client node 102, and a control node 108. Client node 102 also includes a mapping cache 504 and a client library 506. Similarly, the control node 108 includes a controller 508 and a database 510.
Application 502 refers to an application executed by client node 102. Application 502 may include different virtual persistent memory (virtual persistent memory, VPM) operations such as write, create, read, and any other operations without limiting the scope of the invention. The client node 102 executes the application 502 to send virtual persistent memory operations (i.e., memory requests) to the memory controller 104 to perform the corresponding operations.
Mapping cache 504 refers to a cache that stores local memory mapping tables that provide logical addresses for corresponding memory requests initiated by application 502.
The client library 506 corresponds to the first library 204A and the second library 204B of fig. 2 (or to each of the plurality of client libraries 304A-304N of fig. 3).
The controller 508 may comprise suitable logic, circuitry, interfaces and/or code that may be operable to process requests and transmit responses on behalf of the control node 108. Examples of implementations of the controller 508 may include, but are not limited to, a central data processing device, a microprocessor, a microcontroller, a complex instruction set computing (CISC) processor, an application-specific integrated circuit (ASIC) processor, a reduced instruction set computing (RISC) processor, a very long instruction word (VLIW) processor, a state machine, and other processors or control circuits.
Database 510 corresponds to the central mapping table. The central mapping table includes the local memory mappings of a plurality of client nodes (e.g., the client node 102). If a logical address is not found in a client's local memory mapping table, the controller 508 retrieves the mapping from the central mapping table. In one implementation, the database 510 is also referred to as the central memory mapping table.
In one implementation, the client node 102 is configured to execute an application 502. The client library 506 receives a memory request (e.g., a create VPM operation) for a logical address from the application 502. The memory controller 104 of the client node 102 is configured to determine whether the logical address is in the mapping cache 504 (i.e., in the local memory mapping table); this determination is referred to as a lookup. If the logical address is not found in the mapping cache 504, the memory controller 104 transmits a memory request (e.g., CreateVPM(vpm: myVpm)) for the logical address to the control node 108. The control node 108 receives the memory request from the client node 102 and creates the virtual persistent memory. The controller 508 is configured to store the logical address in the database 510. In one example, the database 510 includes the memory mappings of a plurality of client nodes, and the VPM is created in one of the one or more storage nodes 106. The memory controller 104 then receives the logical address of the virtual persistent memory and stores it in the mapping cache 504. As a result, the create VPM operation results in the creation of a new VPM that may subsequently be accessed through memory operations. The memory controller 104 of the client node 102 thus receives a response from the controller 508 of the control node 108 and updates the mapping cache 504 accordingly.
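The create-VPM exchange can be sketched as follows, assuming hypothetical Controller and ClientLibrary classes; only the message flow (lookup, CreateVPM to the control node, store in the database, cache the reply) is taken from the description.

```python
class Controller:
    """Stand-in for controller 508 inside the control node, backed by database 510."""
    def __init__(self):
        self.database = {}                       # VPM name -> metadata (central mapping)
        self.next_id = 1

    def create_vpm(self, name):
        if name not in self.database:
            self.database[name] = {"vpm_id": self.next_id, "cgaus": {}}
            self.next_id += 1
        return self.database[name]


class ClientLibrary:
    """Stand-in for client library 506 with its mapping cache 504."""
    def __init__(self, controller):
        self.mapping_cache = {}
        self.controller = controller

    def create_vpm(self, name):
        if name not in self.mapping_cache:       # lookup in the mapping cache
            # Miss: forward CreateVPM(vpm: name) to the control node, cache the reply.
            self.mapping_cache[name] = self.controller.create_vpm(name)
        return self.mapping_cache[name]


lib = ClientLibrary(Controller())
print(lib.create_vpm("myVpm"))                   # created via the control node
print(lib.create_vpm("myVpm"))                   # served from the local mapping cache
```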
FIG. 6 is a diagram of a virtual persistent memory (VPM) cache according to an embodiment of the present invention. Fig. 6 is described in conjunction with the elements of fig. 1A, 1B, 2, 3, 4, and 5. Referring to fig. 6, a diagram 600 is shown depicting a virtual persistent memory (VPM) cache 602 that includes a VPM coarse grain allocation unit (CGAU) 604, a data store (DSi) CGAU 606, and a CGAU offset 608.
VPM cache 602 refers to a potentially stale logical mapped cache that holds information about how remote memory is reached in terms of addressing, network access, and memory protection attributes.
VPM CGAU 604 denotes a VPM and its corresponding coarse grain allocation unit. DSi CGAU 606 refers to the address of the CGAU stored in the VPM cache 602. CGAU offset 608 refers to the coarse grain allocation unit offset within the storage node that stores the corresponding VPM.
In one implementation, a client library (e.g., one of the client libraries 304A-304N of FIG. 3) is used to manage the VPM cache 602 through the access details of the allocated memory (i.e., DSi CGAU 606). The VPM cache 602 maintains information about remote memory in terms of addressing, network access, and memory protection attributes. DSi CGAU 606 provides the address of the allocation unit on the data store, and VPM CGAU 604 identifies the corresponding coarse grain allocation unit within the VPM. In one example, the VPM cache 602 holds a subset of the VPM-related information, identifying the storage node responsible for the primary copy of VPM CGAU 604 and its DS address (e.g., DSi CGAU 606). In one example, a VPM CGAU is 1 gigabyte in size. In one implementation, the VPM cache 602 also includes a remote direct memory access (RDMA) access key that is used for authentication against the one or more storage nodes 106. Further, each CGAU, for example VPM CGAU 604, has an assigned access key, and the cached mapping becomes invalid if the corresponding RDMA access key (credential) is removed.
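One possible shape of a VPM cache entry, inferred from the fields discussed above (VPM CGAU, DSi CGAU, CGAU offset, RDMA key), is sketched below; the field names and example values are assumptions.

```python
from dataclasses import dataclass

@dataclass
class VpmCacheEntry:
    vpm_cgau: int        # coarse grain allocation unit index within the logical VPM
    ds_node: str         # data store (storage node) holding the primary copy
    ds_cgau_offset: int  # offset of the allocation unit inside that storage node
    rdma_key: int        # access key used to authenticate remote accesses

# The VPM cache maps a VPM CGAU index to how that unit is reached remotely.
vpm_cache = {
    0: VpmCacheEntry(vpm_cgau=0, ds_node="DS3", ds_cgau_offset=7 << 30, rdma_key=0x120),
}
print(vpm_cache[0])
```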
Fig. 7 is a diagram of a client application provided by an embodiment of the present invention. Fig. 7 is described in conjunction with the elements of fig. 1A, 1B, 2, 3, 4, 5, and 6. Referring to FIG. 7, an illustration 700 of a client application 702 including a persistent memory address store 704, a virtual persistent memory (virtual persistent memory, VPM) cache 706, and an allocation map database 708 is shown. Further shown are storage node (x) 710 and VPM logical view 712.
Client application 702 may correspond to the application 502 of the client node 102 (or of the client nodes 302A-302N of fig. 3). Client application 702 includes the persistent memory address store 704 and the VPM cache 706. The persistent memory address store 704 corresponds to the client libraries 304A-304N of fig. 3, which manage the VPM cache 706 together with the allocation map database 708 (or the CGAU mapping database 310). In addition, the persistent memory address store 704 holds information related to the remote address on each storage node. In one example, each storage node, such as storage node (x) 710, corresponds to the one or more storage nodes 106 of fig. 1A. In one implementation, the remote address on each storage node and its corresponding allocation hint are stored in the VPM cache 706 in tabular form along with a virtual persistent memory offset, a remote direct memory access (RDMA) key, and a VPM timestamp. In addition, client application 702 includes a VPM logical view 712 of the virtual persistent memory that distinguishes regions where the VPM is provisioned from regions where it is not (i.e., space available for allocation and space that is unavailable).
In one implementation, the table of the VPM cache 706 provides a subset of the fields of the allocation map database 708. In one example, as shown in the first row of the table of the VPM cache 706, the VPM offset is 100, the address is "X", the RDMA key used for verification is 0x120, and the allocation hint is "NA". Further shown is a hash table that holds the thin-provisioned VPM even though it is only sparsely used. The hash-table implementation allows the client node 102 to efficiently handle unbounded and unknown VPM sizes; a minimal sketch of such a cache follows.
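To make the structure concrete, the following is a minimal sketch of such a hash-table-based VPM cache. It is written in Python, and every identifier in it (VpmCacheEntry, VpmCache, CGAU_SIZE, lookup, update) is an illustrative assumption rather than a name taken from the patent; the fields are limited to the subset described above (data store address, CGAU offset, RDMA key, timestamp, allocation hint).

```python
# Minimal sketch of a hash-table-based VPM cache (illustrative names only).
from dataclasses import dataclass
from typing import Dict, Optional

CGAU_SIZE = 1 << 30  # assume a 1 GiB coarse-granularity allocation unit


@dataclass
class VpmCacheEntry:
    ds_address: str            # which data store / storage node holds the CGAU
    ds_offset: int             # CGAU offset within that storage node
    rdma_key: int              # RDMA access key used to authenticate requests
    timestamp: int             # VPM timestamp, so stale entries can be detected
    allocation_hint: str = "NA"


class VpmCache:
    """Potentially stale cache of logical mappings, keyed by CGAU index.

    A hash table keeps sparsely used, unbounded VPM offsets cheap to store.
    """

    def __init__(self) -> None:
        self._entries: Dict[int, VpmCacheEntry] = {}

    def lookup(self, vpm_offset: int) -> Optional[VpmCacheEntry]:
        return self._entries.get(vpm_offset // CGAU_SIZE)

    def update(self, vpm_offset: int, entry: VpmCacheEntry) -> None:
        self._entries[vpm_offset // CGAU_SIZE] = entry

    def invalidate(self, vpm_offset: int) -> None:
        self._entries.pop(vpm_offset // CGAU_SIZE, None)


# Example loosely resembling the first row of the table described above.
cache = VpmCache()
cache.update(100, VpmCacheEntry(ds_address="X", ds_offset=0,
                                rdma_key=0x120, timestamp=1))
print(cache.lookup(100))
```

Keying the table by CGAU index rather than by raw byte offset is one way to keep a sparsely used, effectively unbounded VPM cheap to cache.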
FIG. 8 is a diagram depicting a write operation from an application to a logical address, according to an embodiment of the present invention. FIG. 8 is described in conjunction with the elements of FIGS. 1A, 1B, 2, 3, 4, 5, 6, and 7. Referring to FIG. 8, a diagram 800 including a data store (DSj) 802 is shown. Also shown are the client node 102, the storage node 106A, the control node 108, and the application 502. Client node 102 also includes a mapping cache 504 and a client library 506. The control node 108 comprises a controller 508 and a database 510.
DSj 802 refers to the data store in storage node 106A (one of the one or more storage nodes 106) that corresponds to a logical address in the mapping cache 504 (or local memory mapping table) or in the central memory mapping table.
In one implementation, the memory request is a write operation. In addition, the memory controller 104 is configured to transmit an allocation request for the logical address as the corresponding memory request to the control node 108, such that the control node 108 allocates physical memory in one of the one or more storage nodes 106. The memory controller 104 is also configured to receive the physical address of the allocated memory from the controller 508 as the response, to update the local memory map accordingly by mapping the logical address to the physical address, and to send a write request to the storage node storing the physical address. First, the memory controller 104 receives a memory request for a write operation (e.g., Write(offset: 1GB+5kB)) from the application 502. In one example, the memory request is stored in the client library 506. Thereafter, the memory controller 104 is configured to determine whether the logical address is in the local memory map, such as in the mapping cache 504. In one example, determining whether a logical address is in the local memory map is referred to as a lookup. If the logical address is not found in the local memory map (i.e., a cache miss), the memory controller 104 sends an allocation request (e.g., Allocate(vpm: 1, offset: 1GB)) for the logical address as the corresponding memory request to the control node 108.
The control node 108 then allocates physical memory in one of the one or more storage nodes 106 (e.g., in storage node 106A). Thereafter, the controller 508 in the control node 108 records the physical address of the allocated memory, such as the physical address in storage node 106A. Thereafter, the memory controller 104 receives the physical address of the allocated memory from the controller 508 as the response. The memory controller 104 also updates the mapping cache 504 (or local memory map) accordingly by mapping the logical address to the physical address. Finally, the memory controller 104 transmits the write request to the storage node 106A that stores the physical address. In one example, storage node 106A refers to the storage node whose data store is DSj 802. In one example, storage node 106A verifies the authentication key (i.e., the RDMA key) and successfully executes the write command.
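The write path just described can be summarized with the following hedged sketch. It uses small stand-in types (MapEntry, ControlNodeStub, StorageNodeStub, rdma_write, rdma_read); these are assumptions made for illustration, not interfaces defined by the patent.

```python
# Hedged sketch of the write path with a local-map miss (illustrative stand-ins).
from dataclasses import dataclass


@dataclass
class MapEntry:
    node: "StorageNodeStub"    # storage node holding the allocated memory
    physical_offset: int       # physical address inside that node
    rdma_key: int              # key the node uses to authenticate the request


class StorageNodeStub:
    """Stand-in for a storage node: verifies the RDMA key, then serves the request."""

    def __init__(self) -> None:
        self.memory = {}

    def rdma_write(self, physical_offset: int, data: bytes, key: int) -> str:
        if key != 0x120:                      # RDMA key verification
            return "invalid-key"
        self.memory[physical_offset] = data
        return "ok"

    def rdma_read(self, physical_offset: int, length: int, key: int):
        if key != 0x120:                      # RDMA key verification
            return "invalid-key"
        return self.memory.get(physical_offset, bytes(length))  # zeros if unwritten


class ControlNodeStub:
    """Stand-in for control node 108: allocates physical memory on demand."""

    def __init__(self, node: StorageNodeStub) -> None:
        self._node, self._next = node, 0

    def allocate(self, vpm_id: int, logical_offset: int) -> MapEntry:
        entry = MapEntry(self._node, self._next, rdma_key=0x120)
        self._next += 1 << 30                 # hand out one CGAU-sized region each time
        return entry


def handle_write(local_map: dict, control: ControlNodeStub,
                 vpm_id: int, logical_offset: int, data: bytes) -> str:
    entry = local_map.get(logical_offset)
    if entry is None:                                      # cache miss
        entry = control.allocate(vpm_id, logical_offset)   # e.g. Allocate(vpm: 1, offset: 1GB)
        local_map[logical_offset] = entry                  # map logical -> physical address
    return entry.node.rdma_write(entry.physical_offset, data, entry.rdma_key)


node = StorageNodeStub()
print(handle_write({}, ControlNodeStub(node), 1, 1 << 30, b"payload"))  # -> ok
```

The same stand-ins are reused in the read sketches that follow.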
FIG. 9 is a diagram depicting a read operation from an application to a logical address, according to an embodiment of the present invention. FIG. 9 is described in conjunction with the elements of FIGS. 1A, 1B, 2, 3, 4, 5, 6, 7, and 8. Referring to FIG. 9, a diagram 900 is shown that includes the application 502, the client node 102, and the storage node 106A. Client node 102 also includes a mapping cache 504 and a client library 506. The control node 108 also includes a controller 508 and a database 510. Storage node 106A also includes DSj 802.
In one implementation, the memory controller 104 is configured to receive a memory request from the application 502, the memory request corresponding to a read operation. First, the memory controller 104 receives a memory request for a read operation (e.g., Read(offset: 1GB+5kB)) from the application 502. In one example, the memory request is stored in the client library 506. The memory controller 104 then checks whether the logical address targeted by the read operation is found in the local memory map. In one example, determining whether a logical address is in the local memory map (e.g., in the mapping cache 504) is referred to as a lookup. If the logical address is found in the mapping cache 504 (or local memory map), i.e., a cache hit, the memory controller 104 transmits the read request to the corresponding one of the one or more storage nodes 106, e.g., to the storage node 106A. In one example, that storage node corresponds to the storage node 106A having the data store of the logical address (e.g., DSj 802). After RDMA key validation, storage node 106A returns the data to the application 502 through the client node 102. In one example, the data is first received by the client library 506, then processed by the client node 102, and then passed to the application 502, as shown in FIG. 9.
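A corresponding sketch of the cache-hit read, reusing the stand-in types from the write sketch above (again assumed names, not the patent's interfaces):

```python
# Hedged sketch of a read that hits the local memory mapping table.
# Reuses the MapEntry / StorageNodeStub stand-ins from the write sketch above.
def handle_read_hit(local_map: dict, logical_offset: int, length: int):
    entry = local_map.get(logical_offset)          # lookup in mapping cache 504
    if entry is None:
        raise KeyError("cache miss; handled in the sketch after FIG. 11")
    # The storage node validates the RDMA key before returning the data.
    return entry.node.rdma_read(entry.physical_offset, length, key=entry.rdma_key)
```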
FIG. 10 is a diagram depicting a read operation involving a local cache hit and an authentication key, according to an embodiment of the present invention. FIG. 10 is described in conjunction with the elements of FIGS. 1A, 1B, 2, 3, 4, 5, 6, 7, 8, and 9. Referring to FIG. 10, a diagram 1000 is shown that includes the application 502, the client node 102, the control node 108, and the storage node 106A. Client node 102 also includes a mapping cache 504 and a client library 506. The control node 108 also includes a controller 508 and a database 510. Storage node 106A also includes DSj 802.
According to one embodiment, the memory controller 104 is configured to execute the memory request according to the local memory map by transmitting the corresponding memory request to the storage node 106A indicated by the local map. The corresponding memory request includes an authentication key, which causes the storage node to authenticate the corresponding memory request according to that key. The memory controller 104 is further configured to receive a response from the storage node 106A and, if the response is an indicator of an invalid authentication key, to transmit a memory map request for the logical address to the control node. The memory controller 104 then receives a memory map, including an updated authentication key, from the central memory map of the control node 108. The memory controller 104 is then configured to update the local memory map and to transmit an updated corresponding memory request, including the updated authentication key, to the storage node 106A. In one implementation, the memory request is a read operation, received by the memory controller 104 from the application 502 and executed by the client node 102. The memory controller 104 first determines whether the logical address targeted by the memory request is in the local memory map (i.e., the mapping cache 504). If the logical address is found in the local memory mapping table, the lookup returns a cache hit.
The memory controller 104 then transmits the memory request to the storage node 106A. In one example, the data store at the storage node may be referred to as DSj 802. The memory request received by storage node 106A includes an authentication key, and it causes the storage node 106A to validate the request according to that authentication key (i.e., the RDMA key). The memory controller 104 then receives a response from the storage node 106A. If the response from storage node 106A indicates an invalid authentication key, the memory controller 104 transmits a memory mapping request for the logical address to the control node 108. Thereafter, the control node 108 provides the updated authentication key from the central memory map. In one example, the memory controller 104 receives the memory map, including the updated authentication key, from the database 510 (or central memory mapping table) and updates the mapping cache 504 (or local memory mapping table) accordingly. The memory controller 104 then transmits an updated corresponding memory request, which includes the updated authentication key, to the storage node 106A. The storage node 106A verifies the authentication key and, if the received key is valid, performs the memory read operation accordingly by providing the data to the application 502.
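This recovery path can be sketched as follows, continuing the earlier stand-ins and assuming a get_mapping call on the control node that returns a fresh entry with the updated RDMA key (an illustrative interface only, not the patent's API):

```python
# Hedged sketch: local cache hit, but the storage node rejects the stale RDMA key.
# get_mapping is an assumed control-node call returning a fresh MapEntry.
def read_with_key_refresh(local_map: dict, control, logical_offset: int, length: int):
    entry = local_map[logical_offset]                       # cache hit
    data = entry.node.rdma_read(entry.physical_offset, length, key=entry.rdma_key)
    if data == "invalid-key":                               # authentication failed
        entry = control.get_mapping(logical_offset)         # updated key / mapping
        local_map[logical_offset] = entry                   # refresh local map table
        data = entry.node.rdma_read(entry.physical_offset, length,
                                    key=entry.rdma_key)     # retry with updated key
    return data
```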
FIG. 11 is a diagram depicting a read operation involving a local cache miss, according to an embodiment of the present invention. FIG. 11 is described in conjunction with the elements of FIGS. 1A, 1B, 2, 3, 4, 5, 6, 7, 8, 9, and 10. Referring to FIG. 11, a diagram 1100 is shown that includes the application 502, the client node 102, the control node 108, and one of the one or more storage nodes 106. Client node 102 also includes a mapping cache 504 and a client library 506. The control node 108 also includes a controller 508 and a database 510. Storage node 106A also includes a data store (DSj) 802.
In one implementation, the memory request received by the memory controller 104 is a read operation. The memory controller 104 is further configured to transmit a memory map request for the logical address to the control node 108 as the corresponding memory request, such that the control node 108 retrieves the mapping of the logical address from the central memory map. The memory controller 104 also receives the memory map of the requested logical address, including the physical address, from the controller 508, updates the local memory mapping table accordingly by inserting the mapping for the logical address, and transmits a read request to the storage node 106A at the physical address. First, the memory controller 104 receives a memory request for a read operation (e.g., Read(offset: 1GB+5kB)) from the application 502. In one example, the memory request is stored in the client library 506. The memory controller 104 is also configured to determine whether the logical address targeted by the read operation is in the local memory map (e.g., the mapping cache 504). If the logical address is not found in the local memory mapping table (i.e., a cache miss), the memory controller 104 transmits a memory map request for the logical address as the corresponding memory request to the control node 108. The control node 108 retrieves the mapping of the corresponding logical address from the central memory mapping table, such as the database 510, and transmits it to the memory controller 104. The memory controller 104 thus receives the memory map of the requested logical address, including the physical address, from the controller 508. Thereafter, the memory controller 104 updates the mapping cache 504 accordingly by inserting the mapping for the logical address. Finally, the memory controller 104 transmits the memory request to the storage node 106A at the physical address. In one example, the data store at storage node 106A may be referred to as DSj 802. In one example, the memory controller 104 retrieves the data after RDMA key verification.
In one implementation, the client library 506 receives a memory request, such as Read(offset: 1GB+5kB), from the application 502. The client node 102 then performs a lookup operation and, on a miss, sends a request (e.g., get the mapping for offset 1GB) to the control node 108. In one example, such a request is sent from the client library 506 to the controller 508 of the control node 108. Thereafter, the control node 108 performs the mapping operation against the database 510. The control node 108 then transmits the mapping information to the client node 102, such as DSj, offset: 3GB, the RDMA key, DSj IP network information, and so on. In one example, such information is stored in the mapping cache 504 of the client node 102. Thereafter, the client node 102 sends a read request, e.g., Read(DSj, offset: 3GB+5kB), to the storage node 106A, and the storage node 106A validates the RDMA key. Thereafter, the memory controller 104 of the client node 102 receives a response, such as the requested data, from the storage node 106A. Finally, the memory controller 104 of the client node 102 updates the mapping cache 504 based on the received response; a compact sketch of this flow is given below.
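A hedged summary of this cache-miss flow, with get_mapping again standing in for the "get the mapping for offset 1GB" request to the control node (assumed stub interfaces as above):

```python
# Hedged sketch: read with a local-map miss (assumed stub interfaces as above).
def handle_read(local_map: dict, control, logical_offset: int, length: int):
    entry = local_map.get(logical_offset)             # lookup in mapping cache 504
    if entry is None:                                 # cache miss
        entry = control.get_mapping(logical_offset)   # e.g. "get mapping of offset 1GB"
        local_map[logical_offset] = entry             # store DSj address, RDMA key, ...
    # Read(DSj, offset) goes to the storage node, which validates the RDMA key.
    return entry.node.rdma_read(entry.physical_offset, length, key=entry.rdma_key)
```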
FIG. 12 is a diagram depicting a read operation involving a local cache hit, according to an embodiment of the present invention. FIG. 12 is described in conjunction with the elements of FIGS. 1A, 1B, 2, 3, 4, 5, 6, 7, 8, 9, 10, and 11. Referring to FIG. 12, a diagram 1200 is shown that includes a storage node 1202, another storage node 1204, a data store (DSi) 1206, the application 502, the client node 102, and the control node 108. Client node 102 also includes a mapping cache 504 and a client library 506. The control node 108 also includes a controller 508 and a database 510.
Storage node 1202 and another storage node 1204 correspond to storage nodes among the one or more storage nodes 106. The storage node 1202 and the other storage node 1204 are configured to validate the RDMA key and to perform the corresponding operation for each received memory request.
According to one embodiment, the received memory map includes a memory map of another storage node 1204, and the memory controller 104 is configured to update the local memory mapping table and transmit the updated corresponding memory request to the other storage node 1204. In one implementation, the memory controller 104 is configured to execute the memory request according to the mapping cache 504 (or the local memory mapping table) by transmitting the corresponding memory request to the storage node 1202 indicated by the local map. In one example, the data store of the storage node 1202 corresponds to DSj 802. In another example, the data store of the other storage node 1204 corresponds to DSi 1206. In one implementation, the memory request is a read operation, received by the memory controller 104 from the application 502 and executed by the client node 102. The memory controller 104 determines whether the logical address targeted by the memory request is in the mapping cache 504. If the memory controller 104 finds the logical address in the mapping cache 504, the lookup returns a cache hit. The memory controller 104 then transmits a memory request, such as Read(DSk, offset: 3GB+5kB), to the storage node 1202. In one example, the memory request includes an authentication key with which the storage node 1202 authenticates the memory request. In one example, the response received from the storage node 1202 reflects an indicator of an invalid authentication key, and the memory controller 104 transmits a memory mapping request for the logical address to the control node 108. Thereafter, the controller 508 of the control node 108 provides the address of the other storage node 1204 to the memory controller 104. The memory controller 104 then updates the mapping cache 504 and transmits an updated memory request, such as Read(DSk, offset: 7GB+5kB), to the other storage node 1204. Thereafter, the other storage node 1204 receives the read request and performs the read operation by providing the data accordingly; a usage sketch of this redirection is given below.
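The redirection itself requires no extra machinery beyond the retry logic sketched after FIG. 10: when the refreshed mapping names a different data store, the retried request simply goes there. A usage example with the earlier stand-in types (node_dsj, node_dsi, MovedControlStub are all hypothetical names):

```python
# Usage example of read_with_key_refresh (sketched after FIG. 10) for the case
# where the fresh mapping names the other storage node. All stub types are the
# illustrative stand-ins introduced earlier, not the patent's interfaces.
node_dsj, node_dsi = StorageNodeStub(), StorageNodeStub()


class MovedControlStub:
    """Stand-in control node whose central map now points at the other node."""

    def get_mapping(self, logical_offset: int) -> MapEntry:
        return MapEntry(node=node_dsi, physical_offset=7 << 30, rdma_key=0x120)


local_map = {1 << 30: MapEntry(node=node_dsj, physical_offset=3 << 30,
                               rdma_key=0xBAD)}       # stale key for the old node
data = read_with_key_refresh(local_map, MovedControlStub(), 1 << 30, 5 * 1024)
assert local_map[1 << 30].node is node_dsi            # request was redirected
```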
FIG. 13 is a flowchart of a method for storing a local memory mapping table in a memory controller according to an embodiment of the present invention. FIG. 13 is described in conjunction with the elements of FIGS. 1A, 1B, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, and 12. Referring to FIG. 13, a flowchart of a method 1300 including steps 1302 to 1312 is shown. The memory controller 104 is configured to perform the method 1300.
A method 1300 of storing a local memory map by the memory controller 104 is provided. The memory controller 104 is used in the client node 102, where the client node 102 includes a local memory for storing a local memory map and is used to execute an application. The memory controller 104 is configured to connect to one or more storage nodes 106, wherein each storage node comprises physical memory, and to a control node 108, wherein the control node 108 comprises a central memory mapping table that includes the local memory maps of a plurality of client nodes.
In step 1302, the method 1300 includes receiving a memory request for a logical address from the application. In other words, a memory request for a logical address is received from the application 502; the logical address is then checked against the local memory map in the following step.
In step 1304, the method 1300 includes determining whether the logical address is in the local memory map. In one example, the memory controller 104 determines whether the logical address is in the local memory map; if it is, the method proceeds to step 1306, and otherwise it proceeds to step 1308.
In step 1306, the method 1300 includes executing the memory request according to the local memory map. In one example, if the logical address is in the local memory map and the memory request corresponds to a write operation, the write is executed according to the mapping found there. In another example, if the logical address is in the local memory map and the memory request corresponds to a read operation, the read is executed according to that mapping.
In step 1308, the method 1300 includes transmitting a corresponding memory request for the logical address to the control node. In other words, if the logical address is not found in the local memory map, the memory controller 104 transmits a corresponding memory request for the logical address to the control node 108.
In step 1310, the method 1300 includes receiving a response from the control node 108. In other words, a response for the logical address, such as its mapping or the physical address of newly allocated memory, is received from the control node 108.
In step 1312, the method 1300 includes updating the local memory map accordingly based on the memory request. In one example, if the memory request corresponds to a write operation, the local memory map is updated according to the write operation. In another example, if the logical address is in the local memory map and the memory request corresponds to a read operation, the local memory map is updated according to the read operation. In one implementation, the virtual persistent memory logical capacity may be thin-provisioned and distributed across client nodes 102. In one example, thin provisioning means that the logical capacity of a VPM can be expanded or contracted through an application program interface (application program interface, API) exposed by the client library. Further, the physical memory used for resource allocation (e.g., physical memory 110A of storage node 106A) is random access memory, so the client node 102 allocates only the physical space required by the write input/output pattern. Thus, thin provisioning of the distributed and shared persistent resource allocation is needed. In addition, the client node 102 supports access to a non-contiguous logical space and can read any logical address. A read of a logical address that has not yet been allocated returns zeros. In one example, the zero value indicates that the logical address is not allocated, which simplifies execution of the application; a sketch of this behaviour follows.
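A minimal sketch of the zero-fill behaviour for never-allocated logical addresses; get_mapping_if_allocated is an assumed control-node call that returns None when no physical memory has been allocated, which is an illustrative convention rather than the patent's protocol:

```python
# Hedged sketch: reads of never-allocated logical addresses return zeros, so no
# physical space is consumed until a write arrives (thin provisioning).
def thin_read(local_map: dict, control, logical_offset: int, length: int) -> bytes:
    entry = local_map.get(logical_offset)
    if entry is None:
        entry = control.get_mapping_if_allocated(logical_offset)
    if entry is None:
        return bytes(length)                  # all-zero buffer, nothing allocated
    local_map[logical_offset] = entry
    return entry.node.rdma_read(entry.physical_offset, length, key=entry.rdma_key)
```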
In one implementation, the local memory map is a subset of the central memory map. In other words, the central memory map includes the local memory maps of a plurality of client nodes (e.g., client node 102). For example, a local memory map is stored in the client node 102 so that logical addresses can be resolved locally at the client node 102. In one implementation, if a logical address is not found in the local memory map, the central memory map provides the mapping for that logical address, which is then stored in the local memory map.
According to one embodiment, the memory request is a write operation. In addition, the method 1300 includes transmitting an allocation request for the logical address as the corresponding memory request to the control node 108, thereby causing the control node 108 to allocate physical memory in one of the one or more storage nodes 106. The method 1300 further includes receiving the physical address of the allocated memory from the controller 508 as the response, updating the local memory map accordingly by mapping the logical address to the physical address, and transmitting the write request to the storage node storing the physical address. First, the memory controller 104 receives a memory request for a write operation (e.g., Write(offset: 1GB+5kB)) from the application 502. Thereafter, the memory controller 104 determines whether the logical address is in the local memory map, such as in the mapping cache 504. If the logical address is not found in the local memory map (i.e., a cache miss), the memory controller 104 sends an allocation request (e.g., Allocate(vpm: 1, offset: 1GB)) for the logical address as the corresponding memory request to the control node 108. The control node 108 then allocates physical memory in one of the one or more storage nodes 106 (e.g., in storage node 106A). Thereafter, the controller 508 in the control node 108 records the physical address of the allocated memory, such as the physical address in storage node 106A. Thereafter, the memory controller 104 receives the physical address of the allocated memory from the controller 508 as the response. The memory controller 104 also updates the mapping cache 504 (or local memory map) accordingly by mapping the logical address to the physical address. Finally, the memory controller 104 transmits the write request to the storage node 106A that stores the physical address. In one example, storage node 106A refers to the storage node whose data store is DSj 802. In one example, storage node 106A verifies the authentication key (i.e., the RDMA key) and successfully executes the write command.
In one implementation, the memory request received by the memory controller 104 is a read operation. The memory controller 104 is further configured to transmit a memory map request for the logical address to the control node 108 as the corresponding memory request, such that the control node 108 retrieves the mapping of the logical address from the central memory map. The memory controller 104 also receives the memory map of the requested logical address, including the physical address, from the controller 508, updates the local memory mapping table accordingly by inserting the mapping for the logical address, and transmits a read request to the storage node 106A at the physical address. First, the memory controller 104 receives a memory request for a read operation (e.g., Read(offset: 1GB+5kB)) from the application 502. In one example, the memory request is stored in the client library 506. The memory controller 104 is also configured to determine whether the logical address targeted by the read operation is in the local memory map (e.g., the mapping cache 504). If the logical address is not found in the local memory mapping table (i.e., a cache miss), the memory controller 104 transmits a memory map request for the logical address as the corresponding memory request to the control node 108. The control node 108 retrieves the mapping of the corresponding logical address from the central memory mapping table, such as the database 510, and transmits it to the memory controller 104. The memory controller 104 thus receives the memory map of the requested logical address, including the physical address, from the controller 508. Thereafter, the memory controller 104 updates the mapping cache 504 accordingly by inserting the mapping for the logical address. Finally, the memory controller 104 transmits the memory request to the storage node 106A at the physical address. In one example, the data store at storage node 106A may be referred to as DSj 802. In one example, the memory controller 104 retrieves the data after RDMA key verification.
In one implementation, the client library 506 receives a memory request, such as Read(offset: 1GB+5kB), from the application 502. The client node 102 then performs a lookup operation and, on a miss, sends a request (e.g., get the mapping for offset 1GB) to the control node 108. In one example, such a request is sent from the client library 506 to the controller 508 of the control node 108. Thereafter, the control node 108 performs the mapping operation against the database 510. The control node 108 then transmits the mapping information to the client node 102, such as DSj, offset: 3GB, the RDMA key, DSj IP network information, and so on. In one example, such information is stored in the mapping cache 504 of the client node 102. Thereafter, the client node 102 sends a read request, e.g., Read(DSj, offset: 3GB+5kB), to the storage node 106A, and the storage node 106A validates the RDMA key. Thereafter, the memory controller 104 of the client node 102 receives a response, such as the requested data, from the storage node 106A. Finally, the memory controller 104 of the client node 102 updates the mapping cache 504 based on the received response.
According to one embodiment, the method 1300 includes executing the memory request according to the local memory map by transmitting the corresponding memory request to the storage node 106A indicated by the local map. The corresponding memory request includes an authentication key, and it causes the storage node to authenticate the corresponding memory request according to that key. The method 1300 further includes receiving a response from the storage node 106A and, if the response is an indicator of an invalid authentication key, transmitting a memory mapping request for the logical address to the control node 108. The method 1300 also includes receiving a memory map, including an updated authentication key, from the central memory map of the control node 108. The method 1300 further includes updating the local memory map and transmitting an updated corresponding memory request, including the updated authentication key, to the storage node 106A. In one implementation, the memory request is a read operation, received by the memory controller 104 from the application 502 and executed by the client node 102. The memory controller 104 determines whether the logical address targeted by the memory request is in the local memory map (i.e., the mapping cache 504). If the logical address is found in the local memory mapping table, the lookup returns a cache hit. The memory controller 104 then transmits the memory request to the storage node 106A. In one example, the data store at the storage node may be referred to as DSj 802. The memory request received by storage node 106A includes an authentication key, and it causes the storage node 106A to validate the request according to that authentication key (i.e., the RDMA key).
The memory controller 104 then receives a response from the storage node 106A. If the response from storage node 106A indicates an invalid authentication key, the memory controller 104 transmits a memory mapping request for the logical address to the control node 108. Thereafter, the control node 108 provides the updated authentication key from the central memory map. In one example, the memory controller 104 receives the memory map, including the updated authentication key, from the database 510 (or central memory mapping table) and updates the mapping cache 504 (or local memory mapping table) accordingly. The memory controller 104 then transmits an updated corresponding memory request, which includes the updated authentication key, to the storage node 106A. The storage node 106A verifies the authentication key and, if the received key is valid, performs the memory read operation accordingly by providing the data to the application 502.
According to one embodiment, in the method 1300, the received memory map includes a memory map of another storage node 1204. The method 1300 then includes updating the local memory mapping table and transmitting the updated corresponding memory request to the other storage node 1204. In one implementation, the memory controller 104 executes the memory request according to the mapping cache 504 (or the local memory mapping table) by transmitting the corresponding memory request to the storage node 1202 indicated by the local map. In one example, the data store of the storage node 1202 corresponds to DSj 802. In another example, the data store of the other storage node 1204 corresponds to DSi 1206. In one implementation, the memory request is a read operation, received by the memory controller 104 from the application 502 and executed by the client node 102. The memory controller 104 determines whether the logical address targeted by the memory request is in the mapping cache 504; in this case the logical address is found there, and the lookup returns a cache hit. The memory controller 104 then transmits a memory request, such as Read(DSk, offset: 3GB+5kB), to the storage node 1202. In one example, the memory request includes an authentication key with which the storage node 1202 authenticates the memory request. In one example, the response received from the storage node 1202 reflects an indicator of an invalid authentication key, and the memory controller 104 transmits a memory mapping request for the logical address to the control node 108. Thereafter, the control node 108 provides the memory controller 104 with the address of the other storage node 1204. The memory controller 104 then updates the mapping cache 504 and transmits an updated memory request, such as Read(DSk, offset: 7GB+5kB), to the other storage node 1204. Thereafter, the other storage node 1204 receives the read request and performs the read operation by providing the data accordingly.
According to another embodiment, in the method 1300, the memory request transmitted to the storage node is a remote memory access command. The memory request may correspond to a write, a read, a compare-and-swap (CAS), or another memory operation. The memory request accesses the memory of the corresponding storage node (e.g., storage node 106A) to perform the memory operation.
According to one embodiment, in the method 1300, the remote memory access command is a remote direct memory access (remote direct memory access, RDMA) command. The memory request is transmitted to the storage node to perform the desired operation. In one example, if the memory request is a write operation, the write operation is performed in the corresponding storage node. The transmission of memory requests from the memory controller 104 to the storage nodes uses RDMA commands, which provide a low-latency interconnect between the client node 102 and the storage nodes.
According to one embodiment, in the method 1300, the remote memory access command is a cxl.mem command. In one implementation, the transmission of memory requests from the memory controller 104 to the storage nodes corresponds to RDMA commands that provide an interconnection between the client node 102 and the storage nodes. Similarly, the memory access command may be a "cxl.mem" command that transmits a memory request to a storage node to perform a memory operation, as sketched below. Advantageously, such remote memory access commands support both volatile and persistent memory architectures.
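One way to picture this is a small transport interface behind which either fabric can sit, as in the hedged sketch below; RdmaTransport and CxlMemTransport are placeholder classes and do not correspond to any real RDMA verbs or CXL driver API.

```python
# Hedged sketch: one remote-memory-access interface, two interchangeable
# transports (placeholder classes; not real RDMA or CXL driver APIs).
from abc import ABC, abstractmethod


class RemoteMemoryTransport(ABC):
    @abstractmethod
    def write(self, physical_offset: int, data: bytes, key: int) -> None: ...

    @abstractmethod
    def read(self, physical_offset: int, length: int, key: int) -> bytes: ...


class RdmaTransport(RemoteMemoryTransport):
    """Placeholder for an RDMA-based path between client and storage node."""

    def write(self, physical_offset, data, key):
        print(f"RDMA write {len(data)} bytes at {physical_offset:#x}, key={key:#x}")

    def read(self, physical_offset, length, key):
        print(f"RDMA read {length} bytes at {physical_offset:#x}, key={key:#x}")
        return bytes(length)


class CxlMemTransport(RemoteMemoryTransport):
    """Placeholder for a cxl.mem-based path; same interface, different fabric."""

    def write(self, physical_offset, data, key):
        print(f"cxl.mem store {len(data)} bytes at {physical_offset:#x}")

    def read(self, physical_offset, length, key):
        print(f"cxl.mem load {length} bytes at {physical_offset:#x}")
        return bytes(length)


def flush_write(transport: RemoteMemoryTransport, entry, data: bytes) -> None:
    # entry is the assumed MapEntry stand-in from the earlier sketches.
    transport.write(entry.physical_offset, data, entry.rdma_key)
```

The client-library logic above the transport (lookup, allocation, key refresh) stays the same whichever command type carries the request.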
According to one embodiment, in the method 1300, the logical memory address is associated with virtual persistent memory. In one implementation, the logical memory address provides a remote address into the virtual persistent memory, which can be accessed through the logical memory address stored in the local memory mapping table.
Method 1300 implements all of the advantages and features of memory controller 104 of the present invention.
Steps 1302 through 1312 are merely illustrative and other alternatives may be provided in which one or more steps are added, one or more steps are deleted, or one or more steps are provided in a different order without departing from the scope of the claims herein.
Modifications may be made to the embodiments of the invention described above without departing from the scope of the invention, which is defined in the accompanying claims. The terms "comprising," "including," "incorporating," "having," "being" and the like used to describe and claim the present invention should be construed in a non-exclusive manner, allowing items, parts or elements not explicitly described to be present. Reference to the singular is also to be construed to relate to the plural. The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any embodiment described as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments, nor as excluding features of other embodiments. The word "optionally" as used herein means "provided in some embodiments and not provided in other embodiments." It is appreciated that certain features of the invention, which are, for clarity, described in the context of separate embodiments, may also be provided in combination in a single embodiment. Conversely, various features of the invention, which are, for brevity, described in the context of a single embodiment, may also be provided separately, in any suitable combination, or as in any other described embodiment of the invention.
Claims (21)
1. A memory controller (104) for use in a client node (102), the client node (102) comprising a local memory (112) for storing a local memory map, the client node (102) for executing an application, the memory controller (104) being for operative connection to:
one or more storage nodes (106), wherein each storage node comprises physical memory;
A control node (108), wherein the control node (108) comprises a central memory map table comprising local memory maps of a plurality of client nodes, wherein the memory controller (104) is configured to:
Receiving a memory request for a logical address from the application;
Determining whether the logical address is in the local memory mapping table;
If the logical address is in the local memory map table, then
Executing the memory request according to the local memory mapping table;
If the logical address is not in the local memory map table, then
Transmitting a corresponding memory request for the logical address to the control node (108);
-receiving a response from the control node (108);
and correspondingly updating the local memory mapping table.
2. The memory controller (104) of claim 1, wherein the local memory map table is a subset of the central memory map.
3. The memory controller (104) of claim 1 or 2, wherein the memory request is a write operation, the memory controller (104) further configured to:
transmitting an allocation request for the logical address as the corresponding memory request to the control node (108) such that the control node (108) allocates physical memory in one of the one or more storage nodes (106);
receiving a physical address of the allocated memory from the controller as the response;
updating the local memory map accordingly by mapping the logical address to the physical address;
a write request is transmitted to the storage node storing the physical address.
4. The memory controller (104) of any one of the preceding claims, wherein the memory request is a read operation, the memory controller (104) further configured to:
Transmitting a memory map request for the logical address as the corresponding memory request to the control node, such that the control node (108) retrieves a map of the logical address from the central memory map;
receiving a memory map of the requested logical address from the controller as the response, the memory map including a physical address;
Updating the local memory map table accordingly by mapping the logical address to the memory map;
a read request is transmitted to the storage node of the physical address.
5. The memory controller (104) of any one of the preceding claims, wherein the memory controller (104) is further configured to execute the memory request according to the local memory map by:
transmitting a corresponding memory request to the storage node indicated by the local mapping, wherein the corresponding memory request comprises an authentication key, and the corresponding memory request enables the storage node to authenticate the corresponding memory request according to the authentication key;
a response is received from the storage node, and in response thereto,
If the response is an indicator of an invalid authentication key, then
Transmitting a memory mapping request for the logical address to the control node;
Receiving a memory map from the central memory map of the control node (108), the memory map comprising updated authentication keys;
Updating the local memory mapping table;
Transmitting an updated corresponding memory request to the storage node, wherein the corresponding memory request includes the updated authentication key.
6. The memory controller (104) of claim 5, wherein the received memory map comprises a memory map of another storage node, the memory controller (104) to:
Updating the local memory mapping table;
and transmitting the updated corresponding memory request to the other storage node.
7. The memory controller (104) of any of the above claims, wherein the memory request transmitted to the storage node is a remote memory access command.
8. The memory controller (104) of claim 7, wherein the remote memory access command is an RDMA command.
9. The memory controller (104) of claim 7, wherein the remote memory access command is a cxl.mem command.
10. The memory controller (104) of any of the above claims, wherein the logical memory address is associated with virtual persistent memory.
11. The memory controller (104) of claim 10, wherein the memory controller (104) is configured for use in a VPM.
12. A method (1300) for use in a memory controller (104) in a client node (102), the client node (102) comprising a local memory for storing a local memory map, the client node (102) being for executing an application, the memory controller (104) being for operatively connecting to:
one or more storage nodes (106), wherein each storage node comprises physical memory;
A control node (108), wherein the control node (108) comprises a central memory map table comprising local memory maps of a plurality of client nodes, wherein the method comprises:
Receiving a memory request for a logical address from the application;
Determining whether the logical address is in the local memory mapping table;
If the logical address is in the local memory map table, then
Executing the memory request according to the local memory mapping table;
If the logical address is not in the local memory map table, then
Transmitting a corresponding memory request for the logical address to the control node;
Receiving a response from the controller;
and correspondingly updating the local memory mapping table.
13. The method (1300) of claim 12, wherein the local memory map is a subset of the central memory map.
14. The method (1300) of claim 12 or 13, wherein the memory request is a write operation, the method (1300) further comprising:
transmitting an allocation request for the logical address as the corresponding memory request to the control node (108) such that the control node (108) allocates physical memory in one of the one or more storage nodes (106);
receiving a physical address of the allocated memory from the controller as the response;
updating the local memory map accordingly by mapping the logical address to the physical address;
a write request is transmitted to the storage node storing the physical address.
15. The method (1300) of any of claims 12 to 14, wherein the memory request is a read operation, the method (1300) further comprising:
Transmitting a memory map request for the logical address as the corresponding memory request to the control node, such that the control node (108) retrieves a map of the logical address from the central memory map;
receiving a memory map of the requested logical address from the controller as the response, the memory map including a physical address;
Updating the local memory map table accordingly by mapping the logical address to the memory map;
a read request is transmitted to the storage node of the physical address.
16. The method (1300) of any of claims 12 to 15, wherein the method (1300) further comprises performing the memory request according to the local memory map by:
transmitting a corresponding memory request to the storage node indicated by the local mapping, wherein the corresponding memory request comprises an authentication key, and the corresponding memory request enables the storage node to authenticate the corresponding memory request according to the authentication key;
Receiving a response from the storage node and in response thereto
If the response is an indicator of an invalid authentication key, then
Transmitting a memory mapping request for the logical address to the control node;
Receiving a memory map from the central memory map of the control node (108), the memory map comprising updated authentication keys;
Updating the local memory mapping table;
Transmitting an updated corresponding memory request to the storage node, wherein the corresponding memory request includes the updated authentication key.
17. The method (1300) of claim 16, wherein the received memory map includes a memory map of another storage node, the method (1300) further comprising:
Updating the local memory mapping table;
and transmitting the updated corresponding memory request to the other storage node.
18. The method (1300) of any of claims 12-17, wherein the memory request transmitted to the storage node is a remote memory access command.
19. The method (1300) of claim 18, wherein the remote memory access command is a remote direct memory access command.
20. The method (1300) of claim 18, wherein the remote memory access command is a cxl.mem command.
21. The method (1300) of any of claims 12-20, wherein the logical memory address is associated with a virtual persistent memory.
Applications Claiming Priority (1)

| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| PCT/EP2022/052805 (WO2023147878A1) | 2022-02-07 | 2022-02-07 | Memory controller and method used in client node to store local memory mapping table |
Publications (1)

| Publication Number | Publication Date |
| --- | --- |
| CN118202336A | 2024-06-14 |
Family ID: 80628520
Family Applications (1)

| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| CN202280071365.6A (pending) | 2022-02-07 | 2022-02-07 | Memory controller and method for storing local memory mapping tables in client nodes |
Country Status (2)

| Country | Link |
| --- | --- |
| CN | CN118202336A |
| WO | WO2023147878A1 |
Also Published As

| Publication Number | Publication Date |
| --- | --- |
| WO2023147878A1 | 2023-08-10 |
Legal Events

| Date | Code | Title | Description |
| --- | --- | --- | --- |
|  | PB01 | Publication |  |
|  | SE01 | Entry into force of request for substantive examination |  |