
US9898414B2 - Memory corruption detection support for distributed shared memory applications - Google Patents

Memory corruption detection support for distributed shared memory applications

Info

Publication number
US9898414B2
Authority
US
United States
Prior art keywords
cache line
node
memory
copied
version
Prior art date
Legal status
Active, expires
Application number
US14/530,354
Other versions
US20150278103A1 (en)
Inventor
Zoran Radovic
Paul Loewenstein
John G. Johnson
Current Assignee
Oracle International Corp
Original Assignee
Oracle International Corp
Priority date
Filing date
Publication date
Application filed by Oracle International Corp
Priority to US14/530,354
Assigned to Oracle International Corporation (assignors: John G. Johnson, Paul Loewenstein, Zoran Radovic)
Priority to EP15714996.4A
Priority to JP2017502751A
Priority to PCT/US2015/019587
Priority to CN201580016557.7A
Publication of US20150278103A1
Application granted
Publication of US9898414B2
Legal status: Active
Adjusted expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/0802 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F 12/0891 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches using clearing, invalidating or resetting means
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F 11/0703 Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation
    • G06F 11/0706 Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation the processing taking place on a specific hardware platform or in a specific software environment
    • G06F 11/0721 Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation the processing taking place on a specific hardware platform or in a specific software environment within a central processing unit [CPU]
    • G06F 11/0724 Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation the processing taking place on a specific hardware platform or in a specific software environment within a central processing unit [CPU] in a multiprocessor or a multi-core unit
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F 11/0703 Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation
    • G06F 11/0706 Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation the processing taking place on a specific hardware platform or in a specific software environment
    • G06F 11/073 Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation the processing taking place on a specific hardware platform or in a specific software environment in a memory management context, e.g. virtual memory or cache management
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F 11/0703 Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation
    • G06F 11/0751 Error or fault detection not based on redundancy
    • G06F 11/0763 Error or fault detection not based on redundancy by bit configuration check, e.g. of formats or tags
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/0223 User address space allocation, e.g. contiguous or non contiguous base addressing
    • G06F 12/023 Free address space management
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2212/00 Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F 2212/10 Providing a specific technical effect
    • G06F 2212/1041 Resource optimization
    • G06F 2212/1044 Space efficiency improvement
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2212/00 Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F 2212/62 Details of cache specific to multiprocessor cache arrangements
    • G06F 2212/621 Coherency control relating to peripheral accessing, e.g. from DMA or I/O device
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0653 Monitoring storage devices or systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure

Definitions

  • the present disclosure relates generally to techniques for detecting memory corruption in distributed node systems.
  • Many functionalities and services available over the Internet or over a corporate network are provided by one or more clusters of distributed computing nodes.
  • a database used to run a large scale business may be maintained by, and made available through, a plurality of database servers running on a plurality of distributed computing nodes that form a cluster.
  • Using a cluster of computing nodes to provide a functionality or service may have a number of advantages. For example, with a cluster, it is relatively easy to add another node to increase the capacity of the system to meet increased demand.
  • a cluster also makes it possible to load balance among the various nodes so that if one node becomes overburdened, work can be assigned to other nodes.
  • a cluster makes it possible to tolerate failures so that if one or more nodes fail, the functionality or service is still available.
  • nodes in a cluster may be able to share information in order to, for example, work together and carry out transactions, load balance, implement failure prevention and recovery, etc.
  • memory corruption detection may be required.
  • Memory corruption occurs when a memory location is inappropriately accessed or modified.
  • One example of memory corruption occurs when an application attempts to advance a pointer variable beyond the memory allocated for a particular data structure. These memory errors can cause program crashes or unexpected program results.
  • Memory corruption detection schemes exist for single-machine applications.
  • the single-machine memory corruption detection schemes allow a computer to track application pointers at run-time and inform a user of memory errors.
  • FIG. 1 is a block diagram that depicts an example distributed node system in an embodiment
  • FIG. 2 illustrates an example in which some nodes in a distributed node system are sharing memory, in accordance with an embodiment
  • FIG. 3 is a flow diagram that depicts a procedure for detecting memory corruption in a node, in an embodiment
  • FIG. 4 is a flow diagram that depicts a procedure for updating a cache line when loading the cache line while detecting memory corruption, in an embodiment
  • FIG. 5A is a flow diagram that depicts a procedure for performing a store in a remote node, in an embodiment
  • FIG. 5B is a flow diagram that depicts a procedure for propagating a store from a remote node to a source node, in an embodiment
  • FIG. 6 is a flow diagram that depicts a procedure for performing a store in a source node, in an embodiment
  • FIG. 7 is a block diagram that illustrates a computer system upon which an embodiment of the invention may be implemented.
  • nodes in a distributed node system are configured to support memory corruption detection when memory is shared between the nodes.
  • Nodes in the distributed node system share data in units of memory referred to herein as “shared cache lines.”
  • a node associates a version value with data in a shared cache line. The version value and data may be stored in a shared cache line in the node's main memory.
  • the node performs a memory operation, it can use the version value to determine whether memory corruption has occurred.
  • a pointer may be associated with a version value. When the pointer is used to access memory, the version value of the pointer may indicate the expected version value at the memory location. If the version values do not match, then memory corruption has occurred.
  • a pointer is a value that contains an address to a memory location of another value stored in memory. The value is loadable into a register of a processor. According to an embodiment, a pointer contains two separate values, a version value and a virtual address, which is translated to a physical address for execution of a memory operation.
  • the nodes in a distributed node system share portions of their main memory with other nodes in the system.
  • a node (“source node”) makes a portion of its main memory available for sharing with other nodes in the system, and another node (“remote node”) copies the shared memory portion in its own main memory.
  • a memory portion may comprise one or more shared cache lines.
  • the remote node creates a copied cache line that is a copy of a source cache line in the source node.
  • a shared cache line comprises version bits and data bits.
  • the version bits of a shared cache line indicate a version value associated with the shared cache line.
  • a pointer configured to point to the shared cache line also contains a version value. When the pointer is used to perform a memory operation on the shared cache line, the node compares the version value of the pointer to the version value indicated by the version bits of the shared cache line.
  • the source node generates the version value in response to a memory allocation request. For example, if an application allocates memory for a data structure, the source node may generate a version value to be associated with that data structure. The generated version value and the associated data structure may be copied in the main memory of the local node.
  • the memory operation is requested by an application. If a node detects that memory corruption has occurred, the node may inform the application of the error. The node may also terminate the memory operation rather than execute it.
  • a node uses the version value to maintain coherency between nodes. For example, the version value in a remote cache line may indicate that the remote cache line is out of date. The remote node may then update the remote cache line from the corresponding source cache line.
  • one or more version values are reserved for indicating when the copied cache line is invalid. The one or more reserved version values are not used when a node generates a version value in response to a memory allocation request.
  • FIG. 1 shows a block diagram of an example distributed node system 100 , in an embodiment.
  • Distributed node system 100 includes three nodes: Node 1 102 A, Node 2 102 B, and Node 3 102 C. Although three nodes are shown in the present illustration, system 100 may include more or fewer nodes.
  • Each node 102 includes a main memory 108 .
  • the main memory 108 includes one or more shared cache lines 106 .
  • shared cache line 106 comprises version bits 112 and data bits 114 . Data is stored in data bits 114 . Version bits 112 indicate a version value associated with the shared cache line 106 .
  • Shared cache lines 106 may be the same size or the size may vary.
  • a node 102 may make a portion of its main memory 108 available for sharing with other nodes (“shared memory portion”). Another node 102 may allocate a portion of its main memory 108 (“copied memory portion”) for duplicating the contents of the shared memory portion. In an embodiment, a node 102 may both make a portion of its main memory 108 available for sharing and may copy a portion of main memory 108 made available by another node 102 . For purposes of the present invention, a node 102 may share any number of memory portions (zero or more) and may copy any number of shared memory portions (zero or more). Each memory portion may include one or more shared cache lines 106 . In an embodiment, sharing or copying a portion of main memory 108 includes, respectively, sharing or copying the one or more shared cache lines 106 .
  • Node 2 102 B is making a portion of its main memory 108 B available for sharing with the other nodes.
  • Nodes 1 and 3 are copying the shared memory portion 202 .
  • Node 1 102 A has a memory portion 204 A in its main memory 108 A that is a copy of the shared memory portion 202
  • Node 3 102 C has a memory portion 204 C in main memory 108 C that is a copy of the shared memory portion 202
  • Node 3 102 C is also making a portion of its main memory 108 C available for sharing with the other nodes.
  • Nodes 1 and 2 are copying the shared memory portion 206 .
  • Node 2 102 B has a memory portion 208 B that is a copy of the shared memory portion 206
  • Node 1 102 A has a memory portion 208 A that is a copy of the shared memory portion 206 .
  • Nodes 2 and 3 are both sharing a memory portion and copying a shared memory portion from another node.
  • Node 1 is copying a memory portion from two nodes, but is not sharing a memory portion.
  • a node 102 may include a directory 210 .
  • the directory 210 indicates, for each shared memory portion, which nodes in system 100 contain a copy of that shared memory portion.
  • the directory 210 contains an entry for each source cache line in the shared memory portion. That is, the directory 210 contains an entry for each shared cache line for which the node 102 is a source node.
  • a node 102 may include an index 212 .
  • the index 212 indicates, for each shared memory portion, the location of the directory in main memory 108 of the shared memory portion.
  • the index 212 also indicates, for each copied memory portion, the source node that shared the memory portion and the location of the shared memory portion in the main memory of the source node.
  • the index 212 contains an entry for each shared cache line in the main memory 108 .
  • the index 212 indicates, for each shared cache line in a copied memory portion, the source node that shared the source cache line and the location of the source cache line in the main memory of the source node.
  • the nodes 102 are initialized.
  • the nodes 102 may be initialized in the manner described below.
  • a node 102 may share any number of memory portions and may copy any number of memory portions shared by other nodes. Depending on what a node 102 decides to do, it may perform some, all, or none of the operations described.
  • a node 102 determines whether it wishes to make any portion of its main memory 108 available for sharing with other nodes in the system 100 . If it does, the node 102 broadcasts information to the other nodes 102 indicating its willingness to share a portion of its main memory. The information broadcasted may include information about the node 102 , the size of the shared memory portion 202 , as well as where the memory portion 202 is located on the main memory 108 . The information indicates to other nodes in the system 100 where to access the shared memory location.
  • a node 102 may receive broadcasted information indicating that another node wishes to share a portion of its main memory. In response to receiving the broadcasted information, the node 102 may decide whether to copy or not to copy the shared memory portion 202 . If the node 102 decides to copy the shared memory portion, the node will allocate a copied memory portion sufficient to store a copy of the shared memory portion.
  • the node 102 does not populate the allocated memory with data. That is, the node only allocates the memory, but does not copy data from the shared memory portion.
  • the node sets the version value for each copied cache line in the copied memory portion to a value that indicates the copied cache lines are invalid.
  • a node 102 will not copy the data from the shared memory portion into its copy of the memory portion until an application requests the data.
  • the version value will indicate to the node that the shared cache line is invalid.
  • the node may then copy the source cache line from the shared memory portion into the copied cache line in the copied memory portion.
  • if node 102 is sharing a portion of its main memory 108, the node allocates memory in main memory 108 for storing a directory structure 210.
  • the directory structure 210 indicates which nodes contain a copy of each memory portion shared by node 102 .
  • the directory structure 210 comprises a directory entry for each shared cache line that is in the shared memory portion.
  • each source cache line is associated with a directory entry.
  • the directory entries indicate, for each source cache line, which other nodes have a copied cache line that should be a copy of that source cache line.
  • the directory entry may also indicate whether each copied cache line in the remote nodes is a valid (up-to-date) copy.
  • the directory entry may include a lock to serialize access to the directory entry.
  • node 102 allocates memory in its main memory 108 for an index structure 212 .
  • the index structure 212 comprises an index entry for each shared cache line in main memory 108 . If the node 102 is sharing a shared cache line in a shared memory portion, the index entry indicates the location in main memory 108 of the directory entry for the shared cache line. If the shared cache line is in a copied memory portion, the index entry indicates the source node that shared the shared memory portion and the location of the corresponding source cache line in the main memory of the source node.
  • the node 102 updates the index structure 212 if it decides to copy a shared memory portion upon receiving broadcasted information from a source node. The information received from the source node may correspond to information stored in the index structure 212 .
  • node 102 assigns a version value to a memory location when the memory is allocated. For example, when an application performs a malloc request, the node 102 allocates the requested amount of memory, generates a version value to associate with the allocated memory, and returns a pointer to the application.
  • the allocated memory location comprises one or more shared cache lines. A version value may be indicated by the version bits of each shared cache line.
  • the version value is generated by the heap manager of the application.
  • the version value may be chosen from a range of valid values.
  • one or more version values are used to indicate when a shared cache line is invalid, and are not included in the range of valid values to choose from.
  • the format of the version value may vary depending on the implementation.
  • the version value may be four bits long, resulting in sixteen possible values.
  • the version value may be a 44-bit time stamp.
  • the version value is also associated with the pointer to the allocated memory.
  • a pointer includes both a version value and a virtual address.
  • a node might use 64-bit registers to store a pointer, but the virtual address does not use the entire 64 bits.
  • the version value may be stored in extra unused bits of the 64-bit register.
  • other nodes 102 may copy the shared cache lines in the allocated memory location into their respective copied memory portions.
  • copying the shared cache lines includes copying the associated version value.
  • the other nodes 102 may also generate pointers to the copied shared cache lines.
  • a version value may be stored in association with each generated pointer.
  • FIG. 3 is a flowchart illustrating a procedure for detecting memory corruption in a node 102 using a version value associated with a pointer.
  • the procedure may be performed when performing a memory operation involving a shared cache line referred to by a pointer, where the pointer is associated with a version value.
  • the procedure may be referred to hereafter as pointer-based memory corruption detection.
  • node 102 receives a command from an application.
  • the command may be, for example, a request to execute a memory operation such as a load or a store command.
  • the node 102 executes steps for detecting memory corruption.
  • the command may include a pointer to a shared cache line in main memory 108 .
  • when the node 102 allocates memory to an application, the node returns a pointer that is associated with a version value.
  • the node 102 determines the version value associated with the pointer included with the command.
  • the pointer includes the version value.
  • the version value associated with the pointer may indicate a version value the command expects to be associated with the requested shared cache line. For example, if the command is using the pointer to access a data structure, the version value may be associated with the data structure.
  • at step 304, the node 102 compares the version value of the pointer with a version value associated with the requested shared cache line.
  • the version bits of the shared cache line indicate the version value associated with the shared cache line. The method then proceeds to decision block 308 .
  • a trap operation is executed.
  • the trap operation may include indicating to the application that a memory corruption was detected.
  • the trap operation may also include terminating execution of the memory operation. Alternatively, the procedure ends and the memory operation proceeds.
  • the procedure for detecting memory corruption using a version value associated with a pointer illustrated in FIG. 3 may be performed while performing various kinds of memory operations. Such memory operations shall be described in further detail.
  • a version value in a shared cache line may also be used to manage coherency of shared cache lines between nodes.
  • a source node updates a source cache line
  • the copied cache lines in the remote nodes will be out of date.
  • the remote nodes may not immediately update their copied cache lines. Instead, the version value of each copied cache line is set to indicate that the copied cache line is invalid. Later, if the remote node attempts to access the copied cache line, the node will see that the copied cache line is invalid and will update the copied cache line.
  • when a node 102 executes a store command, it may execute a trap operation. In an embodiment, the node 102 will execute different steps depending on whether the target shared cache line is a source cache line or a copied cache line. If the target shared cache line is a copied cache line, then the node 102 will propagate the store to the source cache line in the source node. In an embodiment, a remote node may record the store in a store buffer prior to sending the store to the source node.
  • the node 102 contains an index 212 . If the requested shared cache line is a copied cache line, the index entry will indicate the source node and location of the source cache line for the copied cache line. Thus, the node 102 may reference the index 212 to determine whether the requested shared cache line is a copied cache line or a source cache line. Based on the determination, the node 102 may determine which steps to take to execute the store command.
  • nodes do not update a copied cache line when a source node updates a corresponding source cache line.
  • a node may only update the copied cache line when the copied cache line is loaded at the node. The version value indicating that the copied cache line is invalid triggers the update.
  • memory corruption detection is performed.
  • FIG. 4 is a flowchart illustrating a procedure for updating a shared cache line when a copied cache line is requested in a node 102 .
  • node 102 receives a command from an application.
  • the command may be a memory operation involving a load operation, such as a load command.
  • the command may include a pointer to a shared cache line in main memory 108 .
  • when the node 102 allocates memory to an application, the node returns a pointer that is associated with a version value. For purposes of this illustration, it will be assumed that the pointer included with the command is associated with a version value.
  • node 102 determines whether the version value indicates that the shared cache line is invalid.
  • at least one version value is used to indicate the shared cache line is invalid and is not used during memory allocation.
  • the shared cache line is a copied cache line.
  • the version value may indicate the shared cache line is invalid if, for example, the copied cache line has not been populated with data from the source cache line.
  • the requested shared cache line may or may not be a copied cache line.
  • the shared cache line is not a copied cache line.
  • a shared cache line that is not in a copied memory portion is presumed to always be valid.
  • the shared cache line is a copied cache line.
  • the data in the shared cache line may be out of date. That is, the data in the copied cache line is not the same as the data in the source cache line. This may occur, for example, when a source node stores data to the source cache line.
  • at step 406, if the version value indicates that the shared cache line is valid, the node 102 continues execution of the procedure and proceeds to step 410, where pointer-based memory corruption detection is performed.
  • at step 408, the node suspends execution of the command and executes a trap operation.
  • the trap operation includes copying a source cache line to the copied cache line.
  • Copying the source cache line may include copying the version bits and the data bits of the source cache line. Therefore, after the copy is performed, the version value of the copied cache line is set to the version value from the source cache line.
  • the data in the copied cache line is set to the most recent data contained in the source cache line, as modified by any stores to the copied cache line that the remote node has recorded in its store buffer but not yet propagated to the source cache line.
  • the node is able to update the data in the shared cache line in order to maintain coherency with other nodes.
  • the node contains an index 212 .
  • the node may use an index entry corresponding to the requested shared cache line in order to determine which source node contains the corresponding source cache line and where the corresponding source cache line is located in the main memory of the source node.
  • the source node contains a directory 210 .
  • the source node may update the directory entry for the corresponding source cache line to indicate that the copy at the remote node is a valid copy.
  • FIG. 5A is a flowchart illustrating a store performed by a remote node 102 in distributed node system 100 .
  • the store may be performed to execute a store command.
  • the command may include a pointer to a copied cache line in main memory 108 .
  • the pointer may be associated with a version value.
  • at step 502, the node 102 suspends execution of the command and executes a trap operation to execute the steps that follow.
  • the store is recorded in a store buffer.
  • the information recorded in the store buffer may indicate a memory location to which to perform the store and what data to store.
  • Recording the store in a store buffer may include indicating the source node and the location of the source cache line in the main memory of the source node to which the store should be performed, the storing thread, and the version number associated with the store(s).
  • the node 102 contains an index 212 .
  • the node may use an index entry corresponding to the requested shared cache line in order to determine which source node contains the corresponding source cache line and where the corresponding source cache line is located in the main memory of the source node.
  • at step 506, the node 102 determines whether the version value of the copied cache line indicates the copied cache line is invalid. If the version value indicates that the shared cache line is invalid, then the store is not performed to the shared cache line. If the value indicates that the copied cache line is valid, then the method proceeds to step 508.
  • the node 102 performs pointer-based memory corruption detection. If the pointer-based memory corruption detection performed by node 102 does not detect memory corruption, then the method proceeds to step 510 .
  • the node 102 stores the data in its shared cache line.
  • a remote node records a store in its store buffer but does not immediately send the store to the source node containing the corresponding source cache line. After the node records the store in its store buffer, the store needs to be propagated to the source node. Propagating the store may be performed as part of the same procedure that records the store in the store buffer, or it may be performed separately (a simplified sketch of this record-and-propagate flow appears after this list).
  • the node may receive a command that includes a propagate stores operation.
  • the store command may include instructions to propagate the store.
  • the store may be propagated after the trap operation is completed, as part of resuming execution of the store command.
  • the node 102 may check the store buffer for entries prior to writing to a shared cache line.
  • FIG. 5B is a flowchart illustrating store propagation in the distributed node system 100 . The store may be propagated asynchronously by another thread of execution.
  • the node retrieves an entry from the store buffer.
  • the entry may include information indicating a source node, a source cache line to which the store should be performed, the data to be stored, the version number associated with the store(s) and the storing thread.
  • the node 102 requests from the source node a list of remote nodes for the source cache line. After receiving the information, the method proceeds to step 526 .
  • in response to the request, the source node refers to the directory entry for that shared cache line.
  • the directory entry indicates which nodes contain a copy of the source cache line. Any number of nodes in system 100 may contain a copy of the source cache line.
  • when accessing the directory entry for the requested shared cache line, the source node locks the directory entry.
  • the source node only shares a list of remote nodes that contain a valid copy of the source cache line.
  • the directory entry may be updated to indicate that all remote nodes contain an invalid copy.
  • the node 102 causes other remote nodes that contain a copy of the source cache line to mark their copied cache line as invalid.
  • the node indicates to each node that holds a respective copied cache line that the data in the source cache line has been changed.
  • the version value of the copied cache line at the remote nodes is changed to indicate that the copied cache line is invalid.
  • the node 102 notifies the source node to perform the store.
  • the notification may include the location of the source cache line in the main memory of the source node, the data to be stored in the source cache line and the version number.
  • the source node compares the version number from the store buffer to the version number in the respective source cache line. If a version mismatch is detected, the source node does not perform the store and the issuing thread may be notified, for example via an asynchronous trap.
  • the stored data is removed from the store buffer.
  • the steps are repeated for each entry in the store buffer.
  • a remote node does not record the store in a store buffer. Instead, the remote node performs the update propagation steps during execution of the trap operation, in place of writing to the store buffer.
  • the source node executes a store command to store to a shared cache line without using a store buffer (a sketch of this path also appears after this list).
  • FIG. 6 is a flowchart illustrating steps performed by a source node 102 to execute a store command in a distributed node system 100 .
  • the store command may include a pointer to a source cache line in main memory 108.
  • the pointer may be associated with a version value.
  • the node 102 suspends execution of the store command and executes a trap operation.
  • the node 102 performs pointer-based memory corruption detection for the source cache line. If no memory corruption is detected, then the method proceeds to step 606 . If memory corruption is detected, then the method exits the trap operation without performing the store.
  • the node 102 instructs the remote nodes to invalidate their respective copied cache lines.
  • the node 102 indicates to each remote node that the data in the source cache line has been changed.
  • the version value of the copied cache line at the remote nodes is changed to indicate that the copied cache line is invalid.
  • the source node refers to the directory entry for that shared cache line.
  • the directory entry indicates which nodes contain a copy of the source cache line. Any number of nodes in system 100 may contain a copy of the source cache line.
  • the node indicates to each node that is copying the source cache line that the data has been changed. The version value of the copied cache line at the other nodes is changed to indicate that the copy of the source cache line is invalid.
  • the invalidation of the source cache line is recorded and an instruction to the remote nodes to invalidate is sent lazily.
  • a thread other than a thread performing the store discovers the recording of the invalidated source cache line and sends instructions to the remote nodes to invalidate the copied cache line of the source cache line.
  • the source node performs the store on the source cache line.
  • the source node completes the trap operation.
  • the techniques described herein are implemented by one or more special-purpose computing devices.
  • the special-purpose computing devices may be hard-wired to perform the techniques, or may include digital electronic devices such as one or more application-specific integrated circuits (ASICs) or field programmable gate arrays (FPGAs) that are persistently programmed to perform the techniques, or may include one or more general purpose hardware processors programmed to perform the techniques pursuant to program instructions in firmware, memory, other storage, or a combination.
  • Such special-purpose computing devices may also combine custom hard-wired logic, ASICs, or FPGAs with custom programming to accomplish the techniques.
  • the special-purpose computing devices may be desktop computer systems, portable computer systems, handheld devices, networking devices or any other device that incorporates hard-wired and/or program logic to implement the techniques.
  • FIG. 7 is a block diagram that illustrates a computer system 700 upon which an embodiment of the invention may be implemented.
  • Computer system 700 includes a bus 702 or other communication mechanism for communicating information, and a hardware processor 704 coupled with bus 702 for processing information.
  • Hardware processor 704 may be, for example, a general purpose microprocessor.
  • Computer system 700 also includes a main memory 706 , such as a random access memory (RAM) or other dynamic storage device, coupled to bus 702 for storing information and instructions to be executed by processor 704 .
  • Main memory 706 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 704 .
  • Such instructions when stored in non-transitory storage media accessible to processor 704 , render computer system 700 into a special-purpose machine that is customized to perform the operations specified in the instructions.
  • Computer system 700 further includes a read only memory (ROM) 708 or other static storage device coupled to bus 702 for storing static information and instructions for processor 704 .
  • a storage device 710 such as a magnetic disk, optical disk, or solid-state drive is provided and coupled to bus 702 for storing information and instructions.
  • Computer system 700 may be coupled via bus 702 to a display 712 , such as a cathode ray tube (CRT), for displaying information to a computer user.
  • An input device 714 is coupled to bus 702 for communicating information and command selections to processor 704 .
  • Another type of user input device is cursor control 716, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processor 704 and for controlling cursor movement on display 712.
  • This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allows the device to specify positions in a plane.
  • Computer system 700 may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which in combination with the computer system causes or programs computer system 700 to be a special-purpose machine. According to one embodiment, the techniques herein are performed by computer system 700 in response to processor 704 executing one or more sequences of one or more instructions contained in main memory 706 . Such instructions may be read into main memory 706 from another storage medium, such as storage device 710 . Execution of the sequences of instructions contained in main memory 706 causes processor 704 to perform the procedure steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions.
  • Non-volatile media includes, for example, optical disks, magnetic disks, or solid-state drives, such as storage device 710 .
  • Volatile media includes dynamic memory, such as main memory 706 .
  • storage media include, for example, a floppy disk, a flexible disk, hard disk, solid-state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, NVRAM, or any other memory chip or cartridge.
  • Storage media is distinct from but may be used in conjunction with transmission media.
  • Transmission media participates in transferring information between storage media.
  • transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise bus 702 .
  • transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications.
  • Various forms of media may be involved in carrying one or more sequences of one or more instructions to processor 704 for execution.
  • the instructions may initially be carried on a magnetic disk or solid-state drive of a remote computer.
  • the remote computer can load the instructions into its dynamic memory and send the instructions over a telephone line using a modem.
  • a modem local to computer system 700 can receive the data on the telephone line and use an infra-red transmitter to convert the data to an infra-red signal.
  • An infra-red detector can receive the data carried in the infra-red signal and appropriate circuitry can place the data on bus 702 .
  • Bus 702 carries the data to main memory 706 , from which processor 704 retrieves and executes the instructions.
  • the instructions received by main memory 706 may optionally be stored on storage device 710 either before or after execution by processor 704 .
  • Computer system 700 also includes a communication interface 718 coupled to bus 702 .
  • Communication interface 718 provides a two-way data communication coupling to a network link 720 that is connected to a local network 722 .
  • communication interface 718 may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line.
  • communication interface 718 may be a local area network (LAN) card to provide a data communication connection to a compatible LAN.
  • Wireless links may also be implemented.
  • communication interface 718 sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.
  • Network link 720 typically provides data communication through one or more networks to other data devices.
  • network link 720 may provide a connection through local network 722 to a host computer 724 or to data equipment operated by an Internet Service Provider (ISP) 726 .
  • ISP 726 in turn provides data communication services through the world wide packet data communication network now commonly referred to as the “Internet” 728 .
  • Internet 728 uses electrical, electromagnetic or optical signals that carry digital data streams.
  • the signals through the various networks and the signals on network link 720 and through communication interface 718 which carry the digital data to and from computer system 700 , are example forms of transmission media.
  • Computer system 700 can send messages and receive data, including program code, through the network(s), network link 720 and communication interface 718 .
  • a server 730 might transmit a requested code for an application program through Internet 728 , ISP 726 , local network 722 and communication interface 718 .
  • the received code may be executed by processor 704 as it is received, and/or stored in storage device 710 , or other non-volatile storage for later execution.
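To make the remote-store flow of FIGS. 5A and 5B described above easier to follow, here is a minimal, single-process C sketch of it. It assumes a 64-byte cache line with a one-byte version field, reserves version 0 as the "invalid" marker, and models interconnect messages as direct function calls; all type and function names are illustrative rather than taken from the patent.

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define LINE_DATA_BYTES 64
#define VERSION_INVALID 0        /* reserved value, assumption for this sketch */
#define STORE_BUF_SLOTS 16

struct shared_cache_line {
    uint8_t version;                      /* version bits */
    uint8_t data[LINE_DATA_BYTES];        /* data bits */
};

/* One buffered store: where it must eventually be applied, the version
 * associated with the store, and the data to be stored. */
struct store_record {
    uint16_t source_node;
    uint64_t source_offset;
    uint8_t  version;
    uint8_t  data[LINE_DATA_BYTES];
};

struct store_buffer {
    struct store_record slot[STORE_BUF_SLOTS];
    size_t used;
};

/* Directory entry kept by the source node for one source cache line. */
struct directory_entry {
    uint64_t copied_by;     /* bit i set: node i holds a copied cache line */
    uint64_t copy_valid;    /* bit i set: node i's copy is currently valid */
};

/* FIG. 5A: the remote node records the store in its store buffer and, if
 * its local copy is valid and the pointer's version matches, also updates
 * the local copy.  A version mismatch is reported as corruption. */
bool remote_store(struct shared_cache_line *copy, struct store_buffer *buf,
                  uint16_t source_node, uint64_t source_offset,
                  uint8_t pointer_version, const void *src, size_t len)
{
    if (buf->used == STORE_BUF_SLOTS)
        return false;                      /* buffer full: propagate first */

    struct store_record *r = &buf->slot[buf->used++];
    r->source_node = source_node;
    r->source_offset = source_offset;
    r->version = pointer_version;
    memset(r->data, 0, sizeof r->data);
    memcpy(r->data, src, len < LINE_DATA_BYTES ? len : LINE_DATA_BYTES);

    if (copy->version == VERSION_INVALID)
        return true;       /* stale local copy is left untouched */
    if (copy->version != pointer_version)
        return false;      /* corruption; the source re-checks before applying */
    memcpy(copy->data, r->data, LINE_DATA_BYTES);
    return true;
}

/* FIG. 5B: apply one buffered store at the source.  Remote copies are
 * marked invalid in the directory (each remote node would likewise set its
 * copied line's version to VERSION_INVALID), then the source re-checks the
 * version before performing the store. */
bool apply_at_source(struct shared_cache_line *source_line,
                     struct directory_entry *dir,
                     const struct store_record *r)
{
    dir->copy_valid = 0;                     /* all remote copies now invalid */
    if (source_line->version != r->version)
        return false;                        /* mismatch: notify the issuing thread */
    memcpy(source_line->data, r->data, LINE_DATA_BYTES);
    return true;
}

/* Drain the store buffer.  For brevity this sketch assumes every buffered
 * store targets the same source cache line. */
void propagate_stores(struct store_buffer *buf,
                      struct shared_cache_line *source_line,
                      struct directory_entry *dir)
{
    for (size_t i = 0; i < buf->used; i++)
        (void)apply_at_source(source_line, dir, &buf->slot[i]);
    buf->used = 0;                           /* entries removed once propagated */
}
```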
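The source-node store of FIG. 6 is simpler because no store buffer is involved: the node runs the pointer-based version check, tells the remote nodes to invalidate their copies, and then performs the store. The sketch below uses the same assumed line layout as above, with a stand-in for the invalidation message.

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define LINE_DATA_BYTES 64
#define MAX_NODES       64

struct shared_cache_line {
    uint8_t version;
    uint8_t data[LINE_DATA_BYTES];
};

struct directory_entry {
    uint64_t copied_by;      /* bit i set: node i holds a copy of this line */
    uint64_t copy_valid;     /* bit i set: node i's copy is currently valid */
};

/* Stand-in for the message that tells a remote node to set the version of
 * its copied cache line to the reserved "invalid" value. */
static void send_invalidate(uint16_t node) { (void)node; }

/* FIG. 6: version check, invalidation of remote copies, then the store. */
bool source_store(struct shared_cache_line *line, struct directory_entry *dir,
                  uint8_t pointer_version, const void *src, size_t len)
{
    /* Pointer-based memory corruption detection (FIG. 3). */
    if (line->version != pointer_version)
        return false;                        /* trap without performing the store */

    /* Instruct every node that holds a copy to invalidate it. */
    for (uint16_t n = 0; n < MAX_NODES; n++)
        if (dir->copied_by & ((uint64_t)1 << n))
            send_invalidate(n);
    dir->copy_valid = 0;

    /* Perform the store on the source cache line. */
    memcpy(line->data, src, len < LINE_DATA_BYTES ? len : LINE_DATA_BYTES);
    return true;
}
```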

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Quality & Reliability (AREA)
  • Memory System Of A Hierarchy Structure (AREA)
  • Debugging And Monitoring (AREA)

Abstract

Nodes in a distributed node system are configured to support memory corruption detection when memory is shared between the nodes. Nodes in the distributed node system share data in units of memory referred to herein as “shared cache lines.” A node associates a version value with data in a shared cache line. The version value and data may be stored in a shared cache line in the node's main memory. When the node performs a memory operation, it can use the version value to determine whether memory corruption has occurred. For example, a pointer may be associated with a version value. When the pointer is used to access memory, the version value of the pointer may indicate the expected version value at the memory location. If the version values do not match, then memory corruption has occurred.

Description

CROSS-REFERENCE TO RELATED APPLICATIONS; BENEFIT CLAIM
This application claims priority to U.S. Provisional Application No. 61/972,082, entitled “Memory Corruption Detection Support For Distributed Shared Memory Applications”, filed by Zoran Radovic, et al. on Mar. 28, 2014, the contents of which are incorporated herein by reference. This application is related to U.S. patent application Ser. No. 13/838,542, filed on Mar. 15, 2013, entitled “MEMORY BUS PROTOCOL TO ENABLE CLUSTERING BETWEEN NODES OF DISTINCT PHYSICAL DOMAIN ADDRESS SPACES”; U.S. patent application Ser. No. 13/839,525, filed on Mar. 15, 2013, entitled “REMOTE-KEY BASED MEMORY BUFFER ACCESS CONTROL MECHANISM”; and U.S. patent application Ser. No. 13/828,555, filed on Mar. 14, 2013, entitled “MEMORY SHARING ACROSS DISTRIBUTED NODES”; the contents of each application in this paragraph are hereby incorporated by reference.
FIELD OF THE INVENTION
The present disclosure relates generally to techniques for detecting memory corruption in distributed node systems.
BACKGROUND
Many functionalities and services available over the Internet or over a corporate network are provided by one or more clusters of distributed computing nodes. For example, a database used to run a large scale business may be maintained by, and made available through, a plurality of database servers running on a plurality of distributed computing nodes that form a cluster. Using a cluster of computing nodes to provide a functionality or service may have a number of advantages. For example, with a cluster, it is relatively easy to add another node to increase the capacity of the system to meet increased demand. A cluster also makes it possible to load balance among the various nodes so that if one node becomes overburdened, work can be assigned to other nodes. In addition, a cluster makes it possible to tolerate failures so that if one or more nodes fail, the functionality or service is still available. Furthermore, nodes in a cluster may be able to share information in order to, for example, work together and carry out transactions, load balance, implement failure prevention and recovery, etc.
For applications that run on the cluster, memory corruption detection may be required. Memory corruption occurs when a memory location is inappropriately accessed or modified. One example of memory corruption occurs when an application attempts to advance a pointer variable beyond the memory allocated for a particular data structure. These memory errors can cause program crashes or unexpected program results.
Memory corruption detection schemes exist for single-machine applications. The single-machine memory corruption detection schemes allow a computer to track application pointers at run-time and inform a user of memory errors.
However, applications that run on clusters are more difficult to debug than single-machine applications. Some solutions exist for debugging applications running on clusters. Such debugging solutions may include in-house tool support, run-time support, or check-summing schemes. Unfortunately, these solutions complicate programming models, add performance overhead to a system, and may still fail to detect memory corruption.
The approaches described in this section are approaches that could be pursued, but not necessarily approaches that have been previously conceived or pursued. Therefore, unless otherwise indicated, it should not be assumed that any of the approaches described in this section qualify as prior art merely by virtue of their inclusion in this section.
BRIEF DESCRIPTION OF THE DRAWINGS
In the drawings:
FIG. 1 is a block diagram that depicts an example distributed node system in an embodiment;
FIG. 2 illustrates an example in which some nodes in a distributed node system are sharing memory, in accordance with an embodiment;
FIG. 3 is a flow diagram that depicts a procedure for detecting memory corruption in a node, in an embodiment;
FIG. 4 is a flow diagram that depicts a procedure for updating a cache line when loading the cache line while detecting memory corruption, in an embodiment;
FIG. 5A is a flow diagram that depicts a procedure for performing a store in a remote node, in an embodiment;
FIG. 5B is a flow diagram that depicts a procedure for propagating a store from a remote node to a source node, in an embodiment;
FIG. 6 is a flow diagram that depicts a procedure for performing a store in a source node, in an embodiment;
FIG. 7 is a block diagram that illustrates a computer system upon which an embodiment of the invention may be implemented.
DETAILED DESCRIPTION
In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be apparent, however, that the present invention may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to avoid unnecessarily obscuring the present invention.
General Overview
According to embodiments described herein, nodes in a distributed node system are configured to support memory corruption detection when memory is shared between the nodes. Nodes in the distributed node system share data in units of memory referred to herein as “shared cache lines.” A node associates a version value with data in a shared cache line. The version value and data may be stored in a shared cache line in the node's main memory. When the node performs a memory operation, it can use the version value to determine whether memory corruption has occurred. For example, a pointer may be associated with a version value. When the pointer is used to access memory, the version value of the pointer may indicate the expected version value at the memory location. If the version values do not match, then memory corruption has occurred.
A pointer, as the term is used herein, is a value that contains an address to a memory location of another value stored in memory. The value is loadable into a register of a processor. According to an embodiment, a pointer contains two separate values, a version value and a virtual address, which is translated to a physical address for execution of a memory operation.
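As a concrete illustration (not the patent's required layout), the following C sketch packs a 4-bit version value into the otherwise unused upper bits of a 64-bit pointer word and strips it off again before the value is used as an address; the field width and position are assumptions made for this example.

```c
#include <stdint.h>
#include <stdio.h>

#define VERSION_SHIFT 60
#define VERSION_MASK  ((uint64_t)0xF << VERSION_SHIFT)

typedef uint64_t tagged_ptr;    /* version value + virtual address in one word */

tagged_ptr make_tagged_ptr(void *addr, uint8_t version) {
    return ((uint64_t)(uintptr_t)addr & ~VERSION_MASK) |
           ((uint64_t)(version & 0xF) << VERSION_SHIFT);
}

uint8_t ptr_version(tagged_ptr p) {
    return (uint8_t)((p & VERSION_MASK) >> VERSION_SHIFT);
}

void *ptr_address(tagged_ptr p) {
    /* Drop the version bits before the value is used as an address. */
    return (void *)(uintptr_t)(p & ~VERSION_MASK);
}

int main(void) {
    int x = 42;
    tagged_ptr p = make_tagged_ptr(&x, 7);
    printf("version=%u value=%d\n",
           (unsigned)ptr_version(p), *(int *)ptr_address(p));
    return 0;
}
```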
The nodes in a distributed node system share portions of their main memory with other nodes in the system. A node (“source node”) makes a portion of its main memory available for sharing with other nodes in the system, and another node (“remote node”) copies the shared memory portion in its own main memory. A memory portion may comprise one or more shared cache lines. The remote node creates a copied cache line that is a copy of a source cache line in the source node.
In an embodiment, a shared cache line comprises version bits and data bits. The version bits of a shared cache line indicate a version value associated with the shared cache line. A pointer configured to point to the shared cache line also contains a version value. When the pointer is used to perform a memory operation on the shared cache line, the node compares the version value of the pointer to the version value indicated by the version bits of the shared cache line.
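A minimal sketch of that comparison, assuming a 64-byte shared cache line with a one-byte version field; the structure layout and function names are illustrative only.

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define LINE_DATA_BYTES 64

/* Assumed layout: version bits stored alongside the data bits of a line. */
struct shared_cache_line {
    uint8_t version;
    uint8_t data[LINE_DATA_BYTES];
};

/* Compare the pointer's version value with the line's version bits; a
 * mismatch means the memory operation should trap instead of proceeding. */
bool versions_match(uint8_t pointer_version,
                    const struct shared_cache_line *line) {
    return pointer_version == line->version;
}

/* Example load path: refuse to read (signal corruption) on a mismatch. */
bool checked_load(const struct shared_cache_line *line, uint8_t pointer_version,
                  void *dst, size_t len) {
    if (!versions_match(pointer_version, line))
        return false;                        /* memory corruption detected */
    memcpy(dst, line->data, len < LINE_DATA_BYTES ? len : LINE_DATA_BYTES);
    return true;
}
```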
In an embodiment, the source node generates the version value in response to a memory allocation request. For example, if an application allocates memory for a data structure, the source node may generate a version value to be associated with that data structure. The generated version value and the associated data structure may be copied in the main memory of the local node.
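The allocation side might look like the following sketch: a freshly generated version value stamps every cache line backing the allocation, and the same value is handed back for embedding in the returned pointer. The generator that cycles through versions 1 to 15 and reserves 0 for "invalid" is an assumption of this example, not a requirement of the patent.

```c
#include <stdint.h>
#include <stdlib.h>

#define LINE_DATA_BYTES 64

struct shared_cache_line {
    uint8_t version;
    uint8_t data[LINE_DATA_BYTES];
};

/* Hypothetical generator: cycles through 1..15 and never returns 0, which
 * this sketch reserves for marking a copied cache line invalid. */
uint8_t generate_version(void) {
    static uint8_t next = 1;
    uint8_t v = next;
    next = (uint8_t)(next % 15 + 1);
    return v;
}

/* Allocate nlines whole cache lines for a data structure, stamp each with
 * the new version, and report that version so the caller (e.g. the heap
 * manager) can embed it in the pointer returned to the application. */
struct shared_cache_line *alloc_versioned(size_t nlines, uint8_t *version_out) {
    struct shared_cache_line *lines = calloc(nlines, sizeof *lines);
    if (lines == NULL)
        return NULL;
    *version_out = generate_version();
    for (size_t i = 0; i < nlines; i++)
        lines[i].version = *version_out;
    return lines;
}
```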
In an embodiment, the memory operation is requested by an application. If a node detects that memory corruption has occurred, the node may inform the application of the error. The node may also terminate the memory operation rather than execute it.
In another embodiment, a node uses the version value to maintain coherency between nodes. For example, the version value in a remote cache line may indicate that the remote cache line is out of date. The remote node may then update the remote cache line from the corresponding source cache line. In an embodiment, one or more version values are reserved for indicating when the copied cache line is invalid. The one or more reserved version values are not used when a node generates a version value in response to a memory allocation request.
System Overview
FIG. 1 shows a block diagram of an example distributed node system 100, in an embodiment. Distributed node system 100 includes three nodes: Node 1 102A, Node 2 102B, and Node 3 102C. Although three nodes are shown in the present illustration, system 100 may include more or fewer nodes.
Each node 102 includes a main memory 108. The main memory 108 includes one or more shared cache lines 106. In an embodiment, shared cache line 106 comprises version bits 112 and data bits 114. Data is stored in data bits 114. Version bits 112 indicate a version value associated with the shared cache line 106. Shared cache lines 106 may be the same size or the size may vary.
A node 102 may make a portion of its main memory 108 available for sharing with other nodes (“shared memory portion”). Another node 102 may allocate a portion of its main memory 108 (“copied memory portion”) for duplicating the contents of the shared memory portion. In an embodiment, a node 102 may both make a portion of its main memory 108 available for sharing and may copy a portion of main memory 108 made available by another node 102. For purposes of the present invention, a node 102 may share any number of memory portions (zero or more) and may copy any number of shared memory portions (zero or more). Each memory portion may include one or more shared cache lines 106. In an embodiment, sharing or copying a portion of main memory 108 includes, respectively, sharing or copying the one or more shared cache lines 106.
As an example, in FIG. 2, Node 2 102B is making a portion of its main memory 108B available for sharing with the other nodes. Nodes 1 and 3 are copying the shared memory portion 202. Thus, Node 1 102A has a memory portion 204A in its main memory 108A that is a copy of the shared memory portion 202, and Node 3 102C has a memory portion 204C in main memory 108C that is a copy of the shared memory portion 202. Node 3 102C is also making a portion of its main memory 108C available for sharing with the other nodes. Nodes 1 and 2 are copying the shared memory portion 206. Therefore, Node 2 102B has a memory portion 208B that is a copy of the shared memory portion 206, and Node 1 102A has a memory portion 208A that is a copy of the shared memory portion 206. In the illustrated example, Nodes 2 and 3 are both sharing a memory portion and copying a shared memory portion from another node. Node 1 is copying a memory portion from two nodes, but is not sharing a memory portion.
In an embodiment, a node 102 may include a directory 210. The directory 210 indicates, for each shared memory portion, which nodes in system 100 contain a copy of that shared memory portion. In an embodiment, the directory 210 contains an entry for each source cache line in the shared memory portion. That is, the directory 210 contains an entry for each shared cache line for which the node 102 is a source node.
In an embodiment, a node 102 may include an index 212. The index 212 indicates, for each shared memory portion, the location of the directory in main memory 108 of the shared memory portion. The index 212 also indicates, for each copied memory portion, the source node that shared the memory portion and the location of the shared memory portion in the main memory of the source node. In an embodiment, the index 212 contains an entry for each shared cache line in the main memory 108. The index 212 indicates, for each shared cache line in a copied memory portion, the source node that shared the source cache line and the location of the source cache line in the main memory of the source node.
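A minimal C sketch of how a directory entry and an index entry might be represented is shown below; the field names, the 64-node bitmaps, and the union layout are assumptions for illustration, not a description of any particular embodiment.

    #include <stdbool.h>
    #include <stdint.h>

    /* Directory entry kept by a source node for one source cache line:
     * which nodes hold a copy and whether each copy is currently valid.
     * A 64-bit bitmap (at most 64 nodes) is an assumption of this sketch. */
    typedef struct directory_entry {
        uint64_t copied_by;    /* bit i set: node i holds a copied cache line */
        uint64_t copy_valid;   /* bit i set: node i's copy is up to date */
        int      lock;         /* serializes access to this directory entry */
    } directory_entry;

    /* Index entry kept by a node for one shared cache line in its main memory. */
    typedef struct index_entry {
        bool is_source;               /* true: this node is the source node */
        union {
            directory_entry *dir;     /* source line: its directory entry */
            struct {
                int      source_node; /* copied line: node that shared it */
                uint64_t source_addr; /* location in the source node's memory */
            } remote;
        } loc;
    } index_entry;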
System Initialization
In order to prepare the nodes 102 in system 100 to share memory, the nodes 102 are initialized. In an embodiment, the nodes 102 may be initialized in the manner described below. A node 102 may share any number of memory portions and may copy any number of memory portions shared by other nodes. Depending on what a node 102 decides to do, it may perform some, all, or none of the operations described.
During initialization, a node 102 determines whether it wishes to make any portion of its main memory 108 available for sharing with other nodes in the system 100. If it does, the node 102 broadcasts information to the other nodes 102 indicating its willingness to share a portion of its main memory. The information broadcasted may include information about the node 102, the size of the shared memory portion 202, as well as where the memory portion 202 is located in the main memory 108. The information indicates to other nodes in the system 100 where to access the shared memory location.
A node 102 may receive broadcasted information indicating that another node wishes to share a portion of its main memory. In response to receiving the broadcasted information, the node 102 may decide whether to copy or not to copy the shared memory portion 202. If the node 102 decides to copy the shared memory portion, the node will allocate a copied memory portion sufficient to store a copy of the shared memory portion.
In an embodiment, the node 102 does not populate the allocated memory with data. That is, the node only allocates the memory, but does not copy data from the shared memory portion. The node sets the version value for each copied cache line in the copied memory portion to a value that indicates the copied cache lines are invalid. In an embodiment, a node 102 will not copy the data from the shared memory portion into its copy of the memory portion until an application requests the data. When the node attempts to execute an operation that targets the copied cache line, the version value will indicate to the node that the shared cache line is invalid. The node may then copy the source cache line from the shared memory portion into the copied cache line in the copied memory portion.
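The lazy-allocation behavior described above might look roughly like the following C sketch, which reuses the shared_cache_line type and VERSION_INVALID marker from the earlier sketch; allocate_local is an assumed local allocator and is not part of any described embodiment.

    #include <stdlib.h>

    /* shared_cache_line and VERSION_INVALID as in the earlier sketch. */
    extern void *allocate_local(size_t bytes);   /* assumed local allocator */

    shared_cache_line *allocate_copied_portion(size_t n_lines)
    {
        shared_cache_line *portion =
            allocate_local(n_lines * sizeof(shared_cache_line));
        /* Data is deliberately left unpopulated; marking every line invalid
         * causes the first access to trap and fetch the source cache line. */
        for (size_t i = 0; i < n_lines; i++)
            portion[i].version = VERSION_INVALID;
        return portion;
    }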
In an embodiment, if node 102 is sharing a portion of its main memory 108, the node allocates memory in main memory 108 for storing a directory structure 210. The directory structure 210 indicates which nodes contain a copy of each memory portion shared by node 102. In an embodiment, the directory structure 210 comprises a directory entry for each shared cache line that is in the shared memory portion. In other words, each source cache line is associated with a directory entry. Thus, the directory entries indicate, for each source cache line, which other nodes have a copied cache line that should be a copy of that source cache line. In an embodiment, the directory entry may also indicate whether each copied cache line in the remote nodes is a valid (up-to-date) copy. In an embodiment, the directory entry may include a lock to serialize access to the directory entry.
In an embodiment, node 102 allocates memory in its main memory 108 for an index structure 212. The index structure 212 comprises an index entry for each shared cache line in main memory 108. If the node 102 is sharing a shared cache line in a shared memory portion, the index entry indicates the location in main memory 108 of the directory entry for the shared cache line. If the shared cache line is in a copied memory portion, the index entry indicates the source node that shared the shared memory portion and the location of the corresponding source cache line in the main memory of the source node. In an embodiment, the node 102 updates the index structure 212 if it decides to copy a shared memory portion upon receiving broadcasted information from a source node. The information received from the source node may correspond to information stored in the index structure 212.
Exemplary Memory Allocation
In an embodiment, node 102 assigns a version value to a memory location when the memory is allocated. For example, when an application performs a malloc request, the node 102 allocates the requested amount of memory, generates a version value to associate with the allocated memory, and returns a pointer to the application. In an embodiment, the allocated memory location comprises one or more shared cache lines. A version value may be indicated by the version bits of each shared cache line.
In an embodiment, the version value is generated by the heap manager of the application. The version value may be chosen from a range of valid values. In an embodiment, one or more version values are used to indicate when a shared cache line is invalid, and are not included in the range of valid values to choose from. The format of the version value may vary depending on the implementation. For example, the version value may be four bits long, resulting in sixteen possible values. In another example, the version value may be a 44-bit time stamp.
The version value is also associated with the pointer to the allocated memory. In an embodiment, a pointer includes both a version value and a virtual address. For example, a node might use 64-bit registers to store a pointer, but the virtual address does not use the entire 64 bits. The version value may be stored in the unused bits of the 64-bit register.
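One possible way to carry a version value in otherwise-unused pointer bits is sketched below in C; the bit positions, the versioned_malloc wrapper, and the pick_valid_version and set_line_versions helpers are assumptions made for illustration, not a description of any particular hardware or heap manager.

    #include <stdint.h>
    #include <stdlib.h>

    #define VERSION_SHIFT 60       /* assumed: top 4 bits of a 64-bit pointer are unused */
    #define VERSION_MASK  0xFULL   /* assumed 4-bit version value */

    static inline void *tag_pointer(void *p, uint8_t v)
    {   /* assumes the top bits of p are zero */
        return (void *)((uintptr_t)p | (((uintptr_t)v & VERSION_MASK) << VERSION_SHIFT));
    }

    static inline uint8_t pointer_version(const void *p)
    {
        return (uint8_t)(((uintptr_t)p >> VERSION_SHIFT) & VERSION_MASK);
    }

    static inline void *strip_version(const void *p)
    {
        return (void *)((uintptr_t)p & ~(VERSION_MASK << VERSION_SHIFT));
    }

    /* Hypothetical malloc wrapper: the heap manager picks a version value from
     * the valid range, writes it into the version bits of every cache line of
     * the allocation, and returns a tagged pointer to the application. */
    extern uint8_t pick_valid_version(void);                          /* assumed */
    extern void set_line_versions(void *mem, size_t bytes, uint8_t v); /* assumed */

    void *versioned_malloc(size_t bytes)
    {
        void *mem = malloc(bytes);
        if (mem == NULL)
            return NULL;
        uint8_t v = pick_valid_version();
        set_line_versions(mem, bytes, v);
        return tag_pointer(mem, v);
    }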
If the allocated memory is being shared as part of a shared memory portion, other nodes 102 may copy the shared cache lines in the allocated memory location into their respective copied memory portions. In an embodiment, copying the shared cache lines includes copying the associated version value. The other nodes 102 may also generate pointers to the copied shared cache lines. A version value may be stored in association with each generated pointer.
Pointer-Based Memory Corruption Detection
FIG. 3 is a flowchart illustrating a procedure for detecting memory corruption in a node 102 using a version value associated with a pointer. The procedure may be performed when performing a memory operation involving a shared cache line referred to by a pointer, where the pointer is associated with a version value. The procedure may be referred to hereafter as pointer-based memory corruption detection.
For example, node 102 receives a command from an application. The command may be, for example, a request to execute a memory operation such as a load or a store command. During execution of the command, the node 102 executes steps for detecting memory corruption. The command may include a pointer to a shared cache line in main memory 108. As discussed above, in an embodiment, when the node 102 allocates memory to an application, the node returns a pointer that is associated with a version value.
In step 302, the node 102 determines the version value associated with the pointer included with the command. In an embodiment, the pointer includes the version value. The version value associated with the pointer may indicate a version value the command expects to be associated with the requested shared cache line. For example, if the command is using the pointer to access a data structure, the version value may be associated with the data structure.
In step 304, the node 102 compares the version value of the pointer with a version value associated with the requested shared cache line. In an embodiment, the version bits of the shared cache line indicate the version value associated with the shared cache line. The method then proceeds to decision block 308.
At decision block 308, if the version value of the pointer does not match the version value associated with the requested shared cache line, memory corruption is detected. In an embodiment, a trap operation is executed. The trap operation may include indicating to the application that a memory corruption was detected. The trap operation may also include terminating execution of the memory operation. Alternatively, the procedure ends and the memory operation proceeds.
If the version value of the pointer matches the version value associated with the requested shared cache line, then the procedure ends and the memory operation proceeds.
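A compact C sketch of this check, assuming the pointer-tagging helpers from the earlier sketch plus hypothetical line_version and trap_memory_corruption helpers, might read:

    #include <stdbool.h>
    #include <stdint.h>

    extern uint8_t line_version(const void *addr);     /* assumed: reads version bits 112 */
    extern void trap_memory_corruption(void *addr);    /* assumed: informs the application */

    bool check_pointer_version(void *tagged_ptr)
    {
        void    *addr     = strip_version(tagged_ptr);
        uint8_t  expected = pointer_version(tagged_ptr);   /* step 302 */
        uint8_t  actual   = line_version(addr);            /* step 304 */
        if (expected != actual) {                           /* decision block 308 */
            trap_memory_corruption(addr);   /* may also terminate the memory operation */
            return false;
        }
        return true;    /* versions match: the memory operation proceeds */
    }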
The procedure for detecting memory corruption using a version value associated with a pointer illustrated in FIG. 3 may be performed while performing various kinds of memory operations. Such memory operations shall be described in further detail.
Coherency Between Nodes
In an embodiment, a version value in a shared cache line may also be used to manage coherency of shared cache lines between nodes. When a source node updates a source cache line, the copied cache lines in the remote nodes will be out of date. However, the remote nodes may not immediately update their copied cache lines. Instead, the version value of each copied cache line is set to indicate that the copied cache line is invalid. Later, if the remote node attempts to access the copied cache line, the node will see that the copied cache line is invalid and will update the copied cache line.
In an embodiment, when a node 102 executes a store command, it may execute a trap operation. In an embodiment, the node 102 will execute different steps depending on whether the target shared cache line is a source cache line or a copied cache line. If the target shared cache line is a copied cache line, then the node 102 will propagate the store to the source cache line in the source node. In an embodiment, a remote node may record the store in a store buffer prior to sending the store to the source node.
In an embodiment, the node 102 contains an index 212. If the requested shared cache line is a copied cache line, the index entry will indicate the source node and location of the source cache line for the copied cache line. Thus, the node 102 may reference the index 212 to determine whether the requested shared cache line is a copied cache line or a source cache line. Based on the determination, the node 102 may determine which steps to take to execute the store command.
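The index-based dispatch described above might be sketched as follows, reusing the index_entry type and strip_version helper from the earlier sketches; store_trap, source_node_store, and remote_node_store are assumed names used only for illustration.

    #include <stddef.h>

    extern index_entry *index_lookup(const void *addr);                    /* assumed */
    extern void source_node_store(void *tagged_ptr, const void *data, size_t n);
    extern void remote_node_store(void *tagged_ptr, const void *data, size_t n);

    /* On a store trap, consult the index entry for the target line to decide
     * whether this node is the source node or a remote node for that line. */
    void store_trap(void *tagged_ptr, const void *data, size_t n)
    {
        index_entry *e = index_lookup(strip_version(tagged_ptr));
        if (e->is_source)
            source_node_store(tagged_ptr, data, n);   /* FIG. 6 path */
        else
            remote_node_store(tagged_ptr, data, n);   /* FIG. 5A path */
    }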
Remote Node Load
In an embodiment, nodes do not update a copied cache line when a source node updates a corresponding source cache line. A node may only update the copied cache line when the copied cache line is loaded at the node. A version value indicating that the copied cache line is invalid triggers the update. When the copied cache line is updated, memory corruption detection is performed. FIG. 4 is a flowchart illustrating a procedure for updating a shared cache line when a copied cache line is requested in a node 102.
In step 402, node 102 receives a command from an application. For example, the command may be a memory operation involving a load operation, such as a load command.
The command may include a pointer to a shared cache line in main memory 108. As discussed above, in an embodiment, when the node 102 allocates memory to an application, the node returns a pointer that is associated with a version value. For purposes of this illustration, it will be assumed that the pointer included with the command is associated with a version value.
In step 404, node 102 determines whether the version value indicates that the shared cache line is invalid. In an embodiment, at least one version value is reserved to indicate that a shared cache line is invalid and is not used during memory allocation. The requested shared cache line may or may not be a copied cache line. If it is a copied cache line, the version value may indicate that the line is invalid if, for example, the copied cache line has not yet been populated with data from the source cache line.
In one example, the shared cache line is not a copied cache line. In an embodiment, a shared cache line that is not in a copied memory portion is presumed to always be valid.
In another example, the shared cache line is a copied cache line. The data in the shared cache line may be out of date. That is, the data in the copied cache line is not the same as the data in the source cache line. This may occur, for example, when a source node stores data to the source cache line.
The method then proceeds to decision block 406. At decision block 406, if the version value indicates that the shared cache line is valid, the node 102 continues execution of the procedure and proceeds to step 410, where pointer-based memory corruption detection is performed.
If the version value indicates that the shared cache line is invalid, the method proceeds to step 408. In step 408, the node suspends execution of the command and executes a trap operation.
In an embodiment, the trap operation includes copying a source cache line to the copied cache line. Copying the source cache line may include copying the version bits and the data bits of the source cache line. Therefore, after the copy is performed, the version value of the copied cache line is set to the version value from the source cache line. The data in the copied cache line is set to the most recent data contained in the source cache line, as modified by any stores to the copied cache line that the remote node has recorded in its store buffer but has not yet propagated to the source cache line. Thus, the node is able to update the data in the shared cache line in order to maintain coherency with other nodes.
In an embodiment, the node contains an index 212. The node may use an index entry corresponding to the requested shared cache line in order to determine which source node contains the corresponding source cache line and where the corresponding source cache line is located in the main memory of the source node.
In an embodiment, the source node contains a directory 210. When the remote node updates its copied cache line, the source node may update the directory entry for the corresponding source cache line to indicate that the copy at the remote node is a valid copy.
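Putting the steps of FIG. 4 together, a remote-node load trap might be sketched in C as follows; fetch_from_source, replay_local_store_buffer, mark_copy_valid_in_directory, and line_of are assumed helpers, and the sketch reuses the types and functions introduced in the earlier sketches.

    #include <stdint.h>

    extern shared_cache_line *line_of(const void *addr);                  /* assumed */
    extern void fetch_from_source(shared_cache_line *copy,
                                  int source_node, uint64_t source_addr); /* assumed transfer */
    extern void replay_local_store_buffer(shared_cache_line *copy);       /* assumed */
    extern void mark_copy_valid_in_directory(int source_node,
                                             uint64_t source_addr);       /* assumed */

    void remote_load_trap(void *tagged_ptr)
    {
        void *addr = strip_version(tagged_ptr);
        shared_cache_line *line = line_of(addr);

        if (line->version == VERSION_INVALID) {            /* steps 404-406 */
            index_entry *e = index_lookup(addr);
            /* Trap: copy version bits and data bits from the source cache line. */
            fetch_from_source(line, e->loc.remote.source_node,
                              e->loc.remote.source_addr);
            /* Re-apply local stores recorded in the store buffer but not yet
             * propagated to the source node. */
            replay_local_store_buffer(line);
            mark_copy_valid_in_directory(e->loc.remote.source_node,
                                         e->loc.remote.source_addr);
        }
        check_pointer_version(tagged_ptr);                  /* step 410 */
    }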
Remote Node Store
As mentioned previously, in an embodiment, a remote node uses a store buffer to record a store before the store is sent to a source node. FIG. 5A is a flowchart illustrating a store performed by a remote node 102 in distributed node system 100. The store may be performed to execute a store command. The command may include a pointer to a copied cache line in main memory 108. The pointer may be associated with a version value.
In step 502, the node 102 suspends execution of the command and executes a trap operation that performs the steps that follow.
At step 504, the store is recorded in a store buffer. The information recorded in the store buffer may indicate a memory location to which to perform the store and what data to store. Recording the store in a store buffer may include indicating the source node, the location in the main memory of the source node of the source cache line to which the store should be performed, the storing thread, and the version number associated with the store(s).
In an embodiment, the node 102 contains an index 212. The node may use an index entry corresponding to the requested shared cache line in order to determine which source node contains the corresponding source cache line and where the corresponding source cache line is located in the main memory of the source node.
In step 506, the node 102 determines whether the version value of the copied cache line indicates the copied cache line is invalid. If the version value indicates that the copied cache line is invalid, then the store is not performed to the copied cache line. If the version value indicates that the copied cache line is valid, then the method proceeds to step 508.
At 508, the node 102 performs pointer-based memory corruption detection. If the pointer-based memory corruption detection performed by node 102 does not detect memory corruption, then the method proceeds to step 510.
At step 510, the node 102 stores the data in its shared cache line.
The trap operation ends.
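A C sketch of this remote-node store path, with store_buffer_append and the other helpers assumed and the earlier sketches' types reused, might read:

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    extern void store_buffer_append(int source_node, uint64_t source_addr,
                                    const void *data, size_t n, uint8_t version); /* assumed */

    void remote_node_store(void *tagged_ptr, const void *data, size_t n)
    {
        void *addr = strip_version(tagged_ptr);
        shared_cache_line *line = line_of(addr);
        index_entry *e = index_lookup(addr);

        /* Step 504: record the store for later propagation to the source node. */
        store_buffer_append(e->loc.remote.source_node, e->loc.remote.source_addr,
                            data, n, pointer_version(tagged_ptr));

        /* Steps 506-510: update the local copy only if it is valid and the
         * pointer version matches the version bits of the copied cache line. */
        if (line->version != VERSION_INVALID && check_pointer_version(tagged_ptr))
            memcpy(line->data, data, n);
    }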
Update Propagation
In an embodiment, a remote node records a store in its store buffer but does not immediately send the store to the source node containing the corresponding source cache line. After the node records the store in its store buffer, the store needs to be propagated to the source node. Propagating the store may be performed as part of the same procedure as recording the store in the store buffer, or it may be performed separately. In an embodiment, the node may receive a command that includes a propagate stores operation. For example, the store command may include instructions to propagate the store. The store may be propagated after the trap operation is completed, as part of resuming execution of the store command. In another embodiment, the node 102 may check the store buffer for entries prior to writing to a shared cache line. FIG. 5B is a flowchart illustrating store propagation in the distributed node system 100. The store may be propagated asynchronously by another thread of execution.
At step 522, the node retrieves an entry from the store buffer. The entry may include information indicating a source node, a source cache line to which the store should be performed, the data to be stored, the version number associated with the store(s) and the storing thread.
At step 524, the node 102 requests from the source node a list of remote nodes for the source cache line. After receiving the information, the method proceeds to step 526.
In an embodiment, in response to the request, the source node refers to the directory entry for that shared cache line. The directory entry indicates which nodes contain a copy of the source cache line. Any number of nodes in system 100 may contain a copy of the source cache line. In an embodiment, when accessing the directory entry for the requested shared cache line, the source node locks the directory entry. In an embodiment, the source node only shares a list of remote nodes that contain a valid copy of the source cache line. The directory entry may be updated to indicate that all remote nodes contain an invalid copy.
At step 526, the node 102 causes other remote nodes that contain a copy of the source cache line to mark their copied cache lines as invalid. The node indicates to each node that holds a respective copied cache line that the data in the source cache line has been changed. The version value of the copied cache line at the remote nodes is changed to indicate that the copied cache line is invalid.
At step 528, the node 102 notifies the source node to perform the store. The notification may include the location of the source cache line in the main memory of the source node, the data to be stored in the source cache line, and the version number. Before performing the store, the source node compares the version number from the store buffer to the version number in the respective source cache line. If a version mismatch is detected, the source node does not perform the store and the issuing thread may be notified, for example via an asynchronous trap.
At step 530, the stored data is removed from the store buffer.
In an embodiment, the steps are repeated for each entry in the store buffer.
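The propagation loop of FIG. 5B might be sketched as follows; the store_entry layout and all messaging helpers are assumptions made for illustration only.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    typedef struct store_entry {       /* assumed store-buffer record */
        int      source_node;
        uint64_t source_addr;
        uint8_t  version;
        size_t   len;
        uint8_t  data[64];
    } store_entry;

    extern bool store_buffer_pop(store_entry *out);                        /* step 522 */
    extern uint64_t request_copy_holders(int source_node, uint64_t addr);  /* step 524 */
    extern void invalidate_copy_at(int node, uint64_t addr);               /* step 526 */
    extern bool source_store(int source_node, uint64_t addr, const void *data,
                             size_t n, uint8_t version);                   /* step 528 */
    extern void notify_version_mismatch(const store_entry *e);   /* assumed async trap */

    void propagate_stores(int self)
    {
        store_entry e;
        while (store_buffer_pop(&e)) {                 /* entry removed: step 530 */
            uint64_t holders = request_copy_holders(e.source_node, e.source_addr);
            for (int n = 0; n < 64; n++)
                if ((holders & (1ULL << n)) && n != self)
                    invalidate_copy_at(n, e.source_addr);
            /* The source node re-checks the version before performing the store. */
            if (!source_store(e.source_node, e.source_addr, e.data, e.len, e.version))
                notify_version_mismatch(&e);
        }
    }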
In an alternative embodiment, a remote node does not record the store in a store buffer. Instead, the remote node performs the update propagation steps during execution of the trap operation, in place of writing to the store buffer.
Source Node Store
In an embodiment, the source node executes a store command to store to a shared cache line without using a store buffer. FIG. 6 is a flowchart illustrating steps performed by a source node 102 to execute a store command in a distributed node system 100. The store command may include a pointer to a source cache line in main memory 108. The pointer may be associated with a version value.
At step 602, the node 102 suspends execution of the store command and executes a trap operation.
At step 604, the node 102 performs pointer-based memory corruption detection for the source cache line. If no memory corruption is detected, then the method proceeds to step 606. If memory corruption is detected, then the method exits the trap operation without performing the store.
In step 606, the node 102 instructs the remote nodes to invalidate their respective copied cache lines. The node 102 indicates to each remote node that the data in the source cache line has been changed. The version value of the copied cache line at the remote nodes is changed to indicate that the copied cache line is invalid.
In an embodiment, the source node refers to the directory entry for that shared cache line. The directory entry indicates which nodes contain a copy of the source cache line. Any number of nodes in system 100 may contain a copy of the source cache line. The node indicates to each node that is copying the source cache line that the data has been changed. The version value of the copied cache line at the other nodes is changed to indicate that the copy of the source cache line is invalid.
In an embodiment, the invalidation of the source cache line is recorded and an instruction to the remote nodes to invalidate is sent lazily. For example, a thread other than a thread performing the store discovers the recording of the invalidated source cache line and sends instructions to the remote nodes to invalidate the copied cache line of the source cache line.
At step 608, the source node performs the store on the source cache line.
The source node completes the trap operation.
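A C sketch of the FIG. 6 source-node store path, with directory_holders and invalidate_remote_copy assumed and the earlier sketches' helpers reused, might read:

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    extern uint64_t directory_holders(const void *source_addr);    /* assumed directory lookup */
    extern void invalidate_remote_copy(int node, const void *source_addr); /* assumed */

    void source_node_store(void *tagged_ptr, const void *data, size_t n)
    {
        void *addr = strip_version(tagged_ptr);

        if (!check_pointer_version(tagged_ptr))         /* step 604 */
            return;                                     /* exit the trap without storing */

        uint64_t holders = directory_holders(addr);     /* step 606: invalidate copies */
        for (int node = 0; node < 64; node++)
            if (holders & (1ULL << node))
                invalidate_remote_copy(node, addr);

        memcpy(line_of(addr)->data, data, n);           /* step 608 */
    }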
Hardware Overview
According to one embodiment, the techniques described herein are implemented by one or more special-purpose computing devices. The special-purpose computing devices may be hard-wired to perform the techniques, or may include digital electronic devices such as one or more application-specific integrated circuits (ASICs) or field programmable gate arrays (FPGAs) that are persistently programmed to perform the techniques, or may include one or more general purpose hardware processors programmed to perform the techniques pursuant to program instructions in firmware, memory, other storage, or a combination. Such special-purpose computing devices may also combine custom hard-wired logic, ASICs, or FPGAs with custom programming to accomplish the techniques. The special-purpose computing devices may be desktop computer systems, portable computer systems, handheld devices, networking devices or any other device that incorporates hard-wired and/or program logic to implement the techniques.
For example, FIG. 7 is a block diagram that illustrates a computer system 700 upon which an embodiment of the invention may be implemented. Computer system 700 includes a bus 702 or other communication mechanism for communicating information, and a hardware processor 704 coupled with bus 702 for processing information. Hardware processor 704 may be, for example, a general purpose microprocessor.
Computer system 700 also includes a main memory 706, such as a random access memory (RAM) or other dynamic storage device, coupled to bus 702 for storing information and instructions to be executed by processor 704. Main memory 706 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 704. Such instructions, when stored in non-transitory storage media accessible to processor 704, render computer system 700 into a special-purpose machine that is customized to perform the operations specified in the instructions.
Computer system 700 further includes a read only memory (ROM) 708 or other static storage device coupled to bus 702 for storing static information and instructions for processor 704. A storage device 710, such as a magnetic disk, optical disk, or solid-state drive is provided and coupled to bus 702 for storing information and instructions.
Computer system 700 may be coupled via bus 702 to a display 712, such as a cathode ray tube (CRT), for displaying information to a computer user. An input device 714, including alphanumeric and other keys, is coupled to bus 702 for communicating information and command selections to processor 704. Another type of user input device is cursor control 716, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processor 704 and for controlling cursor movement on display 712. This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allows the device to specify positions in a plane.
Computer system 700 may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which in combination with the computer system causes or programs computer system 700 to be a special-purpose machine. According to one embodiment, the techniques herein are performed by computer system 700 in response to processor 704 executing one or more sequences of one or more instructions contained in main memory 706. Such instructions may be read into main memory 706 from another storage medium, such as storage device 710. Execution of the sequences of instructions contained in main memory 706 causes processor 704 to perform the procedure steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions.
The term “storage media” as used herein refers to any non-transitory media that store data and/or instructions that cause a machine to operate in a specific fashion. Such storage media may comprise non-volatile media and/or volatile media. Non-volatile media includes, for example, optical disks, magnetic disks, or solid-state drives, such as storage device 710. Volatile media includes dynamic memory, such as main memory 706. Common forms of storage media include, for example, a floppy disk, a flexible disk, a hard disk, a solid-state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, NVRAM, or any other memory chip or cartridge.
Storage media is distinct from but may be used in conjunction with transmission media. Transmission media participates in transferring information between storage media. For example, transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise bus 702. Transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications.
Various forms of media may be involved in carrying one or more sequences of one or more instructions to processor 704 for execution. For example, the instructions may initially be carried on a magnetic disk or solid-state drive of a remote computer. The remote computer can load the instructions into its dynamic memory and send the instructions over a telephone line using a modem. A modem local to computer system 700 can receive the data on the telephone line and use an infra-red transmitter to convert the data to an infra-red signal. An infra-red detector can receive the data carried in the infra-red signal and appropriate circuitry can place the data on bus 702. Bus 702 carries the data to main memory 706, from which processor 704 retrieves and executes the instructions. The instructions received by main memory 706 may optionally be stored on storage device 710 either before or after execution by processor 704.
Computer system 700 also includes a communication interface 718 coupled to bus 702. Communication interface 718 provides a two-way data communication coupling to a network link 720 that is connected to a local network 722. For example, communication interface 718 may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, communication interface 718 may be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, communication interface 718 sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.
Network link 720 typically provides data communication through one or more networks to other data devices. For example, network link 720 may provide a connection through local network 722 to a host computer 724 or to data equipment operated by an Internet Service Provider (ISP) 726. ISP 726 in turn provides data communication services through the world wide packet data communication network now commonly referred to as the “Internet” 728. Local network 722 and Internet 728 both use electrical, electromagnetic or optical signals that carry digital data streams. The signals through the various networks and the signals on network link 720 and through communication interface 718, which carry the digital data to and from computer system 700, are example forms of transmission media.
Computer system 700 can send messages and receive data, including program code, through the network(s), network link 720 and communication interface 718. In the Internet example, a server 730 might transmit a requested code for an application program through Internet 728, ISP 726, local network 722 and communication interface 718.
The received code may be executed by processor 704 as it is received, and/or stored in storage device 710, or other non-volatile storage for later execution.
In the foregoing specification, embodiments of the invention have been described with reference to numerous specific details that may vary from implementation to implementation. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. The sole and exclusive indicator of the scope of the invention, and what is intended by the applicants to be the scope of the invention, is the literal and equivalent scope of the set of claims that issue from this application, in the specific form in which such claims issue, including any subsequent correction.

Claims (24)

What is claimed is:
1. A method, comprising:
in a memory of a local node, generating a copied cache line that is a copy of a source cache line on a source node, wherein said copied cache line comprises version bits and data bits, said version bits being set to a version value;
generating a pointer that points to said copied cache line, said pointer having a pointer value that includes said version value;
using said pointer to perform a memory operation on said copied cache line, wherein performing a memory operation includes:
comparing the version value included in said pointer value to the version value to which the version bits of the copied cache line are set; and
determining whether memory corruption has occurred based on the comparison.
2. The method of claim 1, wherein generating a copied cache line includes:
determining a version value of the source cache line; and
setting the version bits to the version value of the source cache line.
3. The method of claim 2, wherein the version value is generated by the source node in response to a memory allocation request.
4. The method of claim 1, wherein comparing the version value includes:
determining whether the copied cache line is invalid; and
in response to determining that the copied cache line is invalid, copying the source cache line to the copied cache line.
5. The method of claim 4, wherein the method further includes the steps of:
the local node storing in a store buffer one or more updates to the copied cache line that have not been propagated to said source cache line; and
further in response to determining that the copied cache line is invalid, propagating said one or more updates to said copied cache line.
6. The method of claim 1, further comprising:
executing a trap operation if memory corruption has occurred.
7. The method of claim 6, wherein executing a trap operation includes:
informing an application that memory corruption has occurred.
8. The method of claim 6, wherein executing a trap operation includes:
terminating the memory operation.
9. One or more non-transitory storage media storing instructions which, when executed by one or more processors, cause performance of:
in a memory of a local node, generating a copied cache line that is a copy of a source cache line on a source node, wherein said copied cache line comprises version bits and data bits, said version bits being set to a version value;
generating a pointer that points to said copied cache line, said pointer having a pointer value that includes said version value;
using said pointer to perform a memory operation on said copied cache line, wherein performing a memory operation includes:
comparing the version value included in said pointer value to the version value to which the version bits of the copied cache line are set;
determining whether said copied cache line has been corrupted based on the comparison.
10. The one or more non-transitory storage media of claim 9, wherein generating a copied cache line includes:
determining a version value of the source cache line;
setting the version bits to the version value of the source cache line.
11. The one or more non-transitory storage media of claim 10, wherein the version value is generated by the source node in response to a memory allocation request.
12. The one or more non-transitory storage media of claim 9, wherein comparing the version value includes:
determining whether the copied cache line is invalid;
copying the source cache line to the copied cache line if the copied cache line is invalid.
13. The one or more non-transitory storage media of claim 12, wherein the instructions further include instructions for:
the local node storing in a store buffer one or more updates to the copied cache line that have not been propagated to said source cache line; and
further in response to determining that the copied cache line is invalid, propagating said one or more updates to said copied cache line.
14. The one or more non-transitory storage media of claim 9, further comprising:
executing a trap operation if memory corruption has occurred.
15. The one or more non-transitory storage media of claim 14, wherein executing a trap operation includes:
informing an application that memory corruption has occurred.
16. The one or more non-transitory storage media of claim 14, wherein executing a trap operation includes:
terminating the memory operation.
17. A computer system, comprising:
one or more computing nodes, wherein each computing node of the one or more computing nodes is configured to:
in a memory of said each computing node, generate a copied cache line that is a copy of a source cache line on a source node belonging to said one or more computing nodes, wherein said copied cache line comprises version bits and data bits, said version bits being set to a version value;
generate a pointer that points to said copied cache line, said pointer having a pointer value that includes said version value;
use said pointer to perform a memory operation on said copied cache line, wherein the memory operation includes:
to compare the version value included in said pointer value to the version value to which the version bits of the copied cache line are set; and
to determine whether said copied cache line has been corrupted based on the comparison.
18. The system of claim 17, wherein to generate a copied cache line, each computing node of the one or more computing nodes is configured to:
determine the version value of said source cache line; and
set the version bits to the version value of the source cache line.
19. The system of claim 18, wherein for each computing node of the one or more computing nodes, the version value is generated by a source node that is configured to generate the version value in response to a memory allocation request.
20. The system of claim 17, wherein for each computing node of said one or more computing nodes, to compare the version value, each computing node is configured to:
determine whether the copied cache line is invalid; and
copy the source cache line to the copied cache line if the copied cache line is invalid.
21. The system of claim 17, wherein for each computing node of said one or more computing nodes, each computing node is configured to:
store in a store buffer one or more updates to the copied cache line that have not been propagated to said source cache line; and
further in response to the determination that the copied cache line is invalid, propagate said one or more updates to said copied cache line.
22. The system of claim 17, wherein each computing node of said one or more computing nodes is configured to execute a trap operation if memory corruption has occurred.
23. The system of claim 22, wherein for each computing node of said one or more computing nodes, to execute a trap operation, each computing node is configured to inform an application that memory corruption has occurred.
24. The system of claim 22, wherein to execute a trap operation, each computing node of said one or more computing nodes is configured to terminate the memory operation.
US14/530,354 2014-03-28 2014-10-31 Memory corruption detection support for distributed shared memory applications Active 2035-08-23 US9898414B2 (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
US14/530,354 US9898414B2 (en) 2014-03-28 2014-10-31 Memory corruption detection support for distributed shared memory applications
EP15714996.4A EP3123331B1 (en) 2014-03-28 2015-03-10 Memory corruption detection support for distributed shared memory applications
JP2017502751A JP6588080B2 (en) 2014-03-28 2015-03-10 Support for detecting memory corruption in distributed shared memory applications
PCT/US2015/019587 WO2015148100A1 (en) 2014-03-28 2015-03-10 Memory corruption detection support for distributed shared memory applications
CN201580016557.7A CN106164870B (en) 2014-03-28 2015-03-10 The memory damage detection of distributed shared memory application is supported

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201461972082P 2014-03-28 2014-03-28
US14/530,354 US9898414B2 (en) 2014-03-28 2014-10-31 Memory corruption detection support for distributed shared memory applications

Publications (2)

Publication Number Publication Date
US20150278103A1 US20150278103A1 (en) 2015-10-01
US9898414B2 true US9898414B2 (en) 2018-02-20

Family

ID=54190571

Family Applications (1)

Application Number Title Priority Date Filing Date
US14/530,354 Active 2035-08-23 US9898414B2 (en) 2014-03-28 2014-10-31 Memory corruption detection support for distributed shared memory applications

Country Status (5)

Country Link
US (1) US9898414B2 (en)
EP (1) EP3123331B1 (en)
JP (1) JP6588080B2 (en)
CN (1) CN106164870B (en)
WO (1) WO2015148100A1 (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9063974B2 (en) 2012-10-02 2015-06-23 Oracle International Corporation Hardware for table scan acceleration
US9679084B2 (en) 2013-03-14 2017-06-13 Oracle International Corporation Memory sharing across distributed nodes
US9858140B2 (en) 2014-11-03 2018-01-02 Intel Corporation Memory corruption detection
US10073727B2 (en) * 2015-03-02 2018-09-11 Intel Corporation Heap management for memory corruption detection
US9619313B2 (en) 2015-06-19 2017-04-11 Intel Corporation Memory write protection for memory corruption detection architectures
US10162694B2 (en) 2015-12-21 2018-12-25 Intel Corporation Hardware apparatuses and methods for memory corruption detection
US10191791B2 (en) 2016-07-02 2019-01-29 Intel Corporation Enhanced address space layout randomization
US10803039B2 (en) 2017-05-26 2020-10-13 Oracle International Corporation Method for efficient primary key based queries using atomic RDMA reads on cache friendly in-memory hash index
EP3502898A1 (en) * 2017-12-20 2019-06-26 Vestel Elektronik Sanayi ve Ticaret A.S. Devices and methods for determining possible corruption of data stored in a memory of an electronic device
US10467139B2 (en) 2017-12-29 2019-11-05 Oracle International Corporation Fault-tolerant cache coherence over a lossy network
US10452547B2 (en) 2017-12-29 2019-10-22 Oracle International Corporation Fault-tolerant cache coherence over a lossy network
CN111198746B (en) * 2018-11-20 2023-05-30 中标软件有限公司 Communication method and system between hosts based on shared storage in virtualized cluster
CN113312385A (en) * 2020-07-07 2021-08-27 阿里巴巴集团控股有限公司 Cache operation method, device and system, storage medium and operation equipment

Citations (45)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4817140A (en) 1986-11-05 1989-03-28 International Business Machines Corp. Software protection system using a single-key cryptosystem, a hardware-based authorization system and a secure coprocessor
US5133053A (en) 1987-02-13 1992-07-21 International Business Machines Corporation Interprocess communication queue location transparency
US5522045A (en) 1992-03-27 1996-05-28 Panasonic Technologies, Inc. Method for updating value in distributed shared virtual memory among interconnected computer nodes having page table with minimal processor involvement
US5561799A (en) 1993-06-17 1996-10-01 Sun Microsystems, Inc. Extensible file system which layers a new file system with an old file system to provide coherent file data
US5684977A (en) 1995-03-31 1997-11-04 Sun Microsystems, Inc. Writeback cancellation processing system for use in a packet switched cache coherent multiprocessor system
US6148377A (en) 1996-11-22 2000-11-14 Mangosoft Corporation Shared memory computer networks
US6151688A (en) 1997-02-21 2000-11-21 Novell, Inc. Resource management in a clustered computer system
US6175566B1 (en) 1996-09-11 2001-01-16 Electronics And Telecommunications Research Institute Broadcast transfer method for a hierarchical interconnection network with multiple tags
US6230240B1 (en) 1998-06-23 2001-05-08 Hewlett-Packard Company Storage management system and auto-RAID transaction manager for coherent memory map across hot plug interface
US6292705B1 (en) 1998-09-29 2001-09-18 Conexant Systems, Inc. Method and apparatus for address transfers, system serialization, and centralized cache and transaction control, in a symetric multiprocessor system
US6295571B1 (en) 1999-03-19 2001-09-25 Times N Systems, Inc. Shared memory apparatus and method for multiprocessor systems
WO2002078254A2 (en) 2001-03-26 2002-10-03 Intel Corporation Methodology and mechanism for remote key validation for ngio/infinibandtm applications
US20020191599A1 (en) 2001-03-30 2002-12-19 Balaji Parthasarathy Host- fabrec adapter having an efficient multi-tasking pipelined instruction execution micro-controller subsystem for NGIO/infinibandTM applications
US20030061417A1 (en) 2001-09-24 2003-03-27 International Business Machines Corporation Infiniband work and completion queue management via head and tail circular buffers with indirect work queue entries
US20030105914A1 (en) 2001-12-04 2003-06-05 Dearth Glenn A. Remote memory address translation
US20040064653A1 (en) * 2000-06-10 2004-04-01 Kourosh Gharachorloo System and method for limited fanout daisy chaining of cache invalidation requests in a shared-memory multiprocessor system
US6757790B2 (en) 2002-02-19 2004-06-29 Emc Corporation Distributed, scalable data storage facility with cache memory
US20060095690A1 (en) * 2004-10-29 2006-05-04 International Business Machines Corporation System, method, and storage medium for shared key index space for memory regions
US20060098649A1 (en) 2004-11-10 2006-05-11 Trusted Network Technologies, Inc. System, apparatuses, methods, and computer-readable media for determining security realm identity before permitting network connection
US7197647B1 (en) * 2002-09-30 2007-03-27 Carnegie Mellon University Method of securing programmable logic configuration data
US7218643B1 (en) 1998-09-30 2007-05-15 Kabushiki Kaisha Toshiba Relay device and communication device realizing contents protection procedure over networks
US20080010417A1 (en) 2006-04-28 2008-01-10 Zeffer Hakan E Read/Write Permission Bit Support for Efficient Hardware to Software Handover
US20080065835A1 (en) * 2006-09-11 2008-03-13 Sun Microsystems, Inc. Offloading operations for maintaining data coherence across a plurality of nodes
US20090037571A1 (en) 2007-08-02 2009-02-05 Erol Bozak Dynamic Agent Formation For Efficient Data Provisioning
US20090240664A1 (en) 2008-03-20 2009-09-24 Schooner Information Technology, Inc. Scalable Database Management Software on a Cluster of Nodes Using a Shared-Distributed Flash Memory
US20090240869A1 (en) * 2008-03-20 2009-09-24 Schooner Information Technology, Inc. Sharing Data Fabric for Coherent-Distributed Caching of Multi-Node Shared-Distributed Flash Memory
US20100030796A1 (en) 2008-07-31 2010-02-04 Microsoft Corporation Efficient column based data encoding for large-scale data storage
US7664938B1 (en) * 2004-01-07 2010-02-16 Xambala Corporation Semantic processor systems and methods
WO2010039895A2 (en) 2008-10-05 2010-04-08 Microsoft Corporation Efficient large-scale joining for querying of column based data encoded structures
US20120011398A1 (en) 2010-04-12 2012-01-12 Eckhardt Andrew D Failure recovery using consensus replication in a distributed flash memory system
EP2423843A1 (en) 2010-08-23 2012-02-29 Raytheon Company Secure field-programmable gate array (FPGA) architecture
US8255922B1 (en) 2006-01-09 2012-08-28 Oracle America, Inc. Mechanism for enabling multiple processes to share physical memory
US20130013843A1 (en) * 2011-07-07 2013-01-10 Zoran Radovic Efficient storage of memory version data
US20130036332A1 (en) * 2011-08-05 2013-02-07 Gove Darryl J Maximizing encodings of version control bits for memory corruption detection
US20130191330A1 (en) * 2008-08-25 2013-07-25 International Business Machines Corporation Reducing contention and messaging traffic in a distributed shared caching for clustered file systems
US8504791B2 (en) * 2007-01-26 2013-08-06 Hicamp Systems, Inc. Hierarchical immutable content-addressable memory coprocessor
US20130232344A1 (en) * 2010-12-17 2013-09-05 Simon P. Johnson Technique for supporting multiple secure enclaves
US20140095805A1 (en) 2012-10-02 2014-04-03 Oracle International Corporation Remote-key based memory buffer access control mechanism
US20140115283A1 (en) * 2012-10-23 2014-04-24 Oracle International Corporation Block memory engine with memory corruption detection
US20140181454A1 (en) 2012-12-20 2014-06-26 Oracle International Corporation Method and system for efficient memory region deallocation
US20140229440A1 (en) 2013-02-12 2014-08-14 Atlantis Computing, Inc. Method and apparatus for replicating virtual machine images using deduplication metadata
US20140279894A1 (en) 2013-03-14 2014-09-18 Oracle International Corporation Memory sharing across distributed nodes
US9052936B1 (en) 2011-08-10 2015-06-09 Nutanix, Inc. Method and system for communicating to a storage controller in a virtualization environment
US9083614B2 (en) 2012-10-15 2015-07-14 Oracle International Corporation System and method for supporting out-of-order message processing in a distributed data grid
US20150227414A1 (en) * 2012-08-31 2015-08-13 Pradeep Varma Systems And Methods Of Memory And Access Management

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6633891B1 (en) * 1998-11-24 2003-10-14 Oracle International Corporation Managing replacement of data in a cache on a node based on caches of other nodes
JP2004054906A (en) * 2003-05-21 2004-02-19 Hitachi Ltd Memory access method and computer system for executing the same
US8195891B2 (en) * 2009-03-30 2012-06-05 Intel Corporation Techniques to perform power fail-safe caching without atomic metadata
US8751736B2 (en) * 2011-08-02 2014-06-10 Oracle International Corporation Instructions to set and read memory version information

Patent Citations (49)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4817140A (en) 1986-11-05 1989-03-28 International Business Machines Corp. Software protection system using a single-key cryptosystem, a hardware-based authorization system and a secure coprocessor
US5133053A (en) 1987-02-13 1992-07-21 International Business Machines Corporation Interprocess communication queue location transparency
US5522045A (en) 1992-03-27 1996-05-28 Panasonic Technologies, Inc. Method for updating value in distributed shared virtual memory among interconnected computer nodes having page table with minimal processor involvement
US5561799A (en) 1993-06-17 1996-10-01 Sun Microsystems, Inc. Extensible file system which layers a new file system with an old file system to provide coherent file data
US5684977A (en) 1995-03-31 1997-11-04 Sun Microsystems, Inc. Writeback cancellation processing system for use in a packet switched cache coherent multiprocessor system
US6175566B1 (en) 1996-09-11 2001-01-16 Electronics And Telecommunications Research Institute Broadcast transfer method for a hierarchical interconnection network with multiple tags
US6148377A (en) 1996-11-22 2000-11-14 Mangosoft Corporation Shared memory computer networks
US6151688A (en) 1997-02-21 2000-11-21 Novell, Inc. Resource management in a clustered computer system
US6230240B1 (en) 1998-06-23 2001-05-08 Hewlett-Packard Company Storage management system and auto-RAID transaction manager for coherent memory map across hot plug interface
US6292705B1 (en) 1998-09-29 2001-09-18 Conexant Systems, Inc. Method and apparatus for address transfers, system serialization, and centralized cache and transaction control, in a symetric multiprocessor system
US7218643B1 (en) 1998-09-30 2007-05-15 Kabushiki Kaisha Toshiba Relay device and communication device realizing contents protection procedure over networks
US6295571B1 (en) 1999-03-19 2001-09-25 Times N Systems, Inc. Shared memory apparatus and method for multiprocessor systems
US20040064653A1 (en) * 2000-06-10 2004-04-01 Kourosh Gharachorloo System and method for limited fanout daisy chaining of cache invalidation requests in a shared-memory multiprocessor system
WO2002078254A2 (en) 2001-03-26 2002-10-03 Intel Corporation Methodology and mechanism for remote key validation for ngio/infinibandtm applications
US20020191599A1 (en) 2001-03-30 2002-12-19 Balaji Parthasarathy Host- fabrec adapter having an efficient multi-tasking pipelined instruction execution micro-controller subsystem for NGIO/infinibandTM applications
US20030061417A1 (en) 2001-09-24 2003-03-27 International Business Machines Corporation Infiniband work and completion queue management via head and tail circular buffers with indirect work queue entries
US20030105914A1 (en) 2001-12-04 2003-06-05 Dearth Glenn A. Remote memory address translation
US6757790B2 (en) 2002-02-19 2004-06-29 Emc Corporation Distributed, scalable data storage facility with cache memory
US7197647B1 (en) * 2002-09-30 2007-03-27 Carnegie Mellon University Method of securing programmable logic configuration data
US7664938B1 (en) * 2004-01-07 2010-02-16 Xambala Corporation Semantic processor systems and methods
US20060095690A1 (en) * 2004-10-29 2006-05-04 International Business Machines Corporation System, method, and storage medium for shared key index space for memory regions
US20060098649A1 (en) 2004-11-10 2006-05-11 Trusted Network Technologies, Inc. System, apparatuses, methods, and computer-readable media for determining security realm identity before permitting network connection
US8255922B1 (en) 2006-01-09 2012-08-28 Oracle America, Inc. Mechanism for enabling multiple processes to share physical memory
US20080010417A1 (en) 2006-04-28 2008-01-10 Zeffer Hakan E Read/Write Permission Bit Support for Efficient Hardware to Software Handover
US20080065835A1 (en) * 2006-09-11 2008-03-13 Sun Microsystems, Inc. Offloading operations for maintaining data coherence across a plurality of nodes
US8504791B2 (en) * 2007-01-26 2013-08-06 Hicamp Systems, Inc. Hierarchical immutable content-addressable memory coprocessor
US20090037571A1 (en) 2007-08-02 2009-02-05 Erol Bozak Dynamic Agent Formation For Efficient Data Provisioning
US20090240664A1 (en) 2008-03-20 2009-09-24 Schooner Information Technology, Inc. Scalable Database Management Software on a Cluster of Nodes Using a Shared-Distributed Flash Memory
US8732386B2 (en) 2008-03-20 2014-05-20 Sandisk Enterprise IP LLC. Sharing data fabric for coherent-distributed caching of multi-node shared-distributed flash memory
US20090240869A1 (en) * 2008-03-20 2009-09-24 Schooner Information Technology, Inc. Sharing Data Fabric for Coherent-Distributed Caching of Multi-Node Shared-Distributed Flash Memory
US20100030796A1 (en) 2008-07-31 2010-02-04 Microsoft Corporation Efficient column based data encoding for large-scale data storage
US20130191330A1 (en) * 2008-08-25 2013-07-25 International Business Machines Corporation Reducing contention and messaging traffic in a distributed shared caching for clustered file systems
WO2010039895A2 (en) 2008-10-05 2010-04-08 Microsoft Corporation Efficient large-scale joining for querying of column based data encoded structures
US20120011398A1 (en) 2010-04-12 2012-01-12 Eckhardt Andrew D Failure recovery using consensus replication in a distributed flash memory system
EP2423843A1 (en) 2010-08-23 2012-02-29 Raytheon Company Secure field-programmable gate array (FPGA) architecture
US20130232344A1 (en) * 2010-12-17 2013-09-05 Simon P. Johnson Technique for supporting multiple secure enclaves
US20130013843A1 (en) * 2011-07-07 2013-01-10 Zoran Radovic Efficient storage of memory version data
US20130036332A1 (en) * 2011-08-05 2013-02-07 Gove Darryl J Maximizing encodings of version control bits for memory corruption detection
US9052936B1 (en) 2011-08-10 2015-06-09 Nutanix, Inc. Method and system for communicating to a storage controller in a virtualization environment
US20150227414A1 (en) * 2012-08-31 2015-08-13 Pradeep Varma Systems And Methods Of Memory And Access Management
US20140095810A1 (en) 2012-10-02 2014-04-03 Oracle International Corporation Memory sharing across distributed nodes
US20140096145A1 (en) 2012-10-02 2014-04-03 Oracle International Corporation Hardware message queues for intra-cluster communication
US20140095651A1 (en) 2012-10-02 2014-04-03 Oracle International Corporation Memory Bus Protocol To Enable Clustering Between Nodes Of Distinct Physical Domain Address Spaces
US20140095805A1 (en) 2012-10-02 2014-04-03 Oracle International Corporation Remote-key based memory buffer access control mechanism
US9083614B2 (en) 2012-10-15 2015-07-14 Oracle International Corporation System and method for supporting out-of-order message processing in a distributed data grid
US20140115283A1 (en) * 2012-10-23 2014-04-24 Oracle International Corporation Block memory engine with memory corruption detection
US20140181454A1 (en) 2012-12-20 2014-06-26 Oracle International Corporation Method and system for efficient memory region deallocation
US20140229440A1 (en) 2013-02-12 2014-08-14 Atlantis Computing, Inc. Method and apparatus for replicating virtual machine images using deduplication metadata
US20140279894A1 (en) 2013-03-14 2014-09-18 Oracle International Corporation Memory sharing across distributed nodes

Non-Patent Citations (32)

* Cited by examiner, † Cited by third party
Title
Agarwal et al., "An Evaluation of Directory Schemes for Cache Coherence", Proceedings of the 15th Annual International Symposium on Computer Architecture, dated May 1988, 10 pages.
Brewer et al., "Remote Queues: Exposing Message Queues for Optimization and Atomicity", ACM, dated 1995, 12 pages.
Cockshott et al., "High-Performance Operations Using a Compressed Database Architecture", dated Aug. 12, 1998, 14 pages.
Franke et al., "Introduction to the wire-speed processor and architecture", IBM J. Res. & Dev., vol. 54, No. 1, Paper 3, dated Jan. 2010, 12 pages.
Gao et al., "Application-Transparent Checkpoint/Restart for MPI Programs over InfiniBand", dated 2006, 8 pages.
Hardavellas et al., "Software Cache Coherence with Memory Scaling", dated Apr. 16, 1998, 2 pages.
Kent, Christopher A., "Cache Coherence in Distributed Systems", Digital Western Research Laboratory, dated Dec. 1987, 90 pages.
Lee et al., "A Comprehensive Framework for Enhancing Security in InfiniBand Architecture", IEEE, vol. 18 No. 10, Oct. 2007, 14 pages.
Lenoski et al., "The Directory-Based Cache Coherence Protocol for the DASH Multiprocessor", IEEE, dated 1990, 12 pages.
Li et al., "Memory Coherence in Shared Virtual Memory Systems", ACM Transactions on Computer Systems, vol. 7, No. 4, dated Nov. 1989, 39 pages.
Loewenstein, U.S. Appl. No. 13/828,555, filed Mar. 14, 2013, Advisory Action, dated Apr. 21, 2017.
Loewenstein, U.S. Appl. No. 13/828,555, filed Mar. 14, 2013, Final Office Action, dated Feb. 9, 2017.
Loewenstein, U.S. Appl. No. 13/828,555, filed Mar. 14, 2013, Interview Summary, dated Apr. 18, 2017.
Loewenstein, U.S. Appl. No. 13/828,555, filed Mar. 14, 2013, Office Action, dated Sep. 8, 2017.
Loewenstein, U.S. Appl. No. 13/828,983, filed Mar. 14, 2013, Notice of Allowance, dated Jan. 27, 2017.
Ming et al., "An Efficient Attribute Based Encryption Scheme with Revocation for Outsourced Data Sharing Control", 2011 Internat. Conference, Oct. 21, 2011, pp. 516-520.
U.S. Appl. No. 13/828,555, filed Mar. 14, 2013, Final Office Action, dated Jun. 30, 2016.
U.S. Appl. No. 13/828,555, filed Mar. 14, 2013, Interview Summary, dated Apr. 28, 2016.
U.S. Appl. No. 13/828,555, filed Mar. 14, 2013, Office Action, dated Feb. 4, 2016.
U.S. Appl. No. 13/828,983, filed Mar. 14, 2013, Advisory Action, dated Jul. 22, 2016.
U.S. Appl. No. 13/828,983, filed Mar. 14, 2013, Final Office Action, dated May 16, 2016.
U.S. Appl. No. 13/828,983, filed Mar. 14, 2013, Interview Summary, dated Jun. 23, 2016.
U.S. Appl. No. 13/828,983, filed Mar. 14, 2013, Notice of Allowance, dated Oct. 11, 2016.
U.S. Appl. No. 13/828,983, filed Mar. 14, 2013, Office Action, dated Oct. 1, 2015.
U.S. Appl. No. 13/838,542, filed Mar. 15, 2013, Notice of Allowance, dated Apr. 7, 2016.
U.S. Appl. No. 13/839,525, filed Mar. 15, 2013, Notice of Allowance, dated Apr. 12, 2016.
Von Eicken et al., "Active Messages: A Mechanism for Integrated Communication and Computation", ACM, dated 1992, 12 pages.
Wang et al., "Hierarchical Attribute-Based Encryption and Scalable User Revocation for Sharing Data in Cloud Servers", Computeres & Security 2011, vol. 30, No. 5, May 14, 2011, pp. 320-331.
Wang et al., "HyperSafe: A Lightweight Approach to Provide Lifetime Hypervisor Control-Flow Integrity", IEEE, dated 2010, 16 pages.
Yu et al., "Attribute Based Data Sharing with Attribute Revocation", Proceedings of the 5th ACM Symposium, Apr. 13, 2010, pp. 261-271, New York, USA.
Zeffer et al., "TMA: A Trap-Based Memory Architecture", in ICS, Proceedings of the 20th Annual International Conference on Supercomputing, dated 2006, pp. 259-268.
Zhang, Long, "Attribute Based Encryption Made Practical", A Thesis Submitted in Partial Fulfillment of the Requirements for the Degree of Master of Science, dated Apr. 2012, 62 pages.

Also Published As

Publication number Publication date
JP6588080B2 (en) 2019-10-09
EP3123331A1 (en) 2017-02-01
CN106164870A (en) 2016-11-23
EP3123331B1 (en) 2021-08-25
JP2017510925A (en) 2017-04-13
WO2015148100A1 (en) 2015-10-01
US20150278103A1 (en) 2015-10-01
CN106164870B (en) 2019-05-28

Similar Documents

Publication Publication Date Title
US9898414B2 (en) Memory corruption detection support for distributed shared memory applications
US20150089137A1 (en) Managing Mirror Copies without Blocking Application I/O
US8429134B2 (en) Distributed database recovery
US7827374B2 (en) Relocating page tables
US10599535B2 (en) Restoring distributed shared memory data consistency within a recovery process from a cluster node failure
US7490214B2 (en) Relocating data from a source page to a target page by marking transaction table entries valid or invalid based on mappings to virtual pages in kernel virtual memory address space
US7721068B2 (en) Relocation of active DMA pages
US7627614B2 (en) Lost write detection and repair
US20180060318A1 (en) Coordinated hash table indexes to facilitate reducing database reconfiguration time
US9977760B1 (en) Accessing data on distributed storage systems
US10127054B2 (en) Bootstrapping server using configuration file stored in server-managed storage
US10649981B2 (en) Direct access to object state in a shared log
US10635541B2 (en) Fine-grained conflict resolution in a shared log
US8898413B2 (en) Point-in-time copying of virtual storage
US8745340B2 (en) Reduction of communication and efficient failover processing in distributed shared memory-based application
US10613774B2 (en) Partitioned memory with locally aggregated copy pools
US11960742B1 (en) High-performance, block-level fail atomicity on byte-level non-volatile media
CN114365109A (en) RDMA-enabled key-value store
US8892838B2 (en) Point-in-time copying of virtual storage and point-in-time dumping
US20150135004A1 (en) Data allocation method and information processing system

Legal Events

Date Code Title Description
AS Assignment

Owner name: ORACLE INTERNATIONAL CORPORATION, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:RADOVIC, ZORAN;LOEWENSTEIN, PAUL;JOHNSON, JOHN G.;SIGNING DATES FROM 20141028 TO 20141029;REEL/FRAME:034085/0165

STCF Information on status: patent grant

Free format text: PATENTED CASE

CC Certificate of correction
MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4