
US10855766B2 - Networking switch with object storage system intelligence - Google Patents


Info

Publication number
US10855766B2
Authority
US
United States
Prior art keywords
storage system
storage
networking switch
replicas
switch
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US15/718,756
Other versions
US20190098085A1
Inventor
Yi Zou
Arun Raghunath
Anjaneya Reddy Chagam Reddy
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Intel Corp
Original Assignee
Intel Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Intel Corp filed Critical Intel Corp
Priority to US15/718,756
Assigned to INTEL CORPORATION. Assignors: CHAGAM REDDY, ANJANEYA REDDY; RAGHUNATH, ARUN; ZOU, YI
Publication of US20190098085A1
Application granted granted Critical
Publication of US10855766B2
Legal status: Active
Adjusted expiration


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • H04L67/1097 Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00 Packet switching elements
    • H04L49/15 Interconnection of switching modules
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00 Packet switching elements
    • H04L49/35 Switches specially adapted for specific applications
    • H04L49/356 Switches specially adapted for specific applications for storage area networks
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • H04L67/1095 Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/10 Flow control; Congestion control
    • H04L47/24 Traffic characterised by specific attributes, e.g. priority or QoS
    • H04L47/2466 Traffic characterised by specific attributes, e.g. priority or QoS using signalling traffic
    • H04L67/32
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/50 Network services
    • H04L67/60 Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources

Definitions

  • Each of the touchscreen display 603, the communication interfaces 604-607, the GPS interface 608, the sensors 609, the camera(s) 610, and the speaker/microphone codec 613, 614 can be viewed as various forms of I/O (input and/or output) relative to the overall computing system including, where appropriate, an integrated peripheral device as well (e.g., the one or more cameras 610).
  • various ones of these I/O components may be integrated on the applications processor/multi-core processor 650 or may be located off the die or outside the package of the applications processor/multi-core processor 650 .
  • the computing system may also include a system memory (also referred to as main memory) having multiple levels.
  • a first (faster) system memory level may be implemented with DRAM and a second (slower) system memory level may be implemented with an emerging non-volatile memory (such as non-volatile memory whose storage cells are composed of chalcogenide, resistive memory (RRAM), ferroelectric memory (FeRAM), etc.).
  • Emerging non-volatile memory technologies have faster access times than traditional FLASH and can therefore be used in a system memory role rather than being relegated solely to mass storage.
  • Software and/or firmware executing on a general purpose CPU core (or other functional block having an instruction execution pipeline to execute program code) of a processor may perform any of the functions described above.
  • Embodiments of the invention may include various processes as set forth above.
  • the processes may be embodied in machine-executable instructions.
  • the instructions can be used to cause a general-purpose or special-purpose processor to perform certain processes.
  • these processes may be performed by specific hardware components that contain hardwired logic for performing the processes, or by any combination of programmed computer components and custom hardware components.
  • Elements of the present invention may also be provided as a machine-readable medium for storing the machine-executable instructions.
  • the machine-readable medium may include, but is not limited to, floppy diskettes, optical disks, CD-ROMs, and magneto-optical disks, FLASH memory, ROMs, RAMs, EPROMs, EEPROMs, magnetic or optical cards, propagation media or other type of media/machine-readable medium suitable for storing electronic instructions.
  • the present invention may be downloaded as a computer program which may be transferred from a remote computer (e.g., a server) to a requesting computer (e.g., a client) by way of data signals embodied in a carrier wave or other propagation medium via a communication link (e.g., a modem or network connection).

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

A method performed by a networking switch in an object storage system. The method includes receiving a first packet from a network comprising an object ID and a data object. The method includes generating a replica for the data object. The method includes generating an object ID for the replica of the data object. The method includes determining a destination storage node for the replica of the data object. The method includes sending a second packet from the networking switch to the destination storage node. The second packet includes the object ID for the replica of the data object and the replica of the data object.

Description

FIELD OF INVENTION
The field of invention pertains generally to the computing sciences, and, more specifically, to a networking switch with object storage system intelligence.
BACKGROUND
Large enterprises typically process a tremendous amount of information in real-time. Part of the processing includes storing data in a storage system and retrieving data that is stored in the storage system. Large enterprises generally require that the storage system be reliable (not lose data) and have high performance (store/retrieve data quickly). As such, storage system designers are highly motivated to improve storage system reliability and performance.
FIGURES
A better understanding of the present invention can be obtained from the following detailed description in conjunction with the following drawings, in which:
FIG. 1a shows a prior art object storage write process;
FIG. 1b shows a prior art object storage acknowledgment process;
FIG. 2a shows a first improved object storage write process;
FIG. 2b shows a first improved object storage acknowledgment process;
FIG. 3a shows a second improved object storage write process;
FIG. 3b shows a second improved object storage acknowledgment process;
FIG. 4 shows an improved networking switch;
FIG. 5 shows a methodology;
FIG. 6 shows a computing system.
DETAILED DESCRIPTION
FIG. 1a shows a prior art data center. As is known in the art, a typical data center includes one or more “racks” 101_1, 101_2, . . . 101_N where each rack includes one or more servers 103 and a network router or switch 104 (hereinafter referred to as “switch”). The server(s) 103 and switch 104 of a same rack 101 are typically coupled proximately to one another, e.g., in a same mechanical fixture or frame in which server(s) and switch(es) may slide on rails on the sidewalls of the frame and mount to the back of the frame so that the installed equipment in the frame takes on the appearance of, e.g., a bookcase.
The proximate coupling of the server(s) 103 and switch 104 in a same rack 101 provides for various space/distance efficiencies such as, e.g., forming interconnections between the server(s) 103 and the switch 104 with copper wires that plug into a patch panel of the rack. Other possible efficiencies include cooling the server(s) 103 and switch 104 with a same fan system that is integrated in the rack, powering the server(s) and switch from a same electrical outlet, etc.
The rack's switch 104 provides a communication platform for the rack's server(s) to communicate with one another and to connect to a larger network 105 of the data center that, e.g., other racks are coupled to. The switch 104 is commonly referred to as the "top of rack" (TOR) because its traditional location is the top shelf of the rack 101. Here, the TORs 104_1 through 104_N of their respective racks 101_1 through 101_N can be viewed as the gateways to the data center's network 105.
An EOR unit 107 helps economize the configuration, management and networking of multiple racks 101_1 through 101_N. Here, the EOR unit 107 is typically a switch that provides communication services for multiple racks. That is, servers within different racks 101_1 through 101_N, where the different racks are associated with a same EOR 107, can communicate with one another through the EOR switch 107. Additionally, the EOR 107 provides the servers of its constituent racks 101_1 through 101_N with access to, e.g., the data center's backbone network 106 to which multiple EORs and their corresponding racks are coupled.
The term "end of row" has traditionally been used to refer to an EOR unit 107 (because the EOR was traditionally an end rack of a row of racks). However, more generally, an EOR unit 107 is a unit that is associated with multiple racks 101_1 through 101_N to provide communication services for the multiple racks 101_1 through 101_N (e.g., communication between racks 101_1 through 101_N and access to the data center's backbone network 106).
The servers 103 of the data center typically include multiple CPUs capable of executing the respective program code of various kinds of software applications. One type of software application is an object storage software application. An object storage software application may include client access nodes (or client nodes) “CN” and storage nodes “SN”. The client access nodes act as gateways to the storage facilities for users of the object storage system (e.g., other application software programs that have storage needs, individuals, etc.). As such, the client access nodes typically accept data objects from the users that are to be written into the storage system and provide data objects to the users that were read from the storage system.
The storage nodes of the object storage application are responsible for storing data objects in the physical storage resources associated with the server that they execute on (e.g., non-volatile storage of the server). Object storage systems identify individual data items to be stored as objects, where each object typically has an identifier of the object (object ID). Commonly, a hashing algorithm is executed on the object ID by the object storage system to identify which storage node is responsible for storing the object. For redundancy purposes, to protect against failure of a server that stores an object, object storage systems also commonly replicate an object into multiple copies and store the different copies on different servers. Thus, should a server that stores a copy of the object fail, there are still other copies of the object that are available.
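As a concrete illustration of the hashing step just described, the following is a minimal Python sketch, not the patent's algorithm: the node list, the replica count and the rendezvous-hash style scheme are assumptions made for the example. The point is that any party running the same hash over the same object ID arrives at the same set of destination storage nodes.

```python
# Minimal sketch of hashed object placement (assumed scheme: rendezvous hashing).
# STORAGE_NODES and REPLICA_COUNT are illustrative values, not from the patent.
import hashlib

STORAGE_NODES = ["10.0.1.11", "10.0.1.12", "10.0.2.11", "10.0.2.12", "10.0.3.11"]
REPLICA_COUNT = 3  # system configured to keep three copies of every object

def placement(object_id: str, replicas: int = REPLICA_COUNT) -> list[str]:
    """Deterministically map an object ID to the storage nodes holding its replicas."""
    ranked = sorted(
        STORAGE_NODES,
        key=lambda node: hashlib.sha256(f"{object_id}:{node}".encode()).hexdigest(),
    )
    return ranked[:replicas]

# The same call made later (e.g., after a server failure) returns the same node
# list, so surviving replicas can always be located again.
print(placement("bucket/photo-0001"))
```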
A problem with replication is the overhead that is presented to the data center network 105, 106. Here, typically, in order to store an object into the storage system, not only is each copy of the object passed over the data center network 105, 106, but also a full protocol exchange (e.g., acknowledgements (ACKs)) is passed over the network 105, 106 for each copy.
FIG. 1a depicts an exemplary prior art object write scenario. For ease of discussion it is assumed that there are only three copies of a data object stored in the system to effect replication of the stored data object. Also for ease of explanation, all three copies are to be stored in racks that are serviced by one EOR 107 (in actual circumstances object copies may be stored in multiple groups of racks that have multiple corresponding EORs).
As observed in FIG. 1a, the client node 109 that receives the data object with the corresponding write command replicates the object into three separate copies and sends the three different objects into the network (through its rack's switch 104_N) with three associated write requests 1 (one for each of the data object copies).
Here, object ID generation intelligence 113 within the client node 109 determines a same object ID to be assigned to each of the three different objects, respectively. A hashing function 114 executed by the client node 109 on the object ID is then able to convert the single object ID into the identity of three different storage nodes (e.g., by identifying their corresponding IP addresses). Here, one copy (or “replica”) of the object is to be stored on each of the three different storage nodes. Thus, should one of the objects become lost (e.g., because of the failure of the server that hosts one of the storage nodes), the client node 109 can re-execute the hashing function on the object ID to identify where other copies of the object are stored and access a surviving copy of the object.
The three storage requests 1 are forwarded by the TOR 104_N of the client node's 109 rack 101_N to the EOR unit 107. Again, for ease of example it is assumed that all three object copies are to be stored in one of the racks associated with the EOR unit 107. The respective identifier of each destination storage node (e.g., its IP address) for each of the objects is essentially used by the EOR's networking switch 107 to route 3a, 3b the three object copies with their corresponding write requests to their appropriate racks. For the sake of example, two of the copies are to be stored in rack 101_1 and one of the copies is to be stored in rack 101_2. As such, the EOR unit routes 3a the two copies whose respective destinations correspond to storage nodes within rack 101_1 to rack 101_1. Similarly, the EOR unit 107 routes 3b the single copy and its write request whose respective destination corresponds to a storage node within rack 101_2 to rack 101_2.
Rack 101_1 receives the two copies 3a, its local TOR switch 104_1 forwards each copy to its respective object storage node 110, 111 internally within rack 101_1, and each object storage node stores its respective object. Similarly, the TOR switch 104_2 within rack 101_2 routes 4b its received copy of the object to the proper destination object storage node 112 within rack 101_2. The object storage node 112 then stores the object.
Here, in the preceding example, note that the client node 109 includes object ID generation intelligence 113 and hashing intelligence 114 that converts an object's object ID into three destination storage node identifiers (e.g., three IP addresses). The storage requests 1 each include the identity of the destination storage node that is to store the object, appended to the request (e.g., by way of its IP address). The network switches 104, 107 are configured with internal routing/switching tables (look-up tables or other information) that convert the identity of a particular storage node, or its IP address, to an appropriate output port of the switch. By so doing, the switches 104, 107 of the network are able to route any object to its correct destination storage node.
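The prior-art division of labor can be sketched as follows; this is an illustrative Python model rather than the patent's implementation, and the class name, the place_replicas callable and the port table are assumed names. The client alone turns an object ID into destination nodes, while a switch only maps a destination address to an output port.

```python
# Hedged sketch of the FIG. 1a/1b division of labor. All names are illustrative.
from dataclasses import dataclass
from typing import Callable

@dataclass
class WriteRequest:
    object_id: str
    destination: str   # storage node address appended by the client (requests 1)
    payload: bytes

def client_write(payload: bytes, object_id: str,
                 place_replicas: Callable[[str], list[str]]) -> list[WriteRequest]:
    # Intelligence 113/114 lives only in the client: the hash of the object ID
    # yields one destination per replica, and one end-to-end request per replica.
    return [WriteRequest(object_id, node, payload) for node in place_replicas(object_id)]

def prior_art_switch_forward(request: WriteRequest, port_table: dict[str, int]) -> int:
    # Switches 104/107 only understand which address maps to which output port.
    return port_table[request.destination]
```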
The prior art write process of FIG. 1a does not deem the write process complete until the client node 109 that originally submitted the storage requests 1 for three copies is informed that all three objects have been successfully stored in their respective storage nodes. FIG. 1b shows a prior art acknowledgement process. Here, after storage nodes 110, 111 within rack 101_1 complete the storage of their respective objects, they each send 1a an acknowledgement (ACK) back to the client node 109 that originally submitted the storage request for the three object copies. Likewise, the object storage node 112 within rack 101_2 sends 1b an ACK back to the client node 109 upon successful storage of its object.
Each of the ACKs identifies the client node 109 as the destination. The switches 104, 107 of the network 105, 106 route 2a, 2b, 3, 4 the ACKs back to the client node 109. When the client node 109 receives 4 the three ACKs, it understands that three copies of the object have been successfully stored in the system.
With respect to the storage of the object copies, the switches 104, 107 can be seen as having no storage system intelligence of their own. That is, the network switches 104, 107 are configured only to understand which network addresses correspond to which network destinations: as discussed above, the client nodes of the storage application include object_ID generation intelligence 113 and object_ID to storage node assignment intelligence 114, but the networking switches 104, 107 do not include any such intelligence.
As such, only the client nodes of the object storage system are capable of generating specific object_IDs for specific objects and copies of objects. Additionally, only client nodes are capable of determining which storage nodes the replicas of an object having a particular object_ID should be assigned to.
As a consequence of the lack of storage system intelligence in the network 105, 106, there is complete end-to-end routing traffic flow for each object copy. That is, each copy of the object is provided to the network 105, 106 by the client 109 and forwarded through the network 105, 106 until it reaches its destination storage node.
Likewise, there are three separate ACK flows, one originating from each destination storage node, each of which progresses through the network 105, 106 and terminates at the client node 109. Because there is an end-to-end traffic flow for each copy of the object (forwards to store the object and backwards to acknowledge the object), the offered load that is presented to the network 105, 106 scales in direct proportion to the number of replicas that the storage system is configured to store per object.
That is, for instance, a storage system that is configured to keep six replicas will generate twice as much internal network traffic as a system that is configured to keep only three replicas. Unfortunately, the scaling of internal network traffic with the number of replicas creates a tradeoff between the reliability of the storage system and the performance of the storage system.
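A quick back-of-the-envelope count makes the linear scaling concrete (the per-direction hop count below is an assumed illustrative value, not a figure from the patent):

```python
# Illustrative count only: one store traversal and one ACK traversal per replica.
def traversals_per_write(replicas: int, hops_each_way: int = 3) -> int:
    return replicas * hops_each_way * 2

print(traversals_per_write(3), traversals_per_write(6))  # 18 vs. 36: doubling replicas doubles traffic
```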
Here, the storage system is more reliable with an increased number of replicas (the likelihood that data will be truly lost drops with each replica). However, the storage system will be observed by its users to have slower read/write storage access times as the internal traffic of the network increases. Thus, as the number of replicas increases, the storage system will become slower from the perspective of the users. This is particularly unfortunate for large mission critical environments (e.g., the core storage system of a large corporation) which highly desire both reliability and performance.
FIGS. 2a and 2b show an improved approach in which the object ID generation intelligence 213_1 through 213_N and/or object_ID assignment to storage node intelligence 214_1 through 214_N that has traditionally been reserved for client nodes of the storage application (to generate object IDs and/or determine which specific object IDs are to be stored onto which specific object storage nodes) is embedded within the TOR switching nodes 204_1 through 204_N of the racks 201_1 through 201_N of the network. By so doing, as will be described in detail immediately below, the internal traffic of the network 205, 206 can be greatly reduced because far fewer per-copy end-to-end traffic flows traverse the network.
Referring to FIG. 2a, according to a first embodiment, e.g., in order to avoid changes to legacy client node software, the client node generates the object ID and sends a separate request for each of the three different copies of an object as described above with respect to FIG. 1a. The three separate requests are forwarded to the EOR unit 207. However, in the implementation of FIG. 2a, the EOR unit understands that the TOR switches 204_1 through 204_N of the racks 201_1 through 201_N have been upgraded to have storage system intelligence 213, 214 that has traditionally been reserved for the client nodes of the storage application (to generate object IDs and/or determine which specific object IDs are to be stored onto which specific object storage nodes).
As such, rather than simply forwarding all three copies of the object to their appropriate destinations (a blind look-up and forward of all input object requests as in FIG. 1a), the EOR 207 will recognize that two of the new requests have a same next hop address (the address of TOR switch 204_1) and will consolidate the two requests into a single request 3a with a single copy of the object. As such, comparing FIG. 2a with FIG. 1a, only a single request and object copy are sent to TOR switch 204_1 in the improved approach of FIG. 2a whereas two requests and object copies are sent to the TOR switch 104_1 in the traditional approach of FIG. 1a. The single forwarded request 3a of FIG. 2a includes the object ID for the object and a single copy of the object. A single request 3b is also sent to TOR switch 204_2 of rack 201_2 in both the approach of FIG. 2a and the approach of FIG. 1a.
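The consolidation step can be sketched as follows. This is an illustrative Python model under assumed names; the request structure and the way a next-hop TOR is derived from a storage node address are assumptions, not the patent's wire format.

```python
# Sketch of the FIG. 2a EOR consolidation: requests whose destinations share a
# next-hop TOR collapse into one request carrying the object ID and one copy.
from dataclasses import dataclass

@dataclass
class WriteRequest:
    object_id: str
    destination: str   # storage node address chosen by the (legacy) client
    payload: bytes

def next_hop_tor(storage_node: str) -> str:
    # Assumption for the sketch: the rack, and hence the TOR, is encoded in the
    # address prefix (e.g., "10.0.1.x" -> rack 1's TOR).
    return "tor-" + storage_node.rsplit(".", 1)[0]

def eor_consolidate(requests: list[WriteRequest]) -> dict[str, WriteRequest]:
    """Return at most one request (3a, 3b, ...) per next-hop TOR switch."""
    per_tor: dict[str, WriteRequest] = {}
    for req in requests:
        tor = next_hop_tor(req.destination)
        # Keep one physical copy per TOR; the TOR re-derives the full destination
        # list from the object ID and re-replicates inside its own rack.
        per_tor.setdefault(tor, WriteRequest(req.object_id, tor, req.payload))
    return per_tor
```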
In response to receiving the single storage request, the TOR switch 204_1 of rack 201_1, having the enhanced storage intelligence 214_1, will perform a hash on the object ID of the single request 3a to identify the destination storage nodes (e.g., the IP addresses) for the object copies. After the hash is performed, for a redundancy of 3, the TOR switch 204_1 will identify three destination storage nodes. From the destination storage node identities, the TOR switch 204_1 will recognize that two of the destination storage nodes are within the TOR switch's own rack 201_1. The TOR switch 204_1 will then make a copy of the object that was included in the single storage request 3a it received and forward first and second copies of the object to the first and second destination storage nodes 210, 211 within rack 201_1, respectively.
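A minimal sketch of that TOR-side fan-out follows; the placement hash and the helper names carry over the assumptions of the earlier sketches and are not the patent's implementation.

```python
# Sketch of the FIG. 2a TOR fan-out: re-hash the object ID, keep the destinations
# that live in this rack, and duplicate the single received copy locally.
import hashlib

def placement(object_id: str, all_nodes: list[str], replicas: int = 3) -> list[str]:
    return sorted(
        all_nodes,
        key=lambda n: hashlib.sha256(f"{object_id}:{n}".encode()).hexdigest(),
    )[:replicas]

def tor_fan_out(object_id: str, payload: bytes,
                all_nodes: list[str], local_nodes: set[str]) -> list[tuple[str, bytes]]:
    """Return (storage node, object copy) pairs for the nodes inside this rack."""
    local_targets = [n for n in placement(object_id, all_nodes) if n in local_nodes]
    # Only one physical copy arrived in request 3a; the TOR makes the extras.
    return [(node, payload) for node in local_targets]
```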
The TOR switch 204_2 associated with rack 201_2 will also perform a hash on the primary object ID from the single request 3b it received to generate three destination storage node identifiers (e.g., three IP addresses). Of the three, TOR switch 204_2 will recognize that only one of these corresponds to a storage node within rack 201_2. As such, no additional copies of the object will be made by TOR switch 204_2 (it will forward the single copy it received to destination storage node 212 for storage).
For the acknowledgements, referring to FIG. 2b, TOR switch 204_1 can combine the two ACKs 1a from storage nodes 210, 211 into a single ACK for both objects that were stored in rack 201_1 (the single ACK may include the object ID and information that verifies the successful storage of two objects). The single ACK is then sent 2a to the EOR unit 207. TOR switch 204_2 will also send 2b a single ACK to EOR unit 207 (which also includes the object ID and confirms the successful storage of one object). The EOR unit 207 forwards both ACKs to the TOR switch 204_N in the rack 201_N that the client node 209 is within. The TOR switch within rack 201_N, having enhanced functionality, is able to split the ACK from TOR switch 204_1 into two separate ACKs. As such, the legacy client 209 will receive three ACKs and will recognize successful storage of three copies of the object. Thus, additional storage system intelligence that is included in switch 204_N is acknowledgment processing.
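The consolidate-and-split acknowledgement handling can be sketched like this; the ACK record carrying the object ID plus a stored-copy count is an assumed format used only for illustration.

```python
# Sketch of the FIG. 2b ACK handling under an assumed ACK record format.
from dataclasses import dataclass

@dataclass
class Ack:
    object_id: str
    copies_stored: int = 1

def consolidate_acks(acks: list[Ack]) -> Ack:
    """TOR 204_1 / 204_2: fold per-storage-node ACKs into one ACK for the rack."""
    assert len({a.object_id for a in acks}) == 1, "ACKs must refer to the same object"
    return Ack(acks[0].object_id, sum(a.copies_stored for a in acks))

def split_ack(ack: Ack) -> list[Ack]:
    """TOR 204_N: expand a combined ACK so a legacy client sees one ACK per replica."""
    return [Ack(ack.object_id, 1) for _ in range(ack.copies_stored)]
```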
In an alternate embodiment, the client understands that the network includes storage system functionality. As such, referring to FIG. 2a , the client 209 issues only one storage request to TOR switch 204_N and TOR switch 204_N performs object ID generation and object ID to storage node correlation with intelligence 213_N and 214_N. TOR switch 204_N can then forward the three requests to the EOR switch 207. The rest of the storage process transpires as described above with respect to FIG. 2a . With respect to the acknowledgment process, switch 204_N can also accumulate the received ACKs and send a single, final ACK to the client 209 when both acknowledgements 3 of FIG. 2b have been received. Thus, again, additional storage system intelligence that is included in switch 204_N is acknowledgement processing. It is pertinent to point out that a single TOR switch as described herein may include all forms of TOR switch intelligence (or some subset thereof).
In the specific embodiments discussed above with respect to FIG. 2a , only the TOR switches 204_1 through 204_N have embedded storage system intelligence (the EOR switch knows that the TOR switches have the embedded storage intelligence but does not have any such intelligence itself).
FIG. 3a, by contrast, shows another object storage write process embodiment in which the EOR switch 307 also includes storage system intelligence 313_N+1, 314_N+1 (along with the TOR switches). In this case, if the client 309 is upgraded to know that the network 305, 306 includes storage system intelligence, the client 309 can send only one storage request 1 to the TOR switch 304_N rather than three storage requests as was performed by the client in the embodiment of FIG. 2a. The TOR switch 304_N may generate an object ID for the object and send a single request that includes the object and the object ID to the EOR 307. Additionally, because the EOR switch 307 also has storage system intelligence 313_N+1, 314_N+1, TOR switch 304_N need only forward 2 a single write request and object to the EOR switch 307. In this case, object ID generation intelligence 313_N+1 of the EOR 307 may generate an object ID for the object.
After receipt of the single request with the object and having possession of an object_ID (whether received or locally generated), the object ID to storage node assignment intelligence 314_N+1 of the EOR 307 will perform a hash on the object_ID which will generate three destination storage node identifiers (e.g., three IP addresses). Moreover, the EOR 307 will recognize that two of these storage nodes 310, 311 map to a single rack 301_1. As such, the EOR 307 can consolidate the request for these two storage nodes 310, 311 into a single storage request and corresponding object that is sent 3a to TOR switch 304_1.
In a first embodiment, the single request 3a only includes the object ID (it does not identify the destinations 310, 311 for the pair of objects that are to be stored in rack 301_1) and the TOR switch 304_1 of rack 301_1 performs the same procedures described above with respect to FIG. 2a (switch 304_1 generates destination addresses for the copies from the object ID in the request). In a second embodiment, the single request 3a identifies both destination nodes 310, 311 and the TOR switch 304_1 duplicates the object and forwards the object copies to both destinations 310, 311 for storage.
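Those two request formats can be sketched together; the request fields and the helper callable are assumed for illustration. The TOR uses explicit destinations when the EOR supplies them and otherwise falls back to hashing the object ID as in FIG. 2a.

```python
# Sketch of the two FIG. 3a request variants, under an assumed request shape.
from typing import Callable, Optional

def tor_handle_request(object_id: str, payload: bytes,
                       explicit_destinations: Optional[list[str]],
                       derive_local_destinations: Callable[[str], list[str]]) -> list[tuple[str, bytes]]:
    if explicit_destinations is not None:
        targets = explicit_destinations                  # second embodiment: EOR named 310, 311
    else:
        targets = derive_local_destinations(object_id)   # first embodiment: hash as in FIG. 2a
    # In either case the TOR duplicates the one received copy per local target.
    return [(node, payload) for node in targets]
```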
The EOR unit 307 also sends a second request 3b with a corresponding copy of the object to rack 301_2. The request may conform to any of the embodiments described immediately above, resulting in the storage of the single copy in the appropriate destination storage node 312 within rack 301_2.
With respect to the acknowledgements, referring to FIG. 3b, the acknowledgements may reduce to: 1) a single ACK being sent from TOR switch 304_1 to EOR switch 307 that confirms the storage of both copies of the object in rack 301_1; 2) a single ACK being sent from TOR switch 304_2 to EOR switch 307 that confirms the storage of the single copy of the object in rack 301_2; and, 3) a single ACK being sent from the EOR unit 307 to TOR switch 304_N in rack 301_N that confirms the storage of all three objects in the appropriate storage nodes (the EOR includes ACK processing intelligence that can accumulate the ACKs from the racks so that only one ACK is sent toward the client). The TOR switch 304_N can then send a single confirmation to the client 309 or three separate acknowledgements to the client 309 depending on whether the client is a legacy client or not.
FIG. 4 shows an embodiment of a networking switch 400 that corresponds to either or both of the EOR and TOR units discussed at length above. As observed in FIG. 4, the network switch 400 includes a number of ingress ports 401 and egress ports 402. The ingress ports 401 receive input traffic and the egress ports 402 transmit output traffic. A switch core 403 is coupled between the ingress ports 401 and the egress ports 402. The switch core 403 can be implemented with custom dedicated hardware logic circuitry (e.g., application specific integrated circuit (ASIC) logic circuitry, or programmable logic circuitry such as field programmable gate array (FPGA) logic circuitry, programmable logic device (PLD) logic circuitry or programmable logic array (PLA) logic circuitry), logic circuitry that executes program code that is written to perform network switching and/or routing tasks (e.g., a microprocessor, one or more processing cores of a multi-core processor, etc.), or any combination of these. Programmable logic circuits may be particularly useful in the case of software defined networking and/or software defined storage implementations.
The networking switch 400 also includes object storage intelligence 404, 405, 406 as described above with respect to FIGS. 2a, 2b, 3a and 3b. Here, object storage intelligence 404 is able to generate object IDs for replica objects, e.g., by performing a hash function on an object's primary ID. Object storage intelligence 405 is able to determine an object's storage node from that object's object ID. Object storage intelligence 406 is able to perform various object storage system acknowledgement tasks, such as accumulating multiple received acknowledgements and consolidating them into a single acknowledgement, or splitting a combined acknowledgement into separate acknowledgements. Any of the object storage intelligence 404, 405, 406 may be implemented with any of the types of logic circuitry described in the preceding paragraph, or a combination thereof. In the case of logic circuitry that executes program code, the program code would be written to perform some object storage intelligence task.
In the improved flows of FIGS. 2a, 2b, 3a and 3b discussed above, packets are generally sent over the network 205, 206/305, 306 between devices. The payloads of these packets may contain data objects and/or object identifiers of the object storage system (in the storage direction) or acknowledgements of the object storage system (in the acknowledgement direction). The packets may also contain, in their header or payload, information that identifies the packet as an object storage packet and/or indicates which object storage intelligence 404, 405, 406 is to be executed by the networking switch 400 in order to process the packet according to some object storage system related task.
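For illustration only, one way such a marker could be carried is sketched below; the field layout, magic value and task codes are assumptions for this example and do not describe an actual wire format of the disclosed system.

import struct

TASK_GENERATE_IDS = 1         # replicate the object and generate replica object IDs (404)
TASK_RESOLVE_DESTINATION = 2  # hash the object ID to destination storage node(s) (405)
TASK_CONSOLIDATE_ACK = 3      # accumulate or split acknowledgements (406)

HEADER_FMT = "!HBB"           # magic, task code, replica count (network byte order)
OBJ_STORAGE_MAGIC = 0x0B5E

def build_packet(task, replicas, object_id, payload):
    # Prepend the 4-byte marker header to the object ID and object payload.
    return struct.pack(HEADER_FMT, OBJ_STORAGE_MAGIC, task, replicas) + object_id + payload

def parse_header(packet):
    # Returns (task, replica count) for object storage packets, or None for
    # ordinary traffic that the switch should simply forward.
    magic, task, replicas = struct.unpack(HEADER_FMT, packet[:4])
    if magic != OBJ_STORAGE_MAGIC:
        return None
    return task, replicas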
FIG. 5 shows a method performed by a networking switch in an object storage system as described above. The method includes receiving, from a network, a first packet that includes an object ID and a data object 501. The method includes generating a replica for the data object 502. The method includes generating an object ID for the replica of the data object 503. The method includes determining a destination storage node for the replica of the data object 504. The method includes sending a second packet from the networking switch to the destination storage node 505. The second packet includes the object ID for the replica of the data object and the replica of the data object.
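As a rough, assumption-laden sketch (reusing the replica_destinations helper from the earlier example), the flow of FIG. 5 might be expressed as:

def handle_write(object_id, data, send):
    # 501: a first packet arrives carrying an object ID and a data object
    #      (modeled here as the function's arguments).
    for i, node in enumerate(replica_destinations(object_id)):
        replica = data                              # 502: generate a replica of the object
        replica_id = object_id + ".r" + str(i)      # 503: generate an object ID for the replica
        # 504: the destination storage node was determined from the hash above.
        send(node, replica_id, replica)             # 505: second packet to the destination node

# Example use with a stand-in transmit function:
handle_write("object-123", b"payload bytes",
             lambda node, rid, obj: print(node, rid, len(obj)))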
FIG. 6 shows a model of a basic computing system which may represent any of the servers described above. As observed in FIG. 6, the basic computing system 600 may include a central processing unit 601 (which may include, e.g., a plurality of general purpose processing cores 615_1 through 615_X) and a main memory controller 617 disposed on a multi-core processor or applications processor, system memory 602, a display 603 (e.g., touchscreen, flat-panel), a local wired point-to-point link (e.g., USB) interface 604, various network I/O functions 605 (such as an Ethernet interface and/or cellular modem subsystem), a wireless local area network (e.g., WiFi) interface 606, a wireless point-to-point link (e.g., Bluetooth) interface 607, a Global Positioning System interface 608, various sensors 609_1 through 609_Y, one or more cameras 610, a battery 611, a power management control unit 612, a speaker and microphone 613, and an audio coder/decoder 614.
An applications processor or multi-core processor 650 may include one or more general purpose processing cores 615 within its CPU 601, one or more graphics processing units 616, a memory management function 617 (e.g., a memory controller) and an I/O control function 618. The general purpose processing cores 615 typically execute the operating system and application software of the computing system. The graphics processing unit 616 typically executes graphics intensive functions to, e.g., generate graphics information that is presented on the display 603. The memory control function 617 interfaces with the system memory 602 to write/read data to/from system memory 602. The power management control unit 612 generally controls the power consumption of the system 600.
The touchscreen display 603, the communication interfaces 604-607, the GPS interface 608, the sensors 609, the camera(s) 610, and the speaker/microphone codec 613, 614 can all be viewed as various forms of I/O (input and/or output) relative to the overall computing system including, where appropriate, an integrated peripheral device as well (e.g., the one or more cameras 610). Depending on implementation, various ones of these I/O components may be integrated on the applications processor/multi-core processor 650 or may be located off the die or outside the package of the applications processor/multi-core processor 650.
The computing system may also include a system memory (also referred to as main memory) having multiple levels. For example, a first (faster) system memory level may be implemented with DRAM and a second (slower) system memory level may be implemented with an emerging non-volatile memory (such as non-volatile memory whose storage cells are composed of chalcogenide, resistive memory (RRAM), ferroelectric memory (FeRAM), etc.). Emerging non-volatile memory technologies have faster access times than traditional FLASH and can therefore be used in a system memory role rather than being relegated solely to mass storage.
Software and/or firmware executing on a general purpose CPU core (or other functional block having an instruction execution pipeline to execute program code) of a processor may perform any of the functions described above.
Embodiments of the invention may include various processes as set forth above. The processes may be embodied in machine-executable instructions. The instructions can be used to cause a general-purpose or special-purpose processor to perform certain processes. Alternatively, these processes may be performed by specific hardware components that contain hardwired logic for performing the processes, or by any combination of programmed computer components and custom hardware components.
Elements of the present invention may also be provided as a machine-readable medium for storing the machine-executable instructions. The machine-readable medium may include, but is not limited to, floppy diskettes, optical disks, CD-ROMs, and magneto-optical disks, FLASH memory, ROMs, RAMs, EPROMs, EEPROMs, magnetic or optical cards, propagation media or other type of media/machine-readable medium suitable for storing electronic instructions. For example, the present invention may be downloaded as a computer program which may be transferred from a remote computer (e.g., a server) to a requesting computer (e.g., a client) by way of data signals embodied in a carrier wave or other propagation medium via a communication link (e.g., a modem or network connection).
In the foregoing specification, the invention has been described with reference to specific exemplary embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.

Claims (19)

The invention claimed is:
1. A networking switch, comprising:
a plurality of ingress ports;
a plurality of egress ports;
a switch core coupled between the ingress ports and the egress ports;
object storage system intelligence logic circuitry, said object storage system intelligence logic circuitry to perform a), b), c), and d) below:
a) generate object IDs for object replicas, the object replicas to each contain same information, the object storage system intelligence logic circuitry to protect against loss of the information by storing the object replicas in different destination storage nodes of an object storage system;
b) determine a destination storage node from an object ID for an object having the object ID without previously having been configured with information that indicates that the object ID is to be stored in the destination storage node;
c) convert a single write request with a first corresponding object into multiple write requests having respective replicas of the first corresponding object, and/or, consolidate multiple write requests with a second corresponding object into a single write request with the second corresponding object; and,
d) consolidate multiple acknowledgements for multiple stored objects into a single acknowledgement.
2. The networking switch of claim 1 wherein the networking switch is to provide networking services for a group of servers.
3. The networking switch of claim 2 wherein the networking switch is integrated into a rack.
4. The networking switch of claim 1 wherein the networking switch is to provide networking services for a group of racks of servers.
5. The networking switch of claim 1 wherein the object storage system intelligence logic circuitry is to perform a hash on the object ID.
6. The networking switch of claim 1 wherein the networking switch is integrated into a rack and is to receive a storage request from a client for the object without receiving any replicas of the object.
7. The networking switch of claim 1 wherein the networking switch is not integrated into a rack and is to receive a storage request for the object without receiving any replicas of the object.
8. An object storage system, comprising:
multiple servers, respective CPU cores of the multiple servers to execute at least one of client node object storage system program code and storage node object storage system program code;
a plurality of networking switches coupled between the multiple servers, a networking switch of the networking switches comprising object storage system intelligence logic circuitry, said object storage system intelligence logic circuitry to perform one or more of a), b), c), and d) below:
a) generate object IDs for object replicas, the object replicas to each contain same information, the object storage system intelligence logic circuitry to protect against loss of the information by storing the object replicas in different destination storage nodes of an object storage system;
b) determine a destination storage node from an object ID for an object having the object ID without previously having been configured with information that indicates that the object ID is to be stored in the destination storage node;
c) convert a single write request with a first corresponding object into multiple write requests having respective replicas of the first corresponding object, and/or, consolidate multiple write requests with a second corresponding object into a single write request with the second corresponding object; and,
d) consolidate multiple acknowledgements for multiple stored objects into a single acknowledgement.
9. The object storage system of claim 8 wherein the networking switch is integrated within a rack.
10. The object storage system of claim 8 wherein the networking switch is to provide networking services for a group of racks of servers.
11. The object storage system of claim 8 wherein the object storage system intelligence logic circuitry is to perform a hash on the object ID.
12. The object storage system of claim 8 wherein the networking switch is integrated into a rack and is to receive a storage request from a client for the object without receiving any replicas of the object.
13. The object storage system of claim 8 wherein the networking switch is not integrated within a rack and is to receive a storage request for the object without receiving any replicas of the object.
14. A method, comprising:
performing the following by a networking switch in an object storage system, wherein, the networking switch does not include destination storage nodes for the object storage system:
receiving a first packet from a network comprising a first object ID and a data object;
generating object replicas for the data object, the object replicas to each contain same information, the object storage system to protect against loss of the same information by storing the object replicas in different storage nodes of the object storage system;
forwarding the first packet to a first destination storage node for the data object;
generating a second object ID for a replica of the data object, the replica of the data object being one of the object replicas for the data object;
determining a second destination storage node for the replica of the data object, the second destination storage node being a different storage node than the first destination storage node;
sending a second packet from the networking switch to the second destination storage node, the second packet comprising the second object ID for the replica of the data object and the replica of the data object;
receiving a first acknowledgement from the first destination storage node that the data object has been stored;
receiving a second acknowledgment from the second destination storage node that the replica of the data object has been stored;
consolidating the first and second acknowledgements into a third acknowledgment and sending the third acknowledgement into the network to a sender of the first packet.
15. The method of claim 14 wherein the networking switch is integrated within a rack.
16. The method of claim 15 wherein the first object ID is a primary object ID.
17. The method of claim 14 wherein the networking switch is coupled to multiple racks.
18. The method of claim 14 wherein the first destination storage node and the second destination storage node are in different racks.
19. The method of claim 14 wherein the first destination storage node and the second destination storage node are in different servers of a same rack and the networking switch is integrated within the rack.
US15/718,756 2017-09-28 2017-09-28 Networking switch with object storage system intelligence Active 2038-01-09 US10855766B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/718,756 US10855766B2 (en) 2017-09-28 2017-09-28 Networking switch with object storage system intelligence

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US15/718,756 US10855766B2 (en) 2017-09-28 2017-09-28 Networking switch with object storage system intelligence

Publications (2)

Publication Number Publication Date
US20190098085A1 US20190098085A1 (en) 2019-03-28
US10855766B2 true US10855766B2 (en) 2020-12-01

Family

ID=65808251

Family Applications (1)

Application Number Title Priority Date Filing Date
US15/718,756 Active 2038-01-09 US10855766B2 (en) 2017-09-28 2017-09-28 Networking switch with object storage system intelligence

Country Status (1)

Country Link
US (1) US10855766B2 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111104458B (en) * 2019-11-12 2024-04-05 杭州创谐信息技术股份有限公司 Distributed data exchange system and method based on RK3399Pro
US20220224673A1 (en) * 2021-01-13 2022-07-14 Terafence Ltd. System and method for isolating data flow between a secured network and an unsecured network

Patent Citations (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020091636A1 (en) * 1999-03-25 2002-07-11 Nortel Networks Corporation Capturing quality of service
US20110294472A1 (en) * 2008-05-01 2011-12-01 Nigel Bramwell Communications device, communications service and methods for providing and operating the same
US20130232260A1 (en) * 2009-12-23 2013-09-05 Citrix Systems, Inc. Systems and methods for gslb mep connection management across multiple core appliances
US20110196828A1 (en) * 2010-02-09 2011-08-11 Alexandre Drobychev Method and System for Dynamically Replicating Data Within A Distributed Storage System
US20120278804A1 (en) * 2010-11-14 2012-11-01 Brocade Communications Systems, Inc. Virtual machine and application movement over a wide area network
US20150125112A1 (en) * 2012-04-25 2015-05-07 Ciena Corporation Optical switch fabric for data center interconnections
US20140025770A1 (en) * 2012-07-17 2014-01-23 Convergent.Io Technologies Inc. Systems, methods and devices for integrating end-host and network resources in distributed memory
US10341285B2 (en) * 2012-07-17 2019-07-02 Open Invention Network Llc Systems, methods and devices for integrating end-host and network resources in distributed memory
US20140269261A1 (en) * 2013-03-14 2014-09-18 Telefonaktiebolaget L M Ericsson (Publ) Method and apparatus for ip/mpls fast reroute
US9577874B2 (en) * 2013-03-14 2017-02-21 Telefonaktiebolaget L M Ericsson (Publ) Method and apparatus for IP/MPLS fast reroute
US20140317293A1 (en) * 2013-04-22 2014-10-23 Cisco Technology, Inc. App store portal providing point-and-click deployment of third-party virtualized network functions
US20140317261A1 (en) * 2013-04-22 2014-10-23 Cisco Technology, Inc. Defining interdependent virtualized network functions for service level orchestration
US9633051B1 (en) * 2013-09-20 2017-04-25 Amazon Technologies, Inc. Backup of partitioned database tables
US20170228290A1 (en) * 2013-09-20 2017-08-10 Amazon Technologies, Inc. Backup of partitioned database tables
US10025673B1 (en) * 2013-09-20 2018-07-17 Amazon Technologies, Inc. Restoring partitioned database tables from backup
US20150124809A1 (en) * 2013-11-05 2015-05-07 Cisco Technology, Inc. Policy enforcement proxy
US9935887B1 (en) * 2015-09-24 2018-04-03 Juniper Networks, Inc. Fragmentation and reassembly of network traffic
US20170094002A1 (en) * 2015-09-26 2017-03-30 Dinesh Kumar Technologies for offloading data object replication and service function chain management
US9928168B2 (en) * 2016-01-11 2018-03-27 Qualcomm Incorporated Non-volatile random access system memory with DRAM program caching
US10038624B1 (en) * 2016-04-05 2018-07-31 Barefoot Networks, Inc. Flexible packet replication and filtering for multicast/broadcast

Also Published As

Publication number Publication date
US20190098085A1 (en) 2019-03-28

Similar Documents

Publication Publication Date Title
US10917351B2 (en) Reliable load-balancer using segment routing and real-time application monitoring
US11962501B2 (en) Extensible control plane for network management in a virtual infrastructure environment
US20210243247A1 (en) Service mesh offload to network devices
CN107465590B (en) Network infrastructure system, method of routing network traffic and computer readable medium
JP6445621B2 (en) Distributed load balancer
JP6169251B2 (en) Asymmetric packet flow in distributed load balancers
TWI543566B (en) Data center network system based on software-defined network and packet forwarding method, address resolution method, routing controller thereof
JP6030807B2 (en) Open connection with distributed load balancer
CN106533992B (en) PCI express fabric routing for fully connected mesh topologies
JP2019092217A (en) Networking techniques
JP2005538588A (en) Switchover and switchback support for network interface controllers with remote direct memory access
US10826823B2 (en) Centralized label-based software defined network
CN104811392A (en) Method and system for processing resource access request in network
US10230795B2 (en) Data replication for a virtual networking system
US20160216891A1 (en) Dynamic storage fabric
US10700893B1 (en) Multi-homed edge device VxLAN data traffic forwarding system
US20210294702A1 (en) High-availability memory replication in one or more network devices
US20220337499A1 (en) Systems and methods for determining network component scores using bandwidth capacity
US10855766B2 (en) Networking switch with object storage system intelligence
CN109120556B (en) A kind of method and system of cloud host access object storage server
US20120158998A1 (en) API Supporting Server and Key Based Networking
CN104348737A (en) Multicast message transmission method and switches
US20220060418A1 (en) Network interface device-based computations
CN113098788B (en) Method and device for releasing route
US8750120B2 (en) Confirmed delivery of bridged unicast frames

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

AS Assignment

Owner name: INTEL CORPORATION, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ZOU, YI;RAGHUNATH, ARUN;CHAGAM REDDY, ANJANEYA REDDY;SIGNING DATES FROM 20171010 TO 20171012;REEL/FRAME:043910/0610

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT RECEIVED

STPP Information on status: patent application and granting procedure in general

Free format text: AWAITING TC RESP, ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4