US10855766B2 - Networking switch with object storage system intelligence
- Publication number: US10855766B2 (application US15/718,756)
- Authority: US (United States)
- Prior art keywords: storage system, storage, networking switch, replicas, switch
- Prior art date: 2017-09-28
- Legal status: Active, expires (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
(All classes fall under H04L: transmission of digital information, e.g., telegraphic communication.)
- H04L67/1097: Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g., transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
- H04L49/15: Interconnection of switching modules
- H04L49/356: Switches specially adapted for storage area networks
- H04L67/1095: Replication or mirroring of data, e.g., scheduling or transport for data synchronisation between network nodes
- H04L67/32
- H04L47/2466: Traffic characterised by specific attributes, e.g., priority or QoS, using signalling traffic
- H04L67/60: Scheduling or organising the servicing of application requests, e.g., requests for application data transmissions using the analysis and optimisation of the required network resources
Definitions
- the field of invention pertains generally to the computing sciences, and, more specifically, to a networking switch with object storage system intelligence.
- FIG. 1a shows a prior art object storage write process
- FIG. 1b shows a prior art object storage acknowledgment process
- FIG. 2a shows a first improved object storage write process
- FIG. 2b shows a first improved object storage acknowledgment process
- FIG. 3a shows a second improved object storage write process
- FIG. 3b shows a second improved object storage acknowledgment process
- FIG. 4 shows an improved networking switch
- FIG. 5 shows a methodology
- FIG. 6 shows a computing system
- FIG. 1a shows a prior art data center.
- a typical data center includes one or more "racks" 101_1, 101_2, ... 101_N, where each rack includes one or more servers 103 and a network router or switch 104 (hereinafter referred to as "switch").
- the server(s) 103 and switch 104 of a same rack 101 are typically coupled proximately to one another, e.g., in a same mechanical fixture or frame, in which server(s) and switch(es) may slide on rails on the sidewalls of the frame and mount to the back of the frame, so that the installed equipment in the frame takes on the appearance of, e.g., a bookcase.
- the proximate coupling of the server(s) 103 and switch 104 in a same rack 101 provides for various space/distance efficiencies such as, e.g., forming interconnections between the server(s) 103 and the switch 104 with copper wires that plug into a patch panel of the rack.
- Other possible efficiencies include cooling the server(s) 103 and switch 104 with a same fan system that is integrated in the rack, powering the server(s) and switch from a same electrical outlet, etc.
- the rack's switch 104 provides a communication platform for the rack's server(s) to communicate with one another and to connect to a larger network 105 of the data center to which, e.g., other racks are coupled.
- the switch 104 is commonly referred to as the "top of rack" (TOR) switch because its traditional location is the top shelf of the rack 101.
- the TORs 104_1 through 104_N of their respective racks 101_1 through 101_N can be viewed as the gateways to the data center's network 105.
- an EOR unit 107 helps economize the configuration, management and networking of multiple racks 101_1 through 101_N.
- the EOR unit 107 is typically a switch that provides communication services for multiple racks. That is, servers within different racks 101_1 through 101_N, where the different racks are associated with a same EOR 107, can communicate with one another through the EOR switch 107. Additionally, the EOR 107 provides the servers of its constituent racks 101_1 through 101_N with access to, e.g., the data center's backbone network 106, to which multiple EORs and their corresponding racks are coupled.
- an EOR unit 107 is a unit that is associated with multiple racks 101_1 through 101_N to provide communication services for the multiple racks 101_1 through 101_N (e.g., communication between racks 101_1 through 101_N and access to the data center's backbone network 106).
- the servers 103 of the data center typically include multiple CPUs capable of executing the respective program code of various kinds of software applications.
- One type of software application is an object storage software application.
- An object storage software application may include client access nodes (or client nodes) “CN” and storage nodes “SN”.
- the client access nodes act as gateways to the storage facilities for users of the object storage system (e.g., other application software programs that have storage needs, individuals, etc.).
- the client access nodes typically accept data objects from the users that are to be written into the storage system and provide data objects to the users that were read from the storage system.
- the storage nodes of the object storage application are responsible for storing data objects in the physical storage resources associated with the server that they execute on (e.g., non-volatile storage of the server).
- Object storage systems identify individual data items to be stored as objects, where each object typically has an identifier (object ID).
- a hashing algorithm is executed on the object ID by the object storage system to identify which storage node is responsible for storing the object.
- object storage systems also commonly replicate an object into multiple copies and store the different copies on different servers. Thus, should a server that stores a copy of the object fail, there are still other copies of the object that are available.
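To make the placement step concrete, below is a minimal Python sketch (not from the patent; all names and addresses are hypothetical) of how a hashing algorithm can map an object ID to a fixed number of distinct storage nodes, so that replicas land on different servers and any party that knows the object ID can recompute the same placement.

```python
import hashlib

def replica_nodes(object_id: str, nodes: list, replicas: int = 3) -> list:
    """Deterministically map an object ID to `replicas` distinct storage nodes.

    A simplified stand-in for the hashing intelligence described above;
    production object stores use more elaborate placement functions.
    Requires replicas <= len(nodes).
    """
    digest = int(hashlib.sha256(object_id.encode("utf-8")).hexdigest(), 16)
    # Walk the node list starting at a hash-derived offset; each step lands
    # on a different node, so the first `replicas` picks are distinct.
    start = digest % len(nodes)
    return [nodes[(start + i) % len(nodes)] for i in range(replicas)]

# Any node or client that knows the object ID recomputes the same replica
# set, which is how a surviving copy is located after a server failure.
nodes = ["10.0.1.10", "10.0.1.11", "10.0.2.10", "10.0.2.11"]
assert replica_nodes("object-42", nodes) == replica_nodes("object-42", nodes)
```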
- a problem with replication is the overhead that it presents to the data center network 105, 106: a full protocol exchange (e.g., acknowledgements (ACKs)) is carried out across the network for each replica of an object that is stored.
- FIG. 1a depicts an exemplary prior art object write scenario.
- for ease of example, assume that all three copies are to be stored in racks that are serviced by one EOR 107 (in actual circumstances, object copies may be stored in multiple groups of racks that have multiple corresponding EORs).
- the client node 109 that receives the data object with the corresponding write command replicates the object into three separate copies and sends the three different objects into the network (through its rack's switch 104_N) with three associated write requests 1 (one for each of the data object copies).
- object ID generation intelligence 113 within the client node 109 determines a same object ID to be assigned to each of the three different objects, respectively.
- a hashing function 114 executed by the client node 109 on the object ID is then able to convert the single object ID into the identity of three different storage nodes (e.g., by identifying their corresponding IP addresses).
- one copy (or “replica”) of the object is to be stored on each of the three different storage nodes.
- should a server holding a copy fail, the client node 109 can re-execute the hashing function on the object ID to identify where other copies of the object are stored and access a surviving copy of the object.
- the three storage requests 1 are forwarded by the TOR 104_N of the client node's 109 rack 101_N to the EOR unit 107. Again, for ease of example it is assumed that all three object copies are to be stored in the racks associated with the EOR unit 107.
- the respective identifier of each destination storage node (e.g., its IP address) for each of the objects is essentially used by the EOR's networking switch 107 to route 3a, 3b the three object copies with their corresponding write requests to their appropriate racks. For the sake of example, two of the copies are to be stored in rack 101_1 and one of the copies is to be stored in rack 101_2.
- the EOR unit routes 3a the two copies whose respective destinations correspond to storage nodes within rack 101_1 to rack 101_1.
- the EOR unit 107 routes 3b the single copy and its write request, whose respective destination corresponds to a storage node within rack 101_2, to rack 101_2.
- rack 101_1 receives the two copies 3a, and its local TOR switch 104_1 forwards each copy to its respective object storage node 110, 111 internally within rack 101_1, where each object storage node stores its respective object.
- the TOR switch 104_2 within rack 101_2 routes 4b its received copy of the object to the proper destination object storage node 112 within rack 101_2.
- the object storage node 112 then stores the object.
- the client node 109 includes object ID generation intelligence 113 and hashing intelligence 114 that converts an object's object ID into three destination storage node identifiers (e.g., three IP addresses).
- the storage requests 1 each include an identity (e.g., an IP address) of the destination storage node that is to store the object appended to the request.
- the internal routing/switching tables of the network switches 104 , 107 are configured with look up tables or other information that converts the identity of a particular storage node or its IP address to an appropriate output port of the switch. By so doing, the switches of the network 104 , 107 are able to route any object to its correct destination storage node.
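As an illustration of such a lookup, the sketch below (hypothetical addresses and port numbers, not the patent's actual tables) maps a destination storage node's IP address to a switch output port, which is the only intelligence a conventional switch applies to storage traffic.

```python
# Hypothetical switch forwarding table: destination storage node IP -> egress port.
FORWARDING_TABLE = {
    "10.0.1.10": 1,  # storage node in rack 101_1
    "10.0.1.11": 1,  # storage node in rack 101_1
    "10.0.2.10": 2,  # storage node in rack 101_2
}
UPLINK_PORT = 0      # default port toward the EOR / backbone

def egress_port(dest_ip: str) -> int:
    """Resolve a destination address to an output port; unknown destinations
    are sent toward the uplink."""
    return FORWARDING_TABLE.get(dest_ip, UPLINK_PORT)
```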
- FIG. 1 b shows a prior art acknowledgement process.
- when the storage nodes 110, 111 within rack 101_1 complete the storage of their respective objects, they each send 1a an acknowledgement (ACK) back to the client node 109 that originally submitted the storage request for the three object copies.
- the object storage node 112 within rack 101_2 sends 1b an ACK back to the client node 109 upon successful storage of its object.
- Each of the ACKs identifies the client node 109 as the destination.
- the switches 104, 107 of the network 105, 106 route 2a, 2b, 3, 4 the ACKs back to the client node 109.
- once the client node 109 receives 4 the three ACKs, it understands that three copies of the object have been successfully stored in the system.
- in this approach, the switches 104, 107 can be seen as having no internal intelligence of the storage system itself. That is, the network switches 104, 107 are configured only to understand which network addresses correspond to which network destinations. As discussed above, the client nodes of the storage application include object ID generation intelligence 113 and object-ID-to-storage-node assignment intelligence 114, but the networking switches 104, 107 do not include any such intelligence.
- only client nodes of the application storage system are capable of generating specific object IDs for specific objects and copies of objects. Additionally, only client nodes are capable of determining which storage nodes the replicas of an object having a particular object ID should be assigned to.
- each copy of the object is provided to the network 105, 106 by the client 109 and forwarded through the network 105, 106 until it reaches its destination storage node.
- likewise, ACK flows originate from each destination storage node, and each progresses through the network 105, 106 and terminates at the client node 109. Because there is an end-to-end traffic flow for each copy of the object (forward to store the object and backward to acknowledge it), the offered load that is presented to the network 105, 106 scales in direct proportion to the number of replicas that the storage system is configured to store per object.
- a storage system that is configured to have six replicas will generate twice as much internal network traffic as a system that is configured to have only three replicas.
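As a quick sanity check of that scaling claim, here is a trivial illustrative sketch (my own arithmetic under the stated assumption of one write flow and one ACK flow per replica, not a formula from the patent):

```python
def end_to_end_flows(replicas: int) -> int:
    # One write flow toward storage plus one ACK flow back, per replica.
    return 2 * replicas

# Six replicas generate twice the internal traffic of three replicas.
assert end_to_end_flows(6) == 2 * end_to_end_flows(3)
```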
- the scaling of internal network traffic with the numbers of replicas creates a tradeoff between reliability of the storage system and the performance of the storage system.
- the storage system is more reliable with an increased number of replicas (the likelihood that data will be truly lost drops with each replica).
- the storage system will be observed by its users to have slower read/write storage access times as the internal traffic of the network increases.
- thus, as the number of replicas increases, the storage system will become slower from the perspective of the users. This is particularly unfortunate for large mission-critical environments (e.g., the core storage system of a large corporation), which highly desire both reliability and performance.
- FIG. 2 shows an improved approach in which the object ID generation intelligence 213_1 through 213_N and/or object-ID-to-storage-node assignment intelligence 214_1 through 214_N that has traditionally been reserved for client nodes of the storage application (to generate object IDs and/or determine which specific object IDs are to be stored on which specific object storage nodes) is embedded within the TOR switching nodes 204_1 through 204_N of the racks 201_1 through 201_N.
- with this approach, the internal traffic of the network 205, 206 can be greatly reduced because the number of per-copy end-to-end traffic flows is dramatically reduced.
- in one embodiment, in order to prevent changes to legacy client node software, the client node generates three separate object IDs and sends separate requests for each of three different copies of an object, as described above with respect to FIG. 1a.
- the three separate requests are forwarded to the EOR unit 207 .
- the EOR unit understands that the TOR switches 204_1 through 204_N of the racks 201_1 through 201_N have been upgraded with storage system intelligence 213, 214 that has traditionally been reserved for the client nodes of the storage application (to generate object IDs and/or determine which specific object IDs are to be stored on which specific object storage nodes).
- as such, the EOR 207 will recognize that two of the new requests have a same next hop address (the address of TOR switch 204_1) and will consolidate the two requests into a single request 3a with a single copy of the object.
- the single forwarded request 3a of FIG. 2a includes the object ID for the object and a single copy of the object.
- a single request 3b is also sent to TOR switch 204_2 of rack 201_2 in both the approach of FIG. 2a and that of FIG. 1a.
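A minimal sketch of this consolidation step, under the assumption (not spelled out in the patent) that the EOR keys requests by their next-hop TOR switch; the function and field names are illustrative only.

```python
from collections import defaultdict

def consolidate_writes(requests, next_hop):
    """Collapse replica write requests that share a next hop into one request.

    `requests` is an iterable of (object_id, payload, dest_node) tuples for
    the replicas of one object; `next_hop(dest_node)` names the downstream
    TOR switch for a destination node.
    """
    by_hop = defaultdict(list)
    for object_id, payload, dest_node in requests:
        by_hop[next_hop(dest_node)].append((object_id, payload))
    # Send one request, carrying a single copy of the object, per next hop;
    # the downstream TOR re-derives per-node destinations from the object ID.
    return {hop: reqs[0] for hop, reqs in by_hop.items()}
```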
- in response to receiving the single storage request, the TOR switch 204_1 of rack 201_1, having the enhanced storage intelligence 214_1, will perform a hash on the object ID of the single request 3a to identify the destination storage nodes (e.g., the IP addresses) for the object copies. After the hash is performed, for a redundancy of 3, the TOR switch 204_1 will identify three destination storage nodes. From the destination storage node identities, the TOR switch 204_1 will recognize that two of the destination storage nodes are within the TOR switch's own rack 201_1.
- the TOR switch 204_1 will then make a copy of the object that was included in the single storage request 3a it received and forward the first and second copies of the object to the first and second destination storage nodes 210, 211 within rack 201_1, respectively.
- the TOR switch 204_2 associated with rack 201_2 will also perform a hash on the primary object ID from the single request 3b it received to generate three destination storage node identifiers (e.g., three IP addresses). Of the three, TOR switch 204_2 will recognize that only one corresponds to a storage node within rack 201_2. As such, no additional copies of the object will be made by TOR switch 204_2 (it will forward the single copy it received to destination storage node 212 for storage).
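The TOR-side fan-out might then look like the following sketch (again with hypothetical names; `placement` stands for the hash-based mapping from object ID to destination storage nodes shown earlier):

```python
def tor_fan_out(object_id, payload, local_nodes, placement, send):
    """Replicate a consolidated write only for destinations in this rack.

    `placement(object_id)` returns all destination storage nodes for the
    object; `local_nodes` is the set of storage nodes in this switch's own
    rack; `send(dest, object_id, payload)` transmits one copy.
    """
    local_dests = [d for d in placement(object_id) if d in local_nodes]
    for dest in local_dests:
        # Duplicate the single received copy once per local destination.
        send(dest, object_id, payload)
    return len(local_dests)  # e.g., 2 for rack 201_1, 1 for rack 201_2
```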
- referring to FIG. 2b, TOR switch 204_1 can combine the two ACKs 1a from storage nodes 210, 211 into a single ACK for both objects that were stored in rack 201_1 (the single ACK may include the object ID and information that verifies the successful storage of two objects).
- the single ACK is then sent 2a to the EOR unit 207.
- TOR switch 204_2 will also send 2b a single ACK to EOR unit 207 (which also includes the object ID and verifies the successful storage of one object).
- the EOR unit 207 forwards both ACKs to the TOR switch 204_N in the rack 201_N that the client node 209 is within.
- the TOR switch within rack 201_N, having enhanced functionality, is able to split the ACK from TOR switch 204_1 into two separate ACKs. As such, the legacy client 209 will receive three ACKs and will recognize successful storage of three copies of the object. Thus, additional storage system intelligence that is included in switch 204_N is acknowledgment processing.
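The acknowledgement processing in both directions can be sketched as follows (a simplified model; the patent does not define an ACK wire format, so the dictionary fields here are assumptions):

```python
def combine_acks(object_id, acks):
    """Merge several per-node ACKs for one object into a single upstream ACK
    that carries the object ID and the number of copies stored."""
    return {"object_id": object_id, "copies_stored": len(acks)}

def split_ack(combined_ack):
    """Expand a combined ACK into per-copy ACKs for a legacy client that
    expects one acknowledgement per replica."""
    return [{"object_id": combined_ack["object_id"], "copies_stored": 1}
            for _ in range(combined_ack["copies_stored"])]

# Rack 201_1 stored two copies: two node ACKs become one ACK upstream...
upstream = combine_acks("object-42", [{"node": "210"}, {"node": "211"}])
# ...and the client-side TOR splits it back into two ACKs for the client.
assert len(split_ack(upstream)) == 2
```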
- in an alternate approach, the client understands that the network includes storage system functionality. As such, referring to FIG. 2a, the client 209 issues only one storage request to TOR switch 204_N, and TOR switch 204_N performs object ID generation and object-ID-to-storage-node correlation with intelligence 213_N and 214_N. TOR switch 204_N can then forward the three requests to the EOR switch 207. The rest of the storage process transpires as described above with respect to FIG. 2a. With respect to the acknowledgment process, switch 204_N can also accumulate the received ACKs and send a single, final ACK to the client 209 when both acknowledgements 3 of FIG. 2b have been received. Thus, again, additional storage system intelligence that is included in switch 204_N is acknowledgement processing. It is pertinent to point out that a single TOR switch as described herein may include all forms of TOR switch intelligence (or some subset thereof).
- FIG. 3a shows another object storage write process embodiment in which the EOR switch 307 also includes storage system intelligence 313_N+1, 314_N+1 (along with the TOR switches).
- because the client 309 is upgraded to know that the network 305, 306 includes storage system intelligence, the client 309 can send only one storage request 1 to the TOR switch 304_N rather than the three storage requests sent by the client in the embodiment of FIG. 2a.
- the TOR switch 304_N may generate an object ID for the object and send a single request that includes the object and the object ID to the EOR 307.
- because the EOR switch 307 also has storage system intelligence 313_N+1, 314_N+1, TOR switch 304_N need only forward 2 a single write request and object to the EOR switch 307.
- alternatively, object ID intelligence 313_N+1 of the EOR may generate an object ID for the object.
- after receipt of the single request with the object, and having possession of an object ID (whether received or locally generated), the hashing intelligence 314_N+1 of the EOR 307 will perform a hash on the object ID, which will generate three destination storage node identifiers (e.g., three IP addresses). Moreover, the EOR 307 will recognize that two of these storage nodes 310, 311 map to a single rack 301_1. As such, the EOR 307 can consolidate the request for these two storage nodes 310, 311 into a single storage request and corresponding object that is sent 3a to TOR switch 304_1.
- in one embodiment, the single request 3a includes only the object ID (it does not identify the destinations 310, 311 for the pair of objects that are to be stored in rack 301_1), and the TOR switch 304_1 of rack 301_1 performs the same procedures described above with respect to FIG. 2a (switch 304_1 generates destination addresses for the copies from the object ID in the request).
- alternatively, the single request 3a identifies both destination nodes 310, 311, and the TOR switch 304_1 duplicates the object and forwards the object copies to both destinations 310, 311 for storage.
- the EOR unit 307 also sends a second request 3b with a corresponding copy of the object to rack 301_2.
- the request may conform to any of the embodiments described immediately above, resulting in the storage of the single copy in the appropriate destination storage node 312 within rack 301 _ 2 .
- the acknowledgements may reduce to: 1) a single ACK sent from TOR switch 304_1 to EOR switch 307 that confirms the storage of both copies of the object in rack 301_1; 2) a single ACK sent from TOR switch 304_2 to EOR switch 307 that confirms the storage of the single copy of the object in rack 301_2; and 3) a single ACK sent from the EOR unit 307 to TOR switch 304_N in rack 301_N that confirms the storage of all three objects in the appropriate storage nodes (the EOR includes ACK processing intelligence that can accumulate the ACKs from the racks so that only one ACK is sent toward the client).
- the TOR switch 304_N can then send a single confirmation to the client 309, or three separate acknowledgements to the client 309, depending on whether or not the client is a legacy client.
- FIG. 4 shows an embodiment of a networking switch 400 that corresponds to either or both of the EOR and TOR units discussed at length above.
- the network switch 400 includes a number of ingress ports 401 and egress ports 402 .
- the ingress ports 401 receive input traffic and the egress ports 402 transmit output traffic.
- a switch core 403 is coupled between the ingress ports 401 and the egress ports 402 .
- the switch core 403 can be implemented with custom dedicated hardware logic circuitry (e.g., application specific integrated circuit (ASIC) logic circuitry), programmable logic circuitry (e.g., field programmable gate array (FPGA) logic circuitry, programmable logic device (PLD) logic circuitry, programmable logic array (PLA) logic circuitry, etc.), logic circuitry that executes program code written to perform network switching and/or routing tasks (e.g., a microprocessor, one or more processing cores of a multi-core processor, etc.), or any combination of these. Programmable logic circuits may be particularly useful in the case of software defined networking and/or software defined storage implementations.
- the networking switch 400 also includes object storage intelligence 404, 405, 406 as described above with respect to FIGS. 2a, 2b, 3a and 3b.
- object storage intelligence 404 is able to generate object IDs for replica objects, e.g., by performing a hash function on an object's primary ID.
- Object storage intelligence 405 is able to determine an object's storage node from that object's object ID.
- Object storage intelligence 406 is able to perform various object storage system acknowledgement tasks such as accumulating multiple received acknowledgements and consolidating them to a single acknowledgement, or, splitting a combined acknowledgement into separate acknowledgements.
- any of the object storage intelligence 404 , 405 , 406 may be implemented with any of the types of logic circuitry described in the preceding paragraph or combination thereof. In the case of logic circuitry that executes program code, the program code would be written to perform some object storage intelligence task.
- packets are sent over the network 205, 206/305, 306 between devices.
- the payloads of these packets may contain data objects and/or object identifiers of the object storage system (in the storage direction) or acknowledgements of the object storage system (in the acknowledgement direction).
- the packets may also contain, in header or payload, information that identifies the packet as an object storage packet and/or indicates which object storage intelligence 404, 405, 406 is to be executed by the networking switch 400 in order to process the packet according to some object storage system related task.
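For illustration, such a packet could be modeled and dispatched as below (the field names and dispatch helpers are assumptions made for this sketch, not the patent's wire format):

```python
from dataclasses import dataclass

@dataclass
class StoragePacket:
    object_storage: bool   # marks the packet for object storage processing
    op: str                # e.g., "WRITE" (storage direction) or "ACK"
    object_id: str
    payload: bytes = b""   # the data object itself; empty for ACKs

def dispatch(switch, pkt: StoragePacket):
    """Route a packet to ordinary forwarding or to the switch's object
    storage intelligence (404/405/406 above), based on its markings."""
    if not pkt.object_storage:
        switch.forward(pkt)              # conventional switching path
    elif pkt.op == "WRITE":
        switch.handle_object_write(pkt)  # replica/object-ID/placement logic
    elif pkt.op == "ACK":
        switch.handle_ack(pkt)           # ACK consolidation or splitting
```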
- FIG. 5 shows a method performed by a networking switch in an object storage system as described above.
- the method includes receiving a first packet from a network that includes an object ID and a data object 501.
- the method includes generating a replica for the data object 502.
- the method includes generating an object ID for the replica of the data object 503.
- the method includes determining a destination storage node for the replica of the data object 504.
- the method includes sending a second packet from the networking switch to the destination storage node 505.
- the second packet includes the object ID for the replica of the data object and the replica of the data object.
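Restated as a sketch, the FIG. 5 method might read as follows (the helper names on the switch are hypothetical; the numerals in the comments match the method steps):

```python
def fig5_method(switch, first_packet):
    object_id = first_packet.object_id          # 501: receive first packet
    data_object = first_packet.payload          #      with object ID + object
    replica = bytes(data_object)                # 502: generate a replica
    replica_id = switch.replica_object_id(object_id)  # 503: replica's object ID
    dest_node = switch.destination_node(replica_id)   # 504: destination storage node
    switch.send(dest_node, replica_id, replica)       # 505: send second packet
```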
- FIG. 6 shows a model of a basic computing system, which may represent any of the servers described above.
- the basic computing system 600 may include a central processing unit 601 (which may include, e.g., a plurality of general purpose processing cores 615_1 through 615_X) and a main memory controller 617 disposed on a multi-core processor or applications processor, system memory 602, a display 603 (e.g., touchscreen, flat-panel), a local wired point-to-point link (e.g., USB) interface 604, various network I/O functions 605 (such as an Ethernet interface and/or cellular modem subsystem), a wireless local area network (e.g., WiFi) interface 606, a wireless point-to-point link (e.g., Bluetooth) interface 607, a Global Positioning System interface 608, various sensors 609_1 through 609_Y, one or more cameras 610, a battery 611, a power management control unit 612, and a speaker and microphone codec 613, 614.
- An applications processor or multi-core processor 650 may include one or more general purpose processing cores 615 within its CPU 601 , one or more graphical processing units 616 , a memory management function 617 (e.g., a memory controller) and an I/O control function 618 .
- the general purpose processing cores 615 typically execute the operating system and application software of the computing system.
- the graphics processing unit 616 typically executes graphics intensive functions to, e.g., generate graphics information that is presented on the display 603 .
- the memory control function 617 interfaces with the system memory 602 to write/read data to/from system memory 602 .
- the power management control unit 612 generally controls the power consumption of the system 600 .
- Each of the touchscreen display 603, the communication interfaces 604-607, the GPS interface 608, the sensors 609, the camera(s) 610, and the speaker/microphone codec 613, 614 can be viewed as various forms of I/O (input and/or output) relative to the overall computing system, including, where appropriate, an integrated peripheral device as well (e.g., the one or more cameras 610).
- various ones of these I/O components may be integrated on the applications processor/multi-core processor 650 or may be located off the die or outside the package of the applications processor/multi-core processor 650 .
- the computing system may also include a system memory (also referred to as main memory) having multiple levels.
- a first (faster) system memory level may be implemented with DRAM, and a second (slower) system memory level may be implemented with an emerging non-volatile memory (such as non-volatile memory whose storage cells are composed of chalcogenide, resistive memory (RRAM), ferroelectric memory (FeRAM), etc.).
- Emerging non-volatile memory technologies have faster access times than traditional FLASH and can therefore be used in a system memory role rather than being relegated solely to mass storage.
- Software and/or firmware executing on a general purpose CPU core (or other functional block having an instruction execution pipeline to execute program code) of a processor may perform any of the functions described above.
- Embodiments of the invention may include various processes as set forth above.
- the processes may be embodied in machine-executable instructions.
- the instructions can be used to cause a general-purpose or special-purpose processor to perform certain processes.
- these processes may be performed by specific hardware components that contain hardwired logic for performing the processes, or by any combination of programmed computer components and custom hardware components.
- Elements of the present invention may also be provided as a machine-readable medium for storing the machine-executable instructions.
- the machine-readable medium may include, but is not limited to, floppy diskettes, optical disks, CD-ROMs, magneto-optical disks, FLASH memory, ROMs, RAMs, EPROMs, EEPROMs, magnetic or optical cards, propagation media, or other types of media/machine-readable media suitable for storing electronic instructions.
- the present invention may be downloaded as a computer program which may be transferred from a remote computer (e.g., a server) to a requesting computer (e.g., a client) by way of data signals embodied in a carrier wave or other propagation medium via a communication link (e.g., a modem or network connection).
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/718,756 US10855766B2 (en) | 2017-09-28 | 2017-09-28 | Networking switch with object storage system intelligence |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/718,756 US10855766B2 (en) | 2017-09-28 | 2017-09-28 | Networking switch with object storage system intelligence |
Publications (2)
Publication Number | Publication Date |
---|---|
US20190098085A1 (en) | 2019-03-28 |
US10855766B2 (en) | 2020-12-01 |
Family
ID=65808251
Family Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/718,756 (granted as US10855766B2; status Active; anticipated expiration 2038-01-09) | 2017-09-28 | 2017-09-28 | Networking switch with object storage system intelligence |
Country Status (1)
Country | Link |
---|---|
US (1) | US10855766B2 (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111104458B (en) * | 2019-11-12 | 2024-04-05 | 杭州创谐信息技术股份有限公司 | Distributed data exchange system and method based on RK3399Pro |
US20220224673A1 (en) * | 2021-01-13 | 2022-07-14 | Terafence Ltd. | System and method for isolating data flow between a secured network and an unsecured network |
Patent Citations (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020091636A1 (en) * | 1999-03-25 | 2002-07-11 | Nortel Networks Corporation | Capturing quality of service |
US20110294472A1 (en) * | 2008-05-01 | 2011-12-01 | Nigel Bramwell | Communications device, communications service and methods for providing and operating the same |
US20130232260A1 (en) * | 2009-12-23 | 2013-09-05 | Citrix Systems, Inc. | Systems and methods for gslb mep connection management across multiple core appliances |
US20110196828A1 (en) * | 2010-02-09 | 2011-08-11 | Alexandre Drobychev | Method and System for Dynamically Replicating Data Within A Distributed Storage System |
US20120278804A1 (en) * | 2010-11-14 | 2012-11-01 | Brocade Communications Systems, Inc. | Virtual machine and application movement over a wide area network |
US20150125112A1 (en) * | 2012-04-25 | 2015-05-07 | Ciena Corporation | Optical switch fabric for data center interconnections |
US20140025770A1 (en) * | 2012-07-17 | 2014-01-23 | Convergent.Io Technologies Inc. | Systems, methods and devices for integrating end-host and network resources in distributed memory |
US10341285B2 (en) * | 2012-07-17 | 2019-07-02 | Open Invention Network Llc | Systems, methods and devices for integrating end-host and network resources in distributed memory |
US20140269261A1 (en) * | 2013-03-14 | 2014-09-18 | Telefonaktiebolaget L M Ericsson (Publ) | Method and apparatus for ip/mpls fast reroute |
US9577874B2 (en) * | 2013-03-14 | 2017-02-21 | Telefonaktiebolaget L M Ericsson (Publ) | Method and apparatus for IP/MPLS fast reroute |
US20140317293A1 (en) * | 2013-04-22 | 2014-10-23 | Cisco Technology, Inc. | App store portal providing point-and-click deployment of third-party virtualized network functions |
US20140317261A1 (en) * | 2013-04-22 | 2014-10-23 | Cisco Technology, Inc. | Defining interdependent virtualized network functions for service level orchestration |
US9633051B1 (en) * | 2013-09-20 | 2017-04-25 | Amazon Technologies, Inc. | Backup of partitioned database tables |
US20170228290A1 (en) * | 2013-09-20 | 2017-08-10 | Amazon Technologies, Inc. | Backup of partitioned database tables |
US10025673B1 (en) * | 2013-09-20 | 2018-07-17 | Amazon Technologies, Inc. | Restoring partitioned database tables from backup |
US20150124809A1 (en) * | 2013-11-05 | 2015-05-07 | Cisco Technology, Inc. | Policy enforcement proxy |
US9935887B1 (en) * | 2015-09-24 | 2018-04-03 | Juniper Networks, Inc. | Fragmentation and reassembly of network traffic |
US20170094002A1 (en) * | 2015-09-26 | 2017-03-30 | Dinesh Kumar | Technologies for offloading data object replication and service function chain management |
US9928168B2 (en) * | 2016-01-11 | 2018-03-27 | Qualcomm Incorporated | Non-volatile random access system memory with DRAM program caching |
US10038624B1 (en) * | 2016-04-05 | 2018-07-31 | Barefoot Networks, Inc. | Flexible packet replication and filtering for multicast/broadcast |
Also Published As
Publication number | Publication date |
---|---|
US20190098085A1 (en) | 2019-03-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10917351B2 (en) | Reliable load-balancer using segment routing and real-time application monitoring | |
US11962501B2 (en) | Extensible control plane for network management in a virtual infrastructure environment | |
US20210243247A1 (en) | Service mesh offload to network devices | |
CN107465590B (en) | Network infrastructure system, method of routing network traffic and computer readable medium | |
JP6445621B2 (en) | Distributed load balancer | |
JP6169251B2 (en) | Asymmetric packet flow in distributed load balancers | |
TWI543566B (en) | Data center network system based on software-defined network and packet forwarding method, address resolution method, routing controller thereof | |
JP6030807B2 (en) | Open connection with distributed load balancer | |
CN106533992B (en) | PCI express fabric routing for fully connected mesh topologies | |
JP2019092217A (en) | Networking techniques | |
JP2005538588A (en) | Switchover and switchback support for network interface controllers with remote direct memory access | |
US10826823B2 (en) | Centralized label-based software defined network | |
CN104811392A (en) | Method and system for processing resource access request in network | |
US10230795B2 (en) | Data replication for a virtual networking system | |
US20160216891A1 (en) | Dynamic storage fabric | |
US10700893B1 (en) | Multi-homed edge device VxLAN data traffic forwarding system | |
US20210294702A1 (en) | High-availability memory replication in one or more network devices | |
US20220337499A1 (en) | Systems and methods for determining network component scores using bandwidth capacity | |
US10855766B2 (en) | Networking switch with object storage system intelligence | |
CN109120556B (en) | A kind of method and system of cloud host access object storage server | |
US20120158998A1 (en) | API Supporting Server and Key Based Networking | |
CN104348737A (en) | Multicast message transmission method and switches | |
US20220060418A1 (en) | Network interface device-based computations | |
CN113098788B (en) | Method and device for releasing route | |
US8750120B2 (en) | Confirmed delivery of bridged unicast frames |
Legal Events

Date | Code | Title | Description |
---|---|---|---|
| FEPP | Fee payment procedure | ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
| AS | Assignment | Owner name: INTEL CORPORATION, CALIFORNIA. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNORS: ZOU, YI; RAGHUNATH, ARUN; CHAGAM REDDY, ANJANEYA REDDY; SIGNING DATES FROM 20171010 TO 20171012; REEL/FRAME: 043910/0610 |
| STPP | Information on status: patent application and granting procedure in general | NON FINAL ACTION MAILED |
| STPP | Information on status: patent application and granting procedure in general | RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
| STPP | Information on status: patent application and granting procedure in general | NON FINAL ACTION MAILED |
| STPP | Information on status: patent application and granting procedure in general | RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
| STPP | Information on status: patent application and granting procedure in general | FINAL REJECTION MAILED |
| STPP | Information on status: patent application and granting procedure in general | RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER |
| STPP | Information on status: patent application and granting procedure in general | PUBLICATIONS -- ISSUE FEE PAYMENT RECEIVED |
| STPP | Information on status: patent application and granting procedure in general | AWAITING TC RESP, ISSUE FEE PAYMENT VERIFIED |
| STCF | Information on status: patent grant | PATENTED CASE |
| MAFP | Maintenance fee payment | PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY. Year of fee payment: 4 |