
US20170115712A1 - Server on a Chip and Node Cards Comprising One or More of Same - Google Patents


Info

Publication number
US20170115712A1
US20170115712A1 (application US15/281,462; US201615281462A)
Authority
US
United States
Prior art keywords
subsystem
node
power
soc
management
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US15/281,462
Inventor
Mark Bradley Davis
David James Borland
Arnold Thomas Schnell
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
III Holdings 2 LLC
Original Assignee
III Holdings 2 LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US12/794,996 external-priority patent/US20110103391A1/en
Priority claimed from US12/889,721 external-priority patent/US20140359323A1/en
Priority claimed from US13/234,054 external-priority patent/US9876735B2/en
Priority claimed from US13/284,855 external-priority patent/US20130107444A1/en
Priority claimed from US13/453,086 external-priority patent/US8599863B2/en
Priority claimed from US13/475,722 external-priority patent/US9077654B2/en
Priority claimed from US13/475,713 external-priority patent/US9054990B2/en
Priority claimed from US13/527,498 external-priority patent/US9069929B2/en
Priority to US15/281,462 priority Critical patent/US20170115712A1/en
Application filed by III Holdings 2 LLC filed Critical III Holdings 2 LLC
Publication of US20170115712A1 publication Critical patent/US20170115712A1/en
Assigned to III HOLDINGS 2, LLC reassignment III HOLDINGS 2, LLC ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: SILICON VALLEY BANK
Legal status: Abandoned (current)

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 1/00 - Details not covered by groups G06F 3/00-G06F 13/00 and G06F 21/00
    • G06F 1/26 - Power supply means, e.g. regulation thereof
    • G06F 1/266 - Arrangements to supply power to external peripherals either directly from the computer or under computer control, e.g. supply of power through the communication port, computer controlled power-strips
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 15/00 - Digital computers in general; Data processing equipment in general
    • G06F 15/76 - Architectures of general purpose stored program computers
    • G06F 15/78 - Architectures of general purpose stored program computers comprising a single central processing unit
    • G06F 15/7803 - System on board, i.e. computer system on one or more PCB, e.g. motherboards, daughterboards or blades
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 13/00 - Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F 13/10 - Program control for peripheral devices
    • G06F 13/102 - Program control for peripheral devices where the programme performs an interfacing function, e.g. device driver
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 13/00 - Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F 13/38 - Information transfer, e.g. on bus
    • G06F 13/42 - Bus transfer protocol, e.g. handshake; Synchronisation
    • G06F 13/4204 - Bus transfer protocol, e.g. handshake; Synchronisation on a parallel bus
    • G06F 13/4221 - Bus transfer protocol, e.g. handshake; Synchronisation on a parallel bus being an input/output bus, e.g. ISA bus, EISA bus, PCI bus, SCSI bus
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 15/00 - Digital computers in general; Data processing equipment in general
    • G06F 15/76 - Architectures of general purpose stored program computers
    • G06F 15/78 - Architectures of general purpose stored program computers comprising a single central processing unit
    • G06F 15/7807 - System on chip, i.e. computer system on a single chip; System in package, i.e. computer system on one or more chips in a single package
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 - Arrangements for executing specific programs
    • G06F 9/4401 - Bootstrapping
    • G06F 9/4406 - Loading of operating system
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 - Arrangements for executing specific programs
    • G06F 9/445 - Program loading or initiating
    • G06F 9/44505 - Configuring for program initiating, e.g. using registry, configuration files
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • U.S. application Ser. No. 13/662,759, filed Oct. 29, 2012 is a Continuation-In-Part of U.S. application Ser. No. 12/794,996, filed Jun. 7, 2010, which claims priority from Provisional Application U.S. Application 61/256,723, filed Oct. 30, 2009, incorporated herein by reference in their entireties.
  • U.S. application Ser. No. 13/662,759, filed Oct. 29, 2012 is a Continuation-In-Part of U.S. application Ser. No. 12/889,721, filed Sep. 24, 2010, which claims priority from Provisional Application U.S. Application 61/245,592, filed Sep. 24, 2009, incorporated herein by reference in their entireties.
  • the disclosure relates generally to provisioning of modular compute resources within a system design and, more particularly, to a system on a chip that provides integrated CPU, peripheral, switch fabric, system management, and power management functionalities.
  • Server systems generally provide a fixed number of options. For example, there are usually a fixed number of CPU sockets, memory DIMM slots, PCI Express I/O slots and a fixed number of hard drive bays, which often are delivered empty as they provide future upgradability. The customer is expected to gauge future needs and select a server chassis category that will serve present and future needs. Historically, and particularly with x86-class servers, predicting the future needs has been achievable because product improvements from one generation to another have been incremental.
  • FIG. 1 illustrates an example of a system board on which one or more node cards may be installed
  • FIG. 2 illustrates an embodiment of the details of each node card
  • FIG. 3 illustrates an example of a quad node card
  • FIGS. 4 and 5 illustrate two examples of node cards with one or more connectors
  • FIG. 6 illustrates an example of a single server node card
  • FIG. 7 illustrates a logical view of a system on a chip (SOC).
  • FIG. 8A illustrates an architectural block diagram view of a SOC showing subsystems thereof
  • FIG. 8B illustrates an architectural block diagram view of a SOC showing architectural elements thereof
  • FIG. 9 illustrates a logical view of a SOC node CPU subsystem
  • FIG. 10 illustrates a logical view of a peripheral subsystem
  • FIG. 11 illustrates an architectural block diagram view of a system interconnect subsystem
  • FIG. 12 illustrates a logical view of a system interconnect subsystem
  • FIG. 13 illustrates a logical view of a power management unit of a management subsystem
  • FIG. 14 illustrates a software view of a power management unit.
  • the disclosure is particularly applicable to examples of the node cards illustrated and described below and it is in this context that the disclosure will be described. It will be appreciated, however, that the disclosure has broader applicability since the disclosed system and node cards can be implemented in different manners that are within the scope of the disclosure and may be used for any application since all of the various applications in which the system and node cards may be used are within the scope of the disclosure.
  • FIG. 1 illustrates an example of a system 40 that may include a system board 42 on which one or more node cards 46 may be installed.
  • the system board 42 may be fit into a typical server chassis 44 and the system board 42 may have the one or more node cards 46 , such as one or more server node units (described below with reference to FIG. 2 ) plugged into the system board.
  • the system board 42 is the component that ties the ServerNodes 46 to these components.
  • the system board 42 is desirable if a hierarchical hardware partition is desired where the “building block” is smaller than the desired system, or when the “building block” is not standalone.
  • the system board 42 roles can include: Ethernet network connectivity, internal fabric connections between ServerNodes or groups of ServerNodes in a sub-system (the fabric design in FIG. 1 ), and chassis control and management.
  • the system board is the component that connects the fabric links between ServerNodes and allows them to communicate with the external world.
  • the system board 42 can glue the system components together and the input/output (I/O) of the system may include: management data input/output (MDIO) for SFP communication, comboPHYs for internal fabric links, storage and Ethernet access, UART and JTAG ports for debug and SMBus and GPIOs for chassis component control and communication.
  • node cards leverage highly integrated SoCs designed for server applications, which enable density and system design options that have not been available to date.
  • Cards can be defined that have the functionality of one or more servers and these Cards can be linked together to form clusters of servers in very dense implementations.
  • a high level description of the Card would include a highly integrated SOC implementing the server functionality, DRAM memory, support circuitry such as voltage regulation, and clocks.
  • the input/output of the card would be power and server to server interconnect and/or server to Ethernet PHY connectivity.
  • An example of a node card is shown in FIG. 2 with one or more system on a chip (SOC) units (i.e., SoCs).
  • the fabric connections on each node card 46 can be designed to balance: usage of SOC PHYs, link redundancy, link bandwidth and flexibility in usage of the 8 links at the edge connectors.
  • a node card 46 like that shown in FIG. 3 can be used in conjunction with a system board where the system board provides power to the node cards and connections to interconnect off the system board such as an Ethernet transceiver.
  • the system board could house one or more node cards. In the case of housing more than one node card, the system board creates a cluster of Servers that utilize a server to server interconnect or fabric that is integrated in the SOC or a separate function on the card.
  • This system board can be made in many forms, including industry standard form factors such as ATX or in customer form factors.
  • the system board could be a blade or could fit into a standard chassis such as a 2U or any other size.
  • FIG. 2 illustrates an example of a node card 60 .
  • the node card may be a printed circuit board with a male physical connector, on which there is one or more servers that get power from some of the signals on the physical connector and use some of the signals on the connector for server to server communication or server to Ethernet PHY connections.
  • the physical connector may be a PCIe (Peripheral Component Interconnect Express) connector.
  • the node card 60 may have an enable signal on the physical connector (see CARD EN in FIG. 2 ) that enables the server.
  • the node card may have regulators included on the PCB to provide regulated power supplies to various parts of the server off the power supply that is provided through the PCIe physical connector and the enables (CARD EN) may be connected to the regulators.
  • the voltage supplied to the node card may be 12V.
  • the regulators may generate a common voltage that may be 3.3V (as shown in the example in FIG. 2 ), 1.8V, 0.9V and/or 1.35 or 1.5V.
  • Each node card may have one or more SoCs 62 , memory and appropriate regulators, but may also have multiple servers on the PCB including multiple SOC and multiple sets of DRAM (dynamic random access memory) and the DRAM is soldered on the PCB and signals are routed to the SOC.
  • the DRAM may be on a DIMM (dual in-line memory module) and the DIMM is connected to the PCB using a connector whose signals are routed to the SOC.
  • the node card 60 may include one or more system on a chip (SOC) 62 (such as SOC0-SOC3 as shown in FIG. 2 ) and each SOC 62 (i.e., each an instance of a SOC unit) is part of a node 64 , such as Node N0-N3 as shown, wherein the node may be a compute node, a storage node and the like.
  • the SoCs on the node card may have heat sinks.
  • Each node 64 may further include one or more LEDs, memory (DDR, for example), a clock, a temperature sensor (TEMP) connected to the SOC, an SD slot and an SPI_FLASH slot as shown in FIG. 2 .
  • the node card 60 may also have a storage card such as SD, uSD, MMC, eMMC that is connected to the SOC (as shown in the example below in FIG. 6 ).
  • a NAND or NOR can be used and connected to the SOC (such as in the examples in FIGS. 4-5 below) and/or a serial flash may be used and connected to the SOC.
  • the node card may also have one or more communication and/or storage connects 66 , such as connects to various SATA devices, connects to XAUI interconnects and a UART that may be through an edge connector.
  • the server-to-server communication may be XAUI and one or more XAUI is routed to the PCIe physical connector and the XAUI signals are routed from the PCIe physical connector to the SOC and/or the XAUI signals are routed between SoCs on the PCB.
  • the server-to-server communication may be SGMII and one or more SGMII is routed to the PCIe physical connector and the SGMII signals are routed from the PCIe connector to the SOC or the SGMII signals are routed between SoCs on the PCB.
  • the node card may also have a SATA connector.
  • the SATA signals may be routed from the SOC to the SATA connector or multiple SATA connectors are added to the PCB and multiple SATA connectors are routed from the SOC to the SATA connectors.
  • the node card may also have a mini SATA on the Card or mSATA on the Card.
  • the SATA may be routed to the PCIe physical connector from the SOC. In some embodiments, multiple SATA connections are made between the SOC and PCIe physical connector and PCIe x1 or x2, or x4, or x8 or x16 or x32 is used.
  • the node card may use multiple PCIe physical connectors or any combination of multiple PCIe connectors such as x1 or x2, or x4, or x8 or x16 or x32.
  • DC values may be applied to the PCIe connector and routed onto the PCB for set up, control, ID or information, and these DC values are routed to GPIOs on one or more SoCs.
  • the edge connector may also have signaling for JTAG and ALTBOOT (described below in more detail).
  • the edge connector may also provide SLOT signaling, GPIO signaling and power (with an enable).
  • the JTAG signals are routed from one or more SoCs to PCIe physical connector and the serial port and/or UART signals are routed from the PCIe physical connector to one or more SoCs.
  • the SOC may have an additional signal or set of signals routed to the PCIe physical connector that is used to arbitrate usage of the serial port or UART.
  • a digital signal can be applied to the PCIe connector to cause an alternative boot procedure by connecting this signal from the PCIe connector to a signal on one or more SoCs that causes or enables an alternative boot.
  • the digital signal or signals can be applied to the PCIe physical connector to cause an interrupt to the SOC or SoCs by connecting the SOC or SoCs to this digital signal on the connector.
  • the system may have a level shifter(s) that is used on the PCB to translate a signal applied on the PCIe connector edge to a signal that is applied to the SOC(s).
  • a digital signal can be routed from an SOC to the PCIe connector to reset, control and/or provide information to an Ethernet PHY or SFP that is not on the PCB, e.g., for reset, enable, disable, MDIO, fault, loss of signal, or rate.
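  • The bullets above describe DC levels and strap signals (e.g., slot/ID values, ALTBOOT) arriving at SOC GPIOs through the PCIe-style edge connector. The following C sketch illustrates how boot firmware on an SOC might sample such straps; the register address, bit assignments, and function names are hypothetical and depend entirely on how a particular system board wires the connector.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical memory-mapped GPIO input register; the real address and
 * bit assignments depend on the SOC and on how the system board wires
 * the edge-connector DC levels to GPIO pins. */
#define GPIO_IN_REG        ((volatile uint32_t *)0x40010000u)
#define GPIO_SLOT_ID_SHIFT 0          /* straps encoding a slot/node ID      */
#define GPIO_SLOT_ID_MASK  0x0Fu
#define GPIO_ALTBOOT_BIT   (1u << 4)  /* edge-connector ALTBOOT signal       */

/* Sample the board straps once, early in boot. */
static uint32_t read_slot_id(void)
{
    return (*GPIO_IN_REG >> GPIO_SLOT_ID_SHIFT) & GPIO_SLOT_ID_MASK;
}

static bool altboot_requested(void)
{
    return (*GPIO_IN_REG & GPIO_ALTBOOT_BIT) != 0;
}

void early_board_setup(void)
{
    uint32_t slot = read_slot_id();

    if (altboot_requested()) {
        /* Take the alternative boot path signalled on the edge connector. */
        /* boot_from_alternate_device();  -- hypothetical helper            */
    }
    (void)slot;  /* e.g., used to derive fabric addresses or console IDs    */
}
```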
  • the node 64 of the node card 60 forms what can be thought of as an independent cluster node.
  • Each SOC 62 is an example of a node central processing unit (CPU) of the node card 60 .
  • An independent operating system (OS) is booted on the Node CPU.
  • Linux Ubuntu brand operating system is an example of such an independent operating system.
  • each SOC 62 includes one or more embedded PCIe controllers, one or more SATA controllers, and one or more 10 Gigabit (10 GigE) Ethernet MACs.
  • Each node 64 is interconnected to other nodes via a high speed interconnect such that the topology of the interconnect is logically transparent to the user.
  • Each node 64 even those that don't have direct access to an outside network, has network connectivity.
  • each node 64 has flash (i.e., flash storage space) that can be logically partitioned to support a local file system (e.g., a Linux file system).
  • each node 64 includes power management functionality that is optimized transparently to the user. Accordingly, a skilled person will appreciate that the node 64 provides all of the characteristics that would normally be attributed to a node of a computer cluster as well as value-added functionalities (e.g., storage functionality, power management functionality, etc).
  • FIG. 3 illustrates an example of a quad node card 100 .
  • the quad node card 100 may have one or more systems on a chip 103 (SoC0-SoC3 in this example), one or more volatile memory devices 104 , such as four 4 GB DDR3 Mini-DIMMs (1 per node), one or more storage interfaces 106 , such as sixteen SATA connectors (4 per node), one or more SD slots (one per node, MMC not supported) and one or more SPI flash chips (1 per node).
  • the quad node card may be powered by 12V dc, supplied via edge connectors 108 —all other voltages are internally generated by regulators.
  • the quad node card may have server interconnect fabric connections 110 routed via the edge connector 108 , through a system board to which the node card is connected, to other node cards or external Ethernet transceivers, with I2C and GPIO routed via the edge connector per system board requirements.
  • while the quad node card 100 does not have Ethernet PHY transceivers in some implementations, other implementations may choose to use Ethernet transceivers on the node card and route this as the interconnect; the node card is not a standalone design, but may be used with a system board.
  • the quad Card example consists of 4 server nodes, each formed by a Calxeda® EnergyNode SOC, with its DIMM and local peripherals, which runs Linux independently from any other node.
  • these nodes can be directly interconnected to form a high bandwidth fabric, which provides network access through the system Ethernet ports. From the network view, the server nodes appear as independent servers; each available to take work on.
  • FIGS. 4 and 5 illustrate two examples of node cards 120 , 130 with one or more connectors 108 .
  • the connectors may be a PCIe connector that makes a convenient physical interconnect between the node card and the system board, but any type of connector can be used.
  • the connector type is selected based on its performance at the switching frequency of the fabric interconnect. For example, industry-standard Micro TCA connectors available from Tyco Electronics and Samtec operate up to 12 GHz.
  • the node card has the SOCs 102 , the memory 104 , the storage interfaces 106 and the fabric connector 110 , but may also include one or more persistent memory devices 112 , such as NAND flash.
  • the node card definition can vary as seen below with variation in a number of SATA connectors and/or in a number of fabric interconnect for server-to-server communication.
  • the type of PCIe connector in the node card could vary significantly based on quantity of interconnect and other signals desired in the design.
  • FIGS. 4 and 5 show two PCIe x16 connectors, but the node cards could vary using any quantity of PCIe connectors and any type of PCIe (x1, x2, x4, etc.).
  • the physical Ethernet interfaces depicted on the System Board 42 can also reside on the node cards.
  • FIG. 6 illustrates an example of a single server node card 140 .
  • the single server node card 140 may have one processor SOC 102 , a 4 GB DDR3 DRAM 104 down (no DIMM), a microSD slot 114 , a SATA data connector 106 , a mSATA connector 116 , one or more XAUI channels (four in this example) to the edge connector 108 for fabric connectivity and may be smaller than 2-inch × 4-inch.
  • This combination provides the compute, networking IO, system memory, and storage interfaces needed for a robust ARM server, in a form factor that is easily integrated into many chassis designs.
  • This node card implements a x16 PCI connector with a custom electrical signaling interface that follows the Ethernet XAUI interface definition.
  • the node card 140 may be a two-sided printed circuit board with components on each side as shown in FIG. 6 .
  • FIGS. 7, 8A and 8B show a SOC 200 (i.e., an instance of a SOC unit) configured in accordance with the present invention.
  • the SOC 200 is a specific example of the SoCs discussed above in reference to FIGS. 2-6 (e.g., SOC 62 and/or SOC 102 ).
  • the SOC 200 can be utilized in standalone manner such as, for example, as discussed in reference to FIG. 6 .
  • the SOC 200 can be utilized in combination with a plurality of other SoCs on a node card such as, for example, with each one of the SoCs being associated with a respective node of the node card as discussed above in reference to FIGS. 2-5 .
  • the SOC 200 includes a node CPU subsystem 202 , a peripheral subsystem 204 , a system interconnect subsystem 206 , and a management subsystem 208 .
  • a SOC configured in accordance with the present invention can be logically divided into several subsystems.
  • Each one of the subsystems includes a plurality of operation components therein that enable a particular one of the subsystems to provide functionality thereof.
  • each one of these subsystems is preferably managed as independent power domains.
  • the node CPU subsystem 202 of SOC 200 provides the core CPU functionality for the SOC, and runs the primary user operating system (e.g. Ubuntu Linux).
  • the Node CPU subsystem 202 comprises a node CPU 210 , a snoop control unit (SCU) 212 , L2 cache 214 , a L2 cache controller 216 , memory controller 217 , an accelerator coherence port (ACP) 218 , main memory 219 and a generalized interrupt controller (GIC) 220 .
  • the node CPU 210 includes 4 processing cores 222 that share the L2 cache 214 .
  • the processing cores 222 are each an ARM Cortex A9 brand processing core with an associated media processing engine (e.g., Neon brand processing engine) and each one of the processing cores 222 has independent L1 instruction cache 224 and L1 data cache 226 .
  • each one of the processing cores can be a different brand of core that functions in a similar or substantially the same manner as ARM Cortex A9 brand processing core.
  • Each one of the processing cores 222 and its respective L1 cache 224 , 226 is in a separate power domain.
  • the media processing engine of each processing core 222 can be in a separate power domain.
  • all of the processing cores 222 within the node CPU subsystem 202 run at the same speed or are stopped (e.g., idled, dormant or powered down).
  • the SCU 212 is responsible for managing interconnect, arbitration, communication, cache-to-cache, system memory transfers, and cache coherency functionalities. With regard to cache coherency, the SCU 212 is responsible for maintaining coherence between the L1 caches 224 , 226 and ensuring that traffic from the ACP 218 is made coherent with the L1 caches 224 , 226 .
  • the L2 cache controller 216 can be a unified, physically addressed, physically tagged cache with up to 16 ways.
  • the memory controller 217 is coupled to the L2 cache 214 and to a peripheral switch 221 of the peripheral subsystem 204 .
  • the memory controller 217 is configured to control a plurality of different types of main memory (e.g., DDR3, DDR3L, LPDDR2).
  • An internal interface of the memory controller 217 includes a core data port, a peripherals data port, a data port of a power management unit (PMU) portion of the management subsystem 208 , and an asynchronous 32-bit AHB slave port.
  • the PMU data port is desirable to ensure isolation for some low power states.
  • the asynchronous 32-bit AHB slave port is used to configure the memory controller 217 and access its registers.
  • the asynchronous 32-bit AHB slave port is attached to the PMU fabric and can be synchronous to the PMU fabric in a similar manner as the asynchronous interface is at this end.
  • the memory controller 217 is a memory controller offered under the brand Databahn, which includes an AXI interface (i.e., an Advanced eXtensible Interface), a Databahn controller engine and a PHY (DFI).
  • the ACP 218 provides the function of ensuring that system traffic (e.g., I/O traffic, etc) can be driven in order to ensure that there is no need to flush or invalidate the L1 caches 224 , 226 to see the data.
  • the ACP 218 can serve as a slave interface port to the SCU 212 .
  • Read/write transactions can be initiated by an AXI master through the ACP 218 to either coherent or non-coherent memory.
  • the SCU 212 will perform necessary coherency operations against the L1 caches 224 , 226 , the L2 cache 214 and the main memory 219 .
  • the GIC 220 can be integrated into the SCU 212 .
  • the GIC 220 provides a flexible approach to inter-processor communication, routing, and prioritization of system interrupts.
  • the GIC 220 supports independent interrupts such that each interrupt can be distributed across CPU subsystem, hardware prioritized, and routed between the operating system and software management layer of the CPU subsystem. More specifically, interrupts to the processing cores 222 are connected via function of the GIC 220 .
  • the node CPU subsystem 202 can include other elements/modules for providing further functionalities, such as an L2 MBIST (i.e., memory built-in self-test) controller and a direct memory access (DMA) controller (i.e., a DMAC).
  • the peripheral subsystem 204 of SOC 200 has the primary responsibility of providing interfaces that enable information storage and transfer functionality.
  • This information storage and transfer functionality includes information storage and transfer both within a given SOC Node and with SOC Nodes accessible by the given SOC Node. Examples of the information storage and transfer functionality include, but are not limited to, flash interface functionality, PCIe interface functionality, SATA interface functionality, and Ethernet interface functionality.
  • the peripheral subsystem 204 can also provide additional information storage and transfer functionality such as, for example, direct memory access (DMA) functionality.
  • Each of these peripheral subsystem functionalities is provided by one or more respective controllers that interface to one or more corresponding storage media (i.e., storage media controllers).
  • the peripherals subsystem 204 includes the peripheral switch 221 and a plurality of peripheral controllers for providing the abovementioned information storage and transfer functionality.
  • the peripheral switch 221 can be implemented in the form of a High-Performance Matrix (HPM) that is a configurable auto-generated advanced microprocessor bus architecture 3 (i.e., AMBA protocol 3) bus subsystem based around a high-performance AXI cross-bar switch known as the AXI bus matrix, and extended by AMBA infrastructure components.
  • the peripherals subsystem 204 includes flash controllers 230 (i.e. a first type of peripheral controller).
  • the flash controllers 230 can provide support for any number of different flash memory configurations.
  • a NAND flash controller such as that offered under the brand name Denali is an example of a suitable flash controller.
  • flash media include MultiMediaCard (MMC) media, embedded MultiMediaCard (eMMC) media, Secure Digital (SD) media, SLC/MLC+ECC (error correcting code) media, and the like.
  • the peripherals subsystem 204 includes Ethernet MAC controllers 232 (i.e. a second type of peripheral controller). Each Ethernet MAC controller 232 can be of the universal 1 Gig design configuration or the 10 G design configuration. The universal 1 Gig design configuration offers a preferred interface description.
  • the Ethernet MAC controllers 232 include a control register set and a DMA (i.e., an AXI master and an AXI slave). Additionally, the peripherals subsystem 204 can include an AXI2 Ethernet controller 233 .
  • the peripherals subsystem 204 includes a DMA controller 234 (i.e., a third type of peripheral controller).
  • the DMA controller 234 includes a master port (AXI) and two APB slave ports (i.e., one for secure communication and the other for non-secure communication). DMA requests are sent to the DMA controller 234 and interrupts are generated from the DMA controller 234 .
  • a basic assumption in regard to the DMA controller 234 is that it needs to be able to transfer data into and out of the L2 cache 214 to ensure that the memory remains coherent and it also needs to access the peripherals of the peripheral subsystem 204 . As such, this implies that the DMA controller 234 needs to connect into two places in the system.
  • the most obvious approach to accomplish this is to provide a DMA fabric and plug the DMA fabric into both the CONFAB (i.e., the connection to the slave ports of the main peripherals) and the ACPFAB (i.e., the ACP fabric) as an additional master, thereby providing connectivity to the PMU (i.e., a portion of the management subsystem 208 ), which allows access to all the slaves and the ACP fabric.
  • An alternative approach is to connect only into the ACP and rely on the L2 cache 214 to pass the access through the SCU 212 and L2 cache 214 and then back out on the core port to the CONFAB (and then reverse). This alternate approach needs to ensure that the SCU 212 understands that those accesses do not create L2 entries.
  • the alternate approach may not be operable in the power-down case (i.e., when only the management processor and switch fabric of the management subsystem 208 are active) and may not allow DMA into the private memory of the management subsystem 208 .
  • these scenarios are acceptable because DMA functionality is useful only for fairly large transfers.
  • because the private memory of the management subsystem 208 is relatively small, the assumption is that associated messages will be relatively small and can be handled by interrupts (INT). If the management subsystem 208 needs/wants a large data transfer, it can power up the whole system except the cores and then DMA is available.
  • the peripherals subsystem 204 includes a SATA controller 236 (i.e. a fourth type of peripheral controller).
  • the SATA controller 236 has two AHB ports: one master for memory access and one slave for control and configuration.
  • the peripherals subsystem 204 also includes PCIe controllers 238 .
  • the PCIe controllers 238 use a DWC PCIe core configuration, as opposed to a shared DBI interface, so that a plurality of AXI interfaces are provided: a master AXI interface, a slave AXI interface and a DBI AXI interface.
  • a XAUI controller 240 of the peripherals subsystem 204 is provided for enabling interfacing with other CPU nodes (e.g., of a common node card).
  • FIGS. 7, 8B, 11 and 12 show block diagrams of the system interconnect subsystem 206 (also referred to herein as the fabric switch).
  • the system interconnect subsystem 206 is a packet switch that provides intra-node and inter-node packet connectivity to Ethernet and within a node cluster (e.g., small clusters up through integration with heterogeneous large enterprise data centers).
  • the system interconnect subsystem 206 provides a high-speed interconnect fabric, providing a dramatic increase in bandwidth and reduction in latency compared to traditional servers connected via 1 Gb Ethernet to a top of rack switch.
  • the system interconnect subsystem 206 is configured to provide adaptive link width and speed to optimize power based upon utilization.
  • An underlying objective of the system interconnect subsystem 206 is to support a scalable, power-optimized cluster fabric of server nodes.
  • the system interconnect subsystem 206 has three primary functionalities. The first one of these functionalities is serving as a high-speed fabric upon which TCP/IP networking is built and upon which the operating system of the node CPU subsystem 202 can provide transparent network access to associated network nodes and storage access to associated storage nodes. The second one of these functionalities is serving as a low-level messaging transport between associated nodes. The third one of these functionalities is serving as a transport for remote DMA between associated nodes.
  • the system interconnect subsystem 206 is connected to the node CPU subsystem 202 and the management subsystem 208 through a bus fabric 250 (i.e., Ethernet AXIs) of the system interconnect subsystem 206 .
  • An Ethernet interface 252 of the system interconnect subsystem 206 is connected to peripheral interfaces (e.g., interfaces 230 , 232 , 234 , 238 ) of the peripheral subsystem 204 .
  • a fabric switch 249 (i.e., a switch-mux) of the system interconnect subsystem 206 switches packets among a plurality of ports.
  • Ports 1-4 are XAUI link ports (i.e., high-speed interconnect interfaces) enabling the node that comprises the SOC 200 to be connected to associated nodes each having their own SOC (e.g., identically configured SoCs).
  • Port 0 can be mux'd to be either a XAUI link port or an Outside Ethernet MAC port.
  • the processor cores 222 (i.e., A9 cores) of the node CPU subsystem 202 and management processor 270 (i.e., M3) of the management subsystem 208 can address MACs 272 , 274 , 276 of the system interconnect subsystem 206 .
  • the processor cores 222 of the node CPU subsystem 202 will utilize first MAC 272 and second MAC 274 and the management processor 270 of the management subsystem 208 will utilize the third MAC 276 .
  • MACs 272 , 274 , 276 can be configured specifically for their respective application (e.g., the first and second MACs 272 , 274 providing 1 G and/or 10 G Ethernet functionality and the third MAC 276 providing DMA functionality).
  • the system interconnect subsystem 206 provides architectural support for various functionalities of the management subsystem 208 .
  • the system interconnect subsystem 206 supports network proxying functionality.
  • network proxy functionality allows the management processor of a CPU node to process or respond to network packets received thereby while the respective processing cores are in low-power “sleep” states and intelligently wake one or more of the respective processing cores when further network processing is needed thereby allowing the CPU node to maintain network presence.
  • the system interconnect subsystem 206 supports the ability for the management processor of a CPU node to optionally snoop locally initiated broadcasts (e.g., commonly to capture gratuitous ARPs).
  • the system interconnect subsystem 206 can be implemented in a manner that enables an ability to measure and report on utilization on each of the links provided via the system interconnect subsystem 206 .
  • a global configuration register (FS_GLOBAL_CFG) can be configured to enable utilization and statistics measurement, to select the utilization measurement time period, and to set the statistics counter interrupt threshold.
  • bandwidth alarm registers can allow software to configure a plurality of thresholds that, when crossed, cause a respective bandwidth alarm alert (e.g., one that can generate an interrupt to the management processor 270 ).
  • Bandwidth alarms can be enabled in a channel configuration register. Transmit and receive bandwidth on each of the MAC ports can be read from a channel bandwidth register.
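  • As an illustration of the utilization and alarm mechanism just described, the C sketch below enables statistics measurement in a global configuration register and arms a per-channel bandwidth alarm. Only the register name FS_GLOBAL_CFG comes from the description; the base address, offsets, and field layouts are assumptions.

```c
#include <stdint.h>

/* Hypothetical register block for the fabric switch; only the register
 * name FS_GLOBAL_CFG comes from the description, the offsets and field
 * layouts below are illustrative assumptions. */
#define FS_BASE              0x80000000u
#define FS_GLOBAL_CFG        (*(volatile uint32_t *)(FS_BASE + 0x000))
#define FS_BW_ALARM(ch)      (*(volatile uint32_t *)(FS_BASE + 0x100 + 4u * (ch)))
#define FS_CH_CFG(ch)        (*(volatile uint32_t *)(FS_BASE + 0x200 + 4u * (ch)))
#define FS_CH_BW(ch)         (*(volatile uint32_t *)(FS_BASE + 0x300 + 4u * (ch)))

#define FS_CFG_STATS_EN      (1u << 0)   /* enable utilization/statistics   */
#define FS_CFG_PERIOD_SHIFT  4           /* measurement time period select  */
#define FS_CH_CFG_ALARM_EN   (1u << 0)   /* per-channel bandwidth alarm     */

/* Enable link-utilization measurement and arm a bandwidth alarm that can
 * interrupt the management processor when the threshold is crossed. */
void fabric_enable_bw_alarm(unsigned ch, uint32_t period, uint32_t threshold)
{
    FS_GLOBAL_CFG = FS_CFG_STATS_EN | (period << FS_CFG_PERIOD_SHIFT);
    FS_BW_ALARM(ch) = threshold;
    FS_CH_CFG(ch) |= FS_CH_CFG_ALARM_EN;
}

/* Read back transmit/receive bandwidth for a MAC port's channel. */
uint32_t fabric_read_bw(unsigned ch)
{
    return FS_CH_BW(ch);
}
```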
  • the management subsystem 208 is coupled directly to the node CPU subsystem 202 and directly to the system interconnect subsystem 206 .
  • An inter-processor communication (IPC) module (i.e., IPCM) 281 of the management subsystem 208 is coupled to the SCU 212 of the node CPU subsystem 202 , thereby directly coupling the management subsystem 208 to the node CPU subsystem 202 .
  • An AXI fabric 282 of the IPCM 281 is coupled to the bus fabric 250 of the system interconnect subsystem 206 , thereby directly coupling the management subsystem 208 to the system interconnect subsystem 206 .
  • the management processor 270 of the management subsystem 208 is preferably, but not necessarily, an ARM Cortex brand M3 microprocessor.
  • the management processor 270 can have private ROM and private SRAM.
  • the management processor 270 is coupled to shared peripherals 286 and private peripherals 288 of the management subsystem 208 .
  • the private peripherals 288 are only accessible by the management processor 270 .
  • the shared peripherals 286 are accessible by the management processor 270 , each of the processing cores 222 , and a debug unit 290 of the SOC 200 .
  • the management processor 270 can see the master memory map, with only DRAM requiring mapping.
  • the management processor 270 utilizes GPIO 292 and I2C 294 (i.e., private peripherals) for controlling power and clocks in the node.
  • Main code and working space for the management processor 270 are on the local Dcode and Icode buses but code can be executed from the system bus (i.e., the main ROM 295 & RAM 296 and, if necessary, external memory).
  • the IPCM 281 which is used for software communication between the management processor 270 and the processing cores 222 , can include 8 mailboxes (e.g., each with 7 data registers) and 8 interrupts (e.g., interrupts 0:3 are sent to the management processor 270 and interrupts 4:7 are sent to the GIC 220 of the node CPU subsystem 202 ).
  • the management processor 270 can utilize a system management interface (SMI) functionality to carry IPMI (i.e., intelligent platform management interface) traffic (e.g., to/from the processing cores 222 ).
  • SMI system management interface
  • IPMI communication via SMIC (Server Management Interface Chip) between the processing cores 222 and the management processor 270 is implemented with a private communication channel that leverages the IPCM 281 . This implements the SMIC protocol with the mailbox features of the IPCM 281 coupled with memory buffers.
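  • A minimal C sketch of the IPCM-based channel described above is shown below. The counts (8 mailboxes, 7 data registers each) and the interrupt routing (interrupts 0:3 to the management processor, 4:7 to the GIC) come from the description; the register layout, base address, and send protocol are assumptions.

```c
#include <stdint.h>

/* Hypothetical layout of one IPCM mailbox: the counts (8 mailboxes, 7 data
 * registers each) come from the description above; field names, the base
 * address, and the send protocol are illustrative assumptions. */
struct ipcm_mailbox {
    volatile uint32_t source;    /* owning/claiming agent          */
    volatile uint32_t dest;      /* target interrupt set           */
    volatile uint32_t mode;      /* e.g., auto-acknowledge options */
    volatile uint32_t send;      /* writing 1 raises the interrupt */
    volatile uint32_t data[7];   /* message payload                */
};

#define IPCM_BASE   ((struct ipcm_mailbox *)0x40020000u)
#define IPCM_NMBOX  8

/* Interrupts 0..3 are routed to the management processor (M3) and 4..7
 * to the GIC of the node CPU subsystem, per the description above. */
#define IPCM_IRQ_TO_M3(n)   (n)        /* n in 0..3 */
#define IPCM_IRQ_TO_GIC(n)  ((n) + 4)  /* n in 0..3 */

/* Send a short SMIC-style IPMI fragment from a processing core to the
 * management processor using one mailbox. */
void ipcm_send_to_m3(unsigned mbox, const uint32_t payload[7])
{
    struct ipcm_mailbox *mb = &IPCM_BASE[mbox % IPCM_NMBOX];

    for (unsigned i = 0; i < 7; i++)
        mb->data[i] = payload[i];
    mb->dest = 1u << IPCM_IRQ_TO_M3(0);  /* raise interrupt 0 at the M3 */
    mb->send = 1;
}
```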
  • One capability that leverages the management processor 270 having control and visibility of all peripherals and controllers is that the management processor 270 can field error interrupts from each of the peripheral controllers.
  • DRAM errors reported by the DRAM controller generate interrupts and the management processor 270 can log and report the errors.
  • the management processor 270 can then attempt dynamic recovery and improvement by techniques including, but not limited to, increasing the voltage to the DRAM controller or the DIMMs in an attempt to reduce bit errors.
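  • The C sketch below illustrates the error-handling behavior described above (log and report a DRAM error, then attempt dynamic recovery by raising the DRAM voltage). The helper functions and thresholds are hypothetical placeholders, not an actual driver API.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical helpers; the behavior (log the error, report it, then try
 * raising the DIMM/controller voltage to reduce bit errors) is taken from
 * the description above, but these APIs are illustrative. */
extern uint32_t ddr_read_error_status(void);        /* assumed controller CSR read */
extern void     ipmi_log_event(uint32_t code);      /* assumed IPMI event logging  */
extern bool     pmic_raise_dram_voltage_step(void); /* assumed PMIC helper         */

#define DDR_ERR_CORRECTABLE   (1u << 0)
#define DDR_ERR_UNCORRECTABLE (1u << 1)

static unsigned correctable_errors;

/* Error interrupt fielded by the management processor. */
void dram_error_irq_handler(void)
{
    uint32_t status = ddr_read_error_status();

    if (status & DDR_ERR_CORRECTABLE) {
        correctable_errors++;
        ipmi_log_event(status);
        /* Dynamic recovery attempt: nudge the DRAM rail up one step if
         * the correctable-error count keeps climbing (threshold assumed). */
        if (correctable_errors > 100)
            (void)pmic_raise_dram_voltage_step();
    }
    if (status & DDR_ERR_UNCORRECTABLE) {
        ipmi_log_event(status);
        /* Escalate, e.g., notify the node CPU or mark the node degraded. */
    }
}
```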
  • management processor 270 has visibility into all buses, peripherals, and controllers. It can directly access registers for statistics on all buses, memory controllers, network traffic, fabric links, and errors on all devices without disturbing the processing cores 222 or even their having knowledge of the access. This allows for billing use cases where statistics can be gathered securely by the management processor without having to consume core processing resources (e.g., the processing cores 222 ) to gather them, and in a manner that cannot be altered by the processing cores 222 .
  • An alternative Coresight/JTAG debug bus is coupled to the management processor 270 .
  • This Coresight/JTAG debug bus serves as an infrastructure that provides an alternate back door interface into all on-chip devices, even if the main busses are unavailable. This also provides for security and intrusion detection use cases where the management processor can detect anomalous accesses and disable internal busses or controllers for self-protection. Additionally, leveraging this pervasive access, the management processor can read all on-chip and CPU registers and memory images for post-mortem analysis for debug.
  • the management processor 270 has a plurality of responsibilities within its respective node.
  • One responsibility of the management processor 270 is booting an operating system of the node CPU 210 .
  • Another responsibility of the management processor 270 is node power management.
  • the management subsystem 208 can also be considered to comprise a power management unit (PMU) for the node and thus is sometimes referred to as such.
  • the management subsystem 208 controls power states to various power domains of the SOC 200 (e.g., to the processing cores 222 by regulating clocks).
  • the management subsystem 208 is an “always-on” power domain.
  • the management processor 270 can turn off the clocks to the management processor 270 and/or its private and/or shared peripherals to reduce the dynamic power.
  • Another responsibility of the management processor 270 is varying synchronized clocks of the node CPU subsystem 202 (e.g., of the node CPU 210 and the SCU 212 ).
  • Another responsibility of the management processor 270 is providing baseboard management control (BMC) and IPMI functionalities including console virtualization.
  • Another responsibility of the management processor 270 is providing router management.
  • Another responsibility of the management processor 270 is acting as proxy for the processing cores 222 for interrupts and/or for network traffic.
  • the GIC 220 of the node CPU subsystem 202 will cause interrupts intended to be received by a particular one of the processing cores 222 to be reflected to the management processor 270 , allowing the management processor 270 to wake the particular one of the processing cores 222 when an interrupt needs to be processed by that processing core while it is sleeping, as will be discussed below in greater detail.
  • Another responsibility of the management processor 270 is controlling phase-locked loops (PLLs). A frequency is set in the PLL and the PLL is monitored for lock. Once lock is achieved, the output is enabled to the clock control unit (CCU). The CCU is then signaled to enable the function.
  • the management processor 270 is also responsible for selecting the dividers but the actual change over will happen in a single cycle in hardware.
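  • A C sketch of that PLL control sequence (program the frequency, monitor for lock, enable the output to the CCU, then signal the CCU to enable the function) is shown below; the register names and addresses are assumptions.

```c
#include <stdint.h>

/* Hypothetical PLL/CCU registers; the sequence (program the PLL, wait for
 * lock, enable the output to the clock control unit, then tell the CCU to
 * enable the function) follows the description above. */
#define PLL_CFG        (*(volatile uint32_t *)0x40030000u)
#define PLL_STATUS     (*(volatile uint32_t *)0x40030004u)
#define PLL_OUT_EN     (*(volatile uint32_t *)0x40030008u)
#define CCU_ENABLE     (*(volatile uint32_t *)0x40031000u)

#define PLL_STATUS_LOCKED  (1u << 0)

/* Reprogram a PLL and hand the new clock to the CCU.  The divider change
 * itself completes in a single hardware cycle once the CCU is signalled. */
int pll_set_frequency(uint32_t cfg_word, uint32_t ccu_func_bit)
{
    PLL_CFG = cfg_word;                       /* set the target frequency   */

    for (unsigned i = 0; i < 100000; i++) {   /* monitor for lock           */
        if (PLL_STATUS & PLL_STATUS_LOCKED) {
            PLL_OUT_EN = 1;                   /* enable output to the CCU   */
            CCU_ENABLE |= ccu_func_bit;       /* signal CCU to enable func  */
            return 0;
        }
    }
    return -1;                                /* lock never achieved        */
}
```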
  • Another responsibility of the management processor 270 is controlling a configuration of a variable internal supply used to supply electrical power to the node CPU subsystem 202 .
  • For example, a plurality of discrete power supplies (e.g., some being of a different power supplying specification than others, such as having different power capacity levels) can be selectively activated and deactivated as necessary for meeting the power requirements of the node CPU subsystem 202 (e.g., based on power demands of the processing cores 222 , the SCU 212 , and/or the controller of the L2 cache 214 ), with a separate power control mechanism (e.g., a switch) for each supply.
  • Another responsibility of the management processor 270 is managing a real-time-clock (RTC) that exists on a shared peripheral bus of the management subsystem 208 .
  • Another responsibility of the management processor 270 is managing a watchdog timer on a private peripheral bus of the management subsystem 208 to aid in recovery from catastrophic software failures.
  • Still another responsibility of the management processor 270 is managing an off-board EEPROM that is accessible via the I2C 292 on the private peripheral bus of the management subsystem 208 .
  • the off-board EEPROM device is used to store all or a portion of boot and node configuration information as well as all or a portion of IPMI statistics that require non-volatile storage.
  • Each of these responsibilities of the management processor 270 is an operational functionality managed by the management processor 270 . Accordingly, operational management functionality of each one of the subsystems refers to two or more of these responsibilities being managed by the management processor 270 .
  • the management processor 270 includes a plurality of application tasks 302 , an operating system (OS)/input-output (I/O) abstraction layer 304 , a real-time operating system (RTOS) 306 , and device drivers 308 for the various devices.
  • the operating system (OS)/input-output (I/O) abstraction layer 304 is a software layer that resides between the application tasks 302 and the real-time operating system (RTOS) 306 .
  • the operating system (OS)/input-output (I/O) abstraction layer 304 aids in porting acquired software into this environment.
  • the OS abstraction portion of the operating system (OS)/input-output (I/O) abstraction layer 304 provides POSIX-like message queues, semaphores and mutexes.
  • the device abstraction portion of the operating system (OS)/input-output (I/O) abstraction layer 304 provides a device-transparent open/close/read/write interface much like the POSIX equivalent for those devices used by ported software.
  • the real-time operating system (RTOS) 306 resides between the operating system (OS)/input-output (I/O) abstraction layer 304 and the device drivers 308 .
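  • The C interface sketch below suggests what the OS/I/O abstraction layer described above might look like. The facilities (POSIX-like message queues, semaphores, mutexes, and device-transparent open/close/read/write calls) come from the description; the type and function names are illustrative assumptions.

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative shape of the OS/I/O abstraction layer described above: the
 * facilities (POSIX-like queues, semaphores, mutexes; open/close/read/write
 * device access) come from the text, the names are assumptions. */

typedef struct osal_mq    osal_mq_t;
typedef struct osal_sem   osal_sem_t;
typedef struct osal_mutex osal_mutex_t;

/* POSIX-like primitives layered over the RTOS. */
int osal_mq_send(osal_mq_t *q, const void *msg, size_t len);
int osal_mq_recv(osal_mq_t *q, void *msg, size_t len, uint32_t timeout_ms);
int osal_sem_wait(osal_sem_t *s, uint32_t timeout_ms);
int osal_sem_post(osal_sem_t *s);
int osal_mutex_lock(osal_mutex_t *m);
int osal_mutex_unlock(osal_mutex_t *m);

/* Device-transparent I/O, much like the POSIX equivalents, so that ported
 * software does not need to know which driver sits underneath. */
int  osal_open(const char *dev_name, int flags);
int  osal_close(int fd);
long osal_read(int fd, void *buf, size_t len);
long osal_write(int fd, const void *buf, size_t len);
```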
  • the application tasks 302 include, but are not limited to, a boot task 310 , a system management task 312 , a power management task 314 , a serial concentrator task 316 , a frame switch management task 318 (sometimes called routing management), and a network proxy task 320 .
  • the boot task 310 provides the function of booting the processing cores 222 and the management processor 270 .
  • the system management task 312 provides the function of integrated operation of the various subsystems of the SOC 200 .
  • the power management task 314 provides the function of managing power utilization of the various subsystems of the SOC 200 .
  • the serial concentrator task 316 provides the function of managing communication from the other application tasks to a system console.
  • This console may be directly connected to the SOC node via a UART (i.e., a universal asynchronous receiver/transmitter) or it can be connected to another node in the system.
  • the frame switch management task 318 (sometimes called routing management) is responsible for configuring and managing routing network functionality.
  • the network proxy task 320 maintains network presence of one or more of the processing cores 222 while in a low-power sleep/hibernation state and intelligently wakes one or more of the processing cores 222 when further processing is required.
  • Device drivers 308 are provided for all of the devices that are controlled by the management processor 270 .
  • the device drivers 308 include, but are not limited to, an I2C driver 322 , a SMI driver 324 , a flash driver 326 (e.g., NAND type storage media), a UART driver 328 , a watchdog time (i.e., WDT) driver 330 , a general purpose input-output (i.e., GPIO) driver 332 , an Ethernet driver 334 , and an IPC driver 336 .
  • these drivers are implemented as simple function calls. In some cases where needed for software portability, however, a device-transparent open/close/read/write type I/O abstraction is provided on top of these functions.
  • the node CPU 210 only runs one boot loader before loading the operating system.
  • the ability for the node CPU 210 to only run one boot loader before loading the operating system is accomplished via the management processor 270 preloading a boot loader image into main memory (e.g., DRAM) of the node CPU subsystem before releasing the node CPU 210 from a reset state.
  • the SOC 200 can be configured to use a unique boot process, which includes the management processor 270 loading a suitable OS boot loader (e.g., U-Boot) into main memory, starting the node CPU 210 main OS boot loader (e.g., UEFI or U-Boot), and then loading the OS.
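  • The C sketch below illustrates this boot flow from the management processor's side: preload the boot loader image into the node CPU's main memory, then release the node CPU from reset. The flash API, load address, and reset register are assumptions.

```c
#include <stdint.h>

/* Illustrative sketch of the boot flow described above, run on the
 * management processor: copy a boot loader image (e.g., U-Boot) into the
 * node CPU's main memory and then release the node CPU from reset.  The
 * addresses, flash API, and reset register are assumptions. */
extern int flash_read(uint32_t offset, void *dst, uint32_t len); /* assumed */

#define NODE_DRAM_LOAD_ADDR  ((void *)0x00008000u)   /* assumed entry point */
#define BOOTLOADER_FLASH_OFF 0x00020000u
#define BOOTLOADER_MAX_SIZE  (512u * 1024u)

#define RESET_CTRL           (*(volatile uint32_t *)0x40040000u)
#define RESET_NODE_CPU_BIT   (1u << 0)

int m3_boot_node_cpu(void)
{
    /* Preload the single boot loader image so the node CPU runs only one
     * boot loader before loading the operating system. */
    if (flash_read(BOOTLOADER_FLASH_OFF, NODE_DRAM_LOAD_ADDR,
                   BOOTLOADER_MAX_SIZE) != 0)
        return -1;

    /* Release the node CPU from its reset state; it begins executing the
     * preloaded boot loader, which in turn loads the OS. */
    RESET_CTRL &= ~RESET_NODE_CPU_BIT;
    return 0;
}
```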
  • the underlying principle of network proxy functionality is maintaining network presence of each one of the processing cores 222 while one or more of the processing cores 222 is in a low-power sleep/hibernation state and intelligently waking the one or more sleeping processing cores 222 when further processing associated with them is required.
  • the network proxy task 320 monitors network events of each of the processing cores 222 and, when all or a particular one of the processing cores 222 is in a dormant or shutdown state, the network proxy function enables the management processor 270 to act as proxy for the processing core(s) 222 that it can reasonably do this for, and causes the management processor 270 to wake up the processing core(s) 222 when the management processor 270 receives a network event that it is unable to proxy for.
  • In support of network proxy functionality, a port remapping CSR (i.e., a control and status register) provides remapping of Port IDs (i.e., a portRemap function). For example, when a switch of the SOC 200 is to deliver a packet to the MAC0 port 272 (shown in FIG. 12 ), this port remapping CSR allows software to remap MAC0 port 272 to the management processor 270 and have the packet delivered to the management processor 270 for network proxy processing.
  • This remapping CSR can also be used to remap traffic destined for the MAC1 port 274 (shown in FIG. 12 ) to MAC0 port 272 .
  • This CSR port remap function is a key SOC feature that facilitates the management processor implementation of network proxy functionality within a SOC node.
  • a typical use sequence for implementing network proxy functionality in accordance with an embodiment of the present invention begins with the management processor 270 maintaining the IP to MAC address mappings for the MAC0 port 272 and the MAC1 port 274 . This can be done via either explicit communication of these mappings from an instantiation of the operating system running on the node CPU 210 to the management processor 270 or can be done implicitly by having the management processor 270 snoop local gratuitous ARP broadcasts.
  • the node CPU 210 coordinates with the management processor 270 for causing one or more of the processing cores 222 to go to a low power dormant state.
  • the management processor 270 sets up the Port ID remapping CSR to route MAC0 port 272 and MAC1 port 274 traffic to the management processor 270 . Thereafter, the management processor 270 processes any incoming packets that are transmitted for reception by the MAC0 port 272 or MAC1 port 274 .
  • the management processor can implement various categories of packet processing.
  • a first category of packet processing includes responding to some classes of transactions (e.g. an address resolution protocol (ARP) response).
  • a second category of packet processing includes dumping and ignoring some classes of packets.
  • a third category of packet processing includes deciding that one or more of the processing cores 222 that is sleeping must be woken to process some classes of packets.
  • the management processor 270 will wake one or more of the processing cores 222 that is/are sleeping, undo the Port ID remapping register, and re-send the packets (e.g., through a switch where they were initially received) so that the packets are rerouted back to the MAC port for which they were originally destined (e.g., MAC0 port 272 or MAC1 port 274 ).
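  • The C sketch below ties the above use sequence together: remap the MAC ports to the management processor, classify each incoming packet into one of the three processing categories, and, when a core must be woken, undo the remapping and re-send the packet. The remap CSR layout, classification helpers, and wake API are assumptions.

```c
#include <stdint.h>
#include <stdbool.h>

/* Illustrative network-proxy flow on the management processor, following
 * the use sequence above.  The remap CSR layout, packet classification
 * helpers, and wake API are assumptions. */
#define PORT_REMAP_CSR   (*(volatile uint32_t *)0x80001000u)
#define REMAP_MAC0_TO_M3 (1u << 0)
#define REMAP_MAC1_TO_M3 (1u << 1)

extern bool pkt_is_arp_request(const void *pkt, uint32_t len);         /* assumed */
extern void pkt_send_arp_reply(const void *pkt, uint32_t len);         /* assumed */
extern bool pkt_can_be_dropped(const void *pkt, uint32_t len);         /* assumed */
extern void wake_processing_cores(void);                               /* assumed */
extern void pkt_resend_to_original_mac(const void *pkt, uint32_t len); /* assumed */

void proxy_enter(void) { PORT_REMAP_CSR |=  (REMAP_MAC0_TO_M3 | REMAP_MAC1_TO_M3); }
void proxy_exit(void)  { PORT_REMAP_CSR &= ~(REMAP_MAC0_TO_M3 | REMAP_MAC1_TO_M3); }

/* Called for each packet delivered to the management processor while the
 * processing cores sleep. */
void proxy_handle_packet(const void *pkt, uint32_t len)
{
    if (pkt_is_arp_request(pkt, len)) {
        pkt_send_arp_reply(pkt, len);          /* respond on the core's behalf */
    } else if (pkt_can_be_dropped(pkt, len)) {
        /* dump and ignore this class of packet */
    } else {
        wake_processing_cores();               /* class needs a real core      */
        proxy_exit();                          /* undo the Port ID remapping   */
        pkt_resend_to_original_mac(pkt, len);  /* reroute to MAC0/MAC1         */
    }
}
```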
  • the management processor 270 can support Wake-On-LAN (WOL) packets. To this end, the management processor 270 will acquire the WOL packets, which are broadcast as opposed to being transmitted for reception by a specific recipient. The management processor 270 will know the MAC addresses for the other MACs on the node and, as necessary/appropriate, will be able to wake up the processing cores 222 .
  • There are preferably multiple power domains in the SOC 200 .
  • These power domains are implemented with level shifters, clamps, and switches. Examples of these power domains include, but are not limited to, a plurality of power domains within the node CPU subsystem 202 that can each be transitioned between two or more power states, a plurality of power domains within the peripheral subsystem 204 that can each be transitioned between two or more power states, a plurality of power domains within the system interconnect subsystem 206 that can each be transitioned between two or more power states, and a single always-on power domain consisting of the management subsystem 208 .
  • the node CPU subsystem 202 can be configured to include 11 power domains (e.g., four processing core power domains, four media processing engine power domains, a SCU power domain, a Debug PTM power domain and a L1 BIST (i.e., built-in self-test) power domain).
  • the peripheral subsystem 204 can be configured to include 2 power domains (e.g., a first power domain for the PCIe, SATA, eMMC, NAND, and DDR controllers, and a second power domain for the DDR PHY).
  • the system interconnect subsystem 206 can be configured to include a first power domain for shared logic and a first plurality of XAUI links, and a second power domain for a second plurality of XAUI links and the outside MAC port.
  • power domains of the SOC 200 can be defined by and/or within the processing cores, the SCU, the peripheral interfaces and/or controllers, various storage media, the management processor, XAUI phys, and the switch fabric.
  • a debug subsystem of the SOC 200 can be an additional power domain.
  • the management subsystem 208 controls the reset and power for the various power domains of the SOC.
  • the management subsystem 208 is an “always-on” power domain and the power domains of the remaining subsystems can be selectively transitioned between two or more power states (e.g., through the use of registers which are written by the management processor 270 ).
  • each power domain generally has three signals that can be controlled by registers in a respective SOC subsystem.
  • A run state can be implemented at one of a number of voltage points and hence frequencies.
  • A WFI state, which is also known as a clock gated or wait-for-interrupt state, is a state where the clocks are gated off but the logic remains in a state from which it can resume quickly.
  • A dormant state is when a domain is powered down but its state is stored (e.g., by software) prior to removing power.
  • An off state is when all power to a domain is removed.
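  • A minimal sketch of the per-domain controls described above is given below; the three per-domain signals (reset, isolate, power switch) follow the description, but the register layout, field names, and sequencing details are assumptions for illustration.

```c
/* Illustrative model of per-domain power control; the register layout and
 * sequencing are assumptions for the sketch, not the actual SOC design. */
#include <stdint.h>

enum domain_state { DOM_RUN, DOM_WFI, DOM_DORMANT, DOM_OFF };

struct power_domain_ctl {
    volatile uint32_t reset;      /* assert/deassert domain reset  */
    volatile uint32_t isolate;    /* clamp outputs while unpowered */
    volatile uint32_t power_up;   /* close/open the power switch   */
};

/* Typical power-down order: isolate first, then cut power, hold reset. */
static void domain_power_off(struct power_domain_ctl *d)
{
    d->isolate  = 1;
    d->power_up = 0;
    d->reset    = 1;
}

/* Power-up reverses the order: restore power, release isolation, release reset. */
static void domain_power_on(struct power_domain_ctl *d)
{
    d->power_up = 1;
    d->isolate  = 0;
    d->reset    = 0;
}
```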
  • States down to dormant are controlled primarily by the WFI and power status registers of the node CPU 210 and/or by IPC 280 operations sent from software to the management processor 270 that modify the processing core power state and clock frequency. States below dormant are controlled by operations sent to the management processor 270, either ahead of time by software (i.e., before the state is entered) or based on system loading. Software will inform the management processor 270 what the target state is before it enters a low power state (below dormant). The power down state is reached only when all of the power sources are removed from the system.
  • Each one of the processing cores 222 can be in a number of states independent from the others. Furthermore, if the processing cores 222 are all in a low power state then the L2 cache 216 and SCU 214 can potentially transition to dormant and off low power states. The processing cores do not power down their L1 caches until the entire subsystem is moving into a low power state (which implies that the ACP port and debug are also not in use). Table 2 below provides examples of various power states supported in the node CPU 210 .
  • When a processing core is in the ON state, it is powered up and running at some run frequency. When a core is in the ON slow state, at least one of the cores is running, but all of those that are running are running at a lower than normal voltage and frequency point. The SCU and L2 are also running at this lower frequency point. Functionally, the ON slow state is the same as the ON state. Control of the ON state and the ON slow state is implemented by the management processor 270 .
  • the IPC 280 sends an operation to the management processor 270 indicating that the processing cores 222 can afford to run slower than normal and hence voltage and clock frequency can be sequenced lower asynchronously to software; similarly, an increase-frequency event can also be sent. Frequency changes have implications for the periphclock within the node CPU. Normally this clock is synchronous and a fixed divide of the coreclock, but in order to maintain correct timing periods, the core-to-periphclock ratio will change as the frequency of the core changes.
  • the node CPU subsystem 202 can also be voltage and frequency scaled.
  • a single voltage and frequency scaling applies across the entire node CPU subsystem 202 .
  • individual functional blocks and/or subsystem elements cannot be individually set (e.g., on a per-core basis).
  • the subsystem elements that get uniformly voltage and frequency scaled include the processing cores 222 , the L1 caches 224 , 226 , the media processing engine of each one of the processing cores 222 , the SCU 212 , and the L2 cache controller 216 .
  • Control for voltage scaling can be implemented via an interface to an external PMIC.
  • Control for frequency scaling can be implemented via PLL control.
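  • The following sketch illustrates how such subsystem-wide voltage and frequency scaling might be sequenced, assuming a hypothetical PMIC helper for voltage and a PLL helper for frequency; the operating points in the table are placeholders, not actual silicon values.

```c
/* Hedged sketch of subsystem-wide DVFS: one voltage/frequency pair applies
 * to the whole node CPU subsystem (no per-core setting). The PMIC and PLL
 * helpers and the operating points below are illustrative assumptions. */
#include <stdint.h>

struct opp { uint32_t mv; uint32_t mhz; };   /* operating performance point */

const struct opp opp_table[] = {             /* placeholder values only */
    { 1100, 1400 },   /* ON        */
    {  900,  800 },   /* ON slow   */
    {  800,  400 },   /* low power */
};

extern void pmic_set_core_voltage_mv(uint32_t mv);  /* external PMIC interface */
extern void pll_set_core_freq_mhz(uint32_t mhz);    /* waits for PLL lock internally */

void node_cpu_set_opp(const struct opp *next, const struct opp *cur)
{
    if (next->mv > cur->mv) {        /* raising: voltage first, then frequency */
        pmic_set_core_voltage_mv(next->mv);
        pll_set_core_freq_mhz(next->mhz);
    } else {                         /* lowering: frequency first, then voltage */
        pll_set_core_freq_mhz(next->mhz);
        pmic_set_core_voltage_mv(next->mv);
    }
}
```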
  • Power management of silicon-based components of the SOC 200 is of particular interest with respect to the techniques used to accomplish it.
  • Maximum performance of silicon-based components is achieved by high clock frequency at high voltage and reduced power consumption is provided by reducing clock frequency. As the voltage is lowered, the transistors of such silicon-based components become weaker and the frequency of operation decreases.
  • Total power consumption of silicon-based components is the sum of dynamic power consumption and leakage power consumption.
  • Leakage power consumption refers to power burned by transistors when they are not switching and dynamic power consumption refers to power consumption directly related to switching operations.
  • the leakage power consumption is highly dependent on temperature and voltage of the component and it is common for leakage power consumption to equal or exceed dynamic power consumption. Because power consumption of silicon-based components is a function of the clock frequency and the square of operating voltage, a change in voltage will typically have a much more pronounced effect on power consumption than will a change in clock frequency. For example, a 27% reduction in operating voltage for a given clock frequency corresponds to 47% less power whereas a 27% reduction in clock frequency corresponds to a 27% reduction in power for a given operating voltage.
  • useful power reduction techniques in regard to leakage power consumption can include turning power off, reducing voltage, and reducing temperature through use of heat sinks, fans, packaging, etc., whereas useful power reduction techniques in regard to dynamic power consumption can include lowering clock frequencies, turning off clocks, and reducing operating voltage.
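  • The arithmetic behind the voltage-versus-frequency comparison above can be checked with a few lines of C, using the relation that dynamic power scales with frequency times the square of voltage.

```c
/* Quick check of the arithmetic above: dynamic power scales with f * V^2,
 * so a 27% voltage cut saves far more than a 27% frequency cut. */
#include <stdio.h>

int main(void)
{
    double v_scale = 0.73, f_scale = 0.73;

    double p_voltage_cut   = v_scale * v_scale;  /* f fixed: 0.73^2 = 0.533 -> ~47% less */
    double p_frequency_cut = f_scale;            /* V fixed: 0.73 -> 27% less            */

    printf("27%% lower voltage:   %.0f%% power saved\n", (1 - p_voltage_cut) * 100);
    printf("27%% lower frequency: %.0f%% power saved\n", (1 - p_frequency_cut) * 100);
    return 0;
}
```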
  • the management subsystem 208 is an “always-on” domain.
  • the PMU can, however, turn off clocks to the management processor 270 and/or its peripherals (e.g., Private and/or Shared) to reduce dynamic power consumption.
  • the management processor 270 is typically in WFI (wait-for-interrupt) state. In this state, the clock of the management subsystem 208 is gated to the management processor 270 but still clocks the interrupt controller of the management processor 270 (e.g., the nested vectored interrupt controller (NVIC)). When the NVIC receives an interrupt, it will cause the clocks to the management processor 270 to be turned back on and the management processor 270 will service the interrupt.
  • Implementing power management within the node CPU 210 can include the PMU 281 selectively controlling voltage and frequency levels at which components of the node CPU 210 operate. All of the processing cores 222 are clocked at the same frequency and operate at nominally the same voltage (e.g., powered by a common power supply), but the PMU can change this frequency and/or the voltage to alter power consumption. Furthermore, to alter leakage power consumption, the PMU 281 can gate the power supply of each one of the processing cores 222 and/or gate clocks to powered-off domains.
  • the operating system controls which one(s) of the processing cores 222 are being used and whether the unused ones of the processing cores 222 are in WFI/WFE or shutdown mode (e.g., via writes to the power status register of the SCU 212 and execution of WFI/WFE instructions).
  • Table 3 below shows various power modes for the node CPU 210 .
  • the media processing engine of each one of the processing cores 222 occupies a significant amount of die space. As such, it has a fair amount of leakage current that translates to a corresponding amount of leakage power consumption.
  • the SOC 200 can be implemented in a manner whereby a scalar floating point unit (FPU) is provided in the node CPU power domain and whereby the media processing engine associated with one of the processing cores 222 is in a separate power domain.
  • an XML configuration associated with the node will have an entry that indicates whether a media processing engine is to be powered on or off during boot configuration.
  • an API would be exposed both on the node CPU 210 and via an IPMI interface on the management processor 270 to allow the media processing engine associated with one of the processing cores 222 to be selectively powered up or down. If the power state condition is set on the management processor 270 , this setting could be persisted and made the default for a subsequent boot instance.
  • the media processing engines are powered up only when instruction types associated with the media processing engines are needed. To this end, the strategy would start with the media processing engines powered off and isolated.
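  • A hedged sketch of this power-gating strategy follows; the boot-configuration flag, domain naming, and persistence helper are hypothetical stand-ins for the XML entry and the node CPU/IPMI API described above.

```c
/* Illustrative sketch of per-core media engine power control. The
 * boot-config flag, domain names, and helpers are hypothetical; the real
 * API would be exposed on the node CPU and via IPMI on the management
 * processor. */
#include <stdio.h>
#include <stdbool.h>

extern bool boot_cfg_media_engine_enabled(int core);  /* from node XML config */
extern void domain_set_power(const char *domain, bool on);
extern void persist_default(const char *key, bool value);

/* Applied during boot: media engines start off and isolated unless the
 * node configuration asks for them. */
void media_engine_boot_policy(int num_cores)
{
    for (int core = 0; core < num_cores; core++) {
        char name[32];
        snprintf(name, sizeof name, "media_engine%d", core);
        domain_set_power(name, boot_cfg_media_engine_enabled(core));
    }
}

/* Runtime request, e.g. arriving over IPMI; optionally persisted as the
 * default for the next boot instance. */
void media_engine_set(int core, bool on, bool make_default)
{
    char name[32];
    snprintf(name, sizeof name, "media_engine%d", core);
    domain_set_power(name, on);
    if (make_default)
        persist_default(name, on);
}
```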
  • the peripheral subsystem 204 can include one or more power domains that are controlled by the PMU 281 .
  • These peripherals include controllers (i.e., interfaces) for PCIe, SATA, NAND, eMMC, and DDR storage media. In one implementation, they are all within a common power domain that has a single reset, isolate, and power-up signaling structure. In another implementation, these controllers can reside in one of a plurality of different power domains. For example, it may be beneficial to have the DDR controller in a separate domain from the other peripheral controllers for allowing the DDR to be selectively accessed by the management processor 270 while other peripherals are in a powered down state.
  • the PMU 281 can also include a PCI power management module that provides for PCI-compatible active state power management.
  • the PCI power management module is powered up while the node CPU 210 is in a lower power state, contains context that is reset only at power up, and can contain a sideband wake mechanism for the SOC node.
  • the system interconnect subsystem 206 can include two or more power domains that are controlled by the PMU 281 .
  • a portion of the system interconnect subsystem 206 that is considered to be the fabric switch can be divided into two power domains. These power domains are partitioned so that power to the fabric switch can be optimized for leaf nodes that only have 1 or 2 links, to reduce leakage power consumption.
  • a first power domain can contain MAC0, MAC1, MAC2, Link1, Link2, the Switch, Switch Arbitration logic, the CSRs, and global control logic
  • a second power domain can contain Outlink/Link0, Link3, and Link4.
  • the fabric switch is configured such that each power domain has an enable bit in a register. When a particular power domain is reset, this enable bit is cleared thereby disabling functionality of the particular power domain. This enable bit is effectively a synchronous reset to all the logic in the particular power domain. In view of this enable bit functionality, only one reset is needed for the entire fabric switch and each one of the power domains will have its own separate isolate and power-up signals.
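  • The enable-bit scheme can be illustrated with the following sketch, which assumes a hypothetical register layout; it shows a leaf-node configuration that keeps the first fabric power domain running and shuts down the second.

```c
/* Sketch of the enable-bit scheme: one reset covers the whole fabric switch,
 * while each power domain has its own enable, isolate, and power-up controls.
 * The register fields are illustrative only. */
#include <stdint.h>

struct fabric_domain {
    volatile uint32_t enable;    /* cleared on reset; acts as synchronous reset */
    volatile uint32_t isolate;
    volatile uint32_t power_up;
};

/* Leaf node with only 1 or 2 links: keep domain 0 (MAC0-2, Link1-2, switch
 * core, CSRs) running and shut down domain 1 (Outlink/Link0, Link3, Link4). */
void fabric_trim_for_leaf(struct fabric_domain *dom0, struct fabric_domain *dom1)
{
    dom0->enable = 1;

    dom1->enable   = 0;   /* synchronously reset all logic in domain 1 */
    dom1->isolate  = 1;
    dom1->power_up = 0;
}
```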
  • It should be appreciated and understood that most of the on-chip peripherals generate interrupts. With few exceptions, these interrupts are routed to both the node CPU subsystem 202 and the management subsystem 208 . The exceptions to this exist for those peripherals that are private to the management processor 270 and those that are private to the node CPU 210 . These interrupts can be acted on in a manner that supports or enables power management functionality (e.g., network proxy functionality) and that supports power utilization functionality (e.g., interrupts acquired by the management processor 270 and used for reporting on node CPU utilization).
  • the node CPU 210 can have a hierarchical interrupt scheme in which external interrupts of the node CPU 210 are sent first to an interrupt distributor that resides, for example, in the SCU 212 .
  • the interrupts can be routed to any or all of the interrupt controllers of the node CPU 210 (e.g., interrupt controller of any one of the processing cores 222 ).
  • the interrupt distributor controls a list of processing cores to which each interrupt is routed.
  • Each of the quad cores' interrupt controllers allows masking of the interrupt source locally as well.
  • Interrupts are in general visible to both the node CPU 210 and the management processor 270 . It is then the responsibility of the management processor to unmask the interrupts it wants to see. If the whole CPU subsystem 202 is powered down (e.g., hibernated) then the management processor 270 will unmask important interrupts of the node CPU 210 to see events that would cause the node CPU 210 to be woken. It is the responsibility of the management processor 270 to either service the interrupt or re-power the OS on the node CPU subsystem 202 so it can service it.
  • the management processor 270 can unmask the interrupt to one of the other processing cores that is already powered up, thereby allowing the already powered up processing core to service the interrupt. This is an example of one subsystem masking an interrupt and allowing another subsystem to service it, which is a form of the network proxy functionality discussed above.
  • Interrupts on the node CPU 210 can also be used for implementing various power modes within power domains of the node CPU subsystem 202 . More specifically, the OS running on the node CPU 210 can distribute the processing load among each one of the processing cores 222 . In times when peak performance is not necessary, the OS can lower the power consumption within the node CPU 210 by clock-gating or powering down individual cores. As long as at least one of the processing cores 222 is running, the OS requires no intervention from the management processor 270 (e.g., the PMU thereof) for handling interrupts. A particular one of the processing cores 222 can be stopped in WFI/WFE state, which causes the clock to be gated for most of that particular processing core, except for its interrupt controller.
  • the clock of that particular processing core can be turned back on for allowing that particular core to service the interrupt.
  • the OS of the node CPU 210 can route an interrupt for that core to another one of the processing cores 222 that is already powered up. If the whole node CPU 210 is powered down, interrupts will be steered to the management processor 270 where the event will be seen and it will then be the responsibility of the management processor 270 to either service the interrupt or reboot the OS on the node CPU 210 so that one of the processing cores 222 can service the interrupt.
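  • The interrupt steering policy described above can be summarized in the following sketch; the distributor and unmask helpers are illustrative placeholders for the GIC distributor in the SCU and the management processor's interrupt masks.

```c
/* Hedged sketch of the interrupt steering policy: route to an awake core if
 * one exists; otherwise steer to the management processor, which services
 * the interrupt or re-powers the OS. Helper names are illustrative. */
#include <stdbool.h>

#define NUM_CORES 4

extern bool core_powered(int core);
extern void gic_route_irq_to_core(int irq, int core);
extern void mgmt_unmask_irq(int irq);        /* steer to management processor */
extern void mgmt_service_or_wake(int irq);   /* service it, or re-power the OS */

void steer_interrupt(int irq)
{
    for (int core = 0; core < NUM_CORES; core++) {
        if (core_powered(core)) {
            gic_route_irq_to_core(irq, core);   /* an awake core services it */
            return;
        }
    }
    /* Whole node CPU powered down: the management processor must see it. */
    mgmt_unmask_irq(irq);
    mgmt_service_or_wake(irq);
}
```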
  • a system on a chip refers to integration of one or more processors, one or more memory controllers, and one or more I/O controllers onto a single silicon chip.
  • a SOC configured in accordance with the present invention can be specifically implemented in a manner to provide functionalities definitive of a server.
  • a SOC in accordance with the present invention can be referred to as a server on a chip.
  • a server on a chip configured in accordance with the present invention can include a server memory subsystem, server I/O controllers, and a server node interconnect.
  • this server on a chip will include a multi-core CPU, one or more memory controllers that support ECC, and one or more volume server I/O controllers that minimally include Ethernet and SATA controllers.
  • the server on a chip can be structured as a plurality of interconnected subsystems, including a CPU subsystem, a peripherals subsystem, a system interconnect subsystem, and a management subsystem.
  • An exemplary embodiment of a server on a chip that is configured in accordance with the present invention is the ECX-1000 Series server on a chip offered by Calxeda Incorporated.
  • the ECX-1000 Series server on a chip includes a SOC architecture that provides reduced power consumption and reduced space requirements.
  • the ECX-1000 Series server on a chip is well suited for computing environments such as, for example, scalable analytics, webserving, media streaming, infrastructure, cloud computing and cloud storage.
  • a node card configured in accordance with the present invention can include a node card substrate having a plurality of the ECX-1000 Series server on a chip instances (i.e., each a server on a chip unit) mounted on the node card substrate and connected to electrical circuitry of the node card substrate.
  • An electrical connector of the node card enables communication of signals between the node card and one or more other instances of the node card.
  • the ECX-1000 Series server on a chip includes a CPU subsystem (i.e., a processor complex) that uses a plurality of ARM brand processing cores (e.g., four ARM Cortex brand processing cores), which offer the ability to seamlessly turn on-and-off up to several times per second.
  • the CPU subsystem is implemented with server-class workloads in mind and comes with an ECC L2 cache to enhance performance and reduce energy consumption by reducing cache misses.
  • Complementing the ARM brand processing cores is a host of high-performance server-class I/O controllers via standard interfaces such as SATA and PCI Express interfaces.
  • Table 4 below shows technical specification for a specific example of the ECX-1000 Series server on a chip.
  • Network Proxy Support to maintain network presence even with node powered off
  • Management Engine: 1. Separate embedded processor dedicated for systems management; 2. Advanced power management with dynamic power capping; 3. Dedicated Ethernet MAC for out-of-band communication
  • Memory Controller: 1. 72-bit DDR controller with ECC support; 2. 32-bit physical memory addressing
  • Four (4) integrated Gen2 PCIe controllers


Abstract

A server on a chip that can be a component of a node card. The server on a chip can include a node central processing unit subsystem, a peripheral subsystem, a system interconnect subsystem, and a management subsystem. The central processing unit subsystem can include a plurality of processing cores each running an independent instance of an operating system. The peripheral subsystem includes a plurality of interfaces for various configurations of storage media. The system interconnect subsystem provides for intra-node and inter-node packet connectivity. The management subsystem provides for various system and power management functionalities within the subsystems of the server on a chip.

Description

    CROSS-REFERENCE TO RELATED PATENT APPLICATIONS
  • This application is a Continuation of U.S. application Ser. No. 13/662,759, filed Oct. 29, 2012, which is a Continuation-In-Part of U.S. application Ser. No. 13/475,713, filed May 18, 2012, which claims priority from Provisional Application U.S. Application 61/489,569, filed May 24, 2011, U.S. application Ser. No. 13/475,713 is also a Continuation-In-Part of U.S. application Ser. No. 12/794,996, filed Jun. 7, 2010, all of which are incorporated herein by reference in their entireties. U.S. application Ser. No. 13/662,759, filed Oct. 29, 2012 is a Continuation-In-Part of U.S. application Ser. No. 13/475,722, filed May 18, 2012, which claims priority from Provisional Application U.S. Application 61/489,569, filed May 24, 2011; U.S. application Ser. No. 13/475,722 is also a Continuation-In-Part of U.S. application Ser. No. 12/794,996, filed Jun. 7, 2010 and all of which are incorporated herein by reference in their entireties. U.S. application Ser. No. 13/662,759, filed Oct. 29, 2012 is a Continuation-In-Part of U.S. application Ser. No. 13/527,498, filed Jun. 19, 2012, which claims priority from Provisional Application U.S. Application 61/553,555, filed Oct. 31, 2011, incorporated herein by reference in their entireties. U.S. application Ser. No. 13/662,759, filed Oct. 29, 2012 is a Continuation-In-Part of U.S. application Ser. No. 12/794,996, filed Jun. 7, 2010, which claims priority from Provisional Application U.S. Application 61/256,723, filed Oct. 30, 2009, incorporated herein by reference in their entireties. U.S. application Ser. No. 13/662,759, filed Oct. 29, 2012 is a Continuation-In-Part of U.S. application Ser. No. 12/889,721, filed Sep. 24, 2010, which claims priority from Provisional Application U.S. Application 61/245,592, filed Sep. 24, 2009, incorporated herein by reference in their entireties. U.S. application Ser. No. 13/662,759, filed Oct. 29, 2012 is a Continuation-In-Part of U.S. application Ser. No. 13/234,054, filed Sep. 15, 2011, which claims priority from Provisional Application U.S. Application 61/383,585, filed Sep. 16, 2010; U.S. application Ser. No. 13/234,054 is also a Continuation-In-Part of U.S. application Ser. No. 12/794,996, filed Jun. 7, 2010, incorporated herein by reference in their entireties. U.S. application Ser. No. 13/662,759, filed Oct. 29, 2012 is a Continuation-In-Part of U.S. application Ser. No. 13/284,855, filed Oct. 28, 2011, incorporated herein by reference in its entirety. U.S. application Ser. No. 13/662,759, filed Oct. 29, 2012 is a Continuation-In-Part of U.S. application Ser. No. 13/453,086, filed Apr. 23, 2012, incorporated herein by reference in its entirety.
  • FIELD
  • The disclosure relates generally to provisioning of modular compute resources within a system design and, more particularly, to a system on a chip that provides integrated CPU, peripheral, switch fabric, system management, and power management functionalities.
  • BACKGROUND
  • Server systems generally provide a fixed number of options. For example, there are usually a fixed number of CPU sockets, memory DIMM slots, PCI Express I/O slots and a fixed number of hard drive bays, which often are delivered empty as they provide future upgradability. The customer is expected to gauge future needs and select a server chassis category that will serve present and future needs. Historically, and particularly with x86-class servers, predicting the future needs has been achievable because product improvements from one generation to another have been incremental.
  • With the advent of power optimized, scalable servers, the ability to predict future needs has become less obvious. For example, in this class of high-density, low-power servers within a 2U chassis, it is possible to install on the order of 120 compute nodes in an incremental fashion. Using this server as a data storage device, the user may require only 4 compute nodes, but may desire 80 storage drives. Using the same server as a pure compute function focused on analytics, the user may require 120 compute nodes and no storage drives. The nature of scalable servers lends itself to much more diverse applications that require diverse system configurations. As the diversity increases over time, the ability to predict the system features that must scale becomes increasingly difficult.
  • It is desirable to provide smaller sub-units of a computer system that are modular and can be connected to each other to form larger, highly configurable scalable servers. Thus, it is desirable to create a system and method to modularly scale compute resources in these power-optimized, high density, scalable servers.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 illustrates an example of a system board on which one or more node cards may be installed;
  • FIG. 2 illustrates an embodiment of the details of each node card;
  • FIG. 3 illustrates an example of a quad node card;
  • FIGS. 4 and 5 illustrate two examples of node cards with one or more connectors;
  • FIG. 6 illustrates an example of a single server node card;
  • FIG. 7 illustrates a logical view of a system on a chip (SOC);
  • FIG. 8A illustrates an architectural block diagram view of a SOC showing subsystems thereof;
  • FIG. 8B illustrates an architectural block diagram view of a SOC showing architectural elements thereof;
  • FIG. 9 illustrates a logical view of a SOC node CPU subsystem;
  • FIG. 10 illustrates a logical view of a peripheral subsystem;
  • FIG. 11 illustrates an architectural block diagram view of a system interconnect subsystem;
  • FIG. 12 illustrates a logical view of a system interconnect subsystem;
  • FIG. 13 illustrates a logical view of a power management unit of a management subsystem; and
  • FIG. 14 illustrates a software view of a power management unit.
  • DETAILED DESCRIPTION OF ONE OR MORE EMBODIMENTS
  • The disclosure is particularly applicable to examples of the node cards illustrated and described below and it is in this context that the disclosure will be described. It will be appreciated, however, that the disclosure has broader applicability since the disclosed system and node cards can be implemented in different manners that are within the scope of the disclosure and may be used for any application since all of the various applications in which the system and node cards may be used are within the scope of the disclosure.
  • FIG. 1 illustrates an example of a system 40 that may include a system board 42 on which one or more node cards 46 may be installed. The system board 42 may be fit into a typical server chassis 44 and the system board 42 may have the one or more node cards 46, such as one or more server node units (described below with reference to FIG. 2) plugged into the system board. There are a number of functions that are needed to complete a full classic server, which include Ethernet PHYs to interface the one or more ServerNodes 46 or a cluster of ServerNodes and server control functions (fan control, buttons, etc.). The system board 42 is the component that ties the ServerNodes 46 to these components. The system board 42 is desirable if a hierarchical hardware partition is desired where the "building block" is smaller than the desired system, or when the "building block" is not standalone. The system board 42 roles can include: Ethernet network connectivity, internal fabric connections between ServerNodes or groups of ServerNodes in a sub-system (the fabric design in FIG. 1) and chassis control and management. The system board is the component that connects the fabric links between ServerNodes and allows them to communicate with the external world. Once the fabric design, hardware partitioning and storage decisions have been made, the system board 42 can glue the system components together and the input/output (I/O) of the system may include: management data input/output (MDIO) for SFP communication, comboPHYs for internal fabric links, storage and Ethernet access, UART and JTAG ports for debug and SMBus and GPIOs for chassis component control and communication.
  • Now, several different examples of node cards that may be plugged into the system board are described in more detail. These node cards leverage highly integrated SoCs designed for server applications, which enable density and system design options that have not been available to date. Cards can be defined that have the functionality of one or more servers and these Cards can be linked together to form clusters of servers in very dense implementations. A high level description of the Card would include a highly integrated SOC implementing the server functionality, DRAM memory, support circuitry such as voltage regulation, and clocks. The input/output of the card would be power and server to server interconnect and/or server to Ethernet PHY connectivity. SATA (serial advanced technology attachment) connections can also be added to interface to drives. An example of a node card is shown in FIG. 2 with one or more system on a chip (SOC) units (i.e., SoCs).
  • The fabric connections on each node card 46 can be designed to balance: usage of SOC PHYs, link redundancy, link bandwidth and flexibility in usage of the 8 links at the edge connectors. A node card 46 like that shown in FIG. 3 can be used in conjunction with a system board where the system board provides power to the node cards and connections to interconnect off the system board such as an Ethernet transceiver. The system board could house one or more node cards. In the case of housing more than one node card, the system board creates a cluster of Servers that utilize a server to server interconnect or fabric that is integrated in the SOC or a separate function on the card. This system board can be made in many forms, including industry standard form factors such as ATX or in customer form factors. The system board could be a blade or could fit into a standard chassis such as a 2U or any other size.
  • FIG. 2 illustrates an example of a node card 60. The node card may be a printed circuit board with a male physical connector, on which there are one or more servers that get power from some of the signals on the physical connector and use some of the signals on the connector for server to server communication or server to Ethernet PHY connections. In one embodiment, the physical connector may be a PCIe (Peripheral Component Interconnect Express) connector. The node card 60 may have an enable of the physical connector (see CARD EN in FIG. 2) that enables the server. The node card may have regulators included on the PCB to provide regulated power supplies to various parts of the server off the power supply that is provided through the PCIe physical connector and the enables (CARD EN) may be connected to the regulators. The voltage supplied to the node card may be 12V. The regulators may generate a common voltage that may be 3.3V (as shown in the example in FIG. 2), 1.8V, 0.9V and/or 1.35 or 1.5V. Each node card may have one or more SoCs 62, memory and appropriate regulators, but may also have multiple servers on the PCB including multiple SoCs and multiple sets of DRAM (dynamic random access memory), where the DRAM is soldered on the PCB and signals are routed to the SOC. Alternatively, the DRAM is on a DIMM (dual in-line memory module) and the DIMM is connected to the PCB using a connector whose signals are routed to the SOC.
  • In the example in FIG. 2, the node card 60 may include one or more system on a chip (SOC) 62 (such as SOC0-SOC3 as shown in FIG. 2) and each SOC 62 (i.e., each an instance of a SOC unit) is part of a node 64, such as Node N0-N3 as shown, wherein the node may be a compute node, a storage node and the like. The SoCs on the node card may have heat sinks. Each node 64 may further include one or more LEDs, memory (DDR, for example), a clock, a temperature sensor (TEMP) connected to the SOC, an SD slot and an SPI_FLASH slot as shown in FIG. 2. Thus, the node card 60 may also have a storage card such as SD, uSD, MMC, eMMC that is connected to the SOC (as shown in the example below in FIG. 6). In one embodiment, a NAND or NOR can be used and connected to the SOC (such as in the examples in FIGS. 4-5 below) and/or a serial flash may be used and connected to the SOC.
  • The node card may also have one or more communication and/or storage connects 66, such as connects to various SATA devices, connects to XAUI interconnects and a UART that may be through an edge connector. In the node card, the server-to-server communication may be XAUI and one or more XAUI is routed to the PCIe physical connector and the XAUI signals are routed from the PCIe physical connector to the SOC and/or the XAUI signals are routed between SoCs on the PCB. In the node card, the server-to-server communication may be SGMII and one or more SGMII is routed to the PCIe physical connector and the SGMII signals are routed from the PCIe connector to the SOC or the SGMII signals are routed between SoCs on the PCB.
  • The node card may also have a SATA connector. The SATA signals may be routed from the SOC to the SATA connector, or multiple SATA connectors are added to the PCB and multiple SATA connections are routed from the SOC to the SATA connectors. The node card may also have a mini SATA on the Card or mSATA on the Card. The SATA may be routed to the PCIe physical connector from the SOC. In some embodiments, multiple SATA connections are made between the SOC and PCIe physical connector and PCIe x1 or x2, or x4, or x8 or x16 or x32 is used. The node card may use multiple PCIe physical connectors or any combination of multiple PCIe connectors such as x1 or x2, or x4, or x8 or x16 or x32. DC values may be applied to the PCIe connector and routed onto the PCB for set up, control, ID or information, and the DC values are routed to GPIOs on one or more SoCs.
  • The edge connector may also have signaling for JTAG and ALTBOOT (described below in more detail). The edge connector may also provide SLOT signaling, GPIO signaling and power (with an enable). The JTAG signals are routed from one or more SoCs to the PCIe physical connector and the serial port and/or UART signals are routed from the PCIe physical connector to one or more SoCs. The SOC may have an additional signal or set of signals routed to the PCIe physical connector that is used to arbitrate usage of the serial port or UART. In the system, a digital signal can be applied to the PCIe connector to cause an alternative boot procedure by connecting this signal from the PCIe connector to a signal on one or more SoCs that causes or enables an alternative boot. The digital signal or signals can be applied to the PCIe physical connector to cause an interrupt to the SOC or SoCs by connecting the SOC or SoCs to this digital signal on the connector. The system may have a level shifter(s) that is used on the PCB to translate a signal applied on the PCIe connector edge to a signal that is applied to the SOC(s). Furthermore, a digital signal can be routed from an SOC to the PCIe connector that resets and/or controls and/or provides information to an Ethernet PHY or SFP that is not on the PCB, and may be for reset, enable, disable, MDIO, fault, loss of signal, or rate.
  • Thus, the node 64 of the node card 60 forms what can be thought of as an independent cluster node. Each SOC 62 is an example of a node central processing unit (CPU) of the node card 60. An independent operating system (OS) is booted on the Node CPU. Linux Ubuntu brand operating system is an example of such an independent operating system.
  • As discussed below in greater detail, each SOC 62 includes one or more embedded PCIe controllers, one or more SATA controllers, and one or more 10 Gigabit (10 GigE) Ethernet MACs. Each node 64 is interconnected to other nodes via a high speed interconnect such that the topology of the interconnect is logically transparent to the user. Each node 64, even those that don't have direct access to an outside network, has network connectivity. Preferably, each node 64 has flash (i.e., flash storage space) that can be logically partitioned. A local file system (e.g., a Linux file system) can be created on one or more of the flash partitions. For example, the user can create a root and swap partition on local flash partitions. Furthermore, flash partitions can be aggregated to form large flash volumes. Typical operating system storage capabilities can be provided to create network file systems, clustered files systems, and/or iSCSI NAS systems on either the a storage portion of a node (e.g., a storage node portion) or remotely in the external network. As discussed below in greater detail, each node 64 includes power management functionality that is optimized transparently to the user. Accordingly, a skilled person will appreciate that the node 64 provides all of the characteristics that would normally be attributed to a node of a computer cluster as well as value-added functionalities (e.g., storage functionality, power management functionality, etc).
  • FIG. 3 illustrates an example of a quad node card 100. The quad node card 100 may have one or more systems on a chip 103 (SoC0-SoC3 in this example), one or more volatile memory devices 104, such as four 4 GB DDR3 Mini-DIMMs (1 per node), one or more storage interfaces 106, such as sixteen SATA connectors (4 per node), one or more SD slots (one per node, MMC not supported) and one or more SPI flash chips (1 per node). The quad node card may be powered by 12V dc, supplied via edge connectors 108; all other voltages are internally generated by regulators. The quad node card may have server interconnect Fabric connections 110 routed via the edge connector 108, through a system board to which the node card is connected, to other node cards or external Ethernet transceivers, and I2C and GPIO route via the edge connector, per system board requirements. The quad node card 100 does not have Ethernet PHY transceivers in some implementations; other implementations may choose to use Ethernet transceivers on the node card and route this as the interconnect. The node card is not a standalone design, but may be used with a system board.
  • The quad Card example consists of 4 server nodes, each formed by a Calxeda® EnergyNode SOC, with its DIMM and local peripherals, which runs Linux independently from any other node. By design, these nodes can be directly interconnected to form a high bandwidth fabric, which provides network access through the system Ethernet ports. From the network view, the server nodes appear as independent servers; each available to take work on.
  • FIGS. 4 and 5 illustrate two examples of node cards 120, 130 with one or more connectors 108. The connectors may be PCIe connectors that make a convenient physical interconnect between the node card and the system board, but any type of connector can be used. The connector type is selected based on its performance at the switching frequency of the fabric interconnect. For example, industry-standard Micro TCA connectors available from Tyco Electronics and Samtec operate up to 12 GHz. In the examples in FIGS. 4 and 5, the node card has the SOCs 102, the memory 104, the storage interfaces 106 and the fabric connector 110, but may also include one or more persistent memory devices 112, such as NAND flash. The node card definition can vary as seen below with variation in a number of SATA connectors and/or in a number of fabric interconnects for server-to-server communication. The type of PCIe connector in the node card could vary significantly based on quantity of interconnect and other signals desired in the design. FIGS. 4 and 5 show two PCIe x16 connectors, but the node cards could vary using any quantity of PCIe connectors and any type of PCIe (x1, x2, x4 etc.). Though not shown in FIG. 4 or 5 for brevity, since fabric connectivity exists with the node cards, the physical Ethernet interfaces depicted on the System Board 42 can also reside on the node cards.
  • FIG. 6 illustrates an example of a single server node card 140. The single server node card 140 may have one processor SOC 102, a 4 GB DDR3 DRAM 104 down (no DIMM), a microSD slot 114, a SATA data connector 106, a mSATA connector 116, one or more XAUI channels (four in this example) to the edge connector 108 for fabric connectivity, and may be smaller than 2-inch by 4-inch. This combination provides the compute, networking IO, system memory, and storage interfaces needed for a robust ARM server, in a form factor that is easily integrated into many chassis designs. This node card implements a x16 PCI connector with a custom electrical signaling interface that follows the Ethernet XAUI interface definition. The node card 140 may be a two-sided printed circuit board with components on each side as shown in FIG. 6.
  • FIGS. 7, 8A and 8B show a SOC 200 (i.e., an instance of a SOC unit) configured in accordance with the present invention. The SOC 200 is a specific example of the SoCs discussed above in reference to FIGS. 2-6 (e.g., SOC 62 and/or SOC 102). In this regard, the SOC 200 can be utilized in standalone manner such as, for example, as discussed in reference to FIG. 6. Alternatively, the SOC 200 can be utilized in combination with a plurality of other SoCs on a node card such as, for example, with each one of the SoCs being associated with a respective node of the node card as discussed above in reference to FIGS. 2-5.
  • The SOC 200 includes a node CPU subsystem 202, a peripheral subsystem 204, a system interconnect subsystem 206, and a management subsystem 208. In this regard, a SOC configured in accordance with the present invention can be logically divided into several subsystems. Each one of the subsystems includes a plurality of operation components therein that enable a particular one of the subsystems to provide functionality thereof. Furthermore, as will be discussed below in greater detail, each one of these subsystems is preferably managed as independent power domains.
  • The node CPU subsystem 202 of SOC 200 provides the core CPU functionality for the SOC, and runs the primary user operating system (e.g. Ubuntu Linux). As shown in FIGS. 7-9, the Node CPU subsystem 202 comprises a node CPU 210, a snoop control unit (SCU) 212, L2 cache 214, a L2 cache controller 216, memory controller 217, an accelerator coherence port (ACP) 218, main memory 219 and a generalized interrupt controller (GIC) 220. The node CPU 210 includes 4 processing cores 222 that share the L2 cache 214. Preferably, the processing cores 222 are each an ARM Cortex A9 brand processing core with an associated media processing engine (e.g., Neon brand processing engine) and each one of the processing cores 222 has independent L1 instruction cache 224 and L1 data cache 226. Alternatively, each one of the processing cores can be a different brand of core that functions in a similar or substantially the same manner as ARM Cortex A9 brand processing core. Each one of the processing cores 222 and its respective L1 cache 224, 226 is in a separate power domain. Optionally, the media processing engine of each processing core 222 can be in a separate power domain. Preferably, all of the processing cores 222 within the node CPU subsystem 202 run at the same speed or are stopped (e.g., idled, dormant or powered down).
  • The SCU 212 is responsible for managing interconnect, arbitration, communication, cache-to-cache, system memory transfers, and cache coherency functionalities. With regard to cache coherency, the SCU 212 is responsible for maintaining coherence between the L1 caches 224, 226 and ensuring that traffic from the ACP 218 is made coherent with the L1 caches 224, 226. The L2 cache controller 216 can be a unified, physically addressed, physically tagged cache with up to 16 ways.
  • The memory controller 217 is coupled to the L2 cache 214 and to a peripheral switch 221 of the peripheral subsystem 204. Preferably, the memory controller 217 is configured to control a plurality of different types of main memory (e.g., DDR3, DDR3L, LPDDR2). An internal interface of the memory controller 217 includes a core data port, a peripherals data port, a data port of a power management unit (PMU) portion of the management subsystem 208, and an asynchronous 32-bit AHB slave port. The PMU data port is desirable to ensure isolation for some low power states. The asynchronous 32-bit AHB slave port is used to configure the memory controller 217 and access its registers. The asynchronous 32-bit AHB slave port is attached to the PMU fabric and can be synchronous to the PMU fabric in a similar manner as the asynchronous interface is at this end. In one implementation, the memory controller 217 is an AXI interface (i.e., an Advanced eXtensible Interface) offered under the brand Databahn, which includes an AXI interface, a Databahn controller engine and a PHY (DFI).
  • The ACP 218 provides the function of ensuring that system traffic (e.g., I/O traffic, etc) can be driven in order to ensure that there is no need to flush or invalidate the L1 caches 224, 226 to see the data. In this regard, the ACP 218 can serve as a slave interface port to the SCU 212. Read/write transactions can be initiated by an AXI master through the ACP 218 to either coherent or non-coherent memory. For read/write transactions to coherent regions of memory, the SCU 212 will perform necessary coherency operations against the L1 caches 224, 226, the L2 cache 214 and the main memory 219.
  • The GIC 220 can be integrated into the SCU 212. The GIC 220 provides a flexible approach to inter-processor communication, routing, and prioritization of system interrupts. The GIC 220 supports independent interrupts such that each interrupt can be distributed across CPU subsystem, hardware prioritized, and routed between the operating system and software management layer of the CPU subsystem. More specifically, interrupts to the processing cores 222 are connected via function of the GIC 220.
  • The node CPU subsystem 202 can include other elements/modules for providing further functionalities. One example of such further functionality is provided by a L2 MBIST (i.e., memory built-in self test) controller that is integrated with the L2 cache controller 216 for performing memory testing of the L2 cache 214. Still another example of such further functionality is provided by a direct memory access (DMA) controller (i.e., a DMAC) that provides an AXI interface to perform DMA transfers and that has two APB interfaces that control operation of the DMAC.
  • The peripheral subsystem 204 of SOC 200, shown in FIGS. 7, 8 and 10, has the primary responsibility of providing interfaces that enable information storage and transfer functionality. This information storage and transfer functionality includes information storage and transfer both within a given SOC Node and with SOC Nodes accessibly by the given SOC Node. Examples of the information storage and transfer functionality include, but are not limited to, flash interface functionality, PCIe interface functionality, SATA interface functionality, and Ethernet interface functionality. The peripheral subsystem 204 can also provide additional information storage and transfer functionality such as, for example, direct memory access (DMA) functionality. Each of these peripheral subsystem functionalities is provided by one or more respective controllers that interface to one or more corresponding storage media (i.e., storage media controllers).
  • The peripherals subsystem 204 includes the peripheral switch 221 and a plurality of peripheral controllers for providing the abovementioned information storage and transfer functionality. The peripheral switch 221 can be implemented in the form of a High-Performance Matrix (HPM) that is a configurable auto-generated advanced microprocessor bus architecture 3 (i.e., AMBA protocol 3) bus subsystem based around a high-performance AXI cross-bar switch known as the AXI bus matrix, and extended by AMBA infrastructure components.
  • The peripherals subsystem 204 includes flash controllers 230 (i.e. a first type of peripheral controller). The flash controllers 230 can provide support for any number of different flash memory configurations. A NAND flash controller such as that offered under the brand name Denali is an example of a suitable flash controller. Examples of flash media include MultiMediaCard (MMC) media, embedded MultiMediaCard (eMMC) media, Secure Digital (SD) media, SLC/MLC+ECC media, and the like. Memory is an example of media (i.e., storage media) and error correcting code (ECC) memory is an example of a type of memory to which the memory controller 217 interfaces (e.g., main memory 219).
  • The peripherals subsystem 204 includes Ethernet MAC controllers 232 (i.e. a second type of peripheral controller). Each Ethernet MAC controller 232 can be of the universal 1 Gig design configuration or the 10 G design configuration. The universal 1 Gig design configuration offers a preferred interface description. The Ethernet MAC controllers 232 include a control register set and a DMA (i.e., an AXI master and an AXI slave). Additionally, the peripherals subsystem 204 can include an AXI2 Ethernet controller 233.
  • The peripherals subsystem 204 includes a DMA controller 234 (i.e. a third type of peripheral controller). The DMA controller 234 includes a master port (AXI) and two APB slave ports (i.e., one for secure communication and the other for non-secure communication). DMA requests are sent to the DMA controller 234 and interrupts are generated from the DMA controller 234. A basic assumption in regard to the DMA controller 234 is that it needs to be able to transfer data into and out of the L2 cache 214 to ensure that the memory remains coherent and it also needs to access the peripherals of the peripheral subsystem 204. As such, this implies that the DMA controller 234 needs to connect into two places in the system. The most obvious approach to accomplish this is to provide a DMA fabric and plug the DMA fabric into both the CONFAB (i.e., the connection to the slave ports of the main peripherals) and the ACPFAB (i.e., the ACP fabric) as an additional master, thereby providing connectivity to the PMU (i.e., a portion of the management subsystem 208), which allows access to all the slaves and the ACP fabric. An alternative approach is to connect only into the ACP and rely on the L2 cache 214 to pass the access through the SCU 212 and L2 cache 214 and then back out on the core port to the CONFAB (and then reverse). This alternate approach needs to ensure that the SCU 212 understands that those accesses do not create L2 entries. Furthermore, the alternate approach may not be operable in the power-down case (i.e., when only the management processor and switch fabric of the management subsystem 208 are active) and may not allow DMA into the private memory of the management subsystem 208. However, these scenarios are acceptable because DMA functionality is useful only for fairly large transfers. Thus, because private memory of the management subsystem 208 is relatively small, the assumption is that associated messages will be relatively small and can be handled by INT. If the management subsystem 208 needs/wants large data transfer, it can power up the whole system except the cores and then DMA is available.
  • The peripherals subsystem 204 includes a SATA controller 236 (i.e. a fourth type of peripheral controller). Preferably, the SATA controller 236 has two AHB ports: one master for memory access and one slave for control and configuration. The peripherals subsystem 204 also includes PCIe controllers 238. Preferably, the PCIe controllers 238 use a DWC PCIe core configuration as opposed to a shared DBI interface so that a plurality of AXI interfaces are provided: a master AXI interface, a slave AXI interface and a DBI AXI interface. As will be discussed below in greater detail, a XAUI controller 240 of the peripherals subsystem 204 is provided for enabling interfacing with other CPU nodes (e.g., of a common node card).
  • FIGS. 7, 8B, 11 and 12 show block diagrams of the system interconnect subsystem 206 (also referred to herein as the fabric switch). The system interconnect subsystem 206 is a packet switch that provides intra-node and inter-node packet connectivity to Ethernet and within a node cluster (e.g., small clusters up through integration with heterogeneous large enterprise data centers). The system interconnect subsystem 206 provides a high-speed interconnect fabric, providing a dramatic increase in bandwidth and reduction in latency compared to traditional servers connected via 1 Gb Ethernet to a top of rack switch. Furthermore, the system interconnect subsystem 206 is configured to provide adaptive link width and speed to optimize power based upon utilization.
  • An underlying objective of the system interconnect subsystem 206 is to support a scalable, power-optimized cluster fabric of server nodes. As such, the system interconnect subsystem 206 has three primary functionalities. The first one of these functionalities is serving as a high-speed fabric upon which TCP/IP networking is built and upon which the operating system of the node CPU subsystem 202 can provide transparent network access to associated network nodes and storage access to associated storage nodes. The second one of these functionalities is serving as a low-level messaging transport between associated nodes. The third one of these functionalities is serving as a transport for remote DMA between associated nodes.
  • The system interconnect subsystem 206 is connected to the node CPU subsystem 202 and the management subsystem 208 through a bus fabric 250 (i.e., Ethernet AXIs) of the system interconnect subsystem 206. An Ethernet interface 252 of the system interconnect subsystem 206 is connected to peripheral interfaces (e.g., interfaces 230, 232, 234, 238) of the peripheral subsystem 204. A fabric switch 249 (i.e., a switch-mux) is coupled between the ports 0-4 and the MACs 272, 274, 276. Ports 1-4 are XAUI link ports (i.e., high-speed interconnect interfaces) enabling the node that comprises the SOC 200 to be connected to associated nodes each having their own SOC (e.g., identically configured SoCs). Port 0 can be mux'd to be either a XAUI link port or an Outside Ethernet MAC port.
  • The processor cores 222 (i.e., A9 cores) of the node CPU subsystem 202 and management processor 270 (i.e., M3) of the management subsystem 208 can address MACs 272, 274, 276 of the system interconnect subsystem 206. In certain embodiments, the processor cores 222 of the node CPU subsystem 202 will utilize first MAC 272 and second MAC 274 and the management processor 270 of the management subsystem 208 will utilize the third MAC 276. To this end, MACs 272, 274, 276 can be configured specifically for their respective application (e.g., the first and second MACs 272, 274 providing 1 G and/or 10 G Ethernet functionality and the third MAC 276 providing DMA functionality).
  • The system interconnect subsystem 206 provides architectural support for various functionalities of the management subsystem 208. In one example, the system interconnect subsystem 206 supports network proxying functionality. As discussed below in greater detail, network proxy functionality allows the management processor of a CPU node to process or respond to network packets received thereby while the respective processing cores are in low-power “sleep” states and intelligently wake one or more of the respective processing cores when further network processing is needed thereby allowing the CPU node to maintain network presence. Another example is that the system interconnect subsystem 206 supports the ability for the management processor of a CPU node to optionally snoop locally initiated broadcasts (e.g., commonly to capture gratuitous ARPs).
  • The system interconnect subsystem 206 can be implemented in a manner that enables an ability to measure and report on utilization on each of the links provided via the system interconnect subsystem 206. To this end, a global configuration register (FS_GLOBAL_CFG) can be configured to enable utilization and statistics measurement, to select the utilization measurement time period, and to set the statistics counter interrupt threshold. Bandwidth alarm registers can allow software to configure a plurality of thresholds that, when crossed, cause a respective bandwidth alarm alert (e.g., that can generate an interrupt to the management processor 270). Bandwidth alarms can be enabled in a channel configuration register. Transmit and receive bandwidth on each of the MAC ports can be read from a channel bandwidth register.
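  • As an illustration of this utilization and alarm machinery, the sketch below configures statistics measurement and a per-channel bandwidth alarm; FS_GLOBAL_CFG and the channel registers are named in the description, but the offsets, bit positions, and helper functions used here are assumptions.

```c
/* Illustrative configuration of the fabric utilization/alarm registers.
 * FS_GLOBAL_CFG and the channel registers exist per the description, but the
 * offsets, bit positions, and helpers below are assumptions. */
#include <stdint.h>

extern void     fs_write(uint32_t reg, uint32_t val);
extern uint32_t fs_read(uint32_t reg);

#define FS_GLOBAL_CFG      0x000
#define FS_CHAN_CFG(n)    (0x100 + 0x20 * (n))
#define FS_CHAN_BW(n)     (0x104 + 0x20 * (n))

#define CFG_STATS_EN       (1u << 0)   /* enable utilization/statistics   */
#define CFG_PERIOD_1MS     (1u << 4)   /* measurement time period select  */
#define CHAN_BW_ALARM_EN   (1u << 0)

void fabric_enable_bw_alarm(int chan, uint32_t threshold)
{
    fs_write(FS_GLOBAL_CFG, CFG_STATS_EN | CFG_PERIOD_1MS);
    fs_write(FS_CHAN_CFG(chan), CHAN_BW_ALARM_EN | (threshold << 8));
}

uint32_t fabric_read_bandwidth(int chan)
{
    /* Crossing the alarm threshold would raise an interrupt to the
     * management processor; here we simply read the counter. */
    return fs_read(FS_CHAN_BW(chan));
}
```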
  • Turning now to FIGS. 7, 8B and 13, a discussion of the management subsystem 208 is provided. As best shown in FIG. 8, the management subsystem 208 is coupled directly to the node CPU subsystem 202 and directly to the system interconnect subsystem 206. An inter-processor communication (IPC) module (i.e., IPCM) 281 of the management subsystem 208, which includes IPC 280, is coupled to the SCU 212 of the node CPU subsystem 202, thereby directly coupling the management subsystem 208 to the node CPU subsystem 202. An AXI fabric 282 of the IPCM 281 is coupled to the bus fabric 250 of the system interconnect subsystem 206, thereby directly coupling the management subsystem 208 to the system interconnect subsystem 206.
  • The management processor 270 of the management subsystem 208 is preferably, but not necessarily, an ARM Cortex brand M3 microprocessor. The management processor 270 can have private ROM and private SRAM. As best shown in FIGS. 8 and 14, the management processor 270 is coupled to shared peripherals 286 and private peripherals 288 of the management subsystem 208. The private peripherals 288 are only accessible by the management processor 270, whereas the shared peripherals 286 are accessible by the management processor 270, each of the processing cores 222, and a debug unit 290 of the SOC 200.
  • The management processor 270 can see the master memory map, with only DRAM requiring mapping. The management processor 270 utilizes GPIO 292 and I2C 294 (i.e., private peripherals) for controlling power and clocks in the node. Main code and working space for the management processor 270 are on the local Dcode and Icode buses, but code can be executed from the system bus (i.e., the main ROM 295 & RAM 296 and, if necessary, external memory). The IPCM 281, which is used for software communication between the management processor 270 and the processing cores 222, can include 8 mailboxes (e.g., each with 7 data registers) and 8 interrupts (e.g., interrupts 0:3 are sent to the management processor 270 and interrupts 4:7 are sent to the GIC 220 of the node CPU subsystem 202). The management processor 270 can utilize a system management interface (SMI) functionality to carry IPMI (i.e., intelligent platform management interface) traffic (e.g., to/from the processing cores 222). For example, IPMI communication via SMIC (Server Management Interface Chip) between the processing cores 222 and the management processor 270 is implemented with a private communication channel that leverages the IPCM 281. This implements the SMIC protocol using the mailbox features of the IPCM 281 coupled with memory buffers.
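  • A short sketch of passing a message from a processing core to the management processor through an IPCM mailbox follows. The mailbox layout (7 data registers per mailbox, interrupts 0:3 routed to the management processor) follows the description above, but the base address, register arrangement, and doorbell mechanism are assumptions for illustration only.

#include <stdint.h>

#define IPCM_BASE  0x80020000u                          /* assumed IPCM base address */

struct ipcm_mailbox {
    volatile uint32_t data[7];                          /* 7 data registers per mailbox        */
    volatile uint32_t send;                             /* assumed doorbell: 1 raises interrupt */
};

#define IPCM_MAILBOX(n) ((struct ipcm_mailbox *)(uintptr_t)(IPCM_BASE + 0x40u * (n)))

/* Send up to 7 words through one of the mailboxes whose interrupt (0:3)
 * is routed to the management processor. */
static int ipc_send_to_m3(unsigned mbox, const uint32_t *words, unsigned count)
{
    struct ipcm_mailbox *mb = IPCM_MAILBOX(mbox);

    if (mbox > 3 || count > 7)
        return -1;

    for (unsigned i = 0; i < count; i++)
        mb->data[i] = words[i];

    mb->send = 1;                                       /* interrupt the management processor */
    return 0;
}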
  • One capability that leverages the management processor 270 having control and visibility of all peripherals and controllers is that the management processor 270 can field error interrupts from each of the peripheral controllers. One example is that DRAM errors reported by the DRAM controller generate interrupts and the management processor 270 can log and report the errors. The management processor 270 can then attempt dynamic recovery and improvement by techniques including, but not limited to, increasing the voltage to the DRAM controller or the DIMMs in an attempt to reduce bit errors.
  • Additional capabilities arise because the management processor 270 has visibility into all buses, peripherals, and controllers. It can directly access registers for statistics on all buses, memory controllers, network traffic, fabric links, and errors on all devices without disturbing the processing cores 222, and without the processing cores 222 even being aware of the access. This allows for billing use cases where statistics can be gathered securely by the management processor without having to consume core processing resources (e.g., the processing cores 222) to gather them, and in a manner that cannot be altered by the processing cores 222.
  • An alternative Coresight/JTAG debug bus is coupled to the management processor 270. This Coresight/JTAG debug bus serves as an infrastructure that provides an alternate back door interface into all on-chip devices, even if the main busses are unavailable. This also provides for security and intrusion detection use cases where the management processor can detect anomalous accesses and disable internal busses or controllers for self-protection. Additionally, leveraging this pervasive access, the management processor can read all on-chip and CPU registers and memory images for post-mortem analysis for debug.
  • The management processor 270 has a plurality of responsibilities within its respective node. One responsibility of the management processor 270 is booting an operating system of the node CPU 210. Another responsibility of the management processor 270 is node power management. Accordingly, the management subsystem 208 can also be considered to comprise a power management unit (PMU) for the node and, thus, is sometimes referred to as such. As discussed below in greater detail, the management subsystem 208 controls power states to various power domains of the SOC 200 (e.g., to the processing cores 222 by regulating clocks). The management subsystem 208 is an “always-on” power domain. However, the management processor 270 can turn off the clocks to the management processor 270 and/or its private and/or shared peripherals to reduce the dynamic power. Another responsibility of the management processor 270 is varying synchronized clocks of the node CPU subsystem 202 (e.g., of the node CPU 210 and the SCU 212). Another responsibility of the management processor 270 is providing baseboard management control (BMC) and IPMI functionalities including console virtualization. Another responsibility of the management processor 270 is providing router management. Another responsibility of the management processor 270 is acting as proxy for the processing cores 222 for interrupts and/or for network traffic. For example, the GIC 220 of the node CPU subsystem 202 will cause interrupts intended to be received by a particular one of the processing cores 222 to be reflected to the management processor 270, allowing the management processor 270 to wake the particular one of the processing cores 222 when an interrupt needs to be processed by that particular processing core while it is sleeping, as will be discussed below in greater detail. Another responsibility of the management processor 270 is controlling phase-locked loops (PLLs). A frequency is set in the PLL and the PLL is monitored for lock. Once lock is achieved, the output is enabled to the clock control unit (CCU). The CCU is then signaled to enable the function. The management processor 270 is also responsible for selecting the dividers, but the actual changeover will happen in a single cycle in hardware (a register-level sketch of this sequence is provided following this discussion). Another responsibility of the management processor 270 is controlling a configuration of a variable internal supply used to supply electrical power to the node CPU subsystem 202. For example, a plurality of discrete power supplies (e.g., some being of different power supplying specification than others (e.g., some having different power capacity levels)) can be selectively activated and deactivated as necessary for meeting power requirements of the node CPU subsystem 202 (e.g., based on power demands of the processing cores 222, the SCU 216, and/or the controller of the L2 cache 214). A separate power control mechanism (e.g., switch) can be used to control power supply to each of the processing cores 222 and separately to the SCU 216. Another responsibility of the management processor 270 is managing a real-time-clock (RTC) that exists on a shared peripheral bus of the management subsystem 208. Another responsibility of the management processor 270 is managing a watchdog timer on a private peripheral bus of the management subsystem 208 to aid in recovery from catastrophic software failures.
Still another responsibility of the management processor 270 is managing an off-board EEPROM that is accessible via the I2C 294 on the private peripheral bus of the management subsystem 208. The off-board EEPROM device is used to store all or a portion of boot and node configuration information as well as all or a portion of IPMI statistics that require non-volatile storage. Each of these responsibilities of the management processor 270 is an operational functionality managed by the management processor 270. Accordingly, operational management functionality of each one of the subsystems refers to two or more of these responsibilities being managed by the management processor 270.
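  • The PLL change-over sequence mentioned above (set the frequency, monitor for lock, enable the output to the CCU, then signal the CCU to enable the function) can be sketched as follows. The register names, addresses, bit positions, and busy-wait style are illustrative assumptions rather than the actual hardware interface.

#include <stdint.h>
#include <stdbool.h>

#define PLL_BASE          0x80030000u                   /* assumed base address */
#define PLL_FREQ_CFG      (*(volatile uint32_t *)(PLL_BASE + 0x00))
#define PLL_STATUS        (*(volatile uint32_t *)(PLL_BASE + 0x04))
#define PLL_OUT_ENABLE    (*(volatile uint32_t *)(PLL_BASE + 0x08))
#define CCU_FUNC_ENABLE   (*(volatile uint32_t *)(PLL_BASE + 0x0C))

#define PLL_STATUS_LOCKED (1u << 0)

static bool pll_set_frequency(uint32_t freq_code, uint32_t divider_code)
{
    /* The management processor selects the dividers; the actual changeover
     * happens in a single cycle in hardware once the output is enabled. */
    PLL_FREQ_CFG = freq_code | (divider_code << 16);

    /* Monitor for lock before enabling the output to the CCU. */
    for (unsigned i = 0; i < 100000; i++) {
        if (PLL_STATUS & PLL_STATUS_LOCKED) {
            PLL_OUT_ENABLE  = 1;                        /* enable output to the clock control unit */
            CCU_FUNC_ENABLE = 1;                        /* signal the CCU to enable the function    */
            return true;
        }
    }
    return false;                                       /* lock was never achieved */
}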
  • As shown in FIG. 14, software 300 is provided on the management processor 270. The software 300 includes a plurality of application tasks 302, an operating system (OS)/input-output (I/O) abstraction layer 304, a real-time operating system (RTOS) 306, and device drivers 308 for the various devices. The operating system (OS)/input-output (I/O) abstraction layer 304 is a software layer that resides between the application tasks 302 and the real-time operating system (RTOS) 306. The operating system (OS)/input-output (I/O) abstraction layer 304 aids in porting acquired software into this environment. The OS abstraction portion of the operating system (OS)/input-output (I/O) abstraction layer 304 provides POSIX-like message queues, semaphores and mutexes. The device abstraction portion of the operating system (OS)/input-output (I/O) abstraction layer 304 provides a device-transparent open/close/read/write interface much like the POSIX equivalent for those devices used by ported software. The real-time operating system (RTOS) 306 resides between the operating system (OS)/input-output (I/O) abstraction layer 304 and the device drivers 308.
  • The application tasks 302 include, but are not limited to, a boot task 310, a system management task 312, a power management task 314, a serial concentrator task 316, a frame switch management task 318 (sometimes called routing management), and a network proxy task 320. The boot task 310 provides the function of booting the processing cores 222 and the management processor 270. The system management task 312 provides the function of integrated operation of the various subsystems of the SOC 200. The power management task 314 provides the function of managing power utilization of the various subsystems of the SOC 200. The serial concentrator task 316 provides the function of managing communication from the other application tasks to a system console. This console may be directly connected to the SOC node via a UART (i.e., a universal asynchronous receiver/transmitter) or it can be connected to another node in the system. The frame switch management task 318 (sometimes called routing management) is responsible for configuring and managing routing network functionality. As discussed in greater detail below, the network proxy task 320 maintains network presence of one or more of the processing cores 222 while in a low-power sleep/hibernation state and intelligently wakes one or more of the processing cores 222 when further processing is required.
  • Device drivers 308 are provided for all of the devices that are controlled by the management processor 270. Examples of the device drivers 308 include, but are not limited to, an I2C driver 322, a SMI driver 324, a flash driver 326 (e.g., NAND type storage media), a UART driver 328, a watchdog timer (i.e., WDT) driver 330, a general purpose input-output (i.e., GPIO) driver 332, an Ethernet driver 334, and an IPC driver 336. In many cases, these drivers are implemented as simple function calls. In some cases where needed for software portability, however, a device-transparent open/close/read/write type I/O abstraction is provided on top of these functions.
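  • The device-transparent open/close/read/write abstraction layered on top of the simple driver function calls could look roughly like the sketch below. The static device table, device names, and driver entry points are hypothetical; they only illustrate the shape of the POSIX-like interface described above.

#include <stddef.h>
#include <string.h>

struct dev_ops {
    const char *name;
    int  (*open)(void);
    int  (*read)(void *buf, size_t len);
    int  (*write)(const void *buf, size_t len);
    void (*close)(void);
};

/* Hypothetical driver entry points (e.g., from the UART and I2C drivers). */
extern int  uart_open(void);
extern int  uart_read(void *buf, size_t len);
extern int  uart_write(const void *buf, size_t len);
extern void uart_close(void);
extern int  i2c_open(void);
extern int  i2c_read(void *buf, size_t len);
extern int  i2c_write(const void *buf, size_t len);
extern void i2c_close(void);

static const struct dev_ops devices[] = {
    { "uart0", uart_open, uart_read, uart_write, uart_close },
    { "i2c0",  i2c_open,  i2c_read,  i2c_write,  i2c_close  },
};

/* POSIX-like open(): returns an index into the device table, or -1. */
int io_open(const char *name)
{
    for (size_t i = 0; i < sizeof(devices) / sizeof(devices[0]); i++)
        if (strcmp(devices[i].name, name) == 0)
            return devices[i].open() == 0 ? (int)i : -1;
    return -1;
}

int  io_read(int fd, void *buf, size_t len)        { return devices[fd].read(buf, len);  }
int  io_write(int fd, const void *buf, size_t len) { return devices[fd].write(buf, len); }
void io_close(int fd)                              { devices[fd].close();                }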
  • In regard to boot processes, it is well known that multiple-stage boot loaders are often used, during which several programs of increasing complexity sequentially load one after the other in a process of chain loading. Advantageously, however, the node CPU 210 only runs one boot loader before loading the operating system. The ability for the node CPU 210 to only run one boot loader before loading the operating system is accomplished via the management processor 270 preloading a boot loader image into main memory (e.g., DRAM) of the node CPU subsystem before releasing the node CPU 210 from a reset state. More specifically, the SOC 200 can be configured to use a unique boot process, which includes the management processor 270 loading a suitable OS boot loader (e.g., U-Boot) into main memory, starting the node CPU 210 main OS boot loader (e.g., UEFI or U-Boot), and then loading the OS. This eliminates the need for a boot ROM for the node CPU, a first stage boot loader for the node CPU, and dedicated SRAM for boot of the node CPU.
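  • A minimal sketch of this single-boot-loader flow, as seen from the management processor, is given below: the management processor copies an OS boot loader image into the node's DRAM and only then releases the node CPU from reset, so no boot ROM, first-stage loader, or dedicated boot SRAM is needed on the node CPU side. The helper functions and the load address are hypothetical.

#include <stdint.h>
#include <stddef.h>

#define NODE_DRAM_LOAD_ADDR  ((void *)0x00008000u)      /* assumed boot loader load address */

extern size_t flash_read_bootloader(void *dst);          /* hypothetical: copy image from flash   */
extern void   node_cpu_set_reset_vector(uint32_t addr);  /* hypothetical: point the A9 at the image */
extern void   node_cpu_release_reset(void);              /* hypothetical: deassert node CPU reset  */

void m3_boot_node_cpu(void)
{
    /* 1. Preload the OS boot loader (e.g., U-Boot) into node DRAM. */
    size_t len = flash_read_bootloader(NODE_DRAM_LOAD_ADDR);
    (void)len;

    /* 2. Point the node CPU at the preloaded image. */
    node_cpu_set_reset_vector((uint32_t)(uintptr_t)NODE_DRAM_LOAD_ADDR);

    /* 3. Release the node CPU from reset; it runs the single boot loader
     *    and then loads the operating system. */
    node_cpu_release_reset();
}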
  • Presented now is a discussion relating to network proxy functionality implemented using the management processor 270. The underlying principle of network proxy functionality is maintaining network presence of each one of the processing cores 222 while one or more of the processing cores 222 is in a low-power sleep/hibernation state and intelligently waking the one or more sleeping processing cores 222 when further processing associated with the one or more sleeping processing cores 222 is required. More specifically, the network proxy task 320 monitors network events of each of the processing cores 222 and, when all or a particular one of the processing cores 222 is in a dormant or shutdown state, the network proxy function enables the management processor 270 to act as proxy for the processing core(s) 222 that it can reasonably do this for and causes the management processor 270 to wake up the processing core(s) 222 when the management processor 270 receives a network event that it is unable to proxy for.
  • There are several architectural features related to the network proxy functionality. A CSR (i.e., a control and status register) is implemented to allow the remapping of Port IDs (i.e., a portRemap function). For example, when a switch of the SOC 200 is to deliver a packet to the MAC0 port 272 (shown in FIG. 12), this port remapping CSR allows software to remap MAC0 port 272 to the management processor 270 and have the packet delivered to the management processor 270 for network proxy processing. This remapping CSR can also be used to remap traffic destined for the MAC1 port 274 (shown in FIG. 12) to MAC0 port 272. This CSR port remap function is a key SOC feature that facilitates the management processor implementation of network proxy functionality within a SOC node.
  • As an example, a typical use sequence for implementing network proxy functionality in accordance with an embodiment of the present invention begins with the management processor 270 maintaining the IP to MAC address mappings for the MAC0 port 272 and the MAC1 port 274. This can be done either via explicit communication of these mappings from an instantiation of the operating system running on the node CPU 210 to the management processor 270 or implicitly by having the management processor 270 snoop local gratuitous ARP broadcasts. The node CPU 210 coordinates with the management processor 270 for causing one or more of the processing cores 222 to go to a low power dormant state. During this transition, the management processor 270 sets up the Port ID remapping CSR to route MAC0 port 272 and MAC1 port 274 traffic to the management processor 270. Thereafter, the management processor 270 processes any incoming packets that are transmitted for reception by the MAC0 port 272 or MAC1 port 274. The management processor can implement various categories of packet processing. A first category of packet processing includes responding to some classes of transactions (e.g., an address resolution protocol (ARP) response). A second category of packet processing includes dumping and ignoring some classes of packets. A third category of packet processing includes deciding that one or more of the processing cores 222 that is sleeping must be woken to process some classes of packets. To this end, the management processor 270 will wake one or more of the processing cores 222 that is/are sleeping, undo the Port ID remapping register, and re-send the packets (e.g., through a switch where they were initially received) so that the packets are rerouted back to the MAC port for which they were originally destined (e.g., MAC0 port 272 or MAC1 port 274).
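  • The use sequence just described can be sketched in code as shown below. The port identifiers, the CSR write helper, the packet classifier, and the wake/re-send helpers are assumptions introduced only to illustrate the three categories of proxy packet processing and the remap/unmap steps.

#include <stdbool.h>

#define PORT_MAC0  0u
#define PORT_MAC1  1u
#define PORT_MGMT  4u                                    /* assumed management processor port id */

extern void port_remap_csr_write(unsigned from_port, unsigned to_port); /* hypothetical CSR write */
extern int  classify_packet(const void *pkt, unsigned len);            /* hypothetical: returns 1, 2, or 3 */
extern void send_arp_response(const void *pkt, unsigned len);          /* hypothetical */
extern void wake_processing_core(unsigned core);                       /* hypothetical */
extern void resend_packet_to_switch(const void *pkt, unsigned len);    /* hypothetical */

/* Cores are entering a dormant state: steer MAC0/MAC1 traffic to the M3. */
void proxy_enter(void)
{
    port_remap_csr_write(PORT_MAC0, PORT_MGMT);
    port_remap_csr_write(PORT_MAC1, PORT_MGMT);
}

void proxy_handle_packet(const void *pkt, unsigned len)
{
    switch (classify_packet(pkt, len)) {
    case 1:                                              /* respond on behalf of the core (e.g., ARP) */
        send_arp_response(pkt, len);
        break;
    case 2:                                              /* dump and ignore this class of packet      */
        break;
    default:                                             /* a sleeping core must process the packet   */
        wake_processing_core(0);
        port_remap_csr_write(PORT_MAC0, PORT_MAC0);      /* undo the Port ID remapping                */
        port_remap_csr_write(PORT_MAC1, PORT_MAC1);
        resend_packet_to_switch(pkt, len);               /* reroute back to the original MAC port     */
        break;
    }
}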
  • Using the network proxy functionality, the management processor 270 can support Wake-On-LAN (WOL) packets. To this end, the management processor 270 will acquire the WOL packets, which are broadcast as opposed to being transmitted for reception by a specific recipient. The management processor 270 will know the MAC addresses for the other MACs on the node and, as necessary/appropriate, will be able to wake up the processing cores 222.
  • Turning now to a discussion of power management functionality, there are preferably multiple power domains in the SOC 200. These power domains are implemented with level shifters, clamps, and switches. Examples of these power domains include, but are not limited to, a plurality of power domains within the node CPU subsystem 202 that can each be transitioned between two or more power states, a plurality of power domains within the peripheral subsystem 204 that can each be transitioned between two or more power states, a plurality of power domains within the system interconnect subsystem 206 that can each be transitioned between two or more power states, and a single always-on power domain consisting of the management subsystem 208. The node CPU subsystem 202 can be configured to include 11 power domains (e.g., four processing core power domains, four media processing engine power domains, a SCU power domain, a Debug PTM power domain, and a L1 BIST (i.e., built-in self test) power domain). The peripheral subsystem 204 can be configured to include 2 power domains (e.g., a first power domain for the PCIe, SATA, eMMC, NAND, and DDR controllers and a second power domain for the DDR PHY). The system interconnect subsystem 206 can be configured to include a first power domain for shared logic and a first plurality of XAUI links and a second power domain for a second plurality of XAUI links and an outside MAC port. In this regard, power domains of the SOC 200 can be defined by and/or within the processing cores, the SCU, the peripheral interfaces and/or controllers, various storage media, the management processor, XAUI phys, and the switch fabric. Furthermore, a debug subsystem of the SOC 200 can be an additional power domain.
  • The management subsystem 208 (e.g., via the PMU 281) controls the reset and power for the various power domains of the SOC. As mentioned above, the management subsystem 208 is an “always-on” power domain and the power domains of the remaining subsystems can be selectively transitioned between two or more power states (e.g., through the use of registers which are written by the management processor 270). To this end, each power domain generally has three signals (e.g., reset, isolate, and power-up) that can be controlled by registers in a respective SOC subsystem.
  • Each of those domains can logically be in one of a few states, although not all states exist in each domain. A run state can be implemented at one of a number of voltage points and hence frequencies. A WFI state, which is also known as a clock gated or waiting-for-interrupt state, is a state where the clocks are gated off but the logic remains in a state from which it can resume quickly. A dormant state is when a domain is powered down but its state is stored (e.g., by software) prior to removing power. An off state is when all power to a domain is removed.
  • States down to dormant are controlled primarily by the WFI and power status registers of the node CPU 210 and/or by IPC 280 operations from the software to the management processor 270 that modify the processing core power state and clock frequency. States below dormant are controlled by operations being sent to the management processor 270, either based on software ahead of time (i.e., before the state is entered) or on system loading. Software will inform the management processor 270, before it enters a low power state (below dormant), what the target state is. The power down state is reached only when all of the power sources are removed from the system.
  • There are several states which can exist in the SOC overall. These are combinations of the different subsystem states described above. Table 1 below provides examples of various overall states of the SOC.
  • TABLE 1
Overall SOC Power Domain States (columns: State; Cores; SCU; Peripherals; Switch; DDR; SRAM; M3)
RUN: Cores ON (a); SCU ON; Peripherals ON; Switch ON; DDR ON (b); SRAM ON; M3 ON
RUN slow: Cores ON (a) at lower voltage
WFI: Cores clock gated
Dormant: Cores dormant
S1: Cores dormant; clocks gated; DDR in self refresh; SRAM in retention
S3: Cores, SCU, Peripherals, Switch, DDR, and SRAM off; M3 off/clock gated
OFF: Full power off
Power down: All batteries drained
(a) Some cores may be in WFI state or dormant state.
(b) DDR can enter auto power down or pre-charge power down states.
  • There are several power states supported in the node CPU 210. Each one of the processing cores 222 can be in a number of states independent from the others. Furthermore, if the processing cores 222 are all in a low power state, then the L2 cache 216 and SCU 214 can potentially transition to dormant and off low power states. The processing cores do not power down their L1 caches until the entire subsystem is being moved into a low power state (which implies that the ACP port and debug are also not in use). Table 2 below provides examples of various power states supported in the node CPU 210.
  • TABLE 2
Node CPU Power States

State        | SCU & L2 | Core 0  | Core 1  | Core 2  | Core 3
RUN          | ON       | ON      | ON      | ON      | ON
RUN slow     | ON slow  | ON slow | ON slow | ON slow | ON slow
WFI          |          | WFI     | WFI     | WFI     | WFI
Dormant      |          | Dormant | Dormant | Dormant | Dormant
S1           | Dormant  | Dormant | Dormant | Dormant | Dormant
S3 or below  | OFF      | OFF     | OFF     | OFF     | OFF
  • When a processing core is in the ON state, it is powered up and running at some run frequency. When a core is in the ON slow state, at least one of the cores is running, but all of those that are running are running at a lower than normal voltage and frequency point. The SCU and L2 are also running at this lower frequency point. Functionally, the ON slow state is the same as the ON state. Control of the ON state and the ON slow state is implemented by the management processor 270. For example, the IPC 280 sends an operation to the management processor 270 indicating that the processing cores 222 can afford to run slower than normal and hence voltage and clock frequency can be sequenced lower asynchronously to software; similarly, an increase-frequency event can also be sent. Frequency changes will have implications for the periphclock within the node CPU. Normally this clock is synchronous and a fixed divide of the coreclock but, in order to maintain correct timing periods, the coreclock-to-periphclock ratio will change as the frequency of the core changes.
  • In addition to the power domains described in the previous section, the node CPU subsystem 202 can also be voltage and frequency scaled. A single voltage and frequency scaling applies across the entire node CPU subsystem 202. In this respect, individual functional blocks and/or subsystem elements cannot be individually set (e.g., on a per-core basis). In regard to the node CPU subsystem 202, the subsystem elements that get uniformly voltage and frequency scaled include the processing cores 222, the L1 caches 224, 226, the media processing engine of each one of the processing cores 222, the SCU 216, and the L2 controller 216. Control for voltage scaling can be implemented via an interface to an external PMIC. Control for frequency scaling can be implemented via PLL control.
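  • A sketch of such a subsystem-wide voltage/frequency change follows: the voltage is adjusted via the external PMIC and the frequency via PLL control, with the whole node CPU subsystem scaled together. The operating points, the PMIC and PLL helper functions, and the ordering convention (raise voltage before frequency, lower it after) are illustrative assumptions.

#include <stdint.h>
#include <stdbool.h>

struct opp { uint32_t millivolts; uint32_t freq_code; };

static const struct opp opp_table[] = {
    { 1100, 0x0E },                                     /* assumed "run" operating point      */
    {  950, 0x08 },                                     /* assumed "run slow" operating point */
};

extern bool pmic_set_voltage_mv(uint32_t mv);           /* hypothetical I2C write to external PMIC */
extern bool pll_program(uint32_t freq_code);            /* hypothetical PLL programming helper     */

bool node_cpu_set_opp(unsigned idx, bool raising_frequency)
{
    const struct opp *p = &opp_table[idx];

    /* Raise voltage before raising frequency; lower it only afterwards. */
    if (raising_frequency && !pmic_set_voltage_mv(p->millivolts))
        return false;
    if (!pll_program(p->freq_code))
        return false;
    if (!raising_frequency && !pmic_set_voltage_mv(p->millivolts))
        return false;
    return true;
}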
  • Continuing the discussion of power management functionality that can be provided within the SOC 200, power management of silicon-based components of the SOC 200 (e.g., processors, controllers, storage media, etc.) is of particular interest with respect to techniques for accomplishing power management. Maximum performance of silicon-based components is achieved by a high clock frequency at high voltage, whereas reduced power consumption is provided by reducing clock frequency. As the voltage is lowered, the transistors of such silicon-based components become weaker and the frequency of operation decreases.
  • Total power consumption of silicon-based components is the sum of dynamic power consumption and leakage power consumption. Leakage power consumption refers to power burned by transistors when they are not switching, and dynamic power consumption refers to power consumption directly related to switching operations. The leakage power consumption is highly dependent on the temperature and voltage of the component, and it is common for leakage power consumption to equal or exceed dynamic power consumption. Because power consumption of silicon-based components is a function of the clock frequency and the square of the operating voltage, a change in voltage will typically have a much more pronounced effect on power consumption than will a change in clock frequency. For example, a 27% reduction in operating voltage for a given clock frequency corresponds to 47% less power, whereas a 27% reduction in clock frequency corresponds to a 27% reduction in power for a given operating voltage. Accordingly, useful power reduction techniques in regard to leakage power consumption can include turning power off, reducing voltage, and reducing temperature through use of heat sinks, fans, packaging, etc., whereas useful power reduction techniques in regard to dynamic power consumption can include lowering clock frequencies, turning off clocks, and reducing operating voltage.
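  • The 27% figures above follow from the standard dynamic power relation; a short worked form of the arithmetic (C denotes switched capacitance, V operating voltage, f clock frequency) is:

\[ P_{\mathrm{dyn}} \propto C\,V^{2}\,f \]
\[ \frac{P(0.73V,\,f)}{P(V,\,f)} = (0.73)^{2} \approx 0.53 \quad\Rightarrow\quad \text{about 47\% less power} \]
\[ \frac{P(V,\,0.73f)}{P(V,\,f)} = 0.73 \quad\Rightarrow\quad \text{27\% less power} \]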
  • As mentioned above, the management subsystem 208 is an “always-on” domain. The PMU can, however, turn off clocks to the management processor 270 and/or its peripherals (e.g., private and/or shared) to reduce dynamic power consumption. The management processor 270 is typically in a WFI (wait-for-interrupt) state. In this state, the clock of the management subsystem 208 is gated to the management processor 270 but still clocks the interrupt controller of the management processor 270 (e.g., the nested vectored interrupt controller (NVIC)). When the NVIC receives an interrupt, it will cause the clocks to the management processor 270 to be turned back on, and the management processor 270 will service the interrupt.
  • Implementing power management within the node CPU 210 can include the PMU 281 selectively controlling the voltage and frequency levels at which components of the node CPU 210 operate. All of the processing cores 222 are clocked at the same frequency and operate at nominally the same voltage (e.g., powered by a common power supply), but the PMU can change this frequency and/or voltage for altering power consumption. Furthermore, to alter leakage power consumption, the PMU 281 can gate the power supply of each one of the processing cores 222 and/or gate clocks to powered-off domains for altering power consumption. The operating system controls which one(s) of the processing cores 222 are being used and whether the unused ones of the processing cores 222 are in WFI/WFE or shutdown mode (e.g., via writes to the power status register of the SCU 212 and execution of WFI/WFE instructions). Table 3 below shows various power modes for the node CPU 210.
  • TABLE 3
Node CPU Power Modes

Mode           | Clocks                    | Power                                     | Comments
Run Mode       | On                        | On                                        | Running code
WFI/WFE Mode   | Off (except wakeup logic) | On                                        | Waiting on interrupt to turn clocks back on
Dormant Mode*  | Off                       | Core power off; RAM power on (retention)  | L1 RAMs retain state*; external wakeup event, M3 resets the A9 processor
Shutdown Mode  | Off                       | Everything off                            | No state retention unless it was moved to DRAM; external wakeup event, M3 resets the A9 processor
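  • As a concrete illustration of the WFI/WFE entry in Table 3, the operating system might park an unused core by writing that core's field in the SCU power status register and then executing WFI, as in the sketch below. The SCU base address and field encoding are assumptions, and the inline assembly assumes an ARMv7-A (Cortex-A9 class) core.

#include <stdint.h>

#define SCU_BASE          0x80000000u                   /* assumed SCU base address        */
#define SCU_POWER_STATUS  (*(volatile uint32_t *)(SCU_BASE + 0x08))

#define SCU_PWR_NORMAL    0u
#define SCU_PWR_DORMANT   2u
#define SCU_PWR_OFF       3u

static void core_enter_low_power(unsigned core, uint32_t mode)
{
    /* Assumed encoding: one byte per core in the SCU power status register. */
    uint32_t v = SCU_POWER_STATUS;
    v &= ~(0x3u << (core * 8));
    v |=  (mode << (core * 8));
    SCU_POWER_STATUS = v;

    /* Clocks to this core (except its interrupt controller) are gated until
     * an interrupt arrives and turns them back on. */
    __asm__ volatile("dsb" ::: "memory");
    __asm__ volatile("wfi");
}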
  • The media processing engine of each one of the processing cores 222 occupies a significant amount of die space. As such, it has a fair amount of leakage current that translates to a corresponding amount of leakage power consumption. Advantageously, the SOC 200 can be implemented in a manner whereby a scalar floating point unit (FPU) is provided in the node CPU power domain and whereby the media processing engine associated with one of the processing cores 222 is in a separate power domain. In a static power management strategy for the media processing engines, an XML configuration associated with the node will have an entry that indicates whether a media processing engine is to be powered on or off during boot configuration. In a settable power management strategy for the media processing engines, an API would be exposed both on the node CPU 210 and via an IPMI interface on the management processor 270 to allow the media processing engine associated with one of the processing cores 222 to be selectively powered up or down. If the power state condition is set on the management processor 270, this setting could be persisted and made the default for a subsequent boot instance. In a dynamic power management strategy for the media processing engines, the media processing engines are powered up only when instruction types associated with the media processing engines are needed. To this end, the strategy would start with the media processing engines powered off and isolated. When a media processing engine instruction is executed, software of the management subsystem 208 and/or node CPU subsystem 202 will trap on an unimplemented instruction, and a suitable software handler can perform the appropriate power-up sequence for the media processing engine(s), thereby allowing the media processing engine instruction to be executed.
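  • The dynamic strategy can be sketched as an undefined-instruction handler: the media processing engine starts powered off, the first media instruction traps, the handler powers the engine up, and the faulting instruction is retried. The decoder check and power-up helper below are hypothetical.

#include <stdint.h>
#include <stdbool.h>

extern void media_engine_power_up(unsigned core);        /* hypothetical PMU power-up request */
extern bool insn_is_media_op(uint32_t insn);              /* hypothetical instruction decode check */

/* Called from the undefined-instruction exception vector; 'pc' is the
 * address of the faulting instruction and 'core' is the current core id.
 * Returns true if the instruction should be re-executed. */
bool undef_instruction_handler(const uint32_t *pc, unsigned core)
{
    uint32_t insn = *pc;

    if (insn_is_media_op(insn)) {
        media_engine_power_up(core);                     /* perform the power-up sequence        */
        return true;                                     /* retry the media processing instruction */
    }
    return false;                                        /* genuinely undefined instruction       */
}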
  • The peripheral subsystem 204 can include one or more power domains that are controlled by the PMU 281. These peripherals include controllers (i.e., interfaces) for PCIe, SATA, NAND, eMMC, and DDR storage media. In one implementation, they are all within a common power domain that has a single reset, isolate, and power-up signaling structure. In another implementation, these controllers can reside in one of a plurality of different power domains. For example, it may be beneficial to have the DDR controller in a separate domain from the other peripheral controllers for allowing the DDR to be selectively accessed by the management processor 270 while other peripherals are in a powered down state. It is disclosed herein that the PMU 281 can also include a PCI power management module that can provide for PCI-compatible active state power management. The PCI power management module is powered up while the node CPU 210 is in a lower power state, contains context that is reset only at power up, and can contain a sideband wake mechanism for the SOC node.
  • The system interconnect subsystem 206 can include two or more power domains that are controlled by the PMU 281. In particular, a portion of the system interconnect subsystem 206 that is considered to be the fabric switch can be divided into two power domains. These power domains are partitioned so that power to the fabric switch can be optimized for leaf nodes that only have 1 or 2 links, to reduce leakage power consumption. For example, a first power domain can contain MAC0, MAC1, MAC2, Link1, Link2, the Switch, Switch Arbitration logic, the CSRs, and global control logic, and a second power domain can contain Outlink/Link0, Link3, and Link4. In this example, there would be three power states: first and second power domains are both off, the first power domain is on and the second power domain is off, and both power domains are on.
  • In certain implementations of power domains within the system interconnect subsystem 206, the fabric switch is configured such that each power domain has an enable bit in a register. When a particular power domain is reset, this enable bit is cleared, thereby disabling functionality of the particular power domain. This enable bit is effectively a synchronous reset to all the logic in the particular power domain. In view of this enable bit functionality, only one reset is needed for the entire fabric switch and each one of the power domains will have its own separate isolate and power-up signals.
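  • A minimal sketch of driving these per-domain controls is shown below: the enable bit acts as a synchronous reset, while isolate and power-up are separate per-domain signals. The register addresses and bit assignments are assumptions for illustration.

#include <stdint.h>

#define FS_PWR_BASE        0x90001000u                   /* assumed base address */
#define FS_DOMAIN_ENABLE   (*(volatile uint32_t *)(FS_PWR_BASE + 0x00))
#define FS_DOMAIN_ISOLATE  (*(volatile uint32_t *)(FS_PWR_BASE + 0x04))
#define FS_DOMAIN_POWER    (*(volatile uint32_t *)(FS_PWR_BASE + 0x08))

/* Domain 0: MAC0-2, Link1-2, switch, arbitration logic, CSRs, global control.
 * Domain 1: Outlink/Link0, Link3, Link4. */
static void fabric_domain_on(unsigned domain)
{
    FS_DOMAIN_POWER   |=  (1u << domain);                /* power up the domain           */
    FS_DOMAIN_ISOLATE &= ~(1u << domain);                /* remove isolation              */
    FS_DOMAIN_ENABLE  |=  (1u << domain);                /* release the synchronous reset */
}

static void fabric_domain_off(unsigned domain)
{
    FS_DOMAIN_ENABLE  &= ~(1u << domain);                /* assert synchronous reset      */
    FS_DOMAIN_ISOLATE |=  (1u << domain);                /* isolate outputs               */
    FS_DOMAIN_POWER   &= ~(1u << domain);                /* remove power to cut leakage   */
}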
  • Turning now to a discussion of interrupts, it should be appreciated and understood that most of the on-chip peripherals generate interrupts. With few exceptions, these interrupts are routed to both the node CPU subsystem 202 and the management subsystem 208. The exceptions to this exist for those peripherals that are private to the management processor 270 and those that are private to the node CPU 210. These interrupts can be acted on in a manner that supports or enables power management functionality (e.g., network proxy functionality) and that support power utilization functionality (e.g., interrupts acquired by the management processor 270 and used for reporting on node CPU utilization).
  • The node CPU 210 can have a hierarchical interrupt scheme in which external interrupts of the node CPU 210 are sent first to an interrupt distributor that resides, for example, in the SCU 212. The interrupts can be routed to any or all of the interrupt controllers of the node CPU 210 (e.g., interrupt controller of any one of the processing cores 222). Under software control, the interrupt distributor controls a list of processing cores to which each interrupt is routed. Each of the quad cores' interrupt controllers allows masking of the interrupt source locally as well.
  • Interrupts are in general visible to both the node CPU 210 and the management processor 270. It is then the responsibility of the management processor to unmask the interrupts it wants to see. If the whole CPU subsystem 202 is powered down (e.g., hibernated), then the management processor 270 will unmask important interrupts of the node CPU 210 to see events that would cause the node CPU 210 to be woken. It is the responsibility of the management processor 270 to either service the interrupt or re-power the OS on the node CPU subsystem 202 so it can service it. Similarly, if a processing core for which the interrupt is intended is in WFI (wait-for-interrupt) mode or WFE (wait-for-exception) mode, the management processor 270 can unmask the interrupt to one of the other processing cores that is already powered up, thereby allowing the already powered up processing core to service the interrupt. This is an example of one subsystem masking an interrupt and allowing another subsystem to service it, which is a form of the network proxy functionality discussed above.
  • Interrupts on the node CPU 210 can also be used for implementing various power modes within power domains of the node CPU subsystem 202. More specifically, the OS running on the node CPU 210 can distribute the processing load among each one of the processing cores 222. In times when peak performance is not necessary, the OS can lower the power consumption within the node CPU 210 by clock-gating or powering down individual cores. As long as at least one of the processing cores 222 is running, the OS requires no intervention from the management processor 270 (e.g., the PMU thereof) for handling interrupts. A particular one of the processing cores 222 can be stopped in the WFI/WFE state, which causes the clock to be gated for most of that particular processing core, except for its interrupt controller. If an interrupt occurs for that particular processing core, the clock of that particular processing core can be turned back on for allowing that particular core to service the interrupt. Alternatively, as discussed above, if an individual core is powered off, the OS of the node CPU 210 can route an interrupt for that core to another one of the processing cores 222 that is already powered up. If the whole node CPU 210 is powered down, interrupts will be steered to the management processor 270 where the event will be seen, and it will then be the responsibility of the management processor 270 to either service the interrupt or reboot the OS on the node CPU 210 so that one of the processing cores 222 can service the interrupt.
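  • The management processor's side of that last case can be sketched as follows: when an interrupt is reflected to it while the node CPU is powered down, it either services the interrupt as proxy or re-powers the node CPU so the OS can service it. The policy check and the wake/unmask helpers are hypothetical.

#include <stdbool.h>

extern bool m3_can_proxy_irq(unsigned irq);               /* hypothetical policy check        */
extern void m3_service_irq(unsigned irq);                 /* hypothetical proxy handler       */
extern void node_cpu_power_up_and_boot(void);             /* hypothetical wake/reboot path    */
extern void gic_unmask_to_core(unsigned irq, unsigned core); /* hypothetical GIC helper       */

void m3_reflected_irq(unsigned irq, bool node_cpu_powered_down)
{
    if (!node_cpu_powered_down)
        return;                                           /* OS handles it without M3 help    */

    if (m3_can_proxy_irq(irq)) {
        m3_service_irq(irq);                              /* service on behalf of the cores   */
    } else {
        node_cpu_power_up_and_boot();                     /* re-power/reboot the OS           */
        gic_unmask_to_core(irq, 0);                       /* let a powered-up core take it    */
    }
}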
  • In summary, in view of the disclosures made herein a skilled person will appreciate that a system on a chip (SOC) refers to integration of one or more processors, one or more memory controllers, and one or more I/O controllers onto a single silicon chip. Furthermore, in view of the disclosures made herein, the skilled person will also appreciate that a SOC configured in accordance with the present invention can be specifically implemented in a manner to provide functionalities definitive of a server. In such implementations, a SOC in accordance with the present invention can be referred to as a server on a chip. In view of the disclosures made herein, the skilled person will appreciate that a server on a chip configured in accordance with the present invention can include a server memory subsystem, server I/O controllers, and a server node interconnect. In one specific embodiment, this server on a chip will include a multi-core CPU, one or more memory controllers that support ECC, and one or more volume server I/O controllers that minimally include Ethernet and SATA controllers. The server on a chip can be structured as a plurality of interconnected subsystems, including a CPU subsystem, a peripherals subsystem, a system interconnect subsystem, and a management subsystem.
  • An exemplary embodiment of a server on a chip that is configured in accordance with the present invention is the ECX-1000 Series server on a chip offered by Calxeda Incorporated. The ECX-1000 Series server on a chip includes a SOC architecture that provides reduced power consumption and reduced space requirements. The ECX-1000 Series server on a chip is well suited for computing environments such as, for example, scalable analytics, webserving, media streaming, infrastructure, cloud computing and cloud storage. A node card configured in accordance with the present invention can include a node card substrate having a plurality of the ECX-1000 Series server on a chip instances (i.e., each a server on a chip unit) mounted on the node card substrate and connected to electrical circuitry of the node card substrate. An electrical connector of the node card enables communication of signals between the node card and one or more other instances of the node card.
  • The ECX-1000 Series server on a chip includes a CPU subsystem (i.e., a processor complex) that uses a plurality of ARM brand processing cores (e.g., four ARM Cortex brand processing cores), which offer the ability to seamlessly turn on-and-off up to several times per second. The CPU subsystem is implemented with server-class workloads in mind and comes with an ECC L2 cache to enhance performance and reduce energy consumption by reducing cache misses. Complementing the ARM brand processing cores is a host of high-performance server-class I/O controllers via standard interfaces such as SATA and PCI Express interfaces.
  • Table 4 below shows technical specification for a specific example of the ECX-1000 Series server on a chip.
  • TABLE 4
Example of ECX-1000 Series server on a chip technical specification

Processor Cores:
1. Up to four ARM® Cortex™-A9 cores @ 1.1 to 1.4 GHz
2. NEON® technology extensions for multimedia and SIMD processing
3. Integrated FPU for floating point acceleration
4. TrustZone® technology for enhanced security
5. Individual power domains per core to minimize overall power consumption

Cache:
1. 32 KB L1 instruction cache per core
2. 32 KB L1 data cache per core
3. 4 MB shared L2 cache with ECC

Fabric Switch:
1. Integrated 80 Gb (8x8) crossbar switch with through-traffic support
2. Five (5) 10 Gb external channels, three (3) 10 Gb internal channels
3. Configurable topology capable of connecting up to 4096 nodes
4. Dynamic Link Speed Control from 1 Gb to 10 Gb to minimize power and maximize performance
5. Network Proxy Support to maintain network presence even with node powered off

Management Engine:
1. Separate embedded processor dedicated for systems management
2. Advanced power management with dynamic power capping
3. Dedicated Ethernet MAC for out-of-band communication
4. Supports IPMI 2.0 and DCMI management protocols
5. Remote console support via Serial-over-LAN (SoL)

Integrated Memory Controller:
1. 72-bit DDR controller with ECC support
2. 32-bit physical memory addressing
3. Supports DDR3 (1.5 V) and DDR3L (1.35 V) at 800/1066/1333 MT/s
4. Single and dual rank support with mirroring

PCI Express:
1. Four (4) integrated Gen2 PCIe controllers
2. One (1) integrated Gen1 PCIe controller
3. Support for up to two (2) PCIe x8 lanes
4. Support for up to four (4) PCIe x1, x2, or x4 lanes

Networking Interfaces:
1. Support for 1 Gb and 10 Gb Ethernet
2. Up to five (5) XAUI 10 Gb ports
3. Up to six (6) 1 Gb SGMII ports (multiplexed w/XAUI ports)
4. Three (3) 10 Gb Ethernet MACs supporting IEEE 802.1Q VLANs, IPv4/6 checksum processing, and TCP/UDP/ICMP checksum offload
5. Support for shared or private management LAN

SATA Controllers:
1. Support for up to five (5) SATA disks
2. Compliant with Serial ATA 2.0, AHCI Revision 1.3, and eSATA specifications
3. SATA 1.5 Gb/s and 3.0 Gb/s speeds supported

SD/eMMC Controller:
1. Compliant with SD 3.0 Host and MMC 4.4 (eMMC) specifications
2. Supports 1 and 4-bit SD modes and 1/4/8-bit MMC modes
3. Read/write rates up to 832 Mbps for MMC and up to 416 Mbps for SD

System Integration Features:
1. Three (3) I2C interfaces
2. Two (2) SPI (master) interfaces
3. Two (2) high-speed UART interfaces
4. 64 GPIO/Interrupt pins
5. JTAG debug port
  • While the foregoing has been with reference to a particular embodiment of the invention, it will be appreciated by those skilled in the art that changes in this embodiment may be made without departing from the principles and spirit of the disclosure, the scope of which is defined by the appended claims.

Claims (20)

1. A server on a chip (SoC), comprising:
a node central processing unit (CPU) subsystem that includes a plurality of processing cores;
a peripheral subsystem that includes a plurality of peripheral controllers;
a system interconnect subsystem configured to provide packet switch functionality within the SoC; and
a management subsystem coupled to the node CPU subsystem, the peripheral subsystem and the system interconnect subsystem, wherein the management subsystem includes a management processor that manages operational functionality of each one of the subsystems.
2. The SoC of claim 1, wherein:
the node CPU subsystem includes a plurality of node CPU subsystem power domains,
the peripheral subsystem includes a plurality of peripheral subsystem power domains,
the system interconnect subsystem includes a plurality of system interconnect subsystem power domains; and
the management processor is configured to manage one or more activities within each of the node CPU subsystem power domains, the peripheral subsystem power domains, and the system interconnect subsystem power domains that influence power consumption therein.
3. The SoC of claim 2, wherein:
the management processor is configured to cause each of the node CPU subsystem power domains, the peripheral subsystem power domains, and the system interconnect subsystem power domains to be selectively transitioned between at least two different power states; and
wherein functionality of at least one operational component of one of the subsystems associated with a respective power domain is configured to transition to a reduced power consumption state in response to the respective power domain being transitioned from a first power state to a second power state.
4. The SoC of claim 3, wherein:
the plurality of processing cores are within separate node CPU subsystem power domains;
at least two of the plurality of peripheral controllers are within separate peripheral subsystem power domains;
at least two XAUI links of the system interconnect subsystem are within separate system interconnect subsystem power domains.
5. The SoC of claim 1, wherein the management subsystem is configured to:
manage power consumption on a per-power domain basis;
act as proxy for the plurality of processing cores for interrupts intended for reception by the plurality of processing cores; and
control a configuration of a variable internal supply used to supply electrical power to the node CPU subsystem.
6. The SoC of claim 1, wherein the management subsystem is configured to:
selectively transition a first clock to the management processor between a first on-state and a first off-state;
selectively transition a second clock to one or more private peripherals of the management processor between a second on-state and a second off-state; and
selectively transition a third clock to one or more shared peripherals of the management processor between a third on-state and a third off-state.
7. The SoC of claim 6, wherein the management subsystem is configured to:
manage power consumption on a per-power domain basis;
act as proxy for the plurality of processing cores for interrupts intended for reception by the plurality of processing cores; and
control a configuration of a variable internal supply used to supply electrical power to the node CPU subsystem.
8. The SoC of claim 1, wherein:
the node CPU subsystem includes a cache memory, a main memory, and a main memory controller coupled between the cache memory and the main memory;
the cache memory is coupled to each of the plurality of processing cores thereby enabling the cache memory to be shared by all of the plurality of processing cores;
the main memory controller is configured to support error code correction (ECC) functionality; and
the peripheral subsystem includes one or more Ethernet controllers and one or more serial advanced technology attachment (SATA) controllers.
9. The SoC of claim 8, wherein the peripheral subsystem further includes:
one or more flash controllers; and
one or more peripheral component interconnect express (PCIe) controllers.
10. The SoC of claim 8, wherein:
the node CPU subsystem includes a plurality of node CPU subsystem power domains,
the peripheral subsystem includes a plurality of peripheral subsystem power domains,
the system interconnect subsystem includes a plurality of system interconnect subsystem power domains; and
the management processor is configured to manage one or more activities within each of the node CPU subsystem power domains, the peripheral subsystem power domains, and the system interconnect subsystem power domains that influence power consumption therein.
11. The SoC of claim 10, wherein:
the management processor is configured to cause each of the node CPU subsystem power domains, the peripheral system power domains, and the system interconnect subsystem power domains to be selectively transitioned between at least two different power states; and
wherein functionality of at least one operational component of one of the subsystems associated with a respective power domain is transitioned to a reduced power consumption state in response to the respective power domain being transitioned from a first power state to a second power state.
12. The SoC of claim 8, wherein the management subsystem is configured to:
manage power consumption on a per-power domain basis;
act as proxy for the plurality of processing cores for interrupts intended for reception by the plurality of processing cores; and
control a configuration of a variable internal supply used to supply electrical power to the node CPU subsystem.
13. The SoC of claim 1, wherein the management subsystem is coupled to the node CPU subsystem via an inter-processor communication module (IPCM), and wherein the management subsystem is coupled to the system interconnect subsystem via a bus fabric.
14. The SoC of claim 13, wherein the IPCM includes data registers and interrupts configured to enable communication between the management processor and the plurality of processing cores of the node CPU subsystem.
15. A node card, comprising:
a node card substrate that includes circuitry configured to enable communication of information between the node card and one or more other node cards; and
a plurality of server on a chip (SoC) units mounted on the node card substrate and electrically connected to the circuitry of the node card substrate, wherein each of the SoC units defines an instance of a SoC node of the node card, wherein each SoC node includes a SoC that comprises:
a node CPU subsystem,
a peripheral subsystem,
a system interconnect subsystem, and
a management subsystem coupled to the node CPU subsystem, the peripheral subsystem, and the system interconnect subsystem, wherein the management subsystem includes a management processor that manages operational functionality of each one of the subsystems.
16. The node card of claim 15, wherein:
the node CPU subsystem of each of the SoC units includes a plurality of processing cores, a cache memory, a main memory, and a main memory controller coupled between the cache memory and the main memory;
the cache memory is coupled to each of the plurality of processing cores thereby enabling the cache memory to be shared by all of the plurality of processing cores;
the main memory controller is configured to support error code correction (ECC) functionality;
the peripheral subsystem of each of the SoC units includes a plurality of peripheral controllers; and
the system interconnect subsystem of each of the SoC units is configured to provide intra-node and inter-node packet connectivity.
17. The node card of claim 16, wherein the management subsystem of each of the SoC units is configured to:
manage power consumption on a per-power domain basis;
act as proxy for the plurality of processing cores for interrupts intended for reception by the plurality of processing cores; and
control a configuration of a variable internal supply used to supply electrical power to the node CPU subsystem.
18. The node card of claim 15, wherein the management processor of each of the SoC units is configured to boot an instance of a second operating system in the node CPU subsystem thereof.
19. The node card of claim 18, wherein:
the management processor is configured to load an operating system boot loader into a main memory of the node CPU subsystem thereof, start the boot loader, and load the second operating system.
20. The node card of claim 15, wherein:
the node CPU subsystem of each of the SoC units includes a plurality of node CPU subsystem power domains,
the peripheral subsystem of each of the SoC units includes a plurality of peripheral subsystem power domains,
the system interconnect subsystem of each of the SoC units includes a plurality of system interconnect subsystem power domains; and
the management processor of each of the SoC units is configured to manage one or more activities within each of the node CPU subsystem power domains, the peripheral subsystem power domains, and the system interconnect subsystem power domains that influence power consumption therein.
US15/281,462 2009-09-24 2016-09-30 Server on a Chip and Node Cards Comprising One or More of Same Abandoned US20170115712A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/281,462 US20170115712A1 (en) 2009-09-24 2016-09-30 Server on a Chip and Node Cards Comprising One or More of Same

Applications Claiming Priority (15)

Application Number Priority Date Filing Date Title
US24559209P 2009-09-24 2009-09-24
US25672309P 2009-10-30 2009-10-30
US12/794,996 US20110103391A1 (en) 2009-10-30 2010-06-07 System and method for high-performance, low-power data center interconnect fabric
US38358510P 2010-09-16 2010-09-16
US12/889,721 US20140359323A1 (en) 2009-09-24 2010-09-24 System and method for closed loop physical resource control in large, multiple-processor installations
US201161489569P 2011-05-24 2011-05-24
US13/234,054 US9876735B2 (en) 2009-10-30 2011-09-15 Performance and power optimized computer system architectures and methods leveraging power optimized tree fabric interconnect
US13/284,855 US20130107444A1 (en) 2011-10-28 2011-10-28 System and method for flexible storage and networking provisioning in large scalable processor installations
US201161553555P 2011-10-31 2011-10-31
US13/453,086 US8599863B2 (en) 2009-10-30 2012-04-23 System and method for using a multi-protocol fabric module across a distributed server interconnect fabric
US13/475,722 US9077654B2 (en) 2009-10-30 2012-05-18 System and method for data center security enhancements leveraging managed server SOCs
US13/475,713 US9054990B2 (en) 2009-10-30 2012-05-18 System and method for data center security enhancements leveraging server SOCs or server fabrics
US13/527,498 US9069929B2 (en) 2011-10-31 2012-06-19 Arbitrating usage of serial port in node card of scalable and modular servers
US13/662,759 US9465771B2 (en) 2009-09-24 2012-10-29 Server on a chip and node cards comprising one or more of same
US15/281,462 US20170115712A1 (en) 2009-09-24 2016-09-30 Server on a Chip and Node Cards Comprising One or More of Same

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US13/662,759 Continuation US9465771B2 (en) 2009-09-24 2012-10-29 Server on a chip and node cards comprising one or more of same

Publications (1)

Publication Number Publication Date
US20170115712A1 true US20170115712A1 (en) 2017-04-27

Family

ID=50548563

Family Applications (2)

Application Number Title Priority Date Filing Date
US13/662,759 Active 2031-05-31 US9465771B2 (en) 2009-09-24 2012-10-29 Server on a chip and node cards comprising one or more of same
US15/281,462 Abandoned US20170115712A1 (en) 2009-09-24 2016-09-30 Server on a Chip and Node Cards Comprising One or More of Same

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US13/662,759 Active 2031-05-31 US9465771B2 (en) 2009-09-24 2012-10-29 Server on a chip and node cards comprising one or more of same

Country Status (1)

Country Link
US (2) US9465771B2 (en)

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109870921A (en) * 2019-03-26 2019-06-11 广东美的制冷设备有限公司 Drive control circuit and household appliance
US10395721B1 (en) 2018-02-26 2019-08-27 Micron Technology, Inc. Memory devices configured to provide external regulated voltages
CN111338984A (en) * 2020-02-25 2020-06-26 大唐半导体科技有限公司 Cache RAM and Retention RAM data high-speed exchange architecture and method thereof
CN113032329A (en) * 2021-05-21 2021-06-25 千芯半导体科技(北京)有限公司 Computing structure, hardware architecture and computing method based on reconfigurable memory chip
US11119153B1 (en) * 2020-05-29 2021-09-14 Stmicroelectronics International N.V. Isolation enable test coverage for multiple power domains
US11231765B2 (en) 2018-06-28 2022-01-25 Nordic Semiconductor Asa Peripheral power domains
US20220114070A1 (en) * 2012-12-28 2022-04-14 Iii Holdings 2, Llc System, Method and Computer Readable Medium for Offloaded Computation of Distributed Application Protocols within a Cluster of Data Processing Nodes
US11467883B2 (en) 2004-03-13 2022-10-11 Iii Holdings 12, Llc Co-allocating a reservation spanning different compute resources types
US11494235B2 (en) 2004-11-08 2022-11-08 Iii Holdings 12, Llc System and method of providing system jobs within a compute environment
US11496415B2 (en) 2005-04-07 2022-11-08 Iii Holdings 12, Llc On-demand access to compute resources
US11522952B2 (en) 2007-09-24 2022-12-06 The Research Foundation For The State University Of New York Automatic clustering for self-organizing grids
US11526304B2 (en) 2009-10-30 2022-12-13 Iii Holdings 2, Llc Memcached server functionality in a cluster of data processing nodes
US11630704B2 (en) 2004-08-20 2023-04-18 Iii Holdings 12, Llc System and method for a workload management and scheduling module to manage access to a compute environment according to local and non-local user identity information
US11650857B2 (en) 2006-03-16 2023-05-16 Iii Holdings 12, Llc System and method for managing a hybrid computer environment
US11652706B2 (en) 2004-06-18 2023-05-16 Iii Holdings 12, Llc System and method for providing dynamic provisioning within a compute environment
US11658916B2 (en) 2005-03-16 2023-05-23 Iii Holdings 12, Llc Simple integration of an on-demand compute environment
US11720290B2 (en) 2009-10-30 2023-08-08 Iii Holdings 2, Llc Memcached server functionality in a cluster of data processing nodes
US11960937B2 (en) 2004-03-13 2024-04-16 Iii Holdings 12, Llc System and method for an optimizing reservation in time of compute resources based on prioritization function and reservation policy parameter
US12120040B2 (en) 2005-03-16 2024-10-15 Iii Holdings 12, Llc On-demand compute environment

Families Citing this family (57)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9465771B2 (en) 2009-09-24 2016-10-11 Iii Holdings 2, Llc Server on a chip and node cards comprising one or more of same
US9069929B2 (en) 2011-10-31 2015-06-30 Iii Holdings 2, Llc Arbitrating usage of serial port in node card of scalable and modular servers
US9876735B2 (en) 2009-10-30 2018-01-23 Iii Holdings 2, Llc Performance and power optimized computer system architectures and methods leveraging power optimized tree fabric interconnect
US9077654B2 (en) 2009-10-30 2015-07-07 Iii Holdings 2, Llc System and method for data center security enhancements leveraging managed server SOCs
US8599863B2 (en) 2009-10-30 2013-12-03 Calxeda, Inc. System and method for using a multi-protocol fabric module across a distributed server interconnect fabric
US20130107444A1 (en) 2011-10-28 2013-05-02 Calxeda, Inc. System and method for flexible storage and networking provisioning in large scalable processor installations
US9054990B2 (en) 2009-10-30 2015-06-09 Iii Holdings 2, Llc System and method for data center security enhancements leveraging server SOCs or server fabrics
US20110103391A1 (en) 2009-10-30 2011-05-05 Smooth-Stone, Inc. C/O Barry Evans System and method for high-performance, low-power data center interconnect fabric
US9680770B2 (en) 2009-10-30 2017-06-13 Iii Holdings 2, Llc System and method for using a multi-protocol fabric module across a distributed server interconnect fabric
US9311269B2 (en) 2009-10-30 2016-04-12 Iii Holdings 2, Llc Network proxy for high-performance, low-power data center interconnect fabric
US9648102B1 (en) * 2012-12-27 2017-05-09 Iii Holdings 2, Llc Memcached server functionality in a cluster of data processing nodes
US9507406B2 (en) 2012-09-21 2016-11-29 Atmel Corporation Configuring power domains of a microcontroller system
US9323312B2 (en) 2012-09-21 2016-04-26 Atmel Corporation System and methods for delaying interrupts in a microcontroller system
US9489307B2 (en) * 2012-10-24 2016-11-08 Texas Instruments Incorporated Multi domain bridge with auto snoop response
US9250679B2 (en) 2013-03-08 2016-02-02 Intel Corporation Reduced wake up delay for on-die routers
US20150046646A1 (en) * 2013-08-07 2015-02-12 Ihab H. Elzind Virtual Network Disk Architectures and Related Systems
US9383807B2 (en) 2013-10-01 2016-07-05 Atmel Corporation Configuring power domains of a microcontroller system
CN103984394A (en) * 2014-05-08 2014-08-13 浪潮电子信息产业股份有限公司 High-density energy-saving blade server system
WO2015195076A1 (en) * 2014-06-16 2015-12-23 Hewlett-Packard Development Company, L.P. Cache coherency for direct memory access operations
US9684367B2 (en) * 2014-06-26 2017-06-20 Atmel Corporation Power trace port for tracing states of power domains
US9804989B2 (en) * 2014-07-25 2017-10-31 Micron Technology, Inc. Systems, devices, and methods for selective communication through an electrical connector
US10365947B2 (en) 2014-07-28 2019-07-30 Hewlett Packard Enterprise Development Lp Multi-core processor including a master core performing tasks involving operating system kernel-related features on behalf of slave cores
KR102291505B1 (en) 2014-11-24 2021-08-23 삼성전자주식회사 Storage device and operating method of storage device
US10394731B2 (en) 2014-12-19 2019-08-27 Amazon Technologies, Inc. System on a chip comprising reconfigurable resources for multiple compute sub-systems
US10523585B2 (en) * 2014-12-19 2019-12-31 Amazon Technologies, Inc. System on a chip comprising multiple compute sub-systems
US11200192B2 (en) * 2015-02-13 2021-12-14 Amazon Technologies, Inc. Multi-mode system on a chip
WO2016159935A1 (en) * 2015-03-27 2016-10-06 Intel Corporation Dynamic configuration of input/output controller access lanes
US20160292115A1 (en) * 2015-03-30 2016-10-06 Integrated Device Technology, Inc. Methods and Apparatus for IO, Processing and Memory Bandwidth Optimization for Analytics Systems
US9811492B2 (en) * 2015-08-05 2017-11-07 American Megatrends, Inc. System and method for providing internal system interface-based bridging support in management controller
TWI588658B (en) * 2015-10-20 2017-06-21 旺宏電子股份有限公司 I/o bus shared memory system
CN105550010B (en) * 2016-03-11 2019-02-05 湘潭大学 Intelligent wireless program loading method and system based on SoC
CN105930598B (en) * 2016-04-27 2019-05-03 南京大学 Hierarchical information processing method and circuit based on a controller pipeline architecture
TW201741899A (en) * 2016-05-31 2017-12-01 創義達科技股份有限公司 Apparatus assigning controller and data sharing method
CN106201362B (en) * 2016-07-22 2019-04-30 纳瓦电子(上海)有限公司 Method for storing configuration information
US11042496B1 (en) * 2016-08-17 2021-06-22 Amazon Technologies, Inc. Peer-to-peer PCI topology
CN106326753B (en) * 2016-08-23 2020-04-28 记忆科技(深圳)有限公司 Encryption hub device implemented based on an eMMC interface
US20180077021A1 (en) * 2016-09-14 2018-03-15 Apple Inc. Selective Network Sleep and Wake
US10419227B2 (en) * 2017-01-09 2019-09-17 Allied Telesis Holdings Kabushiki Kaisha Network card
TWI644214B (en) * 2017-05-12 2018-12-11 神雲科技股份有限公司 Method for initializing peripheral component interconnect express card
US11500681B2 (en) * 2017-06-29 2022-11-15 Intel Corporation Technologies for managing quality of service platform interconnects
CN107491408B (en) * 2017-07-31 2023-09-15 郑州云海信息技术有限公司 Computing server node
CN108062234B (en) * 2017-12-07 2021-07-27 郑州云海信息技术有限公司 System and method for enabling a server host to access BMC flash through a mailbox protocol
US10671148B2 (en) * 2017-12-21 2020-06-02 Advanced Micro Devices, Inc. Multi-node system low power management
US10719241B2 (en) * 2018-05-25 2020-07-21 Micron Technology, Inc. Power management integrated circuit with embedded address resolution protocol circuitry
US11789883B2 (en) * 2018-08-14 2023-10-17 Intel Corporation Inter-die communication of programmable logic devices
US11436024B2 (en) * 2018-12-27 2022-09-06 Texas Instruments Incorporated Independent operation of an ethernet switch integrated on a system on a chip
US10908214B2 (en) * 2019-03-01 2021-02-02 Arm Limited Built-in self-test in a data processing apparatus
US11106471B2 (en) 2019-03-29 2021-08-31 Dell Products L.P. System and method to securely map UEFI ISCSI target for OS boot using secure M-Search command option in UEFI discover protocol
TWI735050B (en) * 2019-10-09 2021-08-01 宜鼎國際股份有限公司 Data storage device, electronic apparatus, and system capable of remotely controlling electronic apparatus
US11392526B2 (en) * 2020-06-04 2022-07-19 Micron Technology, Inc. Memory system with selectively interfaceable memory subsystem
CN111813737A (en) * 2020-09-02 2020-10-23 展讯通信(上海)有限公司 System-level chip and smart wearable device
CN114168508B (en) * 2020-09-10 2023-10-13 富联精密电子(天津)有限公司 Single-wire bidirectional communication circuit and single-wire bidirectional communication method
CN114510136A (en) * 2020-11-17 2022-05-17 比亚迪股份有限公司 Central processing unit system and power supply management device thereof
JP7393380B2 (en) * 2021-04-21 2023-12-06 矢崎総業株式会社 Communication system and communication system arrangement method
US20220345378A1 (en) * 2021-04-26 2022-10-27 Hewlett Packard Enterprise Development Lp Electronic paper-based display device node fault visualization
CN115603326B (en) * 2022-12-15 2023-08-04 国网浙江省电力有限公司金华供电公司 Power distribution network load transfer method and system based on tree topology
CN115964257B (en) * 2023-03-17 2023-06-06 上海谐振半导体科技有限公司 Alarm device and method based on system interrupt design

Family Cites Families (335)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5594908A (en) 1989-12-27 1997-01-14 Hyatt; Gilbert P. Computer system having a serial keyboard, a serial display, and a dynamic memory with memory refresh
US5396635A (en) 1990-06-01 1995-03-07 Vadem Corporation Power conservation apparatus having multiple power reduction levels dependent upon the activity of the computer system
US5451936A (en) 1991-06-20 1995-09-19 The Johns Hopkins University Non-blocking broadcast network
US5781187A (en) 1994-05-31 1998-07-14 Advanced Micro Devices, Inc. Interrupt transmission via specialized bus cycle within a symmetrical multiprocessing system
JPH08123763A (en) 1994-10-26 1996-05-17 Nec Corp Memory assigning system for distributed processing system
US6055618A (en) 1995-10-31 2000-04-25 Cray Research, Inc. Virtual maintenance network in multiprocessing system having a non-flow controlled virtual maintenance channel
US6842430B1 (en) 1996-10-16 2005-01-11 Koninklijke Philips Electronics N.V. Method for configuring and routing data within a wireless multihop network and a wireless network for implementing the same
JP3662378B2 (en) 1996-12-17 2005-06-22 川崎マイクロエレクトロニクス株式会社 Network repeater
US5908468A (en) 1997-10-24 1999-06-01 Advanced Micro Devices, Inc. Data transfer network on a chip utilizing a multiple traffic circle topology
US5968176A (en) 1997-05-29 1999-10-19 3Com Corporation Multilayer firewall system
US5971804A (en) 1997-06-30 1999-10-26 Emc Corporation Backplane having strip transmission line ethernet bus
US6507586B1 (en) 1997-09-18 2003-01-14 International Business Machines Corporation Multicast data transmission over a one-way broadband channel
KR100286375B1 (en) 1997-10-02 2001-04-16 윤종용 Radiator of electronic system and computer system having the same
US5901048A (en) 1997-12-11 1999-05-04 International Business Machines Corporation Printed circuit board with chip collar
KR100250437B1 (en) 1997-12-26 2000-04-01 정선종 Path control device for round robin arbitration and adaptation
US6192414B1 (en) 1998-01-27 2001-02-20 Moore Products Co. Network communications system manager
US8108508B1 (en) 1998-06-22 2012-01-31 Hewlett-Packard Development Company, L.P. Web server chip for network manageability
US6373841B1 (en) 1998-06-22 2002-04-16 Agilent Technologies, Inc. Integrated LAN controller and web server chip
US6181699B1 (en) 1998-07-01 2001-01-30 National Semiconductor Corporation Apparatus and method of assigning VLAN tags
US6314501B1 (en) 1998-07-23 2001-11-06 Unisys Corporation Computer system and method for operating multiple operating systems in different partitions of the computer system and for allowing the different partitions to communicate with one another through shared memory
US6574238B1 (en) 1998-08-26 2003-06-03 Intel Corporation Inter-switch link header modification
AU755189B2 (en) 1999-03-31 2002-12-05 British Telecommunications Public Limited Company Progressive routing in a communications network
US8346971B2 (en) 1999-05-04 2013-01-01 At&T Intellectual Property I, Lp Data transfer, synchronising applications, and low latency networks
US6711691B1 (en) 1999-05-13 2004-03-23 Apple Computer, Inc. Power management for computer systems
US7970929B1 (en) 2002-03-19 2011-06-28 Dunti Llc Apparatus, system, and method for routing data to and from a host that is moved from one location on a communication system to another location on the communication system
US6442137B1 (en) 1999-05-24 2002-08-27 Advanced Micro Devices, Inc. Apparatus and method in a network switch for swapping memory access slots between gigabit port and expansion port
US7020695B1 (en) 1999-05-28 2006-03-28 Oracle International Corporation Using a cluster-wide shared repository to provide the latest consistent definition of the cluster (avoiding the partition-in-time problem)
US6446192B1 (en) 1999-06-04 2002-09-03 Embrace Networks, Inc. Remote monitoring and control of equipment over computer networks using a single web interfacing chip
US6697359B1 (en) 1999-07-02 2004-02-24 Ancor Communications, Inc. High performance switch fabric element and switch systems
US7801132B2 (en) 1999-11-09 2010-09-21 Synchrodyne Networks, Inc. Interface system and methodology having scheduled connection responsive to common time reference
US6857026B1 (en) 1999-12-14 2005-02-15 Nortel Networks Limited Using alternate routes for fail-over in a communication network
US8171204B2 (en) 2000-01-06 2012-05-01 Super Talent Electronics, Inc. Intelligent solid-state non-volatile memory device (NVMD) system with multi-level caching of multiple channels
US6608564B2 (en) 2000-01-25 2003-08-19 Hewlett-Packard Development Company, L.P. Removable memory cartridge system for use with a server or other processor-based device
US20020107903A1 (en) 2000-11-07 2002-08-08 Richter Roger K. Methods and systems for the order serialization of information in a network processing environment
US6990063B1 (en) 2000-03-07 2006-01-24 Cisco Technology, Inc. Distributing fault indications and maintaining and using a data structure indicating faults to route traffic in a packet switching system
US6556952B1 (en) 2000-05-04 2003-04-29 Advanced Micro Devices, Inc. Performance monitoring and optimizing of controller parameters
US7080078B1 (en) 2000-05-09 2006-07-18 Sun Microsystems, Inc. Mechanism and apparatus for URI-addressable repositories of service advertisements and other content in a distributed computing environment
US7143153B1 (en) 2000-11-09 2006-11-28 Ciena Corporation Internal network device dynamic health monitoring
JP2001333091A (en) 2000-05-23 2001-11-30 Fujitsu Ltd Communication equipment
US6816750B1 (en) 2000-06-09 2004-11-09 Cirrus Logic, Inc. System-on-a-chip
US6668308B2 (en) 2000-06-10 2003-12-23 Hewlett-Packard Development Company, L.P. Scalable architecture based on single-chip multiprocessing
US6452809B1 (en) 2000-11-10 2002-09-17 Galactic Computing Corporation Scalable internet engine
US7032119B2 (en) 2000-09-27 2006-04-18 Amphus, Inc. Dynamic power and workload management for multi-server system
US6760861B2 (en) 2000-09-29 2004-07-06 Zeronines Technology, Inc. System, method and apparatus for data processing and storage to provide continuous operations independent of device failure or disaster
US7274705B2 (en) 2000-10-03 2007-09-25 Broadcom Corporation Method and apparatus for reducing clock speed and power consumption
US20020040425A1 (en) 2000-10-04 2002-04-04 David Chaiken Multi-dimensional integrated circuit connection network using LDT
US7165120B1 (en) 2000-10-11 2007-01-16 Sun Microsystems, Inc. Server node with integrated networking capabilities
US6954463B1 (en) 2000-12-11 2005-10-11 Cisco Technology, Inc. Distributed packet processing architecture for network access servers
US7616646B1 (en) 2000-12-12 2009-11-10 Cisco Technology, Inc. Intraserver tag-switched distributed packet processing for network access servers
JP3532153B2 (en) 2000-12-22 2004-05-31 沖電気工業株式会社 Level shifter control circuit
AU2001297630A1 (en) 2000-12-29 2002-09-12 Ming Qiu Server array hardware architecture and system
US20020097732A1 (en) 2001-01-19 2002-07-25 Tom Worster Virtual private network protocol
US6977939B2 (en) 2001-01-26 2005-12-20 Microsoft Corporation Method and apparatus for emulating ethernet functionality over a serial bus
US7339786B2 (en) 2001-03-05 2008-03-04 Intel Corporation Modular server architecture with Ethernet routed across a backplane utilizing an integrated Ethernet switch module
US7093280B2 (en) 2001-03-30 2006-08-15 Juniper Networks, Inc. Internet security system
US20030196126A1 (en) 2002-04-11 2003-10-16 Fung Henry T. System, method, and architecture for dynamic server power management and dynamic workload management for multi-server environment
US6996058B2 (en) 2001-04-27 2006-02-07 The Boeing Company Method and system for interswitch load balancing in a communications network
US20020161917A1 (en) 2001-04-30 2002-10-31 Shapiro Aaron M. Methods and systems for dynamic routing of data in a network
US8009569B2 (en) 2001-05-07 2011-08-30 Vitesse Semiconductor Corporation System and a method for maintaining quality of service through a congested network
US7161901B2 (en) 2001-05-07 2007-01-09 Vitesse Semiconductor Corporation Automatic load balancing in switch fabrics
US6766389B2 (en) 2001-05-18 2004-07-20 Broadcom Corporation System on a chip for networking
DE10127198A1 (en) 2001-06-05 2002-12-19 Infineon Technologies Ag Physical address provision method for processor system with virtual addressing uses hierarchy mapping process for conversion of virtual address
US6950895B2 (en) 2001-06-13 2005-09-27 Intel Corporation Modular server architecture
US7159017B2 (en) 2001-06-28 2007-01-02 Fujitsu Limited Routing mechanism for static load balancing in a partitioned computer system with a fully connected network
US7200662B2 (en) 2001-07-06 2007-04-03 Juniper Networks, Inc. Integrated rule network management system
US6813676B1 (en) 2001-07-27 2004-11-02 Lsi Logic Corporation Host interface bypass on a fabric based array controller
US6724635B2 (en) 2001-08-07 2004-04-20 Hewlett-Packard Development Company, L.P. LCD panel for a server system
US6968470B2 (en) 2001-08-07 2005-11-22 Hewlett-Packard Development Company, L.P. System and method for power management in a server system
US7325050B2 (en) 2001-09-19 2008-01-29 Dell Products L.P. System and method for strategic power reduction in a computer system
US7337333B2 (en) 2001-09-19 2008-02-26 Dell Products L.P. System and method for strategic power supply sequencing in a computer system with multiple processing resources and multiple power supplies
US6779086B2 (en) 2001-10-16 2004-08-17 International Business Machines Corporation Symmetric multiprocessor systems with an independent super-coherent cache directory
US7447197B2 (en) 2001-10-18 2008-11-04 Qlogic, Corporation System and method of providing network node services
US8325716B2 (en) 2001-10-22 2012-12-04 Broadcom Corporation Data path optimization algorithm
US6963948B1 (en) 2001-11-01 2005-11-08 Advanced Micro Devices, Inc. Microcomputer bridge architecture with an embedded microcontroller
US7310319B2 (en) 2001-11-02 2007-12-18 Intel Corporation Multiple-domain processing system using hierarchically orthogonal switching fabric
US7464016B2 (en) 2001-11-09 2008-12-09 Sun Microsystems, Inc. Hot plug and hot pull system simulation
US7209657B1 (en) 2001-12-03 2007-04-24 Cheetah Omni, Llc Optical routing using a star switching fabric
US7599360B2 (en) 2001-12-26 2009-10-06 Cisco Technology, Inc. Methods and apparatus for encapsulating a frame for transmission in a storage area network
US20030140190A1 (en) 2002-01-23 2003-07-24 Sun Microsystems, Inc. Auto-SCSI termination enable in a CPCI hot swap system
US7340777B1 (en) 2003-03-31 2008-03-04 Symantec Corporation In memory heuristic system and method for detecting viruses
US7284067B2 (en) 2002-02-20 2007-10-16 Hewlett-Packard Development Company, L.P. Method for integrated load balancing among peer servers
US20030172191A1 (en) 2002-02-22 2003-09-11 Williams Joel R. Coupling of CPU and disk drive to form a server and aggregating a plurality of servers into server farms
US7096377B2 (en) * 2002-03-27 2006-08-22 Intel Corporation Method and apparatus for setting timing parameters
US20030202520A1 (en) 2002-04-26 2003-10-30 Maxxan Systems, Inc. Scalable switch fabric system and apparatus for computer networks
US7095738B1 (en) 2002-05-07 2006-08-22 Cisco Technology, Inc. System and method for deriving IPv6 scope identifiers and for mapping the identifiers into IPv6 addresses
US7353530B1 (en) 2002-05-10 2008-04-01 At&T Corp. Method and apparatus for assigning communication nodes to CMTS cards
US7376125B1 (en) 2002-06-04 2008-05-20 Fortinet, Inc. Service processing switch
US7161904B2 (en) 2002-06-04 2007-01-09 Fortinet, Inc. System and method for hierarchical metering in a virtual router based network switch
US7415723B2 (en) 2002-06-11 2008-08-19 Pandya Ashish A Distributed network security system and a hardware processor therefor
US7453870B2 (en) 2002-06-12 2008-11-18 Intel Corporation Backplane for switch fabric
US7180866B1 (en) 2002-07-11 2007-02-20 Nortel Networks Limited Rerouting in connection-oriented communication networks and communication systems
US7039018B2 (en) 2002-07-17 2006-05-02 Intel Corporation Technique to improve network routing using best-match and exact-match techniques
US7286544B2 (en) 2002-07-25 2007-10-23 Brocade Communications Systems, Inc. Virtualized multiport switch
US7286527B2 (en) 2002-07-26 2007-10-23 Brocade Communications Systems, Inc. Method and apparatus for round trip delay measurement in a bi-directional, point-to-point, serial data channel
US8295288B2 (en) 2002-07-30 2012-10-23 Brocade Communications System, Inc. Registered state change notification for a fibre channel network
US7055044B2 (en) 2002-08-12 2006-05-30 Hewlett-Packard Development Company, L.P. System and method for voltage management of a processor to optimize performance and power dissipation
EP1394985A1 (en) 2002-08-28 2004-03-03 Siemens Aktiengesellschaft Test method for network path between network elements in communication networks
US20110090633A1 (en) 2002-09-23 2011-04-21 Josef Rabinovitz Modular sata data storage device assembly
US7080283B1 (en) 2002-10-15 2006-07-18 Tensilica, Inc. Simultaneous real-time trace and debug for multiple processing core systems on a chip
US8199636B1 (en) 2002-10-18 2012-06-12 Alcatel Lucent Bridged network system with traffic resiliency upon link failure
US7792113B1 (en) 2002-10-21 2010-09-07 Cisco Technology, Inc. Method and system for policy-based forwarding
US7512788B2 (en) 2002-12-10 2009-03-31 International Business Machines Corporation Method and apparatus for anonymous group messaging in a distributed messaging system
US7917658B2 (en) 2003-01-21 2011-03-29 Emulex Design And Manufacturing Corporation Switching apparatus and method for link initialization in a shared I/O environment
US8024548B2 (en) 2003-02-18 2011-09-20 Christopher Joseph Daffron Integrated circuit microprocessor that constructs, at run time, integrated reconfigurable logic into persistent finite state machines from pre-compiled machine code instruction sequences
US7447147B2 (en) 2003-02-28 2008-11-04 Cisco Technology, Inc. Ethernet switch with configurable alarms
US7039771B1 (en) 2003-03-10 2006-05-02 Marvell International Ltd. Method and system for supporting multiple external serial port devices using a serial port controller in embedded disk controllers
US7216123B2 (en) 2003-03-28 2007-05-08 Board Of Trustees Of The Leland Stanford Junior University Methods for ranking nodes in large directed graphs
US20040215650A1 (en) 2003-04-09 2004-10-28 Ullattil Shaji Interfaces and methods for group policy management
US7047372B2 (en) 2003-04-15 2006-05-16 Newisys, Inc. Managing I/O accesses in multiprocessor systems
US7334064B2 (en) 2003-04-23 2008-02-19 Dot Hill Systems Corporation Application server blade for embedded storage appliance
US20040215991A1 (en) 2003-04-23 2004-10-28 Dell Products L.P. Power-up of multiple processors when a voltage regulator module has failed
US20040215864A1 (en) 2003-04-28 2004-10-28 International Business Machines Corporation Non-disruptive, dynamic hot-add and hot-remove of non-symmetric data processing system resources
US7685254B2 (en) 2003-06-10 2010-03-23 Pandya Ashish A Runtime adaptable search processor
US7400996B2 (en) 2003-06-26 2008-07-15 Benjamin Thomas Percer Use of I2C-based potentiometers to enable voltage rail variation under BMC control
US7477655B2 (en) 2003-07-21 2009-01-13 Qlogic, Corporation Method and system for power control of fibre channel switches
US7894348B2 (en) 2003-07-21 2011-02-22 Qlogic, Corporation Method and system for congestion control in a fibre channel switch
US7646767B2 (en) 2003-07-21 2010-01-12 Qlogic, Corporation Method and system for programmable data dependant network routing
US7512067B2 (en) 2003-07-21 2009-03-31 Qlogic, Corporation Method and system for congestion control based on optimum bandwidth allocation in a fibre channel switch
US7353362B2 (en) 2003-07-25 2008-04-01 International Business Machines Corporation Multiprocessor subsystem in SoC with bridge between processor clusters interconnection and SoC system bus
US7412588B2 (en) 2003-07-25 2008-08-12 International Business Machines Corporation Network processor system on chip with bridge coupling protocol converting multiprocessor macro core local bus to peripheral interfaces coupled system bus
US7170315B2 (en) 2003-07-31 2007-01-30 Actel Corporation Programmable system on a chip
US7028125B2 (en) 2003-08-04 2006-04-11 Inventec Corporation Hot-pluggable peripheral input device coupling system
US7620736B2 (en) 2003-08-08 2009-11-17 Cray Canada Corporation Network topology having nodes interconnected by extended diagonal links
US20050050334A1 (en) 2003-08-29 2005-03-03 Trend Micro Incorporated, A Japanese Corporation Network traffic management by a virus/worm monitor in a distributed network
US7934005B2 (en) 2003-09-08 2011-04-26 Koolspan, Inc. Subnet box
WO2005038599A2 (en) 2003-10-14 2005-04-28 Raptor Networks Technology, Inc. Switching system with distributed switching fabric
US7174470B2 (en) 2003-10-14 2007-02-06 Hewlett-Packard Development Company, L.P. Computer data bus interface control
US7415543B2 (en) 2003-11-12 2008-08-19 Lsi Corporation Serial port initialization in storage system controllers
US7916638B2 (en) 2003-12-24 2011-03-29 Alcatel Lucent Time-independent deficit round robin method and system
US7109760B1 (en) 2004-01-05 2006-09-19 Integrated Device Technology, Inc. Delay-locked loop (DLL) integrated circuits that support efficient phase locking of clock signals having non-unity duty cycles
JP4248420B2 (en) 2004-02-06 2009-04-02 日本電信電話株式会社 Handover control method for mobile communication network
US7664110B1 (en) 2004-02-07 2010-02-16 Habanero Holdings, Inc. Input/output controller for coupling the processor-memory complex to the fabric in fabric-backplane enterprise servers
US7583661B2 (en) 2004-03-05 2009-09-01 Sid Chaudhuri Method and apparatus for improved IP networks and high-quality services
US7865582B2 (en) 2004-03-24 2011-01-04 Hewlett-Packard Development Company, L.P. System and method for assigning an application component to a computing resource
ITMI20040600A1 (en) 2004-03-26 2004-06-26 Atmel Corp DSP SYSTEM ON DOUBLE PROCESSOR WITH MOBILE COMB IN THE COMPLEX DOMAIN
EP1591906A1 (en) 2004-04-27 2005-11-02 Texas Instruments Incorporated Efficient data transfer from an ASIC to a host using DMA
US7436832B2 (en) 2004-05-05 2008-10-14 Gigamon Systems Llc Asymmetric packets switch and a method of use
US7203063B2 (en) 2004-05-21 2007-04-10 Hewlett-Packard Development Company, L.P. Small form factor liquid loop cooling system
ES2246702B2 (en) 2004-06-02 2007-06-16 L & M DATA COMMUNICATIONS, S.A. ETHERNET UNIVERSAL TELECOMMUNICATIONS SERVICE.
US7467358B2 (en) 2004-06-03 2008-12-16 Gwangju Institute Of Science And Technology Asynchronous switch based on butterfly fat-tree for network on chip application
EP2408119A1 (en) 2004-06-15 2012-01-18 Fujitsu Component Limited Transceiver module comprising optical and electrical connection
JP4334419B2 (en) 2004-06-30 2009-09-30 富士通株式会社 Transmission equipment
US7586904B2 (en) 2004-07-15 2009-09-08 Broadcom Corp. Method and system for a gigabit Ethernet IP telephone chip with no DSP core, which uses a RISC core with instruction extensions to support voice processing
US9264384B1 (en) 2004-07-22 2016-02-16 Oracle International Corporation Resource virtualization mechanism including virtual host bus adapters
US7466712B2 (en) 2004-07-30 2008-12-16 Brocade Communications Systems, Inc. System and method for providing proxy and translation domains in a fibre channel router
US7657756B2 (en) 2004-10-08 2010-02-02 International Business Machines Corporation Secure memory caching structures for data, integrity and version values
US7257655B1 (en) 2004-10-13 2007-08-14 Altera Corporation Embedded PCI-Express implementation
CN101057223B (en) 2004-10-15 2011-09-14 索尼计算机娱乐公司 Methods and apparatus for supporting multiple configurations in a multi-processor system
US7620057B1 (en) 2004-10-19 2009-11-17 Broadcom Corporation Cache line replacement with zero latency
US20060090025A1 (en) 2004-10-25 2006-04-27 Tufford Robert C 9U payload module configurations
US7760720B2 (en) 2004-11-09 2010-07-20 Cisco Technology, Inc. Translating native medium access control (MAC) addresses to hierarchical MAC addresses and their use
US7278582B1 (en) 2004-12-03 2007-10-09 Sun Microsystems, Inc. Hardware security module (HSM) chip card
US7394288B1 (en) 2004-12-13 2008-07-01 Massachusetts Institute Of Technology Transferring data in a parallel processing environment
TWM270514U (en) 2004-12-27 2005-07-11 Quanta Comp Inc Blade server system
US8533777B2 (en) 2004-12-29 2013-09-10 Intel Corporation Mechanism to determine trust of out-of-band management agents
US7676841B2 (en) 2005-02-01 2010-03-09 Fmr Llc Network intrusion mitigation
JP4489030B2 (en) 2005-02-07 2010-06-23 株式会社ソニー・コンピュータエンタテインメント Method and apparatus for providing a secure boot sequence within a processor
US8140770B2 (en) 2005-02-10 2012-03-20 International Business Machines Corporation Data processing system and method for predictively selecting a scope of broadcast of an operation
US7467306B2 (en) 2005-03-08 2008-12-16 Hewlett-Packard Development Company, L.P. Methods and systems for allocating power to an electronic device
US7881332B2 (en) 2005-04-01 2011-02-01 International Business Machines Corporation Configurable ports for a host ethernet adapter
JP4591185B2 (en) 2005-04-28 2010-12-01 株式会社日立製作所 Server device
US7363463B2 (en) 2005-05-13 2008-04-22 Microsoft Corporation Method and system for caching address translations from multiple address spaces in virtual machines
US7586841B2 (en) 2005-05-31 2009-09-08 Cisco Technology, Inc. System and method for protecting against failure of a TE-LSP tail-end node
US7596144B2 (en) 2005-06-07 2009-09-29 Broadcom Corp. System-on-a-chip (SoC) device with integrated support for ethernet, TCP, iSCSI, RDMA, and network application acceleration
EP1897317A1 (en) 2005-06-23 2008-03-12 TELEFONAKTIEBOLAGET LM ERICSSON (publ) Arrangement and method relating to load distribution
JP2007012000A (en) 2005-07-04 2007-01-18 Hitachi Ltd Storage controller and storage system
US7461274B2 (en) 2005-08-23 2008-12-02 International Business Machines Corporation Method for maximizing server utilization in a resource constrained environment
US8982778B2 (en) 2005-09-19 2015-03-17 Qualcomm Incorporated Packet routing in a wireless communications environment
US7382154B2 (en) 2005-10-03 2008-06-03 Honeywell International Inc. Reconfigurable network on a chip
US8516165B2 (en) 2005-10-19 2013-08-20 Nvidia Corporation System and method for encoding packet header to enable higher bandwidth efficiency across bus links
US7574590B2 (en) 2005-10-26 2009-08-11 Sigmatel, Inc. Method for booting a system on a chip integrated circuit
CN100417118C (en) 2005-10-28 2008-09-03 华为技术有限公司 System and method for updating the position of a mobile node in a wireless mesh network
CN2852260Y (en) 2005-12-01 2006-12-27 华为技术有限公司 Server
EP1808994A1 (en) 2006-01-12 2007-07-18 Alcatel Lucent Universal switch for transporting packet data frames
WO2007084403A2 (en) 2006-01-13 2007-07-26 Sun Microsystems, Inc. Compact rackmount storage server
WO2007084422A2 (en) 2006-01-13 2007-07-26 Sun Microsystems, Inc. Modular blade server
WO2007084735A2 (en) 2006-01-20 2007-07-26 Avise Partners Customer service management
US7991817B2 (en) 2006-01-23 2011-08-02 California Institute Of Technology Method and a circuit using an associative calculator for calculating a sequence of non-associative operations
US20070180310A1 (en) 2006-02-02 2007-08-02 Texas Instruments, Inc. Multi-core architecture with hardware messaging
US7606225B2 (en) 2006-02-06 2009-10-20 Fortinet, Inc. Integrated security switch
US20070226795A1 (en) 2006-02-09 2007-09-27 Texas Instruments Incorporated Virtual cores and hardware-supported hypervisor integrated circuits, systems, methods and processes of manufacture
US9177176B2 (en) 2006-02-27 2015-11-03 Broadcom Corporation Method and system for secure system-on-a-chip architecture for multimedia data processing
US20090133129A1 (en) 2006-03-06 2009-05-21 Lg Electronics Inc. Data transferring method
FR2898753B1 (en) 2006-03-16 2008-04-18 Commissariat Energie Atomique SEMI-DISTRIBUTED CONTROL CHIP SYSTEM
US7555666B2 (en) 2006-05-04 2009-06-30 Dell Products L.P. Power profiling application for managing power allocation in an information handling system
JP2007304687A (en) 2006-05-09 2007-11-22 Hitachi Ltd Cluster constitution and its control means
US7660922B2 (en) 2006-05-12 2010-02-09 Intel Corporation Mechanism to flexibly support multiple device numbers on point-to-point interconnect upstream ports
US20070280230A1 (en) 2006-05-31 2007-12-06 Motorola, Inc Method and system for service discovery across a wide area network
US7522468B2 (en) 2006-06-08 2009-04-21 Unity Semiconductor Corporation Serial memory interface
CN101094125A (en) 2006-06-23 2007-12-26 华为技术有限公司 Switching structure with expanded switching bandwidth in ATCA/ATCA 300
US7693072B2 (en) 2006-07-13 2010-04-06 At&T Intellectual Property I, L.P. Method and apparatus for configuring a network topology with alternative communication paths
US20080040463A1 (en) 2006-08-08 2008-02-14 International Business Machines Corporation Communication System for Multiple Chassis Computer Systems
CN101127696B (en) 2006-08-15 2012-06-27 华为技术有限公司 Data forwarding method for layer 2 network and network and node devices
EP1892913A1 (en) 2006-08-24 2008-02-27 Siemens Aktiengesellschaft Method and arrangement for providing a wireless mesh network
US20080052437A1 (en) 2006-08-28 2008-02-28 Dell Products L.P. Hot Plug Power Policy for Modular Chassis
US7802082B2 (en) 2006-08-31 2010-09-21 Intel Corporation Methods and systems to dynamically configure computing apparatuses
US8599685B2 (en) 2006-09-26 2013-12-03 Cisco Technology, Inc. Snooping of on-path IP reservation protocols for layer 2 nodes
US7853754B1 (en) 2006-09-29 2010-12-14 Tilera Corporation Caching in multicore and multiprocessor architectures
US8684802B1 (en) 2006-10-27 2014-04-01 Oracle America, Inc. Method and apparatus for balancing thermal variations across a set of computer systems
US8447872B2 (en) 2006-11-01 2013-05-21 Intel Corporation Load balancing in a storage system
US7992151B2 (en) 2006-11-30 2011-08-02 Intel Corporation Methods and apparatuses for core allocations
EP2109812A2 (en) 2006-12-06 2009-10-21 Fusion Multisystems, Inc. Apparatus, system, and method for an in-server storage area network
US20080140930A1 (en) 2006-12-08 2008-06-12 Emulex Design & Manufacturing Corporation Virtual drive mapping
US20080140771A1 (en) 2006-12-08 2008-06-12 Sony Computer Entertainment Inc. Simulated environment computing framework
CN101212345A (en) 2006-12-31 2008-07-02 联想(北京)有限公司 Blade server management system
US8504791B2 (en) 2007-01-26 2013-08-06 Hicamp Systems, Inc. Hierarchical immutable content-addressable memory coprocessor
US8407428B2 (en) 2010-05-20 2013-03-26 Hicamp Systems, Inc. Structured memory coprocessor
US7865614B2 (en) 2007-02-12 2011-01-04 International Business Machines Corporation Method and apparatus for load balancing with server state change awareness
FI120088B (en) 2007-03-01 2009-06-30 Kone Corp Arrangement and method of monitoring the security circuit
US7870907B2 (en) 2007-03-08 2011-01-18 Weatherford/Lamb, Inc. Debris protection for sliding sleeve
JP4370336B2 (en) 2007-03-09 2009-11-25 株式会社日立製作所 Low power consumption job management method and computer system
US20080239649A1 (en) 2007-03-29 2008-10-02 Bradicich Thomas M Design structure for an interposer for expanded capability of a blade server chassis system
US7783910B2 (en) 2007-03-30 2010-08-24 International Business Machines Corporation Method and system for associating power consumption of a server with a network address assigned to the server
WO2008127672A2 (en) 2007-04-11 2008-10-23 Slt Logic Llc Modular blade for providing scalable mechanical, electrical and environmental functionality in the enterprise using advanced tca boards
JP4815385B2 (en) 2007-04-13 2011-11-16 株式会社日立製作所 Storage device
US7715400B1 (en) 2007-04-26 2010-05-11 3 Leaf Networks Node identification for distributed shared memory system
US7515412B2 (en) 2007-04-26 2009-04-07 Enermax Technology Corporation Cooling structure for power supply
US7925795B2 (en) 2007-04-30 2011-04-12 Broadcom Corporation Method and system for configuring a plurality of network interfaces that share a physical interface
DE102007020296A1 (en) 2007-04-30 2008-11-13 Philip Behrens Device and method for wirelessly establishing a contact
PT103744A (en) 2007-05-16 2008-11-17 Coreworks S A ARCHITECTURE OF ACCESS TO THE NETWORK CORE.
US7552241B2 (en) 2007-05-18 2009-06-23 Tilera Corporation Method and system for managing a plurality of I/O interfaces with an array of multicore processor resources in a semiconductor chip
US7693167B2 (en) 2007-05-22 2010-04-06 Rockwell Collins, Inc. Mobile nodal based communication system, method and apparatus
WO2008147926A1 (en) 2007-05-25 2008-12-04 Venkat Konda Fully connected generalized butterfly fat tree networks
US8141143B2 (en) 2007-05-31 2012-03-20 Imera Systems, Inc. Method and system for providing remote access to resources in a secure data center over a network
US7783813B2 (en) 2007-06-14 2010-08-24 International Business Machines Corporation Multi-node configuration of processor cards connected via processor fabrics
US8060775B1 (en) 2007-06-14 2011-11-15 Symantec Corporation Method and apparatus for providing dynamic multi-pathing (DMP) for an asymmetric logical unit access (ALUA) based storage system
EP2009554A1 (en) 2007-06-25 2008-12-31 Stmicroelectronics SA Method for transferring data from a source target to a destination target, and corresponding network interface
US7761687B2 (en) 2007-06-26 2010-07-20 International Business Machines Corporation Ultrascalable petaflop parallel supercomputer
US8060760B2 (en) 2007-07-13 2011-11-15 Dell Products L.P. System and method for dynamic information handling system prioritization
US7688578B2 (en) 2007-07-19 2010-03-30 Hewlett-Packard Development Company, L.P. Modular high-density computer system
US8150019B2 (en) 2007-08-10 2012-04-03 Smith Robert B Path redundant hardware efficient communications interconnect system
US7840703B2 (en) 2007-08-27 2010-11-23 International Business Machines Corporation System and method for dynamically supporting indirect routing within a multi-tiered full-graph interconnect architecture
US8180901B2 (en) 2007-08-28 2012-05-15 Cisco Technology, Inc. Layers 4-7 service gateway for converged datacenter fabric
US20090080428A1 (en) 2007-09-25 2009-03-26 Maxxan Systems, Inc. System and method for scalable switch fabric for computer network
US20090251867A1 (en) 2007-10-09 2009-10-08 Sharma Viswa N Reconfigurable, modularized fpga-based amc module
US7739475B2 (en) 2007-10-24 2010-06-15 Inventec Corporation System and method for updating dirty data of designated raw device
US7822841B2 (en) 2007-10-30 2010-10-26 Modern Grids, Inc. Method and system for hosting multiple, customized computing clusters
EP2061191A1 (en) 2007-11-13 2009-05-20 STMicroelectronics (Grenoble) SAS Buffering architecture for packet injection and extraction in on-chip networks.
US8068433B2 (en) 2007-11-26 2011-11-29 Microsoft Corporation Low power operation of networked devices
US7877622B2 (en) 2007-12-13 2011-01-25 International Business Machines Corporation Selecting between high availability redundant power supply modes for powering a computer system
US7962771B2 (en) 2007-12-31 2011-06-14 Intel Corporation Method, system, and apparatus for rerouting interrupts in a multi-core processor
US20090166065A1 (en) 2008-01-02 2009-07-02 Clayton James E Thin multi-chip flex module
US7779148B2 (en) 2008-02-01 2010-08-17 International Business Machines Corporation Dynamic routing based on information of not responded active source requests quantity received in broadcast heartbeat signal and stored in local data structure for other processor chips
US20090204834A1 (en) 2008-02-11 2009-08-13 Nvidia Corporation System and method for using inputs as wake signals
US20090204837A1 (en) * 2008-02-11 2009-08-13 Udaykumar Raval Power control system and method
US8854831B2 (en) 2012-04-10 2014-10-07 Arnouse Digital Devices Corporation Low power, high density server and portable device for use with same
US8082400B1 (en) 2008-02-26 2011-12-20 Hewlett-Packard Development Company, L.P. Partitioning a memory pool among plural computing nodes
US8156362B2 (en) 2008-03-11 2012-04-10 Globalfoundries Inc. Hardware monitoring and decision making for transitioning in and out of low-power state
TWI354213B (en) 2008-04-01 2011-12-11 Inventec Corp Server
US20090259864A1 (en) 2008-04-10 2009-10-15 Nvidia Corporation System and method for input/output control during power down mode
US8762759B2 (en) 2008-04-10 2014-06-24 Nvidia Corporation Responding to interrupts while in a reduced power state
BRPI0910949B1 (en) 2008-04-16 2020-09-15 Telefonaktiebolaget Lm Ericsson (Publ) METHOD AND SYSTEM FOR DETECTING AND CORRECTING INCOMPATIBILITY, AND, MAINTENANCE ASSOCIATION END POINT
US7742844B2 (en) 2008-04-21 2010-06-22 Dell Products, Lp Information handling system including cooling devices and methods of use thereof
JP5075727B2 (en) 2008-04-25 2012-11-21 株式会社日立製作所 Stream distribution system and failure detection method
US7861110B2 (en) 2008-04-30 2010-12-28 Egenera, Inc. System, method, and adapter for creating fault-tolerant communication busses from standard components
US20090282419A1 (en) 2008-05-09 2009-11-12 International Business Machines Corporation Ordered And Unordered Network-Addressed Message Control With Embedded DMA Commands For A Network On Chip
US7921315B2 (en) 2008-05-09 2011-04-05 International Business Machines Corporation Managing power consumption in a data center based on monitoring circuit breakers
WO2009138133A1 (en) 2008-05-12 2009-11-19 Telefonaktiebolaget Lm Ericsson (Publ) Re-routing traffic in a communications network
US20100008038A1 (en) 2008-05-15 2010-01-14 Giovanni Coglitore Apparatus and Method for Reliable and Efficient Computing Based on Separating Computing Modules From Components With Moving Parts
US8180996B2 (en) 2008-05-15 2012-05-15 Calxeda, Inc. Distributed computing system with universal address system and method
US8775718B2 (en) 2008-05-23 2014-07-08 Netapp, Inc. Use of RDMA to access non-volatile solid-state memory in a network storage system
US7519843B1 (en) 2008-05-30 2009-04-14 International Business Machines Corporation Method and system for dynamic processor speed control to always maximize processor performance based on processing load and available power
US7904345B2 (en) 2008-06-10 2011-03-08 The Go Daddy Group, Inc. Providing website hosting overage protection by transference to an overflow server
US8244918B2 (en) 2008-06-11 2012-08-14 International Business Machines Corporation Resource sharing expansion card
IL192140A0 (en) 2008-06-12 2009-02-11 Ethos Networks Ltd Method and system for transparent lan services in a packet network
US8886985B2 (en) 2008-07-07 2014-11-11 Raritan Americas, Inc. Automatic discovery of physical connectivity between power outlets and IT equipment
EP2313819A2 (en) 2008-07-14 2011-04-27 The Regents of the University of California Architecture to enable energy savings in networked computers
US20100026408A1 (en) 2008-07-30 2010-02-04 Jeng-Jye Shau Signal transfer for ultra-high capacity circuits
US8031703B2 (en) 2008-08-14 2011-10-04 Dell Products, Lp System and method for dynamic maintenance of fabric subsets in a network
US8132034B2 (en) 2008-08-28 2012-03-06 Dell Products L.P. System and method for managing information handling system power supply capacity utilization based on load sharing power loss
US8804710B2 (en) 2008-12-29 2014-08-12 Juniper Networks, Inc. System architecture for a scalable and distributed multi-stage switch fabric
JP5428267B2 (en) 2008-09-26 2014-02-26 富士通株式会社 Power supply control system and power supply control method
US8484493B2 (en) 2008-10-29 2013-07-09 Dell Products, Lp Method for pre-chassis power multi-slot blade identification and inventory
US8068482B2 (en) 2008-11-13 2011-11-29 Qlogic, Corporation Method and system for network switch element
US10255463B2 (en) 2008-11-17 2019-04-09 International Business Machines Corporation Secure computer architecture
JP5151924B2 (en) 2008-11-19 2013-02-27 富士通株式会社 Power management proxy device, server device, server power management method using proxy device, proxy device power management program, server device power management program
US20100161909A1 (en) 2008-12-18 2010-06-24 Lsi Corporation Systems and Methods for Quota Management in a Memory Appliance
US20100158005A1 (en) 2008-12-23 2010-06-24 Suvhasis Mukhopadhyay System-On-a-Chip and Multi-Chip Systems Supporting Advanced Telecommunication Functions
US20100169479A1 (en) 2008-12-26 2010-07-01 Electronics And Telecommunications Research Institute Apparatus and method for extracting user information using client-based script
US8122269B2 (en) 2009-01-07 2012-02-21 International Business Machines Corporation Regulating power consumption in a multi-core processor by dynamically distributing power and processing requests by a managing core to a configuration of processing cores
US8918488B2 (en) 2009-02-04 2014-12-23 Citrix Systems, Inc. Methods and systems for automated management of virtual resources in a cloud computing environment
US8510744B2 (en) 2009-02-24 2013-08-13 Siemens Product Lifecycle Management Software Inc. Using resource defining attributes to enhance thread scheduling in processors
GB2468137A (en) 2009-02-25 2010-09-01 Advanced Risc Mach Ltd Blade server with on board battery power
JP5816407B2 (en) 2009-02-27 2015-11-18 ルネサスエレクトロニクス株式会社 Semiconductor integrated circuit device
US8725946B2 (en) 2009-03-23 2014-05-13 Ocz Storage Solutions, Inc. Mass storage system and method of using hard disk, solid-state media, PCIe edge connector, and raid controller
US8140871B2 (en) 2009-03-27 2012-03-20 International Business Machines Corporation Wake on Lan for blade server
TWI358016B (en) 2009-04-17 2012-02-11 Inventec Corp Server
US8127128B2 (en) 2009-05-04 2012-02-28 International Business Machines Corporation Synchronization of swappable module in modular system
TWM377621U (en) 2009-05-25 2010-04-01 Advantech Co Ltd Interface card with hardware monitor and function extension, computer device and single board
US8004922B2 (en) * 2009-06-05 2011-08-23 Nxp B.V. Power island with independent power characteristics for memory and logic
US9001846B2 (en) 2009-06-09 2015-04-07 Broadcom Corporation Physical layer device with dual medium access controller path
US8321688B2 (en) 2009-06-12 2012-11-27 Microsoft Corporation Secure and private backup storage and processing for trusted computing and data services
CN102473157B (en) 2009-07-17 2015-12-16 惠普开发有限公司 Virtual hot plugging functions in a shared I/O environment
CN101989212B (en) 2009-07-31 2015-01-07 国际商业机器公司 Method and device for providing a virtual machine management program for starting a blade server
US8340120B2 (en) 2009-09-04 2012-12-25 Brocade Communications Systems, Inc. User selectable multiple protocol network interface device
US9465771B2 (en) 2009-09-24 2016-10-11 Iii Holdings 2, Llc Server on a chip and node cards comprising one or more of same
US9876735B2 (en) 2009-10-30 2018-01-23 Iii Holdings 2, Llc Performance and power optimized computer system architectures and methods leveraging power optimized tree fabric interconnect
US8599863B2 (en) 2009-10-30 2013-12-03 Calxeda, Inc. System and method for using a multi-protocol fabric module across a distributed server interconnect fabric
US20110103391A1 (en) 2009-10-30 2011-05-05 Smooth-Stone, Inc. C/O Barry Evans System and method for high-performance, low-power data center interconnect fabric
US9054990B2 (en) 2009-10-30 2015-06-09 Iii Holdings 2, Llc System and method for data center security enhancements leveraging server SOCs or server fabrics
TW201112936A (en) 2009-09-29 2011-04-01 Inventec Corp Electronic device
US8832222B2 (en) 2009-10-05 2014-09-09 Vss Monitoring, Inc. Method, apparatus and system for inserting a VLAN tag into a captured data packet
US8194659B2 (en) 2009-10-06 2012-06-05 Red Hat, Inc. Mechanism for processing messages using logical addresses
US8571031B2 (en) 2009-10-07 2013-10-29 Intel Corporation Configurable frame processing pipeline in a packet switch
US9680770B2 (en) 2009-10-30 2017-06-13 Iii Holdings 2, Llc System and method for using a multi-protocol fabric module across a distributed server interconnect fabric
US9767070B2 (en) 2009-11-06 2017-09-19 Hewlett Packard Enterprise Development Lp Storage system with a memory blade that generates a computational result for a storage device
US20110119344A1 (en) 2009-11-17 2011-05-19 Susan Eustis Apparatus And Method For Using Distributed Servers As Mainframe Class Computers
US20110191514A1 (en) 2010-01-29 2011-08-04 Inventec Corporation Server system
JP5648926B2 (en) 2010-02-01 2015-01-07 日本電気株式会社 Network system, controller, and network control method
TW201128395A (en) 2010-02-08 2011-08-16 Hon Hai Prec Ind Co Ltd Computer motherboard
US20110210975A1 (en) 2010-02-26 2011-09-01 Xgi Technology, Inc. Multi-screen signal processing device and multi-screen system
US8397092B2 (en) 2010-03-24 2013-03-12 Emulex Design & Manufacturing Corporation Power management for input/output devices by creating a virtual port for redirecting traffic
KR101641108B1 (en) 2010-04-30 2016-07-20 삼성전자주식회사 Target device providing debugging functionality and test system comprising the same
US8045328B1 (en) 2010-05-04 2011-10-25 Chenbro Micom Co., Ltd. Server and cooler module arrangement
US8830823B2 (en) 2010-07-06 2014-09-09 Nicira, Inc. Distributed control platform for large-scale production networks
US8812400B2 (en) 2010-07-09 2014-08-19 Hewlett-Packard Development Company, L.P. Managing a memory segment using a memory virtual appliance
WO2012023604A1 (en) 2010-08-20 2012-02-23 日本電気株式会社 Communication system, control apparatus, communication method and program
CN102385417B (en) 2010-08-25 2013-02-20 英业达股份有限公司 Rack-mounted server
JP2012053504A (en) 2010-08-31 2012-03-15 Hitachi Ltd Blade server device
US8601288B2 (en) * 2010-08-31 2013-12-03 Sonics, Inc. Intelligent power controller
GB2497493B (en) 2010-09-16 2017-12-27 Iii Holdings 2 Llc Performance and power optimized computer system architectures and methods leveraging power optimized tree fabric interconnect
US20120081850A1 (en) 2010-09-30 2012-04-05 Dell Products L.P. Rack Assembly for Housing and Providing Power to Information Handling Systems
US8699220B2 (en) 2010-10-22 2014-04-15 Xplore Technologies Corp. Computer with removable cartridge
US8738860B1 (en) 2010-10-25 2014-05-27 Tilera Corporation Computing in parallel processing environments
DE102011056141A1 (en) 2010-12-20 2012-06-21 Samsung Electronics Co., Ltd. A negative voltage generator, decoder, non-volatile memory device and memory system using a negative voltage
US20120198252A1 (en) 2011-02-01 2012-08-02 Kirschtein Phillip M System and Method for Managing and Detecting Server Power Connections
US8670450B2 (en) 2011-05-13 2014-03-11 International Business Machines Corporation Efficient software-based private VLAN solution for distributed virtual switches
US8547825B2 (en) 2011-07-07 2013-10-01 International Business Machines Corporation Switch fabric management
US8683125B2 (en) 2011-11-01 2014-03-25 Hewlett-Packard Development Company, L.P. Tier identification (TID) for tiered memory characteristics
US9565132B2 (en) 2011-12-27 2017-02-07 Intel Corporation Multi-protocol I/O interconnect including a switching fabric
US8782321B2 (en) 2012-02-08 2014-07-15 Intel Corporation PCI express tunneling over a multi-protocol I/O interconnect
US20130290643A1 (en) 2012-04-30 2013-10-31 Kevin T. Lim Using a cache in a disaggregated memory architecture
US20130290650A1 (en) 2012-04-30 2013-10-31 Jichuan Chang Distributed active data storage system
US20130318269A1 (en) 2012-05-22 2013-11-28 Xockets IP, LLC Processing structured and unstructured data using offload processors
US9304896B2 (en) 2013-08-05 2016-04-05 Iii Holdings 2, Llc Remote memory ring buffers in a cluster of data processing nodes

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050021728A1 (en) * 2003-07-23 2005-01-27 Brother Kogyo Kabushiki Kaisha Status information notification system
US20110307887A1 (en) * 2010-06-11 2011-12-15 International Business Machines Corporation Dynamic virtual machine shutdown without service interruptions

Cited By (38)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11960937B2 (en) 2004-03-13 2024-04-16 Iii Holdings 12, Llc System and method for an optimizing reservation in time of compute resources based on prioritization function and reservation policy parameter
US12124878B2 (en) 2004-03-13 2024-10-22 Iii Holdings 12, Llc System and method for scheduling resources within a compute environment using a scheduler process with reservation mask function
US11467883B2 (en) 2004-03-13 2022-10-11 Iii Holdings 12, Llc Co-allocating a reservation spanning different compute resources types
US11652706B2 (en) 2004-06-18 2023-05-16 Iii Holdings 12, Llc System and method for providing dynamic provisioning within a compute environment
US12009996B2 (en) 2004-06-18 2024-06-11 Iii Holdings 12, Llc System and method for providing dynamic provisioning within a compute environment
US11630704B2 (en) 2004-08-20 2023-04-18 Iii Holdings 12, Llc System and method for a workload management and scheduling module to manage access to a compute environment according to local and non-local user identity information
US11861404B2 (en) 2004-11-08 2024-01-02 Iii Holdings 12, Llc System and method of providing system jobs within a compute environment
US11886915B2 (en) 2004-11-08 2024-01-30 Iii Holdings 12, Llc System and method of providing system jobs within a compute environment
US11537434B2 (en) 2004-11-08 2022-12-27 Iii Holdings 12, Llc System and method of providing system jobs within a compute environment
US11762694B2 (en) 2004-11-08 2023-09-19 Iii Holdings 12, Llc System and method of providing system jobs within a compute environment
US12008405B2 (en) 2004-11-08 2024-06-11 Iii Holdings 12, Llc System and method of providing system jobs within a compute environment
US11494235B2 (en) 2004-11-08 2022-11-08 Iii Holdings 12, Llc System and method of providing system jobs within a compute environment
US11709709B2 (en) 2004-11-08 2023-07-25 Iii Holdings 12, Llc System and method of providing system jobs within a compute environment
US11656907B2 (en) 2004-11-08 2023-05-23 Iii Holdings 12, Llc System and method of providing system jobs within a compute environment
US12039370B2 (en) 2004-11-08 2024-07-16 Iii Holdings 12, Llc System and method of providing system jobs within a compute environment
US11537435B2 (en) 2004-11-08 2022-12-27 Iii Holdings 12, Llc System and method of providing system jobs within a compute environment
US11658916B2 (en) 2005-03-16 2023-05-23 Iii Holdings 12, Llc Simple integration of an on-demand compute environment
US12120040B2 (en) 2005-03-16 2024-10-15 Iii Holdings 12, Llc On-demand compute environment
US11765101B2 (en) 2005-04-07 2023-09-19 Iii Holdings 12, Llc On-demand access to compute resources
US11831564B2 (en) 2005-04-07 2023-11-28 Iii Holdings 12, Llc On-demand access to compute resources
US11522811B2 (en) 2005-04-07 2022-12-06 Iii Holdings 12, Llc On-demand access to compute resources
US11496415B2 (en) 2005-04-07 2022-11-08 Iii Holdings 12, Llc On-demand access to compute resources
US11533274B2 (en) 2005-04-07 2022-12-20 Iii Holdings 12, Llc On-demand access to compute resources
US11650857B2 (en) 2006-03-16 2023-05-16 Iii Holdings 12, Llc System and method for managing a hybrid computer environment
US11522952B2 (en) 2007-09-24 2022-12-06 The Research Foundation For The State University Of New York Automatic clustering for self-organizing grids
US11526304B2 (en) 2009-10-30 2022-12-13 Iii Holdings 2, Llc Memcached server functionality in a cluster of data processing nodes
US11720290B2 (en) 2009-10-30 2023-08-08 Iii Holdings 2, Llc Memcached server functionality in a cluster of data processing nodes
US20220114070A1 (en) * 2012-12-28 2022-04-14 Iii Holdings 2, Llc System, Method and Computer Readable Medium for Offloaded Computation of Distributed Application Protocols within a Cluster of Data Processing Nodes
US11922990B2 (en) 2018-02-26 2024-03-05 Micron Technology, Inc. Memory devices configured to provide external regulated voltages
US11195569B2 (en) 2018-02-26 2021-12-07 Micron Technology, Inc. Memory devices configured to provide external regulated voltages
US10665288B2 (en) 2018-02-26 2020-05-26 Micron Technology, Inc. Memory devices configured to provide external regulated voltages
WO2019164547A1 (en) * 2018-02-26 2019-08-29 Micron Technology, Inc. Memory devices configured to provide external regulated voltages
US10395721B1 (en) 2018-02-26 2019-08-27 Micron Technology, Inc. Memory devices configured to provide external regulated voltages
US11231765B2 (en) 2018-06-28 2022-01-25 Nordic Semiconductor Asa Peripheral power domains
CN109870921A (en) * 2019-03-26 2019-06-11 广东美的制冷设备有限公司 Drive control circuit and household appliance
CN111338984A (en) * 2020-02-25 2020-06-26 大唐半导体科技有限公司 Cache RAM and Retention RAM data high-speed exchange architecture and method thereof
US11119153B1 (en) * 2020-05-29 2021-09-14 Stmicroelectronics International N.V. Isolation enable test coverage for multiple power domains
CN113032329A (en) * 2021-05-21 2021-06-25 千芯半导体科技(北京)有限公司 Computing structure, hardware architecture and computing method based on reconfigurable memory chip

Also Published As

Publication number Publication date
US20160154760A9 (en) 2016-06-02
US9465771B2 (en) 2016-10-11
US20140122833A1 (en) 2014-05-01

Similar Documents

Publication Publication Date Title
US9465771B2 (en) Server on a chip and node cards comprising one or more of same
US11526304B2 (en) Memcached server functionality in a cluster of data processing nodes
US10135731B2 (en) Remote memory access functionality in a cluster of data processing nodes
US10140245B2 (en) Memcached server functionality in a cluster of data processing nodes
US9648102B1 (en) Memcached server functionality in a cluster of data processing nodes
US10205653B2 (en) Fabric discovery for a cluster of nodes
US7490254B2 (en) Increasing workload performance of one or more cores on multiple core processors
US8140871B2 (en) Wake on Lan for blade server
TW202145767A (en) Device, system and method for providing storage resource
US8782456B2 (en) Dynamic and idle power reduction sequence using recombinant clock and power gating
US11720290B2 (en) Memcached server functionality in a cluster of data processing nodes
TW202147123A (en) System for managing memory resources and method for performing remote direct memory access in computing system
US7577755B2 (en) Methods and apparatus for distributing system management signals
US10783109B2 (en) Device management messaging protocol proxy
CN111512266A (en) System, apparatus, and method for handshake protocol for low power state transitions
US20160306634A1 (en) Electronic device
US7418517B2 (en) Methods and apparatus for distributing system management signals
Otani et al. Peach: A multicore communication system on chip with PCI Express
CN115934627A (en) System on chip and application processor
US20240028201A1 (en) Optimal memory tiering of large memory systems using a minimal number of processors

Legal Events

Date Code Title Description
AS Assignment

Owner name: III HOLDINGS 2, LLC, DELAWARE

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SILICON VALLEY BANK;REEL/FRAME:043759/0175

Effective date: 20140630

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION