EP1573515A2 - Pipeline accelerator and related system and method - Google Patents
- Publication number
- EP1573515A2 (application EP03781553A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- data
- pipeline
- hardwired
- operable
- memory
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Links
- 238000000034 method Methods 0.000 title claims abstract description 90
- 230000015654 memory Effects 0.000 claims abstract description 124
- 230000008569 process Effects 0.000 claims abstract description 42
- 238000004891 communication Methods 0.000 claims description 62
- 238000012545 processing Methods 0.000 claims description 42
- 230000004044 response Effects 0.000 claims description 19
- 238000012163 sequencing technique Methods 0.000 claims description 5
- 238000003491 array Methods 0.000 claims 1
- 238000012546 transfer Methods 0.000 abstract description 17
- 239000013598 vector Substances 0.000 abstract description 13
- 230000002457 bidirectional effect Effects 0.000 abstract description 2
- 238000013461 design Methods 0.000 description 30
- 238000010586 diagram Methods 0.000 description 18
- 239000000872 buffer Substances 0.000 description 12
- 238000012986 modification Methods 0.000 description 10
- 230000004048 modification Effects 0.000 description 10
- 230000006870 function Effects 0.000 description 9
- 238000010200 validation analysis Methods 0.000 description 7
- 230000003936 working memory Effects 0.000 description 7
- 230000008859 change Effects 0.000 description 5
- 239000000284 extract Substances 0.000 description 4
- 238000007781 pre-processing Methods 0.000 description 4
- 238000012805 post-processing Methods 0.000 description 3
- 230000008901 benefit Effects 0.000 description 2
- 238000013523 data management Methods 0.000 description 2
- 230000002093 peripheral effect Effects 0.000 description 2
- 238000012360 testing method Methods 0.000 description 2
- 230000005540 biological transmission Effects 0.000 description 1
- 238000012790 confirmation Methods 0.000 description 1
- 238000007796 conventional method Methods 0.000 description 1
- 238000005516 engineering process Methods 0.000 description 1
- 238000000605 extraction Methods 0.000 description 1
- 238000005204 segregation Methods 0.000 description 1
- 230000001960 triggered effect Effects 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/30—Arrangements for executing machine instructions, e.g. instruction decode
- G06F9/38—Concurrent instruction execution, e.g. pipeline or look ahead
- G06F9/3877—Concurrent instruction execution, e.g. pipeline or look ahead using a slave processor, e.g. coprocessor
- G06F9/3879—Concurrent instruction execution, e.g. pipeline or look ahead using a slave processor, e.g. coprocessor for non-native instruction execution, e.g. executing a command; for Java instruction set
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/30—Arrangements for executing machine instructions, e.g. instruction decode
- G06F9/38—Concurrent instruction execution, e.g. pipeline or look ahead
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F15/00—Digital computers in general; Data processing equipment in general
- G06F15/76—Architectures of general purpose stored program computers
- G06F15/78—Architectures of general purpose stored program computers comprising a single central processing unit
Definitions
- a common computing architecture for processing relatively large amounts of data in a relatively short period of time includes multiple interconnected processors that share the processing burden. By sharing the processing burden, these multiple processors can often process the data more quickly than a single processor can for a given clock frequency. For example, each of the processors can process a respective portion of the data or execute a respective portion of a processing algorithm.
- FIG. 1 is a schematic block diagram of a conventional computing machine 10 having a multi-processor architecture.
- the machine 10 includes a master processor 12 and coprocessors 14₁ - 14ₙ, which communicate with each other and the master processor via a bus 16, an input port 18 for receiving raw data from a remote device (not shown in FIG. 1), and an output port 20 for providing processed data to the remote source.
- the machine 10 also includes a memory 22 for the master processor 12, respective memories 24₁ - 24ₙ for the coprocessors 14₁ - 14ₙ, and a memory 26 that the master processor and coprocessors share via the bus 16.
- the memory 22 serves as both a program and a working memory for the master processor 12, and each memory 24₁ - 24ₙ serves as both a program and a working memory for a respective coprocessor 14₁ - 14ₙ.
- the shared memory 26 allows the master processor 12 and the coprocessors 14 to transfer data among themselves, and from/to the remote device via the ports 18 and 20, respectively.
- the master processor 12 and the coprocessors 14 also receive a common clock signal that controls the speed at which the machine 10 processes the raw data.
- the computing machine 10 effectively divides the processing of raw data among the master processor 12 and the coprocessors 14.
- the remote source such as a sonar array loads the raw data via the port 18 into a section of the shared memory 26, which acts as a first-in-first-out (FIFO) buffer (not shown) for the raw data.
- the master processor 12 retrieves the raw data from the memory 26 via the bus 16, and then the master processor and the coprocessors 14 process the raw data, transferring data among themselves as necessary via the bus 16.
- the master processor 12 loads the processed data into another FIFO buffer (not shown) defined in the shared memory 26, and the remote source retrieves the processed data from this FIFO via the port 20.
- the computing machine 10 processes the raw data by sequentially performing n + 1 respective operations on the raw data, where these operations together compose a processing algorithm such as a Fast Fourier Transform (FFT). More specifically, the machine 10 forms a data-processing pipeline from the master processor 12 and the coprocessors 14. For a given frequency of the clock signal, such a pipeline often allows the machine 10 to process the raw data faster than a machine having only a single processor.
- FFT Fast Fourier Transform
- after retrieving the raw data from the raw-data FIFO (not shown) in the memory 26, the master processor 12 performs a first operation, such as a trigonometric function, on the raw data. This operation yields a first result, which the processor 12 stores in a first-result FIFO (not shown) defined within the memory 26.
- the processor 12 executes a program stored in the memory 22, and performs the above-described actions under the control of the program.
- the processor 12 may also use the memory 22 as working memory to temporarily store data that the processor generates at intermediate intervals of the first operation.
- the coprocessor 14₁ performs a second operation, such as a logarithmic function, on the first result. This second operation yields a second result, which the coprocessor 14₁ stores in a second-result FIFO (not shown) defined within the memory 26.
- the coprocessor 14₁ executes a program stored in the memory 24₁, and performs the above-described actions under the control of the program.
- the coprocessor 14₁ may also use the memory 24₁ as working memory to temporarily store data that the coprocessor generates at intermediate intervals of the second operation.
- the coprocessors 14₂ - 14ₙ sequentially perform the third through nth operations on the second through (n-1)th results in a manner similar to that discussed above for the coprocessor 14₁.
- the nth operation, which is performed by the coprocessor 14ₙ, yields the final result, i.e., the processed data.
- the coprocessor 14ₙ loads the processed data into a processed-data FIFO (not shown) defined within the memory 26, and the remote device (not shown in FIG. 1) retrieves the processed data from this FIFO.
- the computing machine 10 is often able to process the raw data faster than a computing machine having a single processor that sequentially performs the different operations.
- the single processor cannot retrieve a new set of the raw data until it performs all n + 1 operations on the previous set of raw data.
- the master processor 12 can retrieve a new set of raw data after performing only the first operation. Consequently, for a given clock frequency, this pipeline technique can increase the speed at which the machine 10 processes the raw data by a factor of approximately n + 1 as compared to a single-processor machine (not shown in FIG. 1).
- the computing machine 10 may process the raw data in parallel by simultaneously performing n + 1 instances of a processing algorithm, such as an FFT, on the raw data. That is, if the algorithm includes n + 1 sequential operations as described above in the previous example, then each of the master processor 12 and the coprocessors 14 sequentially performs all n + 1 operations on a respective set of the raw data. Consequently, for a given clock frequency, this parallel-processing technique, like the above-described pipeline technique, can increase the speed at which the machine 10 processes the raw data by a factor of approximately n + 1 as compared to a single-processor machine (not shown in FIG. 1).
- although the computing machine 10 can process data more quickly than a single-processor computing machine (not shown in FIG. 1), the data-processing speed of the machine 10 is often significantly less than the frequency of the processor clock. Specifically, the data-processing speed of the computing machine 10 is limited by the time that the master processor 12 and coprocessors 14 require to process data. For brevity, an example of this speed limitation is discussed in conjunction with the master processor 12, although it is understood that this discussion also applies to the coprocessors 14. As discussed above, the master processor 12 executes a program that controls the processor to manipulate data in a desired manner. This program includes a sequence of instructions that the processor 12 executes.
- the processor 12 typically requires multiple clock cycles to execute a single instruction, and often must execute multiple instructions to process a single value of data. For example, suppose that the processor 12 is to multiply a first data value A (not shown) by a second data value B (not shown). During a first clock cycle, the processor 12 retrieves a multiply instruction from the memory 22. During second and third clock cycles, the processor 12 respectively retrieves A and B from the memory 26. During a fourth clock cycle, the processor 12 multiplies A and B, and, during a fifth clock cycle, stores the resulting product in the memory 22 or 26 or provides the resulting product to the remote device (not shown). This is a best-case scenario, because in many cases the processor 12 requires additional clock cycles for overhead tasks such as initializing and closing counters.
- Gops Gigaoperations/second
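As a rough illustration of this limit, the effective processing rate is simply the clock frequency divided by the clock cycles needed per processed value. The sketch below uses a hypothetical 1.0 GHz clock and the five cycles of the multiply example above; these figures are illustrative, not values taken from the patent.

```c
/* Illustrative only: effective data-processing rate of an instruction-driven
 * processor, assuming a fixed number of clock cycles per processed value.
 * The 1.0 GHz clock and 5 cycles/value are hypothetical figures chosen to
 * match the multiply example above, not values from the patent. */
#include <stdio.h>

int main(void) {
    double clock_hz = 1.0e9;        /* assumed processor clock: 1.0 GHz       */
    double cycles_per_value = 5.0;  /* fetch, load A, load B, multiply, store */
    double ops_per_second = clock_hz / cycles_per_value;
    printf("effective rate: %.1f Gops (%.2f x clock)\n",
           ops_per_second / 1e9, ops_per_second / clock_hz);
    return 0;
}
```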
- FIG. 2 is a block diagram of a hardwired data pipeline 30 that can typically process data faster than a processor can for a given clock frequency, and often at substantially the same rate at which the pipeline is clocked.
- the pipeline 30 includes operator circuits 32₁ - 32ₙ, which each perform a respective operation on respective data without executing program instructions. That is, the desired operation is "burned in" to a circuit 32 such that it implements the operation automatically, without the need of program instructions.
- the pipeline 30 can typically perform more operations per second than a processor can for a given clock frequency.
- the pipeline 30 can often solve the following equation faster than a processor can for a given clock frequency:
- Y(xₖ) = (5xₖ + 3)2^xₖ
- xₖ represents a sequence of raw data values.
- the operator circuit 32₁ is a multiplier that calculates 5xₖ
- the circuit 32₂ is an adder that calculates 5xₖ + 3
- the circuit 32₃ is a multiplier that calculates (5xₖ + 3)2^xₖ
- the circuit 32₁ receives data value x₁ and multiplies it by 5 to generate 5x₁.
- the pipeline 30 continues processing subsequent raw data values xₖ in this manner until all the raw data values are processed. Consequently, after a delay of two clock cycles from receiving a raw data value x₁ — this delay is often called the latency of the pipeline 30 — the pipeline generates the result (5x₁ + 3)2^x₁, and thereafter generates one result — e.g., (5x₂ + 3)2^x₂, (5x₃ + 3)2^x₃, . . ., (5xₙ + 3)2^xₙ — each clock cycle.
- the pipeline 30 thus has a data-processing speed equal to the clock speed.
- if the master processor 12 and coprocessors 14 (FIG. 1) have data-processing speeds that are 0.4 times the clock speed, as in the above example, then the pipeline 30 can process data 2.5 times faster than the computing machine 10 (FIG. 1) for a given clock speed.
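The sketch below is a minimal software model of such a hardwired pipeline, assuming the three operator circuits described above (multiply by 5, add 3, multiply by 2^x) with one register between stages; it is illustrative only and not part of the patent. After the two-cycle latency it emits one result per simulated clock cycle.

```c
/* A minimal software model of the hardwired pipeline 30. Each loop iteration
 * represents one clock cycle; r1/r2 model the values latched between the
 * operator circuits, and v1/v2 are "valid" flags for pipeline fill/drain. */
#include <stdio.h>
#include <math.h>

int main(void) {
    double x[] = {1, 2, 3, 4, 5};              /* raw data values x_k         */
    int n = 5;
    double r1 = 0, r1_x = 0, r2 = 0, r2_x = 0; /* inter-stage registers       */
    int v1 = 0, v2 = 0;                        /* valid flags per stage       */

    for (int clk = 0; clk < n + 2; clk++) {    /* n inputs + 2 cycles latency */
        /* stage 3 (circuit 32_3): multiply by 2^x, one result per clock */
        if (v2) printf("clk %d: result = %g\n", clk, r2 * pow(2.0, r2_x));
        /* stage 2 (circuit 32_2): add 3 */
        r2 = r1 + 3; r2_x = r1_x; v2 = v1;
        /* stage 1 (circuit 32_1): multiply by 5 */
        if (clk < n) { r1 = 5 * x[clk]; r1_x = x[clk]; v1 = 1; } else v1 = 0;
    }
    return 0;
}
```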
- a designer may choose to implement the pipeline 30 in a programmable logic IC (PLIC), such as a field-programmable gate array (FPGA), because a PLIC allows more design and modification flexibility than does an application specific IC (ASIC).
- PLIC programmable logic IC
- FPGA field-programmable gate array
- ASIC application specific IC
- the designer merely sets interconnection-configuration registers disposed within the PLIC to predetermined binary states. The combination of all these binary states is often called “firmware.”
- the designer loads this firmware into a nonvolatile memory (not shown in FIG. 2) that is coupled to the PLIC. When one "turns on” the PLIC, it downloads the firmware from the memory into the interconnection-configuration registers.
- the designer merely modifies the firmware and allows the PLIC to download the modified firmware into the interconnection-configuration registers.
- This ability to modify the PLIC by merely modifying the firmware is particularly useful during the prototyping stage and for upgrading the pipeline 30 "in the field".
- the hardwired pipeline 30 may not be the best choice to execute algorithms that entail significant decision making, particularly nested decision making.
- a processor can typically execute a nested-decision-making instruction (e.g., a nested conditional instruction such as "if A, then do B, else if C, do D else do n") approximately as fast as it can execute an operational instruction (e.g., "A + B") of comparable length.
- although the pipeline 30 may be able to make a relatively simple decision (e.g., "A > B?") efficiently, it typically cannot execute a nested decision (e.g., "if A, then do B, else if C, do D, else do n") as efficiently as a processor can.
- the pipeline 30 may have little on-board memory, and thus may need to access external working/instruction memory (not shown). And although one may be able to design the pipeline 30 to execute such a nested decision, the size and complexity of the required circuitry often makes such a design impractical, particularly where an algorithm includes multiple different nested decisions.
- processors are typically used in applications that require significant decision making, and hardwired pipelines are typically limited to "number crunching" applications that entail little or no decision making.
- Computing components, such as processors and their peripherals, typically include industry-standard communication interfaces that facilitate the interconnection of the components to form a processor-based computing machine.
- a standard communication interface typically includes two layers: a physical layer and a services layer.
- the physical layer includes the circuitry and the corresponding circuit interconnections that form the interface and the operating parameters of this circuitry.
- the physical layer includes the pins that connect the component to a bus, the buffers that latch data received from the pins, and the drivers that drive signals onto the pins.
- the operating parameters include the acceptable voltage range of the data signals that the pins receive, the signal timing for writing and reading data, and the supported modes of operation (e.g., burst mode, page mode).
- Conventional physical layers include transistor-transistor logic (TTL) and RAMBUS.
- the services layer includes the protocol by which a computing component transfers data.
- the protocol defines the format of the data and the manner in which the component sends and receives the formatted data.
- FTP file-transfer protocol
- TCP/IP transmission control protocol/internet protocol
- Designing a computing component that supports an industry-standard communication interface allows one to save design time by using an existing physical-layer design from a design library. This also ensures that one can easily interface the component to off-the-shelf computing components.
- a pipeline accelerator includes a memory and a hardwired-pipeline circuit coupled to the memory.
- the hardwired-pipeline circuit is operable to receive data, load the data into the memory, retrieve the data from the memory, process the retrieved data, and provide the processed data to an external source.
- the hardwired-pipeline circuit is operable to receive data, process the received data, load the processed data into the memory, retrieve the processed data from the memory, and provide the retrieved processed data to an external source.
- the memory facilitates the transfer of data — whether unidirectional or bidirectional — between the hardwired-pipeline circuit and an application that the processor executes.
- FIG. 1 is a block diagram of a computing machine having a conventional multi-processor architecture.
- FIG. 2 is a block diagram of a conventional hardwired pipeline.
- FIG. 3 is a block diagram of a computing machine having a peer-vector architecture according to an embodiment of the invention.
- FIG. 4 is a block diagram of the pipeline accelerator of FIG. 3 according to an embodiment of the invention.
- FIG. 5 is a block diagram of the hardwired-pipeline circuit and the data memory of FIG. 4 according to an embodiment of the invention.
- FIG. 6 is a block diagram of the memory-write interfaces of the communication shell of FIG. 5 according to an embodiment of the invention.
- FIG. 7 is a block diagram of the memory-read interfaces of the communication shell of FIG. 5 according to an embodiment of the invention.
- FIG. 8 is a block diagram of the pipeline accelerator of FIG. 3 according to another embodiment of the invention.
- FIG. 9 is a block diagram of the hardwired-pipeline circuit and the data memory of FIG. 8 according to an embodiment of the invention.
- FIG. 3 is a schematic block diagram of a computing machine 40, which has a peer-vector architecture according to an embodiment of the invention.
- the peer-vector machine 40 includes a host processor 42 and a pipeline accelerator 44, which performs at least a portion of the data processing and thus effectively replaces the bank of coprocessors 14 in the computing machine 10 of FIG. 1. Therefore, the host processor 42 and the accelerator 44 (or units thereof as discussed below) are "peers" that can transfer data vectors back and forth. Because the accelerator 44 does not execute program instructions, it typically performs mathematically intensive operations on data significantly faster than a bank of coprocessors can for a given clock frequency.
- the machine 40 has the same abilities as, but can often process data faster than, a conventional computing machine such as the machine 10. Furthermore, as discussed below, providing the accelerator 44 with a communication interface that is compatible with the communication interface of the host processor 42 facilitates the design and modification of the machine 40, particularly where the processor's communication interface is an industry standard. And where the accelerator 44 includes multiple pipeline units (e.g., PLIC-based circuits), providing each of these units with the same communication interface facilitates the design and modification of the accelerator, particularly where the communication interfaces are compatible with an industry-standard interface. Moreover, the machine 40 may also provide other advantages as described below and in the previously cited patent applications.
- the peer-vector computing machine 40 includes a processor memory 46, an interface memory 48, a bus 50, a firmware memory 52, an optional raw-data input port 54, a processed-data output port 58, and an optional router 61.
- the host processor 42 includes a processing unit 62 and a message handler 64
- the processor memory 46 includes a processing-unit memory 66 and a handler memory 68, which respectively serve as both program and working memories for the processor unit and the message handler.
- the processor memory 46 also includes an accelerator-configuration registry 70 and a message-configuration registry 72, which store respective configuration data that allow the host processor 42 to configure the functioning of the accelerator 44 and the format of the messages that the message handler 64 sends and receives.
- the pipeline accelerator 44 is disposed on at least one PLIC (not shown) and includes hardwired pipelines 74₁ - 74ₙ, which process respective data without executing program instructions.
- the firmware memory 52 stores the configuration firmware for the accelerator 44.
- where the accelerator 44 is disposed on multiple PLICs, these PLICs and their respective firmware memories may be disposed in multiple pipeline units (FIG. 4).
- the accelerator 44 and pipeline units are discussed further below and in previously cited U.S. Patent App. Serial No. 10/683,932 entitled PIPELINE ACCELERATOR HAVING MULTIPLE PIPELINE UNITS AND RELATED COMPUTING MACHINE AND METHOD.
- the accelerator 44 may be disposed on at least one ASIC, and thus may have internal interconnections that are unconfigurable.
- the machine 40 may omit the firmware memory 52.
- the accelerator 44 is shown including multiple pipelines 74, it may include only a single pipeline.
- the accelerator 44 may include one or more processors such as a digital-signal processor (DSP).
- the accelerator 44 may include a data input port and/or a data output port.
- DSP digital-signal processor
- FIG. 4 is a schematic block diagram of the pipeline accelerator 44 of FIG. 3 according to an embodiment of the invention.
- the accelerator 44 includes one or more pipeline units 78, each of which includes a pipeline circuit 80, such as a PLIC or an ASIC.
- each pipeline unit 78 is a "peer" of the host processor 42 and of the other pipeline units of the accelerator 44. That is, each pipeline unit 78 can communicate directly with the host processor 42 or with any other pipeline unit.
- this peer-vector architecture prevents data "bottlenecks" that otherwise might occur if all of the pipeline units 78 communicated through a central location such as a master pipeline unit (not shown) or the host processor 42. Furthermore, it allows one to add or remove peers from the peer-vector machine 40 (FIG. 3) without significant modifications to the machine.
- the pipeline circuit 80 includes a communication interface 82, which transfers data between a peer, such as the host processor 42 (FIG. 3), and the following other components of the pipeline circuit: the hardwired pipelines 74₁ - 74ₙ (FIG. 3) via a communication shell 84, a controller 86, an exception manager 88, and a configuration manager 90.
- the pipeline circuit 80 may also include an industry-standard bus interface 91. Alternatively, the functionality of the interface 91 may be included within the communication interface 82. By designing the components of the pipeline circuit 80 as separate modules, one can often simplify the design of the pipeline circuit.
- HDL hardware description language
- the communication interface 82 sends and receives data in a format recognized by the message handler 64 (FIG. 3), and thus typically facilitates the design and modification of the peer-vector machine 40 (FIG. 3). For example, if the data format is an industry standard such as the Rapid I/O format, then one need not design a custom interface between the host processor 42 and the accelerator 44. Furthermore, by allowing the pipeline circuit 80 to communicate with other peers, such as the host processor 42 (FIG. 3), via the pipeline bus 50 instead of via a non-bus interface, one can change the number of pipeline units 78 by merely connecting or disconnecting them (or the circuit cards that hold them) to the pipeline bus instead of redesigning a non-bus interface from scratch each time a pipeline unit is added or removed.
- the hardwired pipelines 74₁ - 74ₙ perform respective operations on data as discussed above in conjunction with FIG. 3 and in previously cited U.S. Patent App. Serial No. 10/684,102 entitled IMPROVED COMPUTING ARCHITECTURE AND RELATED SYSTEM AND METHOD, and the communication shell 84 interfaces the pipelines to the other components of the pipeline circuit 80 and to circuits (such as a data memory 92 discussed below) external to the pipeline circuit.
- the controller 86 synchronizes the hardwired pipelines 74₁ - 74ₙ and monitors and controls the sequence in which they perform the respective data operations in response to communications, i.e., "events," from other peers.
- a peer such as the host processor 42 may send an event to the pipeline unit 78 via the pipeline bus 50 to indicate that the peer has finished sending a block of data to the pipeline unit and to cause the hardwired pipelines 74₁ - 74ₙ to begin processing this data.
- An event that includes data is typically called a message, and an event that does not include data is typically called a "door bell.”
- the pipeline unit 78 may also synchronize the pipelines 74₁ - 74ₙ in response to a synchronization signal.
- the exception manager 88 monitors the status of the hardwired pipelines 74₁ - 74ₙ, the communication interface 82, the communication shell 84, the controller 86, and the bus interface 91, and reports exceptions to the host processor 42 (FIG. 3). For example, if a buffer in the communication interface 82 overflows, then the exception manager 88 reports this to the host processor 42.
- the exception manager may also correct, or attempt to correct, the problem giving rise to the exception. For example, for an overflowing buffer, the exception manager 88 may increase the size of the buffer, either directly or via the configuration manager 90 as discussed below.
- the configuration manager 90 sets the soft configuration of the hardwired pipelines 74₁ - 74ₙ, the communication interface 82, the communication shell 84, the controller 86, the exception manager 88, and the interface 91 in response to soft-configuration data from the host processor 42 (FIG. 3). As discussed in previously cited U.S. Patent App. Serial No. 10/684,102 entitled IMPROVED COMPUTING ARCHITECTURE AND RELATED SYSTEM AND METHOD, the hard configuration denotes the actual topology, on the transistor and circuit-block level, of the pipeline circuit 80, and the soft configuration denotes the physical parameters (e.g., data width, table size) of the hard-configured components.
- soft configuration data is similar to the data that can be loaded into a register of a processor (not shown in FIG. 4) to set the operating mode (e.g., burst-memory mode) of the processor.
- the host processor 42 may send soft-configuration data that causes the configuration manager 90 to set the number and respective priority levels of queues in the communication interface 82.
- the exception manager 88 may also send soft-configuration data that causes the configuration manager 90 to, e.g., increase the size of an overflowing buffer in the communication interface 82.
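A hypothetical sketch of what such soft-configuration data might look like as a register-style parameter block; the structure and field names below are assumptions for illustration, not taken from the patent.

```c
/* Hypothetical sketch of "soft configuration": parameters that a
 * configuration manager might write into registers of an already
 * hard-configured circuit. Field names are illustrative only. */
#include <stdint.h>
#include <stdio.h>

struct soft_config {
    uint16_t data_width_bits;   /* e.g., width of the pipeline data path  */
    uint16_t table_size;        /* e.g., size of a lookup table           */
    uint8_t  num_queues;        /* queues in the communication interface  */
    uint8_t  queue_priority[8]; /* per-queue priority levels              */
    uint32_t buffer_bytes;      /* size of an interface buffer            */
};

/* The configuration manager would copy such a structure, received from the
 * host processor as a message payload, into the corresponding registers. */
static void apply_soft_config(const struct soft_config *cfg) {
    printf("data width: %u bits, %u queues, buffer: %u bytes\n",
           cfg->data_width_bits, cfg->num_queues, cfg->buffer_bytes);
}

int main(void) {
    struct soft_config cfg = { 32, 1024, 4, {3, 2, 1, 0}, 65536 };
    apply_soft_config(&cfg);
    return 0;
}
```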
- the pipeline unit 78 of the accelerator 44 includes the data memory 92, an optional communication bus 94, and, if the pipeline circuit is a PLIC, the firmware memory 52 (FIG. 3).
- the data memory 92 buffers data as it flows between another peer, such as the host processor 42 (FIG. 3), and the hardwired pipelines 74₁ - 74ₙ, and is also a working memory for the hardwired pipelines.
- the communication interface 82 interfaces the data memory 92 to the pipeline bus 50 (via the communication bus 94 and industry-standard interface 91 if present), and the communication shell 84 interfaces the data memory to the hardwired pipelines 74₁ - 74ₙ.
- the industry-standard interface 91 is a conventional bus-interface circuit that reduces the size and complexity of the communication interface 82 by effectively offloading some of the interface circuitry from the communication interface. Therefore, if one wishes to change the parameters of the pipeline bus 50 or router 61 (FIG. 3), then he need only modify the interface 91 and not the communication interface 82. Alternatively, one may dispose the interface 91 in an IC (not shown) that is external to the pipeline circuit 80. Offloading the interface 91 from the pipeline circuit 80 frees up resources on the pipeline circuit for use in, e.g., the hardwired pipelines 74₁ - 74ₙ and the controller 86. Or, as discussed above, the bus interface 91 may be part of the communication interface 82.
- the firmware memory 52 stores the firmware that sets the hard configuration of the pipeline circuit.
- the memory 52 loads the firmware into the pipeline circuit 80 during the configuration of the accelerator 44, and may receive modified firmware from the host processor 42 (FIG. 3) via the communication interface 82 during or after the configuration of the accelerator.
- the loading and receiving of firmware is further discussed in previously cited U.S. Patent App. Serial No. 10/684,057 entitled PROGRAMMABLE CIRCUIT AND RELATED COMPUTING MACHINE AND METHOD.
- the pipeline circuit 80, data memory 92, and firmware memory 52 may be disposed on a circuit board or card 98, which may be plugged into a pipeline-bus connector (not shown) much like a daughter card can be plugged into a slot of a mother board in a personal computer (not shown).
- conventional ICs and components such as a power regulator and a power sequencer may also be disposed on the card 98 as is known.
- Further details of the structure and operation of the pipeline unit 78 are discussed below in conjunction with FIG. 5.
- FIG. 5 is a block diagram of the pipeline unit 78 of FIG. 4 according to an embodiment of the invention.
- the pipeline circuit 80 receives a master CLOCK signal, which drives the below-described components of the pipeline circuit either directly or indirectly.
- the pipeline circuit 80 may generate one or more slave clock signals (not shown) from the master CLOCK signal in a conventional manner.
- the pipeline circuit 80 may also receive a synchronization signal SYNC as discussed below.
- the data memory 92 includes an input dual-port-static-random-access memory (DPSRAM) 700, an output DPSRAM 702, and an optional working DPSRAM 704.
- DPSRAM dual-port-static-random-access memory
- the input DPSRAM 700 includes an input port 706 for receiving data from a peer, such as the host processor 42 (FIG. 3), via the communication interface 82, and includes an output port 708 for providing this data to the hardwired pipelines 74₁ - 74ₙ via the communication shell 84.
- Having two ports, one for data input and one for data output, increases the speed and efficiency of data transfer to/from the DPSRAM 700 because the communication interface 82 can write data to the DPSRAM while the pipelines 74₁ - 74ₙ read data from the DPSRAM.
- in addition, using the DPSRAM 700 to buffer data from a peer such as the host processor 42 allows the peer and the pipelines 74₁ - 74ₙ to operate asynchronously relative to one another. That is, the peer can send data to the pipelines 74₁ - 74ₙ without "waiting" for the pipelines to complete a current operation.
- likewise, the pipelines 74₁ - 74ₙ can retrieve data without "waiting" for the peer to complete a data-sending operation.
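A minimal sketch of this asynchronous buffering, modeling the two ports of the input DPSRAM as independent write and read indices on a ring buffer; the names, sizes, and the software queue itself are illustrative assumptions, not part of the patent.

```c
/* A minimal ring-buffer model of the input DPSRAM's role, assuming the two
 * ports are simply an independent write side (the peer, via the communication
 * interface) and read side (a hardwired pipeline, via the communication
 * shell). Names and sizes are illustrative. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define BUF_WORDS 8

static uint32_t buf[BUF_WORDS];
static unsigned wr_idx = 0, rd_idx = 0;   /* independent port indices */

/* Peer side: returns false (instead of waiting) if the buffer is full. */
static bool peer_write(uint32_t word) {
    if (wr_idx - rd_idx == BUF_WORDS) return false;
    buf[wr_idx++ % BUF_WORDS] = word;
    return true;
}

/* Pipeline side: returns false (instead of waiting) if no data is present. */
static bool pipeline_read(uint32_t *word) {
    if (rd_idx == wr_idx) return false;
    *word = buf[rd_idx++ % BUF_WORDS];
    return true;
}

int main(void) {
    for (uint32_t v = 0; v < 5; v++) peer_write(v);   /* peer sends a burst */
    uint32_t w;
    while (pipeline_read(&w)) printf("pipeline got %u\n", w); /* later drain */
    return 0;
}
```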
- the output DPSRAM 702 includes an input port 770 for receiving data from the hardwired pipelines 74₁ - 74ₙ via the communication shell 84, and includes an output port 772 for providing this data to a peer, such as the host processor 42 (FIG. 3), via the communication interface 82.
- the two data ports 770 (input) and 772 (output) increase the speed and efficiency of data transfer to/from the DPSRAM 702, and using the DPSRAM 702 to buffer data from the pipelines 74₁ - 74ₙ allows the peer and the pipelines to operate asynchronously relative to one another.
- the pipelines 74₁ - 74ₙ can publish data to the peer without "waiting" for the output-data handler 726 to complete a data transfer to the peer or to another peer.
- the output-data handler 726 can transfer data to a peer without "waiting" for the pipelines 74 ⁇ -74 n to complete a data-publishing operation.
- the working DPSRAM 704 includes an input port 774 for receiving data from the hardwired pipelines 74₁ - 74ₙ via the communication shell 84, and includes an output port 776 for returning this data back to the pipelines via the communication shell. While processing input data received from the DPSRAM 700, the pipelines 74₁ - 74ₙ may need to temporarily store partially processed, i.e., intermediate, data before continuing the processing of this data. For example, a first pipeline, such as the pipeline 74₁, may generate intermediate data for further processing by a second pipeline, such as the pipeline 74₂; thus, the first pipeline may need to temporarily store the intermediate data until the second pipeline retrieves it. The working DPSRAM 704 provides this temporary storage.
- the two data ports 774 (input) and 776 (output) increase the speed and efficiency of data transfer between the pipelines 74₁ - 74ₙ and the DPSRAM 704.
- including a separate working DPSRAM 704 typically increases the speed and efficiency of the pipeline circuit 80 by allowing the DPSRAMs 700 and 702 to function exclusively as data-input and data-output buffers, respectively.
- either or both of the DPSRAMs 700 and 702 can also be a working memory for the pipelines 74₁ - 74ₙ when the DPSRAM 704 is omitted, and even when it is present.
- although the DPSRAMs 700, 702, and 704 are described as being external to the pipeline circuit 80, one or more of these DPSRAMs, or equivalents thereto, may be internal to the pipeline circuit.
- the communication interface 82 includes an industry-standard bus adapter 778, an input-data handler 720, input-data and input-event queues 722 and 724, an output-data handler 726, and output-data and output-event queues 728 and 730.
- although the queues 722, 724, 728, and 730 are shown as single queues, one or more of these queues may include subqueues (not shown) that allow segregation by, e.g., priority, of the values stored in the queues or of the respective data that these values represent.
- the industry-standard bus adapter 778 includes the physical layer that allows the transfer of data between the pipeline circuit 80 and the pipeline bus 50 (FIG. 4) via the communication bus 94. Therefore, if one wishes to change the parameters of the bus 94, then he need only modify the adapter 778 and not the entire communication interface 82. Where the industry-standard bus interface 91 is omitted from the pipeline unit 78, the adapter 778 may be modified to allow the transfer of data directly between the pipeline bus 50 and the pipeline circuit 80. In this latter implementation, the modified adapter 778 includes the functionality of the bus interface 91, and one need only modify the adapter 778 if he/she wishes to change the parameters of the bus 50.
- the input-data handler 720 receives data from the industry-standard bus adapter 778, loads the data into the DPSRAM 700 via the input port 706, and generates and stores a pointer to the data and a corresponding data identifier in the input-data queue 722. If the data is the payload of a message from a peer, such as the host processor 42 (FIG. 3), then the input-data handler 720 extracts the data from the message before loading the data into the DPSRAM 700.
- the input-data handler 720 includes an interface 732, which writes the data to the input port 706 of the DPSRAM 700 and which is further discussed below in conjunction with FIG. 6. Alternatively, the input-data handler 720 can omit the extraction step and load the entire message into the DPSRAM 700.
- the input-data handler 720 also receives events from the industry-standard bus adapter 778, and loads the events into the input-event queue 724.
- the input-data handler 720 includes a validation manager 734, which determines whether received data or events are intended for the pipeline circuit 80.
- the validation manager 734 may make this determination by analyzing the header (or a portion thereof) of the message that contains the data or the event, by analyzing the type of data or event, or by analyzing the instance identification (i.e., the hardwired pipeline 74 for which the data/event is intended) of the data or event. If the input-data handler 720 receives data or an event that is not intended for the pipeline circuit 80, then the validation manager 734 prohibits the input-data handler from loading the received data/event.
- where the peer-vector machine 40 includes the router 61 (FIG. 3), the validation manager 734 may also cause the input-data handler 720 to send to the host processor 42 (FIG. 3) an exception message that identifies the exception (erroneously received data/event) and the peer that caused the exception.
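A hypothetical sketch of the input-data handler's bookkeeping described above: validate the header, store the payload, and queue a pointer plus a data identifier for later sequencing. All structures, names, sizes, and the fixed-depth queue are illustrative assumptions, not taken from the patent.

```c
/* Hypothetical model of an input-data handler: validate a message header,
 * copy the payload into the input buffer, and queue a pointer plus a data
 * identifier for the sequence manager. */
#include <stdbool.h>
#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define MY_UNIT_ADDR    0x12u
#define INPUT_RAM_WORDS 4096u
#define QUEUE_DEPTH     16u

struct msg_header  { uint8_t unit_addr; uint8_t instance_id; uint16_t payload_words; };
struct queue_entry { uint32_t ptr; uint8_t instance_id; };

static uint32_t input_ram[INPUT_RAM_WORDS];          /* stands in for the input DPSRAM */
static uint32_t next_free = 0;
static struct queue_entry input_data_queue[QUEUE_DEPTH];
static unsigned queue_len = 0;

/* Validation manager: is this message for one of our hardwired pipelines? */
static bool validate(const struct msg_header *h) {
    return h->unit_addr == MY_UNIT_ADDR && h->instance_id < 4;
}

static bool handle_input_message(const struct msg_header *h, const uint32_t *payload) {
    if (!validate(h)) return false;                  /* would also raise an exception */
    if (next_free + h->payload_words > INPUT_RAM_WORDS || queue_len == QUEUE_DEPTH)
        return false;
    memcpy(&input_ram[next_free], payload, h->payload_words * sizeof(uint32_t));
    input_data_queue[queue_len].ptr = next_free;              /* pointer to the data */
    input_data_queue[queue_len].instance_id = h->instance_id; /* data identifier     */
    queue_len++;
    next_free += h->payload_words;
    return true;
}

int main(void) {
    uint32_t payload[3] = {10, 20, 30};
    struct msg_header h = { MY_UNIT_ADDR, 1, 3 };
    bool ok = handle_input_message(&h, payload);
    printf("accepted: %d, queued entries: %u\n", ok, queue_len);
    return 0;
}
```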
- the output-data handler 726 retrieves processed data from locations of the DPSRAM 702 pointed to by the output-data queue 728, and sends the processed data to one or more peers, such as the host processor 42 (FIG. 3), via the industry-standard bus adapter 778.
- the output-data handler 726 includes an interface 736, which reads the processed data from the DPSRAM 702 via the port 772. The interface 736 is further discussed below in conjunction with FIG. 7.
- the output-data handler 726 also retrieves from the output-event queue 730 events generated by the pipelines 74₁ - 74ₙ, and sends the retrieved events to one or more peers, such as the host processor 42 (FIG. 3), via the industry-standard bus adapter 778.
- the output-data handler 726 includes a subscription manager 738, which includes a list of peers, such as the host processor 42 (FIG. 3), that subscribe to the processed data and to the events; the output-data handler uses this list to send the data/events to the correct peers. If a peer prefers the data/event to be the payload of a message, then the output-data handler 726 retrieves the network or bus-port address of the peer from the subscription manager 738, generates a header that includes the address, and generates the message from the data/event and the header.
- although the above-described technique for transferring data into and out of the DPSRAMs 700 and 702 involves the use of pointers and data identifiers, one may modify the input- and output-data handlers 720 and 726 to implement other data-management techniques.
- Conventional examples of such data-management techniques include pointers using keys or tokens, input/output control (IOC) block, and spooling.
- IOC input/output control
- the communication shell 84 includes a physical layer that interfaces the hardwired pipelines 74₁ - 74ₙ to the output-data queue 728, the controller 86, and the DPSRAMs 700, 702, and 704.
- the shell 84 includes interfaces 740 and 742, and optional interfaces 744 and 746.
- the interfaces 740 and 746 may be similar to the interface 736; the interface 740 reads input data from the DPSRAM 700 via the port 708, and the interface 746 reads intermediate data from the DPSRAM 704 via the port 776.
- the interfaces 742 and 744 may be similar to the interface 732; the interface 742 writes processed data to the DPSRAM 702 via the port 770, and the interface 744 writes intermediate data to the DPSRAM 704 via the port 774.
- the controller 86 includes a sequence manager 748 and a synchronization interface 750, which receives one or more synchronization signals SYNC.
- a peer such as the host processor 42 (FIG. 3), or a device (not shown) external to the peer-vector machine 40 (FIG. 3) may generate the SYNC signal, which triggers the sequence manager 748 to activate the hardwired pipelines 74₁ - 74ₙ as discussed below and in previously cited U.S. Patent App. Serial No.
- the synchronization interface 750 may also generate a SYNC signal to trigger the pipeline circuit 80 or to trigger another peer.
- the events from the input-event queue 724 also trigger the sequence manager 748 to activate the hardwired pipelines 74₁ - 74ₙ as discussed below.
- the sequence manager 748 sequences the hardwired pipelines 74₁ - 74ₙ through their respective operating states.
- each pipeline 74 has at least three operating states: preprocessing, processing, and post processing.
- during preprocessing, the pipeline 74, e.g., initializes its registers and retrieves input data from the DPSRAM 700.
- during processing, the pipeline 74, e.g., operates on the retrieved data, temporarily stores intermediate data in the DPSRAM 704, retrieves the intermediate data from the DPSRAM 704, and operates on the intermediate data to generate result data.
- during post-processing, the pipeline 74, e.g., loads the result data into the DPSRAM 702.
- the sequence manager 748 monitors the operation of the pipelines 74₁ - 74ₙ and instructs each pipeline when to begin each of its operating states. And one may distribute the pipeline tasks among the operating states differently than described above. For example, the pipeline 74 may retrieve input data from the DPSRAM 700 during the processing state instead of during the preprocessing state.
- the sequence manager 748 maintains a predetermined internal operating synchronization among the hardwired pipelines 74₁ - 74ₙ.
- for example, to avoid all of the pipelines 74₁ - 74ₙ simultaneously retrieving data from the DPSRAM 700, it may be desired to synchronize the pipelines such that while the first pipeline 74₁ is in a preprocessing state, the second pipeline 74₂ is in a processing state and the third pipeline 74₃ is in a post-processing state. Because a state of one pipeline 74 may require a different number of clock cycles than a concurrently performed state of another pipeline, the pipelines 74₁ - 74ₙ may lose synchronization if allowed to run freely.
- the sequence manager 748 allows all of the pipelines 74 to complete a current operating state before allowing any of the pipelines to proceed to a next operating state. Therefore, the time that the sequence manager 748 allots for a current operating state is long enough to allow the slowest pipeline 74 to complete that state.
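A minimal sketch of this barrier-style sequencing, in which no pipeline advances until the slowest one has finished the current operating state; the per-pipeline cycle counts below are invented for illustration and are not from the patent.

```c
/* Hypothetical model of barrier sequencing: every pipeline completes the
 * current operating state before any pipeline proceeds to the next state. */
#include <stdio.h>

#define NUM_PIPES 3
enum state { PREPROCESS, PROCESS, POSTPROCESS, NUM_STATES };

int main(void) {
    /* cycles[p][s]: clock cycles pipeline p needs for state s (assumed values) */
    int cycles[NUM_PIPES][NUM_STATES] = { {2, 5, 1}, {3, 4, 2}, {1, 6, 1} };
    const char *names[NUM_STATES] = { "preprocessing", "processing", "post-processing" };

    for (int s = 0; s < NUM_STATES; s++) {
        int slowest = 0;
        for (int p = 0; p < NUM_PIPES; p++)
            if (cycles[p][s] > slowest) slowest = cycles[p][s];
        /* The sequence manager allots enough time for the slowest pipeline,
         * so all pipelines cross the state boundary together. */
        printf("%s: allot %d cycles, then advance all %d pipelines\n",
               names[s], slowest, NUM_PIPES);
    }
    return 0;
}
```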
- circuitry (not shown) for maintaining a predetermined operating synchronization among the hardwired pipelines 74₁ - 74ₙ may be included within the pipelines themselves.
- the sequence manager 748 synchronizes the operation of the pipelines to the operation of other peers, such as the host processor 42 (FIG. 3), and to the operation of other external devices in response to one or more SYNC signals or to an event in the input-events queue 724.
- a SYNC signal triggers a time-critical function but requires significant hardware resources; comparatively, an event typically triggers a non-time-critical function but requires significantly fewer hardware resources.
- as discussed in previously cited U.S. Patent App. Serial No. 10/683,932 entitled PIPELINE ACCELERATOR HAVING MULTIPLE PIPELINE UNITS AND RELATED COMPUTING MACHINE AND METHOD, because a SYNC signal is routed directly from peer to peer, it can trigger a function more quickly than an event, which must make its way through, e.g., the pipeline bus 50 (FIG. 3), the input-data handler 720, and the input-event queue 724. But because they are separately routed, the SYNC signals require dedicated circuitry, such as routing lines, buffers, and the SYNC interface 750, of the pipeline circuit 80. Conversely, because they use the existing data-transfer infrastructure (e.g., the pipeline bus 50 and the input-data handler 720), the events require only the dedicated input-event queue 724. Consequently, designers tend to use events to trigger all but the most time-critical functions.
- the pipeline unit 78 and the sensor can employ a SYNC pulse or an event to determine when the pipeline 74₁ is finished.
- the sequence manager 748 may provide a corresponding SYNC pulse or event to the sensor.
- the sensor may send an event to the sequence manager 748 via the pipeline bus 50 (FIG. 3).
- the sequence manager 748 may also provide to a peer, such as the host processor 42 (FIG. 3), information regarding the operation of the hardwired pipelines 74₁ - 74ₙ by generating a SYNC pulse or an event.
- the sequence manager 748 sends a SYNC pulse via the SYNC interface 750 and a dedicated line (not shown), and sends an event via the output-event queue 730 and the output-data handler 726.
- a peer further processes the data blocks from the pipeline 74₂.
- the sequence manager 748 may notify the peer via a SYNC pulse or an event when the pipeline 74₂ has finished processing a block of data.
- the sequence manager 748 may also confirm receipt of a SYNC pulse or an event by generating and sending a corresponding SYNC pulse or event to the appropriate peer(s).
- the industry-standard bus interface 91 receives data signals (which originate from a peer, such as the host processor 42 of FIG. 3) from the pipeline bus 50 (and the router 61 if present), and translates these signals into messages that include the data.
- the industry-standard bus adapter 778 converts the messages from the industry-standard bus interface 91 into a format that is compatible with the input-data handler 720.
- the input-data handler 720 dissects the message headers and extracts from each header the portion that describes the data payload.
- the extracted header portion may include, e.g., the address of the pipeline unit 78, the type of data in the payload, or an instance identifier that identifies the pipeline(s) 74₁ - 74ₙ for which the data is intended.
- after the validation manager 734 analyzes the extracted header portion and confirms that the data is intended for one of the hardwired pipelines 74₁ - 74ₙ, the interface 732 writes the data to a location of the DPSRAM 700 via the port 706, and the input-data handler 720 stores a pointer to the location and a corresponding data identifier in the input-data queue 722.
- the data identifier identifies the pipeline or pipelines 74₁ - 74ₙ for which the data is intended, or includes information that allows the sequence manager 748 to make this identification as discussed below.
- the queue 722 may include a respective subqueue (not shown) for each pipeline 74₁ - 74ₙ, and the input-data handler 720 stores the pointer in the subqueue or subqueues of the intended pipeline or pipelines.
- the data identifier may be omitted.
- the input-data handler 720 extracts the data from the message before the interface 732 stores the data in the DPSRAM 700.
- the interface 732 may store the entire message in the DPSRAM 700.
- the sequence manager 748 reads the pointer and the data identifier from the input-data queue 722, determines from the data identifier the pipeline or pipelines 74₁ - 74ₙ for which the data is intended, and passes the pointer to the pipeline or pipelines via the communication shell 84.
- the data-receiving pipeline or pipelines 74₁ - 74ₙ cause the interface 740 to retrieve the data from the pointed-to location of the DPSRAM 700 via the port 708.
- the data-receiving pipeline or pipelines 74₁ - 74ₙ process the retrieved data
- the interface 742 writes the processed data to a location of the DPSRAM 702 via the port 770
- the communication shell 84 loads into the output-data queue 728 a pointer to and a data identifier for the processed data.
- the data identifier identifies the destination peer or peers, such as the host processor 42 (FIG. 3), that subscribe to the processed data, or includes information (such as the data type) that allows the subscription manager 738 to subsequently determine the destination peer or peers (e.g., the host processor 42 of FIG. 3).
- the queue 728 may include a respective subqueue (not shown) for each pipeline 74₁ - 74ₙ, and the communication shell 84 stores the pointer in the subqueue or subqueues of the originating pipeline or pipelines.
- the communication shell 84 may omit loading a data identifier into the queue 728.
- the output-data handler 726 retrieves the pointer and the data identifier from the output-data queue 728, the subscription manager 738 determines from the identifier the destination peer or peers (e.g., the host processor 42 of FIG. 3) of the data, the interface 736 retrieves the data from the pointed-to location of the DPSRAM 702 via the port 772, and the output-data handler sends the data to the industry-standard bus adapter 778. If a destination peer requires the data to be the payload of a message, then the output-data handler 726 generates the message and sends the message to the adapter 778. For example, suppose the data has multiple destination peers and the pipeline bus 50 supports message broadcasting.
- the output-data handler 726 generates a single header that includes the addresses of all the destination peers, combines the header and data into a message, and sends (via the adapter 778 and the industry-standard bus interface 91) a single message to all of the destination peers simultaneously.
- if the pipeline bus 50 does not support broadcasting, the output-data handler 726 generates a respective header, and thus a respective message, for each destination peer, and sends each of the messages separately.
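A hypothetical sketch of this subscription-driven publishing, contrasting a single broadcast message with per-peer messages; the structures, names, and addresses are illustrative assumptions, not taken from the patent.

```c
/* Hypothetical model of the output-data handler's use of a subscription list:
 * one broadcast message when the bus supports it, otherwise one message per
 * subscribing peer. */
#include <stdbool.h>
#include <stdio.h>

#define MAX_SUBSCRIBERS 4

struct subscription_list { int peer_addr[MAX_SUBSCRIBERS]; int count; };

static void send_message(int num_dests, const int *dests, const char *data) {
    printf("message '%s' ->", data);
    for (int i = 0; i < num_dests; i++) printf(" peer %d", dests[i]);
    printf("\n");
}

static void publish(const struct subscription_list *subs, const char *data,
                    bool bus_supports_broadcast) {
    if (bus_supports_broadcast) {
        /* one header listing every destination peer, one message on the bus */
        send_message(subs->count, subs->peer_addr, data);
    } else {
        /* a separate header, and thus a separate message, per destination peer */
        for (int i = 0; i < subs->count; i++)
            send_message(1, &subs->peer_addr[i], data);
    }
}

int main(void) {
    struct subscription_list subs = { {7, 9, 11}, 3 };
    publish(&subs, "processed block", true);
    publish(&subs, "processed block", false);
    return 0;
}
```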
- the industry-standard bus adapter 778 formats the data from the output-data handler 726 so that it is compatible with the industry-standard bus interface 91.
- the industry-standard bus interface 91 formats the data from the industry-standard bus adapter 778 so that it is compatible with the pipeline bus 50 (FIG. 3).
- the industry-standard bus interface 91 receives a signal (which originates from a peer, such as the host processor 42 of FIG. 3) from the pipeline bus 50 (and the router 61 if present), and translates the signal into a header (i.e., a data-less message) that includes the event.
- the industry-standard bus adapter 778 converts the header from the industry-standard bus interface 91 into a format that is compatible with the input-data handler 720.
- the input-data handler 720 extracts from the header the event and a description of the event.
- the description may include, e.g., the address of the pipeline unit 78, the type of event, or an instance identifier that identifies the pipeline(s) 74₁ - 74ₙ for which the event is intended.
- the validation manager 734 analyzes the event description and confirms that the event is intended for one of the hardwired pipelines 74₁ - 74ₙ, and the input-data handler 720 stores the event and its description in the input-event queue 724.
- the sequence manager 748 reads the event and its description from the input-event queue 724, and, in response to the event, triggers the operation of one or more of the pipelines 74₁ - 74ₙ as discussed above.
- the sequence manager 748 may trigger the pipeline 74₂ to begin processing data that the pipeline 74₁ previously stored in the DPSRAM 704.
- to output an event, the sequence manager 748 generates the event and a description of the event, and loads the event and its description into the output-event queue 730 — the event description identifies the destination peer(s) for the event if there is more than one possible destination peer. For example, as discussed above, the event may confirm the receipt and implementation of an input event, an input-data or input-event message, or a SYNC pulse.
- the output-data handler 726 retrieves the event and its description from the output-event queue 730, the subscription manager 738 determines from the event description the destination peer or peers (e.g., the host processor 42 of FIG. 3) of the event, and the output-data handler sends the event to the proper destination peer or peers via the industry-standard bus adapter 778 and the industry-standard bus interface 91 as discussed above.
- the industry-standard bus adapter 778 receives the command from the host processor 42 (FIG. 3) via the industry-standard bus interface 91, and provides the command to the input-data handler 720 in a manner similar to that discussed above for a data-less event (i.e., a doorbell).
- the validation manager 734 confirms that the command is intended for the pipeline unit 78, and the input-data handler 720 loads the command into the configuration manager 90. Furthermore, either the input-data handler 720 or the configuration manager 90 may also pass the command to the output-data handler 726, which confirms that the pipeline unit 78 received the command by sending the command back to the peer (e.g., the host processor 42 of FIG. 3) that sent the command. This confirmation technique is sometimes called "echoing.”
- the configuration manager 90 implements the command.
- the command may cause the configuration manager 90 to disable one of the pipelines 74₁ - 74ₙ for debugging purposes.
- the command may allow a peer, such as the host processor 42 (FIG. 3), to read the current configuration of the pipeline circuit 80 from the configuration manager 90 via the output-data handler 726.
- a configuration command may also be used to define an exception that is recognized by the exception manager 88.
- a component, such as the input-data queue 722, of the pipeline circuit 80 triggers an exception to the exception manager 88.
- the component includes an exception-triggering adapter (not shown) that monitors the component and triggers the exception in response to a predetermined condition or set of conditions.
- the exception-triggering adapter may be a universal circuit that can be designed once and then included as part of each component of the pipeline circuit 80 that generates exceptions.
- the exception manager 88 generates an exception identifier.
- the identifier may indicate that the input-data queue 722 has overflowed.
- the identifier may include its destination peer if there is more than one possible destination peer.
- the output-data handler 726 retrieves the exception identifier from the exception manager 88 and sends the exception identifier to the host processor 42 (FIG. 3) as discussed in previously cited U.S. Patent App. Serial No. 10/684,053 entitled COMPUTING MACHINE HAVING IMPROVED COMPUTING ARCHITECTURE AND RELATED SYSTEM AND METHOD.
- the exception identifier can also include destination information from which the subscription manager 738 determines the destination peer or peers (e.g., the host processor 42 of FIG. 3) of the identifier.
- the output-data handler 726 then sends the identifier to the destination peer or peers via the industry-standard bus adapter 778 and the industry-standard bus interface 97.
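The exception path (triggering adapter, exception manager, output-data handler) can likewise be modeled in software. The C sketch below is a simplified illustration: the exception codes, the peer numbering, and the overflow test are invented for the example and are not taken from the patent.

```c
/* Hypothetical sketch of the exception path: a monitored component (here an
 * overflowing input queue) raises a flag, the exception manager encodes an
 * exception identifier plus destination information, and the output-data
 * handler forwards it to the destination peer. All names are illustrative. */
#include <stdio.h>

enum exception_code { EXC_NONE = 0, EXC_INPUT_QUEUE_OVERFLOW = 1 };

typedef struct {
    enum exception_code code;
    int dest_peer;                     /* e.g. the host processor */
} ExceptionId;

/* exception-triggering adapter: watches one predetermined condition */
static enum exception_code watch_input_queue(int depth, int capacity)
{
    return depth > capacity ? EXC_INPUT_QUEUE_OVERFLOW : EXC_NONE;
}

static ExceptionId exception_manager(enum exception_code code)
{
    ExceptionId id = { code, /*dest_peer=*/42 };   /* 42: stand-in for the host */
    return id;
}

static void output_data_handler(ExceptionId id)
{
    if (id.code != EXC_NONE)
        printf("send exception %d to peer %d\n", id.code, id.dest_peer);
}

int main(void)
{
    enum exception_code c = watch_input_queue(/*depth=*/17, /*capacity=*/16);
    output_data_handler(exception_manager(c));
    return 0;
}
```

The destination field on the identifier corresponds to the destination information from which the subscription manager picks the peer, as noted above.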
- the data memory 92 may include other types of memory ICs such as quad-data-rate (QDR) SRAMs.
- FIG. 6 is a block diagram of the interface 742 of FIG. 5 according to an embodiment of the invention.
- the interface 742 writes processed data from the hardwired pipelines 74 1 -74 n to the DPSRAM 702.
- the structure of the interface 742 reduces or eliminates data "bottlenecks" and, where the pipeline circuit 80 (FIG. 5) is a PLIC, makes efficient use of the PLIC's local and global routing resources.
- the interface 742 includes write channels 750 1 -750 n, one channel for each hardwired pipeline 74 1 -74 n (FIG. 5), and includes a controller 752.
- the channel 750 1 is discussed below, it being understood that the operation and structure of the other channels 750 2 -750 n are similar unless stated otherwise.
- the channel 750 1 includes a write-address/data FIFO 754 1 and an address/data register 756 1.
- the FIFO 754 1 stores the data that the pipeline 74 1 writes to the DPSRAM 702, and the address of the location within the DPSRAM 702 to which the pipeline writes the data, until the controller 752 can actually write the data to the DPSRAM 702 via the register 756 1. Therefore, the FIFO 754 1 reduces or eliminates the data bottleneck that may occur if the pipeline 74 1 had to "wait" to write data to the channel 750 1 until the controller 752 finished writing previous data.
- the FIFO 754 1 receives the data from the pipeline 74 1 via a bus 758 1, receives the address of the location to which the data is to be written via a bus 760 1, and provides the data and address to the register 756 1 via busses 762 1 and 764 1, respectively. Furthermore, the FIFO 754 1 receives a WRITE FIFO signal from the pipeline 74 1 on a line 766 1, receives a CLOCK signal via a line 768 1, and provides a FIFO FULL signal to the pipeline 74 1 on a line 770 1.
- the FIFO 754 1 receives a READ FIFO signal from the controller 752 via a line 772 1, and provides a FIFO EMPTY signal to the controller via a line 774 1.
- where the pipeline circuit 80 (FIG. 5) is a PLIC, the busses 758 1, 760 1, 762 1, and 764 1 and the lines 766 1, 768 1, 770 1, 772 1, and 774 1 are preferably formed using local routing resources.
- local routing resources are preferred to global routing resources because the signal-path lengths are generally shorter and the routing is easier to implement.
- the register 756 1 receives the data to be written and the address of the write location from the FIFO 754 1 via the busses 762 1 and 764 1, respectively, and provides the data and address to the port 770 of the DPSRAM 702 (FIG. 5) via an address/data bus 776. Furthermore, the register 756 1 also receives the data and address from the registers 756 2 -756 n via an address/data bus 778 1 as discussed below. In addition, the register 756 1 receives a SHIFT/LOAD signal from the controller 752 via a line 780. Where the pipeline circuit 80 (FIG. 5) is a PLIC, the bus 776 is typically formed using global routing resources, and the busses 778 1 -778 n-1 and the line 780 are preferably formed using local routing resources.
- the controller 752 provides a WRITE DPSRAM signal to the port 770 of the DPSRAM 702 (FIG. 5) via a line 782.
- the FIFO 754 1 drives the FIFO FULL signal to the logic level corresponding to the current state ("full" or "not full") of the FIFO.
- the FIFO 754 1 drives the FIFO EMPTY signal to the logic level corresponding to the current state ("empty" or "not empty") of the FIFO.
- the controller 752 asserts the READ FIFO signal and drives the SHIFT/LOAD signal to the load logic level, thus loading the first-loaded data and address from the FIFO into the register 756 1. If the FIFO 754 1 is empty, the controller 752 does not assert READ FIFO, but does drive SHIFT/LOAD to the load logic level if any of the other FIFOs 754 2 -754 n are not empty.
- the channels 750 2 -750 n operate in a similar manner such that first-loaded data in the FIFOs 754 2 -754 n are respectively loaded into the registers 756 2 -756 n.
- the controller 752 drives the SHIFT/LOAD signal to the shift logic level and asserts the WRITE DPSRAM signal, thus serially shifting the data and addresses from the registers 756 1 -756 n onto the address/data bus 776 and loading the data into the corresponding locations of the DPSRAM 702. Specifically, during a first shift cycle, the data and address from the register 756 1 are shifted onto the bus 776 such that the data from the FIFO 754 1 is loaded into the addressed location of the DPSRAM 702. Also during the first shift cycle, the data and address from the register 756 2 are shifted into the register 756 1, the data and address from the register 756 3 (not shown) are shifted into the register 756 2, and so on.
- the data and address from the register 756 1 are shifted onto the bus 776 such that the data from the FIFO 754 2 is loaded into the addressed location of the DPSRAM 702.
- the data and address from the register 756 2 are shifted into the register 756 1.
- the data and address from the register 756 3 are shifted into the register 756 2, and so on.
- the data and address originally loaded into the register 756 n are shifted onto the bus 776.
- the controller 752 may implement these shift cycles by pulsing the SHIFT/LOAD signal, or by generating a shift clock signal (not shown) that is coupled to the registers 756 1 -756 n. Furthermore, if one of the registers 756 1 -756 n is empty during a particular shift operation because its corresponding FIFO 754 1 -754 n was empty when the controller 752 loaded the register, then the controller may bypass the empty register, and thus shorten the shift operation by avoiding shifting null data and a null address onto the bus 776. Referring to FIGS. 5 and 6, according to an embodiment of the invention, the interface 744 is similar to the interface 742, and the interface 732 is also similar to the interface 742 except that the interface 732 includes only one write channel 750.
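A compact way to see the write-interface behavior described above (per-pipeline FIFOs, a load phase into the registers, and a serial shift onto the shared bus that skips empty channels) is a software model. The following C sketch is only behavioral; the channel count, FIFO depth, and memory size are arbitrary assumptions, and the real interface is hardwired logic rather than sequential code.

```c
/* Hypothetical software model of the write interface of FIG. 6: each channel
 * has a FIFO of (address, data) pairs; the controller loads the oldest entry
 * of every non-empty FIFO into that channel's register, then "shifts" the
 * registers one at a time onto the shared address/data bus and writes the
 * memory, skipping channels that had nothing to write. */
#include <stdbool.h>
#include <stdio.h>

enum { N_CHANNELS = 3, FIFO_DEPTH = 4, MEM_WORDS = 16 };

typedef struct { unsigned addr, data; } Entry;

typedef struct {
    Entry e[FIFO_DEPTH];
    int head, tail;                     /* FIFO occupancy: tail - head */
} WriteFifo;

static bool fifo_pop(WriteFifo *f, Entry *out)
{
    if (f->head == f->tail) return false;                   /* FIFO EMPTY */
    *out = f->e[f->head++ % FIFO_DEPTH];
    return true;
}

static void fifo_push(WriteFifo *f, unsigned addr, unsigned data)
{
    f->e[f->tail++ % FIFO_DEPTH] = (Entry){ addr, data };   /* caller checks FULL */
}

static unsigned dpsram[MEM_WORDS];                          /* stand-in for the DPSRAM */

/* one load-then-shift round of the controller */
static void controller_round(WriteFifo fifo[N_CHANNELS])
{
    Entry reg[N_CHANNELS];
    bool valid[N_CHANNELS];
    for (int i = 0; i < N_CHANNELS; i++)                    /* SHIFT/LOAD = load */
        valid[i] = fifo_pop(&fifo[i], &reg[i]);
    for (int i = 0; i < N_CHANNELS; i++) {                  /* SHIFT/LOAD = shift */
        if (!valid[i]) continue;                            /* bypass empty register */
        dpsram[reg[i].addr % MEM_WORDS] = reg[i].data;      /* WRITE DPSRAM */
    }
}

int main(void)
{
    WriteFifo fifo[N_CHANNELS] = { 0 };
    fifo_push(&fifo[0], 0x1, 0xAAAA);                       /* pipeline 1 writes */
    fifo_push(&fifo[2], 0x2, 0xBBBB);                       /* pipeline 3 writes */
    controller_round(fifo);
    printf("dpsram[1]=0x%X dpsram[2]=0x%X\n", dpsram[1], dpsram[2]);
    return 0;
}
```

Serializing the registers onto one shared bus mirrors the routing goal stated above: only the bus 776 needs global routing resources, while the per-channel connections stay on local routing resources.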
- FIG. 7 is a block diagram of the interface 740 of FIG. 5 according to an embodiment of the invention.
- the interface 740 reads input data from the DPSRAM 700 and transfers this data to the hardwired pipelines 74 1 -74 n.
- the structure of the interface 740 reduces or eliminates data "bottlenecks" and, where the pipeline circuit 80 (FIG. 5) is a PLIC, makes efficient use of the PLIC's local and global routing resources.
- the interface 740 includes read channels 790 1 -790 n, one channel for each hardwired pipeline 74 1 -74 n (FIG. 5), and a controller 792.
- the read channel 790 1 is discussed below, it being understood that the operation and structure of the other read channels 790 2 -790 n are similar unless stated otherwise.
- the channel 790 1 includes a FIFO 794 1 and an address/identifier (ID) register 796 1.
- the identifier identifies which of the pipelines 74 1 -74 n requested the data from a particular location of the DPSRAM 700, so that the data read from that location can be returned to the requesting pipeline.
- the FIFO 794 1 includes two sub-FIFOs (not shown), one for storing the address of the location within the DPSRAM 700 from which the pipeline 74 1 wishes to read the input data, and the other for storing the data read from the DPSRAM 700. Therefore, the FIFO 794 1 reduces or eliminates the bottleneck that may occur if the pipeline 74 1 had to "wait" to provide the read address to the channel 790 1 until the controller 792 finished reading previous data, or if the controller had to wait until the pipeline 74 1 retrieved the read data before the controller could read subsequent data.
- the FIFO 794 1 receives the read address from the pipeline 74 1 via a bus 798 1 and provides the address and ID to the register 796 1 via a bus 200 1. Since the ID corresponds to the pipeline 74 1 and typically does not change, the FIFO 794 1 may store the ID and concatenate the ID with the address. Alternatively, the pipeline 74 1 may provide the ID to the FIFO 794 1 via the bus 798 1. Furthermore, the FIFO 794 1 receives a READ/WRITE FIFO signal from the pipeline 74 1 via a line 202 1, receives a CLOCK signal via a line 204 1, and provides a FIFO FULL (of read addresses) signal to the pipeline via a line 206 1.
- the FIFO 794 1 receives a WRITE/READ FIFO signal from the controller 792 via a line 208 1, and provides a FIFO EMPTY signal to the controller via a line 270 1. Moreover, the FIFO 794 1 receives the read data and the corresponding ID from the controller 792 via a bus 272, and provides this data to the pipeline 74 1 via a bus 274 1.
- where the pipeline circuit 80 (FIG. 5) is a PLIC, the busses 798 1, 200 1, and 274 1 and the lines 202 1, 204 1, 206 1, 208 1, and 270 1 are preferably formed using local routing resources, and the bus 272 is typically formed using global routing resources.
- the register 796 1 receives the address of the location to be read and the corresponding ID from the FIFO 794 1 via the bus 200 1, provides the address to the port 708 of the DPSRAM 700 (FIG. 5) via an address bus 276, and provides the ID to the controller 792 via a bus 278. Furthermore, the register 796 1 also receives the addresses and IDs from the registers 796 2 -796 n via an address/ID bus 220 1 as discussed below. In addition, the register 796 1 receives a SHIFT/LOAD signal from the controller 792 via a line 222. Where the pipeline circuit 80 (FIG. 5) is a PLIC, the bus 276 is typically formed using global routing resources, and the busses 220 1 -220 n-1 and the line 222 are preferably formed using local routing resources.
- the controller 792 receives the data read from the port 708 of the DPSRAM 700 (FIG. 5) via a bus 224 and generates a READ DPSRAM signal on a line 226, which couples this signal to the port 708.
- where the pipeline circuit 80 (FIG. 5) is a PLIC, the bus 224 and the line 226 are typically formed using global routing resources.
- the FIFO 794 1 drives the FIFO FULL signal to the logic level corresponding to the current state ("full" or "not full") of the FIFO relative to the read addresses. That is, if the FIFO 794 1 is full of addresses to be read, then it drives the logic level of FIFO FULL to one level, and if the FIFO is not full of read addresses, it drives the logic level of FIFO FULL to another level.
- the pipeline 74 1 drives the address of the data to be read onto the bus 798 1, and asserts the READ/WRITE FIFO signal to a write level, thus loading the address into the FIFO.
- the pipeline 74 1 gets the address from the input-data queue 722 via the sequence manager 748. If, however, the FIFO 794 1 is full of read addresses, the pipeline 74 1 waits until the FIFO is not full before loading the read address.
- the FIFO 794 1 drives the FIFO EMPTY signal to the logic level corresponding to the current state ("empty" or "not empty") of the FIFO relative to the read addresses. That is, if the FIFO 794 1 is loaded with at least one read address, it drives the logic level of FIFO EMPTY to one level, and if the FIFO is loaded with no read addresses, it drives the logic level of FIFO EMPTY to another level. Next, if the FIFO 794 1 is not empty, the controller 792 asserts the WRITE/READ FIFO signal to the read level and drives the SHIFT/LOAD signal to the load logic level, thus loading the first-loaded address and ID from the FIFO 794 1 into the register 796 1.
- the channels 790 2 -790 n operate in a similar manner such that the controller 792 respectively loads the first-loaded addresses and IDs from the FIFOs 794 2 -794 n into the registers 796 2 -796 n. If all of the FIFOs 794 1 -794 n are empty, then the controller 792 waits for at least one of the FIFOs to receive an address before proceeding.
- the controller 792 drives the SHIFT/LOAD signal to the shift logic level and asserts the READ DPSRAM signal to serially shift the addresses and IDs from the registers 796 1 -796 n onto the address and ID busses 276 and 278 and to serially read the data from the corresponding locations of the DPSRAM 700 via the bus 224. Next, the controller 792 drives the received data and corresponding ID onto the bus 272 so that the proper one of the FIFOs 794 1 -794 n can load the data, as discussed below.
- the controller 792 shifts the address and ID from the register 796 1 onto the busses 276 and 278, respectively, asserts READ DPSRAM, and thus reads the data from the corresponding location of the DPSRAM 700 via the bus 224 and reads the ID from the bus 278.
- the controller 792 drives the WRITE/READ FIFO signal on the line 208 1 to a write level and drives the received data and the ID onto the bus 272.
- the FIFO 794 1 recognizes the ID and thus loads the data from the bus 272 in response to the write level of the WRITE/READ FIFO signal.
- the remaining FIFOs 794 2 -794 n do not load the data because the ID on the bus 272 does not correspond to their IDs.
- the pipeline 74 1 asserts the READ/WRITE FIFO signal on the line 202 1 to the read level and retrieves the read data via the bus 274 1.
- the address and ID from the register 796 2 are shifted into the register 796 1.
- the address and ID from the register 796 3 (not shown) are shifted into the register 796 2 , and so on.
- the controller 792 may recognize the ID and drive only the WRITE/READ FIFO signal on the line 208 1 to the write level. This eliminates the need for the controller 792 to send the ID to the FIFOs 794 1 -794 n.
- the WRITE/READ FIFO signal may be only a read signal, and the FIFO 794 1 (as well as the other FIFOs 794 2 -794 n) may load the data on the bus 272 when the ID on the bus 272 matches the ID of the FIFO 794 1. This eliminates the need for the controller 792 to generate a write signal.
- the address and ID from the register 796 1 are shifted onto the busses 276 and 278 such that the controller 792 reads data from the location of the DPSRAM 700 specified by the FIFO 794 2.
- the controller 792 drives the WRITE/READ FIFO signal to a write level and drives the received data and the ID onto the bus 272. Because the ID is the ID from the FIFO 794 2, the FIFO 794 2 recognizes the ID and thus loads the data from the bus 272. The remaining FIFOs 794 1 and 794 3 -794 n do not load the data because the ID on the bus 272 does not correspond to their IDs.
- the pipeline 74 2 asserts its READ/WRITE FIFO signal to the read level and retrieves the read data via the bus 274 2. Also during the second shift cycle, the address and ID from the register 796 2 are shifted into the register 796 1, the address and ID from the register 796 3 (not shown) are shifted into the register 796 2, and so on.
- the interface 744 is similar to the interface 740, and the interface 736 is also similar to the interface 740 except that the interface 736 includes only one read channel 790, and thus includes no ID circuitry.
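The read interface can be modeled the same way: each channel queues the addresses its pipeline wants to read, tagged with that channel's ID; the controller reads the DPSRAM and drives a (data, ID) pair onto a shared return bus, and only the channel whose ID matches captures the data. The C sketch below is illustrative only; the structure fields and sizes are assumptions and do not correspond to the numbered elements of FIG. 7.

```c
/* Hypothetical software model of the read interface of FIG. 7: pending read
 * addresses are tagged with the requesting channel's ID, the controller
 * services them one at a time, and the returned data is demultiplexed by
 * matching the ID on the shared return bus. Behavioral sketch only. */
#include <stdio.h>

enum { N_CHANNELS = 3, MEM_WORDS = 16 };

static unsigned dpsram[MEM_WORDS];                  /* stand-in for the input-data memory */

typedef struct {
    unsigned addr;                                  /* queued read address */
    int pending;                                    /* 1 while a read is outstanding */
    unsigned data;                                  /* captured return data */
} ReadChannel;

/* shared return "bus": one (data, ID) pair per controller cycle */
static void return_bus(ReadChannel ch[N_CHANNELS], unsigned data, int id)
{
    for (int i = 0; i < N_CHANNELS; i++)
        if (i == id)                                /* ID match: this channel loads */
            ch[i].data = data;                      /* others ignore the bus */
}

static void controller_round(ReadChannel ch[N_CHANNELS])
{
    for (int id = 0; id < N_CHANNELS; id++) {       /* serial shift of address + ID */
        if (!ch[id].pending) continue;              /* bypass idle channel */
        unsigned data = dpsram[ch[id].addr % MEM_WORDS];   /* READ DPSRAM */
        return_bus(ch, data, id);
        ch[id].pending = 0;
    }
}

int main(void)
{
    ReadChannel ch[N_CHANNELS] = { 0 };
    dpsram[5] = 0x1234;                             /* input data previously written */
    ch[1].addr = 5;                                 /* channel 1 requests a read */
    ch[1].pending = 1;
    controller_round(ch);
    printf("channel 1 read 0x%X\n", ch[1].data);
    return 0;
}
```

Tagging the returned data with the requester's ID lets all channels share a single return bus, which again keeps the global-routing cost to one bus (the bus 272) as described above.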
- FIG. 8 is a schematic block diagram of a pipeline unit 230 of FIG. 4 according to another embodiment of the invention.
- the pipeline unit 230 is similar to the pipeline unit 78 of FIG. 4 except that the pipeline unit 230 includes multiple pipeline circuits 80 — here two pipeline circuits 80a and 80b.
- Increasing the number of pipeline circuits 80 typically allows an increase in the number n of hardwired pipelines 74 1 -74 n, and thus an increase in the functionality of the pipeline unit 230 as compared to the pipeline unit 78.
- the services components, i.e., the communication interface 82, the controller 86, the exception manager 88, the configuration manager 90, and the optional industry-standard bus interface 97, are disposed on the pipeline circuit 80a, and the pipelines 74 1 -74 n and the communication shell 84 are disposed on the pipeline circuit 80b.
- because the services components and the pipelines 74 1 -74 n are disposed on separate pipeline circuits, one can include a higher number n of pipelines and/or more complex pipelines than where the services components and the pipelines are located on the same pipeline circuit.
- FIG. 9 is a schematic block diagram of the pipeline circuits 80a and 80b of FIG. 8.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Software Systems (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computer Hardware Design (AREA)
- Advance Control (AREA)
- Multi Processors (AREA)
- Stored Programmes (AREA)
- Microcomputers (AREA)
- Programmable Controllers (AREA)
- Logic Circuits (AREA)
- Bus Control (AREA)
- Complex Calculations (AREA)
Abstract
Description
Claims
Applications Claiming Priority (13)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US42250302P | 2002-10-31 | 2002-10-31 | |
US422503P | 2002-10-31 | ||
US684053 | 2003-10-09 | ||
US10/684,057 US7373432B2 (en) | 2002-10-31 | 2003-10-09 | Programmable circuit and related computing machine and method |
US683932 | 2003-10-09 | ||
US684102 | 2003-10-09 | ||
US10/683,929 US20040136241A1 (en) | 2002-10-31 | 2003-10-09 | Pipeline accelerator for improved computing architecture and related system and method |
US10/684,053 US7987341B2 (en) | 2002-10-31 | 2003-10-09 | Computing machine using software objects for transferring data that includes no destination information |
US683929 | 2003-10-09 | ||
US10/683,932 US7386704B2 (en) | 2002-10-31 | 2003-10-09 | Pipeline accelerator including pipeline circuits in communication via a bus, and related system and method |
US684057 | 2003-10-09 | ||
US10/684,102 US7418574B2 (en) | 2002-10-31 | 2003-10-09 | Configuring a portion of a pipeline accelerator to generate pipeline date without a program instruction |
PCT/US2003/034558 WO2004042562A2 (en) | 2002-10-31 | 2003-10-31 | Pipeline accelerator and related system and method |
Publications (1)
Publication Number | Publication Date |
---|---|
EP1573515A2 true EP1573515A2 (en) | 2005-09-14 |
Family
ID=34280226
Family Applications (5)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP03781550A Ceased EP1573514A2 (en) | 2002-10-31 | 2003-10-31 | Pipeline accelerator and related computer and method |
EP03781553A Withdrawn EP1573515A2 (en) | 2002-10-31 | 2003-10-31 | Pipeline accelerator and related system and method |
EP03781554A Withdrawn EP1559005A2 (en) | 2002-10-31 | 2003-10-31 | Computing machine having improved computing architecture and related system and method |
EP03781552A Expired - Lifetime EP1570344B1 (en) | 2002-10-31 | 2003-10-31 | Pipeline coprocessor |
EP03781551A Ceased EP1576471A2 (en) | 2002-10-31 | 2003-10-31 | Programmable circuit and related computing machine and method |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP03781550A Ceased EP1573514A2 (en) | 2002-10-31 | 2003-10-31 | Pipeline accelerator and related computer and method |
Family Applications After (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP03781554A Withdrawn EP1559005A2 (en) | 2002-10-31 | 2003-10-31 | Computing machine having improved computing architecture and related system and method |
EP03781552A Expired - Lifetime EP1570344B1 (en) | 2002-10-31 | 2003-10-31 | Pipeline coprocessor |
EP03781551A Ceased EP1576471A2 (en) | 2002-10-31 | 2003-10-31 | Programmable circuit and related computing machine and method |
Country Status (8)
Country | Link |
---|---|
EP (5) | EP1573514A2 (en) |
JP (9) | JP2006515941A (en) |
KR (5) | KR101012745B1 (en) |
AU (5) | AU2003287320B2 (en) |
CA (5) | CA2503617A1 (en) |
DE (1) | DE60318105T2 (en) |
ES (1) | ES2300633T3 (en) |
WO (4) | WO2004042561A2 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7676649B2 (en) | 2004-10-01 | 2010-03-09 | Lockheed Martin Corporation | Computing machine with redundancy and related systems and methods |
US7987341B2 (en) | 2002-10-31 | 2011-07-26 | Lockheed Martin Corporation | Computing machine using software objects for transferring data that includes no destination information |
Families Citing this family (38)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8095508B2 (en) | 2000-04-07 | 2012-01-10 | Washington University | Intelligent data storage and processing using FPGA devices |
US7711844B2 (en) | 2002-08-15 | 2010-05-04 | Washington University Of St. Louis | TCP-splitter: reliable packet monitoring methods and apparatus for high speed networks |
JP2006515941A (en) * | 2002-10-31 | 2006-06-08 | ロックヒード マーティン コーポレーション | Pipeline accelerator having multiple pipeline units, associated computing machine, and method |
US20070277036A1 (en) | 2003-05-23 | 2007-11-29 | Washington University, A Corporation Of The State Of Missouri | Intelligent data storage and processing using fpga devices |
US10572824B2 (en) | 2003-05-23 | 2020-02-25 | Ip Reservoir, Llc | System and method for low latency multi-functional pipeline with correlation logic and selectively activated/deactivated pipelined data processing engines |
JP2008532177A (en) | 2005-03-03 | 2008-08-14 | ワシントン ユニヴァーシティー | Method and apparatus for performing biological sequence similarity searches |
JP4527571B2 (en) * | 2005-03-14 | 2010-08-18 | 富士通株式会社 | Reconfigurable processing unit |
WO2007011203A1 (en) * | 2005-07-22 | 2007-01-25 | Stichting Astron | Scalable control interface for large-scale signal processing systems. |
US7702629B2 (en) | 2005-12-02 | 2010-04-20 | Exegy Incorporated | Method and device for high performance regular expression pattern matching |
JP2007164472A (en) * | 2005-12-14 | 2007-06-28 | Sonac Kk | Arithmetic device with queuing mechanism |
US7954114B2 (en) | 2006-01-26 | 2011-05-31 | Exegy Incorporated | Firmware socket module for FPGA-based pipeline processing |
US7840482B2 (en) | 2006-06-19 | 2010-11-23 | Exegy Incorporated | Method and system for high speed options pricing |
US7921046B2 (en) | 2006-06-19 | 2011-04-05 | Exegy Incorporated | High speed processing of financial information using FPGA devices |
US8326819B2 (en) | 2006-11-13 | 2012-12-04 | Exegy Incorporated | Method and system for high performance data metatagging and data indexing using coprocessors |
US7660793B2 (en) | 2006-11-13 | 2010-02-09 | Exegy Incorporated | Method and system for high performance integration, processing and searching of structured and unstructured data using coprocessors |
US8374986B2 (en) | 2008-05-15 | 2013-02-12 | Exegy Incorporated | Method and system for accelerated stream processing |
JP5138040B2 (en) * | 2008-07-30 | 2013-02-06 | パナソニック株式会社 | Integrated circuit |
CA2744746C (en) | 2008-12-15 | 2019-12-24 | Exegy Incorporated | Method and apparatus for high-speed processing of financial market depth data |
US8478965B2 (en) | 2009-10-30 | 2013-07-02 | International Business Machines Corporation | Cascaded accelerator functions |
US10037568B2 (en) | 2010-12-09 | 2018-07-31 | Ip Reservoir, Llc | Method and apparatus for managing orders in financial markets |
US9990393B2 (en) | 2012-03-27 | 2018-06-05 | Ip Reservoir, Llc | Intelligent feed switch |
US10121196B2 (en) | 2012-03-27 | 2018-11-06 | Ip Reservoir, Llc | Offload processing of data packets containing financial market data |
US11436672B2 (en) | 2012-03-27 | 2022-09-06 | Exegy Incorporated | Intelligent switch for processing financial market data |
US10650452B2 (en) | 2012-03-27 | 2020-05-12 | Ip Reservoir, Llc | Offload processing of data packets |
FR2996657B1 (en) * | 2012-10-09 | 2016-01-22 | Sagem Defense Securite | CONFIGURABLE GENERIC ELECTRICAL BODY |
WO2014066416A2 (en) | 2012-10-23 | 2014-05-01 | Ip Reservoir, Llc | Method and apparatus for accelerated format translation of data in a delimited data format |
US9633093B2 (en) | 2012-10-23 | 2017-04-25 | Ip Reservoir, Llc | Method and apparatus for accelerated format translation of data in a delimited data format |
US10133802B2 (en) | 2012-10-23 | 2018-11-20 | Ip Reservoir, Llc | Method and apparatus for accelerated record layout detection |
WO2014182314A2 (en) * | 2013-05-10 | 2014-11-13 | Empire Technology Development, Llc | Acceleration of memory access |
WO2015164639A1 (en) | 2014-04-23 | 2015-10-29 | Ip Reservoir, Llc | Method and apparatus for accelerated data translation |
US9977422B2 (en) * | 2014-07-28 | 2018-05-22 | Computational Systems, Inc. | Intelligent configuration of a user interface of a machinery health monitoring system |
US10942943B2 (en) | 2015-10-29 | 2021-03-09 | Ip Reservoir, Llc | Dynamic field data translation to support high performance stream data processing |
JP2017135698A (en) * | 2015-12-29 | 2017-08-03 | 株式会社半導体エネルギー研究所 | Semiconductor device, computer, and electronic device |
CN108701029A (en) * | 2016-02-29 | 2018-10-23 | 奥林巴斯株式会社 | Image processing apparatus |
EP3560135A4 (en) | 2016-12-22 | 2020-08-05 | IP Reservoir, LLC | Pipelines for hardware-accelerated machine learning |
JP6781089B2 (en) | 2017-03-28 | 2020-11-04 | 日立オートモティブシステムズ株式会社 | Electronic control device, electronic control system, control method of electronic control device |
GB2570729B (en) * | 2018-02-06 | 2022-04-06 | Xmos Ltd | Processing system |
IT202100020033A1 (en) * | 2021-07-27 | 2023-01-27 | Carmelo Ferrante | INTERFACING SYSTEM BETWEEN TWO ELECTRONIC CONTROLLED DEVICES AND ELECTRONIC CONTROL UNIT INCLUDING SUCH INTERFACING SYSTEM |
Family Cites Families (52)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4703475A (en) * | 1985-12-04 | 1987-10-27 | American Telephone And Telegraph Company At&T Bell Laboratories | Data communication method and apparatus using multiple physical data links |
US4811214A (en) * | 1986-11-14 | 1989-03-07 | Princeton University | Multinode reconfigurable pipeline computer |
US4914653A (en) * | 1986-12-22 | 1990-04-03 | American Telephone And Telegraph Company | Inter-processor communication protocol |
US4956771A (en) * | 1988-05-24 | 1990-09-11 | Prime Computer, Inc. | Method for inter-processor data transfer |
JP2522048B2 (en) * | 1989-05-15 | 1996-08-07 | 三菱電機株式会社 | Microprocessor and data processing device using the same |
JP2858602B2 (en) * | 1991-09-20 | 1999-02-17 | 三菱重工業株式会社 | Pipeline operation circuit |
US5283883A (en) * | 1991-10-17 | 1994-02-01 | Sun Microsystems, Inc. | Method and direct memory access controller for asynchronously reading/writing data from/to a memory with improved throughput |
US5268962A (en) * | 1992-07-21 | 1993-12-07 | Digital Equipment Corporation | Computer network with modified host-to-host encryption keys |
US5440687A (en) * | 1993-01-29 | 1995-08-08 | International Business Machines Corporation | Communication protocol for handling arbitrarily varying data strides in a distributed processing environment |
JPH06282432A (en) * | 1993-03-26 | 1994-10-07 | Olympus Optical Co Ltd | Arithmetic processor |
US5583964A (en) * | 1994-05-02 | 1996-12-10 | Motorola, Inc. | Computer utilizing neural network and method of using same |
US5568614A (en) * | 1994-07-29 | 1996-10-22 | International Business Machines Corporation | Data streaming between peer subsystems of a computer system |
US5692183A (en) * | 1995-03-31 | 1997-11-25 | Sun Microsystems, Inc. | Methods and apparatus for providing transparent persistence in a distributed object operating environment |
JP2987308B2 (en) * | 1995-04-28 | 1999-12-06 | 松下電器産業株式会社 | Information processing device |
US5748912A (en) * | 1995-06-13 | 1998-05-05 | Advanced Micro Devices, Inc. | User-removable central processing unit card for an electrical device |
US5752071A (en) * | 1995-07-17 | 1998-05-12 | Intel Corporation | Function coprocessor |
JP3156562B2 (en) * | 1995-10-19 | 2001-04-16 | 株式会社デンソー | Vehicle communication device and traveling vehicle monitoring system |
US5784636A (en) * | 1996-05-28 | 1998-07-21 | National Semiconductor Corporation | Reconfigurable computer architecture for use in signal processing applications |
JPH1084339A (en) * | 1996-09-06 | 1998-03-31 | Nippon Telegr & Teleph Corp <Ntt> | Communication method for stream cryptograph and communication system |
US5892962A (en) * | 1996-11-12 | 1999-04-06 | Lucent Technologies Inc. | FPGA-based processor |
JPH10304184A (en) * | 1997-05-02 | 1998-11-13 | Fuji Xerox Co Ltd | Image processor and image processing method |
DE19724072C2 (en) * | 1997-06-07 | 1999-04-01 | Deutsche Telekom Ag | Device for carrying out a block encryption process |
JP3489608B2 (en) * | 1997-06-20 | 2004-01-26 | 富士ゼロックス株式会社 | Programmable logic circuit system and method for reconfiguring programmable logic circuit device |
US6216191B1 (en) * | 1997-10-15 | 2001-04-10 | Lucent Technologies Inc. | Field programmable gate array having a dedicated processor interface |
JPH11120156A (en) * | 1997-10-17 | 1999-04-30 | Nec Corp | Data communication system in multiprocessor system |
US6076152A (en) * | 1997-12-17 | 2000-06-13 | Src Computers, Inc. | Multiprocessor computer architecture incorporating a plurality of memory algorithm processors in the memory subsystem |
US6049222A (en) * | 1997-12-30 | 2000-04-11 | Xilinx, Inc | Configuring an FPGA using embedded memory |
DE69919059T2 (en) * | 1998-02-04 | 2005-01-27 | Texas Instruments Inc., Dallas | Data processing system with a digital signal processor and a coprocessor and data processing method |
JPH11271404A (en) * | 1998-03-23 | 1999-10-08 | Nippon Telegr & Teleph Corp <Ntt> | Method and apparatus for self-test in circuit reconstitutable by program |
US6282627B1 (en) * | 1998-06-29 | 2001-08-28 | Chameleon Systems, Inc. | Integrated processor and programmable data path chip for reconfigurable computing |
JP2000090237A (en) * | 1998-09-10 | 2000-03-31 | Fuji Xerox Co Ltd | Plotting processor |
SE9902373D0 (en) * | 1998-11-16 | 1999-06-22 | Ericsson Telefon Ab L M | A processing system and method |
JP2000278116A (en) * | 1999-03-19 | 2000-10-06 | Matsushita Electric Ind Co Ltd | Configuration interface for fpga |
JP2000295613A (en) * | 1999-04-09 | 2000-10-20 | Nippon Telegr & Teleph Corp <Ntt> | Method and device for image coding using reconfigurable hardware and program recording medium for image coding |
JP2000311156A (en) * | 1999-04-27 | 2000-11-07 | Mitsubishi Electric Corp | Reconfigurable parallel computer |
US6308311B1 (en) * | 1999-05-14 | 2001-10-23 | Xilinx, Inc. | Method for reconfiguring a field programmable gate array from a host |
EP1061438A1 (en) * | 1999-06-15 | 2000-12-20 | Hewlett-Packard Company | Computer architecture containing processor and coprocessor |
US20030014627A1 (en) * | 1999-07-08 | 2003-01-16 | Broadcom Corporation | Distributed processing in a cryptography acceleration chip |
JP3442320B2 (en) * | 1999-08-11 | 2003-09-02 | 日本電信電話株式会社 | Communication system switching radio terminal and communication system switching method |
US6526430B1 (en) * | 1999-10-04 | 2003-02-25 | Texas Instruments Incorporated | Reconfigurable SIMD coprocessor architecture for sum of absolute differences and symmetric filtering (scalable MAC engine for image processing) |
US6326806B1 (en) * | 2000-03-29 | 2001-12-04 | Xilinx, Inc. | FPGA-based communications access point and system for reconfiguration |
JP3832557B2 (en) * | 2000-05-02 | 2006-10-11 | 富士ゼロックス株式会社 | Circuit reconfiguration method and information processing system for programmable logic circuit |
US6982976B2 (en) * | 2000-08-11 | 2006-01-03 | Texas Instruments Incorporated | Datapipe routing bridge |
US7196710B1 (en) * | 2000-08-23 | 2007-03-27 | Nintendo Co., Ltd. | Method and apparatus for buffering graphics data in a graphics system |
JP2002207078A (en) * | 2001-01-10 | 2002-07-26 | Ysd:Kk | Apparatus for processing radar signal |
JPWO2002057921A1 (en) * | 2001-01-19 | 2004-07-22 | 株式会社日立製作所 | Electronic circuit device |
US6657632B2 (en) * | 2001-01-24 | 2003-12-02 | Hewlett-Packard Development Company, L.P. | Unified memory distributed across multiple nodes in a computer graphics system |
JP2002269063A (en) * | 2001-03-07 | 2002-09-20 | Toshiba Corp | Massaging program, messaging method of distributed system, and messaging system |
JP3873639B2 (en) * | 2001-03-12 | 2007-01-24 | 株式会社日立製作所 | Network connection device |
JP2002281079A (en) * | 2001-03-21 | 2002-09-27 | Victor Co Of Japan Ltd | Image data transmitting device |
JP2006515941A (en) * | 2002-10-31 | 2006-06-08 | ロックヒード マーティン コーポレーション | Pipeline accelerator having multiple pipeline units, associated computing machine, and method |
US7373528B2 (en) * | 2004-11-24 | 2008-05-13 | Cisco Technology, Inc. | Increased power for power over Ethernet applications |
-
2003
- 2003-10-31 JP JP2005502222A patent/JP2006515941A/en active Pending
- 2003-10-31 KR KR1020057007751A patent/KR101012745B1/en active IP Right Grant
- 2003-10-31 WO PCT/US2003/034555 patent/WO2004042561A2/en active Application Filing
- 2003-10-31 WO PCT/US2003/034559 patent/WO2004042574A2/en active Application Filing
- 2003-10-31 JP JP2005502225A patent/JP2006518058A/en active Pending
- 2003-10-31 AU AU2003287320A patent/AU2003287320B2/en not_active Ceased
- 2003-10-31 EP EP03781550A patent/EP1573514A2/en not_active Ceased
- 2003-10-31 CA CA002503617A patent/CA2503617A1/en not_active Abandoned
- 2003-10-31 AU AU2003287318A patent/AU2003287318B2/en not_active Ceased
- 2003-10-31 DE DE60318105T patent/DE60318105T2/en not_active Expired - Lifetime
- 2003-10-31 CA CA2503613A patent/CA2503613C/en not_active Expired - Fee Related
- 2003-10-31 AU AU2003287321A patent/AU2003287321B2/en not_active Ceased
- 2003-10-31 AU AU2003287317A patent/AU2003287317B2/en not_active Ceased
- 2003-10-31 KR KR1020057007752A patent/KR100996917B1/en not_active IP Right Cessation
- 2003-10-31 EP EP03781553A patent/EP1573515A2/en not_active Withdrawn
- 2003-10-31 KR KR1020057007750A patent/KR101012744B1/en active IP Right Grant
- 2003-10-31 JP JP2005502226A patent/JP2006518495A/en active Pending
- 2003-10-31 EP EP03781554A patent/EP1559005A2/en not_active Withdrawn
- 2003-10-31 WO PCT/US2003/034556 patent/WO2004042569A2/en active Application Filing
- 2003-10-31 EP EP03781552A patent/EP1570344B1/en not_active Expired - Lifetime
- 2003-10-31 ES ES03781552T patent/ES2300633T3/en not_active Expired - Lifetime
- 2003-10-31 JP JP2005502223A patent/JP2006518056A/en active Pending
- 2003-10-31 KR KR1020057007748A patent/KR101035646B1/en not_active IP Right Cessation
- 2003-10-31 WO PCT/US2003/034557 patent/WO2004042560A2/en active IP Right Grant
- 2003-10-31 CA CA2503611A patent/CA2503611C/en not_active Expired - Fee Related
- 2003-10-31 CA CA002503620A patent/CA2503620A1/en not_active Abandoned
- 2003-10-31 CA CA2503622A patent/CA2503622C/en not_active Expired - Fee Related
- 2003-10-31 AU AU2003287319A patent/AU2003287319B2/en not_active Ceased
- 2003-10-31 JP JP2005502224A patent/JP2006518057A/en active Pending
- 2003-10-31 EP EP03781551A patent/EP1576471A2/en not_active Ceased
- 2003-10-31 KR KR1020057007749A patent/KR101062214B1/en not_active IP Right Cessation
-
2011
- 2011-03-28 JP JP2011070196A patent/JP5568502B2/en not_active Expired - Fee Related
- 2011-03-29 JP JP2011071988A patent/JP2011170868A/en active Pending
- 2011-04-01 JP JP2011081733A patent/JP2011175655A/en active Pending
- 2011-04-05 JP JP2011083371A patent/JP2011154711A/en active Pending
Non-Patent Citations (2)
Title |
---|
RICHARD R. ECKERT: "Microprogrammed versus hardwired control units: how computers really work", SIGCSE BULLETIN., vol. 20, no. 3, 1 September 1988 (1988-09-01), US, pages 13 - 22, XP055336753, ISSN: 0097-8418, DOI: 10.1145/51594.51598 * |
See also references of WO2004042562A3 * |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7987341B2 (en) | 2002-10-31 | 2011-07-26 | Lockheed Martin Corporation | Computing machine using software objects for transferring data that includes no destination information |
US8250341B2 (en) | 2002-10-31 | 2012-08-21 | Lockheed Martin Corporation | Pipeline accelerator having multiple pipeline units and related computing machine and method |
US7676649B2 (en) | 2004-10-01 | 2010-03-09 | Lockheed Martin Corporation | Computing machine with redundancy and related systems and methods |
US7809982B2 (en) | 2004-10-01 | 2010-10-05 | Lockheed Martin Corporation | Reconfigurable computing machine and related systems and methods |
US8073974B2 (en) | 2004-10-01 | 2011-12-06 | Lockheed Martin Corporation | Object oriented mission framework and system and method |
Also Published As
Similar Documents
Publication | Publication Date | Title |
---|---|---|
AU2003287320B2 (en) | Pipeline accelerator and related system and method | |
US20040136241A1 (en) | Pipeline accelerator for improved computing architecture and related system and method | |
WO2004042562A2 (en) | Pipeline accelerator and related system and method | |
US7487302B2 (en) | Service layer architecture for memory access system and method | |
CN116324741A (en) | Method and apparatus for configurable hardware accelerator |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
PUAK | Availability of information related to the publication of the international search report |
Free format text: ORIGINAL CODE: 0009015 |
|
AK | Designated contracting states |
Kind code of ref document: A2 Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LI LU MC NL PT RO SE SI SK TR |
|
AX | Request for extension of the european patent |
Extension state: AL LT LV MK |
|
AK | Designated contracting states |
Kind code of ref document: A3 Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LI LU MC NL PT RO SE SI SK TR |
|
AX | Request for extension of the european patent |
Extension state: AL LT LV MK |
|
RIC1 | Information provided on ipc code assigned before grant |
Ipc: 7G 06F 9/46 A Ipc: 7G 06F 15/78 B Ipc: 7G 06F 9/38 B |
|
DAX | Request for extension of the european patent (deleted) | ||
RBV | Designated contracting states (corrected) |
Designated state(s): DE ES FR GB |
|
17P | Request for examination filed |
Effective date: 20060202 |
|
RBV | Designated contracting states (corrected) |
Designated state(s): DE ES FR GB |
|
RIN1 | Information on inventor provided before grant (corrected) |
Inventor name: CHERASARO, TROY Inventor name: JACKSON, LARRY Inventor name: RAPP, JOHN, W. Inventor name: JONES, MARK |
|
17Q | First examination report despatched |
Effective date: 20090910 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN |
|
18W | Application withdrawn |
Effective date: 20170906 |