A parallel, multi-threaded processor system and technique for arbitrating command requests is described. The system includes a plurality of microengines, a plurality of shared system resources and a global command arbiter. The global command arbiter uses a command request protocol that is based on the shared system resources and command type to grant or deny a microengine command request for a shared resource.

Patent: RE41849
Priority: Dec 22, 1999
Filed: Jun 22, 2005
Issued: Oct 19, 2010
Expiry: Dec 22, 2019
Terminal disclaimer

0. 18. A method comprising:
identifying a last programmable unit of multiple multi-threaded programmable units within an integrated circuit to have a request granted; and
based, at least in part, on the identifying of the last programmable unit of the multiple multi-threaded programmable units within the integrated circuit to have a request granted, selecting a different one of the multiple multi-threaded programmable units within the integrated circuit to have a next request granted.
0. 34. A communications system comprising:
at least one ethernet medium access controller (MAC);
a multithreaded processor, the processor including:
a plurality of microengines for processing a plurality of hardware threads;
at least one of an asb translator, a pci bus interface, a sdram controller, a sram controller, and a bus interface to the ethernet MAC; and
a pointer to store an identity of a last agent that had a request granted, the system configured to determine whether a particular command request should be granted.
10. A communications system comprising:
at least one ethernet medium access controller (MAC);
a multithreaded processor, the processor including:
a plurality of microengines for processing a plurality of hardware threads;
at least one of an asb translator, a pci bus interface, a sdram controller, a sram controller, and a bus interface to the ethernet MAC; and
a global command arbiter including a pointer to store the identity of a last agent that had a request granted to determine whether a particular command request should be granted.
0. 25. A method for using a parallel, multi-threaded processor system comprising:
processing threads with a plurality of microengines, at least one microengine capable of processing at least two independent threads;
processing commands issued by the microengines using a plurality of system resource interface units that each include at least one commands queue; and
storing an identity of a last agent that had a request granted to determine whether a particular microengine command request should be granted, wherein a pointer is included to store the identity.
1. A method for using a parallel, multi-threaded processor system comprising:
processing threads with a plurality of microengines, at least one microengine capable of processing at least two independent threads;
processing commands issued by the microengines using a plurality of system resource interface units that each include at least one commands queue; and
utilizing a global command arbiter including a pointer to store the identity of a last agent that had a request granted to determine whether a particular microengine command request should be granted.
0. 21. An integrated circuit, comprising:
multiple multi-threaded programmable units in the integrated circuit; and
logic, communicatively coupled to the multiple multi-threaded programmable units, to:
identify a last programmable unit of the multiple multi-threaded programmable units within the integrated circuit to have a request granted; and
based, at least in part, on the identified last programmable unit of the multiple multi-threaded programmable units within the integrated circuit to have a request granted, select one of the multiple multi-threaded programmable units within the integrated circuit to have a next request granted.
2. The method of claim 1 wherein each microengine utilizes a FIFO commands register.
3. The method of claim 1 wherein the system resource units include at least one of a core controller, a sdram controller, a sram controller, a pci bus interface and an FBUS interface.
4. The method of claim 3 wherein at least one of the sdram controller, the sram controller and the FBUS interface utilizes three command queues.
5. The method of claim 3 wherein at least one of the sdram controller and the sram controller utilizes a high priority queue.
6. The method of claim 3 wherein the sram controller utilizes a read lock fail queue.
7. The method of claim 3 wherein the pci bus interface utilizes a single command register.
8. The method of claim 1, wherein the agent comprises at least one of the following: a microengine and a microengine thread.
9. The method of claim 1, wherein the threads comprise at least one thread that operates on a packet.
11. The system of claim 10 further comprising a FIFO commands register for each microengine.
12. The system of claim 10 wherein at least one of the sdram controller, the sram controller and the FBUS interface includes three command queues.
13. The system of claim 10 wherein at least one of the sdram controller and the sram controller includes a high priority queue.
14. The system of claim 10 wherein the sram controller includes a read lock fail queue.
15. The system of claim 10 wherein the pci bus interface includes a single command register.
16. The system of claim 10, wherein the agent comprises at least one of the following: a microengine and a microengine thread.
17. The system of claim 10, wherein the threads comprise at least one thread that operates on a packet received via the at least one ethernet MAC.
0. 19. The method of claim 18, wherein the multiple multi-threaded programmable units within the integrated circuit are associated with a sequence of the multiple multi-threaded programmable units within the integrated circuit; and wherein selecting the one of the multiple multi-threaded programmable units within the integrated circuit to have a next request granted comprises selecting a next one of the multiple multi-threaded programmable units within the integrated circuit in the sequence.
0. 20. The method of claim 18, further comprising:
selecting a memory access operation issued by the selected one of the multiple multi-threaded programmable units within the integrated circuit.
0. 22. The integrated circuit of claim 21, wherein the multiple multi-threaded programmable units within the integrated circuit are associated with a sequence of the multiple multi-threaded programmable units; and wherein the logic to select the one of the multiple multi-threaded programmable units within the integrated circuit to have a next request granted comprises logic to select a next one of the multiple multi-threaded programmable units in the sequence.
0. 23. The integrated circuit of claim 21, wherein the logic comprises an arbiter coupled to the multiple multi-threaded programmable units and to a memory controller to a memory shared by the multiple multi-threaded programmable units.
0. 24. The integrated circuit of claim 21, wherein the logic further comprises logic to:
select a memory access operation issued by the selected one of the multiple multi-threaded programmable units within the integrated circuit.
0. 26. The method of claim 25, wherein each microengine utilizes a FIFO commands register.
0. 27. The method of claim 25, wherein the system resource units include at least one of a core controller, a sdram controller, a sram controller, a pci bus interface and an FBUS interface.
0. 28. The method of claim 27, wherein at least one of the sdram controller, the sram controller and the FBUS interface utilizes three command queues.
0. 29. The method of claim 27, wherein at least one of the sdram controller and the sram controller utilizes a high priority queue.
0. 30. The method of claim 27, wherein the sram controller utilizes a read lock fail queue.
0. 31. The method of claim 27, wherein the pci bus interface utilizes a single command register.
0. 32. The method of claim 25, wherein the agent comprises at least one of the following: a microengine and a microengine thread.
0. 33. The method of claim 25, wherein the threads comprise at least one thread that operates on a packet.
0. 35. The system of claim 34 further comprising a FIFO commands register for each microengine.
0. 36. The system of claim 34 wherein at least one of the sdram controller, the sram controller and the FBUS interface includes three command queues.
0. 37. The system of claim 34 wherein at least one of the sdram controller and the sram controller includes a high priority queue.
0. 38. The system of claim 34 wherein the sram controller includes a read lock fail queue.
0. 39. The system of claim 34 wherein the pci bus interface includes a single command register.
0. 40. The system of claim 34, wherein the agent comprises at least one of the following: a microengine and a microengine thread.
0. 41. The system of claim 34, wherein the threads comprise at least one thread that operates on a packet received via the at least one ethernet MAC.

This application is a continuation of U.S. application Ser. No. 09/470,541 filed on Dec. 22, 1999, now U.S. Pat. No. 6,532,509.

This invention relates to a protocol for providing parallel, multi-threaded processors with high bandwidth access to shared resources.

Parallel processing is an efficient form of computer information processing of concurrent events. Certain problems may be solved by applying parallel computer processing, which demands concurrent execution of many programs to do more than one thing at the same time. Unlike a serial paradigm where all tasks are performed sequentially at a single station, or a pipelined machine where tasks are performed at specialized stations, parallel processing requires that a plurality of stations have the capability to perform all tasks. In general, all or a plurality of the stations work simultaneously and independently on the same or common elements of a problem.

Types of computer processing include single instruction stream, single data stream (SISD), which is the conventional serial von Neumann computer operating on a single stream of instructions. A second processing type is single instruction stream, multiple data streams (SIMD) processing. This processing scheme may include multiple arithmetic-logic processors and a single control processor; each of the arithmetic-logic processors performs operations on the data in lock step and is synchronized by the control processor. A third type is multiple instruction streams, single data stream (MISD) processing, in which the same data stream flows through a linear array of processors executing different instruction streams. A fourth processing type is multiple instruction streams, multiple data streams (MIMD) processing, which uses multiple processors, each executing its own instruction stream to process a data stream fed to each of the processors. MIMD processors may have several instruction processing units and therefore several data streams.

According to an aspect of the present invention, a parallel, hardware-based, multi-threaded processor includes a global command arbiter for determining the allocation of access to system resources. The multi-threaded processor system includes a plurality of microengines, a plurality of shared system resources and a global command arbiter. The global command arbiter uses a command request protocol based on the shared system resources and command type to grant or deny a microengine command request for a shared resource. The processor system may be advantageously realized on an integrated circuit chip with minimal wiring and buffer storage elements.

The technique according to the invention provides each microengine with fair access to the shared system resources based on command priority and resource utilization. Consequently, the microengines have high bandwidth access to the shared system resources.

FIG. 1 is a block diagram of a communication system employing a hardware-based multithreaded processor.

FIG. 2 is a simplified block diagram of a global arbitration system for a multithreaded process according to the invention.

FIGS. 3A and 3B illustrate a flow chart of an implementation of a global command arbitration process according to the invention.

FIG. 1 illustrates a communication system 10 that includes a parallel, hardware-based multithreaded processor 12. The system 10 is especially useful for tasks that can be broken into parallel subtasks or functions, and the hardware-based multithreaded processor 12 is particularly useful for tasks that are bandwidth oriented rather than latency oriented.

The hardware-based multithreaded processor 12 may be an integrated circuit, and may be coupled to a bus such as a PCI bus 14, a memory system 16 and a second bus 18. In the illustrated implementation, the hardware-based multi-threaded processor 12 has multiple microengines 22a to 22f that each includes multiple hardware-controlled threads that can be simultaneously active and that may independently work on a task. The multithreaded processor 12 also includes a central or core controller 20 that assists in loading microcode control for other resources and performs other general purpose computer-type functions such as handling protocols, handling exceptions, and providing extra support for packet processing, which may occur if the microengines pass the packets off for more detailed processing. In one embodiment, the core controller 20 is a Strong Arm® (Arm is a trademark of ARM Limited, United Kingdom) based architecture embedded general-purpose microprocessor, which includes an operating system. The operating system enables the core processor 20 to call functions to operate on the microengines 22a-22f. The core processor 20 can use any supported operating system but preferably utilizes a real time operating system. Suitable operating systems for a core processor implemented as a Strong Arm architecture microprocessor may include Microsoft NT real-time, VXWorks and μCUS, which is a freeware operating system available over the Internet.

The plurality of functional microengines 22a-22f each maintain a plurality of program counters in hardware, and maintain states associated with the program counters. Each of the six microengines 22a-22f is capable of processing four independent hardware threads. Such processing allows one thread to start executing just after another thread issues a memory reference and then waits until that reference completes before doing more work. This behavior is critical to maintaining efficient hardware execution of the microengines because memory latency may be significant. Stated differently, if only single-thread execution were supported, the microengines would sit idle for a significant number of cycles waiting for references to return, thereby reducing overall computational throughput. Multi-threaded execution allows the microengines to mask memory latency by performing useful independent work across several threads. Effectively, a corresponding plurality of sets of threads can be simultaneously active on each of the microengines 22a-22f while only one is actually operating at any one time.

The six microengines 22a-22f operate with shared system resources including the memory system 16, the PCI bus 14 and the FBUS 18. The memory system 16 may be accessed via a Synchronous Dynamic Random Access Memory (SDRAM) controller 26a and a Static Random Access Memory (SRAM) controller 26b. SDRAM memory 16a and SDRAM controller 26a may be typically used for processing large volumes of data or high bandwidth data, such as processing network payloads from network packets. The SRAM controller 26b and SRAM memory 16b may be used in a networking implementation for low latency, fast access tasks or low bandwidth data, such as accessing look-up tables, memory for the core processor 20, and so forth.

The six microengines 22a-22f access either the SDRAM 16a or SRAM 16b based on characteristics of the data. Low latency, low bandwidth data is stored in and fetched from SRAM 16b, whereas higher bandwidth data for which latency is not as important is stored in and fetched from SDRAM 16a. The microengines 22a-22f can execute memory reference instructions to either the SDRAM controller 26a or SRAM controller 26b.

Advantages of hardware multithreading can be explained in the context of SRAM or SDRAM memory accesses. For example, an SRAM access requested by a Thread_0 from a microengine will cause the SRAM controller 26b to initiate an access to the SRAM memory 16b. The SRAM controller 26b controls arbitration for the SRAM bus 15, accesses the SRAM 16b, fetches the data from the SRAM 16b, and returns data to a requesting microengine 22a-22f. During a SRAM access, if the microengine 22a had only a single thread that could operate, that microengine would be dormant until data was returned from the SRAM. By employing hardware context swapping within each of the microengines 22a-22f, another thread such as Thread_1 can function while the first thread, Thread_0, is awaiting the read data to return. Hardware context swapping enables other contexts with unique program counters to execute in that same microengine. Continuing the example, during execution Thread_1 may access the SDRAM memory 16a. While Thread_1 operates on the SDRAM unit, and Thread_0 is operating on the SRAM unit, a new thread such as Thread_2 can now operate in the microengine 22a. Thread_2 can operate for a certain amount of time until it needs to access memory or perform some other long latency operation, such as making an access to a bus interface. Therefore, the processor 12 can simultaneously perform a bus operation, SRAM operation and SDRAM operation, with all being completed or operated upon by one microengine 22a, and that microengine 22a still has one more thread available to process more work in the data path.
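
The context swapping behavior described above can be illustrated by the following sketch in C. It is a toy model only: the structure fields, the function name, and the round-robin choice of the next ready context are illustrative assumptions, not the actual microengine hardware logic.

    #include <stdbool.h>

    #define THREADS_PER_MICROENGINE 4

    struct thread_ctx {
        unsigned pc;      /* per-thread program counter kept in hardware   */
        bool     waiting; /* true while an issued reference is outstanding */
    };

    /* Called when the running thread issues a long-latency reference: mark it
     * waiting and return the index of the next ready context, or the current
     * index if every context is waiting (the microengine then stalls). */
    int context_swap(struct thread_ctx ctx[THREADS_PER_MICROENGINE], int current)
    {
        ctx[current].waiting = true;
        for (int step = 1; step < THREADS_PER_MICROENGINE; step++) {
            int next = (current + step) % THREADS_PER_MICROENGINE;
            if (!ctx[next].waiting)
                return next;
        }
        return current;
    }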

The hardware context swapping also synchronizes completion of tasks. For example, it is possible that two threads could hit the same shared resource such as the SRAM 16b. Each of the separate functional units, such as the FBUS interface 28, the SDRAM controller 26a, and the SRAM controller 26b, reports back a flag signaling completion of an operation when a requested task from one of the microengine thread contexts is completed. When the flag is received by the microengine, the microengine can determine which thread to turn on.

The processor 12 includes a bus interface 28 that couples the processor to a second bus 18. In an implementation, an FBUS interface 28 couples the processor 12 to the so-called FBUS 18 (FIFO bus). The FBUS is a 64-bit wide FIFO bus, used to interface to Media Access Controller (MAC) devices. The FBUS interface 28 is responsible for controlling and interfacing the processor 12 to the FBUS 18.

The processor 12 also includes a PCI bus interface 24 that couples other system components that reside on the PCI bus 14 to the processor 12. The PCI bus interface 24 also provides a high-speed data path 24a to the SDRAM memory 16a. The data path 24a permits data to be moved quickly from the SDRAM 16a to the PCI bus 14, via direct memory access (DMA) transfers. The hardware based multithreaded processor 12 can employ a plurality of DMA channels so if one target of a DMA transfer is busy, another one of the DMA channels can take over the PCI bus 14 to deliver information to another target to maintain high processor 12 efficiency. The PCI bus interface 24 supports image transfers, target operations and master operations. Target operations are operations where slave devices on bus 14 access the SDRAM through reads and writes that are serviced as a slave to target operation. In master operations, the processor core 20 sends data directly to or receives data directly from the PCI interface 24.

Each of the functional units of the processor 12 are coupled to one or more internal buses. In an implementation, the internal buses are dual 32-bit buses (i.e., one bus for read and one for write). The multithreaded processor 12 also is constructed such that the sum of the bandwidths of the internal buses exceeds the bandwidth of external buses coupled to the processor 12. The internal core processor bus 32 may be an Advanced System Bus (ASB bus) that couples the processor core 20 to the memory controllers 26a and 26b and to an ASB translator 30. The ASB bus is a subset of an “AMBA” bus that is used with the Strong Arm processor core. The processor 12 also includes a private bus 34 that couples the microengine units to SRAM controller 26b, ASB translator 30 and FBUS interface 28. A memory bus 38 couples the SDRAM controller 26a, the PCI bus interface 24, the FBUS interface 28 and memory system 16 together, including Flash ROM 16c which is used for boot operations and the like.

The hardware-based multithreaded processor 12 may be utilized as a network processor. As a network processor, the hardware-based multithreaded processor 12 interfaces to network devices such as a media access controller (MAC) device, for example a 10/100BaseT Octal MAC 13a or a Gigabit Ethernet device 13b. In general, the hardware-based multi-threaded processor 12 can interface to any type of communication device or interface that receives/sends large amounts of data. The communication system 10 functioning in a networking application could receive a plurality of network packets from the devices 13a, 13b and process each of those packets independently in a parallel manner.

The processor 12 may also be utilized as a print engine for a postscript processor, as a processor for a storage subsystem such as RAID disk storage, or as a matching engine. In the securities industry for example, the advent of electronic trading requires the use of electronic matching engines to match orders between buyers and sellers. These and other parallel types of tasks can be accomplished on the system 10.

FIG. 2 shows a global arbitration system 40 for use with the multithreaded processor 12 of FIG. 1. A global command arbiter 42 is connected to each of the microengines 22a-22f, to the SDRAM controller 26a, to the SRAM controller 26b, to the interface 28 and to the PCI interface 24. The global command arbiter 42 functions to provide high bandwidth access to the shared system resources utilizing a minimal amount of buffer storage elements and minimal wiring. The global command arbiter provides each microengine 22a-22f with fair access to the common system resources of the SDRAM, SRAM, PCI interface registers and FBUS interface registers based on command priority and resource utilization, which is explained below.

In an implementation, each microengine 22a-22f has a two-command deep first-in, first-out (FIFO) register for issuing command requests for SDRAM 16a and SRAM 16b memory access, and for issuing command requests for access to registers in the PCI interface 24 and FBUS interface 28. The SDRAM controller 26a queues commands from the microengines in one of four FIFO command queue structures: an eight-entry high-priority queue 44, a sixteen-entry odd bank queue 46, a sixteen-entry even bank queue 48, and a twenty-four entry maintain order queue 50. A single physical random access memory (RAM) structure with four input pointers and four output pointers may be used to implement the SDRAM queues 44, 46, 48, 50. A reference request from a microengine may include a set bit called the "optimized MEM bit"; a request with this bit set will be sorted into either the odd bank queue 46 or the even bank queue 48. If the memory reference request does not have the memory optimization bit set, the default is to go into the order queue 50. The order queue 50 maintains the order of reference requests from the microengines 22a-22f. With a series of odd and even bank references, it may be required that a completion signal be returned for both the odd and even banks. If the microengine 22f sorts the memory references into odd bank and even bank references and one of the banks, for example the even bank, is drained of memory references before the odd bank but the completion signal is asserted on the last even reference, the SDRAM controller 26a could conceivably signal back to a microengine that the memory request had completed even though the odd bank reference had not yet been serviced. This occurrence could cause a coherency problem. The situation is avoided by the order queue 50, which permits a microengine to have multiple memory references outstanding, of which only the last needs to signal a completion.
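
The queue selection just described may be summarized by the following sketch in C, assuming a request record carrying an optimized MEM bit and a high-priority indication. The field names and the bank decode shown here (a single low-order address bit) are illustrative assumptions rather than details taken from the controller.

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical encoding of a microengine memory reference. */
    struct sdram_ref {
        uint32_t address;        /* SDRAM address of the reference            */
        bool     optimized_mem;  /* the "optimized MEM bit"                   */
        bool     high_priority;  /* route directly to the high priority queue */
    };

    enum sdram_queue { Q_HIGH_PRIORITY, Q_EVEN_BANK, Q_ODD_BANK, Q_ORDER };

    /* Choose the destination queue for an incoming reference. */
    enum sdram_queue sdram_select_queue(const struct sdram_ref *ref)
    {
        if (ref->high_priority)
            return Q_HIGH_PRIORITY;
        if (!ref->optimized_mem)
            return Q_ORDER;                 /* default: preserve request order */
        return (ref->address & 0x1) ? Q_ODD_BANK : Q_EVEN_BANK;
    }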

The SDRAM controller 26a also includes a high priority queue 44. If an incoming memory reference from one of the microengines goes directly to the high priority queue, then it is operated upon at a higher priority than the memory references in the other queues.

A feature of the SDRAM controller 26a is that when a memory reference is stored in the queues, in addition to the optimized MEM bit that may be set, a “chaining bit” may be set to require special handling of contiguous memory references. A microengine context may issue chained memory references when the second and/or third reference of the chain must be scheduled by the SDRAM controller 26a immediately after the initial chained memory request. The global command arbiter 42 must ensure that chained references are delivered to consecutive locations of the same SDRAM controller queue.

The SRAM controller 26b also has four command queues: an eight-entry high priority queue 62, a sixteen-entry read queue 64, a sixteen-entry write order queue 66 and a twenty-four entry read-lock fail queue 68. A single physical RAM structure may be used to implement the four queues. The SRAM controller 26b is optimized based on the type of memory operation; i.e., a read or a write operation, and the predominant function that the SRAM performs is read operations.

The read lock fail queue 68 is used to hold read memory reference requests that fail because of a lock existing on a portion of memory. That is, one of the microengines issues a memory request that has a read lock request that is processed in an address and control queue. The memory request is placed in either the write order queue 66 or the read queue 64 and is recognized there as a read lock request. The SRAM controller 26b will access a lock lookup device to determine whether this memory location is already locked. If this memory location is locked from any prior read lock request, then this memory lock request will fail and will be stored in the read lock fail queue 68. If it is unlocked, or if the lock lookup device shows no lock on that address, then the address of that memory reference will be used by the SRAM controller 26b to perform a traditional SRAM address read/write request to SRAM memory 16b. A command controller and address generator will also enter the lock into the lock lookup device so that subsequent read lock requests will find the memory location locked. A memory location is unlocked by clearing a valid bit in a content addressable memory (CAM) of the SRAM controller. After an unlock, the read lock fail queue 68 becomes the highest priority queue, giving all queued read lock misses a chance to issue a memory lock request. The read lock fail queue is loaded by the SRAM controller itself and not directly from a microengine output buffer. The global arbiter 42 ensures that a command from a microengine to an SRAM queue is not selected on the same cycle that the SRAM controller must write a read-lock miss entry.
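
The read-lock handling described above can be sketched as follows in C. The CAM size, the helper names (enqueue_read_lock_fail, do_sram_read), and the linear CAM search are illustrative assumptions; only the overall behavior (park a locked request in the read lock fail queue, otherwise perform the read and record the lock) follows the description.

    #include <stdbool.h>
    #include <stdint.h>

    #define LOCK_CAM_ENTRIES 16   /* illustrative size; not given in the text */

    /* Minimal stand-in for the SRAM controller's lock lookup device (CAM). */
    struct lock_cam {
        uint32_t addr[LOCK_CAM_ENTRIES];
        bool     valid[LOCK_CAM_ENTRIES];
    };

    /* Placeholders for controller actions described in the text. */
    static void enqueue_read_lock_fail(uint32_t addr) { (void)addr; }
    static void do_sram_read(uint32_t addr)           { (void)addr; }

    static bool cam_is_locked(const struct lock_cam *cam, uint32_t addr)
    {
        for (int i = 0; i < LOCK_CAM_ENTRIES; i++)
            if (cam->valid[i] && cam->addr[i] == addr)
                return true;
        return false;
    }

    /* If the location is already locked, the request is parked in the read lock
     * fail queue and retried after an unlock; otherwise the read proceeds and
     * the lock is recorded in the CAM. */
    void handle_read_lock(struct lock_cam *cam, uint32_t addr)
    {
        if (cam_is_locked(cam, addr)) {
            enqueue_read_lock_fail(addr);
            return;
        }
        do_sram_read(addr);
        for (int i = 0; i < LOCK_CAM_ENTRIES; i++) {
            if (!cam->valid[i]) {           /* record the new lock */
                cam->addr[i] = addr;
                cam->valid[i] = true;
                break;
            }
        }
    }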

The FBUS interface 28 includes three command queues: an eight-entry push queue 72, an eight-entry pull queue 74 and an eight-entry hash queue 76. The pull queue is used when data is moved from a microengine to an FBUS interface resource, the push queue is used for reading data from the FBUS interface to a microengine, and the hash queue is used for sending from one to three hash arguments to a polynomial hash unit within the FBUS interface and for getting the hash result returned. The FBUS interface 28 in a network application can perform header processing of incoming packets from the FBUS 18. A key function performed by the FBUS interface 28 is extraction of packet headers, and a hashed lookup of microprogrammable source/destination/protocol in SRAM memory 16b. If the hash does not successfully resolve, then the packet header is subjected to more sophisticated processing.

The PCI bus interface 24 includes a single, two-entry direct memory access (DMA) command register 78. The DMA register provides a completion signal to the initiating microengine thread.

The global command arbiter 42 operates to select commands from the two-deep output command queues of each microengine for transmission to a destination queue in one of the functional units. The functional units include the core controller 20, the PCI interface 24, the SDRAM controller 26a, the SRAM controller 26b, the FBUS interface 28 and the microengines 22a to 22f. Each microengine request to the global command arbiter 42 is a three-bit encoded field that specifies the command type and destination. Each microengine global command arbiter request is serviced with the following priority:

1. SDRAM chained commands
2. SRAM
3. SDRAM
4. FBUS
5. PCI bus

The global arbiter maintains a pointer that indicates the last microengine request granted. If more than one request is present at the same priority, the global command arbiter selects the next higher numbered microengine (with a wrap-around feature). For example, the microengines 22a to 22f may be numbered from 1 to 6 in an implementation so that if a request from microengine 6 was the last one granted, then when priority is not an issue a request from microengine 1 is next up for consideration.
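
A minimal sketch of this round-robin selection, in C, is shown below; the function name and the representation of pending requests as a boolean array are assumptions made for illustration.

    #include <stdbool.h>

    #define NUM_MICROENGINES 6

    /* Round-robin selection among requesting microengines at equal priority.
     * last_granted is the index of the microengine granted most recently;
     * requesting[] flags which microengines currently have a command at the
     * head of their output FIFO.  Returns the index to grant next, or -1 if
     * no microengine is requesting. */
    int round_robin_select(int last_granted, const bool requesting[NUM_MICROENGINES])
    {
        for (int step = 1; step <= NUM_MICROENGINES; step++) {
            int candidate = (last_granted + step) % NUM_MICROENGINES;
            if (requesting[candidate])
                return candidate;
        }
        return -1;
    }

For example, with last_granted = 5 (microengine 6) and requests pending from microengines 1 and 3, the function returns index 0 (microengine 1), reflecting the wrap-around rule described above.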

The three SRAM controller command queues 62, 64 and 66 are loaded directly from microengine commands. Since an SRAM command could be granted every cycle, it is possible that up to six additional SRAM commands have already been granted and are in the pipeline, all of which could be destined for the same SRAM queue, before a signal indicating that the queue is full is received by the global command arbiter. Thus, the SRAM controller asserts an SRAM_queue_full signal to the global command arbiter 42 if there are fewer than seven (7) empty entries in any SRAM command queue loaded from the microengines. For example, if the high priority queue has two entries filled, then the SRAM_queue_full signal is asserted (because eight entries minus two entries is six). Similarly, if the read queue or the order queue contains ten entries, then the SRAM_queue_full signal is asserted. This protocol is followed because a six-cycle minimum latency exists between the assertion of a command request from a microengine and the command actually being stored in a destination queue.
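
The queue-full rule can be expressed as a single comparison, sketched below in C; the function and parameter names are illustrative. The same check applies to the SDRAM and FBUS queues with the thresholds given later in the text.

    #include <stdbool.h>

    /* Assert a queue's "full" signal to the arbiter while fewer than
     * threshold_empty entries remain free.  Per the text, the SRAM and SDRAM
     * queues use a threshold of N = 7 (six microengines plus one); the FBUS
     * queues use F = 5 as described below. */
    bool queue_full_signal(int capacity, int occupied, int threshold_empty)
    {
        return (capacity - occupied) < threshold_empty;
    }

With the example above, queue_full_signal(8, 2, 7) returns true: an eight-entry queue with two entries filled has only six empty entries, which is fewer than seven.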

The following diagram illustrates the timing of a request for a command destined for a queue in a system resource:

Cycle:   1     2     3     4     5     6     7     8     9
        req   arb   gnt   bus   cmd   rcv   full  arb   NOGNT
              req   arb   gnt   bus   cmd   rcv   full  arb
                    req   arb   gnt   bus   cmd   rcv   full
                          req   arb   gnt   bus   cmd   rcv
                                req   arb   gnt   bus   cmd
                                      req   arb   gnt   bus
                                            req   arb   NOGNT

Where: req = bus request from the microengine; arb = global command arbitration; gnt = request granted; bus = command bus enabled; cmd = command driven onto the bus; rcv = command received and queued by the destination unit; full = full status signal driven by the destination queue; NOGNT = no grant (request denied).

Referring to the above timing diagram, in the first cycle a request is sent to the global command arbiter. In cycle two, arbitration is performed, and in cycle three the request is granted to the requesting microengine. In cycle four, a bus is enabled, and in cycle five the command is driven onto the bus. In cycle six, the receiving unit (SDRAM controller, SRAM controller, PCI bus interface or FBUS interface) queues the command. In cycle seven, a full_status_queue signal is driven if necessary (e.g., if that queue contains less than a minimum number of available entry spaces). In cycle eight, the global command arbiter is deciding whether another request should be granted to that system resource, but sees that the full_status_queue signal was generated. The arbiter then denies requests (NOGNT) destined for any queue that asserted a full signal by the seventh cycle.

The FBUS interface 28 has three command queues (pull, hash, push), each of which contains eight (8) entries. Commands to the FBUS interface are not granted in consecutive cycles. Thus, when any of the three FBUS interface queues reaches four (4) filled entries (instead of the two discussed above for an eight-entry SRAM queue), an FBUS_queue_full signal is sent to the global command arbiter, since a maximum of only three commands can be in transit to the FBUS interface queues before the global arbiter detects FBUS_queue_full.

The SDRAM controller 26a has four command queues (high=8, even=16, odd=16, order=24). The threshold for asserting SDRAM_queue_full is the same as for the SRAM, i.e., fewer than seven entries available in any queue. However, commands to the SDRAM controller are not granted on consecutive cycles. This ensures queue entry space for any SDRAM chained commands from a particular microengine, which must be granted even after SDRAM_queue_full asserts. It is necessary to always transfer SDRAM chained commands to avoid a live-lock condition, in which the SDRAM controller is waiting for the chained command in one queue while the command is "stuck" in a microengine because the global arbiter is no longer granting SDRAM commands since a different SDRAM queue is "full." As a coding restriction, the chain length of SDRAM commands is limited to three. In addition, when a chained SDRAM command is granted to a microengine, the next SDRAM command to be granted must also come from the same microengine so that the paired commands arrive in the selected SDRAM queue contiguously.
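
The grant gating for SDRAM commands described above, including the consecutive-cycle restriction and the special treatment of chained commands, might be tracked with state along the lines of the following C sketch; the structure fields and their exact interaction are assumptions made for illustration.

    #include <stdbool.h>

    /* Per-cycle gating of SDRAM grants: no grants on consecutive cycles except
     * to complete a chain, and an open chain forces the next SDRAM grant to
     * come from the microengine that started it, regardless of the full signal. */
    struct sdram_grant_state {
        bool granted_last_cycle;  /* an SDRAM command was granted last cycle  */
        bool chain_open;          /* a chained command's successor is pending */
        int  chain_owner;         /* microengine that opened the chain        */
    };

    bool may_grant_sdram(const struct sdram_grant_state *s, int requester,
                         bool queue_full)
    {
        if (s->chain_open)
            return requester == s->chain_owner;  /* chained commands always go  */
        if (s->granted_last_cycle)
            return false;                        /* no consecutive plain grants */
        return !queue_full;                      /* otherwise honor full signal */
    }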

The restrictions of not sending commands to the FBUS on consecutive cycles, and not sending commands to the SDRAM on consecutive cycles do not degrade system performance since each command requires many cycles to actually execute. The restriction is not placed on SRAM commands since the SRAM queue sizing is more than adequate, and more SRAM references requiring fewer cycles with lower latency are issued in most applications.

FIGS. 3A and 3B illustrate an implementation of a global command arbiter protocol process 100. The global command arbiter reviews 102 the command requests in the FIFO registers of the microengines 22a-22f. If all of the requests have the same priority 104, a pointer is checked 106 to determine the identity of the last microengine that had a request granted, and then the request of the next higher microengine is considered. Before granting the command request, the arbiter checks 108 to see if a queue_full_signal has been asserted. If so, the command request is denied 110 and the pointer is incremented 111 so that the next microengine's request will be considered. However, if no queue_full_signal has been asserted, then the command request is granted 112 and the flow returns to 102.

Referring again to step 104 of FIG. 3A, if the command requests in the microengines 22a to 22f have different priorities, then the global command arbiter checks 114 to see if a SDRAM request with a chained bit set has been granted previously. If so, then the SDRAM request from the same microengine that sent the previous SDRAM request with a chained bit is granted 116. Next, the SDRAM queues are checked 118 to determine if any contain less than “N” empty entries, where N is equal to the number of microengines plus one. In the implementation described above, the SDRAM_queue_full signal will be asserted 120 if any SDRAM queue contains less than seven (7) empty entries and then the flow returns to 102. If checking the queues 118 determines that the SDRAM queues have space for seven or more entries, then the flow returns to 102.

If there was no history of an SDRAM command request with a chained bit set 114, the global command arbiter determines 122 if there is a SRAM command request. If there is a SRAM request, the SRAM queues are checked 124 to see if any SRAM queue contains less than N empty entries. If so, then a SRAM_queue_full signal is asserted 126, the command request is denied and the flow moves to 134 where the arbiter determines if a SDRAM request has been made. However, if the answer 124 is no, then the arbiter checks 128 to see if the SRAM controller 26b needs to write a read_lock_miss entry. If so, then the command request is denied in step 130 and the flow moves to 134; if not, then the command request is granted 132 and the flow returns to 102.

If the answer was no at 122, then the arbiter checks 134 (see FIG. 3B) to see if a SDRAM request is being made. If so, the arbiter determines 136 if the last granted request was also a SDRAM command request. If it was, then the request is denied 138 and the flow goes to 146, where the arbiter determines if an FBUS command request has been made. Commands are not granted to the SDRAM controller in consecutive cycles to ensure that there is adequate queue entry space for a SDRAM chained command, which is always granted when it occurs (even after a SDRAM_queue_full signal has been asserted). If the last granted command request was not an SDRAM command, the SDRAM queues are checked 140 to see if any contains fewer than N empty entries. If so, then an SDRAM_queue_full signal is asserted 142, access is denied 138 and the flow moves to 146. If the SDRAM queues have adequate entry space, then the command request is granted 144 and the flow returns to 102.

If a SDRAM request is not being made 134, then the arbiter checks 146 to see if an FBUS command request has been made. If so, the arbiter checks 148 to see if the last granted request was an FBUS request. If so, then the request is denied 150 and the flow moves to 160, where the arbiter determines if a PCI command request has been made. Command requests to the FBUS are not granted in consecutive cycles to improve processing efficiency of the system. If the last granted request was not an FBUS command request 148, then the FBUS queues are checked 152 to see if any contain fewer than "F" empty entries. For the example discussed above, where there are six microengines and each of the FBUS command queues (pull, hash, push) contains eight entries, F equals five (5) since a maximum of only three (3) commands can be in transit to the FBUS interface queues. Thus, if four or fewer entries are available in any FBUS queue, then the FBUS_queue_full signal is asserted 154, the command is denied 150 and the flow moves to 160. However, if the FBUS queues have adequate space, the request is granted 156 and the flow returns to 102.

If an FBUS request is not being made 146, then a PCI command request has been asserted 160. The direct memory access request is granted, a completion signal is sent to the initiating microengine thread, and the flow returns to 102.
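
Taken together, the decision flow of FIGS. 3A and 3B reduces, for a single candidate request, to something like the following C sketch. The enum values, structure fields, and the flattening of the flow chart into one function are illustrative assumptions; the per-microengine round-robin choice and the chained-command ownership rule shown earlier are omitted here for brevity.

    #include <stdbool.h>

    /* Snapshot of the conditions consulted in one arbitration cycle; each flag
     * corresponds to a decision block in FIGS. 3A and 3B. */
    struct arb_state {
        bool prior_chained_sdram;  /* a chained SDRAM command was granted earlier  */
        bool sram_request;
        bool sram_queue_full;      /* any SRAM queue below N empty entries         */
        bool sram_lock_miss_write; /* controller must write a read-lock-miss entry */
        bool sdram_request;
        bool sdram_queue_full;
        bool last_grant_was_sdram; /* no consecutive SDRAM grants                  */
        bool fbus_request;
        bool fbus_queue_full;      /* any FBUS queue below F empty entries         */
        bool last_grant_was_fbus;  /* no consecutive FBUS grants                   */
        bool pci_request;
    };

    enum arb_grant { GRANT_SDRAM_CHAIN, GRANT_SRAM, GRANT_SDRAM,
                     GRANT_FBUS, GRANT_PCI, GRANT_NONE };

    /* Resolve one cycle in the priority order listed earlier: chained SDRAM,
     * then SRAM, SDRAM, FBUS, and finally PCI. */
    enum arb_grant arbitrate(const struct arb_state *s)
    {
        if (s->prior_chained_sdram)
            return GRANT_SDRAM_CHAIN;             /* always completed, same owner */
        if (s->sram_request && !s->sram_queue_full && !s->sram_lock_miss_write)
            return GRANT_SRAM;
        if (s->sdram_request && !s->last_grant_was_sdram && !s->sdram_queue_full)
            return GRANT_SDRAM;
        if (s->fbus_request && !s->last_grant_was_fbus && !s->fbus_queue_full)
            return GRANT_FBUS;
        if (s->pci_request)
            return GRANT_PCI;
        return GRANT_NONE;
    }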

It is to be understood that while implementations of the invention have been described, the foregoing description is intended to illustrate and not limit the invention, which is defined by the scope of the appended claims. For example, the flow chart depicted in FIGS. 3A and 3B could be modified to accommodate more, less or different system resources. Other aspects, advantages, and modifications are within the scope of the following claims.

Wheeler, William, Adiletta, Matthew J., Wolrich, Gilbert, Bernstein, Debra

6070231, Dec 02 1997 Intel Corporation Method and apparatus for processing memory requests that require coherency transactions
6072781, Oct 22 1996 International Business Machines Corporation Multi-tasking adapter for parallel network applications
6073215, Aug 03 1998 Apple Inc Data processing system having a data prefetch mechanism and method therefor
6079008, Apr 04 1997 HANGER SOLUTIONS, LLC Multiple thread multiple data predictive coded parallel processing system and method
6085215, Mar 26 1993 Extreme Networks, Inc Scheduling mechanism using predetermined limited execution time processing threads in a communication network
6085248, Feb 11 1997 SECURE AXCESS LLC Media access control transmitter and parallel network management system
6085294, Oct 24 1997 Hewlett Packard Enterprise Development LP Distributed data dependency stall mechanism
6092127, May 15 1998 Hewlett Packard Enterprise Development LP Dynamic allocation and reallocation of buffers in links of chained DMA operations by receiving notification of buffer full and maintaining a queue of buffers available
6092158, Jun 13 1997 Intel Corporation Method and apparatus for arbitrating between command streams
6104700, Aug 29 1997 ARISTA NETWORKS, INC Policy based quality of service
6111886, Mar 07 1997 RPX Corporation Apparatus for and method of communicating among devices interconnected on a bus
6112016, Apr 12 1995 Intel Corporation Method and apparatus for sharing a signal line between agents
6122251, Nov 13 1996 Juniper Networks, Inc Switch control circuit and control method of ATM switchboard
6128669, Sep 30 1997 Hewlett Packard Enterprise Development LP System having a bridge with distributed burst engine to decouple input/output task from a processor
6134665, Jan 20 1998 Hewlett Packard Enterprise Development LP Computer with remote wake up and transmission of a status packet when the computer fails a self test
6141677, Oct 13 1995 Apple Inc Method and system for assigning threads to active sessions
6141689, Oct 01 1993 International Business Machines Corp. Method and mechanism for allocating switched communications ports in a heterogeneous data processing network gateway
6141765, May 19 1997 Gigabus, Inc.; GIGABUS, INC Low power, high speed communications bus
6144669, Dec 12 1997 Alcatel Canada Inc Prioritized PVC management queues for improved frame processing capabilities
6145054, Jan 21 1998 Oracle America, Inc Apparatus and method for handling multiple mergeable misses in a non-blocking cache
6157955, Jun 15 1998 Intel Corporation Packet processing system including a policy engine having a classification unit
6160562, Aug 18 1998 HEWLETT-PACKARD DEVELOPMENT COMPANY, L P System and method for aligning an initial cache line of data read from local memory by an input/output device
6170051, Aug 01 1997 Round Rock Research, LLC Apparatus and method for program level parallelism in a VLIW processor
6175927, Oct 06 1998 Lenovo PC International Alert mechanism for service interruption from power loss
6182177, Jun 13 1997 Intel Corporation Method and apparatus for maintaining one or more queues of elements such as commands using one or more token queues
6195676, Dec 29 1989 Hewlett Packard Enterprise Development LP Method and apparatus for user side scheduling in a multiprocessor operating system program that implements distributive scheduling of processes
6199133, Mar 29 1996 Hewlett Packard Enterprise Development LP Management communication bus for networking devices
6201807, Feb 27 1996 Intel Corporation Real-time hardware method and apparatus for reducing queue processing
6212542, Dec 16 1996 International Business Machines Corporation Method and system for executing a program within a multiscalar processor by processing linked thread descriptors
6212544, Oct 23 1997 GOOGLE LLC Altering thread priorities in a multithreaded processor
6212604, Dec 03 1998 Oracle America, Inc Shared instruction cache for multiple processors
6212611, Nov 03 1998 Intel Corporation Method and apparatus for providing a pipelined memory controller
6216220, Apr 08 1998 AIDO LLC Multithreaded data processing method with long latency subinstructions
6223207, Apr 24 1995 Microsoft Technology Licensing, LLC Input/output completion port queue data structures and methods for using same
6223238, Mar 31 1998 Round Rock Research, LLC Method of peer-to-peer mastering over a computer bus
6223243, Jun 12 1997 NEC Corporation Access control method with plural users having I/O commands prioritized in queues corresponding to plural memory units
6223274, Nov 19 1998 INTERUNIVERSITAIR MICRO-ELEKTRONICA CENTRUM IMEC VZW Power-and speed-efficient data storage/transfer architecture models and design methodologies for programmable or reusable multi-media processors
6223279, Apr 30 1991 Kabushiki Kaisha Toshiba Single chip microcomputer having a dedicated address bus and dedicated data bus for transferring register bank data to and from an on-line RAM
6247025, Jul 17 1997 International Business Machines Corporation Locking and unlocking mechanism for controlling concurrent access to objects
6256713, Apr 29 1999 Freescale Semiconductor, Inc Bus optimization with read/write coherence including ordering responsive to collisions
6269391, Feb 24 1997 Oracle International Corporation Multi-processor scheduling kernel
6272109, Nov 18 1997 Extreme Networks, Inc Hierarchical schedules for different ATM traffic
6272520, Dec 31 1997 Intel Corporation; Hewlett-Packard Company Method for detecting thread switch events
6272616, Jun 17 1998 AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE LIMITED Method and apparatus for executing multiple instruction streams in a digital processor with multiple data paths
6275505, May 30 1998 WSOU Investments, LLC Method and apparatus for packetizing data into a data stream
6279113, Mar 16 1998 GEN DIGITAL INC Dynamic signature inspection-based network intrusion detection
6282169, Jun 11 1999 HANGER SOLUTIONS, LLC Serial redundant bypass control mechanism for maintaining network bandwidth management service
6286083, Jul 08 1998 HEWLETT-PACKARD DEVELOPMENT COMPANY, L P Computer system with adaptive memory arbitration scheme
6289011, May 21 1997 Samsung Electronics Co., Ltd. 2n×n multiplexing switch
6295600, Jul 01 1996 Sun Microsystems, Inc. Thread switch on blocked load or store using instruction thread field
6298370, Apr 04 1997 Texas Instruments Incorporated Computer operating process allocating tasks between first and second processors at run time based upon current processor load
6307789, Dec 28 1999 Intel Corporation Scratchpad memory
6311261, Jun 12 1995 Georgia Tech Research Corporation Apparatus and method for improving superscalar processors
6320861, May 15 1998 Ericsson AB Hybrid scheme for queuing in a shared memory ATM switch buffer
6324624, Dec 28 1999 Intel Corporation Read lock miss control and queue management
6335932, Jul 08 1998 AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE LIMITED High performance self balancing low cost network switching architecture based on distributed hierarchical shared memory
6338078, Dec 17 1998 Mellanox Technologies, LTD System and method for sequencing packets for multiprocessor parallelization in a computer network system
6345334, Jan 07 1998 Renesas Electronics Corporation High speed semiconductor memory device capable of changing data sequence for burst transmission
6347344, Oct 14 1998 Hitachi, LTD; EQUATOR TECHNOLOGIES, INC Integrated multimedia system with local processor, data transfer switch, processing modules, fixed functional unit, data streamer, interface unit and multiplexer, all integrated on multimedia processor
6349331, Jun 05 1998 AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE LIMITED Multiple channel communication system with shared autonegotiation controller
6356962, Sep 30 1998 STMicroelectronics, Inc. Network device and method of controlling flow of data arranged in frames in a data-based network
6359911, Dec 04 1998 ENTROPIC COMMUNICATIONS, INC MPEG-2 transport demultiplexor architecture with non-time-critical post-processing of packet information
6360262, Nov 24 1997 International Business Machines Corporation Mapping web server objects to TCP/IP ports
6360277, Jul 22 1998 CRYDOM, INC Addressable intelligent relay
6366998, Oct 14 1998 CALLAHAN CELLULAR L L C Reconfigurable functional units for implementing a hybrid VLIW-SIMD programming model
6373848, Jul 28 1998 International Business Machines Corporation; IBM Corporation Architecture for a multi-port adapter with a single media access control (MAC)
6377998, Aug 22 1997 AVAYA MANAGEMENT L P Method and apparatus for performing frame processing for a network
6389031, Nov 05 1997 Intellectual Ventures Holding 19, LLC Methods and apparatus for fairly scheduling queued packets using a ram-based search engine
6389449, Dec 16 1998 ARM Finance Overseas Limited Interstream control and communications for multi-streaming digital processors
6393026, Sep 17 1998 RPX CLEARINGHOUSE LLC Data packet processing system and method for a router
6393483, Jun 30 1997 Emulex Corporation Method and apparatus for network interface card load balancing and port aggregation
6404737, Aug 10 2000 FOURNIER ASSETS LIMITED LIABILITY COMPANY Multi-tiered shaping allowing both shaped and unshaped virtual circuits to be provisioned in a single virtual path
6415338, Feb 11 1998 Synaptics Incorporated System for writing a data value at a starting address to a number of consecutive locations equal to a segment length identifier
6418488, Dec 18 1998 EMC IP HOLDING COMPANY LLC Data transfer state machines
6424657, Aug 10 2000 RAKUTEN, INC Traffic queueing for remote terminal DSLAMs
6424659, Jul 17 1998 SONUS NETWORKS, INC Multi-layer switching apparatus and method
6426940, Jun 30 1997 Samsung Electronics, Co. Ltd.; SAMSUNG ELECTRONICS CO , LTD , A CORPORATION ORGANIZED UNDER THE LAWS OF THE REPUBLIC OF KOREA Large scaled fault tolerant ATM switch and a self-routing method in a 2N×N multiplexing switch
6426943, Apr 10 1998 TOP LAYER NETWORKS, INC , A COMMONWEALTH OF MASSACHUSETTS CORPORATION Application-level data communication switching system and process for automatic detection of and quality of service adjustment for bulk data transfers
6427196, Aug 31 1999 Intel Corporation SRAM controller for parallel processor architecture including address and command queue and arbiter
6430626, Dec 30 1996 Hewlett Packard Enterprise Development LP Network switch with a multiple bus structure and a bridge interface for transferring network data between different buses
6434145, Jun 22 1998 RPX Corporation Processing of network data by parallel processing channels
6438132, Oct 14 1998 RPX CLEARINGHOUSE LLC Virtual port scheduler
6438134, Aug 19 1998 Alcatel-Lucent Canada Inc Two-component bandwidth scheduler having application in multi-class digital communications systems
6448812, Jun 11 1998 Infineon Technologies AG Pull up/pull down logic for holding a defined value during power down mode
6453404, May 27 1999 ZHIGU HOLDINGS LIMITED Distributed data cache with memory allocation model
6457015, May 07 1999 NetApp, Inc Adaptive and generalized status monitor
6463035, Dec 30 1998 AT&T Corp Method and apparatus for initiating an upward signaling control channel in a fast packet network
6463072, Dec 28 1999 Intel Corporation Method and apparatus for sharing access to a bus
6463480, Jul 04 1996 International Business Machines Corporation Method and system of processing a plurality of data processing requests, and method and system of executing a program
6463527, Mar 21 1997 VISHKIN, UZI Y Spawn-join instruction set architecture for providing explicit multithreading
6466898, Jan 12 1999 Multithreaded, mixed hardware description languages logic simulation on engineering workstations
6477562, Dec 16 1998 ARM Finance Overseas Limited Prioritized instruction scheduling for multi-streaming processors
6484224, Nov 29 1999 Cisco Technology, Inc Multi-interface symmetric multiprocessor
6501731, Jun 27 1998 U S BANK NATIONAL ASSOCIATION, AS COLLATERAL AGENT CBR/VBR traffic scheduler
6507862, May 11 1999 Oracle America, Inc Switching method in a multi-threaded processor
6522188, Apr 10 1998 Top Layer Networks, Inc.; BLAZENET, INC High-speed data bus for network switching
6526451, Sep 30 1998 STMicroelectronics, Inc. Method and network device for creating circular queue structures in shared memory
6526452, Nov 17 1998 Cisco Technology, Inc Methods and apparatus for providing interfaces for mixed topology data switching system
6529983, Nov 03 1999 Cisco Technology, Inc Group and virtual locking mechanism for inter processor synchronization
6532509, Dec 22 1999 Sony Corporation of America Arbitrating command requests in a parallel multi-threaded processing system
6535878, May 02 1997 CDN INNOVATIONS, LLC Method and system for providing on-line interactivity over a server-client network
6552826, Feb 21 1997 VOIP ACQUISITION COMPANY Facsimile network
6553406, Aug 03 2000 EPICOR SOFTWARE CORPORATION Process thread system receiving request packet from server thread, initiating process thread in response to request packet, synchronizing thread process between clients-servers.
6560667, Dec 28 1999 Intel Corporation Handling contiguous memory references in a multi-queue system
6570850, Apr 23 1998 AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE LIMITED System and method for regulating message flow in a digital data network
6577542, Dec 20 1999 Intel Corporation Scratchpad memory
6584522, Dec 30 1999 Intel Corporation Communication between processors
6604125, Sep 24 1999 Oracle America, Inc Mechanism for enabling a thread unaware or non thread safe application to be executed safely in a multi-threaded environment
6606704, Aug 31 1999 Intel Corporation Parallel multithreaded processor with plural microengines executing multiple threads each microengine having loadable microcode
6625654, Dec 28 1999 Intel Corporation Thread signaling in multi-threaded network processor
6628668, Mar 16 1999 FUJITSU NETWORK COMMUNICATIONS, INC Crosspoint switch bandwidth allocation management
6629147, Mar 31 2000 Intel Corporation Segmentation and reassembly of data frames
6629236, Nov 12 1999 GOOGLE LLC Master-slave latch circuit for multithreaded processing
6631422, Aug 26 1999 Mellanox Technologies, LTD Network adapter utilizing a hashing function for distributing packets to multiple processors for parallel processing
6631430, Dec 28 1999 Intel Corporation Optimizations to receive packet status from fifo bus
6631462, Jan 05 2000 Intel Corporation Memory shared between processing threads
6657963, Jul 30 1999 Sound View Innovations, LLC Method and apparatus for controlling data congestion in a frame relay/ATM internetworking system
6658551, Mar 30 2000 AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE LIMITED Method and apparatus for identifying splittable packets in a multithreaded VLIW processor
6661774, Feb 16 1999 SAGEMCOM BROADBAND SAS System and method for traffic shaping packet-based signals
6661794, Dec 29 1999 Intel Corporation Method and apparatus for gigabit packet assignment for multithreaded packet processing
6665699, Sep 23 1999 Bull HN Information Systems Inc.; BULL INFORMATION SYSTEMS, INC Method and data processing system providing processor affinity dispatching
6665755, Dec 22 2000 AVAYA MANAGEMENT L P External memory engine selectable pipeline architecture
6667920, Dec 28 1999 Intel Corporation Scratchpad memory
6668317, Aug 31 1999 Intel Corporation Microengine for parallel processor architecture
6671827, Dec 21 2000 Intel Corporation Journaling for parallel hardware threads in multithreaded processor
6675190, Oct 08 1998 WSOU Investments, LLC Method for cooperative multitasking in a communications network, and a network element for carrying out the method
6675192, Oct 01 1999 GOOGLE LLC Temporary halting of thread execution until monitoring of armed events to memory location identified in working registers
6678746, Aug 01 2000 Hewlett Packard Enterprise Development LP Processing network packets
6680933, Sep 23 1999 RPX CLEARINGHOUSE LLC Telecommunications switches and methods for their operation
6681300, Dec 28 1999 Intel Corporation Read lock miss control and queue management
6684326, Mar 31 1999 Lenovo PC International Method and system for authenticated boot operations in a computer system of a networked computing environment
6694380, Dec 27 1999 Intel Corporation Mapping requests from a processing unit that uses memory-mapped input-output space
6697379, May 18 1998 CONVERSANT INTELLECTUAL PROPERTY MANAGEMENT INC System for transmitting messages to improved stations, and corresponding processing
6721325, Apr 23 1998 Alcatel Canada Inc Fair share scheduling of multiple service classes with prioritized shaping
6724767, Jun 27 1998 Micron Technology, Inc Two-dimensional queuing/de-queuing methods and systems for implementing the same
6728845, Aug 31 1999 Intel Corporation SRAM controller for parallel processor architecture and method for controlling access to a RAM using read and read/write queues
6732187, Sep 24 1999 Cisco Technology, Inc Opaque packet handles
6754211, Dec 01 1999 CONVERSANT INTELLECTUAL PROPERTY MANAGEMENT INC Method and apparatus for wire speed IP multicast forwarding
6754222, Jun 12 1999 SAMSUNG ELECTRONIC CO , LTD Packet switching apparatus and method in data network
6768717, May 28 1999 SILICON VALLEY BANK, AS ADMINISTRATIVE AGENT Apparatus and method for traffic shaping in a network switch
6775284, Jan 07 2000 International Business Machines Corporation Method and system for frame and protocol classification
6792488, Dec 30 1999 Intel Corporation Communication between processors
6798744, May 14 1999 PMC-SIERRA, INC Method and apparatus for interconnection of flow-controlled communication
6826615, Oct 14 1999 HITACHI VANTARA LLC Apparatus and method for hardware implementation or acceleration of operating system functions
6834053, Oct 27 2000 Nortel Networks Limited Distributed traffic scheduler
6850521, Mar 17 1999 AVAGO TECHNOLOGIES GENERAL IP SINGAPORE PTE LTD Network switch
6856622, Feb 20 2001 PMC-Sierra, Inc. Multicast cell scheduling protocol
6873618, Mar 16 1999 RPX CLEARINGHOUSE LLC Multipoint network routing protocol
6876561, Dec 28 1999 Intel Corporation Scratchpad memory
6895457, Sep 16 2003 Intel Corporation Bus interface with a first-in-first-out memory
6925637, Nov 16 1998 RPX Corporation Low-contention grey object sets for concurrent, marking garbage collection
6931641, Apr 04 2000 Intel Corporation Controller for multiple instruction thread processors
6934780, Dec 22 2000 AVAYA MANAGEMENT L P External memory engine selectable pipeline architecture
6934951, Jan 17 2002 TAHOE RESEARCH, LTD Parallel processor with functional pipeline providing programming engines by supporting multiple contexts and critical section
6938147, May 11 1999 Oracle America, Inc Processor with multiple-thread, vertically-threaded pipeline
6944850, Dec 21 2000 Intel Corporation Hop method for stepping parallel hardware threads
6947425, Dec 29 1999 Intel Corporation Multi-threaded sequenced transmit software for packet forwarding device
6952824, Dec 30 1999 Intel Corporation Multi-threaded sequenced receive for fast network port stream of packets
6959002, May 01 2001 MICROSEMI STORAGE SOLUTIONS US , INC Traffic manager for network switch port
6967963, Dec 01 1998 HEWLETT-PACKARD DEVELOPMENT COMPANY, L P Telecommunication method for ensuring on-time delivery of packets containing time-sensitive data
6976095, Dec 30 1999 Intel Corporation Port blocking technique for maintaining receive packet ordering for a multiple ethernet port switch
6981077, Dec 22 2000 AVAYA MANAGEMENT L P Global access bus architecture
6983350, Aug 31 1999 Intel Corporation SDRAM controller for parallel processor architecture
7006495, Aug 31 2001 SI VENTURE FUND II, L P ; CROSSBOW VENTURE PARTNERS, L P ; MI-2 CAPITAL LLC Transmitting multicast data packets
7065569, Jan 09 2001 Force 10 Networks, Inc System and method for remote traffic management in a communication network
7069548, Jun 28 2002 Intel Corporation Inter-procedure global register allocation method
7096277, Aug 07 2002 Intel Corporation Distributed lookup based on packet contents
7100102, Sep 18 2003 Intel Corporation Method and apparatus for performing cyclic redundancy checks
7111072, Sep 13 2000 Fortinet, INC Packet routing system and method
7111296, Dec 28 1999 Intel Corporation Thread signaling in multi-threaded processor
7124196, Aug 07 2002 Intel Corporation Processing a network packet using queues
7126952, Sep 28 2001 Intel Corporation Multiprotocol decapsulation/encapsulation control structure and packet protocol conversion method
7149786, Oct 06 1998 Jetter AG Network for data transmission
7181742, Nov 19 2002 Intel Corporation Allocation of packets and threads
7191321, Aug 31 1999 Intel Corporation Microengine for parallel processor architecture
7206858, Sep 19 2002 Intel Corporation DSL transmit traffic shaper structure and procedure
7248584, Aug 07 2002 Intel Corporation Network packet processing
7305500, Aug 31 1999 Intel Corporation Sram controller for parallel processor architecture including a read queue and an order queue for handling requests
7328289, Dec 30 1999 Intel Corporation Communication between processors
7352769, Sep 12 2002 Intel Corporation Multiple calendar schedule reservation structure and method
20010023487,
20020027448,
20020041520,
20020075878,
20020118692,
20020150047,
20020181194,
20030043803,
20030067934,
20030086434,
20030105917,
20030110166,
20030115347,
20030115426,
20030131198,
20030140196,
20030145159,
20030147409,
20030161303,
20030161337,
20030196012,
20030210574,
20030231635,
20040039895,
20040052269,
20040054880,
20040059828,
20040071152,
20040073728,
20040073778,
20040085901,
20040098496,
20040109369,
20040148382,
20040162933,
20040252686,
20050033884,
20050149665,
20060007871,
20060069882,
20060156303,
EP379709,
EP464715,
EP633678,
EP745933,
EP773648,
EP809180,
EP959602,
JP59111533,
WO38376,
WO56024,
WO116718,
WO116769,
WO116770,
WO116782,
WO117179,
WO131856,
WO148596,
WO148606,
WO148619,
WO150247,
WO150679,
WO3030461,
WO9415287,
WO9738372,
WO9820647,
Executed on | Assignor | Assignee | Conveyance | Frame/Reel/Doc
Mar 20 2000 | WOLRICH, GILBERT | Intel Corporation | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 021217/0529
Mar 20 2000 | BERNSTEIN, DEBRA | Intel Corporation | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 021217/0529
Mar 20 2000 | ADILETTA, MATTHEW J. | Intel Corporation | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 021217/0529
Mar 20 2000 | WHEELER, WILLIAM R. | Intel Corporation | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 021217/0529
Jun 22 2005 | | Intel Corporation | (assignment on the face of the patent) |
Apr 02 2014 | Intel Corporation | Sony Corporation of America | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 032893/0199
Date Maintenance Fee Events
Feb 07 2011 | REM: Maintenance Fee Reminder Mailed.
Jul 01 2011 | M1552: Payment of Maintenance Fee, 8th Year, Large Entity.
Jul 01 2011 | M1555: 7.5 yr surcharge - late pmt w/in 6 mo, Large Entity.
Sep 18 2014 | ASPN: Payor Number Assigned.
Dec 24 2014 | M1553: Payment of Maintenance Fee, 12th Year, Large Entity.


Date Maintenance Schedule
Oct 19 2013 | 4 years fee payment window open
Apr 19 2014 | 6 months grace period start (w surcharge)
Oct 19 2014 | patent expiry (for year 4)
Oct 19 2016 | 2 years to revive unintentionally abandoned end (for year 4)
Oct 19 2017 | 8 years fee payment window open
Apr 19 2018 | 6 months grace period start (w surcharge)
Oct 19 2018 | patent expiry (for year 8)
Oct 19 2020 | 2 years to revive unintentionally abandoned end (for year 8)
Oct 19 2021 | 12 years fee payment window open
Apr 19 2022 | 6 months grace period start (w surcharge)
Oct 19 2022 | patent expiry (for year 12)
Oct 19 2024 | 2 years to revive unintentionally abandoned end (for year 12)