A method and mechanism for allocating transaction tags. A request queue includes a counter whose value is used to identify a corresponding tag of a plurality of unique tags. The queue is configured to increment the counter and use the current count value to index into a buffer pool. If the tag corresponding to the currently indexed buffer pool entry is available for allocation, a determination is made as to whether the currently identified tag corresponds to the last tag which was de-allocated. If the identified available buffer pool entry/tag matches the last tag which was de-allocated, the queue continues the process of counter incrementation and searching of the buffer pool for an available entry to select for allocation. When a request is received, the currently selected tag is allocated. Additional circuitry may be used to identify a fallback tag in the event a request is received while the queue is searching for a tag. In such an event, the fallback tag may be allocated.
1. A method for allocating a tag, said method comprising:
incrementing a counter responsive to determining a tag of a plurality of unique tags has not been selected for allocation;
utilizing a value of said counter to identify a first tag of said tags;
determining whether said first tag is available for allocation;
repeating said incrementing, utilizing, and determining if said first tag is not available for allocation; and
selecting said first tag for allocation if said first tag is available for allocation and said first tag does not correspond to a tag most recently de-allocated.
20. A system comprising:
a bus master including a bus interface coupled to a bus; and
a request queue coupled to said bus master via said bus;
wherein said request queue is configured to:
increment a counter responsive to determining no tag of said entries has been selected for allocation;
utilize a value of said counter to identify a first entry of said entries;
determine whether a first tag corresponding to said first entry is available for allocation;
repeat said incrementing, utilizing, and determining in response to detecting said first tag is not available for allocation; and
select said first tag for allocation in response to detecting said first tag is available for allocation and said first tag does not correspond to a tag most recently de-allocated.
10. A request queue comprising:
a buffer pool including a plurality of entries, each said entry being indicative of a unique tag; and
a control unit, wherein said control unit is configured to:
increment a counter responsive to determining no tag of said entries has been selected for allocation;
utilize a value of said counter to identify a first entry of said entries;
determine whether a first tag corresponding to said first entry is available for allocation;
repeat said incrementing, utilizing, and determining in response to detecting said first tag is not available for allocation; and
select said first tag for allocation in response to detecting said first tag is available for allocation and said first tag does not correspond to a tag most recently de-allocated.
2. The method as recited in
3. The method as recited in
4. The method as recited in
5. The method as recited in
6. The method as recited in
detecting a received transaction; and
allocating a tag for said transaction in response to determining said transaction corresponds to a request.
7. The method as recited in
8. The method as recited in
9. The method as recited in
de-allocating a tag which corresponds to the received transaction; and
storing an indication that said de-allocated tag is a tag most recently de-allocated.
11. The queue as recited in
12. The queue as recited in
13. The queue as recited in
14. The queue recited in
15. The queue as recited in
detect a received transaction; and
allocate a tag for said transaction in response to determining said transaction corresponds to a request.
16. The queue as recited in
17. The queue as recited in
18. The queue as recited in
de-allocate a tag which corresponds to the received transaction; and
store an indication that said de-allocated tag is a tag most recently de-allocated.
19. The queue as recited in
21. The system as recited in
22. The system as recited in
23. The system as recited in
24. The system as recited in
detect a received transaction; and
allocate a tag for said transaction in response to determining said transaction corresponds to a request.
1. Field of Invention
This invention relates to computer systems, and more particularly to resource tag allocation.
2. Description of Related Art
Typical computer systems include one or more sets of buses which serve as communication links between system components. Generally speaking, each bus has at least one component connected to it which can initiate a transaction. Such components are called “bus master” devices. Other devices coupled to the bus which are capable of responding to requests initiated by bus master devices are called “slave” devices. For example, a typical computer system includes a CPU coupled to a memory system via a bus. The CPU executes instructions stored in the memory system, and initiates transfers of instructions from the memory system as needed. The CPU also transfers data to the memory system for storage as required. The CPU is, by definition, a bus master device. The memory system simply responds to data transfer requests initiated by the CPU and is a slave device. A computer system including multiple CPUs or other components coupled to a common bus and capable of initiating transactions has multiple bus master devices. In such systems, each bus master device must compete with the other bus master devices for control of the bus when the bus is needed.
A typical computer system bus (i.e., system bus) includes multiple address, data, and control signal lines. Such system buses are typically divided into an address bus including the address signal lines, a data bus including the data signal lines, and a control bus including the control signal lines. Data transfers across the system bus are carried out using bus “transactions”. A bus master device (e.g., the CPU) performs a “read” transaction in order to obtain needed data from another system component coupled to the system bus. During the read transaction, the bus master device transmits address signals upon the address bus and control signals indicative of a request for data (i.e., a read transaction) upon the control bus. Upon receiving the address and control signals, the target system component (e.g., the memory system) accesses the requested data. The target component then transmits data signals representing the requested data upon the data bus and control signals which indicate the availability of the data upon the control bus. In many computer systems, the bus master device maintains control of the system bus during the entire period of time between transmission of the address signals and reception of the requested data.
When performing a “write” transaction, the bus master device transmits address signals upon the address bus, data signals representing the data upon the data bus, and control signals indicating a transmission of data (i.e., a write transaction) upon the control bus. Upon receiving the address and control signals, the target system component (e.g., the memory system) stores the data signals. Following simultaneous transmission of address, data, and control signals during a write transaction, the bus master device typically assumes reception of the data by the target component and immediately relinquishes control of the system bus.
Where multiple bus master devices share a common system bus, it is often necessary to limit the amount of time that any one bus master device has control of the system bus. One method of limiting the amount of time a bus master device has control of the system bus is to utilize a split-transaction protocol wherein transactions are split into separate “address” and “data” portions or phases. A bus master device transmits the address and associated control signals during the address phase of a transaction. Following the address portion, the bus master device relinquishes control of the system bus, allowing other bus master devices to use the system bus while the target component is accessing the requested data. When the target component is ready to transmit the requested data, the target component transmits data signals representing the requested data and associated control signals during the data portion of the read transaction.
In a system employing a split transaction bus, each bus master typically generates a unique “tag” value which identifies the transaction. The tag value is transmitted along with information associated with the transaction (e.g., address or data signals), typically upon signal lines of the control bus. A bus master device initiating a read transaction may not retire the tag value until the data portion of the read transaction is completed (i.e., until the requested data is received from the target component). The tag received with the initial request is also included in the response in order to allow the response to be matched to the corresponding request.
Because each tag which is assigned by a requester must uniquely identify the corresponding transaction, a method for allocating and de-allocating unique tags is required. If a receiving device such as a memory controller has more than one pending request with the same tag, it must typically delay processing of the later-received request until the ambiguity is resolved.
Various techniques have been used to allocate tags for transactions initiated by a requester. One technique involves the requester having a buffer with a fixed number of entries. Each entry of the buffer may, for example, be configured to store the address of an initiated transaction. Those entries which currently store valid data are marked as such, while those which do not are marked as available for allocation. When a request is received, the buffer may be searched from the bottom up until an available entry is found. The address corresponding to the available entry may then be used as a tag for the transaction. While this technique may be fairly straightforward, it may lead to tag re-circulation problems. For example, since the buffer may be searched in a particular order each time a request is received, those entries occurring earlier in the search order will be selected more often. Consequently, tags earlier in the search order will tend to be used more frequently, leading to an increased probability of multiple transactions with the same tag being outstanding.
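The bias can be seen in a minimal C sketch of such a fixed-order search (all names and the pool size here are hypothetical, not taken from any particular implementation): because the scan always starts at entry 0, low-numbered tags are selected and recycled far more often than high-numbered ones.

```c
#include <stdbool.h>

#define NUM_TAGS 32

/* Hypothetical entry table: true means the tag is already allocated. */
static bool valid[NUM_TAGS];

/* Fixed-order search: always scans from entry 0 upward, so tags with
 * low indices are selected far more often than high ones. Returns the
 * allocated tag, or -1 if every entry is in use. */
int allocate_tag_fixed_order(void)
{
    for (int i = 0; i < NUM_TAGS; i++) {
        if (!valid[i]) {
            valid[i] = true;   /* mark entry as allocated */
            return i;          /* tag == entry index */
        }
    }
    return -1;                 /* buffer pool full */
}
```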
Another technique used for allocating tags involves the use of free lists. In this method, tags which are available for allocation are maintained in a free list. Essentially, the free list acts as a FIFO, with tags for new requests drawn from one end while de-allocated tags are added to the other. However, such a technique may prove expensive, given that storage for all possible tags is required, as well as a structure to maintain the resulting FIFO.
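A rough C sketch of such a free list follows (hypothetical names; a simple ring buffer is assumed as the FIFO). Note the costs mentioned above: storage for every possible tag plus head/tail/count bookkeeping to maintain the FIFO order.

```c
#define NUM_TAGS 32

/* Hypothetical free list: a FIFO sized to hold every possible tag.
 * De-allocated tags are pushed at the tail; tags for new requests
 * are popped from the head. */
static int free_list[NUM_TAGS];
static int head, tail, count;

void free_list_init(void)
{
    for (int i = 0; i < NUM_TAGS; i++)
        free_list[i] = i;      /* initially every tag is free */
    head = 0;
    tail = 0;
    count = NUM_TAGS;
}

int free_list_pop(void)        /* allocate: take a tag from one end */
{
    if (count == 0)
        return -1;             /* no tags available */
    int tag = free_list[head];
    head = (head + 1) % NUM_TAGS;
    count--;
    return tag;
}

void free_list_push(int tag)   /* de-allocate: add the tag to the other end */
{
    free_list[tail] = tag;     /* count never exceeds NUM_TAGS, since only
                                * previously allocated tags are returned */
    tail = (tail + 1) % NUM_TAGS;
    count++;
}
```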
Accordingly, an efficient method of allocating tags is desired.
A method and mechanism are contemplated wherein a request queue includes a counter and a buffer pool of tags for allocation. The counter includes a number of bits sufficient to provide a count value for each entry of the buffer pool. Generally speaking, the request queue is configured to increment the counter and use the current count value to index into the buffer pool. If the currently indexed buffer pool entry is available for allocation, the tag corresponding to the indexed entry is selected for allocation. In response to a request for a tag, the request queue may then allocate the currently selected tag. The request queue may further include an indication of the last buffer pool entry which has been de-allocated. During counter incrementation, indexing and searching of the buffer pool, if an identified available buffer pool entry matches the last buffer de-allocated, the identified entry is not selected and the request queue continues the process of counter incrementation and searching of the buffer pool for an available entry. Also contemplated is an embodiment wherein additional circuitry identifies a fallback tag. Subsequently, if the request queue receives a request for a new tag while it is currently searching for an available tag, the fallback tag may be immediately provided in response to the request.
Other objects and advantages of the invention will become apparent upon reading the following detailed description and upon reference to the accompanying drawings in which:
While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof are shown by way of example in the drawings and will herein be described in detail. It should be understood, however, that the drawings and detailed description thereto are not intended to limit the invention to the particular form disclosed, but on the contrary, the intention is to cover all modifications, equivalents and alternatives falling within the spirit and scope of the present invention as defined by the appended claims.
Processing nodes 112A–112D implement a packet-based link for inter-processing node communication. In the present embodiment, the link is implemented as sets of unidirectional lines (e.g. lines 124A are used to transmit packets from processing node 112A to processing node 112B and lines 124B are used to transmit packets from processing node 112B to processing node 112A). Other sets of lines 124C–124H are used to transmit packets between other processing nodes as illustrated in
Processing nodes 112A–112D, in addition to a memory controller and interface logic, may include one or more processors. Broadly speaking, a processing node comprises at least one processor and may optionally include a memory controller for communicating with a memory and other logic as desired.
Memories 114A–114D may comprise any suitable memory devices. For example, a memory 114A–114D may comprise one or more RAMBUS DRAMs (RDRAMs), synchronous DRAMs (SDRAMs), static RAM, etc. The address space of computer system 100 is divided among memories 114A–114D. Each processing node 112A–112D may include a memory map used to determine which addresses are mapped to which memories 114A–114D, and hence to which processing node 112A–112D a memory request for a particular address should be routed. In one embodiment, the coherency point for an address within computer system 100 is the memory controller 116A–116D coupled to the memory storing bytes corresponding to the address. In other words, the memory controller 116A–116D is responsible for ensuring that each memory access to the corresponding memory 114A–114D occurs in a cache coherent fashion. Memory controllers 116A–116D may comprise control circuitry for interfacing to memories 114A–114D. Additionally, memory controllers 116A–116D may include request queues for queuing memory requests.
Generally, interface logic 118A–118L may comprise a variety of buffers for receiving packets from the link and for buffering packets to be transmitted upon the link. Computer system 100 may employ any suitable flow control mechanism for transmitting packets.
I/O devices 120A–120C may be any suitable I/O devices. For example, I/O devices 120 may include network interface cards, video accelerators, audio cards, hard or floppy disk drives or drive controllers, SCSI (Small Computer Systems Interface) adapters and telephony cards, modems, sound cards, and a variety of data acquisition cards such as GPIB or field bus interface cards.
Turning now to
In the embodiment shown in
During the address portion of a read transaction initiated by bus interface 26 of a bus master device, the interface unit 30 of the bus interface 26: (i) generates a tag value which uniquely identifies the bus transaction, (ii) drives the tag value upon signal lines of control bus 36, and (iii) provides the tag value to the corresponding transaction queue 28. The bus interface 26 of the target component saves the tag value while accessing the requested data. During the data portion of the read transaction, the bus interface 26 of the target component drives the tag value upon signal lines of control bus 36, informing the bus interface 26 of the bus master device that the requested data is being provided.
In one embodiment, transaction queue 28 includes a number of storage locations for storing tag values of outstanding (i.e., incomplete) bus transactions. Transaction queue 28 removes the tag value associated with an outstanding transaction from the transaction queue (i.e., retires the tag value) when the corresponding bus transaction is completed. In the case of a write transaction initiated by the interface unit 30 of a bus interface 26, the corresponding transaction queue 28 may retire the associated tag value immediately after interface unit 30 drives the associated address, data, and control signals upon the signal lines of processor bus 16. In the case of a read transaction initiated by the interface unit 30 of a bus interface 26, the corresponding transaction queue 28 may retain the tag value until the data portion is completed (i.e., until the requested data is received from the target component during the data portion of the read transaction).
In one embodiment, bus interface 213, or the master which corresponds to bus interface 213, includes an indication 31 as to how many requests the master may have pending at any one time. For example, request queue 220 may include a limited number of entries. Accordingly, at system reset, or another configuration event, each master in the system which may convey requests to request queue 220 may be allotted a certain number of requests it may issue. When a master issues a request, the master decrements the number of requests which it is permitted to issue. When a transaction corresponding to a request is completed, the corresponding master may then increment the number of requests which it may issue. Request queue 220 may be configured to dynamically allot additional permitted requests to any master when the queue 220 has entries available. In this manner, the total number of pending requests may be controlled and request queue 220 will not receive a request when the queue 220 has no entries available.
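A software analogue of this credit scheme might look as follows (a minimal sketch with hypothetical names; the allotment is assumed fixed at configuration time, and the dynamic re-allotment mentioned above is omitted):

```c
#include <stdbool.h>

/* Hypothetical per-master credit tracking; the allotment is assumed
 * to be set at system reset or another configuration event. */
struct master_credits {
    int available;            /* requests the master may still issue */
};

/* Master side: issue a request only if a credit remains; otherwise
 * the master must wait, so the request queue never receives a
 * request it has no entry for. */
bool try_issue_request(struct master_credits *m)
{
    if (m->available == 0)
        return false;
    m->available--;           /* decrement on issue */
    return true;
}

/* Master side: a completed transaction returns the credit. */
void on_transaction_complete(struct master_credits *m)
{
    m->available++;           /* increment on completion */
}
```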
Each bus interface 26 may further be connected to a bus arbiter (not shown) by an “address bus request” signal line 38, an “address bus grant” signal line 40, a “data bus request” signal line 42, and a “data bus grant” signal line 44 of control bus 36. During a bus transaction, the bus interface 26 of a device coupled to processor bus 16 competes with other bus master devices for control of address bus 32 and/or data bus 34. In order to gain control of address bus 32, the interface unit 30 of a bus interface 26 may assert an “address bus request” signal upon the dedicated address bus request signal line 38. The address bus request signal is received by the bus arbiter. In response to the address bus request signal, the bus arbiter awards control of address bus 32 to the bus interface 26 by asserting an “address bus grant” signal upon the dedicated address bus grant signal line 40. Upon receiving the address bus grant signal, the bus interface 26 assumes control of address bus 32.
In order to gain control of data bus 34, the interface unit 30 of a bus interface 26 asserts a “data bus request” signal upon the dedicated data bus request signal line 42. The data bus request signal is received by the bus arbiter. In response to the data bus request signal, the bus arbiter awards control of data bus 34 to the bus interface 26 by asserting a “data bus grant” signal upon the dedicated data bus grant signal line 44. Upon receiving the data bus grant signal, the bus interface 26 assumes control of data bus 34. It is noted that other methods of bus arbitration may be employed as well.
Control unit 404 includes a counter 430 which includes as many bits as necessary to address the number of entries included in the buffer pool 402. For example, if buffer pool 402 includes 32 entries, the counter 430 must include at least five bits. In general, if buffer pool 402 includes n entries, then the number of bits x needed for counter 430 may be represented as x = ⌈log₂(n)⌉, where ⌈y⌉ is the "ceiling function" giving the smallest integer ≥ y. In one embodiment, counter 430 is configured as a "wrap around" counter wherein it starts over at zero once its maximum value has been reached. Accordingly, buffer pool 402 may be indexed as a circular buffer.
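As a worked example of this relationship, the width can be computed without floating point by finding the smallest x with 2^x ≥ n (a small C sketch; the function name is illustrative only):

```c
/* Smallest counter width x (in bits) able to index n buffer pool
 * entries: x = ceil(log2(n)). Computed by shifting rather than a
 * floating-point logarithm to avoid rounding issues. */
unsigned counter_width(unsigned n)
{
    unsigned x = 0;
    while ((1u << x) < n)
        x++;
    return x;
}

/* counter_width(32) == 5, counter_width(24) == 5, counter_width(33) == 6 */
```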
Generally speaking, control unit 404 is configured to increment counter 430 as described below and use the counter value as an index into buffer pool 402. The buffer pool entry 420 which is currently addressed, or indexed, by the counter 430 is checked for availability. In the embodiment shown, a valid bit 410 may be used to indicate whether or not an entry has already been allocated. Other methods of indicating the allocation or de-allocation of an entry may be utilized as well. If the entry which is currently indexed by the counter is available for allocation, control unit 404 disables the incrementing of counter 430. Subsequently, when a request 470 is detected, control unit 404 marks the currently indexed entry as allocated (e.g., by setting/resetting the valid bit 410) and request queue 220 conveys the tag corresponding to the currently indexed entry via bus 480. Control unit 404 may then resume incrementing counter 430 and searching buffer pool 402 for an available entry.
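The behavior just described can be sketched in C as follows (hypothetical names and sizes; this is only a software analogue of the hardware search, and it omits the last-buffer-de-allocated check and the fallback path, which are covered in later sketches):

```c
#include <stdbool.h>

#define NUM_ENTRIES 32

static bool     valid[NUM_ENTRIES];     /* true = entry allocated     */
static unsigned counter;                /* wrap-around index counter  */
static int      selected = -1;          /* entry chosen for next req  */

/* Advance the counter (wrapping) and check the indexed entry; called
 * repeatedly while no entry has yet been selected. */
void search_step(void)
{
    if (selected >= 0)
        return;                         /* incrementing is disabled   */
    counter = (counter + 1) % NUM_ENTRIES;
    if (!valid[counter])
        selected = counter;             /* hold this entry for the    */
}                                       /* next received request      */

/* On a received request, allocate the currently selected tag. */
int allocate_on_request(void)
{
    int tag = selected;
    if (tag < 0)
        return -1;                      /* no selection yet; see the  */
                                        /* fallback path below        */
    valid[tag] = true;                  /* mark allocated             */
    selected = -1;                      /* resume searching           */
    return tag;
}
```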
Control unit 404 may be further configured to detect if all entries of buffer pool 402 have been allocated, a condition which may be referred to as "full". When buffer pool 402 is detected to be full, control unit 404 may store an indication 490 of this condition. As noted above, each master may be allotted a limited number of requests and keep track of requests which it has issued. Consequently, a master will not issue requests when it has none available to issue, and request queue 220 does not receive requests when it is full. However, in an alternative embodiment, if a request 470 is received when the buffer pool 402 is full, request queue 220 may convey a full indication rather than a tag via bus 480. In one embodiment, full indication 490 may comprise a counter which is incremented each time an entry 420 is allocated, and decremented each time an entry 420 is de-allocated. If the full counter 490 equals the number of entries in the buffer pool 402, then a full condition is detected. In one embodiment, control unit 404 is configured to disable counter 430, and possibly other circuitry, while a full condition is detected. In a further embodiment, discussed below, control unit 404 may be configured to detect, or include an indication 491, that only one entry is available in the buffer pool 402.
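A minimal sketch of such a full/one-entry-left indication, assuming the allocation-counter approach described above (names are illustrative only):

```c
#include <stdbool.h>

#define NUM_ENTRIES 32

/* Hypothetical "full" tracking: a counter incremented on allocation
 * and decremented on de-allocation, compared against the pool size. */
static unsigned used;

void note_allocated(void)   { used++; }
void note_deallocated(void) { used--; }

bool pool_full(void)        { return used == NUM_ENTRIES; }      /* indication 490 */
bool one_entry_left(void)   { return used == NUM_ENTRIES - 1; }  /* indication 491 */
```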
In addition to the above, request queue 220 is configured to receive an indication 472 that a transaction has completed, or is done. For example, read data corresponding to a read transaction may have been received by bus interface 213. Done signal 472 includes an indication of the tag, originally provided by request queue 220, which corresponds to the completed transaction. In response to receiving the done signal 472, control unit 404 is configured to de-allocate the buffer pool entry 420 corresponding to the received tag. In the embodiment shown, de-allocation may simply comprise setting or re-setting the valid bit 410 as appropriate, thereby making the entry/tag available for allocation.
In one embodiment, control unit 404 includes fallback scan circuit 440 and a last buffer/tag de-allocated (LB) indication 442. In such an embodiment, when an entry is de-allocated (made available) by control unit 404, control unit 404 is further configured to store an indication of the entry/tag which was de-allocated in the location designated LB indication 442. Subsequently, while incrementing counter 430 and indexing into buffer pool 402, buffer pool entries 420 which are detected to be available are compared to the entry/tag value stored in LB indication 442. If the detected available entry 420 matches the last entry which was de-allocated (LB indication 442), the control unit 404 is configured to continue incrementing counter 430 and searching buffer pool 402 for an available entry. In this manner, the last buffer de-allocated is disfavored for immediate re-allocation and the probability of re-allocating a recently de-allocated tag is minimized.
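Extending the earlier search sketch, the de-allocation path and the last-buffer-de-allocated (LB) check might look like this (again a software analogue with hypothetical names, not the circuit itself):

```c
#include <stdbool.h>

#define NUM_ENTRIES 32

static bool     valid[NUM_ENTRIES];
static unsigned counter;
static int      selected = -1;
static int      last_deallocated = -1;   /* analogue of LB indication 442 */

/* Done signal: free the entry for the completed tag and remember it
 * as the last tag de-allocated. */
void deallocate(int tag)
{
    valid[tag] = false;
    last_deallocated = tag;
}

/* Search step, now disfavoring immediate re-use of the last tag
 * de-allocated: an available entry matching LB is skipped and the
 * counter keeps advancing. */
void search_step_with_lb(void)
{
    if (selected >= 0)
        return;
    counter = (counter + 1) % NUM_ENTRIES;
    if (!valid[counter] && (int)counter != last_deallocated)
        selected = counter;
}
```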
Fallback circuit 440 is configured to identify an available buffer pool entry in the event counter 430 has not identified an entry when a request is received. In one embodiment, fallback circuit 440 is configured to operate concurrently with the operation of counter 430. As already described above, counter 430 may search for an available buffer pool entry 420 in preparation for a next received request. This search for an entry 420 may include disfavoring a last entry de-allocated as described. However, it is possible for one or more requests to be received by queue 220 prior to control unit 404 identifying an available entry 420 with counter 430. In addition, even if counter 430 has identified an available entry, it is possible that more than one request may be concurrently received. In such an event, only one of the received requests may be allocated the entry identified by counter 430 while the other request is allocated an entry identified by fallback circuit 440.
In one embodiment, fallback circuit 440 comprises one or more levels of combinational logic coupled to valid bits 410. Circuit 440 is configured to identify entries 420 which are available. In one embodiment, circuit 440 is configured to scan entries 420 from the top entry 420-0 to the bottom entry 420-n. The first entry 420 identified as available, starting from the top, is selected as the "fallback" entry. This fallback entry is then allocated for a request in the event that counter 430 has not identified an available entry. In yet another embodiment, fallback circuit 440 may be configured to search entries 420 from the bottom 420-n to the top 420-0 while concurrently searching the entries 420 from top to bottom as described above. Such an embodiment may be desirable if, for example, queue 220 is configured to receive more than one request concurrently via two or more ports.
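A software analogue of the fallback scan, assuming the valid-bit array from the earlier sketches (in hardware this would be combinational priority-encoder logic rather than a loop):

```c
#include <stdbool.h>

#define NUM_ENTRIES 32

static bool valid[NUM_ENTRIES];

/* Fallback scan: pick the first available entry from the top
 * (entry 0) downward. Returns -1 if nothing is free. */
int fallback_top_down(void)
{
    for (int i = 0; i < NUM_ENTRIES; i++)
        if (!valid[i])
            return i;
    return -1;
}

/* Optional second scan from the bottom upward, usable when two
 * requests can arrive concurrently on separate ports. */
int fallback_bottom_up(void)
{
    for (int i = NUM_ENTRIES - 1; i >= 0; i--)
        if (!valid[i])
            return i;
    return -1;
}
```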
As mentioned above, in a further embodiment, if the detected available entry 420 matches the last entry which was de-allocated, the control unit 404 may be configured to determine whether this entry is the only entry currently available. If the entry is the only entry currently available, control unit 404 may stop incrementing the counter 430 and searching for a further entry. Subsequently, when an additional entry is de-allocated, control unit 404 may resume the incrementing/searching process. Alternatively, when an entry is subsequently de-allocated, control unit 404 may be configured to detect that it currently (prior to the de-allocation) has only one entry available. In such a case, rather than immediately resuming the counter/search process, control unit 404 may be configured to directly set the counter 430 to the value indicated by fallback scan circuit 440 and store the newly de-allocated entry/tag as the new fallback tag. A subsequently de-allocated entry/tag may then result in the control unit 404 resuming the counter/search process.
If a request 470 then arrives while control unit 404 is searching for an available entry 420, control unit 404 is configured to provide the tag which is indicated by fallback circuit 440 via bus 480. In one embodiment, when providing the tag indicated by fallback circuit 440, control unit 404 is configured to utilize the indicated tag to directly address the entry 420 in buffer pool 402 which corresponds to the fallback tag and store an indication that the entry has been allocated.
It is noted that in certain embodiments buffer pool entries 420 may be addressed in increments of one and the value of counter 430 may be directly used to index into buffer pool 402. However, in alternative embodiments, buffer pool entries 420 may be addressed in increments other than one. In such an embodiment, control unit 404 may be configured to multiply the value of counter 430 as appropriate in order to index into buffer pool 402.
Returning to decision block 606, if the currently indexed entry is detected to be available, a determination is made as to whether the currently indexed entry matches the last buffer entry de-allocated (LB) (decision block 616). If the currently indexed entry is available and does not match the last buffer de-allocated, the entry currently indexed by the counter is selected as the next entry to be allocated and a wait state (block 620) is entered until the currently indexed entry is allocated. Once the currently indexed entry is allocated (block 620), the counter is again incremented (block 602). On the other hand, if the currently indexed available entry matches the currently stored LB (decision block 616), processing returns to block 602. In one embodiment, if the counter mechanism identifies and rejects an available entry because it matches the last entry de-allocated (decision block 616), that entry may serve as an alternate fallback tag. This alternate fallback tag could be used, for example, in the event a request is received and the counter mechanism has not identified an available entry.
Processing block 680 illustrates the processing of received transactions. Wait state 608 waits for a transaction requiring processing. In the event a transaction is detected (decision block 608), a determination is made (decision block 610) as to whether the detected transaction is a request for a tag or an indication that a previous transaction has completed. If the detected transaction is a request for a new tag, the request is serviced (block 614) by providing a new tag as described above. If, on the other hand, the detected transaction indicates completion of an earlier transaction, the tag/entry corresponding to the detected transaction is de-allocated and an indication of this de-allocated tag/entry is stored as a last buffer de-allocated (LB) (block 612).
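Sketching this dispatch in C, reusing the helpers from the earlier sketches (assumed available; all names are illustrative):

```c
/* Helpers sketched earlier in this description (assumed available). */
int  allocate_on_request(void);
int  fallback_top_down(void);
void deallocate(int tag);

enum txn_type { TXN_REQUEST, TXN_DONE };

struct txn {
    enum txn_type type;
    int           tag;                     /* meaningful only for TXN_DONE */
};

/* Dispatch mirroring decision blocks 608/610: a received transaction
 * is either a request for a new tag (serviced per block 614) or the
 * completion of an earlier transaction (de-allocated per block 612). */
int process_transaction(const struct txn *t)
{
    if (t->type == TXN_REQUEST) {
        int tag = allocate_on_request();   /* tag pre-selected by counter */
        if (tag < 0)
            tag = fallback_top_down();     /* otherwise use fallback tag  */
        return tag;                        /* conveyed via bus 480        */
    }
    deallocate(t->tag);                    /* also records tag as LB      */
    return -1;
}
```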
In the embodiment of
As already described above, a transaction may be received prior to the counter mechanism identifying an entry/tag for allocation. Consequently, wait state 608 may be viewed as being active concurrently with blocks 602, 604, 606, 616, and 620 which generally correspond to the identification and selection of an entry/tag by the counter mechanism. In one embodiment, reception (decision block 610) and servicing (decision block 614) of a request may generally correspond to the embodiment illustrated by
Numerous variations and modifications will become apparent to those skilled in the art once the above disclosure is fully appreciated. For example, while the request queue has been described in the context of a bus master in a processing unit, the method and mechanism described herein may be utilized in any context wherein the allocation of tags may be required. It is intended that the following claims be interpreted to embrace all such variations and modifications.