A method, processor, and data processing system for enabling utilization of a single prefetch stream to access data across a memory page boundary. A prefetch engine includes an active streams table in which information for one or more scheduled prefetch streams is stored. The prefetch engine also includes a victim table for storing a previously active stream whose next prefetch crosses a memory page boundary. Scheduling logic issues a prefetch request with a real address to fetch data from lower level memory. Then, responsive to detecting that the real address of the stream's next sequential prefetch crosses the memory page boundary, the prefetch engine determines, via an effective address comparison, whether the stream can continue across the page boundary of the first memory page. If so, the prefetch engine automatically reinserts the stream into the active streams table to jump start prefetching across the page boundary.
1. In a data processing system comprising a processor, and a memory subsystem with at least one cache and a memory configured with memory pages for storing data, a method comprising:
issuing a prefetch request of a first prefetch stream to fetch one or more data from the memory subsystem, wherein the prefetch request has a first real address corresponding to a memory location within a first memory page;
determining when a next prefetch request of the first prefetch stream targets a next real address that crosses a page boundary of the first memory page, wherein the next prefetch targets data on a next sequential memory page;
moving the first prefetch stream from an active streams table to a second table, wherein prefetch streams are scheduled to be issued when within the active streams table and are not scheduled when in the second table; and
in response to an effective address of a subsequent memory access instruction being equal to a next effective address corresponding to the next real address, automatically reinserting the first prefetch stream into the active streams table from the second table.
20. A processor comprising:
one or more processing units that issue requests for data;
a level 1 (L1) cache for storing data received during data processing;
a prefetch engine associated with the processing units and which includes:
an active streams table in which information for one or more prefetch streams that are scheduled for prefetching operations is stored;
logic for scheduling the one or more prefetch streams to return data from a lower level memory to the L1 cache, said logic comprising logic for issuing a prefetch request of a first prefetch stream to fetch one or more data from the lower level memory, wherein the prefetch request has a first real address corresponding to a memory location within a first memory page; and
logic, responsive to detecting that a next sequential prefetch request of the first prefetch stream crosses a page boundary of the first memory page, for automatically jump starting a continuation of the first prefetch stream following a determination that the first prefetch stream is to continue to retrieve data from a next page located across the page boundary of the first memory page.
8. A data processing system comprising:
a central processing unit that issues requests for data;
a memory subsystem having at least one cache that stores data and a lower level memory configured with memory pages accessible via real addresses;
a prefetch engine associated with the central processing unit and which includes:
an active streams table in which information for one or more prefetch streams that are scheduled for prefetching operations is stored;
logic for scheduling the one or more prefetch streams to return data from the lower level memory to the cache, said logic comprising logic for issuing a prefetch request of a first prefetch stream to fetch one or more data from the lower level memory, wherein the prefetch request has a first real address corresponding to a memory location within a first memory page; and
logic, responsive to detecting that a next sequential prefetch request of the first prefetch stream crosses a page boundary of the first memory page, for automatically jump starting a continuation of the first prefetch stream following a determination that the first prefetch stream is to continue to retrieve data from a next page located across the page boundary of the first memory page.
16. A computer program product comprising:
a non-transitory computer storage medium; and
program code on the non-transitory computer storage medium that, when executed within a processing system, completes the functions of:
issuing a prefetch request of a first prefetch stream to fetch one or more data from the memory subsystem, wherein the prefetch request has a first real address corresponding to a memory location within a first memory page;
determining when a next prefetch request of the first prefetch stream targets a next real address that crosses a page boundary of the first memory page, wherein the next prefetch targets data on a next sequential memory page;
moving the first prefetch stream from an active streams table to a second table, wherein prefetch streams are scheduled to be issued when within the active streams table and are not scheduled when in the second table;
in response to an effective address of a subsequent memory access instruction being equal to a next effective address corresponding to the next real address, automatically reinserting the first prefetch stream into the active streams table from the second table; and
subsequently scheduling and issuing the next prefetch request within the first prefetch stream to continue prefetching data from the next sequential memory page.
2. The method of
3. The method of
reinserting the first prefetch stream with a scheduling priority that enables the data of the first prefetch stream to arrive at the cache at a time the data is demanded by the processor.
4. The method of
assigning a first scheduling priority to the first prefetch stream; and
dynamically changing the first scheduling priority by completing one of:
deterministically changing the first scheduling priority by a priority count that correlates to the amount of time elapsed between the return of the data and the receipt of the demand load from the processor, wherein a larger time results in a larger change in the first scheduling priority; and
increasing the first scheduling priority after one of: (a) the return of data following the demand load has occurred more than a first preset sequential number of times for the prefetch stream; and (b) the return of data more than the minimum amount of time before receiving the demand load has occurred more than a second preset sequential number of times for the prefetch stream.
5. The method of
automatically discarding the first prefetch stream from the second table; and
initiating a new stream to prefetch data from the real address corresponding to the effective address of the subsequent memory access instruction.
6. The method of
receiving the first prefetch request from the processor with a real address of a first memory location from which to retrieve data;
receiving the effective address corresponding to the real address;
storing the effective address;
evaluating and storing a stride length for the first prefetch stream; and
determining what next effective address would cross the page boundary by adding the stride length to the effective address of the last prefetch request that occurs within the first memory page.
7. The method of
while the first prefetch stream is within the second table, comparing each subsequently received effective address of processor issued memory access operations with the next effective address.
9. The data processing system of
a second table for temporarily storing stream information for a stream whose prefetch reaches a memory page boundary; and
wherein said logic for automatically jump starting the continuation of the first prefetch stream comprises:
logic for determining when a next prefetch request of the first prefetch stream targets a next real address that crosses a page boundary of the first memory page, wherein the next prefetch targets data on a next sequential memory page;
logic for moving the first prefetch stream from an active streams table to the second table, wherein prefetch streams are scheduled to be issued when within the active streams table and are not scheduled when in the second table; and
in response to an effective address of a subsequent memory access instruction being equal to a next effective address corresponding to the next real address, logic for automatically reinserting the first prefetch stream into the active streams table from the second table.
10. The data processing system of
11. The data processing system of
12. The data processing system of
logic for assigning a first scheduling priority to the first prefetch stream; and
logic for dynamically changing the first scheduling priority by completing one of:
deterministically changing the first scheduling priority by a priority count that correlates to the amount of time elapsed between the return of the data and the receipt of the demand load from the processor, wherein a larger time results in a larger change in the first scheduling priority; and
increasing the first scheduling priority after one of: (a) the return of data following the demand load has occurred more than a first preset sequential number of times for the prefetch stream; and (b) the return of data more than the minimum amount of time before receiving the demand load has occurred more than a second preset sequential number of times for the prefetch stream.
13. The data processing system of
logic for, when an effective address of a subsequent memory access instruction is not equal to a next effective address corresponding to the next real address:
automatically discarding the first prefetch stream from the second table; and
initiating a new stream to prefetch data from the real address corresponding to the effective address of the subsequent memory access instruction.
14. The data processing system of
said prefetch engine further comprises an effective address (EA) table; and
wherein said logic for storing includes logic for storing the EA within the EA table.
15. The data processing system of
logic for receiving the first prefetch request from the processor with a real address of a first memory location from which to retrieve data;
logic for receiving the effective address corresponding to the real address;
logic for storing the effective address;
logic for evaluating and storing a stride length for the first prefetch stream;
logic for determining what next effective address would cross the page boundary by adding the stride length to the effective address of the last prefetch request that occurs within the first memory page; and
logic for comparing each subsequently received effective address of processor issued memory access operations with the next effective address, while the first prefetch stream is within the second table.
17. The computer program product of
assigning a first scheduling priority to the first prefetch stream that enables the data of the first prefetch stream to arrive at the cache at a time the data is demanded by the processor;
reinserting the first prefetch stream with the first scheduling priority; and
dynamically changing the first scheduling priority by completing one of:
deterministically changing the first scheduling priority by a priority count that correlates to the amount of time elapsed between the return of the data and the receipt of the demand load from the processor, wherein a larger time results in a larger change in the first scheduling priority; and
increasing the first scheduling priority after one of: (a) the return of data following the demand load has occurred more than a first preset sequential number of times for the prefetch stream; and (b) the return of data more than the minimum amount of time before receiving the demand load has occurred more than a second preset sequential number of times for the prefetch stream.
18. The computer program product of
automatically discarding the first prefetch stream from the second table; and
initiating a new stream to prefetch data from the real address corresponding to the effective address of the subsequent memory access instruction.
19. The computer program product of
receiving the first prefetch request from the processor with a real address of a first memory location from which to retrieve data;
receiving the effective address corresponding to the real address;
storing the effective address;
evaluating and storing a stride length for the first prefetch stream;
determining what next effective address would cross the page boundary by adding the stride length to the effective address of the last prefetch request that occurs within the first memory page; and
while the first prefetch stream is within the second table, comparing each subsequently received effective address of processor issued memory access operations with the next effective address.
21. The processor of
a second table for temporarily storing stream information for a stream whose prefetch reaches a memory page boundary; and
wherein said logic for automatically jump starting the continuation of the first prefetch stream comprises:
logic for determining when a next prefetch request of the first prefetch stream targets a next real address that crosses a page boundary of the first memory page, wherein the next prefetch targets data on a next sequential memory page;
logic for moving the first prefetch stream from an active streams table to the second table, wherein prefetch streams are scheduled to be issued when within the active streams table and are not scheduled when in the second table;
in response to an effective address of a subsequent memory access instruction being equal to a next effective address corresponding to the next real address, logic for automatically reinserting the first prefetch stream into the active streams table from the second table; and
logic for subsequently scheduling and issuing the next prefetch request within the first prefetch stream to continue prefetching data from the next sequential memory page.
22. The processor of
23. The processor of
logic for, when an effective address of a subsequent memory access instruction is not equal to a next effective address corresponding to the next real address:
automatically discarding the first prefetch stream from the second table; and
initiating a new stream to prefetch data from the real address corresponding to the effective address of the subsequent memory access instruction.
24. The processor of
logic for receiving the first prefetch request from the processor with a real address of a first memory location from which to retrieve data;
logic for receiving the effective address corresponding to the real address;
logic for storing the effective address;
logic for evaluating and storing a stride length for the first prefetch stream;
logic for determining what next effective address would cross the page boundary by adding the stride length to the effective address of the last prefetch request that occurs within the first memory page; and
logic for comparing each subsequently received effective address of processor issued memory access operations with the next effective address, while the first prefetch stream is within the second table.
25. The processor of
said prefetch engine further comprises an effective address (EA) table; and
wherein said logic for storing includes logic for storing the EA within the EA table.
This invention was made with United States Government support under Agreement No. HR0011-07-9-0002 awarded by DARPA. The Government has certain rights in the invention.
1. Technical Field
The present invention relates generally to data processing systems and more particularly to fetching data for utilization during data processing. Still more particularly, the present invention relates to data prefetching operations in a data processing system.
2. Description of Related Art
Conventional computer systems are designed with a memory hierarchy comprising different memory devices with increasing access latency the further the device is from the processor. Processors typically operate at very high speed and are capable of executing instructions at such a fast rate that it is necessary to prefetch a sufficient number of cache lines of data from lower level caches (and/or system memory) to avoid the long latencies that occur when a cache miss occurs. This prefetching ensures that the data is ready and available when needed for utilization by the processor.
Conventional prefetch operations involve a prefetch engine that monitors accesses to the L1 cache and, based on the observed patterns, issues requests for data that is likely to be referenced in the future. If the prefetch request succeeds, the processor's request for data will be resolved by loading the data from the L1 cache on demand, rather than the processor stalling while waiting for the data to be fetched/returned from lower level memory.
In conventional processor configurations, the effective address of a prefetch instruction (or of a memory access instruction, such as a demand load) passes through a translation mechanism, such as a translation lookaside buffer (TLB), which translates the effective address into a corresponding real address. The TLB then passes the real address to the prefetch engine to execute the prefetch at the lower level memory.
Within lower level memory, data are stored in memory blocks and addressed by real addresses. Sequential data are typically stored in sequential memory blocks, which are accessed by their corresponding sequential real addresses. Also, a configurable number of these sequential memory blocks make up a memory page, and pages are separated by known address boundaries. While sequentially adjacent pages have sequential real address assignments from page to page, an executing program's allocation of effective addresses (for processor operations) does not necessarily match the same sequential allocation. Programs that have sequential streams of data typically access the data in a linear manner in the effective address space. Thus, it is quite common for a pair of sequential effective addresses at a page boundary to correspond to real addresses on pages that are not sequentially adjacent to each other (i.e., the real addresses are not sequential).
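By way of illustration (and not as part of any claimed apparatus), the following minimal C sketch models this behavior with an assumed 4 KiB page size and made-up real page frame numbers; `translate()` is a hypothetical stand-in for a TLB lookup, not any particular translation hardware:

```c
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12               /* assumed 4 KiB pages */
#define PAGE_SIZE  (1u << PAGE_SHIFT)

/* Hypothetical EA-page -> RA-page mapping (stand-in for a TLB/page table).
 * The two real page frames are deliberately not adjacent. */
static uint64_t translate(uint64_t ea)
{
    static const uint64_t ea_to_ra_page[] = { 0x7350, 0x11A2 };
    uint64_t ea_page = ea >> PAGE_SHIFT;
    return (ea_to_ra_page[ea_page & 1] << PAGE_SHIFT) | (ea & (PAGE_SIZE - 1));
}

int main(void)
{
    uint64_t ea_last = PAGE_SIZE - 128;  /* last 128-byte block of EA page 0 */
    uint64_t ea_next = PAGE_SIZE;        /* first block of EA page 1         */
    printf("EA 0x%llx -> RA 0x%llx\n", (unsigned long long)ea_last,
           (unsigned long long)translate(ea_last));
    printf("EA 0x%llx -> RA 0x%llx\n", (unsigned long long)ea_next,
           (unsigned long long)translate(ea_next));
    /* The two RAs land on frames 0x7350 and 0x11A2: the EAs are sequential,
     * but the next real address is not the previous RA plus 128. */
    return 0;
}
```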
Typically, when prefetching data, the prefetch engine utilizes some set sequence to identify a stream of cache lines to be fetched and a stride pattern. A “prefetch stream” may refer to a stream of addresses (and the blocks associated with those addresses) that are prefetched into the cache as a result of the detected prefetch pattern. When prefetching data using prefetch streams, the memory controller sources the data sequentially from a memory page using sequential real addresses. The sequential real addresses may, however, cross page boundaries, which results in the prefetch engine stopping the stream. The prefetch engine stops the stream because it has no way of determining whether the next data block found sequentially in the physical address space is mapped to a correspondingly sequential block in the effective address space. To reduce the potential for polluting the cache with non-sequential prefetches, the prefetcher stops issuing prefetch requests at each physical page boundary.
When the real addresses within a stream cross over the boundary of a physical page of memory, the prefetch engine stops the stream at the boundary because the effective addresses that target adjacent memory pages are not necessarily assigned in sequence. If the prefetch engine were to continue across the boundary based on the sequential effective addresses, the prefetch engine might begin prefetching data that does not actually belong to the current stream. Thus, with conventional implementations of prefetch engines, the prefetch engine stops a stream when the stream crosses a page boundary. The prefetch engine may later detect/initiate a new, different stream to prefetch the remaining data that will be demanded by the processor.
Disclosed are a method and data processing system for enabling utilization of a single prefetch stream to access data across a memory page boundary. A prefetch engine (PE) includes an active streams table in which information for one or more scheduled prefetch streams is stored. The PE also includes a victim table for storing a previously active stream whose next prefetch crosses a memory page boundary. Scheduling logic issues a prefetch request with a real address to fetch data from the lower level memory. Then, responsive to detecting that the real address of the stream's next sequential prefetch crosses the memory page boundary, the PE determines, via an effective address comparison, whether the stream can continue across the page boundary of the first memory page. If so, the PE automatically reinserts the stream into the active streams table to jump start prefetching across the page boundary.
All features and advantages of the present invention will become apparent in the following detailed written description.
The illustrative embodiments will best be understood by reference to the following detailed description of an illustrative embodiment when read in conjunction with the accompanying drawings, wherein:
The present invention provides a method and data processing system for jump starting stream prefetching after encountering a memory page boundary.
Referring now to the drawings and in particular to
CPU 105 connects to and communicates with a memory hierarchy that includes an L1 data cache 110, one (or more) lower level caches 125, and memory 130 with its associated memory controller 127. Memory controller 127 controls accesses to memory 130. As will become clear below, L1 data cache 110 serves as a prefetch buffer for data (and/or data streams) that are prefetched. In the illustrative embodiment, L1 data cache 110 has a corresponding load miss queue (LMQ) 112, which the cache utilizes to save information about ongoing prefetch requests. Lower level caches 125 may comprise a single level two (L2) cache or multiple other sequentially numbered lower levels, e.g., L3, L4. In addition to the illustrated memory hierarchy, data processing system 100 may also comprise additional storage devices that form a part of the memory hierarchy from the perspective of CPU 105. Such storage devices may be one or more electronic storage media, such as a floppy disk, hard drive, CD-ROM, or digital versatile disk (DVD). CPU 105 communicates with each of the above devices within the memory hierarchy by various means, including via busses and/or direct channels.
Load store unit (LSU) 115, coupled to CPU 105, includes a load/store queue (LSQ) 117 and issues memory access operations (loads and stores) that retrieve prefetched data or cause the data to be fetched from the memory subsystem. A prefetch engine (PE) 120 is coupled to LSU 115 via a translation mechanism 107, indicated as a translation lookaside buffer (TLB) or an effective to real address table (ERAT). PE 120 includes logic that enables the various enhanced prefetching features of the embodiments described herein. As utilized herein, the term prefetching refers to the method by which data that is stored in one memory location of the memory hierarchy (e.g., system memory 130) is transferred to a higher level memory location (e.g., L1 data cache 110) that is closer to (yields lower access latency for) the CPU 105, before the CPU 105 actually requests/demands the data. More specifically, prefetching, as described hereinafter, refers to the early retrieval of data from lower level memory 130 to the data cache 110 before the CPU 105 issues a demand for the specific data being returned.
In a software-based prefetch implementation, during normal execution of program code, CPU 105 encounters and executes/issues a prefetch instruction before the CPU executes a load instruction associated with the same data. The prefetch instruction instructs the PE 120 to prefetch the data from the lower memory location and to store the data in the data cache 110. The translation mechanism 107 translates the effective addresses within the prefetch instruction (or the memory access operation which triggers a subsequent prefetch) into corresponding real addresses and forwards the prefetch instruction (or memory access operation) with the real addresses to the PE 120. Logic within the PE 120 then determines a prefetch stream and schedules the prefetch stream to prefetch multiple sequential blocks of data. The CPU 105 subsequently executes the corresponding load (or other memory access) instruction, which instructs the LSU 115 to load the data from the data cache 110 into one of the CPU's execution registers. To load the data, the LSU 115 issues a memory access request (e.g., a read/write) to the data cache 110.
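For illustration only, the handoff just described can be sketched as follows; `tlb_translate` and `pe_install_stream` are hypothetical stand-ins for the translation mechanism 107 and the stream-creation entry point of PE 120, not functions defined by this disclosure:

```c
#include <stdint.h>

/* Hypothetical interfaces: EA -> RA translation (TLB/ERAT 107) and
 * stream installation in the prefetch engine 120. */
uint64_t tlb_translate(uint64_t ea);
void     pe_install_stream(uint64_t ea, uint64_t ra);

/* Path of a software-initiated prefetch: the EA carried by the prefetch
 * instruction is translated first, and both addresses reach the PE,
 * which detects a stream and schedules sequential block prefetches. */
void on_prefetch_instruction(uint64_t ea)
{
    uint64_t ra = tlb_translate(ea);
    pe_install_stream(ea, ra);
}
```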
Those skilled in the art will further appreciate that while a particular configuration of data processing system 100 is illustrated and described, other configurations are possible, utilizing functional components within and/or associated with the data processing system to achieve the same functional results. The illustrative embodiments contemplate that all such configurations fall within the scope of the embodiments and their equivalents.
Also, while the illustrative embodiments have been, and will continue to be, described in the context of a fully functional data processing system, those skilled in the art will appreciate that the software aspects of an illustrative embodiment are capable of being distributed as a program product in a variety of forms, and that an illustrative embodiment applies equally regardless of the particular type of media used to actually carry out the distribution.
Also, it is understood that the use of specific parameter names is for example only and is not meant to imply any limitations on the invention. The invention may thus be implemented with different nomenclature/terminology utilized to describe the various parameters (e.g., logic, tables, and the like), without limitation.
PE 120 also comprises active streams unit 230, which includes active streams table 232 and prefetch request issue logic 237. Also, in accordance with the described embodiments, PE 120 comprises a victim table 240. PE 120, and specifically active streams table 232, concurrently maintains information about multiple, independent prefetch streams. Three entries of active streams information (shown collectively as entries 234 of active prefetch stream information) are illustrated within active streams table 232, representing different streams that the PE 120 currently prefetches.
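For concreteness, these tables can be modeled as in the following C sketch; the field names, field widths, and table sizes are illustrative assumptions, not the actual layout of PE 120:

```c
#include <stdbool.h>
#include <stdint.h>

/* One stream's bookkeeping. The 'ea' field plays the role of the EA table
 * 207 entry associated with the stream's current real address. */
typedef struct {
    bool     valid;
    uint64_t ra;        /* real address of the next block to prefetch */
    uint64_t ea;        /* effective address corresponding to 'ra'    */
    int64_t  stride;    /* detected stride, in bytes                  */
    unsigned priority;  /* scheduling priority                        */
} stream_entry_t;

typedef struct {
    stream_entry_t active[8];  /* active streams table 232 (size assumed) */
    stream_entry_t victim[4];  /* victim table 240 (size assumed)         */
} prefetch_engine_t;
```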
According to the embodiments described herein, prefetch request issue logic 237 sends out prefetch requests at times determined by a set scheme that enables the requested data of each stream to arrive at the data cache prior to (or just at) the time the CPU issues a load demand for the particular data. The scheduled prefetch issue times may be based on a round robin scheme, a FIFO queue, or some dynamic allocation, such as a priority-based round robin scheme. The actual scheduling scheme is not directly relevant to the description or implementation of the core features provided by the illustrative embodiments.
In the depicted embodiment, prefetch request issue logic 237 comprises (or is represented by) two functional logic blocks: first logic (or scheduling logic) 210 and second logic, or real address page crossing (RAPC) logic, 205. The first and second logic together enable the scheduling of a prefetch stream across a memory page boundary, as well as other functions performed by the PE 120. Scheduling logic 210 performs the basic scheduling of the multiple streams for issuance to the memory subsystem. RAPC logic 205 operates in conjunction with victim table 240 to resume processing the same prefetch stream when the RAPC logic 205 determines that the stream may cross the memory page boundary. As described in greater detail below, one of scheduling logic 210 and RAPC logic 205 also suspends the stream's prefetches once the stream prefetch reaches the memory page boundary.
As utilized herein, the term logic refers to one or a combination of software utility and/or pseudo code and hardware registers and components. Also, logic may refer to a singular construct or a plural construct, such that multiple different logic blocks within the PE 120 perform different parts of the functions involved in scheduling the streams and the other functions described herein. The logic operates to ensure that the data prefetch operation for a particular stream completes (i.e., returns the fetched cache line(s) to the data cache 110) at substantially the time (or clock cycle) at which the processor issues a demand for that cache line data. The functionality provided by the described and illustrated embodiments enables the data prefetch mechanisms within PE 120 to allow a single stream prefetch to continue fetching data beyond a memory page boundary, without requiring a restart or reconfiguration of the prefetch stream for the new memory page.
The RAPC logic 205 responds to detection of a page boundary by removing the prefetch stream from the active streams table 232. Each entry within the active streams table includes a real address (Addr) entry along with other stream information. The Addr entry is the real address corresponding to an effective address within the EA table 207 (translated by the translation mechanism).
In one embodiment, RAPC logic 205 maintains an EA table 207 of the effective addresses corresponding to each active stream's prefetch real addresses. The PE 120 receives the initial EAs from the processor operations when the processor issues the initial prefetch instruction (or memory access operation) that results in the creation of the prefetch stream. RAPC logic 205 removes the stream information (of the stream that encountered the memory page boundary) from the active streams table 232 and places the stream information in the victim table 240, which the RAPC logic 205 maintains. The RAPC logic 205 then assigns/tags the corresponding effective address to the stream information in the victim table 240.
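Continuing the hypothetical model sketched earlier (and again as illustration only), the demotion step this paragraph describes might look like the following; the 4 KiB page size and four-entry victim table are assumptions:

```c
#define PAGE_SIZE 4096u                        /* assumed page size */
#define PAGE_MASK (~((uint64_t)PAGE_SIZE - 1))

/* Demote a stream whose next sequential prefetch would leave its current
 * page: remove it from the active table and park it in the victim table,
 * tagged with the EA the continuation must match (last EA + stride). */
static void demote_on_page_cross(prefetch_engine_t *pe, int i)
{
    stream_entry_t s = pe->active[i];
    if (((s.ra + (uint64_t)s.stride) & PAGE_MASK) == (s.ra & PAGE_MASK))
        return;                          /* next prefetch stays on this page */
    s.ea += s.stride;                    /* expected EA of the continuation  */
    for (int v = 0; v < 4; v++) {        /* place in first free victim slot  */
        if (!pe->victim[v].valid) { pe->victim[v] = s; break; }
    }
    pe->active[i].valid = false;         /* stream is no longer scheduled    */
}
```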
In the described embodiment, much of the functionality provided by the method is implemented by scheduling logic 210 and/or RAPC logic 205 operating/executing within PE 120. However, for simplicity, the method is generally described from the perspective of the PE 120, which encompasses both of the logic components, as well as the tables, queues, and other components/devices illustrated and described herein.
Turning now to
At block 408, the PE (logic) monitors the stream execution relative to the known/pre-identified memory page boundary for that stream. At block 410, the PE 120 determines whether the next sequential real address within a prefetch stream crosses the memory page boundary. If the next address does not cross the memory page boundary, the PE 120 continues to execute the stream prefetch, as shown at block 412. The PE's execution of the prefetch stream continues as scheduled until the processor terminates the stream or the stream encounters a page boundary.
However, if, at decision block 410, the stream does encounter a page boundary, the PE 120 moves the particular stream (Stream2) out of the active streams table 232 and into the victim table 240, as provided at block 414. This movement results in table allocation B of
Next, at block 418, the PE 120 determines whether the EAs are the same. If the EAs are the same, indicating that the data for the new memory page can be prefetched with the same, previously active stream, the PE 120 moves the stream information from the victim table 240 back into the active streams table 232, as shown at block 420. The PE (scheduling logic) then continues prefetching data from the new memory page using the same, previously established stream. Table allocation C1 of
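Continuing the same hypothetical model, the compare-and-reinsert step of blocks 418 and 420 might be sketched as:

```c
/* On each EA the PE receives from subsequent memory access operations,
 * check the victim table: a match on the expected next EA moves the
 * stream straight back into the active table, now pointing at the
 * demand's real address on the new page. */
static bool try_jump_start(prefetch_engine_t *pe,
                           uint64_t demand_ea, uint64_t demand_ra)
{
    for (int v = 0; v < 4; v++) {
        stream_entry_t *s = &pe->victim[v];
        if (!s->valid || s->ea != demand_ea)
            continue;                    /* not this stream's continuation */
        s->ra = demand_ra;               /* resume on the new memory page  */
        for (int i = 0; i < 8; i++) {    /* reinsert into a free slot      */
            if (!pe->active[i].valid) { pe->active[i] = *s; break; }
        }
        s->valid = false;
        return true;                     /* same stream continues          */
    }
    return false;   /* no match: caller may discard and start a new stream */
}
```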
Notably, in the illustration of
Returning to
Thus, with the above described functionality of the illustrative embodiments, the PE 120 jump starts the prefetching of a stream across a page boundary. Notably, the stream information within the victim table 240 has an EA tag in addition to the real address. Also, in one embodiment, the PE 120 factors in the stride pattern of the stream. Thus, when the stride pattern is not singular (i.e., +1), the PE's determination of the next EA adds the stride to the last EA of the stream prefetch before the page boundary is encountered. The comparison yields a positive result only when a subsequent effective address of a demand load matches the sum of the stream's last effective address plus the stream's stride. When the result is positive, the stream is immediately moved back into the active streams table. The described embodiments save the start-up cost of initializing a new stream and also remove the cost associated with stopping and discarding a stream, because the stream simply continues across the page boundary.
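A short worked example of that match condition, with purely illustrative numbers:

```c
#include <stdbool.h>
#include <stdint.h>

/* The stream's last prefetch EA before the boundary is 0xFF80 and the
 * detected stride is +128 bytes, so the victim entry is tagged with
 * next EA = 0xFF80 + 0x80 = 0x10000 (first line of the new EA page). */
static bool continues_stream(uint64_t demand_ea)
{
    const uint64_t last_ea = 0xFF80;
    const int64_t  stride  = 128;
    return demand_ea == last_ea + (uint64_t)stride;  /* true only for 0x10000 */
}
```

A demand load with EA 0x10000 jump starts the stream; any other EA leaves the victim entry untouched, and the entry is eventually discarded in favor of a newly detected stream.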
It is important to note that although the present invention has been described in the context of a data processing system, those skilled in the art will appreciate that the mechanisms of the present invention are capable of being distributed as a program product in a variety of forms, and that the present invention applies equally regardless of the particular type of signal bearing media utilized to actually carry out the distribution. Examples of signal bearing media include, without limitation, recordable type media such as floppy disks or compact discs and transmission type media such as analog or digital communications links.
While the invention has been particularly shown and described with reference to a preferred embodiment, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention.
Inventors: Speight, William E.; Zhang, Lixin