An apparatus is described for interleaving bank and page access to a multibank memory device, such as an SDRAM or SLDRAM. An address detector detects a pending page access, and the associated data transfer request is then stored in a page hit register. A control timing chain includes a rank register queue with a bank access input, a page write input, and a page read input. Comparator circuitry provides bank address comparisons to avoid bank conflicts and to control the timing of insertion of the page hit register contents into the appropriate page write or page read input. While a pending page access request is stored in the page hit register, other pending bank access operations can be initiated. Consequently, bank and page accesses can be interleaved in substantially contiguous command cycles, and data transfer bandwidth is correspondingly improved.
16. A memory controller adapted for use with a memory having a plurality of banks, comprising:
a request queue operable to receive a plurality of memory access requests; a timing chain circuit coupled to the request queue and having a plurality of rank registers sequentially coupled together; and a page hit register coupled to the request queue and to at least some of the plurality of rank registers, the page hit register being operable to store a page hit write request in a write rank register based on a write latency of the memory, and being operable to store a page hit read request in a read rank register based on a read latency of the memory.
1. A computer system, comprising:
a processor; a memory operable to store data and having a plurality of banks; and a memory controller coupling the processor with the memory, the memory controller operable to initiate a first data transfer request directed to a first of the banks, to temporarily store a second data transfer request that is a page hit to the first bank, and to initiate a third data transfer request directed to a second of the banks prior to initiation of the second data transfer request, the memory controller including: a request queue operable to receive a plurality of memory access requests; timing chain circuitry having a plurality of rank registers sequentially coupled between the request queue and the memory; and a page hit register coupled to the request queue and to at least some of the plurality of rank registers, the page hit register being operable to store a page hit write request in a write rank register based on a write latency of the memory, and being operable to store a page hit read request in a read rank register based on a read latency of the memory.
10. A memory controller for controlling operations of a memory having a plurality of banks, comprising:
a request queue operable to store first and second data transfer requests; a control state machine coupled with the request queue and operable to receive each of the data transfer requests at respective times, the control state machine responsively applying a plurality of control signals to the memory to initiate corresponding data transfer operations therewith; an address detector coupled with the request queue and operable to determine if the first and second data transfer requests are directed to a same page, the address detector responsively asserting a page hit signal; a page hit register coupled with the request queue and with the address detector, the page hit register operable to receive the asserted page hit signal and responsively store the second data transfer request; and a control timing chain coupled with the request queue and with the page hit register, the control timing chain including a rank register queue through which data transfer request information propagates between the request queue and the memory, the rank register queue including a bank access input and a page access input separate from the bank access input, the bank access input coupled with the request queue to receive first information associated with the first data transfer request, and the page access input coupled with the page hit register to receive second information associated with the second data transfer request, the rank register queue having a plurality of rank registers; and wherein the page hit register is coupled to at least some of the plurality of rank registers, the page hit register being operable to store a page hit write request in a write rank register based on a write latency of the memory, and being operable to store a page hit read request in a read rank register based on a read latency of the memory.
6. A memory controller for controlling operations of a memory having a plurality of banks, comprising:
a request queue operable to store a plurality of data transfer requests; a control state machine coupled with the request queue and operable to receive the data transfer requests at respective times, the control state machine responsively applying a plurality of control signals to the memory to initiate corresponding data transfer operations therewith; an address detector coupled with the request queue and operable to determine if an address of a first one of the data transfer requests corresponds to an address of a second one of the data transfer requests, the address detector responsively asserting a hit signal; a hit register coupled with the request queue and with the address detector, the register operable to receive the asserted hit signal and responsively store the second data transfer request; and a control timing chain coupled with the request queue, with the control state machine, and with the hit register, the control timing chain operable to assert a timing control signal to enable the control state machine to receive the data transfer requests, the control timing chain enabling the control state machine to receive the first data transfer request at a first time, to receive the second data transfer request stored in the hit register at a second time, and to receive a third one of the data transfer requests at a third time, the third time being after the first time and prior to the second time, the control timing chain having a plurality of rank registers coupled between the request queue and the memory; and wherein the hit register is coupled to at least some of the plurality of rank registers, the hit register being operable to store a page hit write request in a write rank register based on a write latency of the memory, and being operable to store a page hit read request in a read rank register based on a read latency of the memory.
2. The computer system of
3. The computer system of
7. The memory controller of
a plurality of bank comparators, each having a first input coupled with the request queue and a second input coupled with a respective one of the rank registers, each of the bank comparators operable to compare a bank address of a next pending one of the data transfer requests with a bank address stored in the rank register and to responsively produce a respective one of a plurality of bank comparison signals, the control timing chain asserting the timing control signal to enable the control state machine to receive the next pending data transfer request in response to the bank comparison signals being deasserted, and the first rank register receiving the bank address of the next pending data transfer request in response to the bank comparison signals being deasserted; and a hit comparator having a first input coupled with the hit register and a second input coupled with a corresponding one of the rank registers, the hit comparator operable to compare a bank address stored in the hit register with a bank address stored in the corresponding rank register and to responsively produce a hit comparison signal, the second rank register receiving the bank address stored in the hit register in response to the hit comparison signal being deasserted.
8. The memory controller of
9. The memory controller of
11. The memory controller of
12. The memory controller of
13. The memory controller of
14. The memory controller of
15. The memory controller of
17. The memory controller of
18. The memory controller of
19. The memory controller of
20. The memory controller of
a plurality of bank comparators, each having a first input coupled with the request queue and a second input coupled with a respective one of the rank registers, each of the bank comparators operable to compare a bank address of a next pending one of the data transfer requests with a bank address stored in the rank register and to responsively produce a respective one of a plurality of bank comparison signals, the control timing chain asserting the timing control signal to enable the control state machine to receive the next pending data transfer request in response to the bank comparison signals being deasserted, and the first rank register receiving the bank address of the next pending data transfer request in response to the bank comparison signals being deasserted; and a hit comparator having a first input coupled with the hit register and a second input coupled with a corresponding one of the rank registers, the hit comparator operable to compare a bank address stored in the hit register with a bank address stored in the corresponding rank register and to responsively produce a hit comparison signal, the second rank register receiving the bank address stored in the hit register in response to the hit comparison signal being deasserted.
21. The memory controller of
The present invention relates generally to circuitry and protocols associated with operating memory devices, and more particularly to apparatus for controlling multibank memory devices.
Particular locations within the memory array 202 are addressable by Address signals that external circuitry such as a memory controller (not shown) provides to the memory device 200. The memory controller also provides a plurality of Control or command signals that are used to designate the particular memory access type and/or sequence of memory accesses. As depicted in
Data written to and read from the memory array 202 is transferred from and to the memory controller or other external circuitry via a data I/O circuit 212 and the access circuits 210A and 210B. Those skilled in the art will also understand that the depicted data I/O circuit 212 represents a collection of various functional circuit components adapted to transmit data to or receive data from external circuitry and to correspondingly receive read data from or transmit write data to the array 202 via the access circuits 210A and 210B.
The memory device 200 depicted in
Successive memory access operations directed to a single bank ordinarily result in precharge time intervals during which memory access operations cannot be performed. However, if successive operations are directed to the same page in a given bank (a "page hit"), they can be performed without an intervening precharge. Thus, improving data transfer speed requires detecting such page hits and interleaving multiple bank and page hit access operations to the memory device 200.
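To make the page hit condition concrete, the following minimal behavioral sketch compares the decoded bank and row fields of two successive requests; the DecodedAddress structure and its field names are assumptions for illustration, not elements of the memory device 200.

```python
from dataclasses import dataclass

@dataclass
class DecodedAddress:
    bank: int  # bank select bits
    row: int   # row (page) address
    col: int   # column address within the open page

def is_page_hit(prev: DecodedAddress, nxt: DecodedAddress) -> bool:
    # A page hit: same bank and same open row, so the column access can
    # proceed without an intervening precharge and row activation.
    return nxt.bank == prev.bank and nxt.row == prev.row

# Two accesses to bank 2, row 0x1A3 hit the open page; a different row misses.
a = DecodedAddress(bank=2, row=0x1A3, col=0x10)
b = DecodedAddress(bank=2, row=0x1A3, col=0x14)
c = DecodedAddress(bank=2, row=0x1B0, col=0x00)
assert is_page_hit(a, b) and not is_page_hit(a, c)
```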
In accordance with the invention, a memory controller is provided for controlling operations of a multibank memory. The memory controller includes a request queue coupled with a control state machine. The request queue stores data transfer requests, and the control state machine receives these requests at respective times and applies control signals to the memory to initiate corresponding data transfer operations. An address detector is coupled with the request queue to determine if a first and second of the data transfer requests constitute a page hit. A hit register is coupled with the address detector and with the request queue and stores the second data transfer request if it is a page hit. A control timing chain is coupled with the request queue, with the control state machine, and with the hit register. The control timing chain asserts a timing control signal to enable the control state machine to receive the first, second, and a third of the data transfer requests at respective first, second, and third times, with the third time being after the first time and prior to the second time.
In one aspect of the invention, the control timing chain includes a rank register queue through which the bank addresses of the data transfer requests propagate. The rank register queue has separate bank access and page access inputs. The bank access input receives the bank addresses of requests for new bank accesses, such as the first and third data transfer requests. The page access input receives the bank address of requests for page hit accesses, such as the second data transfer request from the hit register. Comparator circuitry may be provided to determine the timing of requests being received at the control state machine, as well as to control the timing of bank addresses being received by the rank register queue.
In another aspect of the invention, a computer system is provided that includes a memory controller coupling a processor with a multibank memory. The memory controller is able to initiate a first data transfer request directed to a first bank, to temporarily store a second data transfer request directed to the first bank, and to initiate a third data transfer request directed to a second of the banks prior to initiation of the second data transfer request.
The following describes a novel apparatus for controlling operations of a multibank memory, which may be included in a computer system. Certain details are set forth to provide a sufficient understanding of the present invention. However, it will be clear to one skilled in the art that the present invention may be practiced without these details. In other instances, well-known circuits, control signals, timing protocols, and software operations have not been shown in detail in order to avoid unnecessarily obscuring the invention.
Referring to
After registration of the page read command, successive commands to other banks may then be registered.
The system controller 26 also includes CPU interface circuitry 33 that couples the microprocessor 22 with other components of the system controller. The system controller 26 also includes a cache controller (not shown) for controlling data transfer operations to a cache memory 35 that provides higher speed access to a subset of the information stored in the main memory 30. The cache memory 35 may include any of a wide variety of suitable high-speed memory devices, such as static random access memory (SRAM) modules manufactured by Micron Technology, Inc.
The system controller 26 also functions as a bridge circuit (sometimes called the host bus bridge or North bridge) between the processor bus 24 and a system bus, such as I/O bus 36. The I/O bus 36 may itself be a combination of one or more bus systems with associated interface circuitry (e.g., AGP bus and PCI bus with connected SCSI and ISA bus systems). Multiple I/O devices 38-46 are coupled with the I/O bus 36. Such I/O devices include a data input device 38 (such as a keyboard, mouse, etc.), a data output device 40 (such as a printer), a visual display device 42 (commonly coupled with the system controller 26 via a high-speed PCI or AGP bus), a data storage device 44 (such as a disk drive, tape drive, CD-ROM drive, etc.), and a communications device 46 (such as a modem, LAN interface, etc.). Additionally, expansion slots 48 are provided for future accommodation of other I/O devices not selected during the original design of the computer system 20.
The memory controller 28 includes a DRAM state machine 54 that receives a request and associated address from the request queue 52 and produces the well-known control signal sets and sequences to initiate the corresponding memory access operations. The particular control signal types and protocols of the DRAM state machine 54 vary, depending on the particular multibank memory device types populating the main memory 30 (see FIG. 3). For an SDRAM, example control signals include the row address strobe (RAS), column address strobe (CAS), write enable (WE), and chip select (CS) signals. For an SLDRAM, example control signals include the packet-defined control/address signals that indicate device identification, command code, bank address, row address, and column address values. Details of the various control signals and protocols are well known to those skilled in the art and need not be described herein.
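For reference, the sketch below tabulates the conventional JEDEC SDRAM command encoding formed from the RAS, CAS, and WE signals with CS asserted; it illustrates the kind of control signal sets the DRAM state machine 54 produces, not its internal implementation.

```python
# Conventional SDRAM command truth table with CS# asserted (low).
# Tuple order: (RAS#, CAS#, WE#), where 0 = driven low and 1 = driven high.
SDRAM_COMMANDS = {
    "NOP":        (1, 1, 1),
    "ACTIVE":     (0, 1, 1),   # open (activate) a row in the addressed bank
    "READ":       (1, 0, 1),
    "WRITE":      (1, 0, 0),
    "PRECHARGE":  (0, 1, 0),   # close the open row
    "REFRESH":    (0, 0, 1),
    "LOAD_MODE":  (0, 0, 0),
}

def encode(command: str) -> tuple:
    """Return the (RAS#, CAS#, WE#) levels for a named command."""
    return SDRAM_COMMANDS[command]

assert encode("WRITE") == (1, 0, 0)
```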
A control timing chain circuit 56 applies a plurality of timing control signals to the DRAM state machine 54. The control timing chain 56 controls the time at which the DRAM state machine 54 registers a request and associated address from the request queue 52. In particular, and as described in detail below, the control timing chain 56 determines whether bank conflicts or bus conflicts exist between pending requests stored in the request queue 52 and requests previously registered in the DRAM state machine 54.
The memory controller 28 also includes page hit detect circuitry 58 coupled with the request queue 52. As depicted, the page hit detect circuitry 58 is coupled with the final ranks of the request queue 52. The page hit detect circuitry 58 includes comparator circuitry for comparing the request addresses stored in these final ranks, as will be understood by those skilled in the art. The page hit detect circuitry 58 produces an input control signal applied to a page hit register 60. In the event a page hit occurs, the input control signal is asserted to enable the page hit register to store the page hit request and associated address. Once the page hit request has been stored in the page hit register 60, other pending requests stored in the request queue 52 may then be registered in the DRAM state machine 54 under control of the control timing chain 56. The control timing chain 56 also produces an output control signal applied to the page hit register 60 to control subsequent provision of the page hit request to the DRAM state machine 54.
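A minimal stand-in for this store-and-release behavior is sketched below; the class and method names are illustrative only, and the actual page hit register 60 is of course hardware rather than software.

```python
class PageHitRegisterModel:
    """Behavioral sketch of the page hit register 60: it holds one page-hit
    request aside so that later requests in the request queue 52 can be
    registered in the DRAM state machine 54 without waiting behind it."""

    def __init__(self):
        self._held = None

    def load(self, request):
        # Corresponds to the asserted input control signal produced by the
        # page hit detect circuitry 58.
        self._held = request

    def release(self):
        # Corresponds to the output control signal from the control timing
        # chain 56 that forwards the held request to the state machine.
        request, self._held = self._held, None
        return request

reg = PageHitRegisterModel()
reg.load({"type": "write", "bank": 1, "row": 0x1A3, "col": 0x14})
assert reg.release()["col"] == 0x14 and reg.release() is None
```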
The bank address of a next pending request stored in the request queue 52 is applied to each of the bank conflict comparators 64 at a first comparator input. A second input of each of the bank conflict comparators 64 receives the bank address stored in a corresponding one of the rank registers 62. In this way, the control timing chain 56 determines whether a next pending request stored in the request queue 52 presents a bank conflict with any of the previously registered requests currently being executed by the DRAM state machine 54. Each of the bank conflict comparators 64 produces a comparison output signal to indicate whether such a bank conflict exists. As known to those skilled in the art, a bank conflict occurs when the next pending request is directed to a bank in which operations are currently being performed. Unless the next pending request is a page hit, the control timing chain 56 will not allow its registration until the previously registered conflicting request has cleared the rank register queue 63.
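The comparison performed by the bank conflict comparators 64 can be summarized by the short sketch below; the list-of-dictionaries representation of the rank registers 62 is a simplification for illustration.

```python
def has_bank_conflict(pending_bank, rank_registers):
    # Each comparator matches the pending request's bank address against the
    # bank address held in one rank register; any match flags a conflict, so
    # registration of the pending request is deferred.
    return any(entry is not None and entry["bank"] == pending_bank
               for entry in rank_registers)

# A request to bank 3 conflicts while a bank 3 access is still in the queue.
in_flight = [{"bank": 1}, None, {"bank": 3}, None]
assert has_bank_conflict(3, in_flight) and not has_bank_conflict(2, in_flight)
```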
Each of the rank registers 62 outputs a Write or a Read signal indicating the request type associated with the address currently stored in the respective rank register. Such information concerning write or read access is important for determining the existence of bus conflicts, as is well understood by those skilled in the art. For example, a write request following a read request requires an intervening idle time interval for turnaround of the external memory data bus. Similarly, a read request following a write request requires a time interval for turnaround of the internal memory data bus/pipeline. Thus, the control timing chain 56 also includes circuitry (not shown) to account for bus/pipeline turnaround times, as well as to account for bank conflicts as particularly depicted in FIG. 5. The timing of the well-known Write Data and Read Data strobe signals may also be conveniently controlled by the rank register queue 63, as shown in FIG. 5.
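As a rough illustration of this turnaround bookkeeping, the sketch below inserts idle command cycles between accesses of opposite type; the specific cycle counts are placeholders, since the real values depend on the memory device's timing specifications.

```python
def turnaround_idle_cycles(prev_type, next_type,
                           read_to_write=2, write_to_read=1):
    # Read followed by write: the external memory data bus must turn around.
    if prev_type == "read" and next_type == "write":
        return read_to_write
    # Write followed by read: the internal data bus/pipeline must turn around.
    if prev_type == "write" and next_type == "read":
        return write_to_read
    return 0

assert turnaround_idle_cycles("read", "write") == 2
assert turnaround_idle_cycles("write", "write") == 0
```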
As discussed above in connection with
Page bank conflict comparators 66 compare the bank address of the request stored in the page hit register 60 with the bank address of the Rank7 and Rank6 registers to ensure proper timing of page hit request insertion. In the particular embodiment depicted in
Thus, the control timing chain 56 has separate inputs for bank and page accesses--namely, the inputs to the Rank5 and Rank6 registers serve as read and write page access inputs 68 and 70, respectively, and the input to the Rank8 register serves as a bank access input 72. By temporarily storing a page hit request in the page hit register 60 (and later inserting it at a page access input 68 or 70 of the control timing chain 56), subsequent requests stored in the request queue 52 may be applied to the DRAM state machine 54 (see
For example, a first bank write request directed to a first bank is applied to the DRAM state machine 54 and to the bank access input 72 for registration in the Rank8 register of the control timing chain 56. If the next pending request is a page write to the first bank, this request is then stored in the page hit register 60 during the next command time period, and the first bank write request shifts to the Rank7 register. During the next command time period, a second bank write request directed to a second bank can be applied to the DRAM state machine 54 and to the bank access input 72 for registration in the Rank8 register (the first bank write request has now shifted to the Rank6 register). During the next command time period, the first bank write request shifts to the Rank5 register, the second bank write request shifts to the Rank7 register, and the page write request is applied to the DRAM state machine 54 and inserted at the page access input 70 into the Rank6 register (i.e., in between the first and second bank write requests). Thus, the timing advantages associated with a page hit may be exploited without delaying registration of other pending requests to other banks in the multibank memory device.
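The shifting and insertion described in this example can be modeled with the short simulation below; the one-shift-per-command-cycle behavior and the mapping of the bank access input 72, page write input 70, and page read input 68 to the Rank8, Rank6, and Rank5 registers follow the description above, while everything else is a simplification.

```python
RANKS = ["Rank8", "Rank7", "Rank6", "Rank5", "Rank4", "Rank3", "Rank2", "Rank1"]

class RankQueueModel:
    """Behavioral sketch of the rank register queue 63."""

    def __init__(self):
        self.regs = {name: None for name in RANKS}

    def cycle(self, bank_input=None, page_write_input=None, page_read_input=None):
        # One command cycle: every entry advances one rank toward the memory,
        # Rank8 is fed from the bank access input 72, and the page write and
        # page read inputs 70 and 68 feed Rank6 and Rank5, respectively.
        shifted = {name: None for name in RANKS}
        for src, dst in zip(RANKS[:-1], RANKS[1:]):
            shifted[dst] = self.regs[src]
        shifted["Rank8"] = bank_input
        if page_write_input is not None:
            shifted["Rank6"] = page_write_input
        if page_read_input is not None:
            shifted["Rank5"] = page_read_input
        self.regs = shifted

q = RankQueueModel()
q.cycle(bank_input="bank A write")               # first bank write registered
q.cycle()                                        # page hit held in register 60
q.cycle(bank_input="bank B write")               # second bank write registered
q.cycle(page_write_input="bank A page write")    # page hit inserted at input 70
assert q.regs["Rank5"] == "bank A write"
assert q.regs["Rank6"] == "bank A page write"    # between the two bank writes
assert q.regs["Rank7"] == "bank B write"
```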
Referring to
Referring to
If the Request is determined not to be a page hit in step 104, the Request is treated like any other bank access request. The control timing chain 56 determines whether any bus or bank conflicts exist in step 112. Once it has been determined that no such conflicts exist, the Request is then applied to the DRAM state machine 54 and control timing chain registration occurs in step 114. Operations associated with the method 100 then cease pending receipt of another Request. Of course, while a page hit request is being processed according to operations 106-110, another bank access request may be processed according to operations 112-114.
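To summarize the decision flow, a loose, self-contained paraphrase of the method 100 is sketched below; the request representation and the single-hit bookkeeping are simplifications for illustration, only the step numbers follow the text, and the resulting issue order matches the write interleave example above.

```python
def issue_order(requests):
    # Loose paraphrase of method 100: a request that hits the currently open
    # page (step 104) is held aside (steps 106-108) so that following bank
    # accesses are not blocked (steps 112-114); the held request is issued
    # once the next bank access has been registered (step 110).
    issued, held, open_page = [], None, None
    for req in requests:
        page = (req["bank"], req["row"])
        if open_page is not None and page == open_page:
            held = req
            continue
        issued.append(req)
        open_page = page
        if held is not None:
            issued.append(held)
            held = None
    if held is not None:
        issued.append(held)
    return issued

reqs = [{"id": "bank A write", "bank": 1, "row": 5},
        {"id": "bank A page write", "bank": 1, "row": 5},
        {"id": "bank B write", "bank": 2, "row": 7}]
assert [r["id"] for r in issue_order(reqs)] == [
    "bank A write", "bank B write", "bank A page write"]
```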
A number of advantages are provided by the above-described embodiments of the present invention.
Those skilled in the art will appreciate that the present invention may be accomplished with circuits other than those particularly depicted and described in connection with
Those skilled in the art will also understand that each of the circuits whose functions and interconnections are described in connection with
It will be appreciated that, although specific embodiments of the invention have been described for purposes of illustration, various modifications may be made without deviating from the spirit and scope of the invention. For example, the above-described location of control timing chain page access inputs relative to the bank access input is exemplary, and may well vary depending on particular memory device timing specifications. Those skilled in the art will appreciate that many of the advantages associated with the circuits and processes described above may be provided by other circuit configurations and processes. Indeed, a number of suitable circuit components can be adapted and combined in a variety of circuit topologies to implement a multibank memory controller in accordance with the present invention.
Those skilled in the art will also appreciate that various terms used in the description above are sometimes used with somewhat different, albeit overlapping, meanings. For example, the term "bank" may refer solely to a memory array bank, or may refer both to an array bank and its associated access circuitry. The term "request" or "command" may refer solely to a request or command type (e.g., read or write), or may refer also to the associated address to which the request or command is directed. One skilled in the art will understand, therefore, that terms used in the following claims are properly construed to include any of various well-known meanings. Accordingly, the invention is not limited by the particular disclosure above, but instead the scope of the invention is determined by the following claims.
Patent | Priority | Assignee | Title |
6654860, | Jul 27 2000 | GLOBALFOUNDRIES Inc | Method and apparatus for removing speculative memory accesses from a memory access queue for issuance to memory or discarding |
6687172, | Apr 05 2002 | BEIJING XIAOMI MOBILE SOFTWARE CO , LTD | Individual memory page activity timing method and system |
6779076, | Oct 05 2000 | Round Rock Research, LLC | Method and system for using dynamic random access memory as cache memory |
6862654, | Aug 17 2000 | Round Rock Research, LLC | Method and system for using dynamic random access memory as cache memory |
6922770, | May 27 2003 | Sony Corporation; Sony Electronics Inc. | Memory controller providing dynamic arbitration of memory commands |
6948027, | Aug 17 2000 | Round Rock Research, LLC | Method and system for using dynamic random access memory as cache memory |
6965536, | Oct 05 2000 | Round Rock Research, LLC | Method and system for using dynamic random access memory as cache memory |
7076627, | Jun 29 2001 | Intel Corporation | Memory control for multiple read requests |
7127573, | May 04 2000 | Advanced Micro Devices, Inc. | Memory controller providing multiple power modes for accessing memory devices by reordering memory transactions |
7155561, | Aug 17 2000 | Round Rock Research, LLC | Method and system for using dynamic random access memory as cache memory |
7187385, | Mar 12 2001 | Ricoh Company, Ltd. | Image processing apparatus |
7350018, | Aug 17 2000 | Round Rock Research, LLC | Method and system for using dynamic random access memory as cache memory |
7373453, | Feb 13 2004 | Samsung Electronics Co., Ltd. | Method and apparatus of interleaving memory bank in multi-layer bus system |
7428186, | Apr 07 2005 | Hynix Semiconductor Inc. | Column path circuit |
7626885, | Apr 07 2005 | Hynix Semiconductor Inc. | Column path circuit |
7698498, | Dec 29 2005 | Intel Corporation | Memory controller with bank sorting and scheduling |
7917692, | Aug 17 2000 | Round Rock Research, LLC | Method and system for using dynamic random access memory as cache memory |
9396805, | May 19 2014 | Samsung Electronics Co., Ltd. | Nonvolatile memory system with improved signal transmission and reception characteristics and method of operating the same |
Patent | Priority | Assignee | Title |
5953743, | Mar 12 1997 | Round Rock Research, LLC | Method for accelerating memory bandwidth |
6034900, | Sep 02 1998 | Round Rock Research, LLC | Memory device having a relatively wide data bus |
Executed on | Assignor | Assignee | Conveyance | Frame | Reel | Doc |
Dec 23 1998 | Micron Technology, Inc. | (assignment on the face of the patent) | / | |||
Mar 23 1999 | CHRISTENSON, LEONARD E | Micron Electronics, INC | ASSIGNMENT OF ASSIGNORS INTEREST SEE DOCUMENT FOR DETAILS | 009937 | /0233 | |
Apr 26 2016 | Micron Technology, Inc | MORGAN STANLEY SENIOR FUNDING, INC , AS COLLATERAL AGENT | PATENT SECURITY AGREEMENT | 038954 | /0001 | |
Apr 26 2016 | Micron Technology, Inc | U S BANK NATIONAL ASSOCIATION, AS COLLATERAL AGENT | CORRECTIVE ASSIGNMENT TO CORRECT THE REPLACE ERRONEOUSLY FILED PATENT #7358718 WITH THE CORRECT PATENT #7358178 PREVIOUSLY RECORDED ON REEL 038669 FRAME 0001 ASSIGNOR S HEREBY CONFIRMS THE SECURITY INTEREST | 043079 | /0001 | |
Apr 26 2016 | Micron Technology, Inc | U S BANK NATIONAL ASSOCIATION, AS COLLATERAL AGENT | SECURITY INTEREST SEE DOCUMENT FOR DETAILS | 038669 | /0001 | |
Jun 29 2018 | U S BANK NATIONAL ASSOCIATION, AS COLLATERAL AGENT | Micron Technology, Inc | RELEASE BY SECURED PARTY SEE DOCUMENT FOR DETAILS | 047243 | /0001 | |
Jul 03 2018 | MICRON SEMICONDUCTOR PRODUCTS, INC | JPMORGAN CHASE BANK, N A , AS COLLATERAL AGENT | SECURITY INTEREST SEE DOCUMENT FOR DETAILS | 047540 | /0001 | |
Jul 03 2018 | Micron Technology, Inc | JPMORGAN CHASE BANK, N A , AS COLLATERAL AGENT | SECURITY INTEREST SEE DOCUMENT FOR DETAILS | 047540 | /0001 | |
Jul 31 2019 | JPMORGAN CHASE BANK, N A , AS COLLATERAL AGENT | Micron Technology, Inc | RELEASE BY SECURED PARTY SEE DOCUMENT FOR DETAILS | 051028 | /0001 | |
Jul 31 2019 | JPMORGAN CHASE BANK, N A , AS COLLATERAL AGENT | MICRON SEMICONDUCTOR PRODUCTS, INC | RELEASE BY SECURED PARTY SEE DOCUMENT FOR DETAILS | 051028 | /0001 | |
Jul 31 2019 | MORGAN STANLEY SENIOR FUNDING, INC , AS COLLATERAL AGENT | Micron Technology, Inc | RELEASE BY SECURED PARTY SEE DOCUMENT FOR DETAILS | 050937 | /0001 |
Date | Maintenance Fee Events |
Jul 02 2003 | ASPN: Payor Number Assigned. |
Jul 02 2003 | RMPN: Payer Number De-assigned. |
Oct 20 2006 | M1551: Payment of Maintenance Fee, 4th Year, Large Entity. |
Oct 14 2010 | M1552: Payment of Maintenance Fee, 8th Year, Large Entity. |
Oct 15 2014 | M1553: Payment of Maintenance Fee, 12th Year, Large Entity. |