A system for interrupting loading of data into a high speed memory device from main storage when a processor requests cache access. A high speed cache is connected to main storage for storing at least a subset of the data residing therein, and the cache can be directly accessed by a processor. In a preferred embodiment, a buffering device is connected to main storage and to the cache for buffering data to be loaded therein. The data buffer is adapted to receive data from main storage continuously and is adapted to transfer the data to the cache continuously unless the cache is being accessed by the processor.
1. A system which operates on clock cycles and which provides immediate access to data which is already located in a high speed memory device including:
processor means for processing data;
storage means for storing data and which is operatively connected to said data processing means;
a high speed cache memory operatively connected through a first bus to said storage means for storing at least a subset of data from said storage means, with each unit of said data requiring multiple clock cycles to be loaded through said first bus into said cache memory, with said cache memory accessible to said processor means through a second bus; and
priority control means comprising logic circuits connected with both said processor means and said cache memory for monitoring data requests received from said processor means and for immediately interrupting in a given clock cycle any further loading of a unit of said data from said storage means to said cache memory only whenever such a data request is for data located in said cache memory, and for preventing any delay after said given clock cycle by allowing the data request for data located in said cache memory to be carried out concurrently with said interrupting and by allowing resumption of loading of said unit of data during successive clock cycles following said given clock cycle.
2. A system in accordance with
3. The system in accordance with
4. The system in accordance with
5. The system in accordance with
6. The system in accordance with
7. The system in accordance with
8. The system in accordance with
9. The system in accordance with
10. A method of using logic circuits for coordinating the transfer of lines of data from main storage into a high speed cache memory with the accessing by a processor of data which is already located in the cache memory, including the steps of:
loading data from the main storage into a buffer for temporary retention;
transferring the loaded data from the buffer into the cache memory during all periods of time when the processor is not requesting access to data which is already located in the cache memory;
monitoring data requests from the processor to identify any requests for data already located in cache memory;
interrupting immediately said transferring step, without interrupting said loading step, whenever said monitoring step identifies requests by the processor for data in the cache memory; and
resuming said transferring step immediately after the accessing by the processor of data in the cache memory has been completed.
11. The method of
The present invention relates to a data processing system having a main memory and a high speed memory and, more particularly, to an improved memory access management mechanism for controlling data transfer therebetween.
In the field of data processing, sophisticated high speed computers often incorporate large memories or data storage devices. While the speed of the engines or processors within such computer systems has increased consistently over the years, computer applications have continued to demand ever greater speed.
Among the many variables to be considered in an attempt to increase the performance of data processing systems, two considerations are the speed of the system processor and the speed with which data can be transferred between main storage and the processor. In general, when two or more logic devices are incorporated in a computer system, one of the devices operates at a slower speed than the others. Overall system performance is, of course, limited by the speed of the slowest device.
The speed of a memory device is inversely proportional to the time required to access data stored therein. As sophisticated computer systems develop, memory storage capacity often increases. Although the operating speed of individual components may increase, overall system performance may in fact degrade when memory capacity is extremely large.
Historically, it was common for a processor to communicate with main storage by means of individual connections thereto. The great increase in processing power provided by modern processors, however, resulted in a prodigious amount of data constantly being requested by the processor, exceeding the capacity of main storage to transfer data to the processor at optimal rates. Memory requirements also grew at a faster rate than processor performance improved. It would have been uneconomical to continue building nonvolatile memories of ever increasing size and speed.
An approach to maximizing performance of a computer system was to develop a temporary memory storage mechanism called a cache. The cache is a relatively high speed memory that tends to be more expensive than conventional data storage devices.
The cache is a limited storage capacity memory that is usually local to the processor and that contains a time-varying subset of the contents of main storage. This subset of data stored in the cache is that data that was recently used by the processor.
The purpose of a cache memory is to reduce the cost of a system while minimally affecting the average effective access time for a memory reference. A very high proportion of memory reads can be satisfied out of the high speed cache.
The cache contains a relatively small high speed buffer and control logic situated between two logical devices, such as a processor and main storage. The cache matches the high speed of one of the devices (the processor) to the relatively low speed of the other device (the main storage).
The data most often used is temporarily stored in the high speed buffer. The most recent information requested by one logical device from another logical device is stored in the cache memory simultaneously with its transfer to the first device. Subsequent requests for such information result in the transfer of data directly from the cache to the first device without need for accessing the second device.
When a processor requests data, for example, the cache first searches its buffer. If the data is stored in the cache, a so-called hit occurs and the data is returned in one or two cycles. Often, of course, the data sought is not stored in the cache; a so-called miss then occurs and the cache must retrieve the data from main storage.
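The hit/miss test just described can be illustrated with a minimal sketch, in which the cache is modeled as a small mapping from line address to line contents; the addresses, line size, and contents below are invented for the example.

```python
# Toy model of a cache lookup: a hit returns the cached line, a miss
# means the line must be retrieved from main storage instead.
cache = {0x100: "contents of line 0x100"}   # invented contents

def fetch(addr, line_bytes=64):
    line = addr - (addr % line_bytes)       # address of the containing line
    if line in cache:
        return cache[line]                  # hit: returned in one or two cycles
    raise LookupError("miss: retrieve the line from main storage")

print(fetch(0x120))   # hit: 0x120 lies within the cached line at 0x100
```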
Caches derive their performance from the principle of locality. According to this principle, over short periods of time processor memory references tend to be clustered in both time and space. Data that will be in use in the near future is likely to be in use already (temporal locality). Similarly, data that will be in use in the near future is located near data currently in use (spatial locality). The degree to which systems exhibit locality determines the benefits of the cache. A cache can contain a small fraction of the data stored in main storage, yet still achieve extremely high hit rates under normal system loads.
A main storage line fetch occurs when the cache accesses data from main storage. A line castout occurs when a convenient block of data, called a line or cache line, is returned from the cache to main storage after modification to make room for a new line of data. The line of data is the unit which is moved between the cache and main storage and is typically 4 to 16 times longer than the width of the bus between the cache and main storage. This mismatch results in multiple transfers of data between memory and cache to complete a line transfer operation.
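The arithmetic of this mismatch is straightforward; the sketch below assumes a hypothetical 64-byte line together with the 8-byte memory bus of the preferred embodiment described later, and a line 4 to 16 times the bus width would correspondingly require 4 to 16 bus transfers.

```python
# Number of bus transfers needed to move one cache line.  The 64-byte
# line size is an assumed figure for illustration; the 8-byte bus
# matches the preferred embodiment described below.
LINE_BYTES = 64
BUS_BYTES = 8

transfers = LINE_BYTES // BUS_BYTES
print(f"{transfers} bus transfers per cache line")   # -> 8
```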
The loading of the cache device is sometimes called inpaging. Inpaging of data from main storage to the cache may take an appreciable amount of time, depending upon the amount of data that is transferred. Often the particular data within a line that the processor requested is fetched from main storage first and passed directly to the processor so that it can resume instruction processing; the remainder of the line is inpaged into the cache immediately afterward. If, during the inpaging operation, a processor requires access to data in the cache, conventionally the processor has been required to wait until the line transfer operation from main storage to the cache is completed.
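The requested-data-first ordering can be sketched as follows; the wrap-around transfer order is an assumption about one common way to implement it, not a detail recited in the patent, and the sizes are the same illustrative figures as above.

```python
# Sketch of "requested data first" inpaging: the bus transfer holding
# the requested address is sent to the processor first, and the rest
# of the line follows in wrapping order.
def inpage_order(requested_addr, line_bytes=64, bus_bytes=8):
    transfers = line_bytes // bus_bytes
    first = (requested_addr % line_bytes) // bus_bytes
    return [(first + i) % transfers for i in range(transfers)]

print(inpage_order(0x128))   # -> [5, 6, 7, 0, 1, 2, 3, 4]
```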
U.S. Pat. No. 4,317,168 issued to Messina et al discloses a cache organization that enables cache functions to overlap. The main storage has two buses, data bus-out and data bus-in, each transferring a double word in one cycle; both buses may transfer respective double words in opposite directions in the same cycle. Moreover, the cache has a quadword write register and a quadword read register, a quadword meaning two double words on a quadword address boundary. During a line fetch of sixteen double words, the first double word or pair of double words is loaded into the quadword write register. Thereafter, during the line fetch, the even and odd double words are formed into quadwords as received from the bus-out, and the quadwords are written into the cache on alternate cycles. If a line castout is required from the same or a different location in the cache, the castout can proceed during the alternate non-write cycles of any line fetch. Any cache bypass to the processor during the line fetch can overlap the line fetch and the line castout. Although processor accesses are permitted in the aforementioned system, they receive lower priority than do memory transfers.
U.S. Pat. No. 4,169,284 issued to Hogan et al teaches concurrent access to a cache by main storage and a processor by means of a cache control which provides two cache access timing cycles during each processor storage request cycle. No alternatively accessible modules, buffering, delay or interruption is provided for main storage line transfers to the cache.
U.S. Pat. No. 4,371,929 issued to Brann et al teaches a controllable cache store interface to a shared disk memory employing a plurality of storage partitions whose access is interleaved in a time domain multiplexed manner. A common bus is provided with the shared disk to enable high speed sharing of the disk storage by all processors in a multiprocessor system. The communication between each processor and its corresponding cache memory partition can be overlapped with one another and with accesses between the cache memory and the commonly shared disk memory. Interleaving of transfers within full disk block transfers, however, is not permissible. Thus, processor access to the cache memory is halted until data is transferred from the cache to the disk drives.
It would be advantageous to provide a system for improving performance of a high speed computer.
It would also be advantageous to provide a system for efficient data management between a cache and a main storage in such a high speed computer system.
It would further be advantageous to provide a system for interrupting cache operations when a processor requests access to the data stored therein.
Moreover, it would be advantageous to allow a processor to have highest priority access to the cache, even when data transfer operations are occurring between the cache and main storage.
It would also be advantageous to provide a system for buffering data from main storage to the cache so that data can be loaded continuously into the cache unless the processor requests access to the data.
In accordance with the present invention, there is provided a system for interrupting loading of data from main storage into a high speed memory device when a processor requests cache access. A high speed cache is connected to main storage for storing at least a subset of the data residing therein, and the cache can be directly accessed by a processor. A buffering device is connected to main storage and to the cache for buffering data to be loaded therein. The data buffer is adapted to receive data from main storage continuously and is adapted to transfer the data to the cache continuously unless the cache is being accessed by the processor.
A complete understanding of the present invention may be obtained by reference to the accompanying drawings, when taken in conjunction with the detailed description thereof and in which:
FIG. 1 is a block diagram of a data processing system environment which provides interruptible cache loading in accordance with the present invention;
FIGS. 2A-2C, taken together, represent a flowchart of a preferred embodiment which includes an inpage buffer operation;
FIGS. 3A-3C, taken together, represent a flowchart showing interrupted data transfer from memory in an alternate embodiment having no inpage buffer;
FIG. 4 is a timing diagram representing the occurrence of events during an inpage operation of the preferred embodiment of FIG. 2; and
FIG. 5 is a timing diagram representing the occurrence of events in the alternate embodiment of FIG. 3 during which transfer of data from memory is suspended.
Referring now to FIG. 1, there is shown a block diagram of a high speed buffer subsystem, shown generally at reference numeral 10, of the data processing system of the present invention. The main storage 11 has capacity for storing a plurality of data bits and words. In the preferred embodiment, the main storage 11 has a capacity on the order of 8 M bytes to 256 M bytes of data and can be accessed randomly. It should be understood, however, that any reasonable capacity of memory can be used within the scope of the present invention.
Connected to main storage 11 by means of a bidirectional data bus or memory bus 12 is an 8-byte wide data register 16. An inpage buffer 18 is connected to data register 16 by means of bus 19. The inpage buffer 18 is a component of the preferred embodiment, but need not be used in alternate embodiments. Its function is described in further detail hereinbelow. An 8-byte wide multiplexer 20 is connected to the inpage buffer 18 by means of a bus 21.
A set of cache arrays is shown at reference numeral 22. The cache arrays 22 in the preferred embodiment comprise four partitions each 8 bytes wide, not shown, the total memory being 16 K bytes. It should be understood, however, that a cache array having six partitions and 24 K bytes could also be used. In fact, any size cache array and any number of partitions is possible within the scope of the present invention. Similarly, for purposes of description, one set of cache arrays 22 is herein disclosed, but alternative embodiments having a plurality of caches would also be within the scope of the present invention and could easily be implemented by those skilled in the art once the present invention is understood.
The cache arrays 22 are connected to the multiplexer 20 by means of a bus 23. Another data register 24 is connected to the cache arrays 22 by means of a bus 25.
Another multiplexer 26 is connected to the data register 24 by means of a bus 27. Other inputs to the multiplexer 26 are data from the inpage buffer 18 by means of a bus 29 and data from data register 16 by means of a bus 19.
The output of multiplexer 26 is applied to a shifter 28 by means of a bus 31. The output of multiplexer 26 can also be applied, by means of bus 31, to an outpage data register 32, the output of which is applied to the bidirectional data bus 12.
The shifter 28 generates data signals and transmits them over a bus 14 to a processor 13. Data from the processor 13 is applied to a store data register 30 by means of a bus 33 and thence, by means of a bus 33a, to the multiplexer 20 during store operations.
The cache directory and controls are shown generally at reference numeral 40. A directory 42, having a least recently used (LRU) mechanism 43, provides a portion of addresses to a compare circuit 44 by means of lines 45. The output of the compare circuit 44 is applied to inpage controls 46 over HIT/MISS lines 47. The inpage controls 46 determine whether valid data is in the cache arrays 22 or in the inpage buffer 18. Priority controls 48 signal main storage 11 to discontinue and restart data transfers if the inpage buffer 18 is not present, as can be the case in an alternate embodiment of the present invention.
The inpage controls 46 generate a signal applied to priority controls 48 over INPAGE REQUEST line 49. The priority controls 48 determine whether the processor 13 or the inpage operation from main storage 11 will access the cache arrays 22. The output of priority controls 48 is applied to a multiplexer 50 by means of a CACHE ADDRESS SELECT line 51. The output of the multiplexer 50 is applied to an address register 52 by means of a bus 53. The address register 52 generates ADDRESS signals on bus 55 which is applied to the cache arrays 22.
Another multiplexer 70 receives ADDRESS signals from the processor 13 by means of a PROCESSOR ADDRESS bus 56. Also applied to multiplexer 70 is an INPAGE ADDRESS bus 60 and the CACHE ADDRESS SELECT line 51. Connected to multiplexer 70 by means of bus 71 is an address register 54.
The priority controls 48 receive a PROCESSOR REQUEST signal from the processor 13 by means of line 58.
Also input to the multiplexer 50 are the PROCESSOR ADDRESS signals 56 from the processor 13 and the INPAGE ADDRESS signals 60 generated by the inpage controls 46. The inpage controls 46 also generate three groups of SELECT lines: SELECT/CONTROL 62, which lines are input to inpage buffer 18; CACHE DATA INPUT SELECT 64, which line is input to multiplexer 20; and CACHE DATA OUTPUT SELECT 66, which lines are input to multiplexer 26.
The directory 42 receives an address from address register 54 by means of bus 68.
In operation, the data from main storage 11 is latched in data register 16, from which it is routed to the cache arrays 22 or to the inpage buffer 18, if present, for an inpage operation. The inpage buffer 18 allows a complete line of data to be transferred from main storage 11 without interruption, as far as the main storage 11 is concerned. This frees the memory bus 12 for other operations, such as accesses by other processors, not shown, or store operations. The inpage buffer 18 also allows an inpaging operation to restart more quickly after an interruption in most cases, since the buffer 18 is integrated into the cache structure in the preferred embodiment.
From the inpage buffer 18, data is written into the cache arrays 22 during those cycles in which the processor 13 does not request cache accesses. Data can be written into the cache arrays 22 in any convenient size block, depending on cache array organization. For example, the cache array 22 may be organized to write 16 bytes of data in a single access, even though the memory bus 12 is only 8 bytes wide.
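The pairing of bus beats into wider array writes in the example organization just described can be sketched as follows; the function and variable names are illustrative, not taken from the patent.

```python
# Sketch of the example organization above: two consecutive 8-byte bus
# beats are assembled into one 16-byte cache-array write.
def pack_array_writes(beats, beats_per_write=2):
    return [b"".join(beats[i:i + beats_per_write])
            for i in range(0, len(beats), beats_per_write)]

beats = [bytes([n]) * 8 for n in range(8)]         # "data 0" .. "data 7"
print([len(w) for w in pack_array_writes(beats)])  # -> [16, 16, 16, 16]
```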
Data from the inpage buffer 18 can also be gated through multiplexer 26 via bus 29 to the processor 13 to allow early access to data not yet written into the cache arrays 22. In practice, all data from the cache line being inpaged are obtained from the inpage buffer 18, except for the first access which is bypassed directly from data register 16 to the processor 13, until the complete cache line is written into the cache arrays 22 and the directory 42 is marked valid.
The multiplexer 26, in addition to selecting data from the inpage buffer 18 and data register 16, selects data from one of the associativity classes in the cache arrays 22. Thus, the inclusion of data from the inpage buffer 18 does not necessarily add a stage of logic to the cache data path that terminates at the processor 13. It does, however, increase the number of inputs to the multiplexer 26. Control of the multiplexer 26 is handled by logic in the inpage controls 46, which verifies that valid data is in the inpage buffer 18 and checks the output of directory compare logic 44 for those accesses to other cache lines.
Priority controls 48 determine whether the processor 13 is requesting the next cache access and, if so, gate the PROCESSOR ADDRESS signal 56 instead of the inpage address to the directory 42 and cache arrays 22. If no PROCESSOR REQUEST signal 58 is present, the inpage buffer 18 loads data to the cache arrays 22 by means of the multiplexer 20 and buses 21 and 23 connected thereto. A more thorough understanding of the specific operations that occur during cache data transfer activity in the preferred embodiment using the inpage buffer 18 can be obtained by referring to FIGS. 2A-2C, which disclose a self-explanatory flowchart thereof. Reference numerals included in the blocks of FIGS. 2A-2C refer to the circuit elements shown in FIG. 1 and are numbered identically to the reference numerals identifying the elements therein.
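A minimal cycle-by-cycle sketch of this arbitration in the buffered embodiment follows: main storage fills the inpage buffer every cycle without interruption, while buffer-to-cache writes yield any cycle in which a processor request is present. The request pattern and the one-transfer-per-cycle timing are assumptions made for the example, not timings taken from the figures.

```python
# Sketch of the arbitration by priority controls 48: memory-to-buffer
# transfers are never held, and buffer-to-cache writes resume on the
# next cycle with no processor request.
def run_buffered_inpage(num_beats, processor_request_cycles):
    received, written, cycle, log = 0, 0, 0, []
    while written < num_beats:
        if received < num_beats:
            received += 1                      # memory -> buffer, never held
        if cycle in processor_request_cycles:
            log.append((cycle, "processor access; cache write held"))
        elif written < received:
            log.append((cycle, f"buffer writes data {written} to cache"))
            written += 1
        cycle += 1
    return log

for event in run_buffered_inpage(4, processor_request_cycles={1, 2}):
    print(event)
```

Note that in this model the buffer continues to fill during cycles 1 and 2 even though the cache writes are held, which is exactly the property that frees the memory bus in the buffered embodiment.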
In an alternate embodiment where no inpage buffer 18 exists, a RESEND signal 72 is sent to main storage 11 requesting that the data be retransmitted and data from register 16 is applied directly to multiplexers 20 and 26, the latter over bus 19. A more thorough understanding of the specific operations that occur during transfer activity in such an alternate embodiment which relies on retransmission of data from main storage 11 to cache arrays 22 when interrupted by the processor 13 can be obtained by referring to FIGS. 3A-3C, which disclose a self-explanatory flowchart thereof. Reference numerals included in the blocks of FIGS. 3A-3C refer to the circuit elements shown in FIG. 1 and are numbered identically to the reference numerals identifying the elements therein.
Data validity for the cache line being inpaged is determined by a series of latches, not shown, in the inpage controls 46. The number of latches is determined by the number of transfers from main storage 11 to the cache arrays 22 for a complete cache line transfer. Each latch corresponds to a portion of the cache line and is set valid as the portion is loaded into the cache arrays 22 or into the inpage buffer 18. Another latch, not shown, is set when the inpage operation begins and is reset when the operation is completed. This procedure indicates whether the inpage controls 46 are to be used.
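These validity latches might be modeled as one flag per bus transfer of the line plus one flag marking an inpage in progress; the sketch below adopts that model, and the latch count of 8 and the class interface are invented for illustration.

```python
# Sketch of the validity latches in inpage controls 46: one latch per
# portion of the cache line, plus one latch set when the inpage
# operation begins and reset when it completes.
class InpageValidity:
    def __init__(self, transfers_per_line=8):
        self.valid = [False] * transfers_per_line  # one latch per portion
        self.inpage_active = False                 # set at start of inpage

    def start_inpage(self):
        self.valid = [False] * len(self.valid)
        self.inpage_active = True

    def portion_loaded(self, index):
        self.valid[index] = True
        if all(self.valid):
            self.inpage_active = False   # reset when the line is complete

v = InpageValidity()
v.start_inpage()
v.portion_loaded(0)
print(v.valid[0], v.valid[1], v.inpage_active)   # True False True
```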
Prior to loading the complete cache line into the cache arrays 22, the processor 13 may attempt to read from or write to any storage location. If a fetch is attempted for data which is not yet in the inpage buffer 18, the processor 13 is signalled to wait until valid data is available. If the processor 13 attempts to access another line in the cache arrays 22 and a miss occurs, a BUSY condition is sent over line 74, which causes the processor 13 to wait and then to resend the request when the BUSY condition is reset; thus, the second access to main storage 11 does not begin until the first inpage operation is complete. Should a write operation be required to the cache line being inpaged, data from the processor 13 is not written into the inpage buffer 18 or into the cache arrays 22; instead, the BUSY signal is sent to the processor 13 over line 74, and the processor 13 waits until the inpage operation is complete, at which time the data from the processor 13 is written over the inpaged data. The latter restriction can be removed in alternate embodiments in which the controls are more complex but still within the abilities of one skilled in the art.
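These access rules during an inpage can be summarized in a short sketch; the function signature and return strings are illustrative names, not signals recited in the patent.

```python
# Sketch of the access rules while a line is being inpaged: a hit on
# another line is served normally, a miss on another line raises BUSY,
# a store to the line being inpaged raises BUSY, and a fetch of a
# portion not yet buffered waits for valid data.
def access_during_inpage(same_line, hit, is_store, portion_valid=False):
    if not same_line:
        return "served from cache" if hit else "BUSY: resend after inpage"
    if is_store:
        return "BUSY: write deferred until the inpage completes"
    if not portion_valid:
        return "WAIT: portion not yet in the inpage buffer"
    return "served from the inpage buffer"

print(access_during_inpage(same_line=True, hit=False, is_store=False))
```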
Referring now also to the timing chart of FIG. 4, data transfers labelled "data 0" through "data 7" proceed as follows, pursuant to a series of fetch requests identified as `A`, `B`, and so on. A PROCESSOR REQUEST signal 58 (FIG. 1) is sent to the cache arrays 22 with the address for the first fetch (fetch `A`), followed immediately by fetch `B`, which is held pending the return of data to the processor 13 for fetch `A`. The first access to the cache arrays 22 and directory 42 occurs in cycle 2. The directory 42 indicates that an inpage operation is required, and the data fetched from the cache arrays 22 on cycle 2 is discarded. On cycle 3 the address is presented to main storage 11 and the access begins. Several cycles later, "data 0" (the part of the cache line initially requested by the processor 13 in fetch `A`) is placed on the bus 12 to the data register 16. On the following cycle, "data 0" is transferred to the processor 13 through multiplexer 26 and, concurrently, fetch `B` is allowed. In this example, since the fetch is for data in another cache line present in the cache arrays 22, the appropriate cache output is selected by the multiplexer 26 in the next cycle.
Two cycles later, fetch `C` of the cache line being inpaged occurs. A directory miss occurs because the cache line is not yet completely stored in the cache arrays 22 and marked valid. The inpage controls 46 determine that the requested data is in the inpage buffer 18 and select the correct buffer location using SELECT/CONTROL lines 62, routing the data through the multiplexer 26 to the processor 13. When "data 7" is finally written into the cache arrays 22, the entry in the directory 42 is marked valid. Subsequent accesses use cache array data.
Referring now also to FIG. 5, when the inpage buffer 18 does not exist, the main storage 11 can resend the last data transferred to the cache arrays 22. Operations then proceed as hereinabove described, but the second transfer ("data 1") must be resent because "data 0" is loaded in data register 16 prior to being transferred to the cache arrays 22. The cache arrays 22 are currently executing fetch `B`. The priority controls 48 detect this situation and activate the RESEND line 72 to the main storage 11. RESEND is also activated later when fetch `C` accesses the cache arrays 22. The effect of this mode of operation is similar to the preferred embodiment in which the inpage buffer 18 exists, except that the memory bus 12 is in use for a greater period of time in order to resend the interrupted transfers. Moreover, the memory system 11 must be capable of resending data upon receiving such requests from the priority controls 48.
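The RESEND protocol of this alternate embodiment can be contrasted with the buffered sketch above; the one-beat-lost-per-interruption model below is an assumption drawn from the FIG. 5 description, and the request pattern is invented for the example.

```python
# Sketch of the alternate embodiment without an inpage buffer: when a
# processor access takes the cache cycle, the beat in data register 16
# cannot be written to the cache, so RESEND is raised and main storage
# retransmits that beat.
def run_inpage_with_resend(num_beats, processor_request_cycles):
    beat, cycle, log = 0, 0, []
    while beat < num_beats:
        if cycle in processor_request_cycles:
            log.append((cycle, f"processor access; RESEND data {beat}"))
        else:
            log.append((cycle, f"data {beat} written to cache"))
            beat += 1
        cycle += 1
    return log, cycle

log, busy_cycles = run_inpage_with_resend(4, {1, 2})
for event in log:
    print(event)
print("memory bus busy for", busy_cycles, "cycles")
```

Compared with the buffered sketch, the total number of cache cycles is similar, but here the memory bus remains occupied for every retransmitted beat, matching the greater bus utilization described above.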
Since other modifications and changes varied to fit particular operating requirements and environments will be apparent to those skilled in the art, the invention is not considered limited to the example chosen for purposes of disclosure and covers all changes and modifications which do not constitute departures from the true spirit and scope of this invention.
Inventors: Jeremiah, Thomas L.; Ruane, Albert J.; Zurla, Frank A.
Cited by:
Patent | Priority | Assignee | Title
5,636,364 | Dec 01 1994 | International Business Machines Corporation | Method for enabling concurrent misses in a cache memory
References cited:
Patent | Priority | Assignee | Title
3,938,097 | Apr 01 1974 | Solomon, Jack D. | Memory and buffer arrangement for digital computers
4,298,929 | Jan 26 1979 | International Business Machines Corporation | Integrated multilevel storage hierarchy for a data processing system with improved channel to memory write capability
4,354,232 | Dec 16 1977 | Honeywell Information Systems Inc. | Cache memory command buffer circuit
4,370,710 | Aug 26 1980 | Laurence J. Marhoefer | Cache memory organization utilizing miss information holding registers to prevent lockup from cache misses
4,439,829 | Jan 07 1981 | Samsung Electronics Co., Ltd. | Data processing machine with improved cache memory management
4,460,959 | Sep 16 1981 | Honeywell Information Systems Inc. | Logic control system including cache memory for CPU-memory transfers
4,604,691 | Sep 07 1982 | Nippon Electric Co., Ltd. | Data processing system having branch instruction prefetching performance
4,631,660 | Aug 30 1983 | Amdahl Corporation | Addressing system for an associative cache memory
4,685,082 | Feb 22 1985 | Samsung Electronics Co., Ltd. | Simplified cache with automatic update
4,740,889 | Jun 26 1984 | Freescale Semiconductor, Inc. | Cache disable for a data processor
4,819,203 | Apr 16 1986 | Hitachi, Ltd. | Control system for interrupting long data transfers between a disk unit or disk cache and main memory to execute input/output instructions
Assigned to IBM Corporation by assignment executed Aug 24, 1987 (assignment on the face of the patent).