A data storage system pre-fetches data blocks from a mass storage device, then determines whether reallocation of the pre-fetched blocks would improve access to them. If access would be improved, the pre-fetched blocks are written to different areas of the mass storage device. Several different implementations of such data storage systems are described.
19. A method comprising:
performing, by a server, a pre-fetch read of a sequential first set of data blocks from a mass storage device in response to receiving a client request to access a second set of data blocks on the mass storage device, the sequential first set of data blocks stored into cache memory of the server and having a plurality of physical volume block numbers (pvbns), a pvbn being a block number on the mass storage device;
examining, by the server, during the pre-fetch read, the pvbns for the sequential first set of data blocks that is stored in the cache memory to detect that at least two subsets of the sequential first set of data blocks are not contiguous with the sequential first set and not contiguous with each other; and
reallocating, by the server, during the pre-fetch read, the at least two subsets to a contiguous area of the mass storage device.
1. A method comprising:
pre-fetching, by a server, a first plurality of data blocks in response to receiving a client request to access a second plurality of data blocks on a mass storage device coupled to the server, the first plurality of data blocks stored into cache memory of the server and having a plurality of physical volume block numbers (pvbns), a pvbn being a block number on the mass storage device;
examining, by the server, the pvbns for the first plurality of data blocks that is stored in the cache memory to determine whether the first plurality of data blocks are fragmented on the mass storage device; and
writing, by the server, the first plurality of data blocks to different locations of the mass storage device during the pre-fetching of the first plurality of data blocks, the writing based on a determination that the first plurality of data blocks are fragmented on the mass storage device.
13. A non-transitory computer-readable medium containing data and instructions to cause a programmable processor to perform operations comprising:
maintaining a filesystem on a mass storage subsystem;
predicting a first plurality of data blocks of the mass storage subsystem that are not required yet but are expected to be required soon in response to receiving a client request to access a second plurality of data blocks on the mass storage device;
pre-fetching the first plurality of data blocks into a cache memory, the first plurality of data blocks having a plurality of physical volume block numbers (pvbns), a pvbn being a block number on the mass storage device;
examining the pvbns for the first plurality of data blocks that is pre-fetched into the cache memory to determine whether the first plurality of data blocks are fragmented on the mass storage device; and
moving the first plurality of data blocks during the pre-fetching of the first plurality of data blocks based on a determination that the first plurality of data blocks are fragmented on the mass storage device.
8. A system comprising:
a communication interface to receive requests from a client to access first data on a mass storage device coupled to the system;
a processor to interpret the requests;
filesystem logic to locate the first data on the mass storage device, wherein the first data is identified by the requests;
prediction logic to identify additional data on the mass storage device that may soon be requested and to pre-fetch the additional data from the mass storage device, the additional data having a plurality of physical volume block numbers (pvbns), a pvbn being a block number on the mass storage device;
cache memory to store the additional data that is pre-fetched from the mass storage device; and
reallocation logic to examine during the pre-fetch the pvbns for the additional data that is stored in the cache memory to determine whether the additional data is fragmented on the mass storage device and to write the additional data to different locations on the mass storage device during the pre-fetch of the additional data based on a determination that the additional data is fragmented on the mass storage device.
2. The method of
monitoring activity of a storage system; and
predicting the first plurality of data blocks to be pre-fetched based on the activity.
3. The method of
fetching the requested second plurality of data blocks from the mass storage device in response to the request from a client, and wherein
said determining comprises determining whether reallocation would improve access to a combined plurality of data blocks including the requested second plurality and the pre-fetched first plurality; and
said writing comprises writing the combined plurality of data blocks.
4. The method of
6. The method of
7. The method of
9. The system of
11. The system of
14. The non-transitory computer-readable medium of
maintaining an inode to identify a sequence of data blocks that make up a file; and
maintaining a block map to distinguish between used data blocks and unused data blocks.
15. The non-transitory computer-readable medium of
allocating a second inode to identify a second sequence of data blocks that make up a file, wherein a data block of the first sequence is also in the second sequence.
16. The non-transitory computer-readable medium of
17. The non-transitory computer-readable medium of
18. The non-transitory computer-readable medium of
monitoring activity affecting the filesystem, and wherein
the predicting operation refers to information collected by the monitoring operation.
20. The method of
predicting a client interaction that will require data from the sequential first set of data blocks, wherein
the read operation is performed before the client interaction.
21. The method of
placing data blocks of the at least two subsets in ascending order.
22. The method of
The invention relates to data storage operations. More specifically, the invention relates to low-computational-cost methods for detecting and reducing fragmentation in objects stored on a mass storage device.
Many contemporary data processing systems consume and/or produce vast quantities of data. Electromechanical devices such as hard disk drives are often used to store this data during processing or for later review. The mechanical nature of many types of mass storage devices limits their speed to a fraction of the system's potential processing speed, so measures must be taken to ameliorate the effects of slow storage.
Mass storage devices are commonly viewed as providing a series of addressable locations in which data can be stored. Some devices (such as tape drives) permit storage locations to be accessed in sequential order, while other devices (such as hard disks) permit random access. Each addressable storage location can usually hold several data bytes; such a location is called a “block.” Common block sizes are 512 bytes, 1024 bytes and 4096 bytes, though other sizes may also be encountered. A “mass storage device” may be constructed from a number of individual devices operated together to give the impression of a single device with certain desirable characteristics. For example, a Redundant Array of Independent Disks (“RAID array”) may contain two or more hard disks with data spread among them to obtain increased transfer speed, improved fault tolerance or simply increased storage capacity. The placement of data (and calculation and storage of error detection and correction information) on various devices in a RAID array may be managed by hardware and/or software.
Occasionally, the entire capacity of a storage device is dedicated to holding a single data object, but more often a set of interrelated data structures called a “filesystem” is used to divide the storage available among a plurality of data files. Filesystems usually provide a hierarchical directory structure to organize the files on the storage device. Note that a file in a filesystem is basically a sequence of stored bytes, so it can be treated identically to a mass storage device for many purposes. For example, a second filesystem can be created in a file on a first filesystem. The second filesystem can be used to divide the storage space of the file among a plurality of data files, all of which reside within the file on the first filesystem. Such nested filesystems can be constructed to an arbitrary depth, although depths exceeding one or two levels are not particularly useful. A file that contains a nested filesystem is called a “container file.”
The logic and procedures used to maintain a filesystem (including its files and directories) within storage provided by an underlying mass storage device or container file can have a profound effect on data storage operation speed. This, in turn, can affect the speed of processing operations that read and write data in files. Thus, filesystem optimizations can improve overall system performance.
Read reallocation is a technique that can improve a storage system's performance on large sequential reads. When a read request calls for many data blocks to be copied from a mass storage device into system memory, the read may proceed faster if the data blocks are located physically near one another and/or in sequential order on the storage device. Prior-art systems recognize the benefit of read reallocation under the rubric of file defragmentation.
Techniques that reduce fragmentation without explicit, time-consuming defragmentation cycles may be useful in improving storage operations.
A mass storage device access optimizer uses information collected when data blocks are pre-fetched from storage to decide whether to reallocate some or all of the data blocks for improved access.
Embodiments of the invention are illustrated by way of example and not by way of limitation in the figures of the accompanying drawings in which like references indicate similar elements. It should be noted that references to “an” or “one” embodiment in this disclosure are not necessarily to the same embodiment, and such references mean “at least one.”
When a storage client requests data that is stored on a mass storage device of a storage server, filesystem management logic at the server may read extra data blocks that have not yet been requested by the client. Reading these extra blocks (“speculative reading” or “read-ahead”) may save time if the client later requests the pre-fetched data. However, even if the client does not request the pre-fetched data, the filesystem logic has already spent the processing time required to locate the read-ahead data blocks on the storage device and incurred the input/output (“I/O”) cost of reading the data into memory. Thus, information about fragmentation in the data blocks is available, and part of the cost of defragmenting the blocks (that of finding and loading the blocks into memory) has already been borne. Instead of simply discarding unused read-ahead data, an embodiment of the invention can, if the blocks were fragmented, mark the data for re-writing in a less-fragmented location. This process can salvage some value from an erroneous read-ahead prediction (otherwise, the computational and I/O costs would simply appear to users as system “slowness,” without the offsetting benefit of faster future access). Of course, if the read-ahead prediction is correct, then embodiments of the invention get two wins for the price of one: the correctly-predicted successive reads proceed faster, and subsequent reads may be faster as well.
Embodiments of the invention can be used in almost any system that stores and retrieves data on a mass storage device (or a storage subsystem) in accordance with space management information maintained in a filesystem. However, certain environments are particularly dependent upon storage system performance, and may consequently derive particular benefit from the techniques described herein. Some of these environments are described here in greater detail. It is appreciated that filesystem operations are quite complex, and a concrete implementation may differ from the systems described here in many respects. However, the principles underlying embodiments of the invention will be clear to those of ordinary skill in the relevant arts, and can be adapted to fit most implementations.
After protocol processing, a client's request may be forwarded to a filesystem manager 440, which administers the storage space available from server 300 and ensures that data can be reliably stored and retrieved. Filesystem manager 440 interacts with storage drivers 450 to read or write data on mass storage devices 460, which may be operated as a RAID array. Filesystem managers that can benefit from an embodiment of the invention are found in several commercially-available storage server systems, including the Data ONTAP family of products from Network Appliance, Inc. of Sunnyvale, Calif., which implement the Write Anywhere File Layout (“WAFL”) filesystem. Filesystem managers that implement copy-on-write and write-in-place filesystems can also use embodiments of the invention.
Filesystem manager 440 maintains various data structures to perform its duties. Most filesystems maintain at least two main types of information: inodes 470 and a block map 480. Specific filesystem implementations may divide the information up differently, and may keep many other ancillary data structures as well, but will generally have data with semantics similar to inodes 470 and block map 480, described below. For the purposes of understanding embodiments of the invention, an inode is a data structure that contains (or leads to) information to identify a sequence of data blocks that contain data in a file or other object. A block map is a data structure that indicates, for each data block of a plurality of blocks, whether the block is in use or is free.
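To make these semantics concrete, the following sketch models an inode and a block map in Python. It is an illustration only, not the actual on-disk format of WAFL or any other filesystem; the names and fields are assumptions made for the example.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Inode:
    """Identifies the sequence of data blocks that make up one file."""
    inode_number: int
    size_bytes: int
    pvbns: List[int] = field(default_factory=list)  # block numbers, in file order

@dataclass
class BlockMap:
    """Indicates, for each block on the device, whether it is in use or free."""
    in_use: List[bool]

    def allocate(self) -> int:
        """Find a free block, mark it used, and return its block number."""
        for pvbn, used in enumerate(self.in_use):
            if not used:
                self.in_use[pvbn] = True
                return pvbn
        raise RuntimeError("device full")

    def free(self, pvbn: int) -> None:
        self.in_use[pvbn] = False

# As noted below, a file's name lives in a directory entry, not in the inode:
directory = {"report.txt": Inode(inode_number=470, size_bytes=12288, pvbns=[17, 18, 95])}
```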
Many filesystem managers maintain a data block cache 490 containing copies of data from mass storage devices 460, but stored in a memory that can be accessed faster than the electromechanical devices. Cache 490 may contain copies of data blocks that were recently requested by a client (492, 494), copies of data blocks that have been modified by a client but not yet written back to a storage device (496), and—of relevance to embodiments of the invention—data blocks 498 that have not been requested by a client, but that read-ahead logic 443 has determined are likely to be requested in the future.
Read-ahead logic 443, which may be implemented as software, firmware, hardware (e.g., an Application Specific Integrated Circuit or “ASIC,” or a Field-Programmable Gate Array, “FPGA”) or a combination of these, may monitor clients' access patterns and other information to decide when reading more data than is strictly required to fulfill pending requests may be beneficial. For example, if a client has recently requested several successive portions of a file, read-ahead logic 443 may predict that the client will request more data from the file, and proactively load that data into cache 490. Pre-fetched or read-ahead data is different from other data read from a mass storage device, although the procedures and subsystems used to get the data from a mass storage device into memory are usually the same. The difference is that no client has yet requested the pre-fetched data, the data may never be used, and no client or process is waiting for it when the decision to load it is made. A system may pre-fetch data when it anticipates that the data will be useful (i.e., that a client will ask for the data, or that the system will need to refer to the data to fulfill a client's request). If the system's prediction is correct, the data will be ready to send to a client that requests it. If the prediction is wrong, the system will have done extra work that turned out to be unnecessary. A system may pre-fetch data that it expects a client will request, and may also pre-fetch other data that it will use internally to fulfill a client's expected request. For example, if read-ahead logic 443 predicts that a client will open a file in a directory, blocks containing inode data and directory data may be pre-fetched in anticipation of the open request. This data may not be returned to the client, but may be used in performing the client's request (if the expected request actually occurs). If the predictions of read-ahead logic 443 are often wrong over a period of time, a different prediction algorithm may be tried, or read-ahead logic may be turned off temporarily, since the system's current workload does not seem to be predictable.
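As one illustration of how such prediction logic might detect a sequential access pattern, the sketch below triggers a read-ahead after a run of consecutive block reads and tracks prediction correctness. The trigger threshold and pre-fetch depth are assumptions, not values taken from any particular implementation.

```python
class ReadAheadPredictor:
    """Toy model of read-ahead logic: predict after a run of sequential reads."""

    def __init__(self, trigger_run: int = 3, prefetch_blocks: int = 128):
        self.trigger_run = trigger_run          # consecutive reads before predicting
        self.prefetch_blocks = prefetch_blocks  # how far ahead to speculate
        self.last_block = None
        self.run_length = 0
        self.hits = 0    # speculative reads later requested by a client
        self.misses = 0  # speculative reads never requested

    def observe_read(self, file_block: int):
        """Record a client read; return a range of blocks to pre-fetch, or None."""
        if self.last_block is not None and file_block == self.last_block + 1:
            self.run_length += 1
        else:
            self.run_length = 1
        self.last_block = file_block
        if self.run_length >= self.trigger_run:
            return range(file_block + 1, file_block + 1 + self.prefetch_blocks)
        return None

    def accuracy(self) -> float:
        """Fraction of predictions that were correct; if this drops, the
        system might pre-fetch less or try another algorithm."""
        total = self.hits + self.misses
        return self.hits / total if total else 0.0

predictor = ReadAheadPredictor()
for block in (40, 41, 42):
    plan = predictor.observe_read(block)
print(list(plan)[:3])  # [43, 44, 45]: blocks to pre-fetch speculatively
```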
As discussed in greater detail below, logic in the storage server must locate the mass storage device blocks that contain the read-ahead data so that it can be loaded, so information about the blocks' absolute location, and location relative to other blocks, is available to an embodiment of the invention if a read-ahead is performed. A block's absolute location is its address or index relative to a known point. For example, a physical mass storage device usually enumerates blocks sequentially from the start of the device, starting at zero and continuing to the last block. A file can be seen as a sequence of data bytes, so the absolute location of a block within a file may be the offset within the file of the bytes that make up the data block.
This information is used by read reallocation logic 446 to identify blocks that are out of sequence, are located far from other related blocks, or are otherwise disposed on the mass storage device in a way that impairs their efficient retrieval. (An example of blocks that may be difficult to retrieve efficiently is presented below.)
Note that inode 470 does not contain a name for the file. Instead, filesystems typically store the file's name and a pointer to its inode in a directory, which can be thought of as (and often is) simply a specially-formatted file containing a list of names and pointers to associated inodes.
Block map 480 is a second data structure that indicates which data blocks of the underlying mass storage device are in use. It is appreciated that the data in the block map is redundant, in the sense that it could be recreated by examining all the inodes to find in-use blocks. However, filesystems usually maintain block maps for improved efficiency and fault detection/recovery.
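The redundancy noted above suggests a simple consistency check. The sketch below, which reuses the hypothetical Inode and BlockMap classes from the earlier example, rebuilds a block map by walking every inode and compares it against the stored map; a mismatch would indicate filesystem corruption.

```python
def rebuild_block_map(inodes, device_blocks: int):
    """Recreate block-usage information from inodes alone."""
    in_use = [False] * device_blocks
    for inode in inodes:
        for pvbn in inode.pvbns:
            in_use[pvbn] = True
    return in_use

def block_map_is_consistent(block_map, inodes, device_blocks: int) -> bool:
    """Fault detection: the stored map should match one rebuilt from inodes."""
    return block_map.in_use == rebuild_block_map(inodes, device_blocks)
```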
In light of the foregoing material, the method of one embodiment can now be described; the parenthesized reference numerals below correspond to the operations of its flow chart.
If the storage activity suggests that reasonably accurate predictions of future read operations can be made (710), and if adequate cache memory to hold read-ahead data is available (715), the system computes the offset(s) and length(s) of expected reads (720). Here, “reasonably accurate” and “adequate cache memory” imply tunable parameters. If, for example, system I/O activity is moderate and cache usage is low, the system may decide to risk pre-fetching data that is not particularly likely to be needed. On the other hand, if the system is already busy performing I/O or the cache is nearly full, only data that is fairly certain to be requested soon may be speculatively read. Predicting future reads may take into account information about the number of active clients and the type of access the clients are using. Prediction logic may also consider the correctness of recent predictions—if many recent predictions are correct, then it is likely that the storage server's current workload is similar to a model workload on which the predictions are based. On the other hand, if many recent predictions are incorrect, the system may pre-fetch fewer blocks (or cease pre-fetch activity altogether) until the workload changes to something that is more predictable.
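A minimal sketch of such tunable gating appears below; the specific thresholds and the way prediction confidence is traded off against load are assumptions for illustration, not parameters disclosed by any particular system.

```python
def should_prefetch(prediction_confidence: float,
                    cache_free_fraction: float,
                    io_load: float) -> bool:
    """Decide whether a speculative read is worth its cost right now."""
    if cache_free_fraction < 0.05:   # cache nearly full: only near-certain reads
        return prediction_confidence > 0.95
    if io_load > 0.80:               # device already busy with demand I/O
        return prediction_confidence > 0.90
    # Lightly loaded system: accept riskier predictions.
    return prediction_confidence > 0.50

# Moderate I/O and a mostly empty cache justify pre-fetching even a 60% guess:
assert should_prefetch(0.60, cache_free_fraction=0.70, io_load=0.40)
```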
Throughout this description, “predicting” has been used in its colloquial sense of “declaring in advance” or “making an inference regarding a future event based on probability theory” (Webster's Third New International Dictionary). Prediction problems arise in many important disciplines such as signal analysis and data compression, and a great deal is known about designing algorithms to predict the behavior of systems based on limited or incomplete information. Since these techniques are known and competently described elsewhere, they are not discussed here. An implementer of an embodiment of the invention may wish to investigate techniques such as Prediction by Partial Matching (“PPM”), lossless encoding algorithms, and the Efficient Universal Prediction Algorithm described by Jacob Ziv in his eponymous 2002 paper. It is appreciated that one of ordinary skill will likely be able to incorporate future advancements in prediction theory into an embodiment without undue experimentation.
After predictions are made about future reads, filesystem logic refers to various data structures (including, for example, the inodes and block map described above) to translate the predicted offsets and lengths into physical volume block numbers (PVBNs).
Eventually, an embodiment will have a set of PVBNs that can be used to read data blocks from a mass storage device. These blocks are read into cache memory (740). The PVBNs also indicate whether (and to what extent) the data blocks are fragmented on the underlying storage device. If the blocks are out of (physical) order (or are non-contiguous or otherwise time-consuming to read) (745), and if access to the blocks could be improved by rearranging them on the storage device (750), an embodiment selects a data reallocation strategy to perform the rearrangement (755) and then moves the data blocks (760). If the blocks are (nearly) in order, or if access is unlikely to be improved, no rearrangement is attempted. In some embodiments, the final operation (moving the data blocks) may be omitted. Merely collecting information about the fragmentation state of files and data objects on a mass storage device may be useful to guide information technology managers' decisions relating to performing backups, adding storage, and so on.
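A sketch of this core flow follows, keyed loosely to the reference numerals above (740-760). The fragmentation test, counting breaks in physical block order, is one plausible heuristic rather than the method's definitive rule, and the write callback stands in for whatever reallocation strategy an embodiment selects.

```python
def count_fragments(pvbns):
    """Number of physically contiguous runs that a list of blocks occupies."""
    runs = 1
    for prev, cur in zip(pvbns, pvbns[1:]):
        if cur != prev + 1:
            runs += 1
    return runs

def after_prefetch(pvbns, cached_blocks, write_blocks, max_useful_runs=1):
    """Called once pre-fetched blocks are in cache (740): judge fragmentation
    (745/750) at near-zero extra cost, since the pvbns are already known."""
    if count_fragments(pvbns) > max_useful_runs:
        # The read is already paid for; only the write remains (755/760).
        write_blocks(cached_blocks)  # rewrite to a contiguous area

# Blocks 100-103 plus an outlier at 250 occupy two runs, so they qualify:
assert count_fragments([100, 101, 102, 103, 250]) == 2
```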
Block rearrangement strategies, like most defragmentation techniques, involve moving data from one place on the mass storage device to another. Clearly, this requires both a read operation and a write operation. However, the read operation has already been performed as part of the speculative pre-fetch, so only the cost of the write operation remains. Furthermore, since the read operation was performed with the expectation that the data would soon be requested by a client, it is (on average) less costly than an arbitrary read that is only part of a defragmentation process. In addition, collecting and (possibly) acting on fragmentation information as described here permits the system to extract value from mistaken prefetch predictions. That is, even if the speculatively-read data is not requested by a client, the computational cycles and I/O bandwidth consumed to read it are not completely wasted. Instead, the system has an opportunity to improve the layout of data blocks on the mass storage device.
It should be appreciated that data blocks need not be stored strictly sequentially or contiguously on a mass storage device. For example, a sequence of related data blocks (e.g. data blocks of the same file) interrupted by a few unrelated blocks can often be read all together: it is faster to read the whole sequence of blocks and discard the unrelated data blocks than to read several sub-sequences containing only the related blocks. Furthermore, a contiguous group of data blocks may not be stored on the mass storage device in the same order they appear in the file, but they can all be read together efficiently, and pointers or other system data structures adjusted in memory so that the data blocks can be delivered to a client in the correct order. No reallocation or defragmenting may be necessary in these cases.
Block reallocation performed in connection with speculatively-read or pre-fetched data may only optimize a subset of all the blocks in a file or other data object. For example, a simple read predictor that forecasts a read of n successive blocks whenever it notices a client's read of the previous n blocks would never predict a read of the first blocks in a file, so these blocks would never be prefetched and an embodiment of the invention would not normally consider reallocating them. However, an embodiment may consider pre-fetched data blocks and blocks loaded in response to a client's request together, and make reallocation decisions based on a set containing both.
In any case, optimizing access to just portions of a file or other group of data blocks can still provide overall improved performance. In addition, it is appreciated that optimizing excessively long portions of a file may result in diminishing gains compared to the work of optimization. Mass storage device hardware limitations, I/O interface limitations, and cache memory availability may restrict the maximum number of data blocks that can be read at once. Optimizing data files to contain sequential and/or contiguous groups larger than this maximum number may not provide much additional benefit. For example, if the maximum number of blocks that can be read in one operation is 128, then most of the benefit of read reallocation can be realized by coalescing portions of the file into groups of about 128 blocks. A group of 256 blocks would be read as two separate groups of 128, so there may be little point in ensuring that the second set of 128 follows immediately after the first set.
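The diminishing-returns argument can be made concrete with a little arithmetic, sketched below under the assumption (taken from the example above) that one read operation can transfer at most 128 blocks.

```python
def read_ops_needed(run_lengths, max_read=128):
    """Read operations required to fetch runs of contiguous blocks."""
    ops = 0
    for run in run_lengths:
        ops += -(-run // max_read)  # ceiling division
    return ops

# 256 contiguous blocks still cost two read operations, the same as two
# well-placed but separate runs of 128 -- so merging them gains little:
assert read_ops_needed([256]) == read_ops_needed([128, 128]) == 2
```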
Referring now to another operating environment: client 810 may create and maintain a filesystem within an array 860 of n blocks of storage that seems, from the client's perspective, to be a directly-connected mass storage device. Data files may be created within this filesystem. However, the underlying blocks of data storage are actually provided by an array of mass storage devices 850, which is connected to storage appliance 820. Mass storage devices 850 provide a larger array 870 of m data blocks. Storage appliance 820 may create a second filesystem within array 870, and an ordinary data file within this second filesystem actually contains the data blocks within which client 810 constructs its filesystem. The black rectangles in the figure represent portions of this ordinary data file. This arrangement is another example of a container file, but here two different systems maintain the two filesystems: client 810 maintains one filesystem, and storage appliance 820 maintains the other. Note that the file may be fragmented as it is stored on mass storage devices 850, but client 810 is probably unaware of the fragmentation.
In this environment, it is likely that client 810 cannot determine the physical arrangement of the data blocks of array 860, so any defragmentation client 810 attempts to perform is as likely to reduce performance as to enhance it. On the other hand, storage appliance 820 may be unable to interpret the filesystem that client 810 creates in the data file in array 870. Thus, traditional defragmentation methods cannot be used by appliance 820, either. However, according to an embodiment of the invention, appliance 820 can monitor the operations of client 810 and make predictions about which data blocks will be accessed next. These blocks may be prefetched into cache memory, and the information collected during these speculative reads can be used to select blocks that could beneficially be moved or reallocated.
Embodiments of the invention are also useful in another environment: a storage server whose filesystem writes modified data blocks to new locations, rather than overwriting the old data in place.
In a system that operates this way, an earlier version of a file may remain available even after a client request changes or deletes the file. The “current” (or most recent) version of the file is indicated by inode 960, while an earlier version is available through inode 910.
With regard to an embodiment of the invention, note that even if blocks 920, 925 and 930 were arranged sequentially and contiguously on a mass storage device, blocks 920, 970 and 930 of the “current” file may not be so arranged. Filesystem management logic that operates this way may frequently create or cause file fragmentation. Also, because blocks 920 and 930 are shared between two files, it may not be easy to decide which sequence (920, 925, 930; or 920, 970, 930) should be reallocated for better access. Because an embodiment of the invention can operate based on pre-fetch predictions that may not be grounded in an analysis of filesystem structures, it can improve data access in a storage server that continues to provide access to older versions of files after the files are modified.
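The shared-block situation can be illustrated with the block numbers from this example. The sketch below is purely illustrative; it shows why neither version's block sequence can be reallocated without considering the other.

```python
# Two versions of the same file share blocks 920 and 930:
old_version = [920, 925, 930]      # block sequence reachable via inode 910
current_version = [920, 970, 930]  # sequence via inode 960, after a client
                                   # overwrote the middle block

shared = set(old_version) & set(current_version)
print(shared)  # {920, 930}: moving either block affects both versions
```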
Embodiments of the invention can, of course, operate with data blocks that are pre-fetched because the system recognizes that a file is being read sequentially. However, it is not necessary for the pre-fetch predictions to be based on files or filesystem structures. Pre-fetch decisions can also be made by monitoring clients' block-level access, or by analyzing historical data (e.g., a storage server may notice that a certain group of data blocks is often requested after a certain other group, and so may pre-fetch the first group after a request for blocks from the second).
Consider a concrete example: an application uses a data file of 128 4 KB blocks. Assume that these blocks are initially stored contiguously (all together, without any unrelated blocks interspersed among them) on the mass storage device.
While the file is laid out this way, the application could read all 128 blocks with a few large, efficient read operations. Suppose, however, that the application modifies some blocks of the file, and that the filesystem writes the modified data to newly allocated blocks elsewhere on the device.
Further operations by this application might result in the allocation of other eight-block groups to contain data from blocks in group 1010 or 1020 that are modified. Eventually, group 1010 or 1020 may become so fragmented that it is worthwhile to reallocate the entire group. Thus, generally speaking, an embodiment of the invention may look for small fragmented sections of blocks during read-ahead. If the overall fragmentation of the segment is large (many fragments are found, and/or the fragments are large), the embodiment may reallocate the whole segment; otherwise, it may mark only the small, fragmented sections for re-writing.
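One way this whole-segment-versus-small-sections policy might be expressed is sketched below; the thresholds are assumptions chosen for illustration. As in the text, a "fragment" is a run of blocks that is out of place relative to the rest of the pre-fetched segment.

```python
def plan_reallocation(fragment_sizes, segment_size,
                      max_fragments=4, max_fragment_fraction=0.25):
    """Choose between rewriting a whole segment or only its fragmented parts."""
    fragmented_blocks = sum(fragment_sizes)
    if (len(fragment_sizes) > max_fragments
            or fragmented_blocks > max_fragment_fraction * segment_size):
        return "rewrite whole segment"
    return "rewrite only the fragmented sections"

print(plan_reallocation([8], segment_size=128))             # one small patch
print(plan_reallocation([8, 8, 8, 8, 8], segment_size=128)) # heavily fragmented
```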
An embodiment of the invention may be a machine-readable medium having stored thereon data and instructions which cause a programmable processor to perform operations as described above. In other embodiments, the operations might be performed by specific hardware components that contain hardwired logic. Those operations might alternatively be performed by any combination of programmed computer components and custom hardware components.
A machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer), including but not limited to Compact Disc Read-Only Memory (CD-ROM), Read-Only Memory (ROM), Random Access Memory (RAM), flash memory, and any of various forms of Erasable Programmable Read-Only Memory (EPROM).
The applications of the present invention have been described largely by reference to specific examples and in terms of particular allocations of functionality to certain hardware and/or software components. However, those of skill in the art will recognize that storage fragmentation detection during read-ahead processing can also be achieved by software and hardware that distribute the functions of embodiments of this invention differently than herein described. Such variations and implementations are understood to be captured according to the following claims.
Inventors: Robert M. English; Robert L. Fair