Systems and methods for managing cached information maintained in a distributed storage system are provided. Requests for information within the distributed storage system are directed to specific coordinators. Each coordinator can maintain a cached version of information maintained at various information sources within the distributed storage system. To manage the cache, the system can utilize generation identifiers that identify the component within the distributed storage system that last updated an information source. Each coordinator can determine whether a cached version of information is valid based on an analysis of generation identifiers.
8. A computer-implemented method comprising:
maintaining a first coordinator identifier at a first coordinator of a plurality of coordinators configured to facilitate access to data on a distributed storage system, wherein the first coordinator identifier identifies information on the distributed storage system that the first coordinator has most recently modified;
receiving a request from a computing device for a first information item stored within the distributed storage system;
obtaining a generations identifier identifying one of the plurality of coordinators that most recently modified the first information item;
processing the generations identifier to determine that the generations identifier matches the first coordinator identifier; and
in response to the determination that the generations identifier matches the first coordinator identifier:
retrieving a cached version of the first information item from a storage cache of the first coordinator; and
transmitting the cached version of the first information item to the computing device.
15. Non-transitory computer-readable storage media including computer-executable instructions comprising:
first computer-executable instructions that, when executed by a processor of a first coordinator, cause the processor to maintain, within a storage cache, a cached version of a first information item stored within a distributed storage system; and
second computer-executable instructions that, when executed by the processor, cause the processor to:
receive a request from a computing device for the first information item;
obtain a generations identifier identifying one of a plurality of coordinators associated with the distributed storage system that most recently modified the first information item on the distributed storage system;
process the generations identifier to determine that the generations identifier matches a coordinator identifier associated with the first coordinator, the coordinator identifier identifying information on the distributed storage system that the first coordinator has most recently modified; and
in response to the determination that the generations identifier matches the coordinator identifier, transmit the cached version of the first information item to the computing device.
1. A system comprising:
non-transitory data storage comprising a data cache, the data cache including a cached version of a first information item stored within a distributed storage system; and
a first coordinator computing device comprising a processor, the first coordinator computing device being included within a plurality of coordinators configured to facilitate access to data on the distributed storage system, the first coordinator computing device in communication with the data cache and associated with a first coordinator identifier identifying information on the distributed storage system that the first coordinator computing device has most recently modified, the first coordinator computing device configured to:
receive a request from a client computing device for the first information item;
obtain a generations identifier identifying one of the plurality of coordinators that most recently modified the first information item;
process the generations identifier to determine that the generations identifier matches the first coordinator identifier; and
in response to the determination that the generations identifier matches the first coordinator identifier, transmit the cached version of the first information item to the client computing device.
2. The system of
process the generations identifier to determine that the generations identifier does not match the first coordinator identifier; and
in response to the determination that the generations identifier does not match the first coordinator identifier:
retrieve an updated version of the first information item from the distributed storage system; and
transmit the updated version of the first information item to the client computing device.
3. The system of
4. The system of
5. The system of
6. The system of
7. The system of
9. The computer-implemented method of
processing the generations identifier to determine that the generations identifier does not match the first coordinator identifier; and
in response to the determination that the generations identifier does not match the first coordinator identifier, transmitting a version of the first information item retrieved from the distributed storage system to the computing device.
10. The computer-implemented method of
11. The computer-implemented method of
receiving a request to modify the first information item; and
generating an updated generations identifier for the first information item, the updated generations identifier identifying the first coordinator as last modifying the first information item.
12. The computer-implemented method of
the request to modify the first information item comprises at least one of a write request or a delete request.
13. The computer-implemented method of
14. The computer-implemented method of
16. The non-transitory computer-readable storage media of
process the generations identifier to determine that the generations identifier does not match the first coordinator identifier; and
in response to the determination that the generations identifier does not match the first coordinator identifier, transmit a version of the first information item retrieved from the distributed storage system to the computing device.
17. The non-transitory computer-readable storage media of
18. The non-transitory computer-readable storage media of
19. The non-transitory computer-readable storage media of
receive a request to modify the first information item; and
generate an updated generations identifier for the first information item, the updated generations identifier identifying the first coordinator as last modifying the first information item.
20. The non-transitory computer-readable storage media of
21. The non-transitory computer-readable storage media of
This application is a continuation of U.S. patent application Ser. No. 12/533,873, entitled MANAGEMENT OF CACHED OBJECT MAPPING INFORMATION CORRESPONDING TO A DISTRIBUTED STORAGE SYSTEM, and filed Jul. 31, 2009, the disclosure of which is incorporated herein by reference in its entirety.
Generally described, many different computing applications rely on some type of storage medium for the persistent storage of various kinds of application data. For example, common office applications and multimedia applications generate and use application data of various types and formats, such as documents, spreadsheets, still images, audio and video data, among others. Frequently, such data is stored for repeated access or use on behalf of a user. For example, a user may wish to store and work with a number of documents or other data over a period of time, and may expect that the data will be readily available in a predictable state when needed.
In some computing systems, the storage medium used by applications for persistent application data storage is most commonly a magnetic fixed drive or “hard drive.” Such devices are typically either integrated within a computer system that executes the applications or accessible to that system via a local peripheral interface or a network. Devices serving as application storage are managed by an operating system to present a consistent storage interface, such as a file system interface, to the various applications needing storage access.
The conventional model of application storage presents several limitations. First, it generally limits the accessibility of application data. For example, if application data is stored on the local hard drive of a particular computer system, it may be inaccessible to applications executing on other systems. Even if the data is stored on a network-accessible device, applications that execute on systems outside the immediate network may not be able to access that device. For example, for security reasons, enterprises commonly restrict access to their local area networks (LANs) such that systems external to the enterprise cannot access systems or resources within the enterprise. Thus, applications that execute on portable devices (e.g., notebook or handheld computers, personal digital assistants, mobile telephony devices, etc.) may experience difficulty accessing data that is persistently associated with fixed systems or networks.
Conventional application storage model approaches also may fail to adequately ensure the reliability of stored data. For example, conventional operating systems typically store one copy of application data on one storage device by default, requiring a user or application to generate and manage its own copies of application data if data redundancy is desired. While individual storage devices or third-party software may provide some degree of redundancy, these features may not be consistently available to applications, as the storage resources available to applications may vary widely across application installations. The operating-system-mediated conventional storage model may also limit the cross-platform accessibility of data. For example, different operating systems may store data for the same application in different, incompatible formats, which may make it difficult for users of applications executing on one platform (e.g., operating system and underlying computer system hardware) to access data stored by applications executing on different platforms.
The foregoing aspects and many of the attendant advantages of this invention will become more readily appreciated as the same become better understood by reference to the following detailed description, when taken in conjunction with the accompanying drawings, wherein:
Generally described, the present disclosure is directed to managing keymap information corresponding to objects maintained in a distributed storage network. Specifically, aspects of the disclosure will be described with regard to techniques for the management of keymap coordinators within a keymap subsystem of a distributed storage network. Specifically, in accordance with embodiments, keymap information requests may be routed to specific keymap coordinator components that are part of a keymap subsystem by utilizing a consistent hashing scheme. The consistent hashing scheme may be propagated to different components via a peer-to-peer distribution model of a storage network to facilitate interaction with the keymap subsystem. Additionally, aspects of the disclosure will be described with regard to various techniques for implementing and managing cached keymap information within a keymap subsystem. Specifically, in accordance with embodiments, cached keymap information may be associated with identifiers, generally referred to as generation identifiers, utilized to facilitate the identification of expired cached keymap information. In accordance with other embodiments, cached keymap information may also incorporate keymap information placeholders indicative of pending updates to keymap information. In accordance with still other embodiments, a keymap coordinator, or other component of a keymap subsystem, may apply cache optimization techniques to facilitate the caching of information related to multiple components within the storage network.
Although the present disclosure will be described with regard to illustrative examples and embodiments, one skilled in the art will appreciate that the disclosed embodiments and examples should not be construed as limiting. The various embodiments may be practiced in combination or individually. Accordingly, the disclosed and described embodiments should not be construed as requiring any particular combination or excluding any combination unless specifically identified in the disclosure.
Generally described, one embodiment of a storage model for providing data storage to users is implemented as a service, such as a Web service. In this example, a storage service interface is provided as a customer- or user-facing interface to the storage service. According to the model presented to a user by the interface, the storage service may be organized as an arbitrary number of stores, generally referred to as “buckets,” that are accessible via the interface. Each bucket may be configured to store an arbitrary number of objects, which in turn may store data specified by a user of the storage service.
Generally speaking, a bucket may function as the root of an object namespace that is associated with a user of the storage service. For example, a bucket may be analogous to a file system directory or folder. In some embodiments, individual buckets may also form the basis for accounting for usage of the storage service. For example, a user may be associated with one or more buckets for billing purposes, and that user may be billed for usage of storage resources (e.g., storage of objects) that hierarchically reside within the namespace established by those buckets.
Illustratively, each of the buckets includes associated metadata. Generally speaking, the metadata may include any suitable metadata that may be used to describe aspects or properties of a given bucket. For example, the metadata may include information identifying the date of a bucket's creation, the identity of its creator, whether the bucket has any objects associated with it, or other suitable information. In some embodiments, the metadata may include information indicative of usage characteristics of a bucket, such as the total size of objects associated with the bucket, the access history of users with respect to the bucket and/or its associated objects, the billing history associated with the bucket, or any other suitable information related to current or historical usage of the bucket. In one embodiment, each bucket may be associated with a respective unique identifier, which may be specified by a user or automatically assigned by the storage service. The unique identifier may be stored within the metadata or as a separate property or field of the bucket. It is noted that in some embodiments, a given bucket may not include explicit references, pointers or other information corresponding to the objects associated with that bucket. Rather, as described in greater detail below, location and selection of objects may be performed through the use of a separate mapping facility referred to herein as a keymap.
Still further, each of the buckets may also include respective access policies. Generally described, an access policy may include any information needed to control access to objects associated with a bucket. The access policy may include information identifying the client or clients allowed to access a bucket and its associated objects, and in what capacity. For example, access policy may store a user identifier and/or authentication credentials (e.g., public/private authentication keys) for one or more clients, and may further specify whether a given client is allowed to modify or only read objects. The access policy may also implement default or group-oriented policies (e.g., by allowing universal read access but limiting write access to objects to a specified client or group of clients) or any other desired security model.
In an illustrative embodiment, buckets may be associated with one or more objects, each of which may include respective metadata and data. Additionally, a bucket may exist with no objects. Generally speaking, data of an object may correspond to any sequence of bits. The type of data represented by the bits stored within an object may be transparent to the storage service. That is, the bits may represent text data, executable program code, audio, video or image data, or any other type of digital data, and the storage service may not necessarily distinguish among these various data types in storing and manipulating objects. In some embodiments, the size of data may be limited to a fixed ceiling, while in other embodiments objects may be allowed to scale in size subject only to the physical storage resources available to the storage service.
Similarly, metadata may be configured to store any desired descriptive information about its corresponding object. For example, the metadata may include information about the date and/or time the corresponding object was created, the size of the object, the type of data stored by the object (e.g., a data type defined by the Multipurpose Internet Mail Extensions (MIME) standard), or any other type of descriptive information. In some embodiments, the metadata may store usage or history information indicative of user interactions with the corresponding object, as well as access policy information (e.g., permission information indicating the types of access various users may have to the object), object cost information (e.g., billing rate or history associated with the object), or any other suitable information or combination of types of information attributable to the object. For example, a client may provide metadata along with the object data to be stored. In other examples, the metadata may be generated by the system that manages storage service features. Some, all or none of the metadata may be accessible to a client having access rights to an object, depending on the type of metadata, specific provisions of the client's access rights, or other suitable factors.
In some embodiments, the storage service interface may be configured to support interaction between the storage service and its users according to a Web services model. For example, in one embodiment, the interface may be accessible by clients as a Web services endpoint having a Uniform Resource Locator (URL), e.g., http://identifier.storageprovider.com, to which Web services calls generated by service clients may be directed for processing. A Web service may refer to any type of computing service that is made available to a requesting client via a request interface that includes one or more Internet-based application layer data transport protocols, such as a version of the Hypertext Transport Protocol (HTTP) or another suitable protocol.
Web services may be implemented in a variety of architectural styles, using a variety of enabling service protocols. For example, in a Representational State Transfer (REST)-style Web services architecture, the parameters that are pertinent to a Web services call (e.g., specifying the type of service requested, user credentials, user data to be operated on, etc.) may be specified as parameters to the data transport command that invokes the Web services call to the Web services endpoint, such as an HTTP GET or PUT command. In some implementations, REST-style Web services architectures are stateless, in that each Web services call may contain all the information necessary to process that call without reference to external state information. In contrast to REST-style Web services architectures, document-based or message-based Web services architectures may encode the parameters and data pertinent to a Web services call as a document that may be transmitted to a Web services endpoint and then decoded and acted upon by the endpoint. In another embodiment, Web services may be implemented to utilize peer-to-peer distribution models and architectures, such as the BitTorrent file sharing communication protocol.
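As a purely illustrative sketch of the REST-style call just described (the endpoint URL and path layout are assumed, not specified by this disclosure), the operation and its parameters travel in the HTTP transport command itself rather than in an encoded document body:

```python
# Hypothetical REST-style PUT of an object into a bucket: the bucket name
# and object key are carried in the URL path, and the operation type is the
# HTTP method itself, with no separately encoded request document.
import urllib.request

request = urllib.request.Request(
    url="http://identifier.storageprovider.com/my-bucket/my-object-key",
    data=b"object payload bytes",
    method="PUT",
)
request.add_header("Content-Type", "application/octet-stream")

# The request is constructed but not dispatched here;
# urllib.request.urlopen(request) would send it to the endpoint.
assert request.get_method() == "PUT"
```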
In alternative embodiments, the interface may support interfaces other than Web services interfaces, instead of or in addition to a Web services interface. For example, an enterprise may implement a storage service for use by clients external to the enterprise, who may access the service via Web services protocols, as well as by users within the enterprise, who may use a different type of interface (e.g., a proprietary interface customized to the enterprise's intranet). In some embodiments, the interface may support each of the various types of interfacing protocols through which any user of the storage service may access the service. In other embodiments, different instances of the interface may be provided for each distinct interface approach. It is noted that in some embodiments, those aspects of the interface related to handling interactions with clients (e.g., receiving and responding to service requests) may be implemented separately from those aspects that implement the general architecture of the storage service (e.g., the organization of the service into a hierarchy of buckets and objects). In some such embodiments, the portion of the interface relating to client interaction (e.g., via Web services protocols) may be bypassed by certain users, such as those internal to an enterprise.
In one embodiment, individual objects may be identified within the distributed storage network using either of two distinct items of information: a key or a locator. Generally speaking, keys and locators may each include alphanumeric strings or other types of symbols that may be interpreted within the context of the namespace of the distributed storage network as a whole, although keys and locators may be interpreted in different ways. In one embodiment, a key may be specified by a client, or on behalf of a client, at the time a corresponding object is created within a particular bucket (e.g., in response to a request by the client to store a new object). If no key is specified by the client, a key may be assigned to the new object by the distributed storage network. In such an embodiment, each respective key associated with objects of a particular bucket may be required to be unique within the namespace of that bucket.
As referenced above, a locator may represent a globally unique identifier (GUID) of an object among all objects known to the distributed storage network. That is, while a key may be unique to a namespace associated with a particular bucket, a locator may be unique within a global namespace of all objects within all buckets. For example, a locator may include an alphanumeric string generated by the distributed storage network to be unique among other locators. As described in greater detail below, in some embodiments, multiple instances of an object may be replicated throughout the physical storage devices used to implement the distributed storage network, for example to increase data redundancy and fault tolerance. In such embodiments, a unique locator may exist for each replicated instance of a given object. As will be described in greater detail below, in illustrative embodiments, keymap information can correspond to a mapping of an object key to one or more corresponding locators. The keymap information may be maintained by a keymap subsystem and utilized by a distributed storage network to access objects maintained by the distributed storage network. Further, the keymap information may be updated, or otherwise modified, as instances of specific objects are created, deleted, or otherwise manipulated within the distributed storage network.
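The key-to-locator relationship described above can be sketched as follows. This is an illustrative simplification only (the class and method names are invented for the example, and real locators are network-generated rather than produced locally): a keymap resolves an object key, unique within a bucket's namespace, to the locators of every replicated instance of that object.

```python
# Illustrative sketch of a keymap: object key -> locators of replicated
# instances. Not the disclosure's implementation; names are hypothetical.
import uuid

class Keymap:
    """Maps object keys to the locators of replicated object instances."""

    def __init__(self):
        self._entries = {}  # key -> list of locator strings

    def put(self, key, num_replicas=3):
        # One globally unique locator exists per replicated instance.
        self._entries[key] = [uuid.uuid4().hex for _ in range(num_replicas)]
        return self._entries[key]

    def get(self, key):
        # Resolve a key to the locators of each stored instance.
        return self._entries.get(key, [])

    def delete(self, key):
        # Keymap information is modified as object instances are deleted.
        self._entries.pop(key, None)

keymap = Keymap()
keymap.put("photos/2009/beach.jpg")
assert len(keymap.get("photos/2009/beach.jpg")) == 3
keymap.delete("photos/2009/beach.jpg")
assert keymap.get("photos/2009/beach.jpg") == []
```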
With reference now to
The Web services interface 108 may be configured to interface with a keymap subsystem 110 for obtaining specific mappings of keys to object instance locators. The keymap subsystem 110 will be described in greater detail with regard to
In various embodiments, the components illustrated in
With continued reference to
In other embodiments, storage client 102 may be configured to provide access to Web services-based storage to other applications in a manner that is transparent to those applications. For example, storage client 102 may be configured to integrate with an operating system or file system to provide storage in accordance with a suitable variant of the storage model described above. However, the operating system or file system may present a different storage interface to applications, such as a conventional file system hierarchy of files, directories, and/or folders.
In various embodiments, the communication network 104 may encompass any suitable combination of networking hardware and protocols necessary to establish Web-based communications between storage clients 102 and the Web services interface 108. For example, the communication network 104 may generally encompass the various telecommunications networks and service providers that collectively implement the Internet. The communication network 104 may also include private networks such as local area networks (LANs) or wide area networks (WANs) as well as public or private wireless networks. For example, both a given storage client 102 and the Web services interface 108 may be respectively provisioned within enterprises having their own internal networks. In such an embodiment, the communication network 104 may include the hardware (e.g., modems, routers, switches, load balancers, proxy servers, etc.) and software (e.g., protocol stacks, accounting software, firewall/security software, etc.) necessary to establish a networking link between a given storage client 102 and the Internet as well as between the Internet and the Web services interface 108. It is noted that in some embodiments, storage clients 102 may communicate with the Web services interface 108 using a private network rather than the public Internet. For example, storage clients 102 may be provisioned within the same enterprise as the distributed storage network. In such a case, storage clients 102 may communicate with the Web services interface 108 entirely through a private communication network 104.
Turning now to
Upon receipt of a client request, the Web service interface 108 attempts to identify the object instances that correspond to the client request. Illustratively, the client request can correspond to requests to retrieve objects from the distributed storage network 100 or requests to add, modify, or delete objects from the distributed storage network. Examples of such requests were previously discussed with regard to GET and PUT instructions. Specifically, the Web service interface 108 obtains keymap information from the keymap subsystem 110. As previously discussed, each object can be represented by a key used to identify the object logically within the distributed storage network 100. As described above, the keymap information can correspond to a mapping of object keys to the locators of the instances of the object maintained within the distributed storage network 100. The Web service interface 108 can then utilize the keymap information from the keymap subsystem 110 to interface with the storage node interface 112 and identify the appropriate instances of the object that correspond to the storage client request.
With reference now to
Illustratively, the hash router components 302 include, or are made up of, an interface component 303 for obtaining consistent hashing information that facilitates the identification of which keymap coordinators 304 have been assigned to process requests for specific keymap information. Although the hash router component 302 is illustrated as a standalone component separate from the keymap subsystem 110, in an illustrative embodiment, the hash router component is implemented as a component within the distributed storage network 100, such as the Web service interface component 108 (
In one aspect, the keymap coordinators 304 may be associated with a keymap information cache component 306 for caching keymap information, which will also be explained in greater detail below. The keymap information cache component 306 may be implemented in one or more hardware components, such as internal memory, external memory, shared memory, etc. Accordingly, the keymap cache component 306 represents a logical cache maintained by each keymap coordinator 304, or shared between multiple keymap coordinators 304. Still further, as will be explained in greater detail below, the keymap cache component 306 can be logically considered as having separate cache components for various components (such as the brick manager components) regardless of whether the keymap cache component is implemented as a single cache or multiple caches. Accordingly, the logical implementation of each keymap cache component 306 does not need to match the physical implementation of such a keymap cache component within a keymap coordinator 304.
With continued reference to
With reference now to
With continued reference to
Turning now to
With continued reference to
As previously described, in an illustrative embodiment, the hash router components 302 may be associated with, or otherwise found in, a number of components within the distributed storage network 100. For example, the Web services interface 108 may include a hash router component 302 for directing requests to the appropriate keymap coordinator. In such an embodiment, individual copies of the table 520 may be maintained at each hash router component 302 via the interface component 303. The information associated with the definition of the hash ring ranges and assignments, as well as any other information associated with the selection of keymap coordinators 304, will be generally referred to as “hash routing information.” Additionally, in an illustrative embodiment, the various hash router components 302 can utilize various protocols to update or propagate the hash ring 500 range information. For example, the distributed storage network 100 can utilize a peer-to-peer distribution communication protocol, such as a gossip or epidemic computer-to-computer communication protocol, to allow various components within the distributed storage network to maintain updated hash ring range information between the hash router components. In such an embodiment, each hash router component 302 would transmit and receive the hash routing information from other, known hash router components. Each hash router component 302 does not need to know about all possible hash router components 302 in the distributed storage network 100. Further, each hash router component 302 can process information from multiple hash router components to determine the most current version of the hash routing information. Accordingly, in this embodiment, the selection of an appropriate keymap coordinator can be implemented as a non-centralized process in which the hash routing information is updated within the distributed storage network 100.
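The consistent-hashing route selection described above can be illustrated with a brief sketch. This is an assumption-laden example (the coordinator names, the use of MD5, and the number of ring points per coordinator are all invented for illustration): each keymap coordinator owns ranges of a hash ring, and a request's key is hashed onto the ring to select the responsible coordinator.

```python
# Hypothetical sketch of hash-ring routing to keymap coordinators.
import bisect
import hashlib

class HashRouter:
    def __init__(self, coordinators, points_per_node=3):
        # Build the ring: each coordinator owns several points, which
        # define the hash ring ranges assigned to it.
        self._ring = sorted(
            (self._hash(f"{c}-{i}"), c)
            for c in coordinators
            for i in range(points_per_node)
        )
        self._positions = [pos for pos, _ in self._ring]

    @staticmethod
    def _hash(value):
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def route(self, key):
        # The coordinator owning the first ring position at or after the
        # key's hash handles the request; wrap around past the ring's end.
        idx = bisect.bisect(self._positions, self._hash(key)) % len(self._ring)
        return self._ring[idx][1]

router = HashRouter(["KMC-A", "KMC-B", "KMC-C"])
# The same key always routes to the same keymap coordinator.
assert router.route("bucket1/key42") == router.route("bucket1/key42")
```

Because every hash router derives the same ring from the same hash routing information, any copy of that information routes a given key to the same coordinator, which is what allows the non-centralized, gossip-propagated scheme to work.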
As previously described, in accordance with aspects of the disclosure, the keymap coordinators 304 may implement a cache component 306 for maintaining keymap information. The cache component 306 may be a proxy cache that maintains keymap information (e.g., keymap values) for each of the brick managers associated with, or otherwise in communication with, the corresponding keymap coordinator 304. In such an embodiment, a keymap coordinator 304 may be able to utilize cached keymap information to respond to a keymap retrieval request. In one aspect, the keymap coordinator 304 may utilize memory optimization techniques for maintaining keymap information for a plurality of brick manager components 308. In another aspect, the keymap coordinators 304 may utilize generation identifier information to manage the keymap information maintained in the cache component 306.
In the illustrative embodiment of
In an alternative embodiment, the keymap coordinator 304 may optimize the amount of keymap information maintained in the cache component by maintaining a single version/value of the keymap information in the cache component 306. The single version/value of the keymap information may represent the most recent version of the keymap information maintained by any of the brick manager components 308, as determined by timing information associated with the keymap information, such as a timestamp. The single version/value may also represent the version/value associated with a quorum of the brick manager components. Additionally, the single version/value may also represent a version/value designated by the client 102 or the distributed storage network 100.
In the event that any brick manager components 308 are not currently storing the single version/value, the cache component 306 does not need to maintain separate values/versions. Instead, the cache component 306 can maintain indicators that specific brick manager components 308 have different values, which will be used by the keymap coordinators 304 in assessing quorum, described below. Additionally, in an alternative embodiment, the cache component 306 may also maintain additional information that would allow the keymap coordinators 304, or other components, to generate different values from the cached single version/value. As applied to an embodiment implementing a quorum analysis to determine the keymap information to return subject to a read request, the keymap coordinator 304 can determine the number of brick manager components associated with the single version/value and the number of brick managers associated with a different value. Accordingly, quorum for the single version/value can be determined if the number of brick managers 308 associated with the single version/value exceeds the number of brick managers associated with a different version/value. In a further embodiment, the keymap coordinator 304 may initiate a repair function within the keymap subsystem 110 for any brick manager components 308 associated with a different value than the single version/value, such as an out-of-band request to synchronize the brick managers.
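The single version/value bookkeeping and quorum analysis described above can be sketched as follows. The `CachedKeymapEntry` class and its method names are hypothetical; the sketch assumes divergent brick managers are tracked only as markers, with no separate copies of their values cached, as the disclosure describes.

```python
class CachedKeymapEntry:
    """One cached version/value plus per-brick-manager divergence markers
    (an illustrative sketch, not the disclosed implementation)."""

    def __init__(self, value, brick_managers):
        self.value = value
        self.brick_managers = set(brick_managers)
        # Brick managers flagged as holding some value other than the cached
        # single version/value; their actual values are not cached.
        self.divergent = set()

    def mark_divergent(self, brick_manager):
        self.divergent.add(brick_manager)

    def has_quorum(self):
        # Quorum holds when more brick managers store the single
        # version/value than store any different version/value.
        agreeing = len(self.brick_managers) - len(self.divergent)
        return agreeing > len(self.divergent)

    def repair_targets(self):
        # Brick managers that a repair function could synchronize out of band.
        return sorted(self.divergent)
```

With three brick managers and one divergence marker, quorum holds; with two markers, it does not, and the flagged managers become repair candidates.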
In another aspect and as will be explained in greater detail below, the keymap coordinators 304 can utilize one or more generation identifiers to determine whether the cached values in the cache component 306 can be considered valid. Specifically, so long as the keymap coordinator 304 is considered to be the last component to update a value within the assigned range of the ring (e.g., the last component to update keymap information corresponding to an assigned range), the keymap coordinator 304 may utilize the cached values to respond to keymap information retrieval requests. Alternatively, if the keymap coordinator 304 is not the last component to update a value within the assigned range, the cached values will not be considered valid and the keymap coordinator 304 will be required to obtain updated information from one or more of the brick managers 308.
In this illustrative embodiment, the keymap coordinator 304 will update the entire cache in the event that it is determined that another component has updated a relevant portion of the keymap information for at least one key. Alternatively, the keymap coordinator 304 may have the ability to update only a portion of the cache based on a determination of updated keymap information, such as by type. Still further, in one embodiment, the keymap coordinator 304 may be able to update the portion of the cache 306 that corresponds to each individual brick manager 308. Alternatively, the keymap coordinator 304 may have to update the entire cache corresponding to all brick managers 308.
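The validity check described in the two preceding paragraphs can be sketched as a read path that compares the range's current generation identifier against the coordinator's own identifier. The function signature and the injected `fetch_generation` and `fetch_from_brick_managers` callables are illustrative assumptions.

```python
def read_keymap(coordinator_id, key, cache, fetch_generation,
                fetch_from_brick_managers):
    # Cached values remain valid only while this coordinator was the last
    # component to update the assigned range, i.e. while the range's current
    # generation identifier still matches the coordinator's own identifier.
    if fetch_generation(key) == coordinator_id and key in cache:
        return cache[key]
    # Another component updated the range, so the cache can no longer be
    # considered valid: refresh from the authoritative brick managers.
    value = fetch_from_brick_managers(key)
    cache[key] = value
    return value
```

When the generation identifier matches, the cached value is served directly; otherwise the value is re-fetched and the cache repopulated.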
Turning now to
With reference now to
At decision block 706, the keymap coordinator 304 determines whether the generation identifier associated with the cached keymap information is associated with the specific keymap coordinator 304. In an illustrative embodiment, the brick manager 308 maintains a table of current owners of assigned ranges of hashed values, similar to the ranges identified in
With continued reference to
With reference to
Referring now to
At block 804, the brick manager component 308 obtains a current generation identifier associated with the key, generally referred to as the current generation information. As discussed above, the generation identifier represents a unique identifier, such as a GUID, that is used to identify a particular component or role within the distributed storage network 100. A new generation identifier is generated and assigned each time a different component of the distributed storage network 100 makes a modification to the stored keymap information. However, the current generation identifier is maintained if modifications to stored keymap information are initiated by the same component associated with the current generation identifier. For example, multiple modifications of the keymap information by the same component (e.g., the keymap coordinator 304) will not result in the generation of a new generation identifier because the same entity has made the modifications. In contrast, the modification of keymap information previously associated with a generation identifier corresponding to a keymap coordinator 304 by an administrative component, such as a reconciler, or by another keymap coordinator 304 in the distributed storage network 100 would result in a new generation identifier. Likewise, a subsequent update of the modified keymap information by the original keymap coordinator 304 would result in the generation of a new generation identifier and, accordingly, an update of the current generation information.
At decision block 806, a test is conducted to determine whether the keymap coordinator 304 associated with the request received by the brick manager component 308 corresponds to the current generation identifier associated with the key identified in the obtained request. As previously discussed, the brick manager component 308 can obtain the current generation identifier from the current generation identification information. If the brick manager component 308 identifies a generation identifier that has been assigned to, or otherwise associated with, the requesting keymap coordinator 304, the brick manager component 308 assumes that the cached value is the most current value and that no other components have updated the cached keymap information. Accordingly, at block 808, the brick manager component 308 processes the request and attempts to modify the keymap information in accordance with the request transmitted by the keymap coordinator 304. Alternatively, if the brick manager component 308 identifies a generation identifier that has not been assigned to the requesting keymap coordinator 304, the brick manager component 308 assumes that another component has made modifications to at least one of the records associated with the generation identifier. Accordingly, at block 810, the brick manager component 308 generates a new generation identifier corresponding to the keymap coordinator 304. The routine 800 then proceeds to block 808, in which the brick manager component 308 attempts to process the write request, as described above.
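The write-side generation handling of blocks 804 through 810 can be sketched as follows, using a GUID as the generation identifier as the disclosure suggests. The `BrickManager` shape, one identifier per key, and the `write` signature are simplifying assumptions for illustration.

```python
import uuid


class BrickManager:
    """Illustrative sketch of a brick manager's write path (blocks 804-810)."""

    def __init__(self):
        self.store = {}
        # Current generation identifier and its owning component, tracked
        # per key here for simplicity (the disclosure tracks it per range).
        self.generation = {}
        self.generation_owner = {}

    def write(self, requester_id, key, value):
        if self.generation_owner.get(key) != requester_id:
            # A different component last modified this keymap information:
            # generate a fresh generation identifier owned by the requester.
            self.generation[key] = uuid.uuid4().hex
            self.generation_owner[key] = requester_id
        # Same requester as the current generation: the identifier is kept,
        # and the write proceeds (block 808).
        self.store[key] = value
        return self.generation[key]
```

Repeated writes by one coordinator preserve the generation identifier; a write by any other component, such as a reconciler, forces a new one.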
With continued reference to
In an illustrative embodiment, the utilization of a single generation identifier (e.g., GUID) for an entire range of cached keymap information can result in the invalidation of the cache in the event a single entry in the cached keymap information is updated, created or deleted by any other component in the distributed storage network 100. With regard to updates to existing entries in the cache, in one embodiment, the keymap coordinator 304 does not manage individual entries in the cache. Instead, the keymap coordinator 304 may consider all cached keymap information as no longer valid (e.g., “stale”), without verification of whether any, or all, such values are in fact no longer valid. In some instances, however, certain classes of cached keymap information may have no relation to other classes of cached keymap information. In such instances, the modification of any cached keymap information from one class of cached keymap information should not affect any cached keymap information associated with other classes. For example, with regard to the deletion of previous cached keymap information, the cache component 306 may maintain legacy information, often referred to as “tombstones,” that corresponds to the previous entries. Accordingly, modification of a tombstone value (or its addition/deletion) would not necessarily result in a modification of active/existing entries. Accordingly, in an illustrative embodiment, the brick manager component 308 can implement multiple generation identifiers representative of classes of cached keymap information such that modification of cached information associated with a class as specified by a first generation identifier does not influence the validity of entries corresponding to another class as specified by a separate generation identifier.
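The per-class generation identifiers described above can be sketched as a small map keyed by class. The `ClassedGenerations` name, the default class names, and the `record_write` method are hypothetical; the point illustrated is that a write to one class replaces only that class's identifier.

```python
import uuid


class ClassedGenerations:
    """One generation identifier per class of cached keymap information
    (e.g., active entries versus tombstones), so that modifying a tombstone
    does not invalidate cached active entries. Illustrative sketch only."""

    def __init__(self, classes=("active", "tombstone")):
        self.generations = {cls: uuid.uuid4().hex for cls in classes}
        self.owners = {cls: None for cls in classes}

    def record_write(self, requester_id, cls):
        if self.owners[cls] != requester_id:
            # Only the touched class receives a new generation identifier;
            # identifiers for the other classes are untouched.
            self.generations[cls] = uuid.uuid4().hex
            self.owners[cls] = requester_id
        return self.generations[cls]
```

A reconciler modifying a tombstone leaves the "active" class's generation identifier, and hence the coordinator's cached active entries, intact.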
Referring now to
At block 906, the brick manager component 308 determines, or selects, which generation identifier is associated with the request. As discussed above, in this embodiment, the cached keymap information may be associated with multiple generation identifiers based on classes or attributes of data. Specifically, the brick manager component 308 may select from multiple generation identifiers based on the type of data that is cached or according to the specifics of the write request. For example, if the cached keymap information corresponds to multiple classes of data, but the received write request is only attempting to modify a portion of the cached data, the brick manager component 308 may select the generation identifier corresponding to the affected class.
At decision block 908, a test is conducted to determine whether the keymap coordinator 304 associated with the request received by the brick manager component 308 corresponds to the selected, current generation identifier associated with the key identified in the obtained request. As previously discussed, the brick manager component 308 can obtain the current generation identifier from the current generation identification information. If the brick manager component 308 identifies a generation identifier that has been assigned to, or otherwise associated with, the requesting keymap coordinator 304, the brick manager component 308 assumes that the cached value is the most current value and that no other components have updated the cached keymap information. Accordingly, at block 912, the brick manager component 308 processes the request and attempts to modify the keymap information in accordance with the request transmitted by the keymap coordinator 304. Alternatively, if the brick manager component 308 identifies a generation identifier that has not been assigned to the requesting keymap coordinator 304, the brick manager component 308 assumes that another component has made modifications to at least one of the records associated with the generation identifier. Accordingly, at block 910, the brick manager component 308 generates a new generation identifier corresponding to the keymap coordinator 304. The routine 900 then proceeds to block 912, in which the brick manager component 308 attempts to process the write request, as described above.
At block 914, the brick manager component 308 can update the information with the updated generation information. In one embodiment, the brick manager component 308 can update the table information mapping ranges to generation identifiers and propagate the information to one or more keymap coordinators 304. In an alternative embodiment, in the event that the brick manager component 308 did not generate a new generation identifier, block 914 may be omitted. Alternatively, the brick manager component 308 can update the generation information even if the information has not changed. The routine 900 terminates at block 916.
With reference to
With reference now to
At block 1008, the keymap coordinator 304 stores the keymap value placeholder in cached keymap information. At block 1010, the keymap coordinator 304 determines whether the write has been confirmed by the corresponding brick manager 308. If the write has not been confirmed, the routine 1000 remains at decision block 1010. Alternatively, once the write process has been confirmed by the brick manager 308, the keymap coordinator 304 updates the cache with the value corresponding to the write command at block 1012. The routine 1000 terminates at block 1014. As described above, by replacing the previous value of the cached keymap information with the placeholder, the keymap coordinator 304 mitigates errors associated with receiving read requests by the keymap coordinator 304 prior to receipt of the confirmation of the write request at decision block 1010.
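The placeholder handling of blocks 1008 through 1012 can be sketched as follows. This simplification confirms the write synchronously via an injected callable, whereas the disclosed routine polls for confirmation; the `CoordinatorCache` name and `put` signature are illustrative assumptions.

```python
PLACEHOLDER = object()  # sentinel marking a write that is not yet confirmed


class CoordinatorCache:
    """Illustrative sketch of the coordinator's placeholder write flow."""

    def __init__(self, brick_manager_write):
        self.cache = {}
        self._write = brick_manager_write  # callable issuing the actual write

    def put(self, key, value):
        # Replace any previously cached value with a placeholder (block 1008)
        # so that a read arriving before the write is confirmed cannot
        # observe a value that is about to become stale.
        self.cache[key] = PLACEHOLDER
        if self._write(key, value):  # wait for brick manager confirmation
            # Confirmation received: replace the placeholder with the
            # written value (block 1012).
            self.cache[key] = value
            return True
        return False
```

Until confirmation arrives, readers see only the placeholder, never the superseded value.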
With reference now to
Alternatively, if the cached keymap information corresponds to a keymap information placeholder, at block 1110, the keymap coordinator 304 processes the keymap information placeholder. In one embodiment, the keymap coordinator 304 may wait for a period of time to determine whether the previous processing of the write command (as illustrated in
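The read-side placeholder processing described above can be sketched as a bounded wait followed by a fallback to the brick managers. The function signature, the timeout values, and the `fetch_from_brick_managers` callable are illustrative assumptions.

```python
import time


def read_with_placeholder(key, cache, fetch_from_brick_managers, placeholder,
                          timeout=0.5, interval=0.05):
    # A placeholder in the cache means an earlier write command is still
    # awaiting confirmation: wait for a period of time for it to resolve.
    deadline = time.monotonic() + timeout
    while cache.get(key) is placeholder and time.monotonic() < deadline:
        time.sleep(interval)
    value = cache.get(key)
    if value is placeholder or value is None:
        # The write never resolved (or nothing was cached): fall back to the
        # authoritative brick managers rather than returning the placeholder.
        value = fetch_from_brick_managers(key)
        cache[key] = value
    return value
```

A resolved cache entry is returned directly; an unresolved placeholder is eventually replaced by the value obtained from the brick managers.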
With continued reference to
While illustrative embodiments have been disclosed and discussed, one skilled in the relevant art will appreciate that additional or alternative embodiments may be implemented within the spirit and scope of the present disclosure. Additionally, although many embodiments have been indicated as illustrative, one skilled in the relevant art will appreciate that the illustrative embodiments do not need to be combined or implemented together. As such, certain illustrative embodiments need not be utilized or implemented at all, consistent with the scope of variations to the present disclosure.
Conditional language, such as, among others, “can,” “could,” “might,” or “may,” unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements or steps. Thus, such conditional language is not generally intended to imply that features, elements or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without user input or prompting, whether these features, elements or steps are included or are to be performed in any particular embodiment. Moreover, unless specifically stated otherwise, or otherwise understood within the context as used, utilization of the conjunction “or” in enumerating a list of elements does not limit the selection to only a single element and can include the combination of two or more elements.
Any process descriptions, elements, or blocks in the flow diagrams described herein and/or depicted in the attached figures should be understood as potentially representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps in the process. Alternate implementations are included within the scope of the embodiments described herein in which elements or functions may be deleted, or executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those skilled in the art. It will further be appreciated that the data and/or components described above may be stored on a computer-readable medium and loaded into memory of the computing device using a drive mechanism associated with a computer-readable medium storing the computer-executable components, such as a CD-ROM, DVD-ROM, or network interface. Further, the components and/or data can be included in a single device or distributed in any manner. Accordingly, general-purpose computing devices may be configured to implement the processes, algorithms and methodology of the present disclosure with the processing and/or execution of the various data and/or components described above. Alternatively, some or all of the methods described herein may be embodied in specialized computer hardware. In addition, the components referred to herein may be implemented in hardware, software, firmware or a combination thereof.
It should be emphasized that many variations and modifications may be made to the above-described embodiments, the elements of which are to be understood as being among other acceptable examples. All such modifications and variations are intended to be included herein within the scope of this disclosure and protected by the following claims.
Sorenson, III, James Christopher, McHugh, Jason G., Vermeulen, Allan H., Kakulapati, Gunavardhan
Assignee: Amazon Technologies, Inc. (assignment on the face of the patent, executed Jan 27, 2014).