An information handling system and method allows implementation of fault-tolerant storage subsystems using multiple storage controllers not themselves originally designed to support the redundancy of such fault-tolerant storage subsystems. In accordance with one embodiment, uncommitted data is efficiently and rapidly replicated across multiple commodity storage controllers, enabling faster and less expensive fault-tolerant storage subsystems. A redundant storage controller system can improve the efficiency of data replication while providing failure protection against controller failure. A redundant storage controller system using shared memory commonly accessible to the storage controllers can be enhanced to replicate data within host memory regions to protect against non-volatile memory failure. In accordance with at least one embodiment, an efficient data replication mechanism can be provided between storage controllers using off-the-shelf hardware.
1. A method comprising:
issuing, at a host processor, a write command to a primary storage controller;
receiving, at the primary storage controller, the write command;
transferring, at the primary storage controller, write data associated with the write command to a primary storage controller memory local to the primary storage controller;
resending, at the host processor, the write command to a peer storage controller, wherein the peer storage controller provides redundancy for the primary storage controller;
receiving, at the peer storage controller, the write command;
transferring, at the peer storage controller, the write data associated with the write command to a peer storage controller memory local to the peer storage controller;
writing, by the primary storage controller, the write data from the primary storage controller memory to a storage subsystem; and
instructing, by the primary storage controller, the peer storage controller to discard the write data stored in the peer storage controller memory.
15. A method comprising:
issuing, at a host processor, a write command to a primary storage controller;
receiving, at the primary storage controller, the write command;
transferring, by the primary storage controller, write data associated with the write command to a host memory region of a host memory local to the host processor, wherein the host memory region is commonly accessible to the primary storage controller and to a peer storage controller, wherein the peer storage controller provides redundancy for the primary storage controller;
informing, by the primary storage controller, the peer storage controller of the write data stored in the host memory region;
generating, by the primary storage controller, parity data in response to the write command;
storing, by the primary storage controller, the parity data in the host memory region;
updating, by the primary storage controller, metadata for use by the peer storage controller;
writing, by the primary storage controller, the write data from the host memory region to a storage subsystem; and
updating, by the primary storage controller, the metadata in response to the writing.
8. An information handling system comprising:
a host processor including a host memory for storing instructions executable by the host processor to cause the host processor to issue a write command to a primary storage controller and to resend the write command to a peer storage controller; and
the primary storage controller including a primary storage controller memory for storing instructions executable by the primary storage controller to cause the primary storage controller to receive the write command, to transfer write data associated with the write command to the primary storage controller memory, and to write the write data from the primary storage controller memory to a storage subsystem,
wherein the peer storage controller provides redundancy for the primary storage controller, the peer storage controller including peer storage controller memory for storing instructions executable by the peer storage controller to cause the peer storage controller to receive the write command, and to transfer the write data associated with the write command to the peer storage controller memory, wherein the instructions executable by the primary storage controller further comprise instructions to cause the primary storage controller to instruct the peer storage controller to discard the write data stored in the peer storage controller memory.
2. The method of
discarding, by the peer storage controller, the write data stored in the peer storage controller memory.
3. The method of
notifying, by the primary storage controller, the host processor of completion of the write command after the transferring, at the primary storage controller, the write data associated with the write command to the primary storage controller memory.
4. The method of
acknowledging, by the peer storage controller, receipt of the write command to the host processor.
5. The method of
generating parity data.
6. The method of
translating, at the host processor, the SGL into a translated SGL for the peer storage controller; and
sending, by the host processor, the translated SGL to the peer storage controller.
7. The method of
when the primary storage controller fails, establishing the peer storage controller as a newly established primary storage controller and writing, by the newly established primary storage controller, the write data from the newly established primary storage controller memory of the newly established primary storage controller to the storage subsystem.
9. The information handling system of
10. The information handling system of
11. The information handling system of
12. The information handling system of
13. The information handling system of
14. The information handling system of
16. The method of
generating journal entries; and
storing the journal entries in the host memory region.
17. The method of
transferring, at the primary storage controller, write data associated with the write command to a primary storage controller memory local to the primary storage controller.
18. The method of
upon failure of the primary storage controller, writing, by a peer storage controller, the write data from the host memory region to the storage subsystem; and
upon failure of the host memory, writing, by the primary storage controller, the write data from the primary storage controller memory to the storage subsystem.
19. The method of
instructing, by the primary storage controller, the peer storage controller to discard the write data stored in the peer storage controller memory.
20. The method of
when the primary storage controller fails, establishing the peer storage controller as a newly established primary storage controller and writing, by the newly established primary storage controller, the write data from the host memory region to the storage subsystem.
The present disclosure generally relates to information handling systems, and more particularly relates to replicating data for multiple controllers.
As the value and use of information continues to increase, individuals and businesses seek additional ways to process and store information. One option is an information handling system. An information handling system generally processes, compiles, stores, or communicates information or data for business, personal, or other purposes. Technology and information handling needs and requirements can vary between different applications. Thus information handling systems can also vary regarding what information is handled, how the information is handled, how much information is processed, stored, or communicated, and how quickly and efficiently the information can be processed, stored, or communicated. The variations in information handling systems allow information handling systems to be general or configured for a specific user or specific use such as financial transaction processing, airline reservations, enterprise data storage, or global communications. In addition, information handling systems can include a variety of hardware and software resources that can be configured to process, store, and communicate information and can include one or more computer systems, graphics interface systems, data storage systems, networking systems, and mobile communication systems. Information handling systems can also implement various virtualized architectures. Data and voice communications among information handling systems may be via networks that are wired, wireless, or some combination.
It will be appreciated that for simplicity and clarity of illustration, elements illustrated in the Figures are not necessarily drawn to scale. For example, the dimensions of some elements may be exaggerated relative to other elements. Embodiments incorporating teachings of the present disclosure are shown and described with respect to the drawings herein, in which:
The use of the same reference symbols in different drawings indicates similar or identical items.
The following description in combination with the Figures is provided to assist in understanding the teachings disclosed herein. The description is focused on specific implementations and embodiments of the teachings, and is provided to assist in describing the teachings. This focus should not be interpreted as a limitation on the scope or applicability of the teachings.
Information handling system 100 can include devices or modules that embody one or more of the devices or modules described above, and operates to perform one or more of the methods described above. Information handling system 100 includes a processor 110, a chipset 120, a memory 130, a graphics interface 140, a disk controller 160, a disk emulator 180, an input/output (I/O) interface 150, and a network interface 170. Processor 110 is connected to chipset 120 via processor interface 112. Processor 110 is connected to memory 130 via memory bus 118. Memory 130 is connected to chipset 120 via a memory bus 122. Graphics interface 140 is connected to chipset 120 via a graphics interface 114, and provides a video display output 146 to a video display 142. Video display 142 is connected to touch controller 144 via touch controller interface 148. In a particular embodiment, information handling system 100 includes separate memories that are dedicated to processor 110 via separate memory interfaces. An example of memory 130 includes random access memory (RAM) such as static RAM (SRAM), dynamic RAM (DRAM), non-volatile RAM (NV-RAM), or the like, read only memory (ROM), another type of memory, or a combination thereof. Memory 130 can store, for example, at least one application 132 and operating system 134. Operating system 134 includes operating system code operable to detect resources within information handling system 100, to provide drivers for the resources, to initialize the resources, to access the resources, and to support execution of the at least one application 132. Operating system 134 has access to system elements via an operating system interface 136. Operating system interface 136 is connected to memory 130 via connection 138.
Battery management unit (BMU) 151 is connected to I/O interface 150 via battery management unit interface 155. BMU 151 is connected to battery 153 via connection 157. Operating system interface 136 has access to BMU 151 via connection 139, which is connected from operating system interface 136 to battery management unit interface 155.
Graphics interface 140, disk controller 160, and I/O interface 150 are connected to chipset 120 via interfaces that may be implemented, for example, using a Peripheral Component Interconnect (PCI) interface, a PCI-Extended (PCI-X) interface, a high-speed PCI-Express (PCIe) interface, another industry standard or proprietary communication interface, or a combination thereof. Chipset 120 can also include one or more other I/O interfaces, including an Industry Standard Architecture (ISA) interface, a Small Computer Serial Interface (SCSI) interface, an Inter-Integrated Circuit (I2C) interface, a System Packet Interface (SPI), a Universal Serial Bus (USB), another interface, or a combination thereof.
Disk controller 160 is connected to chipset 120 via connection 116. Disk controller 160 includes a disk interface 162 that connects the disk controller to a hard disk drive (HDD) 164, to an optical disk drive (ODD) 166, and to disk emulator 180. An example of disk interface 162 includes an Integrated Drive Electronics (IDE) interface, an Advanced Technology Attachment (ATA) interface such as a parallel ATA (PATA) interface or a serial ATA (SATA) interface, a SCSI interface, a USB interface, a proprietary interface, or a combination thereof. Disk emulator 180 permits a solid-state drive 184 to be connected to information handling system 100 via an external interface 182. An example of external interface 182 includes a USB interface, an IEEE 1394 (Firewire) interface, a proprietary interface, or a combination thereof. Alternatively, solid-state drive 184 can be disposed within information handling system 100.
I/O interface 150 is connected to chipset 120 via connection 166. I/O interface 150 includes a peripheral interface 152 that connects the I/O interface to an add-on resource 154, to platform fuses 156, and to a security resource 158. Peripheral interface 152 can be the same type of interface as connects graphics interface 140, disk controller 160, and I/O interface 150 to chipset 120, or can be a different type of interface. As such, I/O interface 150 extends the capacity of such an interface when peripheral interface 152 and the I/O channel are of the same type, and the I/O interface translates information from a format suitable to such an interface to a format suitable to the peripheral channel 152 when they are of a different type. Add-on resource 154 can include a data storage system, an additional graphics interface, a network interface card (NIC), a sound/video processing card, another add-on resource, or a combination thereof. As an example, add-on resource 154 is connected to data storage system 190 via data storage system interface 192. Add-on resource 154 can be on a main circuit board, on a separate circuit board or add-in card disposed within information handling system 100, a device that is external to the information handling system, or a combination thereof.
Network interface 170 represents a NIC disposed within information handling system 100, on a main circuit board of the information handling system, integrated onto another component such as chipset 120, in another suitable location, or a combination thereof. Network interface 170 is connected to I/O interface 150 via connection 174. Network interface device 170 includes network channel 172 that provides an interface to devices that are external to information handling system 100. In a particular embodiment, network channel 172 is of a different type than peripheral channel 152 and network interface 170 translates information from a format suitable to the peripheral channel to a format suitable to external devices. An example of network channels 172 includes InfiniBand channels, Fibre Channel channels, Gigabit Ethernet channels, proprietary channel architectures, or a combination thereof. Network channel 172 can be connected to external network resources (not illustrated). The network resource can include another information handling system, a data storage system, another network, a grid management system, another suitable resource, or a combination thereof.
Host processor 201 can communicate commands, such as read commands and write commands, to storage controller 203 via connection 211 and to storage controller 205 via connection 212. One of storage controllers 203 and 205 can be selected to serve as a primary controller, and the other of storage controllers 203 and 205 can be selected to serve as a peer controller. The primary controller can provide control of a storage subsystem comprising bus expanders 207 and 208 and disk array 209. The peer controller can serve as a backup controller ready to serve as the primary controller, for example, upon failure of a current primary controller. While the system of
In accordance with at least one embodiment, storage controller 203 stores state information locally in memory 204, and storage controller 205 stores state information locally in memory 206. The storage controller serving as the primary controller can use the state information stored locally for performing storage operations, such as a write operation, on the storage subsystem. The primary controller can store the state information in response to a storage command received from host processor 201. Host processor 201 can provide a copy of the storage command to the storage controller serving as the peer controller. The peer controller can store the state information in response to the copy of the storage command received from host processor 201. The peer controller can have in its memory a copy of the state information in the memory of the primary controller. If the primary controller fails, the peer controller can assume the role of the primary controller using the copy of the state information already stored in its memory.
In accordance with at least one embodiment, host processor 201 can provide a host memory region of host memory 202 to be used as a common memory region shared between storage controller 203 and storage controller 205. Both storage controller 203 and storage controller 205 can access the host memory region of host memory 202. As examples, both storage controller 203 and storage controller 205 can read from and write to the host memory region of host memory 202. As an example, direct memory access (DMA) can be used by storage controller 203 and storage controller 205 to access the host memory region of host memory 202. As an example, the host memory region of host memory 202 can be implemented as non-volatile memory (NVM), such as non-volatile random-access memory (NVRAM), which can preserve data, for example, write data to be written to a storage subsystem. One of storage controller 203 and storage controller 205 can serve as a primary controller. The primary controller can use information stored in the host memory region of host memory 202 to perform a storage operation, such as a write operation, on the storage subsystem, for example, to disk array 209. In the event of a failure of the primary controller, the other one of storage controller 203 and storage controller 205 serving as a peer controller can assume the role of the primary controller and can perform a storage operation using the information stored in the host memory region of host memory 202. The failed storage controller can be removed from service or assigned to serve as a peer controller in case its failure was of a temporary nature.
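For purposes of illustration only, the following Python sketch models the arrangement described above: a host memory region mapped by two controllers and a selection of primary and peer roles. All class and function names (HostMemoryRegion, StorageController, assign_roles) are hypothetical and do not correspond to any particular embodiment.

```python
# Illustrative, non-limiting sketch; names are hypothetical placeholders.

class HostMemoryRegion:
    """Host memory region (e.g., NVRAM) commonly accessible to both controllers."""
    def __init__(self, size: int):
        self.buffer = bytearray(size)  # holds write data, parity, and metadata

class StorageController:
    def __init__(self, name: str):
        self.name = name
        self.local_memory = {}       # controller-local state (cf. memory 204 / 206)
        self.shared_region = None    # mapped host memory region

    def map_shared_region(self, region: HostMemoryRegion):
        # Stands in for DMA-mapping the host memory region into the controller's
        # address space so it can be read and written directly.
        self.shared_region = region

def assign_roles(controllers):
    # One controller is selected as primary; the rest serve as peers ready to
    # assume the primary role on failure.
    primary, *peers = controllers
    return primary, peers

shared = HostMemoryRegion(size=64 * 1024 * 1024)
ctrl_a, ctrl_b = StorageController("203"), StorageController("205")
for ctrl in (ctrl_a, ctrl_b):
    ctrl.map_shared_region(shared)
primary, peers = assign_roles([ctrl_a, ctrl_b])
```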
From block 307, method 300 continues to block 308. In block 308, the peer controller transfers write data associated with the write command to a peer write cache. For example, the peer controller can store the write data in a peer write cache in the peer controller's memory contained within the peer controller. From block 308, method 300 continues to block 309. In block 309, the peer controller acknowledges the write command. For example, the peer controller sends a report back to the host processor that the peer controller has received the write command. From block 309, method 300 continues to block 310. In block 310, the primary controller flushes the write data to the backend. For example, the primary controller can retrieve the write data from the primary controller's memory and can send the write data to be written to a storage subsystem, such as PD 210 of disk array 209 via bus expander 207 or bus expander 208. From block 310, method 300 continues to decision block 311. In decision block 311, a decision is made as to whether or not the cache flush of block 310 was successful. If the cache flush was successful (e.g., if the write data were successfully written to the storage subsystem), method 300 continues to block 312. In block 312, the primary controller instructs the peer controller to discard from its memory the write data successfully flushed from the write cache of the primary controller's memory. From block 312, method 300 continues to block 313. In block 313, the peer controller discards from its memory the write data that was successfully flushed from the write cache of the primary controller's memory. From block 313, method 300 can, for example, return to block 302, where another write command can be received.
If, at decision block 311, a decision is made that the cache flush of block 310 was not successful, method 300 can continue to decision block 314. In decision block 314, a decision is made as to whether or not the primary controller has failed. If not, method 300 returns to block 310, where the primary controller can again attempt to flush write data to the backend. If, in decision block 314, the decision is made that the primary controller has failed, method 300 continues to block 315. In block 315, the storage controller that was serving as the peer controller is selected to become the primary controller. As an example, the storage controller that was serving as the primary controller can be selected to become the peer controller. As another example, the storage controller that was serving as the primary controller can be removed from service. For example, host processor 201 can select among storage controller 203 and storage controller 205 to select one to serve as the primary controller and the other to serve as a peer controller.
While method 300 has been described with respect to two storage controllers, more than two storage controllers may be provided. For example, one of several storage controllers can be a primary controller, and others of the several storage controllers can be peer controllers. Multiple instances of method 300 may be performed, such that one storage controller of at least two storage controllers is a primary controller for one disk array, and one storage controller of at least two other storage controllers is a primary controller for another disk array. From block 315, method 300 continues to block 310, where the newly selected primary controller, using write data already stored in its memory at block 308, can attempt to flush the write data to the backend.
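For purposes of illustration only, the write/flush/discard/failover flow of method 300 can be sketched in Python as follows. All names (handle_write, flush_to_backend, cache, has_failed, acknowledged) are hypothetical placeholders, not taken from any embodiment described above.

```python
# Illustrative, non-limiting sketch of method 300.

def handle_write(host, primary, peer, write_cmd, write_data):
    # Blocks 302-309: the host issues the write command to both controllers;
    # each caches the write data in its own local memory and acknowledges.
    primary.cache[write_cmd.id] = write_data
    peer.cache[write_cmd.id] = write_data
    host.acknowledged(write_cmd)

def flush_to_backend(primary, peer, storage, write_cmd):
    # Blocks 310-313: the primary flushes to the storage subsystem and, on
    # success, instructs the peer to discard its redundant copy.
    while True:
        try:
            storage.write(primary.cache[write_cmd.id])
            peer.cache.pop(write_cmd.id, None)    # posted discard instruction
            primary.cache.pop(write_cmd.id, None)
            return
        except IOError:
            if primary.has_failed():
                # Blocks 314-315: the peer becomes the new primary and retries
                # the flush using the copy already held in its own memory.
                primary, peer = peer, primary
```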
From block 405, method 400 continues to block 406. In block 406, the write command is reported as having been completed. As an example, the primary controller reports to the host processor that the write command has been completed. From block 406, method 400 continues to block 407. In block 407, the primary controller generates parity data and journal entries. Generating the parity data and the journal entries can comprise reading data from the storage subsystem, such as from disk array 209. The generation of parity data and journal entries can occur in preparation for the primary controller to flush write data to the backend in block 410. From block 407, method 400 continues to block 408. In block 408, parity data are stored in the host memory region. As an example, the primary controller stores in the host memory region the parity data it generated.
From block 408, method 400 continues to block 409. In block 409, metadata for a peer controller are updated. As an example, metadata descriptive of the primary controller's use of the host memory region are updated and stored so that the peer controller will be able to utilize the data stored by the primary controller in the host memory region in the event of a failure of the primary controller. From block 409, method 400 continues to block 410. In block 410, the primary controller flushes the write data to the backend. For example, the primary controller can retrieve the write data from the host memory region and can send the write data to be written to a storage subsystem, such as PD 210 of disk array 209 via bus expander 207 or bus expander 208. From block 410, method 400 continues to decision block 411. In decision block 411, a decision is made as to whether or not the cache flush of block 410 was successful. If the cache flush was successful (e.g., if the write data were successfully written to the storage subsystem), method 400 continues to block 412. In block 412, the primary controller updates the metadata in the host memory region. As an example, the primary controller updates the metadata in the host memory region to relieve a peer controller of the obligation to write the write data that was successfully flushed in block 410 in the event the primary controller fails, and to relieve itself of the obligation to write the write data that was successfully flushed in block 410 in the event the primary controller continues to operate properly. From block 412, method 400 can, for example, return to block 403, where the host processor can issue another write command.
If, at decision block 411, a decision is made that the cache flush of block 410 was not successful, method 400 can continue to decision block 414. In decision block 414, a decision is made as to whether or not the primary controller has failed. If not, method 400 returns to block 410, where the primary controller can again attempt to flush write data to the backend. To avoid the possibility of an infinite loop, continued inability of the primary controller to successfully flush the cache can be treated as a failure of the primary controller so that decision block 414 will cause the method to have a peer controller attempt to perform the cache flush, as described below. If, in decision block 414, the decision is made that the primary controller has failed, method 400 continues to block 415. In block 415, the storage controller that was serving as the peer controller is selected to become the primary controller. As an example, the storage controller that was serving as the primary controller can be selected to become the peer controller. As another example, the storage controller that was serving as the primary controller can be removed from service. For example, host processor 201 can select among storage controller 203 and storage controller 205 to select one to serve as the primary controller and the other to serve as a peer controller.
While method 400 has been described with respect to two storage controllers, more than two storage controllers may be provided. For example, one of several storage controllers can be a primary controller, and others of the several storage controllers can be peer controllers. Multiple instances of method 400 may be performed, such that one storage controller of at least two storage controllers is a primary controller for one disk array, and one storage controller of at least two other storage controllers is a primary controller for another disk array. From block 415, method 400 continues to block 410, where the newly selected primary controller, using data already stored in the host memory region, can attempt to flush the write data to the backend.
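For purposes of illustration only, the shared-host-memory flow of method 400 can be sketched in Python as follows. The objects and attributes (shared.data, shared.parity, shared.metadata, compute_parity) are hypothetical placeholders and not part of any embodiment.

```python
# Illustrative, non-limiting sketch of method 400.

def handle_write_shared(primary, shared, write_cmd, write_data, storage):
    # Blocks 404-406: write data lands in the commonly accessible host memory
    # region and the command is reported complete to the host.
    shared.data[write_cmd.id] = write_data

    # Blocks 407-409: parity and journal entries are generated (possibly after
    # reading existing data from the storage subsystem), and metadata is updated
    # so a peer controller could resume from the shared region on failover.
    parity = primary.compute_parity(write_data)
    shared.parity[write_cmd.id] = parity
    shared.metadata[write_cmd.id] = "dirty"

    # Blocks 410-412: flush to the backend, then mark the cache line clean so
    # neither controller remains obliged to re-write it after a failover.
    storage.write(write_data, parity)
    shared.metadata[write_cmd.id] = "clean"
```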
Fault-tolerant storage subsystems can be created using custom-designed storage controller hardware. However, such custom-designed storage controller hardware intended from its inception to provide fault tolerance tends to be expensive, inflexible, and may impair any ability to offer improved performance in the future. More useful is an ability to use commodity systems, operating systems (OSs), and software to more efficiently and rapidly replicate uncommitted data across multiple commodity storage controllers, enabling faster and less expensive fault-tolerant storage subsystems. A redundant storage controller system using non-redundant storage controllers can improve the efficiency of data replication while providing failure protection against controller failure. A redundant storage controller system using non-redundant storage controllers and shared memory commonly accessible to the storage controllers can be enhanced to replicate data within host memory regions to protect against non-volatile memory failure. In accordance with at least one embodiment, an efficient data replication mechanism can be provided between storage controllers using off-the-shelf hardware. In accordance with at least one embodiment, improved overall utilization of a peripheral interconnect, such as a Peripheral Component Interconnect Express (PCIe) interconnect, and storage interconnects can be provided. In accordance with at least one embodiment, a software-based mechanism applied to commercially available off-the-shelf storage controllers that do not otherwise provide fault tolerance can transform such storage controllers into a redundant storage controller system that provides fault tolerance and can replicate data, thereby replacing or eliminating a need for proprietary hardware solutions. In accordance with at least one embodiment, a fault tolerant system can be provided to be scalable for various types of devices, such as various types of PCIe devices.
As one example, two storage controllers can be configured to synchronize cache storage and operations using back-end bus expanders, such as back-end serial-attached small-computer serial interface (SAS) expanders, as a communication conduit. According to such an example, the throughput and latency of controller cache mirroring during write operations can be improved as discussed herein. The following three failure modes can be accommodated without data loss using an exemplary embodiment: controller failover at runtime, controller failover coincident with system reboot, and controller failover coincident with system failure.
When using commodity storage controllers to build a redundant fault tolerant storage solution, by definition no specialized hardware assist exists for data replication between controllers. A fault tolerant storage subsystem has a need to exchange user data and other information between multiple storage controllers to respond to failure scenarios. A performant redundant storage controller subsystem requires a fast and efficient mechanism to replicate data between controllers, both host-originating and controller-originating. This data exchange can occur for multiple reasons including load balancing, failover, unexpected controller failures, or controller power loss. Using storage interconnects for this data exchange, involving busses and the accompanying control mechanisms not designed for this purpose, can have the effect of reducing the effective throughput of input and output (I/O) to the storage devices and can create bus contention.
Extending the common commodity-based approach, a full system solution can be built upon commodity computing platform architectures, commodity operating systems, and accompanying software. Moreover, a solution can be achieved using no hardware modifications to the storage controllers.
In accordance with at least one embodiment, a double dispatch technique is provided. Such a technique can utilize software to transform storage controllers not otherwise configured to provide cache replication into storage controllers using cache replication to parallelize operation of the several storage controllers. A single write command coming from an operating system is replicated to all storage controllers, with acknowledgement of the write back into the operating system occurring only after all controllers have completed the operation. The primary controller signals the peer controllers when lazy back-end flush completes.
As an example according to the double dispatch technique, the device driver, or other layer, of the respective operating system forks a single write command into two commands. One write request is sent to the active controller while a duplicate of the command is sent to the passive controller. The software layer will be responsible for all necessary actions to translate the scatter-gather list (SGL) addresses into appropriate physical addresses for each controller. The software layer performing the double dispatch must wait until both write commands are acknowledged before returning the original (single) command back up the stack. If failover occurs, the controller becoming active must adhere to a “read peers” only policy (no read-modify-write (RMW)) to flush data to parity based volumes (RAID logical unit numbers (LUNs)) because it cannot be known whether a journal was open at the time of the failover. This approach can accommodate a shared write journal when a parity based VD is non-optimal, for example, by falling back to legacy behavior of maintaining cache mirroring using the SAS back-end.
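For purposes of illustration only, the double-dispatch step described above can be sketched in Python: one operating-system write is forked into two commands, the SGL is translated for each controller, and completion is deferred until both controllers acknowledge. All names (translate, submit, wait) are hypothetical placeholders for whatever driver-layer primitives an implementation provides.

```python
# Illustrative, non-limiting sketch of the double-dispatch step.

def double_dispatch(write_cmd, sgl, active_ctrl, passive_ctrl):
    # Translate the SGL addresses into physical addresses appropriate for each
    # controller's view of memory.
    active_sgl = [active_ctrl.translate(addr, length) for addr, length in sgl]
    passive_sgl = [passive_ctrl.translate(addr, length) for addr, length in sgl]

    # Dispatch both commands and wait for both acknowledgements before the
    # original (single) command is returned back up the stack.
    active_io = active_ctrl.submit(write_cmd, active_sgl)
    passive_io = passive_ctrl.submit(write_cmd, passive_sgl)
    active_io.wait()
    passive_io.wait()
    return "completed"
```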
When the primary (such as active) controller flushes (such as writes) dirty data to disk, it informs the peer (such as passive) controller(s) that the respective cache lines can be freed, so as to free memory to allow further write operations to succeed. This invalidation operation can be performed over the storage interconnect or the peripheral interconnect, for example with a PCIe-to-PCIe transfer. The invalidation message size is small and fixed (as opposed to moving all user data across the storage interconnect). Furthermore, the invalidation may be posted (no acknowledgement) and latent with no harmful consequences because, in the event of failure, the newly-active controller shall write all dirty data in a read peers mode of operation. Note that re-writing the same data by both the formerly active and newly active controllers has no negative consequence (such as exhibiting an idempotent property).
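For purposes of illustration only, a small, fixed-size, posted invalidation message can be sketched as follows. The wire format and the peer_link object are assumptions made only for this example, not a format defined by any embodiment.

```python
# Illustrative, non-limiting sketch of a posted cache-line invalidation message.
import struct

INVALIDATE_FMT = "<QI"  # 64-bit starting cache-line identifier + 32-bit count

def post_invalidation(peer_link, first_line: int, count: int):
    # Posted (no acknowledgement expected): only identifiers cross the
    # interconnect, never the user data itself.
    peer_link.send(struct.pack(INVALIDATE_FMT, first_line, count))
```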
The double dispatch approach parallelizes the copy of data from host memory to the controller's memory, taking advantage of separate PCIe hierarchies. Write command latency can be improved because, according to at least one embodiment, command completion involves data movement across the peripheral interconnect only. The storage interconnect need not be involved in the host write command process. Operating system components, for example, Microsoft Windows MPIO DSM layer or VMWare ESX NMP, can be used to provide interception points in the stack to implement a double dispatch technique.
In accordance with at least one embodiment, a technique of sharing host memory, such as sharing host dynamic random-access memory (DRAM), which may be, for example, non-volatile DRAM (NVDRAM), is provided. Such a technique can leverage emerging non-volatile RAM technologies to efficiently share data between multiple controllers, potentially reducing or removing the need for on-controller DRAM. In accordance with an example of such technique, storage controllers can use host DRAM as an extension of the internal cache within a controller where all controllers involved are aware of the shared region within the host DRAM. The primary controller makes writes into this region, and, in the event of a failover, the newly active controller handles the area as it would its own cache. This shared cache region contains host data, controller generated parity data, and metadata regarding the state of the individual cache lines. A robust implementation can be provided by configuring this shared memory to reside in non-volatile host memory in order to preserve cache contents in the event of an abrupt power loss.
Both the primary (such as active) and peer (such as passive) controllers have knowledge of a common region of host memory which is used to coordinate activities. Such a shared memory can be used to implement a fully-shared cache holding the host data yet-to-be-written. Both controllers can map this host memory into their local address space and manage it as an extension of the already existing cache lines. Upon receipt of a write command, the active controller copies the write data to both its local DRAM and the reserved host memory region. The primary (such as active) controller is configured to write into this region exclusive of the peer (such as passive) controllers. Upon failover, the newly established primary (such as newly active) controller resumes operations from the current state of the shared memory cache. As an example, the newly established primary controller can do so by first mirroring the contents into the newly established primary controller's local DRAM. From this point forward, the newly established primary controller serves as the primary (such as active) controller.
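For purposes of illustration only, the failover step described above can be sketched in Python: the newly established primary mirrors the shared region into its own DRAM and then resumes as the active controller. The attribute names (lines, local_cache, role) are hypothetical placeholders.

```python
# Illustrative, non-limiting sketch of failover under the shared host-memory cache.

def take_over(shared_cache, new_primary):
    # Mirror the current state of the shared cache (host data, parity data, and
    # per-cache-line metadata) into the controller's local DRAM.
    new_primary.local_cache = dict(shared_cache.lines)
    new_primary.role = "primary"
    # From this point forward the new primary writes into the shared region
    # exclusively, as the former primary did.
    return new_primary
```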
The location of the common shared memory region is communicated to both the primary controller and the peer controller(s). As an example, a device driver may be used as the mechanism by which to communicate such location. The system firmware (such as unified extensible firmware interface/basic input output system (UEFI/BIOS)) can optionally be configured to reserve a fixed-size pre-determined address range. Cache invalidation is implicit in this shared memory cache extension, as the metadata reflects the state of each cache line; no explicit cache invalidation is necessary.
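For purposes of illustration only, per-cache-line metadata in the shared region can be sketched as follows; marking a line flushed takes the place of an explicit invalidation message. The state names are assumptions made only for this example.

```python
# Illustrative, non-limiting sketch of cache-line state metadata.

CACHE_LINE_STATES = ("free", "dirty", "flushed")

def mark_flushed(shared_metadata: dict, line_id: int):
    # Updating the metadata is sufficient: a peer reading the shared region
    # after a failover simply skips lines that are not marked dirty.
    shared_metadata[line_id] = "flushed"
```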
In accordance with at least one embodiment, a robust non-volatile DRAM region is provided in the host memory and configured to be accessible to both primary and peer controllers. The reliability of the storage subsystem is based on the unlikelihood of both a primary controller failure and a host DRAM failure happening simultaneously. If the primary controller fails, but host DRAM does not, the passive controller can take control and complete the write operations. If the host DRAM fails, the primary controller is able to complete the writes based on the (mirrored) contents of its internal DRAM. If a peer controller fails, the primary controller can continue to operate normally.
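For purposes of illustration only, the failure-handling decision described above can be sketched as follows. The predicates and methods (failed, pending, flush, local_mirror) are hypothetical placeholders.

```python
# Illustrative, non-limiting sketch of failure handling.

def complete_pending_writes(primary, peer, host_region, storage):
    if primary.failed and not host_region.failed:
        # The peer takes control and completes writes from the shared host region.
        peer.flush(host_region.pending(), storage)
    elif host_region.failed and not primary.failed:
        # The primary completes writes from the mirrored copy in its internal DRAM.
        primary.flush(primary.local_mirror.pending(), storage)
    elif peer.failed:
        # The primary simply continues normal operation.
        primary.flush(host_region.pending(), storage)
```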
While embodiments have been described herein with respect to storage technology, such as disk arrays, other embodiments may be applied to any system that shares an interconnect topology, such as a PCIe topology, across multiple devices. For example, if it is desirable to replicate data across multiple non-volatile memory controllers of a suitable interconnect topology, such as a PCIe topology, an embodiment using multiple non-volatile memory controllers may be implemented in accordance with the disclosure herein.
While the computer-readable medium is shown to be a single medium, the term “computer-readable medium” includes a single medium or multiple media, such as a centralized or distributed database, and/or associated caches and servers that store one or more sets of instructions. The term “computer-readable medium” shall also include any medium that is capable of storing, encoding, or carrying a set of instructions for execution by a processor or that cause a computer system to perform any one or more of the methods or operations disclosed herein.
In a particular non-limiting, exemplary embodiment, the computer-readable medium can include a solid-state memory such as a memory card or other package that houses one or more non-volatile read-only memories. Further, the computer-readable medium can be a random access memory or other volatile re-writable memory. Additionally, the computer-readable medium can include a magneto-optical or optical medium, such as a disk or tape or other storage device, to store information received via carrier wave signals such as a signal communicated over a transmission medium. Furthermore, a computer-readable medium can store information received from distributed network resources such as from a cloud-based environment. A digital file attachment to an e-mail or other self-contained information archive or set of archives may be considered a distribution medium that is equivalent to a tangible storage medium. Accordingly, the disclosure is considered to include any one or more of a computer-readable medium or a distribution medium and other equivalents and successor media, in which data or instructions may be stored.
In the embodiments described herein, an information handling system includes any instrumentality or aggregate of instrumentalities operable to compute, classify, process, transmit, receive, retrieve, originate, switch, store, display, manifest, detect, record, reproduce, handle, or use any form of information, intelligence, or data for business, scientific, control, entertainment, or other purposes. For example, an information handling system can be a personal computer, a consumer electronic device, a network server or storage device, a switch router, wireless router, or other network communication device, a network connected device (cellular telephone, tablet device, etc.), or any other suitable device, and can vary in size, shape, performance, price, and functionality.
The information handling system can include memory (volatile (such as random-access memory, etc.), nonvolatile (such as read-only memory, flash memory, etc.), or any combination thereof), one or more processing resources, such as a central processing unit (CPU), a graphics processing unit (GPU), hardware or software control logic, or any combination thereof. Additional components of the information handling system can include one or more storage devices, one or more communications ports for communicating with external devices, as well as various input and output (I/O) devices, such as a keyboard, a mouse, a video/graphic display, or any combination thereof. The information handling system can also include one or more buses operable to transmit communications between the various hardware components. Portions of an information handling system may themselves be considered information handling systems.
When referred to as a “device,” a “module,” or the like, the embodiments described herein can be configured as hardware. For example, a portion of an information handling system device may be hardware such as, for example, an integrated circuit (such as an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), a structured ASIC, or a device embedded on a larger chip), a card (such as a Peripheral Component Interconnect (PCI) card, a PCI-Express card, a Personal Computer Memory Card International Association (PCMCIA) card, or other such expansion card), or a system (such as a motherboard, a system-on-a-chip (SoC), or a stand-alone device).
The device or module can include software, including firmware embedded at a device, such as a Pentium class or PowerPC™ brand processor, or other such device, or software capable of operating a relevant environment of the information handling system. The device or module can also include a combination of the foregoing examples of hardware or software. Note that an information handling system can include an integrated circuit or a board-level product having portions thereof that can also be any combination of hardware and software.
Devices, modules, resources, or programs that are in communication with one another need not be in continuous communication with each other, unless expressly specified otherwise. In addition, devices, modules, resources, or programs that are in communication with one another can communicate directly or indirectly through one or more intermediaries.
Although only a few exemplary embodiments have been described in detail herein, those skilled in the art will readily appreciate that many modifications are possible in the exemplary embodiments without materially departing from the novel teachings and advantages of the embodiments of the present disclosure. Accordingly, all such modifications are intended to be included within the scope of the embodiments of the present disclosure as defined in the following claims. In the claims, means-plus-function clauses are intended to cover the structures described herein as performing the recited function and not only structural equivalents, but also equivalent structures.
Inventors: Chandrashekar Nelogal; James P. Giannoules