A universal storage management system which facilitates storage of data from a client computer and computer network is disclosed. The universal storage management system functions as an interface between the client computer and at least one storage device, and facilitates reading and writing of data by handling I/O operations. I/O operation overhead in the client computer is reduced by translating I/O commands from the client computer into high level commands which are employed by the storage management system to carry out I/O operations. The storage management system also enables interconnection of a normally incompatible storage device and client computer by translating I/O requests into an intermediate common format which is employed to generate commands which are compatible with the storage device receiving the request. Files, error messages and other information from the storage device are similarly translated and provided to the client computer.

Patent: RE42860
Priority: Sep 18, 1995
Filed: Jul 31, 2002
Issued: Oct 18, 2011
Expiry: Sep 17, 2016
12. A device for providing an interface between at least one client computer and at least one storage device, the client computer having a first microprocessor for running a software application and a first operating system which produce high level I/O commands, the storage device containing at least one file, comprising:
a plurality of storage devices each having a different type storage media;
a second microprocessor interposed between the client computer and said plurality of storage devices to control access thereto, said second microprocessor processing said high level I/O commands to control the power supplied to individual storage devices of said plurality of storage devices.
64. A system for providing an interface between at least one client computer and at least one storage device, the client computer having a first microprocessor configured to run a software application and a first operating system which produce I/O commands, wherein the client computer and the system are configured to be communicatively linked to each other via a data communication network, the system comprising:
a transport driver operative to receive high level commands in an intermediate common format from the client computer via said network and convert said high level commands in the intermediate common format to high level I/O commands;
a second microprocessor operative to execute said high level I/O commands received from said transport driver and access the at least one storage device to copy data from the client computer to the at least one storage device wherein said second microprocessor employs a second operating system distinct from said first operating system.
76. A system for providing an interface between at least one client computer and at least one storage device, the client computer having a first microprocessor configured to run a software application and a first operating system which produce I/O commands, wherein the client device and the system are configured to be communicatively linked to each other via a data communication network, the system comprising:
a transport driver operative to receive high level commands in an intermediate common format from the client computer via said network and convert said high level commands in the intermediate common format to high level I/O commands;
a device handler operative to execute said high level I/O commands received from said transport driver and access the at least one storage device to copy data from the client computer to the at least one storage device; and
a second microprocessor operative to execute said transport driver and said device handler, wherein said second microprocessor employs a second operating system distinct from said first operating system.
1. A device for providing an interface between at least one client computer and at least one storage device, the client computer having a first microprocessor for running a software application and a first operating system which produce I/O commands, the storage device containing at least one file, comprising:
a file management system operative to convert the I/O commands from the software application and said first operating system in the client computer to high level commands in an intermediate common format, said file management system further operative to receive said high level commands and convert said high level commands to compatible I/O commands;
a second microprocessor operative to execute said high level commands received from said file management system and access the storage device to copy data in said intermediate common format from the client computer to at least one storage device wherein said second microprocessor employs a second operating system distinct from said first operating system; and
a file device driver interfacing said first operating system and the file management system by functioning to receive data and commands from the client computer and redirect the received data and commands to said file management system.
33. An interface system between a client network configured to provide data and input/output commands and a data storage system having at least one storage device, said interface system comprising:
a file management system configured to manage the movement of information between said client network and said data storage system, said file management system comprising a first arrangement in communication with a second arrangement,
the first arrangement configured to receive said input/output commands to implement storage of said data in said data storage system when a first set of conditions exists; and
the second arrangement in communication with said first arrangement, said client network and said at least one storage device, said second arrangement configured to manage the flow of data between said storage device and said client network when a second set of conditions exists, wherein said first arrangement comprises a file system supervisor program comprising a file device driver configured to receive and convert said input/output commands having a first format to an intermediate format different than said first format and wherein said file system supervisor is configured to receive said input/output commands in said intermediate format and said second arrangement is a storage management architecture (SMA) kernel.
2. The interface device of claim 1 wherein said file device driver resides in the client computer.
3. The interface device of claim 2 wherein said file management system further includes a transport driver having first and second sections for facilitating transfer of data and commands between said file device driver and said file management system, said first section receiving data and commands from said file device driver and said second section relaying such data and commands to said file management system.
4. The interface device of claim 3 wherein said file management system includes a file system supervisor operative to select file-level applications for receipt of the data from the client computer and provide storage commands.
5. The interface device of claim 4 wherein said file system supervisor is further operative to select a storage device for storage of data received from the client computer.
6. The interface device of claim 4 wherein said file system supervisor is further operative to break data received from the client computer down into blocks.
7. The interface device of claim 6 wherein said file management system further includes at least one device handler operative to interface said file system supervisor with the at least one storage device by driving the at least one storage device in response to said storage commands from said file system supervisor.
8. The interface device of claim 7 wherein said file management system further includes a device handler for each at least one storage device.
9. The interface device of claim 3 further including a kernel operative to directly execute I/O commands from the software application in the client computer.
10. The interface device of claim 9 wherein said kernel utilizes the first operating system.
11. The interface device of claim 10 wherein said SMA kernel includes a scheduler for supervising flow of data by selectively relaying blocks of data to RAID applications.
13. The interface device of claim 12 wherein said interconnection device executes a reconfiguration routine which identifies storage device ID conflicts among said plurality of storage devices.
14. The interface device of claim 13 wherein said reconfiguration routine powers up individual storage devices of said plurality of storage devices while executing.
15. The interface device of claim 14 wherein when a storage device ID conflict is detected said reconfiguration routine changes the ID of at least one of the storage devices in conflict.
16. The interface device of claim 12 wherein said interconnection device executes a media tracking routine which identifies storage device types.
17. The interface device of claim 16 wherein said media tracking routine automatically selects a storage device for WRITE operations.
18. The interface device of claim 17 wherein said media tracking routine selects said storage device based upon the block size of the data to be stored.
19. The interface device of claim 17 wherein said media tracking routine selects said storage device based upon media write speed.
20. The interface device of claim 12 including a plurality of power supplies for supplying power to said storage devices, said storage devices being grouped into racks such that at least one global power supply is available to serve as backup to a plurality of such racks of power supplies.
21. The interface device of claim 12 including a plurality of power supplies for supplying power to said storage devices, said storage devices being grouped into racks such that each rack is associated with a global power supply available to serve as backup to the rack with which the global power supply is associated.
22. The interface device of claim 12 including a plurality of power supplies, said microprocessor controlled interconnection device monitoring said power supplies to detect failed devices.
23. The interface connector of claim 22 wherein said storage devices are disposed in a protective chassis, and failed devices are automatically ejected from said chassis.
24. The interface connector of claim 12 further including a redundant array of independent disks.
25. The interface connector of claim 24 further including an XOR router having dedicated XOR hardware.
26. The interface connector of claim 25 wherein at least one surface of each disk in said redundant array of independent disks is dedicated to parity operations.
27. The interface connector of claim 25 wherein at least one disk in said redundant array of independent disks is dedicated to parity operations.
28. The interface connector of claim 25 wherein said disks of said redundant array of independent disks are arranged on a plurality of separate channels.
29. The interface connector of claim 28 wherein each said channel includes a dedicated memory pool.
30. The interface connector of claim 29 wherein said channels are interconnected by first and second thirty-two bit wide busses.
31. The interface connector of claim 24 further including a graphic user interface for displaying storage system status and receiving commands from a user.
32. The interface connector of claim 24 including hardware for data splitting and parity generation “on the fly” with no performance degradation.
34. The interface system of claim 33 wherein said client network is configured to operate according to a first format and said storage device is configured to operate according to a format compatible with said first format and wherein said data flows between said client network and said storage device.
35. The interface system of claim 34 wherein data flows in both directions between said client network and said data storage system.
36. The interface system of claim 34 further comprising at least one device handler between the first arrangement and the second arrangement, said at least one device handler configured to isolate the first arrangement from the storage device.
37. The interface system of claim 33 wherein said storage device is configured to operate according to said intermediate format and data flows between said client network and said storage device via said file management system.
38. The interface system of claim 37 wherein said file device driver is configured to receive said data in said first format and convert said received data to said intermediate format.
39. The interface system of claim 38 further comprising a transport driver in communication with said file device driver and said first arrangement, the transport driver configured to receive said data and said input/output commands in said intermediate format and relay said data and said input/output commands to said first arrangement.
40. The interface system of claim 39 wherein said client network comprises at least one computer configured to run a selected operating system.
41. The interface system of claim 39 wherein said client network comprises a multiplicity of computers, each of said multiplicity of computers configured to run one of a selected group of operating systems to provide outputs in one of a selected plurality of first formats.
42. The interface system of claim 41 wherein said file device driver resides in a computer in said client network.
43. The interface system of claim 41 further comprising a host computer configured to run said first arrangement, said file device driver, said second portion of said transport driver and said second arrangement.
44. The interface system of claim 41 wherein said file management system is configured to operate according to one of said selected operating systems, and said data files and input/output commands converted by file device driver are compatible to said operating system.
45. The interface system of claim 39 wherein data flows in both directions between said client network and said data storage system.
46. The interface system of claim 33 further comprising a transport driver in communication with said file device driver and said first arrangement, the transport driver configured to receive said input/output commands in said intermediate format and relay said input/output commands to said first arrangement.
47. The interface system of claim 46 wherein said transport driver comprises a first portion associated with said client network and a second portion associated with said first arrangement and further comprising a communication link configured to connect said first and second portions.
48. The interface system of claim 47 wherein said communication link is selected from the group consisting of SCSI-2, SCSI-3, fiber link, 802.3, 802.5, synchronous RS232, wireless RF and wireless IR.
49. The interface system of claim 48 wherein data flows in both directions between said client network and said data storage system.
50. The interface system of claim 46 wherein data flows in both directions between said client network and said data storage system.
51. The interface system of claim 33 wherein said client network comprises at least one computer configured to run a selected operating system.
52. The interface system of claim 33 wherein said client network comprises a multiplicity of computers, each of said multiplicity of computers configured to run one of a selected group of operating systems to provide outputs in one of a selected plurality of first formats.
53. The interface system of claim 33 wherein said file device driver resides in a computer in said client network.
54. The interface system of claim 33 wherein data flows in both directions between said client network and said data storage system.
55. The interface system of claim 33 further comprising at least one device handler between the first arrangement and the second arrangement, said at least one device handler configured to isolate the first arrangement from the storage device.
56. The interface system of claim 55 wherein said storage device is configured to operate according to a different format than said first arrangement.
57. The interface system of claim 33 further comprising at least one device handler between the first arrangement and the second arrangement, said at least one device handler configured to isolate the first arrangement from the storage device so that configuration of the storage device may differ from the configuration of the first arrangement.
58. The interface system of claim 57 wherein said at least one device handler comprises a plurality of device handlers associated with a plurality of storage devices, at least one of said plurality of storage devices having a different configuration than the other device handler.
59. The interface system of claim 33 wherein said at least one storage device comprises a plurality of storage devices.
60. The interface system of claim 33 wherein said plurality of storage devices comprises a redundant array of independent disks (RAID).
61. The interface system of claim 60 wherein at least one surface of each disk in said redundant array of independent disks is dedicated to parity operations.
62. The interface system of claim 60 wherein at least one disk in said redundant array of independent disks is dedicated to parity operations.
63. The interface system of claim 60 wherein said disks of said redundant array of independent disks are arranged on a plurality of separate channels.
65. The system of claim 64, wherein the transport driver is further operative to convert high level I/O commands to high level commands in the intermediate common format.
66. The system of claim 65, wherein the transport driver is further operative to send high level commands in the intermediate common format over the network to the client computer.
67. The system of claim 64, wherein the high level I/O commands are SCSI commands.
68. The system of claim 64, wherein the storage device comprises a redundant array of independent disks (RAID) device.
69. The system of claim 68, wherein the RAID device comprises a processor.
70. The system of claim 64, wherein the high level I/O commands are commands selected from the group of commands consisting of read, write, lock, and copy.
71. The system of claim 64, wherein said network is an 802.3 network.
72. The system of claim 64, wherein said network is an 802.5 network.
73. The system of claim 64, wherein said network is a wireless network.
74. The system of claim 64, further comprising a plurality of storage devices.
75. The system of claim 74, further comprising a plurality of device handlers to accommodate said plurality of storage devices.
77. The system of claim 76, further comprising a plurality of storage devices.
78. The system of claim 77, further comprising a plurality of device handlers to accommodate said plurality of storage devices.
79. The system of claim 76, wherein the high level I/O commands are commands selected from the group of read, write, lock, and copy.
80. The system of claim 76, wherein the high level I/O commands are SCSI commands.
81. The system of claim 76, wherein the storage device comprises a redundant array of independent disks (RAID) device.
82. The system of claim 81, wherein the RAID device comprises a processor.
83. The system of claim 76, wherein said network is an 802.3 network.
84. The system of claim 76, wherein said network is an 802.5 network.
85. The system of claim 76, wherein said network is a wireless network.

This is a reissue application of U.S. Pat. No. 6,098,128 that issued on Aug. 1, 2000. A claim of priority is made to U.S. Provisional Patent Application Ser. No. 60/003,920 entitled UNIVERSAL STORAGE MANAGEMENT SYSTEM, filed Sep. 18, 1995.

The present invention is generally related to data storage systems, and more particularly to cross-platform data storage systems and RAID systems.

One problem facing the computer industry is lack of standardization in file subsystems. This problem is exacerbated by I/O addressing limitations in existing operating systems and the growing number of non-standard storage devices. A computer and software application can sometimes be modified to communicate with normally incompatible storage devices. However, in most cases such communication can only be achieved in a manner which adversely affects I/O throughput, and thus compromises performance. As a result, many computers in use today are “I/O bound.” More particularly, the processing capability of the computer is faster than the I/O response of the computer, and performance is thereby limited. A solution to the standardization problem would thus be of interest to both the computer industry and computer users.

In theory it would be possible to standardize operating systems, file subsystems, communications and other systems to resolve the problem. However, such a solution is hardly feasible for reasons of practicality. Computer users often exhibit strong allegiance to particular operating systems and architectures for reasons having to do with what the individual user requires from the computer and what the user is accustomed to working with. Further, those who design operating systems and associated computer and network architectures show little propensity toward cooperation and standardization with competitors. As a result, performance and ease of use suffer.

Disclosed is a universal storage management system which facilitates storage of data from a client computer. The storage management system functions as an interface between the client computer and at least one storage device and facilitates reading and writing of data by handling I/O operations. More particularly, I/O operation overhead in the client computer is reduced by translating I/O commands from the client computer to high level I/O commands which are employed by the storage management system to carry out I/O operations. The storage management system also enables interconnection of a normally incompatible storage device and client computer by translating I/O requests into an intermediate common format which is employed to generate commands which are compatible with the storage device receiving the request. Files, error messages and other information from the storage device are similarly translated and provided to the client computer.

The universal storage management system provides improved performance since client computers attached thereto are not burdened with directly controlling I/O operations. Software applications in the client computers generate I/O commands which are translated into high level commands which are sent by each client computer to the storage system. The storage management system controls I/O operations for each client computer based on the high level commands. Overall network throughput is improved since the client computers are relieved of the burden of processing slow I/O requests.

The universal storage management system can provide a variety of storage options which are normally unavailable to the client computer. The storage management system is preferably capable of controlling multiple types of storage devices such as disk drives, tape drives, CD-ROMs, magneto optical drives, etc., and making those storage devices available to all of the client computers connected to the storage management system. Further, the storage management system can determine the particular storage media upon which any given unit of data should be stored or from which it should be retrieved. Each client computer connected to the storage system thus gains data storage options because operating system limitations and restrictions on storage capacity are removed along with limitations associated with support of separate storage media. For example, the universal storage management system can read information from a CD-ROM and then pass that information on to a particular client computer, even though the operating system of that particular client computer has no support for or direct connection to the CD-ROM.

By providing a common interface between a plurality of client computers and a plurality of shared storage devices, network updating overhead is reduced. More particularly, the storage management system allows addition of drives to a computer network without reconfiguration of the individual client computers in the network. The storage management system thus saves installation time and removes limitations associated with various network operating systems to which the storage management system may be connected.

The universal storage management system reduces wasteful duplicative storage of data. Since the storage management system interfaces incompatible client computers and storage devices, the storage management system can share files across multiple heterogeneous platforms. Such file sharing can be employed to reduce the overall amount of data stored in a network. For example, a single copy of a given database can be shared by several incompatible computers, where multiple database copies were previously required. Thus, in addition to reducing total storage media requirements, data maintenance is facilitated.

The universal storage management system also provides improved protection of data. The storage management system isolates regular backups from user intervention, thereby addressing problems associated with forgetful or recalcitrant employees who fail to execute backups regularly.

These and other features of the present invention will become apparent in light of the following detailed description thereof, in which:

FIG. 1 is a block diagram which illustrates the storage management system in a host computer;

FIG. 1a is a block diagram of the file management system;

FIG. 2 is a block diagram of the SMA kernel;

FIG. 2a illustrates the storage devices of FIG. 2;

FIGS. 3 and 4 are block diagrams of an example cross-platform network employing the universal storage management system;

FIG. 5 is a block diagram of a RAID board for storage of data in connection with the universal storage management system;

FIG. 6 is a block diagram of the universal storage management system which illustrates storage options;

FIG. 7 is a block diagram of the redundant storage device power supply;

FIGS. 8-11 are block diagrams which illustrate XOR and parity computing processes;

FIGS. 12a-13 are block diagrams illustrating RAID configurations for improved efficiency;

FIG. 14 is a block diagram of the automatic failed disk ejection system;

FIGS. 15 and 15a are perspective views of the storage device chassis;

FIG. 16 is a block diagram which illustrates loading of a new SCSI ID in a disk;

FIG. 17 is a flow diagram which illustrates the automatic initial configuration routine;

FIGS. 18 & 19 are backplane state flow diagrams;

FIG. 20 is an automatic storage device ejection flow diagram;

FIG. 21 is a block diagram which illustrates horizontal power sharing for handling power failures;

FIG. 22 is a block diagram which illustrates vertical power sharing for handling power failures;

FIGS. 23-25 are flow diagrams which illustrate a READ cycle; and

FIGS. 26-29 are flow diagrams which illustrate a WRITE cycle.

Referring to FIGS. 1 and 1a, the universal storage management system includes electronic hardware and software which together provide a cross platform interface between at least one client computer 10 in a client network 12 and at least one storage device 14. The universal storage management system is implemented in a host computer 16 and can include a host board 18, a four channel board 20, and a five channel board 22 for controlling the storage devices 14. It should be noted, however, that the software could be implemented on standard hardware. The system is optimized to handle I/O requests from the client computer and provide universal storage support with any of a variety of client computers and storage devices. I/O commands from the client computer are translated into high level commands, which in turn are employed to control the storage devices.

Referring to FIGS. 1, 1a, 2 & 2a, the software portion of the universal storage management system includes a file management system 24 and a storage management architecture (“SMA”) kernel 26. The file management system manages the conversion and movement of files between the client computer 10 and the SMA Kernel 26. The SMA kernel manages the flow of data and commands between the client computer, device level applications and actual physical devices.

The file management system includes four modules: a file device driver 28, a transport driver 30a, 30b, a file system supervisor 32, and a device handler 34. The file device driver provides an interface between the client operating system 36 and the transport driver. More particularly, the file device driver resides in the client computer and redirects files to the transport driver. Interfacing functions performed by the file device driver include receiving data and commands from the client operating system, converting the data and commands to a universal storage management system file format, and adding record options, such as lock, read-only and script.

The transport driver 30a, 30b facilitates transfer of files and other information between the file device driver 28 and the file system supervisor 32. The transport driver is specifically configured for the link between the client computers and the storage management system. Some possible links include: SCSI-2, SCSI-3, fiber link, 802.3, 802.5, synchronous and asynchronous RS232, wireless RF, and wireless IR. The transport driver includes two components: a first component 30a which resides in the client computer and a second component 30b which resides in the storage management system computer. The first component receives data and commands from the file device driver. The second component relays data and commands to the file system supervisor. Files, data, commands and error messages can be relayed from the file system supervisor to the client computer operating system through the transport driver and file device driver.

The file system supervisor 32 operates to determine appropriate file-level applications for receipt of the files received from the client computer 10. The file system supervisor implements file specific routines on a common format file system. Calls made to the file system supervisor are high level, such as Open, Close, Read, Write, Lock, and Copy. The file system supervisor also determines where files should be stored, including determining on what type of storage media the files should be stored. The file system supervisor also breaks each file down into blocks and then passes those blocks to the device handler. Similarly, the file system supervisor can receive data from the device handler.

The device handler 34 provides an interface between the file system supervisor 32 and the SMA kernel 26 to provide storage device selection for each operation. A plurality of device handlers are employed to accommodate a plurality of storage devices. More particularly, each device handler is a driver which is used by the file system supervisor to control a particular storage device, and allow the file system supervisor to select the type of storage device to be used for a specific operation. The device handlers reside between the file system supervisor and the SMA kernel and the storage devices. The device handler thus isolates the file system supervisor from the storage devices such that the file system supervisor configuration is not dependent upon the configuration of the specific storage devices employed in the system.
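
The device handler layer can be pictured as a per-device dispatch table in the style of a conventional driver interface. The following C sketch is illustrative only; the structure and function names are hypothetical and do not appear in the patent.

    /* Hypothetical device handler dispatch table: the file system
       supervisor reaches every storage device through the same entry
       points, so its configuration does not depend on the device type. */
    #include <stddef.h>
    #include <stdint.h>

    typedef struct device_handler {
        const char *media_type;   /* e.g. "disk", "tape", "cdrom" */
        int (*open)(void *dev);
        int (*read_blocks)(void *dev, uint32_t lba, void *buf, size_t count);
        int (*write_blocks)(void *dev, uint32_t lba, const void *buf,
                            size_t count);
        int (*close)(void *dev);
    } device_handler;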

The SMA Kernel 26 includes three independent modules: a front end interface 36, a scheduler 38, and a back-end interface 40. The front end interface is in communication with the client network and the scheduler. The scheduler is in communication with the back-end interface, device level applications, redundant array of independent disks (“RAID”) applications and the file management system. The back-end interface is in communication with various storage devices.

The front-end interface 36 handles communication between the client network 12 and resource scheduler 38, running on a storage management system based host controller which is connected to the client network and interfaced to the resource scheduler. A plurality of scripts are loaded at start up for on-demand execution of communication tasks. More particularly, if the client computer and storage management system both utilize the same operating system, the SMA kernel can be utilized to execute I/O commands from software applications in the client computer without first translating the I/O commands to high level commands as is done in the file management system.

The resource scheduler 38 supervises the flow of data through the universal storage management system. More particularly, the resource scheduler determines whether individual data units can be passed directly to the back-end interface 40 or whether the data unit must first be processed by one of the device level applications 42 or RAID applications 44. Block level data units are passed to the resource scheduler from either the front-end interface or the file management system.

The back-end interface 40 manages the storage devices 14. The storage devices are connected to the back-end interface by one or more SCSI type controllers through which the storage devices are connected to the storage management system computer. In order to control non-standard SCSI devices, the back-end interface includes pre-loaded scripts and may also include device specific drivers.

FIG. 2a illustrates the storage devices 14 of FIG. 2. The storage devices are identified by rank (illustrated as columns), channel (illustrated as rows) and device ID. A rank is a set of devices with a common ID, but sitting on different channels. The number of the rank is designated by the common device ID. For example, rank 0 includes the set of all devices with device ID=0. The storage devices may be addressed by the system individually or in groups called arrays 46. An array associates two or more storage devices 14 (either physical devices or logical devices) into a RAID level. A volume is a logical entity for the host, such as a disk, tape or array, which has been given a logical SCSI ID. There are four types of volumes: a partition of an array, an entire array, a span across arrays, and a single device.
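
Because a rank is simply the set of devices sharing one device ID across the channels, a physical device is fully identified by its channel and ID. A minimal C sketch of that addressing follows; the table dimensions and names are assumptions made for illustration.

    /* Hypothetical addressing helper: rank r is the set of devices whose
       device ID equals r, one per channel, so (channel, rank) locates a
       single physical device. */
    #define NUM_CHANNELS    11  /* assumed; the system supports 2 to 11 */
    #define IDS_PER_CHANNEL 16  /* assumed SCSI ID space per channel */

    struct device;
    extern struct device *device_table[NUM_CHANNELS][IDS_PER_CHANNEL];

    struct device *lookup_device(int channel, int rank)
    {
        return device_table[channel][rank];  /* rank number == device ID */
    }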

The storage management system employs high level commands to access the storage devices. The high level commands include array commands and volume commands, as follows:

The acreate command creates a new array by associating a group of storage devices in the same rank and assigning them a RAID level.

Syntax: acreate(rank_id, level, aname, ch_use)

rank_id    ID of the rank on which the array will be created.
level      RAID level to use for the array being created.
aname      Unique name to be given to the array. If NULL, one will be
           assigned by the system.
ch_use     Bitmap indicating which channels to use in this set of drives.

Return     0         Successful creation of array.
           ERANK     Given rank does not exist or is not available to
                     create more arrays.
           ELEVEL    Illegal RAID level.
           ECHANNEL  No drives exist in given bitmap, or drives are
                     already in use by another array.

The aremove command removes the definition of a given array name and makes the associated storage devices available for the creation of other arrays.

The vopen command creates and/or opens a volume, and brings the specified volume on-line and readies that volume for reading and/or writing.

arrayname    Name of the array on which to create/open the volume.
volname      Name of an existing volume or the name to be given to the
             volume to create. If left NULL and the O_CREAT flag is
             given, one will be assigned by the system and this argument
             will contain the new name.
vh           When creating a volume, this contains a pointer to the
             parameters to be used in the creation of the requested
             volume name. If opening an existing volume, these
             parameters will be returned by the system.
flags        A constant with one or more of the following values:
             O_CREAT      The system will attempt to create the volume
                          using the parameters given in vh. If the
                          volume already exists, this flag will be
                          ignored.
             O_DENYRD     Denies reading privileges to any other tasks
                          on this volume anytime after this call is made.
             O_DENYWR     Denies writing privileges to any other tasks
                          that open this volume anytime after this call
                          is made.
             O_EXCLUSIVE  Denies any access to this volume anytime after
                          this call is made.

Return       0       Successful open/creation of volume.
             EARRAY  Given array does not exist.
             EFULL   Given array is full.

The vclose command closes a volume, brings the specified volume off-line, and removes all access restrictions imposed on the volume by the task that opened it.

vh    Volume handle returned by the system when the volume was
      opened/created.

The vread command reads a specified number of blocks into a given buffer from an open volume given by “vh”.

vh        Handle of the volume to read from.
bufptr    Pointer to the address in memory into which the data is to be
          read.
lba       Logical block address to read from.
count     Number of blocks to read from the given volume.

Return    0        Successful read.
          EACCESS  Insufficient rights to read from this volume.
          EHANDLE  Invalid volume handle.
          EADDR    Illegal logical block address.

The vwrite command writes a specified number of blocks from the given buffer to an open volume given by “vh.”

vh        Handle of the volume to write to.
bufptr    Pointer to the address in memory where the data to be written
          to the device resides.
lba       Logical block address to write to.
count     Number of blocks to write to the given volume.

Return    0        Successful write.
          EACCESS  Insufficient rights to write to this volume.
          EHANDLE  Invalid volume handle.
          EADDR    Illegal logical block address.

The volcpy command copies “count” blocks from the source logical block address given by src_lba in src_vol to the destination logical block address given by dest_lba in dest_vol. Significantly, the command is executed without interaction with the client computer.

dest_vol    Handle of the volume to be written to.
dest_lba    Destination logical block address.
src_vol     Handle of the volume to be read from.
src_lba     Source logical block address.
count       Number of blocks to copy.

Return      0       Successful copy.
            EACCW   Insufficient rights to write to the destination volume.
            EACCR   Insufficient rights to read from the source volume.
            EDESTH  Invalid destination volume handle.
            ESRCH   Invalid source volume handle.
            EDESTA  Illegal logical block address for destination volume.
            ESRCA   Illegal logical block address for source volume.
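
Taken together, the volume commands behave like an ordinary block-device API. The fragment below is a hypothetical usage sketch in C; the prototypes are inferred from the parameter tables above and are not verbatim from the patent.

    /* Prototypes inferred from the command descriptions above (assumed). */
    typedef void *vol_handle;
    extern const int O_CREAT;  /* creation flag described above */
    int vopen(const char *arrayname, char *volname, vol_handle *vh, int flags);
    int vwrite(vol_handle vh, const void *bufptr, unsigned lba, unsigned count);
    int vclose(vol_handle vh);

    /* Create (or open) a volume on "array0", write `count` blocks from
       `buf` starting at logical block 0, and close the volume. */
    int store_blocks(const void *buf, unsigned count)
    {
        vol_handle vh;
        char volname[32] = "";  /* empty/NULL name: system assigns one */

        if (vopen("array0", volname, &vh, O_CREAT) != 0)
            return -1;                       /* EARRAY or EFULL */
        int rc = vwrite(vh, buf, 0, count);  /* 0, EACCESS, EHANDLE or EADDR */
        vclose(vh);                          /* removes access restrictions */
        return rc;
    }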

The modular design of the storage management system software provides several advantages. The SMA Kernel and file management system are independent program groups which do not have interdependency limitations. However, both program groups share a common application programming interface (API). Further, each internal software module (transport driver, file system supervisor, device handler, front-end interface, back-end interface and scheduler) interacts through a common protocol, so development of new modules or changes to an existing module do not require changes to other SMA modules, provided compliance with the protocol is maintained. Additionally, software applications in the client computer are isolated from the storage devices and their associated limitations. As such, the complexity of application development and integration is reduced, and reduced complexity allows faster development cycles. The architecture also offers high maintainability, which translates into simpler testing and quality assurance processes, and the ability to implement projects in parallel results in a faster time to market.

FIGS. 3 & 4 illustrate a cross platform client network employing the universal storage management system. A plurality of client computers which reside in different networks are part of the overall architecture. Individual client computers 10 and client networks within the cross platform network utilize different operating systems. The illustrated architecture includes a first group of client computers on a first network operating under a Novell based operating system, a second group of client computers on a second network operating under OS/2, a third group of client computers on a third network operating under DOS, a fourth group of client computers on a fourth network operating under UNIX, a fifth group of client computers on a fifth network operating under VMS and a sixth group of client computers on a sixth network operating under Windows-NT. The file management system includes at least one dedicated file device driver and transport driver for each operating system with which the storage management system will interact. More particularly, each file device driver is specific to the operating system with which it is used. Similarly, each transport driver is connection specific. Possible connections include SCSI-2, SCSI-3, fiber link, 802.3, 802.5, synchronous and asynchronous RS232, wireless RF, and wireless IR.

The universal storage management system utilizes a standard file format which is selected based upon the cross platform client network for ease of file management system implementation. The file format may be based on UNIX, Microsoft-NT or other file formats. In order to facilitate operation and enhance performance, the storage management system may utilize the same file format and operating system utilized by the majority of client computers connected thereto, however this is not required. Regardless of the file format selected, the file management system includes at least one file device driver, at least one transport driver, a file system supervisor and a device handler to translate I/O commands from the client computer.

Referring to FIGS. 5, 6 and 10b, the storage management system is preferably capable of simultaneously servicing multiple client computer I/O requests at a performance level which is equal to or better than that of individual local drives. In order to provide prompt execution of I/O operations for a group of client computers the universal storage management system computer employs a powerful microprocessor or multiple microprocessors 355 capable of handling associated overhead for the file system supervisor, device handler, and I/O cache. Available memory 356 is relatively large in order to accommodate the multi-tasking storage management system operating system running multiple device utilities such as backups and juke box handlers. A significant architectural advance of the RAID is the use of multiple SCSI processors with dedicated memory pools 357. Each processor 350 can READ or WRITE devices totally in parallel. This provides the RAID implementation with true parallel architecture. Front end memory 358 could also be used as a first level of I/O caching for the different client I/O's. A double 32 bit wide dedicated I/O bus 48 is employed for I/O operations between the storage management system and the storage device modules 354. The I/O bus is capable of transmission at 200 MB/sec, and independent 32 bit wide caches are dedicated to each I/O interface.

Referring to FIGS. 7, 21 and 22, a redundant power supply array is employed to maintain power to the storage devices when a power supply fails. The distributed redundant low voltage power supply array includes a global power supply 52 and a plurality of local power supplies 54 interconnected with power cables throughout a disk array chassis. Each local power supply provides sufficient power for a rack 56 of storage devices 14. In the event of a failure of a local power supply 54, the global power supply 52 provides power to the storage devices associated with the failed local power supply. In order to provide sufficient power, the global power supply therefore should have a power capacity rating at least equal to the largest capacity local power supply.

Preferably both horizontal and vertical power sharing are employed. In horizontal power sharing, the power supplies 54 for each rack of storage devices include one redundant power supply 58 which is utilized when a local power supply 54 in the associated rack fails. In vertical power sharing, a redundant power supply 60 is shared between a plurality of racks 56 of local storage devices 54.

Referring now to FIGS. 8 and 9, a redundant array of independent disks (“RAID”) is provided as a storage option. For implementation of the RAID, the storage management system has multiple SCSI-2 and SCSI-3 channels, with from 2 to 11 independent channels capable of handling up to 1080 storage devices. The RAID reduces the write overhead penalty of known RAIDs which require execution of read-modify-write commands from the data and parity drives when a write is issued to the RAID. The parity calculation procedure is an XOR operation between the old parity data and the old logical data; the resulting data is then XORed with the new logical data. The XOR operations are done by dedicated XOR hardware 62 in an XOR router 64 to provide faster write cycles. This hardware is dedicated for RAID-4 or RAID-5 implementations. Further, for RAID-3 implementation, parity generation and data striping have been implemented in hardware 359. As such, there is no time overhead cost for this parity calculation, which is done “on the fly,” and the RAID-3 implementation is as fast as a RAID-0 implementation.
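
Per block, the parity update described above reduces to two XOR passes. The following C rendering of the arithmetic is a sketch only; in the patent this work is performed by the dedicated XOR hardware, and the buffer names are assumptions.

    #include <stddef.h>
    #include <stdint.h>

    /* RAID-4/5 read-modify-write parity update:
       new_parity = (old_parity XOR old_data) XOR new_data */
    void update_parity(uint8_t *parity, const uint8_t *old_data,
                       const uint8_t *new_data, size_t block_size)
    {
        for (size_t i = 0; i < block_size; i++)
            parity[i] ^= old_data[i] ^ new_data[i];
    }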

Referring now to FIGS. 9-11, at least one surface 66 of each of the drives is dedicated for parity. As such, a RAID-3 may be implemented in every individual disk of the array with the data from all other drives (See FIG. 9 specifically). The parity information may be sent to any other parity drive surface (See FIG. 10 specifically). In essence, RAID-3 is implemented within each drive of the array, and the generated parity is transmitted to the appointed parity drive for RAID-4 implementation, or striped across all of the drives for RAID-5 implementation. The result is a combination of RAID-3 and RAID-4 or RAID-5, but without the write overhead penalties. Alternatively, if there is no internal control over disk drives, as shown in FIG. 11, using standard double ported disk drives, the assigned parity drive 70 has a dedicated controller board 68 associated therewith for accessing other drives in the RAID via the dedicated bus 48, to calculate the new parity data without the intervention of the storage management system computer microprocessor.

Referring to FIGS. 12a, 12b and 13, the storage management system optimizes disk mirroring for RAID-1 implementation. Standard RAID-1 implementations execute duplicate WRITE commands for each of two drives simultaneously. To obtain improved performance the present RAID divides a logical disk 72, such as a logical disk containing a master disk 71 and a mirror disk 75, into two halves 74, 76. This is possible because the majority of the operations in a standard system are Read operations and the information is contained on both drives. The respective drive heads 78, 80 of the master and mirror disks are then positioned at the halfway point of the first half 74 and second half 76, respectively. If the Read request goes to the first half 74 of the logical drive 72, then the command is serviced by the master disk 71. If the Read goes to the second half 76 of the logical drive 72, then it is serviced by the mirror disk 75. Since each drive head only travels one half of the total possible distance, average seek time is reduced by a factor of two. Additionally, the number of storage devices required for mirroring can be reduced by compressing 82 mirrored data and thereby decreasing the requisite number of mirror disks. By compressing the mirrored data “on the fly,” overall performance is maintained.
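
The split-seek read policy amounts to choosing the copy by logical block address. A minimal sketch, assuming the halving described above (first half served by the master, second half by the mirror); the names are illustrative.

    #include <stddef.h>

    struct disk;

    /* Choose which copy services a mirrored read so that each head only
       sweeps half of the logical disk; fall back to the master when no
       mirror is available. */
    struct disk *select_read_source(unsigned lba, unsigned volume_blocks,
                                    struct disk *master, struct disk *mirror)
    {
        if (mirror == NULL)
            return master;
        return (lba < volume_blocks / 2) ? master : mirror;
    }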

File storage routines may be implemented to automatically select the type of media upon which to store data. The decision criteria for determining the type of media upon which to store a file can be drawn from a data file with predetermined attributes, so the file device driver can direct data to particular media in an intelligent manner. To further automate data storage, the storage management system includes routines for automatically selecting an appropriate RAID level for storage of each file. When the storage management system is used in conjunction with a computer network it is envisioned that a plurality of RAID storage options of different RAID levels will be provided. In order to provide efficient and reliable storage, software routines are employed to automatically select the appropriate RAID level for storage of each file based on file size. For example, in a system with RAID levels 3 and 5, large files might be assigned to RAID-3, while small files would be assigned to RAID-5. Alternatively, the RAID level may be determined based on block size, as predefined by the user.
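
A hedged sketch of such a selection routine follows; the size threshold is an assumption, since the patent leaves the criterion (file size or user-defined block size) configurable.

    #include <stddef.h>

    #define LARGE_FILE_BYTES (1024u * 1024u)  /* assumed threshold */

    /* Route large files to RAID-3 (parity generated on the fly) and
       small files to RAID-5, per the example in the text. */
    int select_raid_level(size_t file_size)
    {
        return (file_size >= LARGE_FILE_BYTES) ? 3 : 5;
    }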

Referring now to FIGS. 14 and 15, the RAID disks 14 are arrayed in a protective chassis 84. The chassis includes the global and local power supplies, and includes an automatic disk eject feature which facilitates identification and replacement of failed disks. Each disk 14 is disposed in a disk shuttle 86 which partially ejects from the chassis in response to a solenoid 88. A system controller 90 controls securing and releasing of the disk drive mounting shuttle 86 by actuating the solenoid 88. When the storage system detects a failed disk in the array, or when a user requests release of a disk, the system controller actuates the solenoid associated with the location of that disk and releases the disk for ejection.

An automatic storage device ejection method is illustrated in FIG. 20. In an initial step 92 a logical drive to physical drive conversion is made to isolate and identify the physical drive being worked upon. Then, if a drive failure is detected in step 94, the drive is powered down 96. If a drive failure is not detected, the cache is flushed 98 and new commands are disallowed 100 prior to powering the drive down 96. After powering down the drive, a delay 102 is imposed to wait for drive spin-down and the storage device ejection solenoid is energized 104 and the drive failure indicator is turned off 106.
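
The ejection sequence maps directly onto a short control routine. A sketch in C, where every helper is a hypothetical stand-in for a controller primitive and the step numbers refer to FIG. 20:

    /* Hypothetical controller primitives. */
    int  logical_to_physical(int logical_drive);
    int  drive_failed(int physical_drive);
    void flush_cache(void);
    void disallow_new_commands(int physical_drive);
    void power_down(int physical_drive);
    void wait_for_spin_down(void);
    void energize_eject_solenoid(int physical_drive);
    void clear_failure_indicator(int physical_drive);

    void eject_drive(int logical_drive)
    {
        int phys = logical_to_physical(logical_drive);  /* step 92 */
        if (!drive_failed(phys)) {                      /* step 94 */
            flush_cache();                              /* step 98 */
            disallow_new_commands(phys);                /* step 100 */
        }
        power_down(phys);                               /* step 96 */
        wait_for_spin_down();                           /* step 102 */
        energize_eject_solenoid(phys);                  /* step 104 */
        clear_failure_indicator(phys);                  /* step 106 */
    }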

Referring to FIGS. 16 & 17, an automatic configuration routine can be executed by the backplane with the dedicated microprocessor thereon for facilitating configuration and replacement of failed storage devices. The backplane microprocessor allows control over power supplied to individual storage devices 14 within the pool of storage devices. Such individual control allows automated updating of the storage device IDs. When a storage device fails, it is typically removed and a replacement storage device is inserted in place of the failed storage device. The replacement drive will be automatically set to the ID of the failed drive, as this information is saved in SRAM on the backplane when the automatic configuration routine is executed at system initialization (FIG. 17). When the system is initialized for the first time, any device could be in conflict with another storage device in the storage device pool, in which case the system would not be able to properly address the storage devices. Therefore, when a new system is initialized the automatic configuration routine is executed to assure that the device IDs are not in conflict. As part of the automatic ID configuration routine all devices are reset 108, storage device identifying variables are set 110, and each of the storage devices 14 in the pool is powered down 112. Each individual storage device is then powered up 114 to determine if that device has the proper device ID 116. If the storage device has the proper ID, then the device is powered down and the next storage device is tested. If the device does not have the proper ID, then the device ID is reset 118 and the storage device is power-cycled. The pseudocode for the automatic ID configuration routine includes the following steps:

1. Reset all disks in all channels.
2. Go through every channel in every cabinet:
3.     channel n = 0
       cabinet j = 0
       drive k = 0
4.     Remove power to all disks in channel n.
5.     With first disk in channel n:
       a. Turn drive on via backplane.
       b. If its ID conflicts with a previously turned-on drive, change
          its ID via the backplane, then turn the drive off.
       c. Turn drive off.
       d. Go to next drive until all drives in channel n have been
          checked.
       Use next channel until all channels in cabinet j have been
       checked.

Automatic media selection is employed to facilitate defining volumes and arrays for use in the system. As a practical matter, it is preferable for a single volume or array to be made up of a single type of storage media. However, it is also preferable that the user not be required to memorize the location and type of each storage device in the pool. The automatic media selection feature provides a record of each storage device in the pool, and when a volume or array is defined, the locations of the different types of storage devices are brought to the attention of the user. This and other features are preferably implemented with a graphic user interface (“GUI”) 108 (FIG. 15a) which is driven by the storage management system and displayed on a screen mounted in the chassis.

Further media selection routines may be employed to provide reduced data access time. Users generally prefer to employ storage media with a fast access time for storage of files which are being created or edited. For example, it is much faster to work from a hard disk than from a CD-ROM drive. However, fast access storage media is usually more costly than slow access storage media. In order to accommodate both cost and ease of use considerations, the storage management system can automatically relocate files within the system based upon the frequency at which each file is accessed. Files which are frequently accessed are relocated to and maintained on fast access storage media. Files which are less frequently accessed are relocated to and maintained on slower storage media.

The method executed by the microprocessor controlled backplane is illustrated in FIGS. 18 & 19. In a series of initialization steps the backplane powers up 110, executes power up diagnostics 112, activates an AC control relay 114, reads the ID bitmap 116, sets the drive IDs 118, sequentially powers up the drives 120, reads the fan status 122 and then sets fan airflow 124 based upon the fan status. Temperature sensors located within the chassis are then polled 126 to determine 128 if the operating temperature is within a predetermined acceptable operating range. If not, airflow is increased 130 by resetting fan airflow. The backplane then reads 132 the 12V and 5V power supplies and averages 134 the readings to determine 136 whether power is within a predetermined operating range. If not, the alarm and indicators are activated 138. If the power reading is within the specified range, the AC power is read 140 to determine 142 whether AC power is available. If not, DC power is supplied 144 to the controller and IDE drives and an interrupt 146 is issued. If AC power exists, the state of the power off switch is determined 148 to detect 150 a power down condition. If power down is active, the cache is flushed 152 (to IDE for power failure and to SCSI for shutdown) and the unit is turned off 154. If power down is not active, application status is read 156 for any change in alarms and indicators. Light and audible alarms are employed 158 if required. Fan status is then rechecked 122. When no problem is detected this routine is executed in a loop, constantly monitoring events.
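
After initialization, the backplane routine is essentially a monitoring loop. The C sketch below condenses that loop; every helper is a hypothetical stand-in for a backplane primitive, and the step numbers refer to FIGS. 18 & 19.

    /* Hypothetical backplane primitives. */
    int  read_fan_status(void);
    void set_fan_airflow(int fan_status);
    int  temperature_in_range(void);
    void increase_airflow(void);
    int  power_in_range(void);
    void activate_alarm_and_indicators(void);
    int  ac_power_present(void);
    void supply_dc_power_and_interrupt(void);
    int  power_down_active(void);
    void flush_cache_and_power_off(void);
    void update_alarms_from_application_status(void);

    void backplane_monitor(void)
    {
        for (;;) {
            set_fan_airflow(read_fan_status());       /* steps 122-124 */
            if (!temperature_in_range())              /* steps 126-128 */
                increase_airflow();                   /* step 130 */
            if (!power_in_range())                    /* steps 132-136 */
                activate_alarm_and_indicators();      /* step 138 */
            if (!ac_power_present())                  /* steps 140-142 */
                supply_dc_power_and_interrupt();      /* steps 144-146 */
            if (power_down_active())                  /* steps 148-150 */
                flush_cache_and_power_off();          /* steps 152-154 */
            update_alarms_from_application_status();  /* steps 156-158 */
        }
    }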

A READ cycle is illustrated in FIGS. 23-25. In a first step 160 a cache entry is retrieved. If the entry is in the cache as determined in step 162, the data is sent 164 to the host and the cycle ends. If the entry is not in the cache, a partitioning address is calculated 166 and a determination 168 is made as to whether the data lies on the first half of the disk. If not, the source device is set 170 to be the master. If the data lies on the first half of the disk, mirror availability is determined 172. If no mirror is available, the source device is set 170 to be the master. If a mirror is available, the source device is set 174 to be the mirror. In either case, it is next determined 176 whether the entry is cacheable, i.e., whether the entry fits in the cache. If not, the destination is set 178 to be temporary memory. If the entry is cacheable, the destination is set 180 to be cache memory. A read is then performed 182 and, if successful as determined in step 184, the data is sent 164 to the host. If the read is not successful, the storage device is replaced 186 with the mirror and the read operation is retried 188 on the new drive. If the read retry is successful as determined in step 190, the data is sent 164 to the host. If the read is unsuccessful, the volume is taken off-line 192.
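
The READ cycle can be restated compactly in C. All types and helpers below are hypothetical stand-ins; the step numbers refer to FIGS. 23-25.

    struct volume;
    struct disk;

    /* Hypothetical primitives. */
    int          cache_lookup(struct volume *v, unsigned lba, void **data);
    int          send_to_host(const void *data);
    unsigned     partition_address(struct volume *v, unsigned lba);
    int          on_first_half(struct volume *v, unsigned addr);
    struct disk *master_of(struct volume *v);
    struct disk *mirror_of(struct volume *v);  /* null if no mirror */
    int          is_cacheable(struct volume *v, unsigned lba);
    void        *cache_destination(struct volume *v, unsigned lba);
    void        *temporary_memory(void);
    int          do_read(struct disk *d, unsigned addr, void *dst);
    int          volume_offline(struct volume *v);

    int read_cycle(struct volume *v, unsigned lba)
    {
        void *data;
        if (cache_lookup(v, lba, &data))                   /* steps 160-162 */
            return send_to_host(data);                     /* step 164 */

        unsigned addr = partition_address(v, lba);         /* step 166 */
        struct disk *src = (on_first_half(v, addr) && mirror_of(v))
                           ? mirror_of(v) : master_of(v);  /* steps 168-174 */
        void *dst = is_cacheable(v, lba) ? cache_destination(v, lba)
                                         : temporary_memory(); /* 176-180 */

        if (do_read(src, addr, dst) != 0) {                /* steps 182-184 */
            src = mirror_of(v);                            /* step 186 */
            if (do_read(src, addr, dst) != 0)              /* steps 188-190 */
                return volume_offline(v);                  /* step 192 */
        }
        return send_to_host(dst);                          /* step 164 */
    }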

A WRITE cycle is illustrated in FIGS. 26-29. In an initial step 194, an attempt is made to retrieve the entry from the cache. If the entry is in the cache as determined in step 196, the destination is set 198 to be the cache memory and the data is received 200 from the host. If the entry is not in the cache, a partitioning address is calculated 202, the destination is set 204 to cache memory, and the data is received 206 from the host. A determination 208 is then made as to whether write-back is enabled. If write-back is not enabled, a write 210 is made to the disk. If write-back is enabled, send status is first set 212 to OK, and then a write 210 is made to the disk. A status check is then executed 214 and, if status is not OK, the user is notified 216 and a mirror availability check 218 is done. If no mirror is available, an ERROR message is produced 220. If a mirror is available, a write 222 is executed to the mirror disk and a further status check is executed 224. If that status check is negative, the user is notified 226; if it is positive, send status is set 228 to OK. If status is OK in status check 214, send status is set 230 to OK and a mirror availability check is executed 232. If no mirror is available, the cycle ends. If a mirror is available, a mirror status check is executed 234, and the user is notified 236 if the result is negative.
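
The WRITE path of FIGS. 26-29 admits a similar summary. In the C sketch below, the helper names are again assumptions; note how, with write-back enabled, status is acknowledged to the host before the disk write completes (step 212).

    /*
     * Sketch of the WRITE cycle (FIGS. 26-29), with assumed helper names;
     * the error paths follow the figures, not a particular driver.
     */
    struct request;
    enum source { SRC_MASTER, SRC_MIRROR };   /* as in the READ sketch */

    extern void *cache_lookup(struct request *r);
    extern void  compute_partition_address(struct request *r);
    extern void  set_destination_cache(struct request *r);
    extern void  receive_from_host(struct request *r);
    extern int   write_back_enabled(void);
    extern int   disk_write(enum source s, struct request *r);
    extern void  send_status_ok(struct request *r);
    extern void  notify_user(struct request *r);
    extern int   mirror_available(void);
    extern void  report_error(struct request *r);

    int write_cycle(struct request *r)
    {
        if (!cache_lookup(r))                          /* steps 194-196 */
            compute_partition_address(r);              /* step 202 */
        set_destination_cache(r);                      /* steps 198, 204 */
        receive_from_host(r);                          /* steps 200, 206 */

        if (write_back_enabled())                      /* step 208 */
            send_status_ok(r);                         /* step 212: early ack */

        if (disk_write(SRC_MASTER, r) != 0) {          /* steps 210, 214 */
            notify_user(r);                            /* step 216 */
            if (!mirror_available()) {                 /* step 218 */
                report_error(r);                       /* step 220 */
                return -1;
            }
            if (disk_write(SRC_MIRROR, r) != 0) {      /* steps 222-224 */
                notify_user(r);                        /* step 226 */
                return -1;
            }
            send_status_ok(r);                         /* step 228 */
            return 0;
        }

        send_status_ok(r);                             /* step 230 */
        if (mirror_available() &&                      /* step 232 */
            disk_write(SRC_MIRROR, r) != 0)            /* step 234 */
            notify_user(r);                            /* step 236 */
        return 0;
    }

The early acknowledgment is what makes write-back fast from the host's perspective, at the cost of relying on the mirror path for recovery when the deferred disk write fails.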

Other modifications and alternative embodiments of the present invention will become apparent to those skilled in the art in light of the information provided herein. Consequently, the invention is not to be viewed as limited to the specific embodiments disclosed herein.

Velez-McCaskey, Ricardo E., Barillas-Trennert, Gustavo

Patent Priority Assignee Title
10104175, Mar 08 2011 CITIBANK, N A , AS COLLATERAL AGENT Massively scalable object storage system
11816356, Jul 06 2021 Pure Storage, Inc. Container orchestrator-aware storage system
8510267, Mar 08 2011 CITIBANK, N A , AS COLLATERAL AGENT Synchronization of structured information repositories
8538926, Mar 08 2011 CITIBANK, N A , AS COLLATERAL AGENT Massively scalable object storage system for storing object replicas
8554951, Mar 08 2011 CITIBANK, N A , AS COLLATERAL AGENT Synchronization and ordering of multiple accesses in a distributed system
8712975, Mar 08 2011 CITIBANK, N A , AS COLLATERAL AGENT Modification of an object replica
8712982, Mar 08 2011 CITIBANK, N A , AS COLLATERAL AGENT Virtual multi-cluster clouds
8775375, Mar 08 2011 CITIBANK, N A , AS COLLATERAL AGENT Higher efficiency storage replication using compression
8930693, Mar 08 2011 CITIBANK, N A , AS COLLATERAL AGENT Cluster federation and trust
8990257, Mar 08 2011 CITIBANK, N A , AS COLLATERAL AGENT Method for handling large object files in an object storage system
9021137, Mar 08 2011 CITIBANK, N A , AS COLLATERAL AGENT Massively scalable object storage system
9116629, Mar 08 2011 CITIBANK, N A , AS COLLATERAL AGENT Massively scalable object storage for storing object replicas
9116859, Jul 17 2012 Hitachi, Ltd. Disk array system having a plurality of chassis and path connection method
9197483, Mar 08 2011 CITIBANK, N A , AS COLLATERAL AGENT Massively scalable object storage
9231988, Mar 08 2011 CITIBANK, N A , AS COLLATERAL AGENT Intercluster repository synchronizer and method of synchronizing objects using a synchronization indicator and shared metadata
9237193, Mar 08 2011 CITIBANK, N A , AS COLLATERAL AGENT Modification of an object replica
9405781, Mar 08 2011 CITIBANK, N A , AS COLLATERAL AGENT Virtual multi-cluster clouds
9560093, Mar 08 2011 CITIBANK, N A , AS COLLATERAL AGENT Higher efficiency storage replication using compression
9612776, Dec 31 2013 DELL PRODUCTS, L.P. Dynamically updated user data cache for persistent productivity
9626420, Mar 08 2011 CITIBANK, N A , AS COLLATERAL AGENT Massively scalable object storage system
9684453, Mar 08 2011 CITIBANK, N A , AS COLLATERAL AGENT Cluster federation and trust in a cloud environment
9760289, Mar 08 2011 CITIBANK, N A , AS COLLATERAL AGENT Massively scalable object storage for storing object replicas
Patent Priority Assignee Title
3449718,
3876978,
4044328, Jun 22 1976 BELL & HOWELL COMPANY A DE CORP Data coding and error correcting methods and apparatus
4092732, May 31 1977 International Business Machines Corporation System for recovering data stored in failed memory unit
4228496, Sep 07 1976 TCI-DELAWARE INCORPORATED, A CORP OF DEL Multiprocessor system
4410942, Mar 06 1981 International Business Machines Corporation Synchronizing buffered peripheral subsystems to host operations
4425615, Nov 14 1980 SPERRY CORPORATION, A CORP OF DE Hierarchical memory system having cache/disk subsystem with command queues for plural disks
4433388, Oct 06 1980 NCR Corporation Longitudinal parity
4467421, May 08 1981 Storage Technology Corporation Virtual storage system and method
4590559, Nov 23 1983 Tokyo Shibaura Denki Kabushiki Kaisha Data disc system for a computed tomography X-ray scanner
4636946, Feb 24 1982 International Business Machines Corporation Method and apparatus for grouping asynchronous recording operations
4644545, May 16 1983 Data General Corporation Digital encoding and decoding apparatus
4656544, Mar 09 1984 Sony Corporation Loading device for disc cassettes
4722085, Feb 03 1986 Unisys Corporation High capacity disk storage system having unusually high fault tolerance level and bandpass
4761785, Jun 12 1986 International Business Machines Corporation Parity spreading to enhance storage access
4800483, Dec 01 1982 Hitachi, Ltd. Method and system for concurrent data transfer disk cache system
4817035, Mar 16 1984 CII Honeywell Bull; CII HONEYWELL BULL, A CORP OF FRANCE Method of recording in a disk memory and disk memory system
4849929, Mar 16 1984 Cii Honeywell Bull (Societe Anonyme) Method of recording in a disk memory and disk memory system
4849978, Jul 02 1987 International Business Machines Corporation Memory unit backup using checksum
4903218, Aug 13 1987 HEWLETT-PACKARD DEVELOPMENT COMPANY, L P Console emulation for a graphics workstation
4933936, Aug 17 1987 The United States of America as represented by the Administrator of the National Aeronautics and Space Administration Distributed computing system with dual independent communications paths between computers and employing split tokens
4934823, Nov 10 1986 Hitachi, Ltd. Staging method and system in electronic file apparatus
4942579, Jun 02 1987 STORAGE COMPUTER CORP High-speed, high-capacity, fault-tolerant error-correcting storage system
4993030, Apr 22 1988 AMDAHL CORPORATION, 1250 EAST ARQUES AVENUE, SUNNYVALE, CALIFORNIA 94088 A DE CORP File system for a plurality of storage classes
4994963, Nov 01 1988 Icon Systems International, Inc.; ICON INTERNATIONAL, INC , 774 SOUTH 400 EAST, OREM, UT 84058, A CORP UT System and method for sharing resources of a host computer among a plurality of remote computers
5072378, Dec 18 1989 Storage Technology Corporation Direct access storage device with independently stored parity
5134619, Apr 06 1990 EMC Corporation Failure-tolerant mass storage system
5148432, Nov 14 1988 EMC Corporation Arrayed disk drive system and method
5163131, Sep 08 1989 NetApp, Inc Parallel I/O network file server architecture
5197139, Apr 05 1990 International Business Machines Corporation Cache management for multi-processor systems utilizing bulk cross-invalidate
5210824, Mar 03 1989 Xerox Corporation Encoding-format-desensitized methods and means for interchanging electronic document as appearances
5220569, Jul 09 1990 Seagate Technology LLC Disk array with error type indication and selection of error correction method
5257367, Jun 02 1987 STORAGE COMPUTER CORP Data storage system with asynchronous host operating system communication link
5274645, Apr 06 1990 EMC Corporation Disk array system
5285451, Apr 06 1990 EMC Corporation Failure-tolerant mass storage system
5301297, Jul 03 1991 IBM Corp. (International Business Machines Corp.); International Business Machines Corporation Method and means for managing RAID 5 DASD arrays having RAID DASD arrays as logical devices thereof
5305326, Mar 06 1992 DATA GENERAL CORPORATION, A CORP OF DE High availability disk arrays
5313631, May 21 1991 Hewlett-Packard Company; HEWLETT-PACKARD DEVELOPMENT COMPANY, L P ; Agilent Technologies, Inc Dual threshold system for immediate or delayed scheduled migration of computer data files
5315708, Feb 28 1990 EMC Corporation Method and apparatus for transferring data through a staging memory
5317722, Nov 17 1987 International Business Machines Corporation Dynamically adapting multiple versions on system commands to a single operating system
5329619, Oct 30 1992 Software AG Cooperative processing interface and communication broker for heterogeneous computing environments
5333198, May 27 1993 UNITED STATES OF AMERICA, THE, AS REPRESENTED BY THE SECRETARY OF THE NAVY Digital interface circuit
5355453, Sep 08 1989 Auspex Systems, Inc. Parallel I/O network file server architecture
5367647, Aug 19 1991 International Business Machines Corporation Apparatus and method for achieving improved SCSI bus control capacity
5371743, Mar 06 1992 DATA GENERAL CORPORATION, A CORP OF DE On-line module replacement in a multiple module data processing system
5392244, Aug 19 1993 HEWLETT-PACKARD DEVELOPMENT COMPANY, L P Memory systems with data storage redundancy management
5396339, Dec 06 1991 TOMAS RECORDINGS LLC Real-time disk system
5398253, Mar 11 1992 EMC Corporation Storage unit generation of redundancy information in a redundant storage array system
5412661, Oct 06 1992 International Business Machines Corporation Two-dimensional disk array
5416915, Dec 11 1992 International Business Machines Corporation Method and system for minimizing seek affinity and enhancing write sensitivity in a DASD array
5418921, May 05 1992 LSI Logic Corporation Method and means for fast writing data to LRU cached based DASD arrays under diverse fault tolerant modes
5420998, Apr 10 1992 Toshiba Storage Device Corporation Dual memory disk drive
5423046, Dec 17 1992 International Business Machines Corporation High capacity data storage system using disk array
5428787, Feb 23 1993 Seagate Technology LLC Disk drive system for dynamically selecting optimum I/O operating system
5440716, Nov 03 1989 HEWLETT-PACKARD DEVELOPMENT COMPANY, L P Method for developing physical disk drive specific commands from logical disk access commands for use in a disk array
5452444, Mar 10 1992 Data General Corporation Data processing system using high availability disk arrays for handling power failure conditions during operation of the system
5469453, Mar 02 1990 EMC Corporation Data corrections applicable to redundant arrays of independent disks
5483419, Sep 24 1991 DZU Technology Corporation Hot-swappable multi-cartridge docking module
5485579, Sep 08 1989 Network Appliance, Inc Multiple facility operating system architecture
5495607, Nov 15 1993 CLOUDING CORP Network management system having virtual catalog overview of files distributively stored across network domain
5499337, Sep 27 1991 EMC Corporation Storage device array architecture with solid-state redundancy unit
5513314, Jan 27 1995 Network Appliance, Inc Fault tolerant NFS server system and mirroring protocol
5519831, Jun 12 1991 Intel Corporation Non-volatile disk cache
5519844, Nov 09 1990 EMC Corporation Logical partitioning of a redundant array storage system
5519853, Mar 11 1993 EMC Corporation Method and apparatus for enhancing synchronous I/O in a computer system with a non-volatile memory and using an acceleration device driver in a computer operating system
5524204, Nov 03 1994 International Business Machines Corporation Method and apparatus for dynamically expanding a redundant array of disk drives
5530829, Dec 17 1992 International Business Machines Corporation Track and record mode caching scheme for a storage system employing a scatter index table with pointer and a track directory
5530845, May 13 1992 SBC Technology Resources, INC Storage control subsystem implemented with an application program on a computer
5535375, Apr 20 1992 International Business Machines Corporation File manager for files shared by heterogeneous clients
5537534, Feb 10 1995 HEWLETT-PACKARD DEVELOPMENT COMPANY, L P Disk array having redundant storage and methods for incrementally generating redundancy as data is written to the disk array
5537567, Mar 14 1994 International Business Machines Corporation Parity block configuration in an array of storage devices
5537585, Feb 25 1994 CLOUDING CORP Data storage management for network interconnected processors
5537588, May 11 1994 International Business Machines Corporation Partitioned log-structured file system and methods for operating the same
5542064, Nov 21 1991 Hitachi, Ltd. Data read/write method by suitably selecting storage units in which multiple copies of identical data are stored and apparatus therefor
5542065, Feb 10 1995 HEWLETT-PACKARD DEVELOPMENT COMPANY, L P Methods for using non-contiguously reserved storage space for data migration in a redundant hierarchic data storage system
5544347, Sep 24 1990 EMC Corporation Data storage system controlled remote data mirroring with respectively maintained data indices
5546558, Jun 07 1994 HEWLETT-PACKARD DEVELOPMENT COMPANY, L P Memory system with hierarchic disk array and memory map store for persistent storage of virtual mapping information
5551002, Jul 01 1993 HEWLETT-PACKARD DEVELOPMENT COMPANY, L P System for controlling a write cache and merging adjacent data blocks for write operations
5551003, Dec 11 1992 International Business Machines Corporation System for managing log structured array (LSA) of DASDS by managing segment space availability and reclaiming regions of segments using garbage collection procedure
5559764, Aug 18 1994 International Business Machines Corporation HMC: A hybrid mirror-and-chained data replication method to support high data availability for disk arrays
5564116, Nov 19 1993 Hitachi, Ltd. Array type storage unit system
5568628, Dec 14 1992 Hitachi, Ltd.; Hitachi Microcomputer System, Ltd. Storage control method and apparatus for highly reliable storage controller with multiple cache memories
5572659, May 12 1992 International Business Machines Corporation Adapter for constructing a redundant disk storage system
5572660, Oct 27 1993 Dell USA, L.P. System and method for selective write-back caching within a disk array subsystem
5574851, Apr 19 1993 TAIWAN SEMICONDUCTOR MANUFACTURING CO , LTD Method for performing on-line reconfiguration of a disk array concurrent with execution of disk I/O operations
5579474, Dec 28 1992 Hitachi, Ltd. Disk array system and its control method
5581726, Dec 21 1990 Fujitsu Limited Control system for controlling cache storage unit by using a non-volatile memory
5583876, Oct 05 1993 Hitachi, Ltd. Disk array device and method of updating error correction codes by collectively writing new error correction code at sequentially accessible locations
5586250, Nov 12 1993 Seagate Technology LLC SCSI-coupled module for monitoring and controlling SCSI-coupled raid bank and bank environment
5586291, Dec 23 1994 SWAN, CHARLES A Disk controller with volatile and non-volatile cache memories
5611069, Nov 05 1993 Fujitsu Limited Disk array apparatus which predicts errors using mirror disks that can be accessed in parallel
5615352, Oct 05 1994 HEWLETT-PACKARD DEVELOPMENT COMPANY, L P Methods for adding storage disks to a hierarchic disk array while maintaining data availability
5615353, Mar 05 1991 Zitel Corporation Method for operating a cache memory using a LRU table and access flags
5617425, May 26 1993 Seagate Technology LLC Disc array having array supporting controllers and interface
5621882, Dec 28 1992 Hitachi, Ltd. Disk array system and method of renewing data thereof
5632027, Sep 14 1995 International Business Machines Corporation Method and system for mass storage device configuration management
5634111, Mar 16 1992 Hitachi, Ltd. Computer system including a device with a plurality of identifiers
5642337, Mar 14 1995 Sony Corporation; Sony Electronics Inc. Network with optical mass storage devices
5649152, Oct 13 1994 EMC Corporation Method and system for providing a static snapshot of data stored on a mass storage system
5650969, Apr 22 1994 International Business Machines Corporation Disk array system and method for storing data
5657468, Aug 17 1995 Xenogenic Development Limited Liability Company Method and apparatus for improving performance in a redundant array of independent disks
5659704, Dec 02 1994 HEWLETT-PACKARD DEVELOPMENT COMPANY, L P Methods and system for reserving storage space for data migration in a redundant hierarchic data storage system by dynamically computing maximum storage space for mirror redundancy
5664187, Oct 26 1994 HEWLETT-PACKARD DEVELOPMENT COMPANY, L P Method and system for selecting data for migration in a hierarchic data storage system using frequency distribution tables
5671439, Jan 10 1995 Round Rock Research, LLC Multi-drive virtual mass storage device and method of operating same
5673412, Jul 13 1990 Hitachi, Ltd. Disk system and power-on sequence for the same
5678061, Jul 19 1995 Alcatel-Lucent USA Inc Method for employing doubly striped mirroring of data and reassigning data streams scheduled to be supplied by failed disk to respective ones of remaining disks
5680574, Feb 26 1990 Hitachi, Ltd. Data distribution utilizing a master disk unit for fetching and for writing to remaining disk units
5687390, Nov 14 1995 Veritas Technologies LLC Hierarchical queues within a storage array (RAID) controller
5689678, Mar 11 1993 EMC Corporation Distributed storage array system having a plurality of modular control units
5696931, Sep 09 1994 Seagate Technology LLC Disc drive controller with apparatus and method for automatic transfer of cache data
5696934, Jun 22 1994 HEWLETT-PACKARD DEVELOPMENT COMPANY, L P Method of utilizing storage disks of differing capacity in a single storage volume in a hierarchial disk array
5699503, May 09 1995 Rovi Technologies Corporation Method and system for providing fault tolerance to a continuous media server system
5701516, Mar 09 1992 Network Appliance, Inc High-performance non-volatile RAM protected write cache accelerator system employing DMA and data transferring scheme
5708828, May 25 1995 BMC SOFTWARE, INC System for converting data from input data environment using first format to output data environment using second format by executing the associations between their fields
5720027, May 21 1996 Storage Computer Corporation Redundant disc computer having targeted data broadcast
5732238, Jun 12 1996 Storage Computer Corporation Non-volatile cache for providing data integrity in operation with a volatile demand paging cache in a data storage system
5734812, Aug 20 1991 Hitachi, Ltd. Storage unit with parity generation function and storage systems using storage unit with parity generation analyzation
5737189, Jan 10 1994 DOT HILL SYSTEMS CORP High performance mass storage subsystem
5742762, May 19 1995 Telogy Networks, Inc.; TELOGY NETWORKS, INC Network management gateway
5742792, Apr 23 1993 EMC Corporation Remote data mirroring
5758074, Nov 04 1994 International Business Machines Corp System for extending the desktop management interface at one node to a network by using pseudo management interface, pseudo component interface and network server interface
5761402, Mar 08 1993 Hitachi Maxell, Ltd Array type disk system updating redundant data asynchronously with data access
5774641, Sep 14 1995 International Business Machines Corporation Computer storage drive array with command initiation at respective drives
5778430, Apr 19 1996 Veritas Technologies LLC Method and apparatus for computer disk cache management
5787459, Mar 11 1993 EMC Corporation Distributed disk array architecture
5790774, May 21 1996 Storage Computer Corporation Data storage system with dedicated allocation of parity storage and parity reads and writes only on operations requiring parity information
5794229, Apr 16 1993 SYBASE, INC Database system with methodology for storing a database table by vertically partitioning all columns of the table
5802366, Sep 08 1989 NetApp, Inc Parallel I/O network file server architecture
5809224, Oct 13 1995 Hewlett Packard Enterprise Development LP On-line disk array reconfiguration
5809285, Dec 21 1995 Hewlett Packard Enterprise Development LP Computer system having a virtual drive array controller
5812753, Oct 13 1995 Veritas Technologies LLC Method for initializing or reconstructing data consistency within an array of storage elements
5815648, Nov 14 1995 Veritas Technologies LLC Apparatus and method for changing the cache mode dynamically in a storage array system
5819292, Jun 03 1993 NetApp, Inc Method for maintaining consistent states of a file system and for creating user-accessible read-only copies of a file system
5857112, Sep 09 1992 HITACHI COMPUTER PRODUCTS AMERICA, INC System for achieving enhanced performance and data availability in a unified redundant array of disk drives by using user defined partitioning and level of redundancy
5872906, Oct 14 1993 Fujitsu Limited Method and apparatus for taking countermeasure for failure of disk array
5875456, Aug 17 1995 Xenogenic Development Limited Liability Company Storage device array and methods for striping and unstriping data and for adding and removing disks online to/from a raid storage array
5890204, Jun 03 1996 EMC Corporation User controlled storage configuration using graphical user interface
5890214, Nov 14 1996 Data General Corporation Dynamically upgradeable disk array chassis and method for dynamically upgrading a data storage system utilizing a selectively switchable shunt
5890218, Sep 18 1990 Fujitsu Limited System for allocating and accessing shared storage using program mode and DMA mode
5911150, Jan 25 1994 Data General Corporation Data storage tape back-up for data processing systems using a single driver interface unit
5931918, Sep 08 1989 Network Appliance, Inc Parallel I/O network file server architecture
5944789, Oct 27 1995 EMC IP HOLDING COMPANY LLC Network file server maintaining local caches of file directory information in data mover computers
5948110, Jun 04 1993 NetApp, Inc Method for providing parity in a raid sub-system using non-volatile memory
5963962, May 31 1995 Network Appliance, Inc. Write anywhere file-system layout
5966510, Aug 16 1996 Seagate Technology LLC SCSI-coupled module for monitoring and controlling SCSI-coupled raid bank and bank environment
6038570, Jun 03 1993 Network Appliance, Inc. Method for allocating files in a file system integrated with a RAID disk sub-system
6052797, May 28 1996 EMC Corporation Remotely mirrored data storage system with a count indicative of data consistency
6073222, Oct 13 1994 EMC Corporation Using a virtual device to access data as it previously existed in a mass data storage system
6076142, Mar 15 1996 MICRONET TECHNOLOGY, INC User configurable raid system with multiple data bus segments and removable electrical bridges
6148142, Mar 18 1994 INTEL NETWORK SYSTEMS, INC Multi-user, on-demand video server system including independent, concurrently operating remote data retrieval controllers
EP201330,
EP274817,
GB2086625,
JP2148125,
JP56074807,
JP57185554,
JP59085564,
JP60254318,
JP6162920,
JP63278132,
RE34100, Feb 02 1990 Seagate Technology LLC Data error correction system
Date Maintenance Fee Events
Feb 07 2012 - M1553: Payment of Maintenance Fee, 12th Year, Large Entity.
Feb 07 2012 - M1556: 11.5 yr surcharge - late pmt w/in 6 mo, Large Entity.


Date Maintenance Schedule
Oct 18 2014 - 4 years fee payment window open
Apr 18 2015 - 6 months grace period start (w surcharge)
Oct 18 2015 - patent expiry (for year 4)
Oct 18 2017 - 2 years to revive unintentionally abandoned end (for year 4)
Oct 18 2018 - 8 years fee payment window open
Apr 18 2019 - 6 months grace period start (w surcharge)
Oct 18 2019 - patent expiry (for year 8)
Oct 18 2021 - 2 years to revive unintentionally abandoned end (for year 8)
Oct 18 2022 - 12 years fee payment window open
Apr 18 2023 - 6 months grace period start (w surcharge)
Oct 18 2023 - patent expiry (for year 12)
Oct 18 2025 - 2 years to revive unintentionally abandoned end (for year 12)