A data storage system comprising a quality of service server to optimize revenue realized under multiple service level agreements with a data storage client.
11. A data storage system, comprising:
a quality of service (“QOS”) server in communication with a storage services customer and comprising a non-transitory computer readable medium having computer readable program code disposed therein to optimize the total value of a set of applications being run for one or more storage services customers using an interconnected information storage and retrieval system comprising one or more data storage devices, wherein said information storage and retrieval system is operated by a storage services provider under (N) service level agreements, and wherein each of said (N) service level agreements specifies a maximum response time rtSLA comprising an elapsed time from receipt by the storage services provider from a storage services customer of a request to write data to a data storage device or read data from a data storage device to completion of that request, and wherein the (j)th SLA recites rt(j)SLA, wherein (j) is greater than or equal to 1 and less than or equal to (N), and wherein (N) is greater than 1; and
a plurality of data storage devices in communication with said QOS server;
the computer readable program code comprising a series of computer readable program steps to effect:
obtaining, for each value of (j), a monetary value per unit throughput vj for the (j)th application;
determining for each value of (j) an optimum data flow rate x(j)OPT;
estimating, for each value of (j), a maximum data flow rate x(j)MAX that the (j)th application can utilize;
determining, for each value of (j), if x(j)OPT equals x(j)MAX;
for each value of (j) wherein x(j)OPT does not equal x(j)MAX, delaying implementation of I/O requests from the (j)th application, such that the average response time for the (j)th application equals rt(j)SLA.
1. A data storage system, comprising:
a quality of service (“QOS”) server in communication with a storage services customer and comprising a non-transitory computer readable medium having computer readable program code disposed therein to optimize revenue realized under (N) service level agreements entered into by a storage services provider with one storage services customer to provide data storage services for (N) applications using a plurality of data storage devices, wherein each of said (N) service level agreements specifies an average maximum response time rtSLA comprising an elapsed time from receipt by the storage services provider from a storage services customer of a request to write data to a data storage device or read data from a data storage device to completion of that request, and wherein the (j)th SLA recites rt(j)SLA, wherein (j) is greater than or equal to 1 and less than or equal to (N), and wherein (N) is greater than 1; and
a plurality of data storage devices in communication with said QOS server;
the computer readable program code comprising a series of computer readable program steps to effect:
obtaining, for each value of (j), a monetary value per unit throughput vj for the (j)th application;
calculating (N) optimum data flow rates to maximize the revenues to the storage services provider;
determining for each value of (j) an optimum data flow rate x(j)OPT;
estimating, for each value of (j), a maximum data flow rate x(j)MAX that the (j)th application can utilize;
determining, for each value of (j), if x(j)OPT equals x(j)MAX;
for each value of (j) wherein x(j)OPT does not equal x(j)MAX, delaying implementation of I/O requests from the (j)th application, such that the average response time for the (j)th application equals rt(j)SLA.
2. The data storage system of claim 1, said computer readable program code further comprising a series of computer readable program steps to effect:
for each value of (j) wherein x(j)OPT equals x(j)MAX, setting the (j)th indicator to said first value;
for each value of (j) wherein x(j)OPT does not equal x(j)MAX, setting the (j)th indicator to said second value;
servicing I/O requests from said (N) applications;
operative if the (j)th indicator is set to said first value, measuring x(j)MAX;
operative if the (j)th indicator is set to said second value, measuring x(j)MIN, wherein x(j)MIN comprises the minimum data flow rate for the (j)th application to achieve rt(j)SLA.
3. The data storage system of claim 1, said computer readable program code further comprising a series of computer readable program steps to effect forming a bitmap comprising (N) bits, wherein said bitmap comprises said (N) indicators.
4. The data storage system of
5. The data storage system of
6. The data storage system of
7. The data storage system of
8. The data storage system of
9. The data storage system of
10. The data storage system of
12. The data storage system of claim 11, said computer readable program code further comprising a series of computer readable program steps to effect:
for each value of (j) wherein x(j)OPT equals x(j)MAX, setting the (j)th indicator to said first value;
for each value of (j) wherein x(j)OPT does not equal x(j)MAX, setting the (j)th indicator to said second value;
servicing I/O requests from said (N) applications;
operative if the (j)th indicator is set to said first value, measuring x(j)MAX;
operative if the (j)th indicator is set to said second value, measuring x(j)MIN, wherein x(j)MIN comprises the minimum data flow rate for the (j)th application to achieve rt(j)SLA.
13. The data storage system of claim 11, said computer readable program code further comprising a series of computer readable program steps to effect forming a bitmap comprising (N) bits, wherein said bitmap comprises said (N) indicators.
14. The data storage system of
15. The data storage system of
16. The data storage system of
17. The data storage system of
18. The data storage system of
19. The data storage system of
20. The data storage system of
This application is a Continuation application claiming priority to the application having Ser. No. 10/929,081 filed Aug. 27, 2004, which is hereby incorporated by reference.
This invention relates to a data storage system to optimize revenue realized under multiple service level agreements.
A person offering a data storage service, such as a Storage Service Provider (“SSP”) or an information services department within a company, needs to ensure that performance requirements are met for accessing the stored data. It is common in computer systems for a single data storage system to be used to hold data for multiple storage clients, which may be different computers, different applications, or different users. When the data storage system is owned by a Storage Service Provider, different clients using the same system may be separate customers, with separate contractual arrangements with the SSP.
A storage system has many components that participate in the servicing of requests from clients. These include but are not limited to: arm actuators, data channels, disk controllers, memory buses, and protocol chips on the disk drives themselves; processors, memory controllers, buses, and protocol chips on storage system controllers; and SCSI buses, network links, loops, fabric switches, and other components for the client-to-controller and controller-to-disk interconnect. A request generally requires several of these components to participate at particular steps in its processing. Many components can generally be used concurrently, so that steps in the servicing of many requests are being performed simultaneously.
To facilitate the concurrent utilization of resources, the system is built with an ability to enqueue requests and the subtasks involved in servicing them. There is a tradeoff between throughput (the total number of requests or number of bytes processed) and response time (the elapsed time between when the request is received by the system and when its completion is reported to the client). To achieve maximum throughput, a client usually submits a large number of requests for data. The large request load enables efficient workload scheduling in the system, but the response time in this case may be many times greater than that for a lightly loaded system because the requests spend a long time in the queue before being serviced.
Typically, the storage system contains one or more storage devices such as disk drives for storing data in a persistent way. It also contains one or more processors that handle requests for access, generally calling upon the storage devices to do so. Associated with these storage devices and processors are memory devices and data transfer channels, such as data buses, that are all needed for processing the requests. The system further includes some form of interconnect facility through which the clients submit data requests to the processors. This may be a network capable of supporting general purpose communications among clients, processors and other devices, or it may consist of more specialized interconnect facilities such as direct connections. Within one system, there may be many instances of each kind of device and facility. These are all resources of the system; however, they need not all be owned exclusively by the storage system. For example, the processors and memory buses might be involved in other computational tasks that are not part of handling storage requests from the clients.
One request from a client to the system generally does not require exclusive use of all resources. The system is designed therefore to handle many requests from many clients concurrently by scheduling stages in the processing of requests concurrently, such as disk arm motion and data transfer. One of the system's functions for achieving concurrency is queuing, by which the stages of processing for one request can be delayed when other requests are occupying required resources.
Storage service providers often enter into Service Level Agreements (“SLAs”) with data owners, whereby each SLA typically specifies a maximum average response time, i.e. an RTSLA, for requests made by the data owner to write and/or read data to and/or from the SSP's storage facility. When servicing requests from (N) data owners under (N) SLAs, the SSP must allocate system resources such that RT(j)SLA, for each value of (j), is satisfied, where (j) is greater than or equal to 1 and less than or equal to (N).
Although the data objects used by different clients will generally be separate, the storage system resources involved in accessing those data objects will often overlap. These resources may include any of the components described above, such as storage devices, processors, memory, buses, and interconnect. One client's access to data can suffer performance degradation when another client consumes too much of one or more resources. If this competition for resources is not controlled, it may be difficult to meet the response times specified in the (N) SLAs. Even if each RT(j)SLA is satisfied, permitting each of the (N) applications to consume arbitrary levels of system resources will not likely generate the maximum revenue for the storage system provider.
Various mechanisms are known in the art to allocate system resources amongst multiple storage system clients. What is needed, however, is a data storage system configured to both satisfy the contractual obligations of the storage system provider, and provide system resources in a way that maximizes the revenue to the storage system provider.
Applicants' invention includes a Quality of Service (“QOS”) server and method to optimize revenue realized under multiple service level agreements. The method includes entering into (N) service level agreements to provide data storage services for (N) applications using the information storage and retrieval system, where each of the (N) service level agreements specifies an average maximum response time RTSLA.
The method calculates, for each value of (j), the value per unit throughput vj for the (j)th application, and then determines, for each value of (j), the optimum data flow rate x(j)OPT. The method estimates, for each value of (j), a maximum data flow rate X(j)MAX that the (j)th application can utilize, and determines, for each value of (j), if x(j)OPT equals X(j)MAX. For each value of (j) where x(j)OPT does not equal X(j)MAX, the method delays implementation of I/O requests from the (j)th application, such that the average response time for the (j)th application equals RT(j)SLA.
The invention will be better understood from a reading of the following detailed description taken in conjunction with the drawings in which like reference designators are used to designate like elements, and in which:
This invention is described in preferred embodiments in the following description with reference to the Figures, in which like numbers represent the same or similar elements. The invention will be described as embodied in an apparatus and method to operate a data processing system. A pending United States patent application having Ser. No. 10/187,227, owned by the common assignee hereof, further describes Applicants' computer storage system, and is hereby incorporated by reference herein.
Referring now to
In certain embodiments, one or more of the (N) clients comprises a computer system, such as a mainframe computer, personal computer, workstation, and combinations thereof, including one or more operating systems such as Windows, AIX, Unix, MVS, LINUX, etc. (Windows is a registered trademark of Microsoft Corporation; AIX is a registered trademark and MVS is a trademark of IBM Corporation; and UNIX is a registered trademark in the United States and other jurisdictions licensed exclusively through The Open Group.) As those skilled in the art will appreciate, such interconnected computers are often referred to as host computers. In certain embodiments, one or more of the (N) clients comprises an application running on a host computer.
Each of the (N) clients is capable of generating requests 240 for the storage system 270 to store data to and retrieve data from data objects 205 associated with the storage system. The requests 240 contain attributes 245 such as whether data is stored or retrieved, the location at which the data is stored or retrieved, and the length of the request. The storage system 270 may consist of one device or multiple devices which are used by their owner to constitute a single data storage facility. Each client has at least one gateway connection 208 to a gateway 210, such as in the illustrated embodiment gateways 210a and 210n.
Each gateway includes a processor, such as processors 212a and 212n, and a memory, such as memory 214a and 214n. In certain embodiments, each gateway device further includes a request classifier 220 and a flow controller 230. A client may have connections to multiple gateways as well as multiple connections to the same gateways, and multiple clients may have connections to the same gateway.
Each gateway has at least one storage connection 216 to the storage system, i.e. system 270, by which it can transmit requests to the storage system and by which the storage system transmits the responses to these requests to the gateway. The gateways are connected to a Quality of Service (“QoS”) server 260 which provides configuration and control information to the gateways and extracts and stores monitor data from the gateways. QoS Server 260 includes processor 262 and memory 264.
Within each flow controller 230 in operation are data objects each of which is referred to as a service class 231. Each service class contains a balance vector 234, a replenishment rate vector 236, and a carryover limit vector 238. Also in each service class 231 is a delay queue 232 into which requests 240 can be enqueued.
Within each classifier 220 in operation are data objects each of which is referred to as a classification rule 225. The classification rules 225 contain information by which each request 240 is associated with a service class 231.
In the illustrated embodiment of
In certain embodiments, communication links 330, 342, 344, 346, 352, and 354, are selected from a serial interconnection, such as RS-232 or RS-422, an ethernet interconnection, a SCSI interconnection, a Fibre Channel interconnection, an ESCON interconnection, a FICON interconnection, a Local Area Network (LAN), a Wide Area Network (WAN), a public wide area network, Storage Area Network (SAN), Transmission Control Protocol/Internet Protocol (TCP/IP), the Internet, and combinations thereof.
In certain embodiments, the clients, storage system, and gateways are attached in a network via Fibre Channel hardware, through one or more switch fabrics 320. In these Fibre Channel embodiments, gateways 310a and 310b are computing devices comprising a processor and a memory, and are attached to the Fibre Channel fabric.
In certain embodiments, a processor disposed in the gateway, such as processor 362, executes a program, such as program 368, stored in memory 364 that performs the actions of a classifier 220 and a flow controller 230. QoS Server 360 comprises a computing device 366 which includes a processor 362, a memory 364, and one or more programs 368 stored in memory 364.
In certain embodiments, storage system 270 (
Information storage and retrieval system 100 further includes a plurality of host adapters 102-105, 107-110, 112-115, and 117-120, disposed in four host bays 101, 106, 111, and 116. Each host adapter may comprise one or more Fibre Channel ports, one or more FICON ports, one or more ESCON ports, or one or more SCSI ports. Each host adapter is connected to both clusters through one or more Common Platform Interconnect bus 121 such that each cluster can handle I/O from any host adapter.
Processor portion 130 includes processor 132 and cache 134. In certain embodiments, processor portion 130 further includes memory 133. In certain embodiments, memory device 133 comprises random access memory. In certain embodiments, memory device 133 comprises non-volatile memory.
Processor portion 140 includes processor 142 and cache 144. In certain embodiments, processor portion 140 further includes memory 143. In certain embodiments, memory device 143 comprises random access memory. In certain embodiments, memory device 143 comprises non-volatile memory.
I/O portion 160 includes non-volatile storage (“NVS”) 162 and NVS batteries 164. I/O portion 170 includes NVS 172 and NVS batteries 174.
I/O portion 160 further comprises a plurality of device adapters, such as device adapters 165, 166, 167, and 168, and sixteen disk drives organized into two arrays, namely array “A” and array “B”. The illustrated embodiment of
In the illustrated embodiment of
In certain embodiments, Applicants' storage system 270/370 comprises an automated media library comprising a plurality of tape cassettes, one or more robotic accessors, and one or more tape drives. U.S. Pat. No. 5,970,030, assigned to the common assignee herein, describes such an automated media library and is hereby incorporated by reference. In certain embodiments, Applicants' storage system 270/370 comprises a virtual tape system. U.S. Pat. No. 6,269,423, assigned to the common assignee herein, describes such a virtual tape system, and is hereby incorporated by reference.
In step 405, the Storage System Provider (“SSP”) enters into (N) Service Level Agreements (“SLAs”) for (N) applications, where the (j)th SLA specifies a maximum average response time RT(j)SLA. In entering into the SLA, the SSP agrees that the (j)th application will receive I/O services, such that the average I/O request from the (j)th application is serviced within the specified RT(j)SLA.
In step 410, the SSP operates the storage system, such as for example system 200 or system 300, where the system receives I/O requests from the (N) applications. In step 420, Applicants' method measures and saves the maximum data flow rate for each of the (N) applications. These values are used to initialize the quantities X(j)MAX, which, in the subsequent operation of the algorithm, represent estimates of the maximum data flow rate that the (j)th application can utilize. That is, providing system resources in excess of those needed to reach X(j)MAX will be estimated not to result in a higher throughput for the (j)th application. The quantities X(j)MIN are also set to initial values in step 420. In subsequent operation of the algorithm, the values X(j)MIN represent estimates of the data flow rate that the (j)th application will utilize when its average I/O response time is equal to the specified RT(j)SLA. In certain embodiments, the initial value of X(j)MIN is set to 0.5*X(j)MAX. Step 420 can be performed any time after step 410 and prior to performing step 460.
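By way of a purely illustrative, non-limiting sketch of this initialization (the data structures and names below are hypothetical, not part of Applicants' system):

```python
# Illustrative sketch of step 420: initialize the flow-rate estimates X(j)MAX and
# X(j)MIN for each of the (N) applications from measured maximum data flow rates.

def initialize_flow_estimates(measured_max_rates):
    """measured_max_rates: measured maximum data flow rate for each application,
    e.g. in megabytes per second. Returns (x_max, x_min) lists of length N."""
    x_max = list(measured_max_rates)          # X(j)MAX estimates
    x_min = [0.5 * rate for rate in x_max]    # X(j)MIN initialized to 0.5 * X(j)MAX
    return x_max, x_min

# Hypothetical measurements for N = 3 applications (MB/s):
x_max, x_min = initialize_flow_estimates([120.0, 80.0, 200.0])
```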
In certain embodiments, step 420 is performed by the SSP. In certain embodiments, step 420 is performed by a gateway device, such as gateway device 210 (FIG. 2)/310 (
In step 430, Applicants' method calculates and saves the value per unit throughput vj for the (j)th application. As those skilled in the art will appreciate, vj can be expressed in any unit of currency, e.g., U.S. Dollars, Euros, and the like. Step 430 may be performed any time after performing step 405 and prior to performing step 440.
In certain embodiments, step 430 is performed by the SSP. In certain embodiments, step 430 is performed by a gateway device, such as gateway device 210 (FIG. 2)/310 (
In certain embodiments of Applicants' method, the quantities vj are determined solely from contractual agreements, i.e. the SLAs, in which a base payment level Pj is stated in return for a corresponding base level of throughput Yj, provided that the required response time objective is met. In that case, vj=Pj/Yj. In other embodiments, vj may also reflect dynamic adjustments of the contractual agreement. For example, the service level agreement may permit the application to add incremental payments to vj as a temporary mechanism by which to influence the priority with which requests by the application are being handled. In certain of these embodiments,
vj=(1+Fj)*Pj/Yj
where Fj>=0 is an adjustment factor specified dynamically by the application.
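As a purely hypothetical worked example of this adjustment (the payment, throughput, and adjustment figures are illustrative only), an SLA stating a base payment Pj of 2,000 dollars per period for a base throughput Yj of 500 gigabytes per period, with a dynamic adjustment factor Fj of 0.25, yields:

```latex
v_j = (1 + F_j)\,\frac{P_j}{Y_j} = (1 + 0.25) \times \frac{2000}{500} = 5 \ \text{dollars per gigabyte of throughput}
```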
In step 440, Applicants' method maximizes linear optimization Equation (1),
x1v1+x2v2+ . . . +xNvN (1)
where xj represents the throughput, i.e. the data flow rate, for the (j)th application. Thus, the term xjvj represents the revenue generated by the (j)th application. Step 440 includes maximizing Equation (1), subject to the constraints of Equations (2) and Equation (3):
Referring now to Equations (2), the equation c11x1+c12x2+ . . . +c1NxN comprises the aggregate usage of a first system resource by all (N) applications. System resources include, for example, device adapter bandwidth, host adapter bandwidth, disk utilization, and the like. For that first system resource, U1 comprises the maximum available level of that first system resource. For the k-th system resource, Uk comprises the maximum allowable utilization of that resource. This is the maximum average utilization with which it is feasible for each application that uses the resource to satisfy its response time RT(j)SLA. The aggregate usage of a system resource by all (N) applications cannot exceed the maximum available level of that system resource.
In certain embodiments, the values cnj and Un in Equations (2) are obtained from the known characteristics of the system resources, from the distribution of the data used by the j-th application over those resources, and from the characteristics of the requests generated by the j-th application. For example, if the (n)th resource is a data channel with a bandwidth capacity of 200 megabytes per second, and the (j)th application transmits 50% of its data over this channel, and if xj is measured as the number of megabytes transmitted by the (j)th application per second, then we would have Un=200 and cnj=0.5 (expressing 50% as a fraction). In certain embodiments, the values cnj and Un in Equations (2) may be changed over time because the system determines that the characteristics of the system resources, the distribution of data over the resources, or the characteristics of requests are different from what was used to produce the prior set of values. This determination may be done, for example, by the QoS Server 260 (
Referring now to Equation (3), the value for x(j) must fall within the range bounded by X(j)MIN and X(j)MAX. X(j)MIN represents the minimum level of system resources that must be provided to application (j) in order for the storage system to fulfill its contractual obligations, i.e. application (j) must realize an average response time less than or equal to RT(j)SLA. X(j)MAX represents the maximum level of system resources that application (j) can effectively utilize.
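The constraint expressions themselves are not reproduced in this text; restated from the description above, Equations (2) and Equation (3) take the following general form, with (K) system resources and (N) applications:

```latex
c_{k1}x_1 + c_{k2}x_2 + \cdots + c_{kN}x_N \le U_k , \qquad k = 1, \ldots, K \qquad (2)

X(j)_{\mathrm{MIN}} \le x_j \le X(j)_{\mathrm{MAX}} , \qquad j = 1, \ldots, N \qquad (3)
```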
Methods to solve linear optimization equations, such as Equation (1) subject to Equations (2) and Equation (3), are known in the art. For example, Harvey M. Wagner, Principles of Operations Research: With Applications to Managerial Decisions, 2nd Edition, Prentice-Hall: Englewood Cliffs, 1975, at Section 5.10, entitled “Upper-Bounded Variables,” teaches a method for solving linear optimization equations, such as Equation (1) subject to Equations (2) and Equation (3), and is hereby incorporated by reference.
By maximizing Equation (1), subject to the constraints of Equations (2) and Equation (3), step 440 calculates (N) optimum data flow rates to maximize the revenues realized by the SSP, where x(j)OPT represents the optimum data flow rate for the (j)th application.
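By way of a non-limiting illustration of step 440 (a sketch only, not Applicants' implementation; the coefficients, capacities, bounds, and unit values below are hypothetical, and an off-the-shelf linear-programming solver is assumed):

```python
# Illustrative sketch of step 440: maximize x1*v1 + ... + xN*vN (Equation (1))
# subject to the resource constraints of Equations (2) and the bounds of
# Equation (3), using the linear-programming solver in SciPy.
import numpy as np
from scipy.optimize import linprog

v = np.array([5.0, 2.0, 3.0])            # vj: value per unit throughput (hypothetical)
C = np.array([[0.5, 0.3, 0.2],           # ckj: fraction of resource k used per unit
              [0.1, 0.4, 0.3]])          #      throughput of application j
U = np.array([100.0, 80.0])              # Uk: maximum allowable utilization of resource k
x_min = np.array([60.0, 40.0, 100.0])    # X(j)MIN: minimum rate needed to meet RT(j)SLA
x_max = np.array([120.0, 80.0, 200.0])   # X(j)MAX: maximum rate the application can use

# linprog minimizes its objective, so the unit values are negated to maximize revenue.
result = linprog(-v, A_ub=C, b_ub=U, bounds=list(zip(x_min, x_max)), method="highs")
x_opt = result.x                         # x(j)OPT for j = 1..N
revenue = float(v @ x_opt)               # maximized revenue to the storage services provider
```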
In step 450, Applicants' method sets (j) equal to 1. In step 460, Applicants' method determines if the optimum data throughput rate, x(j)OPT, calculated in step 440 for the (j)th application, is equal to the measured maximum data flow rate X(j)MAX measured in step 420 for the (j)th application. Step 460 includes obtaining the stored values for X(j)MAX and x(j)OPT from memory, such as for example memory 214 (
If Applicants' method determines in step 460 that the optimum data throughput rate x(j)OPT is equal to the measured maximum data flow rate X(j)MAX, then the method transitions from step 460 to step 470 wherein the method sets an indicator to indicate that X(j)MAX is to be later measured.
In certain embodiments, Applicants' method creates and maintains a database which includes the calculated values for x(j)OPT, the values for RT(j)SLA abstracted from the relevant SLAs, and measured values for the parameters X(j)MAX and X(j)MIN, in accord with steps 462 and 470 of Applicants' method. In these embodiments, step 470 includes setting a field in the database which indicates that X(j)MAX is to be measured when the provision of system resources is “throttled” for certain other applications. In certain embodiments, this database is created and maintained in a Quality of Service Server, such as for example QoS Server 260 (
In other embodiments, Applicants' method includes forming a bitmap comprising (N) bits, where each of those bits can have a first value or a second value. Setting the (j)th bit to the first value indicates that the scheduling of requests from the (j)th application should not be intentionally delayed, and that the measurements of actual throughput for the (j)th application should be used to update estimates of X(j)MAX. Setting the (j)th bit to the second value indicates that the scheduling of requests from the (j)th application should be intentionally delayed such that the response-time requirement in the SLA is just met, and that the measurements of actual throughput for the (j)th application should be used to update estimates of X(j)MIN.
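By way of a non-limiting sketch of such a bitmap (the use of a Python integer as the bit field is an illustrative choice, not Applicants' implementation):

```python
# Illustrative sketch of the (N)-bit indicator bitmap. Bit (j) at the first value (1)
# means requests from the (j)th application are not intentionally delayed and its
# measured throughput updates X(j)MAX; bit (j) at the second value (0) means its
# requests are delayed to just meet RT(j)SLA and its throughput updates X(j)MIN.

def build_indicator_bitmap(x_opt, x_max, tolerance=1e-9):
    bitmap = 0
    for j, (opt, mx) in enumerate(zip(x_opt, x_max)):
        if abs(opt - mx) <= tolerance:   # x(j)OPT equals X(j)MAX
            bitmap |= 1 << j             # set the (j)th bit to the first value
    return bitmap

def indicator_is_first_value(bitmap, j):
    return (bitmap >> j) & 1 == 1
```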
In certain embodiments, this bitmap is created and maintained in a Quality of Service Server, such as for example QoS Server 260 (
Alternatively, if Applicants' method determines in step 460 that the optimum data throughput rate x(j)OPT is not equal to the measured maximum data flow rate X(j)MAX, then the method transitions from step 460 to step 462 wherein the method indicates that X(j)MIN is to be measured. In certain embodiments, step 462 includes setting a field in the database described above, where that field indicates that X(j)MIN is to be measured for the (j)th application. In certain embodiments, step 462 includes setting the (j)th bit in the above-described bitmap of (N) bits to the second value.
Applicants' method transitions from step 462 to step 464 wherein the method throttles the (j)th application such that I/O requests serviced from the (j)th application just comply with the average response time specified in the (j)th SLA, i.e., RT(j)SLA. In certain embodiments, step 464 includes enqueuing I/O requests received from the (j)th application, where those I/O requests are enqueued for incrementally increasing time periods until Applicants' data processing system just reaches the contractual RT(j)SLA.
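One non-limiting way to realize such incremental throttling is sketched below; the measurement and delay-setting hooks are hypothetical placeholders rather than elements of Applicants' system:

```python
# Illustrative sketch of step 464: increase the delay applied to the (j)th
# application's enqueued I/O requests until its average response time just
# reaches the contractual RT(j)SLA.

def throttle_to_sla(rt_sla_seconds, measure_avg_response_time, set_enqueue_delay,
                    step_seconds=0.001, max_delay_seconds=1.0):
    """measure_avg_response_time(): hypothetical hook that services I/O for an
    interval and returns the observed average response time, in seconds.
    set_enqueue_delay(d): hypothetical hook that holds each request in the delay
    queue for d seconds before it is scheduled."""
    delay = 0.0
    while delay <= max_delay_seconds:
        set_enqueue_delay(delay)
        observed = measure_avg_response_time()
        if observed >= rt_sla_seconds:       # the SLA response time is just reached
            return delay
        delay += step_seconds                # incrementally increase the delay
    return max_delay_seconds
```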
In certain embodiments, step 464 is performed by a gateway device, such as gateway device 210 (FIG. 2)/310 (
In step 480, Applicants' method determines if the calculated value for x(j)OPT has been compared to the measured value for X(j)MAX for each of the (N) applications, i.e. if (j) equals (N). In certain embodiments, step 480 is performed by a gateway device interconnecting the computer running the (j)th application and Applicants' information storage and retrieval system. In certain embodiments, step 480 is performed by a Quality of Service server interconnected with the computer running the (j)th application and with Applicants' information storage and retrieval system.
If Applicants' method determines in step 480 that the calculated value for x(j)OPT has not been compared to the measured value for X(j)MAX for each of the (N) applications, then Applicants' method transitions from step 480 to step 485 wherein the method increments (j). Applicants' method transitions from step 485 to step 460 and continues as described above.
Alternatively, if Applicants' method determines in step 480 that the calculated value for x(j)OPT has been compared to the measured value for X(j)MAX for each of the (N) applications, then Applicants' method transitions from step 480 to step 490 wherein the method operates Applicants' data processing system, such as for example system 200/300, and for each value of (j) measures the actual throughput utilized by the (j)th application, and updates the saved value of either X(j)MIN or X(j)MAX, as determined by steps 462 or 470, respectively. In some embodiments either X(j)MIN or X(j)MAX is replaced with the newly measured value. In other embodiments, the updated value is a combination of the previous value with the newly measured value. Applicants' method transitions from step 490 to step 430 and continues as described above.
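By way of a non-limiting sketch of the latter update embodiments (the smoothing weight shown is an assumption, not taken from the disclosure), the combination of the prior estimate with the newly measured throughput may be an exponentially weighted moving average:

```python
# Illustrative sketch of the step 490 estimate update: blend the previous value of
# X(j)MIN or X(j)MAX with the throughput newly measured for the (j)th application.

def update_estimate(previous, measured, weight=0.25):
    """weight = 1.0 reproduces simple replacement with the newly measured value."""
    return (1.0 - weight) * previous + weight * measured
```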
In certain embodiments, individual steps recited in
In certain embodiments, Applicants' invention includes instructions residing in memory 264 (
While the preferred embodiments of the present invention have been illustrated in detail, it should be apparent that modifications and adaptations to those embodiments may occur to one skilled in the art without departing from the scope of the present invention as set forth in the following claims.
McNutt, Bruce, Chambliss, David D., Aschoff, John G.