Dynamic coordination and control of network-connected devices within a distributed processing platform is disclosed for large-scale network site testing, or for other distributed projects. For network site testing, the distributed processing system utilizes a plurality of client devices which are running a client agent program associated with the distributed computing platform and which are running potentially distinct project modules for the testing of network sites or other projects. The participating client devices can be selected based upon their attributes and can receive test workloads from the distributed processing server systems. In addition, the client devices can send and receive poll communications that may be used during processing of the project to control, manage and coordinate the project activities of the distributed devices. If desired, a separate poll server system can be dedicated to handling the poll communications and the coordination and control operations with the participating distributed devices during test operations, thereby allowing other server tasks to be handled by other distributed processing server systems. Once the tests are complete, the results can be communicated from the client devices to the server systems and can be reported, as desired. Additionally, the distributed processing system can identify the attributes, including device capabilities, of distributed devices connected together through a wide variety of communication systems and networks and utilize those attributes to organize, manage and distribute project workloads to the distributed devices.
18. A distributed computing platform having dynamic coordination capabilities for distributed client systems processing project workloads, comprising:
a plurality of network-connected distributed client systems, the client systems having under-utilized capabilities;
a client agent program configured to run on the client systems and to provide workload processing for at least one project of a distributed computing platform; and
at least one server system configured to communicate with the plurality of client systems through a network to provide the client agent program to the client systems, to send initial project and poll parameters to the client systems, to receive poll communications from the client systems during processing of the project workloads, wherein a dynamic snapshot information of current project status is provided based at least in part upon the poll communications from the client systems, to analyze the poll communications utilizing the dynamic snapshot information to determine whether to change how many client systems are active in the at least one project, wherein if a fewer number is desired, including within a poll response communications a reduction in the number of actively participating clients, and if a greater number is desired, adding client systems to active participation in the at least one project within a poll response communications, the server system repeatedly utilizing the poll communications and the poll response communications to coordinate project activities of the client systems during project operations.
51. A server system comprising a network interface, the server system configured to:
distribute to each of a plurality of client systems via the network interface workloads for a project that is configured to be carried out by a client agent program executing on each of the plurality of client systems;
transmit to each of the plurality of client systems via the network interface initial project and poll parameters applicable to workload processing of the project by the client agent program;
receive via the network interface poll communications indicative of ongoing workload processing of the project by the client agent program executing on each of the plurality of client systems, wherein the poll communications provide at least a partial basis for a dynamic snapshot information of current project status;
analyze the poll communications utilizing the dynamic snapshot information to make a determination of whether to change a current number of client systems that are active in the project;
transmit via the network interface a poll response communications, wherein the poll response communications include a reduction in the current number if the determination is to reduce the current number of client systems that are active in the project, and the poll response communications include an increase in the current number if the determination is to increase the current number of client systems that are active in the project;
repeatedly utilize the poll communications and the poll response communications to coordinate project activities of the client systems during project operations.
66. A tangible computer-readable medium having stored thereon computer-executable instructions that, if executed by a server system, cause the server system to perform a method comprising:
distributing workloads for at least one project, and initial project and poll parameters, to each of a plurality of client systems, each of the plurality of client systems running a client agent program to provide workload processing for the at least one project;
receiving poll communications from the plurality of client systems during processing of project workloads by the plurality of client systems, the poll communications providing at least part of a dynamic snapshot information of current project status;
analyzing the poll communications to determine whether or not to make one or more modifications to the initial project and poll parameters, wherein the modifications to the initial project and poll parameters utilize the dynamic snapshot information to determine whether to change how many client systems are active in the at least one project, and if a fewer number is desired, including within a poll response communications a reduction in the number of actively participating clients, and if a greater number is desired, adding client systems to active participation in the at least one project;
transmitting the poll response communications to the plurality of client systems to modify the initial project and poll parameters depending upon one or more decisions reached in the analyzing step; and
repeating the receiving, analyzing and transmitting steps to dynamically coordinate project activities of the plurality of client systems during project operations.
34. In a server system communicatively coupled to a network via a communication interface, a method of providing dynamic coordination of distributed client systems, the method comprising:
distributing via the communication interface workloads for at least one project, and initial project and poll parameters, to each of a plurality of client systems that is communicatively connected to the network, each of the plurality of client systems running a client agent program to provide workload processing for the at least one project;
receiving via the communication interface poll communications from the plurality of client systems during processing of project workloads by the plurality of client systems, the poll communications providing at least part of a dynamic snapshot information of current project status;
analyzing the poll communications to determine whether or not to make one or more modifications to the initial project and poll parameters, wherein the modifications to the initial project and poll parameters utilize the dynamic snapshot information to determine whether to change how many client systems are active in the at least one project, and if a fewer number is desired, including within a poll response communications a reduction in the number of actively participating clients, and if a greater number is desired, adding client systems to active participation in the at least one project;
transmitting via the communication interface the poll response communications to the plurality of client systems to modify the initial project and poll parameters depending upon one or more decisions reached in the analyzing step; and
repeating the receiving, analyzing and transmitting steps to dynamically coordinate project activities of the plurality of client systems during project operations.
1. A method of providing dynamic coordination of distributed client systems in a distributed computing platform, comprising:
providing at least one server system coupled to a network;
providing a plurality of network-connected distributed client systems, the client systems having under-utilized capabilities and running a client agent program to provide workload processing for at least one project of a distributed computing platform;
utilizing the server system to distribute workloads for the at least one project to the client systems and to distribute initial project and poll parameters to the client systems;
receiving poll communications from the client systems during processing of project workloads by the client systems, wherein a dynamic snapshot information of current project status is provided based at least in part upon the poll communications;
analyzing the poll communications to determine whether or not to make one or more modifications to the initial project and poll parameters, wherein the modifications to the initial project and poll parameters utilize the dynamic snapshot information to determine whether to change how many client systems are active in the at least one project, and if a fewer number is desired, including within a poll response communications a reduction in the number of actively participating clients, and if a greater number is desired, adding client systems to active participation in the at least one project;
sending the poll response communications to the client systems to modify the initial project and poll parameters depending upon one or more decisions reached in the analyzing step; and
repeating the receiving, analyzing and sending steps to dynamically coordinate project activities of the plurality of client systems during project operations.
2. The method of
3. The method of
4. The method of
5. The method of
6. The method of
8. The method of
9. The method of
11. The method of
12. The method of
13. The method of
14. The method of
15. The method of
16. The method
17. The method of
19. The distributed computing platform of
20. The distributed computing platform of
21. The distributed computing platform of
22. The distributed computing platform of
23. The distributed computing platform of
24. The distributed computing platform of
25. The distributed computing platform of
26. The distributed computing platform of
27. The distributed computing platform of
28. The distributed computing platform of
29. The distributed computing platform of
30. The distributed computing platform of
31. The distributed computing platform of
32. The distributed computing platform of
33. The distributed computing platform of
35. The method of
36. The method of
37. The method of
38. The method of
39. The method of
40. The method of
41. The method of
42. The method of
and wherein transmitting via the communication interface the poll response communications to the plurality of client systems comprises sending the poll response communications from the poll server system of the server system.
43. The method of
44. The method of
45. The method of
46. The method of
47. The method of
48. The method of
49. The method
50. The method of
52. The server system of
53. The server system of
54. The server system of
55. The server system of
56. The server system of
57. The server system of
58. The server system of
59. The server system of
60. The server system of
61. The server system of
62. The server system of
63. The server system of
64. The server system of
65. The server system of
This application is a continuation-in-part application of the following applications: application Ser. No. 09/539,448 entitled “CAPABILITY-BASED DISTRIBUTED PARALLEL PROCESSING SYSTEM AND ASSOCIATED METHOD,” now abandoned; application Ser. No. 09/539,428 entitled “METHOD OF MANAGING DISTRIBUTED WORKLOADS AND ASSOCIATED SYSTEM,” and application Ser. No. 09/539,106 entitled “NETWORK SITE TESTING METHOD AND ASSOCIATED SYSTEM,” which was filed on Mar. 30, 2000, now U.S. Pat. No. 6,891,802, and which is hereby incorporated by reference in its entirety. This application is also a continuation-in-part application of the following applications: application Ser. No. 09/603,740 entitled “METHOD OF MANAGING WORKLOADS AND ASSOCIATED DISTRIBUTED PROCESSING SYSTEM,” now abandoned, and application Ser. No. 09/602,983 entitled “CUSTOMER SERVICES AND ADVERTISING BASED UPON DEVICE ATTRIBUTES AND ASSOCIATED DISTRIBUTED PROCESSING SYSTEM,” now U.S. Pat. No. 6,963,897, each of which was filed on Jun. 23, 2000, and each of which is hereby incorporated by reference in its entirety. This application is also a continuation-in-part application of the following application: application Ser. No. 09/648,832 entitled “SECURITY ARCHITECTURE FOR DISTRIBUTED PROCESSING SYSTEMS AND ASSOCIATED METHOD,” which was filed on Aug. 25, 2000, now U.S. Pat. No. 6,847,995, and which is hereby incorporated by reference in its entirety. This application is also a continuation-in-part application of the following co-pending application: application Ser. No. 09/794,969 entitled “SYSTEM AND METHOD FOR MONITIZING NETWORK CONNECTED USER BASES UTILIZING DISTRIBUTED PROCESSING SYSTEMS,” which was filed on Feb. 27, 2001, and which is hereby incorporated by reference in its entirety. This application is also a continuation-in-part application of the following co-pending application: application Ser. No. 09/834,785 entitled “SOFTWARE-BASED NETWORK ATTACHED STORAGE SERVICES HOSTED ON MASSIVELY DISTRIBUTED PARALLEL COMPUTING NETWORKS,” which was filed on Apr. 13, 2001, and which is hereby incorporated by reference in its entirety. The present application also claims priority to the following co-pending U.S. provisional patent application: Provisional Application Ser. No. 60/368,871, entitled “MASSIVELY DISTRIBUTED PROCESSING SYSTEM ARCHITECTURE, SCHEDULING, UNIQUE DEVICE IDENTIFICATION AND ASSOCIATED METHODS,” which was filed Mar. 29, 2002, and which is hereby incorporated by reference in its entirety.
This invention relates to distributing project workloads among a multitude of distributed devices and more particularly to techniques and related methods for managing, facilitating and implementing distributed processing in a network environment. This invention is also related to functional, quality of service (QoS), and other testing of network sites utilizing a distributed processing platform.
Network site testing is typically desired to determine how a site or connected service performs under a desired set of test circumstances. Several common tests that are often attempted are site load testing and quality of service (QoS) testing. Quality of service (QoS) testing refers to testing a user's experience accessing a network site under normal or various other usability situations. Load testing refers to testing the load a particular network site's infrastructure can handle in user interactions. An extreme version of load testing is a denial-of-service attack, where a system or group of systems intentionally attempts to overload and shut down a network site. Co-pending application Ser. No. 09/539,106 entitled “NETWORK SITE TESTING METHOD AND ASSOCIATED SYSTEM,” (which is commonly owned by United Devices, Inc.) discloses a distributed processing system capable of utilizing a plurality of distributed client devices to test network web sites, for example, with actual expected user systems. One problem associated with network site testing is the management, control and coordination of the distributed devices participating in the network site testing project.
The present invention provides architectures and methods for the dynamic coordination and control of network connected devices for network site testing and other distributed computing projects. For the network site testing, the distributed processing system utilizes a plurality of client devices that run client agent programs which are associated with a distributed computing platform and which are running one or more possibly distinct project modules for network site testing or other projects. The participating client devices receive project workload units from the distributed processing server systems. Poll communications between the client systems and the server systems are used during processing of the distributed project to control, manage and coordinate the activities of the distributed devices in accomplishing the project goal, such as network site testing. If desired, a separate poll server system can be dedicated to handle the poll communications and coordination and control operations with the participating distributed devices during test operations, thereby allowing other server tasks to be handled by other distributed processing server systems. Once the tests are complete, the results can be communicated from the client devices to the server systems and can be reported, as desired. Additionally, the distributed processing system can identify the attributes of distributed devices connected together through a wide variety of communication systems and networks and utilize those attributes to organize, manage and distribute project workloads to the distributed devices.
It is noted that the appended drawings illustrate only exemplary embodiments of the invention and are, therefore, not to be considered limiting of its scope, for the invention may admit to other equally effective embodiments.
The present invention provides a dynamic coordination and control architecture for network site testing within a distributed processing platform that utilizes a plurality of network-connected client devices. The client systems are configured to run a client agent program and project modules for the testing of network sites or other distributed project activities. In addition to project work units, these client devices can receive poll communications that are used during project operations to control, manage and coordinate the project activities of the distributed devices. In addition, if desired, a separate poll server system can be dedicated to handling the poll communications and coordination and control operations with the participating distributed devices during test operation, thereby allowing other server tasks to be handled by other distributed processing server systems. Once the tests are complete, the results can be collected and reported.
Example embodiments for the coordination and control architecture of the present invention, including a poll server, are described with respect to
As described in the co-pending applications, distributed processing systems according to the present invention may identify the capabilities of distributed devices connected together through a wide variety of communication systems and networks and then utilize these capabilities to accomplish network site testing objectives of the present invention. For example, distributed devices connected to each other through the Internet, an intranet network, a wireless network, home networks, or any other network may provide any of a number of useful capabilities to third parties once their respective capabilities are identified, organized, and managed for a desired task. These distributed devices may be connected personal computer systems (PCs), internet appliances, notebook computers, servers, storage devices, network attached storage (NAS) devices, wireless devices, hand-held devices, or any other computing device that has useful capabilities and is connected to a network in any manner. The present invention further contemplates providing an incentive, which may be based in part upon capabilities of the distributed devices, to encourage users and owners of the distributed devices to allow the capabilities of the distributed devices to be utilized in the distributed parallel processing system of the present invention.
The number of usable distributed devices contemplated by the present invention is preferably very large. Unlike a small local network environment, for example, which may include fewer than 100 interconnected computer systems, the present invention preferably utilizes a multitude of widely distributed devices to provide a massively distributed processing system. With respect to the present invention, a multitude of distributed devices refers to greater than 1,000 different distributed devices. With respect to the present invention, widely distributed devices refers to a group of interconnected devices of which at least two are physically located at least 100 miles apart. With respect to the present invention, a massively distributed processing system is one that utilizes a multitude of widely distributed devices. The Internet is an example of an interconnected system that includes a multitude of widely distributed devices. An intranet system at a large corporation is an example of an interconnected system that includes a multitude of distributed devices, and if multiple corporate sites are involved, may include a multitude of widely distributed devices. A distributed processing system according to the present invention that utilizes such a multitude of widely distributed devices, as are available on the Internet or in a large corporate intranet, is a massively distributed processing system according to the present invention.
Looking now to
It is noted that the client systems 108, 110 and 112 represent any number of systems and/or devices that may be identified, organized and utilized by the server systems 104 to accomplish a desired task, for example, personal computer systems (PCs), internet appliances, notebook computers, servers, storage devices, network attached storage (NAS) devices, wireless devices, hand-held devices, or any other computing device that has useful capabilities and is connected to a network in any manner. The server systems 104 represent any number of processing systems that provide the function of identifying, organizing and utilizing the client systems to achieve the desired tasks.
The incentives provided by the incentives block 126 may be any desired incentive. For example, the incentive may be a sweepstakes in which entries are given to client systems 108, 110 . . . 112 that are signed up to be utilized by the distributed processing system 100. Other example incentives are reward systems, such as airline frequent-flyer miles, purchase credits and vouchers, payments of money, monetary prizes, property prizes, free trips, time-share rentals, cruises, connectivity services, free or reduced cost Internet access, domain name hosting, mail accounts, participation in significant research projects, achievement of personal goals, or any other desired incentive or reward.
As indicated above, any number of other systems may also be connected to the network 102. The element 106, therefore, represents any number of a variety of other systems that may be connected to the network 102. The other systems 106 may include ISPs, web servers, university computer systems, and any other distributed device connected to the network 102, for example, personal computer systems (PCs), internet appliances, notebook computers, servers, storage devices, network attached storage (NAS) devices, wireless devices, hand-held devices, or any other connected computing device that has useful capabilities and is connected to a network in any manner. The customer systems 152 represent customers that have projects for the distributed processing system, as further described with respect to FIG. 1B. The customer systems 152 connect to the network 102 through the communication link 119.
It is noted that the communication links 114, 116, 118, 119, 120 and 122 may allow for communication to occur, if desired, between any of the systems connected to the network 102. For example, client systems 108, 110 . . . 112 may communicate directly with each other in peer-to-peer type communications. It is further noted that the communication links 114, 116, 118, 119, 120 and 122 may be any desired technique for connecting into any portion of the network 102, such as Ethernet connections, wireless connections, ISDN connections, DSL connections, modem dial-up connections, cable modem connections, fiber optic connections, direct T1 or T3 connections, routers, portal computers, as well as any other network or communication connection. It is also noted that there are any number of possible configurations for the connections for network 102, according to the present invention. The client system 108 may be, for example, an individual personal computer located in someone's home and may be connected to the Internet through an Internet Service Provider (ISP). Client system 108 may also be a personal computer located on an employee's desk at a company that is connected to an intranet through a network router and then connected to the Internet through a second router or portal computer. Client system 108 may further be a personal computer connected to a company's intranet, and the server systems 104 may also be connected to that same intranet. In short, a wide variety of network environments are contemplated by the present invention on which a large number of potential client systems are connected.
It is noted, therefore, that the capabilities for client systems 108, 110 . . . 112 may span the entire range of possible computing, processing, storage and other sub-systems or devices that are connected to a system connected to the network 102. For example, these subsystems or devices may include: central processing units (CPUs), digital signal processors (DSPs), graphics processing engines (GPEs), hard drives (HDs), memory (MEM), audio sub-systems (ASs), communications subsystems (CSs), removable media types (RMs), and other accessories with potentially useful unused capabilities (OAs). In short, for any given computer system connected to a network 102, there exists a variety of capabilities that may be utilized by that system to accomplish its direct tasks. At any given time, however, only a fraction of these capabilities are typically used on the client systems 108, 110 . . . 112.
As indicated above, to encourage owners or users of client systems to allow their system capabilities to be utilized by control system 104, an incentive system may be utilized. This incentive system may be designed as desired. Incentives may be provided to the user or owner of the client systems when the client system is signed up to participate in the distributed processing system, when the client system completes a workload for the distributed processing system, or at any other time during the process. In addition, incentives may be based upon the capabilities of the client systems, based upon a benchmark workload that provides a standardized assessment of the capabilities of the client systems, or based upon any other desired criteria.
Security subsystems and interfaces may also be included to provide for secure interactions between the various devices and systems of the distributed processing system 100. The security subsystems and interfaces operate to secure the communications and operations of the distributed processing system. This security subsystem and interface also represents a variety of potential security architectures, techniques and features that may be utilized. This security may provide, for example, authentication of devices when they send and receive transmissions, so that a sending device verifies the authenticity of the receiving device and/or the receiving device verifies the authenticity of the sending device. In addition, this security may provide for encryption of transmissions between the devices and systems of the distributed processing system. The security subsystems and interfaces may also be implemented in a variety of ways, including utilizing security subsystems within each device or security measures shared among multiple devices, so that security is provided for all interactions of the devices within the distributed processing system. In this way, for example, security measures may be set in place to make sure that no unauthorized entry is made into the programming or operations of any portion of the distributed processing system including the client agents.
As discussed above, each client system includes a client agent that operates on the client system and manages the workloads and processes of the distributed processing system. As shown in
Also as discussed above, security subsystems and interfaces may be included to provide for secure interactions between the various devices and systems of the distributed processing system 100. As depicted in
In operation, client systems or end-users may utilize the clients subsystem 1548 within the web interface 1554 to register, set user preferences, check statistics, check sweepstakes entries, or accomplish any other user interface option made available, as desired. Advertising customers may utilize the advertisers subsystem 1552 within the web interface 1554 to register, add or modify banner or other advertisements, set up rules for serving advertisements, check advertising statistics (e.g., click statistics), or accomplish any other advertiser interface option made available, as desired. Customers and their respective task or project developers may utilize the task developer subsystem 1550 to access information within database systems 1546 and modules within the server systems 104, such as the version/phase control subsystem 1528, the task module and work unit manager 1530, and the workload information 308. Customers may also check project results, add new work units, check defect reports, or accomplish any other customer or developer interface option made available, as desired.
Advantageously, the customer or developer may provide the details of the project to be processed, including specific program code and algorithms that will process the data, in addition to any data to be processed. In the embodiment shown in
Information sent from the server systems 104 to the client agents 270A, 270B . . . 270C may include task modules, data for work units, and advertising information. Information sent from the client agents 270A, 270B . . . 270C to the server systems 104 may include user information, system information and capabilities, current task module version and phase information, and results. The database systems 1546 may hold any relevant information desired, such as workload information (WL) 308 and client capability vectors (CV) 620. Examples of information that may be stored include user information, client system information, client platform information, task modules, phase control information, version information, work units, data, results, advertiser information, advertisement content, advertisement purchase information, advertisement rules, or any other pertinent information.
It may be expected that different workload projects WL1, WL2 . . . WLN within the workload database 308 may require widely varying processing requirements. Thus, in order to better direct resources to workload projects, the server system may access various system vectors when a client system signs up to provide processing time and other system or device capabilities to the server system. This capability scheduling helps facilitate project operation and completion. In this respect, the capability vector database 620 keeps track of any desired feature of client systems or devices in capability vectors CBV1, CBV2 . . . CBVN, represented by elements 628, 630 . . . 632, respectively. These capability vectors may then be utilized by the control system 304 through line 626 to capability balance workloads.
This capability scheduling according to the present invention, therefore, allows for the efficient management of the distributed processing system of the present invention. This capability scheduling and distribution will help maximize throughput, deliver timely responses for sensitive workloads, calculate redundancy factors when necessary, and in general, help optimize the distributed processing computing system of the present invention. The following TABLE 1 provides lists of capability vectors or factors that may be utilized. It is noted that this list is an example list, and any number of vectors or factors may be identified and utilized, as desired.
TABLE 1
Example Client Capability Vectors or Factors
1. BIOS Support:
  a. BIOS Type (brand)
  b. ACPI
  c. S1, S2, S3, and S4 sleep/wake states
  d. D1, D2 and D3 ACPI device states
  e. Remote Wake Up Via Modem
  f. Remote Wake Up Via Network
  g. CPU Clock control
  h. Thermal Management control
  i. Docked/Unlocked state control
  j. APM 1.2 support
  k. Hotkey support
  l. Resume on Alarm, Modem Ring and LAN
  m. Password Protected Resume from Suspend
  n. Full-On power mode
  o. APM/Hardware Doze mode
  p. Stand-by mode
  q. Suspend to DRAM mode
  r. Video Logic Power Down
  s. HDD, FDD and FDC Power Down
  t. Sound Chip Power Down
  u. Super I/O Chip Power Down
2. CPU Support:
  a. CPU Type (brand)
  b. MMX instruction set
  c. SIMD instruction set
  d. WNI instruction set
  e. 3DNow instruction set
  f. Other processor dependent instruction set(s)
  g. Raw integer performance
  h. Raw FPU performance
  i. CPU L1 data cache size
  j. CPU L1 instruction cache size
  k. CPU L2 cache size
  l. CPU speed (MHz/GHz . . . )
  m. System bus (MHz/GHz . . . ) speed supported
  n. Processor Serial Number
  o. CPUID
3. Graphic Support
  a. Graphics type (brand)
  b. # of graphics engines
  c. Memory capacity
  d. OpenGL support
  e. Direct3D/DirectX support
  f. Color depth supported
  g. MPEG 1/II decode assist
  h. MPEG1/II encode assist
  i. OS support
  j. Rendering type(s) supported
  k. Single-Pass Multitexturing support
  l. True Color Rendering
  m. Triangle Setup Engine
  n. Texture Cache
  o. Bilinear/Trilinear Filtering
  p. Anti-aliasing support
  q. Texture Compositing
  r. Texture Decompression
  s. Perspectively Correct Texture Mapping
  t. Mip-Mapping
  u. Z-buffering and Double-buffering support
  v. Bump mapping
  w. Fog effects
  x. Texture lighting
  y. Video texture support
  z. Reflection support
  aa. Shadows support
4. Storage Support
  a. Storage Type (brand)
  b. Storage Type (fixed, removable, etc.)
  c. Total storage capacity
  d. Free space
  e. Throughput speed
  f. Seek time
  g. User dedicated space for current
  h. SMART capable
5. System
  a. System Type (brand)
  b. System form factor (desktop, portable, workstation, server, etc.)
6. Communications Support
  a. Type of Connection (brand of ISP)
  b. Type of Connection Device (brand of hardware)
  c. Hardware device capabilities
  d. Speed of connection
  e. Latency of connection
  f. Round trip packet time of connection
  g. Number of hops on connection type
  h. Automatic connection support (yes/no)
  i. Dial-up only (yes/no)
  j. Broadband type (brand)
  k. Broadband connection type (DSL/Sat./Cable/T1/Intranet/etc.)
7. Memory
  a. Type of memory error correction (none, ECC, etc.)
  b. Type of memory supported (EDO, SDRAM, RDRAM, etc.)
  c. Amount of total memory
  d. Amount of free memory
  e. Current virtual memory size
  f. Total available virtual memory size
8. Operating System
  a. Type of operating system (brand)
  b. Version of operating system
  c. Health of operating system
9. System application software
  a. Type of software loaded and/or operating on system
  b. Version of software
  c. Software features enabled/disabled
  d. Health of software operation
This capability scheduling and management based upon system related vectors allows for efficient use of resources. For example, utilizing the operating system or software vectors, workloads may be scheduled or managed so that desired hardware and software configurations are utilized. This scheduling based upon software vectors may be helpful because different software versions often have different capabilities. For example, various additional features and services are included in MICROSOFT WINDOWS '98 as compared with MICROSOFT WINDOWS '95. Any one of these additional functions or services may be desired for a particular workload that is to be hosted on a particular client system device. Software and operating system vectors also allow for customers to select a wide variety of software configurations on which the customers may desire a particular workload to be run. These varied software configurations may be helpful, for example, where software testing is desired. Thus, the distributed processing system of the present invention may be utilized to test new software, data files, Java programs or other software on a wide variety of hardware platforms, software platforms and software versions. For example, a Java program may be tested on a wide proliferation of JREs (Java Runtime Engines) associated with a wide variety of operating systems and machine types, such as personal computers, handheld devices, etc.
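By way of illustration only, the following Python sketch (not part of the original disclosure) shows one simple way capability vectors could be used to select client systems whose hardware and software configurations match a workload's requirements; the attribute names and data structures are assumptions made for the example.

def matches(requirements, capability_vector):
    # True only if the client reports the exact value required for every attribute.
    return all(capability_vector.get(attr) == value
               for attr, value in requirements.items())

def select_clients(requirements, capability_db):
    # Return the identifiers of client systems whose vectors satisfy the requirements.
    return [client_id for client_id, vector in capability_db.items()
            if matches(requirements, vector)]

if __name__ == "__main__":
    capability_db = {
        "client-1": {"os": "Win98", "cpu": "AMD Athlon"},
        "client-2": {"os": "Win95", "cpu": "Intel Pentium III"},
        "client-3": {"os": "Win98", "cpu": "Intel Pentium III"},
    }
    # For example, a customer may want a workload hosted only on Windows 98 systems.
    print(select_clients({"os": "Win98"}, capability_db))

In practice the matching could of course allow ranges or minimums (for example, a minimum amount of free memory) rather than exact values.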
From the customer system perspective, the capability management and the capability database, as well as information concerning users of the distributed devices, provide a vehicle through which a customer may select particular hardware, software, user or other configurations, in which the customer is interested. In other words, utilizing the massively parallel distributed processing system of the present invention, a wide variety of selectable distributed device attributes, including information concerning users of the distributed devices, may be provided to a customer with respect to any project, advertising, or other information or activity a customer may have to be processed or distributed.
For example, a customer may desire to advertise certain goods or services to distributed devices that have certain attributes, such as particular device capabilities or particular characteristics for users of those distributed devices. Based upon selected attributes, a set of distributed devices may be identified for receipt of advertising messages. These messages may be displayed to a user of the distributed device through a browser, the client agent, or any other software that is executing either directly or remotely on the distributed device. Thus, a customer may target particular machine specific device or user attributes for particular advertising messages. For example, users with particular demographic information may be targeted for particular advertisements. As another example, the client agent running on client systems that are personal computers may determine systems that are suffering from numerous page faults (i.e., through tracking operating system health features such as the number of page faults). High numbers of page faults are an indication of low memory. Thus, memory manufacturers could target such systems for memory upgrade banners or advertisements.
Still further, if a customer desires to run a workload on specific device types, specific hardware platforms, specific operating systems, etc., the customer may then select these features and thereby select a subset of the distributed client systems on which to send a project workload. Such a project might arise, for example, if a customer wanted to run a first set of simulations on personal computers with AMD ATHLON microprocessors and a second set of simulations on personal computers with INTEL PENTIUM III microprocessors. Alternatively, if a customer is not interested in particular configurations for the project, the customer may simply request any random number of distributed devices to process its project workloads.
Customer pricing levels for distributed processing may then be tied, if desired, to the level of specificity desired by a particular customer. For example, a customer may contract for a block of 10,000 random distributed devices for a base amount. The customer may later decide for an additional or different price to utilize one or more capability vectors in selecting a number of devices for processing its project. Further, a customer may request that a number of distributed devices be dedicated solely to processing its project workloads. In short, once device attributes, including device capabilities and user information, are identified, according to the present invention, any number of customer offerings may be made based upon the device attributes for the connected distributed devices. It is noted that to facilitate use of the device capabilities and user information, capability vectors and user information may be stored and organized in a database, as discussed above.
Referring now to
As shown in
Site testing is typically desired to determine how a site or connected service performs under any desired set of test circumstances. With the distributed processing system of the present invention, site performance testing may be conducted using any number of real client systems 108, 110 and 112, rather than the simulated activity that is currently available. Several tests that are commonly desired are site load tests and quality of service (QoS) tests. Quality of service (QoS) testing refers to testing a user's experience accessing a network site under normal usability situations. Load testing refers to testing what a particular network site's infrastructure can handle in user interactions. An extreme version of load testing is a denial-of-service attack, where a system or group of systems intentionally attempts to overload and shut down a network site. Advantageously, the current invention will have actual systems testing network web sites, as opposed to the simulated tests of which others in the industry are capable and which yield inaccurate and approximate results.
Network site 106B and the multiple interactions represented by communication lines 116B, 116C and 116D are intended to represent a load testing environment. Network site 106A and the single interaction 116A are indicative of a user interaction or QoS testing environment. It is noted that load testing, QoS testing and any other site testing may be conducted with any desired number of interactions from client systems, and the timing of those interactions may be manipulated and controlled to achieve any desired testing parameters. It is further noted that periodically new load and breakdown statistics will be provided for capacity planning.
Looking first to
As discussed above, the server systems 104 can be connected to and configured to utilize a variety of databases, as desired. These databases can also store information, as needed, that is related to the dynamic coordination and control of tasks and results data. In the embodiment of
The poll server 502 is provided to allow the control server 504 to off-load much of its management tasks for site testing activities during operation of the tests on the participating client systems. As shown in the example embodiment of
The project information and project control information can take any of a variety of forms depending upon the nature of the project being run and the nature of the management and scheduling control desired. For example, as part of the initial project setup or control information provided to the client systems, the client systems can be given poll parameters, such as a poll period, a test start time and a test end time. The poll period refers to information that determines when the client system will communicate with the poll server 502. For example, the poll period information can define a regular time interval, scheduled times, or defined times at which the client systems communicate with the poll server 502 to provide project information such as status of the project on the client system, partial result data, local clock information, or any other desired project-related data or information that may be utilized by the poll server 502 to help manage and coordinate the project operations of the various different client systems. If the poll period is zero, the client system can simply run the project from its start time to finish time without polling the poll server 502. The poll server 502 can send back information such as clock synchronization information, project instructions, poll period changes, or any other desired instructions or information, as desired, to manage and coordinate the activities of the client systems conducting the project processing.
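The client-side behavior just described can be pictured with the following minimal sketch, assuming hypothetical callables (run_step, send_poll) and parameter names; it is an illustration only and not the actual client agent implementation.

import time

def run_project(poll_period, start_time, end_time, run_step, send_poll):
    # Wait for the scheduled test start time.
    time.sleep(max(0.0, start_time - time.time()))
    next_poll = time.time() + poll_period
    while time.time() < end_time:
        run_step()  # perform one unit of project work, e.g., one pass of a test script
        if poll_period > 0 and time.time() >= next_poll:
            # Report status, partial results and local clock to the poll server,
            # then apply whatever instructions come back.
            reply = send_poll({"status": "running", "clock": time.time()})
            poll_period = reply.get("poll_period", poll_period)
            end_time = reply.get("end_time", end_time)
            if reply.get("stop"):
                break
            next_poll = time.time() + poll_period
    # With a poll period of zero, the loop simply runs from start time to end time
    # without ever contacting the poll server.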
A control interface 509 can also be provided. The control interface 509 allows someone formulating and running a project to communicate through link 511 with the control server 504 and the poll server 502. The control interface 509 can provide a variety of functional controls and information to a user of the interface, such as coordination tools, project overview information, project processing status, project snapshot information during project operations, or other desired information and/or functional controls. For example, with respect to a network site testing project, a tester can use this interface 509 to create the test scripts that are included within the work units sent to client systems participating in the test and to set and adjust the poll parameters that are to be used by each client system. The control interface 509 is also used over the duration of the test to view dynamic snapshot information about the current state of the test, including the load on the system, and to use this information to modify test activities, such as the number of active clients participating in the test. The broken line 507 represents a demarcation between the servers 502 and 504 and the interface 509. It is noted that the interface 509 could take any of a variety of forms and that the interface 509 can be remote or disconnected from the server systems 104 (which in
Looking back to
If the poll period is greater than zero, then the client agent running the test project code will poll the poll server 502 at periodic intervals. The poll communications that are received from the client systems in block 562 can include a wide variety of information, as desired. These client system communications, for example, can provide information about the current project operations of the client systems and partial test results for the project. In response to the poll communications from the client systems, the poll server 502 can modify test, load and poll parameters as desired in block 564 to manage, control and coordinate the test activities of the client systems. In decision block 560, the determination is made whether the test end time has been reached. If “NO,” then the test continues in block 558. If “YES,” then the test ends in block 566. Test results can then be reported, for example, by being sent from the client systems to the control server 504 for compilation and further processing, as desired. The final results can be stored in a results database 510 and can be provided to the customer that requested or sponsored the site testing project. It is noted that the “load” parameter includes the load on the site under test (SUT), and a change to the load could include increasing or decreasing the number of client systems active in the test project. It is also noted that the poll period can be relatively simple, such as a regular time interval at which the client system communicates with the poll server 502. The poll period could also be more complicated, such as a time interval that changes based upon some condition or criteria, or a communication that occurs after a certain event or events during the test processing, such as each time a test routine is completed. In other words, any of a variety of procedures or algorithms could be utilized, as desired, to set the polling activity of the client systems, and each client system could be set to have unique polling instructions.
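One possible, purely illustrative server-side counterpart to blocks 562 and 564 is sketched below: each poll report is folded into a dynamic snapshot of the load being generated, and the poll response adds or removes active clients so that the measured load tracks the desired load on the site under test. The class name, field names and thresholds are assumptions, not the disclosed implementation.

class TestCoordinator:
    def __init__(self, target_requests_per_sec, max_clients):
        self.target = target_requests_per_sec
        self.max_clients = max_clients
        self.active = set()      # client systems currently generating load
        self.latest = {}         # client id -> last reported request rate

    def handle_poll(self, client_id, report):
        # Update the dynamic snapshot with this client's partial results.
        self.latest[client_id] = report.get("requests_per_sec", 0.0)
        measured_load = sum(self.latest.values())
        # Adjust the number of actively participating clients (block 564).
        if measured_load > 1.1 * self.target and client_id in self.active:
            self.active.discard(client_id)
            return {"active": False}                 # tell this client to go idle
        if measured_load < 0.9 * self.target and len(self.active) < self.max_clients:
            self.active.add(client_id)
            return {"active": True}                  # draft this client into the test
        return {"active": client_id in self.active}  # no change for this client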
As stated above, in one example operation, a goal of the poll server 502 and control server 504 is to coordinate a multitude of clients interconnected over the Internet (or other unbounded network) to conduct a project such as load testing a web site. Some advantageous features of this design are the ability to select clients for the load test based on client characteristics, capabilities, components and attributes, and the ability to dynamically alter the number of active clients actively participating in the test. This is an improvement on the prior techniques where the client systems were typically simulated on a small number of test machines, leading to less accurate results. Other coordinated applications that can use this method of control include measuring the quality of service (QoS) of a site under test.
As shown in
This coordinated testing architecture could be used for other network site testing operations. For example, it can be used for quality of service (QoS) testing, where the typical goal is to be able to measure response times at Internet-connected desktops in order to gauge the user experience when browsing a website (e.g., the site under test (SUT)). The number of active clients selected for QoS testing is typically much smaller than the number for load testing, but the selected active clients are typically spread across the network (e.g., geographically, and by ISP). Each client periodically runs a project workload script making HTTP requests to one or more websites and measures the response times from each. These summarized results are returned to the poll server 502, which aggregates results across all active clients and generates reports for each website being tested. The active clients in this case typically do not, by themselves, add significant load to the SUT. The load on the SUT is the normal load generated by browsing on the Internet. The active clients are merely providing performance measurement data at a wide variety of points across the Internet, and their results tend to provide a true reflection of what a person browsing on his desktop would see when interacting with the SUT. For example, QoS testing can identify performance bottlenecks over time by geography, ISP, machine type, system type or other related factors. For example, a website might be able to determine that response times at night to machines within a major ISP are much longer than the mean response time.
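A rough sketch of the kind of aggregation the poll server might perform on returned QoS measurements is given below; the grouping keys, field names and slow-response threshold are illustrative assumptions only.

from collections import defaultdict
from statistics import mean

def aggregate_qos(reports, slow_threshold_ms=2000.0):
    # Group reported response times by (country, ISP) and flag slow groups.
    groups = defaultdict(list)
    for report in reports:
        groups[(report["country"], report["isp"])].extend(report["response_times_ms"])
    summary = {}
    for key, samples in groups.items():
        avg = mean(samples)
        summary[key] = {"mean_ms": avg, "slow": avg > slow_threshold_ms}
    return summary

if __name__ == "__main__":
    reports = [
        {"country": "US", "isp": "BigISP", "response_times_ms": [300, 450, 2600]},
        {"country": "Canada", "isp": "NorthNet", "response_times_ms": [280, 310]},
    ]
    print(aggregate_qos(reports))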
There are a number of advantages provided by the poll server architecture of the present invention. For example, where the network is the Internet, it is expected that the clients on the Internet are non-dedicated resources. Thus, there is desirably a mechanism to keep track of the current state of each client system. This task is difficult to accomplish in an efficient and reasonable manner by the dispatch or control server alone, which is also responsible for scheduling distributed computing work to all other clients in the distributed computing network. One method for getting the state of a client machine is to have a listening port on the client, which is queried by the server to get status information. In other words, instead of the client system polling the poll server as indicated above, the poll server could initiate contact with each client system. However, due to the reluctance of information technology managers, individual PC owners, and others who control client systems to have open ports on their machines, the alternative where the client system periodically communicates with the poll server to send summary status information and to receive test instructions is likely a method that is more widely acceptable. It is noted that the poll server 502 and the dispatch/control server 504 can each be one or more server systems that operate to perform desired functions in the dynamic coordination and control architecture. It is also again noted that the poll server 502 and control server 504 could be combined if desired into a single server system or set of systems that handles both roles. However, this would likely lead to a more inefficient operation of the overall distributed processing system.
As discussed above, a poll server 502 can be used to offload the polling connections from the main server 504. (The poll requests can be short, unencrypted, unauthenticated, single-turnaround requests from the client agent running on each client system.) Without the separate poll server, there are communication requirements that would likely reduce the performance of the distributed computing platform, for example, the number of database queries that can be handled at a given time and the number of connected client systems at a given time. This architecture of the present invention helps to improve performance by offloading the work of handling agent poll requests to another server. It is noted, however, that the present invention could still be utilized without offloading the polling functions, if this were desired. In general, the polling server 502 can be designed to open a single connection to a database to retrieve information about active schedex records. Periodically, the poll server 502 can use this database connection to refresh and update current running count information. On each agent poll request, the poll server 502 uses data structures in memory to determine whether the client system should start, stop, or terminate.
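The per-poll decision described in this paragraph might look roughly like the following sketch, in which the record layout, field names and the fetch function are all assumed for illustration; the actual poll server is not limited to this form.

import time

class PollState:
    def __init__(self, fetch_active_schedex):
        # fetch_active_schedex hides the single database connection mentioned above.
        self.fetch_active_schedex = fetch_active_schedex
        self.records = {}    # schedex id -> {"start_time", "end_time", "target_clients", ...}
        self.running = {}    # schedex id -> current running count

    def refresh(self):
        # Periodically re-read active schedex records and running counts.
        self.records = self.fetch_active_schedex()
        self.running = {sid: rec.get("running", 0) for sid, rec in self.records.items()}

    def answer_poll(self, schedex_id, client_is_running):
        # Answer an agent poll request entirely from in-memory state.
        rec = self.records.get(schedex_id)
        now = time.time()
        if rec is None or now >= rec["end_time"]:
            return "terminate"                                   # project gone or ended
        if client_is_running:
            if self.running[schedex_id] > rec["target_clients"]:
                self.running[schedex_id] -= 1
                return "stop"                                    # too many active clients
            return "continue"
        if now >= rec["start_time"] and self.running[schedex_id] < rec["target_clients"]:
            self.running[schedex_id] += 1
            return "start"                                       # draft this client
        return "idle"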
The client systems can make the polling connection to the server using TCP. However, UDP could be utilized to reduce the overhead inherent in TCP connection establishment. If the agent has a proxy configured, however, then UDP will likely not work. Otherwise, UDP could be tried, and if no response were received, TCP could be used as a fall back communication protocol. When the agent receives a new schedex record, one of the attributes can be the address of a polling server where the client will send poll requests. If this is not specified, the agent can fall back to using the main server address. It is noted, however, that in the latter case a different port would preferably be utilized on the main server, because the polling server function is best viewed as a separate process from the main server function.
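The transport choice described above could be realized along the lines of the following sketch, which tries a single UDP turnaround first and falls back to TCP when a proxy is configured or no UDP reply arrives; the message framing and timeout values are assumptions.

import socket

def send_poll_request(host, port, payload, udp_timeout=2.0, proxy_configured=False):
    if not proxy_configured:
        try:
            with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
                s.settimeout(udp_timeout)
                s.sendto(payload, (host, port))
                reply, _ = s.recvfrom(65535)
                return reply                       # cheap single-turnaround UDP poll
        except socket.timeout:
            pass                                   # no UDP reply; fall back to TCP
    with socket.create_connection((host, port), timeout=10.0) as s:
        s.sendall(payload)
        s.shutdown(socket.SHUT_WR)                 # mark end of the request
        chunks = []
        while True:
            chunk = s.recv(4096)
            if not chunk:
                break
            chunks.append(chunk)
        return b"".join(chunks)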
In a more generalized environment, where the server systems include multiple dispatch servers, each responsible for a different set of project applications, the poll server could have a broader function of tracking outstanding messages for delivery to clients the next time they contact the poll server. Periodic polling by the client systems can improve the responsiveness of the system. For example, if the person conducting the test stops a project currently running on the distributed computing system, the poll server can obtain a list of all client systems processing work on behalf of the project and its workloads and can instruct these client systems to stop the currently executing workload and return to the dispatch server to get a new piece of work. In addition, high priority jobs entering the system can be immediately serviced by having the poll server draft clients from a client system resource pool by issuing a preempt call to the client at the next poll. This preempt call would preempt all pending work being done by the client system and would start operation of the high priority job on the selected client systems.
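The message-tracking and preemption behavior just described is sketched below under assumed names and data shapes; it simply queues instructions per client for delivery at that client's next poll and is provided for illustration only.

from collections import defaultdict

class MessageTracker:
    def __init__(self):
        self.outstanding = defaultdict(list)   # client id -> messages awaiting next poll
        self.working_on = defaultdict(set)     # project id -> client ids processing it

    def stop_project(self, project_id):
        # Tell every client processing this project to stop its current workload
        # and return to the dispatch server for new work.
        for client_id in self.working_on.get(project_id, set()):
            self.outstanding[client_id].append({"type": "stop", "project": project_id})

    def preempt_for(self, project_id, clients_needed, resource_pool):
        # Draft clients from the resource pool for a high-priority job; the preempt
        # is delivered at each drafted client's next poll.
        drafted = list(resource_pool)[:clients_needed]
        for client_id in drafted:
            self.outstanding[client_id].append({"type": "preempt", "project": project_id})
        return drafted

    def messages_for(self, client_id):
        # Deliver and clear the messages queued for a polling client.
        messages, self.outstanding[client_id] = self.outstanding[client_id], []
        return messages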
To further describe the dynamic coordination and control architecture of the present invention (referred to below in relation to a scheduled execution (schedex) project), example polling procedures, poll communications, initialization parameters, test parameters, management, coordination and control procedures and associated function calls are now discussed.
A scheduled execution (schedex) project can also have associated with it a variety of polling and related test parameters, such as a startup start time, a startup end time, an end time, a poll period, and a number of client systems (hosts) to cue, as reflected in the schedex creation example below.
A scheduled execution project can further define client type quotas for the number of cued client systems possessing particular attribute values. The attribute types can include any of a variety of client capabilities, attributes and components as discussed above. For example, with respect to personal computers, the attributes can include geographic location such as country, device operating system, and downstream bandwidth. The client system type quotas can be used to limit the client systems to which the server systems distribute the scheduled execution project. For each quota, the server system can maintain a counter of the number of client systems with that attribute that have been cued so far to participate in the particular scheduled execution project. Client systems can be considered in a non-deterministic order. For each client system, the UD server checks whether the counters for the client system's particular attributes are less than the corresponding quotas. If so, the scheduled execution project is cued on that client system. These selection parameters can be used to accomplish various goals. Some examples are provided below.
For example, suppose that the number of client systems (or hosts) to cue is 1000, such that nhosts_cue=1000.
If the tester wants at least 50% of the hosts to be from Canada, the following could be used:
<attr_type=“country”, value=“Canada”, quota=1000>
<attr_type=“country”, value=“*”, quota=500>
If exactly 50% each from Canada and Poland is desired, the following could be used:
<attr_type=“country”, value=“Canada”, quota=500>
<attr_type=“country”, value=“Poland”, quota=500>
<attr_type=“country”, value=“*”, quota=0>
If, in addition, only Windows computers are desired, the following could be used:
<attr_type=“country”, value=“Canada”, quota=500>
<attr_type=“country”, value=“Poland”, quota=500>
<attr_type=“country”, value=“*”, quota=0>
<attr_type=“OS”, value=“Win95”, quota=1000>
<attr_type=“OS”, value=“Win98”, quota=1000>
<attr_type=“OS”, value=“WinNT”, quota=1000>
<attr_type=“OS”, value=“*”, quota=0>
It is noted that the above parameter system may not be able to express some requirements, such as a requirement that at least 25% of the clients are from one country and at least 25% are from another. However, if desired, additional execution parameters could be added to provide such capability. It is also noted that the client system type quotas discussed above may be designed such that they affect the set of hosts on which the scheduled execution project is cued and not the hosts on which the project actually runs. For example, client systems could be chosen to run the scheduled execution project essentially randomly, so the properties of the set of running hosts will generally approximate those of the set of cued hosts; however, they may not match exactly. There may be exceptions; for example, if the scheduled execution project is scheduled at a time when most hosts in Poland are turned off, the fraction of running Polish hosts may be smaller than desired.
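The quota check described above can be illustrated with the following minimal C++ sketch. The fallback to the “*” wildcard entry and the data layout are assumptions made for illustration; the specification requires only that per-attribute counters be compared against the corresponding quotas.
#include <map>
#include <string>
#include <vector>

struct QuotaEntry { int quota = 0; int ncued = 0; };

// quotas[attr_type][value] holds the quota and the count of clients cued so far.
using QuotaTable = std::map<std::string, std::map<std::string, QuotaEntry>>;

// Returns true (and bumps the matching counters) if a client with the given
// attribute values may be cued for the scheduled execution project.
bool tryCueClient(QuotaTable& quotas,
                  const std::map<std::string, std::string>& clientAttrs)
{
    std::vector<QuotaEntry*> matched;
    for (auto& [attrType, values] : quotas) {
        std::string value = "*";                        // default to the wildcard entry
        auto attrIt = clientAttrs.find(attrType);
        if (attrIt != clientAttrs.end() && values.count(attrIt->second))
            value = attrIt->second;                     // a specific quota exists for this value
        auto valIt = values.find(value);
        if (valIt == values.end()) continue;            // no quota constrains this attribute type
        if (valIt->second.ncued >= valIt->second.quota)
            return false;                               // this quota is already full
        matched.push_back(&valIt->second);
    }
    for (QuotaEntry* e : matched) e->ncued++;           // accepted: count the client against each quota
    return true;
}
Under this sketch, with the “exactly 50% each from Canada and Poland” quotas shown above, a host from France would fall through to the country wildcard entry, whose quota of zero is already met, and would therefore not be cued.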
The control or console interface 509, which can be an Internet web interface, can be configured to allow a variety of tasks, including (1) creating, editing and activating a scheduled execution project, (2) controlling a scheduled execution project while it is running by viewing and adjusting the number of clients running the scheduled execution project (if polling by client systems is implemented, these adjustments will likely have a certain lag time, associated with the poll period, before they go into effect), and (3) marking a scheduled execution project as “completed” to stop operation on all running clients. Alternatively, the same operations can be made available as HTTP RPCs (Remote Procedure Calls).
The scheduled execution architecture of the present invention lends itself to a variety of implementations. Example implementation and operation details are provided below with respect to function calls and operations that may be utilized to realize the present invention.
Create a schedex
<schedex_create>
<task name=“foo”/>
<schedex_name value=“foo”/>
<phase value=“1”/>
<wuid value=“23”/>
<startup_start_time value=“123456”/>
<startup_end_time value=“12345”/>
<end_time value=“12345”/>
<poll_period value=“44”/>
<nhosts_cue value=“123”/>
<quota attr_type=“country” value=“Poland” quota=“100”/>
<quota attr_type=“country” value=“United States” quota=“100”/>
<quota attr_type=“country” value=“Any” quota=“100”/>
<quota attr_type=“OS” value=“Win95” quota=“100”/>
<quota attr_type=“OS” value=“WinNT” quota=“100”/>
<quota attr_type=“OS” value=“Macintosh” quota=“100”/>
<quota attr_type=“downstream_bandwidth” value=“0_30000” quota=“100”/>
<quota attr_type=“downstream_bandwidth” value=“30000_100000” quota=“100”/>
<quota attr_type=“downstream_bandwidth” value=“100000_” quota=“100”/>
</schedex_create>
It is noted that this is an example operation that creates and activates a scheduled execution project for a given task. Times are given in seconds. The return value “status” is “OK” if the operation succeeded, and otherwise is a description of the error.
Set number of running clients
<schedex_nhosts_set>
<task name=“foo”/>
<schedex name=“foo”/>
<nhosts value=“55”/>
</schedex_nhosts_set>
It is noted that this operation requests a change in the number of clients running the scheduled execution project. If client system polling is utilized, it will typically take up to “poll_period” seconds for this target to be reached. If the number is increased, additional clients (cued but not yet running) are started. If the number is decreased, the application is gracefully terminated on some hosts, creating a result file on each host. If the application is later started on the host, additional result files will be created.
Terminate a schedex
<schedex_terminate>
<task name=“foo”/>
<schedex name=“foo”/>
</schedex_terminate>
It is noted that the scheduled execution project is gracefully terminated on all hosts. In this example, no further operations on the scheduled execution project are allowed. The transfer of result files to the server systems is started.
Get schedex status
Request:
<schedex_status>
<task name=“foo”/>
<schedex name=“foo”/>
</schedex_status>
Reply:
<schedex_status>
<status value=“OK”/>
<nhosts_cued value=“234”/>
<nhosts_running value=“234”/>
<nhosts_available value=“234”/>
</schedex_status>
It is noted that this operation returns the number of clients cued to run the scheduled execution project, the number currently running it, and the number of clients available to run it (i.e., clients that are actively polling the server). The latter two numbers are defined only for a scheduled execution project where client system polling is utilized.
Scheduled Execution (Schedex) Protocol
Regular (<request>) RPCs can include the following item in both requests and replies.
<schedex>
id=n
taskid=n
wuid=n
startup_start_time=n
startup_end_time=n
end_time=n
</schedex>
The client tells the server what schedex workloads are currently cued. The server gives the client new schedex workloads to cue.
Clients with a cued, active polling schedex periodically make the following RPC:
request:
<schedex_poll_request>
schedexid=n
hostid=n
running=n
</schedex_poll_request>
reply:
<schedex_poll_reply>
[ <schedex_start> ]
[ <schedex_stop> ]
[ <schedex_terminate> ]
</schedex_poll_reply>
It is noted that <schedex_stop/> tells the client to stop a running schedex, <schedex_start/> tells the client to start a cued schedex, and <schedex_terminate/> says to stop a schedex if running and delete it.
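Purely as an illustrative sketch, the agent's handling of these poll reply commands might resemble the following; the Schedex structure and its methods are hypothetical stand-ins for the agent's internal state and are not taken from the specification.
#include <string>

struct Schedex {
    bool running = false;
    void startWorkload()    { /* launch the cued application module */ }
    void stopWorkload()     { /* graceful stop; writes a result file */ }
    void removeAndCleanup() { /* delete the schedex; drop its workunit if unreferenced */ }
};

void handlePollReply(Schedex& sx, const std::string& command)
{
    if (command == "schedex_start" && !sx.running) {
        sx.startWorkload();
        sx.running = true;
    } else if (command == "schedex_stop" && sx.running) {
        sx.stopWorkload();
        sx.running = false;
    } else if (command == "schedex_terminate") {
        if (sx.running) sx.stopWorkload();
        sx.removeAndCleanup();
    }
    // An empty reply means: keep doing whatever is currently being done.
}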
Database
The schedex table, in addition to the schedex attributes, can include the following:
struct SCHEDEX {
    ...
    int ncued;             // how many hosts are cued
    int nrunning_target;   // how many hosts we want to be running
};
The schedex_host table stores hosts on which the schedex is cued.
struct SCHEDEX_HOST {
    int hostid;
    int schedexid;
    double poll_deadline;  // if we don't get a poll RPC before this time,
                           // assume the host is not running
    int is_running;        // whether the host is running the app module
};
It is noted that the number of running clients can be found by counting the number of records with the “is_running” flag set.
The schedex_quota table stores quotas:
struct SCHEDEX_QUOTA {
    int id;
    int schedexid;
    int attr_type;
    char value[64];
    int quota;
    int ncued;
};
Server
The server maintains in-memory copies of the schedex and schedex_quota tables.
GLOBALS::check_schedex(CLIENT_CONN& cc)
When the server handles a <request> RPC, and there is a schedex with ncued < nhosts_cue, and the host is of an eligible type, is not barred by user preferences from running the schedex, does not already have an overlapping schedex, and no quotas are exceeded, the server sends the host that schedex. If the schedex is polling, the server creates a schedex_host record. It updates and reloads the schedex and schedex_quota entries.
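The eligibility test performed by this routine may be sketched as follows; the helper predicates are hypothetical placeholders standing in for the checks named above (host type, user preferences, overlapping schedexes and quotas), and the sketch is not a verbatim implementation of any embodiment.
struct Host       { int id; /* attributes, preferences, existing schedexes, ... */ };
struct SchedexRec { int id; int ncued; int nhosts_cue; bool polling; };

// Hypothetical helpers corresponding to the checks described in the text.
bool hostEligible(const Host&, const SchedexRec&);            // type and user preferences
bool hasOverlappingSchedex(const Host&, const SchedexRec&);
bool quotasExceeded(const Host&, const SchedexRec&);
void sendSchedexToHost(Host&, SchedexRec&);                   // include schedex in the <request> reply
void createSchedexHostRecord(const Host&, const SchedexRec&);
void updateAndReload(SchedexRec&);                            // persist counters, refresh in-memory copy

void checkSchedex(Host& host, SchedexRec& sx)
{
    if (sx.ncued >= sx.nhosts_cue) return;      // enough hosts already cued
    if (!hostEligible(host, sx)) return;
    if (hasOverlappingSchedex(host, sx)) return;
    if (quotasExceeded(host, sx)) return;

    sendSchedexToHost(host, sx);
    sx.ncued++;
    if (sx.polling) createSchedexHostRecord(host, sx);
    updateAndReload(sx);
}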
CLIENT_CONN::handle_schedex_poll()
When a <schedex_poll_request> RPC is received, the server looks up the schedex_host record. If it is not found, the server returns a <schedex_terminate> (this should never happen). If the client is running this module, and the number of running hosts is more than nrunning_target, the server returns a <schedex_stop> and clears the running field in the schedex_host record. Similarly, if the client is not running this module and the number of running hosts is less than nrunning_target, the server returns a <schedex_start> and sets the running field in the schedex_host record. In any case, it updates the “last poll time” field in the DB.
GLOBALS::schedex_timer()
Each server periodically enumerates all schedex_host records with the “running” flag set and with “poll_deadline” < now − poll_period, and clears the “running” flag on those records. When a schedex end_time is reached, each server changes the state to “ended” and clears the “running” flag of all schedex_host records. It is noted that in principle the above tasks could be accomplished by one server, but it may be better for all servers to do them.
Client
The client stores a list of pending schedex workloads in memory and in the core state file. It also may have variables, such as:
int schedex_active;
int schedex_polling;
SCHEDEX active_schedex;
int schedex_running;
double schedex_timer;   // if polling: when to send the next RPC
                        // if nonpolling: when to start
When a polling schedex becomes active, the client sets the polling timer randomly in the interval [now, now + poll_period].
INSTANCE::schedex_timer_func()
The client maintains a polling timer for each active polling schedex. When this reaches zero, it sends a poll RPC. If the schedex remains active, it resets the timer. When a nonpolling schedex becomes active, the client picks a start time randomly in the startup period. When the end time of a schedex is reached, the client stops it (if running) and removes it from the data structure. If no other cued schedex references the same workunit, it removes the workunit.
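One possible realization of this client-side timer logic is sketched below; sendPollRpc is a hypothetical placeholder for issuing the <schedex_poll_request> RPC and applying its reply, and the random offset simply spreads the first polls across the poll period as described above.
#include <cstdlib>

struct ActiveSchedex {
    int id = 0;
    bool active = true;
    double pollPeriod = 60.0;    // seconds, from the schedex parameters
    double nextPollTime = 0.0;   // absolute time of the next poll RPC
};

// Hypothetical placeholder: sends <schedex_poll_request>, applies the reply,
// and returns false if the schedex was terminated.
bool sendPollRpc(ActiveSchedex& sx);

void onSchedexActivated(ActiveSchedex& sx, double now)
{
    // First poll at a random point in [now, now + poll_period].
    double offset = sx.pollPeriod * (std::rand() / (double)RAND_MAX);
    sx.nextPollTime = now + offset;
}

void schedexTimerFunc(ActiveSchedex& sx, double now)
{
    if (!sx.active || now < sx.nextPollTime) return;
    sx.active = sendPollRpc(sx);                              // poll; the reply may terminate the schedex
    if (sx.active) sx.nextPollTime = now + sx.pollPeriod;     // re-arm the timer for the next period
}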
Data Structures
The polling server maintains a list of “active” schedex records and the current number of hosts running that schedex task:
struct SchedexPollInfo {
    SCHEDEX schedex;
    int running_hosts;   // this should be moved into the database SCHEDEX record
    SchedexHostList *host_list;
};
This list is indexed by schedex identification. Schedex records will be added and removed infrequently, but there will be one lookup on this table per poll request.
The SchedexHostList is a list of hosts that are currently running the schedex task. The list consists of records containing the following information:
struct SchedexHostInfo {
    int hostid;
    time_t poll_deadline;
    bool is_running;
};
This list is indexed by host identification. Hosts will be added once during the lifetime of the schedex task, and removed en masse at the end of the schedex. There will be one lookup on this table per poll request.
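As an implementation note, and only by way of example, both indexes can be realized with hash tables so that each poll request costs one schedex-id lookup followed by one host-id lookup; the use of std::unordered_map below is an assumption, since the text requires only that the lists be indexed.
#include <ctime>
#include <unordered_map>

// Re-stated from the structures above, with the lists realized as hash maps.
struct SchedexHostInfo { int hostid; time_t poll_deadline; bool is_running; };
using SchedexHostList = std::unordered_map<int, SchedexHostInfo>;   // keyed by host id

struct SchedexPollInfo {
    // SCHEDEX schedex;              // database record, omitted in this sketch
    int running_hosts = 0;
    SchedexHostList host_list;       // hosts currently known for this schedex
};

using SchedexList = std::unordered_map<int, SchedexPollInfo>;       // keyed by schedex id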
Poll Requests
Each poll request contains the following information:
Schedex id
Host id
Agent's is_running flag
Each poll response can contain zero or one of the following commands: <schedex_start>, <schedex_stop>, or <schedex_terminate>.
On each poll request, the server performs the following sequence of operations:
Look up schedex id in list of schedexes.
If not found then
    Look up schedex record in database
    If not found then
        Return <schedex_terminate> command
    End if
    Add schedex record to list of schedexes
    Set the running_hosts to 0
End if
If the current time is past the schedex end time then
    Return <schedex_terminate> command
End if
Look up the host id in the list of hosts for this schedex
If not found then
    // see note below about validating host id
    Add host record to host list
    Set is_running to the agent's is_running
End if
Update the poll_deadline to the current time plus the grace period multiplier (2 or 3) times the poll_period
If agent is_running != our is_running then
    Set our is_running flag to the same as the agent is_running
    Adjust our running_hosts count up or down one as necessary
End if
If not is_running and running_hosts < nrunning_target then
    Set our is_running true
    Increment running_hosts
    Return <schedex_start>
Else if is_running and running_hosts > nrunning_target then
    Set our is_running false
    Decrement running_hosts
    Return <schedex_stop>
End if
Return empty response
An invariant after this operation is that the running_hosts count for the schedex should match the number of host records where the is_running flag is set.
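By way of a non-limiting example, the poll-request sequence above can be rendered in C++ roughly as follows. The data structures are re-stated in simplified form so the sketch is self-contained, loadSchedexFromDb is a hypothetical database helper, a grace period multiplier of two is assumed, and running_hosts is incremented when a newly added host reports that it is already running so that the stated invariant holds.
#include <ctime>
#include <unordered_map>

enum class PollCommand { None, Start, Stop, Terminate };

struct HostEntry { int hostid; time_t poll_deadline; bool is_running; };
struct SchedexEntry {
    time_t end_time = 0;
    int nrunning_target = 0;
    int running_hosts = 0;
    std::unordered_map<int, HostEntry> host_list;      // keyed by host id
};

// Hypothetical helper: loads the schedex record; returns false if no such record exists.
bool loadSchedexFromDb(int schedexId, SchedexEntry& out);

PollCommand handlePollRequest(std::unordered_map<int, SchedexEntry>& schedexes,
                              int schedexId, int hostId, bool agentIsRunning,
                              int pollPeriod, time_t now)
{
    auto it = schedexes.find(schedexId);
    if (it == schedexes.end()) {
        SchedexEntry entry;
        if (!loadSchedexFromDb(schedexId, entry)) return PollCommand::Terminate;
        entry.running_hosts = 0;
        it = schedexes.emplace(schedexId, entry).first;
    }
    SchedexEntry& sx = it->second;
    if (now > sx.end_time) return PollCommand::Terminate;

    auto hit = sx.host_list.find(hostId);
    if (hit == sx.host_list.end()) {
        hit = sx.host_list.emplace(hostId, HostEntry{hostId, 0, agentIsRunning}).first;
        if (agentIsRunning) sx.running_hosts++;          // keep the invariant when adding a running host
    }
    HostEntry& host = hit->second;

    host.poll_deadline = now + 2 * pollPeriod;           // grace period multiplier of 2 assumed

    if (agentIsRunning != host.is_running) {             // reconcile with the agent's view
        host.is_running = agentIsRunning;
        sx.running_hosts += agentIsRunning ? 1 : -1;
    }
    if (!host.is_running && sx.running_hosts < sx.nrunning_target) {
        host.is_running = true;
        sx.running_hosts++;
        return PollCommand::Start;
    }
    if (host.is_running && sx.running_hosts > sx.nrunning_target) {
        host.is_running = false;
        sx.running_hosts--;
        return PollCommand::Stop;
    }
    return PollCommand::None;
}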
The poll server also runs a background process that periodically (every 10 seconds, or perhaps more often) performs the following operations:
For each schedex in the schedex list
    Read the schedex record from the database to obtain the current nrunning_target
    If the current time is past the schedex end time
        Remove the entire schedex host list
    Else
        For each host in the schedex host list
            If the current time is past the poll_deadline then
                Set is_running to false
                Decrement running_hosts
            End if
        End for
    End if
    Update the running_hosts in the database schedex record
End for
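The background pass can likewise be sketched as follows, reusing the simplified structures from the previous example; readSchedexTarget and writeRunningHosts are hypothetical database helpers, and this sketch only decrements running_hosts for hosts that were actually marked as running.
#include <ctime>
#include <unordered_map>

struct HostEntry { int hostid; time_t poll_deadline; bool is_running; };
struct SchedexEntry {
    time_t end_time = 0;
    int nrunning_target = 0;
    int running_hosts = 0;
    std::unordered_map<int, HostEntry> host_list;
};

// Hypothetical database helpers.
int  readSchedexTarget(int schedexId);                      // current nrunning_target
void writeRunningHosts(int schedexId, int runningHosts);    // write the count back

void backgroundSweep(std::unordered_map<int, SchedexEntry>& schedexes, time_t now)
{
    for (auto it = schedexes.begin(); it != schedexes.end(); ) {
        SchedexEntry& sx = it->second;
        sx.nrunning_target = readSchedexTarget(it->first);  // pick up console adjustments
        if (now > sx.end_time) {
            it = schedexes.erase(it);                       // drop the entire schedex host list
            continue;
        }
        for (auto& entry : sx.host_list) {
            HostEntry& host = entry.second;
            if (now > host.poll_deadline && host.is_running) {
                host.is_running = false;                    // host missed its poll deadline
                sx.running_hosts--;
            }
        }
        writeRunningHosts(it->first, sx.running_hosts);
        ++it;
    }
}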
Further modifications and alternative embodiments of this invention will be apparent to those skilled in the art in view of this description. It will be recognized, therefore, that the present invention is not limited by these example arrangements. Accordingly, this description is to be construed as illustrative only and is for the purpose of teaching those skilled in the art the manner of carrying out the invention. It is to be understood that the forms of the invention herein shown and described are to be taken as the presently preferred embodiments. Various changes may be made in the implementations and architectures for distributed processing. For example, equivalent elements may be substituted for those illustrated and described herein, and certain features of the invention may be utilized independently of the use of other features, all as would be apparent to one skilled in the art after having the benefit of this description of the invention.
Venkatramani, Krishnamurthy, Hubbard, Edward A., Anderson, David P., Adiga, Ashok K., Hewgill, Greg D., Lawson, Jeff A.