The present invention extends to methods, systems, and computer program products for using templates to configure cloud resources. Embodiments of the invention include encapsulating cloud configuration information in an importable/exportable node template. Node templates can also be used to bind groups of nodes to different cloud subscriptions and cloud service accounts. Accordingly, managing the configuration of cloud-based resources can be facilitated through an interface at a (e.g., high performance) computing component. Templates can also specify a schedule for starting/stopping instances running within a resource cloud.
15. A computer-implemented method for configuring access to resources that are available through two distinct clusters of nodes, the method comprising:
storing a first node template which includes a schema that is usable by a customer at a computing component to encapsulate a common configuration for a set of nodes that represent resources provided at a first cluster of nodes;
storing a second node template which includes a schema that is usable by the customer at the computing component to encapsulate a common configuration for another set of nodes representing resources at a second cluster of nodes;
accessing account data established by the customer that permit the customer's use of the resources available on at least the second cluster of nodes;
retrieving the second node template and populating the schema of the second node template with the accessed account data;
populating the schema of the first node template with the common configuration for the set of nodes representing the particular resources of the first cluster of nodes, and populating the schema of the second node template with the common configuration for the other set of nodes representing the particular resources of the second cluster of nodes;
instantiating an interface module which formulates commands for deploying resources at the second cluster of nodes in accordance with the common configuration of the second node template for the account data contained in the second node template, as well as a cluster manager which formulates commands for deploying resources in the first cluster of nodes in accordance with the common configuration of the first node template;
wherein the cluster manager submits the formulated commands for deploying resources in the first cluster of nodes in accordance with the common configuration of the first node template when resources of the first cluster of nodes are to be employed for the customer and wherein the interface module submits the formulated commands for deploying resources at the second cluster of nodes in accordance with the common configuration of the second node template for the account data contained in the second node template when resources at the second cluster of nodes are to be employed for the customer; and
storing both the populated first node template and the populated second node template so that each stored node template can be imported or exported for subsequent use in providing the common configuration contained therein to facilitate deployment of resources when desired by the customer.
8. A computer program product comprising one or more hardware storage devices having stored thereon computer-executable instructions that, when executed at one or more processors, cause a computer system to implement a method for configuring access to resources that are available through two distinct clusters of nodes, the method comprising:
storing a first node template which includes a schema that is usable by a customer at a computing component to encapsulate a common configuration for a set of nodes that represent resources provided at a first cluster of nodes;
storing a second node template which includes a schema that is usable by the customer at the computing component to encapsulate a common configuration for another set of nodes representing resources at a second cluster of nodes;
accessing account data established by the customer that permit the customer's use of the resources available on at least the second cluster of nodes;
retrieving the second node template and populating the schema of the second node template with the accessed account data;
populating the schema of the first node template with the common configuration for the set of nodes representing the particular resources of the first cluster of nodes, and populating the schema of the second node template with the common configuration for the other set of nodes representing the particular resources of the second cluster of nodes;
instantiating an interface module which formulates commands for deploying resources at the second cluster of nodes in accordance with the common configuration of the second node template for the account data contained in the second node template, as well as a cluster manager which formulates commands for deploying resources in the first cluster of nodes in accordance with the common configuration of the first node template;
wherein the cluster manager submits the formulated commands for deploying resources in the first cluster of nodes in accordance with the common configuration of the first node template when resources of the first cluster of nodes are to be employed for the customer and wherein the interface module submits the formulated commands for deploying resources at the second cluster of nodes in accordance with the common configuration of the second node template for the account data contained in the second node template when resources at the second cluster of nodes are to be employed for the customer; and
storing both the populated first node template and the populated second node template so that each stored node template can be imported or exported for subsequent use in providing the common configuration contained therein to facilitate deployment of resources when desired by the customer.
1. A computing system comprising:
at least one processor; and
one or more computer-readable storage devices having stored thereon computer-executable instructions which, when executed by the at least one processor, implement a method for configuring access to resources that are available through two distinct clusters of nodes, the method comprising:
storing a first node template which includes a schema that is usable by a customer at a computing component to encapsulate a common configuration for a set of nodes that represent resources provided at a first cluster of nodes;
storing a second node template which includes a schema that is usable by the customer at the computing component to encapsulate a common configuration for another set of nodes representing resources at a second cluster of nodes;
accessing account data established by the customer that permit the customer's use of the resources available on at least the second cluster of nodes;
retrieving the second node template and populating the schema of the second node template with the accessed account data;
populating the schema of the first node template with the common configuration for the set of nodes representing the particular resources of the first cluster of nodes, and populating the schema of the second node template with the common configuration for the other set of nodes representing the particular resources of the second cluster of nodes;
instantiating an interface module which formulates commands for deploying resources at the second cluster of nodes in accordance with the common configuration of the second node template for the account data contained in the second node template, as well as a cluster manager which formulates commands for deploying resources in the first cluster of nodes in accordance with the common configuration of the first node template;
wherein the cluster manager submits the formulated commands for deploying resources in the first cluster of nodes in accordance with the common configuration of the first node template when resources of the first cluster of nodes are to be employed for the customer and wherein the interface module submits the formulated commands for deploying resources at the second cluster of nodes in accordance with the common configuration of the second node template for the account data contained in the second node template when resources at the second cluster of nodes are to be employed for the customer; and
storing both the populated first node template and the populated second node template so that each stored node template can be imported or exported for subsequent use in providing the common configuration contained therein to facilitate deployment of resources when desired by the customer.
2. The computing system of
storing the second node template subsequent to populating the second node template with the accessed account data so that information for deploying nodes in the one or more remote data center nodes is maintained in a central location.
3. The computing system of
4. The computing system of
5. The computing system of
6. The computing system of
7. The computing system of
9. The computer program product of
storing the second node template subsequent to populating the second node template with the accessed account data so that information for deploying nodes in the one or more remote data center nodes is maintained in a central location.
10. The computer program product of
11. The computer program product of
12. The computer program product of
13. The computer program product of
14. The computer program product of
16. The method of
storing the second node template subsequent to populating the second node template with the accessed account data so that information for deploying nodes in the one or more remote data center nodes is maintained in a central location.
17. The method of
18. The method of
19. The method of
20. The method of
This application is a continuation of U.S. patent application Ser. No. 12/965,550 filed on Dec. 10, 2010, entitled “USING TEMPLATES TO CONFIGURE CLOUD RESOURCES,” which issued as U.S. Pat. No. 8,775,626 on Jul. 8, 2014, and which claims the benefit of and priority to U.S. Provisional Patent Application Ser. No. 61/383,924 filed on Sep. 17, 2010 and entitled “SIMPLIFIED CLOUD RESOURCE CONFIGURATION USING TEMPLATES,” both of which are incorporated herein by reference in their entireties.
Computer systems and related technology affect many aspects of society. Indeed, the computer system's ability to process information has transformed the way we live and work. Computer systems now commonly perform a host of tasks (e.g., word processing, scheduling, accounting, etc.) that prior to the advent of the computer system were performed manually. More recently, computer systems have been coupled to one another and to other electronic devices to form both wired and wireless computer networks over which the computer systems and other electronic devices can transfer electronic data. Accordingly, the performance of many computing tasks is distributed across a number of different computer systems and/or a number of different computing environments.
In some computing environments, an entity builds out an infrastructure and runs applications, such as, for example, Web services, "on-premise" within the infrastructure. In other environments, one entity uses another entity's infrastructure to run applications on behalf of the entity. For example, one entity can run an application on machines in another entity's data center. Running an application in another entity's data center can be referred to as running an application "in the cloud".
When applications are run in the cloud, computing resources and storage resources of the data center are allocated to a user. Data centers providing cloud-based resources typically require an account so that the owner of the cloud resources can bill for resource usage. As such, one desiring to use cloud-based resources can establish an account for that purpose. Once an account is established, setting up a system within the cloud typically includes configuring two components, a service (computing resources) and data (storage resources).
Configuration of service and data in the cloud is typically a manual process, prone to human errors. Further, manual data entry (e.g., of credentials) can also be required to subsequently access computing and storage resources in the cloud. That is, a user is typically required to submit appropriate credentials along with resource access requests. For example, a user may be required to manually submit an appropriate key to access allocated storage resources. Manual data entry can lead to errors preventing access to cloud resources.
Some hybrid environments include using on premise resources and cloud resources. In these environments, a user may configure on premise resources through an established interface using known data formats. However, the interface and data formats may not be usable to configure cloud resources. As such, a user reverts to manually entering data through a separate interface to configure cloud-based resources.
The present invention extends to methods, systems, and computer program products for using templates to configure cloud resources. Account data for a customer that is to use resources in the resource cloud is accessed. A cloud node template is populated with the accessed account data. User commands for deploying resources in the resource cloud are received.
The cloud node template is used to access relevant account data for the customer. A deployment for use in the resource cloud is formulated. The deployment represents the user commands in accordance with the relevant account data for the customer. The deployment is submitted to a cloud management service.
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the invention. The features and advantages of the invention may be realized and obtained by means of the instruments and combinations particularly pointed out in the appended claims. These and other features of the present invention will become more fully apparent from the following description and appended claims, or may be learned by the practice of the invention as set forth hereinafter.
In order to describe the manner in which the above-recited and other advantages and features of the invention can be obtained, a more particular description of the invention briefly described above will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings. Understanding that these drawings depict only typical embodiments of the invention and are not therefore to be considered to be limiting of its scope, the invention will be described and explained with additional specificity and detail through the use of the accompanying drawings in which:
The present invention extends to methods, systems, and computer program products for using templates to configure cloud resources. Account data for a customer that is to use resources in the resource cloud is accessed. A cloud node template is populated with the accessed account data. User commands for deploying resources in the resource cloud are received.
The cloud node template is used to access relevant account data for the customer. A deployment for use in the resource cloud is formulated. The deployment represents the user commands in accordance with the relevant account data for the customer. The deployment is submitted to a cloud management service.
Embodiments of the present invention may comprise or utilize a special purpose or general-purpose computer including computer hardware, such as, for example, one or more processors and system memory, as discussed in greater detail below. Embodiments within the scope of the present invention also include physical and other computer-readable media for carrying or storing computer-executable instructions and/or data structures. Such computer-readable media can be any available media that can be accessed by a general purpose or special purpose computer system. Computer-readable media that store computer-executable instructions are physical storage media. Computer-readable media that carry computer-executable instructions are transmission media. Thus, by way of example, and not limitation, embodiments of the invention can comprise at least two distinctly different kinds of computer-readable media: computer storage media (devices) and transmission media.
Computer storage media (devices) includes RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer.
A “network” is defined as one or more data links that enable the transport of electronic data between computer systems and/or modules and/or other electronic devices. When information is transferred or provided over a network or another communications connection (either hardwired, wireless, or a combination of hardwired or wireless) to a computer, the computer properly views the connection as a transmission medium. Transmission media can include a network and/or data links which can be used to carry or store desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer. Combinations of the above should also be included within the scope of computer-readable media.
Further, upon reaching various computer system components, program code means in the form of computer-executable instructions or data structures can be transferred automatically from transmission media to computer storage media (devices) (or vice versa). For example, computer-executable instructions or data structures received over a network or data link can be buffered in RAM within a network interface module (e.g., a “NIC”), and then eventually transferred to computer system RAM and/or to less volatile computer storage media (devices) at a computer system. Thus, it should be understood that computer storage media (devices) can be included in computer system components that also (or even primarily) utilize transmission media.
Computer-executable instructions comprise, for example, instructions and data which, when executed at a processor, cause a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. The computer-executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, or even source code. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the features or acts described above. Rather, the described features and acts are disclosed as example forms of implementing the claims.
Those skilled in the art will appreciate that the invention may be practiced in network computing environments with many types of computer system configurations, including, personal computers, desktop computers, laptop computers, message processors, hand-held devices, multi-processor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, mobile telephones, PDAs, pagers, routers, switches, and the like. The invention may also be practiced in distributed system environments where local and remote computer systems, which are linked (either by hardwired data links, wireless data links, or by a combination of hardwired and wireless data links) through a network, both perform tasks. In a distributed system environment, program modules may be located in both local and remote memory storage devices.
Embodiments of the invention include using templates within a (e.g., high performance) computing component to encapsulate a common configuration for a set of nodes in a resource cloud. Using templates for cloud-based resources provides mechanisms to store cloud configuration information in a centralized location. Using templates for cloud-based resources also provides for automating the deployment of (e.g., high performance) computing worker nodes in a resource cloud. Templates also provide import/export functionality for cloud configuration information.
Thus, embodiments of the invention include encapsulating cloud configuration information in an importable/exportable node template. Templates can also be used to bind groups of nodes to different cloud subscriptions and cloud service accounts. Accordingly, managing the configuration of cloud-based resources can be performed through an interface at a (e.g., high performance) computing component. Templates can also specify a schedule for starting/stopping instances running within the cloud.
Computing component 103 further includes on premise cluster manager 106 and cloud interface module 107. On premise cluster manager 106 is configured to submit work for execution using resources in on premise cluster 108. Customer 102 can enter commands 154 through user-interface 104, which are used to configure and control the use of on premise cluster 108.
User-interface 104 also provides a unified user-interface experience for modifying cluster node templates and cloud node templates. For example, commands 154 can be entered to edit cluster node template 152 or cloud node template 153.
Cluster node template 152 can encapsulate a common configuration for a set of nodes in on premise cluster 108. Commands 154 can edit cluster node template 152 to configure a set of nodes in on premise cluster 108 for a specific purpose. On premise cluster manager 106 can then use data in cluster node template 152 to configure and control the use of on premise cluster 108.
On premise cluster 108 can include a cluster of computer systems configured to interoperate with one another and aggregate resources together (e.g., processor, storage, and system memory) to solve (e.g., different portions of larger and/or more complex) computations that would overburden a single computer system. The cluster of computer systems can include a plurality of computer systems, such as, for example, tens or even hundreds of computer systems, having computational and storage resources.
From time to time, customer 102 may also desire to use computation and storage resources in cloud 161 (e.g., to supplement the use of on premise cluster 108). To establish a relationship between customer premise 101 and cloud 161, customer 102 can register for a subscription with cloud management service 111. In response, cloud management service 111 can return account data 174 to customer 102. Cloud management service 111 can also store account data 174 along with other credentials, for example, a storage credential (e.g., a storage encryption key), in an account database. A storage credential can be used to access storage resources for customer 102 in cloud 161. Account data 174 can include one or more of a subscription ID, a service account name, a storage account name, and a certificate for customer 102.
Customer 102 can also provide account data 174 to computing component 103 through the user-interface 104. Computing component 103 can store account data 174. Computing component 103 can subsequently use account data 174 on behalf of customer 102 to facilitate the performance of work in cloud 161.
To facilitate the performance of work in cloud 161, customer 102 can enter commands 154 through user-interface 104, which are used to configure and control the use of resources in cloud 161. Cloud node template 153 can encapsulate a common configuration for a set of nodes in cloud 161. Commands 154 can edit cloud node template 153 to configure a set of nodes in cloud 161 for a specific purpose. Cloud interface module 107 can then use data in cloud node template 153 to configure and control the use (deployment 151) of resources in cloud 161.
Method 200 includes an act of accessing account data for a customer that is to use resources in the resource cloud (act 201). For example, computing component 103 can access account data 174 for customer 102. Computing component 103 can access account data 174 in an automated manner, and/or customer 102 can enter account data 174 through user-interface 104. Method 200 includes an act of populating a cloud node template with the accessed account data (act 202). For example, computing component 103 can populate cloud node template 153 with account data 174. Subsequently, cloud node template 153 can be stored at computing component 103 so that information for deploying nodes in cloud 161 is maintained in a central location.
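The description leaves the in-memory form of a template open. As a minimal, non-authoritative sketch of acts 201 and 202 in Python, the account data (e.g., a subscription ID, service and storage account names, and a certificate, as noted above) could be gathered into a cloud node template object and kept in a central store. All class, field, and value names below are hypothetical and are not prescribed by the specification.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AccountData:
    # Illustrative fields mirroring the account data described above.
    subscription_id: str
    service_account_name: str
    storage_account_name: str
    certificate_path: str

@dataclass
class CloudNodeTemplate:
    # A cloud node template encapsulating a common configuration for a
    # set of cloud nodes, plus the customer's account data.
    name: str
    common_configuration: dict = field(default_factory=dict)
    account_data: Optional[AccountData] = None

def populate_cloud_template(template: CloudNodeTemplate,
                            account: AccountData,
                            store: dict) -> None:
    """Act 202 sketch: populate the template schema with the accessed
    account data, then keep the template in a central store."""
    template.account_data = account
    store[template.name] = template

# Hypothetical usage (act 201 would supply the account data).
central_store: dict = {}
account = AccountData("00000000-0000-0000-0000-000000000000",
                      "example-service", "examplestorage",
                      "/secure/customer.cer")
populate_cloud_template(CloudNodeTemplate("CloudWorkers"), account, central_store)
```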
Method 200 includes an act of receiving user commands for deploying resources in the resource cloud (act 203). For example, user-interface 104 can receive commands 154 from customer 102. Commands 154 can be commands for deploying resources in cloud 161. Method 200 includes an act of referring to the cloud node template to access relevant account data for the customer (act 204). For example, cloud interface module 107 can refer to cloud node template 153 to access relevant portions of account data 174 for customer 102.
Method 200 includes an act of formulating a deployment for use in the resource cloud, the deployment representing the user commands in accordance with the relevant account data for the customer (act 205). For example, cloud interface module 107 can formulate deployment 151 for use in cloud 161. Deployment 151 represents user commands 154 in accordance with relevant portions of account data 174. Method 200 includes an act of submitting the deployment to the cloud management service (act 206). For example, cloud interface module 107 can submit deployment 151 to cloud management service 111. Cloud management service 111 can allocate resources of cloud 161 for use by customer 102.
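A loose sketch of acts 204 through 206, under assumed names: the cloud node template supplies the relevant account data, the user commands are folded into a deployment, and the deployment is handed to whatever client wraps the cloud management service. The dictionary layout is illustrative only, not a wire format defined by the specification.

```python
def formulate_deployment(user_commands, cloud_template) -> dict:
    """Acts 204-205 sketch: refer to the cloud node template for the
    relevant account data and represent the user commands, together
    with that account data, as a deployment."""
    account = cloud_template.account_data  # e.g., a template populated as above
    return {
        "subscription_id": account.subscription_id,
        "service_account": account.service_account_name,
        "storage_account": account.storage_account_name,
        "commands": list(user_commands),
        # Operational data and binary code for instantiating services in the
        # cloud could be attached here as well, as described below.
    }

def submit_deployment(deployment: dict, cloud_management_client) -> None:
    # Act 206 sketch: hand the formulated deployment to a client that wraps
    # the cloud management service, which then allocates cloud resources.
    cloud_management_client.submit(deployment)
```

In this sketch, a module along the lines of cloud interface module 107 would call these helpers when resources in the cloud are to be employed for the customer.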
Deployment 151 can include operational data along with appropriate credentials for storage within cloud 161. Operational data can include data for configuring resources of cloud 161 to assist in computations that are also being worked on by resources in on premise cluster 108. Deployment 151 can also include binary code along with credentials for storage within cloud 161. Subsequently, the binary code can be executed within cloud 161 to instantiate computing services. Services can receive and process units of work to assist resources within on premise cluster 108.
Thus, the submission of operational data and binary code is collectively represented as deployment 151. Upon receiving deployment 151, cloud management service 111 performs a number of operations to implement deployment 151, such as, for example, successfully appending cloud resources to a cluster for use by computing component 103.
In some embodiments, computing component 103 is a High Performance Computing (“HPC”) component (e.g., a head node for on premise cluster 108). As such, work submitted for execution (to on premise cluster 108 and/or to cloud 161) can be part of scientific or other computationally intensive operations. In these embodiments, operational data can be HPC operational data and binary code can be HPC binary code. Based on HPC operational data and HPC binary code, HPC service 137 can run within cloud 161. The HPC service can include queues for accepting units of HPC work, mapping tables to track machines, etc.
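The HPC service itself is not specified in implementation detail. The sketch below simply illustrates the two pieces called out above, a queue for accepting units of HPC work and a mapping table for tracking machines, using assumed names.

```python
from collections import deque

class HpcCloudService:
    """Illustrative-only sketch of an HPC service running in the cloud:
    a queue that accepts units of HPC work and a mapping table that
    tracks which cloud machine is handling which unit."""

    def __init__(self):
        self.work_queue = deque()   # queue for accepting units of HPC work
        self.machine_map = {}       # mapping table to track machines

    def accept_work(self, unit_id: str, payload) -> None:
        # Units of work arrive from the head node / on premise cluster side.
        self.work_queue.append((unit_id, payload))

    def assign_next(self, machine_name: str):
        # Hand the next queued unit of work to a cloud worker machine.
        if not self.work_queue:
            return None
        unit_id, payload = self.work_queue.popleft()
        self.machine_map[unit_id] = machine_name
        return unit_id, payload
```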
In some embodiments, cluster node templates and cloud node templates are formatted in accordance with a node template schema. Generally, a node template can include a number of child template items, each encapsulating a piece of configuration. For example, HPC node templates for a cloud can contain a template item "DeployCloudWorker". A DeployCloudWorker item can contain configuration information used for deploying HPC instances in the cloud. Configuration information can include, for example: 1) the ID of the cloud subscription; 2) certificate information for contacting cloud management services; 3) the cloud service name to use; and 4) the cloud storage service name to use. Additionally, there is an optional second template item, "DeploymentSchedule", which can be used to provide a schedule for when cloud instances should be started and stopped.
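To make the template-item structure concrete, the following sketch models a node template as a collection of child items, including a DeployCloudWorker item carrying the four pieces of configuration listed above and an optional DeploymentSchedule item. Representing the schema as plain Python classes, and the schedule as a simple hourly window, are assumptions made for illustration only.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DeployCloudWorker:
    # Configuration used for deploying HPC instances in the cloud,
    # mirroring items 1) through 4) above.
    subscription_id: str
    certificate_info: str            # e.g., a certificate thumbprint or path
    cloud_service_name: str
    cloud_storage_service_name: str

@dataclass
class DeploymentSchedule:
    # Optional schedule for when cloud instances should be started and
    # stopped; an hourly window is an assumed, simplified representation.
    start_hour: int
    stop_hour: int

@dataclass
class NodeTemplate:
    # A node template holds a number of child template items, each
    # encapsulating a piece of configuration.
    name: str
    items: List[object] = field(default_factory=list)

# Hypothetical cloud node template containing both template items.
cloud_worker_template = NodeTemplate(
    name="CloudWorkerTemplate",
    items=[
        DeployCloudWorker(
            subscription_id="00000000-0000-0000-0000-000000000000",
            certificate_info="certificate-thumbprint",
            cloud_service_name="example-service",
            cloud_storage_service_name="examplestorage",
        ),
        DeploymentSchedule(start_hour=8, stop_hour=18),
    ],
)
```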
When cloud resources are added to a cluster, they can be bound to a specific template for their lifetime. The template is used to bind the set of cloud resources to a specific cloud subscription and service name. The template is persisted to a database, and is subsequently used to drive cloud-specific operations, such as: starting a set of cloud workers, stopping a set of cloud workers, and reading cloud worker performance data.
Additionally, an optional deployment schedule can be used to start or stop cloud workers without user intervention. A deployment schedule can also be used to control availability of on-premise workstations or to stop/start any type of on-premise resource. When a deployment schedule is used, deployment commands can be automatically submitted to computing component 103 (e.g., from a scheduler module). Thus, a user (e.g., user 102) can set up a schedule and is relieved from having to enter commands manually.
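As a hedged illustration of schedule-driven operation, the helper below evaluates a simple hourly window and reports whether cloud workers bound to a template should be started or stopped; a scheduler module could evaluate this periodically and submit the resulting command to the computing component. The function name and schedule format are assumptions, not part of the described schema.

```python
import datetime
from typing import Optional

def scheduled_action(start_hour: int, stop_hour: int,
                     now: Optional[datetime.datetime] = None) -> str:
    """Decide whether the cloud workers bound to a template should be
    running, based on a simple hourly deployment schedule (an assumed,
    simplified schedule format)."""
    now = now or datetime.datetime.now()
    in_window = start_hour <= now.hour < stop_hour
    return "start cloud workers" if in_window else "stop cloud workers"

# A scheduler module could submit the resulting command automatically,
# relieving the user of entering deployment commands manually.
print(scheduled_action(start_hour=8, stop_hour=18))
```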
Accordingly, embodiments of the invention facilitate providing an administrator/operator of a cluster with a familiar interface and interaction with the administrative software (e.g., that of computing component 103, including user-interface 104), with extensions for cloud resources that change the cluster characteristics (e.g., local storage or server size as small, medium, etc.).
The present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.
Alam, Salim, Teather, Gregory Wray, Mehta, Parmita