Systems, methods, and computer readable media for identifying resources to implement a service in a cloud computing environment are disclosed. In general, the disclosed methodologies analyze a cloud's ability to support a desired service while maintaining separation between the cloud's logical layers. For example, given a list of resources needed to implement a target service, a hierarchical plan may be generated. The plan may then be used by each layer to track and record the availability of various possible layer-specific resource selections. Since each layer may be permitted access only to that portion of the plan that is associated with, or applicable to, the specific layer, the logical separation between different layers may be enforced. As a consequence, each layer may implement its resource selection mechanisms in any desired manner.
1. A method, comprising:
receiving a plan to support a cloud-based service in a cloud-based computing system, the system having resources organized in a plurality of architectural layers, each layer including a different resource type or types than the other layers, the plan generated as resources eligible to support the service are identified, a respective portion of the plan corresponding to each of the plurality of architectural layers identifying layer-specific resources needed to support the cloud-based service, the plan recording details of which resources are needed or have been allocated to support the service;
for a current layer, carrying out a resource identification operation to determine an availability of one or more layer-specific resources that are needed to support the cloud-based service according to the respective portion of the plan for the current layer; and
based on a determination that a needed resource is not available, returning a failure message for the current layer;
based on a determination that the one or more needed resources are available, selecting one resource instance and accordingly updating the plan to reflect the selection for the current layer;
determining whether the selected resource instance in the current layer needs to be supported by one or more resources from a lower layer in the plurality of architectural layers than the current layer according to the updated plan;
based on a determination that the selected resource instance does not need to be supported by the one or more resources from the lower layer, reporting success for the current layer;
based on a determination that the selected resource instance does need to be supported by the one or more resources from the lower layer, issuing a resource request to the lower layer for the one or more resources from the lower layer to support the selected resource instance,
wherein the resources organized in the plurality of architectural layers include a first layer of one or more pods, a second layer of one or more network containers and a third layer of one or more virtual clusters.
9. A cloud-based resource allocation system, comprising:
a memory having stored therein at least part of a plan, the plan indicating a plurality of resources of a cloud based computing system required to provision a cloud-based service, the resources organized in a plurality of architectural layers, each layer having different resource types than the other layers, the plan generated as resources eligible to support the service are identified, with a respective portion of the plan corresponding to each of the plurality of architectural layers identifying layer-specific resources needed to support the cloud-based service;
a programmable control device having access to program instructions, the program instructions when executed causing the programmable control device to:
for a current layer, determine an availability of one or more resources that are needed to support the cloud-based service according to the respective portion of the plan for the current layer;
based on a determination that a needed resource is not available, return a resource identification failure message for the current layer;
based on a determination that the one or more needed resources are available: select one resource instance and accordingly update the plan to reflect the selection for the current layer;
determine whether the selected resource instance in the current layer needs to be supported by one or more resources from a lower layer in the plurality of architectural layers than the current layer according to the updated plan;
based on a determination that the selected resource instance does not need to be supported by the one or more resources from the lower layer, report success for the current layer;
based on a determination that the selected resource instance does need to be supported by the one or more resources from the lower layer, issue a resource request to the lower layer for the one or more resources from the lower layer to support the selected resource instance,
wherein the resources in the plurality of architectural layers include a first layer of one or more pods, a second layer of one or more network containers and a third layer of one or more virtual clusters.
2. The method of
3. The method of
4. The method of
checking the plan to determine if it has a record for a resource of a particular type for the current layer;
taking a lock on the plan if there is no record for the resource of the particular type for the current layer;
identifying one or more eligible resources of the particular type of resource; and
indicating the identified one or more eligible resources in the plan.
5. The method of
identifying one or more eligible resources of a particular type of resource in accordance with one or more policies.
6. The method of
indicating, in the plan, the selected resource instance; and removing the lock from the plan.
7. The method of
8. A program storage device, readable by a programmable control device, comprising instructions tangibly stored thereon for causing the programmable control device to perform the method of
10. The cloud-based resource allocation system of
take a lock, for the lower layer, on a portion of the plan for the lower layer; and
read, for the lower layer, at least a part of the portion of the plan for the lower layer.
11. The cloud-based resource allocation system of
select, for the lower layer, an instance of a resource of a particular type from the one or more eligible resources identified in a portion of the plan for the next lower layer;
update a portion of the plan for the lower layer to indicate selection of the resource instance of the particular type; and
release the lock, for the lower layer, on the portion of the plan for the lower layer.
12. The cloud-based resource allocation system of
when the plan indicates, in a portion of the plan for the lower layer, two or more eligible resources of a particular type,
receive, from the lower layer, a failure message, the failure message indicating that a selected resource instance could not be supplied by the lower layer;
update the portion of the plan for the lower layer to indicate that the selected resource instance is not available;
select, for the lower layer, a second resource instance from the two or more eligible resources identified in the portion of the plan for the lower layer.
13. The cloud-based resource allocation system of
a layer control module in each of the architectural layers coupled to an advisor module, the advisor module configured to guide selection of one resource instance from one or more available resource instances in the respective architectural layer.
This disclosure relates generally to the field of computer network management. More particularly, but not by way of limitation, it relates to techniques for identifying and allocating resources to provision a specified service in a cloud computing environment.
The National Institute of Standards and Technology (NIST) describes cloud computing as a model for enabling on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned with minimal management effort or service provider interaction. In many modern environments the implementation of a cloud may be conceptually divided into layers, where each layer can "talk" with only those layers directly above and below it (typically through Application Programming Interfaces or APIs). For example, NIST describes three basic cloud model layers: Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). In one cloud environment the user may interact with a workload manager (at the SaaS layer) in which services are defined (e.g., a shopping cart web application). Conceptually below this may be a PaaS layer in which a given resource (e.g., a compute cluster) is defined which, in truth, may be comprised of one or more elements from the IaaS layer (e.g., compute platforms or blades).
When provisioning a new cloud-based service, a user typically provides a set of resource requirements. The task is then to determine if the necessary resources are available and, if so, to allocate them so that the service may be provided. In the past, one of three approaches has typically been adopted for this task: brute force; merging of architectural layers; and finding an optimal solution. In the brute-force approach, an assumption is made that the necessary resources are available. Under this assumption, each needed resource is identified and allocated in turn. A drawback to this approach is that if 'N' resources of a specified type are needed, but only (N-1) of those resources are actually available, the process fails on the attempted allocation of the Nth resource. At that time, all prior allocations must be undone. For complex services, this approach can be very time-consuming and, in addition, inefficient in that it ties up resources that ultimately cannot be used. In an approach that merges the architectural layers of a cloud, a single layer gains visibility to all aspects of a service's topology. While this can work, and work efficiently, it results in an architecture that is rigid and inflexible: no architectural layer implementation may be changed without affecting all other layers. In an optimal solution approach, a function may be generated based on the required resources, whereafter all suitable resources are identified through an investigation of each layer to identify all possible solutions to satisfy the target service request (i.e., the function). Once identified, all possible solutions are evaluated against a measurement metric and the "best" solution is chosen. A drawback to this approach is that it can be very time-consuming. For large systems (i.e., services requiring a number of different resources, some of which may be defined in terms of collections of other resources), identifying the optimal solution may take an impractically long time.
Thus, it would be beneficial to provide a mechanism to identify those resources needed to satisfy a service request that is cost effective in terms of both time and resource use.
In one embodiment the invention provides a method to identify resources required to support an application. The method includes receiving a plan indicating all of the resources required to support the service and, further, having sections corresponding to different architectural layers in the computing system (e.g., first and second layers) within which the service is to be provided; identifying one or more eligible resources of a type needed to support the application from all the resources indicated by the plan; selecting a particular resource instance from the eligible resources; updating the plan to indicate the particular resource instance was selected; and calling a lower architectural layer to supply the selected particular resource instance. (In general, each architectural layer communicates only with those layers immediately above and below itself.)
In another embodiment, if a particular architectural layer needs multiple instances of a particular type of resource from its lower layer, it may make a separate call to that layer for each needed instance (e.g., in parallel). In this way, resource identification operations may proceed in parallel. In accord with this approach, if a lower architectural layer indicates a first instance of a particular type is not available, its immediately higher layer may select another eligible instance of the resource (if one is available) and issue another call to its lower layer. Once all of the resources needed to support a desired application have been identified, the identified resources may be allocated (without fear of the process failing) and provisioned to supply the service. Illustrative web-based applications that may be deployed using the disclosed technology include a shopping cart and a wiki application (e.g., embodied in a two-tier architecture that includes a database and some PHP code that runs in Apache).
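By way of illustration only, the per-instance fan-out described above might be sketched in Python as follows. The function and parameter names are assumptions made for this sketch; in particular, request_fn is a hypothetical stand-in for a single call into the immediately lower architectural layer and does not appear in the disclosure.

```python
from concurrent.futures import ThreadPoolExecutor

def request_instances(resource_type: str, count: int, request_fn) -> bool:
    """Issue one lower-layer call per needed instance, in parallel.

    request_fn(resource_type) represents one call into the layer
    immediately below; it returns True if that layer can supply one
    instance. It is an assumed hook, not part of the disclosure.
    """
    with ThreadPoolExecutor(max_workers=count) as pool:
        results = list(pool.map(lambda _: request_fn(resource_type), range(count)))
    return all(results)
```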
In still other embodiments, the disclosed methods may be implemented in one or more program modules and stored on a tangible (i.e., non-transitory) storage medium. In yet another embodiment, computer systems may be interconnected to provide the described functionality.
This disclosure pertains to systems, methods, and computer readable media for identifying resources to implement a service in a cloud computing environment. (As used herein, the term resource may be physical or virtual.) In general, techniques are disclosed herein for analyzing a cloud's ability to support a desired service while maintaining separation between the cloud's logical (or architectural) layers. In one embodiment, given a list of resources needed to implement a target service, a hierarchical plan may be generated. The plan may then be used by each layer to track and record the availability of various possible layer-specific resource selections. Once all of the necessary resources are identified, they may be safely and quickly allocated and provisioned to implement the service. In another embodiment, each layer may be permitted access only to that portion of the plan that is associated with, or applicable to, the specific layer. Because the logical separation between different layers is enforced, each layer may implement its resource selection mechanisms in any desired manner without interfering with the operation of other layers within the system.
In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the inventive concept. As part of this description, some structures and devices may be shown in block diagram form in order to avoid obscuring the invention. Moreover, the language used in this disclosure has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter, resort to the claims being necessary to determine such inventive subject matter. Reference in the specification to "one embodiment" or to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention, and multiple references to "one embodiment" or "an embodiment" should not be understood as necessarily all referring to the same embodiment.
It will be appreciated that in the development of any actual implementation (as in any development project), numerous decisions must be made to achieve the developers' specific goals (e.g., compliance with system- and business-related constraints), and that these goals will vary from one implementation to another. It will also be appreciated that such development efforts might be complex and time-consuming, but would nevertheless be a routine undertaking for those of ordinary skill in the network management and resource allocation field having the benefit of this disclosure.
Referring to
Plan 115 may then be generated (block 120) as resources eligible to support the service are identified (block 125). In some embodiments, plan 115 may be implemented as a tree-like object that is at least partially accessible from each of the different layers. In general, plan 115 may record the details of what has been allocated for each required resource in the blueprint's deployment model and may further be annotated with details of how/where that resource was allocated. More specifically, in one embodiment each node in plan 115 includes: the type of resource required; the resource instances available (after taking into account any system or user specified rules/policies) and, for each instance, an indication of whether that instance was evaluated for eligibility and failed (i.e., determined not to be available for a target application); the currently selected resource; and resources related to the currently selected resource.
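To make this node structure concrete, the following is a minimal Python sketch. The class name PlanNode and its field names are assumptions chosen for illustration; the disclosure does not name them.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PlanNode:
    """Hypothetical plan node mirroring the fields described above."""
    resource_type: str                           # type of resource required (e.g., "pod")
    eligible: dict = field(default_factory=dict)  # instance id -> True (viable) / False (evaluated and failed)
    selected: Optional[str] = None               # currently selected resource instance
    related: list = field(default_factory=list)  # nodes for related (lower-layer) resources

    def mark_failed(self, instance_id: str) -> None:
        """Record that an instance was evaluated for eligibility and failed."""
        self.eligible[instance_id] = False
```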
As suggested in
By way of providing context for the following discussion, consider
Referring now to
With respect to acts in accordance with block 300, the identification of one or more resources may be made using any desired user or system specified constraints (e.g., policies). For example, plan 115 may simply require a relational database. System policy may, however, prioritize the selection of relational databases such that Oracle® databases are selected first if available, followed by a MySQL® database if available, followed by a Microsoft Access® database if neither of the first two is available. (ORACLE is a registered trademark of the Oracle International Corporation. MYSQL is a registered trademark of MySQL AB, a Swedish company. MICROSOFT ACCESS is a registered trademark of the Microsoft Corporation.)
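A policy of this kind can be captured by a simple preference-ordered lookup. The sketch below is illustrative only; the DB_PRIORITY ordering merely restates the example policy above, and the function name is an assumption.

```python
from typing import Optional

# Example policy from the text: Oracle first, then MySQL, then Access.
DB_PRIORITY = ["oracle", "mysql", "access"]

def select_database(available: list, priority=DB_PRIORITY) -> Optional[str]:
    """Return the highest-priority database type that is available, if any."""
    for db in priority:
        if db in available:
            return db
    return None

# e.g., select_database(["access", "mysql"]) returns "mysql"
```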
With respect to acts in accordance with block 315, selection of one resource instance from multiple available resource instances may be made using any desired user or system specified constraints (e.g., policies). For example, selection criteria may be made to maximize performance, equalize load, minimize cost, etc.
Referring to
Referring to
The goal in this example (e.g., as specified in a blueprint) is to identify those resources needed by a target application: 1 pod, 1 network container, and 1 virtual cluster. Track 536 illustrates a selection path through system 500 that could be taken by a resource identification operation in accordance with one embodiment (e.g., operation 125). Initially pod 504 was determined not to be eligible, as indicated by diagonal hashing (e.g., through the evaluation of policies by a layer control module and, possibly, the use of an Advisor module and Policy Engine as depicted in
Problem Set-Up
To identify the resources needed to support a target application, a resource identification operation (e.g., operation 125) takes as input the type of resource to allocate (call this type ‘X’), the parent resource to draw from, such as a network container, compute pool or virtual cluster (call this instance ‘P’), and a plan object. Assume that a target service's blueprint requires three (3) instances of type ‘Y’ resource and that these should be drawn from type ‘X’ resources.
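In code, this set-up might look like the following sketch (all names hypothetical): an identify_resource entry point taking the type ‘X’, the parent instance ‘P’, and the plan object, invoked once per required type-‘Y’ instance.

```python
def identify_resource(resource_type, parent, plan) -> bool:
    """Hypothetical entry point for the resource identification operation.

    resource_type -- the type of resource to allocate (type 'X' in the text)
    parent        -- the parent resource instance to draw from ('P')
    plan          -- the shared plan object
    Returns True if an instance was identified, False otherwise.
    """
    ...  # one possible body is sketched under "Successful Resource Identification"

# A blueprint needing three type-'Y' resources drawn from a type-'X'
# parent results in three such calls, e.g.:
# results = [identify_resource("Y", parent=p, plan=plan) for _ in range(3)]
```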
Successful Resource Identification
Referring to
Referring to
At the end of operation 600 the plan may contain details about what resources have been selected at each layer. It is noted that during operation 600, the system may also place soft-locks on the selected resources as well as take established soft-allocations into account. As used herein, the term "system" refers to the collection of operating modules at each layer. For example, if there are 3 layers and each layer includes a layer control module (e.g., module 400), an advisor module (e.g., module 405) and a policy engine (e.g., Policy Engine 410), the "system" would refer to the aggregate collection of modules.
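Assembling the steps of operation 600 into one place, a minimal sketch of one layer's pass might look like the following. It builds on the hypothetical PlanNode above, and the find_eligible, needs_lower, and request_lower callables are assumed layer-specific hooks rather than anything named in the disclosure.

```python
import threading

def identify_in_layer(node, lock: threading.Lock,
                      find_eligible, needs_lower, request_lower) -> bool:
    """One layer's pass over its portion of the plan (hypothetical hooks).

    find_eligible(resource_type) -> iterable of candidate instance ids
    needs_lower(instance_id)     -> True if lower-layer support is required
    request_lower(instance_id)   -> True if the lower layer supplies support
    """
    with lock:                                # take a lock on this portion of the plan
        if not node.eligible:                 # no record yet: identify eligible resources
            node.eligible = {i: True for i in find_eligible(node.resource_type)}
        candidates = [i for i, ok in node.eligible.items() if ok]
        if not candidates:
            return False                      # report failure for the current layer
        node.selected = candidates[0]         # select one instance and update the plan
    if not needs_lower(node.selected):
        return True                           # report success: no lower-layer support needed
    return request_lower(node.selected)       # otherwise issue a request to the lower layer
```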
Dealing With Failure
Referring now to
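Against the same hypothetical PlanNode, the recovery behavior just described might be sketched as follows: mark the failed instance unavailable in the plan, then, while another eligible instance remains, select it and repeat the lower-layer request. The request_lower hook is the same assumption as in the earlier sketch.

```python
def retry_after_failure(node, request_lower) -> bool:
    """Handle a lower-layer failure by trying the remaining eligible instances.

    Assumes node.selected is the instance the lower layer just failed
    to support.
    """
    while node.selected is not None:
        node.mark_failed(node.selected)       # record the failed instance in the plan
        remaining = [i for i, ok in node.eligible.items() if ok]
        node.selected = remaining[0] if remaining else None
        if node.selected is not None and request_lower(node.selected):
            return True                       # an alternate instance was supplied
    return False                              # all eligible instances exhausted: report failure
```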
Various changes in the components as well as in the details of the illustrated operational methods are possible without departing from the scope of the following claims. For instance, the disclosed methodologies are not restricted to cloud-based computing systems, but rather, may be useful in any computer system that may be modeled as a layered system.
It will be recognized that the disclosed methodologies (and their functional equivalents) may be embodied as one or more software program modules that can be executed by one or more programmable control devices. A programmable control device (e.g., provisioning server 205, one or more devices in compute resource pool 220 or a programmable resource in pool 225) may include any programmable controller device including, for example, one or more members of the Intel Atom®, Core®, Pentium® and Celeron® processor families from Intel Corporation. (INTEL, INTEL ATOM, CORE, PENTIUM, and CELERON are registered trademarks of the Intel Corporation.) Custom designed state machines may be used to implement some or all of the operations disclosed herein. Such devices may be embodied in hardware such as application specific integrated circuits (ASICs) and field programmable gate arrays (FPGAs). Storage devices suitable for tangibly embodying program instructions (e.g., storage pool 215 objects as well as long-term storage and random access memory included in a programmable device such as provisioning server 205) include, but are not limited to: magnetic disks (fixed, floppy, and removable) and tape; optical media such as CD-ROMs and digital video disks ("DVDs"); and semiconductor memory devices such as Electrically Programmable Read-Only Memory ("EPROM"), Electrically Erasable Programmable Read-Only Memory ("EEPROM"), Programmable Gate Arrays and flash devices.
Finally, it is to be understood that the above description is intended to be illustrative, and not restrictive. For example, the above-described embodiments may be used in combination with each other. Many other embodiments will be apparent to those of skill in the art upon reviewing the above description. The scope of the invention therefore should be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled. In the appended claims, the terms “including” and “in which” are used as the plain-English equivalents of the respective terms “comprising” and “wherein.”
Inventors: Johan Eriksson; Jonathan Whitney