Systems and methods for deploying packages to devices in a fleet in stages are provided. A method includes first scanning hardware configured to handle functions unrelated to customer workloads in a first stage to determine whether a selected diversity target for deploying the package is met. The method further includes, if the selected diversity target for deploying the package is not met, then scanning hardware configured to handle at least a subset of the customer workloads in a second stage to determine whether the selected diversity target for deploying the package is met. The method further includes, if the selected diversity target for deploying the package is met after the scanning, then, using a processor, deploying the package to a first subset of a set of devices in the first stage and to a second subset of the set of devices in the second stage.
1. A method for deploying a package to a set of devices in a fleet comprising hardware configurable to perform functions related to customer workloads, the method comprising:
first scanning hardware in a first stage to determine whether a selected diversity target for deploying the package is met, wherein the first scanning further comprises generating a first information regarding a first minimum scanning tree comprising a first subset of a set of devices in the first stage, wherein the first stage comprises hardware configured to handle functions unrelated to the customer workloads, and wherein the selected diversity target is based on at least a package type associated with the package and a type of impact associated with the package;
if the selected diversity target for deploying the package is met after the first scanning, then using a processor, deploying the package to at least the first subset of the set of devices in the first stage based on instructions associated with the package;
if the selected diversity target for deploying the package is not met after the first scanning, then second scanning hardware in a second stage to determine whether the selected diversity target for deploying the package is met, wherein the second stage comprises hardware configured to handle at least a subset of the customer workloads; and
if the selected diversity target for deploying the package is met after the second scanning, then using the processor, deploying the package to the first subset of the set of devices in the first stage and to a second subset of the set of devices in the second stage based on the instructions associated with the package.
13. A system for deploying a package to a set of devices in a fleet comprising hardware configurable to perform functions related to customer workloads, wherein the system comprises at least one processor and a set of instructions stored in at least one memory, the set of instructions when executed by the at least one processor, configured to:
first scan hardware in a first stage to determine whether a selected diversity target for deploying the package is met, wherein at least a first subset of the set of instructions is further configured to generate a first information regarding a first minimum scanning tree comprising a first subset of a set of devices in the first stage, wherein the first stage comprises hardware configured to handle functions unrelated to the customer workloads, and wherein the selected diversity target is based on at least a package type associated with the package and a type of impact associated with the package;
if the selected diversity target for deploying the package is met after the first scan, then deploy the package to at least the first subset of the set of devices in the first stage based on instructions associated with the package;
if the selected diversity target for deploying the package is not met after the first scan, then second scan hardware in a second stage to determine whether the selected diversity target for deploying the package is met, wherein the second stage comprises hardware configured to handle at least a subset of the customer workloads; and
if the selected diversity target for deploying the package is met after the second scan, then deploy the package to the first subset of the set of devices in the first stage and to a second subset of the set of devices in the second stage based on the instructions associated with the package.
7. A method for deploying a package to a set of devices in a fleet comprising hardware configurable to perform functions related to customer workloads, the method comprising:
first scanning hardware in a first stage to determine whether a selected diversity target for deploying the package is met, wherein the first scanning further comprises generating a first information regarding a first minimum scanning tree comprising a first subset of a set of devices in the first stage, wherein the first stage comprises hardware configured to not handle any workloads, and wherein the selected diversity target is based on at least a package type associated with the package and a type of impact associated with the package;
if the selected diversity target for deploying the package is not met after the first scanning, then second scanning hardware in a second stage to determine whether the selected diversity target for deploying the package is met, wherein the second stage comprises hardware configured to handle a first selected number of the customer workloads;
if the selected diversity target for deploying the package is not met after the second scanning, then third scanning a third stage to determine whether the selected diversity target for deploying the package is met, wherein the third stage comprises hardware configured to handle a second selected number of the customer workloads, wherein the second selected number is greater than the first selected number; and
if the selected diversity target for deploying the package is met after third scanning the third stage, then using a processor, deploying the package to at least the first subset of the set of devices in the first stage, a second subset of the set of devices in the second stage, and a third subset of the set of devices in the third stage based on instructions associated with the package.
2. The method of
3. The method of
4. The method of
5. The method of
6. The method of
8. The method of
9. The method of
10. The method of
11. The method of
12. The method of
14. The system of
15. The system of
16. The system of
17. The system of
Deploying packages, including firmware or other low-level system code, to components in a fleet of hardware that makes up a cloud is difficult. The public cloud includes a global network of servers that perform a variety of functions, including storing and managing data, running applications, and delivering content or services, such as streaming videos, electronic mail, office productivity software, or social media. The servers and other components may be located in data centers across the world. While the public cloud offers services to the public over the Internet, businesses may use private clouds or hybrid clouds. Both private and hybrid clouds also include a network of servers housed in data centers.
The data centers include not only servers, but also other components, such as networking switches, routers, and other appliances. The servers and other components may be provided by different vendors and may include different types or versions of motherboards, CPUs, memory, and other devices. Apart from compute, network, and storage components, data centers further include other components, such as chassis, racks, power supply units, and other such components.
Each of these devices may need low-level system code, including firmware. Deploying packages to a wide variety of devices potentially distributed over many data centers across the world is challenging. Thus, there is a need for methods and systems for deploying packages to the devices in the fleet.
In one example, the present disclosure relates to a method for deploying a package to a set of devices in a fleet comprising hardware configurable to perform functions related to customer workloads. The method may include first scanning hardware in a first stage to determine whether a selected diversity target for deploying the package is met, where the first stage comprises hardware configured to handle functions unrelated to the customer workloads. The method may further include, if the selected diversity target for deploying the package is met after the first scanning, then using a processor, deploying the package to a first subset of the set of devices in the first stage based on instructions associated with the package. The method may further include, if the selected diversity target for deploying the package is not met after the first scanning, then scanning hardware in a second stage to determine whether the selected diversity target for deploying the package is met, where the second stage comprises hardware configured to handle at least a subset of the customer workloads. The method may further include, if the selected diversity target for deploying the package is met after the scanning, then using the processor, deploying the package to the first subset of the set of devices in the first stage and to a second subset of the set of devices in the second stage based on the instructions associated with the package.
In another example, the present disclosure relates to a method for deploying a package to a set of devices in a fleet comprising hardware configurable to perform functions related to customer workloads. The method may include first scanning hardware in a first stage to determine whether a selected diversity target for deploying the package is met, where the first stage comprises hardware configured to not handle any workloads. The method may further include, if the selected diversity target for deploying the package is not met after the first scanning, then scanning hardware in a second stage to determine whether the selected diversity target for deploying the package is met, where the second stage comprises hardware configured to handle a first selected number of the customer workloads. The method may further include, if the selected diversity target for deploying the package is not met after the scanning, then scanning a third stage to determine whether the selected diversity target for deploying the package is met, where the third stage comprises hardware configured to handle a second selected number of the customer workloads, where the second selected number is greater than the first selected number. The method may further include, if the selected diversity target for deploying the package is met after scanning the third stage, using the processor, deploying the package to a first subset of the set of devices in the first stage, a second subset of the set of devices in the second stage, and a third subset of the set of devices in the third stage based on the instructions associated with the package.
In yet another example, the present disclosure relates to a system for deploying a package to a set of devices in a fleet comprising hardware configurable to perform functions related to customer workloads. The system may be configured to first scan hardware in a first stage to determine whether a selected diversity target for deploying the package is met, where the first stage comprises hardware configured to handle functions unrelated to the customer workloads. The system may further be configured to, if the selected diversity target for deploying the package is met after the first scan, then deploy the package to a first subset of the set of devices in the first stage based on instructions associated with the package. The system may further be configured to, if the selected diversity target for deploying the package is not met after the first scan, then scan hardware in a second stage to determine whether the selected diversity target for deploying the package is met, where the second stage comprises hardware configured to handle at least a subset of the customer workloads. The system may further be configured to, if the selected diversity target for deploying the package is met after the scan, then deploy the package to the first subset of the set of devices in the first stage and to a second subset of the set of devices in the second stage based on the instructions associated with the package.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
The present disclosure is illustrated by way of example and is not limited by the accompanying figures, in which like references indicate similar elements. Elements in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale.
Examples described in this disclosure relate to creating and deploying packages including a payload to a fleet. Certain examples relate to creating and deploying packages based on operations derived from a machine learning model. Deploying packages, including firmware or other low-level system code, to components in a fleet of hardware in the cloud is difficult. The public cloud includes a global network of servers that perform a variety of functions, including storing and managing data, running applications, and delivering content or services, such as streaming videos, electronic mail, office productivity software, or social media. The servers and other components may be located in data centers across the world. While the public cloud offers services to the public over the Internet, businesses may use private clouds or hybrid clouds. Both private and hybrid clouds also include a network of servers housed in data centers.
The data centers include not only servers, but also other components, such as networking switches, routers, and other appliances. The servers and other components may be provided by different vendors and may include different types or versions of motherboards, CPUs, memory, and other devices.
Each of these devices may need low-level system code, including firmware. Deploying packages to a wide variety of devices potentially distributed over many data centers across the world is challenging. This is because the deployment of the packages needs to be done safely, securely, and reliably. There are several external factors that impact the safety, security, and reliability goals. As an example, there are generally more deployments than can be managed at a single time, especially when some have high-impact potential. There are certain types of changes or targets that require explicit agreement from other parties, which gates the deployment (e.g., potential power or performance impacts). Moreover, the impact of deployment of the packages may need to be monitored to ensure safe and reliable deployment. Finally, the payloads often include firmware or other code sourced from other companies and must be evaluated and tested to ensure security.
To ensure safe, secure, and reliable deployment of the packages, certain examples of this disclosure relate to ensuring quality payloads, appropriate validation and testing, and monitoring of impacts on the fleet. Certain examples relate to using machine learning to improve the creation and the deployment of the packages.
With continued reference to
Still referring to
With continued reference to
With continued reference to
Still referring to
Network interfaces 414 may include communication interfaces, such as Ethernet, cellular radio, Bluetooth radio, UWB radio, or other types of wireless or wired communication interfaces. Bus 420 may be coupled to both the control plane and the data plane. Although
Still referring to
With reference to
TABLE 1
Primary KPIs

Time-to-Detect (TTD): This KPI may measure the time it takes to detect issues automatically via monitoring. This may contribute to effective and efficient monitoring.

Time-to-Broad-Deployment (TTBD): This KPI may measure the time it takes from when a package is tested and ready until broad rollout in the fleet is initiated. This KPI measures how long it takes to get through STAGE 1 (described later) and begin the broader rollout. This KPI contributes to execution efficiency and scale efficiency.

Time-to-Complete-Deployment (TTCD): This KPI may measure the time it takes to complete a deployment. This KPI contributes to execution efficiency and scale efficiency.

Deployment Incident Control Management (DICM): This KPI tracks the overall rate of incidents triggered by the deployment. This KPI contributes to deploying quality packages.

High-Impact Deployments (HID): This KPI reflects the number/percentage of deployments that are categorized as highly impactful to customers (requiring reboot or vacating). This KPI contributes to ensuring minimal impact.

Monitor Misses (MM): This KPI measures the number of issues that were found that were not caught by monitoring. This KPI contributes to effective and efficient monitoring.
Deployment and monitoring 512 may also track additional KPIs, which are referred to as Secondary KPIs in Table 2 below.
TABLE 2
Secondary KPIs

Time-to-Qualify-Deployment (TTQ): This KPI measures the time it takes to qualify a release and contributes to execution efficiency and deploying quality packages.

Time-to-Initiate-Deployment (TTID): This KPI measures the time from when a determination is made that an update is needed until the time the deployment is initiated. The goal of this KPI is to understand the time to prepare a deployment. This KPI contributes to overall execution efficiency.

Cluster Deployment Readiness (CDR): This KPI measures the readiness of a cluster prior to deployment by performing pre-requisite checks. The intention is to use this as a gate indicating whether a cluster is ready for deployment (% of nodes not ready, firmware version variance, MOS version variance, remediation package readiness, queued deployments, etc.). This KPI contributes to execution efficiency.

Hygiene KPI (HYG): This KPI indicates the current freshness of the fleet. This KPI contributes to scaling efficiency and ensuring minimal impact.

Time-to-HotFix (TTHF): This KPI measures the time from when a critical bug is discovered until the hotfix deployment is initiated. This KPI contributes to execution efficiency.

Automation Efficiency (AE): This KPI measures the amount of the process that is automated (vs. requiring manual processing). This KPI contributes to execution efficiency and scaling efficiency.
With continued reference to
With continued reference to
Still referring to
Step 1104 may include classifying the hardware in the fleet into deployment categories by volume. In one example, classifying the hardware in the fleet into deployment categories by volume may include planning module 504 of
Step 1106 may include mapping the package to devices selected for deployment. As part of this step, planning module 504 may create information (e.g., a table or a set of tables) mapping the package to the devices selected for deployment of the package. This information may be stored in deployment database 412 of
Step 1108 may include scanning the hardware in STAGE 1 to determine whether a selected diversity target is met. If the selected diversity target is met, then the flow may proceed to processing stage A 1110. Otherwise, the flow may proceed to processing stage B 1112. In one example, as part of this step, planning module 504 may construct (or process existing) minimum scanning trees as described with respect to
With respect to
Step 1116 may include scanning hardware in STAGE 3 to determine whether the selected diversity target is met. In this example, planning module 504 may construct (or process existing) minimum scanning trees for STAGE 3 in a similar manner as described with respect to
Step 1118 may include continuing scanning additional stages until the selected diversity target is met or all of the remaining stages have been scanned. In one example, the selected diversity target may be chosen based on the package type. Alternatively, or additionally, the selected diversity target may be chosen based on the impact type. Thus, for a certain package type, the selected diversity target may be 75% of the SKUs, whereas for another package type the selected diversity target may be 90%.
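The staged scanning flow described in steps 1108 through 1118 can be sketched as a loop that accumulates SKU coverage stage by stage until the diversity target is met. The stage layout, SKU names, and target values below are hypothetical illustrations, not taken from the disclosure.

```python
def diversity_coverage(scanned_skus, fleet_skus):
    """Fraction of the fleet's distinct SKUs covered by the scanned stages."""
    return len(scanned_skus & fleet_skus) / len(fleet_skus)

def scan_stages(stages, fleet_skus, diversity_target):
    """Scan stages in order until the selected diversity target is met.

    `stages` is an ordered list of (stage_name, sku_set) pairs, with the
    workload-free stage first. Returns the list of stages selected for
    deployment, or None if the target is not met after all stages.
    """
    scanned = set()
    selected = []
    for name, skus in stages:
        scanned |= skus
        selected.append(name)
        if diversity_coverage(scanned, fleet_skus) >= diversity_target:
            return selected
    return None

# Hypothetical fleet: STAGE 1 hosts no customer workloads; later stages do.
fleet = {"sku-a", "sku-b", "sku-c", "sku-d"}
stages = [
    ("STAGE 1", {"sku-a", "sku-b"}),
    ("STAGE 2", {"sku-c"}),
    ("STAGE 3", {"sku-d"}),
]

print(scan_stages(stages, fleet, 0.75))  # ['STAGE 1', 'STAGE 2']
```

With a 75% diversity target, scanning stops after STAGE 2; a 90% target for a different package type would force the scan into STAGE 3, matching the package-type-dependent targets described above.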
Once the state of the fleet has been determined and a minimum scanning tree has been determined for the deployment of a particular package or a group of packages, the processing may proceed to the next steps. These steps may include determining the velocity of the deployment. In one example, the velocity of the deployment may be related to the number of gates the deployment includes. Each gate may correspond to a wait time period (e.g., a certain number of hours, days, or months) specifying the time for which the deployment may be delayed after each step of the deployment process. As an example, for the deployment of a particular package to the CPUs, the deployment may be gated for 24 hours after deployment to the minimum scanning tree; after the elapse of the 24 hours, the package may be deployed to CPUs with the relevant SKUs in the rest of the fleet. In one example, the gates may specify a longer wait time period when the deployment relates to the devices that are processed via the control plane (e.g., control plane 230 of
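The gate-based velocity control described above can be sketched as a schedule of wait periods applied between deployment steps. The gate durations and plane names below are illustrative assumptions; the disclosure only specifies that control-plane deployments may warrant longer waits.

```python
from datetime import timedelta

# Hypothetical gate schedule: devices processed via the control plane get
# longer wait periods than data-plane devices.
GATE_WAITS = {
    "control_plane": timedelta(hours=72),
    "data_plane": timedelta(hours=24),
}

def deployment_gates(plane, num_gates):
    """Total delay contributed by the gates for a deployment on `plane`;
    each gate delays the next deployment step by the plane's wait period."""
    return GATE_WAITS[plane] * num_gates

# e.g., a CPU package gated for 24 hours after deployment to the minimum
# scanning tree, before release to the rest of the fleet.
print(deployment_gates("data_plane", 1))
```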
In another example, the velocity of the deployment may be related to the impact of the deployment. Thus, the number of gates and the wait time period specified by the gates may depend upon the impact of the deployment on the fleet. As an example, certain deployments may be characterized as impactful and other deployments may be characterized as impact-less. Deployments may also be characterized along a sliding scale between the impact-less and impactful. This process may include planning module 504 considering both the package type and the impact type of the package. The information corresponding to impact, including the impact type and the package type, may be stored in a table in a database (e.g., deployment database 412 of
With continued reference to
Still referring to
Although impact table 1200 contains information concerning certain package types and impact types, impact table 1200 may contain information concerning additional or fewer of each of package types and impact types. As an example, impact table 1200 may include information concerning impact on the deployment of packages to Network Interface Controllers (NICs), Top-of-Rack (TOR) switches, Middle-of-Rack (MOR) switches, routers, power distribution units (PDUs), and rack level uninterrupted power supply (UPS) systems.
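An impact table of this kind can be represented as a lookup keyed by package type and impact type. The entries below are hypothetical examples consistent with the kinds of impacts discussed (reboot, vacating); they are not the contents of impact table 1200.

```python
# Hypothetical impact table: (package_type, impact_type) -> classification.
IMPACT_TABLE = {
    ("cpu_microcode", "reboot"): "impactful",
    ("bmc_firmware", "none"): "impact-less",
    ("nic_firmware", "reset"): "impactful",
}

def classify_deployment(package_type, impact_type):
    """Classify a deployment using the impact table; unknown combinations
    are treated conservatively as impactful."""
    return IMPACT_TABLE.get((package_type, impact_type), "impactful")

print(classify_deployment("bmc_firmware", "none"))  # impact-less
```

Defaulting unknown pairs to "impactful" is one conservative design choice; a planner could instead refuse to schedule a deployment whose impact is not yet tabulated.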
With continued reference to
Display 1320 may be any type of display, such as LCD, LED, or other types of display. Network interfaces 1322 may include communication interfaces, such as Ethernet, cellular radio, Bluetooth radio, UWB radio, or other types of wireless or wired communication interfaces. Although
With continued reference to
i_t = σ(W_xi x_t + W_hi h_{t−1} + W_ci c_{t−1} + b_i)
f_t = σ(W_xf x_t + W_hf h_{t−1} + W_cf c_{t−1} + b_f)
c_t = f_t c_{t−1} + i_t tanh(W_xc x_t + W_hc h_{t−1} + b_c)
o_t = σ(W_xo x_t + W_ho h_{t−1} + W_co c_t + b_o)
h_t = o_t tanh(c_t)
In this example, inside each LSTM layer, the inputs and hidden states may be processed using a combination of vector operations (e.g., dot-product, inner product, or vector addition) or non-linear operations, if needed.
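A single step of the LSTM layer defined by the equations above can be sketched directly. The weight shapes and diagonal peephole connections follow the equations; the dimensions and the small random weights standing in for trained parameters are assumptions for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, p):
    """One step of the peephole LSTM from the equations above; `p` holds
    the weight matrices W_* and biases b_* in a dict. The peephole weights
    W_ci, W_cf, W_co are diagonal, so they apply elementwise."""
    i_t = sigmoid(p["W_xi"] @ x_t + p["W_hi"] @ h_prev + p["W_ci"] * c_prev + p["b_i"])
    f_t = sigmoid(p["W_xf"] @ x_t + p["W_hf"] @ h_prev + p["W_cf"] * c_prev + p["b_f"])
    c_t = f_t * c_prev + i_t * np.tanh(p["W_xc"] @ x_t + p["W_hc"] @ h_prev + p["b_c"])
    o_t = sigmoid(p["W_xo"] @ x_t + p["W_ho"] @ h_prev + p["W_co"] * c_t + p["b_o"])
    h_t = o_t * np.tanh(c_t)
    return h_t, c_t

# Hypothetical sizes and untrained random weights, for shape-checking only.
rng = np.random.default_rng(0)
n_in, n_hid = 4, 3
p = {name: rng.normal(scale=0.1, size=(n_hid, n_in)) for name in ("W_xi", "W_xf", "W_xc", "W_xo")}
p.update({name: rng.normal(scale=0.1, size=(n_hid, n_hid)) for name in ("W_hi", "W_hf", "W_hc", "W_ho")})
p.update({name: rng.normal(scale=0.1, size=n_hid) for name in ("W_ci", "W_cf", "W_co")})
p.update({name: np.zeros(n_hid) for name in ("b_i", "b_f", "b_c", "b_o")})

h, c = np.zeros(n_hid), np.zeros(n_hid)
h, c = lstm_step(rng.normal(size=n_in), h, c, p)
print(h.shape)  # (3,)
```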
Although
Training data 1420 may be data that may be used to train a neural network model or a similar machine learning model. In one example, training data 1420 may be used to train the machine learning model to minimize an error function associated with the deployment of a package. In one example, the minimization of the error function may be obtained by obtaining user feedback on the various payload and package parameters and determining appropriate weights for convolution operations or other types of operations to be performed as part of machine-based learning. As an example, the users in a test environment may be provided with a set of preselected mapping functions with known payload and package parameters and asked to select the mapping function that they prefer.
ML models 1430 may include machine learning models that may be used as part of machine learning system 1300. ML models 1430 may include models that are created by the training process. In this example, training data 1420 may include target attributes, such as a selected diversity target for deploying a package. An appropriate machine learning algorithm included as part of LBA 1410 may find patterns in training data 1420 that map a given set of input parameters (e.g., payload parameters and package parameters) to a selected diversity target for deploying the package. In another example, the machine learning algorithm may find patterns in training data 1420 that map the input parameters to a deployment classification. An example deployment classification may include at least two categories: impactful or impact-less. Other machine learning models may also be used. As an example, training data 1420 may be used to train a machine learning model that maps the input package type to any impact associated with the deployment of the package. The impact may be represented in a similar form as described with respect to impact table 1440. Thus, impact table 1440 may be similar or identical to impact table 1200 of
Payload parameters 1450 may include parameters associated with a payload. In one example, payload parameters may include the type of the payload, the target SKUs for the payload, the degree of change caused by the deployment of the payload, any prerequisites, any known impact, and required deployment time. Payload parameters 1450 may be extracted from the metadata associated with the payload or otherwise obtained through the submission process as described earlier.
Package parameters 1460 may include parameters associated with a package that includes the payload. In one example, package parameters 1460 may include information concerning the type of health monitoring that is included with the package. Package parameters 1460 may further include the package type and the gates and watchdogs required for the deployment of the package.
Deployment parameters 1470 may include information concerning the rollout plan. As an example, deployment parameters 1470 may include an assessment of the target conditions that will be required for the deployment. These conditions may include information regarding whether any of a device reset, node reboot, node repave, power supply cycle, or disk reformat is required. These parameters may be included as part of the instructions and/or metadata associated with a package.
Fleet parameters 1480 may include information concerning the entire fleet or a subset of the fleet that may be the target of deployment. Fleet parameters may include information related to the item types (e.g., the SKUs) associated with the data centers in the fleet or the subset of the fleet. This information may include the number of each of the SKUs. In addition, fleet parameters 1480 may include additional details on the data centers included in the fleet or the subset of the fleet. As an example, the information concerning data centers may include the location information, the AC voltage supply in the data center (e.g., 120 Volts or 240 Volts), the operator information (e.g., whether the data center is operated by the service provider or by the customer of the service provider). Fleet parameters 1480 may be assessed using deploy module 510 of
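The four parameter groups fed into the learning-based analyzer can be sketched as simple data structures. The field names below are illustrative, drawn from the parameters listed in the preceding paragraphs; the defaults are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class PayloadParameters:
    payload_type: str
    target_skus: list
    degree_of_change: str
    prerequisites: list = field(default_factory=list)
    known_impact: str = "unknown"
    required_deployment_time_hours: float = 0.0

@dataclass
class PackageParameters:
    package_type: str
    health_monitoring: str
    gates: int = 0
    watchdogs: int = 0

@dataclass
class DeploymentParameters:
    requires_device_reset: bool = False
    requires_node_reboot: bool = False
    requires_node_repave: bool = False
    requires_power_cycle: bool = False
    requires_disk_reformat: bool = False

@dataclass
class FleetParameters:
    sku_counts: dict
    datacenter_locations: list
    ac_voltage: int = 240
    operator: str = "service_provider"

payload = PayloadParameters("firmware", ["sku-a"], "minor")
print(payload.known_impact)  # unknown
```

Grouping the inputs this way keeps the analyzer's feature-extraction step explicit: each dataclass corresponds to one of payload parameters 1450, package parameters 1460, deployment parameters 1470, and fleet parameters 1480.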
ML models 1430 may include models that are trained to prioritize targets with minimal impact. Thus, in one example, an ML model may learn that when a node reboot is required, the deployment should first be made to nodes that are empty, in that they are not running any workloads. ML models 1430 may also include models that can be trained to receive as input the parameters associated with the payload, the package, the deployment, and the fleet, and determine whether some of the deployment steps could be performed in parallel. In addition, ML models 1430 may include models that can be trained to receive as input the parameters associated with the payload, the package, the deployment, and the fleet, and determine the specific gates and watchdogs that may be needed during the deployment to the fleet. Moreover, ML models 1430 may include models that can be trained to receive as input the parameters associated with the payload, the package, the deployment, and the fleet, and determine the type of health monitoring that should be included as part of the deployment of the package. Finally, other automated feedback models may also be used. As an example, such automated feedback models may not rely upon machine learning; instead, they may rely on other feedback mechanisms to allow for the automatic creation of the packages for deployment or to allow for the automatic creation of a deployment plan for deploying a package to the fleet. Regardless, in some cases, automated feedback models may use machine learning models, such as reinforcement learning models.
Step 1504 may include using a processor, automatically creating the package for the deployment to the set of devices, where the package comprises instructions for deploying the payload to the set of devices, and where the instructions specify at least one of a plurality of operations derived from a machine learning model based at least on a subset of the associated set of payload parameters. In this example, processor 1302 may execute instructions (e.g., instructions corresponding to learning-based analyzer 1410) stored in memory 1306 to perform this step. The instructions for deploying the payload may specify operations such as the number of gates and/or watchdogs required for the deployment. The operations may relate to any of the deployment parameters (e.g., deployment parameters 1470 of
In one example, the automatically creating the package for the deployment to the set of devices may include processing metadata, or other submission parameters, associated with the payload. The machine learning model may be trained based on training data comprising a mapping between the at least the subset of the associated set of payload parameters and a set of labels classifying an impact of deploying the payload to the set of devices. In one example, the set of labels may include a first label classifying the impact as impactful and a second label classifying the impact as impact-less. Any of the ML models 1430 described with respect to
Step 1604 may include using a processor, automatically creating a deployment plan for deploying the package to the fleet, where the deployment plan comprises instructions for deploying the package to the fleet, and where the instructions specify at least one of a plurality of operations derived from a machine learning model based at least on a subset of the set of fleet parameters. In this example, processor 1302 may execute instructions (e.g., instructions corresponding to learning-based analyzer 1410) stored in memory 1306 to perform this step. The machine learning model may be trained based on training data comprising a mapping between the at least the subset of the fleet parameters and at least one label related to the deployment plan. In addition, the machine learning model may be trained based on feedback concerning the deployment of the package to the fleet. The plurality of operations may include actions corresponding to monitoring the deployment of the package to the fleet. Thus, as explained earlier, a deployment monitor may monitor the deployment to the fleet. Additional details concerning the deployment monitor are provided with respect to deployment monitor 512 of
In one example, the rollout of the package across the fleet may be staged in a manner that has minimal impact on customer workloads. Thus, first the package may be deployed to empty nodes (e.g., nodes that are not hosting any workloads). Next, the package may be deployed to those nodes that have the minimal number (e.g., two) of workloads (e.g., determined based on the container count or the number of virtual machines being supported by the node). Next, the package may be deployed to those nodes that have a slightly higher number of workloads and so on. This may limit the blast radius and help contain any harm to customer workloads if the deployment causes disruption to the hardware's functioning.
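The staged ordering described above, empty nodes first, then progressively busier nodes, amounts to sorting nodes by workload count and grouping equal counts into rollout waves. A minimal sketch, assuming each node is represented as a hypothetical `(node_id, workload_count)` pair:

```python
from itertools import groupby

def rollout_waves(nodes):
    """Order nodes into deployment waves by ascending workload count.

    Empty nodes (count 0) receive the package first and the busiest nodes
    last, limiting the blast radius if the deployment disrupts hardware.
    `nodes` is a list of (node_id, workload_count) pairs - an illustrative
    shape; the count could come from containers or hosted virtual machines.
    """
    ordered = sorted(nodes, key=lambda n: n[1])  # groupby needs sorted input
    return [[node_id for node_id, _ in wave]
            for _, wave in groupby(ordered, key=lambda n: n[1])]

fleet = [("n1", 2), ("n2", 0), ("n3", 5), ("n4", 0), ("n5", 2)]
print(rollout_waves(fleet))  # [['n2', 'n4'], ['n1', 'n5'], ['n3']]
```

Each wave would be deployed and verified before the next begins, so a fault surfaces on empty or lightly loaded nodes before it can reach customer workloads.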
A logical reporting service may also be implemented to keep track of the deployment in real time. This service may access data stored in deployment database 412 of
In addition, other dashboards may be provided, including dashboards to track each active deployment. Each such dashboard may display the progress of the deployment, including the current deployment rate and the estimated time of completion. Aside from active deployments, pending deployments may also be displayed. For a pending deployment, the dashboard may include the status of the deployment, such as submitted, packaging, testing, waiting, aborted, or completed. Additional details regarding each deployment (active or pending) may be made available by the deployment monitor (e.g., deployment monitor 512 of
In conclusion, the present disclosure relates to a method for deploying a package to a set of devices in a fleet comprising hardware configurable to perform functions related to customer workloads. The method may include first scanning hardware in a first stage to determine whether a selected diversity target for deploying the package is met, where the first stage comprises hardware configured to handle functions unrelated to the customer workloads. The method may further include, if the selected diversity target for deploying the package is met after the first scanning, then using a processor, deploying the package to a first subset of the set of devices in the first stage based on instructions associated with the package. The method may further include, if the selected diversity target for deploying the package is not met after the first scanning, then scanning hardware in a second stage to determine whether the selected diversity target for deploying the package is met, where the second stage comprises hardware configured to handle at least a subset of the customer workloads. The method may further include, if the selected diversity target for deploying the package is met after the scanning, then using the processor, deploying the package to the first subset of the set of devices in the first stage and to a second subset of the set of devices in the second stage based on the instructions associated with the package.
The method may further include scanning the hardware in the fleet to obtain information about the hardware and storing the information about the hardware in a database. The method may further include classifying the hardware in the fleet into deployment categories by volume. The method may further include generating a first information regarding a first minimum scanning tree comprising the first subset of the set of devices. The method may further include generating a second information regarding a second minimum scanning tree comprising the second subset of the set of devices.
The fleet may comprise a predetermined number of types of the set of devices, including a first type of devices, and the selected diversity target may specify a percentage of the first type of devices. The method may further include, if the selected diversity target for deploying the package is not met after the scanning, continuing to scan any remaining stages until the selected diversity target for deploying the package is met or all of the remaining stages have been scanned.
In another example, the present disclosure relates to a method for deploying a package to a set of devices in a fleet comprising hardware configurable to perform functions related to customer workloads. The method may include first scanning hardware in a first stage to determine whether a selected diversity target for deploying the package is met, where the first stage comprises hardware configured to not handle any workloads. The method may further include, if the selected diversity target for deploying the package is not met after the first scanning, then scanning hardware in a second stage to determine whether the selected diversity target for deploying the package is met, where the second stage comprises hardware configured to handle a first selected number of the customer workloads. The method may further include, if the selected diversity target for deploying the package is not met after the scanning, then scanning a third stage to determine whether the selected diversity target for deploying the package is met, where the third stage comprises hardware configured to handle a second selected number of the customer workloads, where the second selected number is greater than the first selected number. The method may further include, if the selected diversity target for deploying the package is met after scanning the third stage, using the processor, deploying the package to a first subset of the set of devices in the first stage, a second subset of the set of devices in the second stage, and a third subset of the set of devices in the third stage based on the instructions associated with the package.
The method may further include scanning the hardware in the fleet to obtain information about the hardware and storing the information about the hardware in a database. The method may further include classifying the hardware in the fleet into deployment categories by volume. The method may further include generating a first information regarding a first minimum scanning tree comprising the first subset of the set of devices.
The method may further include generating a second information regarding a second minimum scanning tree comprising the second subset of the set of devices. The method may further include generating a third information regarding a third minimum scanning tree comprising the third subset of the set of devices. The method may further include, if the selected diversity target for deploying the package is not met after the scanning, continuing to scan any remaining stages until the selected diversity target for deploying the package is met or all of the remaining stages have been scanned.
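The stage-by-stage scan described in these examples, accumulating candidate devices from the least-impactful stage onward until the diversity target (e.g., a percentage of a given device type) is met or every stage is exhausted, can be sketched as a loop. The function name and the per-device dictionary shape are hypothetical:

```python
def scan_stages_for_diversity(stages, target_fraction, device_type):
    """Scan stages in order of increasing customer impact.

    `stages` is a list of stages, each a list of device records such as
    {"type": "A"} (an illustrative shape). Scanning stops as soon as the
    accumulated devices cover at least `target_fraction` of the fleet's
    devices of `device_type`. Returns the per-stage subsets to deploy to,
    or None if the target is not met after all stages have been scanned.
    """
    total = sum(1 for stage in stages
                for device in stage if device["type"] == device_type)
    scanned, subsets = 0, []
    for stage in stages:
        subset = list(stage)  # candidate subset of devices in this stage
        subsets.append(subset)
        scanned += sum(1 for d in subset if d["type"] == device_type)
        if total and scanned / total >= target_fraction:
            return subsets  # diversity target met: deploy to these subsets
    return None  # target not met even after scanning every stage

stages = [
    [{"type": "A"}, {"type": "B"}],  # stage 1: no customer workloads
    [{"type": "A"}],                 # stage 2: lightly loaded nodes
    [{"type": "A"}],                 # stage 3: busier nodes
]
print(len(scan_stages_for_diversity(stages, 0.5, "A")))  # 2
```

With a 50% target for type-A devices, the scan stops after the second stage; raising the target to 100% would pull in the third stage as well, matching the "continue scanning remaining stages" behavior recited above.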
In yet another example, the present disclosure relates to a system for deploying a package to a set of devices in a fleet comprising hardware configurable to perform functions related to customer workloads. The system may be configured to first scan hardware in a first stage to determine whether a selected diversity target for deploying the package is met, where the first stage comprises hardware configured to handle functions unrelated to the customer workloads. The system may further be configured to, if the selected diversity target for deploying the package is met after the first scan, then deploy the package to a first subset of the set of devices in the first stage based on instructions associated with the package. The system may further be configured to, if the selected diversity target for deploying the package is not met after the first scan, then scan hardware in a second stage to determine whether the selected diversity target for deploying the package is met, where the second stage comprises hardware configured to handle at least a subset of the customer workloads. The system may further be configured to, if the selected diversity target for deploying the package is met after the scan, then deploy the package to the first subset of the set of devices in the first stage and to a second subset of the set of devices in the second stage based on the instructions associated with the package.
The system may further be configured to scan the hardware in the fleet to obtain information about the hardware and store the information about the hardware in a database. The system may further be configured to classify the hardware in the fleet into deployment categories by volume. The system may further be configured to generate a first information regarding a first minimum scanning tree comprising the first subset of the set of devices.
The system may further be configured to generate a second information regarding a second minimum scanning tree comprising the second subset of the set of devices. The system may further be configured to, if the selected diversity target for deploying the package is not met after the scanning, continue to scan any remaining stages until the selected diversity target for deploying the package is met or all of the remaining stages have been scanned.
It is to be understood that the methods, modules, and components depicted herein are merely exemplary. Alternatively, or in addition, the functionality described herein can be performed, at least in part, by one or more hardware logic components. For example, and without limitation, illustrative types of hardware logic components that can be used include Field-Programmable Gate Arrays (FPGAs), Application-Specific Integrated Circuits (ASICs), Application-Specific Standard Products (ASSPs), System-on-a-Chip systems (SOCs), Complex Programmable Logic Devices (CPLDs), etc. In an abstract, but still definite sense, any arrangement of components to achieve the same functionality is effectively “associated” such that the desired functionality is achieved. Hence, any two components herein combined to achieve a particular functionality can be seen as “associated with” each other such that the desired functionality is achieved, irrespective of architectures or inter-medial components. Likewise, any two components so associated can also be viewed as being “operably connected,” or “coupled,” to each other to achieve the desired functionality.
The functionality associated with some examples described in this disclosure can also include instructions stored in a non-transitory media. The term “non-transitory media” as used herein refers to any media storing data and/or instructions that cause a machine to operate in a specific manner. Exemplary non-transitory media include non-volatile media and/or volatile media. Non-volatile media include, for example, a hard disk, a solid-state drive, a magnetic disk or tape, an optical disk or tape, a flash memory, an EPROM, NVRAM, PRAM, or other such media, or networked versions of such media. Volatile media include, for example, dynamic memory such as DRAM, SRAM, a cache, or other such media. Non-transitory media is distinct from, but can be used in conjunction with, transmission media. Transmission media is used for transferring data and/or instructions to or from a machine. Exemplary transmission media include coaxial cables, fiber-optic cables, copper wires, and wireless media, such as radio waves.
Furthermore, those skilled in the art will recognize that boundaries between the functionality of the above described operations are merely illustrative. The functionality of multiple operations may be combined into a single operation, and/or the functionality of a single operation may be distributed in additional operations. Moreover, alternative embodiments may include multiple instances of a particular operation, and the order of operations may be altered in various other embodiments.
Although the disclosure provides specific examples, various modifications and changes can be made without departing from the scope of the disclosure as set forth in the claims below. Accordingly, the specification and figures are to be regarded in an illustrative rather than a restrictive sense, and all such modifications are intended to be included within the scope of the present disclosure. Any benefits, advantages, or solutions to problems that are described herein with regard to a specific example are not intended to be construed as a critical, required, or essential feature or element of any or all the claims.
Furthermore, the terms “a” or “an,” as used herein, are defined as one or more than one. Also, the use of introductory phrases such as “at least one” and “one or more” in the claims should not be construed to imply that the introduction of another claim element by the indefinite articles “a” or “an” limits any particular claim containing such introduced claim element to inventions containing only one such element, even when the same claim includes the introductory phrases “one or more” or “at least one” and indefinite articles such as “a” or “an.” The same holds true for the use of definite articles.
Unless stated otherwise, terms such as “first” and “second” are used to arbitrarily distinguish between the elements such terms describe. Thus, these terms are not necessarily intended to indicate temporal or other prioritization of such elements.
Inventors: Christopher G. Kaler; Ashish Munjal
Assignee: Microsoft Technology Licensing, LLC
Filed: Apr. 11, 2019